Number Base Converter
Convert numbers between binary, octal, decimal, hexadecimal, and any custom base from 2 to 36 in real time. Includes an interactive bit visualization grid for values up to 32 bits.
How to Use the Number Base Converter
Enter a number in the input field and select the base it's written in using the dropdown. The converter will instantly display the equivalent value in binary, octal, decimal, and hexadecimal. Choose "Custom…" from the dropdown to specify any base between 2 and 36. For numbers that fit within 32 bits, an interactive bit grid is shown where you can click individual bits to toggle them, and the input and all outputs update in real time.
Understanding Number Bases
A number base (or radix) is the number of unique digits used to represent values in a positional numeral system. The most familiar base is decimal (base 10), which uses digits 0–9. Computers operate natively in binary (base 2), using only 0 and 1, because digital circuits have two stable states (on and off). Hexadecimal (base 16) is widely used in programming because each hex digit maps to exactly four binary bits, making it a compact way to represent binary data. Octal (base 8) maps each digit to three binary bits and is common in Unix file permissions (e.g., chmod 755).
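The conversions described above can be sketched in a few lines of JavaScript, since parseInt and Number.prototype.toString both accept any radix from 2 to 36. This is a minimal illustration, not the tool's actual implementation (the real converter uses BigInt internally, as described below):

```javascript
// Parse a string in one base and re-emit it in another.
// Both parseInt and toString support radixes 2 through 36.
function convertBase(value, fromBase, toBase) {
  const n = parseInt(value, fromBase);
  if (Number.isNaN(n)) {
    throw new Error(`"${value}" is not a valid base-${fromBase} number`);
  }
  return n.toString(toBase);
}

console.log(convertBase("ff", 16, 2));  // "11111111" — each hex digit maps to four bits
console.log(convertBase("755", 8, 10)); // "493" — Unix permission bits as a decimal value
```

Note how the hex example makes the digit-to-bits mapping visible: f becomes 1111, so "ff" becomes two groups of four ones.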
Why Base Conversion Matters
Developers encounter base conversions constantly. Memory addresses and color codes are written in hexadecimal (0xFF5733, #4f46e5). Network subnet masks and bitwise flags are more naturally understood in binary. File permissions in Unix systems use octal notation. Understanding how to move between these representations is a fundamental skill in software engineering, embedded systems, network programming, and digital electronics.
Signed vs. Unsigned Interpretation
This tool shows both signed and unsigned interpretations for values that fit within standard integer widths. In an unsigned representation, all bits contribute to a non-negative magnitude. In a signed (two's complement) representation — the standard used by virtually all modern CPUs — the most significant bit acts as a sign bit. If it's 1, the number is negative, and its value is computed by inverting all bits and adding 1. For example, the 8-bit binary value 11111111 is 255 unsigned but −1 as a signed byte. Knowing both interpretations is essential when working with low-level code, binary protocols, or debugging overflow issues.
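The two's complement rule above can be expressed directly: if the most significant bit of an N-bit pattern is set, the signed value is the unsigned value minus 2^N. A small sketch (the helper name is hypothetical, not part of the tool):

```javascript
// Interpret the same bit string as both unsigned and signed (two's complement).
function interpret(bits) {
  const width = bits.length;
  const unsigned = parseInt(bits, 2);
  // If the sign bit (MSB) is 1, subtract 2^width to get the negative value.
  const signed = bits[0] === "1" ? unsigned - 2 ** width : unsigned;
  return { unsigned, signed };
}

console.log(interpret("11111111")); // { unsigned: 255, signed: -1 }
console.log(interpret("10000000")); // { unsigned: 128, signed: -128 }
```

Subtracting 2^N is equivalent to the invert-and-add-1 procedure: 255 − 256 = −1, the same answer you get by flipping 11111111 to 00000000 and adding 1.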
BigInt Support
JavaScript's native Number type can only safely represent integers up to 2^53 − 1. This converter uses BigInt for all internal arithmetic, so you can convert extremely large numbers — hundreds of digits in decimal, or hundreds of bits in binary — without any loss of precision. This is particularly useful for cryptographic constants, large hash values, or blockchain-related numbers that exceed standard integer limits.
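Because the BigInt constructor only parses decimal (or prefixed 0x/0o/0b) strings, arbitrary-base input has to be parsed digit by digit. A sketch of how such a parser might look, assuming lowercase digits 0–9 and a–z (BigInt.prototype.toString handles the output direction natively):

```javascript
// Parse a string in any base 2-36 into a BigInt.
// Number would lose precision past 2^53 - 1; BigInt arithmetic is exact.
const DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz";

function parseBigInt(value, base) {
  const radix = BigInt(base);
  let result = 0n;
  for (const ch of value.toLowerCase()) {
    const d = DIGITS.indexOf(ch);
    if (d < 0 || d >= base) {
      throw new Error(`invalid digit "${ch}" for base ${base}`);
    }
    result = result * radix + BigInt(d);
  }
  return result;
}

// A 64-bit constant round-trips exactly, beyond Number's safe range:
const big = parseBigInt("ffffffffffffffff", 16);
console.log(big.toString(10)); // "18446744073709551615"
```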
Interactive Bit Grid
For values up to 32 bits, the bit visualization panel shows each bit as a clickable cell. Bits that are 1 appear filled with the accent color, while 0 bits are empty. Clicking any cell toggles that bit and instantly updates all the outputs. The bits are grouped into nibbles (groups of 4) for readability, and position labels below each cell show the bit index from 0 (least significant) to 31. This visual representation makes it easy to experiment with bitwise operations, understand flag fields, or manually construct a bitmask without doing mental arithmetic.
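Conceptually, a click on cell i is a single XOR with a one-bit mask. A minimal sketch of that operation (the function name is illustrative, not the tool's API):

```javascript
// Toggle bit i of a 32-bit value with XOR.
function toggleBit(value, i) {
  // >>> 0 coerces the result back to an unsigned 32-bit integer,
  // since JavaScript bitwise operators otherwise yield signed results.
  return (value ^ (1 << i)) >>> 0;
}

let flags = 0b0101;          // bits 0 and 2 set
flags = toggleBit(flags, 1); // set bit 1   -> 0b0111
flags = toggleBit(flags, 2); // clear bit 2 -> 0b0011
console.log(flags.toString(2)); // "11"
```

XOR works for both directions — it sets the bit if it was 0 and clears it if it was 1 — which is why a single operation suffices for a toggle.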