
Number Base Converter

Convert numbers between binary, octal, decimal, hexadecimal, and any base from 2 to 36. Supports arbitrarily large numbers via BigInt. Everything runs in your browser — no data is sent to any server.


Number Systems Explained

A number system (also called a numeral system or radix system) is a writing system for expressing numbers using a consistent set of symbols called digits. The base (or radix) of a number system determines how many unique digits it uses. The decimal system we use daily has base 10 and uses digits 0-9. Computers, however, fundamentally operate in binary (base 2), and programmers regularly work with octal (base 8) and hexadecimal (base 16) as well.

Understanding number bases is not just an academic exercise — it is a practical skill that every software developer, network engineer, and hardware designer uses regularly. From reading memory addresses in a debugger to configuring file permissions on a Linux server, number base conversions are woven into the fabric of computing.

Binary (Base 2) — The Language of Computers

Binary is the most fundamental number system in computing. It uses only two digits: 0 and 1. Each digit is called a bit (binary digit), and it corresponds directly to the on/off states of transistors in electronic circuits. Everything a computer does — from arithmetic to rendering graphics to running your favorite app — ultimately reduces to operations on binary numbers.

In binary, each position represents a power of 2 (just as each position in decimal represents a power of 10). The number 1101 in binary equals 1×8 + 1×4 + 0×2 + 1×1 = 13 in decimal. A group of 8 bits is called a byte, which can represent values from 0 to 255 (2⁸ − 1). A byte is the fundamental unit of computer memory and data storage.
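The positional expansion described above can be sketched in a few lines of JavaScript (the helper name binaryToDecimal is illustrative, not part of this tool):

```javascript
// Expand a binary string positionally: walking left to right,
// each step multiplies the running total by 2 and adds the next bit.
function binaryToDecimal(bits) {
  return [...bits].reduce((sum, bit) => sum * 2 + (bit === "1" ? 1 : 0), 0);
}

console.log(binaryToDecimal("1101"));     // 13
console.log(binaryToDecimal("11111111")); // 255, the maximum value of one byte
```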

Binary is essential for understanding bitwise operations (AND, OR, XOR, NOT), bit shifting, bit masks, and low-level data manipulation. These operations are heavily used in graphics programming, network protocols, cryptography, and embedded systems.

Octal (Base 8) — Unix File Permissions

Octal uses digits 0 through 7. Each octal digit maps exactly to three binary digits, making octal a convenient shorthand for binary in certain contexts. While octal was more popular in early computing (some early computers had 12-bit or 36-bit word sizes that divided evenly into octal groups), its most enduring modern use is in Unix/Linux file permissions.

The chmod command uses octal notation to set file permissions. The three permission types (read = 4, write = 2, execute = 1) for three groups (owner, group, others) are each represented as an octal digit. For example, chmod 755 sets: owner = 7 (rwx = 4+2+1), group = 5 (r-x = 4+0+1), others = 5 (r-x = 4+0+1). In binary, 755 octal is 111 101 101, where each group of three bits directly corresponds to read, write, and execute permissions.
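The read/write/execute decoding described above is easy to reproduce. Here is a small sketch (decodeMode and decodeDigit are hypothetical helper names, not part of chmod itself):

```javascript
// Decode one octal permission digit (0-7) into an rwx string
// using the weights read = 4, write = 2, execute = 1.
function decodeDigit(d) {
  return (d & 4 ? "r" : "-") + (d & 2 ? "w" : "-") + (d & 1 ? "x" : "-");
}

// Decode a three-digit octal mode like 755 into owner/group/others strings.
function decodeMode(mode) {
  return [...String(mode)].map((c) => decodeDigit(Number(c))).join("");
}

console.log(decodeMode(755)); // "rwxr-xr-x"
console.log(decodeMode(644)); // "rw-r--r--"
```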

Decimal (Base 10) — The Human Default

Decimal is the number system humans use naturally, presumably because we have ten fingers. It uses digits 0 through 9, and each position represents a power of 10. While decimal is not natively efficient for computers (which think in binary), it remains the primary system for human-facing interfaces: prices, counts, measurements, and any value shown to end users.

In programming, decimal is the default literal format in most languages. When you write 42 in JavaScript, Python, or Java, the compiler or interpreter understands it as a decimal number. Most programming languages provide prefix notations for other bases: 0b for binary, 0o for octal, and 0x for hexadecimal.
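In JavaScript, for example, the same value can be written with each of these prefixes, and all four literals compare equal:

```javascript
// The number 42 written as a literal in each of the four standard bases.
const dec = 42;
const bin = 0b101010; // binary prefix
const oct = 0o52;     // octal prefix
const hex = 0x2a;     // hexadecimal prefix

console.log(dec === bin && bin === oct && oct === hex); // true
```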

Hexadecimal (Base 16) — The Programmer’s Favorite

Hexadecimal (or “hex”) uses 16 digits: 0-9 and A-F (where A=10, B=11, C=12, D=13, E=14, F=15). Each hex digit maps exactly to four binary digits (one nibble), making hex the most compact human-readable representation of binary data. Two hex digits represent one byte (00 to FF = 0 to 255).

Hexadecimal is ubiquitous in programming:

  • Colors: CSS uses hex codes like #FF5733 to represent RGB colors. Each pair of hex digits is one color channel.
  • Memory addresses: Debuggers, crash dumps, and low-level tools display memory addresses in hex (e.g., 0x7FFE0000).
  • Unicode code points: Characters are identified by hex values like U+1F600 (the grinning face emoji).
  • MAC addresses: Network hardware addresses are written in hex (e.g., 00:1A:2B:3C:4D:5E).
  • Hash values: MD5, SHA-256, and other hash functions produce output as hex strings.
  • Binary file editors: Hex editors display raw file bytes in hexadecimal format.
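The color-code case from the list above makes a compact worked example. A sketch (rgbToHex is an illustrative helper name):

```javascript
// Convert each RGB channel (0-255) to a two-digit hex pair;
// one byte per channel is always exactly two hex digits.
function rgbToHex(r, g, b) {
  const pair = (n) => n.toString(16).padStart(2, "0").toUpperCase();
  return "#" + pair(r) + pair(g) + pair(b);
}

console.log(rgbToHex(255, 87, 51)); // "#FF5733"
```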

Base Conversion Algorithms

Converting between number bases follows a straightforward algorithm. To convert from any base to decimal (base 10), multiply each digit by its positional value and sum the results. For example, to convert 1A3 from hex to decimal:

  • 1 × 16² = 256
  • A (10) × 16¹ = 160
  • 3 × 16⁰ = 3
  • Total = 256 + 160 + 3 = 419
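The multiply-and-sum algorithm above can be written directly in JavaScript. This is a sketch using BigInt, not the tool's actual source (toDecimal is an illustrative name):

```javascript
// Convert a digit string in any base (2-36) to a decimal BigInt:
// each step multiplies the running total by the base and adds the next digit.
function toDecimal(str, base) {
  const digits = "0123456789abcdefghijklmnopqrstuvwxyz";
  let result = 0n;
  for (const ch of str.toLowerCase()) {
    const value = BigInt(digits.indexOf(ch));
    if (value < 0n || value >= BigInt(base)) throw new Error("invalid digit: " + ch);
    result = result * BigInt(base) + value;
  }
  return result;
}

console.log(toDecimal("1A3", 16)); // 419n
```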

To convert from decimal to any base, repeatedly divide by the target base and collect the remainders (read from bottom to top). For example, to convert 419 to hex: 419 ÷ 16 = 26 remainder 3; 26 ÷ 16 = 1 remainder 10 (A); 1 ÷ 16 = 0 remainder 1. Reading remainders bottom-to-top: 1A3.
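The repeated-division procedure can be sketched as follows (fromDecimal is an illustrative helper name, shown here with BigInt):

```javascript
// Convert a non-negative BigInt to a digit string in the target base (2-36)
// by repeated division, prepending each remainder (bottom-to-top order).
function fromDecimal(n, base) {
  const digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
  if (n === 0n) return "0";
  const b = BigInt(base);
  let out = "";
  while (n > 0n) {
    out = digits[Number(n % b)] + out; // prepend each remainder
    n = n / b;                         // BigInt division truncates
  }
  return out;
}

console.log(fromDecimal(419n, 16)); // "1A3"
```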

To convert between two non-decimal bases, the simplest approach is to convert through decimal as an intermediate step. However, for bases that are powers of 2 (binary, octal, hex), you can convert directly by grouping or expanding bits. For example, to convert binary to hex, group the binary digits into groups of four (padding with leading zeros if needed) and convert each group to one hex digit.
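The bit-grouping shortcut can be sketched like this (binToHex is an illustrative name; parseInt handles each 4-bit group):

```javascript
// Convert binary to hex directly by grouping bits into nibbles,
// without going through decimal as an intermediate value.
function binToHex(bits) {
  // Pad on the left so the length is a multiple of 4.
  const padded = bits.padStart(Math.ceil(bits.length / 4) * 4, "0");
  let hex = "";
  for (let i = 0; i < padded.length; i += 4) {
    hex += parseInt(padded.slice(i, i + 4), 2).toString(16).toUpperCase();
  }
  return hex;
}

console.log(binToHex("110100011")); // "1A3" (419 in decimal)
```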

Two’s Complement — Representing Negative Numbers

Computers need to represent negative numbers in binary, and the standard method is two’s complement. In this system, the most significant bit (MSB) acts as the sign bit: 0 for positive, 1 for negative. To negate a number: invert all bits (one’s complement), then add 1.

For example, in an 8-bit system: +5 = 00000101. To get -5: invert → 11111010, add 1 → 11111011. Two’s complement is elegant because addition and subtraction work identically for signed and unsigned numbers at the hardware level — the same circuit handles both.
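The invert-and-add-one rule can be verified in a few lines. A sketch using BigInt, where the bit width is an explicit parameter (twosComplement is an illustrative name):

```javascript
// Two's complement of a value within a fixed bit width:
// invert all bits, add 1, and mask down to the given width.
function twosComplement(value, bits) {
  const mask = (1n << BigInt(bits)) - 1n;
  return ((~BigInt(value)) + 1n) & mask;
}

const neg5 = twosComplement(5, 8);
console.log(neg5.toString(2).padStart(8, "0")); // "11111011"
```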

This converter works with non-negative integers. For signed number representation, you would need to specify the bit width (8-bit, 16-bit, 32-bit, 64-bit) and interpret the result accordingly.

Floating Point Representation

Floating-point numbers (decimals like 3.14 or 0.1) use the IEEE 754 standard, which represents them in binary scientific notation. A 64-bit double-precision float consists of: 1 sign bit, 11 exponent bits, and 52 mantissa (significand) bits. This is why some decimal fractions (like 0.1) cannot be represented exactly in binary, leading to the famous floating-point precision issue: 0.1 + 0.2 !== 0.3 in JavaScript.

Understanding binary representation helps explain these quirks. The decimal fraction 0.1 in binary is 0.0001100110011... (repeating infinitely), similar to how 1/3 = 0.333... repeats infinitely in decimal. The finite number of mantissa bits forces rounding, which accumulates into visible errors.
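The rounding described above is easy to observe directly in JavaScript:

```javascript
// The classic precision artifact: 0.1 and 0.2 are both rounded when
// stored as binary doubles, and the rounding shows up in the sum.
console.log(0.1 + 0.2 === 0.3); // false

// A common workaround is comparing with a small tolerance instead of ===.
console.log(Math.abs(0.1 + 0.2 - 0.3) < Number.EPSILON); // true
```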

Binary in Networking

Network engineers work with binary daily. An IPv4 address is a 32-bit number, typically written as four decimal octets: 192.168.1.1. In binary, this is 11000000.10101000.00000001.00000001. Subnet masks are also 32-bit binary numbers with a contiguous sequence of 1s followed by 0s: 255.255.255.0 = /24 = 11111111.11111111.11111111.00000000.

Understanding binary subnet masks is essential for subnetting calculations: determining network addresses, broadcast addresses, and the number of usable hosts in a subnet. The bitwise AND of an IP address and its subnet mask yields the network address, a fundamental operation in routing.
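The AND operation described above can be sketched by treating IPv4 addresses as unsigned 32-bit integers (ipToInt and intToIp are illustrative helper names):

```javascript
// Pack dotted-quad notation into an unsigned 32-bit integer.
function ipToInt(ip) {
  return ip.split(".").reduce((n, octet) => (n << 8) | Number(octet), 0) >>> 0;
}

// Unpack an unsigned 32-bit integer back into dotted-quad notation.
function intToIp(n) {
  return [n >>> 24, (n >>> 16) & 255, (n >>> 8) & 255, n & 255].join(".");
}

// Bitwise AND of address and mask yields the network address.
const network = (ipToInt("192.168.1.130") & ipToInt("255.255.255.0")) >>> 0;
console.log(intToIp(network)); // "192.168.1.0"
```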

How This Tool Works

This number base converter runs entirely in your browser using JavaScript’s native BigInt type, which supports arbitrarily large integers with no precision loss. Traditional JavaScript Number types are 64-bit IEEE 754 floats and can only safely represent integers up to 2⁵³ − 1 (9,007,199,254,740,991). With BigInt, you can convert numbers of any size without overflow or rounding errors.
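A minimal sketch of why BigInt matters here (the value is arbitrary, chosen only because it exceeds the safe integer range):

```javascript
// BigInt parses and formats integers of any size without precision loss.
const huge = BigInt("123456789012345678901234567890");

// Round-trip through base 16 and back: the value survives exactly.
const hex = huge.toString(16);
console.log(BigInt("0x" + hex) === huge); // true

// The same value forced into a Number falls outside the safe integer range.
console.log(Number.isSafeInteger(Number(huge))); // false
```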

The conversion is performed in real-time as you type. Input validation ensures only valid digits for the selected base are accepted. All four standard bases (binary, octal, decimal, hex) are displayed simultaneously, plus a custom base output that you can set to any value from 2 to 36 (using digits 0-9 and letters A-Z).

Frequently Asked Questions

Why do computers use binary?

Electronic circuits are most reliable when distinguishing between two states: on (high voltage) and off (low voltage). These two states map naturally to 1 and 0. While ternary (base 3) and other bases have been explored, binary proved to be the most practical for building reliable, scalable digital electronics. The entire history of modern computing is built on this foundation.

What is the largest number this tool can handle?

This tool uses JavaScript BigInt, which has no theoretical upper limit on integer size. You can convert numbers with hundreds or thousands of digits. However, extremely large numbers may cause the browser to slow down during conversion due to the computational complexity of arbitrary-precision arithmetic.

Can I convert negative numbers?

This tool currently handles non-negative integers (zero and positive whole numbers). Negative number representation in different bases depends on the encoding scheme (two’s complement, sign-magnitude, etc.) and the bit width, which adds complexity beyond simple base conversion.

Why is hexadecimal so popular in programming?

Hexadecimal is the most compact human-readable representation of binary data. Each hex digit represents exactly 4 bits, so one byte (8 bits) is always exactly two hex digits. This makes hex ideal for displaying memory contents, color codes, hash values, and any raw binary data. It is much easier to read 0xFF than 0b11111111 or 255.

What are bases above 16 used for?

Base 32 and base 36 are used in various encoding schemes. Base 32 is used in Crockford’s Base32 encoding and z-base-32 for compact data representation. Base 36 uses all digits (0-9) and all letters (A-Z), making it the largest base possible with alphanumeric characters. It is sometimes used for generating short unique identifiers or compact URL-safe encodings.

What is the difference between this converter and parseInt/toString in JavaScript?

JavaScript’s parseInt() and Number.prototype.toString() work with the standard Number type, which is limited to 53 bits of integer precision. This tool uses BigInt, allowing it to handle numbers far beyond the Number.MAX_SAFE_INTEGER limit of 9,007,199,254,740,991 without any loss of precision.
