# IEEE 754 Format

### The Problem

It's really easy to write integers as binary numbers in two's complement form. It's a lot more difficult to express floating-point numbers in a form that a computer can understand. The biggest problem, of course, is keeping track of the radix point. There are lots of possible ways to write floating-point numbers as strings of binary digits, and there are many things to consider when picking a standard method to do this:

• Range: To be useful, your method should allow very large positive and negative numbers.
• Precision: Can you tell the difference between 1.7 and 1.8? How about between 1.700001 and 1.700002? How many decimal places should you remember?
• Time Efficiency: Does your solution make comparisons and arithmetic operations fast and easy?
• Space Considerations: An extremely precise representation of the square root of 3 is generally a wonderful thing, unless you require a megabyte to store it.
• One-to-one Relationships: Your solution will be a lot simpler if each floating-point number can be written in only one way, and each bit string represents only one number.

### A Solution

The method that the developers of IEEE 754 Form finally hit upon uses the idea of scientific notation. Scientific notation is a standard way to express numbers; it makes them easy to read and compare. You're probably familiar with scientific notation with base-10 numbers. You just factor your number into two parts: a value whose magnitude is in the range of $1 \le n < 10$, and a power of 10. For example:

$$3498523 \quad \textrm{ is written as } \quad 3.498523 \times 10^6$$ $$-0.0432 \quad \textrm{ is written as } \quad -4.32 \times 10^{-2}$$

The same idea applies here, except that you need to use powers of 2 because the computer works efficiently with binary numbers. Just factor your number into a value whose magnitude is in the range $1 \le n < 2$, and a power of 2. (Note, there should only be one way to do this -- do you see why?)

$$-6.84 \quad \textrm{ is written as } \quad -1.71 \times 2^2$$ $$0.05 \quad \textrm{ is written as } \quad 1.6 \times 2^{-5}$$
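Since the examples later in this document mention Java's floating-point types, here is a quick sketch of this factoring using the standard-library helpers `Math.getExponent` (the power of 2) and `Math.scalb` (exact scaling by a power of 2); the class name is just ours:

```java
public class Normalize {
    public static void main(String[] args) {
        double x = -6.84;
        int power = Math.getExponent(x);         // largest p with 2^p <= |x|; here 2
        double mantissa = Math.scalb(x, -power); // x / 2^power; magnitude lands in [1, 2)
        System.out.println(mantissa + " x 2^" + power); // -1.71 x 2^2

        x = 0.05;
        power = Math.getExponent(x);             // -5
        mantissa = Math.scalb(x, -power);
        System.out.println(mantissa + " x 2^" + power); // 1.6 x 2^-5
    }
}
```

Because scaling by a power of 2 only shifts the exponent, `Math.scalb` introduces no rounding error of its own.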

To create the bitstring, we need to massage this product so that it takes the following form:

$$(-1)^{\color{purple}{\textrm{sign bit}}} (1 + \color{red}{\textrm{fraction}}) \times 2^{\color{green}{\textrm{exponent}} - \textrm{bias}}$$

Once this is done, we will have three key pieces of information (shown in color above) that, when taken together, identify the number:

• First Piece -- If the sign bit is a 0, then the number is positive; $(-1)^0 = 1$. If the sign bit is a 1, the number is negative; $(-1)^1 = -1$.

• Second Piece -- We always factor so that the number in parentheses equals $(1 + \textrm{ some }\color{red}{\textrm{fraction}})$. Since we know that the $1$ is there, the only important thing is the fraction, which we will write as a binary string.

If we need to convert from the binary value back to a base-10 value, we just multiply each digit by its place value, as in these examples:

$$0.1_{binary} = 2^{-1} = 0.5$$ $$0.01_{binary} = 2^{-2} = 0.25$$ $$0.101_{binary} = 2^{-1} + 2^{-3} = 0.625$$

• Third Piece -- The power of 2 that you got in the last step is simply an integer. Note, this integer may be positive or negative, depending on whether the original value's magnitude was at least 1 or less than 1, respectively. We'll need to store this exponent -- however, using two's complement, the usual representation for signed values, makes comparisons of these values more difficult. As such, we add a constant value, called a bias, to the exponent. By biasing the exponent before it is stored, we put it within an unsigned range more suitable for comparison.

• For single-precision floating-point, exponents in the range -126 to +127 are biased by adding 127 to get a value in the range 1 to 254 (0 and 255 have special meanings).

• For double-precision, exponents in the range -1022 to +1023 are biased by adding 1023 to get a value in the range 1 to 2046 (0 and 2047 have special meanings).

The sum of the bias and the power of 2 is the exponent that actually goes into the IEEE 754 string. Remember, the exponent = power + bias. (Alternatively, the power = exponent - bias.) This exponent must itself ultimately be expressed in binary form -- but since we have a positive integer after adding the bias, this can now be done in the normal way.
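As a sketch of how these three pieces sit behind a real value, Java's `Float.floatToIntBits` exposes the raw single-precision bits, and the pieces can be pulled out with shifts and masks (the variable names below are ours, not part of any API):

```java
public class Pieces {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(-6.84f);

        int sign      = bits >>> 31;           // 1 bit:  1 here, so the value is negative
        int biasedExp = (bits >>> 23) & 0xFF;  // 8 bits: the biased exponent
        int fracBits  = bits & 0x7FFFFF;       // 23 bits: the fraction

        int power = biasedExp - 127;           // undo the single-precision bias; here 2
        double fraction = fracBits / (double) (1 << 23); // each bit times its place value

        // (-1)^sign * (1 + fraction) * 2^(exponent - bias) rebuilds the stored value
        double rebuilt = (sign == 0 ? 1 : -1) * (1.0 + fraction) * Math.pow(2, power);
        System.out.println(rebuilt); // the double closest to what the float actually stores
    }
}
```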

When you have calculated these binary values, you can put them into a 32- or 64-bit field. The digits are arranged like this:

• Single precision (32 bits): sign bit (1), biased exponent (8), fraction (23)

• Double precision (64 bits): sign bit (1), biased exponent (11), fraction (52)

(The numbers in parentheses show how many bits are required in each field.)

By arranging the fields in this way -- the sign bit in the most significant bit position, the biased exponent in the middle, and the fraction in the least significant bits -- nonnegative values come out ordered properly for comparisons whether they're interpreted as floating-point or integer values. (Negative values sort in reverse order as integers, which takes only a small adjustment to handle.) This allows high-speed comparisons of floating-point numbers using fixed-point hardware.
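A quick illustration of that ordering property in Java: comparing the raw bit patterns of two positive floats gives the same answer as comparing the floats themselves.

```java
public class OrderDemo {
    public static void main(String[] args) {
        // For positive values, larger float  =>  larger raw bit pattern.
        System.out.println(Float.floatToIntBits(1.5f) < Float.floatToIntBits(2.5f));   // true
        System.out.println(Float.floatToIntBits(0.05f) < Float.floatToIntBits(1.5f));  // true
        // Negative values sort in reverse as signed integers:
        System.out.println(Float.floatToIntBits(-2.5f) < Float.floatToIntBits(-1.5f)); // false
    }
}
```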

There are some special cases:

• Zero
Sign bit = $0$ (or $1$: there is also a "negative zero"); biased exponent = all $0$ bits; and the fraction = all $0$ bits;

• Positive and Negative Infinity
Sign bit = $0$ for positive infinity, $1$ for negative infinity; biased exponent = all $1$ bits; and the fraction = all $0$ bits;

• NaN (Not-A-Number)
Sign bit = $0$ or $1$; biased exponent = all $1$ bits; and the fraction is anything but all $0$ bits. (NaNs pop up when one does an invalid operation on a floating-point value, such as dividing zero by zero, or taking the square root of a negative number.)
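These special patterns are easy to confirm in Java; the hex constants below just spell out the bit patterns described above:

```java
public class SpecialValues {
    public static void main(String[] args) {
        // +0.0: all 32 bits are 0
        System.out.println(Float.floatToIntBits(0.0f) == 0x00000000);                    // true
        // +infinity: sign 0, exponent all 1s, fraction all 0s
        System.out.println(Float.floatToIntBits(Float.POSITIVE_INFINITY) == 0x7F800000); // true
        // -infinity: same, but sign 1
        System.out.println(Float.floatToIntBits(Float.NEGATIVE_INFINITY) == 0xFF800000); // true
        // NaN: exponent all 1s, fraction nonzero
        System.out.println(Float.isNaN(Float.intBitsToFloat(0x7FC00000)));               // true
    }
}
```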

### Example: Converting to IEEE 754 Form

Suppose we wish to put 0.085 in single-precision format. Here's what has to happen:

1. The first step is to look at the sign of the number.
Because 0.085 is positive, the sign bit = 0.

2. Next, we write 0.085 in base-2 scientific notation
This means that we must factor it into a number in the range $(1 \le n < 2)$ and a power of 2.

$$\begin{array}{rcl} 0.085 &=& (-1)^0 (1 + \color{red}{\textrm{fraction}}) \times 2^{\textrm{power}}, \quad \textrm{ or, equivalently: }\\ 0.085 \quad / \quad 2^{\textrm{power}} &=& 1 + \color{red}{\textrm{fraction}}\\ \end{array}$$
As such, we divide 0.085 by a power of 2 to get the $(1 + \color{red}{\textrm{fraction}})$:
$$\begin{array}{rcl} 0.085 \quad / \quad 2^{-1} &=& 0.17\\ 0.085 \quad / \quad 2^{-2} &=& 0.34\\ 0.085 \quad / \quad 2^{-3} &=& 0.68\\ 0.085 \quad / \quad 2^{-4} &=& 1.36\\ \end{array}$$
Therefore, $0.085 = 1.36 \times 2^{-4}$
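The hunt for the right power of 2 can be written as a small loop. This sketch only handles magnitudes below 1, which is all this example needs:

```java
public class FindPower {
    public static void main(String[] args) {
        double x = 0.085;   // assumes 0 < x < 1
        int power = 0;
        while (x < 1.0) {   // successive doublings: 0.17, 0.34, 0.68, 1.36
            x *= 2;
            power--;
        }
        System.out.println(x + " x 2^" + power); // 1.36 x 2^-4
    }
}
```

(Values of magnitude 2 or more would need the mirror-image loop, halving until the value drops below 2.)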

3. Now, we find the exponent
The power of 2 used above was -4, and the bias for the single-precision format is 127. Thus, $$\color{green}{\textrm{exponent}} = -4+127 = 123 = \color{green}{01111011}_{\textrm{binary}}$$
4. Then, we write the fraction in binary form

Successive multiplications by 2 (while temporarily ignoring the unit's digit) quickly yields the binary form:

0.36 x 2 = 0.72
0.72 x 2 = 1.44
0.44 x 2 = 0.88
0.88 x 2 = 1.76
0.76 x 2 = 1.52
0.52 x 2 = 1.04
0.04 x 2 = 0.08              Once this process terminates or starts
0.08 x 2 = 0.16              repeating, we read the unit's digits from
0.16 x 2 = 0.32              top to bottom to reveal the binary form for 0.36:
0.32 x 2 = 0.64
0.64 x 2 = 1.28                   0.01011100001010001111010111000...
0.28 x 2 = 0.56
0.56 x 2 = 1.12
0.12 x 2 = 0.24
0.24 x 2 = 0.48
0.48 x 2 = 0.96
0.96 x 2 = 1.92
0.92 x 2 = 1.84
0.84 x 2 = 1.68
0.68 x 2 = 1.36
0.36 x 2 =  ...  (at this point the list starts repeating)


As you can see, 0.36 has a non-terminating, repeating binary form. This is very similar to how a fraction like 5/27 has a non-terminating, repeating decimal form (i.e., 0.185185185...).

However, single-precision format only affords us 23 bits to work with to represent the fraction part of our number. We will have to settle for an approximation, rounding things to the 23rd digit. One should be careful here -- while it doesn't happen in this example, rounding can affect more than just the last digit. This shouldn't be surprising -- consider what happens when one rounds in base 10 the value 123999.5 to the nearest integer and gets 124000. Rounding the infinite string of digits found above to just 23 digits results in the bits 0.01011100001010001111011.

(Note, we round "up" because the discarded tail, $0.111000..._{binary}$, is greater than one half, $0.1_{binary}$.)

This rounding that we have to perform to get our value to fit into the number of bits afforded to us is why floating-point numbers frequently have some small degree of error when you put them in IEEE 754 format. It is very important to remember the presence of this error when using the standard Java types (float and double) for representing floating-point numbers!
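The successive-doubling procedure from step 4 can be sketched as a loop. Two caveats: 0.36 itself is only stored approximately as a `double` (its leading 23 digits are still trustworthy), and this sketch truncates rather than rounds, so it produces the digits before the rounding step described above:

```java
public class FractionBits {
    public static void main(String[] args) {
        double frac = 0.36;
        StringBuilder bits = new StringBuilder("0.");
        for (int i = 0; i < 23; i++) {
            frac *= 2;              // shift the next binary digit up to the units place
            if (frac >= 1.0) {
                bits.append('1');
                frac -= 1.0;        // drop the units digit and keep going
            } else {
                bits.append('0');
            }
        }
        System.out.println(bits);   // 0.01011100001010001111010 (truncated, not rounded)
    }
}
```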

5. Finally, we put the binary strings in the correct order.
Recall, we use 1 bit for the sign, followed by 8 bits for the exponent, and 23 bits for the fraction.

So 0.085 in IEEE 754 format is:

0 01111011 01011100001010001111011
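We can check this hand computation against Java itself: `Float.floatToIntBits(0.085f)` performs exactly this conversion, including the round-to-nearest step.

```java
public class CheckEncode {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(0.085f);
        // left-pad the binary string out to the full 32 bits
        String s = String.format("%32s", Integer.toBinaryString(bits)).replace(' ', '0');
        System.out.println(s); // 00111101101011100001010001111011
    }
}
```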

### Example: Converting from IEEE 754 Form

Suppose we wish to convert the following single-precision IEEE 754 number into a floating-point decimal value:

11000000110110011001100110011010
1. First, we divide the bits into three groups:
1   10000001   10110011001100110011010

The first bit shows us the sign of the number.
The next 8 bits give us the exponent.
The last 23 bits give us the fraction.

2. Now we look at the sign bit

If this bit is a 1, the number is negative; if it is 0, the number is positive. Here, the bit is a 1, so the number is negative.

3. Next, we get the exponent and the correct bias
To get the exponent, we simply convert the binary number 10000001 back to base-10 form, yielding 129.

Remember that we will have to subtract an appropriate bias from this exponent to find the power of 2 we need. Since this is a single-precision number, the bias is 127.

4. Then we must convert the fraction bits back into base 10
To do this, we multiply each digit by the corresponding power of 2 and sum the results:
$$\begin{array}{rcl} 0.\color{red}{10110011001100110011010}_{\textrm{binary}} &=& 1 \cdot 2^{-1} + 0 \cdot 2^{-2} + 1 \cdot 2^{-3} + 1 \cdot 2^{-4} + 0 \cdot 2^{-5} + \cdots\\ &=& 1/2 + 1/8 + 1/16 + \cdots\\ &=&\color{red}{0.7000000476837158}\\ \end{array}$$
Remember, this number is most likely just an approximation of some other number -- some small rounding error is to be expected.

5. We have all the information we need. Now we just calculate the following expression:
$$\begin{array}{rcl} (-1)^{\color{purple}{\textrm{sign bit}}} (1 + \color{red}{\textrm{fraction}}) \times 2^{\color{green}{\textrm{exponent}} - \textrm{bias}} &=& (-1)^{\color{purple}{1}} (1.\color{red}{7000000476837158}) \times 2^{\color{green}{129}-127}\\ &=& -6.800000190734863\\ \end{array}$$
Thus, the IEEE 754 number 11000000110110011001100110011010 gives the floating-point decimal value -6.800000190734863. It is reasonable to suspect that the original number stored was probably -6.8, although this would be hard to prove... (One can verify that -6.8 does result in the exact same bit string, however.)
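Once again, Java can confirm the whole computation: `Float.intBitsToFloat` rebuilds the float from the raw pattern, and widening it to a `double` shows the stored error.

```java
public class CheckDecode {
    public static void main(String[] args) {
        int bits = 0b11000000110110011001100110011010;
        float f = Float.intBitsToFloat(bits);
        System.out.println(f);          // -6.8 (the shortest decimal naming this float)
        System.out.println((double) f); // -6.800000190734863
        // ...and -6.8 does round-trip to the same bit pattern:
        System.out.println(Float.floatToIntBits(-6.8f) == bits); // true
    }
}
```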

Original text by S. Orley and J. Mathews of Iowa State University; adapted by P. Oser