In the last chapter, we saw how numbers could be represented in binary and hex forms. Whether we think of a number as hex or binary or indeed denary, inside the microprocessor it is only binary. The whole concept of hex is just to make life easier for us.
We may sit at a keyboard and enter a hex (or denary) number but the first job of any microprocessor-based system is to convert it to binary. All the arithmetic is done in binary and its last job is to convert it back to hex (or denary) just to keep us smiling.
There was a time when we had to enter binary and get raw binary answers but thankfully, those times have gone. Everything was definitely NOT better in the ‘good old days’.
The form binary numbers take inside the microprocessor depends on the system design and the work of the software programmers. We will take a look at the alternatives, starting with negative numbers.
In real life it is easy: we just put a – symbol in front of the number and it is negative, so +4 becomes –4. Easy, but we don’t have any way of putting a minus sign inside the microprocessor. We have tried several ways round the problem.
The first attempt seemed easy but it was false optimism. All we had to do was to use the first bit (msb) of the number to indicate the sign: 1 = minus, 0 = plus.
This had two drawbacks.
1 It used up one of the bits so an 8-bit word could now only hold seven bits to represent numbers and one bit to say ‘plus’ or ‘minus’. The seven bits can now only count up to 1111111₂ = 127 whereas the eight bits should count to 255.
2 If we added two binary numbers like +127 and +2, we would get:
01111111 + 00000010 = 10000001
The msb (most significant bit) of 1 means it is a minus number and the actual number is 0000001₂ = 1. So the final result of +127 + 2 is not 129 but minus 1.
When we use a microprocessor to handle arithmetic with these problems, we can ensure that the microprocessor can recognize this type of accidental negative number. We can arrange for the microprocessor to compensate for it but it is rather complicated and slow.
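This accidental sign flip is easy to reproduce. Below is a minimal Python sketch (my own illustration, not anything inside a real microprocessor) that interprets an 8-bit pattern as a sign-plus-magnitude number and shows the unwanted minus one.

```python
# Interpret an 8-bit pattern as a sign-magnitude number:
# the msb is the sign (1 = minus) and the remaining seven bits are the size.
def sign_magnitude(byte):
    sign = -1 if byte & 0b10000000 else 1
    return sign * (byte & 0b01111111)

result = (0b01111111 + 0b00000010) & 0xFF   # +127 plus +2, trimmed to 8 bits
print(format(result, '08b'))                # 10000001
print(sign_magnitude(result))               # -1, not the 129 we wanted
```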
Luckily, a better system came along which has stood the test of time, having been used for many years.
This has two significant advantages:
1 It allows the full number of bits to be used for a number so an 8-bit word can count from 0 to 11111111₂ or 255.
2 It is easy to implement with addition and subtraction using substantially the same circuitry.
So, how do we manage to use all eight bits for numbers yet still be able to designate a number positive or negative?
That’s clever. We will start by looking at positive numbers because they are so easy. All positive numbers from 0 to 255 are the same as we get by simply converting denary to binary numbers. So that’s done.
Example
Add 01011010 + 00011011.
The steps are just the same as in ‘normal’ denary arithmetic.
Step 1 Lay the numbers out, one above the other, and start from the lsb (least significant bit) or right-hand column. Adding the right-hand column gives 0 + 1 = 1.
Step 2 Next we add the two 1s in the second column. This results in 2, or 10 in binary. Put the 0 in the answer and carry the 1 forward to the next column.
Step 3 The next column is easy: 0 + 0 + 1 = 1.
Step 4 The next column is like the second: 1 + 1 = 10. This is written as an answer of 0 and the 1 is carried forward to the next column.
Step 5 We now have a 1 in each row and a 1 carried forward, so this column is 1 + 1 + 1 = 3, or 11 in binary. This is an answer of 1 with a 1 carried forward to the next column.
Step 6 The next column is 0 + 0 + 1 = 1, the next is 1 + 0 = 1 and the final bit, or msb, is 0 + 0 = 0, so we can complete the sum: 01011010₂ + 00011011₂ = 01110101₂.
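If you want to check the column-by-column working, Python will do the same addition directly; the variable names here are just for illustration.

```python
a = 0b01011010                # the first number (90 in denary)
b = 0b00011011                # the second number (27 in denary)
total = a + b
print(format(total, '08b'))   # 01110101
print(total)                  # 117 in denary
```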
Here is a question to think about: What number could we add to 50 to give an answer of 27? In mathematical terms this would be written as 50 + x = 27.
What number could x represent? Surely, anything we add to 50 must make the number larger unless it is a negative number like –23:
50 + (–23) = 27
The amazing thing is that there is a number that can have the same effect as a negative number, even though it has no minus sign in front of it. It is called a ‘two’s complement’ number.
Our sum now becomes:
50 + (the two’s complement of 23) = 27
This magic number is the two’s complement of 23 and finding it is very simple.
How to find the two’s complement of any binary number
Invert each bit, then add 1 to the answer
All we have to do is to take the number we want to subtract (in its binary form) and invert each bit so every one becomes a zero and each zero becomes a one. Note: technically the result of this inversion is called the ‘one’s complement’ of 23. The mechanics of doing it will be discussed in the next chapter but it is very simple and the facility is built into all microprocessors at virtually zero cost.
Converting the 23 into a binary number gives the result of 00010111₂ (using eight bits). Inverting each bit gives 11101000₂, and adding 1 gives 11101001₂. This resulting number is referred to as the ‘two’s complement’ of 23.
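As a quick check, the invert-and-add-one rule is easy to try out in Python; the 0xFF mask simply keeps the result to eight bits.

```python
n = 0b00010111                          # 23 in eight bits
ones_complement = n ^ 0xFF              # invert every bit
twos_complement = (ones_complement + 1) & 0xFF
print(format(ones_complement, '08b'))   # 11101000 (the one's complement)
print(format(twos_complement, '08b'))   # 11101001 (the two's complement)
```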
In this example, we used 8-bit numbers but the arithmetic would be exactly the same with 16 bits or indeed 32 or 64 bits or any other number.
Doing the sum
We now simply add the 50 and the two’s complement of 23:
50 + (the two’s complement of 23) = 27
In binary, that is 00110010₂ + 11101001₂, and adding them gives 100011011₂.
Count the bits. There are nine! We have had a carry in the last column that has created a ninth column. Inside the microprocessor, there is only space for eight bits so the ninth one is not used. If we were to ask the microprocessor for the answer to this addition, it would only give us the 8-bit answer: 00011011₂ or, in denary, 27. We’ve done it! We’ve got the right answer!
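The whole subtraction can be sketched in a few lines of Python, assuming an 8-bit word; the final & 0xFF plays the part of ignoring the ninth bit.

```python
minuend = 50
subtrahend = 23
twos_comp = ((subtrahend ^ 0xFF) + 1) & 0xFF   # two's complement of 23: 11101001
raw_sum = minuend + twos_comp                  # nine bits: 100011011
answer = raw_sum & 0xFF                        # throw away the ninth bit
print(format(raw_sum, '09b'))                  # 100011011
print(format(answer, '08b'), answer)           # 00011011 27
```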
It was quite a struggle so let’s make a quick summary of what we did.
1 Convert both numbers to binary.
2 Find the two’s complement of the number you are taking away.
3 Add the two numbers.
4 Delete the msb of the answer.
Done.
A few reminders
1 Only find the two’s complement of the number you are taking away – NOT both numbers.
2 If you have done the arithmetic correctly, the answer will always have an extra column to be deleted.
3 If the numbers do not have the same number of bits, add leading zeros as necessary as a first job. Don’t leave it until later. Both of the numbers must have the same number of bits. They can be 8-bit numbers as we used, or 16, or 32 or anything else so long as they are equal.
There is also a quicker way to find the two’s complement: start from the left-hand end and invert each bit until you come to the last figure 1. Don’t invert this figure and don’t invert anything after it.
Example 1
What is –24₁₀ expressed as an 8-bit two’s complement binary number?
1 Change the 24₁₀ into binary. This will be 11000₂.
2 Add leading zeros to make it an 8-bit number. This is now 00011000₂.
3 Now start inverting each bit, working from the left, until we come to the last figure ‘1’. Don’t invert it, and don’t invert the three zeros that follow it. The result is 11101000₂.
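The quick method is simple enough to express in a few lines of Python; the function name and the string form of the number are just my own way of sketching it.

```python
def twos_complement_quick(bits):
    """Quick method: keep everything from the rightmost 1 onwards,
    invert every bit to the left of it."""
    i = bits.rfind('1')                  # position of the last figure 1
    if i == -1:                          # a string of zeros stays as it is
        return bits
    flipped = ''.join('1' if b == '0' else '0' for b in bits[:i])
    return flipped + bits[i:]

print(twos_complement_quick('00011000'))   # 11101000, the pattern for -24
```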
Example 2
What is –100₁₀ expressed as a 16-bit two’s complement binary number?
1 Convert the 100₁₀ into binary. This gives 1100100₂.
2 Add nine leading zeros to make the result the 16-bit number 0000 0000 0110 0100₂.
3 Now, using the quick method, find the two’s complement.
The result is 1111 1111 1001 1100₂.
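The hypothetical twos_complement_quick helper sketched after Example 1 reproduces this: twos_complement_quick('0000000001100100') returns '1111111110011100'.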
Example 3
Find the value of 1011 0111₂ – 00 1011₂ using two’s complement addition.
1 The second number has only six bits so add two zeros on the left-hand end to give 1011 0111₂ – 0000 1011₂.
2 Invert each bit in the number to be subtracted to find the one’s complement. This changes the 00001011 to 11110100.
3 Add 1 to give the two’s complement: 11110100+1=11110101 (or do it the quick way).
4 Add the first number to the two’s complement of the second number:
5 The result so far is 110101100₂, which includes that extra carry, so we cross off the msb to give the final answer of 1010 1100₂.
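Again, the working can be checked with a short Python sketch along the same lines as before; the names are illustrative only.

```python
a = 0b10110111                        # the first number
b = 0b00001011                        # the number to be subtracted
twos_comp_b = ((b ^ 0xFF) + 1) & 0xFF # 11110101
raw = a + twos_comp_b                 # nine bits
print(format(raw, '09b'))             # 110101100
print(format(raw & 0xFF, '08b'))      # 10101100, the final answer
```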
Eight-bit numbers are limited to a maximum value of 1111 1111₂ or 255₁₀. So, 0–255 means a total of 256 different numbers. Not very many. 32-bit numbers can manage about 4¼ billion. This is quite enough for everyday work, though Bill Gates’ bank manager may still find it limiting. The problem is that scientific studies involve extremely large numbers as found in astronomy and very small distances as in nuclear physics.
So how do we cater for these? We could wait around for a 128-bit microprocessor, and then wait for a 256-bit microprocessor and so on. But no, the favorite option is to have a look at alternative ways of handling a wide range of numbers.
Rather than write a number like 100 we could write it as 1×10². Written this way, it indicates that the number is a 1 followed by two zeros, so a billion would be written as 1×10⁹. In a similar way, 0.001, which is a 1 preceded by two zeros, would be written as 1×10⁻³, and a billionth, 0.000000001, would be 1×10⁻⁹. The negative power of ten is one greater than the number of zeros. By using floating point numbers, we can easily go up to 1×10⁹⁹ or down to 1×10⁻⁹⁹ without greatly increasing the number of digits.
Fancy names
Normalizing
Changing a number from the everyday version like 275 to 2.75×10² is called normalizing the number. The first number always starts with a single digit between 1 and 9 followed by a power of ten. In binary we do the same thing except the decimal point is now called a binary point and the first number is always 1 followed by a power of two as necessary.
Three examples
1 Using the same figure of 275, this could be converted to 100010011₂ in binary. This number is normalized to 1.00010011×2⁸.
2 A number like 0.0001001₂ will have its binary point moved four places to the right to put it just after the first figure 1, so the normalized number can be written as 1.001×2⁻⁴.
3 The number 1.101₂ is already normalized so the binary point does not need to be moved and, written formally, it would be 1.101×2⁰.
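For readers who like to experiment, Python’s math.frexp can be pressed into service to normalize a value; frexp returns a mantissa between 0.5 and 1, so one doubling is needed to reach the 1.xxx form used above. This helper is only a sketch of the idea.

```python
import math

def normalize(x):
    """Return (mantissa, exponent) with mantissa in [1, 2), so x = mantissa * 2**exponent."""
    m, e = math.frexp(x)      # frexp gives a mantissa between 0.5 and 1
    return m * 2, e - 1

print(normalize(275))         # (1.07421875, 8)  ->  1.00010011 x 2^8
print(normalize(0.0703125))   # (1.125, -4)      ->  1.001 x 2^-4  (0.0001001 in binary)
print(normalize(1.625))       # (1.625, 0)       ->  1.101 x 2^0
```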
A useless fact
Anything with a power of zero is equal to 1. So 2⁰ = 1, 10⁰ = 1. It is tempting but total nonsense to use this fact to argue that since 2⁰ = 1 and 10⁰ = 1 then 2 must equal 10!
Terminology
There are some more fancy names given to the parts of the number to make them really scary. Take, as an example, the number 8.0245×10⁹.
The exponent is the power of ten, in this example 9. The mantissa, or magnitude, is the number itself, in this case 8.0245. The radix is the base of the number system being used: 2 for binary, 16 for hex, 10 for decimal.
Storing floating point numbers
In a microprocessor, a floating point number is stored in binary. In the case of a binary number, the mantissa always starts with a 1 followed by the binary point. For example, a five-digit binary mantissa would be between 1.0000 and 1.1111.
Since all mantissas in a binary system start with the number 1 and the binary point, we can save storage space by missing them out and just assuming their presence. The range above would now be stored as 0000 to 1111.
It is usual to use a 32-bit storage area for a floating point number. How these 32 bits are organized is not standardized so be careful to check before making any assumptions. Within those 32 bits, we have to include the exponent and the mantissa which can both be positive or negative. One of the more popular methods is outlined below.
Bit 0 is used to hold the sign-bit for the mantissa using the normal convention of 0 = positive and 1 = negative.
Bits 1–23 hold the mantissa in normal binary.
Bits 24–31 hold the exponent. These eight bits are used to represent numbers from –127 to +128 using either two’s complement numbers or excess-127 notation.
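As a rough illustration, here is one way the three fields could be pulled out of such a 32-bit word in Python, taking ‘bit 0’ to mean the least significant bit. The layout above is not pinned down to a particular bit ordering, so treat the field positions in this sketch as an assumption rather than a fixed format.

```python
def unpack_float_word(word):
    """Split a 32-bit word into the three fields described above."""
    sign     = word & 0x1               # bit 0: 0 = positive, 1 = negative mantissa
    mantissa = (word >> 1) & 0x7FFFFF   # bits 1-23: the 23 stored mantissa bits
    exponent = (word >> 24) & 0xFF      # bits 24-31: the exponent field
    return sign, mantissa, exponent

# Example: exponent field of 139, empty mantissa field, positive sign
print(unpack_float_word((139 << 24) | 0))   # (0, 0, 139)
```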
We have already met two’s complement numbers earlier in this chapter so we will look at excess-127 notation now.
Excess-127 notation
This is very simple, despite its impressive name. To find the exponent just add 127 to its value then convert the result to binary. This addition will ensure that all exponents have values between 0 and 255, i.e. all positive values.
Example
If the exponent is –35 then we add 127 to give the result 92, which we can then convert to binary (01011100).
When the value is to be taken out of storage and converted back to a binary number, the above process is reversed by subtracting the 127 from the exponent.
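Excess-127 is so simple that it hardly needs code, but for completeness here is a sketch of both directions; the function names are my own.

```python
def to_excess_127(exponent):
    """Store an exponent in excess-127 form, so every stored value is positive."""
    return exponent + 127

def from_excess_127(stored):
    """Recover the real exponent when the value is taken out of storage."""
    return stored - 127

stored = to_excess_127(-35)
print(stored, format(stored, '08b'))   # 92 01011100
print(from_excess_127(stored))         # -35
```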
The mantissa can go as high as 1.1111 1111 1111 1111 1111 111₂. To the right of the binary point, the bits have decimal values of 0.5 + 0.25 + 0.125 + 0.0625 and so on. Adding these to the leading 1 gives a total that is virtually 2 – but not quite. The larger the number of bits in the mantissa, the more accuracy we can expect in the result. The exponent has eight bits so it can range from –127 to +128, giving a maximum number of 1×2¹²⁸, which is approximately 3.4×10³⁸. The accuracy is limited by the number of bits that can be stored in the mantissa, which in this case is 23.
If we want to keep to a total of 32 bits, then we have a trade-off to consider. Any increase in the size of the exponent, to give us larger numbers, must be matched by a reduction in the number of bits in the mantissa, which would have the effect of reducing the accuracy. Floating point operations per second (FLOPS) is one of the choices for measuring the speed of such calculations.
IBM are building (2002) a new supercomputer employing a million microprocessors. The Blue Gene project will result in a computer running at a speed of over a thousand million million operations per second (1 petaflop). This is a thousand times faster than the Intel 1998 world speed record, or about two million times faster than the current top-of-the-range desktop computers.
If we need more accuracy, an alternative method is to increase the number of bits used to store the number from 32 (single-precision) to 64 (double-precision). If this extra storage space is devoted to increasing the mantissa bits, then the accuracy is increased significantly.
Binary coded decimal
Binary coded decimal (BCD) numbers are very simple. Each decimal digit is converted to binary and written as a 4-bit or 8-bit binary number. The number 5 would be written as 0101₂ or 0000 0101₂. So far, this is the same as ‘ordinary’ binary but the change occurs when we have more digits.
Consider the number 25₁₀. In regular binary this would convert to 11001₂. Alternatively, we could convert each digit separately to 4-bit or 8-bit numbers:
2 = 0010₂ or 0000 0010₂
5 = 0101₂ or 0000 0101₂
Putting these together, 25₁₀ could be written using the 4-bit numbers as 0010 0101₂. This uses one byte and is called Packed BCD. Alternatively, we could use the 8-bit formats and express 25₁₀ as 0000 0010 0000 0101₂, which now uses two bytes. This is called Unpacked BCD.
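A small Python sketch shows how little work the conversion involves; the function names are simply for illustration.

```python
def to_packed_bcd(n):
    """Packed BCD: one 4-bit group per decimal digit."""
    return ' '.join(format(int(digit), '04b') for digit in str(n))

def to_unpacked_bcd(n):
    """Unpacked BCD: one whole byte per decimal digit."""
    return ' '.join(format(int(digit), '08b') for digit in str(n))

print(to_packed_bcd(25))      # 0010 0101            (one byte)
print(to_unpacked_bcd(25))    # 00000010 00000101    (two bytes)
```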
There are two disadvantages. Firstly, many numbers are of increased length after converting to BCD, particularly so if we use unpacked BCD or the numbers are very large like 25×10⁷⁵. In addition, arithmetic is much more difficult although, generally, microprocessors do have the ability to handle it.
The advantage becomes apparent when the microprocessor is controlling an external device like digits on displays at a filling station or accepting inputs from a keyboard. The coding is simple and does not involve the conversion of the numbers to binary.
Overall
Arithmetic → use binary
Inputting and outputting numbers → use BCD
Quiz time
In each case, choose the best option.
1 The number –35₁₀, when expressed as an 8-bit binary number in two’s complement form, is:
(a) 00100011.
(b) 1111011101.
(c) 11011101.
(d) 00110101.
2 The number 7₁₀ converted to an unpacked BCD format would be written as:
(a) 1110 0000.
(b) 7H.
(c) 0000 0111.
(d) 0111.
3 The signed magnitude number 11001100₂ is equivalent to:
(a) –76₁₀.
(b) 204₁₀.
(c) CCH.
(d) 1212₁₀.
4 In the number 0.5×10²⁴ the number:
(a) 10 is the mantissa.
(b) 24 is the exponent.
(c) 0 is the sign bit.
(d) 5 is the radix.
5 A signed magnitude number that has a figure:
(a) zero as the msb is a negative number.
(b) one as the lsb is a negative number.
(c) one as the msb is a negative number.
(d) zero as the lsb is a negative number.