The earliest computers were comparatively slow. They were still very useful, as they could solve problems much more quickly than those problems could be solved by hand.

But because they were slower, and worked on smaller problems, the time taken to convert numbers from decimal form to binary form for use inside the computer was large compared to the time the computer spent doing useful work.

As well, many computers worked on data in the form of punched cards prepared by hand, sometimes punching additional cards in the same format to be added to a set of manually-punched cards.

Even today, many computers are provided with a decimal arithmetic capability to serve commercial applications, where data is kept in disk files in printable character form, and the operations on that data involve only a relatively limited amount of computation between input-output operations.

It may be noted that one Canadian computer, the D.R.T.E. computer, which performed its arithmetic in binary form, included equipment for conversion between decimal and binary as part of its input/output circuitry. This represents another possible way to deal with the conflict between how people write numbers and how computers can best handle them.

In the previous section on computers with drum and delay line memory, two decimal computers were discussed.

The Univac I treated a string of twelve 6-bit characters as a sign and eleven decimal digits.

The IBM 650 had a word which consisted of ten decimal digits and a sign. Characters were represented by two-digit numbers, five of which could be contained in a word.

A few other decimal computers will be very briefly described on this page.

The Burroughs 205 and 220 computers also, like the IBM 650, represented numbers as ten decimal digits and a sign, and characters as two digits. An instruction had a two-digit opcode, and two four-digit addresses.

The IBM 1401 computer, and its compatible successors such as the 1410 and 7010, used seven bits in memory to represent a 6-bit character plus a one-bit word mark. The word mark was set on the first (most significant) digit of a number, and the first character of an instruction. Addresses pointing to numbers pointed to their units digit, allowing addition performed serially to begin immediately.
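
How word marks and units-digit addressing allow serial addition to start immediately can be illustrated with a small sketch. The memory representation here (a dictionary of address to digit/word-mark pairs) is invented for illustration and is not how the 1401 actually encoded storage:

```python
# A sketch of 1401-style field addressing: each cell holds a digit plus
# a word-mark bit; a field is addressed at its units (rightmost) digit,
# and the word mark on its high-order digit delimits it on the left.

def read_field(memory, units_addr):
    """Collect a field's digits, low-order first, as a serial adder would."""
    digits = []
    addr = units_addr
    while True:
        digit, word_mark = memory[addr]
        digits.append(digit)
        if word_mark:          # word mark on the high-order digit ends the field
            break
        addr -= 1
    return digits

def add_fields(memory, a_units, b_units):
    """Add two fields digit-serially, starting from the units digits."""
    a = read_field(memory, a_units)
    b = read_field(memory, b_units)
    carry, result = 0, []
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)
    return result              # low-order digit first

# field 347 at addresses 10..12, field 89 at addresses 20..21
mem1401 = {10: (3, True), 11: (4, False), 12: (7, False),
           20: (8, True), 21: (9, False)}
print(add_fields(mem1401, 12, 21))   # 347 + 89 = 436 -> [6, 3, 4]
```

Because the address points at the units digit and the digits are fetched toward lower addresses, the adder sees the low-order digits first and never needs to know the field length in advance.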

The IBM 1620 computer organized memory into two-digit cells, with four bits for the digit, and one flag bit. Numbers were addressed by their units digit, and had to be at least two digits long. The most significant digit always had its flag bit set to delimit the number, and the least significant digit would also have its flag bit set if the number was negative. Instructions consisted of a two-digit opcode and two five-digit addresses; if the optional indirect addressing feature was present, flagging the units position of an address indicated that it was an indirect address. This computer also represented characters by two-digit codes.
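
The flagged-digit scheme can be sketched as follows; this is an illustration of the delimiting rules described above, not a model of the actual 1620 hardware, and the memory representation is invented:

```python
# A sketch of reading a number from 1620-style flagged-digit memory:
# each cell is (digit, flag); the flag on the high-order digit delimits
# the field, and a flag on the units digit marks the number negative.

def read_1620_number(memory, units_addr):
    digit, flag = memory[units_addr]
    negative = flag                    # flag on the units digit = negative
    digits = [digit]
    addr = units_addr - 1
    while True:                        # numbers are at least two digits long
        digit, flag = memory[addr]
        digits.append(digit)
        if flag:                       # flag on high-order digit ends the field
            break
        addr -= 1
    value = 0
    for d in reversed(digits):         # high-order digit first
        value = value * 10 + d
    return -value if negative else value

# -305 at addresses 100..102; +42 at addresses 200..201
mem1620 = {100: (3, True), 101: (0, False), 102: (5, True),
           200: (4, True), 201: (2, False)}
print(read_1620_number(mem1620, 102))   # -> -305
print(read_1620_number(mem1620, 201))   # -> 42
```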

The IBM NORC computer had a word consisting of 16 decimal digits. It had 80 distinct opcodes, and its instructions were in a three-address format. It had hardware floating-point, and the range of representable numbers ran from 10^31 down to 10^(-43). The fact that 31 was the maximum positive exponent suggested the following to me before I was able to find out the actual details of its floating-point format: I thought that perhaps, in floating-point mode, six bits were used to represent the exponent in binary form, which would allow it to vary from +31 to -32 (or possibly -31, depending on the representation used), with the additional range of the exponent obtained through gradual underflow.

One can actually hazard a more precise guess. Given that -43 is the ultimate lower limit of the exponent range for nonzero quantities, it is 11 below -32 and 12 below -31. If one assumes that no attempt is made to reject quantities with fewer than two or three significant digits, then to provide the exponent range given, one digit position would have to be used for the sign, and the exponent range would have to run from +31 to -31 only, leaving 13 digits of precision, so that the maximum number of leading zeros in an unnormalized number would be 12, and -31 - 12 = -43. This means that the exponent would have to be in sign-magnitude or one's complement form instead of excess-n or two's complement form.
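
The arithmetic behind this guess can be checked directly; the encodings listed here are the standard candidates for a 6-bit exponent field, as assumed in the paragraph above:

```python
# With gradual underflow, the effective minimum exponent is the encoded
# minimum less the maximum number of leading zeros the mantissa can hold.
mantissa_digits = 13
max_leading_zeros = mantissa_digits - 1       # 12, for an unnormalized number

# minimum exponent under each candidate 6-bit binary encoding
twos_complement_min = -32                     # two's complement or excess-32
symmetric_min = -31                           # sign-magnitude or one's complement

print(twos_complement_min - max_leading_zeros)   # -> -44
print(symmetric_min - max_leading_zeros)         # -> -43, the observed limit
```

Only the symmetric encodings reaching -31 reproduce the observed limit of -43, which is what forces the sign-magnitude or one's complement conclusion.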

Designing a computer with a floating-point
format including a binary exponent (that is, a power of 10, but
represented in binary form) but a decimal mantissa
was referred to in the book *Planning a Computer System*,
about the STRETCH computer, also made by IBM, but only as a
theoretical possibility.

In fact, however, while the NORC did use one digit for the sign of floating-point numbers, and it did have gradual underflow, as I had guessed, the exponent was in decimal form, not binary, and ranged from +30 to -30, represented in ten's complement form. This computer's floating-point format was highly analogous to scientific notation, as the decimal point of the mantissa was placed after the first digit. Thus, the smallest nonzero magnitude that could be represented on the machine is 0.000000000001 * (10^(-30)), or 10^(-42), agreeing with the figure given in the manual for the NORC, rather than the 10^(-43) noted in the BRL report.
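
A sketch of decoding such a word follows. The exact digit layout (sign digit first, then the two exponent digits, then the 13 mantissa digits), the sign convention, and the mapping of the two-digit ten's complement field onto the range +30 to -30 are my assumptions from the description above, not documented NORC details:

```python
from fractions import Fraction

def decode_norc_float(word):
    """Decode a NORC-style 16-digit floating-point word (a sketch).

    word: a string of 16 decimal digits, assumed to be laid out as a
    sign digit, a 2-digit ten's-complement exponent, and a 13-digit
    mantissa with the decimal point after its first digit."""
    assert len(word) == 16 and word.isdigit()
    sign = -1 if word[0] != '0' else 1           # assumed sign convention
    raw_exp = int(word[1:3])
    exponent = raw_exp if raw_exp <= 49 else raw_exp - 100  # ten's complement
    mantissa = Fraction(int(word[3:]), 10**12)   # point after the first digit
    return sign * mantissa * Fraction(10) ** exponent

# a normalized value: 1.5 x 10^1 = 15
print(decode_norc_float('0011500000000000'))     # -> 15
# the smallest nonzero magnitude: 0.000000000001 x 10^-30 = 10^-42
print(decode_norc_float('0700000000000001') == Fraction(1, 10**42))  # -> True
```

Exact rational arithmetic (`Fraction`) is used so that the smallest-magnitude check is not disturbed by binary floating-point rounding.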

The floating-point number format was the only format it used, but instructions could specify the target exponent of the result of an operation, allowing fixed-point arithmetic for any decimal point position.

The UNIVAC LARC computer handled numbers consisting of 11 digits and a sign, but they were stored in memory words of 12 symbols; each symbol occupied five bits and had to have odd parity, but only 15 of the 16 odd-parity combinations were valid. Instructions were 12 digits long; the first digit, if nonzero, flagged the instruction for a tracing mode. The next two digits were the opcode, the next two the destination register, the next two the index register, and the last five the address. It had only 26 registers in its basic configuration, but could be expanded to 99 registers. Again, characters were two-digit codes.
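
The parity arithmetic here is easy to verify: of the 32 possible five-bit codes, exactly half have an odd number of one bits.

```python
# Enumerate the five-bit codes with odd parity; the LARC used 15 of
# these 16 combinations as valid symbols.
odd_parity_codes = [c for c in range(32) if bin(c).count('1') % 2 == 1]
print(len(odd_parity_codes))   # -> 16
```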

The IBM 705 and 7080 computers used a memory consisting of six-bit characters. The zone bits of a number's last digit contained its sign, which also indicated the end of the number. Numbers were addressed by their units digits, and needed to be preceded by a non-numeric character (perhaps supplied by the units digit of an abutting number) to be delimited. Instructions were always five characters long, and their zone bits were used as data, not to delimit instructions.
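
The field-delimiting scheme can be sketched as follows; the memory representation and the zone code chosen for the minus sign are invented for illustration and do not reflect the 705's actual character coding:

```python
# A sketch of 705-style field addressing: each cell is (zone, digit),
# with digit None for a non-numeric character. The zone bits of the
# units digit carry the sign, and the scan toward lower addresses
# stops at the first non-numeric character.
MINUS_ZONE = 0b10   # assumed zone code for a negative sign

def read_705_field(memory, units_addr):
    zone, digit = memory[units_addr]
    negative = (zone == MINUS_ZONE)   # sign from the units digit's zone bits
    digits = []
    addr = units_addr
    while True:
        zone, digit = memory[addr]
        if digit is None:             # non-numeric character delimits the field
            break
        digits.append(digit)
        addr -= 1
    value = 0
    for d in reversed(digits):        # high-order digit first
        value = value * 10 + d
    return -value if negative else value

# -472 stored at addresses 10..12, preceded by a non-numeric cell at 9
mem705 = {9: (0b00, None), 10: (0b00, 4), 11: (0b00, 7), 12: (MINUS_ZONE, 2)}
print(read_705_field(mem705, 12))   # -> -472
```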

The IBM 7070, 7072, and 7074 computers had a word consisting of a sign and ten decimal digits. The sign and the first two digits gave the opcode; the next two digits identified a memory location used as an index register; the next two digits often indicated the start and end digits, within a memory location, to be acted upon by an instruction, and the final four digits gave the address. Again, characters were represented by two-digit codes.
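
Splitting such an instruction word into its parts is a matter of slicing off digit pairs; the field names and example values below are illustrative, not taken from an actual 7070 program:

```python
# A sketch of decoding a 7070-style instruction word: a sign plus ten
# decimal digits, carved into the fields described above.
def decode_7070_instruction(sign, digits):
    assert sign in '+-' and len(digits) == 10 and digits.isdigit()
    return {
        'opcode':  sign + digits[0:2],   # the sign and first two digits
        'index':   digits[2:4],          # memory location used as index register
        'field':   digits[4:6],          # start/end digit positions in the operand
        'address': digits[6:10],         # four-digit operand address
    }

print(decode_7070_instruction('+', '1203250100'))
```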

The IBM 707x machines are historically interesting. Although they were decimal, unlike the 705/7080 series or the 1401/7010 series they operated on fixed-length words rather than arbitrary fields of digits, though there was a field within the instruction format to select a range of digits within an operand.

Thus, while these machines operated on decimal numbers, because they did so a word at a time rather than a digit at a time, they were efficient and powerful enough to be considered for scientific computation as well. Earlier, the NORC, a decimal computer made by IBM for the U.S. Navy, had been used for scientific computation; some have referred to it as the world's first supercomputer.

The 7070 series was not commercially successful for IBM; owners of the 705 demanded a fully compatible upgrade path, and the later smaller-scale and thus less-expensive 1401 was also much more successful. Among the customers for the IBM 7074, however, were the IRS in the United States and the Department of National Revenue in Canada, both having a high volume of computation of a commercial/financial nature to perform.

The 7070, IBM's first large-scale transistorized computer, preceding both the STRETCH and the 7090, was an attempt to design a computer acceptable for both business and scientific computing. IBM's experience with the 7070 series may explain the following things about the IBM 360:

- Why IBM discontinued the 1401, much to the discomfiture of its sales force, instead of also making new 1401-compatible computers out of the same IC technology as the 360: a compatible upgrade path would likely have led customers not to consider the newer architecture at all.
- Why the decimal instructions on the IBM 360 operated on decimal numbers of arbitrary length (although they were constrained to occupy a whole number of two-digit bytes, with the sign occupying one digit position of the last byte): the popular commercial architectures from IBM operated on decimal numbers this way (as well as the decimal but scientific 1620); only the less successful 7070 series (and the preceding 650 and NORC) were word-oriented and decimal.
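
The System/360 packed-decimal layout mentioned in the last point, two digits per byte with the sign occupying the last half-byte, can be sketched as follows (C and D hexadecimal are the standard preferred sign codes):

```python
def pack_decimal(value):
    """Encode an integer in System/360 packed-decimal form (a sketch):
    two digits per byte, the sign code (0xC for +, 0xD for -) in the
    low nibble of the last byte, and the digit count padded to odd so
    the field fills a whole number of bytes."""
    sign = 0xC if value >= 0 else 0xD
    digits = [int(d) for d in str(abs(value))]
    if len(digits) % 2 == 0:
        digits.insert(0, 0)            # pad to an odd digit count
    nibbles = digits + [sign]
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

print(pack_decimal(-1234).hex())   # -> '01234d'
print(pack_decimal(7).hex())       # -> '7c'
```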

Incidentally, the 7072, the last machine in the series to be introduced, was basically a 7074 CPU connected to slower 7070 memory.

[Up] [Previous] [Next Section]