An electrical calculator that works on numbers in binary notation would be an inconvenient device of limited usefulness. But because a computer is not merely a manually-operated calculator, it can perform a series of calculations, and this includes the series of calculations required to convert numbers from decimal to binary and back again.
To perform complicated calculations, it is necessary for a computer to have a scratchpad in which to store intermediate results in addition to the current result of its calculations. A simple computer can be thought of as being organized like a pocket calculator, with one main register, corresponding to the display of the calculator, called the accumulator, and a limited number of other memory registers.
The computer would automatically carry out a sequence of instructions, like the following:
      LOAD  5     recall the number in memory 5
      ADD   3     add the number in memory 3 to it
      STORE 7     store the result in memory 7
and on and on. An instruction would usually be in the form of a binary number, with the first few bits identifying which operation is to be performed, and the remaining bits identifying which memory cell to use in the instruction.
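This encoding can be sketched in Python for a hypothetical machine with 16-bit instructions, a 4-bit operation code, and a 12-bit address field (the widths and opcode values here are illustrative assumptions, not those of any particular machine):

```python
# Hypothetical opcode assignments for illustration only.
OPCODES = {0b0001: "LOAD", 0b0010: "ADD", 0b0011: "STORE"}

def decode(word):
    """Split a 16-bit instruction word into (mnemonic, address)."""
    opcode = (word >> 12) & 0b1111   # the first few bits: which operation
    address = word & 0xFFF           # the remaining bits: which memory cell
    return OPCODES[opcode], address
```

Decoding the word 0b0010000000000011, for instance, yields the instruction ADD 3.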
However, a computer that can only give the result of a formula, however complicated, is still very limited. One extra thing is needed for an electronic calculator to be viewed as a real computer.
A computer also needs instructions that change its course of action depending on the current result of the calculation being performed. This lets it stop when it has found the right answer to a question; it lets it handle complicated formulas with special cases; and it lets it perform a calculation for each of the numbers from 1 to 100 and then stop.
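As a minimal illustration, here is the "numbers from 1 to 100" calculation written out in Python; the conditional test plays the role of the jump instruction that decides whether to repeat or stop:

```python
def sum_1_to_100():
    """Repeat a calculation for the numbers 1 to 100, then stop."""
    total = 0
    n = 1
    while True:
        total += n
        n += 1
        if n > 100:    # the conditional test that ends the loop
            break
    return total
```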
A very early computer might read its instructions from paper tape, and have conditional instructions which switched between one paper tape reader and another. The memories used for numbers involved in a calculation were expensive, and only a few of them were included in the earliest computers.
Eventually, though, inexpensive forms of memory were found for computers. At first they were awkward, such as mercury delay lines or Williams tubes; then, the magnetic drum was convenient but slow, and finally magnetic core memory provided computers with fast and well-behaved large memories.
When the same memory contained the computer's program and the numbers with which it worked, the computer was called a stored-program computer, and this type of computer quickly eclipsed all others.
The Williams tube, like core, was a random-access memory device; it was only inconvenient in that, although it could be made reliable in practice, this required effort. Mercury delay lines also required effort to cope with; other serial devices, such as magnetostrictive delay lines and drum memories, were relatively well-behaved.
Some early delay-line computers merely had a number of delay lines all the same length: 32 delay lines of 16 words each in the EDSAC, and 126 delay lines of 8 words each in the Univac I (one of which served as a spare, the memory having 1000 words). Others mixed short and long delay lines: the Packard-Bell 250 had nine delay lines of 256 words and one of 16 words; the Ferranti Pegasus had five that were one word long, two that were two words long, and six that were eight words long; and the English Electric DEUCE had four that were one word long, three that were two words long, two that were four words long, and twelve that were thirty-two words long. Of these, the Packard-Bell and the Pegasus used magnetostrictive delay lines, the others mercury delay lines. The Univac I, the Pegasus, and the DEUCE all had a magnetic drum memory in addition to their delay line main memories.
Computers whose only memory was a magnetic drum, like the IBM 650, often included an extra address in every instruction indicating where the next instruction would come from, so that instructions could be spaced around the drum according to the time each took to execute (optimum programming). One track on a drum might have several equally spaced heads, making it the equivalent of a shortened delay line.
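Optimum programming amounts to a small calculation. Assuming a hypothetical drum track of 50 words (an illustrative figure, not the IBM 650's actual geometry), the best home for the next instruction is the word that rotates under the head just as the current instruction finishes executing:

```python
WORDS_PER_TRACK = 50  # an assumed track size, for illustration only

def best_next_address(current_pos, exec_time_words):
    """Word position passing under the head when execution ends.

    current_pos: where the current instruction sits on the track;
    exec_time_words: how many word-times the instruction takes to
    execute after it has been read.
    """
    return (current_pos + 1 + exec_time_words) % WORDS_PER_TRACK
```

An instruction placed anywhere else would force the computer to wait up to a full drum revolution before the next instruction came around.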
As we have seen, even when it became possible to make stored-program computers, the speed and cost of memory remained a concern; this remained true even after the use of magnetic core memory became almost universal. The internal logic of the computer was also expensive. Thus, there were both small computers and big ones. A small computer might only be able to perform very simple operations directly, such as the following set: add two numbers together, perform a bitwise logical AND between two numbers, shift the number in the accumulator by one place (either left or right), and reverse all the bits in the accumulator.
Such a computer might perform multiplication using a program like this:
      LOAD  K0
      STORE PROD      clear the result
      LOAD  K16       this location contains the number of times
*                     to do the loop
      STORE I         store in the counter
LOOP  LOAD  B         fetch the second term
      STORE TEMP
      SHR             prepare it for the next round
      STORE B
      LOAD  TEMP      but use the current value
      AND   K1        test the last bit
      SKN             skip next instruction if not zero
      JMP   AGAIN
      LOAD  PROD
      ADD   A
      STORE PROD
AGAIN LOAD  A
      SHL             prepare the other term for the next round
      STORE A
      LOAD  I
      ADD   KMINUS1   decrement the counter
      STORE I
      SKZ
      JMP   LOOP
where the constants K0, K16, K1, and KMINUS1 are defined, and storage for A, B, PROD, and I is allocated, elsewhere in the program.
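The listing implements shift-and-add multiplication. The same algorithm written out in Python, with the fixed count of 16 rounds corresponding to the constant K16, looks like this:

```python
def multiply(a, b, bits=16):
    """Shift-and-add multiplication, as in the assembly listing above."""
    prod = 0                  # STORE PROD: clear the result
    for _ in range(bits):     # the counter I counts off 16 rounds
        if b & 1:             # AND K1: test the last bit of B
            prod += a         # add A into the running product
        a <<= 1               # SHL: prepare the other term
        b >>= 1               # SHR: prepare B for the next round
    return prod
```

Each round either adds the (shifted) first term into the product or not, according to the corresponding bit of the second term, which is exactly what the SKN/JMP pair accomplishes in the listing.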
Larger computers, which had more than one accumulator, could use the field normally specifying a destination register (not applicable in a jump instruction) to specify a condition for jumping. Smaller ones, with only one accumulator, either had conditional jump instructions as shown above, or might have a Compare and Skip instruction: continue to the next instruction if the destination is less than the source, skip one instruction if they are equal, and skip two instructions if the destination is greater than the source. Such a three-way branch makes it easy to handle every possible outcome of a comparison.
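The three-way Compare and Skip can be sketched as a function from the comparison's outcome to the next value of the program counter (the offsets of 1, 2, and 3 assume one-word instructions):

```python
def compare_and_skip(pc, dest, src):
    """Return the program counter after a Compare and Skip at pc."""
    if dest < src:
        return pc + 1   # continue with the next instruction
    elif dest == src:
        return pc + 2   # skip one instruction
    else:
        return pc + 3   # skip two instructions
```

The three instructions following the compare can then each jump to a different piece of code, one per outcome.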
Another common instruction would be one to facilitate looping for a set number of times, such as Increment and Skip if Zero. Incrementing, rather than decrementing, requires negating the number of times to perform the loop before starting, but lets the destination serve as an increasing pointer into an array.
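An Increment and Skip if Zero loop can be sketched as follows: the counter starts at the negated loop count, and the same counter doubles as a rising offset into an array (this sketch assumes the array has at least one element):

```python
def sum_array_isz(array):
    """Sum an array using an ISZ-style loop counter."""
    n = len(array)
    counter = -n                      # negate the count before starting
    total = 0
    while True:
        total += array[n + counter]   # counter doubles as a rising index
        counter += 1                  # ISZ: increment the counter ...
        if counter == 0:              # ... and skip out when it hits zero
            break
    return total
```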
Another very important basic instruction is a subroutine jump instruction, which would, in addition to branching to another point in the program, store the address of the instruction following the subroutine jump instruction itself in a convenient place.
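One common early convention, used for example on the PDP-8, stored the return address in the word at the subroutine's entry point itself, with execution resuming at the word after it. A sketch, modeling memory as a Python list:

```python
def jsr(pc, target, memory):
    """Jump to subroutine: store the return address at the target word
    (the PDP-8 convention), then resume execution at target + 1."""
    memory[target] = pc + 1   # the address following the jump itself
    return target + 1         # where execution continues
```

The subroutine's final instruction is then an indirect jump through its own first word, which sends control back to the caller.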
Many computers, both big and small, used some method of forming addresses from instruction fields that weren't long enough to refer to every point in the entire memory of the computer.
The IBM 360, a large computer, had sixteen general registers each of which served as an accumulator. Several of those registers were normally used as base registers, pointing to the beginning of an area of memory currently being used; thus, the address field in an instruction consisted of four bits to indicate a base register, to the contents of which a twelve bit displacement was added.
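Address formation on the System/360 can be sketched as follows. Note that a base field of zero means "no base register", so register 0's contents never enter address arithmetic; the index-register field present in some of the machine's instruction formats is omitted here for simplicity:

```python
def base_displacement(regs, base, disp):
    """Effective address from a 4-bit base field and 12-bit displacement.

    regs is the list of sixteen general registers; a base field of 0
    contributes zero, regardless of what register 0 holds.
    """
    assert 0 <= base < 16 and 0 <= disp < 4096
    return (regs[base] if base != 0 else 0) + disp
```

A program could thus address 4096 bytes beyond each base register without changing the register, and reach the rest of memory by loading new base values.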
The PDP-8, a small computer, and several other small computers (such as the Honeywell 316 and the related DDP-516, and the Hewlett-Packard 2114/2115/2116) used a simple scheme of dividing the computer's memory into pages of a given size; a single bit in the instruction indicated if the address being referred to was on the same page as the instruction containing it, or in the very first page of the memory, page zero. This was very efficient, as it meant that no addition was required to calculate an address.
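With 128-word pages, as on the PDP-8, forming the address is pure bit assembly rather than addition: the offset from the instruction is combined with either the instruction's own page or page zero, chosen by one bit:

```python
PAGE_SIZE = 128  # the PDP-8's page size; other machines differed

def page_address(pc, page_bit, offset):
    """Effective address from a page bit and an in-page offset.

    page_bit = 1 means the same page as the instruction at pc;
    page_bit = 0 means page zero.
    """
    page = (pc // PAGE_SIZE) * PAGE_SIZE if page_bit else 0
    return page + offset
```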
But it did complicate how compilers worked, since it meant that sequences of instructions in memory would not be simple and uniform, unless every subroutine were restricted to fitting into a single page of memory, or all the variables (or pointers to arrays; see below about indirect addressing, common on this type of computer) were placed in page zero.
Being able to perform addition in address calculation as an option, however, was an important advance in computer design that took place fairly early in the development of computers.
Being able to work with arrays of numbers in memory was important for many types of calculations. One way to do this would be to take an instruction referring to the first element of an array, and add one to its numeric code; this would produce an instruction referring to the next element of the array, as long as the address field of the instruction was in its least significant part, and was long enough that it could refer to any element of the array directly.
Some computers added a feature known as indirect addressing, where a flag bit in an instruction caused the instruction to point to a location containing the real address of its operand, instead of pointing to the operand itself. This meant that the address was sitting by itself in a memory location, and could be subjected to arithmetic with confidence.
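Indirect addressing can be sketched in a few lines, with memory modeled as a Python list:

```python
def fetch_operand(memory, address, indirect):
    """Fetch an operand, honoring an indirect-addressing flag."""
    if indirect:
        address = memory[address]   # the word holds the real address
    return memory[address]          # now fetch the operand itself
```

Because the real address is just a number sitting in a memory word, a program can add to it or store over it to step through an array, without ever modifying an instruction.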
But the solution that became most popular was the index register, a register whose contents could optionally be added to the address specified in an instruction. Thus, an instruction would point to the start of an array, and the index register would indicate which element of the array was to be used.
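Indexed addressing reduces to one addition at instruction-execution time, sketched here with memory as a Python list:

```python
def load_indexed(memory, address, index):
    """Fetch memory[address + index]: the instruction supplies the
    array's starting address, the index register picks the element."""
    return memory[address + index]
```

Stepping the index register from 0 upward then walks through the array, while the instruction itself remains unchanged.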