There is a great deal of information about the contributions of the important figures in computer science available on the Web. Consequently, I will try to highlight key ideas, and explain more fully their significance if what is already available does not do the topic justice.
It is fundamentally a storage device, however. The human operator performed the manipulation, thus providing the "intelligence" or "algorithmic" ability that we take for granted as embedded in today's microprocessors. The positions of the beads represented the stored values upon which the computation was performed. In a sense the human user was the processor and the abacus was the memory device.
Here is a description of early mathematics and the development of algorithmic approaches to problem solving.
Here is a comprehensive chronology of calculating devices, and here is a concise one, and a lengthy one!
Up until the late 1930s these devices were mechanical - built from gears and moving rods etc. - and, with the exception of Babbage's attempts, they were basically little different from the abacus! That may sound like an outrageous statement, but it is meant in the sense that the algorithmic ability was provided by the human user, for the most part, just as is the case with the abacus. That is, the human mind stored the algorithm and human fingers manipulated the machine in order to carry out the algorithm. Here is a description of early calculating devices and the motivations of the time that led people to want to construct them. This next link provides a somewhat different perspective on early computing devices - read the sections "What is a Computer" and "More on Near-Computers".
Charles Babbage is a special exception - but no machine of his was ever actually completed. Pieces of his machines were completed by other people, however.
How the algorithmic nature of calculating machines changed is a key process in the history of computing. It is why Joseph-Marie Jacquard, Charles Babbage and Ada Byron, and Herman Hollerith figure prominently in the early history of computing. For it was their contributions that paved the way for others to realise fully, and actually implement, the notion that the algorithm, as well as the data upon which the algorithm operated, could be stored in a machine. And it is this notion that makes the computer the versatile machine that we know today.
Jacquard designed an automated weaving loom which produced designs in the cloth that were specified (i.e. programmed) by punched cards. That is, the punched cards stored the algorithm that the machine executed.
Babbage designed, but never actually completed, two machines - one called the Difference Engine and the other called the Analytical Engine. The Difference Engine was quite a special-purpose calculating machine, intended to produce accurate mathematical tables, but the Analytical Engine was quite a different matter. This machine and Babbage's ideas were simply 100 years ahead of their time - and because of it Babbage has come to be regarded as a major figure in the development of computing; for the design of the Analytical Engine was in many respects the same as the computer architecture that is dominant today. However, it was mechanical and it calculated using decimal arithmetic rather than binary.
(inline image of the Difference Engine, e.g. [16] or [17])
Ada Byron, Countess of Lovelace, was Babbage's colleague, and one of the few people of the time to really understand his ideas. It was she who wrote many of the algorithms that might have been executed by the Analytical Engine had it ever been completed. Here is another description of Ada Byron's life and work.
Herman Hollerith designed and built programmable punched card processing machines. Originally the motivation for this was the US census of 1890, which could not have been analysed quickly enough without some kind of automated data processing. Hollerith's machines were not general purpose in the sense that computers are today - in fact they could basically just count and sort - but they very successfully demonstrated the potential for automated data processing.
With all of these early machines the algorithms that the machines were to follow were stored on punched cards. But the very idea that the machine could perform different operations depending on the instructions given to it was very important. It took until about 1940 for the idea that the algorithm could be coded and stored in the very same place as the data to be fully realised in practice.
Two of the key figures were George Boole and Alan Turing. There were many other well known mathematicians whose work also had implications for the development of machine computation. For example Leibniz, who investigated binary arithmetic and envisioned reasoning and logic as susceptible to calculation; and Hilbert and Gödel.
So what is the value of p+q, i.e. p or q ? There are four possibilities, made up of the combinations of p and q each having the values true and false; namely
Thinking about the algebra of the or expression p+q with these values in mind, you can work out the result for each of the four cases, as the following truth table shows.
Truth table for the or function: f(p,q) = p+q:

| p | q | p+q |
|---|---|-----|
| 1 | 1 | 1 |
| 1 | 0 | 1 |
| 0 | 1 | 1 |
| 0 | 0 | 0 |
Another operator in boolean algebra is the and operator, written as *. So p*q represents being over six feet tall and being a basketball player. So if p is false (not over six feet) then p*q is false. And if q is false (not a basketball player) then p*q is false again. Only when both p is true and q is true can p*q also be true. The following truth table represents this operation.
Truth table for the and function: f(p,q) = p*q:

| p | q | p*q |
|---|---|-----|
| 1 | 1 | 1 |
| 1 | 0 | 0 |
| 0 | 1 | 0 |
| 0 | 0 | 0 |
Boole discovered a special law of logic - namely that p*p = p, or in fact p*p* ... p = p. So if over six feet tall is true and over six feet tall is true then (of course) over six feet tall is true, i.e. p*p = p.
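The two truth tables and Boole's idempotent law can be checked mechanically. The sketch below is my own illustration (the function names `b_or` and `b_and` are invented for this example), modelling Boole's + and * on the values 1 (true) and 0 (false):

```python
# Model Boole's + (or) and * (and) on the values 1 (true) and 0 (false).
def b_or(p, q):       # Boole's p+q
    return 1 if (p or q) else 0

def b_and(p, q):      # Boole's p*q
    return 1 if (p and q) else 0

# Reproduce the two truth tables above, row by row.
for p in (1, 0):
    for q in (1, 0):
        print(p, q, b_or(p, q), b_and(p, q))

# Boole's special law: p*p = p (idempotence).
assert all(b_and(p, p) == p for p in (0, 1))
```

Running it prints the same rows as the tables above, and the final assertion confirms p*p = p for both values of p.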
This is just the beginning, of course. You can read his paper
But perhaps the most major contribution from his work, and a step that was also being forged by other mathematicians working at much the same time but in algebra rather than symbolic logic, was the idea that the symbols are abstract objects in their own right, defined purely by the rules for manipulating them rather than by what they stand for.
Boole essentially created the field of symbolic logic by defining the set of axioms (see "Algebraic Rules" in this link) or postulates which specify the result of the allowed operations on the symbols - without regard to what the symbols actually represent.
The 10 postulates (see almost the end of this lengthy discussion of Boole; the postulates are taken from a paper by E. V. Huntington in the journal Transactions of the American Mathematical Society, Vol. 35, 1933, pp 274-304) are expressed in terms of K, +, and *, where K is a class of undefined elements p, q, ... etc, and + and * are two operations:
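Because the class K here has only two elements, postulates of this kind can be verified exhaustively. The following sketch (my own illustration; the names `OR`, `AND` and `NOT` are invented, and I check a representative selection of Huntington-style postulates rather than all ten) does exactly that over K = {0, 1}:

```python
# Check a selection of boolean-algebra postulates exhaustively
# over the two-element class K = {0, 1}, with + as or and * as and.
K = (0, 1)
OR  = lambda p, q: p | q     # Boole's p+q
AND = lambda p, q: p & q     # Boole's p*q
NOT = lambda p: 1 - p        # complement of p

for p in K:
    for q in K:
        assert OR(p, q) == AND(q, p) or OR(p, q) == OR(q, p)   # commutativity of +
        assert AND(p, q) == AND(q, p)                          # commutativity of *
        for r in K:
            assert AND(p, OR(q, r)) == OR(AND(p, q), AND(p, r))  # * distributes over +
            assert OR(p, AND(q, r)) == AND(OR(p, q), OR(p, r))   # + distributes over *
    assert OR(p, 0) == p and AND(p, 1) == p                    # identity elements 0 and 1
    assert OR(p, NOT(p)) == 1 and AND(p, NOT(p)) == 0          # complements
print("postulates hold on {0, 1}")
```

The point of the postulates, as the text says, is that nothing in these checks depends on what 0 and 1 "mean" - only on the rules.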
You are probably asking "So what is the significance of this mathematics for computers?" The answer lies in the design of the circuits in the processing unit and memory chips of modern computers. Claude Shannon showed how the analysis of such circuits could be carried out using boolean algebra. The foundation of symbolic logic provides the ability to express mathematically the complex logic required to carry out the functions of the CPU (central processing unit). Moreover, it provides mathematical tools, in the form of algebraic rules, for manipulating and simplifying that complex logic. It is no exaggeration to say that without it today's microprocessor could not be built.
We will revisit boolean algebra in the module on digital logic.
Firstly, Turing realised fully in a theoretical as well as a practical way (refer to his work on designing an actual machine while at the National Physical Laboratory), the notion that the algorithm could be treated in the same way as the data which the algorithm operated on. Thus, he expressed clearly the idea of a single machine which could be transformed with respect to the task it performs simply by specifying which algorithm it was to execute. This idea is essentially embodied in Turing's conceptualisation of a Universal Turing Machine.
Secondly, and perhaps because of his work on code breaking during the Second World War, Turing had a much clearer idea than others of his time of the computer as a machine for automatically manipulating symbols rather than as a machine for performing arithmetic calculations. Although it is true that some of the earliest practical demonstrations of computing, by building working machines, were executed in the USA, most of that work was stimulated by the need for complex numerical calculations and the people who did it, such as von Neumann (and I mention him because he was considered to have supplied the theoretical vision behind the machines), seemed to think of the machines as primarily numerical calculating devices.
Thirdly, Turing was interested in the very idea of an algorithm, and in the limits of what tasks it was possible to solve algorithmically. Thus, he developed the model of computation known as a Turing machine. This is still the most general model of computation and it is conjectured in the Church-Turing Thesis that any problem that can be solved algorithmically can be solved by a Turing Machine. It is beyond the scope of this course to look in detail at the formal description of Turing machines - suffice it to say that the Turing Machine is very much a stripped down representation of what it is to compute something (manipulate symbols). This thesis is then quite remarkable, because it states that, stripped down to the bare bones as it were, the Turing Machine idea actually captures the notion of computability completely!
Fourthly, Turing is famous because of his work in conceiving a machine that could simulate intelligence. He can, therefore, also be regarded as the father of the field of Artificial Intelligence! His ideas are embodied in the widely known Turing Test for artificial intelligence, which states that if in the course of conducting a conversation (say by typing on a keyboard) a person cannot distinguish that the partner they are conversing with is a machine, then that machine can be considered to be intelligent.
You can examine the basic chronology of events and related discussions in the Web documents here.
Other examples of computers using relays were the machines built at Harvard University - the Harvard Mark I, for example - which demonstrated an architecture in which the storage of instructions (the algorithm) was in a separate location from the storage of data. This became known as the Harvard architecture, as opposed to the von Neumann architecture which has been the dominant approach used in most modern microprocessors. We will look closely at the von Neumann architecture in the next module.
There is some dispute concerning who built the first electronic computer. Some identify John Atanasoff and Clifford Berry, who built a rather special-purpose electronic machine (designed to solve systems of simultaneous linear equations), and others credit the team of engineers J. Presper Eckert and John Mauchly, later joined by the mathematician John von Neumann, whose ENIAC began operating in 1945.
In many respects the ENIAC was outdated before it began operating (are we not frustrated by the same "problem" today, when we buy a computer that is almost immediately superseded by a newer, faster model?) because the team realised that a simpler, more powerful machine could be built according to the architectural design that has become known (unfairly to Eckert and Mauchly) as the von Neumann model.
The ENIAC was "programmed" by setting switches and plugging cables into sockets. One of the important features of the new machine, the EDVAC, was to be the use of a simple set of instructions which could be stored in the same memory as the data (haven't we heard of this before?). An algorithm, i.e. a program, would be specified by a sequence of instructions chosen from this basic set (goodbye to setting switches and plugging cables), and the program could be stored in the machine's memory. The machine was to use binary representation of data (and instructions) even if it communicated results in decimal (thus conversion between the two was required). Instructions were to be executed in sequence and one at a time. There was to be one memory with cells all the same size; and the programmer was to be able to write programs which could modify the instructions of the program, or create new instructions and insert them in the program. This new design for a computer is essentially the same that is used in most microprocessors today.
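The stored-program idea just described can be made concrete with a toy machine. The sketch below is my own invention (the instruction names `LOAD`, `ADD`, `STORE`, `HALT` and their encoding are illustrative, not the actual EDVAC instruction set); the essential point is that instructions and data sit in the same memory:

```python
# A toy stored-program machine: fetch, decode, execute, one instruction
# at a time, with program and data sharing ONE memory array.
def run(memory):
    acc, pc = 0, 0                     # accumulator and program counter
    while True:
        op, arg = memory[pc]           # fetch the next instruction
        pc += 1
        if   op == "LOAD":  acc = memory[arg]          # acc <- mem[arg]
        elif op == "ADD":   acc = acc + memory[arg]    # acc <- acc + mem[arg]
        elif op == "STORE": memory[arg] = acc          # mem[arg] <- acc
        elif op == "HALT":  return memory

# Cells 0-3 hold the program; cells 4-6 hold the data.
memory = [
    ("LOAD", 4),   # 0: acc <- mem[4]
    ("ADD", 5),    # 1: acc <- acc + mem[5]
    ("STORE", 6),  # 2: mem[6] <- acc
    ("HALT", 0),   # 3: stop
    2, 3, 0,       # 4-6: data
]
run(memory)
print(memory[6])  # -> 5
```

Because program and data share the same memory, a program could in principle `STORE` into cells 0-3 and thereby modify its own instructions - exactly the capability the EDVAC design called for.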
The history of hardware developments since this time has basically been one of advances to more reliable technologies (vacuum tubes to transistors to integrated circuits), new types of memory and storage technologies, and advances in miniaturisation (integrated circuits - ICs, large scale integration - LSI; and very large scale integration - VLSI). These achievements are not to be diminished, but they have not been of the same fundamental nature as the events just described.
The development of new machine architectures to replace the von Neumann model is in fact an extremely active area of fundamental research in computer science. Two possibilities are parallel or massively parallel architectures and neural networks.
The modules on operating systems, programming languages, and software engineering will trace these developments; and since they are not so much historical as contemporary I will not discuss them here at all.