Introduction to Computer Technology

History: Quest for a Universal Machine

Why Examine History?

History should not be viewed as simply a sequence of facts or a chronology of events. That path only leads to boredom and seeming irrelevance. It is far more interesting to ask yourself questions about motivations and causality, to seek out themes, and to discover the relationships between discoveries and how one individual built on the work of another. Thus I ask you to consider the following questions.

Task: Choose one of these questions and write a short essay on the topic (no more than 4 pages, single spaced, 12pt).

There is a great deal of information about the contributions of the important figures in computer science available on the Web. Consequently, I will try to highlight key ideas, and to explain their significance more fully where what is already available does not do the topic justice.

The Beginnings

The history of people's attempts to build machines that assist with basic arithmetic is a long one. Most people would mention the abacus, for example, as one of the earliest such machines. If you're interested in more information about the abacus click here or here.

It is fundamentally a storage device, however. The positions of the beads represented the stored values upon which the computation was performed, while the human operator carried out the manipulation, thus providing the "intelligence" or "algorithmic" ability that we take for granted as embedded in today's microprocessors. In a sense the human user was the processor and the abacus was the memory device.

Here is a description of early mathematics and the development of algorithmic approaches to problem solving.

Here is a comprehensive chronology of calculating devices, and here is a concise one, and a lengthy one!

Up until the late 1930s these devices were mechanical - built from gears, moving rods and the like - and, with the exception of Babbage's attempts, they were basically little different from the abacus! That may sound like an outrageous statement, but it is meant in the sense that the algorithmic ability was, for the most part, provided by the human user, just as is the case with the abacus. That is, the human mind stored the algorithm and human fingers manipulated the machine in order to carry it out. Here is a description of early calculating devices and the motivations of the time that led people to want to construct them. This next link provides a somewhat different perspective on early computing devices - read the sections "What is a Computer" and "More on Near-Computers".

Charles Babbage is a special exception - but no machine of his was ever actually completed. Pieces of his machines were completed by other people, however.

How the algorithmic nature of calculating machines changed is a key theme in the history of computing. It is why Joseph Marie Jacquard, Charles Babbage and Ada Byron, and Herman Hollerith figure prominently in the early history of computing. For it was their contributions that paved the way for others to realise fully, and actually implement, the notion that the algorithm, as well as the data upon which the algorithm operated, could be stored in a machine. And it is this notion that makes the computer the versatile machine that we know today.

Jacquard designed an automated weaving loom which produced designs in the cloth that were specified (i.e. programmed) by punched cards. That is, the punched cards stored the algorithm that the machine executed.

Babbage designed, but never actually completed, two machines - one called the Difference Engine and the other called the Analytical Engine. The Difference Engine was a special-purpose calculating machine intended to produce accurate mathematical tables, but the Analytical Engine was quite a different matter. This machine and Babbage's ideas were simply 100 years ahead of their time - and because of this Babbage has come to be regarded as a major figure in the development of computing, for the design of the Analytical Engine was in many respects the same as the computer architecture that is dominant today. However, it was mechanical and it calculated using decimal arithmetic rather than binary.

(Inline image of the Difference Engine, e.g. [16] or [17])

Ada Byron, Countess of Lovelace, was Babbage's colleague, and one of the few people of the time to really understand his ideas. It was she who wrote many of the algorithms that might have been executed by the Analytical Engine had it ever been completed. Here is another description of Ada Byron's life and work.

Herman Hollerith designed and built programmable punched card processing machines. Originally the motivation for this was the US census of 1890, which could not have been analysed quickly enough without some kind of automated data processing. Hollerith's machines were not general purpose in the sense that computers are today - in fact they could basically just count and sort - but they very successfully demonstrated the potential for automated data processing.

With all of these early machines the algorithms that the machines were to follow were stored on punched cards. But the very idea that the machine could perform different operations depending on the instructions given to it was very important. It took until about 1940 before the idea that the algorithm could be coded and stored in the very same place as the data was fully realised in practice.

Theoretical Foundations

The previous section has dealt with the early inventors. These people were generally involved in actually constructing machines. But computing would not have developed to the state we know today had certain theoretical, mathematical ideas not also evolved alongside these practical developments.

Two of the key figures were George Boole and Alan Turing. There were many other well known mathematicians whose work also had implications for the development of machine computation: for example Leibniz, who investigated binary arithmetic and envisioned reasoning and logic as susceptible to calculation; and Hilbert and Gödel.

George Boole

George Boole is well known for what is now called boolean algebra (see "Aims of his Work" in this link). Most people know it as a method for combining and manipulating symbols which can have one of two values - either the value true or the value false. It is equally useful to think of the two values as 1 or 0 instead of true and false. For example, suppose that p and q are two such symbols. One way of manipulating or combining them is by using the or operator which is often written as + (and not to be confused with the + used in arithmetic).

So what is the value of p+q, i.e. p or q ? There are four possibilities, made up of the combinations of p and q each having the values true and false; namely

  1. p can be true and q can be true
  2. p can be true and q can be false
  3. p can be false and q can be true
  4. p can be false and q can be false.
You're probably wondering exactly what p and q might be. To make that concrete, p might be the property of being over six feet tall, and q might be the property of being a basketball player, in which case p+q represents being over six feet tall or being a basketball player.

Thinking about the algebra of the or expression p+q with these values in mind, you can see that for case

  1. when p is true (over six feet) and q is true (a basketball player) then p+q is true, that is over six feet tall or a basketball player is true;
  2. when p is true (over six feet) and q is false (not a basketball player), then p+q (over six feet or a basketball player) is true;
  3. when p is false (not over six feet) and q is true (a basketball player), then p+q (over six feet or a basketball player) is true;
  4. when p is false (not over six feet) and q is false (not a basketball player), then p+q (over six feet or a basketball player) is false.
This is often represented with what is called a truth table. In each row the table shows the value of p+q for given values of p and q, and we're using 1s and 0s rather than true and false now.

Truth table for the or function: f(p,q) = p+q:

p q p+q
1 1 1
1 0 1
0 1 1
0 0 0
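If you'd like to experiment with this, here is a small Python sketch (purely my own illustration - the function name boolean_or is made up) that generates the same truth table, using 1 for true and 0 for false:

    # Truth table for the boolean "or" operator, written here as +.
    # True is represented by 1 and false by 0.
    def boolean_or(p, q):
        # p + q is true (1) unless both p and q are false (0).
        return 1 if (p == 1 or q == 1) else 0

    print("p q p+q")
    for p in (1, 0):
        for q in (1, 0):
            print(p, q, boolean_or(p, q))

Running it prints the same four rows as the table above.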

Another operator in boolean algebra is the and operator, written as *. So p*q represents being over six feet tall and being a basketball player. So if p is false (not over six feet) then p*q is false. And if q is false (not a basketball player) then p*q is false again. Only when both p is true and q is true can p*q also be true. The following truth table represents this operation.

Truth table for the and function: f(p,q) = p*q

p q p*q
1 1 1
1 0 0
0 1 0
0 0 0

Boole discovered a special law of logic - namely that p*p = p, or in fact p*p* ... *p = p. So if over six feet tall is true and over six feet tall is true then (of course) over six feet tall is true, i.e. p*p = p.
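Again purely as an illustration, a companion Python sketch generates the and table and checks Boole's law p*p = p for both possible values of p:

    # Truth table for the boolean "and" operator, written here as *.
    def boolean_and(p, q):
        # p * q is true (1) only when both p and q are true.
        return 1 if (p == 1 and q == 1) else 0

    print("p q p*q")
    for p in (1, 0):
        for q in (1, 0):
            print(p, q, boolean_and(p, q))

    # Boole's law: p*p = p for every value p can take.
    for p in (1, 0):
        assert boolean_and(p, p) == p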

This is just the beginning, of course. You can read his paper "The Calculus of Logic" if you want.

But perhaps the most important contribution of his work, and a step that was also being forged by other mathematicians working at much the same time in algebra rather than symbolic logic, was the idea that the symbols need not be placeholders for particular values. Most of us learnt that x and y in algebra represent numbers - as indeed they may - but modern mathematics regards such things simply as symbols which can be manipulated according to the axioms (rules) of the algebraic system.

Boole essentially created the field of symbolic logic by defining the set of axioms (see "Algebraic Rules" in this link) or postulates which specify the result of the allowed operations on the symbols - without regard to what the symbols actually represent.

The 10 postulates (see almost the end of this lengthy discussion of Boole; the postulates are taken from a paper by E. V. Huntington in the journal Transactions of the American Mathematical Society, Vol. 35, 1933, pp 274-304) are expressed in terms of K, +, and *, where K is a class of undefined elements p, q, ... etc, and + and * are two operations:

  1. if p and q are in the class K then p+q is in class K
  2. if p and q are in class K then p*q is in class K
  3. there is an element Z of K such that p+Z = p for every element p in K
  4. there is an element U of K such that p*U = p for every element p in K
  5. p+q = q+p
  6. p*q = q*p
  7. p + q*r = (p+q)*(p+r)
  8. p*(q+r) = p*q + p*r
  9. for every element p there is an element p' such that p+p' = U and p*p' = Z
  10. there are at least two elements in K, that is, there exist elements p and q in K which are not equal.
From this seemingly simple set of postulates the whole of classical symbolic logic can be built up.
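One way to convince yourself that the familiar two-valued system satisfies these postulates is simply to check them exhaustively. The following Python sketch is my own illustration: it takes K = {0, 1}, interprets + as or and * as and, and takes Z = 0, U = 1 and p' = 1 - p:

    # Exhaustive check of the postulates for the two-element boolean algebra.
    # K = {0, 1}, + is "or" (max), * is "and" (min), Z = 0, U = 1, p' = 1 - p.
    K = (0, 1)
    Z, U = 0, 1

    def plus(p, q):        # boolean "or"
        return max(p, q)

    def times(p, q):       # boolean "and"
        return min(p, q)

    def complement(p):     # p'
        return 1 - p

    assert len(set(K)) >= 2                               # the tenth postulate: at least two elements
    for p in K:
        assert plus(p, Z) == p                            # postulate 3
        assert times(p, U) == p                           # postulate 4
        assert plus(p, complement(p)) == U                # postulate 9
        assert times(p, complement(p)) == Z               # postulate 9
        for q in K:
            assert plus(p, q) in K and times(p, q) in K   # postulates 1 and 2
            assert plus(p, q) == plus(q, p)               # postulate 5
            assert times(p, q) == times(q, p)             # postulate 6
            for r in K:
                assert plus(p, times(q, r)) == times(plus(p, q), plus(p, r))   # postulate 7
                assert times(p, plus(q, r)) == plus(times(p, q), times(p, r))  # postulate 8

    print("All postulates hold for K = {0, 1}")

Every assertion passes, so the ordinary true/false system really is a model of these postulates.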

You are probably asking "So what is the significance of this mathematics for computers?" The answer lies in the design of the circuits in the processing unit and memory chips of modern computers. Claude Shannon showed how the analysis of such circuits could be carried out using boolean algebra. The foundation of symbolic logic provides the ability to express mathematically the complex logic required to carry out the functions of the CPU (central processing unit). Moreover, it provides mathematical tools, in the form of algebraic rules, for manipulating and simplifying that complex logic. It is no exaggeration to say that without it today's microprocessors could not be built.
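To give a small, made-up taste of what this means in practice: the algebraic rules let a designer simplify a circuit's logic before it is ever built. For example, the expression p*q + p*q' (where q' means not q) simplifies to just p, so the whole circuit collapses to a single wire carrying p. A brute-force check in Python confirms that the two expressions agree for every combination of inputs:

    # Check that p*q + p*q' equals p for all boolean inputs.
    # Here + is "or" (max), * is "and" (min), and q' (not q) is 1 - q.
    for p in (0, 1):
        for q in (0, 1):
            assert max(min(p, q), min(p, 1 - q)) == p
    print("p*q + p*q' simplifies to p")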

We will revisit boolean algebra in the module on digital logic.

Alan Turing

Alan Turing is the other major figure in the early theoretical development of computer science. In fact he is often acclaimed as the father of the computer as we know it. There are a number of interesting biographies of Turing on the Web, some of which also explain his work in a manner suitable for a general audience. Consequently, I will simply highlight those aspects of his work which hold the most significance for the development of computing.

Firstly, Turing fully realised, in a theoretical as well as a practical way (refer to his work on designing an actual machine while at the National Physical Laboratory), the notion that the algorithm could be treated in the same way as the data on which it operated. Thus, he expressed clearly the idea of a single machine which could be transformed with respect to the task it performs simply by specifying which algorithm it was to execute. This idea is essentially embodied in Turing's conceptualisation of a Universal Turing Machine.

Secondly, and perhaps because of his work on code breaking during the Second World War, Turing had a much clearer idea than others of his time of the computer as a machine for automatically manipulating symbols rather than as a machine for performing arithmetic calculations. Although some of the earliest practical demonstrations of computing, in the form of working machines, did take place in the USA, most of that work was stimulated by the need for complex numerical calculations, and the people who did it, such as von Neumann (whom I mention because he was considered to have supplied the theoretical vision behind the machines), seemed to think of the machines primarily as numerical calculating devices.

Thirdly, Turing was interested in the very idea of an algorithm, and in the limits of what tasks it was possible to solve algorithmically. Thus, he developed the model of computation known as a Turing machine. This is still the most general model of computation and it is conjectured in the Church-Turing Thesis that any problem that can be solved algorithmically can be solved by a Turing Machine. It is beyond the scope of this course to look in detail at the formal description of Turing machines - suffice it to say that the Turing Machine is very much a stripped down representation of what it is to compute something (manipulate symbols). This thesis is then quite remarkable, because it states that, stripped down to the bare bones as it were, the Turing Machine idea actually captures the notion of computability completely!
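To give you a feel for just how stripped down the model is, here is a minimal Turing machine simulator in Python (a sketch of my own, not a formal treatment). The example machine scans along the tape flipping 0s and 1s until it reaches a blank, and then halts:

    # A minimal Turing machine: a tape, a read/write head, a current state,
    # and a transition table mapping (state, symbol) -> (write, move, new state).
    def run(tape, transitions, state="start", blank="_", max_steps=1000):
        cells = dict(enumerate(tape))   # sparse tape indexed by position
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)
            write, move, state = transitions[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Example machine: scan right, flipping 0 <-> 1, and halt at the first blank.
    invert = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run("10110", invert))   # prints 01001_

The machine does nothing but read a symbol, write a symbol, move the head and change state - and yet, by the Church-Turing Thesis, that is enough to capture everything that can be computed.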

Fourthly, Turing is famous because of his work in conceiving a machine that could simulate intelligence. He can, therefore, also be regarded as the father of the field of Artificial Intelligence! His ideas are embodied in the widely known Turing Test for artificial intelligence, which states that if, in the course of conducting a conversation (say by typing at a keyboard), a person cannot tell whether the partner they are conversing with is a machine or another person, then that machine can be considered intelligent.

The Development of Electronic Computing Machines

Up until the early 1940s the large computers that were built were either mechanical or electromechanical, relying on relays for the switching mechanisms required. Relays are essentially switches operated by an electromagnet. These machines were large and slow. Examples include the machines built by the German inventors Konrad Zuse and Helmut Schreyer.

You can examine the basic chronology of events and related discussions in the Web documents here.

Other examples of relay-based computers were the machines built at Harvard University - the Harvard Mark I, for example - which demonstrated an architecture in which the storage of instructions (the algorithm) was in a separate location from the storage of data. This became known as the Harvard architecture, as opposed to the von Neumann architecture, which has been the dominant approach used in most modern microprocessors. We will look closely at the von Neumann architecture in the next module.

There is some dispute concerning who built the first electronic computer. Some identify John Atanasoff and Clifford Berry, who built a rather special purpose electronic machine (designed to solve systems of linear equations); others credit the engineers Presper Eckert and John Mauchly, whose ENIAC began operating in 1945, together with the mathematician John von Neumann, who joined the project as a consultant.

In many respects the ENIAC was outdated before it began operating (are we not frustrated by the same "problem" today, when we buy a computer that is almost immediately superseded by a newer, faster model?) because the team realised that a simpler, more powerful machine could be built according to the architectural design that has become known (unfairly to Eckert and Mauchly) as the von Neumann model.

The ENIAC was "programmed" by setting switches and plugging cables into sockets. One of the important features of the new machine, the EDVAC, was to be the use of a simple set of instructions which could be stored in the same memory as the data (haven't we heard of this before?). An algorithm, i.e. a program, would be specified by a sequence of instructions chosen from this basic set (goodbye to setting switches and plugging cables), and the program could be stored in the machine's memory. The machine was to use binary representation of data (and instructions) even if it communicated results in decimal (thus conversion between the two was required). Instructions were to be executed in sequence and one at a time. There was to be one memory with cells all the same size; and the programmer was to be able to write programs which could modify the instructions of the program, or create new instructions and insert them in the program. This new design for a computer is essentially the same as that used in most microprocessors today.
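To illustrate the stored-program idea, here is a toy machine of my own invention (not the actual EDVAC instruction set) in which instructions and data sit in the same memory, and the processor simply fetches, decodes and executes instructions one at a time:

    # A toy stored-program machine: instructions and data share one memory.
    # Each instruction is an (opcode, operand) pair; data values are plain numbers.
    def run(memory):
        acc = 0   # a single accumulator register
        pc = 0    # program counter: address of the next instruction
        while True:
            opcode, operand = memory[pc]
            pc += 1
            if opcode == "LOAD":      # acc <- memory[operand]
                acc = memory[operand]
            elif opcode == "ADD":     # acc <- acc + memory[operand]
                acc += memory[operand]
            elif opcode == "STORE":   # memory[operand] <- acc
                memory[operand] = acc
            elif opcode == "HALT":
                return memory

    # Program (addresses 0-3) and data (addresses 4-6) in one memory.
    memory = [
        ("LOAD", 4),     # 0: load the value at address 4
        ("ADD", 5),      # 1: add the value at address 5
        ("STORE", 6),    # 2: store the result at address 6
        ("HALT", None),  # 3: stop
        7,               # 4: data
        35,              # 5: data
        0,               # 6: the result goes here
    ]

    print(run(memory)[6])   # prints 42

Because the program is itself just data held in memory, a program could in principle overwrite or extend its own instructions - exactly the capability described above.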

The history of hardware developments since this time has basically been one of advances to more reliable technologies (vacuum tubes to transistors to integrated circuits), new types of memory and storage technologies, and advances in miniaturisation (integrated circuits - ICs; large scale integration - LSI; and very large scale integration - VLSI). These achievements are not to be diminished, but they have not been of the same fundamental nature as the events just described.

The development of new machine architectures to replace the von Neumann model is in fact an extremely active area of fundamental research in computer science. Two possibilities are parallel or massively parallel architectures and neural networks.

The Development of Computer Software

In the years since 1950 or so the development of computing has followed two parallel tracks. On the one hand there has been the miniaturisation of the machine and on the other has been the development of software - the operating system, programming languages, large scale applications and the human interface.

The modules on operating systems, programming languages, and software engineering will trace these developments; and since they are not so much historical as contemporary I will not discuss them here at all.