Mechanizing Calculation: Thinking Machines

The most radical and most revolutionary of all current
telecommunication technologies is the computer. The
commonly known history of the computer selectively
downplays the lateness of its development and the
comparative slowness of its diffusion.
The conceptualization of the computer was slow to
arrive because it had to work against the Western
philosophy established in the 17th century by Descartes,
which held that intelligence and thinking could only be
human characteristics. The very concept of a thinking
machine was therefore unthinkable. Yet it was this same
school of philosophers that was intent on enshrining
mathematics as the 'queen' of all sciences and the
clearest evidence available of the glory of God's creation.
Alan Turing studied at King's College, Cambridge,
where he attended lectures by the mathematician Max
Newman. Following a brilliant undergraduate career,
Turing was elected a fellow in 1935, at the age of 22.
In 1936 he published a paper, On Computable Numbers,
that dealt elegantly with the Cartesian obstruction.
The agenda Turing addressed was at the heart of
advanced pure mathematics. By the late 19th
century, in the wake of the creation of non-Euclidean
geometries (among other developments),
mathematicians were becoming, for the first time since
the Greeks, increasingly concerned about the
consistency of the axiomatic systems they used,
that is, their systems of given or absolute truths:
If you recall having to learn the algebraic axioms in
your high school math classes, you will recall that
mathematical systems are defined by a set of
axioms, from which theorems are deduced. A
mathematical theory might be defined as the set of
all propositions (theorems) that are true under the
given set of axioms. For example, a theory of
addition would contain theorems like "1 + 2 = 3" and
"2 + 5 = 3 + 4", and axioms like "a + b = b + a" and
"a + 0 = a".
As historians recognize today:
The creation of non-Euclidean geometry had forced the
realization that mathematics is man-made and describes
only approximately what happens in the world. The
description is remarkably successful, but it is not the
truth in the sense of representing the inherent structure
of the universe and therefore not necessarily
consistent… Every axiom system contains undefined
terms whose properties are specified only by the
axioms. The meaning of these terms is not fixed, even
though intuitively we have numbers or points or lines in mind.
It was against this background that Bertrand Russell
coined his famous epigram: "Pure mathematics is the
subject in which we do not know what we are talking
about or whether what we are saying is true."
The school of thought that slowed computing's arrival
was led by David Hilbert, the greatest mathematician of
his generation, who in the early decades of the 20th
century insisted on the primacy of the axiomatic
method. Well into the 1920s, Hilbert continued to
assert "that every mathematical problem can be
solved. We are all convinced of that…" Standing
against this confidence was the liar paradox,
classically expressed in the sentence "This
sentence is false": any time a mathematical
problem arose that could not be solved, the liar
paradox could be invoked. In 1931, Kurt Gödel
attacked this approach to mathematics in a paper
titled On Formally Undecidable Propositions of
Principia Mathematica and Related Systems.
In this paper Gödel demonstrated that it is impossible to
prove the consistency of a mathematical system from
within that system, exposing a fundamental limitation in
the power of the axiomatic method: any such set of
axioms is necessarily incomplete. There are true
arithmetical statements that cannot be derived from the
axioms. Gödel's incompleteness theorem highlighted a
number of other problems, primarily the question of
decidability. If there were now mathematical assertions
that could be neither proved nor disproved, how could
one determine effective procedures? Gödel undercut
both Hilbert's declaration that mathematical systems had
to be consistent and complete and his insistence upon
the discovery of effective procedures as a necessary
part of mathematics.
It was Turing, five years later, who dealt with this problem.
Turing had been struck by a phrase in a lecture of
Newman’s where Hilbert’s suggestion that any
mathematical problem must be solvable by a fixed and
definitive process was glossed by Newman as “a purely
mechanical process". Turing, in his paper, found a
problem that could not be so decided, or, in Turing's
language, computed. It involved Cantor's diagonal
argument, a construction whereby 'irrational
numbers' could be created. (Cantor was one of the 19th
century mathematicians whose work set the stage for the
crisis in axiomatic methods.) To dispose of the decidability
problem, Turing constructed a mental picture of a machine
and demonstrated that it could not compute certain
numbers. Therefore there were mathematical problems
which were not decidable; but Turing wrote “It is possible
to invent a machine which can be used to compute any
computable sequence.”
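Cantor's diagonal trick can be sketched concretely. The fragment below is a finite illustration in Python, not Turing's construction: given any list of binary sequences, it builds a sequence that differs from the n-th listed sequence in its n-th digit, and so cannot appear anywhere in the list.

```python
# Finite sketch of the diagonal argument: flip the n-th digit of the
# n-th listed sequence, producing a sequence absent from the list.

def diagonal(listed):
    """Return a sequence differing from listed[n] at position n."""
    return [1 - seq[n] for n, seq in enumerate(listed)]

listed = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
d = diagonal(listed)
# d disagrees with every row at the diagonal position...
assert all(d[n] != row[n] for n, row in enumerate(listed))
# ...so it cannot be any of the listed sequences.
assert d not in listed
```

Applied to an infinite enumeration, the same move produces a number the enumeration misses, which is how Turing exhibited sequences no machine could compute.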
Because of this metaphor of a machine, “On
Computable Numbers” had, beyond its immediate
significance in pure maths, broader implications.
Turing’s proof involved imagining a machine which
read, wrote, scanned and remembered binary
numbers inscribed on a tape. It might not be able to
compute the irrational numbers of Cantor’s trick but
it could, in theory, deal with a vast range of other
computations. Turing had conceived of a
tremendously powerful tool, which he christened a
universal engine. Of course, he had no intention of
building such a machine. When he wrote
“computer” he meant, as did all his contemporaries,
a person who performs computations as in the
following quote from Turing:
The behavior of the computer at any moment is
determined by the symbols which he is
observing, and his ‘state of mind’ at that moment.
We may suppose that there is a bound B to the
number of symbols or squares which the
computer can observe at any one moment. If he
wishes to observe more, he must use successive
observations…Let us imagine operations
performed by the computer to be split up into
‘simple operations’ which are so elementary that
it is not easy to imagine them further divided.
Every such operation consists of some change in
the physical system consisting of the computer
and his tape.
The human computer and his tape were to
become the machine computer and its program.
Of course, with mathematicians all over the world
attempting to solve decidability -- or, as
mathematicians refer to it, the
Entscheidungsproblem -- it was inevitable that
Turing would have competitors. In mid-April 1936
he presented his paper to Newman in Cambridge.
On April 15th, Alonzo Church of Princeton sent
away his demonstration of a different unsolvable
proposition for publication. In his Appendix,
Turing had to acknowledge Church’s similar
conclusions about the Entscheidungsproblem.
In October 1936, Emil Post, a mathematician at the
City College of New York, submitted
a paper to Church suggesting a mechanical
device -- a 'worker' -- for demonstrating Church's
proposition along the lines of Turing's universal
machine. Post acknowledged the power of
Turing's approach by coining the phrase 'Turing machine'.
These men stand in a line of mathematical
logicians traceable back to the self-taught 19th-
century English mathematician George Boole.
Boole is considered by many the man who
discovered pure mathematics, by showing that an
exact agreement exists between the operations of
logic and those of algebra.
He did so by reducing certain types of thought to a
series of on/off states -- 0s and 1s -- a binary
system of notation that dates back in its modern
mathematical form to Bacon. Boolean algebra is the
means by which a Turing machine can be said to
think, make judgements and learn. These men are
part of the ideation process that prepared the ground
of scientific competence which could be transformed
by technology into the computer.
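Boole's reduction is easy to see in miniature. The sketch below is an illustrative fragment, with the arithmetic definitions chosen for the example: it renders logical AND, OR and NOT as ordinary arithmetic on the two states 0 and 1, and checks that a classical law of logic holds.

```python
# Boole's insight in miniature: logical operations as arithmetic on 0 and 1.
AND = lambda x, y: x * y          # true only when both inputs are 1
OR  = lambda x, y: x + y - x * y  # true when at least one input is 1
NOT = lambda x: 1 - x             # flips the state

# De Morgan's law, NOT(x AND y) == (NOT x) OR (NOT y), holds for
# all four combinations of the two states:
for x in (0, 1):
    for y in (0, 1):
        assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))
```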
In the 1930s, there was much going on in
mathematics which would help to translate activities
popularly considered as uniquely human into forms
that would be ‘machine readable’.
In 1938 Claude Shannon, whom Turing was to meet
during a wartime visit to the United States, published
his MIT master's thesis, A Symbolic Analysis of Relay
and Switching Circuits, in which the insights of
Boolean algebra were applied to telephone exchange
circuit analysis. This produced a mathematization of
information which not only had immediate practical
applications for his future employer, Bell Labs, but
also established another part of modern computer
science's foundation. Information Theory, as
Shannon's subsequent work came to be called, defines
information as the measurable content of signals,
abstracted from any specific human meaning. It
concerns not the question "what sort of information?"
but rather "how much information?"
In a telephone exchange, design requirements
dictate that there be less concern about the content
of messages than with the accuracy with which the
system will relay them. Information becomes reified,
quantifiable so that it can be treated as a measure of
the probability or uncertainty of a symbol or set of
symbols. By how much does the transmitter’s
message reduce uncertainty in the receiver? By that
much can the informational content of the message
be measured, and the capacity of the channel of
communication determined. A binary code, made
up of 0 and 1 -- off and on -- is made up of binary
digits. Each binary digit that carries part of a message
is a bit; the more complicated the message,
the more bits it requires. Each transmission of a bit
reduces our uncertainty.
The "bound B to the number of symbols or squares
which the computer can observe at any moment", of
which Turing wrote, can be expressed as the capacity
of the computer, human or mechanical, to address a
discrete number of bits.
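The measure can be made concrete. In this sketch (standard information-theory arithmetic, not Shannon's original notation), the information carried by a symbol of probability p is -log2(p) bits: the rarer the symbol, the more uncertainty its arrival removes.

```python
import math

def bits(p):
    """Information, in bits, gained when an event of probability p occurs."""
    return -math.log2(p)

# A fair coin flip resolves one binary alternative: 1 bit.
assert bits(1/2) == 1.0
# Choosing one of 8 equally likely symbols takes 3 bits (2**3 = 8).
assert bits(1/8) == 3.0
# A rarer symbol reduces more uncertainty, so it carries more bits.
assert bits(1/100) > bits(1/10)
```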
The quantification of information in Information Theory
parallels, and perhaps determines, the reification of
information which is so crucial a part of the
"Information Revolution"; that is to say, the rhetorical
thrust which has the production of information in so-
called "post-industrial societies" substituting for the
production of physical goods depends upon such
reification. It allows people to be comfortable with the
somewhat curious notion that we can survive by
making information instead of producing things.
The implication of all this work in the 1930s at the outer
edges of advanced mathematics was not immediately
apparent even to the mathematical community. Pure
mathematical logic was so pure that it seemed to bear
on few human activities. But once Turing was
understood, many became rich. The importance
of Turing’s On Computable Numbers was that he
moved the computer from number-cruncher to symbol
manipulator. Turing threw the first plank across the
Cartesian chasm between human being and machine.
John von Neumann, a mentor of Turing’s, was one of
the fathers of the American computer and a
mathematician of enormous range -- from game theory
to nuclear devices -- who dominated the field in the
first decade after the war.
Von Neumann wrote, in First Draft of a Report on the
EDVAC, the document that contains the original
master-plan for the modern computer: “Every digital
computing device contains certain relay-like elements,
with discrete equilibria… It is worth mentioning, that
the neurons of the higher animals are definitely
elements in the above sense. They have all-or-none
character, that is two states: Quiescent [or inactive]
and excited”. The seductiveness of the analogy
between human neural activity and digital symbol
manipulators has proved irresistible. Drawing such
parallels is not new. It has been a characteristic of
Western thought throughout the modern period,
beginning with La Mettrie's L'Homme Machine in 1747.
Seeing humanity in the image of whichever machine
most dominates contemporary life is what might be
called mechanemorphism. With La Mettrie it was the
clock. The combustion engine followed. Freud
thought electromagnets were a good metaphor for the
brain. Today this tendency finds its most extreme
expression with the computer, especially among the
proponents of ‘strong’ artificial intelligence.
Mechanemorphism has conditioned not only our
overall attitude to computers but also the very
terminology which has arisen around them. For
example, what crucially distinguishes the computer
from the calculator that preceded it is its capacity to
store data and instructions -- 'memory'. And the
codes that control its overall operation are its
'language'. The terminology itself facilitates
contemporary mechanemorphism.
There was also an element in the ground of scientific
competence which has to do with the basic
architecture of what a Turing machine might look
like. When Turing thought to call his metaphor a
universal engine he was honoring Charles Babbage.
By 1833, the English mathematician Charles Babbage
was abandoning work on a complex calculator
which had occupied him for the previous decade.
Instead he now envisaged a device which could
tackle, mechanically, any mathematical problem.
This universal analytic engine was never built, but
the design embodied a series of concepts which
were to be realized a century or more later.
The machine was to be programmed by the use of two
sets of punched cards, one set containing instructions
and the other the data to be processed. The input data
and any intermediate results were to be held in a 'store',
and the actual computation was to be achieved in a
'mill' by conditional branching operations, basic logical
steps, hopping backwards and forwards between
the mill and the store. It was to print out its results
automatically. This is so close to the modern computer
-- the mill as the CPU, the operational cards as the ROM,
the store as the RAM, etc. -- that Babbage has been
hailed as its father. However, the possibility that the
analytic engine could alter its own program during its
computations eluded Babbage; his thought remained
one crucial step away from the computer proper.
Babbage's proposed device was a calculator of a
most advanced type, a number cruncher, not a symbol
manipulator.
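The mill-and-store plan can be sketched as a toy interpreter. Everything below -- the card format, the register names -- is hypothetical, invented for illustration; only the division into instruction cards, a store, and conditional branching between them comes from Babbage's design.

```python
# Hypothetical sketch of the analytic engine's plan: instruction cards
# drive a mill (the arithmetic unit) that reads and writes a store,
# with a conditional jump hopping backwards between cards.

def run(cards, store):
    pc = 0  # index of the current instruction card
    while pc < len(cards):
        op, *args = cards[pc]
        if op == "add":
            a, b, dest = args
            store[dest] = store[a] + store[b]   # the mill at work
        elif op == "jump_if_less":
            a, b, target = args
            if store[a] < store[b]:             # conditional branching
                pc = target
                continue
        pc += 1
    return store

# Sum the integers 1..5 into V2, using V0 as the counter.
cards = [
    ("add", "V0", "one", "V0"),         # counter += 1
    ("add", "V2", "V0", "V2"),          # total += counter
    ("jump_if_less", "V0", "five", 0),  # loop while counter < 5
]
store = {"V0": 0, "V2": 0, "one": 1, "five": 5}
assert run(cards, store)["V2"] == 15
```

The jump backwards through the cards is exactly the step that separates such a design from a mere sequence of calculations.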
Babbage’s interest in automatic calculation sprang
from the common root -- boredom. Like Leibniz,
Pascal and Napier before him, and Mauchly, Zuse and
Aiken, three major computer pioneers a century after him,
him, Babbage disliked the computational aspect of
mathematical work. In one story he reportedly
muttered to a colleague, the astronomer Herschel, “I
wish to God these calculations had been executed by
steam.” Herschel is said to have replied: “It is quite
possible.” Unfortunately Babbage died before he
completed work on his advanced calculator.
It was left to Georg Scheutz, a Swedish lawyer and
newspaper publisher, to build Babbage's difference
engine. Scheutz's machine was based on an account
he had read of Babbage's work in 1834. When finished,
the Scheutz machine, which had four differences and
fourteen places of figures, punched results onto sheet
lead or papier-mâché, from which printing stereotypes
could be made. The machine was operable by 1844
and refined to the point where duplicates were possible
by 1855. The Scheutz engine was built in advance of
any real supervening necessity. But by the 1860s and
1870s the need had arisen through the train, the
modern corporation, the modern office and all the
things that helped it run -- the phone, mechanical
calculator (1875), modern shift-key typewriter...
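The difference engine's principle, tabulating a polynomial by repeated addition alone, can be illustrated briefly (a minimal sketch, not Scheutz's mechanism). For f(x) = x², the first differences run 1, 3, 5, 7… and the second difference is a constant 2, so each new value needs only additions.

```python
# A difference engine tabulates a polynomial by repeated addition alone:
# each step adds every difference column into the column above it.

def tabulate(initial, steps):
    """initial = [f(0), first difference, second difference, ...]"""
    cols = list(initial)
    values = []
    for _ in range(steps):
        values.append(cols[0])
        for i in range(len(cols) - 1):
            cols[i] += cols[i + 1]  # carry each difference upwards
    return values

# f(0) = 0, first difference f(1) - f(0) = 1, constant second difference 2.
assert tabulate([0, 1, 2], 6) == [n * n for n in range(6)]
```

No multiplication or division is ever needed, which is what made the scheme mechanizable with nothing but adding wheels.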
Commercial desktop calculators soon went into
production and the office equipment
industry was born. After the first key-driven
calculator was demonstrated, an elegant roll-paper
printing mechanism was added. Other machines
were more desk-like than desktop. The Millionaire,
for example, had built-in multiplication tables and
was manufactured continuously from 1899 until
well into the twentieth century.
The new motive power, electricity, was also used.
In 1876, at the very same Centennial Exposition in
Philadelphia where Bell made such a stir, an
engineer, George Grant, showed an electrically
driven piano-size difference engine.
For the 1890 American census, Herman Hollerith
designed a device of even greater significance both
in its use of electricity and for the fortune it made
his company, eventually to become IBM. The
decennial census was proving ever more difficult to
complete. By 1887, the Census Office (it became a
permanent Bureau in 1902) realized that it would still be
processing the 1880 data even as the 1890 returns
were being collected. In a public competition held
to find a solution to this problem, Hollerith proposed
an electro-mechanical apparatus. He resurrected
the punched cards that Babbage had intended as
the input for the analytic engine.
Hollerith’s cards contained the census data as a
series of punched holes. The operator placed the
card into a large reading device and pulled a lever.
This moved a series of pins against the card. Where
the pins encountered a hole, they passed through the
card and into a bowl of mercury, thereby making an
electrical circuit which activated the hands of a dial.
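The tabulating principle is simple enough to model. In the sketch below (the category names are invented for illustration), each card is the set of holes punched in it, and each closed circuit advances one dial by a single step.

```python
# Model of Hollerith tabulation: a card is the set of hole positions
# punched in it; each pin that finds a hole closes a circuit and
# steps the dial for that category.
from collections import Counter

def tally(cards):
    dials = Counter()
    for card in cards:
        for hole in card:   # every closed circuit advances one dial
            dials[hole] += 1
    return dials

cards = [
    {"male", "farmer"},
    {"female", "teacher"},
    {"male", "teacher"},
]
dials = tally(cards)
assert dials["male"] == 2 and dials["teacher"] == 2 and dials["farmer"] == 1
```

The machine performed no arithmetic beyond counting; its speed came from reading a whole card's worth of categories in one pull of the lever.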
In the test which won him the contract, Hollerith’s
machine was about ten times faster than its rival. Six
weeks after the census, the Office announced that
the population stood at 62,622,250. Hollerith
declared himself to be “the world’s first statistical
engineer”. The census was completed in a quarter of
the time its predecessor had taken and the
Tabulating Machine Company was founded.
Hollerith’s enterprise became part of the
Computing Tabulating and Recording Company
(C-T-R), which in turn became IBM in 1924. By the
1930s and into the '40s, business's requirements for
automated machinery were completely satisfied.
It can be argued that these requirements were so
satisfied that there was no need for more
advanced calculators, calculators that could alter
their programs and thus be classed as
computers. The possibility of building an electro-
mechanical digital calculator along the general
lines proposed by Babbage had been first
outlined by the Spanish scientist Leonardo Torres
y Quevedo in a work published in 1915:
An analytic machine, such as I am alluding to here,
should execute any calculation, no matter how
complicated, without help from anyone. It will be
given a formula and one or more specific values for
the independent variables, and it should then
calculate and record all the values of the
corresponding functions defined explicitly or
implicitly by the formula…

Torres y Quevedo suggested the way to do this was
with switches, "a brush which moves over a line of
contacts and makes contact with each of them
successively". In 1920, Torres y Quevedo built a
prototype to illustrate the feasibility of his suggestions,
the first machine to have a typewriter as its input/output
device.
It was some years before Bell Labs began to think
along Torres y Quevedo's line of enquiry, and not
until the late 1930s, over 15 years later, did such
devices appear in the metal. The Model
K (for kitchen table) Bell ‘computer’ was made of
old bits of telephone exchange mounted on a
breadboard, one weekend in 1937, by George
Stibitz. Stibitz, a staff mathematician at the Labs,
was convinced he could wire a simple logic circuit
to produce binary additions because the ordinary
telephone relay switch was a binary - an on/off -
device, and over the weekend that is exactly what
he did. There was little immediate enthusiasm for
the breadboard calculator but a pilot project was
funded and the first complex calculator was built.
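Stibitz's weekend insight, that on/off relays suffice for binary addition, can be sketched in a few lines (a modern illustration, not a reconstruction of his circuit). A full adder needs only the logical operations a relay network can realize: XOR for the sum bit, AND and OR for the carry.

```python
# Binary addition from relay-style logic alone: a full adder is two XORs
# for the sum bit and AND/OR for the carry; chaining adders adds numbers.

def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add(x, y, width=8):
    carry, total = 0, 0
    for i in range(width):                      # one adder per bit position
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= bit << i
    return total

assert add(11, 14) == 25
assert all(add(x, y) == x + y for x in range(16) for y in range(16))
```

Each of the three logical operations maps directly onto a small arrangement of on/off switches, which is why telephone relays were ready-made computing components.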
The internal supervening necessity was the endless
calculations necessary in the developing theory of
filters and transmission lines. The complex calculator
(Model 1) was finished by 1940 and did the job the
Labs required. Apart from building the first binary
machine, at a meeting of the American Mathematical
Society, Stibitz also performed the first remote control
calculation by hooking up the teletype input keyboard
in a lecture hall at Dartmouth and communicating via
the telephone wire with the Model 1 in New York. This
was the earliest application of telegraph technology to
computing, a line of development that would
eventually lead to the Internet.
Stibitz now looked to make more complex devices, but
the cost, $20,000, kept the Lab administration from
making further computers for several years.
When the USA joined the Second World War in 1941,
Stibitz's expertise did find a proper outlet, in building a
series of specialized electro-mechanical complex
calculators. The Model 2 had a changeable paper-tape
program and was self-checking. It was designed
specifically to test anti-aircraft detectors. The Model 3,
designed for anti-aircraft ballistic calculations, made,
like its predecessor, for unattended operation, rang a
bell in the staff sergeant’s quarters if for any reason the
computation was halted. Stibitz's experience during
these years parallels that of Konrad Zuse, who was also
drawn to automata because of boredom: "In 1934 I was a
student in civil engineering in Berlin. Berlin is a nice
town and there were many opportunities for a student to
spend his time in an agreeable manner, for instance with
the nice girls. But instead we had to perform big and
awful calculations."
In 1936, Zuse began building electro-mechanical
binary calculators out of relays, machines which
were to occupy most of the living room of his
parents’ apartment. The Z1 and Z2 were test
models. He had failed to interest the German office
machine industry in his project except for some
partial support from one manufacturer. In 1939,
work on the Z2 was halted as Zuse was inducted
into the Wehrmacht. As Zuse explained to his
American interrogator in 1946, the German
Aerodynamics Research Institute was interested in
his Z2 and so he was released to continue this work.
By 1943, with the Z3 complete, he had begun building
the special-purpose S1 to increase missile production
by speeding calculation.
[Figures: the Zuse Z3 and Z4]
The S1 worked so well that in the latter part of
1943 Zuse, supported by the Air Ministry,
established his own small firm. The Z3 was, like
the contemporary models in America,
programmable, but by holes punched in 35mm
film rather than tape. The Z3 was a floating-
point binary machine with a sixty-four-word store
and had been the first of its general-purpose class
to work, by December 1941. At the war's end Zuse
was working on a Z4. As the Allies closed in on
Berlin he was given a truck on which he loaded
the Z4 and headed south. By the surrender he
was holed up in the village of Hinterstein in the
Bavarian Alps close to the Austrian border, the
Z4 hidden in a cellar.
Howard Aiken, the builder of the third major line of
electromagnetic machines, came to computers, like the
others, while slogging through calculations he needed
for his thesis. By 1937 he had a proposal ready for
mechanizing the process. He showed it to the military
but he also took it to a calculating machine company
which decided the device was impractical, against the
opinion of its chief engineer. The engineer then directed
Aiken to IBM. In 1939, Aiken was contracted to build
his electromagnetic machine at IBM’s Endicott Lab, the
money to come from the US Navy and a million-dollar
gift from Thomas Watson Sr, the then president of IBM.
Aiken was given the reserve rank of naval commander
and his staff was navy. IBM furnished the space and
equipment -- mainly, as with Stibitz and Zuse, relay
switches -- but it also provided a team of engineers.
Within four years the machine, the Mark I, was working.
It was transferred to Harvard as a present from IBM.
Watson insisted it be clothed in gleaming,
aerodynamically molded steel, an IBM version of 1940s
high tech. Aiken thought it should be left naked in the
interest of science. IBM called the machine the ASCC
(Automatic Sequence Controlled Calculator); Aiken
called it the Harvard Mark I. IBM ignored the Mark I as a
prototype for a business machine yet nevertheless set
about building a bigger and better one and the world got
its first glimpse of a “robot brain”.
And yet Stibitz, Zuse and Aiken invented nothing.
Their machines were built out of readily available
parts and this was to be the case with the all-
electrical calculators which came next. These
accepted prototypes speak directly to the lack of a
real supervening necessity. Occasional tasks, like
Bell Labs' needs, might produce a machine.
Inspired amateurs, like Zuse, might do so too. And
in certain corners of the military needs were
perceived and met, but basically the world worked
well enough with these various prototypes. A
universal analytic engine was simply not needed.
It was to take two wars, one hot and one cold, to
change that perception.
