History of Computing

                                         The Abacus



The abacus, also called a counting frame, is a calculating tool used
primarily in parts of Asia for performing arithmetic processes. Today,
abacuses are often constructed as a bamboo frame with beads sliding on
wires, but originally they were beans or stones moved in grooves in sand
or on tablets of wood, stone, or metal. The abacus was in use centuries
before the adoption of the written modern numeral system and is still
widely used by merchants, traders and clerks in Asia, Africa, and
elsewhere.

The user of an abacus who slides the beads of the abacus by hand is
called an abacist.
            John Napier: Napier's Bones


Napier's bones is an abacus created by John Napier for the calculation of
products and quotients of numbers. It was based on Arab mathematics and
the lattice multiplication described by Fibonacci in his Liber Abaci. The
method is also called rabdology (from Greek ῥάβδος [r(h)abdos], "rod",
and -λογία [logia], "study"). Napier published his version of the rods in
a work printed in Edinburgh, Scotland, at the end of 1617, entitled
Rabdologiæ. Using the multiplication tables embedded in the rods,
multiplication can be reduced to addition operations and division to
subtractions. More advanced use of the rods can even extract square
roots. Note that Napier's bones are not the same as logarithms, with
which Napier's name is also associated.

The abacus consists of a board with a rim; the user places Napier's rods in
the rim to conduct multiplication or division. The board's left edge is
divided into 9 squares, holding the numbers 1 to 9. The rods are strips
of wood, metal or heavy cardboard. Napier's bones are three-dimensional,
square in cross-section, with four different rods
engraved on each one. A set of such bones might be enclosed in a
convenient carrying case.

A rod's surface comprises 9 squares, and each square, except for the top
one, comprises two halves divided by a diagonal line. The first square of
each rod holds a single digit, and the other squares hold this number's
double, triple, quadruple and so on until the last square contains nine
times the number in the top square. The digits of each product are written
one to each side of the diagonal; numbers less than 10 occupy the lower
triangle, with a zero in the top half.

A set consists of 10 rods corresponding to digits 0 to 9. The rod 0,
although it may look unnecessary, is obviously still needed for multipliers
or multiplicands having 0 in them.
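
The procedure can be made concrete with a short sketch. The Python below
is illustrative only (the names napier_multiply and rod_row are invented
here): it reads row k of the rods and adds along the diagonals exactly as
described above, so multiplication by one digit reduces to additions; a
multi-digit multiplier would repeat this and add shifted partial products.

```python
def rod_row(digit, row):
    """The (tens, units) cell of the rod for `digit` at row `row` (1-9)."""
    return divmod(digit * row, 10)

def napier_multiply(multiplicand, k):
    """Multiply `multiplicand` by a single digit k (1-9) by reading row k
    of the rods and adding along the diagonals, right to left."""
    cells = [rod_row(int(c), k) for c in str(multiplicand)]
    tens = [t for t, _ in cells]
    units = [u for _, u in cells]
    # Rightmost diagonal: the last units digit alone. Middle diagonals:
    # a units digit plus the tens digit of the rod to its right.
    # Leftmost diagonal: the first tens digit alone.
    diagonals = [units[-1]]
    for i in range(len(cells) - 2, -1, -1):
        diagonals.append(units[i] + tens[i + 1])
    diagonals.append(tens[0])
    result, carry = [], 0
    for s in diagonals:                  # only additions, with carries
        carry, d = divmod(s + carry, 10)
        result.append(str(d))
    return int("".join(reversed(result)))

assert napier_multiply(425, 6) == 425 * 6    # 2550
```
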
                  Blaise Pascal: The Pascaline


Blaise Pascal invented an early mechanical calculator, called
alternatively the Pascaline or the Arithmetique, in 1645; the first
mechanical calculator had been that of Wilhelm Schickard in 1623.

Pascal began work on his calculator in 1642, when he was only 19 years
old. He had been assisting his father, who worked as a tax commissioner,
and sought to produce a device which could reduce some of his workload.
Pascal received a Royal Privilege in 1649 that granted him exclusive rights
to make and sell calculating machines in France. By 1652 Pascal claimed
to have produced some fifty prototypes and sold just over a dozen
machines, but the cost and complexity of the Pascaline—combined with
the fact that it could only add and subtract, and the latter with difficulty—
was a barrier to further sales, and production ceased in that year. By that
time Pascal had moved on to other pursuits, initially the study of
atmospheric pressure, and later philosophy.

[Figure: a Pascaline made for French currency; the least significant
denominations, sols and deniers, are on the right.]

Pascalines came in both decimal and non-decimal varieties, both of which
exist in museums today. The contemporary French currency system was
similar to the Imperial pounds ("livres"), shillings ("sols") and pence
("deniers") in use in Britain until the 1970s.

In 1799 France changed to a metric system, by which time Pascal's basic
design had inspired other craftsmen, although with a similar lack of
commercial success. Child prodigy Gottfried Wilhelm Leibniz devised a
competing design, the Stepped Reckoner, in 1672 which could perform
addition, subtraction, multiplication and division; Leibniz struggled for
forty years to perfect his design and produce sufficiently reliable
machines. Calculating machines did not become commercially viable until
the early 19th century, when Charles Xavier Thomas de Colmar's
Arithmometer, itself using the key breakthrough of Leibniz's design, was
commercially successful.[1]

The initial prototype of the Pascaline had only a few dials, whilst later
production variants had eight dials, the latter being able to deal with
numbers up to 9,999,999.

[Figure: view through the back of the calculator, showing the wheels.]

The calculator had spoked metal wheel dials, with the digits 0 through 9
displayed around the circumference of each wheel. To input a digit, the
user placed a stylus in the corresponding space between the spokes, and
turned the dial until a metal stop at the bottom was reached, similar to
the way a rotary telephone dial is used. This would display the number in
the boxes at the top of the calculator. Then, one would simply redial the
second number to be added, causing the sum of both numbers to appear
in boxes at the top. Since the gears of the calculator only rotated in one
direction, negative numbers could not be directly summed. To subtract
one number from another, the method of nines' complements was used. To
help the user, when a number was entered, its nines' complement appeared
in a box above the box containing the original value entered.
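
The arithmetic behind the nines' complement method, though not the
mechanism, fits in a few lines of Python. This is a hedged sketch under
assumed conventions (eight wheels, matching the eight-dial production
variant mentioned above; invented function names): subtraction becomes a
single addition plus an end-around carry, which suits gears that can only
turn one way.

```python
def nines_complement(x, wheels=8):
    """Replace each digit d of x with 9 - d on a fixed number of wheels."""
    return (10 ** wheels - 1) - x

def pascaline_subtract(a, b, wheels=8):
    """Compute a - b using only addition: add the nines' complement of b
    to a, then fold any overflow back in (the 'end-around carry')."""
    s = a + nines_complement(b, wheels)
    if s >= 10 ** wheels:                # carry out: result is positive
        return (s % 10 ** wheels) + 1    # end-around carry
    return -nines_complement(s, wheels)  # no carry: result is negative

assert pascaline_subtract(52, 7) == 45
assert pascaline_subtract(7, 52) == -45
```
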
             Charles Babbage: The Analytical Engine


The analytical engine, an important step in the history of computers, was
the design of a mechanical general-purpose computer by the British
mathematician Charles Babbage. It was first described in 1837, but
Babbage continued to work on the design until his death in 1871. Because
of financial, political, and legal issues, the engine was never built. In its
logical design the machine was essentially modern, anticipating the first
completed general-purpose computers by about 100 years.

Some believe that the technological limitations of the time were a further
obstacle to the construction of the machine; others believe that the
machine could have been built successfully with the technology of the era
if funding and political support had been stronger. Charles Babbage was
notoriously hard to work with and alienated a great number of people who
had at first supported him, including his engineer Joseph Clement.
                         Konrad Zuse: Z3


Konrad Zuse's Z3 was the world's first working programmable, fully
automatic binary computing machine; its attributes, with the addition of
conditional branching, have often been used as the criteria for defining
a computer. The Z3 was built with 2,000 relays. (A request for funding
for an electronic successor was denied as "strategically unimportant".)
It had a clock frequency of ~5–10 Hz and a word length of 22 bits.
Calculations on the computer were performed in full binary floating-point
arithmetic. The Z3 read its programs off punched film.
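
The 22-bit word is commonly described as 1 sign bit, a 7-bit exponent,
and a 14-bit mantissa. The sketch below assumes that split plus the
modern implicit-leading-1 convention, purely to illustrate what binary
floating point in a 22-bit word implies for precision; it is not a model
of Zuse's actual hardware.

```python
import math

SIGN_BITS, EXP_BITS, SIG_BITS = 1, 7, 14    # assumed 22-bit split

def encode22(x):
    """Encode a nonzero number as (sign, exponent, 14-bit fraction)."""
    assert x != 0
    sign = 0 if x > 0 else 1
    m, e = math.frexp(abs(x))                # abs(x) = m * 2**e, 0.5 <= m < 1
    frac = round((2 * m - 1) * (1 << SIG_BITS))  # bits after the implicit 1
    return sign, e - 1, frac

def decode22(sign, exp, frac):
    return (-1) ** sign * (1 + frac / (1 << SIG_BITS)) * 2.0 ** exp

s, e, f = encode22(6.25)
assert decode22(s, e, f) == 6.25             # exactly representable
s, e, f = encode22(0.1)
print(decode22(s, e, f))                     # ~0.09999847: 14 bits of precision
```
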

The machine was completed in 1941. On 12 May 1941, it was successfully
presented to an audience of scientists (e.g. Prof. Alfred Teichmann, Prof.
C. Schmieden) of the Deutsche Versuchsanstalt für Luftfahrt ("German
Laboratory for Aviation"), in Berlin. The original Z3 was destroyed in 1943
during an Allied bombardment of Berlin. A fully functioning replica was
built in the 1960s by the originator's company Zuse KG and is on
permanent display in the Deutsches Museum. The Z3 was used by the
German Aircraft Research Institute to perform statistical analyses of wing
flutter in aircraft design. Dr. Joseph Jennissen, a member of the Reich
Air Ministry, acted as the federal supervisor.
                                 ENIAC


ENIAC (pronounced [ˈɛniæk]), short for Electronic Numerical Integrator
And Computer,[1][2] was the first general-purpose electronic computer. It
was a Turing-complete, digital computer capable of being reprogrammed
to solve a full range of computing problems.[3] ENIAC was designed to
calculate artillery firing tables for the U.S. Army's Ballistic Research
Laboratory, but its first use was in calculations for the hydrogen
bomb.[4][5]

When ENIAC was announced in 1946 it was heralded in the press as a
"Giant Brain". It boasted speeds one thousand times faster than electro-
mechanical machines, a leap in computing power that no single machine
has since matched. This mathematical power, coupled with general-
purpose programmability, excited scientists and industrialists. The
inventors promoted the spread of these new ideas by teaching a series of
lectures on computer architecture.

The ENIAC's design and construction were financed by the United States
Army during World War II. The construction contract was signed on June
5, 1943, and work on the computer was begun in secret by the University
of Pennsylvania's Moore School of Electrical Engineering starting the
following month under the code name "Project PX". The completed
machine was unveiled on February 14, 1946 at the University of
Pennsylvania, having cost almost $500,000. It was formally accepted by
the U.S. Army Ordnance Corps in July 1946. ENIAC was shut down on
November 9, 1946 for a refurbishment and a memory upgrade, and was
transferred to Aberdeen Proving Ground, Maryland in 1947. There, on July
29, 1947, it was turned on and was in continuous operation until 11:45
p.m. on October 2, 1955.

ENIAC was conceived and designed by John Mauchly and J. Presper Eckert
of the University of Pennsylvania.[6] The team of design engineers
assisting the development included Robert F. Shaw (function tables),
Chuan Chu (divider/square-rooter), Kite Sharpless (master programmer),
Arthur Burks (multiplier), Harry Huskey (reader/printer), Jack Davis
(accumulators) and Iredell Eachus Jr.
            The von Neumann Architecture




[Figure: design of the von Neumann architecture.]

The von Neumann architecture is a design model for a stored-program
digital computer that uses a processing unit and a single separate storage
structure to hold both instructions and data. It is named after the
mathematician and early computer scientist John von Neumann. Such
computers implement a universal Turing machine and have a sequential
architecture.

A stored-program digital computer is one that keeps its programmed
instructions, as well as its data, in read-write, random-access memory
(RAM). Stored-program computers were an advancement over the
program-controlled computers of the 1940s, such as the Colossus and the
ENIAC, which were programmed by setting switches and inserting patch
leads to route data and to control signals between various functional units.
In the vast majority of modern computers, the same memory is used for
both data and program instructions.
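
A minimal sketch of the stored-program idea, with a toy instruction set
invented purely for illustration: one flat memory array holds both the
program and its data, and a fetch-decode-execute loop reads instructions
from the same store it reads operands from.

```python
LOAD, ADD, STORE, HALT = range(4)            # toy instruction set (illustrative)

def run(memory):
    """A toy von Neumann machine: program and data share one memory."""
    acc, pc = 0, 0                           # accumulator and program counter
    while True:
        op, arg = memory[pc], memory[pc + 1] # fetch from the same memory as data
        pc += 2
        if op == LOAD:
            acc = memory[arg]
        elif op == ADD:
            acc += memory[arg]
        elif op == STORE:
            memory[arg] = acc
        elif op == HALT:
            return memory

# Program in cells 0-7, data in cells 8-10: computes mem[10] = mem[8] + mem[9].
mem = [LOAD, 8, ADD, 9, STORE, 10, HALT, 0,
       2, 3, 0]
assert run(mem)[10] == 5
```

Because the instructions sit in the same writable memory as the data, a
program could even overwrite its own instruction cells with STORE; a
Harvard machine, with its separate program store, rules this out by
construction.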

The terms "von Neumann architecture" and "stored-program computer"
are generally used interchangeably, and that usage is followed in this
article. In contrast, the Harvard architecture may still store a program
in a modifiable form, but it does not use the same physical storage or
format for general data.
                              UNIVAC


The UNIVAC I (UNIVersal Automatic Computer I) was the first commercial
computer produced in the United States. It was designed principally by J.
Presper Eckert and John Mauchly, the inventors of the ENIAC. Design work
was begun by their company, Eckert-Mauchly Computer Corporation, and
was completed after the company had been acquired by Remington Rand.
(In the years before successor models of the UNIVAC I appeared, the
machine was simply known as "the UNIVAC".)

The first UNIVAC was delivered to the United States Census Bureau on
March 31, 1951, and was dedicated on June 14 that year.[1] The fifth
machine (built for the U.S. Atomic Energy Commission) was used by CBS
to predict the result of the 1952 presidential election. With a sample of
just 1% of the voting population it correctly predicted that Dwight
Eisenhower would win. The UNIVAC I computers were built by Remington
Rand's UNIVAC-division (successor of the Eckert-Mauchly Computer
Corporation, bought by Rand in 1950).
                           The First Transistor


The first patent[1] for the field-effect transistor principle was filed in
Canada by Austrian-Hungarian physicist Julius Edgar Lilienfeld on 22
October 1925, but Lilienfeld did not publish any research articles about his
devices. In 1934 German physicist Dr. Oskar Heil patented another field-
effect transistor.

On 17 November 1947 John Bardeen and Walter Brattain, at AT&T Bell
Labs, observed that when electrical contacts were applied to a crystal of
germanium, the output power was larger than the input. William Shockley
saw the potential in this and, over the next few months, worked to
greatly expand the knowledge of semiconductors; he could be described as
the father of the transistor. The term was coined by John R. Pierce.[2]
According to physicist/historian Robert Arns, legal papers from the Bell
Labs patent show that William Shockley and Gerald Pearson had built
operational versions from Lilienfeld's patents, yet they never referenced
this work in any of their later research papers or historical articles.[3]

The first silicon transistor was produced by Texas Instruments in 1954.[4]
This was the work of Gordon Teal, an expert in growing crystals of high
purity, who had previously worked at Bell Labs.[5] The first working MOS
transistor was built by Kahng and Atalla at Bell Labs in 1960.
     Texas Instruments: The First Integrated Circuit


The idea of an integrated circuit was conceived by a radar scientist
working for the Royal Radar Establishment of the British Ministry of
Defence, Geoffrey W.A. Dummer (1909-2002), who published it at the
Symposium on Progress in Quality Electronic Components in Washington,
D.C. on May 7, 1952.[1] He publicly promoted his ideas at many symposia.

Dummer unsuccessfully attempted to build such a circuit in 1956.

The integrated circuit can be credited as being invented by both Jack Kilby
of Texas Instruments[2] and Robert Noyce of Fairchild Semiconductor [3]
working independently of each other. Kilby recorded his initial ideas
concerning the integrated circuit in July 1958 and successfully
demonstrated the first working integrated circuit on September 12,
1958.[2] In his patent application of February 6, 1959, Kilby described his
new device as "a body of semiconductor material ... wherein all the
components of the electronic circuit are completely integrated."[4]

Kilby won the 2000 Nobel Prize in Physics for his part of the invention of
the integrated circuit.[5] Robert Noyce also came up with his own idea of
an integrated circuit, half a year later than Kilby. Noyce's chip had solved
many practical problems that the microchip developed by Kilby had not.
Noyce's chip, made at Fairchild, was made of silicon, whereas Kilby's chip
was made of germanium.

Early developments of the integrated circuit go back to 1949, when the
German engineer Werner Jacobi (Siemens AG) filed a patent for an
integrated-circuit-like semiconductor amplifying device [6] showing five
transistors on a common substrate arranged in a 2-stage amplifier
arrangement. Jacobi discloses small and cheap hearing aids as typical
industrial applications of his patent. A commercial use of his patent has
not been reported.

A precursor idea to the IC was to create small ceramic squares (wafers),
each one containing a single miniaturized component. Components could
then be integrated and wired into a bidimensional or tridimensional
compact grid. This idea, which looked very promising in 1957, was
proposed to the US Army by Jack Kilby, and led to the short-lived
Micromodule Program (similar to 1951's Project Tinkertoy).[7] However, as
the project was gaining momentum, Kilby came up with a new,
revolutionary design: the IC.

Noyce also credited Kurt Lehovec of Sprague Electric for the principle
of p-n junction isolation, caused by the action of a biased p-n junction
(the diode), as a key concept behind the IC.
             Intel 4004 – The First Microprocessor


The Intel 4004 is a 4-bit central processing unit (CPU) released by Intel
Corporation in 1971. The 4004 is the first complete CPU on one chip, the
first commercially available microprocessor, a feat made possible by the
use of the new silicon gate technology allowing the integration of a higher
number of transistors and a faster speed than was possible before. The
4004 employed a 10 μm silicon-gate enhancement load pMOS technology
and could execute approximately 92,000 instructions per second (that is,
a single instruction cycle was 11 microseconds).
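
The two quoted figures are consistent with each other, as the small check
below shows; the 740 kHz clock and 8 clock periods per instruction cycle
used in the second line are commonly cited figures for the 4004, added
here for context rather than taken from the text above.

```python
rate = 92_000              # instructions per second, as quoted
print(1 / rate * 1e6)      # ~10.9 microseconds per instruction, i.e. the ~11 us cycle
print(8 / 740e3 * 1e6)     # 8 clock periods at 740 kHz ~ 10.8 microseconds
```
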
                        IBM PC (model 5150)

Type:             Personal computer
Release date:     August 12, 1981
Discontinued:     April 2, 1987
Operating system: IBM BASIC / PC-DOS 1.0, CP/M-86, UCSD p-System
CPU:              Intel 8088 @ 4.77 MHz
Memory:           16 KiB – 256 KiB
                            IBM: PC AT

The IBM Personal Computer/AT, more commonly known as the IBM AT
and also sometimes called the PC AT or PC/AT, was IBM's second-
generation PC, designed around the 6 MHz Intel 80286 microprocessor
and released in 1984 as machine type 5170. Because the AT used various
technologies that were new at the time in personal computers, the name
AT stood for Advanced Technology; one such advancement was that the
Intel 80286 processor used in the AT supported Protected mode. IBM later
released an 8 MHz version of the AT.
              IBM Personal System/2 – PS/2
The Personal System/2 or PS/2 was IBM's third generation of personal
computers. The PS/2 line, released to the public in 1987, was created by
IBM in an attempt to recapture control of the PC market by introducing an
advanced proprietary architecture. Although IBM's considerable market
presence ensured the PS/2 would sell in relatively large numbers, the
PS/2 architecture ultimately failed in its bid to return control of the PC
market to IBM. Due to the higher costs of the closed architecture,
customers preferred competing PCs that extended the existing PC
architecture instead of abandoning it for something new. However, many
of the PS/2's innovations, such as the 16550 UART, 1440 KB 3.5-inch
floppy disk format, 72-pin SIMMs, the PS/2 keyboard and mouse ports,
and the VGA video standard, went on to become standards in the broader
PC market.

The OS/2 operating system was announced at the same time as the PS/2
line and was intended to be the primary operating system for models with
Intel 286 or later processors. However, at the time of the first shipments,
only PC-DOS was available, with OS/2 1.0 (text-mode only) available
several months later. IBM also released AIX PS/2, a Unix-like operating
system for PS/2 models with Intel 386 or later processors. Windows was
another option for PS/2.
                           Intel: (80)486


The Intel i486 (or 80486) was the first tightly[1] pipelined x86 design.
Introduced in 1989, it was also the first x86 chip to use more than a
million transistors, due to a large on-chip cache and an integrated floating
point unit. It represents a fourth generation of binary compatible CPUs
since the original 8086 of 1978, and it was the second 32-bit x86 design
after the 80386.

A 50 MHz 80486 executed around 40 million instructions per second on
average and was able to reach 50 MIPS peak.

The i486 name lacked the usual 80- prefix because of a court ruling that
prohibited trademarking numbers (such as 80486). Later, with the
Pentium, Intel began branding its chips with words rather than numbers.
                           Intel Pentium

The original Pentium processor was a 32-bit microprocessor produced by
Intel. The first superscalar x86 architecture processor,[1] it was introduced
on March 22, 1993.[2] Its microarchitecture (sometimes called P5) was a
direct extension of the 80486 architecture with dual integer pipelines, a
faster FPU, wider data bus, and features for further reduced address
calculation latency. In 1996, the Pentium MMX was introduced with the
same basic microarchitecture complemented with MMX instructions, larger
caches, and some other enhancements.

The name Pentium was derived from the Greek pente (πέντε), meaning
'five', and the Latin ending -ium, a name selected after courts had
disallowed trademarking of number-based names like "i586" or "80586".
In 1995, Intel started to employ the registered Pentium trademark also for
x86 processors with radically different microarchitectures (Pentium Pro / II
/ III / 4 / D / M). In 2006, the Pentium brand briefly disappeared from
Intel's roadmaps,[3][4] only to re-emerge in 2007.[5]

Vinod Dham is often referred to as the father of the Intel Pentium
processor,[6][7] although many people, including John H. Crawford (of
i386 and i486 fame), were involved in the design and development of the
processor.
                          Intel: Itanium


By the time Itanium was released in June 2001, its performance was not
superior to competing RISC and CISC processors.[26] Itanium competed at
the low-end (primarily 4-CPU and smaller systems) with servers based on
x86 processors, and at the high end with IBM's POWER architecture and
Sun Microsystems' SPARC architecture. Intel repositioned Itanium to focus
on high-end business and HPC computing, attempting to duplicate x86's
successful "horizontal" market (i.e., single architecture, multiple systems
vendors). The success of this initial processor version was limited to
replacing PA-RISC in HP systems, Alpha in Compaq systems and MIPS in
SGI systems, though IBM also delivered a supercomputer based on this
processor.[27] POWER and SPARC remained strong, while the 32-bit x86
architecture continued to grow into the enterprise space. With economies
of scale fueled by its enormous installed base, x86 has remained the
preeminent "horizontal" architecture in enterprise computing.

Only a few thousand systems using the original Merced Itanium processor
were sold, due to relatively poor performance, high cost and limited
software availability.[28] Recognizing that the lack of software could be a
serious problem for the future, Intel made thousands of these early
systems available to independent software vendors (ISVs) to stimulate
development. HP and Intel brought the next-generation Itanium 2
processor to market a year later.
                          Intel Pentium D

The Pentium D[2] brand refers to two series of desktop dual-core 64-bit
x86 processors with the NetBurst microarchitecture manufactured by
Intel. Each CPU comprised two dies, each containing a single core,
residing next to each other on a multi-chip module package. The brand's first
processor, codenamed Smithfield, was released by Intel on May 25, 2005.
Nine months later, Intel introduced its successor, codenamed Presler,[3]
but without offering significant upgrades in design,[4] still resulting in a
relatively high power consumption.[5] By 2004, the NetBurst processors
reached a clock speed barrier at 3.8 GHz due to a thermal (and power)
limit exemplified by the Presler's 130 W Thermal Design Power[5] (a higher
TDP requires additional cooling that can be prohibitively noisy or
expensive). The future belonged to more energy-efficient, slower-clocked
dual-core CPUs on a single die instead of two.[6] The final
shipment date of the dual die Presler chips was August 8, 2008,[7] which
marked the end of the Pentium D brand and also the NetBurst
microarchitecture.
                     Intel: Core 2 Extreme



Core 2 is a brand encompassing a range of Intel's consumer 64-bit x86-64
single-, dual-, and quad-core CPUs based on the Intel Core
microarchitecture. The single- and dual-core models are single-die,
whereas the quad-core models comprise two dies, each containing two
cores, packaged in a multi-chip module.[1] The introduction of Core 2
relegated the Pentium brand to the mid-range market, and reunified
laptop and desktop CPU lines, which previously had been divided into the
Pentium 4, Pentium D, and Pentium M brands.

The Core microarchitecture returned to lower clock rates and improved
the usage of both available clock cycles and power when compared with
the preceding NetBurst microarchitecture of the Pentium 4/D-branded
CPUs.[2] The Core microarchitecture provides more efficient decoding
stages, execution units, caches, and buses, reducing the power
consumption of Core 2-branded CPUs while increasing their processing
capacity. Intel's CPUs have varied wildly in power consumption according
to clock rate, architecture, and semiconductor process, as shown in CPU
power dissipation tables.

Core-based processors do not have Hyper-Threading Technology found in
Pentium 4 processors. This is because the Core microarchitecture is a
descendant of the P6 microarchitecture used by Pentium Pro, Pentium II,
Pentium III, and Pentium M. Core 2 also lacks an L3 Cache found in the
Gallatin core of the Pentium 4 Extreme Edition, although an L3 Cache is
present in high-end versions of Core-based Xeons and Hyper-Threading is
present on select Atom processors. Both an L3 cache and Hyper-Threading
are present in current Nehalem and future Westmere processors.

The Core 2 brand was introduced on July 27, 2006,[3] comprising the Solo
(single-core), Duo (dual-core), Quad (quad-core), and in 2007, the
Extreme (dual- or quad-core CPUs for enthusiasts) version.[4] Intel Core 2
processors with vPro technology (designed for businesses) include the
dual-core and quad-core branches.
                           AMD: Athlon


Athlon is the brand name applied to a series of different x86 processors
designed and manufactured by AMD. The original Athlon (now called
Athlon Classic) was the first seventh-generation x86 processor and, in a
first, retained the initial performance lead it had over Intel's competing
processors for a significant period of time, being the first desktop
processor to reach speeds of 1 GHz. AMD has continued the Athlon name
with the Athlon 64, an eighth-generation processor featuring x86-64 (later
renamed AMD64) technology.

The Athlon made its debut on June 23, 1999. Athlon is the ancient Greek
word for "Champion/trophy of the games".
                          AMD: Athlon X2


 The Athlon 64 X2 is the first dual-core desktop CPU designed by AMD. It
is essentially a processor consisting of two Athlon 64 cores joined together
on one die with additional control logic. The cores share one dual-channel
memory controller, are based on the E-stepping model of Athlon 64 and,
depending on the model, have either 512 or 1024 KB of L2 Cache per
core. The Athlon 64 X2 is capable of decoding SSE3 instructions (except
those few specific to Intel's architecture), so it can run and benefit from
software optimizations that were previously only supported by Intel chips.
This enhancement is not unique to the X2, and is also available in the
Venice and San Diego single core Athlon 64s.

In June 2007, AMD released low-voltage variants of their low-end 65 nm
Athlon 64 X2, named "Athlon X2".[1] The Athlon X2 processors feature
reduced TDP of 45 W.

				