COMPUTING AT THE SPEED OF LIGHT,
IS VON NEUMANN WRONG?

                    by

                Micah Bell

A thesis submitted in partial fulfillment of
   the requirements for the degree of


     Master of Science: Management
          Information Systems


          Bowie State University


                   2002




                              Bowie State University

                                    Abstract

                     COMPUTING AT THE SPEED OF
                       LIGHT, IS VON NEUMANN
                               WRONG?

                                  by Micah Bell

       Chairperson of the Supervisory Committee: Professor John Meinke
                          Department of Management Information Systems


Processor speeds have continued to increase over the past few years as computer
system components have continued to grow smaller. Computer system architecture,
however, has remained the same, in accordance with von Neumann’s design. In the
near future, optical computing will increase processing speeds by leaps and
bounds; the newfound speed will require a new type of system architecture.
Traditional computing under von Neumann’s architecture will be rendered obsolete
by optical computing. The old architecture will have to be replaced in order to
keep up with processors leaping well beyond the current technology used by AMD
and Intel in today’s personal and business computers. There have already been
advances in memory and other components. I intend to explore the advances in
optical computing and demonstrate the need to replace John von Neumann’s
architecture.




                TABLE OF CONTENTS



I.     Computing History
       A. Introduction
       B. First Generation
       C. Second Generation
       D. Third Generation
       E. Fourth Generation
       F. Fifth Generation
II.    Computing Trends & Advances
       A. Introduction
       B. AMD and Intel
       C. 64-Bit Processors
       D. PC Components
       E. Component Advances
III.   Optical Computing
       A. Introduction
       B. Optical Bandwidth
       C. Optical Switching
       D. Optical Storage
       E. Advances
IV.    Von Neumann
       A. Introduction
       B. His Life
       C. Theories
V.     Conclusion
LIST OF FIGURES

Figure 1, The Abacus
Figure 2, EDVAC
Figure 3, Processor Comparisons
Figure 4, Area Density of Hard Drives (IBM)
Figure 5, Memory Access Times
Figure 6, Circuit Speeds
Figure 7, Bandwidth Storage Pyramid
Figure 8, All Optical Logic Gate
Figure 9, von Neumann and the “Institute Computer”
Figure 10, von Neumann Architecture



                                Chapter 1




                          COMPUTING HISTORY


Introduction

To understand optical computing and von Neumann’s ideas, we will first examine
the history of computing and its components, beginning many moons ago with the
first recorded use of the abacus, a computation aid developed about 5,000 years
ago.

Figure 1, The Abacus

More notable was the invention of the Pascaline (another computation aid) by
Blaise Pascal in 1642 to aid his father, a tax collector. In 1694 Gottfried
Wilhelm von Leibniz improved on the Pascaline by adding multiplication
functions, thus enhancing the computing device. Over 100 years later, the
mechanical calculator finally started gaining widespread acceptance, creating
the market for computers. Charles Xavier Thomas de Colmar created the
arithmometer, which could perform the four basic operations of addition,
subtraction, multiplication, and division; it was used widely until World War I.
Computers as we know them can find their roots with Charles Babbage and his
observation of the harmony that existed between mathematics and machines. His
first attempt at a mechanical computer was the Difference Engine, which would
perform simple functions; after ten years of work it was never completed. Later,
assisted by Augusta Ada King, Babbage designed the Analytical Engine, a
general-purpose computer. The Analytical Engine took input via punch cards, an
instruction technique derived from Joseph-Marie Jacquard’s loom. In 1889, Herman
Hollerith applied the Jacquard loom idea of punch cards to computing machines to
store data for calculation, aiding him in the tabulation of the U.S. Census; in
1880 the Census calculation had taken nearly seven years to complete. With
Hollerith’s machine it took only six weeks. Hollerith started a business around
his punch card machine, the Tabulating Machine Company, in 1896. Later, in 1924,
after a series of mergers, the Tabulating Machine Company became known as
International Business Machines (IBM). In the following years, great
advancements in computing ensued. Vannevar Bush solved differential equations
with his version of a calculator. Professor John V. Atanasoff and his assistant
Clifford Berry implemented George Boole’s Boolean algebra within computers; by
1940 they had completed an all-electronic computer that used ones and zeros.

First Generation of Computers

Computers were first emphasized and researched during the outbreak of World
War II in order to exploit their strategic advantages. Konrad Zuse invented the
Z3 in 1941 to aid in the development of missiles and airplanes. By 1943 the
British government had built the “Colossus” to break German coded messages. In
1944 the United States, with Howard H. Aiken, built the MARK-I to create
ballistic charts for the Navy. Between 1943 and 1945, John Presper Eckert and
John W. Mauchly invented the ENIAC, a general-purpose computer that was
approximately 1,000 times faster than the MARK-I. In the mid-1940s, John von
Neumann designed and created the EDVAC at the University of Pennsylvania; the
EDVAC (which contained a central processor) could hold a stored program as well
as the data produced by the program. Remington Rand built the UNIVAC I in 1951;
this was the first commercial computer to take advantage of the central
processor design. The first generation of computers was uniquely identified by
the use of machine language specific to each system and by the widespread use of
vacuum tubes.

Figure 2, EDVAC

Second Generation of Computers

The second generation of computers was spearheaded by the invention of the
transistor. By 1956 computers were being built with transistors; the transistor
was far more reliable than the vacuum tube. The Stretch and the LARC (built for
atomic energy laboratories) were the first supercomputers to fully integrate and
take advantage of the transistor. Second generation computers were noted for the
use of assembly language in lieu of machine language, making them easier for
users to program. Second generation computers built in the 1960s began using
printers, tape drives, disk drives, and memory; this generation’s ability to
store programs and use programming languages gave way to languages like COBOL
and FORTRAN.

Third Generation of Computers

Integrated circuits became the trademark of the third generation of computers.
Transistors were used widely, but the heat they produced was detrimental to
computer systems. The integrated circuit (invented by Jack Kilby at Texas
Instruments) made use of silicon chips and combined electrical components.
The use of an operating system was also introduced during this generation,
giving computers a common base for running many different programs.

Fourth Generation of Computers

In 1980, Very Large Scale Integration (VLSI) allowed thousands of electrical
components to be added to a single chip. Early fourth generation computers were
known for chips like Intel’s 4004, introduced in 1971, which integrated the CPU,
memory, and I/O controls onto one chip. One processor could be programmed for
any number of purposes; prior to this, each processor was designed and
programmed for a single specific purpose. Personal computers began being
manufactured, and computers were no longer used solely for business
applications. The Atari was one of the first examples of a home computer used
for entertainment. IBM introduced the Personal Computer for home and office use
in 1981. Other companies made clones of the IBM PC, and widespread manufacturing
ensued. In 1984, Apple introduced the Macintosh, featuring a graphical user
interface (GUI) that allowed users to “point and click” programs into operation
with a mouse. As computers became more and more widespread, users found the need
for intercommunication via computers. Phones and networks were used as a means
for computer intercommunication. The Internet and “information superhighway”
were a direct result of the computers of the fourth generation.

Fifth “Next” Generation of Computers

The next generation of computers will be discussed thoroughly through the rest
of this paper, but beyond this paper, computers in the far future will be highly
advanced. Many will use artificial intelligence and have the ability to simulate
human thought. Next generation computers will be able to make decisions and
they will aid people with millions of tasks. The hardware of next generation
computers will take advantage of superconductors and will process at
astronomical speeds.      The near future of computers involves some
superconductor properties and outrageous speeds, as you will see.




                                  Chapter 2




                  COMPUTING TRENDS & ADVANCES


Introduction

Current computing trends seem to be keeping pace with Moore’s Law of
ever-increasing computer speed. The processor industry continues to research
better, smaller, and faster transistors with higher capacity per chip. AMD,
Intel, IBM, and Motorola lead the way in processor development. In December of
2001, Intel announced that it had created a “terahertz transistor” that will
allow one billion transistors to be packed onto a single processor chip. The
processor should be on the market in 2005 and will reach speeds of 10 gigahertz.
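
As an illustration of this doubling trend, the short sketch below projects transistor counts under a simple Moore’s Law assumption; the 42-million starting count and the 18-month doubling period are illustrative assumptions for demonstration, not figures taken from this paper.

```python
# Illustrative sketch: projecting transistor counts under Moore's Law.
# The 42-million starting count (roughly a high-end processor in 2001)
# and the 18-month doubling period are assumptions for demonstration only.

def projected_transistors(start_count, start_year, target_year, months_to_double=18.0):
    """Return the projected transistor count in target_year."""
    months_elapsed = (target_year - start_year) * 12
    doublings = months_elapsed / months_to_double
    return start_count * (2 ** doublings)

if __name__ == "__main__":
    for year in (2001, 2003, 2005, 2007):
        count = projected_transistors(42_000_000, 2001, year)
        print(f"{year}: ~{count / 1e6:,.0f} million transistors")
```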

AMD and Intel

AMD and Intel are the leading manufacturers of processors for IBM-compatible PCs
and continue to dominate the market in manufacturing and development. Motorola
is the leading manufacturer of processors for Apple computer systems, and it
continues to increase processor speed competitively with AMD and Intel. At the
beginning of 2002, Intel introduced a Pentium 4 processor built with 0.13-micron
transistors. Currently the maximum speed of the Pentium 4 is 2 GHz, but by the
end of 2002 it should reach speeds of 3.5 GHz. AMD’s current processor, the
Athlon XP, performs to high standards; at 1.6 GHz it outgunned a 2 GHz Intel
processor due to its design and its use of high performance DDR (double data
rate) SDRAM memory. All of the processor manufacturers are expected to continue
increasing the speed of their systems, slowly climbing to new heights.

Figure 3, Processor Comparisons



64-Bit Processors

64-bit processors have been in use since 1991. DEC’s Alpha processor has been
running 64-bit code but has not seen huge jumps in processing speed. AMD and
Intel have now decided to capitalize on 64-bit architecture. Current chips like
the Athlon and the Pentium, which are 32-bit, will continue to be researched and
manufactured. AMD has developed the “Hammer” line of 64-bit processors, while
Intel has developed the “Itanium.” The speeds of the 64-bit processors will
start between 1 and 2 GHz. Previous developments in 64-bit processors had not
been pursued because new software must be designed to take advantage of the
64-bit design; all previous software was designed around a 32-bit processor. The
64-bit processors also required a new type of machine code interface, which has
finally been developed and is known as EPIC (Explicitly Parallel Instruction
Computing). 64-bit processors are expected to be much more expensive and will
more than likely be used for high-end applications and heavy operations like
data mining and running large databases.
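
To make the difference between 32-bit and 64-bit designs concrete, the sketch below computes the maximum memory each address width can address directly; it is a simple illustrative calculation rather than a description of any particular processor.

```python
# Illustrative sketch: the memory a flat 32-bit versus 64-bit address
# space can address directly (one byte per distinct address).

def max_addressable_bytes(address_bits):
    """Each distinct address selects one byte, so 2**bits bytes in total."""
    return 2 ** address_bits

for bits in (32, 64):
    total = max_addressable_bytes(bits)
    print(f"{bits}-bit addresses: {total:,} bytes (~{total / 2**30:,.0f} GiB)")
```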

PC Components

Each component of a computer is important to the performance and speed of the
system. The processor is the key component and processes the data in a computer
system; current speeds are about 2 GHz. A processor’s socket name denotes the
number of connections the processor has to the motherboard; the more pins or
connections the processor has, the more data that can be transferred to and from
the CPU. RAM, or volatile memory, is very important to a computer system, and
there are several types. SDRAM is the cheapest and is found in most budget
systems. RDRAM (Rambus DRAM) is used in higher end Pentium systems and operates
on both the rise and fall of the computer clock. In new Athlon systems, DDR
SDRAM is used and performs operations at double the rate of RDRAM. Slower RAM
can create a bottleneck in a computer system. The
bandwidth of the bus is also important; all information within the computer
travels on the bus. A higher bus bandwidth will allow more information to travel
through the computer system; current advanced bus speeds are about 400 MHz.
Hard drives usually create the biggest bottleneck in a computer system. They are
limited in speed by the write arms and the speed of the platters. A fast hard drive
or non-volatile storage device could be the most beneficial part of a system.
Current hard drives operate at platter speeds of around 7,200 rpm. Reynold
Johnson, who had been working for IBM for 18 years, invented the first hard
drive in 1955. The first hard drive was called the RAMAC 350; it had a 5 MB
capacity using fifty 24-inch disks. The rotation speed was 1,200 revolutions per
minute with a one second access time; the density of storage (area density) was
2 kilobits per square inch. The total hard drive was the size of two
refrigerators. The area density of storage is the most important aspect of a
hard drive. The more bits that can be packed into a smaller section of a
platter, the larger the storage capacity of a standard hard drive. Also, denser
storage means more data can be read with each revolution of the platter, thus
enhancing speed. See Figure 4 from IBM on area density. From 1970 to 1991, area
density grew at a rate of 15% per year. Starting in 1991 and ending in 1997,
area density exploded at a rate of 60% per year. Since then, area density has
grown by roughly 100% per year, creating hard drives that can store massive
amounts of data in very small areas. IBM led the way in densely packing hard
drive platters with the introduction of a technology called GMR, or giant
magnetoresistive, heads. The rest of the industry quickly adopted this
technique, leaving standard magnetoresistive technology behind. The bits used on
hard drive platters have continued to get smaller, allowing more bits to be
packed per square inch. The size continues to shrink with advancing materials
and technological development. The storage size of a bit cannot decrease
forever; as a bit is stored on the hard drive, a magnetic domain is created in
the storage material. The question remains: how small can magnetic charges
become and still be readable and writable?

Figure 4, Area Density of Hard Drives (IBM)
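
As a rough illustration of these compound growth rates, the sketch below carries an assumed 1970 area density forward through the three periods described above; the starting value is a placeholder chosen only for demonstration.

```python
# Illustrative sketch: compounding the area-density growth rates quoted
# above. The 1970 starting value (about 1 megabit per square inch) is an
# assumed placeholder, not a figure from this paper.

def compound(start, rate, years):
    """Grow `start` by `rate` (e.g. 0.15 for 15%) per year for `years` years."""
    return start * (1 + rate) ** years

density_1970 = 1e6                                          # bits per square inch (assumed)
density_1991 = compound(density_1970, 0.15, 1991 - 1970)    # 15% per year, 1970-1991
density_1997 = compound(density_1991, 0.60, 1997 - 1991)    # 60% per year, 1991-1997
density_2002 = compound(density_1997, 1.00, 2002 - 1997)    # ~100% per year since 1997

for year, density in ((1991, density_1991), (1997, density_1997), (2002, density_2002)):
    print(f"{year}: ~{density / 1e9:.2f} gigabits per square inch")
```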

Component Advances

Research continues in the advancement of computers and computing components.




Advances in memory are among the most compelling being introduced to computer
systems. Computer processors can perform six times faster than a hard drive can
store information; it is easy to see where a bottleneck usually occurs in a
system.

Figure 5, Memory Access Times

Storage Medium        Access Time    Data Transfer Rate    Storage Capacity
Holographic Memory    2.4 µs         10 GB/s               400 Mbits/cm2
Main Memory (RAM)     10 – 40 ns     5 MB/s                4.0 Mbits/cm2
Magnetic Disk         8.3 ms         5 – 20 MB/s           100 Mbits/cm2

The gap between CPU
and data storage speeds is large and researchers in hardware and software systems
are trying to solve the problem. Holographic memory has been introduced as one
solution; it accesses data in three dimensions, and data can be accessed a
page at a time instead of through serial access. Holographic memory storage
systems do not have mechanical parts and are not limited to the speed of
mechanical motion. Holographic memory uses a photosensitive material to
record interference patterns of a reference beam and a signal beam of coherent
light, where the signal beam is reflected off of an object or it contains data in the
form of light and dark areas. The nature of the photosensitive material is such
that the recorded interference pattern can be reproduced by applying a beam of
light to the material that is identical to the reference beam. The resulting light that
is transmitted through the medium will take on the recorded interference pattern
and will be collected on a laser detector array that encompasses the entire surface
of the holographic medium. Many holograms can be recorded in the same space
by changing the angle or the wavelength of the incident light. An entire page of
data is accessed in this way1. Data transfer rates of holographic memory are
around one gigabyte per second and data access times are in the range of 10 ms.
While crystal storage materials are still being perfected, holographic memory
systems are already in use. The current write rate is 31 kB/s and the current
read rate is 10 GB/s; that read speed is comparable to some types of RAM, yet
the storage is non-volatile.
If holographic memory became the standard in non-volatile memory for
computer systems, significant performance advantages would be noticed.
Waiting to store or retrieve information from a slow hard drive would no longer
bog down the processor.
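
To put the gap shown in Figure 5 into perspective, the sketch below estimates how long each medium would take to deliver one gigabyte at the transfer rates quoted in the table; it is a back-of-the-envelope illustration only.

```python
# Illustrative sketch: time to deliver one gigabyte at the transfer rates
# quoted in Figure 5 (values copied directly from the table above).

TRANSFER_RATES_MB_PER_S = {
    "Holographic memory": 10_000,   # 10 GB/s
    "Main memory (RAM)": 5,         # 5 MB/s, as listed in Figure 5
    "Magnetic disk": 20,            # upper end of the 5 - 20 MB/s range
}

PAYLOAD_MB = 1024  # one gigabyte

for medium, rate in TRANSFER_RATES_MB_PER_S.items():
    seconds = PAYLOAD_MB / rate
    print(f"{medium}: {seconds:,.3f} seconds to move 1 GB")
```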

Holographic memory had been researched at the Defense Advanced Research Projects
Agency (DARPA) for a number of years, but the work was halted in 1976 due to
materials issues. DARPA picked up the research again in the mid-1990s, funding
work at the California Institute of Technology, Stanford University, and
Carnegie Mellon University. Companies such as IBM, Rockwell, and Bayer were also
commissioned to research holographic memory; Rockwell was the most successful.

1 From Stephanie Boyles’s paper on Holographic Memory

IBM has successfully developed a carbon nanotube transistor. The size of the
transistor is 0.1 microns, while the size of AMD’s and Intel’s transistors in
current processors is 0.13 microns. If developed into central processing units
for personal computers, the speed of computers would increase dramatically.


Circuit Name      Capacity        Comment

DS0               64 Kbps         Building block for fractional T1
T1, DS-1          1.544 Mbps      North America
E1                2.048 Mbps      Europe, Asia
T2, DS-2          6.312 Mbps      North America
E2                8.448 Mbps      Europe
E3                34.368 Mbps     Europe and Japan
T3, DS-3          44.736 Mbps     672 DS0s
OC-1, STS-1       51.840 Mbps     Optical fiber
Fast Ethernet     100.000 Mbps    LAN/WAN
OC-3, STS-3       155.520 Mbps    Optical fiber; 3 x 51.840 Mbps
OC-3c             155.520 Mbps    Optical fiber; "c" = concatenated
OC-12, STS-12     622.080 Mbps    Optical fiber
OC-48             2.488 Gbps      Optical fiber
OC-96             4.976 Gbps
OC-192            10 Gbps
OC-255            13.21 Gbps

Figure 6, Circuit Speeds




                                  Chapter 3




                           OPTICAL COMPUTING


Introduction

Advances in computing have brought dreams of optical computing closer to
reality. The speed and bandwidth of optical processing surpass traditional means
by a wide margin. Photons behave in a manner that can produce these great speeds
in comparison to electrons. The speed of light (186,000 miles per second) offers
great advantages for processing speed, but storage is a more challenging topic.
Optical circuits are also immune to electromagnetic interference, and they have
the capability to operate in parallel with other optical transmissions. New
materials like crystals and thin films have made these optical advances
possible.

Optical Bandwidth

Advances in optical bandwidth have allowed more and more data to be placed onto
a single circuit within a network. One of the most important advances in optical
bandwidth is Dense Wavelength Division Multiplexing, or DWDM; this technology
allows data from different sources to be placed on a single optical fiber, with
every signal on the fiber carried on its own wavelength. Current DWDM technology
allows 96 different wavelengths to be multiplexed onto a single optical fiber.
DWDM is what allowed the speeds of the OC-192 circuits to be achieved. Wire
circuits have been limited to roughly Fast Ethernet speeds, so it is easy to see
the advantage of optical networks. Figure 6 compares the speeds of different
circuits.
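
A rough way to see the appeal of DWDM is to multiply the per-wavelength rate by the number of wavelengths on a fiber. The sketch below does this for the figures quoted above (96 wavelengths at roughly OC-192 rates); it is an idealized upper bound that ignores protocol overhead.

```python
# Illustrative sketch: aggregate fiber capacity under DWDM, using the
# figures quoted in the text (96 wavelengths, roughly 10 Gbps per channel).

WAVELENGTHS_PER_FIBER = 96
PER_CHANNEL_GBPS = 10              # approximately one OC-192 per wavelength

aggregate_gbps = WAVELENGTHS_PER_FIBER * PER_CHANNEL_GBPS
fast_ethernet_gbps = 0.1           # a 100 Mbps copper circuit, for comparison

print(f"Aggregate DWDM capacity: {aggregate_gbps} Gbps per fiber")
print(f"Equivalent Fast Ethernet circuits: {aggregate_gbps / fast_ethernet_gbps:,.0f}")
```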

Optical Switching

Optical switches have been possible since 1978, but they had little success and
speeds were very low. Manufacturers like Nortel and Lucent have since had much
success with optical switching. Until recently, hybrid electro-optical switches
were very popular. A hybrid switch takes input in the form of optical signals,
converts them to electrical signals, performs the switching function, and
converts the signals back to optical form for output. This process sounds
tedious and inefficient, but many companies have achieved great speeds with it.
Newer switches have been designed around an all-optical process. SaskTel (a
Canadian phone carrier) was the first to implement an optical switch from
Nortel. The new all-optical switches can handle the capacity of an OC-768
stream. Electronic switches are limited to speeds of about 50 Gb/s; given the
growth of the Internet, this limitation makes photon-based switches a necessity.
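
The contrast between the hybrid and all-optical approaches can be sketched as a pipeline of conversion stages. The example below models the two paths with made-up stage latencies purely for illustration; it is not a description of any real Nortel or Lucent switch.

```python
# Illustrative sketch (not a real switch implementation): the stages of a
# hybrid electro-optical switch versus an all-optical one. The per-stage
# latencies are made-up placeholders used only for comparison.

HYBRID_PATH = [
    ("optical-to-electrical conversion", 50.0),   # latency in nanoseconds (assumed)
    ("electronic switching fabric", 100.0),
    ("electrical-to-optical conversion", 50.0),
]

ALL_OPTICAL_PATH = [
    ("photonic switching fabric", 10.0),
]

def total_latency_ns(path):
    """Sum the per-stage latencies along a switching path."""
    return sum(latency for _, latency in path)

print(f"Hybrid switch path:      {total_latency_ns(HYBRID_PATH):.0f} ns")
print(f"All-optical switch path: {total_latency_ns(ALL_OPTICAL_PATH):.0f} ns")
```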

Optical Storage

In line with optical computing, optical storage has been on the market for a
long time relative to the computing age. There are many successful devices based
on lasers and optical storage. Examples include WORM (write once, read many)
drives, CD-ROM technology (introduced in 1984), optical disk drives, and more
recently digital versatile disks (DVDs). The DVD can store massive amounts of
data, but the speed at which the data is accessed and written is very slow
compared to computer processor speed. The stability of optical disk storage is
much higher than that of magnetic media. Magnetic storage such as hard drives is
subject to magnetic interference, whereas optical disks are not. Recordable
optical disks use three different types of dye: green, which is based on a
cyanine substance with a life of 75 years; gold, which is based on a
phthalocyanine compound with a life of 100 years; and blue, which is cyanine
with a silver substrate, making the media more resistant to ultraviolet
radiation. All recordable optical disks (unlike pressed, prerecorded disks) are
adversely affected by sunlight but not by electromagnetic interference.

Figure 7, Bandwidth Storage Pyramid

Colossal Storage Corporation is developing an optical storage system with very
high bandwidth and low power consumption. The system is called 3D Volume
Holographic Optical Disk Drive Storage. Its data transfer rates are over 1,000
gigabits per second. The data is stored on ferroelectric perovskites, a type of
photonic crystal. The storage density ranges from 200 gigabits per square inch
to 400 terabits per square inch.
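
For a sense of scale, the sketch below converts the quoted transfer rate and density into byte-oriented units; it performs simple unit conversions only and adds no data beyond the figures stated above.

```python
# Illustrative sketch: converting the quoted Colossal Storage figures into
# byte-oriented units (simple unit conversions, no new data).

TRANSFER_GBITS_PER_S = 1000        # "over 1,000 gigabits per second"
DENSITY_TBITS_PER_SQ_INCH = 400    # upper end of the quoted density range

transfer_gbytes_per_s = TRANSFER_GBITS_PER_S / 8
density_gbytes_per_sq_inch = DENSITY_TBITS_PER_SQ_INCH * 1000 / 8

print(f"Transfer rate: ~{transfer_gbytes_per_s:.0f} gigabytes per second")
print(f"Density: ~{density_gbytes_per_sq_inch:,.0f} gigabytes per square inch")
```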

The advantage of optical storage related to optical computing is the knowledge
gained from working with lasers; as scientists have been working with lasers for
storage, there have been many advances as a result of the work. Researchers
know the processes, characteristics, and behaviors of lasers due to the vast
development of laser storage. This knowledge has led to the development of
optical computing.

Advances

Advances in optical computing have been made possible by many research
facilities. The most notable advance was the all-optical logic gate demonstrated
at NASA; the logic gate switched in picoseconds. With the advent of an optical
logic gate, a purely optical processor is in the very near future.

Figure 8, All Optical Logic Gate

Researchers at the University of Rochester have performed experiments with
optics and quantum mechanics that may perform some calculations nearly a billion
times faster than any computer built to date. Their emphasis was on photons and
quantum mechanics. The results showed that electrons behave in a more
unpredictable manner and are not as reliable as photons. Photons have constant
and predictable behavior that allows multiple operations to be processed
simultaneously. New optical computers will be able to perform several different
actions at the same time. An example of the performance of optical computers
involves a librarian2. Instead of one librarian looking for a book
volume-by-volume and shelf-by-shelf (current computer systems), she would be
cloned once for every volume in the library and all of the librarians would
search at once (optical processing). The speed advantage of the light
interference computer would far surpass any computer system today. The
technology of light interference and quantum computing continues to be
researched today. Research details are not always available due to corporate
security, but funding for nearly all of the research into fast and alternative
computing comes from the Defense Advanced Research Projects Agency (DARPA),
which coordinates most of the information from universities and lists all
projects currently funded.

2 Example idea from researchers at Rochester, article by Tim McDonald
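
The librarian analogy can be made concrete with a toy comparison of serial search against an idealized parallel search in which every volume is examined at once; the per-volume time in the sketch below is an arbitrary placeholder.

```python
# Toy illustration of the librarian analogy: a serial search checks one
# volume at a time, while an idealized parallel (optical) search checks
# every volume at once. The per-volume time is an arbitrary placeholder.

TIME_PER_VOLUME_US = 1.0           # assumed time to check one volume, in microseconds

for volumes in (1_000, 1_000_000, 1_000_000_000):
    serial_seconds = volumes * TIME_PER_VOLUME_US / 1e6     # one librarian, volume by volume
    parallel_seconds = TIME_PER_VOLUME_US / 1e6             # one cloned librarian per volume
    print(f"{volumes:>13,} volumes: serial {serial_seconds:,.3f} s, "
          f"parallel {parallel_seconds:.6f} s")
```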




                                     Chapter 4




                            JOHN VON NEUMANN




                      Figure 9, von Neumann and the “Institute Computer”


Introduction

John von Neumann worked in many fields of study like quantum mechanics,
game theory, algebras, and mathematics. One of his greatest contributions was in
the field of computer science.

His Life

John von Neumann was born to a non-observant Jewish family in Budapest, Hungary,
in 1903. His birth name was János; it was later changed to John after he
immigrated to the United States, where he was called Johnny. His father, a
wealthy banker, purchased the German title “von” in 1913. John von Neumann was
naturally intelligent and proved it as a child by conversing with his father in
classical Greek and by memorizing phone books. Not practicing their Jewish
faith, the von Neumanns placed John in a Lutheran Gymnasium (with a tradition of
outstanding instruction), where his intelligence was recognized immediately.
Another famous mathematician, Eugene Wigner, attended the same school a year
ahead of John. After World War I, communists gained control of Hungary, and the
von Neumann family fled to Austria for several months before returning to
Budapest. The problems in Budapest were blamed primarily on Jewish people, which
caused problems for the von Neumanns. John completed his schooling in 1921 and
published his first mathematics paper in 1922 with help from Fekete, a
mathematics assistant professor at the University of Budapest. In college, von
Neumann studied chemistry in lieu of mathematics at the wish of his father. Max
von Neumann (his father) did not want his son studying a subject that was not
going to make him money, so chemistry was chosen as a compromise. The University
of Budapest limited the number of Jewish students who could attend, so John
attended school at the University of Berlin. In 1923 he transferred to the
University of Zurich and graduated with a degree in chemical engineering.
Secretly, he had been studying mathematics at the University of Budapest by
correspondence, and in 1926 he graduated with a doctorate in mathematics and a
thesis on set theory. After college, John lectured at the University of Berlin
and the University of Hamburg between 1926 and 1930. During his time lecturing,
he also studied at the University of Göttingen. In 1930 he went to the United
States with his new wife Marietta to lecture on quantum theory at Princeton.
John was appointed professor in 1931 and taught at Princeton alongside Albert
Einstein. He continued to return to Europe during the summers until the Nazi
party took control. By 1938 John had divorced and remarried, then permanently
settled in Princeton, where he was regarded as one of the top mathematicians.
John began dedicating time to studies in logic, leading to automata theory and
computer science. During World War II, John von Neumann assisted in developing
the hydrogen bomb as a consultant to the armed forces; he was also an advisor to
the ballistic research laboratories at Aberdeen Proving Ground in Maryland. John
von Neumann was also a member of the Armed Forces Special Weapons Project in
Washington, D.C., and President Eisenhower appointed him to the Atomic Energy
Commission in 1955. Finally, after a number of months suffering from incurable
cancer, he died in Washington, D.C., in 1957. It was in the mid-1940s and 1950s
that John von Neumann contributed the most to computer science, designing the
EDVAC and endorsing the bit as the measurement of a computer’s memory.

Theories

John von Neumann’s theory of computing was built around a processing unit, a
control unit, memory, and input and output. He used this basic design during the
invention and development of the EDVAC. Today’s computers still use the basic
principles that von Neumann theorized and practiced. He also endorsed and used
the principle of the bit, on which all computers are based today. A great deal
of his time was dedicated to game theory and mathematics, but computer science
also consumed much of his time.

Figure 10, von Neumann Architecture
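
As an illustration of the stored-program principle behind this architecture, the sketch below implements a tiny fetch-decode-execute loop in which instructions and data share a single memory; the three-instruction set is invented for the example and is not von Neumann’s actual instruction set.

```python
# Toy stored-program machine: instructions and data live in the same
# memory, and a control loop fetches, decodes, and executes them.
# The tiny instruction set here is invented purely for illustration.

memory = [
    ("LOAD", 9),      # 0: accumulator <- memory[9]
    ("ADD", 10),      # 1: accumulator += memory[10]
    ("STORE", 11),    # 2: memory[11] <- accumulator
    ("HALT", None),   # 3: stop
    None, None, None, None, None,
    6,                # 9: first operand
    7,                # 10: second operand
    0,                # 11: result goes here
]

accumulator = 0
pc = 0  # program counter

while True:
    opcode, operand = memory[pc]      # fetch
    pc += 1
    if opcode == "LOAD":              # decode and execute
        accumulator = memory[operand]
    elif opcode == "ADD":
        accumulator += memory[operand]
    elif opcode == "STORE":
        memory[operand] = accumulator
    elif opcode == "HALT":
        break

print("Result stored at address 11:", memory[11])   # prints 13
```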




                                 Chapter 5




                                CONCLUSION


Advances in computing have continued since data processing began, but the
greatest speeds have been achieved only recently. Thanks to studies like the
ones at the University of Rochester and other universities, the fastest speeds
in computing are yet to come. In the very near future, IBM’s carbon nanotube
transistors will advance the computing community by leaps, but perhaps not
bounds, while keeping with traditional computer system architecture. The speed
brought by the carbon nanotube transistor will be in vain if the processor
cannot be continually fed with data; herein lies the problem. Even current
systems have difficulty keeping the processor busy with dataflow.

The development of optical and quantum computing will bring computing to new
heights, leaps and bounds over current technology. While the basic principle of
input and output will always hold true for computers, since that is the purpose
of most computer systems, the internal workings are sure to change. Non-volatile
storage must change for the better; hard drives as we know them are far too slow
and hinder fast computer systems. This is the reason that most systems today are
moving toward more and more volatile memory; moving data from primary memory to
the central processing unit takes very little time. The more data that can be
packed into main memory, the faster the computer system will appear when that
data is accessed. Moving data from the hard drive to primary memory takes a
great deal of time in comparison. If non-volatile memory could be accessed at a
faster speed, there would not be much emphasis on optical and quantum
processors. The light interference computer requires an extremely different
architecture integrating very fast access memory, whether primary or secondary.

Only time will tell if John von Neumann’s theory on system architecture will hold
true.




                             BIBLIOGRAPHY



Blake, Christi (1999). SaskTel Becomes First Canadian Carrier to Deploy Nortel
   Network’s Optical Switch Interface. Retrieved February 16, 2002, from
   http://www.nortelnetworks.com

Boyles, Stephanie (2000). Holographic Memory. Retrieved February 16, 2002, from
   http://ucsu.colorado.edu/~stephanb/projects/CSI3300.htm

Grochowski, Ed. IBM Magnetic Hard Disk Drive Technology. Retrieved February 16,
   2002, from http://www.almaden.ibm.com/sst/html/leadership/g02.htm

Holston, William J. (2001). How Big Blue Plays D. Retrieved February 16, 2002,
   from http://www.business.com/

J.W.T. On the Horizon: Holographic Storage. Retrieved February 16, 2002, from
   http://www.sciam.com/2000/0500issue/0500toigbox5.html

Knier, Gil (2000). Now, Just a Blinkin’ Picosecond! Retrieved February 16, 2002,
   from http://science.nasa.gov/headlines/y2000/ast28apr_1m.htm

LaMorte, Christopher. Computers: History and Development. Retrieved February 16,
   2002, from http://www.digitalcentury.com/encyclo/update/comp_hd.html

Mandal, M. K. Optical Storage Media. Retrieved February 16, 2002, from
   http://www.ee.ualberta.ca/~mandal/courses/EE587/download/CD-DVD.pdf

McDonald, Tim (2001). Forget Electrons – Computing Goes Light-Speed. Retrieved
   February 16, 2002, from http://www.ecommercetimes.com/perl/story/9851.html

Metz, Cade (2002). PC Processors: What’s Inside? Retrieved February 16, 2002,
   from http://www.pcmag.com/

Paley, Mark S., et al. Recent Advances in Photonic Devices for Optical
   Computing. Retrieved February 16, 2002, from
   http://science.nasa.gov/headlines/images/nanosecond/thepaper.PDF

Robertson, E. F. (1998). John von Neumann. Retrieved February 16, 2002, from
   http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Von_Neumann.html

Thomas, Michael E. (2002). 3D Volume Holographic Optical Data Storage
   NanoTechnology. Retrieved February 16, 2002, from
   http://www.geocities.com/colossalstorage/colossal.htm

Unknown (2000). Optical Bandwidth: Unleash the Potential. Retrieved February 16,
   2002, from http://www.juniper.net/news/features/m160/

Unknown. Bandwidth Quick Reference. Retrieved February 16, 2002, from
   http://www.bandwidth.com/

Unknown. Government Set to Pull the Plug on Holographic Memory Research Funding.
   Retrieved February 16, 2002, from http://mypioneer.com/news/indnews/IN9399.asp

Zuse, Horst (1999). John von Neumann’s Computer Concepts versus Konrad Zuse’s
   Ideas and the Machines Z1 and Z3. Retrieved February 16, 2002, from
   http://irb.cs.tu-berlin.de/~zuse/Konrad_Zuse/Neumann_vs_Zuse.html

				