A programmable machine

The two principal characteristics of a computer are:


        It responds to a specific set of instructions in a well-defined manner.
        It can execute a prerecorded list of instructions (a program).


Modern computers are electronic and digital. The actual machinery -- wires, transistors, and circuits -- is called
hardware; the instructions and data are called software.
More about computers is covered below:


    1.   History Of Computer Technology
    2.   Kinds Of Computers
    3.   Computer Components (Hardware)
    4.   Usage Of Computer in Our Daily Life


History of Computer Technology

A complete history of computing would include a multitude of diverse devices such as the ancient Chinese
abacus, the Jacquard loom (1805) and Charles Babbage's ``analytical engine'' (1834). It would also include
discussion of mechanical, analog and digital computing architectures. As late as the 1960s, mechanical devices,
such as the Marchant calculator, still found widespread application in science and engineering. During the early
days of electronic computing devices, there was much discussion about the relative merits of analog vs. digital
computers. In fact, as late as the 1960s, analog computers were routinely used to solve systems of finite
difference equations arising in oil reservoir modeling. In the end, digital computing devices proved to have the
power, economics and scalability necessary to deal with large scale computations. Digital computers now
dominate the computing world in all areas ranging from the hand calculator to the supercomputer and are
pervasive throughout society. Therefore, this brief sketch of the development of scientific computing is limited
to the area of digital, electronic computers.
The evolution of digital computing is often divided into generations. Each generation is characterized by
dramatic improvements over the previous generation in the technology used to build computers, the internal
organization of computer systems, and programming languages. Although not usually associated with computer
generations, there has been a steady improvement in algorithms, including algorithms used in computational
science. The following history has been organized using these widely recognized generations as mileposts.


        The Mechanical Era (1623-1945)
        First Generation Electronic Computers (1937-1953)
        Second Generation (1954-1962)
        Third Generation (1963-1972)
        Fourth Generation (1972-1984)
        Fifth Generation (1984-1990)
        Sixth Generation (1990 - )


The Mechanical Era (1623-1945)

The idea of using machines to solve mathematical problems can be traced at least as far as the early 17th
century. Mathematicians who designed and implemented calculators that were capable of addition, subtraction,
multiplication, and division included Wilhelm Schickard, Blaise Pascal, and Gottfried Leibniz.
Charles Babbage's Difference Engine, begun in 1823 but never completed, was a special-purpose calculating
machine. His more ambitious Analytical Engine, designed from the 1830s onward, was probably the first design
for a multi-purpose, i.e. programmable, computing device, but it also was only partially completed. Babbage was truly a
man ahead of his time: many historians think the major reason he was unable to complete these projects was
the fact that the technology of the day was not reliable enough. In spite of never building a complete working
machine, Babbage and his colleagues, most notably Ada, Countess of Lovelace, recognized several important
programming techniques, including conditional branches, iterative loops and index variables.
A machine inspired by Babbage's design was arguably the first to be used in computational science. George
Scheutz read of the difference engine in 1833, and along with his son Edvard Scheutz began work on a smaller
version. By 1853 they had constructed a machine that could process 15-digit numbers and calculate fourth-
order differences. Their machine won a gold medal at the Exhibition of Paris in 1855, and later they sold it to
the Dudley Observatory in Albany, New York, which used it to calculate the orbit of Mars. One of the first
commercial uses of mechanical computers was by the US Census Bureau, which used punch-card equipment
designed by Herman Hollerith to tabulate data for the 1890 census. In 1911 Hollerith's company merged with a
competitor to found the corporation which in 1924 became International Business Machines.


First Generation Electronic Computers (1937-1953)

Three machines have been promoted at various times as the first electronic computers. These machines used
electronic switches, in the form of vacuum tubes, instead of electromechanical relays. In principle the electronic
switches would be more reliable, since they would have no moving parts that would wear out, but the
technology was still new at that time and the tubes were comparable to relays in reliability. Electronic
components had one major benefit, however: they could ``open'' and ``close'' about 1,000 times faster than
mechanical switches.
The earliest attempt to build an electronic computer was by J. V. Atanasoff, a professor of physics and
mathematics at Iowa State, in 1937. Atanasoff set out to build a machine that would help his graduate students
solve systems of partial differential equations. By 1941 he and graduate student Clifford Berry had succeeded
in building a machine that could solve 29 simultaneous equations with 29 unknowns. However, the machine
was not programmable, and was more of an electronic calculator.


A second early electronic machine was Colossus, built for the British military in 1943 by a Bletchley Park team
led by the engineer Tommy Flowers, drawing on the codebreaking work of Alan Turing and others. This
machine played an important role in breaking codes used by the German military in World War II. Turing's main
contribution to the field of computer science was the idea of the Turing machine, a mathematical formalism
widely used in the study of computable functions. The existence of Colossus was kept secret until long after the
war ended, and the credit due to Turing and his colleagues for designing one of the first working electronic
computers was slow in coming.


The first general purpose programmable electronic computer was the Electronic Numerical Integrator and
Computer (ENIAC), built by J. Presper Eckert and John V. Mauchly at the University of Pennsylvania. Work
began in 1943, funded by the Army Ordnance Department, which needed a way to compute ballistics during
World War II. The machine wasn't completed until 1945, but then it was used extensively for calculations
during the design of the hydrogen bomb. By the time it was decommissioned in 1955 it had been used for
research on the design of wind tunnels, random number generators, and weather prediction. Eckert, Mauchly,
and John von Neumann, a consultant to the ENIAC project, began work on a new machine before ENIAC was
finished. The main contribution of EDVAC, their new project, was the notion of a stored program. There is some
controversy over who deserves the credit for this idea, but none over how important the idea was to the future
of general purpose computers. ENIAC was controlled by a set of external switches and dials; to change the
program required physically altering the settings on these controls. These controls also limited the speed of the
internal electronic operations. Through the use of a memory that was large enough to hold both instructions
and data, and using the program stored in memory to control the order of arithmetic operations, EDVAC was
able to run orders of magnitude faster than ENIAC. By storing instructions in the same medium as data,
designers could concentrate on improving the internal structure of the machine without worrying about
matching it to the speed of an external control.
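The stored-program idea can be illustrated with a toy machine. The following Python sketch is purely illustrative: the three-instruction set and memory layout are invented for the example and are not EDVAC's; the essential point is only that instructions and data live in the same memory.

```python
# A toy stored-program machine: instructions and data share one memory.
# Invented instruction set: ("LOAD", addr), ("ADD", addr), ("HALT",).
memory = [
    ("LOAD", 4),   # address 0: acc = memory[4]
    ("ADD", 5),    # address 1: acc += memory[5]
    ("HALT",),     # address 2: stop
    None,          # address 3: unused
    20,            # address 4: data
    22,            # address 5: data
]

acc, pc = 0, 0                 # accumulator and program counter
while True:
    instr = memory[pc]         # fetch the next instruction from memory
    pc += 1
    if instr[0] == "LOAD":
        acc = memory[instr[1]]
    elif instr[0] == "ADD":
        acc += memory[instr[1]]
    elif instr[0] == "HALT":
        break
print(acc)  # 42
```

Changing the program means changing memory contents, not rewiring switches and dials, which is exactly the advantage EDVAC had over ENIAC.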


Regardless of who deserves the credit for the stored program idea, the EDVAC project is significant as an
example of the power of interdisciplinary projects that characterize modern computational science. By
recognizing that functions, in the form of a sequence of instructions for a computer, can be encoded as
numbers, the EDVAC group knew the instructions could be stored in the computer's memory along with
numerical data. The notion of using numbers to represent functions was a key step used by Goedel in his
incompleteness theorem in 1931, work with which von Neumann, as a logician, was quite familiar. Von
Neumann's background in logic, combined with Eckert and Mauchly's electrical engineering skills, formed a very
powerful interdisciplinary team.
Software technology during this period was very primitive. The first programs were written out in machine
code, i.e. programmers directly wrote down the numbers that corresponded to the instructions they wanted to
store in memory. By the 1950s programmers were using a symbolic notation, known as assembly language,
then hand-translating the symbolic notation into machine code. Later programs known as assemblers
performed the translation task.
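The translation step those later assemblers performed can be sketched in miniature. This Python snippet is a hedged illustration, not any historical assembler: the mnemonics and their numeric opcodes are invented for the example.

```python
# A toy assembler: translate symbolic notation into numeric machine code.
# The mnemonics and opcode numbers are invented for illustration.
OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3}

def assemble(program):
    """Turn lines like 'ADD 11' into (opcode, operand) number pairs."""
    machine_code = []
    for line in program:
        mnemonic, operand = line.split()
        machine_code.append((OPCODES[mnemonic], int(operand)))
    return machine_code

source = ["LOAD 10", "ADD 11", "STORE 12"]
print(assemble(source))  # [(1, 10), (2, 11), (3, 12)]
```

The output pairs are what a 1950s programmer would previously have written out by hand.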
As primitive as they were, these first electronic machines were quite useful in applied science and engineering.
Atanasoff estimated that it would take eight hours to solve a set of equations with eight unknowns using a
Marchant calculator, and 381 hours to solve 29 equations for 29 unknowns. The Atanasoff-Berry computer was
able to complete the task in under an hour. The first problem run on the ENIAC, a numerical simulation used in
the design of the hydrogen bomb, required 20 seconds, as opposed to forty hours using mechanical calculators.
Eckert and Mauchly later developed what was arguably the first commercially successful computer, the
UNIVAC; in 1952, 45 minutes after the polls closed and with 7% of the vote counted, UNIVAC predicted
Eisenhower would defeat Stevenson with 438 electoral votes (he ended up with 442).




Second Generation (1954-1962)
The second generation saw several important developments at all levels of computer system design, from the
technology used to build the basic circuits to the programming languages used to write scientific applications.


Electronic switches in this era were based on discrete diode and transistor technology with a switching time of
approximately 0.3 microseconds. The first machines to be built with this technology include TRADIC at Bell
Laboratories in 1954 and TX-0 at MIT's Lincoln Laboratory. Memory technology was based on magnetic cores
which could be accessed in random order, as opposed to mercury delay lines, in which data was stored as an
acoustic wave that passed sequentially through the medium and could be accessed only when the data moved
by the I/O interface.
Important innovations in computer architecture included index registers for controlling loops and floating point
units for calculations based on real numbers. Prior to this, accessing successive elements in an array was quite
tedious and often involved writing self-modifying code (programs which modified themselves as they ran; at
the time viewed as a powerful application of the principle that programs and data were fundamentally the
same, this practice is now frowned upon as extremely hard to debug and is impossible in most high level
languages). Floating point operations were performed by libraries of software routines in early computers, but
were done in hardware in second generation machines.
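The index-register idea can be sketched as follows. This is only an illustration in Python; real second-generation machines did this in assembly language, and the memory layout here is invented.

```python
# Index-register sketch: one fixed loop body, with the operand address
# computed as base + index on each pass, instead of rewriting the
# instruction itself (self-modifying code).
memory = [0] * 16
memory[4:9] = [3, 1, 4, 1, 5]     # an array stored at base address 4

base, length = 4, 5
total = 0
for index in range(length):        # the index register counts the loop
    total += memory[base + index]  # effective address = base + index
print(total)  # 14
```

Without an index register, the same loop required either one copy of the add instruction per element or code that overwrote its own address field on every iteration.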

During this second generation many high level programming languages were introduced, including FORTRAN
(1956), ALGOL (1958), and COBOL (1959). Important commercial machines of this era include the IBM 704
and its successors, the 709 and 7094. The latter introduced I/O processors for better throughput between I/O
devices and main memory.
The second generation also saw the first two supercomputers designed specifically for numeric processing in
scientific applications. The term ``supercomputer'' is generally reserved for a machine that is an order of
magnitude more powerful than other machines of its era. Two machines of the 1950s deserve this title. The
Livermore Atomic Research Computer (LARC) and the IBM 7030 (aka Stretch) were early examples of machines
that overlapped memory operations with processor operations and had primitive forms of parallel processing.




Third Generation (1963-1972)

The third generation brought huge gains in computational power. Innovations in this era include the use of
integrated circuits, or ICs (semiconductor devices with several transistors built into one physical component),
semiconductor memories starting to be used instead of magnetic cores, microprogramming as a technique for
efficiently designing complex processors, the coming of age of pipelining and other forms of parallel processing
, and the introduction of operating systems and time-sharing.


The first ICs were based on small-scale integration (SSI) circuits, which had around 10 devices per circuit (or
``chip''), and evolved to the use of medium-scale integrated (MSI) circuits, which had up to 100 devices per
chip. Multilayered printed circuits were developed and core memory was replaced by faster, solid state
memories. Computer designers began to take advantage of parallelism by using multiple functional units,
overlapping CPU and I/O operations, and pipelining (internal parallelism) in both the instruction stream and the
data stream. In 1964, Seymour Cray developed the CDC 6600, which was the first architecture to use
functional parallelism. By using 10 separate functional units that could operate simultaneously and 32
independent memory banks, the CDC 6600 was able to attain a computation rate of 1 million floating point
operations per second (1 Mflops). Five years later CDC released the 7600, also developed by Seymour Cray.
The CDC 7600, with its pipelined functional units, is considered to be the first vector processor and was capable
of executing at 10 Mflops. The IBM 360/91, released during the same period, was roughly twice as fast as the
CDC 6600. It employed instruction lookahead, separate floating point and integer functional units, and a pipelined
instruction stream. The IBM 360/195 was comparable to the CDC 7600, deriving much of its performance from
a very fast cache memory. The SOLOMON computer, developed by Westinghouse Corporation, and the ILLIAC
IV, jointly developed by Burroughs, the Department of Defense and the University of Illinois, were
representative of the first parallel computers. The Texas Instruments Advanced Scientific Computer (TI-ASC)
and the STAR-100 of CDC were pipelined vector processors that demonstrated the viability of that design and
set the standards for subsequent vector processors.


Early in this third generation Cambridge and the University of London cooperated in the development of
CPL (Combined Programming Language, 1963). CPL was, according to its authors, an attempt to capture only
the important features of the complicated and sophisticated ALGOL. However, like ALGOL, CPL was large with
many features that were hard to learn. In an attempt at further simplification, Martin Richards of Cambridge
developed a subset of CPL called BCPL (Basic Combined Programming Language, 1967). In 1970 Ken
Thompson of Bell Labs developed yet another simplification of CPL called simply B, in connection with an early
implementation of the UNIX operating system.


Fourth Generation (1972-1984)

The next generation of computer systems saw the use of large scale integration (LSI - 1000 devices per chip)
and very large scale integration (VLSI - 100,000 devices per chip) in the construction of computing elements.
At this scale entire processors fit onto a single chip, and for simple systems the entire computer (processor,
main memory, and I/O controllers) can fit on one chip. Gate delays dropped to about 1ns per gate.


Semiconductor memories replaced core memories as the main memory in most systems; until this time the use
of semiconductor memory in most systems was limited to registers and cache. During this period, high speed
vector processors, such as the CRAY 1, CRAY X-MP and CYBER 205 dominated the high performance computing
scene. Computers with large main memory, such as the CRAY 2, began to emerge. A variety of parallel
architectures began to appear; however, during this period the parallel computing efforts were of a mostly
experimental nature and most computational science was carried out on vector processors. Microcomputers and
workstations were introduced and saw wide use as alternatives to time-shared mainframe computers.


Developments in software include very high level languages such as FP (functional programming) and Prolog
(programming in logic). These languages tend to use a declarative programming style as opposed to the
imperative style of Pascal, C, FORTRAN, et al. In a declarative style, a programmer gives a mathematical
specification of what should be computed, leaving many details of how it should be computed to the compiler
and/or runtime system. These languages are not yet in wide use, but are very promising as notations for
programs that will run on massively parallel computers (systems with over 1,000 processors). Compilers for
established languages started to use sophisticated optimization techniques to improve code, and compilers for
vector processors were able to vectorize simple loops (turn loops into single instructions that would initiate an
operation over an entire vector).
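The contrast between the two styles, and the kind of loop a vectorizing compiler turns into a single vector operation, can be sketched as follows. This is a Python illustration only; the languages named in the text used different syntax.

```python
# Imperative style: spell out how to compute, element by element.
a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
c = [0.0] * len(a)
for i in range(len(a)):
    c[i] = a[i] + b[i]

# Declarative style: state what is wanted and let the language map it
# over the whole vector -- the form a vectorizing compiler aims for.
c_declarative = [x + y for x, y in zip(a, b)]

print(c == c_declarative)  # True
```

In both cases the result is the elementwise sum; the difference is who decides the order of operations, the programmer or the compiler/runtime.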


Two important events marked the early part of the fourth generation: the development of the C programming
language and the UNIX operating system, both at Bell Labs. In 1972, Dennis Ritchie, seeking to meet the
design goals of CPL and generalize Thompson's B, developed the C language. Thompson and Ritchie then used
C to write a version of UNIX for the DEC PDP-11. This C-based UNIX was soon ported to many different
computers, relieving users from having to learn a new operating system each time they changed computer
hardware. UNIX or a derivative of UNIX is now a de facto standard on virtually every computer system.


An important event in the development of computational science was the publication of the Lax report. In 1982,
the US Department of Defense (DOD) and National Science Foundation (NSF) sponsored a panel on Large Scale
Computing in Science and Engineering, chaired by Peter D. Lax. The Lax Report stated that aggressive and
focused foreign initiatives in high performance computing, especially in Japan, were in sharp contrast to the
absence of coordinated national attention in the United States. The report noted that university researchers had
inadequate access to high performance computers. One of the first and most visible of the responses to the Lax
report was the establishment of the NSF supercomputing centers. Phase I of this NSF program was designed to
encourage the use of high performance computing at American universities by making cycles and training on
three (and later six) existing supercomputers immediately available. Following this Phase I stage, in 1984-1985
NSF provided funding for the establishment of five Phase II supercomputing centers.


The Phase II centers, located in San Diego (San Diego Supercomputer Center); Illinois (National Center for
Supercomputing Applications); Pittsburgh (Pittsburgh Supercomputing Center); Cornell (Cornell Theory
Center); and Princeton (John von Neumann Center), have been extremely successful at providing computing
time on supercomputers to the academic community. In addition they have provided many valuable training
programs and have developed several software packages that are available free of charge. These Phase II
centers continue to augment the substantial high performance computing efforts at the National Laboratories,
especially the Department of Energy (DOE) and NASA sites.




Fifth Generation (1984-1990)

The development of the next generation of computer systems is characterized mainly by the acceptance of
parallel processing. Until this time parallelism was limited to pipelining and vector processing, or at most to a
few processors sharing jobs. The fifth generation saw the introduction of machines with hundreds of processors
that could all be working on different parts of a single program. The scale of integration in semiconductors
continued at an incredible pace - by 1990 it was possible to build chips with a million components - and
semiconductor memories became standard on all computers.


Other new developments were the widespread use of computer networks and the increasing use of single-user
workstations. Prior to 1985 large scale parallel processing was viewed as a research goal, but two systems
introduced around this time are typical of the first commercial products to be based on parallel processing. The
Sequent Balance 8000 connected up to 20 processors to a single shared memory module (but each processor
had its own local cache). The machine was designed to compete with the DEC VAX-780 as a general purpose
Unix system, with each processor working on a different user's job. However Sequent provided a library of
subroutines that would allow programmers to write programs that would use more than one processor, and the
machine was widely used to explore parallel algorithms and programming techniques.


The Intel iPSC-1, nicknamed ``the hypercube'', took a different approach. Instead of using one memory
module, Intel connected each processor to its own memory and used a network interface to connect
processors. This distributed memory architecture meant memory was no longer a bottleneck and large systems
(using more processors) could be built. The largest iPSC-1 had 128 processors. Toward the end of this period a
third type of parallel processor was introduced to the market. In this style of machine, known as a data-parallel
or SIMD, there are several thousand very simple processors. All processors work under the direction of a single
control unit; i.e. if the control unit says ``add a to b'' then all processors find their local copy of a and add it to
their local copy of b. Machines in this class include the Connection Machine from Thinking Machines, Inc., and
the MP-1 from MasPar, Inc.
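The single-control-unit model can be mimicked in a few lines. This is a hedged sketch of the SIMD idea only; the broadcast hardware of real machines like the Connection Machine was far more elaborate.

```python
# SIMD sketch: one control unit broadcasts "add a to b", and every
# processor applies that same instruction to its own local copies.
processors = [
    {"a": 1, "b": 10},
    {"a": 2, "b": 20},
    {"a": 3, "b": 30},
]

def broadcast(instruction):
    """The control unit: issue one instruction to all processors."""
    for p in processors:           # conceptually simultaneous
        instruction(p)

broadcast(lambda p: p.update(b=p["a"] + p["b"]))  # "add a to b"
print([p["b"] for p in processors])  # [11, 22, 33]
```

Every processor executed the identical instruction; only the local data differed, which is exactly the data-parallel discipline described above.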


Scientific computing in this period was still dominated by vector processing. Most manufacturers of vector
processors introduced parallel models, but there were very few (two to eight) processors in these parallel
machines. In the area of computer networking, both wide area network (WAN) and local area network (LAN)
technology developed at a rapid pace, stimulating a transition from the traditional mainframe computing
environment toward a distributed computing environment in which each user has their own workstation for
relatively simple tasks (editing and compiling programs, reading mail) but shares large, expensive resources
such as file servers and supercomputers. RISC technology (a style of internal organization of the CPU) and
plummeting costs for RAM brought tremendous gains in computational power of relatively low cost workstations
and servers. This period also saw a marked increase in both the quality and quantity of scientific visualization.




Sixth Generation (1990 - )

Transitions between generations in computer technology are hard to define, especially as they are taking place.
Some changes, such as the switch from vacuum tubes to transistors, are immediately apparent as fundamental
changes, but others are clear only in retrospect. Many of the developments in computer systems since 1990
reflect gradual improvements over established systems, and thus it is hard to claim they represent a transition
to a new ``generation'', but other developments will prove to be significant changes.




Kinds Of Computers
(According To Size & Power)


Computers can be generally classified by size and power as follows:


   i.    Personal Computer
  ii.    Workstation
 iii.    Minicomputer
 iv.     Mainframe
  v.     Supercomputer



Personal Computer

A small, single-user computer based on a microprocessor. In addition to the microprocessor, a personal
computer has a keyboard for entering data, a monitor for displaying information, and a storage device for
saving data.


A small, relatively inexpensive computer designed for an individual user. In price, personal computers range
anywhere from a few hundred dollars to thousands of dollars. All are based on the microprocessor technology
that enables manufacturers to put an entire CPU on one chip. Businesses use personal computers for word
processing, accounting, desktop publishing, and for running spreadsheet and database management
applications. At home, the most popular use for personal computers is for playing games.
Personal computers first appeared in the late 1970s. One of the first and most popular personal computers was
the Apple II, introduced in 1977 by Apple Computer. During the late 1970s and early 1980s, new models and
competing operating systems seemed to appear daily. Then, in 1981, IBM entered the fray with its first
personal computer, known as the IBM PC. The IBM PC quickly became the personal computer of choice, and
most other personal computer manufacturers fell by the wayside. One of the few companies to survive IBM's
onslaught was Apple Computer, which remains a major player in the personal computer marketplace.

Other companies adjusted to IBM's dominance by building IBM clones, computers that were internally almost
the same as the IBM PC, but that cost less. Because IBM clones used the same microprocessors as IBM PCs,
they were capable of running the same software. Over the years, IBM has lost much of its influence in directing
the evolution of PCs. Many of its innovations, such as the MCA expansion bus and the OS/2 operating system,
have not been accepted by the industry or the marketplace.

Today, the world of personal computers is basically divided between Apple Macintoshes and PCs. The principal
characteristics of personal computers are that they are single-user systems and are based on microprocessors.
However, although personal computers are designed as single-user systems, it is common to link them
together to form a network. In terms of power, there is great variety. At the high end, the distinction between
personal computers and workstations has faded. High-end models of the Macintosh and PC offer the same
computing power and graphics capability as low-end workstations by Sun Microsystems, Hewlett-Packard, and
DEC.




Workstation
A powerful, single-user computer. A workstation is like a personal computer, but it has a more powerful
microprocessor and a higher-quality monitor.


1. A type of computer used for engineering applications (CAD/CAM), desktop publishing, software
development, and other types of applications that require a moderate amount of computing power and
relatively high quality graphics capabilities.
Workstations generally come with a large, high-resolution graphics screen, at least 64 MB (megabytes) of RAM,
built-in network support, and a graphical user interface. Most workstations also have a mass storage device
such as a disk drive, but a special type of workstation, called a diskless workstation, comes without a disk
drive. The most common operating systems for workstations are UNIX and Windows NT.

In terms of computing power, workstations lie between personal computers and minicomputers, although the
line is fuzzy on both ends. High-end personal computers are equivalent to low-end workstations. And high-end
workstations are equivalent to minicomputers.

Like personal computers, most workstations are single-user computers. However, workstations are typically
linked together to form a local-area network, although they can also be used as stand-alone systems.

2. In networking, workstation refers to any computer connected to a local-area network. It could be a
workstation or a personal computer.

Workstation also is spelled work station or work-station.




Minicomputer

A midsized, multi-user computer. In size and power, minicomputers lie between workstations and mainframes. In
the past decade, the distinction between large minicomputers and small mainframes has blurred, however, as has
the distinction between small minicomputers and workstations. But in general, a minicomputer is a
multiprocessing system capable of supporting from about 4 to 200 users simultaneously.




Mainframe
A powerful multi-user computer capable of supporting many hundreds or thousands of users simultaneously. A
very large and expensive computer capable of supporting hundreds, or even thousands, of users
simultaneously. In the hierarchy that starts with a simple microprocessor (in watches, for example) at the
bottom and moves to supercomputers at the top, mainframes are just below supercomputers. In some ways,
mainframes are more powerful than supercomputers because they support more simultaneous programs. But
supercomputers can execute a single program faster than a mainframe. The distinction between small
mainframes and minicomputers is vague, depending really on how the manufacturer wants to market its
machines.




Supercomputer
An extremely fast computer that can perform hundreds of millions of instructions per second. The fastest type
of computer. Supercomputers are very expensive and are employed for specialized applications that require
immense amounts of mathematical calculations. For example, weather forecasting requires a supercomputer.
Other uses of supercomputers include animated graphics, fluid dynamic calculations, nuclear energy research,
and petroleum exploration.
The chief difference between a supercomputer and a mainframe is that a supercomputer channels all its power
into executing a few programs as fast as possible, whereas a mainframe uses its power to execute many
programs concurrently.




Computer Components

 CPU
 Motherboard
 Hard Drive
 Video Card
 Memory
 Cases
 CD-ROM/DVD-ROM
 SCSI Card
 Monitor
 Printer
 Modem
 Audio
 Digital Cameras
 Digital Camcorders
 Cooling
 Input Devices


CPU (Central Processing Unit)

So what's a CPU? It stands for Central Processing Unit. Many users erroneously refer to the whole computer
box as the CPU. In fact, the CPU itself is only about 1.5 inches square. The CPU does exactly what its name
says: it is the control unit that processes all of the instructions for the computer. Consider it to be the "brain"
of the computer. It does all the thinking. So, would you like to have a fast or a slow brain? Obviously, the
answer to this question makes the CPU the most important part of the computer, and its speed the most
significant specification. The processor's (CPU's) speed is given as a MHz or GHz rating; 3 GHz is the same as
3,000 MHz. In today's computers, the video cards, sound cards, etc. also process instructions, but the majority
of the burden lies on the CPU.
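Here's a quick sketch in Python of what those speed ratings mean as raw numbers (keep in mind that clock
cycles per second is not the same thing as instructions per second - modern CPUs can execute more than one
instruction per cycle, so this is just the unit conversion):

```python
# Clock speed conversions: 1 GHz = 1,000 MHz = 1,000,000,000 cycles per second.
def ghz_to_mhz(ghz):
    """Convert a clock rating in GHz to MHz."""
    return ghz * 1000

def cycles_per_second(ghz):
    """Raw clock cycles per second for a given GHz rating."""
    return int(ghz * 1_000_000_000)

print(ghz_to_mhz(3))         # 3000 (MHz)
print(cycles_per_second(3))  # 3000000000 (cycles every second)
```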




Motherboard

The best way to describe the motherboard goes along well with my human body analogy that I used for the
CPU. The CPU is the brain, and the motherboard is the nervous system. Therefore, just as a person would
want to have fast communication to the body parts, you want fast communication between the parts of your
computer. Fast communication isn't as important as reliable communication though. If your brain wanted to
move your arm, you want to be sure the nervous system can accurately and consistently carry the signals to do
that! Thus, in my opinion, the motherboard is the second most important part of the computer.
The motherboard is the circuit board to which all the other components of the computer connect in some way.
The video card, sound card, IDE hard drive, etc. all plug into the motherboard's various slots and connectors.
The CPU also plugs into the motherboard via a Socket or a Slot.


Hard Disk
As the primary storage device of the computer, the hard drive is very important. The hard
drive stores most of a computer's information, including the operating system and all of your programs. Having
a fast CPU is not of much use if you have a slow hard drive, because the CPU will just
spend time waiting for information from the hard drive. During this time, the CPU is just twiddling its
thumbs...
The hard drive stores all the data on your computer - your text documents, pictures, programs, etc. If
something goes wrong with your hard drive, it is possible that all your data could be lost forever. Today's hard
drives have become much more reliable, but hard drives are still one of the components most likely to fail
because they are one of the few components with moving parts. The hard drive has spinning discs (platters)
that store information as 1s and 0s, very densely packed around the disc.
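To make the "1s and 0s" idea concrete, here's a small Python sketch showing the bits behind a short piece of
text - the same kind of pattern the drive ultimately records on its platters:

```python
# Everything on a hard drive is ultimately stored as 1s and 0s. This shows
# the binary representation of each byte of a short text string.
def text_to_bits(text):
    """Return the 1s and 0s for each byte of the UTF-8 encoded text."""
    return " ".join(format(byte, "08b") for byte in text.encode("utf-8"))

print(text_to_bits("Hi"))  # 01001000 01101001
```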


Video cards

Video cards provide the means for the computer to "talk" to your monitor so it can display what the computer
is doing. Older video cards were "2D" or "3D," but today's are all "2D/3D" combos. The 3D is mostly useful
for gaming, but it can also be useful in applications such as 3D modeling. Video cards have their own advanced
processing chips that make all kinds of calculations to make scenes look more realistic. The many video cards
out there are based on a much smaller number of different chipsets (run at different speeds or with slight
variations in the chipsets). Different companies buy these chipsets and make their own versions of the
cards based on them. For the most part, video cards based on the same chipset with the same amount
of RAM are about equivalent in performance. However, some brands will use faster memory or other small
optimizations to improve the speed. Extras like "dual head" (support for two monitors) or
better cooling fans may also distinguish different brands. At any rate, the first decision to make is what chipset
you want your video card to use. If you aren't interested in games, then the choice of chipset isn't too difficult
- just about any will do for 2D desktop applications. There's no point in buying a video card over $100 if
you don't plan to play games.




Memory

All programs, instructions, and data must be loaded into system memory before the computer can use them.
The system will hold recently used programs, instructions, and data in memory if there is room. This provides
quick access (much faster than hard drives) to information. The more memory you have, the more information
you will have fast access to, and the better your computer will perform. Memory is much like the short-term
memory in your brain. It holds your most recent information for quick access. Just as you want to accurately
remember this information in your head, you want your computer's memory to hold the correct information as
well, or problems will obviously occur. Bad memory is one of the more common causes of computer crashes,
and also one of the most difficult problems to diagnose. Because of this, making sure you get good RAM the
first time around is very important.
There are many, many different types of memory for different tasks. The main ones today are DDR PCxx00
SDRAM DIMMs (this includes PC2700, PC3200, etc.) and Direct RDRAM RIMMs.
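The idea of keeping recently used information close at hand for quick access is the same caching principle
used throughout computing. A small Python sketch of it (the slow "disk read" here is only simulated):

```python
from functools import lru_cache

# Memory acts like a small, fast cache in front of the slow hard drive.
# An LRU (least-recently-used) cache applies the same idea in software:
# recently requested results are kept close at hand and returned instantly.
@lru_cache(maxsize=128)
def load_record(record_id):
    """Pretend this reads a record from a slow hard drive."""
    return f"record-{record_id}"

load_record(7)  # first call: takes the slow path (a simulated "disk read")
load_record(7)  # second call: served instantly from the cache
print(load_record.cache_info().hits)  # 1 cache hit so far
```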


Computer's Case

The computer's case serves several functions. The motherboard is bolted down to the case so that the case
protects it and all other components. The metal in the case also serves to ground the motherboard. The case's
power supply converts power into a form the motherboard can use.


A good case should have ample expansion bays so you can add additional internal and external devices. It
should have a power supply strong enough to run all the components you plan to add to your computer. The
case should be designed so that air flows in through the front and out through the back to properly carry away
hot air. The case also needs to be sturdy enough to prevent components from moving around.




CD/DVD-ROM Drive

CD-ROM drives are necessary today for most programs. A single CD can store up to 650 MB of data (newer CD-
Rs allow for 700 MB of data, perhaps more with "overburn"). Fast CD-ROM drives have been a big topic in the
past, but all of today's CD-ROM drives are sufficiently fast. Of course, it's nice to have the little bits of extra
speed. However, when you consider CD-ROM drives are generally used just to install a program or copy CDs,
both of which are usually done rarely on most users' computers, the extra speed isn't usually very important.
The speed can play a big role if you do a lot of CD burning at high speeds or some audio extraction from audio
CDs (i.e. converting CDs to MP3s).


CD-R/RW (which stands for Recordable / Rewritable) drives (aka burners, writers) allow a user to create their
own CDs of audio and/or data. These drives are great for backup purposes (back up your computer's hard drive
or your purchased CDs) and for creating your own audio CD compilations (not to mention other things like
home movies, multimedia presentations, etc.).


DVD-ROM drives can store up to 4.7 GB of data on a single-layer disc, or about 7 times the capacity of a
regular CD - a very large storage medium. DVDs look about the same and are the same size as a CD-ROM. DVD
drives can also read CD-ROM discs, so you don't usually need a separate CD-ROM drive. DVD drives have
become low enough in price that there isn't much point in purchasing a CD-ROM drive instead of a DVD-ROM
drive. Some companies even make CD burner drives that will also read DVDs (all in one). DVD's most practical
use is movies. The DVD format allows for much higher resolution digital recording that looks much clearer than
VCR recordings.
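For a rough sense of scale, a single-layer DVD holds 4.7 GB versus a CD's 650 MB. A quick Python check of the
ratio:

```python
# Rough capacity comparison between a 650 MB CD and a 4.7 GB single-layer DVD.
CD_MB = 650
DVD_MB = 4.7 * 1000  # 4,700 MB (using decimal megabytes, as drive makers do)

ratio = DVD_MB / CD_MB
print(f"A DVD holds about {ratio:.1f} times as much as a CD")  # about 7.2 times
```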


DVD recordable drives are available in a couple of different formats - DVD-R or DVD+R with a RW version of
each. These are slightly different discs and drives (although some drives support writing to both formats). One
is not much better than the other, so it really boils down to price of the media (and also availability of the
media).




SCSI card

A SCSI card is a card that will control the interface between SCSI versions of hard drives, CD-ROM drives, CD-
ROM burners, removable drives, external devices such as scanners, and any other SCSI components. Most fit in
a PCI slot and there is a wide range of types. The three main types of connectors on these cards are 25-pin for
SCSI-1, 50-pin for Narrow SCSI, and 68-pin for Wide SCSI (and Ultra-Wide SCSI, Ultra2-SCSI, Ultra160 SCSI,
and Ultra 320 SCSI - all of which use a 68 pin connector).


SCSI controllers provide fast access to very fast SCSI hard drives. They can be much faster than the IDE
controllers that are already integrated into your computer's motherboard. SCSI controllers have their own
advanced processing chips, which allows them to rely less on the CPU for handling instructions than IDE
controllers do.


For the common user, SCSI controllers are overkill, but for high end servers and/or the performance freaks of
the world, SCSI is the way to go. SCSI controllers are also much more expensive than the free IDE controller
already included on your motherboard. There is also a large premium in price for the SCSI hard drives
themselves. Unless you have deep pockets, there isn't much of a point in going with a SCSI controller.


Many people buy SCSI controllers just for use with their CD-ROM burners and CD-ROM drives (these drives
must be SCSI drives of course).


SCSI cards also have the ability to support up to 15 devices or more per card, while a single IDE controller is
limited to only 4 devices (some motherboards now come with more than one IDE controller though). SCSI cards
allow these drives to be in a chain along the cable. Each drive on the cable has to have a separate SCSI ID (this
can be set by jumpers on the drive). The last drive on the end of the cable (or the cable itself) has to
"terminate" the chain (you turn termination on by setting a termination jumper on the drive - or use a cable
that has a terminator at the end of it).
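The ID and termination rules can be summed up in a toy validity check - purely illustrative, a model of the
rules rather than real SCSI configuration code:

```python
# Toy model of a SCSI chain: each device needs a unique ID, and only the last
# device on the cable should have termination enabled.
def chain_is_valid(devices):
    """devices: list of (scsi_id, terminated) tuples, in cable order."""
    ids = [scsi_id for scsi_id, _ in devices]
    unique_ids = len(ids) == len(set(ids))               # no two devices share an ID
    last_terminated = devices[-1][1]                     # the chain must be terminated
    middle_clear = all(not t for _, t in devices[:-1])   # only the last device terminates
    return unique_ids and last_terminated and middle_clear

# A hard drive (ID 0), a CD burner (ID 3), and a scanner (ID 6, terminated):
print(chain_is_valid([(0, False), (3, False), (6, True)]))  # True
print(chain_is_valid([(0, False), (0, False), (6, True)]))  # False: duplicate ID
```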




Monitors

Monitors obviously display what is going on in your computer. They can run at various resolutions and refresh
rates. 640x480 is the default resolution for the Windows operating systems (this is a low resolution where
objects appear large and blocky). 640x480 just means that 640 pixels are fit across the top of your monitor
and 480 up and down. Most users prefer higher resolutions such as 800x600 or 1024x768 all the way up to
1600x1200 (and higher for graphics professionals). The higher resolutions make objects smaller, but clearer
(because more pixels are fit in the screen). You can fit more objects on a screen when it is in a higher
resolution. Larger monitors are better for running at the higher resolutions. If you run a high resolution on a
small monitor, the text may be hard to read because of its small size, despite the clarity.
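A quick Python sketch of how many pixels the common resolutions actually draw - which is why higher
resolutions look sharper but make everything smaller:

```python
# Total pixels on screen at common resolutions: width times height.
def total_pixels(width, height):
    return width * height

for w, h in [(640, 480), (800, 600), (1024, 768), (1600, 1200)]:
    print(f"{w}x{h} = {total_pixels(w, h):,} pixels")
# 640x480 draws 307,200 pixels; 1600x1200 draws 1,920,000 - over six times as many.
```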


The refresh rate, measured in Hz, is how many times per second the monitor can refresh (redraw) the image
on the screen. The faster it can do this, the smoother your picture will be and the less "flicker" you will see.


The monitor has a lot to do with the quality of the picture produced by your video card, but it doesn't actually
"produce" the graphics - the video card does all this processing. Still, if your video card is producing a bright,
detailed picture and your monitor is dim and blurry, dim and blurry is what you will see.




Printer

As you know, a printer outputs data from your computer onto a piece of paper. There are many different types
of printers (the most common are laser and inkjet), and some printers are better than others at different tasks
(printing photographs, clear text, etc.). Laser printers aren't necessarily better quality than inkjets anymore,
although they once were. If you want to be able to print in color, inkjet printers are also the best option for
the cost conscious. Some of today's "office inkjet" printers have other functions as well, including scanning,
faxing, copying, etc. While the scan and copy quality usually aren't that great, the quality is generally good
enough for most office / home office situations.


Modem

If you are at home, then you are most likely using a modem to view this page right now (dial-up modem, cable
modem, or DSL modem). The modem is what handles the communication between your computer and the
computers you are connecting to over the Internet. If you're on a network, then you're using a network card
(most likely an Ethernet card - and that may connect to your cable or DSL modem). A dial-up modem uses
your phone line to transfer data to and from the other computers. Newer cable modems and DSL modems
provide at least 10 times the speed of a regular phone modem. These are usually external and plug into a
network card in your computer.


Modem stands for "modulator / demodulator" and it encodes and decodes signals sent to and from the network
servers. Good modems should be able to do all the encoding / decoding work on their own without having to
rely on your computer's CPU to do the work.
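To sketch the modulate/demodulate idea, here's a toy frequency-shift-keying example in Python. The tone pair
is borrowed from the old Bell 103 dial-up standard; real modem standards are far more elaborate than this:

```python
# Toy FSK (frequency-shift keying): a 0 bit becomes a low tone, a 1 bit a
# high tone. Modulation maps bits to tones; demodulation maps them back.
LOW_HZ, HIGH_HZ = 1070, 1270  # originate-side tone pair from Bell 103

def modulate(bits):
    """Turn a string of bits into a list of tone frequencies."""
    return [HIGH_HZ if b == "1" else LOW_HZ for b in bits]

def demodulate(tones):
    """Turn a list of tone frequencies back into a string of bits."""
    return "".join("1" if t == HIGH_HZ else "0" for t in tones)

signal = modulate("1011")
print(signal)              # [1270, 1070, 1270, 1270]
print(demodulate(signal))  # 1011
```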




Audio

Most computers require a sound card to decode sound files into audio that can be sent to your speakers (some
have it built into the motherboard). Newer sound cards connect to PCI slots, but some of the older ones
connect to ISA slots on your motherboard. Good sound cards allow you to play games and hear "3D audio" that
makes it sound like certain events are actually happening behind you. Some sound cards even do Dolby 5.1
decoding to allow you to listen to DVDs with full surround sound.


Computer speakers are different from regular stereo speakers in that they need to be shielded. They are often
more expensive, and there are fewer high quality computer speakers than home stereo speakers. Speakers
come in a variety of formats including quad speaker setups / 4.1 (2 front satellite speakers, 2 rear satellite
speakers, and a subwoofer), 2 speakers setups, 2.1 speaker setups (2 satellite speakers and a subwoofer), and
5.1 speaker sets (2 front satellite speakers, 1 front center channel speaker, 2 rear satellite speakers, and a
subwoofer).


Digital cameras

Digital cameras record images onto flash memory instead of onto film. They're great because you can see the
result right away on the camera's screen. They also allow you to crop images however you'd like, print them at
home on your computer right away, and selectively print pictures rather than wasting money on film and
development costs of pictures you don't really want. If you prefer, you can also send your digital pictures off for
processing and printing.
One other item to note - with regular 35 mm cameras you can easily get your pictures in a digital format.
Many film development companies offer an Internet upload option (at Wal-Mart, for example, this is just 97
cents for the entire roll in addition to your development costs).




Digital camcorders

Digital camcorders still store video onto a tape, but they store it digitally and at a higher resolution than
analog camcorders. With a digital camcorder that has FireWire out (and FireWire in on your computer), you
don't need a video capture card to get video onto your PC, edit it, and create DVDs.


CPU cooling fans

I'm going to focus on CPU cooling fans here, but also discuss case fans a little. Obviously, these keep your CPU
and case cool! If they get too hot, your system can crash and your CPU could eventually fail.


Input devices
Mouse, keyboard, floppy drive, scanner, joystick, CD-ROM drive, flash drive, etc.





Usage Of Computer In Our Daily Life

Technology has a big influence on our daily life. Electronic devices, multimedia, and computers are things
we have to deal with every day. The Internet especially is becoming more and more important for nearly
everybody, as it is one of the newest and most forward-looking media and surely “the” medium of the future.
Therefore we thought that it would be worthwhile to think about some good and bad aspects of how this
medium influences us, what impact it has on our social behaviour, and what the future will look like.

The Internet changed our life enormously; there is no doubt about that. There are many advantages of the
Internet that show you the importance of this new medium. What I want to say is that the Internet changed
our life in a positive way. First we have to make a distinction concerning the usage: you can use the Internet
at home for personal purposes or at work for professional purposes. Let's start with the first. To spend part of
the day on the Internet is quite normal for many people. They use this kind of medium to get information
about all kinds of topics. Maybe some of them are interested in chatting; probably they are members of a
community. Whatever you are looking for, you will find it. Even if you want very specific information, you will
find it in a short time. Traditionally, you often had to send a letter and then wait for the reply, or make some
telephone calls, and so on. In any case, the traditional way is the longer one. To put your own information on
the Internet is also possible. Create your own homepage, tell other users about your interests and what you
want - that's no problem at all.


As we all know, software costs a lot if you buy it legally. Free software and free music are available on the
Internet. You just have to download the program, the mp3 file, or whatever, and that's it. Why would you want
to pay more than you need to? Special websites are created just to give you the newest programs, or to tell
you where you can get them from. Napster might actually be the most famous one.


The computer is a fixed part of every modern office, and most of them also have access to the Internet.
Companies already present their products and services on the Internet, and so they become more flexible.
The next advantage I want to mention is faster development. Many universities and research institutions are
also linked. They are able to exchange experiences and novelties, and often they start new projects together.
If they are linked, they can save time and money.


Especially in the business sector, knowledge is power. If you are the leader with a product, a technology, or
just an idea, you are able to make a lot of money. To get into this position, the Internet can play an essential
part. Companies all over the world are online. If you want, it is no problem to exchange experiences; you will
hear new things and see some facts from another point of view. For this reason you will find new solutions and
new ways to go, so take this chance!

				