					Random Access Memory (RAM)

http://www.edbpriser.dk/review.asp?ID=1586

Random Access Memory (RAM) is the computer's temporary data store, where the operating
system (for example Windows), applications (programs, games, etc.) and the data in use at any
given moment are kept. When you start a program on the computer, this is where the data required
to run the application is loaded. Although an application is installed on the hard disk and available
from there, it cannot be used until it has been loaded into this temporary memory - RAM. When
you notice the computer working after you have started a program or game, it is because it is busy
transferring the necessary data into the computer's RAM. It is the computer's CPU (Central
Processing Unit) that executes the individual instructions the application requests. The picture
below roughly illustrates this relationship.




A good example is when the computer starts a program, such as your e-mail client. Before the
program appears on the screen for you as a user, the computer's CPU receives an instruction from
you when you start the program, normally with a mouse click. The CPU translates this instruction
and sends a message to the hard disk that the program in question must be loaded into memory.
The program now sits in the computer's memory, the RAM, ready for use. I will explain later why
this extra link, which the RAM undoubtedly is, is necessary, but I can already reveal that it is all
about speed.

To put RAM into a perspective that may be more tangible for people who do not know all of the
terms above, it can be compared to human short-term memory. What you are working on right now
sits in short-term memory (the RAM), and if you cannot quite remember what to do, you draw on
long-term memory (the hard disk). The same applies to the computer: if the RAM is full, or the
necessary data is not available there, additional data is fetched from the hard disk into the RAM.

As mentioned above, RAM is part of a complete computer (PC). RAM cannot function on its own,
and a computer cannot be used without RAM. The latter is not entirely true, since every computer
is equipped with a bare minimum of RAM. This minimum is extremely limited, however, and we
have to go many years back before a computer could really function without extra RAM modules.
Today's programs and games demand ever more RAM, so a computer without it would be
unthinkable today. As noted, that has not always been the case; before the first so-called Personal
Computers (PCs) came on the market in the 1980s, Bill Gates had reportedly said: "640K of
memory ought to be enough for anybody". That quickly proved wrong, and today the tendency is
that no matter how much memory is installed in a computer, more is always better.


How does RAM work:
DRAM can be thought of as a large table of cells arranged in columns and rows. The cells consist
of capacitors and hold one or more bits of data, depending on the specific chip configuration. The
table is accessed via its rows and columns, which in turn receive signals from RAS (Row Address
Strobe) and CAS (Column Address Strobe). The capacitors used in the cells to store data discharge
after a certain time, and must therefore be refreshed periodically so that data is not lost. A refresh
controller determines the interval between refreshes, while a refresh counter ensures that every row
is refreshed correctly.
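To make the row/column idea concrete, here is a minimal sketch in Python of a DRAM-like cell
table addressed by (row, column), with a naive refresh pass; the class and its retention parameter
are hypothetical illustrations of the principle, not a model of any real chip.

```python
# Toy model of a DRAM cell matrix: cells are addressed by (row, column),
# and every row must be refreshed periodically or its contents "leak" away.
class ToyDRAM:
    def __init__(self, rows, cols, retention_ticks=8):
        self.cells = [[0] * cols for _ in range(rows)]
        self.age = [0] * rows              # ticks since each row was refreshed
        self.retention = retention_ticks   # how long a row holds its charge

    def write(self, row, col, bit):
        self.cells[row][col] = bit
        self.age[row] = 0                  # accessing a row also refreshes it

    def read(self, row, col):
        if self.age[row] >= self.retention:
            raise RuntimeError(f"row {row} decayed: refresh came too late")
        self.age[row] = 0
        return self.cells[row][col]

    def tick(self):
        # One time step: every row's charge gets a little older.
        self.age = [a + 1 for a in self.age]

    def refresh_all(self):
        # The refresh counter walks over every row and rewrites it.
        self.age = [0] * len(self.age)

ram = ToyDRAM(rows=4, cols=8)
ram.write(2, 5, 1)
for _ in range(5):
    ram.tick()
ram.refresh_all()          # without this, the row would eventually decay
print(ram.read(2, 5))      # -> 1
```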
When the CPU needs information from memory, it sends a request, which is handled by the
memory controller. The memory controller forwards the request to the memory itself, and reports
back to the CPU when the information has been found and is ready for use. How long this cycle
takes - from the CPU to the memory controller to the memory itself and back again - varies with
the speed of the RAM, the system's bus speed, and a number of other factors.




The most important factors affecting a module's speed are:

Speed (MHz)

Burst timings

Data bus width

Total bandwidth

These factors are explained in detail throughout this RAM guide.

One last factor is how the memory is accessed, which happens through either a synchronous or an
asynchronous interface. The asynchronous interface in practice means slower memory, since the
RAM module can only perform one internal operation at a time, whereas the synchronous interface
allows several operations to be carried out simultaneously.


The difference between RAM and hard disks:
There are several marked differences between RAM and the hard disk. First and foremost, RAM is,
as mentioned earlier, only temporary memory, which is cleared when the computer is switched off.
Secondly, RAM is considerably faster, and that is the main reason it is needed at all. In this section
I will start by explaining the conceptual differences; the technical ones are covered later in the guide.

Imagine a metaphor where you have a desk and a filing cabinet in an office. A customer calls and
asks about their case. You get up, open the filing cabinet and find that customer's file. You talk to
the customer and note down the highlights of the conversation, after which you file the case again.
This situation transfers very well to the relationship between the RAM and the hard disk in a
computer. Your desk is the RAM: when files are lying there, they are available and you can quickly
look things up in them. If, in the example, you had already had the customer's file on the desk, you
would not have had to get up and search the filing cabinet for the information. Obviously, files
lying on the desk are much faster to access than those in the filing cabinet. The same goes for a
computer - but how big is the speed difference? For one thing, hard disk speed is measured in
milliseconds, while RAM is measured in nanoseconds. Another measure is the time it takes the
CPU to access the RAM and the hard disk respectively: for RAM it is roughly 200ns, while for the
hard disk it is 12,000,000ns, which means the RAM here is a good 60,000 times faster. To put it all
in perspective, the illustration below shows this converted into minutes.
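As a quick sanity check of those numbers, here is the arithmetic from the paragraph above done in
Python; the "minutes" analogy simply rescales both access times so that one RAM access takes one
minute.

```python
ram_ns = 200            # typical RAM access time from the text
disk_ns = 12_000_000    # hard disk access time from the text

ratio = disk_ns / ram_ns
print(f"RAM is {ratio:,.0f}x faster")        # -> RAM is 60,000x faster

# Rescale: if one RAM access took 1 minute, one disk access would take...
disk_minutes = ratio * 1
print(f"{disk_minutes / 60 / 24:.1f} days")  # -> ~41.7 days
```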




If you are now thinking that the hard disk is a slow piece of junk not worth investing in, or that the
researchers at least ought to develop it a bit further, remember the very essential difference: RAM
is temporary storage while the hard disk is permanent.

How much memory is necessary? (Storage capacity):
Perhaps you already know what it is like to work on a computer that does not have quite enough
memory. You often hear the hard disk working, the whole computer feels sluggish, and sometimes
you cannot start a new program without first closing another. So how do you know whether more
memory will help? The amount of memory you need depends primarily on two things. First and
foremost, it depends on what you use your computer for - how demanding and how many
applications you run. Secondly, the operating system you use also affects how much RAM you
should install.

The following should be read as guidelines for how much memory your computer ought to have,
based on the operating system and the applications you use.


Windows 9x (Windows 95 / Windows 98 / Windows ME):
Windows 9x requires a minimum of 32MB of RAM, while the optimum in most cases is 128MB or
more.
Looking at Windows XP, a minimum of 128MB of RAM is required; computers today are often
sold with at least 256MB, though 512MB of RAM is recommended.

If you mainly use your computer for word processing, spreadsheets, e-mail and a little Internet
surfing, between 128 and 256MB of RAM is recommended. With 128MB you will be able to run
the programs reasonably well, but will sometimes notice the computer running out of memory,
with heavy activity on the hard disk. With 256MB everything runs more smoothly, and only when
you have many programs open will you notice the computer struggling to keep up. If you use your
computer for full office suites (for example Microsoft Office), games, a fair amount of Internet
surfing, pictures and presentations, you need more RAM: at least 256MB is recommended here,
but preferably 512MB. If you really put your computer to work, and use it for multimedia with
pictures, video and sound alongside other tasks such as Internet, e-mail, word processing, etc., you
need a lot of RAM before the computer can handle your needs. Here the absolute minimum is
512MB, but in general the more the better.

*Note that Windows 9x is not optimized to take advantage of large amounts of RAM, so you will
not get the full benefit of installing more than 128MB. So what do you do if you have the
last-mentioned needs and would like to install, say, 256MB? Well, either you do it and accept that
the operating system will not utilize it 100% optimally, or you upgrade your operating system.


Windows 2000 Professional / Windows XP Home/Professional:
Windows 2000 and Windows XP are very similar operating systems, built on the same foundation;
XP is essentially a polished version of 2000. Both require an absolute minimum of 64MB of RAM
to run, but less than 128MB cannot be recommended, and my experience is furthermore that they
do not run optimally until you pass 256MB.

Both operating systems have a 4GB limit for RAM. With the 64-bit version of Windows XP,
however, up to 16GB can be installed. If you use your computer for less demanding tasks, such as
word processing, e-mail and so on, you can in theory get by with 64MB of RAM, though in
practice I have found 128MB to be a good investment. If you make greater demands on the tasks
your computer must handle, you should probably go to 256MB. Such tasks could be Internet
surfing with several browsers open at once, complex presentations, image editing, database
administration, less demanding software development (for example web development), and so on.
Even for these tasks it can be a speed advantage to install 384 or 512MB of RAM. If you expect the
computer to handle statistics programs, real-time video, animation, large-scale Internet surfing,
heavy network traffic, etc., you should install 256MB as a minimum and preferably up towards
1GB of RAM.


Linux:
The Linux operating system has now existed for some years and has gradually gained popularity.
Its RAM requirements are not much different from what the newer Windows systems demand.

How much RAM Linux supports at most depends on which version and which distribution
(RedHat, SuSe, etc.) you have installed. As a rule, the newest versions support 4GB of RAM, while
special server versions can support up to 64GB.

So how much RAM should you install? Well, if you use your computer for, say, word processing,
e-mail and simple image manipulation, you can get by with 48-80MB. As with Windows, the lower
value in the interval is Linux's own assessment of the minimum requirement. If your needs are
slightly more demanding, and you use your computer for multimedia presentations, databases,
Internet surfing, etc., you should have at least 80MB, but preferably more. If your needs are truly
demanding, with statistics programs, video calls, complex image editing, animation and the like,
you should have 112-512MB.


Macintosh:
Macintosh handles memory in a significantly different way from other operating systems. I will not
go into detail about this, but the consequence is that Macintosh computers require more RAM.
48MB is an absolute minimum, but less than 128MB cannot be recommended.

Again, the need for RAM depends on what you use your computer for. Word processing, e-mail
and similarly undemanding applications will require around 64MB of RAM, while heavier
applications can require 128MB and upwards.



*Note that the above assessments of the necessary amount of RAM are based on typical computer
use. For example, the number of applications you have open at the same time also affects how
much RAM you should install. As mentioned earlier, it is not far wrong to say that the more RAM
the better, regardless of what you use the computer for. It can also pay to invest in more memory
than you immediately need, so that you are a little future-proofed, since software upgrades often
come with higher minimum requirements.


Upgrading the computer's RAM:
If you have a slightly older computer that increasingly struggles to keep up, it can often pay to
upgrade it instead of scrapping it and buying a new one. An upgrade of the amount of RAM will
often produce a considerable speed increase.

If you decide to install more RAM in your computer, there are a number of things to be aware of.
First of all, check whether there are any free RAM slots in the computer. The picture below shows
a sketch of a motherboard with the RAM slots marked. If there are no free RAM slots, you have to
remove a RAM module and install one with a larger capacity if you want to upgrade.
Besides checking whether there is room on the motherboard, you must also find out which type of
RAM your motherboard supports (more on that later). There are both different types of RAM and
different speeds, and both can determine whether a particular RAM module fits your motherboard.
So you need to get hold of the manual for your motherboard, or find out which motherboard you
have and locate the manual on the Internet, for example on the manufacturer's website. If you
cannot find it, another option is to take out one of the already installed RAM modules, show it to a
dealer and buy the same type that way, just with a larger capacity.


Cooling:
In the past there was no such thing as cooling when it came to RAM modules, but as RAM
technology has evolved, and module capacities have grown larger and larger along with increased
speeds, heat output has also risen. It has become necessary to cool the RAM chips.

It is therefore advisable to look for RAM modules that have copper plates fitted on the outside to
cool the module. In theory this extends the RAM's lifetime, and it allows higher RAM speeds.


Specifications on EDBpriser:
The following are the specifications that are the most interesting and that can influence the choice
of RAM module.


Port Type (Model):
Depending on which motherboard you have in your computer, you also need RAM that fits the
slots found on the motherboard. For the inexperienced user this can seem like quite a science,
although it is actually fairly simple to determine. The problem is that several different RAM types
physically fit the same slots, so it is not enough that the RAM modules fit. Here you need the
motherboard manual to be sure (more on the different RAM types later in the guide).

The following is a short walkthrough of the different slot types that exist.


Single In-Line Memory Module (SIMM):
The first SIMM modules could transfer data 8 bits at a time and had 30 pins, hence the name
30-pin SIMMs. Later an improved version with 72 pins was developed, which could transfer data
32 bits at a time - 72-pin SIMMs.

SIMM modules are hardly used any more.


Dual In-line Memory Modules (DIMM):
DIMM slots are what you see on the vast majority of motherboards today. There are two variants,
a 168-pin and a 184-pin.

The primary difference between SIMM and DIMM modules is this: the pins on the two sides of a
SIMM RAM module are tied together, forming a single electrical contact with the SIMM slot on
the motherboard, whereas on a DIMM module they form two separate contacts. The more physical
differences include the length, which follows naturally from the number of pins. There is also a
small difference in how the modules are installed: a SIMM module is inserted at a slight angle,
while a DIMM module is inserted completely vertically.


Small Outline Dual In-line Memory Modules (SO-DIMM):
This slot type is used in laptops. Laptops are more compact and therefore need smaller
components, which is what SO-DIMM RAM modules offer. The only principal difference between
SO-DIMM and DIMM is that SO-DIMM is markedly smaller, precisely because it is intended for
laptops.

SO-DIMM modules come with 72, 144 or 200 pins, hence the names 72-pin SO-DIMM, which is
32-bit, 144-pin SO-DIMM, which is 64-bit, and 200-pin SO-DIMM, which is 72-bit.


Rambus In-line Memory Modules (RIMM):
RIMMs look confusingly similar to DIMMs, though they are slightly thicker. The higher transfer
speed (see later) produces more heat, so a RIMM module is covered by an aluminium shell, a
so-called heat spreader, to protect it against overheating.


Small Outline Rambus In-line Memory Modules (SO-RIMM):
SO-RIMM is the same as SO-DIMM, except that it uses the Rambus technology.


Memory types (Memory):
You do not need to worry too much about which slots your particular motherboard uses. Unless
your computer is very old - i.e. from before 1997 - it is almost certainly DIMM slots. Just check
your motherboard manual to see which slot type it supports. If you do not have the manual, you
can also use the Windows Control Panel to find out which motherboard you have, and then find the
slot type via the manufacturer's website.

What you do need to pay attention to is which RAM type to get. There are many different types of
RAM, and as mentioned in the previous section, some of them look confusingly alike. Below
follows a walkthrough of the best known ones.


Asynchronous Fast Page Mode (FPM) DRAM:
By implementing some special methods of accessing data in memory, it proved possible to reduce
the internal wait times for some types of accesses. The technology was nicknamed Fast Page
because the result was that a whole "page" of data could be kept active in the RAM. The advantage
was only noticeable with some applications, though, and the technology was quickly developed
further. Fast Page Mode (FPM) came into the world and was the standard for RAM from 1987 to
the mid-90s. In short, the advantage of FPM was simply that it was faster at accessing data in the
RAM than conventional DRAM.

The speed limit is 50ns.

FPM DRAM is very rarely seen nowadays. Should you nevertheless need this type of RAM, be
aware that it does not support bus speeds above 66MHz. FPM DRAM will typically run burst
timings of 5-3-3-3 at 66MHz.
Fits in a SIMM slot, and has 72 pins.


Asynchronous Extended Data Out (EDO) DRAM:
EDO RAM saw the light of day in 1995 and slowly outcompeted FPM RAM. EDO RAM is
slightly faster than FPM DRAM thanks to a newer, groundbreaking change in the way memory is
accessed. In short, EDO RAM can begin a new memory access before the previous access has
finished. It can also read from and write to the RAM at the same time, thereby skipping some steps
in the memory addressing process.
Like FPM, this RAM type is nearly extinct and is hardly produced any more. EDO RAM can reach
speeds of up to 100MHz, provided the motherboard supports it. The problem is precisely the
motherboards of that era, which rarely had a bus speed above 66MHz, so the advantage over FPM
RAM is in reality not very big in this respect. When it comes to burst timings there is something to
be gained: EDO RAM supports burst timings down to 5-2-2-2. Reportedly the RAM's access times
are also 40% faster than FPM RAM.

All of this theoretically increases speed by about 10-15%. In practice, however, it turned out to be
considerably less, in fact as little as 1%, which leads to the conclusion that an upgrade to EDO
RAM is almost insignificant. The speed limit is 50ns, the same as FPM DRAM.




Fits in a SIMM slot, and has 72 pins.


Burst Extended Data Out (BEDO) DRAM:
BEDO DRAM is yet another improvement of the conventional asynchronous DRAMs. The
improvement is that the burst timings have been pushed all the way down to 4-1-1-1. This naturally
requires a motherboard that supports these speeds. BEDO RAM gives a bigger speed increase over
EDO than EDO RAM gave over FPM RAM. Despite that, it never really caught on, primarily
because the chip giant Intel chose not to support this RAM type.

Likewise fits in a SIMM slot, and has 72 pins.


Synchronous DRAM (SDRAM):
At the end of 1996 came SDRAM. Unlike earlier RAM technologies, SDRAM can synchronize
itself with the CPU's clock. This means that the chip on the motherboard that manages the
interaction between memory and CPU knows the exact clock cycle, so the CPU no longer has to
sit waiting when data is requested from memory. Another advantage of SDRAM is the use of
interleaving, which lets one half of a RAM module finish an operation while the other half begins a
new one. Furthermore, SDRAM uses a so-called burst mode, which eliminates the wait time
between memory and CPU. Both the use of interleaving and burst mode increase the RAM's
performance considerably.

The speed of SDRAM started out at 66MHz, but by 2000 it had been raised first to 100 and then to
133MHz. Since then there has been no improvement. SDRAM supports burst timings of 5-1-1-1.
When it comes to SDRAM, figuring out which modules fit your motherboard can be quite
confusing. Many factors determine whether a particular SDRAM module can be used. For
example, the RAM modules' speed must not be slower than the system bus on the motherboard;
otherwise they will not work, or they will be very unstable. That is because the whole point of the
SDRAM technology is precisely to eliminate the wait states that conventional asynchronous RAM
suffered from, and thereby deliver better performance. If you have a motherboard that supports
SDRAM, the safest approach is to contact the motherboard manufacturer and ask what
specifications the RAM modules must have.

Fits in a DIMM slot, and has 168 pins.


Direct Rambus DRAM (RDRAM / DRDRAM):
Rambus is a DRAM architecture that differs considerably from the other DRAM designs. On
paper, the Direct Rambus technology is extraordinarily fast compared with the other available
technologies. This is due to its ability to double-clock: it can use both the rising and the falling
edge of the clock signal and perform operations on both. This made the Rambus technology quite
revolutionary, and in its day it was predicted to become the leading RAM technology of the future.
That has not come to pass, perhaps because of Rambus's and Intel's patents. If a chip manufacturer
intended to develop RAM modules based on this technology, they had to pay royalties to Intel and
Rambus. This is not particularly attractive and makes it hard for a chip manufacturer to compete.
At the same time, the manufacturer would be subject to the standards that Intel and Rambus set,
and thereby have no real control over the development.

Rambus runs at speeds of up to 800MHz, with a theoretical bandwidth of 1.6GB per second.
Despite the considerably higher frequency, the total bandwidth of RDRAM is only twice that of
100MHz SDRAM.




A final thing that makes RDRAM unique is its bus, the Direct Rambus Channel. Other RAM types
use a similar bus, but where they seek to widen it in order to move more data at a time, Rambus has
narrowed it in order to move data faster. Normally a 64-bit bus is used, whereas RDRAM uses only
16 bits. This means that the chip's 800MHz speed in practice only performs like 400MHz
compared with RAM using a 64-bit bus. As for burst timings, RDRAM is nothing to cheer about -
these figures are actually worse than SDRAM's. That is serious, because many programs are not
geared to exploit the high bus speed, and would in reality benefit more from faster burst timings.

The conclusion must be that RDRAM is too expensive for its performance. At the same time,
RDRAM is not as widespread as DDR SDRAM (see next section), which limits the selection of
motherboards that support the modules.

Fits in a RIMM slot, and has 184 pins.


Synclink DRAM (SLDRAM):
Although this RAM type was declared dead long ago, I think it is worth mentioning, because at the
end of the 1990s it was an alternative bid for a competitor to RDRAM.

SLDRAM was a novel design whose primary goal was increased performance using existing
architecture - in other words, the aim was to achieve the same effect as Rambus without resorting
to a different architecture. The starting point was a module with a 64-bit bus at 200MHz, and since
transfers here also take place on both the rising and the falling clock edge, the effective speed
would be 400MHz. That gives a theoretical bandwidth of 3.2GB per second, twice as fast as
RDRAM. SLDRAM is also an open standard, and it has faster burst timings than RDRAM. In
many ways its potential was greater than RDRAM's, but it never gained broad acceptance among
the chipset manufacturers and ended up a failure.


Double Data Rate Synchronous DRAM (DDR SDRAM):
DDR SDRAM builds on the same technology and architecture as regular SDRAM. The difference
is the doubled transfer rate: like RDRAM, DDR SDRAM can transfer on both the rising and the
falling clock edge, and thereby move two bits of data per clock cycle from the RAM's I/O buffer to
the system bus - twice the transfer rate of ordinary SDRAM, which can only move one bit per
clock cycle.

In this way a 100MHz DDR module can now effectively perform like 200MHz SDRAM.




DDR RAM is today the most widespread RAM type, and is clearly the one supported by the most
motherboard manufacturers. It comes in several different variants, listed below:

PC2100 (266 MHz)

PC2700 (333 MHz)

PC3200 (400 MHz)

PC3500 (433 MHz)

PC3700 (466 MHz)

PC4000 (500 MHz)

PC4200 (533 MHz)

PC4300 (533 MHz)

PC4400 (550 MHz)

The different variants are really just the same RAM type at different speeds. Which speed (variant)
to choose depends on what the motherboard supports. It is usually no problem to buy a variant with
a higher speed than your motherboard supports; you just have to accept not exploiting the RAM's
maximum speed to the full.

See our theoretical speed comparison further down in the guide.

DDR SDRAM fits in a DIMM slot, and has 184 pins.


Double Data Rate Synchronous DRAM II (DDR SDRAM II):
As the name suggests, DDR-II is an extension of the traditional DDR RAM module. DDR-II takes
some of the advantages of the Rambus technology and combines them with the DDR module, in
order to achieve better performance.

Where DDR-I RAM can transfer 2 bits of data in one clock cycle, DDR-II can transfer the same in
half a clock cycle - that is, twice as fast. A DDR module with a 100 MHz clock will thus effectively
run at 200 MHz, while a DDR-II module will run at 400 MHz.

That DDR-II RAM is in practice not twice as fast as its predecessor comes down to the latency
between the memory itself and the memory controller. The following shows the relationship
between the speeds of DDR and DDR-II:

DDR: 100 MHz clock -> 100 MHz data buffer -> 200 MHz effective speed
DDR-II: 100 MHz clock -> 200 MHz data buffer -> 400 MHz effective speed
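The doubling chain above is easy to reproduce; here is a small Python sketch of the simplified
model the guide uses, where each generation doubles the transfers per core clock cycle. The
multipliers are the guide's own simplification, not a full timing model.

```python
# Simplified model from the guide: effective MHz = core clock x transfers per cycle.
# SDRAM moves 1 transfer per cycle, DDR 2 (both clock edges),
# DDR-II 4 (doubled I/O buffer clock, both edges).
TRANSFERS_PER_CYCLE = {"SDRAM": 1, "DDR": 2, "DDR-II": 4}

def effective_mhz(core_clock_mhz, ram_type):
    return core_clock_mhz * TRANSFERS_PER_CYCLE[ram_type]

for ram in ("SDRAM", "DDR", "DDR-II"):
    print(ram, effective_mhz(100, ram), "MHz")
# SDRAM 100 MHz, DDR 200 MHz, DDR-II 400 MHz -- matching the lines above
```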


Enhanced SDRAM (ESDRAM):
To increase the efficiency and speed of standard RAM modules, some manufacturers have tried
incorporating SRAM directly on the RAM module, creating a small cache. ESDRAM is thus in
principle SDRAM with a small amount of cache memory for storing the most frequently used data.
The cache supports transfers at up to 200MHz, roughly twice as fast as the rest of the memory, and
thus provides a speed increase for the module as a whole.

Fits in a DIMM slot, and has 168 pins.


Memory frequency / Access time:
This so-called memory access time is measured today in MHz. Previously, before the development
of SDRAM, it was measured in nanoseconds. The transition from nanoseconds to megahertz could
cause some confusion, but is hardly a problem any more. Just keep in mind that where the lowest
possible access time (measured in nanoseconds) used to be best, today, with frequency as the unit,
the highest possible number is best.
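The two units are simply reciprocals of each other, which a couple of lines of Python make clear;
the example values (10ns, 133MHz) are illustrative, not taken from any specific module.

```python
# Access time and clock frequency are reciprocals: f(MHz) = 1000 / t(ns).
def ns_to_mhz(access_time_ns):
    return 1000.0 / access_time_ns

def mhz_to_ns(frequency_mhz):
    return 1000.0 / frequency_mhz

print(ns_to_mhz(10))   # 10ns   -> 100.0 MHz
print(mhz_to_ns(133))  # 133MHz -> ~7.5 ns per cycle
```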

Note that the motherboard must support the speed in question. For example, you cannot use PC100
SDRAM in a motherboard that requires PC133 SDRAM. Conversely, you can usually use faster
RAM in slower motherboards without problems. It is always advisable to use the fastest RAM
your motherboard supports.

Also be aware that if you combine RAM of different speeds, all the modules will run at the speed
of the slowest one.

See our theoretical speed comparison further down in the guide.


Burst timings (CAS):
Burst timings describe the small pauses (latency) built into the process from the moment the CPU
requests data from the RAM until the RAM sends the result back to the CPU. The wait states are
necessary so that the electronics can keep up and no data errors occur in the transfer.

These wait states have a relatively large impact on the RAM module's overall performance, and
are therefore important to take into account. There are four different built-in pauses, used at
different points in the transfer process. The most important and most significant is the so-called
CAS Latency (CL). CAS latency denotes the delay from the RAM receiving an instruction until it
executes it; the lower it is, the faster the RAM operates.
The exact interplay between the different burst timings, and what the optimal settings are, is a
science of its own that is not covered in this guide. It also depends on the exact RAM model and
the motherboard it is to be mounted in. As a rule of thumb, buy RAM with a CAS Latency (CL) of
2.5, or of 2.0 if you want to experiment with overclocking the computer.
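Timings like 5-3-3-3 count bus clock cycles for the first access and the three following accesses of
a four-word burst, so the total burst time falls straight out of a small calculation; the figures below
are the example timings quoted earlier in the guide.

```python
# Total time for a 4-word burst: sum the per-access cycle counts
# (first access, then three back-to-back accesses) and divide by the bus clock.
def burst_time_ns(timings, bus_mhz):
    cycles = sum(timings)
    return cycles * 1000.0 / bus_mhz   # one cycle lasts 1000/MHz nanoseconds

examples = {
    "FPM DRAM 5-3-3-3 @ 66MHz":  ((5, 3, 3, 3), 66),
    "EDO DRAM 5-2-2-2 @ 66MHz":  ((5, 2, 2, 2), 66),
    "SDRAM    5-1-1-1 @ 100MHz": ((5, 1, 1, 1), 100),
}
for name, (timings, mhz) in examples.items():
    print(f"{name}: {burst_time_ns(timings, mhz):.0f} ns")
# FPM: 14 cycles ~212ns, EDO: 11 cycles ~167ns, SDRAM: 8 cycles 80ns
```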


Theoretical speed comparison:
When comparing RAM there are, besides the burst timings discussed above, three concepts to keep
track of:

Speed (MHz)

Data bus width

Total bandwidth

The speed concept has been described earlier in the guide. The data bus width tells how much data
can be moved at a time; for example, 2x16 bit means there are two data buses, each of which can
move 16 bits at a time. Finally there is the bandwidth, which states concretely how many gigabytes
the RAM can move per second. The size of the bandwidth follows directly from the speed of the
RAM and the size of the data bus, as the sketch below shows.
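As a sketch of that relationship, the bandwidth column of the table below can be recomputed from
the clock and the bus width alone (for DDR types the effective clock already includes the
doubling); a minimal Python check:

```python
# Bandwidth (GB/s) = effective clock (MHz) x bus width (bytes) x channels / 1000.
def bandwidth_gb_s(effective_mhz, bus_bits, channels=1):
    return effective_mhz * (bus_bits / 8) * channels / 1000.0

print(bandwidth_gb_s(100, 64))              # PC100 SDRAM      -> 0.8 GB/s
print(bandwidth_gb_s(400, 64))              # PC3200 DDR400    -> 3.2 GB/s
print(bandwidth_gb_s(400, 64, channels=2))  # DDR400 dual      -> 6.4 GB/s
print(bandwidth_gb_s(800, 16, channels=2))  # PC800 RDRAM dual -> 3.2 GB/s
```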

In the table below you can compare the different RAM types and their speeds.

Designation  RAM type      Clock     Data bus    Bandwidth
PC66         SDRAM         66MHz     64 bit      0.5GB/s
PC100        SDRAM         100MHz    64 bit      0.8GB/s
PC133        SDRAM         133MHz    64 bit      1.06GB/s
PC1600       DDR200        100MHz    64 bit      1.6GB/s
PC1600       DDR200 Dual   100MHz    2 x 64 bit  3.2GB/s
PC2100       DDR266        133MHz    64 bit      2.1GB/s
PC2100       DDR266 Dual   133MHz    2 x 64 bit  4.2GB/s
PC2700       DDR333        166MHz    64 bit      2.7GB/s
PC2700       DDR333 Dual   166MHz    2 x 64 bit  5.4GB/s
PC3200       DDR400        200MHz    64 bit      3.2GB/s
PC3200       DDR400 Dual   200MHz    2 x 64 bit  6.4GB/s
PC4200       DDR533        266MHz    64 bit      4.2GB/s
PC4200       DDR533 Dual   266MHz    2 x 64 bit  8.4GB/s
PC800        RDRAM Dual    400MHz    2 x 16 bit  3.2GB/s
PC1066       RDRAM Dual    533MHz    2 x 16 bit  4.2GB/s
PC1200       RDRAM Dual    600MHz    2 x 16 bit  4.8GB/s
PC800        RDRAM Dual    400MHz    2 x 32 bit  6.4GB/s
PC1066       RDRAM Dual    533MHz    2 x 32 bit  8.4GB/s
PC1200       RDRAM Dual    600MHz    2 x 32 bit  9.6GB/s

The rows where DDR RAM is listed as Dual apply to the motherboards and chipsets that support
Dual Channel RAM. This requires two identical DDR RAM modules in order to exploit the double
data bus, and thus the doubled bandwidth. You can generally assume that DDR RAM in most
motherboards is not Dual, and you should keep that in mind when looking up values in the table.

As for RDRAM, the 16-bit versions (for example PC1066 RDRAM) are Dual by nature, since they
must be installed in pairs. There is also another type of RDRAM on the market, namely the 32-bit
version. These run 2x16 bit on a single module, and do not need to be installed in pairs. They are
usually designated 32-bit RIMM4200 RAM.
Data integrity checking (ECC RAM):
All types of RAM are available today as ECC (Error-Correcting Code) RAM. This RAM type is
more expensive, but in return it can, with the help of an extra built-in chip, correct errors in
transfers to and from memory. Ordinary RAM does not have this capability.

Such errors occur rarely, and for ordinary users they hardly matter. ECC RAM is, however,
recommended for computers where high reliability and stability are essential - for example servers.
ECC RAM is also slightly slower than ordinary RAM, because error-correction data has to be sent
along with the ordinary data.
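To show the idea behind error correction, here is a tiny Python sketch of a Hamming(7,4) code,
which can correct any single flipped bit in a 4-bit value; real ECC modules use a similar (SECDED)
scheme over whole 64-bit words, so this is an illustration of the principle only.

```python
# Hamming(7,4): 4 data bits + 3 parity bits; a single bit flip can be
# located (its position is the sum of the failing parity checks) and fixed.
def encode(d):                       # d = list of 4 data bits
    c = [0, 0, d[0], 0, d[1], d[2], d[3]]   # positions 1..7 (indices 0..6)
    c[0] = c[2] ^ c[4] ^ c[6]        # parity over positions 1,3,5,7
    c[1] = c[2] ^ c[5] ^ c[6]        # parity over positions 2,3,6,7
    c[3] = c[4] ^ c[5] ^ c[6]        # parity over positions 4,5,6,7
    return c

def correct(c):
    c = c[:]
    # Recompute each check; the syndrome is the 1-based position of the error.
    s = ((c[0] ^ c[2] ^ c[4] ^ c[6]) * 1
         + (c[1] ^ c[2] ^ c[5] ^ c[6]) * 2
         + (c[3] ^ c[4] ^ c[5] ^ c[6]) * 4)
    if s:                            # s == 0 means no error detected
        c[s - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]  # extract the data bits again

word = encode([1, 0, 1, 1])
word[5] ^= 1                         # simulate a single bit flipping in memory
print(correct(word))                 # -> [1, 0, 1, 1], the error is repaired
```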


Processor

http://student.iu.hio.no/~s127645/webprosjekt/content/prosessor.php

A 64-bit processor from AMD
A microprocessor is a chip containing a very large number of transistors. The chip is built up in
layers on the semiconductor material silicon, and the transistors themselves are made of silicon.

The first microprocessor ever made was Intel's 4004, introduced in 1971. The 4004 chip was not
particularly powerful; all it could do was add and subtract, and only 4 bits at a time. The
groundbreaking thing about the 4004 was that it was a computer on a chip. Before the 4004 came
out, the heart of a computer was built by soldering transistors together one at a time. The 4004
drastically reduced the size and paved the way for the first pocket calculator.

The first microprocessor to find its way into a PC was the 8080 chip, made by Intel and introduced
in 1974. But it was not until Intel's processor was built into the IBM PC in the early 1980s that the
whole PC revolution started. All the processors Intel has made since are based on the 8088 chip -
some improvements, but by and large the same design. Today's Pentium 4 can run all the types of
code that ran on the 8088 chip; it just does it about 5,000 times faster.

How does a microprocessor work? These are the main activities inside a microprocessor:
Using the ALU (Arithmetic/Logic Unit), a microprocessor can perform mathematical operations
such as addition, subtraction, multiplication and division. Modern microprocessors contain
complete floating-point units that can perform extremely complicated operations on large numbers.
A microprocessor can move data from one memory location to another.
A microprocessor can make decisions and jump to a new set of instructions based on those
decisions.
A microprocessor contains buses that send data to and receive data from memory. These buses are
called the address bus and the data bus.
The address bus sends the address in memory where data is to be placed or fetched.
The data bus sends data to and receives data from memory.
The signals are sent in parallel, and in today's microprocessors the buses are 32 bits wide.
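As a toy illustration of the address bus / data bus division, here is a minimal Python sketch of a
processor reading and writing memory through the two buses; the classes are purely didactic and do
not model any real CPU.

```python
# Toy model: the address bus carries *where*, the data bus carries *what*.
class ToyBus:
    def __init__(self, size_words=16):
        self.memory = [0] * size_words

    def write(self, address, value):
        # Address bus selects the location, data bus delivers the value.
        self.memory[address] = value

    def read(self, address):
        # Address bus selects the location, data bus returns the value.
        return self.memory[address]

class ToyCPU:
    def __init__(self, bus):
        self.bus = bus

    def move(self, src, dst):
        # "A microprocessor can move data from one memory location to another."
        self.bus.write(dst, self.bus.read(src))

bus = ToyBus()
cpu = ToyCPU(bus)
bus.write(0, 42)
cpu.move(0, 7)
print(bus.read(7))   # -> 42
```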
A bit about the future: 64-bit microprocessors have existed since 1992, but it is only in recent years
that they have reached the market, and 64-bit processors are going to dominate it in the years
ahead. Today's microprocessors contain between 512KB and 2MB of internal memory (cache);
within a short time, microprocessors with 3, 4, 6, 9 and 18MB of internal memory will appear.
Today's fastest microprocessors have a clock speed of 3.6GHz; before long, microprocessors of up
to 10GHz will be released.

HARD DISK
http://www.hitachigst.com/hdd/research/

How does a CD-ROM work?

http://www.netprofessor.dk/artikler.asp?id=95

The following is a very simple walkthrough of a very complex mechanism.

Some can still remember when CDs were called Laser Discs. The reason they were called Laser
Discs is that laser technology is used to read the information on the CDs. A CD is primarily made
of reflective aluminium with a protective layer of plastic around it. Etched into the aluminium is a
spiral track roughly 4.83 km long. The spiral track contains the information stored on the CD,
which can be music, computer games, etc.

The spiral track has two kinds of surfaces: flat areas (lands) and pits. These two surfaces encode
the zeros and ones that make up the binary code. The laser beam is aimed at the CD and can hit
either a land or a pit: when the beam hits a land it is reflected back onto a light-sensitive sensor,
and when it hits a pit the light is scattered. Each pulse hitting the sensor generates a tiny electric
current, so the signal becomes a series of pulse and no pulse - the 0s and 1s. The rotation of the CD
and the position of the laser are adjusted so that the information can be read evenly.
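As a (heavily simplified) sketch of that last step, here is a few lines of Python turning a pulse /
no-pulse sequence from the sensor into bits; real CDs add EFM modulation and error correction on
top, so this only illustrates the pulse-to-bit idea, and the sample values are invented.

```python
# The sensor sees light (pulse) or no light (no pulse); map that to bits.
sensor_samples = [True, False, False, True, True, False, True, False]

bits = [1 if pulse else 0 for pulse in sensor_samples]
print(bits)                                   # -> [1, 0, 0, 1, 1, 0, 1, 0]

# Pack eight bits into one byte, most significant bit first.
byte = 0
for b in bits:
    byte = (byte << 1) | b
print(hex(byte))                              # -> 0x9a
```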

Floppy disks

http://www.webopedia.com/TERM/F/floppy_disk.html

A soft magnetic disk. It is called floppy because it flops if you wave it (at least, the 5¼-inch variety
does). Unlike most hard disks, floppy disks (often called floppies or diskettes) are portable, because
you can remove them from a disk drive. Disk drives for floppy disks are called floppy drives.
Floppy disks are slower to access than hard disks and have less storage capacity, but they are much
less expensive. And most importantly, they are portable.
Floppies come in three basic sizes:

8-inch: The first floppy disk design, invented by IBM in the late 1960s and used in the early 1970s
as first a read-only format and then as a read-write format. The typical desktop/laptop computer
does not use the 8-inch floppy disk.
5¼-inch: The common size for PCs made before 1987, and the successor to the 8-inch floppy
disk. This type of floppy is generally capable of storing between 100K and 1.2MB (megabytes) of
data. The most common sizes are 360K and 1.2MB.
3½-inch: Floppy is something of a misnomer for these disks, as they are encased in a rigid
envelope. Despite their small size, microfloppies have a larger storage capacity than their cousins --
from 400K to 1.44MB of data. The most common sizes for PCs are 720K (double-density) and
1.44MB (high-density). Macintoshes support disks of 400K, 800K, and 1.44MB.

CISC and RISC processors

http://www.amigau.com/aig/riscisc.html

RISC
The concept was developed by John Cocke of IBM Research during 1974. His argument was based
upon the notion that a computer uses only 20% of the instructions, making the other 80%
superfluous to requirement. A processor based upon this concept would use few instructions, which
would require fewer transistors, and make them cheaper to manufacture. By reducing the number of
transistors and instructions to only those most frequently used, the computer would get more done
in a shorter amount of time. The term 'RISC' (short for Reduced Instruction Set Computer) was later
coined by David Patterson, a teacher at the University of California in Berkeley.
The RISC concept was used to simplify the design of the IBM RT PC, and was later used in the
IBM RISC System/6000 and Sun Microsystems' SPARC microprocessors. A parallel RISC
research project at Stanford led to the founding of MIPS Technologies, who developed the M.I.P.S.
RISC microprocessor (Microprocessor without Interlocked Pipeline Stages). Many of the MIPS
architects also played an instrumental role in the creation of the Motorola 68000, as used in the
first Amigas (MIPS Technologies were later bought by Silicon Graphics). The MIPS processor has
continued development, remaining a popular choice in the embedded and low-end markets. At one
time, it was suspected the Amiga MCC would use this CPU to reduce the cost of manufacture. In
the limited consumer desktop market, however, only the PowerPC processor remains popular
among the RISC alternatives. This is mainly due to Apple's continuous use of the series for its
PowerMac range.

CISC
CISC (Complex Instruction Set Computer) is a retroactive definition that was introduced to
distinguish the design from RISC microprocessors. In contrast to RISC, CISC chips have a large
number of different and complex instructions. The argument for their continued use is that chip
designers should make life easier for the programmer by reducing the number of instructions
required to program the CPU. Due to the high cost of memory and storage, CISC microprocessors
were considered superior because of the requirement for small, fast code. In an age of dwindling
memory and hard disk prices, code size has become a non-issue (MS Windows, hello?). However,
CISC-based systems still cover the vast majority of the consumer desktop market. The majority of
these systems are based upon the x86 architecture or a variant. The Amiga, Atari, and pre-1994
Macintosh systems also use a CISC microprocessor.
RISC Vs. CISC
The argument over which concept is better has been repeated over the past few years. Macintosh
owners have elevated the argument to a pseudo religious level in support of their RISC-based God
(the PowerPC sits next to the Steve Jobs statue on every Mac altar). Both positions have been
blurred by the argument that we have entered a Post-RISC stage.
RISC: For and Against
RISC supporters argue that it is the way of the future, producing faster and cheaper processors - an
Apple Mac G3 offers a significant performance advantage over its Intel equivalent. Instructions are
executed over four times faster, providing a significant performance boost! However, RISC chips
require more lines of code to produce the same results and are increasingly complex. This increases
the size of the application and the amount of overhead required. RISC developers have also failed
to remain in competition with CISC alternatives. The Macintosh market has been damaged by
several problems that have affected the availability of 500MHz+ PowerPC chips. In contrast, the
PC-compatible market has stormed ahead and has broken the 1GHz barrier. Despite the speed
advantages of the RISC processor, it cannot compete with a CISC CPU that boasts twice the
number of clock cycles.
CISC: For and Against
As discussed above, CISC microprocessors are more expensive to make than their RISC cousins.
However, the average Macintosh is more expensive than the Wintel PC. This is caused by one
factor that the RISC manufacturers have no influence over - market forces. In particular, the Wintel
market has become the definition of personal computing, creating demand from people who have
never used a computer previously. The x86 market has been opened up by the development of
several competing processors, from the likes of AMD, Cyrix, and Intel. This competition has
continually driven down the price of x86-based microprocessors, while the PowerPC Macintosh
market is dictated by Apple and remains stagnant.

Post-RISC
As the world enters the 21st century, the CISC vs. RISC argument has been swept aside by the
recognition that neither term is accurate as a description. The definitions of 'Reduced' and
'Complex' instructions have begun to blur: RISC chips have increased in complexity (compare the
PPC 601 to the G4 as an example) and CISC chips have become more efficient. The result is
processors that are defined as RISC or CISC only by their ancestry. The PowerPC 601, for
example, supports more instructions than the Pentium; yet the Pentium is a CISC chip, while the
601 is considered to be RISC. CISC chips have also gained techniques associated with RISC
processors. Intel describe the Pentium II as a CRISC processor, while AMD use a RISC
architecture but remain compatible with the dominant x86 CISC processors. Thus it is no longer
important which camp a processor comes from; the emphasis has once again been placed upon the
operating system and the speed at which it can execute instructions.
EPIC
In the aftermath of the CISC-RISC conflict, a new enemy has appeared to threaten the peace. EPIC
(Explicitly Parallel Instruction Computing) was developed by Intel for the server market, though it
will undoubtedly appear in desktops over the next few years. The first EPIC processor will be the
64-bit Merced, due for release sometime during 2001 (or 2002, 2003, etc.). The market may be
divided between combined CISC-RISC systems in the low end and EPIC in the high end.
Famous RISC microprocessors
801
To prove that his RISC concept was sound, John Cocke created the 801 prototype processor
(1975). It was never marketed, but it plays a pivotal role in computer history as the first RISC
design.
RISC 1 and 2
The first "proper" RISC chips, RISC I and RISC II, were created at the University of California,
Berkeley, in the early 1980s.

ARM
One of the best-known RISC developers is Cambridge-based Advanced RISC Machines
(originally the Acorn RISC Machine project). Their ARM and StrongARM chips power the old
Acorn Archimedes and the Apple Newton handwriting-recognition systems. Since the unbundling
of ARM from Acorn, Intel have invested a considerable amount of money in the company and
have utilized the technology in their processor designs. One of the main advantages of the ARM is
the price: it costs less than £10.
If Samsung had bought the Amiga in 1994, they would possibly have used the chip to power the
low-end Amigas.

SCANNER

http://computer.howstuffworks.com/scanner.htm

Scanners have become an important part of the home office over the last few years. Scanner
technology is everywhere and used in many ways:


Flatbed scanners, also called desktop scanners, are the most versatile and commonly used scanners.
In fact, this article will focus on the technology as it relates to flatbed scanners.
Sheet-fed scanners are similar to flatbed scanners except the document is moved and the scan head
is immobile. A sheet-fed scanner looks a lot like a small portable printer.
Handheld scanners use the same basic technology as a flatbed scanner, but rely on the user to move
them instead of a motorized belt. This type of scanner typically does not provide good image
quality. However, it can be useful for quickly capturing text.
Drum scanners are used by the publishing industry to capture incredibly detailed images. They use
a technology called a photomultiplier tube (PMT). In PMT, the document to be scanned is mounted
on a glass cylinder. At the center of the cylinder is a sensor that splits light bounced from the
document into three beams. Each beam is sent through a color filter into a photomultiplier tube
where the light is changed into an electrical signal.
The basic principle of a scanner is to analyze an image and process it in some way. Image and text
capture (optical character recognition or OCR) allow you to save information to a file on your
computer. You can then alter or enhance the image, print it out or use it on your Web page.

In this article, we'll be focusing on flatbed scanners, but the basic principles apply to most other
scanner technologies. You will learn about the different types of scanners, how the scanning
mechanism works and what TWAIN means. You will also learn about resolution, interpolation and
bit depth.

On the next page, you will learn about the various parts of a flatbed scanner.

Anatomy of a Scanner
Parts of a typical flatbed scanner include:

Charge-coupled device (CCD) array
Mirrors
Scan head
Glass plate
Lamp
Lens
Cover
Filters
Stepper motor
Stabilizer bar
Belt
Power supply
Interface port(s)
Control circuitry

The core component of the scanner is the CCD array. CCD is the most common technology for
image capture in scanners. CCD is a collection of tiny light-sensitive diodes, which convert photons
(light) into electrons (electrical charge). These diodes are called photosites. In a nutshell, each
photosite is sensitive to light -- the brighter the light that hits a single photosite, the greater the
electrical charge that will accumulate at that site.

Photons hitting a photosite and creating electrons
The image of the document that you scan reaches the CCD array through a series of mirrors, filters
and lenses. The exact configuration of these components will depend on the model of scanner, but
the basics are pretty much the same.

On the next page, you will see just how all the pieces of the scanner work together.

Here are the steps that a scanner goes through when it scans a document:
The document is placed on the glass plate and the cover is closed. The inside of the cover in most
scanners is flat white, although a few are black. The cover provides a uniform background that the
scanner software can use as a reference point for determining the size of the document being
scanned. Most flatbed scanners allow the cover to be removed for scanning a bulky object, such as a
page in a thick book.

A lamp is used to illuminate the document. The lamp in newer scanners is either a cold cathode
fluorescent lamp (CCFL) or a xenon lamp, while older scanners may have a standard fluorescent
lamp.
The entire mechanism (mirrors, lens, filter and CCD array) makes up the scan head. The scan head is
moved slowly across the document by a belt that is attached to a stepper motor. The scan head is
attached to a stabilizer bar to ensure that there is no wobble or deviation in the pass. Pass means that
the scan head has completed a single complete scan of the document.

The image of the document is reflected by an angled mirror to another mirror. In some scanners,
there are only two mirrors while others use a three mirror approach. Each mirror is slightly curved
to focus the image it reflects onto a smaller surface.

The last mirror reflects the image onto a lens. The lens focuses the image through a filter on the
CCD array.

The filter and lens arrangement vary based on the scanner. Some scanners use a three pass scanning
method. Each pass uses a different color filter (red, green or blue) between the lens and CCD array.
After the three passes are completed, the scanner software assembles the three filtered images into a
single full-color image.

Most scanners today use the single pass method. The lens splits the image into three smaller
versions of the original. Each smaller version passes through a color filter (either red, green or blue)
onto a discrete section of the CCD array. The scanner combines the data from the three parts of the
CCD array into a single full-color image.

Another imaging array technology that has become popular in inexpensive flatbed scanners is
contact image sensor (CIS). CIS replaces the CCD array, mirrors, filters, lamp and lens with rows
of red, green and blue light emitting diodes (LEDs). The image sensor mechanism, consisting of
300 to 600 sensors spanning the width of the scan area, is placed very close to the glass plate that
the document rests upon. When the image is scanned, the LEDs combine to provide white light. The
illuminated image is then captured by the row of sensors. CIS scanners are cheaper, lighter and
thinner, but do not provide the same level of quality and resolution found in most CCD scanners.

We will take a look at what happens between the computer and scanner, but first let's talk about
resolution.

Resolution and Interpolation
Scanners vary in resolution and sharpness. Most flatbed scanners have a true hardware resolution of
at least 300x300 dots per inch (dpi). The scanner's dpi is determined by the number of sensors in a
single row of the CCD or CIS array (the x-direction sampling rate) and by the precision of the
stepper motor (the y-direction sampling rate).

For example, if the resolution is 300x300 dpi and the scanner is capable of scanning a letter-sized
document, then the CCD has 2,550 sensors arranged in each horizontal row. A single-pass scanner
would have three of these rows for a total of 7,650 sensors. The stepper motor in our example is
able to move in increments equal to 1/300ths of an inch. Likewise, a scanner with a resolution of
600x300 has a CCD array with 5,100 sensors in each horizontal row.
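The sensor counts in that example come straight from multiplying the scan width by the dpi; here it
is as a couple of lines of Python, using the 8.5-inch letter width the example assumes.

```python
# Sensors per row = scan width in inches x horizontal dpi.
width_in = 8.5                      # letter-sized document

print(int(width_in * 300))          # 300 dpi -> 2550 sensors per row
print(int(width_in * 300) * 3)      # single-pass, 3 color rows -> 7650
print(int(width_in * 600))          # 600 dpi -> 5100 sensors per row
```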

Sharpness depends mainly on the quality of the optics used to make the lens and the brightness of
the light source. A bright xenon lamp and high-quality lens will create a much clearer, and therefore
sharper, image than a standard fluorescent lamp and basic lens.

Of course, many scanners proclaim resolutions of 4,800x4,800 or even 9,600x9,600. To achieve a
hardware resolution with an x-direction sampling rate of 9,600 would require a CCD array of
81,600 sensors. If you look at the specifications, these high resolutions are usually labeled
software-enhanced, interpolated resolution or something similar. What does that mean?
Interpolation is a process that the scanning software uses to increase the perceived resolution of an
image. It does this by creating extra pixels in between the ones actually scanned by the CCD array.
These extra pixels are an average of the adjacent pixels. For example, if the hardware resolution is
300x300 and the interpolated resolution is 600x300, then the software is adding a pixel between
every one scanned by a CCD sensor in each row.
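Averaging neighbouring pixels is all the interpolation described here does; a minimal Python sketch
for one scanned row (the grayscale values are made up for the example):

```python
# Double the horizontal resolution of one row by inserting the average
# of each pair of neighbouring pixels between them.
def interpolate_row(row):
    out = []
    for left, right in zip(row, row[1:]):
        out.append(left)
        out.append((left + right) // 2)   # the invented in-between pixel
    out.append(row[-1])
    return out

scanned = [10, 20, 40, 80]                # pixels actually read by the CCD
print(interpolate_row(scanned))           # -> [10, 15, 20, 30, 40, 60, 80]
```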

Another term used when talking about scanners is bit depth, also called color depth. This simply
refers to the number of colors that the scanner is capable of reproducing. Each pixel requires 24 bits
to create standard true color and virtually all scanners on the market support this. Many of them
offer bit depths of 30 or 36 bits. They still only output in 24-bit color, but perform internal
processing to select the best possible choice out of the colors available in the increased palette.
There are many opinions about whether there is a noticeable difference in quality between 24-, 30-
and 36-bit scanners.
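Bit depth translates into a color count as a power of two, which is quick to verify in Python; the
24-, 30- and 36-bit figures are the ones mentioned above.

```python
# Number of representable colors = 2 ** bit_depth.
for bits in (24, 30, 36):
    print(f"{bits}-bit: {2 ** bits:,} colors")
# 24-bit: 16,777,216 / 30-bit: 1,073,741,824 / 36-bit: 68,719,476,736
```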

Image Transfer
Scanning the document is only one part of the process. For the scanned image to be useful, it must
be transferred to your computer. There are three common connections used by scanners:

Parallel - Connecting through the parallel port is the slowest transfer method available.

Small Computer System Interface (SCSI) - SCSI requires a special SCSI connection. Most SCSI
scanners include a dedicated SCSI card to insert into your computer and connect the scanner to, but
you can use a standard SCSI controller instead.

Universal Serial Bus (USB) - USB scanners combine good speed, ease of use and affordability in a
single package.

FireWire - Usually found on higher-end scanners, FireWire connections are faster than USB and
SCSI. FireWire is ideal for scanning high-resolution images.

Did You Know?
TWAIN is not an acronym. It actually comes from the phrase "Never the twain shall meet" because
the driver is the go-between for the software and the scanner. Because computer people feel a need
to make an acronym out of every term, TWAIN is known as Technology Without An Interesting
Name!
On your computer, you need software, called a driver, that knows how to communicate with the
scanner. Most scanners speak a common language, TWAIN. The TWAIN driver acts as an
interpreter between any application that supports the TWAIN standard and the scanner. This means
that the application does not need to know the specific details of the scanner in order to access it
directly. For example, you can choose to acquire an image from the scanner from within Adobe
Photoshop because Photoshop supports the TWAIN standard.

In addition to the driver, most scanners come with other software. Typically, a scanning utility and
some type of image editing application are included. A lot of scanners include OCR software. OCR
allows you to scan in words from a document and convert them into computer-based text. It uses an
averaging process to determine what the shape of a character is and match it to the correct letter or
number.

The great thing about scanner technology today is that you can get exactly what you need. You can
find a decent scanner with good software for less than $200, or get a fantastic scanner with
incredible software for less than $1,000. It all depends on your needs and budget.

For more information on scanners and related topics, check out the links on the next page.
LASER PRINTER

http://computer.howstuffworks.com/laser-printer.htm

The term inkjet printer is very descriptive of the process at work -- these printers put an image on
paper using tiny jets of ink. The term laser printer, on the other hand, is a bit more mysterious --
how can a laser beam, a highly focused beam of light, write letters and draw pictures on paper?

In this article, we'll unravel the mystery behind the laser printer, tracing a page's path from the
characters on your computer screen to printed letters on paper. As it turns out, the laser printing
process is based on some very basic scientific principles applied in an exceptionally innovative
way.

The Basics: Static Electricity
The primary principle at work in a laser printer is static electricity, the same energy that makes
clothes in the dryer stick together or a lightning bolt travel from a thundercloud to the ground. Static
electricity is simply an electrical charge built up on an insulated object, such as a balloon or your
body. Since oppositely charged atoms are attracted to each other, objects with opposite static
electricity fields cling together.

A laser printer uses this phenomenon as a sort of "temporary glue." The core component of this
system is the photoreceptor, typically a revolving drum or cylinder. This drum assembly is made
out of highly photoconductive material that is discharged by light photons.

The Basics: Drum
Initially, the drum is given a total positive charge by the charge corona wire, a wire with an
electrical current running through it. (Some printers use a charged roller instead of a corona wire,
but the principle is the same.) As the drum revolves, the printer shines a tiny laser beam across the
surface to discharge certain points. In this way, the laser "draws" the letters and images to be
printed as a pattern of electrical charges -- an electrostatic image. The system can also work with
the charges reversed -- that is, a positive electrostatic image on a negative background.

After the pattern is set, the printer coats the drum with positively charged toner -- a fine, black
powder. Since it has a positive charge, the toner clings to the negative discharged areas of the drum,
but not to the positively charged "background." This is something like writing on a soda can with
glue and then rolling it over some flour: The flour only sticks to the glue-coated part of the can, so
you end up with a message written in powder.

With the powder pattern affixed, the drum rolls over a sheet of paper, which is moving along a belt
below. Before the paper rolls under the drum, it is given a negative charge by the transfer corona
wire (charged roller). This charge is stronger than the negative charge of the electrostatic image, so
the paper can pull the toner powder away. Since it is moving at the same speed as the drum, the
paper picks up the image pattern exactly. To keep the paper from clinging to the drum, it is
discharged by the detac corona wire immediately after picking up the toner.

The Basics: Fuser
Finally, the printer passes the paper through the fuser, a pair of heated rollers. As the paper passes
through these rollers, the loose toner powder melts, fusing with the fibers in the paper. The fuser
rolls the paper to the output tray, and you have your finished page. The fuser also heats up the paper
itself, of course, which is why pages are always hot when they come out of a laser printer or
photocopier.

So what keeps the paper from burning up? Mainly, speed -- the paper passes through the rollers so
quickly that it doesn't get very hot.
After depositing toner on the paper, the drum surface passes the discharge lamp. This bright light
exposes the entire photoreceptor surface, erasing the electrical image. The drum surface then passes
the charge corona wire, which reapplies the positive charge.

Conceptually, this is all there is to it. Of course, actually bringing everything together is a lot more
complex. In the following sections, we'll examine the different components in greater detail to see
how they produce text and images so quickly and precisely.

The Controller: The Conversation
Before a laser printer can do anything else, it needs to receive the page data and figure out how it's
going to put everything on the paper. This is the job of the printer controller.
The printer controller is the laser printer's main onboard computer. It talks to the host computer (for
example, your PC) through a communications port, such as a parallel port or USB port. At the start
of the printing job, the laser printer establishes with the host computer how they will exchange data.
The controller may have to start and stop the host computer periodically to process the information
it has received.

A typical laser printer has a few different types of communications ports.

In an office, a laser printer will probably be connected to several separate host computers, so
multiple users can print documents from their machine. The controller handles each one separately,
but may be carrying on many "conversations" concurrently. This ability to handle several jobs at
once is one of the reasons why laser printers are so popular.

The Controller: The Language
For the printer controller and the host computer to communicate, they need to speak the same page
description language. In earlier printers, the computer sent a special sort of text file and a simple
code giving the printer some basic formatting information. Since these early printers had only a few
fonts, this was a very straightforward process.
These days, you might have hundreds of different fonts to choose from, and you wouldn't think
twice about printing a complex graphic. To handle all of this diverse information, the printer needs
to speak a more advanced language.

The primary printer languages these days are Hewlett Packard's Printer Command Language (PCL)
and Adobe's Postscript. Both of these languages describe the page in vector form -- that is, as
mathematical values of geometric shapes, rather than as a series of dots (a bitmap image). The
printer itself takes the vector images and converts them into a bitmap page. With this system, the
printer can receive elaborate, complex pages, featuring any sort of font or image. Also, since the
printer creates the bitmap image itself, it can use its maximum printer resolution.
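
To make "converting vector images into a bitmap page" concrete, here is a minimal sketch
that rasterizes a single line segment into a dot array. Real PCL and PostScript
interpreters handle fonts, curves and embedded images, but the underlying idea is the same.

# Rasterizing one vector line (from (0,0) to (7,3)) into the dot array
# that the print engine needs. 1 = a dot the laser will later "draw".
WIDTH, HEIGHT = 8, 4
page = [[0] * WIDTH for _ in range(HEIGHT)]

x0, y0, x1, y1 = 0, 0, 7, 3
steps = max(abs(x1 - x0), abs(y1 - y0))
for i in range(steps + 1):
    x = round(x0 + (x1 - x0) * i / steps)
    y = round(y0 + (y1 - y0) * i / steps)
    page[y][x] = 1

for row in page:
    print("".join(".#"[dot] for dot in row))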

Some printers use a graphical device interface (GDI) format instead of a standard PCL. In this
system, the host computer creates the dot array itself, so the controller doesn't have to process
anything -- it just sends the dot instructions on to the laser.

But in most laser printers, the controller must organize all of the data it receives from the host
computer. This includes all of the commands that tell the printer what to do -- what paper to use,
how to format the page, how to handle the font, etc. For the controller to work with this data, it has
to get it in the right order.

The Controller: Setting up the Page
Once the data is structured, the controller begins putting the page together. It sets the text margins,
arranges the words and places any graphics. When the page is arranged, the raster image processor
(RIP) takes the page data, either as a whole or piece by piece, and breaks it down into an array of
tiny dots. As we'll see in the next section, the printer needs the page in this form so the laser can
write it out on the photoreceptor drum.
In most laser printers, the controller saves all print-job data in its own memory. This lets the
controller put different printing jobs into a queue so it can work through them one at a time. It also
saves time when printing multiple copies of a document, since the host computer only has to send
the data once.
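
A sketch of that queueing behavior, assuming a simple first-in, first-out queue (the job
names and record fields here are made up for illustration):

from collections import deque

# Each job is stored once in printer memory, so printing extra copies
# needs no retransmission from the host computer.
job_queue = deque()

def submit(name, pages, copies=1):
    job_queue.append({"name": name, "pages": pages, "copies": copies})

submit("report.doc", pages=4, copies=3)
submit("memo.txt", pages=1)

while job_queue:
    job = job_queue.popleft()   # work through the queue one job at a time
    for copy in range(job["copies"]):
        print(f"printing {job['name']}, copy {copy + 1} of {job['copies']}")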

The Laser Assembly
Since it actually draws the page, the printer's laser system -- or laser scanning assembly -- must be
incredibly precise. The traditional laser scanning assembly includes:

A laser
A movable mirror
A lens
The laser receives the page data -- the tiny dots that make up the text and images -- one horizontal
line at a time. As the beam moves across the drum, the laser emits a pulse of light for every dot to
be printed, and no pulse for every dot of empty space.
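
Schematically, one scanline of page data maps onto laser pulses like this. The sketch
below is a toy illustration of the on/off pattern, not actual printer firmware.

# One horizontal line of page data becomes a train of laser pulses:
# a pulse for every dot to be printed, no pulse for empty space.
scanline = [1, 1, 0, 0, 1, 0, 1, 1]

for dot in scanline:
    if dot:
        print("pulse    -> spot discharged, toner will stick here")
    else:
        print("no pulse -> spot keeps its charge, stays blank")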

The laser doesn't actually move the beam itself. It bounces the beam off a movable mirror instead.
As the mirror moves, it shines the beam through a series of lenses. This system compensates for the
image distortion caused by the varying distance between the mirror and points along the drum.

Writing the Page
The laser assembly moves in only one plane, horizontally. After each horizontal scan, the printer
moves the photoreceptor drum up a notch so the laser assembly can draw the next line. A small
print-engine computer synchronizes all of this perfectly, even at dizzying speeds.
Some laser printers use a strip of light emitting diodes (LEDs) to write the page image, instead of a
single laser. Each dot position has its own dedicated light, which means the printer has one set print
resolution. These systems cost less to manufacture than true laser assemblies, but they produce
inferior results. Typically, you'll only find them in less expensive printers.

Photocopiers
Laser printers work the same basic way as photocopiers, with a few significant differences. The
most obvious difference is the source of the image: A photocopier scans an image by reflecting a
bright light off of it, while a laser printer receives the image in digital form.
Another major difference is how the electrostatic image is created. When a photocopier bounces
light off a piece of paper, the light reflects back onto the photoreceptor from the white areas but is
absorbed by the dark areas. In this process, the "background" is discharged, while the electrostatic
image retains a positive charge. This method is called "write-white."

In most laser printers, the process is reversed: The laser discharges the lines of the electrostatic
image and leaves the background positively charged. In a printer, this "write-black" system is easier
to implement than a "write-white" system, and it generally produces better results.

Toner Basics
One of the most distinctive things about a laser printer (or photocopier) is the toner. It's such a
strange concept for the paper to grab the "ink" rather than the printer applying it. And it's even
stranger that the "ink" isn't really ink at all.
So what is toner? The short answer is: It's an electrically-charged powder with two main
ingredients: pigment and plastic.

The role of the pigment is fairly obvious -- it provides the coloring (black, in a monochrome
printer) that fills in the text and images. This pigment is blended into plastic particles, so the toner
will melt when it passes through the heat of the fuser. This quality gives toner a number of
advantages over liquid ink. Chiefly, it firmly binds to the fibers in almost any type of paper, which
means the text won't smudge or bleed easily.

Applying Toner
So how does the printer apply this toner to the electrostatic image on the drum? The powder is
stored in the toner hopper, a small container built into a removable casing. The printer gathers the
toner from the hopper with the developer unit. The "developer" is actually a collection of small,
negatively charged magnetic beads. These beads are attached to a rotating metal roller, which
moves them through the toner in the toner hopper.
Because they are negatively charged, the developer beads collect the positive toner particles as they
pass through. The roller then brushes the beads past the drum assembly. The electrostatic image has
a stronger negative charge than the developer beads, so the drum pulls the toner particles away.

In a lot of printers, the toner hopper, developer and drum assembly are combined in one replaceable
cartridge.

The drum then moves over the paper, which has an even stronger charge and so grabs the toner.
After collecting the toner, the paper is immediately discharged by the detac corona wire. At this
point, the only thing keeping the toner on the page is gravity -- if you were to blow on the page, you
would completely lose the image. The page must pass through the fuser to affix the toner. The fuser
rollers are heated by internal quartz tube lamps, so the plastic in the toner melts as it passes through.

But what keeps the toner from collecting on the fuser rolls, rather than sticking to the page? To keep
this from happening, the fuser rolls must be coated with Teflon, the same non-stick material that
keeps your breakfast from sticking to the bottom of the frying pan.

Color Printers
Initially, most commercial laser printers were limited to monochrome printing (black writing on
white paper). But now, there are lots of color laser printers on the market.
Essentially, color printers work the same way as monochrome printers, except they go through the
entire printing process four times -- one pass each for cyan (blue), magenta (red), yellow and black.
By combining these four colors of toner in varying proportions, you can generate the full spectrum
of color.
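
Here is one naive way such proportions could be derived from an RGB color value. This is
only a sketch; real color laser printers rely on calibrated color profiles and halftoning.

# Deriving CMYK toner proportions (0.0 to 1.0) from an RGB color.
def rgb_to_cmyk(r, g, b):
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c, m, y)          # pull the shared darkness into black toner
    if k == 1:                # pure black: avoid dividing by zero
        return 0.0, 0.0, 0.0, 1.0
    scale = 1 - k
    return (c - k) / scale, (m - k) / scale, (y - k) / scale, k

print(rgb_to_cmyk(255, 0, 0))    # red   -> (0.0, 1.0, 1.0, 0.0)
print(rgb_to_cmyk(128, 128, 0))  # olive -> roughly (0.0, 0.0, 1.0, 0.5)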

There are several different ways of doing this. Some models have four toner and developer units on
a rotating wheel. The printer lays down the electrostatic image for one color and puts that toner unit
into position. It then applies this color to the paper and goes through the process again for the next
color. Some printers add all four colors to a plate before placing the image on paper.

Some more expensive printers actually have a complete printer unit -- a laser assembly, a drum and
a toner system -- for each color. The paper simply moves past the different drum heads, collecting
all the colors in a sort of assembly line.

Advantages of a Laser
So why get a laser printer rather than a cheaper inkjet printer? The main advantages of laser printers
are speed, precision and economy. A laser can move very quickly, so it can "write" with much
greater speed than an ink jet. And because the laser beam has an unvarying diameter, it can draw
more precisely, without spilling any excess ink.
Laser printers tend to be more expensive than inkjet printers, but it doesn't cost as much to keep
them running -- toner powder is cheap and lasts a long time, while you can use up expensive ink
cartridges very quickly. This is why offices typically use a laser printer as their "work horse," their
machine for printing long text documents. In most models, this mechanical efficiency is
complemented by advanced processing efficiency. A typical laser-printer controller can serve
everybody in a small office.
When they were first introduced, laser printers were too expensive to use as a personal printer.
Since that time, however, laser printers have gotten much more affordable. Now you can pick up a
basic model for just a little bit more than a nice inkjet printer.

As technology advances, laser-printer prices should continue to drop, while performance improves.
We'll also see a number of innovative design variations, and possibly brand-new applications of
electrostatic printing. Many inventors believe we've only scratched the surface of what we can do
with simple static electricity!

COLOR PRINTER

http://computer.howstuffworks.com/inkjet-printer.htm

No matter where you are reading this article from, you most likely have a printer nearby. And
there's a very good chance that it is an inkjet printer. Since their introduction in the latter half of the
1980s, inkjet printers have grown in popularity and performance while dropping significantly in
price.

An inkjet printer is any printer that places extremely small droplets of ink onto paper to create an
image. If you ever look at a piece of paper that has come out of an inkjet printer, you know that:

The dots are extremely small (usually between 50 and 60 microns in diameter), so small that they
are tinier than the diameter of a human hair (70 microns)!
The dots are positioned very precisely, with resolutions of up to 1440x720 dots per inch (dpi).
The dots can have different colors combined together to create photo-quality images.
In this edition of HowStuffWorks, you will learn about the various parts of an inkjet printer and
how these parts work together to create an image. You will also learn about the ink cartridges and
the special paper some inkjet printers use.

First, let's take a quick look at the various printer technologies.

Impact vs. Non-impact
There are several major printer technologies available. These technologies can be broken down into
two main categories with several types in each:
Impact - These printers have a mechanism that touches the paper in order to create an image. There
are two main impact technologies:
Dot matrix printers use a series of small pins to strike a ribbon coated with ink, causing the ink to
transfer to the paper at the point of impact.
Character printers are basically computerized typewriters. They have a ball or series of bars with
actual characters (letters and numbers) embossed on the surface. The appropriate character is struck
against the ink ribbon, transferring the character's image to the paper. Character printers are fast and
sharp for basic text, but very limited for other use.

Non-impact - These printers do not touch the paper when creating an image. Inkjet printers are part
of this group, which includes:
Inkjet printers, which are described in this article, use a series of nozzles to spray drops of ink
directly on the paper.
Laser printers, covered in-depth in How Laser Printers Work, use dry ink (toner), static electricity,
and heat to place and bond the ink onto the paper.

Solid ink printers contain sticks of wax-like ink that are melted and applied to the paper. The ink
then hardens in place.
Dye-sublimation printers have a long roll of transparent film that resembles sheets of red-, blue-,
yellow- and gray-colored cellophane stuck together end to end. Embedded in this film are solid
dyes corresponding to the four basic colors used in printing: cyan, magenta, yellow and black
(CMYK). The print head uses a heating element that varies in temperature, depending on the
amount of a particular color that needs to be applied. The dyes vaporize and permeate the glossy
surface of the paper before they return to solid form. The printer does a complete pass over the
paper for each of the basic colors, gradually building the image.
Thermal wax printers are something of a hybrid of dye-sublimation and solid ink technologies.
They use a ribbon with alternating CMYK color bands. The ribbon passes in front of a print head
that has a series of tiny heated pins. The pins cause the wax to melt and adhere to the paper, where
it hardens in place.
Thermal autochrome printers have the color in the paper instead of in the printer. There are three
layers (cyan, magenta and yellow) in the paper, and each layer is activated by the application of a
specific amount of heat. The print head has a heating element that can vary in temperature. The
print head passes over the paper three times, providing the appropriate temperature for each color
layer as needed.
Out of all of these incredible technologies, inkjet printers are by far the most popular. In fact, the
only technology that comes close today is laser printers.

So, let's take a closer look at what's inside an inkjet printer.

Inside an Inkjet Printer
Parts of a typical inkjet printer include:

Print head assembly
Print head - The core of an inkjet printer, the print head contains a series of nozzles that are used to
spray drops of ink.

Ink cartridges - Depending on the manufacturer and model of the printer, ink cartridges come in
various combinations, such as separate black and color cartridges, color and black in a single
cartridge or even a cartridge for each ink color. The cartridges of some inkjet printers include the
print head itself.
Print head stepper motor - A stepper motor moves the print head assembly (print head and ink
cartridges) back and forth across the paper. Some printers have another stepper motor to park the
print head assembly when the printer is not in use. Parking means that the print head assembly is
restricted from accidentally moving, like a parking brake on a car.

Belt - A belt is used to attach the print head assembly to the stepper motor.
Stabilizer bar - The print head assembly uses a stabilizer bar to ensure that movement is precise and
controlled.

Paper tray/feeder - Most inkjet printers have a tray that you load the paper into. Some printers
dispense with the standard tray for a feeder instead. The feeder typically snaps open at an angle on
the back of the printer, allowing you to place paper in it. Feeders generally do not hold as much
paper as a traditional paper tray.
Rollers - A set of rollers pull the paper in from the tray or feeder and advance the paper when the
print head assembly is ready for another pass.

Paper feed stepper motor - This stepper motor powers the rollers to move the paper in the exact
increment needed to ensure a continuous image is printed.

Power supply - While earlier printers often had an external transformer, most printers sold today
use a standard power supply that is incorporated into the printer itself.
Control circuitry - A small but sophisticated amount of circuitry is built into the printer to control
all the mechanical aspects of operation, as well as decode the information sent to the printer from
the computer.

Interface port(s) - The parallel port is still used by many printers, but most newer printers use the
USB port. A few printers connect using a serial port or small computer system interface (SCSI)
port.

Heat vs. Vibration
Different types of inkjet printers form their droplets of ink in different ways. There are two main
inkjet technologies currently used by printer manufacturers:

Thermal bubble - Used by manufacturers such as Canon and Hewlett Packard, this method is
commonly referred to as bubble jet. In a thermal inkjet printer, tiny resistors create heat, and this
heat vaporizes ink to create a bubble. As the bubble expands, some of the ink is pushed out of a
nozzle onto the paper. When the bubble "pops" (collapses), a vacuum is created. This pulls more
ink into the print head from the cartridge. A typical bubble jet print head has 300 or 600 tiny
nozzles, and all of them can fire a droplet simultaneously.

Piezoelectric - Patented by Epson, this technology uses piezo crystals. A crystal is located at the
back of the ink reservoir of each nozzle. The crystal receives a tiny electric charge that causes it to
vibrate. When the crystal vibrates inward, it forces a tiny amount of ink out of the nozzle. When it
vibrates out, it pulls some more ink into the reservoir to replace the ink sprayed out.

Let's walk through the printing process to see just what happens.

Click "OK" to Print
When you click on a button to print, a sequence of events takes place:

The software application you are using sends the data to be printed to the printer driver.

The driver translates the data into a format that the printer can understand and checks to see that the
printer is online and available to print.

The data is sent by the driver from the computer to the printer via the connection interface (parallel,
USB, etc.).

The printer receives the data from the computer. It stores a certain amount of data in a buffer. The
buffer can range from 512 KB random access memory (RAM) to 16 MB RAM, depending on the
model. Buffers are useful because they allow the computer to finish with the printing process
quickly, instead of having to wait for the actual page to print. A large buffer can hold a complex
document or several basic documents.

If the printer has been idle for a period of time, it will normally go through a short clean cycle to
make sure that the print head(s) are clean. Once the clean cycle is complete, the printer is ready to
begin printing.

The control circuitry activates the paper feed stepper motor. This engages the rollers, which feed a
sheet of paper from the paper tray/feeder into the printer. A small trigger mechanism in the
tray/feeder is depressed when there is paper in the tray or feeder. If the trigger is not depressed, the
printer lights up the "Out of Paper" LED and sends an alert to the computer.

Once the paper is fed into the printer and positioned at the start of the page, the print head stepper
motor uses the belt to move the print head assembly across the page. The motor pauses for the
merest fraction of a second each time that the print head sprays dots of ink on the page and then
moves a tiny bit before stopping again. This stepping happens so fast that it seems like a continuous
motion.

Multiple dots are made at each stop. The print head sprays the CMYK colors in precise amounts to
make any color imaginable.

At the end of each complete pass, the paper feed stepper motor advances the paper a fraction of an
inch. Depending on the inkjet model, the print head is reset to the beginning side of the page, or, in
most cases, simply reverses direction and begins to move back across the page as it prints.

This process continues until the page is printed. The time it takes to print a page can vary widely
from printer to printer. It will also vary based on the complexity of the page and size of any images
on the page. For example, a printer may be able to print 16 pages per minute (PPM) of black text
but take a couple of minutes to print one, full-color, page-sized image.

Once the printing is complete, the print head is parked. The paper feed stepper motor spins the
rollers to finish pushing the completed page into the output tray. Most printers today use inks that
are very fast-drying, so that you can immediately pick up the sheet without smudging it.
In the next section, you will learn a little more about the ink cartridges and the paper used.

Paper and Ink
Inkjet printers are fairly inexpensive. They cost less than a typical black-and-white laser printer, and
much less than a color laser printer. In fact, quite a few of the manufacturers sell some of their
printers at a loss. Quite often, you can find the printer on sale for less than you would pay for a set
of the ink cartridges!

Why would they do this? Because they count on the supplies you purchase to provide their profit.
This is very similar to the way the video game business works. The hardware is sold at or below
cost. Once you buy a particular brand of hardware, then you must buy the other products that work
with that hardware. In other words, you can't buy a printer from Manufacturer A and ink cartridges
from Manufacturer B. They will not work together.

Another way that they have reduced costs is by incorporating much of the actual print head into the
cartridge itself. The manufacturers believe that since the print head is the part of the printer that is
most likely to wear out, replacing it every time you replace the cartridge increases the life of the
printer.

The paper you use on an inkjet printer greatly determines the quality of the image. Standard copier
paper works, but doesn't provide as crisp and bright an image as paper made for an inkjet printer.
There are two main factors that affect image quality:

Brightness
Absorption
The brightness of a paper is normally determined by how rough the surface of the paper is. A coarse
or rough paper will scatter light in several directions, whereas a smooth paper will reflect more of
the light back in the same direction. This makes the paper appear brighter, which in turn makes any
image on the paper appear brighter. You can see this yourself by comparing a photo in a newspaper
with a photo in a magazine. The smooth paper of the magazine page reflects light back to your eye
much better than the rough texture of the newspaper. Any paper that is listed as being bright is
generally a smoother-than-normal paper.

The other key factor in image quality is absorption. When the ink is sprayed onto the paper, it
should stay in a tight, symmetrical dot. The ink should not be absorbed too much into the paper. If
that happens, the dot will begin to feather. This means that it will spread out in an irregular fashion
to cover a slightly larger area than the printer expects it to. The result is a page that looks
somewhat fuzzy, particularly at the edges of objects and text.

As stated, feathering is caused by the paper absorbing the ink. To combat this, high-quality inkjet
paper is coated with a waxy film that keeps the ink on the surface of the paper. Coated paper
normally yields a dramatically better print than other paper. The low absorption of coated paper is
key to the high resolution capabilities of many of today's inkjet printers. For example, a typical
Epson inkjet printer can print at a resolution of up to 720x720 dpi on standard paper. With coated
paper, the resolution increases to 1440x720 dpi. The reason is that the printer can actually shift the
paper slightly and add a second row of dots for every normal row, knowing that the image will not
feather and cause the dots to blur together.

Inkjet printers are capable of printing on a variety of media. Commercial inkjet printers sometimes
spray directly on an item like the label on a beer bottle. For consumer use, there are a number of
specialty papers, ranging from adhesive-backed labels or stickers to business cards and brochures.
You can even get iron-on transfers that allow you to create an image and put it on a T-shirt! One
thing is for certain, inkjet printers definitely provide an easy and affordable way to unleash your
creativity.

Refilling Cartridges
Because of the expense of inkjet cartridges, a huge business has grown around the idea of refilling
them. For most people, refilling makes good sense, but there are a few things to be aware of:

Make sure the refill kit is for your printer model. As you learned in the previous section, different
printers use different technologies for putting the ink on the paper. If the wrong type of ink is used,
it can degrade the output or possibly damage the printer. While some commercial inkjets use oil-
based inks, virtually all desktop inkjets for home or office use have water-based ink. The exact ink
composition varies greatly between manufacturers. For example, thermal bubble inkjets need ink
that is stable at higher temperatures than piezoelectric printers.
Most manufacturers require that you use only their approved ink. Refill kits normally will void your
warranty.
While you can refill cartridges, be very careful of the ones that have the print head built into the
cartridge. You do not want to refill these more than two or three times, or the print head will begin
to deteriorate and could damage your printer.

MODEM

http://computer.howstuffworks.com/modem.htm

If you are reading this article on your computer at home, it probably arrived via modem.
In this edition of HowStuffWorks, we'll show you how a modem brings you Web pages. We'll start
with the original 300-baud modems and progress all the way through to the ADSL configurations!

(Note: If you are unfamiliar with bits, bytes and the ASCII character codes, reading How Bits and
Bytes Work will help make this article much clearer.)
Let's get started with a short recap of how the modem came to be.

The Origin of Modems
The word "modem" is a contraction of the words modulator-demodulator. A modem is typically
used to send digital data over a phone line.
The sending modem modulates the data into a signal that is compatible with the phone line, and the
receiving modem demodulates the signal back into digital data. Wireless modems convert digital
data into radio signals and back.

Modems came into existence in the 1960s as a way to allow terminals to connect to computers over
the phone lines.

In this arrangement, a dumb terminal at an off-site office or store could "dial in" to a large,
central computer. The 1960s were the age of time-shared computers, so a business would often buy
computer time from a time-share facility and connect to it via a 300-bit-per-second (bps) modem.

A dumb terminal is simply a keyboard and a screen. A very common dumb terminal at the time was
called the DEC VT-100, and it became a standard of the day (now memorialized in terminal
emulators worldwide). The VT-100 could display 25 lines of 80 characters each. When the user
typed a character on the terminal, the modem sent the ASCII code for the character to the computer.
The computer then sent the character back to the terminal so it would appear on the screen.

When personal computers started appearing in the late 1970s, bulletin board systems (BBS) became
the rage. A person would set up a computer with a modem or two and some BBS software, and
other people would dial in to connect to the bulletin board. The users would run terminal emulators
on their computers to emulate a dumb terminal.

People got along at 300 bps for quite a while. The reason this speed was tolerable was because 300
bps represents about 30 characters per second, which is a lot more characters per second than a
person can type or read. Once people started transferring large programs and images to and from
bulletin board systems, however, 300 bps became intolerable. Modem speeds went through a series
of steps at approximately two-year intervals:

300 bps - 1960s through 1983 or so
1200 bps - Gained popularity in 1984 and 1985
2400 bps
9600 bps - First appeared in late 1990 and early 1991
19.2 kilobits per second (Kbps)
28.8 Kbps
33.6 Kbps
56 Kbps - Became the standard in 1998
ADSL, with theoretical maximum of up to 8 megabits per second (Mbps) - Gained popularity in
1999
(Check out How DSL Works and How Cable Modems Work for more information on the
progression of modem technology and current speeds.)

300-bps Modems
We'll use 300-bps modems as a starting point because they are extremely easy to understand. A
300-bps modem is a device that uses frequency shift keying (FSK) to transmit digital information
over a telephone line. In frequency shift keying, a different tone (frequency) is used for the different
bits (see How Guitars Work for a discussion of tones and frequencies).
When a terminal's modem dials a computer's modem, the terminal's modem is called the originate
modem. It transmits a 1,070-hertz tone for a 0 and a 1,270-hertz tone for a 1. The computer's
modem is called the answer modem, and it transmits a 2,025-hertz tone for a 0 and a 2,225-hertz
tone for a 1. Because the originate and answer modems transmit different tones, they can use the
line simultaneously. This is known as full-duplex operation. Modems that can transmit in only one
direction at a time are known as half-duplex modems, and they are rare.

Let's say that two 300-bps modems are connected, and the user at the terminal types the letter "a."
The ASCII code for this letter is 97 decimal or 01100001 binary (see How Bits and Bytes Work for
details on binary). A device inside the terminal called a UART (universal asynchronous
receiver/transmitter) converts the byte into its bits and sends them out one at a time through the
terminal's RS-232 port (also known as a serial port). The terminal's modem is connected to the RS-
232 port, so it receives the bits one at a time and its job is to send them over the phone line.
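
A minimal sketch of that modulation is shown below, assuming 8,000 audio samples per
second and ignoring the start and stop bits (and the bit ordering) a real UART would add.

import math

# 300-bps FSK: the originate modem sends a 1,070 Hz tone for a 0 bit
# and a 1,270 Hz tone for a 1 bit.
SAMPLE_RATE = 8000          # samples per second (illustrative)
BIT_DURATION = 1 / 300      # each bit lasts 1/300 of a second at 300 bps
FREQ = {0: 1070, 1: 1270}   # originate-modem tones, in hertz

def modulate(bits):
    samples = []
    for bit in bits:
        for n in range(int(SAMPLE_RATE * BIT_DURATION)):
            samples.append(math.sin(2 * math.pi * FREQ[bit] * n / SAMPLE_RATE))
    return samples

ascii_a = [0, 1, 1, 0, 0, 0, 0, 1]       # 97 decimal, the letter "a"
line_signal = modulate(ascii_a)
print(len(line_signal), "audio samples for one character")   # 208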

Faster Modems
In order to create faster modems, modem designers had to use techniques far more sophisticated
than frequency-shift keying. First they moved to phase-shift keying (PSK), and then quadrature
amplitude modulation (QAM). These techniques allow an incredible amount of information to be
crammed into the 3,000 hertz of bandwidth available on a normal voice-grade phone line. 56K
modems, which actually connect at something like 48 Kbps on anything but absolutely perfect
lines, are about the limit of these techniques (see the links at the end of this article for more
information).

All of these high-speed modems incorporate a concept of gradual degradation, meaning they can
test the phone line and fall back to slower speeds if the line cannot handle the modem's fastest
speed.

The next step in the evolution of the modem was asymmetric digital subscriber line (ADSL)
modems. The word asymmetric is used because these modems send data faster in one direction than
they do in another. An ADSL modem takes advantage of the fact that any normal home, apartment
or office has a dedicated copper wire running between it and phone company's nearest mux or
central office. This dedicated copper wire can carry far more data than the 3,000-hertz signal
needed for your phone's voice channel. If both the phone company's central office and your house
are equipped with an ADSL modem on your line, then the section of copper wire between your
house and the phone company can act as a purely digital high-speed transmission channel. The
capacity is something like 1 million bits per second (Mbps) between the home and the phone
company (upstream) and 8 Mbps between the phone company and the home (downstream) under
ideal conditions. The same line can transmit both a phone conversation and the digital data.

The approach an ADSL modem takes is very simple in principle. The phone line's bandwidth
between 24,000 hertz and 1,100,000 hertz is divided into 4,000-hertz bands, and a virtual modem is
assigned to each band. Each of these 269 virtual modems tests its band and does the best it can with
the slice of bandwidth it is allocated. The aggregate of the 269 virtual modems is the total speed of
the pipe.
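
The arithmetic behind those figures, with an illustrative per-band rate (in practice each
virtual modem's rate depends on the quality of its slice of the line):

# 4,000-hertz bands between 24 kHz and 1.1 MHz
low, high, band = 24_000, 1_100_000, 4_000
n_bands = (high - low) // band
print(n_bands)                         # 269 virtual modems
print(n_bands * 30_000 / 1e6, "Mbps")  # ~8 Mbps if each band manages ~30 Kbps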

Point-to-Point Protocol
Today, no one uses dumb terminals or terminal emulators to connect to an individual computer.
Instead, we use our modems to connect to an Internet service provider (ISP), and the ISP connects
us into the Internet. The Internet lets us connect to any machine in the world (see How Web Servers
and the Internet Work for details). Because of the relationship between your computer, the ISP and
the Internet, it is no longer appropriate to send individual characters. Instead, your modem is
routing TCP/IP packets between you and your ISP.
The standard technique for routing these packets through your modem is called the Point-to-Point
Protocol (PPP). The basic idea is simple -- your computer's TCP/IP stack forms its TCP/IP
datagrams normally, but then the datagrams are handed to the modem for transmission. The ISP
receives each datagram and routes it appropriately onto the Internet. The same process occurs to get
data from the ISP to your computer.
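
In very rough outline, the encapsulation looks something like the sketch below. This is a
simplified illustration only: real PPP (RFCs 1661 and 1662) adds link negotiation, byte
stuffing and a genuine CRC, and the checksum here is just a placeholder.

# A PPP-style frame wrapping an IP datagram between flag bytes.
FLAG, ADDRESS, CONTROL = 0x7E, 0xFF, 0x03
PROTO_IPV4 = 0x0021

def ppp_frame(datagram):
    header = bytes([ADDRESS, CONTROL]) + PROTO_IPV4.to_bytes(2, "big")
    fcs = sum(header + datagram) & 0xFFFF   # placeholder, not a real CRC-16
    return bytes([FLAG]) + header + datagram + fcs.to_bytes(2, "big") + bytes([FLAG])

fake_datagram = bytes.fromhex("450000140000")   # stand-in for a TCP/IP datagram
print(ppp_frame(fake_datagram).hex(" "))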

MOUSE

http://computer.howstuffworks.com/mouse.htm

Mice first broke onto the public stage with the introduction of the Apple Macintosh in 1984, and
since then they have helped to completely redefine the way we use computers.
Every day of your computing life, you reach out for your mouse whenever you want to move your
cursor or activate something. Your mouse senses your motion and your clicks and sends them to the
computer so it can respond appropriately.

In this article we'll take the cover off of this important part of the human-machine interface and see
exactly what makes it tick.

Evolution
It is amazing how simple and effective a mouse is, and it is also amazing how long it took mice to
become a part of everyday life. Given that people naturally point at things -- usually before they
speak -- it is surprising that it took so long for a good pointing device to develop. Although
originally conceived in the 1960s, a couple of decades passed before mice became mainstream.
In the beginning, there was no need to point because computers used crude interfaces like teletype
machines or punch cards for data entry. The early text terminals did nothing more than emulate a
teletype (using the screen to replace paper), so it was many years (well into the 1960s and early
1970s) before arrow keys were found on most terminals. Full screen editors were the first things to
take real advantage of the cursor keys, and they offered humans the first way to point.

Light pens were used on a variety of machines as a pointing device for many years, and graphics
tablets, joy sticks and various other devices were also popular in the 1970s. None of these really
took off as the pointing device of choice, however.

When the mouse hit the scene -- attached to the Mac -- it was an immediate success. There is
something about it that is completely natural. Compared to a graphics tablet, mice are extremely
inexpensive and they take up very little desk space. In the PC world, mice took longer to gain
ground, mainly because of a lack of support in the operating system. Once Windows 3.1 made
Graphical User Interfaces (GUIs) a standard, the mouse became the PC-human interface of choice
very quickly.

Inside a Mouse
The main goal of any mouse is to translate the motion of your hand into signals that the computer
can use. Let's take a look inside a track-ball mouse to see how it works:

A ball inside the mouse touches the desktop and rolls when the mouse moves.

Two rollers inside the mouse touch the ball. One of the rollers is oriented so that it detects motion in
the X direction, and the other is oriented 90 degrees to the first roller so it detects motion in the Y
direction. When the ball rotates, one or both of these rollers rotate as well.

The rollers each connect to a shaft, and the shaft spins a disk with holes in it. When a roller rolls, its
shaft and disk spin.

On either side of the disk there is an infrared LED and an infrared sensor. The holes in the disk
break the beam of light coming from the LED so that the infrared sensor sees pulses of light. The
rate of the pulsing is directly related to the speed of the mouse and the distance it travels.

An on-board processor chip reads the pulses from the infrared sensors and turns them into binary
data that the computer can understand. The chip sends the binary data to the computer through the
mouse's cord.

In this optomechanical arrangement, the disk moves mechanically, and an optical system counts
pulses of light. On this mouse, the ball is 21 mm in diameter. The roller is 7 mm in diameter. The
encoding disk has 36 holes. So if the mouse moves 25.4 mm (1 inch), the encoder chip detects 41
pulses of light.
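
That figure follows directly from the geometry, as a quick check shows:

import math

# A 7 mm roller turned by the ball, with a 36-hole encoder disk,
# over one inch (25.4 mm) of mouse travel.
travel_mm = 25.4
roller_circumference_mm = math.pi * 7        # about 21.99 mm per revolution
roller_revolutions = travel_mm / roller_circumference_mm
pulses = roller_revolutions * 36             # about 41.6
print(int(pulses), "pulses")                 # 41 -- one pulse per ~0.6 mm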

You might have noticed that each encoder disk has two infrared LEDs and two infrared sensors, one
on each side of the disk (so there are four LED/sensor pairs inside a mouse). This arrangement
allows the processor to detect the disk's direction of rotation. There is a piece of plastic with a
small, precisely located hole that sits between the encoder disk and each infrared sensor.

This piece of plastic provides a window through which the infrared sensor can "see." The window
on one side of the disk is located slightly higher than it is on the other -- one-half the height of one
of the holes in the encoder disk, to be exact. That difference causes the two infrared sensors to see
pulses of light at slightly different times. There are times when one of the sensors will see a pulse of
light when the other does not, and vice versa.
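
The principle can be sketched in a few lines: on each rising edge from one sensor, the
other sensor's reading tells you which way the disk is turning. The waveforms below are
idealized.

# Two sensors reading the same disk a quarter cycle out of phase.
def direction(samples_a, samples_b):
    moves = []
    for i in range(1, len(samples_a)):
        if samples_a[i - 1] == 0 and samples_a[i] == 1:   # rising edge on A
            moves.append("forward" if samples_b[i] == 0 else "backward")
    return moves

a = [0, 1, 1, 0, 0, 1, 1, 0]   # sensor A leads when the disk turns forward
b = [0, 0, 1, 1, 0, 0, 1, 1]   # sensor B lags by a quarter cycle
print(direction(a, b))         # ['forward', 'forward']
print(direction(b, a))         # ['backward', 'backward'] -- reversed rotation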

Data Interface

Most mice on the market today use a USB connector to attach to your computer. USB is a standard
way to connect all kinds of peripherals to your computer, including printers, digital cameras,
keyboards and mice. See How USB Ports Work for more information about this technology.
Some older mice, many of which are still in use today, have a PS/2 type connector.

Instead of a PS/2 connector, a few other older mice use a serial type of connector to attach to a
computer. See How Serial Ports Work for more information.

Optical Mice
Developed by Agilent Technologies and introduced to the world in late 1999, the optical mouse
actually uses a tiny camera to take thousands of pictures every second.
Able to work on almost any surface without a mouse pad, most optical mice use a small, red light-
emitting diode (LED) that bounces light off that surface onto a complementary metal-oxide
semiconductor (CMOS) sensor. In addition to LEDs, a recent innovation is laser-based optical
mice, which detect more surface detail than LED technology. This results in the ability to use
a laser-based optical mouse on even more surfaces than an LED mouse.

Here's how the sensor and other parts of an optical mouse work together:

The CMOS sensor sends each image to a digital signal processor (DSP) for analysis.
The DSP detects patterns in the images and examines how the patterns have moved since the
previous image.
Based on the change in patterns over a sequence of images, the DSP determines how far the mouse
has moved and sends the corresponding coordinates to the computer.
The computer moves the cursor on the screen based on the coordinates received from the mouse.
This happens hundreds of times each second, making the cursor appear to move very smoothly.
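
A toy version of that pattern-matching step appears below; real sensors do this in
dedicated silicon, and the frame values and search range here are invented for
illustration.

# Find how far a patch of surface texture moved between two frames by
# testing small shifts and keeping the one with the lowest sum of
# absolute differences (SAD).
def best_shift(prev, curr, max_shift=1):
    h, w = len(prev), len(prev[0])
    best = None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            sad = 0
            for y in range(h):
                for x in range(w):
                    y2, x2 = y + dy, x + dx
                    if 0 <= y2 < h and 0 <= x2 < w:
                        sad += abs(prev[y][x] - curr[y2][x2])
            if best is None or sad < best[0]:
                best = (sad, dx, dy)
    return best[1], best[2]   # (dx, dy) in pixels

frame1 = [[9, 1, 1],
          [1, 1, 1],
          [1, 1, 1]]
frame2 = [[1, 9, 1],          # the bright spot moved one pixel right
          [1, 1, 1],
          [1, 1, 1]]
print(best_shift(frame1, frame2))   # (1, 0)
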
Optical mice have several benefits over track-ball mice:

No moving parts means less wear and a lower chance of failure.
There's no way for dirt to get inside the mouse and interfere with the tracking sensors.
Increased tracking resolution means a smoother response.
They don't require a special surface, such as a mouse pad.

Back to the Drawing Board
Another type of optical mouse has been around for over a decade. The original optical-mouse
technology bounced a focused beam of light off a highly-reflective mouse pad onto a sensor. The
mouse pad had a grid of dark lines. Each time the mouse was moved, the beam of light was
interrupted by the grid. Whenever the light was interrupted, the sensor sent a signal to the computer
and the cursor moved a corresponding amount.
This kind of optical mouse was difficult to use, requiring that you hold it at precisely the right angle
to ensure that the light beam and sensor aligned. Also, damage to or loss of the mouse pad rendered
the mouse useless until a replacement pad was purchased. Today's optical mice are far more user-
friendly and reliable.

Accuracy
A number of factors affect the accuracy of an optical mouse. One of the most important aspects is
resolution. The resolution is the number of pixels per inch that the optical sensor and focusing lens
"see" when you move the mouse. Resolution is expressed as dots per inch (dpi). The higher the
resolution, the more sensitive the mouse is and the less you need to move it to obtain a response.
Most mice have a resolution of 400 or 800 dpi. However, mice designed for playing electronic
games can offer as much as 1600 dpi resolution. Some gaming mice also allow you to decrease the
dpi on the fly to make the mouse less sensitive in situations when you need to make smaller, slower
movements.

Historically, corded mice have been more responsive than wireless mice. This fact is changing,
however, with the advent of improvements in wireless technologies and optical sensors. Other
factors that affect quality include:

Size of the optical sensor -- larger is generally better, assuming the other mouse components can
handle the larger size. Sizes range from 16 x 16 pixels to 30 x 30 pixels.
Refresh rate -- how often the sensor samples images as you move the mouse. Faster is generally
better, assuming the other mouse components can process the images. Rates range from 1,500 to
6,000 samples per second.
Image processing rate -- a combination of the size of the optical sensor and the refresh rate.
Again, faster is better, and rates range from 0.486 to 5.8 megapixels per second.
Maximum speed -- the maximum speed at which you can move the mouse and obtain accurate
tracking. Faster is better, and rates range from 16 to 40 inches per second.

Wireless Mice
Most wireless mice use radio frequency (RF) technology to communicate information to your
computer. Being radio-based, RF devices require two main components: a transmitter and a
receiver. Here's how it works:
The transmitter is housed in the mouse. It sends an electromagnetic (radio) signal that encodes the
information about the mouse's movements and the buttons you click.
The receiver, which is connected to your computer, accepts the signal, decodes it and passes it on to
the mouse driver software and your computer's operating system.
The receiver can be a separate device that plugs into your computer, a special card that you place in
an expansion slot, or a built-in component.

Many electronic devices use radio frequencies to communicate. Examples include cellular phones,
wireless networks, and garage door openers. To communicate without conflicts, different types of
devices have been assigned different frequencies. Newer cell phones use a frequency of 900
megahertz, garage door openers operate at a frequency of 40 megahertz, and 802.11b/g wireless
networks operate at 2.4 gigahertz. Megahertz (MHz) means "one million cycles per second," so
"900 megahertz" means that there are 900 million electromagnetic waves per second. Gigahertz
(GHz) means "one billion cycles per second." To learn more about RF and frequencies, see How the
Radio Spectrum Works.

Benefits
Unlike infrared technology, which is commonly used for short-range wireless communications such
as television remote controls, RF devices do not need a clear line of sight between the transmitter
(mouse) and receiver. Just like other types of devices that use radio waves to communicate, a
wireless mouse signal can pass through barriers such as a desk or your monitor.

RF technology provides a number of additional benefits for wireless mice. These include:

RF transmitters require low power and can run on batteries
RF components are inexpensive
RF components are light weight
As with most mice on the market today, wireless mice use optical sensor technology rather than the
earlier track-ball system. Optical technology improves accuracy and lets you use the wireless mouse
on almost any surface -- an important feature when you're not tied to your computer by a cord.

Pairing and Security
In order for the transmitter in the mouse to communicate with its receiver, they must be paired. This
means that both devices are operating at the same frequency on the same channel using a common
identification code. A channel is simply a specific frequency and code. The purpose of pairing is to
filter out interference from other sources and RF devices.

Pairing methods vary, depending on the mouse manufacturer. Some devices come pre-paired.
Others use methods such as a pairing sequence that occurs automatically, when you push specific
buttons, or when you turn a dial on the receiver and/or mouse.

To protect the information your mouse transmits to the receiver, most wireless mice include an
encryption scheme to encode data into an unreadable format. Some devices also use a frequency
hopping method, which causes the mouse and receiver to automatically change frequencies using a
predetermined pattern. This provides additional protection from interference and eavesdropping.
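
Conceptually, the pairing-plus-hopping scheme can be sketched as below, where a shared
identification code seeds an identical channel sequence on both sides. The channel count
and seed are illustrative, and real devices use more elaborate hop-selection algorithms.

import random

CHANNELS = list(range(79))   # e.g. 79 one-MHz channels in the 2.4 GHz band

def hop_sequence(shared_code, hops):
    rng = random.Random(shared_code)   # same seed -> same sequence
    return [rng.choice(CHANNELS) for _ in range(hops)]

mouse = hop_sequence(shared_code=0xC0FFEE, hops=5)
receiver = hop_sequence(shared_code=0xC0FFEE, hops=5)
print(mouse == receiver)   # True -- paired devices hop in step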

Bluetooth Mice
One of the RF technologies that wireless mice commonly use is Bluetooth. Bluetooth technology
wirelessly connects peripherals such as printers, headsets, keyboards and mice to Bluetooth-enabled
devices such as computers and personal digital assistants (PDAs). Because a Bluetooth receiver can
accommodate multiple Bluetooth peripherals at one time, Bluetooth is also known as a personal
area network (PAN). Bluetooth devices have a range of about 33 feet (10 meters).

Bluetooth operates in the 2.4 GHz range using RF technology. It avoids interference among
multiple Bluetooth peripherals through a technique called spread-spectrum frequency hopping.
WiFi devices such as 802.11b/g wireless networks also operate in the 2.4 GHz range, as do some
cordless telephones and microwave ovens. Version 1.2 of Bluetooth provides
adaptive frequency hopping (AFH), which is an enhanced frequency-hopping technology designed
to avoid interference with other 2.4 GHz communications.

Why is it called Bluetooth?
Harald Bluetooth was king of Denmark in the late 900s. He managed to unite Denmark and part of
Norway into a single kingdom, then introduced Christianity into Denmark. He left a large
monument, the Jelling rune stone, in memory of his parents. He was killed in 986 during a battle
with his son, Svend Forkbeard. Choosing this name for the standard indicates how important
companies from the Baltic region (nations including Denmark, Sweden, Norway and Finland) are to
the communications industry, even if it says little about the way the technology works.

RF Mice
The other common type of wireless mouse is an RF device that operates at 27 MHz and has a range
of about 6 feet (2 meters). More recently, 2.4 GHz RF mice have hit the market with the advantage
of a longer range -- about 33 feet (10 meters) and faster transmissions with less interference.
Multiple RF mice in one room can result in cross-talk, which means that the receiver inadvertently
picks up the transmissions from the wrong mouse. Pairing and multiple channels help to avoid this
problem.
Typically, the RF receiver plugs into a USB port and does not accept any peripherals other than the
mouse (and perhaps a keyboard, if sold with the mouse). Some portable models designed for use
with notebook computers come with a compact receiver that can be stored in a slot inside the mouse
when not in use.

Mouse Tip
If you want to use both a wireless RF mouse and keyboard, buy them together. Pairing and
transmission technology is unique to each manufacturer and device. If you purchase an RF wireless
keyboard and mouse separately, you may have to connect a receiver for each one to your PC.

Working Together
Some PC keyboards and mice are designed to work together to give you more options for input. For
example, the Logitech Cordless Desktop LX700 comes with a keyboard that has scroll, pan and
zoom capabilities. The mouse includes the same features, so that you can use either to perform these
functions.

Multi-Media Mouse and Combination Mouse/Remote
These types of mice are used with multimedia systems such as the Windows XP Media Center
Edition computers. Some combine features of a mouse with additional buttons (such as play, pause,
forward, back and volume) for controlling media. Others resemble a television/media player remote
control with added features for mousing. Remote controls generally use infrared sensors but some
use a combination of infrared and RF technology for greater range.

Gaming Mice
Gaming mice are high-precision, optical mice designed for use with PCs and game controllers.
Features may include:

Multiple buttons for added flexibility and functions such as adjusting dpi rates on the fly
Wireless connectivity and an optical sensor
Motion feedback and two-way communication

Motion-Based Mice
Yet another innovation in mouse technology is motion-based control. With this feature, you control
the mouse pointer by waving the mouse in the air.

The technology patented by one manufacturer, Gyration, incorporates miniature gyroscopes to track
the motion of the mouse as you wave it in the air. It uses an electromagnetic transducer and sensors
to detect rotation in two axes at the same time. The mouse operates on the principle of the Coriolis
Effect, which is the apparent turning of an object that's moving in relation to another rotating object.
The device and accompanying software convert the mouse movements into movements on the
computer's screen. The mice also include an optical sensor for use on a desktop.

Biometric Mice
Biometric mice add security to your computer system by permitting only authorized users to control
the mouse and access the computer. Protection is accomplished with an integrated fingerprint reader
either in the receiver or the mouse. This feature enhances security and adds convenience because
you can use your fingerprint rather than passwords for a secure login.

To use the biometric feature, a software program that comes with the mouse registers fingerprints
and stores information about corresponding authorized users. Some software programs also let you
encrypt and decrypt files. For more information about biometric fingerprint technology, see How
Fingerprint Scanners Work.

Tilting Scroll Wheel
A recent innovation in mouse scrolling is a tilting scroll wheel that allows you to scroll onscreen
both horizontally (left/right) and vertically (up/down). The ability to scroll both ways is handy when
you are viewing wide documents like a Web page or spreadsheet.
To navigate both horizontally and vertically, the scroll wheel is positioned on a combination
fulcrum and lever. This is the design used by the Logitech Cordless Click! Plus mouse.

Another method for vertical and horizontal scrolling is a touch scroll panel that responds to your
finger sliding horizontally and vertically, as employed by the Logitech V500 Cordless Notebook
Mouse.

PLUG AND PLAY

http://www.pcguide.com/ref/mbsys/res/pnp-c.html

The large variety of different cards that can be added to PCs to expand their capabilities is both a
blessing and a curse. As you can see from the other sections that have discussed system resources,
configuring the system and dealing with resource conflicts is part of the curse of having so many
different non-standard devices on the market. Dealing with these issues can be a tremendously
confusing, difficult and time-consuming task. In fact, many users have stated that this is the single
most frustrating part of owning and maintaining a PC, or of upgrading the PC's hardware.

In an attempt to resolve this ongoing problem, the Plug and Play (also called PnP) specification was
developed by Microsoft with cooperation from Intel and many other hardware manufacturers. The
goal of Plug and Play is to create a computer whose hardware and software work together to
automatically configure devices and assign resources, to allow for hardware changes and additions
without the need for large-scale resource assignment tweaking. As the name suggests, the goal is to
be able to just plug in a new device and immediately be able to use it, without complicated setup
maneuvers.

A form of Plug and Play was actually first made available on the EISA and MCA buses many years
ago. For several reasons, however, neither of these buses caught on and became popular. PnP hit the
mainstream in 1995 with the release of Windows 95 and PC hardware designed to work with it.

Requirements for Plug and Play

Automatically detecting and configuring hardware and software is not a simple task. To perform
this work, cooperation is required from several hardware and software areas. The four "partners"
that must be Plug and Play compliant in order for it to work properly are:

System Hardware: The hardware on your system, through the system chipset and system bus
controllers, must be capable of handling PnP devices. For modern PCI-based systems this is built
in, as PCI was designed with PnP in mind. Most PCI-based systems also support PnP on their ISA
bus, with special circuitry to link the two together and share resource information. Older PCs with
ISA-only or VL-bus system buses generally do not support Plug and Play.
Peripheral Hardware: The devices that you are adding into the system must themselves be PnP
compatible. PnP is now supported for a wide variety of devices, from modems and network cards
inside the box to printers and even monitors outside it. These devices must be PnP-aware so that
they are capable of identifying themselves when requested, and able to accept resource assignments
from the system when they are made.
The System BIOS: The system BIOS plays a key role in making Plug and Play work. Routines built
into the BIOS perform the actual work of collecting information about the different devices and
determining what should use which resources. The BIOS also communicates this information to the
operating system, which uses it to configure its drivers and other software to make the devices work
correctly. In many cases older PCs that have an outdated BIOS but otherwise have support for PnP
in hardware (PCI-based Pentiums produced between 1993 and 1995 are the prime candidates) can
be made PnP-compliant through a BIOS upgrade.
The Operating System: Finally, the operating system must be designed to work with the BIOS (and
thus indirectly, with the hardware as well). The operating system sets up any low-level software
(such as device drivers) that are necessary for the device to be used by applications. It also
communicates with the user, notifying him or her of changes to the configuration, and allows
changes to be made to resource settings if necessary. Currently, the only mainstream operating
system with full PnP support is Windows 95.
As you can see, you need a lot for Plug and Play to work, and this is why the vast majority of older
systems (pre-1996) do not properly support this standard.

Plug and Play Operation

Most of the actual work involved in making Plug and Play function is performed by the system
BIOS during the boot process. At the appropriate step of the boot process, the BIOS will follow a
special procedure to determine and configure the Plug and Play devices in your system. Here is a
rough layout of the steps that the BIOS follows at boot time when managing a PCI-based Plug and
Play system:

1. Create a resource table of the available IRQs, DMA channels and I/O addresses, excluding any
that are reserved for system devices.
2. Search for and identify PnP and non-PnP devices on the PCI and ISA buses.
3. Load the last known system configuration from the ESCD area stored in non-volatile memory.
4. Compare the current configuration to the last known configuration. If they are unchanged, this
part of the boot process ends and the rest of the bootup continues from here.
5. If the configuration is new, begin system reconfiguration. Start with the resource table by
eliminating any resources being used by non-PnP devices.
6. Check the BIOS settings to see if any additional system resources have been reserved for use by
non-PnP devices and eliminate any of these from the resource table.
7. Assign resources to PnP cards from the resources remaining in the resource table, and inform the
devices of their new assignments.
8. Update the ESCD area by saving the new system configuration to it. Most BIOSes will print a
message like "Updating ESCD ... Successful" when this happens.
9. Continue with the boot.
Tip: See the section on PCI / PnP in the BIOS area, which describes the BIOS settings that affect
how PnP works in a PCI system.
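
To make the resource-assignment step (step 7 above) concrete, here is a minimal Python sketch of
how a pool of free IRQs might be handed out. The device names, reserved numbers and single-resource
model are all simplifications invented for illustration; a real BIOS juggles IRQs, DMA channels and
I/O addresses together.

ALL_IRQS = set(range(16))
RESERVED = {0, 1, 2, 8, 13, 14}     # system devices (timer, keyboard, clock, ...)
LEGACY = {5, 7}                     # claimed by non-PnP ("legacy") cards

def assign_irqs(pnp_devices):
    # hand out the remaining IRQs, lowest first, and record the table
    free = sorted(ALL_IRQS - RESERVED - LEGACY)
    table = {}
    for device in pnp_devices:
        if not free:
            raise RuntimeError("no free IRQ left for " + device)
        table[device] = free.pop(0)
    return table

print(assign_irqs(["modem", "sound card", "network card"]))
# -> {'modem': 3, 'sound card': 4, 'network card': 6}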

Extended System Configuration Data (ESCD)

If the BIOS were to assign resources to each PnP device on every boot, two problems would result.
First, it would waste time on every boot redoing work it had already done; after all, most people
change their system hardware relatively infrequently. Second and more importantly, the BIOS might
not always make the same decisions when allocating resources, so you might find assignments
changing even when the hardware remains unchanged.

ESCD is designed to overcome these problems. The ESCD area is a special part of your BIOS's
CMOS memory, where BIOS settings are held. This area of memory is used to hold configuration
information for the hardware in your system. At boot time the BIOS checks this area of memory
and if no changes have occurred since the last bootup, it knows it doesn't need to configure anything
and skips that portion of the boot process.
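
The "skip if unchanged" logic can be sketched in a few lines of Python. The fingerprinting
approach and all the names here are illustrative assumptions, not the actual ESCD format:

import hashlib

def fingerprint(devices):
    # devices: (device, resource) pairs; order must not matter
    text = ";".join(sorted(f"{d}={r}" for d, r in devices))
    return hashlib.md5(text.encode()).hexdigest()

last_known = fingerprint([("modem", "IRQ3"), ("sound", "IRQ5")])

def boot_check(devices):
    if fingerprint(devices) == last_known:
        return "unchanged - skip reconfiguration"
    return "Updating ESCD ... Successful"

print(boot_check([("modem", "IRQ3"), ("sound", "IRQ5")]))   # unchanged
print(boot_check([("modem", "IRQ3"), ("net", "IRQ10")]))    # reconfigure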

ESCD is also used as a communications link between the BIOS and the operating system. Both use
the ESCD area to read the current status of the hardware and to record changes. Windows 95 reads
the ESCD to see if hardware has been changed and react accordingly. Windows 95 also allows
users to override Plug and Play resource assignments by manually changing resources in the Device
Manager. This information is recorded in the ESCD area so the BIOS knows about the change at
the next boot and doesn't try to change the assignment back again.

The ESCD information is stored in a non-volatile CMOS memory area, the same way that standard
BIOS settings are stored.

Note: Some (relatively rare) systems using Windows 95 can exhibit strange behavior caused by an
incompatibility between how Windows 95 and the BIOS use ESCD. This can cause an "Updating
ESCD" message to appear each and every time the system is booted, instead of only when the
hardware is changed.

Plug and Play and Non-Plug-and-Play Devices

Devices that do not support the PnP standard can be used in a PnP system, but they present special
problems. These are called legacy devices, which is geekspeak for "old hardware we have to keep
using even though it doesn't have the capabilities we wish it did". :^) They make resource
assignment much more difficult because they cannot be automatically configured by the BIOS.

Generally, the BIOS deals with non-PnP devices by ignoring them. It simply considers them as
"part of the scenery" and avoids any resources they are using. There is usually no problem using
these devices with PnP, but using too many non-PnP devices can make it more difficult for PnP to
work, due to the large number of resources that it is not allowed to touch.

"Plug and Pray" :^)

This amusing sarcastic name for Plug and Play has become all too commonly heard these days. It
refers to the large number of problems associated with getting Plug and Play to work on many
systems. It's odd to consider--wasn't the whole point of Plug and Play to make it easier to configure
systems? It is, but unfortunately PnP falls short of its lofty goal in many cases.

When you use PnP, you are essentially turning over control of system configuration to the PC. The
problem is a common one in computers: the computer isn't as smart as the human, or more
specifically, the computer isn't as "resourceful" (no pun intended. :^) ). Computers are not nearly as
good as humans at realizing things like this: "Well, if I put the modem at that value and the printer
there, I will have a conflict. But I can fix that by changing the assignment for the sound card,
moving the modem over here, and putting the printer there". The system can take care of the simple
situations, but can become confused by more complicated ones. The use of multiple "legacy" ISA
devices can exacerbate this. Generally, the more complex your setup, the more likely you will need
to manually "tweak" whatever PnP comes up with by default.

The biggest problems with Plug and Play revolve around its apparent "stubbornness". At times, the
BIOS and operating system seem determined to put a device at a location where you do not want it.
For example, you may have a modem that you want at COM3 and IRQ5, but the BIOS may decide
to put it at COM4 and IRQ3, conflicting with the COM2 serial port. This can get quite aggravating
to deal with. Also, some people just prefer the feeling of being "in control" that they lose when PnP
is used. (I must admit to being one of these people, oftentimes.)

The problems with PnP are less common now than they were in the first year that it was announced.
As with any new technology--especially one that is as complex as PnP and that involves so many
parts of the system--it takes time to iron the bugs out. Most systems today work quite well with
PnP. In most cases problems with PnP are due to incorrect system configuration, manual overrides
of PnP devices through the Windows 95 Device Manager, or incorrect BIOS settings.

KEYBOARD

http://computer.howstuffworks.com/keyboard.htm

The part of the computer that we come into most contact with is probably the piece that we think
about the least. But the keyboard is an amazing piece of technology. For instance, did you know
that the keyboard on a typical computer system is actually a computer itself?

At its essence, a keyboard is a series of switches connected to a microprocessor that monitors the
state of each switch and initiates a specific response to a change in that state. In this edition of How
Stuff Works, you will learn more about this switching action, and about the different types of
keyboards, how they connect and talk to your computer, and what the components of a keyboard
are.

Types of Keyboards
Keyboards have changed very little in layout since their introduction. In fact, the most common
change has simply been the natural evolution of adding more keys that provide additional
functionality.
The most common keyboards are:

101-key Enhanced keyboard
104-key Windows keyboard
82-key Apple standard keyboard
108-key Apple Extended keyboard
Portable computers such as laptops quite often have custom keyboards that have slightly different
key arrangements than a standard keyboard. Also, many system manufacturers add specialty
buttons to the standard layout. A typical keyboard has four basic types of keys:
Typing keys
Numeric keypad
Function keys
Control keys
The typing keys are the section of the keyboard that contain the letter keys, generally laid out in the
same style that was common for typewriters. This layout, known as QWERTY for the first six
letters in the layout, was originally designed to slow down fast typists by making the arrangement
of the keys somewhat awkward! Typewriter manufacturers did this because the
mechanical arms that imprinted each character on the paper could jam together if the keys were
pressed too rapidly. Because it has been long established as a standard, and people have become
accustomed to the QWERTY configuration, manufacturers developed keyboards for computers
using the same layout, even though jamming is no longer an issue. Critics of the QWERTY layout
have adopted another layout, Dvorak, that places the most commonly used letters in the most
convenient arrangement.

The numeric keypad is a part of the natural evolution mentioned previously. As the use of
computers in business environments increased, so did the need for speedy data entry. Since a large
part of the data was numbers, a set of 17 keys was added to the keyboard. These keys are laid out in
the same configuration used by most adding machines and calculators, to facilitate the transition to
computer for clerks accustomed to these other machines.

In 1986, IBM extended the basic keyboard with the addition of function and control keys. The
function keys, arranged in a line across the top of the keyboard, could be assigned specific
commands by the current application or the operating system. Control keys provided cursor and
screen control. Four keys arranged in an inverted T formation between the typing keys and numeric
keypad allow the user to move the cursor on the display in small increments. The control keys allow
the user to make large jumps in most applications. Common control keys include:

Home
End
Insert
Delete
Page Up
Page Down
Control (Ctrl)
Alternate (Alt)
Escape (Esc)
The Windows keyboard adds some extra control keys: two Windows or Start keys, and an
Application key. The Apple keyboards are specific to Apple Mac systems.

Inside the Keyboard
The processor in a keyboard has to understand several things that are important to the utility of the
keyboard, such as:
Position of the key in the key matrix.
The amount of bounce and how to filter it.
The speed at which to transmit the typematics.

The key matrix is the grid of circuits underneath the keys. In all keyboards except for capacitive
ones, each circuit is broken at the point below a specific key. Pressing the key bridges the gap in the
circuit, allowing a tiny amount of current to flow through. The processor monitors the key matrix
for signs of continuity at any point on the grid. When it finds a circuit that is closed, it compares the
location of that circuit on the key matrix to the character map in its ROM. The character map is
basically a comparison chart for the processor that tells it what the key at x,y coordinates in the key
matrix represents. If more than one key is pressed at the same time, the processor checks to see if
that combination of keys has a designation in the character map. For example, pressing the a key by
itself would result in a small letter "a" being sent to the computer. If you press and hold down the
Shift key while pressing the a key, the processor compares that combination with the character map
and produces a capital letter "A."
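
A minimal Python sketch of the character-map idea follows; the matrix coordinates and entries are
invented for illustration, not taken from any real keyboard ROM:

CHAR_MAP = {
    (0, 0): ("a", "A"),    # (row, col) -> (plain, with Shift held)
    (0, 1): ("s", "S"),
    (3, 7): ("1", "!"),
}

def decode(closed_point, shift_down):
    # closed_point: the (row, col) where the processor found continuity
    plain, shifted = CHAR_MAP[closed_point]
    return shifted if shift_down else plain

print(decode((0, 0), shift_down=False))   # -> a
print(decode((0, 0), shift_down=True))    # -> A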

The character map in the keyboard can be superseded by a different character map provided by the
computer. This is done quite often in languages whose characters do not have English equivalents.
Also, there are utilities for changing the character map from the traditional QWERTY to Dvorak
or another custom version.

Keyboards rely on switches that cause a change in the current flowing through the circuits in the
keyboard. When the key presses the keyswitch against the circuit, there is usually a small amount of
vibration between the surfaces, known as bounce. The processor in a keyboard recognizes that this
very rapid switching on and off is not caused by you pressing the key repeatedly. Therefore, it
filters all of the tiny fluctuations out of the signal and treats it as a single keypress.
If you continue to hold down a key, the processor determines that you wish to send that character
repeatedly to the computer. This is known as typematics. In this process, the repeat rate can
normally be set in software, typically ranging from 30 characters per second (cps) down to as few
as two cps.
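
Both behaviors are easy to sketch in Python. The time thresholds below are assumptions chosen for
illustration, not values from any particular keyboard controller:

DEBOUNCE_MS = 5          # flutter faster than this is treated as bounce
REPEAT_DELAY_MS = 500    # hold time before typematic repeat begins
REPEAT_RATE_CPS = 10     # repeated characters per second while held

def debounce(transitions):
    # transitions: (time_ms, is_down) switch readings; keep only real changes
    accepted = []
    for t, state in transitions:
        if accepted and t - accepted[-1][0] < DEBOUNCE_MS:
            continue                 # too soon after the last change: bounce
        if accepted and accepted[-1][1] == state:
            continue                 # not actually a change of state
        accepted.append((t, state))
    return accepted

def typematic(press_ms, release_ms, char):
    # one physical hold produces the character, a delay, then repeats
    out, t = [char], press_ms + REPEAT_DELAY_MS
    while t < release_ms:
        out.append(char)
        t += 1000 // REPEAT_RATE_CPS
    return "".join(out)

print(debounce([(0, True), (1, False), (2, True), (800, False)]))
# -> [(0, True), (800, False)]: the 1-2 ms flutter was filtered out
print(typematic(0, 1000, "a"))   # -> "aaaaaa"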

Keyboard Technologies
Keyboards use a variety of switch technologies. It is interesting to note that we generally like to
have some audible and tactile response to our typing on a keyboard. We want to hear the keys
"click" as we type, and we want the keys to feel firm and spring back quickly as we press them.
Let's take a look at these different technologies:
Rubber dome mechanical
Capacitive non-mechanical
Metal contact mechanical
Membrane mechanical
Foam element mechanical

Probably the most popular switch technology in use today is rubber dome. In these keyboards, each
key sits over a small, flexible rubber dome with a hard carbon center. When the key is pressed, a
plunger on the bottom of the key pushes down against the dome. This causes the carbon center to
push down also, until it presses against a hard flat surface beneath the key matrix. As long as the
key is held, the carbon center completes the circuit for that portion of the matrix. When the key is
released, the rubber dome springs back to its original shape, forcing the key back up to its at-rest
position.

Rubber dome switch keyboards are inexpensive, have pretty good tactile response and are fairly
resistant to spills and corrosion because of the rubber layer covering the key matrix. Membrane
switches are very similar in operation to rubber dome keyboards. A membrane keyboard does not
have separate keys though. Instead, it has a single rubber sheet with bulges for each key. You have
seen membrane switches on many devices designed for heavy industrial use or extreme conditions.
Because they offer almost no tactile response and can be somewhat difficult to manipulate, these
keyboards are seldom found on normal computer systems.

Capacitive switches are considered to be non-mechanical because they do not simply complete a
circuit like the other keyboard technologies. Instead, current is constantly flowing through all parts
of the key matrix. Each key is spring-loaded, and has a tiny plate attached to the bottom of the
plunger. When a key is pressed, this plate is brought very close to another plate just below it. As the
two plates are brought closer together, it affects the amount of current flowing through the matrix at
that point. The processor detects the change and interprets it as a keypress for that location.
Capacitive switch keyboards are expensive, but do not suffer from corrosion and have a longer life
than any other keyboard. Also, they do not have problems with bounce since the two surfaces never
come into actual contact.

Metal contact and foam element keyboards are not as common as they used to be. Metal contact
switches simply have a spring-loaded key with a strip of metal on the bottom of the plunger. When
the key is pressed, the metal strip connects the two parts of the circuit. The foam element switch is
basically the same design but with a small piece of spongy foam between the bottom of the plunger
and the metal strip, providing for a better tactile response. Both technologies have good tactile
response, make satisfyingly audible "clicks" and are inexpensive to produce. The problem is that the
contacts tend to wear out or corrode faster than on keyboards that use other technologies. Also,
there is no barrier that prevents dust or liquids from coming in direct contact with the circuitry of
the key matrix.

From the Keyboard to the Computer
As you type, the processor in the keyboard is analyzing the key matrix and determining what
characters to send to the computer. It maintains these characters in a buffer of memory that is
usually about 16 bytes large. It then sends the data in a stream to the computer via some type of
connection.
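
A buffer like this is typically a small ring (circular) buffer. Here is a rough Python sketch of a
16-slot version; the class name and details are invented for illustration:

class KeyBuffer:
    def __init__(self, size=16):
        self.slots = [None] * size
        self.head = self.tail = self.count = 0

    def put(self, byte):
        if self.count == len(self.slots):
            return False                  # buffer full: the keypress is lost
        self.slots[self.tail] = byte
        self.tail = (self.tail + 1) % len(self.slots)
        self.count += 1
        return True

    def get(self):
        if self.count == 0:
            return None                   # nothing waiting to be sent
        byte = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)
        self.count -= 1
        return byte

buf = KeyBuffer()
for code in b"hello":
    buf.put(code)
print(bytes(iter(buf.get, None)))         # -> b'hello'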

The most common keyboard connectors are:

5-pin DIN (Deutsche Industrie Norm) connector
6-pin IBM PS/2 mini-DIN connector
4-pin USB (Universal Serial Bus) connector
internal connector (for laptops)
Normal DIN connectors are rarely used anymore. Most computers use the mini-DIN PS/2
connector, but an increasing number of new systems are dropping the PS/2 connectors in favor of
USB. No matter which type of connector is used, two principal elements are sent through the
connecting cable. The first is power for the keyboard. Keyboards require a small amount of power,
typically about 5 volts, in order to function. The cable also carries the data from the keyboard to the
computer.
The other end of the cable connects to a port that is monitored by the computer's keyboard
controller. This is an integrated circuit (IC) whose job is to process all of the data that comes from
the keyboard and forward it to the operating system. When the operating system is notified that
there is data from the keyboard, a number of things can happen:

It checks to see if the keyboard data is a system level command. A good example of this is Ctrl-Alt-
Delete on a Windows computer, which initiates a reboot.
The operating system then passes the keyboard data on to the current application.
The current application understands the keyboard data as an application-level command. An
example of this would be Alt - f, which opens the File menu in a Windows application.
The current application is able to accept keyboard data as content for the application (anything from
typing a document to entering a URL to performing a calculation), or
The current application does not accept keyboard data and therefore ignores the information.
Once the keyboard data is identified as either system-specific or application-specific, it is processed
accordingly. The really amazing thing is how quickly all of this happens. As I type this article, there
is no perceptible time lapse between my fingers pressing the keys and the characters appearing on
my monitor. When you think about everything the computer is doing to make each single character
appear, it is simply incredible!

DATABUS

http://www.dbcsoftware.com/dbcov.html

DB/C DX, DATABUS, and PL/B Overview
DB/C DX is a program development tool for the DATABUS programming language.
DB/C DX includes the compiler, the run-time executive and eighteen utilities. The utilities provide
functions such as file management, file sorting, file indexing, library management, source file
editing and more. DB/C DX is available for a variety of different computer operating systems
including Windows 95 through XP based personal computers, LINUX, most UNIX computer
systems, and Apple Mac OS X.

What is DATABUS?
DATABUS is a high level computer language designed for writing business oriented applications.
In some respects DATABUS is like COBOL, although DATABUS contains several sophisticated
features that are not available in COBOL or in other business languages. DATABUS is used to
create highly interactive applications that contain friendly user interfaces. DATABUS is also used
to create processing programs that deal with the large data files typically found in business
applications.
DATABUS was created by Datapoint Corporation in the early 1970s. Until 1981, Datapoint was the
only company providing a DATABUS compiler. Since then, at least six other companies have
written and are currently marketing compilers for the DATABUS language.

DATABUS was accepted as an ANSI standard in December 1994. In the process it was given the
name PL/B because Datapoint refused to relinquish its trademark on the name DATABUS. People
still generally refer to it as DATABUS.

Why use DATABUS?
DATABUS has always been a language that is easy to learn and use. Other languages that offer
these benefits typically have few operations and limit the functions available to the programmer.
DATABUS is easy to use because of its structure and readability.

The syntax is English-like and there are no cryptic characters to remember. But don't let this fool
you—DATABUS contains over 125 separate operations (called verbs) that provide the competent
programmer with an arsenal of functions. Here is an example of typical DATABUS code:


.
. THIS PL/B CODE FRAGMENT WILL LOOK UP THE
. TELEPHONE NUMBER OF AN EMPLOYEE BY EMPLOYEE NUMBER
.
  LOOP
   KEYIN "ENTER AN EMPLOYEE NUMBER: ", EMPNUM
   STOP IF F3
   READ EMPLOYEE, EMPNUM; NAME, TELNUM
   IF OVER
    BEEP
    DISPLAY "EMPLOYEE NUMBER NOT ON FILE"
   ELSE
    DISPLAY "NAME: ", NAME, "TELEPHONE: ", TELNUM
   ENDIF
  REPEAT

Many business languages in use today were really designed for mainframe batch operation or single
user PC operation. The aspect of multiple users accessing common files interactively is only an
add-on in these languages. However, DATABUS was designed from the beginning to be run in an
interactive, multi-user environment.

The functions available to the programmer for screen display and keyboard handling are excellent.
The data access and locking mechanisms are time tested and stand up well in a high performance,
heavy usage operation.

DATABUS is also a fine complement to SQL based database systems. Even though most SQL
database systems come with a built-in 4GL, many database applications are still being written in a
third generation language. The reasons for this vary, but the bottom line is that a 4GL is not capable
of providing a programmer with all the functions that a third generation language provides. When
the choice comes down to COBOL, C/C++, Java, VB, or DATABUS, many developers are
choosing DATABUS.

Here are some of the many reasons why development in DATABUS is superior:
1. Compilation is extremely fast and there is no link step at all.
2. Debugging a DATABUS program is made much easier by the fact that the language is
completely closed. All variables are automatically initialized. A numeric variable cannot contain or
be assigned an invalid value. There are no pointers that can be pointing into odd places causing
subtle and hard to find bugs. There is no data overlaying which can cause data type mismatches. A
DATABUS program cannot cause a memory dump - it's just not possible.

3. In addition to an indexed sequential access method, DATABUS provides another access method
called the associative index method. Commonly called AIM, this access method allows context-free
key searches into data files. For example, in a parts inventory file, it is possible to retrieve all
records that contain the word "BOLT" anywhere in the description field. The word may be in upper
case, lower case or mixed case. The programmer does not need to pre-program or extract keywords
before the lookup - the AIM search method does it all for him.
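
The following Python fragment mimics only the effect of such an AIM lookup (a case-insensitive
keyword match over a description field); the real associative index method uses prebuilt indexes
rather than a brute-force scan, and the sample records are invented:

parts = [
    ("P-100", "Hex BOLT, 10 mm"),
    ("P-101", "Washer, steel"),
    ("P-102", "Carriage bolt, 6 mm"),
]

def aim_like_search(records, word):
    # match the word anywhere in the description, regardless of case
    word = word.lower()
    return [rec for rec in records if word in rec[1].lower()]

print(aim_like_search(parts, "bolt"))
# -> [('P-100', 'Hex BOLT, 10 mm'), ('P-102', 'Carriage bolt, 6 mm')]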

4. The keyboard input and screen display verbs provide many more functions than corresponding
functions in other languages. Pop up window display is almost trivial to implement. Display
attributes such as reverse video, underline, blink, and colors are specified in the DISPLAY verb
with short, easy to remember codes. The programmer does not have to look at a different section of
program (or even at a separate screen map module as in certain other languages) to figure out what
is displayed on the screen. It's all right in the program.

Why should I choose DB/C?
DB/C DX implements all aspects of the PL/B standard. DB/C DX also includes utilities that
provide all of the necessary operating system level functions used in conjunction with DATABUS
programs.

The most important feature to understand about DB/C DX is its portability. No other language in
existence today provides better portability than DB/C DX. The reason we can make this statement
is simple:


Programs compiled under DB/C DX can be run on any supported computer without recompilation.
This level of portability provides you with an unprecedented ability to run your applications
software on almost any computer you choose - with the guarantee that it will run correctly without
any program changes or other programmer intervention.

If you currently have DATABUS programs written for Datapoint's RMS DATABUS or DOS
DATASHARE systems, porting to DB/C DX is quick and simple. Certain features of DB/C make
the conversion from the Datapoint dialects of DATABUS easier.

Using DB/C DX, development and testing of new or existing DATABUS programs is noticeably
enhanced from what is available with other DATABUS compilers. Compilation speed is typically
hundreds of thousands of lines per minute. Coupled with the fact that there is no link step, total
compilation time is faster than any other general-use compiled language in existence. An entire
system such as an order entry system consisting of 50 programs can be compiled in less than a
minute on a Pentium based computer. On more expensive UNIX systems, compilation is even
faster.

Is choosing DB/C DX prudent?
Yes. DB/C DX has been a very successful choice for many companies. Version 1 was first installed
in 1983 on single user IBM PCs. Since then many additional major upgrades have been released
that have improved DB/C DX in numerous ways. DB/C DX is currently installed in over 3000
companies in 30 countries. Here is a partial list of some of the more well known customers:
Boeing Corp.
Chase Manhattan Bank
Computer Sciences Corporation
Credit Lyonnais Bank
EDS
Guardian Industries
Holiday Inn
Hyatt Hotels
Lincoln Center for Performing Arts
Manufacturers Hanover Bank
Marathon Oil Company
Nissan Motor Corporation
Procter & Gamble
Royal Caribbean Cruise Lines
Scott Paper Company
State of California
U.S. Army
USCO Distribution
Volvo

MONITOR

http://computer.howstuffworks.com/monitor.htm

Because we use them daily, many of us have a lot of questions about our monitors and may not
even realize it. What does "aspect ratio" mean? What is dot pitch? How much power does a display
use? What is the difference between CRT and LCD? What does "refresh rate" mean?

In this article, HowStuffWorks will answer all of these questions and many more. By the end of the
article, you will be able to understand your current display and also make better decisions when
purchasing your next one.

Display Technology
Often referred to as a monitor when packaged in a separate case, the display is the most-used output
device on a computer. The display provides instant feedback by showing you text and graphic
images as you work or play.
Most desktop displays use liquid crystal display (LCD) or cathode ray tube (CRT) technology,
while nearly all portable computing devices such as laptops incorporate LCD technology. Because
of their slimmer design and lower energy consumption, monitors using LCD technology (also
called flat panel or flat screen displays) are replacing the venerable CRT on most desktops.

Standards and Resolution
Resolution refers to the number of individual dots of color, known as pixels, contained on a display.
Resolution is expressed by identifying the number of pixels on the horizontal axis (rows) and the
number on the vertical axis (columns), such as 800x600. Resolution is affected by a number of
factors, including the size of the screen.

As monitor sizes have increased over the years, display standards and resolutions have changed. In
addition, some manufacturers offer widescreen displays designed for viewing DVD movies.

Common Display Standards and Resolutions
Standard                        Resolution   Typical Use
XGA (Extended Graphics Array)   1024x768     15- and 17-inch CRT monitors; 15-inch LCD monitors
SXGA (Super XGA)                1280x1024    15- and 17-inch CRT monitors; 17- and 19-inch LCD monitors
UXGA (Ultra XGA)                1600x1200    19-, 20- and 21-inch CRT monitors; 20-inch LCD monitors
QXGA (Quad XGA)                 2048x1536    21-inch and larger CRT monitors
WXGA (Wide XGA)                 1280x800     Wide aspect 15.4-inch laptop LCD displays
WSXGA+ (Wide SXGA plus)         1680x1050    Wide aspect 20-inch LCD monitors
WUXGA (Wide Ultra XGA)          1920x1200    Wide aspect 22-inch and larger LCD monitors


In addition to the screen size, display standards and resolutions are related to something called the
aspect ratio. Next, we'll discuss what an aspect ratio is and how screen size is measured.

Aspect Ratio and Viewable Area
Two measures describe the size of your display: the aspect ratio and the screen size. Historically,
computer displays, like most televisions, have had an aspect ratio of 4:3. This means that the ratio
of the width of the display screen to the height is 4 to 3.
For widescreen LCD monitors, the aspect ratio is 16:9 (or sometimes 16:10 or 15:9). Widescreen
LCD displays are useful for viewing DVD movies in widescreen format, playing games and
displaying multiple windows side by side. High definition television (HDTV) also uses a
widescreen aspect ratio.

All types of displays include a projection surface, commonly referred to as the screen. Screen sizes
are normally measured in inches from one corner to the corner diagonally across from it. This
diagonal measuring system actually came about because the early television manufacturers wanted
to make the screen size of their TVs sound more impressive.

Interestingly, the way in which the screen size is measured for CRT and LCD monitors is different.
For CRT monitors, screen size is measured diagonally from outside edges of the display casing. In
other words, the exterior casing is included in the measurement as seen below.

For LCD monitors, screen size is measured diagonally from the inside of the beveled edge. The
measurement does not include the casing as indicated in the image below.

Because of the differences in how CRT and LCD monitors are measured, a 17-inch LCD display is
comparable to a 19-inch CRT display. For a more accurate representation of a CRT's size, find out
its viewable screen size. This is the measurement of a CRT display without its outside casing.

Popular screen sizes are 15, 17, 19 and 21 inches. Notebook screen sizes are smaller, typically
ranging from 12 to 17 inches. As technologies improve in both desktop and notebook displays, even
larger screen sizes are becoming available. For professional applications, such as medical imaging
or public information displays, some LCD monitors are 40 inches or larger!

Obviously, the size of the display directly affects resolution. The same pixel resolution is sharper on
a smaller monitor and fuzzier on a larger monitor because the same number of pixels is spread out
over a larger number of inches. An image on a 21-inch monitor with an 800x600 resolution will not
appear nearly as sharp as it would on a 15-inch display at 800x600.
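
You can put numbers on this with a little Python. Using the nominal diagonal and the Pythagorean
theorem (and ignoring the CRT casing issue discussed above), the pixel density works out roughly as
follows:

def pixels_per_inch(h_pixels, v_pixels, diagonal_inches):
    # diagonal length of the image in pixels, divided by the diagonal in inches
    diagonal_pixels = (h_pixels ** 2 + v_pixels ** 2) ** 0.5
    return diagonal_pixels / diagonal_inches

for size in (15, 21):
    print(size, round(pixels_per_inch(800, 600, size), 1))
# 15 -> 66.7 pixels per inch; 21 -> 47.6: the same pixels, spread thinner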

Multi-scanning Monitors
If you have been around computers for more than a decade, then you probably remember when
NEC announced the MultiSync monitor. Up to that point, most monitors only understood one
frequency, which meant that the monitor operated at a single fixed resolution and refresh rate. You
had to match your monitor with a graphics adapter that provided that exact signal or it wouldn't
work.
The introduction of NEC MultiSync technology started a trend towards multi-scanning monitors.
This technology allows a monitor to understand any frequency sent to it within a certain bandwidth.
The benefit of a multi-scanning monitor is that you can change resolutions and refresh rates without
having to purchase and install a new graphics adapter or monitor each time.

Connections
To display information on a monitor, your computer sends the monitor a signal. The signal can be
in analog or digital format.
Analog (VGA) Connection
Because most CRT monitors require the signal information in analog (continuous electrical signals
or waves) form and not digital (pulses equivalent to the binary digits 0 and 1), they typically use an
analog connection.

However, computers work in a digital world. The computer and video adapter convert digital data
into analog format. A video adapter is an expansion card or component that provides the ability to
convert display information into a signal that is sent to the monitor. It can also be called a graphics
adapter, video card or graphics card.

Once the display information is in analog form, it is sent to the monitor through a VGA cable. The
cable connects at the back of the computer to an analog connector (also known as a D-Sub
connector) that has 15 pins in three rows, assigned as follows:

1: Red out
2: Green out
3: Blue out
4: Unused
5: Ground
6: Red return (ground)
7: Green return (ground)
8: Blue return (ground)
9: Unused
10: Sync return (ground)
11: Monitor ID 0 in
12: Monitor ID 1 in (or data from display)
13: Horizontal Sync out
14: Vertical Sync
15: Monitor ID 3 in (or data clock)

You can see that a VGA connector like this has three separate lines for the red, green and blue color
signals, and two lines for horizontal and vertical sync signals. In a normal television, all of these
signals are combined into a single composite video signal. The separation of the signals is one
reason why a computer monitor can have so many more pixels than a TV set.

Because a VGA (analog) connector does not support the use of digital monitors, the Digital Visual
Interface (DVI) standard was developed.

DVI Connection
DVI keeps data in digital form from the computer to the monitor. There's no need to convert data
from digital information to analog information. LCD monitors work in a digital mode and support
the DVI format. (Although, some also accept analog information, which is then converted to digital
format.) At one time, a digital signal offered better image quality compared to analog technology.
However, analog signal processing technology has improved over the years and the difference in
quality is now minimal.

The DVI specification is based on Silicon Image's Transition Minimized Differential Signaling
(TMDS) and provides a high-speed digital interface. A transmitter on the video adapter sends the
digital information to a receiver in the monitor. TMDS takes the signal from the video adapter,
determines the resolution and refresh rate that the monitor is using, and spreads the signal out over
the available bandwidth to optimize the data transfer from computer to monitor.

DVI cables can be a single link cable that uses one TMDS transmitter or a dual link cable with two
transmitters. A single link DVI cable and connection supports a 1920x1080 image, and a dual link
cable/connection supports up to a 2048x1536 image.
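
As a rough sanity check, the split between those two figures follows from the commonly quoted
165 MHz pixel-clock limit of one TMDS link. The Python sketch below assumes a 60 Hz refresh and
ignores blanking overhead, so it is only an approximation:

SINGLE_LINK_MHZ = 165                 # commonly quoted per-link TMDS limit

def links_needed(h, v, refresh_hz=60):
    rate_mhz = h * v * refresh_hz / 1e6
    return 1 if rate_mhz <= SINGLE_LINK_MHZ else 2

print(links_needed(1920, 1080))       # -> 1 (fits a single link)
print(links_needed(2048, 1536))       # -> 2 (needs a dual link)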

There are two main types of DVI connections:

DVI-digital (DVI-D) is a digital-only format. It requires a video adapter with a DVI-D connection
and a monitor with a DVI-D input. The connector contains 24 pins/receptacles in 3 rows of 8 plus a
grounding slot for dual-link support. For single-link support, the connector contains 18
pins/receptacles.
DVI-integrated (DVI-I) supports both digital and analog transmissions. This gives you the option to
connect a monitor that accepts digital input or analog input. In addition to the pins/receptacles
found on the DVI-D connector for digital support, a DVI-I connector has 4 additional
pins/receptacles to carry an analog signal.

DVI-D connectors carry a digital-only signal and DVI-I adds four pins for analog capability. Both
connectors can be used with a single-link or a dual-link cable, depending upon the requirements of
the display.

If you buy a monitor with only a DVI (digital) connection, make sure that you have a video adapter
with a DVI-D or DVI-I connection. If your video adapter has only an analog (VGA) connection,
look for a monitor that supports the analog format.

Color Depth
The combination of the display modes supported by your graphics adapter and the color capability
of your monitor determine how many colors it displays. For example, a display that operates in
SuperVGA (SVGA) mode can display up to 16,777,216 (usually rounded to 16.8 million) colors
because it can process a 24-bit-long description of a pixel. The number of bits used to describe a
pixel is known as its bit depth.
With a 24-bit bit depth, eight bits are dedicated to each of the three additive primary colors -- red,
green and blue. This bit depth is also called true color because it can produce the 10,000,000 colors
discernible to the human eye, while a 16-bit display is only capable of producing 65,536 colors.
Displays jumped from 16-bit color to 24-bit color because working in eight-bit increments makes
things a whole lot easier for developers and programmers.

Simply put, color bit depth refers to the number of bits used to describe the color of a single pixel.
The bit depth determines the number of colors that can be displayed at one time. Take a look at the
following chart to see the number of colors different bit depths can produce:

Bit Depth   Number of Colors
1           2 (monochrome)
2           4 (CGA)
4           16 (EGA)
8           256 (VGA)
16          65,536 (High Color, XGA)
24          16,777,216 (True Color, SVGA)
32          16,777,216 (True Color + Alpha Channel)
Notice that the last entry in the chart is for 32 bits. This is a special graphics mode used by digital
video, animation and video games to achieve certain effects. Essentially, 24 bits are used for color
and the other eight bits are used as a separate layer for representing levels of translucency in an
object or image. Nearly every monitor sold today can handle 24-bit color using a standard VGA
connector.

To create a single colored pixel, an LCD display uses three subpixels with red, green and blue
filters. Through the careful control and variation of the voltage applied, the intensity of each
subpixel can range over 256 shades. Combining the subpixels produces a possible palette of 16.8
million colors (256 shades of red x 256 shades of green x 256 shades of blue).
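
The color counts in the chart are simply powers of two, which a couple of lines of Python confirm:

for bits in (1, 2, 4, 8, 16, 24):
    print(bits, 2 ** bits)            # 24 -> 16777216
print(256 * 256 * 256)                # three 256-shade subpixels -> 16777216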

Now that you have a general idea of the technology behind computer monitors, let's take a closer
look at LCD monitors, CRT monitors, and the general buying considerations for both.

LCD Monitors

The Basics
Liquid crystal display technology works by blocking light. Specifically, an LCD is made of two
pieces of polarized glass (also called substrate) that contain a liquid crystal material between them.
A backlight creates light that passes through the first substrate. At the same time, electrical currents
cause the liquid crystal molecules to align to allow varying levels of light to pass through to the
second substrate and create the colors and images that you see.

Active and Passive Matrix Displays
Most LCD displays use active matrix technology. A thin film transistor (TFT) display arranges tiny
transistors and capacitors in a matrix on the glass of the display. To address a particular pixel, the
proper row is switched on, and then a charge is sent down the correct column. Since all of the other
rows that the column intersects are turned off, only the capacitor at the designated pixel receives a
charge. The capacitor is able to hold the charge until the next refresh cycle.

The other type of LCD technology is passive matrix. This type of LCD display uses a grid of
conductive metal to charge each pixel. Although they are less expensive to produce, passive matrix
monitors are rarely used today due to the technology's slow response time and imprecise voltage
control compared to active matrix technology.

Now that you have an understanding of how LCD technology works, let's look at some specific
features unique to LCD monitors.

LCD Features and Attributes
To evaluate the specifications of LCD monitors, here are a few more things you need to know.
Native Resolution
Unlike CRT monitors, LCD monitors display information well at only the resolution they are
designed for, which is known as the native resolution. Digital displays address each individual pixel
using a fixed matrix of horizontal and vertical dots. If you change the resolution settings, the LCD
scales the image and the quality suffers. Native resolutions are typically:

17 inch = 1024x768
19 inch = 1280x1024
20 inch = 1600x1200
Viewing Angle
When you look at an LCD monitor from an angle, the image can look dimmer or even disappear.
Colors can also be misrepresented. To compensate for this problem, LCD monitor makers have
designed wider viewing angles. (Do not confuse this with a widescreen display, which means the
display is physically wider.) Manufacturers give a measure of viewing angle in degrees (a greater
number of degrees is better). In general, look for between 120 and 170 degrees. Because
manufacturers measure viewing angles differently, the best way to evaluate it is to test the display
yourself. Check the angle from the top and bottom as well as the sides, bearing in mind how you
will typically use the display.

Brightness or Luminance
This is a measurement of the amount of light the LCD monitor produces. It is given in nits or
candelas per square meter (cd/m2); one nit is equal to one cd/m2. Typical brightness ratings range
from 250 to 350 cd/m2 for monitors that perform general-purpose tasks. For displaying movies, a
brighter luminance rating such as 500 cd/m2 is desirable.

Contrast Ratio
The contrast ratio rates the difference between the brightest white and the darkest black an LCD
monitor can produce. The figure is usually expressed as a ratio, for example, 500:1. Typically,
contrast ratios range from 450:1 to 600:1, and they can be rated as high as 1000:1. Ratios more than
600:1, however, provide little improvement over lower ratios.

Response Rate
The response rate indicates how fast the monitor's pixels can change colors. Faster is better because
it reduces the ghosting effect when an image moves, leaving a faint trail in such applications as
videos or games.

Adjustability
Unlike CRT monitors, LCD monitors have much more flexibility for positioning the screen the way
you want it. LCD monitors can swivel, tilt up and down, and even rotate from landscape (with the
horizontal plane longer than the vertical plane) to portrait mode (with the vertical plane longer than
the horizontal plane). In addition, because they are lightweight and thin, most LCD monitors have
built-in brackets for wall or arm mounting.

Besides the basic features, some LCD monitors have other conveniences such as integrated
speakers, built-in Universal Serial Bus (USB) ports and anti-theft locks.

LCD Terms
Bezel - This is the metal or plastic frame surrounding the display screen. On LCD displays, the
bezel is typically very narrow.
Contrast ratio - The difference in light intensity between white and black on an LCD display is
called contrast ratio. The higher the contrast ratio, the easier it is to see details.
Ghosting - An effect of slow response times that causes blurring of moving images on an LCD
monitor; it's also known as latency. The effect is caused by voltage temporarily leaking from
energized elements to neighboring, non-energized elements on the display.
Luminance - Also known as brightness, it is the level of light emitted by an LCD display.
Luminance is measured in nits or candelas per square meter (cd/m2). One nit is equal to one cd/m2.
Native resolution - The actual pixel dimensions of an LCD display, given in horizontal by vertical
order.
Response time - The speed at which the monitor's pixels can change colors is called response time.
It is measured in milliseconds (ms).
Stuck pixels - A pixel that is stuck either 'on' or 'off', meaning that it is always illuminated,
unlit, or fixed on one color regardless of the image the LCD monitor displays. A stuck pixel is also
called a dead pixel.
VESA mount - With this, you can mount a monitor on a desk or wall. It meets recommendations of
the Video Electronics Standards Association (VESA).
Viewing angle - It's the degree of angle at which you can view the screen from the sides (horizontal
angle) and top/bottom (vertical angle) and continue to see clearly defined images and accurate
colors.
CRT Monitors
A CRT monitor contains millions of tiny red, green, and blue phosphor dots that glow when struck
by an electron beam that travels across the screen to create a visible image. The illustration below
shows how this works inside a CRT.

The terms anode and cathode are used in electronics as synonyms for positive and negative
terminals. For example, you could refer to the positive terminal of a battery as the anode and the
negative terminal as the cathode.

Display History 101
Displays have come a long way since the blinking green monitors in text-based computer systems
of the 1970s. Just look at the advances made by IBM over the course of a decade:
In 1981, IBM introduced the Color Graphics Adapter (CGA), which was capable of rendering four
colors, and had a maximum resolution of 320 pixels horizontally by 200 pixels vertically.
IBM introduced the Enhanced Graphics Adapter (EGA) display in 1984. EGA allowed up to 16
different colors and increased the resolution to 640x350 pixels, improving the appearance of the
display and making it easier to read text.
In 1987, IBM introduced the Video Graphics Array (VGA) display system. The VGA standard has
a resolution of 640x480 pixels and some VGA monitors are still in use.
IBM introduced the Extended Graphics Array (XGA) display in 1990, offering 800x600 pixel
resolution in true color (16.8 million colors) and 1,024x768 resolution in 65,536 colors.

In a cathode ray tube, the "cathode" is a heated filament. The heated filament is in a vacuum created
inside a glass "tube." The "ray" is a stream of electrons generated by an electron gun that naturally
pour off a heated cathode into the vacuum. Electrons are negative. The anode is positive, so it
attracts the electrons pouring off the cathode. This screen is coated with phosphor, an organic
material that glows when struck by the electron beam.

There are three ways to filter the electron beam in order to obtain the correct image on the monitor
screen: shadow mask, aperture grill and slot mask. These technologies also impact the sharpness of
the monitor's display. Let's take a closer look at these now.

CRT Features and Attributes
To evaluate the specifications of CRT monitors, here are a few more things you need to know:
Shadow-mask
A shadow mask is a thin metal screen filled with very small holes. Three electron beams pass
through the holes to focus on a single point on a CRT displays' phosphor surface. The shadow mask
helps to control the electron beams so that the beams strike the correct phosphor at just the right
intensity to create the desired colors and image on the display. The unwanted beams are blocked or
"shadowed."

Aperture-grill
Monitors based on the Trinitron technology, which was pioneered by Sony, use an aperture-grill
instead of a shadow-mask type of tube. The aperture grill consists of tiny vertical wires. Electron
beams pass through the aperture grill to illuminate the phosphor on the faceplate. Most aperture-
grill monitors have a flat faceplate and tend to represent a less distorted image over the entire
surface of the display than the curved faceplate of a shadow-mask CRT. However, aperture-grill
displays are normally more expensive.

Slot-mask
A less-common type of CRT display, a slot-mask tube uses a combination of the shadow-mask and
aperture-grill technologies. Rather than the round perforations found in shadow-mask CRT
displays, a slot-mask display uses vertically aligned slots. The design creates more brightness
through increased electron transmissions combined with the arrangement of the phosphor dots.

Dot pitch
Dot pitch is an indicator of the sharpness of the displayed image. It is measured in millimeters
(mm), and a smaller number means a sharper image. How you measure the dot pitch depends on the
technology used:

In a shadow-mask CRT monitor, you measure dot pitch as the diagonal distance between two like-
colored phosphors. Some manufacturers may also cite a horizontal dot pitch, which is the distance
between two like-colored phosphors horizontally.
The dot pitch of an aperture-grill monitor is measured by the horizontal distance between two like-
colored phosphors. It is also sometimes called stripe pitch.

The smaller and closer the dots are to one another, the more realistic and detailed the picture
appears. When the dots are farther apart, they become noticeable and make the image look grainier.
Unfortunately, manufacturers are not always upfront about dot pitch measurements, and you cannot
necessarily compare shadow-mask and aperture-grill CRT types, due to the difference in horizontal
and vertical measurements.

The dot pitch translates directly to the resolution on the screen. If you were to put a ruler up to the
glass and measure an inch, you would see a certain number of dots, depending on the dot pitch.
Here is a table that shows the number of dots per square centimeter and per square inch in each of
these common dot pitches:

Dot Pitch   Approx. pixels/cm2   Approx. pixels/in2
.25 mm      1,600                10,000
.26 mm      1,444                9,025
.27 mm      1,369                8,556
.28 mm      1,225                7,656
.31 mm      1,024                6,400
.51 mm      361                  2,256
1 mm        100                  625
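
The table's figures follow from squaring the number of dots that fit per centimeter or per inch
(1 cm / pitch and 1 in / pitch). This Python snippet reproduces them approximately; the published
values use rounded dots-per-length, so the results do not match exactly:

for pitch_mm in (0.25, 0.26, 0.27, 0.28, 0.31, 0.51, 1.0):
    per_cm2 = (10 / pitch_mm) ** 2    # dots per cm, squared
    per_in2 = (25.4 / pitch_mm) ** 2  # dots per inch, squared
    print(pitch_mm, round(per_cm2), round(per_in2))
# .25 mm -> 1600 per cm2 and about 10,300 per in2 (the table rounds to 10,000)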

Refresh Rate
In monitors based on CRT technology, the refresh rate is the number of times that the image on the
display is drawn each second. If your CRT monitor has a refresh rate of 72 Hertz (Hz), then it
cycles through all the pixels from top to bottom 72 times a second. Refresh rates are very important
because they control flicker, and you want the refresh rate as high as possible. Too few cycles per
second and you will notice a flickering, which can lead to headaches and eye strain.

Because your monitor's refresh rate depends on the number of rows it has to scan, it limits the
maximum possible resolution. Most monitors support multiple refresh rates. Keep in mind that there
is a tradeoff between flicker and resolution, and then pick what works best for you. This is
especially important with larger monitors where flicker is more noticeable. Recommendations for
refresh rate and resolution include 1280x1024 at 85 Hertz or 1600x1200 at 75 Hertz.
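
A quick back-of-the-envelope calculation shows why resolution and refresh rate trade off: the
monitor must be fed every pixel on every cycle. The Python estimate below ignores the roughly
25 to 30 percent blanking overhead of real video signals, so actual signal rates run higher:

def pixel_rate_mhz(h, v, refresh_hz):
    # pixels drawn per second, in millions (blanking intervals ignored)
    return h * v * refresh_hz / 1e6

print(pixel_rate_mhz(1280, 1024, 85))   # -> about 111 MHz
print(pixel_rate_mhz(1600, 1200, 75))   # -> 144 MHz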

Multiple Resolutions
Because a CRT uses electron beams to create images on a phosphor screen, it supports the
resolution that matches its physical dot (pixel) size as well as several lesser resolutions. For
example, a display with a physical grid of 1280 columns by 1024 rows can obviously support a
maximum resolution of 1280x1024 pixels. It also supports lower resolutions such as 1024x768,
800x600, and 640x480. As noted previously, an LCD monitor works well only at its native
resolution.

LCDs vs. CRTs
If you are looking for a new display, you should consider the differences between CRT and LCD
monitors. Choose the type of monitor that best serves your specific needs, the typical applications
you use, and your budget.
Advantages of LCD Monitors

Require less power - Power consumption varies greatly with different technologies. CRT displays
are somewhat power-hungry, at about 100 watts for a typical 19-inch display. The average is about
45 watts for a 19-inch LCD display. LCDs also produce less heat.

Smaller and weigh less - An LCD monitor is significantly thinner and lighter than a CRT monitor,
typically weighing less than half as much. In addition, you can mount an LCD on an arm or a wall,
which also takes up less desktop space.

More adjustable - LCD displays are much more adjustable than CRT displays. With LCDs, you can
adjust the tilt, height, swivel, and orientation from horizontal to vertical mode. As noted previously,
you can also mount them on the wall or on an arm.

Less eye strain - Because LCD displays turn each pixel off individually, they do not produce a
flicker like CRT displays do. In addition, LCD displays do a better job of displaying text compared
with CRT displays.
Advantages of CRT Monitors

Less expensive - Although LCD monitor prices have decreased, comparable CRT displays still cost
less.

Better color representation - CRT displays have historically represented colors and different
gradations of color more accurately than LCD displays. However, LCD displays are gaining ground
in this area, especially with higher-end models that include color-calibration technology.

More responsive - Historically, CRT monitors have had fewer problems with ghosting and blurring
because they redrew the screen image faster than LCD monitors. Again, LCD manufacturers are
improving on this with displays that have faster response times than they did in the past.

Multiple resolutions - If you need to change your display's resolution for different applications, you
are better off with a CRT monitor because LCD monitors don't handle multiple resolutions as well.

More rugged - Although they are bigger and heavier than LCD displays, CRT displays are also less
fragile and harder to damage.
So now that you know about LCD and CRT monitors, let's talk about how you can use two
monitors at once. They say, "Two heads are better than one." Maybe the same is true of monitors!

Dual Monitors
One way to expand your computer's display is to add a second monitor. Using dual monitors can
make you more productive and add a lot to your computing experience.
With two monitors, you can:

View large spreadsheets
Make changes to a web page's code on one monitor and view the results on the second
Open two different applications, such as a Word document on one monitor and your web browser
on the second
Besides two displays and two sets of the appropriate video cables, the only other thing you need is a
video adapter with two display connections. The connections can be analog or digital; they need
only to match the type of connections on the monitors. It does not matter what type of monitor you
use; two LCDs, two CRTs, or one of each works fine as long as the video adapter has compatible
connections.
If you don't have a video adapter with two connections, you can purchase one and replace your
current adapter. This generally works better than simply adding a second video card with a single
connection, and dual-connection cards often come with extra features, such as a TV-out port.

In addition to verifying your hardware, you should also double-check your computer's operating
system to be sure it supports the use of dual monitors. For example, Windows 98 SE, Me, 2000, and
XP support multiple monitors.

If you really want to increase your screen real estate, especially for applications such as financial
trading or 3-D design, you can even implement three or more monitors.

Other Technologies

Touch-screen Monitors
Displays with touch-screen technology let you input information or navigate applications by
touching the surface of the display. The technology can be implemented through a variety of
methods, including infrared sensors, resistive (pressure-sensitive) layers, or capacitive sensing.

Wireless Monitors
Similar in looks to a tablet PC, wireless monitors use technology such as 802.11b/g to connect to
your computer without a cable. Most include buttons and controls for mousing and web surfing, and
some also include keyboards. The displays are battery-powered and relatively lightweight. Most
also include touch-screen capabilities.

Television and HDTV Integration
Some displays have built-in television tuners that you can use for viewing cable TV on your
computer. You can also find displays that accept S-video input directly from a video device.
Additional features include picture-in-picture or picture-on-picture capability, a remote control and
support for high-definition television (HDTV).

VESA Brings Standardization
The Video Electronics Standards Association (VESA) is an organization that supports and sets
industry-wide interface standards for the PC, workstation and consumer electronics industries.
VESA promotes and develops timely, relevant, open standards for the display and display interface
industry, ensuring interoperability and encouraging innovation and market growth.
In August 1992, VESA passed the VESA Local Bus (VL-Bus) Standard 1.0. This standard had a
significant impact on the industry because it was the first local bus standard, providing a uniform
hardware interface for local bus peripherals. Its creation ensured compatibility among a wide
variety of graphics boards, monitors, and systems software.

Today, VESA is a worldwide organization and a formative influence in the PC industry, contributing
to the enhancement of flat-panel display, monitor, graphics, software, and systems technologies,
including home networking and PC theater.

Monitor Trends

DisplayPort Standard
The Video Electronics Standards Association (VESA) is working on a new digital display interface
for LCD, plasma, CRT and projection displays. The new technology, which is called DisplayPort,
supports protected digital outputs for high definition and other content along with improved display
performance.

According to VESA, the DisplayPort standard will provide a high-quality digital interface for video
and audio content, with optional secure content protection. The goal is to support a wide range of
source and display devices while consolidating technologies: audio and video signals travel over
the same cable, a smaller video connector allows for smaller devices such as notebook computers,
and the standard enables streaming of high-definition (HD) video and audio content.

Organic Light-Emitting Diode
Organic Light-Emitting Diodes (OLEDs) are thin-film LED (Light-Emitting Diode) displays that
don't require a backlight to function. The material emits light when stimulated by an electrical
current, a phenomenon known as electroluminescence. OLEDs consist of red, green and blue elements,
which combine to create the desired colors. Advantages of OLEDs include lower power
requirements, a less-expensive manufacturing process, improvements in contrast and color, and the
ability to bend.

Surface-Conduction Electron Emitter Displays
A Surface-Conduction Electron Emitter Display (SED) is a new technology developed jointly by
Canon and Toshiba. Similar to a CRT, an SED display utilizes electrons and a phosphor-coated
screen to create images. The difference is that instead of a deep tube with an electron gun, an SED
uses tiny electron emitters and a flat-panel display.


DATA PORTS

http://computer.howstuffworks.com/question11.htm

If you have a printer connected to your computer, there is a good chance that it uses the parallel
port. While USB is becoming increasingly popular, the parallel port is still a commonly used
interface for printers.

Parallel ports can be used to connect a host of popular computer peripherals:

Printers
Scanners
CD burners
External hard drives
Iomega Zip removable drives
Network adapters
Tape backup drives
In this article, you will learn why it is called the parallel port, what it does and exactly how it
operates.

Parallel Port Basics
Parallel ports were originally developed by IBM as a way to connect a printer to your PC. When
IBM was in the process of designing the PC, the company wanted the computer to work with
printers offered by Centronics, a top printer manufacturer at the time. IBM decided not to use the
same port interface on the computer that Centronics used on the printer.

Instead, IBM engineers coupled a 25-pin connector, DB-25, with a 36-pin Centronics connector to
create a special cable to connect the printer to the computer. Other printer manufacturers ended up
adopting the Centronics interface, making this strange hybrid cable an unlikely de facto standard.

When a PC sends data to a printer or other device using a parallel port, it sends 8 bits of data (1
byte) at a time. These 8 bits are transmitted in parallel to each other, as opposed to the same 8
bits being transmitted serially (one after another) through a serial port. The standard parallel
port is capable of sending 50 to 100 kilobytes of data per second.
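
A tiny Python sketch (my own illustration, not from the original article) makes the difference
concrete:

    def parallel_lines(byte):
        """Bit values driven simultaneously on data pins 2-9 (LSB on pin 2)."""
        return [(byte >> i) & 1 for i in range(8)]

    def serial_stream(byte):
        """The same bits, delivered one at a time."""
        for i in range(8):
            yield (byte >> i) & 1

    print(parallel_lines(ord("A")))       # all 8 bits at once: [1, 0, 0, 0, 0, 0, 1, 0]
    print(list(serial_stream(ord("A"))))  # the same bits, one after another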

Let's take a closer look at what each pin does when used with a printer:

Pin 1 carries the strobe signal. It maintains a level of between 2.8 and 5 volts, but drops below 0.5
volts whenever the computer sends a byte of data. This drop in voltage tells the printer that data is
being sent.
Pins 2 through 9 are used to carry data. To indicate that a bit has a value of 1, a charge of 5 volts is
sent through the corresponding pin. No charge on a pin indicates a value of 0. This is a simple but
highly effective way to transmit digital information over a copper cable in real time.
Pin 10 sends the acknowledge signal from the printer to the computer. Like Pin 1, it maintains a
charge and drops the voltage below 0.5 volts to let the computer know that the data was received.
If the printer is busy, it will charge Pin 11. Then, it will drop the voltage below 0.5 volts to let the
computer know it is ready to receive more data.
The printer lets the computer know if it is out of paper by sending a charge on Pin 12.
As long as the computer is receiving a charge on Pin 13, it knows that the device is online.

The computer sends an auto feed signal to the printer through Pin 14 using a 5-volt charge.
If the printer has any problems, it drops the voltage to less than 0.5 volts on Pin 15 to let the
computer know that there is an error.
Whenever a new print job is ready, the computer drops the charge on Pin 16 to initialize the printer.
Pin 17 is used by the computer to remotely take the printer offline. This is accomplished by sending
a charge to the printer and maintaining it as long as you want the printer offline.
Pins 18-25 are grounds and are used as a reference signal for the low (below 0.5 volts) charge.

On the cable itself, the 25 pins of the DB-25 connector map to corresponding signal pins on the 36-pin Centronics end.
With each byte the parallel port sends out, a handshaking signal is also sent so that the printer can
latch the byte.
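
Here is a simplified simulation of that strobe-and-acknowledge handshake. It is a sketch under my
own modeling assumptions, with the voltage levels reduced to plain function calls:

    class Printer:
        def __init__(self):
            self.received = []

        def on_strobe(self, data_pins):
            # Strobe (pin 1) dropped low: the byte on pins 2-9 is valid.
            byte = sum(bit << i for i, bit in enumerate(data_pins))
            self.received.append(byte)
            return "ACK"  # the printer pulses pin 10 low to acknowledge

    def send_byte(printer, byte):
        data_pins = [(byte >> i) & 1 for i in range(8)]  # drive pins 2-9
        return printer.on_strobe(data_pins)              # then drop the strobe

    p = Printer()
    for ch in b"OK":
        send_byte(p, ch)
    print(bytes(p.received))  # b'OK'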

SPP/EPP/ECP
The original specification for parallel ports was unidirectional, meaning that data only traveled in
one direction for each pin. With the introduction of the PS/2 in 1987, IBM offered a new
bidirectional parallel port design. This mode is commonly known as Standard Parallel Port (SPP)
and has completely replaced the original design. Bidirectional communication allows each device to
receive data as well as transmit it. Many devices use the eight pins (2 through 9) originally
designated for data. Using the same eight pins limits communication to half-duplex, meaning that
information can only travel in one direction at a time. But pins 18 through 25, originally just used as
grounds, can be used as data pins also. This allows for full-duplex (both directions at the same time)
communication.

Enhanced Parallel Port (EPP) was created by Intel, Xircom and Zenith in 1991. EPP allows much
more data to be transferred each second, 500 kilobytes to 2 megabytes. It was targeted
specifically for non-printer devices that would attach to the parallel port, particularly storage
devices that needed the highest possible transfer rate.

Close on the heels of the introduction of EPP, Microsoft and Hewlett Packard jointly announced a
specification called Extended Capabilities Port (ECP) in 1992. While EPP was geared toward other
devices, ECP was designed to provide improved speed and functionality for printers.
In 1994, the IEEE 1284 standard was released. It included the two specifications for parallel port
devices, EPP and ECP. In order for them to work, both the operating system and the device must
support the required specification. This is seldom a problem today since most computers support
SPP, ECP and EPP and will detect which mode needs to be used, depending on the attached device.
If you need to manually select a mode, you can do so through the BIOS on most computers.
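
For a sense of scale, this sketch compares transfer times at the rates quoted above. The ECP rate
is my assumption; it is commonly put in the same range as EPP:

    MODES_KBPS = {
        "SPP": 100,   # upper end of the 50-100 KB/s range quoted above
        "EPP": 2000,  # upper end of the 500 KB/s - 2 MB/s range
        "ECP": 2000,  # assumed comparable to EPP
    }
    scan_kb = 5000  # a hypothetical 5 MB scanned image
    for mode, rate in MODES_KBPS.items():
        print(f"{mode}: about {scan_kb / rate:.1f} s to transfer {scan_kb} KB")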


Just about any computer that you buy today comes with one or more Universal Serial Bus
connectors on the back. These USB connectors let you attach everything from mice to printers to
your computer quickly and easily. The operating system supports USB as well, so the installation of
the device drivers is quick and easy, too. Compared to other ways of connecting devices to your
computer (including parallel ports, serial ports and special cards that you install inside the
computer's case), USB devices are incredibly simple!
In this article, we will look at USB ports from both a user and a technical standpoint. You will learn
why the USB system is so flexible and how it is able to support so many devices so easily -- it's
truly an amazing system!

Considered to be one of the most basic external connections to a computer, the serial port has been
an integral part of most computers for more than 20 years. Although many of the newer systems
have done away with the serial port completely in favor of USB connections, most modems still use
the serial port, as do some printers, PDAs and digital cameras. Few computers have more than two
serial ports.

Two serial ports on the back of a PC

Essentially, serial ports provide a standard connector and protocol to let you attach devices, such as
modems, to your computer. In this article, you will learn about the difference between a parallel
port and a serial port, what each pin does and what flow control is.

UART Needed
All computer operating systems in use today support serial ports, because serial ports have been
around for decades. Parallel ports are a more recent invention and are much faster than serial ports.
USB ports are only a few years old, and will likely replace both serial and parallel ports completely
over the next several years.
The name "serial" comes from the fact that a serial port "serializes" data. That is, it takes a byte of
data and transmits the 8 bits in the byte one at a time. The advantage is that a serial port needs only
one wire to transmit the 8 bits (while a parallel port needs 8). The disadvantage is that it takes 8
times longer to transmit the data than it would if there were 8 wires. On the other hand, serial
ports lower cable costs and make cables smaller.

Before each byte of data, a serial port sends a start bit, which is a single bit with a value of 0. After
each byte of data, it sends a stop bit to signal that the byte is complete. It may also send a parity bit.
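
A short sketch of the framing just described; even parity is an illustrative choice of mine, since
the parity bit is optional:

    def frame_byte(byte, parity="even"):
        bits = [(byte >> i) & 1 for i in range(8)]  # LSB first, as on the wire
        frame = [0] + bits                          # start bit, then 8 data bits
        if parity == "even":
            frame.append(sum(bits) % 2)             # make the count of 1s even
        frame.append(1)                             # stop bit
        return frame

    print(frame_byte(ord("A")))  # [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1]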

Serial ports, also called communication (COM) ports, are bi-directional. Bi-directional
communication allows each device to receive data as well as transmit it. Serial devices use different
pins to receive and transmit data -- using the same pins would limit communication to half-duplex,
meaning that information could only travel in one direction at a time. Using different pins allows
for full-duplex communication, in which information can travel in both directions at once.

Serial ports rely on a special controller chip, the Universal Asynchronous Receiver/Transmitter
(UART), to function properly. The UART chip takes the parallel output of the computer's system
bus and transforms it into serial form for transmission through the serial port. In order to function
faster, most UART chips have a built-in buffer of anywhere from 16 to 64 bytes. This buffer
allows the chip to cache data coming in from the system bus while it is processing data going out to
the serial port. While most standard serial ports have a maximum transfer rate of 115 Kbps (kilobits
per second), high speed serial ports, such as Enhanced Serial Port (ESP) and Super Enhanced Serial
Port (Super ESP), can reach data transfer rates of 460 Kbps.
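
Combining this with the framing described above gives a quick estimate of real-world throughput.
The 10-bits-per-byte figure assumes one start bit, 8 data bits, no parity and one stop bit;
115,200 and 460,800 bps are the standard rates the rounded figures refer to:

    def payload_bytes_per_sec(bit_rate, bits_per_frame=10):
        return bit_rate / bits_per_frame

    for rate in (115_200, 460_800):
        print(f"{rate} bps -> about {payload_bytes_per_sec(rate):,.0f} bytes/s")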

The Serial Connection
The external connector for a serial port can be either 9 pins or 25 pins. Originally, the primary use
of a serial port was to connect a modem to your computer. The pin assignments reflect that. Let's
take a closer look at what happens at each pin when a modem is connected.

9-pin connector:

Pin 1: Carrier Detect - Determines if the modem is connected to a working phone line.
Pin 2: Receive Data - Computer receives information sent from the modem.
Pin 3: Transmit Data - Computer sends information to the modem.
Pin 4: Data Terminal Ready - Computer tells the modem that it is ready to talk.
Pin 5: Signal Ground - Pin is grounded.
Pin 6: Data Set Ready - Modem tells the computer that it is ready to talk.
Pin 7: Request To Send - Computer asks the modem if it can send information.
Pin 8: Clear To Send - Modem tells the computer that it can send information.
Pin 9: Ring Indicator - Once a call has been placed, the computer acknowledges a signal (sent from
the modem) that a ring is detected.
25-pin connector:

Pin 1: Not Used
Pin 2: Transmit Data - Computer sends information to the modem.
Pin 3: Receive Data - Computer receives information sent from the modem.
Pin 4: Request To Send - Computer asks the modem if it can send information.
Pin 5: Clear To Send - Modem tells the computer that it can send information.
Pin 6: Data Set Ready - Modem tells the computer that it is ready to talk.
Pin 7: Signal Ground - Pin is grounded.
Pin 8: Received Line Signal Detector - Determines if the modem is connected to a working phone line.
Pin 9: Not Used: Transmit Current Loop Return (+)
Pin 10: Not Used
Pin 11: Not Used: Transmit Current Loop Data (-)
Pin 12: Not Used
Pin 13: Not Used
Pin 14: Not Used
Pin 15: Not Used
Pin 16: Not Used
Pin 17: Not Used
Pin 18: Not Used: Receive Current Loop Data (+)
Pin 19: Not Used
Pin 20: Data Terminal Ready - Computer tells the modem that it is ready to talk.
Pin 21: Not Used
Pin 22: Ring Indicator - Once a call has been placed, the computer acknowledges a signal (sent from
the modem) that a ring is detected.
Pin 23: Not Used
Pin 24: Not Used
Pin 25: Not Used: Receive Current Loop Return (-)
Voltage sent over the pins can be in one of two states, On or Off. On (binary value "1") means that
the pin is transmitting a signal between -3 and -25 volts, while Off (binary value "0") means that it
is transmitting a signal between +3 and +25 volts.
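
In code form (a sketch of mine; +/-12 volts is just a representative level inside the allowed
ranges):

    def rs232_level(bit, swing=12.0):
        """Map a bit to a representative RS-232 line voltage."""
        return -swing if bit == 1 else +swing  # 1 = negative, 0 = positive

    print([rs232_level(b) for b in (0, 1, 1, 0)])  # [12.0, -12.0, -12.0, 12.0]
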
Going With The Flow
An important aspect of serial communications is the concept of flow control. This is the ability of
one device to tell another device to stop sending data for a while. The commands Request to Send
(RTS), Clear To Send (CTS), Data Terminal Ready (DTR) and Data Set Ready (DSR) are used to
enable flow control.

Let's look at an example of how flow control works: You have a modem that communicates at 56
Kbps. The serial connection between your computer and your modem transmits at 115 Kbps, which
is over twice as fast. This means that the modem is getting more data coming from the computer
than it can transmit over the phone line. Even if the modem has a 128K buffer to store data in, it
will still quickly run out of buffer space and be unable to function properly with all that data
streaming in.

With flow control, the modem can stop the flow of data from the computer before it overruns the
modem's buffer. The computer is constantly sending a signal on the Request to Send pin, and
checking for a signal on the Clear to Send pin. If there is no Clear to Send response, the computer
stops sending data, waiting for the Clear to Send before it resumes. This allows the modem to keep
the flow of data running smoothly.
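
The following toy model (my own sketch, with a deliberately tiny buffer so the effect is visible)
captures the idea. The buffer check plays the role of the Clear to Send signal:

    from collections import deque

    BUFFER_LIMIT = 4
    modem_buffer = deque()

    def clear_to_send():
        return len(modem_buffer) < BUFFER_LIMIT  # CTS drops when the buffer fills

    def computer_send(data):
        sent = 0
        for byte in data:
            if not clear_to_send():              # no CTS: pause transmission
                break
            modem_buffer.append(byte)
            sent += 1
        return sent

    def modem_drain(n):
        for _ in range(min(n, len(modem_buffer))):
            modem_buffer.popleft()               # byte goes out over the phone line

    print(computer_send(b"HELLOWORLD"))  # 4: sending pauses once the buffer fills
    modem_drain(2)                       # the modem transmits two bytes; CTS returns
    print(computer_send(b"WO"))          # 2: the transfer resumes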

PCMCIA

http://www.pcmcia.org/about.htm

What is PCMCIA?
History
PCMCIA (Personal Computer Memory Card International Association) is an international standards
body and trade association with over 200 member companies. It was founded in 1989 to establish
standards for Integrated Circuit cards and to promote interchangeability among mobile computers,
where ruggedness, low power, and small size are critical. As the needs of mobile computer users
have changed, so has the PC Card Standard. By 1991, PCMCIA had defined an I/O interface for the
same 68-pin connector initially used for memory cards. At the same time, the Socket Services
Specification was added, soon followed by the Card Services Specification as developers
realized that common software would be needed to enhance compatibility.
In more recent years, PCMCIA has recognized the need for higher-speed applications such as
multimedia and high-speed networking. From this came the CardBus and Zoomed Video
Specifications, which allow blazing speed in applications such as MPEG video and 100 Mbit
Ethernet. Along with these speed enhancements, PCMCIA has continued to extend its specifications
to enhance compatibility and address other mobile-oriented concerns such as 3.3 V operation and
power management.

Today, PCMCIA promotes the interoperability of PC Cards not only in mobile computers, but in
such diverse products as digital cameras, cable TV, set-top boxes, and automobiles. As the variety
of products that need modular peripheral expansion has grown, so has the diversity of the
capabilities of modular peripherals. As such, PCMCIA has recently changed its mission statement:
"To develop standards for modular peripherals and promote their worldwide adoption."

PCMCIA's new mission is exemplified by its work on standards for small form factor cards.
PCMCIA has added the Small PC Card form factor specifications to the PC Card Standard and now
publishes and maintains the Miniature Card Standard. PCMCIA will also publish the SmartMedia
Card Standard, which already provides memory solutions in one of the smallest modular peripheral
form factors available today.

All of these cards enable hand-held devices such as digital cameras to use a very small, rugged form
of memory while PC Cards will allow the data to be easily transferred to your personal computer
through inexpensive adapters. As computing needs become faster and smaller, PCMCIA continues
to set the standard.





Activities & Publications
PC Card Standard
PCMCIA publishes the PC Card Standard which contains all of the physical, electrical and software
specifications for the PC Card technology. The Standard is constantly being improved by
PCMCIA’s technical committee, which meets six times a year. The PC Card Standard can be
ordered through the PCMCIA web site.

PC Card Resource Directory
The definitive resource for locating PC Card products and services. With detailed listings from
PCMCIA's members, the directory is the industry's most comprehensive source for PC Card product
information.

Home Page for PC Card Technology
PCMCIA hosts a World Wide Web site that includes information about the association, the
interactive PC Card Resource Directory, and a complete directory of PCMCIA's members.

Trade Shows
PCMCIA promotes PC Card technology at trade shows throughout the year. This year, PCMCIA
will exhibit at the WinHEC and IDF conferences. Contact the PCMCIA office at
office@pcmcia.org for more information.

				