Contents - by linxiaoqin


CPUS
 2.1. what is it?
 2.2. terminology
 2.3. well-known developers
 2.4. main issues regarding cpus
MOTHERBOARDS
 3.1. what is it?
 3.2. terminology
 3.3. major manufacturers
 3.4. server vs. desktop motherboards
GRAPHICS CARDS
 4.1. what is it?
 4.2. terminology
 4.3. interfaces and how they relate to graphics cards
 4.4. OMG HOW DO I GET TEH MOST FPS IN COUNTERSTRIKE
 4.5. adapters, gender changers, and what you should expect with your new video card
 4.6. drivers, drivers, drivers
SOUND CARDS
 5.1. what is it?
 5.2. terminology
 5.3. major manufacturers
 5.4. port specifics
 5.5. issues surrounding sound cards
POWER SUPPLY UNITS
 6.1. what is it?
 6.2. terminology
 6.3. major manufacturers
 6.4. external protection – UPS, inverters, surge suppressors, etc.
 6.5. things to watch out for
HARD DRIVES
 7.1. what is it?
 7.2. terminology
 7.3. major manufacturers
 7.4. external forms
 7.5. things to watch out for
 7.6. RAID and what it’s for
 7.7. pros and cons for single drive vs. separate OS and storage drives
RAM (MEMORY)
 8.1. what is it?
 8.2. terminology
 8.3. major manufacturers
 8.4. things you should watch for
 8.5. why you should NEVER have less than 1 gig of ram for xp/2 gigs of ram for vista and w7
INPUT DEVICES
 9.1. what is it?
 9.2. forms of input devices
 9.3. terminology
 9.4. when wireless/bluetooth is generally a bad idea
 9.5. major manufacturers and what they make
 9.6. things to watch out for
OUTPUT DEVICES
 10.1. what is it?
 10.3. terminology
 10.4. why wireless/Bluetooth is always a bad idea
 10.5. major manufacturers and what they make
 10.6. things to watch out for
COMPUTER CASES
 11.1. why is this here?
 11.2. form factors/sizing terminology
 11.3. why the material matters
 11.4. terminology
 11.5. cooling, and why you MUST think about it
 11.6. major manufacturers
 11.7. things to watch out for
COOLING
 12.1. why cooling is possibly the most important thing to think about
 12.2. air cooling
 12.3. liquid cooling
 12.4. oil immersion
 12.5. extreme cooling
 12.6. why you can’t just stick your computer in your refrigerator or something equally stupid

2.1. what is it?
         the cpu is just what the name says - the central processing unit. it is the brain of your entire computer. all major calculations take place here, and as such every program you could ever think of running operates at a speed relative to it. there are a variety of different forms of processors, based on front side bus (2.2.2), socket type (2.2.4), the number of processing cores on the chip (2.2.5), and a gamut of different technologies that various companies have pioneered in the last 25 years. in general, for an office pc, your processor is the one piece of hardware that most affects your performance. for a gaming pc, your processor and graphics card(s) are the most important pieces of hardware for performance.

2.2. terminology
         here, i'll post a variety of terms relating specifically to the cpu. if you don't find something here that you're looking for, look in either the motherboard section (section 3) or the cooling section (section 12) for information. or just use the find function - probably much faster.

2.2.1. clock speed, or processor speed.
         wikipedia's got the best definition of clock speed. it says that clock speed "is the fundamental
rate in cycles per second (measured in hertz) at which a computer performs its most basic operations
such as adding two numbers or transferring a value from one processor register to another." what that
means is that the clock speed is a measurement of how fast your computer 'thinks' in binary - shuffling 1s and 0s around to perform the calculations that make up every program you're familiar with. each clock cycle is one tick of the processor's internal clock. that doesn't mean only one operation happens per clock, but it's the most basic unit of timing. wikipedia also points out that "...CPUs that are tested
as complying with a given set of standards may be labeled with a higher clock rate, e.g., 1.50 GHz, while
those that fail the standards of the higher clock rate yet pass the standards of a lesser clock rate may be
labeled with the lesser clock rate, e.g., 1.33 GHz, and sold at a relatively lower price." this is related to how processors are 'binned'. they make these big batches of processors and test them individually. each processor has a different maximum clock speed that it can reach without errors, due to imperfections within the chip (think nanometer-sized imperfections). so, every intel processor is baked up to be a core 2 extreme, however not all can make it that high. because of the pricing points for, say, the e8xxx series, if a chip can't reliably hit the 3.33ghz that the e8600 runs at, but functions well JUST below that, it'll still get sold at the e8500 level of 3.16ghz. WHAT THIS MEANS is that you can buy most cheaper chips and overclock the crap out of them without any huge issues (within reason).
         a major part of how computers get these ridiculous clock rates (3.2ghz = 3.2 BILLION cycles per second, for goodness sake) is through the use of a multiplier. wiki defines this as, "the ratio of the internal CPU clock rate to the externally supplied clock. A CPU with a 10x multiplier will thus see 10 internal cycles (produced by PLL-based circuitry) for every external clock cycle". basically, for an e8400 core 2 duo, the core speed might be 3000mhz (3.0ghz), but the multiplier used is 9x - the cpu runs 9 internal cycles per external clock cycle. this means that your true external clock is only about 333mhz. i'll explain the rest in the next section.
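the arithmetic above is easy to sanity-check yourself. here's a quick sketch using the e8400 figures from this section:

```python
# figures from this section: an e8400-style core 2 duo
core_speed_mhz = 3000   # advertised clock (3.0 ghz)
multiplier = 9          # internal cycles per external clock cycle

# external (base) clock = core clock / multiplier
external_clock_mhz = core_speed_mhz / multiplier
print(round(external_clock_mhz))  # → 333
```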

2.2.2. fsb, or front side bus.
          an fsb is basically the pathway between the cpu and the northbridge chipset on the motherboard (3.2.1). intel measures it in mhz, while amd measures its hypertransport link in (mega)transfers per second. in the end, both mean the same thing - the bigger the number, the faster the cpu can talk to the rest of the computer. an intel example is the aforementioned e8400, which has an fsb of 1333mhz. an older intel cpu is the q6600, which has an fsb of 1066mhz. amd am2+ processors have a hypertransport link running at 2000mt/s.
          what does this mean? well, let's finish the discussion of the e8400. i just said that it has an fsb of 1333mhz, but earlier i said its external clock was 333mhz. what gives? well, remember that i mentioned that different manufacturers use different descriptions for their fsb ratings. intel's really behind the times in using 'mhz' as a term for the fsb, because it's not actually measuring a clock rate inside the processor. instead, it's measuring (just like amd's mt/s) the number of transfers per second being performed. intel uses 'quad pumping', an extremely complicated technique that allows four transfers per cycle coming out of the cpu. thus, 333mhz x 4 = 1333, or 1333 megatransfers per second.
          why does this matter? in general, you want your fsb to be no less than 1/3 of your clock rate.
why can it be less? because your cpu doesn’t actually spit out every bit it processes, due to the cache
(2.2.3). for example, above i listed that the e8400 is a 3ghz chip with a 1333mhz fsb. that's a ratio of about .44, fsb/clock. another example is the athlon 64 x2 6000+ Windsor chip, which has a clock of 3.0ghz and an ht rating of 2000mt/s. that's a ratio of about 2/3 - another excellent level. your cpu will never outpace its bus at that rate. here's a bad example. my first cpu was a single-core Celeron at 3.46ghz. holy
crap! that’s a lot of ghz! however, the fsb was a paltry 533mhz, or a ratio of .15 – less than half of the
recommended level. it could go pretty fast, but pretty much everything kicked the snot out of it. and it
made my room ten degrees hotter.
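if you want to run the same back-of-the-envelope check on a chip you're eyeing, it's a single division. a quick sketch using the three examples above:

```python
def fsb_ratio(fsb_mt_s, clock_mhz):
    """ratio of front side bus transfer rate to core clock speed."""
    return fsb_mt_s / clock_mhz

# the three examples from this section
chips = [
    ("e8400", 1333, 3000),    # intel core 2 duo
    ("athlon x2 6000+", 2000, 3000),  # amd, hypertransport rating
    ("celeron", 533, 3460),   # my old single-core space heater
]
for name, fsb, clock in chips:
    r = fsb_ratio(fsb, clock)
    verdict = "fine" if r >= 1 / 3 else "starved"  # the 1/3 rule of thumb above
    print(f"{name}: {r:.2f} ({verdict})")
```

running it confirms the section's numbers: the e8400 and the athlon clear the 1/3 mark, and the celeron sits way under it.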

2.2.3. cache
         cpus nowadays come with a small, extremely fast memory module built into them to reduce the
amount of time it takes to access recently used memory locations. because it’s faster to use these than
to use your normal memory sticks, it reduces latency of certain types of processes significantly.
Wikipedia has a very detailed article about it, but you really don’t need to know an enormous amount
about it for building computers. what you DO need to know is the difference between L1 and L2 (and occasionally L3) caches, and what shared and discrete mean in relation to this.
         L1 caches are usually very small. my e8400 has two 32kb caches at this level. note that i
specifically said that there are two L1 caches. this is different from L2, which is often shared between
multiple cores of cpus. L2 on this processor is a shared 6mb (6144kb, technically) between two
processing cores. the largest i’ve seen is 12mb with the qx9xxx series of core 2 extreme processors.
occasionally you’ll see a chip with L3 on it. besides Pentium 4 EE (horrid chips, all told), i’ve never seen it
on another intel chip. i believe that amd uses it occasionally with their quad core processors to make up
for smaller L2 cache sizes. it can technically be anywhere from 2-256mb, however i’ve never seen more
than 16 in an L3 before.
         in general, the larger the cache, the better - particularly in multi-core systems. some multi-core processors use a discrete L2 cache - meaning 3mb per core, for example, rather than sharing 6mb between the two. a discrete cache is often faster even at a comparable total size, since each core gets uncontended access - so a 2x3mb cache can beat a 6mb shared cache. i have confirmed this with direct and repeated testing.

2.2.4. socket
         your cpu plugs into a specific motherboard socket. there are a wide variety of them, and i’ll
discuss the motherboard side in greater detail later, so i’ll just mention a few major ones here: lga 775
(core 2 duo, quad), lga 771 (xeon), socket 1207 (opteron), and am2 (athlon).
         lga 775 is possibly the best-selling motherboard socket in existence due to the excellent performance of the core 2 cpu architecture. lga stands for land grid array. it's also known as socket T. it's technically not a socket, but rather 775 protruding pins on the board that press against flat contact points on the underside of the processor. it allows for the use of Pentium 4 prescott, Pentium D presler, core 2 conroe and later cpus. the biggest bonus is that the pins are on the motherboard, which is generally much cheaper to replace than the cpu is. since the chip rests on contact points rather than pins fitting into holes, processors seat much more easily and are clamped into place by a spring-loaded retention bracket.
         lga 771, also called socket J, was introduced in 2006 to replace the quite horrible socket 604 server cpu interface. it is nearly identical to the 775 except that the indexing notches (which allow the cpu to fit only one way) and two address pins are in a different spot. it was originally designed for the cancelled 'jayhawk' core (much as socket T was originally for the cancelled 'tejas' core), and is used for dual-core (Dempsey, woodcrest) and quad-core (clovertown, harpertown) xeon processors. the well-known skulltrail dual-cpu motherboard was also based off of lga 771, however, it used the qx9770 core 2 extreme processor. at 1400$ per cpu, this dualie board never really took off except among extremely stupid people who had a lot of money and didn't realize that hardly anything had been written that actually took advantage of eight cores in a non-server environment.
         socket 1207, also known as socket F, was designed by AMD for the opteron processor. it was
released in q3 2006. it's most famous for use as the base socket in the amd 4x4 platform (the 'quadfather'), which allows (similar to the skulltrail) two quad-core processors to be used. the name
comes from the original configuration of the quadfather, which allowed for two dual-core processors.
the sockets were modified to work in this formation (socket 1207 FX for amd, socket L1 by nvidia). the
4x4 was generally a desktop board.
         socket am2, originally called m2 but renamed to avoid clashing with now-defunct cyrix's desktop 'mII' processor, was released in q2 of 2006 as a replacement for sockets 939 and 754. it supports every amd chip from athlon 64 to phenom, and was considered a lifesaver for amd because of the significant performance gains over sockets 939, 754, and 940. recently, amd upgraded the am2 to am2+ and then am3, although am3 is not in major production yet.

2.2.5. single, dual, triple, quad, and six-core processors
         everyone used to scream about clock speeds, and it was a big deal when the first 1ghz chip was made. then came 2ghz, then came 3ghz…then came the wall. there is a limit to how fast a normal chip can run efficiently (the von Neumann bottleneck - basically, memory can't feed the cpu information fast enough to keep it busy), and after about 3.2ghz chips became prohibitively hot (HEAT=FIRE). so, after cpus started setting houses on fire, tech people sat down and thought about ways to make cpus more efficient. they had already added in hyperthreading, which let a single core handle two threads simultaneously (previous cpus could only run one thread at a time), but single-core cpus were still getting boned. so, they moved to putting more than one cpu core on a die. this increased performance significantly - for so-called 'embarrassingly parallel' problems, having two cores each working on part of the broken-down task made for almost doubled speed. a dual-core 2ghz chip could perform at a theoretical speed of almost 4ghz (fsb limitations and memory controller bottlenecks kept it from being a perfect 100%).
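that 'almost' has a classic formula behind it: amdahl's law (not named in this guide, but it's the standard way to put a number on multi-core speedup). a quick sketch, assuming 95% of the work parallelizes:

```python
# amdahl's law: overall speedup from n cores when a fraction p of the
# work can be split across them (the rest stays serial)
def speedup(p, n):
    return 1 / ((1 - p) + p / n)

# an 'embarrassingly parallel' job, 95% parallelizable, on two cores
print(round(speedup(0.95, 2), 2))   # → 1.9, i.e. 'almost doubled'

# and why piling on cores hits diminishing returns: 5% serial work
# caps the speedup at 20x no matter how many cores you throw at it
print(round(speedup(0.95, 1000), 1))
```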
         but enough about history. basically, no matter what, you should never buy a single core. ever.
there isn’t any excuse not to jump up to a dual-core at a lower clock rate, particularly since the release
of cheap quad cores is pushing dual-core pricing down. intel’s core 2 duo, the laptop varieties of Celeron
dual-core, Pentium D, and amd’s athlon 64 x2 and laptop-based turion 64 x2 processors are all awesome
and can handle most anything without even flinching. the biggest performance gains come when you
toggle between multiple applications – like writing a paper while having a web browser and AIM and a
music player on. a single-core chip would choke trying to manage all that. but, since you rarely have two
major applications focused at the same time, dual-core chips will put the focused application on one
core by itself, and have the other programs in the background on the second core. it makes things
MUCH easier to deal with.
         one of the biggest issues concerning multi-core chips is that in order to take advantage of the multiple cores on a cpu, programs need to be written to balance their load across the cores. as of
right now, unless you are using an advanced program (such as newer Photoshop programs, or new
games), your computer will only use one core for that particular program. not a big deal in terms of ms
office, but if you’re playing crysis or using photoshop CS 4, you need the extra processing power
provided by those extra cores.
         quad cores gained popularity among power users who run applications such as those mentioned above. particularly for high-end music work (like fl studio, logic, reason, and pro tools) and graphics/animation work, quads provide that extra oomph. i don't recommend these products for your average user, as in general the clock speed per core is much lower per dollar than for dual cores. for example, at this moment, the e8400 (a dual-core processor) costs 170$. it runs at 3ghz, with a 1333mhz fsb. the cheapest intel quad you can buy is the q6600 (which is in the process of being phased out), at about 170$ as well. however, it's only at 2.4ghz, and has an fsb of 1066mhz - meaning each core only gets about 266mt/s of fsb bandwidth to itself. unless you NEED that extra processing power, a quad isn't really worth it.
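the per-core arithmetic there is just the bus rate divided evenly by core count (a simplification - real sharing depends on the workload - but it's the comparison being made). a quick sketch:

```python
# rough per-core share of front side bus transfers, assuming an even split
def per_core_fsb(fsb_mt_s, cores):
    return fsb_mt_s // cores

print(per_core_fsb(1333, 2))  # e8400, dual core → 666
print(per_core_fsb(1066, 4))  # q6600, quad core → 266
```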
         amd also released a three-core phenom x3 chip on the am2+ socket a little while after debuting their four-core phenom x4. some speculate that it's merely an x4 with one of the cores disabled after failing testing, but it has sold well and is worth the cost, particularly for working with high-definition video.
         intel released a six-core xeon chip in early q4 of 2008, titled dunnington. intended as a server
processor, it includes 9mb of L2 cache (3mb shared per pair of cores), 16mb of L3 cache, and a range from 2.13-2.66 ghz on a
1066mhz fsb. they’re relatively cool compared to other similar chips, and intended for stacked and blade
server systems. they are the last core 2-based chip to be created before the i7/Nehalem architecture
changeover in late q4 2008.

2.2.6. ht, sse, virt tech, manufacturing process, and other weird things you should probably ignore
         when you look at the cpuz readout of most modern processors, you see all sorts of
abbreviations. mmx, sse, sse2, sse3, ssse3, sse4.1, ht, stepping, virtualization technology, manufacturing
process, em64t, blah blah blah. there are only a few you really need to know: hyperthreading (ht),
manufacturing process, stepping, and (maybe) the basics of sse. the rest are crap you’ll never touch.
         hyperthreading, as mentioned above, partitions the core into two logical processors. it was
originally used in the Pentium 4 processor, and was unused in desktop computers for years until the
advent of the ultra-low power Atom processor and the brand new core i7 (Nehalem, lga 1366)
processor. the technical definition is that it’s used to improve parallelization of computations performed
on pc microprocessors via simultaneous multithreading. in order to be effective, the programs you're running need to be written with ht in mind - for example, even though intel was bonkers over ht in the windows 2000 era, they didn't suggest enabling it under w2k, since its scheduler couldn't tell logical cores from physical ones.
         manufacturing process refers to the size and spacing of transistors on the die. for example, most early core 2 processors are based off of the 65nm manufacturing process - meaning you can fit roughly double the transistors on a die compared to the previous 90nm generation. the newest cpus are created using the 45nm manufacturing process, which lowers the heat generated by said processors, since less energy is needed to do the same work (less resistance). currently,
while intel has 10+ cpus out utilizing 45nm tech, amd is only planning on releasing a 45nm chip in the
server sector come q1 2009. most graphics processing units (GPUs) by nvidia and ati use the 65nm
process, and are currently upgrading to 55nm.
         stepping is a general indication of how much a processor design has been revised during its lifetime. it'll generally start at a0, and go up from there. for example, the q6600 was for a long time a b0 chip. about a year ago, the g0 q6600 chips came out with reduced thermal output and fewer errata in the chip die. everyone screamed and wanted one (except for me, who hates quads).
         sse, or streaming SIMD (single instruction, multiple data) extensions, is basically a set of extra cpu instructions for performing the same operation on multiple pieces of data at once. it was introduced in 1999 with the intel p3 processor. it was a response to the original mmx technology that came out with the pII processor, since intel was unhappy with mmx's limitations, and it extends mmx beyond the original layout. sse2 was released with the p4, and added double-precision math instructions; it's also part of the baseline x86-64 architecture. sse3 was an incremental upgrade over sse2. sse4 is a major enhancement that added a large number of additional integer instructions. basically, it's good to have, and if you find a cpu that doesn't support it, it's at least 10 years old.
         you shouldn’t really worry about any of these except manufacturing tech…and unless you’re
really hurting for cash, you should never buy a 65nm cpu. the lowered thermal output makes the 45nm
an easy pick. ht and sse are important, but unless you’re either operating a 486dx system running
windows 3.1 or are using one of the four processors mentioned above, they don’t really matter.

2.2.7. tdp, and why you can’t ignore it.
         tdp means thermal design power (or point), and it represents the maximum amount of power
the cooling system in a computer is required to dissipate. this does not represent how much heat is
emanating from the chip – rather, it represents the amount of heat that a cooling technique is required
to dissipate in order for the chip to run comfortably. an e8400 has a tdp of 65w – meaning that 65w
need to be shipped out in order for the cpu to not exceed the maximum junction temp for the chip. it
does NOT represent how much power the cpu requires! rather, it’s the max power (in heat form) it’ll
create when running real applications that needs to be shipped off the chip to prevent it from exploding.
since this figure is defined differently by each chip manufacturer, it's not something you can use to compare chips across brands. it's a warning, not a precise measure. the qx9775 dissipates 150w of heat, while the
atom dissipates a paltry 4w. it’s all about power consumption, in the end, but it’s not a definitive way to
compare chips.

2.2.8. overclocking (DHSU) and why i’m not discussing it.
         overclocking refers to the technique of setting your system's fsb higher through motherboard settings, thus making the internal clock of the cpu scale up with it, since the multiplier ratio stays fixed throughout the chip. this can cause errors in computations if the chip doesn't have enough power to complete them, so overclockers often run extra voltage through their motherboards to keep the cpu stable. note that overclocking isn't the art of overvolting your cpu - it's finding the balance between stability and speed.
         overclocking is dangerous - it's easy to fry an expensive cpu because you're going for that last 50mhz. that's the main reason i'm not discussing it. also, there are innumerable references to overclocking and how to do it on the web. i'd suggest you just follow them - most online guides go into specific chips and everything. just do what they say - it'd be far more accurate than anything i could tell you to do. it's also safer for your chip, which is more important than you are anyways.

2.3. well-known developers.
         this section only consists of two chipmakers: intel and amd. diehards are going to scream that
i’m not including cyrix, via, etc – but i’m making this an info sheet for someone building a desktop, not
an in-car box or a windows 3.1 system. so suck it up.

2.3.1. intel and naming criteria.
         intel’s chips are overall the highest-performing chips on the market, and in general you pay
more for that privilege. i’m not going to go into history here (too long), so i’ll keep it short and sweet. in
general, intel chips have higher clock rates and more overclocking potential than amd's. they cost more than amd's. they have 45nm tech, which amd presently doesn't. and they generally have more L2 cache.
         intel has been naming their chips a variety of things over the years. i’ll stick mainly with core 2,
since you shouldn’t be actually purchasing a p4 any time soon. as with most things, the higher the
number, the better the chip. in general, here’s what the name means.

core 2 duo e6750

core 2 duo: intel’s consumer 64-bit dual-core and 2x2 (meaning, two dual-cores on the same chip) cpus
with the x86-64 instruction set and based on the core microarchitecture. you can find a variety of chips
in this range, including core 2 solo (single-core), duo (dual-core), quad (quad-core) and extreme (dual- or
quad-core cpus for enthusiasts). in general, the core 2 processors delivered 40% more performance with
40% less power than the pentium 4 – roughly in line with moore’s law (2.4.3).
e: this represents a standard dual-core desktop processor. you might also see U (ultra low voltage), SU
(ultra low voltage, single core), T (dual-core mobile processors), L (low voltage mobile), P (medium
voltage, standard size), SP (medium voltage, small form factor), SL (low voltage, small form factor), q
(quad), x (dual-core extreme cpu for enthusiasts), and qx (quad extreme). told you the naming criteria
were complex. i forgot the SP the first time around and had to check.
first two numbers (67): the first two numbers represent when the chip came out, what core it’s based
off of, and what its clock rate is. for example, this chip is based off of the conroe codename. conroe is a
65-nm chip with a 1066mhz fsb. they all feature either 2 or 4mb of shared L2, have a thermal design
power rating of 65w, and use the lga 775 socket. i’m not going to go into each chip name here, since
wikipedia has a fantastic list that you can dig into to check everything. basically, the lower the first
number, the older the chip and design. the lower the second number, the lower the clock within that
chip’s design bracket. this particular chip has 4mb of L2, an operating frequency of 2.66ghz, and a
1333mhz fsb. wait, but i just said that all conroe chips have a 1066mhz fsb! what gives?

second two numbers (currently 50): the last two numbers designate changes within each individual chip
design – within, say, the e6700 chip, not within the Conroe design that is its overarching design inside of
the core microarchitecture. note that this has a 5 for the third number. this third number generally
represents an upgraded fsb, L2 cache, or voltage design. in this case, it took the e6700 processor
(2.66ghz, 1066mhz fsb, 10x multiplier) and increased the fsb to 1333mhz while decreasing the multiplier
to 8, thus giving a good chip the same clock rate but at a much faster communication speed with the
computer. it was given a stepping rating of g0 to indicate the update. the fourth number is almost never
used, and generally only denotes a ‘port’ of a cpu to a different socket (qx9775, for example).
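the scheme above is easy enough to mechanize. here’s a rough python sketch that decodes a core 2 model string the way the text describes – the prefix table is a partial copy of the list above (SU/SP/SL and friends are left out), and the quad-pumped-fsb arithmetic uses the e6700/e6750 numbers from this section:

```python
# partial prefix table, copied from the list above
PREFIXES = {
    "e":  "dual-core desktop",
    "q":  "quad-core desktop",
    "x":  "dual-core extreme",
    "qx": "quad-core extreme",
    "u":  "ultra low voltage",
    "t":  "dual-core mobile",
    "l":  "low voltage mobile",
}

def decode_model(model):
    """split e.g. 'e6750' into (class, family digits, revision digits)."""
    model = model.lower()
    prefix = model[:2] if model[:2] in PREFIXES else model[:1]
    digits = model[len(prefix):]
    return PREFIXES.get(prefix, "unknown"), digits[:2], digits[2:]

print(decode_model("e6750"))   # ('dual-core desktop', '67', '50')
print(decode_model("qx9775"))  # ('quad-core extreme', '97', '75')

# core clock = base fsb clock x multiplier. intel's fsb is quad-pumped,
# so a "1333mhz" fsb is really a 333mhz base clock, and "1066" is 266mhz.
print(round(333.33 * 8))   # 2667 – the e6750's ~2.66ghz
print(round(266.67 * 10))  # 2667 – the e6700 hits the same clock off a slower fsb
```

notice how the e6750 lands on the same ~2.66ghz as the e6700 despite the lower multiplier – that’s exactly the fsb-for-multiplier trade described above.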
2.3.2. amd and naming criteria.
        amd’s chips overall perform slightly under the comparable chip by intel, and as a result cost
somewhat less. again, i won’t do history. in general, amd’s chips (particularly quad-core) have a lower
clockrate and overclocking potential than intel. they cost less than intel. they’re only just now getting
into 45nm tech (with their shanghai server chips), while all of intel’s new offerings have been 45nm for a
while now. they generally have less L2 cache memory, but it’s often discrete rather than shared (for
example, it’d be 256kb x 4 for a quad core, rather than 1mb shared).
amd is far more confusing than intel with naming criteria because they completely change the
chip naming sequence every time they get a new socket to put it in – and they tend to name their
processors misleading things. however, there are a few general things to know about when you’re
looking at a processor’s name.

athlon 64 x2 6000+ Windsor

         athlon processors in general list their capabilities in the title of the chip. so, that means that this
is a dual-core processor (x2), and it’s an x86-64 capable chip (64) meaning it can run a 64-bit
architecture. the number, however, is completely misleading. you’d think that it meant that the
processor was operating at 6000 mhz, which obviously is impossible. amd uses the number as a relative
performance rating – meaning that it’s faster than a 5200+ amd chip.
         the only other amd product that i can specifically say “this is how it is” about is their mobile cpus –
specifically, the turion 64 chips. keeping with the previous example, it’s a relative description.

amd turion 64 x2 mobile technology tl-50

         the first parts are the same as the athlon – 64-bit capable, and a dual-core chip. the two letters
(tl) designate the processor class. the second one specifically is important – it represents the increasing
degree of ‘mobility’ (as measured by power consumption, enabling thin and light notebook pcs, and
longer battery life). note i said increasing – that means that a is worst and z is best. the number (50) is a
relative demonstration of processor performance.
         to be entirely honest, i don’t know much beyond that about amd’s nomenclature, and there’s
next to nothing online. anyone know more than me? i hardly ever use amd, because they’re really not
worth the reduced cost.

2.4. main issues regarding cpus.
        there are two major issues regarding cpus that you should know about: heat and power
consumption. i’ll also include some information on moore’s law, which allows you to compare cpus
based on when they came out to get a more specific comparison between processor performance
beyond just the clockspeed.

2.4.1. power consumption.
         this isn’t a real biggie, but it’s worth noting. assuming you don’t have some ancient computer
with a chip from 1990 in it, your processor should be one of the top two power consumers in your
computer. this scales with clockrate and cores – the higher either of those numbers is, the more power
is required to prevent the cpu from having a ‘brown-out’. this seems really simple, but you’d be
surprised. it boils down to not using a stock 300w power supply when you’ve got a decent graphics card,
or no matter what if you’re using a quad-core processor. remember that a psu’s efficiency rating tells
you what it pulls from the wall, not what it puts out – a 300w psu at 75% efficiency draws about 400
watts from the outlet to deliver its rated 300, and cheap units often can’t sustain that rating anyway.
either way, 300 watts isn’t enough for any qx processor, or for most quad-core builds period.
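one common way to think about psu sizing – keeping in mind that a psu’s wattage rating describes its dc output, while efficiency describes what it pulls from the wall – sketched in python. the component wattages below are illustrative ballpark guesses for a hypothetical quad-core gaming build, not measurements:

```python
def wall_draw_watts(dc_load_watts, efficiency):
    """power pulled from the outlet to deliver a given dc load."""
    return dc_load_watts / efficiency

def psu_is_enough(psu_rating_watts, component_draws_watts, headroom=0.8):
    """true if the estimated load stays under ~80% of the psu's rating
    (running a psu flat-out is asking for trouble)."""
    load = sum(component_draws_watts)
    return load <= psu_rating_watts * headroom

# hypothetical quad-core gaming build: cpu, gpu, board+ram, drives, fans
build = [130, 110, 60, 30, 20]  # illustrative guesses, 350w total

print(psu_is_enough(300, build))   # False – a stock 300w unit won't cut it
print(psu_is_enough(500, build))   # True
print(round(wall_draw_watts(sum(build), 0.75)))  # 467 – wall draw at 75% efficiency
```

the 80% headroom figure is a rule of thumb, not a spec – adjust to taste.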

2.4.2. stock cooling vs. aftermarket cooling.
         this IS a big deal, unlike the power requirements section. use of a cpu cooler is required, else
your cpu will hit 100 degrees Celsius in about thirty seconds. you’re fine using the stock cooler if you’re
using a chip with no more than 65w of tdp. ANYTHING more can result in your computer going up in
smoke, literally. this is assuming you’ve got good airflow through your case, and you’re not gaming for
longer than maybe 30 minutes at a stretch. anything more, and you need to look into getting an
aftermarket cooler. they are sold at all different prices, so if you’re not a power user you can get a
decent one for maybe 20 bucks. it’s WORTH IT. you can and will have your computer burst into flames if
you’re not careful about your cooling setup.

2.4.3. moore’s law.
         moore’s law basically states that transistor density per dollar doubles every 18-24 months. what
the heck does that mean? and why does it matter? a better way of putting it is that processor
performance doubles every 18-24 months. this restatement of moore’s law lets us roughly quantify
the performance of a cpu on a relative scale to another cpu. for example, my first processor compared
to my current processor – a Celeron d 360 3.46ghz cpu as compared to a core2 duo e8400 3.0ghz cpu.
         the Celeron d 360 was released in 2004. wolfdale-named chips debuted in the beginning of
2008. thus, there’s 4 years between them, which makes for each core of the e8400 being 8x faster than
the celeron’s core – at the same speed, mind you. since 3.46 is approximately 1.15x larger than 3, you’ve
gotta divide that number (8) by 1.15, which gives you roughly 7. since the e8400 is a dual-core, you can
say that the e8400 has a performance index that is 14x the size of the celeron’s performance index. note
that this doesn’t take into account fsb or anything (the celeron’s a paltry 533mhz, which is 2.5x smaller
than the bus on the e8400), but you get the idea. theoretically, you can spell all that mess out in an
equation, but ms word doesn’t work well with equations.
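since ms word won’t cooperate, here’s that back-of-the-envelope math as a python sketch instead. the doubling period is left as a knob, since the law gets quoted anywhere from 18 to 24 months – the ~14x figure above corresponds to the aggressive end of that range:

```python
def perf_index_ratio(years_apart, old_clock_ghz, new_clock_ghz, new_cores,
                     doubling_months=18):
    """rough moore's-law performance ratio between two cpus, following the
    celeron d 360 vs core 2 duo e8400 example above."""
    doublings = years_apart * 12 / doubling_months
    per_core = 2 ** doublings                   # speedup at equal clock
    per_core /= old_clock_ghz / new_clock_ghz   # correct for the clock difference
    return per_core * new_cores                 # credit the extra core(s)

# celeron d 360 (3.46ghz, 2004) vs core 2 duo e8400 (3.0ghz, 2 cores, 2008)
print(round(perf_index_ratio(4, 3.46, 3.0, 2, doubling_months=16)))  # 14 – the text's figure
print(round(perf_index_ratio(4, 3.46, 3.0, 2, doubling_months=18)))  # 11 – at an 18-month pace
```

none of this is precise – it ignores fsb, cache, and architecture – but it puts two chips from different eras on one rough scale.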


3.1. what is it?
         the motherboard controls all I/O operations on your computer, and connects everything to
everything else. it routes power to components, transfers information between those components, and
handles the most basic computer functionalities, like powering on and off, controlling fans, and
sometimes handling integrated graphics functionality (on certain boards, usually lower-budget).
         the motherboard is one of the most difficult items to pick when building a computer. because of
the nature of it (everything goes through it), getting a fantastic cpu and graphics card and plugging it
into a shoddy mobo will make for a poor computer by nature, however you don’t want to spend several
hundred dollars for functionality you won’t need. because of the centrality of motherboards in
overclocking, a variety of technologies have come out to enhance stability and reduce power fluctuation
– most of which cost an exorbitant amount of money. there are also variations in cooling on the
northbridge and southbridge chipsets (3.2.3.), different forms of capacitors, an incredibly wide variety of
internal and external connectors, several form factors to fit different amounts of components into
certain spaces, and different levels of complexity and customization in the BIOS.
3.2. terminology.
        here, i'll post a variety of terms relating specifically to the motherboard. if you don't find
something here that you're looking for, look in either the cpu section (section 2) or just use the find
function.

3.2.1. BIOS.
         BIOS stands for Basic Input/Output System. in general, it refers to an ‘operating system’ for the
motherboard – a simple text-mode menu that controls the boot sequence, identifies and initializes the
components of the computer (like the cpu and graphics card), and allows the lowest level of access to
the pc itself (not the data, unfortunately, just the system settings). you can set the system time, check
system stats and temperatures, set the order of boot devices (hard drive, cd-rom, floppy, etc), set
power-on states and administrator passwords, etc. the most
common use of the bios nowadays is to allow for overclocking. most decent BIOSes allow you to adjust
the fsb, memory timings, power to the cpu, and power to the memory.

3.2.2. chipsets.
         in general, the chipset on a motherboard is a group of integrated circuits that work together as a
single product. it generally refers to the northbridge or southbridge chipsets, which link the cpu to most
of the important parts of the computer. because it controls communications between the processor and
external devices, it’s incredibly important in defining system performance. it can also refer to the
chipsets for graphics processors, but i’ll discuss that in a later section.

3.2.3. north bridge, south bridge, golden gate bridge, bridge over troubled waters...
         as i mentioned in the last section, the southbridge and northbridge connect the cpu with major
sections of the computer.
         the northbridge is generally used to connect the cpu to the main system memory and the
graphics processor – the two most important components when defining system speed. it also connects
the cpu to the southbridge. because not all systems have a discrete graphics card, some northbridge
chipsets will often include an integrated video processor that operates using system resources. because
of the nature of cpus and memory, a northbridge will often only work with one or two kinds of cpus and
one or two kinds of ram. as computers become faster and handle more data at one time, the
northbridge is becoming hotter and hotter and will almost always have a heatsink and/or fan cooling it.
         the southbridge connects the ‘slower’ parts of the computer with the cpu – generally the pci
slots, usb, smbus (system temp sensors, fan controllers, etc), sound interfaces, and power management
functions. none of these use a quarter of the bandwidth that the ram or graphics card will utilize, so as a
result the southbridge is a much cooler chip that often won’t have a heatsink on it.

3.2.4. memory and associated issues.
         the ram plugs into the motherboard, if you haven’t guessed. due to chipset limitations, mobos
generally won’t support more than one kind of ram. note that this doesn’t refer to speed ratings (pc2-6400,
for example), but rather to types of ram (ddr, ddr2, ddr3…). since most memory is pin-specific as to which
slot it will fit into, a motherboard will often only have one kind of slot for the system memory. there
will rarely be more than 4 slots on an lga 775-era motherboard or more than 6 on an lga 1366-era
board.

3.2.5. sockets, compatibility, etc.
         i wound up going into this in 2.2.4. rather extensively, so refer there for these questions.
3.2.6. expansion slots.
         this section refers to the agp and pci line of expansion slots for plugging in extra boards
(daughter boards) to the motherboard. these are best known for use with graphics cards, however their
usage extends far beyond just making crysis run better. RAID setups, extra lan ports, sound cards,
wireless internet expansion cards, and more make these very versatile slots.

long live agp.
         once thought to have a lifespan past Armageddon (fallout 3 doesn’t run on agp cards, the game
being just too darned demanding graphically, so i guess it wasn’t true), agp has largely fallen out of favor
due to pci-express simply being a better interface. accelerated graphics port (also called advanced
graphics port) was originally designed as a replacement for pci-based graphics cards (this is back in 1997,
mind you) because pci couldn’t handle the increasing bandwidth of graphics required in gaming. pci
topped out at 133 mb/s, whereas agp was designed for 2133mb/s, a jump of well over 15x. due to the
fact that it doesn’t share its bus with other add-on cards, agp has a much higher total bandwidth with
the cpu than a single pci slot could ever have. while some systems still use agp, these are few and far
between, and are generally at least five years old.
         very few modern systems sell boards with these slots. there are still a few varieties of cards to
be purchased nowadays, but they are sold at a much higher cost than the equivalent card in a pcie
format.

pci.
         peripheral component interconnect is a technology designed in 1993 to allow for add-on cards
to be plugged into the motherboard. modern use for pci extends to low-level graphics cards, network
cards of various sorts, sound cards, modems, extra ports for usb or serial (anything slower than 133mb/s
– e-sata would not be compatible at full-speed), tv tuner cards, and disc controllers. pci is slowly being
phased out, since most tasks that are done with pci can be performed equally well with usb. however,
especially with sound cards and disc controllers, an internal hardware connection is preferable to
anything else.

pci express x1.
         pci express x1 is a useless little port that sits on your motherboard, and you never use it. ever.
technically, it supports anything that the pci slot can support, however it only handles a little more
bandwidth than a pci slot (250mb/s versus 133), so almost nothing needs more than pci can give but
less than a full pcie lane. it just doesn’t happen. don’t clog your pcie bus with something that doesn’t
need the bandwidth – just use your pci slot instead. those cards tend to be cheaper anyways.

pci express x16.
         lauded as the gateway to extreme graphics processing, pcie burst onto the scene in 2004 and
promptly changed how we viewed games forever. allowing for up to 4gb/s (pcie x1 does 250mb/s, so
logically x16 would do 16 x 250) of bandwidth through a dedicated bus on the northbridge chipset, this
port pretty much allowed graphics processors to break out of the hole that they’d been in for several
years with the quality of the view that we received and really start going for broke.
         interestingly enough, there are a variety of levels of pcie, due to the use of ‘lanes’ for
communication. each lane can handle 250mb/s – so, logically, a pcie x4 can handle 1gb/s, etc. recently
(late 2007), pcie 2.0 was released, which upgraded the lane speed to 500mb/s and the total bandwidth
for an x16 slot to 8gb/s. all 2.0 cards and motherboards are backwards compatible, which means that all
cards and motherboards designed for 2.0 can be used in a v1.1 system. [b]if it fits, it works.[/b]
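the lane math above is simple multiplication, sketched in python (per-direction bandwidth, using the 250mb/s and 500mb/s per-lane figures quoted above):

```python
def pcie_bandwidth_mb_s(lanes, gen=1):
    """per-direction pcie bandwidth: 250mb/s per lane for 1.x, 500 for 2.0."""
    per_lane = {1: 250, 2: 500}[gen]
    return lanes * per_lane

print(pcie_bandwidth_mb_s(1))          # 250 – the lowly x1 slot
print(pcie_bandwidth_mb_s(16))         # 4000 – the 4gb/s x16 figure above
print(pcie_bandwidth_mb_s(16, gen=2))  # 8000 – pcie 2.0 x16
```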
3.2.7. storage ports.
         most motherboards nowadays ship with 1-2 PATA (parallel ATA) ports and at least two SATA
(serial ATA) ports. i’m not going to go into how these work, rather i’ll let you check the hard drive
section for a much more detailed description of them. it boils down to the fact that pata is annoying to
work with, and functions best for use with a single optical drive on the end of a cable, and sata is best
for everything no matter what but mostly for hard drives. if that actually makes any sense at all.

3.2.8. rear panel ports.
         all motherboards have an i/o panel at the back that allows for people to plug stuff directly into
the motherboard itself – usually interface devices like a keyboard or mouse, a display device like a
monitor, or a data transfer device like a printer, camera, mp3 player, or flash drive. there are a few
weird ones, though, so i’ll talk about them too.

usb.
         hot-swappable, relatively fast, ultra-portable, and small, usb has become one of the most
ubiquitous computer formats known to the general public. everyone carries a flash drive for their
documents, a camera with a usb plug for transferring pictures out, a usb printer, a usb drink cooler, a
usb mouse, a usb leg lamp…it’s everywhere. usb functions at one of three speeds: low (1.5mbit/s), full
(12mbit/s), and high (480mbit/s). these are half-duplex transmissions, mind you – they only go one
direction at a time (in other words, don’t try to do something involving both loading and copying
information off of a usb drive at the same time, it slows WAY down). usb 3.0 has been confirmed and
finished (4.8gbit/s) and is being developed into consumer applications for sale in early 2010, apparently.
i don’t want to go into history
(wiki has a great article on that), so instead i’ll just talk about the forms of usb and why you need to
know when you have an old computer.
         usb is generally found in one of two protocols nowadays: usb 1.1 and usb 2.0. if you have an old
laptop (like my wife’s computer, which was bought in 2003), you likely suffer through the use of usb 1.1.
the main issue with this is that it only operates at low and full speed – not high. this is ok if you only use
it for papers and documents, but it’s excruciating if you try to watch a movie off of an external hard
drive or something. usb 2.0 has support for all three major transmission modes, and is the standard.
you’d be hard-pressed to find something that’s dependent on bandwidth that doesn’t use usb 2.0.
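to see why usb 1.1 is so excruciating for that external-drive movie, here’s the raw arithmetic as a python sketch – these are theoretical line rates, and real-world throughput (especially for usb 2.0) is noticeably lower:

```python
def transfer_seconds(file_mb, link_mbit_s):
    """time to move a file at a given line rate (8 bits per byte)."""
    return file_mb * 8 / link_mbit_s

movie_mb = 700  # a cd-sized video file
print(round(transfer_seconds(movie_mb, 12)))   # 467 – nearly 8 minutes at usb 1.1 full speed
print(round(transfer_seconds(movie_mb, 480)))  # 12 – seconds at usb 2.0 high speed, in theory
```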
         usb is generally found in one of four forms, branching from two basic adapter plugs. type a is the
normal plug we all see, rectangular in shape, with four contacts. interestingly enough, most game
controllers are just adapted versions of this plug, with an additional fifth contact. many companies sell a
receptacle version of this plug as an extension cord of sorts. the second connector we’d find commonly
is type b. type b is considered a peripheral plug: found on printers, scanners, hubs, and the like. it’s
generally the device end – i’ve never seen a computer with a type b plug on it. thirdly, there are also
miniature versions of type a and b, however mini-b is the more common plug you’d see. lastly, and least
common, poweredusb was a version released to supply up to 6a of 5v, 12v, or 24v power to external
components. it was supposed to do away with external power connectors, allowing external drives to
plug solely into the system itself. it’s an expensive format that’s fallen out of style since about a year
after it was introduced. it never really caught on due to the clunkiness of the format and how touchy it
is.

d-sub, vga, s-video, hdmi, and dvi
         motherboards with an integrated video chip internalized in the northbridge chipset often have a
variety of video ports on the back of the motherboard. discrete graphics cards almost always have at
least one of these onboard. i’ll first discuss pc-centric technologies, then tv-centric technologies.
         d-sub defines a type of electrical connector that was designed back in 1952. there’s only a few
types used in computing today (ups systems, analog video, and old digital audio systems, pretty much),
but the one we’re talking about is the de15 plug, which is blue on most modern computers. it’s been
used since 1987. it’s also known as a vga connector. it carries component RGBHV (red green blue horiz
vert) video signals. this sounds complex until you realize that it’s just one plug that does a bit more than
your tv’s RGB component cable. they’re generally used with older CRT monitors, since it’s an analog
signal and not a digital one. laptops often have a mini-vga port, which is a flat plug similar to a usb plug
with 15 contacts. theoretically, d-sub/vga is able to handle up to 2048x1536 pixels.
         dvi (digital visual interface) is a video interface designed for high definition displays like your
hdtv, your expensive flat panel lcd, and projectors. most monitors ship with this. it can handle up to a
1920x1200 pixel signal at 60hz, but at a far better quality picture than through vga. this is possible
because dvi also carries a clock signal to synchronize the picture. using dual-link dvi, it’s possible to
display in excess of 3840x2400 at 33hz. it cannot carry audio data, unlike its tv counterpart HDMI.
there’s a mini-dvi connector that looks similar to a meat grinder. it’s designed for laptops.
         s-video (separated video) is the counterpart to vga in a similar way that dvi is the counterpart to
hdmi. introduced to the home market in 1987, it’s a video-only cable that carries a standard definition tv
signal (640x480, no more) over a relatively cheap cable. it’s a handy way of connecting a modern pc to
an older tv, since almost all low- to mid-range video cards purchased nowadays have an s-video port on
them, but it generally is a pretty bad signal, ranking above your n64’s yellow cable but below pretty much
anything else on the totem pole.
         hdmi (high-definition multimedia interface) is a digital interface that can carry video and audio
information in an uncompressed digital format. the spec was released at the end of 2002. it is the standard for
high definition televisions available nowadays. almost every middle- to high-end video card developed
nowadays has either hdmi or the computer version of the plug, dvi, on the card. dvi transfers to hdmi
with almost no loss of clarity or signal.
         ah, ps/2. unlike most people, i don’t mind using this plug on systems. generally used for
keyboards and mice, ps/2 is being driven out for the same reasons that pata and floppy controllers are
getting the boot: if it’s not plugged in at startup, your computer will never recognize it. if you have a
desktop, ps/2 interface devices are fine. if you have a laptop, get a converter that’ll allow your ps/2
device to be plugged into your usb port. makes things a lot easier.

ieee 1394, firewire
          firewire, originally a contender with usb for most useful (and whored) connector ever, lost that
battle after usb spread like herpes in a nudist colony through the windows market. generally associated
with macs and audio equipment, firewire comes in a few different varieties. the plugs come in two forms:
4-pin (which looks similar to mini-a usb plugs) and 6-pin (which is a slightly fatter plug than usb with a
taper on one end, making it look like an orange juice carton). the first iteration of these plugs, firewire
400, came out in 1995 and was way faster than usb was (in fact, it’s really close to what usb 2.0 is right
now). it could transfer at up to 400mbits/s in half-duplex mode, and could have a cable at up to 15 feet
long. firewire 800 was confirmed in 2002 and featured transfer rates of almost 800mbits/s. while the
transfer protocol was backwards compatible, the plugs aren’t – it utilizes a boxy 9-pin scheme looking a
bit like a mini-dvi connector. there are ‘bilingual’ cables available to connect with older devices,
however they haven’t really received the product penetration that firewire 400 has currently.

lpt, com, serial, etc.
         legacy ports are always popping up here and there. most serial and parallel connectors have
been replaced by firewire and usb. serial audio ports, video ports, etc. have been phased out in recent
years due to the cheapness of just sticking some extra usb ports on the i/o plate. older computers still
use the lpt (line print terminal) port for printing, which had been in use since the mid 80’s. however, it’s
rare to see anything old like these on a modern mobo unless you need that functionality specifically.

audio ports: trs (stereo mini-jack), s/pdif, optical (toslink) and coaxial (high-fidelity)
         audio is a big deal at OCR, obviously, so the way that you listen matters. there is a difference,
surprisingly enough, between the three listed here.
         trs (tip, ring, sleeve) plugs are your standard headphone jacks. they’re 1/8 inch (3.5mm) in
diameter and fit into most portable audio devices, as well as the back of most computers. most newer
computers have 6-channel audio. with these systems, you’ll generally see three jack plugs on the back
of a computer – these represent headphone (green), microphone (low-level audio, pink), and line in
(line-level audio, blue). these computers allow you to use 5.1 surround-sound by using all three plugs as outputs
(depending on codecs). older motherboards don’t allow this. newer motherboards support 8-channel
audio, which basically boils down to either 7.1 surround (through use of the three speaker plugs and
headphone jack) or 5.1 plus two extra recording jacks. on these systems, the center/subwoofer plug is
orange, the rear speakers are black, and the side speakers are gray. note that this is an analog signal,
and as such will have noise introduced to it by the power supply and electronics in the computer. it’s
generally a pretty good sound, but the static added by bad components can really make the sound
quality poor.
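the jack colors amount to a lookup table. here’s a little python sketch of the standard pc99-style assignments (the scheme most motherboards follow, though individual boards can and do deviate – check the manual):

```python
# analog audio jack colors per the usual pc99-style scheme
JACK_COLORS = {
    "green":  "front speakers / headphone out",
    "pink":   "microphone in",
    "blue":   "line in",
    "orange": "center / subwoofer",
    "black":  "rear speakers",
    "gray":   "side speakers",
}

def jack_purpose(color):
    """best-guess purpose of an analog audio jack by its color."""
    return JACK_COLORS.get(color.lower(), "unknown – check your motherboard manual")

print(jack_purpose("green"))   # front speakers / headphone out
print(jack_purpose("yellow"))  # unknown – check your motherboard manual
```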
         s/pdif (sony/philips digital interconnect format) allows for the carrying of compressed digital
audio from a source to a destination – usually from a dvd player or pc to a home theater receiver that
supports dolby digital or surround sound. however, a pc can send uncompressed sound to a receiver
capable of DTS – making it an ideal format for pc users. this sound is not affected by computer noise,
only signal degradation due to the quality of the cable. there are two major connection types used with
this: optical and coaxial.
         optical audio cables are excellent for carrying expansive audio signals over short distances. the
toslink connector is capable of carrying 8-channel audio up to six meters to a dts-capable receiver. it is
(obviously) a digital signal and isn’t susceptible to radio interference like the coaxial variant of this cable.
         coaxial (high-fidelity) plugs are a digital s/pdif interface, not to be confused with the yellow
coaxial video from your sega genesis. in general, it’s best to use coax when you’re going more than six
meters or going around a tight bend, since the signal attenuation in toslink cables is really high beyond
those parameters. it is, however, affected by RF transmissions – so if you live near a transformer or
broadcasting station, it’s best to stay away from longer coaxial connections. a signal-boosted toslink is
generally your best bet in that situation.

e-sata and why it’s the best of everything for external storage
         e-sata (external serial ata) is the best option for storage on your computer. e-sata is the exact
same interface that’s inside of your computer being used for your hard drives, just brought outside the
system. it allows for transfer speeds up to 8x what usb 2.0 manages in practice, and it can take a slightly
longer cable than internal sata (two meters instead of one). if you can get e-sata, and don’t need to
worry about using it with someone else’s crappy computer, get it. it’s the best. most external enclosures
for hard drives can be bought with e-sata AND usb, which gives you the best of both worlds.
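the “8x” figure is rough throughput arithmetic, sketched in python – the ~35mb/s real-world usb 2.0 number below is a typical estimate, not a measurement:

```python
def sata2_payload_mb_s():
    """sata II: 3gbit/s line rate, 8b/10b encoding strips 20%, 8 bits per byte."""
    return 3000 * 8 / 10 / 8

USB2_REAL_MB_S = 35  # typical real-world usb 2.0 bulk throughput (estimate)

print(sata2_payload_mb_s())                            # 300.0 mb/s
print(round(sata2_payload_mb_s() / USB2_REAL_MB_S, 1)) # 8.6 – the "up to 8x" above
```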

3.2.9. onboard pinned expansion ports
        the motherboard has several headers for expanded audio, usb, firewire, etc built directly onto
the board itself for the front panel of the computer and for use with the case’s expansion slots. the most
common are a usb header for front usb, an i/o pinset for the power/reset/lights on the front panel, and
an audio header – ac97 or azalia (hd audio) – for front audio. you might also find firewire headers. it’s
rare to find any other creative pin-outs for front panel support. depending on the quality of your
motherboard, it might also come with a front panel display meant for use in a 5.25” bay (where your
cd-rom drive goes), and often the pin-out for that is located on the top right of the board as well.

3.2.10. form factors
          motherboards come in a wide variety of different sizes and forms, depending on a plethora of
criteria. because of space considerations, heating issues, features required, and chipset power
consumption, these different form factors are needed. there are some thirty form factors available (21
of which are x86 compatible), ranging from the size of a desk to the size of your cell phone. i’ll review
the most common ones here.

atx
         the most common form is atx, which is widely considered the standard for most computers. atx
measures a maximum of 12” x 9.6”, meaning that no atx board is ever larger than that. boards this size
fit in all cases sized ‘mid tower’ and up.

micro-atx
        micro-atx is smaller on the long side, measuring 9.6” x 9.6”. this is generally intended for budget
systems (it saves a lot of money, in the end), and usually requires much less power than a standard atx
board. the size difference allows for smaller cases as well.

extended atx
         extended atx is exactly what it sounds like: an extended edition of atx. measuring 12” by 13”,
this is generally the standard form factor for systems with more than one physical cpu (dual-cpu servers,
for example) or boards with 3 or 4 pci express x16 slots intended for tri-sli, quad-sli, or quadfire. these
are big motherboards, and generally require a full tower case to support them.

mini-itx
        measuring a mere 6.7” x 6.7” (or, slightly longer than a stick of ddr2 ram), these boards are
intended for extremely low power systems. they’re used mainly in small office systems, home theater
pcs, and other, more esoteric versions of computers (think toasters, radios, and humidors as cases). they
generally have integrated video and a built-in cpu.

proprietary and other strange formats you might encounter
         like i mentioned, there are a wide variety of motherboards you might run across. dell
computers, particularly, are known for using boards based on the btx format, which is a mostly dead
motherboard design (cancelled by intel officially as of 2006), originally created to alleviate overheating
issues. it was a new layout, putting important components in spots that would be better accessed by
normal case airflow. also, chipsets are moved around to be closer to what they control. dell doesn’t
always stick exactly to btx standards, and sometimes rewires power supply access and plugs. for this
reason, you always need to check your pin-out before plugging a new psu into an older dell system.
         there isn’t much else that you’ll run across that’s worth the time of writing it out.
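the sizes above reduce to a simple fit check. here’s a minimal python sketch using the dimensions listed in this section – the function name and the example case dimensions are my own illustration, not from any official spec:

```python
# hypothetical sketch: checking whether a motherboard form factor fits a case.
# board sizes (inches) are the maximums from the section above; the case
# dimensions in the examples are illustrative assumptions.

FORM_FACTORS = {
    "atx": (12.0, 9.6),
    "micro-atx": (9.6, 9.6),
    "extended-atx": (12.0, 13.0),
    "mini-itx": (6.7, 6.7),
}

def fits(board: str, case_length: float, case_width: float) -> bool:
    """Return True if the board's maximum dimensions fit the case's mounting area."""
    length, width = FORM_FACTORS[board]
    return length <= case_length and width <= case_width

# a mid-tower sized for full atx also takes every smaller standard:
print(fits("micro-atx", 12.0, 9.6))     # True
print(fits("extended-atx", 12.0, 9.6))  # False – needs a full tower
```

this is also why a full tower is the safe buy: its mounting area covers every standard size above.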

3.2.11. power compatibility and associated ports.
        different motherboards require different power plugs to operate. since almost no one out there is dealing with anything older than a p4-era computer, i’ll stick to p4 and later.

20-pin vs. 24-pin
           older motherboards used a 20-pin main power plug to supply power to all the essentials in the
system. hardware secrets has pictures with what everything is inside that plug if you care. more
recently, board makers were finding that they needed more power to run things – particularly graphics
cards with no power plugs, hot chipsets, and massive amounts of fans plugged into the system – so they
went to a 24-pin standard that allows an additional 3.3v, 5v, and 12v circuit. this is backwards
compatible! so, a 24-pin psu CAN power a 20-pin board. it’s generally called 20+4 pin on the psu
readout, and the additional 4 pins are added in on the side. you CANNOT use the 4-pin cpu power cable to fulfill this – you’ll turn your motherboard into slag. in order for a psu to run a 24-pin motherboard, you’ve got to have either a 24-pin psu or a 20+4-pin psu.

4-pin and 8-pin cpu connectors
         with p4 motherboards and after, manufacturers found that they physically couldn’t route enough power through the motherboard to support the really high-end processors that intel was putting out. so, the p4 connector,
more specifically known as the cpu power cable, came into existence. this square 4-pin plug usually
plugs in just to the left of the cpu (with barely enough clearance for the cpu fan, plug this in first before
the fan or you’ll be sorry). this cannot be replaced by anything else, as no other cable coming out of the
psu is capable of handling the current needed for the cpu – if you don’t have it, buy a new psu. this cable
is also known as an atx12v connector, although that name refers to the standard that created it, not the
name of the connector itself.
         8-pin cpu connectors are generally for servers, to handle the increased load of 6-core server cpus and the (often) multiple processors that server motherboards support. as with the p4 connector, the name comes from a standard: it’s also known as an eps12v connector. the only difference is the added two 12v rails (and
subsequent grounds) that give it the extra power. the first four plugs on the connector are the same
(allowing manufacturers to include an optional 4-pin add-on to convert a p4 to an eps connector).
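the compatibility rules in this section can be summed up in a short sketch. a minimal illustration, assuming the usual connector names (the functions and the ‘4+4-pin’ split-cable naming are my own shorthand, not from any psu spec sheet):

```python
# hypothetical sketch of the power-plug compatibility rules described above.

def can_power_main(psu_connector: str, board_socket: str) -> bool:
    """Main power: which psu main connector can feed which board socket."""
    rules = {
        ("24-pin", "24-pin"): True,
        ("24-pin", "20-pin"): True,    # backwards compatible
        ("20+4-pin", "24-pin"): True,  # clip the extra 4 pins on
        ("20+4-pin", "20-pin"): True,  # leave the extra 4 pins off
        ("20-pin", "20-pin"): True,
        ("20-pin", "24-pin"): False,   # missing the extra circuits
    }
    return rules[(psu_connector, board_socket)]

def can_power_cpu(psu_cable: str, board_socket: str) -> bool:
    """Cpu power: the first four pins of an eps12v socket match the atx12v plug,
    so a split 4+4 cable covers both; a plain 4-pin can't feed an 8-pin board."""
    if board_socket == "4-pin":
        return psu_cable in ("4-pin", "4+4-pin")
    if board_socket == "8-pin":
        return psu_cable in ("8-pin", "4+4-pin")
    return False

print(can_power_main("20-pin", "24-pin"))  # False – you need a 24 or 20+4 psu
```

note that the main-power check never accepts the cpu cable as a substitute – that’s the ‘slag’ scenario from above.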

3.3. major manufacturers.
       motherboards are manufactured according to their specification – amd boards can’t take intel
processors, and vice versa. here are some manufacturers in each major segment. i’ll start with desktop.

3.3.1. intel-based desktop mobo manufacturers.
         asus leads the list for manufacturers of intel-based mobos. i’ve always been happy with their
products. they offer quality components for a wide range of sockets and requirements, and their boards
are known for having excellent BIOS functionality. they sell stuff recertified, too, for half the cost of a
new board (and no warranty). and they sell probably the cheapest true sli-ready (as in, supports two x16
graphics cards at full speed) motherboard out there, at around 130$ a pop.
         intel (obviously) is another major producer. their boards are top-notch in quality, and they sell
probably the best mini-itx board out there (the atom 330, currently priced extremely competitively).
while their boards aren’t known for extreme overclocking, they ARE exceptional in the mid range. they
tend to cost a bit too much, though.
         msi is another quality mobo maker. i’ve never used their products, but supposedly their bundled
drivers and software are among the best out there. their platinum boards are excellent for single-gpu systems.
         i’d be remiss to not mention gigabyte: they make some of the best high-end motherboards available
based on a wide range of chipsets. with high quality goes high price, though, as gigabyte boards are
generally somewhat overpriced.
          in terms of raw performance, evga’s motherboards consistently outperform the competition.
their lifetime warranty (not covering overclocking damage, however) is still the best in the business, and
their boards are generally the best for sli and tri-sli. they don’t sell a board under 150$ that’s worth the money anymore (they sell a few, but they’re old), so they’re not feasible if your system is under 1k$ or if you don’t plan on doing a dual-gpu solution in the future. if you’re going multi-gpu, though, this is your manufacturer.

3.3.2. amd-based desktop mobo manufacturers.
         not surprisingly, there are far fewer mobo manufacturers for amd’s offerings. biostar sells the most on newegg, ranging from minimum-wage boards to a bank-breaking 160$ board (yes, i’m kidding). aimed squarely at the counterstrike crowd, these boards are solid quality-wise but lack high-end features like a BIOS that works and more than your basic i/o options.
         asus comes in here again as one of the major manufacturers of amd-based motherboards. if you
paid more than 110$ for your amd processor, buy an asus board. they sell the top-performing products
in this category, sport a fully functional (and overall excellent) BIOS, and have a wide variety of
customization features that make for a great board overall.
         msi and gigabyte also show up; both are excellent here in the same ways that they’re excellent offerings for intel.

3.3.3. server-based mobo manufacturers.
         supermicro sells more server motherboards in six months than any other two manufacturers
combined in a year. they offer solid quality, a wide variety of options in both intel and amd, and have by
far the best stability of any server motherboard available. buy these boards.
         tyan comes in second for product sales. these are good boards, but they get their better power
requirement ratings from having highly specialized boards – in general you’ve gotta be really careful to
get a cpu that works with the specific board you’ve got. i’d still suggest supermicro, unless you’re
screaming for power management (like in a multi-board supercomputer, but you wouldn’t be looking
here for info if you were building one of those).
         asus and intel pop up here again – they’re good products in the desktop range, but i’d stick with
a different manufacturer for servers. go with tyan or supermicro instead.

3.3.4. all-in-one mobo manufacturers (itx and installation motherboards)
          as mentioned before, intel’s atom 330 itx offering is probably the most powerful of the mini-itx
boards, but that’s not always what you need. jetway sells more models than all of the other itx companies combined, thanks to flexible boards that support a wide variety of customization and cpus. the
only other company worth mentioning is ecs, whose amd-based offerings are cheap and effective in an
htpc format.

3.4. server vs. desktop motherboards
       some people out there might consider going with a server board instead of a desktop motherboard. the following sections should help you understand why i think you’re an idiot.

3.4.1. multiple-socket motherboards.
        you don’t need a server board (or a skulltrail, for that matter) unless you’re completely sure
that the programs you’ll be running can utilize eight cores between two cpus. most programs don’t even
recognize quad cores on a single die, let alone all eight on a dual-cpu board. unless you’re working with
extremely high-end sampling or something like photoshop cs4 (i think it’s optimized for multi-cpu? i’m
not sure), it’s pointless. if you DO know that it works, do it! it’ll be the most powerful computer you’ve
ever owned. but 99.9 percent of users don’t need one.

3.4.2. things to consider.
         assuming you are going to get a multiple-socket board, understand that it’s going to cost a LOT.
there’s no point in installing two crappy cpus in a nice board, so you’ve gotta spend the extra cash on
two nice cpus. server cpus generally cost about twice that of the equivalent desktop cpu, so you’re
spending 500-600$ a pop on a relatively good cpu (but not great). then you buy another, and you spend
300$ on a decent server board – and you haven’t even bought your graphics cards, the most important
thing in a gaming computer! not to mention that server ram costs like 3x as much as normal desktop ram, and
ddr3 is like twice that. don’t buy a server board.


4.1. what is it?
         yet again, the graphics card is exactly what it sounds like – it processes, controls, and outputs all
graphics on your computer. not every computer has a ‘discrete’ graphics card (meaning that it’s a
physically separate item from the rest of the system); some have ‘integrated’ graphics built into the
motherboard that share system resources with the rest of the computer. in general, these are far less
powerful than a discrete card.
         your general graphics card is composed of a few basic things. the card is built off of a PCB
(circuit board), and includes a GPU (graphics processing unit, like a cpu but with different specs and
architecture), a cooling unit of some kind (active, like a fan or watercooling block, or passive, like a plain heatsink), and an output interface (generally a vga or dvi plug).

4.2. terminology.
         here, i'll post a variety of terms relating specifically to the graphics card. if you don’t find
something here that you're looking for, look in the motherboard section (section 3) for information. or,
just use the find function – probably much faster.

4.2.1. graphics memory.
          graphics cards have a lot of similarities to computers. they have a motherboard (the printed circuit board that they’re built on), they have a cpu (their own personal gpu), and they have RAM – albeit in a slightly different form. graphics memory, or gddr, is a derivative of normal ram. all video cards have (or need) video memory to allow the graphics card to process textures, shadows, and the like. if you have two cards that are exactly the same but one has more graphics memory, that card will perform better than the other card. in general, get as much as you can afford.

ddr, ddr2, gddr3, gddr4, gddr5
         these refer to a variety of different standards for how the memory functions. the first two, ddr and ddr2, are built exactly the same as computer memory (just in different sizes and form factors). the last three are based on new specs designed around the different needs of graphics card memory.
         ddr and ddr2 are, as mentioned, formulated similarly to computer ram. most older cards are
based on this. you will often see the terms gddr and gddr2 used instead; however, that’s a misnomer. gddr and gddr2 don’t exist as standards – card manufacturers just use computer RAM chips in them.
         gddr3, gddr4, and gddr5 are new standards designed for increased memory bandwidth and higher memory clock rates, reflecting most newer cards’ need for more memory, faster. both nvidia and ati use gddr3, but only ati uses gddr4 and gddr5 (they partition ram differently for multi-gpu cards, and therefore need more bandwidth). they’re based pretty closely on ddr2, but each has different power and heat dispersal requirements to allow for higher performance.
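to see how memory type and clock turn into actual bandwidth, here’s a back-of-the-envelope sketch. it assumes the quoted clock is the effective (data) rate, which is how card specs are usually advertised; the numbers in the examples are illustrative, and real-world throughput is lower:

```python
# rough sketch: peak memory bandwidth from effective clock and bus width.
# bytes moved per second = transfers per second * bus width in bytes.

def peak_bandwidth_gbps(effective_clock_mhz: float, bus_width_bits: int) -> float:
    bytes_per_transfer = bus_width_bits / 8
    return effective_clock_mhz * 1e6 * bytes_per_transfer / 1e9

# same 256-bit bus, different memory types at typical effective clocks:
print(round(peak_bandwidth_gbps(1000, 256), 1))  # ddr2-class:  32.0 GB/s
print(round(peak_bandwidth_gbps(1800, 256), 1))  # gddr3-class: 57.6 GB/s
```

this is also why the bus width in the next section matters as much as the memory type: halving the bus halves the bandwidth at the same clock.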
         in general, ram that has the same clock rate but is of a different type has different bandwidth capabilities, with the newer type having more bandwidth.

memory interface/bus
         the interface is exactly that – the interface between the memory chips and the pcb that the video card is built on. the widest that you’ll find nowadays is a 512-bit interface, on the nvidia gtx series of cards. in general, a 256-bit interface is pretty good (and about as good as you’ll see on a card that doesn’t cost 400$ or more). it’s like memory – if you can afford a wider interface, get it! don’t save ten bucks and get a 64- or 128-bit bus, since it’s just going to failtrain your card in the future.

memory size, and why you should never buy less than 512mb for games/256mb for office use
         as i mentioned earlier, cards ship with a different variety of onboard memory. you should buy as
much as you can afford, kind of like ram and hard drive space. there’s usually a difference of 20$
between a card and the same card with twice as much memory – it’s worth the cost!
         if you game nowadays, you need at least 512mb to have an enjoyable experience. anything less is shortchanging yourself. larger screen resolutions (above 1280x1024),
larger worlds to render, advanced textures, and advanced fog and lighting effects all stress your graphics
card’s memory constantly, and if there’s not enough memory there, you simply won’t be able to play
the game.
         with an office computer, if you’re going to buy a graphics card, get one with at least 256mb of
video memory (ESPECIALLY FOR VISTA). anything less is kind of pointless, since the card will do almost
nothing compared to onboard video using your system’s resources.
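to make the resolution-versus-memory point concrete, here’s a rough sketch of how much memory just the frame buffers eat at a given resolution. the buffer count of 3 (front, back, depth) is a deliberate simplification of mine – real games pile textures and shadow maps on top of this:

```python
# rough sketch: memory consumed by the bare frame buffers at a resolution,
# assuming 32-bit (4-byte) pixels. textures come on top of this.

def framebuffer_mb(width: int, height: int,
                   buffers: int = 3, bytes_per_pixel: int = 4) -> float:
    return width * height * bytes_per_pixel * buffers / (1024 * 1024)

print(framebuffer_mb(1280, 1024))  # 15.0 mb just for the buffers
print(framebuffer_mb(2560, 1600))  # 46.875 mb before any textures
```

the buffers themselves are small – it’s the textures for those ‘larger worlds’ that blow past 512mb, but the scaling with resolution works the same way.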

4.2.2. clocks
         as with cpus, graphics cards use a variety of clocks to measure and compare system speeds. the
three main clocks used are below. these are REALLY GENERAL, because you really don’t need to know
what they do to buy a card for a computer – you just get the one with the highest numbers on
everything (kind of like everything else, really). it does help to know the basics, though.

memory clocks
         as with normal ram, graphics memory is clocked to measure the speed that it functions at. ddr
can function between 166-950mhz, ddr2 from 533-1000mhz, gddr3 from 700-1800mhz, gddr4 from 1.6-
2.4ghz, and gddr5 from 3-3.8ghz. note that these numbers are different from the extremes of standard system ram (which hasn’t gone beyond ddr3 at this point). it boils down to this: the memory clock governs data transfers between the gpu and the card’s onboard memory.

core clocks
         as you’ll read below, all graphics cards have a graphics processing unit, similar to your system’s
cpu in basic theory. the core clock is similar to the clock in a cpu. you’ll rarely see these higher than
700mhz – older cards clock themselves as high as possible to simulate having newer technology in them,
but it’s the same as having a huge new engine in a beat-up old car. it’s still a beat up old car – even
though it can go pretty fast, it won’t be as fast as a new car with the same engine in it. core clocks don’t
matter as much as good memory in a card in terms of performance, but they DO matter.

shader clocks
         basically, your shader clock runs each individual processor within the gpu. it sets the speed of
arithmetic operations by the processors. nvidia 8-series and up utilize a unified shader processing unit to
handle 3d modeling/pixel rendering in a more flexible way than was available before. these stream
processors execute instructions and map textures at about 2x the core clock; the shader clock sets this speed.

4.2.3. GPUs
         as mentioned several times before, the gpu is basically the same idea as a cpu, except that the gpu is a whole bunch of cores working at once rather than just one or two. also, these cores specialize in floating-point calculations (graphics production rather than general-purpose work). most modern gpus use most of their internal transistors for 3d calculations, but recent advances in technology have allowed the gpu to handle non-graphics functions, like video decoding and varieties of the @home distributed computing systems.

nvidia-based chipsets and naming criteria
         nvidia and ati comprise almost 100% of the discrete video card market. i’m not going to even open the debate about which is better – suffice to say that the differences are extremely minimal. certain setups are better than others for certain things, and i’ll leave it at that.

          i’ll only be listing information on nvidia 6xxx series and up, since older cards are old, and that’s
all you need to know. never buy a card older than a 6 series, and if you need to use one you can google
it. the FX series came right before the 6 series, and nowadays those cards are pretty bad, in case you didn’t guess.

         nvidia names all of its cards according to a specific scheme. for the 6-9 series, the first number is the
general series, the second number indicates the performance level, and the third number indicates the
version of the card. for the second and third number, 00-45 indicates an entry-level (aka bad) card, 50-
70 indicates a mainstream card (60 is the most common), and 80-95 indicates a powerhouse (like, the
7950 is still a decent card). the fourth number isn’t ever used. so, an example would be a 7950 – 7
series, high performance card, second revision. particularly in the 8 and 9 series, nvidia started using
letters to further denote card performance. these letters are (in order of general computing power) LE,
GS, GT, GTS, GTO, GTX, GTX+, Ultra, and GX2. the only one that actually means something specific is
GX2, which indicates two processors on chip. an example would be the aforementioned 7950 GX2. other
examples are the geforce 6200, 7600 gs, 8800gt/gts/gtx/ultra, and 9800gtx+. wikipedia lists LE cards, but i can’t name one off the top of my head cause they’re generally really bad.
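the digit rules above are mechanical enough to sketch in a few lines. a toy decoder using the tier boundaries given in the text (the function name and tier labels are my own, and it only covers 4-digit 6-9 series names, not the letter suffixes):

```python
# toy sketch of the 6-9 series naming rules described above.

def decode_model(model: str) -> tuple[int, str]:
    """'7950' -> (7, 'powerhouse'). second and third digits set the tier."""
    series = int(model[0])
    level = int(model[1:3])  # second and third digits read together
    if level <= 45:
        tier = "entry-level"
    elif level <= 70:
        tier = "mainstream"
    else:
        tier = "powerhouse"
    return series, tier

print(decode_model("7950"))  # (7, 'powerhouse')
print(decode_model("8600"))  # (8, 'mainstream')
print(decode_model("6200"))  # (6, 'entry-level')
```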

         nvidia released the 6 series in april of 2004. at this point, ati was busy spanking nvidia on every
possible benchmark test, and nvidia wasn’t all that happy. nvidia developed several new technologies
with this series, including shader model 3 support and the ever-so-popular SLI configuration. they also
addressed the poor shader model 2 support in their FX cards. there were six forms of cards released in
this category: 6100 and 6150 (IGP [integrated graphics processors]), 6200, 6500, 6600, and 6800. it used
DDR2 ram.
        the 7 series was released in june of 2005. it was the last series to offer AGP support, and
featured major increases in graphics performance and SLI efficiency, as well as DX9c. it was available in
7100, 7200, 7300, 7500, 7600, 7650, 7800, 7900, 7950, and 7950 GX2 (dual-processor) in discrete cards
and 7000, 7200, 7300, 7400, 7600, 7700, 7800, 7900, and 7950 for mobile computing (geforce Go 7). it
was the first major nvidia series to offer more than one option at each level. in general, these cards used
DDR2 ram for the lower models and GDDR3 for the last few to be built.

         the 8 series was released in November of 2006. it was a major shift in GPU technology because it
took the previous shader design (separate pixel and vertex shaders) and amalgamated
them into one big general-use set of ‘stream processors’. it featured dual-link DVI, max resolutions of up
to 2560x1600, DX10 support (vista only), and support of shader model 4.0. it was released in 8300,
8400, 8500, 8600, and 8800 models in discrete cards, and 8400-8800 for mobile computing (geforce 8M
series). the 8800GT was probably the first card to be used widely in SLI because of the excellent price-to-
performance ratio. these cards scale at about 45% - meaning, you get about 45% better performance (as
opposed to 100%, which would be double the original performance) with one card added. there are two
major forms of 8800 cards in use today: the G80 version (the 8800GTS 320 and 640mb versions, and the original 8800GTX and Ultra), which was based on a much slower overall architecture that required a significant amount of power to run effectively, and the G92 version, which was used in the 8800GT and the 8800GTS 512mb cards available on the market. the G92 was
developed in conjunction with the 9-series cards (all 9-series cards are either G92 or G94 based). G92
brought better power consumption, lowered heat, better SLI performance, and fewer errors in the
manufacturing process. the reason for this increase in performance was that the G92 was nvidia’s first GPU built on 65nm technology. just like CPUs built on 45nm rather than 65nm, you get increased
performance for less power consumption. they all use GDDR3 ram.
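the scaling percentages quoted above are easy to turn into expected frame rates. a minimal sketch of the arithmetic – real scaling varies per game and falls off as you add cards, so the 45% default is just the figure from the text:

```python
# sketch of what a multi-gpu scaling percentage means in practice:
# each extra card adds that fraction of a single card's performance.

def effective_fps(single_card_fps: float, cards: int,
                  scaling: float = 0.45) -> float:
    return single_card_fps * (1 + (cards - 1) * scaling)

print(round(effective_fps(60, 1), 1))  # 60.0
print(round(effective_fps(60, 2), 1))  # 87.0 – 45% better, not double
```

so with the 8 series, a second card buys you roughly half a card’s worth of extra frames; the 9-series numbers later in this section (up to 96%) get much closer to a true doubling.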

         the 9 series was launched in February of 2008. as a whole, these cards are considered to be
significantly superior to the performance of the 8 series in general because of their native development
on the G92 and (later) G94 architecture. they feature a significantly lowered power requirement per
card (even when compared to 8-series cards on the G92), extremely powerful SLI performance (2
9600GT cards scale at up to 96%), and far less heat than their older brothers. nvidia released the 9400,
9500, 9600, and several forms of the 9800 in discrete cards, and 9100, 9200, 9300, 9500, 9600, 9650,
9700, 9800 in mobile computing (geforce 9M). the 9600 and 9800GTX are probably the most popular
cards in the discrete category. nvidia released their second dual-gpu card, the 9800 GX2, which, combined with traditional SLI technology, created the first true quad-SLI setup available on modern computers (the 7950 GX2 technically got there first, but bandwidth problems with the pci-e interface made it not really worth the cost compared to later solutions). they all use GDDR3 ram.

         nvidia’s current flagship gpu series, the 200 series, was launched in june of 2008 (marking the
shortest gap between series updates in nvidia’s history) with the gtx 280. since the 9-series obviously
ended the usefulness of the prior naming scheme, nvidia decided to rename the newest series with a
new naming guideline. in general, there are two differences – the fourth (useless) number is dropped,
and the first number started over at 2 for this series. the 300 series will have a first number of 3, etc.
nvidia has currently released only discrete cards in this series, with the 260, 260 core 216 (significantly different from the 260, so i’ll mention it separately), 280, 285, and 295. at CES 2009, nvidia announced a
mobile version of these cards, known as the G100M. i don’t have any other data on them, but
preliminary findings say that it’ll be up to 50% faster than the previous 9-series editions of these cards.
all 200 series cards use gddr3 ram.
        supposedly, nvidia’s launching the 300 series of cards in q3 or q4 of 2009. it’ll be built on a smaller manufacturing process, and it’ll use GDDR5 ram. that’s all i got currently.

ati-based chipsets and naming criteria
         i’ll be the first to admit i don’t know much about ATI’s cards – i’ve been an nvidia fanboy since a 7600gs went into the first system i ever built (that card, my aforementioned 3.46ghz celeron, and a gig of ddr2 533 ram got me all the way through gears of war and the witcher and some of the best music i’ve ever done…wow). so if you notice something that’s wrong here, point it out.

        ati has been making graphics cards since 1985, when they were making integrated graphics
cards for Commodore and IBM. they were acquired by AMD (the cpu maker) in 2006. in general, ati
doesn’t make its own cards. similar to nvidia, they license their cards for creation by other companies –
sapphire, msi, asus, and HIS are all companies that are known for their ati-based cards.

        the radeon r400 series (x700, x800 cards) was released in september of 2004. they featured
dx9b support, opengl 2 support, gddr3 ram, and a 130nm manufacturing tech. they were released in
X700, X800, and X850 variants. note that the X is part of the name and not a variable.

         the radeon r500 series (x1000) was released in october of 2005. these were the first ati cards to support dx9c, and the series was highly optimized for shader model 3. they were released in x1300,
x1550, x1600, x1650, x1800, x1900, and x1950 variants. this was the first major rebuild of the gpu
architecture since the r300 series. these cards used gddr3 for the lower-end cards and gddr4 for the
higher-end cards.

        the radeon r600 series (HD 2000, HD 3000) was released in mid 2007. the 2000 series supported
dx10 and shader model 4, and the 3000 series supported dx10.1 and sm 4.1. they were released in 2350,
2400, 2600, 2900, 3430, 3450, 3470, 3650, 3690, 3830, 3850, 3850 x2, 3870, and 3870 x2 (dual-gpu)
variants. as far as i know, the 3870 x2 was the first dual-gpu card that ati ever created. these cards used gddr3 for the lower-end cards and gddr4 for the higher-end cards.

       the radeon r700 series (HD 4000) was released in june of 2008. as before, they support dx10.1
and sm4.1, and have support up to gddr5. they were released in 4350, 4550, 4650, 4670, 4830, 4850,
4850 x2, 4870, and 4870 x2. it’s the current flagship gpu for ATI.

        there are a buttload of ati mobility cards out there – so many more than for nvidia that i just didn’t list them all.

matrox, via, intel, ageia, and legacy graphics processors
         as i said earlier, ati and nvidia own almost 100% of the discrete graphics card market. if you look at gpus in general, including integrated solutions, the numbers are much different – intel owns about 40% of the market, and ati and nvidia own about 55%, leaving the last five percent to legacy graphics cards and specialty cards.

         matrox specializes in cards for multi-monitor setups. the company is split up into three
departments: matrox video is for digital video editing solutions, matrox imaging is all about industrial video capture, and matrox graphics is for general cards. the product you’ll most likely find is the triplehead2go, which allows three displays to run off of a single card. they’re not really
cost effective anymore, though, since if you really need three monitors to run off of a single pci-e slot,
you can get a usb software-based video card that’ll allow for a third monitor to be used. it’s a cpu hog,
comparatively speaking, but it’s also a quarter of the price.

         via primarily makes chipsets for low-end and low-power computers. their graphics chipsets are
all integrated, as they don’t manufacture discrete cards. they’re known for having good performance for
the limited resources allocated to them.

         ageia is best known for inventing the PhysX physics modeling system. it used to be a dedicated card that you’d add to your system to handle the extra processing, and now the functionality is built into nvidia’s 8-series cards and later. you don’t need one, but if you come across one for cheap it can be a cool toy to play with if you’ve got a spare slot on your board. they were acquired by nvidia in february of 2008.

         intel’s current line of graphics processors are all integrated – they don’t make discrete cards,
similar to via. their current line is known as the GMA series.

stream processors
         stream processing is a limited form of parallel processing, similar to what goes on inside a multi-core processor (like a quad-core). for a long time, parallel processing was pretty much impossible to write
for because it was SO complex – you had to write in sets of instructions for every possible computation
that might be performed. that’s why quads still have such a limited mainstream usage, currently – it’s so
hard to write a lot of programs to utilize all four cores effectively. stream processing is a workaround to
make it easier to write for by restricting what computations (‘streams’) could be completed.
         that’s all well and good, but the reason i’m talking about it is because it’s a pretty major thing when it comes to picking out and comparing graphics cards. ati started using stream processing technology (instead of separate processors for shading and vertex work) first, and so in general their cards have more stream processors at a lower clock speed. nvidia didn’t bring in a unified shader architecture (aka, one type of processor to rule them all) until the 8-series.

pixel pipelines
         if you’re buying a modern card, this is a legacy spec you can ignore – unified shader architectures did away with fixed pixel pipelines.

CUDA, GPGPU, and all of that other stuff
         cuda (compute unified device architecture) is a parallel computing architecture developed by
Nvidia. cuda is the compute engine in Nvidia graphics processing units that is accessible to software
developers through industry standard programming languages.
         what does that mean? basically, cuda allows programmers to use the gpu to do stuff. why would
they want to do that? even though the gpu’s clock is maybe a sixth of the cpu’s clock speed, there are a zillion stream processors within it that can compute simple equations in a fraction of the time it would take a cpu. take the gtx 260: it’s got 192 stream processors – 48x the cores of a quad. if you need to do something that’s just a simple operation repeated over and over – like trying passwords on a wireless network, or trying to crack an encryption scheme on a computer’s files – you’d get WAY more performance from a gpu, because it spreads the work across all those extra cores rather than running fewer operations quickly like a faster quad-core would.
         other examples of things that use CUDA are physics technologies like PhysX or Bullet (the game
mirror’s edge is an example of that). it’s also used in stuff like computational biology and distributed
computing (like folding@home). interestingly enough, even though cuda came out after the 8-series of
cards, it works on all nvidia cards 8-series and on due to binary compatibility.
         gpgpu (general purpose computing on graphics processing units) is just the general term for
what cuda does. cuda’s specific to only certain nvidia cards. gpgpu functionality is the overarching
technology of which cuda is a part.
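to make the ‘simple operation across a zillion inputs’ idea concrete, here’s a conceptual cpu-side sketch in python – not actual cuda code. the password guesses and `check_password` helper are entirely made up; the stdlib process pool just stands in for the gpu’s parallel lanes:

```python
# conceptual sketch of the gpgpu workload shape: one simple, independent
# operation applied across a huge batch of inputs. multiprocessing here
# stands in for a gpu's many cores; real cuda code looks very different.

from multiprocessing import Pool

def check_password(candidate: str) -> bool:
    # stand-in for a cheap, independent operation (e.g. hashing one guess)
    return candidate == "hunter2"

if __name__ == "__main__":
    candidates = [f"guess{i}" for i in range(10_000)] + ["hunter2"]
    with Pool() as pool:
        results = pool.map(check_password, candidates)
    hits = [c for c, ok in zip(candidates, results) if ok]
    print(hits)  # ['hunter2']
```

because each guess is independent, the batch splits cleanly across however many cores you have – which is exactly the shape of problem where 192 slow stream processors beat 4 fast cpu cores.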

4.2.4. 3D api stuff
         an api is an application programming interface – a standardized way to code specific stuff for
something. so, a 3d api is a standardized way to code for 3d graphics. opengl and direct3d (part of directx) are the two main apis for 3d graphics.

opengl
         opengl is a standardized specification defining a cross-platform, cross-language api for writing applications that produce 2d and 3d graphics. it consists of about 250 functions that can produce extremely complex 3d graphics. opengl was developed by silicon graphics in 1992 and is widely used in CAD, virtual reality, scientific visualization, information visualization, and flight simulation. it is also used in video games, where it competes with direct3d. it’s managed by the khronos group.

directx info
         directx is a collection of various graphics-based apis for rendering video and game graphics on windows platforms; direct3d is the specific 3d api. microsoft developed it around the time that windows 95 shipped, and it was packaged with later releases of windows 95. 98 shipped with it, as has every windows OS created since. if you’re using xp, dx 9c is the top level of dx that you can get. with vista (because of the new display driver model created for it), 10 is the top. 10.1 is supported by longhorn (server 2008) and vista sp1, and windows 7 will supposedly ship with dx11. supposedly, 11 will support gpgpu stuff and handle multithreading with multi-core cpus much better than the current version does.

4.2.5. general graphics terms that pop up a lot
         here are some general terms that pop up a lot. if you think of something else that should be
here, let me know.

gpu coolers
         gpus, like cpus, run REALLY hot (HEAT=FIRE) and need to be cooled. you’ll find both passive
coolers, like a heatsink system, and active coolers, like a fan. passive cooling systems are really popular
for silent and home theater systems. you’ll find both single- and double-slot cooling solutions in the
active cooling sector – where a card takes up one or two expansion slots on the back of the computer.
it’s important to know if your card is a dualie or a single slot because a dual-slot card generally blocks
the pci-e x1 slot that’s immediately below most pci-e x16 slots. dualies are generally dualies because they’re super hot –
either because they’re really high-performance cards, or because they’re poorly designed (the 8800gts
320 comes to mind). when picking a case, keep your graphics cards in consideration – if you’re using a
dual-slot card, you’re likely going to want a side vent for cool air to come in.

dual-link dvi
         dual-link dvi allows you to power much larger displays by running a second set of data lines
through the same dvi connector, doubling the bandwidth. a single-link dvi can support up to 1920x1200;
dual-link can support 2560x1600 (the current maximum for rationally priced monitors).

maximum resolution and why it matters
          maximum resolution is exactly what it sounds like – the max resolution per monitor. this is
important for one major reason – if you’re using a huge main monitor and you want to have a second
monitor, you might not have a free plug, depending on whether the card can drive the max res off one
dvi plug or whether it needs two. just be aware. most max res ratings include a refresh rate, too – be
aware of that as well.

sli/crossfire/3-way sli/quadfire/quad sli
         i should note that multi-gpu systems don’t always bring an increase in performance –
sometimes they can actually degrade performance due to the application’s coding. just keep it in mind if
your particular game doesn’t go off the chain in FPS.
         SLI was introduced with the 6-series of nvidia’s cards. it’s a technology that allows you to hook
up two graphics cards to a system and increase the overall performance through the use of a little u-
shaped bridge that links the gpus of the cards. it’s possible to hook up multiple graphics cards to a
system without using this bridge, but you don’t get the extra performance – just the additional monitor
plugs for a multi-monitor system. the performance increase was generally around 30-45% for different
cards until the G92 architecture arrived, which allowed two mid-range 9600 cards to equal the
performance of two enthusiast 8800 cards with far lower power requirements. for the 9-series and
after, SLI scaling is approximately 92-96% for lower-level cards and 80-85% for upper-level cards.
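
if you want to see how those scaling percentages turn into an overall speedup, here’s a quick sketch (the formula is my own back-of-the-envelope math, not anything official from nvidia):

```python
def multi_gpu_speedup(num_gpus, scaling):
    # one full card, plus each extra card contributing `scaling` of a card's worth
    return 1 + (num_gpus - 1) * scaling

# a 9-series pair at ~92% scaling is close to double one card
print(round(multi_gpu_speedup(2, 0.92), 2))  # 1.92
# three cards at ~90% each lines up with the ~2.8x quoted for 3-way sli
print(round(multi_gpu_speedup(3, 0.90), 2))  # 2.8
```
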
         you must have two graphics cards of the same core number and type (aka 6800GT or
9800GTX+), but they can be of different manufacturers or core speeds. however, SLI will automatically
scale down the better card to the same level of the lower card. although nvidia claims that RAM levels
can differ as well, in reality it’s disabled for every driver after 100.xx. you also need either an nvidia-
based motherboard (nforce chipset) or a board using intel’s x58 chipset. SLI WILL NOT WORK ON OTHER
BOARDS unless you hack it (which isn’t really a good idea, all told). also, regardless of your SLI setup you
can only use the main card’s video ports. you need a special setup to get triple or quad monitors.
         3-way SLI is exactly what it sounds like. it’s about a 2.8x performance boost, and it only works on
g92- and g94-based enthusiast nvidia cards (8800GT to GTX 280, currently, not including the 9400-9600
series).
         quad sli is somewhat what it sounds like – it only involves two cards, but four gpus. as far as i
know, current versions of quad sli only support the 7950GX2 and the 9800GX2. you use two
cards, with one larger sli bridge, and you get beastly framerates. it should be noted that tri sli with GTX
280s gets higher performance, generally, due to the extremely large amounts of memory supported and
the more powerful GPUs. it also costs a lot more.
         crossfire (also known as crossfire x) was first available in september of 2005, and supports up to
four graphics cards (aka, quadfire). the earlier models needed all sorts of extra crap to make it work, but
since the x1950 and the advent of crossfire x, you just need one ribbon connector similar to (but NOT
the same as) the SLI bridge. quadfire is about a 3.2x performance boost over one card, and interestingly
enough for HD 3800 cards you can use completely different cards and it’ll work perfectly fine. an easier
way to get quadfire is to use two of ati’s X2 cards, like the 3870X2. this is a really cost-effective way to
get extreme performance without paying for three top-dollar nvidia cards.
          there are pros and cons for both setups. ati licensed their crossfire tech to intel, meaning that
almost all boards support crossfire, but only certain intel boards and nvidia-based systems support sli.
however, since most of the performance gains of multi-gpu setups come from driver profiles, sli
gets a tick on their side of the sheet because all programs can have customized sli profiles based on
what’s best for each game. ati reverts to an unchangeable lower-performance state if it doesn’t have a
profile. like i said earlier – i’m an nvidia fanboy, so i always choose nvidia over ati, but the recent price
cuts on extremely versatile ati-based gpus make me interested in both sides again.

power requirements for modern cards
         modern cards require a lot of power. i can only speak for nvidia’s cards in terms of specific
requirements, but it’s generally a good idea to get a decent power supply with a rail just for the graphics
card if at all possible. if you’ve got anything over a beginner card, you MUST have a rail just for the
graphics card, or your parents will disown you because your computer shorted out the power grid.
         if you’re using an entry-level card (x200-x400 with nvidia cards where x is 6-9, or anything with a
3 as the second number in a modern ati card), you want at least 300-350 usable watts (this is where
efficiency [6.2.6] comes in!) from your psu. you don’t need that specifically for the card, but with, say, a
core 2 duo e8400, that’s how big of a psu you want for the system.
         if you’re using a mid-range card (x500-x600 with nvidia cards where x is 6-[b]8[/b], or anything
where 5 is the second number in a modern ati card), you want at least 450-500 usable watts from your
psu. you must have a rail specifically for the card. note that i didn’t include the 9600GT cards in this
calculation. i’ve found that they pull a decent amount of power, roughly comparable to a 7800GT card.
the numbers don’t really say that, but it’s good to have a little bit of headroom with a card that can scale
as well as it does on higher-end stuff. again, 450-500 is the entire psu, not just for the card.
         if you’re using a big card (x800-x950 with nvidia cards where x is 6-9 OR a 200-series card, or
anything that’s above a 5 as the second number for ATI), you want at least 300 watts per GPU available
SPECIFICALLY for the card. i’d suggest not going below a 650w psu with the larger cards. an 8800GT or
9800GT can squeak by on a smaller one, but if you get a big card, you’ve probably got a lot of other
power concerns to keep in mind.

ramdac – what is it?
         basically, RAMDAC is a digital-to-analog converter that allows you to use an analog signal (aka,
VGA or component video) from your digitally-based video card. the advent of hdmi and dvi has
rendered it relatively useless, but it’s still included because so many people still use analog-based video
devices.

operating system information
         as mentioned earlier, different operating systems need certain levels of cards in order to
support directx, opengl, etc. xp doesn’t require a discrete card but functions much better with one. it
supports dx9c, no higher. vista supports 9, 9c, and 10, and needs a dx10-compatible graphics card. w7
will supposedly require a card that can support dx11. i don’t know anything about macs, so fill it in from
there.

refresh rate
         all monitors refresh at a steady rate, just like a tv. most LCD monitors refresh at 60hz (60 times a
second), and most CRT monitors are around 75hz or so. this is why a CRT is easier on the eyes than LCDs
are (higher refresh = less tired eyes). there are a few monitors that refresh at 120hz to allow them to
display in 3d, but this is a really new tech that doesn’t look that great all told. refresh rate doesn’t really
matter in gaming, honestly – 60hz is more than enough, you don’t need to pay for an LCD at 85hz or
something ridiculous like that.
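
to put refresh rate in perspective, here’s how long each frame actually stays on screen (trivial math, nothing vendor-specific):

```python
def frame_interval_ms(refresh_hz):
    # a panel refreshing at n hz redraws every 1000/n milliseconds
    return 1000.0 / refresh_hz

print(round(frame_interval_ms(60), 1))   # 16.7 -- a standard lcd
print(round(frame_interval_ms(120), 1))  # 8.3 -- the new 3d-capable panels
```
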
4.2.7. hdcp and why it matters
         High-bandwidth Digital Content Protection (HDCP) is a form of digital copy protection developed
by intel to prevent copying of digital audio and video content as it travels across DisplayPort, Digital
Visual Interface (DVI), High-Definition Multimedia Interface (HDMI), Gigabit Video Interface (GVIF), or
Unified Display Interface (UDI) connections, even if copying that information would be permitted by fair
use laws. the specification is proprietary, and implementing HDCP requires a license.
         what does that mean? it means that it’s really hard to copy information coming out of a hi-def
player, and it means that your system needs to decode the information when it receives it. which in turn
places a huge load on the processor for hi-def video, and is why most processors can’t really hack 1080p
video at 24fps. well, if it’s a processor thing, why did i stick it here?
         because, there are a lot of video cards out today (ati especially) that are able to decode hdcp ON
THE CARD, and output the video in straight-up video format with no bells or whistles attached,
significantly reducing the processor load. these cards are excellent for home theater pcs, for example.
note that this doesn’t assist in rendering the video – you’ve still gotta have a decent cpu to handle it. an
hdcp-capable card doesn’t work any faster than a non-hdcp-capable card, either. it’s just something that
helps only with hi-def video and audio coming from a source, like a blu-ray burner or something. only
really useful in those situations.

4.3. interfaces and how they relate to graphics cards
        the nature of graphics cards is that they plug into the motherboard, dumping video data directly
into the bloodstream of the computer. while i already talked about these interfaces in depth in the
motherboard section (3.2.6), i wanted to specifically talk about them in terms of video cards that are
supported, and information regarding them.

4.3.1. agp
          agp’s old. don’t buy a card for a computer that requires this unless you really desperately need
it. agp is (in general) really expensive for the performance that you get out of it. the best agp card is the
radeon 3850 (they made one?), but it costs a lot, and it generally outpaces the single-core cpu you’d
have to use on an agp-only platform – the cpu becomes the bottleneck.

4.3.2. pci
         pci’s also a really old interface for graphics cards. the bus only has a bandwidth of like 133mb/s,
far less than pci express. the best gaming card is probably the ati x1550, but it’s really not any good.
there are a few versions of the hd 2400 pro out there that are decent for gaming and hi-def movies, but
they get really hot during usage.
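
that 133mb/s figure isn’t arbitrary – it falls straight out of the bus specs (classic pci is 32 bits wide at 33mhz; the exact clock is 33.33mhz, which is where the 133 comes from):

```python
# classic pci: 32-bit bus, one transfer per clock
bus_width_bytes = 32 // 8
clock_hz = 33_330_000  # 33.33 mhz
bandwidth_mb_s = bus_width_bytes * clock_hz / 1_000_000
print(round(bandwidth_mb_s))  # 133
```
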

4.3.3. pci express x1
          as far as i know, the x1550 is the only card available for the x1 slot. don’t use it if you don’t have
to – it’s even more extremely priced than the pci or agp cards.

4.3.4. pci express x16
         x16 was outmoded during the lifetime of nvidia’s 8-series. you can still buy a few low-end cards
– like the 8600, 9400, and 2000-series cards from nvidia and ati – but there’s a curiosity that i know of,
and that’s the fact that there’s a 3870x2 (yes, two processors, 1gb of ram) that’s available for x16. go
figure. however, it doesn’t make any sense to seek out a card that’s only x16, since pci express 2.0 cards
are backwards compatible anyway.

4.3.5. pci express 2.0 x16
         as i said before, all 2.0 cards and motherboards are backwards compatible, which means that all
cards and motherboards designed for 2.0 can be used in a v1.1 system. [b]if it fits, it works.[/b] this is
the current standard, so i’m not going to list a top card. there are a few exceptions, but they’re
motherboard issues, not card issues.

4.3.6. usb (external) video cards
         i should clearly state that these are not a replacement for video cards in any way – they’re
software-accelerated, not hardware-accelerated. as a whole, usb video cards and converters are mainly
for multi-monitor displays where you need more than two screens.

       get the most expensive graphics card possible, regardless of your system, and then run over it
with your car. that’ll do it, every time.

4.5. adapters, gender changers, and what you should expect with your new
video card
        your new card (assuming you don’t buy a cut-rate version, a recertified card, or an open box
setup) should come with a variety of things. since most cards require external power, it should come
with either pcie6-to-2 molex (pci express power plug to two molex plugs) or pcie8-to-2 molex (pci
express 8 power plug to two molex plugs). it should come with enough converters to give you two vga
ports (so if you’ve got a card with a vga and a dvi port on it, you’ll get one converter). you should have a
component plug if your card has a built-in s-video port, or an hdmi-to-dvi+audio plug if that’s what
you’ve got. it might come with a dvi-to-hdmi plug with higher-level cards.

4.6. drivers, drivers, drivers.
         all graphics cards, like all hardware, require drivers to tell your operating system how to use the
hardware. graphics cards are notable for having issues with their drivers, and unlike a motherboard
driver (which updates once every six months or so usually) drivers for modern cards update every few
weeks. while you don’t necessarily need the newest driver all the time, generally new drivers support
new games with increases in performance. also, if you stick a new card in your case that requires an
entirely separate set of drivers (like i did when i went from a 7600gs to a 9600gt), make sure you
uninstall your older driver before you stick a new one on there! most issues with graphics cards are
related to the drivers.

4.6.1. well-known third-party driver releases
         omega and xtreme-g are the two best-known third-party drivers for graphics cards, and you can also
get beta versions of forceware (nvidia’s official drivers). in general, i’ve always stuck with official drivers, but
it’s up to whatever you want to do. third-party drivers are useful when you have an older card and want
support for widescreen or something like that, or if you need a custom resolution as a workaround or
something. i’ve never really gotten into it.

SOUND CARDS

5.1. what is it?
        the sound card is exactly what it sounds like – it’s a discrete card that processes the audio
coming into and out of your computer. almost every computer motherboard has integrated audio of
decent quality, but a sound card allows for increased sound quality, a variety of inputs and outputs, and
higher sample rates. it also takes a load off of the cpu, allowing it to focus on getting you the most FPS.
        the main reason most music geeks get one of these is because it allows them to play their midi
controllers directly into the host program and reduce latency to around 3ms, which is generally
unnoticeable. anything more than 5ms becomes noticeable by most people who’d be into that.
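
that ~3ms number comes from the audio buffer size. here’s the arithmetic (the 128-sample buffer is just an example figure, not something off any particular card’s spec sheet):

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    # the card has to fill one buffer before the host program hears anything
    return buffer_samples / sample_rate_hz * 1000

print(round(buffer_latency_ms(128, 44100), 1))  # 2.9 -- under the ~3ms mark
print(round(buffer_latency_ms(256, 44100), 1))  # 5.8 -- past what most people notice
```
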

5.2. terminology
         here, i'll post a variety of terms relating specifically to the sound card. if you don’t find
something here that you're looking for, look in the motherboard section (section 3) for information. or,
just use the find function, probably much faster.

5.2.1. channels
         channels refers to the number of simultaneous channels of sound that the card is capable of
producing. this sounds complex until you realize how it works. for example, most motherboards and
sound cards are 6- or 8-channel sound – which is the equivalent of 5.1 or 7.1 audio. if you don’t know
what that means, generally the first number is the number of normal speakers (5.1 uses a center
speaker, front left and right, and back left and right), and the second is the subwoofer channel.
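
the channel-count-to-speaker-setup conversion is dead simple – knock one channel off for the sub:

```python
def speaker_layout(channels):
    # the last channel is the subwoofer (the ".1"); the rest are normal speakers
    return f"{channels - 1}.1"

print(speaker_layout(6))  # 5.1
print(speaker_layout(8))  # 7.1
```
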

5.2.2. sample rate
         sample rate defines the number of samples per second taken. the standard is 44100hz or
48000hz, but you’ll find cards up to 96000hz. the higher, the more accurate the sound. 44100 sounds
pretty darn good, though, so don’t think you need some extreme sample rate just for decent gaming
sound. if you’re a music maker, then think about it. 96khz waves sound [i]really[/i] good.
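
why is 44100hz the standard? a sampled signal can only capture frequencies up to half the sample rate (the nyquist limit), and human hearing tops out around 20khz:

```python
def max_capturable_hz(sample_rate_hz):
    # nyquist: anything above half the sample rate can't be represented
    return sample_rate_hz / 2

print(max_capturable_hz(44100))  # 22050.0 -- just past the ~20khz limit of human hearing
print(max_capturable_hz(96000))  # 48000.0
```
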

5.2.3. digital audio quality
         digital audio quality (also known as audio bit depth) is a measurement of the number of bits
recorded for each sample (as mentioned above). it’s generally seen in 16-bit, 24-bit, and 32-bit. higher
bit depths mean more accurate samples. you don’t need more than 24-bit audio for gaming (and it’s not that
big of a difference from 16-bit), but for music making you want 24-bit if at all possible. it’s not a big deal
either way, though.
         a good description of this is that CDs are recorded at 16-bit and DVDs are generally 24-bit.
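
the usual rule of thumb (an approximation – roughly 6.02db of dynamic range per bit) shows what the extra bits buy you:

```python
def dynamic_range_db(bits):
    # each extra bit of depth adds roughly 6.02 db of dynamic range
    return round(6.02 * bits, 1)

print(dynamic_range_db(16))  # 96.3 -- cd audio
print(dynamic_range_db(24))  # 144.5 -- dvd / studio masters
```
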

5.2.4. chipsets
         there’s a variety of audio chipsets used in integrated audio on a motherboard. realtek makes
some of the best ones, in my opinion. that’s really all that you need to know about it.

5.2.5. SNR (signal-to-noise ratio)
        all audio signals are degraded to some degree by the power source powering the signal. american
power is at a rate of 60hz, so that’s where the buzz in the PA system comes from at your local church or
school. this sound corrupts the audio signal that’s being broadcast over the audio card. it’s where we get
the sound of silence from – that sound that you get when you’ve got no audio broadcasting from your
computer but you’re wearing headphones and listening to what comes out.
        it basically comes out to mean that the higher the SNR, the better the audio quality. most good
cards have an snr around 96dba or so.
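
for the curious, snr in decibels is just a log ratio of signal to noise (the standard formula for amplitude-type signals; the noise figure below is made up to illustrate):

```python
import math

def snr_db(signal_rms, noise_rms):
    # 20 * log10(signal / noise) -- decibels for amplitude ratios
    return 20 * math.log10(signal_rms / noise_rms)

# a ~96db card has noise on the order of 1/63000th of the full signal
print(round(snr_db(1.0, 1 / 63000), 1))  # 96.0
```
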

5.2.6. ports
        ports refer to what plugs into the card. 5.4 details this.

5.2.7. interfaces
         interface refers to where the card plugs into the motherboard. as you’d expect, a direct plug is
better than an external solution.

pci/pci express (internal cards)
         most sound cards plug into a pci or pci express x1 slot. you don’t really need the pci-e plug,
honestly – a pci plug is more than enough for what most people need. it’s very, very rare to find a sound
card that needs the extra bandwidth of pci-e – i can only think of two or three, and they’re specialist
cards for inputting multiple channels of recorded audio at high bitrates.

usb/firewire (external cards)
         there are a variety of external sound cards that plug in via usb 2.0 or firewire plugs. this adds
quite a bit of versatility, since the plugs are then wherever you put the card, and is nice because you can
bring it around with you (like for a laptop). there’s far less bandwidth, though, which makes it tough to
do high-level audio processing over the usb/firewire cable.

5.3. major manufacturers
       in this section i’ll include the major manufacturers of sound cards. there’s only a few, and each
makes a few good cards and a buttload of crappy ones. read reviews before you buy!

5.3.1. creative labs
         creative is my personal favorite when it comes to sound cards. i’ve had several different cards
from them (notably their soundblaster live! 24-bit usb card) and i really liked it. they manufacture the
well-known soundblaster, x-fi, and fatal1ty (yes, that fatal1ty…i can’t believe they branded a sound card) lines.

5.3.2. m-audio
        m-audio makes audio interfaces for recording. i’m an m-audio fanboy, though, so i’m only
mentioning this one manufacturer. they make a variety of input-based cards (meaning that they’re more
for inputting audio, not processing it). they generally cost quite a bit, so plan accordingly.

5.3.3. turtle beach/voyetra
         turtle beach makes good budget-price cards. if you don’t need an uber soundcard, turtle beach
is a great place to look.

5.3.4. a few other names you might see
         asus and startech make some decent cards, but in general i prefer the above manufacturers.

5.4. port specifics
         as i mentioned above, ports reference the inputs and outputs on most cards. here’s some
specifics. most audio stuff was already discussed in the motherboard section.

5.4.1. 1/8th inch plugs in rainbow colors (analog stereo)
         as described above, sound cards can have up to four stereo jacks for 1/8th inch (headphone or
earbud jack) plugs for surround sound systems. they’ll carry 3 for 5.1-supported cards and 4 for 7.1-
supported cards (the added jack handles the side right and left speakers, which generally just get an
amalgam of the front and rear channels in an upscaling card). two of these generally double as a line-in
and a mic jack, just like on a normal computer. some lower-end systems only have two plugs – one
that’s for headphones or 2.0/2.1 systems and one that functions as a line-in and a mic combined.

5.4.2. s/pdif
         as mentioned before, s/pdif generally comes in optical and coaxial flavors. i haven’t seen optical
or coax input on a card in a while, but optical and coax out are relatively common.

5.4.3. recording interfaces (xlr, 1/4 inch, etc)
         m-audio’s recording interfaces generally include up to 16 plugs for inputting audio into the
system in a variety of ways. the most common are xlr (also called three-pin or microphone cable) and ¼
inch (guitar cable, looks like a large headphone jack).

5.4.4. proprietary
         you might find midi/legacy joystick plugs, aux in (like a cd player or something similar), or high-
grade optical/coax plugs as a special component. some of the really high-end stuff has a proprietary
headphone jack that looks similar to an old-school joystick plug but has a different pinout (i can’t find
the bloody thing anywhere, though). it allows for surround-sound headphones.

5.5. issues surrounding sound cards
        drivers for sound cards generally suck. they’re poorly written (almost on the level of Lexmark
printer drivers), include loads of bloatware, and will randomly stop working due to their crappy design.
in contrast, integrated audio rarely glitches in any way.
        sound cards are also inexplicably tied to the OS that they were developed on. make sure your
card works with your specific operating system, down to the version/service pack.

5.5.1. are you sure you want a card?
         if you’re just a gamer, don’t get a card unless you’ve already spent a lot on everything else. it’s
just not worth it – integrated sound is great for basic stuff. if you’re not building a monster music
machine, you don’t need a special sound card either. it’s only if you’re an audiophile that it’s worth the money.

5.5.2. what to avoid
        read, read, read reviews! if you get a bad vibe about a card, move on. it’s deadly to get locked
onto a card, only to find out that it won’t work in your OS. if you’re not sure if you want it, don’t get it.
they’re generally a waste of money.

POWER SUPPLIES

6.1. what is it?
        this is the largest section in my entire tutorial because this is probably the most misunderstood
component of your computer. power supplies take power from the socket and distribute it to the
components of your computer. their capabilities are measured in watts. they generally convert the 120v
AC power in your socket to 12v, 5v, and 3.3v DC power that most components run on.
        a lot of people just think that it should be whatever’s the cheapest unit. considering that the psu
fails more than any other two components combined on a computer, you should be spending a decent
amount of cash on your power supply in order to get competent power for your system.
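
one bit of arithmetic worth knowing up front: a psu’s label lists amps per voltage rail, and watts = volts x amps:

```python
def rail_watts(volts, amps):
    # watts = volts x amps, straight off the psu label
    return volts * amps

print(rail_watts(12, 18))  # 216 -- a 12v rail rated at 18a
print(rail_watts(5, 30))   # 150 -- a 5v rail rated at 30a
```
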

6.2. terminology
         here, i'll post a variety of terms relating specifically to the power supply. if you don’t find
something here that you're looking for, look in the motherboard section (section 3) for information. or,
just use the find function, probably much faster.

6.2.1. types of power supplies (form factors)
        power supplies are available in a variety of form factors, similar to motherboards. unlike
motherboards, most of these form factors are compatible only with a specific type of computer –
desktop, server, etc., due to the nature of power requirements for different boards and systems. make
sure you get one that’s specifically made for your system.

atx, atx 12v
         atx stands for advanced technology, extended (what a crappy name). atx is technically a
standard for power supplies that defines what each plug on the power supply can deliver in terms of
voltage and watts. it also defines the size and format of the pins. an example of these plugs are the
common molex or 24-pin motherboard power plug. the biggest difference between atx and atx12v is
that the 12v variant can handle significantly more power across the rails, to meet the increasing
demands of newer electronic devices.

eps, eps 12v
         eps (entry-level power supply specification) is an alternative to atx, and can power desktops or
servers. the biggest difference is that the cpu connector (which is 4 pins in atx) is an 8-pin connector to
handle the increased power requirements common with multi-core server cpus, and there’s a 4-pin tertiary
plug for xeons and opterons. it’s technically a more stable and powerful standard than atx, but
there are fewer good models available.

microatx
          generally, microatx power supplies are very similar to atx psus except they’re smaller – about
5”x3”x4” – to fit into the smaller cases that matx boards are often used in.

crossover power supplies and odd varieties (btx, sfx, tfx, at, etc)
         there’s all sorts of weird sizes of power supplies. at is the form factor that atx replaced (in 1995),
and is completely outdated. btx refers to the outmoded form factor that was supposed to replace atx
and offered increased airflow and a better motherboard layout – the plugs are similar but not cross-
compatible with atx. sfx and tfx are other alternate power setups that have different motherboard

6.2.2. watts
         power supplies are rated in the output that they are able to…output…in watts. if you don’t know
what watts are, go run over your computer with your car. or finish 11th grade. the trick with power
supplies is to figure out whether the listing is a real-world number or not. it’s pretty common for a
company to list their absolute peak power as opposed to how much the psu can deliver at all times. or, a
system will be tested in a coldbox (since, like graphics cards and processors, all power supplies can work
better in extreme cold than when they’re boiling hot). so, when you see “600w @ 20 degrees C”, take it with
a grain of salt. most power supplies function at about 45 degrees celsius, depending on the case.

why you (almost) NEVER need more than 600 watts
         heck, most computers don’t need more than 300-400, really. if you don’t have a mainstream or
high end graphics card, or a huge processor, you’ll be fine (probably) with not much. more isn’t
necessarily better – if you have a huge power supply that’s using 35% of its ability all the time, it’s not
good for your psu and can actually cause it to burn out earlier. use the
[url=]xtreme power supply calculator[/url] to calculate
exactly the peak wattage, and then add a bit on top to cover efficiency ratings.

why you NEVER need four digits of wattage, for great justice!
         unless you’re using a 3- or 4-gpu system (not card, gpu) with a big processor, you don’t need
1000 watts of power available through your psu. there, i said it!
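
the sizing recipe above (peak wattage from the calculator, plus headroom for efficiency) can be sketched like this – the 20% headroom and 80% efficiency figures are my example numbers, not a standard:

```python
def psu_watts_needed(peak_draw_watts, efficiency=0.80, headroom=1.20):
    # pad the calculated peak, then scale up so the real deliverable
    # output (rated watts x efficiency) still covers it
    return peak_draw_watts * headroom / efficiency

# a system peaking around 350w wants roughly a 525w unit
print(round(psu_watts_needed(350)))  # 525
```
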

6.2.3. pfc
         pfc stands for power factor correction. it’s an extremely complex equation that basically
describes how much active power is being used by the system (as opposed to reactive power, or
magnetic power). read [url=]this article[/url] for more info.
         you don’t need pfc unless you have a really crappy local power grid. it’s a marketing ploy. it
basically allows the manufacturer to sell it in Europe to work with European power laws. an interesting
side effect is that psus with active pfc can take power between 110-240v or so without having to flip a
switch, and as such can handle power grids that fluctuate and send out a slightly ‘off’ signal…which is
the only time it’s really worth it.

6.2.4. psu designs
         all power supply units – not necessarily just for computers – are designed around two forms:
linear and switching-mode.

linear
         linear is a direct transformer – it takes the 110v or 220v AC from the wall and converts it to
whatever direct voltage is required. this is a pretty good setup for mobile phones, but it’d be huge for a
computer psu. so, all internal computer power supplies are switching-mode.

switching-mode
         in a switching-mode psu, the input voltage has its frequency increased before going into the
transformer (going from 50-60hz to several khz is typical). with the input frequency
increased, the transformer and the electrolytic capacitors used to convert the power (aka, all the big
components in a linear psu) can be very small. keep in mind that “switching” is short for “high-
frequency switching”, and has nothing to do with whether the power supply has an on/off switch or not.

6.2.5. plugs
         here’s a list of the different plugs that power supplies use to distribute electrical energy
throughout the computer. all the pinout lists are available at
[url=]this link[/url].

molex
         long live molex. four pins, works for hard drives, dvd drives, big fans, etc. i love it.

floppy
         this is a smaller plug that’s the size of your thumbnail. still four pins, and works for floppy drives
and occasionally a fan controller or card reader (depending on the type).

mobo
         the main motherboard connector supplies power to all mobo-based components, including
video cards without a discrete power plug and almost all expansion cards.

legacy mobo connectors
        p3 and older systems used two plugs of 10 pins to power the motherboard. they look like a big
floppy connector.

20-pin
        older motherboards use a 20-pin connector to supply power.

24-pin
        newer motherboards use a 24-pin connector to supply power. this is backwards-compatible, so
you can theoretically just plug it into a 20-pin and have the extra four pins just hanging out. this is a
pretty big danger for shorts, though.

20+4 pin
        the most versatile psu-to-mobo connector is the 20+4 pin. this is a 20-pin connector with an
extra four pins to fit both standards.

p4 (atx12v)
         the p4 is a separate connector (different from the additional four pins on the mobo plug)
that’s used to provide additional power to the cpu specifically. it’s shaped like a cube, and usually plugs
in near the cpu. it’s used with all cpus from the p4 onward (and the amd equivalents).

sata power
          sata power is the 15-pin plug for sata hard drives and disk drives. it’s shaped like an L, and only
fits on in one direction. most drives come with a converter to change molex to sata power.

peg plugs (6 and 8 pin)
         PEG, or pci-e graphics, plugs deliver additional power to higher-powered graphics cards. most
middle-line cards use one 6-pin plug, and the most i’ve ever seen is two 8-pins needed for an 8800 ultra.
most graphics cards that require this come with 2molex-to-1plug converters, for either type of plug.

others you might see
         depending on the type of psu, there’s always a chance you might see something interesting like
a fan plug or something, but this is a pretty comprehensive list for most modern computers.

THE power plug
         it’s a normal 3-pin wall plug. yes, it connects with the same connection a monitor uses, so you
can use a monitor cord on a psu. yes, you should ALWAYS have it grounded if you don’t want your
computer to blow up when a surge comes down the line.

modular plugs and how they work
         some psus have modular capabilities – that is, you only plug in the runners (cords with multiple
connectors on them) that you need. this helps keep your case organized by not having a zillion extra plugs
hanging around. they generally just plug into the back of the power supply. they’re pretty easy to deal
with, just make sure they seat properly or your cords could wiggle out from vibration.

6.2.6. efficiency
          ahh, efficiency. the bane of buyers everywhere. simply put, efficiency measures how much of
the power a psu pulls from the wall actually reaches your components as dc power, and how much
gets burned off as heat. here’s an example: rosewill currently sells a power supply that’s 550 watts,
with a 68% efficiency rating. 550w might be exactly how much you need, and that’d be great! but to
actually deliver those 550 watts, the psu has to pull roughly 810 watts from the wall, with about 260
watts burned off as heat inside your case. see how this could be a problem, even if the added heat
didn’t make your system melt into slag? always buy at least 80% efficiency psus (note when they say
‘up to x%’…that means, not all the time!), and always make sure you’ve got a lot of headroom in your
system’s wattage numbers. 80 PLUS certified (and bronze and silver certification)
         80 PLUS is a certification that psu companies can buy (assuming they meet the requirements) to
prove how efficiently their power supplies work. officially, it means that the psu never drops below
80% efficiency at 20%, 50%, or 100% load. bronze is 82-85-82% (for each load level), silver is 85-
88-85%, and gold is 87-90-87%. being green and running your computer
         basically, buy high-efficiency power supplies, and unplug your computer system when not in use
(or shut off your power surge protector). psus draw power when they’re off, so you’ve gotta cut the line
in order to prevent that. it costs more to get a better-efficiency psu, but your electrical bill will thank you
down the line.
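the efficiency arithmetic is easy to sketch. a minimal example using the rosewill figures from this section (efficiency taken as dc out over ac in):

```python
def wall_draw_and_heat(output_watts, efficiency):
    """given the dc wattage your components need and a psu's
    efficiency (dc out / ac in), return the ac wall draw and the
    wattage burned off as heat inside the psu."""
    wall_watts = output_watts / efficiency
    heat_watts = wall_watts - output_watts
    return wall_watts, heat_watts

# the 550w psu at 68% efficiency pulls roughly 809w from the wall
# and wastes about 259w as heat; at 80% it's 687.5w and 137.5w
low = wall_draw_and_heat(550, 0.68)
high = wall_draw_and_heat(550, 0.80)
```

that wasted wattage is exactly what shows up on your electrical bill and as heat in your case.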

6.2.7. voltage ranges and compatibility with local power
         see pfc (6.2.3).

6.2.8. peak vs. load vs. general power wattage
        as i said earlier, most psus list their wattage in peak watts – as in, what’s the absolute max that
the psu can sustain for any length of time (as short as a few minutes, theoretically). if your psu is running
at peak all the time, not only will it die quickly but it’s probably really underpowered. get a bigger one.
        load describes when your system is running a game or something, and there’s a consistent and
pretty big load on the power supply, usually around 75-85% of peak. you want your system to use about
60-70% of the psu’s rated watts when under load. that gives a ton of headroom for your system. there
really isn’t a way to test this before you buy the thing and install it, so
it’s just a guesstimate.
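that sizing rule is easy to turn into numbers. a rough sketch (the 65% target and the example load wattage are my own illustrative picks, not a spec):

```python
def recommended_psu_watts(load_watts, target_utilization=0.65):
    """size a psu so that a full load sits around 60-70% of its
    rated output, leaving headroom above it."""
    return load_watts / target_utilization

# a system pulling ~400w under load wants a psu around 600-650w
size = round(recommended_psu_watts(400))  # ~615w
```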
          general power (or ‘idle’) is when the computer is just sitting there, with a screen saver or the
desktop up and nothing running. it draws only a small fraction of the load wattage.

6.2.9. rails/power distribution
         most people think of a psu as being like a power plug, where you can just draw huge amounts of
power from one plug without thinking. due to specification restrictions, you can’t have more than 240va
(240w in a dc circuit) on a single ‘output’, or wire. note that this isn’t through one plug specifically, but
one wire within a plug. since that’s a requirement, there’d have to be an overcurrent protection circuit
(see below) on every single wire…aka, really expensive. so, for cheaper setups, most companies just
group different wires into a theoretical blob called a ‘virtual rail’. these are generally low-end power
supplies that are low-wattage.
         multiple rails are almost always the way to go if you buy a system with a discrete graphics card.
the rails in this place refer to different circuits – one per 12v rail. in general, you want one rail per major
component (cpu or gpu…not just per graphics card!). something you need to take notice of, of course, is
that when you plug in components you need to keep major components on different rails (or else you’ll
overdraw the circuit almost immediately).
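the 240va cap translates directly into amps on a 12v rail, which is why you split big components up. a sketch (the component wattages here are made-up illustrative numbers):

```python
RAIL_LIMIT_AMPS = 240 / 12  # the 240va spec cap works out to 20a on a 12v rail

def rail_overloaded(watts_on_rail):
    """sum the 12v draw assigned to one rail and flag an overdraw."""
    amps = sum(watts_on_rail) / 12
    return amps > RAIL_LIMIT_AMPS

# a cpu alone (~125w, about 10.4a) is fine; piling a big gpu
# (~180w) onto the same rail pushes past 20a and trips ocp
cpu_only = rail_overloaded([125])           # False
cpu_plus_gpu = rail_overloaded([125, 180])  # True (305w / 12v ≈ 25.4a)
```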
         there are three major types of rails: 3.3v, 5v, and 12v. 3.3v
         this rail runs a few motherboard components, ram, agp cards, and some pci stuff. 5v and varieties thereof
         the 5v rail generally powers the motherboard and components on the motherboard.
         you’ll notice a -5V rail occasionally. this is another obsolete rail. the -5V was used for old-school
floppy controllers and some ISA bus cards. there's no need for the typical home user to worry about this.
         almost all psus have a +5V Standby or "Soft Power" (SB) signal that carries the same output level as
the +5V rail but is independent and always on, even when the computer is turned off. this rail does two
things – it lets the motherboard control the power supply while the system is off, enabling features like
wakeup from sleep mode and wake on LAN, and it’s what allows windows to turn your computer off
automatically on shutdown, as opposed to the old AT supplies where you had to bend over and push
the button. every standard ATX power supply on the market will
include this rail. 12v and varieties thereof
         initially, this was specifically for the processor, but as graphics cards got better and required
more power the 12v rail was expanded to supply more power to more components. it currently
powers the cpu and graphics cards, hard disk and optical drives, and a few fans. sounds like a lot, but
most psus have enormous 12v rails compared to the other rails – usually around 80-90% of the psu’s
capacity is on the 12v rails.
         you’ll notice a -12v rail occasionally. this rail is pretty much obsolete now and is only kept on to
provide backward compatibility with older hardware. some older types of serial port circuits required
both -12V and +12V voltages, but since almost no one except industrial users use serial ports anymore
you as a typical home user can pretty much disregard this rail. over-current protection and why it’s important to keep track of it
         as mentioned earlier, the requirements of UL 1950, CSA 950, EN 60950 and IEC 950
specifications state that you can’t draw more than 240w off of the same wire. OCP is how they prevent
each rail from overdrawing. it’s basically a circuit that’ll shut down the wire that overdraws past the
standards. so what? the reason it’s important is that several manufacturers tend to put their OCP cutoff
way above the level where damage would occur. you’ll need to read a manual to check on this, or order
from a really reputable company if you can’t find out.

6.2.10. voltage stability, noise, and ripple
         voltage stability refers to how stable the voltage coming from the system is. in other words, is it
actually 12v on the 12v rail, or is it 14v? there’s a tolerable level of instability due to voltage loads
coming on and off of the line: 5% on the positive rails, 10% on the negative rails. below’s a table of info
for tolerable stability ranges that i stole. thanks, HS!

                               Output Tolerance Minimum Maximum
                               +12 V ±5%        +11.40 V +12.60 V
                               + 5 V ±5%        +4.75 V +5.25 V
                               +5VSB ±5%        +4.75 V +5.25 V
                               +3.3 V ±5%       +3.14 V +3.47 V
                               -12 V ±10%       -13.2 V -10.8 V
                               -5 V ±10%        -5.50 V -4.50 V

         beyond that, power supplies should give you ‘clean’ power – the more noise on the line, the
more unstable the psu is. in a perfect world, power would create a perfectly straight line on an
oscilloscope. in reality, there’s slight oscillations in the power signal, called ripple. this cannot be more
than 120mv of noise on the 12v line and no more than 50mv on 5v and 3.3v lines. if it is more, the
power supply is unusable! HS uses an example of a pc power and cooling psu and a cut-rate psu, and the
cut-rate psu is about 2.5x more noise on the line than there could be. most websites don’t have the
equipment (specifically, an oscilloscope) to see the ripple, which is why usually website reviews of psus
are useless.
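the tolerance table above is simple to encode as a range check. a minimal sketch (the -5v row uses the ±10% negative-rail rule from the text):

```python
# (nominal volts, tolerance as a fraction), per the table above
TOLERANCES = {
    "+12V": (12.0, 0.05),
    "+5V": (5.0, 0.05),
    "+5VSB": (5.0, 0.05),
    "+3.3V": (3.3, 0.05),
    "-12V": (-12.0, 0.10),
    "-5V": (-5.0, 0.10),
}

def in_spec(rail, measured_volts):
    """true if a measured voltage sits inside the tolerance band."""
    nominal, tol = TOLERANCES[rail]
    low, high = sorted((nominal * (1 - tol), nominal * (1 + tol)))
    return low <= measured_volts <= high

in_spec("+12V", 12.4)  # True: within 11.40-12.60
in_spec("+12V", 14.0)  # False: way out of spec
```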

6.2.11. protection WITHIN the power supply
        there are a variety of power protections within the psu to protect the system and components
from burning out or overvolting. over-(and under-)voltage protection
         uh, pretty obvious. if the voltage coming from any output is above or below a trigger value, the
psu shuts down. this is different from overcurrent protection, which is for current being pulled through
the line (preventing it from burning the line). OVP is required, UVP isn’t. short-circuit protection
        if the power supply shorts out, it shuts down. this is required. over-current protection
        see above (6.2.9). over-power protection/overload protection
        this is a psu-wide protection, rather than per line. it’s basically OCP for the whole system – if
your computer pulls more than a certain amount, the psu shuts down. this protects your system from
exploding trying to put out enough wattage for your components =) it’s optional, unfortunately. over-temperature protection
        duh. overheat = shut down. optional…actually, not very common at all.

6.2.12. redundant power supplies
         exactly what it sounds like. a redundant psu can function in one of two ways. either it can be
sitting there, completely off, waiting to be flipped on to take over for the other psu for routine
maintenance, or it can be in a standby mode to take over if there is fluctuation in the supply of power
from the other psu. if there’s x amount of fluctuation – say, a tenth of a second – the standby psu would
kick up to full operational status and take over. it’s extremely rare to see this in a desktop, but common
for servers, particularly the first set. the second is quite expensive, all told.

6.2.13. input voltage, current, and frequencies
         power in various countries varies between 100v and 240v (officially 110 or 220). the power in
each country varies as well – in the US it’s 60hz, and in some countries it’s 50hz. most power supplies are
manual range (meaning you’ve gotta flip a switch in the back when you go to a different country) or
auto range (meaning that you don’t have a switch, and the psu has active pfc). based on the quality of
the power in that area, it could vary as far as 20v in either direction for either. if you live in an area with
poor quality of power, get active pfc on your system. active pfc can handle everything above, from 49-
61hz and from 100-240v, as well as being able to handle the wobble that often comes with poor power.

6.2.14. hold-up time
          if you use an uninterruptible power supply, this is VERY important. it’s the amount of time
(almost always in milliseconds) that the psu can keep supplying power if it loses input
(aka, if the power goes out or the breaker trips). it’s generally an indication of the quality of the
capacitors on a psu. your response time on your ups should be significantly less than the hold-up time.
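the comparison is just two numbers. a sketch (the 2x safety margin is my own rule of thumb, not a spec, and the example times are made up):

```python
def ups_covers_holdup(holdup_ms, ups_transfer_ms, margin=2.0):
    """the ups must switch to battery before the psu's capacitors
    run dry; demand the hold-up time beat the ups transfer time
    by a comfortable margin."""
    return holdup_ms >= ups_transfer_ms * margin

ups_covers_holdup(17, 8)  # True: a 17ms hold-up covers an 8ms transfer
ups_covers_holdup(10, 8)  # False: too tight for comfort
```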

6.2.15. power good signal
         the psu sends out a power good signal when the internal components have begun emitting
electricity at the proper voltages for the system. this sounds kind of obvious, but it’s pretty important –
if the system doesn’t get proper voltages from the psu, it’ll fry the components. often, if the system gets
an interruption in the power, the signal will drop for a short time (until the psu resets itself) and the
computer will appear to stay on but will reset – like during a brownout or something. the reason that
this is worth knowing about is because if there’s an issue with your power grid and you get a slight
brownout or surge, often your system will shut down due to the bad power it’s receiving. people just
automatically assume a psu is blown if this happens, but give it 15-30 seconds to reset itself and start
receiving a good signal from the power company again and you’ll be golden.

6.2.16. multiple graphics card certifications
         in order to make more money, ati and nvidia (and in turn the psu companies) ‘certify’ certain
psus for use with multi-gpu solutions. these companies pay for the designation, similar to how they pay
for efficiency rating. take it with a grain of salt, basically – just because it’s certified doesn’t mean that
that 500w psu can handle two 4870x2 cards. crossfire
        ati gives this one out. like i said, it’s not a big deal. if you’re buying a big psu for a multi-gpu
system, you’ll likely be getting one that’s certified, so it’s no biggie. SLI
        nvidia gives this one out. like i said, it’s no big deal.

6.2.17. MTBF (mean time between failures)
        this is also used with hard drives, but i figured i’d mention it here. psus are tested in big batches
intensely for months to determine how long they’ll last before they crap out. the MTBF is a rating of
how long they’ll last. most psus can last several years – as many as six or seven. if you get one with an
MTBF of 1 year, be wary that it’s built with crappy parts.
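to put an mtbf rating in perspective, it converts straight to years. a quick sketch (the duty-cycle knob is my own addition to model 'average use'):

```python
HOURS_PER_YEAR = 24 * 365

def mtbf_years(mtbf_hours, duty_cycle=1.0):
    """convert an mtbf rating in hours to years, at some fraction
    of always-on use."""
    return mtbf_hours / (HOURS_PER_YEAR * duty_cycle)

round(mtbf_years(100_000), 1)         # 11.4 years running 24/7
round(mtbf_years(100_000, 1 / 3), 1)  # 34.2 years at 8 hours a day
```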

6.3. major manufacturers
        here are some psu manufacturers. there’s a pretty big choice out there, so pick wisely.

6.3.1. rosewill and why you should probably buy it for a home office computer
         rosewill makes excellent low-wattage (350-500w) power supplies with two rails and one or two
good fans for about 40-50$. they’re pretty good, as a whole. i’m not a fan of the quality of their high-end
stuff, but their power supplies for the low-end computer are pretty nice.

6.3.2. Silverstone
         they make pretty good power supplies across the board. i like their really high-end systems, but
as a whole their best psus are in the range of 600-700 watts (like most manufacturers).

6.3.3. seasonic
         I love this company. good prices, pretty solid performance across the board. stick with their
larger psus, though, down in the cheap end quality starts to drop off a bit.

6.3.4. zalman and why you should probably buy it for a gaming computer
         in general, zalman’s psus are awesome. they sell a 650, 750, and 850w model currently on
newegg, and each of those can deliver almost a hundred watts over rating continually at max load. that
doesn’t mean that you should constantly stress them there, but they’re well built with excellent design
as a whole. buy them! they’re worth the extra ten or twenty dollars.

6.3.5. thermaltake
         overpriced, but they’re decent. only buy if you’re a fanboy.

6.3.6. antec
         one of the better choices in the cheapo range. while rosewill and apevia sell uber-cheap models
of low-end stuff, antec will generally give you a few more options with some nicer features down in the
low-wattage range.

6.3.7. ocz and why you probably don’t need it
         don’t get me wrong – ocz makes excellent power supplies. their 700w model is consistently one
of the top sellers on newegg and tiger direct. they sell quality power supplies, but in general they’re just
expensive. I would buy zalman all day before I buy their psus.
6.3.8. Athena power
         they have a lot of models available, but as a whole I avoid them. cheapo desktop units, decent
server psus.

6.3.9. apevia
         a budget brand. they’re good for low-end stuff, decent quality. i wouldn’t go much higher
than about 450 or 500w.

6.3.10. i-star computer company limited
         they’re big into server psus, but their desktop options are limited. i don’t know anyone who’s
bought one, personally.

6.3.11. pc power and cooling
        pc power and cooling’s 750w silent power supply (fanless) is probably as good as it gets for large
power supplies that are fanless. excellent quality, particularly for a music-based computer or something.
don’t use it in a case that’ll get super hot, but in a good case it’ll be fine.

6.4. external protection – UPS, inverters, surge suppressors, etc.
        your psu is only as good as the electricity the socket feeds it. a lot of people have surge
protectors, but few go as far as getting a UPS or an inverter to protect their system. here’s why it’s
worth thinking about.
6.4.1. UPS
         UPS stands for uninterruptable power supply. basically, this kicks in when your Power Good
signal from your psu goes out and picks up the load of the system for anywhere from 3 minutes to an
hour or two when you’ve got bad or no power coming from the socket. so, it’s protection for your data
against a brownout or blackout, however short it may be.

6.4.2. power inverters
        a power inverter basically is the electrical equivalent of a goat. it eats whatever the heck you
throw at it and spits out exactly the same thing no matter what – whatever electricity you need. i know
goats don’t spit electricity, but you know what i mean. you buy one based on what you’ll need (so, say,
220v) and it’ll ensure that if the power dips or goes higher that it’s even when it comes out.

6.4.3. surge suppressors
         uh, everyone knows what these are. what you DON’T know is that most of them suck,
completely, because they have almost no protection installed in them. look at reviews before you buy
one from wal-mart. seriously, there’s a big difference.

6.4.4. other things worth looking for
         basically everything i’ve said above is important if you’re worried about your supply of power to
the socket. if i think of anything, i’ll add it here.

6.5. things to watch out for
        there are a lot of little things to worry about with a psu. i’ll put them here as i think of them.
most of them have been posted above.
6.5.1. why cooling is important
         just like any other component, your psu won’t function as well if it’s subjected to extreme heat.
being at the top of most cases, it generally gets the worst of the heat convected directly up at it. if you
can do anything to increase cooling to the back of your computer, do it! most psus are tested in a cold-
box – meaning, the efficiency, wattage, and response times that are listed are actually only true if you
work outside during the winter. with no humidity. if you do, i’ll think you’re awesome. something that i
looked for specifically when i bought my case was a psu tray in the bottom of the case, set up so that the
fan for the psu sucked cool air directly from under my case into the case itself, keeping it the coolest of
the components in my case.

6.5.2. why 10% of your computer costs should be put into the psu
        your psu delivers power to the rest of your components. if you buy a crappy, cut-rate psu that’s
straining to handle the quad-sli you’ve got in your case, you’re going to be suffering. not only will it
operate with a bad efficiency, which will end up costing you a lot of money in the long run because of
power costs, it’ll fail sooner and expel a lot of dangerous heat into your case. buy a good one! it’s well
worth the cost on your budget.

6.6. i have no idea what i just read! compress and condense plz kthnx
         buy a psu with the following features: over 80% efficiency all the time, at least 1/5th more watts
than the eXtreme Power Calculator says you need, at least as many rails as you have high-power
components, with appropriate protections, hold-up time of at least 10ms, with a fan (unless you know
what you’re doing), at least 100k hours of MTBF under average use, from a manufacturer i mentioned
above.


HARD DRIVES

7.1. what is it?
         a hard drive, according to Wikipedia, is a non-volatile (meaning, doesn’t need to be powered on
to retain data) storage device that stores digitally encoded data on rapidly rotating platters with
magnetic surfaces. now, that’s not necessarily true any more, with the advent of flash and solid-state
drives that are just basically enormous thumb drives, but it’s pretty good.

7.2. terminology
         here, i'll post a variety of terms relating specifically to the hard disc drive. if you don’t find
something here that you're looking for, look in the motherboard section (section 3) for information. or,
just use the find function, probably much faster.

7.2.1. capacity, gigabytes, and why you always should buy up
         hard disc drives nowadays are measured in gigabytes. it’s pretty hard to find one smaller than
40 gigs (raptor had a 38.7g one for a while, and SSD drives can be smaller occasionally if you’re cheap),
and the largest drive available is the recently released 2tb drive.
        now, it’s worth mentioning that hard drives don’t measure size in gigabytes. they measure it in
how many billion bytes you get, because it’s different. see section one for an explanation. a good
example is my recently purchased 750g WD caviar sata drive. it ships with 750,154,276,864 bytes, which
equals approximately 698 gigs. that’s just how it is.
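that conversion is worth seeing as arithmetic. a one-liner sketch using the 750g caviar numbers from above:

```python
def shipped_bytes_to_os_gigs(bytes_shipped):
    """drive makers count in billions of bytes; the os counts in
    gibibytes (2**30 bytes), so the number it shows is smaller."""
    return bytes_shipped // 2**30

shipped_bytes_to_os_gigs(750_154_276_864)  # 698, the '~698 gigs' above
```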
        now, the reason you should always buy up is simple. you will ALWAYS need more space, and it’s
less expensive to spend an extra ten dollars and buy an extra 150 gigs rather than sit there later on and
complain about how you ran out of room. all those illegal movies get big after a while =)

7.2.2. interfaces
         hard drives connect to your computer through telepathy. no, really. computer telepathy. just
watch AI and you’ll know. IDE (ATA100 and ATA133)
          AT attachment with packet interface (also known as ATA or ATAPI) is the old standard for
hooking up a hard drive or optical drive in your computer. as you probably noticed, it started out as
being specifically for AT motherboards and has evolved over the years from WD’s Integrated Drive
Electronics (aka, IDE). once SATA came into effect in 2003, they changed the name to P-ATA, or parallel
ata, correctly identifying the technology. it’s the big, fat tape cable that you have in your computer. it’s
larger than a floppy cable, note, having 40 pins and a notch to prevent you from plugging it in wrong.
the maximum bandwidth is 133mb/s (or 100mb/s with ATA100). the max cable length is 18 inches (aka,
2-foot cables will dick up your interface if you don’t plug something into the short plug as well to boost
the signal on the way by).
          i use these with optical drives that don’t have HD or Blu in their name. a dvd will never transmit
more information than can be easily sent along an ATA cable, so as long as you foot the extra dollar for a
round ata cable (the flat ones are horrid cooling issues) they’re great. they’re pretty good for backup
drives since they’re SO backwards-compatible. i recently bought a 40 gig drive and plugged it into a p3
computer. when the world ends, there will be a computer around with an ATA interface still functioning.
just remember to use your credit card to straighten out the pins. i should note that due to the fact that
the original ATA specs only used a 28-bit addressing mode, you can’t get more than 137 gigs on a drive
plugged into any computer that uses under ATA6. also, some early BIOSes limit it to around 8.5 gigs, but
that’s getting really old. i should note that all ATA drives are essentially limited by the slowest unit on
the cable, so if you’ve got a hard drive and an optical drive on the same cable, the hard drive has to
finish its writing before the optical drive is available again. not a biggie, i rarely put more than one unit
on a cable at a time. SATA (I/1.5gb/s and II/3gb/s)
          the current best for internal storage, SATA is an L-shaped plug that has approximately 1 jillion
times the speed of ATA (closer to 2.4x, but it’s a big deal!). it’s hot-swappable (like a usb drive), and has
an external variant that’s about 8 times as fast as USB is. it was invented in 2003. it’s used with both
hard drives and optical drives. it’s a much smaller cable – about the width of a pinkie and the thickness
of two quarters – due to the use of only 7 pins over the 40 that IDE uses. it’s basically the best for your
desktop. don’t buy a hard drive with an ATA interface unless you have to.
          there are two varieties of sata – sata I and sata II. I has a throughput of about 150 mb/s (1.5
gigabits per second, about 30% faster or so than IDE) and II has a throughput of about 300 mb/s (about
240% or so). as with pci-e slots, if it fits, it works – although if you plug a faster drive into a slower mobo,
it’ll run at a slower pace.
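the gigabit-to-megabyte conversion looks odd until you know sata spends 10 bits on the wire for every data byte (8b/10b encoding). a quick sketch:

```python
def sata_mb_per_s(line_rate_gbit):
    """sata's 8b/10b encoding means 10 line bits per data byte,
    so divide the line rate by 10 to get bytes."""
    return line_rate_gbit * 1e9 / 10 / 1e6

sata_mb_per_s(1.5)  # 150.0 mb/s for sata I
sata_mb_per_s(3.0)  # 300.0 mb/s for sata II
```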
          like i said earlier, buy this with hard drives. always have at least four on your mobo in case you
want to upgrade later on (unless you’re buying for a system that there’s no chance you’ll need more). SCSI
         small computer system interface is used for just about everything – scanners, hard drives,
optical drives, etc. it was traditionally used as an alternative to IDE because it was faster and you could
connect up to and above 15 devices in a daisy chain with unique identities and everything. they’re
popular in servers because they generally have higher standards of quality assurance as compared to
desktop components, and are better running 24/7 than sata in terms of reliability over like 10 years or
something obnoxiously long like that. they’re really expensive. you don’t need one. i promise. ultra320 and ultra640 scsi
        clocking in at 320mb/s, this is the second-fastest transfer protocol on the market, barely edging
out sata drives. the bigger cousin, -640, is light-years beyond anything available. they both come in 68-
pin and 80-pin varieties. serial attached SCSI
        SAS, or serial attached scsi, is a serial-based version of the standard scsi that’s available
currently. for those who know electronics, the older version of scsi is parallel, meaning that if one died,
nothing is useable (like your old Christmas lights). the new one is serial, meaning that if one craps out
you’re not screwed. it allows for sas-to-sata backplates to be installed on traditional sata drives for
downwards-compatibility, and at some point in 2009 will upgrade the spec speed to 6gb/s across the
board and 24gb/s with wide port tech. you can stick up to 128 devices on a chain, and use up to an 8-
meter cable. aka, really cool. also ridiculously expensive. external interfaces (usb 1/1.1/2, firewire 4/6 pin, e-SATA)
         already talked about all these, but external hard drives use them all. if you can, get e-sata with
whatever your standard interface is. it’s da few-chur, mano.

7.2.3. RPM
        hard drives spin at different speeds. the standard lappy drive runs at 5400rpm, and the standard
desktop drive spins at 7200rpm. the speed that it spins at reflects directly on how fast it can access and
write data, which in turn affects more critical things (like how fast your operating system runs!). older
laptop drives are available that run at 4200rpm, but they’re snails compared to the newer drives. a few
manufacturers sell drives that run at 10,000rpm (which are quite expensive and generally in small sizes)
and 15,000rpm (only scsi drives, not really desktop drives). i like using a smaller 10k drive as my OS drive
and a huge 7200rpm drive as my storage space.
        some drives are variable speed – 5400 to 7200 is a common one. it’s supposed to save power.
who cares? it’s like a few watts here and a few watts there. don’t bother, the slower it rotates, the
slower it rotates! it’s just lost efficiency.

7.2.4. cache
          your hard drive has a small memory unit built in, ranging from 8-32 mb, that acts similar to how
the L1 and L2 caches on your CPU work. it’s also known as a buffer. the jury’s out on this – some claim
that it’s critical, some claim that there’s no difference. usually, i just say to buy as big as you can without
spending a ton of cash on it. it’s rarely more than 5 bucks extra to get the version with a larger cache, so
you might as well do it.

7.2.5. average seek, read, write, and latency
          averages for seeking data, reading data, writing data, and the latency between when the drive is
paged and when it accesses information all vary directly with the speed that the drive rotates at.
compare across drives to see where the one you’re looking at sits. i know off the top of my head that
an average latency for a mid-level 7200rpm WD caviar drive is 4.2ms, and the average seek and write
time is 9ms and 11ms, respectively. comparatively speaking, the latency for a 10,000rpm drive is
actually lower (the platter comes around faster, about 3ms) and the seek and write time is WAY better
(4.2 and 4.7 respectively).
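the latency part is pure physics – on average the head waits half a revolution for the right sector to come around. a quick sketch:

```python
def avg_rotational_latency_ms(rpm):
    """average rotational latency is half a revolution:
    (60 / rpm) / 2 seconds, returned here in milliseconds."""
    return (60.0 / rpm) / 2 * 1000

round(avg_rotational_latency_ms(7200), 1)    # 4.2ms, matching the caviar figure
round(avg_rotational_latency_ms(10_000), 1)  # 3.0ms for a 10k drive
```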

7.2.6. form factors
         drives come in two major sizes – 2.5”, for notebook drives, a few 10,000rpm drives, and all ssd
drives, and 3.5” for all desktop drives excluding the few models of 10k drives that are smaller. make sure
you know what you’re getting so you’ve got a bracket for it!

7.2.7. why almost all of those special features don’t matter a dime
        they’re all smoke and mirrors. ignore them completely. read reviews online of hard drive writes,
and you’ll see what i mean. all those things to speed up the drive, they don’t do a thing. why intellipower DOES matter, and why you should avoid it
         intellipower is another one of those ridiculous ‘green’ initiatives. supposedly it saves wattage by
powering down the drive when it’s not in use. what that means is that your drive is constantly going
from full stop to full on, and your drive’s motor wears out faster. and it’s stupid slow compared to
normal drives. all to save a few spare watts. just shut off the light when it’s daytime and you’ll save
more than you would from all these crap features.

7.2.8. platter-based drives vs. ssd (solid state drives)
         traditionally, all drives were platter-based. solid-state drives came around in late 2007 and
started making waves. ssds were used extensively in military and aerospace due to their exceptional
ability to withstand shocks, vibration, temperatures, and non-conductive liquids.
         ssds are great because they’ve got no moving parts, they’re shock-resistant, they don’t require
defragging or anything like that, they’re silent, and they have awesomely low latency and access time.
however, they’re approximately 10x more per gig than platter drives (~500$ for a 256gb drive as of
3/8/2009 vs. ~50$ for a 250gb drive around the same time), their sustained write speeds (especially on
cheaper models) are REALLY crappy compared to platters, and they wear out over time – after a few
years or so, the cells start to get worn from the continuous writing and rewriting to them and start to
fail. platter drives are still better
unless you’re the pilot of an f-18 and need a computer for in the air.

7.3. major manufacturers
        uh, yeah. people you’ll see in this profession.

7.3.1. western digital and why you should buy their products
        fifteen years of making the best overall drives on the market, hands down. buy their stuff. i’ve
never had a WD drive fail on me, and i’ve got an 80 gig drive i bought when i was in 10th grade. i was 15
and a half. i’m currently 22, and it still runs awesome. it survived eleven computer builds in that time
before getting relegated to backup duty because it was getting old and i was getting nervous =)

7.3.2. seagate
         i used to swear by seagate’s barracuda line until the whole controversy with the 7200.11 drives.
now i don’t touch them. for those who don’t know, seagate dropped the ball at some point in the last
year or so in the QC department and now their drives (originally rated at 5 years MTBF) are failing at an
alarming rate within months of coming out of the box. and their firmware update, which was supposed
to fix the issue, bricked about a quarter of the drives that it was used on. oh, and even though they said
that you should update the firmware on your drive, doing so invalidated the warranty.

7.3.3. samsung
         best 1tb drives for a LONG time, the spinpoint drives are great. not too hot on the rest of their lineup, though.

7.3.4. toshiba
         crap. avoid if possible.

7.3.5. fujitsu
         they make a lot of scsi drives, but not much for the desktop market.

7.4. external forms
        external hard drives are important for people who need to transfer files from one computer to
another. i like using them for backups since you can shut them down and stick them in a closet without
an issue. here are some considerations.

7.4.1. internal connection vs. external connection
         internal drives are faster, no matter what – even e-sata isn’t as fast as sata in the box. so, don’t
install programs to an external unless you really have a smart reason for doing so, because they’ll run
really laggy.

7.4.2. internal power vs. external power
         this is a mixed bag. in general, drives that run without an external power brick are slower laptop
drives meant for use with the laptop on the go, where you won’t have a power socket. these drives are generally
5400rpm drives that don’t require as much power to run properly. if you have a desktop, there’s no
reason NOT to get external power, since the plug’s right there anyways. with a lappy…it’s your choice.

7.4.3. fans – do they matter?
         depends. if your drive will be used a lot to do large file transfers – like, say, a backup drive that is
run every night to backup large files – it might be a good idea. if you live somewhere that’s hot, it might
be a good idea. but they’re not critical, particularly if the hard drive’s casing is aluminum.

7.5. things to watch out for
        don’t read user reviews. users forget that companies make a jillion of these things every year
and that they’re not always going to be absolutely perfect, same as any other component. drives die
occasionally, and that’s something to live with. what you SHOULD read, though, is professional reviews
online. if those are generally poor, stay away.
7.6. RAID and what it’s for
         RAID means redundant array of inexpensive disks. it is a method of linking hard drives for speed
or reliability. it was originally developed to make it cheaper to get one huge hard drive by logically tying
together several cheaper (and smaller) drives. there are various forms that either stripe the data across a
few disks to make it go faster or duplicate all writes to a different drive for reliability in case something
goes wrong. i could go into it but i don’t really know everything about it, so just read the Wikipedia
article on it.
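
a toy sketch of the two basic layouts – raid 0 striping vs. raid 1 mirroring. this is just the data placement idea, not how a real controller works:

```python
def stripe(blocks, drives):
    """raid 0: deal blocks round-robin across drives (speed, no redundancy)."""
    layout = [[] for _ in range(drives)]
    for i, block in enumerate(blocks):
        layout[i % drives].append(block)
    return layout

def mirror(blocks, drives):
    """raid 1: every drive gets a full copy (redundancy, no extra space)."""
    return [list(blocks) for _ in range(drives)]

data = ["b0", "b1", "b2", "b3"]
print(stripe(data, 2))   # [['b0', 'b2'], ['b1', 'b3']]
print(mirror(data, 2))   # two identical full copies
```

with striping, both drives can read their half at the same time (faster, but lose one drive and you lose everything); with mirroring, either drive alone holds everything.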

7.7. pros and cons for single drive vs. separate OS and storage drives
        one drive is cheap. it’s easy to organize. it’s easier to handle.
        two drives means that your precious files are less likely to be on the drive that fails first, since
your OS drive will function ALL the bloody time no matter what, whereas your storage drive will only be
used when it’s transferring your hacked movies to and from it. it’s a touch more expensive, but it means
that you can lose your os and not your files. your os will run faster, too. i recommend this option.
        note that i’m not suggesting partitioning your drive into two pieces and having os on one of
them. you’ll get the performance benefit but you’ll lose the reliability of two drives.


8.1. what is it?
         according to Wikipedia, random access memory is a form of computer data storage that allows
the stored data to be accessed in any order. it is volatile, meaning that if the computer powers down the
data is lost. it is generally used by programs to perform necessary tasks while the computer is on. the
interesting thing about ram is the nature of it – all data is equally accessible, nothing is farther away
than anything else. that’s what makes ssd drives so quick to access – they’re based on similar principles.

8.2. terminology
        here, i'll post a variety of terms relating specifically to the memory. if you don’t find something
here that you're looking for, look in either the motherboard section (section 3) or the cpu section
(section 2) for information. or, just use the find function, probably much faster.

8.2.1. types of ram
         ram comes in a zillion different forms. here are the most important forms of it – there’s a lot of
old crap out there that you don’t need to know about, so i’ll filter that out. what you need to know is
that ram generally has two names: the standard name (say, ddr-400) and the module name (say, pc-3200).
the module name tells you how many megabytes per second can be transferred, and the standard name
tells you what the memory clock is. basically, the higher the numbers, the better, pretty much no matter
what. also, compatibility runs only through the standard name, not the module name. if a board can run
ddr2 667 ram, it can run both pc2-5300 and pc2-5400.

ddr
         ddr sdram, or double data rate synchronous dynamic random access memory (whew), is
technically the second major form of ram in the electronics industry. sdram was the first.
         it’s called double data rate because it transfers data on both the rising and falling edges of the
clock signal, thus transferring twice as much information in the same amount of time. it’s a 184-pin stick,
meaning that it won’t fit in any other slot. there are four models of ram in this category, but the only
one that matters is ddr 400 / pc 3200. this ram maxed out at a 3200mb/s transfer rate, hence the pc 3200.
it’s OLD ram, that’s the catch – so why worry about slower models? just worry about the fastest one of
the old ram sticks. these sticks generally use a voltage of approximately 2.6v.

ddr2
         ddr2 is similar in many ways to ddr. it still uses double pumping – it’ll transfer data on both
edges of the clock signal – however this time the i/o bus is clocked at twice the rate of the memory
cells, a similar technique to the FSB in a cpu. basically, it runs at twice the data rate of ddr. it’s a
240-pin stick.
         the most common forms of ddr2 are 667 and 800. i run 800 in my system currently; it’s the
most widespread max speed for motherboards to support. under the general specs, ddr2 400 pc2-3200
is the slowest, and ddr2 1066 pc2-8500 (technically 8533, but they round them off) is the fastest.
standard voltages are up to a max of 2.3 volts, but most run between 1.8 and 1.9. a shrink in the size of
the dies for the memory chips is responsible for the power savings.
         it’s worth noting that the latency on ddr2 400 sticks is significantly more than the latency for ddr
400 sticks, so it performs worse. it wasn’t until the better sticks came around – specifically 667 – that
ddr2 was considered good. the same happened for early ddr3 ram.

ddr3
         these still use double pumping, but this time the chips prefetch eight bits per clock per data line,
rather than four for ddr2 and two for ddr. it’s a 240-pin stick again, but the notch is in a different place
and they’re completely electrically incompatible with ddr2.
         the most common ddr3 ram is probably ddr3 1066 pc3-8500 or ddr3 1333 pc3-10600, as these
are the highest sticks capable of running with the lower-end core i7 processors. the max is ddr3 1600
pc3-12800, currently. as before, there’s a voltage shrink as the die size was reduced to 90nm – they run
somewhere between 1.5 and 1.6v, generally. again, the lower-level ddr3 – specifically ddr3 800 pc3-6400 –
is much slower than the ddr2 equivalent. they also cost more, still.

dimm
         dimm merely refers to a stick of ram of some kind. this is different from, say, the vram on a video
card. it means dual in-line memory module, referring to a stick of ram where each side has two to
eight ram chips in a line.

sdram
         synchronous dynamic random access memory is the traditional desktop stick of ram.
synchronous means that it waits for the clock signal from the cpu to organize and direct incoming data
within a structure. it allows the ram to accept new data before finishing processing the old one.

so-dimm
         small outline dual in-line memory module – guess what! laptops, that’s right. similar idea, but
the pinouts are different. if it’s 200-pin with a notch that’s placed away from the center, it’s DDR. if it’s
200-pin and the notch is near the center, it’s almost always DDR2. if it’s 204 pins, it’s DDR3.

fb-dimm
        a high-reliability, high-stability interface that’s built for applications requiring it – mostly
servers. you don’t need it. if you want to know the difference, read wikipedia – it’s fairly complex.

flash-based memory
         basically a non-volatile version of standard forms of RAM. it’s mainly used in usb drives. it’s not
really a form of ram, but it can be used as such in certain systems.

rdram
         used often in video game consoles and video cards, it’s rarely used in computers. it’s REALLY
old – like 10 years old.

micro dimm
        a 172-pin version of DDR. you won’t see it very often; it’s pretty outdated. it’s similar to the
SO-DIMM build, but it’s even smaller, for use in super-portable computing.

system specific ram and why that matters
         certain idiot computer makers use system-specific ram to extract even more money out of you.
gateway and Panasonic come to mind. Kingston is pretty much the only company to sell this stuff; don’t
buy it elsewhere. ss ram generally comes in either odd formations (weird pinouts or something) or odd
timings (or speeds). or it’s for printers.

64-bit vs. 32-bit
         ah, the old controversy. the tl;dr version is that if you have 32-bit windows (if you don’t know,
you have this kind), you can’t have more than – at MAX – about 3.65 gigs of usable memory. and your
video card’s ram subtracts from that. if you have 64-bit windows, you can have a (theoretical) max of
128 gigs, minus your video card memory (won’t really matter at that point!).
         here’s why. windows assigns every single byte of memory an address so that it can keep track of
what’s going on. with 32-bit windows, that address is 32 bits long, therefore giving you 2^32 possible
addresses. according to my scientific calculator, that’s 4,294,967,296 bytes, or 4096 megabytes, or
exactly 4 gigs. windows keeps a bunch for restricted addressing, which is where you get about 3.65 gigs
of usable ram. ALSO, 32-bit windows doesn’t let any application use more than about 2 billion bytes at
any given time, which is around 1.86gb. after that, the application will generally either not be able to
access any more ram or it’ll crash/bluescreen your system. my system currently prefers the latter =(
         64-bit windows operates under a similar concept, except that addresses are 64 bits long, giving
you 2^64 possibilities – an absurdly huge number – and windows caps the usable amount at 128 gigs.
get the gist?
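
the addressing arithmetic above, sketched out:

```python
# the addressing math behind the 32-bit and 64-bit memory ceilings
def addressable_gb(address_bits):
    """how many gigabytes 2^bits worth of byte addresses can cover."""
    return 2 ** address_bits / 1024 ** 3

print(addressable_gb(32))   # 4.0 -- the hard 4 gig ceiling on 32-bit windows
print(addressable_gb(64))   # about 17 billion gigs; windows caps it way below that
```

the 3.65 gig figure you actually see is just that 4.0 minus the chunks windows reserves for its own addressing (and your video card’s memory).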

8.2.2. capacity
         currently, ram sticks exist (for modern computers) running from 256mb to 4gb per stick.
different motherboards can take different maximums per stick, so make sure to check what’s going on
with your mobo before updating.

8.2.3. speed
         like i mentioned above, current speeds in DDR2 range from 400mhz to 1066mhz, and from
800mhz to 1600mhz for DDR3. so-dimm sticks are usually a few months behind desktop ram.
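
the two names for a stick are tied together by simple arithmetic: the module number is the transfer rate (the number after “ddr”) times 8 bytes, since a stick has a 64-bit data path. a quick sketch:

```python
def module_rating(transfer_rate):
    """peak MB/s for a stick: millions of transfers per second x 8 bytes per transfer."""
    return transfer_rate * 8

print(module_rating(400))    # 3200  -> ddr-400 is sold as pc-3200
print(module_rating(800))    # 6400  -> ddr2-800 is pc2-6400
print(module_rating(1600))   # 12800 -> ddr3-1600 is pc3-12800
```

the odd ones like pc2-8500 come from rates like 1066.67 being rounded off on the label, as mentioned earlier.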

8.2.4. timing
         oh, boy. my least favorite part of the entire tutorial.
          basically, this tells you how fast your ram does certain operations. here’s something else really
cool: no matter what, lower is better. but in reality, there’s almost no difference between different
sticks that can’t be directly related to the quality of the plant that they were made in. if you buy cut-rate
ram, it’s gonna suck. if you buy name brand, it’ll (almost) always be worth your money. READ REVIEWS!
           ram timings are listed in the following order: CL-tRCD-tRP-tRAS-CR. an example would be the
g.skill ram i’ve got right now: 5-5-5-15-1T. read below to figure out what they mean. tRAS might not be
there; it’s no biggie because you can figure it out on your own. quite often they’ll omit the CR as well. if
you don’t see all five numbers, it’s just the first three or four – they don’t omit one in the middle.

RAS
         stands for row address strobe. it’s the number of clock cycles needed to internally refresh the row
to get it ready to do more work. it’s generally the fourth number, and is roughly the sum of the first three.

CAS
         stands for column address strobe, also known as latency. it’s the time that it takes to read the
memory when the row’s already ready to go. generally this is considered the most important of the
timing numbers because it describes how fast the memory actually spits out the data it’s got.

tRAS
         same as RAS above, except it’s the correct way to say it. it’s an acronym for row active time.

tRCD
         row address to column address delay. it’s the amount of time that it takes between opening a
row of info and actually reading it. add this to CAS and you’ve got how fast a system can open a row and
read it.

tCL
         another name for CAS. CL is the technical abbreviation for CAS latency. double abbreviation ftw!

tRP
         row precharge time. it’s how long it takes to say ‘this is not the right row for my current job’ and
close it out so the next one can be opened. the time it takes to close the wrong row and open and read
the correct one is the sum of this one, tRCD, and CAS.

command rate
         after your computer decides what job it needs to do and selects a chip to do said job, there’s a
forced pause of either one or two clocks. there’s a slight performance hit with 2T ram – benchmarks say
upwards of 25%, but real-world tests that i’ve done indicate closer to 3% difference in performance.
benchmarks are made to sell components, so they include tests that do things that don’t happen in the
real world =) basically, if you can get 1T, get it. if not, then get 2T. it’s no big deal, in the end.

latency
         again, another word for CAS or tCL. read above.

5-4-5-13-1T? what does that all mean?
        as i said earlier, that’s cas-trcd-trp-tras-cr. read above. generally, the first number is the only one
that even comes close to mattering.
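
one thing worth seeing in numbers: a bigger cas value doesn’t automatically mean slower ram, because the cycles are counted against a faster clock on newer sticks. a rough sketch (timings are counted in bus-clock cycles, and the bus clock runs at half the transfer rate on ddr-type ram):

```python
def timing_in_ns(cycles, transfer_rate):
    """convert a timing in clock cycles to nanoseconds.
    transfer_rate is the ddr number (e.g. 800); the bus clock is half that, in MHz."""
    clock_mhz = transfer_rate / 2
    return cycles / clock_mhz * 1000

print(timing_in_ns(5, 800))    # 12.5 ns -- cl 5 on ddr2-800 (the 5-5-5-15 sticks above)
print(timing_in_ns(7, 1066))   # ~13.1 ns -- cl 7 on ddr3-1066, barely any slower
```

this is also why ddr2 400 with its higher cycle counts performed worse than ddr 400, as mentioned earlier: the clock was the same, so more cycles meant more actual nanoseconds.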

8.2.5. voltage
         as i mentioned in the stick study i did above, different types of ram require different voltages.
certain cpus – including the newly-released core i7 from intel – require ram to be under a specific
threshold to prevent damage to the cpu. READ UP ON THIS so you don’t torch your cpu!

compatibility with major cpu standards
         i7’s require ram to function on voltages absolutely no higher than 1.65v. that’s the biggest one i
know of off the top of my head. you’ll not be able to buy ddr3 ram marketed for i7 systems that’s rated
higher than that, as far as i know.

8.2.6. heat spreader
         if you can get ram with a heat spreader, get it. ram runs slower as it heats up, and generally it
heats up irregularly, with parts of the pcb it’s built on being hotter than others. heat spreaders do just
what the name says – they even out the heat on the stick so that it’s not as hot as it could be. there’s a
decent performance benefit. most heat spreaders are removable too, so if you burn out a stick or two,
take them off before chucking them. you might need them in the future.

8.2.7. why recommended usage is usually a crapshoot
        because it’s a marketing scam!

8.2.8. buffered/unbuffered
        hint: this means exactly the same thing as the topic immediately below! basically, if you don’t
have a server, it doesn’t matter if it’s buffered or unbuffered. there’s really no difference.
        if you’re talking about an old computer, though, it DOES make a difference. some mobos – circa
2003 or so – have different maximum ram levels based on whether it’s buffered or unbuffered. take
note and buy accordingly.

8.2.9. registered/unregistered
         see above!

8.2.10. ecc? what’s that?
         see above! ecc (error-correcting code) memory detects and fixes single-bit errors on the fly. it
almost always comes on registered sticks, but strictly speaking it’s a separate feature. again, it’s server
stuff – skip it on a desktop.

8.2.11. single-, dual-, and triple-channel
         ok, here’s how this works. let’s say you plug a bunch of random sticks into your motherboard.
great! you’ve got single-channel ram – basically, as fast as the system will allow it to go by itself. if you
match sticks of ram in specially marked slots – often by color – you’ll get dual-channel ram. they
generally have to be the same size, but depending on the memory controller they might be able to be
different manufacturers, timings, and even speeds (the slower speed is used regardless). some intel
chipsets support ‘flex mode’, which lets the capacity that can be matched run in dual-channel and the
rest in single-channel. there’s between a 5 and 10% increase in performance from this – enough to
justify getting a 2x2gb set of ram with 32-bit windows even though you don’t get the entire fourth gig.
         dual-channel ram works by having two 64-bit data channels instead of one (like what was
originally normal). in the same way, triple-channel has three 64-bit channels instead of two, allowing for
more throughput, though there isn’t much of a real-world performance difference. oh, and triple-channel
is only for ddr3, as far as i’ve seen.

compatibility with major cpu/mobo standards
        pretty much, triple-channel’s only for i7 right now. the current mainstream processors in the
core 2 and athlon 64/x2 series support dual-channel, as do their laptop counterparts.
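
the throughput math is straightforward: each channel is 64 bits (8 bytes) wide, so theoretical peak bandwidth scales with the channel count. a sketch:

```python
def peak_bandwidth(channels, transfer_rate):
    """theoretical peak in MB/s: channels x 8 bytes x millions of transfers per second."""
    return channels * 8 * transfer_rate

print(peak_bandwidth(1, 800))    # 6400  -- single-channel ddr2-800
print(peak_bandwidth(2, 800))    # 12800 -- dual-channel doubles it (on paper)
print(peak_bandwidth(3, 1066))   # 25584 -- triple-channel ddr3-1066 on an i7
```

keep in mind these are paper numbers – as noted above, the real-world gain from an extra channel is more like 5-10%.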

8.3. major manufacturers
        do i have to keep copy-pasting? really?

8.3.1. g.skill and why you should probably just buy it if you’ve got a desktop
         it’s cheap, it’s reliable, and it’s got a proven track record. if you want super-awesome ram, don’t
buy this stuff. if you just need the standard, get it. i’ve got their blue 2x2gb stick set in my system right
now, and it’s worked fine the whole time i’ve had it.

8.3.2. kingston and why you should probably just buy it if you’ve got a laptop
         they’re pretty much the best combination of laptop ram performance and price. and they’re
pretty decent quality.

8.3.3. ocz
         generally, they make pretty good performance ram. somewhat overpriced, but pretty nice as a
step up if you want it.

8.3.4. corsair
         pretty decent models across the board. i like their stuff.

8.3.5. mushkin
        good budget type for low-end applications, don’t go for their higher-level stuff.

8.3.6. patriot
        budget-priced performance ram. stay away!

8.4. things you should watch for
          anything on the sticks that’ll add heat is a bad thing. anything that doesn’t have a long warranty
is a really bad thing. anything with bad reviews online is a bad thing.

8.4.1. why leds on your memory are a stupid idea
        they add heat, they add cost, and they do nothing for performance. see above.

8.5. why you should NEVER have less than 1 gig of ram for xp/2 gigs of ram for
vista and w7
        windows xp requires about 128 megs of ram to operate the OS smoothly. that’s by itself, not
counting anything else. vista and w7 need around 1 gig (not counting your video ram) to make the
pretty stuff look pretty and handle all the background processes. why such a big difference? because xp
was developed when 1 gig of memory was excess to the extreme, and vista was developed when 1 gig
was becoming standard to handle all your background crap going on. ram’s stupid cheap, just shell out
the extra ten bucks and your os’ll run really smooth all the time.


9.1. what is it?
         anything that allows you to put input into your system – keyboards, mice, webcams, drawing
tablets, controllers of any kind – or their variants – kvm switches are the most notable variety – counts
as an input device.

9.2. forms of input devices
       there are two general types of input devices - what i call human interface devices and
technology-controlled devices. an example of a human interface device is a fingerprint reader, keyboard,
mouse, tablet, etc. a technology-controlled device might be a temperature gauge, an automatic
volumizer to reduce volume above a certain point, etc. there are obviously a lot more HIDs than TCDs in
my book.

9.3. terminology
         i’m not doing a specific section on each term because almost everything’s been talked about
before. i’m just going to discuss some changes that have come around from the old-school mouse and
keyboard we grew up playing starcraft on.
         keyboards: everyone’s seen keyboards with macro buttons – keys that do something other than
[enter] and letters and stuff like that. volume, mail, internet explorer…that’s all been around since
windows 95. what is new in this field are programmable macro buttons – buttons that you can set
special keystrokes with per program. an example would be a complex keystroke for a photo-editing
program, a set of instructions for your WoW character, or a ‘boss-button’ that minimizes all open
windows. another new thing is the keyboard display, as seen on Logitech g15 keyboards. these displays
can display system temps and data, program-specific data (like WoW stats or something equally
useless), or stuff like the time of day. again, this is programmable. another big thing nowadays is
gamepads – ergonomically designed mini-keyboards (often for your right hand, but they make them for
your off-hand as well) that group common game keys, macros, and even game-specific keys together to
allow for less flailing and fewer missed keystrokes in games.
         mice: mice have come a long way in the last few years to the point where programmable
buttons like a search button, extra scroll wheels, and even the ability to alter how heavy the mouse is
are becoming commonplace. the standard for gaming mice is still the Logitech g5, but there are other
mice (like my revolution mx) that are just as good. the key to look for is one that’s got a good-quality
laser lens. read reviews and get one that suits your needs. i loved my cheapo Microsoft mouse for over a
year before i splurged on the one i’ve got now. read reviews!
         fingerprint readers: i haven’t heard good information about these. pretty easy to hack,
supposedly. same with webcam-based facial recognition technology. a picture from a magazine can
defeat it.
        usb-based devices: tablets, soundcards, they’re all out there. i won’t go into detail here, so go
look and find what you want the best. reviews, reviews, reviews!
        webcams: AIM and Skype have made these things prevalent everywhere. look for one with a
manually adjustable external lens, preferably of glass. i bought one a while back from Hercules and i’m
happy with it. read reviews!

9.4. when wireless/bluetooth is generally a bad idea
        i prefer not to use wireless keyboards and mice for two reasons: they need to be recharged
(and i ALWAYS forget), and they drop the signal occasionally. it’s really rare, yeah, but if you’re playing
an FPS or writing a paper, one dropped input is the difference between a good game and a bad one, or a
clean paper and a really blatant error.
        also, wireless isn’t very secure. if you’re typing your social in, and there’s someone listening in
on the signal…

9.5. major manufacturers and what they make
         the major computer companies, like dell and them, make peripherals. so do apple and
Microsoft. these are usually decent components. i used a Microsoft mouse for a long time. i also used a
3$ lite-on keyboard for about eight months, and still do when i build computers. you don’t always need
the best!
         when it comes to peripherals, Logitech has been making quality components for like 10 or 15
years. buy from them! there are a few others, like Kensington and adesso, but i really think you should
just spend the extra few dollars and get logi equipment. they’re excellent. i use a saitek eclipse
keyboard, which is a backlit keyboard for about 30-40$, and i like it. takes a little getting used to the
slightly odd action, but it’s good.

9.6. things to watch out for
        bloatware. particularly mouse bloatware. unless you’re really going to use all those extra
buttons on your keyboard or mouse, don’t install it if you can help it. these packages tend to include
programs that keep running no matter how many times you try to uninstall them. crappy programmers
work for keyboard and mouse companies, and their poor driver technology generally shows for it. not as
bad as printer drivers, but close.


10.1. what is it?
       monitors, printers, speakers, display gadgets…anything that gets information out of your
computer is an output device.

10.2. forms of output devices
         same as before – i’ll just do a rundown rather than specific sections on each.
         visual output devices are generally one of two things – a display or a printer. i’ve already
discussed plugs for both of these, but there’s some general guidelines you want to follow. generally,
with printers, avoid Lexmark. their drivers are probably the worst in the entire scope of computing
because of how they activate the print spooler – they tie it directly to the driver so you can’t use a
different printer when a Lexmark one is installed, and then they forget to untie it when you uninstall the
damn thing, so you can’t print anymore. sensing some personal experiences here? as for monitors,
there’s a wide variety of things you’re supposed to know about when you pick one, but in general just
read reviews. that’s how i picked out mine…i didn’t compare blacks and brights and contrast ratios and
all that crap. it’s not a TV, after all.
         audio output devices encompass headphones and speakers. you need to know what you’re
getting these for – if you’re doing audio production, spend a lot of time finding one that doesn’t color
the sound with a bass boost or audio exciter or your mixes will sound a lot different on your system than
they will elsewhere. with headphones, get an open-ear design (it doesn’t boost bass as much as other
headgear). and for what it’s worth, the headphones that allow for surround-sound within the
headphones can really, really mess your hearing up when you mix. avoid them. if you’re buying for
games and movies and general use, buy the ones that reflect your budget best. remember that, at a
desk, 5.1 isn’t all that great. 2.1 or 3.1 is. if you’re somewhere that you can hang those rear speakers, get
them! also remember that, at a desk, you don’t need much more than maybe 30 watts of power total to
get a full sound. you’re right in front of the speakers, so you don’t need a huge system.

10.3. terminology
        i’ve already discussed this in detail above, so i’m not really going to get into this much. if you
think there’s something that needs to be here, contact me.

10.4. why wireless/Bluetooth is always a bad idea
        wireless printers are even worse than wired printers. their drivers are notorious for failing just
as you’re about to print that major project.
        i heard something about wireless monitors a while back, which would be cool if the thing didn’t
need to be plugged into a wall to be useful. they’re pretty much bogus. ignore them.

10.5. major manufacturers and what they make
        for speakers and headphones, read reviews and get the best ones you can. if you’re talking
desktop speakers, i use the Logitech x230 system (2.1, 28 watts, around 40 bucks max), which is
excellent. i mix on that system. i have the logi z5500 (5.1, 505 watts, remote and speaker control, hi def
audio capable, usually available for around 250$) speakers for home theater and console gaming, and
they’re excellent! the woofer is seriously about 60 pounds, though. and it’s a huge deal if you can find
them on sale with free shipping…dell often has them on sale when newegg or amazon doesn’t. for
headphones, that’s up to you. i use a pair of bose triports for listening to my ipod, but i rarely listen to
my computer on headphones.
        for monitors, there’s several manufacturers that make one or two good models and a zillion
crappy ones. read reviews and buy from there. never buy in a store! i really like my acer al2216w 22”
widescreen monitor, and they’re stunningly cheap compared to the nearest competitor. just buy them!

10.6. things to watch out for
         wattage isn’t always everything – make sure that your woofer is approximately half of your
system’s watts. any less than 40-45% is going to be a less-than-satisfying situation.
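
the rule of thumb above, as a quick check (the wattage numbers here are hypothetical examples, not specs of any real set):

```python
def woofer_share_ok(woofer_watts, total_watts):
    """true if the woofer carries at least ~40% of the system's total watts,
    per the half-the-watts rule of thumb above."""
    return woofer_watts / total_watts >= 0.40

print(woofer_share_ok(15, 28))   # True  -- roughly half the power, a healthy 2.1 split
print(woofer_share_ok(10, 30))   # False -- the woofer's only a third of the total
```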
         shipped monitors often have stuck or dead pixels. make sure you check yours right away when
you get it, and find out how many there have to be in order to be able to get a new one (usually 5-8).
there are ways to fix stuck pixels, just google it. it’s not the end of the world, although it can be annoying
as heck.


11.1. why is this here?
        because your case is really important! if you buy a massive system and want it to fit into a tiny
case, you’re just asking it to explode on you in the middle of the night. cases (and the next section,
cooling) are often the most complex and difficult question to answer during one of my builds.

11.2. form factors/sizing terminology
         different cases support different motherboards, of course. but there’s more to it than just that.
the size of your graphics cards (the 9 series and on are beasts compared to the 7 and 8 series of nvidia
cards), how many expansion slots you need, how many hard drives and optical drives you have, these all
need to be considered when deciding what size you need.

11.2.1. atx full tower and why your beast needs to be in one of these
         if you have any of the following, you need a full tower: more than two gpus (two dual-slot cards
might need a full, but usually not), more than one cpu, or more than five expansion cards not including
the graphics card. if you’ve got a water cooling system, get a full tower.
         full towers are the hummers of the computer world – damn big and proud of it. when fully
loaded, these can weigh as much as seventy pounds! so, no, they don’t work on a desk. they’re the most
expensive, and generally are the top-of-the-line.

11.2.2. atx desktop
         these are the smaller cases that are made for keeping on your desk. they’re often not much
larger than a few large books. avoid at all costs! they’re a cooling nightmare.

11.2.3. atx mid tower and why it’s generally the best
         this is your standard-sized desktop. you don’t need more case unless you’re buying an absolute
LOAD of computer components to go into it. it’s got the best range of features and cooling options while
remaining small.

11.2.4. atx mini tower
         miniaturized version of the mid tower – it’s approximately the same size as the atx desktop but
it’s intended to stand upright.
11.2.5. microatx mid and mini tower
        smaller versions of the above-mentioned cases. they only support the microATX mobo format.

11.2.6. microatx desktop
        these are interesting little cases. they’re often known as LAN cases, because they’re the most
portable cases out there. they often have handles, removable motherboard trays, and intense
cooling issues. if you’re buying expensive, hot components, make sure to get one with vents near those parts.

11.2.7. mini-itx tower
        these towers, usually the size of a small dictionary (only a few inches thick!), are intended for
the low-heat and low-power mini-itx motherboard form factor. heat is generally not an issue because
they’re so bloody small. if you buy one, make sure it’s got at least one fan and one vent besides that fan,
though! there still needs to be air flowing, if only to cool the ram and hard drive.

11.2.8. htpc cases
        htpc stands for home theater pc. they are small cases, similar to the size and look of stereo
components, and generally house either mini-itx systems or extremely low-power athlon systems. the
cases are generally pretty expensive, though, because they’ve gotta look good with all your other stereo
goodies as you play your downloaded movies on them off your home network. make sure they have
fans that are quiet, or you’ll regret it.

11.3. why the material matters
        different materials conduct heat differently and have different perks. nuff said.

11.3.1. steel
         steel’s the strongest case material by far, but it conducts heat slightly less efficiently than
aluminum. not a huge difference. it’s also really heavy. it’s also really cheap.

11.3.2. aluminum
         aluminum is about midline for case strength, but it conducts heat much more efficiently than all
the other materials (steel’s close, but the others aren’t even near it). it’s really, really light. it’s also
pretty expensive.

11.3.3. acrylic
         acrylic looks cool because it’s clear. great for studio builds. it conducts heat very poorly, it’s
midline expensive, it cracks really easily, and it’s heavy compared to aluminum.

11.3.4. plastic
        go away.

11.4. terminology
        things you’ll come across while working on computer cases.

11.4.1. mobo compatibility
          different cases can fit different motherboards. don’t go by case dimensions, go by what they say
it can fit. if it doesn’t say that it’ll fit extended atx, it won’t. don’t wish or you’ll get screwed.

11.4.2. internal drive bays
         this is how many internal bays there are for hard drives.

11.4.3. external drive bays
         this is how many external bays there are for optical drives and 3.5” fan controllers and floppy
drives. they’re generally marked separately, so you’ll see a case listed with 2 3.5” bays and 4 5.25” bays.

11.4.4. expansion slots
         this is how many slots there are on the rear of the case for PCI-E and PCI expansion cards.

11.4.5. front ports and what should be there
         assuming it’s got a front bay, there should be at least a headphone jack and usb connections. if
it’s a small case that’s meant for being on your desk, there might not be anything, but if it’s big and
supposed to go on the floor make sure there’s at least two usb and a headphone jack. a mic plug and
firewire are nice too if you use them a lot. make sure that, if the usb plugs are in a stacked formation
(aka, 2x2) that there’s enough room to plug in a thumb drive or something larger than just the cable
when there’s something plugged in next to or above/below the plug you’re using. also make sure they
work – i’ve had one or two cases not have front-panel audio support out of the box.

11.4.6. dimensions and weight, and why they matter
          if it’s meant for your desk, it shouldn’t be taller than you when you’re sitting up. if it’s for the
floor, it should fit under the desk =) these are basics you should think about beforehand. also, if it’s
really heavy and it’s gotta fit on your desk, get a good desk.

11.4.7. toolless installation
         annoying! this is one of those things that techs invent to prevent people from complaining
about how they stripped the screws in their installation. it’s usually a variety of high-friction pins and
snap-in plastics that allow you to secure optical and hard drives without screwing them in. but you’ve
gotta pull the front panel off the computer to do it! so frustrating. it’s a wash, it takes just as long for
toolless as it does tooled.

11.4.8. wiring ducts
         these are nice. they allow you to route wires to the back of the case, behind the motherboard
tray and the back door of the case. these are really great to keep the air moving in your case without
lots of annoying wires in the way.

11.5. cooling, and why you MUST think about it
        HEAT = FIRE

11.6. major manufacturers
        like peripherals, most case companies have one or two really awesome models and a zillion
crappy ones.
11.6.1. apevia
         like with psus, this is newegg’s brand name. decent quality for being low-end.

11.6.2. lian-li and why they’re so stupid expensive
         lian-li are the top in build quality, consistency, and price. they’re as good as it gets…that’s why
they cost so much. i wouldn’t recommend them, though, because they ARE so much money. $500 for a
case? i don’t think so.

11.6.3. antec
         antec’s got several really popular case designs, like the Three Hundred/Nine Hundred/Twelve
Hundred (mid, big mid, and full tower, respectively), as well as the P180 series and the like. great build
quality, and the Nine Hundred is consistently one of the top sellers on every site i’ve seen. slightly more
expensive than the norm, but they’re good quality with enormous fans.

11.6.4. cooler master
         cooler master is one of the top names in full-tower cases. their stacker and cosmos cases are
excellent and quiet. pricy, but excellent for the systems that need to be in there.

11.6.5. rosewill
         rosewill makes very good cases for the low and middle ranges. somewhat flimsy, but as a whole
they’re great for a cheap build.

11.6.6. raidmax
         flashy garbage. lots of leds and windows and not much build quality.

11.6.7. supermicro
         servers and the like. pretty good.

11.6.8. Athena power
        more servers. they’re ok.

11.6.9. sunbeam
         they’re good for two things: their transformer full-tower case and their see-through acrylic case.
everything else they sell isn’t really worth the time.

11.7. things to watch out for
        read reviews! there are a lot of bad cases out there. if it’s an off-brand or seems really cheap
compared to others in its class, find out why. usually the money’s saved by using cut-rate fans that crap
out right away. also, case shipping is about as expensive as it gets – it can be up to $30 for some full-
tower cases. look for free shipping deals! sometimes it makes the difference between a cheapo case and
a good one.

11.7.1. why you should never use the psu that comes in a case (unless you’re an office person)
        well, even if you’re an office person, those psus are the lowest of the low. spend the extra thirty
bucks and get a decent one when you order.
11.7.2. why you should never go smaller than a mid tower (unless it’s an htpc)
        the smaller ones have issues with cooling. see all of section 12 =)

11.7.3. power supply location
        if you can get a case with the psu on the bottom, do it. heat in the case rises, so putting the
component probably most affected by heat in the long run (the psu) right in the middle of it is bad.
mounting it on the bottom means it can suck in the coolest air – straight off the floor – and exhaust it
out the back instead of dumping heat into the main body of the computer.

11.7.4. issues with toolless installations and screwless drive mounting
         well, read what i said above. toolless doesn’t always fit your special addons like hard drive
coolers and the like. my suggestion is to buy the computer and piece it together, and if you need the
extra cooling and have space add it later.

11.7.5. when having a window with a fan on the side is a really good idea
         when you have a good graphics card! if the graphics card has to pull hot air from inside the case
to cool the gpu, it’s nowhere near as effective as having a fan right there blowing cool air straight onto
the card’s intake. the sunbeam transformer case is an excellent example of great side fan placement.

11.8. what you SHOULD get with your case purchase
        all screws required to mount every piece of hardware at the same time. so, if you’ve got seven
expansion slots, four external 5.25” bays, six total hard drive bays…you should have enough hardware to
mount all of that AND the psu. and the motherboard risers so you don’t burn out your first four
motherboards, like i did on my first build until i realized what the smoke was.


COOLING

12.1. why cooling is possibly the most important thing to think about
        your computer will not run if it overheats constantly. without cool air blowing over it, your cpu
will simply error out. your graphics card will display weird blotches. your ram will asplode. your
interwebs will be clogged. bill gates will die of a heart attack.
        do it for the children.

12.2. air cooling
        generally the most common form of cooling is air cooling. air’s pretty common in the modern
world, and one of the best things about it is that it’s free. most cases take advantage of this fact by
including a bunch of crappy fans in their boxes. most cpus come with a fan as well. your cpu MUST have
a cpu fan on it or it will cook itself in seconds.
        something everyone forgets is that your computer CAN’T be cooler than ambient temperatures.
most people who know things about computers don’t measure their case’s temp while under load, they
measure the difference from ambient it is under load. if you’re in a 90 degree room, your computer will
run hotter than it will if it’s in a cold basement.
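here’s that arithmetic as a quick sketch (all the temperatures are made-up example readings, not measurements):

```python
# delta-over-ambient: the number that actually describes your cooling.
ambient_f = 90                          # a 90 degree (fahrenheit) room
ambient_c = (ambient_f - 32) * 5 / 9    # ~32.2 degrees celsius

load_temp_c = 61                        # hypothetical cpu temp under load
delta_c = load_temp_c - ambient_c       # cooling performance, independent of the room
print(round(ambient_c, 1), round(delta_c, 1))  # -> 32.2 28.8
```

the delta is the number to compare between builds – the raw load temp just tells you how hot your room is.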
         there are two types of air-cooled systems – positive-pressure systems and negative-pressure
systems. positive means that there’s more air coming in through the fans than going out, so air gets
pushed out of random joints in the computer’s case. this is good for keeping dust out of your system but
not very good at cooling – unless you’ve got intake fans blowing directly on your cpu and gpu, you’ll find
that air doesn’t circulate as well in a positive system because there’s no force pulling it out another vent
– just forces pushing it in. negative means that there’s more air blowing out than in, which forces your
system to suck air in through the cracks. these cracks generally bring in a lot of dust with the air, but
you’re guaranteed (as long as you planned your airflow properly) to get air past your critical
components.
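the positive/negative distinction is just arithmetic on your fans’ ratings – a minimal sketch (the cfm numbers are hypothetical, in the ballpark of typical 120mm fans):

```python
# net case pressure = total intake cfm minus total exhaust cfm.
intake_cfm = [39.0, 39.0]          # two 120mm front intakes (assumed ratings)
exhaust_cfm = [39.0, 55.0]         # one 120mm rear + one bigger top exhaust (assumed)

net_cfm = sum(intake_cfm) - sum(exhaust_cfm)
if net_cfm > 0:
    pressure = "positive"          # air leaks OUT the cracks: less dust, lazier flow
elif net_cfm < 0:
    pressure = "negative"          # air sucked IN the cracks: dusty, but flow is guaranteed
else:
    pressure = "neutral"
print(pressure, net_cfm)  # -> negative -16.0
```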
         you’ll see me talk about cfm all the time. it means ‘cubic feet per minute’ – a measure of how
much air a fan can move. these numbers are generally crap (advertisement garbage that’s not true
in the real world), so basically buy based on reviews. if you know you need x cfm, buy a touch over.
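if you want a rough feel for what x cfm means for your case, the math is short (every number here is an assumption, and i’ve already knocked the advertised rating down):

```python
# cfm = cubic feet per minute, so air exchanges per minute = cfm / case volume.
h_in, w_in, d_in = 18, 8, 18                      # hypothetical mid-tower dimensions, inches
case_volume_cuft = (h_in * w_in * d_in) / 12**3   # inches -> cubic feet

rated_cfm = 40                                    # what the box claims for one 120mm fan
real_cfm = rated_cfm * 0.7                        # discount the marketing number ~30%

exchanges_per_min = real_cfm / case_volume_cuft
print(case_volume_cuft, round(exchanges_per_min, 1))  # -> 1.5 18.7
```

even with the marketing discount, one decent fan turns the case air over every few seconds – the trick is making that air actually pass over the hot parts.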

12.2.1. why you need an aftermarket cpu cooler
        because it’ll cook itself if it doesn’t have one. seriously, even with a high-powered air cooling
system i’ve had mine up to almost 90 degrees Celsius under load. while cpus can run pretty hot all the
time, with 60s and 70s common in overclocked systems, this is bad. you need a cpu fan that’s good, and
that term is generally not used to describe the crappy fans that intel ships.

12.2.2. fans, sizes, and why you need at least two in the end
         air in, air out. you need to push air in one end and out the other, or else it’ll either not reach its
intended target or it’ll suck crap in through the cracks in your case, where you can’t have filters to
prevent dust from getting in. you need at least two for any normal system and at least a vent and one
fan for an htpc.
         there are fans for hard drives, your pci slot (extra graphics card cooling), chipsets, even your ram
and optical drives. for the most part, it’s smoke and mirrors.

40mm
        these are tiny things generally used for moving a few cfm onto the northbridge and southbridge
chipsets. they aren’t worth the time with anything else.

60mm
        cheap alternative to the 80mm fan. don’t use it if you can avoid it. it’s about 2/3rds of the
output of an 80mm.

80/90mm
         these are one of two standard sizes for case fans. the 90mm is a little bit bigger, but they’re
basically the same fan. if your computer is noisy, these are why! it’s common to find these fans with a
dba rating of up to 35-40dba – which is loud! and it’s a higher-pitched whine, which is really annoying.
they’re good in small quantities, but my last case had three of them (good models, too) and it was a loud
system.

120mm
         ah, the bread and butter of watercoolers and air coolers alike. these are the second of two
standard sizes for case fans. the more of these, the better. you can move up to 90cfm with these things
without breaking 25dba because they’re so much bigger than the 80mm fan (about twice the surface
area, all told).
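the ‘twice the surface area’ claim checks out with middle-school geometry (this ignores the hub, which eats some of the advantage in practice):

```python
import math

# swept area scales with the square of the fan diameter.
def area(d_mm):
    return math.pi * (d_mm / 2) ** 2

ratio = area(120) / area(80)
print(round(ratio, 2))  # -> 2.25: a 120mm fan has over twice the area of an 80mm
```

more area means the same cfm at lower rpm, which is exactly why the big fans are quieter.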
        you can buy these in 25mm thick or 37mm thick varieties. the 37mm varieties don’t fit in most
cases unless you’ve got nothing behind them, but they push a buttload more air thanks to the deeper
blades. they’re intended more for radiators and the like. 25mm is standard; you’ve gotta dig for 37mm
fans.

200/250mm
        often used in the side window of cases, these monsters are great for just getting air in the
system. i think it’s antec that has these in most of their large-model cases.

360mm
        the biggest, the baddest, the hugest. and utterly silent because it’s so bloody big. it’s basically a
box fan stuck on the side of your system – seriously, this thing is over a foot across. serious dust hazard,
but serious cooling.

bearings and the like
         there are five or six different types of bearings on the market for fans. i’ll admit that i know
almost nothing about them other than that sleeve bearings are crap, so i stole most of this from Wikipedia.
         sleeve bearings are basically just that – the fan floats on grease or oil over the axle. they’re less
durable than others, more likely to fail at high temperatures, have poor performance off of a vertical
axis, and get really nasty loud towards the end of their life. and they’re mad cheap, because they’re so
basic. avoid if possible.
         rifle bearings are similar to sleeve bearings but are quieter and last almost as long. they’ve got a
spiral groove (similar to the rifling on a gun barrel) that forces lubrication into the joint from a reservoir.
they can be safely mounted off of vertical.
         ball bearing fans use…surprise! ball bearings. they cost a bit more but are much more durable –
particularly at high temps (like in gpu and cpu fans). they’re pretty quiet, and they last around 63000
hours (as opposed to the 40k of rifle).
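those hour ratings make more sense converted to years of continuous use – a quick sketch using the rough lifetimes quoted above:

```python
# convert rated bearing lifetimes into years of 24/7 operation.
hours_per_year = 24 * 365  # 8760

for name, rated_hours in [("rifle", 40_000), ("ball", 63_000)]:
    years = rated_hours / hours_per_year
    print(name, round(years, 1))
# rifle -> ~4.6 years, ball -> ~7.2 years of nonstop spinning
```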
         fluid bearings last forever. i don’t know how they work, but they’re pretty much silent and last
longer than any other fan…and they cost a buttload of money.
         magnetic bearing fans use magnetism like a maglev train. they’re not very popular because
they’re so expensive.

3-pin vs. 4-pin
        3-pin fans plug directly into the motherboard. their speed can be monitored but, on most boards,
not controlled. 4-pin (pwm) fans can be throttled to reduce how loud they are. don’t pay extra for a
4-pin, though – 3-pin is completely fine. it’s what i use.

fan controllers
        the reason that a 4-pin is too much money is that you can use a fan controller to worry about
your fan speeds. it’s a little front-bay deal that allows you to link the speed of your fans to a knob or an
automated sensor. these can be really cheap – just a few knobs – or really expensive – a fully automated
system that adjusts fan speed to reflect interior temperature or to combine the best cooling with the
least noise. amazing stuff. sunbeam sells a few automatic controllers that are cheap but decent.

why not to buy SilenX, or trust dba or rpm readings on most websites
        most rpm readings (and their associated dba readings) are tested in open air, not actually
pushing air into a confined space like fans normally do. this gets worse when you try to use one with
a radiator or something for water cooling. trust independent reviews, not the manufacturers’ numbers.
and just don’t buy SilenX!

12.2.3. companies to trust
         i really like yate loon fans. they’re a little more expensive, but i have six in my computer, and i’m
very happy with their performance. they come in three models, and the M model (medium speed) is
excellent. the H model (high speed) is good as a cpu cooler – something like 120cfm or so at full speed.
read this [url=tests.html]120mm fan test[/url] thread. there’s a lot of useful information there.

12.2.4. heatsinks and when they’re really useful
         chipsets, low-power cpus like the atom, and cases with a lot of air movement get a lot of use out
of tall heatsinks, like a good radiator. they’re not any good if the case doesn’t get any bloody airflow
through it, though.

12.3. liquid cooling
         liquid cooling is just that – all the fans on the components have been replaced with blocks that
allow fluid to flow through them and carry away all the heat on the components. this runs through a
radiator, generally outside the case, that radiates all the heat away. it flows back to a reservoir and then
circulates through the system again. a pump is what circulates it.
         note that i said components. there still need to be fans in the case to cool the non-water-cooled
components – notably the chipset, hard drives, and psu. and, again, your computer can never be cooler
than ambient.

12.3.1. why you don’t need water cooling
        because air is just as good in most systems. if you don’t know, you don’t need it. it’s tricky, and
easy to fry important components. it’s dangerous, because it’ll almost always leak and that means that
your components will fry. it’s annoying, because you’ve gotta clean the loop every 4-6 months to
prevent buildup from algae, fungi, and corrosion. and the fluid breaks down. so does the tubing. and the
waterblocks. and every time you upgrade a component you’ve gotta drain your whole system before
you can remove anything.

12.3.2. why you do need water cooling
        if you’ve got serious cpu hardware under your hood and you want to overclock it significantly,
watercooling is a good idea. this is pretty much the only reason to watercool.

12.3.3. why buying a market-made kit is a really bad idea
        they always leak. always. no exception. there are kits that companies like Petra’s Tech Shop and
Danger Den put together that are made up of components from their own shops, stuck in a bag and
discounted – those are fine. but premade kits from thermaltake or any of those morons are bad news.

12.3.4. blocks, pumps, etc.
        here’s what you might come across when cooling your system with fluids of various kinds.
remember that you’re putting your components’ lives on the line – don’t get a crappy block. there are a
lot of sites out there that describe what makes a quality block for each of the individual major
components.
          all components need barbs, which may or may not come with the blocks or pumps. a barb is a
Christmas-tree-shaped fitting that allows your tubing to grip and not slip.
          lastly, make sure all your components are the same metal, or they’ll rip ions off of each other
like there’s no tomorrow. if you have to use mixed metals, use an anti-corrosion liquid like antifreeze.

cpu blocks
         the most-used block, the cpu block fits directly over your motherboard’s socket. there are
different ‘bests’ for each socket and design, but basically look for the best one you can afford. they
make them specific for quads and dual-cores, too, to focus over the dies inside the chip, so get one that
is specific to your design.

gpu blocks
        these are on a per-configuration basis. while you can buy most big-name cards with a
waterblock already installed, there are a few different variations for the home model. you can get a
block for the entire card (aka, gpu and vram) or just for the gpu. if you just get one for the gpu, it’s a
good idea to get ramsinks, heatsinks designed specifically for use on the video card’s ram. this is usually
enough; full waterblocks are only really needed for monster cards.

ram blocks
         pretty much completely useless. if you’ve got heatspreaders on your ram and fans in the case
you’re fine.

chipset blocks
        in heavy overclocking situations, cooling this might be a good idea. the rest of the time, a good
heatsink (or even a fan-based chipset cooler like the
[url=]the high riser[/url] or (for smaller situations) the
[url=]an enzotech copper cooler[/url] is usually
enough. make sure to get it specifically for your chipset – don’t buy a northbridge cooler for a
southbridge chipset, etc.

hard drive cooling
        useless. stick a fan in the front of the case to blow cool air over it; don’t bother with this.

pumps
         probably the most important component of your water-cooling setup is your pump. there’s a
LOT of controversy surrounding which pump is best. read reviews and find out for yourself, since
there’s so many new components coming out in this sector. just make sure it’s going to be able to
handle the amount of line you’ve got. if you’re cooling everything in your system, you’ll need a larger
one than you’d need with, say, just your cpu.
         if you get too small of one, circulation will be too slow and your cpu will percolate. if you get too
large of one, the fluid will run too fast and your components won’t be able to exchange heat with the
fluid, because it’ll be flowing by quicker than science allows it to pass on the goods.

radiators/radboxes
         radiators are exactly what they sound like – hot fluid comes in, fans push air over fins that allow
it to radiate heat off, cooler fluid leaves. these are where those 120x37mm fans come in REAL handy,
since they can handle the increased backpressure from pushing through the fins. last time i checked, the
HW Labs black ice gt models were pretty popular still – i like the stealth 240. don’t forget to get filters so
you’re not blowing dust into the radiator to get stuck in the fins!
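to see why pump and radiator sizing matter, you can sanity-check how much heat a given coolant flow can carry with basic physics (the flow rate and temperature rise here are assumptions, not measurements):

```python
# heat carried by the coolant: Q (watts) = mass flow * specific heat * delta-T.
flow_lpm = 4.0                   # hypothetical pump flow, liters per minute
mass_flow_kg_s = flow_lpm / 60   # water: ~1 kg per liter -> kg per second
c_water = 4186                   # specific heat of water, J/(kg*K)
delta_t = 5                      # coolant temperature rise across the loop, K

watts = mass_flow_kg_s * c_water * delta_t
print(round(watts))  # -> 1395
```

even a modest flow can carry well over a kilowatt at a 5-degree rise – the real bottleneck is usually how fast the blocks and radiator can move heat into and out of the water, not the water itself.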
        you’ll hear about performance shrouds often. these basically allow you to focus the airflow so
that you don’t waste any pressure. there’s an excellent guide to making one available online.
        radboxes are a cool little thing developed by swiftech. they house the radiator and sit on the
back of the computer, hanging over the obligatory 120mm fan housing. you don’t have to deal with the
radiator being all over the place – you can mount it and forget it (until it leaks).

bong cooling and when it might be for you
         this is an interesting thing that i found a while back. the basic premise of it is that water ditches
heat faster when there’s more surface area. it’s similar to a shower. you’d have a huge (like, 3-6 inches
wide) pvc pipe hanging over the side of your desk that goes down to a pail on the floor – about 5-gallon
size or so. the water from the computer (it’s gotta be water) goes through a showerhead in the top of
the pipe and ‘showers’ down the pipe into the bucket. the pipe can’t go all the way to the bucket or
there’d be no way to actually evaporate the water properly, like you’re supposed to. a huge pump on
the floor circulates it back into the system. there’s usually ping pong balls or something in the bucket so
it doesn’t always sound like someone’s peeing in there. you can see some pictures online.
         if you live in a house that’s really, really hot all the time, and don’t mind a crazy-looking pipe
hanging on your desk, and don’t mind refilling it constantly, this might be for you. it’s pretty high-
maintenance, all told.

reservoirs
          most systems have a reservoir of some kind. it’s a place for the fluid to hang out before it goes
into the loop, and it also helps to prevent air from getting anywhere (air rises, obviously, so stick your
reservoir at the highest point and it’ll collect all the air). they come in a variety of sizes based on what
you need.
          alternatively, if you don’t get a reservoir, get a t-line. it’s just what it sounds like – a t-shaped
fitting. the fluid continues on the long side, and the leg of the t goes up to the top of the computer so
you can fill the loop and have somewhere for the air in the line to go.

tubing
        don’t skimp on tubing or you’ll be sorry later. there’s a variety of different sizes, but the most
common sizes are 1/4”, 3/8”, 7/16”, and 1/2” ID (inner diameter). the thickness (and the OD, outer size)
of each size is different based on how large the ID is. for most systems, i’d suggest using 7/16” tubing
but using 1/2” barbs so that you get a REALLY tight fit on the joints. it makes for a much more leak-proof
loop.
        tubing is always – ALWAYS – secured with tube clamps, also called worm clamps or worm drive
clamps. never forgo them.

knuckles, turns, and why a bend in your tubing can kill your computer
          turns of any kind are dangerous because they present a hindrance in the flow of the tubing. it’s
difficult for the fluid to flow around a tight turn, and it’s nigh impossible for fluid to flow through a
‘knuckle’ in the tubing, also known as a kink. if the fluid can’t flow, it’ll overheat, and the pump will burn
out. remember that tubing gets more flexible when it heats up – so, if you’ve got a tight squeeze, BUY a
90 degree turn fitting or something to prevent having kinks. if you don’t, it might look fine at first but
kink once the tubing heats up.

fluids, additives and why you should be really careful
        you don’t put tap water into a system like this. there’s an art to what fluid goes into a
watercooling solution, including weird stuff like pine sol.

water vs. distilled water vs. deionized water
          tap water’s full of weird crap like algae and chlorine and all that. it’s also, by nature, ionized,
which will dick with your metal waterblocks. so don’t use it. don’t use water from a bottle either.
          deionized water doesn’t work either, because it’ll strip ions off of your waterblocks until they
spring a leak. no algae, but still not good.
          if you must use water, use distilled water. the distillation process kills off anything that might be
living in the water, and it’ll be as ‘pure’ as you can get.

bleach
         some people use bleach as a biocide to kill off algae and the like. they’ll put a few drops in their
loop and let it sit. except that bleach is extremely corrosive, and it’s dangerous to use other than to help
clean out and flush your loop. it’s ok if you run a 25/75 mix of bleach and water (not much more than
that) through your system a few times just to flush it, but rinse with distilled water afterwards or you’ll
be sorry.

mixes and why they’re REALLY dangerous
         often you’ll find people using mixes. it’s easy to get the proportions wrong. antifreeze and
distilled water is common – that’s a 10/90 mix. there are a myriad of others out there – google and
wander around. i don’t want to suggest something because when your system blows up you’ll come
complain to me. bugger off.

purchased fluids
         feser one is excellent, but somewhat costly. each big can lasts a few fills. fluid xp+ ultra is good,
too – there’s a great article on it out there. most purchased fluids are non-conductive, too, which is
awesome. there’s no way you’ll have an issue with your system blowing up if it leaks – it’s just annoying
to plug the hole.

antibacterials, antimicrobials, antifungals, herbicides, biocides, genocides, bioshock…
         if you use water, you need some of this in your system. biocides and herbicides cover living
things like algae and other plant-like growths. antibacterials and antimicrobials are useful if you make
your own distilled water. antifungals are important if you live in a warmer climate.
         i should point out that you only use a few drops per loop – a little bottle the size of your thumb
should last 20 fills or so.
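turning those ratios into actual amounts is easy once you know your loop volume – a sketch with made-up numbers (measure your real loop by filling it with distilled water first):

```python
# split a loop volume by mix ratio; all quantities here are example assumptions.
loop_ml = 700             # hypothetical total loop volume, ml
antifreeze_pct = 10       # the common 10/90 antifreeze/distilled mix

antifreeze_ml = loop_ml * antifreeze_pct / 100
water_ml = loop_ml - antifreeze_ml
print(antifreeze_ml, water_ml)  # -> 70.0 630.0
```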

12.3.5. major manufacturers
        thermaltake, swiftech, hw labs, danger den…that’s about it that i’d buy from. try to make your
whole kit out of the same metal – it prevents issues with alloys eating each other.
12.4. oil immersion
         ah, awesome! it’s a computer in a fishtank! no, really. basically, you immerse your computer in
mineral oil – everything that doesn’t move, that is (platter-based hard drives and optical drives need to
stay out). fans are ok, but drives will burn out from the viscosity of the oil. ssds are fine.
         you can get the oil from a vet (it’s a horse laxative), since you’ll need gallons of it and it’s a lot to
buy at home depot. a fishtank would do fine for the ‘case’, then it’s just a matter of getting everything
plugged in and ready to go before you fill it with fluid. interestingly enough, bubble bars and that kind of
stuff from a fishtank will help the oil circulate, so it’s actually encouraged.
Puget custom computers has a great article on the making of one.
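‘gallons of it’ is easy to estimate from the tank itself – a sketch (the dimensions are an assumption for a small aquarium):

```python
# volume of a rectangular fishtank in US gallons (231 cubic inches per gallon).
length_in, width_in, height_in = 20, 10, 12   # assumed small aquarium dimensions
cubic_inches = length_in * width_in * height_in
gallons = cubic_inches / 231
print(round(gallons, 1))  # -> 10.4
```

so even a small tank wants on the order of ten gallons of mineral oil – hence buying it in bulk.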

12.4.1. why it’s possibly the coolest (as in awesome) thing available
        probably because it’s one of the few cooling technologies in the last few years that actually
works really well, doesn’t cost much, and has little to no labor involved with it.

12.4.2. why it’s actually a really good idea to build a computer in a fishtank
          if you live in a hothouse, this might actually help your monster computer stay cool. in an office –
like, as a secretary’s system – it’s a cool project to have that’ll really show how creative and different
you are =) they don’t even need radiators or anything – just plug and go!

12.4.3. where to buy a custom-built oil immersion pc
         Hardcore PCs sells them for what’s actually a
really good price, all told, particularly considering the hardware you can get. i don’t know if they sell
them with i7s yet. um, and THEY’RE BULLETPROOF. Puget sells system components for building them, as
well, but they’re not bulletproof.

12.5. extreme cooling
        there is some really crazy crap going on to cool computers nowadays. you see people at
overclocking competitions pouring liquid nitrogen into custom copper pots on their cpus. pretty crazy.
here’s some more stuff.

12.5.1. phase change cooling
         wiki’s got a great description of it. here goes.
         a vapor compression phase-change cooler is a unit which usually sits underneath the PC, with a
tube leading to the processor. inside the unit is a compressor, the same type that cools a freezer. the
compressor compresses a gas (or mixture of gases) which condenses it into a liquid (this is really, really
cold!). then, the liquid is pumped up to the processor, where it passes through an expansion device. this
can be from a simple capillary tube to a more elaborate thermal expansion valve. the liquid evaporates
(changing phase), absorbing the heat from the processor as it draws extra energy from its environment
to accommodate this change (the expansion forces the liquid to become a gas, so it sucks heat from
around it). the evaporation can produce temperatures reaching around −15 to -150 degrees Celsius. The
gas flows down to the compressor and the cycle begins over again. this way, the processor can be
cooled to temperatures ranging from −15 to −150 degrees Celsius, depending on the load, wattage of
the processor, the refrigeration system, and the gas mixture used. this type of system suffers from a
number of issues but mainly one must be concerned with dew point and the proper insulation of all sub-
ambient surfaces that must be done (the pipes will sweat, dripping water on sensitive electronics).
        tl;dr? basically, it’s a high-powered refrigerator that sticks directly onto the cpu and cools it to
below freezing thanks to science. but the pipes are so cold that they drip, so you’ve gotta prevent that
from happening.

12.5.2. LN2 (subzero) cooling
         this is only used for short periods of time on consumer computers because it’s so darn
expensive. you’ll always see teams of guys overclocking systems at competitions by pouring LN2 into
custom copper pots on the cpu and gpu. nitrogen evaporates at -196 degrees Celsius, so you get really
cold systems. but it craps out your cpu after a while because of temp stress. you don’t need to worry
about it, but it’s really cool.

12.5.3. peltier/TEC (thermoelectric cooling)
         this is pretty complex crap. basically, TE cooling uses the peltier effect to cool cpus. the peltier
effect (named for jean-charles peltier, a French physicist) is the heating or cooling produced by an
electrical current at the junction of two different conductors. when a current is forced through the
circuit, heat is absorbed on one side and collects on the other. read the wiki article on it for a more
specific explanation; i honestly don’t even really get how it works.
         in terms of consumer computing, you’d stick a peltier cooler directly onto your cpu and it’d
shuttle the heat from the cpu side to a side filled with heatsinks and fans to cool it off. it’s powered
generally by your psu.

12.6. why you can’t just stick your computer in your refrigerator or something
equally stupid
        two reasons. first of all, how long does it take a refrigerator to go from room temp to cold?
several hours. so, how long would it take if the refrigerator was being heated? the compressor would
blow trying to cool air that was being heated right back up. as if that wasn’t bad enough, there’s also
condensation to consider. all the moisture in the air will condense on your components and eventually
fry something, because it’ll conduct power places that it shouldn’t go. it’s the same reason you can’t dip
your system into a pool or something equally idiotic.

12.6.1. because i tried it?
        and burnt out the family freezer in the basement. not to mention my old/crappy system.
