TheDirectData.com Page 1
Contents

1     Introduction
2     History of SCSI/IDE
3     Setting Up SCSI
4     Bus Mastering
5     Bus Mastering and DMA
5.1   Bus Mastering Logistics
5.2   DMA Logistics
6     Comparison of Drives
6.1   Performance
6.2   Configuration and Ease of Use
6.3   Expandability and Number of Devices
6.4   Device Type Support
6.5   Device Availability and Selection
6.6   Software / Operating System Compatibility
6.7   System Resource Usage
6.8   Noise and Heat
6.9   Media Access Speed
6.10  Interface Access Speed
6.11  Comparing Cost
7     Advantage
8     IDE or SCSI?
9     The Future of SCSI
10    Conclusion
11    Bibliography
SCSI is an entirely different interface from the more popular IDE. It is more of
a system level interface, meaning that it does not deal only with disk drives. It is not a
controller, like IDE, but a separate bus that is hooked to the system bus via a host
adapter. A single SCSI bus can hold up to eight units, each with a different SCSI ID,
ranging from 0 to 7. The host adapter takes up one ID, leaving 7 IDs for other
hardware. SCSI hardware typically consists of hard drives, tape drives, CD-ROM drives, and other peripherals.
SCSI's popularity is increasing. Speed seems to be the main reason for this,
although it will be shown further down that this really isn't anything to get excited
about. One advantage is that there are a multitude of hardware types that can use a
SCSI bus. The interface is very expandable, whereas IDE is pretty much limited to
hard drive and CD-ROMs.
The reason for SCSI's slow adoption is the lack of a standard. Each company
seems to have its own idea of how SCSI should work. While the connections
themselves have been standardized, the actual driver specs used for communication
have not been. The end result is that each piece of SCSI hardware has its own host
adapter, and the software drivers for the device cannot work with an adapter made by
someone else. So, due to the lack of an adapter standard, a standardized software
interface, and standard BIOS for hard drives attached to the SCSI adapter, SCSI is
pretty much a mess for the end-user.
Why this article?
Some of you readers may know that I’m an active member of Adaptec's CDR
List. Every now and then, some heavy debates and sometimes even flame wars start
appearing on the newsgroups and this list. The SCSI fans will start summing up the
advantages of SCSI, and start telling everyone that IDE is crap. Then others reply that
their IDE system works fine, and costs only half the money. Of course, both parties
have a point. So I wrote this summary, to get some fables out of the way and put
some true facts on the Web instead. Read through the list of claims to get an
impression. Note that most are neither true nor false...
The History of SCSI/IDE
History of SCSI
What we currently know of as the SCSI interface had its beginnings back in
1979. Shugart Associates, led by storage industry pioneer Alan Shugart (who was a
leader in the development of the floppy disk, and later founded Seagate Technology)
created the Shugart Associates Systems Interface (SASI). This very early predecessor
of SCSI was very rudimentary in terms of its capabilities, supporting only a limited set
of commands compared to even fairly early "true" SCSI, and rather slow signaling
speeds of 1.5 Mbytes/second. For its time, SASI was a great idea, since it was the first
attempt to define an intelligent storage interface for small computers. The limitations
must be considered in light of the era: we are talking about a time when 8" floppy
drives were still being commonly used.
Shugart wanted to get SASI made into an ANSI standard, presumably to make
it more widely accepted in the industry. In 1981, Shugart Associates teamed up with
NCR Corporation, and convinced ANSI to set up a committee to standardize the
interface. In 1982, the X3T9.2 technical committee was formed to work on
standardizing SASI. A number of changes were made to the interface to widen the
command set and improve performance. The name was also changed to SCSI; I don't
know the official reason for this, but I suspect that having Shugart Associates' name
on the interface would have implied that it was proprietary and not an industry
standard. The first "true" SCSI interface standard was published in 1986, and
evolutionary changes to the interface have been occurring since that time.
It's important to remember that SCSI is, at its heart, a system interface, as the
name suggests. It was first developed for hard disks, is still used most for hard disks,
and is often compared to IDE/ATA, which is also used primarily for hard disks. For
those reasons, SCSI is sometimes thought of as a hard disk interface. (I must admit
that placing my SCSI coverage in my own hard disk interfaces section certainly
suggests this as well!) However, SCSI is not an interface tied specifically to hard
disks. Any type of device can be present on the bus, and the very design of SCSI
means that these are "peers" of sorts--though the host adapter is sort of a "first among
equals". My point is that SCSI was designed from the ground up to be a high-level,
expandable, high-performance interface. For this reason, it is frequently the choice of
high-end computer users. It includes many commands and special features, and also
supports the highest-performance storage devices.
SCSI began as a parallel interface, allowing the connection of devices to a PC
or other systems with data being transmitted across multiple data lines. Today, parallel
or "regular" SCSI is still the focus of most SCSI users, especially in the PC world.
SCSI itself, however, has been broadened greatly in terms of its scope, and now
includes a wide variety of related technologies and standards, as defined in the SCSI-3
family of standards.
It is intended that this aspect of SCSI-3 will accelerate the development of
future SCSI implementations. Another place SCSI-3 is hoping to go is the removal of
the current 15-device limit imposed on wide SCSI. The plan is to offer a "2 phase"
addressing system that sends the higher order selection bits in the first phase and then
the final selection bits in a second addressing phase. As a result, up to 255 devices on
a narrow SCSI bus or 1023 on a wide bus could be accessed.
SCSI Implementation        Bus Width (bits)   Burst Speed (MB/s)
SCSI-1                     8                  5
Fast SCSI                  8                  10
Fast Wide SCSI             16                 20
Ultra SCSI                 8                  20
Wide Ultra SCSI            16                 40
Ultra 2 SCSI (LVD)         8                  40
Wide Ultra 2 SCSI (LVD)    16                 80
Wide Ultra 3 SCSI (LVD)    16                 160
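The pattern in this table is regular enough to compute: the burst rate is just the bus clock times the bus width, doubled for Ultra 3's double-transition clocking. Here is a small sketch; the clock figures are my assumption of the nominal signaling rates for each generation, so treat them as illustrative:

```python
# Sketch (assumed model): burst rate = bus clock (MHz) x bus width (bytes)
# x transfers per clock. Wide SCSI doubles the width; Ultra 3 ("Ultra160")
# clocks data on both signal edges, hence two transfers per clock.

def burst_mb_s(clock_mhz, width_bits, transfers_per_clock=1):
    """Burst transfer rate in MB/s."""
    return clock_mhz * (width_bits // 8) * transfers_per_clock

rates = {
    "SCSI-1":            burst_mb_s(5, 8),        # 5 MB/s
    "Fast SCSI":         burst_mb_s(10, 8),       # 10
    "Fast Wide SCSI":    burst_mb_s(10, 16),      # 20
    "Ultra SCSI":        burst_mb_s(20, 8),       # 20
    "Wide Ultra SCSI":   burst_mb_s(20, 16),      # 40
    "Ultra 2 SCSI":      burst_mb_s(40, 8),       # 40
    "Wide Ultra 2 SCSI": burst_mb_s(40, 16),      # 80
    "Wide Ultra 3 SCSI": burst_mb_s(40, 16, 2),   # 160
}
```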
History of IDE
IDE replaces older interfaces such as ST-506 and ESDI. Through the years,
many changes have been made to the IDE standard as defined by ANSI.
The original standard, called simply ATA, allowed for 2 devices on the same
channel configured as master and slave. It also defined PIO modes 0, 1 and 2, DMA
single word modes 0, 1 and 2, and multiword mode 0. However, this standard
had problems. Often drives by different manufacturers wouldn't work if combined on
a single channel as master and slave. ATA-2 added the faster PIO modes 3 and 4
(mode 4 being the common default PIO mode for modern PCs), faster DMA
multiword modes 1 and 2, the ability to do block mode transfers, Logical Block
Addressing or LBA, and improved support for the "identify drive" command that
allows the system to interrogate the drive for manufacturer, model and geometry.
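To make the LBA idea concrete, here is the standard mapping from cylinder/head/sector addressing to a linear block number; the geometry values in the example are hypothetical:

```python
# The standard CHS-to-LBA mapping behind Logical Block Addressing.
# Sectors are numbered from 1 within a track; LBA counts blocks from 0.

def chs_to_lba(cylinder, head, sector, heads, sectors_per_track):
    """Flatten a (cylinder, head, sector) address into a linear block number."""
    return (cylinder * heads + head) * sectors_per_track + (sector - 1)

# With a hypothetical 16-head, 63-sector geometry:
chs_to_lba(0, 0, 1, heads=16, sectors_per_track=63)   # first sector -> LBA 0
chs_to_lba(1, 0, 1, heads=16, sectors_per_track=63)   # -> LBA 1008
```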
The terms "Fast ATA" and "Fast ATA-2" are the inventions of Seagate and
Quantum. They are not really standards and only denote drives that are compliant with
all or part of the ATA-2 standard. ATA-3, however, was a real standard that improved
reliability and defined the SMART feature in disk drives. The current Ultra ATA or
UATA followed it. UATA also goes by many other names, such as UDMA and DMA-33/66.
UATA isn't really a new standard, and UATA drives are still backward
compatible with ATA and ATA-2 systems. Ultra ATA is the term given to drives that
support the new DMA modes that provide up to 33 MB/s (UDMA-33) or up to 66
MB/s (UDMA-66) transfer rates, with 100 MB/s just over the next hill. Both UDMA
versions support CRC error checking that assures data integrity through the IDE cable,
which was a source of serious problems in previous standards. Note that the UDMA-66
standard calls for an 80-conductor cable instead of the 40-conductor cable used up
to and through UDMA-33.
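As a sketch of the kind of check involved, here is a bitwise CRC-16 of the sort Ultra DMA applies to each burst. The polynomial (x^16 + x^12 + x^5 + 1) and the 4ABAh preset are the parameters published for Ultra DMA in the ATA specs, but verify the exact bit ordering against the standard before relying on this:

```python
# Bitwise CRC-16 over 16-bit data words, the style of per-burst check that
# Ultra DMA added to catch cable corruption. Polynomial and seed follow the
# ATA spec's published Ultra DMA values; treat the details as illustrative.

def udma_crc16(words, poly=0x1021, crc=0x4ABA):
    for w in words:
        crc ^= w & 0xFFFF
        for _ in range(16):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Any change to the transferred words changes the checksum:
udma_crc16([0x1234, 0xABCD]) != udma_crc16([0x1234, 0xABCC])   # True
```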
EIDE or Enhanced IDE is a designation created by Western Digital to describe
its newer line of high speed drives. It really isn't a standard at all, but just a marketing
tool. However, it has taken on common public use to refer to all high-speed drives and
the systems that support them.
Setting Up SCSI
Defining a SCSI Configuration
Making SCSI work involves these items and issues. This is just an overview;
we’ll examine each of these issues in detail.
SCSI Host Adapters
The host adapter must be compatible with the system’s bus.
There are different versions of SCSI: SCSI-1, SCSI-2, and SCSI-3. The host
adapter must be able to support the same level of SCSI as the peripherals.
When you are buying a SCSI host adapter, you are buying the ambassador
between all those expensive, fast peripherals you bought and your CPU and memory.
Now, more than ever, bus mastering and 32-bit interface SCSI host adapters are worth
the investment.
The host adapter must be assigned a SCSI ID between 0 and 7, or 0 and 15 on
a single channel SCSI-3 host adapter (most common for PCs).
The host needs a SCSI driver that follows the same SCSI standard as the SCSI
drivers on the peripherals. There are three standards here: ASPI, CAM and LADDR.
These are not related to SCSI-1, SCSI-2, SCSI-3; this is a different dimension of SCSI compatibility.
SCSI Compatible Peripherals
Each SCSI peripheral needs a SCSI driver that is compatible with the SCSI
standard on the host’s SCSI driver.
All SCSI devices follow SCSI-1, SCSI-2, or SCSI-3 cable standards; it may be
important to know which level the device follows when installing it.
Each peripheral must be assigned a unique SCSI ID from 0 to 7, or 0 to 15 for
single channel SCSI-3.
Devices can be mounted internally or externally.
Some devices have optional built-in termination; others may have termination
that's not optional (that is very bad, but more common than you would hope), and still
others do not include termination of any kind.
Terminated devices should support active termination, but some will support
only passive termination.
Devices should be able to be daisy-chained, so they should have two SCSI
connectors if they are external.
There are several kinds of SCSI cables and even variations between SCSI-1,
SCSI-2, and SCSI-3.
You cannot run cables more than a few feet, or the SCSI signal degrades and
the peripheral does not work.
Each SCSI system needs two terminators, one on each extreme end of the
chain of devices.
There are two kinds of terminators: passive and active. You are supposed to be
able to use either type for SCSI-1 or SCSI-2, but active is usually a better idea. SCSI-3
requires active termination.
Some devices have built-in terminators; others require separate terminators.
The host adapter probably supports termination of some kind. This should be
considered when terminating the system.
The SCSI system will need a driver for each SCSI device, as well as a driver
for the host adapter.
SCSI hard disks may not require drivers; instead, they may have an on-board
BIOS that serves that function.
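The configuration rules above boil down to a few checks. Here is a toy sketch; the device names and the narrow-bus ID limit of 7 are illustrative assumptions:

```python
# Sketch: sanity-checking a SCSI chain against the rules above (unique IDs
# in range, exactly the two ends of the chain terminated).

def check_scsi_bus(devices, max_id=7):
    """devices: list of (name, scsi_id, terminated) tuples, in chain order."""
    ids = [d[1] for d in devices]
    assert all(0 <= i <= max_id for i in ids), "SCSI ID out of range"
    assert len(ids) == len(set(ids)), "duplicate SCSI IDs"
    terminated = [d[0] for d in devices if d[2]]
    assert terminated == [devices[0][0], devices[-1][0]], \
        "exactly the two ends of the chain must be terminated"

check_scsi_bus([
    ("host adapter", 7, True),    # host adapter is usually one end
    ("hard disk",    0, False),
    ("CD-ROM",       5, True),    # last device on the chain
])
```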
SCSI Physical Installation
Actually putting SCSI adapters and peripherals into your system is pretty much
the same whether you are using devices that are SCSI-1 or SCSI-2 or you are using the
ASPI, CAM, or LADDR standards. (I know I have not explained these yet; hang on for
a minute and I will get to them, I promise.) So let's look at physical installation first.
Basically, putting a SCSI device into the PC involves these steps.
1. Choose a SCSI host adapter, if you don’t already have one.
2. Put a SCSI host adapter into a PC.
3. Assign a SCSI ID to the peripheral. This is usually done with a jumper or a switch.
4. Enable or disable SCSI parity on the peripheral.
5. If the peripheral is an internal peripheral, mount it inside the PC.
6. Cable the peripheral to the SCSI host adapter.
7. Terminate both ends of the SCSI system.
Terminating the SCSI Chain
Before popping the top back on your PC, there’s one more thing that needs
doing; you must terminate the SCSI chain.
Whenever we discuss termination in class, people start referring to killer
cyborgs, but it just means providing a voltage and resistance on either end of a cable,
so that the entire bus has a particular set of electrical characteristics. Without this
resistance, the SCSI cables cannot transport data without significant error rates. (It will
work sometimes, despite what some people claim, but it won't work reliably.)
Active and Passive Terminators
SCSI-1 specified two kinds of termination, active and passive. Active was
pretty much ignored until SCSI-2 became popular. Passive terminators are just a
resistor network; if you are interested in how they work electrically, see the figure below.
If you don't speak schematic, don't sweat it: the jaggy-looking things are
resistors, TERMPWR refers to a source of electricity, and the triangle standing on its
head represents electrical ground, the place that the power from TERMPWR is
seeking to go. The main thing to
notice is that the only thing going on here is a resistor; there are no chips, and no
amplification is going on. Passive termination is basically an adaptation of the simple
terminators found in ancient floppy disk or ST506 systems. Back in the SCSI-1 days,
the designers figured that the major use for SCSI would be to hook up a couple of hard
disks to a host adapter, all with no more than eight inches to a foot of cable.
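For the curious, the arithmetic behind a passive terminator is just a voltage divider. The 220/330 ohm values below are the classic SCSI-1 passive network on each signal line:

```python
# Worked example: the classic SCSI passive terminator places 220 ohms from
# the line to TERMPWR and 330 ohms from the line to ground. The Thevenin
# equivalent gives the idle line voltage and termination impedance.

def passive_term(termpwr_v=5.0, r_up=220.0, r_down=330.0):
    """Return (idle line voltage, Thevenin impedance) of the divider."""
    v = termpwr_v * r_down / (r_up + r_down)
    z = (r_up * r_down) / (r_up + r_down)
    return v, z

v, z = passive_term()   # about 3.0 V and 132 ohms with a 5 V TERMPWR
```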
If, on the other hand, you intend to run longer cables and put a lot of stuff on
them, then you need some kind of booster for termination power; that's active
termination. It's represented schematically in the figure below.
(Figure: active terminator schematic using the LT 1086 chip)
There's really no point in installing a SCSI host adapter that is not:
1) A PCI card. Trying to get DAW performance from an ISA SCSI card is like
trying to pull an elephant through a toilet seat. Don't bother!
2) Configured for bus mastering. Although it is possible to still buy a PIO-only
SCSI adapter, and at not exactly a modest price either, don't even think about
it. If you're going to be spending the extra money to go SCSI, do it right.
3) Ultra SCSI. There is no point in going all the way to the ocean with your
bathing suit on and not jumping into the water. If you want to implement SCSI
on your system, do so with an eye to the future. Besides, the entry level for an
Ultra SCSI host adapter is no more than $80 or so, with Ultra 2 Wide running
about $180. Take a look at the price comparison charts in the COMPARING COST section.
If you are thinking of building a DAW from the ground up, perhaps the wise
choice for implementing SCSI is to do it at the motherboard level. Several good
motherboards support SCSI right on the board much the same way motherboards
support IDE. Just plug your SCSI controller cable into the SCSI port on the
motherboard and then run the other end to your drive(s).
Don't forget to terminate the cable at your last SCSI device. All signals on the
bus must be terminated with resistors at the bus ends to avoid electrical reflections.
This is achieved either by a switch on the device (not always present) or by placing an
external terminator block on the connector of the first and last devices on the bus.
Often, the first device will be the host adapter itself.
Installing the SCSI drivers is best left up to Windows, which will see the SCSI
controller and set up the drivers for you through Plug And Play. SCSI adapters are
PIO, DMA, or bus mastering, and the user can't choose the mode. If you have a bus
mastering SCSI adapter, the driver only works in bus mastering mode. SCSI
configuration is somewhat more complex, as there are many configurable options such
as disconnect strategy, SCAM, LUN, BIOS emulation, etc. All of this should be
explained in the adapter installation guide.
It should be noted that some users have reported problems using some host
adapters with some motherboards and chip sets. MVP3 and Aladdin V chipsets have
fallen into question, although there seems to be no problem using a motherboard with
the workhorse Intel BX chip set. There have also been comments made about some
AGP cards being so power hungry that on some motherboards they rob the PCI bus of
the needed juice to reliably run high-end host adapters. Not-so-high-end adapters may
not complain about the chip set or video card's appetite.
By default, IDE disk drives transfer data to and from the system using a
protocol called "Programmed Input/Output" or PIO. This technique requires the CPU
to get into the middle of things by executing commands that shuffle the data to or from
RAM and the drive. Thus, the CPU is tied up doing the work of fetching and stuffing.
Also, the time overhead involved in putting data in the cache, reading each byte into
the CPU, sending it out to the cache again and then routing it to its destination puts a
top end to the speed of the transfers.
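In code terms, PIO looks something like the following loop, with the CPU touching every word itself. The register access here is a stand-in for the drive's data port, not a real driver API:

```python
# Sketch of a PIO-style transfer: the CPU reads each word from the drive's
# data register and stores it to RAM itself, so it is busy for every word.

def pio_read_sector(read_data_register, ram, dest, words=256):
    """Copy one 512-byte sector (256 16-bit words), one word per CPU loop."""
    for i in range(words):
        ram[dest + i] = read_data_register()   # CPU does the fetching and stuffing

# Toy demonstration with a fake drive register:
sector = iter(range(256))
ram = [0] * 1024
pio_read_sector(lambda: next(sector), ram, dest=0)
```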
In typical desktop systems this isn't much of a problem. The system doesn't
have much to do during these transfers anyway, so who cares? Even if a user has
several applications open at once, seldom is more than one actually doing anything,
and during disk I/O, the application will likely be idle anyhow.
Now suppose you have an activity known as "streaming" going on which is
pulling lots of data from the drive in real time while the application doing the
streaming is simultaneously attempting to process the data as it arrives. Wow! Now
we have a problem. The CPU really does have lots to do while data is being
transferred, and so getting tied up actually DOING the transfers cuts into application
performance.
In all fairness, even at the fastest rate, a disk drive couldn't pump enough data
to or from memory fast enough to cause modern high speed CPUs to break into a
sweat. Even at this high demand level, there is time to shuffle data, process that data,
shuffle it back, service interrupts, update the screen, send a byte to the modem, and so on.
Enter the DAW
Now we have a whole new ball game. Not only is the digital audio application
trying to stream data and process it in real time, it also needs to stream multiple files
for multi-track mixing at the same time and still supply CPU horsepower to real time
effects like reverbs and compressors. This forces a limit on the number of tracks in the
mix and the number of real time effects that the project can sustain when attempting to
perform real time production.
Under this load, even a Pentium 500 will fall short of the goal if it has to contend
with PIO along with all of this other processing. If you want to mix more than 6 or 7
tracks using more than a few parametric EQs and one reverb, you will need to free up
some major CPU cycles!
The answer is to put the load of data I/O someplace else so the CPU can just go
to RAM and expect to find the data already there and process it. This is the idea
behind DMA or Direct Memory Access. Using DMA, a system splits the
responsibility of data communication among several intelligent sub-systems so each
can do a specialized job very well.
DMA may seem like a new idea, but actually it has been around since before
the first PC was ever designed. In the PC, sound cards, floppy drives and even SCSI
controllers have been using DMA on the ISA bus for a long time. This method
requires a DMA controller chip to referee the transfers between the devices and RAM and
thus is called "third party" DMA.
However, the ISA bus is slow. This doesn't bother low-throughput devices like
the floppy drive and simple sound cards, but to make DMA effective for high-speed
disk drives, the ISA bus is useless. The world had to wait for the development of the
"local bus" to get the job done. This local bus technology is being implemented today
on newer motherboards by the PCI bus.
With PCI, third party DMA is fast enough to become a useful disk access
alternative to PIO. Another feature of the PCI bus is the ability of a connected device
to take control of the bus and perform transfers without the use of a DMA
controller chip. This is referred to as "first party" DMA or, more commonly, bus
mastering. Using bus mastering, the peripheral device can access system memory the
same way as the CPU itself.
Just about everything on the PCI bus (and its offshoot, the AGP connector) can
use bus mastering if the designers wish it to. This includes Ethernet controllers, sound
cards, Win-modems, display adapters, and so on, although due to little demand for
high speed data transfer by these adapters, most of them still stick to PIO. It's
important to understand that disk controllers are the bus master devices on the PCI
bus, not the drives themselves. However, for most disk controllers to be operated in
bus master mode, they require that the drives themselves at least support multiword
DMA mode 2 so the data handshaking controls can be implemented between the drive
and the bus-mastering controller.
Bus mastering, being an advanced form of DMA, demands very specific
motherboard chip set support as well as specific support from the hardware attempting
to use it. The operating system must also be able to support it by loading special "bus
mastering aware" drivers. This may sound rather complicated, and it is. However, the
gains in data transfer speed and CPU overhead reduction associated with bus
mastering are such that there is no way modern digital audio applications could perform
acceptably without it.
Luckily, Intel and Windows support it on the board and in the system and most
if not all SCSI and IDE controllers can operate using it. Don't think that these
manufacturers went through all of this trouble just for us musicians. Be assured, they
didn't. This improvement was to facilitate network server applications. However, we
can also reap the benefits of this technology.
A Shaky Start
Almost all modern SCSI controllers connect to the PCI bus and use bus
mastering. This has been SCSI's largest advantage in terms of DAW performance, but
that all changed with the advent of IDE's entry into PCI bus mastering. For the record
though, from this point on, we will limit the definition of PCI bus mastering to a
system whereby the IDE controller transfers data to and from the drive using an
enhanced DMA protocol. It is usually referred to as Ultra DMA (or DMA-33/66) or
Ultra ATA (UATA or ATA33/66).
In the past, there were a lot of problems getting this to work. The Intel drivers
shipped with many motherboards were "behind the curve" in terms of functionality
when compared to the Intel drivers installed with Windows. Also, Windows 95 didn't
start off supporting bus mastering. Upgrading to 95B was necessary to provide this
feature. The same goes for NT4. Service Pack 3 must be installed to provide bus
mastering. Many people were tempted, not knowing any better, to install the drivers
shipped with the motherboard during setup because, well, they were shipped with the
motherboard! It seemed the thing to do.
Unfortunately, these drivers gave poor performance, and sometimes none at
all. Even after discovering the mistake and attempting to remove them from Windows,
they wouldn't de-install cleanly. This left no alternative but to wipe the drive and re-
install Windows and all of the software that came after it. Not fun! As it turns out, just
using the native Windows drivers seems the way to go.
There was some early confusion as to which drives would or would not work
under bus mastering. Since there are currently two types of Ultra DMA in common
use, UDMA-33 and UDMA-66, one needs to check the specs. With UDMA-33, this
isn't much of an issue any more as almost any disk drive manufactured in the past 2
years is capable of multiword DMA mode 2 or better transfers and thus will run under
bus mastering. The same can be said for current motherboards. Most using the Intel
430 FX, HX, VX, TX or 440 FX, LX, EX, BX, GX Pentium chip set will support bus
mastering as well as those using the VIA chip set. Naturally, the Intel 810, 820 and
840 chip sets support bus mastering, but this chip set family is plagued with problems
in the memory department, and so at this point a DAW using a motherboard with
any of these three chip sets is a dicey matter.
Make sure that both the motherboard and disk drive support the newer UDMA-66
if you want this higher performance transfer feature. UDMA-33 will use the same
IDE cable between the drive and motherboard as the older PIO system, so if you
currently have a newer drive and motherboard but don't use bus mastering, usually all
you need to do is go to Windows and switch it on. As mentioned above, UDMA-66
uses a different cable and chip set, so you must make some real effort to upgrade to
UDMA-66 from PIO or UDMA-33 even if the drive supports it. If your current
motherboard isn't UDMA-66 capable, you can get a separate IDE controller board
designed for UDMA-66 which plugs into your PCI bus to get UDMA-66 up and
running on your current system.
What is the big deal with the new cable, you ask? As it turns out, the cable is
80 conductors instead of the usual 40 conductors. Both ends still have 40 pin
connectors. Huh? Here's the deal. The extra 40 wires are grounds and lie in between
the other 40 signal lines acting as shielding. This reduces cross talk on the lines and
enhances reliability. UDMA-66 drives will not function at 66 MB/s without this
80-conductor cable, and will default back down to 33 MB/s if they sense a 40-conductor
cable. On the other hand, using an 80-conductor cable on a UDMA-33 drive will
likely enhance its performance too, due to the more reliable connection and thus fewer errors.
As one example of the kinds of things that can go wrong, this is an experience
Glen, one of this article's authors, had setting up his new DAW.
When I set up my first DAW system 2 years ago, I picked up one of the new
Western Digital 13 gig drives and tried to set it up for bus mastering. When I tried, the
stupid thing kept defaulting to DOS mode! Nothing I did helped until a friend
suggested I poke around on the WD web site for clues.
I hunted for quite some time until I came across an obscure reference to the
fact that all of these new drives were being shipped enabled for UDMA-66 by default.
If a user wanted to use UDMA-33 instead, they needed to download this little program
that will talk to the drive and tell it to switch modes. Fancy that! I downloaded and ran
the utility. Within a few moments I had bus mastering running and a benchmark
reading of 3.53% CPU usage for streaming transfers and an estimated track count of
over 80 tracks of digital audio.
I understand that these drives are no longer being shipped with UDMA-66 as
the default. I wonder if my letter had anything to do with that!
To make things a bit more livable, almost all UDMA-66 drives made today
will auto-switch between 33 and 66 depending on the abilities of the controller and the
cable. Incidents like the one described above are now, hopefully, a thing of the past.
After all, drive manufacturers WANT you to buy these new drives regardless of
whether or not you can use the enhanced throughput. This way, they only need to
make one type of interface for their drives. Again, this isn't to make our lives easier,
but we can reap the benefits anyway.
Bus Mastering and DMA
What is the difference between regular DMA and bus mastering?
Bus Mastering Logistics
First, let's look at bus mastering again but from a DMA point of view. A bus is
a data transport. Bus mastering is a very advanced means of transporting data to and
from devices and/or memory using the PCI bus as a conduit.
A device that issues read and write operations to memory and/or I/O slave
devices is considered the master, although a master device can have slave memory
and/or I/O ports available to be accessed by other masters. For example, an Ethernet
controller must convey data it receives from over the LAN and must also access data
to send over the LAN as a bus master, but acts as a slave when the CPU, acting as a
master, programs it to initialize and to specify where it must get and put data.
Only one bus master can own, or "drive", the bus at a given instant, and the bus
arbiter is responsible for arbitrating bus master requests from the various bus master devices.
A bus master device will request access to the bus, which is granted immediately
providing no other master has it at the moment. If another master device has been
granted access, the new one must wait until the first one completes its single or burst
transfer, or the bus arbiter times out and yanks the access away in favor of the new
requesting master, whichever happens first.
If an operation is interrupted by a timeout, it is resumed when that issuing
master receives its turn again. The CPU is a bus master device, and is always present.
The Intel PIIX families of IDE controllers found in all modern Intel chipsets for the
x86 families are bus master devices. The SoundBlaster Live! is a bus master device
that accesses main memory through the bus to read samples. There are many
peripherals which use bus mastering on the PCI bus to free the CPU from actually
doing every transfer, for example, video cards, network cards, SCSI controllers, other
storage devices, and so on. Note that bus-mastering transfers do not require and
therefore do not tie up the DMA channels like normal DMA devices do.
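The request/grant/timeout dance described above can be modeled with a toy arbiter. The timeout counter here is a deliberate simplification of the real bus arbiter's timing:

```python
# Toy model of PCI-style arbitration: a request is granted at once if the bus
# is free; otherwise the requester waits until the current master finishes or
# the arbiter times it out, whichever comes first.

class BusArbiter:
    def __init__(self, timeout=4):
        self.owner, self.held, self.timeout = None, 0, timeout

    def request(self, device):
        if self.owner is None:           # bus free: grant immediately
            self.owner, self.held = device, 0
            return True
        self.held += 1
        if self.held >= self.timeout:    # preempt a long-running master
            self.owner, self.held = device, 0
            return True
        return False                     # must keep waiting

    def release(self):
        self.owner = None

bus = BusArbiter()
assert bus.request("CPU")                  # granted: bus was idle
assert not bus.request("IDE controller")   # CPU still holds the bus
bus.release()
assert bus.request("IDE controller")       # granted once the bus is free
```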
Normal DMA is controlled by a chip. The DMA chip itself is a bus master device. It
can be programmed by the CPU to perform transfers from memory to I/O, or I/O to
memory (some also allow memory to memory, but that is not the case with the PC,
although two DMA channels can be used to do that given some fancy driver
footwork). Therefore, the DMA system acts as a bus master to perform the
programmed operation while the CPU can be doing something else. The DMA
controller sends a signal to the CPU when the transfer is complete.
DMA is used to perform transfers without CPU intervention to or from
peripherals that don't have bus master capabilities. DMA issues accesses similar to
standard bus I/O accesses, but with the addition of handshaking lines DMA_Request
and DMA_Acknowledge. These signals are present on the bus for each DMA channel.
A slave device must handle these handshaking lines to be able to be operated through
DMA. Obviously, this is a much simpler system than having to support all the
complex and necessary logic in a bus master device.
The main limitations of a DMA capable slave compared with a bus master device are:
1) The DMA slave is passive. It is the CPU which must specify the transfers to be
done. The bus master device can perform transfers on its own initiative without CPU
intervention.
2) DMA can only transfer blocks of contiguous memory content, and only one block
for each programmed transaction. The bus masters can access memory or I/O
following any pattern without restriction.
3) In the case of the PC, the DMA device can only transfer blocks of up to 64 Kbytes,
and always on 64 Kbytes boundaries, which limits its utility. In older PCs, the DMA
system could only access the first megabyte of memory. Later it was extended to the
first 16 megabytes and currently the DMA device can more often access all memory,
but always within 64 Kbytes boundaries for each operation.
4) DMA is generally slower, although there are new faster modes and burst timing
modes achieving considerable throughputs. The slaves must specifically support these
modes in order to use them. The original Intel 8237 DMA controller was extremely
slow. So slow that disk transfers were more efficiently done by the CPU using PIO
mode 4 because DMA would become the bottleneck. In the best theoretical case (that
was never meet) it could only transfer 4 MB/s. The reality was more like 1 MB/s.
Comparison of Drives
We looked at the two contending controller formats in the last sections, but
that's just an overview. What about the specifications? What do you need to know
about a drive's performance in order to make an intelligent choice regardless of which
format you're interested in?
To an extent, the drive format you have already committed to will be a big
factor. If you don't want to support a large number of drives and CD devices, IDE will
look like the best path to follow and SCSI will be much less appealing. If you already
have SCSI, then the choice is clear.
If you are building from scratch, you should at this point have a good idea
what CPU you would like to run, how much RAM you will need, what you feel is
right as far as video, sound, and perhaps LAN cards go, and if you want an internal
modem. Your choice of motherboards and disk system are now at issue.
Should you spring for SCSI, and should you get a motherboard with built-in SCSI?
Is IDE the best way to go?
Does it really matter?
We can't answer these questions for you, but we will give you better tools for
reaching that decision yourself with less dependence on the common DAW disk
superstitions, misconceptions and other people's unfounded prejudice.
Performance
Comparing the performance of the SCSI and IDE/ATA interfaces is not an easy task.
While those who favor SCSI are quick to say that it is "higher performance" than
IDE/ATA, this is not true all of the time. There are many different considerations and
performance factors that interact when considering the performance equation, because
performance is so dependent on system setup and on what is being done with the PC. I
will try to look at some of these factors and how they influence system performance
for both interfaces:
Device Performance: When looking at particular devices, there is
theoretically no difference between SCSI and IDE/ATA. The device itself
should be the same in terms of its internal performance factors. In practice, this
is rarely the case. Many manufacturers only make a particular drive as SCSI or
IDE/ATA, not both, so direct comparisons aren't easy. Since SCSI is known to
be the choice for those seeking performance, higher-performance drives tend to
show up on the SCSI interface well before they do on IDE/ATA (you pay for
this performance, of course, but that's a separate issue). Another issue is the
implementation of the integrated device controller logic and the interface chip.
Some companies that produce the same device (the physical hard disk
assembly, for instance) for both SCSI and IDE/ATA may do a much better job
of writing the control logic for one interface than for another. In general, SCSI
drives offer higher performance than IDE/ATA ones.
Maximum Interface Data Transfer Rate: The interface or external data
transfer rate describes the amount of data that can be sent over the interface. As
described here, it's important not to place too much emphasis on the interface
transfer rate if you are only using a small number of devices. Comparing SCSI
and IDE/ATA, both interfaces presently offer very high maximum interface
rates, so this is not an issue for most PC users. However, if you are using many
hard disks at once, for example in a RAID array, SCSI will offer better overall
performance.
Single vs. Multiple Devices and Single vs. Multitasking: For single devices,
or single accesses (as in DOS), in many cases IDE/ATA is faster than SCSI,
because the more intelligent SCSI interface has more overhead for sending
commands and managing the channel. If you are just using a single hard disk,
or doing simple work in DOS or Windows 3.x where everything happens
sequentially, most of the benefits of SCSI are lost. For multitasking operating
systems, especially where transfers are occurring between multiple devices,
SCSI allows multitasking and command queuing and reordering, which
enables devices to set up multiple transactions and have them take place
basically simultaneously. In contrast, IDE/ATA transactions to one device
"block" the channel and the other device cannot be accessed. Putting two
devices on two different channels allows simultaneous access, but severely
restricts expandability. IDE/ATA still does not have the advanced features that
SCSI has for handling multiple devices.
Device-Mixing Issues: IDE/ATA channels that mix hard disks and CD-ROMs
are subject to significant performance hits in some situations, due to the fact
that these are really different protocols on the same channel. SCSI does not
have this problem.
Technological Currency: IDE/ATA has one big advantage over SCSI in
terms of performance, if cost is a consideration. Both interfaces are constantly
being updated to offer faster performance, both in terms of the interfaces
themselves and the drives produced for them. However, to take advantage of
these improvements requires additional hardware purchases. For SCSI, the
extra investments are much more costly than IDE/ATA. If one is on a limited
budget, it could well be argued that staying current with IDE/ATA technology
will offer better long-term performance than going with SCSI but only being
able to upgrade every three or four years due to cost considerations.
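The single-device vs. multitasking contrast described under "Single vs. Multiple Devices" can be illustrated with a deliberately crude timing model. The millisecond figures are invented, and the SCSI side is an optimistic simplification of disconnect/reconnect, but the shape of the result is the point:

```python
def ide_time(requests):
    # IDE/ATA: a transaction to one device blocks the channel for its full
    # duration, so seeks and transfers all happen back to back.
    return sum(seek + xfer for seek, xfer in requests)

def scsi_time(requests):
    # SCSI (optimistic model): disconnect/reconnect lets every device seek
    # at the same time; the bus then serializes only the data transfers.
    return max(seek for seek, _ in requests) + sum(xfer for _, xfer in requests)

# Two devices, each needing a 10 ms seek and a 5 ms data transfer.
jobs = [(10, 5), (10, 5)]
print(ide_time(jobs), "ms on a blocking IDE channel")
print(scsi_time(jobs), "ms on an overlapping SCSI bus")
```

With a single device the two models collapse to the same number, which is exactly why SCSI's advantage disappears in single-tasking use.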
Configuration and Ease of Use
Much like the performance issue, the winner here depends on how many
devices you want to use. Both IDE/ATA and SCSI have had a rather spotty history in
terms of their ease of setup and configuration, and both are much better today than
they have been in the past. Overall, I would say that IDE/ATA is easier to set up,
especially if you are using a reasonably new machine and only a few devices.
IDE/ATA support is built into the BIOS, and there are fewer issues to deal with: far
fewer different cable types, no bus to terminate, only one type of signaling, fewer
issues with software drivers, and in general fewer ways for you to get yourself into
trouble.
The difference between the interfaces is, if anything, increasing. Over the past
few years IDE/ATA has in many ways become simpler to deal with, as manufacturers
have agreed on standards and fixed problems with drivers and support hardware. SCSI
has gotten more complex, especially now that new hard disks use LVD signaling,
which is more complex to set up.
The configuration simplicity advantage for IDE/ATA drops off quickly if you
want to get maximum performance while using more than a few devices. You then
have to worry about where they are being placed on the channel, finding IRQs and
other resources for multiple channels, etc. This can be done without too much
difficulty, but there are many different things to take into consideration. In contrast,
once SCSI is set up, you can put 7 devices on the bus (or 15 for wide SCSI) with very
little effort, although you do have to watch the termination as you expand the bus.
SCSI has a significant advantage over IDE/ATA in terms of hard disk
addressing issues. While IDE/ATA hard disks are subject to a host of capacity barriers
due to conflicts between the IDE/ATA geometry specifications and the BIOS Int 13h
routines, SCSI is not.
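Those capacity barriers fall straight out of the geometry arithmetic. A quick sketch of the well-known limits:

```python
def chs_capacity(cylinders, heads, sectors, bytes_per_sector=512):
    # Capacity addressable by a given cylinder/head/sector geometry.
    return cylinders * heads * sectors * bytes_per_sector

# BIOS Int 13h allows 1024 cylinders, 256 heads and 63 sectors per track.
print(chs_capacity(1024, 256, 63) // 2**20, "MiB")  # roughly the 8 GB barrier
# Combined with the ATA limit of 16 heads, only the smallest of each field
# survives -- the famous 504 MiB barrier of early IDE systems.
print(chs_capacity(1024, 16, 63) // 2**20, "MiB")
```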
Expandability and Number of Devices
On this particular score there isn't much of a competition: SCSI beats
IDE/ATA hands down. While SCSI is more involved and expensive to set up, once
you make the appropriate investments of money and time you get a bus that can be
expanded relatively easily to either 7 devices (with narrow SCSI) or 15 (with wide
SCSI, which is what most new systems use). On the other hand, IDE/ATA normally
supports only four devices; you can expand this to eight if you add an after-market
IDE/ATA controller, but that can introduce its own issues.
Of course, this advantage of SCSI only matters if you actually need this much
expansion capability. For most users, four-device expandability is certainly sufficient,
and eight is definitely more than enough.
Device Type Support
At one point, SCSI held a significant advantage over IDE/ATA in terms of the
types of devices each interface supported. Since SCSI is a high-level system interface
used by performance machines, there have always been a wide variety of different
kinds of hardware produced for the SCSI interface. In contrast, IDE/ATA began as a
hard disk interface, and support for other types of hardware was only added later on.
Even as late as 1997, there were many more hardware choices if you had a SCSI
system than if you went with IDE.
This has changed in recent years. As the number of IDE/ATA systems on the
market has grown, many manufacturers have migrated their devices to the IDE
interface. A good example is that of CD-RW drives; a few years ago you needed a
SCSI system if you wanted to use a CD-RW drive, but they are now commonly
available for both interfaces. There are still more different device choices for SCSI
than IDE, but the difference is less important than it once was.
One place where SCSI beats IDE/ATA easily is in support for external
devices: IDE/ATA has none. SCSI drives can even be located in a different room from
the machine that is using them, if that's an issue for some reason.
Device Availability and Selection
SCSI leads IDE/ATA in terms of the number of different types of
devices available, but often trails behind it in terms of the number of
different models available for each device type. Since the IDE/ATA
market is so much larger than the SCSI market, there are many more
brands and types of various devices available for the IDE/ATA market
than for SCSI.
One area where this can be an issue is with hard disks. Hard disk options for
the IDE/ATA interface range from small value models to large performance units. For
SCSI, there are fewer choices, particularly on the low end of the scale. This means that
you will have more difficulty finding economical drives for SCSI setups unless you go
with older units (which are certainly an option). Of course, if you want the fastest
technology, SCSI gives you more choices.
Of course, this isn't to say that your choices with SCSI are necessarily few;
there are many different companies producing SCSI hardware. You will have more
difficulty trying to set up SCSI inexpensively, but that's sort of par for the course.
Software / Operating System Compatibility
For many years, there were significant support and compatibility issues with
SCSI that did not occur with IDE/ATA. Since IDE was the "native" hard disk interface
for most PCs, and was by far the most common interface used, all software worked
with it. SCSI on the other hand was much less common in PCs, especially in their
earlier years, and so there were occasional support issues. Extra drivers or special
software would be needed to get older operating systems like "straight DOS" and
Windows 3.x to function with SCSI at all.
Today, this is pretty much a non-issue. Modern operating systems all provide
support for SCSI host adapters and devices. You may need to install a driver or some
utility software, but if you stick with well-known brand names, even this may not be
necessary.
System Resource Usage
Generally speaking, SCSI is superior to IDE/ATA in terms of how many
system resources are used. If you are only using a single IDE/ATA channel, the two
are basically a wash in terms of resource usage. However, once you go to a dual IDE
channel situation you will generally consume more resources than SCSI uses--and
most PCs have dual channels by default, unless you disable one. If you were ever to
set up a four-channel IDE implementation you would be using significantly more
resources than if you had just set up a SCSI bus.
There are in fact some people who set up SCSI specifically to get around the
system resource constraints for which the PC is "famous", and which using multiple
IDE/ATA channels exacerbates. Doing this also allows you to worry less about
needing to take more resources in the future if you expand to many different devices.
There is one system resource issue involved in using SCSI under DOS or
Windows 3.x, however, that doesn't apply to newer Windows operating systems. Both
of the older operating systems generally require a driver in order to use SCSI, which
can take up a decent-sized chunk of conventional or upper memory. The IDE interface
does not normally have this requirement. This is only an issue for some systems, of
course, and the importance of conventional memory has diminished somewhat today,
as those older operating systems are less and less used.
Noise and Heat
There is one point that must be made about the faster 7200 and 10,000+ rpm
drives. They can be VERY noisy! This could be a consideration if your computer is in
the same room where you like to track. Special computer boxes are available to keep
the system cool while muffling the noise, but be prepared to spend for them. If you
attempt to build your own "Quiet Box" for your system, be sure to provide good air
circulation. There's nothing gained by a quiet computer that keeps shutting itself down
right in the middle of that "golden" take! Luckily, newer 7200 rpm drives are much
quieter than the earlier ones.
A consideration that may not occur to many folks is how cooling will affect
drive performance. The faster the rotation speed of a drive, the hotter it will get. Also,
as a rule, SCSI drives tend to run hot anyhow. This is why many SCSI drives
recommend drive fans or some form of passive drive cooling. If a drive is allowed to
overheat, this not only puts stress (perhaps fatal stress) on the electronics, but also can
cause the data storage media to fail, and there goes your big hit record! This isn't to
say that IDE drives are immune to this consideration. These drives get hot too,
especially the 7200-rpm jobs.
Everyone knows that good cooling will keep the CPU happy, even if you're not
overclocking. Good cooling will keep your drive happy too. For starters, don't mount
the hard drive at the bottom of the internal bay unless the bottom of the bay is open to
the inside (i.e., it uses mounting flanges instead of a solid metal bottom). Keeping a gap
under the drive will promote good air circulation. Add a second fan if you don't have
one already. A fan in the front of the chassis blowing in will complement the power
supply fan in the back blowing out.
Another factor of drives and heat is the phenomenon of thermal recalibration.
This is a tendency of drive controllers to detect that the platters have expanded due to
heat. In response to this determination, the controller will home the heads and launch
them to a specific spot and calculate the error between where it thinks the heads
should be and where they ended up. It then re-calculates the predicted locations of the
tracks on the disk using this data. In doing so, the controller has a much better chance
of hitting its target track on a seek if it knows how much that track has moved out of
position due to thermal expansion of the platter. Yes, this is a good idea and a cool
thing to do - except during a recording or mixing session. The thermal recalibration
takes a large fraction of a second to perform and that's more than enough time to stall
out a streaming operation. A recal is NOT the kind of thing you want your drive to be
doing in the middle of a recording session!
Keeping the drive cool and at a stable temperature is the best defense against
unwanted thermal recalibration.
Overall, SCSI is a higher-performance interface. For very simple applications,
like a single hard disk and a single CD-ROM drive on different channels, IDE/ATA
has a marginal advantage. For complex applications, SCSI has a significant advantage.
Media Access Speed
In comparing IDE and SCSI it is important to understand that both types of
drive are, from a "between the shells" point of view, the same.
Inside the Drive
Hard disks have a sealed case with one or more platters of magnetically coated
media, a small synchronous motor designed to rotate the platters at a precise speed,
and an actuator with one or more arms attached, each with a read/write head at the tip.
The platters hold the data in the form of concentric tracks, each split like a pie into
many sectors. Each sector will hold 512 bytes of user data as well as error correction
information and other alignment information.
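As a quick illustration of how those geometric parameters determine capacity (the counts below are hypothetical, chosen only for the arithmetic, not any real drive's specifications):

```python
# Capacity follows directly from the geometry described above.
heads = 6                  # three platters, one head per surface
tracks_per_surface = 16000
sectors_per_track = 600    # real drives vary this from zone to zone
bytes_per_sector = 512     # user data per sector, as noted above

capacity = heads * tracks_per_surface * sectors_per_track * bytes_per_sector
print(round(capacity / 10**9, 1), "GB")
```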
The actuator is designed like a speaker voice coil, extending or retracting along
its throw path depending on the strength of an electrical signal in the coil which will
force it very precisely to any location. The arms attached to the actuator are thereby
positioned to various places above the spinning platters where the heads can pick up or
lay down streams of magnetic information.
The heads float on a cushion of air at a distance of about 10 microns above the
platter surface. The platter's rotation produces that cushion of air. In contrast, a particle
of smoke is about 100 microns in size, or 10 times the head gap. For this reason, these
drives are manufactured in very closely controlled "clean room" conditions, are sealed
at the factory against any interaction with the outside environment, and sold with the
express condition that the user never, for any reason, open the drive casing.
The drive also has a circuit board to control the mechanism and coordinate the
transfer of data to and from the platters in a specific format. Aside from the data and
power connectors, that's about the whole story. It stands to reason, therefore, that the
physical properties of these moving parts hold the key to a drive's access speed and
throughput.
In reality, this is more the case than is commonly believed, and, for that matter,
commonly disclosed by the drive manufacturers. So many drives are advertised with
little more than their data storage capacity and interface burst transfer speed. Neither
of these factors relates directly to a drive's usability as a DAW storage system. To get
the real story, you must dig into the drive specifications, usually available only on the
maker's web site, and even then only after linking past several pages of ad hype.
The drive actually performs two distinct operations in order to read or write
data; those being head positioning and data transfer. Let's start with head positioning.
To perform this act, the drive must:
1) Receive a request to position the heads to a specific location on the platter.
2) Select the proper head to access the requested platter.
3) Move the actuator so that the selected head sits over the requested track (the seek).
4) Wait for the requested sector on the track to rotate into position for access.
All of this positioning, and the buffering of the data to be written or read, must be
controlled by the drive electronics. Although the electronics are quite fast by all
accounts, there is still a certain amount of overhead associated with this activity. It is
referred to as... you guessed it, Controller Overhead. Sometimes this
spec will be listed for the drive, and is usually the same over a given product line or at
least a given model range. It is expressed in milliseconds (thousandths of a second).
Interface Access Speed
At this point, not much else needs to be said about the interface.
To make that point, look at the charts again. Look at the data throughput figures.
See anything interesting?
Controller Speed: Not the Bottleneck
If you are using an IDE interface set up for bus mastering and burst transfer
rates of 66 Mbytes/sec, then you will never be able to get any of the drives listed to
come all that close to taxing the interface. In fact, even if you could force the 15000-
rpm SCSI drives to talk to an IDE interface at the same sustained data throughput rates
they boast for SCSI, it still wouldn't come all of the way to 66 Mbytes/sec.
Likewise, a SCSI controller running under bus mastering and offering a burst
rate of 80 Mbytes/sec wouldn't break into a sweat with the highest throughput drive. In
fact, a 40 MB/sec controller wouldn't be all that taxed using most of the drives listed.
As shown in the charts above, the buffer transfer rate is typically about 1.5
times the sustained transfer rate. Therefore, by this logic, the Cheetah's 48.9
MB/sec internal transfer rate would translate to something like 32.6 MB/sec sustained
throughput. That wouldn't even fully saturate a UDMA 33 interface! In fact, only the
Quantum Atlas 10K II and the Fujitsu MAH/MAJ series might saturate a 40 MB/sec
SCSI interface, with the IBM 75GXP series likely to swamp a UDMA33 interface.
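That rule of thumb is simple enough to apply yourself. Using the buffer-rate figures quoted in the text and the charts (the 1.5x divisor is the article's estimate, not a manufacturer specification):

```python
def est_sustained(buffer_rate_mb_s):
    # Rough sustained throughput from the ~1.5x buffer-to-sustained ratio.
    return buffer_rate_mb_s / 1.5

for name, buffer_rate in [("Seagate Cheetah", 48.9),
                          ("IBM DTLA307020 (75GXP)", 55.5),
                          ("Quantum Atlas 10K", 59.75)]:
    mb_s = est_sustained(buffer_rate)
    verdict = "exceeds UDMA33" if mb_s > 33 else "fits within UDMA33"
    print(f"{name}: ~{mb_s:.1f} MB/sec sustained, {verdict}")
```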
As for those SCSI drives, how their advertised internal burst rates of around 60
MB/sec would relate to the REAL WORLD of day-to-day DAW activity is hard to
tell. Except for IBM and some of the Quantum and Seagate drives, most
manufacturers don’t list the sustained rates because if they did list them, their drives
wouldn't be quite as impressive.
From the point of view of interface speed only, it's a wash! Remember, in a
DAW, you are interested in getting the highest throughput from one drive - period!
You don't care if you can get high burst throughput from six SCSI drives all at once
because that's not how you're going to be using it. Keep your eye on the ball:
sustained throughput is what you need to be looking at.
Other Interface Tradeoffs
SCSI is a very complex interface, and as such, it has a complex command set.
The CPU must do more work to set up a SCSI transaction than it must do to set up an
IDE transaction. As a result, in the specific world of the DAW, that is, a single user
system not engaged in multi-tasking and multi-drive I/O, SCSI could be a bit slower
than IDE. Remember, SCSI shines the brightest when you have a bunch of devices
operating simultaneously. Your average DAW isn't one of these situations.
Another interface enhancement that holds no advantage for real time data
streaming is cache. Disk controller cache is great for burst operations and random file
access, but when data is being streamed constantly through the controller, there is no
need for cache. Any cache would be overrun in the first seconds of streaming and
never get a chance to refill again. For DAW use, ignore any advertised cache
specifications.
Something important to keep in mind is the effect that attaching devices with
differing interface characteristics can have on overall system performance. At
one time, attaching a fast disk drive as a master and a slow CD ROM drive as a slave
on the same IDE channel would pull that channel's overall performance down to the
level of the slower device. Early PIIX controllers put severe limitations on the
configuration of IDE modes. There could be only two modes available. So, if both
devices couldn't operate at the fastest mode supported by the controller, either both
would run at the speed of the slowest, or one of them would run at PIO mode 0
regardless of its capabilities. The first PIIX imposed this limitation even across
channels. The PIIX3 removed the limitation across channels, but not across devices in
the same channel.
The latest controllers with UDMA support (PIIX4 and PIIX4E) have removed
these limits completely, and the mode can be configured independently for each
device. This should not be an issue any longer. However, the software drivers may not
all be taking advantage of the hardware improvement (we can't confirm if the drivers
shipping with Win98SE take advantage of this improvement in channel mode
selection, but Win2K does for sure). Even though this may be a dead issue, it doesn't
hurt to avoid connecting a hard drive and a CD ROM on the same IDE channel unless
the CD ROM does support UDMA (has a DMA checkbox in Device Manager like the
hard drive) and it is enabled.
Lastly, if you want to use UDMA66 transfer rates and your motherboard
doesn't support it, you can buy a PCI UDMA66 controller board and go that route.
Simply disable your second internal IDE port and use the freed IRQ for the new
interface. However, as we will see in the next part of this article, even the average IDE
drive running at 33MB/sec will likely give you more raw tracks than you will ever
need, so don't hurt yourself trying to go for the fastest drive on earth just because it's
out there. Those tracking at 96 kHz rates will need to be a bit more mindful of drive
speed, and so should consider ONLY the UDMA66 or the 80 MB/sec and 160 MB/sec SCSI options
with the fastest, highest throughput drives. Even so, if you look at the drives with the
very highest throughputs, there are both IDE and SCSI drives that tie at the top rates.
Comparing Cost
It is no secret that SCSI is more expensive than IDE.
Here are four specific reasons why SCSI is more expensive than IDE/ATA (there
are probably others as well):
Additional Hardware: SCSI setups require a host adapter, which means either
an add-in card or a more expensive motherboard. Cables, terminators and
adapters also add to the cost of most SCSI implementations.
Lower Volume: Far fewer SCSI devices are sold than IDE/ATA devices. The
price of an item manufactured in high volume is usually less than one
manufactured in low volume.
Niche Market: Since SCSI has a reputation for being higher-performance and
is generally used by those who are less cost-sensitive, sellers can afford to run
higher margins and still make the sale, and will usually do so. People are
willing to pay more for SCSI, and SCSI costs more because of this.
More Advanced Technology: This is really a matter of appearances: since the
performance-conscious use SCSI, it is the interface where the most advanced
new drives will typically show up first. Newer and faster technology is more
expensive than older and slower technology. The price gap between the cost of
SCSI and IDE/ATA drives has actually increased over the last few years. Just
remember that comparing a "36 GB IDE drive" to a "36 GB SCSI drive" is
making an apples-and-oranges comparison, because the SCSI drive is almost
certain to offer much more performance, and for reasons that have nothing to
do with the interface.
Not only are the drives themselves more expensive, even for models that compare
equivalently with their IDE counterparts, but you must also buy either a SCSI
controller or a motherboard with built-in SCSI support, and that will set you back
several bucks compared to a motherboard without SCSI. At some point you need to
justify the added cost if you want to go SCSI from scratch.
The best (and maybe only) reason to choose SCSI is this: the SCSI interface
only requires one IRQ and address space in order to operate all devices connected to
the SCSI controller. IDE requires an IRQ and address space for each port and each
port is good for only two devices. Do the math: 1 IRQ for 15 drives on SCSI, 2 IRQs
for 4 drives on IDE. If your system is a resource hog, that may be enough to send the
argument over the top toward SCSI. Not only is SCSI a better steward of your
system's resources, but it isn't likely that you will run out of space on a SCSI bus any
time soon even with 4 drives, a CD ROM, a JAZ drive, a scanner, and a CDRW writer
hanging off of your controller.
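The arithmetic behind that resource argument can be sketched directly (assuming one IRQ per IDE channel and two devices per channel, as described above):

```python
import math

def ide_irqs(devices):
    # Each IDE channel takes one IRQ and serves at most two devices.
    return math.ceil(devices / 2)

def scsi_irqs(devices):
    # One host adapter, one IRQ, up to 15 devices on a wide SCSI bus.
    assert devices <= 15
    return 1

print(ide_irqs(4), "IRQs for 4 IDE drives")
print(scsi_irqs(15), "IRQ for 15 SCSI devices")
```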
On the other hand, if you plan to keep your DAW free of extraneous devices
and uses and just do audio, then an IDE solution would seem better as you will not be
taxing system resources, will not need to go beyond the 4-drive limit, and will gain a
bit of speed from using the less complex interface. On top of that, it's cheaper!
After all, if you want to reach or beat the performance of the current fastest
IDE disks, you need ultra-wide SCSI or better and one of the fastest SCSI disks. That
translates into a rather large initial investment.
Below is a chart listing drives by size and how much some Internet sites are
charging for them. As you would expect, these prices can and do change daily.
However, all of these prices were taken during one day and should reflect the relative
prices of the drives. Also on this chart is the max disk to buffer throughput. As
mentioned before, this isn't the same as sustained throughput, but these figures were
given for all drives in the chart and serve as a basis of comparison. The chart may
surprise you. Some drives that have low throughput figures may be rather expensive.
This may be because the buffer is larger or there may be more heads in this unit or
some other reason. When looking over the drives in the list, keep in mind that a lot
goes into pricing a drive. Many of these reasons have nothing to do with using them in
a DAW.
Price Survey through LowerPrices.COM as of June 29, 2000
Manufacturer and Model Size Speed Highest Price
9 GB to 16 GB IDE
Western Digital WD102AA 10.2 5400 29.1 MB/sec $89 to
Maxtor 51024U2 10.2 7200 43.2 MB/sec $107
Fujitsu MPD3130AT 13 GB 5400 26.1 MB/sec $114
Western Digital WD136AA 13.6 5400 29.1 MB/sec $96 to
Quantum Fireball Plus LM 15 GB 7200 Not Listed $136
9 GB to 16 GB SCSI
Quantum Atlas V 9.1 7200 42.5 MB/sec $226 to
Seagate Barracuda 9.1 7200 29.4 MB/sec $255 to
Quantum Atlas 10K 9.1 10,000 59.75 MB/sec $345
17 GB to 25 GB IDE
Fujitsu MPD3173AT 17 GB 5400 26.1 MB/sec $128 to
Maxtor 92049U6 20 GB 7200 33.7 MB/sec $167
IBM DTLA307020 20 GB 7200 55.5 MB/sec $170 to
Quantum LCT10 20.4 5400 37.13 MB/sec $124
Seagate Barracuda 20.4 7200 45.5 MB/sec $163
17 GB to 25 GB SCSI
Western Digital WDE18300- 18 GB 7200 30 MB/sec $385 to
Seagate Barracuda 18XL 18.2 7200 29.4 MB/sec $372 to
Quantum Atlas V 18.2 7200 42.5 MB/sec $395
Fujitsu MAG3182LP 18.2 10,000 45 MB/sec $475
IBM 18LZX 18.2 10,000 44.3 MB/sec $515 to
26 GB to 41 GB IDE
Maxtor 93073U6 30 GB 5400 36.9 MB/sec $198
IBM DTLA307030 30 GB 7200 37 MB/sec $232 to
Quantum LCT10 30.6 5400 37.13 MB/sec $175
26 GB to 41 GB SCSI
IBM 36XP 36.4 7200 28.9 MB/sec $1,103
Seagate Barracuda 36.4 7200 29.4 MB/sec $795
Quantum Atlas V 36.4 7200 42.5 MB/sec $735 to
42 GB and up IDE
Western Digital WD450AA 45 GB 5400 37.6 MB/sec $237
IBM DTLA307045 45 GB 7200 37 MB/sec $298 to
Maxtor 96147U8 61 GB 5400 40.8 MB/sec $343
42 GB and up SCSI
Seagate Barracuda 50.1 7200 29.4 MB/sec $895 to
Disk-to-buffer transfer speeds were used in this chart because it is the only
transfer rate that is reliably reported for all of the drives listed.
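One more comparison worth running on the chart's numbers is cost per gigabyte, which makes the SCSI premium obvious (single-day prices, so treat the exact figures loosely):

```python
# Cost per gigabyte for a few comparable entries from the chart above.
drives = [
    ("Maxtor 51024U2 (IDE)",   10.2, 107),
    ("Quantum Atlas V (SCSI)",  9.1, 226),
    ("IBM DTLA307020 (IDE)",   20.0, 170),
    ("Quantum Atlas V (SCSI)", 18.2, 395),
]
for name, size_gb, price in drives:
    print(f"{name:26s} ${price / size_gb:6.2f}/GB")
```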
SCSI Controller Specifications Prices
Adaptec 2940 AU Ultra $165
Adaptec 2940 UW Ultra Wide $197 to $205
Koutech SCSI-2 SCSI 2 $47
Koutech UW LVD Ultra Wide LVD $135
SCSI Motherboard Specifications Price
ASUS P2B-DS 440BX Dual PII w/Ultra 2 $526
Wide SCSI 80 MB/sec
Iwill DBL-100 440BX PII/PIII ATX $429
w/AIC7890 SCSI chip
Supermicro SQ2R6 Quad PIII w/Dual Ultra 160 $2785
Supermicro Super D2DGR Dual PII/PIII Xeon 440GX $419
w/Ultra Wide SCSI
SuperMicro P6DBS Dual PII 440BX w/Dual $345
Ultra Wide SCSI
Tyan S2257DUAN Dual PII/PIII 840 chip set $649
w/Dual Ultra-2 LVD
A quick product and price survey from the Internet and some on-line retailers.
How you wish to allocate financial and system resources is a very personal
decision, but there are clear trade-offs in going either way. On the one hand, you
might have more flexibility with SCSI. On the other, IDE might be a bit faster and
allow you to take the money you save and put it into a faster CPU, better motherboard
and more RAM, which will translate directly into better performance.
SCSI is better than IDE.
Since IDE/ATA is mostly a stripped-down version of SCSI, this must be true.
IDE is cheaper than SCSI
This is truer than it should be. If you look at hard drives, the only difference
between IDE and SCSI is the interface card that is connected to the moving parts. The
more expensive SCSI card can add some to the price of the unit, but is no justification
at all to double the price.
SCSI is faster than IDE
This is definitely not true. The origin of this fable lies in the past. There used to
be a large gap between high-end drives and low-end "consumer" drives. The high-end
drives were fast, reliable and only sold with SCSI interfaces. The consumer drives
were slower, less reliable, and only sold as IDE units.
Nowadays things have changed a lot, as technology has evolved further. The
gap has been filled, which means that you can now buy high-end units with an IDE
interface, but also low-performing, unreliable consumer drives with a SCSI interface.
Only the very top of the market, such as 10,000+ RPM drives, is still sold exclusively
with a SCSI connector. As for the "normal" devices, the actual drives are usually
identical for the IDE and SCSI versions. Seagate, for example, is notorious for
building drives that actually perform worse in SCSI than in IDE.
Slow IDE devices also slow down other devices
This is partially true. Just as with SCSI, other devices on the same controller must
stay silent while one device "talks" to the controller. So if a device can talk faster, the
other devices get hold of the controller sooner too. The rate at which a device talks to
the controller is called the "burst transfer rate".
Since not every device is talking all of the time, the slowdown only occurs if at
least two devices want to talk at the same time and there is not enough bandwidth
available to accommodate both.
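The contention rule above can be sketched in a few lines of Python. The numbers used are illustrative, not figures from this article: devices sharing one channel only slow each other down when their simultaneous demands exceed the channel's burst bandwidth.

```python
# Sketch of shared-channel contention: devices on the same controller only
# interfere when their simultaneous transfer demands exceed the channel's
# burst bandwidth. All numbers below are hypothetical examples.

def channel_is_bottleneck(demands_mb_s, channel_mb_s):
    """True if simultaneous transfer demands exceed the shared channel's rate."""
    return sum(demands_mb_s) > channel_mb_s

# Two devices on a hypothetical 66 MB/s (ATA/66-class) channel:
print(channel_is_bottleneck([25, 30], 66))  # False -- both requests fit
print(channel_is_bottleneck([40, 40], 66))  # True  -- contention, both slow down
```

If only one device is active at a time, the channel never limits it; the slowdown is purely a concurrency effect.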
IDE or SCSI?
A common question that comes up when discussing this type of technology is
"which is better, IDE or SCSI?" This is a somewhat loaded question, as there is no
one easy answer. In short, there are advantages to using both. SCSI devices are
almost always more expensive than IDE devices.
IDE devices are also much easier to set up in terms of not having to worry
about bus termination and other issues. IDE has also come a long way in how it
behaves with the system processor by utilizing DMA for its operations. However, the
performance gains are still on the side of SCSI, when price is no consideration. While
IDE has come a long way, SCSI hasn't rolled over and died in the process. SCSI has
always used the controller or the device for its operations (as opposed to the system
processor). And, with some patience, a SCSI system is not that hard to install.
There are also advantages in how many devices can be connected to a SCSI
controller over that of an IDE controller. Depending upon the implementation, a
single-channel SCSI controller can chain up to 15 devices as opposed to a standard PC
with IDE having up to 4 devices (spread, no less, over two chains). And, again, there
is the performance.
With current specs offering throughput up to 320 MB/s (Ultra320) and another
proposed spec promising 640 MB/s (Ultra640), SCSI is still aggressively adapting to
meet the performance needs of the high-end market. So the answer to the question
invariably comes down to: what do you want to do with it? If you are going to run
office applications and maybe a game of Counter-Strike or two, then IDE will
certainly suffice. If you are going to do a lot of disk-I/O-intensive work like video
capture and layout, then a SCSI system may be better for you.
If you're running a database of any sort (well, it would need to be one that's
actually getting real use), SCSI is still the best choice. There is also the great
equalizer: cost. As stated above, SCSI devices are almost always more expensive than
IDE, sometimes to the tune of hundreds of dollars, so if you only want a few more
FPS, the cost will definitely bear serious consideration. Indeed, even in realms
traditionally dominated by SCSI, such as video editing, IDE is slowly creeping in. For the kinds
of video editing that most consumers would do at home, IDE solutions such as the
ATA/66 (and beyond) implementations can certainly suffice--although you should still
check with the vendor of your capture card and/or editing software to see what they
recommend. Cost can never be far from our minds, and I'd say cost, more so than any
other consideration, is one of the reasons why Apple abandoned SCSI in favor of IDE
a few years back.
One may ask why SCSI devices are typically so expensive. In the case of hard
drives, the answer can be found in the manufacturing process. SCSI hard drives, on
average, are always top-tier products. This is a fact of the market. Hard drive
manufacturers know that performance nuts and high-end IT shops are going for SCSI,
and so the top-tier products are typically SCSI. A case in point would be Seagate.
Notice how its highest performing drives, even on the level of seek times and platter
count, typically emerge first as SCSI interface-ready drives, and then trickle down.
Couple this with SCSI's number-one use in enterprise-level computing: RAID.
Take one SCSI drive and compare it to an IDE/ATA drive. OK, so there may
be a small performance advantage, but nothing to get really fired up about. Now take
that small advantage, multiply it by 10, put it in a RAID array with a 64 MB write
cache, and see what IDE can do. It can't keep up, because IDE can't handle multiple
simultaneous I/O requests the way SCSI can. Personal RAID setups using IDE are
certainly attractive, but that's not enterprise computing, and we need to recognize that
there is a difference.
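The multiplication argument can be made concrete with some back-of-the-envelope arithmetic. The drive speeds below are hypothetical, and this idealized sketch deliberately ignores command queuing, which is the real SCSI differentiator under concurrent load and widens the gap further.

```python
# Illustrative only: a small per-drive advantage, multiplied across a striped
# array, becomes a large aggregate advantage. Numbers are hypothetical.

def ideal_array_throughput(per_drive_mb_s, n_drives):
    """Idealized aggregate sequential throughput of an n-drive striped array."""
    return per_drive_mb_s * n_drives

ide_array = ideal_array_throughput(28.0, 10)   # hypothetical IDE drive
scsi_array = ideal_array_throughput(30.0, 10)  # slightly faster SCSI sibling
print(scsi_array - ide_array)  # a 2 MB/s per-drive edge becomes 20 MB/s
```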
SCSI is Growing
Roger Cox, Chief Analyst, GartnerGroup/Dataquest, projects that the unit
volume of host-based SCSI RAID controllers will double from 1999 to 2003.
In another GartnerGroup/Dataquest study, Adam Couture, Senior Analyst, IT
Services, evaluated the storage utility market. He projects that the storage utility
market will grow from $10 million in 2000 to $8 billion in 2003, 75% of which ($6
billion) will be generated by Internet data centers. Storage utilities deliver capacity for
servers on a usage basis, capacity that may be delivered via user networks or the
Internet.
Dataquest's projection that $6 billion will be Internet related suggests to STA
that a significant proportion of the $6 billion could contain SCSI technology. SCSI
technology would most likely be inside the servers, which are connected to the storage
utilities via SCSI or Fibre Channel protocols. Currently, SCSI is used in a high
percentage of RAID controllers, and that will likely continue for some time. By STA
estimates, close to 95% of all servers supporting the Internet rely on SCSI I/O
technology.
By 2003, STA projects that the hard disk drive media rate will be 90 MBps and
the required bus bandwidth will be 360 MBps. In 2003, the Ultra640 SCSI generation
will be entering the market. The future of SCSI, as a widely used storage controller
technology, is secure due to the extensive worldwide installed base and the ongoing
performance improvements in each generation of the specification.
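The relationship between the two STA projections is worth making explicit. The 360 MBps bus figure is consistent with sizing the bus for roughly four drives streaming at full media rate at once; the four-drive assumption is ours, not something STA states.

```python
# Back-of-the-envelope check of the STA projection quoted above. The
# four-drives-per-bus assumption is an illustration, not an STA figure.
media_rate_mb_s = 90       # projected per-drive media rate for 2003
drives_per_bus = 4         # assumed number of drives streaming concurrently
required_bus_mb_s = media_rate_mb_s * drives_per_bus
print(required_bus_mb_s)   # 360
```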
The Future of SCSI
Why Does the Market Need SCSI?
SCSI is a stable, proven I/O solution that outperforms other I/O technologies.
It is more cost-effective than Fibre Channel. Today, SCSI is the technology of choice
for the vast majority of server and high-performance PC environments. This is due to
the important features of upgradability, scalability, manageability, inter-generational
device compatibility, reliability and performance. As SCSI continues to increase in
speed and add new features, it promises to remain a viable technology for storage and
for desktop connectivity in the long-term.
The question "when will SCSI die?" is often asked, but never answered
reasonably. SCSI is an ever-evolving technology. Recent additions to the SCSI world
have included Fibre Channel SCSI and IEEE 1394 (FireWire). There are even specs
out now for Ultra5 (again, a speculated/proposed name, weighing in at 640 MB/s).
SCSI, with the advent of SCSI-3, has gone from a monolithic standard that took a long
time to "upgrade" to more of a topology map, with different pieces making up the whole.
There are papers constantly being published that propose new specs and new
ways to use the technology. A good listing of these papers can be found at the SCSI
Trade Association's website as well as the T10 organization's website. These are also
great places to look to see what is coming down the pipe in the SCSI world. SCSI is
also trying to become friendlier to programmers by adopting the Common Access
Method (CAM) to allow for a layer of abstraction between the massive SCSI command
set and a programmer who just wants to get a program to work correctly with SCSI
devices.
With the new model for changes (introduced with SCSI 3) allowing SCSI to
quickly adapt to the ever-increasing performance needs of the market today, its history
of high performance with low system resource utilization and its backwards
compatibility, SCSI promises to not only have a future in technology, but a bright one.
I think it is fitting to end with a snippet from a white paper on Ultra320 (PDF)
available at the SCSI Trade Association site that talks about PCI-X, the kinds of
limitations that face us in the future, and how SCSI stacks up.
Under standard PCI the host bus has a maximum speed of 66 MHz. This
allows for a maximum transfer rate of 533 MB/sec across a 64-bit PCI bus. With
Ultra160 SCSI, two SCSI channels on a single device achieve a maximum transfer
rate of 320 MB/sec, leaving plenty of headroom before saturating the PCI bus.
However, with Ultra320 SCSI at 320 MB/sec per channel, two SCSI channels can
now achieve 640 MB/sec, which will saturate a 64-bit / 66 MHz PCI bus. In addition
to PCI-X doubling the
performance of the host bus from 533 MB/sec to a maximum of 1066 MB/sec, there
are protocol improvements so that efficiency of the bus is improved over PCI.
Together PCI-X and Ultra320 SCSI provide the bandwidth necessary for today's
high-performance applications.
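The figures in that snippet check out with simple arithmetic: peak parallel-bus bandwidth is bus width in bytes times clock rate.

```python
# Sanity check of the quoted bus figures: bandwidth = width (bytes) x clock (MHz).

def bus_bandwidth_mb_s(width_bits, clock_mhz):
    """Peak theoretical bandwidth of a parallel bus in MB/s."""
    return (width_bits / 8) * clock_mhz

pci = bus_bandwidth_mb_s(64, 66.67)     # ~533 MB/s, standard 64-bit / 66 MHz PCI
pci_x = bus_bandwidth_mb_s(64, 133.33)  # ~1066 MB/s, 64-bit / 133 MHz PCI-X

# Two Ultra160 channels: 2 x 160 = 320 MB/s, comfortably under 533 MB/s.
# Two Ultra320 channels: 2 x 320 = 640 MB/s, which saturates plain PCI but
# still fits under PCI-X.
print(int(pci), int(pci_x))  # 533 1066
```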
I'd like to thank sacremon, Scott Tarr, Super duck, and Voodoo Chile for their
great input on drafts of this article! Lastly, I want to thank Octane for tackling this
topic the first time around. The quality of that article hopefully rubbed off on this one.
Both SCSI and IDE have their pros and cons. IDE has some limitations that
make it more suited for a consumer market than SCSI. And before you decide,
remember that you don't have to choose. You can run the best of both worlds in a
single system.
Basically, if you can find a unit that suits your needs in terms of performance,
and it's IDE, you should go for it, as you'll get the best value for your money. But if
you've run out of IDE connectors already, you'll want to consider adding a SCSI card
and go the other way.