INTRODUCTION TO ANALOGUE TRANSDUCERS
A power system requires some form of instrumentation to monitor and control its operation. In the
past this function was performed by electro-mechanical devices, whose use was limited because
they had to be located close to the point of measurement. Replacing these mechanical devices
with their electronic equivalents has allowed the monitoring system to become more versatile.
A modern system usually involves the transmission of information from
the point of measurement to other points where this data is processed, recorded and used to
control the system parameters. The conversion and transmission of physical parameters on the
system requires the use of transducers at the points of measurement. These transducers act as
the interface between the power system and the measurement system. Electrical transducers are
specifically transducers for converting the raw voltages and currents in a power system into useful
and meaningful electrical signals which can be used within and transmitted around the measurement
system. The inputs to these devices are usually currents and voltages from instrument
transformers such as current transformers (CTs) and voltage transformers (VTs) whilst the
outputs are standardized dc currents or digital signals. The requirements of electrical transducer
users are very similar, and individual manufacturers produce fairly standard ranges of products.
Although the transducers referred to in these notes are specifically those manufactured by
Carrel and Carrel Ltd, much of the information given can be applied to transducers in general.
TYPES OF ELECTRICAL TRANSDUCER
Traditionally most electrical transducers are designed to measure a specific physical parameter.
The parameter measured will be one of the following:
Active power (Watts)
Reactive power (Vars)
Apparent power (VA)
Phase angle (deg)
Signal convertors (dc to dc)
There are also devices to integrate currents (Amphours) and power (Watthours) as well
as temperature measuring transducers using resistance change or thermocouple voltages.
A modern development is to employ a microprocessor based circuit to derive a number of
electrical parameters within one device. Such instruments can have a local liquid crystal meter
display as well as an input/output port for digital communication with a data processing system.
Full details of the range of transducers available from Carrel and Carrel Ltd are presented in their
data catalogues, titled Electrical Transducers T-Series and LP-Series Electrical Transducers.
Information is also available at www.carrel.co.nz. In general the T-Series transducers are
housed in the traditional high standing/top terminal box used by most transducer manufacturers
whilst the LP Series are housed in a modern low profile box suited to mounting in switch boxes
along with modular circuit breakers and relays. Due to design refinements the LP Series
transducers are more economical to produce and will eventually replace the T-Series. In general
it is advisable to use standard transducers as outlined in the data sheets. Not only is the cost
minimised but one is also employing a device which is fully proven and least likely to give trouble.
Non standard transducers are available on request from Carrel and Carrel Ltd; however, their
cost is often higher due to the extra work involved in producing them. If a large number of
transducers are involved then the relative cost of producing special transducers can drop to a
reasonable figure. Technical advice on all types of electrical transducers is freely available from
Carrel and Carrel as they are both designers and manufacturers of all types.
DESIRABLE CHARACTERISTICS OF ELECTRICAL TRANSDUCERS
Measuring electrical quantities using transducers confers a number of advantages.
(a) Provide a compact, silent unit which requires low maintenance.
(b) Convert the inputs into a different parameter such as the conversion of current and voltage
signals into a power signal.
(c) Provide a signal which can be transmitted from the measuring site to one or more remote locations.
(d) Reduce the size of instrument transformers as they are not required to power a number of
meters and/or long runs of cable.
(e) Provide galvanic isolation between the power system and the measuring system thereby
avoiding hazardous voltages on the instruments.
(f) Limit voltage and current levels thereby protecting the measuring equipment.
(g) Change the response time of signals and provide filtering, averaging and removal of
undesirable parts of the signal.
(h) Provide a feedback signal for a control system.
(i) Allow mathematical operations such as addition, subtraction, integration, rooting and squaring
to be performed on system parameters.
SELECTION OF ELECTRICAL TRANSDUCERS
Performance and cost of transducers are linked. At the design stage savings can be made by
matching the capabilities of transducers to the needs of the system, using performance data
published in the data catalogues.
The important criteria to be considered are:
a. Type of transducer
The exact type of transducer should be matched to the purpose of the measurement. The
different types of transducer will be considered later in their separate sections where it will be
seen that certain types are used for certain purposes.
Most transducers are accurate to Class 0.5, which means that they can deviate from the true
reading by up to 0.5% of the output range. As an example, if the output is from 4 to 20mA, then
the range is 16mA and the actual deviation from the correct reading could be as much as
±0.08mA. This scale of accuracy is acceptable for most measurements. When a greater accuracy
is required Class 0.2 transducers are available, generally at a greater cost. There is little point in
overspecifying the accuracy of a transducer in situations where that accuracy is not required or is
wasted. Transducer users should also note that an accuracy figure quoted for a transducer may
not necessarily represent the true absolute error at that point. The main international standard
used to specify transducers, IEC 60688, defines the error as that occurring at certain rigidly
fixed test conditions of temperature, frequency and power supply.
When these conditions change, further errors are permitted which could accumulate
into a much larger absolute error. In practice the errors may tend to cancel and the situation may
be partially corrected.
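The class-accuracy arithmetic described above can be sketched as follows. This is an illustrative Python fragment, not a C&C specification; the function name is an assumption made for the example.

```python
# Sketch: the absolute error band implied by an accuracy class.
# Under IEC 60688 the class figure is a percentage of the output span,
# not of the reading. Values match the 4-20 mA example in the text.

def error_band_ma(class_pct: float, out_lo_ma: float, out_hi_ma: float) -> float:
    """Return the maximum deviation in mA allowed by the accuracy class."""
    span = out_hi_ma - out_lo_ma
    return class_pct / 100.0 * span

# Class 0.5 on a 4 to 20 mA output: span is 16 mA, so +/-0.08 mA.
print(error_band_ma(0.5, 4.0, 20.0))  # 0.08
```

Note that the same ±0.08mA applies at the bottom of the scale, where it is a much larger fraction of the reading.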
As the accuracy of a transducer is specified as a percentage of the output range the designer
should try to cover as much of that range as possible to preserve the accuracy. A common fault is
to oversize current transformers for safety or to allow for future increases in load. Fitting CTs that
are double the current capacity will effectively change a Class 0.5 transducer to a Class 1. It is
worth noting that transducers are required to work to 120% of their nominal capacity so that one
does not have to worry about small overloads. It is far more likely that an over range reading will
adversely affect part of the reading system, such as a data logger or PLC, and these devices
should be protected accordingly.
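The effect of oversizing CTs can be sketched numerically. This is an illustrative fragment under the assumption that the error band stays fixed while the used portion of the span shrinks; the function name is invented for the example.

```python
# Sketch: effective accuracy class when a CT is oversized.
# If the CT is rated at double the actual load current, readings occupy
# only half the output span, so the fixed error band becomes a larger
# percentage of the span actually used.

def effective_class(class_pct: float, ct_oversize_factor: float) -> float:
    """Worst-case error as a percentage of the span actually exercised."""
    return class_pct * ct_oversize_factor

print(effective_class(0.5, 2.0))  # 1.0 -> a Class 0.5 unit behaves like Class 1
```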
The operating temperature range of the transducer should include that of the panel in which it is
installed when the panel is fully operational under the most extreme possible environmental
conditions.
The ability of the transducer to survive abuse needs to be considered. The abuse can be
mechanical, such as vibration, or electrical, such as overloads and surges. Under these
conditions it must also be determined whether the transducer is likely to affect other equipment.
A transducer must not be a hazard to other equipment or to personnel. It should not produce high
voltages or currents even under fault conditions. A high potential on any input must not be
conveyed to an output. In practice the galvanic separation of input and output circuits is tested
for breakdown by applying a high potential (usually 4kVac) between
them for a certain period (usually 1 min).
g. Electromagnetic compatibility
A transducer must not affect or be affected by other equipment due to mutual interference.
h. Enclosure
The type of enclosure housing the transducer should be compatible with the panel construction
methods employed. Most transducers are housed in compact plastic cases suitable for mounting
on DIN type 35 rail.
THE INTERNATIONAL STANDARD SPECIFICATION
All transducers should conform to a specified standard. The internationally accepted
standard for electrical transducers is IEC 60688, titled "Electrical measuring transducers for
converting a.c. electrical quantities to analogue or digital signals". The standard specifies how
transducers should be labelled, how their accuracy is determined and how they should perform
under both normal and abnormal operating conditions. Large specifiers of electrical transducers
will find it advantageous to consult the latest copy of this standard obtainable from their local
standards association or from the IEC. IEC 60688 has a number of characteristics which should
be noted as they often lead to confusion when determining accuracy. As opposed to other
standards, such as those for kilowatt hour meters, IEC 60688 refers to all errors as a percentage
of the full range (or span) of the instrument and not as a percentage of the reading.
As a consequence, relative errors at low values of reading may be higher than anticipated,
hence the previous recommendation to work as close to the top of the range as possible. It is also
worth noting that the specified accuracy is tested under a strict set
of conditions. Further variations in accuracy are permitted as working conditions deviate from
these set values. The following variations of basic accuracy are permitted across specified ranges:
Auxiliary supply voltage variation 50%
Auxiliary supply frequency variation 50%
Environmental temperature variation 100%
Input frequency 100%
Input voltage 50%
Input current 100%
Power factor 50%
Output load 50%
Input waveform distortion 200%
External magnetic field 100%
Current balance 100%
Interaction between internal elements 50%
Self heating 100%
Common mode voltages on output 100%
Series mode interference 100%
Theoretically all these additional errors could be cumulative leading to a large absolute error. In
practice some of these errors tend to cancel leaving a less serious situation. Nevertheless users
should be aware that they can exist and make suitable allowance for them.
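The contrast between the theoretical worst case and the partial cancellation mentioned above can be sketched as follows. This is an illustrative fragment: the individual error values are invented for the example, and the quadrature (root-sum-of-squares) combination is a standard way of modelling independent errors, not a rule taken from IEC 60688.

```python
# Sketch: combining additional permitted errors. Worst case they add
# arithmetically; treated as independent random contributions they
# combine in quadrature, which models partial cancellation.
import math

additional_errors_pct = [0.25, 0.25, 0.5, 0.5, 0.25]  # assumed example values

worst_case = sum(additional_errors_pct)
probable = math.sqrt(sum(e * e for e in additional_errors_pct))

print(round(worst_case, 3), round(probable, 3))  # quadrature result is smaller
```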
TRANSDUCER INPUTS
Carrel and Carrel transducers are produced with those standard inputs most likely to be
encountered in practice. Non standard inputs should be avoided as they generally involve
increased cost and manufacturing delay.
a. AC currents
The standard current inputs are for nominal inputs of 1 Amp or 5 Amps, in line with common
standards for current transformer secondaries. All ac current circuits are galvanically isolated
from each other and from other circuits in the transducer thus allowing them to be earthed at a
single point as is good practice.
b. AC voltages
Standard transducer voltages are those most commonly found on power systems such as 110V
and 230V on single phase systems and 63.5V, 110V, 230V and 400V on three phase systems.
These standard voltages may differ slightly from country to country. Multiple voltage inputs to a
transducer are galvanically isolated from the other circuits in the transducer, but not from each
other, where there may be a high impedance common connection. The fault current likely
to flow through this common connection is greatly limited by this high impedance.
c. DC inputs
There are no particular standard dc inputs although current values greater than 5A would be
taken from a shunt with 50mV, 75mV or 100mV rating and voltages greater than 600V would be
taken from a voltage divider with an output of 5V or 10V. Most dc transducers have galvanic
isolation between input and output and can be used to not only alter the nature of the signal but
also to remove troublesome common mode voltages such as from a shunt.
d. Notes on inputs
Transducers should resist damage from abnormal inputs caused by overloads or faults on the
system. In general current inputs should be able to resist a continuous level of 2 times the
nominal level or a short-term fault level of 20 times the nominal current for a period of up to 3
seconds. The voltage inputs should resist a continuous level of 1.2 times the nominal input or a
fault level of 1.5 times the nominal for up to 10 seconds. The inputs tend to load the source of
input signals and where this source is from instrument transformers the ratings of the
transformers must not be exceeded. The only transducers presenting a substantial input load are
self powered transducers. Other types of transducers have very low input loads and are unlikely
to present this problem. Transducers are often used to reduce the load on current transformers
especially where there are a number of meters to be read and long runs of secondary side cabling.
TRANSDUCER OUTPUTS
A number of standard outputs are employed by transducers. In general these are dc currents or
voltages or some form of digital signal.
a. Current driven outputs
A current driven output has a dc current which is independent of the value of load
resistance thus making the output independent of variables such as meter and cable resistances.
Meters in the system are dc current types and are series connected.
In practice this load can vary from zero Ohms up to a limit which corresponds to a safe
terminal voltage at the transducer (usually 15V for C&C transducers).
The following standard types are available:
i. 0 to 20 mA outputs (favoured by European transducer suppliers). An accurate full range output
of 20 mA should be maintained up to the load limit specified by the manufacturer (750Ω for C&C
transducers). This type of output is the least likely to suffer from interference and should be used
where this problem is to be avoided. As the zero of the scale is also a natural zero the electronic
design is simpler and less inclined to drift. An advantage of this type of output is the ability to
present a negative output as is required by some power transducers. Such an arrangement is
useful where CTs have been connected back to front as the transducer will still read, albeit
negatively. DC meters used with 0 to 20 mA outputs are robust and well priced due to the lack of
zero offset required.
ii. 0 to 10 mA outputs are favoured by many power supply authorities and are similar to the 0 to 20 mA type.
iii. 4 to 20 mA outputs have a range which starts from a zero equal to 4 mA (hence known as a
"live zero" type). This practice is favoured in North America and is very widely used around the
world. Most instrumentation systems
support it and it is mandatory for two wire transducers. A two wire system works on a device
which draws dc power from the same pair of wires as the signal hence the need to pass a certain
amount of current (4mA) at the zero to maintain the working of the electronic circuitry. An
advantage of this type of operation is that a total loss of signal is an immediate indication that
something is wrong with the transducer or its signal path. The disadvantages of this type of output
are that negative outputs are difficult to accommodate, the zero is more likely to drift, and the active
span is effectively reduced by 4 mA. Meters to measure 4 to 20 mA are generally more expensive
than simpler natural zero types. Transducers with 4 to 20mA outputs require an external power
source, to supply the 4mA at zero input. They can only be made self powered if the response
near zero is disregarded or suppressed as in "window" transducers.
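The live zero convention described above can be sketched as follows. This is an illustrative fragment: the function names and the 3.5mA fault threshold are assumptions for the example, not figures from the text.

```python
# Sketch of the "live zero" convention: a measurement fraction maps onto
# 4-20 mA, and a loop current well below 4 mA signals a broken loop or a
# failed transducer rather than a genuine zero reading.

def to_live_zero_ma(fraction: float) -> float:
    """Map a 0.0-1.0 measurement fraction onto the 4-20 mA span."""
    return 4.0 + 16.0 * fraction

def loop_fault(current_ma: float, threshold_ma: float = 3.5) -> bool:
    """A current below the live zero indicates a fault, not a low reading."""
    return current_ma < threshold_ma

print(to_live_zero_ma(0.0))  # 4.0
print(to_live_zero_ma(1.0))  # 20.0
print(loop_fault(0.2))       # True
```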
b. Voltage driven outputs
Voltage referenced outputs maintain a voltage against a variable resistance load within
the limits of a certain maximum drawn current. Multiple meters in the output are placed in parallel.
Standard ranges of 0 to 5V and 0 to 10V are available. Voltage referenced inputs are increasingly
found on electronic devices such as digital meters, data loggers and logic controllers.
c. Comparison of current and voltage output types
Where the choice is available current outputs are preferred to voltage outputs as they
are much less affected by external influences. Voltage outputs are sometimes affected by wiring
resistances and should not be fed over large distances. Due to the high input impedance of
typical voltage inputs the possibility of induced and reflected signals are greater than with lower
impedance current inputs. The electronic circuitry of transducers with a voltage output usually
involves the common rail of the power supply. Interference signals on this output are conducted to
the common rail where they unduly influence the inputs to amplifiers. Current output circuits can
be produced which do not involve this rail and hence avoid the problem. The use of voltage
outputs is unavoidable in cases where the associated instrumentation will not accept current
inputs or where there is a clash in ground connections necessitating a parallel connection rather
than a series connection.
d. Further notes on transducer outputs
To protect personnel and equipment the voltage and current levels at an output must be
limited to a safe level, generally accepted as a voltage of below 32Vdc and twice the rated
current. It must also be possible to open circuit or short circuit any output without endangering
any person or part of the system including the transducer itself. The output of a transducer must
be capable of responding rapidly to changes in the input. The usual response time is 250
milliseconds for a change of 90% of the output which is equivalent to 99% in 500ms. Faster
responses than this
are possible and are available as a special unit. The output must also have as little ac ripple
superimposed on it as possible. The maximum peak to peak ripple allowed on an output signal
must not be more than twice the class accuracy of the transducer, in other words less than 1% of
the full scale output in a class 0.5 transducer.
Response time and ripple are inter-related and it is often not possible to improve one without
degrading the other. Units incorporating improved output filtering are available which can offer
better performance and improved trade off of response time and output ripple.
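The two response-time figures quoted above are consistent with a simple first-order output filter, as the following sketch shows. This is an illustrative calculation under the assumption of a single-time-constant response, not a statement about C&C's actual filter circuitry.

```python
# Sketch: for a first-order (single time constant) response, reaching 90%
# of a step takes tau*ln(10) and 99% takes tau*ln(100), i.e. exactly
# twice as long - matching the 250 ms / 500 ms figures in the text.
import math

t90_ms = 250.0
tau_ms = t90_ms / math.log(10.0)   # ~108.6 ms time constant

t99_ms = tau_ms * math.log(100.0)  # comes out at 500 ms
print(round(t99_ms, 1))            # 500.0
```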
AUXILIARY POWER SUPPLY
The electronic circuitry within the transducer requires a source of electrical power which
may sometimes be taken from an input, but is better taken from a separate power supply so as
not to affect any input levels. This supply can be ac or dc. AC power supplies use a transformer
to step down the voltage and to provide galvanic isolation. To provide this sort of isolation with dc
supplies requires an electronic switching circuit to convert the dc into a high frequency ac signal
which is then fed to a transformer. Of these two types of supply the ac supply is to be preferred
as it is simpler and avoids the potential interference introduced by the switching circuit. Use dc
switching supplies only when this is the only reliable source of power. Loop powered transducers
are a cost effective means of supplying dc power to a transducer where isolation is not
required between supply and output. The output of this type is always 4 to 20 mA. The transducer
is connected via its signal wires to a low voltage dc supply and acts as a current regulator in
proportion to the measured signal. As long as a certain specified voltage remains at the transducer
terminals the rest of the voltage can be dropped across series meters and cabling in the two wire
loop between the transducer and the power supply.
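The loop voltage budget described above can be sketched numerically. This is an illustrative fragment: the 24V supply and 12V minimum terminal voltage are assumptions for the example, not C&C specifications.

```python
# Sketch of the two-wire loop budget. The supply voltage, the minimum
# voltage the transducer needs at its terminals, and the 20 mA full-scale
# current fix the resistance available for meters and cabling.

def max_loop_resistance(supply_v: float, min_terminal_v: float,
                        full_scale_ma: float = 20.0) -> float:
    """Ohms available for series meters and cabling in the 4-20 mA loop."""
    return (supply_v - min_terminal_v) / (full_scale_ma / 1000.0)

print(max_loop_resistance(24.0, 12.0))  # 600.0 ohms
```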
CURRENT TRANSDUCERS
Two basic types of current transducers are recognised.
a. Average reading types
Average reading current transducers assume that the measured current waveform is an
undistorted sine wave. In most cases this is a valid assumption. These transducers are simple,
economical and reliable. The following models of average reading current transducer are
available from C&C:
i. Models T-IS/LP-IS are self powered transducers which do not require an external power source
as they take their power from the current signal. As they are self powered they do not come in a
live zero (4 to 20 mA) version.
ii. Model LP-ISX3 is a triple version of the LP-IS with three separate transducers housed in one enclosure.
iii. Models T-IP/LP-IP are separately powered transducers which are used if a 4 to 20 mA output
is required or if the burden on the current source is required to be low.
iv. Models T-IL/LP-IL are loop powered (two wire type) transducers which require an external dc
power source. They only provide a 4 to 20 mA response.
v. Model T-ILE is a loop powered transducer with an integral current transformer. This is a highly
economical transducer to employ as the cost of a separate CT is covered by the unit which also
occupies much less space than a separate CT and transducer.
vi. Model LP-ID is a microprocessor based transducer with a Modbus output on a twisted pair RS
485 bus system.
b. RMS reading type
Non-linear loads such as rectifiers, variable speed drives, computer supplies and other
types of electronic circuits can
distort the current waveform and make the wave shape non-sinusoidal. Under these
circumstances an average reading transducer may not give an accurate reading and a root mean
square (rms) reading type of transducer must be used.
The following type is available from C&C.
i. Model T-IRMS is a separately powered rms responding current transducer which is similar to
the T-IP transducer in that it presents low input load and can provide any type of output.
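The difference between the two reading principles above can be sketched as follows. This is an illustrative Python fragment: an average reading unit rectifies and averages, then scales by the sine form factor so that it reads true rms for a pure sine wave; the waveforms and the 0.3 third-harmonic level are assumptions for the example.

```python
# Sketch: why an average reading transducer mis-reads distorted currents.
# The form factor pi/(2*sqrt(2)) ~ 1.111 makes the rectified average equal
# the true rms for a pure sine wave; add a 3rd harmonic and they diverge.
import math

def sample(wave, n=10000):
    return [wave(2 * math.pi * k / n) for k in range(n)]

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

def average_reading(xs):
    return (math.pi / (2 * math.sqrt(2))) * sum(abs(x) for x in xs) / len(xs)

sine = sample(math.sin)
distorted = sample(lambda t: math.sin(t) + 0.3 * math.sin(3 * t))

print(round(rms(sine), 4), round(average_reading(sine), 4))            # agree
print(round(rms(distorted), 4), round(average_reading(distorted), 4))  # differ
```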
VOLTAGE TRANSDUCERS
As with current transducers, two basic types of voltage transducers are recognised.
a. Average reading types
Average reading voltage transducers also require a sinusoidal input waveform. The following
types are available from C&C.
i. Models T-VS/LP-VS are self powered voltage transducers which take their power from the
input voltage signal. Not available in a 4 to 20 mA version.
ii. Model LP-VSX3 is a triple version of the LP-VS with three separate transducers in one enclosure.
iii. Models T-VP/LP-VP are separately powered transducers used for 4 to 20 mA outputs and
where a low input load is desired.
iv. Models T-VL/LP-VL are loop powered (two wire) transducers which require an external source
of dc power. Only available with 4 to 20 mA response.
v. Models T-VP/LP-VP and T-VL/LP-VL are also available in suppressed zero versions, also
commonly known as window type. This type of transducer is only active in the top part of the
voltage range, thus giving a more sensitive reading concentrated in an area which is deemed to
be important. A typical transducer for a working voltage of 110V would read from 90V to 130V.
b. RMS reading type
Distorted voltage waveforms are less commonly encountered than distorted current waveforms;
however, for these circumstances an rms transducer is available from C&C.
i. Model T-VRMS is a separately powered root mean square responding voltage transducer.
POWER TRANSDUCERS
Power transducers combine voltage and current signals to produce a signal proportional
to the type of power being measured. Power transducers measure either active power (Watts),
reactive power (Vars) or apparent power (VA).
(a) Active power (Watt) transducers
The active power in a load represents the energy being converted into non electrical
forms such as heat, light and mechanical work. It is represented by the voltage across the load
multiplied by the in-phase component of the current through the load. Active power transducers
obtain a power signal from the input current and voltage signals by passing them through an
electronic multiplier. When this power signal is averaged it represents the true power independent
of phase angle and wave shape. Single phase transducers have a single multiplying element
whilst three phase transducers have up to three multiplying elements depending on their type. A
variety of power transducers are available to suit various purposes. The simpler types, whilst they
are cheaper in price, often rely on some assumption being made which affects their accuracy.
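The multiplying-and-averaging principle described above can be sketched numerically. This is an illustrative fragment using unit-rms sine waveforms; the function name and sample count are assumptions for the example.

```python
# Sketch of the multiplying element: averaging the instantaneous product
# v(t)*i(t) over a cycle yields active power regardless of phase angle.
import math

def active_power(phase_rad: float, n: int = 10000) -> float:
    """Mean of v*i for unit-rms sine voltage and current, phase apart."""
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        v = math.sqrt(2) * math.sin(t)
        i = math.sqrt(2) * math.sin(t - phase_rad)
        total += v * i
    return total / n

print(round(active_power(0.0), 3))          # 1.0 (unity power factor)
print(round(active_power(math.pi / 3), 3))  # 0.5 (cos 60 degrees)
```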
The following types are available from Carrel and Carrel.
i. T-1W1/LP-1W1 are single phase power transducers reading from corresponding voltage and
current signals. Economical versions, types T-1W1S/LP-1W1S, are in a more compact housing
and have a unidirectional output.
ii. T-1W3/LP-1W3 are three phase power transducers which assume that the system measured
presents a balanced load ie all three voltages and currents are the same. This assumption is valid
for certain devices such as three phase electric motors. The transducers have one multiplying
element which measures the power on one of the phases and assumes that the other two phases
are the same. A single current and three phase to phase voltage inputs are required. These
transducers permit a considerable economy as they are not only less expensive than other types
but also require only one CT. Economical versions of these transducers, types T-1W3S/LP-1W3S,
are in a compact housing and have a unidirectional output.
iii. T-1W4/LP-1W4 are single element three phase power transducers for balanced loads. They
are similar to the 1W3 version but are intended for applications where only a phase to neutral
voltage signal is available. Economical versions, types T-1W4S/LP-1W4S are in a compact
housing and have a unidirectional output.
iv. T-2W3/LP-2W3 are three phase power transducers for unbalanced loads. They contain two
multiplying elements which measure power using the well known Two Wattmeter method of
power measurement. In this method it is assumed that current flows only in the three phase wires
and does not return down a neutral wire. Such a system is referred to as a three wire system.
Transducers based on this method require only two current inputs and are able to work when the
supply and load is unbalanced. High voltage transmission circuits are often three wire
systems and this transducer offers an economical means to measure power (on such systems)
as it requires only two VTs and two CTs. The quality of instrument transformers used with this
transducer is important as there is an inherent phase shift between current and voltage inputs
which accentuates any inaccuracy in the transformers.
v. T-3W4 is a three phase power transducer for unbalanced loads and supplies, with an active
neutral (four wire system). The circuitry contains three multiplying elements, one for each phase
(Three Wattmeter method). As each phase is monitored separately the transducer provides a
measurement which is independent of any assumptions and can be used on any system. The
main disadvantage of using this transducer is the relatively higher cost and the need for three
CTs and VTs. This transducer is most often used at mains potential where no VTs are required
and the CTs are relatively inexpensive.
vi. T-2.5W4/LP-2.5W4 are three phase power transducers for unbalanced loads and a balanced
supply, with an active neutral. The circuitry contains two multiplying elements with an
arrangement to permit the inclusion of a third current circuit. This type of transducer is a
substitute for a proper three element transducer. Carrel and Carrel have included them in their
range so as not to disadvantage themselves against competitors carrying this type instead of
three element types. The use of these so called "Two and a half element" transducers can only
be justified when cost is of overriding importance.
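The Two Wattmeter method used by the 2W3 types above can be sketched numerically for the balanced case. This is an illustrative fragment: the 400V/10A/0.8 power factor figures are assumptions for the example, and the ±30° terms reflect the standard phase relationships of the method on a balanced system.

```python
# Sketch of the Two Wattmeter method on a three wire system. For a
# balanced load at angle phi the two elements read V*I*cos(phi +/- 30deg);
# they differ individually but always sum to the true three phase total.
import math

def two_wattmeter_total(v_line: float, i_line: float, phi_rad: float) -> float:
    w1 = v_line * i_line * math.cos(phi_rad + math.pi / 6)
    w2 = v_line * i_line * math.cos(phi_rad - math.pi / 6)
    return w1 + w2

# Balanced 400 V, 10 A load at power factor 0.8:
phi = math.acos(0.8)
total = two_wattmeter_total(400.0, 10.0, phi)
expected = math.sqrt(3) * 400.0 * 10.0 * 0.8  # sqrt(3)*VL*IL*cos(phi)
print(round(total, 1), round(expected, 1))
```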
(b) Reactive power (Var) transducers
Reactive power transducers measure the quadrature component of the instantaneous
power, which is the product of the voltage and the reactive (quadrature) current. Reactive power
does no work in the load and represents the instantaneous power stored in the reactive
components (inductance and capacitance) of the load. Although this current component performs
no useful work in the load it still increases the overall current level in the supply network and
thereby causes increased transmission losses. For this reason most power suppliers try to
minimise the reactive power flowing in their systems and need to measure this quantity.
The range of reactive power transducers is similar to the active power transducers except that
internal voltage signals are displaced by 90° before being applied to the multipliers. These
displaced voltages are derived in a number of ways which tend to restrict the reading conditions.
Reactive power can be inductive (lagging) or capacitive (leading). Carrel and Carrel use the
convention that lagging signals are positive and leading signals are negative. The following
reactive power transducers are available from Carrel and Carrel.
i. T-1V1 is a single phase reactive power transducer. To achieve the 90° internal phase shift a
capacitive phase shifting circuit is used. Unfortunately this circuit is also frequency sensitive
which means that the transducer is only accurate at one particular supply frequency. Due to this
restriction this transducer should only be used where no alternative is suitable.
ii. T-1V4 is a three phase reactive power transducer for balanced loads where one current and
only a single matching phase to neutral voltage is available. The internal circuitry is the same as
the single phase transducer and is also restricted to a particular frequency. Due to this restriction
this transducer should only be used where no alternative is suitable.
iii. T-1V3/LP-1V3 are single element three phase reactive power transducers for balanced loads
where one current and three phase voltages are available. The necessary 90° phase shift is
given to the voltage signal by taking this voltage from the two phases opposite the current phase.
As the phase shift relies on the symmetry of the three phase supply this supply must be
balanced, however the shift is independent of frequency unlike the single phase types.
Economical versions, types T-1V3S/LP-1V3S, have a compact enclosure and a unidirectional output.
iv. T-2V3/LP-2V3 are two element three phase reactive power transducers for unbalanced loads
on three wire systems with balanced voltages. They require two current signals and are
independent of supply frequency.
v. T-3V4 is a three element three phase reactive power transducer for unbalanced loads on four
wire systems with balanced voltages. It requires three current signals and is independent of
supply frequency.
vi. T-2.5V4/LP-2.5V4 are two element three phase reactive power transducers for unbalanced
loads on four wire systems with balanced voltages. They represent a cheaper compromise for
four wire systems.
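The 90° displacement principle behind these reactive transducers can be sketched as follows. This is an illustrative fragment using unit-rms waveforms; the function name and sample count are assumptions for the example.

```python
# Sketch of the reactive measurement principle: displace the voltage by
# 90 degrees before the multiplier and the averaged product becomes
# V*I*sin(phi), positive for a lagging (inductive) current under the
# C&C sign convention.
import math

def reactive_power(phase_rad: float, n: int = 10000) -> float:
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        v_shifted = math.sqrt(2) * math.sin(t - math.pi / 2)  # 90 deg shift
        i = math.sqrt(2) * math.sin(t - phase_rad)            # lagging by phi
        total += v_shifted * i
    return total / n

print(round(reactive_power(math.pi / 2), 3))  # 1.0 for a purely inductive load
print(round(reactive_power(0.0), 3))          # 0.0 at unity power factor
```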
(c) Apparent power (VA) transducers
The apparent power of a load is calculated by multiplying the rms voltage across the load by the
rms current through the load without regard to the phase angle between them. It represents the
power present if the power factor was unity. Apparent power is useful as a measure of stress on
components that are affected by both current and voltage such as generators, motors,
transformers and cables. To obtain an apparent power signal one must first obtain dc signals
proportional to voltage and current and then multiply them. In most cases voltage waveforms are
less distorted than current waveforms so C&C transducers multiply the averaged voltage signal
with the rms current signal. Due to the lack of phase angle influence VA transducer outputs
are always unidirectional. The following VA transducers are available from Carrel and Carrel.
i. T-1VA1 is a single phase VA transducer.
ii. T-1VA3 is a three phase VA transducer for balanced loads on a balanced three or four wire system.
iii. T-1VA4 is a three phase VA transducer for balanced loads on a balanced four wire system
where only a single phase to neutral voltage is available.
iv. T-2VA3 is a three phase VA transducer for unbalanced loads on a balanced or unbalanced
three wire system ie no neutral.
v. T-3VA4 is a three phase VA transducer for unbalanced loads on a balanced or unbalanced four
wire system ie with neutral wire.
(d) Universal (multifunction) transducers
Using microprocessor techniques a transducer can be produced which will perform a large
number of functions. Thus one device, with one set of three phase current and voltage inputs can
provide a full range of electrical measurements including currents, voltages and power
measurements as well as frequency and power factor. In addition integral functions can be
performed to obtain energy quantities such as kWhrs. Data logging and storage of data is also
provided in some cases, as well as the recording of maximum and minimum quantities.
C&C have a number of such devices available, with or without liquid crystal displays to provide
local indication of electrical measurements.
(e) Some notes concerning power transducers
(i) Power flow and polarity
Power flow in a system is positive when power flows from the generation source to the load. At a
certain point in a complicated system the direction of power flow could be positive or negative as
conditions change. Under these circumstances power transducers need to have a bidirectional
output, which is usually achieved by having an output which can go positive or negative (bipolar
output). This method works well when the output has a measurement zero which is also an output
zero for example a 0 to 20mA output which then becomes a -20 to 0 to +20mA output. Live zero
transducers, such as those with 4 to 20mA outputs, cannot go negative and hence require the
zero to be placed somewhere in the middle of the working range. Often the likely power flow in
one direction is less than that in the opposite direction so that the zero does not have to be
placed in mid range. In this case the zero is always placed somewhere in the lower half of the
scale. It is normal practice to have equal sensitivity on both negative and positive power ranges.
Although it is possible to have transducers where one polarity is more sensitive than the other this
practice is not encouraged.
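The live zero scaling described above can be sketched as follows. The -500 W to +1500 W range and the 4 to 20 mA output type are illustrative values, not a C&C specification; because the mapping is linear the sensitivity is the same for both power directions.

```python
def power_to_live_zero_ma(power_w, p_min=-500.0, p_max=1500.0):
    """Map a bidirectional watt reading onto a 4-20 mA live-zero output.

    With the illustrative -500 W..+1500 W span the measurement zero lands
    at 8 mA, i.e. in the lower half of the scale as the text suggests for
    systems where reverse power flow is the smaller of the two.
    """
    power_w = max(p_min, min(p_max, power_w))  # clamp to the configured span
    return 4.0 + 16.0 * (power_w - p_min) / (p_max - p_min)
```

Zero power reads 8 mA, full reverse power reads 4 mA and full forward power reads 20 mA.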
(ii) Connections
All current and voltage inputs to a power transducer must be correctly connected for a correct
output to be obtained. If the voltage or current signals are reversed then the output will be reversed. If
a single input is reversed compared to the other inputs then the output is reduced or even lost
completely. If the wrong voltage and current signals are paired with each other then the situation becomes
complex and difficult to interpret. Carrel and Carrel have produced a leaflet entitled "T-series
power transducers - Installation instructions" which details how to go about ensuring that power
transducers are correctly connected. In cases where all the CTs have been connected in the
reverse direction reversing the output will only correct the situation if the output is truly
bidirectional. In the case of live zero transducers such as those with 4 to 20mA outputs the
output will not go below the live zero and may even appear to be not working. In this case the CT
leads must be reversed, an action which often requires the power to be switched off, which may
not be convenient at the time.
(iii) Integration of power signals
Power transducer signals can be time integrated by an additional circuit to yield
units such as Joules or kWhrs from Watt signals. This integrating circuit can be built into
certain of the transducers as an integral unit.
(iv) Range specification
Power transducers can be set to any input range and do not necessarily have to be
specified as VI for single phase or 1.732VI for three phase (where V and I are nominal voltage
and current). However the greatest accuracy will be achieved in the region of these values and it
pays to choose a range which has been rounded off close to them. Carrel and Carrel prefer the
following standard values for three phase power transducers.
V       I      1.73VI        Preferred
110V    1A     190.53W       200W
110V    5A     952.63W       1000W
400V    1A     692.82W       700W
400V    5A     3464.10W      3500W
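The 1.73VI column above can be reproduced with a few lines of Python. The snippet only computes the nominal full-scale values; the preferred value itself is a round-number engineering choice rather than a formula.

```python
import math

# VT/CT secondary nominals from the table above
NOMINAL_PAIRS = [(110, 1), (110, 5), (400, 1), (400, 5)]

def three_phase_full_scale(v, i):
    """Nominal three phase full-scale power: sqrt(3) * V * I."""
    return math.sqrt(3) * v * i

for v, i in NOMINAL_PAIRS:
    print(f"{v}V x {i}A -> {three_phase_full_scale(v, i):.2f}W")
```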
Whilst the value of 110V is universal for VT secondaries some countries use 380V or 415V for 3
phase supplies. These values are close enough to 400V as to make no difference when using the
preferred values above.
In some cases transducers are specified in terms of the actual power units in the system being
measured. This action requires a knowledge of the voltage and current transformer ratios.
Transducers such as these are confined to a certain position within a system.
It is often more convenient to use the preferred values and multiply the outputs by the relevant
VT and CT ratios to obtain the true readings. With this method transducers can be interchanged
easily and a minimum of spare transducers carried to cover eventualities.
When the CTs in a system have been over sized it may be necessary to greatly increase the
sensitivity of a transducer to fully cover the power range, with a subsequent loss of overall
accuracy. This situation should be avoided by choosing CTs that match the capacity of the
system being measured.
(v) Combining readings
It is often necessary to sum a number of power readings from different transducers. With
transducers whose outputs are galvanically isolated from each other the outputs can be
combined by putting them in parallel for current outputs or in series for voltage outputs. The
sensitivity of the transducers must be the same with the same input to output ratios.
Where this is not the case a summing circuit must be used.
PHASE ANGLE TRANSDUCERS
Phase angle transducers provide a signal which is linearly proportional to the phase
angle between a voltage and a current in a power system. This type of transducer is often called
a power factor transducer, which is technically incorrect as power factor is proportional to the
cosine of the phase angle and as such is non linear. Many operators prefer the phase angle to be
expressed as a power factor, which is easily accomplished using specially scaled moving
coil meters. Other types of meters and systems require computation to convert phase angle to
power factor. Phase angle transducers have limitations of their use which should be noted. Most
units only work off one phase and are only valid for balanced three phase systems. A certain
minimum current level is needed to obtain a valid reading and ambiguous readings are likely
when these levels are too low. Serious operators who wish to monitor reactive currents in their
system are advised to employ reactive power transducers rather than phase angle transducers.
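The distinction drawn above between phase angle and power factor is simply the cosine function, which can be sketched as follows; the function name is illustrative.

```python
import math

def power_factor(phase_angle_deg):
    """Power factor is the cosine of the phase angle. The mapping is
    nonlinear, which is why a linear phase-angle signal cannot simply be
    rescaled to read power factor - it needs either a specially scaled
    meter face or a computation like this one."""
    return math.cos(math.radians(phase_angle_deg))
```

A 60° angle (lagging or leading) corresponds to Pf 0.5, and 0° to unity power factor.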
The range of phase angle chosen depends on the behaviour of the system being measured. As
with all transducers one should attempt to match the transducer range to the operating range of
the system. For systems which contain inductive elements only, a range of -60° to 0° (Pf 0.5
lagging to Pf 1) is a practical one to choose. For stability reasons many power systems are
operated slightly lagging and are unlikely to go very much leading.
A good range for this type of system would be from -60° to 0° to +30° (Pf 0.5 lagging to
Pf 1 to Pf 0.86 leading). Where the system can go quite heavily leading the range -60° to 0° to
+60° (Pf 0.5 lagging to Pf 1 to Pf 0.5 leading) is the best choice, especially with bidirectional
outputs such as -20 mA to 0 to +20 mA. The following phase angle transducers are available from
Carrel and Carrel.
i. T-PF1 is a single phase phase angle transducer. Certain range types require the use of a
quadrature phase shifter which may restrict the frequency range of operation.
ii. T-PF3 is a three phase phase angle transducer for balanced loads. A current signal and a
voltage signal from the two opposite phases are required.
FREQUENCY TRANSDUCERS
Frequency transducers provide a signal which is proportional to the frequency of a supply
(mains) voltage. As power frequencies are centred on certain standard values it is normal
practice to produce these transducers as suppressed zero (window) types with the standard
frequency at the centre of the range. For supply frequencies of 50 Hz the most popular ranges
are 45 - 55 Hz or 48 - 52 Hz depending on the stability of the system. As with most window types
the absolute accuracy remains the same whilst the relative accuracy changes with the width of
the window. Carrel and Carrel supply either the T-FM or the LP-FM. Both units are controlled by a
quartz crystal oscillator for stability.
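The note on window width and accuracy can be made concrete. With a fixed absolute error in Hz (set, for example, by the crystal timebase), the error expressed as a percentage of the window span grows as the window narrows; the 0.01 Hz figure below is illustrative.

```python
def relative_accuracy_pct(abs_error_hz, span_lo_hz, span_hi_hz):
    """Error as a percentage of the window span, for a fixed absolute
    error in Hz. A narrower window makes the same absolute error a
    larger fraction of the span."""
    return 100.0 * abs_error_hz / (span_hi_hz - span_lo_hz)
```

A 0.01 Hz absolute error is 0.1% of a 45-55 Hz window but 0.25% of a 48-52 Hz window.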
SIGNAL CONVERTORS
Signal convertors exist for ac and dc inputs. The term ac signal convertor usually refers
to a transducer for low level ac signals whilst dc signal convertors handle the whole range of dc
voltages and currents from low levels to high levels. In general signal convertors change the input
into a standard transducer type output and provide galvanic isolation between input and output.
They are useful when one type of transducer signal needs to be changed into a different type
of signal or if the point of measurement is at a different potential from that of the instrumentation.
In a system with conflicting potentials, problems can be caused by unwanted stray currents and
short circuits. By placing one or more signal convertors at strategic points in the system this
conflict can be removed. Signal convertors are also useful for isolating parts of the system likely
to carry high potentials under fault conditions such as during lightning strikes. When they are
used in conjunction with surge diverters hazardous transients can be prevented from entering
vulnerable parts of the system. Carrel and Carrel supply a number of different types of signal
convertor.
i. The T-CDV/T-CDI range of signal convertors has two separate printed circuit boards for input and
output signals. Each board has its own power supply to maintain the isolation. A modulated light
beam passes the signal between the two boards to maintain a very high level of isolation between
input and output. The following variations are available with the usual range of output values
including bipolar types.
T-CAI - low level ac current input
T-CAV - low level ac voltage input
T-CDI - dc current input
T-CDV - dc voltage input
ii. T-CDU is a low cost dc signal convertor for less exacting applications. It has a single pc board
with optocoupler isolation and a unipolar output. An auxiliary supply is required.
iii. T-CDL is a dc signal convertor with a self powered input and a loop powered output. Designed
primarily for 4 to 20mA input signals it is most useful for isolating a large number of such signals
at a comparatively low cost.
Outputs of 4 to 20 mA (2 wire system) or 0 to 5/10 V (3 wire system) are available.
iv. LP-CDU is a dc signal convertor with current or voltage input and 4 - 20mA dc output. A dc or
ac auxiliary supply is required. This convertor has a very fast response time which makes it
suitable for protection and control circuits.
TAP POSITION AND RESISTANCE TRANSDUCERS
(a) Tap position transducers
Tap position transducers convert the signal from a sensor on a power transformer tap
changer mechanism into a standard transducer signal. Carrel and Carrel produce two different
types of transducer to accommodate the two basic types of sensor used for this purpose.
i. T-TPA is a tap change transducer for ac signals from a variable reluctance sensor. This type of
sensor has a sensor coil which moves in a magnetic field set up by a set of coils energised by an
ac voltage. The voltage induced in the moving coil is governed by the position of the coil. As the
energising voltage also affects the signal this signal varies in proportion to any changes in supply
voltage. The transducer contains an analogue divider circuit which divides the sensor signal by
the energising voltage to obtain a signal independent of supply voltage fluctuations. Both inputs
are isolated from the output. This transducer plus suitable metering is widely used to replace
obsolete cross field instruments and is not likely to be used in any new systems.
ii. T-TPR is a tap change transducer working off a potentiometer attached to the tap changer
mechanism. This system is suitable for any new designs as well as to replace existing cross field
instruments. The transducer places a regulated dc voltage across the potentiometer and senses
the voltage present at the moving wiper of the potentiometer. The transducer provides galvanic
isolation between the potentiometer and the output circuit.
(b) Resistance transducer
The T-R transducer provides a signal linearly proportional to an external resistance by injecting a
regulated current into the resistor and measuring the voltage dropped across it. It is used with
positional transducers and other such variable resistance devices. The resistance measured must
not be carrying current from any other source at the time of measurement.
TEMPERATURE TRANSDUCERS
Temperature transducers convert the signal from a sensor into a standard transducer
signal linearly proportional to temperature. Carrel and Carrel produce two basic types of
temperature transducer.
(a) Resistive types
Resistive temperature transducers work in conjunction with a resistor which changes its electrical
resistance in response to temperature changes. The universal standard sensor resistor, named
the Pt100 type, consists of a fine Platinum metal wire (or film) with a nominal resistance of 100Ω
at 0°C. Other resistance values (such as 1000Ω) and materials (such as Nickel) are used,
however the Pt100 is by far the most common sensor available. Resistive temperature
transducers need to compensate for the resistance of the connecting leads between the sensor
resistor and the transducer. To perform this function the connection requires a set of conductors
of equal matched resistance in a common cable. Resistive transducers do not have an entirely
linear response and the better types of transducer correct for this non-linearity. Carrel and Carrel
produce the following transducers.
i. T-TR is a resistive temperature transducer which requires an auxiliary supply. The sensor is not
isolated from the output. The three wire method of lead compensation is employed and the input
signal is compensated for non linearity. All the usual standard transducer outputs are available.
ii. T-TRL is a loop powered resistive temperature transducer with three wire lead compensation
and linearisation. The unit requires an external dc supply. There is no galvanic isolation between
the input and the output.
iii. LP-TRD is a digital resistive temperature transducer with a Modbus RS485 output. It exhibits
better than average linearisation due to a double analogue/digital compensation method.
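The non-linearity and its correction can be sketched with the standard IEC 60751 Pt100 equation for temperatures above 0 °C. This is a generic model of the linearisation step, not the compensation method used in any particular C&C unit.

```python
import math

# IEC 60751 coefficients for a Pt100 above 0 degC (Callendar-Van Dusen equation)
R0, A, B = 100.0, 3.9083e-3, -5.775e-7

def pt100_resistance(t_c):
    """R(T) = R0 * (1 + A*T + B*T^2), valid from 0 to 850 degC."""
    return R0 * (1.0 + A * t_c + B * t_c * t_c)

def pt100_temperature(r_ohm):
    """Invert the quadratic - essentially the linearisation a good
    transducer applies to the raw resistance reading."""
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - r_ohm / R0))) / (2.0 * B)
```

A Pt100 reads 100 Ω at 0 °C and about 138.51 Ω at 100 °C; the inverse function recovers the temperature from the resistance.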
(b) Thermocouple transducers
A thermocouple is a device, consisting of two metals in contact, which generates a low level
voltage when a temperature difference occurs across it. A thermocouple transducer amplifies this
low level voltage and converts it into a standard transducer signal. When measuring a
temperature one end of the thermocouple must be kept at that temperature while the other end is
kept at a standard temperature, usually 0°C. In practice this standard temperature is difficult to
maintain so the transducer measures the ambient temperature at the end of the thermocouple
and compensates the signal for the difference from the standard value. This technique is known
as cold junction compensation. Thermocouple signals are not linear and must be corrected for
better accuracy. Carrel and Carrel have one model of thermocouple transducer, the T-TTH, which
has cold junction compensation and linearisation and is energised from an auxiliary supply. Inputs for
thermocouples of the K and J types are routinely available and other types can be supplied on request.
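The cold junction compensation arithmetic can be sketched as follows. The linear Seebeck coefficient is a deliberate simplification (real transducers use the ITS-90 polynomial tables), and the figure of 41 µV/°C for type K is approximate.

```python
# Roughly 41 uV/degC for a type K junction; a linear approximation only -
# real transducers linearise against the ITS-90 reference tables.
SEEBECK_UV_PER_C = 41.0

def cjc_temperature(measured_uv, ambient_c):
    """Cold junction compensation: add the emf the thermocouple would
    develop between 0 degC and the ambient (cold junction) temperature,
    then convert the total back to a hot junction temperature referenced
    to the 0 degC standard."""
    equivalent_uv = measured_uv + ambient_c * SEEBECK_UV_PER_C
    return equivalent_uv / SEEBECK_UV_PER_C
```

With the cold junction at 25 °C, a measured emf corresponding to a 75 °C difference is corrected to a hot junction reading of 100 °C.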
(c) Some notes on temperature measurement
Temperature measurements are often made difficult by the wrong choice of sensor. In
general whilst thermocouples are not as accurate as Pt100 sensors they are smaller and less
expensive. Pt100 sensors are best used at fairly low temperatures (<300°C) whilst
thermocouples are best used at higher temperatures. At very high temperatures (>600°C) type K
thermocouples should be used rather than type J thermocouples. Thermocouple transducers
should be sited in a cool position which usually necessitates placing them remote from the
measurement point and running special thermocouple compensating leads between the sensor
and the transducer. These leads are made of similar material to the thermocouple pair and
effectively extend the thermocouple all the way back to the transducer to ensure that the cold
junction compensation works correctly. Compensation leads are relatively expensive and their
cost must be balanced against the low cost of the thermocouple sensor itself.
INTEGRATORS
Integrators are devices which derive a summated time integral of an input. Where this
input is a signal from a transducer measuring some type of rate related quantity the integral has
real meaning. A good example of a rate related quantity is that of the signal from a power
transducer which, when integrated, becomes a measure of the total energy in Joules, kWhrs or
similar such units. Another example is using the signal from a dc current transducer or a dc shunt
to produce a measure of electric charge in Coulombs or Amphours. Similarly transducer signals for
speed or rate of use can be integrated to provide a quantity measure in many processes.
The input to an integrator can be any dc signal linearly proportional to the required rate
measurement. The output is usually presented as pulses of some type which can be used to
advance a counting device. The pulse rate is proportional to the input signal. The most commonly
used output is a set of relay contacts which can be used to operate a counter. This type of output
has the advantage of galvanic isolation coupled with the ability to accept all sorts of voltages and
currents including ac quantities. The main disadvantage of using a relay is its limited speed of
operation (<1Hz) and limited physical life. Under certain circumstances open collector transistors
and optocouplers can also be used.
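The integrator behaviour described above can be sketched as a simple rectangular-rule accumulation. The sample period and the pulse weight of one pulse per kWh are illustrative choices, not a T-INTP specification.

```python
def integrate_to_pulses(power_samples_w, dt_s, wh_per_pulse=1000.0):
    """Accumulate energy from a sampled watt signal and emit one pulse
    per kWh (wh_per_pulse = 1000), much as a relay-output integrator
    advances a counter. Returns (pulse_count, residual_wh)."""
    energy_wh = 0.0
    pulses = 0
    for p in power_samples_w:
        energy_wh += p * dt_s / 3600.0   # W x s -> Wh
        while energy_wh >= wh_per_pulse:
            energy_wh -= wh_per_pulse
            pulses += 1
    return pulses, energy_wh
```

A steady 3600 W load sampled once per second accumulates 1 Wh per sample, so 3000 samples produce exactly 3 kWh pulses.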
Besides the integrator fitted to certain power transducers Carrel and Carrel provide an
individual integrator, the T-INTP, which has a dc input, pulsed output and requires an auxiliary
supply. Most integrators are produced to customers' individual requirements.
THERMOMETERS
The thermometer is a device that measures temperature or temperature gradient
using a variety of different principles; it comes from the Greek roots thermo, heat, and
meter, to measure. A thermometer has two important elements: the temperature sensor
(e.g. the bulb on a mercury thermometer) in which some physical change occurs with
temperature, plus some means of converting this physical change into a value (e.g. the
scale on a mercury thermometer). Industrial thermometers commonly use electronic
means to provide a digital display or input to a computer.
Thermometers can be divided into two groups according to the level of knowledge about
the physical basis of the underlying thermodynamic laws and quantities. For primary
thermometers the measured property of matter is known so well that temperature can be
calculated without any unknown quantities. Examples of these are thermometers based on
the equation of state of a gas, on the velocity of sound in a gas, on the thermal noise (see
Johnson–Nyquist noise) voltage or current of an electrical resistor, and on the angular
anisotropy of gamma ray emission of certain radioactive nuclei in a magnetic field.
Secondary thermometers are most widely used because of their convenience. Also, they
are often much more sensitive than primary ones. For secondary thermometers
knowledge of the measured property is not sufficient to allow direct calculation of
temperature. They have to be calibrated against a primary thermometer at least at one
temperature or at a number of fixed temperatures. Such fixed points, for example, triple
points and superconducting transitions, occur reproducibly at the same temperature.
Internationally agreed temperature scales are based on fixed points and interpolating
thermometers. The most recent official temperature scale is the International Temperature
Scale of 1990. It extends from 0.65 K (−272.5 °C; −458.5 °F) to approximately 1,358 K
(1,085 °C; 1,985 °F).
Types of thermometers
Figure: Cooking thermometers used to measure the temperature of steamed milk
Thermometers have been built which utilise a range of physical effects to measure
temperature. Most thermometers are originally calibrated to a constant-volume gas
thermometer. Temperature sensors are used in a wide variety of
scientific and engineering applications, especially measurement systems. Temperature
systems are primarily either electrical or mechanical, occasionally inseparable from the
system which they control (as in the case of a mercury thermometer).
Beckmann differential thermometer
Bi-metal mechanical thermometer
Coulomb blockade thermometer
Liquid crystal thermometer
Medical thermometer (e.g. oral thermometer, rectal thermometer, basal thermometer)
Silicon bandgap temperature sensor
Six's thermometer - also known as a maximum-minimum thermometer
Thermometers can be calibrated either by comparing them with other certified
thermometers or by checking them against known fixed points on the temperature scale.
The best known of these fixed points are the melting and boiling points of pure water.
(Note that the boiling point of water varies with pressure, so this must be controlled.)
The traditional method of putting a scale on a liquid-in glass or liquid-in-metal
thermometer was in three stages:
Immerse the sensing portion in a stirred mixture of pure ice and water and mark the
point indicated when it had come to thermal equilibrium.
Immerse the sensing portion in a steam bath at 1 standard atmosphere (101.3 kPa;
760.0 mmHg) and again mark the point indicated.
Divide the distance between these marks into equal portions according to the
temperature scale being used.
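The three steps above amount to a linear interpolation between the two fixed-point marks, which can be sketched as follows; the mark positions (e.g. millimetres along a liquid-in-glass stem) are illustrative.

```python
def temperature_from_marks(reading, ice_mark, steam_mark, scale="celsius"):
    """Divide the interval between the ice-point and steam-point marks
    into equal portions (step 3 above) and read off the temperature."""
    fraction = (reading - ice_mark) / (steam_mark - ice_mark)
    if scale == "celsius":
        return fraction * 100.0
    return 32.0 + fraction * 180.0   # Fahrenheit: 32 at ice, 212 at steam
```

A reading halfway between the marks is 50 °C, and the marks themselves read 32 °F and 212 °F on the Fahrenheit scale.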
Other fixed points used in the past include the body temperature (of a healthy adult
male), which was originally used by Fahrenheit as his upper fixed point (96 °F (36 °C) to
be a number divisible by 12) and the lowest temperature given by a mixture of salt and
ice, which was originally the definition of 0 °F (−18 °C). (This is an example of a
Frigorific mixture). As body temperature varies, the Fahrenheit scale was later changed to
use an upper fixed point of boiling water at 212 °F (100 °C).
These have now been replaced by the defining points in the International
Temperature Scale of 1990, though in practice the melting point of water is more
commonly used than its triple point, the latter being more difficult to manage and thus
restricted to critical standard measurement. Nowadays manufacturers will often use a
thermostat bath or solid block where the temperature is held constant relative to a
calibrated thermometer. Other thermometers to be calibrated are put into the same bath or
block and allowed to come to equilibrium, then the scale marked, or any deviation from
the instrument scale recorded. For many modern devices, calibration consists of stating
some value to be used in processing an electronic signal to convert it to a temperature.
STRAIN GAGES
When external forces are applied to a stationary object, stress and strain are the
result. Stress is defined as the object's internal resisting forces, and strain is defined
as the displacement and deformation that occur. For a uniform distribution of
internal resisting forces, stress can be calculated (Figure 2-1) by dividing the force
(F) applied by the unit area (A):
Stress = F / A
Strain is defined as the amount of deformation per unit length of an object when a
load is applied. Strain is calculated by dividing the total deformation of the original
length by the original length (L):
Strain = ΔL / L
Typical values for strain are less than 0.005 inch/inch and are often expressed in
microstrain units (strain x 10^-6).
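The two definitions can be written as one-line functions; the units in the comments are examples, since both formulas are unit-agnostic.

```python
def stress(force, area):
    """Stress = F / A (e.g. pounds / square inch -> psi)."""
    return force / area

def strain(delta_length, original_length):
    """Strain = total deformation / original length; dimensionless and
    commonly quoted in microstrain (strain x 1e6)."""
    return delta_length / original_length
```

A 1000 lb force over 2 in² gives 500 psi, and a 0.002 in extension of a 1 in gage length is 2000 microstrain.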
Strain may be compressive or tensile and is typically measured by strain gages. It
was Lord Kelvin who first reported in 1856 that metallic conductors subjected to
mechanical strain exhibit a change in their electrical resistance. This phenomenon
was first put to practical use in the 1930s.
Figure 2-1: Definitions of Stress & Strain
Fundamentally, all strain gages are designed to convert mechanical motion into an
electronic signal. A change in capacitance, inductance, or resistance is proportional
to the strain experienced by the sensor. If a wire is held under tension, it gets
slightly longer and its cross-sectional area is reduced. This changes its resistance (R)
in proportion to the strain sensitivity (S) of the wire's resistance. When a strain is
introduced, the strain sensitivity, which is also called the gage factor (GF), is given by:
GF = (ΔR/R) / (ΔL/L)
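The gage factor relation can be sketched directly from its definition; the 120 Ω resistance and GF of 2 used below are typical textbook values for a metallic foil gage, not properties of a specific device.

```python
def gage_factor(delta_r, resistance, delta_l, length):
    """GF = (dR/R) / (dL/L): fractional resistance change per unit strain."""
    return (delta_r / resistance) / (delta_l / length)

def resistance_change(resistance, gf, strain):
    """Expected resistance change for a given strain: dR = GF * strain * R."""
    return gf * strain * resistance
```

A 120 Ω gage with GF = 2 changes by only 0.24 Ω at 1000 microstrain, which is why bridge circuits are needed to read it.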
The ideal strain gage would change resistance only due to the deformations of the
surface to which the sensor is attached. However, in real applications, temperature,
material properties, the adhesive that bonds the gage to the surface, and the
stability of the metal all affect the detected resistance. Because most materials do
not have the same properties in all directions, a knowledge of the axial strain alone
is insufficient for a complete analysis. Poisson, bending, and torsional strains also
need to be measured. Each requires a different strain gage arrangement.
Shearing strain considers the angular distortion of an object under stress. Imagine
that a horizontal force is acting on the top right corner of a thick book on a table,
forcing the book to become somewhat trapezoidal (Figure 2-2). The shearing strain
in this case can be expressed as the angular change in radians between the vertical
y-axis and the new position. The shearing strain is the tangent of this angle.
Figure 2-2: Shearing Strain
Poisson strain expresses both the thinning and elongation that occurs in a strained
bar (Figure 2-3). Poisson strain is defined as the negative ratio of the strain in the
transverse direction (caused by the contraction of the bar's diameter) to the strain in
the longitudinal direction. As the length increases and the cross sectional area
decreases, the electrical resistance of the wire also rises.
Figure 2-3: Poisson Strain
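The definition above translates directly into code; the strain values used below are illustrative, chosen to give the ratio of about 0.3 typical of most metals.

```python
def poisson_ratio(transverse_strain, axial_strain):
    """Poisson's ratio: the negative ratio of transverse strain to axial
    (longitudinal) strain. Transverse strain is negative in a stretched
    bar, so the ratio itself comes out positive."""
    return -transverse_strain / axial_strain
```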
Bending strain, or moment strain, is calculated by determining the relationship
between the force and the amount of bending which results from it. Although not as
commonly detected as the other types of strain, torsional strain is measured when
the strain produced by twisting is of interest. Torsional strain is calculated by dividing
the torsional stress by the torsional modulus of elasticity.
The deformation of an object can be measured by mechanical, optical,
acoustical, pneumatic, and electrical means. The earliest strain gages were
mechanical devices that measured strain by measuring the change in length and
comparing it to the original length of the object. For example, the extension meter
(extensiometer) uses a series of levers to amplify strain to a readable value. In
general, however, mechanical devices tend to provide low resolutions, and are bulky
and difficult to use.
Figure 2-4: Strain Gage Designs
Optical sensors are sensitive and accurate, but are delicate and not very
popular in industrial applications. They use interference fringes produced by optical
flats to measure strain. Optical sensors operate best under laboratory conditions.
The most widely used characteristic that varies in proportion to strain is electrical
resistance. Although capacitance and inductance-based strain gages have been
constructed, these devices' sensitivity to vibration, their mounting requirements,
and circuit complexity have limited their application. The photoelectric gage uses a
light beam, two fine gratings, and a photocell detector to generate an electrical
current that is proportional to strain. The gage length of these devices can be as
short as 1/16 inch, but they are costly and delicate.
The first bonded, metallic wire-type strain gage was developed in 1938. The
metallic foil-type strain gage consists of a grid of wire filament (a resistor) of
approximately 0.001 in. (0.025 mm) thickness, bonded directly to the strained
surface by a thin layer of epoxy resin (Figure 2-4A). When a load is applied to the
surface, the resulting change in surface length is communicated to the resistor and
the corresponding strain is measured in terms of the electrical resistance of the foil
wire, which varies linearly with strain. The foil diaphragm and the adhesive bonding
agent must work together in transmitting the strain, while the adhesive must also
serve as an electrical insulator between the foil grid and the surface.
When selecting a strain gage, one must consider not only the strain
characteristics of the sensor, but also its stability and temperature sensitivity.
Unfortunately, the most desirable strain gage materials are also sensitive to
temperature variations and tend to change resistance as they age. For tests of
short duration, this may not be a serious concern, but for continuous industrial
measurement, one must include temperature and drift compensation.
Each strain gage wire material has its characteristic gage factor, resistance,
temperature coefficient of gage factor, thermal coefficient of resistivity, and
stability. Typical materials include Constantan (copper-nickel alloy), Nichrome V
(nickel-chrome alloy), platinum alloys (usually tungsten), Isoelastic (nickel-iron
alloy), or Karma-type alloy wires (nickel-chrome alloy), foils, or semiconductor
materials. The most popular alloys used for strain gages are copper-nickel alloys and nickel-chromium alloys.
In the mid-1950s, scientists at Bell Laboratories discovered the piezoresistive
characteristics of germanium and silicon. Although the materials exhibited
substantial nonlinearity and temperature sensitivity, they had gage factors more
than fifty times, and sensitivity more than 100 times, that of metallic wire or foil
strain gages. Silicon wafers are also more elastic than metallic ones. After being
strained, they return more readily to their original shapes.
Around 1970, the first semiconductor (silicon) strain gages were developed for the
automotive industry. As opposed to other types of strain gages, semiconductor
strain gages depend on the piezoresistive effects of silicon or germanium and
measure the change in resistance with stress as opposed to strain. The
semiconductor bonded strain gage is a wafer with the resistance element diffused
into a substrate of silicon. The wafer element usually is not provided with a
backing, and bonding it to the strained surface requires great care as only a thin
layer of epoxy is used to attach it (Figure 2-4B). The size is much smaller and the
cost much lower than for a metallic foil sensor. The same epoxies that are used to
attach foil gages also are used to bond semiconductor gages.
While the higher unit resistance and sensitivity of semiconductor wafer sensors
are definite advantages, their greater sensitivity to temperature variations and
tendency to drift are disadvantages in comparison to metallic foil sensors. Another
disadvantage of semiconductor strain gages is that the resistance-to-strain
relationship is nonlinear, varying 10-20% from a straight-line equation. With
computer-controlled instrumentation, these limitations can be overcome through software compensation.
A further improvement is the thin-film strain gage that eliminates the need
for adhesive bonding (Figure 2-4C). The gage is produced by first depositing an
electrical insulation (typically a ceramic) onto the stressed metal surface, and then
depositing the strain gage onto this insulation layer. Vacuum deposition or
sputtering techniques are used to bond the materials molecularly.
Because the thin-film gage is molecularly bonded to the specimen, the installation
is much more stable and the resistance values experience less drift. Another
advantage is that the stressed force detector can be a metallic diaphragm or beam
with a deposited layer of ceramic insulation.
Diffused semiconductor strain gages represent a further improvement in strain
gage technology because they eliminate the need for bonding agents. By
eliminating bonding agents, errors due to creep and hysteresis also are eliminated.
The diffused semiconductor strain gage uses photolithography masking techniques
and solid-state diffusion of boron to molecularly bond the resistance elements.
Electrical leads are directly attached to the pattern (Figure 2-4D).
The diffused gage is limited to moderate-temperature applications and
requires temperature compensation. Diffused semiconductors often are used as
sensing elements in pressure transducers. They are small, inexpensive, accurate
and repeatable, provide a wide pressure range, and generate a strong output
signal. Their limitations include sensitivity to ambient temperature variations, which
can be compensated for in intelligent transmitter designs.
In summary, the ideal strain gage is small in size and mass, low in cost, easily
attached, and highly sensitive to strain but insensitive to ambient or process temperature variations.
Figure 2-5: Bonded Resistance
Strain Gage Construction
Bonded Resistance Gages
The bonded semiconductor strain gage was schematically described in Figures 2-
4A and 2-4B. These devices represent a popular method of measuring strain. The
gage consists of a grid of very fine metallic wire, foil, or semiconductor material
bonded to the strained surface or carrier matrix by a thin insulating layer of epoxy
(Figure 2-5). When the carrier matrix is strained, the strain is transmitted to the grid
material through the adhesive. The variations in the electrical resistance of the grid
are measured as an indication of strain. The grid shape is designed to provide
maximum gage resistance while keeping both the length and width of the gage to a minimum.
Bonded resistance strain gages have a good reputation. They are relatively
inexpensive, can achieve overall accuracy of better than ±0.10%, are available in a
short gage length, are only moderately affected by temperature changes, have small
physical size and low mass, and are highly sensitive. Bonded resistance strain gages
can be used to measure both static and dynamic strain.
Typical metal-foil strain gages.
In bonding strain gage elements to a strained surface, it is important that the
gage experience the same strain as the object. With an adhesive material inserted
between the sensors and the strained surface, the installation is sensitive to creep
due to degradation of the bond, temperature influences, and hysteresis caused by
thermoelastic strain. Because many glues and epoxy resins are prone to creep, it is
important to use resins designed specifically for strain gages.
The bonded resistance strain gage is suitable for a wide variety of environmental
conditions. It can measure strain in jet engine turbines operating at very high
temperatures and in cryogenic fluid applications at temperatures as low as −452°F
(−269°C). It has low mass and size, high sensitivity, and is suitable for static and
dynamic applications. Foil elements are available with unit resistances from 120 to
5,000 ohms. Gage lengths from 0.008 in. to 4 in. are available commercially. The
three primary considerations in gage selection are: operating temperature, the
nature of the strain to be detected, and stability requirements. In addition, selecting
the right carrier material, grid alloy, adhesive, and protective coating will guarantee
the success of the application.
Figure 6-3: Early Mechanical Vibration Sensor
Acceleration & Vibration
Early acceleration and vibration sensors were complex mechanical contraptions
(Figure 6-3) and were better suited for the laboratory than the plant floor. Modern
accelerometers, however, have benefited from the advance of technology: their cost,
accuracy, and ease of use all have improved over the years.
Early accelerometers were analog electronic devices that were later converted into
digital electronic and microprocessor-based designs. The air-bag controls of the
automobile industry use hybrid micro-electromechanical systems (MEMS). These
devices rely on what was once considered a flaw in semiconductor design: a
"released layer" or loose piece of circuit material in the microspace above the chip
surface. In a digital circuit, this loose layer interferes with the smooth flow of
electrons, because it reacts with the surrounding analog environment.
In a MEMS accelerometer, this loose layer is used as a sensor to measure
acceleration. In today's autos, MEMS sensors are used in air bag and chassis control,
in side-impact detection and in antilock braking systems. Auto industry acceleration
sensors are available for frequencies from 0.1 to 1,500 Hz, with dynamic ranges of
1.5 to 250 G around 1 or 2 axes, and with sensitivities of 7.62 to 1333 mV/G.
Industrial applications for accelerometers include machinery vibration monitoring to
diagnose, for example, out-of-balance conditions of rotating parts. An accelerometer-
based vibration analyzer can detect abnormal vibrations, analyze the vibration
signature, and help identify its cause.
Another application is structural testing, where the presence of a structural defect,
such as a crack, bad weld, or corrosion can change the vibration signature of a
structure. The structure may be the casing of a motor or turbine, a reactor vessel, or
a tank. The test is performed by striking the structure with a hammer, exciting the
structure with a known forcing function. This generates a vibration pattern that can
be recorded, analyzed, and compared to a reference signature.
Acceleration sensors also play a role in orientation and direction-finding. In such
applications, miniature triaxial sensors detect changes in roll, pitch, and azimuth
(angle of horizontal deviation), or X, Y, and Z axes. Such sensors can be used to
track drill bits in drilling operations, determine orientation for buoys and sonar
systems, serve as compasses, and replace gyroscopes in inertial navigation systems.
Mechanical accelerometers, such as the seismic mass accelerometer, velocity
sensor, and the mechanical magnetic switch, detect the force imposed on a mass
when acceleration occurs. The mass resists the force of acceleration and thereby
causes a deflection or a physical displacement, which can be measured by proximity
detectors or strain gages (Figure 6-3). Many of these sensors are equipped with
dampening devices such as springs or magnets to prevent oscillation.
Industrial accelerometer with associated electronics.
A servo accelerometer, for example, measures accelerations from 1 microG to more
than 50 G. It uses a rotating mechanism that is intentionally imbalanced in its plane
of rotation. When acceleration occurs, it causes an angular movement that can be
sensed by a proximity detector.
Among the newer mechanical accelerometer designs is the thermal accelerometer:
This sensor detects position through heat transfer. A seismic mass is positioned
above a heat source. If the mass moves because of acceleration, the proximity to the
heat source changes and the temperature of the mass changes. Polysilicon
thermopiles are used to detect changes in temperature.
In capacitance sensing accelerometers, micromachined capacitive plates (CMOS
capacitor plates only 60 microns deep) form a mass of about 50 micrograms. As
acceleration deforms the plates, a measurable change in capacitance results. But
piezoelectric accelerometers are perhaps the most practical devices for measuring
shock and vibration. Similar to a mechanical sensor, this device includes a mass that,
when accelerated, exerts an inertial force on a piezoelectric crystal.
In high temperature applications where it is difficult to install microelectronics
within the sensor, high impedance devices can be used. Here, the leads from the
crystal sensor are connected to a high gain amplifier. The amplifier output, which is
proportional to the force of acceleration, is then read by the measuring instrument.
Where temperature is not excessive, low impedance microelectronics can be
embedded in the sensor to detect the voltages generated by the crystals. Both high
and low impedance designs can be mechanically connected to the structure's
surface, or secured to it by adhesives or magnetic means. These piezoelectric
sensors are suited for the measurement of short durations of acceleration only.
Piezoresistive and strain gage sensors operate in a similar fashion, but strain gage
elements are temperature sensitive and require compensation. They are preferred
for low frequency vibration, long-duration shock, and constant acceleration
applications. Piezoresistive units are rugged and can operate at high frequencies.
The term "high pressure" is relative, as, in fact, are all pressure measurements.
What the term actually means depends greatly on the particular industry one is
talking about. In synthetic diamond manufacturing, for example, normal reaction
pressure is around 100,000 psig (6,900 bars) or more, while some fiber and plastic
extruders operate at 10,000 psig (690 bars). Yet, in the average plant, pressures
exceeding 1,000 psig (69 bars) are considered high.
In extruder applications, high pressures are accompanied by high temperatures,
and sticky materials are likely to plug all cavities they might enter. Therefore,
extruder pressure sensors are inserted flush with the inner diameter of the pipe and
are usually continuously cooled.
Figure 4-1: Mechanical High Pressure Sensors
High Pressure Designs
In the case of the button repeater (Figure 4-1A), the diaphragm can detect extruder
pressures up to 10,000 psig and can operate at temperatures up to 800°F (430°C)
because of its self-cooling design. It operates on direct force balance between the
process pressure (P1) acting on the sensing diaphragm and the pressure of the
output air signal (P2) acting on the balancing diaphragm. The pressure of the output
air signal follows the process pressure in inverse ratio to the areas of the two
diaphragms. If the diaphragm area ratio is 200:1, a 1,000-psig increase in process
pressure will raise the air output signal by 5 psig.
The button repeater can be screwed into a ½-in. coupling in the extruder discharge
pipe in such a way that its 316 stainless steel diaphragm is inserted flush with the
inside of the pipe. Self-cooling is provided by the continuous flow of instrument air.
Another mechanical high pressure sensor uses a helical Bourdon element (Figure 4-
1B). This device may include as many as twenty coils and can measure pressures
well in excess of 10,000 psig. The standard element material is heavy-duty stainless
steel, and the measurement error is around 1% of span. Helical Bourdon tube
sensors provide high overrange protection and are suitable for fluctuating pressure
service, but must be protected from plugging. This protection can be provided by
high-pressure, button diaphragm-type chemical seal elements that also are rated for such service.
An improvement on the design shown in Figure 4-1B detects tip motion optically,
without requiring any mechanical linkage. This is desirable because of errors
introduced by linkage friction. In such units, a reference diode also is provided to
compensate for the aging of the light source, for temperature variations, and for dirt
build-up on the optics. Because the sensor movement is usually small (0.02 in.),
both hysteresis and repeatability errors typically are negligible. Such units are
available for measuring pressures up to 60,000 psig.
Deadweight testers also are used as primary standards in calibrating high-pressure
sensors (Figure 4-1C). The tester generates a test reference pressure when an NIST-
certified weight is placed on a known piston area, which imposes a corresponding
pressure on the filling fluid. (For more details, see Chapter 3 of this volume.) NIST
has found that at pressures exceeding 40,000 psig, the precision of their test is
about 1.5 parts in 10,000. Typical inaccuracy of an industrial deadweight tester is 1
part in 1,000 or 0.1%.
In the area of electronic sensors for high-pressure measurement, the strain gage is
without equal (see Chapter 2 for more details on strain gage operation). Strain gage
sensors can detect pressures in excess of 100,000 psig and can provide
measurement precision of 0.1% of span or 0.25% of full scale. Temperature
compensation and periodic recalibration are desirable because a 100°F temperature
change or six months of drift can also produce an additional 0.25% error. Other
electronic sensors (capacitance, potentiometric, inductive, reluctive) are also capable
of detecting pressures up to 10,000 psig, but none can go as high as the strain gage.
Figure 4-2: Bulk Modulus Cell
Very High Pressures
The bulk modulus cell consists of a hollow cylindrical steel probe closed at the inner
end with a projecting stem on the outer end (Figure 4-2). When exposed to a
process pressure, the probe is compressed, the probe tip is moved to the right by
the isotropic contraction, and the stem moves further outward. This stem motion is
then converted into a pressure reading. The hysteresis and temperature sensitivity
of this unit is similar to that of other elastic element pressure sensors. The main
advantages of this sensor are its fast response and safety: in effect, the unit is not
subject to failure. The bulk modulus cell can detect pressures up to 200,000 psig
with 1% to 2% full span error.
In another high-pressure design, Manganin, gold-chromium, platinum, or lead
wire sensors are wound helically on a core. The electrical resistance of these wire
materials will change in proportion to the pressure experienced on their surfaces.
They are reasonably insensitive to temperature variations. The pressure-resistance
relationship of Manganin is positive, linear, and substantial. Manganin cells can be
obtained for pressure ranges up to 400,000 psig and can provide 0.1% to 0.5% of
full scale measurement precision. The main limitation of the Manganin cell is its
delicate nature, making it vulnerable to damage from pressure pulsations or vibration.
Some solids liquefy under high pressures. This change-of-state phenomenon also
can be used as an indication of process pressure. Bismuth, for example, liquefies at
between 365,000 and 375,000 psig and, when it does, it also contracts in volume.
Other materials such as mercury have similar characteristics, and can be used to
signal that the pressure has reached a particular value.
The fundamental operating principles of force, acceleration, and torque
instrumentation are closely allied to the piezoelectric and strain gage devices used
to measure static and dynamic pressures discussed in earlier chapters. It is often
the specifics of configuration and signal processing that determine the choice of sensor for a given application.
An accelerometer senses the motion of the surface on which it is mounted and
produces an electrical output signal related to that motion. Acceleration is
measured in feet per second squared, and the product of the acceleration and the
measured mass yields the force. Torque is a twisting force, usually encountered on
shafts, bars, pulleys, and similar rotational devices. It is defined as the product of
the force and the radius over which it acts. It is expressed in units of weight times
length, such as lb.-ft. and N-m.
Figure 6-1: Piezoelectric Sensor Element Designs
The most common dynamic force and acceleration detector is the piezoelectric
sensor (Figure 6-1). The word piezo is of Greek origin, and it means "to squeeze."
This is quite appropriate, because a piezoelectric sensor produces a voltage when it
is "squeezed" by a force that is proportional to the force applied. The fundamental
difference between these devices and static force detection devices such as strain
gages is that the electrical signal generated by the crystal decays rapidly after the
application of force. This makes these devices unsuitable for the detection of static forces.
The high impedance electrical signal generated by the piezoelectric crystal is
converted (by an amplifier) to a low impedance signal suitable for such an
instrument as a digital storage oscilloscope. Digital storage of the signal is required
in order to allow analysis of the signal before it decays.
Depending on the application requirements, dynamic force can be measured as
either compression, tensile, or torque force. Applications may include the
measurement of spring or sliding friction forces, chain tensions, clutch release forces,
or peel strengths of laminates, labels, and pull tabs.
Tiny accelerometer is useful for low-mass applications.
A piezoelectric force sensor is almost as rigid as a comparably proportioned piece of
solid steel. This stiffness and strength allows these sensors to be directly inserted
into machines as part of their structure. Their rigidity provides them with a high
natural frequency, and their corresponding rapid rise time makes them ideal for
measuring such quick transient forces as those generated by metal-to-metal impacts
and by high frequency vibrations. To ensure accurate measurement, the natural
frequency of the sensing device must be substantially higher than the frequency to
be measured. If the measured frequency approaches the natural frequency of the
sensor, measurement errors will result.
Figure 6-2: Impact Flowmeter Application
The impact flowmeter is also a force sensor. It measures the flow rate of free
flowing bulk solids at the discharge of a material chute. The chute directs the
material flow so that it impinges on a sensing plate (Figure 6-2). The impact force
exerted on the plate by the material is proportional to the flow rate.
The construction is such that the sensing plate is allowed to move only in the
horizontal plane. The impact force is measured by sensing the horizontal deflection
of the plate. This deflection is measured by a linear variable differential transformer
(LVDT). The voltage output of the LVDT is converted to a pulse frequency modulated
signal. This signal is transmitted as the flow signal to the control system.
Impact flowmeters can be used as alternatives to weighing systems to measure and
control the flow of bulk solids to continuous processes as illustrated in Figure 6-2.
Here, an impact flowmeter is placed below the material chute downstream of a
variable speed screw feeder. The feed rate is set in tons per hour, and the control
system regulates the speed of the screw feeder to attain the desired feed rate. The
control system uses a PID algorithm to adjust the speed as needed to keep the flow
constant. Impact flowmeters can measure the flow rate of some bulk materials at
rates from 1 to 800 tons per hour and with repeatability and linearity within 1%.
B, R, and S
Types B, R, and S thermocouples use platinum or a platinum–rhodium alloy for
each conductor. These are among the most stable thermocouples, but have lower
sensitivity, approximately 10 µV/°C, than other types. The high cost of these makes them
unsuitable for general use. Generally, type B, R, and S thermocouples are used only for
high temperature measurements.
Type B thermocouples use a platinum–rhodium alloy for each conductor. One
conductor contains 30% rhodium while the other conductor contains 6% rhodium. These
thermocouples are suited for use at up to 1800 °C. Type B thermocouples produce the
same output at 0 °C and 42 °C, limiting their use below about 50 °C.
Type R thermocouples use a platinum–rhodium alloy containing 13% rhodium for one
conductor and pure platinum for the other conductor. Type R thermocouples are used up
to 1600 °C.
Type S thermocouples use a platinum–rhodium alloy containing 10% rhodium for one
conductor and pure platinum for the other conductor. Like type R, type S thermocouples
are used up to 1600 °C. In particular, type S is used as the standard of calibration for the
melting point of gold (1064.43 °C).
Type T (copper–constantan) thermocouples are suited for measurements in the −200 to
350 °C range. They are often used for differential measurements, since only copper wire
touches the probes. Since both conductors are non-magnetic, there is no Curie point and thus no
abrupt change in characteristics. Type T thermocouples have a sensitivity of about 43 µV/°C.
Type C (tungsten 5% rhenium – tungsten 26% rhenium) thermocouples are suited for
measurements in the 0 °C to 2320 °C range. This thermocouple is well-suited for vacuum
furnaces at extremely high temperatures and must never be used in the presence of
oxygen at temperatures above 260 °C.
Type M thermocouples use a nickel alloy for each wire. The positive wire contains 18%
molybdenum while the negative wire contains 0.8% cobalt. These thermocouples are
used in the vacuum furnaces for the same reasons as with type C. Upper temperature is
limited to 1400 °C. Though it is a less common type of thermocouple, look-up tables to
correlate temperature to EMF (millivolt output) are available.
In chromel-gold/iron thermocouples, the positive wire is chromel and the negative wire is
gold with a small fraction (0.03–0.15 atom percent) of iron. It can be used for cryogenic
applications (1.2–300 K and even up to 600 K). Both the sensitivity and the temperature
range depend on the iron concentration. The sensitivity is typically around 15 µV/K at
low temperatures, and the lowest usable temperature varies between 1.2 and 4.2 K.
Laws for thermocouples
Law of homogeneous material
A thermoelectric current cannot be sustained in a circuit of a single homogeneous
material by the application of heat alone, regardless of how it might vary in cross section.
In other words, temperature changes in the wiring between the input and output do not
affect the output voltage, provided the wire is made of a thermocouple alloy.
Law of intermediate materials
The algebraic sum of the thermoelectric forces in a circuit composed of any number of
dissimilar materials is zero if all of the junctions are at a uniform temperature. So if a
third metal is inserted in either wire A or B, and the two new junctions are at the same
temperature, there will be no net voltage generated by the new metal.
Law of successive or intermediate temperatures
If two dissimilar homogeneous materials produce thermal emf1 when the junctions are at
T1 and T2 and produce thermal emf2 when the junctions are at T2 and T3 , the emf
generated when the junctions are at T1 and T3 will be emf1 + emf2 .
The table below describes properties of several different thermocouple types. Within the
tolerance columns, T represents the temperature of the hot junction, in degrees Celsius.
For example, a thermocouple with a tolerance of ±0.0025×T would have a tolerance of
±2.5 °C at 1000 °C.
Type          Range, continuous (°C)   Range, short term (°C)   Tolerance class one (°C)                                         Tolerance class two (°C)
K             0 to +1100               −180 to +1300            ±1.5 from −40 to 375; ±0.004×T from 375 to 1000                  ±2.5 from −40 to 333; ±0.0075×T from 333 to 1200
J             0 to +700                −180 to +800             ±1.5 from −40 to 375; ±0.004×T from 375 to 750                   ±2.5 from −40 to 333; ±0.0075×T from 333 to 750
N             0 to +1100               −270 to +1300            ±1.5 from −40 to 375; ±0.004×T from 375 to 1000                  ±2.5 from −40 to 333; ±0.0075×T from 333 to 1200
R             0 to +1600               −50 to +1700             ±1.0 from 0 to 1100; ±[1 + 0.003×(T − 1100)] from 1100 to 1600   ±1.5 from 0 to 600; ±0.0025×T from 600 to 1600
S             0 to +1600               −50 to +1750             ±1.0 from 0 to 1100; ±[1 + 0.003×(T − 1100)] from 1100 to 1600   ±1.5 from 0 to 600; ±0.0025×T from 600 to 1600
B             +200 to +1700            0 to +1820               Not available                                                    ±0.0025×T from 600 to 1700
T             −185 to +300             −250 to +400             ±0.5 from −40 to 125; ±0.004×T from 125 to 350                   ±1.0 from −40 to 133; ±0.0075×T from 133 to 350
E             0 to +800                −40 to +900              ±1.5 from −40 to 375; ±0.004×T from 375 to 800                   ±2.5 from −40 to 333; ±0.0075×T from 333 to 900
Chromel/AuFe  −272 to +300             —                        Reproducibility 0.2% of the voltage; each sensor needs individual calibration

Type B has no standard color code; plain copper wire is normally used as the extension lead.
Thermocouples are suitable for measuring over a large temperature range, up to
2300 °C. They are less suitable for applications where smaller temperature differences
need to be measured with high accuracy, for example the range 0–100 °C with 0.1 °C
accuracy. For such applications, thermistors and resistance temperature detectors are
more suitable.

Steel industry
Type B, S, R and K thermocouples are used extensively in the steel and iron industries to
monitor temperatures and chemistry throughout the steel making process. Disposable,
immersible, type S thermocouples are regularly used in the electric arc furnace process to
accurately measure the steel's temperature before tapping. The cooling curve of a small
steel sample can be analyzed and used to estimate the carbon content of molten steel.
Heating appliance safety
Many gas-fed heating appliances such as ovens and water heaters make use of a pilot
light to ignite the main gas burner as required. If the pilot light becomes extinguished for
any reason, there is the potential for un-combusted gas to be released into the surrounding
area, thereby creating both risk of fire and a health hazard. To prevent such a danger,
some appliances use a thermocouple as a fail-safe control to sense when the pilot light is
burning. The tip of the thermocouple is placed in the pilot flame. The resultant voltage,
typically around 20 mV, operates the gas supply valve responsible for feeding the pilot.
So long as the pilot flame remains lit, the thermocouple remains hot and holds the pilot
gas valve open. If the pilot light goes out, the temperature will fall along with a
corresponding drop in voltage across the thermocouple leads, removing power from the
valve. The valve closes, shutting off the gas and halting this unsafe condition.
Some systems, known as millivolt control systems, extend this concept to the main gas
valve as well. Not only does the voltage created by the pilot thermocouple activate the
pilot gas valve, it is also routed through a thermostat to power the main gas valve as well.
Here, a larger voltage is needed than in a pilot flame safety system described above, and a
thermopile is used rather than a single thermocouple. Such a system requires no external
source of electricity for its operation and so can operate during a power failure, provided
all the related system components allow for this. Note that this excludes common forced
air furnaces because external power is required to operate the blower motor, but this
feature is especially useful for un-powered convection heaters.
A similar gas shut-off safety mechanism using a thermocouple is sometimes employed to
ensure that the main burner ignites within a certain time period, shutting off the main
burner gas supply valve should that not happen.
Out of concern for energy wasted by the standing pilot, designers of many newer
appliances have switched to an electronically controlled pilot-less ignition, also called
intermittent ignition. With no standing pilot flame, there is no risk of gas buildup should
the flame go out, so these appliances do not need thermocouple-based safety pilot safety
switches. As these designs lose the benefit of operation without a continuous source of
electricity, standing pilots are still used in some appliances. The exception is later model
instantaneous water heaters that utilise the flow of water to generate the current required
to ignite the gas burner, in conjunction with a thermocouple as a safety cut-off device in
the event the gas fails to ignite, or the flame is extinguished.
Thermopile radiation sensors
Thermopiles are used for measuring the intensity of incident radiation, typically visible or
infrared light, which heats the hot junctions, while the cold junctions are on a heat sink. It
is possible to measure radiative intensities of only a few µW/cm² with commercially
available thermopile sensors. For example, some laser power meters are based on such sensors.
Thermocouples can generally be used in the testing of prototype electrical and
mechanical apparatus. For example, switchgear under test for its current carrying
capacity may have thermocouples installed and monitored during a heat run test, to
confirm that the temperature rise at rated current does not exceed designed limits.
Radioisotope thermoelectric generators
Thermopiles can also be applied to generate electricity in radioisotope thermoelectric generators.
Chemical production and petroleum refineries will usually employ computers for logging
and limit testing the many temperatures associated with a process, typically numbering in
the hundreds. For such cases a number of thermocouple leads will be brought to a
common reference block (a large block of copper) containing the second thermocouple of
each circuit. The temperature of the block is in turn measured by a thermistor. Simple
computations are used to determine the temperature at each measured point.
There are two broad categories, "film" and "wire-wound" types.
Film RTDs have a layer of platinum on a substrate; the layer may be extremely thin,
perhaps one micrometer. Advantages of this type are relatively low cost and fast
response. Such devices have improved in performance, although the different
expansion rates of the substrate and platinum give "strain gauge" effects and can affect long-term stability.
Wire-wound thermometers can have greater accuracy, especially for wide
temperature ranges. The coil diameter provides a compromise between
mechanical stability and allowing expansion of the wire to minimize strain and consequent drift.
Coil elements have largely replaced wire-wound elements in the industry. This
design allows the wire coil to expand more freely over temperature while still
providing the necessary support for the coil. This design is similar to that of an
SPRT, the primary standard on which ITS-90 is based, while still providing the
durability necessary for an industrial process.
The current international standard which specifies tolerance and the temperature to
electrical resistance relationship for platinum resistance thermometers is IEC 751:1983.
By far the most common devices used in industry have a nominal resistance of 100 ohms
at 0 °C, and are called Pt-100 sensors ('Pt' is the symbol for platinum). The sensitivity of
a standard 100 ohm sensor is a nominal 0.385 ohm/°C. RTDs with a sensitivity of 0.375
and 0.392 ohm/°C are also available.
Resistance thermometers are constructed in a number of forms and offer greater stability,
accuracy and repeatability in some cases than thermocouples. While thermocouples use
the Seebeck effect to generate a voltage, resistance thermometers use electrical resistance
and require a power source to operate. The resistance ideally varies linearly with
Resistance thermometers are usually made using platinum, because of its linear
resistance-temperature relationship and its chemical inertness. The platinum detecting
wire needs to be kept free of contamination to remain stable. A platinum wire or film is
supported on a former in such a way that it gets minimal differential expansion or other
strains from its former, yet is reasonably resistant to vibration. RTD assemblies made
from iron or copper are also used in some applications.
Commercial platinum grades are produced which exhibit a change of resistance of 0.385
ohms/°C (European Fundamental Interval). The sensor is usually made to have a
resistance of 100 Ω at 0 °C. This is defined in BS EN 60751:1996 (taken from IEC
60751:1995). The American Fundamental Interval is 0.392 Ω/°C, based on using a purer
grade of platinum than the European standard. The American standard is from the
Scientific Apparatus Manufacturers Association (SAMA), which is no longer in this business.
Resistance thermometers require a small current to be passed through them in order to
determine the resistance. This can cause resistive heating, and the manufacturer's limits
should always be followed, along with heat-path considerations in the design. Care should
also be taken to avoid any strain on the resistance thermometer in its application. Lead
wire resistance should be considered, and adopting three- and four-wire connections can
eliminate connection-lead resistance effects from measurements - industrial practice is
almost universally to use the 3-wire connection, while the 4-wire connection is needed for
the highest-precision measurements.
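The effect of the connection schemes above can be sketched with an idealized comparison; the helper names are ours, and the model assumes equal lead resistances and an ideal instrument:

```python
# Idealized comparison of 2-, 3- and 4-wire RTD readings (names illustrative).
# Assumes all lead wires have equal resistance r_lead and an ideal ohmmeter.

def measured_2wire(r_rtd, r_lead):
    # Both leads are in series with the element, so they add to the reading.
    return r_rtd + 2 * r_lead

def measured_3wire(r_rtd, r_lead):
    # The instrument measures (element + 2 leads) and (2 leads) via the third
    # wire, then subtracts -- exact only when the leads are matched.
    return (r_rtd + 2 * r_lead) - (2 * r_lead)

def measured_4wire(r_rtd, r_lead):
    # Separate force/sense pairs: no measurement current in the sense path.
    return r_rtd
```

With 0.5 Ω per lead, a 2-wire reading of a Pt-100 is 1 Ω high, which corresponds to roughly 2.6 °C of error at 0.385 Ω/°C.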
Advantages and limitations
Advantages of platinum resistance thermometers:
Wide operating range
Suitability for precision applications
RTDs in industrial applications are rarely used above 660 °C. At temperatures
above 660 °C it becomes increasingly difficult to prevent the platinum from
becoming contaminated by impurities from the metal sheath of the thermometer.
This is why laboratory standard thermometers replace the metal sheath with a
glass construction. At very low temperatures, say below -270 °C (or 3 K), due to
the fact that there are very few phonons, the resistance of an RTD is mainly
determined by impurities and boundary scattering and thus basically independent
of temperature. As a result, the sensitivity of the RTD is essentially zero and
therefore not useful.
Compared to thermistors, platinum RTDs are less sensitive to small temperature
changes and have a slower response time. However, thermistors have a smaller
temperature range and poorer stability.
The differential-input, single-ended-output instrumentation amplifier is one of the
most versatile signal processing amplifiers available. It is used for precision amplification
of differential dc or ac signals while rejecting large values of common mode noise. By
using integrated circuits, a high level of performance is obtained at minimum cost.
Figure 1 shows a basic instrumentation amplifier which provides a 10 volt output
for a 100 mV input, while rejecting greater than ±11 V of common mode noise. To obtain
good input characteristics, two voltage followers buffer the input signal. The LM102 is
specifically designed for voltage-follower usage and has 10,000 MΩ input impedance
with 3 nA input currents. This high input impedance provides two benefits: it
allows the instrumentation amplifier to be used with high source resistances and still have
low error; and it allows the source resistances to be unbalanced by over 10,000 Ω with no
degradation in common mode rejection.
The followers drive a balanced differential amplifier, as shown inFigure 1 , which
provides gain and rejects the common mode voltage. The gain is set by the ratio of R4 to
R2 and R5 to R3. With the values shown, the gain for differential signals is 100. Figure 2
shows an instrumentation amplifier where the gain is linearly adjustable from 1 to 300
with a single resistor. An LM101A, connected as a fast inverter, is used as an attenuator
in the feedback loop. By using an active attenuator, a very low impedance is always
presented to the feedback resistors, and common mode rejection is unaffected by
gain changes. The LM101A, used as shown, has a greater bandwidth than the LM107, and
may be used in a feedback network without instability. The gain is linearly dependent on
R6 and is equal to 10^-4 R6. To obtain good common mode rejection ratios, it is
necessary that the ratio of R4 to R2 match the ratio of R5 to R3. For example, if the
resistors in the circuit shown in Figure 1 had a total mismatch of 0.1%, the common mode
rejection would be 60 dB times the closed loop gain, or 100 dB. The circuit shown in
Figure 2 would have constant common mode rejection of 60 dB, independent of gain. In
either circuit, it is possible to trim any one of the resistors to obtain common mode
rejection ratios in excess of 100 dB. For optimum performance, several items should be
considered during construction. R1 is used for zeroing the output.
It should be a high-resolution, mechanically stable potentiometer to avoid a zero shift
from occurring with mechanical disturbances. Since there are several ICs operating in
close proximity, the power supplies should be bypassed with 0.01 µF disc capacitors to
ensure stability. The resistors should be of the same type to have the same temperature
coefficient. A few applications for a differential instrumentation amplifier are:
differential voltage measurements, bridge outputs, strain gauge outputs, or low level voltage measurement.
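The gain and common-mode-rejection relations described above can be sketched numerically. The resistor names follow Figure 1; the values and helper names are illustrative assumptions, and the CMR formula simply encodes the rule of thumb stated in the text (60 dB of matching rejection for 0.1% mismatch, plus the closed-loop gain in dB):

```python
import math

# Sketch of the instrumentation amplifier relations (illustrative values).

def diff_gain(r4, r2):
    # Differential gain of the balanced stage, assuming R4/R2 = R5/R3.
    return r4 / r2

def total_cmr_db(resistor_mismatch, closed_loop_gain):
    # Matching rejection (60 dB for 0.1% mismatch) plus the gain in dB,
    # per the rule of thumb quoted in the text.
    matching_db = 20 * math.log10(1.0 / resistor_mismatch)
    return matching_db + 20 * math.log10(closed_loop_gain)

# With R4 = 100 k and R2 = 1 k the differential gain is 100; a 0.1%
# mismatch then yields about 100 dB of common mode rejection.
```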
Isolation amplifiers protect data-acquisition components from potentially
destructive voltages present at remote transducers. These amplifiers are also useful when
you need to amplify low-level signals in multi-channel applications. They can also
eliminate measurement errors caused by ground loops. Amplifiers with internal
transformers reduce circuit costs by eliminating the need for additional isolated power
supplies.
In Figure 1 the block diagram of the whole system is shown. A more detailed description of
the individual elements follows.
2. TELEMETERING NETWORK
In Figure 2, the steam gathering network extension for the "Larderello Renewal Project" is
shown in a scale drawing. The figure shows all the sites of interest to the project.
The Teleconduction Station (PT) contains all the process computers for geothermal plant remote
control. The data processing center computers for this project were also installed next to it. Each
location's orifice plate in the line was sized to assure a differential pressure which is the best
compromise between the minimization of producible
energy losses and measurement errors for all expected flow conditions.
The minimization of energy losses was the most critical consideration, since maximizing the
productive efficiency of the geothermal resource is the main purpose of the "Larderello
Renewal Project". The predicted minimum value
for the differential pressure was determined to be 1000 Pa.
This value still allows satisfactory measurement due to the high precision of the instruments used.
A set of techniques whereby a sequence of information-carrying quantities occurring at
discrete instants of time is encoded into a corresponding regular sequence of
electromagnetic carrier pulses. Varying the amplitude, polarity, presence or absence,
duration, or occurrence in time of the pulses gives rise to the four basic forms of pulse
modulation: pulse-amplitude modulation (PAM), pulse-code modulation (PCM), pulse-
width modulation (PWM, also known as pulse-duration modulation, PDM), and pulse-
position modulation (PPM).
An important concept in pulse modulation is analog-to-digital (A/D) conversion, in which
an original analog (time- and amplitude-continuous) information signal s(t) is changed at
the transmitter into a series of regularly occurring discrete pulses whose amplitudes are
restricted to a fixed and finite number of values. An inverse digital-to-analog (D/A)
process is used at the receiver to reconstruct an approximation of the original form of s(t).
Conceptually, analog-to-digital conversion involves two steps. First, the range of
amplitudes of s(t) is divided or quantized into a finite number of predetermined levels,
and each such level is represented by a pulse of fixed amplitude. Second, the amplitude
of s(t) is periodically measured or sampled and replaced by the pulse representing the
level that corresponds to the measurement. See also Analog-to-digital converter; Digital-to-analog converter.
According to the Nyquist sampling theorem, if sampling occurs at a rate at least twice
that of the bandwidth of s(t), the latter can be unambiguously reconstructed from its
amplitude values at the sampling instants by applying them to an ideal low-pass filter
whose bandwidth matches that of s(t).
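The converse of the sampling theorem can be demonstrated directly: below twice the signal bandwidth, two different frequencies produce identical samples and can no longer be told apart. The frequencies and names below are our own illustrative choices:

```python
import math

# A 100 Hz sine sampled at 150 Hz -- below the 200 Hz Nyquist rate -- yields
# exactly the same sample values as a phase-inverted 50 Hz sine, so the two
# cannot be distinguished after sampling. (Values are illustrative.)

f_signal, f_alias, fs = 100.0, 50.0, 150.0   # Hz; alias = fs - f_signal

def sample(freq, n):
    return math.sin(2 * math.pi * freq * n / fs)

indistinguishable = all(
    abs(sample(f_signal, n) + sample(f_alias, n)) < 1e-9 for n in range(100)
)
# indistinguishable is True: sampling below 2x bandwidth loses information.
```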
Quantization, however, introduces an irreversible error, the so-called quantization error,
since the pulse representing a sample measurement determines only the quantization level
in which the measurement falls and not its exact value. Consequently, the process of
reconstructing s(t) from the sequence of pulses yields only an approximate version of s(t).
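A minimal uniform quantizer makes the error bound concrete: with L equal levels over a range, the quantization error never exceeds half a step, but the exact sample value cannot be recovered. The function and values below are illustrative:

```python
# Minimal uniform quantizer sketch (names and values are illustrative).

def quantize(x, lo, hi, levels):
    """Map x in [lo, hi] to the midpoint of one of `levels` equal bins."""
    step = (hi - lo) / levels
    index = min(int((x - lo) / step), levels - 1)   # clamp the top edge
    return lo + (index + 0.5) * step

# Range [-1, 1] with 8 levels gives 0.25-wide bins, so the error is at
# most 0.125 -- the irreversible quantization error described above.
step = 2.0 / 8
x = 0.30
err = abs(quantize(x, -1.0, 1.0, 8) - x)
```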
In PAM the successive sample values of the analog signal s(t) are used to effect the
amplitudes of a corresponding sequence of pulses of constant duration occurring at the
sampling rate. No quantization of the samples normally occurs (Fig. 1a, b). In principle
the pulses may occupy the entire time between samples, but in most practical systems the
pulse duration is limited to a fraction of the sampling interval; this fraction is known as
the duty cycle.
Such a restriction creates the possibility of interleaving during one sample interval one or
more pulses derived from other PAM systems in a process known as time-division
multiplexing (TDM). See also Multiplexing and multiple access.
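The interleaving idea behind TDM can be sketched in a few lines; the stream contents and helper names are our own illustration:

```python
# Sketch of time-division multiplexing: pulses from several sampled sources
# share one channel by taking turns within each sampling interval.

def tdm_multiplex(streams):
    """Interleave equal-length sample streams into one frame sequence."""
    return [s[i] for i in range(len(streams[0])) for s in streams]

def tdm_demultiplex(frames, n_streams):
    """Recover the individual streams by taking every n-th value."""
    return [frames[k::n_streams] for k in range(n_streams)]

a, b, c = [1, 2, 3], [10, 20, 30], [100, 200, 300]
line = tdm_multiplex([a, b, c])
# line is [1, 10, 100, 2, 20, 200, 3, 30, 300]
```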
Forms of pulse modulation for the case where the analog signal, s(t), is a sine wave. (a)
Analog signal, s(t). (b) Pulse-amplitude modulation. (c) Pulse-width modulation. (d)
Pulse-position modulation.
In PWM the pulses representing successive sample values of s(t) have constant
amplitudes but vary in time duration in direct proportion to the sample value. The pulse
duration can be changed relative to fixed leading or trailing time edges or a fixed pulse
center. To allow for time-division multiplexing, the maximum pulse duration may be
limited to a fraction of the time between samples (Fig. 1c).
PPM encodes the sample values of s(t) by varying the position of a pulse of constant
duration relative to its nominal time of occurrence. As in PAM and PWM, the duration of
the pulses is typically a fraction of the sampling interval. In addition, the maximum time
excursion of the pulses may be limited (Fig. 1d).
Many modern communication systems are designed to transmit and receive only pulses
of two distinct amplitudes. In these so-called binary digital systems, the analog-to-digital
conversion process is extended by the additional step of coding, in which the amplitude
of each pulse representing a quantized sample of s(t) is converted into a unique sequence
of one or more pulses with just two possible amplitudes. The complete conversion
process is known as pulse-code modulation.
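The coding step described above can be sketched directly: with 8 levels, each level index maps to a unique 3-pulse binary sequence, as in Fig. 2. The function names are illustrative:

```python
# Sketch of the PCM coding step: each of 8 quantization levels maps to a
# unique sequence of three two-valued pulses (names are illustrative).

def pcm_encode(level, bits=3):
    """Level index (0 .. 2**bits - 1) -> list of binary pulse values."""
    if not 0 <= level < 2 ** bits:
        raise ValueError("level out of range")
    return [(level >> k) & 1 for k in reversed(range(bits))]

def pcm_decode(pulses):
    """Inverse mapping used at the receiver."""
    level = 0
    for p in pulses:
        level = (level << 1) | p
    return level

# pcm_encode(5) gives [1, 0, 1]
```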
Figure 2a shows the example of three successive quantized samples of an analog signal
s(t), in which sampling occurs every T seconds and the pulse representing the sample is
limited to T/2 seconds. Assuming that the number of quantization levels is limited to 8,
each level can be represented by a unique sequence of three two-valued pulses. In Fig. 2b
these pulses are of amplitude V or 0, whereas in Fig. 2c the amplitudes are V and −V.
Pulse-code modulation. (a) Three successive quantized samples of an analog signal. (b)
With pulses of amplitude V or 0. (c) With pulses of amplitude V or −V.
PCM enjoys many important advantages over other forms of pulse modulation because
information is represented by a two-state variable. First, the design parameters of
a PCM transmission system depend critically on the bandwidth of the original signal s(t)
and the degree of fidelity required at the point of reconstruction, but are otherwise largely
independent of the information content of s(t). This fact creates the possibility of
deploying generic transmission systems suitable for many types of information. Second,
the detection of the state of a two-state variable in a noisy environment is inherently
simpler than the precise measurement of the amplitude, duration, or position of a pulse in
which these quantities are not constrained. Third, the binary pulses propagating along a
medium can be intercepted and decoded at a point where the accumulated distortion and
attenuation are sufficiently low to assure high detection accuracy. New pulses can then be
generated and transmitted to the next such decoding point. This so-called process of
repeatering significantly reduces the propagation of distortion and leads to a quality of
transmission that is largely independent of distance.
An advantage inherent in all pulse modulation systems is their ability to transmit signals
from multiple sources over a common transmission system through the process of time-
division multiplexing. By restricting the time duration of a pulse representing a sample
value from a particular analog signal to a fraction of the time between successive
samples, pulses derived from other sampled analog signals can be accommodated on the
same transmission system.
One important application of this principle occurs in the transmission of PCM telephone
voice signals over a digital transmission system known as a T1 carrier. In standard T1
coding, an original analog voice signal is band-limited to 4000 hertz by passing it through
a low-pass filter, and is then sampled at the Nyquist rate of 8000 samples per second, so
that the time between successive samples is 125 microseconds. The samples are
quantized to 256 levels, with each of them being represented by a sequence of 8 binary
pulses. By limiting the duration of a single pulse to 0.65 microsecond, a total of 193
pulses can be accommodated in the time span of 125 microseconds between samples.
One of these serves as a synchronization marker that indicates the beginning of such a
sequence of 193 pulses, while the other 192 pulses are the composite of 8 pulses from
each of 24 voice signals, with each 8-pulse sequence occupying a specified position. T1
carriers and similar types of digital carrier systems are in widespread use in the world's
telephone networks.
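The T1 frame arithmetic above can be checked in a few lines (variable names are ours):

```python
# Arithmetic of the standard T1 frame described in the text.

channels = 24          # voice signals per frame
bits_per_sample = 8    # 256 quantization levels
framing_bits = 1       # synchronization marker
sample_rate = 8000     # samples per second (Nyquist rate for 4000 Hz voice)

bits_per_frame = channels * bits_per_sample + framing_bits   # 193 pulses
frame_time_us = 1e6 / sample_rate                            # 125 microseconds
line_rate = bits_per_frame * sample_rate                     # 1,544,000 bit/s
pulse_slot_us = frame_time_us / bits_per_frame               # about 0.65 us
```

The available slot per pulse, 125/193 ≈ 0.648 µs, is consistent with the 0.65 µs pulse duration quoted above.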
Pulse modulation systems may incur a significant bandwidth penalty compared to the
transmission of a signal in its analog form. An example is the standard PCM transmission
of an analog voice signal band-limited to 4000 hertz over a T1 carrier. Since the
sampling, quantizing, and coding process produces 8 binary pulses 8000 times per second
for a total of 64,000 binary pulses per second, the pulses occur every 15.625
microseconds. Depending on the shape of the pulses and the amount of intersymbol
interference, the required transmission bandwidth will fall in the range of 32,000 to
64,000 hertz. This compares to a bandwidth of only 4000 hertz for the transmission of the
signal in analog mode. See also Bandwidth requirements (communications).
PAM, PWM, and PPM found significant application early in the development of digital
communications, largely in the domain of radio telemetry for remote monitoring and
sensing. They have since fallen into disuse in favor of PCM.
Since the early 1960s, many of the world's telephone network providers have gradually,
and by now almost completely, converted their transmission facilities to PCM
technology. The bulk of these transmission systems use some form of time-division
multiplexing, as exemplified by the 24-voice channel T1 carrier structure. These carrier
systems are implemented over many types of transmission media, including twisted pairs
of telephone wiring, coaxial cables, fiber-optic cables, and microwave. See also Coaxial
cable; Communications cable; Microwave; Optical communications; Optical fibers;
Switching systems (communications).
The deployment of high-speed networks such as the Integrated Services Digital Network
(ISDN) in many parts of the world has also relied heavily on PCM technology. PCM and
various modified forms such as delta modulation (DM) and adaptive differential pulse-
code modulation (ADPCM) have also found significant application in satellite
transmission systems. See also Communications satellite; Data communications;
Electrical communications; Integrated services digital network (ISDN); Modulation.
UNBALANCED WHEATSTONE BRIDGE -
EVALUATION OF RESISTANCE CHANGES OF A RESISTIVE SENSOR
16.1.1. Connect the R → U converter using an operational amplifier according to the
schematic diagram in Fig. 16.1 (Ur = 10 V, RN1 = 10 kΩ) and measure the dependence
fC of the converter resistance on its angular deflection α in the range α = 0 to 180°
in increments of 15° (the basic position of the converter slider, α = 90°, corresponds
to the resistance R0, i.e. ΔR = 0).
16.1.2. Connect the resistive sensor into the Wheatstone bridge supplied from the voltage
source supplying voltage UAC = 5 V (Fig. 16.2). Before starting the measurement,
balance the bridge using the resistance decade box RD for the value α = 90°.
Measure the dependence fBV of the bridge output voltage UBD on the change of
angular position of the sensor slider α, which corresponds to the change of the
sensor resistance ΔR (for the same values of α as in point 16.1.1). Find the theoretical
relation for this voltage.
16.1.3. Connect the resistive sensor into the Wheatstone bridge supplied from the current
source supplying current I = 3.6 mA. Realize the current source by means of an
operational amplifier according to Fig. 16.3. Before starting the measurement, balance
the bridge again using the resistance decade box RD for the value α = 90°. Measure
the dependence fBC of the bridge output voltage UBD on the change of angular
position of the sensor slider α, that is, on the change of the sensor resistance ΔR (for
the same values of α as in point 16.1.1). Find the theoretical relation for this voltage.
16.1.4. Connect the so-called "linearized bridge" according to Fig. 16.4. Set the
supply voltage UZ = 2.5 V. Before starting the measurement, balance the
bridge again using the resistance decade box RD for the value α = 90°. Measure the
dependence fLB of the output voltage U2 on the change of angular position of the
sensor slider α, that is, on the change of the sensor resistance ΔR (for the same
values of α as in point 16.1.1). Find the theoretical relation for this voltage.
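For a bridge with one active arm R0 + ΔR and three fixed arms R0, the standard textbook output expressions for the three supply schemes can be sketched as follows. These are generic single-active-arm results, assumed here for illustration; the exact circuits of Figs. 16.2-16.4 may differ in sign or arm placement:

```python
# Standard single-active-arm bridge outputs (textbook forms; the exact lab
# circuits may differ in sign or arm placement).

def u_voltage_bridge(u_supply, r0, dr):
    # Voltage-fed bridge: nonlinear in dr (denominator grows with 2*dr).
    return u_supply * dr / (2 * (2 * r0 + dr))

def u_current_bridge(i_supply, r0, dr):
    # Current-fed bridge: denominator grows with only dr, so more linear.
    return i_supply * r0 * dr / (4 * r0 + dr)

def u_linearized_bridge(u_supply, r0, dr):
    # Op-amp "linearized bridge": output exactly proportional to dr.
    return u_supply * dr / (2 * r0)
```

Comparing the three for the same small ΔR shows why the deviations from linearity requested in the next point differ between the circuits.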
16.1.5. Plot in a common graph the deviations of the values measured according to points
16.1.2, 16.1.3 and 16.1.4 from the linear function. Find the slope of the straight line
from which you will calculate the deviations from linearity from the end points
of the measured dependence fLB(ΔR) (that is, for α = 0 and α = 180°). If the
absolute values of the output voltage at the two end points are not identical, replace
them by the arithmetic mean of these two absolute values (the straight line U
connecting these two points passes through the origin of the coordinates [ΔR, U]).
Find the deviations of the functions fBV(ΔR), fBC(ΔR) and fLB(ΔR) from
linearity (approximately) as the deviations of these functions from the straight
line U. This can be done because the values of supply voltage or current in the
measurements in points 16.1.2, 16.1.3 and 16.1.4 are given in the instructions so
that the slopes of all these functions at the origin are approximately the same.
The circuit diagram is in the accompanying PDF (wheatstone).
The HP-310 is an all transistor design that dates back to 1963. The unit I have
works beautifully. This is a true example of fine design. I checked the unit out when I got
it and it was still well within factory calibration! For 1963, this is a very sophisticated
design. It is an upconverting superhet design (3 MHz IF), with a frequency range of 1000
Hz to 1.5 MHz. The signal level is rated at 10 uV to 100 Volts input (selected by two
range switches). The wave analyzer concept was developed at HP to test voice
telecommunications circuits. It may also be used as a low frequency scalar network
analyzer (it has a built in tracking source). Dr. Barney Oliver (Chief Engineer @ HP)
designed a linear, air-variable tuning capacitor for the 310 (and its predecessor, the 302).
This allowed for the mechanical tuning counter seen (since the tuned frequency was a
linear function of the capacitor's rotation). HP also produced a motor driven "tuner" that
could be attached to the main tuning knob on the front of the analyzer. Then, by using the
recorder output connected to an XY plotter (which HP also produced), a linear input
versus frequency plot could be produced. This was truly an automatic network analyzer,
produced in 1963! Another application cited by HP was the analysis of low frequency
harmonic distortion products.
The meter on the front reads out absolute dB or volts of the tuned signal. A relative mode
may also be selected allowing the receiver to be set to a fundamental signal and relative
measurements made on harmonics.
A January, 1963 HP Journal article introduced the instrument.
The design is current by today's standards. The narrow IF bandwidths were produced by
quadrature converting the signal to IQ channels at baseband where narrow frequency
active filters could be built giving bandwidths of 200, 1000 and 3000 Hz. The IF was
then translated back to 3 MHz and added together to remove the quadrature component.
This method dates back to the late 50's (Weaver, Proc. of the IRE, 1958) for SSB
generation. The method is still in use today in the DSP world by Harris Semiconductor.
Their digital HSP50016 Digital Downconverter uses exactly the same principle to get
very narrow digital IF bandwidths!
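The quadrature (I/Q) idea behind this IF scheme can be sketched minimally: multiplying a tone by a complex exponential shifts it to baseband, where even a crude low-pass (here a plain average) isolates it. The frequencies and names below are illustrative, not taken from the 310's design:

```python
import cmath
import math

# Minimal sketch of quadrature downconversion (values are illustrative).
fs, f0, n = 8000.0, 1000.0, 8000          # sample rate, tone, record length
x = [math.cos(2 * math.pi * f0 * t / fs) for t in range(n)]

# Mix down by f0: the tone lands at dc with amplitude 1/2, while its
# negative-frequency image lands at -2*f0 and averages to zero.
baseband = [x[t] * cmath.exp(-2j * math.pi * f0 * t / fs) for t in range(n)]
dc = sum(baseband) / n                    # crude low-pass: full-record mean
# dc.real is approximately 0.5
```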
The instrument used HP's second generation "Glow FET" chassis. The inside of the 310
looks empty since it was built with transistors. All of the circuits are built on plug in
cards and in separate shielded compartments.
All told, the transistor count in the instrument is just 60 devices. Using op-amps today,
the transistor count would be in the thousands! That's progress? The 310 is designed with
the usual 60's selection of Germanium and Silicon transistors. If any of the Germanium
devices fail, I'll have to rebias the circuitry for Silicon replacements.
I have found the instrument to be just great at receiving WWVB with my 4 foot loop. It
has very good intermodulation performance for a "wide band" front end, as it rejects all
of the much stronger LORAN signals less than an octave away in frequency. It is also
very tolerant of the fluorescent lighting in my workshop (It's better than my Sony 2010).
The 310 has provision for AM, USB and LSB reception by the twist of a knob. A front
panel BNC easily drives headphones for listening. I have received AM broadcast stations
in Canada, Texas, and a low-power AM station in Boise, Idaho (from my northern
California location). The 310 easily tunes around heterodynes on crowded AM channels.
The only deficiency it has as a receiver is relatively flat audio quality. I intend to improve
that with the inclusion of a 20-transistor audio amplifier IC (well, that is progress, I suppose).
HP's first wave analyzer was the 302 designed in 1959, also a transistorized design. Then
came the 310. The 312 came along next and featured a maximum frequency of 18 MHz.
The 312 also sported a digital frequency display. The 312 looks very much like an
upgraded 310 (same chassis). Next came the digital 3581 family. This analyzer dropped
back to 50 kHz maximum frequency, but was much smaller. The last in the line was the
3586 produced in 1980. The 3586 is fully synthesized and covered 50 Hz to 32 MHz.
Digital techniques applied to Spectrum Analyzers ended the need for Wave Analyzers
and now cover the market that was once held by these instruments.
Frequency range.................................... 1 - 1500 kHz
Tuning Accuracy.................................... 1% +/- 300 Hz
Frequency Calibrator............................. 100 kHz (even and odd harmonics, up the
Selectivity............................................... 200, 1000 and 3000 Hz BW, 3dB
Shape Factor.......................................... 2:1 (to -25 dB down)**
IF Ringing Immunity.................................. Very good (Butterworth response)**
Voltage Range........................................ 10 uV to 100 V full scale (140 dB)
Voltage Accuracy................................... 6% (on the meter)
Dynamic Range...................................... >70 dB
Input Resistance..................................... 10, 30 and 100 K ohms (Depends on full
scale input range)
Power................................................... 110/230 VAC at 16 Watts
Weight.................................................. 44 pounds (it's that darn chassis!)
Front Panel Controls:
Max Voltage Range, Relative or Absolute, Range (in dB), Bandwidth, Mode
(AFC, Normal, BFO, USB, LSB and AM). Frequency (Fine and Coarse), Zero
Set, and Tracking (also AM Output) Amplitude.
The unit also incorporated a "sweeping" AFC circuit to keep the signal locked in
on narrow bandwidths.
* The calibrator was supposed to be a 60/40% duty cycle square wave; my unit is
closer to 50% duty cycle, so I only get odd harmonics of the calibrator (I need to
look into this).
** HP did not originally specify these things; these are my "Supplemental
Characteristics".
Certain functions and basic controls:
Span - allows one to fix the window of frequencies to visualize on the screen.
Marker - controls the position and function of markers and indicates the value of power.
Resolution bandwidth (RBW) - the resolution filter. The spectrum analyzer captures the
measurement by sweeping a filter of small bandwidth along the window of frequencies.
Peak - the maximum value of a signal at a point.
Trace - manages measurement parameters. It stores the maximum values at each frequency
and a saved measurement for comparison.
Frequency spectrum of the heating-up period of a switching power supply (spread
spectrum), including a waterfall diagram over a few minutes. Recorded with a Spectran
5030 spectrum analyzer.
Usually, a spectrum analyzer displays a power spectrum over a given frequency range,
changing the display as the properties of the signal change. There is a trade-off between
how quickly the display can be updated and the frequency resolution, which is for
example relevant for distinguishing frequency components that are close together. With a
digital spectrum analyzer, the frequency resolution is Δν = 1 / T, the inverse of the time T
over which the waveform is measured and Fourier transformed. With an analog spectrum
analyzer, it is dependent on the bandwidth setting of the bandpass filter. However, an
analog spectrum analyzer will not produce meaningful results if the filter bandwidth (in
Hz) is smaller than the square root of the sweep speed (in Hz/s), which means that an
analog spectrum analyzer can never beat a digital one in terms of frequency resolution for
a given acquisition time. Choosing a wider bandpass filter will improve the signal-to-
noise ratio at the expense of a decreased frequency resolution.
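The two resolution relations above can be sketched as small helper functions (the names are ours):

```python
import math

# Sketch of the frequency-resolution relations described in the text.

def digital_resolution_hz(measure_time_s):
    # Delta-nu = 1 / T for an FFT-based (digital) analyzer.
    return 1.0 / measure_time_s

def min_analog_rbw_hz(sweep_rate_hz_per_s):
    # An analog sweep needs a filter no narrower than sqrt(sweep speed),
    # per the constraint quoted above.
    return math.sqrt(sweep_rate_hz_per_s)

# A 0.1 s record resolves 10 Hz; a 1 MHz/s sweep needs at least 1 kHz RBW.
```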
With Fourier transform analysis in a digital spectrum analyzer, it is necessary to sample
the input signal with a sampling frequency νs that is at least twice the highest frequency
that is present in the signal, due to the Nyquist limit. A Fourier transform will then
produce a spectrum containing all frequencies from zero to νs / 2. This can place
considerable demands on the required analog-to-digital converter and processing power
for the Fourier transform. Often, one is only interested in a narrow frequency range, for
example between 88 and 108 MHz, which would require at least a sampling frequency of
216 MHz, not counting the low-pass anti-aliasing filter. In such cases, it can be more
economic to first use a superheterodyne receiver to transform the signal to a lower range,
such as 8 to 28 MHz, and then sample the signal at 56 MHz. This is how an analog-
digital-hybrid spectrum analyzer works.
For use with very weak signals, a pre-amplifier can be used, although harmonic and
intermodulation distortion may lead to the creation of new frequency components that
were not present in the original signal. A new method, which avoids using a high-frequency
local oscillator (LO) (which usually produces a high-frequency signal close to the signal), is
used in the latest analyzer generation such as Aaronia's Spectran series. The advantage of this
new method is a very low noise floor near the physical thermal noise limit of −174 dBm.
In acoustics, a spectrograph converts a sound wave into a sound spectrogram. The first
acoustic spectrograph was developed during World War II at Bell Telephone
Laboratories, and was widely used in speech science, acoustic phonetics and audiology
research, before eventually being superseded by digital signal processing techniques.
Spectrum analyzers are widely used to measure the frequency response, noise and
distortion characteristics of all kinds of RF circuitry, by comparing the input and output
spectra.
In telecommunications, spectrum analyzers are used to determine occupied bandwidth
and track interference sources. Cell planners use this equipment to determine interference
sources in the GSM/TETRA and UMTS technologies.
In EMC testing, spectrum analyzers may be used to characterise test signals and to
measure the response of the equipment under test.
Bird Technologies Group
Narda Safety Test Solution
Rohde & Schwarz
Total harmonic distortion (THD) measurements are among the most
commonly quoted specifications in audio. Contrary to belief in some circles, these can be
very useful if performed properly, and reveal much about the overall performance of
an amplifier.
There are a number of ways to measure distortion, none of which is
perfect. Probably the best is a spectrum analyser, which shows the individual
harmonics and their amplitudes. These are too expensive for the likes of you and
me (well, me, anyway) and the next best thing is featured here.
There are other methods as well, one of which is to subtract the output of
an amplifier from the input (with appropriate scaling). When the two signals are
exactly equal and opposite they are cancelled out - any signal left is distortion
created in the amplifier. This method seems easy, but is not, because there are
phase shifts within the amp that can be very difficult to compensate for exactly,
and the final accuracy of tuning the parameters - amplitude and phase - must be
just as great as with this circuit for a meaningful result.
The standard tool for measuring THD is a notch filter. This is tuned to
reject the fundamental frequency, and any signal that gets through is a
combination of the amplifier's noise (including any hum) and the distortion. The
distortion shows up as a signal that is harmonically related to the signal fed into
the amp, but is not the fundamental. Harmonics occur at double, triple, quadruple
(etc) the input frequency. These are referred to as 2nd, 3rd, 4th (etc) harmonics,
and are subdivided into odd and even. Even harmonics (2nd, 4th, etc.) are
claimed to sound better than odd (3rd, 5th, etc.), but in reality we don't want any at all.
Figure 1 - Basic Twin-T Notch Filter
The filter of Figure 1 is 'normalised' to 1 µF and 1 kΩ, giving a
frequency of 159 Hz. The resistor and capacitor ratios are extraordinarily critical if
a deep notch is to be obtained, and this is essential for distortion measurement.
This notch filter is called a Twin-T, and works by phase cancellation of the input
signal. When the phase shift is exactly +90° and -90° in the two sections, the
tuned frequency is completely cancelled, leaving only those signals that are not
tuned out. This residual signal represents total harmonic distortion + noise.
Figure 2 - Frequency Response Of Standard Notch Filter
The problem with the notch filter shown is that its attenuation is too high
at the 2nd harmonic, and in fact is only acceptable one decade from the
fundamental. The example shown has about 0.7dB attenuation at 1.6kHz - a
decade from the 159Hz fundamental frequency. This is corrected by using
feedback, which tries to get rid of the notch, but is completely unsuccessful, since
when properly tuned the notch is infinitely deep.
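The behaviour described above can be checked against the standard transfer function of a passive symmetric twin-T, H(s) = (s² + ω0²)/(s² + 4ω0s + ω0²). This textbook form is an assumption on our part, but it reproduces the roughly 0.7 dB error one decade from the notch quoted earlier:

```python
import math

# Response of the standard passive symmetric twin-T (assumed textbook form).
R, C = 1000.0, 1e-6                      # normalised values from Figure 1
f0 = 1.0 / (2 * math.pi * R * C)         # notch frequency, about 159 Hz

def twin_t_gain_db(f):
    """Gain in dB of H(jw) for the symmetric twin-T, w normalised to f0."""
    w = f / f0
    h = complex(1 - w * w, 0) / complex(1 - w * w, 4 * w)
    return 20 * math.log10(abs(h))

# About -9 dB one octave from the notch, but only about -0.66 dB a decade
# away -- which is why feedback is needed to sharpen the response.
```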
Too much feedback, and the filter will be untunable because it is too
sharp, so a compromise is needed. We need the filter to cause no more than a
dB or so of attenuation at double the fundamental frequency (this is one octave),
to prevent serious measurement errors of second harmonic distortion.
Figure 3 shows the result when we apply feedback, and the error at one octave is
now less than 1dB. This is acceptable for normal measurements, and the
resulting error is small, while retaining the ability to tune the filter. Note that
although it is tuneable, this filter is still extremely sharp, and unless multi-turn
pots are used it will be almost impossible to obtain a good notch. The slightest
variation of the input frequency will create a massively high 'distortion' figure. I
have found that with very low distortion amps, it is a real battle to measure the
distortion whilst trying to keep the filter tuned, because of drift.
Figure 3 - Frequency Response With Feedback
This overall characteristic is the desired one, so the final notch filter design is
shown in Figure 4, with the feedback applied from the opamp. I have chosen to
make the feedback adjustable, so that you can easily modify the characteristics if
you want to. This can make it a little easier to tune, since initial tuning can be
done with a small amount of feedback, and as the exact frequency is tuned in,
the feedback can be increased.
Once upon a time, it was possible to obtain 50k + 50k + 25k wirewound pots (I
think that was the range) - yes, a triple-gang wirewound pot with two separate
resistance values! These were made especially for just this type of circuit, but I doubt that
you will find one any more. The only way that multiple frequencies can be tested
is to use a switched selector, and ensure that there is enough range in the tuning
pots to make up for all capacitor value errors.
The accuracy of tuning is critical - a 40dB deep notch will show the distortion as
1%, even though it may be much less. A 60dB notch reduces this to 0.1% and so
on. For a 100dB notch, you will need all components accurate to within 10ppm
(parts per million), or 0.001%. Even a small temperature change can send the
meter needle (or oscilloscope trace) straight off the scale. I know this, because it
happens every time I try to measure very low distortion levels, and I can't even
get to 0.001% on my meter because my oscillator has more distortion than that.
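The relationship between notch depth and the apparent distortion floor is simple: every 20dB of notch depth divides the residual by ten. A quick sketch (the function name is my own, just for illustration):

```python
def residual_percent(notch_db):
    """Apparent distortion floor (%) left by a notch of the given depth
    in dB: the residual is the fundamental attenuated by that many dB,
    expressed as a percentage of the fundamental."""
    return 100.0 * 10 ** (-notch_db / 20.0)

print(residual_percent(40))   # 1.0 (a 40dB notch shows 1% 'distortion')
print(residual_percent(60))   # 0.1
print(residual_percent(100))  # 0.001
```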
Figure 4 - The Variable Q Tuneable Notch Filter
To change ranges, we must vary either the resistance or capacitance (or both).
Figure 5 shows the range switching. To try to keep the unit reasonably versatile, I
have included two switches. SW1 gives 20, 200 and 2kHz ranges by changing
capacitor values. SW2 gives the standard 1, 2, 5 sequence common in
oscilloscopes. This combination allows the following frequencies to be tested ...
Range (SW1)   Multiplier (SW2)   Frequency
20            x1                 20Hz
20            x2                 40Hz
20            x5                 100Hz
200           x1                 200Hz
200           x2                 400Hz
200           x5                 1kHz
2k            x1                 2kHz
2k            x2                 4kHz
2k            x5                 10kHz
Table 1 - Range Switching
The notch frequency is determined by
fo = 1 / (2 x pi x R1 x C1)
Resistor values are exactly R1 = R2, R3 = 0.5 x R1
Capacitor values are exactly C1 = C2, C3 = 2 x C1
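These relationships are easy to check numerically. A short sketch (the function name is mine) that derives a complete component set from a chosen notch frequency and C1:

```python
import math

def twin_t_values(fo, c1):
    """Component values for a twin-T notch at fo (Hz) given C1 (F),
    using the ratios above: R1 = R2, R3 = 0.5*R1, C2 = C1, C3 = 2*C1."""
    r1 = 1.0 / (2.0 * math.pi * fo * c1)
    return {"R1": r1, "R2": r1, "R3": 0.5 * r1,
            "C1": c1, "C2": c1, "C3": 2.0 * c1}

# The 'normalised' filter: C1 = 1uF at 159Hz gives R1 very close to 1k
vals = twin_t_values(159.15, 1e-6)
print(round(vals["R1"], 1))  # 1000.0 (ohms)
```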
The tuning requires that the ratios are exactly tuned - absolute values are not as
important, but must be stable. To be able to tune the notch precisely, we will use
pots in series with two of the resistors, R1 and R3. The range of the pots will vary
depending on the resistance that is switched into the circuit, and even with multi-
turn pots it is useful to have two in series, one with a lower resistance than the
other. This is shown in Figure 5.
It is essential to make sure that the pots have enough range to compensate for
the tolerance of the capacitors, and some care is needed to keep wiring
capacitance to a minimum, especially for the highest frequency range. To this
end, I suggest that all tuning components are wired directly to the switches and
pots, and that wiring is done with solid tinned copper wire. All wiring can be made
self supporting, and will exhibit very low capacitance.
Figure 5 - Input Level Control And Range Switching
The range switching simply connects different resistors and / or capacitors into
the circuit. A problem is that when resistances are changed, the sensitivity of the
tuning pots also changes. This is unavoidable unless you can get odd value
triple-gang pots (Do you feel lucky? - If you find them, buy a lottery ticket !!). VR4
and VR6 should be multi-turn - you can get geared pot drives to use standard
pots if multiturn units are unavailable.
The values and exact design frequencies are shown in Table 2. To make the
C3.x values, parallel two C1.x value caps, and if possible use a capacitance
meter to match all capacitors to within 1%. Standard tolerances will affect the
centre frequencies. Resistors must be metal film, 1% tolerance. The values of
R2.x and R3.x are lower than expected, because I have taken the mid resistance
of the two series pots into consideration. Note that some of the resistors require 2
components in series to get the desired resistance.
Frequency   C1.x    C3.x    R1.x   R2.x     R3.x
19.4 Hz     100nF   200nF   82k    75k +    39k + 1k
40.8 Hz     100nF   200nF   39k    36k      18k + 390
99.5 Hz     100nF   200nF   16k    13k      6.8k + 100
194 Hz      10nF    20nF    82k    75k +    39k + 1k
408 Hz      10nF    20nF    39k    36k      18k + 390
995 Hz      10nF    20nF    16k    13k      6.8k + 100
1.94 kHz    1nF     2nF     82k    75k +    39k + 1k
4.08 kHz    1nF     2nF     39k    36k      18k + 390
9.95 kHz    1nF     2nF     16k    13k      6.8k + 100
Table 2 - Resistor and Capacitor Values
There are a couple of things to be aware of with this circuit. Firstly, the input
impedance is quite low, and use of a buffer is not recommended because this will
introduce additional noise and distortion. The opamps used for the feedback
should be the best you can get hold of. The Burr-Brown OPA2604 is an excellent
choice, with 0.0003% distortion and low noise. Other devices that will be suitable
include the LM833 or the venerable NE5532.
To allow power amps to be tested, an input level control is needed, and this is
also used for calibration. The control ideally should be a wirewound device, since
power dissipation could be quite high, and wirewound pots add less noise than
carbon types.
The ideal measuring meter is an oscilloscope, but a millivoltmeter may be used.
Without the oscilloscope you will be unable to see the 'quality' of the distortion
components, but use of an amplifier will allow you to listen to the residual - make
sure that you have a limiter circuit on the amp, or a slight bump of the oscillator
frequency control will blow your head off! A suitable limiter is published as Project
The final measurement will include the distortion from the audio oscillator, and it
is likely that this will be greater than that of many amps. It is not really possible to
tell you how you can subtract this from the measured distortion, since the
distortion waveform has a huge influence over the result. The distortion
waveform is very important - a low average level spiky waveform (typical of
crossover distortion) will sound much worse than an apparently higher level of
"clean" 3rd harmonic distortion from a well designed push-pull amplifier stage.
To measure the distortion, set Q to minimum, all tuning pots to the mid
position, and frequency to something well away (> one decade) from the
intended measurement frequency. With the input level at minimum, apply
the signal to be measured. The voltage must be greater than 3V RMS.
If you are using a millivoltmeter (not digital!), set it to the 3V range, and
advance the input level until the meter reads full scale. Set the frequency
range controls to the test frequency, and reduce the Q control to about
half or less.
Carefully adjust the oscillator frequency, then the fine tuning controls (they
are interactive) until the minimum possible voltage reading is shown,
adjusting the range on the millivoltmeter as you get lower readings.
Advance the Q control and repeat until Q is at maximum, and you have
the minimum voltage reading. In some cases you will need to re-adjust the
oscillator frequency slightly to be able to obtain a null in the meter reading.
Make sure that the input level control is not changed during the
measurement, as the resistance affects the notch filter tuning, and you will
have to re-tune the filter.
If you were to obtain a final reading of 7mV, you can now determine the distortion:
THD% = ( V2 / V1 ) x 100, where V1 is the initial voltage and V2 is the lowest residual reading
THD = ( 0.007 / 3 ) x 100 = 0.23%
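The same arithmetic in a couple of lines of code (names are my own):

```python
def thd_percent(v_residual, v_input):
    """THD+N as a percentage: lowest residual reading after tuning,
    divided by the input level the meter was calibrated against."""
    return (v_residual / v_input) * 100.0

print(round(thd_percent(0.007, 3.0), 2))  # 0.23 (the 7mV example above)
```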
Remember that if you apply too much signal to the input, you will destroy the
opamp. The use of protection diodes is not an option (IMHO), as this will
introduce distortion, making your measurements useless. The distortion
introduced by the analyser would then exceed that of a good amplifier.
The distortion meter circuit can be simplified. For example, you may feel
that there are more ranges than you need, and these can be adapted for your
needs. You might even think that a single range is sufficient, and for many basic
tests this is OK. Naturally, you will be unaware of distortion that may become
apparent only at low or high frequencies - but I shall leave this up to you.
The traces below show the fundamental (green) at 705mV, some
deliberately introduced odd-order harmonic distortion (red) and even-order
distortion (blue). Note that there is no evidence whatsoever of the original 159Hz
fundamental - the notch filter has removed it completely. Anything left over has
been added to the original signal, and is therefore distortion.
Figure 6 - Fundamental, Plus Distortion Waveforms
The distortion voltages were amplified by 100 times for clarity. The odd-order
distortion measures 40.81mV (408.1uV), and even-order distortion was 33.05mV
(330.5uV) - remember, the displayed and measured distortion was amplified by
100. The discontinuity at the beginning of the waveforms (which start at 12.5ms
rather than zero) is because of the notch filter. Because it has a high Q, the
waveform suffers from considerable transient distortion, and the last remainder is
visible on the trace.
Using the formula above, distortion can be calculated ...
THD% = ( 408.1uV / 705mV ) x 100 = 0.058% (odd-order)
THD% = ( 330.5uV / 705mV ) x 100 = 0.047% (even-order)
Distortion was created by the simple circuit shown below. Since the graph shown
is from a simulator, it was necessary to add distortion because the simulator's
output waveform is a perfect sinewave, and has zero distortion.
Figure 7 - Distortion Generator
With an applied signal of 1V peak (707mV RMS), the distortion added is quite
small. This is especially true when you consider that the signal is limited by a 10
ohm resistor, and the diode distortion is applied via the 1k resistor. It is probable
that this amount of distortion would be inaudible on a sinewave, and it will
definitely be inaudible with a music signal of the same peak amplitude.
Figure 6-4: Torque on a Rotating Shaft
Torque is measured by either sensing the actual shaft deflection caused by a twisting
force, or by detecting the effects of this deflection. The surface of a shaft under
torque will experience compression and tension, as shown in Figure 6-4. To measure
torque, strain gage elements usually are mounted in pairs on the shaft, one gauge
measuring the increase in length (in the direction in which the surface is under
tension), the other measuring the decrease in length in the other direction.
Early torque sensors consisted of mechanical structures fitted with strain gages.
Their high cost and low reliability kept them from gaining general industrial
acceptance. Modern technology, however, has lowered the cost of making torque
measurements, while quality controls on production have increased the need for
accurate torque measurement.
Applications for torque sensors include determining the amount of power an engine,
motor, turbine, or other rotating device generates or consumes. In the industrial
world, ISO 9000 and other quality control specifications are now requiring companies
to measure torque during manufacturing, especially when fasteners are applied.
Sensors make the required torque measurements automatically on screw and
assembly machines, and can be added to hand tools. In both cases, the collected
data can be accumulated on dataloggers for quality control and reporting purposes.
Other industrial applications of torque sensors include measuring metal removal
rates in machine tools; the calibration of torque tools and sensors; measuring peel
forces, friction, and bottle cap torque; testing springs; and making biodynamic measurements.
Torque can be measured by rotating strain gages as well as by stationary proximity,
magnetostrictive, and magnetoelastic sensors. All are temperature sensitive. Rotary
sensors must be mounted on the shaft, which may not always be possible because of space limitations.
Figure 6-5: Inductive Coupling of Torque Sensors
A strain gage can be installed directly on a shaft. Because the shaft is rotating, the
torque sensor can be connected to its power source and signal conditioning
electronics via a slip ring. The strain gage also can be connected via a transformer,
eliminating the need for high maintenance slip rings. The excitation voltage for the
strain gage is inductively coupled, and the strain gage output is converted to a
modulated pulse frequency (Figure 6-5). Maximum speed of such an arrangement is
Strain gages also can be mounted on stationary support members or on the
housing itself. These "reaction" sensors measure the torque that is transferred by the
shaft to the restraining elements. The resultant reading is not completely accurate,
as it disregards the inertia of the motor.
Strain gages used for torque measurements include foil, diffused semiconductor,
and thin film types. These can be attached directly to the shaft by soldering or
adhesives. If the centrifugal forces are not large--and an out-of-balance load can be
tolerated--the associated electronics, including battery, amplifier, and radio
frequency transmitter all can be strapped to the shaft.
Proximity and displacement sensors also can detect torque by measuring the
angular displacement between a shaft's two ends. By fixing two identical toothed
wheels to the shaft at some distance apart, the angular displacement caused by the
torque can be measured. Proximity sensors or photocells located at each toothed
wheel produce output voltages whose phase difference increases as the torque twists the shaft.
Another approach is to aim a single photocell through both sets of toothed wheels.
As torque rises and causes one wheel to overlap the other, the amount of light
reaching the photocell is reduced. Displacements caused by torque can also be
detected by other optical, inductive, capacitive, and potentiometric sensors. For
example, a capacitance-type torque sensor can measure the change in capacitance
that occurs when torque causes the gap between two capacitance plates to vary.
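For displacement-based sensing, the link between the measured twist and the torque is the standard torsion formula for a uniform shaft, T = G x J x theta / L. A sketch (the function name and the example numbers are illustrative, not from any specific product):

```python
import math

def shaft_torque(theta_rad, g_shear, length, diameter):
    """Torque (N*m) on a solid circular shaft from its measured twist:
    T = G * J * theta / L, where J = pi * d^4 / 32 is the polar moment
    of area, G the shear modulus, L the gauge length between wheels."""
    j = math.pi * diameter ** 4 / 32.0
    return g_shear * j * theta_rad / length

# Illustrative: 25mm steel shaft (G ~ 80 GPa), toothed wheels 0.5m
# apart, measured twist of 0.001 rad
print(round(shaft_torque(0.001, 80e9, 0.5, 0.025), 1))  # ~6.1 N*m
```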
The ability of a shaft material to concentrate magnetic flux--magnetic permeability-
-also varies with torque and can be measured using a magnetostrictive sensor. When
the shaft has no loading, its permeability is uniform. Under torsion, permeability and
the number of flux lines increase in proportion to torque. This type of sensor can be
mounted to the side of the shaft using two primary and two secondary windings.
Alternatively, it can be arranged with many primary and secondary windings on a
ring around the shaft.
A magnetoelastic torque sensor detects changes in permeability by measuring
changes in its own magnetic field. One magnetoelastic sensor is constructed as a thin
ring of steel tightly coupled to a stainless steel shaft. This assembly acts as a
permanent magnet whose magnetic field is proportional to the torque applied to the
shaft. The shaft is connected between a drive motor and the driven device, such as a
screw machine. A magnetometer converts the generated magnetic field into an
electrical output signal that is proportional to the torque being applied
An 8-channel digital-to-analog converter (Cirrus Logic CS4382) on a Sound Blaster X-Fi sound card
In electronics, a digital-to-analog converter (DAC or D-to-A) is a device for
converting a digital (usually binary) code to an analog signal (current, voltage or
electric charge). An analog-to-digital converter (ADC) performs the reverse operation.
Basic ideal operation
Ideally sampled signal. Signal of a typical interpolating DAC output
A DAC converts an abstract finite-precision number (usually a fixed-point binary
number) into a concrete physical quantity (e.g., a voltage or a pressure). In
particular, DACs are often used to convert finite-precision time series data to a
continually-varying physical signal.
A typical DAC converts the abstract numbers into a concrete sequence of
impulses that are then processed by a reconstruction filter, which uses some form
of interpolation to fill in data between the impulses. Other DAC methods (e.g.,
methods based on delta-sigma modulation) produce a pulse-density modulated
signal that can then be filtered in a similar way to produce a smoothly varying signal.
By the Nyquist–Shannon sampling theorem, sampled data can be reconstructed
perfectly provided that its bandwidth meets certain requirements (e.g., a
baseband signal with bandwidth less than the Nyquist frequency). However, even
with an ideal reconstruction filter, digital sampling introduces quantization error
that makes perfect reconstruction practically impossible. Increasing the digital
resolution (i.e., increasing the number of bits used in each sample) or introducing
sampling dither can reduce this error.
In practice, instead of impulses, the sequence of numbers usually updates the
analogue voltage at uniform sampling intervals.
These numbers are written to the DAC, typically with a clock signal that causes
each number to be latched in sequence, at which time the DAC output voltage
changes rapidly from the previous value to the value represented by the currently
latched number. The effect of this is that the output voltage is held in time at the
current value until the next input number is latched resulting in a piecewise
constant or 'staircase' shaped output. This is equivalent to a zero-order hold
operation and has an effect on the frequency response of the reconstructed signal.
Piecewise constant signal typical of a zero-order (non-interpolating) DAC output.
The fact that practical DACs output a sequence of piecewise constant values or
rectangular pulses would cause multiple harmonics above the Nyquist frequency.
These are typically removed with a low pass filter acting as a reconstruction filter.
However, this filter means that there is an inherent effect of the zero-order hold
on the effective frequency response of the DAC resulting in a mild roll-off of gain
at the higher frequencies (often a 3.9224 dB loss at the Nyquist frequency) and
depending on the filter, phase distortion. This high-frequency roll-off is the output
characteristic of the DAC, and is not an inherent property of the sampled data.
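The 3.9224 dB figure comes from the sinc (sin x / x) frequency response of the zero-order hold, and is easy to verify numerically (a sketch; the function name is mine):

```python
import math

def zoh_loss_db(f_ratio):
    """Zero-order-hold gain loss in dB at a frequency expressed as a
    fraction of the sample rate (the Nyquist frequency is 0.5).
    The ZOH response is sinc-shaped: sin(pi*f/fs) / (pi*f/fs)."""
    x = math.pi * f_ratio
    return -20.0 * math.log10(math.sin(x) / x)

print(round(zoh_loss_db(0.5), 4))  # 3.9224 dB at the Nyquist frequency
```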
Top-loading CD player and external digital-to-analog converter.
Most modern audio signals are stored in digital form (for example MP3s and
CDs) and in order to be heard through speakers they must be converted into an
analog signal. DACs are therefore found in CD players, digital music players, and
PC sound cards.
Specialist stand-alone DACs can also be found in high-end hi-fi systems. These
normally take the digital output of a CD player (or dedicated transport) and
convert the signal into a line-level output that can then be fed into a pre-amplifier stage.
Similar digital-to-analog converters can be found in digital speakers such as USB
speakers, and in sound cards.
Video signals from a digital source, such as a computer, must be converted to
analog form if they are to be displayed on an analog monitor. As of 2007, analog
inputs are more commonly used than digital, but this may change as flat panel
displays with DVI and/or HDMI connections become more widespread. A video
DAC is, however, incorporated in any Digital Video Player with analog outputs.
The DAC is usually integrated with some memory (RAM), which contains
conversion tables for gamma correction, contrast and brightness, to make a
device called a RAMDAC.
A device that is distantly related to the DAC is the digitally controlled
potentiometer, used to control an analog signal digitally.
The most common types of electronic DACs are:
the Pulse Width Modulator, the simplest DAC type. A stable current or voltage
is switched into a low pass analog filter with a duration determined by the digital
input code. This technique is often used for electric motor speed control, and is
now becoming common in high-fidelity audio.
Oversampling DACs or Interpolating DACs such as the Delta-Sigma DAC,
use a pulse density conversion technique. The oversampling technique allows for
the use of a lower resolution DAC internally. A simple 1-bit DAC is often chosen
because the oversampled result is inherently linear. The DAC is driven with a
pulse density modulated signal, created with the use of a low-pass filter, step
nonlinearity (the actual 1-bit DAC), and negative feedback loop, in a technique
called delta-sigma modulation. This results in an effective high-pass filter acting
on the quantization (signal processing) noise, thus steering this noise out of the
low frequencies of interest into the high frequencies of little interest, which is
called noise shaping (very high frequencies because of the oversampling). The
quantization noise at these high frequencies are removed or greatly attenuated by
use of an analog low-pass filter at the output (sometimes a simple RC low-pass
circuit is sufficient). Most very high resolution DACs (greater than 16 bits) are of
this type due to its high linearity and low cost.
Higher oversampling rates can both relax the specifications of the output low-
pass filter and enable further suppression of quantization noise. Speeds of greater
than 100 thousand samples per second (for example, 192kHz) and resolutions of
24 bits are attainable with Delta-Sigma DACs. A short comparison with pulse
width modulation shows that a 1-bit DAC with a simple first-order integrator
would have to run at 3 THz (which is physically unrealizable) to achieve 24
meaningful bits of resolution, requiring a higher order low-pass filter in the noise-
shaping loop. A single integrator is a low pass filter with a frequency response
inversely proportional to frequency and using one such integrator in the noise-
shaping loop is a first order delta-sigma modulator. Multiple higher order
topologies (such as MASH) are used to achieve higher degrees of noise-shaping
with a stable topology.
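The loop described above (integrator, 1-bit quantizer, negative feedback) is only a few lines of code. A first-order sketch, with names of my own choosing:

```python
def delta_sigma_1bit(samples):
    """First-order delta-sigma modulator: integrate the difference
    between the input and the fed-back 1-bit output, then quantize.
    Input samples in [-1, 1]; output is a +/-1 pulse-density stream
    whose short-term average tracks the input."""
    integrator = 0.0
    feedback = 0.0
    bits = []
    for x in samples:
        integrator += x - feedback          # delta, then sigma
        bit = 1.0 if integrator >= 0.0 else -1.0
        bits.append(bit)
        feedback = bit                      # the 1-bit DAC in the loop
    return bits

# A constant input of 0.5 produces a bitstream averaging close to 0.5
stream = delta_sigma_1bit([0.5] * 1000)
print(round(sum(stream) / len(stream), 2))  # 0.5
```

A real converter follows this modulator with the analog low-pass filter described above, which recovers the average and strips the shaped high-frequency noise.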
the Binary Weighted DAC, which contains one resistor or current source for
each bit of the DAC connected to a summing point. These precise voltages or
currents sum to the correct output value. This is one of the fastest conversion
methods but suffers from poor accuracy because of the high precision required for
each individual voltage or current. Such high-precision resistors and current-
sources are expensive, so this type of converter is usually limited to 8-bit
resolution or less.
the R-2R ladder DAC, which is a binary weighted DAC that uses a repeating
cascaded structure of resistor values R and 2R. This improves the precision due to
the relative ease of producing equal valued matched resistors (or current sources).
However, wide converters perform slowly due to increasingly large RC-constants
for each added R-2R link.
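An ideal R-2R ladder (like any ideal binary-weighted DAC) simply scales the reference voltage by the input code. A sketch with hypothetical names:

```python
def dac_output(code, bits, vref):
    """Ideal binary-weighted / R-2R DAC transfer function: each bit
    contributes a binary-weighted fraction of the reference voltage."""
    if not 0 <= code < 2 ** bits:
        raise ValueError("code out of range")
    return vref * code / (2 ** bits)

print(dac_output(128, 8, 5.0))  # 2.5 (mid-scale of an 8-bit, 5V DAC)
print(dac_output(255, 8, 5.0))  # one LSB below full scale
```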
the Thermometer coded DAC, which contains an equal resistor or current source
segment for each possible value of DAC output. An 8-bit thermometer DAC
would have 255 segments, and a 16-bit thermometer DAC would have 65,535
segments. This is perhaps the fastest and highest precision DAC architecture but
at the expense of high cost. Conversion speeds of >1 billion samples per second
have been reached with this type of DAC.
Hybrid DACs, which use a combination of the above techniques in a single
converter. Most DAC integrated circuits are of this type due to the difficulty of
getting low cost, high speed and high precision in one device.
o the Segmented DAC, which combines the thermometer coded principle
for the most significant bits and the binary weighted principle for the least
significant bits. In this way, a compromise is obtained between precision
(by the use of the thermometer coded principle) and number of resistors or
current sources (by the use of the binary weighted principle). The full
binary weighted design means 0% segmentation, the full thermometer
coded design means 100% segmentation.
DACs are at the beginning of the analog signal chain, which makes them very
important to system performance. The most important characteristics of these devices are:
Resolution: This is the number of possible output levels the DAC is designed to
reproduce. This is usually stated as the number of bits it uses, which is the base
two logarithm of the number of levels. For instance a 1 bit DAC is designed to
reproduce 2 (2^1) levels while an 8 bit DAC is designed for 256 (2^8) levels.
Resolution is related to the Effective Number of Bits (ENOB) which is a
measurement of the actual resolution attained by the DAC.
Maximum sampling frequency: This is a measurement of the maximum speed at
which the DACs circuitry can operate and still produce the correct output. As
stated in the Nyquist–Shannon sampling theorem, a signal must be sampled at
over twice the frequency of the desired signal. For instance, to reproduce signals
in all the audible spectrum, which includes frequencies of up to 20 kHz, it is
necessary to use DACs that operate at over 40 kHz. The CD standard samples
audio at 44.1 kHz, thus DACs of this frequency are often used. A common
frequency in cheap computer sound cards is 48 kHz – many work at only this
frequency, offering the use of other sample rates only through (often poor) internal interpolation.
Monotonicity: This refers to the ability of a DAC's analog output to increase with
an increase in digital code (or the converse). This characteristic is very important
for DACs used as a low frequency signal source or as a digitally programmable trim element.
THD+N: This is a measurement of the distortion and noise introduced to the
signal by the DAC. It is expressed as a percentage of the total power of unwanted
harmonic distortion and noise that accompany the desired signal. This is a very
important DAC characteristic for dynamic and small signal DAC applications.
Dynamic range: This is a measurement of the difference between the largest and
smallest signals the DAC can reproduce expressed in decibels. This is usually
related to DAC resolution and noise floor.
Other measurements, such as Phase distortion and Sampling Period Instability,
can also be very important for some applications.
DAC figures of merit
o DNL (Differential Non-Linearity) shows how much two adjacent code
analog values deviate from the ideal 1 LSB step.
o INL (Integral Non-Linearity) shows how much the DAC transfer
characteristic deviates from an ideal one. That is, the ideal characteristic is
usually a straight line; INL shows how much the actual voltage at a given
code value differs from that line, in LSBs (1LSB steps).
o Noise is ultimately limited by the thermal noise generated by passive
components such as resistors. For audio applications and in room
temperatures, such noise is usually a little less than 1 μV (microvolt) of
white noise. This limits performance to less than 20~21 bits even in 24-bit
DACs, and cannot be corrected unless one resorts to extremely low
temperatures to create superconductivity: clearly an impractical proposition.
Frequency domain performance
o SFDR (Spurious Free Dynamic Range) indicates in dB the ratio between
the powers of the converted main signal and the greatest undesired spur
o SNDR (Signal to Noise and Distortion Ratio) indicates in dB the ratio
between the powers of the converted main signal and the sum of the noise
and the generated harmonic spurs
o HDi (i-th Harmonic Distortion) indicates the power of the i-th harmonic of
the converted main signal
o THD (Total Harmonic Distortion) is the sum of the powers of all HDi.
o If the maximum DNL error is less than 1 LSB, then the D/A converter is
guaranteed to be monotonic.
However, many monotonic converters may have a maximum DNL greater than 1 LSB.
Time domain performance
o Glitch Energy
o Response Uncertainty
o TNL (Time Non-Linearity)
ANALOG-TO-DIGITAL CONVERTER (ADC)
An analog-to-digital converter (abbreviated ADC, A/D or A to D) is a device
which converts continuous signals to discrete digital numbers. The reverse
operation is performed by a digital-to-analog converter (DAC).
Typically, an ADC is an electronic device that converts an input analog voltage
(or current) to a digital number. However, some non-electronic or only partially
electronic devices, such as rotary encoders, can also be considered ADCs. The
digital output may use different coding schemes, such as binary, Gray code or
two's complement binary.
The resolution of the converter indicates the number of discrete values it can
produce over the range of analog values. The values are usually stored
electronically in binary form, so the resolution is usually expressed in bits. In
consequence, the number of discrete values available, or "levels", is usually a
power of two. For example, an ADC with a resolution of 8 bits can encode an
analog input to one in 256 different levels, since 2^8 = 256. The values can
represent the ranges from 0 to 255 (i.e. unsigned integer) or from -128 to 127
(i.e. signed integer), depending on the application.
Resolution can also be defined electrically, and expressed in volts. The voltage
resolution of an ADC is equal to its overall voltage measurement range divided
by the number of discrete intervals, as in the formula:
Q = EFSR / N
where
Q is the resolution in volts per step (volts per output code),
EFSR is the full scale voltage range = VRefHi − VRefLo,
M is the ADC's resolution in bits, and
N is the number of intervals, given by the number of available levels (output
codes): N = 2^M.
Some examples may help:
o Full scale measurement range = 0 to 10 volts
o ADC resolution is 12 bits: 2^12 = 4096 quantization levels (codes)
o ADC voltage resolution is: (10V − 0V) / 4096 codes = 10V / 4096 codes
≈ 0.00244 volts/code ≈ 2.44 mV/code
o Full scale measurement range = −10 to +10 volts
o ADC resolution is 14 bits: 2^14 = 16384 quantization levels (codes)
o ADC voltage resolution is: (10V − (−10V)) / 16384 codes = 20V / 16384
codes ≈ 0.00122 volts/code ≈ 1.22 mV/code
o Full scale measurement range = 0 to 8 volts
o ADC resolution is 3 bits: 2^3 = 8 quantization levels (codes)
o ADC voltage resolution is: (8 V − 0 V) / 8 codes = 1 volt/code = 1000 mV/code
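All three examples reduce to the single formula Q = EFSR / 2^M. A quick check (the function name is mine):

```python
def adc_resolution(v_lo, v_hi, bits):
    """Voltage resolution Q = EFSR / N in volts per code, N = 2**bits."""
    return (v_hi - v_lo) / (2 ** bits)

print(round(adc_resolution(0, 10, 12) * 1000, 2))    # 2.44 mV/code
print(round(adc_resolution(-10, 10, 14) * 1000, 2))  # 1.22 mV/code
print(adc_resolution(0, 8, 3))                       # 1.0 V/code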
In practice, the smallest output code ("0" in an unsigned system) represents a
voltage range which is 0.5X the ADC voltage resolution (Q) (i.e., half as wide
as the other codes), while the largest output code represents a voltage range
which is 1.5X the ADC voltage resolution (i.e., 50% wider than the other
codes). The other N − 2 codes are all equal in width and represent the ADC
voltage resolution (Q) calculated above. Doing this centers each code on the
input voltage that represents its division of the input voltage range. For
example, in Example 3, with the 3-bit ADC spanning an 8 V range, each of the N
divisions would represent 1 V, except the 1st ("0" code) which is 0.5 V wide, and
the last ("7" code) which is 1.5 V wide. Doing this the "1" code spans a voltage
range from 0.5 to 1.5 V, the "2" code spans a voltage range from 1.5 to 2.5 V,
etc. Thus, if the input signal is at 3/8ths of the full-scale voltage, then the ADC
outputs the "3" code, and will do so as long as the voltage stays within the range
of 2.5/8ths and 3.5/8ths. This practice is called "Mid-Tread" operation. This type
of ADC can be modeled mathematically as a rounding operation: code = round(Vin / Q),
limited to the range 0 … 2^M − 1.
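The mid-tread behaviour described above can be sketched as a rounding operation (a model of my own construction, not any particular part's datasheet behaviour):

```python
def mid_tread_code(vin, v_fs, bits):
    """Mid-tread ADC model: code boundaries sit half a step either side
    of each code centre, so code 0 is half a step wide and the top code
    is one and a half steps wide."""
    q = v_fs / (2 ** bits)               # resolution, volts per code
    code = int(vin / q + 0.5)            # round to the nearest step (vin >= 0)
    return max(0, min(code, 2 ** bits - 1))

# The 3-bit, 8V example: 3V (3/8 of full scale) reads code 3, and the
# code stays at 3 from 2.5V up to just under 3.5V
print(mid_tread_code(3.0, 8.0, 3))  # 3
print(mid_tread_code(2.6, 8.0, 3))  # 3
print(mid_tread_code(3.4, 8.0, 3))  # 3
```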
The exception to this convention seems to be the Microchip PIC processor,
where all M steps are equal width. This practice is called "Mid-Rise with Offset" operation.
In practice, the useful resolution of a converter is limited by the best signal-to-
noise ratio that can be achieved for a digitized signal. An ADC can resolve a
signal to only a certain number of bits of resolution, called the "effective number
of bits" (ENOB). One effective bit of resolution changes the signal-to-noise ratio
of the digitized signal by 6 dB, if the resolution is limited by the ADC. If a
preamplifier has been used prior to A/D conversion, the noise introduced by the
amplifier can be an important contributing factor towards the overall SNR.
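The "6 dB per effective bit" rule comes from the ideal quantization-noise SNR of a full-scale sine wave, commonly written SNR ≈ 6.02·N + 1.76 dB. A quick sketch:

```python
def ideal_snr_db(enob):
    """Ideal SNR (dB) of an N-bit quantizer driven by a full-scale
    sine wave; each effective bit is worth about 6 dB."""
    return 6.02 * enob + 1.76

print(round(ideal_snr_db(16), 2))                     # 98.08
print(round(ideal_snr_db(17) - ideal_snr_db(16), 2))  # 6.02
```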
Most ADCs are of a type known as linear, although analog-to-digital conversion
is an inherently non-linear process (since the mapping of a continuous space to a
discrete space is a piecewise-constant and therefore non-linear operation). The
term linear as used here means that the range of the input values that map to
each output value has a linear relationship with the output value, i.e., that the
output value k is used for the range of input values from m(k + b) to
m(k + 1 + b),
where m and b are constants. Here b is typically 0 or −0.5. When b = 0, the ADC
is referred to as mid-rise, and when b = −0.5 it is referred to as mid-tread.
If the probability density function of a signal being digitized is uniform, then the
signal-to-noise ratio relative to the quantization noise is the best possible.
Because this is often not the case, it's usual to pass the signal through its
cumulative distribution function (CDF) before the quantization. This is good
because the regions that are more important get quantized with a better
resolution. In the dequantization process, the inverse CDF is needed.
This is the same principle behind the companders used in some tape-recorders
and other communication systems, and is related to entropy maximization.
For example, a voice signal has a Laplacian distribution. This means that the
region around the lowest levels, near 0, carries more information than the regions
with higher amplitudes. Because of this, logarithmic ADCs are very common in
voice communication systems to increase the dynamic range of the
representable values while retaining fine-granular fidelity in the low-amplitude
region. An eight-bit A-law or μ-law logarithmic ADC covers a wide dynamic range
and has high resolution in the critical low-amplitude region, which would
otherwise require a 12-bit linear ADC.
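The logarithmic mapping can be sketched with the standard μ-law compression formula (μ = 255, as used in North American telephony); the function name is illustrative:

```python
import math

# mu-law compression: map x in [-1, 1] to y in [-1, 1] so that small
# amplitudes occupy a large share of the output range before uniform
# quantization.
def mu_law_compress(x, mu=255):
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

print(round(mu_law_compress(0.01), 3))   # a 1% signal maps to ~23% of range
print(mu_law_compress(1.0))              # full scale stays at 1.0
```

Uniformly quantizing the compressed value then spends most of the codes near zero, exactly where a voice signal carries the most information.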
An ADC has several sources of errors. Quantization error and (assuming the
ADC is intended to be linear) non-linearity are intrinsic to any analog-to-digital
conversion. There is also a so-called aperture error, which is due to clock jitter
and is revealed when digitizing a time-variant signal (not a constant value).
These errors are measured in a unit called the LSB, which is an abbreviation for
least significant bit. In the above example of an eight-bit ADC, an error of one
LSB is 1/256 of the full signal range, or about 0.4%.
Quantization error is due to the finite resolution of the ADC, and is an
unavoidable imperfection in all types of ADC. The magnitude of the quantization
error at the sampling instant is between zero and half of one LSB.
In the general case, the original signal is much larger than one LSB. When this
happens, the quantization error is not correlated with the signal, and has a
uniform distribution. Its RMS value is the standard deviation of this distribution,
given by Q/√12 ≈ 0.289 LSB. In the eight-bit ADC example, this represents
0.113% of the full signal range.
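The 0.113% figure follows directly from Q/√12; a quick numeric check:

```python
import math

# RMS quantization error of an ideal converter is Q/sqrt(12),
# where Q is one LSB expressed as a fraction of the full range.
bits = 8
q = 1 / 2**bits
rms = q / math.sqrt(12)
print(f"{100 * rms:.3f}% of full scale")   # 0.113% for an 8-bit ADC
```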
At lower levels the quantizing error becomes dependent on the input signal,
resulting in distortion. This distortion is created after the anti-aliasing filter,
and if these distortions are above 1/2 the sample rate they will alias back into
the audio band. In order to make the quantizing error independent of the input
signal, noise with an amplitude of one quantization step is added to the signal.
This slightly reduces the signal-to-noise ratio, but completely eliminates the
distortion. It is known as dither.
All ADCs suffer from non-linearity errors caused by their physical imperfections,
causing their output to deviate from a linear function (or some other function,
in the case of a deliberately non-linear ADC) of their input. These errors can
sometimes be mitigated by calibration, or prevented by testing.
Important parameters for linearity are integral non-linearity (INL) and differential
non-linearity (DNL). These non-linearities reduce the dynamic range of the
signals that can be digitized by the ADC, also reducing the effective resolution
of the ADC.
Imagine that we are digitizing a sine wave x(t) = A sin(2πf₀t). Provided that the
actual sampling time uncertainty due to the clock jitter is Δt, the error caused by
this phenomenon can be estimated as Eap ≤ |x′(t)|·Δt = 2πf₀AΔt.
One can see that the error is relatively small at low frequencies, but can become
significant at high frequencies.
This effect can be ignored if it is relatively small as compared with quantizing
error. Jitter requirements can be calculated using the formula
Δt < 1/(2^q · π · f₀), where q is the number of ADC bits and f₀ is the input
frequency.
Maximum tolerable clock jitter by ADC resolution and input frequency:
Bits  1 Hz     44.1 kHz  192 kHz  1 MHz    10 MHz   100 MHz  1 GHz
8 1243 µs 28.2 ns 6.48 ns 1.24 ns 124 ps 12.4 ps 1.24 ps
10 311 µs 7.05 ns 1.62 ns 311 ps 31.1 ps 3.11 ps 0.31 ps
12 77.7 µs 1.76 ns 405 ps 77.7 ps 7.77 ps 0.78 ps 0.08 ps
14 19.4 µs 441 ps 101 ps 19.4 ps 1.94 ps 0.19 ps 0.02 ps
16 4.86 µs 110 ps 25.3 ps 4.86 ps 0.49 ps 0.05 ps –
18 1.21 µs 27.5 ps 6.32 ps 1.21 ps 0.12 ps – –
20 304 ns 6.88 ps 1.58 ps 0.16 ps – – –
24 19.0 ns 0.43 ps 0.10 ps – – – –
32 74.1 ps – – – – – –
This table shows, for example, that it is not worth using a precise 24-bit ADC
for sound recording without an ultra-low-jitter clock. This phenomenon should
be taken into account before choosing an ADC.
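The table entries can be reproduced from the jitter criterion Δt < 1/(2^q · π · f₀) quoted above; for example:

```python
import math

# Maximum tolerable clock jitter for a q-bit ADC at input frequency f0.
def max_jitter(q_bits, f0_hz):
    return 1.0 / (2**q_bits * math.pi * f0_hz)

print(max_jitter(8, 1))          # ~1.243e-3 s: the 1243 us table entry
print(max_jitter(16, 44.1e3))    # ~1.10e-10 s: the 110 ps table entry
```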
The analog signal is continuous in time and it is necessary to convert this to a
flow of digital values. It is therefore required to define the rate at which new
digital values are sampled from the analog signal. The rate of new values is
called the sampling rate or sampling frequency of the converter.
A continuously varying bandlimited signal can be sampled (that is, the signal
values at intervals of time T, the sampling time, are measured and stored) and
then the original signal can be exactly reproduced from the discrete-time values
by an interpolation formula. The accuracy is limited by quantization error.
However, this faithful reproduction is only possible if the sampling rate is higher
than twice the highest frequency of the signal. This is essentially what is
embodied in the Shannon-Nyquist sampling theorem.
Since a practical ADC cannot make an instantaneous conversion, the input value
must necessarily be held constant during the time that the converter performs a
conversion (called the conversion time). An input circuit called a sample and hold
performs this task—in most cases by using a capacitor to store the analog
voltage at the input, and using an electronic switch or gate to disconnect the
capacitor from the input. Many ADC integrated circuits include the sample and
hold subsystem internally.
All ADCs work by sampling their input at discrete intervals of time. Their output is
therefore an incomplete picture of the behaviour of the input. There is no way of
knowing, by looking at the output, what the input was doing between one
sampling instant and the next. If the input is known to be changing slowly
compared to the sampling rate, then it can be assumed that the value of the
signal between two sample instants was somewhere between the two sampled
values. If, however, the input signal is changing fast compared to the sample
rate, then this assumption is not valid.
If the digital values produced by the ADC are, at some later stage in the system,
converted back to analog values by a digital to analog converter or DAC, it is
desirable that the output of the DAC be a faithful representation of the original
signal. If the input signal is changing much faster than the sample rate, then this
will not be the case, and spurious signals called aliases will be produced at the
output of the DAC. The frequency of the aliased signal is the difference between
the signal frequency and the sampling rate. For example, a 2 kHz sinewave
being sampled at 1.5 kHz would be reconstructed as a 500 Hz sinewave. This
problem is called aliasing.
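The folding arithmetic behind the 2 kHz example can be sketched as:

```python
# An undersampled tone folds back around multiples of the sample rate;
# the reconstructed alias is the distance to the nearest multiple.
def alias_freq(f_signal, f_sample):
    f = f_signal % f_sample
    return min(f, f_sample - f)

print(alias_freq(2000, 1500))   # the 2 kHz tone sampled at 1.5 kHz -> 500
```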
To avoid aliasing, the input to an ADC must be low-pass filtered to remove
frequencies above half the sampling rate. This filter is called an anti-aliasing filter,
and is essential for a practical ADC system that is applied to analog signals with
higher frequency content.
Although aliasing in most systems is unwanted, it should also be noted that it can
be exploited to provide simultaneous down-mixing of a band-limited high
frequency signal (see frequency mixer).
In A to D converters, performance can usually be improved using dither. This is a
very small amount of random noise (white noise) which is added to the input
before conversion. Its amplitude is set to be about half of the least significant bit.
Its effect is to cause the state of the LSB to randomly oscillate between 0 and 1
in the presence of very low levels of input, rather than sticking at a fixed value.
Rather than the signal simply getting cut off altogether at this low level (which is
only being quantized to a resolution of 1 bit), it extends the effective range of
signals that the A to D converter can convert, at the expense of a slight increase
in noise - effectively the quantization error is diffused across a series of noise
values which is far less objectionable than a hard cutoff. The result is an accurate
representation of the signal over time. A suitable filter at the output of the system
can thus recover this small signal variation.
An audio signal of very low level (with respect to the bit depth of the ADC)
sampled without dither sounds extremely distorted and unpleasant. Without
dither the low level always yields the same code from the A to D. With
dithering, the true
level of the audio is still recorded as a series of values over time, rather than a
series of separate bits at one instant in time.
A virtually identical process, also called dither or dithering, is often used when
quantizing photographic images to a fewer number of bits per pixel—the image
becomes noisier but to the eye looks far more realistic than the quantized image,
which otherwise becomes banded. This analogous process may help to visualize
the effect of dither on an analogue audio signal that is converted to digital.
Dithering is also used in integrating systems such as electricity meters. Since the
values are added together, the dithering produces results that are more exact
than the LSB of the analog-to-digital converter.
Note that dither can only increase the resolution of a sampler, it cannot improve
the linearity, and thus accuracy does not necessarily improve.
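A toy demonstration of the effect, assuming uniform ±0.5 LSB dither (a common choice; real designs vary in their dither statistics):

```python
import random

# A DC level of 0.3 LSB vanishes under plain rounding, but with dither
# the average of many quantized samples recovers the sub-LSB value.
random.seed(1)
true_level = 0.3   # in units of one LSB

undithered = round(true_level)   # always 0: the signal is lost
dithered = [round(true_level + random.uniform(-0.5, 0.5))
            for _ in range(100_000)]
recovered = sum(dithered) / len(dithered)

print(undithered)                # 0
print(round(recovered, 2))       # close to 0.3
```

This is the "suitable filter at the output" mentioned above reduced to its simplest form, a long average.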
Usually, signals are sampled at the minimum rate required, for economy, with the
result that the quantization noise introduced is white noise spread over the whole
pass band of the converter. If a signal is sampled at a rate much higher than the
Nyquist frequency and then digitally filtered to limit it to the signal bandwidth then
there are 3 main advantages:
digital filters can have better properties (sharper rolloff, phase) than analogue
filters, so a sharper anti-aliasing filter can be realised and then the signal can be
downsampled giving a better result
a 20 bit ADC can be made to act as a 24 bit ADC with 256x oversampling
the signal-to-noise ratio due to quantization noise will be higher than if the whole
available band had been used. With this technique, it is possible to obtain an
effective resolution larger than that provided by the converter alone
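The resolution gain quoted above follows from the rule of thumb that each doubling of the sample rate buys 3 dB (half a bit) of quantization SNR after filtering:

```python
import math

# Extra effective bits gained by oversampling and filtering:
# 0.5 * log2(oversampling ratio).
def extra_bits(osr):
    return 0.5 * math.log2(osr)

print(extra_bits(256))   # 4.0 -> a 20-bit ADC acting as a 24-bit ADC
```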
These are the most common ways of implementing an electronic ADC:
A direct conversion ADC or flash ADC has a bank of comparators, each firing
for their decoded voltage range. The comparator bank feeds a logic circuit that
generates a code for each voltage range. Direct conversion is very fast, but usually
has only 8 bits of resolution (255 comparators, since the number of comparators
required is 2^n − 1) or fewer, as it needs a large, expensive circuit. ADCs of this
type have a large die size, a high input capacitance, and are prone to produce
glitches on the output (by outputting an out-of-sequence code). Scaling to newer
submicrometre technologies does not help as the device mismatch is the dominant
design limitation. They are often used for video, wideband communications or
other fast signals in optical storage.
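A behavioural model of a flash converter (names and voltage scaling are illustrative):

```python
# Flash ADC: 2**n - 1 comparators against resistor-ladder tap voltages;
# the output code is the number of comparators the input trips
# (a thermometer code reduced to binary by counting).
def flash_adc(vin, vref, bits):
    taps = [vref * (k + 1) / 2**bits for k in range(2**bits - 1)]
    return sum(vin >= t for t in taps)

print(flash_adc(3.2, 8.0, 3))   # trips the 1, 2 and 3 V comparators -> 3
```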
A successive-approximation ADC uses a comparator to reject ranges of
voltages, eventually settling on a final voltage range. Successive approximation
works by constantly comparing the input voltage to the output of an internal
digital to analog converter (DAC, fed by the current value of the approximation)
until the best approximation is achieved. At each step in this process, a binary
value of the approximation is stored in a successive approximation register
(SAR). The SAR uses a reference voltage (which is the largest signal the ADC is
to convert) for comparisons. For example if the input voltage is 60 V and the
reference voltage is 100 V, in the 1st clock cycle, 60 V is compared to 50 V (the
reference, divided by two. This is the voltage at the output of the internal DAC
when the input is a '1' followed by zeros), and the voltage from the comparator is
positive (or '1') (because 60 V is greater than 50 V). At this point the first binary
digit (MSB) is set to a '1'. In the 2nd clock cycle the input voltage is compared to
75 V (being halfway between 100 and 50 V: This is the output of the internal
DAC when its input is '11' followed by zeros) because 60 V is less than 75 V, the
comparator output is now negative (or '0'). The second binary digit is therefore set
to a '0'. In the 3rd clock cycle, the input voltage is compared with 62.5 V (halfway
between 50 V and 75 V: This is the output of the internal DAC when its input is
'101' followed by zeros). The output of the comparator is negative or '0' (because
60 V is less than 62.5 V) so the third binary digit is set to a 0. The fourth clock
cycle similarly results in the fourth digit being a '1' (60 V is greater than 56.25 V,
the DAC output for '1001' followed by zeros). The result of this would be in the
binary form 1001. This is also called bit-weighting conversion, and is similar to a
binary search. The analogue value is rounded to the nearest binary value below,
meaning this converter type is mid-rise (see above). Because the approximations
are successive (not simultaneous), the conversion takes one clock-cycle for each
bit of resolution desired. The clock frequency must be equal to the sampling
frequency multiplied by the number of bits of resolution desired. For example, to
sample audio at 44.1 kHz with 32 bit resolution, a clock frequency of over
1.4 MHz would be required. ADCs of this type have good resolutions and quite
wide ranges. They are more complex than some other designs.
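The four clock cycles walked through above can be replayed with a short simulation (a sketch; real SAR converters use charge-redistribution DACs rather than ideal arithmetic):

```python
# Successive approximation: trial-set each bit from the MSB down and keep
# it whenever the internal DAC's output does not exceed the input.
def sar_adc(vin, vref, bits):
    code = 0
    for bit in range(bits - 1, -1, -1):
        trial = code | (1 << bit)
        if vin >= vref * trial / 2**bits:   # comparator decision
            code = trial
    return code

# The 60 V input / 100 V reference example: 50, 75, 62.5, 56.25 V trials.
print(format(sar_adc(60, 100, 4), '04b'))   # -> 1001
```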
A ramp-compare ADC (also called integrating, dual-slope or multi-slope
ADC) produces a saw-tooth signal that ramps up, then quickly falls to zero. When
the ramp starts, a timer starts counting. When the ramp voltage matches the input,
a comparator fires, and the timer's value is recorded. Timed ramp converters
require the least number of transistors. The ramp time is sensitive to temperature
because the circuit generating the ramp is often just some simple oscillator. There
are two solutions: use a clocked counter driving a DAC and then use the
comparator to preserve the counter's value, or calibrate the timed ramp. A special
advantage of the ramp-compare system is that comparing a second signal just
requires another comparator, and another register to store the voltage value. A
very simple (non-linear) ramp-converter can be implemented with a
microcontroller and one resistor and capacitor. Conversely, a filled capacitor
can be taken from an integrator, time-to-amplitude converter, phase detector,
sample-and-hold circuit, or peak-and-hold circuit and discharged. This has the
advantage that a slow comparator cannot be disturbed by fast input changes.
A delta-encoded ADC has an up-down counter that feeds a digital to analog
converter (DAC). The input signal and the DAC both go to a comparator. The
comparator controls the counter. The circuit uses negative feedback from the
comparator to adjust the counter until the DAC's output is close enough to the
input signal. The number is read from the counter. Delta converters have very
wide ranges, and high resolution, but the conversion time is dependent on the
input signal level, though it will always have a guaranteed worst-case. Delta
converters are often very good choices to read real-world signals. Most signals
from physical systems do not change abruptly. Some converters combine the delta
and successive approximation approaches; this works especially well when high
frequencies are known to be small in magnitude.
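The tracking behaviour can be sketched as follows (an illustrative model; the bit width and scaling are assumptions):

```python
# Delta-encoded ADC: an up/down counter chases the input via an ideal DAC
# (counter * lsb) and a comparator; conversion time grows with the step size.
def delta_track(samples, bits=8, vref=1.0):
    counter, codes = 0, []
    lsb = vref / 2**bits
    for vin in samples:
        while counter * lsb < vin and counter < 2**bits - 1:
            counter += 1                     # ramp up toward the input
        while counter * lsb > vin + lsb and counter > 0:
            counter -= 1                     # ramp back down
        codes.append(counter)
    return codes

print(delta_track([0.25, 0.26, 0.5]))   # small moves settle in a few steps
```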
A pipeline ADC (also called subranging quantizer) uses two or more steps of
subranging. First, a coarse conversion is done. In a second step, the difference to
the input signal is determined with a digital to analog converter (DAC). This
difference is then converted finer, and the results are combined in a last step. This
can be considered a refinement of the successive approximation ADC wherein the
feedback reference signal consists of the interim conversion of a whole range of
bits (for example, four bits) rather than just the next-most-significant bit. By
combining the merits of the successive approximation and flash ADCs this type is
fast, has a high resolution, and only requires a small die size.
A Sigma-Delta ADC (also known as a Delta-Sigma ADC) oversamples the
desired signal by a large factor and filters the desired signal band. Generally a
smaller number of bits than required are converted using a flash ADC after the
filter. The resulting signal, along with the error generated by the discrete levels of
the Flash, is fed back and subtracted from the input to the filter. This negative
feedback has the effect of noise shaping the error due to the Flash so that it does
not appear in the desired signal frequencies. A digital filter (decimation filter)
follows the ADC which reduces the sampling rate, filters off unwanted noise
signal and increases the resolution of the output. (sigma-delta modulation, also
called delta-sigma modulation)
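The noise-shaping loop can be illustrated with a first-order modulator and a 1-bit quantizer (a minimal sketch; production designs use higher-order loops and proper decimation filters):

```python
# First-order sigma-delta modulator: the integrator accumulates the error
# between the input and the fed-back output, so the density of 1s in the
# bitstream equals the input level; averaging (crude decimation) recovers it.
def sigma_delta(vin, n_samples=1000):
    integrator, bits = 0.0, []
    for _ in range(n_samples):
        feedback = 1.0 if bits and bits[-1] else 0.0
        integrator += vin - feedback
        bits.append(1 if integrator > 0 else 0)
    return sum(bits) / n_samples

print(sigma_delta(0.3))   # close to 0.30 for a DC input of 0.3
```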
A Time-Interleaved ADC uses M parallel ADCs, each of which samples the data
on every Mth cycle of the effective sample clock. The result is that the sample
rate is increased M times compared with what each individual ADC can manage.
In practice the individual differences between the M ADCs degrade the overall
performance, reducing the SFDR. However, technologies exist to correct for
these time-interleaving mismatch errors.
There can be other ADCs that use a combination of electronics and other
technologies:
A Time-stretch analog-to-digital converter (TS-ADC) digitizes a very wide
bandwidth analog signal, that cannot be digitized by a conventional electronic
ADC, by time-stretching the signal prior to digitization. It commonly uses a
photonic preprocessor frontend to time-stretch the signal, which effectively slows
the signal down in time and compresses its bandwidth. As a result, an electronic
backend ADC, that would have been too slow to capture the original signal, can
now capture this slowed down signal. For continuous capture of the signal, the
frontend also divides the signal into multiple segments in addition to time-
stretching. Each segment is individually digitized by a separate electronic ADC.
Finally, a digital signal processor rearranges the samples and removes any
distortions added by the frontend to yield the binary data that is the digital
representation of the original analog signal.
Commercial analog-to-digital converters
These are usually integrated circuits.
Most converters sample with 6 to 24 bits of resolution, and produce fewer than 1
megasample per second. Thermal noise generated by passive components such
as resistors masks the measurement when higher resolution is desired. For
audio applications and in room temperatures, such noise is usually a little less
than 1 μV (microvolt) of white noise. If the Most Significant Bit corresponds to a
standard 2 volts of output signal, this translates to a noise-limited performance
that is less than 20~21 bits, and obviates the need for any dithering. Mega- and
gigasample per second converters are available, though (Feb 2002).
Megasample converters are required in digital video cameras, video capture
cards, and TV tuner cards to convert full-speed analog video to digital video files.
Commercial converters usually have ±0.5 to ±1.5 LSB error in their output.
In many cases the most expensive part of an integrated circuit is the pins,
because they make the package larger, and each pin has to be connected to the
integrated circuit's silicon. To save pins, it's common for slow ADCs to send their
data one bit at a time over a serial interface to the computer, with the next bit
coming out when a clock signal changes state, say from zero to 5V. This saves
quite a few pins on the ADC package, and in many cases, does not make the
overall design any more complex. (Even microprocessors which use memory-
mapped IO only need a few bits of a port to implement a serial bus to an ADC.)
Commercial ADCs often have several inputs that feed the same converter,
usually through an analog multiplexer. Different models of ADC may include
sample and hold circuits, instrumentation amplifiers or differential inputs, where
the quantity measured is the difference between two voltages.
Application to music recording
ADCs are integral to current music reproduction technology. Since much music
production is done on computers, when an analog recording is used, an ADC is
needed to create the PCM data stream that goes onto a compact disc.
The current crop of AD converters utilized in music can sample at rates up to 192
kilohertz. Many people in the business consider this overkill and
pure marketing hype, due to the Nyquist-Shannon sampling theorem. Simply put,
they say the analog waveform does not have enough information in it
to necessitate such high sampling rates, and typical recording techniques for
high-fidelity audio are usually sampled at either 44.1 kHz (the standard for CD) or
48 kHz (commonly used for radio/TV broadcast applications). However, this kind
of bandwidth headroom allows the use of cheaper or faster anti-aliasing filters of
less severe filtering slopes. The proponents of oversampling assert that such
shallower anti-aliasing filters produce less deleterious effects on sound quality,
exactly because of their gentler slopes. Others prefer entirely filterless AD
conversion, arguing that aliasing is less detrimental to sound perception than pre-
conversion brickwall filtering. Considerable literature exists on these matters, but
commercial considerations often play a significant role. Most high-profile
recording studios record in 24-bit/176.4 or 192 kHz PCM or in DSD formats,
and then downsample or decimate the signal for Red-Book CD production.
AD converters are used virtually everywhere where an analog signal has to be
processed, stored, or transported in digital form. Fast video ADCs are used, for
example, in TV tuner cards. Slow on-chip 8, 10, 12, or 16 bit ADCs are common
in microcontrollers. Very fast ADCs are needed in digital oscilloscopes, and are
crucial for new applications like software defined radio.
Schematic of a 2-to-1 Multiplexer. It can be equated to a controlled switch.
Schematic of a 1-to-2 Demultiplexer. Like a multiplexer, it can be equated to a controlled
switch.
In electronics, a multiplexer or mux (occasionally the term muldex or muldem
is also found, for a combination multiplexer-demultiplexer) is a device that
performs multiplexing; it selects one of many analog or digital input signals and
outputs that into a single line. A multiplexer of 2^n inputs has n select bits, which
are used to select which input line to send to the output.
An electronic multiplexer makes it possible for several signals to share one
expensive device or other resource, for example one A/D converter or one
communication line, instead of having one device per input signal.
In electronics, a demultiplexer (or demux) is a device taking a single input
signal and selecting one of many data-output-lines, which is connected to the
single input. A multiplexer is often used with a complementary demultiplexer on
the receiving end.
An electronic multiplexer can be considered as a multiple-input, single-output
switch, and a demultiplexer as a single-input, multiple-output switch. The
schematic symbol for a multiplexer is an isosceles trapezoid with the longer
parallel side containing the input pins and the short parallel side containing the
output pin. The schematic on the right shows a 2-to-1 multiplexer on the left and
an equivalent switch on the right. The sel wire connects the desired input to the
output.
In telecommunications, a multiplexer is a device that combines several input
information signals into one output signal, which carries several communication
channels, by means of some multiplex technique. A demultiplexer is in this
context a device taking a single input signal that carries many channels and
separates those over multiple output signals.
In telecommunications and signal processing, an analog time division multiplexer
(TDM MUX) may take several samples of separate analogue signals and
combine them into one pulse amplitude modulated (PAM) wide-band analogue
signal. Alternatively, a digital TDM multiplexer may combine a limited number of
constant bit rate digital data streams into one data stream of a higher data rate,
by forming data frames consisting of one timeslot per channel.
In telecommunications, computer networks and digital video, a statistical
multiplexer may combine several variable bit rate data streams into one constant
bandwidth signal, for example by means of packet mode communication. An
inverse multiplexer may utilize several communication channels for transferring
one signal.
The basic function of a multiplexer: combining multiple inputs into a single data stream.
On the receiving side, a demultiplexer splits the single data stream back into the
original signals.
demultiplexer (or demux) together over a single channel (by connecting the
multiplexer's single output to the demultiplexer's single input). The image to the
right demonstrates this. In this case, the cost of implementing separate channels
for each data source is more expensive than the cost and inconvenience of
providing the multiplexing/demultiplexing functions. In a physical analogy,
consider the merging behaviour of commuters crossing a narrow bridge; vehicles
will take turns using the few available lanes. Upon reaching the end of the bridge
they will separate into separate routes to their destinations.
At the receiving end of the data link a complementary demultiplexer is normally
required to break the single data stream back down into the original streams. In
some cases, the far end system may have more functionality than a simple
demultiplexer and so, whilst the demultiplexing still exists logically, it may never
actually happen physically. This would be typical where a multiplexer serves a
number of IP network users and then feeds directly into a router which
immediately reads the content of the entire link into its routing processor and
then does the demultiplexing in memory from where it will be converted directly
into IP packets.
It is usual to combine a multiplexer and a demultiplexer together into one piece of
equipment and simply refer to the whole thing as a "multiplexer". Both pieces of
equipment are needed at both ends of a transmission link because most
communications systems transmit in both directions.
A real world example is the creation of telemetry for transmission from the
computer/instrumentation system of a satellite, space craft or other remote
vehicle to a ground system.
In analog circuit design, a multiplexer is a special type of analog switch that
connects one signal selected from several inputs to a single output.
In digital circuit design, the selector wires are of digital value. In the case of a 2-
to-1 multiplexer, a logic value of 0 would connect I0 to the output while a logic
value of 1 would connect I1 to the output. In larger multiplexers, the number of
selector pins is equal to ⌈log2(n)⌉, where n is the number of inputs.
For example, 9 to 16 inputs would require no less than 4 selector pins and 17 to
32 inputs would require no less than 5 selector pins. The binary value expressed
on these selector pins determines the selected input pin.
A 2-to-1 multiplexer, where A and B are the two inputs, S is the selector input,
and Z is the output, has the boolean equation:
Z = (A AND NOT S) OR (B AND S)
A 2-to-1 mux
Which can be expressed as a truth table:
S A B Z
0 1 1 1
0 1 0 1
0 0 1 0
0 0 0 0
1 1 1 1
1 1 0 0
1 0 1 1
1 0 0 0
This truth table should make it quite clear that when S = 0 then Z = A but when S
= 1 then Z = B. A straightforward realization of this 2-to-1 multiplexer would need
2 AND gates, an OR gate, and a NOT gate.
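The gate-level realization can be checked exhaustively in a few lines:

```python
# 2-to-1 mux from the gates named above: Z = (A AND NOT S) OR (B AND S).
def mux2(a, b, s):
    return int((a and not s) or (b and s))

# Exhaustive check against the truth table: S = 0 selects A, S = 1 selects B.
for s in (0, 1):
    for a in (0, 1):
        for b in (0, 1):
            assert mux2(a, b, s) == (b if s else a)
print("truth table verified")
```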
Larger multiplexers are also common and, as stated above, require ⌈log2(n)⌉
selector pins for n inputs. Other common sizes are 4-to-1, 8-to-1, and 16-to-1.
Since digital logic uses binary values, powers of 2 are used (4, 8, 16) to
maximally control a number of inputs for the given number of selector inputs.
The boolean equation for a 4-to-1 multiplexer, with inputs I0 to I3 and selector
bits S1 and S0, is:
Z = (I0 AND NOT S1 AND NOT S0) OR (I1 AND NOT S1 AND S0)
    OR (I2 AND S1 AND NOT S0) OR (I3 AND S1 AND S0)
Two realizations for creating a 4-to-1 multiplexer are shown below:
These are two realizations of a 4-to-1 multiplexer:
one realized from a decoder, AND gates, and an OR gate
one realized from 3-state buffers and AND gates (the AND gates are acting as the decoder)
Note that the subscripts on the In inputs indicate the decimal value of the binary control
inputs at which that input is let through.
Larger multiplexers can be constructed by using smaller multiplexers by chaining
them together. For example, an 8-to-1 multiplexer can be made with two 4-to-1
and one 2-to-1 multiplexer. The two 4-to-1 multiplexer outputs are fed into the
2-to-1, with the selector pins on the 4-to-1s tied in parallel, giving a total of
three selector inputs, which is equivalent to an 8-to-1 multiplexer.
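The chaining scheme can be sketched directly (function names are illustrative):

```python
# 8-to-1 mux from two 4-to-1s and one 2-to-1: s1/s0 drive both 4-to-1s
# in parallel, and s2 (the third selector) picks between their outputs.
def mux4(inputs, s1, s0):
    return inputs[2 * s1 + s0]

def mux8(inputs, s2, s1, s0):
    low = mux4(inputs[:4], s1, s0)    # covers inputs 0-3
    high = mux4(inputs[4:], s1, s0)   # covers inputs 4-7
    return high if s2 else low        # the final 2-to-1 stage

data = [10, 11, 12, 13, 14, 15, 16, 17]
print(mux8(data, 1, 0, 1))   # selector 101 (decimal 5) -> input 5 -> 15
```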
List of ICs which provide multiplexing
The 7400 series has several ICs that contain multiplexer(s):
S.No. IC No. Function Output State
1 74157 Quad- 2:1 MUX Output same as input
2 74158 Quad- 2:1 MUX Output is inverted input
3 74153 Dual- 4:1 MUX Output same as input
4 74352 Dual- 4:1 MUX Output is inverted input
5 74151A 8:1 MUX Both outputs available, i.e. complementary outputs
6 74151 8:1 MUX Output is inverted input
7 74150 16:1 MUX Output is inverted input
Demultiplexers take one data input and a number of selection inputs, and they
have several outputs. They forward the data input to one of the outputs
depending on the values of the selection inputs. Demultiplexers are sometimes
convenient for designing general purpose logic, because if the demultiplexer's
input is always true, the demultiplexer acts as a decoder. This means that any
function of the selection bits can be constructed by logically OR-ing the correct
set of outputs.
Example: A Single Bit 1-to-4 Line Demultiplexer
List of ICs which provide demultiplexing
The 7400 series has several ICs that contain demultiplexer(s):
S.No. IC No. Function Output State
1 74139 Dual 1:4 DEMUX Output is inverted input
3 74156 Dual- 1:4 DEMUX Output is open collector
4 74138 1:8 DEMUX Output is inverted input
5 74154 1:16 DEMUX Output is inverted input
6 74159 1:16 DEMUX Output is open collector and same as input
Data acquisition is the sampling of the real world to generate data that can be
manipulated by a computer. Sometimes abbreviated DAQ or DAS, data
acquisition typically involves acquisition of signals and waveforms and
processing the signals to obtain desired information. The components of data
acquisition systems include appropriate sensors that convert the measured
parameter to an electrical signal, and signal conditioning circuitry, after which
the signal can be acquired by data acquisition hardware.
Acquired data are displayed, analyzed, and stored on a computer, either using
vendor-supplied software or with custom displays and controls developed in
general-purpose programming languages such as BASIC, C, Fortran, Java, Lisp,
or Pascal. Specialized programming environments used for data acquisition
include EPICS, used to build large-scale data acquisition systems; LabVIEW,
which offers a graphical programming environment optimized for data
acquisition; and MATLAB, which provides a programming language as well as
built-in graphical tools and libraries for data acquisition and analysis.