					ASEN 3300 Aerospace Electronics and Communications                                                            Spring 2009


Lab 6 - Digital Signals and A/D and D/A Conversion
Assigned:         Friday, 20 Feb 2009
Prelab Quiz:      Tuesday, 24 Feb 2009, 9:00 – 9:10 am
Report Due:       Friday, 27 Feb 2009 3:00 pm (through CU Learn)

OBJECTIVES
   Review the binary number system and how it affects the digital representation of signals in a computer.
   Identify the key characteristics of analog to digital (A/D) and digital to analog (D/A) conversion for data
    acquisition and command output, to and from a computer.
   Understand quantization and its effect on measurement range and precision.
   Understand sampling frequency requirements and aliasing.
   Gain experience using LabView and the oscilloscope to evaluate the effects of A/D and D/A on signal structure.

READING
        How_do_ADCs_work.pdf within the CU Learn Lab 6 module
        Horowitz/Hill; Art of Electronics; Section 9.15 – 9.25 (pp. 612-640)


BACKGROUND
So far in the course we have worked primarily with analog signals – that is, the voltages and currents were continuous in
both time and amplitude. Historically, measurements from analog instruments were often recorded directly by devices
such as a strip chart recorder or tape recorder. While analog recorders are still used for some types of measurements,
today almost all measurement signals eventually make their way to a computer or other digital device. Even the
benchtop equipment such as the scope and function generators that we are using in ITLL are actually digital devices that
are designed to behave like analog instruments.

In this lab we begin to look at digital representations of signals in a computer, i.e. as a sequence of 1's and 0's, and how
to convert analog signals to and from such a representation. The device that converts an analog voltage to a numerical
(digital) value is called an analog to digital (A/D) converter. The device that produces an analog voltage level from a
numerical (digital) value is called a digital to analog (D/A) converter. These functions are provided on a data acquisition
card in the ITLL computers (MIO-16). The front panel of the ITLL lab stations provides convenient connections to
access the various A/D and D/A ports on the I/O card. The program LabView is used to control the acquisition of
signals from the A/D ports, the processing of these signals by sampling and filtering, and the output of the resulting
signals to the D/A ports.

We will use a LabView Virtual Instrument (“Digital Audio Simulator.llb”) written by Bradley Dunkin to explore two of
the key features of A/D and D/A conversion – sampling frequency and quantization. In particular we will take an analog
voltage from the function generator, put it through an A/D converter to the computer, write it back out through the D/A
converter, and look at the resulting analog signals on the oscilloscope. By generating frequencies in the audio range we
will also be able to listen to the effects of quantization and sampling using headphones. In this experiment you will be
able to explore the effects of the A/D and D/A sampling and quantization parameters on the signal characteristics using a
variety of different tools. This experience should help you make good choices when selecting analog to digital interface
components for aerospace systems.

Quantization
An analog signal has a continuously varying voltage, that is, the voltage can take any value within the limits set by the
power supply. Thus, given a voltmeter with enough precision, there is no limit to the resolution with which one can



Chu/Palo ac38bf03-b25f-4400-94dc-4d4c56c4b46d.doc


measure the voltage. A digital representation of a number has a finite resolution determined by the number of bits
assigned to store the number and the range of numbers that must be represented. The number of values that can be
represented with n bits is 2^n. The resolution is determined by the value associated with the least significant bit (LSB)
and the range is determined by the resolution times the number of possible values. As an example, imagine that we
assign a value of 1 mV to the LSB. A three-bit binary number can store 8 values (0 to 7), so the range of positive
voltages which can be represented is 0 mV to 7 mV (0 = 0mV–1mV, 1 = 1mV–2mV, …, 7 = 6mV–7mV).
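The 3-bit, 1 mV-per-LSB example above can be sketched in a few lines of Python (a hypothetical helper for illustration, not part of the lab software):

```python
# Sketch: list the voltages an n-bit unsigned code can represent,
# given the voltage assigned to the least significant bit (LSB).
def representable_voltages(n_bits, lsb_volts):
    return [code * lsb_volts for code in range(2 ** n_bits)]

levels = representable_voltages(3, 1e-3)   # 3 bits, 1 mV LSB
print(len(levels))                         # 8 values (codes 0-7)
print(levels[-1])                          # top of range, approx 0.007 V (7 mV)
```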

Any A/D or D/A converter has a finite number of values or levels that it can provide, which depends on the number of
binary digits (bits) in its digital representation. During A/D conversion, any voltage that falls in between two LSB digital
representation values (i.e. a voltage “bin”) is assigned the same digital value (often “rounded down”). The bin size is
determined by the input voltage range and the number of digital values available. The lab station has 12 bit A/D and D/A
converters with a voltage range of –10V to +10V. This means that they can represent up to 2^12 = 4096 different voltage
levels within the specified range. The resulting quantization or LSB voltage is 20V/2^12 bins = 4.88 mV/bin. As a result,
at the full 12 bit resolution of the A/D or D/A, a 2 Vpp sampled sinusoid on the oscilloscope can appear to be an analog
signal (unless you zoom in closely, where the 4.88 mV steps can be seen).
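The 4.88 mV/bin figure follows directly from the voltage span divided by the number of bins; a minimal sketch (hypothetical helper):

```python
# Sketch: LSB (quantization step) of an ideal converter spanning v_min..v_max.
def lsb_voltage(v_min, v_max, n_bits):
    return (v_max - v_min) / 2 ** n_bits

step = lsb_voltage(-10.0, 10.0, 12)     # lab-station 12-bit converters
print(f"{step * 1e3:.2f} mV/bin")       # prints "4.88 mV/bin"
```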

A/D converters used in many instruments on spacecraft and in communication systems use far fewer bits. A sun sensor
might have 6-8 bits and a GPS receiver typically uses 1-2 bit A/D conversion. In contrast, modern digital audio systems
use A/D and D/A conversion of up to 22 bits. Getting more bits in the A/D converter is more expensive and requires
more data storage and more processing time. Typical designs use the minimum number of bits required to get the
full range of values at the smallest (worst) acceptable resolution. A 2-bit A/D converter has only 4 bins in which to
assign the input voltages. So, for an input voltage range of –10V to +10V, the following table shows how the voltages
would be assigned. In this case, the resolution of the measurement would only be 5V. Similarly, if a D/A converter with
only 4 discrete values (i.e. 2 bits) were used to provide an analog command signal in the same voltage range to a motor
for example, the output voltage would change in steps of 5V.

                              Input Voltage        Digital Value
                           –10V < Vin < –5V             00
                            -5V < Vin < 0V              01
                             0V < Vin < 5V              10
                            5V < Vin < 10V              11
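The bin assignment in the table can be sketched as a small quantizer function ("rounded down" here means truncation to the bin containing the input; this is an illustration, not the actual converter logic):

```python
# Sketch: 2-bit quantizer over a -10 V..+10 V input range (4 bins of 5 V each).
def quantize_2bit(vin, v_min=-10.0, v_max=10.0):
    n_bins = 4                                   # 2 bits -> 2**2 bins
    bin_size = (v_max - v_min) / n_bins          # 5 V per bin
    code = int((vin - v_min) // bin_size)
    return min(max(code, 0), n_bins - 1)         # clamp to valid codes

print(format(quantize_2bit(-7.0), "02b"))   # prints "00"
print(format(quantize_2bit(3.0), "02b"))    # prints "10"
```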

There are several important consequences of finite quantization on measurements using an A/D converter, and control or
signal reconstruction using a D/A converter:
     limited measurement / command resolution and range
     increased noise
     introduction of high frequency signal components in the analog output



Sampling
The numerical values stored in a computer are not only discrete in value, but can also change only at specific discrete
times. The rate at which an A/D converter takes a reading of the input voltage is known as the sample frequency. In
between samples, the digital representation is completely oblivious to changes in the input signal. Proper selection of
the sampling frequency therefore depends on the frequency content of the signal to be measured.






The sampling theorem (Shannon-Whittaker sampling theorem) says that a continuous signal may be perfectly
reconstructed if it is sampled (with high amplitude resolution) at a rate of at least twice the frequency of the highest
frequency component of the signal. Typically, signals are sampled at 3-10 times the highest frequency and this process
is called “over-sampling”. For example, a CD recording is sampled at 44.1 kHz even though the typical audio range goes
up to only about 15 kHz.

Proper selection of a sampling frequency is extremely important because it limits the spectral content (or range of
frequency components) of the input waveform that can be correctly captured. For a given sampling frequency (fs), the
Nyquist frequency is defined as fs/2 (half the sampling frequency). Spectral components of an input signal that are
higher than the Nyquist frequency are not simply lost in the A/D process, but actually show up at other frequencies
through a process called aliasing. Aliased signals are problematic because they can change the apparent magnitude of
real lower frequency signal components or appear at other frequencies altogether. Any subsequent analysis of the
sampled signal will then draw incorrect conclusions about the measured analog signal.
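For a pure sinusoid, the aliased (apparent) frequency can be computed by folding the input frequency back into the 0 to fs/2 band; a minimal sketch (hypothetical helper, not part of the lab VI):

```python
# Sketch: apparent frequency of a sinusoid at f after sampling at fs.
# Components above the Nyquist frequency fs/2 fold back into 0..fs/2.
def apparent_frequency(f, fs):
    f_mod = f % fs
    return f_mod if f_mod <= fs / 2 else fs - f_mod

fs = 5e3
for f in (1e3, 2e3, 3e3, 4e3, 6e3):
    print(f"{f/1e3:g} kHz -> {apparent_frequency(f, fs)/1e3:g} kHz")
```

A 3 kHz input sampled at 5 kHz appears at 2 kHz, matching the folding description above.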




For a given sampling frequency fs, the spectrum of a sampled signal is given by replicating the original spectrum about
multiples of the sample rate. Here, it is necessary to use the two-sided spectrum, which has spectral lines symmetrically
placed about zero frequency. For example, an analog sinusoid at 4 kHz has spectral lines in the two-sided spectrum at +/-
4kHz. If this signal is sampled at fs=10kHz, the sampled signal has spectral lines at +/-4kHz, +/-6kHz, +/-14kHz, +/-
16kHz, etc. (Draw a picture to see this clearly). Now if we send this signal to a D/A converter, and low-pass filter the
resulting analog signal so that only the +/- 4kHz signal survives, we have reconstructed the original signal with no alias
errors. The sampling theorem says that the highest frequency signal that can be accurately represented is fs/2 = 5kHz,
which is satisfied in this case. If instead the analog sinusoid has a frequency of 5.5kHz, i.e. 0.5kHz above fs/2, the
sampled signal will contain spectral lines at +/- 5.5 kHz, +/- 4.5 kHz, +/- 15.5 kHz, +/- 14.5 kHz, etc. If this signal is
D/A converted and low pass filtered to remove all spectral lines above 5.5 kHz, we will recover the original 5.5 kHz
signal, plus an imposter (aliased signal) at 4.5 kHz.
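The spectral-line bookkeeping in these examples can be sketched numerically (a hypothetical helper that lists the two-sided line locations of a sampled sinusoid, replicated about multiples of fs):

```python
# Sketch: two-sided spectral lines of a sinusoid at f0 sampled at fs,
# replicated about k*fs for k = -n..n.
def sampled_spectrum_lines(f0, fs, n_replicas=1):
    lines = set()
    for k in range(-n_replicas, n_replicas + 1):
        lines.add(k * fs + f0)
        lines.add(k * fs - f0)
    return sorted(lines)

# 4 kHz sinusoid sampled at 10 kHz: lines at +/-4, +/-6, +/-14 kHz
print([f / 1e3 for f in sampled_spectrum_lines(4e3, 10e3)])
```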

For most (e.g. non-sinusoidal) signals, the only means to avoid aliasing is to apply a low pass filter to analog signals
prior to A/D conversion, so that the sampled signals contain very little spectral amplitude at frequencies of fs/2 and
higher. This will dramatically reduce or eliminate the higher frequency components before they have a chance to distort
the digital representation. The LabView vi actually has this feature built into it, so if you have time and interest, you are
welcome to explore it as well.

One final characteristic associated with sampling is that the digital signal representation has no value in between the
signal samples. This must be addressed for both the A/D and D/A converters. On the A/D side, the signal sampling






process is not instantaneous, so it is important to consider what might happen to the input analog voltage while the
sample is in process. The typical approach to this is to construct a “sample and hold” circuit that latches or freezes the
input signal during the sample. On the D/A side, the analog voltage is only changed when the digital values are updated
(i.e. at the sample rate), so that the analog output is constant in between samples. Thus the analog output voltage will be
a stair-step, with step width determined by the sample period and minimum step height determined by the quantization
resolution (number of bits) in the D/A. These discrete steps introduce higher frequency components in the reconstructed
analog signal, which may be eliminated through the use of a low pass filter at the output. This too is implemented in the
“Digital Audio Simulator.vi”, so you are welcome to try it out for yourself.
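The stair-step D/A output described above is a zero-order hold; a minimal sketch (illustration only, with the step width set by the number of output points per sample period):

```python
# Sketch: zero-order-hold reconstruction - each digital sample is held
# constant for one sample period, producing the stair-step analog output.
def zero_order_hold(samples, points_per_step):
    out = []
    for s in samples:
        out.extend([s] * points_per_step)    # one flat step per sample
    return out

steps = zero_order_hold([0.0, 0.5, 1.0, 0.5], 4)
print(steps[:8])   # prints "[0.0, 0.0, 0.0, 0.0, 0.5, 0.5, 0.5, 0.5]"
```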

PRELAB (25pts)
1.   Range and quantization
     a) Express the numbers 99 and 999 in binary.
     b) How many binary digits (bits) are required to represent a number in the range of –1V to +1V with a resolution
        of 0.05V?
     c) How many voltage levels are generated by a D/A converter with 12 bits, 4 bits, 2 bits, 1 bit?
     d) Sketch or plot two cycles of a 1kHz analog sine wave signal with Vpp=2 V. Sketch or plot two cycles of a 4, 2,
        and 1 bit digital representation of this signal. Assume that the total voltage range of the A/D conversion is –1V to
        +1V, and the sampling frequency is 100 times higher than the input frequency.

2.   Sampling
     a) What is the minimum sampling frequency for a pure sine wave input at 3kHz?
     b) For a sampling frequency of 5kHz, what is the Nyquist frequency? After being sampled at 5kHz, at what
          frequencies would the following input sinusoids appear? 1kHz, 2kHz, 2.5kHz, 3kHz, 4kHz, 5kHz, 6kHz
     c) On a common time axis sketch or plot 1 ms worth of analog sine wave signals at 1kHz and 3kHz with Vpp = 2 V.
         Also show the digital representation of these signals sampled at 5kHz. (Assume that you have 12 bit
         quantization.)
3.   Resolution
     Suppose you have an accelerometer with a sensitivity of 100 mV/g, with an output voltage range of 0 to 5V. You
     wish to measure vibrations of the solar arrays on a satellite, with a resolution of 0.1 g. Luckily you have an A/D
     converter with a range of 0 to 5V.
     a) What is the required voltage resolution to be able to measure 0.1 g?
     b) How many voltage levels or bins do you need to represent the full output voltage range at the required
         resolution?
     c) How many bits are required in the A/D converter to accomplish this?
     d) What acceleration is represented by the LSB of the A/D converter you chose?



EXPERIMENT (25pts)
Setup:
    a) Set up the function generator to produce a sine wave at 1kHz with a zero DC offset and 2Vpp amplitude.
       Since we are using the function generator as input to the oscilloscope, make sure that the function generator is
       set to “High Z”.
    b) Use a “T” and two BNC cables to connect the output of the function generator to channel 1 of the oscilloscope
       and to “ACH1” (not ACH0) analog to digital converter on the top left (MIO-16 section) of the lab station I/O
       panel.






     c) Use a BNC cable to connect “DAC1” (this is the digital to analog converter output) on your lab station I/O
        panel to channel 2 of the oscilloscope. Note: DAC0 puts out the raw, unprocessed waveform, while DAC1 puts
        out the processed waveform.
     d) Turn on both channel 1 and channel 2 of the scope. Set the volts/div the same for both and align the ground
        levels for both channels.
     e) Open H:\ITLL Documentation\ITLL Modules\Digital Audio Simulator\Digital Audio Simulator.vi. The front
        panel of the Digital Audio simulator will open. Refer to the Appendix at the bottom of this document for
        instructions on using the VI. To run the loaded VI, press the “Right Arrow” (→) button in the upper left corner
        of the virtual instrument.
     f) Make a detailed block diagram sketch of your equipment set up in your lab notebook. Please note which way
        the signals are going on the diagram, i.e., the output of the function generator goes into the Analog to Digital
        Converter A/D (ACH1 – top row / left side of input connectors), and the output of the Digital to Analog
        Converter D/A (DAC1) goes into the scope.
     g) For most of these exercises you will view the analog and digital signals on the Labview scope and the benchtop
        scope. Signal spectra will be viewed on the Labview screen. You may also use the FFT module on the
        benchtop scope if you like, but this is not required. If you decide to look at the signals with the FFT module,
        make sure to reset the time axes on the scope to show about 50-100 cycles of the waveform, as you have done
        before.

1.   Quantization
     REMEMBER:
       Every time you change the input signal, press the “Acquire Waveform” button in the VI “Signal Input”
         window (yellow).
       Every time you change the simulation parameters, press the “Output Continuous” button on the
         “Simulation Parameters” window (green).
     a) Set up the function generator to produce a sine wave at 1kHz with a zero DC offset and 2Vpp amplitude. View
        the signal on channel 1 of the scope and sketch, noting the period and amplitude of the signal.
     b) On the VI, under “Signal Input”, set the Input Scan Rate to 100kHz, Number of Points to 20,000, and Input
        Limits to +/- 1V.
     c) On the VI, under “Simulation Parameters”, set the Sampling Rate to 100kHz, and the Bit Resolution to 12.
        Set the vi plot axes as follows (default):
        Time axis 0 to 2 ms, voltage axis –2 to +2 V, frequency axis 0 to 5 kHz, and amplitude axis 1e-6 to 1 Vrms.
     d) Sketch or print out the vi scope and spectrum analyzer displays for the Raw and Processed Waveforms.
        Compare the results and note any differences in frequencies or amplitudes. (Note: you can press “Alt-PrintScreen”
        on the keyboard to copy the screen display to the clipboard and paste it into a Word document – this looks
        good and saves a lot of sketching time.)
     e) Sketch and compare the input signal and output signals on the benchtop scope (channel 1 & 2). This will work
        better if you freeze the screen by hitting start and then stop when the sweep is done. Note Vrms and Vaverage for
        input and output signal.
     f) Reduce the bit resolution to 4, 2, and 1 bit and sketch what you observe in the processed waveform on the vi
        scope or on the benchtop scope. Note that each time you change the “Simulation Parameters”, you must click
        the “Output Continuous” button to apply the change to the processed waveform. Look at the benchtop scope
        for comparison – in some cases it may be easier to see your results here. Again use the start and stop buttons to
        freeze your image on the screen. Make sure to note the number of different voltage levels in the signal, the
        position of the signal relative to the baseline (0V), and the RMS and average Voltage of both input and output
        signal.






     g) Set up the function generator to produce a sine wave at 1kHz with a zero DC offset and 1Vpp amplitude.
        Repeat the process (collected input and viewing/sketching output) with 4 bit resolution.
     h) Set up the function generator to produce a sine wave at 1kHz with a zero DC offset and 4Vpp amplitude.
        Repeat the process (collected input and viewing/sketching output) with 4 bit resolution.
     i) Return the function generator to a sine wave at 1kHz with a zero DC offset and 2Vpp amplitude (full scale
        range). Repeat the process (collected input and viewing/sketching output) with 4 bit resolution. Disconnect
        the D/A output from the benchtop scope and plug in the headphones. Note the location of the volume control,
        and to avoid hurting your eardrums, turn on the signal before you put the headphones on your ears. Compare
        the tones you hear for 12, 4, 2, and 1 bit quantization.

2.   Sampling Frequency and Aliasing
     a) Return the equipment to the original configuration, i.e. with the benchtop scope measuring the D/A output, and
        12 bit quantization.
     b) On the LabView vi, set the sampling rate to 5kHz on the “Simulation Parameters” window.
     c) Set up the function generator to produce a sine wave at 1kHz with a zero DC offset and 2Vpp amplitude.
        Acquire the raw waveform on the LabView vi, set the output to continuous.
     d) Sketch or print out the processed waveforms on the vi scope and spectrum analyzer (or use “Alt-Print Screen”
        to copy / paste to clipboard). Also look at the benchtop scope for comparison – in some cases it may be easier
        to see your results here. Again use the start and stop buttons to freeze your image on the screen. Make sure to
        note the frequency and amplitude of the signal peaks on the spectrum analyzer, and the number of different
        discrete time steps in the signal, and the position of the signal relative to the baseline (0V) shown on the scope.
     e) Step the function generator frequency from 1kHz to 6kHz in steps of 1kHz. Each time you change the function
        generator output, you must click the “Acquire Waveform” button on the LabView vi, to capture a new data
        sample, then click the “Output Continuous” button to generate a new processed waveform. For each input
        frequency, note the characteristics of the raw and processed waveforms including the amplitude and frequency
        of the signal components shown on the vi spectrum analyzer and the number of discrete time steps per cycle of
        the signals. You may do this with a series of sketches or a table of values.
     f) Disconnect the D/A output from the benchtop scope and plug in the headphones. Note the location of the
        volume control, and to avoid hurting your eardrums, turn on the signal before you put the headphones on your
        ears. Compare the tones you hear for input frequencies ranging from 1kHz to 6kHz. Remember, you must
        click the Acquire Waveform and Output Continuous buttons each time you change the input signal.
     g) Set the function generator frequency back to 1kHz, zero offset, 2Vpp. Select a square wave instead of sine
        wave. Remember, each time you change the function generator output, or the VI Simulation settings, you must
        click the Acquire Waveform button on the LabView vi, to capture a new data sample, then click the Output
        Continuous button to generate a new processed waveform. Display processed waveforms with a Simulation
        Parameter setting of 5,000 Hz Sampling Rate and 50,000 Hz Sampling Rate. Note the difference in the
        spectrum for the two sampling rates. Repeat for a triangular 1kHz input waveform (5,000 Hz and 50,000 Hz
        sampling frequency).

3.   Resolution
     a) Connect your accelerometer output to the ACH1 on your lab station I/O panel. Make sure to connect the power
        and ground lines on the accelerometer for proper operation of the sensor.
     b) Set the input limits on the “VI” to 0V to 5V.
     c) Set the sampling rate to 100,000 Hz.
     d) Set the bit resolution to the value you determined in the prelab.
     e) Measure 0g, +1g, and –1g, and check that the LSB corresponds to 0.1g. What is the approximate noise level of
        your measurements?






ANALYSIS (25pts)
1.   Quantization
     a) At a high sampling rate of 50 times the input frequency, and 12 bit quantization, did you notice any differences
        between the original analog signal and the signal produced by A/D sampling and D/A reconstruction?
     b) As the number of bits used to represent the signal was reduced from 12 to 1, how did the processed waveform
        change? Did the number of voltage levels shown on the instruments agree with your pre-lab predictions?
     c) What did you notice about the average signal voltage for each of the quantization levels? Explain your result
        quantitatively. (Hint: look at the voltage represented by the LSB).
     d) What was the impact of using different Vpp amplitude levels while the input limits of the ADC remained the
        same? Specifically address what was the impact on the number of bits utilized in the measurement.
     e) In aerospace applications, downlink capability is often very limited. This restricts the number of bits available
        for telemetry, and consequently, the resolution and/or range with which analog signal measurements can be sent
        to the ground. Suggest a possible method to use a small number of bits to represent a large amplitude signal
        that is changing slowly. What approach might you use for a signal that is very noisy, but averages out to the
        correct value over time?

2.   Sampling
     a) With a sampling rate of 5kHz, how did the processed 1kHz signal compare to the analog 1 kHz signal? What
        frequencies appeared in the spectrum analyzer trace? How do these compare to your pre-lab predictions?
     b) Describe how the processed signal characteristics changed as the input was increased from 1kHz up to 6 kHz.
        Include results from your scope, spectrum analyzer, and audio experiments. Comment specifically on what
        happens when the input frequency exceeds the Nyquist frequency.
     c) Discuss your observations for sine, square and triangular waves collected at 5,000 Hz and 50,000 Hz sampling
        frequency.
     d) If you were sampling a signal that you knew to be of a frequency above the Nyquist rate and there were no low
        frequency signal components (i.e. they might be filtered out), how could you deliberately use aliasing to provide
        accurate measurements at a lower sample rate? Give a specific numerical example.

3.   Filtering
     a) Analog signals are usually passed through a low pass filter prior to A/D conversion – why is this necessary or
          desirable?
     b) If a signal has high frequency components at the Nyquist rate, then there is a risk that the lower frequency
          components may have amplitudes corrupted by aliasing during sampling. You want to design an anti-aliasing
          filter such that you decrease the amplitude of the frequency components at the Nyquist rate by an order of
           magnitude (x0.1). Determine the highest frequency that can be processed by this system with a reduction in
           amplitude of no more than 3dB (specify your answer in terms of the sampling frequency).
     c) Analog signals produced by a D/A converter are also usually passed through a low pass filter prior to being
          used in a control system or other application – why is this necessary or desirable?







Appendix



How to Use the National Instruments LabView Virtual Instrument
“Digital Audio Simulator.vi”
1.  To open the Labview “VI”, run Labview, open the file in the folder:
    H:\ITLL Documentation\Vis\Digital Audio Simulator2.vi
     This file contains the virtual instrument (short: “vi”) Digital Audio simulator2.vi, which captures and stores a block
    of samples from an A/D port on the MIO-16. It can then perform analysis of the sampled signal and output the same
    (or a further processed) signal to the D/A port.
2. Click on the Right Arrow button (→) in the upper left to start the vi. You can leave it running the whole time.
3. Disable the input and output filters by clicking in the boxes next to “Bypass input filter” and “Bypass output filter”
    at the top of the screen (see next page).
4. To change any of the numerical values in the control panel, highlight the number and type in a new one.
5. The left side of the vi control panel allows you to specify the raw sampled signal parameters and displays the raw
    signal on a virtual scope and spectrum analyzer. This is considered the “input waveform”.
6. The right side of the vi control panel allows you to specify the processing to be applied to the waveform and
    displays the processed waveform on a virtual scope and spectrum analyzer. To the right of the analyzer is a peak
    detect window that lists the locations and amplitudes of the spectral peaks.
7. For the best digital representation of the input signal you should acquire the waveform at a high scan rate (compared
    to your input signal) and maximum resolution. To set this up:
     Set the input scan rate to 100,000 Hz (100kHz) (We will use input signals in the range of 1kHz-6kHz)
     Set the number of points to 20,000. With these settings it takes 0.2 sec to acquire a new waveform.
     Set the input voltage limits to +2V to –2V.
8. Click on “Acquire Waveform” to capture the input waveform from the A/D converter. Click again every time your
    input waveform changes, otherwise this vi will not measure the changed input.
9. Click on “Output Continuously” to write the waveform to the D/A converter. What this does is to repeatedly
    output the stored signal to the D/A. Since there is only a limited sample in the buffer, the output will also repeat
    after the total sample time.
10. To change the sampling rate of the processed waveform, enter a new number in the box for Sampling Rate. (Note:
    the “actual sampling rate” is computed by the vi – it should match the sampling rate you enter to the left). The
    “actual sampling rate” and the “sampling rate” are the same, as long as “sampling rate” divides evenly into “input
    scan rate” – otherwise, the VI calculates the closest “actual sampling rate”. To change the output quantization, enter
    the number of bits desired. You must press “Output” to make your changes effective.
11. If the input waveform changes, you must click on “Acquire Waveform” to capture the change.
12. To change the scale on the scope or spectrum analyzer displays, you must turn off the autoscaling (right click on the
    x or y button below the graph and click on autoscale x axis or y axis to toggle this on/off). Then use the left mouse
    button to highlight the axis value you want to change and type in the number you want.











Lecture Notes for Lab 6
    1.   Analog vs digital signals
    2.   A/D conversion
    3.   D/A conversion
    4.   Quantization
    5.   Sampling

Analog signal – continuous in time and voltage
Digital signal – discrete in time and value (voltage)

[Block diagrams: Analog Instrument → A/D → Computer (sampling);
 Computer → D/A → Analog Instrument (control/reconstruction)]

Sample time          0        1        2        3        4        5        6
Digital value        0        0.5      1        0.5      0       -0.5     -1

Sample interval defines the width of discrete steps in time.
Quantization defines the height of discrete steps in voltage (amplitude).

Examples of analog and digital devices?






Quantization
Computers rely on binary representations of numbers, where each digit (bit) is 0 or 1:

                                     2^8    2^7   2^6   2^5   2^4   2^3   2^2   2^1   2^0

                                     256    128    64    32    16     8     4     2     1

                                     MSB                                              LSB

   N bits can store 2^N different values: 8 bits – 256 values, 4 bits – 16 values
   LSB is the smallest increment or resolution
   Range of values = LSB value × the number of values
    Example: If you require a resolution of 0.01 g, what is the range for an 8-bit number?
        2^8 = 256 values
        LSB = 0.01 g
        So the full range is 256 × 0.01 g = 2.56 g
        If you need both + and −, then only ±1.28 g
   What does quantization do to signal representation? It adds noise!
    For a varying input signal, the best signal-to-noise ratio you can achieve with an N-bit representation is:
        S/N = 6.02 N + 1.76 dB

   What determines the best number of bits for an A/D or D/A converter?
              o Range
              o Resolution
              o Measurement noise or actuator error
              o Cost and Complexity – use as few bits as possible
   Filtering of the D/A output is required to reduce or eliminate abrupt changes in voltage.
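The signal-to-noise figure above (the standard result is 6.02 N + 1.76 dB) can be checked numerically by quantizing one period of a full-scale sine and comparing signal power to quantization-error power. A minimal sketch; the 8-bit value and ±1 V range are arbitrary choices:

```python
import math

BITS = 8
FULL_SCALE = 1.0                       # assumed: sine fills the full input range
LSB = 2 * FULL_SCALE / 2 ** BITS       # step size over a bipolar range

n = 10000                              # samples over one period
sig_power = err_power = 0.0
for k in range(n):
    x = FULL_SCALE * math.sin(2 * math.pi * k / n)
    xq = round(x / LSB) * LSB          # ideal mid-tread quantizer
    sig_power += x * x
    err_power += (x - xq) ** 2

snr_db = 10 * math.log10(sig_power / err_power)
print(round(snr_db, 1))                # close to 6.02*8 + 1.76 = 49.9 dB
```

The measured ratio lands near the theoretical limit because the quantization error of a busy signal behaves like uniform noise of power LSB²/12.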

       Bit-resolution and A/D / D/A Resolution:
            o 8 bit (basic PIC):                  256 bins
            o 10 bit (PIC-type controllers):     1024 bins
            o 12 bit (standard):                 4096 bins
            o 16 bit (getting more common):     65536 bins

                 Bits     Bins    0-5 V Res.    ±10 V Res.
                                   [mV/bin]      [mV/bin]
                    8      256       19.53         78.13
                   10     1024        4.88         19.53
                   12     4096        1.22          4.88
                   16    65536        0.08          0.31

            o Voltage resolution depends on the reference / range of voltage!
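The resolution column follows directly from dividing the voltage span by the number of bins (2^bits); a small helper makes the dependence on voltage range explicit (the function name is illustrative):

```python
def lsb_mv(bits, span_volts):
    """Voltage per bin in mV for a converter with the given bit count and input span."""
    return span_volts * 1000.0 / 2 ** bits

# 0-5 V span vs. +/-10 V (20 V) span, as in the table above
for bits in (8, 10, 12, 16):
    print(f"{bits:2d} bits: {lsb_mv(bits, 5.0):6.2f} mV/bin (0-5 V), "
          f"{lsb_mv(bits, 20.0):6.2f} mV/bin (+/-10 V)")
```

Doubling the input span doubles the mV/bin, which is why matching the converter range to the expected signal range matters as much as the bit count.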

Sampling
Sampling
Sampling Theorem – a continuous signal can be perfectly reconstructed if it is sampled at
        fs > 2 fmax, where fmax is the highest frequency present in the signal.

Aliasing – for a given sample frequency fs, the Nyquist frequency is fs/2. Frequencies higher than fs/2 are not
measured correctly; they appear at lower frequencies instead. This is called folding or aliasing.

       A digital spectrum is only valid from 0 to fs/2
       Example: for a sample rate of 1 kHz, what do we see for input frequencies of 0 to 3 kHz?
       Input to the A/D must be bandlimited to avoid aliasing. Use a LPF.
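The 0–3 kHz example can be worked out with the standard folding rule: reduce the input frequency modulo fs, then reflect anything above fs/2 back into the valid band. A sketch, not specific to any instrument:

```python
def alias(f_in, fs):
    """Apparent (measured) frequency after sampling f_in at rate fs."""
    f = f_in % fs                  # sampling cannot distinguish f from f + k*fs
    return fs - f if f > fs / 2 else f   # fold down across the Nyquist frequency

# fs = 1 kHz: every input folds into the valid band 0 to 500 Hz
for f_in in (200, 600, 900, 1500, 2200, 3000):
    print(f"{f_in} Hz appears at {alias(f_in, 1000)} Hz")
```

So a 600 Hz tone sampled at 1 kHz shows up at 400 Hz, and 2.2 kHz shows up at 200 Hz, indistinguishable from genuine in-band signals; this is why the input must be low-pass filtered before the A/D.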

A/D and D/A
       A/D converts voltage to a series of bits. How? It must freeze or hold the voltage constant while the bits are
        determined (Sample and hold)
       D/A converts a series of bits to a voltage. How? It must keep the output voltage constant while the bits change.
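One common way an A/D determines the bits during the hold period is successive approximation: try each bit from the MSB down, compare the held input against an internal DAC voltage, and keep the bit if the input is still higher. A minimal sketch, with a 0–5 V reference assumed for illustration:

```python
def sar_adc(vin, vref=5.0, bits=8):
    """Successive-approximation conversion of a held voltage to an integer code."""
    code = 0
    for b in range(bits - 1, -1, -1):           # MSB first
        trial = code | (1 << b)                 # tentatively set this bit
        if vin >= trial * vref / (1 << bits):   # comparator vs. internal DAC
            code = trial                        # input is higher: keep the bit
    return code

print(sar_adc(2.5))   # mid-scale input -> code 128
```

Each comparison settles one bit, so an N-bit conversion takes N steps, which is why the sample-and-hold must keep the input frozen for the whole conversion.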




Data Acquisition A/D conversion:

                     [Figures from http://www.cyberresearch.com/content/tutorials/tutorial5.htm]

Data Acquisition and Control Check List
Analog Inputs:
         o Range of signal (highest / lowest voltage, offset), sensitivity (V/physical unit).
         o Unipolar (+) vs. bipolar (±) input voltage range.
         o Single-ended (referenced to common) vs. differential input (V+, V−).
         o Gain or range setting, signal conditioning (low-pass, high-pass, band-pass), required bit resolution.
         o Frequency / sampling rate: can I collect the data fast enough to resolve the frequency information in the
             signal?
         o Noise: for DC signals, it is often easier to filter in software (sample multiple times and average) than with a
             low-pass filter (which cannot remove radiated noise).
Analog Outputs:
An analog output is typically required for any application involving a variable control device such as a servo motor or
servo valve. Most D/A boards output voltages; some can output 4–20 mA current loops.
         o What range and what resolution are required? What is the volts/bit of the DAC?
         o Where does the signal go: what is the input impedance of the receiving instrument, and can the DAC drive
             that much current into the receiver?
         o What is the output signal referenced to (common ground vs. differential input at the receiver)?


