
                                     DESIGN AND EVALUATION OF A SOFT
                                    OUTPUT VITERBI ALGORITHM (SOVA) FOR
                                   USE IN A CONCATENATED CODING SCHEME
                                         Group # 9
                                     December 10, 2001

                     Faculty Technical Advisors:
                                 Professor Fredrick Bruno

                    _______________________________________
                                        Faculty advisor
                                 Stevens Institute of Technology

                                         Jim W. Lang

                    _______________________________________
                                 Senior Member Technical Staff
                                         BAE Systems

                                          Jim Gower
                    _______________________________________
                                         Technical Staff
                                         BAE Systems


                                  Group Members:

Jennifer Ross                   Kristina Paschenko                  Shanna Garber
_____________________________   ________________________________    ____________________________

                Jacob Ausfahl                                       Hector Mugurusa
        ______________________________                     _______________________________

            “We pledge our honor that we have abided by the Stevens Honor System”
                  TABLE OF CONTENTS

ABSTRACT

PROJECT PROPOSAL PLAN

    Introduction
    Design Requirements
    Design Approaches
    Financial Budget
    Project Schedule

CONCLUSION

REFERENCES
I.     ABSTRACT

       Focal designs of digital communication systems have consisted of coding schemes that improve error performance and increase data transmission using the same number of bits. In a Link-16 digital communication system, a Viterbi algorithm is implemented to minimize transmission errors by computing the most likely state sequence from a soft decision input and then outputting it as a hard decision. The Viterbi Algorithm attempts to minimize bit errors of the output by estimating the original input bits. This is accomplished by calculating the different input possibilities for a specific output symbol and then assigning a confidence level to those inputs that have a higher probability of actually having occurred. An extension of the classical Viterbi Algorithm is the Soft Output Viterbi Algorithm (SOVA), which also attempts to minimize bit errors. The SOVA differs from the classical algorithm in that it outputs soft decisions rather than hard decisions.


       It is proposed that a concatenated coding scheme be designed utilizing a SOVA within a
controlled Digital Communication System. Its overall performance will then be evaluated and
compared to the performance of the classical Viterbi Algorithm.


       Development of the SOVA will be simulated within the MATLAB open architecture currently used for the Viterbi Algorithm exercised by BAE Systems. Within this environment, the SOVA will be coupled with a Reed Solomon decoder to implement various DSP simulations.


       At the conclusion of the aforementioned application, overall system performance will be assessed and compared against classical Viterbi Algorithm simulations. If time permits, VHDL will be used as a secondary means of simulating the concatenated coding scheme for the SOVA application. Assessments of overall system performance will be made by comparing the decoding efficiencies of the SOVA with those of the classical Viterbi.


       Based on the anticipated capabilities of the SOVA, it is expected to improve overall system performance and to provide more viable solutions for decoding schemes.
II.1 INTRODUCTION

       The Link-16 Communication System is an advanced digital data communication system that

provides for jam-resistant, crypto secure, digital voice and data communications for command, control,

and communications and an automatic, high-speed, computer-to-computer data link capability. Link-16

provides multiple nets, relative position navigation, aircraft control functions, surveillance reporting

and community identification capabilities. Link 16/JTIDS is utilized to exchange/distribute encrypted

information from scattered sources at a high rate to all required users of that information (Pike, 99).

During the data transmission and acquisition phase, various factors challenge the integrity of the data.

Noise, network congestion, and hardware failures are among the numerous causes that affect data processing. Efforts in the field of accurate digital signal processing have centered on efficient methods of encoding and decoding data to improve the precision of transfers economically.




       The Viterbi Algorithm was proposed by Andrew J. Viterbi in 1967 as a way to decode

convolutional codes by finding which of the possible state transitions that occur has the highest

probability of occurring (Ryan, 93). With the objective of the Viterbi Algorithm being to find the optimal (most probable) route through the states, refinements to the algorithm have been introduced over the years to create the best possible communication systems. A recent modification of the Viterbi Algorithm is the SOVA, the Soft Output Viterbi Algorithm. Like the conventional Viterbi Algorithm, it produces a hard decision regarding the survivor path; unlike the conventional algorithm, however, it also outputs information about the confidence of that path (Ryan, 93). Yet another implementation for

error handling of data transmissions is the Reed Solomon Block Decoder which deals with error

correction. The Reed Solomon decoder takes each block of data, corrects the errors, and attempts to restore the data to its original form. It uses algebraic decoding procedures to correct these errors and implements erasures, an erasure being the known position of an error symbol (Riley, 98).
       The object of this project is to design and evaluate a Soft Output Viterbi Algorithm (SOVA) for

use in a concatenated coding scheme.        The project model will be based on a Link-16 digital

communication system with a receiver that employs a Viterbi soft decision input/hard decision output

decoder followed by a Reed Solomon block decoder. The main focus of the project design is to

determine how overall system performance could be improved by employing a SOVA followed by a

Reed Solomon decoder with erasure handling capability over the current project arrangement.



       The current project arrangement involves a receiver acquiring data from an external source and converting the received phase waveform into bit decisions. Included with the bit decisions are errors, caused by the factors discussed earlier, that need to be corrected. Following this conversion, the data is de-interleaved to break up block (burst) errors, which would otherwise counteract the purpose of the Viterbi Algorithm and the Reed Solomon's erasure handling capability and cause them to fail. The data is then sent to a Hard Output Viterbi Algorithm, where some of those errors are eliminated. Once more, the data is de-interleaved and transmitted to the Reed Solomon Decoder to remove additional errors from each block. Finally, a more accurate data stream is sent out to whatever application the communication system serves.


       Our contribution to the project is to determine whether the SOVA, when coupled with the Reed Solomon Decoder with erasure handling capability, will output better error-corrected data than the Hard Output Viterbi Algorithm. The project is bounded by the use of this specific receiver, the bit decisions that are created, the de-scramblers (data de-interleavers) with 3-bit confidence, and an 8-bit Reed Solomon Decoder. Each of these pieces can be seen in the system layout shown in Figure 1.
                                                    Figure 1


       Therefore, variations to the preceding design implementation will cover the addition of a SOVA in place of the existing Viterbi Algorithm and of erasure handling parameters on the Reed Solomon Decoder. Since the object of our project is to measure how much the SOVA and the erasure handling capability improve the system, we cannot form a hypothesis in advance as to how much the performance will improve. Nonetheless, we expect that the bit error rate will be reduced, and performance thus improved, with the use of the SOVA. Previous experiments concluded that an increase in performance was achieved by moving from the classical Viterbi to the soft input/hard output Viterbi. These findings lead us to believe that there will be a further increase in performance with the SOVA, but this can only be determined through testing.


       From an engineering standpoint, the importance of this project is absolute. Improvements in data transmission in the Link-16 Communication System alone are economically invaluable. Precise data transmission can save an immeasurable amount of money for corporations, governments, and agencies that strive to implement efficient communication systems. It is with this idea in mind that this initiative is pursued, to better our current systems and aggregate optimal solutions to our engineering endeavors.
II.2 DESIGN REQUIREMENTS

       DESIGN OBJECTIVES


       The main questions being addressed in this design project are how much the existing communication system improves with the implementation of a SOVA followed by a Reed Solomon Decoder, and whether the implementation is worthwhile from an engineering perspective. As discussed earlier, a problem occurring with data transmission through a receiver is noise being added to the signal, corrupting its integrity. In order to increase the accuracy of the data transmission, the data is sent through a series of encoders and decoders. There are many types of decoders that can be used to do this, two of which are the Classical Viterbi and the Hard Output Viterbi. The Classical Viterbi accepts a hard (decisive) input and produces a fixed, hard output. The Hard Output Viterbi accepts a soft input, which appends confidence bits, and produces a related fixed, hard output. With the use of such algorithms it has been proven by BAE Systems that the accuracy of the data transmission is increased. In the following design implementation, modifications to the preexisting algorithm are made to produce a soft output as opposed to a hard output. This algorithm type is herein referred to as the SOVA. It is hypothesized that implementation of a SOVA will further increase the accuracy of the data transmission because it supplies a confidence level (probability statistic) with the output and enables erasure capabilities in the Reed Solomon Decoder. It is also hypothesized that using the Reed Solomon Decoder will increase the accuracy of the data by removing corrupt or inaccurate data. The extent of the increase in accuracy will be modeled by simulations created in a MATLAB open architecture environment.
       FUNCTIONAL REQUIREMENTS


       At the present time the communication system used by BAE consists of (in sequential order) a Link-16 receiver, bit decision hardware, de-interleaving hardware, a Viterbi decoder, de-interleaving hardware, and a Reed Solomon block decoder. In the design implementation, the segments of the communication system that will be altered are the Viterbi decoder, to produce soft outputs, and the Reed Solomon decoder, to have erasure capability. The design implementation will provide increased accuracy in the decoding scheme and thereby a more efficient communication system. The following information depicts the existing functional design requirements as provided by BAE Systems:



   The Viterbi Decoder (VitDec) implements a soft-decision Viterbi-decoding algorithm to decode

parallel input symbol streams based on the following convolutional code and decoder parameters:


          constraint length K = 7
          rates R = 1/2 or 1/3
          generating polynomials G0 = 171₈, G1 = 133₈, G2 = 165₈ (G2 for rate 1/3 only)
          soft decision bits SD = 3, symbols in sign-magnitude format
          trace back depth TD = 96
          block depth BD = 48


       Hard-decision decoding can be achieved by tying the two Least Significant Bits (LSB) of the 3-bit parallel soft decision input symbols to 01₂ and driving the Most Significant Bits (MSB) with the demodulated hard decisions. External de-puncturing can be achieved by driving the two LSBs of the 3-bit soft decision input symbol corresponding to a punctured bit to 00₂ (the MSB value does not matter).
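The mapping from demodulated bits to 3-bit soft decision symbols described above can be illustrated with a short sketch. This is not part of the BAE specification; it is a Python illustration (the project itself is simulated in MATLAB), and the helper names are made up for the example.

    def soft_symbol(bit_value, confidence):
        # Pack a 3-bit sign-magnitude symbol: MSB = bit value,
        # two LSBs = confidence (0 = erasure, 1 = weak, 2 = moderate, 3 = strong).
        return ((bit_value & 1) << 2) | (confidence & 0b11)

    # Hard-decision decoding: tie the two LSBs to 01 and drive the MSB
    # with the demodulated hard decision.
    def hard_decision_symbol(hard_bit):
        return soft_symbol(hard_bit, 0b01)

    # External de-puncturing: a punctured bit is marked as an erasure
    # (LSBs = 00); the MSB value does not matter.
    PUNCTURED_SYMBOL = soft_symbol(0, 0b00)

    print(format(hard_decision_symbol(1), "03b"))   # 101
    print(format(PUNCTURED_SYMBOL, "03b"))          # 000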
The functional block diagram of VitDec is depicted below:


                               Figure 2. VitDec Functional Block Diagram
       [Block diagram: inputs DecIn0(2:0), DecIn1(2:0), DecIn2(2:0), DecRate, DecShift, Clock, DecReset, and ResetX feed the Trellis sub-block (which instantiates Butterfly); Trellis writes bit decisions into the dual-port TraceRam, which is read by the TraceBack sub-block to produce DecOut.]

•  VitDec contains major sub-blocks Trellis and TraceBack separated by a dual-port TraceRam storing four 48-bit trellis blocks' worth of bit decisions for each of the 64 states.

•  Sub-block Trellis calculates the branch metrics, path metrics, and survivor paths for all 64 encoder states in parallel, then writes the bit decision for each state into TraceRam for every bit time. In addition, Trellis finds the state with the minimum path metric and initiates trace back at the appropriate time.

•  Component Butterfly calculates the branch metrics, path metrics, and survivor paths for two output states that share common input states and branch metrics, and is instantiated 32 times within sub-block Trellis.

•  Once initiated, sub-block TraceBack reads the TraceRam bit decision history, starting with the current state with the minimum path metric, to recreate the decoded bits in reverse order. These bits are shifted out on DecOut in correct order using a LIFO while the next block of bits is recreated.


A typical interface to waveform signal processing logic is depicted below:
                                     Figure 3. VitDec Typical Interface
       [Block diagram: soft decisions pass through pulse de-interleave, bit de-scramble, and bit transpose logic, then through de-puncture, control, and bit packing logic onto DecIn0(2:0), DecIn1(2:0), DecIn2(2:0), DecShift, DecRate, and DecReset; DecOut feeds symbol de-interleave and then RS decode and buffering; system clocks and resets drive Clock and ResetX.]

       As can be seen by this diagram, puncturing schemes for code rates derived from the base 1/2 or

1/3 rate (e.g. rates 3/4, 7/8 based on rate 1/2) are handled externally to the Viterbi Decoder.


       Resource and electrical performance characteristics will vary depending on the technology,

synthesis, and place & route tools used for FPGA implementation. The following characteristics are for

an implementation using Exemplar Leonardo Spectrum v2000.1b for synthesis and Altera Quartus

v2000.09 Service Pack 1 for place & route. They are included here for illustration purposes only.


           Device                 Logic Cells           Memories
           Altera APEX20K160E     4104 / 6100 (64%)     16384 / 81920 ESB bits (20%)
                                                        8 / 40 ESBs (20%)


               Signal          Fmax (MHz)   Tsu min (ns)   Th min (ns)   Tpw min (ns)   Tco max (ns)
               Clock           47.3
               DecIn0(2:0)                  3.7            0
               DecIn1(2:0)                  3.6            0
               DecIn2(2:0)                  4.5            0
               DecRate                      3.7            0
               DecReset                     5.9            0
               DecShift        15.8         10.1           0.1
               ResetX                                                    10
               DecOut                                                                   6.7
Butterfly calculates surviving path metrics for states 0Y and 1Y in accordance with the butterfly

diagram below:


                                     Figure 4. Butterfly Diagram
       [Input states StateY0 and StateY1 connect to output states State0Y and State1Y through branches Branch0Y0, Branch0Y1, Branch1Y0, and Branch1Y1.]

       Branch metrics Branch0Y0, Branch0Y1, Branch1Y0, and Branch1Y1 are the Euclidean distances of the inputs R0, R1, and R2 from the branch words W0, W1, W2, and W3. These branch words are the outputs of the convolutional encoder when DI and S5-S0 equal 0Y0, 0Y1, 1Y0, and 1Y1 respectively. Since the generating polynomials all include the first and last bits, W0 = W3 = Exp and W1 = W2 = not Exp, therefore:

       Exp(0) = Y(4) xor Y(3) xor Y(2)
       Exp(1) = Y(3) xor Y(2) xor Y(0)
       Exp(2) = Y(4) xor Y(3) xor Y(1)

       Branch0Y0 = Branch1Y1 = BRM0 = Σ (j = 0 to 2) [{Exp(j) xor Rj(2)} * Rj(1:0)]
       Branch0Y1 = Branch1Y0 = BRM1 = Σ (j = 0 to 2) [{not Exp(j) xor Rj(2)} * Rj(1:0)]
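As a cross-check, the branch metric expressions above can be written out directly. The following is a small Python sketch (hypothetical helper names; the project code itself is in MATLAB) that evaluates BRM0 and BRM1 for a given butterfly number Y and latched soft symbols R0, R1, R2.

    def exp_bits(y):
        # Expected encoder bits for butterfly number Y, per the expressions above:
        # Exp(0) = Y(4)^Y(3)^Y(2), Exp(1) = Y(3)^Y(2)^Y(0), Exp(2) = Y(4)^Y(3)^Y(1)
        yb = [(y >> i) & 1 for i in range(5)]
        return [yb[4] ^ yb[3] ^ yb[2],
                yb[3] ^ yb[2] ^ yb[0],
                yb[4] ^ yb[3] ^ yb[1]]

    def branch_metrics(y, r):
        # r = [R0, R1, R2] as 3-bit sign-magnitude symbols:
        # bit 2 is the received bit value Rj(2), bits 1:0 are the confidence Rj(1:0).
        exp = exp_bits(y)
        brm0 = brm1 = 0
        for j in range(3):
            sign = (r[j] >> 2) & 1
            conf = r[j] & 0b11
            brm0 += (exp[j] ^ sign) * conf          # distance from branch word W0/W3
            brm1 += ((1 - exp[j]) ^ sign) * conf    # distance from branch word W1/W2
        return brm0, brm1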

•  Before the start of each decode cycle, DecReset is asserted to set the path metric for state 0 to 0 and the path metric for all other states to a number greater than the largest branch metric (32).

•  Two paths enter each output state. Each path metric is the sum of a branch metric (Branch0Y0, Branch0Y1, Branch1Y0, or Branch1Y1) and the associated input surviving path metric (StateY0 or StateY1). The output surviving path metric (State0Y or State1Y) is the minimum of the two path metrics, and the bit decision (Dec0Y or Dec1Y) indicates which path had the minimum metric.

•  In order to limit the size of the path metrics and prevent adder overflow, normalization logic is used to keep the metrics less than 127. Once all metrics exceed 63 (NormInX = 0₂), the metrics are normalized by subtracting 64.

•  Path metrics State0Y and State1Y and bit decisions Dec0Y and Dec1Y are then generated in accordance with the following pseudo-code:


    if DecReset == 1₂
         Dec0Y = Dec1Y = 0₂
         State0Y = 32 if DecRstBit == 1₂ else 0
         State1Y = 32
    else if ButterflyEn == 0₂
         ACSAdd0(0) = StateY0 + BRM0
         ACSAdd0(1) = StateY1 + BRM1
         ACSMux0 = min(ACSAdd0(0), ACSAdd0(1))
         ACSAdd1(0) = StateY0 + BRM1
         ACSAdd1(1) = StateY1 + BRM0
         ACSMux1 = min(ACSAdd1(0), ACSAdd1(1))
         NormOutX = 0₂ if (ACSMux0 > 63) and (ACSMux1 > 63) else 1₂
    else if ButterflyEn == 1₂
         Dec0Y = 0₂ if ACSAdd0(0) < ACSAdd0(1) else 1₂
         State0Y = ACSMux0 - 64 if NormInX == 0₂ else ACSMux0
         Dec1Y = 0₂ if ACSAdd1(0) < ACSAdd1(1) else 1₂
         State1Y = ACSMux1 - 64 if NormInX == 0₂ else ACSMux1
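The add-compare-select behaviour in the pseudo-code can be condensed into a functional sketch. The following Python fragment is only an interpretation of the pseudo-code above (the function name and flag handling are assumptions, not part of the VHDL design):

    def butterfly_acs(state_y0, state_y1, brm0, brm1, normalize):
        # Paths entering output state 0Y and output state 1Y.
        acs_add0 = (state_y0 + brm0, state_y1 + brm1)
        acs_add1 = (state_y0 + brm1, state_y1 + brm0)

        dec_0y = 0 if acs_add0[0] < acs_add0[1] else 1   # which path survived
        dec_1y = 0 if acs_add1[0] < acs_add1[1] else 1
        acs_mux0 = min(acs_add0)
        acs_mux1 = min(acs_add1)

        # Request normalization once both surviving metrics exceed 63.
        norm_request = acs_mux0 > 63 and acs_mux1 > 63
        if normalize:                                    # commanded via NormInX
            acs_mux0 -= 64
            acs_mux1 -= 64
        return acs_mux0, dec_0y, acs_mux1, dec_1y, norm_request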
Input latch registers R0, R1, and R2 are updated in accordance with the input interface timing diagram

below.


                                Figure 5. Input Interface Timing Diagram
       [Timing: on each DecShift pulse the current DecIn0 and DecIn1 symbols are latched into R0 and R1, and R2 latches either 000₂ or the current DecIn2 symbol depending on DecRate.]



•  R0 and R1 latch DecIn0 and DecIn1, respectively. R2 latches 000₂ for rate 1/2 (DecRate = 0₂) or DecIn2 for rate 1/3 (DecRate = 1₂).

•  DecShift is asserted for only one Clock rising edge per bit cycle. The minimum bit cycle is three clock periods.

•  All 64 path metrics are updated using R0, R1, and R2 after each DecShift, in parallel, using 32 (2^(K-2)) interconnected Butterfly structures that form the latest level of the decoder trellis.

•  If all 32 Butterfly structures indicate normalization is required, then all Butterfly structures are commanded to normalize their path metrics.

•  After all path metrics are updated, bit decisions from all 64 states are written into TraceRam in accordance with the TraceRam write timing diagram following.
                          Figure 6. TraceRam Write Timing Diagram
       [Timing: following each DecShift, the 64 bit decisions (TraceWData) are written to TraceRam at TraceWAddr while TraceWrEn pulses; TraceWAddr steps through the end of one trellis block (ED₁₆, EE₁₆, EF₁₆) and wraps into the next (00₁₆, 01₁₆, 02₁₆), and StartTrace pulses at the block boundary.]

•  TraceWAddr(5:0) indicates the bit time within a 48-bit trellis block. This value is incremented by one modulo-48 after every write pulse.

•  TraceWAddr(7:6) indicates the trellis block within TraceRam. This value is incremented by one modulo-4, and the StartTrace signal is pulsed, when the write to the 48th bit time within the block occurs.

•  For each bit cycle, the state with the minimum surviving path metric (MinState) is calculated using a 64-to-1 pipelined comparator tree.
       SYSTEM PARAMETERS


VitDec supports the following interface signals:


Clock. Input clock used to update all internal registers on the rising edge. The frequency of this clock

       must be at least 3 times the desired bit throughput rate.

DecIn0(2:0), DecIn1(2:0), DecIn2(2:0). Active high soft decision input symbols corresponding to the convolutional encoder bits produced by the generating polynomials G0 = 171₈, G1 = 133₈, and G2 = 165₈. DecIn2 is ignored for rate 1/2 decoding. DecInj(2) represents the value of the incoming bit. DecInj(1:0) represents the confidence factor for the incoming bit, either erasure (00₂), weak (01₂), moderate (10₂) or strong (11₂).

DecRate. Active high input indicating the incoming code is rate 1/2 (0₂) or rate 1/3 (1₂).

DecReset. Active high input used to synchronously reset path metrics and trace back pointers.

DecShift. Active high input that enables the shifting in of valid input symbols on the DecIn inputs and

       shifting out of decoded bits on DecOut. This signal pulses high for one Clock rising edge every

       bit cycle and defines the decoder throughput rate.

ResetX. Active low input that asynchronously initializes internal logic.

DecOut. Active high output containing the decoded data bits.



VitDec supports reset and normal modes of operation:


•  Reset mode is entered when ResetX is driven to 0₂. During reset mode, DecOut is driven to 0₂ and all internal registers are initialized to appropriate values. All other inputs are ignored.

•  Normal mode is entered when ResetX is driven to 1₂. During normal mode, all I/O interactions and functions perform as required for system interaction.
Butterfly supports the following interface signals:


ButterflyEn. Active high input pulse that enables path metric updates.

Clock. Input clock used to update all internal registers on the rising edge.

DecReset. Active high input used to synchronously reset path metrics.

DecRstBit. Active high input used for resetting path metrics during DecReset.

NormInX. Active low input that commands path metric normalization.

R0(2:0), R1(2:0), R2(2:0). Active high latched soft decision input symbols corresponding to the convolutional encoder bits produced by the generating polynomials G0 = 171₈, G1 = 133₈, and G2 = 165₈ respectively. For rate 1/2, R2 is set to 000₂.

ResetX. Active low input that asynchronously initializes internal logic.

StateY0(6:0), StateY1(6:0). Active high inputs containing the path metrics for states Y0 and Y1 (e.g. if Y = 00010₂, then StateY0 is the metric for state 4, StateY1 is the metric for state 5).

Y(4:0). Active high input indicating the butterfly number or encoder partial state. Y associates the

       input path metrics and branch metrics with the corresponding output path metrics.

Dec0Y, Dec1Y. Active high outputs containing the bit decisions for states 0Y and 1Y, respectively (e.g. if Y = 11101₂, then Dec0Y and Dec1Y contain the bit decisions for states 29 and 61, respectively).

NormOutX.      Active low output indicating the path metrics for states 0Y and 1Y are ready for

       normalization.

State0Y(6:0), State1Y(6:0). Active high outputs containing the surviving path metrics for states 0Y and 1Y, respectively (e.g. if Y = 10010₂, then State0Y and State1Y contain the path metrics for states 18 and 50, respectively).
Trellis supports the following interface signals:


Clock. Input clock used to update all internal registers on the rising edge.

DecIn0(2:0), DecIn1(2:0), DecIn2(2:0). Active high soft decision input symbols corresponding to the convolutional encoder bits produced by the generating polynomials G0 = 171₈, G1 = 133₈, and G2 = 165₈. DecIn2 is ignored for rate 1/2 decoding. DecInj(2) represents the value of the incoming bit. DecInj(1:0) represents the confidence factor for the incoming bit, either erasure (00₂), weak (01₂), moderate (10₂) or strong (11₂).

DecRate. Active high input indicating the incoming code is rate 1/2 (0₂) or rate 1/3 (1₂).

DecReset. Active high input used to synchronously reset path metrics.

DecShift. Active high input that enables the shifting in of valid input symbols on the DecIn inputs.

ResetX. Active low input that asynchronously initializes internal logic.

MinState(5:0). Active high output that indicates which state has the lowest surviving path metric.

StartTrace. Active high output that indicates completion of a 48-bit trellis block.

TraceWAddr(7:0). Active high outputs containing the TraceRam write address.

TraceWData(63:0). Active high outputs containing the bit decisions for all 64 parallel trellis states to

       be written into TraceRam.

TraceWrEn. Active high output that enables writes to TraceRam.



TraceBack supports the following interface signals:


Clock. Input clock used to update all internal registers on the rising edge.

DecReset. Active high input used to synchronously reset trace back pointers.

DecShift. Active high input that enables the shifting out of symbols on DecOut.

MinState(5:0). Active high input that indicates the trellis state with the lowest surviving path metric.

ResetX. Active low input that asynchronously initializes internal logic.
StartTrace. Active high input that indicates completion of a 48-bit trellis block.

TraceRData(63:0). Active high input that contains the bit decisions read from TraceRam.

TraceWAddr(7:6). Active high input that contains the current 48-bit block being updated by Trellis.

DecOut. Active high output containing the decoded data bits.

TraceRAddr(7:0). Active high outputs containing the TraceRam read address.



       In order to guarantee a minimum of 96 bits of trace back history per decoded bit, TraceBack

performs the following sequence of events:


       allow Trellis to fill 48-bit TraceRam blocks 0,1,2
       while Trellis fills block 3
            trace back through blocks 2-1-0 at 3x bit rate
            latch bits in block 0 into LIFO 0
       while Trellis fills block 0
            trace back through blocks 3-2-1 at 3x bit rate
            latch bits in block 1 into LIFO 1
            shift bits in LIFO 0 out onto DecOut
       while Trellis fills block 1
            trace back through blocks 0-3-2 at 3x bit rate
            latch bits in block 2 into LIFO 0
            shift bits in LIFO 1 out onto DecOut
       while Trellis fills block 2
            trace back through blocks 1-0-3 at 3x bit rate
            latch bits in block 3 into LIFO 1
            shift bits in LIFO 0 out onto DecOut
       while Trellis fills block 3
            trace back through blocks 2-1-0 at 3x bit rate
            latch bits in block 0 into LIFO 0
            shift bits in LIFO 1 out onto DecOut
       repeat last four sequences for remaining input bits
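The block rotation above can be checked with a short schedule printout. This Python sketch simply reproduces the modulo-4 block sequencing and LIFO ping-pong described in the list above; it is an illustration, not the BAE implementation.

    # While Trellis fills block b, TraceBack traces through the previous three
    # blocks (newest to oldest), latches the decoded bits of the oldest block
    # into one LIFO, and shifts the other LIFO out onto DecOut (on the very
    # first pass there is nothing to shift out yet).
    for step in range(8):
        filling = (step + 3) % 4                 # first trace back occurs while block 3 fills
        trace_order = [(filling - k) % 4 for k in (1, 2, 3)]
        latch_block = trace_order[-1]            # oldest of the three traced blocks
        lifo = (filling + 1) % 2                 # LIFO 0 and LIFO 1 alternate
        print(f"fill block {filling}: trace back through {trace_order}, "
              f"latch block {latch_block} into LIFO {lifo}, shift out LIFO {1 - lifo}")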
Data is read from TraceRam in accordance with the TraceRam read timing diagram below:


                                Figure 7. TraceRam Read Timing Diagram
       [Timing: on assertion of StartTrace, TraceRAddr starts at the last bit time of the previously completed block (AF₁₆) and decrements once per trace back cycle (AE₁₆, AD₁₆, ...), with three trace back cycles completed per DecShift while TraceWAddr advances through the next block (C0₁₆, C1₁₆, ...); TraceRData supplies the corresponding decision words.]


•  TraceRAddr(5:0) indicates the bit time within a 48-bit trellis block. This value starts at 47 and decrements by one modulo-48 for every trace back cycle.

•  TraceRAddr(7:6) indicates the trellis block within TraceRam. At assertion of StartTrace, this value starts at TraceWAddr(7:6) and decrements by one modulo-4 when TraceRAddr(5:0) reaches zero.

•  One of the 64 TraceRData bits (DataDec) is selected for latching into the appropriate LIFO based on a 6-bit selector called DataState. DataState is initialized to MinState on assertion of StartTrace and is updated on every trace back cycle according to the formula:


       DataState = (DataState << 1) + DataDec
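The read sequence and the DataState update can be sketched as follows. This Python fragment is a simplified model (it walks a flat list of decision words rather than the real four-block modulo-48 addressing, and it assumes the selected decision bit DataDec is also the decoded bit for that bit time):

    def trace_back(decisions, min_state, length=48):
        # decisions[i] is the 64-bit word of bit decisions written at bit time i;
        # start from the state with the minimum path metric and walk backwards.
        data_state = min_state                       # initialized to MinState on StartTrace
        reversed_bits = []
        for word in reversed(decisions[-length:]):
            data_dec = (word >> data_state) & 1      # select the bit for the current state
            reversed_bits.append(data_dec)
            # DataState = (DataState << 1) + DataDec, kept to 6 bits (64 states)
            data_state = ((data_state << 1) + data_dec) & 0x3F
        return reversed_bits[::-1]                   # LIFO: reverse to restore bit order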
       DESIGN CONSTRAINTS

       There are several constraints that were imposed by the original design of the Soft Input/Hard Output Viterbi Algorithm. These constraints are carried over into the SOVA. The Link-16 receiver is the receiver we must use for the communication system; to change the type of receiver now would involve altering the entire communication system, which, for our purposes, is not cost effective. The next constraint is the actual bit decision hardware that is initialized once a transmission is acquired by the receiver. These bit decisions have been tuned to their optimum level in the existing configuration. The project is also constrained to a constraint length of seven and a rate of one half for the Viterbi algorithm. This means that there are six shift register stages in the Finite State Machine (FSM), the constraint length minus one, and that for every input bit put into the FSM there are two output bits. All of these constraints were imposed by the original design of the Viterbi Algorithm specific to the Link-16 Communication System and are herein carried over.
II.3 SYSTEM DESIGN

       DESIGN APPROACH

       The Soft Output Viterbi Algorithm project has many restrictions and there is little room for

different approaches. As noted earlier, this is a project sponsored by BAE Systems, which uses an

existing digital signal processing architecture. The DSP environment has already been programmed in

MATLAB and includes a Classical Viterbi Algorithm and the Hard Output Viterbi Algorithm. Our

project involves replacing the existing Classical Viterbi algorithm with our own design of the Soft

Output Viterbi algorithm and testing the system’s performance.


       Our approach to this project was to first research the existing Classical Viterbi Algorithm and understand exactly how it works within the existing digital environment. We received examples from BAE Systems to help us walk through the Classical Viterbi. Once this phase was complete, we began researching the Soft Input/Hard Output Viterbi Algorithm. Again, examples were provided to help us understand the different steps of the algorithm. MATLAB code was provided to us for the two types of algorithms. After reviewing the code, we began researching the SOVA. Although we have not fully completed this stage, we did give BAE a brief presentation on the basic information we found. Once we fully understand the SOVA, we will give our sponsors a full presentation on its workings. At this point we are coming to understand the differences between the algorithms. Knowing the differences between them will allow us to determine what requirements will be needed or changed in order to implement the SOVA.


       Once the requirements of the SOVA are determined, we can brainstorm within our group and with our sponsor to come up with possible ways to implement them. The most efficient and cost-effective solution will be chosen, and the design can progress into actual project implementation. The implementation will consist of programming our own SOVA algorithm within the MATLAB environment, debugging it, and testing it with actual input data. We can then compare the results from testing with those obtained using the Classical Viterbi set up by BAE Systems. If we are successful, we will be able to determine for BAE Systems whether or not they should adopt the SOVA.
       CRITICAL COMPONENTS

       It was stated earlier that the Viterbi Algorithm is a decoder used to decode convolutional codes by finding which of the possible state transitions has the highest probability of occurring. Given the observations, the algorithm finds the shortest, i.e., most probable, path through a trellis (Ryan, 93). The algorithm provides a way of finding the most likely state sequence of a finite-state discrete-time Markov process (Berrou, 93).


       A Finite State Machine produces outputs that are seen by the Viterbi Algorithm as a set of observation symbols, with some of the original data symbols corrupted by noise (Mendez, 98). Metrics are calculated from the Finite State Machine transitions and the observation symbols, and these are used by the Viterbi algorithm to decide which path through the trellis was most likely to have been followed. The Viterbi Algorithm does this by looking at each state at time t and every transition entering that state, and deciding which transition has the greatest metric. This transition is the one most likely to have occurred. If there is a tie between maximum transitions, one of the transitions is chosen randomly as the most likely transition. The state's survivor path metric is assigned the greatest metric and the remaining transitions into the state are discarded. The state at t-1 that is the origin of the winning transition is added to the survivor path of the state being examined at time t; the algorithm then moves to the states at t+1 and repeats. When t reaches the truncation length, the algorithm determines which of the survivor paths has the greatest metric. In case of a tie, the survivor path is chosen at random. The output of the Viterbi Algorithm is a survivor path and the corresponding metrics (Ryan, 93).



       The mathematical representation of the algorithm is found using Bayes' Rule and the first-order Markov property. Let p(Z|C) represent the probability density function of the vector sequence Z = z1, z2, ..., zn conditioned on the sequence of identities C = c1, c2, ..., cn, where zk is the feature vector for the kth value and takes on M values for k = 1, 2, ..., n. Using Bayes' Rule:

       p(C|Z) = p(Z|C) p(C) / p(Z)

After maximizing the discriminant function (the denominator p(Z) does not depend on C), the most likely sequence is:

       C* = argmax over C of p(Z|C) p(C)

It is assumed the size of the sequence of observations is not very large and that there is conditional independence among the feature vectors. The updated algorithm is:

       C* = argmax over C of p(C) Π (k = 1 to n) p(zk|ck)

Since the Viterbi Algorithm assumes a first-order Markov process, the final equation is reduced to:

       C* = argmax over C of p(c1) p(z1|c1) Π (k = 2 to n) p(zk|ck) p(ck|ck-1)

                                                                                    (Ryan, 93).


       The conventional Viterbi Algorithm relies on inputs from a demodulator to find the most likely set of states used by the Finite State Machine and so determine the original input sequence. The demodulator makes hard decisions on the symbols it receives from a channel. An alteration to the algorithm is to have the demodulator produce soft decisions as opposed to firm decisions: the demodulator produces the output it thinks was sent, along with confidence bits. This increases the Viterbi Algorithm's range of performance. It can then be used to decode soft decisions and output a hard decision (Mendez, 98).
          Another alteration to the Viterbi Algorithm is the idea of the SOVA, the Soft Output Viterbi Algorithm. This takes a soft input and produces a soft output. The output still includes a hard decision regarding the survivor path, similar to the conventional Viterbi Algorithm; different from the conventional algorithm, information about the confidence of the path is also output (Mendez, 98).



          The Viterbi Algorithm is used in communications for decoding TCM codes, and also in handwritten word recognition, non-linear dynamic system state estimation, target tracking, and printed word recognition (Mendez, 98). It is excellent at correcting isolated errors at a bit error rate of about 1 in 100. For our project we want to take the bit decisions and correct the errors made. Using the Viterbi Algorithm, not all errors will be caught; the Reed Solomon decoder is needed to catch the missed errors.



Convolutional Encoder

          A sequence of binary digits have to be transmitted along a communication channel. The

convolutional encoder is a shift register that shifts in input bits from the sequence and produces a set of

output bits. The output bits are determined by the logical operations that are carried out on parts of the

input sequence. The encoder is defined by a rate, constrain length, and a polynomial Gj. In the example

shown in Figure 8, the rate is ½ because for every one input yields two output values. The constraint

length, k, is 4. The number of states is k-1 making 3 state values, s1, s2, & s3.
                              Figure 8: Example Convolutional Encoder


       The boxes in this example represent the shift register, and their contents are the state that the FSM is in. This state corresponds to the actual contents of the shift register locations. To obtain output O1, the contents of S1, S2, and S3 go through a logical exclusive-or function. Similarly, to obtain output O2, the contents of S1 and S3 go through a logical exclusive-or function. The shift registers are initialized to all zeros before any inputs are shifted in. So if the first input bit is a 1, the registers would read 1, 0, 0, yielding a 1 for output O1 and a 1 for output O2. This process continues for every input bit shifted into the encoder. For an input sequence of 011000 the output sequence received should be 00 11 01 01 11 00. In the example shown in the figure, let us assume that the actual received bits after transmission over a noisy channel were 01 11 01 00 11 00. The bits in red (the second and eighth received bits) are obvious errors from what should have been received.
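The worked example above can be reproduced with a few lines of code. The sketch below is in Python rather than the project's MATLAB environment, and it assumes the tap arrangement implied by the text (O1 = S1 xor S2 xor S3, O2 = S1 xor S3, with the newest bit shifted into S1):

    def conv_encode(bits):
        s1 = s2 = s3 = 0                         # registers start at all zeros
        out = []
        for b in bits:
            s1, s2, s3 = b, s1, s2               # shift the new input bit into S1
            out.append((s1 ^ s2 ^ s3, s1 ^ s3))  # (O1, O2)
        return out

    print(conv_encode([0, 1, 1, 0, 0, 0]))
    # [(0, 0), (1, 1), (0, 1), (0, 1), (1, 1), (0, 0)]  ->  00 11 01 01 11 00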




Finite State Machine


       The states are represented by nodes and transitions between two states are represented by edges. For each possible transition in the Finite State Machine there is a corresponding output symbol produced. Through the course of time, a state-to-state path can be traced (Ryan, 93). A sample Finite State Machine is shown in Figure 9. The boxes are the possible states and the lines with arrows are the possible transitions between the different states. The numbers in red are the corresponding outputs that should be produced by the encoder, followed by the input that caused the transition. For example, the 00 / 0 shows that if a 0 is input into the encoder, the two-bit binary number that should come out is 00. The trellis diagram needed in the calculation of the survivor paths is determined from the Finite State Machine.
                             Figure 9: Sample Finite State Machine Diagram

Trellis

          A trellis is a representational graph of a finite set of states taken from the Finite State Machine, from known state c0 to cn. A node represents a distinct state at a given time, while an arrow represents a transition between states. For any sequence C there is a corresponding unique path through the trellis, and for every unique path there is a corresponding sequence C. For the Finite State Machine above, the
corresponding trellis diagram would look like Figure 10.

                 Figure 10: Sample Trellis Diagram corresponding to FSM in Figure 9
       [Trellis: the states of the FSM are repeated at each time step t1 through t5, with a line for each possible state transition between consecutive time steps.]
         The dots represent the states that were in the FSM diagram and the lines represent the arrowed lines in the FSM diagram. For example, in Figure 9 you can see that for the box 00 there are two arrowed lines, one looping back to itself and the other going to state 01. Both of these transitions appear in Figure 10. Once the trellis diagram is determined, the Viterbi Algorithm is used to calculate the branch metrics of each of the transitions. There are two different ways to calculate the branch metrics for a trellis diagram, both of which depend on the type of input given to the decoder: the input bits may be hard or soft.


Hard Input


         In the case of hard input bits, the branch metrics are determined by calculating the Hamming distance for each transition. The Hamming distance is calculated by adding 1 for each received bit that differs from the bit that should have been received. For example, if the encoder was supposed to output 00 and the decoder actually received 01, one bit would be different, resulting in a branch metric of 1. This branch metric is calculated for every single transition of the trellis diagram. The branch metrics for the first two time intervals are shown in Figure 11. The lines in red correspond to the smallest branch metrics.

                            Figure 11: Corresponding Branch Metrics for Hard Input
       [Trellis for the first two time intervals with the Hamming-distance branch metric marked on each transition; the transitions with the smallest metrics are highlighted.]
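A hard-input branch metric is therefore just a bit count, as in this small Python illustration (assumed helper name, not project code):

    def hamming_branch_metric(received, expected):
        # Add 1 for every received bit that differs from the branch word bit.
        return sum(r != e for r, e in zip(received, expected))

    # From the example above: branch word 00 received as 01 gives a metric of 1.
    print(hamming_branch_metric((0, 1), (0, 0)))     # 1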
Soft Input


         Soft input bits yield slightly different information than hard input bits. Not only are the output bits received, but a symbol table of confidence values is also created. The symbol table with confidence values is shown in Figure 12. In the case of soft input bits, the branch metrics are determined by calculating the Euclidean distance for each transition. The Euclidean distance is calculated by adding the confidence value of each received bit that differs from the bit that should have been received.



Bit 2 (msb)      Value                   01        11        01          00         11         00
Bit 1            Confidence MSB          10        11        11          10         11         01
Bit 0            Confidence LSB          11        10        11          00         10         11


                                           Figure 12: Symbol Table


         The branch metrics for the first two time intervals are shown in Figure 13. The lines in red correspond to the smallest branch metrics. The first received symbol has bit values of 0 and 1 with confidence factors of 11₂ and 01₂ respectively (the confidence values are read down the table, not across). For all branches with branch word 00, the branch metric will be 0+1=1. For all branches with branch word 01, the branch metric will be 0+0=0. For all branches with branch word 10, the branch metric will be 3+1=4. For all branches with branch word 11, the branch metric will be 3+0=3.
                            Figure 13: Corresponding Branch Metrics for Soft Input
       [Trellis for the first two time intervals with the confidence-weighted branch metric marked on each transition; the transitions with the smallest metrics are highlighted.]
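The confidence-weighted metric for the first received symbol can be checked the same way. The Python sketch below (illustrative only) reproduces the four values worked out above:

    def soft_branch_metric(received, branch_word):
        # received is a list of (bit value, confidence) pairs; add the confidence
        # of every received bit whose value differs from the branch word bit.
        return sum(conf for (bit, conf), exp in zip(received, branch_word) if bit != exp)

    symbol = [(0, 3), (1, 1)]   # first symbol from Figure 12: bits 0,1 with confidences 3,1
    for word in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(word, soft_branch_metric(symbol, word))
    # (0, 0) 1    (0, 1) 0    (1, 0) 4    (1, 1) 3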


         By calculation of the branch metrics, the smallest accumulated metrics can be determined. Once

the smallest accumulated metrics are determined, the algorithm can then trace-back through the trellis

to determine what the correct input bits were.



Reed Solomon Block Decoder


         The Reed Solomon Block Decoder deals with error and erasure correction. Through the use of an algebraic decoding procedure and syndrome calculations, the decoder processes each block, corrects the errors, and restores the data. The decoder attempts to identify the position and magnitude of as many as t errors or 2t erasures, and to correct those errors and erasures (Riley, 98). The number t depends on the characteristics of the Reed-Solomon code used for the decoder. The ultimate goal is to recover the original data.
       The Reed Solomon code appears in many areas for various reasons. It can be used in storage devices, for example tape, compact disc, DVD, and barcodes. Reed Solomon can be used in wireless and mobile communications, such as cellular telephones and microwave links. Satellite communication, digital television, and high-speed modems are among the other applications that use Reed Solomon (Riley, 98). The reason Reed Solomon can be used in such varied hardware is that there are advantages to using it. The main advantage is that the probability of an error remaining in the decoded data is much lower than the probability of an error if Reed-Solomon is not used. This is known as coding gain (Riley, 98).


       Errors can occur during transmission or storage. This can happen for a number of reasons, including noise or interference and scratches on a CD. A symbol error occurs when a single bit in the symbol is incorrect or when all bits in a symbol are incorrect (Riley, 98). Erasures are the known positions of error symbols; the demodulator supplies this location information (Chan, 97). An erasure flag is assigned to a bit when there is not enough confidence in it. Erasure is beneficial if an error can be eliminated with better than 50% confidence. Since the Reed Solomon codes have a matrix structure and several fast algorithms are available, much of the effort needed to correct erasures is reduced (Riley, 98). The decoder can correct only half as many errors as erasures; the maximum number of erasures that can be corrected is equal to the number of parity bits (Chan, 97). The number of errors corrected is directly proportional to the processing power needed for the decoder. This means that when designing the Reed Solomon decoder, the designer has to strike a compromise between the conservation of power and maximum error correction.


       If s represents the symbol size in bits, the maximum codeword length, represented by n, is n = 2^s - 1. This codeword length, more specifically the number of parity bits, is directly proportional to the amount of processing power needed for the decoder. One way to decrease the effective codeword length is to make a number of data symbols zero at the encoder, not transmit them, and reinsert them at the decoder. The received codeword at the decoder, represented by r(x), is the original or transmitted codeword c(x) plus errors e(x). This statement is represented by the equation r(x) = c(x) + e(x) (Riley, 98).
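The relationship between the symbol size, the codeword length, and the correction capability can be written out numerically. The values of k and t below are assumed purely for illustration (the project uses an 8-bit Reed Solomon decoder, so s = 8):

    s = 8                        # symbol size in bits
    n = 2**s - 1                 # maximum codeword length: 255 symbols
    k = 239                      # example number of data symbols (assumed)
    t = (n - k) // 2             # parity symbols = n - k = 2t
    print(n, n - k, t)           # 255 16 8 -> corrects up to 8 errors or 16 erasures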

        A diagram of the Reed Solomon codeword is shown below (Riley, 98):

        [Codeword structure: n symbols in total, consisting of the k data symbols followed by the 2t parity symbols.]




        A codeword is generated using a specific generator polynomial (Riley, 98). All valid codewords are exactly divisible by this polynomial. The general form is g(x) = (x - a^i)(x - a^(i+1)) ... (x - a^(i+2t)). The codeword is constructed using c(x) = g(x) · i(x), where g(x) is the generator polynomial, c(x) is the valid codeword, i(x) is the information block, and a is a primitive element of the field (Riley, 98).


        When a codeword is decoded there are three possible outcomes. The first possibility is that twice the number of errors plus the number of erasures is less than or equal to the number of parity symbols; in this case the original transmitted codeword will be recovered. The second possibility is that the decoder cannot recover the codeword. The last possibility is that the decoder mis-decodes, recovering an incorrect codeword with no indication that it has done so (Chan, 97).


        When finding the error locations, an equation with t unknowns is used, where t represents half the number of parity bytes. First the error locator polynomial is formed; then the roots of the polynomial are found. In finding the error values, an equation with t unknowns is again used (Chan, 97). There are several fast algorithms for doing this. When finding the symbol error values, several simultaneous equations with t unknowns are solved; the algorithm most often used to do this is the Forney Algorithm.
       To begin the Forney Algorithm, the infinite-degree syndrome polynomial must be defined. This polynomial is of the form:

       S(x) = Σ (j = 1 to ∞) Sj x^j

       The next step is to define the error magnitude polynomial in the form:

       Ω(x) = S(x) Λ(x)

       Since only the first 2t coefficients of the syndrome polynomial are known, the algorithm becomes:

       Ω(x) = [ S(x) Λ(x) ] mod x^(2t+1)

       Finally, the error magnitude expression becomes:

       e(ik) = -Xk Ω(Xk^-1) / Λ'(Xk^-1)

                                                                   (Matache, 96)


       When finding the Error Locator Polynomial, there is a choice of two algorithms: either the Berlekamp-Massey algorithm or Euclid's Algorithm is used. Euclid's Algorithm is more often used because it is easier to implement. The Berlekamp-Massey algorithm, although a little more difficult to implement, leads to more efficient hardware and software implementations (Riley, 98). When finding the roots of the polynomial, the Chien search algorithm is used. This determines the error positions by finding the zeroes of the error-locator polynomial Λ(x) (Riley, 98).



       The Berlekamp-Massey algorithm is described in the steps that follow.
               1. Compute the syndrome sequence S1 ... S2t for the received word.
               2. Initialize the algorithm variables:
                      a. k = 0
                      b. Λ^(0)(x) = 1
                      c. L = 0
                      d. T(x) = x

               3. Set k = k + 1 and compute the discrepancy Δ^(k):

                      Δ^(k) = Sk - Σ (i = 1 to L) Λi^(k-1) S(k-i)

               4. If Δ^(k) = 0, go to step 8.

               5. Modify the connection polynomial:

                      Λ^(k)(x) = Λ^(k-1)(x) - Δ^(k) · T(x)

               6. If 2L >= k, then go to step 8.

               7. Set L = k - L and:

                      T(x) = Λ^(k-1)(x) / Δ^(k)

               8. Set T(x) = x · T(x).

               9. If k < 2t, then go to step 3.

               10. Determine the roots of:

                      Λ(x) = Λ^(2t)(x)

                                                                   (Matache, 96)

       If the roots of the equation are distinct and lie in the right field, the error magnitudes can be determined, the corresponding locations in the received word can be corrected, and the algorithm can STOP (Matache, 96).
       Euclid's Algorithm is much less complex than the Berlekamp-Massey Algorithm. Euclid's Algorithm basically finds the GCD (greatest common divisor) of a and b by successive divisions. If a > b, the pair (a, b) can be reduced by subtracting b from a, or by subtracting a multiple of b from a. The reduction always yields a positive value, so the algorithm terminates in a finite number of steps. The last step is of the form (u, v), where u is a multiple of v and v is the GCD of a and b. An example of Euclid's Algorithm is as follows, with A = 213 and B = 72 (Matache, 96). The steps are:



              1. (213,72)=(213-72,72)=(141,72)
              2. (141, 72)=(141-72,72)=(72, 69)
              3. (72, 69)=(72-69, 69)=(69, 3)
              4. The Algorithm stops since 69 is a multiple of 3. So in conclusion (213, 72)=3
                  (Matache, 96).
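The subtraction form of Euclid's Algorithm used in the example can be coded in a few lines. This Python sketch (illustrative, with an assumed function name) reproduces the result (213, 72) = 3:

    def gcd_by_subtraction(a, b):
        # Repeatedly subtract the smaller number from the larger until the
        # larger is a multiple of the smaller; that smaller number is the GCD.
        while a % b != 0:
            a, b = max(a - b, b), min(a - b, b)
        return b

    print(gcd_by_subtraction(213, 72))    # 3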


        A diagram of the Reed Solomon Decoder is shown below (Riley, 98):

        [Decoder pipeline: the received codeword r(x) feeds the syndrome calculation Si, which feeds the error locator polynomial Λ(x), from which the error locations Xi and error magnitudes Yi are found and used to produce the recovered codeword c(x).]




       The symbols represented are:


              r(x) = Received codeword
              Si = Syndromes
              L(x) =Error Locator Polynomial
              Xi = Error Locations
              Yi = Error Magnitudes
              c(x) = Recovered Codeword
              v = Number of errors (Riley, 98).
       A Reed-Solomon codeword has 2t syndromes that depend only on the errors, not on the transmitted codeword. The syndromes can be calculated by substituting the 2t roots of the generator polynomial g(x) into r(x), the received codeword (Riley, 98).
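       Assuming the usual narrow-sense construction, in which the generator polynomial has the roots \(\alpha, \alpha^2, \ldots, \alpha^{2t}\) (other starting exponents are possible), this amounts to

\[ S_i = r(\alpha^{i}), \qquad i = 1, 2, \ldots, 2t . \]

If all 2t syndromes are zero, the received word is itself a valid codeword and no correction is attempted.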



Reed Solomon Encoder

       The Reed Solomon Encoder takes a block of digital data and adds extra redundant bits. The encoder takes k data symbols of s bits each and appends parity symbols to make an n-symbol codeword; the codeword therefore contains n - k parity symbols of s bits each (Riley, 98).
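       A common systematic construction (one convention among several, not quoted from (Riley, 98)) leaves the k data symbols unchanged and appends the parity symbols as the remainder of a polynomial division by the generator polynomial g(x):

\[ c(x) = x^{\,n-k}\, m(x) + \bigl[\, x^{\,n-k}\, m(x) \bmod g(x) \,\bigr], \]

where m(x) is the message polynomial of degree less than k. Because g(x) divides c(x), all 2t syndromes of an error-free codeword are zero.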



A diagram of a Reed Solomon System with the decoder and encoder is shown below (Riley, 98).




A diagram of a Reed Solomon Encoder is shown below (Riley, 98).
Reed Solomon Codes

          Reed Solomon codes are a subset of BCH codes and are linear block codes (Riley, 98). The codes are based on a specialist area of mathematics known as Galois fields, or finite fields, and are specified as RS(n, k) codes with s-bit symbols (Riley, 98). Reed Solomon codes are well suited to correcting errors in which a series of bits in a codeword is received incorrectly; this type of error is known as a burst error.
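       As a concrete illustration (a standard example rather than one drawn from the cited references), the widely deployed RS(255, 223) code uses s = 8-bit symbols, so n = 2^8 - 1 = 255; it carries k = 223 data symbols, appends n - k = 32 parity symbols, and can correct up to t = (n - k)/2 = 16 symbol errors per codeword, regardless of how many bit errors fall within each corrupted symbol.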

          Finite-field, or Galois-field, arithmetic has the property that any arithmetic operation on field elements produces a result that is itself an element of the field. With the aid of hardware or software functions, the Reed Solomon encoder and decoder need to carry out these arithmetic operations (Riley, 98).
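       A minimal sketch of such field arithmetic is shown below for GF(2^8), assuming the primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d), which is one common choice; the project's own simulations are planned in the MATLAB environment, so this Python fragment is illustrative only.

    # Minimal GF(2^8) arithmetic sketch (illustrative; assumes primitive polynomial 0x11d).
    PRIM_POLY = 0x11d
    GF_EXP = [0] * 512      # antilog table, doubled so exponent sums need no reduction
    GF_LOG = [0] * 256      # log table (index 0 unused, since log 0 is undefined)

    value = 1
    for i in range(255):
        GF_EXP[i] = value
        GF_LOG[value] = i
        value <<= 1                 # multiply by the primitive element alpha
        if value & 0x100:
            value ^= PRIM_POLY      # reduce modulo the primitive polynomial
    for i in range(255, 512):
        GF_EXP[i] = GF_EXP[i - 255]

    def gf_add(a, b):
        return a ^ b                # addition (and subtraction) is bitwise XOR in GF(2^m)

    def gf_mul(a, b):
        if a == 0 or b == 0:
            return 0
        return GF_EXP[GF_LOG[a] + GF_LOG[b]]

    # Closure: every sum and product of field elements is again a field element in 0..255.
    print(gf_add(0x57, 0x83), gf_mul(0x57, 0x83))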



SOVA


          The Soft Output Viterbi Algorithm (SOVA) is used in the decoding of convolutional codes and produces a soft output. Our goal is to replace the existing Classical Viterbi Algorithm with a Soft Output Viterbi Algorithm to determine whether the overall performance of the system improves and whether the decoded data contains fewer errors.



The decoder is composed of the following five components [2]:

             •  ACS (add-compare-select) processors
             •  The metric computation unit
             •  The hypercubic paths between processors, which diminish the amount of exchange between registers
             •  The traceback computation unit, which searches back through the trellis for the best path
             •  An algorithm to revise the paths

These components are illustrated in the following block diagram of the SOVA:




       One reason the SOVA could be more effective than the Classical Viterbi Algorithm is that the soft values it outputs are probabilities or decimal values, whereas hard outputs are strictly either 1/0 or +1/-1. Hard outputs of 1 or 0 lose information because the soft values generated by the channel are ignored; given a soft value, one can judge not only whether a bit is more likely a 1 than a 0, but also how reliable that decision is. This also matters when two decoders are used in series: with the SOVA, the soft output of one decoder can be used directly as a soft input to the next decoder.
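       The distinction can be illustrated with a small fragment (illustrative only; the received samples and noise level are hypothetical, and BPSK signalling over an AWGN channel is assumed, with bit 0 mapped to +1 and bit 1 to -1):

    import numpy as np

    # Contrast hard decisions with the soft values they discard (illustrative only).
    sigma = 0.8                                        # assumed channel noise standard deviation
    received = np.array([+0.9, -0.1, +1.6, -0.7])      # hypothetical noisy channel outputs

    hard_bits = (received < 0).astype(int)             # hard decision keeps only the sign
    soft_llr = 2.0 * received / sigma**2               # log-likelihood ratio keeps the confidence too

    print(hard_bits)    # [0 1 0 1]
    print(soft_llr)     # the small-magnitude entry (-0.1) flags a low-confidence decision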


       Replacing the Viterbi Algorithm with the Soft Output Viterbi Algorithm should not be too difficult a task, since the SOVA is an extension of the Classical Viterbi Algorithm [1]. Both first form a trellis and then determine the accumulated maximum likelihood for each state [3]. The only difference is the next step, tracing back to find the surviving path and determine the soft output. Where the Viterbi Algorithm traces back over a single path, the SOVA chooses the maximum-likelihood path and its next-best competitor path and traces back over the winning path [1].
       This tends to become very complex, because the SOVA then has to trace back from each of the 2^m states. One solution is to use the Classical Viterbi Algorithm to look ahead in time, rather than backwards, in order to find the maximum-likelihood path. This can be done by calculating the path metrics of each state at time k, where the SOVA would normally trace back for each of the 2^m possible states [1].




       By taking the soft input Y_u, the branch metric may be calculated for each branch m at time t = uT [4].
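       One commonly used form (an assumption on our part; the exact expression of [4] is not reproduced here) is the correlation metric, which sums the products of the received soft samples with the channel symbols expected on branch m:

\[ \gamma_u(m) = \sum_{j} y_{u,j}\, x_{u,j}(m). \]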




       With reliability information for the subset C_m, conditioned on the received value Y_u at time u, the path metric is needed for every state z in the trellis; it is accumulated recursively from the branch metrics [4].
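       A commonly used form of this recursion (again an assumption, not necessarily the expression intended in [4]) is the add-compare-select update

\[ M_u(z) = \max_{(z' \to z)} \bigl[\, M_{u-1}(z') + \gamma_u(z' \to z) \,\bigr], \]

taken over all transitions z' → z that enter state z.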
TEST PROCEDURES & DESIGN RISKS



       Evaluation and assessment of the current design approach will be based on testing in the MATLAB open architecture environment. Signal integrity will be plotted against bit error rate and evaluated from simulations. The current algorithms will be modified and run under several different controlled tests for evaluation. One design risk is that the SOVA implementation may not improve the bit error rate of the transmission; even in that case, the project requirements can still be met by documenting and analyzing why the improvement was not achieved. Another possible design risk is software licensing: some add-ins for the MATLAB open architecture are not available in the versions presently on hand, and acquiring them could delay the simulations.
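       The style of measurement intended is sketched below. The fragment is illustrative only: it simulates uncoded BPSK over an AWGN channel with a simple hard-decision slicer, and the planned MATLAB simulations would substitute the classical Viterbi and SOVA decoders for that slicer; all parameters shown are assumed values.

    import numpy as np

    # Illustrative bit-error-rate measurement loop (uncoded BPSK over AWGN).
    rng = np.random.default_rng(0)
    n_bits = 100_000

    for ebno_db in (0, 2, 4, 6):
        ebno = 10 ** (ebno_db / 10)
        sigma = np.sqrt(1.0 / (2.0 * ebno))         # noise std dev for unit-energy BPSK
        bits = rng.integers(0, 2, n_bits)
        symbols = 1.0 - 2.0 * bits                  # bit 0 -> +1, bit 1 -> -1
        received = symbols + sigma * rng.standard_normal(n_bits)
        decisions = (received < 0).astype(int)      # hard-decision slicer (decoder goes here)
        ber = np.mean(decisions != bits)
        print(f"Eb/N0 = {ebno_db} dB   BER = {ber:.5f}")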
II.4 FINANCIAL BUDGET


                   Academic Term (January 2002 – May 2002)

            Cost Components                                               Cost
 Labor Costs                                                                 *
    Professional                                                           N/A
    Student                                                                N/A
 Test Equipment                                                             **
    Modeling Simulation Program                                            N/A
    Modeling Simulation Equipment                                          N/A
 Final Presentation Costs                                              $ 70.00
    Copies (250 @ $0.10) & color copies (20 @ $1.00)                   $ 45.00
    Slides / Transparencies                                            $ 20.00
    Poster Boards and Presentation Boards                              $ 25.00
 Travel
    Gas and Tolls (16 mpg) ($3 per trip) (15 trips)                    $ 45.00

                      TOTAL                                        $ 205.00 ***



* Labor costs are sponsored by BAE Systems in conjunction with the CPE 415 Group #9 Stevens
  Institute of Technology Senior Design Team, and thus are not charged to the project at this time.

** Test equipment costs are sponsored and provided by BAE Systems in conjunction with the CPE 415
   Group #9 Stevens Institute of Technology Senior Design Team, and thus are not charged to the project
   at this time. Modeling simulation programs are made available by BAE Systems and the Computer
   Engineering Department at Stevens Institute of Technology.

*** Project cost totals are to be capped at $250; the CPE 415 Group #9 Stevens Institute of Technology
    Senior Design Team will cover any difference.
II.5 PROJECT SCHEDULE

       (Gantt chart; the schedule runs from mid-January 2002 through mid-May 2002. The task list and assignments are reproduced below.)

 ID   Task Name                                   Assigned To
 1    Weekly Status Reports
 17   Weekly Group Meetings/Evaluation
 34   Meeting w/ sponsor BAE                      Group 9
 35   SOVA Research                               Group 9
 36   SOVA Output Symbols                         Jenn and Kristina
 37   SOVA Output Storage                         Hector
 38   SOVA Trellis Metrics                        Jake
 39   SOVA Trace-Back Method                      Shanna
 40   SOVA/Classical Comparison                   Group 9
 41   SOVA Presentation & Design w/ BAE           Group 9 & BAE
 42   Design Implementation                       Group 9
 43   Design Testing and Error Control            Group 9
 44   Meeting w/ BAE                              Group 9 & BAE
 45   Final Report - Abstract and Conclusion      Hector
 46   Final Report - Project Schedules            Shanna
 47   Final Report - Introduction                 Jake
 48   Final Report - Design Allocation            Jenn
 49   Final Report - Design Requirements          Kristina
 50   Final Report - Financial Budget             Shanna
 51   Final Presentation Slides                   Group 9
 52   Final Project Presentation                  Group 9
III. SUMMARY

         It is the objective of this endeavor to study variations of the Viterbi Algorithm, more specifically the Soft Output Viterbi Algorithm, and to observe its working mechanisms. Through modeling and simulation, performance factors will be evaluated against the traditional coding schemes presented by BAE Systems, and efficiency will be assessed.


         The Soft Output Viterbi Algorithm is a decoding scheme that attempts to minimize bit errors by estimating the posterior probabilities of the individual bits of the transmission. The SOVA's capabilities in a DSP architecture allow errors in data transmission to be minimized on a variety of platforms. It is our intention to study the SOVA within a controlled DSP environment, namely the Link-16 communication system, and to simulate it against a Classical Viterbi Algorithm.


         The expected result of SOVA testing within a concatenated coding scheme is that the overall performance of data transmission in a Link-16 communication system will improve. This expectation follows from the observation that soft outputs, in contrast to the hard outputs of the Classical Viterbi Algorithm, preserve reliability information that can be exploited in subsequent error handling. With these expected results, the recommended SOVA model should lead to a successful outcome at the conclusion of this study.


         At the conclusion of SOVA testing within the MATLAB open architecture environment, other avenues may be pursued, such as VHDL development of the SOVA and an FPGA implementation.
IV. REFERENCES

 1. Ryan, M.S. and Nudd, G.R., The Viterbi Algorithm, Department of Computer Science,
          University of Warwick, Coventry, CV4 7Al, England, February 1993.

 2. S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "A soft-output APP module for iterative
           decoding of concatenated codes," IEEE Commun. Letters, vol. 1, pp. 22-4, Jan. 1997

 3. C. Berrou, P. Adde, E. Angui, and S. Faudeil, "A low complexity soft-output Viterbi decoder
           architecture," in Proc., IEEE Int. Conf. on Commun., pp. 737-40, May 1993

 4. http://www.4i2i.com/reed_solomon_codes.htm, October 11, 2001.

 5. http://www.calculex.com/bert.htm, April 22, 1995.

 6. http://www.cim.mcgill.ca/~latorres/Viterbi/va_main.html, 1998.

 7. http://www.cim.mcgill.ca/~latorres/Viterbi/va_main.html, September 22, 1998.

 8. http://www.comit.com/publications/datasheets/reedsolomoned.pdf, September 2, 1998.

 9. http://csis.ee.virginia.edu/~dcg3w/papers/ISLPED98_paper.pdf , August 12, 1998.

 10. http://www.dcs.warwick.ac.uk/pub/reports/rr/238.html, February 21, 1993.

 11. http://www.ece.utexas.edu/~prob/OTHERCOURSES/CRYPTO/math.background/node6.html,
             December 2, 2000.

 12. http://www.ee.ucla.edu/~matache/rsc/node8.html, October 20, 1996.

 13. http://www.ee.ucla.edu/~matache/rsc/node9.html, October 20, 1996.

 14. http://www.fas.org/man/dod-101/sys/ship/weaps/data-links.htm, June 30, 1999.

 15. http://www.gel.ulaval.ca/~fortier/publications/IPbloc2.pdf

 16. http://personal.ie.cuhk.edu.hk/~chankm6/TurboCode/index.html, December 21, 1997.

 17. http://pw1.netcom.com/~chip.f/viterbi/algrthms2.html, September 21, 2001.

 18. http://speedy.et.unibw-muenchen.de/cgi-
             bin/xload.cgi?src=/forsch/ut/soft_tcm/sova_work.html&push=/forsch/ut/soft_tcm/prev_
             work.html~hpos2&t0=backto&v0=Previous~sWorks

 19. http://www.rockwellcollins.com/gs/products/datalinks/, December 2, 2001.

								