Advanced GPS signal acquisition

Viktor Przebinda

May 3, 2004

Abstract

Here we present an algorithm incorporating a number of advanced GPS signal acquisition techniques, implemented in a single MATLAB routine. This routine performs FFT-based correlation, frequency-domain carrier searching, decimation, multiple code period integration, and multiple code block integration with Tsui code frequency matching. Together, these features improve acquisition performance in terms of both speed and detection confidence.

1 Introduction

Detecting the digital spread spectrum encoding of GPS signals is an interesting task involving a number of advanced signal processing techniques. These techniques are too complex to perform with conventional analog circuitry alone and therefore require some form of digital processing. Some applications may require fast acquisition for real-time operation; others may need to acquire very weak signals. Enhanced 911 [2] for cellular telephones is one such example. We have written a single application that implements and integrates a wide range of acquisition techniques providing both superior performance and weak signal detection. In Section 2 we discuss FFT-based correlation [1], which dramatically decreases signal acquisition time by using the fast Fourier transform algorithm rather than a traditional serial search. In Section 3 we discuss carrier frequency searching through frequency-domain phase shifts instead of repetitive down conversion; as a result, our algorithm need only perform down conversion once per data set, at the center (zero-Doppler) frequency, and the Doppler frequency is then found by shifting in the frequency domain. Section 4 discusses decimation of the GPS signal, which both reduces the size of the data set and optimizes the data set size for the FFT algorithm.
Section 5 discusses how integration over multiple code blocks is used to detect weak signals, along with the limitation that a possible data bit transition can cause negative energy to be integrated. Section 6 describes a weak signal detection method that does not suffer from negative energy integration and is limited only by the change in Doppler due to satellite movement. Section 7 presents and discusses performance measurements of the algorithm. Finally, Section 8 explains the implementation of our MATLAB signal acquisition routine.

2 FFT based correlation

Traditional, or serial, acquisition, used in many embedded applications, performs correlation through exhaustive phase searching. For each candidate code phase p in [1, l], a locally generated C/A code is multiplied against base-band samples from a GPS antenna and summed:

    sum over j = 1..l of prn(j) * gpsdata(j + p)

This process is computationally expensive, taking on the order of O(n^2) time in the length of the C/A code. However, it is typically the method of choice for embedded applications since it requires very little memory. A new acquisition algorithm proposed in the early 90's [1] significantly speeds up the acquisition process by using the efficient fast Fourier transform. The FFT correlation algorithm operates as follows:

1) A number of samples equal to an integer multiple of the C/A code length is loaded.
2) The data is down-converted to base-band: basedata = downconvert(gpsdata)
3) An FFT moves the data set into the frequency domain: basedata = fft(basedata)
4) A PRN code is generated with the same sampling rate as the data set.
5) The imaginary component of the PRN code is set equal to the real component: prn = prn + i*prn
6) The PRN code is converted into the frequency domain and conjugated: prn = conj(fft(prn))
7) The basedata is multiplied element-wise by the PRN: basedata .* prn
8) The inverse FFT is computed and the absolute value taken: abs(ifft(basedata .* prn))

The reduced acquisition time of the FFT method, O(n ln(n)) versus O(n^2), results in a very significant speedup. For a three-sample-per-chip data set (3069 samples) this means a difference of 3069^2 - 3069 * log2(3069) = 9383211 operations, which is 265 times faster than a traditional serial search. However, the complexity of the FFT, as well as the memory needed to hold data sets while processing, has made this method unpopular on embedded devices where hardware is a limiting factor.

3 Searching for carrier in the frequency domain

Due to clock drift in GPS front-end samplers, as well as Doppler introduced by moving GPS satellites, searching for a GPS satellite involves a search over frequency as well as code phase. The input signal must be mixed with a range of possible carrier frequencies covering a given band to match the effects of clock drift and Doppler. Frequency mixing in software is expensive, since it involves multiplying every data point in the set, and the newly down-converted data must then be moved into the frequency domain with another FFT before correlation can be performed. We avoid this redundant frequency mixing by shifting data in the frequency domain rather than mixing in the time domain, as shown in Figure 1.

Figure 1: Shifting in the frequency domain is equivalent to wave multiplication in the time domain.

4 Decimation

We have found that it can be useful to process data at a different sampling frequency than the one given in a data set. One reason is to obtain a power-of-two number of samples, maximizing the efficiency of the FFT algorithm. Another is simply to reduce the amount of data that must be processed for each frequency bin. For example, when using an inaccurate clock it may be necessary to perform a large number of frequency search steps even to find a strong signal.
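The FFT-based correlation of Section 2, combined with the frequency-domain carrier search of Section 3, can be sketched in a few lines. The following is an illustrative Python/NumPy analogue of the idea, not the MATLAB routine itself; the PRN here is a random +/-1 placeholder rather than a real C/A code, and the Doppler offset is assumed to fall on a whole FFT bin:

```python
import numpy as np

def fft_correlate(basedata_fft, prn_conj_fft, doppler_bin):
    """One FFT-based correlation at one carrier-frequency hypothesis.

    basedata_fft : FFT of the base-band samples, down-converted once
                   at the zero-Doppler center frequency
    prn_conj_fft : conjugated FFT of the locally generated PRN replica
    doppler_bin  : carrier offset in whole FFT bins; rolling the signal
                   spectrum here replaces a time-domain re-mixing step
    """
    shifted = np.roll(basedata_fft, -doppler_bin)       # frequency-domain carrier search
    return np.abs(np.fft.ifft(shifted * prn_conj_fft))  # circular correlation

# Illustrative use with a random +/-1 stand-in for a sampled C/A code.
rng = np.random.default_rng(1)
prn = rng.choice([-1.0, 1.0], size=1023)
true_phase, true_bin = 517, 7
n = np.arange(prn.size)
carrier = np.exp(2j * np.pi * true_bin * n / prn.size)  # residual Doppler
signal = np.roll(prn, true_phase) * carrier
corr = fft_correlate(np.fft.fft(signal), np.conj(np.fft.fft(prn)), true_bin)
```

The peak of `corr` lands at the code phase of the signal, and the same precomputed `basedata_fft` is reused for every Doppler bin, which is exactly what makes the frequency-domain search cheap.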
The fftcorrelate routine can re-sample a data set to a desired number of samples per chip, as shown in Figure 2, then round the block size up to the nearest power of two to maximize the efficiency of the FFT algorithm. This is done using MATLAB's resample routine, available in the Signal Processing Toolbox. The routine takes from the user the desired number of samples per chip, spc, and computes from it the number of samples per block, which is given by:

    desiredNumSamples = 2^ceil(log2(spc * CAlength * codeperiods))

where spc is the desired number of samples per chip, codeperiods is the desired number of code periods, and CAlength is 1023.

Figure 2: Decimation resamples a given data set to a specified number of samples, interpolating the new samples from the original.

5 Integrating over multiple code periods

Weak signals require integration over multiple code periods to detect. Ideally, we would like to be able to integrate over an arbitrary number of code periods. Unfortunately this is not possible, since actual data is encoded in the polarity of the C/A code: if the polarity changes during integration, the result is destructive interference. For GPS, the C/A code polarity may change once every 20 code periods. For this reason our acquisition method supports a maximum integration of two adjacent sets of 10 code periods each. Since at most one data bit transition can occur over 20 code periods, by summing over two adjacent 10-code-period blocks we are assured that at least one set is free of a data bit flip. Figure 3 illustrates this concept. Since time-domain down conversion must be performed on both code period sets, we offset the second set's down-conversion center frequency by half the frequency search step. This way, if no data bit transition occurs in either of the two adjacent code block sets, we gain the benefit of a better frequency estimate as well as greater correlation strength.
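The two-adjacent-set scheme can be illustrated with a short sketch. This is a hypothetical Python/NumPy illustration, assuming we already hold one signed correlation graph per code period; it is not taken from the MATLAB routine:

```python
import numpy as np

def best_half_integration(period_corrs):
    """Integrate 20 per-code-period correlation graphs as two adjacent
    sets of 10. A data bit can flip the code polarity at most once per
    20 code periods, so at least one of the two sums is free of
    destructive interference.

    period_corrs : array of shape (20, n), one signed correlation
                   graph per code period (hypothetical layout)
    """
    first = period_corrs[:10].sum(axis=0)   # code periods 0-9
    second = period_corrs[10:].sum(axis=0)  # code periods 10-19
    # keep whichever set produced the stronger peak
    return first if first.max() >= second.max() else second

# Toy example: a bit transition at period 15 wrecks the second set only.
graphs = np.zeros((20, 64))
graphs[:15, 20] = 1.0   # correlation peak at phase 20, polarity +1
graphs[15:, 20] = -1.0  # data bit flip inverts the remaining periods
best = best_half_integration(graphs)
```

In the toy example the second set's contributions cancel (five positive and five negative peaks), while the first set integrates cleanly to a peak of 10 at the true phase.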
Figure 3: 10 ms maximum integration time. A data bit transition occurs at most once every 20 C/A codes; by integrating twice over no more than 10 code periods, we are assured that at least one set contains no data bit flip.

6 Summing over multiple blocks

As discussed in the previous section, we are limited to integrating no more than 10 ms worth of data to acquire a signal. For some applications this may not be enough; we would like a method that can integrate over several seconds worth of data. Our algorithm provides this functionality by summing multiple adjacent correlation graphs, as shown in Figure 4. Since the peak in the correlation always occurs at the same phase regardless of its magnitude relative to the noise, we can sum multiple adjacent graphs and expect the signal strength to grow faster than the noise, eventually rising out of it.

Figure 4: Summing over multiple blocks is done by adding consecutive correlation graphs (top). These graphs come from the odd- and even-numbered blocks in the data set (bottom), each block being 1 to 10 code periods, so that we are assured one of the summations will be free of any destructive interference due to possible data bit flips. This holds only when the number of code periods over which to integrate divides evenly into 20.

One obstacle to overcome is the change in code rate caused by Doppler. Over 10 ms of data the change is negligible, amounting to less than one sample at 3 samples per chip. Over multiple seconds of data, however, the change is very apparent and must be dealt with. One option is to regenerate the PRN for each frequency bin, but this is too computationally expensive for a practical application. Our algorithm instead uses the Tsui method, which shifts each correlation graph appropriately before summing it with the others. The amount by which to shift each correlation graph is a function of the sample number and the adjusted code rate.
The adjusted code rate, coderateadj, is computed from the carrier frequency estimate:

    coderateadj = 1.023e6 * (currentfreq - Iforig) / 1575.42e6

where currentfreq is the current frequency step, 1.023e6 is the number of C/A chips in one second, Iforig is the user-specified center frequency, and 1575.42e6 is the standard GPS carrier frequency. From this the new code rate is given by

    coderatenew = 1.023e6 + coderateadj

and the number of samples by which the code rate will be off each second is:

    fs * coderateadj / coderatenew

For each code block processed, we find the starting sample number and compute the number of seconds into the data set for that sample, which is given simply by:

    sample / fs

Multiplying these last two expressions together yields the number of samples by which to shift the correlation graph for each block of data:

    (sample / fs) * (fs * coderateadj / coderatenew) = sample * coderateadj / coderatenew

Conveniently, this adjustment does not depend on the sampling frequency and is therefore easy to combine with decimation. This method allows us to sum multiple seconds worth of data reliably, limited only by the changing Doppler frequency of the satellite. In a practical application, however, we find that the method is not effective beyond roughly two seconds into a data set, despite the fact that phase is matched across all correlation graphs.
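Putting the expressions above together, the shift applied to each correlation graph might be computed as in the following sketch (Python; variable names mirror the quantities defined above, and the function is an illustration rather than the routine's actual code):

```python
def correlation_shift(sample, current_freq, if_orig,
                      code_rate=1.023e6, carrier_freq=1575.42e6):
    """Samples by which to shift a block's correlation graph before
    summation, compensating the Doppler-induced code-rate change.

    sample       : starting sample number of the code block
    current_freq : carrier frequency estimate for this search step
    if_orig      : user-specified center (zero-Doppler) frequency
    """
    code_rate_adj = code_rate * (current_freq - if_orig) / carrier_freq
    code_rate_new = code_rate + code_rate_adj
    # (sample / fs) seconds into the set, times fs * adj / new samples of
    # drift per second: fs cancels, so the result is independent of the
    # sampling frequency and composes cleanly with decimation.
    return sample * code_rate_adj / code_rate_new
```

The shift grows linearly with the starting sample number and changes sign with the Doppler offset, which matches the drift behavior described above.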
Figure 5: Eight-second block summation for PRN 11 shows that block summation (top) is ineffective after roughly two seconds despite accurate phase compensation (bottom). The four panels plot correlation strength against phase (sample), signal strength over data block, phase change over time, and phase compensation over time.

Figure 5 illustrates this effect. The top right plot shows the cumulative signal strength, peak/nextpeak, over code blocks. The strength reaches a peak and then levels off. The reason for this is unknown; we speculate that the noise is catching up, so that no overall strength improvement is visible. Furthermore, the maximum point in the correlation graph, around 4.5e6, is roughly consistent with the expected value computed simply by multiplying the peak over the first code block by the number of code blocks summed, 400:

    1.062e4 * 400 = 4.248e6

7 Performance

Figure 6 presents a general overview of the CPU utilization within fftcorrelate for a few test cases covering most of fftcorrelate's functionality. For trivial data sets such as the first, most of the computation time is dominated by the resample method, which was introduced in the first place to improve performance. As the data set size increases through more code periods, blocks, and samples per chip, computation becomes dominated by the inverse Fourier transform and by the circular shifts used for frequency-domain carrier searching.
Figure 6: CPU utilization (percent of time spent in resample, cacode2, circshift, downconvert, fft, and ifft) for a set of test cases. Each test case is read as number of code periods, desired samples per chip, carrier search space.

In comparison to a standard serial search algorithm, our algorithm is 3,240 times faster when given identical parameters.

8 Implementation

Here we present a brief overview of the implementation details. The routine begins by opening the input file and setting a few global variables and constants. The number of samples is then computed as

    desiredNumSamples = 2^ceil(log2(spc * CAlength * codeperiods))

where spc is the desired number of samples per chip, codeperiods is the desired number of code periods, and CAlength is 1023. The first loop nesting, which cycles over the two adjacent sets of code periods, is entered. The number of frequency search steps is then computed and stored as

    Ifsteps = round(2 * Ifspace / Ifstepsize)

A second loop nesting is entered to compute and store blocks for summation. For each iteration of the loop, a set of code periods is read from the input file, down-converted to base-band using the estimated carrier frequency, resampled to the specified number of samples per chip, converted into the frequency domain via FFT, and stored for later use. Next, the local PRN code is generated at the new sampling frequency, converted into the frequency domain, and conjugated. The algorithm then begins the frequency search process. For each frequency search step, the code rate for that frequency is computed. A third loop nesting then takes each of the previously stored blocks, multiplies it by the PRN, converts it into the time domain via IFFT, shifts it according to the previously computed code frequency, and sums them together.
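The block-size and frequency-step computations described in this section can be sketched as follows (Python; `desired_num_samples` assumes the round-up-to-a-power-of-two behavior described in Section 4):

```python
import math

def desired_num_samples(spc, codeperiods, ca_length=1023):
    """Block size rounded up to the nearest power of two, so the FFT
    operates at its most efficient length."""
    return 2 ** math.ceil(math.log2(spc * ca_length * codeperiods))

def if_steps(if_space, if_stepsize):
    """Number of carrier search steps covering the +/- if_space band."""
    return round(2 * if_space / if_stepsize)
```

For example, three samples per chip over one code period gives 3 * 1023 = 3069 raw samples, which rounds up to a 4096-point FFT.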
The signal strength of the resulting correlation graph is then computed by dividing the highest peak by the next highest peak within +/- two chips. This strength is compared to a running "best" strength; if it is larger than any previously computed strength, then its carrier, phase, strength, and correlation graph are saved as the new "best". Each of the data blocks is then shifted for the next carrier frequency test, and the loop repeats. After all possible carrier frequencies have been tested over each of the two adjacent block sets, the function returns the best recorded results to the user.

References

[1] "New fast GPS code-acquisition technique using FFT," Electronics Letters, vol. 27, no. 2, 17 January 1991.

[2] FCC Enhanced 911, http://www.fcc.gov/911/enhanced/

[3] Dr. Akos, dma@colorado.edu