
Investigation of the effect of LDPC coding on the sparseness of a data source in AWGN channel conditions

J. Schoeman† and L.P. Linde‡
Department of Electrical, Electronic and Computer Engineering, University of Pretoria, Pretoria, 0002, South Africa
Email: † jschoeman@up.ac.za, ‡ llinde@postino.up.ac.za

Abstract— This paper implements a measure for sparseness in a simple antipodal modulation system employing LDPC coding and a message passing algorithm (MPA) based decoder in AWGN channel conditions. The authors proceed to show the effect of the LDPC encoder on the statistics and CDF of the source, as well as the PDF of the received data. Comparative results for various levels of sparseness and average signal-to-noise ratio are presented, as well as simulation results showing that a more detailed source analysis should be made before the assumption of equiprobable input symbols can be justified. Also presented are simulation results of bit error performance for the coded system, yielding Pe = 9.8 × 10^-5 at Eb/N0 = 5 dB for the (10, 5, 0.5) MPA decoder with 10 iterations, as well as an additional 0.1 dB gain when the number of iterations is increased to 100.

I. INTRODUCTION

Recent years have seen an incredible flourish in the field of lossy data compression techniques for multimedia types of data (such as images, video, audio and voice) specifically matched to such a source. However, given a scenario where the average code length of the compressed data stream is not matched to the entropy of the source, it is regularly found that the data stream transmitted to the channel appears sparse. This leads to non-optimal detection of the received signal, given that the usual assumption of equiprobable inputs to the channel is not strictly true. It has been shown recently that the decision region of the detected sparse signal can be adjusted to yield improved results, provided that a measure exists to quantify the sparseness of a data stream [1]. The derivation of this measure is revisited in Section II.

Once the sparseness of a data stream can be reliably quantified, we proceed to present a low density parity check (LDPC) channel coding setup in Section III. These codes have undergone a rebirth since their discovery in the 1960s by Gallager [2] and have been extensively researched over the last decade as an alternative to the very powerful turbo codes [3][4].

Finally, the simulation setup and comparative results are presented and discussed in Section IV, concluding remarks are presented in Section V, and some acknowledgements are made in Section VI.

II. A MEASURE FOR SPARSENESS

Consider a source with n symbols a_i ∈ {a_0, a_1, ..., a_{n-1}}, with each a_i constructed from any of m possible alphabet letters b_h ∈ {b_0, b_1, ..., b_{m-1}}. Each symbol a_i has a probability of occurrence Prob[a_i] = p(a_i). A measure of sparseness for a blocklength l → ∞ can then be defined as the ratio of p(a_i) to p(a_0), ..., p(a_{n-1}), with p(a_i) < p(a_j) for j ∈ [0, n-1], j ≠ i. This yields a measure of sparseness for symbols, given simply by

    \eta_s = \frac{p(a_i)}{\sum_{k=0}^{n-1} p(a_k)}    (1)

with a_i the symbol that is least likely to occur. This does not account for the source alphabet, b_h. Let us define the probability of alphabet letter b_h in symbol a_i as Prob[b_h^{a_i}] = p(b_h^{a_i}). The measure for sparseness on a letter level, rather than a symbol level, can now be given (weighted with p(a_i), and with b_h the letter least likely to occur) as

    \eta_\alpha = \frac{\sum_{k=0}^{n-1} p(a_k)\, p(b_h^{a_k})}{\sum_{j=0}^{m-1} \sum_{k=0}^{n-1} p(a_k)\, p(b_j^{a_k})}    (2)

with \sum_{j=0}^{m-1} \sum_{k=0}^{n-1} p(a_k)\, p(b_j^{a_k}) = 1.

This letter level (or bit level, if m = 2 for a binary source) measure is more practical, considering that detectors will generally receive and process data on a chip, sample or bit level, and not initially on a symbol level. This is true especially in circumstances where data is fragmented into frames in systems employing time division multiplexing. It is also important to note that an infinite blocklength is not a practical consideration, but if the blocklength chosen, l, is sufficiently large, η_α can be accurately approximated within the frames.

III. THE LDPC CODER AND DECODER

A linear LDPC error correcting code is best described by an (N, K) binary generator matrix, G^T. Assuming that the a-priori probability of s is uniformly distributed and independent of the probability of the noise vector, n, it is convenient to define the (N-K, N) parity check matrix, H, such that HG^T = 0. Encoding a K-bit input vector, s, using either G^T or H, yields an N-bit vector given by x = G^T · s. All arithmetic operations are restricted to GF(2). System performance is increased by allowing N → ∞.

Optimal decoding is performed by maximizing the posterior probability using Bayes' theorem as

    P(s|r, G) = \frac{P(r|s, G)\, P(s)}{P(r|G)}    (3)

The received vector, r, can be described as r = x + n, with n a zero mean Gaussian process with variance \sigma_n^2. The probability of receiving a 1, p_l^1, is given by the distribution f(n) = \frac{1}{1 + e^{-2r/\sigma_n^2}}, yielding p_l^0 = 1 - p_l^1.

The decoding of Eq. (3) can be accurately approximated by implementing a sum-product algorithm, also referred to as the message-passing algorithm (MPA) decoder. The implemented three-step decoding algorithm is described in detail in [4], with the main emphasis of this paper only on initializing {q_{ml}^0, q_{ml}^1} to {p_l^0, p_l^1}, after which a horizontal step is performed to determine

    r_{ml}^0 = \sum_{\{x_{l'} : l' \in \chi\}} P(z_m | x_l = 0, \{x_{l'} : l' \in \chi\}) \prod_{l' \in \chi} q_{ml'}^{x_{l'}}
    r_{ml}^1 = \sum_{\{x_{l'} : l' \in \chi\}} P(z_m | x_l = 1, \{x_{l'} : l' \in \chi\}) \prod_{l' \in \chi} q_{ml'}^{x_{l'}}    (4)

with χ = L(m) \ l.

Table 1. Source sparseness η_α, measured sparseness η̂_α and coded sparseness η_α^cd:

    η_α  | η̂_α     | η_α^cd
    0.1  | 0.09900 | 0.17000
    0.2  | 0.19937 | 0.29032
    0.3  | 0.29974 | 0.37573
    0.4  | 0.40115 | 0.44466
    0.5  | 0.49931 | 0.49949
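As an illustrative aside, the measures of Eqs. (1) and (2) can be computed in a few lines. This is a sketch, not the authors' code: the function names and the NumPy dependency are assumptions, and the example source is a binary (m = 2) stream of i.i.d. bits with P(1) = 0.1, grouped into two-bit symbols.

```python
import numpy as np

def eta_s(p_a):
    # Eq. (1): probability of the least likely symbol a_i,
    # relative to the sum of all symbol probabilities.
    p_a = np.asarray(p_a, dtype=float)
    return p_a.min() / p_a.sum()

def eta_alpha(p_a, p_b_given_a):
    # Eq. (2): letter-level sparseness.  p_b_given_a[k, j] is the
    # probability p(b_j^{a_k}) of letter b_j within symbol a_k, so the
    # joint weights p(a_k) p(b_j^{a_k}) sum to 1 over j and k.
    joint = np.asarray(p_a, dtype=float)[:, None] * np.asarray(p_b_given_a, dtype=float)
    per_letter = joint.sum(axis=0)   # marginal weight of each letter b_j
    return per_letter.min() / joint.sum()

# Binary source (m = 2), two-bit symbols 00, 01, 10, 11 with i.i.d.
# bits and P(1) = 0.1, so p(a) = (0.81, 0.09, 0.09, 0.01):
p_a = [0.81, 0.09, 0.09, 0.01]
p_b = [[1.0, 0.0], [0.5, 0.5], [0.5, 0.5], [0.0, 1.0]]
print(eta_s(p_a))            # 0.01
print(eta_alpha(p_a, p_b))   # 0.1
```

Because the joint weights sum to one, Eq. (2) reduces to the smallest marginal letter probability; the example returns η_α = 0.1, consistent with the first row of Table 1 (measured value 0.09900) up to finite-blocklength effects.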
Finally, a vertical step is performed by determining {q_{ml}^0, q_{ml}^1} for each value of l as

    q_{ml}^0 = \alpha_{ml}\, p_l^0 \prod_{m' \in M(l) \setminus m} r_{m'l}^0
    q_{ml}^1 = \alpha_{ml}\, p_l^1 \prod_{m' \in M(l) \setminus m} r_{m'l}^1    (5)

with \alpha_{ml} scaled such that q_{ml}^0 + q_{ml}^1 = 1. At this stage the posterior probabilities, {q_l^0, q_l^1}, can be approximated with

    q_l^0 = \alpha_l\, p_l^0 \prod_{m \in M(l)} r_{ml}^0
    q_l^1 = \alpha_l\, p_l^1 \prod_{m \in M(l)} r_{ml}^1    (6)

The algorithm now repeats from the horizontal step and will continue to do so until either a valid codeword is determined or the maximum number of iterations is exceeded.

Fig. 1. PDF for an LDPC coded system with η_α = 0.1, Eb/N0 = 0 dB (η_α^cd = 0.17000, r_d = 0.39641).

Fig. 2. PDF for an LDPC coded system with η_α = 0.3, Eb/N0 = 0 dB (η_α^cd = 0.37573, r_d = 0.12692).

IV. SIMULATIONS

A. Simulation Platform

A single simulation platform was implemented. This test platform was configured for BPSK modulation with rectangular pulse shaping and a symbol rate of 1000 symbols/s. The platform does not support DS/SSMA CDMA capabilities. A sparse data source is used with adjustable levels of sparseness. No source coding is performed, to maximize the average code length to entropy ratio. Channel coding is performed by employing an LDPC encoder and an optimal message passing algorithm decoder, as described in Section III. Traditionally, N would be allowed to be very big.
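To make the three-step schedule concrete, the decoding loop of Eqs. (4)-(6) can be sketched as below. This is an illustration rather than the paper's implementation: the function name is an assumption, NumPy is assumed, and the horizontal step uses the standard equivalent form from [4], in which the sum of Eq. (4) over even- and odd-parity configurations collapses to a product of the differences q_{ml'}^0 - q_{ml'}^1.

```python
import numpy as np

def mpa_decode(H, p1, max_iter=10):
    # Probability-domain message passing (sum-product) decoding.
    # H: (N-K, N) binary parity check matrix as a 0/1 integer array;
    # p1[l]: channel probability that bit l is a 1, per Section III.
    M, N = H.shape
    p1 = np.asarray(p1, dtype=float)
    p0 = 1.0 - p1
    q0, q1 = H * p0, H * p1            # initialise {q0_ml, q1_ml} to {p0_l, p1_l}
    x_hat = (p1 > 0.5).astype(int)     # fallback hard decision
    for _ in range(max_iter):
        # Horizontal step, Eq. (4): for each edge (m, l) the parity sum
        # collapses to a product of (q0 - q1) over chi = L(m) \ l.
        dq = q0 - q1
        r0, r1 = np.ones((M, N)), np.ones((M, N))
        for m in range(M):
            edges = np.flatnonzero(H[m])
            for l in edges:
                dr = np.prod(dq[m, edges[edges != l]])
                r0[m, l], r1[m, l] = 0.5 * (1 + dr), 0.5 * (1 - dr)
        # Vertical step, Eq. (5), with alpha_ml scaling q0 + q1 to 1.
        for l in range(N):
            checks = np.flatnonzero(H[:, l])
            for m in checks:
                others = checks[checks != m]
                a0 = p0[l] * np.prod(r0[others, l])
                a1 = p1[l] * np.prod(r1[others, l])
                q0[m, l], q1[m, l] = a0 / (a0 + a1), a1 / (a0 + a1)
        # Posterior probabilities, Eq. (6), tentative decision, parity test.
        Q0 = p0 * np.prod(r0, axis=0)  # off-edge r's stay at 1
        Q1 = p1 * np.prod(r1, axis=0)
        x_hat = (Q1 > Q0).astype(int)
        if not (H @ x_hat % 2).any():  # valid codeword: stop early
            break
    return x_hat
```

Run against the (10, 5) parity check matrix of Eq. (7) with channel probabilities p_l^1 from the AWGN model above, the loop exhibits the early-stopping behaviour described in the text: it exits as soon as H·x̂ = 0 in GF(2).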
However, the simulation platform was restricted to a very small (10, 5, R = 0.5) code, with parity check matrix given by

        [ 1 1 0 1 1 1 0 1 1 0 ]
        [ 1 1 1 0 1 0 1 1 0 1 ]
    H = [ 0 0 1 0 0 1 1 1 1 0 ]    (7)
        [ 1 0 0 1 1 1 1 0 0 1 ]
        [ 0 1 1 1 0 0 0 0 1 1 ]

The main motivation behind this restriction was to present results close to the worst case scenario, so that they can be applied directly to 3G/4G communication systems as an upper bound on bit error rate performance.

B. Simulation Results

Fig. 1 through Fig. 5 show the simulated results obtained from the test platform, while Table 1 presents some numerical data returned by the test platform.

Fig. 3. PDF for an LDPC coded system with η_α = 0.1, Eb/N0 = 5 dB (η_α^cd = 0.16989, r_d = 0.12542).

Fig. 4. PDF for an LDPC coded system with η_α = 0.3, Eb/N0 = 5 dB (η_α^cd = 0.3757, r_d = 0.040148).

From these figures and the table, a number of important observations can be made: (1) It is clear that the sparseness of the source, η_α, is not the same as the sparseness of the LDPC output, η_α^cd. (2) The approximate measured sparseness values, η̂_α, closely resemble the theoretical values, η_α. (3) It is clear from the measured coded sparseness results for Eq. (7) that the coded sparseness is monotonic over 0 < η_α < 0.5. (4) It is clear that the general assumption of selecting the decision region as r_d = 0 will not yield optimal results; optimal r_d values are given in the figures. (5) Various bit error performances from [2] and [4] show that a large blocksize provides for very powerful channel coding. It is clear that the much smaller (10, 5, R = 0.5) code is outperformed by its (1008, 504, R = 0.5) counterparts, but that it still provides adequate error protection at Pe = 9.8 × 10^-5 at Eb/N0 = 5 dB. (6) The (10, 5, R = 0.5) code is suitable for error correction in applications where a small blocksize is required, and will yield improved results (although still not as good as the (1008, 504, R = 0.5) codes) if the codelength is slightly increased to support multimedia and IP applications. (7) It is clear that by increasing the number of iterations of the MPA decoder, the bit error performance is improved. (8) The 0.1 dB gain obtained by increasing the number of iterations from 10 to 100 does not seem justified.

Fig. 5. Comparative BER plot for the LDPC coded system with various iterations. Curves shown: Uncoded (Theory); Coded (Theory), R = 1/2; Gallager (1008, 504, 1/2); MacKay MPA (1008, 504, 1/2); MPA (10, 5, 1/2) with 1, 10 and 100 iterations.

V. CONCLUSION

This paper investigated the effect on sparseness, as defined in Section II, introduced by a linear LDPC encoder. It has been shown via simulation that the coded sparseness does in fact differ from the sparseness of the source, and that the coded output tends to be less sparse than the original data source. It has also been shown that a (10, 5, R = 0.5) LDPC code can be implemented, but that the BER performance can only be described as conservative, with Pe = 9.8 × 10^-5 at Eb/N0 = 5 dB. This is still an incredible result, given that practical blocklengths of 10 bits are uncommon. The implemented MPA algorithm has been shown to be very efficient when implemented with 10 iterations or more. There is, however, room for optimization and simplification, which will be considered in a later research effort.

VI. ACKNOWLEDGEMENTS

The authors wish to thank Intel for its donation towards the development of the Ipercube distributed computing system at the University of Pretoria, as well as P. Greeff for his invaluable knowledge and assistance with the Ipercube.

REFERENCES

[1] J. Schoeman and L. Linde, "Performance investigation of a sparse data compression technique with AWGN channel effects." Submitted for IEEE Africon 2004, 2004.
[2] R. Gallager, Low-density Parity-Check Codes. PhD thesis, Cambridge, 1963.
[3] D. J. C. MacKay, "Information theory, inference and learning algorithms." Textbook in preparation, 1997.
[4] D. J. C. MacKay, "Good error correcting codes based on very sparse matrices." Submitted to IEEE Transactions on Information Theory. Available from http://wol.ra.phy.cam.ac.uk/, 1997.

Johan Schoeman holds a B.Eng (2001) and B.Eng Hons. (2002) in Electronic Engineering from the University of Pretoria. At present, he is studying towards an M.Eng degree in Electronic Engineering and is a full time lecturer in the Department of Electrical, Electronic and Computer Engineering (E,E&C Eng), Faculty of Engineering, University of Pretoria. His research interests are in SWR development, source and statistical channel coding techniques, and bandwidth efficient modulation techniques applicable for rural WCDMA 3G/4G systems.

Louis P. Linde holds a B.Eng Hons (1973) degree in Electrotechnical Engineering from the University of Stellenbosch and M.Eng (1980) and D.Eng (1983) degrees in Electronic Engineering from the University of Pretoria. He is presently the Group Head of Signal Processing and Telecommunications in the Department of Electrical, Electronic and Computer Engineering (E,E&C Eng), Faculty of Engineering, University of Pretoria, as well as Director of both the Centre for Radio and Digital Communication (CRDC) and the DigiMod Group in RE at UP.
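As a closing illustration, the uncoded BPSK baseline of Fig. 5 can be reproduced with a short Monte-Carlo sketch. This is an assumption-laden aside, not the authors' platform: the function name, seed and bit count are arbitrary, unit-energy antipodal symbols are assumed, and the estimate should approach the theoretical Q(\sqrt{2 E_b/N_0}) curve.

```python
import numpy as np

rng = np.random.default_rng(1)

def uncoded_bpsk_ber(ebn0_db, n_bits=200_000):
    # Monte-Carlo bit error rate of uncoded antipodal (BPSK)
    # signalling in AWGN -- the "Uncoded (Theory)" reference of Fig. 5.
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    bits = rng.integers(0, 2, n_bits)
    x = 1.0 - 2.0 * bits                 # map 0 -> +1, 1 -> -1 (Eb = 1)
    sigma = np.sqrt(1.0 / (2.0 * ebn0))  # noise std for one-sided N0
    r = x + sigma * rng.standard_normal(n_bits)
    return float(np.mean((r < 0).astype(int) != bits))

# At Eb/N0 = 5 dB the estimate should lie near Q(sqrt(2*10^0.5)) ~ 6e-3.
print(uncoded_bpsk_ber(5.0))
```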
