
LDPC Codes – a brief Tutorial

Bernhard M.J. Leiner, Stud.ID.: 53418L
bleiner@gmail.com
April 8, 2005

1 Introduction

Low-density parity-check (LDPC) codes are a class of linear block codes. The name comes from the characteristic of their parity-check matrix, which contains only a few 1's in comparison to the amount of 0's. Their main advantage is that they provide a performance which is very close to the capacity for a lot of different channels, together with linear time complexity algorithms for decoding. Furthermore, they are suited for implementations that make heavy use of parallelism.

They were first introduced by Gallager in his PhD thesis in 1960. But due to the computational effort in implementing encoder and decoder for such codes, and due to the introduction of Reed-Solomon codes, they were mostly ignored until about ten years ago.

2 Representations for LDPC codes

Basically there are two different possibilities to represent LDPC codes. Like all linear block codes they can be described via matrices.





The second possibility is a graphical representation. Let's look at an example for a low-density parity-check matrix first. The matrix defined in equation (1) is a parity check matrix with dimension m x n for an (8, 4) code. We can now define two numbers describing this matrix: w_r for the number of 1's in each row and w_c for the number of 1's in each column. For a matrix to be called low-density, the two conditions w_c << m and w_r << n must be satisfied. In order to achieve this, the parity check matrix should usually be very large, so the example matrix can't really be called low-density.

        | 0 1 0 1 1 0 0 1 |
    H = | 1 1 1 0 0 1 0 0 |    (1)
        | 0 0 1 0 0 1 1 1 |
        | 1 0 0 1 1 0 1 0 |

[Figure 1: Tanner graph corresponding to the parity check matrix in equation (1), with v-nodes c0 ... c7 and c-nodes f0 ... f3. The marked path c2 -> f1 -> c5 -> f2 -> c2 is an example for a short cycle. Those should usually be avoided since they are bad for decoding.]

Graphical Representation

Tanner introduced an effective graphical representation for LDPC codes (the Tanner graph, 1981).
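The row and column weights w_r and w_c of the example matrix can be checked directly. A small illustrative sketch (not part of the tutorial), in plain Python:

```python
# The parity check matrix H from equation (1), as a list of rows.
H = [
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
]

w_r = [sum(row) for row in H]        # number of 1's in each row
w_c = [sum(col) for col in zip(*H)]  # number of 1's in each column
print(w_r)  # [4, 4, 4, 4]
print(w_c)  # [2, 2, 2, 2, 2, 2, 2, 2]
```

With m = 4 and n = 8 the weights are clearly not much smaller than the matrix dimensions, which is why the text notes that this toy example is not really low-density.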

Not only do these graphs provide a complete representation of the code, they also help to describe the decoding algorithm, as explained later on in this tutorial.

Tanner graphs are bipartite graphs. That means that the nodes of the graph are separated into two distinctive sets, and edges only connect nodes of two different types. The two types of nodes in a Tanner graph are called variable nodes (v-nodes) and check nodes (c-nodes).

Figure 1 is an example for such a Tanner graph and represents the same code as the matrix in equation (1). The creation of such a graph is rather straightforward. It consists of m check nodes (the number of parity bits) and n variable nodes (the number of bits in a codeword). Check node f_i is connected to variable node c_j if the element h_ij of H is a 1.

2.1 Regular and irregular LDPC codes

An LDPC code is called regular if w_c is constant for every column and w_r = w_c * (n/m) is also constant for every row.
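The construction rule "f_i is connected to c_j if h_ij = 1" translates directly into code. A minimal sketch (the edge representation is illustrative, not from the tutorial):

```python
H = [
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
]

# Build the Tanner graph: one edge (f_i, c_j) for every 1 in H.
edges = [(f"f{i}", f"c{j}")
         for i, row in enumerate(H)
         for j, h_ij in enumerate(row) if h_ij]

# The neighbours of check node f0 are c1, c3, c4 and c7 (row 0 of H).
print([c for f, c in edges if f == "f0"])  # ['c1', 'c3', 'c4', 'c7']
```

Since H contains sixteen 1's, the graph has sixteen edges, matching figure 1.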

The example matrix from equation (1) is regular with w_c = 2 and w_r = 4. It's also possible to see the regularity of this code while looking at the graphical representation: there is the same number of incoming edges for every v-node and also for every c-node. If H is low density but the number of 1's in each row or column isn't constant, the code is called an irregular LDPC code.

2.2 Constructing LDPC codes

Several different algorithms exist to construct suitable LDPC codes. Gallager himself introduced one. Furthermore, MacKay proposed one to semi-randomly generate sparse parity check matrices. This is quite interesting since it indicates that constructing good performing LDPC codes is not a hard problem. In fact, completely randomly chosen codes are good with a high probability. The problem that will arise is that the encoding complexity of such codes is usually rather high.

2.3 Performance & Complexity

Before describing decoding algorithms in section 3, I would like to explain why all this effort is needed.
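To illustrate the semi-random idea in its most naive form, one can simply place w_c ones at random rows in every column. This sketch is only an illustration of the principle; it is not MacKay's actual construction, which additionally balances row weights and avoids short cycles:

```python
import random

def random_sparse_H(n, m, wc, seed=0):
    """Naive semi-random construction: place wc ones in each of the
    n columns at randomly chosen rows. Real constructions (Gallager,
    MacKay) add further constraints, e.g. balanced row weights and
    no short cycles; this only sketches the basic idea."""
    rng = random.Random(seed)
    H = [[0] * n for _ in range(m)]
    for j in range(n):
        for i in rng.sample(range(m), wc):  # wc distinct rows
            H[i][j] = 1
    return H

H = random_sparse_H(n=16, m=8, wc=2)
print([sum(col) for col in zip(*H)])  # every column weight is 2
```

Every column weight is exactly w_c by construction, while the row weights are only equal on average, which is exactly the kind of imbalance real constructions correct for.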

The feature of LDPC codes to perform near the Shannon limit (1) of a channel exists only for large block lengths. For example, there have been simulations that perform very close to the Shannon limit at a bit error rate of 10^-6 with a block length of 10^7. An interesting fact is that those high performance codes are irregular.

The large block length also results in large parity-check and generator matrices. The complexity of multiplying a codeword with a matrix depends on the amount of 1's in the matrix. If we put the sparse matrix H in the form [P^T I] via Gaussian elimination, the generator matrix G can be calculated as G = [I P]. The sub-matrix P is generally not sparse, so the encoding complexity will be quite high.

(1) It was proved that reliable communication over a noisy channel is only possible with code rates below a certain limit, the channel capacity.

Since the complexity grows in O(n^2), even sparse matrices don't result in a good performance if the block length gets very high.
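The H -> G procedure can be sketched over GF(2). Note that the rows of the example matrix from equation (1) sum to zero over GF(2), i.e. it is not full rank, so a full-rank (7, 4) Hamming parity check matrix is used for the demonstration instead; the helper name is illustrative:

```python
def generator_from_H(H):
    """Bring H into the form [P^T | I] by Gaussian elimination over
    GF(2) (row operations only) and return G = [I | P]. Assumes the
    last m columns can be reduced to the identity without column
    swaps, which holds for the example below."""
    m, n = len(H), len(H[0])
    k = n - m
    H = [row[:] for row in H]
    for i in range(m):
        col = k + i
        pivot = next(r for r in range(i, m) if H[r][col])  # find a 1
        H[i], H[pivot] = H[pivot], H[i]
        for r in range(m):
            if r != i and H[r][col]:
                H[r] = [a ^ b for a, b in zip(H[r], H[i])]  # add rows mod 2
    # H is now [P^T | I]; G = [I | P] with P the transpose of P^T.
    P_T = [row[:k] for row in H]
    return [[int(i == j) for j in range(k)] + [P_T[r][i] for r in range(m)]
            for i in range(k)]

# A full-rank (7, 4) Hamming parity check matrix (rows in scrambled order).
H_demo = [[0, 1, 1, 1, 0, 0, 1],
          [1, 1, 0, 1, 1, 0, 0],
          [1, 0, 1, 1, 0, 1, 0]]
G = generator_from_H(H_demo)
# Every row of G satisfies all parity checks: G * H^T = 0 (mod 2).
assert all(sum(g * h for g, h in zip(grow, hrow)) % 2 == 0
           for grow in G for hrow in H_demo)
```

The resulting k x n generator matrix is dense in its P part even though H was sparse, which is exactly the encoding complexity problem described above.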

So iterative decoding (and encoding) algorithms are used. Those algorithms perform local calculations and pass those local results on via messages. This step is typically repeated several times.

The term "local calculations" already indicates that a divide and conquer strategy, which separates a complex problem into manageable sub-problems, is realized. A sparse parity check matrix helps these algorithms in several ways. First, it keeps the local calculations simple and also reduces the complexity of combining the sub-problems by reducing the number of messages needed to exchange all the information. Furthermore, it was observed that iterative decoding algorithms of sparse codes perform very close to the optimal maximum likelihood decoder.

3 Decoding LDPC codes

The algorithm used to decode LDPC codes was discovered independently several times and as a matter of fact comes under different names.

The most common ones are the belief propagation algorithm, the message passing algorithm and the sum-product algorithm.

In order to explain this algorithm, a very simple variant which works with hard decisions will be introduced first. Later on, the algorithm will be extended to work with soft decisions, which generally leads to better decoding results. Only binary symmetric channels will be considered.

3.1 Hard-decision decoding

The algorithm will be explained on the basis of the example code already introduced in equation (1) and figure 1. An error free received codeword would be e.g. c = [1 0 0 1 0 1 0 1]. Let's suppose that we have a BSC channel and receive the codeword with one error: bit c1 flipped to 1.

1. In the first step all v-nodes c_i send a message to their (always 2 in our example) c-nodes f_j containing the bit they believe to be the correct one for them. At this stage the only information a v-node c_i has is the corresponding received i-th bit of c, y_i. That means, for example, that c0 sends a message containing 1 to f1 and f3, node c1 sends messages containing y1 (1) to f0 and f1, and so on.

    f0  received:  c1 -> 1, c3 -> 1, c4 -> 0, c7 -> 1
        sent:      0 -> c1, 0 -> c3, 1 -> c4, 0 -> c7
    f1  received:  c0 -> 1, c1 -> 1, c2 -> 0, c5 -> 1
        sent:      0 -> c0, 0 -> c1, 1 -> c2, 0 -> c5
    f2  received:  c2 -> 0, c5 -> 1, c6 -> 0, c7 -> 1
        sent:      0 -> c2, 1 -> c5, 0 -> c6, 1 -> c7
    f3  received:  c0 -> 1, c3 -> 1, c4 -> 0, c6 -> 0
        sent:      1 -> c0, 1 -> c3, 0 -> c4, 0 -> c6

    Table 1: overview over messages received and sent by the c-nodes in step 2 of the message passing algorithm.

2. In the second step every check node f_j calculates a response to every connected variable node.

The response message contains the bit that f_j believes to be the correct one for this v-node c_i, assuming that the other v-nodes connected to f_j are correct.

In other words: if you look at the example, every c-node f_j is connected to 4 v-nodes. So a c-node f_j looks at the messages received from three v-nodes and calculates the bit that the fourth v-node should have in order to fulfill the parity check equation. Table 1 gives an overview of this step.

Important is that this might also be the point at which the decoding algorithm terminates. This will be the case if all check equations are fulfilled. We will later see that the whole algorithm contains a loop, so another possibility to stop would be a threshold for the number of loops.

3. Next phase: the v-nodes receive the messages from the check nodes and use this additional information to decide if their originally received bit is OK.

A simple way to do this is a majority vote. Coming back to our example, that means that each v-node has three sources of information concerning its bit: the original bit received and two suggestions from the check nodes. Table 2 illustrates this step. Now the v-nodes can send another message with their (hard) decision for the correct value to the check nodes.

4. Go to step 2.

In our example, the second execution of step 2 would terminate the decoding process, since c1 has voted for 0 in the last step. This corrects the transmission error and all check equations are now satisfied.

    v-node  y_i received  messages from check nodes  decision
    c0      1             f1 -> 0, f3 -> 1           1
    c1      1             f0 -> 0, f1 -> 0           0
    c2      0             f1 -> 1, f2 -> 0           0
    c3      1             f0 -> 0, f3 -> 1           1
    c4      0             f0 -> 1, f3 -> 0           0
    c5      1             f1 -> 0, f2 -> 1           1
    c6      0             f2 -> 0, f3 -> 0           0
    c7      1             f0 -> 0, f2 -> 1           1

    Table 2: Step 3 of the described decoding algorithm. The v-nodes use the answer messages from the c-nodes to perform a majority vote on the bit value.

3.2 Soft-decision decoding

The above description of hard-decision decoding was mainly for educational purposes, to get an overview of the idea.
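Before moving on, the hard-decision steps 1-4 above can be put together in a short sketch (illustrative code, not from the tutorial), reproducing the example where bit c1 is flipped:

```python
# Parity check matrix H from equation (1) and the received word with
# bit c1 flipped.
H = [
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
]
y = [1, 1, 0, 1, 0, 1, 0, 1]

def hard_decision_decode(H, y, max_loops=10):
    """Majority-vote message passing as described in section 3.1."""
    n = len(H[0])
    c = list(y)
    for _ in range(max_loops):
        # Terminate as soon as all check equations are fulfilled.
        if all(sum(h * b for h, b in zip(row, c)) % 2 == 0 for row in H):
            return c
        # Step 2: every c-node f_j tells each neighbour c_i which bit
        # would satisfy its parity check given the other neighbours.
        suggestions = [[] for _ in range(n)]
        for row in H:
            nbrs = [i for i in range(n) if row[i]]
            parity = sum(c[i] for i in nbrs) % 2
            for i in nbrs:
                suggestions[i].append(parity ^ c[i])  # XOR of the others
        # Step 3: majority vote between y_i and the suggestions.
        for i in range(n):
            votes = [y[i]] + suggestions[i]
            c[i] = int(2 * sum(votes) > len(votes))
        # Step 4: go to step 2.
    return c

print(hard_decision_decode(H, y))  # [1, 0, 0, 1, 0, 1, 0, 1]
```

The suggestions computed in the first loop iteration match Table 1 and Table 2, and the second pass through the check equations terminates the decoder, just as described in the text.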

Soft-decision decoding of LDPC codes, which is based on the concept of belief propagation, yields a better decoding performance and is therefore the preferred method. The underlying idea is exactly the same as in hard-decision decoding. Before presenting the algorithm, let's introduce some notation:

- P_i = Pr(c_i = 1 | y_i)
- q_ij is a message sent by the variable node c_i to the check node f_j. Every message always contains the pair q_ij(0) and q_ij(1), which stands for the amount of belief that y_i is a 0 or a 1.
- r_ji is a message sent by the check node f_j to the variable node c_i. Again, there is an r_ji(0) and an r_ji(1) that indicate the (current) amount of belief that y_i is a 0 or a 1.

The step numbers in the following description correspond to the hard-decision case.

1. All variable nodes send their q_ij messages. Since no other information is available at this step, q_ij(1) = P_i and q_ij(0) = 1 - P_i.

[Figure 2: a) illustrates the calculation of r_ji(b), b) the calculation of q_ij(b).]

2. The check nodes calculate their response messages r_ji:

    r_ji(0) = 1/2 + 1/2 * prod_{i' in V_j\i} (1 - 2 q_i'j(1))    (3)

    r_ji(1) = 1 - r_ji(0)                                        (4)

So they calculate the probability that there is an even number of 1's among the variable nodes except c_i (this is exactly what V_j\i means).
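Equations (3) and (4) are easy to evaluate directly. A small sketch, with illustrative belief values that are not taken from the tutorial:

```python
def r_ji(q1_others):
    """Equations (3) and (4): r_ji(0) is the probability that the
    v-nodes connected to f_j, except c_i, contain an even number of
    1's. q1_others holds q_i'j(1) for every neighbour i' in V_j\\i."""
    prod = 1.0
    for q1 in q1_others:
        prod *= 1.0 - 2.0 * q1
    r0 = 0.5 + 0.5 * prod
    return r0, 1.0 - r0  # (r_ji(0), r_ji(1))

# Illustrative beliefs q_i'j(1) for f1's neighbours other than c1,
# i.e. c0, c2 and c5:
r0, r1 = r_ji([0.9, 0.1, 0.9])
print(round(r0, 3), round(r1, 3))  # 0.756 0.244
```

Note how two neighbours that are confidently 1 and one that is confidently 0 make an even number of 1's among the others likely, so the check node pushes c1 towards 0 — the soft analogue of the hard-decision vote in Table 2.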

