
Types of Coding - Purdue University College of Engineering


Transcription of Types of Coding - Purdue University College of Engineering

C. A. Bouman: Digital Image Processing - January 9, 2023

Types of Coding

• Source Coding - Code data to more efficiently represent the information
  – Reduces "size" of data
  – Analog - encode analog source data into a binary format
  – Digital - reduce the size of digital source data
• Channel Coding - Code data for transmission over a noisy communication channel
  – Increases "size" of data
  – Digital - add redundancy to identify and correct errors
  – Analog - represent digital values by analog signals
• Complete "Information Theory" was developed by Claude Shannon

Digital Image Coding

• Images from a 6 MPixel digital camera are 18 MBytes each
• Input and output images are digital
• Output image must be smaller (about 500 kBytes)
• This is a digital source coding problem

Two Types of Source (Image) Coding

• Lossless coding (entropy coding)
  – Data can be decoded to form exactly the same bits
  – Used in "zip"
  – Can only achieve moderate compression (2:1 - 3:1) for natural images
  – Can be important in certain applications such as medical imaging
• Lossy source coding
  – Decompressed image is visually similar, but has been changed
  – Used in JPEG and MPEG
  – Can achieve much greater compression (20:1 - 40:1) for natural images
  – Uses entropy coding

Entropy

• Let X be a random variable taking values in the set {0, ..., M-1} such that p_i = P{X = i}
• Then we define the entropy of X as
  H(X) = - sum_{i=0}^{M-1} p_i log2 p_i = -E[ log2 p_X ]
  H(X) has units of bits

Conditional Entropy and Mutual Information

• Let (X, Y) be random variables taking values in the set {0, ..., M-1}^2 such that
  p(i, j) = P{X = i, Y = j}
  p(i|j) = p(i, j) / sum_{k=0}^{M-1} p(k, j)
• Then we define the conditional entropy of X given Y as
  H(X|Y) = - sum_{i=0}^{M-1} sum_{j=0}^{M-1} p(i, j) log2 p(i|j) = -E[ log2 p(X|Y) ]
• The mutual information between X and Y is given by
  I(X; Y) = H(X) - H(X|Y)
  The mutual information is the reduction in uncertainty of X given knowledge of Y.

Entropy (Lossless) Coding of a Sequence

• Let Xn be an i.i.d. sequence of random variables taking values in the set {0, ..., M-1} such that P{Xn = m} = p_m
• Xn for each n is known as a symbol
• How do we represent Xn with a minimum number of bits per symbol?
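A minimal Python sketch of these definitions, computing H(X), H(X|Y), and I(X;Y) from a joint pmf; the function names and the example joint pmf are assumptions for illustration, not part of the notes.

```python
import numpy as np

def entropy(p):
    """H(X) = -sum_i p_i log2 p_i, skipping zero-probability entries."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

def conditional_entropy(p_xy):
    """H(X|Y) = -sum_{i,j} p(i,j) log2 p(i|j) for a joint pmf p_xy[i, j]."""
    p_y = p_xy.sum(axis=0)                      # marginal of Y
    h = 0.0
    for i in range(p_xy.shape[0]):
        for j in range(p_xy.shape[1]):
            if p_xy[i, j] > 0:
                h -= p_xy[i, j] * np.log2(p_xy[i, j] / p_y[j])
    return h

def mutual_information(p_xy):
    """I(X;Y) = H(X) - H(X|Y), the reduction in uncertainty of X given Y."""
    return entropy(p_xy.sum(axis=1)) - conditional_entropy(p_xy)

# Hypothetical joint pmf, for illustration only.
p_xy = np.array([[0.30, 0.10],
                 [0.10, 0.50]])
print(entropy(p_xy.sum(axis=1)))   # H(X)
print(conditional_entropy(p_xy))   # H(X|Y)
print(mutual_information(p_xy))    # I(X;Y) >= 0
```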

A Code

• Definition: A code is a mapping from the discrete set of symbols {0, ..., M-1} to finite binary sequences
• For each symbol m there is a corresponding finite binary sequence alpha_m
• |alpha_m| is the length of the binary sequence
• Expected number of bits per symbol (bit rate):
  n_bar = E[ |alpha_{Xn}| ] = sum_{m=0}^{M-1} |alpha_m| p_m
• Example for M = 4:

  m    alpha_m    |alpha_m|
  0    01         2
  1    10         2
  2    0          1
  3    100100     6

• Encoded bit stream: (0, 2, 1, 3, 2) -> (01|0|10|100100|0)

Fixed versus Variable Length Codes

• Fixed length code - |alpha_m| is constant for all m
• Variable length code - |alpha_m| varies with m
• Problem: variable length codes may not be uniquely decodable
• Example using the code from the previous slide:
  (3) -> (100100)
  (1, 0, 2, 2) -> (10|01|0|0)
  Different symbol sequences can yield the same coded bit stream
• Definition: A code is Uniquely Decodable if there exists only a single unique decoding of each coded sequence.
• Definition: A Prefix Code is a specific type of uniquely decodable code in which no codeword is a prefix of any other codeword.
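A short Python check of this example; the symbol probabilities p are assumed for illustration (the notes do not give a pmf for this code).

```python
# Example code from the table above.
code = {0: "01", 1: "10", 2: "0", 3: "100100"}
p = {0: 0.25, 1: 0.25, 2: 0.40, 3: 0.10}        # assumed pmf

# Bit rate: n_bar = sum_m |alpha_m| * p_m
n_bar = sum(len(code[m]) * p[m] for m in code)
print("bits per symbol:", n_bar)                 # 0.25*2 + 0.25*2 + 0.40*1 + 0.10*6 = 2.0

# Encoding the symbol stream (0, 2, 1, 3, 2):
bits = "".join(code[m] for m in (0, 2, 1, 3, 2))
print(bits)                                      # 010101001000

# Not uniquely decodable: symbol 3 and the sequence (1, 0, 2, 2) give the
# same bit string.
assert code[3] == code[1] + code[0] + code[2] + code[2] == "100100"
```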

Lower Bound on Bit Rate

• Theorem: Let C be a uniquely decodable code for the symbol sequence Xn. Then the mean code length is at least H(Xn):
  n_bar = E[ |alpha_{Xn}| ] = sum_{m=0}^{M-1} |alpha_m| p_m >= H(Xn)
• Question: Can we achieve this bound?
• Answer: Yes! Constructive proof using Huffman codes.

Huffman Codes

• Variable length prefix code
• Uniquely decodable
• Basic idea:
  – Low probability symbols -> long codes
  – High probability symbols -> short codes
• Basic algorithm: recursively merge the two smallest probabilities (next slide)

Huffman Coding Algorithm

1. Initialize the list of probabilities with the probability of each symbol.
2. Search the list of probabilities for the two smallest probabilities, p_k and p_l.
3. Add the two smallest probabilities to form a new probability, p_m = p_k + p_l.
4. Remove p_k and p_l from the list.
5. Add p_m to the list.
6. Go to step 2 until the list contains only 1 entry.
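A minimal Python sketch of this merging procedure; the heap-based bookkeeping and the example probabilities are my own, and ties between equal probabilities are broken arbitrarily, so the exact codewords may differ from the tree on the following slides even though the code lengths are the same.

```python
import heapq
import itertools

def huffman_code(probs):
    """Build a Huffman code by repeatedly merging the two smallest probabilities.

    probs: dict mapping symbol -> probability.  Returns dict symbol -> bit string.
    """
    counter = itertools.count()           # tie-breaker so the heap never compares dicts
    heap = [(p, next(counter), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # the two smallest probabilities
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(counter), merged))
    return heap[0][2]

# Example with assumed probabilities:
probs = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}
print(huffman_code(probs))   # e.g. {'a': '0', 'b': '10', 'd': '110', 'c': '111'}
```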

Recursive Merging for Huffman Code

• Example for M = 8. (Merging diagram not reproduced in this transcription.)

Resulting Huffman Code

• Binary codes are given by the path through the tree from the root to each leaf p_0, ..., p_7. (Tree figure not reproduced in this transcription.)

Upper Bound on Bit Rate of Huffman Code

• Theorem: For a Huffman code, n_bar has the property that
  H(Xn) <= n_bar < H(Xn) + 1
• A Huffman code is within 1 bit of optimal efficiency
• Can we do better?

Coding in Blocks

• We can code blocks of symbols to achieve a bit rate that approaches the entropy of the source symbols:
  X_0, ..., X_{m-1} -> Y_0,   X_m, ..., X_{2m-1} -> Y_1,   ...
• So we have that Y_n = [X_{nm}, ..., X_{(n+1)m-1}], where Y_n takes values in {0, ..., M^m - 1}

Bit Rate Bounds for Coding in Blocks

• It is easily shown that H(Yn) = m H(Xn), and the number of bits per symbol Xn is given by n_bar_x = n_bar_y / m, where n_bar_y is the number of bits per symbol for a Huffman code of Yn.

• Then we have that
  H(Yn) <= n_bar_y < H(Yn) + 1
  (1/m) H(Yn) <= n_bar_y / m < (1/m) H(Yn) + 1/m
  H(Xn) <= n_bar_y / m < H(Xn) + 1/m
  H(Xn) <= n_bar_x < H(Xn) + 1/m
• As the block size grows, we have
  lim_{m->inf} H(Xn) <= lim_{m->inf} n_bar_x <= H(Xn) + lim_{m->inf} 1/m
  H(Xn) <= lim_{m->inf} n_bar_x <= H(Xn)
• So we see that for a Huffman code of blocks with length m,
  lim_{m->inf} n_bar_x = H(Xn)

Comments on Entropy Coding

• As the block size goes to infinity, the bit rate approaches the entropy of the source:
  lim_{m->inf} n_bar_x = H(Xn)
• A Huffman coder can achieve this performance, but it requires a large block size.
• As m becomes large, M^m becomes very large => large blocks are not practical.
• This assumes that the Xn are i.i.d., but a similar result holds for stationary and ergodic sources.
• Arithmetic coders can be used to achieve this bit rate in practical settings.
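As a quick numeric illustration of the identity H(Yn) = m H(Xn) and the resulting per-symbol bound H(Xn) <= n_bar_x < H(Xn) + 1/m, the sketch below evaluates the block entropy per symbol for a few block sizes; the source pmf is assumed.

```python
import itertools
import math

def entropy(pmf):
    """H in bits for a dict of probabilities."""
    return -sum(q * math.log2(q) for q in pmf.values() if q > 0)

p = {0: 0.7, 1: 0.2, 2: 0.1}     # assumed i.i.d. source pmf
H = entropy(p)

for m in (1, 2, 3, 4):
    # Joint pmf of a block Y = (X_0, ..., X_{m-1}) of i.i.d. symbols.
    block = {blk: math.prod(p[s] for s in blk)
             for blk in itertools.product(p, repeat=m)}
    print(f"m={m}: H(Y)/m = {entropy(block) / m:.4f}  "
          f"(per-symbol Huffman bound < {H + 1 / m:.4f})")
```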

Run Length Coding

• In some cases, long runs of symbols may occur. In this case, run length coding can be effective as a preprocessor to an entropy coder.
• A typical run length coder produces pairs
  ..., (value, # of repetitions), (value, # of repetitions), ...
  where 2^b is the maximum number of repetitions.
• Example: Let Xn take values in {0, 1, 2}
  ...|0000000|111|222222|...  ->  ..., (0, 7), (1, 3), (2, 6), ...
• If more than 2^b repetitions occur, then the run is broken into segments:
  ...|00000000|00|111|...  ->  ..., (0, 8), (0, 2), (1, 3), ...
• Many other variations are possible.

Predictive Entropy Coder for Binary Images

• Used in transmission of fax images (CCITT G4 standard)
• Framework:
  – Let X_s be a binary image on a rectangular lattice, s = (s1, s2) in S
  – Let W be a causal window in raster order
  – Determine a model for p(x_s | x_{s+r}, r in W)
• Algorithm:
  1. For each pixel in raster order:
     (a) Predict
         X_hat_s = 1 if p(1 | X_{s+r}, r in W) > p(0 | X_{s+r}, r in W), and 0 otherwise
     (b) If X_s = X_hat_s send 0; otherwise send 1
  2. Run length code the result
  3. Entropy code the result

Predictive Entropy Coder Flow Diagram

• (Flow diagram not reproduced: the encoder applies a causal predictor, XORs the prediction with X_s, then run length codes and Huffman codes the result; the decoder runs Huffman decoding, run length decoding, the same causal predictor, and an XOR to recover X_s.)
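A small Python sketch of the run length preprocessor above (encoder and decoder); the function names are my own, and b = 3 (maximum run of 8) is chosen so that a run of ten zeros splits into (0, 8), (0, 2) as in the example.

```python
def rle_encode(seq, b=3):
    """Encode seq as (value, repetitions) pairs with runs capped at 2**b."""
    max_run = 2 ** b
    out, i = [], 0
    while i < len(seq):
        v, run = seq[i], 1
        while i + run < len(seq) and seq[i + run] == v and run < max_run:
            run += 1
        out.append((v, run))
        i += run
    return out

def rle_decode(pairs):
    """Invert rle_encode."""
    out = []
    for v, run in pairs:
        out.extend([v] * run)
    return out

x = [0] * 10 + [1] * 3 + [2] * 6
pairs = rle_encode(x)
print(pairs)                      # [(0, 8), (0, 2), (1, 3), (2, 6)]
assert rle_decode(pairs) == x
```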

How to Choose p(x_s | x_{s+r}, r in W)?

• Non-adaptive method:
  – Select a typical set of training images
  – Design the predictor based on the training images
• Adaptive method:
  – Allow the predictor to adapt to the images being coded
  – Design the decoder so it adapts in the same manner as the encoder

Non-Adaptive Predictive Coder

• Method for estimating the predictor:
  1. Select a typical set of training images.
  2. For each pixel in each image, form
     z_s = (x_{s+r_0}, ..., x_{s+r_{p-1}}), where {r_0, ..., r_{p-1}} = W.
  3. Index the values of z_s from j = 0 to j = 2^p - 1.
  4. For each pixel in each image, compute
     h_s(i, j) = delta(x_s = i) delta(z_s = j)
     and the histogram
     h(i, j) = sum_{s in S} h_s(i, j)
  5. Estimate p(x_s | x_{s+r}, r in W) = p(x_s | z_s) as
     p_hat(x_s = i | z_s = j) = h(i, j) / sum_{k=0}^{1} h(k, j)

Adaptive Predictive Coder

• Adapt the predictor at each pixel.
• Update the value of h(i, j) at each pixel using the equations
  h(i, j) <- h(i, j) + delta(x_s = i) delta(z_s = j)
  N(j) <- N(j) + 1
• Use the updated values of h(i, j) to compute a new predictor at each pixel:
  p_hat(i | j) <- h(i, j) / N(j)
• Design the decoder to track the encoder.
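A Python sketch of the non-adaptive histogram estimate; the three-pixel causal window and the random stand-in for a training image are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
img = (rng.random((64, 64)) > 0.5).astype(int)   # stand-in for a training image

offsets = [(0, -1), (-1, -1), (-1, 0)]           # assumed causal window W
p = len(offsets)
h = np.zeros((2, 2 ** p))                        # histogram h(i, j)

# Skip the first row/column so every causal neighbor exists (boundary shortcut).
for s1 in range(1, img.shape[0]):
    for s2 in range(1, img.shape[1]):
        j = 0                                    # index of the pattern z_s
        for k, (r1, r2) in enumerate(offsets):
            j |= img[s1 + r1, s2 + r2] << k
        h[img[s1, s2], j] += 1

# p_hat(x_s = i | z_s = j) = h(i, j) / sum_k h(k, j)
p_hat = h / np.maximum(h.sum(axis=0, keepdims=True), 1)
print(p_hat)
```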

Adaptive Predictive Entropy Coder Flow Diagram

• (Flow diagram not reproduced: the same encoder and decoder structure as before, with a causal histogram-estimation block added on both sides so that the predictor adapts identically in the encoder and the decoder.)

Lossy Source Coding

• Method for representing discrete-space signals with minimum distortion and bit-rate
• Outline:
  – Rate-distortion theory
  – Karhunen-Loeve decorrelating transform
  – Practical coder structures

Distortion

• Let X and Z be random vectors in R^M. Intuitively, X is the original image/data and Z is the decoded image/data.
• Assume we use the squared error distortion measure given by
  d(X, Z) = ||X - Z||^2
• Then the distortion is given by
  D = E[ d(X, Z) ] = E[ ||X - Z||^2 ]
• This actually applies to any quadratic norm error distortion measure, since we can define X_tilde = AX and Z_tilde = AZ, so that
  D_tilde = E[ ||X_tilde - Z_tilde||^2 ] = E[ ||X - Z||^2_B ]
  where B = A^t A.

Lossy Source Coding: Theoretical Framework

• Notation for source coding:
  – X_n in R^M for 0 <= n < N - a sequence of random vectors
  – Y in {0, 1}^K - a K-bit random binary vector
  – Z_n in R^M for 0 <= n < N - the decoded sequence of random vectors
  – X^(N) = (X_0, ..., X_{N-1}),  Z^(N) = (Z_0, ..., Z_{N-1})
• Encoder function: Y = Q(X_0, ..., X_{N-1})
• Decoder function: (Z_0, ..., Z_{N-1}) = f(Y)
• Resulting quantities:
  Bit-rate = K / N
  Distortion = (1/N) sum_{n=0}^{N-1} E[ ||X_n - Z_n||^2 ]
• How do we choose Q(·) to minimize the bit-rate and distortion?
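To make the encoder/decoder framework concrete, here is a toy sketch in which Q(·) is a uniform scalar quantizer and f(·) maps the quantization indices back to reconstruction levels; the Gaussian source and all parameters (N, M, b) are assumptions, not from the notes.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, b = 1000, 4, 3                       # N vectors in R^M, b bits per component
X = rng.normal(size=(N, M))                # assumed source

# Encoder Q(X): uniform quantization with 2^b levels per component.
lo, hi = X.min(), X.max()
levels = 2 ** b
step = (hi - lo) / levels
idx = np.clip(((X - lo) / step).astype(int), 0, levels - 1)

# Decoder f(Y): map each index to the center of its quantization bin.
Z = lo + (idx + 0.5) * step

bit_rate = b * M                                      # K/N, bits per source vector
distortion = np.mean(np.sum((X - Z) ** 2, axis=1))    # (1/N) sum_n ||X_n - Z_n||^2
print(f"bit-rate = {bit_rate} bits/vector, distortion D = {distortion:.4f}")
```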

Differential Entropy

• Notice that the information contained in a Gaussian random variable is infinite, so the conventional entropy H(X) is not defined.
• Let X be a random vector taking values in R^M with density function p(x). Then we define the differential entropy of X as
  h(X) = - integral_{x in R^M} p(x) log2 p(x) dx = -E[ log2 p(X) ]
  h(X) has units of bits

Conditional Entropy and Mutual Information

• Let X and Y be random vectors taking values in R^M with joint density function p(x, y) and conditional density p(x|y). Then we define the differential conditional entropy of X given Y as
  h(X|Y) = - integral_{x in R^M} integral_{y in R^M} p(x, y) log2 p(x|y) dy dx = -E[ log2 p(X|Y) ]
• The mutual information between X and Y is given by
  I(X; Y) = h(X) - h(X|Y) = I(Y; X)
• Important: The mutual information is well defined for both continuous and discrete random variables, and it represents the reduction in uncertainty of X given Y.

The Rate-Distortion Function

• Define/Remember: Let X_0 be the first element of the sequence.
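As a quick sanity check of the differential entropy definition above, the sketch below numerically integrates -p(x) log2 p(x) for a scalar Gaussian and compares it with the known closed form 0.5 log2(2 pi e sigma^2) bits; the choice sigma = 1 is arbitrary.

```python
import numpy as np

sigma = 1.0
x = np.linspace(-12 * sigma, 12 * sigma, 200001)
dx = x[1] - x[0]
p = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

h_numeric = -np.sum(p * np.log2(p)) * dx            # -E[log2 p(X)] by Riemann sum
h_closed = 0.5 * np.log2(2 * np.pi * np.e * sigma**2)
print(h_numeric, h_closed)                          # both approximately 2.047 bits
```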

