
S.72-3320 Advanced Digital Communication (4 cr)

Convolutional Codes


Timo O. Korhonen, HUT Communication Laboratory

Targets today

• Why to apply convolutional coding?
• Defining convolutional codes
• Practical encoding circuits
• Defining the quality of convolutional codes
• Decoding principles
• Viterbi decoding


Convolutional encoding

• Convolutional codes are applied in applications that require good performance with low implementation complexity. They operate on code streams (not on blocks)
• Convolutional codes have memory that utilizes previous bits to encode or decode the following bits (block codes are memoryless)
• Convolutional codes are denoted by (n,k,L), where L is the code (or encoder) memory depth (number of register stages)
• Constraint length C = n(L+1) is defined as the number of encoded bits a message bit can influence
• Convolutional codes achieve good performance by expanding their memory depth

[Figure: (n,k,L) encoder block: k message bits in, n encoded bits out; each input bit influences n(L+1) output bits]


Example: Convolutional encoder, k = 1, n = 2

• A convolutional encoder is a finite state machine (FSM) processing information bits in a serial manner
• Thus the generated code is a function of the input and of the state of the FSM
• In this (n,k,L) = (2,1,2) encoder each message bit influences a span of C = n(L+1) = 6 successive output bits = constraint length C
• Thus, for generation of the n-bit output, we require in this example n shift registers in the k = 1 convolutional encoder

x'_j = m_j ⊕ m_{j-2} ⊕ m_{j-1}
x''_j = m_j ⊕ m_{j-2}

x_out = x'_1 x''_1 x'_2 x''_2 x'_3 x''_3 ...

[Figure: the (n,k,L) = (2,1,2) encoder circuit, memory depth L]
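To make the FSM description concrete, here is a minimal sketch (not part of the original slides) of the (2,1,2) encoder above in Python; the register pair (m_{j-1}, m_{j-2}) is the FSM state and each message bit produces the output pair (x'_j, x''_j):

# Sketch of the (2,1,2) encoder above: state = (m_{j-1}, m_{j-2}),
# outputs x'_j = m_j ^ m_{j-2} ^ m_{j-1} and x''_j = m_j ^ m_{j-2}.
def encode_212(message_bits):
    m1, m2 = 0, 0                      # register contents, initially zero
    out = []
    for m in message_bits:
        out += [m ^ m2 ^ m1, m ^ m2]   # (x'_j, x''_j) for this input bit
        m1, m2 = m, m1                 # shift the register
    return out

print(encode_212([1, 0, 1]))           # -> [1, 1, 1, 0, 0, 0]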


Example: (n,k,L)=(3,2,1) Convolutional encoder 

x'_j = m_j ⊕ m_{j-3} ⊕ m_{j-2}
x''_j = m_j ⊕ m_{j-3} ⊕ m_{j-1}
x'''_j = m_j ⊕ m_{j-2}

• After each new block of k input bits follows a transition into a new state
• Hence, from each input state transition, 2^k different output states may follow
• Each message bit influences a span of C = n(L+1) = 3(1+1) = 6 successive output bits


Generator sequences

• An (n,k,L) convolutional code can be described by the generator sequences g^(1), g^(2), ..., g^(n) that are the impulse responses for each of the coder's n output branches
• The generator sequences specify the convolutional code completely by the associated generator matrix
• The encoded convolutional code is produced by matrix multiplication of the input and the generator matrix

For the (2,1,2) encoder:

g^(1) = [1 0 1 1]
g^(2) = [1 1 1 1]

In general, g^(n) = [g_0^(n) g_1^(n) ... g_m^(n)].

Note that the generator sequence length always exceeds the register depth by 1.
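As a quick sanity check (my own sketch, assuming the g^(1) = [1 0 1 1], g^(2) = [1 1 1 1] encoder above), feeding a single 1 followed by zeros through the encoder reproduces the generator sequences on the two output branches, i.e. they really are the impulse responses:

# The generator sequences are the encoder's impulse responses: encoding the
# sequence 1 0 0 0 ... reproduces g(1) = 1011 and g(2) = 1111 on the branches.
G = [(1, 0, 1, 1), (1, 1, 1, 1)]

def branch_outputs(u, generators=G):
    m = len(generators[0]) - 1                     # register depth
    u = list(u) + [0] * m                          # tail zeros flush the register
    return [[sum(u[l - i] * g[i] for i in range(len(g)) if l - i >= 0) % 2
             for l in range(len(u))] for g in generators]

print(branch_outputs([1]))                         # -> [[1, 0, 1, 1], [1, 1, 1, 1]]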


Convolution point of view in encoding and generator matrix

• Encoder outputs are formed by modulo-2 discrete convolutions:

v^(1) = u * g^(1),  v^(2) = u * g^(2), ...,  v^(j) = u * g^(j)

where u is the information sequence: u = (u_0, u_1, ...)

• Therefore, the l:th bit of the j:th output branch is*

v_l^(j) = Σ_{i=0}^{m} u_{l-i} g_i^(j) = u_l g_0^(j) + u_{l-1} g_1^(j) + ... + u_{l-m} g_m^(j),  where m = L + 1 and u_{l-i} = 0 when l < i

• Hence, for this circuit (with g^(1) = [1 0 1 1], g^(2) = [1 1 1 1]) the following equations result:

v_l^(1) = u_l + u_{l-2} + u_{l-3}
v_l^(2) = u_l + u_{l-1} + u_{l-2} + u_{l-3}

• The encoder output is obtained by interleaving the branch outputs:

v = [v_0^(1) v_0^(2)  v_1^(1) v_1^(2)  v_2^(1) v_2^(2)  ...]

*Note that u is reversed in time, as in the definition of convolution: x ⊗ y (u) = Σ_{k ∈ A_xy} x(k) y(u - k)

[Figure: the encoder circuit and its j output branches, with taps g_2^(1), g_3^(1) on the delayed inputs u_{l-2}, u_{l-3}]
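Because the branch outputs are plain modulo-2 convolutions, they can be computed directly with a convolution routine. The short sketch below (an illustration, not from the slides) uses numpy.convolve with the g^(1), g^(2) of this circuit and interleaves the two branches into v:

# Branch outputs as discrete convolutions, reduced modulo 2.
import numpy as np

u = np.array([1, 0, 1, 1])                  # an example information sequence
g1 = np.array([1, 0, 1, 1])
g2 = np.array([1, 1, 1, 1])

v1 = np.convolve(u, g1) % 2                 # first output branch
v2 = np.convolve(u, g2) % 2                 # second output branch
v = np.ravel(np.column_stack((v1, v2)))     # interleave: v0' v0'' v1' v1'' ...
print(v1, v2, v, sep="\n")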


Example: Using generator matrix

g^(1) = [1 0 1 1]
g^(2) = [1 1 1 1]

Verify that you can obtain the result shown!

[Figure: worked example of encoding by the generator matrix; the rows contain the interleaved pairs g_i^(1) g_i^(2), shifted by one output block per row, and the modulo-2 sum of the rows selected by the message bits gives the encoded word, i.e. v_l^(j) = u_l g_0^(j) ⊕ u_{l-1} g_1^(j) ⊕ ... ⊕ u_{l-m} g_m^(j)]
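The same encoding can be written as the matrix multiplication described above. The following sketch is my own construction (it uses the message u = (1 1 1 0 1) that also appears on a later slide, so the result can be cross-checked): row l of the truncated generator matrix holds the interleaved pairs g_i^(1) g_i^(2), shifted right by one output block per row.

# Encoding as v = u G (mod 2) with an interleaved, truncated generator matrix.
import numpy as np

g1, g2 = [1, 0, 1, 1], [1, 1, 1, 1]
pairs = [b for pair in zip(g1, g2) for b in pair]   # g0(1) g0(2) g1(1) g1(2) ...
k_in = 5                                            # message length
n_cols = 2 * (k_in + len(g1) - 1)                   # interleaved output length

G = np.zeros((k_in, n_cols), dtype=int)
for l in range(k_in):
    G[l, 2 * l: 2 * l + len(pairs)] = pairs         # shift one block per row

u = np.array([1, 1, 1, 0, 1])
v = (u @ G) % 2
print(v.reshape(-1, 2))                             # rows: 11 10 01 01 11 10 11 11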

[Figure: S. Lin, D. J. Costello, Error Control Coding, 2nd ed., p. 456]


Representing convolutional codes: Code tree

x'_j = m_j ⊕ m_{j-2} ⊕ m_{j-1}
x''_j = m_j ⊕ m_{j-2}

x_out = x'_1 x''_1 x'_2 x''_2 x'_3 x''_3 ...

This tells how one input bit is transformed into two output bits (initially the register is all zero).

[Figure: code tree of the (n,k,L) = (2,1,2) encoder; each branch is labelled with the output pair computed from the input bit and the register contents m_{j-2} m_{j-1}, e.g. for the state 0 1 an input 1 gives x'_j = 1 ⊕ 0 ⊕ 1 and x''_j = 1 ⊕ 0]

The number of branches deviating from each node equals 2^k.


Representing convolutional codes compactly:

code trellis and state diagram

[Figure: code trellis and state diagram of the (2,1,2) encoder; the nodes are the shift register states, and transitions caused by input ‘1’ are indicated by dashed lines]


Inspecting the state diagram: Structural properties of convolutional codes

• Each new block of k input bits causes a transition into a new state
• Hence there are 2^k branches leaving each state
• Assuming the encoder starts from the zero state, the encoded word for any input of k bits can thus be obtained. For instance, below for u = (1 1 1 0 1) the encoded word v = (1 1, 1 0, 0 1, 0 1, 1 1, 1 0, 1 1, 1 1) is produced:
  – encoder state diagram for the (n,k,L) = (2,1,2) code
  – note that the number of states is 8 = 2^(L+1) => L = 2 (two state bits)

Verify that you obtain the same result!

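One way to carry out the suggested verification (a sketch of mine, not the lecture's figure) is to walk the state diagram in code: start from the all-zero state, apply u = (1 1 1 0 1) followed by flushing zeros, and print the output pair of every transition.

# Walking the state diagram of the code with g(1) = 1011, g(2) = 1111.
# The state is the register contents (m_{l-1}, m_{l-2}, m_{l-3}).
G = [(1, 0, 1, 1), (1, 1, 1, 1)]

def transition(state, bit):
    window = (bit,) + state
    out = tuple(sum(w * g for w, g in zip(window, gen)) % 2 for gen in G)
    return (bit,) + state[:-1], out

state, v = (0, 0, 0), []
for bit in [1, 1, 1, 0, 1, 0, 0, 0]:     # the message plus flushing zeros
    state, out = transition(state, bit)
    v.append(out)
print(v)   # expect (1,1) (1,0) (0,1) (0,1) (1,1) (1,0) (1,1) (1,1)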


Code weight, path gain, and generating function

• The state diagram can be modified to yield information on code distance properties (i.e. it tells how good the code is at detecting or correcting errors)
• Rules (example on the next slide):
  – (1) Split S0 into an initial and a final state, remove the self-loop
  – (2) Label each branch by the branch gain X^i. Here i is the weight* of the n encoded bits on that branch
  – (3) Each path connecting the initial state and the final state represents a nonzero code word that diverges from and re-emerges with S0 only once
• The path gain is the product of the branch gains along a path, and the weight of the associated code word is the power of X in the path gain
• The code weight distribution is obtained by using a weighted gain formula to compute its generating function (input-output equation)

T(X) = Σ_i A_i X^i

where A_i is the number of encoded words of weight i

*In linear codes, the weight is the number of ‘1’s in the encoder output


Example: The path representing the state sequence S0 S1 S3 S7 S6 S5 S2 S4 S0 has the path gain X^2·X^1·X^1·X^1·X^2·X^1·X^2·X^2 = X^12, and the corresponding code word has the weight of 12.

T(X) = Σ_i A_i X^i = X^6 + 3X^7 + 5X^8 + 11X^9 + 25X^10 + ...

Where do these terms come from?

[Figure: modified state diagram labelled with the branch gains X^i]
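The quoted path gain can be checked mechanically. The sketch below assumes a state numbering in which the newest register bit is the least significant bit of the state index S_i (an assumption of mine; the slide's figure is not reproduced here), and it accumulates the branch weights along S0 S1 S3 S7 S6 S5 S2 S4 S0:

# Accumulate the branch output weights (exponents of X) along the given path.
G = [(1, 0, 1, 1), (1, 1, 1, 1)]

def branch(state, bit):
    window = (bit,) + state
    out = [sum(w * g for w, g in zip(window, gen)) % 2 for gen in G]
    return (bit,) + state[:-1], sum(out)

path = [0, 1, 3, 7, 6, 5, 2, 4, 0]
state, total = (0, 0, 0), 0
for nxt in path[1:]:
    bit = nxt & 1                      # newest bit = LSB of the next state index
    state, w = branch(state, bit)
    assert state == (nxt & 1, (nxt >> 1) & 1, (nxt >> 2) & 1)
    total += w
print(total)                           # expect 12, i.e. a path gain of X^12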


Distance properties of convolutional codes

• Code strength is measured by the minimum free distance:

d_free = min { d(v', v'') : u' ≠ u'' }

where v' and v'' are the encoded words corresponding to the information sequences u' and u''. The code can correct up to t < d_free/2 errors.

• The minimum free distance d_free denotes
  – the minimum weight of all the paths in the state diagram that diverge from and remerge with the all-zero state S0
  – the lowest power of the code-generating function T(X)

T(X) = Σ_i A_i X^i = X^6 + 3X^7 + 5X^8 + 11X^9 + 25X^10 + ...  ⇒  d_free = 6

Code gain*:  G = k d_free / (2n) = R_c d_free / 2 > 1

* for the derivation, see Carlson, p. 583
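Since d_free is the minimum weight of all paths that diverge from and remerge with the all-zero state, it can be found with a shortest-path search over the state diagram, taking the Hamming weight of each branch's output as the branch length. A small sketch of mine, for the g^(1) = 1011, g^(2) = 1111 code used in the T(X) example:

# Dijkstra search for d_free: leave S0 with input 1, find the cheapest
# (minimum output weight) way back to S0.
import heapq

G = [(1, 0, 1, 1), (1, 1, 1, 1)]       # generator sequences g(1), g(2)
M = len(G[0]) - 1                       # number of register stages

def transition(state, bit):
    window = (bit,) + state
    out = [sum(w & g for w, g in zip(window, gen)) % 2 for gen in G]
    return (bit,) + state[:-1], sum(out)

def free_distance():
    zero = (0,) * M
    start, w0 = transition(zero, 1)     # diverging branch from the all-zero state
    heap, best = [(w0, start)], {start: w0}
    while heap:
        w, state = heapq.heappop(heap)
        if state == zero:               # remerged with the all-zero state
            return w
        if w > best.get(state, float("inf")):
            continue
        for bit in (0, 1):
            nxt, bw = transition(state, bit)
            if w + bw < best.get(nxt, float("inf")):
                best[nxt] = w + bw
                heapq.heappush(heap, (w + bw, nxt))

print(free_distance())                  # expect 6 for this code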


Coding gain for some selected convolutional codes

• Here is a table of some selected convolutional codes and their coding gains R_c d_free/2, expressed for hard decoding also in decibels by

γ = 10 log10( R_c d_free / 2 ) dB

[Table: coding gains of selected convolutional codes]
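For instance, for a rate 1/2 code with d_free = 6 (the running example above), the hard-decision coding gain evaluates to about 1.8 dB; this is a quick check of the formula, not a value taken from the table:

# gamma = 10 log10(Rc * d_free / 2) for Rc = 1/2, d_free = 6
import math
Rc, d_free = 1 / 2, 6
print(f"{10 * math.log10(Rc * d_free / 2):.2f} dB")   # -> 1.76 dB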


Decoding of convolutional codes

• Maximum likelihood decoding of convolutional codes means finding the code branch in the code trellis that was most likely transmitted
• Therefore maximum likelihood decoding is based on calculating the code Hamming distances for each branch potentially forming the encoded word
• Assume that the information symbols applied into an AWGN channel are equally likely and independent
• Denote by x the encoded symbols (no errors) and by y the received (potentially erroneous) symbols:

x = x_0 x_1 x_2 ... x_j ...
y = y_0 y_1 ... y_j ...

• The probability to decode the symbols is then

p(y, x) = Π_{j=0}^{∞} p(y_j | x_j)

• The most likely path through the trellis will maximize this metric. Often ln() is taken from both sides, because probabilities are often small numbers, yielding

ln p(y, x) = Σ_j ln p(y_j | x_j)

(note that this corresponds equivalently to the smallest Hamming distance)

[Figure: decoder block; the non-erroneous code x and the received code y enter the decoder, which performs the distance calculation and outputs the bit decisions]


Example of exhaustive maximum likelihood detection

• Assume a three-bit message is transmitted and encoded by the (2,1,2) convolutional encoder. To clear the decoder, two zero bits are appended after the message. Thus 5 bits are encoded, resulting in 10 bits of code. Assume the channel error probability is p = 0.1. After the channel, 10 01 10 11 00 is produced (including some errors). What comes out of the decoder, i.e. what was most likely the transmitted code and what were the respective message bits?

[Figure: decoding trellis with the states a, b, c, d; each complete path is labelled with the decoder outputs produced if that path is selected]


p(y, x) = Π_{j=0}^{∞} p(y_j | x_j)

ln p(y, x) = Σ_{j=0}^{∞} ln p(y_j | x_j)

[Figure: path metrics for the candidate trellis paths; on each branch the numbers of correctly received and erroneous bits are weighted by the log-probabilities of receiving a bit correctly and in error]


Note also the Hamming distances!

correct: 1+1+2+2+2 = 8;  8 · (-0.11) = -0.88   (-0.11 ≈ ln 0.9)
false:   1+1+0+0+0 = 2;  2 · (-2.30) = -4.6    (-2.30 ≈ ln 0.1)
total path metric: -0.88 - 4.6 = -5.48

This is the largest metric; verify that you get the same result!
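The exhaustive search above is small enough to run directly. The sketch below (my own, using the (2,1,2) encoder equations from the earlier slides and the BSC log-likelihood metric with p = 0.1) scores all eight 3-bit messages against the received sequence 10 01 10 11 00:

# Exhaustive maximum-likelihood decoding of the example above.
from itertools import product
from math import log

def encode(msg):
    m = list(msg) + [0, 0]                   # two tail bits clear the register
    out, reg = [], [0, 0]                    # reg = [m_{j-1}, m_{j-2}]
    for bit in m:
        out += [bit ^ reg[1] ^ reg[0], bit ^ reg[1]]   # (x'_j, x''_j)
        reg = [bit, reg[0]]
    return out

received = [1, 0, 0, 1, 1, 0, 1, 1, 0, 0]
p = 0.1

def metric(msg):
    return sum(log(1 - p) if c == r else log(p)
               for c, r in zip(encode(msg), received))

best = max(product((0, 1), repeat=3), key=metric)
print(best, encode(best), round(metric(best), 2))
# expect message (0, 1, 0), code 00 11 10 11 00, metric about -5.45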


The Viterbi algorithm

• The problem of optimum decoding is to find the minimum distance path from the initial state back to the initial state (below, from S0 to S0). The minimum distance is one of the sums of all path metrics from S0 to S0
• The exhaustive maximum likelihood method must search all the paths in the phase trellis (2^k paths emerging from / entering each of the 2^(L+1) states for an (n,k,L) code)
• The Viterbi algorithm gains its improvement in computational efficiency by concentrating on the survivor paths of the trellis


The survivor path

• Assume for simplicity a convolutional code with k = 1, and thus up to 2^k = 2 branches can enter each state in the trellis diagram
• Assume the optimal path passes through S. Metric comparison is done by adding the metrics of S1 and S2 to S. On the survivor path the accumulated metric is naturally smaller (otherwise it could not be the optimum path)
• For this reason the non-surviving path can be discarded -> all path alternatives need not be considered further
• Note that in principle the whole transmitted sequence must be received before the decision. However, in practice storing the states for an input length of 5L is quite adequate

[Figure: 2^k branches enter each node; there are 2^L nodes, determined by the memory depth; the branch with the larger metric is discarded]


Example of using the Viterbi algorithm

• Assume the received sequence is

y = 01 10 11 11 01 00 01

and the (n,k,L) = (2,1,2) encoder shown below. Determine the Viterbi decoded output sequence!

(Note that for this encoder the code rate is 1/2 and the memory depth equals L = 2)

[Figure: the (2,1,2) encoder and the decoding trellis with its states]
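A compact hard-decision Viterbi decoder for this exercise can be written in a few lines. This is a sketch of the algorithm as described on the previous slides (not the lecture's worked trellis): one survivor is kept per state, and at the end the survivor terminating in the all-zero state is read out.

# Hard-decision Viterbi decoding of y = 01 10 11 11 01 00 01 for the
# (2,1,2) encoder x'_j = m_j ^ m_{j-2} ^ m_{j-1}, x''_j = m_j ^ m_{j-2}.
def viterbi(received_pairs):
    INF = float("inf")
    states = [(a, b) for a in (0, 1) for b in (0, 1)]    # (m_{j-1}, m_{j-2})
    metric = {s: (0 if s == (0, 0) else INF) for s in states}
    path = {s: [] for s in states}                       # decoded bits so far

    for r in received_pairs:
        new_metric = {s: INF for s in states}
        new_path = {s: [] for s in states}
        for (a, b), m in metric.items():
            if m == INF:
                continue
            for u in (0, 1):
                out = (u ^ b ^ a, u ^ b)                 # branch output (x'_j, x''_j)
                d = (out[0] != r[0]) + (out[1] != r[1])  # branch Hamming metric
                nxt = (u, a)
                if m + d < new_metric[nxt]:              # keep only the survivor
                    new_metric[nxt] = m + d
                    new_path[nxt] = path[(a, b)] + [u]
        metric, path = new_metric, new_path

    return path[(0, 0)], metric[(0, 0)]                  # survivor ending in S0

y = [(0, 1), (1, 0), (1, 1), (1, 1), (0, 1), (0, 0), (0, 1)]
print(viterbi(y))   # expect bits [1, 0, 0, 1, 1, 0, 0] with a total metric of 3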


How to end up decoding?

• In the previous example it was assumed that the register was finally filled with zeros, thus finding the minimum distance path
• In practice, with long code words, zeroing requires feeding a long sequence of zeros to the end of the message bits: this wastes channel capacity & introduces delay
• To avoid this, path memory truncation is applied:
  – Trace all the surviving paths to the depth where they merge
  – The figure on the right shows a common point at a memory depth J
  – J is a random variable whose applicable magnitude shown in the figure (5L) has been experimentally tested for a negligible error rate increase
  – Note that this also introduces a delay of 5L!

[Figure: surviving paths merging at a common depth J (J > L), about 5 stages of the trellis shown]


Lessons learned

• You understand the differences between cyclic codes and convolutional codes
• You can create a state diagram for a convolutional encoder
• You know how to construct convolutional encoder circuits based on knowing the generator sequences
• You can analyze code strength based on known code generation circuits / state diagrams or generator sequences
• You understand how to realize maximum likelihood convolutional decoding by using exhaustive search
• You understand the principle of Viterbi decoding