Burst-error-correcting capability of quadratic residue codes
Abstract
A binary quadratic residue (QR) code of prime length p is a cyclic code whose generator polynomial has roots α^r for r ∈ Qp, where α is a primitive p-th root of unity and Qp is the set of all quadratic residues modulo p. A much easier way to construct QR codes is by finding an idempotent e(x), i.e. a polynomial satisfying e(x)^2 ≡ e(x), instead of working with the roots of unity directly. From the idempotent, we can find the generator polynomial g(x) used in the decoding process.
We are interested in investigating the capability of binary QR codes to correct burst errors. Burst errors are errors that occur within a short interval and are therefore not independent of one another. The decoding method used to decode the QR codes is called error trapping. As the name indicates, we try to trap the error pattern within the first n − k digits of a cyclic [n, k]-code by making cyclic shifts of a received word w.
The Reiger bound of QR codes has been studied. The bound tells us the largest burst length that can, in theory, be corrected. In this thesis, we test QR codes of different lengths in the program Mathematica and check whether the Reiger bound is met.
Keywords: binary quadratic residue code, error trapping decoding, Reiger bound, burst error
Acknowledgements
First, I want to thank my supervisor, Per-Anders Svensson, for his guidance and
patience throughout this project, as well as my examiner, Marcus Nilsson, for his
advice during the thesis process. I am also deeply grateful to all the teachers I
have encountered, whose guidance and support have been invaluable throughout
this academic journey.
I would also like to thank my classmates for all the help they have given me over
the years. I hope for great success for everyone in their academic and professional
paths.
Lastly, heartfelt thanks to a special friend who shared late nights of studying with me during the final stages of the thesis. If you are reading this, I would like to extend my best wishes to you.
Contents
1 Introduction
2 Coding theory
3 Method
4 Linear codes
  4.1 Definitions
  4.2 Coset decoding
5 Cyclic codes
  5.1 Definitions
  5.2 Burst errors
  5.3 Reiger bound
  5.4 Error trapping method
    5.4.1 Decoding algorithm for random-error-correcting
    5.4.2 Decoding algorithm for burst-error-correcting
6 Quadratic-residue codes
  6.1 Quadratic residue
  6.2 Binary quadratic residue codes
  6.3 Construction
  6.4 Idempotent
  6.5 Decoding of QR codes
7 Results
8 Discussion
References
A Appendix 1
1 Introduction
Coding theory is a branch of mathematics with a relatively short history. Its birth can be traced back to 1948, when Claude Shannon published his paper A mathematical theory of communication [10]. His work introduced the concepts of channel capacity and the noisy channel coding theorem. The theorem states that, for a noisy channel with a given channel capacity, information can be transmitted with an arbitrarily small probability of error as long as the transmission rate stays below the capacity. He also introduced the concept of redundancy: by adding extra check bits, a code can be made able to detect and correct errors.
One year later, the binary Golay code was developed, named after Marcel J. E. Golay. Golay's original paper was barely half a page long [2], but it has been called the "best single published page" in coding theory [1]. The binary Golay code, with length 23 and dimension 12, is a 3-bit-error-correcting and 5-burst-error-correcting code. Its error-correcting ability is impressive considering the short length of the code, and it is also known as a perfect code. Shortly after, in 1950, Richard Hamming introduced an error-correcting code that later became known as the Hamming code [3]. Hamming codes are single-error-correcting codes. In his fundamental paper on these codes, Hamming also gave the definition of the Hamming distance. These breakthroughs opened a new field, and coding theorists have since devoted themselves to finding efficient schemes for encoding and decoding over noisy channels. Coding theory has a wide range of applications, from deep-space transmission to wireless communications.
The physical medium used to transmit information is called a channel; telephone lines are one example. Noise is an undesirable disturbance that causes the received information to differ from the original. The main task in coding theory is to detect and correct errors that occur during transmission over a noisy channel. All messages have to be encoded into codewords before transmission, so over a noisy channel it is codewords that are transmitted and received. If a received word is not a codeword, then we know that errors must have occurred. A well-designed error-correcting code can increase the reliability of sending and receiving information. One important family of error-detecting and error-correcting codes is the cyclic codes, which are very efficient at correcting both random and burst errors. Binary quadratic residue codes (QR codes¹) are one of several important classes of cyclic codes. QR codes were first introduced by Andrew Gleason [7], who mentioned many important properties of such codes in a brief letter. Since then, the codes have been studied extensively, and many algebraic algorithms have been developed for decoding them. The binary [7,4,3]-Hamming code and the binary [23,12,7]-Golay code are examples of QR codes. In this thesis, we will use the error trapping method to decode binary QR codes.
The paper is organized as follows. Chapter 2 gives fundamental definitions within coding theory, covering concepts such as minimum distance, length and dimension. Chapter 3 provides a brief overview of our approach to the topic. In Chapter 4, we concentrate on linear codes, their vector space structure and coset decoding. In Chapter 5, we focus on cyclic codes, where additional algebraic structure besides linearity is introduced. This chapter also covers the error trapping method for decoding both random and burst errors, along with the presentation of the Reiger bound.
¹ Not to be confused with the QR Code (quick-response code), which can be scanned by an imaging device, such as a camera.
In Chapter 6, we include definitions related to quadratic residues, alongside the
construction of quadratic residue codes using idempotents, etc. Chapter 7 serves as
a summary of the findings and results of the project. Chapter 8 presents a short
discussion.
2 Coding theory
Coding theory plays a critical role in ensuring the reliability and efficiency of information transmission over noisy channels. In particular, in communication systems where reliable data transmission is essential, such as telecommunications and satellite communication, coding theory ensures that errors that occur can be detected and corrected, giving us a more robust transmission.
Definition 2.1 (Code alphabet; Code symbols; Code; Codeword). [6] Let A = F2 = {0, 1} be a finite set of size 2. We call A a code alphabet and its elements code symbols. Then
(i) a word of length n over A is a sequence of n code symbols, i.e. an element of A^n;
(ii) a nonempty subset C of A^n is called a code, and its elements are called codewords of C.
Remark. In general, we can take the code alphabet to be a finite field Fq of order q.
Example 2.1. A code over the code alphabet F2 = {0, 1} is called a binary code, and its code symbols are 0 and 1. The set of all binary words of length 3 is
F_2^3 = {000, 001, 010, 011, 100, 101, 110, 111}.
Suppose that we have encoded each spectral colour, together with white, as a word of length 3, as presented in Table 2.1. When the message 'Red', encoded as 001, is transmitted, the receiver is unable to tell whether the message was corrupted or not. To solve this problem, we can add some form of redundancy so that errors can be detected or corrected. First, let us append an extra digit so that each encoded word contains an even number of 1s, and suppose that only one error occurred during the transmission. If the receiver gets 1011, we can see directly that an error has occurred, since the received word is not among our codewords. However, we do not know whether 1011 comes from 0011, 1001, 1010 or 1111. If we introduce more redundancy, error correction becomes possible. Now, let us instead encode each message by repeating the original 3-bit word three times. In this case, we can be sure that a received word 101001001 containing a single bit error comes from 001001001, because 101001001 differs from every other codeword in more positions.
Table 2.1: Encodings of the eight colours.
Message   Encoding   Parity-check codeword   Repetition codeword
Red       001        0011                    001001001
Orange    010        0101                    010010010
Yellow    011        0110                    011011011
Green     100        1001                    100100100
Cyan      101        1010                    101101101
Blue      110        1100                    110110110
Violet    111        1111                    111111111
White     000        0000                    000000000
Remark. The above case shows that error correction comes at the cost of reduced transmission speed: a 9-bit message is transmitted instead of 3 bits in order to detect and correct a single error. The redundancy must be chosen with some care to maximize the detection or correction capability.
Definition 2.2 (Hamming distance). [6][9] Let x = x1 x2 . . . xn and y = y1 y2 . . . yn
be two words of length n over an alphabet A. The Hamming distance from x to y ,
denoted by d(x,y ), is defined to be the number of positions in which x and y differ.
Example 2.2. Let A = {0, 1}, and let x = 001001001 and y = 101101101 be two
words of length 9. Then d(x,y ) = 3 since there are three positions in which x and
y differ, i.e., the first, fourth and seventh position.
Definition 2.3 (Nearest neighbour decoding). [6] Let C be a binary code and suppose that a word x is received. For each codeword c ∈ C, we can compute the Hamming distance d(c, x). If there is a unique codeword c for which d(c, x) is minimal among all codewords in C, then we correct x to that codeword; otherwise, no correction can be made. This method is called nearest neighbour decoding.
Definition 2.4 (Minimum Distance). [6] Let C be any code containing at least two
words. Then the minimum distance of C is the smallest possible Hamming distance
between any two different codewords, denoted by
d(C) = min{d(x,y ) : x,y ∈ C, x ̸= y }.
A binary code of length n, size M and minimum distance d is thus referred to as an (n, M, d)-code.
Example 2.3. Let C = {000000000, 001001001, 101101101} be a binary code. Then
d(C) = 3 since
d(000000000, 001001001) = 3,
d(000000000, 101101101) = 6,
d(001001001, 101101101) = 3.
It is a binary (9, 3, 3)-code.
Assume that x = 001000001 is received. Then
d(000000000, 001000001) = 2,
d(001001001, 001000001) = 1,
d(101101101, 001000001) = 4.
By using nearest neighbour decoding, we decode x to 001001001.
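For readers who wish to experiment, nearest neighbour decoding of this small code can be checked directly in Mathematica. The following lines are a minimal sketch (not part of the thesis code); all functions used are standard built-ins.

code = {"000000000", "001001001", "101101101"};
received = "001000001";
First[MinimalBy[code, HammingDistance[received, #] &]]   (* "001001001" *)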
Theorem 2.1. A code with minimum distance d is exactly a ⌊(d − 1)/2⌋-error-correcting code.
The above theorem shows that the minimum distance of a code determines its error-correcting capability: a larger distance d allows more errors to be corrected.
Definition 2.5 (Hamming weight). [6] Let x be a word in F_2^n. The Hamming weight of x, denoted wt(x), is the number of nonzero coordinates in x; equivalently, wt(x) = d(x, 0), where 0 is the zero word.
Definition 2.6 (Minimum weight). [6] Let C be a code, then the minimum weight
of C is the smallest of the weights of the nonzero codewords of C, denoted as wt(C).
Example 2.4. Let C be the binary code defined from the previous example, then
wt(000000000) = 0,
wt(001001001) = 3,
wt(101101101) = 6.
Hence, wt(C) = 3.
3 Method
This bachelor thesis is going to investigate burst-error-correcting codes. For simplic-
ity, we will limit our investigation to binary codes only. Cyclic codes, as a subclass of
linear codes, are very efficient for correcting burst errors due to their rich algebraic
structures. There are some important classes among cyclic codes. In this work,
we focus on one of these, i.e. binary quadratic residue codes, and their capability
of correcting burst errors. The decoding algorithm we have chosen is called error
trapping method. The decoding method allows us to compute syndromes cyclically,
making it suitable to correct both random and burst errors. Since the syndromes of
a received word can be determined by the remainders, error patterns that are bursts
of certain length can be obtained. A linear code is able to correct all burst errors
of certain length if and only if all such burst errors lie in distinct cosets. Therefore,
by checking if there exist duplicate remainders, we can draw the conclusion if a
code can correct all burst errors of certain length as Reiger bound indicated. That
is to say, the capability of burst-error-correcting is checked by Reiger bound. The
program Mathematica has been used to assist us in this project.
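As an illustration of this duplicate-remainder check, the following Mathematica sketch (ours, not the thesis code) enumerates all cyclic bursts of length at most 4 and counts duplicate syndromes, assuming the [17, 9] QR code with generator polynomial g(x) = 1 + x^3 + x^4 + x^5 + x^8 that appears in Chapter 7.

n = 17;
g = 1 + x^3 + x^4 + x^5 + x^8;
(* burst patterns of length 1, 2, 3 and 4, anchored at position 0 *)
patterns = {1, 1 + x, 1 + x^2, 1 + x + x^2, 1 + x^3, 1 + x + x^3, 1 + x^2 + x^3, 1 + x + x^2 + x^3};
(* all p cyclic shifts of each pattern, reduced modulo x^n - 1 *)
bursts = Flatten[Table[PolynomialRemainder[x^i b, x^n - 1, x, Modulus -> 2],
    {i, 0, n - 1}, {b, patterns}]];
remainders = PolynomialRemainder[#, g, x, Modulus -> 2] & /@ bursts;
(* a nonzero count means two distinct bursts share a syndrome, i.e. lie in the same coset *)
Length[remainders] - Length[Union[remainders]]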
4 Linear codes
In Chapter 2, we defined a binary code of length n as a nonempty subset of F_2^n. Since such a code is just a set of vectors, we might need to list all the codewords in order to specify it, which is inconvenient and inefficient for encoding and decoding. However, if we add algebraic structure and require the code to be a vector space, then each codeword is a linear combination of the codewords in a basis, and the minimum distance of the code equals the minimum weight of its nonzero codewords [4]. Such codes are known as linear codes.
4.1 Definitions
Definition 4.1 (Vector spaces over finite fields). [6] Let F2 be the finite field of
order 2. A nonempty set V , together with vector addition and scalar multiplication,
is a vector space over F2 if for all u , v , w ∈ V and λ, µ ∈ F2 , it satisfies all of the
following conditions:
(i) u + v ∈ V;
(ii) (u + v) + w = u + (v + w);
(iii) there exists an element 0 ∈ V such that u + 0 = u;
(iv) for every u ∈ V there exists an element −u ∈ V such that u + (−u) = 0;
(v) u + v = v + u;
(vi) λv ∈ V;
(vii) λ(u + v) = λu + λv and (λ + µ)u = λu + µu;
(viii) (λµ)u = λ(µu).
Example 4.1. The code C1 = {0000, 0101, 0110} is not linear since 0101 + 0110 = 0011 ∉ C1, while C2 = {000, 001, 010, 011} is linear.
Theorem 4.1. [6] Let C be a linear code over F2. Then d(C) = wt(C).
Proof sketch. Let x′ and y′ be two distinct codewords with d(C) = d(x′, y′). Since C is linear, x′ − y′ is a nonzero codeword, so
d(C) = d(x′, y′) = wt(x′ − y′) ≥ wt(C).
Conversely, if z is a nonzero codeword of minimum weight, then wt(C) = wt(z) = d(z, 0) ≥ d(C), since z and 0 are distinct codewords.
Definition 4.2 (Linear code). A linear code C of length n over F2 is a subspace of F_2^n. If C has dimension k as a vector space, we call C an [n, k]-code; if its minimum distance d is also specified, we write [n, k, d]-code.
Definition 4.3 (Generator matrix). [6][9] Let C be an [n, k]-code. A k × n matrix G whose rows form a basis of C is called a generator matrix for C. The codewords in C are exactly the linear combinations of the rows of G, i.e.
C = {xG | x ∈ F_2^k}.
Definition 4.4 (Parity-check matrix). Let G be a k × n generator matrix for a linear code C. An (n − k) × n matrix H is called a parity-check matrix for C if
GH^T ≡ O (mod 2),
where O denotes the k × (n − k) zero matrix.
Definition 4.5 (Coset). Let C be a linear code of length n over F2 and let u ∈ F_2^n. The coset of C determined by u is the set
C + u = {v + u : v ∈ C} = u + C.
Theorem 4.3. [6] Let C be an [n, k]-linear code over the finite field F2. Then, for all u, v ∈ F_2^n:
(i) u ∈ C + u;
(ii) |C + u| = |C| = 2^k;
(iii) if u ∈ C + v, then C + u = C + v;
(iv) two cosets of C are either identical or disjoint;
(v) there are exactly 2^{n−k} distinct cosets of C, and together they partition F_2^n;
(vi) u and v lie in the same coset of C if and only if u − v ∈ C.
Example 4.5. Consider the binary linear code C = {0000, 1011, 0101, 1110}; let us use this [4, 2]-code to decode. First, we list the cosets of the code. We start with 0, writing down the codewords of C as the first row. Then, choosing any vector u of minimum weight not in the first row, we compute the coset C + u and list the result as the second row. We repeat the process, taking a vector of minimum weight that does not appear in the previous rows and computing its coset for the third row, and continue in this way until all the cosets are listed. Since the code C has dimension k = 2, there exist 2^{n−k} = 2^{4−2} = 4 cosets, each containing 2^k = 2^2 = 4 words. The cosets are listed below, with a coset leader first in each row:
C:        0000  1011  0101  1110
C + 0001: 0001  1010  0100  1111
C + 0010: 0010  1001  0111  1100
C + 1000: 1000  0011  1101  0110
We observe that 0001 + 1010 = 1011 is a codeword in C; thus 0001 and 1010 lie in the same coset C + 0001. In contrast, 0010 + 1000 = 1010 does not belong to C, so 0010 and 1000 lie in different cosets. Suppose that w = 1101 is received. We find that w lies in the fourth coset, whose unique coset leader is 1000, which we choose as the error pattern. Hence, 1101 − 1000 = 1101 + 1000 = 0101 was the most likely codeword transmitted. We also notice that the coset C + 0001 contains two words of minimum weight, either of which can be chosen as coset leader; in practice we pick one of them arbitrarily, so decoding in this case is not unique. As one might notice, the above decoding scheme becomes too slow when the code length n is large: the hardest parts are finding the coset containing the received word and then obtaining a word of least weight in that coset [4]. Fortunately, we can use syndromes to speed up the process.
Definition 4.6 (Syndrome). [6] Let C be an [n, k]-linear code over F2 and let H be a parity-check matrix for C. For any w ∈ F_2^n, the syndrome of w is the word S(w) = wH^T ∈ F_2^{n−k}.
Remark. A unique coset leader corresponds to an error pattern that can be corrected. All members of the same coset have the same syndrome. Furthermore, all error patterns e with wt(e) ≤ ⌊(d − 1)/2⌋ can be taken as coset leaders, and the syndrome of each of them can be computed.
Example 4.6. Let C be the code of Example 4.5 above, with parity-check matrix
H = ( 1 0 1 0
      1 1 0 1 ).
Since the word of least weight in the coset C + w is u = 1000, we compute the syndrome of the coset leader u as
uH^T = (1 0 0 0) ( 1 1
                   0 1
                   1 0
                   0 1 ) = (1 1) = wH^T.
We conclude that v = w + u = 1101 + 1000 = 0101 was the most likely codeword
transmitted, i.e. the first bit was in error.
Definition 4.7 (Syndrome look-up table). A table which matches each coset leader with its syndrome is called a syndrome look-up table.
Example 4.7. To construct a syndrome look-up table for decoding, we first list all
the cosets for the code and choose from each coset a word of least weight as coset
leader u . Then we find a parity-check matrix H for the code and, for each coset
leader u , compute its syndrome u H T .
Assume that a binary linear code has parity-check matrix
H = ( 1 0 1 1 0 0
      1 1 1 0 1 0
      0 1 1 0 0 1 ).
Since ⌊(d − 1)/2⌋ = 1, all error patterns of weight 0 or 1 can be chosen as coset leaders. After computing uH^T for each of them, the syndrome look-up table is as follows.
Coset leader u   Syndrome uH^T
000000           000
100000           110
010000           011
001000           111
000100           100
000010           010
000001           001
Suppose that w = 110111 is received. Then wH^T = 010, which is the sixth row of the syndrome look-up table, so the coset leader is u = 000010. We conclude that v = w + u = 110111 + 000010 = 110101 was the most likely codeword transmitted.
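This table look-up can be reproduced in Mathematica. The sketch below is our own illustration for the code above; the variable names are ours, not from the thesis.

H = {{1, 0, 1, 1, 0, 0}, {1, 1, 1, 0, 1, 0}, {0, 1, 1, 0, 0, 1}};
w = {1, 1, 0, 1, 1, 1};
leaders = Join[{ConstantArray[0, 6]}, IdentityMatrix[6]];        (* error patterns of weight 0 or 1 *)
table = Association[(Mod[#.Transpose[H], 2] -> #) & /@ leaders]; (* syndrome -> coset leader *)
s = Mod[w.Transpose[H], 2];    (* {0, 1, 0} *)
u = table[s];                  (* {0, 0, 0, 0, 1, 0} *)
Mod[w + u, 2]                  (* {1, 1, 0, 1, 0, 1}, i.e. the codeword 110101 *)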
5 Cyclic codes
Cyclic codes were first introduced by E. Prange in 1957 [6]. These codes are easy to implement on a computer using so-called shift registers, and algebraic coding theorists have since discovered many interesting algebraic structures and properties of them. For example, the set of codewords of a cyclic code forms an ideal in a certain quotient ring over a finite field (Theorem 5.1). This property allows cyclic codes to correct not only random errors but also burst errors effectively. Since only binary codes are discussed here, polynomial arithmetic is performed modulo 2; in particular,
(a + b)^2 = a^2 + 2ab + b^2 ≡ a^2 + b^2 (mod 2).
5.1 Definitions
Let us first recall some important concepts in the study of cyclic codes.
Definition 5.1 (Ideal). [6] Let R be a commutative ring with unity. A nonempty
subset I of R is called an ideal if
(i) a + b belongs to I, for all a, b ∈ I;
(ii) ra ∈ I, for all r ∈ R and a ∈ I.
Definition 5.2 (Principal ideal). [6] Let R be a commutative ring with unity. An ideal I of R is called a principal ideal if there exists an element g ∈ I such that I = ⟨g⟩, where
⟨g⟩ := {gr : r ∈ R}.
Definition 5.3 (Cyclic; Cyclic code). [6] If (an−1 , a0 , a1 , . . . , an−2 ) ∈ S whenever
(a0 , a1 , . . . , an−1 ) ∈ S, where S is a subset of Fn2 , then S is a cyclic set. A linear
code C is called a cyclic code if C is a cyclic set.
Example 5.1. Let us check the following codes.
1. The code C1 = {00000, 10011, 01001, 00110, 11011, 10101, 01111, 11100} is not a cyclic code, since 11100 ∈ C1 but 01110 ∉ C1.
2. The code C2 = {0000, 0111, 1011, 1101, 1110} is not a cyclic code since it is
not a linear code.
3. The code C3 = {000, 101, 011, 110} is a cyclic code where every cyclic right
shift of a codeword is again a codeword.
If C is a cyclic code, we can express each codeword c = c0c1 . . . c_{n−1} as a polynomial in F2[x] via the correspondence
π : F_2^n → F2[x]/(x^n − 1),   (c0, c1, . . . , c_{n−1}) ↦ c0 + c1x + · · · + c_{n−1}x^{n−1}.
We say that π is an F2 -linear transformation of vector spaces over F2 . The following
theorem follows.
Theorem 5.1. [6] A nonempty subset C of Fn2 is a cyclic code if and only if π(C)
is an ideal of F2 [x]/(xn − 1).
Example 5.2. Given a cyclic code C = {000, 110, 101, 011}, then we have π(C) =
{0, 1 + x, 1 + x2 , x + x2 }.
By this correspondence, a cyclic code of length n can be regarded as an ideal of the ring Rn = F2[x]/(x^n − 1). Since
x^n ≡ 1 (mod x^n − 1),
we can replace x^n by 1, x^{n+1} by x, and so on.
Example 5.3. Consider the ring Rn with n = 5, and let us compute the product (x^3 + x + 1)(x^4 + 1) in R5. Instead of using long division, we can apply x^5 ≡ 1 (mod x^5 − 1) directly:
(x^3 + x + 1)(x^4 + 1) = x^7 + x^5 + x^4 + x^3 + x + 1
≡ x^2 + 1 + x^4 + x^3 + x + 1
≡ x^4 + x^3 + x^2 + x (mod x^5 − 1),
since the two constant terms cancel modulo 2.
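This reduction can be checked with a one-line Mathematica computation (our own check, not part of the original example):

PolynomialRemainder[(x^3 + x + 1) (x^4 + 1), x^5 - 1, x, Modulus -> 2]   (* x + x^2 + x^3 + x^4 *)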
Now, consider the polynomial of a codeword,
c(x) = c0 + c1x + · · · + c_{n−1}x^{n−1},
in Rn. When we multiply c(x) by x, we obtain
x · c(x) = c0x + c1x^2 + · · · + c_{n−1}x^n = c_{n−1} + c0x + · · · + c_{n−2}x^{n−1}.
This shows that multiplying by x is equivalent to performing a single cyclic shift. Similarly, multiplying by x^m corresponds to m cyclic shifts [5].
Definition 5.4 (Reducible and irreducible). [6] A polynomial f(x) is said to be reducible over a field if there exist polynomials g(x) and h(x), with deg(g(x)) < deg(f(x)) and deg(h(x)) < deg(f(x)), such that f(x) = g(x)h(x). Otherwise, f(x) is said to be irreducible.
Definition 5.5 (Generator polynomial). [5][6] The unique monic polynomial of least degree in a nonzero ideal I of Fq[x]/(x^n − 1) is called the generator polynomial of I. Correspondingly, in a nonzero cyclic code C, the polynomial g(x) of least degree is the generator polynomial of C.
Example 5.4. Let us find all binary cyclic codes of length 3. We factorize the polynomial x^3 − 1 ∈ F2[x] as
x^3 − 1 = (x + 1)(x^2 + x + 1).
Therefore, the divisors of x^3 − 1, i.e. the possible generator polynomials, are 1, 1 + x, 1 + x + x^2 and 1 + x^3. The ideals in
R3 = F2[x]/(x^3 − 1) = {0, 1, x, 1 + x, x^2, 1 + x^2, x + x^2, 1 + x + x^2}
and the corresponding cyclic codes are listed in Table 5.3.
Divisor Ideal Code
1 ⟨1⟩ {000, 100, 010, 110, 001, 101, 011, 111}
1+x ⟨1 + x⟩ = {0, 1 + x, x + x2 , 1 + x2 } {000, 110, 011, 101}
1 + x + x2 ⟨1 + x + x2 ⟩ = {0, 1 + x + x2 } {000, 111}
1 + x3 ⟨1 + x3 ⟩ = {0} {000}
Theorem 5.2. [9] Let C be an ideal in Rn , i.e. a cyclic code of length n, then the
generator polynomial g(x) divides xn − 1.
Proof. Dividing x^n − 1 by g(x) gives
x^n − 1 = q(x)g(x) + r(x),
where deg(r(x)) < deg(g(x)). In the ring Rn we have x^n − 1 ≡ 0, so r(x) ≡ −q(x)g(x) belongs to the code C = ⟨g(x)⟩. Since g(x) is a nonzero polynomial of least degree in C and deg(r(x)) < deg(g(x)), it follows that r(x) = 0. Hence, g(x) | x^n − 1.
Remark. A cyclic code can be generated by a polynomial that is not its generator polynomial [5]. Finding all cyclic codes of length n can be done by factorizing x^n − 1 into irreducible polynomials, since g(x) always divides x^n − 1, as shown in the theorem above.
5.2 Burst errors
In coding theory, we often assume that errors in transmission are independent of one
another, and are randomly distributed. Such errors are known as random errors.
Unfortunately, this assumption is not realistic. There are communication channels
where the errors happen in short intervals, namely in bursts. These errors are called
burst errors. Codes for correcting such errors are called burst-error-correcting codes.
Cyclic codes, due to their richer algebraic structures, are very efficient for correcting
burst errors compared to linear codes.
Definition 5.6 (Cyclic run). [6] Let w ∈ Fnq be a word of length n. A cyclic run of 0
of length l of w is a succession of l cyclically consecutive zeros among the digits of
w.
Definition 5.7 (Burst). [6] A burst of length l is a binary vector whose nonzero bits are confined to l consecutive positions, the first and last of which are nonzero.
Example 5.5. 0000111010 is a burst of length 5.
Definition 5.8 (Cyclic burst). A binary vector v is said to be a cyclic burst of
length l, if l is the shortest burst length among the set of all shifted vectors of v .
Example 5.6. 110010000000110 is a burst of length 8 or 14, depending on which
bit we start to count. However, the shortest burst length is 8. Hence, it is a cyclic
burst of length 8.
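The cyclic burst length of a binary vector can also be computed mechanically. The helper below is a hypothetical illustration in Mathematica (not part of the thesis code): it takes the minimum, over all cyclic shifts, of the span between the first and last nonzero bit.

(* hypothetical helper: cyclic burst length of a 0/1 vector *)
cyclicBurstLength[v_List] := Min[Table[
    With[{pos = Flatten[Position[RotateLeft[v, i], 1]]},
      If[pos === {}, 0, Max[pos] - Min[pos] + 1]],
    {i, 0, Length[v] - 1}]];
cyclicBurstLength[{1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0}]   (* 8, as in Example 5.6 *)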
5.3 Reiger bound
The Reiger bound states that an [n, k]-linear code which corrects all burst errors of length l or less must satisfy n − k ≥ 2l, i.e. l ≤ ⌊(n − k)/2⌋. A sketch of the argument runs as follows. Let u1, u2, . . . , u_{n−k+1} denote the first n − k + 1 columns of a parity-check matrix H of the code. Since H has only n − k rows, these columns are linearly dependent, so there exist c1, c2, . . . , c_{n−k+1} ∈ F2, not all zero, such that Σ_{i=1}^{n−k+1} c_i u_i = 0. We know that c is a codeword in C if and only if cH^T = 0. Hence, (c1, c2, . . . , c_{n−k+1}, 0, . . . , 0) is a codeword, and it is also a cyclic burst of length at most n − k + 1. On the other hand, if a nonzero cyclic burst is a codeword of an l-cyclic-burst-error-correcting code, then the length of the burst must be greater than 2l, giving n − k ≥ 2l as desired.
Suppose that w(x) = w0 + w1x + · · · + w_{n−1}x^{n−1} is the polynomial of the received word. Then the syndrome s = wH^T of w corresponds to the polynomial
s(x) = s0 + s1x + · · · + s_{n−k−1}x^{n−k−1},
and one can show that w(x) − s(x) is divisible by g(x). Hence, w(x) ≡ s(x) (mod g(x)). Since deg s(x) < n − k = deg g(x), the result follows: the syndrome polynomial of a received word is its remainder upon division by g(x).
For any [n, k]-cyclic code, a received word can be represented by a polynomial
w(x) of degree at most n−1, while the syndrome can be represented by a polynomial
s(x) of degree at most n − k − 1. By coset decoding, one might expect w(x) − s(x) to be the nearest codeword to w(x). However, if an error occurs among the last k digits of the received word, this subtraction fails to find the nearest codeword, since only the first n − k digits are corrected. The error trapping method is one way to avoid this problem: we look for the error pattern within the first n − k digits by making cyclic shifts of the received word.
whence s0 (x) = x2 + 1, which is not an error pattern. From s0 (x), we compute the
next syndrome as
which is an error pattern; thus the least m = 1 is obtained.
If C corrects all burst errors of length at most l, the Reiger bound gives n − k ≥ 2l, so that
k ≤ n − 2l ≤ n − l.
Error trapping only requires that, after some cyclic shift, the whole error pattern is confined to the first n − k positions, i.e. that the error pattern has a cyclic run of 0s of length at least k. A burst of length at most l leaves such a run of length n − l ≥ k, so the error trapping method is also capable of correcting burst errors; the only change is that the weight condition on the error pattern is replaced by a condition on its burst length. The modified decoding algorithm is presented as follows [6].
Let C be a binary [n, k]-cyclic code with generator polynomial g(x), e(x) be an
error pattern with a cyclic burst of length at most l. Then e(x) can be determined
by:
1: Compute the syndromes si (x) recursively of xi w(x) for i = 0, 1, 2, . . . ;
2: Find the smallest integer m such that the syndrome sm (x) is a cyclic burst of
length at most l;
3: Compute e(x) = xn−m sm (x) in Fq [x]/(xn − 1). Decode w(x) to w(x) − e(x).
Example 5.8. Assume that we are using a communication channel where burst errors can occur. Let C be the binary [15, 9]-cyclic code generated by g(x) = 1 + x^3 + x^4 + x^5 + x^6, and consider the received word
w(x) = 1 + x^2 + x^4 + x^5 + x^6 + x^8 + x^10 + x^11 + x^12, i.e. w = 101011101011100.
We assume that C is 3-cyclic-burst-error-correcting, i.e., all burst errors of length 3 or less can be corrected. By long division we obtain s0(x) = 1 + x + x^2 + x^3, which is a burst of length 4 and hence not a burst of length at most 3. Computing the syndromes recursively via s_{i+1}(x) ≡ x · s_i(x) (mod g(x)) gives s1(x) = x + x^2 + x^3 + x^4, s2(x) = x^2 + x^3 + x^4 + x^5 and s3(x) = 1,
and we have found the least m = 3. Thus, we get
e(x) = x15−3 s3 (x) = x12 ,
so the corrected codeword is
c(x) = w(x) − e(x)
= (1 + x2 + x4 + x5 + x6 + x8 + x10 + x11 + x12 ) − x12
= 1 + x2 + x4 + x5 + x6 + x8 + x10 + x11
= 101011101011000.
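The decoding steps of this example can be reproduced in Mathematica. The sketch below is our own illustration of the algorithm; it only checks for bursts that do not wrap around inside the syndrome, which suffices here.

n = 15; l = 3;
g = 1 + x^3 + x^4 + x^5 + x^6;
w = 1 + x^2 + x^4 + x^5 + x^6 + x^8 + x^10 + x^11 + x^12;
burstQ[s_] := Module[{pos = Flatten[Position[CoefficientList[s, x, n], 1]]},
   pos =!= {} && Max[pos] - Min[pos] + 1 <= l];                 (* burst of length at most l? *)
m = 0; s = PolynomialRemainder[w, g, x, Modulus -> 2];          (* s0(x) = 1 + x + x^2 + x^3 *)
While[m < n && ! burstQ[s],
  m++; s = PolynomialRemainder[x s, g, x, Modulus -> 2]];       (* stops at m = 3 with s = 1 *)
e = PolynomialRemainder[x^(n - m) s, x^n - 1, x, Modulus -> 2]; (* e(x) = x^12 *)
Expand[w - e, Modulus -> 2]                                     (* the corrected codeword *)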
6 Quadratic-residue codes
The quadratic residue codes, first studied by E. Prange [6], form a special class of cyclic codes. Such a code is obtained by choosing the generator polynomial properly: the construction starts from an odd prime p and the set of quadratic residues modulo p. Since quadratic residue codes share the algebraic structure and properties of cyclic codes, they also retain the capability of correcting burst errors.
Theorem 6.2. [9] Let Qp denote the set of all quadratic residues mod p and Np the set of all quadratic nonresidues mod p. Then the set Qp has size (p − 1)/2 and is equal to
Qp = {1^2, 2^2, . . . , ((p − 1)/2)^2},
where all numbers are taken modulo p. Hence, the set Np also has size (p − 1)/2, i.e.
|Qp| = |Np| = (p − 1)/2.
Example 6.3. Consider the finite field F7. We know that 3 is a primitive element, since its powers 3^1, 3^2, . . . , 3^6 are congruent to 3, 2, 6, 4, 5, 1 modulo 7 and thus exhaust all nonzero elements of F7. The even powers give the quadratic residues Q7 = {1, 2, 4}, and the odd powers give the quadratic nonresidues N7 = {3, 5, 6}.
Theorem 6.3. Let p be an odd prime. Then:
(i) The product of two nonzero quadratic residues modulo p is a quadratic residue modulo p.
(ii) The product of two quadratic nonresidues modulo p is a quadratic residue modulo p.
(iii) The product of a nonzero quadratic residue modulo p and a quadratic nonresidue modulo p is a quadratic nonresidue modulo p.
(iv) There are exactly (p − 1)/2 nonzero quadratic residues modulo p and (p − 1)/2 quadratic nonresidues modulo p; therefore Fp = {0} ∪ Qp ∪ Np.
In particular, if α ∈ Qp and β ∈ Np, then
αQp = {αr : r ∈ Qp} = Qp,   βQp = {βr : r ∈ Qp} = Np,
αNp = {αn : n ∈ Np} = Np   and   βNp = {βn : n ∈ Np} = Qp.
Proof. (i) Let g be a primitive element of Fp, and let γ and θ be two quadratic residues modulo p. Then there exist integers i and j such that γ = g^{2i} and θ = g^{2j}. Hence, γθ = g^{2(i+j)} is a quadratic residue modulo p, proving (i).
(ii) Similarly, let µ and ν be two quadratic nonresidues modulo p. Then there exist integers i and j such that µ = g^{2i−1} and ν = g^{2j−1}. Hence, µν = g^{2(i+j−1)} is a quadratic residue modulo p, giving (ii).
(iv) We know that all the nonzero quadratic residues modulo p can be written as
{g^{2i} : i = 0, 1, . . . , (p − 3)/2},
and all the quadratic nonresidues as
{g^{2i−1} : i = 1, 2, . . . , (p − 1)/2}.
Each of these sets contains (p − 1)/2 elements, which proves (iv).
Definition 6.4 (Splitting field). The splitting field of a polynomial over a field F is the smallest extension field of F that contains all the roots of the polynomial.
Example 6.4. Let us compute the splitting field of f(x) = x^2 + x + 1 ∈ F2[x]. Since f(x) has no roots in F2, it is irreducible in F2[x], and F2[x]/(x^2 + x + 1) = {0, 1, x, 1 + x}. Putting α = x, we get F2[α] = {0, 1, α, 1 + α}, where
x^2 + x + 1 ≡ 0 (mod x^2 + x + 1) ⟺ α^2 + α + 1 = 0.
Moreover,
(1 + α)^2 + (1 + α) + 1 = 1 + α^2 + 1 + α + 1 = α^2 + α + 1 = 0 over F2,
so both α and 1 + α are roots of f(x). Hence the splitting field of f(x) is F2[α], a field with four elements.
6.2 Binary quadratic residue codes
Let p be an odd prime such that 2 is a quadratic residue modulo p (by Theorem 6.4 below, these are exactly the primes of the form p = 8m ± 1). By Fermat's little theorem there exists an integer m ≥ 1 such that 2^m − 1 is divisible by p, i.e., 2^m ≡ 1 (mod p). Let θ be a primitive element of F_{2^m}; since p | 2^m − 1, this field contains all the roots of x^p − 1, which form a cyclic group of order p. Take α = θ^((2^m − 1)/p); then α is a primitive p-th root of unity. The polynomials
gQ(x) := ∏_{r∈Qp} (x − α^r)   and   gN(x) := ∏_{n∈Np} (x − α^n)
are divisors of x^p − 1, and their coefficients in fact lie in F2 (Lemma 6.1 below). It follows that
x^p − 1 = (x − 1) gQ(x) gN(x).
Binary quadratic residue codes, denoted Q(p) = ⟨gQ(x)⟩ and N(p) = ⟨gN(x)⟩ [6], are the binary cyclic codes of length p over F2 generated by the polynomials gQ(x) and gN(x), respectively. By Theorem 6.2 we know that
|Qp| = |Np| = (p − 1)/2,
and the sets Qp and Np partition {1, 2, . . . , p − 1} into two parts of equal size. It follows that
dim(Q(p)) = p − deg(gQ(x)) = p − |Qp| = (p + 1)/2
and
dim(N(p)) = p − deg(gN(x)) = p − |Np| = (p + 1)/2,
showing that the dimensions of both Q(p) and N(p) are equal to (p + 1)/2.
6.3 Construction
Lemma 6.1. The polynomials gQ (x) and gN (x) belong to F2 [x].
Proof. It is sufficient to show that each coefficient of gQ(x) and gN(x) belongs to F2. Let gQ(x) = a0 + a1x + · · · + akx^k, where ai ∈ F_{2^m} and k = (p − 1)/2. Raising each coefficient to its second power, we have
a0^2 + a1^2 x + · · · + ak^2 x^k = ∏_{r∈Qp} (x − α^{2r}) = ∏_{j∈2Qp} (x − α^j) = ∏_{j∈Qp} (x − α^j) = gQ(x),
where 2Qp = Qp by Theorem 6.3(i), since 2 ∈ Qp. Hence ai = ai^2 for all 0 ≤ i ≤ k, meaning that the ai are elements of F2. Thus gQ(x) is a polynomial over F2.
Similarly, let gN(x) = a0 + a1x + · · · + alx^l with l = (p − 1)/2. Then
a0^2 + a1^2 x + · · · + al^2 x^l = ∏_{n∈Np} (x − α^{2n}) = ∏_{j∈2Np} (x − α^j) = ∏_{j∈Np} (x − α^j) = gN(x),
where 2Np = Np by Theorem 6.3(iii), giving that gN(x) is also a polynomial over F2.
The previous Lemma 6.1 shows that each coefficient of gQ(x) and gN(x) belongs to F2. Let us now use this property to construct a QR code.
Example 6.6. Let p = 7; then 2 is a quadratic residue modulo 7 by Example 6.3, which showed that {1, 2, 4} are the quadratic residues modulo 7 and {3, 5, 6} the quadratic nonresidues. The polynomial f(x) = 1 + x + x^3 is irreducible over F2, so
F_{2^3} = F8 = F2[x]/⟨1 + x + x^3⟩ = {a + bθ + cθ^2 | a, b, c ∈ F2, θ^3 = θ + 1}.
Let α be a root of 1 + x + x^3 in this field; then the order of α is 7 and α^3 = α + 1. Thus α is a primitive 7th root of unity, having 1 + x + x^3 as its minimal polynomial over F2. The polynomials gQ(x) and gN(x) are computed as follows:
gQ(x) = ∏_{r∈Q7} (x − α^r) = (x + α)(x + α^2)(x + α^4)
      = x^3 + (α + α^2 + α^4)x^2 + (α^3 + α^5 + α^6)x + α^7.
Using α^3 = α + 1 we get α^4 = α^2 + α, α^5 = α^2 + α + 1, α^6 = α^2 + 1 and α^7 = 1, so that α + α^2 + α^4 = 0 and α^3 + α^5 + α^6 = 1 over F2. Hence
gQ(x) = x^3 + x + 1.
Similarly,
gN(x) = ∏_{n∈N7} (x − α^n) = (x + α^3)(x + α^5)(x + α^6)
      = x^3 + (α^3 + α^5 + α^6)x^2 + (α^8 + α^9 + α^11)x + α^14
      = x^3 + x^2 + (α + α^2 + α^2 + α)x + 1
      = x^3 + x^2 + 1.
Hence, x^7 − 1 = (x − 1)gQ(x)gN(x). Based on this factorization, we
obtain the following codes:
⟨gQ (x)⟩ = ⟨x3 + x + 1⟩
= {f (x)(x3 + x + 1) | deg f (x) ≤ 3}
= {0000000, 1101000, 0110100, 0011010, 0001101, 1000110,
0100011, 1010001, 1011100, 1110010, 1100101, 0101110,
0010111, 1001011, 0111001, 1111111}
and, analogously, ⟨gN(x)⟩ = ⟨x^3 + x^2 + 1⟩ = {f(x)(x^3 + x^2 + 1) | deg f(x) ≤ 3}, which also consists of 16 codewords; both are binary [7, 4]-codes.
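The p = 7 construction can also be verified in Mathematica. The following sketch is ours and assumes the generator polynomial gQ(x) = 1 + x + x^3 derived above.

Factor[x^7 - 1, Modulus -> 2]    (* (1 + x) (1 + x + x^3) (1 + x^2 + x^3) *)
gQ = 1 + x + x^3;
(* all 16 codewords of the code generated by gQ, as length-7 coefficient vectors *)
codeQ = Union[Table[CoefficientList[PolynomialMod[Expand[f gQ], 2], x, 7],
    {f, Tuples[{0, 1}, 4].{1, x, x^2, x^3}}]];
Length[codeQ]                    (* 16, matching the list above *)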
Theorem 6.4. [6] For an odd prime p, 2 is a quadratic residue modulo p if and
only if p is of the form p = 8m ± 1.
Corollary 6.1. [6] There exist binary quadratic residue codes of length p if and
only if p is a prime of the form p = 8m ± 1.
6.4 Idempotent
We have shown that cyclic codes of odd length n can be obtained from a factorization
of xn − 1 into monic irreducible factors over Fq . However, factoring xn − 1, involving
finding a primitive n-th root of unity, is not always easy when the code length n
increases. An alternative approach is to use idempotents.
Definition 6.6 (Idempotent). [9] A polynomial e(x) is said to be idempotent in Rn if e(x)^2 ≡ e(x) in Rn.
Example 6.7. The polynomial x^3 + x^5 + x^6 is an idempotent in R7, since
(x^3 + x^5 + x^6)^2 = x^6 + x^10 + x^12 ≡ x^3 + x^5 + x^6 (mod x^7 − 1).
Theorem 6.5. [9] Let C be a cyclic code in Rn with generator polynomial g(x), and let h(x) be the polynomial such that g(x)h(x) = x^n − 1 in F2[x]. If gcd(h(x), g(x)) = 1, then there exist polynomials a(x) and b(x) for which
a(x)g(x) + b(x)h(x) = 1.
The polynomial e(x) = a(x)g(x) mod (x^n − 1) has the following properties:
(i) e(x) is the unique identity in C, i.e. e(x)c(x) = c(x) in Rn for every c(x) ∈ C;
(ii) e(x) is the unique polynomial in C that is both idempotent and generates C, i.e. C = ⟨e(x)⟩.
Definition 6.7 (Generating idempotent). [9] The polynomial e(x) defined as the
previous theorem is called the generating idempotent of C.
The next theorem shows how to compute g(x) from e(x).
Theorem 6.6. [9] The generator polynomial of the code ⟨e(x)⟩ is g(x) = gcd(e(x), x^n − 1).
Proof. Since the generator polynomial g(x) of a cyclic code in Rn divides x^n − 1, we can write x^n − 1 = g(x)h(x). Together with e(x) ≡ a(x)g(x) (mod x^n − 1) from the definition, this gives
gcd(e(x), x^n − 1) = gcd(a(x)g(x), h(x)g(x)).
Since a(x)g(x) + b(x)h(x) = 1, the polynomials a(x) and h(x) are relatively prime, so this gcd is equal to g(x).
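As a quick sanity check (our own, using the idempotent of Example 6.7), Theorem 6.6 can be illustrated in Mathematica:

e = x^3 + x^5 + x^6;
PolynomialRemainder[Expand[e^2], x^7 - 1, x, Modulus -> 2]   (* x^3 + x^5 + x^6, so e(x) is idempotent *)
PolynomialGCD[e, x^7 - 1, Modulus -> 2]                      (* 1 + x^2 + x^3 = gN(x) from Example 6.6 *)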
Theorem 6.7. [9] Let C be a cyclic code in Rn with generator polynomial g(x) and
generating idempotent e(x), then g(x) and e(x) have exactly the same roots, in the
splitting field for xn − 1, from among the n-th roots of unity. Furthermore, if f (x) is
an idempotent in Rn that has exactly the same roots as g(x) from among the n-th
roots of unity, then f (x) is the generating idempotent of ⟨g(x)⟩.
Now, let us use the previous theorem to determine the generating idempotent of
a binary quadratic residue code. Putting
e(x) = Σ_{r∈Qp} x^r.
For s ∈ Qp we have e(α^s) = Σ_{r∈Qp} α^{sr} = Σ_{r∈Qp} α^r = e(α), since sQp = Qp, and if n ∈ Np, then
e(α^n) = Σ_{r∈Qp} α^{nr} = Σ_{m∈Np} α^m ≠ e(α),
since nQp = Np. So e(α^s) takes two distinct constant values on the sets Qp and Np, respectively. Hence, we must have either
1. e(α^s) = 0 for s ∈ Qp and e(α^s) = 1 for s ∈ Np, or
2. e(α^s) = 1 for s ∈ Qp and e(α^s) = 0 for s ∈ Np.
If the latter prevails, then by putting β = α^v for some v ∈ Np, we get e(β^s) = e(α^{vs}) = 0 for all s ∈ Qp. Hence, we can always arrange case 1 by replacing α with a different primitive p-th root of unity.
Now, in the splitting field, which has characteristic 2, we have
e(1) = (p − 1)/2 = 1 if p = 8m − 1 and e(1) = (p − 1)/2 = 0 if p = 8m + 1,
since (p − 1)/2 is odd in the first case and even in the second.
This can again split into two cases:
1. If p = 8m − 1, then e(α^s) = 0 for s ∈ Qp and e(α^s) = 1 for s ∈ Np ∪ {0}.
2. If p = 8m + 1, then e(α^s) = 0 for s ∈ Qp ∪ {0} and e(α^s) = 1 for s ∈ Np.
According to Theorem 6.7, e(x) in the former case is the generating idempotent for ⟨gQ⟩, while in the latter case it is the generating idempotent for ⟨(x − 1)gQ⟩.
The following theorem shows how to find an idempotent for ⟨gN⟩.
Theorem 6.8. [9] Let
e(x) = Σ_{r∈Qp} x^r   and   f(x) = Σ_{n∈Np} x^n.
If r ∈ Qp, then gQ(α^r) = 0, since α^r is a root of gQ(x). Moreover, (1 + f(x))^2 = 1 + f(x)^2 ≡ 1 + f(x), so 1 + f(x) is idempotent. Since
e(x) + 1 + f(x) = Σ_{i=0}^{p−1} x^i = (x^p − 1)/(x − 1),
we get, for every s not divisible by p,
e(α^s) + (1 + f(α^s)) = ((α^s)^p − 1)/(α^s − 1) = 0.
By Theorem 6.7, we know that the generating idempotent and the generator polynomial have exactly the same roots among the p-th roots of unity; for ⟨gQ⟩ these are the roots α^r with r ∈ Qp, i.e. the roots of gQ(x). Every root of gQ(x) must also be a root of the idempotent, i.e. gQ(x) divides the idempotent. We want an idempotent having the same roots as gQ(x) but not having 1 as a root; if the idempotent does have 1 as a root, it is instead the generating idempotent of another code, namely ⟨(x − 1)gQ(x)⟩.
For example, let p = 23 = 8 · 3 − 1, and let θ be a root of 1 + x + x^3 + x^5 + x^11 in F2[x]; then θ is a primitive element of F_{2^11}. Since
(2^((23−1)/2) − 1)/23 = (2^11 − 1)/23 = 89,
the element α := θ^89 has order 23, i.e. α is a primitive 23rd root of unity. Then we have the factorization
x^23 − 1 = (x − 1)(x^11 + x^10 + x^6 + x^5 + x^4 + x^2 + 1)(x^11 + x^9 + x^7 + x^6 + x^5 + x + 1),
where the two factors of degree 11 are gQ(x) and gN(x) (which one is which depends on the choice of α); this exhibits the binary [23, 12] Golay code as a quadratic residue code.
By Corollary 6.1, we know that binary quadratic residue codes of prime length p exist if and only if p = 8m ± 1. For such a code, n = p and k = (p + 1)/2, so n − k = (p − 1)/2, and the Reiger bound l ≤ ⌊(n − k)/2⌋ can be expressed in two cases:
(i) p = 8m + 1 : l ≤ (p − 1)/4, and
(ii) p = 8m − 1 : l ≤ (p − 3)/4,
where l must be an integer.
A binary cyclic burst error has 1s in its first and last positions. The only bursts of length 1 and 2 are 1 and 11, respectively, and for a burst of length greater than 2 each of the positions lying in between can be chosen in two ways. Considering the p cyclic shifts of each pattern, we have
p · (1 + 2^0 + 2^1 + · · · + 2^{l−2})
possible combinations. Thus, the number of burst errors of length l or less in such codes is
p · (1 + 2^0 + 2^1 + · · · + 2^{l−2}) = p (1 + Σ_{i=0}^{l−2} 2^i) = p (1 + 2^{l−1} − 1) = p · 2^{l−1},
which is at most p · 2^{(p−1)/4 − 1} if p = 8m + 1 and at most p · 2^{(p−3)/4 − 1} if p = 8m − 1.
According to Theorem 4.3(v), the number of cosets is
2^{n−k} = 2^{(p−1)/2}.
When p = 8m + 1, the number of burst errors is at most p · 2^{(p−1)/4 − 1} = p · 2^{(p−5)/4}. Comparing the two quantities, we obtain
lim_{p→∞} 2^{(p−1)/2} / (p · 2^{(p−5)/4}) = lim_{p→∞} 2^{(p+3)/4} / p = lim_{p→∞} 2^{3/4} · 2^{p/4} / p = ∞,
showing that 2^{p/4} increases exponentially faster than p as p increases. This implies that the number of cosets grows much faster than the number of burst errors. Similarly, when p = 8m − 1, the number of burst errors is at most p · 2^{(p−3)/4 − 1} = p · 2^{(p−7)/4}, and
lim_{p→∞} 2^{(p−1)/2} / (p · 2^{(p−7)/4}) = lim_{p→∞} 2^{(p+5)/4} / p = lim_{p→∞} 2^{5/4} · 2^{p/4} / p = ∞,
giving the same result.
Table 6.4 lists the number of error patterns and cosets of the first ten binary
quadratic residue codes.
Table 6.4: Burst error patterns and cosets for the first ten binary quadratic residue codes.
Code       l = ⌊(n−k)/2⌋      No. of burst error patterns   No. of cosets
[7, 4]     ⌊(7−4)/2⌋ = 1      1 · 7 = 7                     2^(7−4) = 8
[17, 9]    ⌊(17−9)/2⌋ = 4     8 · 17 = 136                  2^(17−9) = 256
[23, 12]   ⌊(23−12)/2⌋ = 5    16 · 23 = 368                 2^(23−12) = 2048
[31, 16]   ⌊(31−16)/2⌋ = 7    64 · 31 = 1984                2^(31−16) = 32768
[41, 21]   ⌊(41−21)/2⌋ = 10   512 · 41 = 20992              2^(41−21) = 1048576
[47, 24]   ⌊(47−24)/2⌋ = 11   1024 · 47 = 48128             2^(47−24) = 8388608
[71, 36]   ⌊(71−36)/2⌋ = 17   65536 · 71 = 4653056          2^(71−36) = 34359738368
[73, 37]   ⌊(73−37)/2⌋ = 18   131072 · 73 = 9568256         2^(73−37) = 68719476736
[79, 40]   ⌊(79−40)/2⌋ = 19   262144 · 79 = 20709376        2^(79−40) = 549755813888
[89, 45]   ⌊(89−45)/2⌋ = 22   2097152 · 89 = 186646528      2^(89−45) = 17592186044416
7 Results
If a cyclic code is said to be l-burst-error-correcting, then the Reiger bound must be satisfied. The Reiger bound indicates the optimal burst length that can, in theory, be corrected. However, the Reiger bound cannot guarantee that a linear code actually is an l-burst-error-correcting code. Take the quadratic residue [17, 9] code as an example: if the Reiger bound were attained with equality, the code would be expected to correct all burst errors of length 4.
With the help of Mathematica, we list the remainders of all burst errors of length 1, 2, 3 and 4 in a table; the Mathematica code for decoding the [17, 9] QR code can be found in the appendix. By the error trapping method we have found 17 duplicate remainders for this code. For instance, the remainder 1 + x^2 + x^3 appears at positions 2 and 47 among the remainders of the bursts of length 4. This implies that for all i = 0, 1, 2, . . . , 16, the two burst errors x^i(1 + x^2 + x^3) and x^{11+i}(1 + x + x^3) have the same syndrome, so by Theorem 4.3 their difference must be a codeword. Taking i = 0, this codeword is
w(x) = (1 + x^2 + x^3) + x^11(1 + x + x^3) = 1 + x^2 + x^3 + x^11 + x^12 + x^14.
It is easy to check that w(x) is a multiple of the generator polynomial g(x) = 1 + x^3 + x^4 + x^5 + x^8, so the two burst errors belong to the same coset. This counterexample shows that the [17, 9] QR code cannot be a 4-burst-error-correcting code by Theorem 5.3.
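The duplicate pair can be checked directly with two lines of Mathematica (our own verification, using the generator polynomial stated above):

g = 1 + x^3 + x^4 + x^5 + x^8;
PolynomialRemainder[1 + x^2 + x^3, g, x, Modulus -> 2]                (* 1 + x^2 + x^3 *)
PolynomialRemainder[Expand[x^11 (1 + x + x^3)], g, x, Modulus -> 2]   (* 1 + x^2 + x^3 as well *)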
In general, let b1(x) and b2(x) be two distinct burst errors having the same syndrome, i.e. the same remainder upon division by g(x). Then their difference
b1(x) − b2(x) = u(x)g(x)
is clearly a multiple of g(x) and is therefore a codeword. By Theorem 4.3, b1(x) and b2(x) must belong to the same coset, so at most one of them can be chosen as a coset (burst) leader.
The cyclic shifts of the two bursts behave in the same way: for any m,
x^m b1(x) − x^m b2(x) = x^m (u(x)g(x))
is again a multiple of g(x) when reduced modulo x^n − 1, so each pair of shifted bursts also lies in a common coset. For any shift, the difference between the two burst errors remains a multiple of g(x). Thus, whenever duplicates exist at all, we can expect to find at least p duplicate remainders for a code of length p.
In our case, we found 17 pairs of duplicates for a code of length 17: for every i = 0, 1, 2, . . . , 16, the cyclic shifts x^i(1 + x^2 + x^3) and x^{11+i}(1 + x + x^3) belong to the same coset. It is therefore no coincidence that exactly 17 pairs were found. Since not all burst errors of length 4 or less lie in distinct cosets, the code fails to correct all of these errors. In fact, the Reiger bound is a necessary but not a sufficient condition, and the counterexample found here confirms this. As the length of a binary quadratic residue code increases, the number of cosets grows faster than the number of burst error patterns, so the likelihood that all such error patterns fall in distinct cosets increases. However, this still does not guarantee that the error patterns lie in distinct cosets as the code length grows; each code has to be checked individually by inspecting the remainders.
8 Discussion
Although cyclic codes are generally powerful for correcting burst errors, one should note that the burst-error-correcting capability varies between codes. If one is looking for optimal burst-error-correcting codes, other codes, such as Reed–Solomon codes, might be more interesting.
Reed–Solomon codes are powerful for burst error correction since they operate on alphabets larger than the binary one [8]. Consider a Reed–Solomon code over F_{2^m}, where each symbol of the alphabet is represented by m bits. It makes no difference how many of those m bits are in error; they are regarded as a single symbol error. Since burst errors are concentrated in consecutive positions, several bit errors are likely to fall within a single symbol. This property makes such codes powerful for burst error correction.
We have shown in the previous chapter that the Reiger bound is only a necessary condition. The bound indicates the optimal burst length that a code could correct. Conversely, it does not mean that no errors of length exceeding the bound can be corrected: burst errors whose syndromes are unique can still be corrected, even though not all bursts of that length lie in distinct cosets.
In this thesis, we have only discussed binary codes. One should note that cyclic burst errors would have to be redefined for non-binary codes. Take a ternary code as an example: with the first and the last position being nonzero, there are more options to consider than in the binary case. For instance, a burst of length 3 can be constructed in the following ways:
101, 111, 121,
102, 112, 122,
201, 211, 221,
202, 212, 222.
Thus, a different formula is needed for the number of cosets and burst errors. Consider an [n, k]-code C over Fq and burst errors of length l. Since the first and last positions of a burst must be nonzero, there are (q − 1) · (q − 1) possibilities for these two positions, and each of the l − 2 positions lying in between can take any of q values. With n cyclic shifts, the number of burst errors of length l is
n · q^{l−2} (q − 1)^2,
and the number of cosets is q^{n−k}. For instance, with q = 3 and l = 3 this gives n · 3 · 4 = 12n, in agreement with the twelve patterns listed above.
Lastly, we would like to remind the reader that there are other methods for decoding QR codes besides the error trapping method discussed here. These codes have been studied intensively by algebraic coding theorists aiming to find more effective decoding methods, and the decoding of long binary QR codes remains an interesting and active research area. This part is left for the reader to explore.
References
[1] Elwyn Ralph Berlekamp. Key Papers in the Development of Coding Theory. IEEE Press Selected Reprint Series. Institute of Electrical and Electronics Engineers, 1974.
[2] Marcel J. E. Golay. Notes on digit coding. Proceedings of the IRE, 37(6):657,
1949.
[3] Richard Hamming. Error Detecting and Error Correcting Codes. Bell System
Technical Journal, 29(2):147–160, 1950.
[4] D.R. Hankerson, D.G. Hoffman, D.A. Leonard, C.C. Lindner, K.T. Phelps,
C.A. Rodger, and J.R. Wall. Coding Theory and Cryptography: The Essentials,
Second Edition. A Series of Monographs and Textbooks in Pure and applied
mathematics. Marcel Dekker, 2000.
[5] Raymond Hill. A First Course in Coding Theory. Oxford applied mathematics
and computing science series. Oxford University Press, 1986.
[6] San Ling and Chaoping Xing. Coding Theory. A First Course. Cambridge
University Press, 2010.
[7] Vera Pless. Introduction to the Theory of Error-Correcting Codes, Second Edition. Wiley-Interscience Series in Discrete Mathematics. John Wiley & Sons, 1989.
[8] Irving S. Reed and Gustave Solomon. Polynomial Codes over Certain Finite
Fields. Journal of the Society for Industrial and Applied Mathematics, 8(2):300–
304, 1960.
[9] Steven Roman. Coding and Information Theory. Graduate Texts in Mathematics. Springer, 1992.
[10] Claude E. Shannon. A Mathematical Theory of Communication. Bell System Technical Journal, 27(3):379–423, 1948.
[11] Lekh R. Vermani. Elements of Algebraic Coding Theory. Chapman & Hall Mathematics Series. Chapman & Hall, 1996.
A Appendix 1
In[1]:= PrimitiveRootList[17]
Out[1]= {3, 5, 6, 7, 10, 11, 12, 14}
Find nonzero quadratic nonresidues modulo 17, i.e., compute 6^{2i-1} mod 17
Construct the generating idempotent for NQR, using 1 + e(x) since p = 17 = 8m + 1
In[4]:= e[x_] := 1 + x ^ 6 + x ^ 12 + x ^ 7 + x ^ 14 + x ^ 11 + x ^ 5 + x ^ 10 + x ^ 3
PolynomialRemainder[e[x] * e[x], x ^ 17 - 1, x, Modulus → 2]
Use error trapping decoding to find remainders for bursts of length 1,2,3 and 4, respectively
In[15]:= Position[p4, 1 + x ^ 2 + x ^ 3]
Out[15]=
{{2}, {47}}
In[17]:= p2 =
{1, x, x^2, x^3, x^4, x^5, x^6, x^7, 1 + x^3 + x^4 + x^5, x + x^4 + x^5 + x^6, x^2 + x^5 + x^6 + x^7, 1 + x^4 + x^5 + x^6 + x^7, 1 + x + x^3 + x^4 + x^6 + x^7, 1 + x + x^2 + x^3 + x^7, 1 + x + x^2 + x^5, x + x^2 + x^3 + x^6, x^2 + x^3 + x^4 + x^7}
In[21]:= Length[res]
Out[21]=
17