
Bachelor Degree Project

Burst-error-correcting
capability of Quadratic residue
codes

Author: Xiaoqi Cheng


Supervisor: Per-Anders Svensson
Examiner: Marcus Nilsson
Semester: Spring 2024
Subject: Mathematics
Abstract
A binary quadratic residue (QR) code of prime length p, where p = 8m ± 1, is a cyclic code over F_2. Such a code has generator polynomial

    g_Q(x) := ∏_{r∈Q_p} (x − α^r),

where α is a primitive p-th root of unity (in a suitable extension field of F_2) and Q_p is the set of all quadratic residues modulo p. A much easier way to construct QR codes is by finding an idempotent e(x), i.e., a polynomial with e^2(x) ≡ e(x), instead of computing roots of unity. From the idempotent, we can find a generator polynomial g(x) to be used in the decoding process.
We are interested in investigating the capability of binary QR codes to correct burst errors. Burst errors are errors that occur within a short interval and depend on one another. The decoding method used to decode QR codes is called error trapping. As the name indicates, we try to trap the error pattern within the first n − k digits of a cyclic [n, k]-code by making cyclic shifts of a received word w.
The Reiger bound for QR codes has been studied. The bound gives the optimal length of burst errors that can be corrected in theory. In this thesis, we test QR codes of different lengths in the program Mathematica and check whether the Reiger bound is met.
Keywords: Binary quadratic residue code, error trapping decoding, Reiger bound, burst error
Acknowledgements
First, I want to thank my supervisor, Per-Anders Svensson, for his guidance and
patience throughout this project, as well as my examiner, Marcus Nilsson, for his
advice during the thesis process. I am also deeply grateful to all the teachers I
have encountered, whose guidance and support have been invaluable throughout
this academic journey.
I would also like to thank my classmates for all the help they have given me over
the years. I hope for great success for everyone in their academic and professional
paths.
Lastly, heartfelt thanks to a special friend who shared late nights studying with
me during the final stages of the thesis process. If you are reading this, I would like
to extend my best wishes to you:

May the road rise up to meet you.


May the wind be always at your back.
May the sun shine warm upon your face,
and the rains fall soft upon your fields.
Contents
1 Introduction
2 Coding theory
3 Method
4 Linear codes
   4.1 Definitions
   4.2 Coset decoding
5 Cyclic codes
   5.1 Definitions
   5.2 Burst errors
   5.3 Reiger bound
   5.4 Error trapping method
       5.4.1 Decoding algorithm for random-error-correcting
       5.4.2 Decoding algorithm for burst-error-correcting
6 Quadratic-residue codes
   6.1 Quadratic residue
   6.2 Binary quadratic residue codes
   6.3 Construction
   6.4 Idempotent
   6.5 Decoding of QR codes
7 Results
8 Discussion
References
A Appendix 1
1 Introduction
Coding theory is a branch of mathematics with a relatively short history. The birth of coding theory can be traced back to 1948, when Claude Shannon published his paper A mathematical theory of communication [10]. His work introduced the concepts of channel capacity and the noisy channel coding theorem. The theorem states that, given a noisy channel with a certain channel capacity and an information transmission rate, if the rate is below the capacity, then it is theoretically possible to transmit information nearly without error. He also introduced the concept of redundancy, where extra check bits are added to make it possible for a code to correct errors. One year later, binary Golay codes were developed, named after Marcel J. E. Golay. Golay's original paper was barely half a page long [2], but it has been called the "best single published page" in coding theory [1]. The binary Golay code, with length 23 and dimension 12, is a 3-bit-error-correcting and 5-burst-error-correcting code. Its error-correcting ability is impressive considering the short length of the code, and it is also a perfect code. Shortly after, in 1950, Richard Hamming introduced an error-correcting code that later became known as the Hamming code [3]. Hamming codes are single-error-correcting codes. In his fundamental paper on Hamming codes, he also defined the Hamming distance. These breakthroughs opened a new field. Coding theorists have since devoted themselves to finding efficient schemes for encoding and decoding over noisy channels. Coding theory has a wide range of applications, from deep space transmission to wireless communications.
The physical medium used to transmit information is called a channel; telephone lines are one example. Noise is an undesirable disturbance that causes the received information to differ from the original. The main task in coding theory is to detect and correct errors that occur during transmission over a noisy channel. All messages have to be encoded into codewords before transmission. Over a noisy channel, it is codewords that are transmitted and received. If a received word is not a codeword, then we know that errors must have occurred. A well-designed error-correcting code can increase the reliability of sending and receiving information. One important type of error-detecting and error-correcting codes is the class of cyclic codes. Such codes are very efficient at correcting random and burst errors. Binary quadratic residue codes (QR codes¹) are one of several important classes of cyclic codes. QR codes were first introduced by Andrew Gleason [7], who mentioned many important properties of such codes in a brief letter. Since then, the codes have been extensively studied for many years, and many algebraic algorithms have been developed for decoding quadratic residue codes. The binary [7,4,3]-Hamming code and the binary [23,12,7]-Golay code are examples of QR codes. In this thesis, we will use the error trapping method to decode binary QR codes.
The paper is organized as follows. Chapter 2 gives fundamental definitions within coding theory, covering concepts such as minimum distance, length and dimension. Chapter 3 provides a brief overview of our approach to the topic. In Chapter 4, we concentrate on linear codes, their vector space structure and coset decoding. In Chapter 5, we focus on cyclic codes, where additional algebraic structure besides linearity is introduced. This chapter also covers the error trapping method for decoding both random and burst errors, along with the Reiger bound. In Chapter 6, we include definitions related to quadratic residues, alongside the construction of quadratic residue codes using idempotents. Chapter 7 summarizes the findings and results of the project. Chapter 8 presents a short discussion.

¹ Not to be confused with the QR Code (quick-response code), which can be scanned by an imaging device, such as a camera.

2 Coding theory
Coding theory plays a critical role in ensuring the reliability and efficiency of information transmission over noisy channels. In particular, in communication systems where reliable data transmission is essential, such as telecommunications and satellite communication, coding theory ensures that errors that occur can be detected and corrected, giving a more robust transmission.

Definition 2.1 (Code alphabet; Code symbols; Code; Codeword). [6] Let A = F_2 = {0, 1} be a finite set of size 2. We call A a code alphabet and its elements code symbols. Then

(i) a binary word of length n over A is a sequence w = w_1 w_2 . . . w_n with w_i ∈ A for all i; we can also think of w as a vector (w_1, w_2, . . . , w_n);

(ii) a nonempty set C of words of length n is called a code, and its elements are called codewords of C;

(iii) the number of codewords in C is called the size of C;

(iv) a code of length n and size M is called an (n, M)-code.

Remark. In general, we can take the code alphabet to be a finite field F_q of order q.

Example 2.1. A code over the code alphabet F_2 = {0, 1} is a binary code, and its code symbols are 0 and 1. The set

    C = {000, 011, 101, 111}

of binary words of length 3 is a binary (3, 4)-code.

Suppose that we have encoded each spectral color, including white, as a word of length 3, as presented in Table 2.1. When the message 'Red', encoded as 001, is transmitted, the receiver is unable to tell whether the message was corrupted or not. To solve this problem, we can add some form of redundancy so that errors can be detected or corrected. Firstly, let us add an extra digit so that each encoded word contains an even number of 1s. Suppose that only one error occurred during the transmission. If the receiver gets 1011, we can see directly that an error has occurred, since the received word is not among our codewords. However, we do not know whether 1011 comes from 0011, 1001, 1010 or 1111. If we introduce more redundancy, error correction becomes possible. Now, let us encode

    001 ↦ 001001001,

meaning that we repeat the original message 3 times. In this case, we can be sure that a received word 101001001 with a single bit error comes from 001001001, because its Hamming distance to every other codeword is larger.

Message   Encoding   Parity-check encoding   Repetition encoding
Red       001        0011                    001001001
Orange    010        0101                    010010010
Yellow    011        0110                    011011011
Green     100        1001                    100100100
Cyan      101        1010                    101101101
Blue      110        1100                    110110110
Violet    111        1111                    111111111
White     000        0000                    000000000

Table 2.1: Channel encoding schemes

Remark. The above case shows that error correction comes at the cost of reduced transmission speed: a 9-bit message is transmitted instead of 3 bits in order to detect and correct a single error. The redundancy must be chosen with some care to maximize the detection or correction capability.
Definition 2.2 (Hamming distance). [6][9] Let x = x1 x2 . . . xn and y = y1 y2 . . . yn
be two words of length n over an alphabet A. The Hamming distance from x to y ,
denoted by d(x,y ), is defined to be the number of positions in which x and y differ.
Example 2.2. Let A = {0, 1}, and let x = 001001001 and y = 101101101 be two
words of length 9. Then d(x,y ) = 3 since there are three positions in which x and
y differ, i.e., the first, fourth and seventh position.
Definition 2.3 (Nearest neighbour decoding). [6] Let C be a binary code and suppose that a word x is received. We compute the Hamming distance d(c, x) for every codeword c ∈ C. If d(c, x) is minimal among all codewords in C and such a codeword c is unique, then we correct x to c. Otherwise, no correction can be made. This method is called nearest neighbour decoding.
Definition 2.4 (Minimum distance). [6] Let C be any code containing at least two words. Then the minimum distance of C is the smallest possible Hamming distance between any two different codewords, denoted by
    d(C) = min{d(x, y) : x, y ∈ C, x ≠ y}.
A binary code can thus be called an (n, M, d)-code, with length n, size M and minimum distance d.
Example 2.3. Let C = {000000000, 001001001, 101101101} be a binary code. Then d(C) = 3 since
    d(000000000, 001001001) = 3,
    d(000000000, 101101101) = 6,
    d(001001001, 101101101) = 3.
It is a binary (9, 3, 3)-code.
Assuming that x = 001000001 is received, then
    d(000000000, 001000001) = 2,
    d(001001001, 001000001) = 1,
    d(101101101, 001000001) = 4.
By nearest neighbour decoding, we decode x to 001001001.
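Nearest neighbour decoding is easy to try out in Mathematica (the program used later in this thesis). The following minimal sketch uses the codewords and the received word of Example 2.3; the helper name hammingDistance is ours and not taken from the thesis's appendix code.

    (* Hamming distance between two equal-length 0/1 lists *)
    hammingDistance[u_List, v_List] := Count[u - v, Except[0]]

    code = {{0,0,0,0,0,0,0,0,0}, {0,0,1,0,0,1,0,0,1}, {1,0,1,1,0,1,1,0,1}};
    x    = {0,0,1,0,0,0,0,0,1};

    hammingDistance[x, #] & /@ code                   (* distances {2, 1, 4} *)
    First[MinimalBy[code, hammingDistance[x, #] &]]   (* the nearest codeword 001001001 *)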

Theorem 2.1. A code with minimum distance d is exactly a ⌊(d − 1)/2⌋-error-correcting code.

The above theorem shows that the distance of a code determines its error-correcting capability: a larger distance d allows more errors to be corrected.

Definition 2.5 (Hamming weight). [6] Let x be a word in F_2^n. The Hamming weight of x is the number of nonzero coordinates in x, denoted by
    wt(x) = d(x, 0),
where 0 is the zero word.

Definition 2.6 (Minimum weight). [6] Let C be a code, then the minimum weight
of C is the smallest of the weights of the nonzero codewords of C, denoted as wt(C).

Example 2.4. Let C be the binary code defined from the previous example, then

wt(000000000) = 0,
wt(001001001) = 3,
wt(101101101) = 6.

Hence, wt(C) = 3.

3 Method
This bachelor thesis investigates burst-error-correcting codes. For simplicity, we limit our investigation to binary codes. Cyclic codes, as a subclass of linear codes, are very efficient at correcting burst errors due to their rich algebraic structure. There are several important classes of cyclic codes. In this work, we focus on one of them, the binary quadratic residue codes, and their capability of correcting burst errors. The decoding algorithm we have chosen is the error trapping method. This method allows us to compute syndromes cyclically, making it suitable for correcting both random and burst errors. Since the syndrome of a received word is determined by a remainder, the error patterns that are bursts of a certain length can be obtained. A linear code is able to correct all burst errors of a certain length if and only if all such burst errors lie in distinct cosets. Therefore, by checking whether there exist duplicate remainders, we can decide whether a code can correct all burst errors of the length indicated by the Reiger bound. That is, the burst-error-correcting capability is checked against the Reiger bound. The program Mathematica has been used to assist us in this project.

4 Linear codes
In Chapter 2, we defined a binary code of length n as a nonempty subset of F_2^n. Since it is just a set of vectors, we might need to list all the codewords to specify a code, which is inconvenient and inefficient for encoding and decoding. However, if we add algebraic structure, so that codes become vector spaces, then each codeword of a given code is a linear combination of the codewords in a basis. The distance of such a code is equal to the minimum weight of its nonzero codewords [4]. Such codes are known as linear codes.

4.1 Definitions
Definition 4.1 (Vector spaces over finite fields). [6] Let F_2 be the finite field of order 2. A nonempty set V, together with vector addition and scalar multiplication, is a vector space over F_2 if for all u, v, w ∈ V and λ, µ ∈ F_2, it satisfies all of the following conditions:

(i) u + v ∈ V;

(ii) (u + v) + w = u + (v + w);

(iii) there is an element 0 ∈ V with 0 + v = v = v + 0 for all v ∈ V;

(iv) for each u ∈ V, there exists an element −u such that u + (−u) = 0 = (−u) + u;

(v) u + v = v + u;

(vi) λv ∈ V;

(vii) λ(u + v) = λu + λv and (λ + µ)u = λu + µu;

(viii) (λµ)u = λ(µu);

(ix) 1u = u, where 1 is the multiplicative identity of F_2.

Definition 4.2 (Linear code). [6][11][4] A linear code C of length n over F_2 is a subspace of the vector space F_2^n. If C has length n and dimension k over F_2, then we say that C is a binary [n, k]-code, and if C in addition has a known minimum distance d, then we say that C is an [n, k, d]-code.

Lemma 4.1. If C is a linear code, then the zero word 0 is a codeword.

Proof. Let x ∈ C be any codeword. Since C is a linear code, x + x ∈ C must be a codeword. But for a binary code, x + x = 0, whence 0 ∈ C.

The above lemma shows that a linear code must contain the zero word. For simplicity, we can say that a binary code is linear if and only if the sum of any two codewords is again a codeword. Since a linear code is a vector space, it is natural to arrange a set of basis vectors as the rows of a matrix.

Example 4.1. The code C_1 = {0000, 0101, 0110} is not linear since 0101 + 0110 = 0011 ∉ C_1, while C_2 = {000, 001, 010, 011} is linear.

Theorem 4.1. [6] Let C be a linear code over F_2. Then d(C) = wt(C).

Proof. For any words x, y we have d(x, y) = wt(x − y). Since there exist x′, y′ ∈ C such that d(x′, y′) = d(C), we have

    d(C) = d(x′, y′) = wt(x′ − y′) ≥ wt(C),

as x′ − y′ ∈ C. Conversely, there is a z ∈ C \ {0} such that wt(C) = wt(z), so

    wt(C) = wt(z) = d(z, 0) ≥ d(C),

since 0 is a codeword in any linear code.

Definition 4.3 (Generator matrix). [6][9] Let C be an [n, k]-code. A k × n matrix G whose rows form a basis of C is called a generator matrix for C. The codewords in C are exactly the linear combinations of the rows of G, that is,

    C = {xG | x ∈ F_2^k}.

Example 4.2. Consider the generator matrix

    G = (1 1 0 0)
        (0 1 1 1)
        (1 0 1 0).

Then for each x = (x_1, x_2, x_3) ∈ F_2^3, we get the codeword

    xG = (x_1 + x_3, x_1 + x_2, x_2 + x_3, x_2).
A generator matrix for a linear code can be obtained by finding a basis. Since C = ⟨C⟩, we can use the following algorithm to produce a basis for C (Example 4.3 below works this out by hand, and a Mathematica check follows it).
1. Form a matrix whose rows are the words in a nonempty subset S of F_2^n.
2. Use elementary row operations modulo 2 to find a row echelon form (REF) of the matrix.
The nonzero rows of the REF then form a basis for the linear code C = ⟨S⟩ generated by S.
Theorem 4.2. [4] A matrix G is a generator matrix for some linear code C if and
only if the rows of G are linearly independent.
Example 4.3. Consider the code C = {0000, 1110, 0111, 1001}. Let us find a generator matrix for C using the algorithm above. First, we form a matrix whose rows are the given words, and then perform row operations modulo 2 as follows:

    (0 0 0 0)       (1 1 1 0)
    (1 1 1 0)   →   (0 1 1 1)   (move the zero row to the bottom)
    (0 1 1 1)       (1 0 0 1)
    (1 0 0 1)       (0 0 0 0)

                    (1 1 1 0)
                →   (0 1 1 1)   (add row 1 to row 3)
                    (0 1 1 1)
                    (0 0 0 0)

                    (1 1 1 0)
                →   (0 1 1 1)   (add row 2 to row 3)
                    (0 0 0 0)
                    (0 0 0 0)

Hence

    G = (1 1 1 0)
        (0 1 1 1)

is a generator matrix, whose rows, the nonzero rows of the REF, are linearly independent.
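In Mathematica the same computation can be sketched with the built-in RowReduce over F_2. Note that RowReduce returns the reduced row echelon form, so its nonzero rows may differ from the REF found by hand, but they span the same code (here 1001 + 0111 = 1110).

    (* row-reduce the word list from Example 4.3 over F2 *)
    words = {{0, 0, 0, 0}, {1, 1, 1, 0}, {0, 1, 1, 1}, {1, 0, 0, 1}};
    rref  = RowReduce[words, Modulus -> 2]
    (* expected: {{1,0,0,1},{0,1,1,1},{0,0,0,0},{0,0,0,0}} *)
    basis = DeleteCases[rref, {0 ..}]
    (* {{1,0,0,1},{0,1,1,1}} -- another generator matrix for C *)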

Definition 4.4 (Parity-check matrix). Let G be a k × n generator matrix for a linear code C. An (n − k) × n matrix H is called a parity-check matrix for C if

    GH^T ≡ O (mod 2),

where O is the k × (n − k) zero matrix.

Remark. Given a parity-check matrix H, a matrix G is a generator matrix for C if and only if the rows of G are linearly independent and GH^T = O.

Example 4.4. We continue with the previous example, where the 2 × 4 generator matrix is

    G = (1 1 1 0)
        (0 1 1 1).

By trial and error, we find a 2 × 4 matrix

    H = (0 1 1 0)
        (1 0 1 1)

such that GH^T is the 2 × 2 zero matrix modulo 2, showing that H is a parity-check matrix.

Remark. We have that c is a codeword in C if and only if cH^T = 0, namely, c must be orthogonal to every row of H.

4.2 Coset decoding


Definition 4.5 (Coset; Coset leader). [6] Let C be a linear code of length n over F_2, and let u ∈ F_2^n be any vector of length n. The coset of C determined by u is the set

    C + u = {v + u : v ∈ C} = u + C.

A word of least Hamming weight in a coset is called a coset leader.

Theorem 4.3. [6] Let C be an [n, k]-linear code over the finite field F_2. Then for all u, v ∈ F_2^n,

(i) every vector of F_2^n belongs to at least one coset of C;

(ii) |C + u| = |C| = 2^k;

(iii) if u ∈ C + v, then C + u = C + v;

(iv) two cosets are either identical or disjoint;

(v) the number of distinct cosets is 2^{n−k};

(vi) C + u = C + v if and only if u − v ∈ C, i.e. u and v are in the same coset.

Assume that a codeword v of a linear code C is transmitted and a word w is received. We write w = v + e, where e is called the error pattern. If e = 0, then no error has occurred. Since e = w − v ∈ w + C, we have w − e = v ∈ C. By the previous theorem, it follows that the cosets C + w and C + e must be identical; thus the error pattern e and the received word w are in the same coset. By choosing an error pattern e of least weight in the coset C + w, we obtain the codeword v = w − e, which is a nearest neighbour of w.

Example 4.5. Consider the binary linear [4, 2]-code C = {0000, 1011, 0101, 1110} and let us use it to decode. First, we list the cosets of the code. We start with 0, writing down the codewords of C as the first row. Then, choosing any vector u of minimum weight not in the first row, we compute the coset C + u and list the result as the second row. We repeat the process, taking a vector of minimum weight that does not appear in the previous rows, and compute the coset for the third row, continuing in this way until all cosets are listed. Since the code C has dimension k = 2, there exist 2^{n−k} = 2^{4−2} = 4 cosets with 2^k = 2^2 = 4 words in each coset. We list all the cosets, with a coset leader first in each row, as follows:

    C + 0000 : 0000 1011 0101 1110
    C + 0001 : 0001 1010 0100 1111
    C + 0010 : 0010 1001 0111 1100
    C + 1000 : 1000 0011 1101 0110

We observe that 0001 + 1010 = 1011 is a codeword in C, so 0001 and 1010 are in the same coset C + 0001. However, 0010 + 1000 = 1010 does not belong to C, so 0010 and 1000 lie in different cosets. Suppose that w = 1101 is received. We find that w lies in the fourth coset, whose unique coset leader is 1000, which we choose as the error pattern. Hence, 1101 − 1000 = 1101 + 1000 = 0101 was the most likely codeword transmitted. We also notice that the coset C + 0001 contains two words of minimum weight, 0001 and 0100, either of which could be chosen as coset leader. In practice, we arbitrarily choose one of them; the decoding in this case is not unique. As one might notice, the above decoding scheme becomes slow when the code length n is large. The hardest parts are finding the coset containing the received word and then obtaining a word of least weight in that coset [4]. Fortunately, we can use syndromes to speed up the process.

Definition 4.6 (Syndrome). [6] Let C be an [n, k]-linear code over F_2 and let H be a parity-check matrix for C. For any w ∈ F_2^n, the syndrome of w is the word S(w) = wH^T ∈ F_2^{n−k}.

Remark. A unique coset leader corresponds to an error pattern that can be corrected. All members of the same coset have the same syndrome. Furthermore, all error patterns e with wt(e) ≤ ⌊(d − 1)/2⌋ can be taken as coset leaders, and the syndrome of each of them can be computed.

Example 4.6. Let C be the code of Example 4.5 above, with parity-check matrix

    H = (1 0 1 0)
        (1 1 0 1).

If w = 1101 is received, then the syndrome of w is

    wH^T = (1 1 0 1) H^T = (1 1).

Since the word of least weight in the coset C + w is u = 1000, we compute the syndrome of the coset leader u as

    uH^T = (1 0 0 0) H^T = (1 1) = wH^T.

We conclude that v = w + u = 1101 + 1000 = 0101 was the most likely codeword transmitted, i.e. the first bit was in error.
Definition 4.7 (Syndrome look-up table). A table that matches each coset leader with its syndrome is called a syndrome look-up table.
Example 4.7. To construct a syndrome look-up table for decoding, we first list all the cosets of the code and choose from each coset a word of least weight as coset leader u. Then we find a parity-check matrix H for the code and, for each coset leader u, compute its syndrome uH^T.
Assume a binary linear code of length 6 with parity-check matrix

    H = (1 0 1 1 0 0)
        (1 1 1 0 1 0)
        (0 1 1 0 0 1).

Since ⌊(d − 1)/2⌋ = 1, all error patterns of weight 0 or 1 can be chosen as coset leaders. After computing uH^T for each of them, a syndrome look-up table is constructed as follows.

    Coset leader   Syndrome
    000000         000
    100000         110
    010000         011
    001000         111
    000100         100
    000010         010
    000001         001
    000101         101

    Table 4.2: A syndrome look-up table

Suppose that w = 110111 is received. Then wH^T = 010, which is the sixth row of the syndrome look-up table, with coset leader u = 000010. We conclude that v = w + u = 110111 + 000010 = 110101 was the most likely codeword transmitted.
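Syndrome decoding for this example can be sketched in a few lines of Mathematica. The matrix and received word are taken from Example 4.7; for brevity only the weight-0 and weight-1 coset leaders are tabulated (the weight-2 leader 000101 from Table 4.2 could be appended in the same way).

    (* parity-check matrix and received word from Example 4.7 *)
    H = {{1, 0, 1, 1, 0, 0}, {1, 1, 1, 0, 1, 0}, {0, 1, 1, 0, 0, 1}};
    w = {1, 1, 0, 1, 1, 1};

    syndrome[v_] := Mod[v . Transpose[H], 2];

    (* coset leaders of weight 0 and 1, and their syndromes (mirrors Table 4.2) *)
    leaders = Join[{ConstantArray[0, 6]}, IdentityMatrix[6]];
    syndrome /@ leaders

    (* decode: pick the leader whose syndrome matches that of w, then subtract it *)
    u = First[Select[leaders, syndrome[#] == syndrome[w] &]];
    Mod[w + u, 2]   (* {1,1,0,1,0,1}, the most likely transmitted codeword *)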

5 Cyclic codes
Cyclic codes were first introduced by E. Prange in 1957 [6]. These codes are easy to implement on a computer using so-called shift registers. Algebraic coding theorists have since discovered many interesting algebraic structures and properties. For example, the set of codewords of a cyclic code forms an ideal in a certain ring over a finite field, defined in Theorem 5.1. This property allows cyclic codes to effectively correct not only random errors but also burst errors. Since only binary codes are discussed here, polynomial arithmetic is carried out modulo 2, so that

    (a + b)^2 = a^2 + 2ab + b^2 ≡ a^2 + b^2 (mod 2).

5.1 Definitions
Let us first recall some important concepts in the study of cyclic codes.
Definition 5.1 (Ideal). [6] Let R be a commutative ring with unity. A nonempty subset I of R is called an ideal if
(i) a + b belongs to I for all a, b ∈ I;
(ii) ra ∈ I for all r ∈ R and a ∈ I.
Definition 5.2 (Principal ideal). [6] Let R be a commutative ring with unity. An ideal I of R is called a principal ideal if there exists an element g ∈ I such that I = ⟨g⟩, where
    ⟨g⟩ := {gr : r ∈ R}.
Definition 5.3 (Cyclic; Cyclic code). [6] If (a_{n−1}, a_0, a_1, . . . , a_{n−2}) ∈ S whenever (a_0, a_1, . . . , a_{n−1}) ∈ S, where S is a subset of F_2^n, then S is called a cyclic set. A linear code C is called a cyclic code if C is a cyclic set.
Example 5.1. Let us check the following codes.
1. The code C_1 = {00000, 10011, 01001, 00110, 11011, 10101, 01111, 11100} is not a cyclic code, since 11100 ∈ C_1 but 01110 ∉ C_1.
2. The code C_2 = {0000, 0111, 1011, 1101, 1110} is not a cyclic code since it is not a linear code.
3. The code C_3 = {000, 101, 011, 110} is a cyclic code: every cyclic right shift of a codeword is again a codeword.
If C is a cyclic code, we can express each codeword c = c_0 c_1 . . . c_{n−1} as a polynomial in F_2[x] via the correspondence

    π : F_2^n → F_2[x]/(x^n − 1), (c_0, c_1, . . . , c_{n−1}) ↦ c_0 + c_1 x + · · · + c_{n−1} x^{n−1}.

The map π is an F_2-linear transformation of vector spaces over F_2, and the following theorem holds.
Theorem 5.1. [6] A nonempty subset C of F_2^n is a cyclic code if and only if π(C) is an ideal of F_2[x]/(x^n − 1).
Example 5.2. Given the cyclic code C = {000, 110, 101, 011}, we have π(C) = {0, 1 + x, 1 + x^2, x + x^2}.
A cyclic code of length n can thus be regarded as an ideal of the ring R_n = F_2[x]/(x^n − 1). Since

    x^n ≡ 1 (mod x^n − 1),

we can replace x^n by 1, x^{n+1} by x, and so on.
Example 5.3. Consider the ring R_n with n = 5, and let us compute the product (x^3 + x + 1)(x^4 + 1) in R_5. Instead of using long division, we can apply x^5 ≡ 1 (mod x^5 − 1):

    (x^3 + x + 1)(x^4 + 1) = x^7 + x^5 + x^4 + x^3 + x + 1
                           ≡ x^2 + 1 + x^4 + x^3 + x + 1
                           ≡ x^4 + x^3 + x^2 + x (mod 2, x^5 − 1).
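The reduction can be checked in Mathematica with PolynomialMod, which here reduces simultaneously modulo x^5 − 1 and modulo 2.

    (* reduce the product from Example 5.3 modulo x^5 - 1 and modulo 2 *)
    PolynomialMod[(x^3 + x + 1) (x^4 + 1), {x^5 - 1, 2}]
    (* x + x^2 + x^3 + x^4 *)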

Now, consider the polynomial of a codeword

    c(x) = c_0 + c_1 x + · · · + c_{n−1} x^{n−1}

in R_n. When we multiply c(x) by x, we obtain

    x · c(x) = c_0 x + c_1 x^2 + · · · + c_{n−1} x^n
             = c_{n−1} + c_0 x + · · · + c_{n−2} x^{n−1}.

This shows that multiplying by x is equivalent to performing a single cyclic shift. Similarly, multiplying by x^m corresponds to m cyclic shifts [5].
Definition 5.4 (Reducible and irreducible). [6] A polynomial f(x) is said to be reducible over a field if there exist two polynomials g(x) and h(x) over the field such that f(x) = g(x)h(x), where deg(g(x)) < deg(f(x)) and deg(h(x)) < deg(f(x)). Otherwise, f(x) is said to be irreducible.
Definition 5.5 (Generator polynomial). [5][6] The unique polynomial of the least
degree of a nonzero ideal I of Fq [x]/(xn − 1) is called the generator polynomial
of I. In a nonzero cyclic code C, the polynomial g(x) of least degree is the
generator polynomial of C.
Example 5.4. Let us find all binary cyclic codes of length 3. We factorize the polynomial x^3 − 1 over F_2 as

    x^3 − 1 = (x + 1)(x^2 + x + 1).

Therefore, the divisors of x^3 − 1, i.e. the possible generator polynomials, are 1, 1 + x, 1 + x + x^2 and 1 + x^3. The ideals of

    R_3 = F_2[x]/(x^3 − 1) = {0, 1, x, 1 + x, x^2, 1 + x^2, x + x^2, 1 + x + x^2}

and the corresponding cyclic codes are listed in Table 5.3.

    Divisor        Ideal                                      Code
    1              ⟨1⟩ = R_3                                  {000, 100, 010, 110, 001, 101, 011, 111}
    1 + x          ⟨1 + x⟩ = {0, 1 + x, x + x^2, 1 + x^2}     {000, 110, 011, 101}
    1 + x + x^2    ⟨1 + x + x^2⟩ = {0, 1 + x + x^2}           {000, 111}
    1 + x^3        ⟨1 + x^3⟩ = {0}                            {000}

    Table 5.3: The binary cyclic codes of length 3 (Example 5.4)

Theorem 5.2. [9] Let C be an ideal in R_n, i.e. a cyclic code of length n, with generator polynomial g(x). Then g(x) divides x^n − 1.
Proof. Dividing x^n − 1 by g(x) gives

    x^n − 1 = q(x)g(x) + r(x),

where deg(r(x)) < deg(g(x)). In the ring R_n we have x^n − 1 ≡ 0, so r(x) ≡ −q(x)g(x) ∈ C. Since g(x) is a nonzero polynomial of least degree in C and deg(r(x)) < deg(g(x)), it follows that r(x) = 0. Hence, g(x) | x^n − 1.
Remark. A cyclic code can be generated by a polynomial that is not its generator polynomial [5]. Finding all cyclic codes of length n can be done by factorizing x^n − 1 into irreducible polynomials, since g(x) always divides x^n − 1, as shown in the theorem above.

5.2 Burst errors
In coding theory, we often assume that errors in transmission are independent of one
another, and are randomly distributed. Such errors are known as random errors.
Unfortunately, this assumption is not realistic. There are communication channels
where the errors happen in short intervals, namely in bursts. These errors are called
burst errors. Codes for correcting such errors are called burst-error-correcting codes.
Cyclic codes, due to their richer algebraic structures, are very efficient for correcting
burst errors compared to linear codes.
Definition 5.6 (Cyclic run). [6] Let w ∈ Fnq be a word of length n. A cyclic run of 0
of length l of w is a succession of l cyclically consecutive zeros among the digits of
w.
Definition 5.7 (Burst). [6] A burst of length l is a binary vector whose nonzero bits are confined to l consecutive positions, the first and the last of which are nonzero.
Example 5.5. 0000111010 is a burst of length 5.
Definition 5.8 (Cyclic burst). A binary vector v is said to be a cyclic burst of length l if l is the shortest burst length among all cyclic shifts of v.
Example 5.6. The vector 110010000000110 can be read as a burst of length 14, or, counting cyclically around the end of the word, as a burst of length 8, depending on which bit we start from. The shortest burst length is 8, so it is a cyclic burst of length 8.

5.3 Reiger bound


Theorem 5.3. [6] A cyclic code C is an l-burst-error-correcting code if and only if all the burst errors of length l or less lie in distinct cosets of C.
Proof. (⇒) Assume that C is an l-burst-error-correcting code. If there exist two distinct burst errors b_1 and b_2 of length l or less lying in the same coset of C, then the difference c = b_1 − b_2 is a codeword. So, if b_1 is received, it could be decoded both to 0 and to c. This contradicts the assumption that C is l-burst-error-correcting.
(⇐) Suppose that all burst errors of length l or less lie in distinct cosets. Then each such burst error is uniquely determined by its syndrome, and the error can thus be corrected through its syndrome.
Corollary 5.1. [6] Let C be an [n, k]-cyclic l-burst-error-correcting code. Then
(i) no nonzero burst of length 2l or less can be a codeword;
(ii) n − k ≥ 2l; this bound is called the Reiger bound.


Proof. (i) Suppose that there exists a nonzero codeword c that is a cyclic burst of length 2l or less. After a cyclic shift, we may assume that the burst is located in the first 2l bits of c, so that c is of the form (1, u, v, 1, 0), where u and v are two vectors of length l − 1 or less. The words w = (1, u, 0, 0, 0) and c + w = (0, 0, v, 1, 0) are then two distinct bursts of length at most l. Since (c + w) − w = c is a codeword, c + w and w lie in the same coset of C according to Theorem 4.3(vi). This contradicts the previous theorem.
(ii) Let u_1, u_2, . . . , u_{n−k+1} be the first n − k + 1 column vectors of a parity-check matrix of C. They lie in F_2^{n−k}, so there are more columns than the dimension of the space, and they must be linearly dependent. It follows that there exist c_1, c_2, . . . , c_{n−k+1} ∈ F_2, not all zero, such that ∑_{i=1}^{n−k+1} c_i u_i = 0. Since c is a codeword in C if and only if cH^T = 0, the word (c_1, c_2, . . . , c_{n−k+1}, 0, . . . , 0) is a nonzero codeword, and it is a cyclic burst of length at most n − k + 1. By (i), a nonzero burst that is a codeword in an l-burst-error-correcting cyclic code must have length greater than 2l, giving n − k + 1 > 2l, i.e. n − k ≥ 2l, as desired.

5.4 Error trapping method


In general, decoding of cyclic codes follows the same principles as for linear codes:
1. compute the syndrome;
2. find the error pattern corresponding to this syndrome, typically a coset leader;
3. correct the errors.
However, recall that cyclic codes have a richer algebraic structure, which can be used to improve the decoding process.
For a cyclic code, we can produce a parity-check matrix of the form [6]

    H = (I_{n−k} | A),

where A is some (n − k) × k matrix over F_q, and a corresponding generator matrix is given by

    G = (A^T | I_k).

Let w(x) be a received word, and let c(x) ∈ F_2[x]/(x^n − 1) represent a codeword. Then c(x) = g(x)u(x) for some u(x), and w(x) = q(x)g(x) + r(x) by the division algorithm. It follows that w(x) − r(x) = q(x)g(x), whence w(x) and r(x) belong to the same coset.
Theorem 5.4. Let H = (I_{n−k} | A) be a parity-check matrix for an [n, k]-cyclic code C over F_2, and let g(x) be its generator polynomial. Let w ∈ F_2^n be a received word and s = wH^T ∈ F_2^{n−k} its syndrome. Let w(x) and s(x) be the corresponding polynomials in F_2[x]. Then s(x) is the remainder of w(x) when divided by g(x).
Proof. Since A is of type (n − k) × k, the matrix A^T is of type k × (n − k) and its rows can be expressed as polynomials a_i(x) ∈ F_2[x] of degree at most n − k − 1. It follows that the rows of G correspond to the polynomials x^{n−k+i} − a_i(x), where i = 0, 1, . . . , k − 1. If g(x) is a generator polynomial for C, then there are polynomials q_i(x) ∈ F_2[x]/(x^n − 1) such that

    x^{n−k+i} − a_i(x) = q_i(x)g(x),     (1)

since every row of G is a codeword and all codewords are multiples of g(x). Since H = (I_{n−k} | A), the columns of H^T correspond, row by row, to the polynomials

    1, x, x^2, . . . , x^{n−k−1}, a_0(x), a_1(x), . . . , a_{k−1}(x).

Suppose that w(x) = w_0 + w_1 x + · · · + w_{n−1} x^{n−1} is the polynomial of the received word. Then the syndrome s = wH^T of w corresponds to the polynomial

    s(x) = w_0 + w_1 x + · · · + w_{n−k−1} x^{n−k−1} + w_{n−k} a_0(x) + · · · + w_{n−1} a_{k−1}(x)
         = ∑_{i=0}^{n−k−1} w_i x^i + ∑_{j=0}^{k−1} w_{n−k+j} (x^{n−k+j} − q_j(x)g(x))     (insertion of (1))
         = ∑_{i=0}^{n−1} w_i x^i − (∑_{j=0}^{k−1} w_{n−k+j} q_j(x)) g(x)
         ≡ w(x) (mod g(x)).

Hence, w(x) ≡ s(x) (mod g(x)). Since deg s(x) < n − k = deg g(x), the result follows.
For any [n, k]-cyclic code, a received word can be represented by a polynomial w(x) of degree at most n − 1, while the syndrome is represented by a polynomial s(x) of degree at most n − k − 1. If the error pattern is confined to the first n − k digits, then it equals the syndrome, and w(x) − s(x) is the nearest codeword of w(x) by coset decoding. However, if the error occurs among the last k digits of the word, this subtraction fails to find the nearest codeword, since only the first n − k digits are corrected. The error trapping method is one way to avoid this problem: we look for (trap) the error pattern within the first n − k digits by making cyclic shifts of the received word.

5.4.1 Decoding algorithm for random-error-correcting


Let C be a binary [n, k, d]-cyclic code with generator polynomial g(x), and let w(x) be a received word with error pattern e(x), where the weight of e(x) is at most b = ⌊(d − 1)/2⌋. We want to determine e(x). Assuming that e(x) has a cyclic run of 0 of length at least k, the decoding algorithm for cyclic codes is as follows [6] (a Mathematica sketch follows Example 5.7 below).
1: Compute the syndromes s_i(x) of x^i w(x) recursively for i = 0, 1, 2, . . . ;
2: find the smallest integer m such that the weight of the syndrome s_m(x) is at most b;
3: compute e(x) = x^{n−m} s_m(x) in F_2[x]/(x^n − 1), and decode w(x) to w(x) − e(x).
Example 5.7. Assume that we are using a memoryless channel, where errors occur independently and are randomly distributed, and let C be the binary [7, 4]-cyclic code with generator polynomial g(x) = 1 + x + x^3. Suppose that each received word contains at most one error, and that the words w_1(x) = 0101111 and w_2(x) = 0100011 are received. Let us use error trapping to correct them.
Since d(C) = 3, any word of weight 1 is a correctable error pattern, and such a pattern has a cyclic run of 0 of length 6 ≥ k = 4. Thus, error trapping can correct all these error patterns. Let w_1(x) = 0101111 = x + x^3 + x^4 + x^5 + x^6. By long division, we obtain

    w_1(x) = (x^3 + x^2 + 1)g(x) + x^2 + 1,

whence s_0(x) = x^2 + 1, which is not an error pattern. From s_0(x), we compute the next syndrome as

    s_1(x) = x s_0(x) − 1 · g(x) = 1,

which is an error pattern. Thus, the least m is m = 1. Thereby,

    e(x) = x^{7−1} s_1(x) = x^6 · 1 = x^6,

and we decode w_1(x) as

    w_1(x) − e(x) = (x + x^3 + x^4 + x^5 + x^6) − x^6 = x + x^3 + x^4 + x^5 = 0101110.

Similarly, let w_2(x) = 0100011 = x + x^5 + x^6. By long division, we obtain

    w_2(x) = (x^3 + x^2 + x)g(x) + 0,

whence s_0(x) = 0, indicating that no error has occurred.
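A minimal Mathematica sketch of the random-error trapping algorithm, applied to w_1(x) from Example 5.7 (this is an illustration written for this example, not the appendix code). By Theorem 5.4, the syndrome of x^i w(x) is simply its remainder modulo g(x), with coefficients reduced modulo 2.

    (* error trapping for a single random error on w1(x) from Example 5.7 *)
    n = 7; k = 4; b = 1;                       (* b = Floor[(d - 1)/2] with d = 3 *)
    g = 1 + x + x^3;
    w = x + x^3 + x^4 + x^5 + x^6;             (* received word 0101111 *)

    syndromes = Table[PolynomialRemainder[x^i w, g, x, Modulus -> 2], {i, 0, n - 1}];
    weight[s_] := Count[CoefficientList[s, x], 1];

    m = First[Select[Range[0, n - 1], weight[syndromes[[# + 1]]] <= b &]];    (* 1 *)
    e = PolynomialMod[x^(n - m) syndromes[[m + 1]], {x^n - 1, 2}]             (* x^6 *)
    PolynomialMod[w - e, {x^n - 1, 2}]          (* x + x^3 + x^4 + x^5, i.e. 0101110 *)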

5.4.2 Decoding algorithm for burst-error-correcting


Now suppose that C is an l-cyclic-burst-error-correcting code. A cyclic burst of length l has a cyclic run of 0 of length n − l, and by the Reiger bound

    k ≤ n − 2l ≤ n − l,

so such an error pattern has a cyclic run of 0 of length at least k. Since the error trapping method can correct error patterns with a cyclic run of 0 of length at least k, it is also capable of correcting burst errors. However, the weight requirement on the error pattern is replaced by a requirement on the burst length. The modified decoding algorithm is as follows [6]; a Mathematica sketch is given after Example 5.8.
Let C be a binary [n, k]-cyclic code with generator polynomial g(x), and let e(x) be an error pattern that is a cyclic burst of length at most l. Then e(x) can be determined as follows:
1: Compute the syndromes s_i(x) of x^i w(x) recursively for i = 0, 1, 2, . . . ;
2: find the smallest integer m such that the syndrome s_m(x) is a cyclic burst of length at most l;
3: compute e(x) = x^{n−m} s_m(x) in F_2[x]/(x^n − 1), and decode w(x) to w(x) − e(x).
Example 5.8. Assume that we are using a communication channel where burst errors can occur, and let C be the binary [15, 9]-cyclic code generated by g(x) = 1 + x^3 + x^4 + x^5 + x^6. Consider the received word

    w(x) = 101011101011100 = 1 + x^2 + x^4 + x^5 + x^6 + x^8 + x^10 + x^11 + x^12.

We assume that C is 3-cyclic-burst-error-correcting, i.e., all burst errors of length 3 or less can be corrected. By long division, we have

    w(x) = (x^6 + x^3 + x)g(x) + x^3 + x^2 + x + 1,

where s_0(x) = x^3 + x^2 + x + 1. Since s_0(x) is a burst of length 4, it is not an admissible error pattern, and we continue computing the syndromes s_i(x) of x^i w(x):

    s_1(x) = x s_0(x) − 0 · g(x) = x^4 + x^3 + x^2 + x, not an error pattern,
    s_2(x) = x s_1(x) − 0 · g(x) = x^5 + x^4 + x^3 + x^2, not an error pattern,
    s_3(x) = x s_2(x) − 1 · g(x) = 1, an error pattern,

and we have found the least m = 3. Thus, we get

    e(x) = x^{15−3} s_3(x) = x^12,

so the corrected codeword is

    c(x) = w(x) − e(x)
         = (1 + x^2 + x^4 + x^5 + x^6 + x^8 + x^10 + x^11 + x^12) − x^12
         = 1 + x^2 + x^4 + x^5 + x^6 + x^8 + x^10 + x^11
         = 101011101011000.
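The sketch below adapts the previous Mathematica snippet to the burst case of Example 5.8: the weight test in step 2 is replaced by a test on the burst length of the syndrome. Measuring the span of the nonzero syndrome coefficients suffices here, since a trapped error pattern is confined to the first n − k positions; the helper name burstLen is ours.

    (* burst-error trapping for Example 5.8 *)
    n = 15; l = 3;
    g = 1 + x^3 + x^4 + x^5 + x^6;
    w = 1 + x^2 + x^4 + x^5 + x^6 + x^8 + x^10 + x^11 + x^12;

    burstLen[0] = 0;                                       (* zero syndrome: no error *)
    burstLen[s_] := Max[#] - Min[#] + 1 &[Exponent[s, x, List]];

    syndromes = Table[PolynomialRemainder[x^i w, g, x, Modulus -> 2], {i, 0, n - 1}];
    m = First[Select[Range[0, n - 1], burstLen[syndromes[[# + 1]]] <= l &]];   (* 3 *)
    e = PolynomialMod[x^(n - m) syndromes[[m + 1]], {x^n - 1, 2}]              (* x^12 *)
    PolynomialMod[w - e, {x^n - 1, 2}]          (* the corrected codeword c(x) above *)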

6 Quadratic-residue codes
Quadratic residue codes, first studied by E. Prange [6], form a special class of cyclic codes. Such a code is obtained by choosing its generator polynomial in a particular way: the construction starts from a prime p and uses the quadratic residues modulo p to select the roots of the generator polynomial. Since quadratic residue codes share the algebraic structure and properties of cyclic codes, they also retain the capability of correcting burst errors.

6.1 Quadratic residue


Theorem 6.1. [11] In any finite field F_q, the multiplicative group F_q^* = F_q \ {0} of all nonzero elements is cyclic, and we write F_q^* = ⟨α⟩ for a generator α.
Remark. Every finite field has at least one primitive element [6].
Example 6.1. For a finite field F4 , the multiplicative group F∗4 forms a cyclic group.
Since |F∗4 | = 3, we can write out F∗4 = {α, α2 , α3 } = ⟨α⟩.
Definition 6.1 (Primitive element). [6] An element α in a finite field Fq is called a
primitive element or generator of Fq if Fq = {0, α, α2 , . . . , αq−1 }, i.e. all the nonzero
elements can be written as powers of a single element.
Definition 6.2 (Order). [6] The order of a nonzero element α ∈ Fq , denoted by
ord(α), is the smallest positive integer k such that αk = 1.
Example 6.2. Consider the element α in the field F_4 = F_2[α], where α is a root of the irreducible polynomial 1 + x + x^2 ∈ F_2[x]. Then we have

    α^0 = 1,
    α^1 = α,
    α^2 = −(1 + α) = 1 + α, and
    α^3 = α(α^2) = α(1 + α) = α + α^2 = α + 1 + α = 1.

It gives that F_4 = {0, α, 1 + α, 1} = {0, α, α^2, α^3}. Hence, α is a primitive element and the order of α is 3.
Definition 6.3 (Quadratic residue; Quadratic nonresidue). [6][9] Let p > 2 be a prime and choose a primitive element g of F_p. A nonzero element r of F_p is called a quadratic residue modulo p if r = g^{2i}, and a quadratic nonresidue modulo p if r = g^{2i−1}, for some integer i. In other words, r is a quadratic residue modulo p if r has a square root modulo p, i.e. if there exists an element s with s^2 ≡ r (mod p).
Theorem 6.2. [9] Let Q_p denote the set of all quadratic residues mod p and N_p the set of all quadratic nonresidues mod p. Then Q_p has size (p − 1)/2 and is equal to

    Q_p = {1^2, 2^2, . . . , ((p − 1)/2)^2},

where all numbers are taken modulo p. Hence, the set N_p also has size (p − 1)/2, i.e.

    |Q_p| = |N_p| = (p − 1)/2.
Example 6.3. Consider the finite field F_7. We know that 3 is a primitive element, since

    3^1 ≡ 3 (mod 7), 3^2 ≡ 2 (mod 7), 3^3 ≡ 6 (mod 7),
    3^4 ≡ 4 (mod 7), 3^5 ≡ 5 (mod 7), 3^6 ≡ 1 (mod 7).

Then the nonzero quadratic residues modulo 7 are

    {3^{2i} : i = 0, 1, . . . } = {1, 2, 4},

and the quadratic nonresidues modulo 7 are

    {3^{2i−1} : i = 1, 2, . . . } = {3, 6, 5}.

Hence, |Q_7| = |N_7| = (7 − 1)/2 = 3.
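The quadratic residues modulo a prime are easy to compute in Mathematica, either by squaring 1, . . . , (p − 1)/2 as in Theorem 6.2 or from the powers of a primitive root; a small sketch for p = 7:

    p = 7;
    Qp = Union[Mod[Range[(p - 1)/2]^2, p]]     (* {1, 2, 4} *)
    Np = Complement[Range[p - 1], Qp]          (* {3, 5, 6} *)
    PrimitiveRoot[p]                           (* 3, the primitive element used above *)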


Theorem 6.3. [6] Let p be an odd prime, let Q_p denote the set of all quadratic residues mod p and N_p the set of all quadratic nonresidues mod p. Then the following hold.

(i) The product of two quadratic residues modulo p is a quadratic residue modulo p.

(ii) The product of two quadratic nonresidues modulo p is a quadratic residue modulo p.

(iii) The product of a nonzero quadratic residue modulo p and a quadratic nonresidue modulo p is a quadratic nonresidue modulo p.

(iv) There are exactly (p − 1)/2 nonzero quadratic residues modulo p and (p − 1)/2 quadratic nonresidues modulo p; therefore F_p = {0} ∪ Q_p ∪ N_p.

(v) For α ∈ Q_p and β ∈ N_p, we have

    αQ_p = {αr : r ∈ Q_p} = Q_p,
    βQ_p = {βr : r ∈ Q_p} = N_p,
    αN_p = {αn : n ∈ N_p} = N_p,
    βN_p = {βn : n ∈ N_p} = Q_p.
Proof. (i) Let g be a primitive element of F_p, and let γ and θ be two quadratic residues modulo p. Then there exist two integers i and j such that γ = g^{2i} and θ = g^{2j}. Hence, γθ = g^{2(i+j)} is a quadratic residue modulo p, proving (i).

(ii) Similarly, let µ and ν be two quadratic nonresidues modulo p. Then there exist two integers i and j such that µ = g^{2i−1} and ν = g^{2j−1}. Hence, µν = g^{2(i+j−1)} is a quadratic residue modulo p, giving (ii).

(iii) Since γν = g^{2(i+j)−1} is a quadratic nonresidue modulo p, (iii) follows.

(iv) All nonzero quadratic residues modulo p can be written as

    {g^{2i} : i = 0, 1, . . . , (p − 3)/2},

and all quadratic nonresidues modulo p as

    {g^{2i−1} : i = 1, 2, . . . , (p − 1)/2},

which shows that (iv) holds.

(v) The last part follows immediately from (i)–(iv).

Definition 6.4 (Splitting field). The splitting field of a polynomial over a field F is the smallest extension field of F that contains all roots of the polynomial.
Example 6.4. Let us compute the splitting field of f(x) = x^2 + x + 1 ∈ F_2[x]. Since f(x) has no roots in F_2, f(x) is irreducible over F_2. We then have F_2[x]/(x^2 + x + 1) = {0, 1, x, 1 + x}. Putting α = x, we get F_2[α] = {0, 1, α, 1 + α}, where

    x^2 + x + 1 ≡ 0 (mod x^2 + x + 1) ⟺ α^2 + α + 1 = 0.     (2)

Since the characteristic is two, (2) also shows that 1 + α is a root:

    (1 + α)^2 + (1 + α) + 1 = 1 + α^2 + 1 + α + 1 = α^2 + α + 1 + 2 = 0 + 2 = 0 over F_2.

The splitting field F_2(α) = {a + bα | a, b ∈ F_2, α^2 = α + 1}, i.e. F_4, is thus large enough to split x^2 + x + 1.
Definition 6.5 (Roots of unity; Primitive roots of unity). [9][4] An element α is called an n-th root of unity over F_q if α^n = 1. The set of all n-th roots of unity is a cyclic group with respect to multiplication. If α generates the cyclic group of all n-th roots of unity, then it is called a primitive n-th root of unity.
Example 6.5. Consider the polynomial x^15 − 1 ∈ F_2[x]. Its splitting field is F_{2^4} = F_16. Let α be a root of the polynomial x^4 + x + 1, which is irreducible over F_2. Then F_16 can be represented as the set of polynomials in α of degree at most 3, with α^4 = α + 1. It is easy to verify that α^15 = 1, so the order of α divides 15. Since α^3 ≠ 1 and α^5 ≠ 1, α has order 15, showing that α is a primitive element. The 5th roots of unity are {1, α^3, α^6, α^9, α^12} and the 3rd roots of unity are {1, α^5, α^10}; for instance, α^3 is a primitive 5th root of unity.

6.2 Binary quadratic residue codes
Let p be an odd prime such that 2 is a quadratic residue modulo p. By the Fermat–Euler theorem, there exists an integer m ≥ 1 such that 2^m − 1 is divisible by p, i.e., 2^m ≡ 1 (mod p). Let θ be a primitive element of F_{2^m}, where m is chosen large enough for F_{2^m} to contain all roots of x^p − 1, and take α := θ^{(2^m−1)/p}; then α is a primitive p-th root of unity. The divisors of x^p − 1

    g_Q(x) := ∏_{r∈Q_p} (x − α^r) and g_N(x) := ∏_{n∈N_p} (x − α^n)

are polynomials whose coefficients lie in F_2, and it follows that

    x^p − 1 = (x − 1) g_Q(x) g_N(x).

Binary quadratic residue codes, denoted Q(p) = ⟨g_Q(x)⟩ and N(p) = ⟨g_N(x)⟩ [6], are the binary cyclic codes of length p over F_2 generated by the polynomials g_Q(x) and g_N(x), respectively. By Theorem 6.2, we know that

    |Q_p| = |N_p| = (p − 1)/2,

and the sets Q_p and N_p partition {1, 2, . . . , p − 1} into two halves of equal size. It follows that

    dim Q(p) = p − deg(g_Q(x)) = p − |Q_p| = (p + 1)/2

and

    dim N(p) = p − deg(g_N(x)) = p − |N_p| = (p + 1)/2,

showing that the dimensions of both Q(p) and N(p) are equal to (p + 1)/2.

6.3 Construction
Lemma 6.1. The polynomials g_Q(x) and g_N(x) belong to F_2[x].

Proof. It is sufficient to show that each coefficient of g_Q(x) and g_N(x) belongs to F_2. Let g_Q(x) = a_0 + a_1 x + · · · + a_k x^k, where a_i ∈ F_{2^m} and k = (p − 1)/2. Raising each coefficient to its second power, we have

    a_0^2 + a_1^2 x + · · · + a_k^2 x^k = ∏_{r∈Q_p} (x − α^{2r})
                                       = ∏_{j∈2Q_p} (x − α^j)     (by Theorem 6.3(i))
                                       = ∏_{j∈Q_p} (x − α^j)
                                       = g_Q(x).

Hence, a_i = a_i^2 for all 0 ≤ i ≤ k, meaning that the a_i are elements of F_2. Thus, g_Q(x) is a polynomial over F_2.
Similarly, let g_N(x) = a_0 + a_1 x + · · · + a_l x^l, with l = (p − 1)/2. Then

    a_0^2 + a_1^2 x + · · · + a_l^2 x^l = ∏_{n∈N_p} (x − α^{2n})
                                       = ∏_{j∈2N_p} (x − α^j)     (by Theorem 6.3(iii))
                                       = ∏_{j∈N_p} (x − α^j)
                                       = g_N(x),

giving that g_N(x) is also a polynomial over F_2.
Lemma 6.1 thus shows that each coefficient of g_Q(x) and g_N(x) belongs to F_2. Let us now use this property to construct a QR code.
Example 6.6. Let p = 7; then 2 is a quadratic residue modulo 7 by Example 6.3, which also gave Q_7 = {1, 2, 4} and N_7 = {3, 5, 6}. The polynomial f(x) = 1 + x + x^3 is irreducible over F_2, so

    F_{2^3} = F_8 = F_2[x]/⟨1 + x + x^3⟩ = {a + bθ + cθ^2 | a, b, c ∈ F_2, θ^3 = θ + 1}.

Let α be a root of 1 + x + x^3 in this field. Then the order of α is 7 and α^3 = α + 1, so α is a primitive 7th root of unity, having 1 + x + x^3 as its minimal polynomial over F_2. From α^3 = α + 1 we get α^4 = α^2 + α, α^5 = α^2 + α + 1, α^6 = α^2 + 1 and α^7 = 1, so that α + α^2 + α^4 = 0 and α^3 + α^5 + α^6 = 1. The polynomials g_Q(x) and g_N(x) become

    g_Q(x) = ∏_{r∈Q_7} (x − α^r)
           = (x + α)(x + α^2)(x + α^4)
           = x^3 + (α + α^2 + α^4)x^2 + (α^3 + α^5 + α^6)x + α^7
           = x^3 + 0 · x^2 + 1 · x + 1
           = x^3 + x + 1

and

    g_N(x) = ∏_{n∈N_7} (x − α^n)
           = (x + α^3)(x + α^5)(x + α^6)
           = x^3 + (α^3 + α^5 + α^6)x^2 + (α^8 + α^9 + α^{11})x + α^{14}
           = x^3 + 1 · x^2 + 0 · x + 1
           = x^3 + x^2 + 1,

respectively. Hence, x^7 − 1 = (x − 1) g_Q(x) g_N(x). Based on this factorization, we obtain the following codes:

    ⟨g_Q(x)⟩ = ⟨x^3 + x + 1⟩
             = {f(x)(x^3 + x + 1) | deg f(x) ≤ 3}
             = {0000000, 1101000, 0110100, 0011010, 0001101, 1000110,
                0100011, 1010001, 1011100, 1110010, 1100101, 0101110,
                0010111, 1001011, 0111001, 1111111}

and

    ⟨g_N(x)⟩ = ⟨x^3 + x^2 + 1⟩
             = {f(x)(x^3 + x^2 + 1) | deg f(x) ≤ 3}
             = {0000000, 1011000, 0101100, 0010110, 0001011, 1000101,
                1100010, 0110001, 1110100, 1001110, 1010011, 0111010,
                0011101, 0100111, 1101001, 1111111}.
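The factorization used in Example 6.6 can be reproduced directly in Mathematica by factoring x^7 − 1 over F_2; the two cubic factors are g_Q(x) and g_N(x) (up to the ordering Mathematica chooses). The second line checks one codeword of ⟨g_Q(x)⟩ against the list above.

    Factor[x^7 - 1, Modulus -> 2]
    (* (1 + x) (1 + x + x^3) (1 + x^2 + x^3) *)

    (* one multiple of gQ, namely f(x) = 1 + x^3, written as a word of length 7 *)
    PadRight[CoefficientList[PolynomialMod[(1 + x^3) (1 + x + x^3), {x^7 - 1, 2}], x], 7]
    (* {1, 1, 0, 0, 1, 0, 1}, i.e. the codeword 1100101 in the list above *)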

Theorem 6.4. [6] For an odd prime p, 2 is a quadratic residue modulo p if and
only if p is of the form p = 8m ± 1.
Corollary 6.1. [6] There exist binary quadratic residue codes of length p if and
only if p is a prime of the form p = 8m ± 1.

6.4 Idempotent
We have shown that cyclic codes of odd length n can be obtained from a factorization of x^n − 1 into monic irreducible factors over F_q. However, factoring x^n − 1, which involves finding a primitive n-th root of unity, is not always easy when the code length n increases. An alternative approach is to use idempotents.
Definition 6.6 (Idempotent). [9] A polynomial e(x) is said to be an idempotent in R_n if e^2(x) ≡ e(x).
Example 6.7. The polynomial x^3 + x^5 + x^6 is an idempotent in R_7, since

    (x^3 + x^5 + x^6)^2 = (x^3 + x^5 + x^6)(x^3 + x^5 + x^6)
                        = x^6 + 2x^8 + 2x^9 + x^10 + 2x^11 + x^12
                        ≡ x^3 + x^5 + x^6 (mod 2, x^7 − 1).

Theorem 6.5. [9] Let C be a cyclic code in R_n with generator polynomial g(x), and let h(x) be the polynomial such that g(x)h(x) = x^n − 1 in F_2[x] and gcd(h(x), g(x)) = 1. Then there exist polynomials a(x) and b(x) for which

    a(x)g(x) + b(x)h(x) = 1.

The polynomial e(x) = a(x)g(x) mod (x^n − 1) has the following properties:
(i) e(x) is the unique identity in C, i.e.

    p(x)e(x) ≡ p(x) for all p(x) ∈ C;

(ii) e(x) is the unique polynomial in C that is both an idempotent and generates C, i.e. C = ⟨e(x)⟩.
Definition 6.7 (Generating idempotent). [9] The polynomial e(x) defined in the previous theorem is called the generating idempotent of C.
The next theorem shows how to compute g(x) from e(x).
Theorem 6.6. [9] The generator polynomial of the code ⟨e(x)⟩ is

    g(x) = gcd(e(x), x^n − 1).

Proof. Since the generator polynomial g(x) of a cyclic code in R_n divides x^n − 1, we can write x^n − 1 = g(x)h(x). Together with e(x) ≡ a(x)g(x) from the definition, we have

    gcd(e(x), x^n − 1) = gcd(a(x)g(x), h(x)g(x)).

Since a(x) and h(x) are relatively prime, this is equal to g(x).
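Theorem 6.6 gives a convenient way to recover a generator polynomial in Mathematica. As an illustration, take the idempotent x^3 + x^5 + x^6 of Example 6.7 (which, by Theorem 6.8 below, generates the quadratic residue code N(7)); its gcd with x^7 − 1 over F_2 should return the generator polynomial.

    (* generator polynomial of the code generated by the idempotent x^3 + x^5 + x^6 *)
    PolynomialGCD[x^3 + x^5 + x^6, x^7 - 1, Modulus -> 2]
    (* 1 + x^2 + x^3, i.e. gN(x) from Example 6.6 *)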
Theorem 6.7. [9] Let C be a cyclic code in Rn with generator polynomial g(x) and
generating idempotent e(x), then g(x) and e(x) have exactly the same roots, in the
splitting field for xn − 1, from among the n-th roots of unity. Furthermore, if f (x) is
an idempotent in Rn that has exactly the same roots as g(x) from among the n-th
roots of unity, then f (x) is the generating idempotent of ⟨g(x)⟩.
Now, let us use the previous theorem to determine the generating idempotent of a binary quadratic residue code. Put

    e(x) = ∑_{r∈Q_p} x^r.

Since 2 is a quadratic residue modulo p for an odd prime p of the form p = 8m ± 1, according to Theorem 6.4, we have

    [e(x)]^2 = e(x^2) = ∑_{r∈Q_p} x^{2r} ≡ ∑_{r∈Q_p} x^r = e(x).

Hence, e(x) is an idempotent. It follows that e(α^i) = 0 or 1 for all i. If s ∈ Q_p, then

    e(α^s) = ∑_{r∈Q_p} α^{sr} = ∑_{r∈Q_p} α^r = e(α),

and if n ∈ N_p, then

    e(α^n) = ∑_{r∈Q_p} α^{nr} = ∑_{n′∈N_p} α^{n′} ≠ e(α)

(the two sums cannot be equal, since together they add up to ∑_{i=1}^{p−1} α^i = 1 ≠ 0 in characteristic 2). So e(α^s) takes one constant value on the set Q_p and another on the set N_p. Hence, we must have either

    1. e(α^s) = 0 if s ∈ Q_p, and 1 if s ∈ N_p, or
    2. e(α^s) = 1 if s ∈ Q_p, and 0 if s ∈ N_p.

If the latter prevails, then by putting β = α^v for some v ∈ N_p, we get

    e(β^s) = e(α^{vs}) = 0

for all s ∈ Q_p. Hence, we can always obtain case 1 by replacing α with a different primitive p-th root of unity.
Now, in the splitting field F_{2^s}, we have

    e(1) = (p − 1)/2 = 1 if p = 8m − 1, and 0 if p = 8m + 1.

This again splits into two cases:

    1. If p = 8m − 1, then e(α^s) = 0 if s ∈ Q_p, and 1 if s ∈ N_p ∪ {0}.
    2. If p = 8m + 1, then e(α^s) = 0 if s ∈ Q_p ∪ {0}, and 1 if s ∈ N_p.

According to Theorem 6.7, e(x) in the former case is the generating idempotent for ⟨g_Q(x)⟩, while in the latter case it is the generating idempotent for ⟨(x − 1)g_Q(x)⟩.
The following theorem shows how to find the idempotent for ⟨g_N(x)⟩ as well.
Theorem 6.8. [9] Let

    e(x) = ∑_{r∈Q_p} x^r and f(x) = ∑_{n∈N_p} x^n.

There is a primitive p-th root of unity α for which

    e(α^s) = 0 if s ∈ Q_p, and 1 if s ∈ N_p.

Then the following holds:

1. if p = 8m − 1, then e(1) = 1 and
   (a) the generating idempotent for Q(p) is e(x),
   (b) the generating idempotent for N(p) is f(x);
2. if p = 8m + 1, then e(1) = 0 and
   (a) the generating idempotent for Q(p) is 1 + f(x),
   (b) the generating idempotent for N(p) is 1 + e(x).
Proof. We only prove part 2 here; later, we will use this part to generate the idempotent for a [17, 9]-code. When p = 8m + 1, we have g_Q(x) = ∏_{r∈Q_p} (x − α^r), with deg g_Q(x) = (p − 1)/2 = 4m. Since 2 is a quadratic residue modulo p, 2N_p = N_p, so

    f(x)^2 = f(x^2) = ∑_{n∈N_p} x^{2n} = ∑_{n∈N_p} x^n = f(x),

and hence f(x) is an idempotent; consequently (1 + f(x))^2 = 1 + f(x)^2 = 1 + f(x), so 1 + f(x), and in the same way 1 + e(x), are idempotents. There are (p − 1)/2 quadratic nonresidues, so in the splitting field F_{2^s} we get, for p = 8m + 1,

    f(1) = e(1) = (p − 1)/2 = 0.

Since

    e(x) + 1 + f(x) = ∑_{i=0}^{p−1} x^i = (x^p − 1)/(x − 1),

we have, for every s ≠ 0 (so s ∈ Q_p or s ∈ N_p),

    e(α^s) + 1 + f(α^s) = ((α^s)^p − 1)/(α^s − 1) = 0,

that is, 1 + f(α^s) = e(α^s). Therefore 1 + f(x) vanishes exactly at the roots α^r with r ∈ Q_p, while 1 + f(1) = 1 ≠ 0. These are precisely the roots of g_Q(x), so by Theorem 6.7, 1 + f(x) is the generating idempotent of Q(p). In the same way, 1 + e(x) vanishes exactly at α^n with n ∈ N_p, the roots of g_N(x), so 1 + e(x) is the generating idempotent of N(p). Note that e(x) itself vanishes both at α^r for r ∈ Q_p and at 1 (since e(1) = 0), so e(x) is instead the generating idempotent of another code, namely ⟨(x − 1)g_Q(x)⟩.

6.5 Decoding of QR codes


Example 6.8. Take p = 23. Let θ be a root of the polynomial 1 + x + x^3 + x^5 + x^11 ∈ F_2[x]; then θ is a primitive element of F_{2^11}. Since

    (2^{(23−1)/2} − 1)/23 = (2^11 − 1)/23 = 89,

the element α := θ^89 has order 23. We have the factorization

    x^23 − 1 = (x − 1)(x^11 + x^10 + x^6 + x^5 + x^4 + x^2 + 1)(x^11 + x^9 + x^7 + x^6 + x^5 + x + 1)

into irreducible polynomials over F_2. Now, 23 is a prime of the form 8m − 1, with m = 3, so by Corollary 6.1 a factorization of the form x^23 − 1 = (x − 1)g_Q(x)g_N(x), where g_Q(x) and g_N(x) are generator polynomials for quadratic residue codes, must also exist. As generator polynomials for a quadratic residue code of length 23, g_Q(x) and g_N(x) must have degree 11, which is exactly the degree of the irreducible factors in the factorization of x^23 − 1. Therefore, those degree-11 factors must be the generator polynomials. The resulting code is the binary [23, 12]-Golay code, an example of a quadratic residue code [9][11].
The Golay codes were discovered by Marcel J. E. Golay in the 1940s [6]. The Reiger bound indicates that the optimal burst length that can be corrected is 5. To show that this code is indeed 5-cyclic-burst-error-correcting, we need to check that all such cyclic burst error patterns lie in different cosets. The possible bursts of length at most 5 are 1, 11, 101, 111, 1001, 1011, 1101, 1111, 10001, 10011, 10101, 10111, 11001, 11011, 11101 and 11111, so there are 16 in total. Each of these can be placed in 23 different positions in a word, which gives 23 · 16 = 368 possible cyclic burst error patterns of length 5 or less. Furthermore, there exist 2^{23−12} = 2^11 = 2048 different cosets. Since there are more cosets than error patterns, it is at least possible for all these patterns to lie in different cosets. This can be verified in Mathematica; we leave the check to the reader, but a sketch of it is given below.
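A possible Mathematica check (a sketch, assuming the factor g_Q(x) = x^11 + x^10 + x^6 + x^5 + x^4 + x^2 + 1 as generator polynomial): generate the 368 cyclic bursts of length at most 5, compute their syndromes as remainders modulo g_Q(x), and test whether the syndromes are pairwise distinct, which by Theorem 5.3 is equivalent to the bursts lying in distinct cosets.

    p = 23; l = 5;
    gQ = x^11 + x^10 + x^6 + x^5 + x^4 + x^2 + 1;

    (* the 16 basic burst patterns: first and last bit 1, free bits in between *)
    patterns = Flatten@Table[
        1 + If[len == 1, 0, x^(len - 1)] + Sum[int[[j]] x^j, {j, 1, len - 2}],
        {len, 1, l}, {int, Tuples[{0, 1}, Max[len - 2, 0]]}];

    (* all 23 cyclic shifts of each pattern, and their syndromes *)
    bursts    = Flatten@Table[PolynomialMod[x^i b, {x^p - 1, 2}], {b, patterns}, {i, 0, p - 1}];
    syndromes = PolynomialRemainder[#, gQ, x, Modulus -> 2] & /@ bursts;

    {Length[bursts], Length[DeleteDuplicates[syndromes]]}
    (* an output of {368, 368} would confirm that all 368 bursts lie in distinct cosets *)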

By Corollary 6.1, we know that there exist binary quadratic residue codes of
prime length p if and only if p = 8m ± 1. Thus, Reiger bound can be expressed in
two cases:
p−1
(i) p = 8m + 1 : l ≤ 4
and
p−3
(ii) p = 8m − 1 : l ≤ 4
,
where l must be an integer. A binary cyclic burst error has 1s in its first and last positions;
bursts of length 1 and 2 are thus 1 and 11, respectively. For a burst of length greater than 2,
each of the positions lying in between can be chosen in two ways (0 or 1). Considering p cyclic
shifts, we have

    p · (1 + 2^0 + 2^1 + 2^2 + · · · )
possible combinations. Thus, the number of burst errors of length l or less in such
codes is

    p · (1 + 2^0 + 2^1 + 2^2 + · · · + 2^{l−2}) = p (1 + ∑_{i=0}^{l−2} 2^i)
                                               = p (1 + 2^{l−1} − 1)
                                               = p · 2^{l−1},

which, for the largest l allowed by the Reiger bound, equals

    p · 2^{(p−1)/4 − 1}   if p = 8m + 1,
    p · 2^{(p−3)/4 − 1}   if p = 8m − 1.
According to Theorem 4.3.(v), the number of cosets is

    2^{n−k} = 2^{(p−1)/2}.

When p = 8m + 1, the number of burst errors is p · 2^{(p−1)/4−1} = p · 2^{(p−5)/4}.
Comparing this with the number of cosets, we obtain

    lim_{p→∞} 2^{(p−1)/2} / (p · 2^{(p−5)/4}) = lim_{p→∞} 2^{(p+3)/4} / p = lim_{p→∞} 2^{3/4} · 2^{p/4} / p = ∞,

showing that 2^{p/4} increases exponentially faster than p as p grows; the number of cosets
thus grows faster than the number of burst errors. Similarly, when p = 8m − 1, the number
of burst errors is p · 2^{(p−3)/4−1} = p · 2^{(p−7)/4}, and

    lim_{p→∞} 2^{(p−1)/2} / (p · 2^{(p−7)/4}) = lim_{p→∞} 2^{(p+5)/4} / p = lim_{p→∞} 2^{5/4} · 2^{p/4} / p = ∞,

giving the same result.
Table 6.4 lists the number of error patterns and cosets of the first ten binary
quadratic residue codes.

Code       Reiger bound l = ⌊(n − k)/2⌋   No. burst error patterns     No. cosets
[7, 4]     ⌊(7 − 4)/2⌋ = 1                1 · 7 = 7                    2^{7−4} = 8
[17, 9]    ⌊(17 − 9)/2⌋ = 4               8 · 17 = 136                 2^{17−9} = 256
[23, 12]   ⌊(23 − 12)/2⌋ = 5              16 · 23 = 368                2^{23−12} = 2048
[31, 16]   ⌊(31 − 16)/2⌋ = 7              64 · 31 = 1984               2^{31−16} = 32768
[41, 21]   ⌊(41 − 21)/2⌋ = 10             512 · 41 = 20992             2^{41−21} = 1048576
[47, 24]   ⌊(47 − 24)/2⌋ = 11             1024 · 47 = 48128            2^{47−24} = 8388608
[71, 36]   ⌊(71 − 36)/2⌋ = 17             65536 · 71 = 4653056         2^{71−36} = 34359738368
[73, 37]   ⌊(73 − 37)/2⌋ = 18             131072 · 73 = 9568256        2^{73−37} = 68719476736
[79, 40]   ⌊(79 − 40)/2⌋ = 19             262144 · 79 = 20709376       2^{79−40} = 549755813888
[89, 45]   ⌊(89 − 45)/2⌋ = 22             2097152 · 89 = 186646528     2^{89−45} = 17592186044416

Table 6.4: First ten binary quadratic-residue codes.
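
The last two columns of Table 6.4 follow directly from the formulas p · 2^{l−1} and 2^{n−k}; a small Mathematica sketch (not part of the thesis) that recomputes them is given below.

    (* Sketch: recompute l, the burst-pattern count p*2^(l-1) and the coset count 2^(p-k)
       for the ten code lengths listed in Table 6.4. *)
    lengths = {7, 17, 23, 31, 41, 47, 71, 73, 79, 89};
    row[p_] := Module[{k = (p + 1)/2, l}, l = Floor[(p - k)/2]; {p, l, p*2^(l - 1), 2^(p - k)}];
    TableForm[row /@ lengths, TableHeadings -> {None, {"p", "l", "bursts", "cosets"}}]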

7 Results
If a cyclic code is l-burst-error-correcting, then the Reiger bound must be satisfied. The
Reiger bound indicates the optimal length of burst errors that can theoretically be corrected;
however, it cannot guarantee that a linear code actually is an l-burst-error-correcting code.
Take the quadratic residue [17, 9] code as an example: if the Reiger bound were attained with
equality, the code would correct all burst errors of length 4.
With the help of Mathematica, we list all remainders for the burst errors of length 1, 2, 3
and 4 in a table; the Mathematica code for decoding the [17, 9] QR code can be found in the
appendix. Using the error trapping method, we found 17 duplicate remainders for this code.
For instance, the remainder 1 + x^2 + x^3 appears at positions 2 and 47 in the list of
remainders for bursts of length 4, which means that for every i = 0, 1, 2, . . . , 16 the two
burst errors x^i(1 + x^2 + x^3) and x^{11+i}(1 + x + x^3) have the same remainder. By
Theorem 4.3, their difference must then be a codeword. Taking i = 0, the codeword is

    w(x) = 1 + x^2 + x^3 + x^11 + x^12 + x^14.

Indeed, w(x) is a multiple of the generator polynomial g(x) = 1 + x^3 + x^4 + x^5 + x^8
(see the sketch below), so the two burst errors belong to the same coset. By Theorem 5.3,
this counterexample shows that the [17, 9] QR code cannot be a 4-burst-error-correcting code.
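
The divisibility claim is easy to confirm in Mathematica (a quick check, not reproduced in the appendix):

    (* Sketch: w(x) is a multiple of g(x) over F_2; the remainder is 0 and the
       quotient comes out as 1 + x^2 + x^4 + x^6. *)
    w = 1 + x^2 + x^3 + x^11 + x^12 + x^14;
    g = 1 + x^3 + x^4 + x^5 + x^8;
    PolynomialRemainder[w, g, x, Modulus -> 2]   (* 0 *)
    PolynomialQuotient[w, g, x, Modulus -> 2]    (* 1 + x^2 + x^4 + x^6 *)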
More generally, let b1(x) and b2(x) be two distinct burst errors having the same syndrome,
so that either of them could serve as the burst coset leader. Writing

    b1(x) = q1(x)g(x) + s(x)

and

    b2(x) = q2(x)g(x) + s(x),

the difference

    b1(x) − b2(x) = (q1(x) − q2(x)) g(x) = u(x)g(x)

is clearly a multiple of g(x) and thereby a codeword. By Theorem 4.3, b1(x) and b2(x) must
then belong to the same coset.

The shifted differences

    x b1(x) − x b2(x) = x (u(x)g(x)),
    x^2 b1(x) − x^2 b2(x) = x^2 (u(x)g(x)),
        ...
    x^m b1(x) − x^m b2(x) = x^m (u(x)g(x))

are also multiples of g(x), so each pair x^j b1(x) and x^j b2(x) belongs to one and the same
coset. Hence, for any shift, the difference between the two burst errors remains a multiple
of g(x). Thus, if there exist duplicates at all, we can expect to obtain at least p duplicate
remainders for a code of length p.
In our case, we have found 17 pairs of duplicates for a code of length 17. Indeed, for every
i = 0, 1, 2, . . . , 16, the cyclic shifts x^i(1 + x^2 + x^3) and x^{11+i}(1 + x + x^3) belong
to the same coset, so it is no coincidence that exactly 17 pairs appeared. Since not all burst
errors of length 4 or less lie in distinct cosets, the code fails to correct all of them. The
Reiger bound is thus a necessary but not a sufficient condition, and the counterexample found
here verifies this. As the length of a binary quadratic-residue code increases, the number of
cosets grows faster than the number of error patterns, which makes it more plausible that the
error patterns fall into distinct cosets. However, this still does not guarantee that they do;
each code has to be checked individually by examining its remainders, for instance with a
check such as the one sketched below.
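
To make that check easy to repeat, the appendix code can be wrapped in a small reusable function. The sketch below (the name duplicateSyndromes is our own, not from the thesis) returns the repeated remainders among all cyclic bursts of length at most l; applied to the [17, 9] code it should reproduce the 17 duplicate remainders found above.

    (* Sketch: repeated burst syndromes for a binary cyclic code of length n with
       generator polynomial g (a polynomial in x), over all cyclic bursts of length <= l. *)
    duplicateSyndromes[n_, g_, l_] := Module[{bursts, rems},
       bursts = Join[{{1}}, Flatten[Table[
            Join[{1}, #, {1}] & /@ Tuples[{0, 1}, j - 2], {j, 2, l}], 1]];
       rems = Flatten[Table[
          PolynomialRemainder[x^s*(b . x^Range[0, Length[b] - 1]), g, x, Modulus -> 2],
          {s, 0, n - 1}, {b, bursts}]];
       Select[Tally[rems], #[[2]] > 1 &]];

    duplicateSyndromes[17, 1 + x^3 + x^4 + x^5 + x^8, 4]   (* should give 17 duplicate remainders *)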

8 Discussion
Although cyclic codes are generally powerful for correcting burst errors, one should note
that the burst-error-correcting capability varies from code to code. If one is looking for
codes that are optimal for burst error correction, other families, such as Reed–Solomon
codes, may be more interesting.
Reed–Solomon codes are powerful for burst error correction since they operate on alphabets
larger than the binary one [8]. Consider a Reed–Solomon code over F_{2^m}, where each symbol
of the alphabet is represented by m bits. It makes no difference how many of those m bits are
in error; they are regarded as a single symbol error. Since the bit errors of a burst are
concentrated in a short interval, several of them are likely to fall within one and the same
symbol; in general, a bit burst of length l touches at most ⌈l/m⌉ + 1 consecutive symbols.
This property makes such codes powerful for burst error correction.
We have shown in the previous chapter that the Reiger bound is only a necessary condition.
The bound indicates the optimal burst length that a code could correct. It does not mean,
however, that bursts whose length exceeds this bound can never be corrected: the burst errors
whose syndromes are unique can still be corrected, even though not all of them lie in
distinct cosets.
In this thesis, we have only discussed binary codes. One should note that cyclic burst errors
have to be redefined for a non-binary code. Take a ternary code as an example: with the first
and the last digit required to be nonzero, there are more patterns to consider than in the
binary case. For instance, a burst of length 3 can be constructed in the following ways:
101, 111, 121,
102, 112, 122,
201, 211, 221,
202, 212, 222.
Thus, a different formula has to be derived for computing the number of cosets and burst
errors. Consider an [n, k]-code C over F_q, and suppose we want to count the burst errors of
length l ≥ 2. Since the first and last positions of a burst error must be nonzero, there are
(q − 1) · (q − 1) possibilities for these two positions, and each of the l − 2 positions lying
in between can be chosen in q ways. With n cyclic shifts, the number of burst errors of
length l is therefore

    n · q^{l−2} (q − 1)^2,

and the number of cosets is q^{n−k}. The sketch below counts the base patterns for the
ternary example above.
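
For instance, the twelve ternary bursts of length 3 listed above (before applying the n cyclic shifts) can be enumerated directly; a small Mathematica sketch, assuming q = 3 and l = 3:

    (* Sketch: count the ternary length-3 bursts (first and last digit nonzero)
       and compare with the formula q^(l-2) (q-1)^2. *)
    ternaryBursts = Select[Tuples[{0, 1, 2}, 3], First[#] != 0 && Last[#] != 0 &];
    {Length[ternaryBursts], 3^(3 - 2)*(3 - 1)^2}   (* both 12 *)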
Lastly, we would like to remind the reader that there are other methods for decoding QR codes
besides the error trapping method presented here. These codes have been studied intensively
by algebraic coding theorists who aim to find more effective decoding methods, and the
decoding of long binary QR codes remains an interesting and active research area. This part
is left for the reader to explore further.

References
[1] Elwyn Ralph Berlekamp. Key Papers in the Development of Coding Theory.
IEEE Press selected reprint series. Institute of Electrical and Electronics Engi-
neers, 1974.
[2] Marcel J. E. Golay. Notes on digit coding. Proceedings of the IRE, 37(6):657,
1949.
[3] Richard Hamming. Error Detecting and Error Correcting Codes. Bell System
Technical Journal, 29(2):147–160, 1950.
[4] D.R. Hankerson, D.G. Hoffman, D.A. Leonard, C.C. Lindner, K.T. Phelps,
C.A. Rodger, and J.R. Wall. Coding Theory and Cryptography: The Essentials,
Second Edition. A Series of Monographs and Textbooks in Pure and applied
mathematics. Marcel Dekker, 2000.
[5] Raymond Hill. A First Course in Coding Theory. Oxford applied mathematics
and computing science series. Oxford University Press, 1986.
[6] San Ling and Chaoping Xing. Coding Theory. A First Course. Cambridge
University Press, 2010.
[7] Vera Pless. Introduction to the Theory of Error-Correcting Codes, Second Edi-
tion. Wiley-Interscience series in discrete mathematics. John Wiley & Sons,
1989.
[8] Irving S. Reed and Gustave Solomon. Polynomial Codes over Certain Finite
Fields. Journal of the Society for Industrial and Applied Mathematics, 8(2):300–
304, 1960.

[9] Steven Roman. Coding and Information Theory. Graduate Texts in Mathe-
matics. Springer, 1992.

[10] Claude E. Shannon. A Mathematical Theory of Communication. Bell System Technical
Journal, 27(3):379–423, 1948.

[11] Lekh R. Vermani. Elements of Algebraic Coding Theory. Chapman & Hall
Mathematics Series. Chapman & Hall, 1996.

A Appendix 1

Find a list of primitive roots of 17

In[1]:= PrimitiveRootList[17]
Out[1]= {3, 5, 6, 7, 10, 11, 12, 14}

Find a primitive element

In[2]:= PowerMod[6, 2, 17]


Out[2]= 2

Find the nonzero quadratic nonresidues modulo 17, i.e., compute 6^(2k-1) mod 17 for k = 1, ..., 8

In[3]:= Table[PowerMod[6, 2 k - 1, 17], {k, 1, 8}]

Out[3]= {6, 12, 7, 14, 11, 5, 10, 3}

Construct the generating idempotent for NQR, using 1+e(x) since p=17=8m+1

In[4]:= e[x_] := 1 + x ^ 6 + x ^ 12 + x ^ 7 + x ^ 14 + x ^ 11 + x ^ 5 + x ^ 10 + x ^ 3
PolynomialRemainder[e[x] * e[x], x ^ 17 - 1, x, Modulus → 2]

Out[5]= 1 + x^3 + x^5 + x^6 + x^7 + x^10 + x^11 + x^12 + x^14

Compute the generator polynomial

In[6]:= PolynomialGCD[e[x], x ^ 17 - 1, Modulus → 2]


Out[6]= 1 + x^3 + x^4 + x^5 + x^8

In[7]:= t4 = Table[x ^ k, {k, 0, 3}]


Out[7]= {1, x, x^2, x^3}

In[8]:= t3 = Table[x ^ k, {k, 0, 2}]


Out[8]= {1, x, x^2}

Construct a burst of length 4 and 3, respectively

In[9]:= Allwords[l_] := Table[IntegerDigits[k, 2, l], {k, 0, 2^l - 1}];

        bursts4 = Map[Join[{1}, #, {1}] &, Allwords[2]]
Out[10]=
{{1, 0, 0, 1}, {1, 0, 1, 1}, {1, 1, 0, 1}, {1, 1, 1, 1}}

In[11]:= polbursts4 = Map[#.t4 &, bursts4]

Out[11]=
{1 + x^3, 1 + x^2 + x^3, 1 + x + x^3, 1 + x + x^2 + x^3}

In[12]:= bursts3 = Map[Join[{1}, #, {1}] &, Allwords[1]]


Out[12]=

{{1, 0, 1}, {1, 1, 1}}


In[13]:= polbursts3 = Map[#.t3 &, bursts3]

Out[13]=
{1 + x^2, 1 + x + x^2}

Use error trapping decoding to find remainders for bursts of length 1,2,3 and 4, respectively

In[14]:= p4 = Flatten[Table[PolynomialRemainder[x^s * polbursts4[[i]],
            1 + x^3 + x^4 + x^5 + x^8, x, Modulus → 2], {s, 0, 16}, {i, 1, 4}]]
Out[14]=
{1 + x^3, 1 + x^2 + x^3, 1 + x + x^3, 1 + x + x^2 + x^3, x + x^4, x + x^3 + x^4, x + x^2 + x^4, x + x^2 + x^3 + x^4, x^2 + x^5,
 x^2 + x^4 + x^5, x^2 + x^3 + x^5, x^2 + x^3 + x^4 + x^5, x^3 + x^6, x^3 + x^5 + x^6, x^3 + x^4 + x^6, x^3 + x^4 + x^5 + x^6, x^4 + x^7,
 x^4 + x^6 + x^7, x^4 + x^5 + x^7, x^4 + x^5 + x^6 + x^7, 1 + x^3 + x^4, 1 + x^3 + x^4 + x^7, 1 + x^3 + x^4 + x^6, 1 + x^3 + x^4 + x^6 + x^7,
 x + x^4 + x^5, 1 + x + x^3, x + x^4 + x^5 + x^7, 1 + x + x^3 + x^7, x^2 + x^5 + x^6, x + x^2 + x^4, 1 + x^2 + x^3 + x^4 + x^6,
 1 + x + x^2 + x^3 + x^5, x^3 + x^6 + x^7, x^2 + x^3 + x^5, x + x^3 + x^4 + x^5 + x^7, x + x^2 + x^3 + x^4 + x^6, 1 + x^3 + x^5 + x^7,
 x^3 + x^4 + x^6, 1 + x^2 + x^3 + x^6, x^2 + x^3 + x^4 + x^5 + x^7, 1 + x + x^3 + x^5 + x^6, x^4 + x^5 + x^7, x + x^3 + x^4 + x^7,
 1 + x^6, x + x^2 + x^4 + x^6 + x^7, 1 + x^3 + x^4 + x^6, 1 + x^2 + x^3, x + x^7, 1 + x^2 + x^4 + x^7, x + x^4 + x^5 + x^7,
 x + x^3 + x^4, 1 + x^2 + x^3 + x^4 + x^5, 1 + x + x^4, 1 + x^2 + x^3 + x^4 + x^6, x^2 + x^4 + x^5, x + x^3 + x^4 + x^5 + x^6,
 x + x^2 + x^5, x + x^3 + x^4 + x^5 + x^7, x^3 + x^5 + x^6, x^2 + x^4 + x^5 + x^6 + x^7, x^2 + x^3 + x^6, 1 + x^2 + x^3 + x^6,
 x^4 + x^6 + x^7, 1 + x^4 + x^6 + x^7, x^3 + x^4 + x^7, x + x^3 + x^4 + x^7, 1 + x^3 + x^4 + x^7, 1 + x + x^3 + x^4 + x^7}

Locate one of the duplicates

In[15]:= Position[p4, 1 + x ^ 2 + x ^ 3]
Out[15]=
{{2}, {47}}

In[16]:= p3 = Flatten[Table[PolynomialRemainder[x^s * polbursts3[[i]],
            1 + x^3 + x^4 + x^5 + x^8, x, Modulus → 2], {s, 0, 16}, {i, 1, 2}]]
Out[16]=
{1 + x^2, 1 + x + x^2, x + x^3, x + x^2 + x^3, x^2 + x^4, x^2 + x^3 + x^4, x^3 + x^5, x^3 + x^4 + x^5, x^4 + x^6, x^4 + x^5 + x^6,
 x^5 + x^7, x^5 + x^6 + x^7, 1 + x^3 + x^4 + x^5 + x^6, 1 + x^3 + x^4 + x^5 + x^6 + x^7, x + x^4 + x^5 + x^6 + x^7,
 1 + x + x^3 + x^6 + x^7, 1 + x^2 + x^3 + x^4 + x^6 + x^7, 1 + x + x^2 + x^3 + x^5 + x^7, 1 + x + x^7, 1 + x + x^2 + x^5 + x^6,
 1 + x + x^2 + x^3 + x^4 + x^5, x + x^2 + x^3 + x^6 + x^7, x + x^2 + x^3 + x^4 + x^5 + x^6, 1 + x^2 + x^5 + x^7,
 x^2 + x^3 + x^4 + x^5 + x^6 + x^7, 1 + x + x^4 + x^5 + x^6, 1 + x^6 + x^7, x + x^2 + x^5 + x^6 + x^7, 1 + x + x^3 + x^4 + x^5 + x^7,
 1 + x^2 + x^4 + x^5 + x^6 + x^7, 1 + x + x^2 + x^3 + x^6, 1 + x + x^4 + x^6 + x^7, x + x^2 + x^3 + x^4 + x^7, 1 + x + x^2 + x^3 + x^4 + x^7}

In[17]:= p2 = Table[PolynomialRemainder[x^s (1 + x),
            1 + x^3 + x^4 + x^5 + x^8, x, Modulus → 2], {s, 0, 16}]
Out[17]=
{1 + x, x + x^2, x^2 + x^3, x^3 + x^4, x^4 + x^5, x^5 + x^6, x^6 + x^7,
 1 + x^3 + x^4 + x^5 + x^7, 1 + x + x^3 + x^6, x + x^2 + x^4 + x^7, 1 + x^2 + x^4, x + x^3 + x^5,
 x^2 + x^4 + x^6, x^3 + x^5 + x^7, 1 + x^3 + x^5 + x^6, x + x^4 + x^6 + x^7, 1 + x^2 + x^3 + x^4 + x^7}


In[18]:= p1 = Table[PolynomialRemainder[x^s,
            1 + x^3 + x^4 + x^5 + x^8, x, Modulus → 2], {s, 0, 16}]
Out[18]=
{1, x, x^2, x^3, x^4, x^5, x^6, x^7, 1 + x^3 + x^4 + x^5, x + x^4 + x^5 + x^6, x^2 + x^5 + x^6 + x^7, 1 + x^4 + x^5 + x^6 + x^7,
 1 + x + x^3 + x^4 + x^6 + x^7, 1 + x + x^2 + x^3 + x^7, 1 + x + x^2 + x^5, x + x^2 + x^3 + x^6, x^2 + x^3 + x^4 + x^7}

Check if there exist duplicates

In[19]:= p = Tally[Join[p4, p3, p2, p1]];
         res = Select[p, #[[2]] > 1 &]
Out[20]=
{{1 + x^2 + x^3, 2}, {1 + x + x^3, 2}, {x + x^3 + x^4, 2}, {x + x^2 + x^4, 2},
 {x^2 + x^4 + x^5, 2}, {x^2 + x^3 + x^5, 2}, {x^3 + x^5 + x^6, 2}, {x^3 + x^4 + x^6, 2}, {x^4 + x^6 + x^7, 2},
 {x^4 + x^5 + x^7, 2}, {1 + x^3 + x^4 + x^7, 2}, {1 + x^3 + x^4 + x^6, 2}, {x + x^4 + x^5 + x^7, 2},
 {1 + x^2 + x^3 + x^4 + x^6, 2}, {x + x^3 + x^4 + x^5 + x^7, 2}, {1 + x^2 + x^3 + x^6, 2}, {x + x^3 + x^4 + x^7, 2}}

Count the number of pairs found

In[21]:= Length[res]
Out[21]=

17
