BCH Chapter 1


ABSTRACT

In the modern information era, network size and the number of users are increasing day by day. Communication systems have evolved from analog to digital format, have become part of everyday life, and cater to various applications such as basic voice services, video broadcasting, HD video calling, home automation, and artificial intelligence. As a result, data traffic and user density increase exponentially. This calls for a system with enough capacity to handle such high-volume data communication without error, and makes it necessary to optimize Error Detection and Correction (EDC) coding for the present scenario. An error detection and correction code is a method that adds redundant data to strengthen the robustness of information sent over a noisy communication channel. This motivates the search for a coding scheme that succeeds even at high error probability.

First, the BCH code is designed with the parity layered algorithm. The reason for considering BCH codes is their low decoding complexity for a multiuser channel. The receiver of a multi-user channel usually has more components than that of a single-user channel, which makes a low-complexity coding system compulsory. So the parity structure is combined with the layered algorithm by properly designing the parity check matrix. This BCH code reduces the number of iterations in the decoding process and also handles random errors well. Burst errors are a common occurrence in digital communication systems, broadcasting systems, and digital storage devices. The advantage of the BCH code is its low error output capability. To make it better against burst errors, two interleavers are introduced to convert burst errors into random errors. This scheme is used as an outer code along with the low-complexity BCH as the inner code in the first product code system.
CHAPTER 1

INTRODUCTION

The success of a digital communication system lies in its ability to deliver the message reliably to the receiver. The channel is an integral part of the data transmission system and may be disturbed by noise. The channel can be coaxial cable, wireless, fiber cable, twisted cable, etc.

In the receiver, the first process is the extraction of the message from the information-bearing waveform created by the modulation. This is handled by the corresponding demodulator. Next, the channel decoder corrects the errors that occurred during transmission using the incorporated EDC algorithm and delivers the original message without parity bits. The source decoder then converts the binary output of the channel decoder into the symbols of the source information.

Figure 1.1 Block diagram of digital communication system


1.2 TYPES OF ERROR

Channel error is an error in the signal that arrives at the receiver of a communication system. It is caused by noise in the channel. Noise is an undesirable signal that is added to, or otherwise affects, the message bits transmitted over the channel. It is random and unpredictable in nature; it has neither a single frequency nor a constant amplitude. Noise affects the recovery of the message, and the resulting errors can be broadly classified as random errors and burst errors.

Random error is normally attributed to white Gaussian noise, which induces normally distributed random errors in the data. It arises in channels where noise corrupts individual bits of the transmitted packet at random. Such errors are usually caused by thermal noise and by the radiation collected by the receiving antenna in communication networks. Moreover, the power spectral density of the Gaussian noise at the receiver input is white in most cases: just as white light is an equal combination of all colours, white noise affects all frequencies of the message equally. Error control codes such as LDPC and BCH can handle random errors effectively. Burst errors are bit errors that occur simultaneously in two or more bits positioned near one another. They are caused by defects in storage devices, fading in a communication channel, and multipath propagation. Burst errors fill large blocks of data with faults, and such catastrophic channel faults are highly problematic in digital communication networks. In general, in the face of disastrous burst errors, all coding schemes struggle to reconstruct the message, and the data may be unrecognizable at the receiver. However, by analysing the received data as symbols, codes like Reed-Solomon can detect the existence of a burst error.
Reddy and Robinson (1972) pointed out that one coding algorithm can control burst errors of length up to N, while another coding method can correct up to M random errors. When both algorithms are combined in a coding system, it can correct random errors m < M and burst errors n < N. To handle both types of error on a channel simultaneously, product codes can be used. Blomqvist (2020) describes how error correction using product codes can control various errors and can correct beyond half of the minimum distance.

1.3 ERROR CONTROL TECHNIQUES

1.3.1 Automatic Repeat Request

Automatic Repeat Request (ARQ) works as follows: when the receiver identifies a mistake in the information, it sends a negative acknowledgment asking the source to retransmit, as shown in Figure 1.2. Information transmitted over the channel from the sender to the receiver is exposed to disturbance, so the transmitted data may get corrupted by noise. In such cases, the receiver asks the sender to retransmit, causing repeated transmission of the same information. Such repeated transmissions not only introduce delay in data communication but also consume extra channel bandwidth.
Figure 1.2 Process of automatic repeat request

1.3.2 Forward Error Correction

Figure 1.3 Process of forward error correction


As a substitute for ARQ, the Forward Error Correction (FEC) method is added in the channel encoder and decoder. Here, the data to be transmitted is supplemented with parity bits before being sent over the channel. The included redundant bits help the recipient correct errors that happen during transmission. As shown in Figure 1.3, when the receiver identifies an error in the received data, it finds the error location in the data stream and corrects it using error control codes at the receiver itself. FEC coding is needed because it is a far superior option to transmitting the information over the same channel again and again until the destination confirms that the received data is correct.

1.4 ERROR CONTROL CODE IN DIGITAL COMMUNICATION

FEC has emerged as a necessary element in the present digital communication system because of its ability to sort out errors at the receiver end. This scheme is broadly categorized into convolutional codes and block codes.

Figure 1.4 Classification of error control code


1.4.1 Shannon’s Information Theory

Shannon explained that every communication should reserve bits for forward error correction coding to receive the data without error:

E = p log2(1/p) + (1 − p) log2(1/(1 − p)) (1.1)

where
p - the probability of a bit being transmitted without error
(1 − p) - the probability of error
E - the bits reserved for error correction

Shannon's information theory helps to estimate the amount of coding data that can be added to the message, which enables the destination to independently rectify errors that happen in the channel. Figure 1.5 shows that the Shannon limit on Eb/No is a function of the channel's spectral efficiency (Cp). When Cp = 2, the Shannon limit on Eb/No is 1.76 dB; when Cp = 1, the limit is 0 dB; and as Cp approaches 0, it tends to -1.59 dB.

Cp = R / BW (1.2)
Figure 1.5 Eb/No vs Spectral efficiency

This value is called the Shannon power efficiency limit. It specifies the minimum energy per bit required at the transmitter for reliable communication and is one of the important measures in designing a coding scheme.
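The entropy expression (1.1) and the limit values quoted above follow from standard formulas; a minimal sketch, where the Eb/No bound (2^Cp − 1)/Cp is derived from the Shannon capacity theorem (the helper names are illustrative):

```python
import math

def entropy_bits(p):
    """Binary entropy E = p*log2(1/p) + (1-p)*log2(1/(1-p)), Eq. (1.1)."""
    if p in (0.0, 1.0):
        return 0.0
    return p * math.log2(1 / p) + (1 - p) * math.log2(1 / (1 - p))

def shannon_limit_db(cp):
    """Shannon limit on Eb/No (dB) for spectral efficiency Cp = R/BW.

    From C = BW * log2(1 + SNR), reliable communication requires
    Eb/No >= (2**Cp - 1) / Cp.
    """
    return 10 * math.log10((2 ** cp - 1) / cp)

print(round(shannon_limit_db(2), 2))     # 1.76 dB
print(round(shannon_limit_db(1), 2))     # 0.0 dB
print(round(shannon_limit_db(1e-9), 2))  # -1.59 dB (as Cp -> 0, bound -> ln 2)
```

As Cp shrinks, the bound approaches ln 2, i.e. -1.59 dB, the ultimate power efficiency limit cited above.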

1.4.2 Block Codes

Elias (1954) introduced block error-correcting codes. Block codes divide the incoming message stream into blocks of 'k' bits, then apply a coding procedure that produces 'q' parity bits. After encoding, the error control bits (q) are appended to the message block, and the resulting codewords have n bits.

Code word (n) = Block message bits (k) + Parity bits (q) (1.3)

There are 2^k codewords, and each codeword is a linear combination of other codewords. Block codes are classified into linear and nonlinear codes. Linear codes are widely used because they have efficient properties, a concise description, and easier coding than nonlinear codes. Figure 1.6 shows the general structure of the block encoding system. Here, the blocks from the input stream (i) are encoded with parity bits (p) to produce the code set (v). Block codes do not require any buffers and can correct a specified number of errors. They are used in systems where the BER is relatively low, and they support very high code rates. The code rate is the ratio of the message length (k) to the codeword length (n). LDPC, Hamming, BCH, and RS codes are some of the leading block codes.

Figure 1.6 General structure of block encoding System
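Equation (1.3) can be illustrated with a minimal (7,4) Hamming encoder sketch: k = 4 message bits plus q = 3 parity bits give an n = 7 codeword. The parity equations follow the standard Hamming construction and are illustrative only:

```python
def hamming74_encode(msg):
    """Encode 4 message bits with 3 parity bits: n(7) = k(4) + q(3)."""
    d1, d2, d3, d4 = msg
    p1 = d1 ^ d2 ^ d4     # each parity bit covers a subset of message bits
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [d1, d2, d3, d4, p1, p2, p3]

cw = hamming74_encode([1, 0, 1, 1])
print(cw)       # [1, 0, 1, 1, 0, 1, 0]
print(len(cw))  # 7
```

With k = 4 there are 2^k = 16 such codewords, matching the count given above.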

1.4.2.1 Low-Density Parity Check (LDPC) code

Gallager (1962) introduced this code as one of the linear block codes for error correction. Its parity check matrix has many more 0's than 1's; hence it is called a low-density parity check code. LDPC codes have a code rate close to the Shannon channel limit, coming within about 0.04 dB of it. This code uses an iterative decoding method that is easy to implement and has enhanced performance in the error-rectifying process, providing a low error floor in the BER curve. The code can be constructed for any block length and code rate. Figure 1.7 shows the function of the LDPC code with variable nodes (v) and parity check nodes (c).

Figure 1.7 Depiction of LDPC code

1.4.2.2 Bose, Chaudhuri, and Hocquenghem (BCH) Code

Bose & Ray-Chaudhuri (1960) proposed the BCH codes. Due to their cyclic structure and finite-field algorithmic approach, the encoding and decoding processes are simple. Kashani & Shiva (2006) explained that the BCH code is characterized through the Galois field for wireless digital networks. This code can address different random errors, and the message size can be chosen based on the required error correction capability. It is a cyclic code, yet it has the linear properties of the Hamming code. BCH codes are characterized by polynomials; a generator polynomial is picked, after which encoding is executed. The advantage of this code over others is that it gives precise control over the error correction ability and can be designed with a low-complexity architecture. The BCH code is a generalization of Hamming codes and has the following parameters:

• Block length: n = 2^m − 1
• Number of parity-check digits: n − k ≤ m·t
• Minimum distance: dmin ≥ 2t + 1
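These parameter relations can be evaluated directly; a small sketch (the helper name is illustrative):

```python
def bch_parameters(m, t):
    """Return (block length, max parity bits, design distance)
    for a binary BCH code with the parameters listed above."""
    n = 2 ** m - 1        # block length: n = 2^m - 1
    parity_max = m * t    # number of parity-check digits: n - k <= m*t
    d_min = 2 * t + 1     # minimum distance: d_min >= 2t + 1
    return n, parity_max, d_min

# e.g. m = 4, t = 2 gives the double-error-correcting (15, 7) BCH code:
# n = 15, at most 8 parity digits (indeed n - k = 15 - 7 = 8), d_min >= 5
print(bch_parameters(4, 2))  # (15, 8, 5)
```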
1.4.2.3 Reed and Solomon (RS) code

In modern communication, the RS code is used in numerous applications such as telecommunications, deep-space communication, and in particular applications that require burst error correction. It works well for channels that have memory. As the message size is expanded, this code becomes more prominent. The RS code is a linear systematic code. If too many burst errors occur, this code fails due to its bounded-distance decoding property: it can correct up to half of its parity symbols. The encoding and decoding of the RS code can be designed like a BCH code; the variation is that here, product and sum are performed in the Galois field. The same encoding and decoding hardware of the binary BCH code can likewise be utilized for the RS code. Tan et al. (2014) used Reed-Solomon for a digital sensor-based system to overcome burst errors.
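The bounded-distance property mentioned above, correcting up to half the parity symbols, can be illustrated with the well-known RS(255, 223) code over GF(2^8); this is a standard example, not the specific code used later in this work:

```python
def rs_correctable_symbols(n, k):
    """An RS(n, k) code corrects up to t = (n - k) // 2 symbol errors,
    i.e. half of its n - k parity symbols (bounded-distance decoding)."""
    return (n - k) // 2

t = rs_correctable_symbols(255, 223)
print(t)      # 16 symbol errors out of 255 symbols
print(t * 8)  # up to 128 erroneous bits, if the errors cluster inside symbols
```

Because decoding works on symbols, a burst that corrupts many bits inside a few 8-bit symbols still counts as only a few symbol errors, which is why RS codes handle burst errors well.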

1.4.3 Convolutional Codes

Convolutional codes are an alternative to block coding in which encoding and decoding take place on a continuous data bitstream instead of the static block-code process. In this code, a set of bits is fed into a shift register and the codeword is produced using a designed XOR system; when the next message bit comes in, the data in the register is shifted and the next codeword bits are produced. The process continues up to the last bit of the message. The output is a combination of the previous bits and the present bit, so the encoder requires memory for buffering. Figure 1.8 shows the general structure of the convolutional encoding system: a continuous bit stream is converted into a single codeword. The widely used decoding algorithm is Viterbi, because it gives an efficient decoding system. Mahdavifar et al. (2014) implemented the convolutional code as a product code for digital video broadcasting and wireless sensor networks. Trellis-coded modulation and turbo codes are convolutional coding systems.
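The shift-register-and-XOR process described above can be sketched as follows; this assumes the common rate-1/2, constraint-length-3 encoder with generators (7, 5) in octal, an illustrative choice rather than a specific encoder from this work:

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, constraint length 3,
    generators (7, 5) octal: each input bit is XORed with the
    shift-register contents to produce two output bits."""
    s1 = s2 = 0                   # shift register holds the two previous bits
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)   # generator 111 (octal 7)
        out.append(b ^ s2)        # generator 101 (octal 5)
        s1, s2 = b, s1            # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```

Note that each output pair depends on the current bit and the two previous bits, which is exactly the memory requirement the text describes.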

1.4.3.1 Turbo Code

A turbo code has two Recursive Systematic Convolutional (RSC) codes that process the incoming message in parallel and combine at the output of the encoder. The turbo encoder has three outputs: one encodes the incoming message directly using a convolutional algorithm such as RSC, another convolutional encoder receives the same message after interleaving, and the third is the uncoded original message. Turbo codes have high error correction capability, so the information rate can approach the theoretical limit with an error rate close to zero. For this reason, Condo and Masera (2014) implemented turbo codes for deep-space communication. These codes are also useful in UMTS, CDMA2000, mobile WiMAX systems, etc.

1.4.4 Product Codes

Figure 1.9 Product code structure


A product code is a technique to encode a block of data with one EDC code along the rows and another EDC code along the columns. First, the input data stream is divided into blocks of k1 bits and q1 parity bits are added by one algorithm:

n1 = k1 + q1 (1.4)

Then the n1 data set is exchanged from rows to columns using an interleaver, and the second algorithm adds q2 parity bits. Considering n1 as k2:

n2 = k2 + q2 (1.5)

Minimum distance of the product code = d1 × d2 (1.6)

where d1 = minimum distance of codeword C1
d2 = minimum distance of codeword C2

So, the product code increases the minimum distance, which is the important parameter for error correction ability. Communication that is vulnerable to errors can be handled by incorporating a code capable of correcting random errors in the rows and a code capable of correcting burst errors in the columns, or vice versa. Based on the application, four codes have been chosen. The first product code, designed for DVB, uses LDPC and BCH codes. The second product code is designed for mobile communication, which suffers more burst errors, so LDPC and RS codes have been chosen. The third product code is designed for deep-space communication; here, the turbo code has been chosen along with LDPC to control burst errors.
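Equations (1.4) through (1.6) can be checked with a small sketch; the component codes chosen here (a Hamming and a repetition code) are illustrative only:

```python
def product_code_params(n1, k1, d1, n2, k2, d2):
    """Combine two block codes row/column-wise: the product code is an
    (n1*n2, k1*k2) code with minimum distance d1*d2, per Eq. (1.6)."""
    return n1 * n2, k1 * k2, d1 * d2

# e.g. a (7, 4, 3) Hamming code along the rows and a (3, 1, 3)
# repetition code along the columns give a (21, 4) product code
# with d_min = 9, so it corrects up to (9 - 1) // 2 = 4 errors,
# more than either component code alone.
n, k, d = product_code_params(7, 4, 3, 3, 1, 3)
print(n, k, d)  # 21 4 9
```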

1.5 CHANNEL ERROR MODELS IN COMMUNICATION


Channel error models like the Additive White Gaussian Noise (AWGN) channel and Rayleigh fading channel are used to create random errors and burst errors, respectively, in the communication design model. This gives an error estimate similar to transmitting in a real-time wireless digital communication scenario and helps to analyze the performance of coding algorithms. Chowdhury et al. (2017) described that the Rician fading model is quite inefficient for real-time application, so the AWGN and Rayleigh models are considered for adding the noise that occurs in the channel as the signal moves from transmitter to receiver.

1.5.1 Additive White Gaussian Noise (AWGN) channel

The performance of the coding system is analyzed for its ability to correct random errors in the presence of noise, especially thermal noise. This noise is additive to the signal while it travels from transmitter to receiver. The autocorrelation of AWGN is zero for any nonzero lag since its power spectral density is uniform, and the noise samples follow a Gaussian random process. The AWGN channel is a linear and time-invariant system. AWGN is used to create random errors in the message bitstream.

Figure 1.10 Power spectral density of the AWGN channel model (McClaning & Vito 1959)
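A minimal sketch of how the AWGN model is typically used to create random bit errors; the BPSK mapping and the noise variance derived from Eb/No are standard simulation assumptions, not specifics of this chapter:

```python
import math
import random

random.seed(7)  # fixed seed so the run is repeatable

def awgn_channel(symbols, ebno_db):
    """Add white Gaussian noise to unit-energy BPSK symbols for a
    given Eb/No in dB (uncoded, rate-1 transmission assumed)."""
    sigma = math.sqrt(1 / (2 * 10 ** (ebno_db / 10)))
    return [s + random.gauss(0.0, sigma) for s in symbols]

bits = [random.randint(0, 1) for _ in range(20000)]
tx = [1.0 if b else -1.0 for b in bits]   # BPSK mapping: 1 -> +1, 0 -> -1
rx = awgn_channel(tx, 6.0)
ber = sum((r > 0) != (b == 1) for b, r in zip(bits, rx)) / len(bits)
print(ber)  # close to the theoretical Q(sqrt(2*Eb/No)), roughly 2.4e-3 at 6 dB
```

Each noise sample hits one symbol independently, so the resulting errors are scattered at random through the bitstream, exactly the random-error behaviour described above.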
1.5.2 Rayleigh fading channel

Rayleigh fading is used to analyze the performance of the coding system when burst errors occur in the digital communication system. This model creates burst errors in the signal, which arise in an actual system due to multipath reception. Here, deep fades occur roughly every half wavelength due to the random process. In the Rayleigh fading channel model, the amplitude follows the Rayleigh distribution and the derivative of the amplitude follows a Gaussian distribution that is independent of the amplitude; the phase is uniform and the derivative of the phase is Gaussian and amplitude-dependent. The model is useful for calculating the Probability Density Function (PDF) of the incoming signal, the rate of fluctuation happening in the channel, and the effect of multiple interferences on the signal in satellite and mobile communication. The depth of burst errors can be changed by modifying the variance (σ), as shown in Figure 1.11.
Figure 1.11 Probability density function of the Rayleigh channel model (Siddiqui 1964)
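The Rayleigh amplitude described above, formed from two independent zero-mean Gaussian components (the in-phase and quadrature paths), can be generated with a quick sketch:

```python
import math
import random

random.seed(1)

def rayleigh_sample(sigma):
    """Amplitude of two independent zero-mean Gaussians follows
    the Rayleigh distribution with parameter sigma."""
    x = random.gauss(0.0, sigma)   # in-phase component
    y = random.gauss(0.0, sigma)   # quadrature component
    return math.hypot(x, y)

def rayleigh_pdf(r, sigma):
    """Rayleigh PDF: f(r) = (r / sigma^2) * exp(-r^2 / (2 sigma^2))."""
    return (r / sigma ** 2) * math.exp(-r ** 2 / (2 * sigma ** 2))

samples = [rayleigh_sample(1.0) for _ in range(50000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))  # ~ sigma * sqrt(pi/2) ~ 1.25
```

Raising σ widens the PDF and deepens the fades, which is how the depth of the induced burst errors is controlled in the simulation.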
1.6 AN OVERVIEW OF PRODUCT CODES IN DIGITAL COMMUNICATION SYSTEM

Digital communication is a self-sustained transmission system that can correct errors while recognizing the message at its receiver through error control codes. Amid rapid technological development, enhancing these error control codes to suit the present scenario becomes necessary.

1.6.1 Low Density Parity Check Code

Jian-bin et al. (2018) constructed a QC-LDPC matrix applied in Wireless Sensor Networks (WSN). The base matrix shortens RS codes and provides good minimum distance, good BER performance, and low complexity in hardware design and implementation. The simulation output shows a WSN system with good energy efficiency. Along with fully structured features, good minimum-distance codes are used to ensure the girth of the designed parity check matrix. The error correction is similar to the PEG algorithm, and the construction can be used in practical ECC applications for WSN.

Peng et al. (2018) explain a deterministic structure to resolve the high coding complexity and inflexible code-length selection in quasi-cyclic low-density parity check codes. The perfect cyclic shifts ensure less storage space as well as reduced hardware complexity. The simulations show improvements of 0.13 dB and 0.32 dB for perfect cyclic difference sets and a BER of 10^-6 over AWGN. A special structure is designed and simulated with combinatorial mathematical models, and the sum-product algorithm is used to give excellent SNR performance without error levelling.
Sang & Joo (2006) analyse the effect of block interleaving in LDPC codes. Since the decoding here is soft compared to other RS codes, the LDPC-Turbo codes achieve higher performance, which can be improved further. The average number of iterations is minimized in the decoding part of the LDPC-Turbo codes. The simulation results confirm the better performance of the RS-Turbo codes. The 2D block interleaving is used in the low-BER region and is suitable for high-quality communication systems.
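The row/column exchange performed by a block interleaver, as used in such schemes (and in the product codes above), can be sketched as follows; this is a minimal illustration, not the specific interleaver of any cited work:

```python
def block_interleave(bits, rows, cols):
    """Write a stream into a rows x cols array row by row and read it
    out column by column. A channel burst then lands on symbols that
    are far apart after de-interleaving at the receiver."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

data = list(range(12))             # positions stand in for 12 bits
tx = block_interleave(data, 3, 4)  # interleave before the channel
print(tx)  # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
```

A 3-bit burst hitting the first three transmitted positions corrupts original positions 0, 4, and 8, so the burst is spread into isolated random-looking errors that a random-error code can correct.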

Xiumin et al. (2019) propose a system with density evolution analysis using Gaussian approximation. Here the Belief Propagation (BP) algorithm and the Turbo Decoding Message Passing (TDMP) algorithm are used, and a convergent result is obtained from the symmetry condition. The density evolution method provides a simple theoretical Bayes algorithm. The simulation demonstrates the convergence of the speedy decoding TDMP algorithm with a normalized factor. Six code rates of LDPC codes used under the IEEE 802.16e standard are simulated.

Moataz & Ashraf (2015) propose a system of many micro-sensors that constitute a wireless sensor network. Battery lifetime and power consumption are the most important considerations for reliable and sustainable network operation. LDPC and ECC ensure the reliability of the network while operating at low SNR with less complex circuits. Energy efficiency is achieved with concatenated LDPC codes, and the simulated outputs show improved BER. The energy-efficient system proposed here is suitable for wireless sensor networks; the control coding parameters include frame size, code rate, and BER.

Alireza et al. (2019) designed a new binary message-passing decoding algorithm based on Bounded Distance Decoding (BDD). Although the exchanged messages are hard decisions, which reduces the data flow, the reliability of the decoder output is preserved. Density evolution analysis is used for generalized low-density parity check codes. The code achieves performance gains of up to 0.29 dB and 0.31 dB. The method is expensive and has a minor increase in complexity; it is mainly applied in fiber-optic communication systems.

Lin et al. (2013) propose an early termination technique for low-density parity check codes. The decoding part focuses on iterative operations with high reliability using the soft log-likelihood ratio, and the threshold value is maintained throughout all the layers. It reduces up to 60% of the decoding effort with negligible loss of coding gain at an Eb/No of 3.0 dB. This criterion gives the highest reliability in simulations on the IEEE 802.16e standard, with high efficiency at the highest data rate.

Jin Xie et al. (2009) propose an improved layered min-sum algorithm for decoding regular low-density parity check codes. It ensures high decoding speed with regular updates. The simulation results were compared with the analytical outputs and found to agree. The number of iterations is reduced by 8.4% - 62.6%, but the hardware complexity increases. All the layers are enhanced with improved efficiency and amplification.

Tian et al. (2020) describe a decoder for non-binary LDPC codes with less complexity. Even though non-binary LDPC is widely used, its high hardware usage is a concern in implementation. To solve this issue, a trellis-based min-max decoder with an optimized check node processing unit architecture is proposed, resulting in a one-third hardware reduction and an increased clock frequency.

Eran et al. (2017) propose a power-efficient LDPC decoder by adding a trellis dependency to any quasi-cyclic LDPC code. The system is encoded by a parallel concatenated QC-RSC encoder based on the trellis. The resulting system has half the hardware complexity of a conventional QC-LDPC code for the same data rate, SNR, and BER. This parallel concatenated LDPC code with the QC-MAP decoder outperforms the conventional QC-LDPC code by 0.5 dB, so the system is useful in high-speed mobile systems.

Bocharova et al. (2019) present a decoding method for LDPC codes using the Maximum Likelihood (ML) method, which reduces decoder complexity. In this method, when ML decoding fails to decode and correct the errors, the system combines ML decoding with the list decoding method in the Binary Erasure Channel (BEC). The difference between BP decoding and the BP list decoding method is presented for the WiMAX standard.

Sunita & Bhaaskaran (2013) explain that regular error control codes are not able to correct multiple errors in memories, although a significant number of them are capable of detecting multiple errors. This work presents a special decoding algorithm to detect and correct multiple errors; the algorithm can correct a maximum of 11 errors in 32-bit data and a maximum of nine errors in 16-bit data. The proposed strategy is often used to improve the correcting ability in the presence of multiple bit upsets. A persistent set of data bits is affected when highly energetic particles from external radiation strike and cause soft errors. The method performs better than the previously known technique of error detection and correction using matrix codes.
The system of Qin et al. (2017) sorts all the reliability vector elements into separate sets. It is noted that trellis-based extended min-sum decoding methods have a path structure, which reduces the search across all trellises of the check nodes. The simulation results show that the described methods decode the low-density parity-check code better, also in terms of complexity.

The system of Eran et al. (2015) gives an idea about trellis-based quasi-cyclic Low-Density Parity-Check codes, built by introducing a trellis-based convolutional dependency on any type of quasi-cyclic LDPC. It also describes the QC Viterbi decoding algorithm. These two methods are simulated and compared with a conventional LDPC code, using belief propagation for the decoder. The results show that the proposed method gains about 1 dB for the same BER and complexity.

Aragon et al. (2019) present an alternative family of rank metric codes, Low Rank Parity Check (LRPC) codes, together with an efficient probabilistic decoding algorithm. This family of codes is often studied because it can be viewed as the rank-metric counterpart of classical LDPC codes. The work is a comprehensive version of the structure presenting LRPC codes, with significant new functionality.

1.6.2 LDPC and BCH product code

Coşkun et al. (2020) describe a Successive Cancellation List (SCL) decoding system for product codes. Generally, SCL decoding depends on a 2×2 Hadamard kernel, which is used as a description of the product system. The analysis focuses on product codes based on extended Hamming codes used in wireless communication systems. The results show that SCL algorithms perform similarly to belief propagation algorithms. It is also highlighted that the product code performs well when a high-rate algorithm is used as the outer code.

Li & Zhang (2010) present a scheme for product codes in which TCM codes and BCH codes are concatenated for error control coding. The TCM encoder and decoder are embedded on-chip while the BCH hardware is kept off-chip. An improved TCM is also proposed that relieves the burden on the BCH code when correcting errors. The result shows improved error correction compared to the BCH code using the same number of extra redundant bits.

Kihoon et al. (2012) present a two-iteration concatenated Bose-Chaudhuri-Hocquenghem code. It is high speed and low complexity, suitable for 100 Gb/s optical communications with high data processing rates. A low-complexity syndrome computation is used in the architecture, ensuring high-speed processing and a decoder output BER of 10^-5. For the intended applications, the design and implementation of the two-iteration concatenated BCH code give better performance, which the block interleaving methods make possible. It is also applicable to very high data processing rates with good error correction.

Uryvsky & Osypchuk (2014) show that the complexity of searching for an LDPC check matrix (H) with good error-correcting ability grows exponentially with increasing codeword length. BCH codes do not need such a matrix search because of the nature of their encoding/decoding processes, which is an advantage of BCH codes. The research showed that LDPC codes can be characterized as anti-noise codes with better error-correction properties. The relative number of corrected errors per codeword is almost the same for LDPC (n=1000) and BCH (n=1023) codes. LDPC codes have slightly better error-correcting ability than BCH codes if the code rate R < 0.7, assuming other parameters such as code length, signal-to-noise ratio, modulation method, and required reliability are the same for LDPC and BCH. The code rates are obtained for LDPC (n=1000) and BCH (n=1023) codes over the Gaussian channel when the signal-to-noise value is known. According to the numerical LDPC and BCH code rate values, the LDPC code can be recommended as more effective when the signal-to-noise value is larger than 7 dB.

The system described in Spinner et al. (2016) gives a snapshot of the soft-input algorithm and describes a structure for concatenated codes. It works on the concept of the supercode trellis, which is more efficient than an ordinary trellis in terms of complexity: the supercode trellis reduces the memory required for BCH codes. This is achieved by using a sequential decoding algorithm rather than an ordinary decoding algorithm, resulting in better hardware architecture. The complexity of the decoder increases by 82% compared to the system of Spinner & Freudenberger (2015), so this system is useful where a low residual error rate is required.

The system in Xinmiao et al. (2011) prescribes a soft-decision algorithm for the BCH decoder instead of hard-decision decoding for satellite systems. It exploits the cyclic property with non-uniform quantization to reduce the hardware. The simulation results show that the described system has a lower-cost hardware architecture and better BER performance than a conventional hard-decision algorithm.

Swaminathan et al. (2020) propose a blind estimation method for product codes for the scenario in which the receiver is non-cooperative and blind reconstruction of the parameters is needed. Generally, product codes use many components such as different coders, interleavers, etc., and these components and their parameters affect the performance. The efficiency of the algorithm in terms of the probability of correct estimation is analyzed using the product system of BCH and RS. The results show that the performance increases with the code dimension and modulation order.

1.6.3 LDPC and RS product code

Qiu et al. (2020) describe a coding system that uses a special case of LDPC, the Spatially Coupled (SC) low-density parity-check code, as the inner code and Reed-Solomon as the outer code. A decoding algorithm based on belief propagation, sliding window decoding, is proposed; for RS decoding, Berlekamp-Massey is used. This system improves the bit error rate down to a 10^-8 error floor.

Chengen et al. (2012) propose a product code system that uses Hamming codes along the columns and RS codes along the rows of the matrix of input message bits. This system is compared with plain BCH and RS codes; the comparison shows that the proposed system has 18% lower latency and 40% lower space requirements. The work also explains a flexible product system that moves to a stronger ECC code, trading latency when the error rate in the message is high. This work helps to design a product system with less area and latency.

Jianguo et al. (2012) explain the construction of a product code of RS with a low-density parity check code. Compared with other construction codes, it produces 5.42% redundancy, which is lower, and it is less complex too. The method is highly suitable for optical transmission systems. The simulated analysis, compared with the classical RS code, shows that the novel LDPC code has better advantages including stronger error correction and lower decoding complexity. It can be widely used for high-speed ultra-long-haul optical transmission systems with FEC codes.
Bingrui et al. (2019) develop a decoding model for Ship-based Satellite Communications On-The-Move (SSCOTM) between ships and satellites. When the ship's antenna deviates, burst errors occur, leading to data loss. A method that models the direction of the wind as a reduction in SNR is analysed. An ECC system is designed using the RS code with LDPC and applied for error correction. The simulation output shows better, reliable performance from this system, with simple hardware implementation and a shorter code length.

Blaum (2020) presents alternatives to the Integrated
Interleaved and Extended Integrated Interleaved codes for locally
recoverable codes. A comprehensive definition is given for these specially
constructed new codes. Improvements in special cases of the coding
techniques are observed, and an upper bound on the minimum distance is
also ensured. The systematic coding technique uses the parity symbols in an
iterative decoding algorithm over rows and columns, and it also involves
Reed-Solomon-type codes. The generalization of the code allows
encoding and decoding optimization.

Liu et al. (2014) evaluate the performance of an LDPC-
RS product code system for use in next-generation broadcasting systems
and also provide a novel iterative scheme for the product coding system.
The proposed hybrid product scheme outperforms the regular hard-decision
decoding of RS and the SPA of the LDPC algorithm by providing higher error
correction ability and robustness in the decoding system. The results
show a decrease in decoding threshold and complexity. This work helps in
building a hybrid decoding scheme.

Salah (2018) describes various forward error detection
and correction codes under different noise conditions. Providing a good channel is
difficult nowadays because of interference, noise, topology changes,
etc., so the work suggests emerging channel coding algorithms such as LDPC and RS
codes and also describes the difference between binary LDPC and non-binary
LDPC codes.

The method described in Jieun et al. (2016) uses a
nonbinary LDPC code as the outer code and a Reed-Solomon code as the
inner code for modern tape storage applications. The results show that
this method gives better gain than RS-RS concatenation.
It also handles decoding complexity quite effectively.
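The general data flow of such a concatenated scheme (outer encode, interleave, inner encode) can be sketched as below. This is a toy illustration of the structure only, with stand-in codes, not the nonbinary LDPC and RS codes of the cited work; all names are illustrative.

```python
def parity_encode(bits):
    """Toy 'outer' code: append one overall even-parity bit."""
    return bits + [sum(bits) % 2]

def block_interleave(bits, rows, cols):
    """Write row-wise into a rows x cols array, read column-wise.

    This spreads a burst of adjacent channel errors across distinct
    outer codewords after de-interleaving at the receiver.
    """
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def repeat3_encode(bits):
    """Toy 'inner' code: rate-1/3 repetition."""
    return [b for b in bits for _ in range(3)]

msg = [1, 0, 1, 1, 0]
outer = parity_encode(msg)              # outer codeword, 6 bits
spread = block_interleave(outer, 2, 3)  # reorder before the inner code
tx = repeat3_encode(spread)             # 18 bits sent over the channel
```

The receiver inverts the chain in reverse order: inner decode, de-interleave, outer decode, which is the standard concatenated-decoding flow.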

Greaves & Syatriadi (2019) discuss developments in
some special cases, such as [n, k] RS codes over small finite fields (of size n or
n + 1) with constrained generator matrices. Moreover, the
work provides estimates for the constrained generator matrix setting; Lovett's
estimate is explained in detail, indicating the usability of the estimation.

Park & Kim (2020) provide an estimation method using a
generator polynomial model for error correction codes. The model
yields highly reliable generator polynomials for RS and BCH codes, and the
probability density function of the codeword is also highly accurate. At the
system level, it provides high system integrity. The estimation gives an
eye diagram similar to that of a transient simulation for RS and BCH codes.
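For context on what a generator polynomial is in this setting, the sketch below builds the generator polynomial g(x) = (x - α)(x - α²)···(x - α^{2t}) of a Reed-Solomon code over GF(2⁸). This is a standard textbook construction, not the estimation model of the cited paper; the primitive polynomial 0x11D and the root range starting at α¹ are common conventions, but conventions vary between systems.

```python
def gf_mul(a, b, prim=0x11D):
    """Multiply two elements of GF(2^8) with primitive polynomial 0x11D."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:      # reduce modulo the primitive polynomial
            a ^= prim
        b >>= 1
    return r

def rs_generator_poly(nsym):
    """Return g(x) coefficients (highest degree first) for nsym parity symbols."""
    g = [1]
    alpha = 1
    for _ in range(nsym):
        alpha = gf_mul(alpha, 2)   # next power of the primitive element
        # multiply g(x) by (x - alpha); subtraction is XOR in GF(2^8)
        g = ([g[0]]
             + [g[i + 1] ^ gf_mul(g[i], alpha) for i in range(len(g) - 1)]
             + [gf_mul(g[-1], alpha)])
    return g

g = rs_generator_poly(2)   # e.g. 2 parity symbols -> degree-2 polynomial
```

Every valid codeword, viewed as a polynomial, is a multiple of g(x); a decoder checks this divisibility (via syndromes) to detect and locate errors.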

1.6.4 LDPC and Turbo product code

Condo & Masera (2014) propose a product code scheme
for deep space communication with error correction codes and low BER.
Using different coding techniques leads to hardware complexity; to
avoid this, non-custom LDPC and turbo codes are used with the BCJR
algorithm. The serial concatenation of turbo and LDPC codes is
used for simulation, and the results are compared with state-of-the-art
schemes used in deep space. Early area analysis shows that the system
occupies less space and comes at low cost.

Yuan et al. (2017) propose two product codes based on
LDPC codes: constrained turbo product codes and
constrained turbo block convolutional codes. The gain obtained is close to
that of uniform interleaving, which increases the minimum Hamming distance.
The new improved codes give high performance and are
suitable for wireless applications, ensuring better performance in
wireless communication systems than the WiMAX and LTE standards
respectively.

Gooru & Rajaram (2014) provide an overview of novel
turbo codes capable of operating near Shannon's
limit. Turbo coding techniques with many iterative decoders are
most commonly used; the MAP algorithm is employed to reduce the
number of iterations in the decoding part. An FPGA
implementation is used for the simulation, which encodes and decodes the
data. The work gives improved channel coding techniques, and the simulation
outputs are verified against similar coding techniques. The SISO decoder
also reduces the number of iterations.

Blaum & Hetzler (2018) generate extended product codes
that handle the encoding part more efficiently than other
coding techniques. These codes, along with extended integrated interleaved
codes, form a special class of codes that naturally unifies the upper bound.
The minimum distance is also improved, and the decoding part is enhanced
with a uniform distribution of parity symbols. An upper
bound on the minimum distance is presented, and only a small finite
field is required in practical cases. The simulation outputs confirm the
encoding and decoding algorithms and the minimum distance.
Winstead et al. (2005) note that turbo codes and other
iteratively decoded codes have been adopted in several advanced
communication systems such as 3GPP, DVB, and DVB-RCS. Because of the
iterative nature of the decoding algorithm, however, turbo decoders suffer
from decoding latency and large energy consumption.

The survey by Mukhtar et al. (2016) is useful for analysing the
code rate and complexity of various Turbo Product Codes (TPC). It
highlights multidimensional, nonbinary, and modified row-column
interleaving TPCs, along with irregular, extended and shortened, and
nonlinear TPCs. It also gives general knowledge about product codes.
