
2022 IEEE Information Theory Workshop (ITW)

Low-Latency Ordered Statistics Decoding of BCH Codes
Lijia Yang†, Li Chen‡
† School of Electronics and Communication Engineering, Sun Yat-sen University, Shenzhen 518107, China
‡ School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510006, China
Email: yanglj39@mail2.sysu.edu.cn, chenli55@mail.sysu.edu.cn
978-1-6654-8341-4/22/$31.00 ©2022 IEEE | DOI: 10.1109/ITW54588.2022.9965799

Abstract—This paper proposes a low-latency ordered statistics decoding (OSD) algorithm for BCH codes. The OSD latency is mainly caused by Gaussian elimination (GE) that produces a systematic generator matrix of the code. Since BCH codes are binary subcodes of Reed-Solomon (RS) codes, we show that the BCH codeword candidates can be produced through the systematic generator matrix of the corresponding RS code. The systematic generator matrix of an RS code can be formed by generating the linearly independent RS codewords in parallel, replacing the GE process and enabling a low OSD latency. This paper further proposes a segmented variant that facilitates the decoding by reducing the number of test error patterns (TEPs). Complexity of the proposed OSD is also analyzed. Our simulation results show that the proposed decoding can achieve a similar performance to the conventional OSD, but with a lower decoding complexity. The decoding latency can be substantially reduced over the conventional OSD.

Index Terms—BCH codes, low-latency, subfield subcode, maximum likelihood decoding, ordered statistics decoding

I. INTRODUCTION

The realization of ultra-reliable low-latency communication (URLLC) requires the support of competent short-to-medium length channel codes. The transmission limit of a finite length coded system has been characterized in [1]. Recent research on short-to-medium length codes has shown that ordered statistics decoding (OSD) of BCH codes can yield a performance that is close to the transmission limit [2]–[3]. In OSD, the codeword candidates are generated through the re-encoding of test messages that are formed by alternating decisions of the most reliable independent positions (MRIPs) in a codeword. The re-encoding process requires Gaussian elimination (GE) that produces a systematic generator matrix of the code. However, due to the sequential nature of GE, its latency is hard to reduce, which is a long-standing challenge for OSD [4]. In order to reduce the OSD complexity, several skipping and stopping rules have been proposed in [5]–[8]. They facilitate the decoding by identifying the unpromising test error patterns (TEPs) and the maximum likelihood (ML) codeword candidate within the decoding output list, respectively, resulting in skipping the unpromising TEPs or terminating the decoding earlier. The box-and-match algorithm [9] trades time and space complexity by considering the TEPs of small weights. Moreover, the MRIPs segmentation approach was proposed in [10], dividing the OSD operation into several segments to reduce the decoding complexity. On another front, multiple information sets generated by randomly biased log-likelihood ratios (LLRs) were proposed in [11]–[12] in order to improve the OSD performance.

However, the GE latency challenge remains, and it is addressed by this work. Since BCH codes are binary subcodes of Reed-Solomon (RS) codes, their codeword candidates can be generated through the corresponding RS codewords, which requires the RS systematic generator matrix. It can be formed by generating the linearly independent RS codewords in parallel, underpinning a low decoding latency. In particular, an (n, k) BCH code is a binary subcode of an (n, k') RS code that is defined over a binary extension field, where n is their codeword length and the dimension of the RS code is greater than that of the BCH code, i.e., k' > k. The k' linearly independent RS codewords can be generated in parallel using the Lagrange interpolation polynomials, forming the RS systematic generator matrix. The BCH codeword candidates can then be yielded by generating the binary RS codewords through this matrix. In order to further reduce the decoding complexity, a segmented low-latency OSD is further proposed. By segmenting the original TEPs, a near ML decoding performance can still be achieved with fewer TEPs, resulting in a lower decoding complexity. Complexity of the proposed OSD is analyzed. Our simulation results show that the decoding latency (in microseconds) can be substantially reduced over the conventional OSD. The proposed decoders yield a similar decoding performance as the conventional OSD with a smaller decoding output list, resulting in fewer floating point operations for identifying the most likely codeword from the list.

II. PRELIMINARIES

A. Ordered Statistics Decoding

Let F_q denote a finite field of size q, and let its extension field be denoted as F_{q^m}, where m > 1. Let f = (f_0, f_1, ..., f_{k-1}) ∈ F_2^k and c = (c_0, c_1, ..., c_{n-1}) ∈ F_2^n denote the message vector and codeword vector of an (n, k) BCH code, respectively, and let d denote its minimum Hamming distance. Its generator matrix G is a k × n binary matrix G = [g_0, g_1, ..., g_{n-1}], where g_0, g_1, ..., g_{n-1} are column vectors of length k. Let us assume that a BCH codeword c is transmitted using BPSK modulation with 0 ↦ 1 and 1 ↦ −1. The modulated symbol sequence is x = (x_0, x_1, ..., x_{n-1}), where x_j ∈ {−1, 1} and j = 0, 1, ..., n − 1. After a memoryless channel, the received symbol sequence is r = (r_0, r_1, ..., r_{n-1}) ∈ R^n. Let Pr(r_j | c_j = 0) and Pr(r_j | c_j = 1) denote the channel observations of c_j; its received LLR is defined as

L_j = \ln \frac{\Pr(r_j \mid c_j = 0)}{\Pr(r_j \mid c_j = 1)}.   (1)

Subsequently, the hard-decision received word y = (y_0, y_1, ..., y_{n-1}) ∈ F_2^n can be obtained: if L_j > 0, y_j = 0; otherwise, y_j = 1. Since a greater |L_j| indicates that the received information of c_j is more reliable, the reliability of the received information for all coded bits can be ordered based on |L_j|, yielding a refreshed bit index sequence j_0, j_1, ..., j_{n-1} such that |L_{j_0}| ≥ |L_{j_1}| ≥ ... ≥ |L_{j_{n-1}}|. A permuted received word can be further obtained as

y' = \Pi(y) = (y_{j_0}, y_{j_1}, \ldots, y_{j_{n-1}}),   (2)

where Π denotes the permutation function. Applying the same permutation to the columns of G yields

G' = \Pi(G) = [g_{j_0}, g_{j_1}, \ldots, g_{j_{n-1}}].   (3)

GE will be performed on G', reducing columns g_{j_0}, g_{j_1}, ..., g_{j_{k-1}} to weight one and yielding a systematic generator matrix

G'' = [g'_{j_0}, g'_{j_1}, \ldots, g'_{j_{n-1}}],   (4)

where columns g'_{j_0}, g'_{j_1}, ..., g'_{j_{k-1}} form a k × k identity submatrix. However, this cannot be achieved if the first k columns are not linearly independent. In this case, a second permutation is needed and the GE is conducted again. This adjustment continues until the first k columns of G' are linearly independent. Note that if a second permutation is needed, y' will also be updated accordingly. In the following, we assume that the first k columns of G' have this property.

Consequently, after ensuring the first k columns of G' are linearly independent, the first k positions in y' are called the MRIPs and their index set is denoted as Υ = {j_0, j_1, ..., j_{k-1}}. Let f = (y_{j_0}, y_{j_1}, ..., y_{j_{k-1}}) denote a message and e^{(ω)} = (e^{(ω)}_{j_0}, e^{(ω)}_{j_1}, ..., e^{(ω)}_{j_{k-1}}) ∈ F_2^k denote a TEP that will be used to update f, where ω = 1, 2, ..., \sum_{\lambda=0}^{\tau} \binom{k}{\lambda} and τ denotes the decoding order. Each e^{(ω)} has at most τ nonzero entries. The test messages can be generated by

f^{(\omega)} = f + e^{(\omega)}.   (5)

The corresponding codeword candidate can be generated by

\hat{c}^{(\omega)} = (\hat{c}^{(\omega)}_0, \hat{c}^{(\omega)}_1, \ldots, \hat{c}^{(\omega)}_{n-1}) = \Pi^{-1}(f^{(\omega)} \cdot G''),   (6)

where ĉ^{(ω)} ∈ F_2^n and Π^{-1} is the inverse of the permutation function Π. Let us further define the correlation distance between y and ĉ^{(ω)} as

d(y, \hat{c}^{(\omega)}) \triangleq \sum_{j: y_j \neq \hat{c}^{(\omega)}_j} |L_j|.   (7)

A codeword candidate with a smaller correlation distance to y is more likely to be the transmitted codeword. Let S_ω = {L_j | y_j = ĉ^{(ω)}_j}; the elements L_j of S_ω can be reordered as

|L_{\xi_0}| \leq |L_{\xi_1}| \leq \cdots \leq |L_{\xi_{n-d_\omega-1}}|,   (8)

where d_ω denotes the Hamming distance between y and ĉ^{(ω)}. The ML criterion is [5]

d(y, \hat{c}^{(\omega)}) \leq \sum_{j=0}^{d-d_\omega-1} |L_{\xi_j}|.   (9)

If ĉ^{(ω)} satisfies (9), it is the ML codeword, and the OSD can be terminated once the ML codeword is found. Otherwise, the candidate that yields the smallest correlation distance to y is selected as the decoding output ĉ_opt.

Note that the GE that produces the systematic generator matrix G'' is a sequential process, incurring the OSD latency challenge.
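For concreteness, the following Python sketch illustrates the conventional OSD flow described above; it is an illustration rather than the authors' implementation. The generator matrix G, the LLR vector of (1) and the order τ are assumed given, the re-permutation needed when the first k sorted columns are dependent is skipped, and the early-termination rule of (9) is omitted. The sequential column-by-column elimination loop is precisely the latency bottleneck targeted by this paper.

```python
# A minimal sketch of the conventional OSD re-encoding loop of Section II-A,
# not the paper's proposed decoder. Assumptions: G is a k x n binary NumPy
# array, llr is the length-n LLR vector of (1), tau is the decoding order.
import itertools
import numpy as np

def osd_conventional(G, llr, tau):
    k, n = G.shape
    y = (llr < 0).astype(int)              # hard decisions: L_j > 0 -> 0, else 1
    perm = np.argsort(-np.abs(llr))        # bit indices ordered by reliability
    Gp = G[:, perm] % 2                    # permuted generator matrix G' of (3)
    for col in range(k):                   # Gaussian elimination towards G'' of (4);
        pivot = col + np.argmax(Gp[col:, col])   # inherently sequential, hence the latency
        Gp[[col, pivot]] = Gp[[pivot, col]]
        for r in range(k):
            if r != col and Gp[r, col]:
                Gp[r] ^= Gp[col]
    f = y[perm][:k]                        # hard decisions on the MRIPs
    best, best_dist = None, np.inf
    for w in range(tau + 1):               # TEPs of weight 0..tau, as in (5)
        for pos in itertools.combinations(range(k), w):
            e = np.zeros(k, dtype=int); e[list(pos)] = 1
            c_perm = (f ^ e) @ Gp % 2      # re-encoding, as in (6)
            c = np.empty(n, dtype=int); c[perm] = c_perm   # undo the permutation
            dist = np.abs(llr[c != y]).sum()               # correlation distance (7)
            if dist < best_dist:
                best, best_dist = c, dist
    return best
```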




B. BCH Codes and RS Codes

The subfield subcode relationship between BCH codes and RS codes is stated as follows.

Definition 1 ([13]): Given two linear block codes C and C' of length n that are defined over F_q and F_{q^m}, respectively, if C = C' ∩ F_q^n, then C is a subcode of C' over F_q.

Lemma 1 ([14]): An (n, k) t-error-correcting BCH code defined over F_2 is a subcode of an (n, k') t-error-correcting RS code defined over F_{2^m}. Note that RS codes are maximum distance separable (MDS) codes. With the same error correction capability, the RS code dimension is greater than that of the BCH subcode, i.e., k' > k.

III. LOW-LATENCY ORDERED STATISTICS DECODING

A. RS Systematic Generator Matrix

With the permuted received word y' of (2), let us define Θ = {j_0, j_1, ..., j_{k'-1}} as the index set of its k' most reliable positions (MRPs), and its complementary set Θ^c = {j_{k'}, j_{k'+1}, ..., j_{n-1}}. Note that since the OSD is discussed under the binary BCH code paradigm, it is assumed that y' ∈ F_2^n; for an RS code, y' ∈ F_{2^m}^n. Picking up the received symbols indexed by Θ, an initial message u = (y_{j_0}, y_{j_1}, ..., y_{j_{k'-1}}) ∈ F_2^{k'} can be formed. We also denote the support of its symbol indices that are realized in y' as supp(u) = {j_0, j_1, ..., j_{k'-1}}. With u, the message polynomial of the (n, k') RS code can be defined as

H_u(x) = \sum_{j \in \mathrm{supp}(u)} y_j L_j(x),   (10)

where

L_j(x) = \prod_{j' \in \mathrm{supp}(u), j' \neq j} \frac{x - \alpha_{j'}}{\alpha_j - \alpha_{j'}}   (11)

is the Lagrange interpolation polynomial of code locator α_j. It enables L_j(α_j) = 1, and L_j(α_{j'}) = 0 if j' ≠ j. With code locators α_0, α_1, ..., α_{n-1}, the RS codeword v = (v_0, v_1, ..., v_{n-1}) ∈ F_{2^m}^n can be generated by

v = (H_u(\alpha_0), H_u(\alpha_1), \ldots, H_u(\alpha_{n-1})).   (12)

Let us define k' weight-1 messages as u_{j_0} = (1, 0, ..., 0), u_{j_1} = (0, 1, ..., 0), ..., u_{j_{k'-1}} = (0, 0, ..., 1), respectively. They have the same support as u, i.e., supp(u_{j_0}) = supp(u_{j_1}) = ... = supp(u_{j_{k'-1}}) = Θ. Consequently, the generator matrix of the (n, k') RS code can be generated by

G_{\mathrm{RS}} =
\begin{bmatrix}
H_{u_{j_0}}(\alpha_0) & H_{u_{j_0}}(\alpha_1) & \cdots & H_{u_{j_0}}(\alpha_{n-1}) \\
H_{u_{j_1}}(\alpha_0) & H_{u_{j_1}}(\alpha_1) & \cdots & H_{u_{j_1}}(\alpha_{n-1}) \\
\vdots & \vdots & \ddots & \vdots \\
H_{u_{j_{k'-1}}}(\alpha_0) & H_{u_{j_{k'-1}}}(\alpha_1) & \cdots & H_{u_{j_{k'-1}}}(\alpha_{n-1})
\end{bmatrix},   (13)

where each row is a codeword of the respective message. Since the k' messages are linearly independent, the k' codewords are also linearly independent. They constitute the generator matrix of the (n, k') RS code. In G_RS, columns j_0, j_1, ..., j_{k'-1} form a k' × k' identity submatrix. Hence, G_RS is in the systematic form. The row-i column-j entry of G_RS is

H_{u_i}(\alpha_j) =
\begin{cases}
0, & \text{if } j \in \Theta, j \neq i; \\
1, & \text{if } j \in \Theta, j = i; \\
\dfrac{\prod_{j' \in \Theta} (\alpha_j - \alpha_{j'})}{(\alpha_j - \alpha_i) \prod_{j' \in \Theta, j' \neq i} (\alpha_i - \alpha_{j'})}, & \text{if } j \in \Theta^c.
\end{cases}   (14)

Since \alpha_j \prod_{j'=0, j' \neq j}^{n-1} (\alpha_j - \alpha_{j'}) = 1, when j ∈ Θ^c,

H_{u_i}(\alpha_j) = \frac{\alpha_i \prod_{j' \in \Theta^c} (\alpha_i - \alpha_{j'})}{\alpha_j (\alpha_j - \alpha_i) \prod_{j' \in \Theta^c, j' \neq j} (\alpha_j - \alpha_{j'})}.   (15)

Note that when the code rate is greater than 1/2, |Θ^c| < |Θ|, and eq. (15) requires less finite field computation.

Note that matrix G_RS can be generated in parallel, underpinning the low-latency feature of the proposed OSD.
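As an illustration of (13)–(14), the sketch below builds the rows of G_RS independently by evaluating Lagrange basis polynomials, which is the step the paper parallelizes in place of GE. It is a sketch only: it assumes the third-party `galois` Python package for F_{2^m} arithmetic, takes the code locators as consecutive powers of a primitive element, and the toy (7, 5) parameters and the chosen MRP index set are hypothetical.

```python
# A sketch of forming G_RS row by row via Lagrange basis evaluation, as in
# (13)-(14), instead of Gaussian elimination. Assumes the third-party `galois`
# package for GF(2^m) arithmetic; toy parameters are illustrative only.
import galois

def rs_systematic_rows(m, theta, locators):
    """Return {i: row_i} with row_i[j] = H_{u_i}(alpha_j), following (14)."""
    GF = galois.GF(2**m)
    n = len(locators)
    rows = {}
    for i in theta:                        # each row depends only on i, so the k'
        row = [GF(0)] * n                  # rows can be generated in parallel
        for j in range(n):
            if j in theta:
                row[j] = GF(1) if j == i else GF(0)    # identity part on the MRPs
            else:
                # L_i evaluated at alpha_j:
                # prod_{j' in Theta, j' != i} (alpha_j - alpha_j') / (alpha_i - alpha_j')
                num, den = GF(1), GF(1)
                for jp in theta:
                    if jp == i:
                        continue
                    num *= locators[j] - locators[jp]
                    den *= locators[i] - locators[jp]
                row[j] = num / den
        rows[i] = row
    return rows

# Toy example: a (7, 5) RS code over GF(2^3), with MRPs {0, 1, 2, 4, 6}.
GF8 = galois.GF(2**3)
alpha = GF8.primitive_element
locs = [alpha**j for j in range(7)]
G_rows = rs_systematic_rows(3, {0, 1, 2, 4, 6}, locs)
print([int(x) for x in G_rows[0]])         # 1 at position 0, 0 on the other MRPs
```

Since each row is an independent Lagrange evaluation, the k' rows can be computed concurrently, which is exactly the parallelism that replaces the sequential GE of the conventional OSD.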


B. Generation of BCH Codeword Candidates

The BCH codeword candidates can be further generated by G_RS. With the initial message u = (y_{j_0}, y_{j_1}, ..., y_{j_{k'-1}}), the RS codeword v̂^{(0)} = (v̂^{(0)}_0, v̂^{(0)}_1, ..., v̂^{(0)}_{n-1}) ∈ F_{2^m}^n can be generated by

\hat{v}^{(0)} = u \cdot G_{\mathrm{RS}},   (16)

where v̂^{(0)}_j = y_j, ∀j ∈ Θ. Similar to the OSD introduced in Section II-A, let us also define a TEP as e'^{(ω)} = (e'^{(ω)}_{j_0}, e'^{(ω)}_{j_1}, ..., e'^{(ω)}_{j_{k'-1}}) ∈ F_2^{k'}. Subsequently, the test message u^{(ω)} can be generated by

u^{(\omega)} = u + e'^{(\omega)}.   (17)

The corresponding RS codeword v̂^{(ω)} = (v̂^{(ω)}_0, v̂^{(ω)}_1, ..., v̂^{(ω)}_{n-1}) can be further generated by

\hat{v}^{(\omega)} = (u + e'^{(\omega)}) \cdot G_{\mathrm{RS}} = \hat{v}^{(0)} + e'^{(\omega)} \cdot G_{\mathrm{RS}},   (18)

where v̂^{(ω)} ∈ F_{2^m}^n. Based on Lemma 1, if v̂^{(ω)} ∈ F_2^n, it is also an (n, k) BCH codeword. The following theorem shows that this binary assessment can be implemented effectively by knowing v̂^{(0)}, e'^{(ω)} and G_RS.

Theorem 2: If v̂^{(0)}_j + \sum_{i \in \Theta, e'^{(\omega)}_i \neq 0} H_{u_i}(\alpha_j) is binary for all j ∈ Θ^c, then v̂^{(ω)} is a BCH codeword.

Proof: Based on (18), let us define

e'^{(\omega)} \cdot G_{\mathrm{RS}} = (\phi^{(\omega)}_0, \phi^{(\omega)}_1, \ldots, \phi^{(\omega)}_{n-1}).   (19)

The RS codeword symbol v̂^{(ω)}_j can be determined by

\hat{v}^{(\omega)}_j = \hat{v}^{(0)}_j + \phi^{(\omega)}_j.   (20)

Based on (14), we know that if j ∈ Θ,

\phi^{(\omega)}_j = e'^{(\omega)}_j.   (21)

Since v̂^{(0)}_j ∈ {0, 1} for j ∈ Θ and the TEP e'^{(ω)} is also binary, v̂^{(ω)}_j ∈ {0, 1}, ∀j ∈ Θ. For the remaining symbols with index j ∈ Θ^c, based on (14) and (18), we know

\hat{v}^{(\omega)}_j = \hat{v}^{(0)}_j + \sum_{i \in \Theta, e'^{(\omega)}_i \neq 0} H_{u_i}(\alpha_j).   (22)

Therefore, if these symbols are binary, the codeword v̂^{(ω)} is binary. Based on Lemma 1, it is also a BCH codeword.

Similar to the conventional OSD, the proposed OSD generates the codeword candidates by enumerating the TEPs e'^{(ω)} and re-encoding as in (18). Based on Theorem 2, if the codeword symbols v̂^{(ω)}_j (j ∈ Θ^c) are binary, v̂^{(ω)} will be a BCH codeword. The correlation distance between y and v̂^{(ω)} will be further determined as in (7). Once a codeword candidate v̂^{(ω)} satisfies the ML criterion of (9), it will be selected as the decoding output v̂_opt and the decoding terminates. Otherwise, the one that yields the smallest correlation distance to y will be selected as v̂_opt.

Since the systematic generator matrix G_RS can be generated in parallel, it yields a decoding latency advantage over the conventional OSD. Summarizing the above description, the low-latency OSD is presented in Algorithm 1.

Algorithm 1 Low-Latency OSD of BCH Codes
Input: Received symbol sequence r, order τ;
Output: v̂_opt;
1: Compute the LLRs as in (1), and determine y;
2: Define the MRPs and u, and let d_min = +∞;
3: Generate G_RS as in (14);
4: Generate the initial codeword v̂^{(0)} as in (16);
5: For each TEP e'^{(ω)}, do
6:    Test if the codeword v̂^{(ω)} is binary as in (22);
7:    If v̂^{(ω)} is binary
8:       Determine d(y, v̂^{(ω)}) as in (7);
9:       If d(y, v̂^{(ω)}) < d_min
10:         Update d_min = d(y, v̂^{(ω)}) and v̂_opt = v̂^{(ω)};
11:         If d(y, v̂^{(ω)}) satisfies the ML criterion of (9)
12:            Terminate the decoding;
13: End for
14: Return v̂_opt;
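The following Python sketch puts Theorem 2 and Algorithm 1 together: v̂^{(0)} is formed once as in (16), each TEP only updates the Θ^c symbols as in (22), and non-binary candidates are discarded before any floating point work. It is a sketch under assumptions, reusing rs_systematic_rows() from the earlier sketch and the third-party `galois` package; theta holds the k' MRP indices, y_hard and llr are NumPy arrays, and the ML stopping rule of (9) and the segmentation of Section IV are left out for brevity.

```python
# A sketch combining (16), (18), (22) and Theorem 2 into the candidate loop of
# Algorithm 1. Not the authors' implementation; illustrative assumptions only.
import itertools
import galois
import numpy as np

def low_latency_osd(m, theta, locators, y_hard, llr, tau):
    GF = galois.GF(2**m)
    n = len(locators)
    theta = sorted(theta)
    theta_c = [j for j in range(n) if j not in theta]
    rows = rs_systematic_rows(m, set(theta), locators)    # rows of G_RS, as in (13)-(14)
    v0 = [GF(0)] * n                                      # v^(0) = u * G_RS, as in (16)
    for i in theta:
        if y_hard[i]:
            for j in range(n):
                v0[j] = v0[j] + rows[i][j]
    best, best_dist = None, np.inf
    for w in range(tau + 1):
        for flips in itertools.combinations(theta, w):    # support of the TEP e'^(omega)
            # parity symbols of (22); "binary" means equal to 0 or 1 in GF(2^m)
            parity = {j: v0[j] + sum((rows[i][j] for i in flips), GF(0)) for j in theta_c}
            if any(int(s) > 1 for s in parity.values()):
                continue                                  # discarded by Theorem 2
            cand = np.array([(int(v0[j]) ^ (j in flips)) if j in theta else int(parity[j])
                             for j in range(n)])
            dist = np.abs(llr[cand != y_hard]).sum()      # correlation distance, as in (7)
            if dist < best_dist:
                best, best_dist = cand, dist
    return best
```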


IV. SEGMENTED VARIANT

This section further proposes a segmented variant of the proposed OSD, in order to reduce its complexity.

The above description shows that in the OSD, if the number of errors in the MRIPs is not greater than the decoding order τ, the transmitted codeword will be included in the decoding output list. Let P_{e,OSD}(τ), P_{e,ML} and P_e(τ) denote the error probability of the OSD with an order τ, the error probability of the ML decoding, and the probability that the number of errors in the MRIPs is greater than τ, respectively. They satisfy

P_{e,\mathrm{OSD}}(\tau) \leq P_{e,\mathrm{ML}} + P_e(\tau).   (23)

When

\tau \geq \min\{\lceil d/4 - 1 \rceil, k\},   (24)

P_e(τ) ≪ P_{e,ML} [3]. Therefore, if the OSD order is sufficiently large, it can approach the ML decoding performance.

Since the length of the TEP e'^{(ω)} is greater than that of e^{(ω)} in the conventional OSD, there are more test errors in e'^{(ω)}. They occur in the extra symbol band defined by Θ \ Υ. The analysis of [3] shows that if τ satisfies (24), P_e(τ) becomes negligible. Hence, there is no need to assign an order greater than τ for the first k positions of the MRPs.

The TEP e'^{(ω)} can be partitioned into two segments as e'^{(ω)}_1 = (e'^{(ω)}_{j_0}, e'^{(ω)}_{j_1}, ..., e'^{(ω)}_{j_{k-1}}) and e'^{(ω)}_2 = (e'^{(ω)}_{j_k}, e'^{(ω)}_{j_{k+1}}, ..., e'^{(ω)}_{j_{k'-1}}), respectively. The proposed OSD can be performed by enumerating e'^{(ω)}_1 and e'^{(ω)}_2, which form a smaller set of TEPs. Let τ_1 and τ_2 denote the segment orders of e'^{(ω)}_1 and e'^{(ω)}_2, respectively. Similar to the definition of P_e(τ), let P_{e1}(τ_1) and P_{e2}(τ_2) denote the probabilities that the number of errors in e'^{(ω)}_1 is greater than τ_1 and that the number of errors in e'^{(ω)}_2 is greater than τ_2, respectively. In a memoryless channel, we have

P_e(\tau) = 1 - (1 - P_{e1}(\tau_1))(1 - P_{e2}(\tau_2)).   (25)

Based on (23), we can obtain the error probability upper bound of the segmented OSD as

P_{e,\mathrm{seg\text{-}OSD}}(\tau_1, \tau_2) \leq P_{e,\mathrm{ML}} + P_{e1}(\tau_1) + P_{e2}(\tau_2) - P_{e1}(\tau_1) P_{e2}(\tau_2).   (26)

Hence, if τ_1 ≥ min{⌈d/4 − 1⌉, k}, then P_{e1}(τ_1) ≪ P_{e,ML} and

P_{e,\mathrm{seg\text{-}OSD}}(\tau_1, \tau_2) \leq P_{e,\mathrm{ML}} + P_{e2}(\tau_2).   (27)

Therefore, if τ_1 ≥ min{⌈d/4 − 1⌉, k} and τ_2 is appropriately chosen such that P_{e2}(τ_2) ≪ P_{e,ML}, the ML decoding performance can still be approached by the segmented OSD. This segmented variant helps reduce the number of TEPs significantly, resulting in a reduced decoding complexity. Note that the partition point in the MRIPs can be adjusted more flexibly to achieve a better complexity reduction, but this process remains heuristic. More numerical results on this will be provided in Section VI.
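The sketch below shows one natural reading of the segmented enumeration in Python: weight at most τ_1 on the first l MRPs combined with weight at most τ_2 on the remaining k' − l MRPs, following the (τ_1 | l, τ_2) parameterization used in Section VI. The printed size comparison is plain arithmetic on these counts for the (63, 45) code (k' = 57) and the (1 | 45, 3) setting; it is not a figure quoted from the paper.

```python
# A sketch of enumerating the segmented TEPs of Section IV as the product of
# two small TEP sets; one reading of the paper's description, not its code.
import itertools
from math import comb

def segmented_tep_supports(k_prime, l, tau1, tau2):
    """Yield the support (flipped MRP indices) of each segmented TEP."""
    for w1 in range(tau1 + 1):
        for s1 in itertools.combinations(range(l), w1):
            for w2 in range(tau2 + 1):
                for s2 in itertools.combinations(range(l, k_prime), w2):
                    yield s1 + s2

n_full = sum(comb(57, w) for w in range(4))                 # all order-3 TEPs; equals N_0 in Table II
n_seg = sum(1 for _ in segmented_tep_supports(57, 45, 1, 3))
print(n_full, n_seg)
```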


V. COMPLEXITY ANALYSIS

This section analyzes the complexity of the proposed OSD and compares it with the conventional OSD. In the conventional OSD, binary operations and floating point operations are needed. The GE process requires n · (min{n − k, k})^2 binary operations. Based on G'', k · (n − k) and (n − k) · \sum_{\lambda=1}^{\tau} \lambda \binom{k}{\lambda} binary operations are needed to compute ĉ^{(0)} and the other candidate codewords ĉ^{(ω)}, respectively. Finally, identifying the decoding output ĉ_opt requires at most n · \sum_{\lambda=0}^{\tau} \binom{k}{\lambda} floating point operations. In the proposed OSD, F_{2^m} finite field operations and floating point operations are needed. Computing the RS systematic generator matrix G_RS as in (14) or (15) requires 2n · min{n − k', k'} finite field operations. The generation of v̂^{(0)} as in (16) requires at most k' · (n − k') finite field operations. Let N_{j'} denote the number of TEPs e'^{(ω)} that yield binary estimated symbols v̂^{(ω)}_j in Θ^c after the j'-th judgement as in Theorem 2, where j' = 0, 1, ..., n − k'. Note that when j' = 0, no assessment has been conducted, and N_0 is the total number of TEPs, i.e., N_0 = \sum_{\lambda=0}^{\tau} \binom{k'}{\lambda}. A BCH codeword will be confirmed after the n − k' positions in Θ^c have been assessed. Hence, the decoding output list cardinality of the proposed OSD is N_{n−k'}. Computing the BCH codeword candidates v̂^{(ω)} as in (22) requires at most \sum_{\lambda=1}^{\tau} \lambda \binom{k'}{\lambda} + \tau \sum_{j'=1}^{n−k'} N_{j'} finite field operations. Finally, identifying v̂_opt requires at most n N_{n−k'} floating point operations. The above complexity characterizations are summarized in Table I. It can be seen that the complexity of the proposed OSD also depends on N_{j'}. More numerical results will be provided in the following section, providing more insight into it.

TABLE I
COMPLEXITY OF THE PROPOSED AND THE CONVENTIONAL OSDS

Algorithms        | Operations        | Complexity
OSD (τ)           | GE                | n · (min{n − k, k})^2
                  | Compute ĉ^{(0)}   | k · (n − k)
                  | Compute ĉ^{(ω)}   | (n − k) · \sum_{\lambda=1}^{\tau} \lambda \binom{k}{\lambda}
                  | Find ĉ_opt        | n · \sum_{\lambda=0}^{\tau} \binom{k}{\lambda}
Low-Lat. OSD (τ)  | Compute G_RS      | 2n · min{n − k', k'}
                  | Compute v̂^{(0)}   | k' · (n − k')
                  | Compute v̂^{(ω)}   | \sum_{\lambda=1}^{\tau} \lambda \binom{k'}{\lambda} + \tau \sum_{j'=1}^{n−k'} N_{j'}
                  | Find v̂_opt        | n N_{n−k'}
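To make the entries of Table I concrete, the short Python script below evaluates the operation-count formulas for the codes used in Section VI, plugging in the N_{j'} values that Table II reports for the (63, 45) code at τ = 3. These are the formulas evaluated as written (worst-case counts, no early termination), so they are not expected to reproduce the measured averages of Table III.

```python
# A worked evaluation of the Table I formulas; illustrative only.
from math import comb

def conventional_osd_ops(n, k, tau):
    binary = n * min(n - k, k) ** 2                                       # GE
    binary += k * (n - k)                                                 # compute c^(0)
    binary += (n - k) * sum(w * comb(k, w) for w in range(1, tau + 1))    # other candidates
    floating = n * sum(comb(k, w) for w in range(tau + 1))                # find c_opt
    return binary, floating

def low_latency_osd_ops(n, k_prime, tau, N):
    # N = [N_0, N_1, ..., N_{n-k'}]: surviving TEP counts of the Theorem 2 assessment
    field = 2 * n * min(n - k_prime, k_prime)                             # compute G_RS
    field += k_prime * (n - k_prime)                                      # compute v^(0)
    field += sum(w * comb(k_prime, w) for w in range(1, tau + 1)) + tau * sum(N[1:])
    floating = n * N[-1]                                                  # find v_opt
    return field, floating

# (63, 45) BCH code, a subcode of the (63, 57) RS code, with Table II's N_j'.
print(conventional_osd_ops(63, 45, 1))
print(low_latency_osd_ops(63, 57, 3, [30914, 957, 36, 9, 8, 7, 7]))
```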
VI. SIMULATION RESULTS

A. Decoding Performance

Figs. 1 and 2 show the decoding frame error rate (FER) of the (31, 21) and the (63, 45) BCH codes, respectively. The segmented low-latency OSD is parameterized by (τ_1 | l, τ_2), where l denotes the length of the first segment. That is, e'^{(ω)}_1 = (e'^{(ω)}_{j_0}, e'^{(ω)}_{j_1}, ..., e'^{(ω)}_{j_{l-1}}) and e'^{(ω)}_2 = (e'^{(ω)}_{j_l}, e'^{(ω)}_{j_{l+1}}, ..., e'^{(ω)}_{j_{k'-1}}). Performance of the Berlekamp-Massey (BM) decoding [15] and the conventional OSD [3] are presented as benchmarks. The ML decoding performances were obtained from [16]. Our results show that the low-latency OSD performance can approach that of the conventional OSD, but it requires a larger decoding order. This is because k' > k and |Θ| > |Υ|, so more errors are introduced in the MRPs of the low-latency OSD. However, our results also show that the segmented variant can yield a similar decoding performance with a smaller order.

[Fig. 1. Performance of the (31, 21) BCH code: FER versus SNR (dB) for the BM decoding, OSD (1), ML decoding, Low-Lat. OSD (1), Low-Lat. OSD (2) and Seg. Low-Lat. OSD (1 | 21, 1).]

[Fig. 2. Performance of the (63, 45) BCH code: FER versus SNR (dB) for the BM decoding, OSD (1), ML decoding, Low-Lat. OSD (1), Low-Lat. OSD (2), Low-Lat. OSD (3), Seg. Low-Lat. OSD (1 | 45, 3) and Seg. Low-Lat. OSD (1 | 47, 2).]

B. Decoding Complexity and Latency

As pointed out in Section V, the complexity of the proposed OSD depends on N_{j'}. Table II shows our numerical results of N_{j'} in decoding the (63, 45) BCH code with τ = 3. Note that the BCH code is a binary subcode of the (63, 57) RS code. It can be seen that the assessment of Theorem 2 can effectively eliminate the nonbinary codewords. E.g., after assessing the first symbol in Θ^c, i.e., v̂^{(ω)}_{j_{k'}}, there are only 957 TEPs that can possibly produce BCH codewords. Moreover, the decoding output list cardinality N_6 is only 7, which is far smaller than that of the conventional OSD with τ = 1. This results in the complexity advantage of the proposed OSDs, as discussed below.

TABLE II
NUMERICAL RESULTS OF N_{j'} IN DECODING THE (63, 45) BCH CODE WITH τ = 3

j'      | 0     | 1   | 2  | 3 | 4 | 5 | 6
N_{j'}  | 30914 | 957 | 36 | 9 | 8 | 7 | 7

Table III compares the complexity and latency in decoding the (63, 45) BCH code. All OSDs terminate once an ML codeword is identified by (9). The decoding complexity and latency are measured and averaged over the decoding of one codeword. Referring to Fig. 2, to achieve the same decoding performance, the number of finite field operations in the two proposed OSDs is smaller than the number of binary operations in the conventional OSD, especially for the segmented variant. Although the proposed OSDs incur more TEPs, Table II shows that the binary codeword assessment of Theorem 2 helps eliminate the redundant ones effectively, resulting in a relatively low level of finite field operations. This assessment also results in fewer floating point operations required by the ML criterion. Finally, Table III also vindicates the latency advantage of the proposed OSDs. Our simulations were performed with an Intel Core i7-10710U CPU. In the proposed OSDs, each row of G_RS is generated in parallel. In all OSDs, the TEPs are decoded in a serial manner. It can be seen that both the low-latency OSD and its segmented variant can effectively reduce the decoding latency over the conventional OSD.

TABLE III
NUMERICAL RESULTS OF COMPLEXITY AND LATENCY IN DECODING THE (63, 45) BCH CODE

Algorithms                     | SNR (dB) | F_2 / F_64 oper. | Floating oper. | Latency (µs)
OSD (1)                        | 4        | 2.78 × 10^4      | 81             | 6.58 × 10^2
                               | 5        | 2.60 × 10^4      | 19             | 5.34 × 10^2
                               | 6        | 2.56 × 10^4      | 8              | 5.06 × 10^2
Low-Lat. OSD (3)               | 4        | 1.81 × 10^4      | 15             | 1.99 × 10^3
                               | 5        | 5.21 × 10^3      | 8              | 4.36 × 10^2
                               | 6        | 2.58 × 10^3      | 7              | 1.32 × 10^2
Seg. Low-Lat. OSD (1 | 45, 3)  | 4        | 3.69 × 10^3      | 8              | 2.71 × 10^2
                               | 5        | 2.64 × 10^3      | 7              | 1.44 × 10^2
                               | 6        | 2.45 × 10^3      | 7              | 1.17 × 10^2

ACKNOWLEDGEMENT

This work is sponsored by the National Natural Science Foundation of China (NSFC) with project ID 62071498.

REFERENCES

[1] Y. Polyanskiy, H. V. Poor, and S. Verdu, “Channel coding rate in the finite blocklength regime,” IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2307–2359, 2010.


[2] M. C. Coşkun et al., “Efficient error-correcting codes in the short blocklength regime,” Phys. Commun., vol. 34, pp. 66–79, 2019.
[3] M. Fossorier and S. Lin, “Soft-decision decoding of linear block codes based on ordered statistics,” IEEE Trans. Inf. Theory, vol. 41, no. 5, pp. 1379–1396, 1995.
[4] C. Choi and J. Jeong, “Fast soft decision decoding algorithm for linear block codes using permuted generator matrices,” IEEE Commun. Lett., vol. 25, no. 12, pp. 3775–3779, 2021.
[5] T. Kaneko et al., “An efficient maximum-likelihood decoding algorithm for linear block codes with algebraic decoder,” IEEE Trans. Inf. Theory, vol. 40, no. 2, pp. 320–327, 1994.
[6] Y. Wu and C. N. Hadjicostis, “Soft-decision decoding using ordered recodings on the most reliable basis,” IEEE Trans. Inf. Theory, vol. 53, no. 2, pp. 829–836, 2007.
[7] C. Yue et al., “A revisit to ordered statistics decoding: distance distribution and decoding rules,” IEEE Trans. Inf. Theory, vol. 67, no. 7, pp. 4288–4337, 2021.
[8] W. Jin and M. Fossorier, “Probabilistic sufficient conditions on optimality for reliability based decoding of linear block codes,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Seattle, WA, USA, Jul. 2006.
[9] A. Valembois and M. Fossorier, “Box and match techniques applied to soft-decision decoding,” IEEE Trans. Inf. Theory, vol. 50, no. 5, pp. 796–810, 2004.
[10] S. E. Alnawayseh and P. Loskot, “Ordered statistics-based list decoding techniques for linear binary block codes,” EURASIP J. Wirel. Commun. Netw., vol. 2012, no. 1, pp. 1–12, Dec. 2012.
[11] W. Jin and M. P. C. Fossorier, “Reliability-based soft-decision decoding with multiple biases,” IEEE Trans. Inf. Theory, vol. 53, no. 1, pp. 105–120, 2007.
[12] M. Fossorier, “Reliability-based soft-decision decoding with iterative information set reduction,” IEEE Trans. Inf. Theory, vol. 48, no. 12, pp. 3101–3106, 2002.
[13] E. Berlekamp, Algebraic Coding Theory. New York, NY, USA: McGraw-Hill, 1968.
[14] V. Guruswami and A. Rudra, “Limits to list decoding Reed-Solomon codes,” IEEE Trans. Inf. Theory, vol. 52, no. 8, pp. 3642–3649, 2006.
[15] J. Massey, “Shift-register synthesis and BCH decoding,” IEEE Trans. Inf. Theory, vol. 15, no. 1, pp. 122–127, 1969.
[16] Helmling et al., “Database of channel codes and ML simulation results,” www.uni-kl.de/channel-codes, 2019.

