HEVC (H.265) Overview Rev2
Prepared by Shevach Riabtsev
The author is thankful to every individual who has reviewed the presentation and provided comments/suggestions
HEVC Encoder
Ref Ref Ref
Input Video
+
Motion Est.
MVs
Residual
Inter
T&Q
Bit-Stream
CABAC
Reference samples
Mode
Motion Comp.
Intra Pred.
Intra/Inter Decision
Intra
MVs/Intra modes
Intra Est.
SAO
Deblk.
+ +
Q-1& T-1
SAO params
Notes: Compared to AVC/H.264, SAO and SAO parameter estimation are added. SAO parameter estimation can be executed right after deblocking or right after reconstruction, as shown in the figure. HEVC's similarity to AVC/H.264 allows quick upgrading of existing AVC/H.264 solutions to HEVC.
Bitstream Structure
[Figure: bitstream structure. Sequence-level headers (VPS, SPS, PPS) are followed by coded pictures; each picture #k consists of one or more slices, each a slice header followed by slice data.]
High-Level Syntax (Sequence, Picture level)
Sequence level: VPS, SPS and PPS.
VPS specifies the multi-layer structure.
SPS specifies a single layer: profile, tier, level; picture size; max/min CTU size and CU depth; max/min TU size and TU depth; ...
PPS specifies per-picture on/off tools for a single layer: sign data hiding, transform skip, weighted prediction, tiles, WPP.
Coding Block (CB): each CTB can be further partitioned into multiple coding blocks (CBs). The CB size can range from the CTB size down to a minimum size (8x8).
Coding Unit (CU): the luma CB and the chroma CBs, together with the associated syntax, form a coding unit (CU). Each CU is either intra or inter predicted.
CTU Syntax
[Figure: 64x64 CTU partitioned into CUs]
CTU Syntax (2)
All CUs in a CTU are encoded (traversed) in Z-scan order.
Note: unlike prior standards, where the MB header is followed by its data, in HEVC the headers are dispersed: the CTU is a sequence of (CU header, CU data) pairs.
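The Z-scan traversal above is equivalent to ordering the CUs by their Morton (bit-interleaved) index. A minimal sketch in Python (the 8x8 grid of minimal 8x8 CUs inside a 64x64 CTU is an illustrative assumption):

```python
def z_order(x, y, bits=3):
    """Z-scan (Morton) index of a CU at grid position (x, y):
    interleave the coordinate bits, y contributing the higher bit."""
    idx = 0
    for b in range(bits - 1, -1, -1):
        idx = (idx << 2) | (((y >> b) & 1) << 1) | ((x >> b) & 1)
    return idx

# Traverse the 8x8 grid of minimal CUs inside a 64x64 CTU in Z-scan order.
order = sorted(((x, y) for y in range(8) for x in range(8)),
               key=lambda p: z_order(p[0], p[1]))
```

The first four entries visit the top-left 2x2 group in row-major order, then the traversal recurses into the next quadrant, exactly as the quadtree is coded.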
CU Syntax (1)
Prediction Block (PB): each CB is partitioned into 1, 2 or 4 prediction blocks (PBs).
Prediction Unit (PU): the luma PB and the chroma PBs, together with the associated syntax, form a prediction unit (PU).
Intra: 2Nx2N, NxN
Inter: 2Nx2N, NxN, 2NxN, Nx2N
CU Syntax (2)
Inter asymmetric partitions (conditioned by amp_enabled_flag in the SPS): nLx2N, nRx2N, 2NxnU, 2NxnD.
CU Syntax (3)
Notes: the smallest luma PB size is 4x8 or 8x4 samples (4x8 and 8x4 are permitted only for uni-directional prediction; no bi-prediction below 8x8 is allowed).
Chroma PBs mimic the corresponding luma partition with a scaling factor of 1/2 for 4:2:0.
Asymmetric splitting is also applied to chroma CBs.
CU Syntax (4)
Transform Block (TB): each luma CB can be quadtree-partitioned into one, four or more TBs. The number of transform levels is controlled by max_transform_hierarchy_depth_inter and max_transform_hierarchy_depth_intra.
Example: a CB divided into two TB levels, where block #1 is split into four sub-blocks (traversed in Z-scan order: 0; 1,0; 1,1; 1,2; 1,3; 2; 3).
CU Syntax (5)
Notes: unlike H.264/AVC, where a transform block never crosses a prediction boundary, in HEVC prediction and transform partitioning are independent, i.e. a TB can contain several PBs and vice versa. Some experts report that prediction discontinuities on PB boundaries inside a TB are smoothed by transform and quantization, while the discontinuities appear increased when PB and TB boundaries coincide.
Restrictions/Constraints
a) HEVC disallows 16x16 CTBs for level 5 and above (4K TV). Motivation: 16x16 CTBs add overhead for decoders targeting 4K TV.
Illustration: let's take CtbSizeY = 16 (as in AVC/H.264). Then RawCtuBits = 16*16*8 + 2*8*8*8 = 3072 and the maximal CTB bit-size is 4*3072/3 = 4096 bits. Notice that in AVC/H.264 the maximal MB bit-size is 3200 bits.
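The arithmetic of the illustration can be checked directly; this sketch just reproduces the numbers above for an 8-bit 4:2:0 CTU (the function name is illustrative):

```python
def max_ctb_bits(ctb_size_y, bit_depth=8):
    """Raw bits of one 4:2:0 CTU (one luma CB + two half-size chroma CBs)
    and the 4/3 * raw worst-case coded-size limit quoted above."""
    c = ctb_size_y // 2                                   # 4:2:0 chroma size
    raw = (ctb_size_y * ctb_size_y + 2 * c * c) * bit_depth
    return raw, 4 * raw // 3

raw16, limit16 = max_ctb_bits(16)   # the AVC-sized CTB of the illustration
```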
Luma Motion Compensation Details (2)
Quarter-pels a0,0, c0,0, d0,0, n0,0 and half-pels b0,0, h0,0 are derived directly from the nearest integer positions. The quarter-pels a0,0, c0,0, d0,0, n0,0 are derived by the 7-tap filter and the half-pels b0,0, h0,0 by the 8-tap filter. a0,0, b0,0 and c0,0 are computed by horizontal filtering, while d0,0, h0,0 and n0,0 by vertical filtering.
Luma Motion Compensation Details (3)
Half-pel j0,0 is derived by applying the 8-tap filter vertically to the nearest half-pels: b0,-3, b0,-2, b0,-1, b0,0, b0,1, b0,2, b0,3, b0,4. Notice that j0,0 can be determined only after b0,0 has been computed (see the previous slide). Quarter-pels e0,0 and p0,0 are derived by applying the 7-tap filter vertically to the nearest quarter-pels. Notice that e0,0 and p0,0 can be determined only after a0,0 has been computed (see the previous slide).
Luma Motion Compensation Details (4)
Quarter-pel k0,0 is derived by applying the 8-tap filter vertically to the nearest quarter-pels: c0,-3, c0,-2, c0,-1, c0,0, c0,1, c0,2, c0,3, c0,4.
Luma Motion Compensation Details (5) Quarter-pels f0,0 , g0,0 , q0,0 , r0,0 are derived by applying the 7-tap filter vertically to the nearest quarter-pels.
[Figure: grid of fractional sample positions. Integer samples Ai,j; horizontally filtered positions a, b, c; vertically filtered positions d, h, n; and the remaining half-/quarter-pels e, f, g, i, j, k, p, q, r in between.]
Notes/Conclusions
1. Luma interpolation can be performed in two serial stages: half-pel and quarter-pel.
2. For motion compensation of an NxM block it is required to load an (N+7)x(M+7) reference block (3 columns/rows left/above, 4 right/below).
3. 8-tap filter coefficients: { -1, 4, -11, 40, 40, -11, 4, -1 }
4. 7-tap filter coefficients: { -1, 4, -10, 58, 17, -5, 1 }, non-symmetric.
5. The 8-tap MC interpolation filter expands the dynamic range (DR) of intermediate results to 22 bits; the first-stage interpolation expands the DR of 8-bit input to 16 bits.
Worst case for the 1/2-pel filter:
b0,0 = -A-3,0 + 4*A-2,0 - 11*A-1,0 + 40*A0,0 + 40*A1,0 - 11*A2,0 + 4*A3,0 - A4,0
The maximal value of b0,0 is 88*255 = 22440, the minimal value is -24*255 = -6120. The same limits also hold for h0,0.
Worst case for the 1/4-pel filter: after shifting by 6 the dynamic range is reduced to 16 bits: -5100 <= e0,0 <= 25500.
j0,0 = ( -b0,-3 + 4*b0,-2 - 11*b0,-1 + 40*b0,0 + 40*b0,1 - 11*b0,2 + 4*b0,3 - b0,4 ) >> 6
Taking into account that b0,k is in the range [-6120 .. 22440], the expression in the parentheses has the following limits:
-88*6120 = -538560 <= -b0,-3 + 4*b0,-2 - 11*b0,-1 + 40*b0,0 + 40*b0,1 - 11*b0,2 + 4*b0,3 - b0,4 <= 88*22440 = 1974720
As in the case of e0,0, the dynamic range in the calculation of j0,0 grows to 22 bits; after the shift by 6 it is reduced back to 16 bits.
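The dynamic-range bounds above follow directly from the filter taps; a small sketch verifying them for 8-bit input:

```python
TAPS8 = [-1, 4, -11, 40, 40, -11, 4, -1]   # half-pel filter from note 3
TAPS7 = [-1, 4, -10, 58, 17, -5, 1]        # quarter-pel filter from note 4

pos = sum(t for t in TAPS8 if t > 0)       # sum of positive taps
neg = -sum(t for t in TAPS8 if t < 0)      # magnitude of negative taps
max_b = pos * 255                          # maximal b0,0 for 8-bit input
min_b = -neg * 255                         # minimal b0,0
```

With pos = 88 and neg = 24 this reproduces the 22440 / -6120 limits, and the second filtering stage scales those limits by another factor of up to 88, giving the 22-bit intermediate range.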
Intra Prediction
Overview
33 angular predictions for both luma and chroma. Two non-directional predictions (DC, Planar). PB sizes from 4x4 up to 64x64. As in AVC/H.264, the luma intra prediction mode is coded predictively and the chroma intra mode is derived from the luma one.
Unlike AVC/H.264, three most probable modes are considered: MPM0, MPM1 and MPM2. The following logic derives the MPMs:
CandA = intra mode of the left PB if the left PB is available and intra-coded, otherwise DC
CandB = intra mode of the top PB if the top PB is available and intra-coded, otherwise DC
If CandA != CandB:
    MPM0 = CandA, MPM1 = CandB
    MPM2 = the first of { Planar, DC, Vertical } not equal to MPM0 or MPM1
Else (CandA = CandB):
    If CandA < 2 (Planar or DC): MPM0 = Planar, MPM1 = DC, MPM2 = Vertical
    Else: MPM0 = CandA; MPM1, MPM2 = the two angular modes adjacent to CandA
Coding & Derivation Luma Intra Prediction Mode (2) Encoder side: if the current luma intra prediction mode is one of the three MPMs, prev_intra_luma_pred_flag is set to 1 and the MPM index (mpm_idx) is signaled. Otherwise, the index of the current luma intra prediction mode among the modes excluding the three MPMs is transmitted to the decoder with a 5-bit fixed-length code (rem_intra_luma_pred_mode).
Coding & Derivation Luma Intra Prediction Mode (3) Decoder side: upon derivation of the MPMs, collect them in candModeList[3] = [MPM0, MPM1, MPM2]. The following pseudocode outlines how the luma intra mode is derived:
If prev_intra_luma_pred_flag = 1 then
    IntraPredMode = candModeList[ mpm_idx ]
Else
    // sort candModeList in ascending order
    if candModeList[0] > candModeList[1], swap the two values
    if candModeList[0] > candModeList[2], swap the two values
    if candModeList[1] > candModeList[2], swap the two values
    Read 5 bits into rem_intra_luma_pred_mode
    IntraPredMode = rem_intra_luma_pred_mode
    if IntraPredMode >= candModeList[0]: IntraPredMode++
    if IntraPredMode >= candModeList[1]: IntraPredMode++
    if IntraPredMode >= candModeList[2]: IntraPredMode++
EndElse
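The decoder-side steps above can be sketched as a small Python function (the function name and list-based interface are illustrative, not from the spec):

```python
def derive_luma_intra_mode(prev_flag, mpm_idx, rem_mode, mpms):
    """Derive IntraPredMode from prev_intra_luma_pred_flag (prev_flag),
    mpm_idx, rem_intra_luma_pred_mode (rem_mode) and the 3 MPMs."""
    if prev_flag:
        return mpms[mpm_idx]
    mode = rem_mode
    for c in sorted(mpms):       # ascending order, as in the pseudocode
        if mode >= c:            # re-insert each excluded MPM below mode
            mode += 1
    return mode
```

E.g. with MPMs {Planar(0), DC(1), Vertical(26)}, the 5-bit value 0 maps to mode 2 - the smallest mode not in the MPM list.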
Coding & Derivation Chroma Intra Prediction Mode
Unlike AVC/H.264, the chroma intra prediction mode is derived from the luma mode and the syntax element intra_chroma_pred_mode (signaled per PB) according to the following table:
intra_chroma_pred_mode: 0 -> Planar, 1 -> Vertical, 2 -> Horizontal, 3 -> DC, 4 -> same as luma (derived mode); if the mode selected by values 0..3 equals the luma mode, angular mode 34 is used instead.
At most 4N+1 neighbor pixels are required (in contrast to H.264/AVC, below-left samples are exploited in HEVC).
[Figure: reference samples for the current PB - left, below-left, top-left, top and top-right predictors, possibly taken from the left, top-left and top CTUs.]
Implementation Angular Intra Prediction (2)
To improve intra prediction accuracy, the projected reference sample location is computed with 1/32 sample accuracy, using bi-linear interpolation. In angular mode the predicted sample predSample(x,y) is derived as follows:
predSample[ x ][ y ] = ( ( 32 - iFact )*ref[ x + iIdx + 1 ] + iFact*ref[ x + iIdx + 2 ] + 16 ) >> 5   (vertical modes)
predSample[ x ][ y ] = ( ( 32 - iFact )*ref[ y + iIdx + 1 ] + iFact*ref[ y + iIdx + 2 ] + 16 ) >> 5   (horizontal modes)
The parameters iIdx and iFact denote the index and the multiplication factor determined by the intra prediction mode (they can be extracted via LUTs).
The weighting factor iFact remains constant across a predicted row or column, which facilitates SIMD implementations of angular intra prediction.
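The per-sample interpolation reduces to one weighted average; a sketch (ref is assumed to hold the projected reference samples, indexed as in the first formula):

```python
def angular_sample(ref, x, i_idx, i_fact):
    """Bilinear 1/32-pel interpolation of one predicted sample:
    ((32 - iFact)*ref[x+iIdx+1] + iFact*ref[x+iIdx+2] + 16) >> 5."""
    return ((32 - i_fact) * ref[x + i_idx + 1]
            + i_fact * ref[x + i_idx + 2] + 16) >> 5
```

With iFact = 0 the reference sample is copied; other iFact values blend the two neighboring reference samples.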
Planar Mode
In AVC/H.264, the plane intra mode requires two multiplications per sample
predL[ x, y ] = Clip1Y( ( a + b*( x - 7 ) + c*( y - 7 ) + 16 ) >> 5 )
plus per-16x16-block overhead for determining the parameters a, b and c. In total the plane mode takes at most three multiplications per sample.
In HEVC, the intra planar mode requires four multiplications per sample:
predSamples[ x ][ y ] = ( ( nT - 1 - x )*p[ -1 ][ y ] + ( x + 1 )*p[ nT ][ -1 ] + ( nT - 1 - y )*p[ x ][ -1 ] + ( y + 1 )*p[ -1 ][ nT ] + nT ) >> ( Log2( nT ) + 1 )
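The planar formula can be sketched directly (the arguments model the reference samples: left[y] = p[-1][y], top[x] = p[x][-1], plus the two corner extensions; names are illustrative):

```python
def planar_predict(nT, left, top, top_right, bottom_left):
    """HEVC planar prediction of an nT x nT block per the formula above."""
    shift = nT.bit_length()          # Log2(nT) + 1 for power-of-two nT
    return [[((nT - 1 - x) * left[y] + (x + 1) * top_right +
              (nT - 1 - y) * top[x] + (y + 1) * bottom_left + nT) >> shift
             for x in range(nT)] for y in range(nT)]
```

A flat neighborhood reproduces itself: with all reference samples equal to 100, every predicted sample is 100.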
Overview
Effective motion data prediction techniques were adopted in HEVC to reduce the motion data portion of the stream.
Unlike other standards (e.g. AVC/H.264), HEVC adopted competitive motion vector prediction for both the regular and the merge mode (the HEVC replacement of AVC/H.264 skip/direct): several predictor candidates compete, and the chosen candidate is signaled in the stream.
Unlike AVC/H.264, in regular MV prediction (advanced motion vector prediction, or AMVP in HEVC jargon) the co-located temporal MV is considered if one of the spatial candidates is unavailable or redundant. In merge mode the candidate set can likewise include a temporal candidate (if one or more spatial candidates are unavailable or redundant).
Including temporal MV prediction in both regular and merge modes improves error resilience. On the other hand, additional storage for the co-located MVs of reference frames is required.
Spatial and temporal MVP candidates can be derived independently and in parallel.
[Figure: spatial merge candidate positions A0, A1, B0, B1, B2 around the current PB.]
Note: due to the limited number of comparisons, and because the temporal candidate is exempted from the pruning process, redundant candidates can appear in the merge list.
[Figure: merge list decoding - CABAC decodes merge_idx; a restricted pruning process removes duplicate candidates; merge_idx selects the candidate.]
Advanced Motion Vector Prediction (AMVP)
The motion vector is predicted from five spatial neighbors B0, B1, B2, A0, A1 (see the figure below) and one co-located temporal MV. Only two motion candidates are chosen among the six, and the selected predictor is explicitly signaled (mvp_lx_flag).
[Figure: spatial AMVP candidate positions - B0, B1, B2 above/top-right, A0, A1 to the left/bottom-left.]
The first motion candidate (left candidate) is chosen from {A0, A1}.
The second motion candidate (top candidate) is chosen from {B0, B1, B2}.
If both candidates are available and have the same motion data, one is excluded.
If one of the above candidates is not available, the temporal MV is used (unless temporal prediction is disabled).
If the number of available candidates is still less than 2, a zero MV is added.
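The list construction above can be sketched as follows (a simplification: MV scaling and the exact availability rules are omitted; names are illustrative):

```python
def build_amvp_list(left_mv, top_mv, temporal_mv):
    """Build the 2-entry AMVP candidate list: left candidate, top candidate
    (pruned if equal to left), then temporal, then zero-MV padding."""
    cands = []
    if left_mv is not None:
        cands.append(left_mv)
    if top_mv is not None and top_mv != left_mv:   # prune duplicate
        cands.append(top_mv)
    if len(cands) < 2 and temporal_mv is not None:
        cands.append(temporal_mv)
    while len(cands) < 2:
        cands.append((0, 0))                       # pad with zero MV
    return cands[:2]
```

The signaled mvp_lx_flag then simply indexes one of the two entries.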
Transform
Overview
The standard supports 32x32, 16x16, 8x8 and 4x4 DCT-like transforms and a 4x4 DST-like transform. Notice that the DST-like 4x4 transform is allowed only in intra mode. Each transform is specified by an 8-bit signed integer transform matrix T. Performing all transform operations requires 32-bit precision:
a) 1D vertical transform of the input block X, producing the intermediate block Z
b) Scaling and clipping of Z to guarantee that the output values fit in 16 bits
c) Y = Z*T (1D horizontal transform)
Notice that in an encoder architecture step (c) can be coupled with quantization: once the first row of Y is completed, quantization of that row can start.
Transform Implementation
Notice that in AVC/H.264 the transform coefficients are dyadic in the 4x4 case and near-dyadic (i.e. of the form 2^n, 2^n-1, 2^n+1) in the 8x8 case, hence the AVC/H.264 transform can be multiplication-free. In HEVC the transform operations are not multiplication-free. Indeed, let a multiplication take 3 cycles and a shift or addition 1 cycle: if all coefficients are near-dyadic we can use only shifts and additions, otherwise we need a multiplier (because the alternative of many shifts and adds hurts performance).
As in AVC/H.264, the transforms in HEVC are separable and can be performed as a sequence of two 1D transforms (vertical and horizontal):
for ( i = 0; i < N; i++ ) { 1D transform on column i }   // vertical
Scaling (right shift by 7) & saturation
for ( j = 0; j < N; j++ ) { 1D transform on row j }      // horizontal
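The two-pass structure can be sketched with the 4x4 HEVC core matrix (the matrix is from the spec; the shift amounts used here are illustrative - in the spec they depend on bit depth and transform size):

```python
T4 = [[64,  64,  64,  64],
      [83,  36, -36, -83],
      [64, -64, -64,  64],
      [36, -83,  83, -36]]          # HEVC 4x4 core transform matrix

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def forward_4x4(x, shift1=1, shift2=8):
    """Separable 2D forward transform: vertical pass T*x, an intermediate
    right shift, then horizontal pass (.)*T^t and a final shift."""
    z = [[v >> shift1 for v in row] for row in mat_mul(T4, x)]   # vertical
    t4_t = [list(col) for col in zip(*T4)]                       # T^t
    return [[v >> shift2 for v in row] for row in mat_mul(z, t4_t)]
```

A constant block transforms to a single DC coefficient, all other outputs being exactly zero.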
Comments
As in previous standards, the HEVC DCT works well on flat areas but fails on areas with noise, contours and other signal peculiarities. The HEVC DCT is efficient for large block sizes but loses efficiency on smaller blocks.
Beginning with 16x16 transforms, visual artifacts become noticeable; the larger the transform size, the more artifacts are observed. Deblocking can reduce artifacts on TB boundaries, while artifacts inside a TB can be reduced only by SAO. Therefore it is recommended to apply SAO when large transform sizes (32x32) are used.
HW Aspects of Transform 1D 8x8 case (2)
A1 is a butterfly block (no multiplications), Q = A1 * Z{1..4}; Pc is a permutation matrix.
4x4 DST (Discrete Sine Transform)
The 4x4 DST is applied only for intra prediction. The 4x4 DST matrix from the HEVC spec:
 29  55  74  84
 74  74   0 -74
 84 -29 -74  55
 55 -84  74 -29
Motivation: intra prediction is based on the top and left neighbors. Prediction accuracy is higher for pixels located near the top/left neighbors than for those farther away; in other words, residuals of pixels far from the top/left neighbors are usually larger than those of nearby pixels. The DST is better suited to code such residuals, since the DST basis functions start low and increase, unlike the conventional DCT basis functions. The 4x4 DST is reported to provide a performance gain of about 1% over the DCT; for bigger sizes the gain is negligible.
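As a sanity check, the rows of the DST matrix above are nearly orthogonal with squared norm close to 128^2 = 16384, matching the scaling of the DCT-like core matrices (a quick verification sketch):

```python
S4 = [[29,  55,  74,  84],
      [74,  74,   0, -74],
      [84, -29, -74,  55],
      [55, -84,  74, -29]]          # 4x4 DST matrix as given above

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

norms = [dot(r, r) for r in S4]                  # each close to 16384
max_off = max(abs(dot(S4[i], S4[j]))
              for i in range(4) for j in range(4) if i != j)
```

The off-diagonal products are at most 15 in magnitude versus diagonal terms near 16384, i.e. the integer matrix is a close approximation of an orthogonal transform.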
Entropy Coding
Overview
HEVC specifies only one entropy coding method, CABAC, compared to two (CABAC and CAVLC) in H.264/AVC. HEVC CABAC encoding involves three main functions:
a) Binarization - maps the syntax elements to binary symbols (bins)
b) Context modeling - estimates the probability of the bins
c) Arithmetic coding - compresses the bins to bits based on the estimated probability
Memory Requirements
In AVC/H.264 the context of some syntax elements (e.g. mvd) depends on top and left values. The dependence on top values requires line buffers, which may be an issue for 4K resolutions.
In HEVC the dependence on top values in context selection is almost entirely removed. For example, unlike AVC/H.264, in HEVC mvd is coded without the need to know neighboring mvd values.
Residual Coding: Overview
Each Transform Block (TB) is divided into 4x4 sub-blocks (coefficient groups). Processing starts with the last significant coefficient and proceeds to the DC coefficient in reverse scanning order. Coefficient groups are processed sequentially in reverse order (from bottom-right to top-left) as illustrated in the following figure.
Note: experiments show that allowing horizontal and vertical scans for large TBs offers little compression benefit, so the vertical and horizontal scans are limited to 4x4 and 8x8 TBs.
Residual Coding: Multi-Level Significance
Unlike AVC/H.264, three-level significance is used; the intermediate level is added to exploit transform block sparsity:
Level 0: coded_block_flag is signaled for each TB to specify the significance of the entire TB.
Level 1 (intermediate): if coded_block_flag = 1, the TB is divided into 4x4 coefficient groups (CGs), and the significance of each entire CG is signaled by coded_sub_block_flag:
a) The coded_sub_block_flag syntax elements are signaled in reverse order (from bottom-right towards top-left) according to the selected scan.
b) The coded_sub_block_flag is not signaled for the last CG (the CG containing the last significant coefficient); the decoder can infer significance since the last level is present.
c) The coded_sub_block_flag is not signaled for the CG containing the DC position; it is inferred to 1. Motivation: the probability of this CG being entirely zero is low, so inferring it improves coding efficiency.
Level 2: if coded_sub_block_flag = 1, significant_coeff_flag is signaled to specify the significance of individual coefficients, in reverse order (from bottom-right towards top-left) according to the selected scan.
Residual Coding: Levels
At the start of a residual block, the coordinates of the last significant coefficient are signaled (last_significant_coeff_x, last_significant_coeff_y). Coding proceeds backward from the last significant coefficient toward (0,0). The coding of each CG generally consists of five separate loops (passes), which benefits parallelization:
1. significant_coeff_flag loop
2. coeff_abs_level_greater1_flag loop
3. coeff_abs_level_greater2_flag (at most one flag is coded)
4. coeff_sign_flag loop
5. coeff_abs_level_remaining loop
Hint for parallelization: grouping syntax elements of the same type enables parallel processing. For example, while the coeff_abs_level_greater1_flags are being processed, the significance map contexts for the next CG can be pre-calculated.
Residual Coding: significant_coeff_flag
significant_coeff_flag indicates whether the transform coefficient is non-zero. 1 bin, regular coding, 3 context models.
Context model derivation for 8x8 and larger TBs: the context depends on the coded_sub_block_flag of the neighboring right (sr) and lower (sl) CGs and on the coefficient position in the current CG. Motivation: if contexts depended on the significance of the immediately preceding coefficients, data dependencies would arise within a CG; the CG-based derivation avoids them and benefits parallelization, with negligible coding loss (around 0.1% as reported in JCTVC-I0296).
[Figure: current CG with its right (sr) and lower (sl) neighbor CGs along the coding direction.]
Notes:
- If coded_sub_block_flag is 1 and all the other coefficients of the CG are zero (i.e. all coefficients are zero except the (0,0) position), the significant_coeff_flag of the DC position is not coded and is inferred to 1 (inferSbDcSigCoeffFlag).
- The significant_coeff_flag of the position pointed to by the last significant coefficient is not coded; it is inferred to 1 there, and the flags beyond it are inferred to 0.
Residual Coding: coeff_abs_level_greater1_flag
coeff_abs_level_greater1_flag indicates (if signaled) whether the transform coefficient has absolute value > 1. 1 bin, regular coding, 24 context models.
Only the first eight coeff_abs_level_greater1_flags in a CG are coded; the rest are inferred to 0. Motivation: reduce the number of regularly (context) coded bins, especially at high bit-rates, and improve CABAC throughput - at most 8 such flags are coded per CG instead of 16.
There are 4 context model sets for luma (denoted 0, 1, 2, 3) and 2 for chroma (denoted 4, 5); each set contains 4 context models. The derivation consists of two steps: inference of the context set and derivation of the model inside the selected set. Context set derivation:
# greater1 flags in previous CG | Luma, CG with DC | Luma, CG without DC | Chroma
 0                              | 0                | 2                   | 4
>0                              | 1                | 3                   | 5
Residual Coding: coeff_abs_level_greater2_flag
coeff_abs_level_greater2_flag indicates (if signaled) whether the transform coefficient has absolute value > 2. Unlike coeff_abs_level_greater1_flag, this flag is signaled at most once per CG. 1 bin, regular coding, 6 context models. If all coeff_abs_level_greater1_flags are 0, coeff_abs_level_greater2_flag is not signaled and is inferred to 0.
Context model derivation:
# greater1 flags in previous CG | Luma, CG with DC | Luma, CG without DC | Chroma
 0                              | 0                | 2                   | 4
>0                              | 1                | 3                   | 5
Notice that the derivation of the context model for coeff_abs_level_greater2_flag is identical to the derivation of the context set for coeff_abs_level_greater1_flag.
Residual Coding: Sign Data Hiding (SDH)
The encoder may be required to modify coefficients to embed the sign (potentially increasing quantization noise).
If the distance in scan order between the first and the last nonzero coefficient is less than 4, SDH is not used. Notice that the fixed value 4 was chosen experimentally (see JCTVC-I0156); it may be a bad choice on some streams. If only one nonzero coefficient is present in the CG, SDH is not activated.
When is SDH beneficial? When the percentage of sign bits is substantial (expected at low bit-rates).
Disadvantages of SDH: more complexity and (potentially) increased quantization noise.
If there are nonzero delta values, find the minimum minNzDelta among abs( delta ):
    If minNzDelta > 0, adjust qCoef = qCoef + 1
    Else [minNzDelta < 0], adjust qCoef = qCoef - 1
Else [all delta values are zero]:
    Take the highest-frequency coefficient and adjust it.
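The hiding mechanism itself is simple parity arithmetic; a sketch (the encoder-side adjustment here is a crude illustration - real encoders pick the coefficient whose change costs least in rate-distortion terms):

```python
def hidden_sign(abs_levels):
    """Decoder side: infer the sign of the first nonzero coefficient in
    scan order from the parity of the sum of absolute levels in the CG
    (even -> positive, odd -> negative)."""
    return 1 if sum(abs_levels) % 2 == 0 else -1

def embed_sign(levels):
    """Encoder-side sketch: make the parity match the sign of the first
    nonzero coefficient by adjusting one level by +/-1 if needed."""
    first = next(v for v in levels if v != 0)
    if (sum(abs(v) for v in levels) % 2 == 0) != (first > 0):
        levels[-1] += 1 if levels[-1] >= 0 else -1   # crude adjustment
    return levels
```

Since the parity carries the sign, one coeff_sign_flag bin per CG is saved at the price of the occasional level adjustment.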
Residual Coding: coeff_abs_level_remaining
coeff_abs_level_remaining is the remaining absolute value of the coefficient level. All its bins are bypass coded (to increase throughput). The total level is derived as:
Level = 1 + coeff_abs_level_greater1_flag + coeff_abs_level_greater2_flag + coeff_abs_level_remaining
Binarization: HEVC employs adaptive Golomb-Rice coding for small values and switches to an Exp-Golomb code for larger values. The Golomb-Rice parameter cRiceParam depends on previously coded levels. The transition point to Exp-Golomb is where the unary prefix length equals 4. The maximal codeword for coeff_abs_level_remaining is kept within 32 bits.
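A sketch of this binarization producing bin strings (treat the escape details as illustrative of the TR/EGk scheme rather than a bit-exact transcription of the spec): values below 4 << k use truncated Rice with parameter k, larger values escape to (k+1)-th order Exp-Golomb.

```python
def exp_golomb(v, m):
    """m-th order (m >= 1) Exp-Golomb bin string for v >= 0."""
    n = 0
    while v >= (1 << (n + m)):
        v -= 1 << (n + m)
        n += 1
    return "1" * n + "0" + format(v, "0%db" % (n + m))

def binarize_remaining(value, k):
    """coeff_abs_level_remaining binarization sketch: Golomb-Rice below
    the escape point 4 << k, then (k+1)-th order Exp-Golomb."""
    if value < (4 << k):
        suffix = format(value & ((1 << k) - 1), "0%db" % k) if k else ""
        return "1" * (value >> k) + "0" + suffix    # unary prefix + k bits
    return "1111" + exp_golomb(value - (4 << k), k + 1)
```

Because all these bins are bypass coded, the bin string length directly approximates the bit cost of the level.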
[Table: worked example of one CG - scan positions 15..0 with their coefficient levels and the signaled significant_coeff_flag, greater1/greater2 flags and remaining levels; flags beyond the per-CG limits are marked "not coded".]
Residual Coding - Notes
On a SW decoder the processing of residuals takes ~8% of the computation for 4K video. It is a challenge to speed up or parallelize the residual coding loop in SW:
- There are multiple branches within the loops.
- There are data dependencies among adjacent data.
- The loop counts of the passes can differ (a challenge for loop unrolling).
- For coeff_abs_level_remaining the binarization type depends on the previous coefficient level.
Deblocking
Overview
The deblocking filter is applied to all samples adjacent to PB or TB boundaries, with the following exceptions: picture boundaries.
[Figure: only edges lying on the 8x8 grid are deblocked - internal edges of blocks smaller than 8x8 that fall off the grid are not filtered.]
Deblocking Algorithm
1. For each edge on the 8x8 grid, determine the filter strength (Bs).
2. From the filter strength and the average quantization parameter (QP), determine two thresholds: tC and β.
3. Based on the values of the edge pixels, β and tC, modify the pixels (if needed).
Top Line Buffer
For deblocking of horizontal edges, top reference storage is necessary: 4 luma top lines (samples p0..p3 on one side of the edge, q0..q3 on the other).
p0..p2 luma pixels are modifiable, while p0..p3 are taken into consideration. For chroma, p0 is modifiable while p0, p1 are taken into consideration.
Determination of thresholds tC and β
Thresholds tC and β are derived by the following table:
Vertical Edge Filtering (1) - derivation of d
The on/off decision for all 4 lines/columns is based on two lines/columns (#0 and #3):
dp0 = Abs( p2,0 - 2*p1,0 + p0,0 )
dp3 = Abs( p2,3 - 2*p1,3 + p0,3 )
dq0 = Abs( q2,0 - 2*q1,0 + q0,0 )
dq3 = Abs( q2,3 - 2*q1,3 + q0,3 )
dpq0 = dp0 + dq0
dpq3 = dp3 + dq3
dp = dp0 + dp3
dq = dq0 + dq3
d = dpq0 + dpq3
Notice that lines #1 and #2 don't participate in the calculation of d.
Vertical Edge Filtering (2) - derivation of dSam0 and dSam3
dSam0 = 0, dSam3 = 0
If ( dpq0 < ( β >> 2 ) ) And ( Abs( p3,0 - p0,0 ) + Abs( q0,0 - q3,0 ) < ( β >> 3 ) ) And ( Abs( p0,0 - q0,0 ) < ( ( 5*tC + 1 ) >> 1 ) ) Then { dSam0 = 1 }
If ( dpq3 < ( β >> 2 ) ) And ( Abs( p3,3 - p0,3 ) + Abs( q0,3 - q3,3 ) < ( β >> 3 ) ) And ( Abs( p0,3 - q0,3 ) < ( ( 5*tC + 1 ) >> 1 ) ) Then { dSam3 = 1 }
Vertical Edge Filtering (3) - derivation of dE, dEp and dEq
dE = 0 (takes values 0, 1, 2); dEp = 0 (takes values 0, 1); dEq = 0 (takes values 0, 1)
If d < β Then dE = 1
    If dSam0 = 1 and dSam3 = 1 then dE = 2   // strong filter, modify p0..p2, q0..q2
    If dp < ( ( β + ( β >> 1 ) ) >> 3 ) then dEp = 1
    If dq < ( ( β + ( β >> 1 ) ) >> 3 ) then dEq = 1
If dE = 2 // strong filtering, p0..p2, q0..q2 are modified
{
  for each k = 0..3 {
    p0,k = Clip3( p0,k - 2*tC, p0,k + 2*tC, ( p2,k + 2*p1,k + 2*p0,k + 2*q0,k + q1,k + 4 ) >> 3 )
    p1,k = Clip3( p1,k - 2*tC, p1,k + 2*tC, ( p2,k + p1,k + p0,k + q0,k + 2 ) >> 2 )
    p2,k = Clip3( p2,k - 2*tC, p2,k + 2*tC, ( 2*p3,k + 3*p2,k + p1,k + p0,k + q0,k + 4 ) >> 3 )
    q0,k = Clip3( q0,k - 2*tC, q0,k + 2*tC, ( p1,k + 2*p0,k + 2*q0,k + 2*q1,k + q2,k + 4 ) >> 3 )
    q1,k = Clip3( q1,k - 2*tC, q1,k + 2*tC, ( p0,k + q0,k + q1,k + q2,k + 2 ) >> 2 )
    q2,k = Clip3( q2,k - 2*tC, q2,k + 2*tC, ( p0,k + q0,k + q1,k + 3*q2,k + 2*q3,k + 4 ) >> 3 )
  }
}
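The strong-filter formulas for one line can be transcribed directly (a sketch; p = [p0..p3] and q = [q0..q3] are the samples on either side of the edge):

```python
def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def strong_filter_line(p, q, tc):
    """Apply the strong deblocking filter to one line; returns the three
    modified samples on each side of the edge."""
    np0 = clip3(p[0] - 2 * tc, p[0] + 2 * tc,
                (p[2] + 2 * p[1] + 2 * p[0] + 2 * q[0] + q[1] + 4) >> 3)
    np1 = clip3(p[1] - 2 * tc, p[1] + 2 * tc,
                (p[2] + p[1] + p[0] + q[0] + 2) >> 2)
    np2 = clip3(p[2] - 2 * tc, p[2] + 2 * tc,
                (2 * p[3] + 3 * p[2] + p[1] + p[0] + q[0] + 4) >> 3)
    nq0 = clip3(q[0] - 2 * tc, q[0] + 2 * tc,
                (p[1] + 2 * p[0] + 2 * q[0] + 2 * q[1] + q[2] + 4) >> 3)
    nq1 = clip3(q[1] - 2 * tc, q[1] + 2 * tc,
                (p[0] + q[0] + q[1] + q[2] + 2) >> 2)
    nq2 = clip3(q[2] - 2 * tc, q[2] + 2 * tc,
                (p[0] + q[0] + q[1] + 3 * q[2] + 2 * q[3] + 4) >> 3)
    return [np0, np1, np2], [nq0, nq1, nq2]
```

A perfectly flat line is left untouched, while a step edge is smoothed within the +/-2*tC clipping range.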
Background
Idea: quantization makes the reconstructed and original blocks differ. The quantization error is not uniformly distributed among pixels: there is a bias in the distortion around edges, which can be eliminated or reduced.
Background (cont.)
In addition to the bias in quantization distortion around edges, systematic errors related to specific ranges of pixel values can also occur. Both types of systematic error (bias) are corrected by SAO.
Overview
SAO is the second in-loop processing tool (after deblocking) adopted in HEVC/H.265. SAO is applied after deblocking. For efficient HW implementation SAO can be coupled with deblocking in the coding loop.
SAO can be optionally turned off or applied only on luma samples or only on chroma samples (regulated by slice_sao_luma_flag and slice_sao_chroma_flag ).
SAO parameters can be either explicitly signalled in CTU header or inherited from left or above CTUs.
Like deblocking, SAO is applied adaptively; unlike deblocking, SAO can be applied to all pixels, not only those near block boundaries.
There are two types of SAO: Edge Type - offset depends on edge mode (signaled by SaoTypeIdx = 2) Band Type - offset depends on the sample amplitude (SaoTypeIdx = 1) Note: chroma CTBs share the same SaoTypeIdx.
Edge Type SAO
In the edge type, edges are searched along one of the following directions: horizontal, vertical or one of the two diagonals (the direction is signaled by the sao_eo_class parameter, once per CTU):
Notes: the sample labeled p is the current sample; the two samples labeled n0 and n1 are its neighbors along the chosen direction. Edge detection is applied to each sample; according to the result, the sample is classified into one of five categories (EdgeIdx):
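The classification is a two-neighbor sign test; a sketch of the category mapping:

```python
def edge_idx(p, n0, n1):
    """SAO edge classification of sample p against its two neighbors
    along the chosen direction: 1 = local minimum, 2/3 = concave/convex
    edge, 4 = local maximum, 0 = none (no offset applied)."""
    sign = lambda v: (v > 0) - (v < 0)
    return {-2: 1, -1: 2, 1: 3, 2: 4}.get(sign(p - n0) + sign(p - n1), 0)
```

Monotone slopes (p strictly between its neighbors) fall into category 0 and receive no offset.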
Edge Type SAO (cont.)
According to EdgeIdx, the corresponding sample offset (signaled by sao_offset_abs and sao_offset_sign) is added to the current sample. Up to 12 edge offsets (4 luma, 4 Cb chroma and 4 Cr chroma) are signaled per CTU. To reduce the bit overhead there is a merge mode (signaled by the sao_merge_up_flag and sao_merge_left_flag flags) which enables direct inheritance of the SAO parameters from the top or left CTU.
SAO is reported to reduce ringing and mosquito artifacts and to improve subjective quality for low-compression-ratio video.
Band Type SAO
The pixel range 0..255 is uniformly split into 32 bands, and sample values belonging to four consecutive bands are modified by adding values denoted as band offsets.
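For 8-bit samples the band index is just the five most significant bits; a sketch of applying the four signaled offsets (the parameter names loosely follow sao_band_position and are illustrative):

```python
def band_filter(sample, band_position, offsets):
    """Band-type SAO for an 8-bit sample: 32 bands of width 8; the four
    consecutive bands starting at band_position receive offsets[0..3]."""
    band = sample >> 3                               # band index 0..31
    if band_position <= band < band_position + 4:
        sample += offsets[band - band_position]
    return max(0, min(255, sample))                  # clip to 8 bits
```

Samples outside the four selected bands pass through unchanged.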
For SAO, the left and top lines of pixels need to be kept in memory.
Pipeline chain options:
a) QTR -> Deblock + SAO decisions -> SAO
b) QTR + SAO decisions -> Deblock -> SAO
In scheme (a), statistical information is gathered and the decision on the SAO parameters is made during deblocking. Because the SAO parameters are determined in the Deblock stage, CABAC can't run in parallel with deblocking. Scheme (b) enables parallel Deblock and CABAC with negligible coding-efficiency loss.
Parallelization Tools
Overview
HEVC adopted three built-in parallel processing tools:
- Slices
- Tiles
- Wavefronts (WPP)
Slices
As in H.264/AVC, slices are groups of CTUs in scan order, separated by start code.
Dependencies of a dependent slice include the following:
- Slice header dependency: a short slice header is used; the missing elements are copied from the preceding normal slice.
- Context model dependency: CABAC context models are not re-initialized to defaults at the beginning of a dependent slice.
- Spatial prediction dependency: intra and motion vector prediction are not broken.
Restriction: each dependent slice must be preceded by a non-dependent slice. A picture always starts with a normal slice, followed by zero or more dependent slices.
Rationale for dependent slices: they allow the data associated with a particular wavefront thread or tile to be carried in a separate NAL unit, making it available to the system for fragmented packetization with lower latency than if it were all coded together in one slice.
Cons: a rate-distortion penalty is incurred due to the breaking of dependencies at slice boundaries, and overhead is added since each slice is preceded by a slice header.
Tiles
Tiles are rectangular groups of CTUs. Tiles are transmitted in raster scan order, and the CTUs inside each tile are also processed in raster scan order. All dependencies are broken at tile boundaries. The entropy coding engine is reset at the start of each tile and flushed at its end. Only the deblocking filter can optionally be applied across tiles, in order to reduce visual artifacts.
Tiles (cont.)
At the end of each tile CABAC is flushed, so each tile ends on a byte boundary. The tile entry points (actually offsets) are signaled at the start of the picture to enable a decoder to process tiles in parallel. Due to their higher area/perimeter ratio, square tiles are more beneficial than elongated rectangular ones (the perimeter represents the boundaries where dependencies are broken).
Pros: friendly to multi-core implementations - they can be built by simply replicating single-core designs. A picture (4K TV) can be composed from multiple rectangular sources encoded independently; with slices only horizontal stripes can be composed.
Cons:
Pre-defined tile structure makes MTU size matching challenging. Breaking intra and motion vector prediction across tile boundaries deteriorates coding efficiency.
Wavefronts (WPP)
The picture is divided into rows of CTUs. The first row is processed in the ordinary way. The second row can start once the first two CTUs of the first row have been completed. The third row can start after the first two CTUs of the second row have been processed, and so on.
Wavefronts (WPP)
The context models of the entropy coder in each row are inferred from those of the preceding row with a small fixed processing lag: the context models are inherited after the second CTU of the previous row.
CABAC is flushed after the last CTU of each row, making each row end on a byte boundary.
Dependencies across rows of CTUs are not broken.
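The two-CTU lag yields a simple wavefront schedule: CTU (r, c) may run in wave c + 2r, since both of its dependencies, (r, c-1) and (r-1, c+1), belong to earlier waves (a scheduling sketch; the function name is illustrative):

```python
def wavefront_schedule(rows, cols):
    """Group CTUs into waves that can be processed in parallel under the
    WPP dependency: (r, c) needs (r, c-1) and (r-1, c+1) done first."""
    waves = {}
    for r in range(rows):
        for c in range(cols):
            waves.setdefault(c + 2 * r, []).append((r, c))
    return [waves[w] for w in sorted(waves)]
```

Each row starts two waves after the row above it, which is exactly the "first two CTUs completed" condition described above.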
Wavefronts (cont.)
CABAC is flushed at the end of each CTU row so that each row ends on a byte boundary, facilitating parallel processing. The CABAC engine is re-initialized at the start of each CTU row to enable parallel processing.
Wavefronts (cont.)
At the start of each CTU row the CABAC contexts are inherited from the row above (after its second CTU has finished, in order to minimize the training penalty). In other words, CABAC context derivation crosses row boundaries, which requires some synchronization among cores.
Entry points of each CTU row are explicitly signaled in the picture/slice header.
Cons: MTU size matching is challenging with wavefronts. Frequent cross-core data communication; inter-processor synchronization for WPP is complex.
Notes
Wavefront parallel encoding is reported to give a BD-rate degradation of around 1.0% compared to non-parallel mode. Bitrate savings from 1% to 2.5% are observed at the same QP for wavefronts versus tiles (each row encompassed by a single tile). Wavefronts and tiles can't co-exist in a single frame.
2. Random Access: Main configuration - encoder_randomaccess_main.cfg; High Efficiency (10 bits per sample) - encoder_randomaccess_he10.cfg
4. Low Delay (DPB contains a single reference frame): I P P P P. Main configuration - encoder_lowdelay_P_main.cfg; High Efficiency (10 bits per sample) - encoder_lowdelay_P_he10.cfg