Truncated SVD For Image Compression
I. INTRODUCTION

Image compression is very important in the digital environment, because images contain a vast amount of information and, without any compression, the memory required for an image is huge. Before any compression algorithm is applied, the memory required for a 2-hour movie with a resolution of 2048x1080, a color depth of 3 bytes and a rate of 25 frames/s is around 1200 GB (2048 x 1080 x 3 bytes x 25 frames/s x 7200 s ≈ 1.2 x 10^12 bytes). Therefore, images have to be compressed to make them feasible to store and transfer.

Nowadays there are many successful image and video compression methods, such as JPEG and MPEG. These methods already compress the original images or videos with a very high compression rate and little perceptible loss. However, especially in computer vision and robotics applications, these compressed images should be compressed even further. For some applications, many images have to be stored (e.g. face recognition applications), and these images should be compressed with the least possible information loss; in this case, the truncated SVD image compression method can be very useful. In this work, the truncated SVD method for image compression is studied.

II. METHOD

Any matrix A ∈ R^{m x n}, m ≥ n, can be represented as follows:

A = U Σ V^T,   U ∈ R^{m x n}, Σ = diag(σ1, σ2, ..., σn), V ∈ R^{n x n},   (1)

where the matrices U and V have orthonormal columns and σ1 ≥ σ2 ≥ ... ≥ σn ≥ 0. This representation is called the thin SVD of the matrix A. The equation can also be written in the following form:

A = [σ1 U1, σ2 U2, ..., σn Un] [V1^T; V2^T; ...; Vn^T]   (2)

A = σ1 U1 V1^T + σ2 U2 V2^T + ... + σn Un Vn^T   (3)

If the sequence σ1, σ2, ..., σn decays fast, we may approximate the matrix A by using only the first r dominant terms:

A ≈ σ1 U1 V1^T + σ2 U2 V2^T + ... + σr Ur Vr^T,   r < n   (4)

This is called the truncated SVD approximation of the matrix A. If the matrix A represents a gray-scale image (an image is assumed to be gray-scale just for simplicity; the method is easily extended to RGB images), we can use this property for image compression by storing U1, ..., Ur, V1, ..., Vr and the scalars σ1, ..., σr instead of storing the image (the matrix A of size m by n) itself. In this way, we need to store (m + n + 1)r values instead of mn values.
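As a minimal MATLAB sketch of this compression step (illustrative only: a random matrix stands in for the image, and the variable names and the choice r = 50 below are arbitrary), the truncated factors can be taken directly from the thin SVD:

A = rand(512, 640);                      %stand-in for an m-by-n gray-scale image
r = 50;                                  %number of dominant terms kept
[U, S, V] = svd(A, 'econ');              %thin SVD, relation (1)
A_r = U(:, 1:r)*S(1:r, 1:r)*V(:, 1:r)';  %truncated approximation, relation (4)
[m, n] = size(A);
storage_ratio = (m + n + 1)*r/(m*n);     %stored values vs. the original mn values
rel_err = norm(A - A_r, 'fro')/norm(A, 'fro');
%Note: singular values of a random matrix decay slowly, so rel_err is large
%here; for natural images the decay is much faster and the error much smaller.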
Relation (3) always holds when m ≥ n, so we should generalize it to all cases (m ≥ n and m < n), because the image size could be anything. Defining a new variable p = min(m, n), the generalized version of relation (3) is

A = σ1 U1 V1^T + σ2 U2 V2^T + ... + σp Up Vp^T   (5)

Then relation (4) takes the following form:

A ≈ σ1 U1 V1^T + σ2 U2 V2^T + ... + σr Ur Vr^T,   r < p   (6)

1) Implementation 1-a: When we implement this method, we should consider two situations. The first situation is that the image size (m and n) is not too big, so we can compute the thin SVD directly with the MATLAB built-in function "svd". In order to reduce the computational cost, we should call "svd" with the option "econ". In that way, if m ≥ n, MATLAB computes the n by n matrix V and the n by n diagonal matrix Σ and then obtains the matrix U from relation (8). If m < n, MATLAB computes the m by m matrix U and the m by m diagonal matrix Σ and then obtains the matrix V from relation (11). Therefore, MATLAB avoids computations with a unitary matrix of size max{m, n} by max{m, n}.

If m ≥ n, V is an orthonormal square matrix, so V^T V = I_{n,n}, and we can find the matrix U by matrix multiplication:

AV = U Σ   (7)
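The following small check (illustrative only; the matrix below is an arbitrary random example with m ≥ n and nonzero singular values) shows relation (7) in use: given Σ and V, the matrix U = A V Σ^{-1} is recovered with orthonormal columns.

A = rand(300, 200);                      %arbitrary example with m >= n
[~, Sigma, V] = svd(A, 'econ');          %suppose only Sigma and V are available
U = A*V*diag(1./diag(Sigma));            %relation (7) rearranged: U = A*V*Sigma^(-1)
orth_err = norm(U'*U - eye(size(U, 2))); %close to machine precision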
U'^T U' = [U1^T Q^T; U2^T Q^T; ...; Un^T Q^T] [Q U1, Q U2, ..., Q Un]   (17)

Because Q^T Q = I, the Q's can be eliminated:

U'^T U' = [U1^T; U2^T; ...; Un^T] [U1, U2, ..., Un] = I   (18)
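Relations (17)-(18) can also be checked numerically with a throwaway example (again an arbitrary random matrix, not the project data): after a thin QR factorization A = QR and an SVD of the small square factor R = U1 Σ V1^T, the product Q U1 indeed has orthonormal columns.

A = rand(400, 150);                            %arbitrary tall matrix
[Q, R] = qr(A, 0);                             %thin QR factorization
[U1, S1, V1] = svd(R);                         %SVD of the small square factor R
Uprime = Q*U1;                                 %candidate left singular vectors of A
orth_err = norm(Uprime'*Uprime - eye(size(Uprime, 2)));   %~1e-15 in practice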
Image        Method 1a   Method 1b
Cat          0.0106      0.0119
Amsterdam    0.0112      0.0117
Wave         0.1470      0.1510
             r = 10     r = 20     r = 50     r = 100
Madrid-1     15.1007    15.5689    17.2867    18.9045
Madrid-2     17.5421    17.8494    18.4628    19.2378
Chair-1      11.1754    11.8750    11.9998    16.4475
Chair-2      12.5845    12.8448    13.2292    17.3668
Fig. 14: 'Oasis' image. Original image size: 1500x1500. (a) Original image; (b) RI, r=50, s=1, MR=0.0667, MSE=1.67x10^-5; (c) RI, r=50, s=10, MR=0.3367, MSE=4.38x10^-9; (d) RI, r=50, s=20, MR=0.6684, MSE=1.54x10^-10.

APPENDIX
MATLAB CODE
function [U, Sigma, V] = trunc_svd_1a(A, r)
%Method 1a, thin SVD
%This function computes the truncated SVD of matrix A
%r is the desired number of the most dominant singular values
%r has to be less than or equal to min(m,n)
%U is m by r with orthonormal columns
%V is n by r with orthonormal columns
%Sigma is r by r diagonal matrix with the dominant singular values

[m, n] = size(A);

%Check if r is valid
if r >= min(m, n)
    display('r has to be less than or equal to min(m,n), r is set to min(m,n)');
    r = min(m, n);
end

[U, Sigma, V] = svd(A, 'econ');
U = U(:, 1:r);
V = V(:, 1:r);
Sigma = Sigma(1:r, 1:r);

end
%---------------------------------------------------------------------------
function [U, Sigma, V] = trunc_svd_1b(A, r)
% ...

%Check if r is valid
if r >= min(m, n)
    display('r has to be less than or equal to min(m,n), r is set to min(m,n)');
    r = min(m, n);
end

if m >= n
    %QR factorization of input matrix A
    %Option '0' enables thin QR factorization
    % ...
else %in the case of m<n, then we should work with A^T
    %QR factorization of transpose of input matrix A
    [Q, R] = qr(A', 0);

    %SVD of square matrix R
    % ...
    U = V1;
    V = Q*U1;

end

U = U(:, 1:r);
V = V(:, 1:r);
Sigma = Sigma(1:r, 1:r);

end

%---------------------------------------------------------------------------
% ...
global N_dim;
[m, n] = size(A);

%Check if r is valid
if r >= min(m, n)
    display('r has to be less than or equal to min(m,n), r is set to min(m,n)');
    r = min(m, n);
end

%Define alpha
alpha = norm(A)/10;

% ...
N_dim = n;
[V, D] = jdqr('ATA', 'K', r, struct('Precond', M));
Sigma = sqrt(D);
temp = diag(sqrt(D));
Sigma_inv = diag(1./temp);
U = A*V*Sigma_inv;

% ...
M1 = tril(A(1:m, 1:m)) + alpha*eye(m); M2 = M1';
%Precond matrix
M = [M1, M2];

N_dim = m;

% ...
    y = N_dim;

else
    y = [];
end

% ...
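%---------------------------------------------------------------------------
%Illustrative sketch (not from the project code): the same A^T*A idea as the
%jdqr-based routine above, but with the built-in eigs instead of jdqr. The r
%dominant eigenpairs of A'*A give V and sigma^2, and U is then recovered as
%in relation (7). The matrix A below is an arbitrary stand-in with m >= n
%and full column rank.
A = rand(2000, 500);
r = 50;
[V, D] = eigs(A'*A, r);                  %r largest eigenpairs of A'*A
Sigma = sqrt(D);                         %singular values of A (diagonal matrix)
U = A*V*diag(1./diag(Sigma));            %recover the left singular vectors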
% ...
    [U1, S1, V1] = trunc_svd_1b(A, r);
    t2(i) = toc;
end

% ...
%Truncated SVD for large-sized images
repeat = 5;
t1 = zeros(repeat, 1);
t2 = zeros(repeat, 1);
for i = 1:repeat
    tic,
    %A^T*A (or A*A^T) is not used
    [U2, S2, V2] = trunc_svd_2(A, r);
    t1(i) = toc;

    tic,
    %A^T*A (or A*A^T) is used
    [U3, S3, V3] = trunc_svd_2b(A, r);
    t2(i) = toc;
end

% ...
%Reconstructed image
A_reconst = U2*S2*V2';

% ...
%Reshape reconstructed image
A_reconst = reshape(A_reconst, [m, n]);

%Display reconstructed image
figure(); imshow(A_reconst, []);