
SCIENTIFIC COMPUTING - PROJECT 1

Truncated SVD for Image Compression


Mehmet Ozgur Turkoglu
Student ID: s1814389, E-mail: moturkoglu@gmail.com
Educational Program: MSc. in Electrical Engineering
Sunday 4th June, 2017

I. INTRODUCTION

Image compression is very important for the digital environment, because images contain a vast amount of information and, without any compression, the memory required for an image is huge. Before any compression algorithm is applied, the memory required for a 2-hour movie with a resolution of 2048x1080, a color depth of 3 bytes, and a frame rate of 25 frames/s is around 1200 GB. Therefore, images have to be compressed to make them feasible to store and transfer.

Nowadays there are many successful image and video compression methods such as JPG and MPG. These methods already compress the original images or videos with a very high compression rate and without loss. However, especially in computer vision and robotics applications, these compressed images should be compressed even more. For some applications, many images have to be stored (e.g. face recognition applications) and these images should be compressed with the least possible information loss; in this case the truncated SVD image compression method can be very useful. In this work, the truncated SVD method for image compression is studied.

II. METHOD

Any matrix A ∈ R^{m×n}, m ≥ n, can be represented as follows.

    A = U Σ V^T,    (1)
    U ∈ R^{m×n}, Σ = diag(σ_1, σ_2, ..., σ_n), V ∈ R^{n×n}

where the matrices U and V have orthonormal columns and σ_1 ≥ σ_2 ≥ ... ≥ σ_n ≥ 0. This representation is called the thin SVD of the matrix A. The equation can also be written in the following forms.

    A = [σ_1 U_1, σ_2 U_2, ..., σ_n U_n] [V_1^T; V_2^T; ...; V_n^T]    (2)

    A = σ_1 U_1 V_1^T + σ_2 U_2 V_2^T + ... + σ_n U_n V_n^T    (3)

If the sequence σ_1, σ_2, ..., σ_n decays fast, we may approximate the matrix A by using only the first r dominant terms.

    A ≈ σ_1 U_1 V_1^T + σ_2 U_2 V_2^T + ... + σ_r U_r V_r^T,  r < n    (4)

This is called the truncated SVD approximation of the matrix A. If the matrix A represents a gray-scale image (an image is assumed to be gray-scale just for simplicity; the method can easily be extended to RGB images), we can use this property for image compression by storing U_1, ..., U_r, V_1, ..., V_r and the scalars σ_1, ..., σ_r instead of storing the image (the m by n matrix A) itself. In this way, we need to store (m + n + 1)r values instead of mn pixels.

Relation (3) holds as written when m ≥ n, so we should generalize it to all cases (m ≥ n and m < n) because the image size could be anything. Defining a new variable p = min(m, n), the generalized version of relation (3) is the following.

    A = σ_1 U_1 V_1^T + σ_2 U_2 V_2^T + ... + σ_p U_p V_p^T    (5)

Then relation (4) takes the following form.

    A ≈ σ_1 U_1 V_1^T + σ_2 U_2 V_2^T + ... + σ_r U_r V_r^T,  r < p    (6)
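
As an illustration of relations (4) to (6) and of the storage count above, the following MATLAB sketch (assuming a gray-scale image matrix A in double precision and a chosen r < min(m, n)) builds the rank-r approximation and reports the storage ratio:

    % Minimal sketch of the truncated SVD approximation (relation (6)).
    % A is an m-by-n gray-scale image in double precision; r < min(m, n).
    [U, S, V] = svd(A, 'econ');          % thin SVD
    Ur = U(:, 1:r);                      % m-by-r
    Vr = V(:, 1:r);                      % n-by-r
    Sr = S(1:r, 1:r);                    % r-by-r diagonal
    A_r = Ur*Sr*Vr';                     % rank-r approximation of A
    [m, n] = size(A);
    MR = (m + n + 1)*r/(m*n);            % stored values / original pixels
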
1) Implementation 1-a: When we implement this method, we should consider two situations. The first situation is that the image size (m and n) is not too big and we can compute the thin SVD directly by using the MATLAB built-in function "svd". In order to reduce the computational cost, we should call "svd" with the "econ" option.

In that way, if m ≥ n, MATLAB computes the n by n matrix V and the n by n diagonal matrix Σ; then it computes the matrix U by using relation (8). If m < n, MATLAB computes the m by m matrix U and the m by m diagonal matrix Σ; then it computes the matrix V by using relation (11). Therefore, MATLAB avoids computations with a unitary matrix of size max{m, n} by max{m, n}.

If m ≥ n, V is an orthonormal square matrix, so V^T V = I_{n,n}, and we can find the matrix U by matrix multiplication.

    A V = U Σ    (7)

    U_i = A V_i / σ_i,  i = 1, ..., n    (8)

If m < n, U is an orthonormal square matrix, so U^T U = I_{m,m}, and we can find the matrix V as follows.

    A^T = V Σ U^T    (9)

    A^T U = V Σ    (10)

    V_i = A^T U_i / σ_i,  i = 1, ..., m    (11)
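
As a minimal sketch of relation (8) (assuming m ≥ n, that all σ_i > 0, and that A, V and the diagonal matrix Sigma from the thin SVD are already available), U can be recovered with matrix products only:

    % Sketch of relation (8): recover U from A, V and Sigma when m >= n.
    sigma = diag(Sigma);                 % singular values sigma_1 >= ... >= sigma_n > 0
    U = (A*V) ./ sigma';                 % column i of U is A*V(:, i) / sigma_i
    % Equivalently: U = A*V*diag(1./sigma);
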
A. Implementation 1-b

We can also avoid computation with a matrix of size max{m, n} by max{m, n} when m ≠ n by first computing the thin QR factorization of the matrix A or A^T. Assume m > n; then A can be written in the following form.

    A = Q R,    (12)
    Q ∈ R^{m×n}, R ∈ R^{n×n}

Q is a matrix with orthonormal columns (Q^T Q = I_{n,n}) and R is an upper triangular matrix. Now we can compute the SVD of R, which is an n by n matrix.

    R = U Σ V^T    (13)

If we substitute this into relation (12), it becomes the following.

    A = Q U Σ V^T    (14)

    A = U' Σ V^T    (15)

where U' = QU. U' can easily be shown to be a matrix with orthonormal columns.

    U' = [Q U_1, Q U_2, ..., Q U_n]    (16)

    U'^T U' = [U_1^T Q^T; U_2^T Q^T; ...; U_n^T Q^T] [Q U_1, Q U_2, ..., Q U_n]    (17)

Because Q^T Q = I, the Q's can be eliminated.

    U'^T U' = [U_1^T; U_2^T; ...; U_n^T] [U_1, U_2, ..., U_n] = I    (18)

As a result, we compute the SVD of the matrix A by computing the SVD of the matrix R, whose size is smaller than that of A. In the other case, m < n, we apply the same procedure to A^T instead of A.

    A^T = Q R,    (19)
    Q ∈ R^{n×m}, R ∈ R^{m×m}

    A^T = Q U Σ V^T    (20)

    A = V Σ U^T Q^T    (21)

    A = U' Σ V'^T    (22)

where U' = V and V' = QU.
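
A minimal sketch of this QR-then-SVD route for the m > n case (A and r are assumed to be defined; the m < n case applies the same steps to A^T, as in relations (19) to (22)):

    % Sketch of Implementation 1-b for m > n: thin QR first, then the SVD of R.
    [Q, R] = qr(A, 0);                   % thin QR: Q is m-by-n, R is n-by-n
    [UR, S, V] = svd(R);                 % SVD of the small n-by-n matrix R
    U = Q*UR;                            % relation (15): U' = Q*U_R has orthonormal columns
    % Truncate to the r dominant terms, r < min(m, n):
    U = U(:, 1:r);  V = V(:, 1:r);  S = S(1:r, 1:r);
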


B. Implementation 2

Regarding the implementation, the second situation is that the image size (m and n) is large and it is too expensive to compute the thin SVD or the thin QR factorization of the matrix A. In this case, we approach the problem differently: we solve the eigenproblem for A A^T (or A^T A) using iterative eigensolvers and then find the truncated SVD of the matrix A through the following relations. Assume m ≥ n (for simplicity) and calculate A^T A by using relation (3).

    A^T = σ_1 V_1 U_1^T + σ_2 V_2 U_2^T + ... + σ_n V_n U_n^T    (23)

    A^T A = σ_1^2 V_1 V_1^T + σ_2^2 V_2 V_2^T + ... + σ_n^2 V_n V_n^T    (24)

If we multiply the matrix A^T A with any of the vectors V_1, ..., V_n, we obtain the following equation.

    A^T A V_i = σ_i^2 V_i,  i = 1, 2, ..., n    (25)

So the (V_i, σ_i^2)'s are the eigenpairs of the matrix A^T A. I previously showed that it is possible to find U_i when we know V_i and σ_i (see relation (8)). Thus, we can compute the SVD of A by finding the eigenpairs of A^T A.

It is important to select the appropriate one among A^T A and A A^T. In order to reduce the computational cost, we should select the one with the smaller size. Therefore, if m > n we should select A^T A, whose size is n by n; whereas if m < n we should select A A^T, whose size is m by m. If A is a square matrix, we can use either of them. If we use A A^T, the eigenpairs of this matrix are (U_i, σ_i^2), so after finding the eigenpairs we can find the V_i's by using relation (11).

We do not need to compute all the eigenpairs of A^T A (or A A^T), since we only need the r most dominant eigenvalues and the associated eigenvectors. In this project, we use the Jacobi-Davidson eigensolver (a MATLAB implementation, jdqr, is available).
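
For readers without the jdqr code, a rough sketch of the same idea uses MATLAB's built-in eigs as a stand-in for the Jacobi-Davidson solver (the handle name ATAfun is illustrative; it assumes A with m ≥ n and r are already defined):

    % Hedged sketch: truncated SVD of a large A (m >= n) from the r dominant
    % eigenpairs of A'A, using eigs instead of the Jacobi-Davidson solver.
    % Only matrix-vector products with A and A' are performed.
    ATAfun = @(x) A'*(A*x);                  % y = (A'A)x without forming A'A
    [m, n] = size(A);
    [V, D] = eigs(ATAfun, n, r, 'lm');       % r largest-magnitude eigenpairs of A'A
    sigma  = sqrt(diag(D));                  % singular values of A (relation (25))
    U      = (A*V) ./ sigma';                % relation (8): U_i = A V_i / sigma_i
    Sigma  = diag(sigma);
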
In order to make the Jacobi-Davidson method more efficient, we use a preconditioner. Because the matrix in question (A^T A or A A^T) is not sparse, we should use a preconditioner other than the ILU or SSOR preconditioners. A reasonable preconditioner is the Cholesky factorization of the matrix A^T A + αI, where I is the identity matrix and α is a small positive scale parameter. We add the αI term in order to make the matrix non-singular. When A is an image, A^T A (or A A^T) is quite prone to be singular, because in general image pixels are not random and images are smooth (adjacent column and row vectors are almost the same) unless there is an edge or corner. Therefore, the matrix A is close to singular itself; for instance, the condition number of the image in Figure 2 is 1.0594x10^4. A^T A and A A^T are even closer to singular than A, because the matrix is large, the matrix elements are limited to a certain range (say from 0 to 1, or 0 to 255), and the inner products of the column vectors produce similar values, as seen in Figure 1; for instance, the condition number of A^T A is 2.7704x10^22. I take the value of α as a small fraction of the 2-norm of A, because the matrix 2-norm is the spectral norm, which equals the square root of the maximum eigenvalue of A^T A; in that way we shift the eigenvalues up by some fraction of the maximum eigenvalue and the matrix becomes non-singular. I empirically determined α as the minimum value that makes the matrix non-singular, which gave α = ||A||_2/10.

In order to increase the computational efficiency of the Jacobi-Davidson method, we do not calculate A^T A (or A A^T) explicitly; instead we only do matrix-vector multiplications, because the matrix in question is not sparse and a matrix-matrix multiplication is expensive (n times more expensive than a matrix-vector multiplication). It is important to note that MATLAB evaluates the expression from left to right, so if we do not use parentheses in the function we pass to the Jacobi-Davidson function (jdqr), it first multiplies A^T and A; therefore we should use parentheses, as in A^T*(A*x) instead of A^T*A*x. Also, in order to avoid a matrix-matrix multiplication when building the preconditioner, it is taken in the factored form M1*M2 with M2 = M1^T, where M1 is the lower triangular part (including the diagonal) of the leading square block of A^T plus αI.

Fig. 1: Matrix A^T A, condition number = 2.7704x10^22 (A is given in Figure 2).

Fig. 2: Matrix A, condition number = 1.0594x10^4.
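
The following hedged sketch mirrors the appendix code for the m ≥ n case (it assumes A and n = size(A, 2) are defined; whether jdqr accepts the preconditioner exactly in this packed [M1, M2] form depends on the jdqr version used):

    % Sketch of the preconditioner and the matrix-free operator (m >= n case),
    % following the appendix code. alpha = ||A||_2/10 regularizes A'A + alpha*I.
    alpha = norm(A)/10;                        % 2-norm of A (largest singular value)
    M1 = tril(A(1:n, 1:n)') + alpha*eye(n);    % crude lower triangular factor, as in the appendix
    M2 = M1';                                  % its transpose, so the preconditioner is M1*M2
    ATAfun = @(x) A'*(A*x);                    % parenthesized: two matrix-vector products only
    % The appendix passes the factors to jdqr as struct('Precond', [M1, M2]).
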
III. EXPERIMENT & RESULTS

All the experiments are conducted on real images instead of randomly generated matrices. It is important to note that the images are gray-scale (the MATLAB 'rgb2gray' function converts an RGB image into a gray-scale image), but all the implementations in this work can easily be applied to RGB images. Images are converted to double precision by using the MATLAB function 'im2double' before processing.

TABLE I: Average CPU time (s) for methods 1a and 1b (see Figures 3, 4, and 5).

    Image      | Method 1a | Method 1b
    Cat        | 0.0106    | 0.0119
    Amsterdam  | 0.0112    | 0.0117
    Wave       | 0.1470    | 0.1510

In this work the memory requirement (MR) for a compressed image is given as the ratio of the memory needed for the compressed image to the memory needed for the original image.

    MR = (m + n + 1)r / (mn)

There are several metrics to evaluate the quality of an image compression algorithm. In this work, the Mean Square Error (MSE), one of the most popular performance metrics, is used. The MSE between the two images (the original and the reconstructed (or compressed) image) is given as

    MSE = (1/(mn)) Σ_{i=1}^{m} Σ_{j=1}^{n} (r_{i,j} − o_{i,j})^2

where o_{i,j} and r_{i,j} are the pixels of the original and reconstructed images, respectively.
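
As a small sketch (assuming the original image A, a reconstruction A_reconst of the same size, and r are already in the workspace), both metrics can be computed as follows:

    % Memory requirement (MR) and mean square error (MSE) of a rank-r reconstruction.
    [m, n] = size(A);
    MR  = (m + n + 1)*r/(m*n);                    % compressed / original storage
    MSE = sum(sum((A_reconst - A).^2))/(m*n);     % square each pixel error, then average
    % Equivalent one-liner: MSE = mean((A_reconst(:) - A(:)).^2);
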
A. Is implementation 1a or 1b more efficient?

Implementations 1a and 1b are for small-sized images. These two implementations are tested on several small (to medium)-sized images (see Figures 3, 4, and 5). Both methods were run 100 times for the 'Cat', 'Amsterdam', and 'Wave' images. The average CPU times are given in Table I. According to the results, method 1a is slightly more efficient, probably because the MATLAB 'svd' function is pre-compiled or otherwise more optimized. Some examples of reconstructed images (RI) are given in Figures 3, 4, and 5.

Fig. 3: 'Cat' image. Original image size: 240x180. (a) Original image; (b) RI, r=30, MR=0.2924; (c) RI, r=60, MR=0.5847; (d) RI, r=90, MR=0.8771.

Fig. 4: 'Amsterdam' image. Original image size: 180x240. (a) Original image; (b) RI, r=30, MR=0.2924; (c) RI, r=60, MR=0.5847; (d) RI, r=90, MR=0.8771.

B. Is the code more efficient if A^T A or A A^T is not calculated?

In order to see whether not computing A^T A (or A A^T) explicitly increases the performance, implementation 2 is tested on the 'Madrid' (m < n) and 'Chair' (m > n) images, which are large-sized images. The code is run 5 times and the average CPU times are listed in Table II. According to the results, the implementation is more efficient if A^T A (or A A^T) is not computed explicitly.

Fig. 5: 'Wave' image. Original image size: 525x700. (a) Original image; (b) RI, r=30, MR=0.1001; (c) RI, r=60, MR=0.2002; (d) RI, r=90, MR=0.3003.

Fig. 6: 'Madrid' image. Original image size: 2448x3264. (a) Original image; (b) RI, r=10, MR=0.0071; (c) RI, r=20, MR=0.0143; (d) RI, r=50, MR=0.0358.

Fig. 7: 'Chair' image. Original image size: 3264x2176. (a) Original image; (b) RI, r=10, MR=0.0077; (c) RI, r=20, MR=0.0153; (d) RI, r=50, MR=0.0383.

TABLE II: Average CPU time (s) for the 'Madrid' and 'Chair' images (see Figures 6 and 7). The 'Madrid-1' and 'Chair-1' rows show the CPU time when A^T A or A A^T is not computed explicitly; the 'Madrid-2' and 'Chair-2' rows show the CPU time when it is computed explicitly.

    r         | 10      | 20      | 50      | 100
    Madrid-1  | 15.1007 | 15.5689 | 17.2867 | 18.9045
    Madrid-2  | 17.5421 | 17.8494 | 18.4628 | 19.2378
    Chair-1   | 11.1754 | 11.8750 | 11.9998 | 16.4475
    Chair-2   | 12.5845 | 12.8448 | 13.2292 | 17.3668

C. Which r should be chosen?

For all the images used above, MSE curves are given in Figures 8, 9, 10, 11, and 12. From these curves, how much information is lost after compression can be inferred, so one can choose a suitable r value for a specific application by examining them. For instance, consider the MSE curve for the 'Wave' image (see Figure 10): if the compression has to be almost lossless, then a suitable r should be larger than 50; whereas if the application concerns only the main structure in the image, then r can be chosen as around 20.

Fig. 8: MSE curve of the 'Cat' image.

Fig. 9: MSE curve of the 'Amsterdam' image.

Fig. 10: MSE curve of the 'Wave' image.

Fig. 11: MSE curve of the 'Madrid' image.

D. Can we increase computational efficiency by reshaping the image?

The computational cost of this compression algorithm depends on the size of the matrix in question, so we may reduce the computational work and the CPU time by reshaping the images before the compression algorithm is applied. For instance, if our image is of size 1080 by 1920 and r is chosen as 100, then we can reshape the image so that its size is 108 by 19200. In this way the algorithm deals with a 108 by 108 matrix instead of a 1080 by 1080 matrix. Notice that after reshaping the image, r has to be lower than the minimum of the new m and n.

This is tested on the 'XX' and 'Oasis' images (see Figures 13 and 14), whose sizes are 1080 by 1920 and 1500 by 1500, respectively. A new variable s is defined as a scaling factor, so that the size of the reshaped image is m/s by sn. For different s values, the CPU times are listed in Table III and reconstructed images are given in Figures 13 and 14 (r = 50 for both images in all cases).

It is important to note that when the image is reshaped before the compression algorithm is applied, the memory requirement changes for the same r value. The memory requirement (MR) for the reshaped image is the following.

    MR = (m/s + sn + 1)r / (mn)
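
A minimal sketch of this reshaping trick (assuming s divides m, that A, m, n and r are defined, and that the trunc_svd2 function from the appendix is on the path):

    % Sketch: reshape before compression, compress, then reshape back.
    s = 10;                                   % scaling factor (must divide m)
    B = reshape(A, [m/s, n*s]);               % reshaped image, size m/s by s*n
    [U, Sigma, V] = trunc_svd2(B, r);         % truncated SVD of the reshaped matrix
    B_r = U*Sigma*V';                         % rank-r reconstruction of B
    A_r = reshape(B_r, [m, n]);               % back to the original image shape
    MR  = (m/s + s*n + 1)*r/(m*n);            % memory requirement after reshaping
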

According to the results obtained, this method (reshaping before compression) definitely increases the computational efficiency. Normally, compression with a higher MR needs more computation because more eigenpairs have to be computed; but by reshaping before the compression algorithm is applied, less CPU time may be needed for the same r value while MR can also increase (depending on the choice of s, this situation might be reversed). There is an exception when s = 20 for the 'Oasis' image: the CPU time increases because the matrix in question becomes nearly singular and the Jacobi-Davidson method gets slower.

TABLE III: CPU time (s) for the 'XX' and 'Oasis' images for different values of s (r = 50; see Figures 13 and 14).

    s      | 1      | 5      | 10     | 20
    XX     | 1.6484 | 0.9908 | 0.6778 | 0.4638
    Oasis  | 4.5302 | 1.8764 | 1.5198 | 12.4086

Fig. 12: MSE curve of the 'Chair' image.

Fig. 13: 'XX' image. Original image size: 1080x1920. (a) Original image; (b) RI, r=50, s=1, MR=0.0724, MSE=2.28x10^-4; (c) RI, r=50, s=10, MR=0.4656, MSE=2.02x10^-11; (d) RI, r=50, s=20, MR=0.5232, MSE=1.38x10^-14.

Fig. 14: 'Oasis' image. Original image size: 1500x1500. (a) Original image; (b) RI, r=50, s=1, MR=0.0667, MSE=1.67x10^-5; (c) RI, r=50, s=10, MR=0.3367, MSE=4.38x10^-9; (d) RI, r=50, s=20, MR=0.6684, MSE=1.54x10^-10.

APPENDIX
MATLAB CODE

function [U, Sigma, V] = trunc_svd1a(A, r)
%Method 1a, thin SVD
%This function computes the truncated SVD of matrix A
%r is the desired number of dominant singular values
%r has to be less than or equal to min(m, n)
%U is m by r with orthonormal columns
%V is n by r with orthonormal columns
%Sigma is r by r diagonal matrix with the dominant singular values

[m, n] = size(A);

%Check if r is valid
if r >= min(m, n)
    display('r has to be less than or equal to min(m,n), r is set to min(m,n)');
    r = min(m, n);
end

[U, Sigma, V] = svd(A, 'econ');
U = U(:, 1:r);
V = V(:, 1:r);
Sigma = Sigma(1:r, 1:r);

end

%---------------------------------------------------------------

function [U, Sigma, V] = trunc_svd1b(A, r)
%Method 1b, thin QR, then SVD

[m, n] = size(A);

%Check if r is valid
if r >= min(m, n)
    display('r has to be less than or equal to min(m,n), r is set to min(m,n)');
    r = min(m, n);
end

if m >= n
    %QR factorization of input matrix A
    %Option '0' enables thin QR factorization
    [Q, R] = qr(A, 0);

    %SVD of square matrix R
    [U, Sigma, V] = svd(R);

    U = Q*U;

else %in the case of m < n we should work with A^T
    %QR factorization of the transpose of input matrix A
    [Q, R] = qr(A', 0);

    %SVD of square matrix R
    [U1, Sigma, V1] = svd(R);

    U = V1;
    V = Q*U1;

end

U = U(:, 1:r);
V = V(:, 1:r);
Sigma = Sigma(1:r, 1:r);

end

%---------------------------------------------------------------

function [U, Sigma, V] = trunc_svd2(A, r)
%Method 2, large-scale SVD

global N_dim;
[m, n] = size(A);

%Check if r is valid
if r >= min(m, n)
    display('r has to be less than or equal to min(m,n), r is set to min(m,n)');
    r = min(m, n);
end

%Define alpha
alpha = norm(A)/10;

%if m >= n, use A'A, otherwise AA'
if m >= n
    M1 = tril(A(1:n, 1:n)') + alpha*eye(n); M2 = M1';
    %Precond matrix
    M = [M1, M2];

    N_dim = n;
    [V, D] = jdqr('ATA', 'K', r, struct('Precond', M));
    Sigma = sqrt(D);
    temp = diag(sqrt(D));
    Sigma_inv = diag(1./temp);
    U = A*V*Sigma_inv;

else
    M1 = tril(A(1:m, 1:m)) + alpha*eye(m); M2 = M1';
    %Precond matrix
    M = [M1, M2];

    N_dim = m;

    [U, D] = jdqr('AAT', 'K', r, struct('Precond', M));
    Sigma = sqrt(D);

    temp = diag(sqrt(D));
    Sigma_inv = diag(1./temp);
    V = Sigma_inv*(U')*A;
    V = V';

end

end

%---------------------------------------------------------------

function y = ATA(x, flag)
%Matrix-free operator y = (A'A)x for the Jacobi-Davidson solver
global A;
global N_dim;

if nargin < 2
    y = A'*(A*x);

elseif strcmp(flag, 'dimension')
    y = N_dim;

else
    y = [];
end

return

%---------------------------------------------------------------

function y = AAT(x, flag)
%Matrix-free operator y = (AA')x for the Jacobi-Davidson solver
global A;
global N_dim;

if nargin < 2
    y = A*(A'*x);

elseif strcmp(flag, 'dimension')
    y = N_dim;

else
    y = [];
end

return

%---------------------------------------------------------------

clear; close all;
%% Truncated SVD Image Compression
global A;

img = imread('images/1.jpg');
if size(size(img), 2) == 3
    %if the image is RGB, convert to gray-scale
    img = rgb2gray(img);
end

%Convert image into double precision
A = im2double(img);

%Image size
[m, n] = size(A);

%Display the input image
figure(); imshow(A, []);

%Reshape the matrix (image)
s = 1;
A = reshape(A, [m/s, n*s]);

%Define the number of dominant terms
r = 100;

%Truncated SVD for small-sized images
repeat = 100;
t1 = zeros(repeat, 1);
t2 = zeros(repeat, 1);
for i = 1:repeat
    tic,
    [U, S, V] = trunc_svd1a(A, r);
    t1(i) = toc;
    tic,
    [U1, S1, V1] = trunc_svd1b(A, r);
    t2(i) = toc;
end

%Mean CPU times
t1 = mean(t1)   %CPU time for trunc_svd1a
t2 = mean(t2)   %CPU time for trunc_svd1b

%Truncated SVD for large-sized images
repeat = 5;
t1 = zeros(repeat, 1);
t2 = zeros(repeat, 1);
for i = 1:repeat
    tic,
    %A'A (or AA') is not computed explicitly
    [U2, S2, V2] = trunc_svd2(A, r);
    t1(i) = toc;

    tic,
    %A'A (or AA') is computed explicitly
    [U3, S3, V3] = trunc_svd2b(A, r);
    t2(i) = toc;
end

%Mean CPU times
t1 = mean(t1)
t2 = mean(t2)

%Reconstructed image
A_reconst = U2*S2*V2';

%Calculate MSE (mean square error)
MSE = sum(sum((A_reconst - A).^2))/(m*n)

%Reshape reconstructed image
A_reconst = reshape(A_reconst, [m, n]);

%Display reconstructed image
figure(); imshow(A_reconst, []);

%Calculate memory requirement (MR)
MR = (m/s + n*s + 1)*r/(m*n)
