Module 3
Autoencoders: Unsupervised Learning

Syllabus:
3.1 Introduction, Linear Autoencoder, Undercomplete Autoencoder, Overcomplete Autoencoders, Regularization in Autoencoders
3.2 Denoising Autoencoders, Sparse Autoencoders, Contractive Autoencoders
3.3 Application of Autoencoders: Image Compression

3.1 Introduction

• Machine Learning algorithms are often classified into two categories: Supervised Learning and Unsupervised Learning. The difference between these categories largely lies in the type of data we deal with: Supervised Learning deals with labeled data, while Unsupervised Learning deals with unlabeled data.
• An autoencoder is a class of neural network that uses unsupervised learning and applies backpropagation, setting the target values to be equal to the inputs.
• An autoencoder is a neural network that is trained to learn the identity function: it attempts to copy its input to its output. While doing this, the network learns useful and interesting properties of the data and a representation of the data, which can be helpful in classification, image recovery, or any other application where a good feature set is needed.
• An autoencoder learns this data encoding in an unsupervised manner. The aim of an autoencoder is to learn a lower-dimensional representation (encoding) of higher-dimensional data. It is typically used for dimensionality reduction, by training the network to capture the most important parts of the input image.

[Fig. 3.1.2: Architectural components of an autoencoder]

• The encoder compresses the input, and the decoder attempts to recreate the input from the compressed version produced by the encoder. Autoencoders compress the input into a lower-dimensional code (an encoded representation of the input) and then reconstruct the output from this representation. The code is a compact "summary" or "compression" of the input, also called the latent-space representation. After training, the encoder model is saved and the decoder is discarded.
• The loss function used to train the autoencoder depends heavily on the type of input and output we want the autoencoder to adapt to. If we are working with image data, the most popular loss functions for reconstruction are MSE loss and L1 loss. In case the inputs and outputs are within the range [0, 1], we can also make use of binary cross-entropy as the reconstruction loss.

3.1.1 Important Properties of Autoencoders

1. Data-specific: Autoencoders are only able to meaningfully compress data similar to what they have been trained on. Since they learn features specific to the given training data, they are different from a standard data compression algorithm like gzip. So we cannot expect an autoencoder trained on handwritten digits to compress photos of other subjects.
2. Lossy: The output of the autoencoder will not be exactly the same as the input; it will be a close but degraded representation. If you want lossless compression, autoencoders are not the way to go.
3. Unsupervised: Autoencoders are considered an unsupervised learning technique since they do not need explicit labels to train on. But to be more precise, they are self-supervised, because they generate their own labels from the training data.

3.2 Linear Autoencoders

• An autoencoder is a type of feedforward neural network used to learn efficient data codings in an unsupervised manner.
• The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal noise.

3.2.1 Difference between PCA and Autoencoders

• PCA is essentially a linear transformation, but autoencoders are capable of modeling complex non-linear functions.
• PCA features are totally linearly uncorrelated with each other, since they are projections onto an orthogonal basis. Autoencoder features, by contrast, might have correlations, since they are trained only for accurate reconstruction.
• PCA is faster and computationally cheaper than an autoencoder.
• A single-layered autoencoder with a linear activation function is very similar to PCA (see the sketch below).
• An autoencoder is prone to overfitting due to its high number of parameters (though regularization and careful design can avoid this).
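To make the PCA comparison concrete, here is a minimal sketch (not from the source notes) of a single-layer linear autoencoder in Keras; the dimensions and random stand-in data are illustrative assumptions. Trained with MSE, its code layer learns approximately the subspace spanned by the top principal components, though the learned basis need not be orthogonal or ordered as in PCA.

    # A one-hidden-layer linear autoencoder (PCA-like); all names and
    # dimensions are illustrative assumptions.
    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    x = np.random.rand(1000, 20)  # stand-in for (ideally centered) input data

    linear_ae = Sequential([
        Dense(5, activation='linear', input_dim=20),  # encoder: project to 5 dims
        Dense(20, activation='linear'),               # decoder: map back to 20 dims
    ])
    linear_ae.compile(optimizer='adam', loss='mse')   # MSE reconstruction loss
    linear_ae.fit(x, x, epochs=50, batch_size=64, verbose=0)
    # The 5-dimensional code now spans roughly the top-5 principal subspace.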
3.3 Undercomplete Autoencoders

• In an undercomplete autoencoder, the dimensionality of the hidden layer is smaller than that of the input layer. In these types of encoders, we constrain the number of nodes present in the hidden layer(s) of the network, limiting the amount of information that can flow through the network.
• So, basically, the network learns to compress the high-dimensional input into a lower-dimensional representation, forcing it to capture only the most important features.
• An undercomplete autoencoder has no explicit regularization term; we simply train the model according to the reconstruction loss. Thus, the only way to ensure that the model is not memorizing the input data (overfitting) is to ensure that we sufficiently restrict the number of nodes in the hidden layers.
• By penalizing the network according to the reconstruction error, the model can learn the most important attributes of the input data and how to best reconstruct the original input from an "encoded" state. Ideally, this encoding will learn and describe latent attributes of the input data.
• "The hidden layer is undercomplete" means it is smaller than the input layer. The hidden layer compresses the input, and it will compress well only on the training distribution.

3.5 Regularization in Autoencoders

• Rather than limiting the model capacity by keeping the encoder and decoder shallow and the code size small, regularized autoencoders use a loss function that encourages the model to have other properties besides the ability to copy its input to its output.
• In practice, we usually find two types of regularized autoencoder: the sparse autoencoder and the denoising autoencoder.

3.5.1 Denoising Autoencoders

• Autoencoders are neural networks that are commonly used for feature selection and extraction. However, when there are more nodes in the hidden layer than there are inputs, the network risks learning the so-called "identity function", also called the "null function", meaning that the output simply equals the input, making the autoencoder useless.
• Denoising autoencoders solve this problem by corrupting the data on purpose, randomly turning some of the input values to zero. In general, the percentage of input nodes being set to zero is about 30% to 50% (a sketch of this procedure follows at the end of this subsection).
• The denoising autoencoder:
1. Makes the hidden layers of the autoencoder learn more robust filters
2. Reduces the risk of overfitting in the autoencoder
3. Prevents the autoencoder from learning a simple identity function, solving the problem of overcomplete autoencoders
• Noise is stochastically (i.e., randomly) added to the input data, and the autoencoder is then trained to recover the original, non-perturbed signal.
• One common application of such autoencoders is to pre-process an image to improve the accuracy of an optical character recognition (OCR) algorithm. Just a bit of the wrong type of noise (e.g., printer ink smudges, poor image quality during the scan, etc.) can dramatically hurt the performance of an OCR method. Using denoising autoencoders, we can automatically pre-process the image, improve its quality, and therefore increase the accuracy of the downstream OCR algorithm.
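As referenced above, the following is a minimal sketch of the corrupt-and-reconstruct training procedure, assuming flattened images scaled to [0, 1]; the stand-in data, layer sizes, and drop fraction are illustrative assumptions, not from the source notes.

    # Denoising setup: zero out ~40% of each input (within the 30-50% range
    # mentioned above) and learn to map the corrupted input back to the
    # clean original. Data and layer sizes are illustrative.
    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    def corrupt(x, drop_fraction=0.4):
        # Randomly set a fraction of the input values to zero.
        mask = np.random.rand(*x.shape) > drop_fraction
        return x * mask

    x_clean = np.random.rand(1000, 784)   # stand-in for images scaled to [0, 1]
    x_noisy = corrupt(x_clean)

    dae = Sequential([
        Dense(128, activation='relu', input_dim=784),  # encoder
        Dense(784, activation='sigmoid'),              # decoder
    ])
    dae.compile(optimizer='adam', loss='binary_crossentropy')
    dae.fit(x_noisy, x_clean, epochs=10, batch_size=64)  # noisy in, clean target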
3.5.2 Sparse Autoencoders

• Since it is not possible to design a neural network with a flexible number of nodes in its hidden layers, sparse autoencoders work by penalizing the activations of some neurons in the hidden layers.
• In other words, the loss function has a term that measures the number of neurons that have been activated and provides a penalty directly proportional to that.
• This penalty, called the sparsity function, prevents the neural network from activating more neurons and serves as a regularizer.
• Most commonly used is L1 regularization. L1 regularization adds the "absolute value of magnitude" of coefficients as the penalty term, while L2 regularization adds the "squared magnitude" of coefficients as the penalty term.
• Although L1 and L2 can both be used as regularization terms, the key difference between them is that L1 regularization tends to shrink coefficients all the way to zero, while L2 regularization moves coefficients towards zero but never reaches it. Thus L1 regularization is often used as a method of feature extraction. The sparse autoencoder objective is

    $L(x, \hat{x}) + \text{regularizer} + \lambda \sum_i |a_i^{(h)}|$

• In addition to the first two terms, we add a third term which penalizes the absolute values of the vector of activations $a$ in layer $h$ for sample $i$, and we use a hyperparameter $\lambda$ to control its effect on the whole loss function. In this way, we build a sparse autoencoder.

3.5.3 Contractive Autoencoders

• The goal of a contractive autoencoder is to reduce the representation's sensitivity towards the training input data.
• In other words, we strive to make the autoencoder robust to small changes in the training dataset.
• In order to achieve this, we add a regularizer (penalty term) to the cost function that the autoencoder is trying to minimize. The extra term in the loss function of the autoencoder is

    $\lambda \, \|J_f(x)\|_F^2 = \lambda \sum_{ij} \left( \frac{\partial h_j(x)}{\partial x_i} \right)^2$

• The above penalty term is the Frobenius norm of the encoder's Jacobian; the Frobenius norm is just a generalization of the Euclidean norm to matrices.
• In other words, this sensitivity-penalization term is the "sum of squares of all partial derivatives of the extracted features with respect to the input dimensions".

Mathematics of contractive autoencoders

• We need to understand the Frobenius norm of the Jacobian matrix before we dive into the mathematics.
• The Frobenius norm, also called the Euclidean norm, is a matrix norm of an m×n matrix A defined as the "square root of the sum of the absolute squares of its elements".
• A Jacobian matrix is the matrix of "all first-order partial derivatives of a vector-valued function". When the matrix is square, both the matrix and its determinant are referred to as the Jacobian.
• Combining these two definitions gives us the meaning of the Frobenius norm of the Jacobian matrix (a sketch of the resulting training step follows below).
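The following is a minimal training-step sketch of this penalty using TensorFlow's GradientTape; the dimensions, λ value, single-layer encoder, and function names are illustrative assumptions, not from the source notes.

    # Contractive-autoencoder training step: reconstruction loss plus
    # lambda * ||J_f(x)||_F^2, where J_f is the Jacobian of the encoder.
    # Dimensions, lambda, and the single-layer encoder are illustrative.
    import tensorflow as tf
    from tensorflow.keras import Sequential, layers, optimizers

    input_dim, code_dim, lam = 784, 64, 1e-4
    encoder = Sequential([layers.Dense(code_dim, activation='sigmoid',
                                       input_shape=(input_dim,))])
    decoder = Sequential([layers.Dense(input_dim, activation='sigmoid',
                                       input_shape=(code_dim,))])
    opt = optimizers.Adam()

    def train_step(x):                      # x: float32 tensor, shape (batch, 784)
        with tf.GradientTape() as outer:
            with tf.GradientTape() as inner:
                inner.watch(x)
                h = encoder(x)              # features h_j(x)
            J = inner.batch_jacobian(h, x)  # shape (batch, code_dim, input_dim)
            frob = tf.reduce_sum(tf.square(J), axis=[1, 2])           # ||J_f(x)||_F^2
            recon = tf.reduce_sum(tf.square(x - decoder(h)), axis=1)  # L(x, x_hat)
            loss = tf.reduce_mean(recon + lam * frob)
        variables = encoder.trainable_variables + decoder.trainable_variables
        opt.apply_gradients(zip(outer.gradient(loss, variables), variables))
        return loss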
3.6 Application of Autoencoders: Image Compression

• Transmitting uncompressed images consumes more bandwidth, which means more cost for the transmission of the data. So, for efficient use of bandwidth, images are often compressed before transmission over the internet.
• Here we discuss image compression using a convolutional autoencoder. The MNIST image dataset of handwritten digits is used:
  No. of training dataset images = 60,000
  No. of testing dataset images = 10,000
  No. of class labels = 10 (0 to 9)
  Dimension of each image = 28×28

[Fig. 3.6: High-level architecture of the image compression model using convolutional autoencoders]

3.6.2 Detailed architecture of the convolutional autoencoder for image compression

• The autoencoder architecture starts with an input layer, whose main function is to take the input image and pass it to the subsequent layers of the autoencoder.
• In the convolutional layers, 3×3 filters with "same" padding and the ReLU activation function are used.
• Following the input layer are two convolutional layers, whose main function is to compress the image. After these two convolutional layers there is a maxpool layer; maxpool layers are added in the encoder part to select the more robust features from the input image, and 2×2 pooling windows are taken.
• After the maxpool layer there are three more convolutional layers, again performing convolutions, followed by the code layer. This code layer is the bottleneck: it holds the most compressed representation of the data.
• Now the decoder module starts. After the code layer there are three convolutional layers responsible for restoring the compressed image from the code layer.
• Upsampling layers (the opposite of the maxpooling operation), with size 2×2, are also included in the decoder part to restore the dimensions of the image from the encoded representation. The upsampling layer is further followed by three convolutional layers and then again by an upsampling layer.
• After the final upsampling layer there is one last convolutional layer, followed by the output layer. For the output layer, a tanh activation function can be used.

[Fig. 3.7.1: Original clean images from MNIST]

• Training dataset: 60,000 data points. Testing dataset: 10,000 data points.

[Fig. 3.7.2: Noisy images]

• A simple fully connected encoder-decoder network for these images has an input layer of 784 neurons (one per pixel of a 28×28 image):

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    num_pixels = 784                                                # 28 x 28 images, flattened
    model = Sequential()
    model.add(Dense(500, input_dim=num_pixels, activation='relu'))  # encoder
    model.add(Dense(300, activation='relu'))                        # encoder
    model.add(Dense(100, activation='relu'))                        # code (bottleneck)
    model.add(Dense(300, activation='relu'))                        # decoder
    model.add(Dense(500, activation='relu'))                        # decoder
    model.add(Dense(784, activation='sigmoid'))                     # output in [0, 1]
    model.compile(optimizer='adam', loss='binary_crossentropy')     # compile step added for completeness

Other Applications of Autoencoders

1. Dimensionality reduction
• By removing the decoder and keeping only the coding layer of the model, the information from the input has been compressed and can now be handled as a variable.
• As a result, by deleting the decoder, an autoencoder with the coding layer as its output can be used for dimensionality reduction.

2. Recommendation systems
• Consider the case of YouTube; the idea is:
  o The input data is the clustering of similar users based on interests.
  o Interests of users are denoted by videos watched, watch time for each, and interactions (like commenting) on a video.
  o The above data is captured by clustering content.
  o The encoder part will capture the interests of the user.
  o The decoder part will try to project the interests onto two parts: existing unseen content, and new content from content creators.

3. Anomaly detection
• Another application of autoencoders is anomaly detection. By learning to replicate the most salient features in the training data, the model is encouraged to precisely reproduce the most frequently observed characteristics.
• When facing anomalies, the model's reconstruction performance should worsen.
• In most cases, only data consisting of normal instances is used to train the autoencoder; in other cases, the fraction of anomalies is small compared to the whole observation set, so that its contribution to the learned representation can be ignored. After training, the autoencoder will accurately reconstruct "normal" data, while failing to do so with unfamiliar anomalous data.
• The reconstruction error (the error between the original data and its low-dimensional reconstruction) is used as an anomaly score to detect anomalies (see the sketch at the end of this module).

4. Image production
• The VAE (Variational Autoencoder) is a generative model used to produce images that the model has not seen before. The concept is that, after learning the distribution of the training data, the system will generate similar new images.
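As referenced in the anomaly-detection point above, the following is a minimal sketch of reconstruction-error scoring; the model, stand-in data, and threshold choice are illustrative assumptions, not from the source notes.

    # Train an autoencoder on "normal" data only, then flag inputs whose
    # reconstruction error is unusually large. Everything here is illustrative.
    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    x_normal = np.random.rand(5000, 784)               # stand-in for normal data

    ae = Sequential([
        Dense(64, activation='relu', input_dim=784),   # encoder / bottleneck
        Dense(784, activation='sigmoid'),              # decoder
    ])
    ae.compile(optimizer='adam', loss='mse')
    ae.fit(x_normal, x_normal, epochs=10, batch_size=128, verbose=0)

    def anomaly_score(x):
        # Per-sample mean squared reconstruction error.
        x_hat = ae.predict(x, verbose=0)
        return np.mean((x - x_hat) ** 2, axis=1)

    # Flag anything above, e.g., the 99th percentile of the training-set error.
    threshold = np.percentile(anomaly_score(x_normal), 99)
    x_test = np.random.rand(10, 784)                   # stand-in test batch
    is_anomaly = anomaly_score(x_test) > threshold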