Autoencoders: Unsupervised Learning

Syllabus
3.1 Introduction, Linear Autoencoder, Undercomplete Autoencoder, Overcomplete Autoencoders, Regularization in Autoencoders
3.2 Denoising Autoencoders, Sparse Autoencoders, Contractive Autoencoders
3.3 Application of Autoencoders: Image Compression

3.1 Introduction
• Machine Learning algorithms are often classified into two categories: Supervised Learning and Unsupervised Learning. The difference between these categories largely lies in the type of data that we deal with: Supervised Learning deals with labeled data, while Unsupervised Learning deals with unlabeled data.
• An autoencoder is a class of neural network that uses unsupervised learning and applies backpropagation, setting the target values to be equal to the inputs.
• An autoencoder is a neural network that is trained to learn the identity function and attempts to copy its input to its output. While doing this, the network learns useful and interesting properties of the data and a compressed representation of the data, which might be helpful in classification, image recovery, or any other application where a good feature set is needed.
• An autoencoder learns these data encodings in an unsupervised manner. The aim of an autoencoder is to learn a lower-dimensional representation (encoding) of higher-dimensional data. It is typically used for dimensionality reduction, by training the network to capture the most important parts of the input image.

Fig. 3.1.2: Architectural components of an autoencoder

• The encoder compresses the input, and the decoder attempts to recreate the input from the compressed version produced by the encoder. Autoencoders compress the input into a lower-dimensional code (an encoded representation of the input) and then reconstruct the output from this representation. The code is a compact "summary" or "compression" of the input, also called the latent space representation. After training, the encoder model is saved and the decoder is typically discarded.
• The choice of loss function for training the autoencoder depends heavily on the type of input and output we want the autoencoder to adapt to. If we are working with image data, the most popular loss functions for reconstruction are MSE loss and L1 loss. In case the inputs and outputs are within the range [0, 1], we can also make use of Binary Cross-Entropy as the reconstruction loss.

3.1.1 Important Properties of Autoencoders
1. Data-specific: Autoencoders are only able to meaningfully compress data similar to what they have been trained on. Since they learn features specific to the given training data, they are different from a standard data compression algorithm like gzip. So we cannot expect an autoencoder trained on handwritten digits to compress photographs.
2. Lossy: The output of the autoencoder will not be exactly the same as the input; it will be a close but degraded representation. If you want lossless compression, autoencoders are not the way to go.
3. Unsupervised: Autoencoders are considered an unsupervised learning technique since they do not need explicit labels to train on. But to be more precise, they are self-supervised, because they generate their own labels from the training data.

3.2 Linear Autoencoders
• An autoencoder is a type of feedforward neural network used to learn efficient data codings in an unsupervised manner.
• The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal noise.
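The sketch below (not taken from these notes) shows a minimal fully connected autoencoder in Keras, illustrating the encoder/decoder split and the choice of reconstruction loss discussed above. The 784-dimensional input, the 32-unit code size, and all variable names are illustrative assumptions.

    # Minimal autoencoder sketch, assuming flattened 28x28 images scaled to [0, 1].
    from tensorflow.keras import layers, Model

    input_dim, code_dim = 784, 32   # illustrative sizes

    inputs = layers.Input(shape=(input_dim,))
    code = layers.Dense(code_dim, activation='relu', name='code')(inputs)   # encoder
    outputs = layers.Dense(input_dim, activation='sigmoid')(code)           # decoder
    autoencoder = Model(inputs, outputs)

    # Inputs lie in [0, 1], so binary cross-entropy is a valid reconstruction
    # loss here; 'mse' (or an L1 loss) would work as well.
    autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

    # x_train: array of shape (n_samples, 784); the target equals the input.
    # autoencoder.fit(x_train, x_train, epochs=20, batch_size=256)

Because the targets are the inputs themselves, no external labels are needed, which is the self-supervised setup described in Section 3.1.1.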
3.2.1 Difference between PCA and Autoencoders
• PCA is essentially a linear transformation, but autoencoders are capable of modelling complex non-linear functions.
• PCA features are totally linearly uncorrelated with each other, since the features are projections onto an orthogonal basis. Autoencoder features, on the other hand, might have correlations, since they are just trained for accurate reconstruction.
• PCA is faster and computationally cheaper than autoencoders.
• A single-layered autoencoder with a linear activation function is very similar to PCA.
• An autoencoder is prone to overfitting due to its high number of parameters (though regularization and careful design can avoid this).

3.3 Undercomplete Autoencoders
• In an undercomplete autoencoder, the dimensionality of the hidden layers is smaller than that of the input layer. In these types of encoders we constrain the number of nodes present in the hidden layer(s) of the network, limiting the amount of information that can flow through the network.
• So, basically, the network learns to compress the high-dimensional input into a lower-dimensional representation, forcing the network to capture only the most important features.
• An undercomplete autoencoder has no explicit regularization term; we simply train the model according to the reconstruction loss. Thus, the only way to ensure that the model is not memorizing the input data (overfitting) is to sufficiently restrict the number of nodes in the hidden layers.
• Here, by penalizing the network according to the reconstruction error, the model can learn the most important attributes of the input data and how to best reconstruct the original input from an "encoded" state. Ideally, this encoding will learn and describe the latent attributes of the input data.
• The hidden layer being undercomplete means it is smaller than the input layer. The hidden layer compresses the input, and it will compress well only on the training distribution.

3.5 Regularization in Autoencoders
• Rather than limiting the model capacity by keeping the encoder and decoder shallow and the code size small, regularized autoencoders use a loss function that encourages the model to have other properties besides the ability to copy its input to its output.
• In practice, we usually find two types of regularized autoencoder: the sparse autoencoder and the denoising autoencoder.

3.5.1 Denoising Autoencoders
• Autoencoders are neural networks that are commonly used for feature selection and extraction. However, when there are more nodes in the hidden layer than there are inputs, the network risks learning the so-called "identity function", also called the "null function", meaning that the output simply equals the input, making the autoencoder useless.
• Denoising autoencoders solve this problem by corrupting the data on purpose, randomly turning some of the input values to zero. In general, the percentage of input nodes which are being set to zero is about 30% to 50%.
• The denoising autoencoder:
  1. Makes the hidden layers of the autoencoder learn more robust filters.
  2. Reduces the risk of overfitting in the autoencoder.
  3. Prevents the autoencoder from learning a simple identity function, solving the problem of the overcomplete autoencoder.
• Noise is stochastically (i.e., randomly) added to the input data, and the autoencoder is then trained to recover the original, non-perturbed signal.
• One of the common applications of such autoencoders is to pre-process an image to improve the accuracy of an optical character recognition (OCR) algorithm. We know that just a bit of the wrong type of noise (e.g. printer ink smudges, poor image quality during the scan, etc.) can dramatically hurt the performance of an OCR method. Using denoising autoencoders, we can automatically pre-process the image, improve its quality, and thereby increase the accuracy of the downstream OCR algorithm.
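As a rough illustration of the masking-noise corruption described above, the sketch below randomly zeroes a fraction of the input pixels and trains the network to reproduce the clean originals. It assumes x_train holds flattened images in [0, 1] and autoencoder is a model like the one sketched in Section 3.2; the 40% corruption level is just one value inside the 30% to 50% range mentioned above.

    import numpy as np

    def mask_noise(x, drop_fraction=0.4, seed=0):
        """Randomly set roughly `drop_fraction` of the input values to zero."""
        rng = np.random.default_rng(seed)
        keep = rng.random(x.shape) > drop_fraction
        return x * keep

    # x_noisy = mask_noise(x_train)
    # The denoising autoencoder maps corrupted inputs back to the clean originals:
    # autoencoder.fit(x_noisy, x_train, epochs=20, batch_size=256)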
3.5.2 Sparse Autoencoders
• Since it is not possible to design a neural network that has a flexible number of nodes in its hidden layers, sparse autoencoders work by penalizing the activations of some neurons in the hidden layers. The penalty is calculated from the number of neurons that have been activated and is directly proportional to that number. This penalty, called the sparsity function, prevents the neural network from activating more neurons.
• Most commonly used is L1 regularization. L1 regularization adds the "absolute value of magnitude" of the coefficients as a penalty term, while L2 regularization adds the "squared magnitude" of the coefficients as a penalty term.
• Although L1 and L2 can both be used as regularization terms, the key difference between them is that L1 regularization tends to shrink the penalized coefficients all the way to zero, while L2 regularization moves them towards zero but they never reach it. Thus L1 regularization is often used as a method of feature extraction. The sparse autoencoder loss is

    \mathcal{L}(x, \hat{x}) + \lambda \sum_i |a_i^{(h)}|

• In addition to the reconstruction term, we add a term which penalizes the absolute value of the vector of activations a in layer h for each sample i, and we use the hyperparameter λ to control its effect on the whole loss function. In this way, we build a sparse autoencoder.

3.5.3 Contractive Autoencoders
• The goal of a contractive autoencoder is to reduce the representation's sensitivity towards the training input data. In other words, we strive to make the autoencoder robust to small changes in the training dataset.
• In order to achieve this, we must add a regularizer or penalty term to the cost function that the autoencoder is trying to minimize. The contractive autoencoder adds an extra term to the loss function, given by

    \|J_f(x)\|_F^2 = \sum_{ij} \left( \frac{\partial h_j(x)}{\partial x_i} \right)^2

• The above penalty term is the Frobenius norm of the encoder's Jacobian; the Frobenius norm is just a generalization of the Euclidean norm to matrices. In other words, this sensitivity-penalization term is the "sum of squares of all partial derivatives of the extracted features with respect to the input dimensions".

Mathematics of contractive autoencoders
• We need to understand the Frobenius norm of the Jacobian matrix before we dive into the mathematics.
• The Frobenius norm, also called the Euclidean norm, is a matrix norm of an m × n matrix A defined as the "square root of the sum of the absolute squares of its elements".
• The Jacobian matrix is the matrix of "all first-order partial derivatives of a vector-valued function". When it is a square matrix, both the matrix and its determinant are referred to as the Jacobian.
• Combining these two definitions gives us the meaning of the Frobenius norm of the Jacobian matrix: the square root of the sum of squares of all first-order partial derivatives of the encoder outputs with respect to the inputs.
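The sketch below is one way to implement the contractive penalty for a single sigmoid encoder layer, using the closed form ∂h_j/∂x_i = h_j(1 - h_j) W_ij, so that the Frobenius norm above reduces to Σ_j (h_j(1 - h_j))² Σ_i W_ij². The layer sizes, the λ value, and the names cae/encoder are illustrative assumptions, not taken from these notes.

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    inp = layers.Input(shape=(784,))
    code = layers.Dense(64, activation='sigmoid', name='code')(inp)
    out = layers.Dense(784, activation='sigmoid')(code)
    cae = Model(inp, out)         # full autoencoder (hypothetical sizes)
    encoder = Model(inp, code)    # encoder only
    lam = 1e-4                    # illustrative weight of the contractive term
    optimizer = tf.keras.optimizers.Adam()

    def contractive_loss(x):
        x_hat = cae(x)
        h = encoder(x)
        W = cae.get_layer('code').kernel                 # shape (784, 64)
        dh = h * (1.0 - h)                               # sigmoid derivative h(1 - h)
        # ||J_f(x)||_F^2 = sum_j (h_j(1-h_j))^2 * sum_i W_ij^2
        frob = tf.reduce_sum(tf.square(dh) * tf.reduce_sum(tf.square(W), axis=0), axis=-1)
        mse = tf.reduce_mean(tf.square(x - x_hat), axis=-1)
        return tf.reduce_mean(mse + lam * frob)

    def train_step(x_batch):
        with tf.GradientTape() as tape:
            loss = contractive_loss(x_batch)
        grads = tape.gradient(loss, cae.trainable_variables)
        optimizer.apply_gradients(zip(grads, cae.trainable_variables))
        return loss

For comparison, the plain L1 activity penalty of Section 3.5.2 can be obtained even more simply in Keras by passing activity_regularizer=regularizers.l1(...) to the code layer.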
3.6 Application of Autoencoders: Image Compression
• An uncompressed image consumes more bandwidth, which means more cost for the transmission of the data. So, for efficient use of bandwidth, images are usually compressed before transmission over the internet.
• Here we discuss image compression using a convolutional autoencoder. The MNIST handwritten digit image dataset is used: the images are given as input to the autoencoder, which is trained to reconstruct them.
  No. of training dataset images = 60,000
  No. of testing dataset images = 10,000
  No. of class labels = 10 (digits 0-9)
  Dimension of each image = 28 × 28
• The following Fig. 3.6.1 shows the high-level architecture of the image compression model using convolutional autoencoders.

Fig. 3.6.1: High-level architecture of the image compression model
Fig. 3.6.2: Detailed architecture of the convolutional autoencoder for image compression

• The autoencoder architecture starts with an input layer, whose main function is to take the input image and pass it to the subsequent layers of the autoencoder.
• In the convolution layers, 3 × 3 filters with "same" padding and the ReLU activation function can be used.
• Following the input layer are two convolutional layers, whose main function is to compress the image. Following the two convolutional layers there is a maxpool layer.
• Maxpool layers are added in the encoder part to select the more robust features from the input image or dataset. In the maxpooling layer, 2 × 2 pooling windows are taken.
• After the maxpool layer there are three more convolutional layers, which are again responsible for performing the convolution operation, followed by the code layer. This code layer is basically the bottleneck, consisting of the most compressed representation of the data.
• Now the decoder module starts. After the code layer there are three convolutional layers, which are responsible for restoring the compressed image from the code layer.
• An upsampling layer (the opposite of the maxpooling operation) is also included in this model, in the decoder part, to recover the dimensions of the image from the encoded representation, with a 2 × 2 filter size. The upsampling layer is further followed by three convolutional layers and then again by upsampling.
• After the upsampling layers there is one final convolutional layer, followed by the final output layer. For the output layer, the tanh activation function can be used.

3.7 Application of Autoencoders: Image Denoising
• The dataset considered here consists of MNIST handwritten digit images.

Fig. 3.7.1: Original clean images from MNIST

• Training dataset: 60,000 data points belong to the training dataset.
• Testing dataset: 10,000 data points belong to the testing dataset.

Fig. 3.7.2: Noisy images

• Encoder-decoder network: the network has an input layer of 784 neurons. A simple fully connected denoising model can be defined as follows:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    num_pixels = 784  # 28 x 28 flattened images

    model = Sequential()
    model.add(Dense(500, input_dim=num_pixels, activation='relu'))  # encoder
    model.add(Dense(300, activation='relu'))
    model.add(Dense(100, activation='relu'))                        # code
    model.add(Dense(300, activation='relu'))                        # decoder
    model.add(Dense(500, activation='relu'))
    model.add(Dense(784, activation='sigmoid'))                     # output

Other Applications of Autoencoders
1. Dimensionality reduction
• The coding layer of the model holds the information from the input in compressed form. By separating this layer from the model, the compressed representation can be handled as a variable.
• As a result, by deleting the decoder, an autoencoder with the coding layer as output can be used for dimensionality reduction.
2. Recommendation systems
• Consider the case of YouTube; the idea is:
• The input data is the clustering of similar users based on interests.
• Interests of users are denoted by the videos watched, the watch time for each, and interactions (like commenting) with the video.
• The above data is captured by clustering content.
• The encoder part will capture the interests of the user.
• The decoder part will try to project the interests onto two parts: existing unseen content, and new content from content creators.
3. Anomaly detection (see the sketch after this list)
• Another application for autoencoders is anomaly detection. By learning to replicate the most salient features in the training data, the model is encouraged to learn to precisely reproduce the most frequently observed characteristics.
• When facing anomalies, the model's reconstruction performance should worsen.
• In most cases, only data with normal instances is used to train the autoencoder; in other cases, the proportion of anomalies is so small compared to the whole observation set that its contribution to the learned representation can be ignored. After training, the autoencoder will accurately reconstruct "normal" data, while failing to do so with unfamiliar anomalous data.
• The reconstruction error (the error between the original data and its low-dimensional reconstruction) is used as an anomaly score to detect anomalies.
4. Image generation
• The VAE (Variational Autoencoder) is a generative model used to produce images that the model has not seen before. The concept is that the system will generate new images similar to those it was trained on.
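As a minimal sketch of the reconstruction-error scoring mentioned in the anomaly-detection item above: assuming an autoencoder already trained on "normal" data only, the per-sample error is compared against a threshold. The 99th-percentile threshold and the array names are illustrative choices, not prescribed by these notes.

    import numpy as np

    def reconstruction_error(model, x):
        """Mean squared error between each input row and its reconstruction."""
        x_hat = model.predict(x, verbose=0)
        return np.mean(np.square(x - x_hat), axis=1)

    # train_errors = reconstruction_error(autoencoder, x_train_normal)
    # threshold = np.percentile(train_errors, 99)      # illustrative cut-off
    # test_errors = reconstruction_error(autoencoder, x_test)
    # is_anomaly = test_errors > threshold             # True where reconstruction is poor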
