
TRƯỜNG ĐẠI HỌC GIAO THÔNG VẬN TẢI TP. HCM (Ho Chi Minh City University of Transport)
VIỆN CÔNG NGHỆ THÔNG TIN, ĐIỆN, ĐIỆN TỬ (Institute of Information Technology, Electrical and Electronics Engineering)

Chapter 1.

INTRODUCTION TO
DEEP LEARNING

Nguyễn Thị Khánh Tiên, PhD


tienntk@ut.edu.vn
Key Concepts of Deep Learning
Deep learning is a subfield of machine learning that has revolutionized
artificial intelligence.
Core Concepts:
● Neural Networks:
○ At the heart of deep learning are artificial neural networks,
inspired by the structure of the human brain.
○ These networks consist of interconnected nodes (neurons)
organized in layers:
■ Input Layer: Receives the raw data.
■ Hidden Layers: Perform complex computations on the
data. Deep learning networks have multiple hidden
layers, hence the "deep" in deep learning.
■ Output Layer: Produces the final result.
● Deep Neural Networks:
○ The "deep" in deep learning refers to the multiple layers in these
neural networks.
○ These layers enable the network to learn increasingly complex representations of data (see the sketch below).
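
To make this layered structure concrete, here is a minimal sketch (our illustration, not code from the slides) using the Keras Sequential API; the sizes of 784 inputs, 128 and 64 hidden units, and 10 outputs are arbitrary assumptions.

    # Minimal input / hidden / output structure in Keras.
    # Layer sizes are illustrative assumptions, not values from the lecture.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(784,)),             # input layer: receives the raw data
        tf.keras.layers.Dense(128, activation="relu"),   # hidden layer 1
        tf.keras.layers.Dense(64, activation="relu"),    # hidden layer 2 ("deep" = several hidden layers)
        tf.keras.layers.Dense(10, activation="softmax"), # output layer: produces the final result
    ])
    model.summary()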
How it Works:

● Deep learning models learn by processing vast amounts of data.


● They automatically extract features from the data, eliminating the need for manual feature engineering.
● The network adjusts the connections (weights) between neurons through a process called backpropagation, minimizing the difference between predicted and actual outputs (see the training-loop sketch below).
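
As a hedged illustration of this loop (our own sketch, not the lecture's code), the PyTorch snippet below fits a tiny network to synthetic data: loss.backward() computes gradients by backpropagation, and optimizer.step() adjusts the weights to shrink the gap between predicted and actual outputs.

    # Minimal backpropagation sketch in PyTorch; data and sizes are synthetic assumptions.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    X = torch.randn(256, 4)                       # 256 samples, 4 features
    y = (X.sum(dim=1, keepdim=True) > 0).float()  # toy binary target

    net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
    loss_fn = nn.BCELoss()
    optimizer = torch.optim.SGD(net.parameters(), lr=0.1)

    for epoch in range(100):
        optimizer.zero_grad()
        loss = loss_fn(net(X), y)  # difference between predicted and actual outputs
        loss.backward()            # backpropagation: gradients of the loss w.r.t. each weight
        optimizer.step()           # adjust the connections (weights)

    print(f"final loss: {loss.item():.4f}")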
Key Differences from Machine Learning:

● Feature Extraction: Traditional machine learning often


requires manual feature extraction, while deep learning
automates this process.
● Data Requirements: Deep learning models typically
require large amounts of data to perform effectively.
● Complexity: Deep learning models can handle more
complex problems and data than many traditional
machine learning algorithms.
History of Deep Learning

● Early Foundations (1940s-1960s):


○ The seeds were sown with the work of Walter Pitts and Warren McCulloch, who proposed a computational model of
neural networks in 1943.
○ Frank Rosenblatt's Perceptron (1958) introduced the concept of trainable artificial neurons.
○ Early challenges, like the limitations of the Perceptron and the "AI winter," hindered progress.

● Resurgence (1980s-1990s):
○ The development of the backpropagation algorithm in the 1980s was a crucial breakthrough, enabling the training of
multi-layer neural networks.
○ Yann LeCun's work on Convolutional Neural Networks (CNNs) for handwritten digit recognition (LeNet) demonstrated
the practical potential of deep learning.

● The Deep Learning Revolution (2000s-Present):


○ Increased computational power, particularly the use of GPUs, made it possible to train much larger and deeper networks.
○ Datasets like ImageNet provided the vast amounts of data needed for effective training.
○ Breakthroughs in areas like computer vision (AlexNet), natural language processing (NLP), and reinforcement learning
(AlphaGo) propelled deep learning into the mainstream.
○ The creation of transformer models has revolutionized NLP and many other fields.
Motivation of Deep Learning
● Mimicking the Brain. Deep learning is inspired by the
hierarchical structure of the human brain, aiming to
create systems that can learn complex patterns and
representations from data.
● Automating Feature Extraction. Traditional machine
learning often requires manual feature engineering,
which can be time-consuming and challenging. Deep
learning automates this process, allowing models to
learn relevant features directly from raw data.

● Solving Complex Problems. Deep learning has proven highly effective


in tackling problems that are difficult for traditional algorithms, such as
image recognition, speech recognition, and natural language
understanding.
● The need to process and understand vast amounts of data. The
explosion of digital information has created a need for algorithms that
can make sense of that data. Deep learning excels at this.
Evolution of Deep Learning
● Architectural Innovations. The development of various neural
network architectures, such as CNNs, Recurrent Neural
Networks (RNNs), and Transformers, has enabled deep
learning to address a wide range of tasks.
● Algorithmic Advancements. Ongoing research has led to
improvements in training algorithms, optimization techniques,
and regularization methods.
● Hardware Acceleration. The development of specialized
hardware, such as GPUs and TPUs, has significantly
accelerated deep learning training and inference.
● Data Availability. The increase in the amount of available data
has been a key factor in the success of deep learning.
● Expanding Applications. Deep learning is now being applied
in diverse fields, including healthcare, finance, transportation,
and entertainment.
● Ethical Considerations. As deep learning becomes more
powerful, there is an increasing focus on addressing ethical
concerns, such as bias, fairness, and privacy.
Types of DL Networks
The most common types:
1. Convolutional Neural Networks (CNNs):
● Best for: Image recognition, image classification, object detection, video analysis, and other visual tasks.
● Key features: Convolutional layers (for feature extraction), pooling layers (for dimensionality reduction), and fully connected layers (for classification).
● Examples: ResNet, VGGNet, AlexNet, EfficientNet (a minimal CNN sketch follows this list).
2. Recurrent Neural Networks (RNNs):
● Best for: Processing sequential data, such as natural language processing (NLP), time series analysis, and speech recognition.
● Key features: Recurrent connections that allow information to persist over time.
● Examples: LSTM (Long Short-Term Memory), GRU (Gated Recurrent Unit).
3. Transformers:
● Best for: NLP tasks (like machine translation, text summarization, and question answering), but also increasingly used in computer vision.
● Key features: Attention mechanisms that allow the model to focus on relevant parts of the input sequence.
● Examples: BERT, GPT, Transformer, Vision Transformer (ViT).
4. Generative Adversarial Networks (GANs):
● Best for: Generating new data, such as images, music, and text.
● Key features: Two networks (a generator and a discriminator) that compete against each other.
● Examples: DCGAN, StyleGAN.
5. Autoencoders:
● Best for: Dimensionality reduction, feature learning, and anomaly detection.
● Key features: An encoder that compresses the input data and a decoder that reconstructs it.
6. Deep Belief Networks (DBNs):
● Best for: Unsupervised learning and feature extraction.
● Key features: A stack of Restricted Boltzmann Machines (RBMs).
7. Graph Neural Networks (GNNs):
● Best for: Working with graph-structured data, such as social networks, molecular structures, and knowledge graphs.
● Key features: Message passing between nodes in a graph.
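
The CNN sketch promised above, in PyTorch (our illustration; the 28x28 grayscale input and 10 classes are assumptions in the spirit of MNIST). It shows the three stages named in item 1: convolution for feature extraction, pooling for dimensionality reduction, and a fully connected layer for classification.

    # Minimal CNN sketch; shapes assume 28x28 grayscale images and 10 classes.
    import torch
    import torch.nn as nn

    cnn = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),  # convolution: extract local features
        nn.ReLU(),
        nn.MaxPool2d(2),                            # pooling: 28x28 -> 14x14
        nn.Conv2d(8, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),                            # pooling: 14x14 -> 7x7
        nn.Flatten(),
        nn.Linear(16 * 7 * 7, 10),                  # fully connected: classification scores
    )

    logits = cnn(torch.randn(1, 1, 28, 28))         # one fake grayscale image
    print(logits.shape)                             # torch.Size([1, 10])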
Applications of Deep Learning
Deep learning has led to significant advances in various fields, including:
● Image Recognition: Identifying objects, people, and
scenes in images and videos.
● Natural Language Processing (NLP):
Understanding and generating human language,
powering applications like chatbots and machine
translation.
● Speech Recognition: Converting spoken language
into text.
● Autonomous Driving: Enabling vehicles to perceive
their surroundings and navigate safely.
● Medical Diagnosis: Assisting in the detection of
diseases from medical images.

Key Considerations:
● Data Availability
● Computational Resources
● "Black Box" Nature: Deep learning models can be
difficult to interpret, making it challenging to
understand how they arrive at their predictions.
The main problems addressed by deep learning
Deep learning tackles a wide range of complex problems, often categorized as follows:
1. Classification:
● Description:
○ Classification aims to assign an object to one of a set of predefined categories.
○ Examples: image classification (cats, dogs), text classification (spam, non-spam).
● Common techniques:
○ Convolutional Neural Networks (CNNs) for image classification.
○ Recurrent Neural Networks (RNNs) and Transformers for text classification.
2. Regression:
● Description:
○ Regression predicts a continuous value.
○ Examples: predicting house prices, predicting temperature.
● Common techniques:
○ Feedforward Neural Networks (a minimal sketch appears at the end of this slide).
3. Object Detection:
● Description:
○ Object detection not only classifies objects but also locates them within an image or video.
○ Examples: detecting pedestrians in surveillance video, detecting traffic signs.
● Common techniques:
○ R-CNN (Region-based Convolutional Neural Networks), YOLO (You Only Look Once), SSD (Single Shot MultiBox
Detector).
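
Returning to regression (item 2), here is the promised minimal feedforward sketch in Keras; the synthetic data and network sizes are our own illustrative assumptions, standing in for a task like house-price prediction.

    # Feedforward regression sketch: predict a continuous value from 3 numeric features.
    import numpy as np
    import tensorflow as tf

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3)).astype("float32")                     # synthetic features
    true_w = np.array([[2.0], [-1.0], [0.5]], dtype="float32")
    y = X @ true_w + 0.1 * rng.normal(size=(500, 1)).astype("float32")  # noisy linear target

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(3,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1),                   # linear output for a continuous value
    ])
    model.compile(optimizer="adam", loss="mse")     # mean squared error suits regression
    model.fit(X, y, epochs=20, verbose=0)
    print(model.predict(X[:3], verbose=0))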
The main problems addressed by deep learning
4. Image Segmentation:
● Description:
○ Image segmentation divides an image into meaningful regions, with each pixel assigned a label.
○ Examples: medical image segmentation to identify organs, satellite image segmentation for land analysis.
● Common techniques:
○ U-Net, FCN (Fully Convolutional Networks).

5. Natural Language Processing (NLP):


● Description:
○ NLP encompasses problems related to understanding and generating human language.
○ Examples: machine translation, sentiment analysis, text summarization, text generation.
● Common techniques:
○ RNNs, LSTM (Long Short-Term Memory), Transformers (BERT, GPT).

6. Generative Models:
● Description:
○ Generative models learn to create new data similar to the training data.
○ Examples: generating realistic images, generating music, generating text.
● Common techniques:
○ GAN (Generative Adversarial Networks), VAE (Variational Autoencoders); a minimal GAN sketch follows.
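
To ground item 6, here is the promised minimal GAN sketch in PyTorch (a toy example of our own, not from the slides): the generator learns to mimic samples from a one-dimensional Gaussian while the discriminator learns to tell real samples from generated ones.

    # Toy GAN: generator G maps noise to 1-D samples; discriminator D scores realness.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 1.25 + 4.0  # "real" data: samples from N(4, 1.25^2)
        fake = G(torch.randn(64, 8))            # generated data from random noise

        opt_d.zero_grad()                       # discriminator step: real -> 1, fake -> 0
        loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        loss_d.backward()
        opt_d.step()

        opt_g.zero_grad()                       # generator step: make D output 1 on fakes
        loss_g = bce(D(fake), torch.ones(64, 1))
        loss_g.backward()
        opt_g.step()

    print(fake.mean().item(), fake.std().item())  # should drift toward 4.0 and 1.25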
Challenges in Deep Learning
1. Data Dependence:
● Requires massive, high-quality datasets, which are often
costly and difficult to obtain.
● Data biases lead to biased models.

2. Computational Demands:
● Training deep networks is computationally intensive,
needing powerful hardware.
● Long training times.

3. Model Complexity:
● "Black box" nature: lack of interpretability.
● Difficult hyperparameter tuning.
● Overfitting and underfitting issues.

4. Ethical Concerns:
● Potential for bias and unfairness.
● Privacy risks.
● Security risks from adversarial attacks.

5. Deployment difficulties:
● Hardware limitations on edge devices.
● Scalability problems.
Where to Find Deep Learning Exercises
● Google Developers Machine Learning Crash Course:
○ This course provides interactive exercises that allow you to experiment with neural network configurations.
○ It's a fantastic resource for beginners.
○ Here is the link: Neural networks: Interactive exercises | Machine Learning - Google for Developers

● Kaggle:
○ Kaggle is a platform that hosts data science competitions and provides datasets and notebooks.
○ You can find numerous deep learning exercises and tutorials on Kaggle.
○ Here is a good starting point: Exercise: Deep Neural Networks - Kaggle

● Deep Learning Book Exercises:


○ The book "Deep Learning" by Ian Goodfellow, Yoshua Bengio, and Aaron Courville has associated exercise material.
○ Here is a link to that resource: Exercises - Deep Learning

● GitHub:
○ GitHub is a treasure trove of deep learning projects and exercises.
○ Search for repositories related to deep learning tutorials or specific tasks.
○ Here is an example: qiyuangong/Deep_Learning_Exercise: Deep Learning Exercise and Notebook - GitHub
Dataset
A dataset is a collection of data. This data can come in various forms, such as numbers, text, images, audio, and video. The key purpose of a dataset is to provide the raw material that machine learning algorithms use to learn patterns and make predictions.

Key Characteristics:
● Structure. Datasets can be structured (like tables in a database), semi-structured (like JSON or XML files), or unstructured (like raw text or images).
● Size. Datasets can range from small collections of a few data points to massive repositories containing billions of records.
● Purpose. Datasets are used for various purposes, including: training machine learning models, testing model performance, and analyzing data to gain insights.

Where to Find Datasets:
● Kaggle: A popular platform with a wide variety of datasets.
● UCI Machine Learning Repository: A classic source of datasets for machine learning research.
● Google Dataset Search: A search engine for datasets.
● Hugging Face Datasets: A library and hub for easily accessing and sharing datasets.
● Government open data portals (e.g., data.gov).

Importance in Deep Learning. Deep learning models, in particular,
require large datasets to achieve high accuracy. The more data a
model has, the better it can learn complex patterns.
Datasets are often divided into three subsets (see the splitting sketch below):
● Training set: Used to train the model.
● Validation set: Used to fine-tune the model's
hyperparameters.
● Testing set: Used to evaluate the model's final performance.
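
A common way to produce these three subsets (a sketch under our own assumptions, using placeholder arrays) is two calls to scikit-learn's train_test_split, as shown below.

    # Train / validation / test split with scikit-learn; 70/15/15 is an illustrative choice.
    import numpy as np
    from sklearn.model_selection import train_test_split

    X = np.arange(1000).reshape(500, 2)  # placeholder features
    y = np.arange(500)                   # placeholder labels

    # First split off the test set, then carve the validation set out of the remainder.
    X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.15, random_state=42)
    X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.15 / 0.85, random_state=42)

    print(len(X_train), len(X_val), len(X_test))  # roughly 70% / 15% / 15%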
Dataset
Some common datasets perfect for beginners diving into deep learning:
1. MNIST (Modified National Institute of Standards and
Technology database):
● Description: A dataset of handwritten digits (0-9).
● Use Case: Image classification.
● Why it's good:
○ Very simple and well-understood.
○ Excellent for learning basic CNNs.
○ Abundant tutorials and examples.
● Where to find it: Built into many deep learning libraries
(TensorFlow, PyTorch).
2. Fashion-MNIST:
● Description: Similar to MNIST, but with images of
fashion items (e.g., shoes, shirts).
● Use Case: Image classification.
● Why it's good:
○ Slightly more challenging than MNIST.
○ Still relatively simple and easy to work with.
○ A drop-in replacement for the original MNIST, with a more challenging task.
● Where to find it: Built into many deep learning libraries
(TensorFlow, PyTorch).
Dataset
3. CIFAR-10 (Canadian Institute For Advanced Research):
● Description: A dataset of 60,000 32x32 color images in 10 classes (e.g., airplane, dog, cat).
● Use Case: Image classification.
● Why it's good:
○ A step up in complexity from MNIST.
○ Color images introduce more challenges.
○ Very popular.
● Where to find it: Built into many deep learning libraries (TensorFlow, PyTorch); a loading sketch follows this list.
4. IMDB Movie Reviews:
● Description: A dataset of movie reviews, labeled as positive or negative.
● Use Case: Sentiment analysis (text classification).
● Why it's good:
○ Introduces natural language processing (NLP).
○ Good for learning RNNs and basic text processing.
○ Commonly used.
● Where to find it: Built into many deep learning libraries (TensorFlow, Keras).
5. Boston Housing Dataset:
● Description: Data about housing values in the Boston area, including features like crime rate and number of rooms.
● Use Case: Regression (predicting house prices).
● Why it's good:
○ Simple numerical dataset.
○ Good for learning basic regression models.
○ Easy to understand.
● Where to find it: Built into older versions of scikit-learn (removed in scikit-learn 1.2 over ethical concerns with the dataset).
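
As the "Where to find it" notes suggest, these beginner datasets load in a line or two. For example, with Keras (a sketch of ours, not from the slides):

    # Loading MNIST from Keras; Fashion-MNIST and CIFAR-10 work the same way via
    # tf.keras.datasets.fashion_mnist and tf.keras.datasets.cifar10.
    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
    print(x_test.shape, y_test.shape)    # (10000, 28, 28) (10000,)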
