
Deccan Education Society’s

WILLINGDON COLLEGE, SANGLI


B.Sc. Computer Science (Entire) Department

B.SC. Part - III

Subject – Machine Learning


B. Sc. Part- III Computer Science Entire (Semester I)
Course Title: Machine Learning ( Part – I )

1. Introduction to Machine Learning


• Introduction to Machine Learning
• Introduction
• Evolution of machine learning
• Difference between AI and Machine learning
• Developments in machine learning
• Introduction to K-nearest neighbour method, different phases of predictive modeling
B. Sc. Part- III Computer Science Entire (Semester I)
Course Title: Machine Learning

2. Aspects of Machine Learning


• Definition of learning System
• Goals and applications of machine learning
• Aspects of developing a learning system: training data, concept
representation, function approximation
B. Sc. Part- III Computer Science Entire (Semester I)
Course Title: Machine Learning

3. Machine Learning Modelling


• ML Modeling flow, How to treat Data in ML?
• Types of machine learning, performance measures
• Bias-Variance Trade-Off
• Overfitting & Underfitting, Bootstrap Sampling, Bagging, Aggregation

4. Basic Probability and terms


• Rules of probability, permutations and combinations
• Bayes' theorem, Descriptive statistics, compound probability, conditional probability
Artificial Intelligence

Artificial Intelligence stands in contrast to Human Intelligence.

Artificial Intelligence suggests that machines can mimic (copy) humans in:
• Talking
• Thinking
• Learning
• Planning
• Understanding

Artificial Intelligence is also called Machine Intelligence and Computer Intelligence.


Narrow AI
Narrow Artificial Intelligence is limited to narrow (specific) areas, like most of the AI we have around us today.

Examples include: email spam filters, text-to-speech, speech recognition, self-driving cars, e-payment, Google Maps, text autocorrect, social media, face detection, search algorithms, robots, NLP (Natural Language Processing), flying drones, Amazon's Alexa, and Netflix's recommendations.

Narrow AI is also called Weak AI.

Weak AI - Built to simulate human intelligence.

Strong AI - Built to copy human intelligence.


Strong AI

• Strong Artificial Intelligence is the type of AI that mimics human intelligence.

• Strong AI indicates the ability to think, plan, learn, and communicate.

• Strong AI is the theoretical next level of AI - True Intelligence.

• Strong AI moves towards machines with self-awareness, consciousness, and objective thoughts.
Neural Networks (NN)

A Neural Network is:

• A programming technique
• A method used in machine learning
• Software that learns from its mistakes

Neural Networks are based on how the human brain works:

Neurons send messages to each other. While the neurons try to solve a problem (over and over again), the network strengthens the connections that lead to success and weakens the connections that lead to failure.
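As a rough illustration of this idea (not part of the syllabus text), the sketch below trains a single artificial neuron with the classic perceptron update rule; the tiny AND dataset and the learning rate are made-up values for demonstration:

```python
# A single artificial neuron learning the logical AND function.
# Connections (weights) that lead to success are kept;
# connections that lead to failure are adjusted after every mistake.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 0, 0, 1])                      # desired outputs (AND)

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

for epoch in range(20):
    for xi, target in zip(X, y):
        prediction = 1 if np.dot(weights, xi) + bias > 0 else 0
        error = target - prediction
        # adjust the connection strengths in the direction that reduces the error
        weights += learning_rate * error * xi
        bias += learning_rate * error

print(weights, bias)  # the learned connection strengths
```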
Machine Learning

Machine Learning is a subfield of Artificial Intelligence. It is the study of algorithms that can improve automatically through experience and historical data, and that use this experience to build a model.

A machine learning model is similar to computer software designed to recognize patterns or behaviours based on previous experience or data.

The learning algorithm discovers patterns within the training data, and it outputs an ML model which captures these patterns and makes predictions on new data.
"Learning machines to imitate human intelligence"

Traditional programming uses known algorithms to produce results from data:

Data + Algorithms = Results

Machine learning creates new algorithms from data and results:

Data + Results = Algorithms


Examples of Machine Learning

• One popular example of machine learning is image recognition. This involves training
a computer to accurately identify and classify images based on patterns and features
within the image data. For instance, an ML algorithm can be trained to distinguish
between pictures of cats and dogs by analysing their unique physical characteristics
such as fur colour, tail shape, and ear size.

• Another common application of ML is natural language processing (NLP). In this case,


the technology enables computers to interpret human language in various forms such
as text or speech. NLP is used in chatbots, virtual assistants like Siri or Alexa, and
even for sentiment analysis in social media monitoring tools.
Machine Learning

Traditional Programming: the computer is given Data and a Program, and it produces the Output.
Machine Learning: the computer is given Data and the Output, and it produces the Program (the model).
For Example

A traditional program written for the addition of two numbers returns the output: the sum of the two numbers.

Suppose instead we provide data:

6 + 2 = 8,  7 + 5 = 12,  6 + 4 = 10, and so on.

Machine learning learns the rule (addition) from this data, as the sketch below shows.
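A small sketch of this "Data + Results = Algorithms" idea, assuming Python with scikit-learn is available (the example numbers are illustrative): a linear regression model can recover the addition rule purely from example pairs and their sums.

```python
# Learning the "addition" rule from data instead of programming it.
from sklearn.linear_model import LinearRegression

# Data (pairs of numbers) and Results (their sums)
X = [[6, 2], [7, 5], [6, 4], [1, 9], [3, 3]]
y = [8, 12, 10, 10, 6]

model = LinearRegression()
model.fit(X, y)                    # the "program" (model) is learned from the data

print(model.predict([[20, 30]]))   # close to 50, even though 20 + 30 was never shown
```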


Applications of Machine Learning
Evolution of machine learning

Early Beginnings and Conceptual Foundations


• The roots of machine learning are intertwined with early AI developments in the
1950s and 1960s. Initial ideas revolved around creating machines that could
learn and adapt.

• Notable early work included algorithms like the Perceptron, developed by Frank
Rosenblatt in the late 1950s. It was a simple yet significant step towards neural
network concepts.

1970s – Decision Trees and Development of Theory


• The 1970s saw the introduction of decision tree algorithms. These were used for
classification and regression tasks.

• This era also saw the development of the theoretical foundations of machine
learning, considering issues like the trade-off between bias and variance.
Evolution of machine learning

1980s – Emergence of Machine Learning as a Distinct Field


• The 1980s marked the separation of machine learning from AI as its own field. This
was partly due to the limitations of knowledge-driven AI systems.
• Key algorithms developed during this time included the k-nearest neighbours
algorithm, Naive Bayes classifier and the beginning of Support Vector Machines
(SVMs).

1990s – Growth and Diversification


• The 1990s saw a boom in machine learning, with the development of new algorithms
and the refinement of existing ones.
• Ensemble methods like Bagging and Boosting and algorithms like Random Forests,
became popular.
Evolution of machine learning

2000s – Big Data and Scalability


• The rise of the internet and the explosion of data availability in the 2000s brought
new challenges and opportunities for machine learning.
• This era saw the development of scalable machine learning algorithms capable of
handling large volumes of data.
• Important progress was made in Bayesian networks, Gradient Boosting machines,
and the emergence of Deep Learning.

2010s – Deep Learning Revolution


• The 2010s were dominated by the rise of deep learning, a type of machine learning
based on deep neural networks.
• This era also saw machine learning applications in a wide range of fields, from healthcare to finance, and the integration of machine learning into consumer products.
Evolution of machine learning

Recent Trends-
• More recently, the focus has been on making machine learning models more efficient,
interpretable and fair.
• Advances in reinforcement learning, unsupervised learning and transfer learning are
also noteworthy.
• Ethical considerations and the social impact of machine learning have become
increasingly important topics of discussion.

Machine learning continues to be a rapidly evolving field, with new algorithms and
approaches developed regularly. Each step in its evolution has opened up new
possibilities and applications, making it one of the most dynamic areas of computer
science.
Difference between AI and Machine Learning

• AI and ML are making the world more technology-driven, increasing productivity and efficiency. The two terms are often used interchangeably, but they are different.
• AI is the broader concept of machines that enables computers to mimic human
behaviour.
• In contrast, ML is an application of AI based on the idea that we should be able to
give machines access to data and let them learn for themselves.

• In simple terms, Machine Learning is a subset of Artificial Intelligence.
Difference between AI and Machine Learning

AI: Artificial intelligence is a technology which enables a machine to simulate human behaviour.
ML: Machine learning is a subset of AI which allows a machine to automatically learn from past data without being explicitly programmed.

AI: The goal of AI is to make a smart computer system, like a human, to solve complex problems.
ML: The goal of ML is to allow machines to learn from data so that they can give accurate output.

AI: In AI, we make intelligent systems to perform any task like a human.
ML: In ML, we teach machines with data to perform a particular task and give an accurate result.
Difference between AI and Machine Learning

AI: Machine learning and deep learning are the two main subsets of AI.
ML: Deep learning is a main subset of machine learning.

AI: AI has a very wide scope.
ML: Machine learning has a limited scope.

AI: AI is working to create an intelligent system which can perform various complex tasks.
ML: Machine learning is working to create machines that can perform only those specific tasks for which they are trained.

AI: An AI system is concerned with maximizing the chances of success.
ML: Machine learning is mainly concerned with accuracy and patterns.
Difference between AI and Machine Learning

AI: The main applications of AI are Siri, customer support using chatbots, expert systems, online game playing, intelligent humanoid robots, etc.
ML: The main applications of machine learning are online recommender systems, Google search algorithms, Facebook auto friend tagging suggestions, etc.

AI: On the basis of capabilities, AI can be divided into three types: Weak AI, General AI, and Strong AI.
ML: Machine learning can also be divided into three main types: Supervised learning, Unsupervised learning, and Reinforcement learning.
Developments in machine learning
Introduction to K-nearest neighbour method

K-Nearest Neighbour (K-NN) is one of the simplest Machine Learning algorithms. It is based on the Supervised Learning technique and can be used for both classification and regression. It is a versatile algorithm, also used for imputing missing values and resampling datasets. The K-NN algorithm stores all the available data and classifies a new data point based on similarity. This means that when new data appears, it can easily be classified into a well-suited category by using the K-NN algorithm.

K-NN is a non-parametric algorithm, which means it does not make any assumption about the underlying data. It is also called a lazy learner algorithm because it does not learn from the training set immediately; instead, it stores the dataset and, at the time of classification, performs an action on the dataset.
Introduction to K-nearest neighbour method

The working of K-NN can be explained on the basis of the following algorithm (a small code sketch is given after the steps):

Step 1: Select the number K of neighbours.
Step 2: Calculate the Euclidean distance from the new data point to the stored data points.
Step 3: Take the K nearest neighbours as per the calculated Euclidean distance.
Step 4: Among these K neighbours, count the number of data points in each category.
Step 5: Assign the new data point to the category for which the number of neighbours is maximum.
Step 6: Our model is ready.
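A minimal from-scratch sketch of these steps in Python (the toy feature values, the "cat"/"dog" labels and the choice of K = 3 are illustrative assumptions, not part of the syllabus):

```python
# K-Nearest Neighbour classification following the steps above.
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, new_point, k=3):           # Step 1: choose K
    # Step 2: Euclidean distance from the new point to every stored point
    distances = np.linalg.norm(np.asarray(train_X) - np.asarray(new_point), axis=1)
    # Step 3: take the K nearest neighbours
    nearest = np.argsort(distances)[:k]
    # Steps 4 and 5: count the categories among the neighbours and pick the majority
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Illustrative training data: two features per sample, labelled "cat" or "dog"
train_X = [[1, 1], [1, 2], [2, 1], [6, 6], [7, 6], [6, 7]]
train_y = ["cat", "cat", "cat", "dog", "dog", "dog"]

print(knn_predict(train_X, train_y, [2, 2]))   # expected: cat
print(knn_predict(train_X, train_y, [7, 7]))   # expected: dog
```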
Introduction to K-nearest neighbour method

Example: Suppose we have an image of a creature that looks similar to both a cat and a dog, and we want to know whether it is a cat or a dog. For this identification we can use the K-NN algorithm, as it works on a similarity measure. Our K-NN model will find the features of the new image that are similar to the cat and dog images, and based on the most similar features it will put the image in either the cat or the dog category.
Introduction to K-nearest neighbour method

Advantages of KNN Algorithm

• It is simple to implement.
• It is robust to noisy training data.
• It can be more effective if the training data is large.

Disadvantages of KNN Algorithm

• The value of K always needs to be determined, which may be complex at times.
• The computation cost is high because the distance to every training sample must be calculated.
Predictive Model -

Predictive modeling means:

i) Making use of past data and attributes, and
ii) Predicting the future from them.

Predictive analytics refers to the application of mathematical models to large amounts of data with the aim of identifying past behaviour patterns and predicting future outcomes. The practice combines data collection, data mining, machine learning and statistical algorithms to provide the "predictive" element.

It involves using statistical algorithms and machine learning techniques to analyze historical data and make predictions about future or unknown events.
Consider the example of recommending movies to users:

• Past: horror movies the user has already watched.
• Future: unwatched horror movies that can be recommended.
• Similar people have watched a classic movie, so the user sees the suggestion "Classic Movie".
• Attributes used: gender, age, location, past movies.

Consider another example: prediction of stock prices.

It involves:
a) Analyzing past stock prices
b) Analyzing similar stocks
c) Predicting the required future stock price
Phases of Predictive Model or Steps of Predictive Model Life Cycle

1) Define the problem and gather data

2) Clean and prepare the data

3) Split the data into training and test sets

4) Select the appropriate algorithm

5) Evaluate the model

6) Use the model to make predictions


Phases of Predictive Model or Steps of Predictive Model Life Cycle
Step 1- Define the problem and gather data
The first step in building a predictive model is to define the problem
you are trying to solve and gather the necessary data. For example,
Suppose we want to build a predictive model to identify patients who
are at risk of developing diabetes. We have gathered data on patient
demographics, medical history and lifestyle factors and have labelled
each patient as either diabetic or non-diabetic.
Step 2 - Clean and prepare the data
Before we can use the data for analysis, we need to clean and
prepare it. This involves removing any missing values, encoding
categorical variables and scaling numerical variables.
Step 3 - Split the data into training and test sets
Next, we need to split the data into a training set and a test set. The
training set will be used to train the model, while the test set will be
used to evaluate its performance.

Step 4 - Select the appropriate algorithm


The next step is to select the appropriate algorithm for our data and
problem. For our example, we will use logistic regression, which is a
popular algorithm for binary classification problems.

Step 5 - Evaluate the model

After training the model, we need to evaluate its performance using the test set. We can use various metrics such as accuracy, precision, recall, and F1 score to evaluate the model's performance.

Step 6 - Use the model to make predictions


After evaluating the model, we can use it to make predictions on new
data.
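As a rough end-to-end sketch of this life cycle, assuming scikit-learn is available and using a small synthetic dataset in place of the diabetes data described above (the feature stand-ins and the labelling rule are made up for illustration):

```python
# Steps 2-6 of the predictive model life cycle on a synthetic binary-classification dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Step 1 (assumed done): synthetic features and diabetic / non-diabetic labels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # stand-ins for demographic and lifestyle features
y = (X[:, 2] + 0.5 * X[:, 1] > 0).astype(int)  # made-up labelling rule

# Step 2: clean and prepare the data (scale the numerical variables)
X = StandardScaler().fit_transform(X)

# Step 3: split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Step 4: select the algorithm (logistic regression for binary classification)
model = LogisticRegression().fit(X_train, y_train)

# Step 5: evaluate the model on the test set
pred = model.predict(X_test)
print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("recall   :", recall_score(y_test, pred))
print("F1 score :", f1_score(y_test, pred))

# Step 6: use the model to make a prediction on new data
print(model.predict([[0.1, -0.3, 1.2]]))
```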
