Unit 2: AI Project Cycle

The AI Project Cycle consists of five stages: Problem Scoping, Data Acquisition, Data Exploration, Modelling, and Evaluation. Each stage involves specific tasks such as identifying the problem, acquiring and analyzing data, creating models using rule-based or learning-based approaches, and evaluating the model's performance. The document also discusses neural networks and their similarities to the human nervous system in processing information.


AI Project Cycle

Introduction

Consciously or subconsciously, our mind makes a plan for every task we have to accomplish, which is why things become clearer in our mind. Similarly, if we have to develop an AI project, the AI Project Cycle provides us with an appropriate framework that can lead us towards the goal. The AI Project Cycle mainly has five stages:

1. Problem Scoping:

Problem Scoping is about identifying a problem and having a vision to solve it.
Scoping a problem is not easy, as we need a deeper understanding of it so that the picture becomes clearer while we are working to solve it. Hence, we use the 4Ws Problem Canvas to help us out. The 4Ws Problem Canvas helps in identifying the key elements related to the problem.

● Who?
The “Who” block helps in analysing the people getting affected directly or indirectly by the problem. Under this, we find out who the ‘Stakeholders’ of this problem are and what we know about them. Stakeholders are the people who face the problem and would benefit from the solution.
● What?
The “What” block helps in determining the nature of the problem: what the problem is and how we know that it is a problem. Under this block, we also gather evidence to prove that the problem we have selected actually exists.
● Where?
Now that you know who is associated with the problem and what
the problem actually is; you need to focus on the
context/situation/location of the problem. This block will help you
look into the situation in which the problem arises, the context
of it, and the locations where it is prominent.
● Why?
You have now listed down all the major elements that affect the problem directly: who would benefit from the solution, what is to be solved, and where the solution will be deployed. These three canvases become the base of why you want to solve this problem. Thus, in the “Why” canvas, think about the benefits which the stakeholders would get from the solution and how it would benefit them as well as society.

Take a look at the Problem Statement Template and understand its key elements.
2. Data Acquisition:

As the term clearly mentions, this stage is about acquiring data for the project. Data can be a piece of information or facts and statistics collected together for reference or analysis. Whenever we want an AI project to be able to predict an output, we need to train it first using data.
Training data is the data that is used to train a specific AI system. Testing data is the data that is used to test whether the AI system is working as required.
For any AI project to be efficient, the training data should be authentic and relevant to the problem statement scoped.
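The split between training and testing data described above can be sketched in code. The helper name `train_test_split` and the 80/20 ratio below are illustrative choices, not part of the original text:

```python
import random

def train_test_split(dataset, test_ratio=0.2, seed=42):
    """Shuffle the dataset and split it into training and testing portions."""
    data = dataset[:]                 # copy so the original list is untouched
    random.Random(seed).shuffle(data)
    split_at = int(len(data) * (1 - test_ratio))
    return data[:split_at], data[split_at:]

# 10 example records (in a real project these would be rows of acquired data)
records = list(range(10))
train, test = train_test_split(records, test_ratio=0.2)
print(len(train), len(test))  # 8 2
```

Shuffling before splitting helps ensure that the testing data is representative of the whole dataset rather than, say, the last few records collected.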

Data Features:
Data features refer to the type of data you want to collect. After listing the data features, you know what sort of data is to be collected. Now the question arises: from where can we get this data? There can be various ways in which you can collect data, such as surveys, web scraping, sensors, cameras, observations and APIs.
Sometimes, you use the internet and try to acquire data for your project from random websites. Such data might not be authentic, as its accuracy cannot be proved. Due to this, it becomes necessary to find a reliable source of data from which authentic information can be taken.
3. Data Exploration:
While acquiring data, you must have noticed that data is a complex entity: it is full of numbers, and if anyone wants to make some sense out of it, they have to work some patterns out of it. Thus, to analyse the data, you need to visualise it in some user-friendly format so that you can:
● Get a sense of the trends, relationships and patterns contained within the data.
● Define a strategy for which model to use at a later stage.
● Communicate these findings to others effectively.
To visualise data, we can use various types of visual representations.
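As a small sketch of this step, the snippet below computes simple summaries of a made-up dataset and prints a crude text "bar chart"; in practice you would use a dedicated charting tool, but the idea of surfacing trends is the same:

```python
import statistics

# A small made-up sample: monthly sales figures (illustrative data only)
sales = [120, 135, 150, 148, 160, 175, 170, 190]

# Simple summaries give a first sense of the range and level of the data
print("min :", min(sales))
print("max :", max(sales))
print("mean:", statistics.mean(sales))

# A crude text bar chart (one '#' per 10 units) makes the upward trend visible
for value in sales:
    print("#" * (value // 10))
```

Even this rough picture already answers exploration questions: the values rise over time, which would influence which model to try at the modelling stage.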

4. Modelling:
In the previous stage, data exploration, we saw how we can represent data graphically using various tools. This graphical representation makes data easy for humans to understand and use for a decision or prediction. But when it comes to a machine accessing and analysing data, the machine requires a mathematical representation of the data. Hence, every model needs a mathematical approach to analyse data. Broadly, there are two approaches taken by researchers for AI modelling:

○ Rule-Based Approach
○ Learning-Based Approach
Rule-Based Approach:

The rule-based approach refers to AI modelling where the relationships or patterns in the data are defined by the developer. The machine follows the rules or instructions mentioned by the developer and performs its task accordingly. For example, suppose you have a dataset consisting of 100 images of apples and 100 images of bananas. To train your machine, you feed this data into the machine and label each image as either apple or banana. Now, if you test the machine with the image of an apple, it will compare the image with the trained data and, according to the labels of the trained images, identify the test image as an apple. This is known as a rule-based approach. The rules given to the machine in this example are the labels given for each image in the training dataset.
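A minimal sketch of a rule-based model is a function whose rules are written entirely by the developer. The features `color` and `shape` below are hypothetical stand-ins for whatever the developer chooses to check:

```python
def classify_fruit(color, shape):
    """Rules written explicitly by the developer; the machine only follows them."""
    if color == "red" and shape == "round":
        return "apple"
    if color == "yellow" and shape == "long":
        return "banana"
    return "unknown"

print(classify_fruit("red", "round"))    # apple
print(classify_fruit("yellow", "long"))  # banana
```

Note the limitation this illustrates: any input the rules do not anticipate (a green apple, say) falls through to "unknown", because a rule-based model cannot adapt on its own.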
Learning-Based Approach:

The learning-based approach refers to AI modelling where the machine learns by itself. Under the learning-based approach, the AI model gets trained on the data fed to it and is then able to design a model which is adaptive to changes in the data. That is, if the model is trained with X type of data and the machine designs the algorithm around it, the model would modify itself according to the changes which occur in the data, so that all the exceptions are handled.

The learning-based approach can further be divided into three parts:

1. Supervised Learning:
● In a supervised learning model, the dataset which is fed
to the machine is labelled.
● A label is some information which can be used as a tag for data.
● For example, students get grades according to the marks they secure in examinations. These grades are labels which categorise the students according to their marks.
There are two types of supervised learning models:
Classification: Here the data is classified according to the labels. For example, in the grading system, students are classified on the basis of the grades they obtain with respect to their marks in the examination. This model works on a discrete dataset, which means the data need not be continuous.
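A classification model can be sketched with the document's grading example. The tiny labelled dataset and the nearest-neighbour rule below are illustrative assumptions; real classifiers use far more data and more sophisticated algorithms:

```python
# Labelled training data: marks out of 100 -> grade (a tiny, made-up dataset)
training_data = [(95, "A"), (85, "A"), (72, "B"), (65, "B"), (50, "C"), (40, "C")]

def classify(marks):
    """1-nearest-neighbour: predict the grade of the closest training example."""
    nearest = min(training_data, key=lambda pair: abs(pair[0] - marks))
    return nearest[1]

print(classify(90))  # A
print(classify(55))  # C
```

Unlike the rule-based sketch earlier, no grade boundaries are written by hand here: the predictions come entirely from the labelled examples, which is what makes this supervised learning.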

Regression: Such models work on continuous data. For example, if you wish to predict your next salary, you would put in the data of your previous salary, any increments, etc., and train the model. Here, the data which has been fed to the machine is continuous.
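The salary example above can be sketched as a simple linear regression, fitting a straight line to past salaries by least squares. The figures are invented purely for illustration:

```python
# Made-up history of yearly salaries (continuous data)
years    = [1, 2, 3, 4, 5]
salaries = [30000, 33000, 36500, 39000, 42500]

n = len(years)
mean_x = sum(years) / n
mean_y = sum(salaries) / n

# Least-squares fit of a straight line: salary = slope * year + intercept
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, salaries))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

# Predict the (continuous) salary for year 6
print(round(slope * 6 + intercept))  # 45500
```

Because the output is a number on a continuous scale rather than a category, this is regression, not classification.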
2. Unsupervised Learning:
● An unsupervised learning model works on an unlabelled dataset.
● This means that the data which is fed to the machine is random, and there is a possibility that the person who is training the model does not have any information regarding it.
● Unsupervised learning models are used to identify relationships, patterns and trends in the data fed into them. They help the user understand what the data is about and what its major features are.
● For example, if you have random data of 1000 dog images and you wish to find some pattern in it, you would feed this data into an unsupervised learning model and train the machine on it. After training, the machine would come up with the patterns which it was able to identify. The machine might come up with patterns which are already known to the user, such as colour, or it might even come up with something very unusual, such as the size of the dogs.

Unsupervised learning models can be further divided into two categories:

● Clustering: Refers to the unsupervised learning algorithm which can cluster unknown data according to the patterns or trends identified in it. The patterns observed might be ones which are known to the developer, or the algorithm might even come up with some unique patterns of its own.
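Clustering can be sketched with a toy k-means loop on one-dimensional data. The dataset, the choice of k = 2 and the naive initialisation are all illustrative assumptions; note that the algorithm is never told which group any point belongs to:

```python
def k_means_1d(points, k=2, iterations=10):
    """A tiny k-means sketch for 1-D data: group points around k centres."""
    centres = points[:k]                      # naive initialisation
    for _ in range(iterations):
        # Assign each point to its nearest centre
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        # Move each centre to the mean of its cluster
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres, clusters

# Two obvious groups of values; the algorithm discovers them without labels
data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centres, clusters = k_means_1d(data, k=2)
print(sorted(round(c, 1) for c in centres))  # [1.0, 9.1]
```

The two centres the loop converges to correspond to the two groups hidden in the data, which is exactly the kind of structure clustering is meant to surface.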

● Dimensionality Reduction: We humans are able to visualise up to 3 dimensions only, but according to many theories and algorithms, there are various entities which exist beyond 3 dimensions. For example, in Natural Language Processing, words are considered to be N-dimensional entities, which means that we cannot visualise them, as they exist beyond our visualisation ability. Hence, to reduce the dimensions and still be able to make sense of the data, we use a dimensionality reduction algorithm.
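As a very crude stand-in for real dimensionality reduction algorithms (such as PCA, which projects the data rather than dropping columns), the sketch below reduces a made-up 3-feature dataset to 1 dimension by keeping only the feature that varies the most:

```python
import statistics

# Made-up dataset: each row has 3 features (3 dimensions)
rows = [(1.0, 50.0, 3.1), (1.1, 20.0, 3.0), (0.9, 80.0, 3.2), (1.0, 10.0, 3.1)]

columns = list(zip(*rows))                     # group the values per feature
variances = [statistics.pvariance(col) for col in columns]
best = variances.index(max(variances))         # the most informative feature

# The reduced, 1-dimensional version of the dataset
reduced = [row[best] for row in rows]
print(best, reduced)
```

The intuition carries over to the real algorithms: directions in which the data barely varies carry little information, so they can be discarded while the overall structure of the data is preserved.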

5. Evaluation:

Once a model has been made and trained, it needs to go through proper testing so that one can calculate the efficiency and performance of the model. Hence, the model is tested with the help of the testing data (which was separated out of the acquired dataset at the Data Acquisition stage) and the efficiency of the model is calculated.
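One simple measure of efficiency is accuracy: the fraction of testing examples the model predicts correctly. The predictions and labels below are invented for illustration:

```python
# Model predictions on held-out testing data vs. the true labels (made up)
predicted = ["apple", "banana", "apple", "apple", "banana", "banana"]
actual    = ["apple", "banana", "banana", "apple", "banana", "apple"]

# Count how many test examples the model got right
correct = sum(p == a for p, a in zip(predicted, actual))
accuracy = correct / len(actual)
print(f"accuracy = {accuracy:.0%}")
```

Accuracy is only one possible metric; depending on the problem, measures such as precision and recall may matter more, but the principle of comparing predictions against held-out testing data is the same.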

Neural Networks:

A neural network is essentially a system of organising machine learning algorithms to perform certain tasks. It is a fast and efficient way to solve problems for which the dataset is very large, such as with images.

Each node of the hidden layers has its own machine learning algorithm which it executes on the data received from the input layer. The processed output is then fed to the subsequent hidden layer of the network. There can be multiple hidden layers in a neural network, and their number depends upon the complexity of the function for which the network has been configured; the number of nodes in each layer can vary accordingly. The last hidden layer passes the final processed data to the output layer, which then gives it to the user as the final output. Similar to the input layer, the output layer does not process the data it acquires; it is meant for the user interface.
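The flow of data from the input layer through hidden nodes to the output can be sketched as a forward pass. The weights, biases and sigmoid activation below are arbitrary illustrative choices; a real network would learn its weights during training:

```python
import math

def sigmoid(x):
    """A common activation function, squashing any value into (0, 1)."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: each node takes a weighted sum of the inputs,
    adds a bias, and applies the activation function."""
    return [sigmoid(sum(w * x for w, x in zip(node_w, inputs)) + b)
            for node_w, b in zip(weights, biases)]

# Made-up weights for a tiny 2-input -> 3-node hidden -> 1-node output network
inputs = [0.5, -1.0]
hidden = layer(inputs, [[0.4, 0.2], [-0.5, 0.9], [0.1, 0.1]], [0.0, 0.1, -0.2])
output = layer(hidden, [[0.3, -0.6, 0.8]], [0.05])
print(output)
```

Each hidden node here plays the role the text describes: it processes what it receives from the previous layer and passes the result onward until the output layer is reached.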

Neural Networks vs the Human Nervous System:

Consider the images of a human neuron and its relation to a neural network. The axon from one neuron sends an impulse to the synapse of another neuron. The impulse received is then sent to the cell body (nucleus) through the dendrites. The cell body performs an activation function on the impulse received and then gives it to the output axon, which passes the same to the next neuron in the system. If we relate this process to an artificial neural network, we can see that the input layer gets data which is passed on to the nodes in the hidden layer. The nodes perform specific actions on the data and pass the processed information to the next layer. In the end, the final processed data reaches the output of the system.
[Figure: a human neuron, showing the axon, synapse, cell body (nucleus) and activation function]
