
XII – Artificial Intelligence

S. Murugan @ Prakasam
PGT Teacher, Computer Science Dept
Class XI Units
➢ Unit 1: Introduction to AI+
➢ Unit 2: AI Applications & Methodologies*
➢ Unit 3: Math for AI+
➢ Unit 4: AI Values (Ethical Decision Making)+
➢ Unit 5: Introduction to Storytelling*
➢ Unit 6: Critical & Creative Thinking*
➢ Unit 7: Data Analysis (Computational Thinking)*+
➢ Unit 8: Regression+
➢ Unit 9: Classification & Clustering+
➢ Unit 10: AI Values (Bias Awareness)*+
Class XII Units
➢ Unit 1: Capstone Project
➢ Unit 2: Model Lifecycle (Knowledge)
➢ Unit 3: Storytelling Through Data (Critical and Creative Thinking - Skills)
Model Life Cycle - (AI Project Life Cycle Model)
Introduction
Let us assume that you have to make a greeting card for your
mother as it is her birthday. You are very excited about it and have
thought of many ideas to execute the same. Let us look at some of the
steps which you might take to accomplish this task:

1. Look for some cool greeting card ideas from different sources. You
might go online and check out some videos, or you may ask someone who
has knowledge about it.
2. After finalizing the design, you would make a list of things that are
required to make this card.
3. You will check if you have the material with you or not. If not, you
could go and get all the items required, ready for use.
4. Once you have everything with you, you would start making the card.
5. If you make a mistake in the card somewhere which cannot be
rectified, you will discard it and start remaking it.
6. Once the greeting card is made, you would gift it to your mother.
Are these steps relatable?
____________________________________________________________________
____________________________________________________________________
Do you think your steps might differ? If so, write them down!
____________________________________________________________________
____________________________________________________________________
If we have to develop an AI project, the AI Project Cycle provides us with an
appropriate framework which can lead us towards the goal. The AI Project
Cycle mainly has 5 stages:
Starting with Problem Scoping, you set the goal for your AI project by stating the
problem which you wish to solve with it. Under problem scoping, we look at various parameters
which affect the problem we wish to solve so that the picture becomes clearer.
To proceed,
➢ You need to acquire data which will become the base of your project, as it will help you
understand the parameters related to the problem you have scoped.

➢ You go for data acquisition by collecting data from various reliable and authentic sources. Since the
data you collect would be in large quantities, you can represent it visually through graphs,
databases, flow charts, maps, etc. This makes it easier for you to interpret the patterns which
your acquired data follows.

➢ After exploring the patterns, you can decide upon the type of model you would build to achieve the
goal. For this, you can research online and select various models which give a suitable output.

➢ You can test the selected models and figure out which is the most efficient one.

➢ The most efficient model is now the base of your AI project and you can develop your algorithm
around it.

➢ Once the modelling is complete, you now need to test your model on some newly fetched data. The
results will help you in evaluating your model and improving it.

➢ Finally, after evaluation, the project cycle is now complete and what you get is your AI project.
Problem Scoping:

➢ Problem Scoping refers to understanding a problem, finding out the various
factors which affect it, and defining the goal or aim of the project.

➢ It is a fact that we are surrounded by problems. They could be small or big,
sometimes ignored or sometimes even critical.

➢ Many times, we become so used to a problem that it becomes a part of our life.

➢ Identifying such a problem and having a vision to solve it is what Problem
Scoping is about.

➢ A lot of times we are unable to observe any problem in our surroundings.

➢ In that case, we can take a look at the Sustainable Development Goals.

➢ 17 goals have been announced by the United Nations which are termed as
the Sustainable Development Goals.

➢ The aim is to achieve these goals by the end of 2030.

➢ A pledge to do so has been taken by all the member nations of the UN.
As you can see, many goals correspond to the problems which we might
observe around us too. One should look for such problems and try to solve them as
this would make many lives better and help our country achieve these goals.
Scoping a problem is not easy, as we need a deeper understanding of it
so that the picture becomes clearer while we are working to solve it.
Hence, we use the 4Ws Problem Canvas to help us out.
Who?
The “Who” block helps in analyzing the people getting affected
directly or indirectly by the problem. Under this, we find out who the
‘Stakeholders’ to this problem are and what we know about them.
Stakeholders are the people who face this problem and would
benefit from the solution.
Here is the Who Canvas:
What?
Under the “What” block, you need to look into what you have on
hand. At this stage, you need to determine the nature of the problem.
What is the problem and how do you know that it is a problem? Under this
block, you also gather evidence to prove that the problem you have selected
actually exists. Newspaper articles, Media, announcements, etc are some
examples.
Here is the What Canvas:
Where?
Now that you know who is associated with the problem and what
the problem actually is; you need to focus on the
context/situation/location of the problem. This block will help you look
into the situation in which the problem arises, the context of it, and the
locations where it is prominent.
Here is the Where Canvas:
Why?
You have finally listed down all the major elements that affect the
problem directly. Now it is clear who would benefit from the solution, what is
to be solved, and where the solution will be deployed. These three canvases
now become the base of why you want to solve this problem. Thus, in the
“Why” canvas, think about the benefits which the stakeholders would get from
the solution and how it would benefit them as well as society.

After filling in the 4Ws Problem Canvas, you now need to summarize all the
cards into one template. The Problem Statement Template helps us summarize all
the key points into one single template so that, whenever there is a need to
look back at the basis of the problem in the future, we can take a look at the
Problem Statement Template and understand its key elements.
Data Acquisition:
➢ Data Acquisition is the process of collecting accurate and reliable
data to work with. Data can be in the format of text, video, images,
audio and so on, and it can be collected from various sources like the
internet, journals, newspapers and so on.

➢ Data Acquisition - As the term clearly mentions, this stage is
about acquiring data for the project. Let us first understand what data is.

➢ Data can be a piece of information or facts and statistics
collected together for reference or analysis. Whenever we want an AI
project to be able to predict an output, we need to train it first using data.

➢ For example, if you want to make an artificially intelligent system
which can predict the salary of any employee based on his previous
salaries, you would feed the data of his previous salaries into the
machine.

➢ This is the data with which the machine can be trained. Now,
once it is ready, it will predict his next salary efficiently.

➢ The previous salary data here is known as Training Data while the
next salary prediction data set is known as the Testing Data.

➢ For better efficiency of an AI project, the Training data needs to be
relevant and authentic.

➢ In the previous example, if the training data was not of the
previous salaries but of his expenses, the machine would not have
predicted his next salary correctly since the whole training went wrong.

➢ Similarly, if the previous salary data was not authentic, that is, it was
not correct, then too the prediction could have gone wrong. Hence…

➢ For any AI project to be efficient, the training data should be
authentic and relevant to the problem statement scoped.
Data Features
Look at your problem statement once again and try to find the data features
required to address this issue. Data features refer to the type of data you want to
collect. In our previous example, data features would be salary amount,
increment percentage, increment period, bonus, etc.
After mentioning the data features, you get to know what sort of data is to
be collected. Now, the question arises: from where can we get this data? There can
be various ways in which you can collect data. Some of them are:
➢ Sometimes, you use the internet and try to acquire data for your
project from some random websites.

➢ Such data might not be authentic as its accuracy cannot be proved. Due
to this, it becomes necessary to find a reliable source of data from where some
authentic information can be taken.

➢ At the same time, we should keep in mind that the data which we
collect is open-sourced and not someone’s property.

➢ Extracting private data can be an offence.

➢ Among the most reliable and authentic sources of information are the open-
sourced websites hosted by the government.

➢ Some of the open-sourced Govt. portals are: data.gov.in, india.gov.in


Data Exploration:

➢ Data exploration refers to the initial step in data analysis, in which data
analysts use data visualization and statistical techniques to describe dataset
characteristics, such as size, quantity, and accuracy, in order to better
understand the nature of the data.

➢ In the previous modules, you have set the goal of your project and have
also found ways to acquire data.

➢ While acquiring data, you must have noticed that the data is a complex
entity – it is full of numbers, and if anyone wants to make some sense out of it,
they have to work out the patterns in it.

➢ For example, if you go to the library and pick up a random book, you
first try to go through its content quickly by turning pages and by reading
the description before borrowing it for yourself, because it helps you in
understanding if the book is appropriate to your needs and interests or not.
Thus, to analyze the data, you need to visualize it in some user-friendly format
so that you can:
➢ Quickly get a sense of the trends, relationships and patterns contained within
the data.
➢ Define strategy for which model to use at a later stage.
➢ Communicate the same to others effectively.
To visualize data, we can use various types of visual representations.
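As a minimal sketch of such a visual representation (using matplotlib, one of many charting libraries; the numbers are made up), a simple line chart can expose a trend at a glance:

import matplotlib.pyplot as plt

# Hypothetical monthly salary data, purely for illustration.
months = [1, 2, 3, 4, 5, 6]
salary = [30000, 30000, 32000, 32000, 35000, 39000]

plt.plot(months, salary, marker="o")   # the line makes the upward trend obvious
plt.xlabel("Month")
plt.ylabel("Salary")
plt.title("Salary trend over six months")
plt.show()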
Are you aware of visual representations of data? Fill them below:
1. Google Charts
➢ Google chart tools are powerful, simple to use, and free. Try out our rich
gallery of interactive charts and data tools.
2. Tableau
➢ Tableau is often regarded as the grand master of data visualization software
and for good reason.
➢ Tableau has a very large customer base of 57,000+ accounts across many
industries due to its simplicity of use and ability to produce interactive
visualizations far beyond those provided by general BI solutions.
3. Fusion Charts
➢ This is a very widely-used, JavaScript-based charting and visualization
package that has established itself as one of the leaders in the paid-for market.
➢ It can produce 90 different chart types and integrates with a large number of
platforms and frameworks giving a great deal of flexibility.
4. Highcharts
➢ A simple options structure allows for deep customization, and styling can
be done via JavaScript or CSS.
➢ Highcharts is also extendable and pluggable for experts seeking advanced
animations and functionality.
Modelling:

➢ Modelling is the process in which different models based on
the visualized data can be created and even checked for their advantages
and disadvantages.

➢ The graphical representation makes the data understandable for
humans, as we can discover trends and patterns out of it.

➢ But when it comes to machines accessing and analyzing data, they
need the data in the most basic form of numbers (which is binary –
0s and 1s), and when it comes to discovering patterns and trends in
data, the machine goes in for mathematical representations of the same.

➢ The ability to mathematically describe the relationship between
parameters is the heart of every AI model. Thus, whenever we talk
about developing AI models, it is the mathematical approach towards
analyzing data which we refer to.
Rule Based Approach
➢ Rule Based Approach refers to AI modelling where the
relationships or patterns in data are defined by the developer.
➢ That means the machine follows and works on the rules and
information given by the developer and performs the task
accordingly.
➢ For example, we have a dataset which tells us about the conditions
on the basis of which we can decide, if an elephant may be spotted or not
while on safari. The parameters are: Outlook, Temperature, Humidity and
Wind.
➢ Now, let’s take various possibilities of these parameters and see in
which case the elephant may be spotted and in which case it may not.
➢ After looking through all the cases, we feed this data into the
machine along with the rules which tell the machine all the possibilities.
The machine trains on this data and is now ready to be tested.
➢ While testing the machine, we tell the machine that Outlook =
Overcast; Temperature = Normal; Humidity = Normal and Wind =
Weak.
➢ On the basis of this testing dataset, now the machine will be able
to tell if the elephant has been spotted before or not and will display
the prediction to us. This is known as a rule-based approach because
we fed the data along with rules to the machine and the machine after
getting trained on them is now able to predict answers for the same.
➢ A drawback/feature of this approach is that the learning is
static. The machine, once trained, does not take into consideration any
changes made in the original training dataset. That is, if you try testing the
machine on a dataset which is different from the rules and data you fed it
at the training stage, the machine will fail and will not learn from its
mistake.
➢ Once trained, the model cannot improve itself on the basis of
feedback. Thus, machine learning gets introduced as an extension to
this, as in that case the machine adapts to changes in data and rules
and follows the updated path, while a rule-based model does only
what it has been taught once.
For example: Suppose you have a dataset containing 100 images each of apples and
bananas. Now you create a machine using computer vision and train it with the
labelled images of apples and bananas. If you test your machine with an image of
an apple, it will give you the output by comparing it with the images in its
dataset. This is known as the Rule Based Approach.
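A minimal sketch of what such a rule-based model looks like in code; the rule table below is hypothetical, and a real project would list every combination observed in the data:

# Developer-defined rules for the elephant-spotting example.
RULES = {
    ("Overcast", "Normal", "Normal", "Weak"): "Elephant spotted",
    ("Sunny", "Hot", "High", "Strong"): "No elephant",
    ("Rain", "Mild", "High", "Weak"): "Elephant spotted",
}

def predict(outlook, temperature, humidity, wind):
    # The machine only follows the rules it was given: an unseen
    # combination cannot be answered, which is the static-learning
    # drawback described above.
    return RULES.get((outlook, temperature, humidity, wind),
                     "No rule for this case")

print(predict("Overcast", "Normal", "Normal", "Weak"))  # Elephant spotted
print(predict("Snow", "Cold", "Low", "Weak"))           # No rule for this case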
Datasets
A dataset is a collection of related sets of information that is composed of
separate elements but can be manipulated by a computer as a unit.
In Rule based Approach we will deal with 2 divisions of dataset:

1. Training Data - the subset required to train the model
2. Testing Data - the subset required while testing the trained model
Training vs Testing Data

What is AI Training?
Teaching a Machine to properly interpret data and learn from it in order to perform a
task with accuracy. Just like with humans, this takes time and patience.
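A minimal sketch of splitting a dataset into these two subsets, using scikit-learn's train_test_split; the 80/20 split shown is a common choice, not a rule:

from sklearn.model_selection import train_test_split

# Toy dataset: 10 samples with one feature each, plus their labels.
features = [[i] for i in range(10)]
labels = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

# 80% of the data trains the model; the held-out 20% tests it.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=42)

print(len(X_train), "training samples,", len(X_test), "testing samples")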
HYPERPARAMETER:

➢ In machine learning, a hyperparameter is a parameter whose value is used to
control the learning process. By contrast, the values of other parameters (typically
node weights) are derived via training.

➢ Hyperparameters can be classified as model hyperparameters, which cannot be
inferred while fitting the machine to the training set because they refer to the model
selection task, or algorithm hyperparameters, which in principle have no influence on
the performance of the model but affect the speed and quality of the learning process.

➢ An example of a model hyperparameter is the topology and size of a neural
network. Examples of algorithm hyperparameters are learning rate and mini-batch
size.

➢ Different model training algorithms require different hyperparameters; some
simple algorithms (such as ordinary least squares regression) require none. Given
these hyperparameters, the training algorithm learns the parameters from the data.

➢ For instance, LASSO is an algorithm that adds a regularization hyperparameter
to ordinary least squares regression, which has to be set before estimating the
parameters through the training algorithm.
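A minimal sketch of the LASSO example: alpha below is the regularization hyperparameter, set by the developer before training, while the coefficient is a parameter learned from the data (the dataset is synthetic):

import numpy as np
from sklearn.linear_model import Lasso

# Synthetic regression data: y is roughly 3*x plus noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 3 * X[:, 0] + rng.normal(0, 1, size=50)

model = Lasso(alpha=0.1)   # alpha: chosen before training, not learned
model.fit(X, y)            # the coefficient is learned from the data

print("learned coefficient:", model.coef_[0])   # close to 3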
Learning Based Approach
Refers to the AI modelling where the machine learns by itself.
Under the Learning Based approach, the AI model gets trained on the
data fed to it and then is able to design a model which is adaptive to the
change in data.
That is, if the model is trained with X type of data and the machine
designs the algorithm around it, the model would modify itself according
to the changes which occur in the data so that all the exceptions are
handled in this case.
For example, suppose you have a dataset comprising 100
images each of apples and bananas. These images depict apples and
bananas in various shapes and sizes. These images are then labelled as
either apple or banana, so that all apple images are labelled ‘apple’ and
all the banana images have ‘banana’ as their label.
Now, the AI model is trained with this dataset and the model is programmed in such
a way that it can distinguish between an apple image and a banana image according to their
features and can predict the label of any image which is fed to it as an apple or a banana. After
training, the machine is now fed with testing data.
Now, the testing data might not have similar images as the ones on
which the model has been trained. So, the model adapts to the features on which
it has been trained and accordingly predicts if the image is of an apple or
banana.
In this way, the machine learns by itself by adapting to the new data which is
flowing in. This is the machine learning approach which introduces the dynamicity in the
model.
Supervised Learning
In a supervised learning model, the dataset which is fed to the machine is labelled.
In other words, we can say that the dataset is known to the person who is training the
machine; only then is he/she able to label the data. A label is some information which can
be used as a tag for data. For example, students get grades according to the marks they
secure in examinations. These grades are labels which categorise the students according
to their marks.
Classification: Where the data is classified according to the labels. For example, in the
grading system, students are classified on the basis of the grades they obtain with respect to
their marks in the examination. This model works on a discrete dataset, which means the
data need not be continuous.
Regression: Such models work on continuous data. For example, if you wish to predict
your next salary, then you would put in the data of your previous salary, any increments,
etc., and would train the model. Here, the data which has been fed to the machine is
continuous.
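A minimal sketch contrasting the two supervised tasks, with made-up marks and salary figures: a classifier predicts a discrete grade label, while a regressor predicts a continuous salary value:

from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier

# Classification: marks (input) mapped to discrete grade labels.
marks = [[35], [48], [62], [75], [88], [95]]
grades = ["C", "C", "B", "B", "A", "A"]
classifier = DecisionTreeClassifier().fit(marks, grades)
print(classifier.predict([[80]]))   # a discrete label, e.g. ['A']

# Regression: years of experience mapped to a continuous salary.
years = [[1], [2], [3], [4], [5]]
salary = [30000, 33000, 36500, 40000, 44000]
regressor = LinearRegression().fit(years, salary)
print(regressor.predict([[6]]))     # a continuous value, about 47200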
Unsupervised Learning
An unsupervised learning model works on an unlabelled dataset. This means that
the data fed to the machine is random, and there is a possibility that the person
training the model does not have any information regarding it.
The unsupervised learning models are used to identify relationships, patterns
and trends out of the data which is fed into them. They help the user understand what
the data is about and what major features the machine has identified in it.
For example, if you have random data of 1000 dog images and you wish to
find some pattern in it, you would feed this data into the unsupervised
learning model and would train the machine on it.
After training, the machine would come up with the patterns which it was able to
identify out of it. The machine might come up with patterns which are already known to the
user, like color, or it might even come up with something very unusual, like the size of the dogs.
Clustering: Refers to the unsupervised learning algorithm which can cluster the unknown
data according to the patterns or trends identified out of it. The patterns observed might
be the ones which are known to the developer or it might even come up with some unique
patterns out of it.
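A minimal sketch of clustering with k-means (one common clustering algorithm; the points are made up): no labels are supplied, and the algorithm groups the points purely from the patterns it finds:

from sklearn.cluster import KMeans

# Unlabelled 2-D points forming two loose groups; no labels are given.
points = [[1, 1], [1, 2], [2, 1],
          [8, 8], [8, 9], [9, 8]]

# Ask k-means to discover 2 clusters from the patterns alone.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)   # e.g. [0 0 0 1 1 1]: the cluster of each point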
Dimensionality Reduction: We humans are able to visualize up to 3-Dimensions only but
according to a lot of theories and algorithms, there are various entities which exist beyond 3-
Dimensions.
For example, in Natural Language Processing, words are considered to be N-
dimensional entities, which means that we cannot visualize them as they exist
beyond our visualization ability. Hence, to make sense out of them, we need to
reduce their dimensions. Here, a dimensionality reduction algorithm is used.
As we reduce the dimension of an entity, the information which it contains starts
getting distorted.
For example, if we have a ball in our hand, it is 3-Dimensions right now. But if we
click its picture, the data transforms to 2-D as an image is a 2-Dimensional entity.
Now, as soon as we reduce one dimension, at least 50% of the information is lost, as
we will no longer know about the back of the ball.
Was the ball the same color at the back or not? Or was it just a hemisphere?
If we reduce the dimensions further, more and more information will get lost.
Hence, to reduce the dimensions and still be able to make sense out of the data, we
use Dimensionality Reduction.
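A minimal sketch of dimensionality reduction using PCA (one standard algorithm for it; the points are random): 3-D data is projected down to 2-D, like photographing the ball, keeping as much of the original variation as possible:

import numpy as np
from sklearn.decomposition import PCA

# 3-D data, analogous to the ball in your hand.
rng = np.random.default_rng(1)
points_3d = rng.normal(size=(100, 3))

pca = PCA(n_components=2)                # project 3-D down to 2-D
points_2d = pca.fit_transform(points_3d)

print(points_2d.shape)                      # (100, 2): one dimension dropped
print(pca.explained_variance_ratio_.sum())  # fraction of information kept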
Reinforcement Learning:

➢ In this type of learning, the system works on a reward or penalty policy.

➢ Here an agent performs an action, positive or negative, in the environment,
which is taken as input by the system; the system then changes the state of the
environment, and the agent is provided with a reward or a penalty.

➢ The system also builds a policy: what action should be taken under a specific
condition.

➢ Example: A very good example of this is a vending machine.
Suppose you put a coin (action) into a juice vending machine (environment).
The system detects the amount of the coin given (state), and you get the drink
corresponding to the amount (reward); if the coin is damaged or there is any
other problem, you get nothing (penalty).

➢ Here the machine is building a policy about which drink should be provided under
what condition, and how to handle an error in the environment.
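A minimal toy sketch of this reward-or-penalty idea, in the spirit of the vending machine example; the "policy" here is just a value table the agent updates from the rewards it receives, and the environment and reward values are entirely hypothetical:

import random

def environment(action):
    # Reward (+1) for a valid coin, penalty (-1) for anything else.
    return 1 if action == "insert_valid_coin" else -1

actions = ["insert_valid_coin", "insert_damaged_coin"]
policy = {a: 0.0 for a in actions}   # the agent's value estimate per action

for _ in range(100):
    action = random.choice(actions)      # try both actions (exploration)
    reward = environment(action)         # environment returns reward/penalty
    policy[action] += 0.1 * (reward - policy[action])  # learn from feedback

print("learned policy prefers:", max(policy, key=policy.get))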
Deploy:
➢ Deployment is the method by which you integrate a machine
learning model into an existing production environment to make practical
business decisions based on data.
➢ It is one of the last stages in the machine learning life cycle and can
be one of the most cumbersome.
➢ To deploy a model, you create a model resource in AI Platform
Prediction, create a version of that model, then link the model version to
the model file stored in Cloud Storage.
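Deployment steps differ from platform to platform; as a minimal, platform-neutral sketch, a trained scikit-learn model can be serialized with joblib and reloaded inside a production service (the file name is arbitrary):

import joblib
from sklearn.linear_model import LinearRegression

# Train a tiny model (a stand-in for the real modelling stage).
model = LinearRegression().fit([[1], [2], [3]], [10, 20, 30])

# Persist the model file; a platform such as AI Platform Prediction
# would link a model version to an artifact like this one.
joblib.dump(model, "model.joblib")

# Inside the production service: load the artifact and serve predictions.
served_model = joblib.load("model.joblib")
print(served_model.predict([[4]]))   # [40.]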

Retrain:
➢ Retraining simply refers to re-running the process that
generated the previously selected model on a new training set of data.
➢ The features, model algorithm, and hyperparameter search space
should all remain the same.
Evaluation
➢ Evaluation is the method of understanding the reliability of an AI model,
based on the outputs received by feeding data into the model and comparing
them with the actual answers.
➢ Once a model has been made and trained, it needs to go through proper testing
so that one can calculate the efficiency and performance of the model.
Testing: Its techniques include the process of executing a program or
application with the intent of finding failures, and verifying that the model (product) is
fit for use.
Validation: The act of confirming something as true or correct: the new
method is very promising but requires validation through further testing. Validation is
the process of establishing documentary evidence demonstrating that a procedure,
process, or activity carried out in testing and then production maintains the desired level
of compliance at all stages.
➢ Hence, the model is tested with the help of Testing Data (which was separated
out of the acquired dataset at Data Acquisition stage) and the efficiency of the
model is calculated on the basis of the parameters mentioned below:
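Common evaluation parameters include accuracy, precision, recall and F1 score; here is a minimal sketch computing them with scikit-learn (the outputs below are made up):

from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

# Actual answers from the testing data vs. the model's outputs.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))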
Model Life Cycle

➢ The machine learning life cycle is the cyclical process that
AI or machine learning projects follow.

➢ It defines each step that an engineer or developer should follow.

➢ Generally, every AI project lifecycle encompasses three
main stages: Project Scoping, Design or Build Phase, and
Deployment in Production.
AI Model Life Cycle

➢ Understanding the AI Project Cycle is a crucial element for students to
have a robust and sound AI foundation.

➢ In order to implement successful AI projects, students need to adopt
a comprehensive approach covering each step of the AI or machine
learning life cycle - starting from project scoping and data prep, and going
through all the stages of model building, deployment, management, and
analytics, up to full-blown Enterprise AI.

➢ Generally, every AI project lifecycle encompasses three main stages:
Project Scoping, Design or Build Phase, and Deployment in
Production. Let's go over each of them and the key steps and factors to
consider when implementing them.
Step 1: Scoping (Requirements Analysis)
➢ The first fundamental step when starting an AI initiative is
scoping and selecting the relevant use case(s) that the AI model will be
built to address.
➢ This is arguably the most important part of your AI project. Why?
There are a couple of reasons for it. First, this stage involves the planning
and motivational aspects of your project. It is important to start strong if
you want your artificial intelligence project to be successful.
➢ There's a great phrase that characterizes this project stage: garbage
in, garbage out. This means if the data you collect is no good, you won't be
able to build an effective AI algorithm, and your whole project will
collapse.
➢ In this phase, it's crucial to precisely define the strategic business
objectives and desired outcomes of the project, align all the
different stakeholders' expectations, anticipate the key resources and
steps, and define the success metrics.
➢ Selecting the AI or machine learning use cases and being able to
evaluate the return on investment (ROI) is critical to the success of any
data project.
Step 2: Design/Building the Model

➢ Once the relevant projects have been selected and properly scoped, the next step of
the machine learning lifecycle is the Design or Build phase, which can take from a
few days to multiple months, depending on the nature of the project.

➢ The Design phase is essentially an iterative process comprising all the steps
relevant to building the AI or machine learning model: data acquisition,
exploration, preparation, cleaning, feature engineering, testing and running a
set of models to try to predict behaviors or discover insights in the data.

➢ Enabling all the different people involved in the AI project to have the appropriate
access to data, tools, and processes in order to collaborate across different stages of
the model building is critical to its success.

➢ Another key success factor to consider is model validation: how will you determine,
measure, and evaluate the performance of each iteration with regards to the defined
ROI objective?
During this phase, you need to evaluate the various AI development platforms,
e.g.:
➢ Open Languages — Python is the most popular, with R and Scala also in the mix.
➢ Open Frameworks — Scikit-learn, XGBoost, TensorFlow, etc.
➢ Approaches and Techniques — Classic ML techniques from regression all the
way to state-of-the-art GANs and RL
➢ Productivity-Enhancing Capabilities — Visual modelling, Auto AI to help with
feature engineering, algorithm selection and hyperparameter optimization
➢ Development Tools — DataRobot, H2O, Watson Studio, Azure ML Studio,
SageMaker, Anaconda, etc.
Different AI development platforms offer extensive documentation to help the
development teams. Depending on your choice of the AI platform, you need to visit
the appropriate webpages for this documentation, which are as follows:
➢ Microsoft Azure AI Platform;
➢ Google Cloud AI Platform;
➢ IBM Watson Developer platform;
➢ BigML;
➢ Infosys Nia resources.
Step 3: Testing
While the fundamental testing concepts are fully applicable in AI development
projects, there are additional considerations too. These are as follows:

➢ The volume of test data can be large, which presents complexities.


➢ Human biases in selecting test data can adversely impact the testing phase,
therefore, data validation is important.
➢ Your testing team should test the AI and ML algorithms keeping model validation,
successful learnability, and algorithm effectiveness in mind.
➢ Regulatory compliance testing and security testing are important since the
system might deal with sensitive data; moreover, the large volume of data
makes performance testing crucial.
➢ You are implementing an AI solution that will need to use data from your other
systems, therefore, systems integration testing assumes importance.
➢ Test data should include all relevant subsets of training data, i.e., the data
you will use for training the AI system.
➢ Your team must create test suites that help you validate your ML models.
Upcoming Slides are Just for your General
Understanding
Commonly Used Platforms to Build and Run Models
Top “No-Code” Machine Learning Platforms You Should Use:
Create ML:
➢ Create ML is an independent macOS application that comes with a bunch of pre-
trained model templates.
➢ Using transfer learning, it lets you build your own custom models. From image
classifiers to style transfer to natural language processing to recommendation
systems, it has almost every suite covered. All you need to do is pass the training and
validation data in the required formats.
➢ Moreover, you can fine-tune the metrics and set your own iteration count before
starting the training. Create ML provides real-time results on the validation data for
models such as style transfer. In the end, it’ll generate a Core ML model that you can
test and deploy in your iOS applications.
Google Auto ML:
➢ Google’s Cloud AutoML currently includes Vision (image classification), Natural
Language, AutoML Translation, Video Intelligence, and Tables in its suite of machine
learning products.
➢ This enables developers with limited machine learning expertise to train models
specific to their use cases. AutoML on the cloud removes the need to know transfer
learning or how to create a neural network by providing out of the box support for
thoroughly tested deep learning models.
➢ Once the model training is finished, you can test and export the model in .pb,
.tflite, Core ML, and other formats.
Make ML:
➢ Make ML is a developer tool used for creating object detection and semantic
segmentation models without code.
➢ It provides a macOS app for iOS developers to create and manage datasets (such as
performing object annotations in images). Interestingly, they also have a dataset store
with some free computer vision datasets to train a neural network in just a few clicks.
➢ Make ML has shown its potential in sports-based applications wherein you could
do ball tracking. Also, they have end-to-end tutorials for training nail and potato
segmentation models, which should give any non-machine-learning developer a good
head start. Using their built-in annotation tool that works on videos, you can build a
Hawk-Eye-style detector of the kind used in cricket and tennis games.
Fritz AI
➢ Fritz AI is a growing machine learning platform that helps bridge the gap between
mobile developers and data scientists.
➢ Fritz AI Studio lets you quickly turn ideas into production-ready apps by providing
data annotation tools and synthetic data to generate datasets in a seamless fashion.
➢ Besides introducing support for Style Transfer before Apple, Fritz AI’s machine
learning platform also provides solutions for model retraining, analytics, easy
deployment, and protection from attackers.
Runway ML
➢ It provides a delightful visual interface to quickly train models ranging from text and
image generation (GANs) to motion capture, object detection, etc., without the need to
write or think in code.
➢ Runway ML lets you browse a range of models ranging from super-resolution
images to background removal and style transfer.
➢ While exporting models from their application isn’t free of cost, a designer can
always leverage the power of their pre-trained generative adversarial networks to
synthesize new images from their prototypes. Their Generative Engine that
synthesizes images as you type sentences is one of the highlights.
Obviously AI
➢ Obviously AI uses state-of-the-art natural language processing to perform complex
tasks on user-defined CSV data. The idea is to upload the dataset, pick the prediction
column, enter questions in natural language, and evaluate the results.
➢ The platform trains the machine learning model by choosing the right algorithm for
you. So, with just a few clicks, you can get a prediction report, be it for forecasting
revenue or predicting inventory demand. This is incredibly useful for small and
medium-sized businesses looking to get a foot into the field of artificial intelligence
without having an in-house data science team.
➢ Obviously AI lets you integrate data from other sources as well, such as MySQL,
Salesforce, RedShift, etc. So, without knowing how linear regression or text
classification works, you can leverage their platform to run predictive analysis on your data.
Super Annotate:
➢ Super Annotate is an AI-powered annotation platform that uses machine learning
capabilities (specifically transfer learning) to boost your data annotation process.
By using their image and video annotation tools, you can quickly annotate data
with the help of built-in predictive models.
➢ So, generating datasets for object detection and image segmentation will get a whole
lot easier and faster. Super Annotate also handles duplicate data annotation, which
is common in video frames.
Teachable Machine
➢ Teachable Machine lets you quickly train models to recognize images, sounds,
and poses right from your browser. You can simply drag and drop files to teach
your model, or use the webcam to create a quick-and-dirty dataset of images or
sounds. Teachable Machine uses the Tensorflow.js library in your browser and
ensures that your training data stays on the device.
➢ It is a big step by Google for people who want to practice machine learning without
any coding knowledge. The final model can be exported in Tensorflow.js or tflite
formats, which can then be used in your websites or apps. You can also convert the
model into different formats using ONNX. A simple image classification model can
be trained in less than a minute.
Top 10 AI Platforms
Google:
➢ Platform: Google Cloud AI.
Description:
➢ Google AI Platform allows for the creation of applications that run on both the Google
Cloud Platform and on-premises. It targets machine learning developers, data
scientists and data engineers with an easier route from the ideas to the production
stage, thanks to its flexibility and support for other Google platforms such as
Kubeflow.
➢ With native support for other Google AI products such as TensorFlow, Google’s
solution promises an end-to-end approach, with everything from preparing data to
validation and deployment contained under one umbrella.
Infosys:
➢ Platform: Infosys Nia
Description:
➢ Infosys Nia is offered by EdgeVerve, a subsidiary of the Indian IT multinational
Infosys. The product supports the end-to-end enterprise AI journey, from data
management and digitization of documents and images to model development and
operationalizing models.
➢ One of its specializations is the automatic digitizing of documents in order to
unlock the data contained within. EdgeVerve itself is a leader in robotic and
intelligent process automation via its AssistEdge platform.
Google Brain Team:
➢ Platform: TensorFlow.
Description:
➢ TensorFlow is a machine learning platform developed by Google and later
released on an open source basis. It makes clear its end-to-end nature, facilitating
all stages of machine learning from model building with high-level APIs,
deployment whether on the cloud, on-premises, in a browser or device and taking
ideas from the conceptual to the code level thanks to its flexible architecture.
➢ The platform includes different libraries for its various deployment settings, with
a lightweight version for mobile and IoT deployments.
DataRobot:
➢ Platform: DataRobot.
Description:
➢ The DataRobot enterprise AI platform accelerates and democratizes data
science by automating the end-to-end journey from data to value. This allows you
to deploy trusted AI applications at scale within your organization.
➢ DataRobot provides a centrally governed platform that gives you the power
of AI to drive better business outcomes and is available on your cloud platform
of choice, on-premises, or as a fully managed service.
Amazon:
➢ Platform: Amazon AI services
Description:
➢ Amazon emphasizes the accessibility of its services, and the potential to add AI to
applications without any machine learning skills required.
➢ Amazon touts the capabilities of its advanced machine learning in fields such as
video analysis, natural language, virtual assistants and more to enable businesses
to get the same level of insight via AI that Amazon itself does.
Microsoft:
➢ Platform: Microsoft Azure AI
Description:
➢ Microsoft’s AI platform integrates with its Azure cloud product, which it says is
suitable for mission-critical solutions. Enabling features such as image analytics,
speech comprehension and prediction, Microsoft’s solution claims to be useful for
all developers, from data scientists to app developers and machine learning
engineers.
➢ Part of its offering is based around an ethical and responsible approach to AI, with
systems to mitigate bias as well as ensure confidentiality and compliance.
H2O.ai:
➢ Platform: H2O.ai.
Description:
➢ H2O.ai describes its mission as being the democratization of AI and machine
learning for everyone. With an open source platform, the company claims to be
used by hundreds of thousands of data scientists in over 20,000 organizations
across the world, in industries such as financial services, healthcare, retail and
insurance.
➢ The Mountain View, California-based business has raised over $150mn since its
2012 foundation, with its latest Series D in 2019 raising $72.5mn.
IBM:
➢ Platform: IBM Watson Studio.
Description:
➢ Operating on any cloud system, IBM’s Watson Studio allows for the building and
training of AI models. It is one of the core services of IBM Cloud Pak for Data, a
multi cloud data and AI platform.
➢ Together with IBM Watson® Machine Learning and IBM Watson® OpenScale™,
Watson Studio provides tools for data scientists, application developers and
subject matter experts to collaborate and easily work with data to build, run and
manage models at scale.
Wipro Holmes:
➢ Platform: Wipro Holmes AI and automation platform.
Description:
➢ Wipro is a leading global information technology, consulting and business process
services company, harnessing the power of cognitive computing, hyper-automation,
robotics, cloud, analytics, and emerging technologies to help clients adapt to the
digital world and succeed.
➢ The Wipro Holmes AI and automation platform promises to cover all aspects of
deploying an AI solution, from building to publishing, metering, governing and
monetizing, and is offered on a software-as-a-service (SaaS) basis. Among its features
are digital virtual agents and process automation, as well as support for robotics and
drones.
Salesforce:
➢ Platform: Salesforce Einstein
Description:
➢ Founded in 1999 by internet entrepreneur Marc Benioff, Salesforce is a leader in the
customer relationship management space.
➢ Salesforce Einstein was specifically built for Salesforce’s CRM solution and fills the
platform with AI capabilities to enable possibilities such as identifying patterns and
trends in customer data. That in turn enables companies to better understand their
customers in order to deliver more personalised forms of customer service.
Some Real-Time AI Applications
1.https://magenta.tensorflow.org/assets/sketch_rnn_demo/multi_predict.html/
2. https://experiments.withgoogle.com/ai/giorgio-cam/view/
3. https://www.autodraw.com/
4. https://billie.withyoutube.com/
5. https://lipsync.withyoutube.com/
6. https://cocodataset.org/#explore/
7. https://classifier.appinventor.mit.edu/oldpic/
8. https://teachablemachine.withgoogle.com/
9. https://www.weebly.com/in/
10. https://www.ibm.com/in-en/cloud/watson-assistant/
11. https://www.cleverbot.com/
12. https://chat.kuki.ai/createaccount/
13. https://www.storytellingwithdata.com/letspractice/downloads/

FREE - AI PLATFORM to Build APK - 1. http://appinventor.mit.edu/


THANK YOU
