
MASTER OF COMPUTER APPLICATIONS

Subject Name : Artificial Intelligence
Subject Code : MCC1854
Year / Sem : II / III
Batch : 2023 – 2025
UNIT I - INTRODUCTION

Introduction–Definition - Future of Artificial Intelligence – Characteristics of Intelligent


Agents– Typical Intelligent Agents – Problem Solving Approach to Typical AI problems.

INTRODUCTION

AI is one of the most fascinating and universal fields of computer science, and it has great scope in the future. AI aims to make a machine work like a human. Artificial Intelligence exists when a machine can exhibit human-like skills such as learning, reasoning, and solving problems.

With Artificial Intelligence, you do not need to pre-program a machine for every task; instead, you can create a machine with programmed algorithms that can work with its own intelligence, and that is the power of AI.

Goals of Artificial Intelligence

Following are the main goals of Artificial Intelligence:

1. Replicate human intelligence


2. Solve Knowledge-intensive tasks
3. An intelligent connection of perception and action
4. Building a machine which can perform tasks that require human intelligence such as:
o Proving a theorem
o Playing chess
o Plan some surgical operation
o Driving a car in traffic
5. Creating a system which can exhibit intelligent behavior, learn new things by itself, demonstrate, explain, and advise its user.

APPLICATIONS OF ARTIFICIAL INTELLIGENCE

 Gaming − AI plays a crucial role in strategic games such as chess, poker, and tic-tac-toe, where the machine can evaluate a large number of possible positions based on heuristic knowledge.
 Natural Language Processing − It is possible to interact with a computer that understands the natural language spoken by humans.
 Expert Systems − These are applications that integrate machines, software, and specialized information to provide reasoning and advice. They give explanations and advice to their users.
 Vision Systems − These systems understand, interpret, and comprehend visual input on the computer. For example,
o A spying aeroplane takes photographs, which are used to work out spatial information or a map of the area.
o Doctors use a clinical expert system to diagnose the patient.
o Police use computer software that can recognize the face of a criminal using the stored portrait made by a forensic artist.
 Speech Recognition − Some intelligent systems are capable of hearing and comprehending language in terms of sentences and their meanings while a human talks to them. They can handle different accents, slang words, background noise, changes in a human's voice due to a cold, and so on.
 Handwriting Recognition − Handwriting recognition software reads the text written on paper with a pen or on a screen with a stylus. It can recognize the shapes of the letters and convert them into editable text.
 Intelligent Robots − Robots are able to perform tasks given by a human. They have sensors to detect physical data from the real world such as light, heat, temperature, movement, sound, bumps, and pressure. They have efficient processors, multiple sensors, and large memory, to exhibit intelligence. In addition, they are capable of learning from their mistakes and can adapt to a new environment.
 Healthcare - Healthcare industries are applying AI to make better and faster diagnoses than humans. AI can help doctors with diagnoses and can warn when patients are worsening so that medical help can reach the patient before hospitalization.
 Finance - The AI and finance industries are a perfect match for each other. The finance industry is implementing automation, chatbots, adaptive intelligence, algorithmic trading, and machine learning in financial processes.

 Data Security - The security of data is crucial for every company, and cyber-attacks are growing rapidly in the digital world. AI can be used to make data safer and more secure. Tools such as the AEG bot and the AI2 Platform are used to detect software bugs and cyber-attacks more effectively.
 Social Media - Social media sites such as Facebook, Twitter, and Snapchat contain billions of user profiles, which need to be stored and managed very efficiently. AI can organize and manage massive amounts of data, and it can analyze that data to identify the latest trends, hashtags, and the requirements of different users.
 Travel and Transport - AI is in high demand in the travel industry. It can perform various travel-related tasks, from making travel arrangements to suggesting hotels, flights, and the best routes to customers. Travel companies are using AI-powered chatbots which can hold human-like interactions with customers for better and faster responses.
 Automotive industry - Some automotive companies are using AI to provide virtual assistants to their users for better performance. For example, Tesla has introduced TeslaBot, an intelligent virtual assistant. Various companies are currently working on developing self-driving cars, which can make journeys safer and more secure.
 Robotics - Artificial Intelligence has a remarkable role in robotics. Usually, general robots are programmed to perform some repetitive task, but with the help of AI we can create intelligent robots which can perform tasks from their own experience without being pre-programmed. Humanoid robots are the best examples of AI in robotics; recently the intelligent humanoid robots Erica and Sophia have been developed, which can talk and behave like humans.
 Entertainment - We already use some AI-based applications in our daily life through entertainment services such as Netflix or Amazon. With the help of ML/AI algorithms, these services show recommendations for programs or shows.
 Agriculture - Agriculture is an area which requires various resources, labor, money, and time for the best result. Nowadays agriculture is becoming digital, and AI is emerging in this field. Agriculture is applying AI through agricultural robotics, soil and crop monitoring, and predictive analysis. AI in agriculture can be very helpful for farmers.
 E-commerce - AI is providing a competitive edge to the e-commerce industry, and it is becoming more and more sought after in the e-commerce business. AI helps shoppers discover associated products in their recommended size, color, or even brand.

 Education - AI can automate grading so that the tutor has more time to teach. An AI chatbot can communicate with students as a teaching assistant. In the future, AI can work as a personal virtual tutor for students, easily accessible at any time and any place.

FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

Artificial Intelligence is not just a part of computer science; it is vast and requires contributions from many other fields. To create AI, we first need to know how intelligence is composed. Intelligence is an intangible property of our brain which is a combination of reasoning, learning, problem solving, perception, language understanding, and so on.

To achieve these capabilities in a machine or software, Artificial Intelligence requires the following disciplines:

o Mathematics
o Biology
o Psychology
o Sociology
o Computer Science
o Neuroscience (the study of neurons)
o Statistics

Four Approaches:

Historically, there are four approaches that are followed in Artificial Intelligence. The
four approaches are :

 Acting humanly
 Acting rationally
 Thinking humanly
 Thinking rationally

Thinking Humanly : the cognitive modeling approach
Thinking Rationally : the "laws of thought" approach
Acting Humanly : the Turing Test approach
Acting Rationally : the rational agent approach

Acting Humanly : The Turing Test Approach
 Alan Turing proposed a simple method of determining whether a machine can demonstrate human intelligence: if a machine can engage in a conversation with a human without being detected as a machine, it has demonstrated human intelligence.
 The Turing Test judges the conversational skills of the machine against those of a human. According to this test, a computer program must be able to think of a proper response to the human. The test matches the conversational input against existing data through an algorithm and responds back to the human.

In the "standard interpretation" of the Turing Test, player C, the interrogator, is given the task of trying to determine which player – A or B – is a computer and which is a human. The interrogator is limited to using the responses to written questions to make the determination.

As you can imagine, this is quite a difficult task for the respondent machine. There are a lot of

things going on during a conversation. At the very minimum, the machine needs to be well

versed with the following things:

 Natural Language Processing: The machine needs this to communicate with the

interrogator. The machine needs to parse the sentence, extract the context, and give an

appropriate answer.

 Knowledge Representation: The machine needs to store the information provided

before the interrogation. It also needs to keep track of the information being provided
during the conversation so that it can respond appropriately if it comes up again.

 Reasoning: It’s important for the machine to understand how to interpret the information

that gets stored. Humans tend to do this automatically to draw conclusions in real time.

 Machine Learning: This is needed so that the machine can adapt to new conditions in

real time. The machine needs to analyze and detect patterns so that it can draw inferences.

Advantages of the Turing Test in Artificial Intelligence:


1. Evaluating machine intelligence: The Turing Test provides a simple and well-known
method for evaluating the intelligence of a machine.
2. Setting a benchmark: The Turing Test sets a benchmark for artificial intelligence
research and provides a goal for researchers to strive towards.
3. Inspiring research: The Turing Test has inspired numerous studies and experiments
aimed at developing machines that can pass the test, which has driven progress in the
field of artificial intelligence.
4. Simple to administer: The Turing Test is relatively simple to administer and can be
carried out with just a computer and a human judge.

Disadvantages of the Turing Test in Artificial Intelligence:


1. Limited scope: The Turing Test is limited in scope, focusing primarily on language-
based conversations and not taking into account other important aspects of intelligence,
such as perception, problem-solving, and decision-making.
2. Human bias: The results of the Turing Test can be influenced by the biases and
preferences of the human judge, making it difficult to obtain objective and reliable
results.
3. Not representative of real-world AI: The Turing Test may not be representative of the
kind of intelligence that machines need to demonstrate in real-world applications.

Thinking humanly: The cognitive modeling approach

 Once we gather enough data, we can create a model to simulate the human process.
This model can be used to create software that can think like humans. If the program
behaves in a way that matches human behavior, then we can say that humans have a
similar thinking mechanism.
 If a computer program's input–output and timing behaviours match the corresponding human behaviours, we can say that some of the program's mechanisms could also be operating in humans.

 The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology to construct precise and testable theories of the workings of the human mind.
 For a program to think like a human, we must have some way of determining how humans actually think.
 There are two ways to get at the actual workings of human minds:
o Through introspection (trying to catch one's own thoughts)
o Through psychological experiments

Thinking rationally: The “laws of thought” approach

 Rationality refers to doing the right thing in a given circumstance.

 The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is, irrefutable reasoning processes. His syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises.

 For example,

Socrates is a man;

All men are mortal;

Therefore, Socrates is mortal.

These laws of thought were supposed to govern the operation of the mind; their study

initiated the field called logic which can be implemented to create intelligent systems

There are two main obstacles to this approach.


 It is not easy to take informal knowledge and state it in the formal terms required by

logical notation, particularly when the knowledge is less than 100% certain.

 There is a big difference between solving a problem "in principle" and solving it in

practice. Even problems with just a few hundred facts can exhaust the computational

resources of any computer unless it has some guidance as to which reasoning steps to

try first.

Acting rationally: The rational agent approach

 An agent is just something that acts. Of course, all computer programs do something,

but computer agents are expected to do more: operate autonomously, perceive their

environment, persist over a prolonged time period, adapt to change, and create and

pursue goals.

 A rational agent acts in such a way that there is maximum benefit to

the entity performing the action. An agent is said to act rationally if, given a set of

rules, it takes actions to achieve its goals. It just perceives and acts according to the

information that’s available. This system is used a lot in AI to design robots when they

are sent to navigate unknown terrains.

 All the skills needed for the Turing Test also allow an agent to act rationally.

Knowledge representation and reasoning enable agents to reach good decisions. We

need to be able to generate comprehensible sentences in natural language to get by in a

complex society.

Definition
Artificial Intelligence may be defined as "a branch of computer science by which we can create intelligent machines which can behave like humans, think like humans, and are able to make decisions."

TYPES OF ARTIFICIAL INTELLIGENCE BASED ON CAPABILITIES


 Strong AI
 Weak AI

Weak AI

Weak AI, often known as narrow AI, is a category of artificial intelligence confined to
a singular or limited domain. Weak AI mimics human thought processes. By performing
time-consuming operations and conducting data analysis through methods that people can’t
always use, this technology can be advantageous to society.

It concentrates on tasks such as responding to user input searches or playing games.


Human intervention is needed to specify the learning algorithm's parameters and supply the appropriate training data to ensure correctness.

It cannot break the rules; it just adheres to them and is constrained by them. Weak AI
helps convert enormous amounts of data into useful information by identifying patterns and
generating predictions.

Example : Deep Blue, Smart Assistants like Siri or Alexa, Chatbots, Email Spam Filters,
Self-Driving Cars, GPS and navigation apps like Google Maps, Recommendation systems
like Amazon, Spotify, Netflix

Strong AI

Strong AI, also known as artificial general intelligence (AGI) or deep AI, is a
computer system with a comprehensive intellect capable of learning and employing its
intellectual ability to solve any problem.

It can understand, work, and have a thought process different from humans in certain situations. Strong AI uses a theory-of-mind AI framework to understand the goals, motivations, standards, and cognitive processes that govern other intelligent beings.

Strong AI, on the other hand, is capable of learning, thinking and adapting like
humans do. That said, strong AI systems don’t actually exist yet.

Examples of strong AI appear in sci-fi movies such as WALL-E, Big Hero 6, The Terminator, and Vision from Marvel. Some of the areas where strong AI would be helpful:

 Cyber Security
 Robots with high intellect
 Integration of strong AI in IoT (Internet of Things)
 Language translation machines
 Image recognition systems

Difference Between Strong AI and Weak AI


Strong AI | Weak AI
Performs intelligent human-level activities | Limited to performing specific tasks
Has the ability to learn, think, and perform new activities like humans | Programmed for a fixed function
Possesses creativity, common sense, and logic like humans | Has no consciousness or awareness of its own
Has the goal of solving problems at a faster pace | Has the goal of completing a task with creative and accurate solutions
There are no real examples of strong AI because it is a hypothetical concept; fictional examples are WALL-E and Big Hero 6 | Examples of weak AI include Alexa, Siri, Google Assistant, GPT-4, and MuZero

HISTORY OF ARTIFICIAL INTELLIGENCE

o Year 1950: Alan Turing, an English mathematician, pioneered machine learning in 1950. Turing published "Computing Machinery and Intelligence," in which he proposed a test to check a machine's ability to exhibit intelligent behaviour equivalent to human intelligence, called the Turing test.
o Year 1951: Marvin Minsky and Dean Edmonds created the initial artificial neural
network (ANN) named SNARC. They utilized 3,000 vacuum tubes to mimic a
network of 40 neurons.
o Year 1952: Arthur Samuel pioneered the creation of the Samuel Checkers-Playing
Program, which marked the world's first self-learning program for playing games.
o Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program," which was named "Logic Theorist." This program proved 38 of 52 mathematics theorems and found new and more elegant proofs for some theorems.
o Year 1956: The term "Artificial Intelligence" was first adopted by the American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined as an academic field.
o Year 1958: During this period, Frank Rosenblatt introduced the perceptron, one of
the early artificial neural networks with the ability to learn from data. This invention
laid the foundation for modern neural networks. Simultaneously, John McCarthy
developed the Lisp programming language, which swiftly found favor within the AI
community, becoming highly popular among developers.

o Year 1959: Arthur Samuel is credited with introducing the phrase "machine learning"
in a pivotal paper in which he proposed that computers could be programmed to
surpass their creators in performance. Additionally, Oliver Selfridge made a notable
contribution to machine learning with his publication "Pandemonium: A Paradigm for
Learning." This work outlined a model capable of self-improvement, enabling it to
discover patterns in events more effectively.
o Year 1964: During his time as a doctoral candidate at MIT, Daniel Bobrow created
STUDENT, one of the early programs for natural language processing (NLP), with
the specific purpose of solving algebra word problems.
o Year 1965: The initial expert system, Dendral, was devised by Edward Feigenbaum,
Bruce G. Buchanan, Joshua Lederberg, and Carl Djerassi. It aided organic chemists in
identifying unfamiliar organic compounds.
o Year 1966: The researchers emphasized developing algorithms that can solve
mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, which
was named ELIZA. Furthermore, Stanford Research Institute created Shakey, the
earliest mobile intelligent robot incorporating AI, computer vision, navigation, and
NLP. It can be considered a precursor to today's self-driving cars and drones.
o Year 1968: Terry Winograd developed SHRDLU, which was the pioneering
multimodal AI capable of following user instructions to manipulate and reason within
a world of blocks.
o Year 1969: Arthur Bryson and Yu-Chi Ho outlined a learning algorithm known as
backpropagation, which enabled the development of multilayer artificial neural
networks. This represented a significant advancement beyond the perceptron and laid
the groundwork for deep learning. Additionally, Marvin Minsky and Seymour Papert
authored the book "Perceptrons," which elucidated the constraints of basic neural
networks. This publication led to a decline in neural network research and a
resurgence in symbolic AI research.
o Year 1972: The first intelligent humanoid robot was built in Japan, which was named
WABOT-1.
o Year 1973: James Lighthill published the report titled "Artificial Intelligence: A
General Survey," resulting in a substantial reduction in the British government's
backing for AI research.
o Year 1980: After AI's winter duration, AI came back with an "Expert System".
Expert systems were programmed to emulate the decision-making ability of a human
expert. Additionally, Symbolics Lisp machines were brought into commercial use,

marking the onset of an AI resurgence. However, in subsequent years, the Lisp
machine market experienced a significant downturn.
o Year 1981: Danny Hillis created parallel computers tailored for AI and various
computational functions, featuring an architecture akin to contemporary GPUs.
o Year 1984: Marvin Minsky and Roger Schank introduced the phrase "AI winter"
during a gathering of the Association for the Advancement of Artificial Intelligence.
They cautioned the business world that exaggerated expectations about AI would
result in disillusionment and the eventual downfall of the industry, which indeed
occurred three years later.
o Year 1985: Judea Pearl introduced Bayesian network causal analysis, presenting
statistical methods for encoding uncertainty in computer systems.
o Year 1997: In 1997, IBM's Deep Blue achieved a historic milestone by defeating
world chess champion Garry Kasparov, marking the first time a computer triumphed
over a reigning world chess champion. Moreover, Sepp Hochreiter and Jürgen
Schmidhuber introduced the Long Short-Term Memory recurrent neural network,
revolutionizing the capability to process entire sequences of data such as speech or
video.
o Year 2002: For the first time, AI entered the home in the form of Roomba, a vacuum cleaner.
o Year 2006: AI entered the business world. Companies like Facebook, Twitter, and Netflix also started using AI.
o Year 2009: Rajat Raina, Anand Madhavan, and Andrew Ng released the paper titled
"Utilizing Graphics Processors for Extensive Deep Unsupervised Learning,"
introducing the concept of employing GPUs for the training of expansive neural
networks.
o Year 2011: Jürgen Schmidhuber, Ueli Meier, and Jonathan Masci created the initial
CNN that attained "superhuman" performance by emerging as the victor in the
German Traffic Sign Recognition competition. Furthermore, Apple launched Siri, a
voice-activated personal assistant capable of generating responses and executing
actions in response to voice commands.
o Year 2011: In 2011, IBM's Watson won Jeopardy, a quiz show where it had to solve
complex questions as well as riddles. Watson had proved that it could understand
natural language and can solve tricky questions quickly.
o Year 2012: Google launched an Android app feature, "Google Now", which was able
to provide information to the user as a prediction. Further, Geoffrey Hinton, Ilya

Sutskever, and Alex Krizhevsky presented a deep CNN structure that emerged
victorious in the ImageNet challenge, sparking the proliferation of research and
application in the field of deep learning.
o Year 2013: China's Tianhe-2 system achieved a remarkable feat by doubling the
speed of the world's leading supercomputers to reach 33.86 petaflops. It retained its
status as the world's fastest system for the third consecutive time. Furthermore,
DeepMind unveiled deep reinforcement learning, a CNN that acquired skills through
repetitive learning and rewards, ultimately surpassing human experts in playing
games. Also, Google researcher Tomas Mikolov and his team introduced Word2vec, a
tool designed to automatically discern the semantic connections among words.
o Year 2014: In the year 2014, the chatbot "Eugene Goostman" won a competition based on the famous "Turing test." Meanwhile, Ian Goodfellow and his team pioneered generative
adversarial networks (GANs), a type of machine learning framework employed for
producing images, altering pictures, and crafting deepfakes, and Diederik Kingma and
Max Welling introduced variational autoencoders (VAEs) for generating images,
videos, and text. Also, Facebook engineered the DeepFace deep learning facial
recognition system, capable of identifying human faces in digital images with
accuracy nearly comparable to human capabilities.
o Year 2016: DeepMind's AlphaGo secured victory over the esteemed Go player Lee
Sedol in Seoul, South Korea, prompting reminiscence of the Kasparov chess match
against Deep Blue nearly two decades earlier. Meanwhile, Uber initiated a pilot program
for self-driving cars in Pittsburgh, catering to a limited group of users.
o Year 2018: The "Project Debater" from IBM debated on complex topics with two
master debaters and also performed extremely well.
o Also in 2018, Google demonstrated an AI program, "Duplex," a virtual assistant that took a hairdresser appointment over a phone call, and the person on the other side did not notice that she was talking to a machine.
o Year 2021: OpenAI unveiled the Dall-E multimodal AI system, capable of producing
images based on textual prompts.
o Year 2022: In November, OpenAI launched ChatGPT, offering a chat-oriented
interface to its GPT-3.5 LLM.

FUTURE OF ARTIFICIAL INTELLIGENCE


 Improved Business Automation

With the rise of chatbots and digital assistants, companies can rely on AI to handle simple
conversations with customers and answer basic queries from employees.

AI’s ability to analyze massive amounts of data and convert its findings into convenient
visual formats can also accelerate the decision-making process. Company leaders don’t have
to spend time parsing through the data themselves, instead using instant insights to make
informed decisions.

 Job disruption

Workers in more skilled or creative positions are more likely to have their jobs augmented by
AI, rather than be replaced. Whether forcing employees to learn new tools or taking over
their roles, AI is set to spur upskilling efforts at both the individual and company level.

 Data Privacy Issues

Companies require large volumes of data to train the models that power generative AI tools,
and this process has come under intense scrutiny. Concerns over companies collecting
consumers’ personal data have led the Federal Trade Commission to open an investigation
into whether OpenAI has negatively impacted consumers through its data collection methods
after the company potentially violated European data protection laws.

 Increased regulation

AI could shift the perspective on certain legal questions, depending on how generative AI
lawsuits unfold in 2024. For example, the issue of intellectual property has come to the
forefront in light of copyright lawsuits filed against OpenAI by writers, musicians and
companies like The New York Times. These lawsuits affect how the U.S. legal system
interprets what is private and public property, and a loss could spell major setbacks for
OpenAI and its competitors.

 Climate Change Concerns

On a far grander scale, AI is poised to have a major effect on sustainability, climate change
and environmental issues. Optimists can view AI as a way to make supply chains more
efficient, carrying out predictive maintenance and other procedures to reduce carbon
emissions.

INTELLIGENT AGENTS
In artificial intelligence, an agent is a computer program or system that is designed to
perceive its environment, make decisions and take actions to achieve a specific goal or set of
goals. The agent operates autonomously, meaning it is not directly controlled by a human
operator.

An AI system can be defined as the study of the rational agent and its environment.
The agents sense the environment through sensors and act on their environment through
actuators. An AI agent can have mental properties such as knowledge, belief, intention, etc.

AGENT

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors. An agent runs in a cycle of perceiving, thinking, and acting.

 Sensor: Sensor is a device which detects the change in the environment and sends the
information to other electronic devices. An agent observes its environment through
sensors.
 Actuators: Actuators are the component of machines that converts energy into motion.
The actuators are only responsible for moving and controlling a system. An actuator
can be an electric motor, gears, rails, etc.
 Effectors: Effectors are the devices which affect the environment. Effectors can be
legs, wheels, arms, fingers, wings, fins, and display screen.

Examples :

 Human-Agent: A human agent has eyes, ears, and other organs which work as sensors, and hands, legs, and the vocal tract which work as actuators.
 Robotic Agent: A robotic agent can have cameras, infrared range finder, NLP for
sensors and various motors for actuators.
 Software Agent: Software agent can have keystrokes, file contents as sensory input
and act on those inputs and display output on the screen.

INTELLIGENT AGENTS

An intelligent agent is an autonomous entity which acts upon an environment using

sensors and actuators to achieve its goals. An intelligent agent may learn from the environment to achieve its goals. A thermostat is an example of an intelligent agent.

Following are the main four rules for an AI agent:

 Rule 1: An AI agent must have the ability to perceive the environment.


 Rule 2: The observation must be used to make decisions.
 Rule 3: Decision should result in an action.
 Rule 4: The action taken by an AI agent must be a rational action.

AI TERMINOLOGY

 Percept: the agent’s perceptual inputs at any given instant


 Percept sequence: the complete history of everything the agent has perceived
 Agent function: maps any given percept sequence to an action [F : P* → A]
 The agent program runs on the physical architecture to produce f

THE STRUCTURE OF AGENT

The job of AI is to design an agent program that implements the agent function—
the mapping from percepts to actions. We assume this program will run on some sort of
computing device with physical sensors and actuators—we call this the architecture:
Agent = architecture + program

Architecture: Architecture is machinery that an AI agent executes on.

Agent Function

The agent function takes the entire percept history and maps it to an action:
F : P* → A
Agent Program

The agent program takes just the current percept as input because nothing more is
available from the environment; if the agent’s actions need to depend on the entire percept
sequence, the agent will have to remember the percepts. Agent program is an implementation
of agent function. An agent program executes on the physical architecture to produce
function F.
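To make the distinction concrete, the following is a minimal Python sketch of a table-driven agent (the function names, the percept encoding as (location, status) pairs, and the table entries are illustrative assumptions, not any standard library API): the table plays the role of the agent function F : P* → A, while the small closure that appends percepts and looks them up plays the role of the agent program.

# A minimal table-driven agent sketch (illustrative only).
# The table maps a *percept sequence* (a tuple of percepts) to an action,
# playing the role of the agent function F : P* -> A.

def make_table_driven_agent(table):
    percepts = []                        # remembered percept history

    def agent_program(percept):          # the agent program sees one percept at a time
        percepts.append(percept)
        # Look up the whole history; fall back to a no-op if unknown.
        return table.get(tuple(percepts), "NoOp")

    return agent_program

# Example table entries for a tiny two-square vacuum world (assumed encoding):
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Dirty"), ("A", "Clean")): "Right",
    (("A", "Dirty"), ("A", "Clean"), ("B", "Dirty")): "Suck",
}

agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))   # Suck
print(agent(("A", "Clean")))   # Right
print(agent(("B", "Dirty")))   # Suck

The obvious drawback, as the next example shows, is that the table grows with every possible percept history, which is why practical agent programs compute the action instead of looking it up.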

EXAMPLE : A VACUUM CLEANER – SIMPLE REFLEX AGENT

To illustrate the above concepts, we use a very simple example—the vacuum-cleaner world shown in the figure.

This particular world has just two locations: squares A and B. The vacuum agent
perceives which square it is in and whether there is dirt in the square. It can choose to move
left, move right, suck up the dirt, or do nothing. One very simple agent function is the
following: if the current square is dirty, then suck; otherwise, move to the other square

Agent Program :

function REFLEX-VACUUM-AGENT([location,status]) returns an action

if status = Dirty then return Suck

else if location = A then return Right

else if location = B then return Left
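A direct, runnable Python translation of this agent program might look as follows (a minimal sketch; the percept is assumed to be a (location, status) pair exactly as in the pseudocode above):

def reflex_vacuum_agent(percept):
    # Simple reflex agent for the two-square vacuum world.
    # percept is assumed to be a (location, status) pair, e.g. ("A", "Dirty").
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(reflex_vacuum_agent(("A", "Clean")))   # Right
print(reflex_vacuum_agent(("B", "Clean")))   # Left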

Rule:

 "If the current location is dirty, then suck." "If the current location is clean, then move
to the next location." This simple reflex agent operates based on a direct mapping
between percept and action.
 It doesn't have memory or the ability to learn from past experiences. Its decision-
making is determined solely by the immediate state of the environment.
 While this basic model may work for scenarios with a limited and predictable
environment (e.g., a small room with known dirt locations), more sophisticated AI
techniques, such as model-based or learning-based approaches, might be necessary for
complex environments where the state is dynamic and uncertain. Additional variables
and parameters are required for larger and more complex scenarios.

Case:

Room A and Room B are dirty, and initially the agent is inside Room A. There are 2 rooms, so we have 4 combinations of dirty and clean, and therefore 8 possible states for the agent (4 dirt configurations × 2 agent locations).

Scenario and States

The following figure shows the various possible states or workings of a vacuum cleaner.

SIMPLE EXAMPLE FOR TABULATION OF AGENT

Agent – a shopping agent on the internet, called a bot

Tabulation of percepts and action mapping

Sequence of Percepts -> Actions
Type URL of greeting site mygreeting.com -> Display website
Navigation and observation of greetings to be purchased -> Click on the link
To get details of greeting (which is purchased), in terms of a form -> Form filling
To perceive completion of process -> Receiving receipt or bill

WEAK AND STRONG AGENT

Weak Agent
A weak notion says that an agent is a hardware or software based computer system
that has the following properties :

 Autonomy – agents operate without direct intervention of humans and have control
over their actions and internal state
 Social ability – agents interact with other agents (and possibly humans) via an agent communication language
 Reactivity - agents perceive their environment and respond in timely and rational
fashion to changes that occur in it
 Pro-activeness – agents do not simply act in response to their environment, they are
capable of taking the initiative, generate their own goals and act to achieve them

Strong Agent
A strong notion says that an agent has mental properties, such as knowledge, belief,
intention and obligation. In addition, an agent has other properties such as :

 Mobility – agents can move around from one machine to another and across different
system architectures and platforms
 Veracity – agents do not knowingly communicate false information
 Rationality – agents will try to achieve their goals and will not act in a way that would prevent their goals from being achieved

RATIONAL AND OMNISCIENCE BEHAVIOR

Rationality

The rationality of an agent is measured by its performance measure. Rationality can


be judged on the basis of following points:

 Performance measure which defines the success criterion.
 Agent prior knowledge of its environment.
 Best possible actions that an agent can perform.
 The sequence of percepts.

Rational Agent

A rational agent is an agent which has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions. A rational agent is said to do the right thing. AI is about creating rational agents for use in game theory and decision theory for various real-world scenarios. For an AI agent, the rational action is most important because in an AI reinforcement learning algorithm, for each best possible action the agent gets a positive reward, and for each wrong action the agent gets a negative reward.

Example: Let's say we have a self-driving car that uses sensors to perceive its environment
and makes decisions on how to navigate safely. The car doesn't have complete knowledge of
every aspect of the road, but it uses the available information (such as traffic lights, signs, and
other vehicles) to make rational decisions about acceleration, braking, and steering.

Omniscience

 An omniscient (perfect) agent knows the actual outcome of its actions and can act accordingly, but in reality omniscience and perfection are impossible.
 Rationality is not the same as perfection. Rationality maximizes expected performance, whereas perfection maximizes actual performance.
 To increase performance, an agent must do actions in order to modify its future percepts.
 This is called information gathering, which is an important part of rationality. An agent should also explore (understand) its environment to increase performance, i.e. in order to take more correct actions.
 Learning is another important activity an agent should do so as to gather information. In certain cases an agent may know the environment completely (which is practically not possible), but if the environment is not known the agent needs to learn on its own.
 To the extent that an agent relies on the prior knowledge of its designer rather than on
its own percepts, we say that agent lacks autonomy. A rational agent should be

autonomous - it should learn what it can do to compensate for partial or incorrect
prior knowledge.
Example: Imagine a chess-playing computer program that knows the position and
possible moves of every piece on the board at all times, as well as all future moves and
outcomes. This program would be considered an omniscient agent because it has
complete knowledge of the chess game.

Good and Bad Agent


The concept of rational behaviour leads to two types of agents: the good agent and the bad agent. Most of the time, the good or bad behaviour (that is, the performance) of the agent depends completely on the environment.
 If the environment is completely known, then we get the agent's good behaviour
 If the environment is unknown, then the agent can act badly

TASK ENVIRONMENT

A task environment is essentially a problem to which an agent is the solution. The range of task environments that might arise in AI is obviously vast. We can, however, identify a fairly small number of dimensions along which task environments can be categorized. These dimensions determine, to a large extent, the appropriate agent design and the applicability of each of the principal families of techniques for agent implementation.

TYPES OF TASK ENVIRONMENT

 Fully observable vs Partially Observable


 Static vs Dynamic
 Discrete vs Continuous
 Deterministic vs Stochastic
 Single-agent vs Multi-agent
 Episodic vs sequential

Fully Observable Vs Partially Observable


 If an agent's sensors give it access to the complete state of the environment at each point in time, then the environment is fully observable.
 If there is noise, the agent has inaccurate sensors, or some states of the environment are missing from the percepts, then the environment is partially observable.
Example-

Fully Observable
The puzzle game environment is fully observable: the agent can see all the aspects surrounding it, that is, all the squares of the puzzle along with the values (if any) added in them.
More examples -
1) Image analysis. 2) Tic-tac-toe.
Partially Observable
The poker game environment is partially observable. Poker is a card game with betting rules and usually (but not always) hand rankings. In this game the agent is not able to perceive the other players' betting intentions. The agent also cannot see the other players' cards; it has to play with reference to its own cards and the current betting knowledge.
More examples -
1) Interactive science tutor.
2) Military planning.

Deterministic Vs Stochastic
 If, from the current state of the environment and the action, the agent can deduce the next state of the environment, then the environment is deterministic; otherwise it is stochastic.
 If the environment is deterministic except for the actions of other agents, we say that the environment is strategic.
Examples -
Deterministic: In image analysis, whatever the current percept of the image is, the agent can take the next action or process the remaining part of the image based on current knowledge. Finally it can produce all the detailed aspects of the image.
Strategic: An agent playing the tic-tac-toe game is in a strategic environment, since from the current state the agent decides the next state and action, except for the actions of the other agents.
More examples -
1) Video analysis.
2) Trading agent.

Stochastic: A boat-driving agent is in a stochastic environment, since the next move cannot be deduced from the current state alone. The agent has to keep the goal in view and take action based on all current and previous percepts.
More examples-
1) Car driving.
2) Robot firing in crowd.

Episodic Vs Sequential
 In an episodic environment the agent's experience is divided into atomic episodes, such that each episode consists of the agent perceiving and then performing a single action. In this environment the choice of action depends only on the episode itself; previous episodes do not affect the current action.
 In a sequential environment, on the other hand, the current decision could affect all future decisions.
 Episodic environments are much simpler than sequential environments because the agent does not need to think ahead.
Example -
Episodic Environment: An agent finding the defective part of an assembled computer. Here the agent will inspect the current part and take an action which does not depend on previous decisions (previously checked parts).
More Examples -
1) Blood testing for a patient.
2) Card games.
Sequential Environment: A game of chess is a sequential environment, where the agent takes action based on all previous decisions.
More examples -
1) Chess with a clock.
2) Refinery controller.

Static Vs Dynamic
 If the environment can change while agent is deliberating then we say the
environment is dynamic for the agent, otherwise it is static.
 Static environments are easy to tackle as agent need not worry about changes around
(as it will not change) while taking actions.
 Dynamic environments keep changing continuously, which requires the agent to be more attentive while deciding how to act.

 If the environment itself does not change with time but the agent's performance does,
then we say that environment is semidynamic.

Examples -
Static: In a crossword puzzle game, the environment (that is, the values held in the squares) can change only by the action of the agent.
More examples -
1) 8 queen puzzle
2) Semidynamic.
Dynamic: An agent driving a boat is in a dynamic environment, because the environment can change (a big wave can come, it can become more windy) without any action of the agent.
More examples -
1) Car driving
2) Tutor.

Discrete Vs Continuous
 In a discrete environment the environment has a fixed, finite set of discrete states over time, and each state has associated percepts and actions.
 A continuous environment is not stable at any given point in time and keeps changing, thereby requiring the agent to learn continuously so as to make decisions.
Example:
Discrete: A game of tic-tac-toe depicts a discrete environment where every state is stable, has an associated percept, and is the outcome of some action.
More examples -
1) 8 - queen puzzle
2) Crossword puzzle.
Continuous: A boat-driving environment is continuous: the state changes are continuous, and the agent needs to perceive continuously.
More examples -
1) Part Picking Robot
2) Flight Controller.

Single Agent Vs Multiagent


 In a single-agent environment we have a well-defined single agent which takes decisions and acts.
 In a multiagent environment there can be various agents, or groups of agents, which work together to take decisions and act. In a multiagent environment we can have a competitive multiagent environment, in which many agents work in parallel to maximize their individual performance, or a cooperative multiagent environment, in which all agents have a single goal and work to achieve high performance for all of them together.
Example:
 Multiagent independent environment
o Many agents in a game of Maze.
 Multiagent cooperative environment
o Fantasy football. [Here many agents work together to achieve the same goal.]
 Multiagent competitive environment
o Trading agents. [Here many agents are working, but in opposition to each other.]
 Multiagent antagonistic environment
o Wargames. [Here multiple agents work against each other, but one side (agent/agent team) has a negative goal.]
 Single agent environment
o Boat driving. [Here a single agent perceives and acts.]

Example for Different types of Environment


Task environment | Fully vs Partially Observable | Deterministic vs Stochastic | Episodic vs Sequential | Static vs Dynamic | Discrete vs Continuous | Single vs Multi Agent
Brushing Your Teeth | Fully | Stochastic | Sequential | Static | Continuous | Single
Playing Chess | Partially | Stochastic | Sequential | Dynamic | Continuous | Multi-Agent
Playing Cards | Partially | Stochastic | Sequential | Dynamic | Continuous | Multi-Agent
Playing | Partially | Stochastic | Sequential | Dynamic | Continuous | Multi-Agent
Autonomous Vehicles | Fully | Stochastic | Sequential | Dynamic | Continuous | Multi-Agent
Order in Restaurant | Fully | Deterministic | Episodic | Static | Discrete | Single
Crossword puzzle | Fully | Deterministic | Sequential | Static | Discrete | Single
Chess with a clock | Fully | Deterministic | Sequential | Semi | Discrete | Multi-Agent
Poker | Partially | Stochastic | Sequential | Static | Discrete | Multi-Agent
Backgammon | Fully | Stochastic | Sequential | Static | Discrete | Multi-Agent
Taxi driving | Partially | Stochastic | Sequential | Dynamic | Continuous | Multi-Agent
Medical diagnosis | Partially | Stochastic | Sequential | Dynamic | Continuous | Single
Image analysis | Fully | Deterministic | Episodic | Semi | Continuous | Single
Part-picking robot | Partially | Stochastic | Episodic | Dynamic | Continuous | Single
Refinery controller | Partially | Stochastic | Sequential | Dynamic | Continuous | Single
Interactive English tutor | Partially | Stochastic | Sequential | Dynamic | Discrete | Multi-Agent

Complexity comparison of task environment

Following is the rising order of complexity of the various task environments.

Low Rising order of Complexity High


Observable -----> Partially Observable
Deterministic -----> Stochastic
Episodic -----> Sequential
Static -----> Dynamic
Discrete -----> Continuous
Single -----> Multi Agents

More Types of Task Environment


Based on specific problem domains we can further classify task environments as follows.

1) Monitoring and Surveillance Environment
Example: Agent monitoring incoming people at some gathering where only authorized
people are allowed.
2) Time Constrained Environment
Example: Chess with a clock environment where the move should be done in specified
amount of time.
3) Decision Making Environment
Example: An executive agent that monitors the profit of an organization and can help top-level management take decisions.
4) Process Based Environment
Example: An image-processing agent that takes input and synthesizes it to produce the required output and details about the image.
5) Personal or User Environment
Example: A small-scale agent which can be used as a personal assistant that helps to remember daily tasks, gives notifications about work, etc.
6) Buying Environment
Example: An online book-shopping bot (agent) that buys books online as per user requirements.
7) Automated Task Environment
Example: A chocolate manufacturing firm such as Cadbury can use an agent that automates the complete procedure of chocolate making.
8) Industrial Task Environment
Example: An agent developed to produce the architecture of a building or the layout of a building.
9) Learning Task Environment (Educational)
Example: We can have an agent that learns some act or some theories presented to it and later plays them back, which will be helpful for others to learn that act or those theories.
10) Problem Solving Environment
Example: We can have an agent that solves different types of problems from mathematics or statistics, or any general-purpose problem like the travelling salesman problem.
11) Scientific and Engineering Task Environment
Example: An agent doing scientific calculations for aeronautical purposes, or an agent developed to design road maps or overbridge structures.

12) Biological Task Environment
Example: An agent working on the design of some chemical component helpful for medicine.
13) Space Task Environment
Example: An agent that works in space, observing the space environment and recording details about it.
14) Research Task Environment
Example: An agent working in a research lab where it is made to grasp (learn) knowledge, represent it, and draw conclusions from it, which helps researchers in further study.
15) Network Task Environment
Example: An agent developed to automatically carry data over a computer network based on certain conditions like a time limit or data size limit (the same type of agent can be developed for physically transferring items or mail over the same network).
16) Repository Task Environment
Example: If a data repository is to be maintained, then an agent can be developed to arrange data based on criteria which will be helpful for searching later on.

DIFFERENT TYPES OF AGENTS


 Intelligent agent
 Simple Reflex Agent
 Model-based reflex agent
 Goal-based agents
 Utility-based agent
 Learning agent

Intelligent agent

 An intelligent agent is an intelligent actor that observes and acts upon an environment
 The intelligent agent is the magnum opus of AI

Characteristics of Intelligent Agent (IA)


1) The IA must learn and improve through interaction with its environment.
2) The IA must adapt online and in real time.
3) The IA must learn quickly from large amounts of data.
4) The IA must accommodate new problem-solving rules incrementally.
5) The IA must have memory which exhibits storage and retrieval capacities.
6) The IA should be able to analyse itself in terms of behaviour, errors, and successes.

Simple Reflex Agent

 The Simple reflex agents are the simplest agents. These agents take decisions on the
basis of the current percepts and ignore the rest of the percept history.
 These agents only succeed in the fully observable environment.
 The Simple reflex agent does not consider any part of percepts history during their
decision and action process.
 The Simple reflex agent works on Condition-action rule, which means it maps the
current state to action. Such as a Room Cleaner agent, it works only if there is dirt in
the room.

Schematic Diagram of the Agent

Property
1) These agents are very simple, but their intelligence is limited.
2) They will work only if the correct decision can be made on the basis of the current percept alone - that is, only if the environment is fully observable.
3) A little bit of unobservability can cause serious trouble.
4) If a simple reflex agent works in a partially observable environment, it can get stuck in infinite loops.
5) Infinite loops can be avoided if the simple reflex agent can try out possible actions, i.e. can randomize its actions.
6) A randomized simple reflex agent will perform better than a deterministic reflex agent.

Example:
In an ATM agent system, if the PIN matches the given account number, then the customer gets money.
Procedure: SIMPLE - REFLEX - AGENT
Input: Percept
Output: An action.
Static: Rules, a set of condition - action rules.

Agent Program

function SIMPLE-REFLEX-AGENT(percept) returns an action

persistent: rules, a set of condition–action rules

state←INTERPRET-INPUT(percept)

rule←RULE-MATCH(state, rules)

action ←rule.ACTION

return action
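The following is a minimal Python sketch of this generic program (the rule representation as (condition, action) pairs and the ATM-style rules are illustrative assumptions): each rule pairs a condition, implemented as a predicate over the interpreted state, with an action, and the first matching rule fires.

# Generic simple reflex agent: condition-action rules over the current percept only.
# The rule representation and the ATM-style example below are illustrative.

def interpret_input(percept):
    # In this toy example the percept is already a state description (a dict).
    return percept

def rule_match(state, rules):
    for condition, action in rules:
        if condition(state):
            return action
    return "NoOp"

def simple_reflex_agent(percept, rules):
    state = interpret_input(percept)
    return rule_match(state, rules)

# Hypothetical ATM-style condition-action rules:
rules = [
    (lambda s: s["pin_ok"] and s["balance"] >= s["amount"], "DispenseCash"),
    (lambda s: not s["pin_ok"], "RejectCard"),
    (lambda s: True, "ShowError"),          # default rule
]

print(simple_reflex_agent({"pin_ok": True, "balance": 500, "amount": 100}, rules))   # DispenseCash
print(simple_reflex_agent({"pin_ok": False, "balance": 500, "amount": 100}, rules))  # RejectCard

Note that the agent consults only the current percept; nothing about previous transactions is remembered, which is exactly the limitation the model-based agent below addresses.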

Problems for the simple reflex agent design approach:

 They have very limited intelligence


 They do not have knowledge of non-perceptual parts of the current state
 Mostly too big to generate and to store.
 Not adaptive to changes in the environment.

Advantages of Simple Reflex Agent


 Easy to design and implement, requiring minimal computational resources
 Real-time responses to environmental changes
 Highly reliable in situations where the sensors providing input are accurate, and the
rules are well designed
 No need for extensive training or sophisticated hardware

Limitation of Simple Reflex Agent
 Prone to errors if the input sensors are faulty or the rules are poorly designed
 Have no memory or state, which limits their range of applicability
 Unable to handle partial observability or changes in the environment they have not
been explicitly programmed for
 Limited to a specific set of actions and cannot adapt to new situations

Model based Agent

The internal state of the agent stores the current state of the environment, which describes the part of the world that is currently unseen, i.e. how the world evolves and the effect of the agent's own actions. In other words, the agent stores a model of the possibilities around it; hence it is called a model-based reflex agent.

A model based agent has two important factors :

 Model: It is knowledge about "how things happen in the world," so it is called a


Model based agent.
 Internal State: It is a representation of the current state based on percept history.
o These agents have the model, "which is knowledge of the world" and based on
the model they perform actions.
o Updating the agent state requires information about:
o How the world evolves
o How the agent's action affects the world.

Schematic Diagram

Property:
1) It has the ability to handle partially observable environments.
2) Its internal state is updated continuously, which can be shown as:
Old internal state + Current percept = Updated state.
For example:
A car-driving agent which maintains its own internal state and then takes action as the environment appears to it.
Amazon Bedrock is a service that uses foundational models to simulate operations,
gain insights, and make informed decisions for effective planning and optimization.

Procedure: REFLEX-AGENT-WITH-STATE

Input: Percept.

Output: An action.

Static: State, a description of the current world state; rules, a set of condition-action rules; action, the most recent action, initially none.

Agent Program

function MODEL-BASED-REFLEX-AGENT(percept) returns an action

persistent: state, the agent’s current conception of the world state

model , a description of how the next state depends on current state and action

rules, a set of condition–action rules

action, the most recent action, initially none

state←UPDATE-STATE(state, action, percept ,model )

rule←RULE-MATCH(state, rules)

action ←rule.ACTION

return action
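A minimal Python sketch of a model-based reflex agent for the two-square vacuum world (the state representation and the simple transition model are illustrative assumptions): the agent keeps an internal record of which squares it believes are clean, updates it from the last action and the current percept, and only then matches a rule.

# Model-based reflex agent sketch for the two-square vacuum world (illustrative).
# The internal state records which squares the agent believes are clean.

class ModelBasedVacuumAgent:
    def __init__(self):
        self.state = {"A": "Unknown", "B": "Unknown"}   # believed status of each square
        self.last_action = None

    def update_state(self, percept):
        location, status = percept
        # Incorporate the current percept into the internal model.
        self.state[location] = status
        # Simple transition model: if the last action was Suck, this square is now clean.
        if self.last_action == "Suck":
            self.state[location] = "Clean"

    def program(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == "Dirty":
            action = "Suck"
        elif all(v == "Clean" for v in self.state.values()):
            action = "NoOp"                              # model says everything is clean
        else:
            action = "Right" if location == "A" else "Left"
        self.last_action = action
        return action

agent = ModelBasedVacuumAgent()
print(agent.program(("A", "Dirty")))   # Suck
print(agent.program(("A", "Clean")))   # Right (B still unknown in the model)
print(agent.program(("B", "Clean")))   # NoOp  (model: both squares clean)

Unlike the simple reflex agent, this agent can stop working once its model says the whole world is clean, even though no single percept tells it that directly.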

Advantages of Model based Agent


 Quick and efficient decision-making based on their understanding of the world

 Better equipped to make accurate decisions by constructing an internal model of the
world
 Adaptability to changes in the environment by updating their internal models
 More informed and strategic choices by using its internal state and rules to determine
the condition

Disadvantages of Model based Agent


 Building and maintaining models can be computationally expensive
 The models may not capture the real-world environment’s complexity very well
 Models cannot anticipate all potential situations that may arise
 Models need to be updated often to stay current
 Models may pose challenges in terms of interpretation and comprehension

Goal Based Agent

 Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
 The agent needs to know its goal which describes desirable situations.
 Goal-based agents expand the capabilities of the model-based agent by having the
"goal" information.
 They choose an action, so that they can achieve the goal.
 These agents may have to consider a long sequence of possible actions before
deciding whether the goal is achieved or not. Such considerations of different scenario
are called searching and planning, which makes an agent proactive.

Schematic Diagram

Property
1) Goal based agent works simply towards achieving goal.
2) For tricky goals it needs searching and planning.
3) They are dynamic in nature because the goal information is described in a proper and explicit manner.
4) We can quickly change a goal-based agent's behaviour for a new/unknown goal.
For example:
An agent searching for a solution to the 8-queens puzzle.
Google bard a goal-based agent, it has a goal or objective to provide high-quality responses to
user queries. It chooses its actions that are likely to assist users in finding the information
they seek and achieving their desired goal of obtaining accurate and helpful responses.
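A minimal sketch of a goal-based agent in Python is shown below. It plans a sequence of actions to reach its goal state by searching a small state-space graph; the map, the action names, and the use of breadth-first search are illustrative assumptions.

from collections import deque

# Minimal goal-based agent sketch: plan a sequence of actions to the goal state.
class GoalBasedAgent:
    def __init__(self, transitions, goal):
        self.transitions = transitions   # state -> list of (action, next_state)
        self.goal = goal                 # the desirable situation to reach

    def plan(self, start):
        # Breadth-first search over sequences of actions until the goal test succeeds.
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            state, actions = frontier.popleft()
            if state == self.goal:                     # goal test
                return actions
            for action, nxt in self.transitions.get(state, []):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, actions + [action]))
        return None                                     # goal unreachable

# Tiny assumed world: rooms connected by doors.
world = {
    "Hall":    [("go-kitchen", "Kitchen"), ("go-study", "Study")],
    "Kitchen": [("go-hall", "Hall")],
    "Study":   [("go-bedroom", "Bedroom")],
}
agent = GoalBasedAgent(world, goal="Bedroom")
print(agent.plan("Hall"))   # -> ['go-study', 'go-bedroom']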

Advantages of Goal based agent


 Simple to implement and understand
 Efficient for achieving a specific goal
 Easy to evaluate performance based on goal completion
 It can be combined with other AI techniques to create more advanced agents
 Well-suited for well-defined, structured environments

 It can be used for various applications, such as robotics, game AI, and autonomous
vehicles.

Disadvantages of goal based agent


 Limited to a specific goal
 Unable to adapt to changing environments
 Ineffective for complex tasks that have too many variables
 Requires significant domain knowledge to define goals

Utility based Agent

In a complex environment, goals alone are not enough for agent design. In addition to goals, we can have a utility function.
 These agents are similar to the goal-based agent but provide an extra component of utility measurement, which makes them different by providing a measure of success at a given state.
 A utility-based agent acts based not only on goals but also on the best way to achieve the goal.
 The utility-based agent is useful when there are multiple possible alternatives and an agent has to choose the best action to perform.
 The utility function maps each state to a real number to check how efficiently each action achieves the goals.

Schematic Diagram

Property:
1) The utility function maps a state onto a real number, which describes the associated degree of performance.
2) Goals give only two outcomes: achieved or not achieved. Utility based agents, however, provide a way in which the likelihood of success can be weighed against the importance of the goals.
3) A rational utility based agent maximizes the expected value of the utility function, i.e. more refined behaviour can be achieved.
4) Goals give only two discrete states: a) Happy b) Unhappy.

For example:
A military planning robot which provides a certain plan of action to be taken. Its environment is very complex, and the expected performance is also high.
An assistant built on Anthropic Claude whose goal is to help card members maximize the rewards and benefits from using their cards is a utility-based agent.
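A minimal sketch of the idea in Python: among several actions that all lead to acceptable outcomes, the agent picks the one whose resulting state has the highest utility. The route data and utility weights below are illustrative assumptions.

# Utility-based choice sketch: pick the action whose outcome has the highest utility.
def utility(state):
    # Maps a state onto a real number; higher is better.
    # A state here is (travel_time_minutes, comfort_score), both assumed for illustration.
    travel_time, comfort = state
    return -1.0 * travel_time + 5.0 * comfort

# Candidate actions and the states they are expected to lead to (assumed data).
outcomes = {
    "take-highway":      (30, 2),   # fast but stressful
    "take-side-roads":   (45, 4),   # slower but more relaxed
    "take-scenic-route": (60, 5),   # slowest, most pleasant
}

best_action = max(outcomes, key=lambda a: utility(outcomes[a]))
print(best_action)   # -> take-highway under these particular weights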

Advantages of Utility based agent


 Handles a wide range of decision-making problems
 Learns from experience and adjusts their decision-making strategies
 Offers a consistent and objective framework for decision-making

Disadvantages of Utility based agent


 Requires an accurate model of the environment; without one, decision-making errors result
 Computationally expensive and requires extensive calculations
 Does not consider moral or ethical considerations
 Difficult for humans to understand and validate

Difference between goal and utility based agent

Goal based agent | Utility based agent
Goal-based agents may perform in a way that produces an unexpected outcome because their search space is limited | Utility-based agents are more reliable because they can learn from their environment and perform most efficiently
Makes decisions based on the goal and the available information | Makes decisions based on the utility and general information
Goal-based agents are easier to program | Implementing utility-based agents can be a complex task
Considers a set of possible actions before deciding whether the goal is achieved or not | Maps each state to an actual number to check how efficiently each step achieves its goals
Utilized in computer vision, robotics, and NLP | Used in GPS and tracking systems

Learning agent

A learning agent in AI is the type of agent which can learn from its past experiences, i.e. it has learning capabilities.
 It starts acting with basic knowledge and is then able to act and adapt automatically through learning. A learning agent has four conceptual components, which are:
o Learning element: It is responsible for making improvements by learning from the environment.
o Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
o Performance element: It is responsible for selecting external actions.
o Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences.
 Hence, learning agents are able to learn, analyze performance, and look for new ways
to improve the performance.

Schematic Diagram

Example :
AutoGPT, created by Significant Gravitas.
Imagine you want to purchase a smartphone. So, you give AutoGPT a prompt to
conduct market research on the top ten smartphones, providing insights on their pros and
cons.
Once given this task, AutoGPT analyzes the pros and cons of the top ten smartphones
by exploring various websites and sources. It evaluates the authenticity of websites using a
sub-agent program. Finally, it generates a detailed report summarizing the findings and listing
the pros and cons of the top ten smartphone companies.
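The four components can also be sketched in Python on a toy two-action task, as below. The reward model, the epsilon value, and the running-average update rule are illustrative assumptions, not part of the AutoGPT example above.

import random

# Toy learning-agent sketch: the four components on a two-action task (assumptions only).
q = {"A": 0.0, "B": 0.0}          # knowledge maintained and improved by the learning element
counts = {"A": 0, "B": 0}

def problem_generator():
    # Suggests exploratory actions that lead to new and informative experiences.
    return random.choice(list(q))

def performance_element(epsilon=0.1):
    # Selects the external action; mostly exploits current knowledge, sometimes explores.
    if random.random() < epsilon:
        return problem_generator()
    return max(q, key=q.get)

def critic(action):
    # Feedback against a fixed performance standard (here, an assumed hidden reward).
    return random.gauss(1.0 if action == "B" else 0.0, 0.1)

def learning_element(action, reward):
    # Improves the knowledge used by the performance element (running average of rewards).
    counts[action] += 1
    q[action] += (reward - q[action]) / counts[action]

for _ in range(200):
    a = performance_element()
    learning_element(a, critic(a))

print(q)   # the estimate for action "B" should drift toward 1.0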

Advantages of Learning Agents :


 The agent can convert ideas into action based on AI decisions
 Learning intelligent agents can follow basic commands, like spoken instructions, to
perform tasks
 Unlike classic agents that perform predefined actions, learning agents can evolve with
time
 AI agents consider utility measurements, making them more realistic
Disadvantages of Learning Agents :
 Prone to biased or incorrect decision-making
 High development and maintenance costs
 Requires significant computing resources
 Dependence on large amounts of data
 Lack of human-like intuition and creativity

DESIGNING AN AGENT SYSTEM


When we specify an agent we need to specify the performance measure, the environment, and the agent's actuators and sensors. We group all of these under the heading of the task environment.
Acronymically, this is called the PEAS ([P]erformance, [E]nvironment, [A]ctuators, [S]ensors) description.
Steps in Designing an Agent
1) Define the problem area (i.e. the task environment) completely. Examples: vacuum world, automated face recognition, automated taxi driver.
2) Define or tabulate PEAS.
3) Define or tabulate the agent function (i.e. the percept sequence and action columns).
4) Design the agent program.
5) Design an architecture to implement the agent program.
6) Implement the agent program.
The agent system may be a single-agent or a multi-agent system.
If the system is multi-agent then we need to consider communication and co-operation strategies among the multiple agents.

PEAS
PEAS stands for performance measure, environment, actuators, and sensors. PEAS
defines AI models and helps determine the task environment for an intelligent agent.
 Performance measure: It defines the success of an agent. It evaluates the criteria that determine whether the system performs well.
 Environment: It refers to the external context in which an AI system operates. It
encapsulates the physical and virtual surroundings, including other agents, objects,
and conditions.
 Actuators: They are responsible for executing actions based on the decisions made.
They interact with the environment to bring about desired changes.

 Sensors: An agent observes and perceives its environment through sensors. Sensors
provide input data to the system, enabling it to make informed decisions.
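A PEAS description can also be recorded directly in code. The sketch below is only illustrative; the dataclass layout and the vacuum-cleaner entry are assumptions that mirror the first row of the examples table that follows.

from dataclasses import dataclass, field

# Illustrative container for a PEAS description.
@dataclass
class PEAS:
    agent: str
    performance: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

vacuum = PEAS(
    agent="Vacuum cleaner",
    performance=["Cleanliness", "Security", "Battery"],
    environment=["Room", "Table", "Carpet", "Floors"],
    actuators=["Wheels", "Brushes"],
    sensors=["Camera", "Sensors"],
)
print(vacuum)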
Examples :

Agent | Performance | Environment | Actuators | Sensors
Vacuum cleaner | Cleanliness, security, battery | Room, table, carpet, floors | Wheels, brushes | Camera, sensors
ChatGPT | Response quality | Online interaction platforms | Text generation engine | Input mechanisms for user queries
Autonomous vehicle | Efficient navigation, safety, time, comfort | Roads, traffic, pedestrians, road signs | Brake, accelerator, steering, horn | Cameras, GPS, speedometer
Hospital | Patient's health, cost | Doctors, patients, nurses, staff | Prescriptions, diagnoses, tests, treatments | Symptoms
An automated taxi driver | Safe, fast, legal, comfortable trip, maximize profits | Roads, other traffic, pedestrians, customers | Steering, accelerator, brake, signal, horn, display | Cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard
An automated face recognizer | Correct recognition | Human face, software, web / video camera, infrared light | Capturing face, feature extraction, classification | Web / video camera, keyboard, mouse, infrared light
Part picking robot | Percentage of parts in correct bins | Conveyor belt with parts; bins | Jointed arm and hand | Camera, joint angle sensors
ATM | Secure, reliable, fast service | ATM machine, human, computer | Display menu / screen with options, validity checks | Touch screen
E-commerce system | Secure, reliable, fast business processing | E-commerce websites, human, computer | Display of product lists with prices, forms | Keyboard, mouse
Refinery controller | Maximize purity, yield, safety | Refinery, operators | Valves, pumps, heaters, displays | Temperature, pressure, chemical sensors
Satellite image analysis system | Correct image categorization | Downlink from orbiting satellite | Display of scene categorization | Colour pixel arrays
Chemical reaction analyser in a chemistry research lab | Correct recording of reaction | A chemistry lab where instruments and chemicals are available for carrying out reactions | Recording the result of the reaction | Knowledge database of chemicals and their characteristics
Medical diagnosis | Healthy patient, minimize costs and lawsuits | Patient, hospital, staff | Display of questions, tests, diagnoses, treatments, referrals | Keyboard entry of symptoms, findings, patient's answers
Blood testing system | Correct reporting of each test with its components | Blood sample lab for the specified test | Detailed reporting of each test and its results | Database of test-conduction procedures
Interactive English tutor | Maximize student's score on test | Set of students, testing agency | Display of exercises, suggestions, corrections | Keyboard entry
A Casio teacher | Learner should be able to play specific musical pieces | Group of learners or a single learner playing a Casio | Display of each note, presentation of a key, sample music pieces | Inputs from the learner via mouse or keyboard, and a database of Casio details
Playing soccer | Scoring, no penalties, not allowing the other team to score | Soccer field, players, goalie, referees, coach, soccer ball, net | Player legs, head, hands | Eyes, ears
Exploring the subsurface oceans of Titan | Accurate mapping of the oceans and identification of the environment | Water, submersible | Arms, propellers | Cameras, sonar, motor sensors
Shopping for used AI books on the Internet | Low cost of procuring an AI book | Internet, rival shopping sites, customers | Keyboard, mouse | Monitor
Playing a tennis match | Attaining the highest score to win the match | Rackets, net, referee, ball, players | Human | Eyes, ears
Practicing tennis against a wall | Keeping good form | Wall, racket, ball | Human | Eyes, ears
Performing a high jump | Attaining maximum height | Jumping pole, padding, jumper, referee | Legs | —
Knitting a sweater | Creating a well-made full sweater | Knitter, yarn, knitting needles, directions | Hands | Eyes, touch
Bidding on an item at an auction | Attaining an item for the lowest cost possible | Auctioneer, bidders, item | Voice | Eyes, ears

Advantages of PEAS :

 Clarity: PEAS helps define the performance measure clearly, allowing developers to establish specific goals and objectives for the AI system. It ensures that system performance can be evaluated and measured effectively against predefined criteria.
 User experience: PEAS helps create AI systems that provide good user experiences by considering the performance measure while designing the system. Whether it's accuracy, efficiency, or personalized interactions, the system meets user expectations and provides value by focusing on performance.
 Evaluation: PEAS provides a basis for evaluating the performance of AI systems and identifying areas for improvement. By defining clear performance measures, developers can measure the system's performance, gather feedback, and make informed decisions to enhance the system's capabilities and address shortcomings.

PROBLEM SOLVING APPROACH TO TYPICAL AI PROBLEMS


In the real world, there are different types of problems. Problem solving in games such as Sudoku is one example. It can be done by building an artificially intelligent system to solve that particular problem. To do this, one first needs to define the problem statement and then generate the solution while keeping the conditions in mind.
 Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving.
 Problem formulation is the process of deciding what actions and states to consider, given a goal.
 A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
 The state space forms a graph in which the nodes are states and the arcs between nodes are actions.
 A path in the state space is a sequence of states connected by a sequence of actions.
Traditionally, people think that a person who is able to solve more and more problems is more intelligent than others. It is often said that problem-solving skill demonstrates intelligence; hence solving problems becomes a major aspect of artificial intelligence. In order to understand how exactly problem solving contributes to intelligence, one needs to find out how intelligent species solve problems.
The classical approach to solving a problem is quite simple: given a problem at hand, a hit-and-trial (trial-and-error) method is used to check various candidate solutions to that problem. This hit-and-trial approach usually works well for trivial problems and is referred to as the classical approach to problem solving.

Generate and Test

This is the technical name given to the classical way of solving problems, where different combinations are generated to solve the problem, and the one which solves the problem is taken as the correct solution. The rest of the combinations, which are considered incorrect solutions, are discarded.
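The idea can be sketched in a few lines of Python. The particular problem below (finding a 3-digit lock code with strictly increasing digits that sum to 12) is an illustrative assumption; any generator of candidates plus a test of the goal condition follows the same pattern.

from itertools import product

# Generate-and-test sketch: generate candidates, test each, keep the first that passes.
def is_solution(code):
    # Assumed goal condition: three distinct, strictly increasing digits summing to 12.
    return sum(code) == 12 and list(code) == sorted(code) and len(set(code)) == 3

def generate_and_test():
    for candidate in product(range(10), repeat=3):   # the generator
        if is_solution(candidate):                   # the tester
            return candidate                         # accepted; all others are discarded
    return None

print(generate_and_test())   # -> (0, 3, 9)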

AI Components that are required to solve a problem

There are six major components of an artificial intelligence system. They are solely responsible for generating the desired results for a particular problem. These components are as follows:
 Knowledge Representation: It is the major foundation of an artificial intelligence system. It is used for representing the necessary knowledge so as to generate a knowledge base, with the help of which the AI system can perform tasks and generate results.
 Heuristic Searching Techniques: Usually, while dealing with problems, the knowledge base keeps growing, making it difficult to search in that knowledge base. To tackle this challenge, heuristic searching techniques can be used, which can provide results efficiently (guided by certain criteria) in terms of time and memory usage.
 Artificial Intelligence Hardware: Hardware compatibility is a major concern when it comes to deploying software on machines. Hardware must be efficient enough to accommodate the system and produce the desired results. Hardware components include all the machinery required, spanning from memory to processors to communication devices. AI systems are incomplete without AI hardware.
 Computer Vision and Pattern Recognition: With the help of this component, AI programs capture inputs on their own from real-world scenarios. Sufficient and compatible hardware enables better pattern gathering, which makes for a useful knowledge base.
 Natural Language Processing: This component processes or analyses written or spoken language. Speech recognition alone is not sufficient to capture real-world data; merely acquiring the word sequence and parsing the sentence into the computer is not enough for an AI system to gain knowledge about its environment. Natural language processing plays a vital role in making the meaning of a text understandable to AI systems.
 Artificial Intelligence Languages and Support Tools: Artificial intelligence languages are broadly similar to traditional software development programming languages, with additional features to capture human reasoning processes and logic as much as possible.

Well defined problems and Solutions

A problem can be defined formally by five components:


 The initial state that the agent starts in.
 A description of the possible actions available to the agent.
 A description of what each action does; the formal name for this is the transition model.
 The goal test, which determines whether a given state is a goal (final) state. In some problems we can explicitly specify a set of goal states; if a particular state is reached, we can check it against this set and, if a match is found, success can be announced.
 A path cost function that assigns a numeric cost (value) to each path. The problem-solving agent is expected to choose a cost function that reflects its own performance measure.
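These five components translate naturally into a small Python class, as sketched below. The class layout and method names are assumptions for illustration; a concrete problem (vacuum world, 8-puzzle, water jug, etc.) would subclass it and fill in the details.

# Sketch of the five formal problem components (illustrative layout).
class Problem:
    def __init__(self, initial_state):
        self.initial_state = initial_state              # 1) initial state

    def actions(self, state):
        raise NotImplementedError                       # 2) actions available in a state

    def result(self, state, action):
        raise NotImplementedError                       # 3) transition model

    def goal_test(self, state):
        raise NotImplementedError                       # 4) goal test

    def path_cost(self, cost_so_far, state, action, next_state):
        return cost_so_far + 1                          # 5) path cost (default: 1 per step)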

Problem Formulation Types

There are two main kinds of problem formulation


1) Incremental formulation
2) Complete-state formulation.

Depending upon the problem requirements and specification, one can decide which formulation to go for.

Incremental Formulation
 It involves operators that augment the state description, starting with an empty state.
 It generates many sequences.
 The memory requirement is lower, as not all states are explored (exploration is done only until the goal is found).

Complete-state Formulation
 In this formulation we initially have some basic configuration, represented in the initial state.
 Here, before performing any action, the conditions on the action are checked first, so that the configuration after the action remains a legal state.
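The two formulations can be contrasted on the 8-queens puzzle, as sketched below. The representation (a tuple of queen columns, one per row) and the successor functions are illustrative assumptions.

import random

# Incremental formulation: start from the empty state and add one queen per step.
def incremental_successors(state, n=8):
    # state holds the columns of the queens placed so far, one per row.
    return [state + (col,) for col in range(n)]

# Complete-state formulation: start with all 8 queens already placed (one per row)
# and move a queen within its row; every state is a complete configuration.
def complete_state_initial(n=8):
    return tuple(random.randrange(n) for _ in range(n))

def complete_state_successors(state, n=8):
    succs = []
    for row, col in enumerate(state):
        for new_col in range(n):
            if new_col != col:
                succs.append(state[:row] + (new_col,) + state[row + 1:])
    return succs

print(len(incremental_successors(())))                            # 8 ways to place the first queen
print(len(complete_state_successors(complete_state_initial())))   # 8 rows x 7 alternative columns = 56 moves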

Solving the Problem

Finding the solution of a problem is a procedure which involves the following phases:

 Problem definition: wherein a detailed specification of the inputs and of what constitutes an acceptable solution is described.
 Problem analysis: wherein the problem is studied from various viewpoints, such as the inputs to the problem, the environment of the problem, and the expected outputs.
 Knowledge representation: wherein the known data about the problem and the various expected stimuli from the environment are represented in a particular format which is helpful for taking actions.
 Problem solving: wherein the techniques best suited to solving the problem are selected and finalized.

PROBLEM SOLVING AGENT

Approach to Problem Solving Agent


 Goal based agents are also called problem solving agents.
 A problem solving agent adapts to the task environment, understands the goal, and achieves success.
 Problem solving agents determine the sequence of actions which generates the successful (goal) state.
 A problem solving agent can be aimed at maximizing the performance measure, thereby becoming an intelligent problem solving agent.

Steps in Problem Solving
A problem solving agent achieves success by taking the following approach to a problem's solution.
Step 1: Goal setting
The agent sets the goal by considering the environment.
Step 2: Goal formulation
The goals set in step 1 are formalized within the framework. The key activities in goal formulation are:
1) To observe the current state. 2) To tabulate the agent's performance measures.
Step 3: Problem formulation
After formulating the goal, it is required to find out what sequence of actions will generate the goal state.
Problem formulation is a way of looking at actions and the states generated because of those actions, which lead to success.
Step 4: Search in an unknown environment
If the task environment is unknown, then the agent first tries different sequences of actions and gathers knowledge (i.e. learning). The agent thereby obtains a known set of actions which lead to the goal state. In this way the agent searches for a desirable sequence of actions; this process is called searching.
With knowledge of the environment and the goal state we can design a search algorithm. A search algorithm is a procedure which takes a problem as input and returns its solution, represented in the form of an action sequence.
Step 5: Execution phase
Once the solution is given by the search algorithm, the actions suggested by the algorithm are executed. This is the execution phase. The solution guides the agent in performing the actions. After executing the actions, the agent again formulates a new goal.

Algorithm :
Procedure or method: Problem-solving agent (unknown space, percept).
Result: An action.
Input: P → percept (environment perception).
Static:
1) A → An action sequence, initially empty.
2) S → State - the current state.
3) G → Goal - a goal, initially null.
4) P → Problem - a formulation of the real world situation.

S ← Update-State(S, percept)
if A is empty then do
    G ← Formulate-Goal(S)
    P ← Formulate-Problem(S, G)
    A ← Search(P)
action ← First(A)
A ← Rest(A)
return action
end procedure
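A compact Python sketch of this loop is shown below. Update-State, Formulate-Goal, Formulate-Problem and Search are left as plug-in functions to be supplied by the designer for a concrete task; their names and the class layout are assumptions for illustration.

# Sketch of a problem-solving agent loop (plug-in functions are assumed, not defined here).
class ProblemSolvingAgent:
    def __init__(self, update_state, formulate_goal, formulate_problem, search):
        self.seq = []                # A: the pending action sequence
        self.state = None            # S: description of the current state
        self.update_state = update_state
        self.formulate_goal = formulate_goal
        self.formulate_problem = formulate_problem
        self.search = search

    def __call__(self, percept):
        self.state = self.update_state(self.state, percept)
        if not self.seq:                                        # no plan left
            goal = self.formulate_goal(self.state)              # G
            problem = self.formulate_problem(self.state, goal)  # P
            self.seq = self.search(problem) or []               # the search may fail
            if not self.seq:
                return None                                     # no action possible
        return self.seq.pop(0)                                  # execute the next action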

REPRESENTATION OF AI PROBLEMS

TIC-TAC-TOE Game

Board position: a vector of nine elements, one for each square {1,2,3,4,5,6,7,8,9}.

An element contains the value 0 if the corresponding square is blank, 1 if it is filled with "O", and 2 if it is filled with "X".
Hence the starting state is {0,0,0,0,0,0,0,0,0}. The goal state or winning combination will be a board position having "O" or "X" occupying all three squares of one of the combinations ({1,2,3}, {4,5,6}, {7,8,9}, {1,4,7}, {2,5,8}, {3,6,9}, {1,5,9}, {3,5,7}). Hence two example goal states are {2,0,1,1,2,0,0,0,2} and {2,2,2,0,1,0,1,0,0}.

Any board position satisfying this condition would be declared a win for the corresponding player. The valid transitions of this problem are simply putting "1" or "2" in any element position containing 0. In practice, all the valid moves are defined and stored, and while selecting a move it is taken from this store. In this game, the valid transition table will be a vector having 3^9 entries, with 9 elements in each entry.
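The representation and win test can be sketched in Python as follows. The helper names are assumptions; only the 9-element board vector, the 0/1/2 encoding, and the eight winning lines come from the description above.

# Sketch of the tic-tac-toe representation: 0 = blank, 1 = "O", 2 = "X".
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),     # rows (squares 1-9 as indices 0-8)
             (0, 3, 6), (1, 4, 7), (2, 5, 8),     # columns
             (0, 4, 8), (2, 4, 6)]                # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]          # 1 if "O" wins, 2 if "X" wins
    return 0                         # no winner yet

def moves(board, player):
    # Valid transitions: put the player's mark in any square still holding 0.
    return [board[:i] + [player] + board[i + 1:] for i, v in enumerate(board) if v == 0]

start = [0] * 9
print(winner([2, 0, 1, 1, 2, 0, 0, 0, 2]))   # -> 2, since squares 1, 5, 9 all hold "X"
print(len(moves(start, 2)))                  # -> 9 possible opening moves for "X"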

Water Jug Problem

In the water jug problem in Artificial Intelligence, we are provided with two jugs: one has the capacity to hold 3 gallons of water and the other has the capacity to hold 4 gallons of water. There is no other measuring equipment available and the jugs do not have any kind of markings on them. So, the agent's task here is to get exactly 2 gallons of water into the 4-gallon jug by using only these two jugs and no other material. Initially, both jugs are empty.
To solve this problem, the following set of rules is proposed:
 Production rules for solving the water jug problem.
 Here, let x denote the amount of water in the 4-gallon jug and y the amount in the 3-gallon jug.
Initial State | Condition | Final State | Description of Action Taken
(x,y) | if x < 4 | (4,y) | Fill the 4-gallon jug completely
(x,y) | if y < 3 | (x,3) | Fill the 3-gallon jug completely
(x,y) | if x > 0 | (x-d,y) | Pour some water out of the 4-gallon jug
(x,y) | if y > 0 | (x,y-d) | Pour some water out of the 3-gallon jug
(x,y) | if x > 0 | (0,y) | Empty the 4-gallon jug
(x,y) | if y > 0 | (x,0) | Empty the 3-gallon jug
(x,y) | if x+y >= 4 and y > 0 | (4, y-(4-x)) | Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full
(x,y) | if x+y >= 3 and x > 0 | (x-(3-y), 3) | Pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full
(x,y) | if x+y <= 4 and y > 0 | (x+y, 0) | Pour all the water from the 3-gallon jug into the 4-gallon jug
(x,y) | if x+y <= 3 and x > 0 | (0, x+y) | Pour all the water from the 4-gallon jug into the 3-gallon jug

The listed production rules contain all the actions that could be performed by the agent in transferring the contents of the jugs. But, to solve the water jug problem in the minimum number of moves, the following sequence of rules should be applied. Solution of the water jug problem according to the production rules:

Contents of 4-gallon jug | Contents of 3-gallon jug | Rule followed
0 gallons | 0 gallons | Initial state
0 gallons | 3 gallons | Rule no. 2
3 gallons | 0 gallons | Rule no. 9
3 gallons | 3 gallons | Rule no. 2
4 gallons | 2 gallons | Rule no. 7
0 gallons | 2 gallons | Rule no. 5
2 gallons | 0 gallons | Rule no. 9

On reaching the seventh state (after six moves), we reach the goal state: the 4-gallon jug holds exactly 2 gallons. Therefore, at this state, our problem is solved.
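The same solution can be found automatically by searching the (x, y) state space, as in the breadth-first sketch below. The function names are assumptions; only the jug capacities and the goal of 2 gallons in the 4-gallon jug come from the problem statement.

from collections import deque

# Breadth-first search over water jug states (x, y): x in the 4-gallon jug, y in the 3-gallon jug.
def successors(state):
    x, y = state
    return {
        (4, y), (x, 3),                               # fill a jug
        (0, y), (x, 0),                               # empty a jug
        (min(4, x + y), y - (min(4, x + y) - x)),     # pour the 3-gallon jug into the 4-gallon jug
        (x - (min(3, x + y) - y), min(3, x + y)),     # pour the 4-gallon jug into the 3-gallon jug
    }

def solve(start=(0, 0), goal_x=2):
    frontier, visited = deque([(start, [start])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state[0] == goal_x:                        # goal test: 2 gallons in the 4-gallon jug
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

print(solve())   # one shortest sequence of (x, y) states, ending with 2 gallons in the 4-gallon jug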
