AL3391 Notes Unit I
What is AI?
1. "The exciting new effort to make computers think machines with minds, in the
full and literal sense." (Haugeland, 1985)
3. "The art of creating machines that performs functions that require intelligence
when performed by people." (Kurzweil, 1990)
6. "The study of the computations that make it possible to perceive, reason, and
act." (Winston, 1992)
The term AI is defined by each author in its own perceive, leads to four
important categories
The goal of the machine is to fool the interrogator into believing that it is the
person. If the machine succeeds at this, then we will conclude that the machine
is acting humanly. Programming a computer to pass the test provides plenty to
work on; the computer would need to possess the following capabilities: natural
language processing, knowledge representation, automated reasoning, and
machine learning.
Total Turing Test: the test which includes a video signal so that the interrogator
can test the perceptual abilities of the machine. To undergo the total Turing
test, the computer will also need computer vision to perceive objects, and
robotics to manipulate objects and move about.
Laws of thought were supposed to govern the operation of the mind, and their
study initiated the field called logic.
Example 2: "All students of III year CSE are good; Ram is a student of III year
CSE; therefore, Ram is a good student."
Syllogisms: a form of deductive reasoning consisting of a major premise, a
minor premise, and a conclusion.
Syllogisms provided patterns for argument structures that always yielded correct
conclusions when given correct premises.
1. It is not easy to take informal knowledge and state it in the formal terms
required by logical notation, particularly when the knowledge is less than 100%
certain.
An agent is just something that acts. A rational agent is one that acts so as to
achieve the best outcome or, when there is uncertainty, the best expected
outcome. The study of rational agents has two advantages.
Aristotle (384-322 B.C.) was the first to formulate a precise set of laws governing
the rational part of the mind. He developed an informal system of syllogisms for
proper reasoning, which allowed one to generate conclusions mechanically, given
initial premises.
Philosophers staked out most of the important ideas of AI, but the leap to a
formal science required a level of mathematical formalization in three
fundamental areas: logic, computation, and probability
Economics (1776-present)
The science of economics got its start in 1776, when Scottish philosopher Adam
Smith (1723-1790) published An Inquiry into the Nature and Causes of the
Wealth of Nations. While the ancient Greeks and others had made contributions
to economic thought, Smith was the first to treat it as a science, using the idea
that economies can be thought of as consisting of individual agents maximizing
their own economic well-being.
Neuroscience (1861-present)
Neuroscience is the study of the nervous system, particularly the brain. The
exact way in which the brain enables thought is one of the great mysteries of
science. It has been appreciated for thousands of years that the brain is
somehow involved in thought, because of the evidence that strong blows to the
head can lead to mental incapacitation.
Psychology (1879-present)
The origins of scientific psychology are traced back to the work of German
physiologist Hermann von Helmholtz (1821-1894) and his student Wilhelm
Wundt (1832-1920). In 1879, Wundt opened the first laboratory of
experimental psychology at the University of Leipzig. In the US, the development
of computer modeling led to the creation of the field of cognitive science. The
field can be said to have started at a workshop in September 1956 at MIT.
Computer engineering (1940-present)
Ktesibios of Alexandria (c. 250 B.C.) built the first self-controlling machine: a
water clock with a regulator that kept the flow of water running through it at a
constant, predictable pace. Modern control theory, especially the branch known
as stochastic optimal control, has as its goal the design of systems that maximize
an objective function over time.
Linguistics (1957-present)
Modern linguistics and AI, then, were "born" at about the same time, and grew
up together, intersecting in a hybrid field called computational linguistics or
natural language processing.
There were a number of early examples of work that can be characterized as AI,
but it was Alan Turing who first articulated a complete vision of AI in his 1950
article "Computing Machinery and Intelligence." Therein, he introduced the
Turing test, machine learning, genetic algorithms, and reinforcement learning.
The early years of AI were full of successes, in a limited way. General Problem
Solver (GPS) was a computer program created in 1957 by Herbert Simon and
Allen Newell to build a universal problem solver machine. The order in which the
program considered subgoals and possible actions was similar to that in which
humans approached the same problems. Thus, GPS was probably the first
program to embody the "thinking humanly" approach. At IBM, Nathaniel
Rochester and his colleagues produced some of the first AI programs. Herbert
Gelernter (1959) constructed the Geometry Theorem Prover, which was able to
prove theorems that many students of mathematics would find quite tricky.
Lisp was invented by John McCarthy in 1958 while he was at the Massachusetts
Institute of Technology (MIT). In 1963, McCarthy started the AI lab at Stanford.
Tom Evans's ANALOGY program (1968) solved geometric analogy problems that
appear in IQ tests, such as the one shown in the figure below.
Fig : Tom Evans's ANALOGY program could solve geometric analogy problems
like these.
A dose of reality (1966-1973)
From the beginning, AI researchers were not shy about making predictions of
their coming successes. The following statement by Herbert Simon in 1957 is
often quoted:
"It is not my aim to surprise or shock you, but the simplest way I can summarize
is to say that there are now in the world machines that think, that learn and that
create. Moreover, their ability to do these things is going to increase rapidly until,
in a visible future, the range of problems they can handle will be coextensive with
the range to which the human mind has been applied."
In 1981, the Japanese announced the "Fifth Generation" project, a 10-year plan
to build intelligent computers running Prolog. Overall, the AI industry boomed
from a few million dollars in 1980 to billions of dollars in 1988.
Psychologists including David Rumelhart and Geoff Hinton continued the study of
neural-net models of memory.
One of the most important environments for intelligent agents is the Internet.
Sample Applications
Game playing: In 1997, IBM's Deep Blue became the first computer program to
defeat the reigning world champion (Garry Kasparov) in a chess match. The
value of IBM's stock
increased by $18 billion.
Logistics Planning: During the Gulf crisis of 1991, U.S. forces deployed a
Dynamic Analysis and Replanning Tool, DART to do automated logistics planning
and scheduling for transportation. This involved up to 50,000 vehicles, cargo,
and people at a time, and had to account for starting points, destinations,
routes, and conflict resolution
INTELLIGENT AGENTS
1. A human agent has eyes, ears, and other organs for sensors and hands,
legs, mouth, and other body parts for actuators.
2. A robotic agent might have cameras and infrared range finders for sensors
and various motors for actuators.
The term percept refers to the agent's perceptual inputs at any given
instant.
An agent's behavior is described by the agent function that maps any given
percept sequence to an action.
Example : The vacuum-cleaner world has just two locations: squares A and B.
The vacuum agent perceives which square it is in and whether there is dirt in the
square. It can choose to move left, move right, suck up the dirt, or do nothing.
One very simple agent function is the following: if the current square is dirty,
then suck, otherwise move to the other square.
Fig : A vacuum-cleaner world with just two locations
Table : Partial tabulation of the agent function (percept sequence -> action)
for the vacuum-cleaner world.
function VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
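The same agent function can be written directly in Python. The following is a
minimal sketch that mirrors the pseudocode above; treating a percept as a
(location, status) pair is an assumption of this sketch.

    # A minimal Python sketch of the reflex vacuum agent above.
    # A percept is assumed to be a (location, status) pair.
    def vacuum_agent(percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        else:  # location == "B"
            return "Left"

    print(vacuum_agent(("A", "Dirty")))  # Suck
    print(vacuum_agent(("A", "Clean")))  # Right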
Concept of Rationality
A rational agent is one that does the right thing. The right action is the one that
will cause the agent to be most successful.
Performance measures
Rationality
For each possible percept sequence, a rational agent should select an action that
is expected to maximize its performance measure, given the evidence provided
by the percept sequence and whatever built-in knowledge the agent has. A
rational agent should be autonomous.
An omniscient agent knows the actual outcome of its actions and can act
accordingly; but omniscience is impossible in reality.
Autonomy
Information Gathering
Taxi driving is clearly stochastic in this sense, because one can never predict the
behavior of traffic exactly;
If the environment can change while an agent is deliberating, then we say the
environment is dynamic for that agent; otherwise, it is static. Taxi driving is
clearly dynamic. Crossword puzzles are static.
The distinction between single-agent and multiagent environments may seem
simple enough. For example, an agent solving a crossword puzzle by itself is
clearly in a single-agent environment, whereas an agent playing chess is in a
two-agent environment.
Environment Characteristics
The job of AI is to design the agent program that implements the agent function
mapping percepts to actions.
Agent programs
Agent programs take the current percept as input from the sensors and return
an action to the actuators
The agent program takes the current percept as input, and the agent function
takes the entire percept history
The agent programs will use some internal data structures that will be updated
as new percepts arrive. These data structures are operated on by the agent's
decision-making procedures to generate an action choice, which is then passed
to the architecture to be executed. Two types of agent programs are
1. A Skeleton Agent
2. A Table Lookup Agent
Skeleton Agent
Table-lookup agent
• Huge table
• Take a long time to build the table
• No autonomy
• Even with learning, need a long time to learn the table entries
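A minimal Python sketch of a table-lookup agent makes these drawbacks
concrete; the table entries below are illustrative, covering only a few percept
sequences of the two-square vacuum world.

    # Sketch of a table-lookup agent: the table maps entire percept
    # sequences to actions, which is why it grows so quickly.
    percepts = []  # the percept history accumulated so far

    # Illustrative (partial) table for the two-square vacuum world.
    table = {
        (("A", "Clean"),): "Right",
        (("A", "Dirty"),): "Suck",
        (("A", "Dirty"), ("A", "Clean")): "Right",
    }

    def table_lookup_agent(percept):
        percepts.append(percept)
        # Any sequence missing from the table falls back to doing nothing.
        return table.get(tuple(percepts), "NoOp")

    print(table_lookup_agent(("A", "Dirty")))  # Suck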
The simplest kind of agent is the simple reflex agent. These agents select actions
on the basis of the current percept, ignoring the rest of the percept history.
Condition-action rules allow the agent to make the connection from percept to
action: it acts according to a rule whose condition matches the current state, as
defined by the percept.
A goal-based agent knows the description of the current state as well as the
goal state. The action that matches the current state is selected depending on
the goal state.
Example: if the disease of a patient is identified, then treatment is given so that
the patient recovers from the disease; making the patient healthy is the goal to
be achieved.
A utility-based agent generates a goal state with high-quality behavior, i.e., if
more than one sequence exists to reach the goal state, then the sequence that
is more reliable, safer, quicker, and cheaper than the others is selected.
Utility is a function that maps a state onto a real number, which describes the
associated degree of happiness.
1. When there are conflicting goals, only some of which can be achieved (for
example, speed and safety), the utility function specifies the appropriate
trade-off.
2. When there are several goals that the agent can aim for, none of which
can be achieved with certainty, utility provides a way in which the
likelihood of success can be weighed up against the importance of the
goal.
Example: if the patient's disease is identified, then the sequence of treatments
that leads to the patient's recovery with the best utility measure is selected and
applied.
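As a minimal sketch (the function and parameter names are assumed, not from
any particular library), utility-based action selection amounts to picking the
action whose predicted resulting state has the highest utility:

    # Minimal sketch of utility-based action selection. The utility
    # function and the result() model are assumed to be given.
    def utility(state):
        # Maps a state onto a real number (degree of "happiness").
        return state.get("happiness", 0.0)

    def choose_action(state, actions, result):
        # result(state, action) predicts the successor state; pick the
        # action whose predicted outcome has the highest utility.
        return max(actions, key=lambda a: utility(result(state, a)))

    # Tiny illustrative model: treating yields a happier predicted state.
    model = lambda s, a: {"happiness": 1.0 if a == "treat" else 0.2}
    print(choose_action({}, ["rest", "treat"], model))  # treat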
Learning agent
Learning allows the agent to operate in initially unknown environments and to
become more competent than its initial knowledge alone might allow.
1. Learning element
2. performance element
3. Critic
4. Problem generator
The learning element uses feedback from the critic on how the agent is doing
and determines how the performance element should be modified to do better in
the future. The learning element is also responsible for making improvements.
The critic tells the learning element how well the agent is doing with respect to
a fixed performance standard
Search is one of the operational tasks that characterize AI programs best. Almost
every AI program depends on a search procedure to perform its prescribed
functions. Problems are typically defined in terms of states, and solutions
correspond to goal states.
Types of problem
In general, a problem can be classified under any one of the following four
types, which depend on two important properties:
(i) the amount of knowledge the agent has of the state and action descriptions;
(ii) how the agent is connected to its environment through its percepts and
actions.
A problem-solving agent is one kind of goal-based agent, where the agent
decides what to do by finding sequences of actions that lead to desirable states.
The complexity here arises from the agent's knowledge of the formulation
process (from current state to outcome action).
Note: a problem can be defined formally by four components:
1. initial state
2. successor function (actions)
3. goal test
4. path cost
The initial state that the agent starts in.
Successor function (S) - Given a particular state x, S(x) returns a set of states
reachable from x by any single action.
The goal test, which determines whether a given state is a goal state.
Sometimes there is an explicit set of possible goal states, and the test simply
checks whether the given state is one of them.
A path cost function that assigns a numeric cost to each path. The problem-
solving agent chooses a cost function that reflects its own performance measure.
State space (or) state set space - The set of all possible states reachable
from the initial state by any sequence of actions.
Path (state space) - The sequence of action leading from one state to another
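These four components translate directly into code. Below is a minimal Python
sketch of a problem definition; the class and method names are illustrative, not
from a specific library.

    # Minimal sketch of the four problem components as a Python class.
    class Problem:
        def __init__(self, initial_state, goal_state):
            self.initial_state = initial_state
            self.goal_state = goal_state

        def successors(self, state):
            # S(x): return a list of (action, next_state) pairs reachable
            # from state by any single action. Problem-specific.
            raise NotImplementedError

        def goal_test(self, state):
            # Default test: compare against an explicit goal state.
            return state == self.goal_state

        def step_cost(self, state, action, next_state):
            # c(x, a, y): cost of taking action a from x to y (default 1).
            return 1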
The effectiveness of a search can be measured using three factors. They are:
1. Does it find a solution at all?
2. Is it a good solution (one with a low path cost)?
3. What is the search cost (the time and memory needed to find a solution)?
For Example
Imagine an agent in the city of Arad, Romania, enjoying a touring holiday. Now,
suppose the agent has a nonrefundable ticket to fly out of Bucharest the
following day. In that case, it makes sense for the agent to adopt the goal of
getting to Bucharest. The agent's task is to find out which sequence of actions
will get it to a goal state.
A search algorithm takes a problem as input and returns a solution in the form of
an action sequence. Once a solution is found, the actions it recommends can be
carried out. This is called the execution phase.
Formulating problems
Initial state : the initial state for our agent in Romania might be described as
In(Arad).
Successor function : given the state In(Arad), the successor states are those
reachable by a single driving action, e.g., In(Sibiu), In(Timisoara), In(Zerind).
Goal test : the agent's goal in Romania is the singleton set {In(Bucharest)}.
Path cost : the step cost of taking action a to go from state x to state y is
denoted by c(x, a, y).
Example Problems
Toy Problems
i) Vacuum world Problem
States: The agent is in one of two locations, each of which might or might not
contain dirt. Thus there are 2 x 2^2 = 8 possible world states.
Initial state: Any state can be designated as the initial state.
Successor function: three actions (Left, Right, and Suck).
Goal test: This checks whether all the squares are clean.
Path cost: Each step costs 1, so the path cost is the number of steps in the
path.
The 8-puzzle problem consists of a 3 x 3 board with eight numbered tiles and a
blank space. A tile adjacent to the blank space can slide into the space. The
object is to reach a specified goal state
States: A state description specifies the location of each of the eight tiles and
the blank in one of the nine squares.
Initial state: Any state can be designated as the initial state.
Successor function: This generates the legal states that result from trying the
four actions (blank moves Left, Right, Up, or Down).
Goal test: This checks whether the state matches the goal configuration (Other
goal configurations are possible.)
Path cost: Each step costs 1, so the path cost is the number of steps in the
path.
Fig : A typical 8-puzzle instance, showing the initial state and the goal state.
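A sketch of the 8-puzzle successor function, assuming a state is encoded as a
tuple of nine entries read row by row, with 0 standing for the blank (this
encoding is one common choice, not the only one):

    # 8-puzzle successor function sketch. A state is a tuple of 9
    # entries read row by row; 0 marks the blank square.
    MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

    def successors(state):
        blank = state.index(0)
        row, col = divmod(blank, 3)
        result = []
        for action, delta in MOVES.items():
            # Rule out moves that would take the blank off the board.
            if action == "Up" and row == 0: continue
            if action == "Down" and row == 2: continue
            if action == "Left" and col == 0: continue
            if action == "Right" and col == 2: continue
            target = blank + delta
            board = list(state)
            board[blank], board[target] = board[target], board[blank]
            result.append((action, tuple(board)))
        return result

    for action, s in successors((7, 2, 4, 5, 0, 6, 8, 3, 1)):
        print(action, s)   # four legal blank moves from this state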
In cryptarithmetic problems, letters stand for digits and the aim is to find a
substitution of digits for letters such that the resulting sum is arithmetically
correct; each letter stands for a different digit.
Rules
Example 1:
    SEND
  + MORE
  ------
   MONEY
Example 2:
   FORTY
  +  TEN
  +  TEN
  ------
   SIXTY
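Because each letter stands for a single distinct digit, a small brute-force solver
can check every assignment. The following sketch solves SEND + MORE = MONEY
by trying permutations of digits; the search is naive but finishes quickly.

    from itertools import permutations

    # Brute-force solver for SEND + MORE = MONEY.
    def solve_send_more_money():
        letters = "SENDMORY"  # the eight distinct letters in the puzzle
        for digits in permutations(range(10), len(letters)):
            value = dict(zip(letters, digits))
            if value["S"] == 0 or value["M"] == 0:
                continue  # leading digits may not be zero
            send = int("".join(str(value[c]) for c in "SEND"))
            more = int("".join(str(value[c]) for c in "MORE"))
            money = int("".join(str(value[c]) for c in "MONEY"))
            if send + more == money:
                return value
        return None

    print(solve_send_more_money())
    # finds S=9, E=5, N=6, D=7, M=1, O=0, R=8, Y=2 (9567 + 1085 = 10652)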
Three missionaries and three cannibals are on one side of a river, along with a
boat that can hold one or two people. Find a way to get everyone to the other
side, without ever leaving a group of missionaries in one place outnumbered by
the cannibals in that place.
Assumptions :
Path cost : Number of crossings between the two sides of the river.
Solution:
Real-world problems
i) Airline travel problem
States: Each state is represented by a location (e.g., an airport) and the
current time.
Initial state: This is specified by the problem.
Successor function: This returns the states resulting from taking any
scheduled flight (perhaps further specified by seat class and location), leaving
later than the current time plus the within-airport transit time, from the current
airport to another.
Goal test: Are we at the destination by some prespecified time?
Path cost: This depends on monetary cost, waiting time, flight time, customs
and immigration procedures, seat quality, time of day, type of airplane, frequent-
flyer mileage awards, and so on.
Search techniques use an explicit search tree that is generated by the initial
state and the successor function that together define the state space. In general,
we may have a search graph rather than a search tree, when the same state can
be reached from multiple paths
The root of the search tree is a search node corresponding to the initial state,
In(Arad).
Apply the successor function to the current state, and generate a new set of
states
1. Start the sequence with the initial state and check whether it is a goal state or
not.
From the initial state (current state) generate and expand the new set of states.
The collection of nodes that have been generated but not expanded is called the
fringe. Each element of the fringe is a leaf node, a node with no successors in
the tree.
Expanding A, then B, then C in turn gives the search trees shown above.
Sequence of steps to reach the goal state F from A: A - C - F.
2. Search strategy: In the above example we did the sequence of choosing,
testing and expanding until a solution is found or until there are no more states
to be expanded. The choice of which state to expand first is determined by
search strategy.
3. Search tree: The tree which is constructed for the search process over the
state space.
4. Search node: a node in the search tree; the root of the search tree is the
node corresponding to the initial state of the problem.
There are many ways to represent nodes, but we will assume that a node is a
data structure with five components:
STATE: the state in the state space to which the node corresponds
PARENT-NODE: the node in the search tree that generated this node;
ACTION (RULE): the action that was applied to the parent to generate the
node;
PATH-COST: the cost, traditionally denoted by g(n) , of the path from the initial
state to the node
DEPTH: the number of steps along the path from the initial state.
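These five components map naturally onto a small data structure. Here is a
minimal Python sketch whose field names follow the list above; the solution()
helper recovers the action sequence by walking back through parents.

    # Sketch of the five-component search node as a Python dataclass.
    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class Node:
        state: Any                       # STATE in the state space
        parent: Optional["Node"] = None  # PARENT-NODE that generated it
        action: Any = None               # ACTION applied to the parent
        path_cost: float = 0.0           # PATH-COST g(n) from the start
        depth: int = 0                   # DEPTH: steps from the start

    def solution(node):
        # Walk back through parents to recover the action sequence.
        actions = []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))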
The collection of nodes represented in the search tree is defined using set or
queue representation.
Set : The search strategy would be a function that selects the next node to be
expanded from the set
Queue: Collection of nodes are represented, using queue. The queue operations
are defined as:
Task : Find a path to reach E using the queuing function in the general tree
search algorithm.
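A sketch of that general tree-search loop: the queuing function decides where
newly generated paths are inserted in the fringe, and that choice alone fixes the
search strategy. The small graph below is an assumed example for the task
above, not taken from the original figure.

    # General tree-search sketch: the queuing function decides where
    # new paths enter the fringe, which fixes the search strategy.
    def tree_search(initial, goal_test, successors, enqueue):
        fringe = [[initial]]              # the fringe holds whole paths
        while fringe:
            path = fringe.pop(0)          # always take the front path
            if goal_test(path[-1]):
                return path
            enqueue(fringe, [path + [s] for s in successors(path[-1])])
        return None

    def fifo(fringe, paths):
        fringe.extend(paths)              # add at the back: breadth-first

    def lifo(fringe, paths):
        fringe[:0] = paths                # add at the front: depth-first

    # Assumed example graph for the task above.
    graph = {"S": ["A", "B"], "A": ["C"], "B": ["D"],
             "C": ["E"], "D": ["E"], "E": []}
    print(tree_search("S", lambda s: s == "E", lambda s: graph[s], fifo))
    # ['S', 'A', 'C', 'E']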
Breadth-first search
The path in the 2nd depth level is selected, i.e., S-B-G or S-C-G.
Algorithm :
Example:
Time complexity
1 + b + b^2 + ... + b^d = O(b^d)
The space complexity is the same as the time complexity because all the leaf
nodes of the tree must be maintained in memory at the same time: O(b^d)
Completeness: Yes
Optimality: Yes, provided the path cost is a nondecreasing function of the
depth of the node
Advantage: Guaranteed to find the single solution at the shallowest depth level
Uniform-cost search
Breadth-first search is optimal when all step costs are equal, because it always
expands the shallowest unexpanded node. By a simple extension, we can find an
algorithm that is optimal with any step cost function. Instead of expanding the
shallowest node, uniform-cost search expands the node n with the lowest
path cost. Note that if all step costs are equal, this is identical to breadth-first
search.
Uniform-cost search does not care about the number of steps a path has, but
only about their total cost.
B is to be expanded next.
There is no need to expand the path S-C, because its path cost to reach C from
S is high, and the goal state is already reached on the previous path with
minimum cost.
The time complexity is the same as for breadth-first search, because instead of
the depth level the minimum path cost is considered.
Time complexity: O(b^d) Space complexity: O(b^d)
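A uniform-cost search sketch using a priority queue keyed on the path cost
g(n); the weighted graph here is an assumed example matching the S-B / S-C
discussion above.

    import heapq

    # Uniform-cost search sketch: always expand the frontier node with
    # the lowest path cost g(n), using a priority queue.
    def uniform_cost_search(start, goal, successors):
        frontier = [(0, start, [start])]   # (path_cost, state, path)
        explored = set()
        while frontier:
            cost, state, path = heapq.heappop(frontier)
            if state == goal:
                return cost, path
            if state in explored:
                continue
            explored.add(state)
            for next_state, step_cost in successors(state):
                heapq.heappush(frontier,
                               (cost + step_cost, next_state,
                                path + [next_state]))
        return None

    # Assumed weighted graph: S-B-G costs 3, S-C-G costs 6.
    graph = {"S": [("B", 1), ("C", 5)], "B": [("G", 2)],
             "C": [("G", 1)], "G": []}
    print(uniform_cost_search("S", "G", lambda s: graph[s]))
    # (3, ['S', 'B', 'G'])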
Depth-first search
Depth-first search always expands the deepest node in the current fringe of
the search tree
The search proceeds immediately to the deepest level of the search tree, where
the nodes have no successors. As those nodes are expanded, they are dropped
from the fringe, so then the search "backs up" to the next shallowest node that
still has unexplored successors. This strategy can be implemented by TREE-
SEARCH with a last-in-first-out (LIFO) queue, also known as a stack.
Algorithm:
In the worst case, depth-first search has to expand all the nodes.
Because the nodes are expanded along one particular direction, memory is
required only for the nodes on that path.
Completeness: No
Optimality: No
Advantage: If more than one solution exists, or the number of levels is high,
then DFS is best because exploration is done over only a small portion of the
whole space.
Depth-limited search
1. Definition: A cutoff (a maximum depth level) is introduced in this search
technique to overcome the disadvantage of depth-first search. The cutoff value
depends on the number of states.
The number of states in the given map is 5. So, it is possible to get the goal
state at a maximum depth of 4. Therefore the cutoff value is 4
Task : Find a path from A to E.
Optimality: No, because not guaranteed to find the shortest solution first in the
search technique.
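A recursive sketch of depth-limited search; the special cutoff value
distinguishes "the limit was reached, deeper solutions may exist" from outright
failure. The function and marker names are illustrative.

    # Depth-limited search sketch: depth-first search with a cutoff.
    CUTOFF = "cutoff"  # marker: limit was hit, deeper solutions may exist

    def depth_limited_search(state, goal, successors, limit):
        if state == goal:
            return [state]
        if limit == 0:
            return CUTOFF
        cutoff_occurred = False
        for nxt in successors(state):
            result = depth_limited_search(nxt, goal, successors, limit - 1)
            if result == CUTOFF:
                cutoff_occurred = True
            elif result is not None:
                return [state] + result
        return CUTOFF if cutoff_occurred else None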
Iterative deepening search
Limit = 0
Limit = 1
Limit = 2
Solution: The goal state G can be reached from A in four ways. They are:
1. A – B – D - E – G ------ Limit 4
2. A - B - D - E - G -------- Limit 4
3. A - C - E - G -------- Limit 3
4. A - F - G ------- Limit2
Since iterative deepening search selects the lowest depth limit, A - F - G is
selected as the solution path.
Iterative deepening combines the advantage of breadth first search and depth
first search (i.e) expansion of states is done as BFS and memory requirement is
equivalent to DFS.
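Iterative deepening then simply reruns depth-limited search with limits
0, 1, 2, ... until a solution appears. A sketch, reusing depth_limited_search and
CUTOFF from the depth-limited sketch above:

    # Iterative deepening sketch: rerun depth-limited search with
    # increasing limits until a solution is found or search fails.
    def iterative_deepening_search(start, goal, successors, max_depth=50):
        for limit in range(max_depth + 1):
            result = depth_limited_search(start, goal, successors, limit)
            if result != CUTOFF:
                return result   # a solution path, or None for failure
        return None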
Bidirectional search
The forward and backward searches done at the same time will lead to the
solution in O(2 b^(d/2)) = O(b^(d/2)) steps, because each search only has to go
halfway.
If the two searches meet at all, the nodes of at least one of them must all be
retained in memory, which requires O(b^(d/2)) space.
Optimality: Yes, because the order of expansion of states is done in both the
directions.
1. Do not return to the state you just came from, i.e., avoid any successor that
is the same state as the node's parent.
2. Do not create paths with cycles, i.e., avoid any successor of a node that is
the same as any of the node's ancestors.
3. Do not generate any state that was ever generated before.
If the current node matches a node on the closed list, then it is discarded and it
is not considered for expansion. This is done with GRAPH-SEARCH algorithm.
This algorithm is efficient for problems with many repeated states
The worst-case time and space requirements are proportional to the size of the
state space; this may be much smaller than O(b^d).
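As a sketch of how GRAPH-SEARCH enforces rule 3 above: a closed list (here a
Python set) records every expanded state, and any node whose state is already
on it is discarded rather than expanded. The FIFO frontier is an assumption;
any queuing discipline works the same way.

    from collections import deque

    # Graph-search sketch: the closed set implements rule 3, so no
    # state is ever expanded twice.
    def graph_search(start, goal, successors):
        frontier = deque([[start]])
        closed = set()
        while frontier:
            path = frontier.popleft()
            state = path[-1]
            if state == goal:
                return path
            if state in closed:
                continue        # repeated state: discard, do not expand
            closed.add(state)
            for nxt in successors(state):
                frontier.append(path + [nxt])
        return None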