AI Notes: Chapters 1-5
Objectives:
1. To conceptualize the basic ideas and techniques underlying the design of
intelligent systems.
2. To make students understand and explore the mechanisms of mind that enable
intelligent thought and action.
3. To make students understand advanced representation formalism and search techniques.
4. To make students understand how to deal with uncertain and incomplete information.
Genetic algorithms.
3.4 Adversarial Search: Games, Optimal strategies, The minimax algorithm, Alpha-Beta Pruning.
Term Work:
The distribution of marks for term work shall be as follows:
Laboratory work (experiments/case studies): 15 Marks
Assignment: 05 Marks
Attendance: 05 Marks
TOTAL: 25 Marks
Practical/Oral examination:
Practical examination based on the above syllabus will be conducted.
7. Program on unification.
Any other practical covering the syllabus topics and subtopics can be conducted.
Text Books:
1. Stuart J. Russell and Peter Norvig, "Artificial Intelligence: A Modern Approach", Second Edition, Pearson Education.
2. Saroj Kaushik, "Artificial Intelligence", Cengage Learning.
3. George F. Luger, "Artificial Intelligence", Fourth Edition (Low Price Edition), Pearson Education.
Reference Books:
1. Ivan Bratko, "PROLOG Programming for Artificial Intelligence", Third Edition, Pearson Education.
2. Elaine Rich and Kevin Knight, "Artificial Intelligence", Third Edition.
3. David E. Goldberg, "Genetic Algorithms: Search, Optimization and Machine Learning", Addison Wesley, N.Y., 1989.
4. Hagan, Demuth and Beale, "Neural Network Design", Cengage Learning, India Edition.
5. Patrick Henry Winston, "Artificial Intelligence", Third Edition, Addison-Wesley.
6. Han and Kamber, "Data Mining: Concepts and Techniques", Morgan Kaufmann Publishers.
7. N. P. Padhy, "Artificial Intelligence and Intelligent Systems", Oxford University Press.
AI is generally associated with Computer Science, but it has many important links with other
fields such as Maths, Psychology, Cognition, Biology and Philosophy, among many others. Our
ability to combine knowledge from all these fields will ultimately benefit our progress in the
quest to create an intelligent artificial being.
AI is a branch of computer science which is concerned with the study and creation of computer
systems that exhibit some form of intelligence, OR those characteristics which we associate
with intelligence in human behavior.
What is intelligence?
Intelligence is a property of mind that encompasses many related mental abilities, such as the
capabilities to
reason
plan
solve problems
think abstractly
comprehend ideas and language and
learn
Knowledge Base
AI programs should be able to learn and update their knowledge accordingly.
Knowledge base consists of facts and rules.
Characteristics of Knowledge:
o It is voluminous in nature and requires proper structuring
o It may be incomplete and imprecise
o It may keep on changing (dynamic)
Navigational Capability
Navigational capability contains various control strategies
Control Strategy
o determines the rule to be applied
o some heuristics (rules of thumb) may be applied
Inferencing
Inferencing requires
o searching through the knowledge base, and
o deriving new knowledge
1.6 Sub-areas of AI
Sub areas of AI are:
a. Knowledge representation
b. Theorem proving
c. Game playing
d. Common sense reasoning dealing with uncertainty and decision making
e. Learning models, inference techniques, pattern recognition, search and matching etc.
f. Logic (fuzzy, temporal, modal) in AI
g. Planning and scheduling
h. Natural language understanding
1.7 Applications of AI
Some of the applications are given below:
a. Business: financial strategies, giving advice
b. Engineering: checking designs, offering suggestions to create new products
c. Manufacturing: assembly, inspection and maintenance
d. Mining: used when conditions are dangerous
e. Hospitals: monitoring, diagnosing and prescribing
f. Education: in teaching
g. Household: advice on cooking, shopping, etc.
h. Farming: pruning trees and selectively harvesting mixed crops
AI makes heavy use of:
a. probability theory
b. decision theory
c. statistics
d. logic (fuzzy, modal, temporal)
The branch of computer science concerned with making computers behave like humans.
“Artificial Intelligence is the study of human intelligence such that it can be replicated
artificially.”
An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through effectors.
A human agent has eyes, ears, and other organs for sensors, and hands, legs, mouth, and
other body parts for effectors.
A robotic agent substitutes cameras and infrared range finders for the sensors and various
motors for the effectors.
A software agent has encoded bit strings as its percepts and actions.
Simple Terms
Percept
o Agent’s perceptual inputs at any given instant
Percept sequence: Complete history of everything that the agent has ever perceived
Agent’s behavior is mathematically described by
o Agent function
o A function mapping any given percept sequence to an action
Practically, it is described by an agent program, the concrete implementation that runs on the agent's architecture.
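As a small illustration, the agent function is an abstract mapping from percept sequences to actions, while the agent program is code that realizes it. A minimal Python sketch (the table-driven design and the percept names are illustrative assumptions, not from the notes):

# Sketch: an agent program implementing an agent function as a lookup
# table from percept sequences to actions (names are hypothetical).
class TableDrivenAgent:
    def __init__(self, table):
        self.table = table      # maps percept sequences to actions
        self.percepts = []      # the percept sequence (complete history)

    def program(self, percept):
        # The agent program: record the percept, then look up an action.
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "NoOp")

agent = TableDrivenAgent({("Dirty",): "Suck", ("Dirty", "Clean"): "Right"})
print(agent.program("Dirty"))   # -> Suck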
Concept of Rationality
A rational agent is one that does the right thing. As a first approximation, we will say that
the right action is the one that will cause the agent to be most successful.
That leaves us with the problem of deciding how and when to evaluate the agent's success.
We use the term performance measure for the criteria that determine how successful an
agent is.
Environment?
Specifying the environment in which the agent will act is the first step in designing an agent.
If swimming is the task for an agent, then the environment must be water, not air.
e.g. Streets/freeways, traffic, pedestrians, weather . . .
Actuators?
Actuators are the means through which the agent performs actions in its specified
environment.
e.g. Steering, accelerator, brake, horn, speaker/display, . . .
Sensors?
Sensors are the means by which the agent receives different attributes (percepts) from the environment.
e.g. Cameras, accelerometers, gauges, engine sensors, keyboard, GPS . . .
(In designing an agent, the first step must always be to specify the task environment as fully as
possible)
Software agents (or software robots or softbot) exist in rich, unlimited domains. Imagine a
softbot designed to fly a flight simulator for a 747.
The simulator is a very detailed, complex environment, and the software agent must choose from
a wide variety of actions in real time.
Now we have to decide how to build a real program to implement the mapping from percepts to
action.
We will find that different aspects of driving suggest different types of agent program.
Intelligent agents can be grouped into five classes based on their degree of perceived
intelligence and capability:
1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
5. Learning agents
Simple reflex agents act only on the basis of the current percept, ignoring the rest of the
percept history.
The agent function is based on the condition-action rule: if condition then action.
This agent function only succeeds when the environment is fully observable.
Some reflex agents can also contain information on their current state which allows them to
disregard conditions whose actuators are already triggered.
Infinite loops are often unavoidable for simple reflex agents operating in partially observable
environments.
Note: If the agent can randomize its actions, it may be possible to escape from infinite loops.
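A minimal Python sketch of a simple reflex agent, assuming the two-square vacuum world as the environment (the condition-action rules below are an illustrative assumption):

# Simple reflex agent: acts only on the current percept via
# condition-action rules; no percept history is kept.
def simple_reflex_vacuum_agent(percept):
    location, status = percept            # e.g. ("A", "Dirty")
    if status == "Dirty":                 # rule: if dirty then suck
        return "Suck"
    elif location == "A":                 # rule: if in A then move right
        return "Right"
    else:                                 # rule: if in B then move left
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))   # -> Left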
A model-based agent can handle a partially observable environment.
Its current state is stored inside the agent maintaining some kind of structure which
describes the part of the world which cannot be seen.
This knowledge about "how the world works" is called a model of the world, hence the name
"model-based agent".
A model-based reflex agent should maintain some sort of internal model that depends on the
percept history and thereby reflects at least some of the unobserved aspects of the current
state.
Goal-based agents
Goal-based agents further expand on the capabilities of the model-based agents, by using
"goal" information.
This allows the agent a way to choose among multiple possibilities, selecting the one which
reaches a goal state.
Search and planning are the subfields of artificial intelligence devoted to finding action
sequences that achieve the agent's goals.
Although the goal-based agent may appear less efficient in some instances, it is more flexible,
because the knowledge that supports its decisions is represented explicitly and can be modified.
Utility-based agents
Goal-based agents only distinguish between goal states and non-goal states.
A more general performance measure should allow a comparison of different world states
according to exactly how happy they would make the agent.
The term utility is used to describe how "happy" the agent is; this measure can be obtained
through the use of a utility function which maps a state to a measure of the utility of that
state.
A rational utility-based agent chooses the action that maximizes the expected utility of the
action outcomes, that is, the utility the agent expects to derive, on average, given the
probabilities and utilities of each outcome.
A utility-based agent has to model and keep track of its environment, tasks that have
involved a great deal of research on perception, representation, reasoning, and learning.
Learning agents
Learning has an advantage that it allows the agents to initially operate in unknown
environments and to become more competent than its initial knowledge alone might allow.
The learning element uses feedback from the "critic" on how the agent is doing and
determines how the performance element should be modified to do better in the future.
The performance element is what we have previously considered to be the entire agent: it
takes in percepts and decides on actions.
The problem generator is responsible for suggesting actions that will lead to new and informative experiences.
1. Goal Formulation
2. Problem Formulation
3. Search
4. Execute
E.g., driving from Arad to Bucharest...
Note: In this chapter we will consider one running example: a map is given with different cities
connected, and their distance values are also mentioned; the agent starts from one city and must reach another.
Goal Formulation
Goal formulation, based on the current situation, is the first step in problem solving. As well as
formulating a goal, the agent may wish to decide on some other factors that affect the desirability of
different ways of achieving the goal. For now, let us assume that the agent will consider actions at
the level of driving from one major town to another. The states it will consider therefore correspond
to being in a particular town.
Declaring the Goal: goal information is given to the agent, i.e., start from Arad and reach
Bucharest.
Problem Formulation
Problem formulation is the process of deciding what actions and states to consider, given a goal.
Process of looking for action sequence (number of action that agent carried out to
reach to goal) is called search. A search algorithm takes a problem as input and
returns a solution in the form of an action sequence. Once a solution is found, the
actions it recommends can be carried out. This is called the execution phase. Thus, we
have a simple "formulate, search, execute" design for the agent.
Successor function S(x): a description of the possible actions available to the agent. The most
common formulation uses a successor function: given a particular state x, SUCCESSOR-FN(x)
returns a set of <action, successor> ordered pairs, where each action is one of the legal actions in
state x and each successor is a state that can be reached from x by applying that action. E.g., from
state In(Arad), the successor function for the Romania problem would return {<Go(Zerind), In(Zerind)>,
<Go(Sibiu), In(Sibiu)>, <Go(Timisoara), In(Timisoara)>}.
path cost (additive)=Function that assigns a numeric cost to each path. e.g., sum of distances,
number of actions executed, etc. Usually given as c(x, a, y), the step cost from x to y by action a,
assumed to be ≥ 0.
“A solution is a sequence of actions leading from the initial state to a goal state”.
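The components above can be made concrete with a small Python sketch of the Romania route-finding formulation (only a fragment of the map is encoded; the distances follow the standard textbook map, and the dictionary layout is an illustrative choice):

# A fragment of the Romania map: state -> [(action, successor, distance)].
romania = {
    "Arad":  [("Go(Zerind)", "Zerind", 75),
              ("Go(Sibiu)", "Sibiu", 140),
              ("Go(Timisoara)", "Timisoara", 118)],
    "Sibiu": [("Go(Arad)", "Arad", 140),
              ("Go(Fagaras)", "Fagaras", 99),
              ("Go(RimnicuVilcea)", "RimnicuVilcea", 80)],
}

def successor_fn(state):
    # SUCCESSOR-FN(x): the set of <action, successor> pairs for state x.
    return [(action, nxt) for action, nxt, _ in romania.get(state, [])]

def step_cost(state, action, result):
    # c(x, a, y): the distance of the road taken; path cost is additive.
    for a, nxt, d in romania[state]:
        if a == action and nxt == result:
            return d

print(successor_fn("Arad"))
# [('Go(Zerind)', 'Zerind'), ('Go(Sibiu)', 'Sibiu'), ('Go(Timisoara)', 'Timisoara')]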
Example:
The 8-puzzle consists of a 3x3 board with 8 numbered tiles and a blank space. A tile adjacent to the
blank space can slide into the space.
Two types of search strategies are used in path finding:
1) Uninformed search strategies.
2) Informed search strategies.
Uninformed strategies use only the information available in the problem definition
1) Breadth-first search
2) Depth-first search
3) Depth-limited search
4) Iterative deepening search
Algorithm:
1. Place the starting node on the queue.
2. If the queue is empty, return failure and stop.
3. If the first element on the queue is a goal node, return success and stop; otherwise,
4. Remove and expand the first element from the queue and place all its children at the end of the
queue in any order.
5. Go back to step 2.
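A runnable Python sketch of these steps, assuming a successors function such as the one in the Romania sketch above (the FIFO queue is what makes the search breadth-first):

# Breadth-first search following the steps above.
from collections import deque

def bfs(start, goal_test, successors):
    queue = deque([[start]])              # step 1: place the starting node
    visited = {start}
    while queue:                          # step 2: empty queue means failure
        path = queue.popleft()            # step 4: remove the first element
        node = path[-1]
        if goal_test(node):               # step 3: goal node, return success
            return path
        for child in successors(node):    # expand: children go to the end
            if child not in visited:
                visited.add(child)
                queue.append(path + [child])
    return None                           # failure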
Disadvantages:
If the solution is far away from the root, breadth-first search will consume a lot of time.
Algorithm:
1. Push the root node onto a stack.
2. Pop a node from the stack and examine it.
If the element sought is found in this node, quit the search and return a result.
Otherwise push all its successors (child nodes) that have not yet been discovered onto the
stack.
3. If the stack is empty, every node in the tree has been examined – quit the search and return "not
found".
4. If the stack is not empty, repeat from Step 2.
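A Python sketch of the same steps (the LIFO stack is what makes the search depth-first):

# Depth-first search following the steps above.
def dfs(root, goal_test, successors):
    stack = [root]                        # step 1: push the root node
    discovered = {root}
    while stack:                          # step 3: empty stack -> "not found"
        node = stack.pop()                # step 2: pop a node and examine it
        if goal_test(node):
            return node                   # the element sought is found
        for child in successors(node):
            if child not in discovered:   # push undiscovered successors
                discovered.add(child)
                stack.append(child)
    return "not found"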
Advantages:
If depth-first search finds a solution without exploring much of the search space, the time
and space it takes will be very small.
The advantage of depth-first search is that its memory requirement is only linear with respect
to the search depth. This is in contrast with breadth-first search, which requires more space.
Disadvantages:
Depth-first search is not guaranteed to find the solution.
It is not a complete algorithm: it may go into an infinite loop.
Heuristic Function:
“A rule of thumb, simplification, or educated guess that reduces or limits the search for
solutions in domains that are difficult and poorly understood.”
– h(n) = estimated cost of the cheapest path from node n to goal node.
– If n is goal then h(n)=0
It is a technique which evaluates a state and finds the significance of that state w.r.t. the goal state;
because of this it is possible to compare various states and choose the best state to visit next.
Heuristics are used to improve efficiency of search process. The search can be improved by
evaluating the states. The states can be evaluated by applying some evaluation function which tells
the significance of state to achieve goal state. The function which evaluates the state is the heuristic
function and the value calculated by this function is heuristic value of state.
The heuristic function is represented as h(n)
Eg. 8-puzzle problem
Admissible heuristics
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its desired location)
In the example
h1(S) = 6
h2(S) = 2 + 0 + 3 + 1 + 0 + 1 + 3 + 4 = 14
If h2 dominates h1 (i.e., h2(n) ≥ h1(n) for all n), then h2 is better for search than h1.
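Both heuristics are easy to compute. A Python sketch with states as 9-tuples and 0 for the blank (the sample state below is an illustrative assumption, not the state S from the example above):

# The two admissible 8-puzzle heuristics.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def h1(state):
    # Number of misplaced tiles (the blank is not counted).
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def h2(state):
    # Total Manhattan distance of each tile from its goal square.
    dist = 0
    for i, t in enumerate(state):
        if t != 0:
            g = GOAL.index(t)
            dist += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return dist

s = (1, 2, 3, 4, 5, 6, 0, 7, 8)
print(h1(s), h2(s))   # -> 2 2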
1. Iterative deepening A*(IDA*)- Here cutoff information is the f-cost (g+h) instead of depth
2. Recursive best first search (RBFS) - Recursive algorithm that attempts to mimic standard
best-first search with linear space.
3. Simplified Memory bounded A* (SMA*)- Drop the worst-leaf node when memory is full
Since Iterative Deepening A* performs a series of depth-first searches, its memory requirement is
linear with respect to the maximum search depth. In addition, if the heuristic function is admissible,
IDA* finds an optimal solution. Finally, by an argument similar to that presented for DFID, IDA*
expands the same number of nodes, asymptotically, as A* on a tree, provided that the number of
nodes grows exponentially with the solution cost.
– If the current f-value exceeds this alternative f-value, then backtrack to the alternative path.
(The RBFS search-tree figure is omitted; node f-values are written as f = g + h: A: 7 = 2 + 5, E: 9 = 2 + 7, B: 8 = 4 + 4, F: 11 = 7 + 4, C: 10 = 6 + 7, G: 11 = 9 + 2, D: 12 = 9 + 3, T: 11 = 11 + 0.)
– S is expanded; A is found to be the best child.
– A is expanded with bound 9 (the f-value of the alternative E).
– C has f-value 10: stop expansion, back up the f-value, and forget the expansion below A. A now has backed-up f-value 10.
– E is best to expand next; E is expanded with bound 10.
– F has f-value 11: stop expansion, back up the f-value, and forget the expansion below E. E now has backed-up f-value 11.
– A is best to expand next; when B and C are regenerated, they inherit f-value 10 from their parent.
– A is expanded with bound 11.
– D has f-value 12: stop expansion, back up the f-value, and forget the expansion below A. A now has backed-up f-value 12.
– E is best to expand next; E is expanded with bound 12.
– The goal T is reached and the search ends.
For the above example, refer to the class notes.
The previous sections have considered algorithms that systematically search the space. If the space
is finite, they will either find a solution or report that no solution exists. Unfortunately, many search
spaces are too big for systematic search and are possibly even infinite. In any reasonable time,
systematic search will have failed to consider enough of the search space to give any meaningful
results. This section and the next consider methods intended to work in these very large spaces. The
methods do not systematically search the whole search space but they are designed to find solutions
quickly on average. They do not guarantee that a solution will be found even if one exists, and so
they are not able to prove that no solution exists. They are often the method of choice for
applications where solutions are known to exist or are very likely to exist.
Previously we considered systematic (classical) exploration of the search space, where the
path to the goal is the solution to the problem.
Yet for some problems the path is irrelevant,
e.g., the 8-queens problem.
Hill Climbing
Hill climbing is a mathematical optimization technique which belongs to the family of local
search.
It is an iterative algorithm that starts with an arbitrary solution to a problem, then attempts to
find a better solution by incrementally changing a single element of the solution.
If the change produces a better solution, an incremental change is made to the new solution,
repeating until no further improvements can be found.
Hill climbing is good for finding a local optimum (a solution that cannot be improved by
considering a neighboring configuration) but it is not guaranteed to find the best possible
solution (the global optimum) out of all possible solutions (the search space).
Global maximum is the best possible solution and the objective of this search to reach at global
maximum (highest peak on hill).
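A generic Python sketch of the loop just described (the toy objective function is an illustrative assumption):

# Hill climbing: repeatedly move to the best neighbor until no
# neighbor improves the objective, i.e. a local optimum is reached.
def hill_climbing(state, neighbors, value):
    while True:
        best = max(neighbors(state), key=value, default=None)
        if best is None or value(best) <= value(state):
            return state        # local (not necessarily global) optimum
        state = best            # incremental improvement

# Toy usage: maximize f(x) = -(x - 3)^2 over integer steps.
f = lambda x: -(x - 3) ** 2
print(hill_climbing(0, lambda x: [x - 1, x + 1], f))   # -> 3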
Ridges:
A "ridge" which is an area in the search that is higher than the surrounding areas, but cannot be
searched in a simple move.
Plateau:
All steps equal (flat or shoulder)
A plateau is encountered when the search space is flat, or sufficiently flat that the value
returned by the target function is indistinguishable from the value returned for nearby regions
due to the precision used by the machine to represent its value.
In such cases, the hill climber may not be able to determine in which direction it should step,
and may wander in a direction that never leads to improvement.
SA exploits an analogy between annealing and the search for a minimum in a more general
system.
SA uses a random search that accepts changes that decrease objective function f, as well as
some that increase it.
SA uses a control parameter T, which by analogy with the original application is known as
the system "temperature."
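A minimal Python sketch of this idea for minimization (the cooling schedule and the toy problem are illustrative assumptions):

# Simulated annealing: always accept improvements; accept worsening
# moves with probability exp(-delta / T); gradually lower T.
import math, random

def simulated_annealing(state, neighbor, f, T=1.0, cooling=0.995, T_min=1e-3):
    while T > T_min:
        nxt = neighbor(state)
        delta = f(nxt) - f(state)           # increase in the objective f
        if delta < 0 or random.random() < math.exp(-delta / T):
            state = nxt                     # accept the move
        T *= cooling                        # the "temperature" decreases
    return state

# Toy usage: minimize f(x) = (x - 3)^2 with random unit steps.
print(simulated_annealing(0, lambda x: x + random.choice((-1, 1)),
                          lambda x: (x - 3) ** 2))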
Genetic Algorithm
An algorithm is a set of instructions that is repeated to solve a problem.
Genetic Algorithms follow the idea of SURVIVAL OF THE FITTEST- Better and better
solutions evolve from previous generations until a near optimal solution is obtained.
Also known as evolutionary algorithms, genetic algorithms demonstrate self-organization and
adaptation similar to the way the fittest biological organisms survive and reproduce.
Genetic Algorithms are often used to improve the performance of other AI methods such as
expert systems or neural networks.
The method learns by producing offspring that are better and better as measured by a fitness
function, which is a measure of the objective to be obtained (maximum or minimum).
Simple GA {
    initialize population;
    evaluate population;
    while TerminationCriteriaNotSatisfied {
        select parents for reproduction;
        perform crossover and mutation;
        repair();
        evaluate population;
    }
}
Concepts
Fitness score (value): every chromosome has a fitness score, which can be inferred from the
chromosome itself by using the fitness function.
Stochastic operators
Selection replicates the most successful solutions found in a population at a rate proportional
to their relative quality
Recombination (Crossover) decomposes two distinct solutions and then randomly mixes
their parts to form novel solutions
Example:
Suppose a Genetic Algorithm uses chromosomes of the form x = abcdefgh with a fixed length of
eight genes. Each gene can be any digit between 0 and 9. Let the fitness of individual x be
calculated as:
f(x) = (a + b) - (c + d) + (e + f) - (g + h)
And let the initial population consist of four individuals x1, ... ,x4 with the following
chromosomes :
X1 = 6 5 4 1 3 5 3 2
F(x1) = (6+5)-(4+1)+(3+5)-(3+2) = 9
X2 = 8 7 1 2 6 6 0 1
F(x2) = (8+7)-(1+2)+(6+6)-(0+1) = 23
X3 = 2 3 9 2 1 2 8 5
F(x3) = (2+3)-(9+2)+(1+2)-(8+5) = -16
X4 = 4 1 8 5 2 0 9 4
F(x4) = (4+1)-(8+5)+(2+0)-(9+4) = -19
Arrangement (assume maximization), fittest first: X2, X1, X3, X4.

Individuals              String Representation    Fitness
X2 (fittest)             87126601                 23
X1 (second fittest)      65413532                 9
X3 (third fittest)       23921285                 -16
X4 (least fit)           41852094                 -19

Offspring 1              87123532                 15
Offspring 2              65416601                 17
Offspring 3              65921232                 -2
Offspring 4              23413585                 -5

(Offspring 1 and 2 come from one-point crossover of X2 and X1 at the midpoint; Offspring 3 and 4 from crossover of X1 and X3 after the second gene.)
So the overall fitness is improved, since the average fitness is better and the worst individual is improved (from -19 to -5).
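The fitness function and crossover used in this example are easy to check in code. A small Python sketch (the crossover points are those inferred from the offspring above):

# Fitness f(x) = (a+b) - (c+d) + (e+f) - (g+h) on 8-gene chromosomes.
def fitness(x):
    a, b, c, d, e, f, g, h = x
    return (a + b) - (c + d) + (e + f) - (g + h)

def crossover(p1, p2, point):
    # One-point crossover: head of p1 joined to tail of p2.
    return p1[:point] + p2[point:]

x1 = [6, 5, 4, 1, 3, 5, 3, 2]
x2 = [8, 7, 1, 2, 6, 6, 0, 1]
print(fitness(x1), fitness(x2))        # -> 9 23
print(crossover(x2, x1, 4))            # offspring 1: [8, 7, 1, 2, 3, 5, 3, 2]
print(fitness(crossover(x2, x1, 4)))   # -> 15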
Adversarial Search
Games
In multiagent environments, each agent needs to consider the actions of the other agents and
how they affect its own welfare. The unpredictability of these other agents can introduce
contingencies into an agent's problem-solving process. In this section we cover competitive
environments, in which the agents' goals are in conflict, giving rise to adversarial search, often
known as games.
Examine the problems that arise when we try to plan ahead in a world where other agents are
planning against us. A good example is in board games. Adversarial games, while much studied in
AI, are a small part of game theory in economics.
Search:
1. Solution is a (heuristic) method for finding a goal.
2. Heuristic techniques can find an optimal solution.
3. Evaluation function: estimate of cost from start to goal through a given node.
4. Examples: path planning, scheduling activities.
Games:
1. Solution is a strategy (a strategy specifies a move for every possible opponent reply).
2. Optimality depends on the opponent.
3. Time limits force an approximate solution.
4. Evaluation function: evaluates the "goodness" of a game position.
5. Examples: chess, checkers, Othello, backgammon.
Game Setup
Two players: MAX and MIN
MAX moves first and they take turns until the game is over
Winner gets award, loser gets penalty.
Games as search:
Initial state: e.g. board configuration of chess
Successor function: list of (move, state) pairs specifying legal moves.
Terminal test: Is the game finished?
Utility function: gives a numerical value to terminal states, e.g. win (+1), lose (-1) and draw
(0) in tic-tac-toe or chess. MAX uses the search tree to determine the next move.
There are two players involved, MAX and MIN. A search tree is generated, depth-first, starting with
the current game position up to the end-game positions. Then each final game position is evaluated
from MAX's point of view. Afterwards, the inner node values of the tree are filled bottom-up with
the evaluated values. The nodes that belong to the MAX player receive the maximum value of their
children. The nodes for the MIN player will select the minimum value of their children.
The values represent how good a game move is. So the MAX player will try to select the move with the
highest value in the end. But the MIN player also has something to say about it, and he will try to
select the moves that are better for him, thus minimizing MAX's outcome.
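A compact Python sketch of minimax over a small game tree (the nested-list tree encoding and the leaf utilities are illustrative assumptions):

# Minimax: leaves are utility numbers, inner nodes are lists of children.
def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):     # terminal state: return utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# MAX moves first; MIN replies at the second level.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree))   # -> 3 (MIN backs up 3, 2, 2; MAX picks 3)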
Alpha-Beta Pruning
Alpha-beta pruning is a search algorithm that seeks to decrease the number of nodes evaluated
by the minimax algorithm in its search tree. It is an adversarial search algorithm used commonly for
machine playing of two-player games (Tic-tac-toe, Chess, etc.).
α: the best already-explored option along the path to the root for the maximizer.
β: the best already-explored option along the path to the root for the minimizer.
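A sketch of the same minimax search with alpha-beta cut-offs added (it reuses the nested-list tree encoding assumed above and returns the same value while visiting fewer leaves):

# Alpha-beta pruning: stop exploring a node as soon as its value
# can no longer influence the decision at the root.
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        v = float("-inf")
        for child in node:
            v = max(v, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, v)
            if v >= beta:
                break                     # beta cut-off
        return v
    else:
        v = float("inf")
        for child in node:
            v = min(v, alphabeta(child, alpha, beta, True))
            beta = min(beta, v)
            if v <= alpha:
                break                     # alpha cut-off
        return v

print(alphabeta([[3, 12, 8], [2, 4, 6], [14, 5, 2]]))   # -> 3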
Knowledge and representation are distinct entities that play central but distinguishable roles in an
intelligent system.
Knowledge is a description of the world.
– It determines a system's competence by what it knows.
Representation is the way knowledge is encoded.
– It defines the performance of a system in doing something.
Knowledge-based agent:
Performance:
Points are awarded and/or deducted:
– find gold: +1000
– death by Wumpus: -1000
– death by endless pit: -1000
– each action: -1
– picking up the arrow: -10
Environment:
– 4x4 grid.
– Agent starts at 1,1 (lower left).
– Agent starts facing to the right.
– The gold and the Wumpus are placed at random locations in the grid. (can't be in start room).
– Each room other than starting room can be a bottomless pit with probability 0.2
Actuators:
● Turn left 90°
● Turn right 90°
● Move forward into room straight ahead.
– blocked by walls.
– Eaten by a live Wumpus immediately.
– Fall into pit immediately.
● Grab an object in current room.
● Shoot arrow in straight line (only allowed once).
Sensor:
● Agent can smell, detect a breeze, see gold glitter, detect a bump (into a wall) and hear the Wumpus
scream when it is killed.
● Sensors:
– Stench: a neighboring room holds the Wumpus.
– Breeze: a neighboring room holds a pit.
– Glitter: The room the agent is in has some gold.
– Bump: The agent just banged into a wall.
– Scream: The Wumpus just died (shot with arrow).
Percept Sequence:
Percept (1,1) = (None, None, None, None, None)
Percept (2,1) = (None, Breeze, None, None, None)
Percept (1,1) = (None, None, None, None, None)
Percept (1,2) = (Stench, None, None, None, None)
Percept (2,2) = (None, None, None, None, None)
Percept (3,2) = (None, Breeze, None, None, None)
Percept (2,3) = (None, None, Glitter, None, None)
(Each percept lists (Stench, Breeze, Glitter, Bump, Scream), in the order of the sensors above.)
The user defines a set of propositional symbols, like P and Q, and defines the semantics of each of
these symbols. For example,
P means "It is hot"
Q means "It is humid"
R means "It is raining"
Examples of PL sentences:
(P ^ Q) => R (here meaning "If it is hot and humid, then it is raining")
Q => P (here meaning "If it is humid, then it is hot")
Q (here meaning "It is humid.")
Given the truth values of all of the constituent symbols in a sentence, that sentence can be
"evaluated" to determine its truth value (True or False). This is called an interpretation of the
sentence.
A model is an interpretation (i.e., an assignment of truth values to symbols) of a set of sentences
such that each sentence is True. A model is just a formal mathematical structure that "stands in"
for the world.
A valid sentence (also called a tautology) is a sentence that is True under all interpretations.
Hence, no matter what the world is actually like or what the semantics is, the sentence is True.
For example "It's raining or it's not raining."
An inconsistent sentence (also called unsatisfiable or a contradiction) is a sentence that is
False under all interpretations. Hence the world is never like what it describes. For example, "It's
raining and it's not raining."
Examples:
Q.1) Consider the following set of facts:
I. Rani is hungry.
II. If Rani is hungry, she barks.
III. If Rani is barking, then Raja is angry.
Convert these into propositional logic statements.
Solution:
Step 1: We can use the following propositional symbols:
P: Rani is hungry
Q: Rani is barking
R: Raja is angry
Step 2: The facts then become:
I. P
II. P => Q
III. Q => R
Predicate Logic:
First-Order Logic or first order Predicate Logic (FOL or FOPL) Syntax
Connectives. Same as in PL: not (~), and (^), or (v), implies (=>), if and only if (<=>).
E.g., (∀x) cs540-student(x) => smart(x) means "All cs540 students are smart."
You rarely use universal quantification to make blanket statements about every
individual in the world: (∀x) cs540-student(x) ^ smart(x) means that everyone in the
world is a cs540 student and is smart.
Existential quantifiers are usually used with "and" to specify a list of properties or facts about
an individual.
E.g., (∃x) cs540-student(x) ^ smart(x) means "there is a cs540 student who is smart."
A common mistake is to write (∃x) cs540-student(x) => smart(x); but consider what happens
when there is a person who is NOT a cs540-student.
Switching the order of universal quantifiers does not change the meaning: (∀x)(∀y) P(x,y)
is logically equivalent to (∀y)(∀x) P(x,y). Similarly, you can switch the order of existential
quantifiers.
Switching the order of universals and existentials does change the meaning: (∀x)(∃y) likes(x,y)
means "everyone has someone they like", while (∃y)(∀x) likes(x,y) means "there is someone whom
everyone likes".
Resolution
It is a mechanism to infer some fact or to reach some conclusion using logic. In resolution, to
prove that some fact is true, we show that the negation of that fact leads to a contradiction.
Resolution involves the following steps:
1) Convert English statements to either propositional logic or predicate logic statements.
2) Convert logical statement to CNF (Conjunctive Normal Form).
3) Negate the conclusion.
4) Show, using a resolution tree, that the negation of the conclusion leads to a contradiction (the empty clause).
1) Eliminate '=>':
a => b ≡ ~a V b
2) Eliminate '^' (split conjunctions into separate clauses):
a ^ b gives the clauses: a, b
a ^ b ^ ~c gives the clauses: a, b, ~c
a ^ (b V c) gives the clauses: a, (b V c)
a V (b ^ c) ≡ (a V b) ^ (a V c), giving the clauses: (a V b), (a V c)
40
4) Eliminate '∀':
To eliminate '∀', convert the fact into prenex normal form, in which all the universal quantifiers are
at the beginning of the formula, and then drop them.
E.g., "All students are intelligent."
∀x: Student(x) => Intelligent(x)
After eliminating ∀x we get:
Student(x) => Intelligent(x)
The clauses are:
I. P
II. ~P V Q
III. ~Q V R
To prove "Raja is angry" (R), negate it: ~R.
Resolve ~R with ~Q V R, giving ~Q.
Resolve ~Q with ~P V Q, giving ~P.
Resolve ~P with P, giving the empty clause.
Thus we get the empty clause, and we can conclude that "Raja is angry".
The clauses are:
I. P
II. ~P V Q
III. ~Q V ~P V R
To prove "It is raining" (R), negate it: ~R.
Resolve ~R with ~Q V ~P V R, giving ~Q V ~P.
Resolve ~Q V ~P with ~P V Q, giving ~P.
Resolve ~P with P, giving the empty clause.
Thus we get the empty clause, and we can conclude that "It will rain".
The clauses are:
I. ~P V ~Q
II. P V R
III. ~R V S
IV. ~Q V S
The negated conclusion gives the clauses Q and ~S.
Using Q:
Resolve Q with ~P V ~Q, giving ~P.
Resolve ~P with P V R, giving R.
Resolve R with ~R V S, giving S.
Resolve S with ~S, giving the empty clause.
Using ~S:
Resolve ~S with ~R V S, giving ~R.
Resolve ~R with P V R, giving P.
Resolve P with ~P V ~Q, giving ~Q.
Resolve ~Q with Q, giving the empty clause.
Thus we get the empty clause, and we can conclude that "If the butler was guilty then he got the cream".
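The mechanics of a resolution step are simple to express in code. A Python sketch for Example 1, with clauses as frozensets of literals (the encoding is an illustrative choice):

# Resolution: derive the empty clause from the KB plus the negated goal.
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    # All resolvents of two clauses (complementary literals removed).
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

kb = [frozenset({"P"}), frozenset({"~P", "Q"}), frozenset({"~Q", "R"})]
clause = frozenset({"~R"})          # negated conclusion of Example 1
for c in reversed(kb):              # resolve against III, then II, then I
    clause = resolve(clause, c)[0]
print(clause)                       # -> frozenset(): the empty clause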
Forward chaining is a data driven method of deriving a particular goal from a given
knowledge base and set of inference rules
Inference rules are applied by matching facts to the antecedents of consequence relations in
the knowledge base
The application of inference rules results in new knowledge (from the consequents of the
relations matched), which is then added to the knowledge base.
Inference rules are successively applied to elements of the knowledge base until the goal is
reached
A search control method is needed to select which element(s) of the knowledge base to apply
the inference rule to at any point in the deduction
Example:
Knowledge Base:
Goal:
Solution:
Step 1:
Step 3: Again apply an inference rule, this time between "If [X is a frog] Then [X is colored green]"
and "[Fritz is a frog]".
Step 4: The new fact is added to the knowledge base. Every derived fact must be compared with the goal.
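A minimal Python sketch of this loop using the classic Fritz-the-frog rule base (the starting facts and the first rule are assumed for illustration; only the frog-implies-green rule is quoted in the notes):

# Forward chaining: fire rules whose antecedents match known facts,
# add the consequents to the KB, repeat until the goal is derived.
facts = {"Fritz croaks", "Fritz eats flies"}
rules = [({"Fritz croaks", "Fritz eats flies"}, "Fritz is a frog"),
         ({"Fritz is a frog"}, "Fritz is colored green")]
goal = "Fritz is colored green"

changed = True
while changed and goal not in facts:
    changed = False
    for antecedents, consequent in rules:
        if antecedents <= facts and consequent not in facts:
            facts.add(consequent)   # new knowledge enters the KB
            changed = True
print(goal in facts)                # -> True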
Backward Chaining
Backward chaining is a goal driven method of deriving a particular goal from a given
knowledge base and set of inference rules
Inference rules are applied by matching the goal of the search to the consequents of the
relations stored in the knowledge base
When such a relation is found, the antecedent of the relation is added to the list of goals (and
not into the knowledge base, as is done in forward chaining)
Search proceeds in this manner until a goal can be matched against a fact in the knowledge
base
– Remember: facts are simply consequence relations with empty antecedents, so this is
like adding the ‘empty goal’ to the list of goals
As with forward chaining, a search control method is needed to select which goals will be
matched against which consequence relations from the knowledge base
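For contrast, a Python sketch of backward chaining over the same assumed rule base: it starts from the goal and recurses on antecedents instead of growing the knowledge base:

# Backward chaining: match the goal against rule consequents and
# try to prove each antecedent, bottoming out at known facts.
facts = {"Fritz croaks", "Fritz eats flies"}
rules = [({"Fritz croaks", "Fritz eats flies"}, "Fritz is a frog"),
         ({"Fritz is a frog"}, "Fritz is colored green")]

def prove(goal):
    if goal in facts:               # the goal matches a known fact
        return True
    for antecedents, consequent in rules:
        if consequent == goal and all(prove(a) for a in antecedents):
            return True
    return False

print(prove("Fritz is colored green"))   # -> True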
Many times in a complex world, the agent's theory and the events in the environment contradict each
other, and this results in a reduction of the performance measure. E.g., suppose the agent's job is to
drop a passenger off on time, before the flight departs. But the agent knows the problems it can face
during the journey: traffic, a flat tire, or an accident. In these cases the agent cannot give its full
performance. This is called uncertainty.
Probability:
Objective probability
Averages over repeated experiments of random events
o E.g. estimate P (Rain) from historical observation
Makes assertions about future experiments
New evidence changes the reference class
Probability Basics
Prior probability
The prior probability of an event is the probability of the event computed before the collection of
new data. One begins with a prior probability of an event and revises it in the light of new data. For
example, if 0.01 of a population has schizophrenia then the probability that a person drawn at
random would have schizophrenia is 0.01. This is the prior probability. If you then learn that
their score on a personality test suggests the person is schizophrenic, you would adjust your
probability accordingly. The adjusted probability is the posterior probability.
Bayes' Theorem:
Bayes' theorem considers both the prior probability of an event and the diagnostic value of a test to
determine the posterior probability of the event. The theorem is shown below:
P(D|T) = P(T|D) · P(D) / [P(T|D) · P(D) + P(T|D') · P(D')]
where P(D|T) is the posterior probability of diagnosis D given test result T, P(T|D) is the
conditional probability of T given D, P(D) is the prior probability of D, P(T|D') is the conditional
probability of T given not-D, and P(D') is the probability of not-D.
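Plugging in numbers makes the formula concrete. A Python sketch using the 0.01 prior from above (the test's hit rate and false-positive rate are assumed for illustration):

# Posterior P(D|T) from the prior P(D) and the test's conditional probabilities.
def posterior(p_d, p_t_given_d, p_t_given_not_d):
    num = p_t_given_d * p_d
    return num / (num + p_t_given_not_d * (1 - p_d))

# Prior 0.01; assume P(T|D) = 0.90 and P(T|D') = 0.05.
print(posterior(0.01, 0.90, 0.05))   # -> ~0.154, the posterior probability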
Example 2:
• Assume your house has an alarm system against burglary. You live in the seismically active area
and the alarm system can get occasionally set off by an earthquake. You have two neighbors, Mary
and John, who do not know each other. If they hear the alarm they call you, but this is not
guaranteed.
• We want to represent the probability distribution of events: – Burglary, Earthquake, Alarm, Mary
calls and John calls
In a Bayesian belief network (BBN), the full joint distribution is expressed using a set of local
conditional distributions.
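For the burglary network this factorization reads P(B, E, A, J, M) = P(B) P(E) P(A|B,E) P(J|A) P(M|A). A Python sketch (the CPT numbers are assumed for illustration, not given in the notes):

# Joint probability from local conditional distributions.
P_B, P_E = 0.001, 0.002
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}    # P(A=true | B, E)
P_J = {True: 0.90, False: 0.05}                       # P(J=true | A)
P_M = {True: 0.70, False: 0.01}                       # P(M=true | A)

def joint(b, e, a, j, m):
    p = (P_B if b else 1 - P_B) * (P_E if e else 1 - P_E)
    p *= P_A[(b, e)] if a else 1 - P_A[(b, e)]
    p *= (P_J[a] if j else 1 - P_J[a]) * (P_M[a] if m else 1 - P_M[a])
    return p

# Alarm rings and both neighbors call, with no burglary or earthquake.
print(joint(False, False, True, True, True))   # -> ~0.00063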
Learning
Learning denotes changes in a system that enable the system to do the same task more efficiently
next time.
Learning is an important feature of intelligence.
Machine learning, a branch of artificial intelligence, is a scientific discipline concerned with the
design and development of algorithms that allow computers to evolve behaviors based on empirical
data, such as sensor data or databases.
Learning agents
Learning has an advantage that it allows the agents to initially operate in unknown
environments and to become more competent than its initial knowledge alone might allow.
The most important distinction is between the "learning element", which is responsible for
making improvements, and the "performance element", which is responsible for selecting
external actions.
The learning element uses feedback from the "critic" on how the agent is doing and
determines how the performance element should be modified to do better in the future.
The performance element is what we have previously considered to be the entire agent: it
takes in percepts and decides on actions.
Forms of Learning
Supervised Learning
An agent tries to find a function that matches examples from a sample set; each example
provides an input together with the correct output
A teacher provides feedback on the outcome, the teacher can be an outside entity, or part of
the environment
Goal is to build general model that will produce correct output on novel input.
Unsupervised Learning
Unsupervised learning seems much harder: the goal is to have the computer learn how to do
something that we don't tell it how to do.
Reinforcement Learning
Learning from feedback (+ve or -ve reward) given at the end of a sequence of steps. Unlike supervised
learning, reinforcement learning takes place in an environment where the agent cannot directly
compare the results of its actions to a desired result. Instead, it is given some reward or punishment
that relates to its actions. It may win or lose a game, or be told it has made a good move or a poor
one. The job of reinforcement learning is to find a successful function using these rewards.
Addresses the question of how an autonomous agent that senses and acts in its environment
can learn to choose optimal actions to achieve its goals
Use reward or penalty to indicate the desirability of the resulting state
Example problems
o control a mobile robot
o learn to optimize operations in a factory
o learn to play a board game
Inductive Learning
Inductive learning is supervised learning.
Simplest form: learn a function from examples, where f is the target function and an example
is a pair (x, f(x)).
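A tiny Python sketch of this idea: fit a hypothesis h to example pairs and use it on a novel input (the sample data and the choice of a least-squares line as the hypothesis space are illustrative assumptions):

# Inductive learning: from examples (x, f(x)), fit h and predict.
examples = [(1, 2.1), (2, 3.9), (3, 6.1), (4, 8.0)]

n = len(examples)
sx = sum(x for x, _ in examples)
sy = sum(y for _, y in examples)
sxx = sum(x * x for x, _ in examples)
sxy = sum(x * y for x, y in examples)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

h = lambda x: slope * x + intercept   # the learned hypothesis h, approximating f
print(round(h(5), 2))                 # -> 10.0, prediction on a novel input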
Example
Should we wait for a table at a restaurant?
Possible attributes:
Alternate restaurant nearby?
Is there a bar to wait in?
Is it Friday or Saturday?
How hungry are we?
How busy is the restaurant?
How many people in the restaurant?
“In short, an ES is an intelligent computer program that can perform special and difficult task(s) in
some field(s) at the level of human experts.”
• Increased availability
• Reduced Danger
• Reduced Cost
• Multiple expertise
• Increased Reliability
• Explanation facility
• Fast Response
• Intelligent tutor
Knowledge Base
To store knowledge from the experts of the special field(s); it contains facts and feasible operators.
The other data is stored in a separate database, called the global database, or simply the database.
Reasoning Machine
According to the information from the knowledge base, the reasoning machine can coordinate
the whole system in a logical manner, draw inferences and make decisions.
User Interface
The user interacts with the expert system in problem-oriented language such as in restricted
English, graphics or a structure editor. The interface mediates information exchanges between the
expert system and the human user.
Interpreter
Through the user interface, the interpreter explains user questions and commands, and other
information generated by the expert system, including answers to questions, explanations and
justifications for its behavior, and requests for data.
Blackboard
To record intermediate hypotheses and decisions that the expert system manipulates.
Note:
Almost no existing expert system contains all the components shown above, but some components,
especially the knowledge base and the reasoning machine, occur in almost all expert systems. Many ESs
use a global database in place of the blackboard. The global database contains information related to
specific tasks and the current state.
1. Assessment
• Resource requirements
• Sources of knowledge
2. Knowledge Acquisition
• The bottleneck in ES development
3. Design
• An iterative process
4. Testing
• Work closely with the domain expert, who guides the growth of the knowledge, and the end
user, who guides the user interface design
5. Documentation
• Compile all the project's information into a document for the users and developers of
the system, such as:
• User manual
• Diagrams
• Knowledge dictionary
6. Maintenance
Expert system: the finished system captures, distributes and leverages expertise.
Conventional program: the finished product automates manual procedures.