UNIT-I-PPT
Introduction
Why Study AI?
• AI makes computers more useful
• An intelligent computer would have a huge impact on civilization
• AI cited as “field I would most like to be in” by
scientists in all fields
• Computer is a good metaphor for talking and
thinking about intelligence
Why Study AI?
• Turning theory into working programs forces
us to work out the details
• AI yields good results for Computer Science
• AI yields good results for other fields
• Computers make good experimental subjects
What is the definition of AI?
Bellman, 1978
“[The automation of] activities that we associate with human thinking, activities
such as decision making, problem solving, learning”
What is the definition of AI?
Systems that think like humans | Systems that think rationally
Systems that act like humans   | Systems that act rationally
Haugeland, 1985
“The exciting new effort to make computers think … machines with minds, in the
full and literal sense”
What is the definition of AI?
Kurzweil, 1990
“The art of creating machines that perform functions that require
intelligence when performed by people”
What is the definition of AI?
Nilsson, 1998
“Many human mental activities such as writing computer programs, doing
mathematics, engaging in common sense reasoning, understanding
language, and even driving an automobile, are said to demand intelligence.
We might say that [these systems] exhibit artificial intelligence”
What is the definition of AI?
Schalkoff, 1990
“A field of study that seeks to explain and emulate intelligent behavior in
terms of computational processes”
What is the definition of AI?
Winston, 1992
“The study of the computations that make it possible to perceive, reason, and
act”
Approach 1: Acting Humanly
• Turing test: ultimate test for acting humanly
– Computer and human both interrogated by judge
– Computer passes test if judge can’t tell the difference
How effective is this test?
• Agent must:
– Have command of language
– Have wide range of knowledge
– Demonstrate human traits (humor, emotion)
– Be able to reason
– Be able to learn
• Loebner prize competition is modern version of
Turing Test
– Example: Alice, Loebner prize winner for 2000 and
2001
Approach 2: Thinking Humanly
• Requires knowledge of brain function
• What level of abstraction?
• How can we validate this?
• This is the focus of Cognitive Science
Approach 3: Thinking Rationally
• Aristotle attempted this
• What are correct arguments or thought
processes?
• Provided foundation of much of AI
• Not all intelligent behavior controlled by logic
• What is our goal? What is the purpose of
thinking?
Approach 4: Acting Rationally
• Act to achieve goals, given a set of beliefs
• Rational behavior is doing the “right thing”
– That which is expected to maximize goal achievement
• This is approach adopted by Russell & Norvig
Foundations of AI
Philosophy
Can formal rules be used to draw valid conclusions?
How does the mind arise from a physical brain?
Where does knowledge come from?
How does knowledge lead to action?
• Aristotle (384–322 B.C.), was the first to formulate a precise set of
laws governing the rational part of the mind. He developed an
informal system of syllogisms for proper reasoning, which in
principle allowed one to generate conclusions mechanically, given
initial premises.
• Around 400 B.C., Socrates asked for an algorithm to distinguish pious from
non-pious individuals
• Aristotle developed laws for reasoning
– The final element in the philosophical picture of the mind
is the connection between knowledge and action
– Intelligence requires action as well as reasoning
Foundations of AI
Mathematics
What are the formal rules to draw valid conclusions?
What can be computed?
How do we reason with uncertain information?
– Formal science required a level of mathematical
formalization in three fundamental areas: logic,
computation, and probability.
• Mathematical development really began with the work of
George Boole (1815–1864), who worked out the details of
propositional, or Boolean, logic (Boole, 1847)
• Theory of probability. The Italian Gerolamo Cardano (1501–
1576) first framed the idea of probability, describing it in
terms of the possible outcomes of gambling events.
Foundations of AI
Economics
How should we make decisions so as to maximize
payoff?
How should we do this when others may not go
along?
How should we do this when the payoff may be far in
the future?
• The science of economics got its start in 1776, when Scottish
philosopher Adam Smith (1723–1790) published An Inquiry
into the Nature and Causes of the Wealth of Nations.
• Smith was the first to treat it as a science, using the idea that
economies can be thought of as consisting of individual
agents maximizing their own economic well-being.
Foundations of AI
Neuroscience
How do brains process information?
Neuroscience is the study of the nervous system,
particularly the brain.
Paul Broca’s (1824–1880) study of aphasia (speech
deficit) in brain-damaged patients in 1861
demonstrated the existence of localized areas of the
brain responsible for specific cognitive functions
Foundations of AI
Psychology
How do humans and animals think and act?
Cognitive psychology, which views the brain as an
information-processing device, can be traced back at
least to the works of William James (1842–1910).
Helmholtz also insisted that perception involved a form
of unconscious logical inference.
Foundations of AI
• Computer engineering
– How can we build an efficient computer?
– For artificial intelligence to succeed, we need two things:
intelligence and an artifact.
• Control theory and cybernetics
– How can artifacts operate under their own control?
– Ktesibios of Alexandria (c. 250 B.C.) built the first self-controlling
machine: a water clock with a regulator that maintained a
constant flow rate.
• Linguistics
– How does language relate to thought?
– 1957, Skinner studied behaviorist approach to language learning
– Modern linguistics and AI, then, were “born” at about the same
time, and grew up together, intersecting in a hybrid field called
computational linguistics or natural language processing.
History of AI
• CS-based AI started with “Dartmouth Conference” in 1956
• Attendees
– John McCarthy
• LISP, application of logic to reasoning
– Marvin Minsky
• Popularized neural networks
• Slots and frames
• The Society of the Mind
– Claude Shannon
• Computer checkers
• Information theory
• Open-loop 5-ball juggling
– Allen Newell and Herb Simon
• General Problem Solver
AI Applications
• Robotic vehicles: A driverless robotic car named STANLEY sped through the
rough terrain of the Mojave Desert at 22 mph, finishing the 132-mile course
first to win the 2005 DARPA Grand Challenge.
• Speech recognition
• Autonomous planning and scheduling: A hundred million miles from Earth,
NASA’s Remote Agent program became the first on-board autonomous
planning program to control the scheduling of operations for a spacecraft
• Game playing: IBM’s DEEP BLUE became the first computer program to
defeat the world champion in a chess match when it bested Garry Kasparov
by a score of 3.5 to 2.5 in an exhibition match
• Logistics planning: During the Persian Gulf crisis of 1991, U.S. forces deployed
a Dynamic Analysis and Replanning Tool, DART (Cross and Walker, 1994), to
do automated logistics planning and scheduling for transportation
• Robotics: The iRobot Corporation has sold over two million Roomba robotic
vacuum cleaners for home use.
• Machine Translation: A computer program automatically translates from one
language to another
Which of these can currently be done?
• Play a decent game of table tennis
• Poker
• Backgammon

SimpleReflexAgent(percept)
    state = InterpretInput(percept)
    rule = RuleMatch(state, rules)
    action = RuleAction(rule)
    return action
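The pseudocode above can be sketched in Python. The rule table, percept format, and helper names below are illustrative assumptions, not part of the slide:

```python
# Minimal sketch of the SimpleReflexAgent pseudocode.
# Rule table and percept format are hypothetical examples.

def interpret_input(percept):
    # Reduce the raw percept to an abstract state description.
    location, status = percept
    return (location, status)

def rule_match(state, rules):
    # Look up the rule whose condition matches the state.
    return rules.get(state)

def rule_action(rule):
    # Here a rule is simply the action it recommends.
    return rule

def simple_reflex_agent(percept, rules):
    state = interpret_input(percept)
    rule = rule_match(state, rules)
    return rule_action(rule)

# Hypothetical condition-action rules for a two-square vacuum world.
rules = {
    ("A", "dirty"): "suck",
    ("A", "clean"): "right",
    ("B", "dirty"): "suck",
    ("B", "clean"): "left",
}

print(simple_reflex_agent(("A", "dirty"), rules))  # -> suck
```

Note that the agent consults only the current percept; it keeps no history, which is what distinguishes it from the model-based agents described later.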
Example: Vacuum Agent
• Performance?
– 1 point for each square cleaned in time T?
– #clean squares per time step - #moves per time step?
• Environment: vacuum, dirt, multiple areas defined by square regions
• Actions: left, right, suck, idle
• Sensors: location and contents
– [A, dirty]
• Model-based agents:
– Store previously observed information (a knowledge base)
– Know how the world evolves
– Know how the agent’s actions affect the world
– Update their internal state from the percept history
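A minimal sketch of such a model-based agent for the vacuum world, assuming a two-square world (A, B) and an illustrative action set:

```python
# Model-based vacuum agent: stores what it has observed (a small
# knowledge base) and updates its internal state on each percept.
# The two-square world and the action rules are illustrative assumptions.

class ModelBasedVacuumAgent:
    def __init__(self):
        self.model = {"A": None, "B": None}  # last known status per square

    def update_state(self, percept):
        location, status = percept
        self.model[location] = status        # remember what was observed
        return location, status

    def choose_action(self, percept):
        location, status = self.update_state(percept)
        if status == "dirty":
            return "suck"
        # If the other square is known to be clean, idle instead of moving.
        other = "B" if location == "A" else "A"
        if self.model[other] == "clean":
            return "idle"
        return "right" if location == "A" else "left"

agent = ModelBasedVacuumAgent()
print(agent.choose_action(("A", "dirty")))  # -> suck
print(agent.choose_action(("A", "clean")))  # -> right (B still unknown)
```

The `idle` case is exactly what the internal model buys: a pure reflex agent would keep shuttling between squares even after both are known to be clean.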
Reflex Vacuum Agent
Search
Search
• Search permeates all of AI
• What choices are we searching through?
– Problem solving
Action combinations (move 1, then move 3, then move 2...)
– Natural language
Ways to map words to parts of speech
– Computer vision
Ways to map features to object model
– Machine learning
Possible concepts that fit examples seen so far
– Motion planning
Sequence of moves to reach goal destination
• An intelligent agent is trying to find a set or sequence of
actions to achieve a goal.
• This is a goal-based agent
Problem-solving Agent
Assumptions
• Static or dynamic? – Environment is static
• Fully or partially observable? – Environment is fully observable
• Discrete or continuous? – Environment is discrete
• Deterministic or stochastic? – Environment is deterministic
• Episodic or sequential? – Environment is sequential
• Single agent or multiple agents? – Single agent
Search Example
Formulate goal: Be in Bucharest.
Formulate problem: states are cities, operators drive between pairs of cities.

Example: Water Jug Problem
Operators:
• fill jug x from faucet
• pour contents of jug x into jug y until y is full
• dump contents of jug x down drain
Goal: (2, n)
Branch factor = 2
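These operators can be explored with breadth-first search. Jug capacities of 4 and 3 gallons are an assumption here (the classic version of the puzzle); the slide only fixes the operators and the goal (2, n):

```python
from collections import deque

# BFS over the water-jug state space using the three operators above.
# CAP = (4, 3) is an assumed pair of jug capacities.

CAP = (4, 3)

def successors(state):
    results = []
    for x in range(2):
        y = 1 - x
        # fill jug x from faucet
        s = list(state); s[x] = CAP[x]
        results.append(tuple(s))
        # dump contents of jug x down drain
        s = list(state); s[x] = 0
        results.append(tuple(s))
        # pour jug x into jug y until y is full (or x is empty)
        s = list(state)
        amount = min(s[x], CAP[y] - s[y])
        s[x] -= amount; s[y] += amount
        results.append(tuple(s))
    return results

def solve(start=(0, 0), goal_amount=2):
    frontier = deque([(start, [start])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state[0] == goal_amount:      # goal (2, n): 2 gallons in jug 0
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

print(solve())
```

Because BFS expands states level by level, the first path found uses the fewest operator applications.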
Analysis (Breadth-First Search)
• Assume goal node at level d with constant branching factor b
• Advantages
– Simple to implement
– Complete
– Finds a solution (optimal when all step costs are equal)
• Disadvantages
– Requires much more memory
Analysis
• See what happens with b=10
– expand 1 Million nodes/second
– 1,000 bytes/node
Algorithm (Depth-First Search):
I. Push root node onto stack
II. While stack is not empty
a) Pop a node
i) if node = Goal, stop
ii) Push all children of node onto stack
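The stack-based loop above, sketched in Python (the example tree is an illustrative assumption):

```python
# Depth-first search: push the root, then repeatedly pop a node,
# test it against the goal, and push its children.

def dfs(start, goal, children):
    stack = [start]
    visited = set()
    while stack:                      # while stack is not empty
        node = stack.pop()            # remove node from top of stack
        if node == goal:
            return True
        if node in visited:
            continue
        visited.add(node)
        for child in children.get(node, []):
            stack.append(child)       # push all children of node
    return False

# Hypothetical tree: A is the root, G is the goal.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(dfs("A", "G", tree))  # -> True
print(dfs("A", "Z", tree))  # -> False
```

The `visited` set is a safeguard for graphs; on a finite tree the bare stack loop from the slide already terminates.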
DFS Examples
Analysis
• Time complexity
– In the worst case, search the entire space
– Goal may be at level d but the tree may continue to level m, m >= d
– O(b^m)
– Particularly bad if the tree is infinitely deep
• Space complexity
– Only need to save one set of children at each level
– b + b + … + b (m levels total) = O(bm), linear in depth
– For the previous example, DFS requires 118 KB instead of 10 petabytes for d = 12 (10 billion times less)
• Drawbacks
– May not find a solution (can follow an infinite path)
– Solution is not necessarily shortest or least cost
• Benefits
– If there are many solutions, may find one quickly (quickly moves to depth d)
– Simple to implement
– Space is often the bigger constraint, so more usable than BFS for large problems
Comparison of Search Techniques
           DFS     BFS
Complete   N       Y
Optimal    N       N
Heuristic  N       N
Time       b^m     b^(d+1)
Space      bm      b^(d+1)
Uniform Cost Search (Branch&Bound)
(Trace on the example graph; the goal is reached through node G.)

OPEN        CLOSED
[A]         [ ]
[C,B,D]     [A]
[F,E,B,D]   [A,C]
[G,E,B,D]   [A,C,F]
[E,B,D]     [A,C,F,G]
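Uniform-cost search keeps the OPEN list ordered by path cost g(n); a sketch using a priority queue follows. The weighted graph is an illustrative assumption, not the slide's figure:

```python
import heapq

# Uniform-cost search: always expand the frontier node with the
# smallest path cost g(n).

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]   # (g, node, path), ordered by g
    explored = set()
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for neighbor, cost in graph.get(node, []):
            if neighbor not in explored:
                heapq.heappush(frontier,
                               (g + cost, neighbor, path + [neighbor]))
    return None

# Hypothetical weighted graph.
graph = {
    "A": [("B", 4), ("C", 1), ("D", 5)],
    "C": [("E", 2), ("F", 1)],
    "F": [("G", 3)],
}
print(uniform_cost_search(graph, "A", "G"))  # -> (5, ['A', 'C', 'F', 'G'])
```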
Comparison of Search Techniques
           DFS   BFS       UCS   IDS   Best
Complete   N     Y         Y     Y     N
Optimal    N     N         Y     N     N
Heuristic  N     N         N     N     Y
Time       b^m   b^(d+1)   b^m   b^d   b^m
Space      bm    b^(d+1)   b^m   bd    b^m
Beam Search
• Optimized version of Best-First search
• Heuristic search algorithm
• Beam width (β): only a predetermined number of best partial
solutions are kept as candidates
• Only keep the best (lowest-h) n nodes on the open list
• Explores a graph by expanding the most promising node within a
limited set
• Reduces memory requirement
• Uses greedy (best-first) expansion
Example
(Search graph figure: start node A, goal node G; edge costs omitted here.)
Euclidean (straight-line) distances to the goal G:
AG=40, BG=32, CG=25, DG=35, EG=19, FG=17, GG=0, HG=10

Beam search trace, with Beam Value (β) = 2:

OPEN        CLOSED
[A]         [ ]
[C,B,D]     [A]
[F,E,B]     [A,C]
[G,E]       [A,C,F]
[E]         [A,C,F,G]
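A sketch of beam search with beam width β. The heuristic values mirror the example above, but the graph's edges are an assumption, since the original figure is not reproducible here:

```python
import heapq

# Beam search: best-first expansion, but after each step only the
# beta lowest-h nodes are kept on the OPEN list.

def beam_search(graph, h, start, goal, beta):
    frontier = [start]
    closed = []
    while frontier:
        node = min(frontier, key=lambda n: h[n])  # most promising node
        frontier.remove(node)
        if node == goal:
            return closed + [node]
        closed.append(node)
        for child in graph.get(node, []):
            if child not in closed and child not in frontier:
                frontier.append(child)
        # prune: keep only the beta best (lowest-h) candidates
        frontier = heapq.nsmallest(beta, frontier, key=lambda n: h[n])
    return None

# Hypothetical edges; h values are taken from the example above.
graph = {"A": ["B", "C", "D"], "C": ["E", "F"], "F": ["G"]}
h = {"A": 40, "B": 32, "C": 25, "D": 35,
     "E": 19, "F": 17, "G": 0, "H": 10}
print(beam_search(graph, h, "A", "G", beta=2))  # -> ['A', 'C', 'F', 'G']
```

Pruning to β candidates is what makes the memory bound, and also what makes beam search incomplete: the goal can be pruned away.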
Comparison of Search Techniques
           DFS   BFS       UCS   IDS   Best   Beam
Complete   N     Y         Y     Y     N      N
Optimal    N     N         Y     N     N      N
Heuristic  N     N         N     N     Y      Y
Time       b^m   b^(d+1)   b^m   b^d   b^m    nm
Space      bm    b^(d+1)   b^m   bd    b^m    bn
Hill Climbing Search
• Variant of generate-and-test in which feedback from the test
procedure helps the generator decide which direction to move in the
search space
• Always moves in a single direction
• Like DFS in its memory behavior
• Local search algorithm
• Uses greedy search
• Hill climbing is irrevocable
• Relation to beam width n:
– n = 1: hill climbing
– n = infinity: best-first search
Hill Climbing Search (flowchart)
1. Evaluate the initial state; set current state CS = initial state.
2. If CS is a goal state, return the solution.
3. Generate a next state NS.
4. If NS is better than CS, set CS = NS and go to step 2.
5. Otherwise, stop.
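The loop above can be sketched over a numeric state space. The objective function and the neighborhood below are illustrative assumptions (maximize f(x) = -(x - 3)^2 over the integers):

```python
# Hill climbing: move to a better neighbor until none exists.

def hill_climb(start, f, neighbors):
    current = start
    while True:
        best = max(neighbors(current), key=f, default=current)
        if f(best) <= f(current):      # no neighbor is better: stop
            return current
        current = best                 # CS = NS, repeat

f = lambda x: -(x - 3) ** 2            # single peak at x = 3
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(0, f, neighbors))     # -> 3 (the global maximum here)
```

On this single-peaked f the climb always reaches the global maximum; with multiple peaks it stops at whichever local maximum the start state leads to, which is exactly the issue the next slides discuss.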
Hill Climbing (Greedy Search)
• Features
– Much faster
– Less memory
– Dependent upon h(n)
– If bad h(n), may prune away all goals
– Not complete
Hill Climbing Issues
• Also referred to as gradient descent
• Foothill problem / local maxima / local minima
• Can be solved with random walk or more steps
• Other problems: ridges, plateaus
(Figure: state-space landscape — objective value vs. state, showing a
global maximum and local maxima.)
Comparison of Search Techniques
           DFS   BFS       UCS   IDS   Best   HC    Beam
Complete   N     Y         Y     Y     N      N     N
Optimal    N     N         Y     N     N      N     N
Heuristic  N     N         N     N     Y      Y     Y
Time       b^m   b^(d+1)   b^m   b^d   b^m    b^m   nm
Space      bm    b^(d+1)   b^m   bd    b^m    b     bn
A* Search
• Uses the heuristic function h(n) and the path cost g(n) of reaching
node n on the way from the initial state to the goal state
– f(n) = g(n) + h(n)
• Finds the shortest path through the search space
• Note that UCS and Best-first each improve search
– UCS keeps solution cost low
– Best-first helps find a solution quickly
– A* combines these approaches
• Gives fast and optimal results
• It is optimal and complete
• It solves complex problems
• Requires more memory
A* Search – Algorithm
i. Enter the initial node in the OPEN list
ii. If OPEN = empty, return Fail
iii. Select the node from OPEN with the smallest value of (g + h);
if node = Goal, return Success
iv. Expand node n: generate all successors of n and compute (g + h)
for each successor
v. If a successor is already in OPEN/CLOSED, keep the cheaper path
(update its back pointer)
vi. Go to step (iii)
Example
Graph (reconstructed from the path sums below): edges S–A=1, S–B=4,
A–B=2, A–C=5, A–D=12, B–C=2, C–D=3; heuristic h: S=7, A=6, B=2, C=1,
D=0; START = S, GOAL = D.

Solution space (f = g + h along each candidate path):
• S→A: 1+6=7
• S→B: 4+2=6
• S→B→C: 4+2+1=7
• S→B→C→D: 4+2+3+0=9
• S→A→B: 1+2+2=5
• S→A→C: 1+5+1=7
• S→A→D: 1+12+0=13
• S→A→B→C: 1+2+2+1=6
• S→A→B→C→D: 1+2+2+3+0=8 (optimal)
• S→A→C→D: 1+5+3+0=9
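The algorithm can be sketched on this example; the edge costs and h values below are inferred from the path sums listed above:

```python
import heapq

# A* search: expand the OPEN node with the smallest f = g + h,
# keeping only the cheapest known path to each node.

graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("C", 5), ("D", 12)],
    "B": [("C", 2)],
    "C": [("D", 3)],
}
h = {"S": 7, "A": 6, "B": 2, "C": 1, "D": 0}

def a_star(start, goal):
    open_list = [(h[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)  # smallest f first
        if node == goal:
            return g, path
        for neighbor, cost in graph.get(node, []):
            g2 = g + cost
            # keep only the cheaper path to an already-seen node
            if g2 < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = g2
                heapq.heappush(open_list,
                               (g2 + h[neighbor], g2, neighbor,
                                path + [neighbor]))
    return None

print(a_star("S", "D"))  # -> (8, ['S', 'A', 'B', 'C', 'D'])
```

The returned cost 8 matches the cheapest path sum in the list above (S→A→B→C→D).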
Power of f
• If the heuristic function is wrong, it either
– overestimates (guesses too high)
– underestimates (guesses too low)
• Overestimating is worse than underestimating
• A* returns an optimal solution if h(n) is admissible
– a heuristic function is admissible if it never
overestimates the true cost to the nearest goal
– if a search finds the optimal solution using an admissible
heuristic, the search is admissible
Comparison of Search Techniques
           DFS   BFS       UCS   IDS   Best   HC    Beam   A*
Complete   N     Y         Y     Y     N      N     N      Y
Optimal    N     N         Y     N     N      N     N      Y
Heuristic  N     N         N     N     Y      Y     Y      Y
Time       b^m   b^(d+1)   b^m   b^d   b^m    b^m   nm     b^m
Space      bm    b^(d+1)   b^m   bd    b^m    b     bn     b^m
IDA*
• Iterative Deepening A* (IDA*) is a graph-traversal and path-finding
method for determining the shortest route in a weighted graph from a
defined start node to any goal node
• It is a series of depth-first searches
• Like Iterative Deepening Search, except
– Uses an A* cost threshold (f = g + h) instead of a depth threshold
– Ensures an optimal solution
IDA*
• Initialization
– Set the root node as the current node and compute its f-score
• Set threshold
– Set the cost limit as a threshold: the maximum f-score allowed for
a node in this round of exploration
• Node expansion
– Expand the current node to its children and compute their f-scores
• Pruning
– If a node’s f-score > threshold, prune that node because it is
considered too expensive, and record it in the visited-node list
• Return path
– If the goal node is found, return the solution
• Update the threshold
– If the goal node is not found, repeat the above steps with an
increased threshold (the smallest pruned f-score)
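The steps above can be sketched as a recursive depth-first search bounded by an f-threshold. The graph and heuristic are illustrative assumptions:

```python
# IDA*: a series of depth-first searches bounded by f = g + h.
# Each round raises the threshold to the smallest f value that
# was pruned in the previous round.

def ida_star(graph, h, start, goal):
    def search(node, g, threshold, path):
        f = g + h[node]
        if f > threshold:
            return f                     # pruned: report the exceeding f
        if node == goal:
            return path
        minimum = float("inf")
        for neighbor, cost in graph.get(node, []):
            if neighbor not in path:     # avoid cycles on the current path
                result = search(neighbor, g + cost, threshold,
                                path + [neighbor])
                if isinstance(result, list):
                    return result
                minimum = min(minimum, result)
        return minimum

    threshold = h[start]
    while True:
        result = search(start, 0, threshold, [start])
        if isinstance(result, list):
            return result
        if result == float("inf"):
            return None                  # no path exists
        threshold = result               # raise threshold and retry

# Hypothetical weighted graph and admissible heuristic.
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2)], "B": [("G", 5)]}
h = {"S": 7, "A": 6, "B": 2, "G": 0}
print(ida_star(graph, h, "S", "G"))  # -> ['S', 'A', 'B', 'G']
```

Only the current path is stored, which is why IDA*'s space cost stays linear while A*'s OPEN list can grow exponentially.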
Analysis
• Some redundant search
– Small amount compared to the work done on the last iteration
• Dangerous if h(n) takes continuous values or values that are very
close together
– If threshold = 21.1 and the next value is 21.2, each iteration may
include only 1 new node
• Time complexity is O(b^m)
• Space complexity is O(bm)
Comparison of Search Techniques
           DFS   BFS       UCS   IDS   Best   HC    Beam   A*    IDA*
Complete   N     Y         Y     Y     N      N     N      Y     Y
Optimal    N     N         Y     N     N      N     N      Y     Y
Heuristic  N     N         N     N     Y      Y     Y      Y     Y
Time       b^m   b^(d+1)   b^m   b^d   b^m    b^m   nm     b^m   b^m
Space      bm    b^(d+1)   b^m   bd    b^m    b     bn     b^m   bm
RBFS
• Recursive Best-First Search
• Recursive: backtracks to a previous level when an alternative has a
smaller f value
• Keeps track of the alternative (next-best) subtree
• Updates f values before (from the parent) and after (from the
descendants) each recursive call
Analysis
– If h2(n) >= h1(n) for all n (both admissible), h2 dominates h1 and
expands no more nodes than h1
Local Search Algorithms
• Hill climbing
• Simulated annealing
• Genetic algorithms
Steepest-Ascent Hill Climbing
• In steepest ascent, multiple candidate moves are checked: all
neighbor nodes are examined, and the node closest to the goal is
selected as the next node.
• Stochastic hill climbing chooses at random from among
the uphill moves.
• First-choice hill climbing implements stochastic hill
climbing by generating successors randomly until one is
generated that is better than the current state.
• The hill-climbing algorithms described so far are
incomplete—they often fail to find a goal when one
exists because they can get stuck on local maxima.
• Random-restart hill climbing adopts the well-known
adage, “If at first you don’t succeed, try, try again.” It
conducts a series of hill-climbing searches from
randomly generated initial states, until a goal is found.
Steepest-Ascent Hill Climbing (flowchart)
1. Evaluate the initial state; if it is a goal state, return the solution.
2. Generate all successors of the current state.
3. If no successor is better than the current state, return the current state.
4. Otherwise, move to the best successor and go to step 1.

Hill Climbing (gradient ascent/descent)
Simulated annealing relaxes the stopping rule: a worse successor can
still be accepted with probability e^(ΔE/T), where ΔE is the change in
value and T is the temperature:
    else current = next with probability e^(ΔE/T)
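A sketch of simulated annealing using the acceptance rule e^(ΔE/T). The objective function, cooling schedule, and parameters are illustrative assumptions (maximize f(x) = -(x - 3)^2 over the integers):

```python
import math
import random

# Simulated annealing: like hill climbing, but a worse successor is
# accepted with probability e^(dE/T), where T decreases over time.

def simulated_annealing(start, f, neighbors,
                        t0=10.0, cooling=0.95, steps=1000):
    current = start
    t = t0
    for _ in range(steps):
        if t < 1e-6:                     # temperature effectively zero
            break
        nxt = random.choice(neighbors(current))
        delta_e = f(nxt) - f(current)
        # accept: always if better, with prob e^(dE/T) if worse
        if delta_e > 0 or random.random() < math.exp(delta_e / t):
            current = nxt
        t *= cooling                     # geometric cooling schedule
    return current

random.seed(0)
f = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(simulated_annealing(0, f, neighbors))
```

At high temperature the walk is nearly random (which lets it escape local maxima); as T approaches zero the acceptance rule degenerates into plain hill climbing.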
Genetic Algorithms
• A genetic algorithm (or GA) is a variant of
stochastic beam search in which successor states
are generated by combining two parent states
rather than by modifying a single state
• What is a Genetic Algorithm (GA)?
– An adaptation procedure based on the mechanics of
natural genetics and natural selection
• GAs have 2 essential components
– Survival of the fittest
– Recombination
• Representation
– Chromosome = string
– Gene = single bit or single subsequence in string,
represents 1 attribute
GAs Exhibit Search
• Each attempt a GA makes towards a solution is
called a chromosome
– A sequence of information that can be interpreted as
a possible solution
• Typically, a chromosome is represented as
sequence of binary digits
– Each digit is a gene
• A GA maintains a collection or population of
chromosomes
– Each chromosome in the population represents a
different guess at the solution
The GA Procedure
1. Initialize a population (of solution guesses)
2. Do (once for each generation)
a. Evaluate each chromosome in the population
using a fitness function
b. Apply GA operators to population to create a
new population
3. Finish when solution is reached or number of
generations has reached an allowable
maximum.
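The procedure above can be sketched in a few lines. The fitness function (count of 1-bits, the "OneMax" problem), the tournament-style selection, and all parameters are illustrative assumptions:

```python
import random

# Minimal genetic algorithm following the three-step GA procedure:
# evaluate fitness, apply operators (selection, crossover, mutation),
# repeat until a solution is reached or generations run out.

def fitness(chrom):
    return sum(chrom)                    # number of 1 genes ("OneMax")

def select(population):
    # Tournament selection: the fitter of two random chromosomes survives.
    return max(random.sample(population, 2), key=fitness)

def crossover(p1, p2):
    point = random.randrange(1, len(p1))  # single-point recombination
    return p1[:point] + p2[point:]

def mutate(chrom, rate=0.01):
    # Flip each gene independently with a small probability.
    return [1 - g if random.random() < rate else g for g in chrom]

def genetic_algorithm(n_bits=20, pop_size=30, generations=100):
    population = [[random.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        best = max(population, key=fitness)
        if fitness(best) == n_bits:      # solution reached
            return best
        population = [mutate(crossover(select(population),
                                       select(population)))
                      for _ in range(pop_size)]
    return max(population, key=fitness)

random.seed(0)
solution = genetic_algorithm()
print(fitness(solution))
```

Each new population is bred entirely from fitter parents, so average fitness tends to climb generation by generation even though every individual operator is random.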
Common Operators
• Reproduction
• Crossover
• Mutation
Crossover
Parents: 11111, 00000 → single-point crossover after bit 2 →
Children: 11000, 00111
Mutation
Flip a randomly chosen gene (bit) in a chromosome.