
AI NOTES

Unit II Problem-solving

Problem solving is a process of generating solutions from observed data. A problem is characterized by:

• a set of goals,
• a set of objects, and
• a set of operations.
Solving Problems by Searching:
Problem-solving agents use search algorithms to find solutions. The process involves:

1. Defining the problem and goal state:


Define the problem precisely. This definition must include precise specification of
what the initial state (s) will be as well as what final situations constitute acceptable
solutions to the problem.
2. Identifying possible actions and transitions:
Analyze the problem, i.e., identify the aspects that have an important impact on the
appropriateness of the various possible techniques for solving the problem.
3. Evaluating search strategies (e.g., breadth-first, depth-first, heuristic):
Isolate and represent the task knowledge to solve the problem.
4. Implementing search algorithms (e.g., A* algorithm):
Choose the best problem-solving technique(s) and apply it (or them) to the particular
problem.
Problem-Solving Agents:
Problem-solving agents are intelligent systems that:
1. Reason and solve problems autonomously
2. Use knowledge representation and inference
3. Employ search strategies and planning
Types of problem-solving agents:
1. Reactive agents (react to environment)

2. Planning agents (plan and execute actions)


3. Hybrid agents (combine reactive and planning capabilities)
Example Problems
1. Route finding: Find the shortest path between two cities.
2. Sudoku: Fill in missing numbers to satisfy constraints.


3. Sliding puzzle: Move tiles to form a complete picture.


4. Planning: Schedule tasks to achieve goals.
5. Decision-making: Choose optimal actions under uncertainty.

What is Production System?


Production system or production rule system is a computer program typically used to
provide some form of artificial intelligence, which consists primarily of a set of rules about
behavior but it also includes the mechanism necessary to follow those rules as the system
responds to states of the world.

Example Problems:
Basically, there are two types of problem approaches:

 Toy Problem: It is a concise and exact description of the problem which is used by the
researchers to compare the performance of algorithms.

 Real-world Problem: A problem drawn from the real world that requires a solution. Unlike a
toy problem, it does not have a single fixed description, but we can give a general formulation
of the problem.
Some Toy Problems

 8 Puzzle Problem:
Here, we have a 3×3 matrix with movable tiles numbered from 1 to 8 and one blank space.
A tile adjacent to the blank space can slide into that space. The objective is to reach the
specified goal state shown in the figure below. Our task is to convert the current (start)
state into the goal state by sliding digits into the blank space.
The problem formulation is as follows:

 States: It describes the location of each numbered tile and the blank tile.

 Initial State: We can start from any state as the initial state.

 Actions: Here, the actions of the blank space are defined, i.e., move left, right, up or down

 Transition Model: It returns the resulting state as per the given state and actions.

 Goal test: It identifies whether we have reached the correct goal-state.

 Path cost: The path cost is the number of steps in the path where the cost of each
step is 1.
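The formulation above can be sketched in code. Representing a state as a 9-tuple read row by row, with 0 standing for the blank, is a convention assumed by this sketch, not fixed by the notes:

```python
# 8-puzzle problem formulation: states, actions, transition model, goal test.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def actions(state):
    """Directions the blank space can move from this state."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if row > 0: moves.append("up")
    if row < 2: moves.append("down")
    if col > 0: moves.append("left")
    if col < 2: moves.append("right")
    return moves

def result(state, action):
    """Transition model: return the state after sliding the blank."""
    i = state.index(0)
    delta = {"up": -3, "down": 3, "left": -1, "right": 1}[action]
    j = i + delta
    s = list(state)
    s[i], s[j] = s[j], s[i]   # swap the blank with the adjacent tile
    return tuple(s)

def goal_test(state):
    return state == GOAL
```

Each action has a path cost of 1, so the path cost of a solution is simply its length.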
 Water Jug Problem Statement:

We have two jugs of capacity 5l and 3l (liter), and a tap with an endless supply of water. The
objective is to obtain 4 liters exactly in the 5-liter jug with the minimum steps possible.
Production System (rules):
1. Fill the 5-liter jug from the tap
2. Empty the 5-liter jug
3. Fill the 3-liter jug from the tap
4. Empty the 3-liter jug
5. Empty the 3-liter jug into the 5-liter jug
6. Empty the 5-liter jug into the 3-liter jug
7. Pour water from the 3-liter jug into the 5-liter jug until the 5-liter jug is full
8. Pour water from the 5-liter jug into the 3-liter jug until the 3-liter jug is full
Solution: 1, 8, 4, 6, 1, 8 or 3, 5, 3, 7, 2, 5, 3, 5
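A breadth-first sweep over (big-jug, small-jug) states can confirm that six rule applications are the minimum. This is a sketch, with the fill/empty/pour rules collapsed into a successor function:

```python
from collections import deque

def successors(state):
    """All states reachable from (big, small) by one production rule."""
    big, small = state
    yield (5, small)                  # fill the 5 l jug
    yield (big, 3)                    # fill the 3 l jug
    yield (0, small)                  # empty the 5 l jug
    yield (big, 0)                    # empty the 3 l jug
    pour = min(big, 3 - small)        # pour 5 l jug into 3 l jug
    yield (big - pour, small + pour)
    pour = min(small, 5 - big)        # pour 3 l jug into 5 l jug
    yield (big + pour, small - pour)

def min_steps(start=(0, 0), target=4):
    """BFS: fewest rule applications to get `target` liters in the big jug."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        state, depth = frontier.popleft()
        if state[0] == target:
            return depth
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
```

Running `min_steps()` confirms the six-step solution quoted above is optimal.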
Example: Missionaries and Cannibals

The Missionaries and Cannibals problem illustrates the use of state space search for planning
under constraints: Three missionaries and three cannibals wish to cross a river using a two-
person boat. If at any time the cannibals outnumber the missionaries on either side of the
river, they will eat the missionaries. How can a sequence of boat trips be performed that will
get everyone to the other side of the river without any missionaries being eaten?
Search Algorithm Terminologies:
o Search: Searching is a step-by-step procedure to solve a search-problem in a given
search space. A search problem can have three main factors:

o Search Space: Search space represents a set of possible solutions, which a


system may have.

o Start State: It is the state from which the agent begins the search.


o Goal test: It is a function which observes the current state and returns
whether the goal state has been achieved or not.


o Search tree: A tree representation of the search problem is called a search tree. The
root of the search tree corresponds to the initial state.
o Actions: It gives the description of all the available actions to the agent.
o Transition model: A description of what each action does; it can be represented as a
transition model.
o Path Cost: It is a function which assigns a numeric cost to each path.

o Solution: It is an action sequence which leads from the start node to the goal node.
o Optimal Solution: A solution that has the lowest cost among all solutions.
Properties of Search Algorithms:
Following are the four essential properties of search algorithms, used to compare their
efficiency:
o Completeness: A search algorithm is said to be complete if it is guaranteed to return a
solution whenever at least one solution exists.
o Optimality: If the solution found by an algorithm is guaranteed to be the best solution
(lowest path cost) among all solutions, it is said to be an optimal solution.
o Time Complexity: Time complexity is a measure of the time an algorithm takes to complete
its task.
o Space Complexity: It is the maximum storage space required at any point during the
search.

Types of search algorithms

Based on the information available about the search problem, search algorithms can be
classified into uninformed search (blind search) and informed search (heuristic search)
algorithms.


Uninformed Search Algorithms


Uninformed search algorithms explore nodes without additional information or
heuristics, relying solely on the graph's structure.
Types of Uninformed Search:
1. Breadth-First Search (BFS)
2. Depth-First Search (DFS)
3. Uniform-Cost Search (UCS)
4. Depth-Limited Search (DLS)
5. Iterative Deepening Search (IDS)
Characteristics:

1. No heuristic function
2. Exploration based on node structure
3. No guidance towards goal
4. Complete or incomplete search
Advantages:
1. Simplicity 2. Easy implementation 3. No reliance on heuristics


4. Guaranteed to find solution (if exists)


Disadvantages:
1. Inefficient exploration 2. High computational complexity

3. May get stuck in infinite loops 4. No optimality guarantee


Real-World Applications:
1. Web crawlers 2. Social network analysis 3. Network topology discovery
4. Database querying 5. Simple puzzle solving

1. Breadth-first Search:
o Breadth-first search is the most common search strategy for traversing a tree or
graph. This algorithm searches breadthwise in a tree or graph, so it is called breadth-
first search.
o The BFS algorithm starts searching from the root node of the tree and expands all
successor nodes at the current level before moving to nodes at the next level.
o The breadth-first search algorithm is an example of a general-graph search algorithm.
o Breadth-first search is implemented using a FIFO queue data structure.
Advantages:
o BFS will provide a solution if any solution exists.

o If there is more than one solution for a given problem, then BFS will provide the
minimal solution, i.e., the one requiring the fewest steps.

o It also finds the shortest path (in number of steps) to the goal state, since it expands
all nodes at the same hierarchical level before moving to nodes at lower levels.
o It is also easy to comprehend and implement.
Disadvantages:
o It requires lots of memory since each level of the tree must be saved into memory to
expand the next level.
o BFS needs lots of time if the solution is far away from the root node.
o It can be a very inefficient approach for searching through deeply layered spaces, as it
needs to thoroughly explore all nodes at each level before moving on to the next.
Example:


In the tree structure below, we show the traversal of the tree using the BFS
algorithm from root node S to goal node K. BFS traverses in layers, following
the path shown by the dotted arrow, and the traversed path will be:
S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
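The layer-by-layer expansion can be sketched with a FIFO queue of paths. The adjacency list below is a small hypothetical graph chosen for illustration, not the figure from the notes:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: return the shallowest path from start to goal."""
    frontier = deque([[start]])        # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path                # shallowest goal = fewest steps
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                frontier.append(path + [child])
    return None                        # no solution exists

# Hypothetical example graph (not the figure from the notes).
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"], "E": ["K"]}
```

Because the queue is FIFO, every path of length n is dequeued before any path of length n + 1, which is why BFS returns a minimal-step solution.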

2. Depth-first Search
o Depth-first search is a recursive algorithm for traversing a tree or graph data
structure.
o It is called the depth-first search because it starts from the root node and follows
each path to its greatest depth node before moving to the next path.
o DFS uses a stack data structure for its implementation.
o The process of the DFS algorithm is similar to the BFS algorithm.
Advantage:

o DFS requires much less memory, as it only needs to store the stack of nodes on the
path from the root node to the current node.

o It takes less time to reach the goal node than the BFS algorithm (if it happens to
explore the right path first).

o Only the route currently being tracked is kept in memory, so only one path needs
to be stored at any particular time.
Disadvantage:


o There is the possibility that many states keep re-occurring, and there is no guarantee
of finding the solution.
o The DFS algorithm searches deep into one branch and may sometimes descend into an
infinite loop.

o The depth-first search (DFS) algorithm does not always find the shortest path to
a solution.

Example:
In the below search tree, we have shown the flow of depth-first search, and it will
follow the order as:
Root node--->Left node ----> right node.
It will start searching from root node S and traverse A, then B, then D and E; after
traversing E, it backtracks, as E has no other successors and the goal node has not
yet been found. After backtracking it traverses node C and then G, where it
terminates, as it has found the goal node.
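The traversal just described can be sketched recursively. The adjacency list below is reconstructed from that description and is an assumption of this sketch:

```python
# Adjacency list reconstructed from the traversal described above
# (S, A, B, D, E, backtrack, then C, G) — an assumption of this sketch.
graph = {"S": ["A", "C"], "A": ["B"], "B": ["D", "E"], "C": ["G"]}

def dfs(graph, node, goal, visited=None):
    """Depth-first search; returns the order of visited nodes up to the goal."""
    if visited is None:
        visited = []
    visited.append(node)               # record the order nodes are expanded
    if node == goal:
        return visited
    for child in graph.get(node, []):
        if child not in visited:
            found = dfs(graph, child, goal, visited)
            if found:
                return found
    return None                        # backtrack: no path through this node
```

The returned list reproduces the expansion order from the notes: deep down one branch, backtrack, then the next branch.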

3. Depth-Limited Search Algorithm:


A depth-limited search algorithm is similar to depth-first search with a
predetermined depth limit. Depth-limited search overcomes the drawback of infinite
paths in depth-first search. In this algorithm, a node at the depth limit is treated
as if it has no further successor nodes.
Depth-limited search can be terminated with two Conditions of failure:

o Standard failure value: It indicates that the problem does not have any solution.
o Cutoff failure value: It indicates that there is no solution within the given depth limit.

Advantages:
Depth-limited search restricts the search depth of the tree, so the algorithm
requires fewer memory resources than plain breadth-first search, because it never
has to hold the entire search tree in memory at once. This makes DLS a more
memory-efficient choice for certain kinds of problems.

o When a leaf node is at the maximum allowed depth, its children are not generated,
and it is discarded from the stack.

o Depth-limited search avoids the infinite loops that can arise in classical DFS
when there are cycles in the graph.
Disadvantages:
o Depth-limited search also has a disadvantage of incompleteness.
o It may not be optimal if the problem has more than one solution.
o The effectiveness of the Depth-Limited Search (DLS) algorithm is largely dependent
on the depth limit specified. If the depth limit is set too low, the algorithm may fail to
find the solution altogether.
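The two failure values above can be sketched as follows. Using None for standard failure and the string "cutoff" for cutoff failure is this sketch's convention:

```python
def dls(graph, node, goal, limit):
    """Depth-limited DFS.

    Returns a path on success, None on standard failure (no solution),
    or "cutoff" when the depth limit was hit somewhere below this node.
    """
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"                # depth limit reached
    cutoff_seen = False
    for child in graph.get(node, []):
        result = dls(graph, child, goal, limit - 1)
        if result == "cutoff":
            cutoff_seen = True
        elif result is not None:
            return [node] + result
    return "cutoff" if cutoff_seen else None

# Hypothetical example graph for illustration.
graph = {"S": ["A", "B"], "A": ["C"], "C": ["G"]}
```

With a limit of 3 the goal at depth 3 is found; with a limit of 2 the search reports a cutoff, illustrating how a too-low limit makes DLS fail.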

Informed Search
Informed search algorithms utilize additional information or heuristics to guide the
search process, making them more efficient and effective.
Types of Informed Search:


1. Best-First Search (BFS): Uses a heuristic function to estimate the distance from
each node to the goal.
2. A*: Combines the cost to reach each node with the heuristic function.
3. Iterative Deepening A* (IDA*): Combines A* search with iterative
deepening.
4. Greedy Search: Chooses the node that appears closest to the goal.

5. Hill Climbing: Explores neighbors and chooses the best one based on the heuristic.
Characteristics:
1. Heuristic function: Estimates the distance from each node to the goal.
2. Guided search: Uses heuristics to focus on promising areas.
3. Efficient exploration: Avoids exploring unnecessary nodes.

4. Optimality: May or may not guarantee optimal solutions.


Advantages:
1. Improved efficiency: Reduced computation time.
2. Better solutions: Heuristics guide the search towards optimal solutions.
3. Flexibility: Can be adapted to various problem domains.

Disadvantages:
1. Heuristic quality: Poor heuristics can lead to suboptimal solutions.
2. Complexity: More complex algorithms require more computational resources.
3. Overreliance on heuristics: May not explore alternative solutions.
Real-World Applications:

1. Route planning: GPS navigation, logistics.


2. Scheduling: Resource allocation, timetabling.
3. Game playing: AI opponents, puzzle solving.
4. Network optimization: Telecommunications, social network analysis.
5. Robotics: Motion planning, task planning.

Comparison to Uninformed Search:

Points       Informed Search             Uninformed Search

Heuristic    Uses heuristics             No heuristics

Efficiency   More efficient              Less efficient

Optimality   May guarantee optimality    No optimality guarantee

Complexity   More complex                Simpler

When to choose Informed Search:


- Heuristics are available and reliable.
- Computation time is critical.

- Optimal or near-optimal solutions are required.


What is a Heuristic Function?
A heuristic function (h(n)) estimates the distance from a node (n) to the goal state.
Properties of Heuristic Functions
1. Admissibility: Never overestimates the true cost (h(n) ≤ true cost from n to the goal).

2. Consistency (also called monotonicity): For every node n and successor n',
h(n) ≤ c(n, n') + h(n'), where c(n, n') is the step cost.

3. Every consistent heuristic is also admissible; with a consistent heuristic, A*
expands nodes in nondecreasing order of f.


Heuristic Function Examples
1. Sliding Puzzle: Number of misplaced tiles.
2. Route Finding: Straight-line distance to the goal location.
3. Sudoku: Number of unsolved cells.
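The sliding-puzzle heuristic above (number of misplaced tiles) can be sketched alongside the common Manhattan-distance refinement. The 9-tuple state convention with 0 as the blank is an assumption of this sketch:

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def misplaced_tiles(state):
    """h1: number of tiles out of place (the blank is not counted)."""
    return sum(1 for i, tile in enumerate(state)
               if tile != 0 and tile != GOAL[i])

def manhattan(state):
    """h2: total city-block distance of each tile from its goal square.
    h2 >= h1 on every state, and both never overestimate (admissible)."""
    dist = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        g = GOAL.index(tile)
        dist += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return dist
```

Both heuristics are admissible; Manhattan distance dominates misplaced tiles, so it usually guides the search better.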

1) The A* algorithm
A* is a popular pathfinding algorithm used to find the shortest path between two
points in a weighted graph or network. It's widely used in various fields, including:
1. Video games (e.g., navigation, pathfinding)
2. Robotics (e.g., motion planning)
3. Geographic information systems (GIS)
4. Traffic routing


5. Network optimization

How A* works:

1. Initialization: Define the start node (S) and goal node (G).
2. Open list: Create a priority queue containing nodes to be evaluated, starting with
the start node.
3. Closed list: Create a set to store nodes that have already been evaluated.
4. Evaluation: For each node in the open list:
Calculate the estimated total cost (f) = cost to reach node (g) + heuristic cost to reach
goal (h)
- Compare f values to determine the node with the lowest estimated total cost

5. Node selection: Choose the node with the lowest f value and move it to the closed
list.

6. Neighbor evaluation: Evaluate the node's neighbors:


- Calculate the cost to reach each neighbor through the current node
- Update the neighbor's g and f values if a shorter path is found
- Add neighbors to the open list if they haven't been evaluated before
7. Repeat: Steps 4-6 until the goal node is reached or the open list is empty.

Key components:
1. Heuristic function (h): Estimates the cost to reach the goal from a given node.
2. Cost function (g): Represents the cost to reach a node from the start.
3. Priority queue: Efficiently manages nodes based on their estimated total cost (f).
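The steps above can be sketched compactly with a priority queue as the open list and a set as the closed list. The weighted graph and heuristic values below are small hypothetical examples:

```python
import heapq

def a_star(graph, h, start, goal):
    """A*: expand the node with the lowest f = g + h first."""
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if node in closed:
            continue                   # already expanded via a cheaper path
        closed.add(node)
        for neighbor, cost in graph.get(node, []):
            if neighbor not in closed:
                g2 = g + cost
                heapq.heappush(open_list,
                               (g2 + h[neighbor], g2, neighbor, path + [neighbor]))
    return None, float("inf")

# Hypothetical weighted graph and admissible heuristic for illustration.
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)], "B": [("G", 3)]}
h = {"S": 5, "A": 4, "B": 2, "G": 0}
```

Note how the direct edge A→G (total cost 7) is passed over in favour of the cheaper detour through B (total cost 6), because f ranks the detour first.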
Advantages:
1. Efficient: A* is relatively fast and scalable.


2. Optimal: Guarantees the shortest path if the heuristic function is admissible (never
overestimates) and, for graph search, consistent (satisfies h(n) ≤ c(n, n') + h(n')).
Disadvantages:
1. High computational cost: A* can be slow for large graphs or complex searches due
to the overhead of maintaining the priority queue and calculating heuristic values.
2. Memory usage: A* requires significant memory to store the open and closed lists,
especially for large graphs.
3. Non-optimality: A* may not find the optimal solution if the heuristic function is
not admissible (i.e., it overestimates) or, for graph search, not consistent.
4. Incompleteness: A* may not find a solution if:

- The graph is disconnected


- The start or goal node is unreachable
- The heuristic function is poorly designed
Common variations:
1. Dijkstra's algorithm: A special case of A* without a heuristic function.

2. Bidirectional A*: Searches from both start and goal nodes simultaneously.
3. Iterative Deepening A* (IDA*): Combines A* with iterative deepening to reduce
memory usage.
2) Best-First Search (BFS) algorithm
Overview
Best-First Search is a pathfinding algorithm that explores nodes based on their
estimated distance from the goal (heuristic value). It's similar to A* but doesn't
consider the cost to reach each node.
Key Components
1. Heuristic function (h): Estimates the distance from each node to the goal.

2. Priority queue: Nodes are ordered based on their heuristic value (h).
3. Node selection: Choose the node with the lowest h value.
Algorithm Steps
1. Initialize the start node (S) and goal node (G).
2. Create a priority queue containing the start node.
3. While the queue is not empty:

a. Dequeue the node with the lowest h value (N).


b. If N is the goal node, terminate.
c. Evaluate N's neighbors:

i. Calculate their h values.


ii. Add neighbors to the queue if they haven't been evaluated.
d. Repeat step 3.
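The steps above can be sketched with a priority queue ordered by h alone, ignoring the cost g already paid. The graph and heuristic values are hypothetical examples:

```python
import heapq

def best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the node with the lowest h."""
    frontier = [(h[start], start, [start])]   # priority queue ordered by h
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                heapq.heappush(frontier, (h[neighbor], neighbor, path + [neighbor]))
    return None

# Hypothetical example graph and heuristic for illustration.
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
h = {"S": 3, "A": 1, "B": 2, "G": 0}
```

Compared with the A* sketch earlier, the only change is that the queue key is h alone rather than g + h, which is exactly why the path found is not guaranteed to be the cheapest.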
Advantages
1. Simpler implementation: Compared to A*.

2. Faster computation: For problems with a good heuristic function.


3. Less memory usage: No need to store cost values.

Disadvantages
1. Non-optimality: May not find the shortest path.

2. Heuristic dependence: Relies heavily on the quality of the heuristic function.


3. Incomplete: May not find a solution if the graph is disconnected.
Variations
1. Greedy Best-First Search: Always expands the node that appears closest to the goal according to h alone.
2. Weighted Best-First Search: Assigns weights to nodes based on their importance.

3. Iterative Deepening Best-First Search: Combines BFS with depth-first search.


Real-World Applications
1. Video games: Pathfinding for NPCs.
2. GPS navigation: Route planning.
3. Robotics: Motion planning.

4. Network optimization: Routing and scheduling.


Comparison to A*

Points Best-First Search A*


Optimality Non-optimal Optimal
Heuristic Heuristic-only Heuristic + cost
Complexity Simpler More complex
Speed Faster Slower


When to choose Best-First Search:


- Fast computation is crucial.
- Heuristic function is reliable.

- Path optimality is not critical.


Constraint Satisfaction Problems (CSPs)
A Constraint Satisfaction Problem is a computational problem where:
1. Variables: A set of variables X = {x1, x2, ..., xn} with finite domains Di.
2. Constraints: A set of constraints C = {c1, c2, ..., cm} restricting variable
assignments.
3. Goal: Find an assignment of values to variables that satisfies all constraints.

Key Components:
1. Variables (X): Represent the unknowns.
2. Domains (D): Define possible values for each variable.
3. Constraints (C): Specify relationships between variables.
4. Assignment: A mapping of values to variables.

Types of Constraints:
1. Unary constraints: Involve one variable (e.g., x1 ≥ 5).

2. Binary constraints: Involve two variables (e.g., x1 + x2 = 10).


3. Ternary constraints: Involve three variables (e.g., x1 + x2 + x3 = 15).
4. Global constraints: Involve multiple variables (e.g., allDifferent([x1, x2, ..., xn])).
CSP Examples:
1. Scheduling: Assign tasks to time slots.

2. Resource allocation: Allocate resources to tasks.


3. Sudoku: Fill numbers in a grid satisfying constraints.
4. Timetabling: Schedule classes and teachers.
5. Cryptarithmetic: Solve puzzles such as SEND + MORE = MONEY.
CSP Solution Methods:

1. Backtracking 2. Local search 3. Constraint propagation


4. Arc consistency 5. Search algorithms (e.g., A*, BFS)
Properties:

1. Satisfiability: Does a solution exist?


2. Consistency: Are constraints consistent?
3. Completeness: Does the assignment give a value to every variable?
Applications:
1. Artificial intelligence 2. Operations research 3. Computer science

4. Engineering 5. Finance
Real-World Examples:
1. Air traffic control 2. Supply chain management 3. Resource optimization
4. Automated planning 5. Game playing
Challenges:

1. Computational complexity 2. Scalability 3. Constraint modelling


4. Solution quality 5. Explanation generation
Constraint Propagation
Constraint Propagation is a fundamental technique in Constraint Satisfaction
Problems (CSPs) that helps reduce the search space by inferring new constraints from
existing ones.
Purpose:
Constraint Propagation aims to:

1. Reduce variable domains 2. Eliminate inconsistent values


3. Strengthen constraints 4. Improve search efficiency
Types of Constraint Propagation:
1. Forward Checking: Check constraints when assigning values to variables.
2. Arc Consistency: Ensure consistency between two variables.

3. Path Consistency: Ensure consistency along paths of variables.


4. K-Consistency: Ensure consistency among k variables.
Algorithms:
1. AC-3 (Arc Consistency Algorithm 3): Simple, efficient arc consistency algorithm.

2. AC-4: Improved arc consistency algorithm.


3. PC-2 (Path Consistency Algorithm 2): Simple path consistency algorithm.
4. K-Consistency Algorithm: General algorithm for k-consistency.
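The arc-consistency idea behind AC-3 can be sketched as follows. Representing each binary constraint as a predicate keyed by an ordered variable pair is an assumption of this sketch:

```python
from collections import deque

def revise(domains, constraints, x, y):
    """Drop values of x that have no supporting value of y."""
    test = constraints[(x, y)]
    removed = False
    for vx in list(domains[x]):
        if not any(test(vx, vy) for vy in domains[y]):
            domains[x].remove(vx)
            removed = True
    return removed

def ac3(domains, constraints):
    """Prune domains to arc consistency; False if some domain empties."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False           # inconsistent: no value left for x
            for a, b in constraints:   # re-check every arc pointing at x
                if b == x and a != y:
                    queue.append((a, b))
    return True
```

For example, with X, Y ∈ {1, 2, 3} and the constraint X < Y, AC-3 prunes X to {1, 2} and Y to {2, 3} without any search at all.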

Techniques:
1. Domain Pruning: Remove inconsistent values from variable domains.
2. Constraint Elimination: Remove redundant constraints.
3. Constraint Strengthening: Strengthen constraints to reduce search space.
4. Value Ordering: Order values to reduce search space.

Benefits:
1. Reduced Search Space: Fewer variables and values to explore.
2. Improved Search Efficiency: Faster solution finding.
3. Increased Solution Quality: Better solutions through reduced search space.
4. Simplified Constraint Modelling: Easier constraint specification.

Applications:
1. Scheduling: Resource allocation, timetabling.
2. Resource Optimization: Supply chain management, logistics.
3. Automated Planning: Planning, decision-making.
4. Game Playing: Strategic decision-making.
Backtracking Search
Backtracking Search is a fundamental algorithm for solving Constraint Satisfaction
Problems (CSPs). It's a popular method for finding solutions, especially when
constraint propagation is insufficient.
Algorithm Steps:

1. Initialize an empty assignment.


2. Select an unassigned variable.
3. Assign a value to the variable from its domain.
4. Check consistency with existing assignments and constraints.
5. If consistent, recursively assign values to other variables.

6. If inconsistent, backtrack and try another value.


7. Repeat steps 3-6 until a solution is found or all values are exhausted.
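The steps above can be sketched on a tiny map-colouring CSP; the three-variable instance below is a hypothetical example:

```python
def backtrack(assignment, variables, domains, neighbors):
    """Chronological backtracking search for a consistent assignment."""
    if len(assignment) == len(variables):
        return assignment              # every variable assigned consistently
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # consistency check: no neighbour already has this colour
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, neighbors)
            if result is not None:
                return result
            del assignment[var]        # backtrack and try the next value
    return None                        # all values exhausted for var

# Hypothetical map-colouring instance: three mutually adjacent regions.
variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
```

Since the three regions are mutually adjacent, a solution must use all three colours; the search assigns, checks, and undoes values exactly as in steps 3-6.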


Key Components:
1. Variable Ordering: Selecting the next variable to assign.
2. Value Ordering: Selecting the next value to try.

3. Constraint Checking: Ensuring consistency with existing assignments and


constraints.

4. Backtracking: Returning to a previous assignment and trying another value.


Types of Backtracking:
1. Chronological Backtracking: Backtrack to the previous variable.
2. Non-Chronological Backtracking: Backtrack to a variable that caused inconsistency.
3. Dynamic Backtracking: Adaptively change variable/value ordering.

Benefits:
1. Completeness: Guaranteed to find a solution if one exists.
2. Flexibility: Handles various CSPs and constraints.
3. Simple Implementation: Easy to understand and code.
Challenges:

1. Computational Complexity: Exponential time complexity in the worst case.


2. Space Complexity: Large memory requirements for deep backtracking.
3. Inefficiency: Inadequate constraint propagation can lead to excessive backtracking.
Real-World Applications:
1. Scheduling: Resource allocation, timetabling.

2. Resource Optimization: Supply chain management, logistics.


3. Automated Planning: Planning, decision-making.
4. Game Playing: Strategic decision-making, game playing.
Local Search for CSPs
Local Search is a stochastic algorithm for solving Constraint Satisfaction Problems
(CSPs). It's particularly effective for large, complex problems.

Key Components:
1. Initial Solution: Generate an initial assignment.
2. Neighborhood Function: Define neighboring solutions.
3. Evaluation Function: Assess solution quality.

4. Search Strategy: Choose next solution.


Local Search Algorithms:
1. Hill Climbing: Greedy ascent to better solutions.

2. Simulated Annealing: Probabilistic acceptance of worse solutions.


3. Tabu Search: Memory-based search to avoid cycles.
4. Genetic Algorithms: Population-based search.
5. Guided Local Search: Combines local search with constraint relaxation.
Neighborhood Functions:

1. Swap: Exchange values between variables.


2. Flip: Change a single variable's value.
3. Insert: Add a new value to a variable.
4. Delete: Remove a value from a variable.
Evaluation Functions:

1. Constraint Violations: Count unsatisfied constraints.


2. Penalty Functions: Assign costs to constraint violations.
3. Objective Functions: Optimize a specific objective.
Advantages:
1. Efficient for large CSPs 2. Handles complex constraints

3. Robust to noise and uncertainty 4. Easy to implement

Disadvantages:
1. No guarantee of optimality 2. Sensitive to initial solution
3. May get stuck in local optima 4. Requires parameter tuning

Real-World Applications:
1. Scheduling: Resource allocation, timetabling.
2. Resource Optimization: Supply chain management, logistics.
3. Automated Planning: Planning, decision-making.
4. Game Playing: Strategic decision-making, game playing.
Hill Climbing


Hill Climbing is a popular local search algorithm for optimization problems. It's
simple, efficient, and effective.
Key Components:
1. Initial Solution: Generate an initial assignment.
2. Neighborhood Function: Define neighboring solutions.

3. Evaluation Function: Assess solution quality.


4. Ascent Strategy: Choose next solution.
Hill Climbing Algorithm:
1. Initialize an initial solution.
2. Evaluate the solution using the evaluation function.

3. Generate neighboring solutions using the neighborhood function.


4. Select the best neighboring solution.
5. If the new solution is better, replace the current solution.
6. Repeat steps 3-5 until no improvement or max iterations.
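The loop above can be sketched as steepest-ascent hill climbing on the n-queens problem, minimising the number of attacking pairs; the choice of problem instance is an assumption of this sketch:

```python
def conflicts(state):
    """Evaluation function: pairs of queens attacking each other
    (same row or same diagonal; one queen per column)."""
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def hill_climb(state):
    """Steepest-ascent hill climbing: move to the best neighbour until
    no neighbour improves (a local optimum) or conflicts reach zero."""
    while True:
        current = conflicts(state)
        if current == 0:
            return state
        best, best_cost = None, current
        # neighbourhood: move one queen to another row in its column
        for col in range(len(state)):
            for row in range(len(state)):
                if row != state[col]:
                    neighbor = state[:col] + [row] + state[col + 1:]
                    c = conflicts(neighbor)
                    if c < best_cost:
                        best, best_cost = neighbor, c
        if best is None:
            return state               # stuck in a local optimum
        state = best
```

As the disadvantages below note, the climb may stop at a local optimum with conflicts remaining, which is what random-restart variants address.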
Types of Hill Climbing:

1. Steepest Ascent Hill Climbing: Choose the best neighboring solution.


2. First-Choice Hill Climbing: Choose the first improving neighboring solution.
3. Random-Restart Hill Climbing: Restart with a new initial solution.
Neighborhood Functions:
1. Swap: Exchange values between variables. 2. Flip: Change a single variable's value.

3. Insert: Add a new value to a variable. 4. Delete: Remove a value from a variable.

Evaluation Functions:
1. Constraint Violations: Count unsatisfied constraints.
2. Penalty Functions: Assign costs to constraint violations.

3. Objective Functions: Optimize a specific objective.


Advantages:
1. Simple implementation 2. Efficient for small to medium-sized problems
3. Fast convergence 4. Easy to understand
Disadvantages:

1. May get stuck in local optima 2. Sensitive to initial solution


3. No guarantee of optimality 4. Requires parameter tuning
Real-World Applications:

1. Scheduling 2. Resource Optimization 3. Automated Planning


4. Game Playing 5. Machine Learning
The Structure of Problems
Understanding the structure of problems is crucial for effective problem-solving.
Problem Structure Classification

1. Well-Defined Problems: Clear goals, constraints, and solution spaces.


2. Ill-Defined Problems: Unclear goals, constraints, or solution spaces.
3. Wicked Problems: Complex, dynamic, and uncertain.
Problem Types
1. Optimization Problems: Find the best solution.

2. Satisfiability Problems: Find a feasible solution.


3. Decision Problems: Choose among alternatives.
4. Search Problems: Find a path or solution.
Problem Complexity
1. Simple Problems: Easy to solve.

2. Complex Problems: Difficult to solve due to:


1 Many variables. 2 Non-linearity.
3 Uncertainty. 4 Interconnectedness.

Problem Representation

1. State-Space Representation: Describe problem states.


2. Graph Representation: Model problems as graphs.
3. Constraint Representation: Define problems using constraints.
Problem-Solving Approaches
1. Analytical Methods: Mathematical modeling.
2. Numerical Methods: Approximation techniques.


3. Heuristic Methods: Rule-based search.


4. Metaheuristics: High-level search strategies.
Problem-Solving Strategies

1. Divide and Conquer: Break down complex problems.


2. Dynamic Programming: Solve sub-problems.
3. Greedy Algorithms: Make locally optimal choices.
4. Backtracking: Explore solution spaces.
Understanding problem structure helps:

1. Choose effective solution methods. 2. Identify potential challenges.


3. Develop efficient algorithms.
Adversarial Search
Adversarial Search is a fundamental concept in Artificial Intelligence, focusing on
decision-making in competitive environments.
Key Components:
1. Agents: Players or decision-makers.

2. Game Tree: Representation of possible moves and outcomes.


3. Evaluation Function: Assessing game states.
4. Search Algorithm: Exploring game trees.
Types of Adversarial Search:
1. Minimax: Assuming optimal opponent play.

2. Alpha-Beta Pruning: Efficient minimax variant.


3. Monte Carlo Tree Search (MCTS): Sampling-based search.
4. Game Theory: Strategic decision-making.
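Minimax, the first technique listed above, can be sketched for a small game tree given as nested lists, where leaves are utilities for the maximising player; the leaf values follow a common textbook example:

```python
def minimax(node, maximizing=True):
    """Minimax value of a game tree given as nested lists of utilities."""
    if isinstance(node, (int, float)):
        return node                    # leaf: return its utility
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# MAX chooses among three MIN nodes; leaf utilities are hypothetical
# (borrowed from a standard two-ply textbook example).
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
```

MIN drives the three subtrees to 3, 2, and 2 respectively, so MAX's best guaranteed outcome is 3, obtained by assuming optimal opponent play.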
Real-World Applications:
1. Chess and other strategy games. 2. Poker and other card games.
3. Financial markets and trading. 4. Autonomous vehicles.
Games
Games are a fascinating domain for Artificial Intelligence, requiring strategic decision-
making, problem-solving, and adaptability.
Types of Games
1. Board Games (e.g., Chess, Go, Checkers)
2. Card Games (e.g., Poker, Blackjack, Bridge)
3. Video Games (e.g., FPS, RPG, Strategy)
4. Puzzle Games (e.g., Sudoku, Crosswords, Tetris)
Game Characteristics
1. Deterministic vs. Stochastic 2. Perfect vs. Imperfect Information
3. Turn-based vs. Real-time 4. Single-player vs. Multi-player
AI Techniques for Games
1. Minimax Algorithm 2. Alpha-Beta Pruning 3. Monte Carlo Tree Search (MCTS)
4. Deep Learning (e.g., ConvNets, Recurrent Nets)
5. Reinforcement Learning (e.g., Q-learning, SARSA)
6. Game Theory (e.g., Nash Equilibrium, Pareto Optimality)
Popular Game AI Frameworks
1. PyGame 2. OpenCV 3. TensorFlow 4. PyTorch 5. Unity ML-Agents
Game AI Applications
1. Game Playing (e.g., Chess, Go, Poker)
2. Game Development (e.g., NPC AI, Pathfinding)
3. Education (e.g., Game-based learning)
4. Research (e.g., Multi-agent systems, Decision-making)
Optimal Decisions in Games
Optimal decisions in games involve strategic thinking, probability, and decision
theory.
Game Theory Concepts
1. Nash Equilibrium: Optimal strategies for multiple players.
2. Pareto Optimality: Multi-objective optimization.
3. Dominance: Comparing strategies.
4. Expected Utility: Calculating outcome probabilities.
Decision-Making Models
1. Decision Trees: Visualizing possible outcomes.
2. Payoff Matrices: Evaluating strategy outcomes.
3. Markov Decision Processes (MDPs): Sequential decision-making.
4. Game Trees: Representing game states.
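
A payoff matrix and a Nash equilibrium check can be sketched together. The numbers below follow the usual Prisoner's-Dilemma pattern and are illustrative assumptions; the code brute-forces the pure-strategy Nash equilibria, i.e. cells where neither player gains by deviating unilaterally.

```python
# Payoffs are (row player, column player); actions: 0 = cooperate, 1 = defect.
payoffs = {
    (0, 0): (-1, -1),   # both cooperate
    (0, 1): (-3,  0),   # row cooperates, column defects
    (1, 0): ( 0, -3),   # row defects, column cooperates
    (1, 1): (-2, -2),   # both defect
}

def pure_nash(payoffs):
    """Return all cells where each player's action is a best response."""
    equilibria = []
    for (r, c), (ur, uc) in payoffs.items():
        row_best = all(ur >= payoffs[(r2, c)][0] for r2 in (0, 1))
        col_best = all(uc >= payoffs[(r, c2)][1] for c2 in (0, 1))
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

print(pure_nash(payoffs))  # [(1, 1)]: mutual defection is the only equilibrium
```

The result illustrates why the Nash equilibrium need not be Pareto optimal: (0, 0) gives both players a better payoff, yet each has an incentive to deviate from it.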


Optimization Techniques
1. Minimax Algorithm: Assuming optimal opponent play.
2. Alpha-Beta Pruning: Efficient minimax variant.
3. Monte Carlo Tree Search (MCTS): Sampling-based search.
4. Linear Programming: Optimizing resource allocation.
Real-World Applications
1. Economics: Auctions, bargaining, and negotiations.
2. Finance: Portfolio optimization, risk management.
3. Political Science: Voting systems, international relations.
4. Biology: Evolutionary game theory.
Types of Games
1. Zero-Sum Games: One player's gain equals another's loss.
2. Non-Zero-Sum Games: Payoffs need not cancel out; mutual gain or mutual loss is possible.
3. Cooperative Games: Players collaborate.
4. Non-Cooperative Games: Players act independently.
Optimal Decision-Making Strategies
1. Maximize Expected Utility (MEU)
2. Minimize Maximum Regret (MMR)
3. Maximize Minimum Gain (MMG)
4. Nash Equilibrium Strategy
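
The MEU strategy can be sketched directly: each action has a probability distribution over outcomes, and the agent picks the action whose probability-weighted utility is highest. The actions, probabilities, and utilities below are illustrative assumptions.

```python
# Each action maps to a list of (probability, utility) outcomes.
actions = {
    "safe":  [(1.0, 40)],                 # certain payoff of 40
    "risky": [(0.5, 100), (0.5, -10)],    # coin flip: 100 or -10
}

def expected_utility(outcomes):
    """Probability-weighted average utility of an action's outcomes."""
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # risky 45.0
```

Here MEU prefers the gamble (expected utility 45 beats the sure 40); a regret-based or maximin strategy from the same list would instead choose "safe", since its worst case is better.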

Alpha-Beta Pruning
Alpha-Beta Pruning is an optimized Minimax algorithm for decision-making in games,
reducing computation by pruning branches that cannot influence the final decision.
Key Concepts:
1. Alpha (α): Best score the maximizing player (MAX) can guarantee so far.
2. Beta (β): Best score the minimizing player (MIN) can guarantee so far.
3. Pruning: Eliminating branches that won't affect the outcome.
Alpha-Beta Pruning Algorithm:
1. Initialize alpha (α) to −∞ and beta (β) to +∞.
2. Explore the game tree recursively:
- MAX node: Update alpha (α) if score > alpha.
- MIN node: Update beta (β) if score < beta.
- Prune the remaining branches whenever alpha ≥ beta.
3. Backtrack to the optimal move.
Advantages:
1. Reduced computation: Pruning eliminates unnecessary branches.
2. Faster decision-making: The same search depth is reached in less time.
3. Unchanged result: Alpha-Beta returns the same value and move as plain Minimax.
Types of Pruning:
1. Alpha Cutoff: At a MIN node, prune once the node's value ≤ alpha.
2. Beta Cutoff: At a MAX node, prune once the node's value ≥ beta.
3. Alpha-Beta Window: Only scores inside the (alpha, beta) range are searched exactly.


Real-World Applications:
1. Chess engines (e.g., Stockfish, Leela Chess Zero).
2. Game playing AI (e.g., Go, Poker).
3. Decision-making systems.
4. Optimization problems.
