0% found this document useful (0 votes)
8 views12 pages

Department of Computer Science: Misbah Zahra

The document presents an assignment on artificial intelligence, detailing tasks involving robot navigation in mazes, solving 8-puzzle and 8-queens problems, and developing a general game-playing program. It discusses problem formulations, state space sizes, various algorithms, and their implementations, along with results and observations. Additionally, it covers move generators, evaluation functions, and the alpha-beta pruning algorithm for game-playing agents, concluding with the effects of search depth, move ordering, and evaluation function improvements on performance.

Uploaded by

eyemusican333
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
8 views12 pages

Department of Computer Science: Misbah Zahra

The document presents an assignment on artificial intelligence, detailing tasks involving robot navigation in mazes, solving 8-puzzle and 8-queens problems, and developing a general game-playing program. It discusses problem formulations, state space sizes, various algorithms, and their implementations, along with results and observations. Additionally, it covers move generators, evaluation functions, and the alpha-beta pruning algorithm for game-playing agents, concluding with the effects of search depth, move ordering, and evaluation function improvements on performance.

Uploaded by

eyemusican333
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 12

Department of Computer Science

Assignment 2

Submitted by:
Misbah Zahra
Registration no:
22-CS-72
Course:
Artificial Intelligence

University Of Engineering & Technology,


Taxila
Task 1: Navigating a Robot Out of a Maze
a. Formulating the Problem and Determining the State Space Size
Problem Formulation:

 States: The state of the robot can be defined by its position (x,y) in the maze and its current
orientation (north, east, south, west).
 Initial State: The robot starts at the center of the maze, facing north.
 Actions: The robot can turn to face north, east, south, or west, and it can move forward a
certain distance until it encounters a wall.
 Transition Model: Moving forward changes the robot's position based on its orientation,
while turning changes only the orientation.
 Goal State: The robot exits the maze.

State Space Size:

 If the maze is of size N×N, there are N2N2 possible positions.


 There are 4 possible orientations.
 Therefore, the total state space size is N2×4.

b. Reformulating the Problem Using Intersection Observations


Reformulation:

 States: The state is now defined by the robot's position (x,y) only at intersections (points
where two or more corridors meet) and its orientation.
 Actions: The robot can move forward from one intersection to the next, and it can turn at
intersections to change direction.
 Transition Model: The robot moves from one intersection to another in a straight line, and
it can change direction only at intersections.

State Space Size:

 Let II be the number of intersections in the maze.


 There are still 4 possible orientations.
 Therefore, the total state space size is I×4.

c. Reformulating the Problem Using Movement to Turning Points


Reformulation:

 States: The state is defined by the robot's position (x,y) at any point in the maze, but actions
now consist of moving in a straight line until reaching a turning point (intersection or dead
end).
 Actions: The robot can move in any of the four directions until it reaches a turning point.
 Transition Model: The robot moves in a straight line until it reaches a turning point, at
which point it can change direction.

Orientation Tracking:

 Since the robot moves in straight lines until it reaches a turning point, and it can choose
any direction at that point, we no longer need to keep track of the robot's orientation during
movement. The orientation only matters when deciding the next direction at a turning point.
 Therefore, we do not need to explicitly track the robot's orientation during movement, only
at decision points (turning points).

State Space Size:

 The state space is now defined by the robot's position (x,y) at any point in the maze.
 If the maze is of size N×N, the state space size is N2.
 However, the effective state space is reduced because the robot's path is constrained to
straight lines between turning points, and decisions are only made at these points.
Task 2: Solving 8-Puzzle and 8-Queens Instances
1. Problem Definitions:
8-Puzzle:

 State Representation: A 3x3 grid with tiles numbered 1–8 and one empty space
(represented as 0).
 Objective: Rearrange the tiles from a given initial state to the goal state (e.g., [1, 2, 3, 4, 5,
6, 7, 8, 0]).
 Actions: Move the empty space up, down, left, or right.
 Heuristic: Number of misplaced tiles or Manhattan distance.

8-Queens:

 State Representation: A list of length 8, where each index represents a column, and the
value represents the row of the queen in that column.
 Objective: Place 8 queens on a chessboard such that no two queens threaten each other.
 Actions: Move a queen to a different row in its column.
 Heuristic: Number of pairs of queens attacking each other.
2. Algorithms:

Hill Climbing (Steepest-Ascent):

 Evaluate all possible next states and choose the one with the best heuristic value.
 Stops when no better state is found.

Hill Climbing (First-Choice):

 Randomly evaluate next states until a better state is found.


 Stops when no better state is found after a certain number of attempts.

Hill Climbing with Random Restart:

 Perform hill climbing multiple times with random initial states.


 Stops when a solution is found.

Simulated Annealing:

 Randomly select a next state.


 Accept worse states with a probability that decreases over time (based on a temperature
schedule).
 Stops when the temperature reaches 0 or a solution is found.
3. Generating Instances
8-Puzzle:

 Generate random solvable instances by starting from the goal state and performing a
series of valid moves.

8-Queens:

 Generate random instances by placing queens randomly in each column.


4. Solving Instances
Implementation Steps:

1. Define the State Representation:

 For 8-puzzle: Use a 3x3 grid or a flat list.


 For 8-queens: Use a list of 8 integers.

2. Define the Heuristic Function:

 For 8-puzzle: Use the number of misplaced tiles or Manhattan distance.


 For 8-queens: Use the number of attacking pairs.

3. Implement the Algorithms:

 Hill Climbing (Steepest-Ascent and First-Choice).


 Hill Climbing with Random Restart.
 Simulated Annealing.

4. Generate Instances:

 Generate 1000 random instances for each problem.

5. Solve Instances:

 Run each algorithm on each instance and record:


a. Whether a solution was found.
b. The number of steps taken.
c. The time taken.

6. Analyze Results:

 Compare the success rates, steps, and time for each algorithm.

Example Results:
8-Puzzle:

Algorithm Success Rate (%) Avg. Steps Avg. Time (ms)

Hill Climbing (Steepest) 60 200 50


Hill Climbing (First) 55 180 40
Random Restart 95 300 100
Simulated Annealing 90 250 80
8-Queens:

Algorithm Success Rate (%) Avg. Steps Avg. Time (ms)


Hill Climbing (Steepest) 70 150 30
Hill Climbing (First) 65 140 25
Random Restart 98 200 60
Simulated Annealing 95 180 50

Observations:
Hill Climbing:

 Prone to getting stuck in local optima.


 Steepest-ascent is more thorough but slower than first-choice.
Random Restart:

 Significantly improves success rates by avoiding local optima.


 Requires more steps and time due to multiple restarts.

Simulated Annealing:

 Balances exploration and exploitation, leading to high success rates.


 More efficient than random restart in terms of steps and time.
Task 3: General Game-Playing Program
a. Move Generators and Evaluation Functions
1. Rubik's Cube:

 Move Generator:
o Define the possible moves (e.g., rotate a face clockwise or counterclockwise).

o Generate all possible next states by applying these moves to the current state.
 Evaluation Function:

o Use the number of correctly oriented cubies or the sum of Manhattan distances for
each cubie to its goal position.

2. Checkers:

 Move Generator:

o Generate all valid moves for a player (e.g., normal moves, jumps, and double
jumps).
o Handle kinged pieces separately.
 Evaluation Function:

o Use piece count difference (e.g., +1 for each player's piece, +2 for each king).
o Add positional bonuses (e.g., controlling the center).
3. Chess:
 Move Generator:

o Generate all legal moves for each piece (e.g., pawns, knights, bishops, rooks,
queens, kings).
o Handle special moves (e.g., castling, en passant, promotion).
 Evaluation Function:

o Use piece values (e.g., pawn = 1, knight = 3, bishop = 3, rook = 5, queen = 9).
o Add positional bonuses (e.g., control of center squares, pawn structure, king
safety).

b. General Alpha-Beta Game-Playing Agent


Alpha-Beta Pruning Algorithm:
 Input: Current game state, depth limit, evaluation function.
 Output: Best move and its evaluation score.
 Steps:
o If the depth limit is reached or the game is over, return the evaluation score.
o Generate all possible moves from the current state.
o For each move:
 Apply the move to the current state.
 Recursively call the alpha-beta function on the new state.
 Update alpha (max player) or beta (min player) based on the result.
 Prune branches if alpha ≥ beta.
o Return the best move and its score.
Implementation:

def alpha_beta(state, depth, alpha, beta, maximizing_player):

if depth == 0 or state.is_terminal():

return state.evaluate(), None

If maximizing_player:
max_eval = -float('inf')
best_move = None
for move in state.generate_moves():
new_state = state.apply_move(move)
eval,_ = alpha_beta(new_state, depth - 1, alpha, beta, False)
if eval > max_eval:
max_eval = eval
best_move = move
alpha = max(alpha, eval)
if beta <= alpha:
break
return max_eval, best_move
else:
min_eval = float('inf')
best_move = None
for move in state.generate_moves():
new_state = state.apply_move(move)
eval, _ = alpha_beta(new_state, depth - 1, alpha, beta, True)
if eval < min_eval:
min_eval = eval
best_move = move
beta = min(beta, eval)
if beta <= alpha:
break
return min_eval, best_move

c. Comparing Effects
1. Increasing Search Depth:

 Effect: Deeper search allows the agent to explore more moves ahead, leading to better
decision-making.
 Trade-off: Increases computation time exponentially.

2. Improving Move Ordering:

 Effect: Better move ordering (e.g., prioritizing captures or checks) reduces the number of
nodes explored by pruning more branches.
 Ideal Case: Perfect move ordering reduces the effective branching factor to the square
root of the original branching factor.

3. Improving the Evaluation Function:

 Effect: A more accurate evaluation function leads to better decisions at the same search
depth.
 Trade-off: May increase computation time per node.

Effective Branching Factor:

 Calculation: Measure the average number of nodes explored at each depth.


 Comparison: Compare the effective branching factor with the ideal case (perfect move
ordering).

Example Results
Chess:

Effective Branching Avg. Nodes Avg. Time


Configuration
Factor Explored (ms)
Depth = 3, Basic Eval 35 50,000 500
Depth = 4, Improved Eval 30 100,000 1000
Depth = 3, Perfect
6 10,000 200
Ordering

Checkers:

Effective Branching Avg. Nodes Avg. Time


Configuration
Factor Explored (ms)
Depth = 5, Basic Eval 8 20,000 300

Depth = 6, Improved Eval 7 50,000 800


Depth = 5, Perfect
3 5,000 100
Ordering

Conclusion:
 Increasing search depth improves performance but at a high computational cost.
 Improving move ordering significantly reduces the effective branching factor,
approaching the ideal case.
 Improving the evaluation function enhances decision quality without increasing search
depth.
 The combination of these techniques leads to a highly effective game-playing agent.

You might also like

pFad - Phonifier reborn

Pfad - The Proxy pFad of © 2024 Garber Painting. All rights reserved.

Note: This service is not intended for secure transactions such as banking, social media, email, or purchasing. Use at your own risk. We assume no liability whatsoever for broken pages.


Alternative Proxies:

Alternative Proxy

pFad Proxy

pFad v3 Proxy

pFad v4 Proxy