Department of Computer Science
Assignment 2
Submitted by: Misbah Zahra
Registration no: 22-CS-72
Course: Artificial Intelligence
Task 1: Maze Navigation Problem Formulation
States: The state of the robot can be defined by its position (x,y) in the maze and its current
orientation (north, east, south, west).
Initial State: The robot starts at the center of the maze, facing north.
Actions: The robot can turn to face north, east, south, or west, and it can move forward a
certain distance until it encounters a wall.
Transition Model: Moving forward changes the robot's position based on its orientation,
while turning changes only the orientation.
Goal State: The robot exits the maze.
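A minimal Python sketch of this first formulation, assuming a hypothetical is_wall(x, y) predicate that describes the maze (the names here are chosen for illustration, not prescribed):

from dataclasses import dataclass

DIRECTIONS = {'north': (0, 1), 'east': (1, 0), 'south': (0, -1), 'west': (-1, 0)}

@dataclass(frozen=True)
class State:
    x: int
    y: int
    facing: str  # 'north', 'east', 'south', or 'west'

def turn(state, direction):
    # Turning changes only the orientation, not the position.
    return State(state.x, state.y, direction)

def move_forward(state, is_wall):
    # Advance one cell at a time in the facing direction until a wall blocks the way.
    dx, dy = DIRECTIONS[state.facing]
    x, y = state.x, state.y
    while not is_wall(x + dx, y + dy):
        x, y = x + dx, y + dy
    return State(x, y, state.facing)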
The formulation can be coarsened so that only intersections matter:
States: The state is now defined by the robot's position (x, y) only at intersections (points where two or more corridors meet) and its orientation.
Actions: The robot can move forward from one intersection to the next, and it can turn at
intersections to change direction.
Transition Model: The robot moves from one intersection to another in a straight line, and
it can change direction only at intersections.
Going further, the actions themselves can be redefined as straight-line moves between turning points:
States: The state is the robot's position (x, y) at any point in the maze; actions now consist of moving in a straight line until a turning point (an intersection or a dead end) is reached.
Actions: The robot can move in any of the four directions until it reaches a turning point.
Transition Model: The robot moves in a straight line until it reaches a turning point, at
which point it can change direction.
Orientation Tracking:
Since the robot moves in a straight line until it reaches a turning point and can choose any direction there, its orientation does not need to be tracked during movement. Orientation matters only at decision points (turning points), when the next direction is chosen, so it can be dropped from the state.
The state space is now defined by the robot's position (x,y) at any point in the maze.
If the maze is of size N×N, the state space size is N².
However, the effective state space is reduced because the robot's path is constrained to
straight lines between turning points, and decisions are only made at these points.
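As a rough sketch of this reduced formulation (reusing the DIRECTIONS table and the hypothetical is_wall predicate from the earlier sketch), the successor function slides from one turning point to the next:

def is_turning_point(x, y, dx, dy, is_wall):
    # A cell is a turning point if the straight path ahead is blocked (dead end)
    # or a corridor opens up perpendicular to the direction of travel.
    blocked_ahead = is_wall(x + dx, y + dy)
    side_open = not is_wall(x + dy, y + dx) or not is_wall(x - dy, y - dx)
    return blocked_ahead or side_open

def successors(x, y, is_wall):
    # From (x, y), slide in every open direction until the next turning point.
    for dx, dy in DIRECTIONS.values():
        if is_wall(x + dx, y + dy):
            continue
        nx, ny = x + dx, y + dy
        while not is_turning_point(nx, ny, dx, dy, is_wall):
            nx, ny = nx + dx, ny + dy
        yield (nx, ny)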
Task 2: Solving 8-Puzzle and 8-Queens Instances
1. Problem Definitions:
8-Puzzle:
State Representation: A 3×3 grid with tiles numbered 1–8 and one empty space (represented as 0).
Objective: Rearrange the tiles from a given initial state to the goal state (e.g., [1, 2, 3, 4, 5,
6, 7, 8, 0]).
Actions: Move the empty space up, down, left, or right.
Heuristic: Number of misplaced tiles or Manhattan distance.
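A minimal sketch of both heuristics, assuming (for illustration) that a state is stored as a flat list of nine values with 0 for the blank:

GOAL = [1, 2, 3, 4, 5, 6, 7, 8, 0]

def misplaced_tiles(state, goal=GOAL):
    # Count tiles (excluding the blank) that are not in their goal position.
    return sum(1 for tile, target in zip(state, goal) if tile != 0 and tile != target)

def manhattan_distance(state, goal=GOAL):
    # Sum of row and column distances of every tile from its goal cell.
    total = 0
    for index, tile in enumerate(state):
        if tile == 0:
            continue
        goal_index = goal.index(tile)
        total += abs(index // 3 - goal_index // 3) + abs(index % 3 - goal_index % 3)
    return total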
8-Queens:
State Representation: A list of length 8, where each index represents a column, and the
value represents the row of the queen in that column.
Objective: Place 8 queens on a chessboard such that no two queens threaten each other.
Actions: Move a queen to a different row in its column.
Heuristic: Number of pairs of queens attacking each other.
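A small sketch of this heuristic, using the column-indexed list representation described above:

def attacking_pairs(state):
    # state[col] is the row of the queen in that column.
    pairs = 0
    n = len(state)
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            same_row = state[c1] == state[c2]
            same_diagonal = abs(state[c1] - state[c2]) == abs(c1 - c2)
            if same_row or same_diagonal:
                pairs += 1
    return pairs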
2. Algorithms:
Hill Climbing:
Evaluate all possible next states and move to the one with the best heuristic value. Stop when no successor improves on the current state (a local optimum).
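A minimal hill-climbing sketch, assuming hypothetical helpers neighbors(state) (all next states) and h(state) (heuristic value, lower is better):

def hill_climbing(state, neighbors, h):
    # Greedily move to the best neighbour; stop at a local optimum.
    while True:
        best = min(neighbors(state), key=h, default=None)
        if best is None or h(best) >= h(state):
            return state  # no neighbour improves on the current state
        state = best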
Simulated Annealing:
Like hill climbing, but a worse successor may be accepted with a probability that decreases as a temperature parameter is lowered, which allows the search to escape local optima.
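A corresponding simulated-annealing sketch with an illustrative geometric cooling schedule (the temperature parameters are assumptions, not prescribed values):

import math
import random

def simulated_annealing(state, neighbors, h, t_start=1.0, cooling=0.995, t_min=1e-3):
    # Accept worse neighbours with probability exp(-delta / T); T decays each step.
    temperature = t_start
    while temperature > t_min and h(state) > 0:
        candidate = random.choice(list(neighbors(state)))
        delta = h(candidate) - h(state)
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            state = candidate
        temperature *= cooling
    return state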
4. Generate Instances:
8-Puzzle: Generate random solvable instances by starting from the goal state and performing a series of valid random moves.
8-Queens: Generate random instances by placing each queen in a random row of its column (a small generator sketch follows this list).
5. Solve Instances:
Run hill climbing and simulated annealing on each generated instance, recording whether it was solved, the number of steps, and the time taken.
6. Analyze Results:
Compare the success rates, steps, and time for each algorithm.
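A small generator sketch for both problems, assuming the representations used above (the walk length and board size are illustrative defaults):

import random

def random_puzzle_instance(goal=(1, 2, 3, 4, 5, 6, 7, 8, 0), walk_length=40):
    # Start at the goal and apply random legal blank moves; the result is always solvable.
    state = list(goal)
    for _ in range(walk_length):
        blank = state.index(0)
        row, col = divmod(blank, 3)
        moves = []
        if row > 0:
            moves.append(blank - 3)
        if row < 2:
            moves.append(blank + 3)
        if col > 0:
            moves.append(blank - 1)
        if col < 2:
            moves.append(blank + 1)
        swap = random.choice(moves)
        state[blank], state[swap] = state[swap], state[blank]
    return state

def random_queens_instance(n=8):
    # One queen per column, each in a uniformly random row.
    return [random.randrange(n) for _ in range(n)]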
Example Results:
8-Puzzle:
Observations:
Hill Climbing:
Simulated Annealing:
1. Rubik's Cube:
Move Generator:
o Define the possible moves (e.g., rotate a face clockwise or counterclockwise).
o Generate all possible next states by applying these moves to the current state.
Evaluation Function:
o Use the number of correctly oriented cubies or the sum of Manhattan distances for
each cubie to its goal position.
2. Checkers:
Move Generator:
o Generate all valid moves for a player (e.g., normal moves, jumps, and double
jumps).
o Handle kinged pieces separately.
Evaluation Function:
o Use piece count difference (e.g., +1 for each player's piece, +2 for each king).
o Add positional bonuses (e.g., controlling the center).
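A rough sketch of such an evaluation, assuming a hypothetical board dictionary mapping squares to piece codes ('w'/'b' for men, 'W'/'B' for kings); the bonus values are illustrative only:

CENTER_SQUARES = {(3, 2), (3, 4), (4, 3), (4, 5)}  # illustrative central squares

def evaluate_checkers(board, player='w'):
    # Positive scores favour `player`; kings count double, centre control adds a bonus.
    score = 0.0
    for square, piece in board.items():
        value = 2.0 if piece.isupper() else 1.0
        if square in CENTER_SQUARES:
            value += 0.25
        score += value if piece.lower() == player else -value
    return score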
3. Chess:
Move Generator:
o Generate all legal moves for each piece (e.g., pawns, knights, bishops, rooks,
queens, kings).
o Handle special moves (e.g., castling, en passant, promotion).
Evaluation Function:
o Use piece values (e.g., pawn = 1, knight = 3, bishop = 3, rook = 5, queen = 9).
o Add positional bonuses (e.g., control of center squares, pawn structure, king
safety).
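A bare-bones material evaluation along these lines, assuming a hypothetical list of (piece, color) pairs for the pieces on the board; positional bonuses would be added in the same way:

PIECE_VALUES = {'pawn': 1, 'knight': 3, 'bishop': 3, 'rook': 5, 'queen': 9, 'king': 0}

def evaluate_chess(pieces):
    # Positive scores favour White, negative favour Black (material only).
    score = 0
    for piece, color in pieces:
        value = PIECE_VALUES[piece]
        score += value if color == 'white' else -value
    return score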
Alpha-Beta Pruning:

def alpha_beta(state, depth, alpha, beta, maximizing_player):
    # Returns (best evaluation, best move) for the player to move.
    if depth == 0 or state.is_terminal():
        return state.evaluate(), None  # assumes the state exposes an evaluate() method
    if maximizing_player:
        max_eval = -float('inf')
        best_move = None
        for move in state.generate_moves():
            new_state = state.apply_move(move)
            eval_score, _ = alpha_beta(new_state, depth - 1, alpha, beta, False)
            if eval_score > max_eval:
                max_eval = eval_score
                best_move = move
            alpha = max(alpha, eval_score)
            if beta <= alpha:
                break  # prune: the minimizer already has a better option elsewhere
        return max_eval, best_move
    else:
        min_eval = float('inf')
        best_move = None
        for move in state.generate_moves():
            new_state = state.apply_move(move)
            eval_score, _ = alpha_beta(new_state, depth - 1, alpha, beta, True)
            if eval_score < min_eval:
                min_eval = eval_score
                best_move = move
            beta = min(beta, eval_score)
            if beta <= alpha:
                break  # prune: the maximizer already has a better option elsewhere
        return min_eval, best_move
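A typical root call, assuming a hypothetical initial_state object that exposes the methods used above (is_terminal, evaluate, generate_moves, apply_move):

score, best_move = alpha_beta(initial_state, depth=4,
                              alpha=-float('inf'), beta=float('inf'),
                              maximizing_player=True)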
c. Comparing Effects
1. Increasing Search Depth:
Effect: Deeper search allows the agent to explore more moves ahead, leading to better
decision-making.
Trade-off: Increases computation time exponentially with depth (roughly b^d nodes for branching factor b and depth d).
2. Improving Move Ordering:
Effect: Better move ordering (e.g., prioritizing captures or checks) lets alpha-beta prune more branches, reducing the number of nodes explored.
Ideal Case: Perfect move ordering reduces the effective branching factor to the square root of the original branching factor, allowing roughly twice the search depth in the same time.
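A small move-ordering sketch, assuming hypothetical is_capture and gives_check helpers on the state; it would replace the plain state.generate_moves() call inside alpha_beta:

def ordered_moves(state):
    # Search captures and checking moves first so that cutoffs happen earlier.
    return sorted(state.generate_moves(),
                  key=lambda move: (state.is_capture(move), state.gives_check(move)),
                  reverse=True)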
3. Improving the Evaluation Function:
Effect: A more accurate evaluation function leads to better decisions at the same search depth.
Trade-off: May increase computation time per node.
Example Results
Chess:
Checkers:
Conclusion:
Increasing search depth improves performance but at a high computational cost.
Improving move ordering significantly reduces the effective branching factor,
approaching the ideal case.
Improving the evaluation function enhances decision quality without increasing search
depth.
The combination of these techniques leads to a highly effective game-playing agent.