
Unit-2

Searching Techniques
By:Ms. Yogita Kaushik
Asst.Prof.

Introduction
The way humans engage with technology has been fundamentally changed by
artificial intelligence (AI). Search algorithms are one of the essential elements of AI
because they are so important for information retrieval. These algorithms are made
to efficiently search through enormous volumes of data and present the user with
pertinent results.

Importance of search algorithms in information retrieval
Due to the following factors, search algorithms are essential to AI information
retrieval:
 Effective retrieval: They speed through vast amounts of data to quickly find
the information you're looking for.
 Precision & accuracy: These algorithms use relevance scoring and ranking
strategies to find the best results, improving retrieval efficiency.
 Scalability: They are capable of handling massive data collections, ensuring
effective retrieval even for huge volumes of data.
 Query optimization: To improve searches and produce accurate and
pertinent results, algorithms examine the structure of queries, user intent, and
context.
 Personalization & user preferences: Advanced algorithms adapt retrieval
results based on interactions and user preferences to offer a customized
experience.
 Organization of information: To organize and categorize material to make
exploration easier, search engines use clustering, classification, & topic
modeling.
 Continuous improvement: Over time, algorithms improve retrieval results
by making adjustments based on user input and system performance.

Types of search algorithms in AI


*Uninformed Search Algorithms
Uninformed search algorithms, also known as blind search algorithms, explore the
search space without using any domain-specific knowledge or heuristics. They have
no information about the problem beyond its definition and the structure of the
search space.

Uninformed search algorithms rely solely on the problem’s structure and the initial
state to make decisions. Uninformed search algorithms are:

Breadth-first Search
Breadth-first search (BFS) is a search algorithm that traverses all the nodes at a
given depth before moving on to the next depth level. It starts at the root node and
visits all of its neighboring nodes before proceeding to their neighbors, level by
level. BFS is guaranteed to find the shortest path between the starting node and any
other reachable node in an unweighted graph. However, it can be memory-intensive for
larger graphs due to the need to store all the visited nodes.
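As a minimal sketch, BFS on an unweighted graph can be written as follows. The adjacency-dictionary representation here is an assumed convention, not something the notes prescribe:

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    """Return the shortest path from start to goal in an unweighted graph."""
    frontier = deque([[start]])   # queue of paths, explored level by level
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path           # first path found is the shortest
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None                   # goal unreachable
```

For example, on graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}, the call bfs_shortest_path(graph, 'A', 'D') returns ['A', 'B', 'D'].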

Depth-first Search
Depth-first search (DFS) traverses as far as possible along each branch before
backtracking. It starts at the root node and follows one branch of neighbors until
it reaches a dead end, then backtracks to try the next branch. DFS is useful
for exploring all possible solutions in a large space, but may not find the optimal
solution in some cases.
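A minimal recursive DFS sketch, again using an adjacency-dictionary representation (an assumed convention):

```python
def dfs_path(graph, start, goal, visited=None):
    """Return one path from start to goal (not necessarily the shortest)."""
    if visited is None:
        visited = set()
    visited.add(start)
    if start == goal:
        return [start]
    for neighbor in graph.get(start, []):
        if neighbor not in visited:
            sub = dfs_path(graph, neighbor, goal, visited)
            if sub is not None:
                return [start] + sub
    return None  # dead end: backtrack (or goal unreachable)
```

On graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}, dfs_path(graph, 'A', 'D') follows the first branch ('B') all the way down before it would ever try 'C', returning ['A', 'B', 'D'].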

Depth-limited Search
Depth-limited search (DLS) is a variant of depth-first search that limits the
maximum depth of exploration. It stops exploring a branch when the maximum
depth is reached, even if the solution has not been found. DLS is useful for
exploring large spaces where the optimal solution is not required, but may miss the
solution if it is beyond the maximum depth.

Iterative Deepening Depth-first Search


Iterative deepening depth-first search (IDDFS) is a variant of depth-first search that
gradually increases the maximum depth of exploration until the solution is found.
It starts with a maximum depth of 1 and increases the depth by 1 in each iteration
until the solution is found. IDDFS combines the low memory footprint of DFS with the
completeness of BFS: it stores only the current path, yet is guaranteed to find the
shallowest solution.
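The iteration described above can be sketched as follows. The depth_limited helper is a hypothetical name introduced here; it also makes the depth-limited search of the previous section concrete. This sketch assumes a tree or DAG (it performs no cycle check):

```python
def depth_limited(graph, node, goal, limit):
    """Depth-limited DFS: explore no deeper than `limit` edges below `node`."""
    if node == goal:
        return [node]
    if limit == 0:
        return None  # depth limit reached before finding the goal
    for neighbor in graph.get(node, []):
        sub = depth_limited(graph, neighbor, goal, limit - 1)
        if sub is not None:
            return [node] + sub
    return None

def iddfs(graph, start, goal, max_depth=20):
    """Run depth-limited search with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        path = depth_limited(graph, start, goal, limit)
        if path is not None:
            return path  # first hit is at the shallowest possible depth
    return None
```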

Uniform Cost Search


Uniform cost search (UCS) expands the node with the lowest path cost first.
It starts at the root node and expands neighboring nodes in order of increasing
accumulated cost. UCS finds the optimal solution in a weighted graph, where each
edge carries a nonnegative cost.
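A sketch of UCS using a priority queue, where graph[node] is assumed to be a list of (neighbor, edge_cost) pairs:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand nodes in order of accumulated path cost; returns (cost, path)."""
    frontier = [(0, start, [start])]  # priority queue keyed on path cost
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path         # the first pop of the goal is optimal
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step in graph.get(node, []):
            if neighbor not in explored:
                heapq.heappush(frontier, (cost + step, neighbor, path + [neighbor]))
    return None
```

On graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1)], 'C': []}, the call uniform_cost_search(graph, 'A', 'C') returns (2, ['A', 'B', 'C']): the two-step route is cheaper than the direct edge of cost 4.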

Bidirectional Search
Bidirectional search is a search algorithm that starts from both the starting and
ending nodes and searches towards the middle. It expands neighboring nodes in both
directions until the two frontiers meet at a common node. Bidirectional search
greatly reduces the search space in large graphs: two searches of depth d/2
together expand far fewer nodes than a single search of depth d.
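A sketch of bidirectional BFS on an undirected graph. It assumes the adjacency lists list each edge in both directions:

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """BFS from both ends; graph is an undirected adjacency dictionary."""
    if start == goal:
        return [start]
    parents_f = {start: None}   # forward search tree
    parents_b = {goal: None}    # backward search tree
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        """Expand one node; return the meeting node if the frontiers touch."""
        node = frontier.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in parents:
                parents[neighbor] = node
                if neighbor in other_parents:
                    return neighbor
                frontier.append(neighbor)
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b)
        if meet is None:
            meet = expand(frontier_b, parents_b, parents_f)
        if meet is not None:
            # stitch the two half-paths together at the meeting node
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:
                path.append(n)
                n = parents_b[n]
            return path
    return None
```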

Uninformed algorithms are generally easy to implement but may be less efficient or
suboptimal in finding solutions compared to informed search algorithms.

*Informed Search Algorithms


Informed search algorithms, also known as heuristic search algorithms, use
domain-specific knowledge or heuristics to guide the search process. Heuristics are
estimates of the cost or effort required to reach the goal from a given state, helping
the algorithm to prioritize certain paths or nodes that appear more promising.
Informed search algorithms can be more efficient and effective in finding solutions
or optimal outcomes, as they use additional information to guide their decisions.
Common informed search algorithms are:

Best First Search Algorithm (Greedy Search)


The Best First Search Algorithm, also known as Greedy Search, selects the node that
appears closest to the goal state according to a heuristic function. The heuristic
function provides an estimate of the distance between the current node and the goal
state. The algorithm expands the node with the lowest heuristic value first, without
considering the actual cost of reaching that node. This can lead to a sub-optimal
solution if the heuristic function is not well designed, as the algorithm may
prioritize nodes that are not on the optimal path.
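A sketch of greedy best-first search. Here h is a caller-supplied heuristic function, and the priority queue is keyed on h alone, so actual path cost is deliberately ignored:

```python
import heapq

def greedy_best_first(graph, start, goal, h):
    """Always expand the node with the smallest heuristic estimate h(n)."""
    frontier = [(h(start), start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                heapq.heappush(frontier, (h(neighbor), neighbor, path + [neighbor]))
    return None  # no path found
```

With graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []} and h = {'S': 5, 'A': 1, 'B': 2, 'G': 0}, the search goes S → A → G because h('A') < h('B'), regardless of whether the route through B would have been cheaper in actual cost.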

A* Search Algorithm
The A* Search Algorithm is an informed search algorithm that combines the
advantages of both uniform cost search and best-first search. It uses a heuristic
function to estimate the distance from the current node to the goal state, but also
considers the actual cost of reaching that node. A* search algorithm evaluates each
node based on the sum of the cost of reaching that node and the heuristic value of
that node. It then searches the node with the lowest evaluation value first, which is
expected to be the most promising node for finding the optimal solution. A* search
algorithm is guaranteed to find the optimal solution if the heuristic function is
admissible and consistent. An admissible heuristic function never overestimates
the actual distance to the goal, and a consistent heuristic function satisfies the
condition that the heuristic estimate from any node to the goal is not greater than
the cost of getting to any neighboring node plus the heuristic estimate from that
node to the goal.
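A sketch of A* using the adjacency-list-with-costs convention (an assumed representation, not prescribed by the notes); h must be admissible for the result to be optimal:

```python
import heapq

def a_star(graph, start, goal, h):
    """graph[node] -> list of (neighbor, cost); returns (cost, path)."""
    # priority queue entries: (f = g + h, g, node, path)
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}  # cheapest known cost to reach each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for neighbor, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = new_g
                heapq.heappush(frontier,
                               (new_g + h(neighbor), new_g, neighbor, path + [neighbor]))
    return None
```

On graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)], 'G': []} with the admissible heuristic h = {'S': 2, 'A': 4, 'B': 1, 'G': 0}, A* returns (5, ['S', 'B', 'G']), avoiding the tempting cheap first step through A because f = g + h exposes its expensive remainder.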

These search algorithms represent different approaches for traversing and exploring
problem spaces in AI. The choice of algorithm depends on various factors such as
memory requirements, speed, and the need for optimal solutions. Informed search
algorithms like A* generally perform better when a suitable heuristic is available,
while uninformed search algorithms like DFS or BFS can be used when no
domain-specific knowledge is accessible.
Difference between Informed and Uninformed Search

Informed Search
 They contain information about the goal state.
 This information helps the search proceed more efficiently.
 The information is obtained from a heuristic function that estimates how close the
current state is to the goal state.
 Examples of informed search include greedy best-first search and A* search.
 It uses domain knowledge in the process of searching.
 It helps find the solution quickly.
 It may or may not be complete.
 It is computationally inexpensive.
 It consumes less time.
 It gives direction toward the solution.
 It is less lengthy to implement.

Uninformed Search
 They don't have any additional information.
 The information is only what is provided in the problem definition.
 The goal state can be reached using different orders and lengths of actions.
 Examples of uninformed search include depth-first search (DFS) and breadth-first
search (BFS).
 It doesn't use domain knowledge in the process of searching.
 It takes more time to find the solution.
 Algorithms such as BFS are complete, though DFS may not be in infinite spaces.
 It is computationally expensive.
 It consumes more time.
 There is no guidance toward the solution.
 It is lengthy to implement.
How search algorithms work in artificial
intelligence
Artificial intelligence (AI) search algorithms use a logical process to locate the
required data. Here is how they usually work:
 Define the search space: Create a model of potential states so they can be
explored.
 Start from the initial state: Begin the search from the initial state in the
search space.
 Explore neighboring states: Examine surrounding states using guidelines or
heuristics to assess their significance based on standards like resemblance.
 Move towards goal state: Iteratively advance toward the objective state by
employing strategies like backtracking or prioritization.
 Evaluate and improve: To increase accuracy and efficiency, continuously
monitor progress, and modify relevant criteria, heuristics, or user preferences.
 Reach the goal state: Stop searching after you've located the desired data or
the best match, which satisfies the set goal criteria.
 Performance optimization: To minimize computational resources &
retrieval time, optimize algorithms using methods like pruning, heuristic
optimization, or parallelization.

Advantages and limitations of search algorithms in AI
 An approach that is methodical and well-organized: Search algorithms
offer a structured way of quickly looking through enormous volumes of data,
ensuring a methodical retrieval procedure.
 Effective and precise retrieval: These algorithms make it possible to
identify pertinent information quickly, enhancing the effectiveness and
precision of information retrieval.
 Handling complex search spaces: Search algorithms can handle complex search
spaces, enabling them to navigate intricate data structures or problem
domains.
 Adaptability to different challenges: Search algorithms are flexible and
adaptable in numerous AI disciplines since they may be used to solve different
kinds of problems.

AI search algorithm limitations


 Performance problems with overly large or complex search spaces: When
the search space is too big or complicated, search algorithms may run into
performance problems, which might slow down retrieval or demand more
computation.
 Dependence on heuristics and input data quality: The effectiveness of the
search results is significantly influenced by the precision of the used heuristics
and the input data quality. The performance of search algorithms might be
impacted by inaccurate or lacking data.
 Non-guarantee of optimal solutions: In dynamic or incomplete search
spaces, where the best solution might not be recognizable within the current
limitations, search algorithms may not always guarantee finding the optimal
solution.

Applications of search algorithms in AI


Applications for search algorithms can be found throughout several AI fields,
including:
 Natural language processing: Search algorithms are used for information
extraction, query resolution, sentiment analysis, and other language
processing tasks in natural language using AI.
 Image recognition: These algorithms help in the search for pertinent photos
using user queries or matching visual cues.
 Recommendation systems: Search engines make personalized suggestions
possible by matching user preferences with products that are appropriate.
 Robotics: To ensure effective exploration and effective mobility, search
algorithms are essential to robot path planning and navigation.
 Data mining: These methods make it easier to do tasks like grouping,
classification, & anomaly detection by extracting useful insights and patterns
from vast datasets.
Improving information retrieval with search
algorithms
Think about the following concepts to improve information retrieval using search
algorithms:
 Algorithm selection: Carefully select the best algorithm based on the current
issue and the resources at hand. Making wise decisions requires an
understanding of the traits and constraints of various algorithms.
 Parallel computing: Utilizing parallel computing techniques, you can
enhance the search process. The retrieval process can be sped up and more
effectively handled by spreading the task across several processors or
machines.
 Refining heuristics: Heuristics can be improved by tweaking the search
engine's heuristics in order to increase precision and efficiency. Heuristics can
be modified in accordance with the knowledge base and problem domain to
improve retrieval.

Best Practices of Search Algorithms in AI


AI search algorithm best practices include:
 Problem understanding: Gaining an in-depth knowledge of the problem's
needs will help you choose an algorithm.
 Algorithm selection: Select the best algorithm depending on the features of
the problem and variables like efficiency, accuracy, scalability, & cost.
 Heuristic design: Create efficient heuristics to efficiently direct the search
process towards the intended outcome.
 Techniques for performance optimization: Examine methods like parallel
computing, memory-efficient data structures, and pruning.
 Evaluation and iteration: Constantly assess and enhance the performance of
the algorithm through feedback analysis and incremental advancements.
 Think about trade-offs and restrictions: To improve the performance of the
algorithm, take into consideration constraints, resource limits, and user
preferences.
 Testing and validation: Use relevant datasets and real-world examples to
thoroughly test and validate the method.
Local Search Algorithm
Local Search Algorithm is a type of state-space search optimization. It is used to
find the best solution to the stated problem. The Local Search Algorithm in
Artificial Intelligence starts with a random solution and then performs minor
changes to it until it finds the best solution it can.
Working of Local Search Algorithm
As we are now clear with the basic concept of Local Search Algorithm in Artificial
Intelligence, let's now discuss the working of Local Search Algorithm.
 Step 1: The process starts to find the best solution. It moves forward with a random solution
to the given problem.

 Step 2: The objective function checks the quality of the initial solution, like the
performance of the system, based on the problem constraints and needs.

 Step 3: Heuristics helps the algorithm to find the neighbour solution by making small
changes in the assumed solution. Heuristics is a technique to solve the problem using
practical experiences.

 Step 4: After selecting the neighbour solution, it is again checked with the help of the
objective function. At last, the best solution is selected after repeating the same process
again and again.

 Step 5: The program stops once a stopping criterion is met, such as a maximum time
limit, a maximum number of iterations, or a threshold value for the objective
function.

 Step 6: The last solution at which the program stops is considered the best solution for the
stated problem.
Types of Local Search Algorithm
There are three main types of Local Search Algorithm in state-space search
optimization.

Hill Climbing

Hill climbing is a type of Local Search Algorithm in Artificial Intelligence. It
follows a greedy approach: at each step it moves in the direction of improving
cost. The Hill Climbing method can produce a good solution to the travelling
salesman problem, which asks for a tour minimizing the distance covered by the
salesman.
The pseudo-code of the Hill Climbing algorithm is as follows.
def hill_climbing(problem):
    # Node is assumed to expose .state, .value and .successors()
    current = Node(problem.initial_state)
    while True:
        # successor of current with the highest objective function value
        neighbor = max(current.successors(), key=lambda n: n.value)
        if neighbor.value <= current.value:
            return current.state  # no better neighbor: local optimum reached
        current = neighbor

Explanation:
In the above code,
 First, we assign the initial state to current.

 Inside the while loop, we pick the successor with the highest objective function
value.

 If that neighbour's value is no better than the current one, we return the current
state: a local optimum has been reached.

 Otherwise, the neighbour becomes the new current node and the loop repeats.

The Hill Climbing algorithm has further three more variants. The names of those
variants are as follows.
 Random-restart Hill Climbing
 First-choice Hill Climbing
 Stochastic Hill Climbing
Local Beam Search
Local Beam Search is also a type of Local Search Algorithm and is based on
heuristics. It can be seen as a generalization of the Hill Climbing algorithm:
instead of keeping a single current solution, it maintains a beam (set) of k
candidate solutions.

At each iteration, the Local Beam Search algorithm generates all successors of the
k current states. It then evaluates these successors and keeps the k best, which
become the new current beam.

Simulated Annealing
Simulated Annealing is a heuristic algorithm used to solve optimization problems in
artificial intelligence. The main concept behind this algorithm is that it controls
the randomness of the search by gradually lowering a temperature parameter. High
temperature leads to exploring new regions of the search space and increases the
propensity to accept non-improving moves; as the temperature falls, the search
settles into a promising region.

Simulated Annealing is a very successful algorithm, as it solves many optimization
problems easily. Problems that use Simulated Annealing include:
 Travelling Salesman Problem
 Vehicle Routing Problem
 Job Shop Scheduling Problem
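As an illustrative sketch of the acceptance rule and cooling loop described above (the geometric cooling schedule and the default parameter values are assumptions for illustration, not prescribed by the notes):

```python
import math
import random

def simulated_annealing(cost, neighbor, initial, t_start=1.0, t_end=1e-3, alpha=0.95):
    """Minimize cost(state); neighbor(state) proposes a small random change."""
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t_start
    while t > t_end:
        candidate = neighbor(current)
        candidate_cost = cost(candidate)
        delta = candidate_cost - current_cost
        # always accept improvements; accept worse moves with probability
        # exp(-delta / t), which shrinks as the temperature t cools
        if delta < 0 or random.random() < math.exp(-delta / t):
            current, current_cost = candidate, candidate_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        t *= alpha  # geometric cooling schedule
    return best, best_cost
```

For example, minimizing f(x) = x**2 starting from x = 10 with neighbor(x) = x + random.uniform(-1, 1) drives the state toward 0, with occasional uphill moves early on while the temperature is high.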
Pros of Local Search Algorithm
There are many advantages of using the Local Search Algorithm in Artificial
Intelligence. Some of them are listed below.
 It is a very efficient algorithm as it only needs to explore a small portion of the complete
search space.

 It takes less time compared to other space search algorithms.

 It requires fewer conditions to be met than other search techniques.

 The code is easy to write for the local search algorithm, even for complex problems.
Cons of Local Search Algorithm
If there are many advantages to using the Local Search Algorithm in Artificial
Intelligence, then there must be a few disadvantages also. Some of the cons are
listed below.
 The main disadvantage of the local search algorithm is that it can get trapped in
local optima.

 If evaluating the cost function is expensive, the search becomes slow.

 The local search algorithm cannot tell the user whether the solution it found is
globally optimal.

What is Adversarial Search?


Adversarial Search in Artificial Intelligence is a type of search, used in
artificial intelligence, deep learning, machine learning, and computer vision, in
which one can trace the moves of an enemy or opponent. Such searches are useful
in chess, business strategy tools, trading platforms, and war-based games using
AI agents. In an adversarial search, the user can change the current state but has
no control over the next state.
The opponent controls the next state, which is therefore unpredictable. There may be
instances where more than one agent is searching for a solution in the same search
space, which is common in gameplay. In AI, a game is modeled using two key
components: a search problem and a heuristic evaluation function.
Different Game Scenarios using Adversarial Search
 Perfect Information: In a game with perfect information, agents can see the
entire board. Agents are able to view each other's moves and possess all of
the game's information. Go, Checkers and Chess are a few examples

 Imperfect Information: A game with imperfect information is one, such as
battleship, blind chess, or bridge, in which agents are not fully informed
about the game or aware of everything that is happening

 Deterministic Game: Games classified as deterministic involve no element
of randomness; instead, they adhere to a rigid structure and set of rules. A few
examples are tic tac toe, go, checkers, chess, and so on

 Non-deterministic Games: Games that involve a chance or luck element
and a variety of unpredictable outcomes are said to be non-deterministic.
Either dice or cards are used to introduce this element of chance or luck. Each
action-reaction is not fixed; they are random. We also refer to these games as
stochastic games. For instance, Monopoly, Poker, Backgammon, etc

 Zero-Sum games: These are strictly competitive games where the success of
one player is offset by the failure of another. Every player in these games
will have a different set of opposing strategies, and the net value of gain and
loss is zero. Every player aims to maximize gain or minimize loss, depending
on the conditions and surroundings of the game
Game Tree

A game tree has several nodes, with the root node at the top. Each node represents
a state of the game, and its edges represent the players' moves. Each layer in the
tree contains alternate turns for the Maximizer and the Minimizer. The Minimizer
minimizes the maximum loss, whereas the Maximizer maximizes the minimum gain.
Depending on the game setting and the opponent's strategy, a player takes on a
maximizer or a minimizer role.
The steps in the game are as follows:
 Each game has a starting state

 A game will have more than one person

 The root node may be the maximizer's turn; the maximizer fills an "X" in any of
the vacant cells, so there are nine possible actions

 It is now the minimizer's turn to evaluate the opponent's action and
consequently place an "O" in a vacant cell

 The maximizer selects an empty cell and fills it with "X" based on the previous
move by the minimizer

 Steps 4 and 5 are repeated, with optimal moves by the minimizer and maximizer
respectively, until one of the rows is completely filled with "X" or "O"

 The game is over whenever the terminal condition is attained in step 6

 Using the utility function, the ultimate result is obtained and declared. It
might be a Maximizer win, a Minimizer win, or a draw
Need of Adversarial Search by the Agents
The importance of Adversarial Search in Artificial Intelligence may be seen in
various games; some of the most essential elements are as follows:
 This method can be used to observe the movement of the opposing
player, and the strategy must be developed accordingly based on these
observations. This strategy also addresses the end goal's path and how to reach
it. So, with this technique or algorithm in place, we may modify various game
situations

 The games that use such algorithms have become so clever that they can generate
unexpected moves that can upset the other player. As a result, the opposing
player's moves are harder to foresee

 By incorporating adversarial search into games, the competitiveness of the
game increases significantly, attracting the user or player to play the game
more frequently

 Because these searches make predictions, the rules and regulations must be
revised frequently to ensure that the nature of the competition does not become
stale
Important Features of Adversarial Search
 Adversarial games can be classified as having perfect or imperfect
information. Every player in a game with perfect information is fully aware of
the game's present condition; yet, in a game with imperfect information,
certain information is kept hidden from the players

 Search strategies such as minimax and alpha-beta pruning are used in
adversarial games to discover the best move. These algorithms aid in choosing
the best move for a player by weighing the possible outcomes of each move

 Because of the size of the game tree, it may not always be feasible to search it
exhaustively. In these cases, heuristics are used to approximate the optimal
move. Heuristics are shortcuts that allow the algorithm to quickly evaluate the
promise of a move without having to look through the entire game tree

What is Alpha Beta Pruning?


Alpha beta pruning in Artificial Intelligence is a way of improving the minimax
algorithm. In the minimax search method, the number of game states that must be
investigated grows exponentially with the depth of the tree. We cannot remove the
exponent, but we can effectively cut it in half. The technique, known as pruning,
allows us to compute the correct minimax choice without inspecting every node of
the game tree. Alpha-beta pruning may be applied at any level of a tree, and it can
sometimes prune an entire sub-tree, not just the leaves. It is named alpha-beta
pruning in artificial intelligence because it involves two parameters, alpha and
beta.
Condition for Alpha Beta Pruning
The two parameters of alpha beta pruning in artificial intelligence are
Alpha: The best (highest-value) choice found so far along the path of the
maximizer. The starting value of alpha is -∞.
Beta: The best (lowest-value) choice found so far along the path of the
minimizer. The starting value of beta is +∞.
Minimax Algorithm
The Minimax algorithm is a backtracking algorithm used in game theory and
decision-making. It is used to find the optimal move for a player, assuming that the
opponent is also playing optimally. It is commonly used for turn-based two-player
games such as chess, checkers, tic-tac-toe, etc.
In minimax, the two players are called minimizer and maximizer. The minimizer
tries to get the lowest possible score, while the maximizer attempts to get the
maximum possible score. The minimax method determines the best move for
MAX, the root node player. The search tree is built by recursively extending all
nodes from the root in a depth-first manner until the game ends or the maximum
search depth is achieved.

Key Points about Alpha Beta Pruning


 Alpha: At each point along the Maximizer path, Alpha is the best option or the highest
value we've discovered. –∞ is the initial value for alpha.

 Beta: At every point along the Minimizer route, Beta is the best option or the lowest value
we've identified. The value of beta is initially set to +∞.

 The condition for Alpha-beta Pruning is α >= β.

 Alpha is updated only on MAX's turn, and beta is updated only on MIN's
turn.

 The MAX player will only update alpha values, whereas the MIN player will only
update beta values.

 While backing values up the tree, node values (not alpha and beta values) are
passed to parent nodes.

 Only child nodes receive alpha and beta values; these are passed down the tree.


Pseudo-code for Alpha-beta Pruning
Following is the pseudo-code for alpha-beta pruning.
function alpha_beta(node, depth, alpha, beta, maximizing_player):
    if depth == 0 or node is a terminal node:
        return the heuristic value of node

    if maximizing_player:    // for the max player
        v = -infinity
        for each child of node:
            v = max(v, alpha_beta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, v)
            if beta <= alpha:
                break        // beta cut-off
        return v
    else:                    // for the min player
        v = +infinity
        for each child of node:
            v = min(v, alpha_beta(child, depth - 1, alpha, beta, True))
            beta = min(beta, v)
            if beta <= alpha:
                break        // alpha cut-off
        return v
Working of Alpha Beta Pruning
Let us consider a two-player search tree to understand further how Alpha-beta
pruning in artificial intelligence works.
Step 1
The Max player starts by traveling from node A to node B, with α = -∞ and β = +∞,
and delivers these alpha and beta values to node B. Node B, in turn, passes the
same values (α = -∞, β = +∞) on to its child D.

Step 2
As Max's turn at Node D approaches, the value of α will be decided. When the
value of α is compared to 2, then 3, the value at node D is max (2, 3) = 3. Hence,
the node value is also 3.
Step 3
The algorithm returns to node B, where the value of β will change since this is
Min's turn. Now β = +∞ is compared with the value of the explored successor,
giving min(∞, 3) = 3, so at node B we have α = -∞ and β = 3. In the next phase,
the algorithm visits the next successor of node B, node E, and passes it the
values α = -∞ and β = 3.

Step 4
Max will take over at node E and change alpha's value. The current existing value
of alpha will be compared to 5, resulting in max (-∞, 5) = 5, and at node E, where
α>=β, the right successor of E will be pruned, and the algorithm will not traverse
it, resulting in the value at node E being 5.

Step 5
We now traverse the tree backward, from node B to node A. At node A, alpha will
be converted to the greatest feasible value of 3, as max (-∞, 3)= 3, and β = +∞.
These two values will now be passed on to Node C, A's right-hand successor.
At node C, the values α = 3 and β = +∞ will be passed on to node F, and node F
will receive the identical values.
Step 6
At node F (a Max node), the left child has value 0 and the right child has value
1, so the node value of F is max(0, 1) = 1, while α at F remains 3.

Step 7
Node F sends the node value 1 to node C. At C, with α = 3 and β = +∞, beta is
adjusted by comparison with 1, giving β = min(∞, 1) = 1. Now that α = 3 and
β = 1, the condition α >= β is met, so the algorithm prunes the next child of C,
which is G, rather than evaluating the entire sub-tree under G.

Step 8
C now returns the value of 1 to A, with max (3, 1) = 3 being the optimum result for
A. The final game tree is shown here, with nodes that have been calculated and
nodes that have never been computed. As a result, in this case, the ideal value for
the maximizer is 3.

Move Ordering in Alpha Beta Pruning


The sequence in which nodes are inspected determines the efficiency of alpha beta
pruning. When it comes to alpha beta pruning in artificial intelligence, move
ordering is crucial.
Move ordering are of two types in alpha beta pruning in artificial intelligence:
 Worst Ordering: In some circumstances, alpha beta pruning prunes none of the
nodes and behaves like a conventional minimax algorithm. Maintaining the alpha and
beta variables then costs time without yielding any benefit. In pruning, this is
known as the Worst Ordering. In this situation, the best move lies on the right
side of the tree.
 Ideal Ordering: In other circumstances of alpha beta pruning in artificial
intelligence, the algorithm prunes a large number of nodes. In pruning, this is
referred to as Ideal Ordering. The best move lies on the left side of the tree in
this situation. Because the underlying DFS explores the left side of the tree
first, with ideal ordering alpha-beta pruning can search roughly twice as deep as
plain minimax in the same time.
Rules to find Good Ordering
For finding the effective alpha-beta pruning ordering, we have to follow some
rules:
 In the tree, sort the nodes such that the better ones are checked first.

 The optimal move is made from the shallowest node.

 We can keep track of the states since there's a chance they'll happen again.

 When deciding on the right step, make use of your subject expertise. For example, in chess,
consider the following order: captures first, threats second, forward movements third,
backward moves fourth.
Implementation in Python
# Initial values of alpha and beta
MAX, MIN = 10000000, -10000000

# Minimax with alpha-beta pruning: returns the optimal value for the current player
def minimax(depth, index, maximizingPlayer, values, alpha, beta):

    # Terminating condition: leaf level reached
    if depth == 3:
        return values[index]

    if maximizingPlayer:
        optimum = MIN
        # Recur for left and right children
        for i in range(0, 2):
            val = minimax(depth + 1, index * 2 + i, False, values, alpha, beta)
            optimum = max(optimum, val)
            alpha = max(alpha, optimum)
            # Alpha-beta pruning condition
            if beta <= alpha:
                break
        return optimum
    else:
        optimum = MAX
        # Recur for left and right children
        for i in range(0, 2):
            val = minimax(depth + 1, index * 2 + i, True, values, alpha, beta)
            optimum = min(optimum, val)
            beta = min(beta, optimum)
            # Alpha-beta pruning condition
            if beta <= alpha:
                break
        return optimum

# Driver code
if __name__ == "__main__":
    values = [5, 7, 12, 6, 3, 18, -9, 4]
    print("The value is :", minimax(0, 0, True, values, MIN, MAX))
