AI Unit 2 Notes
Searching Techniques
By: Ms. Yogita Kaushik, Asst. Prof.
Introduction
The way humans engage with technology has been fundamentally changed by
artificial intelligence (AI). Search algorithms are one of the essential elements of AI
because they are so important for information retrieval. These algorithms are made
to efficiently search through enormous volumes of data and present the user with
pertinent results.
Uninformed search algorithms rely solely on the problem’s structure and the initial
state to make decisions. Uninformed search algorithms are:
Breadth-first Search
Breadth-first search (BFS) is a search algorithm that traverses all the nodes at a
given depth before moving on to the next depth level. It starts at the root node,
visits all of its neighbors, then their neighbors, and so on. In an unweighted
graph, BFS is guaranteed to find the shortest path between the starting node and
any other reachable node. However, it can be memory-intensive for larger graphs
because it must store every visited node.
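As a sketch of the idea (the graph and node names below are invented purely for illustration), BFS can be implemented with a queue of partial paths:

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    # Expand nodes level by level; the first path that reaches `goal`
    # is a shortest one (in number of edges) in an unweighted graph.
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # goal is not reachable from start

# Illustrative unweighted graph as an adjacency list
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs_shortest_path(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Storing one path per queue entry makes the memory cost of BFS visible: on wide graphs, the queue can hold a large share of the graph at once.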
Depth-first Search
Depth-first search (DFS) traverses as far as possible along each branch before
backtracking. It starts at the root node and follows one branch of neighbors until
it reaches a dead end, then backtracks and tries the next branch. DFS is useful
for exploring all possible solutions in a large space, but it may not find the
optimal solution.
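A minimal recursive sketch of DFS, again with an invented example graph:

```python
def dfs(graph, node, visited=None):
    # Follow one branch as deep as possible, then backtrack.
    if visited is None:
        visited = []
    visited.append(node)
    for neighbor in graph.get(node, []):
        if neighbor not in visited:
            dfs(graph, neighbor, visited)
    return visited

# Illustrative graph: D is reachable via both B and C
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(dfs(graph, "A"))  # ['A', 'B', 'D', 'C']
```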
Depth-limited Search
Depth-limited search (DLS) is a variant of depth-first search that limits the
maximum depth of exploration. It stops exploring a branch when the maximum
depth is reached, even if the solution has not been found. DLS is useful for
exploring large spaces where the optimal solution is not required, but may miss the
solution if it is beyond the maximum depth.
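DLS can be sketched as a DFS that carries a remaining-depth counter; the example graph is invented for illustration:

```python
def dls(graph, node, goal, limit):
    # Stop exploring a branch once the depth limit is used up,
    # even if the goal might lie deeper.
    if node == goal:
        return True
    if limit == 0:
        return False  # cutoff reached
    return any(dls(graph, child, goal, limit - 1)
               for child in graph.get(node, []))

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(dls(graph, "A", "D", 2))  # True: D is two edges from A
print(dls(graph, "A", "D", 1))  # False: D lies beyond the depth limit
```

The second call shows the weakness mentioned above: the solution exists but is missed because it sits below the depth limit.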
Bidirectional Search
Bidirectional search is a search algorithm that starts from both the starting and
ending nodes and searches towards the middle. It traverses all the neighboring
nodes in both directions until the two frontiers meet at a common node.
Bidirectional search is useful for reducing the search space in large graphs,
because two small searches from either end together expand far fewer nodes than
a single search across the whole graph.
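One way to sketch the idea, assuming an undirected graph given as a symmetric adjacency list (the graph itself is invented for the example), is to grow a BFS frontier from each end until the frontiers touch:

```python
def bidirectional_search(graph, start, goal):
    # Expand two BFS frontiers, one from each end, until they meet.
    if start == goal:
        return True
    frontier_start, frontier_goal = {start}, {goal}
    visited_start, visited_goal = {start}, {goal}
    while frontier_start and frontier_goal:
        # Always expand the smaller frontier to keep the search balanced.
        if len(frontier_start) > len(frontier_goal):
            frontier_start, frontier_goal = frontier_goal, frontier_start
            visited_start, visited_goal = visited_goal, visited_start
        next_frontier = set()
        for node in frontier_start:
            for neighbor in graph.get(node, []):
                if neighbor in visited_goal:
                    return True  # the two searches have met
                if neighbor not in visited_start:
                    visited_start.add(neighbor)
                    next_frontier.add(neighbor)
        frontier_start = next_frontier
    return False

# Illustrative undirected chain A - B - C - D
graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(bidirectional_search(graph, "A", "D"))  # True
```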
A* Search Algorithm
The A* Search Algorithm is an informed search algorithm that combines the
advantages of both uniform cost search and best-first search. It uses a heuristic
function to estimate the distance from the current node to the goal state, but also
considers the actual cost of reaching that node. A* search algorithm evaluates each
node based on the sum of the cost of reaching that node and the heuristic value of
that node. It then searches the node with the lowest evaluation value first, which is
expected to be the most promising node for finding the optimal solution. A* search
algorithm is guaranteed to find the optimal solution if the heuristic function is
admissible and consistent. An admissible heuristic function never overestimates
the actual distance to the goal, and a consistent heuristic function satisfies the
condition that the heuristic estimate from any node to the goal is not greater than
the cost of getting to any neighboring node plus the heuristic estimate from that
node to the goal.
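A compact sketch using a priority queue; the weighted graph and the heuristic table h below are invented for illustration (h never overestimates the true remaining cost, so it is admissible):

```python
import heapq

def a_star(graph, h, start, goal):
    # Each entry is (f, g, node, path) with f(n) = g(n) + h(n):
    # actual cost so far plus the heuristic estimate to the goal.
    open_heap = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(open_heap,
                               (new_g + h[neighbor], new_g,
                                neighbor, path + [neighbor]))
    return None, float("inf")

# Illustrative weighted graph and an admissible heuristic
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)],
         "B": [("G", 5)], "G": []}
h = {"S": 7, "A": 6, "B": 2, "G": 0}
print(a_star(graph, h, "S", "G"))  # (['S', 'A', 'B', 'G'], 8)
```

Nodes are expanded in order of f = g + h; re-pushing a node whenever a cheaper g is found keeps the result optimal even though h here is only admissible.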
Informed Search
They contain information on goal state.
It helps search efficiently.
The information is obtained by a function that helps estimate how close a current state is,
to the goal state.
Examples of informed search include greedy best-first search and A* search.
It uses the knowledge in the process of searching.
It helps find the solution quickly.
It may or may not be complete.
It is inexpensive.
It consumes less time.
It gives the direction about the solution.
It is less lengthy to implement.
Uninformed Search
They don’t have any additional information.
The information is only provided in the problem definition.
The goal state can be reached using different order and length of actions.
Examples of uninformed search include depth first search (DFS) and breadth first search
(BFS).
It doesn’t use the knowledge in the process of searching.
It takes more time to show the solution.
It is complete when the branching factor is finite (e.g., BFS).
It is expensive.
It consumes moderate time.
There is no suggestion regarding finding the solution.
It is lengthy to implement.
How search algorithms work in artificial
intelligence
Artificial intelligence (AI) search algorithms use a logical process to locate the
required data. Here is how they usually work:
Define the search space: Create a model of potential states so they can be
explored.
Start from the initial state: Begin the search from the problem's initial
state in the search space.
Explore neighboring states: Examine surrounding states using guidelines or
heuristics to assess their significance based on standards like resemblance.
Move towards goal state: Iteratively advance toward the objective state by
employing strategies like backtracking or prioritization.
Evaluate and improve: To increase accuracy and efficiency, continuously
monitor progress, and modify relevant criteria, heuristics, or user preferences.
Reach the goal state: Stop searching after you've located the desired data or
the best match, which satisfies the set goal criteria.
Performance optimization: To minimize computational resources &
retrieval time, optimize algorithms using methods like pruning, heuristic
optimization, or parallelization.
Local Search Algorithm
A local search algorithm moves from solution to solution in the space of candidate
solutions by applying small local changes, until an acceptable solution is found or
a stopping criterion is met. It works through the following steps.
Step 1: The algorithm starts with an assumed initial solution to the stated problem.
Step 2: The objective function checks the quality of the initial solution, like the
performance of the system, based on the problem constraints and needs.
Step 3: Heuristics helps the algorithm to find the neighbour solution by making small
changes in the assumed solution. Heuristics is a technique to solve the problem using
practical experiences.
Step 4: After selecting the neighbour solution, it is again checked with the help of the
objective function. At last, the best solution is selected after repeating the same process
again and again.
Step 5: The program stops once a stopping criterion is met, such as a maximum time
limit, a maximum number of iterations, or a threshold value for the objective
function.
Step 6: The last solution at which the program stops is considered the best solution for the
stated problem.
Types of Local Search Algorithm
There are three main types of local search algorithms used in state-space
optimization.
Hill Climbing
Explanation:
The algorithm first assigns the initial state to the current node. In each
iteration, only the best successor's objective value is considered. If that
neighbour is not better than the current node, the current node is returned as
the result.
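The loop described above can be sketched as follows; the objective function and neighbour generator are toy examples chosen only for illustration:

```python
def hill_climbing(objective, neighbors, initial_state):
    # Assign the initial state to `current`, then repeatedly move to the
    # best successor; stop when no neighbour improves on the current node.
    current = initial_state
    while True:
        successor = max(neighbors(current), key=objective)
        if objective(successor) <= objective(current):
            return current  # local (here also global) optimum reached
        current = successor

# Toy problem: maximize -(x - 7)^2 over the integers
objective = lambda x: -(x - 7) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climbing(objective, neighbors, 0))  # 7
```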
The Hill Climbing algorithm has further three more variants. The names of those
variants are as follows.
Random-restart Hill Climbing
First-choice Hill Climbing
Stochastic Hill Climbing
Local Beam Search
Local Beam Search is also a type of Local Search Algorithm and is based on the
heuristic algorithm. The Local Beam Search algorithm is the updated version of
the Hill Climbing algorithm. Instead of a single current node, it maintains a
beam (set) of k candidate solutions.
In each iteration, the Local Beam Search algorithm generates all successors of
the current k solutions, evaluates them, and keeps the k best, which become the
new beam.
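A sketch on the same toy objective used for hill climbing (all names here are illustrative):

```python
def local_beam_search(objective, neighbors, initial_states, k, iterations=100):
    # Maintain a beam of the k best states instead of a single current state.
    beam = list(initial_states)
    for _ in range(iterations):
        # Generate all successors of every state in the beam ...
        candidates = {s for state in beam for s in neighbors(state)} | set(beam)
        # ... and keep only the k best according to the objective function.
        new_beam = sorted(candidates, key=objective, reverse=True)[:k]
        if new_beam == beam:
            break  # the beam has stopped improving
        beam = new_beam
    return beam[0]  # best state found

objective = lambda x: -(x - 7) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(local_beam_search(objective, neighbors, [0, 20], k=2))  # 7
```

Because the k states share one candidate pool, the beam quickly concentrates around the most promising region instead of running k independent climbs.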
Simulated Annealing
Simulated Annealing follows the heuristic approach to solve optimization
problems in artificial intelligence. The main idea behind this algorithm is that
it controls the randomness of the search process by gradually lowering a
temperature parameter. A high temperature encourages exploring new regions of
the search space and increases the propensity to accept non-improving moves; as
the temperature cools, the search becomes greedier.
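A sketch of the idea on the same toy objective; the temperature schedule and parameter values are illustrative choices, not part of the notes above:

```python
import math
import random

def simulated_annealing(objective, neighbor, initial_state,
                        temp=100.0, cooling=0.95, steps=1000):
    # At high temperature, worsening moves are accepted often, so new
    # regions of the search space get explored; cooling makes the
    # search increasingly greedy.
    current = initial_state
    best = current
    for _ in range(steps):
        candidate = neighbor(current)
        delta = objective(candidate) - objective(current)
        # Always accept improvements; accept worsening moves with
        # probability exp(delta / temp).
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = candidate
        if objective(current) > objective(best):
            best = current
        temp *= cooling  # lower the temperature each step
    return best

random.seed(0)  # fixed seed so the illustrative run is repeatable
objective = lambda x: -(x - 7) ** 2
neighbor = lambda x: x + random.choice([-1, 1])
print(simulated_annealing(objective, neighbor, 0))
```

Early iterations wander because the acceptance probability is high; once the temperature is low, only improving moves survive and the search settles at the optimum.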
Pros of Local Search Algorithm
The code of a local search algorithm is simple, even for complex problems.
Cons of Local Search Algorithm
Along with its many advantages, the Local Search Algorithm in Artificial
Intelligence also has a few disadvantages. Some of the cons are listed below.
The main disadvantage of the local search algorithm is that it gets trapped in the local
optima.
If evaluating the cost function of the problem is expensive, the search becomes slow.
The local search algorithm cannot tell the user whether the solution it found is the
global optimum.
Adversarial Search
Adversarial search is used in competitive, multi-agent environments such as
games, where agents have conflicting goals and each agent must plan against the
moves of its opponent. One important class of such games is:
Zero-Sum games: These are strictly competitive games where the success of
one player is offset by the failure of another. Every player in these games
will have a different set of opposing strategies, and the net value of gain and
loss is zero. Every player aims to maximize gain or minimize loss, depending
on the conditions and surroundings of the game.
Game Tree
It has several nodes, with the Root node at the top. Each node represents the
current state of the game and contains the moves of the players at its
edge. Each layer in the tree contains alternate turns for Maximizer and Minimizer.
Minimizer contains the loss and minimizes the maximum loss, whereas Maximizer
maximizes the minimal gain. Depending on the game setting and the opponent's
strategy, a player takes on a maximizer or a minimizer role.
The steps in the game are as follows:
Each game has a starting state.
The root node may hold the turn of the maximizer, who fills "X" in any of the
vacant cells on the node's edge. As a result, there are nine possible actions.
The minimizer then fills "0" in one of the remaining vacant cells.
The maximizer selects an empty cell and fills it with "X" based on the previous
move by the minimizer.
These two steps are repeated to any depth, with optimal moves by the minimizer
and maximizer, respectively, until one of the rows, columns, or diagonals is
completely filled with "X" or "0", or the board is full.
Using the utility function, the final result is obtained and declared. It might
be a Maximizer win, a Minimizer win, or a draw.
Need of Adversarial Search by the Agents
The importance of Adversarial Search in Artificial Intelligence may be seen in
various games; some of the most essential elements are as follows:
This method can be used to observe the movement of the opposing
player, and the strategy must be developed accordingly based on these
observations. This strategy also addresses the end goal's path and how to reach
it. So, with this technique or algorithm in place, we may modify various game
situations
The games that use such algorithms have become so capable that they can
generate unexpected moves that unsettle the other player. As a result, the
opposing player's moves are harder to foresee.
Because of the size of the game tree, it may not always be possible to search it
exhaustively. In these cases, heuristics are used to approximate the optimal
move. Heuristics are shortcuts that allow the algorithm to quickly evaluate the
promise of a move without having to look through the entire game tree.
Alpha: At every point along the Maximizer route, Alpha is the best option or the highest value
we've identified. The value of alpha is initially set to -∞.
Beta: At every point along the Minimizer route, Beta is the best option or the lowest value
we've identified. The value of beta is initially set to +∞.
Alpha is updated only when it's MAX's time, and Beta can only be updated when it's MIN's
turn.
The MAX player will only update alpha values, whereas the MIN player will only update
beta values.
While backing up the tree, node values (not the alpha and beta values) are passed
to the parent nodes.
Step 1
The Max player begins at the root node A with the initial values α = -∞ and
β = +∞, and these values are passed down to node B and then to node D.
Step 2
Node D is a Max node, so the value of α is decided there. The current α is
compared first with 2 and then with 3, so the value at node D is max(2, 3) = 3,
which also becomes the node value.
Step 3
The algorithm returns to node B, where the value of β will change, since it is
Min's turn. Now β = +∞ is compared with the value of the available successor
node, i.e., min(∞, 3) = 3, so node B now has α = -∞ and β = 3. In the next
phase, the algorithm visits the next successor of node B, node E, and passes it
the values α = -∞ and β = 3.
Step 4
Max takes over at node E and updates alpha. The current value of alpha is
compared with 5: max(-∞, 5) = 5. Now α = 5 and β = 3 at node E, so α >= β; the
right successor of E is pruned, the algorithm does not traverse it, and the
value at node E becomes 5.
Step 5
We now move back up the tree, from node B to node A. At node A, alpha is updated
to the highest available value, 3, since max(-∞, 3) = 3, with β = +∞. These two
values are now passed on to node C, the right successor of A.
At node C, the values α = 3 and β = +∞ are passed on to node F, and node F
receives the same values.
Step 6
At node F, the value of α is compared with the left child, 0: max(3, 0) = 3. It
is then compared with the right child, 1: max(3, 1) = 3. So α remains 3, but the
node value of F becomes max(0, 1) = 1.
Step 7
Node F returns the node value 1 to node C. At C, with α = 3 and β = +∞, the
value of Beta is adjusted by comparison with 1: min(∞, 1) = 1. Now α = 3 and
β = 1, so the condition α >= β is met, and the algorithm prunes the next child
of C, which is G, rather than computing the entire sub-tree of G.
Step 8
C now returns the value of 1 to A, with max (3, 1) = 3 being the optimum result for
A. The final game tree is shown here, with nodes that have been calculated and
nodes that have never been computed. As a result, in this case, the ideal value for
the maximizer is 3.
Move ordering improves how much alpha-beta can prune:
We can keep track of previously seen states, since there is a chance they will
occur again.
When deciding on the right move order, make use of domain expertise. For
example, in chess, consider captures first, threats second, forward moves third,
and backward moves fourth.
Implementation in Python
# Initial values of Alpha and Beta
MAX, MIN = 1000, -1000

# Minimax with alpha-beta pruning; returns the optimal value for the current player
def minimax(depth, index, maximizingPlayer, values, alpha, beta):
    # Terminating condition: a leaf of the depth-3 game tree is reached
    if depth == 3:
        return values[index]
    if maximizingPlayer:
        optimum = MIN
        for i in range(2):
            val = minimax(depth + 1, index * 2 + i, False, values, alpha, beta)
            optimum = max(optimum, val)
            alpha = max(alpha, optimum)
            if beta <= alpha:
                break  # prune the remaining children
        return optimum
    else:
        optimum = MAX
        for i in range(2):
            val = minimax(depth + 1, index * 2 + i, True, values, alpha, beta)
            optimum = min(optimum, val)
            beta = min(beta, optimum)
            if beta <= alpha:
                break  # prune the remaining children
        return optimum

# Driver Code
if __name__ == "__main__":
    # Leaf values consistent with the walkthrough above
    # (the pruned leaves are illustrative)
    values = [2, 3, 5, 9, 0, 1, 7, 5]
    print("The optimal value is:", minimax(0, 0, True, values, MIN, MAX))
    # The optimal value is: 3