
Chapter 2

(8 Hours)
UNIT-B
Problem solving techniques: State space search, control strategies, heuristic search, problem
characteristics, production system characteristics, Generate and test, Hill climbing, Best first
search, A* search, Constraint satisfaction problem, Means-ends analysis, Min-Max search, Alpha-
Beta pruning, Additional refinements, Iterative deepening.
Logic: Propositional logic, predicate logic, Resolution, Resolution in propositional logic and
predicate logic, Clause form, unification algorithm

A state space is a way to mathematically represent a problem by defining all the possible states
in which the problem can be. This is used in search algorithms to represent the initial state, goal
state, and current state of the problem. Each state in the state space is represented using a set of
variables.

The efficiency of the search algorithm greatly depends on the size of the state space, and it is
important to choose an appropriate representation and search strategy to search the state space
efficiently.

One of the most well-known state space search algorithms is the A* algorithm. Other commonly
used state space search algorithms include breadth-first search (BFS), depth-first search
(DFS), hill climbing, simulated annealing, and genetic algorithms.

Steps in State Space Search

The steps involved in state space search are as follows:


 To begin the search process, we set the current state to the initial state.
 We then check if the current state is the goal state. If it is, we terminate the algorithm and
return the result.
 If the current state is not the goal state, we generate the set of possible successor states
that can be reached from the current state.
 For each successor state, we check if it has already been visited. If it has, we skip it, else
we add it to the queue of states to be visited.
 Next, we set the next state in the queue as the current state and check if it's the goal state.
If it is, we return the result. If not, we repeat the previous step until we find the goal state
or explore all the states.
 If all possible states have been explored and the goal state has not been found, we
return with no solution. A minimal sketch of this loop follows the list.
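
The loop above can be written down directly. The sketch below is a minimal, illustrative version assuming two hypothetical problem-specific helpers, is_goal(state) and successors(state), which are not defined in this chapter:

from collections import deque

def state_space_search(initial_state):
    # frontier holds states to be visited; a FIFO queue gives breadth-first order
    frontier = deque([initial_state])
    visited = {initial_state}
    while frontier:
        current = frontier.popleft()        # next state becomes the current state
        if is_goal(current):                # hypothetical goal test
            return current
        for succ in successors(current):    # hypothetical successor generator
            if succ not in visited:         # skip already-visited states
                visited.add(succ)
                frontier.append(succ)
    return None                             # all states explored, no solution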

Example of State Space Search

The 8-puzzle problem is a commonly used example of a state space search. It is a sliding puzzle
game consisting of 8 numbered tiles arranged in a 3x3 grid and one blank space. The game aims
to rearrange the tiles from their initial state to a final goal state by sliding them into the blank
space.

To represent the state space in this problem, we use the nine tiles in the puzzle and their
respective positions in the grid. Each state in the state space is represented by a 3x3 array with
values ranging from 1 to 8, and the blank space is represented as an empty tile.

The initial state of the puzzle represents the starting configuration of the tiles, while the goal
state represents the desired configuration. Search algorithms utilize the state space to find a
sequence of moves that will transform the initial state into the goal state.
Uninformed search on this state space is guaranteed to find a solution if one exists, but it can become very slow for larger state spaces.
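
As a rough illustration, one convenient way to encode an 8-puzzle state is a flat tuple of nine values with 0 standing for the blank; the goal layout and the move directions below are assumptions of this sketch:

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 is the blank tile

def successors(state):
    # generate every state reachable by sliding one tile into the blank
    blank = state.index(0)
    row, col = divmod(blank, 3)
    result = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # up, down, left, right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            swap = 3 * r + c
            board = list(state)
            board[blank], board[swap] = board[swap], board[blank]
            result.append(tuple(board))
    return result

Because each state is a hashable tuple, it can go straight into the visited set of the search loop sketched earlier.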

Types of search algorithms:

There are far too many powerful search algorithms to cover here. Instead, this section
discusses six of the fundamental search algorithms, divided into two categories: uninformed
and informed.

Uninformed Search Algorithms:

The search algorithms in this section have no additional information on the goal node other
than that provided in the problem definition. The plans to reach the goal state from the start
state differ only by the order and/or length of actions. Uninformed search is also called blind
search. These algorithms can only generate the successors and differentiate between goal
states and non-goal states.
The following uninformed search algorithms are discussed in this section.
1. Depth First Search
2. Breadth First Search
3. Uniform Cost Search
Depth First Search:
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data
structures. The algorithm starts at the root node (selecting some arbitrary node as the root node
in the case of a graph) and explores as far as possible along each branch before backtracking. It
uses a last-in first-out (LIFO) strategy and hence is implemented using a stack.
Example:
Question. Which solution would DFS find to move from node S to node G if run on the graph
below?
Solution. The equivalent search tree for the above graph is as follows. As DFS traverses the
tree “deepest node first”, it would always pick the deeper branch until it reaches the solution
(or it runs out of nodes, and goes to the next branch). The traversal is shown in blue arrows.

Path: S -> A -> B -> C -> G
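
A minimal DFS sketch in Python, assuming the graph is supplied as a plain adjacency dict such as {'S': ['A', 'D'], ...} (the example graph itself is not reproduced here):

def dfs(graph, start, goal):
    stack = [(start, [start])]            # LIFO stack of (node, path so far)
    visited = set()
    while stack:
        node, path = stack.pop()          # deepest frontier node first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push in reverse so the first-listed neighbor is explored first
        for neighbor in reversed(graph.get(node, [])):
            stack.append((neighbor, path + [neighbor]))
    return None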

Breadth First Search:


Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph data
structures. It starts at the tree root (or some arbitrary node of a graph, sometimes referred to as
a ‘search key’), and explores all of the neighbor nodes at the present depth prior to moving on
to the nodes at the next depth level. It is implemented using a queue.
Example:
Question. Which solution would BFS find to move from node S to node G if run on the graph
below?

Solution. The equivalent search tree for the above graph is as follows. As BFS traverses the
tree “shallowest node first”, it would always pick the shallower branch until it reaches the
solution (or it runs out of nodes, and goes to the next branch). The traversal is shown in blue
arrows.
Path: S -> D -> G
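
The BFS counterpart only swaps the stack for a first-in first-out queue, on the same assumed adjacency-dict representation:

from collections import deque

def bfs(graph, start, goal):
    queue = deque([(start, [start])])     # FIFO queue of (node, path so far)
    visited = {start}
    while queue:
        node, path = queue.popleft()      # shallowest frontier node first
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, path + [neighbor]))
    return None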

Uniform Cost Search:

UCS is different from BFS and DFS because here the costs come into play. In other words,
traversing via different edges might not have the same cost. The goal is to find a path where
the cumulative sum of costs is the least.
Cost of a node is defined as:
cost(node) = cumulative cost of all nodes from root
cost(root) = 0
Example:
Question. Which solution would UCS find to move from node S to node G if run on the graph
below?

Solution. The equivalent search tree for the above graph is as follows. The cost of each node is
the cumulative cost of reaching that node from the root. Based on the UCS strategy, the path
with the least cumulative cost is chosen. Note that due to the many options in the fringe, the
algorithm explores most of them so long as their cost is low, and discards them when a lower-
cost path is found; these discarded traversals are not shown below. The actual traversal is
shown in blue.
Path: S -> A -> B -> G
Cost: 5
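
A minimal UCS sketch, assuming edge costs are carried in the adjacency dict, e.g. {'S': [('A', 1), ('D', 3)], ...} (the numbers are placeholders, not the example graph's actual costs):

import heapq

def ucs(graph, start, goal):
    frontier = [(0, start, [start])]      # priority queue keyed on path cost
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # cheapest node first
        if node == goal:
            return path, cost
        for neighbor, edge_cost in graph.get(node, []):
            new_cost = cost + edge_cost
            if new_cost < best_cost.get(neighbor, float("inf")):
                best_cost[neighbor] = new_cost       # keep only the cheapest route
                heapq.heappush(frontier, (new_cost, neighbor, path + [neighbor]))
    return None, float("inf")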

Informed Search Algorithms:

Here, the algorithms have information on the goal state, which helps in more efficient
searching. This information is obtained by something called a heuristic.
In this section, we will discuss the following search algorithms.
1. Greedy Search
2. A* Tree Search
3. A* Graph Search
Search Heuristics: In an informed search, a heuristic is a function that estimates how close a
state is to the goal state. For example, Manhattan distance, Euclidean distance, etc. (The
smaller the distance, the closer the goal.) Different heuristics are used in the different informed
algorithms discussed below.

Greedy Search (Best-First Search):

In greedy search, we expand the node closest to the goal node. The “closeness” is estimated by
a heuristic h(x).
Heuristic: A heuristic h is defined as:
h(x) = estimate of the distance of node x from the goal node.
The lower the value of h(x), the closer the node is to the goal.
Strategy: Expand the node closest to the goal state, i.e. expand the node with the lowest h value.
Example:
Question. Find the path from S to G using greedy search. The heuristic value h of each node is
given below the name of the node.
Solution. Starting from S, we can traverse to A(h=9) or D(h=5). We choose D, as it has the
lower heuristic cost. Now from D, we can move to B(h=4) or E(h=3). We choose E with a
lower heuristic cost. Finally, from E, we go to G(h=0). This entire traversal is shown in the
search tree below, in blue.

Path: S -> D -> E -> G


Advantage: Works well with informed search problems, often reaching the goal in fewer steps. Disadvantage: Because it ignores the cost already incurred, the path it finds may not be optimal.
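
A minimal greedy best-first sketch: the frontier is ordered purely by h, which is assumed here to be a dict of heuristic estimates such as {'S': 7, 'D': 5, ...}, over the same adjacency-dict graph as before:

import heapq

def greedy_search(graph, h, start, goal):
    frontier = [(h[start], start, [start])]   # priority queue keyed on h(x) only
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            heapq.heappush(frontier, (h[neighbor], neighbor, path + [neighbor]))
    return None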

A* Tree Search:

A* Tree Search, or simply A* Search, combines the strengths of uniform-cost search and
greedy search. In this search, the evaluation is the sum of the cost from UCS, denoted by
g(x), and the heuristic cost from greedy search, denoted by h(x). The summed cost is denoted
by f(x) = g(x) + h(x).
Heuristic function: A heuristic is a function used in informed search to find the most
promising path. It takes the current state of the agent as input and produces an estimate of
how close the agent is to the goal. The heuristic method might not always give the best
solution, but it is guaranteed to find a good solution in reasonable time. The heuristic
function estimates how close a state is to the goal. It is represented by h(n), and it estimates the
cost of an optimal path between the given state and the goal. The value of the heuristic function
is always non-negative.
Heuristic: The following points should be noted with respect to heuristics in A*
search.
 Here, h(x) is called the forward cost and is an estimate of the distance of the current node
from the goal node.
 And, g(x) is called the backward cost and is the cumulative cost of a node from the root
node.
 A* search is optimal only when, for all nodes, the forward cost h(x) never overestimates
the actual cost h*(x) to reach the goal. This property of the A* heuristic is
called admissibility.
Admissibility: h(x) ≤ h*(x) for every node x.
Strategy: Choose the node with the lowest f(x) value.
Example:
Question. Find the path to reach from S to G using A* search.

Solution. Starting from S, the algorithm computes g(x) + h(x) for all nodes in the fringe at
each step, choosing the node with the lowest sum. The entire work is shown in the table
below.
Note that in the fourth set of iterations, we get two paths with equal summed cost f(x), so we
expand them both in the next set. The path with a lower cost on further expansion is the chosen
path.

Path                    h(x)   g(x)     f(x)

S                       7      0        7
S -> A                  9      3        12
S -> D                  5      2        7
S -> D -> B             4      2+1=3    7
S -> D -> E             3      2+4=6    9
S -> D -> B -> C        2      3+2=5    7
S -> D -> B -> E        3      3+1=4    7
S -> D -> B -> C -> G   0      5+4=9    9
S -> D -> B -> E -> G   0      4+3=7    7

Path: S -> D -> B -> E -> G


Cost: 7
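
A minimal A* sketch combining the two: the frontier is keyed on f(x) = g(x) + h(x), using the same assumed weighted adjacency dict and heuristic dict as the earlier sketches. Keeping a best-g table here also avoids re-expanding dearer routes, which anticipates the graph-search refinement discussed next:

import heapq

def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, edge_cost in graph.get(node, []):
            new_g = g + edge_cost                # backward cost g(x)
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                f_new = new_g + h[neighbor]      # add forward cost h(x)
                heapq.heappush(frontier, (f_new, new_g, neighbor, path + [neighbor]))
    return None, float("inf")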

A* Graph Search:
 A* tree search works well, except that it takes time re-exploring branches it has already
explored. In other words, if the same node is expanded twice in different branches of the
search tree, A* search might explore both of those branches, thus wasting time.
 A* Graph Search, or simply Graph Search, removes this limitation by adding this rule: do
not expand the same node more than once.
 Heuristic. Graph search is optimal only when the drop in heuristic between two successive
nodes A and B, given by h(A) − h(B), is less than or equal to the backward cost between
those two nodes, g(A -> B). This property of the graph-search heuristic is
called consistency.
Consistency: h(A) − h(B) ≤ g(A -> B), i.e. h(A) ≤ g(A -> B) + h(B) for every edge A -> B.
Example:
Question. Use graph searches to find paths from S to G in the following graph.

Solution. We solve this question in much the same way as the last one, but in this case we
keep track of the nodes explored so that we don't re-explore them.
Path: S -> D -> B -> E -> G
Cost: 7

Points to remember:

o A* algorithm returns the path which occurred first, and it does not search for all
remaining paths.
o The efficiency of A* algorithm depends on the quality of heuristic.
o A* algorithm expands all nodes which satisfy the condition f(n) ≤ C*, where C* is the cost of the optimal solution.

Complete: A* algorithm is complete as long as:

o Branching factor is finite.


o The cost of every action is fixed and positive.

Optimal: A* search algorithm is optimal if it follows below two conditions:

o Admissible: the first condition required for optimality is that h(n) should be an
admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature.
o Consistency: the second required condition is consistency, which applies only to A* graph search.

If the heuristic function is admissible, then A* tree search will always find the least cost path.

Time Complexity: The time complexity of the A* search algorithm depends on the heuristic
function, and the number of nodes expanded is exponential in the depth of the solution d. So
the time complexity is O(b^d), where b is the branching factor.

Space Complexity: The space complexity of A* search algorithm is O(b^d)


Source: https://www.javatpoint.com/ai-informed-search-algorithms

A control strategy in Artificial Intelligence is a technique that tells us which rule to apply next
while searching for the solution of a problem within the problem space. It helps us decide
which rule to apply next without getting stuck at any point. These rules decide the way we
approach the problem, how quickly it is solved, and even whether the problem is ever solved.

A control strategy helps to find the solution when more than one rule is applicable at a given
point in the problem space. A good control strategy has two main
characteristics:

Control Strategy should cause Motion


Each rule or strategy applied should cause motion, because a control strategy that causes no
motion will never lead to a solution. Motion here means a change of state: if the state never
changes, there is no movement from the initial state and we would never solve
the problem.

Control strategy should be Systematic


Though the applied strategy should create motion, if we do not follow some systematic
strategy we are likely to reach the same state a number of times before reaching the solution,
which increases the number of steps. Taking care of only the first characteristic, we may
wander through particular useless sequences of operators several times. Requiring the control
strategy to be systematic implies a need for global motion (over the course of several steps) as
well as local motion (over the course of a single step).

Examples:
Breadth-First Search: It searches along the breadth and follows first-in-first-out queue data
structure approach. It will start to scan node A first and then B-C-D-E-F.

Depth-First Search: It searches along the depth and follows the stack approach. The sequence
for scanning nodes will be A-B-D-E-C-F, it scans all the sub-nodes of parent nodes and then
moves to another node.

Problem characteristics

To choose an appropriate method for a particular problem, we first need to categorize the
problem based on the following characteristics.

1. Is the problem decomposable into small sub-problems which are easy to solve?
2. Can solution steps be ignored or undone?
3. Is the universe of the problem predictable?
4. Is a good solution to the problem absolute or relative?
5. Is the solution to the problem a state or a path?
6. What is the role of knowledge in solving a problem using artificial intelligence?
7. Does the task of solving a problem require human interaction?

Use of decomposing problems:


❑ Each sub-problem is simpler to solve.
❑ Each sub-problem can be handed over to a different processor, and thus the problem can be
solved in a parallel processing environment.
❑ There are non-decomposable problems.
❑ For example, the blocks world problem is non-decomposable.

Source: https://www.vtupulse.com/artificial-intelligence/problem-characteristics-in-artificial-intelligence/
A production system, also known as a rule-based system or an expert system, is a widely used
approach in AI for representing knowledge and reasoning. It consists of a set of rules, a working
memory, and an inference engine. Production systems are particularly useful for problem-
solving, decision-making, and knowledge representation in domains where expertise can be
captured in the form of rules or heuristics.

Characteristics of Production System


Production systems are a type of artificial intelligence architecture with characteristics such
as:

1. Monotonic production system: the application of a rule never prevents the later application
of another rule that could also have been applied at the time the first rule was selected, i.e.,
the rules are independent.
2. Non-monotonic production system: one in which the above property does not hold.
3. Partially commutative production system: a production system with the property that if the
application of a particular sequence of rules transforms state x into state y, then any allowable
permutation of those rules also transforms state x into state y.
4. Commutative production system: a production system that is both monotonic and partially
commutative.

Generate and test

Generate and Test Search is a heuristic search technique based on Depth First Search with
Backtracking which guarantees to find a solution if done systematically and there exists a
solution. In this technique, all the solutions are generated and tested for the best solution. It
ensures that the best solution is checked against all possible generated solutions. The
evaluation is carried out by the heuristic function: all the solutions are generated
systematically in the generate-and-test algorithm, but paths that are most
unlikely to lead us to the result are not considered. The heuristic does this by ranking all
the alternatives and is often effective in doing so.

Algorithm

1. Generate a possible solution. For example, generate a particular point in the problem
space or a path from the start state.
2. Test to see if this is an actual solution by comparing the chosen point or the endpoint of the
chosen path to the set of acceptable goal states.
3. If a solution is found, quit. Otherwise go to Step 1. (A minimal sketch of this loop follows below.)
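
A minimal sketch of this loop, where the candidate generator and the goal test are hypothetical callables supplied by the problem:

def generate_and_test(generate_candidates, is_solution):
    for candidate in generate_candidates():   # Step 1: generate a possibility
        if is_solution(candidate):            # Step 2: test it against the goal
            return candidate                  # Step 3: solution found, quit
    return None                               # generator exhausted: no solution

# toy usage: find a two-digit number whose square ends in 44
print(generate_and_test(lambda: range(10, 100), lambda n: n * n % 100 == 44))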

Properties of Good Generators:

The good generators need to have the following properties:


 Complete: Good generators need to be complete, i.e. they should generate all the possible
solutions and cover all the possible states. In this way, we can guarantee that our algorithm
converges to the correct solution at some point in time.
 Non Redundant: Good Generators should not yield a duplicate solution at any point of
time as it reduces the efficiency of algorithm thereby increasing the time of search and
making the time complexity exponential. In fact, it is often said that if solutions appear
several times in the depth-first search then it is better to modify the procedure to traverse a
graph rather than a tree.
 Informed: Good Generators have the knowledge about the search space which they
maintain in the form of an array of knowledge. This can be used to search how far the
agent is from the goal, calculate the path cost and even find a way to reach the goal.
Let us take a simple example to understand the importance of a good generator. Consider a
PIN made up of three two-digit numbers.

Hill climbing
o Hill climbing algorithm is a local search algorithm which continuously moves in the
direction of increasing elevation/value to find the peak of the mountain or best solution to
the problem. It terminates when it reaches a peak value where no neighbor has a higher
value.
o Hill climbing algorithm is a technique which is used for optimizing mathematical
problems. One of the widely discussed examples of the hill climbing algorithm is the
Traveling Salesman Problem, in which we need to minimize the distance traveled by the salesman.
o It is also called greedy local search as it only looks to its good immediate neighbor state
and not beyond that.
o A node of hill climbing algorithm has two components which are state and value.
o Hill Climbing is mostly used when a good heuristic is available.
o In this algorithm, we don't need to maintain and handle the search tree or graph as it only
keeps a single current state.

Features of Hill Climbing:

Following are some main features of Hill Climbing Algorithm:

o Generate and Test variant: Hill climbing is a variant of the Generate and Test method.
The Generate and Test method produces feedback which helps to decide which direction
to move in the search space.
o Greedy approach: Hill-climbing algorithm search moves in the direction which
optimizes the cost.
o No backtracking: It does not backtrack the search space, as it does not remember the
previous states.

State-space Diagram for Hill Climbing:

The state-space landscape is a graphical representation of the hill-climbing algorithm,
showing a graph between the various states of the algorithm and the objective function/cost.

On the Y-axis we take the function, which can be an objective function or a cost function, and
the state space on the X-axis. If the function on the Y-axis is cost, then the goal of the search is
to find the global minimum. If the function on the Y-axis is an objective function, then the
goal of the search is to find the global maximum.
A state-space diagram consists of various regions that can be explained as follows:

 Local maximum: A local maximum is a solution that surpasses other neighboring


solutions or states but is not the best possible solution.
 Global maximum: This is the best possible solution achieved by the algorithm.
 Current state: This is the existing or present state.
 Flat local maximum: This is a flat region where the neighboring solutions attain the
same value.
 Shoulder: This is a plateau whose edge is stretching upwards.

Algorithm

 Evaluate the current state. Stop the process and indicate success if it is a
goal state.
 If the assessment in step 1 did not establish a goal state, loop over the current state:
apply an operator to obtain a new solution.
 Evaluate the new solution. If the new state has a higher value than the current state,
mark it as the current state.
 Continue steps 1 to 4 until a goal state is attained or no operator improves the current
state, then exit the process. A minimal sketch of this loop follows below.
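
A minimal sketch of simple hill climbing, where neighbors(state) and value(state) are hypothetical problem-specific helpers:

def simple_hill_climb(start, neighbors, value):
    current = start
    improved = True
    while improved:
        improved = False
        for candidate in neighbors(current):
            if value(candidate) > value(current):   # first better neighbor wins
                current = candidate
                improved = True
                break
    return current    # no neighbor improves: local (or global) maximum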

Steepest-ascent hill climbing

This algorithm is more advanced than the simple hill-climbing algorithm. It chooses the next
node by assessing the neighboring nodes. The algorithm moves to the node that is closest to the
optimal or goal state.

Stochastic hill climbing

In this algorithm, the neighboring nodes are selected randomly. The selected node is assessed to
establish the level of improvement. The algorithm will move to this neighboring node if it has a
higher value than the current state.
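
A minimal stochastic variant under the same assumed helpers; the fixed step budget is an arbitrary choice for this sketch:

import random

def stochastic_hill_climb(start, neighbors, value, steps=1000):
    current = start
    for _ in range(steps):
        options = neighbors(current)
        if not options:
            break
        candidate = random.choice(options)      # pick a neighbor at random
        if value(candidate) > value(current):   # move only if it improves
            current = candidate
    return current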

Constraint satisfaction problem


Constraint satisfaction problems (CSPs) are a class of AI problems in which the goal is to find
a solution that meets a set of constraints, i.e. to find values for a group of variables that
satisfy a set of restrictions or rules.
There are mainly three basic components in the constraint satisfaction problem:
Variables: The things that need to be determined are variables. Variables in a CSP are the
objects that must have values assigned to them in order to satisfy a particular set of constraints.
Boolean, integer, and categorical variables are just a few examples of the various types of
variables. For instance, variables could stand in for the many puzzle cells that need to be filled
with numbers in a sudoku puzzle.
Domains: The range of potential values that a variable can have is represented by its domain.
Depending on the problem, a domain may be finite or infinite. For instance, in sudoku, the set of
numbers from 1 to 9 can serve as the domain of a variable representing a puzzle cell.
Constraints: The guidelines that control how variables relate to one another are known as
constraints. Constraints in a CSP define the ranges of possible values for variables. Unary
constraints, binary constraints, and higher-order constraints are only a few examples of the
various sorts of constraints. For instance, in a sudoku problem, the restrictions might be that
each row, column, and 3×3 box can only have one instance of each number from 1 to 9.
Constraint Satisfaction Problems (CSP) representation:
 The finite set of variables V1, V2, V3, …, Vn.
 A non-empty domain for every single variable: D1, D2, D3, …, Dn.
 The finite set of constraints C1, C2, …, Cm,
 where each constraint Ci restricts the possible values for variables,
 e.g., V1 ≠ V2.
 Each constraint Ci is a pair <scope, relation>.
 Example: <(V1, V2), V1 not equal to V2>
 Scope = set of variables that participate in the constraint.
 Relation = list of valid variable value combinations. There might be an
explicit list of permitted combinations, or an abstract relation that allows
for membership testing and listing.
A minimal backtracking sketch over this representation follows below.
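
A minimal backtracking sketch, with constraints given as predicates over partial assignments; the V1 ≠ V2 usage mirrors the example above:

def backtrack(variables, domains, constraints, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment                       # every variable has a value
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(check(assignment) for check in constraints):
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]                     # undo and try the next value
    return None

# usage: V1, V2 over {1, 2, 3} with the constraint V1 != V2
ok = lambda a: "V1" not in a or "V2" not in a or a["V1"] != a["V2"]
print(backtrack(["V1", "V2"], {"V1": [1, 2, 3], "V2": [1, 2, 3]}, [ok]))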
Means-ends analysis

o Means-Ends Analysis (MEA) is a problem-solving technique used in Artificial Intelligence
for limiting search in AI programs.
o It is a mixture of backward and forward search techniques.
o The MEA technique was first introduced in 1961 by Allen Newell and Herbert A. Simon
in their problem-solving computer program, which was named the General Problem
Solver (GPS).
o The MEA process is centered on the evaluation of the difference between the
current state and the goal state.

Algorithm for Means-Ends Analysis:

Let us take the current state as CURRENT and the goal state as GOAL; the following are the steps
for the MEA algorithm.

o Step 1: Compare CURRENT to GOAL. If there are no differences between the two, then
return success and exit.
o Step 2: Else, select the most significant difference and reduce it by doing the following
steps until success or failure occurs.

a. Select a new operator O which is applicable to the current difference; if there
is no such operator, then signal failure.
b. Attempt to apply operator O to CURRENT. Make a description of two states:
i) O-START, a state in which O's preconditions are satisfied.
ii) O-RESULT, the state that would result if O were applied in O-START.
c. If
FIRST-PART ← MEA(CURRENT, O-START)
and
LAST-PART ← MEA(O-RESULT, GOAL)
are successful, then signal success and return the result of combining FIRST-PART, O,
and LAST-PART. (A minimal sketch of this recursion follows below.)
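
A minimal sketch of this recursion (not the GPS implementation) on states represented as sets of facts; the Operator class, the toy apply_op helper, the depth bound, and the example operators are all assumptions of this sketch:

from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    preconds: frozenset    # facts required before applying (O-START)
    adds: frozenset        # facts added by applying O
    deletes: frozenset     # facts removed by applying O

def apply_op(op, state):
    return (state - op.deletes) | op.adds

def mea(current, goal, operators, depth=5):
    if goal <= current:                  # no difference left between the states
        return []
    if depth == 0:
        return None                      # give up: signal failure
    for diff in goal - current:          # pick a difference to reduce
        for op in operators:
            if diff in op.adds:          # operator relevant to this difference
                first = mea(current, op.preconds, operators, depth - 1)
                if first is None:
                    continue
                state = current
                for step in first:
                    state = apply_op(step, state)
                last = mea(apply_op(op, state), goal, operators, depth - 1)
                if last is not None:
                    return first + [op] + last   # FIRST-PART, O, LAST-PART
    return None

# toy usage: walk to the door, then move through it
walk = Operator("walk", frozenset(), frozenset({"at-door"}), frozenset())
move = Operator("move", frozenset({"at-door"}), frozenset({"at-goal"}), frozenset({"at-door"}))
print([o.name for o in mea(frozenset(), frozenset({"at-goal"}), [walk, move])])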

Min-Max Search
Minimax is a kind of backtracking algorithm that is used in decision making and game theory
to find the optimal move for a player, assuming that your opponent also plays optimally. It is
widely used in two player turn-based games such as Tic-Tac-Toe, Backgammon, Mancala,
Chess, etc.
In Minimax the two players are called maximizer and minimizer. The maximizer tries to get
the highest score possible while the minimizer tries to do the opposite and get the lowest score
possible.
Example:
Consider a game which has 4 final states, where the paths to reach a final state run from the
root to the 4 leaves of a perfect binary tree, as shown below. Assume you are the maximizing
player and you get the first chance to move, i.e., you are at the root and your opponent is at the
next level. Which move would you make as the maximizing player, considering that your
opponent also plays optimally?

 Maximizer goes LEFT: It is now the minimizer's turn. The minimizer has a choice
between 3 and 5. Being the minimizer, it will definitely choose the smaller of the two, that is
3.
 Maximizer goes RIGHT: It is now the minimizer's turn. The minimizer has a choice
between 2 and 9. It will choose 2, as it is the smaller of the two values.
 Being the maximizer, you would choose the larger value, that is 3. Hence the optimal
move for the maximizer is to go LEFT and the optimal value is 3.
 Now the game tree looks as below. (A minimal code sketch of minimax follows.)
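
A minimal minimax sketch for a tree like this one, encoded as nested lists with leaf scores:

def minimax(node, is_max):
    if not isinstance(node, list):               # leaf: return its score
        return node
    scores = [minimax(child, not is_max) for child in node]
    return max(scores) if is_max else min(scores)

# the example tree: leaves (3, 5) on the left and (2, 9) on the right
print(minimax([[3, 5], [2, 9]], True))           # optimal value: 3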

Alpha-Beta Pruning
Alpha-Beta pruning is not actually a new algorithm, but rather an optimization technique for
the minimax algorithm. It reduces the computation time by a huge factor. This allows us to
search much faster and even go into deeper levels in the game tree. It cuts off branches in the
game tree which need not be searched because there already exists a better move available. It is
called Alpha-Beta pruning because it passes 2 extra parameters in the minimax function,
namely alpha and beta.
Let’s define the parameters alpha and beta.
Alpha is the best value that the maximizer currently can guarantee at that level or above.
Beta is the best value that the minimizer currently can guarantee at that level or below.
Let’s make the above algorithm clear with an example.
 The initial call starts from A. The value of alpha here is -INF and the value of beta
is +INF. These values are passed down to subsequent nodes in the tree. At A the
maximizer must choose the max of B and C, so A calls B first.
 At B, the minimizer must choose the min of D and E, and hence calls D first.
 At D, it looks at its left child, which is a leaf node. This node returns a value of 3. Now the
value of alpha at D is max(-INF, 3), which is 3.
 To decide whether it's worth looking at its right node or not, it checks the condition
beta <= alpha. This is false since beta = +INF and alpha = 3. So it continues the search.
 D now looks at its right child, which returns a value of 5. At D, alpha = max(3, 5), which is
5. Now the value of node D is 5.
 D returns a value of 5 to B. At B, beta = min(+INF, 5), which is 5. The minimizer is now
guaranteed a value of 5 or less. B now calls E to see if it can get a lower value than 5.
 At E the values of alpha and beta are not -INF and +INF but instead -INF and 5 respectively,
because the value of beta was changed at B and that is what B passed down to E.
 Now E looks at its left child, which is 6. At E, alpha = max(-INF, 6), which is 6. Here the
condition becomes true: beta is 5 and alpha is 6, so beta <= alpha is true. Hence it breaks,
and E returns 6 to B.
 Note how it did not matter what the value of E's right child is. It could have been +INF or
-INF; it still wouldn't matter. We never even had to look at it, because the minimizer was
guaranteed a value of 5 or less. So as soon as the maximizer saw the 6, it knew the
minimizer would never come this way, because it can get a 5 on the left side of B. This
way we didn't have to look at that 9, and hence saved computation time.
 E returns a value of 6 to B. At B, beta = min(5, 6), which is 5. The value of node B is also 5.
So far this is how our game tree looks. The 9 is crossed out because it was never computed.

 B returns 5 to A. At A, alpha = max( -INF, 5) which is 5. Now the maximizer is guaranteed


a value of 5 or greater. A now calls C to see if it can get a higher value than 5.
 At C, alpha = 5 and beta = +INF. C calls F
 At F, alpha = 5 and beta = +INF. F looks at its left child which is a 1. alpha = max( 5, 1)
which is still 5.
 F looks at its right child which is a 2. Hence the best value of this node is 2. Alpha still
remains 5
 F returns a value of 2 to C. At C, beta = min(+INF, 2). The condition beta <= alpha
becomes true, as beta = 2 and alpha = 5. So it breaks, and C does not even have to compute
the entire sub-tree of G.
 The intuition behind this break-off is that, at C, the minimizer was guaranteed a value of 2
or less. But the maximizer was already guaranteed a value of 5 if it chooses B. So why
would the maximizer ever choose C and get a value of 2 or less? Again you can see that it
did not matter what those last 2 values were. We also saved a lot of computation by
skipping a whole sub-tree.
 C now returns a value of 2 to A. Therefore the best value at A is max(5, 2), which is 5.
 Hence the optimal value that the maximizer can get is 5.
This is how our final game tree looks. As you can see, G has been crossed out as it was
never computed. (A minimal code sketch of alpha-beta pruning follows.)
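
A minimal alpha-beta sketch on the same nested-list encoding used for minimax above; the two leaves under G were never given in the walkthrough, so the 0 and -1 below are hypothetical placeholders (they are pruned anyway):

import math

def alphabeta(node, is_max, alpha=-math.inf, beta=math.inf):
    if not isinstance(node, list):
        return node                              # leaf score
    if is_max:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if beta <= alpha:
                break                            # prune: minimizer blocks this
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if beta <= alpha:
            break                                # prune: maximizer avoids this
    return value

# the walkthrough's tree: D=(3,5), E=(6,9), F=(1,2), G=(0,-1 placeholders)
print(alphabeta([[[3, 5], [6, 9]], [[1, 2], [0, -1]]], True))   # prints 5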

Additional refinements
Most machine learning does not exploit prior knowledge. Theory refinement (a.k.a. theory
revision or knowledge-base refinement) is the task of modifying an initial imperfect knowledge-
base (KB) to make it consistent with empirical data.
Theory refinement, also known as theory revision or knowledge-base refinement, is indeed an
important aspect of machine learning and knowledge representation. While it's true that many
machine learning algorithms, especially those based on deep learning and neural networks,
primarily rely on data-driven approaches and do not explicitly incorporate prior knowledge,
theory refinement approaches aim to bridge the gap between prior knowledge and empirical data.
Here's a breakdown of what theory refinement involves:
1. Initial Imperfect Knowledge Base (KB): Theory refinement begins with an existing
knowledge base or a set of prior beliefs about a domain.
2. Incorporating Empirical Data: To make the knowledge base more accurate and
consistent with real-world observations, theory refinement algorithms utilize empirical
data. This data could come from various sources, such as sensors, observations,
experiments, or user interactions.
3. Modifying the Knowledge Base: The primary goal of theory refinement is to modify the
initial knowledge base in a way that it becomes more consistent with the empirical data.
This process may involve adding new knowledge, updating existing knowledge, or
removing contradictory information.
4. Consistency and Coherence: During theory refinement, one of the key objectives is to
ensure that the resulting knowledge base is logically consistent and coherent. This means
that the refined knowledge should not contain contradictions and should accurately
reflect the observed data.

There are several additional refinements and considerations in the context of
theory refinement and knowledge-base refinement:
1. Uncertainty Modeling: Real-world data is often noisy and uncertain. Advanced theory
refinement methods incorporate uncertainty modeling to handle probabilistic or fuzzy
information. Techniques such as Bayesian networks or Markov logic networks can
represent and manage uncertainty in the knowledge base.
2. Prioritization: Not all data is equally valuable for refining a knowledge base. Some
information may be more reliable or relevant than others. Prioritization mechanisms can
be employed to give higher weight to certain sources of data or to prioritize certain types
of knowledge updates.
3. Conflict Resolution: In situations where there are conflicting pieces of evidence or data,
theory refinement algorithms may need to implement conflict resolution strategies. This
can involve assigning degrees of belief or confidence to different pieces of knowledge
and resolving conflicts based on these confidence scores.
4. Incremental Learning: Theory refinement can be performed incrementally as new data
becomes available. Incremental learning techniques allow the knowledge base to be
updated gradually without requiring a complete reevaluation of the entire knowledge base
each time new data arrives.
5. Human-in-the-Loop: Human experts often play a crucial role in the theory refinement
process. They can provide domain-specific insights, validate the changes made to the
knowledge base, and ensure that the refined knowledge aligns with expert knowledge.
Iterative Deepening
Iterative deepening depth-first search (IDDFS) is an important uninformed search strategy, just
like BFS and DFS. We can define IDDFS as an amalgam of the BFS and DFS searching
techniques: BFS and DFS each have certain limitations, so the two procedures are hybridized to
eliminate the demerits of each. We do a limited depth-first search up to a fixed "limited depth",
then keep incrementing the depth limit and iterating the procedure until we have found the
goal node or have traversed the whole tree, whichever is earlier.

A search algorithm known as IDS combines the benefits of DFS with those of Breadth First
Search (BFS). The graph is explored using DFS, but the depth limit is steadily increased until
the target is located. In other words, IDS repeatedly runs DFS, raising the depth limit each
time, until the desired result is obtained. Iterative deepening is a method that makes sure the
search is complete (i.e., it discovers a solution if one exists) and efficient (i.e., it finds the
shortest path to the goal).

The pseudocode for IDS is straightforward. Note that the depth-limited search distinguishes a
cutoff (the depth limit was hit, so deeper nodes may remain) from genuine failure (the whole
tree was explored), so the outer loop knows whether raising the limit can still help:

FOUND, CUTOFF, NOT_FOUND = "found", "cutoff", "not_found"

def iterative_deepening_search(root, goal):
    depth = 0
    while True:
        result = depth_limited_search(root, goal, depth)
        if result == FOUND:
            return goal
        if result == NOT_FOUND:
            return None          # tree fully explored: no solution exists
        depth = depth + 1        # CUTOFF: raise the limit and retry

def depth_limited_search(node, goal, depth):
    if node == goal:
        return FOUND
    if depth == 0:
        return CUTOFF            # limit reached; deeper nodes may remain
    any_cutoff = False
    for child in node.children:
        result = depth_limited_search(child, goal, depth - 1)
        if result == FOUND:
            return FOUND
        if result == CUTOFF:
            any_cutoff = True
    return CUTOFF if any_cutoff else NOT_FOUND

Let us take an example to understand this.

Here, in the given tree, the starting node is A and the depth is initialized to 0. The goal node is R,
for which we have to find the depth and the path to reach it. The depth from the figure is 4. In this
example we consider the tree to be finite, but the same procedure works for an infinite tree as
well. We know that in the IDDFS algorithm we first do DFS up to a specified depth and then
increase the depth at each loop. This step forms the part called DLS, or Depth Limited Search.
Thus the following traversal shows the IDDFS search.
