
Informed Search in AI | Samyak Jain

Informed Search Techniques in Artificial Intelligence

Abstract
Informed search techniques are a fundamental aspect of problem-solving in Artificial Intelligence (AI), enabling more targeted and efficient
exploration of state spaces compared to uninformed search methods. These techniques utilize heuristic information—educated guesses based on
domain-specific knowledge—to make intelligent decisions at each step of the search process. By guiding the search toward the most promising
paths, informed search algorithms can significantly reduce computation time and improve solution quality. This report provides an in-depth
analysis of various informed search strategies such as Greedy Best-First Search, A* Search, and their variants. It highlights their core principles,
practical implementations, advantages, challenges, and suitable application domains.

1. Introduction
Search is a core component of many AI systems, especially in scenarios involving decision-making, planning, and navigation. The goal of a search
algorithm is to find a path from an initial state to a goal state in a problem space. Traditional uninformed search strategies, such as Breadth-First
Search and Depth-First Search, explore the search space without any domain-specific guidance. In contrast, informed search algorithms enhance
the efficiency and effectiveness of this process by incorporating heuristic knowledge.

Heuristics serve as rules of thumb that estimate the cost or value of a particular state in achieving the goal. Informed search algorithms prioritize
paths that appear more promising based on these heuristics. This intelligent guidance allows AI agents to solve complex problems more
efficiently, from pathfinding in robotics to decision-making in strategic games. This report explores the structure, function, and comparative
benefits of informed search techniques and discusses their applications across various fields.

2. Heuristic Search
Heuristics are problem-specific functions that estimate the cost of reaching the goal from a
given state. In the context of AI search algorithms, they act as a guide, enabling the algorithm to
make informed decisions about which path to explore next. A heuristic function, usually denoted
as h(n), assigns an estimated cost from node n to the goal. This allows the algorithm to prioritize
nodes that are more likely to lead to an optimal solution.

The quality of a heuristic is vital to the success of an informed search algorithm. A good heuristic
can dramatically reduce the number of nodes explored, making the search faster and more
efficient. Conversely, a poor heuristic can misguide the search and increase computational
effort.

Two key properties define a good heuristic:

- Admissibility: it never overestimates the true cost to reach the goal.
- Consistency (Monotonicity): the estimate at a node never exceeds the step cost to a neighbor plus that neighbor's estimate, i.e. h(n) ≤ c(n, n′) + h(n′).

Example: In a navigation system, the straight-line distance (Euclidean distance) between a location and the destination is often used as a heuristic.
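As a minimal sketch of such a heuristic (the coordinates below are invented for illustration), the straight-line distance can be computed directly:

```python
import math

def euclidean_h(node, goal):
    """Straight-line (Euclidean) distance from a location to the
    destination: a classic admissible heuristic for map navigation,
    since no route can be shorter than the straight line."""
    (x1, y1), (x2, y2) = node, goal
    return math.hypot(x2 - x1, y2 - y1)

# A location 3 km east and 4 km north of the destination
print(euclidean_h((0, 0), (3, 4)))  # 5.0
```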

3. Greedy Best-First Search


3.1. Description

Greedy Best-First Search is an informed search algorithm that explores a state space by always expanding the node that appears most promising according to a given heuristic function h(n). Unlike uninformed strategies that blindly explore nodes, it leverages domain-specific knowledge to guide the search intelligently.

It is called “Best-First” because at each step, it picks the best node according to the heuristic function. This makes it faster than brute-force
strategies in many real-world problems, especially when a well-designed heuristic is available.

3.2. Characteristics

●​ Search Strategy: Informed (uses heuristic)


●​ Evaluation Function: f(n) = h(n)
●​ Data Structure Used: Priority Queue (ordered by heuristic value)

●​ Goal: To reach the goal node as quickly as possible using the most promising route
●​ Completeness: Yes on finite graphs with repeated-state checking; otherwise it can loop forever
●​ Optimality: No (it can return sub-optimal solutions)

3.3. Working Principle

1.​ Place the initial node in the open list (priority queue), ordered by h(n).​

2.​ Loop until the goal is found or the open list is empty:​

●​ Remove the node with the lowest h(n) from the open list.
●​ If this node is the goal, return the solution.
●​ Otherwise, expand the node:
○​ Generate all its successors.
○​ For each successor, compute its heuristic value h and insert it into the priority queue.

3.​ Repeat the process until a solution is found.

3.4. Diagram

-​ Consider finding the path from P to S in the following graph:

-​ In this example, nodes are ranked strictly by heuristic value, that is, by how close each node appears to be to the target.

-​ C has the lowest cost of 6. Therefore, the search will continue like so:


-​ U has the lowest cost compared to M and R, so the search will continue by exploring U. Finally, S has a heuristic value of 0 since that is the target node:

-​ The total cost for the path (P -> C -> U -> S) evaluates to 11. The potential problem with a greedy best-first search is revealed by the path (P -> R -> E ->
S) having a cost of 10, which is lower than (P -> C -> U -> S). Greedy best-first search ignored this path because it does not consider the edge weights.
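The behavior above can be reproduced with a short sketch. Since the report's figure is not reproduced here, the edge weights and heuristic values below are invented, chosen only to be consistent with the walkthrough (h(C) = 6, h(S) = 0, path P -> C -> U -> S costing 11 versus P -> R -> E -> S costing 10):

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the open-list node with
    the lowest heuristic value h(n); edge weights are ignored."""
    open_list = [(h[start], start, [start])]   # priority queue ordered by h
    visited = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for succ, _weight in graph.get(node, []):
            if succ not in visited:
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None

def path_cost(graph, path):
    """Actual cost of a path, summing the (ignored) edge weights."""
    return sum(dict(graph[a])[b] for a, b in zip(path, path[1:]))

# Hypothetical weights and heuristics, consistent with the walkthrough.
graph = {
    'P': [('C', 4), ('M', 5), ('R', 3)],
    'C': [('U', 4)],
    'U': [('S', 3)],
    'R': [('E', 4)],
    'E': [('S', 3)],
    'M': [], 'S': [],
}
h = {'P': 9, 'C': 6, 'M': 8, 'R': 7, 'U': 4, 'E': 3, 'S': 0}

print(greedy_best_first(graph, h, 'P', 'S'))   # ['P', 'C', 'U', 'S']
print(path_cost(graph, ['P', 'C', 'U', 'S']))  # 11
print(path_cost(graph, ['P', 'R', 'E', 'S']))  # 10, the cheaper path greedy missed
```

Because nodes are ranked by h alone, the cheaper P -> R -> E -> S route is never considered once C looks closer to the goal.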

3.5 Pros and Cons

Pros:

●​ Faster Exploration: Expands nodes closer to the goal, often leading to faster solutions in large search spaces.
●​ Simple and Easy Implementation: Simple to implement with only a heuristic function, making it quick to set up.
●​ Low Memory Usage: In practice the open list often stays small, because the search pushes straight toward the goal rather than expanding broadly (though the worst case remains exponential).
●​ Efficient for Certain Problems: Works well when the heuristic is accurate and the goal is easily identified.


Cons:

●​ Non-optimal Solution: Since the algorithm only considers the heuristic value and ignores edge weights, it may find a solution that is not the shortest or
least costly. This can lead to suboptimal paths.
●​ Incomplete: The search may fail to find a solution, especially if there are dead ends or if the goal node is unreachable. Greedy Best-First Search does
not always explore all possible paths.
●​ Doesn’t Consider Edge Weights: By ignoring edge weights, the algorithm may miss paths that are less heuristic-optimal but ultimately cheaper in terms
of cost. This can lead to inefficient pathfinding.
●​ Sensitive to Heuristic Quality: The algorithm’s effectiveness is heavily dependent on the accuracy of the heuristic function. A poorly designed heuristic
can result in inefficient search or failure to find the goal.
●​ Can Get Stuck in Local Minima: Greedy Best-First Search may get stuck in local minima, focusing too much on immediate low-cost paths and
overlooking potentially better, longer paths that lead to the goal.

4. A* Search
4.1 Description

A* (A-Star) is one of the most widely used and powerful informed search algorithms in Artificial Intelligence. It combines the strengths of Uniform Cost Search
and Greedy Best-First Search by considering both the actual cost to reach a node and the estimated cost to reach the goal.

4.2 Working Principle

●​ Initialization:
-​ Start with the initial node.
-​ Compute f(n) = g(n) + h(n) for the start node (where g(n) = 0).
-​ Place the start node in the open list (priority queue).

●​ Iteration:
-​ Select the node with the lowest f(n) from the open list.
-​ If it's the goal node, return the path.
-​ Otherwise:
■​ Expand the node (generate successors).
■​ For each child node:
●​ Calculate g(child) = g(parent) + cost(parent → child)
●​ Estimate h(child)
●​ Compute f(child) = g(child) + h(child)
●​ If the child is not in open or closed list, add it.
●​ If it’s already there with a higher f(n), update it.

●​ Repeat the process until the goal is found or the open list is empty.
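The steps above can be sketched as follows. The graph and heuristic values here are hypothetical (the report's figure is not reproduced), chosen so that the heuristic is admissible and consistent:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search with f(n) = g(n) + h(n). Returns (path, cost) or None."""
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                           # cheapest g found so far per node
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if g > best_g.get(node, float('inf')):
            continue                  # stale entry: a cheaper route was found later
        for succ, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):
                best_g[succ] = g2     # add the child, or update it with a lower f
                heapq.heappush(open_list, (g2 + h[succ], g2, succ, path + [succ]))
    return None

# Small hypothetical graph: edges as (neighbor, cost), heuristic h.
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)], 'B': [('G', 3)]}
h = {'S': 5, 'A': 4, 'B': 2, 'G': 0}

print(a_star(graph, h, 'S', 'G'))   # (['S', 'A', 'B', 'G'], 6)
```

Note how the direct edge A -> G (cost 12) is left unexpanded: its f value of 13 keeps it at the back of the priority queue.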

4.3 Diagram

-​ Consider the following example of trying to find the shortest path from S to G in the following graph:

-​ Each edge has an associated weight, and each node has a heuristic cost (in parentheses).

An open list is maintained, initially containing only the node S. The search tree can now be constructed.

-​ Exploring S:

-​ A is the current most promising path, so it is explored next:

-​ Exploring D:


-​ Exploring F:

-​ Notice that the goal node G has been found. However, it hasn’t been explored, so the algorithm continues because there may be a shorter path to G. The
node B has two entries in the open list: one at a cost of 16 (child of S) and one at a cost of 18 (child of A). The one with the lowest cost is explored next:

-​ Exploring C:


-​ The next node in the open list is again B. However, because B has already been explored, meaning the shortest path to B has been found, it is not
explored again, and the algorithm continues to the next candidate.

-​ The next node to be explored is the goal node G, meaning the shortest path to G has been found! The path is constructed by tracing the graph
backwards from G to S:

4.4 Admissibility and Consistency

●​ A heuristic is admissible if it never overestimates the true cost.


●​ It is consistent if the estimated cost is always less than or equal to the estimated cost from any neighboring node plus the step cost to that neighbor.
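Both properties can be checked mechanically on a small graph. The sketch below (graph and heuristic values invented for illustration) compares h against the exact remaining costs:

```python
def true_costs(graph, goal):
    """Exact cheapest cost from every node to the goal
    (Bellman-Ford-style relaxation; fine for small graphs)."""
    nodes = set(graph) | {goal}
    dist = {n: float('inf') for n in nodes}
    dist[goal] = 0
    for _ in range(len(nodes)):
        for n, edges in graph.items():
            for m, c in edges:
                dist[n] = min(dist[n], c + dist[m])
    return dist

def is_admissible(graph, h, goal):
    """h never overestimates the true remaining cost."""
    dist = true_costs(graph, goal)
    return all(h[n] <= dist[n] for n in dist)

def is_consistent(graph, h, goal):
    """h(goal) = 0 and h(n) <= c(n, m) + h(m) for every edge n -> m."""
    return h[goal] == 0 and all(
        h[n] <= c + h[m] for n, edges in graph.items() for m, c in edges)

graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)], 'B': [('G', 3)]}
h_good = {'S': 5, 'A': 4, 'B': 2, 'G': 0}
h_bad = {'S': 5, 'A': 6, 'B': 2, 'G': 0}   # overestimates A (true cost is 5)

print(is_admissible(graph, h_good, 'G'), is_consistent(graph, h_good, 'G'))  # True True
print(is_admissible(graph, h_bad, 'G'))                                      # False
```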

4.5 Pros and Cons

●​ Pros: Guarantees optimality with admissible heuristics.


●​ Cons: High memory usage.

4.6 Real-World Applications

●​ Navigation & GPS Systems – Finding the shortest or fastest path.


●​ Game AI – Decision-making in real-time strategy and puzzle games.
●​ Robotics – Path planning in dynamic environments.
●​ Network Routing – Efficient packet delivery with optimal cost.
●​ Speech & NLP – Optimal parsing, auto-correction paths, etc.


6. Variants of A* Search

6.1 Iterative Deepening A*


Iterative Deepening A* (IDA*) is a memory-efficient variant of the A* search algorithm. While A* keeps all nodes in memory (which can become a
problem for large graphs), IDA* combines the optimality of A* with the space efficiency of Depth-First Search (DFS).

Like A*, it uses the evaluation function f(n) = g(n) + h(n), but instead of maintaining a priority queue, it performs repeated depth-first searches with increasing f-cost thresholds.

Working

The IDA* algorithm includes the following steps:

○​ Start with an initial cost limit.

The algorithm begins with an initial cost limit, usually set to the heuristic estimate of the start node, f(start) = h(start).

○​ Perform a depth-first search (DFS) within the cost limit.

The algorithm performs a depth-first search from the starting node, backtracking at any node whose f value exceeds the current cost limit.

○​ Check for the goal node.

If the goal node is found during the DFS search, the algorithm returns the optimal path to the goal.

○​ Update the cost limit.

If the goal node is not found during the DFS, the algorithm updates the cost limit to the smallest f value that exceeded the current limit during the search.

○​ Repeat the process until the goal is found.

The algorithm repeats the process, increasing the cost limit each time until the goal node is found.
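The loop above can be sketched compactly. The graph below is hypothetical (invented for the sketch), with an admissible heuristic:

```python
def ida_star(graph, h, start, goal):
    """IDA*: depth-first search bounded by f = g + h, with the bound
    raised each round to the smallest f value that exceeded it."""
    def dfs(node, g, bound, path):
        f = g + h[node]
        if f > bound:
            return f, None                  # over the limit: report overflow f
        if node == goal:
            return f, path
        minimum = float('inf')
        for succ, cost in graph.get(node, []):
            if succ in path:
                continue                    # no cycles on the current path
            t, found = dfs(succ, g + cost, bound, path + [succ])
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    bound = h[start]                        # initial cost limit
    while True:
        t, found = dfs(start, 0, bound, [start])
        if found is not None:
            return found
        if t == float('inf'):
            return None                     # goal unreachable
        bound = t                           # raise limit to smallest overflow

graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)], 'B': [('G', 3)]}
h = {'S': 5, 'A': 4, 'B': 2, 'G': 0}

print(ida_star(graph, h, 'S', 'G'))   # ['S', 'A', 'B', 'G']
```

Only the current path is kept in memory; everything else is regenerated on the next iteration, which is exactly the time-for-space trade described above.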

Example Implementation
Let's look at a graph example to see how the Iterative Deepening A* (IDA*) technique functions. Assume we have the graph below, where the
figures in parenthesis represent the expense of travelling between the nodes:


We want to find the optimal path from node A to node F using the IDA* algorithm. The first step is to set an initial cost limit. Let's use the heuristic
estimate of the optimal path, which is 7 (the sum of the costs from A to C to F).

1. Set the cost limit to 7.
2. Start the search at node A.
3. Expand node A and generate its neighbors, B and C.
4. Evaluate the f values of the paths from A to B and A to C, which are 5 and 7 respectively.
5. Since the cost of the path to B is within the cost limit, continue the search from node B.
6. Expand node B and generate its neighbors, D and E.
7. Evaluate the f values of the paths from A to D and A to E, which are 10 and 9 respectively.
8. Since the costs of the paths to D and E both exceed the cost limit, backtrack to node B and then to node A.
9. Evaluate the f value of the path from A to C, which is 7.
10. Since the cost of the path to C is within the cost limit, continue the search from node C.
11. Expand node C and generate its neighbor, F.
12. Evaluate the f value of the path from A to F, which is 7.
13. Since the cost of the path to F is within the cost limit and F is the goal, return the optimal path A - C - F.

We are done, since the optimal path was found within the initial cost limit. Had it not been, we would have raised the cost limit to the smallest f value that exceeded it during the search, and repeated the procedure until the goal node was found.

IDA* is a powerful and flexible search algorithm that can find optimal solutions across a wide variety of problems. By combining the strengths of DFS and A*, it searches large state spaces efficiently and, if an optimal solution exists, finds it.

Advantages
○ Completeness: IDA* is a complete search algorithm; if an optimal solution exists, it will be found.
○ Memory efficiency: IDA* keeps only the current path in memory at any time.
○ Flexibility: IDA* can be used with a variety of heuristic functions, depending on the application.
○ Performance: IDA* often outperforms uninformed algorithms such as uniform-cost search (UCS) or breadth-first search (BFS).
○ Incrementality: IDA* can be stopped at any time and resumed later without losing progress.
○ Optimality: as long as the heuristic used is admissible, IDA* always finds the optimal solution if one exists.

Disadvantages
○ Repeated work: IDA* can be very inefficient for large search spaces, since each iteration re-expands the same nodes with a fresh depth-first search.
○ Local minima: IDA* can get caught in local minima.
○ Heuristic dependence: performance depends heavily on the quality of the heuristic function.
○ Memory in practice: although IDA* stores only one path at a time, some problems may still require a substantial amount of memory.
○ Limited scope: IDA* is intended for problems where the goal state is clearly defined and identifiable.


6.2 Simplified Memory-Bounded A* (SMA*)


SMA* (Simplified Memory-Bounded A*) is a shortest-path algorithm based on A*. The difference is that SMA* runs within a fixed memory bound, while A* may need exponential memory.

Like A*, it expands the most promising branches according to the heuristic. What sets SMA* apart is that it prunes (forgets) nodes whose expansion has revealed them to be less promising than expected.

SMA*, just like A* evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost to get from the node to the goal: f(n) = g(n) +
h(n).

Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated cost of the cheapest path from n to the goal, we have f(n) =
estimated cost of the cheapest solution through n. The lower the f value is, the higher priority the node will have.

The difference from A* is that a parent node's f value is updated as its children are expanded: once a node is fully expanded, its f value is backed up to the smallest f value among its successors, so the estimate becomes more accurate and never decreases.

In addition, the node stores the f value of the best forgotten successor (or best forgotten child). This value is restored if the forgotten successor is
revealed to be the most promising successor.

Working:

Image 1: Idea of how SMA* works

Image 2: Generating Children in SMA* Algorithm


Image 3: Handling Long Paths i.e. Too Many Nodes In The Memory

Image 4: Adjusting The f Value

Problem

Image 5: The Problem of Simplified Memory Bounded A*


On Image 5 we have the problem we are about to solve. Our requirement is to find the shortest path starting from node “S” to node “G”. We can
see the graph and how the nodes are connected. Also, we can see the table, containing the heuristic values for each of the nodes.

Image 6: Evaluating The Children

On Image 6 we can see how the algorithm evaluates the children. First it added the "A" and "B" nodes (in alphabetical order). To the right of each node, in blue, is the f value; to the left are the g value, the cost to reach that node (in green), and the h value, the heuristic (in red).

Image 7: Full Memory While Evaluation

On Image 7 we can see what happens when we want to evaluate more nodes than the memory allows. The already-evaluated child with the highest f value is removed, but its value is remembered in its parent (the number in purple to the right of the "S" node).

Image 8: Adjusting The f Value

On Image 8 we can see that once all accessible child nodes have been visited or explored, we adjust the parent's f value to that of the child with the lowest f value.

Image 9: Memory Full Evaluate Through The Lowest f Value Child

On Image 9 we can see how the algorithm evaluates the children of the "A" node. Its only child is the "G" node, which is the goal, so there is no way to explore further.

Now we remove the "C" node, since it has the highest f value, but we don't remember its value, since the "B" node has a lower one. We then update the f value of the "A" node to 6, because all of its accessible children have been explored.

Image 10: Update The Root Value (“S” node) To The Remembered Value

Just because we've reached the goal node doesn't mean the algorithm is finished. As we can see, the remembered f value of the "B" node is lower than the f value of the "G" node reached through the "A" node.

This gives hope that we might reach the goal again for a lower f value, i.e. a lower total cost. We also update the f value of the "S" node to 5, since it no longer has a child node with value 4.

Image 11: Remove The Goal Leading Node

As we can see on Image 11, we remove the "A" node, since it has already discovered the goal and has the same f value as the "C" node; its value is remembered in the parent "S" node.

Image 12: Reaching The Goal For The Lowest Total Cost

On Image 12 we can see that we've reached the goal again, this time through the "B" node, at a lower cost than through the "A" node.

The memory is full, so we remove the "C" node, because it has the highest f value: even if we eventually reached the goal from that node, the total cost would, in the best case, equal 6, which is worse than the path we already have.

At this stage, the algorithm terminates, and we have found the shortest path from the “S” node to the “G” node.


7. Comparison of Informed Search Algorithms


Algorithm            Completeness               Optimality                 Space Complexity   Time Complexity
Greedy Best-First    Yes (finite graphs)        No                         O(b^m)             O(b^m)
A* Search            Yes                        Yes (admissible h)         O(b^d)             O(b^d)
IDA*                 Yes                        Yes (admissible h)         O(d)               O(b^d)
SMA*                 Yes (if memory suffices)   Yes (if memory suffices)   Bounded            Bounded

8. Applications of Informed Search


●​ Robotics: Path planning for autonomous robots.
●​ Game AI: Heuristic-based game strategies.
●​ Network Routing: Optimal packet routing in dynamic networks.
●​ Medical Diagnosis: Efficient search of diagnostic hypotheses.
●​ Natural Language Processing: Syntax parsing using heuristic grammars.

9. Challenges and Limitations


●​ Heuristic Quality: Poor heuristics degrade performance.
●​ Memory Usage: A* and Best-First Search can be memory-intensive.
●​ Computational Overhead: Heuristic computation may be expensive.
●​ Local Minima: Greedy search can be misled by local optima.


Conclusion
Informed search techniques play a pivotal role in enhancing the intelligence and efficiency of problem-solving in Artificial Intelligence (AI). Unlike
uninformed search strategies that blindly explore the state space, informed search methods incorporate heuristic knowledge to guide the search
process toward promising regions of the search space. This capability drastically reduces both time and computational overhead, particularly in
complex or vast problem domains.

Among the various informed strategies, Greedy Best-First Search offers simplicity and speed, making it suitable for scenarios where quick, if not
optimal, decisions are acceptable. However, its tendency to get trapped in local optima limits its effectiveness in critical applications.

In contrast, A* Search stands out for its ability to guarantee both completeness and optimality, provided that the heuristic function is admissible
and consistent. It balances the actual cost incurred and the estimated cost to reach the goal, making it one of the most widely used algorithms in
robotics, navigation systems, game AI, and route optimization.

The Iterative Deepening A* (IDA*) and Simplified Memory-Bounded A* (SMA*) variants address the high memory consumption of A*, offering alternatives that accept increased computation time in exchange for space efficiency, while still maintaining optimality in many cases.

Nevertheless, the effectiveness of any informed search technique is fundamentally tied to the quality of the heuristic. Poor heuristics can
render even the most advanced algorithms inefficient or suboptimal. Designing or learning high-quality heuristics remains an ongoing research
challenge, especially in dynamic and uncertain environments.

In summary, informed search techniques provide a framework for intelligent decision-making, significantly outperforming uninformed methods
in both performance and applicability. As AI systems continue to grow in complexity, the development of scalable, memory-efficient, and
adaptive informed search algorithms will be critical in advancing real-world AI applications. With the integration of machine learning to
dynamically improve heuristics, the future of informed search is promising and central to the next generation of intelligent agents.

References
1. Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach.
2. Pearl, J. (1984). Heuristics: Intelligent Search Strategies for Computer Problem Solving.
3. Nilsson, N. J. (1998). Artificial Intelligence: A New Synthesis.
4. Hart, P. E., Nilsson, N. J., & Raphael, B. (1968). A Formal Basis for the Heuristic Determination of Minimum Cost Paths.
5. Berkeley AI Course (CS 188), University of California, Berkeley. https://inst.eecs.berkeley.edu/~cs188/fa18/ (lecture notes, slides, and problem sets on search algorithms).
6. GeeksforGeeks, Informed Search Algorithms. https://www.geeksforgeeks.org/a-search-algorithm/ (algorithm breakdowns, examples, and pseudocode).
7. Brilliant.org, A* and IDA* tutorials. https://brilliant.org/wiki/a-star-search/ (visualization-heavy explanations and comparisons).
