AI Unit-1

Introduction to AI and Intelligent Systems

1. Definition of AI: Artificial Intelligence (AI) is a branch of computer science that aims to
create systems that can perform tasks that typically require human intelligence.
2. Goals of AI:
o Replicate human intelligence.
o Perform tasks such as reasoning, learning, problem-solving, perception, and
language understanding.
3. Key Concepts:
o Machine Learning: Subset of AI where systems learn from data.
o Neural Networks: Algorithms inspired by the structure and function of the
human brain.
o Natural Language Processing (NLP): Processing and understanding human
language by computers.
o Robotics: Field of engineering and computer science that deals with design,
construction, and operation of robots.
4. History of AI:
o Dartmouth Conference (1956): Birth of AI as a field.
o Early successes (e.g., chess-playing programs, expert systems).
o AI winter and resurgence.
5. Applications of AI:
o Healthcare: Diagnosis, drug discovery, personalized treatment.
o Finance: Fraud detection, algorithmic trading, risk assessment.
o Transportation: Self-driving cars, route optimization.
o Customer Service: Chatbots, virtual assistants.
o Entertainment: Recommendation systems, gaming AI.
6. Challenges and Risks:
o Ethics and bias in AI algorithms.
o Job displacement and automation.
o Privacy concerns with data collection and usage.
o Existential risks associated with superintelligent AI.
7. Types of AI:
o Narrow AI (Weak AI): Designed for a narrow task (e.g., virtual assistants).
o General AI (Strong AI): Can understand, learn, and apply knowledge across
different domains.
o Superintelligent AI: AI surpassing human intelligence.
8. Intelligent Systems:
o Definition: Systems that exhibit characteristics of intelligence, such as
learning, reasoning, problem-solving, perception, and language
understanding.
o Components: Perception, representation and reasoning, learning, action.

Remember, these are just introductory notes. Each of these topics can be expanded into further
detail as the course progresses.
Approaches to AI

1. Symbolic AI (GOFAI - Good Old-Fashioned AI):


o Based on symbolic manipulation and logic.
o Represents knowledge using symbols and rules.
o Performs reasoning and problem-solving through logical inference.
o Examples include expert systems and rule-based systems.
2. Connectionist AI (Neural Networks):
o Inspired by the structure and function of the human brain.
o Comprised of interconnected nodes (neurons) organized in layers.
o Learns patterns and relationships from data through training.
o Deep learning, a subset of neural networks, uses multiple layers for more
complex tasks.
3. Evolutionary AI:
o Inspired by the process of biological evolution.
o Uses genetic algorithms, genetic programming, and evolutionary strategies.
o Solutions evolve over generations through mutation, recombination, and
selection.
o Well-suited for optimization, search, and design problems.
4. Bayesian Networks:
o Probabilistic graphical models representing uncertain relationships between
variables.
o Uses Bayes' theorem to update probabilities based on new evidence.
o Suitable for reasoning under uncertainty and decision-making.
5. Fuzzy Logic:
o Deals with uncertainty and imprecision in decision-making.
o Allows for reasoning with approximate, rather than precise, information.
o Membership functions define the degree of membership of elements in sets.
6. Hybrid Approaches:
o Combine multiple AI techniques to leverage their strengths.
o Examples include neuro-symbolic systems, which integrate symbolic
reasoning with neural networks.
o Hybrid systems often achieve better performance than individual
approaches alone.
7. Emerging Approaches:
o Quantum AI: Utilizes principles of quantum mechanics for computation,
potentially offering exponential speedup for certain problems.
o Swarm Intelligence: Inspired by the collective behavior of social insects,
such as ants and bees, for optimization and decision-making.
o Neuromorphic Computing: Designing hardware architectures inspired by the
human brain to accelerate AI algorithms.

These approaches represent diverse strategies for developing AI systems, each with its own
strengths and limitations. The choice of approach depends on the specific problem domain,
available resources, and desired outcomes.
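The Bayes' theorem update at the heart of Bayesian networks can be sketched in a few lines of Python. The numbers below (a diagnostic test with a 1% prior, 90% sensitivity, and a 5% false-positive rate) are illustrative assumptions, not values from the notes:

```python
def bayes_update(prior, likelihood, false_positive_rate):
    """Return the posterior P(hypothesis | positive evidence) via Bayes' theorem."""
    # P(evidence) = P(+|H)P(H) + P(+|not H)P(not H)
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Even with a positive test, the low prior keeps the posterior modest.
posterior = bayes_update(0.01, 0.9, 0.05)
print(round(posterior, 3))  # 0.154
```

This is the same mechanism a Bayesian network applies at each node: a prior belief is revised as new evidence arrives.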

Problem Solving in AI

1. Definition of Problem Solving:


o Problem-solving in AI involves finding solutions to complex problems using
computational techniques.
o It encompasses defining problems, representing them in a structured format,
and devising algorithms to find solutions.
2. Problem Representation:
o Problems are typically represented using formalisms such as state-space
representations, problem trees, or graphs.
o State-space representation: Describes the possible states of the problem
and the transitions between them.
o Problem trees: Organize possible actions and outcomes in a tree structure.
o Graphs: Represent states as nodes and transitions as edges.
3. Problem-Solving Approaches:
o Search Algorithms: Explore the space of possible solutions systematically
to find a desired outcome.
▪ Uninformed Search: Explore the search space without additional
information.
▪ Informed Search: Use heuristics or domain-specific knowledge to
guide the search more efficiently.
o Constraint Satisfaction: Find a solution that satisfies a set of constraints.
o Optimization: Find the best solution according to some predefined criteria,
such as minimizing cost or maximizing utility.
o Rule-Based Systems: Represent knowledge in the form of rules and use
inference mechanisms to derive solutions.
o Game Playing: Develop algorithms to play games optimally, considering
possible moves and their consequences.
4. Search Algorithms:
o Breadth-First Search (BFS): Expands the shallowest unexpanded node first.
o Depth-First Search (DFS): Explores as far as possible along each branch
before backtracking.
o A* Search: Uses a heuristic function to guide the search towards the goal
efficiently.
o Hill Climbing: Iteratively improves a solution by making small modifications
until no neighboring solution is better, which may yield only a local optimum.
o Genetic Algorithms: Use principles of natural selection and evolution to
search for optimal solutions in a population of candidate solutions.
5. Heuristics and Domain Knowledge:
o Heuristics provide additional information about the problem domain to guide
the search more efficiently.
o Admissible heuristics: Never overestimate the cost to reach the goal.
o Consistent heuristics: Satisfy the triangle inequality: the estimated cost from a
node to the goal is at most the step cost to a successor plus the estimated
cost from that successor to the goal, i.e., h(n) ≤ c(n, n′) + h(n′).
6. Evaluation Metrics:
o Completeness: Can the algorithm find a solution if one exists?
o Optimality: Does the algorithm find the best (lowest-cost) solution?
o Time complexity: How long does it take to find a solution?
o Space complexity: How much memory is required during the search?
7. Real-World Applications:
o Problem-solving techniques are applied in various domains, including
logistics, scheduling, planning, robotics, natural language processing, and
autonomous systems.
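To make the informed-search idea concrete, here is a minimal A* sketch on a toy weighted graph. The graph, edge costs, and heuristic table h are made up for illustration; h is chosen to be admissible (it never overestimates the true remaining cost to the goal G):

```python
import heapq

# Hypothetical weighted graph: node -> list of (neighbor, step cost).
graph = {
    'S': [('A', 1), ('B', 4)],
    'A': [('B', 2), ('G', 6)],
    'B': [('G', 2)],
    'G': [],
}
# Admissible heuristic: estimated cost from each node to G.
h = {'S': 4, 'A': 3, 'B': 2, 'G': 0}

def a_star(start, goal):
    # Frontier entries: (f = g + h, g, node, path so far).
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph[node]:
            ng = g + cost
            if ng < best_g.get(nbr, float('inf')):
                best_g[nbr] = ng
                heapq.heappush(frontier, (ng + h[nbr], ng, nbr, path + [nbr]))
    return None, float('inf')

print(a_star('S', 'G'))  # (['S', 'A', 'B', 'G'], 5)
```

The heuristic steers the search toward G: the direct edge A-G (cost 6) is pushed but never expanded, because the cheaper path through B reaches the goal first.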

Uninformed Search

1. Introduction:
o Uninformed search algorithms explore the search space systematically
without considering additional information about the problem domain.
o They are also known as blind search algorithms because they do not use any
domain-specific knowledge or heuristics.
2. Types of Uninformed Search Algorithms:

a. Breadth-First Search (BFS):


o Expands the shallowest unexpanded node first.
o Uses a FIFO (First-In-First-Out) queue to store the frontier.
o Finds the shallowest goal node first; the path is optimal when all step costs are equal.

b. Depth-First Search (DFS):

o Explores as far as possible along each branch before backtracking.


o Uses a LIFO (Last-In-First-Out) stack or recursion to store the frontier.
o May not find the shortest path to the goal and can get stuck in infinite paths.

c. Uniform-Cost Search (UCS):

o Expands the node with the lowest path cost.


o Uses a priority queue ordered by path cost to store the frontier.
o Guarantees the optimal solution when all step costs are positive.
3. Characteristics:
o Completeness: BFS and UCS are complete, meaning they find a solution if one
exists; DFS can fail to terminate in infinite or cyclic search spaces.
o Optimality: BFS is optimal when all step costs are equal, and UCS when all step costs are positive; DFS is not optimal.
o Time Complexity: The time complexity of these algorithms depends on the
size of the search space.
o Space Complexity: BFS and UCS require more memory compared to DFS
due to storing all explored nodes in memory.
4. Comparison:
o BFS: Suitable for finding the shortest path in a tree or graph with uniform
edge costs. However, it may consume a lot of memory for large search
spaces.
o DFS: Memory-efficient but may not find the shortest path and can get stuck
in infinite paths.
o UCS: Guarantees the optimal solution, but its performance depends on the
edge costs.
5. Applications:
o Uninformed search algorithms are used in various AI applications such as
route planning, puzzle solving, and exploring state spaces in problem-solving
tasks.
6. Considerations:
o Selection of the appropriate uninformed search algorithm depends on
factors such as the problem domain, available memory, and desired solution
optimality.

These class notes provide an overview of uninformed search algorithms in AI, including their
types, characteristics, comparison, and applications.
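The Uniform-Cost Search procedure described above can be sketched in Python as follows; the graph and its edge costs are hypothetical:

```python
import heapq

# Hypothetical weighted graph: node -> list of (neighbor, step cost).
graph = {
    'A': [('B', 2), ('C', 5)],
    'B': [('C', 1), ('D', 4)],
    'C': [('D', 1)],
    'D': [],
}

def ucs(start, goal):
    # Priority queue ordered by accumulated path cost g.
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for nbr, step in graph[node]:
            heapq.heappush(frontier, (cost + step, nbr, path + [nbr]))
    return None, float('inf')

print(ucs('A', 'D'))  # (['A', 'B', 'C', 'D'], 4)
```

Note that UCS returns the cost-4 path through B and C even though the direct routes A-C-D and A-B-D look shorter in edge count; the priority queue expands cheapest paths first.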
Breadth-First Search (BFS)

1. Introduction:
o Breadth-First Search (BFS) is a fundamental algorithm used in AI for
systematically exploring a graph or tree data structure.
o It explores all the nodes at a given depth level before moving on to nodes at
the next level.
o BFS guarantees to find the shortest path from the initial node to a goal node,
if one exists, in terms of the number of edges traversed.
2. Algorithm:
o Data Structures:
▪ Queue: Used to store nodes at the current level (frontier).
▪ Set or Hash Table: Used to keep track of visited nodes to avoid
revisiting them.
o Procedure:
▪ Enqueue the initial node into the queue.
▪ Repeat until the queue is empty: a. Dequeue a node from the front of
the queue. b. If the dequeued node is the goal node, return success. c.
Otherwise, enqueue all the unvisited neighboring nodes of the
dequeued node and mark them as visited.
▪ If the queue becomes empty and the goal node is not found, return
failure.
3. Characteristics:
o BFS explores the search space level by level, ensuring that all nodes at a
given depth are visited before moving to deeper levels.
o It guarantees to find the shallowest path from the initial node to the goal
node, making it suitable for finding optimal solutions.
o BFS may require significant memory resources, especially for graphs with a
large branching factor, as it stores all the nodes at each level in memory.
o The time complexity of BFS is O(V + E), where V is the number of vertices
(nodes) and E is the number of edges in the graph.
4. Applications:
o Route Planning: BFS is commonly used in navigation systems to find the
shortest path between locations.
o Puzzle Solving: BFS can be applied to solve puzzles such as the 8-puzzle or
the maze problem.
o Web Crawling: BFS can be used to crawl web pages on the internet in a
systematic manner.
5. Advantages:
o Completeness: BFS is complete, meaning it will find a solution if one exists.
o Optimality: BFS guarantees to find the shortest path to the goal node.
o Simplicity: BFS is easy to implement and understand, making it suitable for
introductory AI courses.
6. Considerations:
o Memory Usage: BFS may not be suitable for graphs with a large number of
nodes or a high branching factor due to its high memory requirements.
o Performance: The performance of BFS depends on the structure of the graph
and the available memory resources. It may become impractical for large
graphs.

Understanding BFS is crucial for AI practitioners as it forms the basis for more advanced search
algorithms and techniques.

Let's consider a simple example of BFS using a graph representing a social network where each
node represents a person, and edges represent friendships.

Consider the following undirected graph:

A
/ \
B C
/ \ / \
D E F G

In this graph:

• A is friends with B and C.


• B is friends with A, D, and E.
• C is friends with A, F, and G.
• D is friends with B.
• E is friends with B.
• F is friends with C.
• G is friends with C.

Let's say we want to find if person G is in the social network, and if so, what is the shortest path
from person A to person G.
We can use BFS to search for person G starting from person A:

1. Begin with person A and enqueue it.


2. Dequeue person A, mark it as visited, and enqueue its neighbors B and C.
3. Dequeue person B, mark it as visited, and enqueue its neighbors D and E.
4. Dequeue person C, mark it as visited, and enqueue its neighbors F and G.
5. Dequeue person D; it has no unvisited neighbors to enqueue.
6. Dequeue person E; it has no unvisited neighbors to enqueue.
7. Dequeue person F; it has no unvisited neighbors to enqueue.
8. Dequeue person G, mark it as visited, and finish the search.

In this example, BFS will find person G and the shortest path from person A to person G will be
A -> C -> G.
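The walkthrough above can be sketched in Python using the same friendship graph; the function name bfs_shortest_path is ours, not from the notes:

```python
from collections import deque

# The friendship graph from the example (undirected, stored as adjacency lists).
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F', 'G'],
    'D': ['B'], 'E': ['B'], 'F': ['C'], 'G': ['C'],
}

def bfs_shortest_path(start, goal):
    # FIFO queue of paths; the visited set prevents revisiting nodes.
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                queue.append(path + [nbr])
    return None

print(bfs_shortest_path('A', 'G'))  # ['A', 'C', 'G']
```

Because BFS finishes each level before descending, the first path it returns is guaranteed to be the shortest in edge count.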

Depth-First Search (DFS)

1. Introduction:
o Depth-First Search (DFS) is an uninformed search algorithm used to traverse
or explore a graph or tree data structure.
o It explores as far as possible along each branch before backtracking.
o DFS does not guarantee finding the shortest path to a goal node but is useful
for exploring the entire graph or tree.
2. Algorithm:
o Data Structures:
▪ Stack: Used to store nodes to be explored.
▪ Set or Hash Table: Used to keep track of visited nodes to avoid
revisiting them.
o Procedure:
▪ Push the initial node onto the stack.
▪ Repeat until the stack is empty: a. Pop a node from the top of the
stack. b. If the popped node is the goal node, return success. c.
Otherwise, push all the unvisited neighboring nodes of the popped
node onto the stack and mark them as visited.
▪ If the stack becomes empty and the goal node is not found, return
failure.
3. Characteristics:
o DFS explores the search space deeply before exploring nodes at the same
level, often reaching the deepest nodes first.
o It may not find the shortest path to a goal node but is useful for exploring the
entire graph or tree.
o DFS uses less memory compared to BFS, as it only needs to store the nodes
along the current path.
o The time complexity of DFS is O(V + E), where V is the number of vertices
(nodes) and E is the number of edges in the graph.
4. Applications:
o Maze Solving: DFS can be used to explore a maze to find a path from the start
to the exit.
o Topological Sorting: DFS can be applied to perform topological sorting of a
directed acyclic graph (DAG).
o Graph Traversal: DFS is commonly used for graph traversal and cycle
detection.
5. Advantages:
o Memory Efficiency: DFS uses less memory compared to BFS, especially for
graphs with a high branching factor.
o Simplicity: DFS is easy to implement and understand, making it suitable for
introductory AI courses.
o Space Efficiency: DFS can be implemented with recursion, using the call stack
as an implicit frontier instead of an explicit stack data structure.
6. Considerations:
o Completeness: DFS may not find a solution if the search space is infinite or if
there are cycles in the graph.
o Optimality: DFS does not guarantee finding the shortest path to a goal node.
o Performance: The performance of DFS depends on the structure of the graph
and the order in which nodes are explored.

Let's consider a simple example of DFS using a graph representing a social network where each
node represents a person, and edges represent friendships.

Consider the following undirected graph:

A
/ \
B C
/ \ / \
D E F G

In this graph:

• A is friends with B and C.


• B is friends with A, D, and E.
• C is friends with A, F, and G.
• D is friends with B.
• E is friends with B.
• F is friends with C.
• G is friends with C.

Let's say we want to perform DFS starting from person A.

We can use DFS to explore the graph in a depth-first manner:

1. Start with person A and mark it as visited.


2. Explore each neighbor of A depth-first.
o Explore B: Mark it as visited.
▪ Explore D: Mark it as visited.
▪ Explore E: Mark it as visited.
o Explore C: Mark it as visited.
▪ Explore F: Mark it as visited.
▪ Explore G: Mark it as visited.

The traversal sequence using DFS starting from A is: A -> B -> D -> E -> C -> F -> G.

DFS explores as far as possible along each branch before backtracking, which is why it traverses
the entire branch of node B before exploring the branch of node C.
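The traversal above can be sketched with a short recursive Python function on the same graph; the visiting order depends on the order in which neighbors are listed:

```python
# The friendship graph from the example (undirected, stored as adjacency lists).
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F', 'G'],
    'D': ['B'], 'E': ['B'], 'F': ['C'], 'G': ['C'],
}

def dfs(node, visited=None, order=None):
    """Recursive DFS; the call stack plays the role of the explicit stack."""
    if visited is None:
        visited, order = set(), []
    visited.add(node)
    order.append(node)
    for nbr in graph[node]:
        if nbr not in visited:
            dfs(nbr, visited, order)
    return order

print(dfs('A'))  # ['A', 'B', 'D', 'E', 'C', 'F', 'G']
```

Compare this with the BFS result: DFS exhausts B's whole subtree (D, E) before touching C, whereas BFS visits B and C first.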

Iterative Deepening Search (IDS)

Iterative Deepening Search (IDS) is an uninformed search algorithm used in artificial
intelligence to find the optimal solution in a tree or graph. It combines the benefits of depth-first
search (DFS) and breadth-first search (BFS) while mitigating their limitations.

Here's how Iterative Deepening Search works:

1. Initialization: Start with a depth limit of 0.


2. Depth-Limited Search: Perform a depth-limited search (DLS) from the root node,
exploring nodes up to the current depth limit.
3. Solution Check:
o If the goal node is found at the current depth limit, return the solution.
o If the goal node is not found and the depth limit cut off unexplored nodes,
increase the depth limit and repeat the search.
4. Iterative Deepening: Repeat steps 2 and 3, increasing the depth limit with each iteration
until the goal node is found or the search space is exhausted.
5. Termination: Stop when the goal node is found or when the search space is fully
explored.

Iterative Deepening Search offers several advantages:

• Completeness: IDS is guaranteed to find a solution if one exists, provided the branching
factor is finite, even when the search tree is infinitely deep.
• Optimality: IDS finds the shallowest goal node, so its solution is optimal when all step
costs are equal.
• Memory Efficiency: IDS consumes memory comparable to DFS, as it only maintains the
path from the root node to the current node, unlike BFS which requires storing the entire
level.

However, IDS may explore some nodes multiple times, leading to redundant work. Despite this
drawback, IDS is often preferred for its simplicity and effectiveness, especially in situations
where memory is limited or the search space is unknown.

Let's consider a simple example of using Iterative Deepening Search (IDS) to find a target value in a
binary tree.

Suppose we have the following binary tree:

1
/ \
2 3
/ \ / \
4 5 6 7

Our goal is to find a target value, say 6, in this binary tree using IDS.

Here's how IDS would work in this scenario:

1. Initial Depth Limit (0):


o Start with a depth limit of 0.
o Perform a depth-limited search (DLS) from the root node, exploring nodes up
to the depth limit of 0.
o Since the target value is not found at this depth limit, increase the depth
limit.
2. Depth Limit (1):
o Increase the depth limit to 1.
o Perform a depth-limited search from the root node, exploring nodes up to the
depth limit of 1.
o Explore nodes 2 and 3 at depth 1.
o Since the target value is not found at this depth limit, increase the depth
limit.
3. Depth Limit (2):
o Increase the depth limit to 2.
o Perform a depth-limited search from the root node, exploring nodes up to the
depth limit of 2.
o Explore nodes 4, 5, 6, and 7 at depth 2.
o The target value 6 is found at this depth limit.
4. Termination:
o Stop the search and return the path to the target value.

In this example, Iterative Deepening Search gradually increases the depth limit with each
iteration, exploring deeper levels of the binary tree until it finds the target value or exhausts all
possible paths. IDS ensures completeness and optimality, making it suitable for searching in
large and complex search spaces.
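The walkthrough above can be sketched in Python; `children` is our encoding of the binary tree, and `dls`/`ids` are illustrative names:

```python
# The binary tree from the example: node -> list of its children.
children = {1: [2, 3], 2: [4, 5], 3: [6, 7], 4: [], 5: [], 6: [], 7: []}

def dls(node, target, limit, path):
    """Depth-limited search: explore at most `limit` levels below `node`."""
    path.append(node)
    if node == target:
        return True
    if limit > 0:
        for child in children[node]:
            if dls(child, target, limit - 1, path):
                return True
    path.pop()  # backtrack: this node is not on a path to the target
    return False

def ids(root, target, max_depth=10):
    # Re-run DLS with an increasing depth limit until the target is found.
    for limit in range(max_depth + 1):
        path = []
        if dls(root, target, limit, path):
            return path  # path from the root to the target
    return None

print(ids(1, 6))  # [1, 3, 6]
```

The limit-0 and limit-1 passes fail exactly as in the walkthrough, and the limit-2 pass finds 6; the repeated shallow passes are the redundant work mentioned above, which is cheap because shallow levels contain few nodes.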

Hill Climbing Search in AI

1. Introduction:
o Hill Climbing is a local search algorithm used in artificial intelligence for
optimization problems.
o It belongs to the family of heuristic search algorithms and aims to find the
peak (maximum or minimum) of the objective function in the search space.
2. Algorithm:
o Initialization:
▪ Start with an initial solution, often randomly generated or provided.
o Evaluation:
▪ Evaluate the current solution by calculating its objective function
value or fitness score.
o Neighbor Generation:
▪ Generate neighboring solutions by making small modifications to the
current solution. These modifications could involve changing one or
more parameters, variables, or components of the solution.
o Improvement:
▪ Select the neighboring solution that offers the greatest improvement
according to the objective function or fitness score.
▪ If the neighboring solution improves upon the current solution, move
to it.
o Termination:
▪ Repeat steps 2-4 until one of the termination conditions is met:
▪ No further improvement is possible (local maximum or
minimum).
▪ A maximum number of iterations is reached.
▪ A predefined threshold for improvement is achieved.
▪ A timeout limit is reached.
3. Types of Hill Climbing:
o Simple Hill Climbing: Examines neighboring solutions one at a time and
moves to the first one that improves over the current solution.
o Steepest-Ascent Hill Climbing: Evaluates all neighboring solutions and
moves to the one that offers the greatest improvement.
o Random-Restart Hill Climbing: Periodically restarts the search from a
random initial solution to escape local optima.
o Simulated Annealing: A related algorithm that accepts worse solutions with a
probability that decreases over time, allowing the search to escape local optima.
4. Characteristics:
o Hill Climbing is greedy in nature, as it always moves to the best available
solution in the immediate neighborhood without considering the global
picture.
o It focuses on improving the current solution without considering the entire
search space.
o Hill Climbing typically requires minimal memory compared to other search
algorithms.
5. Applications:
o Hill Climbing is widely used in optimization problems such as parameter
tuning in machine learning algorithms, scheduling, routing, and many other
domains where finding the best solution within a limited search space is
crucial.
6. Advantages:
o Simplicity: Hill Climbing is easy to implement and understand.
o Efficiency: It can quickly find good solutions for certain types of optimization
problems, especially in small search spaces.
7. Limitations:
o May get stuck in local optima: Depending on the nature of the problem and
the search space, Hill Climbing may converge to a local optimum instead of
the global optimum.
o Lack of diversity: Hill Climbing may overlook potentially better solutions that
are not directly reachable from the current state.

Hill Climbing is a powerful optimization algorithm in AI, widely used for finding optimal
solutions in various domains. Its effectiveness depends on the problem characteristics, the quality
of the objective function, and the search space topology.

Let's illustrate the Hill Climbing search algorithm with a complete solution using a simple
example of finding the maximum value in a list of numbers.

Suppose we have the following list of numbers:

[2, 8, 5, 12, 9, 6, 3]

Our goal is to find the maximum value in this list using Hill Climbing.

Here's how Hill Climbing would work in this scenario, treating each index as a state, the
value at that index as the objective function, and the adjacent indices as neighbors:

1. Initialization:
o Start with an initial solution, which could be any position in the list. Let's start
with the first element, 2.
2. Evaluation:
o Evaluate the current solution by comparing it with its neighbors.
o The only neighbor of 2 is the value to its right: 8.
3. Improvement:
o Move to the neighboring solution that offers the greatest improvement.
o Since 8 is greater than 2, we move to 8.
4. Iteration:
o Repeat steps 2 and 3 from the new position.
o The neighbors of 8 are 2 and 5, and both are smaller than 8.
5. Termination:
o Stop when no better solution exists in the immediate neighborhood.
o Since neither neighbor improves on 8, the search terminates at 8, a local
maximum. The global maximum, 12, is never reached from this starting point.

Complete Solution:
• Start with the initial solution: 2.
• Move to the better neighbor, 8.
• Stop at 8, because neither neighbor (2 or 5) is larger.
• The result is the local maximum 8, not the global maximum 12.
• Random-Restart Hill Climbing, which reruns the climb from several random starting
positions, reaches 12 from most other starting points.

In this example, Hill Climbing moves toward higher values until it reaches a local maximum. It
illustrates both the mechanics of the algorithm and its main limitation: greedy local moves can
converge to a local optimum, which is exactly why variants such as random restarts and
simulated annealing exist.
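A minimal Python sketch of this example, where the state is a list index, the objective is the value stored there, and the neighbors are the adjacent indices (the function names are ours):

```python
import random

# The list from the example; 12 is the global maximum, 8 a local one.
values = [2, 8, 5, 12, 9, 6, 3]

def hill_climb(start):
    """Climb to a local maximum by moving to the best adjacent index."""
    current = start
    while True:
        neighbors = [i for i in (current - 1, current + 1) if 0 <= i < len(values)]
        best = max(neighbors, key=lambda i: values[i])
        if values[best] <= values[current]:
            return current  # local maximum: no neighbor is better
        current = best

print(values[hill_climb(0)])  # 8 -- stuck at the local maximum

# Random-restart hill climbing: rerun the climb from several random starts
# and keep the best result, which escapes the local maximum at 8.
random.seed(0)
starts = [random.randrange(len(values)) for _ in range(10)]
best_index = max((hill_climb(s) for s in starts), key=lambda i: values[i])
print(values[best_index])  # reaches 12 unless every restart lands on index 0 or 1
```

Only starts at indices 0 and 1 climb to 8; every other start climbs to 12, which is why a handful of random restarts almost always finds the global maximum here.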
