
JHARKHAND RAKSHA SHAKTI UNIVERSITY

BCA-CS Final Year VI Sem

ARTIFICIAL INTELLIGENCE NOTES

Notes for BCACS-601

Supervised by Saleem Sanatan Kujur
April 2, 2025
Contents

1 UNIT 1 INTRODUCTION TO ARTIFICIAL INTELLIGENCE
  1.1 Introduction to AI
  1.2 Background and Applications
    1.2.1 Historical Background of AI
    1.2.2 How does AI work?
    1.2.3 Applications of AI
  1.3 Turing Test and Rational Agent approaches to AI
    1.3.1 Turing Test Approach
    1.3.2 Rational Agent Approach
  1.4 Introduction to Intelligent Agents
    1.4.1 Environment

2 UNIT 2 PROBLEM SOLVING AND SEARCHING TECHNIQUES
  2.1 Introduction to AI problems
    2.1.1 Time Complexity
    2.1.2 Space Complexity
    2.1.3 AI Problem Examples
  2.2 Problem Space
  2.3 Solution Space
  2.4 Problem Characteristics
  2.5 Control Strategies
    2.5.1 Two requirements for a good control strategy
  2.6 Breadth First Search
  2.7 Depth First Search
  2.8 A* algorithm
  2.9 Constraint Satisfaction Problem
  2.10 Means-End Analysis
  2.11 Introduction to Game Playing
    2.11.1 Types of Game Playing in Artificial Intelligence
    2.11.2 The Role of Search Algorithms in Game Playing
  2.12 MiniMax Algorithm
  2.13 Alpha-Beta Pruning Problems

3 Important portion for MID SEM
Chapter 1

UNIT 1 INTRODUCTION TO ARTIFICIAL INTELLIGENCE

1.1 Introduction to AI
What is AI?

Artificial intelligence (AI) is technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.

Applications and devices equipped with AI can see and identify objects. They can understand and respond to human language. They can learn from new information and experience. They can make detailed recommendations to users and experts. They can act independently, replacing the need for human intelligence or intervention (a classic example being a self-driving car).

But in 2024, most AI researchers and practitioners—and most AI-related headlines—are focused on breakthroughs in generative AI (gen AI), a technology that can create original text, images, video and other content. To fully understand generative AI, it's important to first understand the technologies on which generative AI tools are built: machine learning (ML) and deep learning.

Figure 1.1: Evolution of AI Knowledge
1.2 Background and Applications
1.2.1 Historical Background of AI
Artificial Intelligence (AI) has a rich history that spans centuries, evolving from philosophical concepts to the sophisticated systems we see today. Its roots can be traced back to ancient times, though the formal field emerged much later. The idea of creating intelligent machines dates to Greek mythology, with tales of mechanical beings like Talos, a bronze automaton. However, the intellectual groundwork for AI began in the 17th and 18th centuries with philosophers like René Descartes, who speculated about the nature of thought, and Gottfried Leibniz, who envisioned a universal language of reasoning that machines could use.

The 19th century brought significant advancements in mathematics and logic, laying the foundation for AI. George Boole developed Boolean algebra, which became a cornerstone of computer science, while Charles Babbage designed the Analytical Engine—a mechanical computer that, though never completed, hinted at programmable machines. His collaborator, Ada Lovelace, wrote the first algorithm intended for the machine and speculated that it could go beyond mere calculations to create music or art, foreshadowing AI's creative potential.

The true birth of AI as a field occurred in the mid-20th century. In 1936, Alan Turing published his seminal paper on computable numbers, introducing the Turing Machine—a theoretical device that modeled computation and problem-solving. Turing later proposed the "imitation game" (now the Turing Test) in 1950, asking whether a machine could exhibit behavior indistinguishable from a human's. This question galvanized AI research. Around the same time, in 1943, Warren McCulloch and Walter Pitts created a mathematical model of artificial neurons, bridging biology and computation and sparking interest in neural networks.

The term "Artificial Intelligence" was coined in 1956 by John McCarthy, who organized the Dartmouth Conference, widely considered AI's founding event. Attendees, including Marvin Minsky and Claude Shannon, aimed to simulate human intelligence using computers. Early successes followed: in 1951, Christopher Strachey wrote a checkers-playing program, and by 1956, Allen Newell and Herbert Simon's Logic Theorist proved mathematical theorems, demonstrating machine reasoning.

The 1960s and 1970s saw optimism tempered by challenges. Programs like ELIZA, a natural language chatbot by Joseph Weizenbaum (1966), and SHRDLU, a language-understanding system by Terry Winograd (1970), showcased AI's potential. However, limited computing power and overambitious goals led to the first "AI winter" in the late 1970s, as funding dried up amid unmet expectations.

The 1980s revived interest with expert systems—software mimicking human expertise in fields like medicine (e.g., MYCIN). Meanwhile, neural networks regained traction, bolstered by backpropagation algorithms. The 1990s marked a shift toward practical applications, exemplified by IBM's Deep Blue defeating chess champion Garry Kasparov in 1997.

The 21st century ushered in an AI renaissance, driven by big data, powerful GPUs, and machine learning breakthroughs. Milestones include Google's DeepMind AlphaGo beating Lee Sedol in 2016 and the rise of large language models like those powering modern chatbots. Today, AI permeates daily life—voice assistants, recommendation systems, autonomous vehicles—building on centuries of human ingenuity, from ancient myths to Turing's vision, into a transformative reality.

1.2.2 How does AI work?


In order to create AI, you need to: define the problem, determine the outcomes, organize the data set, choose the appropriate technology, and then test solutions. If the intended solution does not work, you can continue experimenting to reach the desired outcome.

Below, we'll go through five steps that illustrate how AI works: inputs, processing, outcomes, adjustments, and assessments.

1. Input: Data is first collected from various sources in the form of text, audio, videos, and more. It is sorted into categories, such as those that can be read by the algorithms and those that cannot. You would then create the protocol and criteria for which data will be processed and used for specific outcomes.

2. Processing: Once data is gathered and inputted, the next step is to allow AI to decide what to do with the data. The AI sorts and deciphers the data using patterns it has been programmed to learn, until it recognizes similar patterns in the data being filtered into the system.

3. Outcomes: After the processing step, the AI can use those complex patterns to predict outcomes in customer behavior and market trends. In this step, the AI is programmed to decide whether specific data is a "pass" or "fail"—in other words, does it match previous patterns? That determines outcomes that can be used to make decisions.

4. Adjustments: When data sets are considered a "fail", the AI learns from that mistake, and the process is repeated under different conditions. It may be that the algorithm's rules must be adjusted to suit the data set in question, or that the algorithm needs slight alteration. In this step, you might return to the outcomes step to better align with the current data set's conditions.

5. Assessments: The final step for AI completing an assigned task is assessment. Here, the AI technology synthesizes insights gained from the data set to make predictions based on the outcomes and adjustments. Feedback generated from the adjustments can be incorporated into the algorithm before moving forward.

1.2.3 Applications of AI
AI has numerous applications across various industries and domains, including virtual assistants, recommendation systems, fraud detection, autonomous vehicles, and healthcare, impacting everything from daily tasks to complex decision-making.

• Virtual Assistants: AI powers virtual assistants like Siri, Alexa, and Google Assistant, enabling voice-controlled tasks and information retrieval.
• Recommendation Systems: AI algorithms are used to suggest products, movies, music, and other content based on user preferences.
• Chatbots and Customer Service: AI-powered chatbots provide 24/7 customer support, answering questions and resolving issues.
• Image and Facial Recognition: AI is used in security systems, social media, and other applications to identify individuals and objects.
• Natural Language Processing (NLP): AI enables computers to understand, interpret, and generate human language, facilitating communication and information access.
• Healthcare: AI assists in medical diagnoses, accelerates drug discovery, enables personalized medicine, and optimizes hospital management.
• Finance: AI detects fraud, powers algorithmic trading, and provides personalized financial advice.
• Transportation: AI enables autonomous vehicles and optimizes traffic flow.
• Manufacturing: AI predicts equipment failures and improves quality control.
• Agriculture: AI optimizes crop yields and enables robotic farming.
• Education: AI personalizes learning and automates grading.
• Cybersecurity: AI detects threats and analyzes malware.
• Robotics: AI automates industrial tasks and aids in exploration.
• Marketing: AI creates content and enables targeted advertising.
1.3 Turing Test and Rational Agent approaches to AI

In AI, the Turing test and Rational agent approaches represent distinct methods for achieving intelligence. The Turing test focuses on mimicking human-like conversational ability, while the rational agent approach emphasizes acting to achieve the best possible outcome.

1.3.1 Turing Test Approach


The Turing test approach centers on evaluating a machine's capacity to display intelligent behavior that is virtually indistinguishable from that of a human during a natural language conversation. This method involves a human interrogator engaging in a text-based chat with both a human and a machine, without knowing which is which. The machine is deemed intelligent if the interrogator cannot consistently tell it apart from the human participant. The primary goal of this test is to develop machines capable of convincingly simulating human intelligence, thereby challenging the boundaries between artificial and human cognition.

Example of Turing test approach: For the Turing Test, consider a scenario where a human interrogator engages in a text-based conversation with two participants: one a human and the other a chatbot, such as an advanced version of a language model. The interrogator asks a variety of questions—ranging from casual topics like favorite movies to complex queries about emotions or abstract concepts. If the chatbot responds with answers that are witty, contextually appropriate, and emotionally nuanced, and the interrogator cannot reliably distinguish it from the human, the chatbot would pass the Turing Test. A real-world example could be a customer service chat where a user interacts with an AI like "Alex" to resolve a billing issue, unaware that Alex isn't human due to its natural, helpful responses.

1.3.2 Rational Agent Approach


The Rational Agent Approach focuses on designing systems that operate rationally to accomplish specific objectives, guided by their understanding of the environment and predefined goals. In this method, rational agents actively perceive their surroundings, leveraging this information to determine the most effective course of action. They then execute decisions aimed at maximizing their performance or utility, ensuring their actions align with the intended outcomes. The ultimate goal of this approach is to create AI systems capable of making optimal or near-optimal decisions across a diverse range of scenarios, enhancing their efficiency and adaptability in real-world applications.

Types of Rational Agents:

1. Simple Reflex Agents: These agents act based on current perceptions without considering past experiences.
2. Model-Based Reflex Agents: These agents maintain a model of the world and use it to make decisions, considering past perceptions.
3. Goal-Based Agents: These agents are driven by specific goals and use search algorithms and planning to achieve them.
4. Utility-Based Agents: These agents aim to maximize their utility or satisfaction by making decisions based on their preferences.
5. Learning Agents: These agents can learn from their experiences and improve their performance over time.

Example of Rational Agent approach: The Rational Agent Approach can be exemplified by a self-driving car navigating a busy city. The car, acting as a rational agent, perceives its environment through sensors like cameras and radar, detecting traffic lights, pedestrians, and other vehicles. Based on this input and its pre-programmed goal of safely reaching a destination, it evaluates options—such as slowing down for a pedestrian or switching lanes to avoid traffic—and selects the action that optimizes safety and efficiency. Another example is a smart thermostat, which monitors room temperature and user preferences, then adjusts heating or cooling to maintain comfort while minimizing energy use, making near-optimal decisions based on its objectives.

1.4 Introduction to Intelligent Agents


Intelligent agents are entities, either software- or hardware-based, that perceive their environment through sensors or inputs and take actions to achieve specific goals based on that perception. They are designed to exhibit some level of intelligence, such as reasoning, learning, or problem-solving, and can range from simple to highly complex systems.

The structure of an intelligent agent is formed by the following components (a minimal code sketch of the resulting perceive-decide-act cycle follows the list):

1. Perception: This component allows the agent to receive information from its environment through sensors, which could be cameras, microphones, or other input devices.

2. Decision-Making: This is where the agent processes the perceived information and uses its knowledge and algorithms to determine the best course of action.

3. Action: This component enables the agent to interact with its environment by executing the chosen actions, which could be controlling actuators, sending messages, or modifying data.

4. Memory: Some agents also incorporate a memory system to store and retrieve information, allowing them to learn from past experiences and make more informed decisions in the future.

5. Learning: Some agents are designed to learn and improve their performance over time through techniques like reinforcement learning, where they learn by trial and error and receive feedback in the form of rewards or penalties.
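
To make these components concrete, here is a minimal Python sketch of the resulting perceive-decide-act cycle. The ThermostatAgent class, its 0.5-degree tolerance, and the sample readings are illustrative assumptions for this sketch, not part of the syllabus.

# A toy agent illustrating the perception, decision-making, action,
# and memory components described above (illustrative assumptions).

class ThermostatAgent:
    """Toy agent: perceives temperature, decides, then acts."""

    def __init__(self, target_temp: float):
        self.target_temp = target_temp   # predefined goal
        self.history = []                # simple memory component

    def perceive(self, sensor_reading: float) -> float:
        self.history.append(sensor_reading)   # memory: store the percept
        return sensor_reading

    def decide(self, temp: float) -> str:
        # Decision-making: compare the percept against the goal.
        if temp < self.target_temp - 0.5:
            return "heat"
        if temp > self.target_temp + 0.5:
            return "cool"
        return "idle"

    def act(self, action: str) -> None:
        print(f"actuator command: {action}")  # action: affect the environment

agent = ThermostatAgent(target_temp=22.0)
for reading in [20.1, 21.8, 23.4]:
    agent.act(agent.decide(agent.perceive(reading)))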

Behaviour of Intelligent Agent

• Autonomy: Intelligent agents operate independently, making decisions and taking actions without constant human intervention. They can function without direct human control, allowing them to adapt to dynamic environments.
• Reactivity: They can perceive changes in their environment and respond quickly and appropriately. This allows them to react to real-time conditions and adjust their actions accordingly.
• Goal-Orientation: Every action an intelligent agent takes is driven by specific objectives. They are designed to achieve predefined goals, whether it's navigating a route, providing customer service, or making financial decisions.
• Adaptability: Intelligent agents can learn from their experiences and improve over time. This allows them to refine their strategies and approaches based on what works and what doesn't, leading to continuous improvement in their performance.
• Perception: Intelligent agents use sensors to gather information about their environment. These sensors can include cameras, microphones, GPS, and other devices that provide the agent with data about its surroundings.

Figure 1.2: A simple Intelligent Agent
Types of Intelligent Agents:

1. Simple Reflex Agents: These agents respond to immediate data and predefined rules, making them suitable for simple tasks.
2. Model-Based Agents: These agents maintain an internal model of the world and use it to predict future states and make decisions.
3. Goal-Based Agents: These agents are designed to achieve specific goals, planning their actions to reach those goals.
4. Utility-Based Agents: These agents aim to maximize overall utility, making decisions that lead to the most desirable outcomes.
5. Learning Agents: These agents can learn from their experiences and improve their performance over time.

1.4.1 Environment
The environment is everything outside the agent that it can perceive and act upon. It is the context in which the agent functions, including physical or virtual elements, other agents, and the rules governing the interactions.

Properties of Environment

• Discrete / Continuous: If there are a limited number of distinct, clearly defined states of the environment, the environment is discrete (for example, chess); otherwise it is continuous (for example, driving).
• Observable / Partially Observable: If it is possible to determine the complete state of the environment at each time point from the percepts, it is observable; otherwise it is only partially observable.
• Static / Dynamic: If the environment does not change while an agent is acting, then it is static; otherwise it is dynamic.
• Single agent / Multiple agents: The environment may contain other agents, which may be of the same or a different kind as the agent itself.
• Accessible / Inaccessible: If the agent's sensory apparatus can have access to the complete state of the environment, then the environment is accessible to that agent.
• Deterministic / Non-deterministic: If the next state of the environment is completely determined by the current state and the actions of the agent, then the environment is deterministic; otherwise it is non-deterministic.
• Episodic / Non-episodic: In an episodic environment, each episode consists of the agent perceiving and then acting. The quality of its action depends just on the episode itself; subsequent episodes do not depend on the actions in previous episodes. Episodic environments are much simpler because the agent does not need to think ahead.

Examples of Intelligent Agents:

• Robots: Robots can be seen as physical intelligent agents, with sensors, actuators, and control systems that allow them to interact with the physical world.
• Software Agents: Software agents can perform tasks like managing email, filtering information, or interacting with users through chatbots.
• Self-Driving Cars: These are complex agents that use sensors, cameras, and algorithms to navigate and make decisions on the road.
• Financial Trading Bots: These agents can autonomously trade stocks and other assets based on market data and algorithms.

Chapter 2

UNIT 2 PROBLEM SOLVING AND SEARCHING TECHNIQUES

2.1 Introduction to AI problems


In AI, problems can be considered complex based on both time and space complexity, where time complexity relates to the execution time of an algorithm, and space complexity relates to its memory usage. AI problems often involve balancing time and space complexity to find efficient solutions, especially when dealing with large datasets and complex tasks. These considerations include:

2.1.1 Time Complexity


• Definition: Time complexity refers to how the execution time of an algorithm grows as the input size increases.
• Significance: A high time complexity means the algorithm will take longer to run as the problem size grows, potentially becoming impractical for large inputs.
• Examples:
  – Linear Time (O(n)): The time taken grows proportionally to the input size (e.g., searching for an element in an unsorted list).
  – Quadratic Time (O(n^2)): The time taken grows proportionally to the square of the input size (e.g., comparing every element in a list to every other element).
  – Exponential Time (O(2^n)): The time taken grows exponentially with the input size (e.g., brute-force search in a large space).

2.1.2 Space Complexity

• Definition: Space complexity refers to the amount of memory space an algorithm uses as the input size increases.
• Significance: A high space complexity means the algorithm requires a lot of memory, which can be a problem, especially for large datasets or limited hardware resources.
• Examples:
  – Constant Space (O(1)): The memory usage remains constant regardless of the input size (e.g., storing a few variables).
  – Linear Space (O(n)): The memory usage grows proportionally to the input size (e.g., storing a copy of the input data).
  – Quadratic Space (O(n^2)): The memory usage grows proportionally to the square of the input size (e.g., storing a matrix).
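
To make the trade-off above concrete, here is a small, illustrative Python comparison (the function names are our own): both functions solve the same duplicate-detection task, one in O(n^2) time with O(1) extra space, the other in O(n) time bought at the cost of O(n) extra space.

# Illustrative only: two implementations of the same task with
# different time/space complexity profiles.

def has_duplicates_quadratic(items):
    """O(n^2) time, O(1) extra space: compare every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    """O(n) time, O(n) extra space: trade memory for speed."""
    seen = set()                 # extra memory grows with the input
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

print(has_duplicates_quadratic([3, 1, 4, 1]))  # True
print(has_duplicates_linear([3, 1, 4]))        # False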

2.1.3 AI Problem Examples

• Combinatorial Explosion: Many AI problems involve searching through a vast space of possible solutions, leading to exponential time and space complexity.
• Image Recognition: Training and running deep learning models for image recognition can require massive amounts of data and computational resources, leading to high time and space complexity.
• Natural Language Processing: Processing large amounts of text data and training models to understand and generate human language can also be computationally expensive.
• Game Playing: AI algorithms for playing games like chess or Go often involve searching through a large state space, leading to high time complexity.

2.2 Problem Space

• Definition: The problem space is the abstract representation of a problem, including all possible states and actions that can be taken to move from the initial state to a goal state.
• Representation: It is often modeled as a graph where nodes represent states, and edges represent operators or actions that transform one state into another.
• Purpose: Understanding the problem space helps in designing effective AI strategies for problem-solving.

2.3 Solution Space

• Definition: The solution space refers to the set of all possible solutions or configurations that can be derived from a problem's parameters.
• Relationship to Problem Space: The solution space is a subset of the problem space, containing only the states that represent valid solutions.
• Importance: Identifying the solution space helps in evaluating the effectiveness of different AI strategies and choosing the most appropriate one.

2.4 Problem Characteristics


AI problems typically involve more complexity and unpredictability than conventional programming tasks. Understanding these characteristics is essential for designing better AI solutions.

1. Complexity: AI problems are generally harder than standard computational tasks, for two reasons: AI systems rely on complex algorithms, and they must process vast data sets.

2. Uncertainty: Unlike many traditional algorithms, which assume complete and certain information, AI systems must make probabilistic predictions and decisions under uncertainty and incomplete information.

3. Adaptability: The data an AI system learns from, and the environment it operates in, change over time. Because of this dynamic nature, developers must build flexible algorithms that can learn and adapt.

4. Goal-oriented Design: AI systems are built to achieve specific goals, whether sorting data, translating language, or recognizing faces, and their design is oriented around reaching those goals.

2.5 Control Strategies

Control strategies guide AI systems in navigating complex problem spaces and making decisions, ultimately leading to solutions or achieving desired outcomes.

2.5.1 Two requirements for a good control strategy

1. A control strategy should cause motion: Each rule or strategy applied should cause motion, because a strategy that produces no motion will never lead to a solution. Motion refers to a change of state; if the state never changes, there is no movement from the initial state and the problem will never be solved.

2. A control strategy should be systematic: Even though a strategy may cause motion, if it does not follow some systematic approach we are likely to reach the same state several times before finding the solution, which increases the number of steps. Taking care of only the first requirement, we may pass through particular useless sequences of operators several times. Being systematic implies a need for global motion (over the course of several steps) as well as local motion (over the course of a single step).

2.6 Breadth First Search

BFS (Breadth-First Search) is an algorithm used to traverse or search through graphs and trees. It explores all neighbors of a node before moving to the next level.

BFS is a traversing algorithm where traversal starts from a selected node (the source or starting node) and proceeds through the graph layerwise: first the neighbour nodes (nodes directly connected to the source node) are explored, then the next-level neighbours, and so on.

This makes BFS an effective approach for finding the shortest path in unweighted graphs or for performing an exhaustive search across graph structures. BFS operates using a queue data structure, ensuring that nodes are processed in the order they are discovered. This traversal guarantees that the shortest path from the source to any other node is always found in unweighted graphs.

This systematic approach highlights BFS's ability to explore all possibilities exhaustively, making it a cornerstone algorithm in AI and many other fields. Breadth-First Search (BFS) has several defining characteristics that make it an essential algorithm for graph traversal and problem-solving:

• Completeness: BFS guarantees to find a solution if one exists, provided the graph is finite. This property is particularly useful in AI applications like solving puzzles or navigating mazes.
• Optimality: BFS is optimal for unweighted graphs, ensuring that the shortest path from the source node to any other reachable node is always found.
• Time Complexity: The time complexity of BFS is O(V + E), where V is the number of vertices and E is the number of edges in the graph. This reflects the traversal of every node and edge once.
• Space Complexity: BFS requires O(V) space to store nodes in the queue during traversal. This can become a limitation for graphs with a large number of vertices.

Figure 2.1: Breadth First Algorithm

Algorithm 1 Breadth-First Search (BFS)

1: Input: Graph G(V, E), Source node s ▷ Graph with vertices and edges, starting node s
2: Output: BFS traversal order
3: Create an empty queue Q ▷ Queue to manage traversal order
4: Enqueue s into Q ▷ Start with the source node
5: Mark s as visited ▷ Ensure we don't visit s again
6: while Q is not empty do ▷ Continue while there are nodes to explore
7:   Dequeue a node u from Q ▷ Retrieve the front node
8:   Process u ▷ Perform necessary operation on node u
9:   for each unvisited neighbor v of u do ▷ Explore all adjacent nodes
10:    Mark v as visited ▷ Ensure v is not revisited
11:    Enqueue v into Q ▷ Add v to queue for further exploration
12:  end for
13: end while
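
As a companion to Algorithm 1, here is a brief runnable Python version. The adjacency-list dictionary used to represent the graph is our own assumption; the pseudocode above does not fix any particular representation.

# A direct Python rendering of Algorithm 1 on an adjacency-list graph.
from collections import deque

def bfs(graph: dict, source):
    """Return the BFS traversal order starting from `source`."""
    visited = {source}            # mark the source as visited
    queue = deque([source])       # queue manages the traversal order
    order = []
    while queue:                  # continue while nodes remain to explore
        u = queue.popleft()       # dequeue the front node
        order.append(u)           # "process" u
        for v in graph.get(u, []):      # explore all adjacent nodes
            if v not in visited:
                visited.add(v)          # ensure v is not revisited
                queue.append(v)         # enqueue v for further exploration
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']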

2.7 Depth First Search

Depth First Search (DFS) is a straightforward yet powerful algorithm with distinct characteristics that make it suitable for various graph traversal and search tasks in Artificial Intelligence. By exploring as far as possible along each branch before backtracking, DFS mimics how humans often approach puzzles or games.

Search algorithms like DFS are essential in AI, helping machines navigate decision trees, detect cycles in graphs, or solve complex problems modeled as graphs. Its simplicity and efficiency make DFS a widely used technique in AI applications such as navigation systems, network analysis, and game-solving strategies.

Figure 2.2: Depth First Algorithm

The key characteristics of DFS:

1. Stack-Based Approach: DFS uses a stack to manage nodes during traversal. This stack can be explicit (using a data structure like a list) or implicit (using recursion, where the system call stack keeps track of visited nodes). Example: if visiting nodes A → B → C, the stack maintains the nodes in this order until backtracking occurs.

2. Depth-First Nature: The algorithm explores one branch of the graph as deeply as possible before backtracking to explore other branches. This approach is efficient for problems where the solution is located deep in the graph, such as puzzle-solving.

3. Time Complexity: O(V + E), where V is the number of vertices (nodes) and E is the number of edges. Each vertex and edge is visited once during the traversal.

4. Space Complexity: O(V). The maximum space required corresponds to the depth of the recursion stack or the size of the explicit stack. In the worst case, space consumption increases in graphs with deep, linear structures.

5. Unweighted Graph Traversal: DFS does not consider edge weights, making it ideal for simple traversal tasks but unsuitable for shortest-path problems in weighted graphs.
Algorithm 2 Depth-First Search (DFS)

1: Input: Graph G(V, E), Source node s ▷ Graph with vertices and edges, starting node s
2: Output: DFS traversal order
3: Create an empty stack S ▷ Stack to manage traversal order
4: Push s into S ▷ Start with the source node
5: Mark s as visited ▷ Ensure we don't visit s again
6: while S is not empty do ▷ Continue while there are nodes to explore
7:   Pop a node u from S ▷ Retrieve the most recent node
8:   Process u ▷ Perform necessary operation on node u
9:   for each unvisited neighbor v of u do ▷ Explore all adjacent nodes
10:    Mark v as visited ▷ Ensure v is not revisited
11:    Push v into S ▷ Add v to stack for further exploration
12:  end for
13: end while
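
Below is a matching Python sketch of Algorithm 2 with an explicit stack, using the same adjacency-list convention assumed in the BFS example. As in the pseudocode, nodes are marked visited when pushed.

# An explicit-stack Python version of Algorithm 2.
def dfs(graph: dict, source):
    """Return a DFS traversal order starting from `source`."""
    visited = {source}
    stack = [source]              # explicit stack instead of recursion
    order = []
    while stack:
        u = stack.pop()           # take the most recently added node
        order.append(u)           # "process" u
        for v in graph.get(u, []):
            if v not in visited:  # mark before pushing, as in Algorithm 2
                visited.add(v)
                stack.append(v)   # push v for deeper exploration
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(dfs(graph, "A"))  # ['A', 'C', 'D', 'B']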

2.8 A* algorithm
The A* algorithm is a widely used pathfinding algorithm in AI, designed to find the most efficient route between two points. It improves on earlier algorithms like Dijkstra's Algorithm by incorporating a heuristic that estimates the cost of reaching the goal from any given node, making it more efficient in terms of computational time.

Unlike Breadth-First Search (BFS) or Depth-First Search (DFS), which explore all nodes indiscriminately, A* prioritizes nodes that are likely to lead to the goal by balancing two factors:

1. g(n): The actual cost of reaching node n from the start.
2. h(n): The heuristic estimate of the cost to reach the goal from node n.

This combination, f(n) = g(n) + h(n), allows A* to be both optimal and efficient, making it the go-to algorithm for applications that require rapid pathfinding with high accuracy.
Algorithm 3 A* Search Algorithm

1: Input: Graph G(V, E), Start node s, Goal node g, Heuristic function h
2: Output: Shortest path from s to g
3: Create an empty priority queue Q ▷ Stores nodes to be explored, ordered by cost
4: Enqueue s into Q with priority f(s) = g(s) + h(s) ▷ Initialize with start node cost
5: Create an empty map cameFrom ▷ Stores path history
6: Create a map gScore with default value ∞, set gScore[s] = 0 ▷ Cost from start to each node
7: Create a map fScore with default value ∞, set fScore[s] = h(s) ▷ Estimated total cost
8: while Q is not empty do ▷ Continue while there are nodes to explore
9:   Dequeue node u from Q with the lowest fScore ▷ Node with lowest estimated cost
10:  if u = g then ▷ If goal is reached, reconstruct the path
11:    Return Reconstructed path from cameFrom
12:  end if
13:  for each neighbor v of u do ▷ Explore all adjacent nodes
14:    tentativeG = gScore[u] + d(u, v) ▷ Calculate tentative cost to reach v
15:    if tentativeG < gScore[v] then ▷ If found a cheaper path to v
16:      cameFrom[v] = u ▷ Record best path to v
17:      gScore[v] = tentativeG ▷ Update cost to reach v
18:      fScore[v] = gScore[v] + h(v) ▷ Update estimated total cost
19:      if v is not in Q then ▷ Ensure v is considered for exploration
20:        Enqueue v into Q with priority fScore[v]
21:      end if
22:    end if
23:  end for
24: end while
25: Return Failure ▷ No path found to the goal
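
A compact Python rendering of Algorithm 3 is sketched below on a 4-connected grid, using Manhattan distance as the heuristic h(n). The grid world, unit edge costs, and function names are illustrative assumptions, not prescribed by the notes.

# Algorithm 3 in Python on a small grid with unit edge costs.
import heapq

def a_star(start, goal, walls, width, height):
    """Return the shortest path from start to goal as a list of cells."""
    def h(cell):  # admissible heuristic: Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), start)]        # priority queue ordered by f = g + h
    came_from = {}                         # stores path history
    g_score = {start: 0}                   # cost from start to each node
    while open_heap:
        _, u = heapq.heappop(open_heap)    # node with the lowest f-score
        if u == goal:                      # goal reached: reconstruct the path
            path = [u]
            while u in came_from:
                u = came_from[u]
                path.append(u)
            return path[::-1]
        x, y = u
        for v in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if not (0 <= v[0] < width and 0 <= v[1] < height) or v in walls:
                continue                   # skip cells outside the grid or blocked
            tentative_g = g_score[u] + 1   # unit edge cost d(u, v)
            if tentative_g < g_score.get(v, float("inf")):
                came_from[v] = u           # record the best path to v
                g_score[v] = tentative_g
                heapq.heappush(open_heap, (tentative_g + h(v), v))
    return None                            # no path found to the goal

print(a_star((0, 0), (2, 2), walls={(1, 1)}, width=3, height=3))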

2.9 Constraint Satisfaction Problem
A Constraint Satisfaction Problem (CSP) consists of three primary elements: variables, domains, and constraints. Each variable represents an unknown element that must be assigned a value from its respective domain, which is a predefined set of allowable values. The constraints define relationships between variables, specifying which combinations of values are valid and which are not. The goal of a CSP is to assign values to all variables in such a way that all constraints are satisfied.

CSPs are widely used in AI to solve a variety of problems, from puzzle-solving to resource allocation. For example, the famous Sudoku puzzle is a CSP where the variables are the cells of the grid, the domain is the numbers 1 through 9, and the constraints are that no two cells in the same row, column, or 3×3 subgrid can contain the same number. Another classic example is the map coloring problem, where regions of a map are colored in such a way that no neighboring regions have the same color. CSPs are essential in AI because they provide a structured approach to decision-making and problem-solving, allowing algorithms to focus on finding valid solutions within defined parameters.

Every CSP can be broken down into three essential components: variables, domains, and constraints.

1. Variables: These represent the elements that need to be assigned values. In a scheduling problem, for instance, each task could be considered a variable.
2. Domains: Each variable has a domain, which is the set of possible values it can take. In the scheduling example, the domain of a task might be the available time slots.
3. Constraints: Constraints define the rules that govern the relationships between variables. In scheduling, constraints might dictate that no two tasks can overlap or that certain tasks must be completed before others.

Types of Constraint Satisfaction Problems:

1. Binary CSPs: Binary CSPs are the simplest form of CSPs, where constraints exist between pairs of variables. Each constraint involves exactly two variables, making the problem easier to visualize and solve. An example of a binary CSP is the map coloring problem, where each region on a map must be assigned a color, and the constraint is that no two neighboring regions can share the same color. This problem can be represented as a graph, with nodes representing regions and edges representing constraints between neighboring regions.

2. Non-binary CSPs: Non-binary CSPs involve constraints that apply to more than two variables. For example, in a scheduling problem, a constraint might specify that three tasks must be scheduled in different time slots. Non-binary CSPs are more complex than binary CSPs because they involve more intricate relationships between variables. Solving non-binary CSPs often requires breaking down the problem into smaller binary subproblems or using specialized algorithms that can handle higher-order constraints.

3. Dynamic CSPs: Dynamic CSPs are problems in which the variables or constraints can change over time. These problems are more flexible and require algorithms that can adapt to changes as they occur. A common example of a dynamic CSP is a real-time scheduling problem, where the availability of resources or the timing of tasks can change during the course of the problem-solving process. Dynamic CSPs are more challenging to solve because they require continuous updating and reevaluation of the solution space.
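
As an illustration of how a CSP solver proceeds, here is a minimal backtracking search for the map coloring problem described above. The three-region map and the color set are our own example data; the notes do not prescribe a specific solving algorithm.

# A minimal backtracking solver for the map-coloring CSP.
def solve_map_coloring(regions, neighbors, colors, assignment=None):
    """Assign a color to every region so that no two neighbors match."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(regions):    # all variables assigned: done
        return assignment
    region = next(r for r in regions if r not in assignment)  # pick a variable
    for color in colors:                   # try each value in the domain
        # Constraint check: no already-colored neighbor uses this color.
        if all(assignment.get(n) != color for n in neighbors[region]):
            assignment[region] = color
            result = solve_map_coloring(regions, neighbors, colors, assignment)
            if result:
                return result
            del assignment[region]         # backtrack on failure
    return None                            # no consistent assignment exists

regions = ["WA", "NT", "SA"]
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
print(solve_map_coloring(regions, neighbors, ["red", "green", "blue"]))
# {'WA': 'red', 'NT': 'green', 'SA': 'blue'}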

2.10 Means-End Analysis

Means-ends analysis (MEA) is a problem-solving technique, particularly used in artificial intelligence, that involves breaking down a complex problem into smaller, manageable subgoals and then finding the "means" or actions to achieve those subgoals and ultimately reach the desired "end" goal.

Key points in means-end analysis:

1. Identify the problem: Start by clearly defining the problem and the desired outcome (the goal).
2. Break down the problem: Divide the problem into smaller, more manageable subgoals.
3. Find the means: For each subgoal, determine the actions or "means" that will help achieve it.
4. Apply the actions: Execute the actions in a logical sequence to reach the final goal.
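
The toy Python sketch below illustrates the core MEA loop: measure the difference between the current state and the goal, then apply the operator (the "means") that reduces that difference the most. The one-dimensional move-to-position problem and the step-size operators are entirely our own illustrative assumptions.

# A toy means-ends analysis loop: repeatedly reduce the difference
# between the current state and the goal (illustrative assumptions).
def means_ends(current, goal, operators):
    """Return a list of operator applications that transforms current into goal."""
    plan = []
    while current != goal:
        difference = goal - current        # identify the remaining gap
        # Choose the "means": the operator that shrinks the gap the most
        # without overshooting the goal.
        step = max((op for op in operators
                    if abs(op) <= abs(difference) and op * difference > 0),
                   key=abs, default=None)
        if step is None:
            return None                    # no operator reduces the difference
        current += step                    # apply the chosen action
        plan.append(step)
    return plan

# Move from position 0 to 13 using steps of size ±1, ±5, ±10.
print(means_ends(0, 13, [1, -1, 5, -5, 10, -10]))  # [10, 1, 1, 1]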

2.11 Introduction to Game Playing
Game playing in artificial intelligence refers to the development of algorithms and models that enable machines to play and excel in games that require decision-making, strategy, and problem-solving. These games serve as an excellent medium for testing AI capabilities since they involve well-defined rules, structured environments, and measurable outcomes.

AI in game playing can be applied to various types of games, including:

1. Single-player games: Games like puzzles or solitaire, where the AI competes against itself or static challenges.
2. Multi-player games: Competitive games such as Chess or Go, where AI plays against a human or another AI.
3. Deterministic games: Games with no randomness, where outcomes are determined by the player's decisions (e.g., Chess).
4. Stochastic games: Games involving randomness or uncertainty, like card games and dice games (e.g., Poker).

Game playing is significant because it mirrors real-world decision-making processes, where outcomes depend on evaluating multiple possibilities, analyzing opponents, and managing limited resources. By solving these challenges, AI systems enhance their ability to process information efficiently, optimize strategies, and adapt to changing environments, making game playing a fundamental area in AI research and development.

2.11.1 Types of Game Playing in Artificial Intelligence


Game playing in AI can be categorized based on the nature of the game's structure and information availability. The key types include:

• Deterministic vs. Stochastic Games: Deterministic games have no randomness; outcomes depend entirely on the players' moves. Examples include Chess and Tic-Tac-Toe. Stochastic games involve randomness or probabilistic events, such as dice rolls or shuffled cards. Examples include Poker and Backgammon.
• Perfect vs. Imperfect Information Games: Perfect information games allow all players to access complete information about the game state at any point, such as Chess and Go. Imperfect information games limit access to certain information, such as the opponent's hand in card games like Poker.
• Zero-Sum Games: These games involve direct competition, where one player's gain is another player's loss. Examples include strategy games like Chess and competitive multiplayer games.

2.11.2 The Role of Search Algorithms in Game Playing


Search algorithms play a fundamental role in AI game playing by enabling systematic exploration of possible moves and outcomes. These algorithms allow AI systems to evaluate game states, predict opponents' moves, and determine the optimal strategy for success.

• Minimax Algorithm: Minimax is widely used in two-player, zero-sum games like Chess. It evaluates all possible moves for both players by assuming the opponent will play optimally, minimizing the player's loss while maximizing gains.
• Alpha-Beta Pruning: An optimization of Minimax, Alpha-Beta Pruning reduces the number of nodes evaluated, improving efficiency. It eliminates branches in the search tree that do not affect the final decision, making it ideal for games with deep decision trees.
• Monte Carlo Tree Search (MCTS): MCTS combines random simulations with statistical analysis to determine the best moves. It is particularly useful in games like Go and complex video games, where the state space is vast and exhaustive search is impractical.

2.12 MiniMax Algorithm


The Mini-Max algorithm is a cornerstone in artificial intelligence, particularly in decision-making processes for two-player games. It plays a crucial role in game theory by allowing AI agents to select optimal moves assuming their opponents are also playing optimally.

This strategy helps minimize the possible losses (for the "min" player) and maximize gains (for the "max" player) in adversarial games such as chess, tic-tac-toe, and go. The algorithm works by evaluating game states and making predictions about future moves.

In essence, the Mini-Max algorithm operates recursively, constructing a game tree where each node represents a possible future state of the game. The AI player, using the Mini-Max strategy, works backward from the terminal nodes (end game states) to make decisions that maximize its chances of winning.

For example, in chess, the AI simulates potential moves, evaluating both its own and its opponent's best strategies. The outcome of this process helps the AI decide the best move to make at any given point. Given the rising complexity of AI applications and the importance of optimization in these decision-making processes, understanding the inner workings of the Mini-Max algorithm has become essential in fields beyond gaming, such as economics and robotics.

The Mini-Max algorithm is a recursive strategy commonly used in two-player adversarial games. It helps a player make optimal decisions by evaluating the outcomes of possible future moves. Key points in the Mini-Max algorithm:

• Game Tree Construction: The root node represents the current state of the game, and each branch represents a possible move made by either player. The two players are generally labeled "MAX" and "MIN." MAX tries to maximize the score (selecting the best possible moves), while MIN aims to minimize it (counteracting MAX's strategy).

• Terminal State Evaluation: Once the tree is constructed, the algorithm evaluates the terminal nodes. These are the final possible states of the game where a winner is decided, or the game ends in a draw. At this stage, the algorithm assigns numerical values to each terminal node:
  1. A win for MAX results in a positive score (e.g., +1).
  2. A win for MIN results in a negative score (e.g., -1).
  3. A draw results in a neutral score (e.g., 0).
This process is known as the static evaluation function. It helps to determine how favorable a game state is for the MAX player.

• Backpropagation of Values: After evaluating the terminal nodes, the algorithm starts to backtrack through the game tree. During this phase, it computes the optimal score for each intermediate node by considering its child nodes:
  1. If the current player is MAX, the algorithm selects the maximum value from the child nodes, assuming MAX will make the best possible move to increase the score.
  2. If the current player is MIN, the algorithm selects the minimum value from the child nodes, assuming MIN will make a move that minimizes MAX's advantage.
This backpropagation continues until the algorithm reaches the root node, ensuring that each player chooses the best possible move at every stage.

• Depth-based Exploration: Mini-Max operates as a depth-first search algorithm, meaning it explores one branch of the game tree all the way down to a terminal node before backtracking. This process repeats for all branches, which ensures that every potential outcome is considered before making a decision. However, the depth of exploration can be limited by factors such as computational resources and time constraints. Many implementations of Mini-Max use depth-limiting to prevent the algorithm from going too deep, as the number of game states increases exponentially with depth.

• Optimal Move Selection: Once the algorithm has propagated values back to the root node, it chooses the move associated with the highest value for MAX. This move is considered optimal because it guarantees the best possible outcome for MAX, assuming that MIN is also playing optimally. The recursive nature of Mini-Max ensures that both players are playing their best strategies, making the game as competitive as possible.

• Alpha-Beta Pruning (Optimization): Although Mini-Max is effective, it can be computationally expensive. To optimize the process, a technique called Alpha-Beta pruning can be applied. Alpha-Beta pruning reduces the number of nodes the algorithm needs to evaluate by cutting off branches that won't affect the final decision. This technique helps the algorithm perform faster without losing accuracy.

Pseudo-code for the Mini-Max Algorithm:

Algorithm 4 Minimax Algorithm

1: Input: Node n, Depth d, Maximizing player flag isMax
2: Output: Optimal move value
3: if d = 0 or n is a terminal node then ▷ Base case: depth limit reached or game ends
4:   Return Evaluation of n ▷ Return heuristic value
5: end if
6: if isMax then ▷ Maximizing player's turn
7:   maxEval = −∞ ▷ Initialize with worst possible value
8:   for each child c of n do ▷ Explore all possible moves
9:     eval = Minimax(c, d − 1, false) ▷ Recursive call for minimizing player
10:    maxEval = max(maxEval, eval) ▷ Choose the maximum value
11:  end for
12:  Return maxEval
13: else ▷ Minimizing player's turn
14:  minEval = ∞ ▷ Initialize with worst possible value
15:  for each child c of n do ▷ Explore all possible moves
16:    eval = Minimax(c, d − 1, true) ▷ Recursive call for maximizing player
17:    minEval = min(minEval, eval) ▷ Choose the minimum value
18:  end for
19:  Return minEval
20: end if
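
Algorithm 4 can be rendered directly in Python over an explicit game tree. The list-based tree encoding below (leaves are numeric evaluations, internal nodes are lists of children) is an illustrative assumption of this sketch.

# A Python rendering of Algorithm 4 over an explicit game tree.
def minimax(node, depth, is_max):
    """Return the optimal value of `node` for the player to move."""
    if depth == 0 or not isinstance(node, list):   # terminal node or depth limit
        return node                                # static evaluation value
    if is_max:                                     # MAX picks the largest child value
        return max(minimax(child, depth - 1, False) for child in node)
    return min(minimax(child, depth - 1, True) for child in node)  # MIN picks smallest

# A depth-2 tree: MAX moves at the root, MIN at the middle level.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, 2, True))  # MIN reduces the branches to 3, 2, 0; MAX picks 3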

2.13 Alpha-Beta Pruning Problems
Alpha-beta pruning is an optimization technique for the Mini-Max algorithm that reduces the number of nodes evaluated by pruning branches that do not need to be explored.

The minimax algorithm is a decision-making process commonly used in two-player, zero-sum games like chess. In such games, one player aims to maximize their score while the other seeks to minimize it.

The minimax algorithm operates by recursively exploring all possible game states (represented as a tree structure) and assigning values to the leaf nodes based on the potential outcomes of the game. The algorithm then propagates these values up the tree to find the optimal move. However, as the complexity of the game increases, the number of possible states grows exponentially, leading to very high computational costs.

This is where Alpha-Beta Pruning becomes crucial. It reduces the number of nodes the minimax algorithm needs to evaluate by "pruning" branches that cannot influence the final decision. By eliminating unnecessary computations it simplifies the decision-making process, enabling faster and more efficient evaluations. As a result, Alpha-Beta Pruning is practical for real-time applications, such as game-playing AI, where speed and efficiency are critical.

The key idea behind Alpha-Beta Pruning is to avoid evaluating branches of the game tree that cannot influence the final decision, based on the values already discovered during the search. It achieves this using two values: Alpha and Beta.

• Alpha represents the best (highest) value that the maximizing player (usually the AI) can guarantee so far. It acts as a lower bound. The initial value of alpha is −∞.
• Beta represents the best (lowest) value that the minimizing player (the opponent) can guarantee so far. It acts as an upper bound. The initial value of beta is +∞.
• As the AI explores the tree, it keeps track of the Alpha and Beta values. When exploring a node, it compares the node's value against these bounds.
• If, at any point, Alpha becomes greater than or equal to Beta, it means the current branch will not affect the final decision, because the opponent will avoid this path in favor of a better one. As a result, this branch is pruned, and the algorithm moves on to the next branch.
• This process allows the algorithm to skip large parts of the tree, significantly reducing the number of nodes to be evaluated.

Algorithm 5 Alpha-Beta Pruning Algorithm

1: Input: Node n, Depth d, Alpha α, Beta β, Maximizing player flag isMax
2: Output: Optimal move value
3: if d = 0 or n is a terminal node then ▷ Base case: depth limit reached or game ends
4:   Return Evaluation of n ▷ Return heuristic value
5: end if
6: if isMax then ▷ Maximizing player's turn
7:   maxEval = −∞ ▷ Initialize with worst possible value
8:   for each child c of n do ▷ Explore all possible moves
9:     eval = AlphaBeta(c, d − 1, α, β, false) ▷ Recursive call for minimizing player
10:    maxEval = max(maxEval, eval) ▷ Choose the maximum value
11:    α = max(α, eval) ▷ Update alpha (best option so far)
12:    if β ≤ α then ▷ Beta cutoff condition
13:      Break ▷ Prune the remaining branches
14:    end if
15:  end for
16:  Return maxEval
17: else ▷ Minimizing player's turn
18:  minEval = ∞ ▷ Initialize with worst possible value
19:  for each child c of n do ▷ Explore all possible moves
20:    eval = AlphaBeta(c, d − 1, α, β, true) ▷ Recursive call for maximizing player
21:    minEval = min(minEval, eval) ▷ Choose the minimum value
22:    β = min(β, eval) ▷ Update beta (best option so far)
23:    if β ≤ α then ▷ Alpha cutoff condition
24:      Break ▷ Prune the remaining branches
25:    end if
26:  end for
27:  Return minEval
28: end if
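
Here is Algorithm 5 in Python over the same list-based game tree used in the Minimax sketch; the example tree is again our own. On this tree the algorithm prunes the leaves 9 and 7 without evaluating them, yet returns the same value as plain Minimax.

# Algorithm 5 in Python: minimax with alpha-beta cutoffs.
import math

def alpha_beta(node, depth, alpha, beta, is_max):
    """Minimax value of `node`, skipping branches that cannot matter."""
    if depth == 0 or not isinstance(node, list):    # terminal node or depth limit
        return node                                 # static evaluation value
    if is_max:
        best = -math.inf
        for child in node:
            best = max(best, alpha_beta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)                # raise MAX's lower bound
            if beta <= alpha:                       # beta cutoff condition
                break                               # prune the remaining children
        return best
    best = math.inf
    for child in node:
        best = min(best, alpha_beta(child, depth - 1, alpha, beta, True))
        beta = min(beta, best)                      # lower MIN's upper bound
        if beta <= alpha:                           # alpha cutoff condition
            break                                   # prune the remaining children
    return best

tree = [[3, 5], [2, 9], [0, 7]]
print(alpha_beta(tree, 2, -math.inf, math.inf, True))  # 3, same as plain minimax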

Chapter 3

Important portion for MID SEM

CHAPTER 1: Applications of AI, Turing Test, Rational Agent Approach, Introduction to Intelligent Agents
CHAPTER 2: Problem Characteristics, Control Strategies, BFS and DFS
