AI Notes
Definition of AI:
Artificial Intelligence (AI) refers to the development of computer systems capable of performing
tasks that typically require human intelligence. These tasks include learning, reasoning,
problem-solving, perception, language understanding, and decision-making. AI enables
machines to imitate human cognitive functions, enhancing their ability to process data,
recognize patterns, and make informed decisions.
AI is rapidly evolving and is expected to have a transformative impact on various industries and
aspects of daily life. Overall, AI's future involves more autonomous systems, personalized user
experiences, and the integration of AI into nearly every industry.
Intelligent Agents:
Intelligent agents are systems that can act autonomously, perceive their environment, and
respond to changes in order to achieve specific goals. Some key characteristics include:
● Autonomy: The ability to operate without human intervention.
● Perception: Sensing and interpreting the environment to understand the current state of
the world.
● Reactivity: The ability to respond to changes in real-time, adapting actions based on
new information.
● Proactivity: Taking initiative to achieve goals rather than just reacting to stimuli.
● Social Ability: Agents may communicate and collaborate with other agents or humans
to solve complex problems.
Types of agents:
Agents can be grouped into four classes based on their degree of perceived intelligence and
capability:
Simple Reflex Agents: Simple reflex agents ignore the rest of the percept history and act only
on the basis of the current percept. The agent function is based on condition-action rules: if the
condition is true, the action is taken; otherwise it is not. This agent function succeeds only when
the environment is fully observable.
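As a minimal sketch of the condition-action idea, here is a simple reflex agent for the classic two-square "vacuum world" (a hypothetical environment used for illustration, not taken from these notes):

```python
# A simple reflex agent for the two-square vacuum world.
# The percept is (location, status); the agent acts only on this
# current percept, with no memory of past percepts.
def reflex_vacuum_agent(percept):
    """Condition-action rules mapping the current percept to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
```

Because the agent consults only the current percept, it works here only because the environment is fully observable, as the notes state.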
Model-Based Reflex Agents: A model-based agent can work in a partially observable
environment and track the situation. A model-based agent has two important factors:
Model: knowledge about "how things happen in the world"; this is why it is called a model-based
agent.
Internal State: a representation of the current state of the world, based on the percept history,
which the agent updates as it acts.
Goal-Based Agents: Goal-based agents expand on model-based agents by acting to achieve
explicit goals. They consider how their actions will bring them closer to the goal, often using
search and planning to choose among possible action sequences.
Utility-Based Agents: A utility-based agent acts based not only on what the goal is, but also on
the best way to reach that goal. Utility-based agents are useful when there are multiple possible
alternatives and the agent must choose the best action. The term utility describes how "happy"
the agent is with an outcome.
Typical Intelligent Agents:
● Virtual Assistants: Agents like Siri, Alexa, and Google Assistant, which process natural
language commands and perform tasks for users.
● Self-Driving Cars: Autonomous vehicles that perceive their surroundings and make
driving decisions.
● Recommendation Systems: AI systems that suggest products, content, or services to
users based on their preferences, such as Netflix recommendations or Amazon product
suggestions.
● Game Agents: AI-controlled characters or opponents in video games that adapt and
react based on the player’s actions.
This structured approach allows AI to tackle complex problems in fields such as robotics, natural
language processing, and game-playing.
—----------------------------------------------------------------------------------------------------
Unit 2 Problem Solving Methods in AI
AI uses various problem-solving methods to find solutions to complex issues. These methods
involve different search strategies, optimization techniques, and game-playing algorithms.
1. Search Strategies:
Search strategies are used to explore possible solutions to a problem by traversing through a
state space, which consists of all possible configurations or states of the problem.
● Uninformed Search:
These search strategies do not use any domain-specific knowledge (heuristics) and
explore the search space blindly.
○ Breadth-First Search (BFS): Explores all possible solutions level by level.
○ Depth-First Search (DFS): Explores a solution path completely before
backtracking.
○ Uniform Cost Search: Expands the least-cost node, prioritizing paths with lower
cost.
● Informed Search:
These strategies use heuristics or domain-specific knowledge to guide the search,
making it more efficient.
○ A* Algorithm: Combines path cost (g) and heuristic (h) to find the least-cost
solution.
○ Greedy Best-First Search: Chooses paths based on the estimated cost to reach
the goal (heuristic).
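The uninformed and informed strategies above can be sketched in Python. The graph, edge costs, and heuristic values below are made-up assumptions for illustration:

```python
import heapq
from collections import deque

# A tiny hypothetical weighted graph and an (assumed admissible) heuristic
# estimating the cost from each node to the goal G.
graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 6)],
    "B": [("G", 2)],
    "G": [],
}
h = {"S": 4, "A": 3, "B": 2, "G": 0}

def bfs(start, goal):
    """Breadth-first search: explores level by level, ignoring edge costs."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nbr, _ in graph[node]:
            if nbr not in parent:
                parent[nbr] = node
                frontier.append(nbr)
    return None

def a_star(start, goal):
    """A*: always expand the node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

print(bfs("S", "G"))     # ['S', 'A', 'G'] -- fewest edges, not cheapest
print(a_star("S", "G"))  # (['S', 'A', 'B', 'G'], 5) -- least total cost
```

Note how BFS returns a path with the fewest edges, while A* returns the genuinely cheapest path once edge costs are taken into account.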
2. Heuristics:
Heuristics are problem-solving techniques that use approximate methods or rules of thumb to
find solutions more quickly. They are not guaranteed to provide the optimal solution but can
guide the search process in large or complex search spaces.
● Example: In a navigation problem, the straight-line distance between two points can be
used as a heuristic to guide the search.
3. Local Search Algorithms:
Local search algorithms are optimization techniques that explore neighboring states of a current
solution in order to find a better one. These are particularly useful for problems with large state
spaces.
● Hill Climbing:
A local search algorithm that continuously moves towards the direction of increasing
value (higher objective function).
Example: Imagine you’re hiking a mountain. You keep moving up the slope toward the
highest point you can see. If you reach a peak but realize it’s not the tallest mountain,
you might be stuck there instead of finding a higher one nearby.
● Simulated Annealing:
A probabilistic technique that allows moves to worse solutions early on, in order to
escape local optima and explore the solution space more broadly.
Example: Think of a ball rolling over a bumpy surface. Left alone, it settles in the first
dip it reaches (a local optimum). If you shake the surface (add "heat"), the ball can
jump out of shallow dips and reach a deeper one; as the shaking dies down (cooling), it
settles into a better solution overall.
● Genetic Algorithms:
A population-based optimization technique inspired by natural selection, where
candidate solutions evolve over time.
Example: Picture a garden of different plants. You select the healthiest ones,
cross-pollinate them to create new plants, and occasionally introduce some seeds from
different flowers. Over time, the garden gets better as the best traits combine and
evolve.
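The hill-climbing and simulated-annealing ideas above can be sketched on a one-dimensional function with a local and a global maximum; the function and all parameters below are illustrative assumptions:

```python
import math
import random

def f(x):
    # Global maximum near x = 6 (value ~10), local maximum near x = 1 (value ~4).
    return 10 * math.exp(-((x - 6) ** 2) / 2) + 4 * math.exp(-((x - 1) ** 2) / 2)

def hill_climb(x, step=0.1):
    """Greedy ascent: stop as soon as no neighbor improves (can get stuck)."""
    while True:
        best = max((x - step, x, x + step), key=f)
        if best == x:
            return x
        x = best

def simulated_annealing(x, temp=5.0, cooling=0.99, steps=5000):
    """Accept some downhill moves early on, with probability exp(delta/T)."""
    for _ in range(steps):
        x2 = x + random.uniform(-0.5, 0.5)
        delta = f(x2) - f(x)
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = x2
        temp *= cooling  # cool down: downhill moves become rarer over time
    return x

random.seed(0)
print(round(hill_climb(0.0), 1))  # ~1.0: stuck on the local peak
print(round(simulated_annealing(0.0), 1))  # typically near the global peak at 6
```

Starting from x = 0, plain hill climbing stops on the smaller peak near x = 1, while simulated annealing usually escapes it, illustrating exactly the trade-off described above.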
4. Searching with Partial Observations:
In some cases, agents may not have complete information about the environment. They must
make decisions based on limited or partial observations.
● Contingency Planning:
Involves creating plans that take into account the uncertainty in the environment and
provide alternative actions based on observations.
Example:
Imagine a robot tasked with delivering packages in an office building. It knows that
certain areas may be blocked or that people may move around unexpectedly. To handle
these uncertainties, the robot creates a contingency plan that outlines alternative routes
to the destination. If the primary route is blocked, the robot can refer to its contingency
plan and switch to a backup route that leads around the obstacle, ensuring it still
completes the delivery in a timely manner.
● Belief States:
Represent the set of all possible states the agent could be in, given the partial
information. Searching through belief states allows the agent to act optimally with
incomplete knowledge.
Example:
In a game of poker, a player does not have complete information about their opponents’
hands. The player forms a belief state that represents the possible hands their
opponents could be holding based on the cards visible on the table and their previous
betting behavior. For instance, if the player has observed their opponents being
aggressive in betting, they might believe that one of them has a strong hand. This belief
state helps the player decide whether to raise, call, or fold, even with incomplete
knowledge about the actual cards in play.
5. Constraint Satisfaction Problems (CSPs):
Constraint satisfaction problems (CSPs) are problems where the solution must satisfy a set of
constraints or conditions. Examples include scheduling problems and crossword puzzles.
● Constraint Propagation:
A technique where constraints are used to reduce the search space by eliminating
values from the possible domains of variables.
Example:
Imagine a Sudoku puzzle, which is a CSP. The grid has specific rules (constraints) that
dictate how numbers can be placed. Initially, some numbers are filled in, creating
constraints on the remaining empty cells.
For instance, if a row already contains the numbers 1, 2, and 3, those numbers cannot
be placed in any other cells of that row. By applying constraint propagation, you can
eliminate these possibilities from the potential values for other cells in the same row,
column, or box, effectively reducing the search space for a solution.
● Backtracking Search:
A recursive search method where an agent incrementally builds a solution and
backtracks when a constraint is violated.
Example:
Consider the N-Queens problem, where the goal is to place N queens on an N×N
chessboard so that no two queens threaten each other. This is a CSP where the
constraints involve ensuring that no two queens share the same row, column, or
diagonal.
Using backtracking search, the algorithm places a queen in the first row and then
recursively attempts to place queens in subsequent rows. If placing a queen violates any
constraints (e.g., if another queen already occupies that column or diagonal), the
algorithm backtracks by removing the last placed queen and trying the next possible
position. This process continues until all queens are successfully placed on the board or
all possibilities are exhausted.
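The backtracking search described above can be sketched for N-Queens. Each entry in the solution list is the column of the queen in that row, so the row constraint is satisfied by construction and only column and diagonal clashes must be checked:

```python
# Backtracking search for the N-Queens CSP.
def solve_n_queens(n):
    """Place one queen per row; backtrack when a constraint is violated."""
    solution = []  # solution[r] = column of the queen in row r

    def safe(row, col):
        for r, c in enumerate(solution):
            if c == col or abs(c - col) == abs(r - row):
                return False  # same column or same diagonal
        return True

    def place(row):
        if row == n:
            return True  # all queens placed
        for col in range(n):
            if safe(row, col):
                solution.append(col)
                if place(row + 1):  # recurse into the next row
                    return True
                solution.pop()      # backtrack: undo the last placement
        return False

    return solution if place(0) else None

print(solve_n_queens(4))  # [1, 3, 0, 2]
print(solve_n_queens(2))  # None -- no solution exists for n = 2
```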
6. Game Playing:
In competitive environments, AI agents often need to play games against opponents. These
problems involve making decisions that consider both the agent's moves and the opponent's
moves.
● Minimax Algorithm:
A decision-making algorithm used in two-player zero-sum games, where one player’s
gain is another player’s loss. The goal is to minimize the possible loss in the worst-case
scenario.
Example:
In a simple game of tic-tac-toe, the minimax algorithm can be used to determine the
optimal move for a player (say Player X) assuming that Player O is also playing
optimally.
● The algorithm constructs a game tree where each node represents a possible
game state after a player's move.
● For each terminal state (win, lose, or draw), the algorithm assigns a score (e.g.,
+1 for a win, -1 for a loss, and 0 for a draw).
● The algorithm works its way back up the tree, with Player X aiming to maximize
their score (choosing the highest score available) while Player O aims to
minimize it (choosing the lowest score). This back-and-forth process allows
Player X to make the best possible move considering Player O's responses.
● Alpha-Beta Pruning:
An optimization technique applied to the minimax algorithm to ignore or prune branches
that do not affect the final decision, making the search more efficient.
Example:
Continuing with the tic-tac-toe example, let’s say we implement the minimax algorithm
with alpha-beta pruning for optimization.
● As the algorithm explores the game tree, it keeps track of two values: alpha, the
best value that the maximizing player (Player X) can guarantee at that level or
above, and beta, the best value that the minimizing player (Player O) can
guarantee at that level or below.
● If at any point, the algorithm finds that the current node's score is worse than the
alpha or beta thresholds, it can prune that branch and stop exploring further
down that path, as it won’t influence the final decision.
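A compact sketch of minimax with alpha-beta pruning for tic-tac-toe, using the +1 / -1 / 0 scoring described above (X maximizes, O minimizes):

```python
# Minimax with alpha-beta pruning for tic-tac-toe.
# The board is a list of 9 cells, each "X", "O", or " ".
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player, alpha=-2, beta=2):
    """Score of `board` with `player` to move: +1 X win, -1 O win, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0  # draw
    best = -2 if player == "X" else 2
    for i in range(9):
        if board[i] == " ":
            board[i] = player
            score = minimax(board, "O" if player == "X" else "X", alpha, beta)
            board[i] = " "
            if player == "X":
                best = max(best, score)
                alpha = max(alpha, best)
            else:
                best = min(best, score)
                beta = min(beta, best)
            if alpha >= beta:
                break  # prune: this branch cannot change the final decision
    return best

print(minimax(list(" " * 9), "X"))  # 0 -- perfect play from both sides is a draw
```

Running it on the empty board returns 0, confirming the well-known result that tic-tac-toe is a draw under optimal play by both sides.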
7. Optimal Decisions in Games:
In games, agents must aim to make optimal decisions, considering future outcomes and the
actions of opponents. This can be achieved by searching game trees and evaluating the payoff
of various strategies.
8. Stochastic Games:
Stochastic games involve chance events (such as dice rolls) or hidden information, so outcomes
depend on probability as well as on the players' choices.
● Example: In games like poker or backgammon, an agent must handle uncertainty due to
hidden information or random events, requiring a mix of strategy and probabilistic
reasoning.
These problem-solving methods form the core of AI's approach to tackling complex, dynamic
problems across various domains like robotics, natural language processing, and game
development.
—------------------------------------------------------------------
Unit 3 Knowledge Representation in AI
Knowledge representation is a crucial aspect of AI that focuses on how machines can store,
access, and use knowledge to solve complex problems. Various methods and techniques are
used to represent knowledge effectively, making reasoning and decision-making processes
more efficient.
Types of knowledge
1. Declarative Knowledge: Knowing what. Facts or concepts, e.g., "Paris is the capital of
France."
2. Procedural Knowledge: Knowing how. Steps or methods to do something, e.g., "How to
ride a bicycle."
3. Meta-Knowledge: Knowing about knowledge. Understanding when and how to apply
different kinds of knowledge, e.g., knowing when to use certain algorithms in AI.
4. Heuristic Knowledge: Experience-based rules of thumb. Practical tips that work in most
cases, e.g., "Restart the computer to fix many issues."
5. Structural Knowledge: Understanding relationships. How concepts are connected, e.g.,
"A dog is a mammal, and a mammal is an animal."
Knowledge Cycle
The AI knowledge cycle consists of five key components that work together to enable intelligent
behavior: perception, learning, knowledge representation and reasoning, planning, and
execution.
1. First Order Predicate Logic (FOPL):
First Order Predicate Logic (FOPL) is a formal system used in AI to represent facts and
relationships between objects in a domain. It uses constants, variables, predicates, functions,
logical connectives, and quantifiers (∀, "for all"; ∃, "there exists"). For example,
∀x (Bird(x) → CanFly(x)) states that every bird can fly.
FOPL allows for more expressive knowledge representation than propositional logic, as it
includes quantifiers and relationships between objects.
2. Prolog Programming:
Prolog is a programming language mainly used in AI for solving problems with logical
reasoning. It represents knowledge through facts, rules, and queries to make logical
deductions. Here’s a simple breakdown:
1. Facts are statements about the world.
For example: father(john, mary). % John is Mary's father
2. Rules define new relationships in terms of facts and other rules.
For example: parent(X, Y) :- father(X, Y). % X is a parent of Y if X is Y's father
3. Queries ask Prolog whether something can be deduced.
For example: ?- parent(john, mary).
When you run the query, Prolog checks if it can deduce a "yes" or "no" answer based on
the provided facts and rules. Here, it would answer "yes" because father(john, mary)
is defined as a fact, and the rule confirms that John is also a parent of Mary.
3. Unification:
Unification is the process of making two logical expressions identical by finding a suitable
substitution for variables. It is a fundamental concept in Prolog and logic programming, enabling
pattern matching and inference.
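A small illustrative sketch of unification, Prolog-style. The encoding is an assumption of this example: variables are capitalized strings, compound terms are tuples, and the occurs check is omitted for brevity:

```python
# Unification: find a substitution that makes two terms identical.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings until a non-bound term is reached."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution unifying a and b, or None on failure."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

print(unify(("father", "john", "X"), ("father", "Y", "mary")))
# {'Y': 'john', 'X': 'mary'}
```

The two terms become identical under the substitution {Y → john, X → mary}, which is exactly the pattern matching Prolog performs when answering queries.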
4. Forward Chaining:
Forward chaining is a data-driven inference technique where reasoning starts from known facts
and applies rules to infer new facts until a goal is reached.
● Example:
Facts: The person has a fever and a cough.
Rules:
1. If a person has a fever and a cough, they might have the flu.
2. If a person has the flu, they should rest and drink fluids.
Inference: From the facts and rule 1, the person might have the flu; applying rule 2 to
this newly inferred fact yields the conclusion.
Conclusion:
The person should rest and drink fluids.
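The flu example above can be sketched as data-driven rule firing; the fact and rule names are illustrative:

```python
# Forward chaining: start from known facts and fire rules until fixpoint.
rules = [
    ({"fever", "cough"}, "flu"),          # fever and cough -> might have flu
    ({"flu"}, "rest_and_fluids"),         # flu -> rest and drink fluids
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # fire the rule, infer a new fact
                changed = True
    return facts

print(sorted(forward_chain({"fever", "cough"}, rules)))
# ['cough', 'fever', 'flu', 'rest_and_fluids']
```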
5. Backward Chaining:
Backward chaining is a goal-driven inference technique where reasoning starts from the goal
and works backward to find the facts that support the goal.
● Example:
Goal: The person should rest and drink fluids.
Rules:
1. If a person has a fever and a cough, they might have the flu.
2. If a person has the flu, they should rest and drink fluids.
Working backward: the goal holds if the person has the flu (rule 2), and the flu is
supported if the person has a fever and a cough (rule 1). Confirming the fever and
cough establishes the goal.
6. Resolution:
Resolution is a logical rule used to deduce new facts by combining statements. For example, if
we know "Either A or B is true" (A ∨ B) and "A is false" (¬A), we can conclude "B is true."
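A one-step sketch of resolution on clauses represented as sets of literals, matching the (A ∨ B), ¬A ⊢ B example above; the string encoding with "~" for negation is an assumption of this sketch:

```python
# One resolution step over clauses encoded as frozensets of literal strings.
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return all resolvents of two clauses."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            # Drop the complementary pair, keep everything else.
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

print(resolve(frozenset({"A", "B"}), frozenset({"~A"})))  # [frozenset({'B'})]
```

Deriving the empty clause (as when resolving {A} with {~A}) signals a contradiction, which is how resolution-based theorem provers establish proofs by refutation.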
7. Knowledge Representation:
Knowledge representation is concerned with the structures, such as logic, rules, frames, and
semantic networks, used to encode information so that an AI system can reason with it.
8. Ontological Engineering:
Ontological engineering involves creating ontologies, which are formal representations of a set
of concepts within a domain and the relationships between them. It helps AI systems
understand the structure of knowledge in specific domains.
10. Events:
Events represent occurrences or changes in the state of the world over time. In knowledge
representation, events are modeled to describe temporal relations and sequences.
● Example: "John left the house" and "It started raining" are events.
11. Mental Events and Mental Objects:
Mental events and objects represent internal states of agents, such as beliefs, desires,
intentions, or perceptions. This concept is vital for AI systems that need to model human-like
reasoning and behavior.
12. Reasoning Systems for Categories:
AI systems use reasoning techniques to infer relationships or make decisions based on the
categories objects belong to. For instance, an AI system might infer that if an object is a "Bird," it
can "Fly" (unless exceptions exist, like "Penguins").
● Example: Given the category "Bird" and the fact "Tweety is a Bird," the system might
conclude "Tweety can Fly."
13. Default Reasoning:
Default reasoning allows AI systems to make assumptions when complete information is not
available. These assumptions can be overridden if contradictory evidence is found.
● Example: By default, "Birds can fly," but if the AI system learns "Tweety is a penguin," it
overrides the default and concludes "Tweety cannot fly."
These concepts form the foundation of knowledge representation and reasoning in AI, enabling
machines to handle complex information and make informed decisions.
—-----------------------------------
Unit 4 Software Agents in AI
Software agents are autonomous programs that act on behalf of users or other programs,
making decisions and performing tasks in dynamic environments. In AI, intelligent agents are
designed to operate in environments with a certain degree of autonomy, intelligence, and
cooperation. Below are the key concepts related to software agents:
1. Architecture for Intelligent Agents:
The architecture of an intelligent agent defines its internal structure, how it perceives the
environment, and how it decides and acts upon it. Common architectures include reactive
architectures (direct mappings from percepts to actions), deliberative architectures such as
BDI (belief-desire-intention), and hybrid or layered architectures that combine both.
2. Agent Communication:
Intelligent agents need to communicate with other agents to share information, collaborate, or
negotiate. Communication protocols and languages are essential for agent interactions.
● Message Content: The information being shared, which could include data, goals,
beliefs, or queries.
● Message Types: Includes inform, request, propose, reject, and accept, among others.
3. Negotiation and Bargaining:
In multi-agent systems, agents often need to negotiate or bargain to achieve their goals,
especially when their goals conflict or they need to share resources.
● Negotiation: A process where agents engage in a dialogue to reach a mutual
agreement. Agents may have different preferences, objectives, or resources and must
find a solution that benefits all parties.
● Bargaining: A subset of negotiation, where agents try to maximize their own benefit
while offering something of value to the other agent. For instance, one agent may offer
resources or services in exchange for something it needs.
Negotiation strategies:
● Competitive: Agents aim to maximize their own utility, often at the expense of the other
party.
● Cooperative: Agents work together to achieve mutual benefit, focusing on win-win
outcomes.
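A toy sketch of bargaining with a simple concession strategy; the starting offers, reservation values, and concession rate are made-up parameters for illustration:

```python
# Two agents bargain over a price; each concedes a fixed amount per round.
def bargain(seller_min=60, buyer_max=100, concession=5, rounds=20):
    """Seller starts high, buyer starts low; agree when the offers cross."""
    seller_offer, buyer_offer = 120, 40
    for _ in range(rounds):
        if buyer_offer >= seller_offer:  # offers crossed: split the difference
            return (buyer_offer + seller_offer) / 2
        # Concede, but never past one's own reservation value.
        seller_offer = max(seller_min, seller_offer - concession)
        buyer_offer = min(buyer_max, buyer_offer + concession)
    return None  # no agreement within the deadline

print(bargain())  # 80.0 -- the offers meet in the middle
```

With symmetric concessions the agents meet halfway; a more competitive agent would concede more slowly, trading a better price against the risk of hitting the deadline with no deal.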
4. Trust and Reputation in Multi-Agent Systems:
In multi-agent systems, agents often interact with unknown or untrustworthy entities. Trust and
reputation mechanisms help agents decide whom to cooperate with, based on past interactions
and the reputation of other agents.
● Trust: A measure of an agent’s belief in the reliability and honesty of another agent.
Trust is built over time through repeated interactions.
● Reputation: A global measure of an agent’s standing or reliability in the multi-agent
system, typically based on feedback from other agents. An agent with a high reputation
is more likely to be trusted by others.
Mechanisms to build trust and reputation:
● Direct Experience: Agents build trust based on their own interactions and experiences
with other agents.
● Reputation Systems: Agents share feedback about their interactions, allowing others to
make decisions based on the collective reputation of agents in the system.
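A minimal sketch of a reputation system based on shared feedback; the [0, 1] rating scale and the neutral prior of 0.5 for unknown agents are illustrative choices:

```python
from collections import defaultdict

# Agents report a score in [0, 1] after each interaction; an agent's
# reputation is the average of all scores reported about it.
class ReputationSystem:
    def __init__(self):
        self.ratings = defaultdict(list)

    def report(self, rater, target, score):
        """An agent (`rater`) shares feedback about an interaction with `target`."""
        self.ratings[target].append(score)

    def reputation(self, target):
        scores = self.ratings[target]
        return sum(scores) / len(scores) if scores else 0.5  # neutral prior

rs = ReputationSystem()
rs.report("a1", "seller", 1.0)   # successful interaction
rs.report("a2", "seller", 0.8)
rs.report("a3", "cheater", 0.0)  # failed interaction
print(rs.reputation("seller"))   # 0.9
print(rs.reputation("cheater"))  # 0.0
print(rs.reputation("unknown"))  # 0.5
```

Real systems additionally weight recent feedback more heavily and discount reports from low-reputation raters, to resist manipulation.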
—--------------------------------------------
Unit 5 Applications of Artificial Intelligence (AI)
AI has a wide range of applications across various industries and domains, transforming how
tasks are automated, information is processed, and decisions are made. Below are key AI
applications:
1. Language Models:
Language models are AI systems designed to understand, generate, and manipulate human
language. These models form the basis of many natural language processing (NLP)
applications.
2. Information Retrieval:
Information retrieval (IR) refers to the process of obtaining relevant information from large
datasets or documents based on user queries.
3. Information Extraction:
Information extraction (IE) is the process of automatically extracting structured information, such
as entities and their relationships, from unstructured text.
● Tasks in IE:
○ Named Entity Recognition (NER): Identifying entities such as people, locations,
and organizations in text.
○ Relation Extraction: Determining relationships between entities (e.g., “John
works at Company X”).
● Uses: Automatic document processing, legal case analysis, and news summarization.
4. Natural Language Processing (NLP):
NLP involves the interaction between computers and human language. It allows machines to
understand, interpret, and generate human language.
● Applications:
○ Text Analytics: Understanding sentiment, categorizing content.
○ Chatbots and Virtual Assistants: Siri, Alexa, and Google Assistant.
○ Document Summarization: Extracting key points from large text documents.
○ Sentiment Analysis: Identifying emotions in user reviews or social media posts.
5. Machine Translation:
Machine translation (MT) involves automatically translating text or speech from one language to
another using AI.
6. Speech Recognition:
Speech recognition allows machines to understand and process spoken language into text or
actions.
7. Robotics and Intelligent Hardware:
AI plays a significant role in the development of intelligent robots and hardware systems. These
systems are designed to perform tasks autonomously or assist humans.
● Examples:
○ Industrial Robots: Used in manufacturing for precision tasks.
○ Service Robots: Used in healthcare, retail, and customer service industries.
○ Autonomous Vehicles: Self-driving cars powered by AI.
● Functions: Perception, navigation, manipulation, and interaction with the physical world.
8. Perception:
Perception in AI involves interpreting data from sensors (such as cameras, microphones, and
LIDAR) to understand the environment.
● Types of Perception:
○ Computer Vision: Analyzing visual data (e.g., image recognition, facial
recognition).
○ Speech and Audio Perception: Recognizing and interpreting sound patterns.
● Applications: Self-driving cars, surveillance systems, and gesture recognition.
9. Planning:
Planning in AI refers to the ability of agents to set goals and develop sequences of actions to
achieve them efficiently.
● Key Concepts:
○ Pathfinding: Algorithms that determine the optimal path from a starting point to a
goal (e.g., A* algorithm).
○ Task Planning: Identifying and organizing tasks to achieve specific outcomes.
● Applications: Robotics (navigation and manipulation), logistics (scheduling deliveries),
and video games (NPC behavior).
10. Moving:
AI enables robots and autonomous systems to move in complex environments. This includes
both locomotion (for robots) and navigation.
● Technologies Used:
○ Reinforcement Learning: For learning optimal movement strategies.
○ Pathfinding Algorithms: For navigating through obstacles and reaching a
destination.
● Applications:
○ Autonomous Drones: Used for surveillance, delivery, and exploration.
○ Self-Driving Cars: Use perception, planning, and movement to navigate roads
safely.