AI notes

The document provides an extensive overview of various topics in artificial intelligence, including language models, n-grams, vector space models, and agent-based systems. It covers concepts such as expert systems, planning systems, and learning in AI, detailing their components, applications, and examples. Additionally, it discusses machine learning tasks and classifiers like Naive Bayes, emphasizing the importance of knowledge representation and reasoning in AI systems.


1. Language Models
Language models predict the likelihood of a sequence of words in a language. They
are foundational in natural language processing tasks, such as text generation, speech
recognition, and translation.
Types of Language Models:
• Statistical Language Models (SLMs): Use probabilities derived from a corpus.
• Neural Language Models (NLMs): Use neural networks to model language (e.g., GPT, BERT).
Example:
For the sentence: "The cat is on the mat,"
A language model assigns a probability to the sequence:
P("The cat is on the mat") = P("The") × P("cat" | "The") × P("is" | "The cat") ...

2. N-grams
N-grams are contiguous sequences of n items (words, characters) from a given text.
They are used in language modeling, text classification, and speech recognition.
Types:
• Unigram: Single word (e.g., The, cat, is).
• Bigram: Two-word sequence (e.g., The cat, cat is).
• Trigram: Three-word sequence (e.g., The cat is).
Example:
For the text: "I love programming."
• Bigrams: (I love), (love programming).
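A short sketch of extracting unigrams, bigrams, and trigrams from a tokenized sentence (a whitespace split is a simplifying assumption; real tokenizers handle punctuation and casing more carefully):

def ngrams(tokens, n):
    """Return the list of contiguous n-grams from a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "I love programming".split()
print(ngrams(tokens, 1))  # unigrams: ('I',), ('love',), ('programming',)
print(ngrams(tokens, 2))  # bigrams: ('I', 'love'), ('love', 'programming')
print(ngrams(tokens, 3))  # trigrams: ('I', 'love', 'programming')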

3. Vector Space Models


A vector space model represents text documents as vectors in a multi-dimensional
space.
Steps:
1. Feature Extraction: Convert text into numerical features (e.g., term frequency).
2. Similarity Measurement: Measure distances between vectors (e.g., cosine
similarity).
Example:
Two documents:
• Doc1: "I like cats."
• Doc2: "I like dogs."
Feature vectors (based on term frequency, vocabulary order: I, like, cats, dogs):
• Doc1: [1, 1, 1, 0]
• Doc2: [1, 1, 0, 1]
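The similarity step can be shown on these two vectors; a minimal cosine-similarity sketch using only the standard library:

import math

def cosine_similarity(u, v):
    """cos(u, v) = (u · v) / (|u| * |v|)"""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

doc1 = [1, 1, 1, 0]   # vocabulary order: I, like, cats, dogs
doc2 = [1, 1, 0, 1]
print(cosine_similarity(doc1, doc2))  # 2 / 3 ≈ 0.67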

4. Bag of Words (BoW)


BoW is a text representation model that ignores word order and represents text as a set
of words with their frequency.
Steps:
1. Tokenize the text.
2. Count occurrences of each word.
3. Represent as a sparse vector.
Example:
Text: "Cats are lovely and cats are smart."
BoW: {Cats: 2, are: 2, lovely: 1, and: 1, smart: 1}.
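The three steps condense to a few lines; a minimal sketch using collections.Counter (lowercasing and a naive letters-only tokenization are simplifying assumptions):

from collections import Counter
import re

def bag_of_words(text):
    """Tokenize, count occurrences of each word, and return a word -> frequency mapping."""
    tokens = re.findall(r"[a-z]+", text.lower())   # naive tokenization
    return Counter(tokens)

print(bag_of_words("Cats are lovely and cats are smart."))
# Counter({'cats': 2, 'are': 2, 'lovely': 1, 'and': 1, 'smart': 1})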

5. Text Classification
Text classification assigns categories to text based on its content using machine
learning models like Naive Bayes, SVM, or neural networks.
Example:
Sentiment Analysis: Classify reviews as positive, negative, or neutral.
Input: "The movie was amazing!"
Output: Positive.
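A hedged sketch of such a sentiment classifier: it assumes scikit-learn is installed and uses a tiny invented training set, so it illustrates the workflow (bag-of-words features plus Naive Bayes) rather than a production model.

# Assumes scikit-learn is available; the training reviews are invented examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = ["the movie was amazing", "what a great film",
               "terrible plot and bad acting", "I hated this movie"]
train_labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())  # bag-of-words + Naive Bayes
model.fit(train_texts, train_labels)

print(model.predict(["The movie was amazing!"]))  # expected: ['positive']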

6. Information Retrieval
Information retrieval (IR) is the process of fetching relevant information from a
database or the web based on a query.
Example:
Search Engines:
Query: "Best programming languages for AI"
Retrieves ranked results from indexed web pages.
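At its core, retrieval builds an index from terms to documents and ranks documents by how well they match the query. A toy sketch with simple term-overlap scoring (a deliberate simplification of real ranking functions such as TF-IDF or BM25; the documents are invented):

docs = {
    "d1": "best programming languages for AI",
    "d2": "cooking recipes for beginners",
    "d3": "AI programming with Python",
}

# Build an inverted index: term -> set of document ids containing it.
index = {}
for doc_id, text in docs.items():
    for term in text.lower().split():
        index.setdefault(term, set()).add(doc_id)

def search(query):
    """Rank documents by the number of query terms they contain."""
    scores = {}
    for term in query.lower().split():
        for doc_id in index.get(term, set()):
            scores[doc_id] = scores.get(doc_id, 0) + 1
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(search("best programming languages for AI"))  # d1 ranks first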

7. PageRank
PageRank is an algorithm used by search engines to rank web pages based on their
importance, determined by the quality and quantity of links pointing to them.
Example:
Page A links to Pages B and C. Page B links back to Page A. PageRank increases for
pages with backlinks from high-quality pages.
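A minimal power-iteration sketch of PageRank on a three-page link graph; the graph extends the A/B/C example above, and 0.85 is the commonly quoted damping factor:

def pagerank(links, damping=0.85, iterations=50):
    """links: page -> list of pages it links to. Returns page -> rank."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:          # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

links = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
print(pagerank(links))   # A accumulates the most rank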

8. Information Extraction
Information extraction (IE) identifies structured information (e.g., entities,
relationships) from unstructured text.
Example:
Text: "Barack Obama was born in Hawaii in 1961."
Extracted Information:
• Entity: Barack Obama (Person).
• Location: Hawaii.
• Date: 1961.
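A toy rule-based extractor for the sentence above; the regular-expression patterns are hand-written assumptions that work for this example only, whereas real IE systems typically use trained named-entity recognizers (e.g., spaCy).

import re

text = "Barack Obama was born in Hawaii in 1961."

# Hand-written patterns tailored to this sentence; not a general-purpose NER.
person = re.search(r"^([A-Z][a-z]+ [A-Z][a-z]+)", text)
location = re.search(r"born in ([A-Z][a-z]+)", text)
year = re.search(r"\b(1[89]\d{2}|20\d{2})\b", text)

print("Entity:", person.group(1))      # Barack Obama
print("Location:", location.group(1))  # Hawaii
print("Date:", year.group(1))          # 1961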

9. Question Answering (QA)


QA systems find precise answers to user questions from a corpus or the web.
Example:
Question: "Who is the President of the USA?"
Answer: The name of the sitting U.S. President, retrieved from an up-to-date source (the correct answer changes over time).

Definition of Agents
An agent is an autonomous entity capable of perceiving its environment through
sensors and acting upon it using actuators to achieve specific goals. Agents are used in
AI systems, robotics, and software systems.
Characteristics of Agents:
1. Autonomy: Operates without direct human intervention.
2. Reactivity: Responds to changes in its environment.
3. Proactiveness: Takes initiative based on objectives.
4. Social Ability: Interacts with other agents or humans.
5. Learning: Improves behavior over time based on experiences.
Agent Architectures
Agent architectures define the underlying design principles for how agents perceive,
process, and act.
1. Reactive Architecture
• Focuses on immediate responses to stimuli.
• Does not involve complex reasoning or memory.
• Examples include reflex-based systems like robotic obstacle avoidance.
Example: A robot detecting an obstacle and immediately changing direction without
considering long-term goals.
2. Deliberative Architecture
• Uses symbolic reasoning to plan actions based on goals.
• Maintains an internal representation of the environment.
• More resource-intensive but capable of solving complex tasks.
Example: A chess-playing AI evaluating multiple moves to determine the best
strategy.
3. Layered Architecture
• Combines reactive and deliberative approaches by organizing control into layers:
o Reactive Layer: Immediate responses.
o Deliberative Layer: Planning and decision-making.
o Executive Layer: Coordinates between reactive and deliberative layers.
Example: An autonomous car with:
• A reactive layer for avoiding collisions.
• A deliberative layer for planning routes.
• An executive layer to switch between tasks.
4. Cognitive Architecture
• Mimics human-like reasoning using psychological and neurological principles.
• Emphasizes learning, memory, and decision-making.
• Uses frameworks like SOAR or ACT-R.
Example: A personal assistant like Siri, which learns user preferences over time.

Multi-Agent Systems (MAS)


MAS consist of multiple interacting agents, each with distinct capabilities and goals.
Agents can collaborate or compete depending on the system's purpose.
1. Collaborating Agents
• Work together to achieve common goals.
• Require effective communication and coordination.
Example: Robots in a warehouse collaborating to move packages efficiently.
2. Competitive Agents
• Compete to achieve individual goals, often at the expense of others.
• Common in game theory and market simulations.
Example: Stock trading bots competing to execute profitable trades.
3. Swarm Systems
• Composed of simple agents that follow local rules without centralized control.
• Exhibit emergent behavior that solves complex tasks.
• Inspired by biological systems like ant colonies or bird flocks.
Example:
• Ant algorithms for finding shortest paths.
• Drone swarms for search-and-rescue operations.
4. Biologically Inspired Models
• Mimic behaviors of biological entities to solve computational problems.
• Examples include:
o Genetic Algorithms: Inspired by natural selection.
o Neural Networks: Modeled after brain neurons.
Example: Simulating the behavior of bees to optimize network traffic routing.

Applications of Agent-Based Systems


1. E-commerce: Personalized recommendations using cognitive agents.
2. Healthcare: Collaborative agents for patient monitoring.
3. Robotics: Swarm robots for agricultural tasks.
4. Traffic Management: Competitive agents for adaptive signal control.
These systems leverage agent capabilities to solve complex, distributed problems
effectively.
Expert Systems: An Overview
An Expert System is an AI application designed to simulate the decision-making
ability of a human expert in a specific domain. These systems use a knowledge base of
facts and rules combined with inference engines to solve problems that typically
require human expertise.

Key Components of Expert Systems


1. Knowledge Base:
o Stores domain-specific knowledge in the form of facts and rules.
o Facts: Information about the domain (e.g., "Blood pressure above 140 is
high").
o Rules: Conditional statements derived from expertise (e.g., "IF blood
pressure is high, THEN recommend lifestyle changes").
2. Inference Engine:
o Applies logical rules to the knowledge base to derive new facts or make
decisions.
o Uses two primary reasoning methods:
  - Forward Chaining: Starts with the available facts and applies rules to reach a conclusion.
  - Backward Chaining: Starts with a hypothesis and works backward to verify it using facts.
3. User Interface:
o Provides interaction between the user and the system for queries,
explanations, and decisions.

1. Representing and Using Domain Knowledge


Domain knowledge is represented in a structured format so that it can be effectively
used by the expert system.
Representation Techniques
1. Production Rules:
o Represented as IF-THEN statements.
o Example:
  IF the engine does not start AND the battery is dead
  THEN replace the battery.
2. Semantic Networks:
o Represents knowledge using nodes (concepts) and links (relationships).
o Example:
  - Node: "Car"
  - Relationship: "has part" → "Engine"
3. Frames:
o Uses objects and their attributes.
o Example:
  Car:
  - Engine: Working / Not working
  - Battery: Charged / Not charged
4. Logic:
o Uses propositional and predicate logic.
o Example: BatteryDead ⟹ ReplaceBattery
Usage
• The inference engine processes the knowledge base and applies logical reasoning to derive conclusions or suggest actions.

2. Expert System Shells


An Expert System Shell is a software framework that provides the basic structure and
tools to develop an expert system. It allows domain experts to input knowledge
without requiring programming expertise.
Features of Expert System Shells
1. Pre-built Inference Engine:
o Reduces the need to develop reasoning algorithms.
2. Knowledge Base Editor:
o Enables easy entry and modification of rules and facts.
3. Explanation Facility:
o Justifies the reasoning process by showing how decisions are made.
4. User Interface:
o Simplifies user interaction.
Example of Expert System Shells
• CLIPS: A widely used tool for developing expert systems.
• Jess: Java Expert System Shell.
• Drools: Business rule management system.

3. Explanation
The ability of an expert system to explain its reasoning process is critical for user trust
and debugging.
Key Features
1. Why Explanation:
o Explains why a specific question is being asked.
o Example: "Why are you asking about symptoms?" → "To determine the
type of illness."
2. How Explanation:
o Explains how a decision or conclusion was reached.
o Example: "How did you conclude the battery is dead?" → "Based on the
fact that the engine does not start and the battery voltage is low."
3. Traceability:
o Shows the sequence of rules applied to reach a decision.

4. Knowledge Acquisition
Knowledge acquisition involves gathering, organizing, and encoding domain
knowledge into the expert system.
Techniques
1. Manual Entry:
o Direct input of facts and rules by domain experts.
2. Automated Knowledge Acquisition:
o Use of machine learning to infer patterns and rules.
3. Interviews:
o Structured interviews with domain experts to extract their expertise.
4. Observation:
o Observing experts in action to understand decision-making processes.
5. Case-based Reasoning:
o Using past cases to derive solutions for similar problems.
Challenges
• Tacit Knowledge: Experts may find it difficult to articulate their expertise.
• Dynamic Knowledge: Knowledge evolves, requiring frequent updates.
• Complexity: Large domains require extensive effort to encode knowledge.

Example of an Expert System


Medical Diagnosis System
Problem: Diagnose diseases based on symptoms.
Knowledge Base:
• Facts:
o "Fever is above 100°F."
o "Coughing and shortness of breath are symptoms."
• Rules:
o IF fever > 100°F AND coughing THEN suspect flu.
o IF fever > 100°F AND shortness of breath THEN suspect pneumonia.
Inference Engine:
• Applies the rules to the input symptoms to diagnose the condition.
User Interface:
• Query: "What are your symptoms?"
• Output: "You may have the flu. Consult a doctor."
Conclusion
Expert systems are powerful tools for solving complex problems in specific domains.
By representing domain knowledge effectively, using expert system shells, providing
explanations, and employing robust knowledge acquisition techniques, they can mimic
human expertise and provide valuable support in decision-making.
Blocks World
The Blocks World is a classic domain used in artificial intelligence (AI) to study
problem-solving and planning. It involves a set of blocks (typically cubes) that can be
stacked on one another or placed on a table. The goal is to manipulate the blocks to
transform an initial configuration into a desired goal configuration using a set of
predefined operations like moving one block from one position to another.
Components:
• Objects: Blocks labeled as A, B, C, etc.
• States: A representation of the arrangement of blocks (e.g., on(A, B), clear(B)).
• Actions/Operators: Basic operations like Move(X, Y) to move block X onto block Y.
• Goals: Desired configurations (e.g., on(A, B), on(B, C)).
Example:
Initial State: on(A, B), on(B, Table), on(C, Table), clear(A), clear(C)
Goal State: on(B, C), on(A, B)
Plan:
1. Move(A, Table)
2. Move(B, C)
3. Move(A, B)
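A small sketch of the state representation and the Move operator, with the state kept as a set of predicate tuples; the precondition checks are simplified assumptions (for example, the table is always treated as clear). Running the three-step plan above transforms the initial state into the goal state.

def move(state, block, destination):
    """Apply Move(block, destination) to a state of ('on', x, y) and ('clear', x) facts."""
    # Preconditions: the block is clear, and the destination is clear (the table always is).
    assert ("clear", block) in state
    assert destination == "Table" or ("clear", destination) in state

    source = next(f[2] for f in state if f[0] == "on" and f[1] == block)
    new_state = set(state)
    new_state.remove(("on", block, source))
    new_state.add(("on", block, destination))
    if source != "Table":
        new_state.add(("clear", source))          # the old support becomes clear
    if destination != "Table":
        new_state.discard(("clear", destination)) # the new support is now covered
    return new_state

state = {("on", "A", "B"), ("on", "B", "Table"), ("on", "C", "Table"),
         ("clear", "A"), ("clear", "C")}
for block, dest in [("A", "Table"), ("B", "C"), ("A", "B")]:
    state = move(state, block, dest)
print(sorted(state))   # includes ('on', 'B', 'C') and ('on', 'A', 'B')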

Components of Planning Systems


1. States: Representations of the environment at any point.
2. Goals: Descriptions of the desired outcome.
3. Operators/Actions: Define how the system transitions between states.
4. Plans: A sequence of operators to achieve the goal from the initial state.
5. Planner: The algorithm or system that generates and executes the plan.

Goal Stack Planning


This is a linear planning approach that uses a stack to manage subgoals and actions:
1. Start with the goal: Place the goal state on the stack.
2. Decompose into subgoals: Push all subgoals and actions required to achieve the
goal onto the stack.
3. Resolve subgoals: Iteratively pop the top of the stack and solve it, generating
new actions as needed.
4. Execute actions: When all subgoals are resolved, execute the plan.
Example:
Goal: on(A, B), on(B, C)
Stack evolves as:
• Push on(A, B) → Move(A, B)
• Push on(B, C) → Move(B, C)

Non-linear Planning
Non-linear planning allows handling multiple goals and actions concurrently, enabling
the creation of a plan that does not strictly follow a linear sequence. It uses:
• Partial-order planning: Plans are represented as partially ordered steps.
• Causal links: Ensure that specific preconditions are maintained.
Example:
Goals: on(A, B), on(B, C)
Plan:
1. Move(A, Table)
2. Move(B, C)
3. Move(A, B)
These steps can be reordered as long as dependencies are respected.

Hierarchical Planning
Hierarchical planning decomposes complex tasks into simpler sub-tasks:
• High-level actions: Represent the abstract steps.
• Refinement: Break down high-level actions into detailed plans.
• Advantages: Simplifies problem-solving and improves scalability.
Example:
High-level goal: Arrange blocks as on(A, B), on(B, C)
Steps:
1. Decompose: Achieve on(B, C) and on(A, B).
2. Solve subgoals as simpler problems.

Learning in AI
1. Learning from Examples: Learning patterns or rules from labeled examples.
Example: Training a classifier on labeled images to recognize cats and dogs.
2. Learning by Advice: The system improves performance based on external
advice (e.g., heuristic rules).
3. Explanation-Based Learning (EBL): Uses prior knowledge to generalize from
a single example.
Example: A chess program learns a winning strategy by analyzing a single
expert game.
4. Learning in Problem Solving: Learning heuristics or shortcuts to solve
problems more efficiently based on past experiences.

Machine Learning Tasks


1. Classification: Categorize data into predefined labels.
o Example: Email spam detection.
2. Inductive Learning: Generalize a rule or pattern from observed examples.
o Example: Given data on weather conditions and play decisions, induce
rules for when to play outside.

Naive Bayes Classifier


A simple probabilistic classifier based on Bayes' theorem:
P(C|X) = P(X|C) · P(C) / P(X)
Example:
• Training Data:
  Weather | Play
  Sunny   | Yes
  Rainy   | No
• Predict Play for Weather = Sunny.
Steps:
1. Calculate prior probabilities: P(Play=Yes), P(Play=No).
2. Compute likelihoods: P(Weather=Sunny | Play=Yes), P(Weather=Sunny | Play=No).
3. Use Bayes' formula to find the posterior.
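A worked sketch of these steps on a slightly larger, made-up weather/play table (the two-row table above is too small to show the counting, so the extra rows are illustrative assumptions):

from collections import Counter

# Hypothetical training data: (weather, play)
data = [("Sunny", "Yes"), ("Sunny", "No"), ("Rainy", "No"),
        ("Sunny", "Yes"), ("Rainy", "Yes"), ("Rainy", "No")]

def predict(weather):
    labels = [play for _, play in data]
    label_counts = Counter(labels)
    posteriors = {}
    for label, label_count in label_counts.items():
        prior = label_count / len(data)                                   # P(Play=label)
        likelihood = sum(1 for w, p in data if w == weather and p == label) / label_count
        posteriors[label] = prior * likelihood                            # ∝ P(label | weather)
    total = sum(posteriors.values())
    return {label: value / total for label, value in posteriors.items()}  # normalize by P(weather)

print(predict("Sunny"))   # {'Yes': ~0.67, 'No': ~0.33} for this toy data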

Decision Trees
A tree structure used for classification:
• Nodes: Test attributes.
• Branches: Attribute values.
• Leaves: Decision outcomes.
Example:
Predict whether to play:
 If Weather = Sunny and Humidity = High: No.
 If Weather = Rainy: No.
Constructed tree:
Weather?
|-- Sunny --> Humidity?
|   |-- High --> No
|   |-- Normal --> Yes
|-- Rainy --> No
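The constructed tree can be written directly as a nested dictionary plus a small prediction function; a minimal sketch (in practice the tree would be learned from data, e.g. with ID3 or scikit-learn's DecisionTreeClassifier):

# Nested-dict encoding of the tree above: inner nodes test an attribute, leaves are strings.
tree = {
    "attribute": "Weather",
    "branches": {
        "Sunny": {"attribute": "Humidity",
                  "branches": {"High": "No", "Normal": "Yes"}},
        "Rainy": "No",
    },
}

def classify(node, example):
    """Follow branches until a leaf (a plain string) is reached."""
    while isinstance(node, dict):
        node = node["branches"][example[node["attribute"]]]
    return node

print(classify(tree, {"Weather": "Sunny", "Humidity": "High"}))    # No
print(classify(tree, {"Weather": "Sunny", "Humidity": "Normal"}))  # Yes
print(classify(tree, {"Weather": "Rainy"}))                        # No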

Review of Propositional and Predicate Logic


Propositional Logic
• Definition: A formal system representing statements (propositions) that are either true or false.
• Syntax:
o Atoms: Basic statements like P, Q, etc.
o Connectives: AND (∧), OR (∨), NOT (¬), IMPLIES (→).
• Semantics: Truth tables define how connectives work.
• Example:
o Statement: "It is raining, and the ground is wet."
o Representation: P ∧ Q, where P = "it is raining" and Q = "the ground is wet".
Predicate Logic (First-Order Logic, FOL)
• Definition: Extends propositional logic by adding quantifiers and relations.
• Components:
o Constants: Specific objects (a, b).
o Variables: General placeholders (x, y).
o Functions: Map objects to objects (Father(x)).
o Predicates: Represent relations (Loves(x, y)).
o Quantifiers: ∀ (for all) and ∃ (there exists).
• Example:
o Statement: "Everyone loves someone."
o Representation: ∀x ∃y Loves(x, y).

First-Order Logic
• Advantages over Propositional Logic: Allows modeling of relationships and quantified statements.
• Example Usage: Representing "All humans are mortal" as ∀x (Human(x) → Mortal(x)).

Resolution and Theorem Proving


• Resolution: A rule of inference used for propositional and predicate logic.
o Steps:
1. Convert statements to Conjunctive Normal Form (CNF).
2. Apply the resolution rule iteratively to derive contradictions or conclusions.
• Example:
o Given: P ∨ Q and ¬Q,
o Resolution: P.
• Theorem Proving: Automated tools use resolution to verify theorems, e.g., Prolog.
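A tiny propositional-resolution sketch, with clauses as sets of string literals and "~" marking negation (a naming convention assumed here); it reproduces the example above by resolving P ∨ Q with ¬Q:

def negate(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

def resolve(clause1, clause2):
    """Return all resolvents of two clauses (each clause is a set of literals)."""
    resolvents = []
    for literal in clause1:
        if negate(literal) in clause2:
            resolvent = (clause1 - {literal}) | (clause2 - {negate(literal)})
            resolvents.append(resolvent)
    return resolvents

print(resolve({"P", "Q"}, {"~Q"}))   # [{'P'}]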

Forward Chaining and Backward Chaining


Forward Chaining
• Definition: Data-driven inference technique.
• Process:
o Start with known facts.
o Apply rules to infer new facts until the goal is reached.
• Example:
o Rules: P → Q, Q → R.
o Facts: P.
o Derive: Q, then R.
Backward Chaining
• Definition: Goal-driven inference technique.
• Process:
o Start with the goal.
o Work backward to determine what facts/rules satisfy the goal.
• Example:
o Goal: R.
o Rules: Q → R, P → Q.
o Determine P as the required fact.
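A compact sketch of both strategies over the rules P → Q and Q → R; rules are encoded as premise/conclusion pairs, and the single-premise form is a simplifying assumption:

rules = [("P", "Q"), ("Q", "R")]       # each pair means premise -> conclusion

def forward_chain(facts):
    """Keep applying rules to known facts until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Prove the goal by recursively proving the premise of a rule that concludes it."""
    if goal in facts:
        return True
    return any(backward_chain(premise, facts)
               for premise, conclusion in rules if conclusion == goal)

print(forward_chain({"P"}))            # {'P', 'Q', 'R'}
print(backward_chain("R", {"P"}))      # True: R needs Q, Q needs P, P is a fact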

Temporal and Spatial Reasoning


• Temporal Reasoning: Deals with time-based facts.
o Example: "Event A happens before Event B."
o Tools: Temporal logic, interval algebra.
• Spatial Reasoning: Deals with space-based relationships.
o Example: "Object X is to the left of Object Y."
o Applications: Robotics, GIS systems.

Review of Probabilistic Reasoning


Bayes Theorem
• Formula: P(H|E) = P(E|H) · P(H) / P(E), where:
o P(H|E): Posterior probability.
o P(E|H): Likelihood.
o P(H): Prior probability.
o P(E): Evidence probability.
• Example: Diagnosing a disease given symptoms.
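A worked numeric sketch of the diagnosis example; the prevalence and test accuracies are invented numbers chosen only to show how the formula combines prior and likelihood:

# Invented illustrative numbers: 1% prevalence, 99% sensitivity, 5% false-positive rate.
p_disease = 0.01                 # P(H): prior
p_pos_given_disease = 0.99       # P(E|H): likelihood
p_pos_given_healthy = 0.05       # P(E|not H)

# P(E): total probability of a positive test
p_positive = (p_pos_given_disease * p_disease
              + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
p_disease_given_positive = p_pos_given_disease * p_disease / p_positive
print(round(p_disease_given_positive, 3))   # ≈ 0.167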

Totally-Ordered and Partially-Ordered Planning


Totally-Ordered Planning
• Definition: Plans where actions are arranged in a strict sequence.
• Characteristics:
o Simple and linear.
o Suitable for problems with strict dependencies.
• Example:
o Task: Make tea.
o Plan: Boil water → Add tea leaves → Pour into cup.
Partially-Ordered Planning
• Definition: Plans with flexible ordering of actions.
• Characteristics:
o Actions are only ordered where necessary.
o Reduces unnecessary constraints.
• Example:
o Task: Prepare breakfast.
o Plan: Boil eggs and toast bread (can be done in parallel).

