AI notes
- Job Displacement : AI can automate tasks that humans currently perform, potentially
leading to job losses, especially in industries like manufacturing, transportation, and
customer service.
- Bias and Discrimination : AI systems can perpetuate biases present in their training data,
leading to unfair outcomes, particularly in areas like hiring, lending, or law enforcement.
- Privacy Concerns : AI systems, particularly those used in surveillance, data mining, and
social media, can intrude on personal privacy by collecting and analyzing large amounts of
personal data.
- Security Threats : AI systems could be hacked or manipulated, potentially leading to
significant consequences, such as the misuse of autonomous weapons or AI-driven
decision-making in critical infrastructure.
- Autonomous Weapons : AI can be used to develop autonomous weapons that operate
without human intervention, raising ethical concerns and potential risks of misuse in
warfare.
- Unintended Consequences : AI systems, particularly advanced ones, may act unpredictably
or in ways that deviate from human intent, leading to unforeseen outcomes.
2. List the benefits of AI.
There are several benefits associated with AI:
- Automation of Repetitive Tasks : AI can take over repetitive and mundane tasks, allowing
humans to focus on more creative and strategic work.
- Improved Decision-Making : AI can analyze large datasets quickly and accurately, helping
organizations make better data-driven decisions.
- Cost Savings : Automating tasks and improving efficiency can reduce operational costs in
many industries.
- Personalization : AI enhances user experiences by providing personalized
recommendations, such as in e-commerce, entertainment, and advertising.
- Efficiency Gains : AI can optimize processes, such as supply chain management, energy
usage, or logistics, leading to higher productivity.
- Healthcare Advancements : AI is used in medical diagnosis, drug discovery, and robotic
surgeries, improving healthcare outcomes and making treatments more accessible.
- Software Agents : Programs that operate within a digital environment (e.g., search
engines, chatbots).
- Robotic Agents : Physical robots that interact with the physical world (e.g., self-driving cars,
industrial robots).
- Human Agents : In multi-agent systems, humans may also be considered agents,
interacting with other intelligent agents in the system.
- Sensors : These are the parts of the agent that perceive the environment. For example,
cameras, microphones, or other inputs in a robot; or web scraping tools in a software agent.
- Actuators : The components that allow the agent to act on its environment. In a physical
robot, these could be wheels or arms; in a software agent, this might involve sending
messages or making API calls.
- Decision-Making Mechanism : This is the core of the agent, where it processes information
from its sensors and decides what actions to take using logic, machine learning, or other
techniques (a minimal sketch follows this list).
- Performance Measure : Defines how successful the agent is in achieving its goals. This
helps in evaluating and improving its decisions.
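To make the sensor/actuator/decision split concrete, here is a minimal Python sketch of a thermostat-style agent; the environment, percept names, and setpoint are invented for illustration, not taken from any particular system.

```python
# Minimal percept-action loop for a thermostat-style agent (illustrative names).

def sense(env):
    # Sensor: observe the current temperature.
    return env["temperature"]

def decide(temp, setpoint=20.0):
    # Decision-making mechanism: a simple rule comparing the percept to a goal value.
    return "heat_on" if temp < setpoint else "heat_off"

def act(env, action):
    # Actuator: heating slightly warms the room.
    if action == "heat_on":
        env["temperature"] += 0.5

env = {"temperature": 18.0}
for _ in range(3):  # run a few sense-decide-act cycles
    act(env, decide(sense(env)))
print(env["temperature"])  # 19.5
```

A performance measure for this agent could be how closely the temperature tracks the setpoint over time.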
- Simple Reflex Agents : These agents respond directly to percepts using predefined rules.
They do not consider the history of percepts and operate purely based on current
conditions.
- Model-based Reflex Agents : These agents maintain an internal model of the environment,
allowing them to handle partially observable situations by keeping track of previous
percepts.
- Goal-based Agents : These agents make decisions by considering a goal they need to
achieve. They evaluate which actions will lead them closer to achieving that goal.
- Utility-based Agents : These agents consider not only goals but also a utility function,
which measures how desirable an outcome is. They choose actions that maximize their
utility.
- Learning Agents : These agents improve over time by learning from their experiences. They
have a learning component that allows them to adapt their actions based on past successes
and failures.
9. Write a short note on:
c. Goal-based Agents
Goal-based agents act to achieve specific goals. They make decisions by considering which
actions will bring them closer to achieving their desired outcome. For example, a self-driving
car's goal may be to reach a destination safely, and it chooses actions based on this
objective.
d. Utility-based Agent
Utility-based agents extend goal-based agents by incorporating a utility function that
measures the desirability of different outcomes. They choose actions not just to achieve a
goal but to maximize their utility, making them more capable of handling trade-offs between
different possible outcomes. For example, a robot may not only want to reach a destination
but also do so as efficiently as possible.
e. Learning Agent
A learning agent is capable of improving its performance over time by learning from its
experiences. It consists of four components: a learning element (which improves based on
feedback), a performance element (which selects actions), a critic (which provides feedback
on performance), and a problem generator (which explores new possibilities). A learning
agent can adapt to changes in the environment and optimize its actions based on past
experiences.
3.Problem Solving
1. Define problem.
In artificial intelligence, a problem is defined as a situation that needs to be solved by finding
a sequence of actions that will transform the current state into a desired goal state. The
solution to a problem involves reaching the goal from the initial state through a series of
valid transitions or actions.
4.Game Theory
1. Explain Optimal Decisions in Games.
In games, an optimal decision is one that maximizes a player's chances of winning or
achieving the best possible outcome while considering that the opponent is also trying to do
the same. In a two-player zero-sum game, this means minimizing the opponent's maximum
gain (minimax strategy). An optimal decision is based on analyzing the game tree,
considering all possible moves of both players, and choosing the move that leads to the best
worst-case outcome. The goal is to maximize the minimum payoff the player can receive,
assuming the opponent also plays optimally.
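The minimax strategy can be shown on a tiny hand-built game tree; the payoffs below are invented for illustration, and a real game would generate the tree from legal moves.

```python
# Minimax over a small explicit game tree. Leaves are payoffs for the
# maximizing player; the players alternate with each level of depth.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):  # leaf: return its payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Root is a max node; each inner list is a min node (the opponent's choice).
tree = [[3, 12], [2, 8], [1, 14]]
print(minimax(tree, True))  # 3: the best worst-case payoff
```

The root player picks the branch whose worst outcome (3) is the best available, exactly the best worst-case reasoning described above.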
If a contradiction is found in all branches of the tableau, the formula is unsatisfiable
(false). For example, to check the validity of A ⟹ B, the tableau would assume A and ¬B and
attempt to derive a contradiction in every branch of the system.
10. What do you mean by Axiomatic Systems?
An Axiomatic System is a formal system in which a set of axioms (self-evident truths) is used
as the foundation to derive theorems. The system consists of:
- Axioms : Basic, assumed true statements.
- Inference Rules : Logical rules to derive new statements (theorems) from the axioms.
A well-known example is Euclidean Geometry , where all geometric theorems are derived
from a small set of axioms (e.g., "through any two points, there is exactly one straight line").
An axiomatic system is complete if all true statements can be derived and consistent if no
contradictions can be derived.
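As a worked illustration of derivation in an axiomatic system, here is the standard Hilbert-style proof of A → A from the axiom schemas K and S using only modus ponens:

```latex
% Axiom schemas: K: p -> (q -> p)    S: (p -> (q -> r)) -> ((p -> q) -> (p -> r))
\begin{align*}
1.&\; A \to ((A \to A) \to A) && \text{(K)}\\
2.&\; \bigl(A \to ((A \to A) \to A)\bigr) \to \bigl((A \to (A \to A)) \to (A \to A)\bigr) && \text{(S)}\\
3.&\; (A \to (A \to A)) \to (A \to A) && \text{(MP 1, 2)}\\
4.&\; A \to (A \to A) && \text{(K)}\\
5.&\; A \to A && \text{(MP 4, 3)}
\end{align*}
```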
6.Reasoning
1. What do you mean by Reasoning in AI?
Reasoning in AI refers to the process of drawing logical conclusions from available data or
knowledge. It allows AI systems to make decisions, solve problems, and derive new
information based on existing facts. Reasoning can be of various types, including deductive
reasoning (deriving conclusions from general rules), inductive reasoning (inferring general
rules from specific examples), and abductive reasoning (inferring the most likely explanation
for a set of observations).
7.Planning
1. Define Planning and Explain Algorithm of a Simple Planning Agent.
Planning in AI refers to the process of generating a sequence of actions that an agent must
execute to achieve a specific goal from a given initial state. The task of planning is to decide
in advance how to accomplish a goal based on a model of the environment and available
actions.
Algorithm of a simple planning agent:
1. Perceive the environment and update the agent's model of the current state.
2. If no plan exists, formulate a goal, formulate a problem, and search for a plan (a
sequence of actions) that achieves the goal.
3. Execute the next action of the plan, removing it from the plan.
4. Repeat until the plan is exhausted and the goal is achieved.
1. Classical Planning :
- In classical planning, the agent works with a well-defined environment where the states,
actions, and outcomes are known and deterministic. The environment is static, and the
agent is omniscient about the world.
- Example: Solving a puzzle where each move has a predictable outcome.
1. State-Space Search : The most basic planning technique, where the planner searches
through the state space to find a sequence of actions that lead from the initial state to the
goal state. It can be done using forward search (from the initial state) or backward search
(from the goal); a code sketch follows this list.
2. Partial-Order Planning (POP) : A technique where actions are ordered only when
necessary, and flexibility is maintained to reorder actions during execution.
3. Hierarchical Task Network (HTN) Planning : In HTN, complex goals are decomposed into
simpler subtasks using a hierarchy of tasks. HTN is efficient for complex domains with
multiple levels of abstraction.
In POP, the plan is flexible, and actions can be rearranged as long as the plan constraints are
satisfied, allowing for more efficient and adaptable planning.
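A forward state-space search can be sketched in a few lines of Python; the domain below uses a deliberately simplified action model (each action maps one whole state to another), and all names are invented for illustration.

```python
# Forward state-space search (breadth-first) for a toy domain.
from collections import deque

def plan(initial, goal, actions):
    # actions: dict of action name -> (precondition state, resulting state)
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, (pre, post) in actions.items():
            if state == pre and post not in visited:
                visited.add(post)
                frontier.append((post, path + [name]))
    return None  # no plan reaches the goal

actions = {
    "pick_up": ("on_table", "holding"),
    "stack":   ("holding", "on_block"),
}
print(plan("on_table", "on_block", actions))  # ['pick_up', 'stack']
```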
8.Recent Trends in AI
1. List and Explain Applications of AI.
6. Robotics : AI-driven robots are used in manufacturing, assembly lines, and even in
domestic applications like cleaning and delivery services.
2. Neural Language Models (NLMs) : These models use neural networks to capture more
complex language patterns. Examples include recurrent neural networks (RNNs) and
transformer-based models like GPT and BERT.
9.Intelligent System
1. What is an Intelligent Agent in AI?
An Intelligent Agent is an entity in AI that perceives its environment through sensors and
takes actions using actuators to achieve specific goals. It interacts with the environment,
gathering information, processing it, and taking actions autonomously or
semi-autonomously to achieve the best outcome based on predefined goals.
1. Simple Reflex Agents : These agents act solely based on the current percept, ignoring the
history of percepts. They follow a condition-action rule (if condition A, do action B).
- Example: A vacuum cleaner moves left or right based on whether the current location is
dirty or clean (see the code sketch after this list).
2. Model-Based Reflex Agents : These agents maintain a memory or internal state to keep
track of the history of percepts. They use this memory to make better decisions.
- Example: A robot that remembers which areas have been cleaned and which haven’t.
3. Goal-Based Agents : These agents take actions to achieve specific goals. They choose
actions based on how well they help achieve the desired goals.
- Example: A GPS system that chooses the best route to reach a destination.
4. Utility-Based Agents : These agents make decisions based on a utility function that
quantifies how desirable a particular state is. They aim to maximize utility, balancing
different factors like cost, time, and efficiency.
- Example: A financial trading algorithm that evaluates risk and reward to maximize profits.
5. Learning Agents : These agents improve their performance over time by learning from
their experiences.
- Example: A recommendation system that improves its suggestions based on user
preferences and feedback.
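As a minimal sketch of the simple reflex agent from item 1 above, here is the classic two-location vacuum agent in Python; the location names and rules are the textbook toy example, not a real system.

```python
# Simple reflex vacuum agent: condition-action rules over the current percept only.

def reflex_vacuum_agent(location, status):
    if status == "dirty":
        return "suck"
    if location == "A":
        return "move_right"
    return "move_left"

print(reflex_vacuum_agent("A", "dirty"))  # suck
print(reflex_vacuum_agent("A", "clean"))  # move_right
```

Because the agent sees only the current percept, it cannot remember which squares are already clean; adding that memory is exactly what turns it into a model-based reflex agent.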
10.Knowledge Representation
1. Explain What You Mean by Knowledge Representation and Need of Knowledge
Representation.
Knowledge Representation (KR) refers to the way information, facts, and rules are stored so
that an AI system can use it to reason and solve problems. KR is crucial for enabling AI
systems to simulate human-like thinking by modeling real-world entities and their
relationships.
The need for Knowledge Representation arises because raw data is not enough for
intelligent reasoning. KR is necessary to:
- Represent complex data efficiently.
- Enable machines to infer new knowledge from existing data.
- Provide AI with the ability to understand, interpret, and interact with the world.
1. Declarative Knowledge : Refers to facts and information stored explicitly in the knowledge
base (e.g., "Paris is the capital of France").
4. Heuristic Knowledge : Rules of thumb or best practices used to make decisions when
precise knowledge is unavailable (e.g., "If a car is not starting, check the battery first").
5. Structural Knowledge : Understanding of how concepts are related to one another (e.g.,
ontologies, taxonomies).
Example : In a project, to finish a task (T), you either complete Task A (OR node) or both Task
B and Task C (AND node).
- Propositional Logic is used to make inferences. The agent knows rules such as "If I am
adjacent to a pit, I will feel a breeze," and uses these rules to deduce the safe areas.
Example:
- Breeze(X, Y) ⇒ Pit(X-1, Y) OR Pit(X+1, Y) OR Pit(X, Y-1) OR Pit(X, Y+1) : If there is a
breeze in a cell, at least one of its adjacent cells has a pit.
- Universal quantifier (∀) : “For all x, P(x)” means P(x) is true for every x.
- Existential quantifier (∃) : “There exists an x such that P(x)” means P(x) is true for at least
one x.
- Forward Chaining : Starts with known facts and applies inference rules to extract more
facts until the goal is reached. It’s data-driven and works well in situations where all facts
are known, but the goal is not clear.
- Example: From the rule "If it rains, the ground is wet" and the fact "It is raining,"
forward chaining concludes "The ground is wet" (a code sketch follows this list).
- Backward Chaining : Starts with the goal and works backward by determining which facts
need to be true for the goal to be achieved. It’s goal-driven and works well when you know
the goal but not the facts.
- Example: To prove "The ground is wet," backward chaining would search for conditions
such as "It is raining."
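The forward chaining loop from the rain example above can be sketched as naive rule firing in Python; the rule and fact names are invented for illustration.

```python
# Naive forward chaining over propositional Horn rules: fire any rule whose
# premises are all known facts, until no new facts can be added.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [(["raining"], "ground_wet"),
         (["ground_wet"], "slippery")]
print(forward_chain(["raining"], rules))
# {'raining', 'ground_wet', 'slippery'}
```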
12. Differentiate Between Forward Chaining and Backward Chaining.
Aspect | Forward Chaining | Backward Chaining
Direction | From facts to the goal (data-driven) | From the goal to facts (goal-driven)
Efficiency | Can generate unnecessary conclusions | More focused, since it explores only relevant facts
4. Explain the Working of Neural Networks. Explain Simple Neural Network Architecture.
Neural networks work by passing input data through layers of neurons. Each neuron
computes a weighted sum of its inputs, adds a bias term, and applies an activation function
to produce an output. This output is passed to the next layer, and this process continues
until the final output layer, which makes the prediction.
- Simple Neural Network Architecture : A basic neural network consists of three types of
layers:
1. Input Layer : Takes in features of the data.
2. Hidden Layers : One or more layers where computations take place.
3. Output Layer : Produces the final prediction or classification.
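The weighted-sum, bias, and activation steps can be written out directly; the weights and inputs below are arbitrary illustrative values, assuming NumPy is available.

```python
# Forward pass of a tiny fully connected network:
# each layer computes activation(W @ x + b).
import numpy as np

def relu(z):
    return np.maximum(0, z)

x = np.array([0.5, -1.0])                  # input layer: 2 features
W1 = np.array([[0.1, 0.4], [0.8, -0.2]])   # hidden layer: 2 neurons
b1 = np.array([0.0, 0.1])
W2 = np.array([[0.3, -0.5]])               # output layer: 1 neuron
b2 = np.array([0.2])

h = relu(W1 @ x + b1)  # hidden layer: weighted sum + bias + activation
y = W2 @ h + b2        # output layer: final prediction
print(y)
```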
9. Write the Types of Convolutional Neural Networks. Explain Any Two in Detail.
- LeNet : One of the first CNN architectures designed for digit recognition.
- AlexNet : Popularized deep learning and won the ImageNet competition in 2012. It has
multiple convolutional layers followed by fully connected layers.
- VGGNet : Uses deeper networks with small filters for better image classification.
- ResNet (Residual Networks) : Introduced skip connections to solve the vanishing gradient
problem in deep networks.
AlexNet and ResNet have significantly impacted deep learning by improving image
classification accuracy and solving deep network training issues.
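The skip connection that defines ResNet can be shown in a few lines; this is a minimal sketch of a residual block, assuming PyTorch is installed, not the full published architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two conv layers whose output is added back to the input (skip connection)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)  # adding x back eases gradient flow in deep nets

block = ResidualBlock(channels=8)
x = torch.randn(1, 8, 32, 32)  # one 8-channel 32x32 feature map
print(block(x).shape)          # torch.Size([1, 8, 32, 32])
```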
a. Gradient Descent : Iteratively updates model weights to minimize the loss function by
moving in the direction of the steepest descent (negative gradient); see the sketch after this list.
b. Stochastic Gradient Descent (SGD) : Instead of using the entire dataset, SGD updates
weights after each training example, making it faster but with more noisy updates.
c. Mini Batch Stochastic Gradient Descent (MB-SGD) : Combines the benefits of both batch
and stochastic gradient descent by updating weights based on small batches of data.
d. SGD with Momentum : Adds a momentum term to SGD to accelerate the gradient in
the relevant direction and prevent oscillations.
f. Adaptive Gradient (AdaGrad) : Adapts the learning rate for each parameter individually,
based on the history of gradients, making it suitable for sparse data.
i. Adam : Combines RMSprop and momentum, providing adaptive learning rates and
incorporating past gradients into the current gradient, making it one of the most widely used
optimizers.
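Plain gradient descent (item a) and momentum (item d) can be compared on a one-dimensional toy loss; the function f(w) = w², the learning rate, and the momentum coefficient below are chosen only for illustration.

```python
# Gradient descent vs. SGD with momentum on the toy loss f(w) = w^2.

def grad(w):
    return 2 * w  # derivative of w^2; the minimum is at w = 0

# Plain gradient descent: w <- w - lr * grad(w)
w = 5.0
for _ in range(200):
    w -= 0.1 * grad(w)
print(w)  # essentially 0

# Momentum: a velocity term accumulates past gradients
w, v, lr, beta = 5.0, 0.0, 0.1, 0.9
for _ in range(200):
    v = beta * v - lr * grad(w)
    w += v
print(w)  # also approaches 0, after some oscillation
```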
Tokenization is the process of splitting text into smaller units, called tokens, such as words,
phrases, or sentences. For example, the sentence "NLP is fun!" can be tokenized into ["NLP",
"is", "fun"].
4. Explain:
a. Word2Vec
Word2Vec is a neural network-based model used to learn word embeddings. It uses two
approaches:
1. Continuous Bag of Words (CBOW) : Predicts a target word based on its
surrounding context words.
2. Skip-gram : Predicts surrounding context words for a given target word.
The embeddings learned by Word2Vec capture the semantic meaning of words. For
instance, the model can understand that "king" is to "man" as "queen" is to "woman".
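A minimal training sketch, assuming the gensim library is installed; the two-sentence corpus is far too small to learn meaningful embeddings and only shows the API shape.

```python
from gensim.models import Word2Vec

sentences = [["the", "king", "rules", "the", "kingdom"],
             ["the", "queen", "rules", "the", "kingdom"]]

# sg=1 selects skip-gram; sg=0 would select CBOW.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(model.wv["king"].shape)            # (50,) embedding vector
print(model.wv.similarity("king", "queen"))
```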
a. Sentiment Analysis
Sentiment Analysis is the process of analyzing textual data to determine the sentiment or
emotional tone behind the words. The goal is to classify the sentiment of a given text as
positive , negative , or neutral . Sentiment analysis is widely used in:
- Product reviews to gauge customer satisfaction.
- Social media monitoring to understand public sentiment.
- Market analysis for understanding trends and customer opinions.
b. Text Classification
Text Classification is the process of categorizing text into predefined labels or categories
based on its content. Examples of text classification tasks include:
- Spam detection : Classifying emails as spam or not spam.
- Topic classification : Categorizing news articles by subject (e.g., sports, politics).
- Language identification : Identifying the language in which a document is written.
Both sentiment analysis and text classification play critical roles in a variety of NLP tasks,
enabling machines to interpret and categorize human language efficiently.
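A text classifier of the kind described above can be sketched with a standard bag-of-words pipeline, assuming scikit-learn is installed; the four training texts and labels are invented and far too few for real use.

```python
# Tiny spam-detection sketch: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["win a free prize now", "meeting at noon tomorrow",
         "free money claim now", "project update attached"]
labels = ["spam", "ham", "spam", "ham"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["claim your free prize"]))  # likely ['spam']
```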
14.Reinforcement Learning
1. Explain Reinforcement Learning.
Reinforcement Learning (RL) is a type of machine learning where an agent learns to make
decisions by interacting with an environment to maximize some notion of cumulative
reward. The agent learns through trial and error, receiving feedback from its actions in the
form of rewards or penalties.
In RL, the agent's goal is to learn a policy (a strategy) that defines the best action to take in
each state to maximize the total expected reward over time.
RL is commonly used in applications such as game playing (e.g., AlphaGo), robotics, and
autonomous systems.
Markov Decision Processes (MDPs) assume the Markov property , which means the future
state depends only on the current state and action, not on the past states.
Q-learning :
Q-learning is an off-policy reinforcement learning algorithm used to find the optimal action-
selection policy for an agent. It is model-free, meaning the agent does not need to know the
transition probabilities or reward functions in advance. The agent learns a Q-value for each
state-action pair, representing the expected cumulative reward for taking a given action in a
specific state and then following the optimal policy thereafter.
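The Q-learning update can be written directly from that description; the two-state environment below is invented purely to show the update rule in action.

```python
# Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
import random

n_states, n_actions = 2, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    # Toy dynamics: action 1 in state 0 reaches state 1 and earns a reward.
    if state == 0 and action == 1:
        return 1, 1.0
    return 0, 0.0

for _ in range(2000):
    s = random.randrange(n_states)  # start each tiny episode in a random state
    if random.random() < epsilon:   # epsilon-greedy exploration
        a = random.randrange(n_actions)
    else:
        a = max(range(n_actions), key=lambda x: Q[s][x])
    s2, r = step(s, a)
    Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])

print(Q)  # Q[0][1] should end up the largest entry
```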
Deep Q-Networks (DQNs) extend Q-learning by approximating the Q-value function with a
deep neural network instead of a table. DQNs are widely used in complex decision-making
tasks like video games and robotics where the state space is large or continuous.
These concepts provide a strong foundation for understanding how reinforcement learning
and related methods are used to solve decision-making problems in AI systems.