ai_solved-2

The document contains answers to an Artificial Intelligence university assignment, covering key concepts such as learning agents, search algorithms, types of agents, task environment properties, game playing algorithms, and planning techniques. It also discusses applications of AI in various sectors like healthcare, retail, and banking, along with optimization techniques like Alpha-Beta pruning. Additionally, it includes examples and diagrams to illustrate the concepts effectively.


Name: Rahul Pannalal Shah

Rollno: 17

Class: TE B

AI Assignment Paper 2024

Here are the answers to the May 2024 Artificial Intelligence university paper (Rev 2019, C Scheme):

1(a) Learning Agent with Block Diagram

A learning agent improves its performance over time using feedback.

It consists of:

1. Performance element – selects external actions.

2. Learning element – improves performance based on feedback.

3. Critic – evaluates the agent’s behavior and gives feedback.

4. Problem generator – suggests exploratory actions to learn more.


Diagram:
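The four components listed above can also be sketched as a minimal Python class. This is an illustration only; the class name, the running-average update, and the epsilon value are assumptions, not a standard API:

```python
import random

class LearningAgent:
    """Minimal sketch of a learning agent's four components."""

    def __init__(self, actions):
        self.q = {a: 0.0 for a in actions}      # learned value of each action
        self.counts = {a: 0 for a in actions}

    def performance_element(self):
        """Selects the external action (here: the highest-valued one)."""
        return max(self.q, key=self.q.get)

    def critic(self, reward):
        """Evaluates behaviour; here the reward itself is the feedback."""
        return reward

    def learning_element(self, action, feedback):
        """Improves the performance element using the critic's feedback
        (running average of observed rewards)."""
        self.counts[action] += 1
        n = self.counts[action]
        self.q[action] += (feedback - self.q[action]) / n

    def problem_generator(self, epsilon=0.1):
        """Occasionally suggests an exploratory action; otherwise None."""
        if random.random() < epsilon:
            return random.choice(list(self.q))
        return None
```

In use, the agent acts via `performance_element`, the environment's reward passes through `critic` into `learning_element`, and `problem_generator` keeps it exploring.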

1(b) Informed vs. Uninformed Search Algorithms

Uninformed (blind) search uses no knowledge beyond the problem definition and explores systematically, e.g. Breadth-First Search, Depth-First Search, Uniform-Cost Search.

Informed (heuristic) search uses a heuristic estimate of the cost to the goal to guide exploration, e.g. Greedy Best-First Search and A*, and typically expands far fewer nodes.

1(c) PEAS for Automobile Driver Agent

P (Performance measure): Safety, speed, fuel efficiency

E (Environment): Roads, traffic, weather, pedestrians

A (Actuators): Steering, brakes, accelerator, horn


S (Sensors): Camera, GPS, speedometer, lidar

State Space Description:

Each state represents a combination of car position, speed, traffic around, and signals.

Transitions occur based on decisions like turning, braking, or accelerating.

1(d) Quantifiers with Example

Universal Quantifier (∀): Applies to all elements.

Example: ∀x (Human(x) → Mortal(x))

Existential Quantifier (∃): Applies to at least one element.

Example: ∃x (Student(x) ∧ Smart(x))
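Over a finite domain, the two quantifiers correspond directly to Python's `all()` and `any()`. The sets below are made-up examples for illustration:

```python
# Checking quantified formulas over a finite domain.
humans = {"socrates", "plato"}
mortals = {"socrates", "plato", "rex"}
students = {"asha", "ravi"}
smart = {"ravi"}

domain = humans | mortals | students | smart

# ∀x (Human(x) → Mortal(x)): implication is "not Human(x) or Mortal(x)"
universal = all((x not in humans) or (x in mortals) for x in domain)

# ∃x (Student(x) ∧ Smart(x))
existential = any((x in students) and (x in smart) for x in domain)

print(universal, existential)  # True True
```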

1(e) Types of Agents in AI

1. Simple Reflex Agent – Acts only on current perception (e.g., thermostat)

2. Model-Based Reflex Agent – Maintains internal state

3. Goal-Based Agent – Takes decisions based on desired goal

4. Utility-Based Agent – Chooses action with highest utility

5. Learning Agent – Improves over time through learning mechanism
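As a tiny illustration of the first type, a simple reflex agent is just a condition-action rule on the current percept. The setpoint and band values here are assumed for the example:

```python
def thermostat_agent(percept_temp, setpoint=21.0, band=0.5):
    """Simple reflex agent: acts on the current percept only,
    with no internal state or model of the world."""
    if percept_temp < setpoint - band:
        return "heat_on"
    if percept_temp > setpoint + band:
        return "heat_off"
    return "no_op"

print(thermostat_agent(18.0))  # heat_on
```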

2(a) Properties of Task Environment

1. Observable: Fully/Partially – fully observable environments give complete information (e.g. chess); partially observable ones give only limited information (e.g. driving).
2. Agents: Single/Multi-agent – One vs. multiple decision-makers.

3. Deterministic/Stochastic: Fixed results vs. probabilistic outcomes.

4. Episodic/Sequential: episodes are independent (e.g. image classification); in sequential environments the current decision depends on past actions (e.g. driving).

5. Static/Dynamic: Static doesn’t change during decision-making, dynamic does.

6. Discrete/Continuous: Limited set of states vs. infinite/real values.

7. Known/Unknown: If outcomes or rules are not known, it’s unknown.

2(b) Game Playing Algorithm & Tic-Tac-Toe Tree

Game playing algorithms help agents decide the best move in competitive games.

Common algorithm: Minimax (assumes both players play optimally).

Game Tree Example for Tic-Tac-Toe:


Each node represents a board state; levels alternate between X and O moves.

Utility is calculated at leaf nodes, and values are propagated upwards.
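The propagation of utilities described above can be sketched in a few lines of Python. The tree here is a generic nested-list example (not a full Tic-Tac-Toe tree, which would be much larger); leaves are utility values:

```python
def minimax(node, maximizing=True):
    """Minimax on a game tree given as nested lists; leaves are utilities.
    Levels alternate between the maximizing and minimizing player."""
    if not isinstance(node, list):          # leaf: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree))  # 3 — MAX picks the branch whose MIN value is largest
```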

3(a) Forward & Backward Chaining

Forward Chaining (Data-Driven):


Start with known facts, apply rules to infer conclusions.

Example:

Rule: If fever ∧ cough → flu

Facts: fever, cough → Infer: flu

Backward Chaining (Goal-Driven):

Start with goal, work backward to prove it using rules and facts.

Example:

Goal: flu

Check: Is there fever and cough? If yes → Confirm flu

Used in inference engines of expert systems.
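The data-driven direction can be sketched as a small rule engine that applies rules until no new facts appear. The rule representation (premise set, conclusion) is an illustrative choice, not a standard format:

```python
def forward_chain(facts, rules):
    """Data-driven inference: apply rules until no new facts are derived.
    rules: list of (frozenset_of_premises, conclusion) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)       # fire the rule
                changed = True
    return facts

rules = [(frozenset({"fever", "cough"}), "flu")]
print(forward_chain({"fever", "cough"}, rules))  # {'fever', 'cough', 'flu'}
```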


3(b) Hill Climbing Algorithm & Issues

A local search algorithm that moves towards better neighboring states.

Chooses the neighbor with the highest value.

Problems:

1. Local Maxima – Stuck at a peak that isn’t the global best.

2. Plateau – Flat area, no direction of improvement.

3. Ridges – Difficult to climb as best path is indirect.

Solution: Random restarts or simulated annealing can help escape traps.
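The basic climb and the random-restart fix can be sketched as follows. The toy landscape is an assumption chosen to have a local maximum at x=2 and a global one at x=8:

```python
def hill_climb(f, neighbours, start):
    """Greedy ascent: move to the best neighbour while it improves f."""
    current = start
    while True:
        best = max(neighbours(current), key=f, default=current)
        if f(best) <= f(current):
            return current              # local maximum (or plateau)
        current = best

def random_restart_hill_climb(f, neighbours, starts):
    """Escape local maxima by climbing from several start states
    and keeping the best result."""
    return max((hill_climb(f, neighbours, s) for s in starts), key=f)

# Toy 1-D landscape: local peak f(2)=3, global peak f(8)=10.
f = lambda x: -(x - 2) ** 2 + 3 if x < 5 else -(x - 8) ** 2 + 10
nbrs = lambda x: [x - 1, x + 1]

print(hill_climb(f, nbrs, 0))                       # 2 — stuck on local peak
print(random_restart_hill_climb(f, nbrs, [0, 6]))   # 8 — restart escapes it
```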

4(a) Resolution and Its Steps


Resolution is a rule of inference used in propositional and first-order logic.

It works by refuting the negation of the goal.

Steps:

1. Convert statements into CNF (Conjunctive Normal Form)

2. Negate the goal

3. Add to knowledge base

4. Repeatedly apply resolution until empty clause (contradiction) is derived

If empty clause is reached, goal is proven.
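For the propositional case, the refutation loop above can be sketched directly. Clauses are frozensets of string literals with "~" for negation; this encoding is an assumption for the example:

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (clauses are frozensets of literals)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def resolution_proves(kb, goal):
    """Refutation: add ~goal to the KB, resolve until the empty clause."""
    clauses = set(kb) | {frozenset({negate(goal)})}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:
                        return True     # empty clause: contradiction found
                    new.add(r)
        if new <= clauses:
            return False                # no progress: goal not entailed
        clauses |= new

kb = [frozenset({"~fever", "~cough", "flu"}),   # fever ∧ cough → flu in CNF
      frozenset({"fever"}), frozenset({"cough"})]
print(resolution_proves(kb, "flu"))  # True
```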

4(b) ADL for Flat Tire Problem


Initial State: Flat tire on axle, spare in trunk

Goal: Spare tire on axle

Actions in ADL:

1. Remove flat tire

2. Get spare from trunk

3. Mount spare on axle

Description:

Action: Remove(flat)

Precondition: TireOn(flat, axle)

Effect: ¬TireOn(flat, axle)


Action: Get(spare)

Precondition: In(spare, trunk)

Effect: Have(spare)

Action: Mount(spare)

Precondition: Have(spare) ∧ ¬TireOn(_, axle)

Effect: TireOn(spare, axle)
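The three actions above can be encoded as precondition/add/delete sets and applied in sequence. This is a simplified STRIPS-style sketch (the Mount precondition drops the ¬TireOn(_, axle) check, and the dictionary layout is an assumption):

```python
# The ADL-style actions, sketched as precondition/effect dicts.
actions = {
    "Remove(flat)": {"pre": {"TireOn(flat,axle)"},
                     "add": set(), "del": {"TireOn(flat,axle)"}},
    "Get(spare)":   {"pre": {"In(spare,trunk)"},
                     "add": {"Have(spare)"}, "del": {"In(spare,trunk)"}},
    "Mount(spare)": {"pre": {"Have(spare)"},
                     "add": {"TireOn(spare,axle)"}, "del": {"Have(spare)"}},
}

def apply_plan(state, plan):
    """Apply actions in order, checking each precondition."""
    state = set(state)
    for name in plan:
        a = actions[name]
        assert a["pre"] <= state, f"precondition of {name} not met"
        state = (state - a["del"]) | a["add"]
    return state

init = {"TireOn(flat,axle)", "In(spare,trunk)"}
goal = apply_plan(init, ["Remove(flat)", "Get(spare)", "Mount(spare)"])
print("TireOn(spare,axle)" in goal)  # True — the goal is reached
```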

5(a) Partial-Order Planning with Example

It’s a planning technique where steps don’t have to be strictly ordered unless necessary.

Allows flexibility and parallel execution.

Example:

Goal: Make tea

Actions:

1. Boil water
2. Add tea leaves

3. Pour water into cup

4. Add sugar

Order constraints:

Boil water → Pour water

Add tea leaves can happen anytime after boiling

Sugar can be added after water is in cup

Other steps can remain unordered until needed.
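A partial-order plan can be seen as the set of all total orders (linearisations) consistent with its constraints. The sketch below enumerates them for the tea example; brute-force permutation checking is only for illustration on tiny plans:

```python
from itertools import permutations

steps = ["boil", "leaves", "pour", "sugar"]
# Ordering constraints from the tea example: (a, b) means a before b.
constraints = [("boil", "pour"), ("boil", "leaves"), ("pour", "sugar")]

def consistent(order):
    """True if every constrained pair appears in the required order."""
    return all(order.index(a) < order.index(b) for a, b in constraints)

linearisations = [p for p in permutations(steps) if consistent(p)]
print(len(linearisations))  # 3 valid total orders for this partial order
```

The plan leaves "leaves" and "sugar" flexible, so three different execution orders all satisfy it.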


5(b) Belief Network and Its Construction

A belief network (Bayesian network) is a graphical model that represents probabilistic relationships among variables.

Steps to construct:

1. Identify variables

2. Determine causal relationships

3. Draw Directed Acyclic Graph (DAG)

4. Assign Conditional Probability Tables (CPTs) to nodes

Example:

Variables: Rain, Sprinkler, Wet Grass

Rain → Wet Grass


Sprinkler → Wet Grass

Each arrow shows influence; CPTs define exact probabilities.
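Given CPTs, a query like P(WetGrass = true) can be answered by enumeration. The probability numbers below are invented for illustration; only the network structure comes from the example above:

```python
from itertools import product

# Illustrative CPTs (the numbers are assumptions, not from the text).
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.4, False: 0.6}
P_wet = {  # P(WetGrass=true | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def p_wet_grass():
    """Marginalise out Rain and Sprinkler by full enumeration."""
    total = 0.0
    for r, s in product([True, False], repeat=2):
        total += P_rain[r] * P_sprinkler[s] * P_wet[(r, s)]
    return total

print(round(p_wet_grass(), 4))  # 0.4432
```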

6(a) AI Applications in Healthcare, Retail, and Banking

Healthcare:

Diagnosis support (e.g., IBM Watson), AI imaging, drug discovery

Retail:

Personalized recommendations, inventory forecasting, chatbots

Banking:

Fraud detection, loan risk assessment, customer service automation


AI improves accuracy, efficiency, and customer experience in all sectors.


6(b) Alpha-Beta Pruning

It’s used to improve the Minimax algorithm in game trees.

Alpha (α): The best value that the maximizer can guarantee so far.

Beta (β): The best value that the minimizer can guarantee so far.

How it works:

If at any point, a branch can’t affect the final decision, it’s cut off (pruned).
Result: Fewer nodes are evaluated → faster decisions

Example use: In chess AI, it avoids checking useless moves.
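The cut-offs can be added to plain minimax with only a few extra lines. Same nested-list tree representation as before; leaves are utilities:

```python
import math

def alphabeta(node, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Minimax with alpha-beta pruning on a nested-list game tree."""
    if not isinstance(node, list):
        return node                        # leaf: utility value
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                      # beta cut-off: MIN avoids this branch
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break                          # alpha cut-off: MAX avoids this branch
    return value

print(alphabeta([[3, 12, 8], [2, 4, 6], [14, 5, 2]]))  # 3, same as plain minimax
```

On this tree the second branch is cut off after its first leaf (2), since MAX already has 3 guaranteed: fewer nodes evaluated, same answer.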

6(c) Wumpus World Environment

A simple 4x4 grid world used in AI to teach logical reasoning.

Components:

Agent, Wumpus (danger), pits (fall), gold (goal)

Percepts:

Breeze near pits

Stench near Wumpus

Glitter near gold

Goal: Safely find the gold and return, using logical inference.

It teaches the basics of knowledge representation, decision-making under uncertainty, and agent-based design.
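The percept rules above can be sketched for a 4x4 grid. The pit, Wumpus, and gold positions below are made-up examples:

```python
def neighbours(cell):
    """Orthogonally adjacent squares inside a 4x4 grid."""
    x, y = cell
    return {(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 1 <= x + dx <= 4 and 1 <= y + dy <= 4}

def percepts(cell, pits, wumpus, gold):
    """Percepts the agent receives on a given square."""
    p = set()
    if neighbours(cell) & pits:
        p.add("Breeze")                 # a pit is adjacent
    if wumpus in neighbours(cell):
        p.add("Stench")                 # the Wumpus is adjacent
    if cell == gold:
        p.add("Glitter")                # gold is on this square
    return p

pits = {(3, 1), (3, 3)}
print(percepts((2, 1), pits, wumpus=(1, 3), gold=(2, 3)))  # {'Breeze'}
```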
