AI – Units 1 and 2: Important Questions with Answers

Assignment – I

1. What is AI? Discuss the history of AI in detail.


Definition:
Artificial Intelligence (AI) is a branch of computer science concerned with creating intelligent machines that can behave like humans, think like humans, and make decisions.

History of AI:
Year 1943: The first work now recognized as AI was done by Warren McCulloch and Walter Pitts
in 1943; they proposed a model of artificial neurons.
Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength
between neurons. His rule is now called Hebbian learning.
Year 1950: Alan Turing, an English mathematician and a pioneer of machine learning, published
"Computing Machinery and Intelligence" in 1950, in which he proposed a test of a machine's ability
to exhibit intelligent behavior equivalent to human intelligence, now called the Turing test.
Year 1956: The term "Artificial Intelligence" was first adopted by the American computer scientist John
McCarthy at the Dartmouth Conference, where AI was coined as an academic field for the first time.
Year 1966: Researchers emphasized developing algorithms that could solve mathematical problems.
Joseph Weizenbaum created the first chatbot in 1966, named ELIZA.
Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
Year 2002: For the first time, AI entered the home in the form of Roomba, a vacuum cleaner.
Year 2006: AI entered the business world; companies like Facebook, Twitter, and Netflix started
using AI.
Deep learning, big data and artificial general intelligence (2011-present)
Year 2011: IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex questions as
well as riddles. Watson proved that it could understand natural language and solve tricky
questions quickly.
Year 2012: Google launched an Android app feature, "Google Now", which could provide
information to the user as a prediction.
Year 2014: The chatbot "Eugene Goostman" won a competition based on the famous "Turing
test."
Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and
performed extremely well.

2 a) Explain Agents and Environments.


b) Illustrate the Properties of task environments.
Definition:
⦁ An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators
⦁ The agent function maps from percept histories to actions:
[f: P* -> A]
⦁ agent = architecture + program
Types of Agents:
⦁ Simple reflex agents
⦁ Model-based reflex agents
⦁ Goal-based agents
⦁ Utility-based agents
⦁ Learning agents
1. Simple Reflex Agent:
These agents act solely on the current state of the environment and do not consider the history of
previous states, which makes them suitable for simple and predictable environments. A simple reflex
agent works on condition-action rules, meaning it maps the current state directly to an action. An
example is a room-cleaner agent, which acts only if there is dirt in the room (a minimal sketch appears
after this list).
2. Model – Based Reflex Agent:
These agents maintain an internal model of the environment, which they use to make decisions. They
consider both the current state and the history of previous states to take action.

3. Goal-based Agents:
These agents operate based on a set of predefined goals. They evaluate the current state of the
environment and take actions that bring them closer to their goals.

4. Utility-based Agents:
These agents aim to maximize a specific utility function, which represents a measure of the desirability
of different outcomes. They consider not only the current state but also the future states of the
environment to make decisions.
5. Learning Agents:
These agents can learn from their experience and adapt their behaviour accordingly. They can be further
classified into three types:
a) Passive learning agents
b) Active learning agents
c) Reinforcement learning agents
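To make the condition-action rule of the simple reflex agent concrete, here is a minimal Python sketch of a reflex vacuum agent in a hypothetical two-location world; the location names and action names are illustrative assumptions, not part of the original answer:

def reflex_vacuum_agent(percept):
    """Map the current percept directly to an action via condition-action rules."""
    location, status = percept              # e.g. ("A", "Dirty"); a made-up percept format
    if status == "Dirty":
        return "Suck"                       # rule: if the square is dirty, clean it
    elif location == "A":
        return "Right"                      # rule: if in A and clean, move to B
    else:
        return "Left"                       # rule: if in B and clean, move to A

# The agent ignores percept history: identical percepts always yield the same action.
for percept in [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty"), ("B", "Clean")]:
    print(percept, "->", reflex_vacuum_agent(percept))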

b) Task environments in artificial intelligence and robotics can be characterized by several properties,
including:
1. Fully observable vs. partially observable: A task environment is fully observable if the agent can
directly observe the complete state of the environment at each time step. If the agent only has
access to partial information about the environment, the environment is considered partially
observable.
2. Single-agent vs. multi-agent: A task environment can involve a single agent or multiple agents
operating concurrently. In a multi-agent environment, agents may have to compete or
cooperate with each other to achieve their goals.
3. Deterministic vs. stochastic: A task environment is deterministic if the outcome of an action is
entirely determined by the current state and the action taken. If there is an element of
randomness or uncertainty in the environment's dynamics, the environment is considered
stochastic.
4. Episodic vs. sequential: A task environment is episodic if the agent's experience is divided
into atomic episodes, where each action depends only on the current percept and does not
affect future episodes. In contrast, a sequential environment involves a sequence of actions in
which the current decision can affect all future decisions.
5. Static vs. dynamic: A task environment is static if the environment does not change while the
agent is deliberating. In contrast, a dynamic environment can change due to the actions of the
agent or external factors.
Understanding these properties is essential for designing effective agents that can operate in a wide
range of environments.

3 a) Discuss how to measure problem-solving performance.


b) Explain different Uninformed search strategies of AI.
Ans: Measuring problem-solving performance involves assessing how well an agent or system can solve
a particular problem in a given environment. There are several metrics used to evaluate problem-solving
performance, including:
1. Success rate: The success rate measures the percentage of problem instances that an agent or
system successfully solves. A high success rate indicates that the agent or system is effective in
solving the problem.
2. Completion time: The completion time measures the time taken by the agent or system to solve
the problem. A shorter completion time is generally preferred as it indicates faster
problem-solving performance.
3. Optimality: Optimality measures how close the solution found by the agent or system is to the
optimal solution. An optimal solution is the best possible solution to the problem, and an agent
or system that finds it is considered to have achieved optimal performance.
4. Resource usage: Resource usage measures the amount of resources, such as memory, CPU
time, or energy, used by the agent or system to solve the problem. A low resource usage
indicates efficient problem-solving performance.
5. Robustness: Robustness measures how well an agent or system can handle changes or
perturbations in the environment. A more robust system can adapt to changes and still solve the
problem effectively.
These metrics can be used alone or in combination to evaluate problem-solving performance. The
choice of metric depends on the specific problem and the goals of the agent or system.
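As a small illustration, the sketch below computes two of these metrics (success rate and completion time) over made-up trial results; the data is purely hypothetical:

# Each trial records whether the problem was solved and the time taken (seconds).
trials = [(True, 1.2), (True, 0.8), (False, 5.0), (True, 1.5)]

success_rate = sum(solved for solved, _ in trials) / len(trials)
avg_time = sum(t for solved, t in trials if solved) / sum(s for s, _ in trials)

print(f"success rate: {success_rate:.0%}")                      # 75%
print(f"avg completion time of solved runs: {avg_time:.2f}s")   # 1.17s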

b) Uninformed search is a class of general-purpose search algorithms that operate in a brute-force way.
Uninformed search algorithms have no information about the state or search space other than how to
traverse the tree, which is why they are also called blind search.
Following are the various types of uninformed search algorithms:
1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
Breadth-first Search:
• The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at
the current level before moving to nodes of the next level.
• Breadth-first search is implemented using a FIFO queue data structure.
Complete:
Yes (if b is finite), the shallowest solution is returned
Time:
b + b^2 + b^3 + … + b^d = O(b^d)
Space:
O(b^d) (keeps every node in memory)
Optimal:
Yes if step costs are all identical or path cost is a nondecreasing function of the depth of the node
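A minimal Python sketch of breadth-first search, assuming the graph is given as an adjacency dictionary (the example graph is made up for illustration):

from collections import deque

def breadth_first_search(graph, start, goal):
    frontier = deque([[start]])          # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()        # expand the shallowest path first
        node = path[-1]
        if node == goal:
            return path                  # the shallowest solution
        for child in graph.get(node, []):
            if child not in explored:
                explored.add(child)
                frontier.append(path + [child])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": ["F"]}
print(breadth_first_search(graph, "A", "F"))   # ['A', 'B', 'D', 'F']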
Depth-first Search:
• Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
• DFS uses a stack data structure for its implementation.
Complete:
No: it fails in infinite-depth spaces or spaces with loops. If modified to avoid repeated states along the
path, it is complete in finite spaces.
Time:
O(b^m): terrible if m is much larger than d, but if solutions are dense it may be much faster than
breadth-first search.
Space:
O(b·m), i.e., linear space!

Optimal:
No
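A matching sketch of depth-first search using an explicit stack, on the same style of made-up graph as the BFS sketch above:

def depth_first_search(graph, start, goal):
    frontier = [[start]]                 # LIFO stack of paths
    while frontier:
        path = frontier.pop()            # expand the deepest path first
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in path:        # avoid loops along the current path
                frontier.append(path + [child])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": ["F"]}
print(depth_first_search(graph, "A", "F"))     # ['A', 'C', 'E', 'F']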
Depth-limited Search:
• A depth-limited search algorithm is similar to depth-first search but with a predetermined depth
limit ℓ. Depth-limited search overcomes the drawback of infinite paths in depth-first search: a node
at the depth limit is treated as if it has no further successor nodes.
Completeness: DLS search algorithm is complete if the solution is above the depth-limit.
Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).
Space Complexity: The space complexity of the DLS algorithm is O(b×ℓ).
Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also not optimal even
if ℓ>d.
Iterative deepening depth-first search:
• The iterative deepening algorithm is a combination of the DFS and BFS algorithms. It finds the best
depth limit by gradually increasing the limit until a goal is found.
Completeness: This algorithm is complete if the branching factor is finite.
Time Complexity: The time complexity is O(b^d).
Space Complexity: O(b·d).
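The sketch below combines both ideas: a recursive depth-limited search (as in the previous subsection) called repeatedly with a growing limit. The graph and the depth cap are illustrative assumptions:

def depth_limited_search(graph, node, goal, limit, path=None):
    path = path or [node]
    if node == goal:
        return path
    if limit == 0:
        return None                      # treat the node as having no further successors
    for child in graph.get(node, []):
        if child not in path:            # avoid loops along the current path
            result = depth_limited_search(graph, child, goal, limit - 1, path + [child])
            if result:
                return result
    return None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    for limit in range(max_depth + 1):   # gradually increase the depth limit
        result = depth_limited_search(graph, start, goal, limit)
        if result:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": ["F"]}
print(iterative_deepening_search(graph, "A", "F"))   # ['A', 'B', 'D', 'F']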

Uniform cost search:


• Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This
algorithm comes into play when a different cost is available for each edge.
Time and Space Complexity: O(b^(1 + ⌊C*/ε⌋)), where C* is the cost of the optimal solution and ε is
the minimum edge cost.
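A minimal uniform-cost search sketch using a priority queue ordered by path cost; the edge costs below are made up for illustration:

import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]     # priority queue ordered by path cost g(n)
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path            # cheapest path: popped costs never decrease
        for child, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best_cost.get(child, float("inf")):
                best_cost[child] = new_cost
                heapq.heappush(frontier, (new_cost, child, path + [child]))
    return None

graph = {"A": [("B", 2), ("C", 2)], "B": [("D", 4)], "C": [("D", 1)]}
print(uniform_cost_search(graph, "A", "D"))    # (3, ['A', 'C', 'D'])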

4 a) What is hill climbing search? Explain simulated annealing search in detail.


b) Discuss the drawbacks of hill climbing search.
• Hill climbing algorithm is a local search algorithm which continuously moves in the direction of
increasing elevation/value to find the peak of the mountain or best solution to the problem. It
terminates when it reaches a peak value where no neighbor has a higher value.
• The hill climbing algorithm is a technique used for optimizing mathematical problems. One of the
widely discussed examples is the Traveling Salesman Problem, in which we need to minimize the
distance traveled by the salesman.
• It is also called greedy local search, as it only looks at its immediate good neighbor states and not
beyond.
• A node of hill climbing algorithm has two components which are state and value.
• Hill Climbing is mostly used when a good heuristic is available.
• In this algorithm, we don't need to maintain and handle the search tree or graph as it only keeps
a single current state.
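A minimal hill-climbing sketch on a made-up one-dimensional objective; the step size and the objective function are illustrative assumptions:

def hill_climbing(objective, x, step=0.1, max_steps=10000):
    for _ in range(max_steps):
        best = max([x - step, x + step], key=objective)   # best immediate neighbor
        if objective(best) <= objective(x):
            return x                     # no neighbor is higher: we are at a peak
        x = best
    return x

f = lambda x: -(x - 3) ** 2 + 10         # single peak at x = 3
print(round(hill_climbing(f, x=0.0), 2)) # 3.0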

Simulated Annealing:
Idea: improve hill climbing by allowing occasional downhill steps, to reduce the probability of getting
stuck in a local maximum. Downhill steps are taken randomly, but with a probability that decreases over
time. This probability is controlled by a given annealing schedule for a temperature parameter T. If the
schedule lowers T slowly enough, the search is guaranteed to end in a global maximum.
Catch: it may take several tries with test problems to devise a good annealing schedule.
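A minimal simulated-annealing sketch for the same kind of one-dimensional objective; the geometric cooling schedule and all parameters are illustrative assumptions:

import math, random

def simulated_annealing(objective, x, T=1.0, cooling=0.995, steps=5000):
    for _ in range(steps):
        neighbor = x + random.uniform(-0.1, 0.1)    # random nearby move
        delta = objective(neighbor) - objective(x)
        # Always accept uphill moves; accept downhill moves with
        # probability exp(delta / T), which shrinks as T decreases.
        if delta > 0 or random.random() < math.exp(delta / T):
            x = neighbor
        T *= cooling                                # annealing schedule lowers T
    return x

f = lambda x: -(x - 3) ** 2 + 10                    # single peak at x = 3
print(round(simulated_annealing(f, x=0.0), 1))      # close to 3.0 (stochastic)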

b)
• Local Maximum: A local maximum is a peak state in the landscape that is better than each of
its neighboring states, but another state exists elsewhere that is higher than the local
maximum.
Solution: A backtracking technique can be a solution to the local maximum problem in the state-space
landscape. Create a list of promising paths so that the algorithm can backtrack in the search space and
explore other paths as well.

• Plateau: A plateau is a flat area of the search space in which all neighboring states of the
current state contain the same value, so the algorithm cannot find a best direction
to move. A hill-climbing search might get lost in the plateau area.
Solution: The solution for a plateau is to take bigger or smaller steps while searching. Randomly
select a state far away from the current state, so that the algorithm may find a non-plateau
region.

• Ridges: A ridge is a special form of local maximum. It is an area higher than its
surroundings, but it has a slope of its own and cannot be climbed in a single move.
Solution: Bidirectional search, or moving in several different directions at once, can mitigate this
problem.
5) Explain the Mini-max search strategy for the tic-tac-toe problem.
Ans: The key to the Minimax algorithm is a back and forth between the two players, where the player
whose "turn it is" desires to pick the move with the maximum score. In turn, the scores for each of the
available moves are determined by the opposing player deciding which of its available moves has the
minimum score. The scores for the opposing player's moves are again determined by the turn-taking
player trying to maximize its score, and so on all the way down the move tree to an end state.
A description for the algorithm, assuming X is the "turn taking player," would look something like:
• If the game is over, return the score from X's perspective.
• Otherwise get a list of new game states for every possible move
• Create a scores list
• For each of these states add the minimax result of that state to the scores list
• If it's X's turn, return the maximum score from the scores list
• If it's O's turn, return the minimum score from the scores list
You'll notice that this algorithm is recursive; it flips back and forth between the players until a final score
is found.
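The procedure above translates almost directly into code. Here is a minimal Python sketch for tic-tac-toe; the +10/-10 end-state scores and the max/min alternation follow the description, while the board encoding (a flat list of nine cells holding "X", "O", or None) is an assumption made for illustration:

WINS = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    w = winner(board)
    if w == "X":
        return +10                       # score from X's perspective
    if w == "O":
        return -10
    if all(board):
        return 0                         # board full with no winner: a draw
    other = "O" if player == "X" else "X"
    # Score every available move by recursing with the opposing player.
    scores = [minimax(board[:i] + [player] + board[i+1:], other)
              for i, cell in enumerate(board) if cell is None]
    return max(scores) if player == "X" else min(scores)

def best_move(board, player="X"):
    other = "O" if player == "X" else "X"
    moves = [i for i, cell in enumerate(board) if cell is None]
    score = lambda i: minimax(board[:i] + [player] + board[i+1:], other)
    return max(moves, key=score) if player == "X" else min(moves, key=score)

# X to move; cell 2 completes the top row, so minimax picks the instant win:
board = ["X", "X", None,
         "O", "O", None,
         None, None, None]
print(best_move(board, "X"))             # 2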
Let's walk through the algorithm's execution with the full move tree, and show why, algorithmically, the
instant winning move will be picked:

• It's X's turn in state 1. X generates the states 2, 3, and 4 and calls minimax on those states.
• State 2 pushes the score of +10 to state 1's score list, because the game is in an end state.
• State 3 and 4 are not in end states, so 3 generates states 5 and 6 and calls minimax on them,
while state 4 generates states 7 and 8 and calls minimax on them.
• State 5 pushes a score of -10 onto state 3's score list, while the same happens for state 7 which
pushes a score of -10 onto state 4's score list.
• States 6 and 8 each generate the only available move, which leads to an end state, and so both of
them add a score of +10 to the score lists of states 3 and 4.
• Because it is O's turn in both states 3 and 4, O will seek the minimum score, and given the
choice between -10 and +10, both states 3 and 4 will yield -10.
• Finally, the score lists for states 2, 3, and 4 are populated with +10, -10 and -10 respectively, and
state 1, seeking to maximize the score, will choose the winning move with score +10, state 2.

6) Solve the following problem using the Constraint Satisfaction approach: SEND + MORE = MONEY.

Ans: The crypt-arithmetic problem in Artificial Intelligence is a type of encryption problem in which a
written message in alphabetical form, which is easily readable and understandable, is converted into a
numeric form that is neither easily readable nor understandable. The constraints are:
1. A digit 0-9 is assigned to each letter.
2. Each different letter has a unique digit.
3. All occurrences of the same letter are assigned the same digit.
4. The resulting numbers must satisfy the arithmetic operation in the usual way.

1. We first look at the most significant letter of the last word, which is 'M' in 'MONEY'. It is a letter
produced by a carry, and a carry can only be 1, so M=1.
2. Next, we have S+M=O in the leftmost column. Since M=1, we have S+1=O, so we need a value of S
that generates a carry when 1 is added to it. That value is 9. Therefore S=9 and O=0.
3. In the next column we have E+O=N. Since O=0, this gives E+0=N, which is impossible because E
and N must be different digits. This means a carry was generated by the lower-place digits, so we have:
4. 1+E=N ----------(i)
5. The next column gives N+R=E -------(ii)
6. Satisfying both equations (i) and (ii) gives E=5 and N=6.
7. Now R should be 9, but 9 is already assigned to S, so R=8 with a carry of 1 generated from the
lower-place digits.
8. Finally, we have D+5=Y, and this must generate a carry, so D must be greater than 4. As 5, 6,
8 and 9 are already assigned, D=7 and therefore Y=2.
    9 5 6 7
+   1 0 8 5
-----------
  1 0 6 5 2
-----------
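The same assignment can also be found mechanically. Below is a brute-force constraint-satisfaction sketch in Python (an exhaustive enumeration rather than the hand deduction above) that tries every assignment of distinct digits to the eight letters and keeps the one satisfying the arithmetic and leading-digit constraints; it enumerates roughly 1.8 million permutations, so it takes a few seconds:

from itertools import permutations

def solve_send_more_money():
    letters = "SENDMORY"                 # the eight distinct letters
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a["S"] == 0 or a["M"] == 0:   # leading digits cannot be zero
            continue
        send  = 1000*a["S"] + 100*a["E"] + 10*a["N"] + a["D"]
        more  = 1000*a["M"] + 100*a["O"] + 10*a["R"] + a["E"]
        money = 10000*a["M"] + 1000*a["O"] + 100*a["N"] + 10*a["E"] + a["Y"]
        if send + more == money:
            return send, more, money
    return None

print(solve_send_more_money())           # (9567, 1085, 10652)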
