Artificial Intelligence


ARTIFICIAL INTELLIGENCE
UNIT 1

INTRODUCTION TO ARTIFICIAL INTELLIGENCE


Artificial Intelligence is composed of two words, Artificial and Intelligence, where Artificial
means "man-made" and Intelligence means "thinking power"; hence AI means "a man-made
thinking power."

So, we can define AI as:

"It is a branch of computer science by which we can create intelligent machines which can
behave like humans, think like humans, and are able to make decisions."

According to Haugeland, artificial intelligence is "the exciting new effort to make
computers think … machines with minds, in the full and literal sense".

For Bellman, it is "the automation of activities that we associate with human thinking, activities such as
decision-making".

Artificial Intelligence exists when a machine has human-like skills such as learning,
reasoning, and problem solving.

With Artificial Intelligence you do not need to preprogram a machine for every task; instead,
you can create a machine with programmed algorithms which can work with its own
intelligence, and that is the strength of AI.

Artificial Intelligence is not just a part of computer science; it is a vast field that draws on
many other disciplines. To create AI we first need to understand how intelligence is
composed: intelligence is an intangible faculty of our brain which combines reasoning,
learning, problem solving, perception, language understanding,
etc.
To achieve these capabilities in a machine or software, Artificial Intelligence requires the
following disciplines:

o Mathematics
o Biology
o Psychology
o Sociology
o Computer Science
o Neuroscience
o Statistics

AI is not an entirely new idea; some people say that, as per Greek myth, there were
mechanical men in early days which could work and behave like humans.

o With the help of AI, we can create software or devices which can solve real-world
problems easily and accurately, in areas such as health, marketing, traffic, etc.
o With the help of AI, we can create personal virtual assistants, such as Cortana,
Google Assistant, Siri, etc.
o With the help of AI, we can build robots which can work in environments where
human survival would be at risk.
o AI opens a path for other new technologies, new devices, and new opportunities.

Goals of Artificial Intelligence


Following are the main goals of Artificial Intelligence:

1. Replicate human intelligence


2. Solve Knowledge-intensive tasks
3. An intelligent connection of perception and action
4. Building a machine which can perform tasks that require human intelligence, such as:
o Proving a theorem
o Playing chess
o Planning a surgical operation
o Driving a car in traffic
5. Creating a system which can exhibit intelligent behavior, learn new things by itself,
demonstrate, explain, and advise its user.

Advantages of Artificial Intelligence


Following are some main advantages of Artificial Intelligence:

o High accuracy with fewer errors: AI machines or systems are less prone to errors and
achieve high accuracy because they take decisions based on prior experience or information.
o High speed: AI systems can be very fast at decision-making; because of this,
AI systems can beat a chess champion at chess.
o High reliability: AI machines are highly reliable and can perform the same action
multiple times with high accuracy.
o Useful for risky areas: AI machines can be helpful in situations such as defusing a
bomb or exploring the ocean floor, where employing a human can be risky.
o Digital assistance: AI can be very useful as a digital assistant to users; for example,
AI technology is currently used by various e-commerce websites to show products
matching customer requirements.
o Useful as a public utility: AI can be very useful for public utilities, such as self-driving
cars which can make our journeys safer and hassle-free, facial recognition for security
purposes, natural language processing to communicate with humans in human
language, etc.

Disadvantages of Artificial Intelligence


Every technology has some disadvantages, and the same goes for Artificial Intelligence.
However advantageous the technology is, it still has some drawbacks which we need to keep
in mind while creating an AI system. Following are the disadvantages of AI:

o High cost: The hardware and software requirements of AI are very costly, as AI
systems require lots of maintenance to meet current-world requirements.
o Can't think out of the box: Even though we are making smarter machines with AI, they
still cannot work outside their training; a robot will only do the work for which it is
trained or programmed.
o No feelings and emotions: An AI machine can be an outstanding performer, but it has
no feelings, so it cannot form an emotional attachment with humans, and
may sometimes be harmful to users if proper care is not taken.
o Increased dependency on machines: With advancing technology, people are
becoming more dependent on devices and hence are exercising their mental capabilities less.
o No original creativity: Humans are creative and can imagine new ideas; AI machines
cannot match this power of human intelligence and cannot be creative and
imaginative.

Types of Artificial Intelligence:


Artificial Intelligence can be divided into various types; there are mainly two
categorizations, one based on capabilities and one based on the functionality of AI. Following is a
flow diagram which explains the types of AI.

AI type-1: Based on Capabilities


1. Weak AI or Narrow AI:
o Narrow AI is a type of AI which is able to perform a dedicated task with intelligence.
The most common and currently available AI is Narrow AI in the world of Artificial
Intelligence.
o Narrow AI cannot perform beyond its field or limitations, as it is only trained for one
specific task. Hence it is also termed weak AI. Narrow AI can fail in unpredictable
ways if pushed beyond its limits.
o Apple Siri is a good example of Narrow AI; it operates with a limited, pre-defined
range of functions.
o IBM's Watson supercomputer also comes under Narrow AI, as it uses an Expert
system approach combined with Machine learning and natural language processing.
o Some Examples of Narrow AI are playing chess, purchasing suggestions on e-
commerce site, self-driving cars, speech recognition, and image recognition.
2. General AI:
o General AI is a type of intelligence which could perform any intellectual task as
efficiently as a human.
o The idea behind general AI is to make a system which could be smart and
think like a human on its own.
o Currently, no system exists which comes under general AI and can
perform any task as well as a human.
o Researchers worldwide are now focused on developing machines with general AI.
o Systems with general AI are still under research, and it will take a lot of effort and
time to develop them.

3. Super AI:
o Super AI is a level of system intelligence at which machines could surpass
human intelligence and perform any task better than humans, with cognitive
properties. It is an outcome of general AI.
o Some key characteristics of super AI include the ability to think, reason,
solve puzzles, make judgments, plan, learn, and communicate on its own.
o Super AI is still a hypothetical concept in Artificial Intelligence; developing such
systems for real remains a world-changing task.

Artificial Intelligence type-2: Based on functionality


1. Reactive Machines
o Purely reactive machines are the most basic types of Artificial Intelligence.
o Such AI systems do not store memories or past experiences for future actions.
o These machines only focus on current scenarios and react to them with the best
possible action.
o IBM's Deep Blue system is an example of reactive machines.
o Google's AlphaGo is also an example of reactive machines.

2. Limited Memory
o Limited memory machines can store past experiences or some data for a short
period of time.
o These machines can use stored data for a limited time period only.
o Self-driving cars are one of the best examples of Limited Memory systems. These
cars can store recent speed of nearby cars, the distance of other cars, speed limit,
and other information to navigate the road.

3. Theory of Mind
o Theory of Mind AI should understand human emotions, people, and beliefs, and be
able to interact socially like humans.
o This type of AI machine has not been developed yet, but researchers are making lots
of effort and progress toward developing such machines.

4. Self-Awareness
o Self-aware AI is the future of Artificial Intelligence. These machines will be super
intelligent and will have their own consciousness, sentiments, and self-awareness.
o These machines will be smarter than the human mind.
o Self-aware AI does not yet exist in reality; it is still a hypothetical concept.

SIMULATION OF SOPHISTICATED & INTELLIGENT BEHAVIOUR

Artificial Intelligence is the branch of computer science dealing with the simulation of
intelligent behavior in computers.
The term was coined in 1956 by John McCarthy at the Dartmouth
Conference. Artificial intelligence includes the following areas of specialization:
 Games playing: programming computers to play games against human
opponents
 Expert systems: programming computers to make decisions in real-life
situations (for example, some expert systems help doctors diagnose diseases
based on symptoms)
 Natural language: programming computers to understand natural human
languages
 Neural networks: systems that simulate intelligence by attempting to reproduce
the types of physical connections that occur in animal brains

Uses & Effects of Artificial Intelligence on our Daily Life


The real success of AI is that most people are simply unaware of how significantly it
affects and enables the routines of daily life. A man gets up in the morning to the smell of
coffee already brewing. This is thanks to a microchip inside the coffee machine that
allows him to program his coffeemaker to turn itself on while he is still sleeping. Another
microchip keeps his toast from burning and remembers which setting from light to dark
he likes best.
Computer Games:
Traditionally, AI computer programs have worked in the background, making sure that
the digital environment of forests, smoking volcanoes, and rambling paths runs smoothly.
But AI is also used to make computer games more challenging for their human
participants.
E-Commerce
Today a person does not have to get into a car to go shopping, thanks to sophisticated AI
applications and the Internet. In 1995 almost no businesses conducted their affairs over
the Internet. By the year 2003 business-to-consumer sales on the Internet exceeded $100
billion & business-to-business sales were more than $3 trillion.
When a person calls a company or logs on to a company's Web site, it is rare to
actually contact a human being. Instead many businesses are relying on automated help
desks that use an artificial intelligence system called case-based reasoning that works to
match up the customer's problem to similar problems stored in its memory. It can then
adapt a solution that worked in the past to the current problem.
Driving Intelligence
In the 1990s, the U.S. Department of Transportation introduced the Intelligent Vehicle
Initiative as part of the Transportation Efficiency Act for the 21st Century. Its mission is
to look for ways to design cars and trucks that would prevent accidents and fatalities on
the road. AI labs around the country are experimenting with all sorts of prototype AI
systems, such as collision warning devices that use computerized voice, sound, or light to
alert the driver to a possible crash and voice-activated controls so that the driver only has
to push a single voice activation button on the steering wheel and command "Radio on"
or "Temperature seventy degrees." Heat-detecting devices similar to the military's night
vision systems would display infrared images on the windshield to warn drivers of an
obstacle in their path. Sensors on the front of the car would allow the cruise control to
maintain a safe speed and distance between vehicles by slowing or accelerating as
needed. "Smart cars" already give drivers the ability to navigate using the OnStar
satellite system, which also automatically notifies emergency crews when an airbag has
been deployed. The 2004 Toyota Prius even has sensors that can unlock the door when
the driver's hands are full and help a driver safely back up into a parking space.
AI in Businesses:
AI is not just found in e-commerce. Expert systems help run most of the major businesses
the world over. Wal-Mart harnesses an expert system enhanced with an ANN to sift
through the data of all the sales at more than three thousand stores to find patterns and
relationships between stores, products, and customers. It can find the pattern in what sells
and what does not faster than hundreds of human analysts can. Expert systems even
manage billions of dollars in the stock market.
The Digital Doctor
Expert systems are also used in medicine to help doctors diagnose patients. In a 1997
study researchers concluded that medical students learn more than 47,000 facts and
29,000 concepts in just the first two years of medical school. Ideally all of that
knowledge can be programmed into an expert system.

Although Artificial Intelligence has come a long way into our lives, there is still not a single
computer or robot that possesses all the thinking capabilities humans have. As the term itself
says, Artificial, it refers to reasoning built on simple logic and questioning. Complex
reasoning is still out of the reach of Artificial Intelligence. AI devices and robots are only
able to do the tasks for which they have been previously programmed.

PROBLEM SOLVING IN GAMES:


 Problem solving in games such as "Sudoku" can be an example. It can be done by
building an artificially intelligent system to solve that particular problem. To do this, one
needs to define the problem statement first and then generate the solution, keeping
the constraints in mind.

 Some of the most popular problems solved with the help of artificial intelligence
are:
1. Chess.
2. Travelling Salesman Problem.
3. Tower of Hanoi Problem.
4. Water-Jug Problem.
5. N-Queen Problem.
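As an illustration, the Water-Jug Problem listed above can be solved by a plain breadth-first search over jug states. The sketch below is a minimal example; the capacities, target, and function name are illustrative choices, not a standard library API:

```python
from collections import deque

def water_jug(cap_a, cap_b, target):
    """Breadth-first search over (a, b) jug states; returns the state sequence."""
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == target or b == target:
            path, s = [], (a, b)
            while s is not None:          # walk parents back to the start
                path.append(s)
                s = parent[s]
            return path[::-1]
        pour_ab = min(a, cap_b - b)       # amount movable from A into B
        pour_ba = min(b, cap_a - a)       # amount movable from B into A
        moves = [
            (cap_a, b), (a, cap_b),       # fill either jug
            (0, b), (a, 0),               # empty either jug
            (a - pour_ab, b + pour_ab),   # pour A -> B
            (a + pour_ba, b - pour_ba),   # pour B -> A
        ]
        for s in moves:
            if s not in parent:
                parent[s] = (a, b)
                queue.append(s)
    return None

print(water_jug(4, 3, 2))
```

Because breadth-first search explores states level by level, the first path found uses the fewest moves.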

Problem Searching
 In general, searching refers to finding the information one needs.
 Searching is the most commonly used technique of problem solving in artificial
intelligence.
 A searching algorithm helps us search for the solution to a particular problem.

Problem
 Problems are the issues which arise in any system. A solution is needed to solve a
particular problem.
Steps: Solving a Problem Using Artificial Intelligence
 The process of solving a problem consists of five steps. These are:

1. Defining the Problem: The problem must be defined precisely. The definition
should contain the possible initial as well as final situations which should result in an
acceptable solution.

2. Analyzing the Problem: The problem and its requirements must be analyzed, as a
few features can have an immense impact on the resulting solution.

3. Identification of Solutions: This phase generates a reasonable number of solutions to the
given problem within a particular range.

4. Choosing a Solution: From all the identified solutions, the best solution is chosen based
on the results produced by the respective solutions.

5. Implementation: After choosing the best solution, it is implemented.


NATURAL LANGUAGE PROCESSING:
Natural Language Processing, usually shortened to NLP, is a branch of artificial
intelligence that deals with the interaction between computers and humans using
natural language, drawing meaning out of it so that the result can be used further.

The ultimate objective of NLP is to read, decipher, understand, and make sense of
human languages in a manner that is valuable.

Most NLP techniques rely on machine learning to derive meaning from human
languages.

In fact, a typical interaction between humans and machines using Natural Language
Processing could go as follows:

1. A human talks to the machine

2. The machine captures the audio

3. Audio to text conversion takes place

4. Processing of the text’s data

5. Data to audio conversion takes place

6. The machine responds to the human by playing the audio file

Natural Language Processing can take speech and written text as both input and
output, in the following combinations:

1. Input: Speech, Output: Text.


2. Input: Text, Output: Speech.

 Natural Language Processing mainly comprises two components:

1. Natural Language Understanding (NLU).


2. Natural Language Generation (NLG).
1. Natural Language Understanding

 NLU is much harder to implement as compared to NLG.


 This is because NLU is designed in such a manner that it is able to
handle unstructured and random inputs. It then converts those inputs into a
structured form so that machines can understand them and use them to generate
predictable and meaningful outputs.
 The NLU process is carried out before the NLG process.

2. Natural Language Generation

 With the help of NLG, internal representations can be converted into
meaningful phrases and then into sentences.
 Natural Language Generation can also be thought of as a translator which
translates data into human language.

How Natural Language Processing Works


 Every Natural Language Processing system goes through the five-step process
described below.

1. Lexical Analysis: This phase analyzes the structure of the words. It breaks whole
paragraphs into simple phrases, and phrases into even simpler words.

2. Syntactic Analysis: This phase arranges the series of words generated in the lexical
analysis phase and combines them in such a way as to generate meaningful sentences and
paragraphs.

3. Semantic Analysis: This phase extracts the meaning and checks the meaningfulness of
the sentences. For example: "wet water" is meaningless.
4. Discourse Integration: This phase draws out the meaning of the current sentence or
phrase on the basis of the previous and next sentences or phrases.

5. Pragmatic Analysis: This phase is responsible for extracting the actual meaning of
phrases by comparing them with real-world entities.
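The first phase, lexical analysis, can be sketched in a few lines of Python. This toy, regex-based version (far simpler than a real tokenizer) just splits a paragraph into sentences and then into word tokens:

```python
import re

def lexical_analysis(paragraph):
    """Toy lexical analysis: split a paragraph into sentences, then into word tokens."""
    # Split after sentence-ending punctuation, then keep only word characters.
    sentences = re.split(r"(?<=[.!?])\s+", paragraph.strip())
    return [re.findall(r"[A-Za-z']+", s) for s in sentences if s]

tokens = lexical_analysis("NLP is fun. It breaks paragraphs into words!")
print(tokens)  # [['NLP', 'is', 'fun'], ['It', 'breaks', 'paragraphs', 'into', 'words']]
```

Real systems handle abbreviations, numbers, and punctuation far more carefully, but the decomposition from paragraph to sentence to word is the same.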

What is NLP used for?


Natural Language Processing is the driving force behind the following common applications:

 Language translation applications such as Google Translate.

 Word processors such as Microsoft Word, and tools such as Grammarly, that employ
NLP to check the grammatical accuracy of texts.

 Interactive Voice Response (IVR) applications used in call centers to respond to certain
users’ requests.

 Personal assistant applications such as OK Google, Siri, Cortana, and Alexa.

What are the techniques used in NLP?

Syntactic analysis and semantic analysis are the main techniques used to complete Natural
Language Processing tasks.

Here is a description of how they can be used.

1. Syntax

Syntax refers to the arrangement of words in a sentence such that they make grammatical sense.

In NLP, syntactic analysis is used to assess how the natural language aligns with the grammatical
rules.

Computer algorithms are used to apply grammatical rules to a group of words and derive meaning
from them.

Here are some syntax techniques that can be used:


 Lemmatization: It entails reducing the various inflected forms of a word into a single form
for easy analysis.

 Morphological segmentation: It involves dividing words into individual units called
morphemes.

 Word segmentation: It involves dividing a large piece of continuous text into distinct units.

 Part-of-speech tagging: It involves identifying the part of speech for every word.

 Parsing: It involves undertaking grammatical analysis for the provided sentence.

 Sentence breaking: It involves placing sentence boundaries on a large piece of text.

 Stemming: It involves cutting the inflected words to their root form.
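Stemming, the last technique above, can be illustrated with a deliberately naive suffix-stripper. This is not the Porter algorithm; real stemmers handle many more cases, and this toy version over-strips (note "running" becomes "runn", not "run"):

```python
def simple_stem(word):
    """Toy suffix-stripping stemmer: cut a few common inflectional endings."""
    for suffix in ("ingly", "edly", "ing", "ed", "es", "s"):
        # Only strip if a reasonably long stem remains.
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

for w in ("running", "jumped", "boxes", "cats"):
    print(w, "->", simple_stem(w))
```

The gap between this sketch and a production stemmer (restoring "runn" to "run", handling irregular forms) is exactly why stemming and lemmatization are distinct techniques.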

2. Semantics

Semantics refers to the meaning that is conveyed by a text. Semantic analysis is one of the
difficult aspects of Natural Language Processing that has not been fully resolved yet.

It involves applying computer algorithms to understand the meaning and interpretation of words
and how sentences are structured.

Here are some techniques in semantic analysis:

 Named entity recognition (NER): It involves determining the parts of a text that can be
identified and categorized into preset groups. Examples of such groups include names of
people and names of places.

 Word sense disambiguation: It involves giving meaning to a word based on the context.

 Natural language generation: It involves using databases to derive semantic intentions and
convert them into human language.
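Named entity recognition can be approximated, very crudely, by dictionary (gazetteer) lookup. The entries below are invented for illustration; real NER systems learn entities and their categories from annotated corpora:

```python
# Hypothetical gazetteer: maps known tokens to preset entity groups.
GAZETTEER = {
    "Paris": "PLACE", "London": "PLACE",
    "Alice": "PERSON", "Bob": "PERSON",
}

def tag_entities(text):
    """Toy named-entity recognition by dictionary lookup over whitespace tokens."""
    return [(tok, GAZETTEER[tok]) for tok in text.split() if tok in GAZETTEER]

print(tag_entities("Alice flew from London to Paris"))
# [('Alice', 'PERSON'), ('London', 'PLACE'), ('Paris', 'PLACE')]
```

A lookup table cannot resolve ambiguity ("Paris" the person vs. the city), which is where word sense disambiguation and learned context models come in.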

Wrapping up

Natural Language Processing plays a critical role in supporting machine-human interactions.

As more research is carried out in this field, we expect to see more breakthroughs that will
make machines smarter at recognizing and understanding human language.
AUTOMATED REASONING:

Automated reasoning is an area of cognitive science (involving knowledge
representation and reasoning) and metalogic dedicated to understanding different
aspects of reasoning. The study of automated reasoning helps produce computer
programs that allow computers to reason completely, or nearly completely,
automatically. Although automated reasoning is considered a sub-field of artificial
intelligence, it also has connections with theoretical computer science and
even philosophy.

The most developed subareas of automated reasoning are automated theorem
proving (and the less automated but more pragmatic subfield of interactive theorem
proving) and automated proof checking (viewed as guaranteed correct reasoning
under fixed assumptions). Extensive work has also been done in reasoning
by analogy using induction and abduction.

Reasoning is the ability to make inferences, and automated reasoning is concerned with the
building of computing systems that automate this process. Although the overall goal is to
mechanize different forms of reasoning, the term has largely been identified with valid
deductive reasoning as practiced in mathematics and formal logic. In this respect, automated
reasoning is akin to mechanical theorem proving.

Building an automated reasoning program means providing an algorithmic description of a
formal calculus so that it can be implemented on a computer to prove theorems of the
calculus in an efficient manner.
Important aspects of this exercise involve defining the class of problems the program will be
required to solve, deciding what language will be used by the program to represent the
information given to it as well as new information inferred by the program, specifying the
mechanism that the program will use to conduct deductive inferences, and figuring out how
to perform all these computations efficiently.
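One simple deductive mechanism of the kind described above is forward chaining over propositional if-then rules: repeatedly apply modus ponens until no new facts can be derived. The facts and rules below are invented for illustration:

```python
def forward_chain(facts, rules):
    """Naive forward chaining: apply modus ponens until the fact set stops growing.

    rules is a list of (premises, conclusion) pairs, where premises is a set of
    propositions that must all hold for the conclusion to be added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"rain"}, "wet_ground"),
    ({"wet_ground", "cold"}, "icy_ground"),
]
print(forward_chain({"rain", "cold"}, rules))
```

Real theorem provers work over far richer logics (with variables, quantifiers, and resolution rather than plain modus ponens), but the loop "derive, add, repeat until fixed point" is the same shape.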

While basic research work continues in order to provide the necessary theoretical framework,
the field has reached a point where automated reasoning programs are being used by
researchers to attack open questions in mathematics and logic, provide important applications
in computing science, solve problems in engineering, and find novel approaches to questions
in exact philosophy.

Visual Perception:
Perception:
Perception is the process of acquiring, interpreting, selecting, and organizing
sensory information.

Perception presumes sensation, where various types of sensors each convert a
certain type of simple signal into data for the system. Putting the data together and
making sense of them is the job of the perception mechanism.

Perception can be seen as a special type of categorization (or classification, or pattern
recognition) where the inputs are sensory data, and the outputs are categorical
judgments and conceptual relations.

The difficulty of the task comes from the need of multiple levels of abstraction,
where the relations among data items are many-to-many, uncertain, and changing
over time.

Strictly speaking, we never "see things as they are": the perception process of
an intelligent system is often (and should be) influenced by internal and external
factors besides the signals themselves. Furthermore, perception is not a purely passive
process driven by the input.

In AI, the study of perception is mostly focused on the reproduction of human
perception, especially the perception of aural and visual signals. However, this
is not strictly necessary, since the perception mechanism of a computer system
does not have to be identical to that of a human being.

Vision or Visual Perception


Computer Vision is the science and technology of obtaining models, meaning and
control information from visual data. The two main fields of computer vision are
computational vision and machine vision. Computational vision has to do with
simply recording and analyzing the visual perception, and trying to understand it.
Machine vision has to do with using what is found from computational vision and
applying it to benefit people, animals, environment, etc.

Computer Vision has influenced the field of Artificial Intelligence greatly. The
Robocup tournament and ASIMO are examples of Artificial Intelligence using
Computer Vision to its greatest extent. The RoboCup tournament is a tournament for
robot dogs playing soccer. To be able to play soccer, these dogs must be able to see
the ball and then react to it accordingly. Engineers of these robot dogs have been
challenged to create robot dogs that can beat the best human soccer players within
around fifty years.

ASIMO, seen below, is another example of how computer vision is an important part
of Artificial Intelligence. ASIMO is a robot created by Honda; like all robots, it needs
to know where to move and what is in its surroundings. To do this, ASIMO uses
cameras to visualize computationally what is in its surroundings, and then uses that
information to achieve its goal.

Artificial Intelligence can also use computer vision to communicate with humans.
GRACE the robot, shown below, is a robot who could communicate slightly with
humans to be able to recognize her surroundings and achieve a specific goal. For
example, GRACE attended a conference through a lobby and up an elevator by
communicating with humans. Communications included understanding that she had
to wait in line, and asking others to press the elevator button for her. She also has a
binocular vision system allowing her to react to human gestures as well.

Artificial Intelligence also uses computer vision to recognize handwritten text and
drawings. Text typed in a document can be read by the computer easily, but
handwritten text cannot. Computer vision addresses this by converting handwritten
figures into figures that can be used by a computer. An example is shown below. The
attempted drawing of a rectangular prism resting on three other rectangular prisms is
converted by computer vision into a 3-D picture of the same thing, but in a format
usable by the computer and more readable by users.
Another important part of Artificial Intelligence is passive observation and analysis.
Passive observation and analysis is using computer vision to observe and analyze
certain objects over time. For example, in the pictures below, on the first one, the
passing cars are being observed and analyzed as what type of car by the computer.
This can be done by outlining the car shape and recording it. In the second picture,
the flock of geese is observed and analyzed over time. The record could serve to
predict when geese would come again, how long they would stay, and how many
of them there could be.

Heuristic Algorithm:
A heuristic is a technique to solve a problem faster than classic methods, or to
find an approximate solution when classic methods cannot. It is a kind of
shortcut, as we often trade one of optimality, completeness, accuracy, or
precision for speed. A heuristic (or heuristic function) guides search
algorithms: at each branching step, it evaluates the available information and
decides which branch to follow by ranking the alternatives. A heuristic is any
device that is often effective but is not guaranteed to work in every case.

So why do we need heuristics? One reason is to produce, in a reasonable
amount of time, a solution that is good enough for the problem in question. It
doesn't have to be the best; an approximate solution will do as long as it is
found fast enough. Many search problems have exponentially large state
spaces, and heuristic search lets us cut the work down dramatically. We use
heuristics in AI because we can apply them in situations where no efficient
exact algorithm is known.
We can say heuristic techniques are weak methods because they are
vulnerable to combinatorial explosion.
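To make "heuristic function" concrete: in grid pathfinding, the Manhattan distance is a classic heuristic that estimates the remaining cost to the goal. Assuming only four-directional unit moves, it never overestimates, which is the property A* (discussed later) needs for optimality:

```python
def manhattan(node, goal):
    """Classic grid-search heuristic: estimated remaining cost to the goal,
    assuming moves of one step in the four compass directions."""
    (x1, y1), (x2, y2) = node, goal
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan((0, 0), (3, 4)))  # 7
```

A search algorithm uses such a function to rank the frontier: nodes with lower estimated remaining cost are explored first.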

Heuristic Search Techniques in Artificial Intelligence


Briefly, we can classify heuristic search techniques into two categories:

a. Direct Heuristic Search Techniques in AI


Other names for these are Blind Search, Uninformed Search, and Blind Control
Strategy. These aren’t always possible since they demand much time or memory.
They search the entire state space for a solution and use an arbitrary ordering of
operations. Examples of these are Breadth First Search (BFS) and Depth First Search
(DFS).
b. Weak Heuristic Search Techniques in AI
Other names for these are Informed Search, Heuristic Search, and Heuristic Control
Strategy. These are effective if applied correctly to the right types of tasks and usually
demand domain-specific information. We need this extra information to compute
preference among child nodes to explore and expand. Each node has a heuristic
function associated with it. Examples are Best First Search (BFS) and A*.
Before we move on to describe certain techniques, let’s first take a look at the ones we
generally observe. Below, we name a few.
 Best-First Search
 A* Search
 Bidirectional Search
 Tabu Search
 Beam Search
 Simulated Annealing
 Hill Climbing
 Constraint Satisfaction Problems

Hill Climbing in Artificial Intelligence


First, let’s talk about Hill Climbing in Artificial Intelligence. This is a heuristic for
mathematical optimization problems: we need to choose values of the input that
maximize or minimize a real-valued function. It is acceptable if the solution isn’t the
global optimum.
One example of Hill Climbing is the widely discussed Travelling Salesman
Problem, where we must minimize the distance the salesman travels.
a. Features of Hill Climbing in AI
Let’s discuss some of the features of this algorithm (Hill Climbing):
 It is a variant of the generate-and-test algorithm
 It makes use of the greedy approach
This means it keeps generating possible solutions until it finds the expected solution,
and moves only in the direction which optimizes the cost function for it.
b. Types of Hill Climbing in AI
 Simple Hill Climbing- This examines one neighboring node at a time and selects
the first one that optimizes the current cost to be the next node.
 Steepest Ascent Hill Climbing- This examines all neighboring nodes and selects
the one closest to the solution state.
 Stochastic Hill Climbing- This selects a neighboring node at random and decides
whether to move to it or examine another.

Let’s take a look at the algorithm for simple hill climbing.


1. Evaluate the initial state: if it is the goal state, stop and return success. Otherwise,
make the initial state the current state.
2. Loop until a solution is reached or no new operators are left to apply to the current
state:
a. Select a new operator and apply it to the current state, producing a new state.
b. Evaluate the new state:
 If it is the goal state, stop and return success.
 If it is better than the current state, make it the current state and proceed.
 If it is not better than the current state, continue the loop with another operator.
3. Exit.
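The steps above can be sketched in Python. This is a minimal sketch, not the canonical implementation: the objective function `f` and the neighbor generator `step` below are hypothetical examples chosen for illustration.

```python
# A minimal sketch of simple hill climbing, assuming we maximize an
# objective function over integer states (hypothetical example).

def simple_hill_climbing(objective, start, neighbors):
    """Move to the first neighbor that improves the objective; stop when none does."""
    current = start
    while True:
        improved = False
        for candidate in neighbors(current):
            if objective(candidate) > objective(current):
                current = candidate          # greedy: take the first better neighbor
                improved = True
                break
        if not improved:                     # no better neighbor: a (local) maximum
            return current

# Hypothetical objective with a single peak at x = 3, and +/-1 moves.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
```

Starting from 0, the search climbs 0 → 1 → 2 → 3 and stops at 3, the peak of this particular objective. On an objective with several peaks, the same code would stop at whichever local maximum it reaches first, which is exactly the limitation discussed next.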

c. Problems with Hill Climbing in AI


We usually run into one of three issues:
 Local Maximum- All neighboring states have values worse than the current one.
The greedy approach means we won’t move to a worse state, so the process
terminates even though a better solution may exist elsewhere. As a workaround,
we use backtracking.
 Plateau- All neighbors have the same value, which makes it impossible to
choose a direction. To escape, we make a random big jump.
 Ridge- At a ridge, movement in all possible directions is downward, so it looks
like a peak and the process terminates. To avoid this, we may apply two or
more rules before testing, i.e. move in several directions at once.

7. Best-First Search Heuristic


Best First Search is an informed search that uses an evaluation function to decide
which adjacent node is the most promising before it continues to explore. (It is often
abbreviated BFS, but it should not be confused with Breadth-First Search.) Breadth-
and Depth-First Search blindly explore paths without keeping a cost function in mind.
Things are different with Best First Search: here, we use a priority queue to store
node costs. Let’s understand the Best First Search heuristic through pseudocode.
1. Define a list OPEN containing the single node s, the start node.
2. IF the list is empty, return failure.
3. Remove the node n with the best score from OPEN and move it to the list CLOSED.
4. Expand node n.
5. IF any successor of n is the goal node, return success and trace the path from the
goal node back to s to return the solution.
6. FOR each successor node:
 Apply the evaluation function f.
 IF the node is not in either list, add it to the list OPEN.
7. Loop to step 2.
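The pseudocode above can be turned into a short Python sketch using the standard-library `heapq` module as the priority queue. The graph and heuristic values below are hypothetical, chosen only to demonstrate the control flow.

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Greedy best-first search: always expand the open node with the lowest h value.

    graph maps a node to a list of its successors; h(node) is the evaluation function.
    Returns the path from start to goal, or None if the goal is unreachable.
    """
    open_list = [(h(start), start, [start])]     # priority queue ordered by h
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)  # node with the best score
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for succ in graph.get(node, []):          # expand node
            if succ not in closed:
                heapq.heappush(open_list, (h(succ), succ, path + [succ]))
    return None

# Hypothetical example graph and heuristic values.
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = {'S': 5, 'A': 1, 'B': 3, 'G': 0}
```

With these values, `best_first_search(graph, h.get, 'S', 'G')` prefers A (h = 1) over B (h = 3) and returns the path `['S', 'A', 'G']`. Note that, unlike A*, nothing here guarantees this path is the cheapest.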

A* Search Algorithm
Artificial intelligence at its core strives to solve problems of enormous
combinatorial complexity. Over the years, many of these problems have been
boiled down to search problems.

A path search problem is a computational problem where you have to find a


path from point A to point B. In our case, we'll be mapping search problems to
appropriate graphs, where the nodes represent all the possible states we can
end up in and the edges represent all the possible transitions we have at
our disposal.

Any time we want to convert any kind of problem into a search problem, we
have to define six things:

1. A set of all states we might end up in


2. The start and finish state
3. A finish check (a way to check if we're at the finished state)
4. A set of possible actions (in this case, different directions of movement)
5. A traversal function (a function that will tell us where we'll end up if we
go a certain direction)
6. A set of movement costs from state-to-state (which correspond to edges
in the graph)
Basic Concepts of A*
A* is based on using heuristic methods to
achieve optimality and completeness, and is a variant of the best-first
algorithm.

When a search algorithm has the property of optimality, it means it is


guaranteed to find the best possible solution, in our case the shortest path to
the finish state. When a search algorithm has the property of completeness, it
means that if a solution to a given problem exists, the algorithm is guaranteed
to find it.

Each time A* enters a state, it calculates the cost, f(n) (n being the
neighboring node), to travel to all of the neighboring nodes, and then enters
the node with the lowest value of f(n).

These values are calculated with the following formula:

f(n) = g(n) + h(n)
with g(n) being the cost of the shortest path found so far from the start node to node n,
and h(n) being a heuristic approximation of the remaining cost from n to the goal.

For us to be able to reconstruct any path, we need to mark every node with
the relative (parent) through which it was reached on the path with the optimal f(n)
value. This also means that if we revisit certain nodes, we'll have to update their
optimal relatives as well. More on that later.
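Putting these pieces together, here is a compact Python sketch of A* with parent tracking for path reconstruction. The example graph at the bottom is hypothetical; a real application would supply a domain-specific heuristic instead of the zero heuristic used in the usage note.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search. graph maps node -> {neighbor: edge cost}; h(n) estimates cost to goal.

    Returns (cost, path) for the cheapest path found, or None if no path exists.
    g holds the best known cost from the start; parents records each node's
    optimal relative and is updated whenever a cheaper route to a node is found.
    """
    g = {start: 0}
    parents = {start: None}
    open_list = [(h(start), start)]            # ordered by f(n) = g(n) + h(n)
    closed = set()
    while open_list:
        _, n = heapq.heappop(open_list)
        if n == goal:                          # reconstruct path via parents
            path = []
            while n is not None:
                path.append(n)
                n = parents[n]
            return g[goal], path[::-1]
        if n in closed:
            continue
        closed.add(n)
        for m, cost in graph.get(n, {}).items():
            if m not in g or g[n] + cost < g[m]:
                g[m] = g[n] + cost
                parents[m] = n                 # update the optimal relative
                heapq.heappush(open_list, (g[m] + h(m), m))
    return None

# Hypothetical weighted graph: the direct A->C edge costs 4, but A->B->C costs 2.
graph = {'A': {'B': 1, 'C': 4}, 'B': {'C': 1}, 'C': {}}
```

For example, `a_star(graph, lambda n: 0, 'A', 'C')` returns `(2, ['A', 'B', 'C'])`: the revisit of C through B triggers the parent update described above. With h ≡ 0, A* degenerates to uniform-cost search.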

The efficiency of A* is highly dependent on the heuristic value h(n), and


depending on the type of problem, we may need to use a different heuristic
function for it to find the optimal solution.

Construction of such functions is no easy task and is one of the fundamental


problems of AI. The two fundamental properties a heuristic function can have
are admissibility and consistency.
Admissibility and Consistency
A given heuristic function h(n) is admissible if it never overestimates the real
distance between n and the goal node.

Therefore, for every node n the following formula applies:

h(n) ≤ h*(n)
h*(n) being the real distance between n and the goal node. However, if the
function does overestimate the real distance, but never by more than d, we
can safely say that the solution that the function produces is of accuracy d (i.e.
it doesn't overestimate the shortest path from start to finish by more than d).

A given heuristic function h(n) is consistent if, for every node n and every neighbor m
of n, the estimate h(n) is less than or equal to the cost of reaching that neighbor plus
the estimate from the neighbor:

c(n, m) + h(m) ≥ h(n)
c(n, m) being the cost of the edge between nodes n and m. Additionally, if h(n) is
consistent, then we know the optimal path to any node that has already been
inspected, so no node ever needs to be re-expanded.

Theorem: If a heuristic function is consistent, then it is also admissible.

Proof by Complete Induction


The induction parameter N will be the number of nodes between node n and
the finish node s on the shortest path between the two.

Base: N=0

If there are no nodes between n and s, and because we know that h(n) is
consistent, the following equation is valid:

c(n, s) + h(s) ≥ h(n)
Knowing h*(n)=c(n,s) and h(s)=0 we can safely deduce that:

h*(n) ≥ h(n)
Induction hypothesis: N < k

We hypothesize that the given rule is true for every N < k.

Induction step:

In the case of N = k nodes on the shortest path from n to s, we inspect the first
successor (node m) of node n on that path. Because the path from m to s contains
k−1 nodes, the induction hypothesis applies to m (h*(m) ≥ h(m)), and the following
is valid:

h∗(n)=c(n,m)+h∗(m)≥c(n,m)+h(m)≥h(n)

Search Algorithms in Artificial Intelligence


Search algorithms are one of the most important areas of Artificial Intelligence. This topic
will explain all about the search algorithms in AI.

Problem-solving agents:
In Artificial Intelligence, search techniques are universal problem-solving
methods. Rational agents or problem-solving agents in AI mostly use these search
strategies or algorithms to solve a specific problem and provide the best result. Problem-
solving agents are goal-based agents that use an atomic representation of states. In this
topic, we will learn various problem-solving search algorithms.

Search Algorithm Terminologies:


o Search: Searching is a step-by-step procedure to solve a search problem in a given
search space. A search problem can have three main factors:
a. Search Space: The set of possible solutions a system may have.
b. Start State: The state from which the agent begins the search.
c. Goal Test: A function which observes the current state and returns
whether the goal state has been achieved.
o Search tree: A tree representation of the search problem is called a search tree. The
root of the search tree corresponds to the initial state.
o Actions: A description of all the actions available to the agent.
o Transition model: A description of what each action does.
o Path Cost: A function which assigns a numeric cost to each path.
o Solution: An action sequence which leads from the start node to the goal node.
o Optimal Solution: A solution that has the lowest cost among all solutions.

Properties of Search Algorithms:


Following are the four essential properties of search algorithms to compare the efficiency of
these algorithms:

Completeness: A search algorithm is said to be complete if it is guaranteed to return a


solution whenever at least one solution exists, for any input.

Optimality: If the solution found by an algorithm is guaranteed to be the best solution


(lowest path cost) among all solutions, then it is said to be an
optimal solution.

Time Complexity: Time complexity is a measure of how long the algorithm takes to
complete its task.

Space Complexity: It is the maximum storage space required at any point during the
search, as a function of the complexity of the problem.

Types of search algorithms


Based on the search problems we can classify the search algorithms into
uninformed (Blind search) search and informed search (Heuristic search)
algorithms.
Uninformed/Blind Search:
The uninformed search does not contain any domain knowledge such as closeness, the
location of the goal. It operates in a brute-force way as it only includes information about
how to traverse the tree and how to identify leaf and goal nodes. Uninformed search applies
a way in which search tree is searched without any information about the search space like
initial state operators and test for the goal, so it is also called blind search.It examines each
node of the tree until it achieves the goal node.

It can be divided into five main types:

o Breadth-first search
o Uniform cost search
o Depth-first search
o Iterative deepening depth-first search
o Bidirectional Search
Informed Search
Informed search algorithms use domain knowledge. In an informed search, problem
information is available which can guide the search. Informed search strategies can find a
solution more efficiently than an uninformed search strategy. Informed search is also called
a Heuristic search.

A heuristic is a technique that might not always find the best solution but is guaranteed
to find a good solution in reasonable time.

Informed search can solve much more complex problems that could not be solved
otherwise.

The travelling salesman problem is a classic example of a problem tackled with
informed search. Two common informed search algorithms are:

1. Greedy Search
2. A* Search

Uninformed Search Algorithms


Uninformed search is a class of general-purpose search algorithms which operates
in brute force-way. Uninformed search algorithms do not have additional
information about state or search space other than how to traverse the tree, so it
is also called blind search.

Following are the various types of uninformed search algorithms:

1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search

1. Breadth-first Search:
o Breadth-first search is the most common search strategy for traversing a tree or
graph. This algorithm searches breadthwise in a tree or graph, so it is called
breadth-first search.
o BFS algorithm starts searching from the root node of the tree and expands all
successor node at the current level before moving to nodes of next level.
o The breadth-first search algorithm is an example of a general-graph search
algorithm.
o Breadth-first search implemented using FIFO queue data structure.

Advantages:

o BFS will provide a solution if any solution exists.


o If there is more than one solution for a given problem, then BFS will provide the
minimal solution, i.e. the one that requires the least number of steps.

Disadvantages:

o It requires lots of memory since each level of the tree must be saved into memory to
expand the next level.
o BFS needs lots of time if the solution is far away from the root node.

Example:
In the below tree structure, we have shown the traversing of the tree using BFS algorithm
from the root node S to goal node K. BFS search algorithm traverse in layers, so it will
follow the path which is shown by the dotted arrow, and the traversed path will be:

1. S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Time Complexity: The time complexity of the BFS algorithm is given by the number of
nodes traversed until the shallowest goal node, where d is the depth of the shallowest
solution and b is the branching factor (number of successors at every state):

T(b) = 1 + b + b² + b³ + ... + b^d = O(b^d)

Space Complexity: The space complexity of the BFS algorithm is given by the memory
size of the frontier, which is O(b^d).

Completeness: BFS is complete, which means if the shallowest goal node is at some finite
depth, then BFS will find a solution.

Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the
node.
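The FIFO behavior described above can be sketched in Python with `collections.deque`. The graph below is a hypothetical adjacency list standing in for the S-to-K tree of the example.

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """BFS with a FIFO queue; returns the shallowest path from start to goal, or None.

    graph maps each node to a list of its successors.
    """
    queue = deque([[start]])                   # queue of partial paths
    visited = {start}
    while queue:
        path = queue.popleft()                 # FIFO: oldest (shallowest) path first
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in visited:            # never enqueue a node twice
                visited.add(succ)
                queue.append(path + [succ])
    return None

# Hypothetical example graph.
graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': [], 'D': ['K'], 'K': []}
```

Here `breadth_first_search(graph, 'S', 'K')` returns `['S', 'B', 'D', 'K']`: every level-1 node is expanded before any level-2 node, exactly as the layered traversal in the example shows.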

2. Depth-first Search
o Depth-first search is a recursive algorithm for traversing a tree or graph data
structure.
o It is called the depth-first search because it starts from the root node and follows
each path to its greatest depth node before moving to the next path.
o DFS uses a stack data structure for its implementation.
o The process of the DFS algorithm is similar to the BFS algorithm.

Note: Backtracking is an algorithm technique for finding all possible solutions using
recursion.

Advantage:

o DFS requires much less memory, as it only needs to store the stack of nodes on the
path from the root node to the current node.
o It can reach the goal node in less time than the BFS algorithm (if it happens to
traverse the right path).

Disadvantage:

o There is the possibility that many states keep re-occurring, and there is no
guarantee of finding the solution.
o The DFS algorithm goes deep down in its search, and it may sometimes enter an
infinite loop.
Example:
In the below search tree, we have shown the flow of depth-first search, and it will follow the
order as:

Root node--->Left node ----> right node.

It will start searching from root node S, and traverse A, then B, then D and E, after
traversing E, it will backtrack the tree as E has no other successor and still goal node is not
found. After backtracking it will traverse node C and then G, and here it will terminate as it
found goal node.


Completeness: The DFS search algorithm is complete within a finite state space, as it
will expand every node within a bounded search tree.

Time Complexity: The time complexity of DFS is equivalent to the number of nodes
traversed by the algorithm:

T(b) = 1 + b + b² + ... + b^m = O(b^m)

where m is the maximum depth of any node, which can be much larger than d
(the depth of the shallowest solution).

Space Complexity: The DFS algorithm needs to store only a single path from the root
node, so the space complexity of DFS is equivalent to the size of the fringe set, which
is O(b·m).

Optimality: The DFS search algorithm is non-optimal, as it may take a large number of
steps or incur a high cost to reach the goal node.
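The recursive formulation with backtracking can be sketched as follows; the graph is a hypothetical adjacency list mirroring the left-then-right traversal of the example.

```python
def depth_first_search(graph, start, goal, visited=None):
    """Recursive DFS: follow each path to its greatest depth before backtracking.

    graph maps each node to a list of its successors.
    Returns a path from start to goal, or None if none is found.
    """
    if visited is None:
        visited = set()
    if start == goal:
        return [start]
    visited.add(start)
    for succ in graph.get(start, []):
        if succ not in visited:
            sub = depth_first_search(graph, succ, goal, visited)
            if sub is not None:
                return [start] + sub           # prepend on the way back up
    return None                                # dead end: backtrack

# Hypothetical example: the A branch is a dead end, forcing a backtrack to C.
graph = {'S': ['A', 'C'], 'A': ['B'], 'B': [], 'C': ['G'], 'G': []}
```

Here `depth_first_search(graph, 'S', 'G')` first dives down S → A → B, finds no successor, backtracks, and then succeeds with `['S', 'C', 'G']`, matching the narrative above.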

3. Depth-Limited Search Algorithm:


A depth-limited search algorithm is similar to depth-first search but with a predetermined
depth limit. Depth-limited search overcomes the drawback of infinite paths in depth-first
search: in this algorithm, a node at the depth limit is treated as if it has no further successors.

Depth-limited search can be terminated with two Conditions of failure:


o Standard failure value: It indicates that the problem does not have any solution.
o Cutoff failure value: It indicates that there is no solution within the given depth limit.

Advantages:

Depth-limited search is Memory efficient.

Disadvantages:

o Depth-limited search also has a disadvantage of incompleteness.


o It may not be optimal if the problem has more than one solution.

Example:

Completeness: The DLS algorithm is complete if the solution lies within the depth limit.

Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).

Space Complexity: The space complexity of the DLS algorithm is O(b·ℓ).

Optimality: Depth-limited search can be viewed as a special case of DFS, and it is also not
optimal, even when ℓ > d.
4. Uniform-cost Search Algorithm:
Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph.
This algorithm comes into play when a different cost is available for each edge. The primary
goal of uniform-cost search is to find a path to the goal node with the lowest
cumulative cost. Uniform-cost search expands nodes according to their path cost from the
root node. It can be used on any graph or tree where the optimal cost is in demand. The
uniform-cost search algorithm is implemented using a priority queue, which gives maximum
priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS algorithm
if the path cost of all edges is the same.

Advantages:

o Uniform cost search is optimal because at every state the path with the least cost is
chosen.

Disadvantages:

o It does not care about the number of steps involved in the search and is only
concerned with path cost, so the algorithm may get stuck in an infinite loop.

Example:

Completeness:
Uniform-cost search is complete, such as if there is a solution, UCS will find it.

Time Complexity:

Let C* be the cost of the optimal solution and ε the minimum cost of a single step toward
the goal node. Then the number of steps is at most ⌈C*/ε⌉ + 1; we add 1 because we start
from state 0 and go up to C*/ε.

Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + ⌈C*/ε⌉)).

Space Complexity:

By the same logic, the worst-case space complexity of uniform-cost search
is O(b^(1 + ⌈C*/ε⌉)).

Optimal:

Uniform-cost search is always optimal as it only selects a path with the lowest path cost.
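The priority-queue behavior described above can be sketched with `heapq`. The weighted graph below is hypothetical: the direct S → B edge costs 5, while the detour through A costs only 2, so UCS must prefer the detour.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """UCS: always expand the frontier node with the lowest cumulative path cost.

    graph maps node -> {neighbor: edge cost}. Returns (cost, path) or None.
    """
    frontier = [(0, start, [start])]           # priority queue keyed on path cost
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:                       # first time the goal is popped is optimal
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for succ, step in graph.get(node, {}).items():
            if succ not in explored:
                heapq.heappush(frontier, (cost + step, succ, path + [succ]))
    return None

# Hypothetical weighted graph: going S->A->B (cost 2) beats S->B directly (cost 5).
graph = {'S': {'A': 1, 'B': 5}, 'A': {'B': 1}, 'B': {}}
```

Here `uniform_cost_search(graph, 'S', 'B')` returns `(2, ['S', 'A', 'B'])` rather than the single-step path of cost 5, which illustrates why UCS is optimal but indifferent to the number of steps.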

5. Iterative Deepening Depth-first Search:


The iterative deepening algorithm is a combination of DFS and BFS algorithms. This search
algorithm finds out the best depth limit and does it by gradually increasing the limit until a
goal is found.

This algorithm performs depth-first search up to a certain "depth limit", and it keeps
increasing the depth limit after each iteration until the goal node is found.

This Search algorithm combines the benefits of Breadth-first search's fast search and depth-
first search's memory efficiency.

Iterative deepening is a useful uninformed search strategy when the search space is
large and the depth of the goal node is unknown.

Advantages:

o It combines the benefits of the BFS and DFS search algorithms in terms of fast search
and memory efficiency.

Disadvantages:

o The main drawback of IDDFS is that it repeats all the work of the previous phase.

Example:
Following tree structure is showing the iterative deepening depth-first search. IDDFS
algorithm performs various iterations until it does not find the goal node. The iteration
performed by the algorithm is given as:
1'st Iteration-----> A
2'nd Iteration----> A, B, C
3'rd Iteration------>A, B, D, E, C, F, G
4'th Iteration------>A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.

Completeness:

This algorithm is complete if the branching factor is finite.

Time Complexity:

Let's suppose b is the branching factor and d is the depth of the goal; then the worst-case
time complexity is O(b^d).

Space Complexity:

The space complexity of IDDFS is O(b·d).

Optimal:

IDDFS algorithm is optimal if path cost is a non- decreasing function of the depth of the
node.
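The iterative deepening idea, a depth-limited DFS wrapped in a loop over increasing limits, can be sketched as below. This sketch assumes a finite tree or DAG (it keeps no visited set within one iteration), and the example graph is hypothetical.

```python
def depth_limited(graph, node, goal, limit):
    """DFS that treats a node at the depth limit as having no successors."""
    if node == goal:
        return [node]
    if limit == 0:
        return None                            # cutoff reached
    for succ in graph.get(node, []):
        sub = depth_limited(graph, succ, goal, limit - 1)
        if sub is not None:
            return [node] + sub
    return None

def iterative_deepening(graph, start, goal, max_depth=20):
    """Run depth-limited search with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):         # each iteration redoes shallower work
        path = depth_limited(graph, start, goal, limit)
        if path is not None:
            return path
    return None

# Hypothetical example tree with the goal D at depth 2.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
```

Here `iterative_deepening(graph, 'A', 'D')` fails at limits 0 and 1, then succeeds at limit 2 with `['A', 'B', 'D']`, repeating the shallow work each round, exactly the drawback noted above, while using only DFS-sized memory.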

6. Bidirectional Search Algorithm:


The bidirectional search algorithm runs two simultaneous searches, one from the
initial state (called forward search) and the other from the goal node (called
backward search), to find the goal node. Bidirectional search replaces one single
search graph with two small subgraphs: one starts the search from the initial
vertex and the other starts from the goal vertex. The search stops when these two
graphs intersect each other.

Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.

Advantages:

o Bidirectional search is fast.


o Bidirectional search requires less memory

Disadvantages:

o Implementation of the bidirectional search tree is difficult.


o In bidirectional search, one should know the goal state in advance.
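A minimal sketch of the idea, using breadth-first frontiers from both ends of a hypothetical undirected graph; for brevity it returns only the node where the two searches meet, not the full reconstructed path.

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Grow two BFS frontiers, one from start and one from goal, until they meet.

    Assumes an undirected graph given as node -> list of neighbors.
    Returns the meeting node, or None if the searches never intersect.
    """
    if start == goal:
        return start
    front, back = {start}, {goal}              # visited sets for each direction
    fq, bq = deque([start]), deque([goal])
    while fq and bq:
        # Alternate: expand one node forward, then one node backward.
        for frontier, queue, other in ((front, fq, back), (back, bq, front)):
            if not queue:
                continue
            node = queue.popleft()
            for succ in graph.get(node, []):
                if succ in other:              # the two searches intersect here
                    return succ
                if succ not in frontier:
                    frontier.add(succ)
                    queue.append(succ)
    return None

# Hypothetical chain A - B - C - D.
graph = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C']}
```

On the chain A–B–C–D, searching from A toward D meets the backward search at C: each side explores roughly half the chain, which is where the speed and memory savings come from.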
Informed Search Algorithms
So far we have talked about uninformed search algorithms, which look through the search
space for all possible solutions without any additional knowledge about that space.
An informed search algorithm, by contrast, uses knowledge such as how far we are from
the goal, the path cost, and how to reach the goal node. This knowledge helps agents
explore less of the search space and find the goal node more efficiently.

Informed search algorithms are more useful for large search spaces. Because informed
search uses the idea of a heuristic, it is also called heuristic search.

Heuristic function: A heuristic is a function used in informed search to find the most
promising path. It takes the current state of the agent as input and produces an estimate
of how close the agent is to the goal. The heuristic method might not always give the
best solution, but it is guaranteed to find a good solution in reasonable time. The
heuristic function estimates how close a state is to the goal; it is denoted h(n), and it
estimates the cost of an optimal path between the current state and the goal state. The
value of the heuristic function is always non-negative.
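As a concrete illustration of such a function, here is a sketch of the Manhattan-distance heuristic commonly used for grid worlds (this specific heuristic is our example, not one named in the text above):

```python
def manhattan_distance(state, goal):
    """h(n) for a grid world: |dx| + |dy| between the state and the goal.

    Admissible when each move changes one coordinate by 1, since no actual
    path on such a grid can be shorter than this estimate.
    """
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)
```

For instance, `manhattan_distance((0, 0), (3, 4))` is 7: an estimate of how close the agent is to the goal, never negative, and zero exactly at the goal state.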

Heuristic algorithm versus solution-guaranteed algorithm

An "algorithm" is any set of rules for doing something. What you mean is a "solution
algorithm". A "solution algorithm" guarantees a correct solution; the "guarantee" is the
key phrase. The Gaussian elimination method taught for solving a system of linear
equations is a "solution algorithm" in that it always gives the right answer. Solution
algorithms for a problem can be faster or slower, but they all carry the same guarantee
of being correct.

A "heuristic" is an algorithm that does not guarantee a correct solution. A "good"


heuristic is one that will get you either the correct or a good enough solution most of
the time. Heuristics are used when either there is no known solution algorithm, or you
are interested in something faster than the known solution algorithm.

Imagine that you see a set of boxes on the other side of the room, and you have to
guess which is the heaviest. A fair heuristic would be to guess that the largest box is the
heaviest. The real answer could be found by actually weighing the boxes. However, it
may be either that weighing the boxes is impossible, or you do not want to spend the
time to weigh the boxes. In those cases, you would use the heuristic of guessing that the
largest is the heaviest.
The key point about a heuristic is that there is no way of knowing when the solution you
get is wrong. If there was, you could create a self-correction loop and get the right
solution, and that would mean you have a solution algorithm. But like with the boxes,
just by looking at them, you would never know when the largest is not the heaviest.
With this box weight heuristic, usually, and under a lot of conditions, you would be right.
But just by looking at them you could never know when the largest is full of pillows, and
the smallest is full of lead. You would never know when you were wrong. The best thing
to do in practice is to do an empirical statistical study of the typical situation and the
typical answer you get.

Note that this is different from an "approximate solution algorithm", which guarantees
that the solution is correct to within some degree. Weighing the boxes on a cheap scale
is an approximate solution algorithm; guessing their weight by their size is a heuristic.

When confronted with solving sets of non-linear equations, or solving NP problems in


general, or solving problems like face recognition that are too poorly defined to even be
formally stated as an NP problem, the only way to get a solution in practice is through
heuristics. You then just have to live with the fact that you will never know when you are
way off.
