ARTIFICIAL INTELLIGENCE
UNIT 1
"It is a branch of computer science by which we can create intelligent machines which can
behave like a human, think like humans, and able to make decisions.
According to Haugeland, artificial intelligence is “the exciting new effort to make computers think … machines with minds, in the full and literal sense”.
For Bellman, it is “the automation of activities that we associate with human thinking, activities such as decision-making, problem solving, learning …”.
Artificial Intelligence exists when a machine has human-like skills such as learning, reasoning, and problem solving.
With Artificial Intelligence you do not need to pre-program a machine for every task; instead, you can create a machine with programmed algorithms which can work with its own intelligence, and that is the awesomeness of AI.
Artificial Intelligence is not just a part of computer science; it is vast and draws on many other fields. To create AI, we should first know how intelligence is composed: intelligence is an intangible property of our brain which is a combination of reasoning, learning, problem solving, perception, language understanding, etc.
To achieve the above factors for a machine or software, Artificial Intelligence requires the following disciplines:
o Mathematics
o Biology
o Psychology
o Sociology
o Computer Science
o Neuroscience (the study of neurons)
o Statistics
AI is not a new technology; it is even believed that, as per Greek myth, there were mechanical men in early days which could work and behave like humans.
o With the help of AI, we can create software or devices which can solve real-world problems easily and with accuracy, such as health issues, marketing, traffic issues, etc.
o With the help of AI, we can create personal virtual assistants, such as Cortana, Google Assistant, Siri, etc.
o With the help of AI, we can build robots which can work in environments where human survival may be at risk.
o AI opens a path for other new technologies, new devices, and new opportunities.
Advantages of Artificial Intelligence:
o High accuracy with fewer errors: AI machines or systems are less error-prone and highly accurate, as they take decisions based on prior experience or information.
o High speed: AI systems can be very fast in decision-making; because of this, an AI system can beat a chess champion in the game of chess.
o High reliability: AI machines are highly reliable and can perform the same action
multiple times with high accuracy.
o Useful in risky areas: AI machines can be helpful in situations such as defusing a bomb or exploring the ocean floor, where employing a human would be risky.
o Digital Assistant: AI can be very useful in providing digital assistance to users; for example, various e-commerce websites currently use AI technology to show products matching customer requirements.
o Useful as a public utility: AI can be very useful for public utilities, such as self-driving cars (which can make our journeys safer and hassle-free), facial recognition for security purposes, and natural language processing for communicating with humans in human language.
Disadvantages of Artificial Intelligence:
o High cost: The hardware and software requirements of AI are very costly, as AI systems require a lot of maintenance to meet current world requirements.
o Can't think out of the box: Even though we are making machines smarter with AI, they still cannot think outside the box; a robot will only do the work for which it is trained or programmed.
o No feelings and emotions: AI machines can be outstanding performers, but they do not have feelings, so they cannot form any kind of emotional attachment with humans, and may sometimes be harmful to users if proper care is not taken.
o Increased dependency on machines: With the advance of technology, people are becoming more dependent on devices and hence are losing some of their mental capabilities.
o No original creativity: Humans are creative and can imagine new ideas, but AI machines cannot match this power of human intelligence; they cannot be creative and imaginative.
3. Super AI:
o Super AI is a level of system intelligence at which machines could surpass human intelligence and perform any task better than a human, with cognitive properties. It is an outcome of General AI.
o Some key characteristics of Super AI include the ability to think, reason, solve puzzles, make judgments, plan, learn, and communicate on its own.
o Super AI is still a hypothetical concept of Artificial Intelligence; developing such systems in the real world is still a world-changing task.
2. Limited Memory
o Limited memory machines can store past experiences or some data for a short
period of time.
o These machines can use stored data for a limited time period only.
o Self-driving cars are one of the best examples of Limited Memory systems. These cars can store the recent speed of nearby cars, the distance to other cars, the speed limit, and other information needed to navigate the road.
3. Theory of Mind
o Theory of Mind AI should understand human emotions, people, and beliefs, and be able to interact socially like humans.
o This type of AI machine has still not been developed, but researchers are making lots of efforts and improvements towards developing such AI machines.
4. Self-Awareness
o Self-awareness AI is the future of Artificial Intelligence. These machines will be super
intelligent, and will have their own consciousness, sentiments, and self-awareness.
o These machines will be smarter than the human mind.
o Self-Awareness AI does not yet exist in reality; it is still a hypothetical concept.
Although Artificial Intelligence has gone far into our lives, there is still not a single computer or robot that exists with all the thinking capabilities that humans have. As the term itself says, "Artificial" refers to thoughts produced by simple logic and questioning; complex logic is still out of the reach of Artificial Intelligence. AI devices or robots are only able to do the tasks for which they have been previously programmed.
Some of the problems most popularly solved with the help of artificial intelligence are:
1. Chess.
2. Travelling Salesman Problem.
3. Tower of Hanoi Problem.
4. Water-Jug Problem.
5. N-Queen Problem.
Problem Searching
In general, searching refers to finding the information one needs.
Searching is the most commonly used technique of problem solving in artificial intelligence.
A searching algorithm helps us to search for the solution to a particular problem.
Problem
Problems are the issues that come across any system. A solution is needed to solve each particular problem.
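To make this concrete, here is a minimal sketch in Python of how the classic Water-Jug Problem from the list above can be posed as a search problem. The jug capacities (4 and 3 litres) and the goal amount (2 litres) are assumptions chosen for illustration; only the states, the start state, the goal test, and the successor function need to be defined.

```python
# A minimal sketch: posing the Water-Jug Problem as a search problem.
# Assumption: jugs of capacity 4 and 3 litres; the goal is 2 litres in jug 0.
CAP = (4, 3)          # jug capacities
START = (0, 0)        # both jugs start empty

def is_goal(state):
    """Goal test: jug 0 holds exactly 2 litres."""
    return state[0] == 2

def successors(state):
    """All states reachable by one fill, empty, or pour action."""
    x, y = state
    result = set()
    result.add((CAP[0], y))           # fill jug 0
    result.add((x, CAP[1]))           # fill jug 1
    result.add((0, y))                # empty jug 0
    result.add((x, 0))                # empty jug 1
    pour = min(x, CAP[1] - y)         # pour jug 0 into jug 1
    result.add((x - pour, y + pour))
    pour = min(y, CAP[0] - x)         # pour jug 1 into jug 0
    result.add((x + pour, y - pour))
    result.discard(state)             # ignore actions that change nothing
    return result
```

Any of the search strategies described later (breadth-first search, depth-first search, etc.) can then explore these states from START until is_goal is satisfied.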
Steps: Solve Problem Using Artificial Intelligence
The process of solving a problem consists of five steps. These are:
1. Defining The Problem: The definition of the problem must be stated precisely. It should contain the possible initial as well as final situations, which should result in an acceptable solution.
2. Analyzing The Problem: The problem and its requirements must be analyzed, as a few features can have an immense impact on the resulting solution.
3. Identifying Solutions: The possible solutions to the problem are identified.
4. Choosing a Solution: From all the identified solutions, the best solution is chosen based on the results produced by the respective solutions.
5. Implementation: The chosen solution is implemented.
NATURAL LANGUAGE PROCESSING (NLP):
The ultimate objective of NLP is to read, decipher, understand, and make sense of human languages in a manner that is valuable.
Most NLP techniques rely on machine learning to derive meaning from human
languages.
In fact, a typical interaction between humans and machines using Natural Language Processing could go as follows:
1. A human talks to the machine
2. The machine captures the audio
3. Audio-to-text conversion takes place
4. The text data is processed
5. Data-to-audio conversion takes place
6. The machine responds to the human by playing the audio file
Natural Language Processing can take speech or written text as input as well as output, in the following combinations:
o Speech as input and speech as output
o Speech as input and text as output
o Text as input and speech as output
o Text as input and text as output
With the help of NLG (Natural Language Generation), internal representations can be converted into meaningful phrases and then into sentences.
Natural Language Generation can also be thought of as a translator which translates data into human language.
1. Lexical Analysis: This phase analyses the structure of the words. It breaks paragraphs into simple phrases, and phrases into even simpler words (tokens), as in the sketch following this list.
2. Syntactic Analysis: This phase arranges the series of words generated in the lexical analysis phase and combines them in such a way as to form grammatically meaningful sentences and paragraphs.
3. Semantic Analysis: This phase extracts the meaning and checks the meaningfulness of
the sentences. For Example: “Wet water”.
4. Discourse Integration: This phase draws out the meaning of the sentence or phrase currently being processed on the basis of the previous and following sentences or phrases.
5. Pragmatic Analysis: This phase is responsible for extracting the actual meaning of phrases by comparing them with real-world knowledge.
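As a rough illustration of the first two phases, the sketch below uses the NLTK library (an assumed choice; any tokenizer and part-of-speech tagger would do) to perform lexical analysis and the part-of-speech tagging on which syntactic analysis builds:

```python
# A minimal sketch of the lexical phase (and the start of the syntactic
# phase) using NLTK. Assumes: pip install nltk, plus the data downloads below.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

text = "The agent reads the sentence. Then it analyses the structure."

# Lexical analysis: break the text into sentences, then into word tokens.
for sentence in nltk.sent_tokenize(text):
    words = nltk.word_tokenize(sentence)
    # A first step of syntactic analysis: tag each word's part of speech.
    print(nltk.pos_tag(words))
```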
Word Processors such as Microsoft Word, and Grammarly, employ NLP to check the grammatical accuracy of texts.
Interactive Voice Response (IVR) applications used in call centers to respond to certain
users’ requests.
Syntactic analysis and semantic analysis are the main techniques used to complete Natural
Language Processing tasks.
1. Syntax
Syntax refers to the arrangement of words in a sentence such that they make grammatical sense.
In NLP, syntactic analysis is used to assess how the natural language aligns with the grammatical
rules.
Computer algorithms are used to apply grammatical rules to a group of words and derive meaning
from them.
Word segmentation: It involves dividing a large piece of continuous text into distinct units.
Part-of-speech tagging: It involves identifying the part of speech for every word.
2. Semantics
Semantics refers to the meaning that is conveyed by a text. Semantic analysis is one of the
difficult aspects of Natural Language Processing that has not been fully resolved yet.
It involves applying computer algorithms to understand the meaning and interpretation of words
and how sentences are structured.
Named entity recognition (NER): It involves determining the parts of a text that can be identified and categorized into preset groups. Examples of such groups include names of people and names of places (see the sketch after this list).
Word sense disambiguation: It involves giving meaning to a word based on the context.
Natural language generation: It involves using databases to derive semantic intentions and
convert them into human language.
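As a small illustration of named entity recognition, the sketch below again uses NLTK (an assumed choice; libraries such as spaCy offer the same capability) to chunk a part-of-speech-tagged sentence into named entities:

```python
# A minimal NER sketch using NLTK's named-entity chunker.
# Assumes the NLTK data packages listed below have been downloaded.
import nltk

for resource in ("punkt", "averaged_perceptron_tagger",
                 "maxent_ne_chunker", "words"):
    nltk.download(resource, quiet=True)

sentence = "Alan Turing was born in London."
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))

# ne_chunk groups tagged tokens into entities such as PERSON and GPE (place).
print(nltk.ne_chunk(tagged))
```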
Wrapping up
As more research is being carried in this field, we expect to see more breakthroughs that will
make machines smarter at recognizing and understanding the human language.
AUTOMATED REASONING:
Reasoning is the ability to make inferences, and automated reasoning is concerned with the
building of computing systems that automate this process. Although the overall goal is to
mechanize different forms of reasoning, the term has largely been identified with valid
deductive reasoning as practiced in mathematics and formal logic. In this respect, automated
reasoning is akin to mechanical theorem proving.
While basic research work continues in order to provide the necessary theoretical framework,
the field has reached a point where automated reasoning programs are being used by
researchers to attack open questions in mathematics and logic, provide important applications
in computing science, solve problems in engineering, and find novel approaches to questions
in exact philosophy.
Visual Perception:
Perception:
Perception is the process of acquiring, interpreting, selecting, and organizing
sensory information.
The difficulty of the task comes from the need for multiple levels of abstraction, where the relations among data items are many-to-many, uncertain, and changing over time.
Accurately speaking, we never "see things as they are", and the perception process of an intelligent system is often (and should be) influenced by internal and external factors besides the signals themselves. Furthermore, perception is not a purely passive process driven by the input.
Computer vision has influenced the field of Artificial Intelligence greatly. The RoboCup tournament and ASIMO are examples of Artificial Intelligence using computer vision to its greatest extent. The RoboCup tournament is a tournament for robot dogs playing soccer. To be able to play soccer, these dogs must be able to see the ball and then react to it accordingly. The engineers of these robot dogs have been challenged to create robot dogs that can beat the best human soccer players within around fifty years.
ASIMO is another example of how computer vision is an important part of Artificial Intelligence. ASIMO is a robot created by Honda; of course, all robots need to know where to move and what is in their surroundings. To be able to do this, ASIMO uses cameras to visualize computationally what is in its surroundings, and then uses that information to achieve its goal.
Artificial Intelligence can also use computer vision to communicate with humans. GRACE is a robot that could communicate with humans to a limited extent, in order to recognize her surroundings and achieve a specific goal. For example, GRACE attended a conference, moving through a lobby and up an elevator by communicating with humans. The communication included understanding that she had to wait in line, and asking others to press the elevator button for her. She also has a binocular vision system allowing her to react to human gestures.
Artificial Intelligence also uses computer vision to recognize handwritten text and drawings. Text typed in a document can be read by the computer easily, but handwritten text cannot. Computer vision addresses this by converting handwritten figures into figures that can be used by a computer. For example, an attempted drawing of a rectangular prism resting on three other rectangular prisms can be converted by computer vision into a 3-D picture of the same thing, in a format usable by the computer and more readable by users.
Another important part of Artificial Intelligence is passive observation and analysis, which uses computer vision to observe and analyze certain objects over time. For example, passing cars can be observed and analyzed by the computer to determine the type of each car; this can be done by outlining the car's shape and recording it. Similarly, a flock of geese can be observed and analyzed over time; the record could serve to predict when the geese would come again, how long they would stay, and how many of them there might be.
Heuristic Algorithm:
A heuristic is a technique for solving a problem faster than classic methods, or for finding an approximate solution when classic methods cannot find an exact one. It is a kind of shortcut, as we often trade one of optimality, completeness, accuracy, or precision for speed. A heuristic (or heuristic function) guides search algorithms: at each branching step, it evaluates the available information and makes a decision on which branch to follow by ranking the alternatives. A heuristic is any device that is often effective but is not guaranteed to work in every case.
A* Search Algorithm
Artificial intelligence at its core strives to solve problems of enormous combinatorial complexity. Over the years, these problems were boiled down to search problems.
Any time we want to convert any kind of problem into a search problem, we have to define six things:
1. a set of all possible states we might be in;
2. a start and finish state;
3. a goal check (a way of checking whether the current state is the finish state);
4. a set of possible actions (e.g. moves between nodes);
5. a traversal function (a function that tells us where an action leads);
6. a set of costs for moving along the edges.
Each time A* enters a state, it calculates the cost, f(n) (n being the
neighboring node), to travel to all of the neighboring nodes, and then enters
the node with the lowest value of f(n).
f(n) = g(n) + h(n)
g(n) being the value of the shortest path from the start node to node n, and h(n) being a heuristic approximation of the node's value.
For us to be able to reconstruct any path, we need to mark every node with the relative (predecessor) that has the optimal f(n) value. This also means that if we revisit certain nodes, we'll have to update their optimal relatives as well. More on that later.
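The sketch below is a minimal A* implementation in Python over a graph given as an adjacency dictionary; the graph, the heuristic values, and the node names are all assumptions chosen for illustration:

```python
import heapq

def a_star(graph, h, start, goal):
    """Minimal A* sketch. graph: {node: [(neighbor, cost), ...]},
    h: {node: heuristic estimate of distance to goal}."""
    open_heap = [(h[start], 0, start)]  # priority queue ordered by f(n)
    g = {start: 0}                      # best-known g(n) values
    parent = {start: None}              # optimal "relative" of each node

    while open_heap:
        f, g_n, n = heapq.heappop(open_heap)
        if n == goal:                   # reconstruct the path via parents
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return list(reversed(path))
        for m, cost in graph.get(n, []):
            g_m = g_n + cost
            if g_m < g.get(m, float("inf")):  # found a better relative for m
                g[m] = g_m
                parent[m] = n
                heapq.heappush(open_heap, (g_m + h[m], g_m, m))
    return None                         # no path exists

# Assumed example graph and heuristic values:
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 3, "A": 4, "B": 1, "G": 0}
print(a_star(graph, h, "S", "G"))  # ['S', 'B', 'G'] with cost 5
```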
A heuristic function h(n) is admissible if it never overestimates the real cost from n to the goal:
h(n) ≤ h*(n)
h*(n) being the real distance between n and the goal node. However, if the
function does overestimate the real distance, but never by more than d, we
can safely say that the solution that the function produces is of accuracy d (i.e.
it doesn't overestimate the shortest path from start to finish by more than d).
A given heuristic function h(n) is consistent if its estimate is always less than or equal to the estimate for any given neighbor m, plus the cost of reaching that neighbor:
c(n,m) + h(m) ≥ h(n)
c(n,m) being the distance between nodes n and m. Additionally, if h(n) is consistent, then we know the optimal path to any node that has already been inspected. This means that the algorithm is optimal.
By induction on the number of nodes N on the shortest path between n and the goal node s, we can show that a consistent heuristic never overestimates.
Base: N = 0
If there are no nodes between n and s, then, because we know that h(n) is consistent, the following equation is valid:
c(n,s) + h(s) ≥ h(n)
Knowing h*(n) = c(n,s) and h(s) = 0, we can safely deduce that:
h*(n) ≥ h(n)
Induction hypothesis: the claim holds for fewer than k nodes on the path (N < k).
Induction step:
In the case of N = k nodes on the shortest path from n to s, we inspect the first successor (node m) of node n. Because we know that there is a path from m to s, and we know this path contains k-1 nodes, the following equation is valid:
h*(n) = c(n,m) + h*(m) ≥ c(n,m) + h(m) ≥ h(n)
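A quick numeric check of the consistency condition, run on the small assumed graph from the A* sketch above:

```python
# Check c(n,m) + h(m) >= h(n) for every edge of the assumed example graph.
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 3, "A": 4, "B": 1, "G": 0}

for n, edges in graph.items():
    for m, c in edges:
        assert c + h[m] >= h[n], f"h is inconsistent on edge {n}->{m}"
print("h is consistent on every edge")
```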
Problem-solving agents:
In Artificial Intelligence, search techniques are universal problem-solving methods. Rational agents or problem-solving agents in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are goal-based agents that use an atomic representation. In this topic, we will learn various problem-solving search algorithms.
Time Complexity: Time complexity is a measure of time for an algorithm to complete its
task.
Space Complexity: It is the maximum storage space required at any point during the search; it depends on the complexity of the problem.
Uninformed Search
Uninformed search (blind search) uses no domain knowledge beyond the problem definition. The main uninformed search strategies are:
o Breadth-first search
o Uniform cost search
o Depth-first search
o Iterative deepening depth-first search
o Bidirectional Search
Informed Search
Informed search algorithms use domain knowledge. In an informed search, problem
information is available which can guide the search. Informed search strategies can find a
solution more efficiently than an uninformed search strategy. Informed search is also called
a Heuristic search.
A heuristic is a way which might not always be guaranteed to find the best solution, but is guaranteed to find a good solution in reasonable time.
Informed search can solve much more complex problems which could not be solved another way.
The main informed search algorithms are:
1. Greedy Search
2. A* Search
The main uninformed search algorithms are:
1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search
1. Breadth-first Search:
o Breadth-first search is the most common search strategy for traversing a tree or
graph. This algorithm searches breadthwise in a tree or graph, so it is called
breadth-first search.
o The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
o The breadth-first search algorithm is an example of a general-graph search algorithm.
o Breadth-first search is implemented using a FIFO queue data structure.
Advantages:
o BFS will provide a solution if any solution exists.
o If there is more than one solution for a given problem, BFS will provide the minimal solution, i.e. the one requiring the fewest steps.
Disadvantages:
o It requires lots of memory since each level of the tree must be saved into memory to
expand the next level.
o BFS needs lots of time if the solution is far away from the root node.
Example:
In the below tree structure, we have shown the traversal of the tree using the BFS algorithm from the root node S to the goal node K. The BFS algorithm traverses in layers, so it follows the path shown by the dotted arrow, and the traversed path will be:
1. S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K
Time Complexity: The time complexity of the BFS algorithm is given by the number of nodes traversed in BFS until the shallowest node: T(b) = 1 + b + b^2 + ... + b^d = O(b^d), where d is the depth of the shallowest solution and b is the branching factor (the number of successors at every state).
Space Complexity: The space complexity of BFS is given by the memory size of the frontier, which is O(b^d).
Completeness: BFS is complete, which means if the shallowest goal node is at some finite
depth, then BFS will find a solution.
Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the
node.
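A minimal breadth-first search sketch in Python, using the FIFO queue mentioned above (the adjacency dictionary is an assumed example, not the figure's exact tree):

```python
from collections import deque

def bfs(graph, start, goal):
    """Minimal breadth-first search sketch over an adjacency dict."""
    frontier = deque([[start]])      # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()    # shallowest path first
        node = path[-1]
        if node == goal:
            return path
        for successor in graph.get(node, []):
            if successor not in visited:
                visited.add(successor)
                frontier.append(path + [successor])
    return None

# Assumed example graph:
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["G"]}
print(bfs(graph, "S", "G"))  # ['S', 'B', 'G']
```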
2. Depth-first Search
o Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
o It is called the depth-first search because it starts from the root node and follows
each path to its greatest depth node before moving to the next path.
o DFS uses a stack data structure for its implementation.
o The process of the DFS algorithm is similar to the BFS algorithm.
Note: Backtracking is an algorithmic technique for finding all possible solutions using recursion.
Advantage:
o DFS requires very little memory, as it only needs to store the stack of nodes on the path from the root node to the current node.
o It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).
Disadvantage:
o There is a possibility that many states keep re-occurring, and there is no guarantee of finding a solution.
o The DFS algorithm goes for deep-down searching, and sometimes it may go into an infinite loop.
Example:
In the below search tree, we have shown the flow of depth-first search, and it will follow the order:
Root node ---> Left node ---> Right node.
It will start searching from root node S and traverse A, then B, then D and E; after traversing E it will backtrack the tree, as E has no other successor and the goal node has still not been found. After backtracking, it will traverse node C and then G, where it will terminate, as it has found the goal node.
Completeness: The DFS search algorithm is complete within a finite state space, as it will expand every node within a limited search tree.
Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm. It is given by:
T(b) = 1 + b + b^2 + ... + b^m = O(b^m)
where m is the maximum depth of any node, which can be much larger than d (the depth of the shallowest solution).
Space Complexity: The DFS algorithm needs to store only a single path from the root node, hence the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm) (b times m).
Optimal: The DFS search algorithm is non-optimal, as it may generate a large number of steps or a high cost to reach the goal node.
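A minimal depth-first search sketch in Python, using the explicit stack mentioned above (the adjacency dictionary is assumed for illustration):

```python
def dfs(graph, start, goal):
    """Minimal depth-first search sketch using an explicit stack."""
    stack = [[start]]                # LIFO stack of paths
    visited = set()
    while stack:
        path = stack.pop()           # deepest (most recent) path first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # Push successors in reverse so the leftmost child is explored first.
        for successor in reversed(graph.get(node, [])):
            if successor not in visited:
                stack.append(path + [successor])
    return None

# Assumed example graph: A and B are explored first, then DFS backtracks.
graph = {"S": ["A", "C"], "A": ["B"], "C": ["G"]}
print(dfs(graph, "S", "G"))  # ['S', 'C', 'G']
```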
3. Depth-limited Search:
A depth-limited search algorithm works like depth-first search with a predetermined depth limit ℓ; nodes at the depth limit are treated as if they have no successors.
Advantages:
o Depth-limited search is memory efficient.
Disadvantages:
o Depth-limited search has the drawback of incompleteness.
o It may not be optimal if the problem has more than one solution.
Example:
Completeness: DLS search algorithm is complete if the solution is above the depth-limit.
Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also not
optimal even if ℓ>d.
4. Uniform-cost Search Algorithm:
Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph.
This algorithm comes into play when a different cost is available for each edge. The primary
goal of the uniform-cost search is to find a path to the goal node which has the lowest
cumulative cost. Uniform-cost search expands nodes according to their path costs from the root node. It can be used to solve any graph/tree where the optimal cost is in demand. A uniform-cost search algorithm is implemented using a priority queue, which gives maximum priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS algorithm if the path cost of all edges is the same.
Advantages:
o Uniform cost search is optimal because at every state the path with the least cost is
chosen.
Disadvantages:
o It does not care about the number of steps involve in searching and only concerned
about path cost. Due to which this algorithm may be stuck in an infinite loop.
Example:
Completeness:
Uniform-cost search is complete: if there is a solution, UCS will find it.
Time Complexity:
Let C* be the cost of the optimal solution, and ε the minimum cost of a single step toward the goal. Then the number of steps is C*/ε + 1 (we take +1 because we start from state 0 and end at C*/ε), so the worst-case time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).
Space Complexity:
By the same logic, the worst-case space complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).
Optimal:
Uniform-cost search is always optimal as it only selects a path with the lowest path cost.
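A minimal uniform-cost search sketch in Python, implementing the priority queue described above with heapq (the weighted graph is assumed for illustration):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Minimal UCS sketch: always expand the lowest-cost path first."""
    frontier = [(0, start, [start])]   # priority queue of (cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for successor, step_cost in graph.get(node, []):
            if successor not in explored:
                heapq.heappush(frontier,
                               (cost + step_cost, successor, path + [successor]))
    return None

# Assumed weighted example graph:
graph = {"S": [("A", 1), ("B", 5)], "A": [("G", 7)], "B": [("G", 1)]}
print(uniform_cost_search(graph, "S", "G"))  # (6, ['S', 'B', 'G'])
```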
5. Iterative deepening depth-first Search:
This search algorithm performs depth-first search up to a certain "depth limit", and it keeps increasing the depth limit after each iteration until the goal node is found.
This Search algorithm combines the benefits of Breadth-first search's fast search and depth-
first search's memory efficiency.
The iterative deepening search algorithm is a useful uninformed search when the search space is large and the depth of the goal node is unknown.
Advantages:
o It combines the benefits of the BFS and DFS search algorithms in terms of fast search and memory efficiency.
Disadvantages:
o The main drawback of IDDFS is that it repeats all the work of the previous phase.
Example:
The following tree structure shows the iterative deepening depth-first search. The IDDFS algorithm performs various iterations until it finds the goal node. The iterations performed by the algorithm are given as:
1st Iteration -----> A
2nd Iteration ----> A, B, C
3rd Iteration ------> A, B, D, E, C, F, G
4th Iteration ------> A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.
Completeness:
This algorithm is complete if the branching factor is finite.
Time Complexity:
Let's suppose b is the branching factor and the depth is d; then the worst-case time complexity is O(b^d).
Space Complexity:
The space complexity of IDDFS will be O(bd).
Optimal:
The IDDFS algorithm is optimal if path cost is a non-decreasing function of the depth of the node.
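A minimal IDDFS sketch in Python: depth-limited DFS repeated with an increasing limit (the graph is assumed to match the iteration example above):

```python
def depth_limited_search(graph, node, goal, limit, path):
    """DFS that treats nodes at the depth limit as having no successors."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for successor in graph.get(node, []):
        result = depth_limited_search(graph, successor, goal,
                                      limit - 1, path + [successor])
        if result is not None:
            return result
    return None

def iddfs(graph, start, goal, max_depth=20):
    """Run depth-limited search with limits 0, 1, 2, ... until goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit, [start])
        if result is not None:
            return result
    return None

# Assumed example graph: goal K is found in the fourth iteration (limit 3).
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": ["H", "I"], "F": ["K"]}
print(iddfs(graph, "A", "K"))  # ['A', 'C', 'F', 'K']
```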
6. Bidirectional Search:
The bidirectional search algorithm runs two simultaneous searches: a forward search from the initial state and a backward search from the goal node. It replaces one single search graph with two small subgraphs, and the search stops when these two graphs intersect each other.
Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
Advantages:
o Bidirectional search is fast.
o Bidirectional search requires less memory.
Disadvantages:
o Implementation of the bidirectional search tree is difficult.
o In bidirectional search, one should know the goal state in advance.
Informed search algorithms are more useful for large search spaces. Informed search algorithms use the idea of a heuristic, so they are also called heuristic search.
Heuristic function: A heuristic is a function used in informed search to find the most promising path. It takes the current state of the agent as its input and produces an estimate of how close the agent is to the goal. The heuristic method might not always give the best solution, but it is guaranteed to find a good solution in reasonable time. A heuristic function estimates how close a state is to the goal. It is represented by h(n), and it calculates the cost of an optimal path between the pair of states. The value of the heuristic function is always positive.
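As a concrete sketch of such an h(n), here is a common heuristic for grid pathfinding, the Manhattan distance (the goal coordinates and the grid setting are assumptions for illustration); it estimates how close a state is to the goal and is never negative:

```python
# A minimal heuristic sketch: Manhattan distance on a grid.
# Assumption: states are (row, col) cells and moves are 4-directional.
GOAL = (4, 5)

def h(state):
    """Estimated cost from state to GOAL; always non-negative."""
    row, col = state
    return abs(row - GOAL[0]) + abs(col - GOAL[1])

print(h((0, 0)))  # 9: more promising states have smaller h values
print(h(GOAL))    # 0 at the goal itself
```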
An "algorithm" is any set of rules for doing something. What you mean is a "solution
algorithm". A "solution algorithm" guarantees a correct solution. The "guarantee" is the
key phrase. The Gaussian Elimination method taught to solve a system of linear
equations is a "solution algorithm" in that it guarantees that you will always give the
right answer. Solution algorithms to a problem can be faster or slower, but they all have
the same guarantee of being correct.
Imagine that you see a set of boxes on the other side of the room, and you have to
guess which is the heaviest. A fair heuristic would be to guess that the largest box is the
heaviest. The real answer could be found by actually weighing the boxes. However, it
may be either that weighing the boxes is impossible, or you do not want to spend the
time to weigh the boxes. In those cases, you would use the heuristic of guessing that the
largest is the heaviest.
The key point about a heuristic is that there is no way of knowing when the solution you get is wrong. If there were, you could create a self-correction loop and get the right solution, and that would mean you had a solution algorithm. But as with the boxes, just by looking at them, you would never know when the largest is not the heaviest.
With this box-weight heuristic you would usually, and under a lot of conditions, be right. But just by looking at the boxes you could never know when the largest is full of pillows and the smallest is full of lead; you would never know when you were wrong. The best thing to do in practice is an empirical statistical study of the typical situation and the typical answer you get.
Note that this is different from an "approximate solution algorithm", which guarantees that the solution is correct to within some degree. Weighing the boxes on a cheap scale is an approximate solution algorithm; guessing their weight by their size is a heuristic.
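To make the contrast concrete, a small sketch (with made-up box data) comparing the guaranteed solution algorithm, actually weighing the boxes, with the size-based heuristic:

```python
# Assumed illustrative data: each box has a volume (litres) and a true weight (kg).
boxes = [
    {"name": "big box of pillows",  "volume": 120, "weight": 4},
    {"name": "small box of lead",   "volume": 10,  "weight": 90},
    {"name": "medium box of books", "volume": 40,  "weight": 25},
]

# Solution algorithm: actually weigh every box; guaranteed correct.
heaviest = max(boxes, key=lambda b: b["weight"])

# Heuristic: guess that the largest box is the heaviest; fast, but with no
# guarantee, and no way to tell from the guess alone when it is wrong.
guess = max(boxes, key=lambda b: b["volume"])

print(heaviest["name"])  # small box of lead
print(guess["name"])     # big box of pillows: the heuristic fails here
```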