ICS 2404 Artificial Intelligence

Artificial Intelligence (AI) is a branch of science which deals with helping machines find
solutions to complex problems in a more human-like fashion. This generally involves
borrowing characteristics from human intelligence and applying them as algorithms in a
computer-friendly way. A more or less flexible or efficient approach can be taken depending
on the requirements, which influences how artificial the intelligent behaviour appears.

AI is generally associated with computer science, but it has many important links with other
fields such as mathematics, psychology, cognitive science, biology and philosophy, among
many others. Our ability to combine knowledge from all these fields will ultimately benefit
our progress in the quest of creating an intelligent artificial being.

AI currently encompasses a huge variety of subfields, from general-purpose areas such as
perception and logical reasoning, to specific tasks such as playing chess, proving
mathematical theorems, writing poetry, and diagnosing diseases. Often, scientists in other
fields move gradually into artificial intelligence, where they find the tools and vocabulary to
systematize and automate the intellectual tasks on which they have been working all their
lives. Similarly, workers in AI can choose to apply their methods to any area of human
intellectual endeavour. In this sense, it is truly a universal field.

Definition of Artificial intelligence

It is often difficult to construct a definition of a discipline that is satisfying to all of its
practitioners. AI research encompasses a spectrum of related topics. Broadly, AI is the
computer-based exploration of methods for solving challenging tasks that have traditionally
depended on people for solution. Such tasks include complex logical inference, diagnosis,
and visual recognition, comprehension of natural language, game playing, explanation, and
planning.
Alternative Definitions
 AI is the study of how to make computers do things which, at the moment, people do
better. This definition is ephemeral, as it refers to the current state of computer science,
and it excludes a major area: problems that cannot be solved well either by computers or
by people at the moment.
 AI is a field of study that encompasses computational techniques for performing tasks
that apparently require intelligence when performed by humans.
 AI is the branch of computer science that is concerned with the automation of
intelligent behaviour. AI is based upon the principles of computer science, namely the
data structures used in knowledge representation, the algorithms needed to apply that
knowledge, and the languages and programming techniques used in their
implementation. These definitions avoid philosophical discussions as to what is meant
by artificial or intelligence.
 AI is the field of study that seeks to explain and emulate intelligent behaviour in terms
of computational processes.
 AI is about generating representations and procedures that automatically or
autonomously solve problems which would otherwise be solved by humans.
 AI is the part of computer science concerned with designing intelligent computer
systems, that is, computer systems that exhibit the characteristics we associate with
intelligence in human behaviour such as understanding language, learning, reasoning
and solving problems.
 AI is the exciting new effort to make computers thinking machines with minds, in the
full and literal sense.
In brief summary, AI is concerned with developing computer systems that can store
knowledge and effectively use that knowledge to help solve problems and accomplish tasks.
This brief statement sounds a lot like one of the commonly accepted goals in the education of
humans: we want students to learn (gain knowledge) and to learn to use this knowledge to
help solve problems and accomplish tasks.
The above definitions give us four possible goals to pursue in artificial intelligence:
- Systems that think like humans
- Systems that act like humans
- Systems that think rationally. (A system is rational if it does the right thing.)
- Systems that act rationally

Historically, all four approaches have been followed. The following are some of the
approaches.

Acting humanly: The Turing Test approach.


The Turing Test, proposed by Alan Turing (Turing, 1950), was designed to provide a
satisfactory operational definition of intelligence. Turing defined intelligent behaviour as the
ability to achieve human-level performance in all cognitive tasks, sufficient to fool an
interrogator. Roughly speaking, the test he proposed is that the computer should be
interrogated by a human via a teletype, and passes the test if the interrogator cannot tell if
there is a computer or a human at the other end. Programming a computer to pass the test
provides plenty to work on. The computer would need to possess the following capabilities:
 natural language processing to enable it to communicate successfully in English (or
some other human language);
 knowledge representation to store information provided before or during the
interrogation;
 automated reasoning to use the stored information to answer questions and to draw
new conclusions;
 machine learning to adapt to new circumstances and to detect and extrapolate
patterns.

Turing's test deliberately avoided direct physical interaction between the interrogator and the
computer, because physical simulation of a person is unnecessary for intelligence. However,
the so-called total Turing Test includes a video signal so that the interrogator can test the
subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical
objects ``through the hatch.'' To pass the total Turing Test, the computer will need
 computer vision to perceive objects, and
 robotics to move them about.
Within AI, there has not been a big effort to try to pass the Turing test. The issue of acting
like a human comes up primarily when AI programs have to interact with people, as when an
expert system explains how it came to its diagnosis, or a natural language processing system
has a dialogue with a user. These programs must behave according to certain normal
conventions of human interaction in order to make themselves understood. The underlying
representation and reasoning in such a system may or may not be based on a human model.
Thinking humanly: The cognitive modelling approach
If we are going to say that a given program thinks like a human, we must have some way of
determining how humans think. We need to get inside the actual workings of human minds.
There are two ways to do this:
 Through introspection (trying to catch our own thoughts as they go by).
 Through psychological experiments.
Once we have a sufficiently precise theory of the mind, it becomes possible to express the
theory as a computer program. If the program's input/output and timing behaviour matches
human behaviour, that is evidence that some of the program's mechanisms may also be
operating in humans.
Thinking rationally: The laws of thought approach
The Greek philosopher Aristotle was one of the first to attempt to codify ``right thinking,''
that is, irrefutable reasoning processes. His famous syllogisms provided patterns for
argument structures that always gave correct conclusions given correct premises. For
example, ``Socrates is a man; all men are mortal; therefore Socrates is mortal.'' These laws of
thought were supposed to govern the operation of the mind, and initiated the field of logic.
By 1965, programs existed that could, given enough time and memory, take a description of a
problem in logical notation and find the solution to the problem, if one exists. (If there is no
solution, the program might never stop looking for it.) The so-called logicist tradition within
artificial intelligence hopes to build on such programs to create intelligent systems.
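The syllogism above can be carried out mechanically. A minimal forward-chaining sketch in Python (the fact and rule encoding is my own illustration, not a standard logical notation):

```python
# Encode "Socrates is a man" as a fact and "all men are mortal" as a rule
# mapping a premise predicate to a conclusion predicate.
facts = {("man", "Socrates")}
rules = {"man": "mortal"}

def forward_chain(facts, rules):
    """Repeatedly apply the rules to known facts until no new fact is derived."""
    derived = set(facts)
    while True:
        new = {(rules[p], x) for (p, x) in derived if p in rules} - derived
        if not new:
            return derived
        derived |= new

print(forward_chain(facts, rules))
# the result contains ("mortal", "Socrates"): Socrates is mortal
```

A real logicist system would use full predicate logic and unification; this sketch only shows the idea of deriving new conclusions from premises until a fixed point is reached.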
Acting rationally: The rational agent approach
Acting rationally means acting so as to achieve one's goals, given one's beliefs. An agent is
just something that perceives and acts. In this approach, AI is viewed as the study and
construction of rational agents.
Areas of Artificial Intelligence
Perception

Machine Vision: It is easy to interface a TV camera to a computer and get an image into
memory; the problem is understanding what the image represents. Vision takes lots of
computation; in humans, roughly 10% of all calories consumed are burned in vision
computation.
Speech Understanding: Speech understanding is available now. Some systems must be
trained for the individual user and require pauses between words. Understanding continuous
speech with a larger vocabulary is harder.
Touch (tactile or haptic) Sensation: Important for robot assembly tasks.

Robotics

Although industrial robots have been expensive, robot hardware can be cheap: Radio Shack
has sold a working robot arm and hand for $15. The limiting factor in application of robotics
is not the cost of the robot hardware itself. What is needed is perception and intelligence to
tell the robot what to do; ``blind'' robots are limited to very well-structured tasks (like spray
painting car bodies).

Planning
The task of coming up with a sequence of actions that will achieve a goal is called planning.
Planning attempts to order actions to achieve goals. Planning applications include logistics,
manufacturing scheduling, planning the manufacturing steps to construct a desired product,
and intelligent training. There are huge amounts of money to be saved through better
planning.
Theorem Proving
Proving mathematical theorems might seem to be mainly of academic interest. However,
many practical problems can be cast in terms of theorems, so a general theorem prover can
be widely applicable. Examples:
 Automatic construction of compiler code generators from a description of a CPU's
instruction set.
 J Moore and colleagues proved the correctness of the floating-point division algorithm
on an AMD CPU chip.

Symbolic Mathematics

Symbolic mathematics refers to manipulation of formulas, rather than arithmetic on numeric
values.
• Algebra
• Differential and Integral Calculus
Symbolic manipulation is often used in conjunction with ordinary scientific computation as a
generator of programs used to actually do the calculations. Symbolic manipulation programs
are an important component of scientific and engineering workstations.

Game Playing
Games are good vehicles for research because they are well formalized, small, and self-
contained. They are therefore easily programmed. Games can be good models of competitive
situations, so principles discovered in game-playing programs may be applicable to practical
problems.

Knowledge Representation
Although knowledge representation is one of the central and in some ways most familiar
concepts in AI, the most fundamental question about it--What is it?--has rarely been
answered directly. Numerous papers have lobbied for one or another variety of
representation, other papers have argued for various properties a representation should have,
while still others have focused on properties that are important to the notion of representation
in general.
What is a knowledge representation? We argue that the notion can best be understood in
terms of five distinct roles it plays, each crucial to the task at hand:

 A knowledge representation (KR) is most fundamentally a surrogate, a substitute for
the thing itself, used to enable an entity to determine consequences by thinking rather
than acting, i.e., by reasoning about the world rather than taking action in it.
 It is a set of ontological commitments, i.e., an answer to the question: In what terms
should I think about the world?
 It is a fragmentary theory of intelligent reasoning, expressed in terms of three
components:
(i) the representation's fundamental conception of intelligent reasoning;
(ii) the set of inferences the representation sanctions; and
(iii) the set of inferences it recommends (any way to get new expressions from
old).
 It is a medium for pragmatically efficient computation, i.e., the computational
environment in which thinking is accomplished. One contribution to this pragmatic
efficiency is supplied by the guidance a representation provides for organizing
information so as to facilitate making the recommended inferences.
 It is a medium of human expression, i.e., a language in which we say things about the
world.

Expert Systems
What is an Expert System?
Jackson (1999) provides us with the following definition:
An expert system is a computer program that represents and reasons with knowledge of some
specialist subject with a view to solving problems or giving advice.
To solve expert-level problems, expert systems will need efficient access to a substantial
domain knowledge base, and a reasoning mechanism to apply the knowledge to the
problems they are given. Usually they will also need to be able to explain, to the users who
rely on them, how they have reached their decisions.
They will generally build upon the ideas of knowledge representation, production rules,
search, and so on.
Often we use an expert system shell, which is an existing knowledge-independent framework
into which domain knowledge can be inserted to produce a working expert system. We can
thus avoid having to program each new system from scratch.

Typical Tasks for Expert Systems

There are no fundamental limits on what problem domains an expert system can be built to
deal with. Some typical existing expert system tasks include:

1. The interpretation of data, such as sonar data or geophysical measurements
2. Diagnosis of malfunctions, such as equipment faults or human diseases
3. Structural analysis or configuration of complex objects, such as chemical compounds or
computer systems
4. Planning sequences of actions, such as might be performed by robots
5. Predicting the future, such as weather, share prices, exchange rates
However, these days, “conventional” computer systems can also do some of these things.
Characteristics of Expert Systems
Expert systems can be distinguished from conventional computer systems in that:
1. They simulate human reasoning about the problem domain, rather than simulating the
domain itself.
2. They perform reasoning over representations of human knowledge, in addition to doing
numerical calculations or data retrieval. They have corresponding distinct modules referred to
as the inference engine and the knowledge base.
3. Problems tend to be solved using heuristics (rules of thumb), approximate methods or
probabilistic methods which, unlike algorithmic solutions, are not guaranteed to result in a
correct or optimal solution.
4. They usually have to provide explanations and justifications of their solutions or
recommendations in order to convince the user that their reasoning is correct.
The Architecture of Expert Systems

The process of building expert systems is often called knowledge engineering. The
knowledge engineer is involved with all components of an expert system.
Building expert systems is generally an iterative process. The components and their
interaction will be refined over the course of numerous meetings of the knowledge engineer
with the experts and users. We shall look in turn at the various components.
Knowledge Acquisition
The knowledge acquisition component allows the expert to enter their knowledge or expertise
into the expert system, and to refine it later as and when required.
Historically, the knowledge engineer played a major role in this process, but automated
systems that allow the expert to interact directly with the system are becoming increasingly
common.
The knowledge acquisition process usually comprises three principal stages:
1. Knowledge elicitation: the interaction between the expert and the knowledge
engineer/program to elicit the expert knowledge in some systematic way.
2. The knowledge thus obtained is usually stored in some human-friendly intermediate
representation.
3. The intermediate representation of the knowledge is then compiled into an executable
form (e.g. production rules) that the inference engine can process.

Knowledge Elicitation
The knowledge elicitation process itself usually consists of several stages:
1. Find as much as possible about the problem and domain from books, manuals, etc. In
particular, become familiar with any specialist terminology and jargon.
2. Try to characterise the types of reasoning and problem solving tasks that the system will be
required to perform.
3. Find an expert (or set of experts) that is willing to collaborate on the project.
Sometimes experts are frightened of being replaced by a computer system.
4. Interview the expert (usually many times during the course of building the system). Find
out how they solve the problems your system will be expected to solve. Have them check and
refine your intermediate knowledge representation.
This is a time intensive process, and automated knowledge elicitation and machine learning
techniques are increasingly common modern alternatives.

Knowledge identification: Use in-depth interviews in which the knowledge engineer
encourages the expert to talk about how they do what they do. The knowledge engineer
should understand the domain well enough to know which objects and facts need talking
about.
Knowledge conceptualization: Find the primitive concepts and conceptual relations of the
problem domain.
Epistemological analysis: Uncover the structural properties of the conceptual knowledge,
such as taxonomic relations (classifications).
Logical analysis: Decide how to perform reasoning in the problem domain. This kind of
knowledge can be particularly hard to acquire.
Implementation analysis: Work out systematic procedures for implementing and testing the
system.

Capturing Tacit/Implicit Knowledge

One problem that knowledge engineers often encounter is that the human experts use
tacit/implicit knowledge (e.g. procedural knowledge) that is difficult to capture.
There are several useful techniques for acquiring this knowledge:
1. Protocol analysis: Tape-record the expert thinking aloud while performing their role and
later analyse it. Break down their protocol/account into the smallest atomic units of
thought, and let these become operators.
2. Participant observation: The knowledge engineer acquires tacit knowledge through
practical domain experience with the expert.
3. Machine induction: This is useful when the experts are able to supply examples of the
results of their decision making, even if they are unable to articulate the underlying
knowledge or reasoning process.
Which is/are best to use will generally depend on the problem domain and the expert.

Representing the Knowledge

We have already looked at various types of knowledge representation. In general, the
knowledge acquired from our expert will be formulated in two ways:
1. Intermediate representation – a structured knowledge representation that the knowledge
engineer and expert can both work with efficiently.
2. Production system – a formulation that the expert system’s inference engine can process
efficiently.
It is important to distinguish between:
1. Domain knowledge – the expert’s knowledge which might be expressed in the form of
rules, general/default values, and so on.
2. Case knowledge – specific facts/knowledge about particular cases, including any derived
knowledge about the particular cases.
The system will have the domain knowledge built in, and will have to integrate this with the
different case knowledge that will become available each time the system is used.

The Inference Engine

The inference engine repeatedly applies a match-resolve-act cycle:
1. Match the premise patterns of the rules against elements in the working memory.
Generally the rules will be the domain knowledge built into the system, and the working
memory will contain the case-based facts entered into the system, plus any new facts that
have been derived from them.
2. If there is more than one rule that can be applied, use a conflict resolution strategy to
choose one to apply. Stop if no further rules are applicable.
3. Activate the chosen rule, which generally means adding/deleting an item to/from working
memory. Stop if a terminating condition is reached, or return to step 1.
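The cycle above can be sketched as a toy production system in Python. The rule names, premises, and the conflict-resolution strategy (prefer the most specific rule) are illustrative assumptions, not taken from any real expert system shell:

```python
# Each rule has premise patterns ("if") and an action ("then" adds a fact).
rules = [
    {"name": "fever-and-rash", "if": {"fever", "rash"}, "then": "suspect-measles"},
    {"name": "fever-only",     "if": {"fever"},          "then": "suspect-flu"},
]

def run(working_memory):
    fired = set()
    while True:
        # 1. Match: find rules whose premises all appear in working memory.
        applicable = [r for r in rules
                      if r["if"] <= working_memory and r["name"] not in fired]
        if not applicable:
            return working_memory        # stop: no further rules are applicable
        # 2. Conflict resolution: choose the most specific applicable rule.
        rule = max(applicable, key=lambda r: len(r["if"]))
        # 3. Act: add the rule's conclusion to working memory.
        working_memory.add(rule["then"])
        fired.add(rule["name"])

print(run({"fever", "rash"}))
```

Note how firing one rule can make further rules applicable: the new facts added in step 3 are matched again in the next pass through step 1.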
Early production systems spent over 90% of their time doing pattern matching, but efficient
matching algorithms (notably Rete) now largely solve this efficiency problem.

The User Interface

The Expert System user interface usually comprises two basic components:
1. The Interviewer Component
This controls the dialog with the user and/or allows any measured data to be read into the
system. For example, it might ask the user a series of questions, or it might read a file
containing a series of test results.
2. The Explanation Component
This gives the system's solution, and also makes the system's operation transparent by
explaining how it reached that conclusion. It might instead explain why it could not reach a
conclusion.
So that is how we go about building expert systems.

AI Technique.

Intelligence requires knowledge, but knowledge possesses some less desirable properties:
 It is voluminous.
 It is difficult to characterise accurately.
 It is constantly changing.
 It differs from data by being organised in a way that corresponds to its application.

An AI technique is a method that exploits knowledge that is represented so that

 The knowledge captures generalisations: situations that share properties are grouped
together, rather than being allowed separate representation.
 It can be understood by people who must provide it; although for many programs the
bulk of the data may come automatically, such as from readings. In many AI domains
people must supply the knowledge to programs in a form the people understand and in
a form that is acceptable to the program.
 It can be easily modified to correct errors and reflect changes in real conditions.
 It can be widely used even if it is incomplete or inaccurate.
 It can be used to help overcome its own sheer bulk by helping to narrow the range of
possibilities that must usually be considered.

Artificial Intelligence Search

Problem Spaces and Search

Building a system to solve a problem requires the following steps:
 Define the problem precisely including detailed specifications and what constitutes an
acceptable solution;
 Analyse the problem thoroughly for some features may have a dominant effect on the
chosen method of solution;
 Isolate and represent the background knowledge needed in the solution of the
problem;
 Choose the best problem-solving technique and apply it to the solution.

Defining the Problem as State Search

To understand what exactly artificial intelligence is, we illustrate some common problems.
Problems dealt with in artificial intelligence generally use a common term called 'state'. A
state represents a status of the solution at a given step of the problem solving procedure. The
solution of a problem, thus, is a collection of the problem states. The problem solving
procedure applies an operator to a state to get the next state. Then it applies another operator
to the resulting state to derive a new state. The process of applying an operator to a state and
its subsequent transition to the next state, thus, is continued until the goal (desired) state is
derived. Such a method of solving a problem is generally referred to as state space
approach.
For example, in order to solve the problem of playing a game (restricted here to two-person
table or board games), we require the rules of the game and the criteria for winning, as well
as a means of representing positions in the game. The opening position can be defined as the
initial state and a winning position as a goal state; there can be more than one legal move to
allow for transfer from the initial state to other states leading to the goal state. However, the
possible positions are far too numerous in most games, especially chess, where they exceed
the number of particles in the universe. Thus the positions cannot in general be enumerated
explicitly, and computer programs cannot easily handle them in that form. Storage also
presents a problem, but searching can be achieved.
Formal description of a problem
 Define a state space that contains all possible configurations of the relevant objects,
without enumerating all the states in it. A state space represents a problem in terms of
states and operators that change states.
 Define some of these states as possible initial states;
 Specify one or more as acceptable solutions, these are goal states;
 Specify a set of rules as the possible actions allowed. This involves thinking about the
generality of the rules, the assumptions made in the informal presentation and how
much work can be anticipated by inclusion in the rules.
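As a sketch of this recipe, the classic two-jug puzzle (my illustrative choice, not from the text) can be formalised in Python. A state is a pair (litres in the 4-litre jug, litres in the 3-litre jug); the goal is 2 litres in the larger jug, and the operators are fill, empty, and pour:

```python
# State-space formulation: initial state, goal test, and operators.
initial = (0, 0)

def is_goal(state):
    return state[0] == 2            # goal: exactly 2 litres in the 4-litre jug

def successors(state):
    """The set of states reachable from `state` by one legal operator."""
    x, y = state
    return {
        (4, y), (x, 3),             # fill either jug
        (0, y), (x, 0),             # empty either jug
        (min(4, x + y), max(0, y - (4 - x))),   # pour 3-litre into 4-litre
        (max(0, x - (3 - y)), min(3, x + y)),   # pour 4-litre into 3-litre
    }

print(successors(initial))          # the legal moves from the start state
```

Note that the operators are given as a function, not as a table of all states: the state space is defined without being enumerated, exactly as the first point above requires.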
Production system
A production system consists of:
 a set of rules, each consisting of a left side (the conditions determining the applicability
of the rule) and a right side (the operations to be performed);
 one or more knowledge bases containing the required information for each task;
 a control strategy that specifies the order in which the rules will be compared to the
database and ways of resolving conflicts;
 a rule applier.

Choose an appropriate search technique:
 How large is the search space?
 How well-structured is the domain?
 What knowledge about the domain can be used to guide the search?
Basic Recursive Algorithm
If the input is a base case, for which the solution is known, return the solution.
Otherwise,
 Do part of the problem, or break it into smaller subproblems.
 Call the problem solver recursively to solve the subproblems.
 Combine the subproblem solutions to form a total solution.
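The schema above can be illustrated with merge sort (my choice of example), which maps directly onto the three steps:

```python
def merge_sort(xs):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(xs) <= 1:
        return xs
    # Break the problem into smaller subproblems.
    mid = len(xs) // 2
    # Call the problem solver recursively to solve the subproblems.
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    # Combine the subproblem solutions to form a total solution.
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

print(merge_sort([5, 2, 8, 1, 9]))  # → [1, 2, 5, 8, 9]
```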

In writing the recursive program:


 Write a clear specification of the input and output of the program.
 Assume it works already.
 Write the program to use the input form and produce the output form.
Search Methods
The excessive time spent in searching is almost entirely spent on failures (sequences of
operators that do not lead to solutions). If the computer could be made to look at promising
sequences first and avoid most of the bad ones, much of the effort of searching could be
avoided. Blind search or exhaustive methods try operators in some fixed order, without
knowing which operators may be more likely to lead to a solution. Such methods can succeed
only for small search spaces. Heuristic search methods use knowledge about the problem
domain to choose more promising operators first.
Exhaustive search
Searches can be classified by the order in which operators are tried: depth-first, breadth-first,
bounded depth-first.
Breadth-First Search
In this technique, the children (i.e. the neighbours) of a node are visited before the
grandchildren (i.e. the neighbours of the neighbours).
1. Create a variable called NODE-LIST and set it to the initial state.
2. UNTIL a goal state is found OR NODE-LIST is empty DO
(a) Remove the first element from NODE-LIST and call it E. IF NODE-LIST was empty,
quit.
(b) FOR each way that each rule can match the state described in E DO
(i) Apply the rule to generate a new state.
(ii) IF the new state is a goal state, quit and return this state.
(iii) Otherwise add the new state to the end of NODE-LIST.
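A direct Python transcription of the algorithm, searching a small example graph (the graph itself is illustrative, not from the text):

```python
from collections import deque

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": []}

def breadth_first(start, goal):
    node_list = deque([start])      # NODE-LIST, set to the initial state
    visited = {start}
    while node_list:                # UNTIL goal found OR NODE-LIST is empty
        e = node_list.popleft()     # remove the first element and call it E
        if e == goal:
            return e                # quit and return this state
        for new_state in graph[e]:  # each new state reachable from E
            if new_state not in visited:
                visited.add(new_state)
                node_list.append(new_state)  # add to the END of NODE-LIST
    return None

print(breadth_first("A", "E"))  # → E
```

Because new states go to the end of NODE-LIST, all nodes at depth d are expanded before any node at depth d + 1, which is exactly the breadth-first order described above.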
Algorithm Depth-First Search
Depth-first search follows a path to its end before starting to explore another path.
1. IF the initial state is a goal state, quit and return success.
2. Otherwise DO the following until success or failure is signalled
(a) Generate a successor, E, of the initial state. If there are no more successors signal failure.
(b) Call Depth-First Search with E as the initial state.
(c) If success is returned signal success otherwise continue in the loop.
Depth-first search applies operators to each newly generated state, trying to drive directly
toward the goal.
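A recursive Python sketch mirroring the steps above, again on an illustrative graph (not from the text):

```python
graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}

def depth_first(state, goal, visited=None):
    visited = visited if visited is not None else set()
    if state == goal:                # step 1: the state is a goal state
        return [state]               # signal success with the path found
    visited.add(state)
    for successor in graph[state]:   # step 2(a): generate a successor E
        if successor not in visited:
            path = depth_first(successor, goal, visited)  # step 2(b): recurse
            if path:                 # step 2(c): propagate success
                return [state] + path
    return None                      # no more successors: signal failure

print(depth_first("A", "E"))  # → ['A', 'C', 'E']
```

The function call stack does the bookkeeping, which is the low-storage, easy-to-program property listed below.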
Advantages:
1. Low storage requirement: linear with tree depth.
2. Easily programmed: function call stack does most of the work of maintaining state of the
search.

Disadvantages:
1. May find a sub-optimal solution (one that is deeper or more costly than the best solution).
2. Incomplete: without a depth bound, may not find a solution even if one exists.
Bounded Depth-First Search
Depth-first search can spend much time (perhaps infinite time) exploring a very deep path
that does not contain a solution, when a shallow solution exists. An easy way to solve this
problem is to put a maximum depth bound on the search. Beyond the depth bound, a failure
is generated automatically without exploring any deeper.
Problems:
1. It's hard to guess how deep the solution lies.
2. If the estimated depth is too deep (even by 1) the computer time used is dramatically
increased.
3. If the estimated depth is too shallow, the search fails to find a solution; all that computer
time is wasted.

Iterative Deepening
Iterative deepening begins a search with a depth bound of 1, then increases the bound by 1
until a solution is found.
Advantages:
1. Finds an optimal solution (shortest number of steps).
2. Has the low (linear in depth) storage requirement of depth-first search.

Disadvantage:
1. Some computer time is wasted re-exploring the higher parts of the search tree. However,
this actually is not a very high cost.
Cost of Iterative Deepening
In general, (b - 1) / b of the nodes of a search tree are on the bottom row. If the branching
factor is b = 2, half the nodes are on the bottom; with a higher branching factor, the
proportion on the bottom row is higher.
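Iterative deepening is just repeated bounded depth-first search. A sketch, with an illustrative tree and depth limit (not from the text):

```python
graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": ["F"], "E": [], "F": []}

def depth_limited(state, goal, bound):
    if state == goal:
        return [state]
    if bound == 0:                   # beyond the depth bound: fail automatically
        return None
    for successor in graph[state]:
        path = depth_limited(successor, goal, bound - 1)
        if path:
            return [state] + path
    return None

def iterative_deepening(start, goal, max_depth=20):
    for bound in range(max_depth + 1):   # bound = 0, 1, 2, ...
        path = depth_limited(start, goal, bound)
        if path:
            return path                  # the first solution found is shallowest
    return None

print(iterative_deepening("A", "F"))  # → ['A', 'B', 'D', 'F']
```

The outer loop guesses no depth at all: it simply retries with a larger bound, which is why the guessing problems of plain bounded depth-first search disappear.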
Heuristic Search
A heuristic is a method that might not always find the best solution but usually finds a good
solution in reasonable time. By sacrificing completeness it increases efficiency. Heuristics
are particularly useful for tough problems that could not be solved any other way, where a
complete solution would require effectively infinite time, i.e. far longer than a lifetime. The
aim is to use heuristics to find a good solution in acceptable time rather than a complete
solution in infinite time. The next example illustrates the need for heuristic search, as
finding the exact solution requires a very long time.
Example:
The travelling salesman problem
A salesperson has a list of cities to visit and she must visit each city only once. There are
distinct routes between the cities. The problem is to find the shortest route between the cities
so that the salesperson visits all the cities once. Suppose there are N cities; then one approach
that would work would be to examine all N! possible orderings and take the shortest, that
being the required route. This is not efficient, as with N = 10 there are 3,628,800 possible
routes. This is an example of combinatorial explosion.
There are better methods for solution, one is called branch and bound.
Generate the complete paths one at a time, keeping the shortest complete path found so far,
and abandon any partial path as soon as its length exceeds that of the shortest complete path.
Although this is better than the previous method, it is still exponential.
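A branch-and-bound sketch for the problem; the four cities and the distance table are my own illustrative example, not from the text:

```python
dist = {("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
        ("B", "C"): 6, ("B", "D"): 4, ("C", "D"): 3}

def d(x, y):
    return dist[(x, y)] if (x, y) in dist else dist[(y, x)]

def tsp(cities, start="A"):
    best = [float("inf"), None]          # [shortest tour length, its route]
    def extend(route, length):
        if length >= best[0]:            # bound: abandon this partial path
            return
        if len(route) == len(cities):    # complete path: close the tour
            total = length + d(route[-1], start)
            if total < best[0]:
                best[:] = [total, route]
            return
        for city in cities - set(route): # branch: try each unvisited city
            extend(route + [city], length + d(route[-1], city))
    extend([start], 0)
    return best

print(tsp({"A", "B", "C", "D"}))  # shortest tour has length 18
```

The pruning test at the top of `extend` is the "abandon any path when its length so far exceeds the shortest path length" rule: whole subtrees of orderings are skipped, but in the worst case the search is still exponential.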
Heuristic Search applied to the travelling salesman problem
Applying this concept to the travelling salesperson problem:
1. Select a city at random as a start point.
2. Repeat steps 3 and 4 until all cities have been visited.
3. From the list of cities still to be visited, choose the one nearest to the current city.
4. Go to it.
This produces a significant improvement, reducing the time from order N! to roughly order N² (each of the N steps scans the remaining cities). For this heuristic it is also possible to produce a bound on the error in the answer it generates, although in general such error bounds cannot be produced. In real problems the value of a particular solution is trickier to establish; this problem is easier because the quality of a route is measured in miles, whereas other problems have vaguer measures. Although heuristics can be created for unstructured knowledge, producing a cogent analysis is another issue, which means the solution lacks reliability. Rarely is an optimal solution required; good approximations usually suffice. Although heuristic solutions can be bad in the worst case, the worst case occurs very infrequently, and good solutions now exist for the most common cases. Understanding why heuristics appear to work also increases our understanding of the problem. This method of searching is a general method which can be applied to problems of the following type.
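The numbered steps above can be sketched in Python. The city names and the distance table are hypothetical, assumed symmetric for illustration.

```python
# Hypothetical symmetric distances between four cities.
DIST = {
    ("A", "B"): 10, ("A", "C"): 15, ("A", "D"): 20,
    ("B", "C"): 35, ("B", "D"): 25, ("C", "D"): 30,
}

def dist(a, b):
    """Distance between two cities, in either direction."""
    return DIST[(a, b)] if (a, b) in DIST else DIST[(b, a)]

def nearest_neighbour_tour(start, cities):
    """Greedy heuristic: from the current city, always travel to the
    nearest unvisited city, until every city has been visited."""
    tour = [start]
    unvisited = set(cities) - {start}
    total = 0
    while unvisited:
        current = tour[-1]
        nxt = min(unvisited, key=lambda c: dist(current, c))  # step 3
        total += dist(current, nxt)
        tour.append(nxt)                                      # step 4
        unvisited.remove(nxt)
    total += dist(tour[-1], start)   # return to the start city
    return tour, total
```

Each step scans only the remaining cities, so the whole tour is built in polynomial time rather than by examining N! orderings; the result is not guaranteed to be optimal.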
Problem Characteristics.
 Is the problem decomposable into a set of nearly independent, smaller or easier sub-problems?
 Can the solution steps be ignored, or at least undone, if they prove unwise?
 Is the problem's universe predictable?
 Is a good solution to the problem obvious without comparison to all other possible solutions?
 Is the desired solution a state of the world or a path to a state?
 Is a large amount of knowledge absolutely required to solve the problem, or is knowledge important only to constrain the search?
 Can a computer that is simply given the problem return the solution, or will solving the problem require interaction between the computer and a person?
The design of search programs.
Each search process can be considered a tree traversal exercise. The object of the search is to find a path from an initial state to a goal state in a tree. The number of nodes generated might be immense, and in practice many of the nodes would not be needed. The secret of a good search routine is to generate only those nodes that are likely to be useful. Rather than building an explicit tree, the rules are used to represent the tree implicitly, and nodes are created explicitly only if they are actually to be of use. The following issues arise when searching:
• The tree can be searched forwards from the initial state to the goal state, or backwards from the goal state to the initial state.
• How to select applicable rules: it is critical to have an efficient procedure for matching rules against states.
• How to represent each node of the search process: this is the knowledge representation problem, or the frame problem. In games an array suffices; in other problems more complex data structures are needed.
Breadth-first search keeps a record of all the nodes it generates, and depth-first search can be modified to do the same, which makes it possible to check for duplicate nodes:
1. Examine the set of nodes already generated to see whether the new node is present.
2. If it does not already exist, add it to the graph.
3. If it does already exist, then:
A. Set the node that is being expanded to point to the already existing node as its successor, rather than to the new one; the new node can be thrown away.
B. If the best or shortest path is being determined, check whether this path to the existing node is better or worse than the old one. If worse, do nothing; if better, save the new path and work the change in length through the chain of successor nodes as necessary.
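The duplicate-check procedure can be sketched as follows. The graph layout (a dict mapping each state to its best cost so far and its parent) is an illustrative assumption, not a fixed convention.

```python
def record_node(graph, parent, state, cost):
    """Add a newly generated node to the search graph, first checking
    whether a node for the same state already exists."""
    if state not in graph:
        # Step 2: genuinely new node, so add it to the graph.
        graph[state] = {"cost": cost, "parent": parent}
    elif cost < graph[state]["cost"]:
        # Step 3B: the new path to the existing node is better, save it.
        # (A full implementation would also propagate the improvement
        # through the chain of successor nodes.)
        graph[state] = {"cost": cost, "parent": parent}
    # Step 3A: otherwise the new node duplicates a node reached by an
    # equal or worse path, and is simply thrown away.
    return graph
```

For example, recording state "A" first at cost 5 and then again at cost 3 keeps only the cheaper entry; a later, worse path at cost 7 changes nothing.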
Knowledge representation
Much intelligent behaviour is based on the use of knowledge; humans spend a third of their useful lives becoming educated. There is not yet a clear understanding of how the brain represents knowledge.
Knowledge representation (KR) is an area of artificial intelligence concerned with how to formally "think": how to use a symbol system to represent a "domain of discourse" (that which can be talked about), along with functions, which may or may not themselves be within the domain of discourse, that allow inference (formalized reasoning) about the objects within it. Knowledge representation is the study of how knowledge about the world can be represented and what kinds of reasoning can be done with that knowledge. In order to use knowledge and reason with it, you need what we call a representation and reasoning system (RRS).
A representation and reasoning system is composed of a language to communicate with a computer, a way to assign meaning to the language, and procedures to compute answers given input in the language. Intuitively, an RRS lets you tell the computer something in a language whose sentences have an associated meaning; you can then ask the computer questions, and it will produce answers that you can interpret according to that meaning.
There are several important issues in knowledge representation:
• How knowledge is stored;
• How knowledge that is applicable to the current problem can be retrieved;
• How reasoning can be performed to derive information that is implied by existing
knowledge but not stored directly.
The storage and reasoning mechanisms are usually closely coupled. It is necessary to
represent the computer's knowledge of the world by some kind of data structures in the
machine's memory. Traditional computer programs deal with large amounts of data that are
structured in simple and uniform ways. A.I. programs need to deal with complex
relationships, reflecting the complexity of the real world.
Typical problem-solving (and hence many AI) tasks can commonly be reduced to:
• representation of input and output data as symbols in a physical symbol system;
• reasoning by processing symbol structures, resulting in other symbol structures.
Some problems highlight search, whilst others highlight knowledge representation. Several kinds of knowledge might need to be represented in AI systems:
 Objects: facts about objects in our world domain, e.g. guitars have strings, trumpets are brass instruments.
 Events: actions that occur in our world, e.g. Steve Vai played the guitar in Frank Zappa's band.
 Performance: a behaviour like playing the guitar involves knowledge about how to do things.
 Meta-knowledge: knowledge about what we know, e.g. Bobrow's robot, which plans a trip; it knows that it can read street signs along the way to find out where it is.
Thus in solving problems in AI we must represent knowledge, and there are two entities to deal with:
 Facts: truths about the real world, and what we represent. This can be regarded as the knowledge level.
 Representations of the facts, which we manipulate. This can be regarded as the symbol level, since we usually define the representation in terms of symbols that can be manipulated by programs.
We can structure these entities at two levels: the knowledge level, at which facts are described, and the symbol level, at which representations of objects are defined in terms of symbols that can be manipulated in programs.
Using Knowledge
We have briefly mentioned where knowledge is used in AI systems. Let us consider a little further to what applications knowledge may be put, and how.
 Learning: acquiring knowledge. This is more than simply adding new facts to a knowledge base. New data may have to be classified prior to storage for easy retrieval; it must interact and be reconciled with existing facts, both to avoid redundancy and replication in the knowledge and so that facts can be updated.
 Retrieval: the representation scheme used can have a critical effect on the efficiency of this. Humans are very good at it.
 Reasoning: inferring facts from existing data.
Properties for Knowledge Representation Systems
The following properties should be possessed by a knowledge representation system:
 Representational adequacy: the ability to represent the required knowledge.
 Inferential adequacy: the ability to manipulate the knowledge represented to produce new knowledge corresponding to that inferred from the original.
 Inferential efficiency: the ability to direct the inferential mechanisms in the most productive directions by storing appropriate guides.
 Acquisitional efficiency: the ability to acquire new knowledge using automatic methods wherever possible rather than relying on human intervention.
To date no single system optimizes all of the above.
Approaches to Knowledge Representation
- Simple relational knowledge
The simplest way of storing facts is a relational method in which each fact about a set of objects is set out systematically in columns. This representation gives little opportunity for inference, but it can be used as the knowledge basis for inference engines. We can ask things like: Who is dead? Who plays jazz trumpet? This sort of representation is popular in database systems.
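A minimal sketch of simple relational knowledge in Python (the musicians and their attributes are made up for illustration): each fact is a row, each attribute a column, and queries scan the table.

```python
# Each row states the facts about one musician, column by column.
MUSICIANS = [
    {"name": "Miles", "instrument": "trumpet", "genre": "jazz", "dead": True},
    {"name": "Steve", "instrument": "guitar",  "genre": "rock", "dead": False},
    {"name": "John",  "instrument": "sax",     "genre": "jazz", "dead": True},
]

def who(predicate):
    """Answer a 'who ...?' question by scanning every row of the table."""
    return [row["name"] for row in MUSICIANS if predicate(row)]
```

With this table, `who(lambda r: r["dead"])` returns `["Miles", "John"]`, and `who(lambda r: r["genre"] == "jazz" and r["instrument"] == "trumpet")` returns `["Miles"]`. Note that the only "inference" available is exhaustive scanning, which is why this representation suits databases more than reasoning.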
- Inheritable knowledge
Relational knowledge is made up of objects consisting of attributes and corresponding associated values. Inheritable knowledge extends the base by allowing inference mechanisms such as:
• Property inheritance: elements inherit values from the classes of which they are members. The data must be organized into a hierarchy of classes, as shown in the figure below.
Boxed nodes represent objects and the values of attributes of objects. Values can themselves be objects with attributes, and so on. Arrows point from an object to its value. This structure is known as a slot and filler structure, a semantic network, or a collection of frames. The algorithm to retrieve a value for an attribute of an instance object is:
1. Find the object in the knowledge base.
2. If there is a value for the attribute, report it.
3. Otherwise, look for a value of the instance attribute; if there is none, fail.
4. Otherwise, go to the node that instance points to and look for a value of the attribute there; if found, report it.
5. Otherwise, search upwards through isa links until a value is found for the attribute.
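The retrieval steps above can be sketched in Python over a toy slot-and-filler structure; the nodes and attributes below are hypothetical.

```python
# Toy knowledge base: each node has attribute/value slots, plus optional
# "instance" and "isa" slots linking it into a class hierarchy.
KB = {
    "Bird": {"has_wings": True, "flies": True},
    "Emu":  {"isa": "Bird", "flies": False},
    "fred": {"instance": "Emu"},
}

def get_value(kb, node, attribute):
    """Retrieve an attribute value for an object, climbing instance and
    isa links until a value is found or the hierarchy runs out."""
    while node is not None:
        slots = kb.get(node, {})
        if attribute in slots:                          # value stored directly
            return slots[attribute]
        node = slots.get("instance", slots.get("isa"))  # climb one level
    return None                                         # no value found: fail
```

Note that fred, an Emu, inherits flies = False from Emu, which overrides the default flies = True stored higher up at Bird: the search stops at the first value found.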
Inferential Knowledge
Knowledge is represented as formal logic, e.g. "all dogs have tails": dog(x) → hasatail(x). Advantages:
 A set of strict rules that can be used to derive more facts.
 Truths of new statements can be verified.
 Guaranteed correctness.
 Many inference procedures are available to implement standard rules of logic.
 Popular in AI systems, e.g. automated theorem proving.
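A minimal sketch of inference over such rules, using ground (variable-free) facts for simplicity; the predicate names follow the dog/hasatail example above.

```python
def forward_chain(facts, rules):
    """Repeatedly apply premise -> conclusion rules until no new facts
    can be derived (a tiny propositional forward chainer)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

FACTS = {"dog(rex)"}
RULES = [("dog(rex)", "hasatail(rex)")]
```

A real logic system would unify variables (the x in dog(x)) against facts rather than matching ground strings; this sketch only shows the derive-until-fixpoint behaviour that makes logical representations inferentially adequate.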
Procedural Knowledge
Basic idea: knowledge is encoded in procedures, small programs that know how to do specific things and how to proceed. E.g. a parser in a natural language understander has the knowledge that a noun phrase may contain articles, adjectives and nouns; this is represented by calls to routines that know how to process articles, adjectives and nouns.
Advantages:
 Heuristic or domain-specific knowledge can be represented.
 Extended logical inferences, such as default reasoning, are facilitated.
 Side effects of actions may be modelled, e.g. some rules may become false over time; keeping track of this in large systems may be tricky.
Disadvantages:
 Completeness: not all cases may be represented.
 Consistency: not all deductions may be correct. E.g. if we know that Fred is a bird, we might deduce that Fred can fly; later we might discover that Fred is an emu.
 Modularity is sacrificed: changes to the knowledge base might have far-reaching effects.
 Cumbersome control information.
Issues in Knowledge Representation
Below are issues that should be raised when using a knowledge representation technique:
 Important attributes: are there any attributes that occur in many different types of problem? There are two, instance and isa, and each is important because each supports property inheritance.
 Relationships: what about the relationships between the attributes of an object, such as inverses, existence, techniques for reasoning about values, and single-valued attributes?
Granularity
At what level should the knowledge be represented, and what are the primitives? Primitives are fundamental concepts such as holding, seeing and playing. As English is a very rich language, with over half a million words, it is clearly difficult to decide which words to choose as primitives in a given series of situations. If Tom feeds a dog this could become feeds(tom, dog); if Tom gives the dog a bone, gives(tom, dog, bone). Are these the same? In any sense, does giving an object food constitute feeding? If give(x, food) → feed(x), then we are making progress.
Logic Knowledge Representation
Here we highlight the major principles involved in knowledge representation. In particular, predicate logic will be met again in other knowledge representation schemes and reasoning methods.
Procedural Knowledge Representations
Declarative or Procedural?
Declarative knowledge representation:
 Static representation: knowledge about objects, events etc., their relationships and their states is given.
 Requires a program that knows what to do with the knowledge and how to do it.
Procedural representation:
 The control information necessary to use the knowledge is embedded in the knowledge itself, e.g. how to find relevant facts, make inferences etc.
 Requires an interpreter to follow the instructions specified in the knowledge.
Semantic Nets
The major ideas are that:
 the meaning of a concept comes from its relationships to other concepts, and
 the information is stored by interconnecting nodes with labelled arcs.
Inference in a Semantic Net
The basic inference mechanism is to follow links between nodes. There are two methods of doing this:
Intersection search: the notion that spreading activation out from two nodes and finding their intersection finds relationships among objects. This is achieved by assigning a special tag to each visited node. It has many advantages, including entity-based organisation and fast parallel implementation; however, very structured questions need highly structured networks.
Inheritance: the isa and instance representations provide a mechanism to implement this. Inheritance also provides a means of dealing with default reasoning, e.g. we could represent:
• Emus are birds.
• Typically birds fly and have wings.
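Intersection search can be sketched as spreading activation from each of two nodes and intersecting the sets of tagged nodes; the tiny net below is hypothetical.

```python
# A tiny semantic net: each node has a list of labelled, directed arcs.
NET = {
    "emu":    [("isa", "bird")],
    "canary": [("isa", "bird"), ("colour", "yellow")],
    "bird":   [("has_part", "wings"), ("can", "fly")],
}

def activate(net, start):
    """Spread activation outwards from start, tagging every node reached."""
    tagged, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node not in tagged:
            tagged.add(node)
            frontier.extend(target for _label, target in net.get(node, []))
    return tagged

def intersection_search(net, a, b):
    """Relationships between a and b lie where their activations meet."""
    return activate(net, a) & activate(net, b)
```

Here `intersection_search(NET, "emu", "canary")` meets at bird, wings and fly, i.e. the two animals are related through their common superclass and its properties.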
Frames
Frames can be regarded as an extension of semantic nets; indeed, it is not clear where the distinction between a semantic net and a frame system lies. Semantic nets were initially used to represent labelled connections between objects. As tasks became more complex, representations needed to be more structured, and the more structured the system, the more beneficial it becomes to use frames. A frame is a collection of attributes, or slots, and associated values that describe some real-world entity. Frames on their own are not particularly helpful, but frame systems are a powerful way of encoding information to support reasoning. Set theory provides a good basis for understanding frame systems. Each frame represents:
 a class (set), or
 an instance (an element of a class).
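A frame system can be sketched as a set of slot collections linked by isa, with classes and instances distinguished only by their role; the frames below (reusing the Steve Vai example) are hypothetical.

```python
# Each frame is a collection of slots; "isa" links an instance to its
# class (set membership) or a class to its superclass (subset).
FRAMES = {
    "Musician":  {"isa": None,        "occupation": "musician"},
    "Guitarist": {"isa": "Musician",  "instrument": "guitar"},
    "steve":     {"isa": "Guitarist", "name": "Steve Vai"},
}

def get_slot(frames, frame, slot):
    """Look up a slot value, inheriting from parent frames via isa."""
    while frame is not None:
        if slot in frames[frame]:
            return frames[frame][slot]
        frame = frames[frame]["isa"]
    return None
```

The instance frame steve stores only what is specific to it; instrument and occupation are found by the same upward search used for semantic-net inheritance, which is why the two formalisms blur into each other.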