AI Module 4
Knowledge Acquisition
General Concepts in Knowledge Acquisition
Types of learning
Difficulty in Knowledge Acquisition
General learning model
Performance measures.
Early work in Machine Learning
Perceptron
Checkers playing example
Learning automata
Genetic algorithms
Intelligent editors
Analogical and Explanation Based Learning
Analogical Reasoning and learning, Examples,
Explanation based learning.
Knowledge Acquisition - General Concepts in Knowledge Acquisition.
Knowledge acquisition is the process of extracting, structuring and organizing knowledge from one source,
usually human experts, so it can be used in software.
For example, the working knowledge of an accountant using an ERP system can be captured so that, in future, an AI system can carry out that work smoothly without the accountant.
Acquired knowledge may consist of facts, rules, concepts, procedures, relationships, statistics, or other useful
information. Sources of this knowledge may include human experts, textbooks, databases, reports, and other documents.
The classification is independent of the knowledge domain and the representation scheme used. It is based
on the type of inference strategy employed or the methods used in the learning process.
In addition to the above classification, we will sometimes refer to learning methods as either weak methods or
knowledge-rich methods. Weak methods are general purpose methods in which little or no initial knowledge is available.
Knowledge Acquisition Difficulties
Time-consuming process
Problems in Transferring Knowledge
Expressing Knowledge
Transfer to a Machine
Number of Participants
Structuring Knowledge
Other Difficulties
Performance measures evaluate the performance of a given system, or compare the relative performance of two different systems;
such comparisons are possible only when standard performance measures are available.
Alternatively, the Turing Test is a method of inquiry in artificial intelligence (AI) for determining whether or not a computer
is capable of thinking like a human being.
Generality.
Generality is a measure of the ease with which a learning method can be adapted to different domains of application;
a completely general method is usable in a wide variety of domains.
Efficiency.
The efficiency of a method is a measure of the average time required to construct the target knowledge
structures from some specified initial structures.
Robustness.
Robustness is the ability of a learning system to function with unreliable feedback and with a variety of training
examples. A robust system must be able to build tentative structures which are subject to modification or
withdrawal if later found to be inconsistent with statistically sound structures.
Efficacy. The efficacy of a system is a measure of the overall power of the system. It is a combination of the
factors generality, efficiency, and robustness.
Ease of implementation.
Ease of implementation relates to the complexity of the programs and data structures and the resources
required to develop the given learning system. Lacking good complexity metrics, this measure will often be
somewhat subjective.
Early work in Machine Learning
These early designs were self-adapting systems which modified their own structures in an attempt to produce
an optimal response to some input stimuli.
The fourth approach was modelled after survival of the fittest through population genetics.
The perceptron is an algorithm for supervised learning of binary classifiers. A binary classifier is a
function which can decide whether or not an input, represented by a vector of numbers, belongs to
some specific class.
Perceptrons are pattern recognition or classification devices that are crude approximations of neural networks.
They make decisions about patterns by summing up evidence obtained from many small sources. They can be
taught to recognize one or more classes of objects through the use of stimuli.
Axons from one neuron can send messages to the dendrites of other neurons
The inputs to the system come through an array of sensors, such as a rectangular grid of light-sensitive pixels.
These sensors are randomly connected in groups to associative threshold units (ATUs), where the sensor
outputs are combined and added together. If the combined inputs to an ATU exceed some fixed
threshold, the ATU fires and produces a binary output.
The outputs from the ATUs are each multiplied by adjustable parameters or weights w_i (i = 1, 2, ..., k) and the
results added together in a terminal comparator unit. If the input to the comparator exceeds a given
threshold level T, the perceptron produces a positive response of 1 (yes), corresponding to a sample
classification of class 1. Otherwise, the output is 0 (no), corresponding to an object classification of non-
class-1.
All components of the system are fixed except the weights w_i, which are adjusted through a punishment-
reward process described below. This learning process continues until optimal values of w_i are found, at which
time the system will have learned the proper classification of objects for the two different classes.
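The punishment-reward weight adjustment can be sketched as follows. This is a minimal illustration, not Rosenblatt's original hardware procedure; the learning rate and the AND-gate training data are assumptions chosen for the example:

```python
def train_perceptron(samples, epochs=100, lr=0.1):
    """Punishment-reward (error-correction) training of one perceptron.

    samples: list of (input_vector, label), label 1 = class 1, 0 = non-class-1.
    Returns the learned weights w_i and a bias (which absorbs the threshold T).
    """
    n = len(samples[0][0])
    w = [0.0] * n          # adjustable weights w_i
    b = 0.0                # bias term (plays the role of -T)
    for _ in range(epochs):
        errors = 0
        for x, label in samples:
            s = sum(wi * xi for wi, xi in zip(w, x)) + b
            out = 1 if s > 0 else 0            # comparator unit
            if out != label:                   # punishment: adjust the weights
                delta = lr * (label - out)
                w = [wi + delta * xi for wi, xi in zip(w, x)]
                b += delta
                errors += 1
        if errors == 0:                        # all samples classified correctly
            break
    return w, b

# Linearly separable toy data: the logical AND function
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
for x, label in data:
    out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
    print(x, "->", out)
```

For linearly separable data such as this, the perceptron convergence theorem guarantees the punishment-reward cycle terminates with correct weights.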
CHECKERS PLAYING
During the 1950s and 1960s Samuel (1959, 1967) developed a program which could learn to play checkers at
a master's level.
The program learned while playing the game of checkers, either with a human opponent or with a copy of itself. At each state of
the game, the program checks to see if it has remembered a best-move value for that state. If not, the
program explores ahead three moves (it determines all of its possible moves;
for each of these, it finds all of its opponent's moves; and for each of those, it determines all of its next
possible moves). The program then computes an advantage or win-value estimate for all of the ending board
states. These values determine the best move for the system from the current state. The current board
state and its corresponding value are stored using an indexed address scheme for subsequent recall.
At board state K, the program looks ahead two moves and computes the value of each possible resultant board state. It then works backward by first finding the minimum board values at state K + 2 in each group of moves made from state K + 1 (minimums = 4, 3, and 2). These minimums correspond to the moves the opponent would make from each position when at state K + 1. The program then chooses the maximum of these minimums as the best (minimax) move it can make from the present board state K (maximum = 4). By looking ahead three moves, the system can be assured it can do no worse than this minimax value. The board state and the corresponding minimax value for a three-move-ahead sequence are stored in Samuel's system. These values are then available for subsequent use when the same state is encountered during a new game.
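The backward-working procedure above is minimax search. The sketch below uses a hypothetical three-branch game tree whose group minimums are 4, 3, and 2, matching the worked numbers in the text; the tree, leaf values, and function names are assumptions for illustration:

```python
def minimax_value(state, depth, maximizing, moves, evaluate):
    """Generic minimax look-ahead.

    moves(state) yields successor states; evaluate(state) is the
    advantage (win-value) estimate applied at the search frontier.
    """
    if depth == 0 or not moves(state):
        return evaluate(state)
    values = [minimax_value(s, depth - 1, not maximizing, moves, evaluate)
              for s in moves(state)]
    return max(values) if maximizing else min(values)

# Toy tree: from state K the program's three moves lead to groups of
# opponent replies whose minimums are 4, 3, and 2.
tree = {
    "K": ["A", "B", "C"],
    "A": ["A1", "A2"], "B": ["B1", "B2"], "C": ["C1", "C2"],
}
leaf_values = {"A1": 4, "A2": 6, "B1": 3, "B2": 5, "C1": 2, "C2": 7}

moves = lambda s: tree.get(s, [])
evaluate = lambda s: leaf_values[s]
print(minimax_value("K", 2, True, moves, evaluate))  # maximum of minimums: 4
```

The opponent is assumed to pick the minimum of each group, so the program can guarantee it does no worse than the maximum of those minimums.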
Learning Automaton.
A learning automaton is one type of machine learning algorithm studied since the 1970s. Learning
automata select their current action based on past experiences from the environment.
A learning automaton has two components: the automaton and the environment.
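One common way an automaton adapts to environmental feedback is the linear reward-penalty (L_R-P) update of its action probabilities. The sketch below is illustrative only: the two-action environment, its penalty probabilities, and the step sizes are all assumptions, not from the source:

```python
import random

def l_rp_step(probs, action, rewarded, a=0.1, b=0.1):
    """One linear reward-penalty (L_R-P) update of action probabilities.

    On reward, the chosen action's probability is increased and the rest
    decreased; on penalty, the chosen action's probability is decreased
    and redistributed. Probabilities always sum to 1.
    """
    n = len(probs)
    new = probs[:]
    if rewarded:
        for j in range(n):
            new[j] = probs[j] + a * (1 - probs[j]) if j == action else (1 - a) * probs[j]
    else:
        for j in range(n):
            new[j] = (1 - b) * probs[j] if j == action else b / (n - 1) + (1 - b) * probs[j]
    return new

random.seed(0)
penalty = [0.1, 0.8]       # assumed environment: per-action penalty probability
p = [0.5, 0.5]             # start indifferent between the two actions
for _ in range(2000):
    act = 0 if random.random() < p[0] else 1
    rewarded = random.random() >= penalty[act]
    p = l_rp_step(p, act, rewarded)
print(p)  # the less-penalized action 0 should dominate
```

Over many interactions the automaton shifts probability toward the action the environment penalizes least, which is the "selection based on past experience" described above.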
GENETIC ALGORITHM
Genetic Algorithm (GA) is a search-based optimization technique based on the principles of Genetics and
Natural Selection. It is frequently used to find optimal or near-optimal solutions to difficult problems which
otherwise would take a lifetime to solve.
The basic GA cycle is:
• Initialization
• Selection according to fitness
• Crossover between selected chromosomes
• Mutation
• Repeat the cycle until the termination condition is true
For example, single-point crossover combines two parent chromosomes such as 00111101 and 00101010 by swapping their tails after a chosen cut point.
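The cycle can be sketched as a minimal GA. The "one-max" fitness function (count of 1 bits), population size, and crossover/mutation rates below are assumptions chosen for illustration:

```python
import random

def genetic_algorithm(fitness, length=8, pop_size=20, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02):
    """Minimal GA: initialization, fitness-proportional (roulette-wheel)
    selection, single-point crossover, bit-flip mutation; repeats the cycle
    for a fixed number of generations and returns the best chromosome seen."""
    random.seed(42)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        scores = [fitness(c) for c in pop]
        total = sum(scores)

        def select():  # roulette wheel: probability proportional to fitness
            r = random.uniform(0, total)
            acc = 0.0
            for c, s in zip(pop, scores):
                acc += s
                if acc >= r:
                    return c
            return pop[-1]

        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select()[:], select()[:]
            if random.random() < crossover_rate:     # single-point crossover
                cut = random.randint(1, length - 1)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):                   # bit-flip mutation
                nxt.append([bit ^ (random.random() < mutation_rate) for bit in child])
        pop = nxt[:pop_size]
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > fitness(best):
            best = gen_best
    return best

# One-max toy problem: maximize the number of 1 bits (+1 keeps total fitness > 0)
best = genetic_algorithm(lambda c: sum(c) + 1)
print(best, "ones:", sum(best))
```

Fitness-proportional selection steadily concentrates the population on high-fitness chromosomes, while mutation maintains the diversity needed to escape local optima.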
Intelligent editors
An intelligent editor acts as an interface between a domain expert and an expert system. They permit a domain expert to
interact directly with the system without the need for an intermediary to code the knowledge.
The editor has direct access to the knowledge in the expert system and knows the structure of that knowledge. Through
the editor, an expert can create, modify, and delete rules without a knowledge of the internal structure of the rules.
Some editors have the ability to suggest reasonable alternatives and to prompt the expert for clarifications when
required.
Expert system, a computer program that uses artificial intelligence methods to solve complex problems within a specialized
domain that ordinarily requires human expertise.
Expert systems are capable of: advising; instructing and assisting humans in decision making; demonstrating; deriving a solution; diagnosing; explaining; interpreting input; predicting results; justifying conclusions; and suggesting alternative options to a problem.
Analogical Learning
Learning by analogy is one of the fundamental insights of artificial intelligence. Humans draw
on past experience to solve current problems very well. Such remembered experiences include
problems and their solutions, plans, situations, episodes, and so forth. Analogies play a dominant
role in human reasoning and learning processes. Previously remembered experiences are
transformed and extended to fit new, unfamiliar situations.
1. Recognition: A new problem or situation is recognized as being similar to previously
encountered ones.
2. Access and recall: The similarity of the new problem to previously experienced ones serves
as an index with which to access and recall one or more candidate experiences (analogues).
3. Selection and mapping: Relevant parts of the recalled experiences are selected for their
similarities and mapped from the base to the target domain.
4. Extending the mapped experience: The newly mapped analogues are modified and
extended to fit the target domain situation.
5. Validation and generalization: The newly formulated solution is validated for its
applicability through some form of trial process (such as theorem provers or simulation). If
the validation is supported, a generalized solution is formed which accounts for both the old
and the new situations.
EXAMPLES OF ANALOGICAL LEARNING SYSTEMS
Winston's System
Winston's system learned by determining when there were similarities among the relationships and motives of two groups of characters.
Recall of analogous situations: When presented with a current situation, candidate analogues were retrieved from
memory using a hierarchical indexing scheme.
Similarity matching: In selecting the best of the known situations during the reminding process described above, a
similarity-matching score is computed for each of the recalled candidates. A score is computed for all slot pairings
between the two frames, and the pairing having the highest score is selected as the proper analogue.
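A naive sketch of such slot-pairing similarity scoring follows. The story frames, slot names, and weighting scheme are hypothetical; Winston's actual scoring function is more elaborate:

```python
def similarity_score(base_frame, target_frame, weights=None):
    """Score two frames by counting slots whose fillers agree,
    optionally weighting some slots more heavily than others."""
    weights = weights or {}
    score = 0.0
    for slot, filler in base_frame.items():
        if target_frame.get(slot) == filler:
            score += weights.get(slot, 1.0)
    return score

# Hypothetical frames: pick the stored analogue most similar to the target
target = {"actor": "king", "motive": "ambition", "outcome": "downfall"}
candidates = {
    "Macbeth":    {"actor": "king", "motive": "ambition", "outcome": "downfall"},
    "Cinderella": {"actor": "girl", "motive": "love", "outcome": "marriage"},
}
best = max(candidates, key=lambda k: similarity_score(candidates[k], target))
print(best)  # Macbeth matches on all three slots
```

The candidate with the highest score is selected as the proper analogue for the base-to-target mapping step.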
Mapping base-to-target situations: The base-to-target analogue mapping process used in this system depends on
the similarity of parts between base and target domains and role links that can be established between the two.
Greiner's NLAG System
Russell Greiner (1988) developed an analogical learning system he called NLAG . The system requires three
inputs and produces a single output. The inputs are an impoverished theory (a theory lacking in knowledge with
which to solve a given problem), an analogical hint, and the given problem for solution. The output from the
system is a solution conjecture for the problem based on the analogical hint.
Carbonell developed two analogical systems for problem solving, each based on a different perception of the
analogical process.
The first system was based on what he termed transformational analogy,
the second on derivational analogy. The major differences between the two methods lie in the amount of
detail remembered and stored for situations (such as problem solution traces) and the methods used in the
base-to-target domain transformation process.
Explanation based learning
The classic example is learning the concept of a cup: the domain theory contains general rules about cups, and the observation supplies facts about a single training example.
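A toy sketch of the EBL idea follows. The cup domain theory, predicate names, and operationality set are illustrative assumptions, not from the source: the system explains why one object is a cup by tracing rules back to operational (directly observable) facts, and keeps only those facts as the learned definition, pruning irrelevant observations such as colour:

```python
DOMAIN_THEORY = {            # head predicate: list of body predicates
    "cup": ["liftable", "stable", "open_vessel"],
    "liftable": ["light", "has_handle"],
    "stable": ["flat_bottom"],
    "open_vessel": ["has_concavity", "concavity_points_up"],
}
OPERATIONAL = {"light", "has_handle", "flat_bottom",
               "has_concavity", "concavity_points_up"}  # observable predicates

def explain(goal, facts):
    """Return the operational leaves supporting `goal`, or None if unprovable."""
    if goal in OPERATIONAL:
        return {goal} if goal in facts else None
    if goal not in DOMAIN_THEORY:
        return None
    leaves = set()
    for sub in DOMAIN_THEORY[goal]:
        sub_leaves = explain(sub, facts)
        if sub_leaves is None:          # one unprovable subgoal fails the rule
            return None
        leaves |= sub_leaves
    return leaves

# Observation: facts about one training example; "red" is irrelevant
obj1 = {"light", "has_handle", "flat_bottom", "has_concavity",
        "concavity_points_up", "red"}
learned = explain("cup", obj1)
print(sorted(learned))  # operational definition of cup; "red" is pruned away
```

Unlike similarity-based learning, a single example suffices here because the domain theory, not statistics over many examples, justifies the generalization.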
1. Describe two examples of analogical learning you have
experienced recently.
2. Explain different types of learning.
3. Explain performance measures.
4. Explain early work in Machine Learning: the perceptron
and genetic algorithms.
5. What are intelligent editors?
6. Explain analogical reasoning and learning, with examples.
7. Write a short note on explanation-based learning.
8. Define operationality as it applies to explanation-based
learning and give an example of it as applied to some task.