AI Module 4


MODULE IV

 Knowledge Acquisition
 General Concepts in Knowledge Acquisition
 Types of learning
 Difficulty in Knowledge Acquisition
 General learning model
 Performance measures.
 Early work in Machine Learning
 Perceptron
 Checkers playing example
 Learning automata
 Genetic algorithms
 Intelligent editors
 Analogical and Explanation Based Learning
 Analogical Reasoning and learning, Examples,
 Explanation based learning.
Knowledge Acquisition - General Concepts in Knowledge Acquisition.

Knowledge acquisition is the process of extracting, structuring and organizing knowledge from a source,
usually human experts, so that it can be used in software.

For example, knowledge can be taken from an accountant about his work in an ERP system; in the future, with AI,
the company's work can go on smoothly even without the accountant.

Acquired knowledge may consist of facts, rules, concepts, procedures, relationships, statistics, or other useful
information. Sources of this knowledge may include:

Experts in the domain of interest
Textbooks
Technical papers
Databases
Reports
The environment
Normal Knowledge Acquisition Techniques

• Observe the person solving real problems.
• Through discussions, identify the kinds of data, knowledge and procedures required to solve different types of problems.
• Build scenarios with the expert that can be associated with different problem
types.
• Have the expert solve a series of problems verbally and ask the rationale behind
each step.
• Develop rules based on the interviews and solve the problems with them.
• Have the expert review the rules and the general problem solving procedure.
• Compare the responses of outside experts to a set of scenarios obtained from the project's expert.
Types of learning

• New knowledge is learned through different methods, depending on the type of material to be learned, the amount of relevant knowledge we already possess, and the environment in which the learning takes place.

The classification is independent of the knowledge domain and the representation scheme used. It is based
on the type of inference strategy employed or the methods used in the learning process.

The five different learning methods under this taxonomy are

Memorization (rote learning)
Direct instruction (by being told)
Analogy
Induction
Deduction
Memorization
Learning by memorization is the simplest form of learning. It requires the least amount of inference and is
accomplished by simply copying the knowledge in the same form that it will be used directly into the knowledge
base. Examples: a phone number, the multiplication table.

Direct instruction (by being told)
This type of learning requires more inference than rote learning, since the knowledge must be transformed into an
operational form before being integrated into the knowledge base. We use this type of learning when a teacher
presents a number of facts directly to us in a well organized manner.
Analogy
It is the process of learning a new concept or solution through the use of similar known concepts or solutions. We use
this type of learning when solving problems on an exam where previously learned examples serve as a guide or when
we learn to drive a truck using our knowledge of car driving.
Induction
This form of learning requires the use of inductive inference, a form of invalid but useful inference. We use inductive
learning when we formulate a general concept after seeing a number of instances or examples of the concept; that is,
a general rule is derived from the analysis of specific facts.
Deduction
It is accomplished through a sequence of deductive inference steps using known facts. From the known facts, new
facts or relationships are logically derived.

In addition to the above classification, we will sometimes refer to learning methods as either weak methods or
knowledge-rich methods. Weak methods are general-purpose methods in which little or no initial knowledge is available;
knowledge-rich methods, by contrast, rely on substantial initial domain knowledge.
Knowledge Acquisition Difficulties

Time-consuming process
Problems in Transferring Knowledge
Expressing Knowledge
Transfer to a Machine
Number of Participants
Structuring Knowledge
Other difficulties …

Experts may lack time or may not cooperate.
Testing and refining knowledge is complicated.
Methods for knowledge elicitation may be poorly defined.
System builders may collect knowledge from one source, but the relevant knowledge may be scattered across several sources.
Builders may collect documented knowledge rather than use experts.
The knowledge collected may be incomplete.
It is difficult to recognize specific knowledge when it is mixed with irrelevant data.
Experts may change their behaviour when observed and/or interviewed.
Interpersonal communication between the knowledge engineer and the expert may be problematic.

How can we overcome these difficulties?

The ability and personality of the knowledge engineer
The knowledge engineer must develop a positive relationship with the expert
The knowledge engineer must create the right impression
Computer-aided knowledge acquisition tools
General learning model

Based upon the initial knowledge, the system tries to learn new knowledge.

Performance element: selects the external actions to take in the environment, using input received through sensors (analogy: the student).

Critic: determines the outcome of an action against a fixed performance standard and gives feedback to the learning element (analogy: the exam).

Learning element: makes improvements by constantly observing the performance (analogy: the teacher).

Knowledge base: keeps on being updated with what is learned and supplies new ideas.
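The interaction of these components can be pictured, very roughly, in code. The sketch below (in Python) is a hypothetical illustration only: the percepts, the performance standard, and the update rule are placeholder assumptions, not part of the slides.

```python
# A minimal, hypothetical sketch of the general learning model:
# performance element (student), critic (exam), learning element (teacher),
# and a knowledge base that is updated as learning proceeds.

class GeneralLearningModel:
    def __init__(self):
        self.knowledge_base = {}          # accumulated knowledge

    def performance_element(self, percept):
        """Select an external action using current knowledge (the 'student')."""
        return self.knowledge_base.get(percept, "default_action")

    def critic(self, percept, action, standard):
        """Compare the outcome of an action with a fixed standard (the 'exam')."""
        return standard.get(percept) == action

    def learning_element(self, percept, feedback, standard):
        """Improve the knowledge base based on the critic's feedback (the 'teacher')."""
        if not feedback:
            self.knowledge_base[percept] = standard[percept]

    def run(self, percepts, standard):
        for p in percepts:                # sensors supply percepts from the environment
            action = self.performance_element(p)
            feedback = self.critic(p, action, standard)
            self.learning_element(p, feedback, standard)


# Usage: after one pass, the model has memorised the correct responses.
model = GeneralLearningModel()
model.run(["q1", "q2"], standard={"q1": "a", "q2": "b"})
print(model.knowledge_base)   # {'q1': 'a', 'q2': 'b'}
```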
Performance measures

To evaluate the performance of a given system, or to compare the relative performance of two different systems,
standard performance measures must be available.

(The Turing Test, for example, is a method of inquiry in artificial intelligence (AI) for determining whether or not a computer
is capable of thinking like a human being.)

Sometimes applying such a measure poses an implementation problem.

Some of the performance measures are:
Generality
One of the most important performance measures for learning methods is the generality or scope of the
method. Generality is a measure of the ease with which the method can be adapted to different domains of
application.

Efficiency.
The efficiency of a method is a measure of the average time required to construct the target knowledge
structures from some specified initial structures.

Robustness.
Robustness is the ability of a learning system to function with unreliable feedback and with a variety of training
examples. A robust system must be able to build tentative structures which are subject to modification or
withdrawal if later found to be inconsistent with statistically sound structures.

Efficacy. The efficacy of a system is a measure of the overall power of the system. It is a combination of the
factors generality, efficiency, and robustness.

Ease of implementation.

Ease of implementation relates to the complexity of the programs and data structures and the resources
required to develop the given learning system. Lacking good complexity metrics, this measure will often be
somewhat subjective.
Early work in Machine Learning

Herbert Alexander Simon:

“Learning is any process by which a system improves performance from experience.”

Machine Learning is concerned with computer programs that automatically improve their performance….
Early work in Machine Learning

These early designs were self-adapting systems which modified their own structures in an attempt to produce
an optimal response to some input stimuli.

We will consider only four of the more representative designs:

The first was an approximate model of a small network of neurons.
The second approach was initially based on a form of rote learning; it was later modified to learn by adaptive parameter adjustment.
The third approach used self-adapting stochastic automata models.
The fourth approach was modelled after survival of the fittest through population genetics.

Examples of these four methods are Rosenblatt's perceptrons, Samuel's checkers-playing system, learning automata, and genetic algorithms.
PERCEPTRONS

The perceptron is an algorithm for supervised learning of binary classifiers. A binary classifier is a
function which can decide whether or not an input, represented by a vector of numbers, belongs to
some specific class.

Perceptrons are pattern recognition or classification devices that are crude approximations of neural networks.
They make decisions about patterns by summing up evidence obtained from many small sources. They can be
taught to recognize one or more classes of objects through the use of stimuli.
Axons from one neuron can send messages to the dendrites of other neurons.
The inputs to the system come through an array of sensors, such as a rectangular grid of light-sensitive pixels.
These sensors are randomly connected in groups to associative threshold units (ATUs), where the sensor
outputs are combined and added together. If the combined outputs to an ATU exceed some fixed
threshold, the ATU fires and produces a binary output.

The outputs from the ATUs are each multiplied by adjustable parameters or weights w_i (i = 1, 2, ..., k) and the
results added together in a terminal comparator unit. If the input to the comparator exceeds a given
threshold level T, the perceptron produces a positive response of 1 (yes) corresponding to a
classification of class 1. Otherwise, the output is 0 (no), corresponding to an object classification of non-class-1.

All components of the system are fixed except the weights w_i, which are adjusted through a punishment-
reward process. This learning process continues until optimal values of the weights are found, at which
time the system will have learned the proper classification of objects for the two different classes.
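A minimal sketch of this punishment-reward adjustment, written in Python. The threshold T, the learning rate, the number of epochs, and the toy AND data are illustrative assumptions; the input vectors stand in for ATU outputs.

```python
# Minimal perceptron sketch: weighted sum of ATU outputs compared with a
# threshold T; weights are adjusted by a punishment-reward (error-correction) rule.

def predict(weights, threshold, x):
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > threshold else 0          # class 1 vs non-class-1

def train(samples, k, threshold=0.5, rate=0.1, epochs=20):
    weights = [0.0] * k
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, threshold, x)   # +1, 0, or -1
            weights = [w + rate * error * xi for w, xi in zip(weights, x)]
    return weights

# Toy example: learn a simple linearly separable classification (logical AND).
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w = train(samples, k=2)
print([predict(w, 0.5, x) for x, _ in samples])   # expected: [0, 0, 0, 1]
```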
CHECKERS PLAYING

During the 1950s and 1960s Samuel (1959, 1967) developed a program which could learn to play checkers at
a master's level.

The program learned while playing the game of checkers, either with a human opponent or with a copy of itself.
At each state of the game, the program checks to see if it has remembered a best-move value for that state. If not,
the program explores ahead three moves (it determines all of its possible moves; for each of these, it finds all of
its opponent's moves; and for each of those, it determines all of its next possible moves). The program then
computes an advantage or win-value estimate of all the ending board states. These values determine the best move
for the system from the current state. The current board state and its corresponding value are stored using an
indexed address scheme for subsequent recall.
At board state K, the program looks ahead two moves and computes the value of each possible resultant board
state. It then works backward by first finding the minimum board values at state K+2 in each group of moves made
from state K+1 (minimums = 4, 3, and 2). These minimums correspond to the moves the opponent would make from
each position when at state K+1. The program then chooses the maximum of these minimums as the best (minimax)
move it can make from the present board state K (maximum = 4). By looking ahead three moves, the system can be
assured it can do no worse than this minimax value. The board state and the corresponding minimax value for a
three-move-ahead sequence are stored in Samuel's system. These values are then available for subsequent use when
the same state is encountered during a new game.
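The look-ahead computation described above can be sketched as a small minimax routine. This is an illustration, not Samuel's program: the toy game tree and the evaluate function are made-up placeholders chosen to reproduce the group minimums (4, 3, 2) from the text.

```python
# Minimax look-ahead sketch: from the current state, evaluate each of our moves
# by assuming the opponent replies with the move that minimises our board value,
# then choose the move whose worst case (minimum) is largest.

def minimax_choice(state, our_moves, opponent_moves, evaluate):
    """our_moves(state) -> states after our move (state K+1);
    opponent_moves(state) -> states after the opponent's reply (state K+2);
    evaluate(state) -> advantage/win value estimate for us."""
    best_move, best_value = None, float("-inf")
    for s1 in our_moves(state):
        group_min = min(evaluate(s2) for s2 in opponent_moves(s1))
        if group_min > best_value:
            best_move, best_value = s1, group_min
    return best_move, best_value    # the stored minimax value for state K


# Toy usage mirroring the example values in the text (group minimums 4, 3, 2):
tree = {"K": ["A", "B", "C"],
        "A": [4, 6], "B": [3, 7], "C": [2, 9]}
move, value = minimax_choice("K",
                             our_moves=lambda s: tree[s],
                             opponent_moves=lambda s: tree[s],
                             evaluate=lambda s: s)
print(move, value)   # A 4  -> the maximum of the minimums is 4
```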
Learning Automaton.

A learning automaton is a type of machine learning algorithm studied since the 1970s. Learning
automata select their current action based on past experiences from the environment.

A learning automaton has two components: the automaton and the environment.
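As a concrete illustration of an automaton interacting with its environment, the sketch below uses a standard linear reward-inaction update over action probabilities; the two-action environment and its reward probabilities are hypothetical assumptions, not taken from the slides.

```python
import random

# Learning automaton sketch: the automaton keeps a probability for each action,
# selects actions stochastically, and reinforces actions the environment rewards
# (linear reward-inaction update). The environment here is a placeholder.

def environment(action):
    """Hypothetical environment: action 1 is rewarded more often than action 0."""
    return random.random() < (0.8 if action == 1 else 0.2)

def run_automaton(n_actions=2, steps=1000, a=0.05):
    p = [1.0 / n_actions] * n_actions              # action probabilities
    for _ in range(steps):
        action = random.choices(range(n_actions), weights=p)[0]
        if environment(action):                    # reward: move probability mass
            for i in range(n_actions):             # toward the chosen action
                p[i] = p[i] + a * (1 - p[i]) if i == action else p[i] * (1 - a)
        # on penalty: no change (reward-inaction scheme)
    return p

print(run_automaton())   # probability of the better action approaches 1
```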
GENETIC ALGORITHM

GENETIC ALGORITHM INTRODUCTION

Genetic Algorithm (GA) is a search-based optimization technique based on the principles of Genetics and
Natural Selection. It is frequently used to find optimal or near-optimal solutions to difficult problems which
otherwise would take a lifetime to solve.
The GA cycle (illustrated in the sketch below) is:

• Initialization
• Selection according to fitness
• Crossover between selected chromosomes
• Mutation
• Repeat the cycle until the termination condition is true

Crossover: two parent chromosomes such as 00111101 and 00101010 exchange (cross over) parts of their bit strings.
Mutation is then applied to the offspring, and the cycle repeats until the termination condition is true.
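A compact sketch of this cycle on 8-bit chromosomes like the ones shown above. The one-max fitness function, population size, and mutation rate are illustrative assumptions.

```python
import random

# Genetic algorithm sketch over 8-bit chromosomes such as 00111101 / 00101010.
# Fitness here is simply the number of 1 bits ("one-max"), an illustrative choice.

def fitness(chrom):
    return chrom.count("1")

def select(population):
    """Fitness-proportional selection of one parent."""
    return random.choices(population, weights=[fitness(c) + 1 for c in population])[0]

def crossover(p1, p2):
    """Single-point crossover: exchange tails of the two parent bit strings."""
    point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(chrom, rate=0.05):
    return "".join(b if random.random() > rate else ("1" if b == "0" else "0")
                   for b in chrom)

def genetic_algorithm(pop_size=10, length=8, generations=50):
    population = ["".join(random.choice("01") for _ in range(length))
                  for _ in range(pop_size)]                       # initialization
    for _ in range(generations):                                  # repeat the cycle
        offspring = []
        while len(offspring) < pop_size:
            c1, c2 = crossover(select(population), select(population))
            offspring += [mutate(c1), mutate(c2)]
        population = offspring
    return max(population, key=fitness)

print(genetic_algorithm())   # usually converges toward 11111111
```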
Intelligent editors
An intelligent editor acts as an interface between a domain expert and an expert system. It permits the domain expert to
interact directly with the system without the need for an intermediary to code the knowledge.

The editor has direct access to the knowledge in the expert system and knows the structure of that knowledge. Through
the editor, an expert can create, modify, and delete rules without knowledge of the internal structure of the rules.
Some editors have the ability to suggest reasonable alternatives and to prompt the expert for clarifications when
required.

An expert system is a computer program that uses artificial intelligence methods to solve complex problems within a
specialized domain that ordinarily requires human expertise.
Expert systems are capable of: advising, instructing and assisting humans in decision making, demonstrating,
deriving a solution, diagnosing, explaining, interpreting input, predicting results, justifying the conclusion, and
suggesting alternative options to a problem.
Analogical Learning
Learning by analogy is one of the fundamental insights of artificial intelligence. Humans are very good at drawing
on past experience to solve current problems.

When we say something is analogous, a similarity exists between it and something already known. Previous
experiences may include problems and their solutions, plans, situations, episodes, and so forth. Analogies play a
dominant role in human reasoning and learning processes. Previously remembered experiences are transformed and
extended to fit new, unfamiliar situations.

E.g.: "The thief was caught red-handed." A computer needs to understand the exact, non-literal meaning of such an expression.

How can we solve these problems?

1. Transformational analogy: look for a similar situation and copy its solution.
2. Derivational analogy: copy the solution process and modify it if necessary.
Analogical Learning Process

1. Analogue recognition: A new problem or situation is encountered and recognized as being similar to a previously encountered situation.

2. Access and recall: The similarity of the new problem to previously experienced ones serves
as an index with which to access and recall one or more candidate experiences (analogues).

3. Selection and mapping: Relevant parts of the recalled experiences are selected for their
similarities and mapped from the base to the target domain.

4. Extending the mapped experience: The newly mapped analogues are modified and
extended to fit the target domain situation.

5. Validation and generalization: The newly formulated solution is validated for its
applicability through some form of trial process (such as theorem provers or simulation). If
the validation is supported, a generalized solution is formed which accounts for both the old
and the new situations.
EXAMPLES OF ANALOGICAL LEARNING SYSTEMS

Winston's System, Greiner's NLAG System, Carbonell's Systems

Winston's System
The system recognized an analogy between two situations when there were similarities among the relationships and motives of the two groups of characters.

The important features of Winston's system can be summarized as follows.


Knowledge representation: Winston's system used frame structures as part of the Frame Representation Language
(FRL) developed by Roberts and Goldstein. Slots within the frames were given special meanings, such as AKO (a-kind-of),
appears-in, and the like. Individual frames were linked together in a network for easy access to related items.

Recall of analogous situations: When presented with a current situation, candidate analogues were retrieved from
memory using a hierarchical indexing scheme.

Similarity matching: In selecting the best of the known situations during the reminding process described above, a
similarity matching score is computed for each of the recalled candidates. A score is computed for all slot pairings
between two frames, and the pairing having the highest score is selected as the proper analogue.

Mapping base-to-target situations: The base-to-target analogue mapping process used in this system depends on
the similarity of parts between base and target domains and role links that can be established between the two.
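The slot-pairing score can be illustrated with a small sketch; the dictionary-based frames and the example stories below are hypothetical simplifications of Winston's FRL frames, not his actual representation.

```python
# Illustrative similarity-matching sketch: frames are represented as dictionaries
# of slot -> value; the score counts slots whose values agree, and the candidate
# with the highest score is chosen as the analogue (a simplification of Winston's
# frame-based matching).

def similarity(frame_a, frame_b):
    shared = set(frame_a) & set(frame_b)
    return sum(1 for slot in shared if frame_a[slot] == frame_b[slot])

def best_analogue(target, candidates):
    return max(candidates, key=lambda name: similarity(target, candidates[name]))

# Hypothetical frames for a story-understanding target and two stored situations.
target = {"ako": "story", "motive": "revenge", "relationship": "rivals"}
candidates = {
    "story-1": {"ako": "story", "motive": "revenge", "relationship": "friends"},
    "story-2": {"ako": "story", "motive": "greed", "relationship": "strangers"},
}
print(best_analogue(target, candidates))   # story-1 (two matching slots)
```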
Greiner's NLAG System

Russell Greiner (1988) developed an analogical learning system he called NLAG. The system requires three
inputs and produces a single output. The inputs are an impoverished theory (a theory lacking in knowledge with
which to solve a given problem), an analogical hint, and the given problem for solution. The output from the
system is a solution conjecture for the problem based on the analogical hint.

Carbonell developed two analogical systems for problem solving, each based on a different perception of the
analogical process.
The first system was based on what he termed transformational analogy, and the second on derivational analogy.
The major differences between the two methods lie in the amount of detail remembered and stored for situations,
such as problem solution traces, and in the methods used in the base-to-target domain transformation process.
Explanation based learning
Explanation-based learning generalizes from a single training example with the help of domain knowledge.

Cup: the domain, with observed facts.
Glass: another domain, with its own observed facts.
Goal: a glass cup.

Using the domain knowledge, the system explains why the observed example satisfies the goal concept and then generalizes that explanation so it covers similar cases.
Assignment 2

Knowledge Acquisition .
Types of learning.
Difficulty in Knowledge Acquisition.
General learning model.
Performance measures.
1. Describe two examples of analogical learning you have experienced recently.
2. Explain the different types of learning.
3. Explain the performance measures.
4. Explain the early work in Machine Learning: the Perceptron and Genetic Algorithms.
5. What are intelligent editors?
6. Explain analogical reasoning and learning, with examples.
7. Write a short note on explanation-based learning.
8. Define operationality as it applies to explanation-based learning and give an example of it as applied to some task.
