FAI Unit 4
1.Domain Expert
The domain expert is a knowledgeable and skilled person capable of solving
problems in a specific area or domain. This person has the greatest
expertise in a given domain. This expertise is to be captured in the expert
system. Therefore, the expert must be able to communicate his or her
knowledge, be willing to participate in the expert system development and
commit a substantial amount of time to the project.
2.Knowledge Engineer
• The knowledge engineer is someone who is capable of designing, building
and testing an expert system. This person is responsible for selecting an
appropriate task for the expert system.
• The knowledge engineer is responsible for testing, revising and
integrating the expert system into the workplace. Thus, the knowledge
engineer is committed to the project from the initial design stage to the
final delivery of the expert system, and even after the project is
completed, he or she may also be involved in maintaining the system.
3.Programmer
• The programmer is the person responsible for the actual programming,
describing the domain knowledge in terms that a computer can understand.
4.Project manager
• The project manager is the leader of the expert system development team,
responsible for keeping the project on track. He or she makes sure that all
deliverables and milestones are met, interacts with the expert, knowledge
engineer, programmer and end-user.
5.End user
• The end-user, often called just the user, is a person who uses the expert
system when it is developed.
Structure of a rule-based expert system
1.The knowledge base contains the domain knowledge useful for problem
solving. In a rule-based expert system, the knowledge is represented as a set
of rules. Each rule specifies a relation, recommendation, directive, strategy
or heuristic and has the IF (condition) THEN (action) structure. When the
condition part of a rule is satisfied, the rule is said to fire and the action part
is executed.
2.The database includes a set of facts used to match against the IF
(condition) parts of rules stored in the knowledge base.
3.The inference engine carries out the reasoning whereby the expert system
reaches a solution. It links the rules given in the knowledge base with the
facts provided in the database.
4.The explanation facilities enable the user to ask the expert system how a
particular conclusion is reached and why a specific fact is needed. An expert
system must be able to explain its reasoning and justify its advice, analysis
or conclusion.
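A single IF (condition) THEN (action) rule can be sketched directly in Python. The traffic-light rule below is a simple illustration (the same road-crossing example reappears under conflict resolution), not part of any system described here.

```python
# A rule has an IF (condition) part and a THEN (action) part.
# Facts are (object, value) pairs held in the database.
rule = {"if": {("traffic light", "green")},   # condition part
        "then": ("action", "go")}             # action part

facts = {("traffic light", "green")}          # database of known facts

# When every condition in the IF part matches a fact, the rule fires
# and its THEN part is executed, adding a new fact to the database.
if rule["if"] <= facts:
    facts.add(rule["then"])

print(("action", "go") in facts)  # True
```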
Forward chaining
Fig 2.6
• Forward chaining is data-driven reasoning. The reasoning starts from the
known data and proceeds forward with that data. Each time only the topmost
rule is executed. When fired, the rule adds a new fact to the database. Any
rule can be executed only once. The match-fire cycle stops when no further
rules can be fired.
• In the first cycle, only two rules, Rule 3: A -> X and Rule 4: C -> L, match
facts in the database. Rule 3: A -> X is fired first as the topmost one. The IF
part of this rule matches fact A in the database, its THEN part is executed
and new fact X is added to the database. Then Rule 4: C -> L is fired and fact
L is also placed in the database.
• In the second cycle, Rule 2: X & B & E -> Y is fired because facts B, E and X
are already in the database, and as a consequence fact Y is inferred and put
in the database. This in turn causes Rule 1: Y & D -> Z to execute, placing
fact Z in the database (cycle 3). Now the match-fire cycles stop because the
IF part of Rule 5: L & M -> N does not match all facts in the database and
thus Rule 5 cannot be fired.
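As a minimal sketch (not production inference-engine code), the match-fire cycles above can be reproduced in Python. Each cycle scans the rules from top to bottom and fires every not-yet-fired rule whose IF part is satisfied, which yields exactly the firing order traced above.

```python
# Rules of Figure 2.6 as (name, IF-part facts, THEN-part fact).
rules = [
    ("Rule 1", {"Y", "D"}, "Z"),
    ("Rule 2", {"X", "B", "E"}, "Y"),
    ("Rule 3", {"A"}, "X"),
    ("Rule 4", {"C"}, "L"),
    ("Rule 5", {"L", "M"}, "N"),
]

facts = {"A", "B", "C", "D", "E"}  # initial database
fired = []                         # each rule may fire only once

while True:
    any_fired = False
    for name, cond, concl in rules:          # scan top to bottom
        if name not in fired and cond <= facts:
            fired.append(name)               # the rule fires...
            facts.add(concl)                 # ...adding a new fact
            any_fired = True
    if not any_fired:                        # no rule can fire: stop
        break

print(fired)         # ['Rule 3', 'Rule 4', 'Rule 2', 'Rule 1']
print("Z" in facts)  # True -- fact Z was inferred
```

Note that fact N is never inferred, because fact M is absent and Rule 5 can never fire.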
Backward chaining
Fig 2.7
• In Pass 1, the inference engine attempts to infer fact Z. It searches the
knowledge base to find the rule that has the goal, in our case fact Z, in its
THEN part. The inference engine finds and stacks Rule 1: Y & D -> Z. The IF
part of Rule 1 includes facts Y and D, and thus these facts must be
established.
• In Pass 2, the inference engine sets up the sub-goal, fact Y, and tries to
determine it. First it checks the database, but fact Y is not there. Then the
knowledge base is searched again for the rule with fact Y in its THEN part. The
inference engine locates and stacks Rule 2: X & B & E -> Y. The IF part of Rule 2
consists of facts X, B and E, and these facts also have to be established.
• In Pass 3, the inference engine sets up a new sub-goal, fact X. It checks the
database for fact X, and when that fails, searches for the rule that infers X.
The inference engine finds and stacks Rule 3: A -> X. Now it must determine
fact A.
• In Pass 4, the inference engine finds fact A in the database, Rule 3: A -> X is
fired and new fact X is inferred.
• In Pass 5, the inference engine returns to the sub-goal fact Y and once again
tries to execute Rule 2: X & B & E -> Y. Facts X, B and E are in the database
and thus Rule 2 is fired and a new fact, fact Y, is added to the database.
• In Pass 6, the system returns to Rule 1: Y & D -> Z trying to establish the
original goal, fact Z. The IF part of Rule 1 matches all facts in the database,
Rule 1 is executed and thus the original goal is finally established.
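The six passes above follow a recursive pattern: to prove a goal, check the database first; otherwise find a rule with the goal in its THEN part and prove each IF-part fact as a sub-goal. A minimal sketch over the same five rules:

```python
# Rules of Figure 2.7 as (name, IF-part facts, THEN-part fact).
rules = [
    ("Rule 1", ["Y", "D"], "Z"),
    ("Rule 2", ["X", "B", "E"], "Y"),
    ("Rule 3", ["A"], "X"),
    ("Rule 4", ["C"], "L"),
    ("Rule 5", ["L", "M"], "N"),
]

facts = {"A", "B", "C", "D", "E"}  # initial database
fired = []

def prove(goal):
    """Try to establish a goal, chaining backward through the rules."""
    if goal in facts:                    # first check the database
        return True
    for name, cond, concl in rules:      # rule with goal in its THEN part
        if concl == goal and all(prove(c) for c in cond):
            fired.append(name)           # all sub-goals proved: rule fires
            facts.add(concl)
            return True
    return False

print(prove("Z"))  # True
print(fired)       # ['Rule 3', 'Rule 2', 'Rule 1'] -- only three rules fire
```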
• Let us now compare Figure 2.6 with Figure 2.7. As you can see, four
rules were fired when forward chaining was used, but just three rules
when we applied backward chaining. This simple example shows that
the backward chaining inference technique is more effective when we
need to infer one particular fact, in our case fact Z.
MEDIA ADVISOR: a demonstration rule-based expert system
Rule: 6
if the job is building
or the job is repairing
or the job is troubleshooting
then the stimulus_response is ‘hands-on’

Rule: 9
if the stimulus_situation is ‘physical object’
and the stimulus_response is ‘hands-on’
and feedback is required
then medium is workshop

Rule: 10
if the stimulus_situation is symbolic
and the stimulus_response is analytical
and feedback is required
then medium is ‘lecture – tutorial’

Rule: 11
if the stimulus_situation is visual
and the stimulus_response is documented
and feedback is not required
then medium is videocassette

Rule: 12
if the stimulus_situation is visual
and the stimulus_response is oral
and feedback is required
then medium is ‘lecture – tutorial’

Rule: 13
if the stimulus_situation is verbal
and the stimulus_response is analytical
and feedback is required
then medium is ‘lecture – tutorial’

Rule: 14
if the stimulus_situation is verbal
and the stimulus_response is oral
and feedback is required
then medium is ‘role-play exercises’
• An object and its value constitute a fact (for instance, the environment is
machines, and the job is repairing). All facts are placed in the database.
• Options
The final goal of the rule-based expert system is to produce a solution to the
problem based on input data. In MEDIA ADVISOR, the solution is a medium
selected from the list of four options:
• medium is workshop
• medium is ‘lecture – tutorial’
• medium is videocassette
• medium is ‘role-play exercises’
Dialogue
• In the dialogue shown below, the expert system asks the user to input
the data needed to solve the problem (the environment, the job and
feedback). Based on the answers supplied by the user, the expert system
applies rules from its knowledge base to infer a suitable medium.
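The rules quoted above can be encoded as data and applied by a small forward-chaining loop. This is an illustrative sketch only: the MEDIA ADVISOR rules that derive the stimulus_situation from the environment are not listed in this section, so the situation is supplied directly as an input fact, and the dash in ‘lecture – tutorial’ is written as a plain hyphen.

```python
# Rules 6 and 9-14: each rule maps a condition (object -> allowed values)
# to a conclusion (object, value). Rule 6's OR becomes a set of allowed jobs.
rules = [
    ({"job": {"building", "repairing", "troubleshooting"}},
     ("stimulus_response", "hands-on")),                      # Rule 6
    ({"stimulus_situation": {"physical object"},
      "stimulus_response": {"hands-on"},
      "feedback": {"required"}},
     ("medium", "workshop")),                                 # Rule 9
    ({"stimulus_situation": {"symbolic"},
      "stimulus_response": {"analytical"},
      "feedback": {"required"}},
     ("medium", "lecture - tutorial")),                       # Rule 10
    ({"stimulus_situation": {"visual"},
      "stimulus_response": {"documented"},
      "feedback": {"not required"}},
     ("medium", "videocassette")),                            # Rule 11
    ({"stimulus_situation": {"visual"},
      "stimulus_response": {"oral"},
      "feedback": {"required"}},
     ("medium", "lecture - tutorial")),                       # Rule 12
    ({"stimulus_situation": {"verbal"},
      "stimulus_response": {"analytical"},
      "feedback": {"required"}},
     ("medium", "lecture - tutorial")),                       # Rule 13
    ({"stimulus_situation": {"verbal"},
      "stimulus_response": {"oral"},
      "feedback": {"required"}},
     ("medium", "role-play exercises")),                      # Rule 14
]

def infer(facts):
    """Forward-chain until no rule can add a new (object, value) fact."""
    changed = True
    while changed:
        changed = False
        for cond, (obj, val) in rules:
            if obj not in facts and all(
                facts.get(o) in allowed for o, allowed in cond.items()
            ):
                facts[obj] = val
                changed = True
    return facts

facts = infer({"stimulus_situation": "physical object",
               "job": "repairing",
               "feedback": "required"})
print(facts["medium"])  # workshop
```

Here Rule 6 first derives the hands-on stimulus_response from the job, and Rule 9 then selects the workshop medium.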
Conflict resolution
• We considered two simple rules for crossing a road. Let us now add a
third rule. We will get the following set of rules:
Rule 1: IF the ‘traffic light’ is green THEN the action is go
Rule 2: IF the ‘traffic light’ is red THEN the action is stop
Rule 3: IF the ‘traffic light’ is red THEN the action is go
• The inference engine compares IF (condition) parts of the rules with
data available in the database, and when conditions are satisfied the
rules are set to fire. The firing of one rule may affect the activation of
other rules, and therefore the inference engine must allow only one
rule to fire at a time. In our road crossing example, we have two rules,
Rule 2 and Rule 3, with the same IF part. Thus both of them can be set
to fire when the condition part is satisfied. These rules represent a
conflict set. The inference engine must determine which rule to fire
from such a set. A method for choosing a rule to fire when more than
one rule can be fired in a given cycle is called conflict resolution.
If the traffic light is red, which rule should be executed?
• In forward chaining, both rules would be fired. Rule 2 is fired first as
the topmost one, and as a result, its THEN part is executed and
linguistic object action obtains value stop.
• However, Rule 3 is also fired because the condition part of this rule
matches the fact ‘traffic light’ is red, which is still in the database.
As a consequence, object action takes new value go.
How can we resolve a conflict?
1) To establish a goal and stop the rule execution when the goal is
reached
• The obvious strategy for resolving conflicts is to establish a goal and
stop the rule execution when the goal is reached. In our problem, for
example, the goal is to establish a value for linguistic object action.
When the expert system determines a value for action, it has reached
the goal and stops. Thus if the traffic light is red, Rule 2 is executed,
object action attains value stop and the expert system stops. In the
given example, the expert system makes a right decision; however if
we arranged the rules in the reverse order, the conclusion would be
wrong. It means that the rule order in the knowledge base is still very
important.
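A minimal sketch of this first strategy: rules are tried from the top, and execution stops as soon as the goal object action obtains a value, so Rule 3 never gets the chance to overwrite stop with go.

```python
# Road-crossing rules as (name, condition fact, conclusion fact).
rules = [
    ("Rule 1", ("traffic light", "green"), ("action", "go")),
    ("Rule 2", ("traffic light", "red"), ("action", "stop")),
    ("Rule 3", ("traffic light", "red"), ("action", "go")),
]

facts = {"traffic light": "red"}  # database
goal = "action"                   # stop once this object has a value

for name, (obj, val), (gobj, gval) in rules:
    if facts.get(obj) == val:     # IF part matches the database
        facts[gobj] = gval        # THEN part executed
        if gobj == goal:
            break                 # goal reached: stop rule execution

print(facts["action"])  # stop -- Rule 3 is never considered
```

As noted above, this decision is only right because Rule 2 precedes Rule 3; reversing their order would yield the wrong conclusion.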
2)Fire the rule with the highest priority
• A rule can represent uncertainty by numbers called certainty factors (for example, cf 0.1).
Disadvantages of rule-based expert systems
• Opaque relations between rules. Although the individual production
rules tend to be relatively simple and self-documented, their logical
interactions within the large set of rules may be opaque.