
AI UNIT V NOTES

Unit V: Natural Language Processing


What is Natural Language Processing?
Natural language processing (NLP) is a field of computer science and a subfield of artificial
intelligence that aims to make computers understand human language. NLP uses
computational linguistics, which is the study of how language works, and various models
based on statistics, machine learning, and deep learning. These technologies allow computers
to analyze and process text or voice data, and to grasp their full meaning, including the
speaker’s or writer’s intentions and emotions.
NLP powers many applications that use language, such as text translation, voice recognition,
text summarization, and chatbots. You may have used some of these applications yourself,
such as voice-operated GPS systems, digital assistants, speech-to-text software, and customer
service bots. NLP also helps businesses improve their efficiency, productivity, and
performance by simplifying complex tasks that involve language.

Working of Natural Language Processing (NLP)

1. Text Input and Data Collection

 Data Collection: Gathering text data from various sources such as websites, books,
social media, or proprietary databases.

 Data Storage: Storing the collected text data in a structured format, such as a database or a collection of documents.

2. Text Preprocessing
Preprocessing is crucial to clean and prepare the raw text data for analysis. Common
preprocessing steps include:
 Tokenization: Splitting text into smaller units like words or sentences.
 Lowercasing: Converting all text to lowercase to ensure uniformity.
 Stop word Removal: Removing common words that do not contribute significant
meaning, such as “and,” “the,” “is.”
 Punctuation Removal: Removing punctuation marks.
 Stemming and Lemmatization: Reducing words to their base or root forms. Stemming
cuts off suffixes, while lemmatization considers the context and converts words to
their meaningful base form.
 Text Normalization: Standardizing text format, including correcting spelling errors,
expanding contractions, and handling special characters.
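A minimal sketch of these preprocessing steps in plain Python (the sample sentence, the tiny stop-word list, and the crude suffix-stripping rule are illustrative assumptions, not part of the notes):

import re
import string

STOP_WORDS = {"and", "the", "is", "a", "to"}   # tiny illustrative stop-word list

def preprocess(text):
    text = text.lower()                                                # lowercasing
    text = text.translate(str.maketrans("", "", string.punctuation))  # punctuation removal
    tokens = text.split()                                              # naive word tokenization
    tokens = [t for t in tokens if t not in STOP_WORDS]                # stop word removal
    # crude suffix stripping as a stand-in for stemming/lemmatization
    tokens = [re.sub(r"(ing|ed|s)$", "", t) for t in tokens]
    return tokens

print(preprocess("The cow jumped over the moon, and the dogs barked."))
# ['cow', 'jump', 'over', 'moon', 'dog', 'bark']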
3. Text Representation

 Bag of Words (BoW): Representing text as a collection of words, ignoring grammar and
word order but keeping track of word frequency.
 Term Frequency-Inverse Document Frequency (TF-IDF): A statistic that reflects the
importance of a word in a document relative to a collection of documents.
 Word Embeddings: Using dense vector representations of words where semantically
similar words are closer together in the vector space (e.g., Word2Vec, GloVe).
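As a sketch of how these representations are built, the following computes a bag-of-words count and a TF-IDF weight by hand for a toy corpus (the documents and variable names are made up for illustration):

import math
from collections import Counter

docs = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "cats and dogs".split(),
]

# Bag of Words: word frequency per document, ignoring grammar and order
bow = [Counter(doc) for doc in docs]

def tf_idf(term, doc_index):
    tf = bow[doc_index][term] / len(docs[doc_index])   # term frequency
    df = sum(1 for d in docs if term in d)             # document frequency
    idf = math.log(len(docs) / df)                     # inverse document frequency
    return tf * idf

print(tf_idf("cat", 0))   # "cat" occurs only in doc 0, so it gets a relatively high weight
print(tf_idf("the", 0))   # "the" occurs in two of three docs, so its weight is lower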
4. Feature Extraction
Extracting meaningful features from the text data that can be used for various NLP tasks.
 N-grams: Capturing sequences of N words to preserve some context and word order.
 Syntactic Features: Using parts of speech tags, syntactic dependencies, and parse
trees.

 Semantic Features: Leveraging word embeddings and other representations to capture word meaning and context.
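A short illustration of n-gram extraction (the helper function is hypothetical, not a standard library API):

def ngrams(tokens, n):
    # Slide a window of size n over the token list to keep local word order
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the cow jumped over the moon".split()
print(ngrams(tokens, 2))
# [('the', 'cow'), ('cow', 'jumped'), ('jumped', 'over'), ('over', 'the'), ('the', 'moon')]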

5. Model Selection and Training


Selecting and training a machine learning or deep learning model to perform specific NLP
tasks.
 Supervised Learning: Using labeled data to train models like Support Vector Machines (SVM), Random Forests, or deep learning models like Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs).

 Unsupervised Learning: Applying techniques like clustering or topic modelling (e.g., Latent Dirichlet Allocation) on unlabelled data.
 Pre-trained Models: Utilizing pre-trained language models such as BERT, GPT, or transformer-based models that have been trained on large corpora.
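To make the supervised-learning path concrete, here is a small sketch using scikit-learn (assuming scikit-learn is installed; the example texts and labels are made up):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Tiny made-up training set for sentiment classification
texts = ["great product, loved it", "terrible service, very slow",
         "absolutely fantastic experience", "awful, would not recommend"]
labels = ["positive", "negative", "positive", "negative"]

# TF-IDF representation followed by a Support Vector Machine classifier
model = Pipeline([("tfidf", TfidfVectorizer()), ("svm", LinearSVC())])
model.fit(texts, labels)

print(model.predict(["loved the experience"]))   # predicts a sentiment label for unseen text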

6. Model Deployment and Inference


Deploying the trained model and using it to make predictions or extract insights from new
text data.
 Text Classification: Categorizing text into predefined classes (e.g., spam detection,
sentiment analysis).
 Named Entity Recognition (NER): Identifying and classifying entities in the text.
 Machine Translation: Translating text from one language to another.
 Question Answering: Providing answers to questions based on the context provided by
text data.
7. Evaluation and Optimization

Evaluating the performance of the NLP algorithm using metrics such as accuracy, precision,
recall, F1-score, and others.
 Hyperparameter Tuning: Adjusting model parameters to improve performance.
 Error Analysis: Analysing errors to understand model weaknesses and improve
robustness.
8. Iteration and Improvement
Continuously improving the algorithm by incorporating new data, refining preprocessing
techniques, experimenting with different models, and optimizing features.
NLP Techniques:
1. Rule-Based Approach

2. Machine Learning (ML)


3. Deep Learning (DL)
4. Word Embeddings (Word2Vec, GloVe)
5. Recurrent Neural Networks (RNNs)
6. Long Short-Term Memory (LSTM) Networks

7. Transformers

NLP Applications:
1. Virtual Assistants (e.g., Siri, Alexa)
2. Language Translation Apps (e.g., Google Translate)
3. Sentiment Analysis Tools
4. Text Summarization Tools

5. Chatbots
6. Speech Recognition Systems
7. Information Retrieval Systems

The Components of Natural Language Processing

There are two main components of natural language processing: syntactic analysis and semantic analysis.

Syntactic Analysis
Syntactic analysis involves analysing the grammatical structure (syntax) of a sentence to understand its meaning.
For example, consider the following sentence: “The cow jumped over the moon.”
Using Syntactic analysis, a computer would be able to understand the parts of speech of the
different words in the sentence. Based on the understanding, it can then try and estimate the
meaning of the sentence. In the case of the above example (however ridiculous it might be in
real life), there is no conflict about the interpretation. Thus, the syntactic analysis does the job
just fine.
However, human language is nuanced, and a sentence is not always as simple as the one described above. Consider this: “Does this all sound like a joke to you?”
A human would easily understand the irritation locked in the sentence. However, a syntactic analysis may just be too naive for it. That leads us to the need for something better and more sophisticated, i.e., semantic analysis.
Semantic Analysis
In the case of semantic analysis, a computer understands the meaning of a text by analyzing
the text as a whole and not just looking at individual words. The context in which a word is
used is very important when it comes to semantic analysis. Let’s revisit the same example:
“Does it all sound like a joke to you?” While the word “joke” may be positive on its own,
something sounding like a joke may not be so. Thus, the context here is derived by analyzing
the whole sentence instead of isolated words.

Semantic analysis does give better results, but it also requires substantially more training data and computation. We will now understand why.

How does Semantic Analysis work


What’s an Orange?
It could be a fruit, a color, or a place! To know the meaning of Orange in a sentence, we need
to know the words around it.
For example, if the sentence talks about “orange shirt,” we are talking about the color orange.
If a sentence talks about someone from Orange wearing an orange shirt – we are talking
about Orange, the place, and Orange, the color. And, if we are talking about someone from
Orange eating an orange while wearing an orange shirt – we are talking about the place, the
color, and the fruit.
You see, the word on its own matters less, and the words surrounding it matter more for the
interpretation. A semantic analysis
algorithm needs to be trained with a larger corpus of data to perform better.
Over the years, NLP has progressed from syntactic analysis to semantic analysis. Think about
Google Search. 10-15 years ago, search results were filled with irrelevant junk. It was easy for
SEO practitioners to load their pages with keywords and rank. It’s not the same case today.
Search Engines, like Google, can interpret content beyond individual words.
Advantages of Semantic Analysis
A strong grasp of semantic analysis helps firms improve their communication with customers without needing much direct human involvement.
Improved Customer Knowledge
A company can scale up its customer communication by using semantic analysis-based tools. These could be bots that act as doorkeepers or even on-site semantic search engines. By allowing customers to “talk freely”, without being bound to a fixed format, a firm can gather significant volumes of quality data.
Emphasized Customer-centric Strategy
A successful semantic strategy portrays a customer-centric image of a firm. It makes the
customer feel “listened to” without actually having to hire someone to listen.
Refined SEO Strategy

Modern-day SEO strategy requires a semantic approach. Content is now analysed semantically by search engines and ranked accordingly. It is thus important to give content sufficient context and expertise. On the whole, this trend has improved the general content quality of the internet.
How does Syntactic Analysis work

Syntactic analysis is much easier to implement than semantic analysis. The typical process
includes:

o Tokenization: Breaking a text into sentence tokens and then a sentence into word tokens.
o Lemmatization: Identifying the lemma (dictionary base form) of a word. Example: “cry” is the lemma for cry, crying, cried.
o POS Tagging: Identifying the part of speech (POS) of a particular word in a sentence.
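A brief sketch of this pipeline using NLTK (assuming NLTK is installed and its tokenizer, tagger, and WordNet resources have been downloaded; resource names can vary slightly between NLTK versions):

import nltk
from nltk.stem import WordNetLemmatizer

# One-time resource downloads, if not already present:
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger"); nltk.download("wordnet")

text = "The cows were crying near the moon."
tokens = nltk.word_tokenize(text)        # tokenization
tagged = nltk.pos_tag(tokens)            # POS tagging, e.g. ('cows', 'NNS')

lemmatizer = WordNetLemmatizer()
lemmas = [lemmatizer.lemmatize(tok.lower(), pos="v" if tag.startswith("V") else "n")
          for tok, tag in tagged]         # lemmatization: cows -> cow, crying -> cry

print(tagged)
print(lemmas)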
Advantages of Syntactic Analysis
While semantic analysis is more modern and sophisticated, it is also expensive to implement.
Thus for simple text analysis, syntactic analysis is still used. One such need is described below.
Parsing
Parsing implies pulling out a certain set of words from a text, based on predefined rules. For example, we may want to find the names of all locations mentioned in a newspaper. Semantic analysis would be overkill for such an application, and syntactic analysis does the job just fine.
Discourse Processing
Definition: Analysis of language beyond sentence level, focusing on context, structure, and
relationships.

Key Concepts:
1. Discourse Structure: Organization of text into coherent units.
2. Coherence Relations: Connections between sentences (e.g., cause-effect).
3. Discourse Markers: Words/phrases indicating relationships (e.g., "however").
4. Anaphora Resolution: Identifying pronoun references.

Discourse Processing Tasks:


1. Text Segmentation
2. Discourse Parsing
3. Coherence Detection
4. Anaphora Resolution

5. Coreference Resolution

Pragmatic Processing
Definition: Analysis of language in context, considering speaker intention, implicature, and
inference.
Key Concepts:
1. Implicature: Inferring meaning beyond literal interpretation.
2. Inference: Drawing conclusions from context.
3. Presupposition: Assumptions underlying language.

4. Speech Acts: Classifying utterances (e.g., question, statement).


Pragmatic Processing Tasks:
1. Implicature Detection
2. Inference Generation
3. Presupposition Resolution

4. Speech Act Classification


5. Sarcasm Detection
Techniques and Models
1. Machine Learning (ML) and Deep Learning (DL)
2. Discourse Representation Theory (DRT)

3. Centering Theory
4. Rhetorical Structure Theory (RST)
5. Pragmatic Inference Frameworks
Applications
1. Text Summarization

2. Question Answering
3. Sentiment Analysis
4. Dialogue Systems
5. Natural Language Generation

Forms of Learning:
1. Supervised Learning: Learning from labelled data.
2. Unsupervised Learning: Learning from unlabelled data.
3. Reinforcement Learning: Learning from rewards/punishments.

4. Semi-Supervised Learning: Combining labelled and unlabelled data.


5. Active Learning: Selecting data for labelling.

Inductive Learning:
Definition: Inductive learning is a type of machine learning where the algorithm learns
patterns and relationships from specific instances (data) to make generalizations.
How it works:

1. Data Collection: Gather specific instances (data points) related to the problem.
2. Pattern Identification: Identify patterns, relationships, or rules within the data.
3. Hypothesis Formation: Formulate a hypothesis (model) based on the patterns.
4. Generalization: Apply the hypothesis to new, unseen data.

Example:
Problem: Predict whether a person will buy a car based on age and income.
Data:

| Age | Income | Bought Car |
------------------------------
| 25  | 40000  | Yes        |
| 30  | 50000  | Yes        |
| 35  | 60000  | Yes        |
| 20  | 30000  | No         |
| 40  | 70000  | Yes        |

Pattern Identification:
- People with higher incomes tend to buy cars.
- People above 30 years old tend to buy cars.

Hypothesis Formation:
- If age > 30 and income > 50000, then likely to buy a car.
Generalization:
- New data: Age = 38, Income = 65000
- Prediction: Likely to buy a car (based on hypothesis)
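A tiny sketch of applying the induced hypothesis to new data (the threshold rule is taken directly from the hypothesis above; the function name is illustrative):

def will_buy_car(age, income):
    # Hypothesis induced from the training data: age > 30 and income > 50000
    return age > 30 and income > 50000

print(will_buy_car(38, 65000))   # True  -> likely to buy a car
print(will_buy_car(22, 28000))   # False -> unlikely to buy a car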

Learning Decision Trees:

Definition: A decision tree is a graphical representation of a decision-making process, where each internal node represents a feature or attribute, and each leaf node represents a class label or prediction.
How Decision Trees Learn:
1. Root Node: Start with the entire dataset.
2. Splitting: Choose the best feature to split the data into subsets.

3. Recursion: Recursively apply splitting to each subset.


4. Leaf Nodes: Stop when a subset contains only one class label.
5. Pruning: Remove unnecessary nodes to prevent overfitting.

Decision Tree Algorithms:

1. ID3 (Iterative Dichotomiser 3): Uses information gain to select features.


2. CART (Classification and Regression Trees): Uses Gini impurity to select features.
3. C4.5: An extension of ID3, handles continuous features.
4. Random Forest: Combines multiple decision trees for improved accuracy.

Key Concepts:
1. Features: Attributes used to make decisions.
2. Class Labels: Target variable predictions.
3. Information Gain: Measure of feature importance.
4. Gini Impurity: Measure of node purity.

5. Entropy: Measure of uncertainty.


Advantages:
1. Interpretable: Easy to understand and visualize.
2. Handling Non-Linear Relationships: Effective with complex data.
3. Handling Missing Values: Can handle missing feature values.
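A small sketch training a decision tree on the buy-a-car data from the inductive-learning example, using scikit-learn (assuming it is installed; the exact splits the tree learns may differ from the hand-derived hypothesis):

from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [age, income]; labels taken from the table in the earlier example
X = [[25, 40000], [30, 50000], [35, 60000], [20, 30000], [40, 70000]]
y = ["Yes", "Yes", "Yes", "No", "Yes"]

tree = DecisionTreeClassifier(criterion="entropy", max_depth=2, random_state=0)
tree.fit(X, y)

print(export_text(tree, feature_names=["age", "income"]))   # human-readable splits
print(tree.predict([[38, 65000]]))                          # e.g. ['Yes']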

Explanation-Based Learning (EBL)


Definition: EBL is a machine learning approach that uses domain knowledge to learn from
examples by generating explanations for the observed data.

Key Concepts:
1. Domain Knowledge: Expert knowledge about the problem domain.

2. Explanation: A logical justification for a decision or classification.


3. Operationalization: Converting domain knowledge into a usable form.

How EBL Works:


1. Domain Knowledge Acquisition: Expert knowledge is gathered.

2. Example Selection: Relevant examples are selected.


3. Explanation Generation: Explanations are generated for each example.
4. Generalization: Explanations are generalized to create a model.

Here's an example of Explanation-Based Learning (EBL)


Problem: Medical Diagnosis
Goal: Diagnose patients with either Hypertension (HTN) or Hyperthyroidism (HTD) based on
symptoms.
Domain Knowledge:

1. If a patient has high blood pressure, they are likely to have HTN.
2. If a patient has an increased heart rate, they are likely to have HTD.
3. If a patient has a family history of thyroid problems, they are likely to have HTD.

Training Data:

| Patient ID | Symptoms                           | Diagnosis |
----------------------------------------------------------------
| 1          | High blood pressure, headache      | HTN       |
| 2          | Increased heart rate, weight loss  | HTD       |
| 3          | High blood pressure, dizziness     | HTN       |
| 4          | Normal blood pressure, fatigue     | Unknown   |

EBL Process:
1. Select a patient (e.g., Patient 1).

2. Generate an explanation for the diagnosis (HTN):


- High blood pressure → HTN (Rule 1)
- Headache → HTN (associated symptom)
3. Operationalize the explanation:
- IF (blood pressure = high) AND (symptoms = headache) THEN diagnosis = HTN

4. Generalize the explanation to create a model:
   - IF (blood pressure = high) THEN diagnosis = HTN (Rule 1')
   - IF (heart rate = increased) THEN diagnosis = HTD (Rule 2')
   - IF (family history = thyroid problems) THEN diagnosis = HTD (Rule 3')

New Patient: Patient 5 with symptoms:

- Blood pressure = normal


- Heart rate = increased
- Weight loss = yes

EBL Prediction: HTD (using Rule 2')
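A minimal sketch of the generalized rules applied to Patient 5, written as plain Python conditionals (the function and parameter names are illustrative):

def diagnose(blood_pressure, heart_rate, family_history_thyroid):
    # Generalized rules produced by the EBL process above
    if blood_pressure == "high":
        return "HTN"            # Rule 1'
    if heart_rate == "increased":
        return "HTD"            # Rule 2'
    if family_history_thyroid:
        return "HTD"            # Rule 3'
    return "Unknown"

# Patient 5: normal blood pressure, increased heart rate, no known family history
print(diagnose("normal", "increased", False))   # -> HTD (via Rule 2')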

Learning Using Relevance Information:


Definition: Learning using relevance information involves training machine learning models to
focus on the most relevant features or data points to improve performance.
Types:
1. Relevance Feedback: Users provide feedback on the relevance of search results.
2. Active Learning: Models actively query users for relevance information.

3. Transfer Learning: Models leverage pre-trained knowledge from related tasks.


Key Concepts:
1. Relevance Score: Measures the importance of each feature or data point.
2. Weighting: Assigns higher weights to relevant features or data points.
3. Regularization: Prevents overfitting by penalizing irrelevant features.

Example:
Problem: Product Recommendation
Goal: Recommend products to users based on their past purchases.
Data:
| User ID | Product ID | Rating |
----------------------------------
| 1       | A          | 5      |
| 1       | B          | 4      |
| 2       | C          | 3      |
| 2       | D          | 2      |

Relevance Information:

- User 1 likes products with "electronics" feature.


- User 2 dislikes products with "clothing" feature.
Learning Process:
1. Initialize model with equal weights for all features.
2. Train model on labeled data (ratings).

3. Receive relevance feedback from users.


4. Update weights based on relevance scores.
5. Retrain model with updated weights.
Updated Model:
- Electronics feature weight: 0.8

- Clothing feature weight: 0.2


Improved Recommendations:
- User 1: Product E (electronics)
- User 2: Product F (not clothing)
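A toy sketch of how relevance feedback might update feature weights used to score products (the feature names, step size, and update rule are illustrative assumptions, not a standard algorithm):

# Feature weights, initially equal (kept global here for simplicity; per user in practice)
weights = {"electronics": 0.5, "clothing": 0.5}

def update_weights(weights, feature, relevant, step=0.3):
    # Move the weight up for relevant features, down for irrelevant ones
    delta = step if relevant else -step
    weights[feature] = min(1.0, max(0.0, weights[feature] + delta))
    return weights

# User 1 finds electronics relevant; User 2 dislikes clothing
update_weights(weights, "electronics", relevant=True)
update_weights(weights, "clothing", relevant=False)

def score(product_features):
    # Weighted sum over a product's features
    return sum(weights[f] for f in product_features)

print(weights)                   # {'electronics': 0.8, 'clothing': 0.2}
print(score({"electronics"}))    # higher score -> recommend
print(score({"clothing"}))       # lower score  -> deprioritize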

Neural Network Learning

Definition: Neural networks are machine learning models inspired by the human brain's
structure and function.

Key Concepts:


1. Artificial Neurons: Processing units that receive inputs and produce outputs.
2. Activation Functions: Introduce non-linearity to neural networks.
3. Backpropagation: Training algorithm for neural networks.

Example:

Problem: Handwritten Digit Recognition

Real-World Applications
1. Image Classification
2. Natural Language Processing

3. Speech Recognition

Working of a Neural Network


Neural networks are complex systems that mimic some features of the functioning of the human brain. A neural network is composed of an input layer, one or more hidden layers, and an output layer, each made up of artificial neurons that are connected to one another. The two stages of the basic process are forward propagation and backpropagation.

 Forward Propagation
 Input Layer: Each feature in the input layer is represented by a node on the network,
which receives input data.

 Weights and Connections: The weight of each neuronal connection indicates how
strong the connection is. Throughout training, these weights are changed.

 Hidden Layers: Each hidden layer neuron processes inputs by multiplying them by
weights, adding them up, and then passing them through an activation function. By
doing this, non-linearity is introduced, enabling the network to recognize intricate
patterns.
 Output: The final result is produced by repeating the process until the output layer is
reached.
Backpropagation

 Loss Calculation: The network’s output is compared against the true target values, and a loss function is used to compute the difference. For a regression problem, the Mean Squared Error (MSE) is commonly used as the cost function.

Loss Function (MSE): L = (1/n) * Σ (y_i − ŷ_i)²
 Gradient Descent: Gradient descent is then used by the network to reduce the loss. To lower the error, each weight is adjusted in proportion to the derivative of the loss with respect to that weight.
 Adjusting Weights: The weights are adjusted at each connection by applying this iterative process, called backpropagation, backward across the network.
 Training: During training with different data samples, the entire process of forward propagation, loss calculation, and backpropagation is done iteratively, enabling the network to adapt and learn patterns from the data.
 Activation Functions: Activation functions such as the rectified linear unit (ReLU) or sigmoid introduce non-linearity into the model. They decide whether a neuron “fires” based on its total weighted input.
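A compact sketch of forward propagation, MSE loss, and backpropagation with gradient descent, using NumPy on a toy XOR problem (the architecture, learning rate, and epoch count are illustrative choices):

import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn XOR (2 inputs -> 1 output)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights
lr = 0.5                                        # learning rate

for epoch in range(5000):
    # Forward propagation
    h = sigmoid(X @ W1 + b1)            # hidden activations
    y_hat = sigmoid(h @ W2 + b2)        # network output
    loss = np.mean((y - y_hat) ** 2)    # MSE loss

    # Backpropagation: gradients of the loss w.r.t. each weight
    d_out = (y_hat - y) * y_hat * (1 - y_hat) * (2 / len(X))
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    dW1, db1 = X.T @ d_hid, d_hid.sum(axis=0)

    # Gradient descent: move each weight against its gradient
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("final MSE:", round(float(loss), 4))
print("predictions:", y_hat.round(2).ravel())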

Genetic Learning
Definition: Genetic learning is inspired by genetic algorithms, which apply principles of natural selection and genetics.
Key Concepts:

1. Chromosomes: Represent solutions as binary strings.


2. Fitness Function: Evaluates solution quality.

3. Crossover: Combines parent chromosomes.


4. Mutation: Introduces random variations.
Example:
Problem: Optimization of Function f(x) = x^2 + 3x + 2
Genetic Algorithm:

- Population Size: 100


- Chromosome Length: 10
- Fitness Function: f(x)
- Crossover Probability: 0.7
- Mutation Probability: 0.1

Result: Optimal solution x = -1.5 (the minimum of f(x))
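A minimal genetic-algorithm sketch for minimising f(x) = x² + 3x + 2 with the parameters listed above (the decoding range for chromosomes and the truncation-selection scheme are illustrative choices):

import random

random.seed(0)

def f(x):
    return x**2 + 3*x + 2

def decode(bits):
    # Interpret a 10-bit chromosome as a real number in [-5, 5]
    value = int("".join(map(str, bits)), 2)
    return -5 + 10 * value / (2**10 - 1)

def fitness(bits):
    # Minimisation: lower f(x) means higher fitness
    return -f(decode(bits))

def crossover(a, b):
    point = random.randint(1, len(a) - 1)   # single-point crossover
    return a[:point] + b[point:]

def mutate(bits, p=0.1):
    return [1 - bit if random.random() < p else bit for bit in bits]

population = [[random.randint(0, 1) for _ in range(10)] for _ in range(100)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:20]                       # selection: keep the fittest
    children = []
    while len(children) < len(population):
        a, b = random.sample(parents, 2)
        child = crossover(a, b) if random.random() < 0.7 else a[:]
        children.append(mutate(child))
    population = children

best = max(population, key=fitness)
print("best x ≈", round(decode(best), 2), "f(x) ≈", round(f(decode(best)), 2))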


Real-World Applications
1. Optimization Problems
2. Scheduling Tasks

3. Resource Allocation
Comparison
|              | Neural Network Learning       | Genetic Learning              |
---------------------------------------------------------------------------------
| Inspiration  | Human Brain                   | Natural Selection             |
| Key Concepts | Neurons, Activation Functions | Chromosomes, Fitness Function |
| Example      | Handwritten Digit Recognition | Function Optimization         |
| Strengths    | Pattern Recognition           | Global Optimization           |
| Weaknesses   | Overfitting                   | Computational Cost            |

Expert Systems:

Definition: Expert systems are computer programs that mimic human expertise in a specific
domain, using knowledge representation and reasoning techniques.

Expert systems consist of 5 key components:
1. Knowledge base
2. Inference engine
3. User interface
4. Explanation module
5. Knowledge acquisition system
 The knowledge base contains facts and rules relevant to a specific domain,
 while the inference engine interprets this knowledge to find solutions to user
problems.
 The user interface allows non-experts to interact with the system,
 while the explanation module provides explanations for the system’s conclusions.
 The knowledge acquisition system ensures that the expert system can acquire and
integrate new knowledge.
Types of Expert Systems in AI
There are several types of Expert Systems used in AI:
1. Rule-Based Expert Systems: These systems use a collection of rules to make decisions.
Rules are created by human experts and guide the system’s reasoning process. An
example is Mycin, an expert system for diagnosing bacterial infections.
2. Frame-Based Expert Systems: Frame-based expert systems use frame representation
to organize knowledge. Frames capture structured information about entities and
their attributes, allowing the system to reason about specific instances. For example,
an expert system for car insurance might use frames to represent different types of
coverage and associated costs.
3. Fuzzy Expert Systems: Fuzzy expert systems handle imprecise or uncertain data using
fuzzy logic. This allows the system to reason with degrees of truth rather than binary
values. Fuzzy expert systems are useful in domains where precise measurements are
difficult or subjective, such as weather forecasting or risk assessment.

4. Neural Expert Systems: Neural expert systems utilize neural networks to learn from
data through training processes. Neural networks can recognize patterns and make
predictions based on input data. They are particularly effective in areas such as image
recognition and natural language processing.
5. Neuro-Fuzzy Expert Systems: Neuro-fuzzy expert systems combine elements of fuzzy
logic and neural networks to make decisions based on both numerical and linguistic
information. These systems excel in complex domains where uncertainty and
imprecision are prevalent, such as financial forecasting or traffic management.
Advantages and Benefits of Expert Systems

Expert systems offer numerous advantages over traditional problem-solving approaches.

 Expert systems in AI provide increased accuracy by leveraging expert knowledge stored in their knowledge bases. This ensures consistent decision-making even in complex scenarios.

 They are highly scalable as they can handle large amounts of information efficiently.
This makes them suitable for managing complex domains with vast amounts of data.

 Moreover, expert systems in AI can be cost-effective by reducing the need for human
experts, resulting in significant cost savings.

 They enhance decision-making by providing relevant data and expertise to support informed choices.
Expert Systems in AI Examples
Expert systems in AI leverage advanced knowledge representation and rule-based reasoning
to emulate human expertise, finding applications in diverse domains.

1. MYCIN: An early expert system that utilised rule-based reasoning to diagnose bacterial infections. It analysed patient symptoms and medical history, providing recommendations for antibiotic treatments based on expert knowledge encoded in the system.

2. Dendral: One of the earliest expert systems, Dendral focused on organic chemistry analysis. Similar rule-based reasoning is applied in troubleshooting scenarios, where expert systems analyse complex systems to identify and resolve issues.

Expert System Shells:

Definition: Pre-built software frameworks for developing expert systems.


Characteristics:
1. Domain-Independent: Applicable to various domains.
2. Knowledge Representation: Supports multiple knowledge representation techniques.
3. Inference Engine: Provides reasoning capabilities.

4. User Interface: Facilitates interaction with users.

Examples:

1. CLIPS (C Language Integrated Production System)
2. JESS (Java Expert System Shell)
3. PyKE (Python Knowledge Engine)
4. Drools (Java Rules Engine)
5. OpenL Tablets (Open Source Expert System Shell)

Here's an example of an Expert System Shell:

CLIPS (C Language Integrated Production System)


Overview:
CLIPS is a popular, open-source expert system shell written in C. It provides a flexible framework
for building rule-based expert systems.

Example Application:
Medical Diagnosis Expert System

Knowledge Base:
(defrule fever
(symptom fever yes)
=>
(assert (disease pneumonia)))

(defrule headache
(symptom headache yes)
=>
(assert (disease meningitis)))
(defrule diagnosis

(disease ?d)
=>
(printout t "Diagnosis: “? d crlf))

Explanation:
This expert system has three rules:

1. Fever Rule: If the symptom "fever" is yes, then assert "pneumonia" as a possible disease.
2. Headache Rule: If the symptom "headache" is yes, then assert "meningitis" as a possible
disease.
3. Diagnosis Rule: If a disease is asserted, print out the diagnosis.
User Interaction
CLIPS> (reset)

CLIPS> (assert (symptom fever yes))


CLIPS> (assert (symptom headache yes))
CLIPS> (run)
Diagnosis: pneumonia
Diagnosis: meningitis

Knowledge Acquisition:
Definition: Process of gathering, structuring, and representing domain expertise.
Methods:
1. Interviews: Domain expert interviews.
2. Surveys: Questionnaires and surveys.

3. Observations: Observing domain experts.


4. Documentation: Reviewing existing documents.
5. Prototyping: Building prototypes to elicit feedback.
Knowledge Acquisition Techniques:
1. Task Analysis: Breaking down complex tasks.

2. Cognitive Walkthroughs: Simulating user interactions.


3. Protocol Analysis: Analysing domain expert thinking.
4. Knowledge Modelling: Representing domain concepts.
