Attribute MSA Training
DSQR Training
Attribute MSA
Fred Nunez
Corporate Quality
Improving a Discrete Measurement System
2
Attribute MSA
• An attribute MSA study is the primary tool for assessing the reliability
of a qualitative measurement system.
• Attribute data has less information content than variables data, but often it
is all that is available, and it is still important to be diligent about the
integrity of the measurement system.
3
Attribute MSA Roadmap
The roadmap for planning, implementing, and collecting data for an attribute MSA study follows:
Step 1.
To start the attribute MSA study, identify the metric and agree within the team on its
operational definition. Often the exact measurement terms aren’t immediately obvious.
For example, in many transactional service processes, it could be the initial writing of the line items to an
order, the charging of the order to a specific account, or the translation of the charges into a bill. Each of
these might involve a separate classification step.
Step 2.
Define the defects and the classifications that make an item defective. These should be
mutually exclusive (a defect cannot fall into two categories) and exhaustive (if an item is
defective, it must fall into at least one defined category).
If done correctly, every entity falls into one and only one category; a quick consistency check is sketched below.
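As an illustration only (the category names below are hypothetical and not from this study), a quick Python sketch of the "one and only one category" check might look like this:

```python
# Hypothetical defect categories for a transactional order process (illustration only).
CATEGORIES = ["wrong_line_item", "wrong_account", "billing_error", "not_defective"]

def classify(item_flags):
    """Return the single category an item falls into; more or fewer than one
    signals that the categories are not mutually exclusive and exhaustive."""
    hits = [c for c in CATEGORIES if item_flags.get(c, False)]
    if len(hits) != 1:
        raise ValueError(f"Category scheme failed for this item: {hits}")
    return hits[0]

print(classify({"billing_error": True}))   # -> billing_error
```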
Step 3.
Select the samples to be used in the MSA; from 30 to 50 samples are typically necessary
(a sample size calculator can help). The samples should span the normal extremes of the
process with regard to the attribute being measured.
4
Attribute MSA Roadmap
Step 3 (cont.).
Measure the samples independently of one another. The majority of the samples should come
from the "gray" areas, with a few clearly good and clearly bad; a sample-plan sketch follows
below.
For example, in a sample of 30 units, five units might be clearly defective and five units
might be clearly acceptable. The remaining samples would vary in the quantity and type of
defects.
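A minimal sketch of the 30-unit sample mix described above (labels and counts are illustrative only):

```python
import random

# Illustrative sample plan: a few clearly good, a few clearly defective,
# and the majority drawn from the "gray" area of the process.
sample_plan = (
    ["clearly_acceptable"] * 5 +
    ["clearly_defective"] * 5 +
    ["gray_area"] * 20
)

random.shuffle(sample_plan)  # mix the units so their identities are not obvious
samples = {f"unit_{i + 1:02d}": kind for i, kind in enumerate(sample_plan)}
print(len(samples), "units selected")   # -> 30 units selected
```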
Step 4.
Select at least 3 appraisers to conduct the MSA. These should be people who normally
conduct the assessment.
5
Attribute MSA Roadmap
Step 5.
Perform the appraisal. Present the samples to each appraiser in random order (without the
appraiser knowing which sample it is and without the other appraisers witnessing the
appraisal) and have them classify each item per the defect definitions (a randomization
sketch follows this step).
After the first appraiser has reviewed all items, repeat with the remaining appraisers.
Appraisers must inspect and classify independently.
After all appraisers have classified each item, repeat the whole process for one additional
trial.
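A minimal sketch (assuming three appraisers and two trials, as in the Minitab example later in this deck) of how the presentation order could be randomized for each appraiser and trial:

```python
import random

appraisers = ["Jane", "Bob", "Alex"]
units = [f"unit_{i:02d}" for i in range(1, 31)]   # 30 blinded sample IDs
trials = 2

# Each appraiser sees every unit in a fresh random order on each trial, so the
# classification is blind to sample identity and to the other appraisers' calls.
run_order = {
    (appraiser, trial): random.sample(units, len(units))
    for appraiser in appraisers
    for trial in range(1, trials + 1)
}
print(run_order[("Jane", 1)][:5])   # first five units Jane inspects in trial 1
```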
Step 6.
Conduct an expert appraisal or compare to a standard.
– In Step 5 the appraisers were compared to themselves (Repeatability) and to one another
(Reproducibility).
– If the appraisers are not compared to a standard, the team might gain a false sense of security in the
Measurement System.
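A hedged sketch of the comparison to a standard: given an appraiser's calls across both trials and the expert's call for each unit, the "versus standard" score is the fraction of units where every trial matches the expert (the data below is made up):

```python
# Made-up calls for three units; 1 = defective, 0 = acceptable.
expert = {"unit_01": 1, "unit_02": 0, "unit_03": 1}
jane   = {"unit_01": [1, 1], "unit_02": [0, 1], "unit_03": [1, 1]}  # two trials each

# An appraiser matches the standard on a unit only if every trial agrees with the expert.
matches = sum(all(call == expert[unit] for call in calls) for unit, calls in jane.items())
print(f"Agreement with standard: {matches}/{len(expert)}")   # -> 2/3
```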
6
Attribute MSA Roadmap
Step 7.
Enter the data into a statistical software package such as Minitab and analyze it. Data is
usually entered in columns (Appraiser, Sample, Response, and Expert); an illustrative layout
is sketched below. The analysis output typically includes within-appraiser agreement
(repeatability), between-appraiser agreement (reproducibility), agreement with the standard,
and Kappa statistics.
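An illustrative sketch of the stacked layout referred to in Step 7; the rows are invented purely to show the format (a Trial column is added here for readability):

```python
import pandas as pd

# One row per appraiser x sample x trial, with the expert (standard) call alongside.
msa_data = pd.DataFrame({
    "Appraiser": ["Jane", "Jane", "Bob", "Bob"],
    "Sample":    [1, 1, 1, 1],
    "Trial":     [1, 2, 1, 2],
    "Response":  ["Fail", "Fail", "Pass", "Fail"],
    "Expert":    ["Fail", "Fail", "Fail", "Fail"],
})
print(msa_data)
```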
7
Minitab Follow Along:
Attribute Gage R&R
Data: C:\SixSigma\Data\Attributes.mtw
Conduct an attribute Gage R&R study:
Stat>Quality Tools>Attribute Agreement Analysis…
8
Minitab Follow Along:
Attribute Gage R&R, cont.
[Minitab dialog screenshot: select the columns containing the appraised attributes]
9
Minitab Follow Along: Attribute Gage R&R, cont.
Attribute Gage R&R Study for Jane1, Jane2, Bob1, Bob2, Alex1, Alex2
Within Appraiser
Assessment Agreement
Appraiser # Inspected # Matched Percent (%) 95.0% CI
Jane 30 23 76.7 ( 57.7, 90.1)
Bob 30 22 73.3 ( 54.1, 87.7)
Alex 30 18 60.0 ( 40.6, 77.3)
# Matched: Appraiser agrees with him/herself across trials.
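For reference, the percentages above are simply #Matched / #Inspected, and the intervals are consistent with an exact (Clopper-Pearson) binomial confidence interval; a quick sketch, assuming that interval type, reproduces Jane's row:

```python
from scipy.stats import beta

matched, inspected = 23, 30                    # Jane's row above
point = 100 * matched / inspected              # 76.7%

# Exact (Clopper-Pearson) 95% limits via beta quantiles (assumed to match Minitab).
lower = beta.ppf(0.025, matched, inspected - matched + 1)
upper = beta.ppf(0.975, matched + 1, inspected - matched)
print(f"{point:.1f}% ({100 * lower:.1f}, {100 * upper:.1f})")   # ~ 76.7% (57.7, 90.1)
```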
10
Minitab Follow Along: Attribute Gage R&R, cont.
Between Appraisers
Assessment Agreement
# Inspected # Matched Percent (%) 95.0% CI
30 7 23.3 ( 9.9, 42.3)
# Matched: All appraisers' assessments agree with each other.
11
Minitab Follow Along: Attribute Gage R&R, cont.
[Minitab Assessment Agreement plot: percent agreement with 95% CIs by appraiser (Jane, Bob, Alex); percent axis roughly 40 to 80]
12
• The Kappa statistic tells us how much better the measurement system is than
random chance. If there is substantial agreement, there is the possibility that the
ratings are accurate. If agreement is poor, the usefulness of the ratings is
extremely limited.
• The Kappa statistic always yields a number between -1 and +1. A value of 0
implies agreement no better than random chance, negative values imply agreement
worse than chance, and a value of +1 implies perfect agreement. What Kappa value
is considered good enough for a measurement system? That very much depends on the
application of your measurement system. As a general rule of thumb, a Kappa value
of 0.7 or higher should be good enough for investigation and improvement purposes
(a small numeric sketch of the calculation follows).
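A minimal numeric sketch of the underlying formula, Kappa = (observed agreement - chance agreement) / (1 - chance agreement), using made-up proportions:

```python
def cohens_kappa(p_observed, p_chance):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    return (p_observed - p_chance) / (1 - p_chance)

# Made-up example: two appraisers agree on 85% of items, while the category mix
# alone would produce 50% agreement purely by chance.
print(round(cohens_kappa(0.85, 0.50), 2))   # -> 0.7, at the rule-of-thumb threshold
```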
13
Attribute MSA - Example
Objective:
Analysis and interpretation of an attribute Gage R&R study
Background: In addition to the Gage R&R study on the hole diameter, an attribute
R&R study was conducted to check the visual assessment system used.
Four appraisers looked at 30 units repeatedly.
Data: C:\SixSigma\Data\CSGRR
Instructions:
Part A:
Analyze the initial data (column “assessment”) and assess the quality of the
measurement system.
If necessary, recommend improvement actions.
Part B:
The team made suggestions for improving the measurement system and used the
same parts to conduct a second attribute Gage R&R study. Analyze the data after
improvement (column “re-assessment”).
Time: 10 min
14
Attribute MSA - Example
[Minitab Assessment Agreement plot for the initial data: percent agreement with 95% CIs; percent axis roughly 80 to 100]
15
Attribute MSA - Example
Between Appraisers
Assessment Agreement
16
Attribute MSA - Example –Ans. B
[Minitab Assessment Agreement plot after improvement: percent agreement with 95% CIs; percent axis roughly 85 to 100]
17
Attribute MSA - Example –Ans. B
Between Appraisers
Assessment Agreement
# Inspected # Matched Percent (%) 95.0% CI
30 29 96.7 ( 82.8, 99.9)
# Matched: All appraisers' assessments agree with each other.
18
Reasons MSA Attribute Data Fails
Appraiser
• Visual acuity (or lack of it)
• Misinterpretation of the reject definitions
Appraisal
• Defect probability.
– If the defect probability is very high, the appraiser tends to reduce the stringency of the test,
becoming numbed or hypnotized by the sheer monotony of repetition.
– If it is very low, the appraiser tends to become complacent and to see only what they expect to
see.
• Fault type. Some defects are far more obvious than others.
• Number of faults occurring simultaneously. When several faults occur on the same item, the
appraiser must judge which defect category is correct.
• Not enough time allowed for inspection.
• Infrequent appraiser rest periods.
• Poor illumination of the work area.
• Poor inspection station layout.
• Poor objectivity and clarity of conformance standards and test instructions.
19
Reasons MSA Attribute Data Fails
20
Notes
21