
HEATING, COOLING & WATER HEATING PRODUCTS

DSQR Training
Attribute MSA
Fred Nunez
Corporate Quality
Improving a Discrete Measurement System

Assessing the accuracy, repeatability, and reproducibility of a discrete measurement system
• Discrete data are usually the result of human judgment (“which category
does this item belong in?”)
• When categorizing items (good/bad; type of call; reason for leaving), you
need a high degree of agreement on which way an item should be
categorized
• The best way to assess human judgment is to have all operators
categorize several known test units
– Look for 100% agreement
– Use disagreements as opportunities to determine and eliminate
problems

2
Attribute MSA

• An attribute MSA study is the primary tool for assessing the reliability
of a qualitative measurement system.

• Attribute data carries less information than variables data, but often it
is all that is available, and it is still important to be diligent about the
integrity of the measurement system.

• Attribute inspection generally does one of three things:


– Classifies an item as either Conforming or Nonconforming
– Classifies an item into one of multiple categories
– Counts the number of "non-conformities" per item inspected

• Thus, a "perfect" MSA attribute data system would


– Correctly classify every item
– Always produce a correct count of an item's non-conformities

3
Attribute MSA Roadmap
The roadmap for planning, implementing, and collecting data for an attribute MSA follows:

Step 1.
To start the MSA attribute data study, identify the metric and agree within the team on its
operational definition. Often the exact measurement terms aren’t immediately obvious.
For example, in many transactional service processes, it could be the initial writing of the line items to an
order, the charging of the order to a specific account, or the translation of the charges into a bill. Each of
these might involve a separate classification step.

Step 2.
Define the defects and classifications for what makes an item defective. These should be
mutually exclusive (a defect cannot fall into two categories) and exhaustive (if an item is
defective, it must fall into at least one defined category).
If done correctly, every entity falls into one and only one category.
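
As a minimal illustration (not part of the original training material), a short Python check can enforce the "one and only one category" rule; the category names and the example record are hypothetical.

# Hypothetical defect categories; replace with the team's operational definitions.
CATEGORIES = ["scratch", "dent", "missing label", "wrong colour"]

def check_classification(item_id, assigned_categories):
    """Raise if a defective item falls into zero or more than one defined category."""
    valid = [c for c in assigned_categories if c in CATEGORIES]
    if len(valid) == 0:
        raise ValueError(f"{item_id}: no defined category (scheme is not exhaustive)")
    if len(valid) > 1:
        raise ValueError(f"{item_id}: {valid} (categories are not mutually exclusive)")
    return valid[0]

print(check_classification("unit-07", ["dent"]))   # passes; two categories would raise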

Step 3.
Select samples to be used in the MSA. A sample size calculator can help, but 30 to 50 samples
are typically necessary. The samples should span the normal extremes of the process with
regard to the attribute being measured.
4
Attribute MSA Roadmap

Step 3.
Select samples to be used in the MSA. From 30 to 50 samples are necessary. The samples
should span the normal extremes of the process with regard to the attribute being
measured.

Measure the samples independently of one another. The majority of the samples should be
from the "gray" areas, with a few from the clearly good and clearly bad.

For example, for a sample of 30 units, five units might be clearly defective and five units
might be clearly acceptable. The remaining samples would vary in quantity and type of
defects.

Step 4.
Select at least 3 appraisers to conduct the MSA. These should be people who normally
conduct the assessment.

5
Attribute MSA Roadmap

Step 5.
Perform the appraisal. Provide the samples to each appraiser in random order (without the
appraiser knowing which sample it is and without the other appraisers witnessing the appraisal)
and have them classify each item per the defect definitions.

After the first appraiser has reviewed all items, repeat with the remaining appraisers.
Appraisers must inspect and classify independently.

After all appraisers have classified each item, repeat the whole process for one additional
trial.
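
A minimal sketch (not from the original slides) of randomizing the presentation order independently for each appraiser and each trial, so appraisers cannot identify samples by position; the sample labels and appraiser names are hypothetical.

import random

samples = [f"unit-{i:02d}" for i in range(1, 31)]        # 30 blind-coded units
appraisers = ["Appraiser A", "Appraiser B", "Appraiser C"]
trials = 2

# Build a separate random run order per appraiser per trial.
run_order = {}
for appraiser in appraisers:
    for trial in range(1, trials + 1):
        order = samples[:]            # copy so each shuffle is independent
        random.shuffle(order)
        run_order[(appraiser, trial)] = order

print(run_order[("Appraiser A", 1)][:5])   # first five units Appraiser A sees in trial 1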
Step 6.
Conduct an expert appraisal or compare to a standard.
– In Step 5 the appraisers were compared to themselves (Repeatability) and to one another
(Reproducibility).
– If the appraisers are not compared to a standard, the team might gain a false sense of security in the
Measurement System.

6
Attribute MSA Roadmap

Step 7.
Enter the data into a statistical software package such as Minitab and analyze it. Data are
usually entered in columns (Appraiser, Sample, Response, and Expert). The analysis output
typically includes the measures below (a small computational sketch follows the list):

– Percentage overall agreement

– Percentage agreement within each appraiser (Repeatability)

– Percentage agreement between appraisers (Reproducibility)

– Percentage agreement with known standard (Accuracy)
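
A minimal Python sketch (not from the original slides) showing how these measures can be computed from data laid out in Appraiser / Sample / Trial / Response columns plus a known standard; the records below are hypothetical.

from collections import defaultdict

# Hypothetical long-format records: (appraiser, sample, trial, response).
records = [
    ("Jane", 1, 1, "Pass"), ("Jane", 1, 2, "Pass"),
    ("Jane", 2, 1, "Fail"), ("Jane", 2, 2, "Pass"),
    ("Bob",  1, 1, "Pass"), ("Bob",  1, 2, "Pass"),
    ("Bob",  2, 1, "Fail"), ("Bob",  2, 2, "Fail"),
]
standard = {1: "Pass", 2: "Fail"}   # expert reference value per sample

responses = defaultdict(list)       # (appraiser, sample) -> list of trial responses
for appraiser, sample, trial, response in records:
    responses[(appraiser, sample)].append(response)

appraisers = sorted({r[0] for r in records})
samples = sorted({r[1] for r in records})

# Within appraiser (Repeatability): all of an appraiser's trials on a sample agree.
for a in appraisers:
    matched = sum(len(set(responses[(a, s)])) == 1 for s in samples)
    print(f"Within {a}: {matched}/{len(samples)} matched")

# Appraiser vs standard (Accuracy): all trials agree and equal the standard.
for a in appraisers:
    matched = sum(set(responses[(a, s)]) == {standard[s]} for s in samples)
    print(f"{a} vs standard: {matched}/{len(samples)} matched")

# Between appraisers (Reproducibility): every trial of every appraiser agrees.
matched = sum(
    len({resp for a in appraisers for resp in responses[(a, s)]}) == 1 for s in samples
)
print(f"Between appraisers: {matched}/{len(samples)} matched")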

7
Minitab Follow Along:
Attribute Gage R&R
Data: C:\SixSigma\Data\Attributes.mtw
Conduct an attribute Gage R&R study:
Stat>Quality Tools>Attribute Agreement Analysis…

8
Minitab Follow Along:
Attribute Gage R&R, cont.

[Screenshot callout: columns containing the appraised attributes]

9
Minitab Follow Along: Attribute Gage R&R, cont.

Session Window Output:

Attribute Gage R&R Study for Jane1, Jane2, Bob1, Bob2, Alex1, Alex2
Within Appraiser
Assessment Agreement
Appraiser # Inspected # Matched Percent (%) 95.0% CI
Jane 30 23 76.7 ( 57.7, 90.1)
Bob 30 22 73.3 ( 54.1, 87.7)
Alex 30 18 60.0 ( 40.6, 77.3)
# Matched: Appraiser agrees with him/herself across trials.

Each Appraiser vs Standard


Assessment Agreement
Appraiser # Inspected # Matched Percent (%) 95.0% CI
Jane 30 19 63.3 ( 43.9, 80.1)
Bob 30 18 60.0 ( 40.6, 77.3)
Alex 30 18 60.0 ( 40.6, 77.3)
# Matched: Appraiser's assessment across trials agrees with standard.
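
For reference, the 95.0% CI values reported above are consistent with exact (Clopper-Pearson) binomial intervals on # Matched out of # Inspected. A minimal sketch, assuming SciPy is available (not part of the original slides), reproduces Jane's interval:

from scipy.stats import binomtest

# Jane matched herself on 23 of 30 samples (from the output above).
ci = binomtest(k=23, n=30).proportion_ci(confidence_level=0.95, method="exact")
print(f"{23/30:.1%} agreement, 95% CI ({ci.low:.1%}, {ci.high:.1%})")
# Expected output is approximately (57.7%, 90.1%), matching the session window.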

10
Minitab Follow Along: Attribute Gage R&R, cont.

Session Window Output:

Between Appraisers
Assessment Agreement
# Inspected # Matched Percent (%) 95.0% CI
30 7 23.3 ( 9.9, 42.3)
# Matched: All appraisers' assessments agree with each other.

All Appraisers vs Standard


Assessment Agreement
# Inspected # Matched Percent (%) 95.0% CI
30 7 23.3 ( 9.9, 42.3)
# Matched: All appraisers' assessments agree with standard.

11
Minitab Follow Along: Attribute Gage R&R, cont.

[Chart: Assessment Agreement – panels "Within Appraisers" and "Appraiser vs Standard"; percent matched with 95.0% CI for appraisers Jane, Bob, and Alex]

12
The Kappa Statistic

Pobserved = proportion of units on which the raters agreed

Pchance = proportion of units on which agreement would be expected by chance alone

Kappa = (Pobserved − Pchance) / (1 − Pchance)

• The Kappa statistic tells us how much better the measurement system is than
random chance. If there is substantial agreement, there is the possibility that the
ratings are accurate. If agreement is poor, the usefulness of the ratings is
extremely limited.

• The Kappa statistic always yields a number between -1 and +1. A value of 0 implies
agreement no better than random chance, negative values imply agreement worse than
chance, and a value of +1 implies perfect agreement. What Kappa value is good enough for a
measurement system? That depends very much on the application of your measurement
system. As a general rule of thumb, a Kappa value of 0.7 or higher should be good enough
for investigation and improvement purposes.
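
A minimal sketch (not part of the original slides) of computing Kappa for two appraisers directly from the formula above; the pass/fail ratings are hypothetical.

from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Kappa = (P_observed - P_chance) / (1 - P_chance)."""
    n = len(ratings_a)
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(ratings_a) | set(ratings_b)
    # Chance agreement: sum of products of the two raters' marginal proportions.
    p_chance = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical pass/fail ratings from two appraisers on the same 10 items.
rater_a = ["P", "P", "F", "P", "F", "P", "P", "F", "P", "P"]
rater_b = ["P", "P", "F", "P", "P", "P", "P", "F", "P", "F"]
print(round(cohen_kappa(rater_a, rater_b), 3))   # about 0.52 for this example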

13
Attribute MSA - Example

Objective:
Analysis and interpretation of an Attribute Gage R&R study.
Background: In addition to the Gage R&R study on the hole diameter, an attribute
R&R study was conducted to check the visual assessment system used.
Four appraisers looked at 30 units repeatedly.
Data: C:\SixSigma/Data/CSGRR

Instructions:
Part A:
Analyze the initial data (column “assessment”) and assess the quality of the
measurement system.
If necessary, recommend improvement actions.
Part B:
The team made suggestions for improving the measurement system and used the
same parts to conduct a second attribute Gage R&R study. Analyze the data after
improvement (column “re-assessment”).

Time: 10 min

14
Attribute MSA - Example
[Chart: Assessment Agreement – panels "Within Appraiser" and "Appraiser vs Standard"; percent matched with 95.0% CI for appraisers Brian, Dawn, Linda, and William]

15
Attribute MSA - Example
Between Appraisers
Assessment Agreement

# Inspected # Matched Percent (%) 95.0% CI


30 26 86.7 ( 69.3, 96.2)

# Matched: All appraisers' assessments agree with each other.

All Appraisers vs Standard


Assessment Agreement

# Inspected # Matched Percent (%) 95.0% CI


30 26 86.7 ( 69.3, 96.2)

# Matched: All appraisers' assessments agree with standard.

16
Attribute MSA - Example –Ans. B
[Chart: Assessment Agreement after improvement – panels "Within Appraiser" and "Appraiser vs Standard"; percent matched with 95.0% CI for appraisers Brian, Dawn, Linda, and William]

17
Attribute MSA - Example –Ans. B

Between Appraisers
Assessment Agreement
# Inspected # Matched Percent (%) 95.0% CI
30 29 96.7 ( 82.8, 99.9)
# Matched: All appraisers' assessments agree with each other.

All Appraisers vs Standard


Assessment Agreement

# Inspected # Matched Percent (%) 95.0% CI


30 29 96.7 ( 82.8, 99.9)
# Matched: All appraisers' assessments agree with standard.

18
Reasons MSA Attribute Data Fails
Appraiser
• Visual acuity (or lack of it)
• Misinterpretation of the reject definitions

Appraisal
• Defect probability.
– If this is very high, the appraiser tends to reduce the stringency of the test, becoming
numbed by the sheer monotony of repetition.
– If this is very low, the appraiser tends to become complacent and to see only what he or she
expects to see.
• Fault type. Some defects are far more obvious than others.
• Number of faults occurring simultaneously. When several defects appear on the same item, the
appraiser must judge which defect category applies.
• Not enough time allowed for inspection.
• Infrequent appraiser rest periods.
• Poor illumination of the work area.
• Poor inspection station layout.
• Poor objectivity and clarity of conformance standards and test instructions.

19
Reasons MSA Attribute Data Fails

Organization and environment.


• Appraiser training and certification.
• Peer standards. Defectives are often deemed to reflect badly on coworkers.
• Management standards.
• Knowledge of operator or group producing the item.
• Proximity of inspectors.

20
Notes

• AIAG describes a “Short Method” requiring:


– 20 samples
– 2 inspectors
– 100% agreement to standard
• AIAG also has a “Long Method” for evaluating
an attribute gage against known standards
that could be measured with continuous data.

21
