Six Sigma
In Six Sigma, the concept of quality encompasses the manufacturing, commercial and other
service functions of an organization, because all these functions directly or indirectly affect
product/service quality & customer satisfaction. It also embodies a structured system of
capturing all types of errors and their quantified measurement for subsequent analysis and
improvement.
Under the new approach, quality is a state in which Value enrichment is realized for the
customer and provider in every aspect of business relationship.
Visible costs:
• Scrap
• Rework
• Warranty costs
• High inventory
Successful leaders have made Six Sigma their way of conducting business.
-Azim Premji, Chairman,
Wipro Limited
Value enrichment for the company and the customer is achieved through the measurement-based
approach of Six Sigma, which enables us to find:
Vast areas of business about which we do not know enough.
When we do not know about a parameter or a characteristic which are important to
customers, we do not value it.
If we do not value a parameter or a characteristic, we do not measure it.
If we do not measure, we can not improve.
[Figure: normal curves between the Lower and Upper Specification Limits, centred on the Mean / Target. The higher the sigma multiple, the lower the chance of producing a defect: a ±3σ process has a much higher probability of failure, while a ±6σ process produces only 3.4 Defects Per Million Opportunities]
Steering Committee
Sponsor
Black Belt
Green Belt
Team Members
Champion ( GM )
Adds value in project reviews & clears roadblocks for the teams
Has the overall responsibility for project closure & recognition of the successful project team
Must identify the project deliverables for the Green Belt in the Project Charter
Monitors progress of Green Belt projects & generates MIS for top management
Interacts with the Business / Functional Unit Head and helps in project selection
Works out the Financial Score Card along with the Sponsor / Chief Financial Officer
Up to 5 people
People who have stake in the process
People who are benefited by removal of pain area
People who have complementary skills
People from same location
Guest members
CONTROL
How can results be sustained?
Four short or long landings per day → one short or long landing every two years
16 minutes per week of unsafe water supply → 1.4 minutes of unsafe water every 5 years
[Figure: defect rates fall sharply as the sigma multiple rises — there is enormous opportunity to improve; world-class companies operate near 6σ]
Sigma Multiple   DPMO
6                3.4
5                233
4                6,210
3                66,807
2                308,537
Manufacturing
Commercial
Information Systems
Human Resources
Any other?
Process Sustenance
DEFINE
SAMIR SHAH Slide 28 Six Sigma Module1-
DMAIC Steps
Success of any Six Sigma endeavor relies heavily on project definition & selection
This is true for any project – no matter what methodology you select
A CTQ is a Customer Requirement OR an Internal CTQ / Process Requirement
Key concepts
Survey results
Service reviews
Meetings
Prioritize pain areas / project themes – Multi-voting / Theme selection matrix
9 Key Questions
Types of Scoping
Longitudinal Scoping
Lateral Scoping
e.g. – From the time of customer reporting the complaint till final satisfaction confirmation
Mostly the ‘start’ & ‘end’ points are baton change points
Points to remember:
There should not be too many baton changes within the scoped process
The pain areas (identified at the time of project selection) must be within the selected scope
Between the “Start” and “End” of the process, there should be logical flow of units
Inside the project boundary (longitudinal scope): starts after receipt of PO from the customer, ends at the despatch from the factory
Outside the project boundary (lateral exclusions): any IT related work, any transit delays, product damages
After the scoping has been done, following should be checked to validate the scope
If there are enough transactions to measure (at least 20 transactions per month are
recommended for effective measurements)
If the scoped process would still result in achieving the objectives set by the Sponsor /
Champion.
Re-scoping may be needed later after spending considerable time on the project
Fill this table with your Sponsor for the roles of each team member
Tool 2: Team Charter (Project Agreement)
Enhanced productivity
Call a Team Meeting
Present the teaming module, complete recommended exercises & develop role
clarity for each project team member
Why Bother?
A shared recognition, by both the team and the key constituents, of the need and logic of
the change sought
Dissatisfaction with status quo
Tool: Loss-Opportunity Matrix
Short Term – Loss: short-term threats if we don’t do the project? Opportunity: immediate gains if we are successful in the project?
Elevator Speech
MEASURE
A robust measurement system forms the basis of any Six Sigma project
Tool: SIPOC (Suppliers – Inputs – Process – Outputs – Customers)
Suppliers
Manage the supplier by giving clear specifications on requirements or data
Inputs
Data / unit required to execute the process
Process Boundary
Identified by the hand-off at the input (the start point of process) and the output (the end
point of the process)
Outputs
Output of a process creating a product or service that meets a customer need
In your project, output characterizes the pain area / project theme
For example, if pain area is truck loading during despatch, output measured for your
project could be time taken in loading
Customers
Users of the output
What You Think It Is... What It Actually Is... What You Would Like It To Be...
[Figure: SIPOC for a steel-making example — Supplier: BF/SIP; Input: HM/DRI; Process: Steel Making; Output: LM; Customer: Caster. Similarly for the Casting Process in SMS — Input: LM; Customer: Slab Mill]
[Figure: process-mapping symbols — operation, manual input, delay, decision — illustrated with an unloading step: if the LP tank has space, unload; if not, wait. The wait (delay) is a non-value-added activity that captures the time dimension of the process]
Enables seeing that changes are not made in a vacuum and will carry through,
affecting the entire process down the line
Helps re-examine (if needed) the scope and charter of your project
Recall that customers are better at telling you what they do not want
Discrete data
Data that can take a limited number of values
Examples
Number of orders delivered late
Continuous Data
Data that can take any value on a continuous scale
Examples
Yield of a process
Spread / Dispersion
Shape
Mean
Mean is the arithmetic average of all data points in a data set
Ȳ = ( Y1 + Y2 + Y3 + … + Yn ) / n, where n = number of data points
Mode
Mode is the most frequently occurring data point in a data set
Median
Median is the middle data point of a data set arranged in an ascending / descending order
Average
Range
Range is the difference between the maximum & minimum data point
Variance & standard deviation measure how individual data points are spread around the mean
Variance = s² = [ ( Y1 − Ȳ )² + ( Y2 − Ȳ )² + … + ( Yn − Ȳ )² ] / ( n − 1 )
Standard Deviation = s = √s²
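These location & spread measures can be sketched with the Python standard library; the data set below is a hypothetical example, not from the slides.

```python
# A minimal sketch of the location and spread measures above.
# The data set is hypothetical, for illustration only.
import statistics

data = [12, 15, 15, 18, 20, 14, 15, 19]

mean = sum(data) / len(data)          # arithmetic average
median = statistics.median(data)      # middle value of the sorted data
mode = statistics.mode(data)          # most frequently occurring value
rng = max(data) - min(data)           # range = max - min
variance = statistics.variance(data)  # sample variance, (n - 1) denominator
std_dev = statistics.stdev(data)      # s = sqrt(s^2)

print(mean, median, mode, rng)
```

Note that `statistics.variance` uses the (n − 1) denominator, matching the sample-variance formula above.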
[Figure: three curves A, B & C with the same mean but increasing spread]
Mean of Curve ‘A’ is more representative of its data set as compared to Curves ‘B’ & ‘C’
Spread outside the specifications may result in defects; this information is not
provided by mean
A symmetric data set is one in which the spread around its mean is identical on both sides
Negative / left skewed – a long tail on the left side of the mean
At 4 σ multiple, large sample sizes are required to detect changes (>1000 observations)
Opportunities for error in a process is the number of steps / tasks / actions in the
process, where there is a possibility of committing an error, that may result in a defect
This is because data, on whether or not a defect is created, is discrete type (yes / no)
Sampling
Sample
Population
Sample (‘Statistic’)                                Population (‘Parameter’)
Mean:      X̄ = ( X1 + X2 + … + Xn ) / n             µ = ( X1 + X2 + … + XN ) / N
Variance:  s² = [ ( X1 − X̄ )² + … + ( Xn − X̄ )² ] / ( n − 1 )
           σ² = [ ( X1 − µ )² + … + ( XN − µ )² ] / N
n = number of data points in the sample; N = number of data points in the population
All items in the population have an equal chance of being chosen in the sample
Example: A customer satisfaction survey team picking the customers to be contacted at random
Systematic sampling
Sample every nth one
Example : pick every 3rd or 5th item
Subgroup Sampling
Sample n units every t th time
Example : 3 units every hour
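The three sampling schemes above can be sketched as follows; the population of 100 numbered transactions & the sample sizes are illustrative assumptions.

```python
# Sketch of simple random, systematic and subgroup sampling on a
# hypothetical population of 100 numbered transactions.
import random

population = list(range(1, 101))  # transactions numbered 1..100

# Simple random sampling: every item has an equal chance of selection
random.seed(42)  # fixed seed so the sketch is repeatable
simple = random.sample(population, 10)

# Systematic sampling: pick every 5th item
systematic = population[::5]

# Subgroup sampling: n = 3 units from each hourly batch of 20
subgroups = [population[start:start + 20][:3] for start in range(0, 100, 20)]

print(len(simple), len(systematic), len(subgroups))
```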
Bias occurs when systematic differences are introduced into the sample as a result of the sample
selection process
A biased sample would not adequately represent the population & would lead to incorrect conclusions
about the population
Convenience bias - when sample is drawn from the most easily accessible part of population
Environment bias - when conditions have changed from the time sample was drawn to the
time sample was used to draw conclusions
Fresh data should be collected to ensure that the latest process trend is studied
Historical data may have measurement errors which would be validated in next step of DMAIC
In such cases, Champion / sponsors may review the project, address the ‘discipline’ issues &
decide whether project needs to collect another sample or gets abandoned here
Business criteria to select a sample size include cost, time & effort
Statistical criteria include the accuracy of the sample representing the population
Higher the sample size, better the accuracy of the information about the population
parameters ( µ & σ )
[Figure: sampling distributions of the mean — Z1 with n = 25, Z2 with n = 16, Z3 with n = 4; larger samples give narrower distributions]
n = ( Z(1 − α/2) × σ / tolerance )²
Let’s take a normal pack of cards. It has got 52 cards. The average of all cards is 7 & the standard
deviation is 3.78. Now if I want to take a sample of a few cards & want their mean to be within ± 2, i.e.
between 5 & 9, how many cards should I take?
Tolerance = 2, σ = 3.78, assume α = 0.05
n = ( Z(97.5) × 3.78 / 2 )²
From Appendix 1, Z(97.5) = 1.96
n = ( 1.96 × 3.78 / 2 )² ≈ 14
That means 95% of the samples of size 14 will have their mean between 5 & 9
We know that the population standard deviation of runs scored by a cricket player is
25. Now, we want to collect a sample that can estimate the career average within
± 5 runs tolerance with 99% confidence. What should be the sample size?
Tolerance = 5, σ = 25, α = 0.01
n = ( Z(99.5) × 25 / 5 )²
From Appendix 1, Z(99.5) = 2.576
n = ( 2.576 × 25 / 5 )² ≈ 166
n = ( 1.96 × σ / tolerance )² for Continuous Data (at 95% confidence)
Extending the same logic, we can find out the sample size required while dealing with a discrete
population
If the average population proportion non-defective is at ‘p’, the population standard deviation can be
calculated as
σ = √( p ( 1 − p ) )
n = ( 1.96 / tolerance )² × p ( 1 − p ) for Discrete Data
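The sample-size formulas above can be sketched in Python using the standard library's inverse normal CDF; the two worked examples (cards & cricket) are reproduced below.

```python
# Sample-size calculator for the formulas above, rounded up to the next
# whole unit. Uses only the Python standard library.
import math
from statistics import NormalDist

def sample_size_continuous(sigma, tolerance, confidence=0.95):
    """n = ( Z(1 - alpha/2) * sigma / tolerance )^2, rounded up."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil((z * sigma / tolerance) ** 2)

def sample_size_discrete(p, tolerance, confidence=0.95):
    """Discrete case: sigma = sqrt( p * (1 - p) )."""
    return sample_size_continuous(math.sqrt(p * (1 - p)), tolerance, confidence)

print(sample_size_continuous(3.78, 2, 0.95))  # cards example -> 14
print(sample_size_continuous(25, 5, 0.99))    # cricket example -> 166
```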
A robust measurement system forms the basis of any Six Sigma project
Step 1 of DMAIC
Design of the measurement system
Measurement system for ‘Y’ indicates that this step deals with the accuracy of
defect measurement
A country preacher was walking the back-road near a church. He became thirsty so
decided to stop at a little cottage and ask for something to drink. The lady of the house
invited him in and in addition to something to drink, she served him a bowl of soup by the
fire. There was a small pig running around the kitchen. The pig was constantly running up
to the visitor and giving him a great deal of attention. The visiting pastor commented that
he had never seen a pig this friendly. The housewife replied: "Ah, he's not that friendly.
[Figure: the observed process variation between LSL & USL includes measurement variation — the measured sigma of the process]
Obtain information about the type of measurement variation associated with the
measurement system
Measurement error is a statistical term meaning the net effect of all sources of
measurement variability that cause an observed value to deviate from the true value
Both process and measurement variability must be evaluated and improved together
If we work on process variability first and our measurement variability is large, we can
never conclude that the improvement made was significant, or correct
Measurement System Variation
[Figure: accuracy vs precision — an accurate system centres on the true value; a precise system repeats closely]
[Figure: Gage R&R study layout — six parts / conditions, each measured twice (trial readings #1 & #2) by each operator, usually 3 operators. Differences between operators lead to reproducibility; differences between trials by the same operator lead to repeatability]
Short method does not measure operator & equipment variability separately; it gives the
total measurement system variation without separating the equipment & operator variation
(repeatability & reproducibility)
Long method measures operator & equipment variability separately, but does not measure the
combined effect
ANOVA method measures operator & equipment variability separately, with the combined effect
as well, which better defines causality
However, time, resource & cost constraints may need to be looked into
We can use the same data in the previous example with ANOVA method.
Minitab gives the following output:
Source DF SS MS F P
GRR as a % of Tolerance
However, discretion may be needed depending upon application of the process / equipment
You should not proceed to next DMAIC step. Simplify process / explore root cause
If none of the above criteria are met, do not proceed to the next step
Improve
ANALYZE
Measure
Define
[Figure: defects occur where the process distribution crosses the customer specifications — both its location (mean) & its spread matter]
Defects per Unit (DPU)
It is a count of all defects present in a unit, not of how critical each defect is
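DPU & DPMO can be sketched as follows; the invoice counts & the number of opportunities are hypothetical.

```python
# Sketch of defect counting: DPU counts all defects per unit, while DPMO
# scales by the number of error opportunities per unit.
def dpu(total_defects, units):
    """Defects per unit: all defects counted, regardless of criticality."""
    return total_defects / units

def dpmo(total_defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return total_defects / (units * opportunities_per_unit) * 1_000_000

# e.g. 30 defects found in 500 invoices, 12 opportunities per invoice
print(dpu(30, 500))       # 0.06 defects per unit
print(dpmo(30, 500, 12))  # 5000 DPMO
```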
Tool: Normal Distribution
Applicability to many situations where given the population knowledge, we need to
predict the sample behavior
[Figure 3.02: the normal distribution — a bell curve over values 70 to 130, centred at 100]
Area under the normal curve around the mean µ:
± 1σ : 68.26%
± 2σ : 95.46%
± 3σ : 99.73%
± 4σ : 99.9937%
± 5σ : 99.99943%
± 6σ : 99.999998%
[Figure: a family of normal curves with means µ1, µ2, µ3 mapped onto the standard normal scale from −3 to +3]
Instead of dealing with a family of normal distributions with varying means & standard
deviations, a standard normal curve standardizes all the distributions with a single curve
that has a mean of 0 & standard deviation of 1
Z = ( Y − µ ) / σ
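The standardization above, & the area-under-the-curve percentages quoted earlier, can be sketched with the standard library's NormalDist.

```python
# Sketch of standardization, Z = (Y - mu) / sigma, and the coverage
# within +/- k sigma computed from the standard normal CDF.
from statistics import NormalDist

def z_score(y, mu, sigma):
    """Standardize a value against a normal distribution."""
    return (y - mu) / sigma

std = NormalDist()  # standard normal: mean 0, standard deviation 1

# Fraction of the population within +/- k sigma of the mean
for k in (1, 2, 3, 6):
    coverage = std.cdf(k) - std.cdf(-k)
    print(f"+/-{k} sigma: {coverage:.6%}")

print(z_score(110, 100, 10))  # one sigma above the mean -> 1.0
```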
[Figure: ±3σ vs ±6σ processes — the higher the sigma multiple, the lower the chance of producing a defect; a ±6σ process has a much lower probability of failure, 3.4 Defects Per Million Opportunities]
[Figure: samples taken at times 1, 2 & 3 shift around the target T within LSL & USL — together they constitute the long term performance]
It is the capability or the potential performance of the process, in control at any point of time
Over time, a typical process will shift by approximately 1.5 standard deviations
In other words, long term variation is typically 1.5 standard deviations more than the
short term variation
This difference is called the Sigma shift, which is an indicator of process control
This shift could be due to different operators, raw material, wear & tear, time, etc.
Discrete data Z values as studied in the previous session have been adjusted for shift
Long term performance adjusted by a factor of 1.5 gives short term capability: ZST = ZLT + 1.5
Random Systematic
Added variation due to factors external to the usual process (abnormal variation)
Large data collected over time
Special causes (different operators, raw material, wear & tear) lead to increase in variation
Special causes need to be identified & corrected for improvement
Long term variation is always greater than the short term variation
ZST = 3 × CP
CP relates the process short term variation with the customer specification limits
For example, CP = 2 means the process can fit 12 standard deviations between USL & LSL
[Graphs 1 & 2: two processes with specification limits 20 & 140]
Even though almost 40% of the data is outside specification limits in graph 2, it is
still a Six Sigma capable process; however, it is not performing to its potential
CPU = ( USL − Ȳ ) / ( 3 sST )        CPL = ( Ȳ − LSL ) / ( 3 sST )
It considers the data centering & forces the mean to be between the specifications
CP ≥ CPK
PPU = ( USL − Ȳ ) / ( 3 sLT )        PPL = ( Ȳ − LSL ) / ( 3 sLT )
It is similar to CPK except that it uses the long term standard deviation
PPK enables ZLT computation for both one sided & two sided specifications
Due to limitations of multiple shift factors & CP, process sigma multiple calculations
for continuous data start from PPK
[Histogram: frequency of time taken in delivering pizza (35–60 minutes), used to compute ZST]
Expected DPMO values are different from the observed DPMO because expected
values are calculated as per the fitted normal probability distribution
DPMO of the process is 25979, as per the expected long term performance
For example, you can classify late pizza deliveries in terms of outlets from where
the late deliveries were made to increase focus
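Converting an observed long-term DPMO into a sigma multiple can be sketched as follows, using the conventional 1.5σ shift; the pizza example's expected DPMO of 25,979 is reproduced.

```python
# Sketch: long-term DPMO -> Z_LT via the inverse normal CDF, then the
# conventional 1.5-sigma shift gives the short-term sigma multiple Z_ST.
from statistics import NormalDist

def sigma_multiple(dpmo, shift=1.5):
    """Return Z_ST = Z_LT + shift for an observed long-term DPMO."""
    z_lt = NormalDist().inv_cdf(1 - dpmo / 1_000_000)
    return z_lt + shift

# The pizza-delivery process above: expected DPMO of 25,979
print(round(sigma_multiple(25_979), 2))

# Sanity check: 3.4 DPMO long term corresponds to a 6-sigma process
print(round(sigma_multiple(3.4), 2))
```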
If the existing process has to be improved, then the improvement goal should be
chosen only after proving statistically that it can be achieved only due to change, &
not by noise
However, project teams must not get over-ambitious; an entitlement study should
be completed first
Tool: Test of Hypothesis
Suppose team management wants to see if the Indian cricket team’s performance
has improved after they have recruited a new coach. Is there an
improvement that can be proven statistically?
What does the management need to do? It basically needs to make an assumption
about the efficiencies of the two coaches A & B, & test it for significance
When a person is being prosecuted for a crime, the judge hears the proceedings
assuming that the person has committed no crime
In other words, the person is non-guilty till proven otherwise, i.e. status quo
Considering the previous example, null hypothesis is that the two coaches have the
same efficiency (i.e. no difference in efficiencies till proven otherwise)
The prosecutor believes in the alternative hypothesis & gives proofs to substantiate it
Considering the previous example, alternative hypothesis is that coach B has a higher
efficiency than coach A (i.e. there is a difference in efficiencies)
One of the key steps in setting an improvement goal is to prove that the targeted /
benchmarked performance is truly performing at a higher level (different population)
In other words, a project team can set a target hypothesizing that this would be
really an improved performance level
Team must test this hypothesis statistically, otherwise it may end up setting either
too easy or too stiff a target
Hypothesis testing
Discrete data → χ² test
Continuous data, variance → F test
Continuous data, mean, comparison of two, σ known → Z test
Continuous data, mean, comparison of many
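The "comparison of two means, σ known → Z test" branch above can be sketched as follows; the coaching data & the known σ are hypothetical.

```python
# A minimal two-sample Z test (population sigmas known) for
# H0: mu1 == mu2. The data below are hypothetical batting averages.
from statistics import NormalDist

def two_sample_z(mean1, mean2, sigma1, sigma2, n1, n2):
    """Return (z, two-sided p-value) for H0: mu1 == mu2."""
    se = (sigma1 ** 2 / n1 + sigma2 ** 2 / n2) ** 0.5  # standard error
    z = (mean1 - mean2) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))             # two-sided p-value
    return z, p

# e.g. averages under coach A vs coach B, sigma known to be 25, 40 innings each
z, p = two_sample_z(42.0, 51.0, 25, 25, 40, 40)
print(round(z, 2), round(p, 4))  # reject H0 at alpha = 0.05 only if p < 0.05
```

Here p is above 0.05, so the null hypothesis (no difference between coaches) would not be rejected on this hypothetical data.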
Sometimes, at this step, project teams may set aggressive targets to justify the
project selection
However, one must understand that such steps may result in abandoning the
project half-way
Worse still, it sets a bad precedent & de-motivates the team members
Knowledge about the true behavior of the process is limited at the time of project
selection & hence, benefits expected may be grossly approximated
After setting the improvement target, project team must arrive at an accurate
estimate of benefits with Finance & take the sign-off from Champion to proceed
Sometimes, project teams may need to re-visit step 4.2 depending upon the
Champion’s expectations
A (Analyze)
Step 4: Define Performance Goals
Step 5: Identify Variation Sources
Step 6: Explore Potential Causes
I (Improve)
Step 7: Establish Variable Relationship
Step 8: Design Operating Limits
Step 9: Validate Measurement System for ‘X’
C (Control)
Step 10: Verify Process Improvement
Step 11: Implement Process Controls
Means & Ends of a Process
Y: Dependent, Effect, Symptom, Monitor
X: Independent, Cause, Problem, Control
Unit
Specifications
Defects
Baseline
Target
In other words, defect observed in output ‘Y’ is due to some ‘X’ not being controlled
In this step, we identify the factors that contribute to variation in the process output ‘Y’
Another objective of the step is to separate the vital few X’s from trivial many
Identifying & prioritizing X’s could be done using both non-statistical & statistical tools
ANOVA
Failure Mode & Effect Analysis (FMEA)
5.2 Separate vital few X’s from trivial many for further screening
[Fishbone diagram: causes on the backbone — wrong customer details on invoice, no credit card verification, foul order]
[Pareto chart of reasons for arriving late — e.g. bus not come on time, woke up late, clothes not ready, breakfast not ready, traffic jam — bars ordered by count, with the cumulative-percent line above]
Count     25     18     15     6     5     2
Percent   35.2   25.4   21.1   8.5   7.0   2.8
Cum %     35.2   60.6   81.7   90.1  97.2  100.0
Pareto diagram can be used even in step 3 when we establish the process baseline
If a process has multiple defect definitions, project teams can use the Pareto to see
where to focus first for defect reduction, & set improvement targets accordingly
Even for single defect definitions, if there are multiple defect categories (different
products, different geographies, etc.), Pareto could be useful
For example, if late delivery for products is the defect definition, one can use the Pareto to
see if the frequency of late delivery is higher in product A, B or C, & focus accordingly
Depending upon the data characteristics of Y & X, we can choose the appropriate tool:
Y continuous, X continuous → Correlation & Regression
Y continuous, X discrete → ANOVA
Y discrete → Chi-square (also identify opportunities for converting ‘Y’ into a continuous one)
Correlation
Positive value of ‘r’ means direction of movement in both variables is same
Negative value of ‘r’ means direction of movement in both variables is inverse
Zero value of ‘r’ means no correlation between the two variables
Higher the value of ‘r’, stronger the correlation between ‘Y’ & ‘X‘
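Pearson's r can be sketched directly from its definition; the data sets below are hypothetical & chosen to show r = +1 & r = −1.

```python
# Pearson correlation coefficient r, computed from the definition:
# covariance over the product of the deviation sums of squares.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]          # perfectly linear, same direction
print(pearson_r(x, y))        # 1.0
print(pearson_r(x, y[::-1]))  # -1.0 (inverse direction)
```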
[Scatter plots: r = 0.05 (no visible pattern), r = 0.50 (weak positive), r = 0.95 (strong positive), r = − 0.95 (strong negative)]
Correlation measures the linear association between the output (Y) and one
input variable (X) only
While correlation tells us only about the direction of movement, it does not
throw much light on the degree of movement in one variable with respect to
movement in another
Regression of ‘Y’ on ‘X’ results in a transfer function equation that can be used to
predict the value of ‘Y’ for given values of ‘X’
Y = f(X)
Regression
‘Y’ can be regressed on one or more X’s simultaneously
Simple linear regression is for one X
Multiple linear regression is for more than one X’s
Regression by subsets is to choose the best model when there are many X’s
Polynomial regression is to explore the curvilinear relationship between variables
A simple linear regression equation is nothing but a fitted linear equation between
‘Y’ & ‘X’ that looks as follows:
Y = A + BX + C
If ‘Y’ & ‘X’ are not perfectly linear (r = ± 1), there could be several lines that could be
fitted
[Figure: several candidate lines could be fitted through the same points, each leaving errors between the points & the line; the Y-intercept is where the line meets the Y axis]
Minitab fits the line which has the least value of errors squared & added
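That least-squares fit can be sketched as follows: the line minimizing the sum of squared errors, with R² computed from the residuals. The data points are hypothetical.

```python
# Least-squares fit: slope and intercept that minimize the sum of squared
# errors, plus R-squared. Data points are hypothetical.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                  # slope
    a = my - b * mx                # intercept
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r_sq = 1 - ss_res / ss_tot     # proportion of variation explained
    return a, b, r_sq

x = [1, 2, 3, 4, 5, 6]
y = [4, 9, 10, 14, 17, 20]
a, b, r_sq = fit_line(x, y)
print(a, b, round(r_sq, 3))
```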
Regression Analysis
The regression equation is
Analysis of Variance
Source DF SS MS F P
Regression 1 200.00 200.00 19.05 0.012
Residual Error 4 42.00 10.50
Total 5 242.00
For a targeted profit of INR 32 Million, R&D could be budgeted for INR 6 Million
The R-Squared value is the proportion of variability in the Y variable accounted for by
the predictors. In other words, 82.6% of variation in ‘Y’ is explained by ‘X’ in the fitted
model
If you add another variable X2 to the model, the value of R² will increase even though
X2 may not add any value to the model. Minitab adjusts this with the R²(adj) value so that
dummy variables are accounted for, if added
R2 values of 70% & above denote a good relationship between ‘Y’ & ’X’ &
that the respective ‘X’ should be further studied
ANOVA table shows the P-value for the regression model which is less than 0.05
indicating that fitted model is good enough
It is similar to Two-way ANOVA we have discussed in step 4, except for the difference
that X’s used in ANOVA were discrete
The approach is similar & a linear multiple regression equation looks as follows:
Y = A + B1X1 + B2X2 + C
Project teams get tempted to extrapolate the results beyond the range of collected data. In
one of the previous examples, higher R&D expenditures may not see same increase in
profits, & hence, the regression equation between profit & R&D expense may change
[Figure: profit vs R&D expense — the fitted relationship holds within Range 1 (the collected data) but may change in Range 2]
[Figure: number of schools vs population & incidents of crime vs population — both rise with population, so schools & crime appear correlated without any causal link]
Regression assumes that all X’s are mutually independent variables. If one of the X’s
depends on another X, it may result in a good regression model (ANOVA P-value), but P-
values of the regression coefficients may be insignificant
That means a model with ‘Y’ on any of these X’s will be good, but not when both the X’s
are included in the model
ANOVA
ANOVA Example
A restaurant puts great emphasis on customer satisfaction. For some weeks, the
ratings seemed to suffer & the manager tried to identify the factors that could be
causing this. He chooses one of the potential factors as ‘team’ that serves the
customer. Team 1 takes care of the Chinese cuisine while team 2 serves the Indian
food. Is the type of team a potential source of variation in satisfaction ratings?
In the previous example, the root cause may be that Chinese & Indian foods are
prepared by two different chefs, satisfaction ratings for two teams are just an
indicator of the variation. It may not mean that team 2 is more efficient than team 1
When you use multiple X’s in ANOVA, they have to be mutually independent
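The one-way ANOVA F statistic for the restaurant example can be sketched as follows; the ratings themselves are hypothetical.

```python
# One-way ANOVA sketch: is the variation between group means large relative
# to the variation within groups? Ratings below are hypothetical.
def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of sample groups."""
    all_data = [y for g in groups for y in g]
    grand_mean = sum(all_data) / len(all_data)
    # Between-group sum of squares: group means vs the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: observations vs their own group mean
    ss_within = sum((y - sum(g) / len(g)) ** 2 for g in groups for y in g)
    df_b = len(groups) - 1
    df_w = len(all_data) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

team1 = [8, 7, 9, 8, 8]  # satisfaction ratings, Chinese cuisine
team2 = [6, 5, 6, 7, 6]  # satisfaction ratings, Indian food
f, df_b, df_w = one_way_anova([team1, team2])
print(round(f, 2), df_b, df_w)
```

A large F (compared against the F distribution's critical value for the given degrees of freedom) would flag 'team' as a potential source of variation, subject to the causality caveat above.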
FMEA
FMEA Concept & Output
[FMEA linkages: the Effect of a failure mode is rated for Severity, & the Control for Detectability]
Severity is an assessment of the seriousness of the effects of the failure mode on the
customer
9 Very High: Potential failure mode affects safe operation and/or non-compliance
with regulations
Occurrence is the probability that a specific cause will result in the particular failure
mode
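Severity, occurrence & detectability are commonly combined into a Risk Priority Number (RPN = severity × occurrence × detectability, each scored 1–10) to rank failure modes; a sketch with hypothetical failure modes:

```python
# Sketch of the usual FMEA Risk Priority Number. The failure modes and
# their scores below are hypothetical, for illustration only.
def rpn(severity, occurrence, detectability):
    """RPN = severity * occurrence * detectability, each scored 1-10."""
    return severity * occurrence * detectability

failure_modes = [
    ("Wrong customer details on invoice", 7, 4, 3),
    ("No credit card verification",       9, 2, 5),
    ("Late despatch from factory",        5, 6, 2),
]

# Prioritize: highest RPN first
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for name, s, o, d in ranked:
    print(name, rpn(s, o, d))
```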
Even though project teams may identify potential X’s using Brainstorming / Fishbone,
& use Regression / ANOVA / Chi-square to prioritize potential X’s, they may still end
up with X’s that explain the variation in ‘Y’, but do not really cause that variation
The real output of this step is to short-list potential X’s that may have a causal
relationship with ‘Y’, because a relationship between ‘Y’ & ‘X’ is a necessary but not
sufficient condition for cause & effect
If an ‘X’ does not explain variation in ‘Y’, it should not be explored further; it is one of
those trivial many X’s the project team would have identified
A good job done in this step reduces the work in further steps
We check for causation in the next step of Analyze module through experimentation
techniques
IMPROVE
Y: Dependent, Effect, Symptom, Monitor
X: Independent, Cause, Problem, Control
Brainstorming
FMEA
[Figure: exploration of the y-x relationship screens twelve candidate X’s (x1 … x12) down to the vital few — x7 = 38%, x6 = 27%, x8 = 13%, x2 = 12% — leaving the trivial many (x9 = 4%, x10 = 4%, x5 = 2%) & the rest in σ error]
Fishbone
Definition
Origin
Strategy of DOE
Define the Problem
Establish the Objective
Select the Response (Y)
Select the Factors (X’s)
Choose the Factor Levels
Select the Experimental Design
Collect the Data
Analyze the Data
Draw Conclusions
Run Additional Experiments, if necessary
Achieve the Objective
Experimental design is more than just analyzing data. It is a structured process for
achieving an objective
DOE
Experiment Design
Factors
Response
Design
Level
Experiment Design
The formal plan for conducting the experiment is called the “experiment design” (also the
“experiment pattern”)
It includes the choices of the responses, factors, levels, blocks, and treatments and the
use of certain tools called planned grouping, randomization, replication
Factors
A factor is one of the controlled or uncontrolled variables whose influence upon the
response is being studied in the experiment. Factors are also known as the X’s
A factor may also be qualitative, e.g., different machines, different operator, clean or not
clean
Response
The measured characteristic used to quantify the result of a combination of factors at given
levels. The response will be one of the Y’s
Design (Layout)
Complete specification of experimental test runs including blocking, randomization,
replications, repetitions, and assignment of factor-level combinations to experimental units
Level
The “levels” of a factor are the values of the factor being examined in the experiment. For
quantitative factors, each chosen value becomes a level, e.g., if the experiment is to be
conducted at two different temperatures, then the factor temperature has two “levels”. In a
qualitative factor, the single factor “cleanliness” has two levels: clean and not clean.
This approach is changing one factor at a time, keeping the other factors constant
Consider an example of a car where its fuel efficiency is dependent on the speed
(Km / Hr) & coolant level (ml / Liter) for the engine
First, experiment runs are made by keeping the speed constant at 155 & varying
the coolant level
[Plot: efficiency (40–80%) vs coolant level (0–3) — the optimum is slightly to the right of 1.5]
Now, experiment runs are made by keeping the coolant constant at 1.67 & varying
the speed
[Plot: efficiency (40–80%) vs speed (130–190) — still about 78% efficiency; what is the optimum level then?]
One-at-a-time vs. planned experimentation
Interaction is defined as the effect of one factor ‘X1’ on the ‘Y’ being dependent on which level of
another factor ‘X2’ is chosen
For example, at low speeds, fuel efficiency may not get affected by low coolant level. But, at
high speeds, if coolant is low, fuel efficiency may come down drastically
No objective
A well-defined experimental objective is similar to a good problem statement. It is developed
using standard problem solving tools such as problem statement, cause and effect diagrams
and root cause analysis. An experiment in Step 6 should not encounter this difficulty if the
first five steps of DMAIC have been performed correctly
Screening, Characterization & Optimization designs include:
2^K Factorial
Fractional Factorial
Full Factorial
Multi-level Experiments
Composite Designs
In industry, two-level full and fractional factorial designs are often used to “screen”
for the really important factors that influence process output measures or product
quality. These designs are useful for fitting first-order models (which detect linear
effects), and can provide information on the existence of second-order effects
(curvature) when the design includes center points.
Screening experiments can be used to quantitatively separate the vital few X’s from
the trivial many X’s. These types of experiments are also known as exploratory
experiments
Optimization experiments seek the level(s) of the vital few X’s that will optimize the
performance of the y, with respect to targeting, variability reduction, or both
[Graphical illustration: the 2² design as the four corners of a square — factor A (low, high) on one axis & factor B (low, high) on the other]
Tabular illustration:
Trial  A  B
1      −  −
2      +  −
3      −  +
4      +  +
[Graphical illustration: the 2³ design as the eight corners of a cube in factors A, B & C, each at low & high levels]
Tabular illustration:
Trial  A  B  C
1      −  −  −
2      +  −  −
3      −  +  −
4      +  +  −
5      −  −  +
6      +  −  +
7      −  +  +
8      +  +  +
[Figure: the four design points of a 2² design in coded units — (−1, −1), (+1, −1), (−1, +1) & (+1, +1) — at the LOW & HIGH levels of factors X1 & X2]
If we add all the first numbers in the parentheses ( ΣXi ), the sum is ZERO; likewise for the
second numbers – balance
If we multiply the first & second numbers in each pair of parentheses & add them all up ( ΣXiXj ),
the sum is ZERO – orthogonality
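The balance & orthogonality properties can be checked programmatically; a sketch that generates a 2^k full factorial in coded units:

```python
# Generate a 2^k full factorial design in coded (-1/+1) units and verify
# the balance and orthogonality properties: each column sums to zero, and
# every pair of columns has zero dot product.
from itertools import product, combinations

def full_factorial(k):
    """All 2^k runs as tuples of -1/+1 factor levels."""
    return list(product((-1, 1), repeat=k))

design = full_factorial(3)  # the 8-run 2^3 design
print(len(design))

# Balance: each factor column sums to zero
for j in range(3):
    assert sum(run[j] for run in design) == 0

# Orthogonality: every pair of columns has zero dot product
for i, j in combinations(range(3), 2):
    assert sum(run[i] * run[j] for run in design) == 0
```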
Repetition
This is running the experiment twice on each trial combination, without changing the setting,
i.e. no other run in between
Replication
This is running the experiment twice on each trial combination, but with a change of setting,
i.e. some other run in between
Replicates should be used in 2² & 2³ designs since the number of trials is less
Blocking
A blocking variable is a factor whose levels are used in the experiment, but, effect on
response is not studied
Randomization
Runs are made in random order as opposed to a standard order to avoid lurking variables
that change over time
Repetition & Replication address the issue of experimental error by taking multiple
readings on one setting
Often in a process, there are factors which may have an effect on the response but
are either unknown, uncontrollable, or of no interest to the experimenter. If ignored,
these factors can confound the results and produce erroneous conclusions. With
proper experimental planning, these types of factors can be accounted for with a
minimal effect on the number of trials performed. Randomization addresses the
issue of unknown factors
Nuisance factors that can be classified can be eliminated using a blocked design.
For example, an experiment may be carried out over several days with large
variations in temperature and humidity, or data may be collected in different plants,
or by different technicians. Observations collected under the same experimental
conditions are said to be in the same block. In the above case, one plant could be one
block; even though a block is similar to a factor level, its effect is not studied in the experiment
2. Look at the p-values for the terms in the model. p-values should generally be less
than the chosen significance level (typically 0.05)
5. Form two lists, one for significant factors / interactions (vital few) and one for non-
significant factors / interactions (trivial many)
6. Estimate the prediction equation (transfer function) to assess the magnitude and
direction of change in ‘Y’ as a function of ‘X’
7. Use the results from the analysis and develop another design, if necessary
Response being measured is the left-out 'dirt' content in the clothes, measured through
a standard evaporation procedure
Each trial can be replicated a number of times to provide an estimate of the error in
the experimental process
Experiment is run as per the randomized run order as suggested by Minitab &
following response is observed
[Cube plot of the 2³ design: Temperature, Time (Short = -1 / Long = +1) & Concentration (Lower = -1 / Higher = +1) on the three axes, with the replicated dirt responses observed at the eight corners]
STAT > DOE > FACTORIAL PLOTS > Main effects plot
[Main effects plot of mean Dirt vs. Temp, Time & Conc: temperature shows the steepest slope, time a moderate slope & concentration an almost flat line]
It’s clear that temperature has the greatest effect, time has a moderate effect &
concentration has the least effect
It’s easy to conclude that concentration does not have much effect on dirt which
may sound a bit strange. But remember this observation is true only within the
range of values used in the experiment. For some other levels of concentration,
time & temperature, one might see a different result altogether
Therefore, factor levels should be chosen as per the normal operating conditions
because the process is going to be run under those conditions only
Project teams must question those results that defy logic, & try to re-experiment
with different levels
[Interaction plots of mean dirt against Temperature (coded -1/+1) for each level of Time & of Conc: the Temp*Time lines are clearly non-parallel, while the Temp*Conc lines are nearly parallel]
Temperature & time have a significant interaction because the change in response across
levels of temperature is not the same for different levels of time
It’s easy to find an interaction by just looking at the two lines; if they are parallel, no
interaction
We can now develop the coded prediction equation for the response taking into
account only the significant effects
Since the prediction equation starts at the grand average level, all effects are divided
by 2
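A minimal Python sketch of how the effects & the coded prediction equation are formed — the data below are hypothetical, not the dirt readings from this example:

```python
# Hypothetical 2^2 data set: (coded A, coded B, response).
# These are NOT the dirt readings from the slides, just illustrative numbers.
runs = [(-1, -1, 52), (+1, -1, 44), (-1, +1, 50), (+1, +1, 42)]

grand_avg = sum(y for _, _, y in runs) / len(runs)  # 47.0

def effect(col):
    """Main effect = avg response at the high level - avg at the low level."""
    hi = [r[2] for r in runs if r[col] == +1]
    lo = [r[2] for r in runs if r[col] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effect_a, effect_b = effect(0), effect(1)  # -8.0 and -2.0

# Coded prediction equation: each effect is halved because the prediction
# starts at the grand average (coded 0) and a full effect spans -1 to +1.
def predict(a, b):
    return grand_avg + (effect_a / 2) * a + (effect_b / 2) * b

print(predict(+1, -1))  # prediction at A high, B low → 44.0
```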
To improve the cleaning efficiency, dirt should be lower. Hence, higher temperature
should be used. We have seen that at higher temperature, time does not matter, &
hence time could be reduced on a practical basis
Concentrate on vital few X’s to optimize their values such that you achieve desired
response
‘Y’ by using a 2-level optimization design in the next step of DMAIC
Vital few X’s were identified & characterized in step 6. Since Y = f(X) is known from
step 6, the X’s in the transfer function are the vital ones. Step 7 attempts to
establish the levels of these X’s that provide the desired improvement in ‘Y’
Analysis of the transfer function reveals whether an increase in 'X' increases
or decreases the output 'Y'
For a target value of ‘Y’, the required level of ‘X’ can be determined analytically
Focus of step 7 is on optimization experiments. After choosing the level of ‘X’ (to
give the target value of ‘Y’), the process is checked to affirm the result
Even when a transfer function is well-defined, there are typically still trivial X’s
affecting the process. The aggregate level of their effect should be determined
If there are only a few discrete levels of a vital 'X', then the best level would have
been identified in the previous step
Since the optimum level for each of the vital X’s is explicitly identified, it is only
necessary to confirm that the predicted result occurs
In the case of continuous X’s, it may be necessary to explore additional levels of the
vital X’s, in order to gain a better estimate of transfer function f(X)
Once the level of each ‘X’ is chosen, it is always a good idea to verify the function.
Confirmation runs should be made to verify that the predicted value of ‘Y’ occurs
The error in the transfer function should be estimated to affirm that the error is
sufficiently small to achieve the goal set . If it is not, then additional X’s may be
necessary to create an enhanced transfer function
[Roadmap labels: Characterization; Screening & Optimization]
Once the vital few factors have been identified, a sequential series of experiments
can be used to determine the optimum factor combination
If additional runs are needed to help estimate the transfer function, DOE can be used
again
One should know how to use experimental designs to determine the factor settings
which will produce optimal results
One should know common types of experimental designs which use more than two
factor levels
3 level optimization – find the best place to operate when you are already in
the right area
[Roadmap: Performance goal on 'Y' → Screening DOE (2^k, 2^(k-p))]
[Plot of the 2² design region: Time 30-40 on the horizontal axis, Temperature 150-160 on the vertical axis; corner yields 40.0 & 41.5 at Temperature 160, 39.3 & 40.9 at Temperature 150, with center-point trials yielding 40.2, 40.3, 40.5, 40.6 & 40.7]
A single trial at the center point provides insight into the non-linear response
Are the current operating conditions optimum? If not, which direction provides
improvement?
‘Analyze Factorial Design’ can be used to see the direction of linear improvement,
(provided we had used Minitab to create the design)
Screening DOE output is as follows for the cleaning example which signifies that
the fit is significant & the factors are important
Tool 30
Yield = 40.44 + 0.325*Temp + 0.775*Time
We usually take the center of the design region as the origin for the path of
steepest ascent. Steps along the path are then chosen in proportion to the signs
and magnitudes of the regression coefficients: the variable with the largest
coefficient defines the base step, and the steps for the other variables are
fractions of it
Since time has the largest regression coefficient, it takes the base step size
of 5 (center of the design region to its face)
Time Temperature
35 155.0
40 157.1
45 159.2
50 161.3
55 163.4
60 165.5
65 167.6
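The path above can be reproduced from the fitted coefficients. A Python sketch, with the base step of 5 on time and the center of the design region at Time = 35, Temperature = 155:

```python
# Path of steepest ascent from the fitted model
#   Yield = 40.44 + 0.325*Temp + 0.775*Time
# Time has the largest coefficient, so it takes the base step of 5 units;
# temperature moves in proportion to its coefficient.
coef_time = 0.775
coef_temp = 0.325

base_step_time = 5.0
step_temp = base_step_time * coef_temp / coef_time  # about 2.1

time, temp = 35.0, 155.0  # center of the design region
for _ in range(7):
    print(f"Time {time:5.1f}  Temperature {temp:6.1f}")
    time += base_step_time
    temp += step_temp
```

This reproduces the table of (Time, Temperature) steps shown above, to rounding.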
Go as long as you see improvement, but take care of the process noise
If there is a practical barrier (time / cost / skill) on a factor level, keep increasing the
level of another factor
A second 2² experiment could be run with one of the previous corner points
as a base point
Tool 31
The original design is shown as solid lines and this second simplex is shown as
dashed
[Simplex Method: the same design-region plot (Time 30-40, Temperature 150-160, with the corner yields 40.0, 41.5, 39.3, 40.9 & the center-point yields), with the second simplex drawn in dashed lines from the best corner]
You may choose to replicate / run the experiment again at the (40, 160) setting
[Roadmap: Performance goal on 'Y' → Screening DOE (2^k, 2^(k-p))]
After the improvement region has been located, we want to find the ideal operating
values
Since the screening designs / 2 level optimization designs only model linear surfaces,
an augmented design is needed to quantify the curvature of the surface
If there is still too much noise in the process not leading to satisfactory sigma level
of the process as targeted in step 4, most significant trivial X’s from step 5 must be
included in the model & steps 6-7 be repeated
[Before & after Pareto charts of the X's: the vital X's (x7 = 38%, x6 = 27%, x2 = 12%, x9 = 4%, x10 = 4%, x5 = 2%) dominate the response, while the trivial X's (x1, x3, x4, x8, x11, x12) fall within σerror; after optimization, the same split holds but σerror is smaller]
For the purpose of this CTQ, setting the vital X’s to optimize ‘Y’ is the best policy.
Tradeoffs in multiple CTQ’s and Y’s will be explored later
Similarly, since the trivial X’s do not significantly affect this y, these X’s can be set
arbitrarily or with respect to other CTQ’s
It is important to quantify the noise/error in the process once all the X’s are set.
Remember that not all X’s need to be set, as trivial X’s which are uncontrollable will
still vary
If the noise/error does not lead to an acceptable Sigma level, an enhanced cycle
must be performed
[Roadmap: Performance goal on 'Y' → Screening DOE (2^k, 2^(k-p))]
For quantitative X’s, a prediction equation can be formed that will provide a direction
of improvement. Sequential screening may be needed using 2-level optimization
designs
Once the vicinity of the optimum settings has been located, 3-level optimization
experiments can be used to quantify the nonlinear effects and compute the best
settings for the X’s
If sufficient improvement has not been made, additional X’s (from the screening
experiment) will need to be added to the pool of vital few
Step 8 provided the experimental techniques to establish the values of X’s that
produce the best output level of ‘Y’
The best values of the X's are backed off to find a range of values that, while not the
'best' levels, still provide acceptable output levels
A range of 'X' values can provide additional flexibility in setting these factor levels
while not adversely affecting the output, especially in the case of multiple responses
[Plot of Y = f(X): the Y specification limits (LSL, USL) map through the function to an X operating window from XL to XU with target XT; buffered operating limits LOL & UOL sit inside this window]
[Plot of Y = 20 + 2X with USL = 90, T = 60 & LSL = 45 marked on the Y axis, mapped down to XLSL, XT & XUSL on the X axis]
Assume the true relationship between ‘X’ & ‘Y’ is demonstrated by the regression line
Y = 20 + 2X
Since ‘Y’ has USL & LSL as well, we have a range of operating values for ‘X’
For USL & LSL as 90 & 45 for ‘Y’, we can solve for XUSL = 35 & XLSL = 12.5
Hence, value of ‘X’ should range from 12.5 to 35, with an ideal setting at 20
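The inversion can be sketched in Python:

```python
# Solving the X operating window from the fitted line Y = 20 + 2X
# and the Y specification limits (LSL = 45, target = 60, USL = 90).
def x_for_y(y, intercept=20.0, slope=2.0):
    """Invert Y = intercept + slope*X to find the X giving a required Y."""
    return (y - intercept) / slope

x_lsl = x_for_y(45)     # 12.5
x_target = x_for_y(60)  # 20.0
x_usl = x_for_y(90)     # 35.0
print(x_lsl, x_target, x_usl)  # → 12.5 20.0 35.0
```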
[The same plot of Y = 20 + 2X with USL = 90, T = 60 & LSL = 45, now showing the buffered operating limits XL, XLOL, XT, XUOL & XU on the X axis]
As studied in step 2 of DMAIC, output ‘Y’ may have a GRR associated with it
We create a buffer against this error by slightly reducing the operating range for ‘X’
The new operating limits are called the lower operating limit (LOL) & upper operating
limit (UOL)
If the GRR standard deviation σM = 2 from GRR study, we create a three sigma
buffer for LSL & USL for ‘Y’ & adjust them to 90 - 6 = 84 & 45 + 6 = 51
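A Python sketch of the buffering arithmetic, using the same line Y = 20 + 2X:

```python
# Buffer the Y spec limits by three GRR sigmas (sigma_M = 2) before
# converting to X operating limits through Y = 20 + 2X.
sigma_m = 2.0
usl, lsl = 90.0, 45.0

usl_buffered = usl - 3 * sigma_m  # 84.0
lsl_buffered = lsl + 3 * sigma_m  # 51.0

# Corresponding buffered operating limits for X:
x_uol = (usl_buffered - 20) / 2   # 32.0
x_lol = (lsl_buffered - 20) / 2   # 15.5
print(x_lol, x_uol)  # → 15.5 32.0
```

These are the XLOL = 15.5 & XUOL = 32 values inside the original window of 12.5 to 35.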
[Plot of Y = 20 + 2X with the buffered limits USLB = 84 & LSLB = 51 added between USL = 90 & LSL = 45; the X axis shows XL = 12.5, XLOL = 15.5, XT = 20, XUOL = 32 & XU = 35]
[Plots of Y against X with USL & LSL marked: one for a continuous X (scale 13 to 23) and one for a discrete attribute X, Shift, with levels A, B, C & D]
If there is no overlapping region, all CTQ's cannot be fulfilled & the project team should
seek help from the BB & Champion to prioritize
Once the operating limits are determined, it may be worthwhile to have a re-look at
the process
New operating limits may need some extra steps, some steps to be scrapped, or
some steps to be performed differently
Why bother?
Technical
Habit and inertia
Difficulty in learning new skills
Sunk costs
Lack of skills
Political
Threats to old guard from new guard
Relationships
Power and authority imbalance
Cultural
Selective perception
Locked into old "mindset"
Afraid of letting go
Technical
Do an alignment test for systems and structures
Provide training and education
Provide coaches, tools, job aids
Run pilot to demonstrate
Political
Do a political map to understand influence patterns
New measures and rewards
Clarify roles and responsibilities – accountabilities
Involve champion
Cultural
Do a cultural audit: what beliefs drive us?
Articulate desired mindset and gaps
Redefine measures and rewards
Make known important core values that remain constant
In step 9, we determined the operating limits of the vital few X's. Step 9 also applies
the GRR tools used in step 2 of DMAIC to assess the measurement system variability
associated with these X's
[Plot of Y = f(X) with USL & LSL on the Y axis and XL, XLOL, XUOL & XU on the X axis; measurement variability in 'Y' & model error (other X's) widen the uncertainty around the curve]
Measurement validation could be applied after Step 5. Since it may not be known
at that time which X’s are vital, measurement validation would need to be done on
all X’s
Measurement validation could be applied after Step 6, when the vital X’s are
known. The operating region for the vital X’s might not be known
Measurement validation could be applied after Step 7, but similarly the region of
operation for the X’s may not have been completely determined
Measurement validation could be applied after Step 8, which obviously fits the step
order. This may be a little late for X's with high measurement error, since
decisions have already been made without knowledge of this noise
In this case when the conditions are not repeatable and reproducible, the best
estimate for equipment and appraiser variation comes from when the conditions are
held as homogenous as possible. However, it is confounded with the sample
variation
[Layout of the 36 total parts / conditions: Trial/Reading #1 uses parts 1-18 and Trial/Reading #2 uses parts 19-36, each arranged in three blocks of six. Differences across blocks lead to reproducibility & sample variability; differences between Reading #1 & Reading #2 lead to repeatability & sample variability]
The GRR for destructive testing can still use multiple operators and multiple trials;
however, by definition, it must use multiple samples. Note that there are now
36 unique samples for the GRR study
The variability associated with sample is not separable from either equipment or
operator. Thus, one should try to obtain samples which are as homogenous as
possible
Specific rules for GRR on X’s are developed from the operating ranges
Thumb rules as studied in step 2 still apply, however, one must test the effect on ‘Y’
& see if it remains within the specification limits
Y = f (X)
USL
X
XL XU
Operating limits of all vital X’s are buffered to satisfy the ‘Y’. The target ‘X’ value will
give the best DPMO. However, every value of ‘X’ within the final buffered operating
limit will satisfy the DPMO & Sigma multiple requirement
Step 10
Step 11
Verify Process Improvement
Implement Process Controls
© SAMIR SHAH, Slide 309, Six Sigma Module 1
Key Concepts
Step 10 also re-computes the process baseline to ensure that targeted level of
performance has been achieved as promised in step 4
10.2 Prepare action plan to maintain X’s that are varying away from settings
So far, we have identified the best settings for each of the vital X's
The key now is to ensure that the X’s don’t vary away from the targeted setting
Process control is a crucial tool in ensuring that this Six Sigma project delivers
lasting benefits
Mistake-Proofing: Prevention & Detection
Look for cutting unproductive time to foster creativity
Brainstorming
1 Identify Problems – Customer complaints, Error reports, Rejection analysis, FMEA
2 Prioritize Problems – Cost & Effort estimate, COPQ
Provide guidelines – Check-lists, SOP's, Templates
Use visuals – Color-codes, Shapes
Each team takes any two of the situations & applies mistake-proofing techniques:
• Car owners complain that they often forget if the fuel tank hole is on the left or right side of the
car.
• A cold drink manufacturer wants to ensure that all bottles are filled with exactly the same
quantity.
• Commuters complain that ‘free left’ is always blocked by the vehicles that have to go straight.
• HR team of a company has found that employees only punch-in & don’t punch-out.
• Bank customers complain that they find it difficult to keep track of cheques issued by them.
• Administration team of a company finds that employees don’t switch-off lights while leaving.
• A FMCG company has found that retailers don’t disburse the freebies to customers as due.
• Credit card customers complain that they end up over-spending on their cards.
In general, the decision is either to accept or reject the lot so as to make sure that
defective X's do not result in an out-of-specification 'Y'. This process is called
Acceptance Sampling
Acceptance sampling was originally applied by the U.S. military to the testing of bullets.
If every bullet were tested in advance, none would be left to ship. If, on the other hand,
none were tested, malfunctions might occur in the field of battle, with potentially
disastrous results
A sample should be picked at random from the lot, and on the basis of the information
yielded by the sample, a decision should be made regarding the disposition of the
lot (population)
The number of defects found in the sample of size ‘n’ is compared to a predetermined
standard, the critical number of defects ‘c’. If the observed number exceeds the critical
number, the entire lot (population) is rejected
Acceptance sampling is "the middle of the road" approach between no inspection and
100% inspection
An acceptance sampling plan (ASP) is a sampling scheme and a set of rules for
making decisions. The decision, based on counting the number of defectives in a
sample, can be to accept the lot, reject the lot, or even, to take another sample and
then repeat the decision process
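A sketch of how such a single-sampling plan behaves — the plan (n = 50, c = 2) is hypothetical, chosen only for illustration:

```python
# Single-sampling plan: inspect n items, accept the lot if the number of
# defectives found is <= c. For a lot with true defective fraction p, the
# probability of acceptance follows the binomial distribution.
from math import comb

def prob_accept(n, c, p):
    """P(accept) = P(X <= c) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Hypothetical plan: sample 50 items, accept if at most 2 are defective.
for p in (0.01, 0.05, 0.10):
    print(f"p = {p:.2f}  P(accept) = {prob_accept(50, 2, p):.3f}")
```

Plotting P(accept) against p gives the plan's operating characteristic (OC) curve: better lots are accepted with higher probability.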
On the other hand, Acceptance Sampling ignores the process & focuses exclusively
on the output after it has been produced
About SPC
Aids visual monitoring & controlling
Depends heavily on data collection
It forms data into patterns which can be statistically tested and, as a result, leads to
information about the behavior of process output / control variable characteristics
It detects assignable causes which affect the central tendency and/or variability of
the cause system
It points out where action can be taken with known degrees of risk and confidence
Control charts are useful for tracking process statistics over time and detecting the
presence of special causes
A process is in control when most of the points fall within the bounds of the control
limits, and the points do not display any nonrandom patterns
[Control chart schematic: UCL = µ + 3σ & LCL = µ - 3σ enclose 99.73% of points, with 0.135% beyond each limit; a point outside the limits is an out-of-control point]
Continuous Data – EWMA
Discrete Data – Defectives: NP & P charts; Defects: C & U charts
These constants are used to determine control limits & other process statistics
[I-MR chart of the temperature data: Individuals plot with X̄ = 69.07, 3.0SL = 82.93, -3.0SL = 55.20 (one out-of-control point); Moving Range plot with MR̄ = 5.214, 3.0SL = 17.04, -3.0SL = 0]
For the individuals chart (sub-group size n = 1), limits come from the average moving range:
LCLX = X̄ - 2.66 MR̄
UCLX = X̄ + 2.66 MR̄
LCLMR = D3 MR̄ (= 0)
UCLMR = D4 MR̄ (D4 = 3.267 for a moving range of span 2)
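A Python sketch of these limits, using the chart's X̄ & MR̄ values (E2 = 2.66 = 3/d2 and D4 = 3.267 are the standard constants for a moving range of span 2):

```python
# Individuals / moving-range (I-MR) limits for sub-group size 1.
# x_bar and mr_bar are the values printed on the chart above.
x_bar = 69.07
mr_bar = 5.214

ucl_x = x_bar + 2.66 * mr_bar
lcl_x = x_bar - 2.66 * mr_bar
ucl_mr = 3.267 * mr_bar
lcl_mr = 0.0

# These reproduce the chart's 3.0SL lines to within rounding.
print(round(ucl_x, 2), round(lcl_x, 2), round(ucl_mr, 2))
```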
Let’s take data of the previous example only. Assume that the data on temperature
was collected using three different probes & below table gives three readings per
hour, each for one probe, over 5 hours (5 samples, each of sub-group size 3)
[Xbar-R chart of the temperature data (5 sub-groups of size 3): Sample Mean plot with X̄ = 69.07, 3.0SL = 77.25, -3.0SL = 60.88 (one out-of-control point); Sample Range plot with R̄ = 8.000, 3.0SL = 20.59, -3.0SL = 0]
LCLX = X - A2 R
UCLX = X + A2 R
LCLR = D3 R
UCLR = D4 R
LCLX = X - A3 S
UCLX = X + A3 S
LCLS = D3 S
UCLS = D4 S
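A Python sketch of the Xbar-R limits for this example — for sub-group size 3, the standard constants are A2 = 1.023, D3 = 0, D4 = 2.574 (the Xbar-S chart works the same way with A3 & S̄):

```python
# Xbar-R control limits for sub-groups of size 3. The grand mean and
# average range below are the values printed on the chart above.
x_double_bar = 69.07
r_bar = 8.000

a2, d3, d4 = 1.023, 0.0, 2.574  # constants for n = 3

ucl_xbar = x_double_bar + a2 * r_bar
lcl_xbar = x_double_bar - a2 * r_bar
ucl_r = d4 * r_bar
lcl_r = d3 * r_bar

# Matches the chart's limits to within rounding.
print(round(ucl_xbar, 2), round(lcl_xbar, 2), round(ucl_r, 2))
```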
NP charts track the number of defectives and detect the presence of special
causes. Usually, it plots defectives from a fixed sub-group size, which allows an
apples-to-apples comparison of the plotted counts
Each entry in the worksheet column is the number of defectives for one subgroup,
assumed to have come from a binomial distribution with parameters n and p. The
overall proportion defective p̄ is estimated from the data; the center line and
control limits are then calculated using this value
Center Line = n p̄
LCLNP = n p̄ - 3 √( n p̄ (1 - p̄) )        n = sub-group size
UCLNP = n p̄ + 3 √( n p̄ (1 - p̄) )        p̄ = Total defectives / Total units
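A Python sketch of the NP limits, using the values from the example below (n = 10, p̄ = 0.1417):

```python
# NP chart limits: counts of defectives per fixed sub-group of size n,
# with binomial-based three-sigma limits (note the square root). A
# negative lower limit is clipped to zero, since counts cannot be negative.
from math import sqrt

def np_limits(n, p_bar):
    center = n * p_bar
    half_width = 3 * sqrt(n * p_bar * (1 - p_bar))
    return max(0.0, center - half_width), center, center + half_width

lcl, center, ucl = np_limits(10, 0.1417)
print(round(lcl, 3), round(center, 3), round(ucl, 3))
```

The center line of 1.417 matches the NP̄ value on the chart below.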
Let’s assume that the quality control department checks the quality of finished
goods sampling a batch of 10 items every hour. If items are found out of control
limits consistently in any given day, production process has to be stopped for the
next day. They collect the following data over 24 hours:
[NP chart of the 24 hourly samples: Sample Count plotted with center line NP̄ = 1.417 & lower limit -3.0SL = 0]
P charts track the proportion of defectives and detect the presence of special
causes. It allows plotting defectives from a varying sub-group size because it
uses proportions rather than counts, unlike the NP chart
Each entry in the worksheet column is the number of defectives for one subgroup,
assumed to have come from a binomial distribution with parameters n and p
Center Line = p̄
LCLP = p̄ - 3 √( p̄ (1 - p̄) / n )        n = sub-group size
UCLP = p̄ + 3 √( p̄ (1 - p̄) / n )        p̄ = Total defectives / Total units
[P chart of the same data: Proportion plotted with center line P̄ = 0.1417 & lower limit -3.0SL = 0]
It's identical to the NP chart in the previous example since the sub-group size is
constant & the data is the same
C charts track the number of defects and detect the presence of special causes.
Usually, it plots defects coming from a fixed sub-group size that allows an apple-to-
apple comparison while plotting numbers
Each entry in the worksheet column is the number of defects for one subgroup,
assumed to have come from a Poisson distribution with parameter λ
The process average number of defects C̄ is estimated from the sample data; this
value also forms the center line. Control limits are then calculated using
this value
Center Line = C̄
LCLC = C̄ - 3 √C̄ (clipped at 0)
UCLC = C̄ + 3 √C̄
[C chart: Sample Count plotted with center line C̄ = 1.400, 3.0SL = 4.950 & lower limit -3.0SL = 0]
U charts track the number of defects per opportunity and detect the presence of
special causes. Usually, it plots defects coming from a varying sub-group size
Each entry in the worksheet column is the number of defects for one subgroup,
assumed to have come from a Poisson distribution with parameter λ
Center Line = Ū
LCLU = Ū - 3 √( Ū / n )        n = sub-group size
UCLU = Ū + 3 √( Ū / n )        Ū = Total defects / Total opportunities
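A Python sketch of the C & U chart limits, checked against the values on these slides (C̄ = 1.4, and Ū = 0.09545 with n = 10):

```python
# C and U chart limits: Poisson-based three-sigma limits (note the square
# roots). Negative lower limits are clipped to zero.
from math import sqrt

def c_limits(c_bar):
    half = 3 * sqrt(c_bar)
    return max(0.0, c_bar - half), c_bar + half

def u_limits(u_bar, n):
    half = 3 * sqrt(u_bar / n)
    return max(0.0, u_bar - half), u_bar + half

print(c_limits(1.4))        # UCL about 4.95, as on the C chart
print(u_limits(0.09545, 10))  # UCL about 0.3886, as on the U chart
```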
Let’s slightly change the data used in example of C chart. Let’s assume that the
customer service department now administers two questionnaires on employees,
one with 10 & another with 20 questions, i.e. sub-group size varies. They have to
be answered in ‘yes / no’. Each question that is answered in a ‘no’ is a defect.
[U chart: defects per opportunity plotted with center line Ū = 0.09545, 3.0SL = 0.3886 & lower limit -3.0SL = 0]
Do not apply SPC tools to processes that are known to be out of control
Do not ignore 'out-of-control' signals even if your 'Y' is meeting the specifications &
'X' is meeting the operating limits
It's quite possible that 'X' is under control, but 'Y' is out of specification
Success of control charts (especially the Xbar-R chart) depends upon the proper
selection of sub-groups
All control charts allow specifying the historical mean & standard deviation, as
applicable. A good practice in control charting is to fix the center line & control limits
by using these parameters so that fresh control limits are not calculated for each
sample
However, control limits must be re-calculated if data displays a clearly different trend & the
reason for this change (new machine / skill / technology / material) is known & desirable
Let’s use the same example that was used in Xbar-R chart. We measured the
temperature using three probes & calculated the centerline & control limits using
that sample data itself. Suppose we know that population mean & standard
deviation are 70 & 5 respectively. Now we can force the control chart to perform
tests as per these limits for the current sample
Control chart would now look like this, check the new center line & control limits
[Xbar-R chart with forced parameters (µ = 70, σ = 5): Sample Mean plot with center line X̄ = 70.00, 3.0SL = 78.66, -3.0SL = 61.34; Sample Range plot with R̄ = 8.465, 3.0SL = 21.79, -3.0SL = 0]
Define what corrective actions should be taken when ‘X’ is found to be out-of-control
Points to Remember
Take help from the process flow diagrams developed in previous steps
Pay attention to FMEA output
It’s possible that all vital X’s are under control, but required improvement is not made
If required improvement is not made, each of the above points should be explored
Improved performance sustained for at least one-two months* with all vital X’s
under control
* Black Belts must use their discretion based upon the sample size available to statistically prove the
improvement
Step 11 emphasizes the need to sustain the improvement made so that the process
does not slip back to the original performance
P R O J E C T
Statistical solution is discovering vital X’s, their best settings & operating limits
Control Plan
QFD / VOC
FMEA / Fishbone
Process Owners
SOP’s
Tool
IF the right customer CTQ has been selected,
IF it is translated to a measurable internal CTQ,
IF vital X's are discovered that functionally relate to 'Y', and
IF operating limits are set for vital X's,
the Customer will be satisfied