What Does It Mean?
Phillip H. Williams
The definition of Six Sigma was clear from the beginning: 3.4 defects per million opportunities (DPMO), allowing for a 1.5-sigma process shift. But the definition of zero defects is not so clear. Perhaps zero defects refers to the domain beyond 3.4 DPMO. Or perhaps it refers to designing defects out of the process or product, so that, theoretically at least, a company can consistently manufacture a defect-free product.
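The 3.4 DPMO figure follows directly from the normal distribution: a six-sigma process whose mean shifts by 1.5 sigma leaves the nearest specification limit 4.5 standard deviations away, and the one-sided tail beyond that point is about 3.4 in a million. A minimal check (the function name is mine, not from the article):

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities for a process at the given sigma
    level, allowing the conventional 1.5-sigma mean shift (one-sided tail)."""
    z = sigma_level - shift  # distance from the shifted mean to the spec limit
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z > z) for a standard normal
    return tail * 1_000_000

print(round(dpmo(6.0), 1))  # prints 3.4, the classic Six Sigma figure
```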
There is value in trying to understand the meaning and purpose of this oft-used term, and
whether its use is the best approach in a Six Sigma environment of continuous improvement.
Consider three common assertions associated with zero defects:

"All defects are the same, since all defects are bad."
"There is no such thing as a benign defect."
"If we can get rid of the defects, then we can get rid of the testing."
In fact, not all defects are equal. Defects, depending on their size and type, have different probabilities of impacting the finished product, and these probabilities depend on the technology. The impact probability of a particular defect may even vary within a technology, that is, with the stage or layer in which it occurs. When it comes to the practical definition of a defect, "bad" is a relative term. Many defects are simply neutral. They are never good, but, depending on the technology, they may cause no harm either. If all defects are considered equally bad, then prioritization is difficult.
It is the role of statistically minded scientists and engineers to classify defects and their potential impact, based on data and engineering judgment. This allows them to reduce defect levels systematically and in a prioritized fashion, starting with the worst and progressing toward the more benign. Without this kind of problem-solving prioritization, progress may be slow and confused, perhaps even at a standstill. The ability to prioritize is essential to the continuous improvement process.
The statement that producing fewer defects will require less inspection is incorrect. Actually, the opposite is true: a higher level and sophistication of testing is required to detect lower defect levels. The plot in Figure 1, derived from a cumulative binomial distribution (pass/fail inspection), shows how the required sample size grows sharply as the prevalence of defective units decreases. The particular curve in Figure 1 corresponds to a probability of detection of 95 percent. In other words, if defects are present at the indicated level (x-axis), there is a 95 percent probability that at least one failed unit will be detected using the sample size indicated on the y-axis.
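The logic behind such a curve can be reproduced directly from the binomial model: the smallest sample size n satisfying 1 − (1 − p)^n ≥ 0.95 for a defect prevalence p. A minimal sketch (the function name is mine):

```python
import math

def sample_size_for_detection(defect_rate, detect_prob=0.95):
    """Smallest sample size n such that P(at least one defective unit
    appears in the sample) >= detect_prob, under a binomial
    (independent pass/fail) inspection model."""
    # P(no defectives in n units) = (1 - p)^n;
    # require 1 - (1 - p)^n >= detect_prob, i.e. (1 - p)^n <= 1 - detect_prob.
    return math.ceil(math.log(1 - detect_prob) / math.log(1 - defect_rate))

for rate in (0.01, 0.001, 0.0001):
    print(f"defect rate {rate}: inspect at least {sample_size_for_detection(rate)} units")
```

At a 1 percent defect rate, 299 units suffice for 95 percent detection confidence; at 0.01 percent, tens of thousands are needed, which is the sharp growth the figure illustrates.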
A more intuitive example: if a shoebox full of needles is mixed into a haystack, only a portion of the haystack will have to be moved before the presence of needles is detected. If there is only one needle in the haystack, every straw may have to be moved before it is found, assuming it is not missed entirely.
This is really the misunderstanding that drives the inappropriate application of a zero defects
policy to multiple points along the supply chain (Figure 2). It may be thought that producing zero
or near-zero defects at each point will lead to reduced or eliminated inspection/testing prior to
shipment to the end-customer. But for zero defects to approach reality, the inspection/testing
must remain the same or increase at the final inspection point. If zero is truly the goal, then 100
percent sampling at the escape point is required, regardless of defect levels. This implies, then,
that any zero defect inspections prior to the escape point may be non-value-added.
Ideally, suppliers should produce the highest-quality output possible in order to maximize yield and minimize costs, which ultimately benefits both the supplier and the customer. But a zero defects policy does not provide this motivation to suppliers. When the goal of zero defects is applied to multiple interim points along the supply chain, the undesired effects of increased costs and lower yields are encouraged. The increased costs come from additional tests, inspections and cycle time. The lower yields are likely because of a higher rate of false fails (Type I errors) as the suppliers apply increasingly stringent criteria in an attempt to eliminate potential failures at the customer's incoming test/inspection. In other words, in an effort to eliminate even the smallest possibility of customer incoming test failures, good product may be scrapped to overly stringent criteria.
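The false-fail effect can be quantified with a simple normal-distribution sketch. Suppose a measured parameter of good product is normally distributed well within spec; tightening the acceptance limits from ±3 to ±2 standard deviations (illustrative limits I have assumed, not figures from the article) scraps far more good product:

```python
import math

def fraction_scrapped(limit_sigma):
    """Fraction of good (normally distributed) product falling outside
    symmetric acceptance limits set at +/- limit_sigma standard deviations."""
    # Two-sided tail of the standard normal: 2 * P(Z > k) = erfc(k / sqrt(2))
    return math.erfc(limit_sigma / math.sqrt(2))

# Illustrative criteria (assumed for this sketch):
print(f"+/-3 sigma limits scrap {fraction_scrapped(3):.2%} of good product")
print(f"+/-2 sigma limits scrap {fraction_scrapped(2):.2%} of good product")
```

Tightening from ±3 to ±2 sigma raises the false-fail rate from roughly 0.27 percent to about 4.6 percent, a more than sixteen-fold increase in good product scrapped.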
Well-defined methodologies already exist, for example:

- Design for test (DFT) and design for manufacturability (DFM) (see "DFM: Worlds Collide, Then Cooperate" by L. Peters, Semiconductor International, June 1, 2005)
- Robust design
It is probably best not to encourage the use of somewhat ambiguous terminology in place of well-defined and meaningful methodologies such as these.
The concept of continuous improvement is intuitive. It makes sense to always strive for a better
process or product, to reduce costs, satisfy customers and gain market share. Absolute perfection
can never be achieved, but an organization can move closer and closer with good statistical and
engineering practices.