Tapan P. Bagchi
Indian Institute of Technology Kharagpur
All content following this page was uploaded by Tapan P. Bagchi on 23 August 2014.
Objectives
Structure
The concept of TQM is basically very simple. Each part of the organization has
customers, some external and many internal. Identifying what the customer requirements are
and setting about to meet them is the core of a total quality approach. This requires a good
management system, methods including statistical quality control (SQC), and teamwork.
A well-operated, documented management system provides the necessary foundation for the
successful application of SQC. Note, however, that SQC is not just a collection of techniques.
It is a strategy for reducing variability, the root cause of many quality problems. SQC refers
to the use of statistical methods to improve quality and, through it, customer satisfaction. This task is seldom trivial, however, because real-world processes are affected by
numerous uncontrolled factors. For instance, within every factory, conditions fluctuate with
time. Variations occur in the incoming materials, in machine conditions, in the environment
and in operator performance. A steel plant, for example, may purchase good quality ore
from a mine, but the physical and chemical characteristics of ore coming from different
locations in the mine may vary. Thus, everything isn't always "in control."
Besides ore, in steel making, furnace conditions may vary from heat to heat. In welding, it is
not possible to form two exactly identical joints and faulty joints may occur occasionally. In a
cutting process, the size of each piece of material cut varies; even the most high-quality
cutting machine has some inherent variability. In addition to such inherent variability, a large
number of other factors may also influence processes (Figure 1.1).
Many of these variations cannot be predicted with certainty, although sometimes it is possible
to trace the unusual patterns of such variations to their root cause(s). If we have collected
sufficient data on these variations, we can tell, in terms of probability, what is most likely to occur next if no action is taken. If we know what is likely to occur next given certain conditions, we can take suitable actions to try to maintain or improve the acceptability of the output. This is the rationale of statistical quality control.
[Figure 1.1: A process and the factors influencing it. Control variables (set points for temperature, cutting speed, raw material specs, recipe, etc.), raw material quality/quantity, operator training level and other state variables act on the process; variation in the output variables (quality of the finished product, level of customer satisfaction) is measured at the output.]
Another area in which statistical methods can help improve product quality is the design of products and processes. It is now well understood that over two-thirds of all product malfunctions may be traced to their design. Indeed, the characteristics or quality of a product
depend greatly on the choice of materials, settings of various parameters in the design of the
product and the production process settings. In order to locate an optimal setting of the
various parameters which gives the best product, we may consider using models governing the
outcome and the various parameters, if such models can be established by theory or through
experimental work. Such a model is diagrammatically shown in Figure 1.2.
However, in many cases, a theoretical quality control model y = f(x) relating the final output
responses (y1, y2, y3, …) and the input parameters (x1, x2, x3, …) is either extremely difficult
to establish or mathematically intractable. The following two examples illustrate such cases.
Example 1: In the bakery industry, the taste, tenderness and texture of a kind of bread depend on various input parameters such as the origin of the flour used, the amount of sugar, the amount of baking powder, the baking temperature profile and baking time, the type of oven used, and so on. In order to improve the quality of the bread produced, the baker may use a model which relates the input parameters and the output quality of the bread. To find theoretical models quantifying the taste, tenderness and texture of the bread produced and relate these quantities to the various input parameters based on our present scientific knowledge
is a formidable task. However, the baker can easily use statistical methods in regression
analysis to establish empirical models and use them to locate an optimal setting of the input
parameters.
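As a sketch of this empirical approach, the snippet below fits a simple quadratic regression to hypothetical tenderness scores (every number here is invented for illustration, not baking data from the text) and locates the baking time that maximizes the fitted response:

```python
import numpy as np

# Hypothetical data: bread tenderness scores measured at several
# baking times (minutes). All numbers are illustrative only.
bake_time = np.array([20, 25, 30, 35, 40, 45], dtype=float)
tenderness = np.array([5.1, 6.8, 7.9, 8.1, 7.2, 5.9])

# Fit an empirical quadratic model y = a*t^2 + b*t + c by least squares.
a, b, c = np.polyfit(bake_time, tenderness, deg=2)

# A concave quadratic (a < 0) peaks where dy/dt = 2*a*t + b = 0.
t_opt = -b / (2 * a)
print(f"estimated optimal baking time: {t_opt:.1f} minutes")
```

In practice the same regression idea extends to several input parameters at once, with the fitted surface standing in for the unknown theoretical model.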
Example 2: Sometimes there are great difficulties in solving an engineering problem using
established theoretical models. The heat accumulated on a chip in an electronic circuit during
normal operation will raise the temperature of the chip and shorten its life. In order to
improve the quality of the circuit, the designer would like to optimize the design of the circuit
so that the heat accumulated on the chip will not exceed a certain level. This heat
accumulated can be expressed theoretically in terms of other parameters in the circuit using a
complicated system of ten or more daunting partial differential equations which can be used to
optimize the circuit design. However, it is usually not possible to solve such a system
analytically, and to solve it numerically using the computer also has computational
difficulties. In this situation, a statistical methodology known as design of experiments (DOE)
can be used to find an optimal design of the circuit without going through the complicated
method of solving partial differential equations.
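A minimal illustration of the idea: instead of solving the physical equations, evaluate a designed set of factor combinations and pick the best. The response function and factor levels below are invented stand-ins for what, in a real DOE, would be experimental measurements:

```python
from itertools import product

# Hypothetical response standing in for the intractable physical model:
# heat accumulated as a function of two circuit design parameters.
# In a real DOE these values would come from experiments, not a formula.
def heat(trace_width, spacing):
    return (trace_width - 0.8) ** 2 + 2 * (spacing - 1.5) ** 2 + 3.0

# A 3x3 full factorial design: every combination of the factor levels.
widths = [0.4, 0.8, 1.2]      # mm, illustrative levels
spacings = [1.0, 1.5, 2.0]    # mm, illustrative levels

runs = [(w, s, heat(w, s)) for w, s in product(widths, spacings)]
best = min(runs, key=lambda run: run[2])
print(f"best setting: width={best[0]}, spacing={best[1]}, heat={best[2]:.2f}")
```

Full factorial designs grow quickly with the number of factors; fractional factorial and orthogonal-array designs (used in Taguchi methods, discussed later) reduce the number of runs needed.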
In other cases control may need to be exercised even on-line, while the process is in progress, based on how the process is performing, to maintain product quality. Thus the problems addressed by statistical quality control are numerous and diverse.
1. Acceptance Sampling
This method is also called "sampling inspection." When products are required to be inspected
but it is not feasible to inspect 100% of the products, samples of the product may be taken for
inspection and conclusions drawn using the results of inspecting the samples. This technique
specifies how to draw samples from a population and what rules to use to determine the
acceptability of the product being inspected.
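For a single sampling plan, the chance of accepting a lot follows directly from the binomial distribution. The plan parameters below (sample size n = 50, acceptance number c = 1) are illustrative choices, not values from the text:

```python
from math import comb

def accept_probability(n, c, p):
    """P(accept) for a single sampling plan: inspect n items and
    accept the lot if at most c defectives are found, where p is
    the lot's true fraction defective (binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Plan: sample 50 items, accept the lot if 1 or fewer are defective.
for p in (0.01, 0.05, 0.10):
    print(f"fraction defective {p:.2f} -> "
          f"P(accept) = {accept_probability(50, 1, p):.3f}")
```

Plotting P(accept) against p gives the plan's operating characteristic (OC) curve, which is how sampling plans are usually compared.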
2. Statistical Process Control (SPC)
SPC aims at detecting and correcting undesirable changes in the output of a process while production is in progress. Note, however, that since SPC requires processes to display measurable variation, it is ineffective for quality levels approaching six sigma, though it is quite effective for companies in the early stages of quality improvement efforts.
3. Design of Experiments
Trial and error can be used to run experiments in the design of products and design of
processes, in order to find an optimal setting of the parameters so that products of good quality
will be produced. However, performing experiments by unscientific trial and error is frequently very inefficient in the search for an optimal solution. Application of the statistical methodology of "design of experiments" (DOE) can help us perform such experiments scientifically and systematically. Additionally, such methods greatly reduce the total effort spent in product or process development experiments, increasing at the same time the accuracy of the results. DOE forms an integral part of Taguchi methods, techniques that produce high-quality and robust product and process designs.
The production of a product typically progresses as indicated in the simplified flow diagram
shown in Figure 1.3. In order to improve the quality of the final product, design of
experiments (DOE) may be used in Step 1 and Step 2, acceptance sampling may be used in
Step 3 and Step 5, and statistical process control (SPC) may be used in Step 4.
There are several benefits that the SPC approach brings, as follows:
There are no restrictions on the type of process being controlled or studied, and the process tackled will be improved.
Decisions guided by SPC are based on facts, not opinions. Thus a lot of 'emotion' is removed from problems by SPC.
Quality awareness of the workforce increases because they become directly involved in
the improvement process.
Knowledge and experience of those who operate the process are released in a systematic
way through the investigative approach. They understand their role in problem solving,
which includes collecting facts, communicating facts, and making decisions.
Management and supervisors solve problems methodically, instead of by using a seat-of-
the-pants style.
We introduce now an important concept employed in thinking statistically about real life
processes. Process capability is the range over which the "natural variation" of a process
occurs as determined by the system of common or random causes; that is, process capability
indicates what the process can deliver under "stable" conditions when it is said to be under
statistical control.
The capability of a process is the fraction of output that can be routinely found to be within
specifications. A capable process has 99.73% or more of its output within specifications
(Figures 1.4 and 1.5).
Process capability refers to how capable a process is of making parts that are within the range
of engineering or customer specifications. Figure 1.4 shows the distribution of the dimension
of parts for a machining process whose output follows the bell-shaped normal distribution.
This process is capable because the distribution of its output is wholly within the spec range.
The process shown by Figure 1.5 is not capable.
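Assuming normally distributed output, the within-spec fraction that defines capability can be sketched as follows; the mean, sigma and spec limits here are hypothetical numbers chosen so the specs sit at plus or minus 3 sigma:

```python
from math import erf, sqrt

def fraction_within_specs(mean, sigma, lsl, usl):
    """Fraction of output inside [lsl, usl], assuming the process
    output follows a normal distribution with the given mean and sigma."""
    cdf = lambda x: 0.5 * (1 + erf((x - mean) / (sigma * sqrt(2))))
    return cdf(usl) - cdf(lsl)

# A centered process whose spec limits sit at mean +/- 3 sigma:
frac = fraction_within_specs(10.0, 0.5, 8.5, 11.5)
print(f"fraction within specs: {frac:.4%}")
```

The printed value reproduces the 99.73% figure quoted above for a capable, centered process.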
Process Control on the other hand refers to maintaining the performance of a process at its
current capability level. Process control involves a range of activities such as sampling the
process product, charting its performance, determining causes of any excessive variation and
taking corrective action.
Such decisions are usually based on economics. Remember that under routine production, the cost to produce one unit of the product (i.e., its unit cost) is the same whether that unit ultimately falls within or outside specs. To recover the cost of the rejects, the firm may be forced to raise the market price of the within-spec products (those that are acceptable to customers) and thus weaken its competitive position.
"Scrap and/or rework out-of-spec or defective parts" is therefore a poor business strategy since
labour and materials have already been invested in the unacceptable product produced.
Additionally, inspection errors will probably allow some nonconforming products to leave the
production facility if the firm aims at making parts that just meet the specs. On the other
hand, new technology might require substantial investment the firm cannot afford.
[Figures 1.4 and 1.5: Each figure shows the distribution of part dimensions (the natural variation of the process, spanning roughly the mean ± 3σ) against the lower and upper spec limits. In the capable process the distribution lies wholly inside the spec limits; in the incapable process the natural variation extends beyond them.]
Changes in design, on the other hand, may sacrifice fitness-for-use requirements and result in
a lower quality product. Thus, these factors demonstrate the need to consider process
capability during product design and in the acceptance of new contracts. Many firms now
require process capability data from their vendors. Both ISO 9000 and QS 9000 quality
management systems require a firm to determine its process capability.
Process capability has three important components: (1) the design specifications, (2) the
centering of the natural variation, and (3) the range, or spread, of variation. Figures 2.1 to 2.4
illustrate four possible outcomes that can arise when natural process variability is compared
with product specs. In Figure 2.1, the specifications are wider than the natural variation; one would therefore expect that this process will always produce conforming products as long as it remains in control. It may even be possible to reduce costs by investing in a cheaper technology that permits a larger variation in the process output. In Figure 2.2, the natural
variation and specifications are the same. A small percentage of nonconforming products
might be produced; thus, the process should be closely monitored.
In Figure 2.3, the range of natural variability is larger than the specification; thus, the current
process would not always meet specifications even when it is in control. This situation often
results from a lack of adequate communication between the design department and
manufacturing, a task entrusted to manufacturing engineers.
If the process is in control but cannot produce according to the design specifications, the
question should be raised whether the specifications have been correctly applied or if they
may be relaxed without adversely affecting the assembly or subsequent use of the product. If
the specifications are realistic and firm, an effort must be made to improve the process to the
point where it is capable of producing consistently within specifications.
Finally, in Figure 2.4, the capability is the same as in Figure 2.2, but the process average is
off-center. Usually this can be corrected by a simple adjustment of a machine setting or re-
calibrating the inspection equipment used to capture the measurements. If no action is taken,
however, a substantial portion of output will fall outside the spec limits even though the
process has the inherent capability to meet specifications.
We may define the study of process capability from another perspective. A capability study is
a technique for analyzing the random variability found in a production process. In every
manufacturing process there is some variability. This variability may be large or small, but it
is always present. It can be divided into two types:
Variability due to common (random) causes
Variability due to assignable (special) causes
The first type of variability is said to be inherent in the process and it can be expected to occur
naturally within a process. It is attributed to a multitude of factors which behave like a constant system of chance causes affecting the process. Called common or random causes, such factors include equipment vibration, passing traffic, atmospheric pressure or temperature changes, electrical voltage or humidity fluctuations, changes in the operator's physical or emotional condition, etc. These are the same kinds of forces that determine whether a tossed coin ends up showing a head or a tail. Together, however, these "chance" factors form a unique, stable and describable distribution. The behaviour of a process operating under such
conditions is predictable (Figure 2.5).
Inherent variability may be reduced by changing the environment or the technology, but given a set of operating conditions, this variability can never be completely eliminated from a
process. Variability due to assignable causes, on the other hand, refers to the variation that
can be linked to specific or special causes that disturb a process. Examples are tool failure,
power supply interruption, process controller malfunction, adding wrong ingredients or wrong
quantities, switching a vendor, etc.
[Figures 2.5 and 2.6: Under common causes alone, the process distribution is stable and its variability can be predicted over time; under assignable causes, the distribution shifts unpredictably from one time period to the next.]
Assignable causes are fewer in number and are usually identifiable through investigation on
the shop floor or an examination of process logs. The effect (i.e., the variation in the process)
caused by an assignable factor, however, is usually large and detectable when compared with
the inherent variability seen in the process. If the assignable causes are controlled properly,
the total process variability associated with them can be reduced and even eliminated. Still,
the effect of assignable causes cannot be described by a single distribution (Figure 2.6).
A capability study measures the inherent variability or the performance potential of a process
when no assignable causes are present (i.e., when the process is said to be in statistical
control). Since inherent variability can be described by a unique distribution, usually a normal
distribution, capability can be evaluated by utilizing the properties of this distribution. Recall
that capability is the proportion of routine process output that remains within product specs.
Even approximate capability calculations done using histograms enable manufacturers to take
a preventive approach to defects. This approach is in contrast with the traditional two-step
process: production personnel make the product while QC personnel inspect and screen out
products that do not meet specifications. Such QC is wasteful and expensive since it allows
plant resources including time and materials to be put into products that are not salable. It is
also unreliable since even 100 percent inspection would fail to catch all defective products.
SPC aims at correcting undesirable changes in the output of a process. Such changes may
affect the centering (or accuracy) of the process, or its variability (spread or precision). These
effects are graphically shown in Figure 2.
[Figure: Undesirable changes may shift the distribution of output off its target (loss of accuracy) or widen it (loss of precision).]
[Figure: The distribution of subgroup averages (xbar) is narrower than the distribution of individual measurements (x); it is the natural variation of the process {x} that must be compared with the lower and upper spec limits for x.]
Those new to SPC often have the misconception that they don't need to calculate capability
indices. Some even think that they can compare their control limits to the spec limits. This is
not true, because control limits look at the distribution of averages (xbar, p, np, u, etc.) while
capability indices look at the distribution of individual measurements (x). The distribution of
x for a process will always be more spread out than the distribution of its xbar values (Figure
2.5). Therefore, the control limits are often within the specification limits but the plus-and-
minus-3-sigma distribution of individual part dimensions (x) is not.
The "central limit theorem" of statistics says that averages of samples or subgroups {xbar} follow a normal distribution more closely than the individual values do. This is why we can construct control charts even on process data that are themselves not normally distributed. But
averages cannot be used for capability calculation because capability evaluates individual
parts delivered by a process. After all, parts get shipped to customers, not averages.
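The narrowing of the xbar distribution can be demonstrated with simulated data from a stable process; the process mean and sigma below are arbitrary illustrative values:

```python
import random
from statistics import stdev

random.seed(1)  # reproducible illustration

# Simulated individual measurements from a stable process
# (mean 50.0, sigma 1.0 -- arbitrary values for illustration).
individuals = [random.gauss(50.0, 1.0) for _ in range(5000)]

# Averages of subgroups of n = 5, as plotted on an xbar chart.
n = 5
xbars = [sum(individuals[i:i + n]) / n
         for i in range(0, len(individuals), n)]

# The spread of the averages is smaller by roughly sqrt(n).
print(f"stdev of individuals (x):   {stdev(individuals):.3f}")
print(f"stdev of subgroup averages: {stdev(xbars):.3f}")
```

With n = 5 the averages are roughly 1/√5 ≈ 0.45 times as spread out as the individuals, which is why control limits computed from xbar values sit well inside the ±3σ spread of individual parts.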
Capability studies are most often used to quickly determine whether a process can meet specs
or how many parts will exceed the specifications. However, there are numerous other
practical uses:
Estimating percentage of defective parts to be expected
Evaluating new equipment purchases
Predicting whether design tolerances can be met
Assigning equipment to production
Planning process control checks
Analyzing the interrelationship of sequential processes
Making adjustments during manufacture
Setting specifications
Costing out contracts
Since a capability study determines the inherent reproducibility of parts created in a process, it
can even be applied to many problems outside the domain of manufacturing, such as
inspection, administration, and engineering.
There are instances where capability measurements are valuable even when it is not practical
to determine in advance if the process is in control. Such an analysis is called a performance
study. Performance studies can be useful for examining incoming lots of materials or one-
time-only production runs. In the case of an incoming lot, a performance study cannot tell us
that the process that produced the materials is in control, but it may tell us by the shape of the
distribution what percent of the parts are out of specs or more importantly, whether the
distribution was truncated by the vendor sorting out the obvious bad parts.
Before we set up a capability study, we must select the critical dimension or quality
characteristic (must be a measurable variable) to be examined. This dimension is the one that
must meet product specs. In the simplest case, the study dimension is the result of a single, direct production and measurement process. In more complicated studies, the critical dimension
may be the result of several processing steps or stages. It may become necessary in these
cases to perform capability studies on each process stage. Studies on early process stages
frequently prove to be more valuable than elaborate capability studies done on later processes
since early processes lay the foundation (i.e., constitute the input) which may affect later
operations.
Once the critical dimension is selected, data measurements can be collected. This can be
accomplished manually or by using automatic gaging and fixturing linked to a data collection
device or computer. When measurements on a critical dimension are made, it is important to ensure that the measuring instrument is as precise as possible, preferably one order of magnitude finer than the specification. Otherwise, the measuring process itself will contribute
excess variation to the dimension data as recorded. Using handheld data collectors with
automatic gages may help reduce errors introduced by the process of measurement, data
recording, and transcription for post processing by computer.
The ideal situation for data collection is to collect as much data as possible over a defined
time period. This will yield a reliable capability number since it is based upon a large sample
size. In the practice of process improvement, determining process capability is Step 5:
Process capability formulas commonly used by industry require that the process must be in
control and normally distributed before one takes samples to estimate process capability. All
standard capability indices assume that the process is in control and the individual data follow
a normal distribution. If the process is not in control, capability indices are not valid, even if
they appear to indicate the process is capable.
Three different statistical tools are used together to determine whether a process is in control
and follows a normal distribution. These are
Control charts
Visual analysis of a histogram
Mathematical analysis of the distribution to test that the distribution is normal.
Note that no single tool can do the job here and all three must be used together. Control
charts (discussed in detail later in this Unit) are the most common method for maintaining a
process in statistical control. For a process to be in control, all points plotted on the control
chart must be inside the control limits, with no apparent patterns (e.g., trends) present. A
histogram (described below in Section 3) allows us to quickly see (a) if any parts are outside
the spec limits and (b) what the distribution's position is relative to the specification range. If
the process is one that is naturally a normal distribution, then the histogram should
approximate a bell-shaped curve if the process is in control. However, note that a process can
be in control but not have its individuals following a normal distribution if the process is
inherently non-normal.
Many processes naturally follow a bell-shaped curve (a normal distribution) but some do not.
Examples of non-normal dimensions are roundness, squareness, flatness and positional
tolerances; they have a natural barrier at zero. In these cases, a perfect measurement is zero
(for example, no ovality in the roundness measurement). There can never be a value less than
zero. The standard capability indices are not valid for such non-normal distributions. Tests for
normality are available in SPC text books that can assist you to identify whether or not a
process is normal. If a process is not normal, you may have to use special capability measures
that apply to non-normal distributions [1].
We first define some special terms. USL stands for Upper Specification Limit and LSL for
Lower Specification Limit. Midpoint is the center of the spec limits. The midpoint is also
frequently referred to as the nominal value or the target. Tolerance is the distance between
the upper and lower spec limits (tolerance = USL - LSL).
The standard deviation for the distribution of individual data, one important variable in all
the capability index calculations, can be determined in either of two ways.
The standard textbook formula for σ, the population standard deviation, when the true process mean (μ) is known and the population size (N) is finite, is

    σ = √( Σ (xᵢ − μ)² / N ),  the sum taken over i = 1, …, N
The term population here means all parts produced, not just a sample.
In practice, however, we usually work with a sample of the population, a handful of items collected from the production line, since this is more practical. In this case, the formula for the sample standard deviation (s) is

    s = √( Σ (xᵢ − xbar)² / (n − 1) ),  the sum taken over i = 1, …, n,

where the sample average is

    xbar = ( Σ xᵢ ) / n,  the sum taken over i = 1, …, n.
Here s symbolizes the sample standard deviation and n is the sample size; s is an estimator of σ. The standard deviation of a distribution is an indication of the dispersion (or variability) present in the population of the data: the higher the variability, the larger s (and σ) will be. When over 30 individual observations are taken, the above formulas for the population standard deviation (σ) and the sample standard deviation (s) yield virtually the same numerical result. In the following pages we will use the symbol σ to denote the standard deviation as we continue our discussion of process capability indices.
Standard deviation may also be estimated using Rbar (the average of the sample or subgroup
ranges Ri) and a constant that has been developed by statisticians for this purpose. The
formula for estimating sigma is:
    σ̂ = Rbar / d2

σ̂ ("sigma hat") represents the estimated standard deviation. Rbar is the average of the sample ranges {Ri} over the sample periods i during which the process is in control. The constant d2 varies by sample size (n) and is listed in Table 2.1.
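A sketch of the Rbar/d2 estimate, using d2 values from standard SPC tables and invented subgroup measurements:

```python
# d2 values from standard SPC tables for subgroup sizes 2 through 5.
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}

# Hypothetical subgroups of n = 4 consecutive measurements each.
subgroups = [
    [10.2, 10.5, 9.9, 10.1],
    [10.0, 10.4, 10.3, 9.8],
    [9.9, 10.1, 10.6, 10.2],
    [10.3, 10.0, 9.7, 10.4],
]

# Range of each subgroup, then the average range Rbar.
ranges = [max(g) - min(g) for g in subgroups]
rbar = sum(ranges) / len(ranges)

# Estimated sigma = Rbar / d2 for the subgroup size in use.
sigma_hat = rbar / D2[len(subgroups[0])]
print(f"Rbar = {rbar:.3f}, estimated sigma = {sigma_hat:.3f}")
```

This estimate is only meaningful when the subgroups come from a process that is in control, as the surrounding text stresses.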
It is important to remember that the process data, i.e., the individual measurements {xi}, must be normally distributed and in control in order to use the estimated value of σ in process capability calculations. If both of these conditions are not met, the estimated standard deviation (σ̂) will not be valid. If the process is normally distributed and in control, either method is acceptable and usually yields about the same result. Also, remember that neither the actual nor the estimated σ used in calculating capability will be meaningful if the process is inherently non-normal. Separate special methods are available for finding capability measures for the non-normal distributions arising from non-normal processes.
If you have both estimated and actual capability indices available, choose one method and stay
with it. Avoid the temptation to look at both and choose the one that is better, since this will
introduce variation in results.
The Cp Index
The most commonly used capability indices are Cp and Cpk. Cp, a measure of the dispersion of the process, is the ratio of the tolerance to 6σ. The formula is

    Cp = Tolerance / 6σ

The quantity "6σ" in the Cp formula is derived from the fact that, in a normal distribution, 99.73% of the parts will be within a 6σ spread, i.e., within (μ ± 3σ), when the process is being disturbed only by random or chance causes.
Suppose that the tolerance and measurements on a process are as follows: USL = 5.0, LSL = 1.0, midpoint of spec = 3.0, average of sample averages (xbar) = 2.0, σ = 0.5. Then tolerance = (5.0 − 1.0) = 4.0, xbar + 3σ = 3.5 and xbar − 3σ = 0.5. The Cp for the process output data sampled is thus 4.0/3.0 = 1.33.
As you can see from the Cp formula, values for Cp can range from near zero to very large positive numbers. When Cp is less than 1, the tolerance is less than 6σ, the inherent or natural spread of the process output. When Cp is greater than 1, the tolerance is greater than 6σ. The greater the Cp index of a process, the higher its process capability.
" Cp" alone, however, does not tell the complete story about the process!
Note that Cp is only a measure of the dispersion or spread of the distribution. It is not a
measure of the "centeredness" (where the mean of the process output is in relation to the spec
midpoint). An off-centered process could be a problematic situation. An example is the
situation shown by Figure 2.4. Both Figures 2.2 and 2.4 display processes that have a Cp =
1.0. But the process shown by Figure 2.4 has a significant fraction of its output falling
outside (below) the lower spec limit.
This is why Cp is never used alone as a measure of process capability. Cp only shows how good the process would be if it could be centered with some adjustments. The alternative capability index is Cpk, described below.
While Cp is only a measure of dispersion, Cpk measures both dispersion and centeredness. The Cpk formula takes into account both the process spread and the location of the process average in relation to the spec limits. The formula is as follows.

    Cpk = the lesser of (USL − xbar)/3σ and (xbar − LSL)/3σ

"The lesser of" ensures that Cpk reflects how capable the process is on its worst side. Using the data of the previous example we obtain

    Cpk = min[ (5.0 − 2.0)/(3 × 0.5), (2.0 − 1.0)/(3 × 0.5) ] = min[2.0, 0.67] = 0.67
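The Cp and Cpk calculations can be sketched in code, using the numbers from the example above (USL = 5.0, LSL = 1.0, xbar = 2.0, σ = 0.5):

```python
def cp(usl, lsl, sigma):
    """Cp: ratio of the tolerance to the 6-sigma process spread."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Cpk: 'the lesser of' the two one-sided capabilities, so it
    reflects both the spread and the centering of the process."""
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

# Data from the example in the text.
print(f"Cp  = {cp(5.0, 1.0, 0.5):.2f}")        # adequate spread
print(f"Cpk = {cpk(5.0, 1.0, 2.0, 0.5):.2f}")  # but poor centering
```

The two results (Cp = 1.33, Cpk = 0.67) show exactly the situation described: the spread would fit within the tolerance, but the off-center mean pushes part of the output below the lower spec limit.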
The greater the Cpk value, the higher the fraction of the output meeting specs, and hence the better the process. A Cpk value greater than 1.0 means that the 6σ spread of the data falls completely within the spec limits. An example is the process shown in Figure 1.4.
A Cpk value of 1.0 indicates that 99.73% of the parts produced by the process would be within
the spec limits. In this process only about 3 out of a thousand parts would be scrapped or
rejected. In other words, such a process just meets specs.
Do we need to improve the process (i.e., reduce its inherent variability) further? Improvement beyond just meeting specs may greatly improve the fit of the parts during assembly and also cut warranty costs.
The different special process conditions detectable by Cpk calculations are as follows.
Many companies with QS 9000 registration demand that their vendors demonstrate Cpk values of 1.33 or beyond. A Cpk of 1.33 corresponds to about 99.994% of products within specs.
A Cpk value of 2.0 is the coveted "six sigma" quality level. To reach this stage advanced
SQC methods including design-of-experiments (DOE) would be required. At this level no
more than 3 or 4 parts per million products produced would fall outside the spec limits. Such
small variation is not visible on xbar-R control charts in the normal operation of the process.
Remember that control and capability are two very different concepts. As shown in Figure
2.6, in general, a process may be capable or not capable, or in control or out of control,
independently of each other. Clearly, we would like every process to be both capable and in
(statistical) control. If a process is neither capable nor in control, we must take two corrective
actions to improve it. First we should get it in a state of control by removing special causes of
variation, and then attack the common causes to improve its capability. If a process is capable
but not in control (as the above example illustrated), we should work to get it back in control.
[Figure 2.6: A 2 × 2 grid of capability state (capable / not capable) against control state (in control / out of control). A process that is both capable and in control is the ideal process.]
Control charts, like the other basic tools for quality improvement, are relatively simple to use.
In general, control charts have two objectives, (a) help restore accuracy of the process so that
the process average stays near the target, and (b) help minimize variation in the process to
ensure that good precision is maintained in the output (see Figure 2.7).
Control charts have three basic applications: (1) to establish a state of statistical control, (2) to
monitor a process and signal when the process goes out of control, and (3) to determine
process capability. The following is a summary of the steps required to develop and use
control charts. Steps 1 through 4 focus on establishing a state of statistical control; in step 5,
the charts are used for ongoing monitoring; and finally, in step 6, the data are used for process
capability analysis.
1. Preparation
a. Choose the variable or attribute to be measured
b. Determine the basis, size, and frequency of sampling.
c. Set up the correct control chart.
2. Data Collection
a. Record the data
b. Calculate relevant statistics: averages, ranges, proportions, and so on.
c. Plot the statistics on the chart.
3. Determination of trial control limits
a. Draw the center line (process average) on the chart.
b. Compute the upper and lower control limits.
4. Analysis and interpretation
a. Investigate the chart for lack of control
b. Eliminate out-of-control points.
c. Re-compute control limits if necessary.
5. Use as a problem-solving tool
a. Continue data collection and plotting.
b. Identify out-of-control situations and take corrective action.
6. Use the control chart data to determine process capability, if desired.
In Section 4 we discuss the SPC methodology in detail and the construction, interpretation,
and use of the different types of process control charts. Although many different charts will
be described, they will differ only in the type of measurement for which the chart is used; the
same analysis and interpretation methodology described applies to each of them.
In SPC, numbers and information form the basis for decisions and actions. A thorough data recording system, manual or otherwise, is therefore an essential enabler for SPC. To allow one to interpret fully and derive maximum use from quality-related data, a set of simple statistical 'tools' has evolved over the past fifty years. These tools offer any organization an easy means to collect, present and analyze most such data. In this section we briefly review these tools. An extended description of them may be found in the quality management standard ISO 9004-4 (1994).
In the 1950s Japanese industry began to learn and apply in earnest the statistical methods that American statisticians Walter Shewhart and W. Edwards Deming had developed in the 1930s and 1940s to help manage quality. Subsequently, progress in continuous quality improvement in Japan led to significant expansion of many simple statistical tools on the shop floor. Kaoru Ishikawa, head of the Japanese Union of Scientists and Engineers (JUSE), later
formalized the use of these tools in Japanese manufacturing with the introduction of the 7 Quality
Control (7 QC) tools. The seven Ishikawa tools reviewed below are now an integral part of
quality control on the shop floor around the world. Many Indian industries use them routinely.
3.1 Flowchart
The flowchart lists the order of activities in a project or process and their interdependency. It
expresses detailed process knowledge. To express this knowledge certain standard symbols are
used. The oval symbol indicates the beginning or end of the process. The boxes indicate action
items while diamonds indicate decision or check points. The flowchart can be used to identify the
steps affecting quality and the potential control points. Another effective use of the flowchart
would be to map the ideal process and the actual process and to identify their differences as the
targets for improvements. Flowcharting is often the first step in Business Process Reengineering
(BPR).
[Figure: example flowchart. START leads to MIX, then to a CHECK decision; if not OK, RE-MIX and return to CHECK; if OK, STOP.]
3.2 Histogram
In manufacturing, the histogram can rapidly identify the nature of quality problems in a
process by the shape of the distribution as well as the width of the distribution. It informally
establishes process capability. It can also help compare two or more distributions.
[Figure: histogram of weight (gms) with frequency on the vertical axis; the lower and upper spec limits mark the tolerance band.]
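As a rough sketch, the frequency table behind such a histogram can be built by hand as follows. The weight measurements below are hypothetical, for illustration only.

```python
from collections import Counter

# Hypothetical weight measurements (gms), for illustration only
weights = [50.2, 49.8, 50.1, 50.4, 49.9, 50.0, 50.3, 49.7, 50.1, 50.2,
           49.6, 50.0, 50.5, 49.9, 50.1]

bin_width = 0.2
low = min(weights)                 # left edge of the first bin

# Assign each observation to a bin index and tally the counts
counts = Counter(int((w - low) / bin_width) for w in weights)

# Print the frequency table with a simple text bar for each bin
for b in sorted(counts):
    left = low + b * bin_width
    print(f"{left:4.1f}-{left + bin_width:4.1f}: {'*' * counts[b]}")
```

The shape and spread of the printed bars give the same quick visual assessment that a drawn histogram provides.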
3.3 Pareto Chart
The Pareto chart, as shown below, displays the distribution of effects attributable to various causes or factors, arranged from the most frequent to the least frequent. This tool is named after Vilfredo Pareto, the Italian economist who determined that wealth is not evenly distributed and that a few people hold most of the money.
This tool is a graphical picture of the relative frequencies of different types of quality
problems with the most frequent problem type obtaining clear visibility. Thus the Pareto chart
identifies the "vital few" and the "trivial many", highlighting the problems that should be worked on first to get the most improvement. Historically, about 80% of problems are caused by 20% of the factors.
[Figure: Pareto chart of percent defective by category, in decreasing order: poor dimensions (about 35%), operator errors, calibration, material, misc.]
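As a minimal sketch, a Pareto analysis simply sorts the categories by frequency and accumulates the percentages. The category names follow the figure; the defect counts themselves are hypothetical.

```python
# Hypothetical defect counts per category (names follow the figure)
defects = {"poor dimensions": 35, "operator errors": 27,
           "calibration": 15, "material": 9, "misc.": 5}

total = sum(defects.values())
cumulative = 0.0

# List causes from most to least frequent with a cumulative percentage,
# which makes the "vital few" stand out immediately
for cause, count in sorted(defects.items(), key=lambda kv: -kv[1]):
    cumulative += 100.0 * count / total
    print(f"{cause:16s} {count:3d}   cumulative {cumulative:5.1f}%")
```

The first two or three lines of output typically account for most of the cumulative percentage, which is exactly the "vital few" that the chart is meant to expose.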
3.4 Cause and Effect Diagram
The cause and effect diagram is also called the fishbone chart, because of its appearance, and the Ishikawa diagram, after the man who popularized its use in Japan. Its most frequent use is to list the causes of some particular quality problem or defect. The lines coming off the core horizontal line are the main causes, while the lines coming off those are subcauses.
The cause and effect diagram identifies problem areas where data should be collected and
analyzed. It is used to develop reaction plans to help investigate out-of-control points found on
control charts. It is also the first step for planning design of experiments (DOE) studies and
for applying Taguchi methods to improve product and process designs.
[Figure: cause and effect diagram for a defect, with main branches Machine (speed, tool), Man (training, attention), Methods (mixing, inspection) and Materials (quantity, quality).]
3.5 Scatter Diagram
The scatter diagram shows any existing pattern in the relationship between two variables that are thought to be related. For example, is there a relationship between outside temperature and cases of the common cold? As temperatures drop, do cases of the common cold rise in number? The closer the scatter points hug a diagonal line, the more nearly one-to-one the relationship between the variables being studied. Thus, the scatter diagram may be used to develop informal models to predict the future based on past correlations.
[Figure: scatter diagram of cases of common cold per 100 persons against outdoor temperature (F), showing a negative correlation.]
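The strength of the relationship seen in a scatter diagram can be summarized by the Pearson correlation coefficient r, as in this minimal sketch. The temperature/cold-case pairs below are hypothetical.

```python
import math

# Hypothetical paired observations, for illustration only
temps = [30, 40, 50, 60, 70]   # outdoor temperature (F)
cases = [20, 16, 11, 7, 4]     # cold cases per 100 persons

n = len(temps)
mx = sum(temps) / n
my = sum(cases) / n

# r = covariance / (spread of x * spread of y)
cov = sum((x - mx) * (y - my) for x, y in zip(temps, cases))
sx = math.sqrt(sum((x - mx) ** 2 for x in temps))
sy = math.sqrt(sum((y - my) ** 2 for y in cases))
r = cov / (sx * sy)

print(f"r = {r:.3f}")   # strongly negative: cases rise as temperature drops
```

An r close to -1 or +1 corresponds to points hugging a diagonal line; an r near 0 corresponds to a shapeless cloud.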
3.6 Run Chart
The run chart shows the history and pattern of variation. It is a plot of data points in time sequence, connected by a line. Its primary use is in determining trends over time. The analyst should indicate on the chart whether up is good or down is good. This tool is used at the beginning of the change process to see what the problems are. It is also used at the end (or check) part of the change process to see whether the change made has resulted in a permanent process improvement.
[Figure: run chart of sample averages, ranging from about 0.65 to 0.85, plotted against sample number with the overall average marked.]
Prepared for the Extension Program of Indira Gandhi Open University
Whereas a histogram gives a static picture of process variability, a run chart or a control chart
illustrates the dynamic performance (i.e., performance over time) of the process. The control
chart in particular is a powerful process quality monitoring device and it constitutes the core
of statistical process control (SPC). It is a line chart marked with control limits at 3 standard deviations (σ) above and below the average quality level. These limits are based on the statistical studies of shop data conducted in the 1930s by Dr Walter Shewhart. By comparing
certain measures of the process output such as xbar, R, p, u, c etc. (see Section 4) to their
control limits one can determine quality variation that is due to common or random causes and
variation that is produced by the occurrence of assignable events (special causes).
Failure to distinguish between common causes and special causes of variation can actually
increase the variation in the output of a process. This is often due to the mistaken belief that
whenever process output is off target, some adjustment must be made. However, knowing
when to leave a process alone is an important step in maintaining control over a process.
Equally important is knowing when to take action to prevent the production of nonconforming
product. Using actual industry data Shewhart demonstrated that a sensible strategy to control
quality is to first eliminate the special causes with the help of the control chart and then
systematically reduce the common causes. This strategy reduces the variation in process
output with a high degree of reliability while it improves the acceptability of the product.
Statistical process control (SPC) is actually a methodology for monitoring a process to (a)
identify the special causes of variation and (b) signal the need to take corrective action when it
is appropriate. When special causes are present, the process is deemed to be out of control. If
the variation in the process is due to common causes alone, the process is said to be in
statistical control. A practical definition of statistical control is that both the process averages
and variances are constant over time (Figure 2.5). Such a process is stable and predictable.
SPC uses control charts as the basic tool to improve both quality and productivity. SPC
provides a means by which a firm may demonstrate its quality capability, an activity necessary
for survival in today's highly competitive markets. Also, many customers (e.g., the
automotive companies) now require the evidence that their suppliers use SPC in managing
their operations. Note, however, that SPC requires processes to display measurable variation; thus, while it is quite effective for companies in the early stages of quality efforts, it becomes ineffective in producing improvements once the quality level approaches six sigma.
Before we leave this section, we repeat again that process capability calculations make little
sense if the process is not in statistical control because the data are confounded by special (or
assignable) causes and thus do not represent the inherent capability of the process. The simple
tools described in this section may be good enough to enable you to check this. To see this,
consider the data in Table 3.1, which shows 150 measurements of a quality characteristic from
a manufacturing process with specifications 0.75 ± 0.25. Each row corresponds to a sample of size n = 5 taken every 15 minutes. The average of each sample is also given in the last column.
A frequency distribution and histogram of these data is shown in Figure 3.1. The data form a
relatively symmetric distribution with a mean of 0.762 and standard deviation 0.0738. Using
these values, we find that Cpk = 1.075 and form the impression that the process capability is at
least marginally acceptable.
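The Cpk figure quoted above can be reproduced directly from the summary statistics in the text (mean 0.762, standard deviation 0.0738, specifications 0.75 ± 0.25), as this short sketch shows.

```python
# Summary statistics quoted in the text
mean, sigma = 0.762, 0.0738
lsl, usl = 0.50, 1.00            # specifications 0.75 +/- 0.25

cpl = (mean - lsl) / (3 * sigma)   # lower capability index
cpu = (usl - mean) / (3 * sigma)   # upper capability index
cpk = min(cpl, cpu)                # Cpk is the smaller of the two

print(f"Cpl = {cpl:.3f}, Cpu = {cpu:.3f}, Cpk = {cpk:.3f}")
```

Because the process mean sits above the target, the upper index is the binding one, and Cpk comes out at about 1.075, matching the text.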
Some key questions, however, remain to be answered. Because the data were taken over an
extended period of time, we cannot determine if the process remained stable throughout that
period. In a histogram the dimension of time is not considered. Thus, histograms do not
allow you to distinguish between common and special causes of variation. It is unclear
whether any special causes of variation are influencing the capability index. If we plot the
average of each sample against the time at which the sample was taken (since the time
increments between samples are equal, the sample number is an appropriate surrogate for
time), we obtain the run chart shown in Figure 3.6.
[Figure 3.1: frequency distribution and histogram of the data (LSL = 0.5, USL = 1.0)]

Bin     Frequency
0.65        10
0.70        14
0.75        40
0.80        31
0.85        37
0.90        14
0.95         1
More         0

Average = 0.762    Std Dev = 0.0738
Sample # X1 X2 X3 X4 X5 Average
1 0.682 0.689 0.776 0.798 0.714 0.732
2 0.787 0.860 0.601 0.749 0.779 0.755
3 0.780 0.667 0.838 0.785 0.723 0.759
4 0.591 0.727 0.812 0.775 0.730 0.727
5 0.693 0.708 0.790 0.758 0.671 0.724
6 0.749 0.714 0.738 0.719 0.606 0.705
7 0.791 0.713 0.689 0.877 0.603 0.735
8 0.744 0.779 0.660 0.737 0.822 0.748
9 0.769 0.773 0.641 0.644 0.725 0.71
10 0.718 0.671 0.708 0.850 0.712 0.732
11 0.787 0.821 0.764 0.658 0.708 0.748
12 0.622 0.802 0.818 0.872 0.727 0.768
13 0.657 0.822 0.893 0.544 0.750 0.733
14 0.806 0.749 0.859 0.801 0.701 0.783
15 0.660 0.681 0.644 0.747 0.728 0.692
16 0.816 0.817 0.768 0.716 0.649 0.753
17 0.826 0.777 0.721 0.770 0.809 0.781
18 0.828 0.829 0.865 0.778 0.872 0.834
19 0.805 0.719 0.612 0.938 0.807 0.776
20 0.802 0.756 0.786 0.815 0.801 0.792
21 0.876 0.803 0.701 0.789 0.672 0.768
22 0.855 0.783 0.722 0.856 0.751 0.793
23 0.762 0.705 0.804 0.805 0.809 0.777
24 0.703 0.837 0.759 0.975 0.732 0.801
25 0.737 0.723 0.776 0.748 0.732 0.743
26 0.748 0.686 0.856 0.811 0.838 0.788
27 0.826 0.803 0.764 0.823 0.886 0.82
28 0.728 0.721 0.820 0.772 0.639 0.736
29 0.803 0.892 0.740 0.816 0.770 0.804
30 0.774 0.837 0.872 0.849 0.818 0.83
The run chart hints that the mean might have shifted up at about sample #17. In fact, the average for the first 16 samples turns out to be 0.738 while for the remaining samples it is 0.789. Therefore, although the overall average is close to the target specification (0.75), at no
time was the actual process operating centered near the target. In the next section you will see
why we should conclude that this process is not in statistical control and therefore we should
not pay much attention to the process capability index Cpk calculated as 1.075.
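The two sub-period means quoted above can be checked directly; the sample averages below are transcribed from the last column of Table 3.1.

```python
# Sample averages from the last column of Table 3.1
averages = [0.732, 0.755, 0.759, 0.727, 0.724, 0.705, 0.735, 0.748,
            0.710, 0.732, 0.748, 0.768, 0.733, 0.783, 0.692, 0.753,
            0.781, 0.834, 0.776, 0.792, 0.768, 0.793, 0.777, 0.801,
            0.743, 0.788, 0.820, 0.736, 0.804, 0.830]

before = sum(averages[:16]) / 16   # samples 1-16
after = sum(averages[16:]) / 14    # samples 17-30

print(f"mean of first 16 samples: {before:.3f}")   # ~0.738
print(f"mean of last 14 samples:  {after:.3f}")    # ~0.789
```

The half-hundredth jump between the two sub-periods is what the run chart makes visible and what the histogram, having no time axis, hides.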
SPC exists because there is, and will always be, variation in the characteristics of materials, in
parts, in services, and in people. With the help of the simple tools described in this section
SPC can provide us the means to understand and assess such variability, and then manage it.
As we mentioned in the previous section, the control chart is a powerful process quality
monitoring device and it constitutes the core of statistical process control (SPC). In the SPC
methodology, knowing when to leave a process alone is an important step in maintaining
control over a process. Control charts enable us to do that.
Equally important is knowing when to take action to prevent the production of nonconforming
product. Indeed, failure to distinguish between variation produced by common causes and
special causes can actually increase the variation in the output of a process. Again, control
charts empower us here.
When a plotted point falls outside any of the control limits, special causes are indicated to be present.
The process is now deemed to be out of control and is investigated to remove the source of
disturbance. Otherwise, when variation stays within control limits, it is indicated to be due to
common causes alone. Now the process is said to be in "statistical control" and it should be
left alone. Statistical control is defined as the state in which both the process averages and
variances are constant over time and hence the process output is stable and predictable (Figure
2.5). Control charts help us in bringing a process within such control.
Most processes that deliver a "product" or a "service" may be monitored by measuring their output over time and then plotting these measurements appropriately. However, processes differ in the nature of that output. Variables data are those output characteristics that are
measurable along a continuous scale. Examples of variables data are length, weight, or
viscosity. By contrast, some output may only be judged to be good or bad, or "acceptable" or
"unacceptable", such as print quality of a photocopier or defective knots produced per meter
by a weaving machine. In such cases we categorize the output as an attribute that is either
acceptable or unacceptable; we cannot put it on a continuous scale as done with weight or
viscosity. However, SPC methodology provides us with a variety of different types of control
charts to work with such diversity.
For variables data control charts most commonly used are the "xbar" chart and the "R-chart"
(range chart). The xbar chart is used to monitor the centering of the process to help control
its accuracy (Figure 2.4). The R-chart monitors the dispersion or precision of the process.
Range R rather than standard deviation is used as a measure of variation simply to enable workers on the factory floor to perform control chart calculations by hand, as is done for example in the turbine blade machining shop in BHEL, Hardwar. For large samples and when data can
be processed by a computer, the standard deviation is a better measure of variability.
The first step in developing xbar and R-charts is to gather data. Usually, about 25 to 30 samples are collected. Sample sizes between 3 and 10 are generally used, with 5 being the most common. The number of samples is denoted by k, and n denotes the sample size. For each sample i, the mean (denoted by xbari) and the range (Ri) are computed. These values are
then plotted on their respective control charts. Next, the overall mean (x_doublebar) and
average range (Rbar) calculations are made. These values specify the center lines for the
xbar and R-charts, respectively. The overall mean is the average of the sample means xbari.
xbari = (x1 + x2 + ... + xn) / n        x_doublebar = (xbar1 + xbar2 + ... + xbark) / k

The average range is similarly computed, using the formulas

Ri = max(xi) - min(xi)        Rbar = (R1 + R2 + ... + Rk) / k
The average range and overall mean are used to compute control limits for the R- and xbar charts. Control limits are easily calculated using the following formulas:

UCLR = D4 Rbar
LCLR = D3 Rbar
UCLxbar = x_doublebar + A2 Rbar
LCLxbar = x_doublebar - A2 Rbar

where the constants D3, D4 and A2 depend on sample size n and may be found in Table 2.1.
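As a rough sketch, the whole calculation chain can be written out as follows. The three samples are hypothetical, and the constants shown are the standard Table 2.1 values for n = 5 (A2 = 0.577, D3 = 0, D4 = 2.114).

```python
# Three hypothetical samples of size n = 5, for illustration only
samples = [
    [5.02, 5.01, 4.98, 5.00, 4.99],
    [4.99, 5.03, 5.00, 4.98, 5.01],
    [5.01, 5.00, 5.02, 4.97, 5.00],
]
A2, D3, D4 = 0.577, 0.0, 2.114   # Table 2.1 constants for n = 5

xbars = [sum(s) / len(s) for s in samples]    # sample means
ranges = [max(s) - min(s) for s in samples]   # sample ranges
x_doublebar = sum(xbars) / len(xbars)         # center line, xbar chart
rbar = sum(ranges) / len(ranges)              # center line, R chart

ucl_r = D4 * rbar                             # R chart limits
lcl_r = D3 * rbar
ucl_x = x_doublebar + A2 * rbar               # xbar chart limits
lcl_x = x_doublebar - A2 * rbar
```

In real use there would be 25 to 30 samples rather than three; the formulas are identical.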
Control limits represent the range between which all points are expected to fall if the process
is in statistical control, i.e., operating only under the influence of random or common causes.
If any points fall outside the control limits or if any unusual patterns are observed, then some
special (called assignable) cause has probably affected the process. In such case the process
should be studied using a "reaction plan" (Figure 4.1), process logs and other tools and
devices to determine and eliminate that cause.
Note, however, that if assignable causes are affecting the process, then the process data are not
representative of the true state of statistical control and hence the calculations of the center
line and control limits would be biased. To be effective, SPC requires the center line and the
control limit calculations to be unbiased. Therefore, before control charts are set up for routine use by the factory, any out-of-control data points should be eliminated from the data table and new values for x_doublebar, Rbar, and the control limits re-computed, as illustrated below.
In order to determine whether a process is in statistical control, the R-chart is always analyzed
first. Since the control limits in the xbar chart depend on the average range, special causes in
the R-chart may produce unusual patterns in the xbar chart, even when the centering of the
process is in control. (An example of this is given later in this unit). Once statistical control is
established for the R-chart, attention may turn to the xbar chart.
Figure 4.2 shows a typical data sheet used for recording. This form provides space (under
"Notes") for descriptive information about the process and for recording of sample
observations and computed statistics. Subsequently the control charts are drawn. The
construction and analysis of control charts may be best seen by example as follows.
For example, the average of the first sample (41, 70 and 22) is xbar1 = 133/3 = 44.33, and the range of the first sample is R1 = 70 - 22 = 48. (Note: In practice, xbar and R calculations may be rounded to the nearest integer for simplicity).
The calculations of sample averages, range, overall mean, and the control limits are shown on
the worksheet displayed in Figure 4.3. The average range is the sum of the sample ranges
(676) divided by the number of samples (25); the overall mean is the sum of the sample
averages (1,221) divided by the number of samples (25). Since sample size is 3 here, the
factors used in computing the control limits are A2 = 1.023 and D4 = 2.574 (Table 2.1). For
sample size of 6 or less, factor D3 is 0; therefore, the lower control limit (LCL) on the range
chart is zero. The center line and control limits are drawn on the charts shown in Figure 4.3.
Note that as a convention, out-of-control points are noted directly on the charts.
On examining the R chart first we infer that the process is in control because all points lie
within the control limits and no unusual patterns exist. On the xbar chart, however, the xbar
value for sample 17 lies above the upper control limit. On investigation we find that some
suspicious cleaning material had been used in the process at this point (an assignable cause of
variation). Therefore, data from sample 17 should be eliminated from the control chart
calculations and the control limits re-done. Figure 4.4 shows the revised calculations after
sample 17 was removed. The revised center lines and control limits are shown. The resulting
xbar and R charts both appear to be in control.
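The revised limits can be reproduced from the worksheet data; in this sketch sample 17 is dropped and the center lines and limits re-computed (n = 3, so A2 = 1.023 and D4 = 2.574). Values agree with the worksheet figures within rounding.

```python
# Sample averages and ranges transcribed from the worksheet
xbars = [44.33, 66.33, 55.33, 40.33, 40.67, 45.33, 53.33, 36.67, 45.33,
         57.33, 35.00, 55.00, 52.67, 35.00, 52.00, 34.33, 94.33, 54.67,
         32.00, 38.67, 36.00, 59.33, 57.67, 47.33, 55.33]
ranges = [48, 25, 50, 35, 18, 48, 21, 13, 28, 51, 18, 39, 7, 12, 28,
          30, 13, 32, 51, 5, 24, 24, 12, 22, 22]
A2, D4 = 1.023, 2.574   # Table 2.1 constants for n = 3

# Omit sample 17 (index 16), the out-of-control point
keep = [i for i in range(25) if i != 16]
x_doublebar = sum(xbars[i] for i in keep) / len(keep)   # ~47
rbar = sum(ranges[i] for i in keep) / len(keep)         # ~27.6

ucl_x = x_doublebar + A2 * rbar   # ~75.3; the worksheet shows 75.2 after rounding
lcl_x = x_doublebar - A2 * rbar   # ~18.8
ucl_r = D4 * rbar                 # ~71.1; the worksheet shows 71.0
```

Dropping the single contaminated sample narrows the picture of the process only slightly here, but with a larger disturbance the bias in the trial limits can be substantial.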
Samples were taken at 8:00, 11:00, 2:00, 5:00, 8:00 and 11:00 each day from 12/1 to 15/1, with a final sample at 8:00 on 16/1. (Rows for x4 and x5 are blank since n = 3.)

Sample #   x1   x2   x3    Sum    Xbar     R
    1      41   70   22    133    44.33    48
    2      78   53   68    199    66.33    25
    3      84   34   48    166    55.33    50
    4      60   36   25    121    40.33    35
    5      46   47   29    122    40.67    18
    6      64   16   56    136    45.33    48
    7      43   53   64    160    53.33    21
    8      37   43   30    110    36.67    13
    9      50   29   57    136    45.33    28
   10      57   83   32    172    57.33    51
   11      24   42   39    105    35.00    18
   12      78   48   39    165    55.00    39
   13      51   57   50    158    52.67     7
   14      41   29   35    105    35.00    12
   15      56   64   36    156    52.00    28
   16      46   41   16    103    34.33    30
   17      99   86   98    283    94.33    13
   18      71   54   39    164    54.67    32
   19      41    2   53     96    32.00    51
   20      41   39   36    116    38.67     5
   21      22   40   46    108    36.00    24
   22      62   70   46    178    59.33    24
   23      64   52   57    173    57.67    12
   24      44   38   60    142    47.33    22
   25      41   63   62    166    55.33    22

Notes: gas flow adjusted; AC malfunction; pump stalled; new cleaning tried once (the assignable cause at sample 17).
[Figure 4.3: initial xbar chart (UCL near 75, center line x_doublebar near 50, LCL near 25) for samples 1 to 25; the point for sample 17 plots above the UCL.]
[Initial R chart: for all 25 samples, UCLR = 69.5 and LCLR = 0; every range plots within these limits.]
To reveal the inherent variability of the process, the control limits must be re-calculated, after
removing any "out of control" points.
[Revised xbar chart (sample 17 omitted): center line x_doublebar = 47, UCLxbar = 75.2, LCLxbar = 18.8; the remaining sample averages plot within these limits.]
[Revised R chart (sample 17 omitted): UCLR = 71.0, LCLR = 0; all remaining ranges plot within these limits.]
When a process is in statistical control, the points on a control chart fluctuate randomly
between the control limits with no recognizable, non-random pattern. The following
checklist provides a set of general rules for examining a process to determine if it is in control:
1. No points are outside control limits.
2. The number of points above and below the center line is about the same.
3. The points seem to fall randomly above and below the center line.
4. Most points, but not all, are near the center line, and only a few are close to the control
limits.
The underlying assumption behind these rules is that the distribution of sample means is
normal. This assumption follows from the central limit theorem of statistics, which states that
the distribution of sample means approaches a normal distribution as the sample size increases
regardless of the original distribution. Of course, for small sample sizes, the distribution of
the original data must be reasonably normal for this assumption to hold. The upper and lower
control limits are computed to be three standard deviations from the overall mean. Thus, the
probability that any sample mean falls outside the control limits is very small. This
probability is the origin of rule 1.
Since the normal distribution is symmetric, about the same number of points fall above as
below the center line. Also, since the mean of the normal distribution is the median, about
half the points fall on either side of the center line. Finally, about 68 percent of a normal
distribution falls within one standard deviation of the mean; thus, most but not all points
should be close to the center line. These characteristics will hold provided that the mean and
variance of the original data have not changed during the time the data were collected; that is,
the process is stable.
Several types of unusual patterns arise in control charts, which are reviewed here along with
an indication of the typical causes of such patterns.
A common reason for a point falling outside a control limit is an error in the calculation of xbar or R for the sample. You should always check your calculations whenever this occurs. Other possible causes are a sudden power surge, a broken tool, measurement error, or an incomplete or omitted operation in the process.
[Figure: xbar charts illustrating a point outside a control limit and a sudden shift of the process average.]
If the shift is up in the R-chart, the process has become less uniform. Typical causes are
carelessness of operators, poor or inadequate maintenance, or possibly a fixture in need of
repair. If the shift is down in the R-chart, the uniformity of the process has improved. This
might be the result of improved workmanship or better machines or materials. As mentioned,
every effort should be made to determine the reason for the improvement and to maintain it.
Three rules of thumb are used for early detection of process shifts. A simple rule is that if
eight consecutive points fall on one side of the center line, one could conclude that the mean
has shifted. Second, divide the region between the center line and each control limit into three
equal parts. Then if (1) two of three consecutive points fall in the outer one-third region
between the center line and one of the control limits or (2) four of five consecutive points fall
within the outer two-thirds region, one would also conclude that the process has gone out of
control.
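The first rule of thumb is easy to automate; the sketch below checks for eight consecutive points on one side of the center line. The two series used to exercise it are hypothetical.

```python
def eight_in_a_row(points, center):
    """Return True if any 8 consecutive points fall strictly on one
    side of the center line, suggesting the mean has shifted."""
    run, prev = 0, None
    for p in points:
        side = "above" if p > center else "below" if p < center else None
        # Extend the run only while successive points stay on the same side
        run = run + 1 if side is not None and side == prev else (1 if side else 0)
        prev = side
        if run >= 8:
            return True
    return False

# A stable-looking series vs. one with a sustained upward shift (hypothetical)
stable = [50, 52, 48, 51, 49, 53, 47, 50, 52, 48]
shifted = [50, 48, 53, 54, 52, 55, 53, 56, 54, 52, 55]
print(eight_in_a_row(stable, 50), eight_in_a_row(shifted, 50))
```

The zone rules (two of three in the outer third, four of five in the outer two-thirds) can be coded the same way, with the region boundaries at one-third and two-thirds of the distance from the center line to each control limit.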
Cycles
Cycles are short, repeated patterns in the chart, alternating high peaks and low valleys (see
Figure 4.7). These patterns are the result of causes that come and go on a regular basis. In the
xbar chart, cycles may be the result of operator rotation or fatigue at the end of a shift,
different gauges used by different inspectors, seasonal effects such as temperature or
humidity, or differences between day and night shifts. In the R-chart, cycles can occur from
maintenance schedules, rotation of fixtures or gauges, differences between shifts, or operator
fatigue.
80
60
Xbar
40
20
Sample #
Trends
A trend is the result of some cause that gradually affects the quality characteristics of the
product and causes the points on a control chart to gradually move up or down from the center
line. As a new group of operators gains experience on the job, for example, or as maintenance
of equipment improves over time, a trend may occur. In the xbar chart, trends may be the
result of improving operator skills, dirt or chip buildup in fixtures, tool wear, changes in
temperature or humidity, or aging of equipment. In the R-chart, an increasing trend may be
due to a gradual decline in material quality, operator fatigue, gradual loosening of a fixture or
a tool, or dulling of a tool. A decreasing trend often is the result of improved operator skill or
work methods, better purchased materials, or improved or more frequent maintenance.
80
60
Xbar
40
20
Sample #
Mixtures
A mixture pattern can result when different lots of material are used in one process, or when parts are produced by different machines but fed into a common inspection group.
Instability
Instability is characterized by unnatural and erratic fluctuations on both sides of the chart over
a period of time (see Figure 4.9). Points will often lie outside both the upper and lower
control limits without a consistent pattern. Assignable causes may be more difficult to
identify in this case than when specific patterns are present. A frequent cause of instability is over-adjustment of a machine, or the same reasons that cause hugging of the control limits.
[Figure 4.9: xbar chart illustrating instability, with erratic points and a run of 4 of 5 points below one of the zone limits marked.]
As suggested earlier, the R-chart should be analyzed before the xbar chart, because some out-
of-control conditions in the R-chart may cause out-of-control conditions in the xbar chart.
Also, as the variability in the process decreases, all the sample observations will be closer to
the true population mean, and therefore their average, xbar, will not vary much from sample to
sample. If this reduction in the variation can be identified and controlled, then new control
limits should be computed for both charts.
After a process is determined to be in control, the charts should be used on a daily basis to
monitor production, identify any special causes that might arise, and make corrections as
necessary. More important, the chart tells when to leave the process alone! Unnecessary
adjustments to a process result in nonproductive labor, reduced production, and increased
variability of output.
It is more productive if the operators themselves take the samples and chart the data. In this
way, they can react quickly to changes in the process and immediately make adjustments. To
do this effectively, training of the operators is essential. Many companies conduct in-house
training programs to teach operators and supervisors the elementary methods of statistical
quality control. Not only does this training provide the mathematical and technical skills that are required, but it also gives the shop-floor personnel increased quality-consciousness.
Another important point must be noted. Control charts are designed to be used by production
operators rather than by inspectors or QC personnel. Under the philosophy of statistical
process control, the burden of quality rests with the operators themselves. The use of control
charts allows operators to react quickly to special causes of variation. The range is used in
place of the standard deviation for the very reason that it allows shop-floor personnel to easily
make the necessary computations to plot points on a control chart. The experience even in
Indian factories such as the turbine blade machining shop in BHEL Hardwar strongly supports
this assertion. The right approach taken by management in ingraining the correct outlook
among the workers appears to hold the key here.
After a process has been brought to a state of statistical control by eliminating special causes of variation, the data may be used to obtain a rough estimate of process capability. This approach uses the average range Rbar rather than the estimated standard deviation of the original data. Nevertheless, it is a quick and useful method, provided that the distribution of the original data is reasonably normal.
Under the normality assumption, the standard deviation (σx) of the original data {x} can be estimated as follows:

σx = Rbar / d2

where d2 is a constant that depends on the sample size and is also given in Table 2.1. Therefore, process capability may be determined by comparing the spec range to 6σx. The natural variation of individual measurements is given by x_doublebar ± 3σx. The following
example illustrates these calculations.
Cp = 100/98 = 1.02

Thus, Cp for this process looks OK. However, the lower and upper capability indices are

Cpl = (47 - 0)/(3 x 16.33) = 0.96
Cpu = (100 - 47)/(3 x 16.33) = 1.08

This gives a Cpk value equal to 0.96, which is less than 1.0. This analysis suggests that both
the centering and the variation of the wafer manufacturing process must be improved.
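These index computations can be sketched in a few lines of Python. The spec limits (0 to 100), the mean (47) and the six-sigma spread (98) are the figures quoted in this example; the function name is illustrative.

```python
def capability_indices(usl, lsl, mean, sigma):
    """Compute Cp, Cpu, Cpl and Cpk from the spec limits and the
    estimated process mean and standard deviation."""
    cp = (usl - lsl) / (6 * sigma)
    cpu = (usl - mean) / (3 * sigma)
    cpl = (mean - lsl) / (3 * sigma)
    return cp, cpu, cpl, min(cpu, cpl)

# Silicon wafer example: specs 0-100, six-sigma spread 98, mean 47
sigma = 98 / 6                      # estimated from Rbar/d2
cp, cpu, cpl, cpk = capability_indices(100, 0, 47, sigma)
print(round(cp, 2), round(cpk, 2))  # -> 1.02 0.96
```

Note that Cp ignores the centering of the process; Cpk is the smaller of Cpu and Cpl and therefore flags the off-center mean.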
The actual fraction of the output falling within the spec range or tolerance may be calculated
in a step-by-step manner as follows.
Note: The initial xbar control chart (Figure 4.3) shows one xbar point (Sample #17) out of
control (beyond UCLxbar). This point should be removed and the control limits should be re-
calculated. This is done as follows.
[Figure: distribution of xbar between the lower and upper limits LLx and ULx; the tail area below the lower limit is 0.002.]
Step 4: Compute the upper and lower (x_doublebar +/- 3 sigma_x) limits of process variation under
statistical control:
If the individual observations are normally distributed, then the probability of being out of
specification can be computed. In the example above we assumed that the data are normal.
The revised mean (estimated by x_doublebar) is 47 and the standard deviation (sigma_x) is
98/6 = 16.3.
Figure 4.10 shows the z calculations for specification limits of 0 and 100. From the standard
normal distribution table, the area (probability) between 0 and the mean (47) is 0.4980. Thus
0.5 - 0.4980 = 0.002, or 0.2 percent of the output (wafer production {x}), would be expected to
fall below the lower specification.
The area to the right of 100 is approximately zero. Therefore, all the output can be expected
to meet the upper specification.
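Under the same normality assumption, these out-of-spec probabilities can be checked with Python's standard library; the mean 47 and sigma = 98/6 are the example's figures.

```python
from statistics import NormalDist

# Wafer thickness model: mean 47, sigma = 98/6 (estimated via Rbar/d2)
thickness = NormalDist(mu=47, sigma=98 / 6)

p_below_lsl = thickness.cdf(0)          # P(x < 0), below lower spec
p_above_usl = 1 - thickness.cdf(100)    # P(x > 100), above upper spec
print(round(p_below_lsl, 3))   # -> 0.002 (about 0.2 percent)
print(round(p_above_usl, 4))   # approximately zero
```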
A word of caution deserves emphasis here. Control limits are often confused with
specification or "spec" limits. Spec limits, normally expressed in engineering units, indicate
the range of variation in a quality characteristic that is acceptable to the customer.
Specification dimensions are usually stated in relation to individual parts for "hard" goods,
such as automotive hardware. However, in other applications, such as in chemical processes,
specifications are stated in terms of average characteristics. Thus, control charts might
mislead one into thinking that if all sample averages fall within the control limits, all output
will conform to specs. This assumption is not true. Control limits relate to sample averages
while specification limits relate to individual measurements. A sample average may fall
within the upper and lower control limits even though some of the individual observations are
out of specification. Since sigma_xbar = sigma_x / sqrt(n), control limits are narrower than the
natural variation in the process (Figure 2.5) and they do not represent process capability.
When a process is in control, the xbar and R charts may be used to make decisions about the
state of the process during its operation. We use three zones on the charts to help in the
routine management of the process. The zones are marked on the charts as follows.
Zone 1 (between the upper and lower WARNING LINES, or 2-sigma lines). If the plotted points
fall in this zone, the process has remained stable and no action or adjustment is necessary.
Indeed, any adjustment here may increase the amount of variability.
Zone 2 (between Zone 1 and Zone 3). Any point found in Zone 2 suggests that there may have
been an assignable change, and another sample must be taken to check this.
Zone 3 (beyond the UPPER or LOWER CONTROL LIMIT). Any point falling in this zone indicates
that the process should be investigated and that, if action is taken, the latest estimates of
x_doublebar and Rbar should be used to revise the control limits.
ZONE 3: investigate and adjust
----- Upper Control Limit -----
ZONE 2: investigate
----- Upper Warning Limit -----
ZONE 1: do nothing
----- x_doublebar (center line of the xbar chart) -----
ZONE 1: do nothing
----- Lower Warning Limit -----
ZONE 2: investigate
----- Lower Control Limit -----
ZONE 3: investigate and adjust
(xbar plotted against Sample #)
Modified control limits often are used when process capability is very good. For example,
suppose that the process capability of a factory is only 60 percent of tolerance (Cp = 1.67) and
that the process mean can be controlled by a simple machine adjustment. Management may
quickly discover the impracticality of investigating every isolated point that falls outside the
usual control limits because the output is probably still well within specifications. In such
cases, the usual control limits may be replaced with the following modified control limits:

URLx = USL - Am Rbar
LRLx = LSL + Am Rbar

where URLx is the upper reject level, LRLx is the lower reject level, and USL and LSL are the
upper and lower specifications respectively. The Am values are determined by statistical
principles and are shown in Table 4.1. The modified control limits allow for more variation than
the ordinary control limits and still provide high confidence that the product produced will
remain within specification. It is important to note that modified limits apply only if process
capability is at least 60 to 75 percent of tolerance. However, if the mean must be controlled
closely, a conventional xbar-chart should be used even if the process capability is good. Also
if the process standard deviation (x) is likely to shift, don't modify control limits.
Sample Size n    A2       D4       d2       Am
2 1.880 3.267 1.128 0.779
3 1.023 2.574 1.693 0.749
4 0.729 2.282 2.059 0.728
5 0.577 2.114 2.326 0.713
6 0.483 2.004 2.534 0.701
Example 5: Computing Modified Control Limits for the Silicon Wafer Case:
Shown below are the calculations for the silicon wafer thickness example considered in this
section. Since the sample size is 3, Am = 0.749. Therefore, the modified limits are
Observe that if the process is centered on the nominal, the modified control limits are "looser"
(wider) than the ordinary control limits. For this example, before the modified control limits
are implemented, the centering of the process would first have to be corrected from its current
(estimated) value of 40 to the specification midpoint of 50.0.
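A minimal sketch of the modified-limit computation, assuming the limits take the form USL - Am*Rbar and LSL + Am*Rbar as described above; the Rbar value used here is hypothetical, chosen only for illustration.

```python
def modified_limits(usl, lsl, rbar, a_m):
    """Modified control limits: the reject levels are pulled in
    from the specification limits by Am * Rbar (Am from Table 4.1)."""
    url = usl - a_m * rbar   # upper reject level, URLx
    lrl = lsl + a_m * rbar   # lower reject level, LRLx
    return url, lrl

# Hypothetical figures: specs 0-100, Rbar = 27.7, sample size 3 (Am = 0.749)
url, lrl = modified_limits(100, 0, 27.7, 0.749)
print(round(url, 1), round(lrl, 1))
```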
Several alternatives to the popular xbar and R-chart for process control of variables
measurements are available. This section discusses some of them.
An alternative to using the R-chart along with the xbar chart is to compute and plot the
standard deviation s of each sample. Although the range has traditionally been used, since it
involves less computational effort and is easier for shop-floor personnel to understand, using s
rather than R has its advantages. The sample standard deviation is a more sensitive and better
indicator of process variability, especially for larger sample sizes. Thus, when tight control of
variability is required, s should be used. With the use of modern calculators and personal
computers, the computational burden of computing s is reduced or eliminated, and s has thus
become a viable alternative to R.
s = sqrt( sum from i=1 to n of (x_i - xbar)^2 / (n - 1) )
To construct an s-chart, compute the standard deviation for each sample. Next, compute the
average standard deviation sbar by averaging the sample standard deviations over all samples.
(Notice that this computation is analogous to computing Rbar). Control limits for the s-chart are
given by
UCLs = B4 sbar
LCLs = B3 sbar
For the associated xbar chart, the control limits derived from the overall standard deviation
are

UCLxbar = x_doublebar + A3 sbar
LCLxbar = x_doublebar - A3 sbar

where A3 is a constant that is a function of the sample size (n) and may be found in Table 2.1.
Observe that the formulas for the control limits are equivalent to those for xbar and R-charts
except that the constants differ.
The average (i.e., overall) mean is computed to be x_doublebar = 0.108, and the average
standard deviation is sbar = 1.791. Since the sample size is 10, B3 = 0.284, B4 = 1.716, and
A3 = 0.975. Hence, the control limits for the s-chart are
The xbar and s-charts are shown in Figures 5.1 and 5.2 respectively. The charts indicate that
this process is not in control, and an investigation as to the reasons for the variation,
particularly in the xbar chart, is warranted.
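The s-chart computations of this example can be reproduced as follows; the constants B3, B4 and A3 are those quoted above for n = 10.

```python
def s_chart_limits(sbar, b3, b4):
    """Control limits for the s-chart: UCLs = B4*sbar, LCLs = B3*sbar."""
    return b4 * sbar, b3 * sbar

def xbar_limits_from_s(grand_mean, sbar, a3):
    """Companion xbar-chart limits: x_doublebar +/- A3*sbar."""
    return grand_mean + a3 * sbar, grand_mean - a3 * sbar

# Figures from the example: n = 10, sbar = 1.791, x_doublebar = 0.108
ucl_s, lcl_s = s_chart_limits(1.791, b3=0.284, b4=1.716)
ucl_x, lcl_x = xbar_limits_from_s(0.108, 1.791, a3=0.975)
print(round(ucl_s, 3), round(lcl_s, 3))   # -> 3.073 0.509
print(round(ucl_x, 3), round(lcl_x, 3))   # -> 1.854 -1.638
```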
Sample 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
#
1 1 9 0 1 -3 -6 -3 0 2 0 -3 -12 -6 -3 -1 -1 -2 0 0 1 1 -1 0 1 2
2 8 4 8 1 -1 2 -1 -2 0 0 -2 2 -3 -5 -1 -2 2 4 3 2 2 0 0 0 2
3 6 0 0 0 0 0 0 -3 -1 -2 2 0 0 5 -1 -2 -1 0 -3 1 2 2 -1 0 1
4 9 3 0 2 -4 0 -2 -1 -1 -1 -1 -4 0 0 -2 0 0 0 3 1 1 -1 0 1 2
5 7 0 3 1 0 2 -1 -2 -3 -1 1 -1 -8 -5 -1 -4 -1 0 3 -3 2 2 1 1 -1
6 9 0 1 1 1 -1 -1 1 0 0 -2 4 -4 1 0 0 -1 3 1 2 2 2 0 2 2
7 2 3 2 2 0 2 -3 -3 1 -1 -2 2 -6 5 -2 -2 2 0 0 1 1 -1 0 0 2
8 7 4 0 0 -2 0 0 0 -3 -2 -1 -3 -1 -4 -1 -4 -1 0 1 -2 1 0 0 0 1
9 9 8 2 0 0 -3 -2 -3 -1 -2 1 -4 -1 -1 0 -1 1 1 2 3 1 0 -1 -1 -1
10 7 3 3 1 -2 0 -2 -2 0 0 1 0 -2 -5 -1 0 -2 0 -2 0 2 -1 0 0 2
Xbar:    6.5, 3.4, 1.9, 0.9, -1.1, -0.4, -1.5, -1.5, -0.6, -0.9, -0.6, -1.6, -3.1, -1.2, -1.0,
         -1.6, -0.3, 0.8, 0.8, 0.6, 1.5, 0.2, -0.1, 0.4, 1.2
Std_dev: 2.838, 3.134, 2.470, 0.738, 1.595, 2.503, 1.080, 1.434, 1.578, 0.876, 1.713, 4.527,
         2.807, 3.910, 0.667, 1.506, 1.494, 1.476, 2.098, 1.838, 0.527, 1.317, 0.568, 0.843, 1.229
For every sample: X_doublebar = 0.11, UCLxbar = 1.854, LCLxbar = -1.638, UCLs = 3.073, LCLs = 0.509.

[Figure 5.1: xbar chart of the 25 sample means against Sample #, with center line 0.108 and
control limits 1.854 and -1.638.]

[Figure 5.2: s-chart of the 25 sample standard deviations against Sample #, with control limits
3.073 and 0.509.]
With the development of automated inspection for many processes, manufacturers can now
easily inspect and measure quality characteristics on every item produced. Hence, the sample
size for process control is n = 1, and a control chart for individual measurements, also called
an x-chart, can be used. Other examples in which x-charts are useful include accounting data
such as shipments, orders, absences, and accidents; production records of temperature,
humidity, voltage, or pressure; and the results of physical or chemical analyses.
With individual measurements, the process standard deviation can be estimated and three-
sigma control limits used. As shown earlier, Rbar/d2 provides an estimate of the process
standard deviation. Thus, an x-chart for individual measurements would have "three-sigma"
control limits defined by

UCLx = xbar + 3 (Rbar/d2)
LCLx = xbar - 3 (Rbar/d2)
Samples of size 1, however, do not furnish enough information to measure process variability.
However, process variability can be determined by using a moving average of ranges, or a
moving range, of n successive observations. For example, a moving range for n = 2 is computed
by finding the absolute difference between two successive observations. The number of
observations used in the moving range determines the constant d2; hence, for n = 2, from
Table 2.1, d2 = 1.128. In a similar fashion, larger values of n can be used to compute
moving ranges. The moving range chart has control limits defined by
UCLR = D4 Rbar
LCLR = D3 Rbar
The moving range is computed by taking the absolute value of the difference between successive
observations; the first moving range is the difference between the first two observations:
The moving range chart, shown in Figure 5.5, indicates that the process is in control. Next,
the x-chart is constructed for the individual measurements:
Some caution is necessary when interpreting patterns on the moving range chart. Points
beyond control limits are signs of assignable causes. Successive ranges, however, are
correlated, and they may cause patterns or trends in the chart that are not indicative of out-of-
control situations. On the x-chart, individual observations are assumed to be uncorrelated;
hence, patterns and trends should be investigated.
Control charts for individuals have the advantage that specifications can be drawn on the chart
and compared directly with the control limits.
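A compact sketch of the individuals and moving-range computations, using the first eight observations of Exercise 4 below purely as illustrative data:

```python
def individuals_chart(xs, d2=1.128, d4=3.267):
    """x-chart and moving-range chart limits for individual measurements,
    using n = 2 moving ranges (so d2 = 1.128 and D4 = 3.267)."""
    mrs = [abs(b - a) for a, b in zip(xs, xs[1:])]   # moving ranges
    mr_bar = sum(mrs) / len(mrs)
    xbar = sum(xs) / len(xs)
    sigma_hat = mr_bar / d2                 # Rbar/d2 estimate of sigma
    return (xbar + 3 * sigma_hat,           # UCLx
            xbar - 3 * sigma_hat,           # LCLx
            d4 * mr_bar,                    # UCLR (D3 = 0 for n = 2)
            0.0)                            # LCLR

data = [9.0, 9.5, 8.4, 11.5, 10.3, 12.1, 11.4, 10.0]
ucl_x, lcl_x, ucl_mr, lcl_mr = individuals_chart(data)
print(round(ucl_x, 2), round(lcl_x, 2), round(ucl_mr, 2))
```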
Attributes quality data assume only two values: good or bad, pass or fail. Attributes usually
cannot be measured, but they can be observed and counted and are useful in quality
management in many practical situations. For instance, in printing packages for consumer
products, color quality can be rated as acceptable or not acceptable, or a sheet of cardboard
either is damaged or is not. Usually, attributes data are easy to collect, often by visual
inspection. Many accounting records, such as percent scrapped, are also usually readily
available. However, one drawback in using attributes data is that large samples are necessary
to obtain valid statistical results.
Several different types of control charts are used for attribute data. One of the most common
is the p-chart (introduced in this section). Other types of attributes charts are presented in the
next chapter. One distinction that we must make is between the terms defects and defectives.
A defect is a single nonconforming quality characteristic of an item. An item may have
several defects. The term defective refers to items having one or more defects. Since certain
attributes charts are used for defectives while others are used for defects, one must understand
the difference. In quality control literature, the term nonconforming is often used instead of
defective.
A p-chart monitors the proportion of nonconforming items produced in a lot. Often it is also
called a fraction nonconforming or fraction defective chart. As with variables data, a p-chart
is constructed by first gathering 25 to 30 samples of the attribute being measured. The size of
each sample should be large enough to have several nonconforming items. If the probability
of finding a nonconforming item is small, a large sample size is usually necessary. Samples
are chosen over time periods so that any special causes that are identified can be investigated.
Let us suppose that k samples, each of size n, are selected. If y represents the number of
nonconforming items or defectives in a particular sample, the proportion nonconforming is
(y/n). Let pi be the fraction nonconforming in the ith sample; the average fraction
nonconforming pbar for the group of k samples then is

pbar = (p1 + p2 + ... + pk) / k
This statistic pbar reflects the average performance of the process. One would expect a high
percentage of samples to have a fraction nonconforming within three standard deviations of pbar.
An estimate of the standard deviation is given by

sp = sqrt( pbar (1 - pbar) / n )
Therefore, upper and lower control limits may be given by

UCLp = pbar + 3 sp
LCLp = pbar - 3 sp
Analysis of a p-chart is similar to that of the xbar or R-chart. Points outside the control limits
signify an out-of-statistical-control situation, i.e., the process has been disturbed by an
assignable factor. Patterns and trends should also be sought to identify the presence of
assignable factors. However, a point on a p-chart below the lower control limit or the
development of a trend below the center line indicates that the process might have improved,
since the ideal is zero defectives. However, caution is advised before such conclusions are
drawn, because errors may have been made in computation. An example of a p-chart is
presented next.
sp = sqrt( 0.022 (1 - 0.022) / 100 ) = 0.01467
[Figure 6.2: p-chart of the fraction defective for 25 samples, with center line pbar = 0.022,
UCLp = 0.066 and LCLp = 0.]
Thus, the upper control limit, UCLp, is 0.022 + 3(0.01467) = 0.066, and the lower control
limit, LCLp, is 0.022 - 3(0.01467) = -0.022. Since this latter figure is negative and the
fraction nonconforming can never be negative, LCLp is set to zero (0). The control chart for
this example is shown in Figure 6.2.
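These limits can be verified with a short computation; the figures pbar = 0.022 and n = 100 are the example's own.

```python
import math

def p_chart_limits(pbar, n):
    """Three-sigma p-chart limits; a negative LCL is clipped to zero."""
    sp = math.sqrt(pbar * (1 - pbar) / n)
    return pbar + 3 * sp, max(0.0, pbar - 3 * sp)

# Sorting-process example: pbar = 0.022, samples of n = 100
ucl, lcl = p_chart_limits(0.022, 100)
print(round(ucl, 3), lcl)   # -> 0.066 0.0
```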
The sorting process appears to be in statistical control. Any values found above the upper
control limit or evidence of upward trend might indicate the need for re-training the personnel.
Often 100 percent inspection is performed on process output during fixed sampling periods;
however, the number of units produced in each sampling period may vary. In this case, the p-
chart would have a variable sample size.
One way of handling this is to compute a standard deviation for each individual sample. Thus,

sp = sqrt( pbar (1 - pbar) / ni )

where ni is the number of observations in the ith sample. The control limits for this sample
will be given by

UCLp = pbar + 3 sqrt( pbar (1 - pbar) / ni )
LCLp = pbar - 3 sqrt( pbar (1 - pbar) / ni )

where

pbar = (total number nonconforming) / (sum of the ni)
pbar = (18 + 20 + 14 + ... + 18) / (137 + 158 + 92 + ... + 160) = 271/2980 = 0.0909

For the first sample, with n1 = 137:

LCLp = 0.0909 - 3 sqrt( 0.0909 (1 - 0.0909) / 137 ) = 0.017
UCLp = 0.0909 + 3 sqrt( 0.0909 (1 - 0.0909) / 137 ) = 0.165
Note carefully that because the sample sizes vary, control limits would be different for each
sample. The p-chart is shown in Figure 6.4. Points 13 and 15 are outside the control limits.
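With variable sample sizes, the per-sample limits can be sketched as follows; only the first three ni values from the example are used here.

```python
import math

def variable_p_limits(pbar, sample_sizes):
    """Per-sample three-sigma p-chart limits when n varies by sample."""
    limits = []
    for n in sample_sizes:
        sp = math.sqrt(pbar * (1 - pbar) / n)
        limits.append((pbar + 3 * sp, max(0.0, pbar - 3 * sp)))
    return limits

# First few sample sizes from the example; pbar = 271/2980
pbar = 271 / 2980
(ucl1, lcl1), *_ = variable_p_limits(pbar, [137, 158, 92])
print(round(ucl1, 3), round(lcl1, 3))   # -> 0.165 0.017
```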
[Figure 6.4: p-chart of the fraction defective for 20 samples, with variable control limits
(UCL and LCL) that depend on the sample size ni.]
pi = yi/n
where yi is the number found nonconforming and n is the sample size. Multiplying both sides
of the equation pi = yi/n by n yields
yi = npi
That is, the number nonconforming is equal to the sample size times the proportion
nonconforming. Instead of using a chart for the fraction nonconforming, an equivalent
alternative, a chart for the number of nonconforming items, is useful. Such a control chart is
called an np-chart.
The np-chart is a control chart for the number of nonconforming items in a sample. To use the
np-chart, the size of each sample must be constant. Suppose that two samples of sizes 10 and
15 each have four nonconforming items. Clearly, the fraction nonconforming differs between the
two samples. Thus, equal sample sizes are necessary to have a common base for measurement.
Equal sample sizes are not required for p-charts, since the fraction nonconforming is invariant
to the sample size.
The np-chart is a useful alternative to the p-chart because it is often easier for production
personnel to understand: the number of nonconforming items is more meaningful than a
fraction. Also, since it requires only a count, the computations are simpler.
The control limits for the np-chart, like those for the p-chart, are based on the binomial
probability distribution. The center line is the average number of nonconforming items per
sample as denoted by npbar, which is calculated by taking k samples of size n, summing the
number of nonconforming items yi in each sample, and dividing by k. That is
npbar = (y1 + y2 + ... + yk) / k
An estimate of the standard deviation is

s_np = sqrt( npbar (1 - pbar) )

where pbar = (npbar)/n. Using three-sigma limits as before, the control limits are specified by

UCLnp = npbar + 3 sqrt( npbar (1 - pbar) )
LCLnp = npbar - 3 sqrt( npbar (1 - pbar) )
npbar = (3 + 1 + ... + 0 + 1) / 25 = 2.2

pbar = 2.2 / 100 = 0.022
Then,
Since the lower control limit is less than zero, a value of LCL = 0 is used. The control chart
for this example is given in Figure 6.6.
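The np-chart limits can be reproduced from the counts in the table that follows; the sample size n = 100 is the example's.

```python
import math

def np_chart_limits(counts, n):
    """np-chart centre line and three-sigma limits from raw counts."""
    np_bar = sum(counts) / len(counts)
    p_bar = np_bar / n
    s_np = math.sqrt(np_bar * (1 - p_bar))
    return np_bar, np_bar + 3 * s_np, max(0.0, np_bar - 3 * s_np)

# Counts per sample from the 25-sample table, n = 100 each
counts = [3, 1, 0, 0, 2, 5, 3, 6, 1, 4, 0, 2, 1,
          3, 4, 1, 1, 2, 5, 2, 3, 4, 1, 0, 1]
np_bar, ucl, lcl = np_chart_limits(counts, 100)
print(np_bar, round(ucl, 1), lcl)   # -> 2.2 6.6 0.0
```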
Number
Nonconforming
Sample # (np) LCLnp UCLnp
1 3 0 6.6
2 1 0 6.6
3 0 0 6.6
4 0 0 6.6
5 2 0 6.6
6 5 0 6.6
7 3 0 6.6
8 6 0 6.6
9 1 0 6.6
10 4 0 6.6
11 0 0 6.6
12 2 0 6.6
13 1 0 6.6
14 3 0 6.6
15 4 0 6.6
16 1 0 6.6
17 1 0 6.6
18 2 0 6.6
19 5 0 6.6
20 2 0 6.6
21 3 0 6.6
22 4 0 6.6
23 1 0 6.6
24 0 0 6.6
25 1 0 6.6
Recall that a defect is a single nonconforming characteristic of an item, while the term
defective refers to an item that has one or more defects in it. In some situations, quality
assurance personnel may be interested not only in whether an item is defective but also in how
many defects it has. For example, in complex assemblies such as electronics, the number of
defects is just as important as whether the product is defective. Two charts can be applied in
such situations. The c-chart is used to control the total number of defects per unit when
subgroup size is constant. If subgroup sizes are variable, a u-chart is used to control the
average number of defects per unit.
The c-chart is based on the Poisson probability distribution. To construct a c-chart, first
estimate the average number of defects per unit, cbar, by taking at least 25 samples of equal
size, counting the number of defects per sample, and finding the average. The standard
deviation (sc) of the Poisson distribution is the square root of the mean, which yields

sc = sqrt( cbar )
UCLc = 5.82
The chart is shown in Figure 6.8 and appears to be in control. Such a chart can be used for
continued control or for monitoring the effectiveness of a quality improvement program.
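A minimal c-chart sketch follows; since this example's cbar is not reproduced in the text, the defect counts below are hypothetical.

```python
import math

def c_chart_limits(defect_counts):
    """c-chart: centre line cbar, limits cbar +/- 3*sqrt(cbar)."""
    cbar = sum(defect_counts) / len(defect_counts)
    sc = math.sqrt(cbar)
    return cbar, cbar + 3 * sc, max(0.0, cbar - 3 * sc)

# Hypothetical defects-per-unit counts for ten equal-size samples
cbar, ucl, lcl = c_chart_limits([2, 1, 3, 0, 2, 4, 1, 2, 3, 2])
print(cbar, round(ucl, 2), lcl)   # -> 2.0 6.24 0.0
```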
As long as sample size is constant, a c-chart is appropriate. In many cases, however, the
subgroup size is not constant or the nature of the production process does not yield discrete,
measurable units. For example, suppose that in an auto assembly plant, several different
models are produced that vary in surface area. The number of defects will not then be a valid
comparison among different models. Other applications, such as the production of textiles,
photographic film, or paper, have no convenient set of items to measure. In such cases, a
standard unit of measurement is used, such as defects per square foot or defects per square
inch. The control chart used in these situations is called a u-chart.
The variable u represents the average number of defects per unit of measurement, that is,
ui = ci/ni, where ni is the size of subgroup #i (such as square feet). The center line ubar for
k samples each of size ni is computed as follows:

ubar = (c1 + c2 + ... + ck) / (n1 + n2 + ... + nk)

The standard deviation for the ith sample is estimated by

su = sqrt( ubar / ni )
The control limits, based on three standard deviations for the ith sample, are then

UCLu = ubar + 3 sqrt( ubar / ni )
LCLu = ubar - 3 sqrt( ubar / ni )
Note that if the size of the subgroups varies, so will the control limits. This result is similar to
the p-chart with variable sample sizes. In general, whenever the sample size n varies, the
control limits will also vary.
To construct the chart, first compute the number of errors per slip as shown in column 3. The
average number of errors per slip, ubar, is found by dividing the total number of errors (217)
by the total number of packing slips (2,843). This is ubar = 217/2843 = 0.076
su = sqrt( 0.076 / ni )
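The per-sample limit computation can be sketched as follows. Only ubar = 217/2843 comes from the example; the subgroup size ni = 100 slips is an assumed value for illustration.

```python
import math

# Packing-slip example: 217 errors over 2,843 slips
ubar = 217 / 2843
print(round(ubar, 3))            # -> 0.076

def u_limits(ubar, n_i):
    """Per-sample three-sigma u-chart limits for subgroup size n_i."""
    su = math.sqrt(ubar / n_i)
    return ubar + 3 * su, max(0.0, ubar - 3 * su)

# n_i = 100 is an assumed subgroup size, not taken from the text
ucl, lcl = u_limits(ubar, 100)
print(round(ucl, 3), round(lcl, 3))
```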
[Figure 6.9: u-chart of errors per packing slip for 31 samples, with variable control limits
that depend on the sample size ni.]
Prepared for the Extension Program of Indira Gandhi Open University
The control limits (LCLu and UCLu) are shown in Figure 6.10. As with a p-chart, individual
control limits will vary with the sample size, ni. The control chart is shown in Figure 6.9.
One point (sample #2) appears to be out of control.
One application of c-charts and u-charts is in a quality rating system. When some defects are
considered to be more serious than others, they can be rated, or categorized, into different
classes. For instance, defects may be graded into categories A through D in decreasing order of
seriousness.
Each category can be weighted using a point scale, such as 100 for A, 50 for B, 10 for C, and
1 for D. These points, or demerits, can be used as the basis for a c- or u-chart that would
measure total demerits or demerits per unit, respectively. Such charts are often used for
internal quality control and as a means of rating suppliers.
Confusion often exists over which chart is appropriate for a specific application, since the c-
and u-charts apply to situations in which the quality characteristics inspected do not
necessarily come from discrete units.
The key issue to consider is whether the sampling unit is constant. For example, suppose that
an electronics manufacturer produces circuit boards. The boards may contain various defects,
such as faulty components and missing connections. Because the sampling unit - the circuit
board - is constant (assuming that all boards are the same), a c-chart is appropriate. If the
process produces boards of varying sizes with different numbers of components and
connections, then a u-chart would apply.
As another example, consider a telemarketing firm that wants to track the number of calls
needed to make one sale. In this case, the firm has no physical sampling unit. However, an
analogy can be made with the circuit boards. The sale corresponds to the circuit board, and
the number of calls to the number of defects. In both examples, the number of occurrences in
relationship to a constant entity is being measured. Thus, a c-chart is appropriate.
[Chart-selection flowchart: starting from the quality characteristic, variables data lead to
xbar and R (or s) charts; attributes data counting defectives lead to a p-chart (with variable
sample size where needed); counts of defects lead to a c-chart when the sampling unit is
constant and a u-chart otherwise.]
Capability and control are independent concepts. Ideally, we would like a process to have
both high capability and be in control. If a process is not in control, it should first be
brought into control before attempting to evaluate process capability
Control charts for variables data include xbar- and R-charts, s-charts, and individual and
moving range charts. xbar- and s-charts are alternatives to xbar- and R-charts for larger
sample sizes. The sample standard deviation provides a better indication of process
variability than the range. Individuals charts are useful when every item can be inspected
and when a long lead time exists for producing an item. Moving ranges are used to
measure the variability in “individuals” or x charts.
A process is in control if no points are outside control limits; the number of points above
and below the center line is about the same; the points seem to fall randomly above and
below the center line; and most points (but not all) are near the center line, with only a
few close to the control limits.
Typical out-of-control conditions are represented by sudden shifts in the mean value,
cycles, trends, hugging of the center line, hugging of the control limits, and instability.
Modified control limits can be used when the process capability is known to be good.
These wider limits reduce the amount of investigation of isolated points that would fall
outside the usual control limits.
Charts for attributes include p-, np-, c- and u-charts. The np-chart is an alternative to the
p-chart, and controls the number nonconforming for attributes data. Charts for defects
include the c-chart and u-chart. The c-chart is used for constant sample size and the u-
chart is used for variable sample size.
9 IMPLEMENTING SPC
The original methods of SQC have been available for over 75 years now; Walter Shewhart first
proposed the control chart in 1924. However, studies show that managers still do not understand
variation.
Where do you find the motivation? Studies in industry show that when used properly, SQC or
SPC reduces the cost of quality: the cost of excessive inspection, scrap, returns from
customers and warranty service. Good quality, as the Japanese have demonstrated, edges out
competition, builds repute and along with that brings in new customers, raises morale and
expands business. Good quality culture also propagates: companies using SPC frequently
require their suppliers to use it as well. This generates considerable benefit.
Where there is low use of SPC, the major reason often is lack of knowledge of variation and
the importance of understanding it in order to improve customer satisfaction. Successful firms
on the other hand repeatedly show certain characteristics:
Their top management understands variation and the importance of SPC methods to
successfully manage it. They do not delegate this task to the QC department.
All people involved in the use of the technique understand what they are being asked to
do and why it would help them, and
Training, followed by clear and written instructions on agreed procedures are
systematically introduced and followed up through audits.
The above principles form the core of the general principles of good quality management, be it
through ISO 9000, QS 9000 or TQM.
The bottom line is that you must find your own motivation to create and deliver quality.
If you do it as a fad, you will neither get the results nor maintain credibility with your own
people about initiatives that you take in your management interventions.
So, to summarize,
Studies show that to succeed with SPC you must understand variation, however boring
that may appear.
When SPC is used properly, quality costs go down.
Low usage of SPC is associated with lack of knowledge and training, even in senior
management.
SPC needs to be systematically introduced.
A step-wise introduction of SPC would include
- review of management systems,
- review of requirements and design specs,
- emphasis on the need for process understanding and control,
- planning for education and training,
- tackling one problem at a time based on customer complaints/feedback,
- recording of detailed data,
- measuring process capability and
- making routine use of data to manage the process.
The first prudent step toward implementing SPC in an organization would be "to put the house
in order," which may be done by getting the firm registered under ISO 9000.
1. Thirty samples of size 3 listed in the following table were taken from a machining process
over a 15-hour period.
a. Compute the mean and standard deviation of the data.
b. Compute the mean and range of each sample and plot them on control charts.
Sample Observations
1 3.55 3.64 4.37
2 3.61 3.42 4.07
3 3.61 3.36 4.34
4 4.13 3.50 3.61
5 4.06 3.28 3.07
6 4.48 4.32 3.71
7 3.25 3.58 3.51
8 4.25 3.38 3.00
9 4.35 3.64 3.20
10 3.62 3.61 3.43
11 3.09 3.28 3.12
12 3.38 3.15 3.09
13 2.85 3.44 4.06
14 3.59 3.61 3.34
15 3.60 2.83 2.84
16 2.69 3.57 3.28
17 3.07 3.18 3.11
18 2.86 3.69 3.05
19 3.68 3.59 3.93
20 2.90 3.41 3.37
21 3.57 3.63 2.72
22 2.82 3.55 3.56
23 3.82 2.91 3.80
24 3.14 3.83 3.80
25 3.97 3.34 3.65
26 3.77 3.60 3.81
27 4.12 3.38 3.37
28 3.92 3.60 3.54
29 3.50 4.08 4.09
30 4.23 3.62 3.00
2. Forty samples of size 5 listed in the following table were taken from a machining process
over a 25-hour period.
a. Compute the mean and standard deviation of the data.
b. Compute the mean and range of each sample and plot them on control charts. Does the
process appear to be in statistical control? Why or why not?
Sample Number
Data 1 2 3 4 5 6 7 8 9 10
1 9.999 10.022 10.001 10.007 10.011 10.019 10.015 9.988 9.980 10.017
2 9.992 9.998 10.006 10.006 9.979 10.017 10.015 9.990 10.001 10.017
3 10.002 10.037 10.002 10.004 9.991 10.018 9.978 10.008 10.013 9.988
4 10.003 9.994 9.993 10.018 9.996 10.008 10.006 10.002 9.998 10.010
5 10.009 10.003 10.011 10.011 9.994 10.018 9.997 9.989 10.015 9.980
11 12 13 14 15 16 17 18 19 20
1 9.980 10.004 10.025 9.992 9.985 9.977 9.996 10.014 10.001 9.982
2 10.038 9.990 9.989 10.023 10.002 9.975 9.991 10.010 9.979 9.975
3 9.990 10.002 9.981 10.019 10.008 10.002 10.005 10.000 10.001 9.976
4 9.996 10.003 10.006 9.990 10.008 10.021 10.009 10.001 10.015 10.012
5 10.016 9.996 9.998 10.003 9.998 9.989 9.977 10.006 10.009 9.994
21 22 23 24 25 26 27 28 29 30
1 10.010 9.988 9.991 10.005 9.987 9.994 9.994 9.972 10.018 9.985
2 10.003 10.004 9.996 10.003 9.993 10.007 9.987 9.994 10.007 10.010
3 9.990 10.001 10.020 10.027 9.992 10.013 10.027 9.969 9.980 9.998
4 10.010 9.995 10.002 9.996 9.987 9.997 10.030 10.011 9.987 10.033
5 10.015 9.977 10.022 9.970 10.008 10.014 9.989 9.985 10.014 9.994
31 32 33 34 35 36 37 38 39 40
1 10.009 9.987 9.990 9.985 9.991 10.002 10.045 9.970 10.019 9.954
2 10.013 10.012 9.973 10.038 9.999 9.989 9.993 9.999 9.989 10.011
3 10.008 10.015 9.996 9.991 9.989 9.983 10.007 9.989 9.998 10.003
4 9.990 9.995 9.990 9.988 10.014 10.013 9.990 9.999 9.997 9.987
5 10.008 10.021 9.980 9.986 9.997 9.980 10.010 10.014 9.986 10.005
3. Suppose that the following sample means and standard deviations are observed for
samples of size 5. Construct xbar and s-charts for these data.
Sample # X S Sample # X S
1 2.15 0.14 11 2.10 0.17
2 2.07 0.10 12 2.19 0.13
3 2.10 0.11 13 2.14 0.07
4 2.14 0.12 14 2.13 0.11
5 2.18 0.12 15 2.14 0.11
6 2.11 0.12 16 2.12 0.14
7 2.10 0.14 17 2.08 0.17
8 2.11 0.10 18 2.18 0.10
9 2.06 0.09 19 2.06 0.06
10 2.15 0.08 20 2.13 0.14
4. Construct charts for individuals using both two-period and three-period moving ranges for
the following observations: 9.0, 9.5, 8.4, 11.5, 10.3, 12.1, 11.4, 10.0, 11.0, 12.7, 11.3, 12,
12.6, 12.5, 13.0, 12.0, 11.2, 11.1, 11.5, 12.5, 12.1
5. The fraction defective for an automotive piston is given below for 20 samples. Two
hundred units are inspected each day. Construct a p-chart and interpret the results.
12 ACCEPTANCE SAMPLING
A sampling plan is a method for guiding the acceptance sampling process. It specifies the
procedure for drawing samples to inspect from a batch of products and then the rule for
deciding whether to accept or reject the whole batch based on the results of this inspection.
The sample is a small number of items taken from the batch rather than the whole batch.
Rejecting the batch means not accepting it for consumption; this may include downgrading
the batch, selling it at a lower price, or returning it to its supplier or vendor.
Suppose that a sampling plan specifies that (a) n items are drawn randomly from a batch to
form a sample and (b) the batch is rejected if and only if more than c of these n items are
defective or non-conforming. An operating characteristic curve, or OC-curve, of a sampling
plan is defined as the plot of the probability that the batch will be accepted (Pa(p)) against the
fraction p of defective products in the batch. The larger Pa(p) is, the more likely the batch
is to be accepted; a higher likelihood of acceptance benefits the producer. On the other
hand, the smaller Pa(p) is, the harder it will be for the batch to be accepted. This benefits
and even protects the consumer, who wants some assurance against receiving bad
products and would prefer to accept only batches with a low p (fraction defective) value.
In order to specify a sampling plan with Pa(p) characteristics as described above, the numbers
n and c must be correctly specified. This specification requires us to specify two batch quality
or fraction defective levels first, namely the AQL (acceptable quality level) and the RQL
(rejection quality level) values. AQL and RQL (explained below) are two key quality
parameters frequently used in designing a sampling plan. An ideal sampling plan is one
that accepts the batch with 100% probability if the fraction of defective items in it is less
than or equal to the AQL, and rejects the batch with 100% probability if the fraction of
defective items in the batch is larger than the AQL.
Most customers realize that perfect quality (e.g., a batch or lot of parts containing no defective
parts in it) is perhaps impossible to expect; some defectives will always be there. Therefore,
the customer decides to tolerate a small fraction of defectives in his/her purchases. However,
the customer certainly wants a high level of assurance that the sampling plan used to screen
the incoming lots will reject lots with fraction defective levels exceeding some decidedly poor
quality threshold called the rejection quality level or RQL. In reality, RQL is a defect level
that causes a great deal of heartache once such a lot enters the customer's factory.
The supplier of those parts, on the other hand, wants to ensure that the customer's acceptance
sampling plan will not reject too many lots with defect levels that are certainly within the
customer's tolerance, i.e., acceptable to the customer on the average. Generally, the customer
sets a quality threshold here also, called AQL. This is actually the worst lot fraction defective
that is acceptable to the customer in the shipments he/she receives on an average basis.
A bit of thinking will indicate that only error-free 100% inspection of all items in a lot would
accept the batch with 100% probability if the fraction of defective items in it is less than or
equal to AQL and reject the lot with a 100% probability if the fraction of defective items in it
is larger than AQL. Such performance cannot be realized otherwise. The OC-curve of such a
sampling plan is shown in Figure 12.1.
[Figure 12.1: OC-curve of the ideal sampling plan: Pa(p) = 1 for p <= AQL and Pa(p) = 0
for p > AQL.]

For a single sampling plan with sample size n and acceptance number c, the probability of
accepting a batch with fraction defective p is the binomial sum

    Pa(p) = Σ_{x=0}^{c} C(n, x) p^x (1 - p)^(n-x)
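The binomial sum defining Pa(p) can be evaluated directly; a minimal Python sketch (the helper name `pa` is ours):

```python
from math import comb

def pa(p, n, c):
    """Probability of accepting a lot with fraction defective p when n items
    are sampled and the lot is accepted for at most c defectives."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(c + 1))

# Example: evaluate the n = 89, c = 2 plan discussed later in this section
print(round(pa(0.01, 89, 2), 3))
```

Plotting pa(p, n, c) for p from 0 to, say, 0.1 traces the OC-curve of the plan.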
Now, if the producer is willing to sacrifice a little, so that a batch with fraction defective
AQL is accepted with probability at least (1 - α), where α is a small positive number, and
the consumer is willing to sacrifice a little, so that a batch with fraction defective RQL is
accepted with probability at most β, where β is a small positive number and RQL > AQL,
then the following two inequalities can be established:

    Σ_{x=0}^{c} C(n, x) AQL^x (1 - AQL)^(n-x) >= 1 - α

    Σ_{x=c+1}^{n} C(n, x) RQL^x (1 - RQL)^(n-x) >= 1 - β

(The second inequality is equivalent to requiring Pa(RQL) <= β.)
From the above two inequalities, the numbers n and c can be solved (but the solution may not
be unique). The OC-curve of such a sampling plan is shown in Figure 12.2. The number is
call the producer's risk, and the number called the consumer's risk. As a common practice in
industries, the magnitude of and is usually set at some value from 0.01 to 0.1.
Nomograms are available for obtaining solution(s) of the inequalities given above easily.
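Instead of a nomogram, the two inequalities can also be solved by exhaustive search; a brute-force sketch (function names are ours; because the inequalities do not pin down a unique plan, table or nomogram solutions may differ from the smallest-n plan found here):

```python
from math import comb

def binom_cdf(c, n, p):
    """P(X <= c) for X ~ Binomial(n, p): the acceptance probability Pa(p)."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(c + 1))

def smallest_plan(aql, rql, alpha, beta, n_max=500):
    """Smallest-n single sampling plan with Pa(AQL) >= 1 - alpha and
    Pa(RQL) <= beta, found by exhaustive search over (n, c)."""
    for n in range(1, n_max + 1):
        for c in range(n + 1):
            if binom_cdf(c, n, rql) > beta:
                break  # Pa(RQL) only grows with c, so no larger c can work
            if binom_cdf(c, n, aql) >= 1 - alpha:
                return n, c
    return None

print(smallest_plan(0.01, 0.06, 0.05, 0.10))
```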
The above sampling plan is called a single sampling plan (Figure 12.3) because the decision
to accept or reject the batch is made in a single stage after drawing a single sample from the
batch being evaluated.
To set up a practical single sampling procedure you would need to specify 1) the AQL, 2) α,
the producer's risk, 3) the RQL, and 4) β, the consumer's risk. The plan itself may be
developed from a tool called the "Larsen Nomogram" given in statistical quality control
texts and handbooks. A typical plan specifying AQL = 0.01, α = 0.05, RQL = 0.06 and
β = 0.10 will have sample size n = 89 and maximum number of defectives allowed (c) = 2.
[Figure 12.2: OC-curve of a practical single sampling plan, with Pa(p) falling from near 1
at the AQL to near 0 at the RQL.]
[Figure 12.3: Single sampling plan: sample n items; if the number of defectives d <= c,
accept the lot; otherwise reject it.]
A double sampling plan is one in which a sample of size n1 is drawn and inspected; the batch
is accepted or rejected according to whether the number d1 of defective items found in the
sample is <= r1 or >= r2 (where r1 < r2). If the number of defective items lies between r1 and
r2, a further sample of size n2 is drawn and inspected, and the batch is accepted or rejected
according to whether the total number d1 + d2 of defective items in the first and second
samples is <= r3 or > r3. This procedure is shown diagrammatically in Figure 12.4.
[Figure 12.4: Double sampling plan: sample n1 items; if d1 <= r1, accept the lot; if
d1 >= r2, reject the lot; otherwise sample n2 more items and accept the lot if
d1 + d2 <= r3, else reject it.]
The average sample number (ASN) of a double sampling plan is ASN(p) = n1 + n2 P1(p),
where P1(p) is the probability that no decision can be reached on the first sample (i.e., that
r1 < d1 < r2). If the fraction defective p in the batch is very low or very high, a decision can
usually be made upon inspection of the first sample alone, so the ASN of a double sampling
plan will often be smaller than the sample size of a single sampling plan with the same
producer's risk (α) and consumer's risk (β).
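The ASN can be evaluated directly from the binomial distribution of d1; a minimal sketch (function names are ours), counting the second sample only when the first gives r1 < d1 < r2:

```python
from math import comb

def binom_pmf(x, n, p):
    """P(X = x) for X ~ Binomial(n, p)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

def asn(p, n1, n2, r1, r2):
    """Average sample number of a double sampling plan: the second sample
    of n2 items is drawn only when r1 < d1 < r2 (no first-stage decision)."""
    p_no_decision = sum(binom_pmf(x, n1, p) for x in range(r1 + 1, r2))
    return n1 + n2 * p_no_decision

print(asn(0.02, 50, 50, 1, 4))
```

At p near 0 or near 1 the first sample almost always settles the lot, and asn() approaches n1.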
The idea of the double sampling plan can be extended to construct sampling plans of more
than two stages, namely multiple sampling plans. A sequential sampling plan is a sampling
plan in which one item is drawn and inspected at a time: if a decision (to accept or reject the
batch) can be made upon inspection of the first item, the process is stopped; if not, a second
item is drawn and inspected, and if a decision can be made upon inspection of the second
item, the plan is stopped; otherwise a third item is drawn and inspected, and so on, until a
decision can be made. However, multiple sampling plans and sequential sampling plans are
not so commonly used, because their implementation in practice is more complicated than
that of single and double sampling plans.
The quantity [p Pa(p)] is called the average outgoing quality (AOQ) of a sampling plan at
fraction defective p. This is the quality to be expected at the customer's end if the sampling
plan is used consistently and repeatedly to accept or reject lots being received. It is clear that
if p = 0, then AOQ = 0. If p = 1, that is, all the product items in the batch are defective, then
Pa(p) of any sampling plan that we are using should be equal to 0 for otherwise the plan would
not have any utility. Hence AOQ = 0 also when p = 1. Since AOQ vanishes at both endpoints
and is positive in between, it has a global maximum in the range (0, 1); this maximum is
called the average outgoing
quality limit (AOQL) of the sampling plan. The graph of AOQ against p for a typical
sampling plan is shown in Figure 12.5. Although a sampling plan can be specified by setting
the producer's risk (α) and consumer's risk (β) at AQL and RQL, the quantity AOQL can also
be used to specify a sampling plan.
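The AOQ curve and its maximum can be traced numerically; a sketch (a coarse grid search, with helper names of our own choosing) applied to the n = 89, c = 2 plan mentioned earlier:

```python
from math import comb

def pa(p, n, c):
    """Acceptance probability of the (n, c) single sampling plan."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(c + 1))

def aoq(p, n, c):
    """Average outgoing quality p * Pa(p)."""
    return p * pa(p, n, c)

# Coarse grid search for the AOQL of the n = 89, c = 2 plan
grid = [i / 1000 for i in range(1, 200)]
aoql_p = max(grid, key=lambda q: aoq(q, 89, 2))
print(round(aoql_p, 3), round(aoq(aoql_p, 89, 2), 4))
```

The value of p at which the maximum occurs, and the AOQL itself, summarise the worst average quality that can pass through the plan in the long run.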
[Figure 12.5: AOQ plotted against the incoming fraction defective p; the maximum of the
curve is the AOQL.]
Another type of sampling plan, different from the above, is the continuous sampling
procedure (CSP). The rationale of CSP is that if we are not sure that the products produced
by a process are of good quality, 100% inspection is adopted; if the quality of the products is
found to be good, then only a fraction of the products is inspected. In the simplest CSP,
100% inspection is performed initially; if no defective items are found after a specified
number of items have been inspected (which suggests that the quality of the product being
produced is good), 100% inspection is stopped and only a fraction f of the products is
inspected. During fractional inspection, if a defective item is found (which suggests that
quality might have deteriorated), fractional inspection is stopped and 100% inspection is
resumed. More refined CSPs have also been constructed, for example by setting f at 1/2 in
the first stage, 1/4 in the second stage, and so on.
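The simplest CSP can be illustrated with a small Monte-Carlo sketch (parameter names and the simulation set-up are ours, not part of any standard):

```python
import random

def csp1_inspected_fraction(p, i, f, n_items, seed=0):
    """Simulate the simplest CSP: 100% inspection until i consecutive good
    items are seen, then inspect each item with probability f, reverting to
    100% inspection whenever an inspected item is found defective.
    Returns the fraction of items that ended up being inspected."""
    rng = random.Random(seed)
    inspected = 0
    clearance = 0          # consecutive good items seen under 100% inspection
    full = True            # currently in the 100%-inspection phase?
    for _ in range(n_items):
        defective = rng.random() < p
        if full:
            inspected += 1
            clearance = clearance + 1 if not defective else 0
            if clearance >= i:
                full, clearance = False, 0
        elif rng.random() < f:
            inspected += 1
            if defective:
                full = True   # a defect under sampling triggers 100% inspection
        # uninspected items pass through unexamined
    return inspected / n_items
```

With a perfect process the long-run inspected fraction approaches f; with a very poor one it stays near 1.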
All sampling plans described above are called attribute sampling plans, because the
inspection procedure is based on a "go"/ "no go" basis, that is, an item is either regarded as
non-defective and accepted, or it is regarded as defective and not accepted. Variable
sampling plans are sampling plans in which continuous measurements (such as dimensional
or weight checks) are made on each item in the sample, and the decision as to whether to
accept or to reject the batch is based on the sample mean or the average of the measurements
obtained from all items contained in the sample.
A variable sampling plan can be used, for example, when a product item is regarded as
acceptable if a certain measurement x (diameter, length, hardness, etc.) of it exceeds a pre-set
lower spec limit L; otherwise the item is regarded as not acceptable (see Figure 12.6).
[Figure 12.6: Distribution of the measurement x, with items falling below the lower spec
limit L unacceptable.]
Measurements {x} of the products produced would vary from item to item, but these
measurements have a population mean µ, say. When µ is much larger than L, we can expect
that most items will have an x value greater than L and all such items would be acceptable;
when µ is much less than L, we can expect that most items will have x values less than L and
all such items would not be acceptable. A variable sampling plan can be constructed by
specifying a sample size n and a lower cut-off value c for the sample mean xbarn such that if
the sample of size n is drawn and all items in this sample are measured, the lot is accepted if
the sample mean xbarn exceeds c. The lot is rejected otherwise. We require that when the
population mean of the product produced is µ1 or larger, the lot is accepted with a probability
of at least 1 - α, and when the population mean is µ2 or smaller, the lot is accepted with
probability β or less, where α = producer's risk and β = consumer's risk (Figure 12.7).
[Figure 12.7: Distributions of the sample mean x̄n centered at µ2 and at µ1, with the
cut-off value c lying between them.]
If, for example, x follows a normal distribution with standard deviation σ, let zα and zβ
denote the standard normal values cut off with upper-tail probabilities α and β respectively.
The acceptance criterion given above then yields

    n >= [ (zα + zβ) σ / (µ1 - µ2) ]²

    µ2 + zβ σ/√n <= c <= µ1 - zα σ/√n
The above system of inequalities may not have a unique (n, c) solution. From elementary
statistical theory, if the form of x is known (for example, when x follows a normal
distribution), from these inequalities we can determine a minimum value for the sample size n
(which is an integer), and a range for the cut-off point c for the sample mean xbar. Such a
sampling plan is called a single specification limit variable sampling plan. If an upper
specification limit U instead of a lower specification limit L is set for x, we only need to
consider the lower specification limit problem with (-x) replacing x and (-U) replacing L.
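For the normal, known-sigma case, the sample size and cut-off can be computed directly; a sketch using the standard normal quantiles zα and zβ (the function name and the midpoint choice for c are ours):

```python
from math import ceil, sqrt
from statistics import NormalDist

def variables_plan(mu1, mu2, sigma, alpha, beta):
    """Single lower-limit variables plan with known sigma, assuming x is
    normal: accept the lot when the sample mean exceeds the cut-off c.
    Returns (n, c), with c taken as the midpoint of the feasible range
    [mu2 + z_beta*sigma/sqrt(n), mu1 - z_alpha*sigma/sqrt(n)]."""
    z_a = NormalDist().inv_cdf(1 - alpha)  # z_alpha
    z_b = NormalDist().inv_cdf(1 - beta)   # z_beta
    n = ceil(((z_a + z_b) * sigma / (mu1 - mu2)) ** 2)
    lo = mu2 + z_b * sigma / sqrt(n)       # lower end of feasible c
    hi = mu1 - z_a * sigma / sqrt(n)       # upper end of feasible c
    return n, (lo + hi) / 2

n, c = variables_plan(10.0, 9.5, 1.0, 0.05, 0.10)
print(n, round(c, 3))
```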
International standards for sampling plans are now available. Many of these are based on the
work of Professors Dodge and Romig. The plan that was originally developed for single and
multiple attribute sampling for the US Army during WW II is now widely used in industry. It
is called MIL-STD-105E. An equivalent Indian standard known as IS 2500 has been
published by the Bureau of Indian Standards. Many other official standards for various
attribute sampling plans (such as those based on AOQ, or CSP's, and so on) and variable
sampling plans (assuming the variable has a normal distribution, when the population variance
is known or unknown, and so on) have been published by the US government and the British
Standards Institution.
Before we end this section, we stress again that acceptance sampling or sampling inspection is
only a screening tool for separating batches or lots of good quality products from batches of
poor quality products. To some extent this screening assures the quality of incoming parts and
materials. Actually, the use of formal sampling plans helps an industry do this screening more
effectively than drawing samples arbitrarily. Therefore, sampling inspection can be used
during purchasing, for checking the quality of incoming materials, whenever one is not sure
about the conditions and QC procedures in use in the vendor's plant.
Acceptance sampling can also be used for the final checking of products after production
(Figure 1.3). This, to a limited degree, assures the quality of the products being readied for a
customer before they are physically dispatched; even Motorola uses acceptance sampling
as a temporary means of quality control until permanent corrective actions can be
implemented. But note that unlike SPC, acceptance sampling does not help in the prevention
of the production of poor quality products.
In this section we discuss briefly methods that belong in the domain of quality engineering, a
recently formalized discipline that aims at developing products whose superior performance
delights the discriminating user: not only when the package is opened, but also throughout
their lifetime of use. The quality of such products is robust, i.e., it remains unaffected by the
deleterious impact of environmental or other factors often beyond the users' control.
Since the topic of quality engineering is of notably broad appeal, we include below a brief
review of the associated rationale and methods. The term "quality engineering" (QE) was
until recently used mainly by Japanese quality experts. One such expert is Genichi Taguchi
(1986), who reasoned that even the best available manufacturing technology was by itself no
assurance that the final product would actually function in the hands of its user as desired. To
achieve this, Taguchi suggested, the designer must "engineer" quality into the product, just as
he/she specifies the product's physical dimensions to make the final product dimensionally
correct.
The essence of the loss function concept may be stated as follows. Whenever a product
deviates from its target performance, it generates a loss to society (Figure 13.1). This loss is
minimum when performance is right on target, but it grows gradually as one deviates from the
target. Such a philosophy suggests that the traditional "if it is within specs, the product is
good" view of judging a product's quality is not correct. If your foot size is 7, then a shoe of
size different from 7 will cause you inconvenience, pain, loose fit, and even embarrassment.
Under such conditions it is meaningless to seek a shoe that merely meets a spec given as (7 ± x).
To state again, the loss function philosophy says that for a producer, the best strategy is to
produce products as close to the target as possible, rather than aiming at "being within
specifications."
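The loss-function idea can be made concrete with a tiny sketch. The quadratic form L(y) = k(y - T)² is the one commonly associated with Taguchi, but the cost constant k and the shoe-size numbers below are purely illustrative:

```python
def taguchi_loss(y, target, k):
    """Quadratic loss L(y) = k * (y - target)**2: zero when performance is
    exactly on target and growing smoothly with deviation, unlike the
    step-shaped "within specs = good" view. k is a hypothetical cost constant."""
    return k * (y - target) ** 2

# A size-7.5 shoe sold to a size-7 foot (k = 100.0 chosen arbitrarily)
print(taguchi_loss(7.5, 7.0, 100.0))  # -> 25.0
```

Note that a unit just inside a spec limit incurs nearly the same loss as one just outside it, which is exactly the point the philosophy makes against "being within specifications."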
Sensitivity analysis also determines the changes to be expected in the design’s performance
due to factor variations of uncontrollable character. If the design is found to be too sensitive,
the designer projects the worst-case scenario to help plan for the unexpected. However,
studies indicate that worst-case projections or conservative designs are often unnecessary and
that a “robust design” can greatly reduce off-target performance caused by poorly controlled
manufacturing conditions, temperature or humidity shifts, wider component tolerances used
during fabrication, and also field abuse that might occur due to voltage/frequency fluctuations,
vibration, etc.
Robust design should not be confused with rugged or conservative design, which adds to unit
cost by using heavier insulation or high-reliability, close-tolerance components. As an
engineering methodology robust design seeks to reduce the sensitivity of the product/process
performance to the uncontrolled factors through a careful selection of the values of the design
parameters. One straightforward way to produce robust designs is to apply the "Taguchi
method".
The Taguchi method may be illustrated as follows. Suppose that a European product (Swiss
chocolate bars) is to be introduced “as is” in a tropical country where the ambient temperature
rises to 45°C. If the European product formulation is directly adopted, the result may be
molten bars on store shelves in Bombay and Singapore and gooey hands and dirty dresses, due
to the high temperature sensitivity of the Swiss chocolate recipe (Curve 1, Figure 13.2).
The behavior of the bar’s plasticity may be experimentally explored to determine its
robustness to temperature, but few product designers actually attempt this. Taguchi would
suggest that we do here some special “statistical experiments” in which both the bar’s
formulation (the original Swiss and perhaps an alternate prototype formulation that we would
call “X”) and ambient temperature would be varied simultaneously and systematically and the
consequent plasticities observed.
[Figure 13.2: Plasticity plotted against temperature, showing Curve 1 (the
temperature-sensitive Swiss formulation) and Curve 2.]
Taguchi was able to show that by such experiments it is often possible to discover an alternate
bar design (here an appropriate chocolate bar formulation) that would be robust to
temperature. The trick, he said, is to uncover any “exploitable” interaction between the effect
of changing the design (e.g. from the Swiss formulation to Formulation “X”) and temperature.
In the language of statistics, two factors are said to interact when the influence of one on a
response is found to depend on the setting of the other factor (Montgomery, 1997). Figure
13.3 shows such an interaction, experimentally uncovered. Thus, a “robust” chocolate bar
may be created for the tropical market if the original Swiss formulation is changed to
Formulation “X”.
Note that a product’s performance is “fixed” primarily by its design, i.e., by the settings
selected for its various design factors. Performance may also be affected by
noise: environmental factors, unit-to-unit variation in material, workmanship, methods, etc.,
or due to aging/deterioration (Figure 13.4). The breakthrough in product design that Taguchi
achieved renders performance robust even in the presence of noise, without actually
controlling the noise factors themselves. Taguchi's special "design-noise array" experiments
(Figure 13.5) discover those optimum settings. Briefly, the procedure first builds a special
assortment of prototype designs (as guided by the “design array”) and then tests these
prototypes for their robustness in “noisy” conditions. For this, each prototype is “shaken” by
deliberately subjecting it to different levels of noise (selected from the “noise array”, which
simulates noise variation in field conditions). Thus performance is studied systematically
under noise in order to find eventually a design that is insensitive to the influence of noise.
[Figure 13.4: Noise and design factors (e.g., Factor B) jointly determining product
performance.]
To guide the discovery of the "optimum" design factor settings, Taguchi suggested a two-step
procedure. In Step 1, optimum settings for certain design factors (called "robustness-seeking
factors") are sought so as to ensure that the response (for the bar, plasticity) becomes robust
(i.e., the bar does not collapse into a blob at least up to a temperature of 50°C). In Step 2, the
optimum setting of some other design factor (called the “adjustment” factor) is sought to put
the design’s average response at the desired target (e.g., for plasticity a level that is easily
chewable).
For the chocolate bar design problem, the alternative design "factors" are two candidate
formulations: one the original European, and the other that we called "X". Thus, the design
array would contain two alternatives, "X" and "Eu." "Noise" here is ambient temperature, to
be experimentally varied over the range 0°C to 50°C, as seen in the tropics.
Figure 13.5 shows the experimental outcome of hypothetical design-noise array experiments.
For instance, response value Plasticity X_50 was observed when formulation X was tested at
50°C. Figure 13.3 is a compact illustrative display of these experimental results. It is evident
from Figure 13.3 that Formulation X's behavior is quite robust even at tropical temperatures.
Therefore, the adoption of Formulation "X" would make the chocolate bar "robust", i.e., its
plasticity would not vary much even if ambient temperature had wide swings.
Many textbook methods for conducting multi-factor experiments, however, are too elaborate
and cumbersome. This has discouraged many practicing engineers from trying out this
powerful methodology in real design and optimization work. Taguchi observed this and
popularized a class of simpler experimental plans that can still reveal a lot about the
performance of a product or process, without the burden of heavy theory. An example is
shown below.
The Taguchi orthogonal array experimental scheme would set up the following experimental
plan consisting of only four specifically designed experiments. The results are shown in the
table and on the associated graph. It is clear from the factor effect plots of the results that the
designer would be much better off by increasing pressure. Nozzle diameter seems to have
little effect on the extent of the area covered by the extinguisher.
[Table: four orthogonal-array runs varying nozzle diameter (5 mm, 10 mm) and CO2
pressure (2 bars, 4 bars), with the area covered (m²) by the extinguisher as the response;
legible entries include run 3 (5 mm, 4 bars, 1.6 m²) and run 4 (10 mm, 4 bars, 1.9 m²).]
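The factor-effect arithmetic behind such a plot is simple averaging; a sketch in which only runs 3 and 4 come from the text's table, while the first two rows and the response values at 2 bars are invented purely to make the computation concrete:

```python
# Hypothetical L4-style results for the fire-extinguisher illustration.
# Runs 3 and 4 (4-bar rows) are from the text; runs 1 and 2 are invented.
runs = [
    # (nozzle_mm, pressure_bar, area_m2)
    (5, 2, 1.0),   # hypothetical
    (10, 2, 1.2),  # hypothetical
    (5, 4, 1.6),   # from the text's table
    (10, 4, 1.9),  # from the text's table
]

def main_effect(runs, index):
    """Average response at the factor's high level minus its low level."""
    levels = sorted({r[index] for r in runs})
    lo = [r[2] for r in runs if r[index] == levels[0]]
    hi = [r[2] for r in runs if r[index] == levels[1]]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print(main_effect(runs, 0))  # nozzle-diameter effect
print(main_effect(runs, 1))  # pressure effect (larger, as the text concludes)
```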
The six sigma principle is Motorola's own rendering of what is known in the quality literature
as the zero defects (ZD) program. Zero defects is a philosophical benchmark or standard of
excellence in quality proposed by Philip Crosby. Crosby explained the mission and essence of
ZD by the statement "What standard would you set on how many babies nurses are allowed to
drop?" ZD is aimed at stimulating each employee to care about accuracy and completeness, to
pay attention to detail, and to improve work habits. By adopting this mind-set, everyone
assumes the responsibility toward reducing his or her own errors to zero.
One might think that having three-sigma quality, i.e., the natural variability (µ ± 3σ) equal
to the tolerance (= upper spec limit - lower spec limit, or in other words, Cp = 1.0), would
mean good enough quality. After all, if the distribution is normal, only 0.27% of the output would be
expected to fall outside the product's specs or tolerance range. But what does this really
mean? An average aircraft consists of 10,000 different parts. At 3-sigma quality, 27 of those
parts in an assembled aircraft would be defective. At this performance level, there would be
no electricity or water in Delhi for one full day each year. Even four-sigma quality may not
be OK. At four-sigma level there would be 62.10 minutes of telephone shutdown every week.
You might wish to know where performance is today. Restaurant bill errors are near 3-sigma.
Payroll processing errors are near 4-sigma. Wire transfer of funds in banks is near 4-sigma
and so is baggage mishandling by airlines. The average Indian manufacturing industry is near
the 3-sigma level. Airline flight fatality rates are at about 6.5 sigma level (0.25 per million
landings). At two-sigma level a company's cost of returns, scrap, rework and erosion of
market share costs over a third of its yearly sales.
For the typical Indian, a 1-hour train delay, an incorrect eye operation or drug administration,
or no electricity or water half a day is no surprise; he/she routinely experiences even worse
performance. Quantitatively, such performance is worse than two-sigma. Can this be called
acceptable? One- or two-sigma performance is downright noncompetitive.
Besides adopting TQM as the way to conduct business, many companies worldwide are now
seriously looking at six-sigma benchmarks to assess where they stand. Six sigma not only
reduces defects and raises customer acceptability, it has been now shown at Allied Signal Inc.,
Motorola, Raytheon, Bombardier Aerospace and Xerox that it can actually save money as
well. Therefore, it is no surprise that Motorola aggressively set the following quality goal for
itself in 1987 and then didn't want to stop till they achieved it:
Improve product and services quality ten times by 1989, and at least one
hundred fold by 1991. Achieve six-sigma capability by 1992. With a deep
sense of urgency, spread dedication to quality to every facet of the
corporation, and achieve a culture of continual improvement to assure total
customer satisfaction. There is only one goal: zero defects in everything
we do.
The concept of six-sigma quality, i.e., shrinking the inherent variation in a process to half of
the spec range (Cp = 2.0) while allowing the mean to shift at most 1.5σ from the spec
midpoint (the target quality), is explained by Figure 14.1. The area under the shifted curves
beyond the six-sigma range (the tolerance limits) is only 0.0000034, or 3.4 parts per million.
If the process mean can be controlled to within 1.5σ of the target, a maximum of 3.4
defects per million pieces produced can be expected. If the process mean is held exactly on
target, only 2.0 defects per billion would be expected. This is why within its organization
Motorola defines six sigma as a state of the production or service unit that represents "almost
perfect quality."
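The 3.4 ppm and 2-per-billion figures follow directly from normal tail areas; a minimal sketch (the function name is ours):

```python
from math import erfc, sqrt

def out_of_spec_ppm(shift_sigma):
    """Parts per million of normally distributed output falling outside
    +/-6-sigma spec limits when the process mean sits shift_sigma
    standard deviations off target."""
    upper = 0.5 * erfc((6 - shift_sigma) / sqrt(2))   # tail beyond the near limit
    lower = 0.5 * erfc((6 + shift_sigma) / sqrt(2))   # tail beyond the far limit
    return (upper + lower) * 1e6

print(round(out_of_spec_ppm(1.5), 1))         # 1.5-sigma shift: the 3.4 ppm figure
print(round(out_of_spec_ppm(0.0) * 1000, 1))  # on target: ~2 defects per billion
```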
[Figure 14.1: Six-sigma quality: the inherent variability (±3σ) is half the tolerance, and
the process mean is allowed to vary within ±1.5σ of the target inside the ±6σ tolerance
band.]
Many companies have adopted the Measure-Analyze-Improve-Control cycle to step into six
sigma. Typically they proceed as follows:
Select critical-to-quality characteristics
Define performance standards (the targets to be achieved)
Validate measurement systems (to ensure that the data is reliable)
Establish Product Capability (how good are you now?)
Define performance objectives
Identify sources of variation (seven tools etc.)
Screen potential causes (correlation studies, etc.)
Discover relationship between variablescauses or factorsand the output (DOE)
Establish operating tolerances for input factors and output variables
Validate the measurement system
Determine process capability (Cpk) (Can you deliver? What do you need to
improve?)
Implement process controls
One must audit and review nonstop to ensure that one is moving along the charted path.
The aspect common to six sigma and ZD ("zero defects") is that both concepts require
maximum participation by the entire organization. In other words, they require unrelenting
effort by management and the involvement of all employees. Companies such as
General Motors have used a four-phase approach to seek six sigma:
1. Measure: Select critical quality characteristics through Pareto charts; determine the
existing frequency of defects, define the target performance standard, validate the
measurement system and establish existing process capability.
2. Analyze: Understand when, where, and why defects occur by defining performance
objectives and sources of variation.
3. Improve: Identify potential causes, discover cause-effect relationships, and establish
operating tolerances.
4. Control: Maintain improvements by validating the measurement system, determining
process capability, and implementing process control systems.
It is reported that in GM a new culture has been created. An individual or team devotes all its
time and energy to solving one problem at a time, designs solutions with customers'
assistance, and helps to minimize bureaucracy in supporting the six-sigma initiative.
Beyond TQM
The Japanese have recently evolved the "Bluebird Plan," which is a "third option" beyond
SPC and TQM designed to achieve four objectives of business excellence. These objectives
include establishing corporate ethics, maintaining and boosting international competitiveness,
ensuring stable employment and improving national quality of life.
The Bluebird Plan provides a forum for government, labor and management to discuss the
actions which need to be taken. In Japan the plan set out an action program for reform for the
three years 1997-1999 which was noted to be a critical time that would determine the direction
of Japan's future. Striking about the plan is the employers' acceptance that the relationship
between labor and management is an imperative "stabilizing force in society." Thus it reaches
beyond the tenets of TQM.