


STATISTICAL QUALITY CONTROL

Objectives

After reading this unit you should be able to:


•  Apply statistical thinking to quality improvement
•  Use 7 tools to troubleshoot quality problems
•  Determine process capability of a manufacturing process
•  Explain what a control chart is and how to design one
•  Select appropriate control charts for different applications
•  Understand the principles behind sampling methods
•  Understand the role of 6-sigma and other advanced QC methods

Structure

1. What is Statistical Quality Control?


2. Process Capability: A Discerning Measure of Process Performance
3. The Seven Quality Improvement Tools
4. Control Charts for Variables Data
4.1 Constructing xbar and R Charts and Establishing Statistical Control
4.2 Interpreting Abnormal Patterns in Control Charts
4.3 Routine Process Monitoring and Control
4.4 Estimating Plant Process Capability
4.5 Use of Warning Limits
4.6 Modified Control Limits
4.7 Key Points about Control Charts
5. Special Control Charts for Variables Data
5.1 Xbar and s-Charts
5.2 Charts for Individuals
6. Control Charts for Attributes
6.1 Fraction Nonconforming (p) Chart.
6.2 Variable Sample Size p Chart
6.3 np-Charts for Number Nonconforming
6.4 Charts for Defects
7. Choosing the Correct SPC Chart
8. Key Points about Control Chart Construction
9. Implementing SPC
10. Review Questions about Control Charts
11. Self Assessment Questions about SPC
12. Acceptance Sampling
13. What are Taguchi Methods?
14. The Six-Sigma Program
15. Key Words
16. Further Readings and References

Prepared for the Extension Program of Indira Gandhi Open University



1 WHAT IS STATISTICAL QUALITY CONTROL?

The concept of TQM is basically very simple. Each part of the organization has
customers, some external and many internal. Identifying what the customer requirements are
and setting about to meet them is the core of a total quality approach. This requires a good
management system, methods including statistical quality control (SQC), and teamwork.

A well-operated, documented management system provides the necessary foundation for the
successful application of SQC. Note, however, that SQC is not just a collection of techniques.
It is a strategy for reducing variability, the root cause of many quality problems. SQC refers
to the use of statistical methods to improve and enhance quality and through it customer
satisfaction. However, this task is seldom trivial because real world processes are affected by
numerous uncontrolled factors. For instance, within every factory, conditions fluctuate with
time. Variations occur in the incoming materials, in machine conditions, in the environment
and in operator performance. A steel plant, for example, may purchase good quality ore
from a mine, but the physical and chemical characteristics of ore coming from different
locations in the mine may vary. Thus, everything isn't always "in control."

Besides ore, in steel making furnace conditions may vary from heat to heat. In welding, it is
not possible to form two exactly identical joints and faulty joints may occur occasionally. In a
cutting process, the size of each piece of material cut varies; even the most high-quality
cutting machine has some inherent variability. In addition to such inherent variability, a large
number of other factors may also influence processes (Figure 1.1).

Many of these variations cannot be predicted with certainty, although sometimes it is possible
to trace the unusual patterns of such variations to their root cause(s). If we have collected
sufficient data on these variations, we can tell, in terms of probability, what is most likely to
occur next if no action is taken. If we know what is likely to occur next under given
conditions, we can take suitable actions to try to maintain or improve the acceptability of the
output. This is the rationale of statistical quality control.

[Figure 1.1: Input, Environmental and Output Variables of a Process. The diagram shows a process acted on by environmental factors (ambient temperature, vibration, humidity, supply voltage, etc.) and labour (training level); its inputs include raw material quality/quantity and control variables (set points for temperature, cutting speed, raw material specs, recipe, etc.); state variables are measured within the process; the output variables are the quality of the finished product and the level of customer satisfaction.]


[Figure 1.2: An Idealized Process Model. Input factors x1, x2, x3, … enter the process y = f(x), which produces output responses y1, y2, y3, …]

Another area in which statistical methods can help to improve product quality is the
design of products and processes. It is now well understood that over two-thirds of all product
malfunctions may be traced to their design. Indeed, the characteristics or quality of a product
depend greatly on the choice of materials, settings of various parameters in the design of the
product and the production process settings. In order to locate an optimal setting of the
various parameters which gives the best product, we may consider using models governing the
outcome and the various parameters, if such models can be established by theory or through
experimental work. Such a model is diagrammatically shown in Figure 1.2.

However, in many cases, a theoretical quality control model y = f(x) relating the final output
responses (y1, y2, y3, …) and the input parameters (x1, x2, x3, …) is either extremely difficult
to establish or mathematically intractable. The following two examples illustrate such cases.

Example 1: In the bakery industry, the taste, tenderness and texture of a bread depend
on various input parameters such as the origin of the flour used, the amount of sugar, the
amount of baking powder, the baking temperature profile and baking time, and the type of
oven used, and so on. In order to improve the quality of the bread produced, the baker may
use a model which relates the input parameters and the output quality of the bread. To find
theoretical models quantifying the taste, tenderness and texture of the bread produced and
relate these quantities to the various input parameters based on our present scientific knowledge
is a formidable task. However, the baker can easily use statistical methods in regression
analysis to establish empirical models and use them to locate an optimal setting of the input
parameters.
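To make the idea concrete, the following short Python sketch (not part of the original text) fits a simple straight-line empirical model by least squares; the baking times and tenderness scores are invented purely for illustration.

```python
# A minimal sketch, using invented data: fitting an empirical straight-line model
# relating one input (baking time) to one quality response (a tenderness score),
# the kind of regression model the baker might build from a few trial bakes.
baking_time = [20, 25, 30, 35, 40]          # minutes (hypothetical data)
tenderness  = [6.1, 6.6, 7.0, 7.3, 7.8]     # taste-panel score (hypothetical data)

n = len(baking_time)
mean_x = sum(baking_time) / n
mean_y = sum(tenderness) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(baking_time, tenderness))
         / sum((x - mean_x) ** 2 for x in baking_time))
intercept = mean_y - slope * mean_x

print(f"empirical model: tenderness = {intercept:.2f} + {slope:.3f} * baking_time")
```

In practice the baker would fit several inputs at once and validate the model on fresh bakes before using it to choose settings.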

Example 2: Sometimes there are great difficulties in solving an engineering problem using
established theoretical models. The heat accumulated on a chip in an electronic circuit during
normal operation will raise the temperature of the chip and shorten its life. In order to
improve the quality of the circuit, the designer would like to optimize the design of the circuit
so that the heat accumulated on the chip will not exceed a certain level. This heat
accumulated can be expressed theoretically in terms of other parameters in the circuit using a
complicated system of ten or more daunting partial differential equations which can be used to
optimize the circuit design. However, it is usually not possible to solve such a system
analytically, and solving it numerically on a computer also poses computational
difficulties. In this situation, a statistical methodology known as design of experiments (DOE)
can be used to find an optimal design of the circuit without going through the complicated
method of solving partial differential equations.

In other cases control may need to be exercised even on-line, while the process is in
progress, based on how the process is performing, to maintain product quality. Thus the
problems addressed by statistical quality control are numerous and diverse.

SQC engages the following three methodologies:

1. Acceptance Sampling
This method is also called "sampling inspection." When products are required to be inspected
but it is not feasible to inspect 100% of the products, samples of the product may be taken for
inspection and conclusions drawn using the results of inspecting the samples. This technique
specifies how to draw samples from a population and what rules to use to determine the
acceptability of the product being inspected.
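As a rough illustration (a sketch, not a plan taken from the text), the Python fragment below computes the probability of accepting a lot under a hypothetical single sampling plan with sample size n = 50 and acceptance number c = 2, using the binomial distribution.

```python
# A minimal sketch of a hypothetical single sampling plan: draw n = 50 items and
# accept the lot if c = 2 or fewer are found defective. The acceptance probability
# follows from the binomial distribution.
from math import comb

def prob_accept(p_defective, n=50, c=2):
    """Probability that a random sample of n items contains at most c defectives."""
    return sum(comb(n, k) * p_defective**k * (1 - p_defective)**(n - k)
               for k in range(c + 1))

for p in (0.01, 0.05, 0.10):
    print(f"lot fraction defective {p:.2f} -> P(accept) = {prob_accept(p):.3f}")
```

Plotting P(accept) against the lot fraction defective gives the plan's operating characteristic curve, which is how such plans are usually compared.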


2. Statistical Process Control (SPC)


Even in an apparently stable production process, products produced are subject to random
variations. SPC aims at controlling the variability of process output using a device called the
control chart. On a control chart, a certain characteristic of the product is plotted. Under
normal conditions these plotted points are expected to vary in a "usual way" on the chart.
When abnormal points or patterns appear on the chart, it is a statistical indication that the
process parameters or production conditions might have changed undesirably. At this point an
investigation is conducted to discover unusual or abnormal conditions (e.g. tool breakdown,
use of wrong raw material, temperature controller failure, etc.). Subsequently, corrective
actions are taken to remove the abnormality. In addition to the use of control charts, SPC also
monitors process capability, an indicator of the adequacy of the manufacturing process to
meet customer requirements under routine operating conditions. In summary, SPC aims at
maintaining a stable, capable and predictable process.

Note, however, that since SPC requires processes to display measurable variation, it is
ineffective for quality levels approaching six-sigma though it is quite effective for companies
in the early stages of quality improvement efforts.

3. Design of Experiments
Trial and error can be used to run experiments in the design of products and design of
processes, in order to find an optimal setting of the parameters so that products of good quality
will be produced. However, performing experiments by trial and error unscientifically is
frequently very inefficient in the search for an optimal solution. Application of the statistical
methodology of "design of experiments" (DOE) can help us in performing such experiments
scientifically and systematically. Additionally, such methods greatly reduce the total effort
used in product or process development experiments, increasing at the same time the accuracy
of the results. DOE forms an integral part of Taguchi methods, techniques that produce
high quality and robust product and process designs.

The Correct Use of Statistical Quality Control Methods

The production of a product typically progresses as indicated in the simplified flow diagram
shown in Figure 1.3. In order to improve the quality of the final product, design of
experiments (DOE) may be used in Step 1 and Step 2, acceptance sampling may be used in
Step 3 and Step 5, and statistical process control (SPC) may be used in Step 4.

[Figure 1.3: Production from Design to Dispatch. Flow: Design of the Product → Design of the Process → Procurement of materials and parts → Production → Dispatch of the Product.]

There are several benefits that the SPC approach brings, as follows:
•  There are no restrictions on the type of process being controlled or studied, and the
process tackled will be improved.
•  Decisions guided by SPC are based on facts, not opinions. Thus a lot of 'emotion' is
removed from problems by SPC.
•  Quality awareness of the workforce increases because they become directly involved in
the improvement process.
•  Knowledge and experience of those who operate the process are released in a systematic
way through the investigative approach. They understand their role in problem solving,
which includes collecting facts, communicating facts, and making decisions.
•  Management and supervisors solve problems methodically, instead of by using a seat-of-
the-pants style.


[Figure 1.4: A Capable Process: the natural variation of the process is within the specification range.]

[Figure 1.5: A Process that is not Capable: the natural variation of the process exceeds the specification range.]

2 PROCESS CAPABILITY: A DISCERNING MEASURE OF PROCESS PERFORMANCE

We introduce now an important concept employed in thinking statistically about real life
processes. Process capability is the range over which the "natural variation" of a process
occurs as determined by the system of common or random causes; that is, process capability
indicates what the process can deliver under "stable" conditions when it is said to be under
statistical control.

The capability of a process is the fraction of output that can be routinely found to be within
specifications. A capable process has 99.73% or more of its output within specifications
(Figures 1.4 and 1.5).

Process capability refers to how capable a process is of making parts that are within the range
of engineering or customer specifications. Figure 1.4 shows the distribution of the dimension
of parts for a machining process whose output follows the bell-shaped normal distribution.
This process is capable because the distribution of its output is wholly within the spec range.
The process shown by Figure 1.5 is not capable.

Process Control on the other hand refers to maintaining the performance of a process at its
current capability level. Process control involves a range of activities such as sampling the
process product, charting its performance, determining causes of any excessive variation and
taking corrective action.

As mentioned above, the capability of a process is an expression of the comparison of product


specs to the range of natural variability seen in the process. In simple terms, process
capability expresses the proportion or fractional output that a process can routinely deliver
within the specifications. A process when subjected to a capability study answers two key
questions, "Does the process need to be improved?" and "How much does the process need to
improved?"

Knowing process capability allows manufacturing and quality managers to predict,


quantitatively, how well a process will meet specs and to specify equipment requirements and
the level of control necessary to maintain the firm's capability. For example, if the design specs
require a length of metal tubing to be cut within one-tenth of an inch, a process consisting of a
worker using a ruler and hacksaw will probably result in a large percentage of nonconforming
product. In this case, the process, due to its high inherent or natural variability, is not capable
of meeting the design specs. Management would face here three possible choices: (1)
measure each piece and either re-cut or scrap nonconforming tubing, (2) develop a better
process by investing in new technology, or (3) change the specifications.

Such decisions are usually based on economics. Remember that under routine production, the
cost to produce one unit of the product (i.e., its unit cost) is the same whether that unit
ultimately ends up falling within or outside specs. To recover the cost of the rejects, the firm
may be forced to raise the market price of the within-spec products (those that are acceptable
to customers) and thus weaken its competitive position.

"Scrap and/or rework out-of-spec or defective parts" is therefore a poor business strategy since
labour and materials have already been invested in the unacceptable product produced.
Additionally, inspection errors will probably allow some nonconforming products to leave the
production facility if the firm aims at making parts that just meet the specs. On the other
hand, new technology might require substantial investment the firm cannot afford.


[Figure 2.1: A Capable Process: the distribution of part dimensions (natural variation, mean ± 3σ) lies wholly within the lower and upper spec limits.]

[Figure 2.2: A Process with Natural Variability equal to the Spec Range: the ±3σ spread of part dimensions just matches the spec limits.]

[Figure 2.3: A Process with Natural Variability wider than the Spec Limits. This process is not capable.]


[Figure 2.4: An Off-centered Process: the ±3σ distribution of part dimensions is shifted toward one spec limit, so part of the output falls outside it.]

Changes in design, on the other hand, may sacrifice fitness-for-use requirements and result in
a lower quality product. Thus, these factors demonstrate the need to consider process
capability during product design and in the acceptance of new contracts. Many firms now
require process capability data from their vendors. Both ISO 9000 and QS 9000 quality
management systems require a firm to determine its process capability.

Process capability has three important components: (1) the design specifications, (2) the
centering of the natural variation, and (3) the range, or spread, of variation. Figures 2.1 to 2.4
illustrate four possible outcomes that can arise when natural process variability is compared
with product specs. In Figure 2.1, the specifications are wider than the natural variation; one
would therefore expect that this process will always produce conforming products as long as it
remains in control. It may even be possible to reduce costs by investing in a cheaper
technology that permits a larger variation in the process output. In Figure 2.2, the natural
variation and specifications are the same. A small percentage of nonconforming products
might be produced; thus, the process should be closely monitored.

In Figure 2.3, the range of natural variability is larger than the specification; thus, the current
process would not always meet specifications even when it is in control. This situation often
results from a lack of adequate communication between the design department and
manufacturing, a task entrusted to manufacturing engineers.

If the process is in control but cannot produce according to the design specifications, the
question should be raised whether the specifications have been correctly applied or if they
may be relaxed without adversely affecting the assembly or subsequent use of the product. If
the specifications are realistic and firm, an effort must be made to improve the process to the
point where it is capable of producing consistently within specifications.

Finally, in Figure 2.4, the capability is the same as in Figure 2.2, but the process average is
off-center. Usually this can be corrected by a simple adjustment of a machine setting or re-
calibrating the inspection equipment used to capture the measurements. If no action is taken,
however, a substantial portion of output will fall outside the spec limits even though the
process has the inherent capability to meet specifications.

We may define the study of process capability from another perspective. A capability study is
a technique for analyzing the random variability found in a production process. In every
manufacturing process there is some variability. This variability may be large or small, but it
is always present. It can be divided into two types:
•  Variability due to common (random) causes
•  Variability due to assignable (special) causes


The first type of variability is said to be inherent in the process and it can be expected to occur
naturally within a process. It is attributed to a multitude of factors which behave like a
constant system of chance causes affecting the process. Called common or random causes, such
factors include equipment vibration, passing traffic, atmospheric pressure or temperature
changes, electrical voltage or humidity fluctuations, changes in the operator's physical or
emotional condition, etc. Such are the forces that determine whether a tossed coin will end up
showing a head or a tail. Together, however, these "chances" form a
unique, stable and describable distribution. The behaviour of a process operating under such
conditions is predictable (Figure 2.5).

Inherent variability may be reduced by changing the environment or the technology, but given
a set of operating conditions, this variability can never be completely eliminated from a
process. Variability due to assignable causes, on the other hand, refers to the variation that
can be linked to specific or special causes that disturb a process. Examples are tool failure,
power supply interruption, process controller malfunction, adding wrong ingredients or wrong
quantities, switching a vendor, etc.

[Figure 2.5: Common Causes of Variation: when only common causes act on a process, its output over time forms a stable distribution and its variability can be predicted.]

[Figure 2.6: Both Common and Assignable Causes affecting the process: variability is present but cannot be predicted.]


Assignable causes are fewer in number and are usually identifiable through investigation on
the shop floor or an examination of process logs. The effect (i.e., the variation in the process)
caused by an assignable factor, however, is usually large and detectable when compared with
the inherent variability seen in the process. If the assignable causes are controlled properly,
the total process variability associated with them can be reduced and even eliminated. Still,
the effect of assignable causes cannot be described by a single distribution (Figure 2.6).

A capability study measures the inherent variability or the performance potential of a process
when no assignable causes are present (i.e., when the process is said to be in statistical
control). Since inherent variability can be described by a unique distribution, usually a normal
distribution, capability can be evaluated by utilizing the properties of this distribution. Recall
that capability is the proportion of routine process output that remains within product specs.

Even approximate capability calculations done using histograms enable manufacturers to take
a preventive approach to defects. This approach is in contrast with the traditional two-step
process: production personnel make the product while QC personnel inspect and screen out
products that do not meet specifications. Such QC is wasteful and expensive since it allows
plant resources including time and materials to be put into products that are not salable. It is
also unreliable since even 100 percent inspection would fail to catch all defective products.
SPC aims at correcting undesirable changes in the output of a process. Such changes may
affect the centering (or accuracy) of the process, or its variability (spread or precision). These
effects are graphically shown in Figure 2.7.

[Figure 2.7: Process Accuracy and Precision: four output distributions compared with the target, showing the combinations of good and poor accuracy (centering on the target) with good and poor precision (spread).]


[Figure 2.5: Distribution of Averages (xbar) compared to Distribution of Individuals (x): the distribution of subgroup averages is narrower than the natural variation of the individual measurements {x}; the control limits for xbar therefore lie inside the spec limits for x.]


Control Limits are Not an Indication of Capability

Those new to SPC often have the misconception that they don't need to calculate capability
indices. Some even think that they can compare their control limits to the spec limits. This is
not true, because control limits look at the distribution of averages (xbar, p, np, u, etc.) while
capability indices look at the distribution of individual measurements (x). The distribution of
x for a process will always be more spread out than the distribution of its xbar values (Figure
2.5). Therefore, the control limits are often within the specification limits but the plus-and-
minus-3-sigma distribution of individual part dimensions (x) is not.

The statistical theory of the central limit theorem says that the averages of samples or
subgroups {xbar} follow a normal distribution more closely than the individual values do. This is why we can easily
construct control charts on process data that are themselves not normally distributed. But
averages cannot be used for capability calculation because capability evaluates individual
parts delivered by a process. After all, parts get shipped to customers, not averages.

What Capability Studies Do for You

Capability studies are most often used to quickly determine whether a process can meet specs
or how many parts will exceed the specifications. However, there are numerous other
practical uses:
•  Estimating the percentage of defective parts to be expected
•  Evaluating new equipment purchases
•  Predicting whether design tolerances can be met
•  Assigning equipment to production
•  Planning process control checks
•  Analyzing the interrelationship of sequential processes
•  Making adjustments during manufacture
•  Setting specifications
•  Costing out contracts

Since a capability study determines the inherent reproducibility of parts created in a process, it
can even be applied to many problems outside the domain of manufacturing, such as
inspection, administration, and engineering.

There are instances where capability measurements are valuable even when it is not practical
to determine in advance if the process is in control. Such an analysis is called a performance
study. Performance studies can be useful for examining incoming lots of materials or one-
time-only production runs. In the case of an incoming lot, a performance study cannot tell us
that the process that produced the materials is in control, but it may tell us by the shape of the
distribution what percent of the parts are out of specs or, more importantly, whether the
distribution was truncated by the vendor sorting out the obvious bad parts.

How to set up a Capability Study

Before we set up a capability study, we must select the critical dimension or quality
characteristic (must be a measurable variable) to be examined. This dimension is the one that
must meet product specs. In the simplest case, the study dimension is the result of a single,
direct production and measurement process. In more complicated studies, the critical dimension
may be the result of several processing steps or stages. It may become necessary in these
cases to perform capability studies on each process stage. Studies on early process stages
frequently prove to be more valuable than elaborate capability studies done on later processes
since early processes lay the foundation (i.e., constitute the input) which may affect later
operations.


Once the critical dimension is selected, data measurements can be collected. This can be
accomplished manually or by using automatic gaging and fixturing linked to a data collection
device or computer. When measurements on a critical dimension are made, it is important we
ensure that the measuring instrument is as precise as possible, preferably one order of
magnitude finer than the specification. Otherwise, the measuring process itself will contribute
excess variation to the dimension data as recorded. Using handheld data collectors with
automatic gages may help reduce errors introduced by the process of measurement, data
recording, and transcription for post processing by computer.

The ideal situation for data collection is to collect as much data as possible over a defined
time period. This will yield a reliable capability number since it is based upon a large sample
size. In the practice of process improvement, determining process capability is Step 5:

Step 1 Gather process data


Step 2 Plot the data on control charts.
Step 3 Find the control limits.
Step 4 Get the process in control (in other words, identify and eliminate assignable causes).
Step 5 Calculate process capability.
Step 6 If process capability is not sufficient, improve the process (reduce its inherent
variation), and go back to Step 1.

Capability Calculations Condition 1: The Process Must be in Control!

Process capability formulas commonly used by industry require that the process must be in
control and normally distributed before one takes samples to estimate process capability. All
standard capability indices assume that the process is in control and the individual data follow
a normal distribution. If the process is not in control, capability indices are not valid, even if
they appear to indicate the process is capable.

Three different statistical tools are used together to determine whether a process is in control
and follows a normal distribution. These are
•  Control charts
•  Visual analysis of a histogram
•  Mathematical analysis of the distribution to test that the distribution is normal

Note that no single tool can do the job here and all three must be used together. Control
charts (discussed in detail later in this Unit) are the most common method for maintaining a
process in statistical control. For a process to be in control, all points plotted on the control
chart must be inside the control limits with no apparent patterns (e.g., trends) present. A
histogram (described below in Section 3) allows us to quickly see (a) if any parts are outside
the spec limits and (b) what the distribution's position is relative to the specification range. If
the process is one that is naturally a normal distribution, then the histogram should
approximate a bell-shaped curve if the process is in control. However, note that a process can
be in control but not have its individuals following a normal distribution if the process is
inherently non-normal.

Capability Calculations Condition 2: The Process Must be Inherently Normal

Many processes naturally follow a bell-shaped curve (a normal distribution) but some do not.
Examples of non-normal dimensions are roundness, squareness, flatness and positional
tolerances; they have a natural barrier at zero. In these cases, a perfect measurement is zero
(for example, no ovality in the roundness measurement). There can never be a value less than
zero. The standard capability indices are not valid for such non-normal distributions. Tests for
normality are available in SPC text books that can assist you to identify whether or not a
process is normal. If a process is not normal, you may have to use special capability measures
that apply to non-normal distributions [1].


Capability Index Formulas

We first define some special terms. USL stands for Upper Specification Limit and LSL for
Lower Specification Limit. Midpoint is the center of the spec limits. The midpoint is also
frequently referred to as the nominal value or the target. Tolerance is the distance between
the upper and lower spec limits (tolerance = USL - LSL).

The standard deviation for the distribution of individual data, one important variable in all
the capability index calculations, can be determined in either of two ways.

The standard textbook formula for σ, the population standard deviation, when the true
process mean (μ) is known and the population size (N) is finite, is

    σ = sqrt[ Σ_{i=1..N} (x_i − μ)² / N ]

The term population here means all parts produced, not just a sample.

In practice, however, we usually work with a sample of the population, a handful of items
collected or sampled from the production line, since this is more practical. In this case, the
formula for the standard deviation (s) is as follows:

    s = sqrt[ Σ_{i=1..n} (x_i − xbar)² / (n − 1) ]

where xbar is the sample average given by

    xbar = (Σ_{i=1..n} x_i) / n
Here s symbolizes the sample standard deviation and n is the sample size; s is an estimator of σ.
The standard deviation of a distribution is an indication of the dispersion (or variability) present
in the population of the data: the higher the variability, the larger s (and σ) will be. When over
30 individual observations are taken, the above formulas for the population standard deviation (σ)
and the sample standard deviation (s) yield virtually the same numerical result. In the following
pages we will use the symbol σ to denote the standard deviation as we continue our
discussion of process capability indices.

The standard deviation may also be estimated using Rbar (the average of the sample or subgroup
ranges Ri) and a constant that has been developed by statisticians for this purpose. The
formula for estimating sigma is:

    σ̂ = Rbar / d2

σ̂ ("sigma hat") represents the estimated standard deviation. Rbar is the average of the sample
ranges {Ri} collected over a period when the process is in control. The constant d2 varies with
sample size (n) and is listed in Table 2.1.

It is important to remember that the process data, i.e., the individual measurements {xi},
must be normally distributed and in control in order to use the estimated value of σ in
process capability calculations. If both of these conditions are not met, the estimated standard
deviation (σ̂) will not be valid. If the process is normally distributed and in control,
either method is acceptable and usually yields about the same result. Also, remember that


neither the actual nor the estimated σ used in calculating capability will be meaningful if the
process is inherently non-normal. Separate special methods are available for finding capability
measures for non-normal distributions arising from non-normal processes.

TABLE 2.1: FACTORS FOR CALCULATING CONTROL CHART LIMITS

xbar Chart s Chart R Chart


n A A2 A3 c4 B3 B4 B5 B6 d2 d3 D1 D2 D3 D4
2 2.121 1.880 2.659 0.7979 0 3.267 0 2.606 1.128 0.853 0 3.686 0 3.267
3 1.732 1.023 1.954 0.8862 0 2.568 0 2.276 1.693 0.888 0 4.358 0 2.574
4 1.500 0.729 1.628 0.9213 0 2.266 0 2.088 2.059 0.880 0 4.698 0 2.282
5 1.342 0.577 1.427 0.9400 0 2.089 0 1.964 2.326 0.864 0 4.918 0 2.114
6 1.225 0.483 1.287 0.9515 0.030 1.970 0.029 1.874 2.534 0.848 0 5.078 0 2.004
7 1.134 0.419 1.182 0.9594 0.118 1.882 0.113 1.806 2.704 0.833 0.204 5.204 0.076 1.924
8 1.061 0.373 1.099 0.9650 0.185 1.815 0.179 1.751 2.847 0.820 0.388 5.306 0.136 1.864
9 1.000 0.337 1.032 0.969 0.239 1.761 0.232 1.707 2.970 0.808 0.547 5.393 0.184 1.816
10 0.949 0.308 0.975 0.9727 0.284 1.716 0.276 1.669 3.078 0.797 0.687 5.469 0.223 1.777
11 0.905 0.285 0.927 0.9754 0.321 1.679 0.313 1.637 3.173 0.787 0.811 5.535 0.256 1.744
12 0.866 0.266 0.886 0.9776 0.354 1.646 0.346 1.610 3.258 0.778 0.922 5.594 0.283 1.717
13 0.832 0.249 0.850 0.9794 0.382 1.618 0.374 1.585 3.336 0.770 1.025 5.647 0.307 1.693
14 0.802 0.235 0.817 0.9810 0.406 1.594 0.399 1.563 3.407 0.763 1.118 5.696 0.328 1.672
15 0.775 0.223 0.789 0.9823 0.428 1.572 0.421 1.544 3.472 0.756 1.203 5.741 0.347 1.653
16 0.750 0.212 0.763 0.9835 0.448 1.552 0.440 1.526 3.532 0.750 1.282 5.782 0.363 1.637
17 0.728 0.203 0.739 0.9845 0.466 1.534 0.458 1.511 3.588 0.744 1.356 5.820 0.378 1.622
18 0.707 0.194 0.718 0.9854 0.482 1.518 0.475 1.496 3.640 0.739 1.424 5.856 0.391 1.608
19 0.688 0.187 0.698 0.9862 0.497 1.503 0.490 1.483 3.689 0.734 1.487 5.891 0.403 1.597
20 0.671 0.180 0.680 0.9869 0.510 1.490 0.504 1.470 3.735 0.729 1.549 5.921 0.415 1.585
21 0.655 0.173 0.663 0.9876 0.523 1.477 0.516 1.459 3.778 0.724 1.605 5.951 0.425 1.575
22 0.640 0.167 0.647 0.9882 0.534 1.466 0.528 1.448 3.819 0.720 1.659 5.979 0.434 1.566
23 0.626 0.162 0.633 0.9887 0.545 1.455 0.539 1.438 3.858 0.716 1.710 6.006 0.443 1.557
24 0.612 0.157 0.619 0.9892 0.555 1.445 0.549 1.429 3.895 0.712 1.759 6.031 0.451 1.548
25 0.600 0.153 0.606 0.9896 0.565 1.435 0.559 1.420 3.931 0.708 1.806 6.056 0.459 1.541

If you have both estimated and actual capability indices available, choose one method and stay
with it. Avoid the temptation to look at both and choose the one that is better, since this will
introduce variation in results.

The Cp Index

The most commonly used capability indices are Cp and Cpk. Cp, a measure of the dispersion of
the process, is the ratio of the tolerance to 6σ. The formula is

    Cp = Tolerance / (6σ)

The quantity "6σ" in the Cp formula is derived from the fact that, in a normal distribution,
99.73% of the parts will fall within a 6σ spread, i.e., within (μ ± 3σ), when the process is
being disturbed only by random or chance causes.

Suppose that the tolerance and measurements on a process are as follows: USL = 5.0, LSL =
1.0, midpoint of spec = 3.0, average of the sample averages (xbar) = 2.0, σ = 0.5. Then the
tolerance = (5.0 − 1.0) = 4.0, xbar + 3σ = 3.5 and xbar − 3σ = 0.5. The Cp for the
process output data sampled is therefore 4.0/3.0 = 1.33.


As you can see from the Cp formula, values for Cp can range from near zero to very large
positive numbers. When Cp is less than 1, the tolerance is less than 6σ, the inherent or
natural spread of the process output. When Cp is greater than 1, the tolerance is greater than
6σ. For a process, the greater the Cp index number, the higher its process capability.

" Cp" alone, however, does not tell the complete story about the process!

Note that Cp is only a measure of the dispersion or spread of the distribution. It is not a
measure of the "centeredness" (where the mean of the process output is in relation to the spec
midpoint). An off-centered process could be a problematic situation. An example is the
situation shown by Figure 2.4. Both Figures 2.2 and 2.4 display processes that have a Cp =
1.0. But the process shown by Figure 2.4 has a significant fraction of its output falling
outside (below) the lower spec limit.

This is why Cp is never used alone as a measure of process capability. Cp only shows how
good the process would be if it could be centered with some adjustments. The
alternative capability index is Cpk, described below.

The Cpk Index

While Cp is only a measure of dispersion, Cpk measures both dispersion and centeredness.
The Cpk formula takes into account both the process spread and the location of the process
average in relation to the spec midpoint. The formula is as follows:

    Cpk = the lesser of  (USL − mean) / (3σ)  and  (mean − LSL) / (3σ)

"The lesser of" actually determines how capable the process is on the worst side. Using the
data of the previous example we obtain

Cpk = The lesser of (5.0 - 2.0)/1.5 or (2.0 - 1.0)/1.5

= Min (2.0, 0.67) = 0.67
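For readers who prefer to see the arithmetic spelled out, this short Python sketch reproduces the worked example above (USL = 5.0, LSL = 1.0, mean = 2.0, σ = 0.5); the helper function is a generic illustration, not part of the original text.

```python
# A short sketch reproducing the worked example: Cp from the spread alone,
# Cpk from the spread and the location of the process mean.
def cp_cpk(usl, lsl, mean, sigma):
    cp = (usl - lsl) / (6 * sigma)
    cpk = min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))
    return cp, cpk

cp, cpk = cp_cpk(usl=5.0, lsl=1.0, mean=2.0, sigma=0.5)
print(f"Cp  = {cp:.2f}")   # 1.33 -- the spread alone would fit within the tolerance
print(f"Cpk = {cpk:.2f}")  # 0.67 -- the off-centre mean pushes output toward the LSL
```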

The greater the Cpk value, the higher the fraction of the output meeting specs and hence the
better the process. A Cpk value greater than 1.0 means that the 6σ spread of the data falls
completely within the spec limits. An example is the process shown in Figure 1.4.

A Cpk value of 1.0 indicates that 99.73% of the parts produced by the process would be within
the spec limits. In this process only about 3 out of a thousand parts would be scrapped or
rejected. In other words, such a process just meets specs.

Do we need to improve the process (i.e., reduce its inherent variability) further? Improvement
beyond just meeting specs may greatly improve the fit of the parts during assembly and also
cut warranty costs.

The different special process conditions detectable by Cpk calculations are as follows.


Cpk value                    Process Output Distribution

> 1.0                        All output completely within spec limits

= 1.0                        One end of the 6σ spread falls on a spec limit; the other
                             end may be within the other spec limit

0 ≤ Cpk < 1.0                Part of the 6σ spread falls outside a spec limit

Negative Cpk (Cpk < 0.0)     Process mean is not within the spec limits

Many companies with QS 9000 registration require their vendors to demonstrate Cpk values
of 1.33 or better. A Cpk of 1.33 corresponds to about 99.994% of products within specs.

A Cpk value of 2.0 is the coveted "six sigma" quality level. To reach this stage advanced
SQC methods including design-of-experiments (DOE) would be required. At this level no
more than 3 or 4 parts per million products produced would fall outside the spec limits. Such
small variation is not visible on xbar-R control charts in the normal operation of the process.
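The percentages quoted above can be checked against the normal distribution; the sketch below assumes a centered, normally distributed process and converts a Cpk value into the approximate fraction of output falling within the spec limits.

```python
# Illustrative sketch (assumes a centered, normal process): the nearer spec limit lies
# 3 * Cpk standard deviations from the mean, so the fraction within both limits is
# approximately Phi(3*Cpk) - Phi(-3*Cpk).
from statistics import NormalDist

def fraction_within_specs(cpk):
    nd = NormalDist()
    return nd.cdf(3 * cpk) - nd.cdf(-3 * cpk)

for cpk in (1.0, 1.33):
    f = fraction_within_specs(cpk)
    print(f"Cpk = {cpk:.2f}: about {f * 100:.4f}% within specs "
          f"({(1 - f) * 1e6:.0f} ppm outside)")
```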

Remember that control and capability are two very different concepts. As shown in Figure
2.6, in general, a process may be capable or not capable, or in control or out of control,
independently of each other. Clearly, we would like every process to be both capable and in
(statistical) control. If a process is neither capable nor in control, we must take two corrective
actions to improve it. First we should get it in a state of control by removing special causes of
variation, and then attack the common causes to improve its capability. If a process is capable
but not in control (as the above example illustrated), we should work to get it back in control.

[Figure 2.6: Capability versus Control (arrows indicate the directions of appropriate management action): a 2×2 grid of capability state (capable / not capable) against control state (in control / out of control). The capable and in-control cell is the ideal process.]

Statistical Process Control (SPC) Methodology

Control charts, like the other basic tools for quality improvement, are relatively simple to use.
In general, control charts have two objectives, (a) help restore accuracy of the process so that
the process average stays near the target, and (b) help minimize variation in the process to
ensure that good precision is maintained in the output (see Figure 2.7).

Control charts have three basic applications: (1) to establish a state of statistical control, (2) to
monitor a process and signal when the process goes out of control, and (3) to determine
process capability. The following is a summary of the steps required to develop and use
control charts. Steps 1 through 4 focus on establishing a state of statistical control; in step 5,
the charts are used for ongoing monitoring; and finally, in step 6, the data are used for process
capability analysis.


1. Preparation
a. Choose the variable or attribute to be measured
b. Determine the basis, size, and frequency of sampling.
c. Set up the correct control chart.
2. Data Collection
a. Record the data
b. Calculate relevant statistics: averages, ranges, proportions, and so on.
c. Plot the statistics on the chart.
3. Determination of trial control limits
a. Draw the center line (process average) on the chart.
b. Compute the upper and lower control limits.
4. Analysis and interpretation
a. Investigate the chart for lack of control
b. Eliminate out-of-control points.
c. Re-compute control limits if necessary.
5. Use as a problem-solving tool
a. Continue data collection and plotting.
b. Identify out-of-control situations and take corrective action.
6. Use the control chart data to determine process capability, if desired.

In Section 3 we review the "seven quality improvement tools", simple methods popularized
by the Japanese, that can do a great deal in bringing a poorly performing process into control
and then improving it further.

In Section 4 we discuss the SPC methodology in detail and the construction, interpretation,
and use of the different types of process control charts. Although many different charts will
be described, they will differ only in the type of measurement for which the chart is used; the
same analysis and interpretation methodology described applies to each of them.


3 THE SEVEN QUALITY IMPROVEMENT TOOLS

In SPC, numbers and information form the basis for decisions and actions. Therefore, a thorough
data recording system, manual or otherwise, would be an essential enabler for SPC. In order to
allow one to interpret fully and derive maximum use from quality-related data, over the past fifty
years a set of simple statistical 'tools' has evolved. These tools offer any organization an easy
means to collect, present and analyze most of such data. In this section we briefly review these
tools. An extended description of them may be found in the quality management standard ISO
9004-4 (1994).

In the 1950s Japanese industry began to learn and apply statistical methods in earnest,
methods that the American statisticians Walter Shewhart and W. Edwards Deming developed in the
1930s and 1940s to help manage quality. Subsequently, progress in continuous quality
improvement in Japan led to widespread use of many simple statistical tools on the shop
floor. Kaoru Ishikawa, head of the Japanese Union of Scientists and Engineers (JUSE), later
formalized the use of these tools in Japanese manufacturing with the introduction of the 7 Quality
Control (7 QC) tools. The seven Ishikawa tools reviewed below are now an integral part of
quality control on the shop floor around the world. Many Indian industries use them routinely.

3.1 Flowchart

The flowchart lists the order of activities in a project or process and their interdependency. It
expresses detailed process knowledge. To express this knowledge certain standard symbols are
used. The oval symbol indicates the beginning or end of the process. The boxes indicate action
items while diamonds indicate decision or check points. The flowchart can be used to identify the
steps affecting quality and the potential control points. Another effective use of the flowchart
would be to map the ideal process and the actual process and to identify their differences as the
targets for improvements. Flowcharting is often the first step in Business Process Reengineering
(BPR).

[A simple flowchart: START → MIX → CHECK; if the check is not ok, RE-MIX and check again; if ok, STOP.]

3.2 Histogram

The histogram is a bar chart showing a distribution of variable quantities or characteristics.


An example of a "live" histogram would be to line up by height a group of students enrolled in
a course. Normally, one individual would be the tallest and one the shortest, with a cluster of
individuals bunched around the average height.

In manufacturing, the histogram can rapidly identify the nature of quality problems in a
process by the shape of the distribution as well as the width of the distribution. It informally
establishes process capability. It can also help compare two or more distributions.


[A histogram of weight (gms): frequency plotted for bins from 5.1 to 5.7, with the lower and upper spec limits marking the tolerance band.]
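As a quick illustration of how the frequency counts behind such a histogram are built, the sketch below bins 200 simulated weights; the data and the bin width are assumptions, not the example shown in the figure.

```python
# A minimal sketch: binning simulated weight measurements into a frequency table,
# the raw material of a histogram.
import random
random.seed(2)

weights = [random.gauss(5.4, 0.1) for _ in range(200)]   # hypothetical weights in grams
bin_width = 0.1
low = 5.0

counts = {}
for w in weights:
    edge = low + bin_width * int((w - low) / bin_width)   # left edge of the bin
    counts[edge] = counts.get(edge, 0) + 1

for edge in sorted(counts):
    print(f"{edge:.1f}-{edge + bin_width:.1f}: {'#' * counts[edge]}")
```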

3.3 Pareto Chart

The Pareto chart, as shown below, indicates the distribution of effects attributable to various
causes or factors arranged from the most frequent to the least frequent. This tool is named
after Vilfredo Pareto, the Italian economist who observed that wealth is not evenly
distributed and that a few people hold most of the money.

This tool is a graphical picture of the relative frequencies of different types of quality
problems with the most frequent problem type obtaining clear visibility. Thus the Pareto chart
identifies the vital few and the trivial many and it highlights problems that should be worked
first to get the most improvement. Historically, 80% of problems are caused by 20% of the
factors.

[A Pareto chart of percent defective by cause category, in decreasing order: poor dimensions, operator errors, calibration, material, misc.]

3.4 Cause and Effect Diagram

The cause and effect diagram is also called the fishbone chart because of its appearance and
the Ishikawa diagram after the man who popularized its use in Japan. Its most frequent use is
to list the causes of some particular quality problem or defect. The lines coming off the core
horizontal line are the main causes while the lines coming off those are subcauses.

The cause and effect diagram identifies problem areas where data should be collected and
analyzed. It is used to develop reaction plans to help investigate out-of-control points found on
control charts. It is also the first step for planning design of experiments (DOE) studies and
for applying Taguchi methods to improve product and process designs.


[A cause-and-effect diagram for a defect, with main branches Machine (speed, tool), Man (training, attention), Methods (mixing, inspection) and Materials (quantity, quality).]

3.5 Scatter Diagram

The scatter diagram shows any existing pattern in the relationship between two variables
that are thought to be related. For example, is there a relationship between outside
temperature and cases of the common cold? As temperatures drop, do cases of the common
cold rise in number? The closer the scatter points hug a diagonal line, the stronger the
relationship between the variables being studied. Thus, the scatter diagram may
be used to develop informal models to predict the future based on past correlations.

[A scatter diagram of cases of the common cold per 100 persons against outdoor temperature (°F), showing a negative correlation.]
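Echoing the temperature-and-colds example, the following sketch computes the correlation coefficient that underlies such a scatter diagram; the five data pairs are hypothetical.

```python
# A small sketch with hypothetical data: the sample correlation coefficient r
# summarizes how tightly the scatter points hug a straight line.
temps = [30, 40, 50, 60, 70]      # outdoor temperature (F)
colds = [18, 14, 10, 7, 4]        # cases of the common cold per 100 persons

n = len(temps)
mx, my = sum(temps) / n, sum(colds) / n
cov = sum((x - mx) * (y - my) for x, y in zip(temps, colds)) / (n - 1)
sx = (sum((x - mx) ** 2 for x in temps) / (n - 1)) ** 0.5
sy = (sum((y - my) ** 2 for y in colds) / (n - 1)) ** 0.5

print(f"r = {cov / (sx * sy):.3f}")   # close to -1: strong negative correlation
```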

3.6 Run Chart

The run chart shows the history and pattern of variation. It is a plot of data points in time
sequence, connected by a line. Its primary use is in determining trends over time. The analyst
should indicate on the chart whether up is good or down is good. This tool is used at the
beginning of the change process to see what the problems are. It is also used at the end (or
check) part of the change process to see whether the change made has resulted in a permanent
process improvement.

[A run chart of sample averages: the average of each sample plotted against sample number.]

3.7 Control Chart

Whereas a histogram gives a static picture of process variability, a run chart or a control chart
illustrates the dynamic performance (i.e., performance over time) of the process. The control
chart in particular is a powerful process quality monitoring device and it constitutes the core
of statistical process control (SPC). It is a line chart marked with control limits at 3 standard
deviations (σ) above and below the average quality level. These limits are based on the
statistical studies of shop data conducted in the 1930s by Dr Walter Shewhart. By comparing
certain measures of the process output such as xbar, R, p, u, c etc. (see Section 4) to their
control limits one can determine quality variation that is due to common or random causes and
variation that is produced by the occurrence of assignable events (special causes).

Failure to distinguish between common causes and special causes of variation can actually
increase the variation in the output of a process. This is often due to the mistaken belief that
whenever process output is off target, some adjustment must be made. However, knowing
when to leave a process alone is an important step in maintaining control over a process.
Equally important is knowing when to take action to prevent the production of nonconforming
product. Using actual industry data Shewhart demonstrated that a sensible strategy to control
quality is to first eliminate the special causes with the help of the control chart and then
systematically reduce the common causes. This strategy reduces the variation in process
output with a high degree of reliability while it improves the acceptability of the product.

Statistical process control (SPC) is actually a methodology for monitoring a process to (a)
identify the special causes of variation and (b) signal the need to take corrective action when it
is appropriate. When special causes are present, the process is deemed to be out of control. If
the variation in the process is due to common causes alone, the process is said to be in
statistical control. A practical definition of statistical control is that both the process averages
and variances are constant over time (Figure 2.5). Such a process is stable and predictable.

SPC uses control charts as the basic tool to improve both quality and productivity. SPC
provides a means by which a firm may demonstrate its quality capability, an activity necessary
for survival in today's highly competitive markets. Also, many customers (e.g., the
automotive companies) now require the evidence that their suppliers use SPC in managing
their operations. Note, however, that since SPC requires processes to display measurable
variation; even though it is quite effective for companies in the early stages of quality efforts,
it becomes ineffective in producing improvements once quality level approaches six-sigma.

Before we leave this section, we repeat that process capability calculations make little
sense if the process is not in statistical control because the data are confounded by special (or
assignable) causes and thus do not represent the inherent capability of the process. The simple
tools described in this section may be good enough to enable you to check this. To see this,
consider the data in Table 3.1, which shows 150 measurements of a quality characteristic from
a manufacturing process with specifications 0.75 ± 0.25. Each row corresponds to a sample of
size 5 taken every 15 minutes. The average of each sample is also given in the last column.
A frequency distribution and histogram of these data is shown in Figure 3.1. The data form a
relatively symmetric distribution with a mean of 0.762 and standard deviation 0.0738. Using
these values, we find that Cpk = 1.075 and form the impression that the process capability is at
least marginally acceptable.

Some key questions, however, remain to be answered. Because the data were taken over an
extended period of time, we cannot determine if the process remained stable throughout that
period. In a histogram the dimension of time is not considered. Thus, histograms do not
allow you to distinguish between common and special causes of variation. It is unclear
whether any special causes of variation are influencing the capability index. If we plot the
average of each sample against the time at which the sample was taken (since the time
increments between samples are equal, the sample number is an appropriate surrogate for
time), we obtain the run chart shown in Figure 3.6.


Frequency distribution of the 150 measurements (LSL = 0.5, USL = 1.0):

Upper bin limit:  0.50  0.55  0.60  0.65  0.70  0.75  0.80  0.85  0.90  0.95  1.00  More
Frequency:           0     1     1    10    14    40    31    37    14     1     0     0

Average = 0.762     Std Dev = 0.0738

Figure 3.1: Frequency Distribution and Histogram of a Typical Process

Table 3.1 Thirty samples of Quality Measurements.

Sample # X1 X2 X3 X4 X5 Average
1 0.682 0.689 0.776 0.798 0.714 0.732
2 0.787 0.860 0.601 0.749 0.779 0.755
3 0.780 0.667 0.838 0.785 0.723 0.759
4 0.591 0.727 0.812 0.775 0.730 0.727
5 0.693 0.708 0.790 0.758 0.671 0.724
6 0.749 0.714 0.738 0.719 0.606 0.705
7 0.791 0.713 0.689 0.877 0.603 0.735
8 0.744 0.779 0.660 0.737 0.822 0.748
9 0.769 0.773 0.641 0.644 0.725 0.71
10 0.718 0.671 0.708 0.850 0.712 0.732
11 0.787 0.821 0.764 0.658 0.708 0.748
12 0.622 0.802 0.818 0.872 0.727 0.768
13 0.657 0.822 0.893 0.544 0.750 0.733
14 0.806 0.749 0.859 0.801 0.701 0.783
15 0.660 0.681 0.644 0.747 0.728 0.692
16 0.816 0.817 0.768 0.716 0.649 0.753
17 0.826 0.777 0.721 0.770 0.809 0.781
18 0.828 0.829 0.865 0.778 0.872 0.834
19 0.805 0.719 0.612 0.938 0.807 0.776
20 0.802 0.756 0.786 0.815 0.801 0.792
21 0.876 0.803 0.701 0.789 0.672 0.768
22 0.855 0.783 0.722 0.856 0.751 0.793
23 0.762 0.705 0.804 0.805 0.809 0.777
24 0.703 0.837 0.759 0.975 0.732 0.801
25 0.737 0.723 0.776 0.748 0.732 0.743
26 0.748 0.686 0.856 0.811 0.838 0.788
27 0.826 0.803 0.764 0.823 0.886 0.82
28 0.728 0.721 0.820 0.772 0.639 0.736
29 0.803 0.892 0.740 0.816 0.770 0.804
30 0.774 0.837 0.872 0.849 0.818 0.83

The run chart hints that the mean might have shifted up at about sample #17. In fact, the
average for the first 16 samples turns out to be 0.738 while for the remaining samples it is
0.789. Therefore, although the overall average is close to the target specification (0.75), at no
time was the actual process operating centered near the target. In the next section you will see
why we should conclude that this process is not in statistical control and therefore we should
not pay much attention to the process capability index Cpk calculated as 1.075.
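
A quick way to see the shift numerically is to split the column of sample averages from Table 3.1 at the suspected change point and compare the two means. The short Python sketch below (Python is used here purely for illustration; it is not part of the original unit) reproduces the two figures quoted above.

averages = [0.732, 0.755, 0.759, 0.727, 0.724, 0.705, 0.735, 0.748, 0.710, 0.732,
            0.748, 0.768, 0.733, 0.783, 0.692, 0.753, 0.781, 0.834, 0.776, 0.792,
            0.768, 0.793, 0.777, 0.801, 0.743, 0.788, 0.820, 0.736, 0.804, 0.830]

first, rest = averages[:16], averages[16:]        # samples 1-16 versus samples 17-30
print(round(sum(first) / len(first), 3))          # about 0.738
print(round(sum(rest) / len(rest), 3))            # about 0.789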

SPC exists because there is, and will always be, variation in the characteristics of materials, in
parts, in services, and in people. With the help of the simple tools described in this section
SPC can provide us the means to understand and assess such variability, and then manage it.


4 CONTROL CHARTS FOR VARIABLES DATA

As we mentioned in the previous section, the control chart is a powerful process quality
monitoring device and it constitutes the core of statistical process control (SPC). In the SPC
methodology, knowing when to leave a process alone is an important step in maintaining
control over a process. Control charts enable us to do that.

Equally important is knowing when to take action to prevent the production of nonconforming
product. Indeed, failure to distinguish between variation produced by common causes and
special causes can actually increase the variation in the output of a process. Again, control
charts empower us here.

When the chart crosses any of its control limits, special causes are indicated to be present.
The process is now deemed to be out of control and is investigated to remove the source of
disturbance. Otherwise, when variation stays within control limits, it is indicated to be due to
common causes alone. Now the process is said to be in "statistical control" and it should be
left alone. Statistical control is defined as the state in which both the process averages and
variances are constant over time and hence the process output is stable and predictable (Figure
2.5). Control charts help us in bringing a process within such control.

Most processes that deliver a "product" or a "service" may be monitored by measuring their
output over time and then plotting these measurements appropriately. However, processes
differ in the nature of that output. Variables data are those output characteristics that are
measurable along a continuous scale. Examples of variables data are length, weight, or
viscosity. By contrast, some output may only be judged to be good or bad, or "acceptable" or
"unacceptable", such as print quality of a photocopier or defective knots produced per meter
by a weaving machine. In such cases we categorize the output as an attribute that is either
acceptable or unacceptable; we cannot put it on a continuous scale as done with weight or
viscosity. However, SPC methodology provides us with a variety of different types of control
charts to work with such diversity.

For variables data control charts most commonly used are the "xbar" chart and the "R-chart"
(range chart). The xbar chart is used to monitor the centering of the process to help control
its accuracy (Figure 2.4). The R-chart monitors the dispersion or precision of the process.

Range R rather than standard deviation is used as a measure of variation simply to enable
workers on the factory floor to perform control chart calculations by hand, as done for example
in the turbine blade machining shop in BHEL, Hardwar. For large samples and when data can
be processed by a computer, the standard deviation is a better measure of variability.

4.1 Constructing xbar and R-Charts and Establishing Statistical Control

The first step in developing xbar and R-charts is to gather data. Usually, about 25 to 30
samples are collected. Samples of size between 3 and 10 are generally used, with 5 being the
most common. The number of samples is denoted by k, and n denotes the sample size. For
each sample i, the mean (denoted by xbari) and the range (Ri) are computed. These values are
then plotted on their respective control charts. Next, the overall mean (x_doublebar) and
average range (Rbar) calculations are made. These values specify the center lines for the
xbar and R-charts, respectively. The overall mean is the average of the sample means xbari.

xbari = (x1 + x2 + ... + xn) / n

x_doublebar = (xbar1 + xbar2 + ... + xbark) / k
The average range is similarly computed, using the formulas


Ri = max(xi) - min(xi)

Rbar = (R1 + R2 + ... + Rk) / k
The average range and average mean are used to compute control limits for the R-and xbar
charts. Control limits are easily calculated using the following formulas:

UCLR = D4 Rbar

LCLR = D3 Rbar

UCLxbar = x_doublebar + A2 Rbar

LCLxbar = x_doublebar - A2 Rbar

where the constants D3, D4 and A2 depend on sample size n and may be found in Table 2.1.

Control limits represent the range between which all points are expected to fall if the process
is in statistical control, i.e., operating only under the influence of random or common causes.
If any points fall outside the control limits or if any unusual patterns are observed, then some
special (called assignable) cause has probably affected the process. In such a case the process
should be studied using a "reaction plan" (Figure 4.1), process logs and other tools and
devices to determine and eliminate that cause.

Note, however, that if assignable causes are affecting the process, then the process data are not
representative of the true state of statistical control and hence the calculations of the center
line and control limits would be biased. To be effective, SPC requires the center line and the
control limit calculations to be unbiased. Therefore, before control charts are set up for routine
use by the factory, any out-of-control data points should be eliminated from the data table and
new values for x_doublebar, Rbar, and the control limits re-computed, as illustrated below.

In order to determine whether a process is in statistical control, the R-chart is always analyzed
first. Since the control limits in the xbar chart depend on the average range, special causes in
the R-chart may produce unusual patterns in the xbar chart, even when the centering of the
process is in control. (An example of this is given later in this unit). Once statistical control is
established for the R-chart, attention may turn to the xbar chart.
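
Before turning to the worked example, here is a minimal Python sketch (not part of the original text) of the construction just described. The constants A2 = 0.577, D3 = 0 and D4 = 2.114 are the standard values for subgroups of size 5 (see Table 4.1), and the three subgroups in the usage line are made-up illustrative numbers.

def xbar_r_limits(samples, A2=0.577, D3=0.0, D4=2.114):
    """samples: a list of subgroups, each a list of n = 5 measurements."""
    xbars = [sum(s) / len(s) for s in samples]       # subgroup means
    ranges = [max(s) - min(s) for s in samples]      # subgroup ranges
    x_doublebar = sum(xbars) / len(xbars)            # grand mean (center line, xbar chart)
    rbar = sum(ranges) / len(ranges)                 # average range (center line, R chart)
    return {"UCL_xbar": x_doublebar + A2 * rbar,
            "LCL_xbar": x_doublebar - A2 * rbar,
            "UCL_R": D4 * rbar,
            "LCL_R": D3 * rbar}

# Hypothetical subgroups, for illustration only
print(xbar_r_limits([[0.74, 0.76, 0.75, 0.73, 0.77],
                     [0.75, 0.74, 0.78, 0.76, 0.75],
                     [0.73, 0.77, 0.74, 0.76, 0.75]]))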

Figure 4.2 A Control Chart Data Sheet

Part Name: Process Step: Spec Limits:


Operator: Machine: Gage: Unit of Measure:
Date
Time
Sample1
2
3
4
5
xbar
R
Notes


Figure 4.2 shows a typical data sheet used for recording. This form provides space (under
"Notes") for descriptive information about the process and for recording of sample
observations and computed statistics. Subsequently the control charts are drawn. The
construction and analysis of control charts may be best seen by example as follows.

Example 3: Control Charts for Silicon Wafer Production.


The thickness of silicon wafers used in the production of semiconductors must be carefully
controlled during wafer manufacture. The tolerance of one such wafer is specified as
 0.0050 inches. In one production facility, three wafers were randomly selected each hour
and the thickness measured carefully (Figure 4.3). Subsequently, xbar and R were calculated.
For example, the average of the first sample was

xbar1 = (41 + 70 + 22)/3

= 133/3 = 44.33

The range of the first sample was R1 = 70 - 22 = 48. (Note: In practice, xbar and R calculations
may be rounded to the nearest integer for simplicity).

The calculations of sample averages, range, overall mean, and the control limits are shown on
the worksheet displayed in Figure 4.3. The average range is the sum of the sample ranges
(676) divided by the number of samples (25); the overall mean is the sum of the sample
averages (1,221) divided by the number of samples (25). Since sample size is 3 here, the
factors used in computing the control limits are A2 = 1.023 and D4 = 2.574 (Table 2.1). For
sample size of 6 or less, factor D3 is 0; therefore, the lower control limit (LCL) on the range
chart is zero. The center line and control limits are drawn on the charts shown in Figure 4.3.
Note that as a convention, out-of-control points are noted directly on the charts.

On examining the R chart first we infer that the process is in control because all points lie
within the control limits and no unusual patterns exist. On the xbar chart, however, the xbar
value for sample 17 lies above the upper control limit. On investigation we find that some
suspicious cleaning material had been used in the process at this point (an assignable cause of
variation). Therefore, data from sample 17 should be eliminated from the control chart
calculations and the control limits re-done. Figure 4.4 shows the revised calculations after
sample 17 was removed. The revised center lines and control limits are shown. The resulting
xbar and R charts both appear to be in control.

Figure 4.3 Silicon Wafer Thickness Data as observed

Date 12/1 12/1 12/1 12/1 12/1 12/1 13/1 13/1 13/1 13/1 13/1 13/1 14/1 14/1 14/1 14/1 14/1 14/1 15/1 15/1 15/1 15/1 15/1 15/1 16/1
Time 8:00 11:0 2:00 5:00 8:00 11.0 8:00 11:0 2:00 5:00 8:00 11.0 8:00 11:0 2:00 5:00 8:00 11.0 8:00 11:0 2:00 5:00 8:00 11.0 8:00
1 41 78 84 60 46 64 43 37 50 57 24 78 51 41 56 46 99 71 41 41 22 62 64 44 41
2 70 53 34 36 47 16 53 43 29 83 42 48 57 29 64 41 86 54 2 39 40 70 52 38 63
3 22 68 48 25 29 56 64 30 57 32 39 39 50 35 36 16 98 39 53 36 46 46 57 60 62
Calculations:
Sum            133  199  166  121  122  136  160  110  136  172  105  165  158  105  156  103  283  164   96  116  108  178  173  142  166
Average (xbar) 44.33 66.33 55.33 40.33 40.67 45.33 53.33 36.67 45.33 57.33 35.00 55.00 52.67 35.00 52.00 34.33 94.33 54.67 32.00 38.67 36.00 59.33 57.67 47.33 55.33
Range (R)       48   25   50   35   18   48   21   13   28   51   18   39    7   12   28   30   13   32   51    5   24   24   12   22   22
Notes: gas flow adjusted; AC malfunction; pump stalled; new cleaning tried once


Control Limit Calculations

Calculation basis: All Subgroups included
Rbar                                   676/25 = 27.0
x_doublebar                            1221/25 = 48.8
Spec Midpoint                          50.0
A2 × Rbar                              1.023 × 27.0 = 27.6
UCLxbar = x_doublebar + A2 × Rbar      76.4
LCLxbar = x_doublebar - A2 × Rbar      21.2
UCLR = D4 × Rbar                       69.5
LCLR = D3 × Rbar                       0.0

The Initial xbar Control Chart

The 25 sample averages from Figure 4.3 are plotted against the center line x_doublebar = 48.8, with UCLxbar = 76.4 and LCLxbar = 21.2.

[INITIAL Xbar CONTROL CHART: the point for sample 17 (xbar = 94.33) lies beyond the UCL, i.e., the process is out of control at that point.]

The 25 sample ranges are plotted against UCLR = 69.5 and LCLR = 0.0.

[INITIAL R CHART: all plotted ranges fall within the control limits.]

Figure 4.4 Revised Control Chart Calculations

To reveal the inherent variability of the process, the control limits must be re-calculated, after
removing any "out of control" points.

Calculation basis:                     All Subgroups included     Subgroup #17 Removed

Rbar                                   676/25 = 27.0              663/24 = 27.6
x_doublebar                            1221/25 = 48.8             1127/24 = 47.0
Spec Midpoint                          50.0                       50.0
A2 × Rbar                              1.023 × 27.0 = 27.6        1.023 × 27.6 = 28.2
UCLxbar = x_doublebar + A2 × Rbar      76.4                       75.2
LCLxbar = x_doublebar - A2 × Rbar      21.2                       18.8
UCLR = D4 × Rbar                       69.5                       71.0
LCLR = D3 × Rbar                       0.0                        0.0

The Revised Xbar Control Limits and Chart

With sample 17 omitted, the remaining sample averages are re-plotted against the revised center line x_doublebar = 47, with UCLxbar = 75.2 and LCLxbar = 18.8.

[REVISED Xbar CONTROL CHART: all remaining points fall within the revised control limits.]

The sample ranges (sample 17 omitted) are likewise re-plotted against the revised UCLR = 71.0 and LCLR = 0.0.


[REVISED R CHART: all remaining ranges fall within the revised control limits.]

4.2 Interpreting Abnormal Patterns in Control Charts

When a process is in statistical control, the points on a control chart fluctuate randomly
between the control limits with no recognizable, non-random pattern. The following
checklist provides a set of general rules for examining a process to determine if it is in control:
1. No points are outside control limits.
2. The number of points above and below the center line is about the same.
3. The points seem to fall randomly above and below the center line.
4. Most points, but not all, are near the center line, and only a few are close to the control
limits.
The underlying assumption behind these rules is that the distribution of sample means is
normal. This assumption follows from the central limit theorem of statistics, which states that
the distribution of sample means approaches a normal distribution as the sample size increases
regardless of the original distribution. Of course, for small sample sizes, the distribution of
the original data must be reasonably normal for this assumption to hold. The upper and lower
control limits are computed to be three standard deviations from the overall mean. Thus, the
probability that any sample mean falls outside the control limits is very small. This
probability is the origin of rule 1.

Since the normal distribution is symmetric, about the same number of points fall above as
below the center line. Also, since the mean of the normal distribution is the median, about
half the points fall on either side of the center line. Finally, about 68 percent of a normal
distribution falls within one standard deviation of the mean; thus, most but not all points
should be close to the center line. These characteristics will hold provided that the mean and
variance of the original data have not changed during the time the data were collected; that is,
the process is stable.

Several types of unusual patterns arise in control charts, which are reviewed here along with
an indication of the typical causes of such patterns.

One Point Outside Control Limits


A single point outside the control limits (see Figure 4.5) is usually produced by a special
cause. Often, the R-chart provides a similar indication. Once in a while, however, such points
are a normal part of the process and occur simply by chance.

A common reason for a point falling outside a control limit is an error in the calculation of
xbar or R for the sample. You should always check your calculations whenever this occurs.
Other possible causes are a sudden power surge, a broken tool, measurement error, or an
incomplete or omitted operation in the process.


Figure 4.5 Single Point Outside Control Limits


Sudden Shift in the Process Average


An unusual number of consecutive points falling on one side of the center line (see Figure
4.6) is usually an indication that the process average has suddenly shifted. Typically, this
occurrence is the result of an external influence that has affected the process, which would be
considered a special cause. In both the xbar and R-charts, possible causes might be a new
operator, a new inspector, a new machine setting, or a change in the setup or method.

Figure 4.6 Shift in Process Average


If the shift is up in the R-chart, the process has become less uniform. Typical causes are
carelessness of operators, poor or inadequate maintenance, or possibly a fixture in need of
repair. If the shift is down in the R-chart, the uniformity of the process has improved. This
might be the result of improved workmanship or better machines or materials. As mentioned,
every effort should be made to determine the reason for the improvement and to maintain it.

Three rules of thumb are used for early detection of process shifts. A simple rule is that if
eight consecutive points fall on one side of the center line, one could conclude that the mean
has shifted. Second, divide the region between the center line and each control limit into three
equal parts. Then if (1) two of three consecutive points fall in the outer one-third region
between the center line and one of the control limits or (2) four of five consecutive points fall
within the outer two-thirds region, one would also conclude that the process has gone out of
control.
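
A rough sketch of how these three rules of thumb might be checked automatically is given below; it is only an illustration, assuming the center line and control limits have already been computed, and it approximates the zone boundaries as thirds of the distance between the center line and a control limit.

def shift_signals(points, cl, ucl, lcl):
    """Scan a list of plotted xbar values for the three early-warning patterns."""
    zone = (ucl - cl) / 3.0                    # one-third of the CL-to-UCL distance
    signals = []
    for i in range(len(points)):
        if i >= 7:                             # rule 1: 8 in a row on one side of CL
            w = points[i - 7:i + 1]
            if all(p > cl for p in w) or all(p < cl for p in w):
                signals.append((i, "8 points on one side of the center line"))
        if i >= 2:                             # rule 2: 2 of 3 in the outer one-third
            w = points[i - 2:i + 1]
            if sum(p > cl + 2 * zone for p in w) >= 2 or sum(p < cl - 2 * zone for p in w) >= 2:
                signals.append((i, "2 of 3 in the outer third"))
        if i >= 4:                             # rule 3: 4 of 5 beyond the inner one-third
            w = points[i - 4:i + 1]
            if sum(p > cl + zone for p in w) >= 4 or sum(p < cl - zone for p in w) >= 4:
                signals.append((i, "4 of 5 beyond the inner third"))
    return signals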

Cycles
Cycles are short, repeated patterns in the chart, alternating high peaks and low valleys (see
Figure 4.7). These patterns are the result of causes that come and go on a regular basis. In the
xbar chart, cycles may be the result of operator rotation or fatigue at the end of a shift,
different gauges used by different inspectors, seasonal effects such as temperature or
humidity, or differences between day and night shifts. In the R-chart, cycles can occur from
maintenance schedules, rotation of fixtures or gauges, differences between shifts, or operator
fatigue.

Prepared for the Extension Program of Indira Gandhi Open University


29

Figure 4.7 Cycles



Trends
A trend is the result of some cause that gradually affects the quality characteristics of the
product and causes the points on a control chart to gradually move up or down from the center
line. As a new group of operators gains experience on the job, for example, or as maintenance
of equipment improves over time, a trend may occur. In the xbar chart, trends may be the
result of improving operator skills, dirt or chip buildup in fixtures, tool wear, changes in
temperature or humidity, or aging of equipment. In the R-chart, an increasing trend may be
due to a gradual decline in material quality, operator fatigue, gradual loosening of a fixture or
a tool, or dulling of a tool. A decreasing trend often is the result of improved operator skill or
work methods, better purchased materials, or improved or more frequent maintenance.

Hugging the Center Line


Hugging the center line occurs when nearly all the points fall close to the center line (see
Figure 4.8). In the control chart, it appears that the control limits are too wide. A common
cause of hugging the center line is that the sample includes one item systematically taken from
each of several machines, spindles, operators, and so on. A simple example will serve to
illustrate this pattern. Suppose that one machine produces parts whose diameters average 508
with variation of only a few thousandths; a second machine produces parts whose diameters
average 502, again with only a small variation. Taken together, parts from both machines
would yield a range of variation that would probably be between 500 and 510, and average
about 505, since one will always be high and the second will always be low. Even though a
large variation will occur in the parts taken as a whole, the sample averages will not reflect
this variation. In such a case, a control chart should be constructed for each machine, spindle,
operator, and so on. An often overlooked cause for this pattern is miscalculation of the control
limits, perhaps by using the wrong factor from the table, or misplacing the decimal point in the
computations.

Figure 4.8 Hugging the Centre Line



Hugging the Control Limits


This pattern shows up when many points are near the control limits with very few in between.
It is often called a mixture and is actually a combination of two different patterns on the same
chart. A mixture can be split into two separate patterns.

A mixture pattern can result when different lots of material are used in one process, or when
parts are produced by different machines but fed into a common inspection group.

Instability
Instability is characterized by unnatural and erratic fluctuations on both sides of the chart over
a period of time (see Figure 4.9). Points will often lie outside both the upper and lower
control limits without a consistent pattern. Assignable causes may be more difficult to
identify in this case than when specific patterns are present. A frequent cause of instability is
over-adjustment of a machine, or the same factors that cause hugging of the control limits.

Figure 4.9 Out-of-Control Indicators


[xbar chart annotated with three out-of-control indicators: eight points on one side of the center line; two of three consecutive points above the two-sigma line; four of five consecutive points below the one-sigma line]

As suggested earlier, the R-chart should be analyzed before the xbar chart, because some out-
of-control conditions in the R-chart may cause out-of-control conditions in the xbar chart.
Also, as the variability in the process decreases, all the sample observations will be closer to
the true population mean, and therefore their average, xbar, will not vary much from sample to
sample. If this reduction in the variation can be identified and controlled, then new control
limits should be computed for both charts.

4.3 Routine Process Monitoring and Control

After a process is determined to be in control, the charts should be used on a daily basis to
monitor production, identify any special causes that might arise, and make corrections as
necessary. More important, the chart tells when to leave the process alone! Unnecessary
adjustments to a process result in nonproductive labor, reduced production, and increased
variability of output.

It is more productive if the operators themselves take the samples and chart the data. In this
way, they can react quickly to changes in the process and immediately make adjustments. To
do this effectively, training of the operators is essential. Many companies conduct in-house
training programs to teach operators and supervisors the elementary methods of statistical
quality control. Not only does this training provide the mathematical and technical skills that
are required, but it also gives the shop-floor personnel increased quality-consciousness.


Introduction of control charts on the shop floor typically leads to improvements in


conformance, particularly when the process is labor intensive. Apparently, management
involvement in operators' work produces positive behavioral modifications (as first
demonstrated in the famous Hawthorne studies). Under such circumstances, and as good
practice, management and operators should revise the control limits periodically and
determine a new process capability as improvements take place.

Another important point must be noted. Control charts are designed to be used by production
operators rather than by inspectors or QC personnel. Under the philosophy of statistical
process control, the burden of quality rests with the operators themselves. The use of control
charts allows operators to react quickly to special causes of variation. The range is used in
place of the standard deviation for the very reason that it allows shop-floor personnel to easily
make the necessary computations to plot points on a control chart. The experience even in
Indian factories such as the turbine blade machining shop in BHEL Hardwar strongly supports
this assertion. The right approach taken by management in ingraining the correct outlook
among the workers appears to hold the key here.

4.4 Estimating Plant Process Capability

After a process has been brought to a state of statistical control by eliminating special causes
of variation, the data may be used to find a rough estimate of process capability. This approach
uses the average range Rbar rather than the estimated standard deviation of the original data.
Nevertheless, it is a quick and useful method, provided that the distribution of the original data
is reasonably normal.

Under the normality assumption, the standard deviation (σx) of the original data {x} can be
estimated as follows:

σx = Rbar/d2

where d2 is a constant that depends on the sample size and is also given in Table 2.1.
Therefore, process capability may be determined by comparing the spec range to 6σx. The
natural variation of individual measurements is given by x_doublebar ± 3σx. The following
example illustrates these calculations.

Example 4: Estimating Process Capability for Silicon Wafer Manufacture


In this example, the capability (Cp and Cpk, see Section 2.4) calculations for the silicon wafer
production data displayed in Figure 4.3 are shown. The overall distribution of the data is
indicated by Figure 4.10. For a sample of size (n) = 3, d2 is 1.693. ULx and LLx represent
the upper and lower 3σx limits on the data for individual observations. Thus, wafer thickness
is expected to vary between -1.9 and 95.9 on the measurement scale used. The "zero point" of
the scale is the lower specification, meaning that the thickness of all wafers produced is
expected, without adjustment, to vary from about 0.00019 inch below the lower specification
to about 0.00959 inch above the lower specification. Therefore,

Cp = 100/98 = 1.02

Thus, Cp for this process looks OK. However, the lower and upper capability indices are

Cpl = (47 - 0)/48.9 = 0.96

Cpu = (100 - 47)/48.9 = 1.08

This gives a Cpk value equal to 0.96, which is less than 1.0. This analysis suggests that both
the centering and the variation of the wafer manufacturing process must be improved.

The actual fraction of the output falling within the spec range or tolerance may be calculated
in a step-by-step manner as follows.


Step 1: Find Modified (corrected) Control limits

Note: The initial xbar control chart (Figure 4.3) shows one xbar point (Sample #17) out of
control (beyond UCLxbar). This point should be removed and the control limits should be re-
calculated. This is done as follows.

Calculation basis:                     All Subgroups included     Subgroup #17 Removed

Rbar                                   676/25 = 27.0              663/24 = 27.6
x_doublebar                            1221/25 = 48.8             1127/24 = 47.0
Spec Midpoint                          50.0                       50.0
A2 × Rbar                              1.023 × 27.0 = 27.6        1.023 × 27.6 = 28.2
UCLxbar = x_doublebar + A2 × Rbar      76.4                       75.2
LCLxbar = x_doublebar - A2 × Rbar      21.2                       18.8
UCLR = D4 × Rbar                       69.5                       71.0
LCLR = D3 × Rbar                       0.0                        0.0

Step 2: Find the revised Process Standard Deviation (σx)

σx = Rbar/d2 = 27.6/1.693 = 16.30

Step 3: Compare tolerance limits (specification) with the revised 6σx

US (Upper Spec Limit) = 100.0

LS (Lower Spec Limit) = 0.0

US - LS = 100.0 - 0.0 = 100.0

6σx = 6 × 16.30 = 97.8

ZLSL = (0 - 47)/16.3 = -2.88          ZUSL = (100 - 47)/16.3 = 3.25

Figure 4.10 Process Capability Probability Computations
[Normal distribution of individual thickness values x, centered at 47, with LSL = 0, USL = 100 and the 3σx limits LLx and ULx marked; the tail area below the LSL is about 0.002]


Step 4: Compute upper and lower (± 3σx) limits of process variation under statistical
control:

ULx = x_doublebar + 3σx = 47 + 3 × 16.30 = 95.9

LLx = x_doublebar - 3σx = 47 - 3 × 16.30 = -1.9

If the individual observations are normally distributed, then the probability of being out of
specification can be computed. In the example above we assumed that the data are normal.
The revised mean (estimated by x_doublebar) is 47 and the standard deviation (σx) is 16.3.

Figure 4.10 shows the z calculations for specification limits of 0 and 100. From the standard
normal distribution table, the area between the lower specification (0) and the mean (47) is
0.4980; thus, about 0.2 percent of the output (wafer production or {x}) would be expected to
fall below the lower specification.

The area to the right of 100 is approximately zero. Therefore, all the output can be expected
to meet the upper specification.
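
The arithmetic of this example can be reproduced with a short sketch such as the following, which assumes normality and uses the revised figures quoted above (x_doublebar = 47, Rbar = 27.6, d2 = 1.693 for n = 3); it is an illustration only, not part of the original text.

from math import erf, sqrt

def capability(x_doublebar, rbar, d2, lsl, usl):
    sigma = rbar / d2                                 # estimated process standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpu = (usl - x_doublebar) / (3 * sigma)
    cpl = (x_doublebar - lsl) / (3 * sigma)
    phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))      # standard normal CDF
    below_lsl = phi((lsl - x_doublebar) / sigma)      # expected fraction below LSL
    above_usl = 1 - phi((usl - x_doublebar) / sigma)  # expected fraction above USL
    return sigma, cp, min(cpu, cpl), below_lsl, above_usl

# Returns roughly (16.3, 1.02, 0.96, 0.002, 0.0006) for the revised wafer data
print(capability(x_doublebar=47, rbar=27.6, d2=1.693, lsl=0, usl=100))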

Control limits are not specification limits!

A word of caution deserves emphasis here. Control limits are often confused with
specification or "spec" limits. Spec limits, normally expressed in engineering units, indicate
the range of variation in a quality characteristic that is acceptable to the customer.
Specification dimensions are usually stated in relation to individual parts for "hard" goods,
such as automotive hardware. However, in other applications, such as in chemical processes,
specifications are stated in terms of average characteristics. Thus, control charts might
mislead one into thinking that if all sample averages fall within the control limits, all output
will conform to specs. This assumption is not true. Control limits relate to sample averages
while specification limits relate to individual measurements. A sample average may fall
within the upper and lower control limits even though some of the individual observations are
out of specification. Since σxbar = σx/√n, the control limits are narrower than the natural variation
in the process (Figure 2.5) and they do not represent process capability.

4.5 Use of Warning Limits

When a process is in control, the xbar and R charts may be used to make decisions about the
state of the process during its operation. We use three zones on the charts to help in the
routine management of the process. The zones are marked on the charts as follows.

Zone 1 (between the upper and lower WARNING LINES, i.e., the ± 2-sigma lines). If
the plotted points fall in this zone, it indicates that the process has remained stable and actions
or adjustments are unnecessary. Indeed, any adjustment here may increase the amount of
variability.

Zone 2 (between Zone 1 and Zone 3). Any point found in Zone 2 suggests that there
may have been an assignable change, and another sample must be taken to check this.

Zone 3 (beyond the UPPER or LOWER CONTROL LIMIT). Any point falling in
this zone indicates that the process should be investigated and that, if action is taken, the
latest estimates of x_doublebar and Rbar should be used to revise the control limits.


[Zones marked on the xbar chart, with the actions indicated for a plotted point (or for a second sample taken after a warning): Zone 3, beyond a control limit: investigate and adjust; Zone 2, between a warning limit and a control limit: investigate and take a second sample; Zone 1, between the warning limits: do nothing.]

Figure 4.11 ACTIONS FOR SECOND SAMPLE FOLLOWING A WARNING SIGNAL IN ZONE 2

4.6 Modified Control Limits

Modified control limits often are used when process capability is very good. For example,
suppose that the process capability of a factory is only 60 percent of tolerance (Cp = 1.67) and
that the process mean can be controlled by a simple machine adjustment. Management may
quickly discover the impracticality of investigating every isolated point that falls outside the
usual control limits because the output is probably still well within specifications. In such
cases, the usual control limits may be replaced with the following modified control limits:

URLx = USL - Am Rbar

LRLx = LSL + Am Rbar

where URLx is the upper reject level, LRLx is the lower reject level, and USL and LSL are the
upper and lower specifications respectively. Am values are determined by statistical principles
and are shown in Table 4.1. The modified control limits allow for more variation than
the ordinary control limits and still provide high confidence that the product produced will
remain within specification. It is important to note that modified limits apply only if the
process spread (6σ) is no more than about 60 to 75 percent of the tolerance. However, if the mean
must be controlled closely, a conventional xbar-chart should be used even if the process capability
is good. Also, if the process standard deviation (σx) is likely to shift, do not modify the control limits.

Table 4.1 Factors for Control Limits and Standard Deviation

Sample
Size n A2 D4 d2 Am
2 1.880 3.267 1.128 0.779
3 1.023 2.574 1.693 0.749
4 0.729 2.282 2.059 0.728
5 0.577 2.114 2.326 0.713
6 0.483 2.004 2.534 0.701


Example 5: Computing Modified Control Limits for the Silicon Wafer Case:
Shown below are the calculations for the silicon wafer thickness example considered in this
section. Since the sample size is 3, Am = 0.749. Therefore, the modified limits are

URLx = US - Am × Rbar = 100 - 0.749(27.6) = 79.3

LRLx = LS + Am × Rbar = 0 + 0.749(27.6) = 20.7

Observe that if the process is centered on the nominal, the modified control limits are "looser"
(wider) than the ordinary control limits. For this example, before the modified control limits
are implemented, the centering of the process would first have to be corrected from its current
(estimated) value of 47 to the specification midpoint of 50.0.
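
A brief sketch of the same calculation, with Am = 0.749 taken from Table 4.1 for subgroups of size 3, might look as follows (an illustration only, using the example's figures):

def modified_limits(usl, lsl, rbar, Am):
    url = usl - Am * rbar      # upper reject level
    lrl = lsl + Am * rbar      # lower reject level
    return url, lrl

# Silicon wafer case: USL = 100, LSL = 0, revised Rbar = 27.6, Am = 0.749
print(modified_limits(100, 0, 27.6, 0.749))   # approximately (79.3, 20.7)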

4.7 Key Points about Control Charts

 Every process exhibits some variability. Variability is caused by a multitude of factors


that affect a manufacturing or service process while it is operating.
 In SPC, a process is assumed to be disturbed by two types of factors called respectively
(1) random causes and (2) assignable causes.
 Random causes are many, but their individual effect is small. The combined effect of all
random causes is manifested as the inherent variability of the process.
 A performance measure called "process capability" quantifies the inherent variability of a
process. Process capability measures the fraction of the total process output that falls
within tolerance (or spec range) when no assignable factors are affecting the process.
 Assignable factors on the other hand are few in number (we can usually "put our finger on
it"), but their effect is large and noticeable on an appropriately drawn "control chart."
 When disturbed by an assignable cause, the average value of the process output may
deviate from its desired target value, hence the process may lose accuracy. Process
spread or dispersion may also widen, causing a worsening of precision. Hence, the
appearance of assignable causes should be detected quickly and corrective steps should be
taken to remove such factors from affecting the process.
 Mean or sample average (xbar) indicates the extent of accuracy of process output
(location of the output distribution relative to the target). Standard deviation, or range
(R), indicates the precision of the output.
 The purpose of drawing control charts is to determine when the process should be left
alone and when it should be investigated and, if necessary, adjusted or rectified to
remove any assignable cause affecting it.


5 SPECIAL CONTROL CHARTS FOR VARIABLES

Several alternatives to the popular xbar and R-chart for process control of variables
measurements are available. This section discusses some of them.

5.1 Xbar and s-charts

An alternative to using the R-chart along with the xbar chart is to compute and plot the
standard deviation s of each sample. Although the range has traditionally been used, since it
involves less computational effort and is easier for shop-floor personnel to understand, using s
rather than R has its advantages. The sample standard deviation is a more sensitive and better
indicator of process variability, especially for larger sample sizes. Thus, when tight control of
variability is required, s should be used. With the use of modern calculators and personal
computers, the computational burden of computing s is reduced or eliminated, and s has thus
become a viable alternative to R.

The sample standard deviation is computed as

s = √[ Σ (xi - xbar)² / (n - 1) ]

where the sum runs over the n observations in the sample.

To construct an s-chart, compute the standard deviation for each sample. Next, compute the
average standard deviation sbar by averaging the sample standard deviations over all samples.
(Notice that this computation is analogous to computing Rbar). Control limits for the s-chart are
given by

UCLs = B4 sbar
LCLs = B3 sbar

where B3 and B4, are constants found in Table 2.1.

For the associated xbar chart, the control limits derived from the overall standard deviation
are

UCLxbar = x_doublebar + A3 sbar

LCLxbar = x_doublebar - A3 sbar

where A3 is a constant that is a function of sample size (n) and may be found in Table 2.1.

Observe that the formulas for the control limits are equivalent to those for xbar and R-charts
except that the constants differ.
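
As a rough illustration, a minimal sketch for computing xbar and s-chart limits is shown below; the B3, B4 and A3 values must be supplied from Table 2.1 (for example, B3 = 0.284, B4 = 1.716 and A3 = 0.975 for n = 10, as used in Example 6 below).

from statistics import mean, stdev

def xbar_s_limits(samples, B3, B4, A3):
    """samples: subgroups of equal size; constants B3, B4, A3 come from Table 2.1."""
    xbars = [mean(s) for s in samples]
    s_values = [stdev(s) for s in samples]     # sample standard deviations (n - 1 divisor)
    x_doublebar = mean(xbars)
    sbar = mean(s_values)
    return {"UCL_s": B4 * sbar, "LCL_s": B3 * sbar,
            "UCL_xbar": x_doublebar + A3 * sbar,
            "LCL_xbar": x_doublebar - A3 * sbar}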

Example 6: Constructing xbar and s-Charts.


To illustrate the use of the xbar- and s-charts, consider the data given below. These data
represent measurements of deviations from a nominal specification for some machined part.
Samples of size 10 are used; for each sample the mean and standard deviation have been
computed.

The average (i.e., overall) mean is computed to be x_doublebar = 0.108, and the average
standard deviation is sbar = 1.791. Since the sample size is 10, B3 = 0.284, B4 = 1.716, and
A3 = 0.975. Hence, the control limits for the s-chart are


LCLs = 0.284 (1.791) = 0.509

UCLs = 1.716 (1.791) = 3.073

For the xbar chart, the control limits are

LCLxbar = 0.108 - 0.975 (1.791) = - 1.638

UCLxbar = 0.108 + 0.975 (1.791) = 1.854

The xbar and s-charts are shown in Figures 5.1 and 5.2 respectively. The charts indicate that
this process is not in control, and an investigation as to the reasons for the variation,
particularly in the xbar chart, is warranted.

Data and Calculations for Example 6

Number of samples = 25, sample size = 10.

Sample 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
#
1 1 9 0 1 -3 -6 -3 0 2 0 -3 -12 -6 -3 -1 -1 -2 0 0 1 1 -1 0 1 2
2 8 4 8 1 -1 2 -1 -2 0 0 -2 2 -3 -5 -1 -2 2 4 3 2 2 0 0 0 2
3 6 0 0 0 0 0 0 -3 -1 -2 2 0 0 5 -1 -2 -1 0 -3 1 2 2 -1 0 1
4 9 3 0 2 -4 0 -2 -1 -1 -1 -1 -4 0 0 -2 0 0 0 3 1 1 -1 0 1 2
5 7 0 3 1 0 2 -1 -2 -3 -1 1 -1 -8 -5 -1 -4 -1 0 3 -3 2 2 1 1 -1
6 9 0 1 1 1 -1 -1 1 0 0 -2 4 -4 1 0 0 -1 3 1 2 2 2 0 2 2
7 2 3 2 2 0 2 -3 -3 1 -1 -2 2 -6 5 -2 -2 2 0 0 1 1 -1 0 0 2
8 7 4 0 0 -2 0 0 0 -3 -2 -1 -3 -1 -4 -1 -4 -1 0 1 -2 1 0 0 0 1
9 9 8 2 0 0 -3 -2 -3 -1 -2 1 -4 -1 -1 0 -1 1 1 2 3 1 0 -1 -1 -1
10 7 3 3 1 -2 0 -2 -2 0 0 1 0 -2 -5 -1 0 -2 0 -2 0 2 -1 0 0 2

Xbar      6.5  3.4  1.9  0.9  -1.1  -0.4  -1.5  -1.5  -0.6  -0.9  -0.6  -1.6  -3.1  -1.2  -1.0  -1.6  -0.3  0.8  0.8  0.6  1.5  0.2  -0.1  0.4  1.2
x_doublebar = 0.108 (center line for all samples); UCLxbar = 1.854; LCLxbar = -1.638

Std dev s 2.838 3.134 2.470 0.738 1.595 2.503 1.080 1.434 1.578 0.876 1.713 4.527 2.807 3.910 0.667 1.506 1.494 1.476 2.098 1.838 0.527 1.317 0.568 0.843 1.229
sbar = 1.791 (center line for all samples); UCLs = 3.073; LCLs = 0.509



Figure 5.1 Xbar Chart for Example 6

[Plot of the 25 sample means against the center line x_doublebar = 0.108, UCLxbar = 1.854 and LCLxbar = -1.638; several points fall outside the limits]



Figure 5.2 s-Chart for Example 6

[Plot of the 25 sample standard deviations against the center line sbar = 1.791, UCLs = 3.073 and LCLs = 0.509; three points lie above the UCL]

5.2 Charts for Individuals

With the development of automated inspection for many processes, manufacturers can now
easily inspect and measure quality characteristics on every item produced. Hence, the sample
size for process control is n = 1, and a control chart for individual measurements, also called
an x-chart, can be used. Other examples in which x-charts are useful include accounting data
such as shipments, orders, absences, and accidents; production records of temperature,
humidity, voltage, or pressure; and the results of physical or chemical analyses.

With individual measurements, the process standard deviation can be estimated and three-
sigma control limits used. As shown earlier, Rbar/d2 provides an estimate of the process
standard deviation. Thus, an x-chart for individual measurements would have "three-sigma"
control limits defined by

UCLx = x_average + 3 Rbar / d2

LCLx = x_average - 3 Rbar / d2

Samples of size 1, however, do not furnish enough information to measure process variability
directly. However, process variability can be determined by using a moving range of n
successive observations. For example, a moving range for n = 2 is computed by finding the
absolute difference between two successive observations. The number of observations used in
the moving range determines the constant d2; hence, for n = 2, from Table 2.1, d2 = 1.128. In a
similar fashion, larger values of n can be used to compute moving ranges. The moving range
chart has control limits defined by

UCLR = D4 Rbar

LCLR = D3 Rbar

which is comparable to the ordinary range chart.
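
A small sketch of the individuals and moving-range calculation for n = 2 (d2 = 1.128, D4 = 3.267) is shown below as an illustration; applied to the cobalt data of Example 7 it reproduces the limits computed there (roughly UCL_x = 4.43 and LCL_x = 2.56).

def individuals_limits(x, d2=1.128, D4=3.267):
    """x: the individual measurements; d2 and D4 are for a moving range of n = 2."""
    mr = [abs(a - b) for a, b in zip(x[1:], x[:-1])]   # moving ranges
    rbar = sum(mr) / len(mr)                           # average moving range
    centre = sum(x) / len(x)                           # center line of the x-chart
    return {"UCL_x": centre + 3 * rbar / d2,
            "LCL_x": centre - 3 * rbar / d2,
            "UCL_MR": D4 * rbar,
            "LCL_MR": 0.0}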

Example 7: Constructing an x-Chart with Moving Ranges.

Consider a set of observations measuring the percentage of cobalt in a chemical process as


given in Figure 5.3.

Figure 5.3 Data and Calculations for Example 7


Observation #   X   LCLx   CLx   UCLx   Moving Range   LCLr   UCLr
1 3.75 2.56 3.498 4.43 0 1.15
2 3.8 2.56 3.498 4.43 0.05 0 1.15
3 3.7 2.56 3.498 4.43 0.1 0 1.15
4 3.2 2.56 3.498 4.43 0.5 0 1.15
5 3.5 2.56 3.498 4.43 0.3 0 1.15
6 3.05 2.56 3.498 4.43 0.45 0 1.15
7 3.5 2.56 3.498 4.43 0.45 0 1.15
8 3.25 2.56 3.498 4.43 0.25 0 1.15
9 3.6 2.56 3.498 4.43 0.35 0 1.15
10 3.1 2.56 3.498 4.43 0.5 0 1.15
11 4 2.56 3.498 4.43 0.9 0 1.15
12 4 2.56 3.498 4.43 0 0 1.15
13 3.5 2.56 3.498 4.43 0.5 0 1.15
14 3 2.56 3.498 4.43 0.5 0 1.15
15 3.8 2.56 3.498 4.43 0.8 0 1.15
16 3.4 2.56 3.498 4.43 0.4 0 1.15
17 3.6 2.56 3.498 4.43 0.2 0 1.15
18 3.1 2.56 3.498 4.43 0.5 0 1.15
19 3.55 2.56 3.498 4.43 0.45 0 1.15
20 3.65 2.56 3.498 4.43 0.1 0 1.15
21 3.45 2.56 3.498 4.43 0.2 0 1.15
22 3.3 2.56 3.498 4.43 0.15 0 1.15
23 3.75 2.56 3.498 4.43 0.45 0 1.15
24 3.5 2.56 3.498 4.43 0.25 0 1.15
25 3.4 2.56 3.498 4.43 0.1 0 1.15

The moving ranges are computed by taking absolute differences between successive
observations. The first moving range is the difference between the first two observations:

|3.75 - 3.80| = 0.05

The second moving range is computed as

|3.80 - 3.70| = 0.10

From these data, the average moving range is Rbar = 0.352, so LCLR = 0 and

UCLR = (3.267)(0.352) = 1.15

The moving range chart, shown in Figure 5.5, indicates that the process is in control. Next,
the x-chart is constructed for the individual measurements:

LCLx = 3.498 - 3(0.352)/1.128 = 2.56

UCLx = 3.498 + 3(0.352)/1.128 = 4.43

The two charts indicate that the process is in control.


Some caution is necessary when interpreting patterns on the moving range chart. Points
beyond control limits are signs of assignable causes. Successive ranges, however, are
correlated, and they may cause patterns or trends in the chart that are not indicative of out-of-
control situations. On the x-chart, individual observations are assumed to be uncorrelated;
hence, patterns and trends should be investigated.

Control charts for individuals have the advantage that specifications can be drawn on the chart
and compared directly with the control limits.

Some disadvantages also exist:


 Individuals charts are less sensitive to many of the conditions that can be detected by
xbar and R-charts; for example, the process must vary a lot before a shift in the mean is
detected.
 Also, short cycles and trends may appear on an individual's chart and not on an xbar or R-
chart.
 Finally, the assumption of normality of observations is more critical than for xbar and R-
charts; when the normality assumption does not hold, greater chance for error is present.

To summarize, SPC for variables data progresses in three stages:


1. Examination of the state of statistical control of the process using xbar and R charts,
2. A process capability study to compare process spread and its location to specifications,
and
3. Routine process control using control charts.


6 CONTROL CHARTS FOR ATTRIBUTES

Attributes quality data assume only two values: good or bad, pass or fail. Attributes usually
cannot be measured, but they can be observed and counted and are useful in quality
management in many practical situations. For instance, in printing packages for consumer
products, color quality can be rated as acceptable or not acceptable, or a sheet of cardboard
either is damaged or is not. Usually, attributes data are easy to collect, often by visual
inspection. Many accounting records, such as percent scrapped, are also usually readily
available. However, one drawback in using attributes data is that large samples are necessary
to obtain valid statistical results.

6.1 Fraction Nonconforming (p) Chart

Several different types of control charts are used for attribute data. One of the most common
is the p-chart (introduced here). Other types of attributes charts are presented later in this
section. One distinction that we must make is between the terms defects and defectives.
A defect is a single nonconforming quality characteristic of an item. An item may have
several defects. The term defective refers to items having one or more defects. Since certain
attributes charts are used for defectives while others are used for defects, one must understand
the difference. In quality control literature, the term nonconforming is often used instead of
defective.

A p-chart monitors the proportion of nonconforming items produced in a lot. Often it is also
called a fraction nonconforming or fraction defective chart. As with variables data, a p-chart
is constructed by first gathering 25 to 30 samples of the attribute being measured. The size of
each sample should be large enough to have several nonconforming items. If the probability
of finding a nonconforming item is small, a large sample size is usually necessary. Samples
are chosen over time periods so that any special causes that are identified can be investigated.

Let us suppose that k samples, each of size n, are selected. If y represents the number
of nonconforming items (defectives) in a particular sample, the proportion nonconforming is
(y/n). Let pi be the fraction nonconforming in the ith sample; the average fraction
nonconforming pbar for the group of k samples then is

pbar = (p1 + p2 + ... + pk) / k

This statistic pbar reflects the average performance of the process. One would expect a high
percentage of samples to have a fraction nonconforming within three standard deviations of
pbar. An estimate of the standard deviation is given by

sp = √[ pbar (1 - pbar) / n ]

Therefore, upper and lower control limits are given by

UCLp = pbar + 3 sp

LCLp = pbar - 3 sp

If LCLp is less than zero, a value of zero is used.


Analysis of a p-chart is similar to that of the xbar or R-chart. Points outside the control limits
signify an out-of-statistical-control situation, i.e., the process has been disturbed by an
assignable factor. Patterns and trends should also be sought to identify the presence of
assignable factors. However, a point on a p-chart below the lower control limit or the
development of a trend below the center line indicates that the process might have improved,
since the ideal is zero defectives. However, caution is advised before such conclusions are
drawn, because errors may have been made in computation. An example of a p-chart is
presented next.

Example 8: Constructing a p-Chart.


The mail sorting personnel in a post office must read the PIN code on a letter and divert the
letter to the proper carrier route. Over one month's time, 25 samples of 100 letters were
chosen and the number of errors was recorded. This information is summarized in Figure 6.1.

The average fraction nonconforming, pbar, is found to be

pbar = (0.03 + 0.01 + ... + 0.01) / 25 = 0.022

The standard deviation is computed as

sp = √[ 0.022 (1 - 0.022) / 100 ] = 0.01467

Figure 6.1: Data and Control Limit Calculations for Example 8

Sample #   Number of errors   Sample Size   p = Fraction Nonconforming   pbar (center line)   UCLp
1 3 100 0.03 0.022 0.066
2 1 100 0.01 0.022 0.066
3 0 100 0.00 0.022 0.066
4 0 100 0.00 0.022 0.066
5 2 100 0.02 0.022 0.066
6 5 100 0.05 0.022 0.066
7 3 100 0.03 0.022 0.066
8 6 100 0.06 0.022 0.066
9 1 100 0.01 0.022 0.066
10 4 100 0.04 0.022 0.066
11 0 100 0.00 0.022 0.066
12 2 100 0.02 0.022 0.066
13 1 100 0.01 0.022 0.066
14 3 100 0.03 0.022 0.066
15 4 100 0.04 0.022 0.066
16 1 100 0.01 0.022 0.066
17 1 100 0.01 0.022 0.066
18 2 100 0.02 0.022 0.066
19 5 100 0.05 0.022 0.066
20 2 100 0.02 0.022 0.066
21 3 100 0.03 0.022 0.066
22 4 100 0.04 0.022 0.066
23 1 100 0.01 0.022 0.066
24 0 100 0.00 0.022 0.066
25 1 100 0.01 0.022 0.066


Figure 6.2 Attribute (p) Chart for Example 8

[Plot of the 25 sample fractions nonconforming against the center line pbar = 0.022, UCLp = 0.066 and LCLp = 0; all points fall within the limits]

Thus, the upper control limit, UCLp, is 0.022 + 3(0.01467) = 0.066, and the lower control
limit, LCLp, is 0.022 - 3(0.01467) = -0.022. Since this latter figure is negative and a fraction
nonconforming can never be negative, LCLp is set to zero (0). The control chart for
this example is shown in Figure 6.2.

The sorting process appears to be in statistical control. Any values found above the upper
control limit, or evidence of an upward trend, might indicate the need for re-training the personnel.
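
A compact sketch of the p-chart calculation, using the error counts of Figure 6.1, is given below as an illustration; it reproduces pbar = 0.022, LCLp = 0 and UCLp = 0.066.

from math import sqrt

def p_chart_limits(defectives, n):
    """defectives: number nonconforming in each sample; n: constant sample size."""
    pbar = sum(defectives) / (len(defectives) * n)
    sp = sqrt(pbar * (1 - pbar) / n)
    return pbar, max(0.0, pbar - 3 * sp), pbar + 3 * sp   # LCL never allowed below zero

errors = [3, 1, 0, 0, 2, 5, 3, 6, 1, 4, 0, 2, 1, 3, 4, 1, 1, 2, 5, 2, 3, 4, 1, 0, 1]
print(p_chart_limits(errors, 100))   # pbar = 0.022, LCL = 0, UCL about 0.066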

6.2 Variable Sample Size p Charts

Often 100 percent inspection is performed on process output during fixed sampling periods;
however, the number of units produced in each sampling period may vary. In this case, the p-
chart would have a variable sample size.

One way of handling this is to compute a standard deviation for each individual sample. If ni
is the number of observations in the ith sample, then

sp,i = √[ pbar (1 - pbar) / ni ]

and the control limits for that sample are given by

pbar ± 3 √[ pbar (1 - pbar) / ni ]

where

pbar = (total number nonconforming) / Σ ni

Example 7: Variable Sample Size.


Figure 6.3 shows 20 samples with varying sample sizes. The value of pbar is computed as

pbar = (18 + 20 + 14 + ... + 18) / (137 + 158 + 92 + ... + 160) = 271/2980 = 0.0909


Therefore, control limits for sample #1 (n1 = 137) would be

LCLp = 0.0909 - 3 √[ 0.0909 (1 - 0.0909) / 137 ] = 0.017

UCLp = 0.0909 + 3 √[ 0.0909 (1 - 0.0909) / 137 ] = 0.165

Note carefully that because the sample sizes vary, control limits would be different for each
sample. The p-chart is shown in Figure 6.4. Points 13 and 15 are outside the control limits.
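
For the variable-sample-size case, a sketch along the following lines computes a separate pair of limits for every sample (an illustration only, using the formulas above):

from math import sqrt

def variable_p_limits(nonconforming, sizes):
    pbar = sum(nonconforming) / sum(sizes)             # overall fraction nonconforming
    per_sample = []
    for y, n in zip(nonconforming, sizes):
        sp = sqrt(pbar * (1 - pbar) / n)               # standard deviation for this sample
        per_sample.append((y / n, max(0.0, pbar - 3 * sp), pbar + 3 * sp))
    return pbar, per_sample                            # (p_i, LCL_i, UCL_i) for each sample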

Figure 6.4 p Chart with Variable Sample Size

[Plot of the 20 sample fractions nonconforming with stepped control limits that depend on the sample size ni; the points for samples 13 and 15 fall outside their limits]

Figure 6.3 Data and Calculations for Example 7

Sample # (i)   Number nonconforming   Sample Size (ni)   p = Fraction Nonconforming   Std Dev   LCLp   UCLp
1 18 137 0.1314 0.0245648 0.0172 0.1646
2 20 158 0.1266 0.0228741 0.0223 0.1596
3 14 92 0.1522 0.0299764 0.001 0.1809
4 6 122 0.0492 0.0260311 0.0128 0.169
5 11 86 0.1279 0.031004 0 0.184
6 22 187 0.1176 0.0210258 0.0279 0.154
7 6 156 0.0385 0.0230203 0.0219 0.16
8 9 117 0.0769 0.0265815 0.0112 0.1707
9 14 110 0.1273 0.0274143 0.0087 0.1732
10 12 142 0.0845 0.0241284 0.0186 0.1633
11 8 140 0.0571 0.0243001 0.018 0.1638
12 13 179 0.0726 0.0214905 0.0265 0.1554
13 5 196 0.0255 0.0205374 0.0293 0.1526
14 15 163 0.0920 0.0225206 0.0234 0.1585
15 25 140 0.1786 0.0243001 0.018 0.1638
16 12 135 0.0889 0.0247461 0.0167 0.1652
17 16 186 0.0860 0.0210822 0.0277 0.1542
18 12 193 0.0622 0.0206964 0.0289 0.153
19 15 181 0.0829 0.0213714 0.0268 0.1551
20 18 160 0.1125 0.0227307 0.0227 0.1591


6.3 np-charts for Number Nonconforming

In the p-chart, the fraction nonconforming of the ith sample is given by

pi = yi/n

where yi is the number found nonconforming and n is the sample size. Multiplying both sides
of the equation pi = yi/n by n yields

yi = npi

That is, the number nonconforming is equal to the sample size times the proportion
nonconforming. Instead of using a chart for the fraction nonconforming, an equivalent
alternative is a chart for the number of nonconforming items. Such a control chart is
called an np-chart.

The np-chart is a control chart for the number of nonconforming items in a sample. To use the
np-chart, the size of each sample must be constant. Suppose that two samples of sizes 10 and
15 each have four nonconforming items. Clearly, the fraction nonconforming in each sample
is difference between samples. Thus, equal sample size are necessary to have a common base
for measurement. Equal sample sizes are not required for p-charts, since the fraction
nonconforming is invariant to the sample size.

The np-chart is a useful alternative to the p-chart because it is often easier to understand for
production personnelthe number of nonconforming items is more meaningful than a
fraction. Also, since it requires only a count, the computations are simpler.

The control limits for the np-chart, like those for the p-chart, are based on the binomial
probability distribution. The center line is the average number of nonconforming items per
sample as denoted by npbar, which is calculated by taking k samples of size n, summing the
number of nonconforming items yi in each sample, and dividing by k. That is

npbar = (y1 + y2 + ... + yk) / k
An estimate of the standard deviation is

snp = √[ npbar (1 - pbar) ]

where pbar = (npbar)/n. Using three-sigma limits as before, the control limits are specified by

UCLnp = npbar + 3 √[ npbar (1 - pbar) ]

LCLnp = npbar - 3 √[ npbar (1 - pbar) ]

Example 8: An np-chart for a Post Office.


The np data for the post office example discussed earlier is given in Figure 6.5. The average
number of errors found is:

npbar = (3 + 1 + ... + 0 + 1) / 25 = 2.2


To find the standard deviation, we compute

pbar = 2.2 / 100 = 0.022
Then,

snp = √[ 2.2 (1 - 0.022) ] = 1.4668

The control limits are then computed as

UCLnp = 2.2 + 3 (1.4668) = 6.6

LCLnp = 2.2 - 3 (1.4668) = -2.2

Since the lower control limit is less than zero, a value of LCL = 0 is used. The control chart
for this example is given in Figure 6.6.
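As a cross-check, the same numbers can be reproduced with a few lines of code (Python, standard library only); the error counts are those of Figure 6.5 and the sample size n = 100 comes from the calculation above.

import math

# Errors (np) in each of the 25 daily samples of Figure 6.5; n = 100 per sample
counts = [3, 1, 0, 0, 2, 5, 3, 6, 1, 4, 0, 2, 1,
          3, 4, 1, 1, 2, 5, 2, 3, 4, 1, 0, 1]
n = 100

np_bar = sum(counts) / len(counts)            # 2.2
p_bar  = np_bar / n                           # 0.022
s      = math.sqrt(np_bar * (1 - p_bar))      # 1.4668
ucl    = np_bar + 3 * s                       # 6.6
lcl    = max(0.0, np_bar - 3 * s)             # negative, so set to 0

print(f"np_bar = {np_bar:.2f}, LCL = {lcl:.1f}, UCL = {ucl:.1f}")
out = [i + 1 for i, c in enumerate(counts) if not (lcl <= c <= ucl)]
print("Out-of-control samples:", out or "none")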

Figure 6.5 Data and Calculations for Example 8

Sample #   Number Nonconforming (np)   LCLnp   UCLnp
1 3 0 6.6
2 1 0 6.6
3 0 0 6.6
4 0 0 6.6
5 2 0 6.6
6 5 0 6.6
7 3 0 6.6
8 6 0 6.6
9 1 0 6.6
10 4 0 6.6
11 0 0 6.6
12 2 0 6.6
13 1 0 6.6
14 3 0 6.6
15 4 0 6.6
16 1 0 6.6
17 1 0 6.6
18 2 0 6.6
19 5 0 6.6
20 2 0 6.6
21 3 0 6.6
22 4 0 6.6
23 1 0 6.6
24 0 0 6.6
25 1 0 6.6


Figure 6.6 np Chart for Example 8 (number nonconforming np plotted against sample #, with UCLnp = 6.6 and LCLnp = 0)

6.4 Charts for Defects

Recall that a defect is a single nonconforming characteristic of an item, while the term
defective refers to an item that has one or more defects in it. In some situations, quality
assurance personnel may be interested not only in whether an item is defective but also in how
many defects it has. For example, in complex assemblies such as electronics, the number of
defects is just as important as whether the product is defective. Two charts can be applied in
such situations. The c-chart is used to control the total number of defects per unit when
subgroup size is constant. If subgroup sizes are variable, a u-chart is used to control the
average number of defects per unit.

The c-chart is based on the Poisson probability distribution. To construct a c-chart, first
estimate the average number of defects per unit, cbar, by taking at least 25 samples of equal
size, counting the number of defects per sample, and finding the average. The standard
deviation (sc) of the Poisson distribution is the square root of the mean and yields

sc = √cbar

Thus, three-sigma control limits for the c-chart are given by

UCLc = cbar + 3 √cbar

LCLc = cbar - 3 √cbar

Example 9: Constructing a c-Chart


Figure 6.7 shows the number of machine failures in a factory over a 25-day period. The total
number of failures is 45; therefore, the average number of failures per day is

cbar = 45/25 = 1.8

Control limits for the c-chart here are therefore given by

UCLc = 1.8 + 3 √1.8 = 5.82

LCLc = 1.8 - 3 √1.8 = -2.22, or zero

The chart is shown in Figure 6.8 and appears to be in control. Such a chart can be used for
continued control or for monitoring the effectiveness of a quality improvement program.
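A minimal computational sketch of this example is given below (Python, standard library only); the failure counts are taken from Figure 6.7.

import math

# Daily machine failures from Figure 6.7 (25 days, 45 failures in total)
failures = [2, 3, 0, 1, 3, 5, 3, 1, 2, 2, 0, 1, 0,
            2, 4, 1, 2, 0, 3, 2, 1, 4, 0, 0, 3]

c_bar = sum(failures) / len(failures)             # 1.8 failures per day
ucl   = c_bar + 3 * math.sqrt(c_bar)              # 5.82
lcl   = max(0.0, c_bar - 3 * math.sqrt(c_bar))    # negative, so set to 0

print(f"c_bar = {c_bar:.2f}, LCL = {lcl:.2f}, UCL = {ucl:.2f}")
print("All points in control:", all(lcl <= c <= ucl for c in failures))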


Figure 6.7 Data and Calculations for Example 9

Sample # Number of Defects (c) LCLc UCLc


1 2 0 5.82
2 3 0 5.82
3 0 0 5.82
4 1 0 5.82
5 3 0 5.82
6 5 0 5.82
7 3 0 5.82
8 1 0 5.82
9 2 0 5.82
10 2 0 5.82
11 0 0 5.82
12 1 0 5.82
13 0 0 5.82
14 2 0 5.82
15 4 0 5.82
16 1 0 5.82
17 2 0 5.82
18 0 0 5.82
19 3 0 5.82
20 2 0 5.82
21 1 0 5.82
22 4 0 5.82
23 0 0 5.82
24 0 0 5.82
25 3 0 5.82

Figure 6.8 c Chart for Example 9 (number of machine failures plotted against sample #, with UCLc = 5.82 and LCLc = 0)

As long as sample size is constant, a c-chart is appropriate. In many cases, however, the
subgroup size is not constant or the nature of the production process does not yield discrete,
measurable units. For example, suppose that in an auto assembly plant, several different
models are produced that vary in surface area. The number of defects will not then be a valid
comparison among different models. Other applications, such as the production of textiles,
photographic film, or paper, have no convenient set of items to measure. In such cases, a


standard unit of measurement is used, such as defects per square foot or defects per square
inch. The control chart used in these situations is called a u-chart.

The variable u represents the average number of defects per unit of measurement, that is,
ui = ci/ni, where ni is the size of subgroup #i (such as square feet). The center line ubar for k
samples of sizes n1, n2, ..., nk is computed as follows:

ubar = (c1 + c2 + ... + ck) / (n1 + n2 + ... + nk)

The standard deviation su of the ith sample is estimated by

su = √( ubar / ni )

The control limits, based on three standard deviations for the ith sample are then

UCLu = ubar + 3 √( ubar / ni )

LCLu = ubar - 3 √( ubar / ni )

Note that if the size of the subgroups varies, so will the control limits. This result is similar to
the p-chart with variable sample sizes. In general, whenever the sample size n varies, the
control limits will also vary.

Example 10: Constructing a u-Chart


A catalog distributor ships a variety of orders each day. The packing slips often contain errors
such as wrong purchase order numbers, wrong quantities, or incorrect sizes. Figure 6.10
shows the error data collected during August, 1999. Since the sample size varies each day, a
u-chart is appropriate.

To construct the chart, first compute the number of errors per slip as shown in column 3. The
average number of errors per slip, ubar, is found by dividing the total number of errors (217)
by the total number of packing slips (2,843). This is ubar = 217/2843 = 0.076

The standard deviation for a particular sample size ni is therefore

su = √( 0.076 / ni )

Figure 6.9 u CHART (defects per unit u plotted against sample #, with variable control limits that depend on ni)

The control limits (LCLu and UCLu) are shown in Figure 6.10. As with a p-chart, individual
control limits will vary with the sample size, ni. The control chart is shown in Figure 6.9.
One point (sample #2) appears to be out of control.
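The variable limits can be generated in the same way as for the p-chart; the sketch below (Python, standard library only) uses the first five packing-slip samples of Figure 6.10 and the overall centre line computed above.

import math

# (errors c_i, packing slips n_i) for the first five samples of Figure 6.10
samples = [(8, 92), (15, 69), (6, 86), (13, 85), (5, 123)]

# Centre line from the full August data set: 217 errors on 2,843 slips
u_bar = 217 / 2843

for i, (c, n) in enumerate(samples, start=1):
    s   = math.sqrt(u_bar / n)                 # standard deviation depends on n_i
    lcl = max(0.0, u_bar - 3 * s)
    ucl = u_bar + 3 * s
    u_i = c / n
    status = "in" if lcl <= u_i <= ucl else "OUT of"
    print(f"Sample {i}: u = {u_i:.4f}, LCL = {lcl:.4f}, UCL = {ucl:.4f} ({status} control)")

Sample #2 is flagged as out of control, consistent with Figure 6.9.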

Figure 6.10 Data and Calculations for Example 10

Sample #   No. of Defects ci   Sample Size ni   ui = ci/ni = Defects/unit   Standard Deviation su   LCLu   UCLu
1 8 92 0.0870 0.0288036 0 0.1627
2 15 69 0.2174 0.0332596 0 0.1761
3 6 86 0.0698 0.0297915 0 0.1657
4 13 85 0.1529 0.0299662 0 0.1662
5 5 123 0.0407 0.0249109 0.0016 0.1511
6 5 87 0.0575 0.0296198 0 0.1652
7 3 74 0.0405 0.0321163 0 0.1727
8 8 83 0.0964 0.0303251 0 0.1673
9 4 103 0.0388 0.0272222 0 0.158
10 6 60 0.1000 0.0356669 0 0.1833
11 7 136 0.0515 0.0236904 0.0053 0.1474
12 4 80 0.0500 0.0308885 0 0.169
13 2 70 0.0286 0.0330212 0 0.1754
14 11 73 0.1507 0.0323355 0 0.1733
15 13 89 0.1461 0.0292951 0 0.1642
16 6 129 0.0465 0.0243246 0.0034 0.1493
17 6 78 0.0769 0.031282 0 0.1702
18 3 88 0.0341 0.029451 0 0.1647
19 8 76 0.1053 0.0316909 0 0.1714
20 9 101 0.0891 0.0274904 0 0.1588
21 8 92 0.0870 0.0288036 0 0.1627
22 2 70 0.0286 0.0330212 0 0.1754
23 9 54 0.1667 0.0375963 0 0.1891
24 5 83 0.0602 0.0303251 0 0.1673
25 13 165 0.0788 0.021508 0.0118 0.1409
26 5 137 0.0365 0.0236038 0.0055 0.1471
27 8 79 0.1013 0.0310834 0 0.1696
28 6 76 0.0789 0.0316909 0 0.1714
29 7 147 0.0476 0.0227868 0.008 0.1447
30 4 80 0.0500 0.0308885 0 0.169
31 8 78 0.1026 0.031282 0 0.1702

One application of c-charts and u-charts is in a quality rating system. When some defects are
considered to be more serious than others, they can be rated, or categorized, into different
classes. For instance,

A - very serious, B - serious, C - moderately serious, D - not serious

Each category can be weighted using a point scale, such as 100 for A, 50 for B, 10 for C, and
1 for D. These points, or demerits, can be used as the basis for a c- or u-chart that would
measure total demerits or demerits per unit, respectively. Such charts are often used for
internal quality control and as a means of rating suppliers.
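A demerits-per-unit statistic is straightforward to compute; the sketch below (Python) uses the point scale quoted above, while the defect counts and number of units are hypothetical.

# Demerit weights from the point scale above: A = 100, B = 50, C = 10, D = 1
WEIGHTS = {"A": 100, "B": 50, "C": 10, "D": 1}

def demerits_per_unit(defect_counts, units_inspected):
    """Total weighted demerits divided by the number of units inspected."""
    total = sum(WEIGHTS[cls] * count for cls, count in defect_counts.items())
    return total / units_inspected

# Hypothetical day: 1 class-A, 2 class-B and 7 class-D defects found on 50 units
print(demerits_per_unit({"A": 1, "B": 2, "C": 0, "D": 7}, 50))   # 4.14 demerits per unit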


7 CHOOSING THE CORRECT SPC CHART

Confusion often exists over which chart is appropriate for a specific application, since the c-
and u-charts apply to situations in which the quality characteristics inspected do not
necessarily come from discrete units.

The key issue to consider is whether the sampling unit is constant. For example, suppose that
an electronics manufacturer produces circuit boards. The boards may contain various defects,
such as faulty components and missing connections. Because the sampling unit - the circuit
board - is constant (assuming that all boards are the same), a c-chart is appropriate. If the
process produces boards of varying sizes with different numbers of components and
connections, then a u-chart would apply.

As another example, consider a telemarketing firm that wants to track the number of calls
needed to make one sale. In this case, the firm has no physical sampling unit. However, an
analogy can be made with the circuit boards. The sale corresponds to the circuit board, and
the number of calls to the number of defects. In both examples, the number of occurrences in
relationship to a constant entity is being measured. Thus, a c-chart is appropriate.
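The selection logic can also be written down as a small helper function (Python); this is only a sketch of the decision tree shown in Figure 7.1 below, and the parameter names are illustrative.

def select_chart(data_type, n=1, counting="defectives", constant_size=True):
    """Suggest an SPC chart following the decision tree of Figure 7.1.

    data_type     : "variables" (measurements) or "attributes" (counts)
    n             : subgroup size, used only for variables data
    counting      : for attributes data, "defectives" (items classified good/bad)
                    or "defects" (number of defects found per sampling unit)
    constant_size : True if the sample size / sampling unit is constant
    """
    if data_type == "variables":
        if n <= 1:
            return "x (individuals) and moving-range charts"
        return "xbar and s charts" if n >= 10 else "xbar and R charts"
    if counting == "defectives":
        return "p or np chart" if constant_size else "p chart with variable limits"
    return "c chart" if constant_size else "u chart"

# The two circuit-board cases discussed above:
print(select_chart("attributes", counting="defects", constant_size=True))    # c chart
print(select_chart("attributes", counting="defects", constant_size=False))   # u chart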

Figure 7.1 Control Chart Selection (decision tree). For variables data: use x- (individuals) and moving-range charts when n = 1, xbar- and R-charts when 1 < n < 10, and xbar- and s-charts when n ≥ 10. For attributes data on defectives: use p- or np-charts when the sample size is constant, and a p-chart with variable limits otherwise. For counts of defects: use a c-chart when the sampling unit is constant and a u-chart otherwise.


8 KEY POINTS ABOUT CONTROL CHART CONSTRUCTION

 Statistical process control (SPC) is a methodology for monitoring a process to identify special causes of variation and signal the need to take corrective action when appropriate.

 Capability and control are independent concepts. Ideally, we would like a process to have high capability and to be in control. If a process is not in control, it should first be brought into control before attempting to evaluate process capability.

 Control charts have three basic applications:

(1) Establishing a state of statistical control,

(2) Monitoring a process to identify special causes, and

(3) Determining process capability.

 Control charts for variables data include xbar- and R-charts, xbar- and s-charts, and individuals and moving-range charts. xbar- and s-charts are alternatives to xbar- and R-charts for larger sample sizes. The sample standard deviation provides a better indication of process variability than the range. Individuals charts are useful when every item can be inspected and when a long lead time exists for producing an item. Moving ranges are used to measure the variability in “individuals” or x charts.

 A process is in control if no points are outside control limits; the number of points above
and below the center line is about the same; the points seem to fall randomly above and
below the center line; and most points (but not all) are near the center line, with only a
few close to the control limits.

 Typical out-of-control conditions are represented by sudden shifts in the mean value,
cycles, trends, hugging of the center line, hugging of the control limits, and instability.

 Modified control limits can be used when the process capability is known to be good.
These wider limits reduce the amount of investigation of isolated points that would fall
outside the usual control limits.

 Charts for attributes include p-, np-, c- and u-charts. The np-chart is an alternative to the
p-chart, and controls the number nonconforming for attributes data. Charts for defects
include the c-chart and u-chart. The c-chart is used for constant sample size and the u-
chart is used for variable sample size.


9 IMPLEMENTING SPC

The original methods of SQC have been available for over 75 years now; Walter Shewhart
devised the first control chart in 1924. However, studies show that many managers still do
not understand variation.

Where do you find the motivation? Studies in industry show that when used properly, SQC or
SPC reduces the cost of quality: the cost of excessive inspection, scrap, returns from
customers and warranty service. Good quality, as the Japanese have demonstrated, edges out
competition, builds repute and along with that brings in new customers, raises morale and
expands business. A good quality culture also propagates: companies using SPC frequently
require their suppliers to use it as well. This generates considerable benefit.

Where there is low use of SPC, the major reason often is lack of knowledge of variation and
the importance of understanding it in order to improve customer satisfaction. Successful firms
on the other hand repeatedly show certain characteristics:
 Their top management understand variation and the importance of SPC methods to
successfully manage it. They do not delegate this task to the QC department.
 All people involved in the use of the technique understand what they are being asked to
do and why it would help them, and
 Training, followed by clear and written instructions on agreed procedures are
systematically introduced and followed up through audits.

The above principles form the core of the general principles of good quality management, be it
through ISO 9000, QS 9000 or TQM.

The bottom line is that you must find your own motivation to create and deliver quality.

If you do it as a fad, you will neither get the results nor maintain credibility with your own
people about initiatives that you take in your management interventions.

So, to summarize,
 Studies show that to succeed with SPC you must understand variation, however boring
that may appear.
 When SPC is used properly, quality costs go down.
 Low usage of SPC is associated with lack of knowledge and training, even in senior
management.
 SPC needs to be systematically introduced.
 A step-wise introduction of SPC would include
- review of management systems,
- review of requirements and design specs,
- emphasis on the need for process understanding and control,
- planning for education and training,
- tackling one problem at a time based on customer complaints/feedback,
- recording of detailed data,
- measuring process capability and
- making routine use of data to manage the process.

The first prudent step toward implementing SPC in an organization would be "to put the house
in order," which may be done by getting the firm registered by ISO 9000.


10 REVIEW QUESTIONS ABOUT CONTROL CHARTS

1. Define statistical process control and discuss its advantages


2. What does the term in statistical control mean? Explain the difference between capability
and control.
3. What are the disadvantages of simply using histograms to study process capability?
4. Discuss the three primary applications of control charts.
5. Describe the difference between variables and attributes data. What types of control
charts are used for each?
6. Briefly describe the methodology of constructing and using control charts.
7. What does one look for in interpreting control charts? Explain the possible causes of
different out-of-control indicators.
8. How should control charts be used by shop-floor personnel?
9. What are modified control limits? Under what conditions should they be used?
10. How are variables control charts used to determine process capability?
11. Describe the difference between control limits and specification limits.
12. Why is the s-chart sometimes used in place of the R-chart?
13. Describe some situations in which a chart for individual measurements would be used.
14. Explain the concept of a moving range. Why is a moving range chart difficult to
interpret?
15. Explain the difference between defects and defectives.
16. Briefly describe the process of constructing a p-chart. What are the key differences
compared with an x-chart?
17. Does an np-chart provide any different information than a p-chart? Why would an np-
chart be used?
18. Explain the difference between a c-chart and a u-chart.
19. Discuss how to use charts for defects in a quality rating system.
20. Describe the rules for determining the appropriate control chart to use in any given
situation.


11 SELF ASSESSMENT QUESTIONS ABOUT SPC

1. Thirty samples of size 3 listed in the following table were taken from a machining process
over a 15-hour period.
a. Compute the mean and standard deviation of the data.
b. Compute the mean and range of each sample and plot them on control charts.

Sample Observations
1 3.55 3.64 4.37
2 3.61 3.42 4.07
3 3.61 3.36 4.34
4 4.13 3.50 3.61
5 4.06 3.28 3.07
6 4.48 4.32 3.71
7 3.25 3.58 3.51
8 4.25 3.38 3.00
9 4.35 3.64 3.20
10 3.62 3.61 3.43
11 3.09 3.28 3.12
12 3.38 3.15 3.09
13 2.85 3.44 4.06
14 3.59 3.61 3.34
15 3.60 2.83 2.84
16 2.69 3.57 3.28
17 3.07 3.18 3.11
18 2.86 3.69 3.05
19 3.68 3.59 3.93
20 2.90 3.41 3.37
21 3.57 3.63 2.72
22 2.82 3.55 3.56
23 3.82 2.91 3.80
24 3.14 3.83 3.80
25 3.97 3.34 3.65
26 3.77 3.60 3.81
27 4.12 3.38 3.37
28 3.92 3.60 3.54
29 3.50 4.08 4.09
30 4.23 3.62 3.00

2. Forty samples of size 5 listed in the following table were taken from a machining process
over a 25-hour period.
a. Compute the mean and standard deviation of the data.
b. Compute the mean and range of each sample and plot them on control charts. Does the
process appear to be in statistical control? Why or why not?

Sample Number
Data 1 2 3 4 5 6 7 8 9 10
1 9.999 10.022 10.001 10.007 10.011 10.019 10.015 9.988 9.980 10.017
2 9.992 9.998 10.006 10.006 9.979 10.017 10.015 9.990 10.001 10.017
3 10.002 10.037 10.002 10.004 9.991 10.018 9.978 10.008 10.013 9.988
4 10.003 9.994 9.993 10.018 9.996 10.008 10.006 10.002 9.998 10.010
5 10.009 10.003 10.011 10.011 9.994 10.018 9.997 9.989 10.015 9.980


11 12 13 14 15 16 17 18 19 20
1 9.980 10.004 10.025 9.992 9.985 9.977 9.996 10.014 10.001 9.982
2 10.038 9.990 9.989 10.023 10.002 9.975 9.991 10.010 9.979 9.975
3 9.990 10.002 9.981 10.019 10.008 10.002 10.005 10.000 10.001 9.976
4 9.996 10.003 10.006 9.990 10.008 10.021 10.009 10.001 10.015 10.012
5 10.016 9.996 9.998 10.003 9.998 9.989 9.977 10.006 10.009 9.994

21 22 23 24 25 26 27 28 29 30
1 10.010 9.988 9.991 10.005 9.987 9.994 9.994 9.972 10.018 9.985
2 10.003 10.004 9.996 10.003 9.993 10.007 9.987 9.994 10.007 10.010
3 9.990 10.001 10.020 10.027 9.992 10.013 10.027 9.969 9.980 9.998
4 10.010 9.995 10.002 9.996 9.987 9.997 10.030 10.011 9.987 10.033
5 10.015 9.977 10.022 9.970 10.008 10.014 9.989 9.985 10.014 9.994

31 32 33 34 35 36 37 38 39 40
1 10.009 9.987 9.990 9.985 9.991 10.002 10.045 9.970 10.019 9.954
2 10.013 10.012 9.973 10.038 9.999 9.989 9.993 9.999 9.989 10.011
3 10.008 10.015 9.996 9.991 9.989 9.983 10.007 9.989 9.998 10.003
4 9.990 9.995 9.990 9.988 10.014 10.013 9.990 9.999 9.997 9.987
5 10.008 10.021 9.980 9.986 9.997 9.980 10.010 10.014 9.986 10.005

3. Suppose that the following sample means and standard deviations are observed for
samples of size 5. Construct xbar and s-charts for these data.

Sample #   Xbar   s   Sample #   Xbar   s
1 2.15 0.14 11 2.10 0.17
2 2.07 0.10 12 2.19 0.13
3 2.10 0.11 13 2.14 0.07
4 2.14 0.12 14 2.13 0.11
5 2.18 0.12 15 2.14 0.11
6 2.11 0.12 16 2.12 0.14
7 2.10 0.14 17 2.08 0.17
8 2.11 0.10 18 2.18 0.10
9 2.06 0.09 19 2.06 0.06
10 2.15 0.08 20 2.13 0.14

4. Construct charts for individuals using both two-period and three-period moving ranges for
the following observations: 9.0, 9.5, 8.4, 11.5, 10.3, 12.1, 11.4, 10.0, 11.0, 12.7, 11.3, 12,
12.6, 12.5, 13.0, 12.0, 11.2, 11.1, 11.5, 12.5, 12.1

5. The fraction defective for an automotive piston is given below for 20 samples. Two
hundred units are inspected each day. Construct a p-chart and interpret the results.

Sample # Fraction Defective Sample # Fraction Defective


1 0.11 11 0.16
2 0.16 12 0.23
3 0.12 13 0.15
4 0.10 14 0.12
5 0.09 15 0.11
6 0.12 16 0.11
7 0.12 17 0.14
8 0.15 18 0.16
9 0.09 19 0.10
10 0.13 20 0.13


12 ACCEPTANCE SAMPLING

Sampling inspection is a screening mechanism used to separate acceptable products from
products of poor quality; it does not by itself improve the quality of any product. The most
obvious way to tell whether a product item is acceptable or not is to inspect it or to use it. If
this can be done to every item, that is, if 100% inspection can be performed, there is no need
to use acceptance sampling. However, in many cases it is neither economical nor possible to
do 100% inspection. For instance, if the cost of inspecting an item is higher than the value of
the item, which is usually true for low-cost mass-produced products such as injection-molded
parts or flashlight bulbs, 100% inspection is not justified; if the equipment and labor costs of
inspecting an item are high, only a small fraction of the product can be inspected. If the
inspection is a destructive test (for example, a life test for electronic components or a car
crash test), obviously 100% inspection would destroy all the products. When inspection is
necessary and 100% inspection is not possible, acceptance sampling can be employed.

A sampling plan is a method for guiding the acceptance sampling process. It specifies the
procedure for drawing samples to inspect from a batch of products and then the rule for
deciding whether to accept or reject the whole batch based on the results of this inspection.
The sample is a small number of items taken from the batch rather than the whole batch. The
action of rejecting the batch means not accepting it for consumption; this may include
downgrading the batch, selling it at a lower price, or returning it to its supplier or vendor.

Suppose that a sampling plan specifies that (a) n items are drawn randomly from a batch to
form a sample and (b) the batch is rejected if and only if more than c of these n items are
defective or non-conforming. An operating characteristic curve, or OC-curve, of a sampling
plan is defined as the plot of the probability that the batch will be accepted, Pa(p), against the
fraction p of defective products in the batch. The larger Pa(p) is, the more likely it is that the
batch is accepted. A higher likelihood of acceptance benefits the producer. On the other
hand, the smaller is Pa(p), the harder it will be for the batch to be accepted. This would
benefit and even protect the consumer who would want some assurance against receiving bad
products and would prefer accepting batches with a low p (fraction defective) value.

12.1 AQL and RQL

In order to specify a sampling plan with Pa(p) characteristics as described above, the numbers
n and c must be correctly specified. This specification requires us to specify two batch quality
or fraction defective levels first, namely the AQL (acceptable quality level) and the RQL
(rejection quality level) values. AQL and RQL (explained below) are two key quality
parameters frequently used in designing a sampling plan. An ideal sampling plan is regarded
as one that accepts the batch with 100% probability if the fraction of defective items in it is
less than or equal to the AQL and rejects the batch with 100% probability if the fraction of
defective items in the batch is larger than the AQL.

Most customers realize that perfect quality (e.g., a batch or lot of parts containing no defective
parts in it) is perhaps impossible to expect; some defectives will always be there. Therefore,
the customer decides to tolerate a small fraction of defectives in his/her purchases. However,
the customer certainly wants a high level of assurance that the sampling plan used to screen
the incoming lots will reject lots with fraction defective levels exceeding some decidedly poor
quality threshold called the rejection quality level or RQL. In reality, RQL is a defect level
that causes a great deal of heartache once such a lot enters the customer's factory.

The supplier of those parts, on the other hand, wants to ensure that the customer's acceptance
sampling plan will not reject too many lots with defect levels that are certainly within the
customer's tolerance, i.e., acceptable to the customer on the average. Generally, the customer
sets a quality threshold here also, called AQL. This is actually the worst lot fraction defective
that is acceptable to the customer in the shipments he/she receives on an average basis.


12.2 The Operating Characteristic (OC) Curve of a Sampling Plan

A bit of thinking will indicate that only error-free 100% inspection of all items in a lot would
accept the batch with 100% probability if the fraction of defective items in it is less than or
equal to AQL and reject the lot with a 100% probability if the fraction of defective items in it
is larger than AQL. Such performance cannot be realized otherwise. The OC-curve of such a
sampling plan is shown in Figure 12.1.

Figure 12.1 The Ideal OC Curve (Pa(p) equals 1.0 for every p up to the AQL and drops to 0 for p beyond it)

In order to correctly evaluate the OC-curve of an arbitrary (not 100%-inspection) sampling
plan with parameters (n, c), we need to harness certain principles of probability theory, as
follows. Suppose that the batch is large or the production process is continuous, so that
drawing a sample item with or without replacement has about the same result. In such cases
we are permitted to assume that the number of defective items x in a sample of sufficiently
large size n follows a binomial probability distribution. Pa(p) is then given by the expression

Pa(p) = Σ (x = 0 to c) C(n, x) p^x (1 - p)^(n-x)

where C(n, x) = n! / [x! (n - x)!] is the binomial coefficient.

Now, if the producer is willing to sacrifice a little bit, so that the batch with fraction defective
AQL is accepted with a probability of at least (1 - α), where α is a small positive number, and
the consumer is willing to sacrifice a little, so that the batch with fraction defective RQL is
accepted with a probability of at most β, where β is a small positive number and RQL > AQL,
then the following two inequalities can be established:

Σ (x = 0 to c) C(n, x) AQL^x (1 - AQL)^(n-x) ≥ 1 - α

Σ (x = c+1 to n) C(n, x) RQL^x (1 - RQL)^(n-x) ≥ 1 - β

From the above two inequalities, the numbers n and c can be solved for (though the solution
may not be unique). The OC-curve of such a sampling plan is shown in Figure 12.2. The
number α is called the producer's risk, and the number β the consumer's risk. As a common
practice in industry, the magnitudes of α and β are usually set at some value between 0.01 and
0.1. Nomograms are available for easily obtaining solution(s) of the inequalities given above.


The above sampling plan is called a single sampling plan (Figure 12.3) because the decision
to accept or reject the batch is made in a single stage after drawing a single sample from the
batch being evaluated.

To set up a practical single sampling procedure you would need to specify 1) the AQL, 2) α,
the producer's risk, 3) the RQL, and 4) β, the consumer's risk. The plan itself may be
developed from a tool called the "Larsen nomogram" given in statistical quality control texts
and handbooks. A typical plan specifying AQL = 0.01, α = 0.05, RQL = 0.06 and β = 0.10
will have sample size n = 89 and maximum number of defectives allowed c = 2.
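The OC-curve of such a plan is easy to evaluate numerically; the sketch below (Python, standard library only) uses the binomial expression for Pa(p) with the plan just quoted (n = 89, c = 2).

from math import comb

def prob_accept(p, n, c):
    """Pa(p): probability of accepting a lot of fraction defective p under a
    single sampling plan (sample n items, accept if at most c are defective)."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(c + 1))

for p in (0.01, 0.02, 0.04, 0.06):
    print(f"p = {p:.2f}   Pa(p) = {prob_accept(p, 89, 2):.3f}")

Pa(0.01) comes out near 0.94 and Pa(0.06) near 0.09, close to the 1 - α = 0.95 and β = 0.10 targets of the example.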


Figure 12.2 OC Curve of a Practical Sampling Plan (Pa(p) plotted against the fraction defective p, with the AQL and RQL marked on the p-axis)

Figure 12.3 The Single Sampling Plan (sample n items and inspect them all; if the number of defectives d found does not exceed c, accept the lot, otherwise reject it)


12.3 DOUBLE SAMPLING PLANS

A double sampling plan is one in which a sample of size n1 is drawn and inspected, and the
batch is accepted or rejected according to whether the number d1 of defective items found in
the sample is ≤ r1 or > r2 (where r1 < r2); if the number of defective items lies between r1 and
r2, a further sample of size n2 is drawn and inspected, and the batch is accepted or rejected
according to whether the total number d1 + d2 of defective items in the first and second
samples is ≤ r3 or > r3. This procedure is shown diagrammatically in Figure 12.4.

Figure 12.4 The Double Sampling Plan (sample n1 items and inspect them; accept the lot if d1 ≤ r1 and reject it if d1 > r2; otherwise sample n2 more items from the lot, inspect them, and accept the lot if d1 + d2 ≤ r3, else reject it)



The average sample number (ASN) of a double sampling plan is n1 + (1 - P1(p)) n2, where
P1(p) is the probability that a decision (acceptance or rejection) is reached on the first sample
alone. If the value of p in the batch is very low or very high, a decision can usually be made
from the first sample, and the ASN of a double sampling plan will be smaller than the sample
size of a single sampling plan with the same producer's risk (α) and consumer's risk (β).
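A short numerical sketch of the ASN calculation is given below (Python, standard library only); the plan parameters are hypothetical and chosen only for illustration.

from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for a binomial(n, p) count."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(k + 1))

def asn(p, n1, n2, r1, r2):
    """Average sample number of a double sampling plan: the second sample of
    size n2 is needed only when the first-sample count d1 satisfies r1 < d1 <= r2."""
    p_decide_first = binom_cdf(r1, n1, p) + (1 - binom_cdf(r2, n1, p))
    return n1 + (1 - p_decide_first) * n2

# Hypothetical plan: n1 = n2 = 50, accept on the first sample if d1 <= 1,
# reject if d1 > 3, otherwise draw the second sample
for p in (0.01, 0.05, 0.10):
    print(f"p = {p:.2f}   ASN = {asn(p, 50, 50, 1, 3):.1f}")

As expected, the ASN stays close to n1 for very low (and very high) values of p and rises when p lies in the indecisive middle region.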

The idea of the double sampling plan can be extended to construct sampling plans with more
than two stages, namely multiple sampling plans. A sequential sampling plan is a sampling
plan in which one item is drawn and inspected at a time: if a decision (to accept or reject the
batch) can be made upon inspection of the first item, sampling stops; if a decision cannot be
made, a second item is drawn and inspected, and if a decision can then be made, sampling
stops; otherwise a third item is drawn and inspected, and so on, until a decision can be made.
However, multiple sampling plans and sequential sampling plans are not so commonly used,
because their implementation in practice is more complicated than that of single and double
sampling plans.

12.4 AOQ AND AOQL (AVERAGE OUTGOING QUALITY LIMIT)

The quantity [p Pa(p)] is called the average outgoing quality (AOQ) of a sampling plan at
fraction defective p. This is the quality to be expected at the customer's end if the sampling
plan is used consistently and repeatedly to accept or reject lots being received. It is clear that
if p = 0, then AOQ = 0. If p = 1, that is, all the product items in the batch are defective, then
Pa(p) of any sampling plan that we are using should be equal to 0 for otherwise the plan would
not have any utility. Hence AOQ = 0 also when p = 1. Since AOQ is zero at both ends and
non-negative in between, it attains a global maximum somewhere in the range (0, 1); this
maximum is called the average outgoing quality limit (AOQL) of the sampling plan. The
graph of AOQ against p for a typical
sampling plan is shown in Figure 12.5. Although a sampling plan can be specified by setting
the producer's risk () and consumer's risk () at AQL and RQL, the quantity AOQL can also
be used to specify a sampling plan.
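The AOQL can be located numerically by scanning the AOQ curve; a minimal sketch follows (Python, standard library only), using the single sampling plan n = 89, c = 2 quoted earlier as an illustration.

from math import comb

def prob_accept(p, n, c):
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(c + 1))

def aoq(p, n, c):
    """Average outgoing quality at incoming fraction defective p."""
    return p * prob_accept(p, n, c)

n, c = 89, 2
grid = [i / 1000 for i in range(1, 200)]          # p from 0.001 to 0.199
p_star = max(grid, key=lambda p: aoq(p, n, c))    # p at which AOQ peaks
print(f"AOQL = {aoq(p_star, n, c):.4f}, reached near p = {p_star:.3f}")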

Figure 12.5 The AOQ Curve of a Sampling Plan (AOQ plotted against the lot fraction defective p; the peak of the curve is the AOQL)

Another type of sampling plan, different from the above, is the continuous sampling
procedure (CSP). The rationale of CSP is that if we are not sure that the products produced by
a process are of good quality, 100% inspection is adopted; if the quality of the products is
found to be good, then only a fraction of the products is inspected. In the simplest CSP, 100%
inspection is performed initially; if no defective items are found after a specified number of
items have been inspected (which suggests that the quality of the product being produced is
good), 100% inspection is stopped and only a fraction f of the products is inspected. During
fractional inspection, if a defective item is found (which suggests that the quality of the
products might have deteriorated), fractional inspection is stopped and 100% inspection is
resumed. More refined CSPs have also been constructed, for example by setting f at 1/2 in the
first stage, 1/4 in the second stage, and so on.


12.5 VARIABLES SAMPLING PLANS

All sampling plans described above are called attribute sampling plans, because the
inspection procedure is based on a "go"/ "no go" basis, that is, an item is either regarded as
non-defective and accepted, or it is regarded as defective and not accepted. Variable
sampling plans are sampling plans in which continuous measurements (such as dimensional
or weight checks) are made on each item in the sample, and the decision as to whether to
accept or to reject the batch is based on the sample mean or the average of the measurements
obtained from all items contained in the sample.

A variable sampling plan can be used, for example, when a product item is regarded as
acceptable if a certain measurement x (diameter, length, hardness, etc.) of it exceeds a pre-set
lower spec limit L; otherwise the item is regarded as not acceptable (see Figure 12.6).

Figure 12.6 Distribution of Individual Measurements (x); items with x below the lower spec limit L are unacceptable

Measurements {x} of the products produced would vary from item to item, but these
measurements have a population mean µ, say. When µ is much larger than L, we can expect
that most items will have an x value greater than L and all such items would be acceptable;
when µ is much less than L, we can expect that most items will have x values less than L and
all such items would not be acceptable. A variable sampling plan can be constructed by
specifying a sample size n and a lower cut-off value c for the sample mean xbarn such that if
the sample of size n is drawn and all items in this sample are measured, the lot is accepted if
the sample mean xbarn exceeds c. The lot is rejected otherwise. We require that when the
population mean of the product produced is µ1 or larger, the lot is accepted with a probability
of at least 1 - , and when the population mean is µ2 or smaller, the lot is accepted with
probability only  or less, where  = producer's risk and  = consumer's risk (Figure 12.7).

Figure 12.7 Producer's (α) and Consumer's (β) Risks (distributions of the sample mean xbarn for unacceptable lots centred at µ2 and for acceptable lots centred at µ1, with the cut-off value c between them)


Suppose that zγ denotes the value exceeded by a standard normal variable with probability γ,
and that the standard deviation σ of x is known. According to the criterion given above we
can derive that

√n ≥ σ (zα + zβ) / (µ1 - µ2)

µ2 + zβ σ / √n ≤ c ≤ µ1 - zα σ / √n

The above system of inequalities may not have a unique (n, c) solution. From elementary
statistical theory, if the distribution of x is known (for example, when x follows a normal
distribution), these inequalities let us determine a minimum value for the sample size n
(which is an integer) and a range for the cut-off point c for the sample mean xbar. Such a
sampling plan is called a single specification limit variables sampling plan. If an upper
specification limit U instead of a lower specification limit L is set for x, we only need to
consider the lower specification limit problem with (-x) replacing x and (-U) replacing L.
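Under the normality assumption, the minimum n and the admissible range for c follow directly from the two inequalities above. The sketch below (Python, standard library only) illustrates this; the numerical inputs are hypothetical.

import math
from statistics import NormalDist

def variables_plan(mu1, mu2, sigma, alpha, beta):
    """Minimum sample size n and cut-off range [c_low, c_high] for a single
    (lower) specification limit variables plan: accept the lot when xbar_n > c.
    Assumes x is normal with known standard deviation sigma and mu1 > mu2."""
    z_a = NormalDist().inv_cdf(1 - alpha)     # upper-alpha point of the standard normal
    z_b = NormalDist().inv_cdf(1 - beta)      # upper-beta point of the standard normal
    n = math.ceil((sigma * (z_a + z_b) / (mu1 - mu2)) ** 2)
    c_low  = mu2 + z_b * sigma / math.sqrt(n)
    c_high = mu1 - z_a * sigma / math.sqrt(n)
    return n, c_low, c_high

# Hypothetical requirement: accept lots with mean >= 10.5 with probability >= 0.95,
# accept lots with mean <= 10.0 with probability <= 0.10, known sigma = 0.8
print(variables_plan(10.5, 10.0, 0.8, 0.05, 0.10))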

When a product item is regarded as acceptable only if a certain measurement x of it lies
between a lower specification limit L and an upper specification limit U, a double
specification limit variables sampling plan is used. In such a plan, a sample size n, a lower
cut-off value cL and an upper cut-off value cU for the sample mean xbar are specified. A
batch is accepted if and only if the sample mean of a sample of size n from the batch lies
between cL and cU. Calculations for cL, cU, and n are more complicated than in the single
specification case.

12.6 MIL-STD-105E SAMPLING PLANS

International standards for sampling plans are now available. Many of these are based on the
work of Professors Dodge and Romig. The plan that was originally developed for single and
multiple attribute sampling for the US Army during World War II is now widely used in
industry; it is called MIL-STD-105E. An equivalent Indian standard, known as IS 2500, has
been published by the Bureau of Indian Standards. Many other official standards for various
attribute sampling plans (such as those based on AOQ, or CSPs, and so on) and variables
sampling plans (assuming the variable has a normal distribution, with the population variance
known or unknown, and so on) have been published by the US government and the British
Standards Institution.

Sampling Inspection is only a Screening Tool!

Before we end this section, we stress again that acceptance sampling, or sampling inspection,
is only a screening tool for separating batches or lots of good quality products from batches of
poor quality products. To some extent this screening assures the quality of incoming parts and
materials. The use of sampling plans does help an industry do this screening more effectively
than drawing samples arbitrarily. Therefore, sampling inspection can be used during
purchasing, for checking the quality of incoming materials, whenever one is not sure about
the conditions and QC procedures in use in the vendor's plant.

Acceptance sampling can also be used for the final checking of products after production
(Figure 1.3). This, to a limited degree, assures the quality of the products being readied for a
customer before they are physically dispatched; even Motorola uses acceptance sampling as a
temporary means of quality control until permanent corrective actions can be implemented.
But note that unlike SPC, acceptance sampling does not help prevent the production of poor
quality products.


13 WHAT ARE TAGUCHI METHODS?

In this section we briefly discuss methods that belong in the domain of quality engineering, a
recently formalized discipline that aims at developing products whose superior performance
delights the discriminating user, not only when the package is opened but throughout their
lifetime of use. The quality of such products is robust, i.e., it remains unaffected by the
deleterious impact of environmental or other factors often beyond the user's control.

Since the topic of quality engineering is of notably broad appeal, we include below a brief
review of the associated rationale and methods. The term “quality engineering” (QE) was
used till recently by Japanese quality experts only. One such expert is Genichi Taguchi (1986)
who reasoned that even the best available manufacturing technology was by itself no
assurance that the final product would actually function in the hands of its user as desired. To
achieve such assurance, Taguchi suggested, the designer must “engineer” quality into the
product, just as he/she specifies the product's physical dimensions to make the dimensions of
the final product correct.

QE requires systematic experimentation with carefully developed prototypes whose
performance is tested in actual field conditions. The object is to discover the optimum set-
point values of the different design parameters, to ensure that the final product would perform
as expected consistently when in actual use. A quality-engineered product has robust
performance.

13.1 Taguchi's Thoughts on Quality

Taguchi, a Japanese electrical engineer by training, is credited with several contributions to
the management and assurance of quality. Taguchi studied the methods of
design of experiments (DOE) at the Indian Statistical Institute in the 1950s and later applied
these methods in a very creative manner to improve product and process design. His methods
now form the foundation of engineering design methodology in many leading industries
around the world, including AT&T, General Motors and IBM. In the 1980s his methods were
popularized in the USA by Madhav Phadke and Raghu Kacker of the Bell Laboratories.

Taguchi's contributions may be classified under the following three headings:


 The loss function
 Robust design of products and production processes
 Simplified industrial statistical experiments

Figure 13.1 Loss to Society Increases Whenever Performance Deviates from the Target (Taguchi's view of loss to society: “target is best; loss rises continuously” as the performance characteristic moves away from the target; traditional view: “within spec is OK, outside spec is bad”)


The essence of the loss function concept may be stated as follows. Whenever a product
deviates from its target performance, it generates a loss to society (Figure 13.1). This loss is
minimum when performance is right on target, but it grows gradually as one deviates from the
target. Such a philosophy suggests that the traditional "if it is within specs, the product is
good" view of judging a product's quality is not correct. If your foot size is 7, then a shoe of
size different from 7 will cause you inconvenience, pain, loose fit, and even embarrassment.
Under such conditions it is meaningless to seek a shoe that meets a spec given as (7 ± x).

To state again, the loss function philosophy says that for a producer, the best strategy is to
produce products as close to the target as possible, rather than aiming at "being within
specifications."
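The loss function most often associated with Taguchi is the quadratic loss L(y) = k (y - T)^2, which is zero on the target T and grows with the square of the deviation; this formula is not spelled out in the text above, so the sketch below (Python) should be read as an illustration with hypothetical numbers.

def taguchi_loss(y, target, k):
    """Quadratic loss: zero on target and growing with the squared deviation."""
    return k * (y - target) ** 2

# Hypothetical shoe example: target size 7, k = 100 (loss units per squared size unit)
for size in (7.0, 7.25, 7.5, 8.0):
    print(f"size {size}: loss = {taguchi_loss(size, 7.0, 100):.0f}")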

The other contributions of Taguchi are a methodology to minimize performance or quality
problems arising due to non-ideal operating or environmental conditions, and a simplified
method known as orthogonal array experiments to help conduct multi-factor experiments
toward seeking the best product or process design. These ideas may be described as follows.

13.2 The Secret of Creating a Robust Design

A practice common in traditional engineering design is sensitivity analysis. For instance, in
traditional electronic circuit design, as well as the development of performance design
equations, sensitivity analysis of the circuit developed remains a key step that the designer
must complete before his job is over. Sensitivity analysis evaluates the likely changes in the
device's performance, usually due to element value tolerances or due to value changes with
time and temperature.

Sensitivity analysis also determines the changes to be expected in the design’s performance
due to factor variations of uncontrollable character. If the design is found to be too sensitive,
the designer projects the worst-case scenario, to help plan for the unexpected. However,
studies indicate that worst-case projections or conservative designs are often unnecessary and
that a “robust design” can greatly reduce off-target performance caused by poorly controlled
manufacturing conditions, temperature or humidity shifts, wider component tolerances used
during fabrication, and also field abuse that might occur due to voltage/frequency fluctuations,
vibration, etc.

Robust design should not be confused with rugged or conservative design, which adds to unit
cost by using heavier insulation or high reliability, high tolerance components. As an
engineering methodology robust design seeks to reduce the sensitivity of the product/process
performance to the uncontrolled factors through a careful selection of the values of the design
parameters. One straightforward way to produce robust designs is to apply the "Taguchi
method".

The Taguchi method may be illustrated as follows. Suppose that a European product (Swiss
chocolate bars) is to be introduced “as is” in a tropical country where the ambient temperature
rises to 45C. If the European product formulation is directly adopted, the result may be
molten bars on store shelves in Bombay and Singapore and gooey hands and dirty dresses, due
to the high temperature sensitivity of the Swiss chocolate recipe (Curve 1, Figure 13.2).

The behavior of the bar’s plasticity may be experimentally explored to determine its
robustness to temperature, but few product designers actually attempt this. Taguchi would
suggest that we do here some special “statistical experiments” in which both the bar’s
formulation (the original Swiss and perhaps an alternate prototype formulation that we would
call “X”) and ambient temperature would be varied simultaneously and systematically and the
consequent plasticities observed.


Figure 13.2 Dependence of Plasticity on Ambient Temperature (plasticity plotted against temperature; Curve 1 shows a highly temperature-sensitive formulation, Curve 2 a much less sensitive one)

Figure 13.3 Interaction of the Effects of Temperature and Chocolate Formulation (between the lowest and the highest ambient temperatures, Formulation “X” shows robust behavior in plasticity, while the European (Eu) formulation shows unacceptable variation in plasticity due to the temperature rise)

Taguchi was able to show that by such experiments it is often possible to discover an alternate
bar design (here an appropriate chocolate bar formulation) that would be robust to
temperature. The trick, he said, is to uncover any “exploitable” interaction between the effect
of changing the design (e.g. from the Swiss formulation to Formulation “X”) and temperature.
In the language of statistics, two factors are said to interact when the influence of one on a
response is found to depend on the setting of the other factor (Montgomery, 1997). Figure
13.3 shows such an interaction, experimentally uncovered. Thus, a “robust” chocolate bar
may be created for the tropical market if the original Swiss formulation is changed to
Formulation “X”.


13.3 Robust Design by the “Two-step” Taguchi Method

Note that a product's performance is “fixed” primarily by its design, i.e., by the settings
selected for its various design factors. Performance may also be affected by noise:
environmental factors, unit-to-unit variation in material, workmanship, methods, etc., or
aging/deterioration (Figure 13.4). The breakthrough in product design that Taguchi achieved
renders performance robust even in the presence of noise, without actually controlling the
noise factors themselves. Taguchi's special “design-noise array” experiments (Figure 13.5)
discover the optimum settings. Briefly, the procedure first builds a special assortment of
prototype designs (as guided by the “design array”) and then tests these prototypes for their
robustness in “noisy” conditions. For this, each prototype is “shaken” by deliberately
subjecting it to different levels of noise (selected from the “noise array”, which simulates
noise variation in field conditions). Thus performance is studied systematically under noise in
order to find, eventually, a design that is insensitive to the influence of noise.

Figure 13.4 Design and Noise Factors Both Impact Response (design factors A and B, together with noise, act on the product and determine the variable response)

Temp 50C Plasticity X_50


Temp 40C Plasticity Eu_50
“X”
Temp 30C ...
 =
Eu Temp 20C
Temp 10C Plasticity X_0
Temp 0C Plasticity Eu_0

Design Noise (Experimental


Array Array results)

Figure 13.5 DESIGNNOISE ARRAY EXPERIMENTS AND THEIR RESULTS

To guide the discovery of the “optimum” design factor settings Taguchi suggested a two-step
procedure. In Step 1, optimum settings for certain design factors (called “robustness seeking
factors”) are sought so as to ensure that the response (for the bar, plasticity) becomes robust
(i.e., the bar does not collapse into a blob at temperatures at least up to 50°C). In Step 2, the
optimum setting of some other design factor (called the “adjustment” factor) is sought to put
the design’s average response at the desired target (e.g., for plasticity a level that is easily
chewable).


For the chocolate bar design problem, the alternative design “factors” are two candidate
formulations: one the original European, and the other the one we called “X”. Thus, the
design array would contain two alternatives, “X” and “Eu.” “Noise” here is ambient
temperature, to be experimentally varied over the range 0°C to 50°C, as seen in the tropics.

Figure 13.5 shows the experimental outcome of hypothetical designnoise array experiments.
For instance, response value Plasticity X_50 was observed when formulation X was tested at
50°C. Figure 13.3 is a compact illustrative display of these experimental results. It is evident
from Figure 13.3 that Formulation X’s behavior is quite robust even at tropical temperatures.
Therefore, adopting Formulation “X” would make the chocolate bar “robust”, i.e., its
plasticity would not vary much even if the ambient temperature had wide swings.

13.4 Taguchi's Orthogonal Array Experiments

Rather typically, the performance of a product (or process) is affected by a multitude of
factors. It is also well known that over two-thirds of all product malfunctions may be traced to the
design of the product. To the extent basic scientific knowledge allows the designer to guide
his/her design, the designer does his/her best to come up with a selection of design parameter
values that would ensure good performance. Frequently, though, not everything can be
predicted by theory; experimentation or prototyping must then be resorted to, and the design
must be empirically optimized. The planning of such experimental multiple-factor
investigations falls in the domain of statistical design of experiments (DOE).

Many textbook methods for conducting multi-factor experiments, however, are too elaborate
and cumbersome. This has discouraged many practicing engineers from trying out this
powerful methodology in real design and optimization work. Taguchi observed this and
popularized a class of simpler experimental plans that can still reveal a lot about the
performance of a product or process, without the burden of heavy theory. An example is
shown below.

A fire extinguisher is to be designed so as to effectively cover flames in case of a fire. The
designer wishes to achieve this either by using higher pressure inside the CO2 cylinder or by
altering the nozzle design. Theoretical models of such systems using computational fluid
dynamics (CFD) are too complex and too cumbersome to optimize. The question to be
answered is, which is more effective: higher pressure or a wider nozzle?

The Taguchi orthogonal array experimental scheme would set up the following experimental
plan consisting of only four specifically designed experiments. The results are shown in the
table and on the associated graph. It is clear from the factor effect plots of the results that the
designer would be much better off by increasing pressure. Nozzle diameter seems to have
little effect on the extent of the area covered by the extinguisher.

Expt #   Nozzle dia   CO2 Pressure   Observed Spray Area
1        5 mm         2 bars         0.8 m²
2        10 mm        2 bars         0.9 m²
3        5 mm         4 bars         1.6 m²
4        10 mm        4 bars         1.9 m²

(Factor effect plot: average spray area plotted against nozzle diameter (5, 10 mm) and against CO2 pressure (2, 4 bars))
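The factor effects plotted alongside the table can be recovered directly from the four runs; a minimal sketch of that calculation follows (Python).

# The four orthogonal-array runs above:
# (nozzle diameter in mm, CO2 pressure in bars, observed spray area in m^2)
runs = [(5, 2, 0.8), (10, 2, 0.9), (5, 4, 1.6), (10, 4, 1.9)]

def main_effect(runs, index, low, high):
    """Average spray area at the high level minus the average at the low level."""
    at = lambda level: [r[2] for r in runs if r[index] == level]
    return sum(at(high)) / len(at(high)) - sum(at(low)) / len(at(low))

print("Nozzle diameter effect (10 mm vs 5 mm):", main_effect(runs, 0, 5, 10))   # 0.2 m^2
print("CO2 pressure effect (4 bars vs 2 bars):", main_effect(runs, 1, 2, 4))    # 0.9 m^2

The pressure effect (0.9 m²) dwarfs the nozzle effect (0.2 m²), which is the conclusion drawn above.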


14 THE SIX SIGMA PRINCIPLE

The six sigma principle is Motorola's own rendering of what is known in the quality literature
as the zero defects (ZD) program. Zero defects is a philosophical benchmark or standard of
excellence in quality proposed by Philip Crosby. Crosby explained the mission and essence of
ZD by the statement "What standard would you set on how many babies nurses are allowed to
drop?" ZD is aimed at stimulating each employee to care about accuracy and completeness, to
pay attention to detail, and to improve work habits. By adopting this mind-set, everyone
assumes the responsibility toward reducing his or her own errors to zero.

One might think that having three-sigma quality, i.e., the natural variability (xbar ± 3σx) just
equal to the tolerance (= upper spec limit - lower spec limit, or in other words, Cp = 1.0),
would mean good enough quality. After all, if the distribution is normal, only 0.27% of the
output would be expected to fall outside the product's specs or tolerance range. But what does
this really mean? An average aircraft consists of 10,000 different parts. At 3-sigma quality, 27 of those
mean? An average aircraft consists of 10,000 different parts. At 3-sigma quality, 27 of those
parts in an assembled aircraft would be defective. At this performance level, there would be
no electricity or water in Delhi for one full day each year. Even four-sigma quality may not
be OK. At four-sigma level there would be 62.10 minutes of telephone shutdown every week.

You might wish to know where performance is today. Restaurant bill errors are near 3-sigma.
Payroll processing errors are near 4-sigma. Wire transfer of funds in banks is near 4-sigma
and so is baggage mishandling by airlines. The average Indian manufacturing industry is near
the 3-sigma level. Airline flight fatality rates are at about 6.5 sigma level (0.25 per million
landings). At the two-sigma level, a company's returns, scrap, rework and erosion of market
share cost it over a third of its yearly sales.

For the typical Indian, a 1-hour train delay, an incorrect eye operation or drug administration,
or no electricity or water half a day is no surprise; he/she routinely experiences even worse
performance. Quantitatively, such performance is worse than two-sigma. Can this be called
acceptable? One- or two-sigma performance is downright noncompetitive.

Besides adopting TQM as the way to conduct business, many companies worldwide are now
seriously looking at six-sigma benchmarks to assess where they stand. Six sigma not only
reduces defects and raises customer acceptability, it has been now shown at Allied Signal Inc.,
Motorola, Raytheon, Bombardier Aerospace and Xerox that it can actually save money as
well. Therefore, it is no surprise that Motorola aggressively set the following quality goal for
itself in 1987 and then didn't want to stop till they achieved it:

Improve product and services quality ten times by 1989, and at least one
hundred fold by 1991. Achieve six-sigma capability by 1992. With a deep
sense of urgency, spread dedication to quality to every facet of the
corporation, and achieve a culture of continual improvement to assure total
customer satisfaction. There is only one goal: zero defects in everything
we do.

The Steps to Six Sigma

The concept of six-sigma quality, shrinking the inherent variation in a process to half of the
spec range (Cp = 2.0) while allowing the mean to shift at most 1.5 sigma from the spec
midpoint (the target quality), is explained by Figure 14.1. The area under the shifted curves
beyond the six sigma range (the tolerance limits) is only 0.0000034, or 3.4 parts per million.
If the process mean can be controlled to within ± 1.5σx of the target, a maximum of 3.4
defects per million pieces produced can be expected. If the process mean is held exactly on
target, only 2.0 defects per billion would be expected. This is why, within its organization,
Motorola defines six sigma as a state of the production or service unit that represents "almost
perfect quality."
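The 3.4 parts-per-million figure can be reproduced from the normal distribution; the sketch below (Python, standard library only) assumes spec limits six sigma from the target and a worst-case mean shift of 1.5 sigma.

from statistics import NormalDist

z = NormalDist()  # standard normal distribution

def defects_per_million(shift_in_sigma):
    """Fraction of output beyond the +/- 6 sigma spec limits when the mean is shifted."""
    upper_tail = 1 - z.cdf(6 - shift_in_sigma)   # beyond the nearer spec limit
    lower_tail = z.cdf(-6 - shift_in_sigma)      # beyond the farther spec limit
    return (upper_tail + lower_tail) * 1e6

print(f"Mean on target:         {defects_per_million(0.0):.4f} defects per million")
print(f"Mean shifted 1.5 sigma: {defects_per_million(1.5):.1f} defects per million")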


Figure 14.1 The Six Sigma Process (the inherent variability, process mean ± 3σ, equals half of the tolerance; LSL and USL lie 6σ from the target, and the process mean is allowed to vary within ± 1.5σ of the target)

Motorola prescribes six steps to achieve the six-sigma state, as follows.

Step 1: Identify the product you create or service you provide.


Step 2: Identify the customer(s) for your product or service and determine what they consider
important.
Step 3: Identify your needs to provide the product or service that satisfies the customer.
Step 4: Define the process for doing the work.
Step 5: Mistake-proof the process and eliminate waste effort.
Step 6: Ensure continuous improvement by measuring, analyzing and controlling the
improved process.

Many companies have adopted the Measure-Analyze-Improve-Control cycle to step into six
sigma. Typically they proceed as follows:
 Select critical-to-quality characteristics
 Define performance standards (the targets to be achieved)
 Validate measurement systems (to ensure that the data is reliable)
 Establish Product Capability (how good are you now?)
 Define performance objectives
 Identify sources of variation (seven tools etc.)
 Screen potential causes (correlation studies, etc.)
 Discover the relationship between the variables (causes or factors) and the output (DOE)
 Establish operating tolerances for input factors and output variables
 Validate the measurement system
 Determine process capability (Cpk) (Can you deliver? What do you need to
improve?)
 Implement process controls

One must audit and review nonstop to ensure that one is moving along the charted path.


The aspect common to six sigma and ZD ("zero defects") is that both concepts require the
maximum participation of the entire organization. In other words, they require unrelenting
effort by management and the involvement of all employees. Companies such as
General Motors have used a four-phase approach to seek six sigma:

1. Measure: Select critical quality characteristics through Pareto charts; determine the
existing frequency of defects, define the target performance standard, validate the
measurement system and establish the existing process capability.
2. Analyze: Understand when, where, and why defects occur by defining performance
objectives and sources of variation.
3. Improve: Identify potential causes, discover cause-effect relationships, and establish
operating tolerances.
4. Control: Maintain improvements by validating the measurement system, determining
process capability, and implementing process control systems.

It is reported that in GM a new culture has been created. An individual or team devotes all its
time and energy to solving one problem at a time, designs solutions with customers'
assistance, and helps to minimize bureaucracy in supporting the six-sigma initiative.

Beyond TQM

The Japanese have recently evolved the "Bluebird Plan," a "third option" beyond SPC and
TQM designed to achieve four objectives of business excellence. These objectives include
establishing corporate ethics, maintaining and boosting international competitiveness,
ensuring stable employment and improving the national quality of life.

The Bluebird Plan provides a forum for government, labor and management to discuss the
actions which need to be taken. In Japan the plan set out an action program for reform for the
three years 1997-1999 which was noted to be a critical time that would determine the direction
of Japan's future. What is striking about the plan is the employers' acceptance that the
relationship between labor and management is an imperative "stabilizing force in society."
Thus it reaches beyond the tenets of TQM.
