
Quality management article


I. Contents of quality management article


==================
A brief survey of the development of the concept of Total Quality Management and its
importance.
Since the beginning of the twentieth century the quality function has developed from what at best
could be described as a cursory inspection of products, into a concept of Total Quality
Management (TQM) - a management philosophy in which the needs of the customer are
exceeded and which encourages all employees to strive towards continuous improvement in the
quality of the products and services of their organisations. The primary objective of the quality
function is to ensure that all products are manufactured free from defects, conform to all
specifications and satisfy the customer's requirements (NCI Quality Management Module 3
Handout, 2005). This function is the main underlying concept behind quality control, quality
assurance and TQM and all the main developments, innovations and theories down the decades
to the present date have kept this function in focus.
Since the 1920s mathematicians and engineers have used the word sigma as a symbol for a unit
of measurement in product quality variation. Six Sigma is an advanced form of TQM.
American Motorola engineers first used the expression in the context of quality improvement in
1986 as an informal name for an in-house initiative for reducing defects in production processes.
By 2000 its quality and process improvement standards had become established in
organisations worldwide.

In the pre-1920 manufacturing period, when an employee's work was inspected, a decision was
simply made whether to accept or reject it. In the years that followed, as manufacturing became
more sophisticated, full-time inspection jobs had to be created, because coping with the more
complex and more technical problems that arose required specialised skills that most production
staff did not have. A separate inspection department evolved, with a chief inspector answerable
to the works manager. With this
department came new services and issues requiring higher standards of quality control. Thus
evolved the quality control department with a quality control manager, responsible for quality
control engineering as well as inspection services.
In the 1920s quality control began to benefit from the application of statistical theory and in
1924 Shewhart made the first sketch of a modern control chart. In fact, much of what today
comprises the theory of statistical process control (SPC) developed from his early work, though
it wasn't until the late 1940s, especially during the rejuvenation of defeated Japan's industrial
system, which had been destroyed in the Second World War, that these techniques were to prove
useful in the manufacturing industry.
In 1947 General Douglas MacArthur took 200 scientists and specialists, including the renowned
American statistician Dr. W. Edwards Deming (1900-1993), to Japan to help regenerate its
economy and redeem its reputation for shoddy goods. By 1949 the Union of Japanese Scientists
and Engineers (JUSE) was formed and Kaoru Ishikawa (1916-1989) developed and delivered
their first basic quality control course that was attended by managers from companies like Sony,
Nissan, Mitsubishi and Toyota. In 1950 the Union invited Deming to deliver lectures on his
statistical quality techniques. Many Japanese manufacturing companies adopted these and while
businesses in the United States were more interested in producing large quantities of products at
the expense of quality, the Japanese were gaining a considerable foothold in American markets
with their inexpensive and high quality products. In fact, quality management practices
developed rapidly in Japanese plants in the early 1950s and became a major theme in Japanese
management philosophy.
In the late 1960s and early 1970s Japan's exports to the USA and Europe continued to grow
significantly, and by the late 70s and 80s American businesses were feeling the brunt of Japan's
more advanced industrial practices. Some companies, including Ford, IBM, and Xerox, had
started to adopt Dr Deming's principles of Total Quality Management, as a result of which they
were able to regain some of the markets earlier lost to the Japanese. However, by this time
Japanese firms were able to measure their quality defects in terms of a small number of parts per
million, while their Western counterparts were still quoting percentage defects.
The development of the quality function owed a lot to the theories and ideas of three groups of
gurus whose contributions to the development of the quality function were groundbreaking.
These were the Americans who went to Japan in the fifties like Deming, Juran and Feigenbaum;

the Japanese, like Ishikawa, Taguchi and Shingo, who developed new concepts in response to the
Americans in the late 1950s; and Western gurus of the 1970s and 1980s, like Peters and Crosby.

Deming (1900-1993) promoted problem solving and team work, concepts that were new to
statistical quality control. He believed management to be responsible for 94% of quality
problems and his famous fourteen point plan included creating a constancy of purpose towards
improvement of product and service, ceasing the dependence on mass inspection, ending the
practice of awarding business on the basis of price and instituting a vigorous programme of
education and retraining (1982).
He also promoted the Plan, Do, Check, Act (PDCA) cycle, also known as the Deming cycle (see
diagram on right), although it was developed by his colleague, Dr Shewhart (1891-1967).
His contemporary Dr Joseph M. Juran developed the quality trilogy: quality planning, quality
control and quality improvement (1951). According to him, good quality management requires
quality actions to be planned out, improved and controlled. His ten steps to quality improvement
included building awareness of the need and opportunity for improvement, setting goals for
improvement, providing training, carrying out projects to solve problems, and maintaining
momentum (1988).
Another American, Armand V. Feigenbaum, contributed the concept of Total Quality Control,
which he defined as an effective system for integrating quality development, quality
maintenance and quality improvement efforts of the various groups within an organisation, so as
to enable production and service at the most economical levels that allow full customer
satisfaction (1951).
Developments relating to the quality function were also taking place among the Japanese gurus.
The renowned Dr Kaoru Ishikawa interpreted total quality as company-wide quality control,
whereby all staff were encouraged to practice continuous improvement in the quality and
productivity of products and services, so that the needs of the customer were not only catered for,
but also surpassed. His innovations include the assembly and use of the seven basic tools of
quality: Pareto analysis (a tool used to separate the vital few from the trivial many, also known
as the 80:20 rule), stratification, check sheets, histograms, scatter diagrams, cause-and-effect
diagrams, and process control charts.
He is also famous for the Ishikawa (or fishbone or cause and effect) diagram (see diagram
below). First used in the 1960s it is a graphical method used in a root cause analysis for
identifying the most likely causes of an undesired effect. The main bones of the diagram can be
labelled with categories such as the 4 Ms (management, manpower, machines and materials), the
4 Ps (place, procedure, people and policies) or the 4 Ss (surroundings, suppliers, systems and
skills), with identified problems stemming from each (Wikipedia, 2005).

His contemporary Dr Genichi Taguchi introduced the Taguchi methodology which enabled
designers to identify the best possible settings to produce a sturdy product that could survive
manufacturing and provide what the customer wants. Another Japanese expert, Shigeo Shingo, is
strongly associated with the Poka-Yoke (mistake-proofing) system, in which, when a defect was
found, the production system was either stopped so that the root causes of the problem could be
established and prevented from recurring, or the error condition was automatically adjusted to
prevent it from becoming a defect. The aim of Poka-Yoke was to stop errors from becoming
defects. He also identified zero quality control as the ideal production system.
The American Tom Peters identified leadership as being central to the quality improvement
process and suggested Managing By Walking About (MBWA), innovation and people as the
three main areas in the pursuit of excellence (1982). His contemporary, Philip Crosby helped to
popularise the use of TQM and introduced the "4 Absolutes of Quality" which identified quality
as conformance to requirements, achieved through prevention rather than appraisal. He
championed "zero defects" as the quality performance standard and believed that by setting up
processes that are designed to prevent errors, not only will quality improve, but production cost
will also be reduced.
Crosby's fourteen steps to quality improvement include giving formal recognition to all
participants, forming a management-level quality improvement team (QIT), evaluating the cost
of quality and encouraging employees to communicate to management any problems they
identify.
In 1983 the UK's National Quality Campaign was launched, using the BS 5750 standard as its
focus. Since then the International Organization for Standardization's ISO 9000 family, a
globally recognised standard for quality management systems, and Six Sigma (introduced by
Motorola in 1986), a quality improvement methodology for achieving near-perfect quality (UK
DTI, 2005), have become the internationally recognised standards for the implementation of the
quality function in the twenty-first century.
==================

II. Quality management tools

1. Check sheet

The check sheet is a form (document) used to collect data
in real time at the location where the data is generated.
The data it captures can be quantitative or qualitative.
When the information is quantitative, the check sheet is
sometimes called a tally sheet.
The defining characteristic of a check sheet is that data
are recorded by making marks ("checks") on it. A typical
check sheet is divided into regions, and marks made in
different regions have different significance. Data are
read by observing the location and number of marks on
the sheet.
Check sheets typically employ a heading that answers the
Five Ws:

- Who filled out the check sheet
- What was collected (what each check represents, an identifying batch or lot number)
- Where the collection took place (facility, room, apparatus)
- When the collection took place (hour, shift, day of the week)
- Why the data were collected
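As a rough illustration, the tallying behind a quantitative check sheet can be simulated in a few lines of Python; the defect categories and observations below are hypothetical:

```python
from collections import defaultdict

# Hypothetical shop-floor observations: each entry is one occurrence
# of a defect type noticed during a shift.
observations = [
    "scratch", "dent", "scratch", "misalignment",
    "scratch", "dent", "scratch",
]

# The check sheet itself: one region (row) per defect type,
# with one "check" mark added per observation.
check_sheet = defaultdict(int)
for defect in observations:
    check_sheet[defect] += 1

# Read the data back by the number of marks in each region.
for defect, count in sorted(check_sheet.items(), key=lambda kv: -kv[1]):
    print(f"{defect:<14} {'|' * count}  ({count})")
```

In a real application the Five Ws (who, what, where, when, why) would be recorded alongside the tallies, as described above.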

2. Control chart
Control charts, also known as Shewhart charts
(after Walter A. Shewhart) or process-behavior
charts, in statistical process control are tools used
to determine if a manufacturing or business
process is in a state of statistical control.
If analysis of the control chart indicates that the
process is currently under control (i.e., is stable,
with variation only coming from sources common
to the process), then no corrections or changes to
process control parameters are needed or desired.
In addition, data from the process can be used to
predict the future performance of the process. If
the chart indicates that the monitored process is
not in control, analysis of the chart can help
determine the sources of variation, as this will
result in degraded process performance.[1] A
process that is stable but operating outside of
desired (specification) limits (e.g., scrap rates
may be in statistical control but above desired
limits) needs to be improved through a deliberate
effort to understand the causes of current
performance and fundamentally improve the
process.
The control chart is one of the seven basic tools of
quality control.[3] Typically, control charts are
used for time-series data, though they can also be
used for data that have logical comparability (i.e.
where you want to compare samples that were all
taken at the same time, or the performance of
different individuals); however, the type of chart
used to do this requires consideration.
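The core calculation behind a Shewhart chart can be sketched as follows. This is a simplified illustration: limits are set at three sample standard deviations around the mean of a hypothetical in-control baseline period, whereas textbook charts usually estimate sigma from rational subgroups (e.g. via the average range).

```python
import statistics

def control_limits(baseline):
    """Centre line and simple 3-sigma limits from an in-control baseline."""
    mean = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return mean - 3 * sigma, mean, mean + 3 * sigma

# Hypothetical measurements from a period known to be in control.
baseline = [10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7, 10.1]
lcl, cl, ucl = control_limits(baseline)

# New measurements are monitored against those fixed limits; points
# outside them signal a special (non-common) cause of variation.
new_points = [10.1, 12.5, 9.9]
flagged = [x for x in new_points if not lcl <= x <= ucl]
print(f"LCL={lcl:.2f}  CL={cl:.2f}  UCL={ucl:.2f}  flagged={flagged}")
```

The value 12.5 falls outside the limits and would prompt a search for an assignable cause, while the in-limit points need no intervention.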

3. Pareto chart

A Pareto chart, named after Vilfredo Pareto, is a type
of chart that contains both bars and a line graph, where
individual values are represented in descending order
by bars, and the cumulative total is represented by the
line.
The left vertical axis is the frequency of occurrence,
but it can alternatively represent cost or another
important unit of measure. The right vertical axis is
the cumulative percentage of the total number of
occurrences, total cost, or total of the particular unit of
measure. Because the reasons are in decreasing order,
the cumulative function is concave. For example, if the
first three categories of cause together account for 78%
of late arrivals, then solving just those three issues is
sufficient to lower late arrivals by 78%.
The purpose of the Pareto chart is to highlight the
most important among a (typically large) set of
factors. In quality control, it often represents the most
common sources of defects, the highest occurring type
of defect, or the most frequent reasons for customer
complaints, and so on. Wilkinson (2006) devised an
algorithm for producing statistically based acceptance
limits (similar to confidence intervals) for each bar in
the Pareto chart.
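The arithmetic behind a Pareto chart, a descending sort plus a cumulative percentage line, can be sketched directly; the complaint categories and counts here are hypothetical:

```python
# Hypothetical counts of reasons for customer complaints.
complaints = {
    "late delivery": 45,
    "wrong item": 30,
    "damaged packaging": 15,
    "billing error": 6,
    "other": 4,
}

# Bars are drawn in descending order of frequency.
ordered = sorted(complaints.items(), key=lambda kv: -kv[1])
total = sum(complaints.values())

# Cumulative percentage -- the line plotted against the right-hand axis.
cum_pct = []
running = 0
for reason, count in ordered:
    running += count
    cum_pct.append(round(100 * running / total))
    print(f"{reason:<18} {count:>3}  cumulative {cum_pct[-1]}%")
```

Reading the cumulative line identifies the "vital few": here the first two categories alone account for 75% of all complaints.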

4. Scatter plot method

A scatter plot, scatterplot, or scattergraph is a type of
mathematical diagram using Cartesian coordinates to
display values for two variables for a set of data.
The data is displayed as a collection of points, each
having the value of one variable determining the position
on the horizontal axis and the value of the other variable
determining the position on the vertical axis.[2] This kind
of plot is also called a scatter chart, scattergram, scatter
diagram,[3] or scatter graph.
A scatter plot is used when a variable exists that is under
the control of the experimenter. If a parameter exists that
is systematically incremented and/or decremented by the
other, it is called the control parameter or independent
variable and is customarily plotted along the horizontal
axis. The measured or dependent variable is customarily
plotted along the vertical axis. If no dependent variable
exists, either type of variable can be plotted on either axis
and a scatter plot will illustrate only the degree of
correlation (not causation) between two variables.
A scatter plot can suggest various kinds of correlations
between variables, with a certain confidence interval. For
example, for weight and height, weight would be on the
x-axis and height on the y-axis. Correlations may be
positive (rising), negative (falling), or null (uncorrelated).
If the pattern of dots slopes from lower left to upper right,
it suggests a positive correlation between the variables
being studied. If the pattern of dots slopes from upper left
to lower right, it suggests a negative correlation. A line of
best fit (alternatively called 'trendline') can be drawn in
order to study the correlation between the variables. An
equation for the correlation between the variables can be
determined by established best-fit procedures. For a linear
correlation, the best-fit procedure is known as linear
regression and is guaranteed to generate a correct solution
in a finite time. No universal best-fit procedure is
guaranteed to generate a correct solution for arbitrary
relationships. A scatter plot is also very useful when we
wish to see how two comparable data sets agree with each
other. In this case, an identity line, i.e., a y = x line, or a
1:1 line, is often drawn as a reference. The more the two
data sets agree, the more the scatters tend to concentrate in
the vicinity of the identity line; if the two data sets are
numerically identical, the scatters fall on the identity line
exactly.
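The line of best fit mentioned above can be computed by ordinary least squares; a minimal sketch with hypothetical paired measurements:

```python
# Hypothetical data: x = control (independent) variable, y = measured response.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least-squares slope and intercept for the trendline:
# slope = sum((x - mx)(y - my)) / sum((x - mx)^2)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sxx = sum((x - mean_x) ** 2 for x in xs)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

print(f"best fit: y = {slope:.2f}x + {intercept:.2f}")
```

A positive slope corresponds to the lower-left-to-upper-right pattern of dots described above; the dots' scatter around the fitted line indicates how strong the correlation is.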

5. Ishikawa diagram
Ishikawa diagrams (also called fishbone diagrams,
herringbone diagrams, cause-and-effect diagrams, or
Fishikawa) are causal diagrams created by Kaoru
Ishikawa (1968) that show the causes of a specific event.
[1][2] Common uses of the Ishikawa diagram are product
design and quality defect prevention, to identify potential
factors causing an overall effect. Each cause or reason for
imperfection is a source of variation. Causes are usually
grouped into major categories to identify these sources of
variation. The categories typically include:

- People: anyone involved with the process
- Methods: how the process is performed and the specific requirements for doing it, such as policies, procedures, rules, regulations and laws
- Machines: any equipment, computers, tools, etc. required to accomplish the job
- Materials: raw materials, parts, pens, paper, etc. used to produce the final product
- Measurements: data generated from the process that are used to evaluate its quality
- Environment: the conditions, such as location, time, temperature and culture, in which the process operates
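A fishbone diagram is essentially a tree of causes grouped by category, so it can be captured in a plain data structure; in this sketch the effect and every cause are hypothetical examples:

```python
# The "head" of the fishbone is the effect under investigation;
# each "bone" is a category holding its candidate causes.
fishbone = {
    "effect": "high defect rate on line 3",  # hypothetical effect
    "causes": {
        "People": ["insufficient training", "operator fatigue"],
        "Methods": ["outdated work instructions"],
        "Machines": ["worn cutting tool"],
        "Materials": ["inconsistent raw stock"],
        "Measurements": ["gauge out of calibration"],
        "Environment": ["temperature fluctuation"],
    },
}

# Walk the structure the way a team would read the diagram.
print(fishbone["effect"])
for category, causes in fishbone["causes"].items():
    for cause in causes:
        print(f"  {category}: {cause}")
```

Keeping the diagram as data like this makes it easy to add causes during a brainstorming session and to export the result for a root-cause report.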

6. Histogram method

A histogram is a graphical representation of the
distribution of data. It is an estimate of the probability
distribution of a continuous variable (quantitative
variable) and was first introduced by Karl Pearson.[1] To
construct a histogram, the first step is to "bin" the range of
values -- that is, divide the entire range of values into a
series of small intervals -- and then count how many
values fall into each interval. A rectangle is drawn with
height proportional to the count and width equal to the bin
size, so that rectangles abut each other. A histogram may
also be normalized displaying relative frequencies. It then
shows the proportion of cases that fall into each of several
categories, with the sum of the heights equaling 1. The
bins are usually specified as consecutive, non-overlapping
intervals of a variable. The bins (intervals) must be
adjacent, and usually equal size.[2] The rectangles of a
histogram are drawn so that they touch each other to
indicate that the original variable is continuous.[3]
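The binning procedure described above can be sketched directly; the measurements, the number of bins and the equal-width binning rule here are illustrative choices:

```python
# Hypothetical measurements of a continuous quality characteristic.
values = [4.2, 4.7, 5.0, 5.3, 5.5, 5.6, 5.9, 6.1, 6.4, 7.8]

# Divide the entire range into adjacent, non-overlapping,
# equal-width intervals (bins).
n_bins = 4
lo, hi = min(values), max(values)
width = (hi - lo) / n_bins

# Count how many values fall into each interval.
counts = [0] * n_bins
for v in values:
    # Clamp so the maximum value falls into the last bin.
    i = min(int((v - lo) / width), n_bins - 1)
    counts[i] += 1

# Each row stands in for one rectangle of the histogram.
for i, c in enumerate(counts):
    left = lo + i * width
    print(f"[{left:.2f}, {left + width:.2f})  {'#' * c}  ({c})")
```

Dividing each count by the total number of values would give the normalized form, in which the bar heights sum to 1.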
