
ITIL quality management

In this file, you can find useful information about ITIL quality management, such as ITIL quality
management forms, tools for ITIL quality management, and ITIL quality management strategies. If you
need more assistance with ITIL quality management, please leave your comment at the end of the file.
Other useful material for ITIL quality management:
qualitymanagement123.com/23-free-ebooks-for-quality-management
qualitymanagement123.com/185-free-quality-management-forms
qualitymanagement123.com/free-98-ISO-9001-templates-and-forms
qualitymanagement123.com/top-84-quality-management-KPIs
qualitymanagement123.com/top-18-quality-management-job-descriptions
qualitymanagement123.com/86-quality-management-interview-questions-and-answers

I. Contents of ITIL quality management


==================
The Theory of Constraints (TOC), as developed by Eli Goldratt, is a very powerful conceptual as
well as practical tool of which every Business Analyst should be aware. Many MBA programs
recommend reading The Goal (authored by Goldratt) and cover the core elements of
TOC in Operations Management modules.
Very simplistically, TOC is about identifying the core bottleneck in a system and then
eliminating that bottleneck (much as a chain is only as strong as its weakest link). The
bottleneck will then be somewhere else in the system, so the process of identification and
elimination continues.
ITIL and TOC
In relation to Quality Management and ITIL, the ITIL books refer primarily to the Deming
Cycle (Plan, Do, Check, Act) for process improvement (ITIL version 3 also describes a 7-Step
Improvement Process). TOC is conceptually very powerful and complements the Deming Cycle.
The power of TOC is its simplicity in illustrating that there is always a bottleneck (constraint) in
any system. The challenge is to identify the primary bottleneck and eliminate it; the bottleneck
then moves elsewhere, and it is through this repeated removal of constraints that the ITIL
processes mature. TOC therefore works very well together with the Deming Cycle, in that TOC
assists in identifying the bottleneck whilst the Deming Cycle assists in eliminating it (we could
also add Lewin's Unfreeze-Change-Refreeze model, as we are unfreezing a stable state,
effecting the change, and then moving to the next constraint).
As an example, suppose that we are continuously failing a Service Level under which the Service Desk
should respond to all emails within ten minutes, even though we employ two people
specifically to respond to emails. We analyse our business process and discover that 40% of emails
are received between 12 p.m. and 2 p.m. each day, and that these are the hours when our two
email responders take a lunch break. Our constraint is in the supply of services: demand
exceeds supply during those two hours. We eliminate the bottleneck by changing the responders'
lunch break hours. Now we are free to look at the next bottleneck (please note that this is a
very simplistic example).
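
A sketch of that analysis in code: the snippet below tallies hypothetical email arrival times by
hour to show where demand peaks. The timestamps are invented for illustration.

```python
from collections import Counter
from datetime import datetime

# Hypothetical arrival timestamps pulled from a Service Desk mailbox log.
emails = [
    "2015-06-01 09:12", "2015-06-01 12:05", "2015-06-01 12:40",
    "2015-06-01 13:15", "2015-06-01 13:50", "2015-06-01 16:30",
]

# Tally arrivals per hour of day to see where demand peaks.
arrivals = Counter(datetime.strptime(ts, "%Y-%m-%d %H:%M").hour for ts in emails)

total = sum(arrivals.values())
for hour in sorted(arrivals):
    print(f"{hour:02d}:00  {arrivals[hour] / total:.0%} of emails")
# A spike over 12:00-14:00 with both responders at lunch marks the constraint.
```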
External Article
Below is an article offering TOC as part of an ongoing Process Improvement initiative. The
article may provide some benefit; however, it doesn't articulate well how TOC may be used for
process improvement (I believe that the article was originally authored by Pink Elephant):
A system or a process cannot be more efficient than its limiting factor!
In The Goal, Eli Goldratt presents the Theory of Constraints (TOC). TOC introduces primary
measurements for the analysis of systems based on productivity and, ultimately, profit. The core
truth of TOC is that every system or process has at least one constraint or bottleneck, and that the
identification of this constraint should be the focus for any improvement activity.
TOC advocates that organizations take a three-dimensional view of three core business concepts:
Inventory, Operating Expense, and Throughput. To relate these financial terms to IT, one needs to
expand the definitions beyond their traditional concepts.
Inventory: All of the money, investment, outstanding issues, pending changes, unresolved
incidents, excess capacity, etc. that an organization has tied up in an un-sellable, unfinished,
unresolved, undeliverable, or pending state.

Pre-Process Inventory: stuff that is currently waiting in a queue in a raw or input state, e.g.
calls that are waiting in the ACD system or emails that have not been answered by the
Service Desk.

Active Inventory: stuff that is currently within the system or process and is being
transformed into a desired or sellable output state, e.g. Change Management
records that are currently being assessed, authorized and scheduled.

Post-Process Inventory: stuff that has been successfully transformed into a desired output
but has not been delivered to a client, sold, confirmed resolved, or generated profit, e.g.
the Service Desk's feedback calls to users to confirm that a resolved incident can be
permanently closed.

In TOC, the concept of Inventory contradicts the conventional balance sheet definition of
Inventory as an asset and redefines inventory as a liability.
Operating Expense: All of the money, time, energy, thought, resources, overtime, etc. tied up in
the process of converting raw data or inventory into the output of the process.
Throughput: Defined as the speed at which inventory is moved through the end-to-end process,
and delivered to the customer in order to realize the goal of profit, resolution, deployment, etc.
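
To make the mapping concrete, here is a minimal sketch that sorts hypothetical Service Desk
tickets into the three inventory states. The ticket IDs and state names are invented for
illustration and are not part of TOC or ITIL themselves.

```python
# Hypothetical Service Desk tickets as (id, state) pairs; the state
# names are invented to map onto TOC's three inventory stages.
tickets = [
    ("INC001", "queued"),       # pre-process: waiting, untouched
    ("INC002", "in_progress"),  # active: currently being transformed
    ("INC003", "resolved"),     # post-process: done, not yet confirmed closed
    ("INC004", "queued"),
]

inventory = {"pre_process": [], "active": [], "post_process": []}
stage_of = {"queued": "pre_process", "in_progress": "active", "resolved": "post_process"}

for ticket_id, state in tickets:
    inventory[stage_of[state]].append(ticket_id)

for stage, items in inventory.items():
    print(f"{stage}: {len(items)} ticket(s) {items}")
```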
Goldratt observes that these three core principles are inseparably linked and that a change in any
one of these three dimensions will automatically result in a proportionate change in the others.
The perspective taken by TOC is that the biggest gains are realized by increasing throughput.
However, to increase throughput the bottlenecks to the process need to be identified and
eliminated.
Question: What occurs when you remove a bottleneck?
Answer: Another bottleneck appears elsewhere in the process.
Result: The identification of the next area for improvement to increase throughput, and the cycle
of continuous improvement continues.
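
That cycle can be sketched as a simple loop: measure every stage, treat the stage with the
deepest queue as the constraint, improve it, and re-measure. The stage names and queue depths
below are invented for illustration.

```python
# Hypothetical queue depths (items waiting) at each process stage.
queues = {"logging": 4, "diagnosis": 19, "resolution": 7, "closure": 2}

def find_bottleneck(queues):
    """The constraint is the stage with the deepest queue."""
    return max(queues, key=queues.get)

for improvement_round in range(3):
    stage = find_bottleneck(queues)
    print(f"round {improvement_round + 1}: constraint is '{stage}' "
          f"({queues[stage]} waiting)")
    # Halving the queue stands in for a real improvement (re-rostering
    # staff, automation, etc.); the bottleneck then moves elsewhere.
    queues[stage] //= 2
```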

Conclusion
In conclusion, Goldratt's Theory of Constraints places a practical tool in the hands of individuals
involved in the ongoing management and improvement of business processes.
==================

II. Quality management tools

1. Check sheet
The check sheet is a form (document) used to collect data
in real time at the location where the data is generated.
The data it captures can be quantitative or qualitative.
When the information is quantitative, the check sheet is
sometimes called a tally sheet.
The defining characteristic of a check sheet is that data
are recorded by making marks ("checks") on it. A typical
check sheet is divided into regions, and marks made in
different regions have different significance. Data are
read by observing the location and number of marks on
the sheet.
Check sheets typically employ a heading that answers the
Five Ws:

Who filled out the check sheet
What was collected (what each check represents, an identifying batch or lot number)
Where the collection took place (facility, room, apparatus)
When the collection took place (hour, shift, day of the week)
Why the data were collected
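
Because a check sheet is just marks tallied in regions under a Five Ws heading, it can be
mimicked in a few lines of code; all values below are invented for illustration.

```python
from collections import Counter

# Five Ws heading for the sheet (all values invented for illustration).
heading = {
    "who":   "A. Smith",
    "what":  "paint defects, batch 42",
    "where": "finishing line, bay 3",
    "when":  "Monday, morning shift",
    "why":   "weekly defect review",
}

# Each observation adds one 'check' mark to its region of the sheet.
observations = ["scratch", "run", "scratch", "bubble", "scratch", "run"]
tally = Counter(observations)

for field, value in heading.items():
    print(f"{field:6s} {value}")
for defect, count in tally.most_common():
    print(f"{defect:8s} {'|' * count} ({count})")
```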

2. Control chart

Control charts, also known as Shewhart charts (after Walter A. Shewhart) or process-behavior
charts, are tools used in statistical process control to determine whether a manufacturing or
business process is in a state of statistical control.
If analysis of the control chart indicates that the
process is currently under control (i.e., is stable,
with variation only coming from sources common
to the process), then no corrections or changes to
process control parameters are needed or desired.
In addition, data from the process can be used to
predict the future performance of the process. If
the chart indicates that the monitored process is
not in control, analysis of the chart can help
determine the sources of variation, as this will
result in degraded process performance.[1] A
process that is stable but operating outside of
desired (specification) limits (e.g., scrap rates
may be in statistical control but above desired
limits) needs to be improved through a deliberate
effort to understand the causes of current
performance and fundamentally improve the
process.
The control chart is one of the seven basic tools of
quality control.[3] Typically, control charts are used
for time-series data, though they can also be used for
data that have logical comparability (i.e. you want to
compare samples that were all taken at the same time,
or the performance of different individuals); however,
the type of chart used for this requires consideration.
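
As a rough sketch of the arithmetic behind an individuals-type control chart: the centre line is
the mean, and sigma is estimated from the average moving range divided by the standard d2 factor
of 1.128 for ranges of two. The data are invented, and a real implementation would add further
run rules.

```python
# Hypothetical measurements, one per time period.
samples = [10.1, 9.8, 10.3, 10.0, 9.9, 12.9, 10.2, 9.7, 10.1, 10.0]

centre = sum(samples) / len(samples)

# Estimate sigma from the average moving range divided by d2 = 1.128,
# the usual approach for an individuals (I-MR) chart.
moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
sigma = sum(moving_ranges) / len(moving_ranges) / 1.128

ucl, lcl = centre + 3 * sigma, centre - 3 * sigma
print(f"centre {centre:.2f}, control limits [{lcl:.2f}, {ucl:.2f}]")

for i, x in enumerate(samples):
    if not lcl <= x <= ucl:
        print(f"point {i} ({x}) is outside the limits: look for a special cause")
```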

3. Pareto chart

A Pareto chart, named after Vilfredo Pareto, is a type
of chart that contains both bars and a line graph, where
individual values are represented in descending order
by bars, and the cumulative total is represented by the
line.
The left vertical axis is the frequency of occurrence,
but it can alternatively represent cost or another
important unit of measure. The right vertical axis is
the cumulative percentage of the total number of
occurrences, total cost, or total of the particular unit of
measure. Because the reasons are in decreasing order,
the cumulative function is concave. In the original
worked example (late arrivals broken down by reason),
solving the first three issues is sufficient to lower the
number of late arrivals by 78%.
The purpose of the Pareto chart is to highlight the
most important among a (typically large) set of
factors. In quality control, it often represents the most
common sources of defects, the highest occurring type
of defect, or the most frequent reasons for customer
complaints, and so on. Wilkinson (2006) devised an
algorithm for producing statistically based acceptance
limits (similar to confidence intervals) for each bar in
the Pareto chart.
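
A minimal matplotlib sketch of such a chart, with invented defect counts, might look like this:
descending bars on the left axis and the cumulative percentage line on the right.

```python
import matplotlib.pyplot as plt

# Invented defect counts by cause.
causes = {"misrouting": 42, "bad data": 27, "no training": 15,
          "hardware": 9, "other": 7}

# Bars in descending order; the line carries the cumulative percentage.
items = sorted(causes.items(), key=lambda kv: kv[1], reverse=True)
labels = [name for name, _ in items]
counts = [count for _, count in items]
total = sum(counts)

cumulative, running = [], 0
for count in counts:
    running += count
    cumulative.append(100 * running / total)

fig, ax = plt.subplots()
ax.bar(labels, counts)                       # left axis: frequency
ax.set_ylabel("frequency")
ax2 = ax.twinx()                             # right axis: cumulative %
ax2.plot(labels, cumulative, marker="o", color="black")
ax2.set_ylabel("cumulative %")
ax2.set_ylim(0, 100)
plt.show()
```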

4. Scatter plot method

A scatter plot, scatterplot, or scattergraph is a type of
mathematical diagram using Cartesian coordinates to
display values for two variables for a set of data.
The data are displayed as a collection of points, each
having the value of one variable determining the position
on the horizontal axis and the value of the other variable
determining the position on the vertical axis.[2] This kind
of plot is also called a scatter chart, scattergram, scatter
diagram,[3] or scatter graph.
A scatter plot is used when a variable exists that is under
the control of the experimenter. If a parameter exists that
is systematically incremented and/or decremented by the
other, it is called the control parameter or independent
variable and is customarily plotted along the horizontal
axis. The measured or dependent variable is customarily
plotted along the vertical axis. If no dependent variable
exists, either type of variable can be plotted on either axis
and a scatter plot will illustrate only the degree of
correlation (not causation) between two variables.
A scatter plot can suggest various kinds of correlations
between variables with a certain confidence interval. For
example, with weight and height, weight would be on the
x-axis and height on the y-axis. Correlations may be
positive (rising), negative (falling), or null (uncorrelated).
If the pattern of dots slopes from lower left to upper right,
it suggests a positive correlation between the variables
being studied. If the pattern of dots slopes from upper left
to lower right, it suggests a negative correlation. A line of
best fit (alternatively called 'trendline') can be drawn in
order to study the correlation between the variables. An
equation for the correlation between the variables can be
determined by established best-fit procedures. For a linear
correlation, the best-fit procedure is known as linear
regression and is guaranteed to generate a correct solution
in a finite time. No universal best-fit procedure is
guaranteed to generate a correct solution for arbitrary
relationships. A scatter plot is also very useful when we
wish to see how two comparable data sets agree with each
other. In this case, an identity line, i.e., a y = x line or a
1:1 line, is often drawn as a reference. The more the two
data sets agree, the more the scatters tend to concentrate in
the vicinity of the identity line; if the two data sets are
numerically identical, the scatters fall on the identity line
exactly.
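
As a short sketch with invented data, the snippet below plots the points and draws a
least-squares trendline via numpy.polyfit, as described above for linear correlation.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented paired observations: weight (x) against height (y).
x = np.array([55, 60, 62, 70, 75, 80, 85, 90])
y = np.array([155, 160, 158, 170, 172, 178, 180, 185])

# Least-squares linear regression gives the trendline's slope and intercept.
slope, intercept = np.polyfit(x, y, 1)

plt.scatter(x, y)
plt.plot(x, slope * x + intercept, color="red")  # line of best fit
plt.xlabel("weight")
plt.ylabel("height")
plt.show()
```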

5. Ishikawa diagram
Ishikawa diagrams (also called fishbone diagrams,
herringbone diagrams, cause-and-effect diagrams, or
Fishikawa) are causal diagrams created by Kaoru
Ishikawa (1968) that show the causes of a specific event.
[1][2] Common uses of the Ishikawa diagram are product
design and quality defect prevention, to identify potential
factors causing an overall effect. Each cause or reason for
imperfection is a source of variation. Causes are usually
grouped into major categories to identify these sources of
variation. The categories typically include
People: Anyone involved with the process
Methods: How the process is performed and the
specific requirements for doing it, such as policies,
procedures, rules, regulations and laws
Machines: Any equipment, computers, tools, etc.
required to accomplish the job
Materials: Raw materials, parts, pens, paper, etc.
used to produce the final product
Measurements: Data generated from the process
that are used to evaluate its quality
Environment: The conditions, such as location,
time, temperature, and culture in which the process
operates
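
Before drawing the diagram, the causes can be prototyped as a simple mapping from category to
candidate causes; the effect and causes below are invented for illustration.

```python
# Hypothetical causes grouped under the standard fishbone categories.
effect = "late email responses"
fishbone = {
    "People":       ["responders at lunch during peak"],
    "Methods":      ["no triage procedure for bulk mail"],
    "Machines":     ["mail client crashes under load"],
    "Materials":    ["templates out of date"],
    "Measurements": ["response time not tracked per hour"],
    "Environment":  ["midday demand spike"],
}

print(f"Effect: {effect}")
for category, causes in fishbone.items():
    print(f"  {category}:")
    for cause in causes:
        print(f"    - {cause}")
```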

6. Histogram method

A histogram is a graphical representation of the
distribution of data. It is an estimate of the probability
distribution of a continuous variable (quantitative
variable) and was first introduced by Karl Pearson.[1] To
construct a histogram, the first step is to "bin" the range of
values, that is, divide the entire range of values into a
series of small intervals, and then count how many
values fall into each interval. A rectangle is drawn with
height proportional to the count and width equal to the bin
size, so that the rectangles abut each other. A histogram may
also be normalized to display relative frequencies. It then
shows the proportion of cases that fall into each of several
categories, with the sum of the heights equaling 1. The
bins are usually specified as consecutive, non-overlapping
intervals of a variable. The bins (intervals) must be
adjacent and are usually of equal size.[2] The rectangles of a
histogram are drawn so that they touch each other to
indicate that the original variable is continuous.[3]
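
The binning and counting step described above is a one-liner with numpy; the values and bin
count below are invented for illustration.

```python
import numpy as np

# Invented continuous measurements.
values = np.array([2.1, 2.4, 2.5, 2.7, 3.0, 3.1, 3.1, 3.4, 3.8, 4.2])

# Divide the range into four equal-width, adjacent bins and count per bin.
counts, edges = np.histogram(values, bins=4)
for count, lo, hi in zip(counts, edges[:-1], edges[1:]):
    print(f"[{lo:.2f}, {hi:.2f})  {'#' * count}  ({count})")

# Normalized variant: with density=True the bar areas sum to 1.
density, _ = np.histogram(values, bins=4, density=True)
```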

III. Other topics related to ITIL quality management (pdf download)
quality management systems
quality management courses
quality management tools
iso 9001 quality management system
quality management process
quality management system example
quality system management
quality management techniques
quality management standards
quality management policy
quality management strategy
quality management books
