
Int J Adv Manuf Technol (2006) 30: 141–146

DOI 10.1007/s00170-005-0023-z

ORIGINAL ARTICLE

Hsin-Hung Wu . Jiunn-I Shieh

Using a Markov chain model in quality function deployment to analyse customer requirements

Received: 27 October 2004 / Accepted: 2 March 2005 / Published online: 1 October 2005
© Springer-Verlag London Limited 2005

Abstract This study applies a Markov chain model in quality function deployment to analyse customer requirements from a probability viewpoint. In reality, both the initial and transition probabilities can be computed from customer surveys by asking about customers' past and present choices. For a decision maker, each customer requirement can be analysed as time goes by, and the changes in each technical measure can be closely examined from time to time. Moreover, when new customer surveys are conducted and become available, customer requirements and technical measures can be updated on a timely basis to reflect and fulfil dynamic customer needs. Therefore, the proposed approach enables a decision maker to analyse and then satisfy both present and future customer needs early on, such that a better strategy can be made based upon the most updated customer surveys.
Keywords Customer requirement · Markov chains · Quality function deployment · Technical measure · Transition probability

1 Introduction
Meeting or exceeding customer requirements to increase
customer satisfaction is the ultimate target of total quality
management [1]. In order to successfully implement the
philosophy to achieve higher customer satisfaction, an
organization should relentlessly improve the processes and
products by a variety of methods and tools [2].
H.-H. Wu (*)
Department of Business Administration,
National Changhua University of Education,
No. 2 Shida Road,
Changhua City, Changhua, 500 Taiwan, ROC
e-mail: hhwu@cc.ncue.edu.tw
Tel.: +886-4-7232105
J.-I. Shieh
Department of Information Science and Applications,
Asia University,
Taichung, Taiwan

Quality function deployment (QFD) is considered one of the very effective quality systems tools that can be used to fulfil customer requirements and improve customer satisfaction [3–6]. Generally speaking, QFD is an overall concept that provides a means of translating customer requirements into appropriate engineering characteristics or technical measures for each stage of product development and production [7, 8].
The most commonly seen QFD is a four-phase model,
which consists of product planning (also known as house of
quality (HOQ)), parts deployment, process planning and
production planning [9, 10]. In reality, most organizations
only adopt HOQ, depicted in Fig. 1, in product planning. In
fact, HOQ links the voice of the customer to technical
measures through which detailed processes and production
plans can be developed in the other phases of QFD [9]. The
foundation of HOQ is that products should be designed to meet customer needs, so the marketing department, design engineers and manufacturing staff should work closely together from the time the product is first conceived [8, 11].
For more information about HOQ and QFD, please refer to
Refs. [3, 7, 9, 11] and [12].
To provide customer-oriented products, listening to the voice of the customer is critical. In general, the voice of the customer may come from a variety of sources, such as surveys, focus groups, interviews and trade shows, to name a few [13, 14]. The traditional approach to collecting customer requirements concentrates on present customer needs, but Shen et al. [15] and Wu et al. [16] have concluded that analysing future customer needs is critical to an organization's long-term competitiveness, since customer needs are dynamic and may vary drastically from time to time. Besides, predicting future customer requirements early on could help organizations provide better products, possibly delight customers, and, eventually, increase customer satisfaction.
Fig. 1 The house of quality

Shen et al. have developed a fuzzy trend analysis to analyse future customer needs by monitoring the importance trend of each customer requirement [15]. However, the study only uses five categories, i.e. significant increase of importance, moderate increase of importance, status quo (small increase or decrease of importance), moderate decrease of importance and significant decrease of importance, to classify the importance of customer needs. Moreover, there was no further discussion on how the proposed approach could be implemented for the rest of the HOQ. On the other hand, Wu et al. have applied grey theory to analyse customer needs in the house of quality when only a very small amount of data is available [16]. The basic assumption is that the gathered information should be deterministic and precise enough such that the analyses of customer requirements and technical measures become much more meaningful.
In this study, a Markov chain model is used in the house of quality to monitor the trend of each customer requirement, as well as of each technical measure, from a probability viewpoint. The advantage is that the information gathered on a daily basis is typically uncertain, so a Markov chain model may be more appropriate for analysing customer needs and tracking the importance trends of technical measures from a probability viewpoint. More importantly, when the most updated information becomes available, the importance of both customer requirements and technical measures can be further adjusted based on the new information to better reflect dynamic customer needs.
This paper is organised as follows: Section 2 provides a short review of the Markov chain model. In Section 3, an example illustrates how the proposed approach works in analysing both customer requirements and technical measures. Finally, conclusions are summarised in Section 4.

2 A Markov chain model


Markov process models, widely applied in areas such as market share analysis, customer loyalty, inventory, water resources, university enrolment prediction and machine breakdowns, are useful in studying the evolution of systems over repeated trials, which are often successive time periods where the state of the system in any particular period is uncertain [17–20]. Markov process models assume that the system starts in an initial state or condition, but that this initial state or condition will change over time. Predicting these future states involves knowing the system's likelihood or probability of changing from one state to another. Transition probabilities, presented in a matrix, are used to describe the manner in which the system makes transitions from one period to the next. The matrix of transition probabilities is a matrix of conditional probabilities of being in a future state given a current state [18]. A Markov chain model, a special case of Markov process models, is used to study the short- and long-run behaviour of certain stochastic systems [19].
To describe a Markov chain model, the following statements are summarised from Anderson, Sweeney and Williams [17], Render and Stair [18], and Taha [19]. Let a finite set $S = \{E_j \mid j = 1, 2, \ldots, m\}$ represent the exhaustive and mutually exclusive states of a system at any time. Initially, at time $t_0$, the system may be in any of these states. Let $a_j^{(0)}$ be the absolute probability that the system is in state $E_j$ at time $t_0$. If the system is Markovian, then define

$$p_{ij} = P\{X_{t_n} = j \mid X_{t_{n-1}} = i\}, \qquad (1)$$

where $p_{ij}$ is the transition probability of going from state $i$ at time $t_{n-1}$ to state $j$ at time $t_n$, and these probabilities are assumed to be stationary over time. In addition, $X_{t_n}$ and $X_{t_{n-1}}$ in Eq. 1 are random variables. The transition probabilities from state $E_i$ to state $E_j$ can be further expressed in matrix form:

$$P = \begin{bmatrix} p_{00} & p_{01} & p_{02} & p_{03} & \cdots \\ p_{10} & p_{11} & p_{12} & p_{13} & \cdots \\ p_{20} & p_{21} & p_{22} & p_{23} & \cdots \\ p_{30} & p_{31} & p_{32} & p_{33} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}, \qquad (2)$$

where the individual $p_{ij}$ values are usually determined empirically, and $\sum_j p_{ij} = 1$, since all entries in the $P$ matrix are nonnegative and the entries in each row must sum to 1. That is, a Markov chain model is defined by a transition matrix $P$ together with the initial probabilities $a_j^{(0)}$ associated with the states $E_j$ [19].
To further compute the state probabilities of the system after a specified number of transitions, let $a_j^{(n)}$ be the state probability of the system after $n$ transitions at time $t_n$. The general expression of $a_j^{(n)}$ in terms of $a_j^{(0)}$ and $P$ is as follows [19]:

$$a_j^{(1)} = a_1^{(0)} p_{1j} + a_2^{(0)} p_{2j} + a_3^{(0)} p_{3j} + \ldots = \sum_i a_i^{(0)} p_{ij}. \qquad (3)$$
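As a brief illustration of Eq. 3 (a hypothetical two-state chain with invented numbers, not the paper's example), suppose $a^{(0)} = (0.7, 0.3)$, $p_{11} = 0.9$, $p_{12} = 0.1$, $p_{21} = 0.2$ and $p_{22} = 0.8$. Then

$$a_1^{(1)} = a_1^{(0)} p_{11} + a_2^{(0)} p_{21} = 0.7 \times 0.9 + 0.3 \times 0.2 = 0.69, \qquad a_2^{(1)} = 0.7 \times 0.1 + 0.3 \times 0.8 = 0.31,$$

and the two state probabilities still sum to 1.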

Table 1 The needed information for each customer requirement

     Expected  Initial        Conditional probabilities
     weight    probability
CR1  4.2       P(H)=0.6       P(H|H)=0.6  P(M|H)=0.4  P(L|H)=0
               P(M)=0.4       P(H|M)=0.3  P(M|M)=0.6  P(L|M)=0.1
               P(L)=0         P(H|L)=0    P(M|L)=0.2  P(L|L)=0.8
CR2  4.2       P(H)=0.7       P(H|H)=0.8  P(M|H)=0.1  P(L|H)=0.1
               P(M)=0.2       P(H|M)=0.6  P(M|M)=0.3  P(L|M)=0.1
               P(L)=0.1       P(H|L)=0.3  P(M|L)=0.6  P(L|L)=0.1
CR3  3.0       P(H)=0.3       P(H|H)=0.4  P(M|H)=0.3  P(L|H)=0.3
               P(M)=0.4       P(H|M)=0.3  P(M|M)=0.4  P(L|M)=0.3
               P(L)=0.3       P(H|L)=0.3  P(M|L)=0.3  P(L|L)=0.4
CR4  2.0       P(H)=0.1       P(H|H)=0    P(M|H)=0.2  P(L|H)=0.8
               P(M)=0.3       P(H|M)=0.1  P(M|M)=0.6  P(L|M)=0.3
               P(L)=0.6       P(H|L)=0    P(M|L)=0.1  P(L|L)=0.9
CR5  2.0       P(H)=0         P(H|H)=0    P(M|H)=0.5  P(L|H)=0.5
               P(M)=0.5       P(H|M)=0.3  P(M|M)=0.4  P(L|M)=0.3
               P(L)=0.5       P(H|L)=0.2  P(M|L)=0.4  P(L|L)=0.4

Also,

$$a_j^{(2)} = \sum_i a_i^{(1)} p_{ij} = \sum_i \left( \sum_k a_k^{(0)} p_{ki} \right) p_{ij} = \sum_k a_k^{(0)} \left( \sum_i p_{ki} p_{ij} \right) = \sum_k a_k^{(0)} p_{kj}^{(2)}, \qquad (4)$$

where $p_{kj}^{(2)} = \sum_i p_{ki} p_{ij}$ is the transition probability after two transitions. Finally, the transition probability after $n$ transitions, $p_{kj}^{(n)}$, can be presented by the recursive formula [19]:

$$p_{kj}^{(n)} = \sum_i p_{ki}^{(n-1)} p_{ij}. \qquad (5)$$
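A minimal computational sketch of Eqs. 3–5: starting from the initial probabilities $a^{(0)}$ and repeatedly multiplying by $P$ gives the state probabilities, and hence the expected weight, after $n$ transitions. The sketch below uses Python with NumPy and, for concreteness, borrows the CR1 data introduced later in Sect. 3; it illustrates the formulas and is not code from the paper.

```python
import numpy as np

# Transition matrix P and initial probabilities a(0) for CR1 (Table 1 / Sect. 3)
P = np.array([[0.6, 0.4, 0.0],   # from H to (H, M, L)
              [0.3, 0.6, 0.1],   # from M to (H, M, L)
              [0.0, 0.2, 0.8]])  # from L to (H, M, L)
a0 = np.array([0.6, 0.4, 0.0])   # initial probabilities of (H, M, L)
weights = np.array([5, 3, 1])    # weights attached to H, M, L

a = a0
for n in range(1, 6):
    a = a @ P                    # Eqs. 3-4: a(n) = a(n-1) P
    print(f"{n}-step: state probabilities {a.round(4)}, "
          f"expected weight {a @ weights:.3f}")
```

Running this reproduces the CR1 row of Table 2 (3.88, 3.704, 3.586, 3.499, 3.433). Equivalently, $P^{(n)} = P^n$ from Eq. 5 could be formed once with `np.linalg.matrix_power(P, n)` and applied to $a^{(0)}$.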

It is not possible to make long-run behaviour predictions with all transition probability matrices, but long-run behaviour predictions are possible for a large set of transition probability matrices. Such behaviour predictions are always possible if the transition probability matrix is regular. A transition probability matrix is regular if some power of the matrix contains all positive entries. A Markov chain model is a regular Markov chain model if its transition probability matrix is regular. Note that for a finite regular Markov chain model with transition probability matrix $P$, there exists a probability vector $V$ such that $VP = V$. We also have that the steady-state transition probability matrix has all rows identical (these identical rows have the same entries as the vector $V$). The vector $V$ gives the long-run behaviour trend of the Markov chain model. In our study, a finite regular Markov chain model will be illustrated, such that the vector $V$ can be solved from a system of linear equations.
Let $V = [v_1 \; v_2 \; v_3]$ be the probability vector and

$$P = \begin{bmatrix} p_{11} & p_{12} & p_{13} \\ p_{21} & p_{22} & p_{23} \\ p_{31} & p_{32} & p_{33} \end{bmatrix}$$

the transition probability matrix. We want to find $V$ such that $VP = V$, or

$$\begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix} \begin{bmatrix} p_{11} & p_{12} & p_{13} \\ p_{21} & p_{22} & p_{23} \\ p_{31} & p_{32} & p_{33} \end{bmatrix} = \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix}. \qquad (6)$$

Using matrix multiplication, Eq. 6 becomes
$$\begin{bmatrix} v_1 p_{11} + v_2 p_{21} + v_3 p_{31} & v_1 p_{12} + v_2 p_{22} + v_3 p_{32} & v_1 p_{13} + v_2 p_{23} + v_3 p_{33} \end{bmatrix} = \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix}. \qquad (7)$$

In addition, Eq. 7 can be displayed as the following system:

$$\begin{cases} v_1 (p_{11} - 1) + v_2 p_{21} + v_3 p_{31} = 0 \\ v_1 p_{12} + v_2 (p_{22} - 1) + v_3 p_{32} = 0 \\ v_1 p_{13} + v_2 p_{23} + v_3 (p_{33} - 1) = 0 \end{cases} \qquad (8)$$

Table 2 The expected weights for all of the customer requirements

     1-step  2-step  3-step  4-step  5-step  ...  Steady-state
CR1  3.88    3.704   3.586   3.499   3.433   ...  3.222
CR2  4.22    4.224   4.225   4.225   4.225   ...  4.225
CR3  3.00    3.000   3.000   3.000   3.000   ...  3.000
CR4  1.64    1.570   1.531   1.511   1.501   ...  1.4857
CR5  2.80    2.610   2.656   2.645   2.648   ...  2.647

Fig. 2 The trends for each customer requirement

Note that, by the definition of a transition probability matrix, we have $p_{11} + p_{12} + p_{13} = 1$, $p_{21} + p_{22} + p_{23} = 1$ and $p_{31} + p_{32} + p_{33} = 1$. It is easy to see that the last equation of Eq. 8 is simply the sum of the first two equations multiplied by $-1$, so we drop this equation. By the definition of a probability vector, we have $v_1 + v_2 + v_3 = 1$. To find $v_1$, $v_2$ and $v_3$, solve the following system:

$$\begin{cases} v_1 (p_{11} - 1) + v_2 p_{21} + v_3 p_{31} = 0 \\ v_1 p_{12} + v_2 (p_{22} - 1) + v_3 p_{32} = 0 \\ v_1 + v_2 + v_3 = 1 \end{cases} \qquad (9)$$

Using the Gauss–Jordan method, we obtain the reduced system

$$\begin{bmatrix} 1 & 0 & 0 & d_1 \\ 0 & 1 & 0 & d_2 \\ 0 & 0 & 1 & d_3 \end{bmatrix}.$$

Thus, $v_1 = d_1$, $v_2 = d_2$ and $v_3 = d_3$, and the steady-state vector is $V = (d_1 \; d_2 \; d_3)$. Moreover, we also have the steady-state transition probability matrix

$$\begin{bmatrix} d_1 & d_2 & d_3 \\ d_1 & d_2 & d_3 \\ d_1 & d_2 & d_3 \end{bmatrix}.$$
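As a computational sketch of Eq. 9 (an illustration, not code from the paper), the steady-state vector can be obtained by replacing one dependent equation of $VP = V$ with the normalisation $v_1 + v_2 + v_3 = 1$ and solving the resulting linear system, which is what the Gauss–Jordan reduction above does. The snippet below uses NumPy and the CR1 transition matrix from Sect. 3 as an example.

```python
import numpy as np

# CR1 transition matrix (Sect. 3); rows correspond to the current state H, M, L
P = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.6, 0.1],
              [0.0, 0.2, 0.8]])

# Build the system of Eq. 9: (P^T - I) v = 0 with the last row replaced by sum(v) = 1
A = P.T - np.eye(3)
A[-1, :] = 1.0
b = np.array([0.0, 0.0, 1.0])

v = np.linalg.solve(A, b)         # steady-state probabilities, approx. (0.333, 0.444, 0.222)
print(v)                          # i.e. (3/9, 4/9, 2/9) for CR1
print(v @ np.array([5, 3, 1]))    # steady-state expected weight, about 3.222
```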

3 An illustrated example
To illustrate how a Markov chain model can be applied in
analysing customer requirements, assume that there are five
customer requirements, denoted as CR, and five technical
measures, denoted as TM, in quality function deployment.
Suppose a group of customers has been asked, through surveys, to evaluate the importance of each CR and how each CR would change in the near future. To simplify the illustration, assume the weight for each CR can be classified into high (H) with a weight of 5, medium (M) with a weight of 3, and low (L) with a weight of 1, along with respective probabilities, where the sum of these three probabilities should be equal to 1. For each CR, the expected weight can be computed as (5 × probability of H + 3 × probability of M + 1 × probability of L). Moreover, specific details for each CR should be evaluated. For instance, when the importance of a CR is H, the probabilities that the importance of this CR in the next period will become H, M and L can be computed from surveys. The assumed needed information is provided in Table 1.
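A minimal sketch of this expected-weight calculation, using the initial probabilities assumed in Table 1 (an illustration only):

```python
# Expected weight = 5*P(H) + 3*P(M) + 1*P(L), with the initial probabilities of Table 1
initial = {
    "CR1": (0.6, 0.4, 0.0),
    "CR2": (0.7, 0.2, 0.1),
    "CR3": (0.3, 0.4, 0.3),
    "CR4": (0.1, 0.3, 0.6),
    "CR5": (0.0, 0.5, 0.5),
}
for cr, (ph, pm, pl) in initial.items():
    print(cr, 5 * ph + 3 * pm + 1 * pl)   # 4.2, 4.2, 3.0, 2.0, 2.0
```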
In practice, we can fit a Markov chain model from customers' questionnaires by asking for their past (last time) choices (three choices: H, M and L) and their present choices (three choices: H, M and L). The initial probabilities then come from the past choices in the customers' questionnaires, whereas the transition probabilities (or conditional probabilities) come from both the past and present choices in the customers' questionnaires. Each CR has its own initial probability.
Mathematically speaking, assume a finite set $S = \{H, M, L\}$ of states, where state 1, state 2 and state 3 are H, M and L, respectively. Assign to each pair $(i, j) \in S^2$ of states a real number $p_{ij}$, and to each state $i \in S$ a number $\pi_i$, the initial probability of that state, such that the following properties hold:

$$p_{ij} \geq 0 \quad \text{for all } (i, j) \in S^2,$$
$$\sum_{j \in S} p_{ij} = 1 \quad \text{for all } i \in S,$$

and

$$\pi_i \geq 0 \quad \text{and} \quad \sum_{i \in S} \pi_i = 1.$$

Table 3 The computations of the expected values of technical measures

                     Expected weight  TM1    TM2    TM3    TM4    TM5
CR1                  3.88                    3      3             1
CR2                  4.22             9             3      3      1
CR3                  3.00                    3             3      1
CR4                  1.64             1      1             1
CR5                  2.80                           9             3
Expected importance                   39.62  22.28  49.50  23.30  19.50

For example, if $\pi = (\pi_1, \pi_2, \pi_3)$ is the initial probability vector for CR1, then $\pi_1$ is the initial probability of state 1 for CR1, $\pi_2$ is the initial probability of state 2 for CR1, and $\pi_3$ is the initial probability of state 3 for CR1. Assume that $n_i$ is the number of times that the choice is in state $i$, and $n_{ij}$ is the number of transitions from state $i$ to state $j$; then estimate $p_{ij}$ by $\hat{p}_{ij} = n_{ij} / n_i$ and $\pi_i$ by $\hat{\pi}_i = n_i / \sum_i n_i$.
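A minimal sketch of these estimators, assuming each respondent reports a (past, present) pair of choices; the responses below are invented for illustration and are not the paper's data:

```python
from collections import Counter

# Hypothetical (past, present) responses for one CR; "H", "M", "L" are the three states
responses = [("H", "H"), ("H", "M"), ("M", "M"), ("M", "H"),
             ("M", "M"), ("L", "L"), ("L", "M"), ("H", "H")]

n_i = Counter(past for past, _ in responses)    # times the past choice was state i
n_ij = Counter(responses)                       # transitions observed from state i to state j
total = sum(n_i.values())

pi_hat = {i: n_i[i] / total for i in "HML"}     # estimated initial probabilities
p_hat = {(i, j): n_ij[(i, j)] / n_i[i] if n_i[i] else 0.0
         for i in "HML" for j in "HML"}         # estimated transition probabilities

print(pi_hat)
print(p_hat[("H", "M")])   # estimated P(M|H)
```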

Table 4 The numerical results for each TM in different time periods

     1-step  2-step  3-step  4-step  5-step  ...  Steady-state
TM1  39.62   39.59   39.56   39.54   39.53   ...  39.51
TM2  22.28   21.68   21.29   21.01   20.80   ...  20.15
TM3  49.50   47.27   47.34   46.98   46.81   ...  46.16
TM4  23.30   23.24   23.21   23.19   23.18   ...  23.16
TM5  19.50   18.76   18.78   18.66   18.60   ...  18.39

For convenience, we introduce the terminology of n-step transition probabilities as follows: the conditional probability that a Markov chain model with stationary transition probabilities will be in state $j$ at time $t+n$, given that it is in state $i$ at time $t$, is called the n-step transition probability and is denoted $p_{ij}^{(n)}$, i.e. $p_{ij}^{(n)} = P\{X_{t+n} = j \mid X_t = i\}$. The matrix of these n-step transition probabilities is denoted as $P^{(n)}$.
For CR1, the transition probabilities are

$$P = \begin{bmatrix} 0.6 & 0.4 & 0.0 \\ 0.3 & 0.6 & 0.1 \\ 0.0 & 0.2 & 0.8 \end{bmatrix},$$

where, for instance, $p_{12} = 0.4$ represents the 1-step transition probability of going from H to M, and these probabilities are assumed to be stationary over time. If the decision maker is interested in knowing how the expected weight (EW) of CR1 would change in the next few periods, the computations, based upon Anderson, Sweeney and Williams [17], are as follows:

$$\text{EW of CR1 (1-step)} = \begin{bmatrix} 0.6 & 0.4 & 0 \end{bmatrix} \begin{bmatrix} 0.6 & 0.4 & 0.0 \\ 0.3 & 0.6 & 0.1 \\ 0.0 & 0.2 & 0.8 \end{bmatrix} \begin{bmatrix} 5 \\ 3 \\ 1 \end{bmatrix} = 3.88,$$

$$\text{EW of CR1 (2-step)} = \begin{bmatrix} 0.6 & 0.4 & 0 \end{bmatrix} \begin{bmatrix} 0.6 & 0.4 & 0.0 \\ 0.3 & 0.6 & 0.1 \\ 0.0 & 0.2 & 0.8 \end{bmatrix}^2 \begin{bmatrix} 5 \\ 3 \\ 1 \end{bmatrix} = 3.704,$$

$$\text{EW of CR1 (3-step)} = \begin{bmatrix} 0.6 & 0.4 & 0 \end{bmatrix} \begin{bmatrix} 0.6 & 0.4 & 0.0 \\ 0.3 & 0.6 & 0.1 \\ 0.0 & 0.2 & 0.8 \end{bmatrix}^3 \begin{bmatrix} 5 \\ 3 \\ 1 \end{bmatrix} = 3.586,$$

$$\text{EW of CR1 (4-step)} = \begin{bmatrix} 0.6 & 0.4 & 0 \end{bmatrix} \begin{bmatrix} 0.6 & 0.4 & 0.0 \\ 0.3 & 0.6 & 0.1 \\ 0.0 & 0.2 & 0.8 \end{bmatrix}^4 \begin{bmatrix} 5 \\ 3 \\ 1 \end{bmatrix} = 3.499,$$

and

$$\text{EW of CR1 (5-step)} = \begin{bmatrix} 0.6 & 0.4 & 0 \end{bmatrix} \begin{bmatrix} 0.6 & 0.4 & 0.0 \\ 0.3 & 0.6 & 0.1 \\ 0.0 & 0.2 & 0.8 \end{bmatrix}^5 \begin{bmatrix} 5 \\ 3 \\ 1 \end{bmatrix} = 3.433.$$

Finally, the expected weight of CR1 under the steady-state transition probabilities, applying Eq. 9 along with the Gauss–Jordan method, becomes 5(3/9) + 3(4/9) + 1(2/9) = 3.222. By a similar procedure, the expected weights for the rest of the customer requirements are summarised in Table 2. The trends for each CR are provided in Fig. 2.

Fig. 3 The trends for each technical measure

When the expected weights and trends of these five customer requirements in the different time periods are known, the next step is to compute the expected values of


the technical measures for the different time periods. For instance, the expected weights of these five customer requirements at 1-step are placed in the expected weight column depicted in Table 3. Assume the relationship between each CR and TM is known, as given in Table 3. Then, the expected importance of each TM can be computed directly by an arithmetic approach. By a similar approach, the expected importance of each TM in the different time periods can be calculated, and the numerical results are summarised in Table 4. The trends for each technical measure in the different time periods are depicted in Fig. 3.
Clearly, using a Markov chain model in quality function deployment to analyse future customer requirements is practical. For instance, both Table 2 and Fig. 2 provide valuable information for a decision maker to understand how each customer requirement would change as time goes by from a probability viewpoint. Moreover, Table 4 and Fig. 3 provide similar information for a decision maker to monitor how each technical measure would change from time to time. A major advantage of applying a Markov chain model here is that the classification of the weight for each CR could be adjusted to become more elaborate to meet real-world needs. The trends of both customer requirements and technical measures can be plotted and traced by a decision maker to satisfy customer needs in the present and future time periods. More importantly, if new customer surveys are conducted to obtain the most updated information, new transition probabilities will be available. Therefore, customer needs can be further analysed and adjusted based upon the new information.
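A computational sketch of this step (an illustration, not code from the paper): the expected importance of each TM is the relationship-weighted sum of the CR expected weights, using the CR–TM relationship values shown in Table 3 (as reconstructed here) and the CR expected weights of Table 2 for the chosen time period.

```python
import numpy as np

# CR-TM relationship matrix from Table 3 (rows CR1..CR5, columns TM1..TM5; 0 = no relationship)
R = np.array([[0, 3, 3, 0, 1],
              [9, 0, 3, 3, 1],
              [0, 3, 0, 3, 1],
              [1, 1, 0, 1, 0],
              [0, 0, 9, 0, 3]])

# 1-step expected weights of CR1..CR5 (Table 2)
ew_1step = np.array([3.88, 4.22, 3.00, 1.64, 2.80])

print(ew_1step @ R)   # expected importances of TM1..TM5: 39.62, 22.28, 49.5, 23.3, 19.5
```

Repeating this with the n-step and steady-state expected weights reproduces the remaining columns of Table 4.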

4 Conclusions
This study has integrated a Markov chain model into quality function deployment to analyse future customer requirements from a probability viewpoint. In reality, the needed probabilities, i.e. both the initial and conditional probabilities, can be computed from customer surveys by asking about customers' past and present choices. For a decision maker, each customer requirement can be analysed as time goes by. Moreover, a decision maker can closely examine how each technical measure would change from time to time. Furthermore, when new customer surveys are conducted and become available, customer requirements and technical measures can be updated to reflect and fulfil dynamic customer needs. As a result, the proposed approach enables a decision maker to analyse and then satisfy both present and future customer needs early on, such that a better strategy can be made based upon the most updated customer surveys.

Acknowledgements This study was supported in part by the National Science Council in Taiwan under grant number NSC 94-2213-E-018-008.

References
1. Kondo Y (2001) Customer satisfaction: how can I measure it? Total Qual Manag Bus Excell 12(7–8):867–872
2. Chen H-K, Chen H-Y, Wu H-H, Lin W-T (2004) TQM implementation in a healthcare and pharmaceutical logistics organization: the case of Zuellig Pharma in Taiwan. Total Qual Manag Bus Excell 15(9–10):1171–1178
3. Chan LK, Wu ML (2002) Quality function deployment: a literature review. Eur J Oper Res 143:463–497
4. Wu H-H (2002) Implementing grey relational analysis in quality function deployment to strengthen multiple attribute decision making processes. J Qual 9(2):19–39
5. Kuo T-C, Wu H-H (2003) Green products development by applying grey relational analysis and green quality function deployment. Int J Fuzzy Syst 5(4):229–238
6. Wu H-H (2002–03) A comparative study of using grey relational analysis in multiple attribute decision making problems. Qual Eng 15(2):209–217
7. Sullivan LP (1986) Quality function deployment. Qual Prog 19(6):39–50
8. Tan KC, Shen XX (2000) Integrating Kano's model in the planning matrix of quality function deployment. Total Qual Manag 11(8):1141–1151
9. Chan LK, Wu ML (2002–03) Quality function deployment: a comprehensive review of its concepts and methods. Qual Eng 15(1):23–35
10. Han CH, Kim JK, Choi SH (2004) Prioritizing engineering characteristics in quality function deployment with incomplete information: a linear partial ordering approach. Int J Prod Econ 91:235–249
11. Hauser JR, Clausing D (1988) The house of quality. Harvard Bus Rev 66(3):63–73
12. Akao Y (1990) Quality function deployment: integrating customer requirements into product design. Productivity Press, Cambridge, MA
13. Griffin A, Hauser J (1993) Voice of the customer. Mark Sci 12:1–27
14. Gryna FM (2001) Quality planning and analysis: from product development through use, 4th edn. McGraw-Hill, New York
15. Shen X-X, Xie M, Tan K-C (2001) Listening to the future voice of the customer using fuzzy trend analysis in QFD. Qual Eng 13(3):419–425
16. Wu H-H, Liao AYH, Wang PC (2005) Using grey theory in quality function deployment to analyse dynamic customer requirements. Int J Adv Manuf Technol 25:1241–1247
17. Anderson DR, Sweeney DJ, Williams TA (2003) An introduction to management science: quantitative approaches to decision making, 10th edn. Thomson, Andover, UK
18. Render B, Stair RM (2000) Quantitative analysis for management, 7th edn. Prentice-Hall, New York
19. Taha HA (1997) Operations research: an introduction, 6th edn. Prentice-Hall, New York
20. Winston WL (1994) Operations research: applications and algorithms, 3rd edn. Duxbury Press, Pacific Grove, CA
