Title: Data Envelopment Analysis: Theory and Applications
Edited by: Rajiv Banker, Ali Emrouznejad, Ana Lúcia Miranda Lopes, Mariana
Rodrigues de Almeida
EDITED BY:
Rajiv Banker
Fox School of Business and Management
Temple University
Philadelphia, PA 19121
USA
Ali Emrouznejad
Aston Business School
Aston University
Birmingham B4 7ET
UK
December 2012
ISBN: 978 185449 437 5
PREFACE: A MESSAGE FROM THE LOCAL ORGANIZERS
Dear conference participants,
It is a great honor and pleasure for us from UFMG and UFRN to welcome you to the 10th International Conference on Data Envelopment Analysis (DEA2012) in Natal, Brazil.
The main themes of the DEA2012 conference are Energy and Regulation and Health Performance Management. In total, one workshop on Energy and Regulation of Energy Companies, five panels, 28 sessions for paper presentations and two poster sessions will be available for the participants. You will find papers about applications in agriculture, banking, logistics, education, energy regulation, health, information technology, supply chain management, transportation, tourism and sports. Sessions about economic issues in modeling, algorithms, price and allocative efficiency, and sustainability are also presented. One hundred and seventy (170) people from 25 different countries have registered for this Conference. The participation of Brazilian academics is substantial (54.6%). We have the pleasure of receiving people from energy distribution and transmission companies who are interested in discussing the use of the DEA methodology in the Brazilian energy regulation model. Foreign and Brazilian consulting companies are also registered.
We would like to take this opportunity to express our sincere thanks to the board of iDEAs, which accepted our proposal in Thessaloniki, Greece, in August 2011, granting Brazil the opportunity of bringing this prestigious Conference to South America for the first time. We are grateful to the stream organizers and the members of the DEA in practice and scientific conference committees. We warmly thank Professor Rajiv Banker and Professor Ali Emrouznejad for their priceless dedication and hard work in setting up this Conference.
We wish you all an interesting and enjoyable time in Natal, Brazil.
Mariana Almeida
Universidade Federal do Rio Grande do Norte, UFRN
TABLE OF CONTENTS
1. A genetic algorithm approach to efficiency assessments with common weights
Valiakos Athanasios
8. Data Envelopment Analysis of the efficiency frontier for the results achieved by Formula 1 drivers and teams
Prof. Dr. Aparecido Jorge Jubran
Profa. Msc. Laura Martinson Provasi Jubran
José Rubens Moura Martins
Jane Leite Silva
9. Data Envelopment Analysis Type Linear and Goal Programming Models for Measuring Energy Efficiency Performance of OECD Countries
Hasan BAL
Mehmet Guray UNSAL
18. Integration of BSC, DEA and Game Theory in the performance of public health service
Marco Aurélio Reis dos Santos
Fernando Augusto Silva Marins
Valerio A. P. Salomon
21. Maximal Allocated Benefit and Minimal Allocated Cost and its Application
Mozhgan Mansouri Kaleibar
Sahand Daneshvar
31. Technical efficiency of Burkina Faso primary public health care centers
Oumarou Hebie
Simon Tiendrebeogo
Séni Kouanda
Abdel Latef Anouze
34. Theory of robust optimization in overall profit efficiency with data uncertainty
N. Aghayi
M.A. Raayatpanah
Abstract
The common weights approach is one of the most prominent methods to further
prioritize the subset of DEA efficient units. This approach can be modeled as a multi-
objective problem, where one seeks for a common set of weights that locates the
efficiency ratio of each unit as close as possible to the target score of 1. In such a
setting, different metrics can be applied to measure the distance of the efficiency ratios
from target, such as the L1, L2 and L∞. When L1 and L2 metrics are used the models
derived are non-linear. In case of the L∞ metric, the problem can be heuristically
solved by the bisection method and a series of linear programs. We investigate in this
paper the ability of genetic algorithms to solve the problem for estimating efficiency
scores, by using an evolutionary optimization method based on a variant of the Non-
dominated Sorted Genetic Algorithm.
Keywords: Data envelopment analysis, Common weights analysis (CWA), Genetic
algorithms, evolutionary optimization
Introduction
In DEA efficiency assessments, the weights for inputs and outputs are estimated to the
best advantage for each unit, so as to maximize its relative efficiency. Basically, DEA
provides a categorical classification of the units into efficient and inefficient ones.
However, although DEA is strong in identifying the inefficient units it is weak in
discriminating among the efficient units. The basic DEA model often rates too many
units as efficient. This is a commonly recognized problem of DEA, which becomes
more intense when the number of units is relatively small with respect to the total
number of inputs and outputs. Further discrimination among the efficient units is an
issue that has attracted considerable attention in the DEA literature (Angulo-Meza et al. 2002). The common weights approach is one of the most prominent methods to
further prioritize the subset of DEA efficient units. This approach can be modeled as a
multi-objective problem, where one seeks for a common set of weights that locates
the efficiency ratio of each unit as close as possible to the target score of 1. In such a
setting, different metrics can be applied to measure the distance of the efficiency ratios
from target, such as the L1, L2 and L∞. When L1 and L2 metrics are used the models
derived are non-linear. In case of the L∞ metric, the problem can be heuristically
solved by the bisection method and a series of linear programs. We investigate in this paper the ability of genetic algorithms to solve the problem for estimating efficiency scores, using an evolutionary optimization method based on a variant of the Non-dominated Sorted Genetic Algorithm.
The multi-objective common-weights model is

max { h_1(u, v), …, h_n(u, v) }    (1)

s.t.  u_r ≥ ε, v_i ≥ ε  ∀ r, i

where h_j(u, v) = Σ_{r=1}^{s} u_r y_rj / Σ_{i=1}^{m} v_i x_ij is the efficiency ratio of DMU j, j = 1, …, n.
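As a concrete illustration of these ratios, the snippet below evaluates h_j for a fixed common weight set; the data and weights are made up for this sketch and are not from the paper:

```python
import numpy as np

# Hypothetical data: 4 DMUs, 2 inputs (columns of X), 1 output (column of Y).
X = np.array([[2.0, 1.0], [3.0, 2.0], [4.0, 1.0], [5.0, 3.0]])
Y = np.array([[4.0], [6.0], [5.0], [8.0]])

def efficiency_ratios(u, v, X, Y):
    """h_j(u, v) = (sum_r u_r y_rj) / (sum_i v_i x_ij) for every DMU j."""
    return (Y @ u) / (X @ v)

u = np.array([1.0])        # common output weights (illustrative values)
v = np.array([1.0, 1.0])   # common input weights (illustrative values)
h = efficiency_ratios(u, v, X, Y)
print(np.round(h, 3))
```

With arbitrary weights the ratios are not normalized, so values above 1 can occur; model (1) is what drives every h_j toward the target score of 1 under a single shared weight set.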
Therefore, let d be the minimum distance which optimizes all efficiency scores for each DMU. Since the ideal score of each DMU_k is 1, the distance of h_k from 1 is minimized under different metrics.
The metrics
Within the family of LP metrics, L1 and L∞ metrics are of particular interest in the field
of Multi-Objective Linear Programming. This is because they are the only LP metrics
that result in linear scalar problems, when minimizing a distance of the frontier to a
reference point. The L∞ metric can be calculated from the following equation,

||d||_∞ := min max_{k=1,…,n} (1 − h_k)    (2)

The Chebychev norm is a distance function in which the absolute value of the largest coordinate difference between two points dominates. The L1 metric can be calculated by the following equation,
||d||_1 := min Σ_{k=1}^{n} (1 − h_k)    (3)
The Manhattan norm represents the shortest distance in unit steps along each axis between two points. The case of L2 is calculated from,
||d||_2 := min Σ_{k=1}^{n} (1 − h_k)²    (4)
The Euclidean norm between two points is the length of the straight line between the
two points and it is by far the most commonly used norm.
Therefore, if the L∞ metric is used in Global DEA, only the largest factor difference is taken into account (thus leading to the most balanced solution between the achievements of the different factors).
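For a given vector of efficiency scores, the three distances can be evaluated side by side; the scores below are hypothetical values chosen for this sketch:

```python
import numpy as np

# Hypothetical efficiency scores h_k of n = 4 DMUs under some common weight set.
h = np.array([1.0, 0.95, 0.80, 0.60])
d = 1.0 - h                  # per-DMU deviation from the ideal score 1

l1   = d.sum()               # Manhattan: total deviation, as in Eq. (3)
l2   = (d ** 2).sum()        # squared Euclidean deviation, as in Eq. (4)
linf = d.max()               # Chebychev: worst single deviation
print(l1, l2, linf)
```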
For the case of L∞, the above model Eq. (1), although not linear, can be handled through bisection search (Despotis 2002) using the following equivalent form:

min z    (5)
s.t.
Σ_{r=1}^{s} u_r y_rj − Σ_{i=1}^{m} v_i x_ij ≤ 0,  j = 1, …, n
1 − (Σ_{r=1}^{s} u_r y_rj / Σ_{i=1}^{m} v_i x_ij + z) ≤ 0,  j = 1, …, n
u_r ≥ ε, v_i ≥ ε  ∀ r, i
z ≥ 0
||d||_1 = min Σ_{j=1}^{n} d_j    (6)
s.t.
Σ_{r=1}^{s} u_r y_rj / Σ_{i=1}^{m} v_i x_ij + d_j = 1,  j = 1, …, n
u_r ≥ ε, v_i ≥ ε  ∀ r, i
d_j ≥ 0,  j = 1, …, n
||d||_2 = min Σ_{j=1}^{n} d_j²    (7)
s.t.
Σ_{r=1}^{s} u_r y_rj / Σ_{i=1}^{m} v_i x_ij + d_j = 1,  j = 1, …, n
u_r ≥ ε, v_i ≥ ε  ∀ r, i
d_j ≥ 0,  j = 1, …, n
The above two models are non-linear.
GA DEA
Since Eq. (1) is not linear, it can be approached with Genetic Algorithms. Multi-
objective Evolutionary Algorithms (EAs) are GAs customized to solve multi-objective
problems by using specialized fitness functions. EAs, such as Non-dominated Sorted
Genetic Algorithm NSGA-II can be modified to find a set of multiple non-dominated
solutions in a single run.
In order to achieve
evolutionary multi-objective optimization, three different aspects must be considered:
Fitness Assignment, Diversity Mechanism and Elitism. Fitness Assignment is actually
the fitness function chosen. Diversity Mechanism is the way next population is
generated. Elitism is whether the best dominating solutions found so far survive to the
next generation. Therefore a controlled genetic algorithm, which is a variant of Non-
dominated Sorted Genetic Algorithm (NSGA-II), is proposed in this research. For
the fitness function, a Pareto ranking approach is utilized. The population is ranked according to a dominance rule. In order to maintain diversity, a crowding distance is employed, while elitism exists partially. An elitist GA always favours individuals with a better fitness value (rank). A controlled elitist GA also favours individuals that can help increase the diversity of the population even if they have a lower fitness value. The Genetic Algorithm approach examines each solution in a specific area. After that, a Pareto front is formed from the non-dominated solutions. The selection of the best
solution is a decision of the decision maker. Therefore, by applying the metrics, the
decision maker can select the best solution minimizing the distance from ideal value 1.
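The evolutionary loop can be sketched as a simple (μ+λ) scheme that minimizes the L∞ distance of the rescaled ratios from 1. This is a deliberately reduced stand-in for the controlled NSGA-II described above (single objective, no crowding distance), and all data and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2 DMUs, 2 inputs, 1 output (illustrative values).
X = np.array([[2.0, 1.0], [1.0, 2.0]])
Y = np.array([[1.0], [1.0]])

def fitness(w):
    """L-infinity distance of the efficiency ratios from 1 under common weights w."""
    u, v = w[:1], w[1:]
    h = (Y @ u) / (X @ v)
    h = h / h.max()              # rescale so the best ratio is exactly 1
    return np.max(1.0 - h)       # worst deviation from the ideal score

pop = rng.uniform(0.1, 1.0, size=(20, 3))                      # weight vectors [u, v1, v2]
for gen in range(100):
    children = pop * rng.lognormal(0.0, 0.1, size=pop.shape)   # multiplicative mutation
    merged = np.vstack([pop, children])
    scores = np.array([fitness(w) for w in merged])
    pop = merged[np.argsort(scores)[:20]]                      # (mu + lambda) survivor selection

best = fitness(pop[0])
print(round(best, 3))
```

For this toy data the optimum is v1 = v2, where both ratios coincide and the L∞ distance reaches 0; the loop converges toward it.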
Table 2 shows that the mean values are close to zero. The minimum distances under the different metrics are obtained with a single run of the genetic algorithm. It is also important to note that the range of the values is significantly low. A further inspection shows that the overall efficiency scores from the GA and the bisection LP (bLP) do not vary much.
Conclusions
In this paper, a genetic algorithm is presented to estimate the efficiency scores with
common weights. No prior research has been done on using genetic algorithms to solve multi-objective DEA. Different metrics are used as part of the algorithm in order to minimize the distance of all DMUs from the efficient value 1. The general fitness
framework utilizes the NSGA-II approach taking into consideration the LP metric.
A simulation is conducted in order to review the capability of the algorithm. The
results indicated that the genetic algorithm is capable of minimizing the distance from
optimum value 1. In other words, through optimization of the genetic algorithm there
is a set of common weights that minimizes the total distance. In scenarios with various numbers of inputs and outputs, the data have been analyzed and the metrics computed.
The results show that using Genetic Algorithms is a viable way to estimate efficiency scores with common weights, and it is in fact less complex than the nonlinear equivalent. In addition, the number of efficient units increases slightly since the genetic algorithm uses Pareto front optimization.
References
Angulo-Meza, L., Estellita Lins, M.P. (2002). Review of methods for increasing discrimination in data envelopment analysis. Annals of Operations Research 116(1–4): 225–242.
Deb, K., Pratap, A., Agarwal, S., Meyarivan, T. (2002). A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6(2): 182–197.
Despotis, D.K. (2002). Improving the discriminating power of DEA: focus on globally efficient units. Journal of the Operational Research Society 53: 314–323.
Kao, C., Hung, H.-T. (2005). Data envelopment analysis with common weights: the compromise solution approach. Journal of the Operational Research Society 56(10): 1196–1203.
Liu, F.-H.F., Hsuan Peng, H. (2008). Ranking of units on the DEA frontier with common weights. Computers and Operations Research 35(5): 1624–1637.
Wu, J., Liang, L., Yang, F. (2009). Determination of the weights for the ultimate cross efficiency using Shapley value in cooperative game. Expert Systems with Applications 36(1): 872–876.
Acknowledgements
This study is funded and supported by the Institute of National Funds of Greece, as one of the authors holds a financial scholarship.
Menegolla IA
Pan-American Health Organization – PAHO, Setor de Embaixadas Norte, Lote 19, CEP 70800-400 –
Federal District, Brazil Caixa Postal 08-729, 70312-970 – Brasilia, DF, Brasil,
ivonemenegolla@yahoo.com.br
Estellita Lins MP
Alberto Luiz Coimbra Post-Graduation and Engineering Research Institute, Operational Research,
COPPE/UFRJ, Technology Center, Block F, Room 103, University City/ Fundão Island, Rio de
Janeiro, Brazil, estellita@pep.ufrj.br
Abstract
The study develops an alternative measure of efficiency to assess the Brazilian
National Immunization Program, using an output-oriented, variable returns to scale (VRS) Data Envelopment Analysis (DEA) model, in order to congregate differing indicators into a unique index and to account for the differences among the federal units when comparing the twenty-six Brazilian states, which have diverse socioeconomic and urbanization scenarios. The NIP can be considered highly efficient in Brazil: the mean efficiency score for the 26 states was 93.7% (SD 5.4%), and sixteen states were considered efficient. To reach the frontier of best practices, each state and region could have an individual goal for vaccine homogeneity. The DEA technique evaluates homogeneity indicators for various vaccines in the same model, making it possible to construct an efficiency index for the “first year of life” immunization cycle.
Keywords: Data Envelopment Analysis, Health Services Assessment, Public Health
Policy, National Program of Immunization
Introduction
Immunization is the process whereby a person is made immune or resistant to an
infectious disease, typically by the administration of a vaccine. Vaccines stimulate the
body’s own immune system to protect the person against subsequent infection or
disease. Immunization is a proven tool for controlling and eliminating life-threatening infectious diseases and is estimated to avert between 2 and 3 million deaths each year. It is one of the most cost-effective health investments.
As there is no sense in trading off the usual goals for coverage and abandon rate, this paper intends to develop an alternative measure of efficiency to assess the Brazilian Immunization Program in 2010, using Data Envelopment Analysis (DEA) in order to congregate the differing homogeneity indicators into a unique index and to account for the different resource facilities observed when comparing the twenty-six Brazilian states that together give this country a continental dimension, with diverse socioeconomic and urbanization scenarios.
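The output-oriented VRS (BCC) envelopment program behind such an assessment can be sketched with a generic LP solver; the toy inputs and outputs below stand in for the immunization indicators (an illustrative sketch, not the study's data):

```python
import numpy as np
from scipy.optimize import linprog

def vrs_output_efficiency(X, Y, o):
    """Output-oriented BCC (VRS) envelopment score phi for DMU o.
    Variables: [phi, lambda_1..lambda_n]; maximize phi."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = -1.0                                     # linprog minimizes, so use -phi
    A_ub, b_ub = [], []
    for i in range(m):                              # inputs: sum_j lam_j x_ij <= x_io
        A_ub.append(np.concatenate([[0.0], X[:, i]])); b_ub.append(X[o, i])
    for r in range(s):                              # outputs: phi*y_ro - sum_j lam_j y_rj <= 0
        A_ub.append(np.concatenate([[Y[o, r]], -Y[:, r]])); b_ub.append(0.0)
    A_eq = np.array([np.concatenate([[0.0], np.ones(n)])])   # VRS: sum lambda = 1
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0.0, None)] * n)
    return res.x[0]

X = np.array([[1.0], [1.0], [1.0]])    # toy: equal inputs for 3 DMUs
Y = np.array([[2.0], [4.0], [3.0]])    # toy coverage-style outputs
print([round(vrs_output_efficiency(X, Y, o), 2) for o in range(3)])
```

φ ≥ 1 is the factor by which DMU o could radially expand its outputs; the efficiency score is often reported as 1/φ, so φ = 1 marks an efficient state.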
Table II – DEA Model Results: Scores, Immunization Goals and Benchmarks for
the NIP in Brazilian States
Hasan BAL
Gazi University, Science Faculty, Statistics Department, Ankara, TURKEY hasanbal@gazi.edu.tr
H. Hasan ORKCU
Gazi University, Science Faculty, Statistics Department, Ankara, TURKEY hhorkcu@gazi.edu.tr
Abstract
DEA has become a very popular method of performance measurement, but it still suffers from some shortcomings. One of these is the issue of having multiple optimal solutions for the weights of efficient DMUs. The cross efficiency evaluation, an extension of DEA, was proposed to avoid this problem. Lam (2010) also proposed a mixed-integer linear programming formulation based on linear discriminant analysis and the super efficiency method (the MILP model) to avoid multiple optimal weight solutions. In this study, we modify the MILP model to determine more suitable weight sets and evaluate the energy efficiency of OECD countries as an application of the proposed model.
Keywords: Data envelopment analysis, discriminant analysis, cross efficiency, MILP
model.
Introduction
DEA, first developed by Charnes et al. [3], seems to be the most popular method for measuring the efficiency of homogeneous decision making units, and it is widely used in operations research and management science.
The improvements made to the DEA technique have resulted in several new problems [2]: for example, unrealistic weight distributions, weak discrimination power, and multiple optimal solutions for the weights of efficient DMUs.
Having multiple optimal weight solutions greatly affects the consistency of weight-related computations, and the cross efficiency method is the most frequently studied remedy in the DEA literature. Sexton et al. [13] developed the cross efficiency method to rate the DMUs. Their technique made use of cross evaluation scores computed with respect to all DMUs and hence identified the best DMUs [1].
The cross efficiency of DMU j, evaluated with the weights (u_p, v_p) of DMU p, is

θ_{p,j} = Σ_{r=1}^{s} u_{r,p} y_rj / Σ_{i=1}^{m} v_{i,p} x_ij    (4)
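Given one weight set per DMU, the whole cross-efficiency matrix of Eq. (4) reduces to two matrix products; the weight sets below are made-up numbers rather than optimal CCR weights:

```python
import numpy as np

# Hypothetical weight sets of each DMU p (one row per DMU).
U = np.array([[1.0], [0.5]])   # output weights u_{r,p}
V = np.array([[0.5], [1.0]])   # input weights  v_{i,p}
X = np.array([[2.0], [1.0]])   # inputs  x_{ij} (one row per DMU j)
Y = np.array([[2.0], [1.0]])   # outputs y_{rj}

# theta[p, j] = (sum_r u_{r,p} y_{rj}) / (sum_i v_{i,p} x_{ij}), Eq. (4)
theta = (U @ Y.T) / (V @ X.T)
cross_eff = theta.mean(axis=0)   # average appraisal of DMU j over all evaluators p
print(theta)
print(cross_eff)
```

Averaging θ over the evaluators p gives each DMU's cross-efficiency score; implementations differ on whether the self-appraisal θ_{j,j} is included in the average.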
The constraints of the MILP model are:

s.t.
Σ_{r=1}^{s} u_r y_rj − Σ_{i=1}^{m} v_i x_ij + M z_j ≥ 0,  j ∈ E,
Σ_{r=1}^{s} u_r y_rj − Σ_{i=1}^{m} v_i x_ij − M z_j ≤ −ε,  j ∈ Ē,
Σ_{i=1}^{m} v_i x_ie = 1,    (5)
Σ_{r=1}^{s} u_r y_re ≥ h,
Σ_{r=1}^{s} u_r y_rj − h Σ_{i=1}^{m} v_i x_ij ≤ 0,  j = 1, …, n, j ≠ e,
z_j ∈ {0,1},  j = 1, …, n,
u_r ≥ 0, v_i ≥ 0,  r = 1, …, s, i = 1, …, m
where ε is a very small positive number, M is an extremely large positive number, E is the efficient set which contains all the efficient DMUs, and Ē contains all the inefficient DMUs. The value of h is predetermined. In the MILP model, the constraint
Σ_{r=1}^{s} u_r y_rj / Σ_{i=1}^{m} v_i x_ij ≤ h,  j = 1, …, n, j ≠ e    (7)

is obtained. Lam [7] proposed to determine h with the super efficiency method due to the difficulty of fractional programming problems. Since h > 1, it seems appropriate to determine h by the super efficiency method.
The constraint given in (8) was selected instead of this constraint in this study:

Σ_{r=1}^{s} u_r y_rj − Σ_{i=1}^{m} v_i x_ij ≤ h,  j = 1, …, n, j ≠ e    (8)

If the constraint given in (8) is rearranged (dividing by Σ_{i=1}^{m} v_i x_ij > 0),

Σ_{r=1}^{s} u_r y_rj / Σ_{i=1}^{m} v_i x_ij − Σ_{i=1}^{m} v_i x_ij / Σ_{i=1}^{m} v_i x_ij ≤ h / Σ_{i=1}^{m} v_i x_ij,  j = 1, …, n, j ≠ e    (9)

Σ_{r=1}^{s} u_r y_rj / Σ_{i=1}^{m} v_i x_ij ≤ 1 + h / Σ_{i=1}^{m} v_i x_ij,  j = 1, …, n, j ≠ e    (10)

is obtained.
Here, the value of h is treated as a variable, with h > 1. When the constraint given in (10) is examined, it can be seen that the efficiency ratio Σ_{r=1}^{s} u_r y_rj / Σ_{i=1}^{m} v_i x_ij remains below a bound greater than 1. In the MILP model, the efficiency ratio is likewise below the value of h (h > 1); that is to say, the efficiency ratio behaves similarly in the model that we propose. While our model determines the threshold value h for the efficiency ratio by itself, the value of h in the MILP model is predetermined with the super efficiency model. As h is a variable in our proposed model, it is not necessary to use the super efficiency model to determine it. Besides, our proposed model is not an integer linear program. The suggested model is given in equation (11):
s.t.
Σ_{r=1}^{s} u_r y_rj − Σ_{i=1}^{m} v_i x_ij + d_j ≥ 0,  j ∈ E,
Σ_{r=1}^{s} u_r y_rj − Σ_{i=1}^{m} v_i x_ij − d_j ≤ −ε,  j ∈ Ē,
Σ_{i=1}^{m} v_i x_ie = 1,    (11)
Σ_{r=1}^{s} u_r y_re ≥ h,
h ≥ 1,
d_j ≥ 0,  j = 1, …, n,
u_r ≥ 0, v_i ≥ 0,  r = 1, …, s, i = 1, …, m
where ε is a very small positive number, E is the efficient set which contains all efficient DMUs, and Ē contains all inefficient DMUs.
Application Study
In our application study on energy data of OECD countries, we consider gross domestic product and non-fossil fuel consumption as output variables; CO2 emission and fossil fuel consumption are the inputs, similarly to Ramanathan's study [14], as shown in Table 1. We obtained the data set from the International Energy Agency web page “http://www.eia.doe.gov”. The data belong to 2008.
[Table 2: efficiency scores (Eff.) and ranks of the OECD countries under each compared model, including the proposed model]
The proposed model produces far fewer zero weights than Lam’s MILP. As pointed
out by Cooper et al [16], it is better to use weight sets which have more balanced
virtual weights than extreme virtual weights in measuring the performance of the
DMUs in DEA. The results of the application study are listed in Tables 2–4. All of the models have significant and high correlation coefficients with each other, which suggests that these models are equally useful for ranking the DMUs. Furthermore, the proposed model is able to find the value of h by itself. Another significant advantage of our modified MILP model is that it can effectively reduce the number of zero weights for inputs and outputs, as seen in Table 3.
Conclusions
Lam [7] proposed a mixed-integer linear programming (MILP) formulation based on
linear discriminant analysis and super efficiency method. In this paper, we modify this
model to choose suitable weight sets to be used in cross efficiency evaluation. The
weight sets obtained from the proposed model are suitable for cross-evaluation
because they reflect the different strengths of the efficient DMUs. The proposed model
produces far fewer zero weights than Lam's MILP. The results are obtained from the energy consumption and CO2 emission efficiency of the OECD countries.
References
[1] T.R. Andersen, K.B. Hollingsworth, L.B. Inman, (2002) The Fixed Weighting Nature
of a Cross Evaluation Model, J. Prod. Anal., 18 (1) 249–255.
[2] H. Bal, H.H. Örkcü, S. Çelebioğlu, (2010) Improving the Discrimination Power and
Weight Dispersion in the Data Envelopment Analysis, Comput. Oper. Res., 37 (1) 99–
107.
[3] A. Charnes, W.W. Cooper, E. Rhodes, (1978) Measuring the Efficiency of Decision
Making Units, Eur. J. Oper. Res., 2 429–444.
[4] J.R. Doyle, R. Green, (1994) Efficiency and Cross Efficiency in Data Envelopment
Analysis: Derivatives, Meanings and Uses, J. Oper. Res. Soc., 45 (5) 567–578.
[5] R. Green, J.R. Doyle, W. Cook, (1996) Preference voting and project ranking using
DEA and cross-evaluation, Eur. J. Oper. Res., 90 (3) 461–472.
[7] K.F. Lam, (2010) In the determination weight sets to compute cross-efficiency ratios
in DEA, J. Oper. Res. Soc., 61 134–143.
[8] L. Liang, J. Wu, W.D. Cook, J. Zhu, (2008) Alternative secondary goals in DEA
cross-efficiency evaluation, Int. J. Prod. Eco., 113 (2) 1025–1030.
[9] W.M. Lu, S.F. Lo, (2007) A closer look at the economic-environmental disparities
for regional development in China, Eur. J. Oper. Res., 183 (2) 882–894.
[10] M. Oral, O. Kettani, P. Lang, (1991) A methodology for collective evaluation and
selection of industrial R&D projects, Manage. Sci., 37 (7) 871–885.
[11] H.H. Örkcu, H. Bal, (2011) Goal programming approaches for data envelopment
analysis cross efficiency evaluation, Appl. Math. Comput., 218 346-356.
[12] N. Ramón, J.L. Ruiz, I. Sirvent, (2010) On the choice of weights profiles in cross-
efficiency evaluations, Eur. J. Oper. Res., 207 (3) 1564–1572.
[13] T.R. Sexton, R.H. Silkman, A.J. Hogan, (1986) Data Envelopment Analysis:
Critique and Extension. In: Silkman R.H. (Ed.), Measuring Efficiency: An Assessment of
Data Envelopment Analysis, 32. Jossey-Bass, San Francisco 73-105.
[16] W.W. Cooper, J.L. Ruiz, I. Sirvent, (2007) Choosing Weights From Alternative Optimal Solutions of Dual Multiplier Models in DEA, Eur. J. Oper. Res., 180 443–458.
Rajiv Banker
Fox School of Business and Management – Temple University, banker@temple.edu
Abstract
Software development best practices claim to reduce IT project risks and deliver better, more efficient software products. Although widely accepted in industry, these techniques are the subject of intense debate in academia. Objective: This paper
presents a quantitative analysis of the impact of a set of software development
techniques on the technical efficiency of software projects. A study is conducted on
105 software development projects for efficiency analysis of effort, time elapsed,
productivity and defect density. The following software practices are evaluated: capability maturity models like CMMI, requirements elicitation techniques, design and architecture techniques, test techniques, project management techniques, business process management use and CASE tools adoption. Method: Benchmarking is performed using efficient frontier analysis with the input-oriented DEA BCC model. Efficiency scores among software project groups are compared using DEA-based hypothesis tests. Results: No single software engineering technique could explain an increase in overall performance in IT projects. Yet, we find some evidence that IT firms are more efficient than other firm types when delivering software projects. Conclusion: Results
emphasize the importance of recognizing that optimal management techniques
depend on the characteristics of the software development project, organization type
and its sociotechnical environment.
Keywords: Software Economics, DEA BCC, Information Systems, CMMI, Software
Architecture
Introduction
Information technology (IT) has been used for over 50 years as a plausible instrument
for increasing efficiency in firms’ business processes. Several software engineering best
practices such as capability maturity models, project management techniques,
requirement, design and test techniques are proposed instruments to reduce software
Related Work
A longitudinal study conducted since 1985, compiled the report Chaos Manifesto [2],
indicates that the majority of IT projects present efficiency problems regarding the
quality as perceived by its users, project deadlines and costs. In a meta-analysis of
investments in IT services Brynjolfsson [3] studied the IT productivity paradox, arguing
that the use of IT often does not generate the expected returns on investment. The
same author presents in the later work efficiency factors when information technology
is linked to the innovation of business processes [4]. Since the 70s, software costs
exceed hardware costs in IT in a range of up to 5:1 [5] [6]. Currently this ratio exceeds
10:1 and continues to grow. With respect to the unit of analysis for software projects,
several investigations have been conducted to determine factors that guide economic
efficiency. Factors related to product size, teams, technologies and construction
process are more accepted in the literature. These aspects are discussed in the classic
book The Mythical Man-Month [7] and its elements already indicate diseconomies of
scale. Early econometric models such as SLIM and COCOMO II [6] are seminal
references in determining these factors and also present evidence of diseconomies of scale for large projects.
While classical analyses suggest decreasing returns to scale for software projects, other authors present divergent arguments [8]. Banker and Kemerer [9] analysed several IT
project databases and noticed that some of them exhibit economies of scale. The same
authors in subsequent work [10] present similar evidence in the context of software
maintenance and other projects database. Over the past years, investigations by other
authors show evidence of varying scales in projects [11].
Academic software economics literature encompasses many quantitative methods of
analysis. Pioneering work in the 70s and 80s was based simply on multiple linear
regression methods. A systematic review of measurement and prediction of
productivity of software projects analyses the main quantitative methods in use in 38
Methods
This paper investigates the hypothesis that software development best practices lead to better project outcomes.
To investigate this hypothesis, it is necessary to understand the relationships of
efficiency in software projects and nature of returns to scale. Equation (1) is usually
used to establish the relative efficiency of software development projects from the
correlation between the functional size (lines of code implemented, function points,
tables, databases) and their effort in hours.
Effort = a · Size^b    (1)

If b < 1, there are increasing returns to scale; if b = 1, constant returns to scale; if b > 1, decreasing returns to scale.
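Since Eq. (1) is linear in log space, b can be estimated by ordinary least squares on log-transformed data; the project data below are synthetic, generated with a known exponent (an illustrative sketch, not the ISBSG sample):

```python
import numpy as np

# Synthetic project data generated with a known exponent b = 1.1 (assumption:
# purely illustrative, not real project data).
rng = np.random.default_rng(1)
size = rng.uniform(200, 2000, 40)                             # function points
effort = 5.0 * size ** 1.1 * rng.lognormal(0.0, 0.05, 40)     # hours, with noise

# ln(Effort) = ln(a) + b * ln(Size): fit b by least squares in log space.
b, log_a = np.polyfit(np.log(size), np.log(effort), 1)
scale = "decreasing" if b > 1 else ("increasing" if b < 1 else "constant")
print(round(b, 2), "returns to scale:", scale)
```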
Methods such as COCOMO II use this equation as basis. In COCOMO II, the a term is
a 14 factor composite. Although COCOMO II does not use modern software
development terminology, some of its attributes are linked to aspects of software
design attributes such as Application of software engineering methods, Software
Complexity or Required Software Reliability of the Product.
If we assume that terms a and b are known to a certain project database in a firm, it is
expected that similar size projects have similar efforts. In the presence of different
efforts we can compare the efficiency of these projects. Projects that present lower effort when compared to another project of similar size are the ones with better economic efficiency. However, examination of project results through a single
outcome (univariate analysis) such as effort in hours can be deceptive. Quality
measures, effort and time elapsed should be analysed together.
We understand that the efficiency of a project should be analysed from a set of
technical and managerial results (multivariate analysis). In the design of the
experiment we used four variables to analyse software project efficiency: Effort and Elapsed Time (inputs), and Functional Size and Defects (outputs).    (2)
When we compare two projects, ceteris paribus, reduced effort, reduced time elapsed, reduced defects and greater functional size suggest better efficiency.
The efficiency function in our experiment is not known, i.e., we make no assumption about the nature of this function (it is non-parametric). To compare diverse software projects in a problem with multiple inputs and outputs, we used the BCC Data Envelopment Analysis (DEA) model.
The experiment consisted of a comparative analysis of a set of software projects.
The database used was ISBSG Release v11, composed of 5052 software projects.
Projects were selected according to the following criteria:
• The data quality of the project is sound and has been formally evaluated
by experts from ISBSG (Data Quality Rating = A or B).
• Software project defects were reported.
• Functional size was counted using the Function Point technique as defined by
IFPUG (Count Approach = IFPUG), with project size in the range of 200 to 2000
function points.
• Development of new products (Development Type = New development).
Maintenance and reconstruction projects were not evaluated in this work.
One hundred and five (105) projects met these criteria and were used for efficiency
comparison.
Our DEA model has two inputs (Effort and Elapsed Time) and two outputs
(Functional Size and Defects). Since software defects are an undesirable output, we
modelled them using the reciprocal multiplicative approach [13]: the undesirable
output is transformed into a desirable one via f(u_ik) = 1/u_ik, where u_ik is an
element of the matrix U of undesirable outputs i of decision making unit k.
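A minimal sketch of this data-preparation step, with illustrative project records (not ISBSG data):

```python
# Build DEA input/output lists; defects, being undesirable, enter as 1/u
# (reciprocal multiplicative approach [13]).  The records are illustrative.
projects = [
    {"effort": 1200.0, "elapsed": 6.0, "size_fp": 400.0, "defects": 25.0},
    {"effort": 900.0,  "elapsed": 5.0, "size_fp": 350.0, "defects": 10.0},
]

inputs  = [[p["effort"], p["elapsed"]] for p in projects]          # to minimize
outputs = [[p["size_fp"], 1.0 / p["defects"]] for p in projects]   # to maximize
```

After this transformation, a project with fewer defects has a larger second output, so standard DEA treats low defect counts as desirable.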
In this work, we are interested in comparing a group of IT projects that used a
specific technique with a group of IT projects that did not. We use DEA-based
hypothesis tests for comparing two groups of decision making units, following [14],
and apply the modified t-test procedure for comparing the equality of means of the
two populations' random variables.
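As an illustration of the group comparison, the sketch below uses the standard Welch statistic rather than the modified procedure of [14], and the score lists are made up, not the paper's data:

```python
# Two-sample t statistic (Welch form) on DEA efficiency scores of two
# project groups.  The score lists are illustrative only.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for the difference in means of two samples."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / (va / len(a) + vb / len(b)) ** 0.5

group_with_technique    = [0.92, 0.85, 0.88, 0.95]
group_without_technique = [0.70, 0.78, 0.74, 0.81]
t = welch_t(group_with_technique, group_without_technique)
```

A large positive t suggests the technique group has higher mean efficiency; the modified procedure of [14] additionally accounts for the DEA-induced dependence between scores.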
Table 1: Technical efficiency scores (mean and standard deviation) by software
project technique.
Table 02 shows the technical efficiency scores breakdown for firm type.
Table 2: Technical efficiency scores (mean and standard deviation) by firm type.
Conclusions
Quantitative analysis of software projects can be better studied with multivariate
analysis methods such as DEA. The results emphasize that optimal management
techniques depend on the characteristics of the software development project, the
organization type, and its sociotechnical environment. IT companies presented better
results than non-IT companies, which suggests how the sociotechnical environment
can influence the technical performance of projects.
B. W. Boehm, Software Engineering Economics, 1st ed. Upper Saddle River, NJ, USA:
Prentice Hall PTR, 1981.
F. P. Brooks, Jr., The Mythical Man-Month: Essays on Software Engineering, 1st ed.
Boston, MA, USA: Addison-Wesley Longman Publishing Co., Inc., 1975.
[13] E. Gomes and M. Lins, “Modelling undesirable outputs with zero sum gains
data envelopment analysis models,” Journal of the Operational Research Society, pp.
616–623, 2008.
Acknowledgements
The authors would like to thank ISBSG for making the ISBSG R11 data available.
Mohsen Rostamy-Malkhalifeh
Department of Mathematics, Islamic Azad University, Science and Research Branch, Tehran, Iran
mohsen_rostamy@yahoo.com
Somayeh Mamizadeh
Department of Mathematics, Islamic Azad University, Central Tehran Branch, Tehran, Iran
somayeh_mamizadeh@yahoo.com
Abstract
Having moved into international markets, enterprises face the challenge of managing
Buyer-Supplier relationships, mainly because of rapid change in the business
environment, severe market competition, and customers' diverse demands. In some
enterprises we may encounter imprecise data such as interval, ordinal, and fuzzy
data. Moreover, for some Buyer-Suppliers there are indexes that cannot be decreased
or increased; these indexes are called non-controllable (or partially controllable). We
consider the evaluation of Buyer-Suppliers in which part of the inputs are non-
controllable variables, and we focus on the case when imprecise input-output data
are represented by intervals.
Keywords: Imprecise Data Envelopment Analysis, Performance evaluation, Buyer-
Supplier
Introduction
A customer-oriented organization aims to anticipate customer demand on the issues
most valuable to its customers; customer service is the cornerstone of an
organization's thinking and planning. In today's world, where quality is driven by
customers, customer focus is the foundation of all commercial activities. The
definitive impact of the supply chain on enterprise performance has been reported in
many industries. A supply chain consists of organizations working along the entire
chain, and the supply chain performance evaluation problem is one of the most
comprehensive strategic decision problems that must be considered for the long-term
efficient operation of the whole supply chain [2]. Our approach evaluates overall
performance in Buyer-Supplier relationships through the measurement of intensity
and effectiveness in a supply chain, using evaluation tools such as Imprecise Data
Envelopment Analysis (IDEA).
Methods
Assume that there are n supply chains (SCs), each producing s outputs by consuming
m inputs, where D denotes the set of controllable variables. The correspondences
between Q1 and (Q11, Q12, Q13, Q14) and between Q2 and (Q21, Q22, Q23, Q24) are
shown in Table 1.
In the chain X → Buyer-Supplier → Y, the evaluation model for supply chain q is
built from the following constraints:

if Q1 ≠ L/R:
  X_i^{Q11} λ ≤ θ x_iq^{Q12} ,  i ∈ D ;  X_i^{Q11} λ ≤ x_iq^{Q12} ,  i ∉ D
  Y_r^{Q13} λ ≥ y_rq^{Q14} ,  r = 1, ..., s
if Q1 = L/R:
  X_i^L λ ≤ θ x_iq^L ,  X_i^R λ ≤ θ x_iq^R ,  i ∈ D
  X_i^L λ ≤ x_iq^L ,  X_i^R λ ≤ x_iq^R ,  i ∉ D
  Y_r^L λ ≥ y_rq^L ,  Y_r^R λ ≥ y_rq^R ,  r = 1, ..., s
if Q2 ≠ L/R:
  X_i^{Q21} λ z_i^{1−} ≤ θ x_iq^{Q22} z_i^{1−} ,  i ∈ D
if Q2 = L/R:
  X_i^L λ z_i^{1−} ≤ θ x_iq^L z_i^{1−} ,  X_i^R λ z_i^{2−} ≤ θ x_iq^R z_i^{2−} ,  i ∈ D
  X_i^L λ z_i^{1−} ≤ x_iq^L z_i^{1−} ,  X_i^R λ z_i^{2−} ≤ x_iq^R z_i^{2−} ,  i ∉ D
  Y_r^L λ z_r^{2+} ≥ y_rq^L z_r^{2+} ,  Y_r^R λ z_r^{1+} ≥ y_rq^R z_r^{1+} ,  r = 1, ..., s
λ ≥ 0 ,  e^T λ = 1 ,  λ_q = 0
z_i^{1−}, z_i^{2−}, z_r^{1+}, z_r^{2+} ∈ {0, 1}.
Q1   Q11  Q12  Q13  Q14
N    R    L    L    R
L    L    L    R    R
R    R    R    L    L

Q2   Q21  Q22  Q23  Q24
N    L    R    R    L
L    L    L    R    R
R    R    R    L    L

Table 1: The correspondence between Qi and (Qi1, Qi2, Qi3, Qi4), i = 1, 2.
Conclusions
Today, enterprises have found that purchasing can be increasingly effective in raising
efficiency and effectiveness; buying practices have therefore changed, and enterprises
try to purchase in a manner that meets their strategic and purchasing objectives. To
accomplish this, suitable strategic suppliers must be sought and relationships with
them developed in order to attain competitive advantages. To achieve this goal, the
implementation of the supply chain is necessary.
References
Inuiguchi M., Mizoshita F., (2011) Qualitative and quantitative data envelopment
analysis with interval data, Annals of Operations Research, DOI 10.1007/s10479-011-
0988-y.
Yang F., Wu D., Liang L., Bi G., & Wu D.D., (2009) Supply chain DEA: Possibility set
and performance evaluation model, Annals of Operations Research, DOI
10.1007/s10479-008-0511-2.
Smriti Pande
smriti.pande@bimtech.ac.in
Abstract
The purpose of this study is to evaluate the performance of a chain of organized
pharmacy retail stores using DEA. The study focuses on the evaluation of technical
and cost efficiency. Since technical efficiency does not take into account substitution
possibilities between inputs, we also applied cost and allocative efficiency models. To
illustrate these DEA models, we focus on a real-world problem: the revival of loss-
making retail stores of an organization engaged in pharmacy retailing. The study uses
secondary data collected over a period of three years from a pharmacy retail chain
located in the National Capital Region; this organization has expanded into a chain of
46 pharmacy retail stores over that time span. The study is expected to be
helpful to the retail managers in providing a framework for performance evaluation
and enabling the pharmacy retail stores in gaining a competitive edge over the
increasing competition faced by the emerging organized pharmacy retail market in
India.
Keywords: Pharmacy, Retailing, Performance Evaluation, Data Envelopment Analysis
Introduction
In the current Indian pharmacy retailing scenario, the sector is dominated mainly by
unorganised players, comprising neighbourhood chemist stores owned by small
families. Rising affordability, increased consciousness, and willingness to spend have
not only expanded the health and pharmacy retailing business but have also
demanded a big transition. The challenge faced by the organised sector is therefore to
position itself so that it can be easily differentiated from the small and unorganised
retailers of the pharmacy sector. Currently there is not much differentiation in the
offerings of organised players, but at the same time, organised players are a big threat
to the dominating unorganised sector (Pande and Patel, 2011). Technology and cost
are the two wheels that drive a business and therefore play a crucial role in running it
efficiently. The considerable attention received by retail efficiency research is
understandable, but the scarcity of
Theoretical Framework
According to Farrell (1957), cost efficiency decomposes into technical and allocative
efficiency. The cost minimization problem and its decomposition can be represented
diagrammatically as follows:
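In symbols, the Farrell decomposition states that overall cost (economic) efficiency is the product of technical and allocative efficiency:

```latex
% Farrell (1957) decomposition of cost efficiency
\[
  CE = TE \times AE, \qquad 0 < TE,\; AE,\; CE \le 1,
\]
% so a store can be technically efficient (TE = 1) yet cost inefficient
% (CE < 1) purely through an allocatively poor input mix (AE < 1).
```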
Outputs     Authors
Sales       Joo et al. 2009; Barros 2006; Moreno 2006; Sellers-Rubio & Mas-Ruiz 2006;
            Barros and Alves 2003; Donthu and Yoo 1998
Footfalls
Here, the relative unit cost of the store for each DMU was also recorded as follows:
per unit cost of store size = total rent / store size. Other input variables, such as
maintenance, marketing, and other day-to-day expenses, are clubbed together as
operating expenses.
where θ_i is the efficiency score for retail store i computed from the BCC model with
categorization of age and location as non-discretionary variables. Sales_i, Footfalls_i,
Operating expenses_i, and Store size_i are the sales, footfalls, operating expenses, and
store size of the i-th retail store. The non-discretionary variables Age_i and Location_i
are part of the thesis work and are not considered in this cost efficiency section.
The χ² test statistic (=161.7) with six degrees of freedom and its associated p-value
(=2.5844x10-32) shows that the model is a good fit for the data. We also find that the
value of constant 2 (e-2.49479751 = 0.082513157) from the Tobit model is much less than
the standard deviation of θ_i (=0.1676), which again shows that the model appears
to fit the data.
References:
Banker, R. D., Charnes, A. and Cooper, W.W. (1984) ‘Some Models for Estimating
Technical and Scale Inefficiencies in Data Envelopment Analysis’, Management
Science, No. 30, pp.1078-1092.
Barros, C.P. and Alves, C.A. (2003) ‘Hypermarket retail store efficiency in Portugal’,
International Journal of Retail & Distribution Management, Vol. 31, No. 11, pp.549-560.
Charnes, A., Cooper, W.W. and Rhodes, E. (1978) ‘Measuring the Efficiency of
Decision Making Units’, European Journal of Operational Research, Vol. 2,
pp.429-444.
Coelli, T.J., Rao, P. and Battese, G.E. (1998) An Introduction to Efficiency and
Productivity Analysis, Kluwer Academic Press, Dordrecht
Cooper, W.W., Seiford, L.M. and Tone, Kaoru (2007) Data Envelopment Analysis: A
comprehensive text with models, applications, references and DEA-Solver Software,
Springer Science + Business Media, LLC.
Donthu, Naveen and Yoo, Boonghee (1998) ‘Retail Productivity Assessment Using
Data Envelopment Analysis’, Journal of Retailing, Vol. 74(1), pp. 89-105
Farrell, M.J. (1957) ‘The Measurement of Productive Efficiency’, Journal of the Royal
Statistical Society, Series A, Vol. 120, No. 3, pp.253-290
Joo, Seong-Jong, Stoeberl, P.A. and Fitzer, Kristin (2009) ‘Measuring and
benchmarking the performance of coffee stores for retail operations’, Benchmarking:
An International Journal, Vol. 16, No. 6, pp. 741-753
Moreno, Justo de Jorge (2008) ‘Efficiency and regulation in Spanish hypermarket retail
trade: A cross-section approach’, International Journal of Retail & Distribution
Management”, Vol. 36, No. 1, pp.71-88
Pande, Smriti and Patel, G.N. (2011) ‘Assessment of Performance using DEA: A Case
on Chain of Pharmacy Retail Stores located in NCR’, in Sardana, G.D. and
Thatchenkery, Tojo (Eds.), Building Competencies for Sustainability in Organizational
Excellence, Macmillan publishers India Ltd., pp. 319-336
Ray, Subhash C. (2004) Data Envelopment Analysis Theory and Techniques for
Economics and Operations Research, Cambridge University Press, New York, USA
Table: BCC efficiency, allocative efficiency, and cost efficiency scores by store
(S.No.) for each of the three years.
Abstract
This paper examines the Data Envelopment Analysis (DEA) from a cognitive
perspective. Experimental evidence regarding the role of DEA efficiency scores as an
overall non-financial performance measure in a context with multiple alternatives and
attributes is outlined. The study not only confirms that the efficiency score acts as a
strong performance marker when deciding on which decision making units (DMUs)
should be awarded for their non-financial performance, but also shows that the score
may induce a halo effect, significantly influencing a posterior financial assessment.
These results have practical consequences for planning, reporting, and controlling
processes that incorporate DEA efficiency scores.
Keywords: DEA, performance assessment, performance markers, halo effect,
experimental study.
Introduction
DEA is a promising approach to performance evaluation that allows the simultaneous
analysis of financial and non-financial performance indicators. Nevertheless, this
instrument presents some shortcomings, and its pitfalls have been extensively
discussed from a prescriptive point of view. Dyson et al. (2001), e.g., systematically
outline conceptual problems of DEA and describe possibilities to cope with them. In
contrast, although the “specification of cognitive processes is important to theory
development” (Peters, 1993, p. 391), we have found no contributions discussing DEA
applications from the descriptive point of view. This is astonishing, since a DEA-based
evaluation also includes subjective components, implying that it is susceptible to
behavioral influences and limitations. In fact, a (DEA-based) performance assessment
aiming at identifying the best performing DMUs can be seen as a multiple attribute
choice with multiple alternatives, and as such, it can be analyzed from the perspective
of psychological theories.
In this study, the non-financial and the financial performance assessment tasks are
considered as two choice problems. In the first one, DEA scores serve to measure the
overall non-financial performance of a set of DMUs. Based on heuristics and bias
literature, we hypothesize that the relatively high accessibility of DEA scores will make
them act as performance markers, thus (over-)simplifying the choice problem by
Method
Bachelor students (N = 72) taking introductory Management Control and Business
Accounting courses at a German university were asked to answer the questionnaire
during a lecture. Students were randomly assigned to one of two conditions (DEA
score, N = 37; no DEA scores, N = 35). Seven cases were eliminated from the analysis
since no valid answers were provided for the dependent variables defined in the
general research model.
The experiment comprised two phases. The first had the aim of evaluating the role of
DEA scores as performance markers on the non-financial performance assessment.
The second phase was designed to investigate possible effects of DEA scores on a
posterior financial performance assessment.
Participants received a table with non-financial data corresponding to 10 different
DMUs and were asked to decide to which three DMUs they would assign a bonus for
non-financial performance (Bnf). The non-financial data included in the report were
based on a real case of a European pharmacy chain and included the following
performance criteria: average number of employees, sales area in m², sales
transactions, and number of customer advices. For those participants in the treatment
with DEA scores (θ), this measure was also included. The treatment containing DEA
scores made it possible to identify three DMUs as efficient (θ = 100%), one as almost efficient
(θ = 98%) and all others as inefficient, with θ = 54% being the lowest efficiency score.
A brief description regarding the DEA methodology was included in the
corresponding vignette.
In the second phase, participants were provided with a table containing three different
financial indicators for the 10 DMUs: profit, cash flow in thousand euro, and return on
capital. The financial performance indicators were designed to avoid not only a clear
Results
We used path analysis to test the general research model for DMU A. The
correlation matrix for the four variables included in the causal model together with the
main statistics for each variable are presented in Table 1.
As expected, the presence of DEA scores is significantly correlated with the
assignment of Bnf to A, and the assignment of a Bf to A is significantly correlated with
the perceived score for financial performance for this DMU. It is interesting that the
assignment of a Bnf to A is also significantly correlated both with the posterior
assignment of a Bf to this DMU and its perceived financial performance score. This
provides initial support for our hypothesis.
When developing the hypothesis it was predicted that DEA scores would serve as
performance markers and therefore would have an influence on the decision of Bnf
assignment. The path analysis for the case of DMU A (Fig.1) indicates that the
presence of a DEA score of 100% has a positive influence on the Bnf assignment. The
direct effect is positive and significant (a = 0.329, z = 2.822, p = 0.005). The contrary
occurs for the case of DMU D, whose DEA score was only 98% (a = –0.419, z = –
3.742, p = 0.000), therefore providing confirmatory evidence for our research
hypothesis.
Following the criterion of considering an indirect effect to be significant if all the paths
involved in its calculation are significant (Kline, 2011), it can be concluded that DEA
scores have an effect on the posterior financial performance assessment. The presence
of DEA scores has an indirect effect on the assignment of Bf to A (a×b) as well as on
the perceived financial performance score (a×b×d). Similar results are obtained for
DMU D (a×b significant, with b = 0.265, z = 2.292, p = 0.022). Table 2 presents the
coefficients for each of the paths included in the causal model for DMU A.
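The significance criterion above multiplies the individual path coefficients; with the DMU A estimates reported in the text (a = 0.329, b = 0.239, d = 0.365), the indirect effects can be reproduced as:

```python
# Indirect effects as products of path coefficients (Kline, 2011),
# using the DMU A estimates reported in the text.
a, b, d = 0.329, 0.239, 0.365

indirect_bf    = a * b        # DEA scores -> Bnf -> Bf
indirect_score = a * b * d    # DEA scores -> Bnf -> Bf -> perceived score
```

Rounding `indirect_score` to three decimals reproduces the 0.029 reported for a×b×d in Table 2.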
            DEA      Bnf to A   Bf to A
DEA         1
Bnf to A    0.324**  1
Bf to A     0.109    0.244*     1
Fig. 1: Path model for DMU A — Presence of DEA scores → Assignment of Bnf to A
(performance marker, a = 0.329**); Assignment of Bnf to A → Assignment of Bf to A
(halo effect, b = 0.239*); Assignment of Bf to A → Perceived financial efficiency score
of A (d = 0.365***); halo-effect path to the perceived financial efficiency score of A
(c = 0.206).
References
Cardinaels E., P.M.G. van Veen-Dirks (2010) Financial versus non-financial
information: the impact of information organization and presentation in a balanced
scorecard, Accounting, Organizations and Society 35 (6): 565–578.
Dholakia U.M., M. Gopinath, R.P. Bagozzi (2005) The role of desires in sequential
impulsive choices, Organizational Behavior and Human Decision Processes 98 (2):
179–194.
Dyson R.G., R.S. Allen, A.S. Camanho, V.V. Podinovski, C.S. Sarrico, E.A. Shale (2001)
Pitfalls and protocols in DEA, European Journal of Operational Research 132 (2): 245–
259.
Huber V.L., M.A. Neale, G.B. Northcraft (1987) Judgment by heuristics: effects of ratee
and rater characteristics and performance standards on performance-related
judgments, Organizational Behavior and Human Decision Processes 40 (2): 149–169.
Kline R.B. (2011) Principles and practice of structural equation modeling, 3rd edition,
The Guilford Press: New York.
Moore D.A. (2005) Commentary: conflicts of interest in Accounting. In: D.A. Moore,
D.M. Cain, G. Loewenstein, M.H. Bazerman (Eds.), Conflicts of interest. Challenges and
solutions in Business, Law, Medicine, and Public Policy. Cambridge University Press:
Cambridge et al., 70–73.
Tversky A., D. Kahneman (1974) Judgment under uncertainty: heuristics and biases,
Science, New Series 185 (4157): 1124–1131.
Wilson T.D., D.B. Centerbar, N. Brekke (2002) Mental contamination and the
debiasing problem. In T. Gilovich, D. Griffin, D. Kahneman (Eds.), Heuristics and
biases. The psychology of intuitive judgment, Cambridge University Press: Cambridge et
al., 185–200.
Table 2 (excerpt): direct and indirect effects for the causal model of DMU A;
indirect effect a×b×d = 0.029*, 1.138.
Abstract
The current Formula 1 driver and team ranking system does not allow for an impartial
and unbiased comparison between results, since the criteria used are oftentimes
inconsistent. In order to make such comparison, the mathematical optimization tool
DEA - Data Envelopment Analysis was used. As a result of this research, a plan was
elaborated to review the efficiency frontier for the results achieved by Formula 1
drivers and teams, and a new driver and team ranking list was created.
Key words: Data Envelopment Analysis, Formula 1, Ranking.
Introduction
The objective of this paper was to design a plan to analyze the efficiency frontier for
the results achieved by Formula One car drivers and teams. This topic is particularly
relevant to the sports community, since the existing comparative performance
measurement methods applied along decades of competitions have always lacked
indicators that apply to all sports categories.
The existing car racer and racing team ranking system does not allow for an impartial
and unbiased comparison, since the criteria used are oftentimes inconsistent.
This paper also aims to help shed light on the reasons why Formula One car racers
and agents constantly criticize the changes in the ranking criteria set out by FIA -
Fédération Internationale de l'Automobile. By constantly introducing technological
innovations, Formula 1 is today the most audacious and technologically sophisticated
motorsport category in the world.
One of the problems that affect this auto racing category is its points scoring system,
which generates distortions when analyzing and ranking the best drivers.
Scoring system
Season/Pos.   1º  2º  3º  4º  5º  6º  7º  8º  9º  10º
1950-1955      8   6   4   3   2
1956-1990      9   6   4   3   2   1
1991-2002     10   6   4   3   2   1
2003-2009     10   8   6   5   4   3   2   1
2010          25  18  15  12  10   8   6   4   2    1
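The scoring systems in the table can be encoded directly; the helper below is our illustration, not part of FIA's rules:

```python
# Points per finishing position under each FIA scoring era (from the table).
POINTS = {
    "1950-1955": [8, 6, 4, 3, 2],
    "1956-1990": [9, 6, 4, 3, 2, 1],
    "1991-2002": [10, 6, 4, 3, 2, 1],
    "2003-2009": [10, 8, 6, 5, 4, 3, 2, 1],
    "2010":      [25, 18, 15, 12, 10, 8, 6, 4, 2, 1],
}

def points_for(era, position):
    """Points for a 1-based finishing position; 0 beyond the scored places."""
    table = POINTS[era]
    return table[position - 1] if position <= len(table) else 0
```

For instance, a 9th place scores points only under the 2010 system, which is one source of the cross-season distortions discussed here.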
These issues have a negative impact on sponsors and manufacturers, as well as on
drivers and teams, which are greatly affected. The points scoring system adopted by
FIA has changed frequently over the years, making it impossible to objectively
compare and analyze the results of each race and each championship.
Due to the criteria currently in use, the rankings prepared and released to the public
are inaccurate. The rankings measure performance by driver, nationality, constructor,
engine, and tire manufacturer.
An example of these distortions can be seen in Table 1 (in APPENDICES), showing
the top 10 in the 2010 championship: positions 3 and 4 would be reversed if the
scoring criteria valid in 2009 had continued in 2010.
Methods
The mathematical optimization tool DEA - Data Envelopment Analysis was used to
prepare the reports. Data Envelopment Analysis is an analytical tool designed to
identify best practices in the use of resources, which, in our study, are those resources
available to Formula 1 teams.
According to Emrouznejad (2005), Figure 2 shows a number of units P1, P2, ..., P6,
where each unit consumes one resource but produces different amounts of outputs y1
and y2. Thus, for a given amount of input, the units that provide larger quantities of
outputs are considered efficient.
Through this research, an analysis plan was elaborated to review the efficiency
frontier for the results achieved by Formula 1 drivers and teams. It also allowed
putting together a new ranking covering all Formula 1 drivers and teams, as well as
other stakeholders, including engine, tires, and service suppliers.
A bibliographical research was carried out to collect data on the results of all F1 races
ever – beginning with the first race in 1950.
After the results of all F1 races were known, a study was conducted to analyze the
inputs and outputs for a review using the DEA method, with the following
components being assessed:
• Decision Making Units - DMUs: Teams and Drivers.
• INPUT: Grid position at qualifying session.
• OUTPUT: Position at the end of the race.
Outputs: The driver's position at the end of the race, considering the arrival order
according to FIA’s rules. The output weight for each driver's position was calculated
based on the number of cars participating in each race.
Conclusions
FIA’s criteria determine that the ranking be based on points awarded to drivers
according to their position at the end of the race, with only a limited number of
positions scoring points and the remaining drivers scoring none, regardless of the
number of drivers. The scoring criteria, which are set out annually, are sometimes
changed at the end of a season. With the new scoring method proposed here, i.e., the
DEA efficiency frontier, such comparisons become possible, since the assessment is
based only on the order of arrival, which is translated into an efficiency ranking.
References
FARRELL, M. J. (1957) The measurement of productive efficiency. Journal of the
Royal Statistical Society, London, v. 120, n. 3, p.253-290.
FIA. http://www.fia.com/en-GB/Pages/HomePage.aspx. Accessed 30 Jan 2011.
EMROUZNEJAD, Ali. (1995-2000) Coventry: Warwick Business School, Data
Envelopment Analysis Home Page. Available at:
<http://www.deazone.com/index.htm>. Accessed 25 May 2005.
FORMULA 1. http://www.formula1.com/default.html. Accessed 30 Jan 2011.
JUBRAN, L.M.P. (2005) Aplicação da Análise por Envoltória de Dados: um estudo da
eficiência das companhias seguradoras. 2005. 143 p. Dissertação (Mestrado) –
Departamento de Engenharia Elétrica, Escola Politécnica da Universidade de São
Paulo. São Paulo.
PRADO, D. L. Pontos na F1: Decisão polêmica da FIA não é a primeira na história.
Available at: <http://www.dzai.com.br/>, 2009. Accessed 30 Jan 2011.
PUCCINI, A. L.; PIZZOLATO, N. D. (1989) Programação linear. 2.ed. Rio de Janeiro:
LTC.
Abstract
The models proposed in this work are as follows: a goal programming DEA ranking
model, recognized as an alternative to multiple criteria DEA, and a model which
brings each weighted output (input) component close to the weighted output (input)
sum, hence making the contribution of each output (input) component to the
efficiency score proportional to the output (input) values. The proposed models are
applied to measure the energy efficiency performance of OECD countries, and the
results obtained are presented.
Keywords: Linear and Goal Programming, Energy Efficiency, Data Envelopment
Analysis
Introduction
DEA, first developed by Charnes et al. [1], seems to be the most popular method for
measuring the efficiency of homogeneous decision making units. Bal et al. [2] suggest
goal programming approaches to improve the discrimination power of DEA. Sexton et
al. [3] and Doyle and Green [4] suggest cross-efficiency evaluation as an extension of
DEA aimed at avoiding some of the mentioned difficulties. Their technique made use
of cross-evaluation scores computed with respect to all DMUs and hence identified
the best DMUs [5].
The problem of multiple optimal weight solutions for efficient DMUs, which greatly
affects the consistency of the cross-efficiency method, is a frequently studied topic in
the DEA literature. Sexton et al. [3] and Doyle and Green [4] recommended the use of
a secondary objective (model) for the cross-efficiency evaluation to deal with the
non-uniqueness of optimal weights in DEA. They proposed the aggressive and
benevolent models for achieving this secondary objective. Andersen et al. [5] proved
the fixed weighting nature of the cross-efficiency evaluation model.
The classical CCR model for evaluating DMU p is:

max θ_p = Σ_{r=1}^{s} u_r y_rp / Σ_{i=1}^{m} v_i x_ip    (1)

s.t.
Σ_{i=1}^{m} v_i x_ip = 1    (2)
Σ_{r=1}^{s} u_r y_rj − Σ_{i=1}^{m} v_i x_ij ≤ 0 ,  j = 1, 2, . . . , n
u_r ≥ 0 ,  r = 1, . . . , s
v_i ≥ 0 ,  i = 1, . . . , m
In the above models, a DMU is considered efficient if and only if θ*_p = 1; otherwise,
it is referred to as inefficient.
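For the special case of a single input and a single output, the model above reduces to a ratio comparison, which can be sketched as follows (the data are illustrative):

```python
# With one input and one output, input-oriented CCR (CRS) efficiency
# reduces to theta_p = (y_p / x_p) / max_j (y_j / x_j).
x = [2.0, 3.0, 4.0]   # inputs of three DMUs
y = [1.0, 2.0, 3.0]   # outputs of three DMUs

def ccr_efficiency(p):
    """CCR efficiency of DMU p (0-based) relative to the best ratio."""
    best_ratio = max(yj / xj for xj, yj in zip(x, y))
    return (y[p] / x[p]) / best_ratio
```

Here the third DMU attains the best output/input ratio (0.75) and is the only efficient unit; the others score strictly below 1.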
Classical DEA can rank only inefficient DMUs, and various methods have been
developed to overcome this disadvantage [8]. The most commonly used method for
ranking efficient decision units is the super-efficiency model proposed by Andersen
and Petersen [9].
In addition, the cross efficiency method was developed as a DEA extension tool to be
utilized for identifying the best performing DMUs, and for ranking DMUs using cross
efficiency scores that are linked to all DMUs [3,6]. In the first stage, the optimal
weights of inputs and outputs are calculated for each DMU using the classical DEA
formulation. Given the results of the first stage, the weights used by the DMU can be
utilized for calculating the peer-rated efficiency of each of the other DMUs. The
peer-rated (cross) efficiency of DMU j using the weights of DMU p is

θ_{p,j} = Σ_{r=1}^{s} u_{r,p} y_rj / Σ_{i=1}^{m} v_{i,p} x_ij .    (3)
If we consider the multiple criteria DEA model, the CCR model can be expressed
equivalently in the form given by Li and Reeves [10]:

min d_p  or  max θ_p = Σ_{r=1}^{s} u_r y_rp

s.t.
Σ_{i=1}^{m} v_i x_ip = 1    (4)
Σ_{r=1}^{s} u_r y_rj − Σ_{i=1}^{m} v_i x_ij + d_j = 0 ,  j = 1, 2, . . . , n
u_r , v_i , d_j ≥ 0 ,  for all r, i and j
where d_p is the deviation variable for DMU p. This DMU is efficient if and only if
d_p = 0, or equivalently θ_p = 1. A multiple criteria data envelopment analysis model
formulation with the minmax and minsum criteria, which minimizes a deviation
variable rather than maximizing the efficiency score, is shown below (MCDEA):
min d_o  or  max w_o = Σ_{r=1}^{s} u_r y_rjo
min M
min Σ_{j=1}^{n} d_j    (5)

s.t.
Σ_{i=1}^{m} v_i x_ijo = 1
Σ_{r=1}^{s} u_r y_rj − Σ_{i=1}^{m} v_i x_ij + d_j = 0 ,  j = 1, 2, . . . , n
M − d_j ≥ 0 ,  j = 1, 2, . . . , n
u_r , v_i , d_j ≥ 0 ,  for all r, i and j
(GPDEA-CCR)

Σ_{i=1}^{m} v_i x_ip + n_1 − p_1 = 1
Σ_{r=1}^{s} u_r y_rp + n_2 − p_2 = 1    (6)
Σ_{r=1}^{s} u_r y_rj − Σ_{i=1}^{m} v_i x_ij + d_j = 0 ,  j = 1, 2, . . . , n
M − d_j + n_3j − p_3j = 0 ,  j = 1, 2, . . . , n
u_r ≥ 0 ,  r = 1, 2, . . . , s
v_i ≥ 0 ,  i = 1, 2, . . . , m
d_j ≥ 0 ,  j = 1, 2, . . . , n
n_1 , p_1 , n_2 , p_2 ≥ 0
n_3j , p_3j ≥ 0 ,  j = 1, 2, . . . , n
where, for the DMU under evaluation, n_1 and p_1 are the unwanted deviation
variables for the goal which constrains the weighted sum of inputs to unity, and n_2
is the deviation variable for the goal which constrains the weighted sum of outputs
to unity. The n_3j (j = 1, 2, . . . , n) are the unwanted deviation variables for the goal
which realizes M as the maximum deviation, and the p_3j are the wanted deviation
variables for the same goal. Our aim, giving equal weight to the unwanted
deviations, is to minimize their sum.
The component model for DMU p is:

min z_p

s.t.
Σ_{r=1}^{s} u_r y_rp = θ*_p
Σ_{i=1}^{m} v_i x_ip = 1    (7)
Σ_{r=1}^{s} u_r y_rj − Σ_{i=1}^{m} v_i x_ij ≤ 0 ,  j = 1, 2, . . . , n
Σ_{k=1}^{s} u_k y_kp − u_r y_rp ≤ z_p ,  r = 1, 2, . . . , s
Σ_{k=1}^{m} v_k x_kp − v_i x_ip ≤ z_p ,  i = 1, 2, . . . , m
u_r , v_i , z_p ≥ 0 ,  for all r and i

Here, θ*_p is the efficiency value for the p-th DMU obtained from the classical DEA
model.
Application
In this section, the methods mentioned above are used to compare the performance
of OECD countries with respect to CO2 emissions and energy consumption [10].
Following Ramanathan’s study [10], the input variables are CO2 emissions per capita
(denoted CO2 per cap hereafter) and fossil fuel energy consumption (FOSS), and the
output variables are gross domestic product per capita (GDP per cap) and non-fossil
fuel energy consumption (NFOSS), used to measure energy consumption and CO2
emission efficiency.
Table 1. DEA-CCR Results of Application Study
According to the results in Tables 1-4, the GPDEA and component models have fewer
efficient DMUs than the classical DEA model. According to the coefficients of
variation for the weights of inputs and outputs, the proposed models generally
decrease the variation of the variable weights.
Conclusions
It can be seen that the GPDEA and Component models have fewer efficient DMUs and that the proposed models generally decrease the coefficient of variation of the weights. The models also show high and significant correlations with each other in terms of the ranking scores of the DMUs.
References
A. Charnes, W.W. Cooper, E. Rhodes, Measuring the Efficiency of Decision Making
Units, Eur. J. Oper. Res., 2 (1978) 429–444.
H. Bal, H.H. Örkcü, S. Çelebioğlu, Improving the Discrimination Power and Weight
Dispersion in the Data Envelopment Analysis, Comput. Oper. Res., 37 (1) (2010) 99–
107.
T.R. Sexton, R.H. Silkman, A.J. Hogan, Data Envelopment Analysis: Critique and
Extension. In: Silkman R.H. (Ed.), Measuring Efficiency: An Assessment of Data
Envelopment Analysis, 32. Jossey-Bass, San Francisco (1986) 73-105.
J.R. Doyle, R. Green, Efficiency and Cross Efficiency in Data Envelopment Analysis:
Derivatives, Meanings and Uses, J. Oper. Res. Soc., 45 (5) (1994) 567–578.
T.R. Andersen, K.B. Hollingsworth, L.B. Inman, The Fixed Weighting Nature of a Cross
Evaluation Model, J. Prod. Anal., 18 (1) (2002) 249–255.
P. Andersen, N. Petersen, A Procedure for Ranking Efficient Units in Data Envelopment Analysis, Management Science, 39 (10) (1993) 1261-1264.
X.B. Li, G.R. Reeves, A Multiple Criteria Approach to Data Envelopment Analysis, European Journal of Operational Research, 115 (1999) 507-517.
Abstract
The scientific literature has pointed to the process of fiscal decentralization as a
potential inducer of efficiency and productivity in the public sector. However, some
authors have questioned whether the process in Brazil would actually generate a
waste of resources and raise problems in the quality of services provided. This paper
uses the Malmquist index and econometrics with panel data to empirically assess the
question of the relationship between fiscal decentralization and performance of the
public health service in Brazil, as well as to provide an overview of the dynamics of
regional productivity in the sector. The results show that the decentralization of health spending has a negative relationship with the productivity of these services, while fiscal responsibility has a greater influence on the performance of local governments.
Keywords: Health, Decentralization of Expenditures, Productivity, Fiscal
Responsibility.
Introduction
Worldwide, health care is of increasing concern, both politically and socio-
economically. In Brazil, the health system experiences increasing pressure to improve
its performance, both in regard to controlling the cost of services and ensuring greater
access to and better quality health care for the population. To analyze the current stage of this system it is necessary to understand the process of decentralization, particularly the creation of the Sistema Único de Saúde (Unified Health System, SUS), provided for in the 1988 Constitution, under which states and municipalities received the largest transfers of funds and responsibilities for the provision of health services.
Oates (1977 and 2005), among others, notes that the fiscal decentralization process
generates a number of benefits to society, given that local governments can provide goods and services more efficiently and more in line with local preferences and demands. On the other hand, critics like Prud'homme (1995) point to the number of
*
Recipient of financial aid from IPEA – Brazil.
Methods
To achieve the set goals, the empirical analysis of the study was divided into two
stages. The first builds a dynamic index of productivity growth for public health
services, using data from the municipalities aggregated at the state level, where the
indicator can be divided into changes in efficiency and technical innovation. This
index is intended to ascertain the best relations of efficiency and technical changes
obtained during the period between 1996 and 2007. To calculate this indicator, we
used the Malmquist productivity index, with the help of the non-parametric method Data Envelopment Analysis (DEA), to estimate the necessary efficiency scores. This approach was chosen because it simultaneously handles the multiple inputs and outputs that are typical of the health sector and does not impose a functional form on the production frontier 1. Thus, for the development of Stage I we used the following production function of public health services:
(𝑦1 , 𝑦2 ) = 𝑓(𝑥1 , 𝑥2 , 𝑥3 ) (1)
1
The concept of the Malmquist productivity index was first introduced by Malmquist (1953) and later refined by several works, including Caves et al. (1982), Färe et al. (1994) and Thrall (2000). This index represents the growth of total factor productivity (TFP) of decision making units (DMUs) and reflects two components: efficiency change and technological change over time.
R² adjusted 0.9921
Observations 312
The estimation results described in the table above were obtained from a fixed effects panel data regression. This choice was based on the Hausman test, which revealed that the fixed effects estimator is consistent and efficient when compared with the random effects estimator. The estimation was performed with a total of 26 cross-section units over a period of 12 years, totalling 312 observations in the panel.
The dummy referring to the technological aspect of the municipalities (Dum_Scale), included as a factor controlling the volume produced, showed that the decentralization caused by SUS can negatively affect the provision of public health, revealing that hospital size influences the productivity indicator. The purpose of incorporating this dummy was to control for the change in the technological pattern arising from the decentralization of health care. The model without this binary variable also captured the productivity of the DMUs that had no change in their returns to scale in the period.
The result expressed in table 1 corroborates the intuition that large hospitals with
decreasing returns to scale tend to have a higher level of productivity than the units of
lower scale. From a regional perspective, all the dummies were significant, and the productivity indicator of public health care was more favorable for localities in the South and Southeast than for those in the Northeast, North and Midwest. This result is interesting as it highlights that the performance of health care provision is influenced by geographical position, a clear sign of the great technical and socioeconomic disparities between Brazilian regions.
Another interesting feature which helps to better understand regional differences in
productivity concerns the design of Brazilian fiscal federalism. As shown in table 1, governments belonging to the North and Northeast depend heavily on transfers from the Union. Accordingly, as indicated by the positive and significant coefficients of the fiscal responsibility (Fr) and cash flow (Cf) variables, DMUs that depend less on transfers tend to have greater accountability and efficiency in the provision of public goods.
The literature on health economics indicates that environmental factors, particularly
socioeconomic factors, directly influence the productivity and efficiency of goods and
Conclusions
This study has tried to determine whether increased decentralization of public health spending favored the productivity of this service in the country. Given the evidence presented, the performance of the public health services showed a negative relationship with the indicator of decentralization of spending in the area.
This result was contrary to the vision in the so called "decentralization theorem,"
which emphasizes gains in social welfare and efficiency when products are provided
by local governments. In this context, the critical view of Campos (1998) on the high
level of wastage, technical and administrative insufficiency, corruption, nepotism and
other problems faced by the management of local governments in Brazil can be a
possible explanation for the results found.
Nevertheless, one must keep in mind that the process of decentralizing the provision
of public health in Brazil has brought strong technological changes, as evidenced by
the change in returns to scale. Therefore, this technical shift may at first have imposed a lower level of productive performance, since hospitals have become more spatially decentralized and started to operate with increasing returns to scale, which might later generate greater resource savings and productivity gains.
Caves DW et al. (1982). The economic theory of index numbers and the measurement
of input, output and productivity. Econometrica 50: 1393-1414.
Färe R et al. (1994). Productivity growth, technical progress, and efficiency change in
industrialized countries. The American Economic Review 84: 66-83.
Gasparini CE, Ramos FS (2004). Relative deficit of health services in Brazilian states
and regions. Brazilian Review of Econometrics 24: 75-107.
Gang Cheng
MaxDEA@qq.com, Peking University, China.
Panagiotis Zervopoulos
University of Ioannina, Greece. pzervopoulos@hotmail.com
Abstract
The aim of this paper is to study the relationship between Mexican banking sector deregulation and bank performance; in addition, the effects of competition and bank risk-taking are accounted for. To this end, we develop a DEA-Malmquist productivity model of bank performance. A recent econometric advance in the Generalized Method of Moments, the Arellano-Bover/Blundell-Bond estimator, is applied to estimate bank performance. The estimated model uses bank panel data from the Mexican deregulation period, 1988-2000.
Keywords: Deregulation, banking, productivity, Mexico.
Introduction
In recent decades the Mexican banking system has undergone several transformations. Private banks were nationalized in 1982, and in 1991-92 the banking system was privatized. In 1994 the North American Free Trade Agreement (NAFTA) came into force, and from then on banking investment was open to foreign capital. Several restrictions on overseas participation in the banking sector were imposed under NAFTA covenants. While these limitations were supposed to last until 2000, they were in fact lifted during the Mexican financial crisis of 1995. After the crisis, foreign investment became predominant in the Mexican banking sector.
The financial liberalization process encompassed several reforms in order to enhance
efficiency as well as to increase productivity in the financial system. The purpose of
this study is to analyze the effects of banking liberalization on bank performance. The
banking performance is measured by productivity and efficiency. In addition, the
effects of competition and risk-taking are accounted for. With such a purpose, an
Previous studies
Several papers have applied Malmquist techniques to assess bank performance, including Berg, Forsund and Jansen (1992), Berg et al. (1993), Bukh, Forsund and Berg (1995) and Mlima (1999) for Nordic countries; Grifell-Tatje and Lovell (1995, 1997) for Spanish banks; Leightner and Lovell (1998) for Thai banks; Rebelo and Mendes (2000) for Portuguese banks; Isik and Hassan for Turkish banks; Paradi et al. (2002) for Canadian banks; Kirikal (2004) and Kirikal, Sorg and Vensel (2004) for Estonian banks; Galagedera and Edirisuriya (2004) for Indian banks; and Murillo-Melchor et al. (2005) for European banks. By and large, the available empirical evidence shows mixed effects of financial liberalization on productivity and efficiency. Gilbert and Wilson (1998) found that financial liberalization in Korea produced outstanding productivity results for Korean banks. Likewise, Isik and Hassan (2003), using a Malmquist DEA model, showed a relevant improvement in Turkish banking productivity after liberalization. On the other hand, Yildirim (2002), studying the technical and scale efficiencies of Turkish banks between 1988 and 1999 with nonparametric Data Envelopment Analysis, found that over the sample period both pure technical and scale efficiency measures showed great variation and the sector did not achieve sustained efficiency gains.
Method
The original DEA Malmquist results were adjusted by bootstrapping following the Kneip et al (2008) procedure. Subsequently, in order to avoid distortions in the results, the adjusted Malmquist index was purified by excluding figures detected as outliers. Once these corrections took place, the figures were incorporated to run the regressions of the liberalization model specified below.
To this end, the liberalization model was estimated with four regressions. The dependent variable, bank performance (p), is regressed on four independent and four instrumental variables. Bank performance is assessed by four variables: total factor productivity change (TFPCH), efficiency change (EFFCH), technological change (TECHCH) and net interest margin (nim).
The explanatory variables include the financial liberalization index (finlib) calculated by Kaminsky and Schmukler (2008). Market power (mp) is another explanatory variable, calculated following the Panzar and Rosse (1987) model. In addition, a dummy variable accounting for the origin of capital, foreign or national (for), is included. Finally, the ratio of investment to GDP (invgdp),
Hypotheses
1) Financial liberalization positively affected bank performance.
2) Credit, capital and liquidity risk enhanced bank performance.
3) Foreign capital positively influenced bank performance.
4) Investment improved bank performance.
Model
The Malmquist index is used to evaluate the productivity change of banks between two time periods. Such change is computed as the product of the "catch-up" and "frontier-shift" terms. The catch-up (or recovery) term relates to firm efficiency: it assesses the degree to which a firm attains efficiency improvements. The frontier-shift (or innovation) term reflects the change in the efficient frontiers surrounding the firm between the two time periods.
Firstly, following Tone, K. (2004), a set of n banking firms (xj, yj) (j = 1, …, n) is established, each with m inputs and q outputs, where the vector xj ∈ Rm denotes the inputs and the vector yj ∈ Rq the outputs, over periods 1 and 2. The notations (xo, yo)1 and (xo, yo)2 designate decision making units (DMUo) (o = 1, …, n) in periods 1 and 2 respectively.
The production possibility set (X,Y)t (t = 1 and 2 ) spanned by {(xj,yj)t } (j=1,…,n) is
defined by:
(X, Y)t = { (x, y) | x ≥ ∑_{j=1}^{n} λj xjt, 0 ≤ y ≤ ∑_{j=1}^{n} λj yjt, L ≤ eλ ≤ U, λ ≥ 0 } … (1),
where 𝑒 is the row vector with all elements equal to one and 𝜆 ∈ Rn is the intensity
vector, and L and U are the lower and upper bounds of the sums of the intensity
respectively. The production possibility set (X,Y)t is characterized by frontiers that are
composed of (x,y) ∈ (X,Y)t such that is not possible to improve any element of the
input x or any element of the output y without worsening some other input or output.
This frontier is called the frontier technology at the period t. In the Malmquist index
analysis, the efficiencies of DMUs (xo,yo)1 and (xo,yo)2 are evaluated by the frontier
technologies 1 and 2 in several ways.
The Malmquist index (MI) is computed as the product of Catch-up and Frontier-Shift:
MI = (Catch-up) x (Frontier-Shift) …(2)
The output obtained with the DEA Malmquist algorithm was adjusted through bootstrapping techniques following Kneip et al (2008).
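Numerically, once a DMU's four within- and cross-period efficiency scores are available, the decomposition in (2) is simple arithmetic. A sketch with invented scores, where d_st denotes the efficiency of the period-t activity measured against the period-s frontier:

```python
import math

def malmquist(d11, d12, d21, d22):
    """Malmquist index from four efficiency scores.

    catch-up: efficiency change between the two periods.
    frontier-shift: geometric mean of the frontier movement evaluated
    at the period-1 and period-2 activities.
    MI = catch-up * frontier-shift."""
    catch_up = d22 / d11
    frontier_shift = math.sqrt((d11 / d21) * (d12 / d22))
    return catch_up, frontier_shift, catch_up * frontier_shift

# Invented scores for one DMU.
cu, fs, mi = malmquist(d11=0.8, d12=0.9, d21=0.6, d22=0.9)
```

A value MI > 1 indicates productivity growth between the two periods; the two factors show whether it came from catching up to the frontier or from the frontier itself shifting.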
Liberalization model
Following Brissimis et al (2008), the liberalization model is specified as:
p_it = a0 + a1 finlib + a2 mp_t + a3 x_it + a4 invgdp_t + for + u_it    (8)
where p stands for the performance of bank i at time t; p is written as a function of a time-dependent financial liberalization variable, finlib; mp stands for an index of banking industry market power; x is a vector of bank-level variables representing credit, liquidity and capital risk; invgdp is a variable that captures the macroeconomic conditions common to all banks; and u_it is the error term.
The total factor productivity change (TFPCH) is positively affected by factors such as investment (invgdp), lagged (1) credit risk, and lagged capital risk. On the other hand, TFPCH is negatively affected by contemporaneous capital risk (cap) and credit risk (cr).
Investment, capital risk, lagged (1 and 2) capital risk, and lagged (1) credit risk exert a positive effect on technological advance. On the opposite side, credit risk and lagged (2) technological change exert a negative effect on technological change.
Efficiency change is positively affected by investment, lagged (1 and 2) liquidity risk, lagged (1) capital risk, credit risk, and lagged (2) credit risk. The reverse is true for explanatory variables such as liquidity risk, credit risk, and lagged (1) efficiency change.
Net interest margin is positively affected by lagged (1 and 2) liquidity risk, capital risk, credit risk, lagged (1) credit risk, and lagged (1) net interest margin. On the opposite side, lagged (1) credit risk, lagged (2) liquidity risk, lagged (1 and 2) capital risk and lagged (2) net interest margin show an adverse effect on net interest margin.
References
Berg, S.A., F. Forsund, and E. Jansen. (1992). Malmquist Indexes of Productivity Growth during the Deregulation of Norwegian Banking, 1980-89, Scandinavian Journal of Economics, 94 (Supplement), 211-228.
Bukh P.N.D., F.R. Forsund and S.A. Berg. (1995). Banking Efficiency in the Nordic
Countries: A four-country Malmquist index Analysis, Bank of Norway: Working Paper
Research Department.
Färe, Grosskopf, Norris and Zhang (1994). Productivity Growth, technical progress and
efficiency change in industrial countries, American Economic Review, 84, 66-83.
Isik, I. and K. Hassan. (2003). Financial Deregulation and Total Factor Productivity
Change: An Empirical Study of Turkish Commercial Banks. Journal of Banking &
Finance 27. Elsevier Science.
Kneip, Alois & Simar, Léopold & Wilson, Paul W., 2008. "Asymptotics and Consistent
Bootstraps For Dea Estimators In Nonparametric Frontier Models," Econometric
Theory, Cambridge University Press, vol. 24(06), pages 1663-1697, December.
Leightner,J.E. and C.A.K. Lovell. (1998). The Impact of Financial Liberalization on the
Performance of Thai Banks. Journal of Economics and Business 1998; 50.115-131.
Elsevier Science Inc. New York.
Tone, K. (2004) Malmquist Productivity Index. In Cooper, W.W., Seiford, L. M., and
Joe Zhu. (editors) Handbook on Data Envelopment Analysis. Kluwer Academic
Publishers. The Netherlands. Pp. 203-228.
J.L.T. Blank
Delft University of Technology, The Netherlands, j.l.t.blank@tudelft.nl
Abstract
Economies of scale and scope are usually derived under the assumption that the set of production possibilities is shared by all firms in an industry, irrespective of whether they specialize in a single output or not. Mental health care institutions in the Netherlands vary substantially in scale and in the number of outputs. Estimating a single cost function therefore seems very restrictive and requires allowing for zero values. We used a translog cost function model with dummy variables for different types of institutions to allow for different technologies. We found evidence for differences in technologies between institutions specialized in counseling and integrated institutions that also perform other activities, expressed by the number of days in the hospital or permanent care, the number of treatments in daycare or the number of day activities. The marginal costs of counseling were lower for the integrated institutions than for the specialized institutions.
Keywords: economies of scale, economies of scope, stochastic frontier analysis,
mental health care
Introduction
Governments seek tools to control the growth of healthcare spending. Producing at the optimal scale and making full use of economies of scope can help reduce healthcare spending. However, it is important that economies of scale and scope are correctly derived. In this study, we determine the economies of scale and scope of mental health care institutions in the Netherlands. Mental healthcare in the Netherlands is particularly interesting because of its increasing costs, driven by a growing number of patients. Furthermore, the size of the institutions and the combinations of activities offered vary widely between mental healthcare providers. If economies of scale and scope exist, potential cost savings can be realized by restructuring the sector.
Economies of scale and scope are generally derived under the assumption that all
firms in a certain industry operate under the same production possibilities, irrespective
*
Corresponding author
Methods
Model
Institutions vary widely with respect to scale and the type of treatment they offer. A substantial part of the institutions only perform ambulant care (counseling) and are likely to differ substantially from integrated institutions that, for example, also offer residence to their patients. We therefore estimated a cost function that allows for different technologies between different types of institutions by including dummy variables. We divided the institutions into two groups: the first consisted of institutions that had counseling as their only output, and the second of all other institutions. Under the assumption of a translog form (Christensen et al., 1973), we estimated the following cost function:
ln(C) = D_intg [ a0 + ∑_{i=1}^{m} bi ln(Yi) + ∑_{i=1}^{n} ci ln(Wi)
        + ½ ∑_{i=1}^{m} ∑_{j=1}^{m} bij ln(Yi) ln(Yj) + ½ ∑_{i=1}^{n} ∑_{j=1}^{n} cij ln(Wi) ln(Wj)
        + ∑_{i=1}^{m} ∑_{j=1}^{n} eij ln(Yi) ln(Wj) ]
      + D_amb [ aa0 + ∑_{i=1}^{m} abi ln(Yi) + ∑_{i=1}^{n} aci ln(Wi)
        + ½ ∑_{i=1}^{m} ∑_{j=1}^{m} abij ln(Yi) ln(Yj) + ½ ∑_{i=1}^{n} ∑_{j=1}^{n} acij ln(Wi) ln(Wj)
        + ∑_{i=1}^{m} ∑_{j=1}^{n} aeij ln(Yi) ln(Wj) ]                                  (1)
with a0, bi, ci, bij, cij, eij, aa0, abi, aci, abij, acij, aeij the parameters to be estimated.
We first estimated the cost function under the assumption of the same technology for different types of institutions, by assuming the same parameters for the ambulant institutions as for the integrated institutions. For the specialized institutions we multiplied the parameters of the zero outputs by a dummy variable to make sure they were not estimated for those observations. The restrictions we impose under the assumption of the same technology are:
aa0 = a0, abi = bi, aci = ci, abij = bij, acij = cij, aeij = eij
The estimated cost functions were used to derive economies of scale and economies of scope. The economies of scale were represented by the cost flexibility, which describes the increase in cost relative to the increase in output (Baumol et al., 1988), depending on the type of institution. The cost flexibility is given by:
v = ∑_i ∂ln(C)/∂ln(Yi)
  = D_intg [ ∑_{i=1}^{m} bi + ½ ∑_{i=1}^{m} ∑_{j=1}^{m} bij ln(Yj) + ∑_{i=1}^{m} ∑_{j=1}^{n} eij ln(Wj) ]
  + D_amb [ ∑_{i=1}^{m} abi + ½ ∑_{i=1}^{m} ∑_{j=1}^{m} abij ln(Yj) + ∑_{i=1}^{m} ∑_{j=1}^{n} aeij ln(Wj) ]
Economies of scope were deduced from the marginal costs of the overlapping outputs between specialized and integrated institutions. We determined the marginal costs at varying levels of output to account for differences in scale between institutions. The marginal costs were derived as follows:
mc_j = ∂C/∂Yj = (C/Yj) ∂ln(C)/∂ln(Yj) = [ bj + ∑_{i=1}^{m} bij ln(Yi) + ∑_{i=1}^{n} eji ln(Wi) ] · C/Yj
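The marginal cost expression translates directly into code. In the sketch below the function is ours; the parameter values echo the first column of the parameter table (B1 = 1.35, B11 = 0.06, E11 = -0.01), while the cost and output levels are invented and the logged variables are set to zero, as would hold at the sample means if the variables were mean-normalized (an assumption made for illustration only).

```python
def marginal_cost(C, Yj, bj, b_row, lnY, e_row, lnW):
    """mc_j = (C / Y_j) * [ b_j + sum_i b_ij ln(Y_i) + sum_i e_ji ln(W_i) ].

    b_row: second-order output coefficients b_ij for output j.
    e_row: output-input cross coefficients e_ji."""
    elasticity = (bj
                  + sum(b * y for b, y in zip(b_row, lnY))
                  + sum(e * w for e, w in zip(e_row, lnW)))
    return (C / Yj) * elasticity

# One output, two inputs, evaluated at the (assumed) sample mean where
# all logged variables equal zero, so mc_j reduces to (C/Y_j) * b_j.
mc = marginal_cost(C=100.0, Yj=50.0, bj=1.35,
                   b_row=[0.06], lnY=[0.0],
                   e_row=[-0.01, 0.0], lnW=[0.0, 0.0])
```

Comparing mc across the specialized and integrated parameter sets at the same output level is exactly the scope comparison described above.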
Data
Mental health care institutions in the Netherlands report to the Ministry of Health. The yearly data collected for this purpose over the years 2008-2010 were used in this analysis. We selected institutions dealing with mental health care only, so that, for example, departments of psychiatry within general hospitals were not included. We selected those institutions that had valid and plausible values for all variables. In total 201 observations (institutions per year) remained (59 in 2008, 73 in 2009 and 69 in 2010).
We included four measures of treatment as output variables: the number of counsels, the number of days in residence, the number of part-time treatments and the number of day activities (Table 1). Of these institutions, 32 were specialized in the sense that they only performed counseling. The other institutions performed counseling and at least one of the other treatment activities. None of the institutions was specialized in one of the other activities. We therefore used two groups, which we refer to as specialized and integrated institutions. We used two inputs: personnel, and material and capital, the latter two added together and used as one variable. Costs of the inputs were available, and we used the number of full-time jobs to calculate the price of personnel. The prices of material and capital were based on an index constructed from the Consumer Price Index and a price index for government investments in fixed assets (Statistics Netherlands, www.cbs.nl).
Conclusions
Specialized and integrated mental health care institutions in the Netherlands vary in the way they operate, as shown under the different-technology assumption. We also found indications that increasing the scale of the institutions and integrating counseling into institutions with other types of treatment could increase the productivity of the sector.
The assumption of the same technology did not give plausible estimates because of the negative parameter values of two of the outputs. Instead of using dummy variables, we also tried replacing the zero values with the minima of the non-zero values and estimating the model under the assumption of the same technology for all institutions. The estimates were then very similar to those of the model that
References
Baumol, J., Panzar, J.C., & Willig, R.D. (1988). Contestable markets and the theory of
industry structure Sydney: Marcourt Brace Jovanovich.
Bottasso, A., Conti, M., Piacenz, M., & Vannoni, D. (2011). The appropriateness of the
poolability assumption for multiproduct technologies: Evidence from the English water
and sewerage utilities. International Journal of Production Economics, 130(1), 112-117.
Christensen, L.R., Jorgenson, D.W., & Lau, L.J. (1973). Transcendental Logarithmic
Production Frontiers. The Review of Economics and Statistics, 55(1), 28-45.
Halsteinli, V., Kittelsen, S.A., & Magnussen, J. (2010). Productivity growth in outpatient
child and adolescent mental health services: the impact of case-mix adjustment. Soc
Sci Med, 70(3), 439-446.
Hollingsworth, B., & Street, A. (2006). The market for efficiency analysis of health care
organisations. Health Econ, 15(10), 1055-1059.
Weaver, M., & Deolalikar, A. (2004). Economies of scale and scope in Vietnamese
hospitals. Soc Sci Med, 59(1), 199-208.
Weninger, Q. (2003). Estimating multiproduct costs when some outputs are not
produced. Empirical Economics, 28(4), 753-765.
                               (1)                   (2)                   (3)
Variable            Param.   Coef.   S.E.     t    Coef.   S.E.     t    Coef.   S.E.     t
Constant            A0       -0.59   0.09  -6.88    0.30   0.05   6.03   -1.47   0.16  -9.11
Counseling (y1)     B1        1.35   0.04  31.17    0.25   0.06   4.31    1.40   0.06  22.30
Personnel (w1)      C1        0.71   0.01  65.71    0.70   0.01  61.28    0.55   0.04  15.38
Materiel & cap (w2) C2        0.29   0.01  27.39    0.30   0.01  26.10    0.45   0.04  12.46
Time                A1       -0.03   0.03  -0.95   -0.02   0.02  -1.20    0.05   0.05   1.02
y1 x y1             B11       0.06   0.05   1.38    0.26   0.05   5.41    0.35   0.03  12.03
w1 x w1             C11       0.13   0.05   2.55    0.12   0.07   1.63    0.08   0.08   1.01
w2 x w2             C22       0.13   0.05   2.55    0.12   0.07   1.63    0.08   0.08   1.01
w1 x w2             C12      -0.13   0.05  -2.55   -0.12   0.07  -1.63   -0.08   0.08  -1.01
y1 x w1             E11      -0.01   0.01  -1.41    0.01   0.01   0.82   -0.08   0.02  -5.00
R2                            0.98                  0.99
Abstract
This paper presents a sequential approach to organizational assessment from an
efficiency standpoint. The empirical illustration originates from data referring to the
period 2000-2007 and collected from a sample of 37 libraries affiliated to a federal
university in Rio de Janeiro; this sample covers some 90% of the population. In the
first and second steps efficiency scores computed from estimated DEA (Data
Envelopment Analysis) models are employed to rank DMUs (libraries) as well as to
provide pro-efficiency allocative corrections. The third step presents a long run
evaluation that is accomplished by Markovian analysis through computing the
corresponding equilibrium distribution between (efficiency) states. The Markovian
approach also provides some particular durations – such as mean recurrence times
and first passage times – that allow managerial interpretation. Since DMUs (libraries)
are here classified as “efficient” or “inefficient” according to computed annual scores,
the proposed model is “systemic” to the extent that only aggregate data are
considered.
Keywords: Organizational assessment; efficiency analysis; Markovian analysis; academic libraries - Brazil.
Introduction
This paper presents an optimization approach to organizational evaluation from an
Efficiency Analysis standpoint. The approach combines in a simple way efficiency
scores computed from the estimation of selected Data Envelopment Analysis (DEA)
models and a long run evaluation provided by Markovian analysis. The proposed
*
Corresponding author
Methods
The Efficiency Principle simply states that, when studying the production process in any organization, whenever a production unit uses the same resources but yields greater quantities of output than another unit, it should be considered "relatively more efficient" (i.e., relative to the other unit), no matter how formally the productivity problem is analyzed. Analogously, a unit should be considered "relatively more efficient" if it uses fewer resources and yields the same output. From an analytical standpoint
these properties correspond to evaluating an organizational unit in terms of its
position vis à vis an adequately defined and computed “efficiency frontier”. There is
an established body of knowledge - Data Envelopment Analysis (DEA), a class of
mathematical programming models – with a now long tradition (Emrouznejad, Parker,
Tavares, 2008) of being applied to a broad range of situations involving the analysis of
production frontiers in a multi-unit, multi-input and multi-output framework in such a
way that usual parametric restrictions are absent.
In a seminal methodological paper Tulkens and Vanden Eeckaut (1995) describe and
explain the main issues relating to the role of time in nonparametric efficiency
analysis, especially concerning alternative ways to accommodate empirical
information into reference production sets that will be submitted to efficiency
computations. Of particular interest here (see Table I) is their classification of the
variety of forms whereby the time dimension present in panels may be treated when
investigating observed productive activity.
The paper by Wang and Huang (2007) introduces two models for long run efficiency analysis. However, they do not compute a long run solution in either of
those cases. In addition, although they have modelled and specified the probability of
one-step temporal transition from efficient (resp. inefficient) to inefficient (resp.
efficient) state, there seems to be no indication as to how those probabilities might be
used to compute long run “structural” distributions of the DMUs among the two states
(“efficient” or “inefficient”).
Using direct results from finite ergodic Markov chains (Kemeny, Snell, 1972), and
assuming one (estimated) aggregate transition matrix is available, it is possible to
compute the long run distribution of the “system” (the set of DMUs) between the two
states. This is the purpose of the third step in our model.
The model
Our proposed sequential model consists of three steps. The first two steps – involving
the computation of efficiency scores and of operational plans in turn – are typically
Table I. Treatment of the time dimension (columns: T; VE classification**; DEA condition satisfied*; DEA model)
Notes: *: number of DMUs not less than two (three) times the number of variables.
**: classification of (sample) observed subsets by Tulkens and Vanden Eeckaut (1995)
As soon as a transition matrix is available, long run analysis is possible and will result
from the simple computation of a fixed point for the transition matrix (KEMENY,
SNELL, 1972). This fixed point is a vector containing the long run distribution of the
“system”. Since we do not follow the statistical approach applied by Wang and Huang
(2007), some form of combination must be chosen for the “initial” transition matrix. In
our model we use the following basic result (Kemeny, Snell, 1972): when the number
of time steps grows indefinitely one has
lim_{n→∞} (1/n)(P + P² + … + Pⁿ) = 1π   (2)

where n is the number of steps; Pⁿ = ((p_ij(n))) is the nth power of P, whose
(i, j) element represents the probability of transition from state i to state j after n
steps; 1 = [1 1 … 1]′ is a column vector with all elements equal to 1 (the
apostrophe denotes transposition); and π is a constant row vector containing the long
run equilibrium distribution between states, whose components are nonnegative and
sum to 1 (a so-called probability vector). The "finite mean" on the left-hand side of
(2) suggests a way to estimate a single matrix from the seven available and then to
compute the long run distribution corresponding to this "averaging matrix".
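Under these assumptions, the long run distribution π can also be obtained numerically as the fixed point of the transition matrix. The following sketch (not the authors' code) extracts π as the left eigenvector of P associated with the eigenvalue 1:

```python
import numpy as np

def long_run_distribution(P):
    """Stationary distribution pi of an ergodic transition matrix P:
    the fixed point pi @ P = pi with components summing to 1."""
    vals, vecs = np.linalg.eig(P.T)      # left eigenvectors of P
    k = np.argmin(np.abs(vals - 1.0))    # eigenvalue closest to 1
    pi = np.real(vecs[:, k])
    return pi / pi.sum()                 # normalize to a probability vector
```

For an ergodic chain this π is unique and coincides with the Cesàro limit in (2).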
Table (fragment): descriptive statistics of the variables (Min, Max, Mean, Standard
deviation, Coefficient of variation).
Given that we are working with contemporaneous reference sets (see Tulkens and
Vanden Eeckaut, 1995, and Table I), data for 2000-2007 allow us to obtain 7 transition
matrices, say P1, P2, …, P6, P7. In order to apply the finite sum approach, we
employ successive products of yearly transition matrices, instead of powers of the
same (initial or otherwise chosen) transition matrix. Therefore we can take the seven-
factor average matrix A defined as
A = (P1 + P1P2 + P1P2P3 + … + P1P2P3…P6P7) / 7   (3)
as a good candidate to be used when solving the fixed point problem, since it
incorporates more information than each individual matrix, in addition to being a
good picture of the successive one-step, two-step, up to seven-step transitions, in the
Note. *: All DMUs with scores equal to 1 for the whole period have been excluded
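As an illustration, the average matrix A of equation (3) can be computed from the yearly transition matrices as the mean of their cumulative products; `averaging_matrix` is a hypothetical helper, not the authors' implementation:

```python
import numpy as np

def averaging_matrix(transition_matrices):
    """Average matrix A of equation (3): the mean of the cumulative
    products P1, P1P2, ..., P1P2...P7 of the yearly transition matrices."""
    n = transition_matrices[0].shape[0]
    prod = np.eye(n)
    A = np.zeros((n, n))
    for P in transition_matrices:
        prod = prod @ P              # P1 P2 ... Pt after t factors
        A += prod
    return A / len(transition_matrices)
```

Since each cumulative product of row-stochastic matrices is itself row-stochastic, A is a valid transition matrix and can be fed to the fixed point computation.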
Conclusions
Upon assuming the Efficiency Principle as a guideline to organizational evaluation,
this paper presented a model for organizational assessment in the short and long runs
by combining two approaches – the DEA approach to efficiency analysis and the
Markovian assumption that introduces a long run perspective. Results have shown that
the three step model uncovers quantitative aspects that may be of assistance to
managers committed to efficiency in the short and long runs. For long run assessment
we rely on Markov Chains to compute an aggregate measure of the distribution of the
productive system (the “organization”) between two states – efficient or inefficient.
References
Emrouznejad A., B. Parker; G. Tavares (2008) Evaluation of research in efficiency and
productivity: a survey and analysis of the first 30 years of scholarly literature in DEA,
Socio-Economic Planning Sciences 42 (3): 151-157.
Kemeny J. G., J. L. Snell (1972) Mathematical Models in the Social Sciences.
Cambridge, Mass.: The MIT Press.
Tulkens H., P. Vanden Eeckaut (1995) Non-parametric efficiency, progress and regress
measures for panel data: methodological aspects, European Journal of Operational
Research 80 (3): 474-499.
Vakkuri J. (2003) Research Techniques and Their Use in Managing Non-profit
Organizations – an illustration of DEA analysis in NPO environments, Financial
Accountability and Management 19 (3): 243-263.
Wang M. H., T. H. Huang (2007) A study on the persistence of Farrell’s efficiency
measure under a dynamic framework, European Journal of Operational Research 180
(3): 1302-1316.
Abstract
The purpose of this article is to analyze the efficiency of the industrial sectors in Brazil
from 1996 to 2009, considering their contributions to Brazil's sustainable
development. To this end, we used Data Envelopment Analysis (DEA), which, through
the additive model and window analysis, enabled us to evaluate the ability of
industries to reduce environmental impacts and increase social and economic
benefits. The results of this study indicate that the Textile sector is the most efficient
industrial sector in Brazil in terms of contributing to sustainable development,
followed by the Foods and Beverages, Chemical, Mining, Nonmetallic, Paper and Pulp
and Metallurgical sectors.
Keywords: Industrial Sectors, Data Envelopment Analysis, Sustainable Development.
Introduction
The data presented in the last Intergovernmental Panel on Climate Change (IPCC,
2007) indicate that global warming is largely due to human activity, especially human-
caused CO2 emissions. Along these lines, fossil fuel burning has been shown to be
responsible for approximately 85% of all anthropogenic CO2 emissions produced
yearly.
Silva and Guerra (2009) explain that the use of fossil fuels has driven the world
economy since the Industrial Revolution, with energy being an essential component
of a nation's social and economic development and its supply an essential
prerequisite for human activities.
*
Address correspondence to Flávia de Castro Camioto, Department of Production Engineering,
University of São Paulo, Trabalhador São-Carlense, 400, São Carlos, SP, 13566-590 Brazil. Phone: +55 16
3373 9428, Fax: +55 16 3373 9425. E-mail: flaviacamioto@yahoo.com.br
Methods
To reach this goal, a mathematical programming method called Data Envelopment
Analysis (DEA) was used. This method, based on the additive model and on window
analysis, enabled analyzing the performance of Brazil's industrial sectors in reducing
energy consumption and CO2 emissions from fossil fuels (inputs) while increasing
sector GDP, persons employed and personnel expenses (outputs).
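The additive DEA model referred to above can be sketched as a linear program that maximizes total input and output slacks; this is a generic textbook additive model under variable returns to scale, not necessarily the authors' exact specification:

```python
import numpy as np
from scipy.optimize import linprog

def additive_dea(X, Y, j):
    """Additive (slacks-based) DEA model for DMU j:
    maximize sum of slacks s_minus + s_plus subject to
    sum lam*x + s_minus = x_j, sum lam*y - s_plus = y_j, sum lam = 1."""
    n, I = X.shape
    R = Y.shape[1]
    nv = n + I + R                       # variables: [lam, s_minus, s_plus]
    c = np.zeros(nv)
    c[n:] = -1.0                         # maximize total slack
    A_eq = np.zeros((I + R + 1, nv))
    b_eq = np.zeros(I + R + 1)
    A_eq[:I, :n] = X.T                   # input constraints
    A_eq[:I, n:n + I] = np.eye(I)
    b_eq[:I] = X[j]
    A_eq[I:I + R, :n] = Y.T              # output constraints
    A_eq[I:I + R, n + I:] = -np.eye(R)
    b_eq[I:I + R] = Y[j]
    A_eq[-1, :n] = 1.0                   # convexity: sum lam = 1
    b_eq[-1] = 1.0
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * nv)
    return -res.fun                      # total slack; 0 means efficient
```

A DMU is efficient in the additive sense when its maximal total slack is zero.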
For this research, the main Brazilian industrial sectors were selected: those for which
the National Energy Balance (BEN) and the Brazilian Institute of Geography and
Statistics (IBGE) provide data; due to the lack of available information from IBGE,
some sectors were grouped. Thus, for this work, the spatial delimitation of the
Brazilian industry includes: (a) Nonmetallic, which corresponds to the cement and ceramics
Metallurgical          1.99  2.23  2.30  2.20  2.44  2.63  2.61  2.34  0.10
Paper and Pulp         0.60  0.70  0.74  0.76  0.78  0.88  0.90  0.76  0.01
Nonmetallic            0.66  0.73  0.75  0.75  0.75  0.82  0.80  0.75  0.01
Mining                 0.41  0.45  0.46  0.41  0.36  0.33  0.32  0.39  0.03
Chemical               0.14  0.29  0.27  0.27  0.27  0.31  0.32  0.26  0.04
Foods and Beverages    0.06  0.19  0.22  0.18  0.13  0.22  0.17  0.17  0.03
Textile                0.01  0.02  0.01  0.01  0.01  0.02  0.02  0.02  0.0004
Second to last in the efficiency ranking of Brazil's industrial sectors is the Paper and
Pulp sector, whose efficiency, like that of the Metallurgical sector, decreased as the
oldest years were excluded from the analysis and the most recent ones were
contemplated in the windows, as shown in Table 2. It is noteworthy, however, that
the variability of this sector was significantly lower than that of the Metallurgical
sector, with a standard deviation equal to 0.01.
Third in the inefficiency ranking is the Nonmetallic sector. This sector also became
more inefficient as the oldest years were excluded and the most
Conclusions
Adopting the concept of sustainability requires considering not only the economic
dimension but also social and environmental variables, in order to achieve a better
spread of the gains acquired from the use of natural resources with minimum
damage to the planet and to humanity. A fair development process demands the
interaction of the sustainability dimensions so as to harmonize the different interests
involving economic growth and the social and ecological perspectives.
This article therefore analyzed, through the concept of efficiency, the contribution of
seven Brazilian industrial sectors to sustainable development, addressing the three
basic pillars of the triple bottom line: economic, social and environmental
development. Thus, we analyzed the efficiency of the Brazilian industrial sectors from
1996 to 2009 in reducing the inputs "energy consumption" and "CO2 emissions from
fossil fuels" while simultaneously increasing the outputs "sector GDP", "persons
employed" and "personnel expenses". The outcome of this study
showed that the Textile sector is the most efficient in terms of contribution to the
References
BEN - Balanço Energético Nacional, 2010. Divulga informações relativas ao binômio
oferta consumo de fontes de energia. Available at: <https://ben.epe.gov.br>.
(Accessed 04.11.10).
Hamzah, M.O., Jamshidi, A., Shahadan, Z., 2010. Evaluation of the potential of
Sasobit® to reduce required heat energy and CO2 emission in the asphalt industry.
Journal of Cleaner Production 18: 1859-1865.
Mao, J., Du, Y., Xu, L., Zeng, Y., 2011. Quantification of energy related industrial eco-
efficiency of China. Frontiers of Environmental Science and Engineering in China. doi:
10.1007/s11783-010-0289-8.
Schneider, M., Romer, M., Tschudin, M., Bolio, H., 2011. Sustainable cement
production - present and future. Cement and Concrete Research 41(7): 642-650.
Silva, F. I. A., Guerra, S. M. G., 2009. Analysis of the energy intensity evolution in the
Brazilian industrial sector - 1995 to 2005. Renewable and Sustainable Energy Reviews
13(9): 2589-2596.
Simões, A., La Rovere, E. L., 2008. Energy Sources and Global Climate Change: The
Brazilian Case. Energy Sources Part A: Recovery, Utilization & Environmental Effects
30: 1327-1344.
Tomasula, P. M., Nutter, D. W., 2011. Mitigation of Greenhouse Gas Emissions in the
Production of Fluid Milk. Advances in Food and Nutrition Research 62: 41-88.
Wernet, G., Mutel, C., Hellweg, S., Hungerbühler, K., 2011. The Environmental
Importance of Energy Use in Chemical Production. Journal of Industrial Ecology 15(1):
96-107.
ABSTRACT
This paper analyzes public administration efficiency in reducing infant mortality.
Many Brazilian regions have low coverage of sanitation systems. This supply deficit is
directly associated with under-five mortality; poor quality in the services supplied has
a similar effect. Thus, this paper analyzes the efficiency of the Brazilian states in the
management of sanitation. To do this, the primary objective of sanitation was taken
to be improving the population's welfare, expressed as the number of children per
thousand born who survive past five years of age. As a result, it was found that all the
states in the South region are efficient, while the Southeast states are not as good as
these but have scores close to 100%. Northeast states show low efficiency. In the
North, most of the units were inefficient, but better than the Northeast units. When
comparing these results with the infant mortality indicator, it is clear that in the
Northeast the scarcity of infrastructure is compounded by inefficiency, a perverse
combination that gives the region the highest Under Five Mortality Rate nationally.
Keywords: Sanitation, Infant Mortality, Efficiency, DEA.
Introduction
Services of Water Supply and Sewage Collection (SWSSC) in many developing
countries are quite impaired and very far from achieving universal coverage (Turolla,
2002; Rivera, 1996). Over 2.6 billion people worldwide lack access to an adequate
sanitation system and approximately 900 million people lack access to safe drinking
water (WHO, 2010).
According to the World Health Organization, inadequate sanitation and water is a
major mortality risk. It also has the additional adverse feature of being concentrated
in low-income regions - 99% of its occurrence is in developing countries. It is
associated with a group of five risk factors that together account for 25%
Figure: Under Five Mortality Rate (U5MR), per thousand, by region (Norte, Nordeste,
Sudeste, Sul, Centro-Oeste) and for Brazil, 2000-2007.
Source: elaborated by the author by means of data extracted from DATASUS (2007)
There is a strong correlation between the indicators DU and DBI. This may be
because more urbanized regions hold greater attractive power, so it is natural that
physicians concentrate in these areas. There is also a strong correlation between DBI
and GDPPC, which may occur for the same reason: the richest regions can attract
more doctors, both in the private and in the public
It is also observed, from the table, that the correlation between WS and U5SR is low,
despite being significant. This may indicate that the coverage of water supply systems
has no significant effect on infant mortality, or that these systems are operating
inefficiently in many states, which possibly reduces the positive effect. The second
option is more likely, and hence the indicator will be retained in the model.
The main object of study is the efficiency of sanitation in health promotion, so the
model used in the research, called Model 1, has WS, SC and DU as inputs. The
method used was the BCC model, with an output orientation: maximization of the
product, given the levels of available inputs.
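An output-oriented BCC model of this kind can be sketched as a linear program; the data layout and variable names are illustrative, not the paper's actual dataset:

```python
import numpy as np
from scipy.optimize import linprog

def bcc_output_score(X, Y, j):
    """Output-oriented BCC (VRS) score for DMU j.
    X: (n, I) array of inputs, Y: (n, R) array of outputs.
    Returns theta >= 1; theta == 1 means DMU j is efficient."""
    n, I = X.shape
    R = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = -1.0                      # variables [theta, lambda]; maximize theta
    A_ub = np.zeros((R + I, n + 1))
    b_ub = np.zeros(R + I)
    A_ub[:R, 0] = Y[j]               # theta*y_rj - sum_k lam_k*y_rk <= 0
    A_ub[:R, 1:] = -Y.T
    A_ub[R:, 1:] = X.T               # sum_k lam_k*x_ik <= x_ij
    b_ub[R:] = X[j]
    A_eq = np.zeros((1, n + 1))
    A_eq[0, 1:] = 1.0                # convexity: sum lam = 1 (VRS)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]
```

A score θ = 1 marks an efficient state; θ > 1 measures how much the outputs could be expanded given the available inputs.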
Table (fragment): correlations SOB5 1.0000; CA 0.3911 1.0000; 10 TO 99.46 973.06.
Source: elaborated by the author with the use of Eviews 7.0; adjusted R² = 0.6
It is noteworthy, however, that the PCWA used here may not measure the situation
well, because the water shortage problem is a micro-regional one, and aggregating by
states, this information may not be well captured by the indicator. The attention
Conclusion
In this study, we analyzed the technical efficiency of the state governments in
administering sewerage, which is directly associated with reduced infant mortality.
For this analysis, we used the output-oriented DEA methodology. As a result, the
states with 100% technical efficiency, five in total, are spread over three geographic
regions of the country: two in the South, one in the North and two in the Midwest. In
the North, states were generally not efficient, but achieved efficiency scores above
those of the northeastern states, even with inferior infrastructure, whether measured
by coverage of basic sanitation, number of physicians or per capita income. This
result clearly shows the inefficiency of the Northeast states, which gives them the
highest U5MR.
Thus, we perceive the need for quality improvement in the provision of sanitation
services, for such services are not limited to supplying water and sewage connections
to citizens. Better control over the quality of this provision is essential, since
comparing the effectiveness of services between the states reveals inequality,
especially among geographic regions, which further contributes to the social
inequality in the country.
References
BLACK, R. E.; MORRIS, S. S.; BRYCE, J. (2003). Where and why are 10 million children
dying every year? The Lancet 361: 2226-2234.
UNICEF (2009).The States of The World’s Children 2009. UNICEF, New York.
WORLD HEALTH ORGANIZATION – WHO (2009). Global health risks: mortality and
burden of disease attributable to selected major risks. WHO Library, Switzerland.
WHO; UNICEF (2000). Global Water supply and sanitation assessment 2000 report.
Switzerland.
Abstract
One of the most important problems concerning non-parametric frontiers is the
detection of outliers, influential information that distorts statistical analysis. This topic
has been present for a long time in the literature of non-parametric statistics. Wilson
(1995) used the test proposed by Andrews and Pregibon (1978) to apply one of the
first empirical statistical tests to detect outliers on convex sets of non-parametric
frontiers. Since then, a series of proposals for the detection and treatment of outliers
has emerged in the field of non-parametric frontiers. The purpose of this research is to
highlight three methods of outlier detection (Wilson, 1995; Simar, 2003; Sousa and
Stosic, 2005) and to determine which one is more efficient in statistical terms, i.e. has
lower bias and variance.
Keywords: Outlier Detection Methods; Non-Parametric Frontiers; Data Envelopment
Analysis; mask effect; Monte-Carlo simulation.
Introduction
One of the most important problems concerning non-parametric frontiers is
protecting their borders from outliers, influential observations that distort statistical
analysis. Any Outlier Detection Method (ODM) must help the researcher separate the
wheat from the weeds; because the two often grow together, it is frequently very
hard to tell whether an observation belongs to a particular bunch of data, and a very
keen eye would be needed to do it right without the help of detection procedures.
ODMs have been present for a long time in the non-parametric statistics literature.
Wilson (1995) used the test proposed by Andrews and Pregibon (1978) to apply one
of the first empirical statistical tests to detect outliers in convex sets of non-parametric
frontiers.
Since then, a series of proposals for the detection and treatment of outliers has
emerged in the field of non-parametric frontiers. The purpose of this research is to
highlight three methods of outlier detection: the already mentioned Wilson (1995),
and also Simar (2003) and Sousa and Stosic (2005). The purpose is to answer which
one is more efficient in statistical terms, lower bias and variance.
Methods
Three outlier detection methods (ODM) applied to non-parametric frontiers will be
studied: the AP-Wilson statistic, named APW for short; the Simar Test (ST) statistic,
developed from the approach of Cazals, Florens and Simar (2002) and Simar (2003);
and the Sousa-Stosic (SS) statistic developed by Sousa and Stosic (2005). These three
methods take considerably creative and different approaches to the problem of
detecting outliers.
These three statistics are:

SS: ςᵢ = Σ_{s=1, s≠i}^{n} (λ_{s,i} − λ_s)² / (n − 1)   (3)
Where X* is the [X Y] matrix of information, with X representing the set of inputs and
Y representing the set of outputs. The set of decision making units (DMUs) is
S = {1, 2, …, n}, where i represents any particular observation from S. L is
a subset of S (L ⊂ S) and det[X*′X*]_{i(S−L)} represents the determinant of X*′X* for
one particular i in S without the subset L.
Simar's test statistic is just the efficiency index (λᵢ), usually obtained from the distance
from DMU i to its frontier projection i′. Here x0 and y0 are the values of x and
y for one arbitrary point, m is the sample size (trimming parameter), and b is the
bootstrap index of one particular replication, with maximum B. In this paper, for the
sake of simplicity, m = B = 100. Also, j is the number of outputs in the Y variable set
(n × p), where n is the maximum number of observations pertaining to S and p is the
number of outputs.
Usually the Shephard efficiency λ = 1.00 indicates maximum efficiency and λ > 1.00
indicates inefficiency. But in this case, with a trimming parameter, it is possible to
obtain super-efficiency (λ < 1.00); values of λ ± δ (the efficiency index plus or minus a
small delta) are used to indicate the outliers.
ςᵢ is the SS statistic mentioned above, and s is any element of the set S described
above. λ_{s,i} is the efficiency index without the information of subject i, also
belonging to S (s ≠ i). If DMU i is influential, the sum of (λ_{s,i} − λ_s)² will be positive and greater
91 0.004 0.100
92 0.025 0.190
93 0.035 0.240
94 0.100 0.700
95 0.500 0.900
96 0.700 1.000
97 0.750 1.050
98 1.000 1.000
99 1.200 1.000
To compare the different ODMs, we constructed a statistic called ρ*, based on the
simple proportion of all outliers detected by the method (ωᵢ) over all 'true' outliers
(Ω):

ρ* = (1 − Σ_{i=1}^{n} ωᵢ / n) ⋅ (Σ_{i=1}^{n} Tᵢ ωᵢ / Ω)   (5)

The term in the first parenthesis is a penalty for over-identification. In the limit, a
weak threshold could identify all 'true' outliers by flagging all observations as
outliers. To avoid this, the penalty was suggested: if one ODM detects all n
ST 1.085 1.095
SS 0.020 0.121
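The ρ* statistic of equation (5) is straightforward to compute; the following sketch assumes 0/1 indicator vectors for flagged and 'true' outliers:

```python
import numpy as np

def rho_star(flagged, true_outliers):
    """rho* of equation (5): share of 'true' outliers detected,
    scaled by a penalty (1 - flagged/n) against over-identification."""
    omega = np.asarray(flagged, dtype=float)       # 1 where the ODM flags
    T = np.asarray(true_outliers, dtype=float)     # 1 where truly an outlier
    n = omega.size
    penalty = 1.0 - omega.sum() / n
    hit_rate = (T * omega).sum() / T.sum()         # detected / Omega
    return penalty * hit_rate
```

With n = 100 and the ten true outliers flagged exactly, ρ* reaches its maximum of 0.9; flagging every observation drives ρ* to zero.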
Comparing three different methods with only one database is insufficient to build
confidence intervals and pronounce on the reliability of the indicators. One way to
compare the three methods empirically is Monte Carlo simulation. The simulation
was composed of 2000 replications of equation (4) with ninety observations each
(DGP), with the ten outliers kept fixed.
In the first procedure, the outliers were added one by one until they summed to 10.
This procedure measures each method's capacity to identify all outliers and to
control mask effects. The routine is:
• Run equation (6) and obtain 90 observations for each replication.
• Append to each replication the first outlier kept fixed (from Table 1).
• Repeat steps 1 and 2 2000 times.
• Save the mean and sampled standard deviation (σ) of the ρ* statistic.
• Obtain the confidence interval (CI) with α = 5% (type I error, assuming a normal
distribution).
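The last two steps of this routine can be sketched as follows; the interval uses mean ± 1.96σ with the sampled standard deviation, which is consistent with the half-widths visible in the results tables:

```python
import numpy as np

def rho_confidence_interval(rho_samples, z=1.96):
    """Normal-approximation CI for rho* over Monte Carlo replications:
    mean +/- z*sigma, with sigma the sampled standard deviation."""
    rho = np.asarray(rho_samples, dtype=float)
    mean = rho.mean()
    sigma = rho.std(ddof=1)          # sampled standard deviation
    return mean - z * sigma, mean + z * sigma
```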
² If the ODM is really unbiased, the term in the second parenthesis will be 1.00, and
with n = 100 the first penalty term cannot exceed 0.9; therefore, the maximum ρ* is
0.9 with ten outliers. For only one outlier this maximum is 0.99.
91 0.000 0.000 0.000 0.000 0.728 0.037 0.656 0.800 0.883 0.268 0.358 1.407
92 0.000 0.000 0.000 0.000 0.738 0.036 0.667 0.810 0.869 0.197 0.483 1.254
93 0.000 0.000 0.000 0.000 0.738 0.035 0.669 0.808 0.583 0.123 0.342 0.824
94 0.206 0.006 0.194 0.218 0.845 0.027 0.791 0.898 0.485 0.003 0.480 0.490
95 0.335 0.010 0.315 0.355 0.888 0.023 0.842 0.934 0.579 0.003 0.573 0.585
96 0.423 0.012 0.399 0.447 0.903 0.018 0.868 0.939 0.484 0.001 0.482 0.486
97 0.485 0.012 0.461 0.509 0.903 0.015 0.873 0.933 0.415 0.001 0.413 0.417
98 0.530 0.013 0.505 0.555 0.893 0.016 0.862 0.924 0.363 0.001 0.362 0.365
99 0.564 0.015 0.535 0.593 0.883 0.015 0.853 0.913 0.323 0.001 0.322 0.324
100 0.589 0.019 0.552 0.626 0.875 0.015 0.846 0.904 0.325 0.045 0.236 0.414
91 0.000 0.000 0.000 0.000 0.726 0.037 0.653 0.799 0.899 0.243 0.422 1.375
92 0.000 0.000 0.000 0.000 0.731 0.037 0.659 0.803 0.954 0.097 0.765 1.144
93 0.000 0.000 0.000 0.000 0.737 0.038 0.661 0.812 0.966 0.011 0.945 0.987
94 0.814 0.026 0.764 0.864 0.831 0.033 0.766 0.895 0.972 0.007 0.959 0.986
95 0.805 0.027 0.753 0.858 0.781 0.034 0.714 0.849 0.969 0.010 0.949 0.990
96 0.804 0.027 0.752 0.857 0.760 0.034 0.692 0.827 0.968 0.010 0.950 0.987
97 0.805 0.026 0.755 0.856 0.758 0.036 0.687 0.829 0.969 0.010 0.949 0.988
98 0.792 0.026 0.741 0.843 0.713 0.038 0.638 0.788 0.738 0.408 -0.062 1.537
99 0.790 0.026 0.740 0.840 0.714 0.040 0.636 0.791 0.277 0.435 -0.576 1.131
100 0.795 0.025 0.746 0.844 0.713 0.039 0.637 0.788 0.954 0.101 0.755 1.152
Conclusions
It is hard to give a conclusive answer to the question "Which ODM is better?" There is
some indication that Simar's (2003) method is the best one for detecting bunches of
outliers, and does so with great confidence, given its small intervals and small
misidentification. With the correct threshold, SS is the best one at detecting outliers
91, 92 and 93 in the two Monte-Carlo procedures and the most powerful method to detect
References
Andrews, D., D. Pregibon (1978) Finding the Outliers that Matter, Journal of the Royal
Statistical Society, series B, 40 (1): 85-93.
Acknowledgements
The authors thank the Centro de Estudos de Políticas Públicas of Fundação João
Pinheiro (CEPP/FJP) for technical and financial support for the presentation at the
10th DEA international conference. Like many other Operational Research
professionals, we also regret the passing of Professor W. W. Cooper, one of those
who made the DEA meetings possible. Any remaining flaws are the sole responsibility
of the authors, not of their institutions and collaborators.
Abstract
In 2011 the Brazilian Electricity Regulator (ANEEL) implemented a benchmarking
model to evaluate the operational efficiency of the electrical power utilities in Brazil.
This model is based on two benchmarking methods widely applied by other
regulators: Data Envelopment Analysis (DEA) and Corrected Ordinary Least Squares
(COLS). The aim of this paper is to identify the causes of the discrepancies between
the results obtained by the DEA and COLS models and also to discuss the regulator's
use of a non-decreasing returns to scale (NDRS) DEA model. It is shown that the
differences between the parametric (COLS) and non-parametric (DEA) models are
mainly due to the unsuitability of the Cobb-Douglas model as a cost function, to the
effect of the sample size, which shifts the COLS efficiency scores towards smaller
values, and to the NDRS type of the DEA model.
Introduction
On September 10, 2010, the Brazilian National Electric Energy Agency (ANEEL)
opened a debate with society on the rules and methodologies for defining the
revenues of electricity distribution utilities in the 3rd Periodic Tariff Review Cycle
(3PTRC), through public hearing 040/2010 (AP040). By means of technical note
265/2010 the regulator proposed a full review of the model which calculates
regulatory operational costs. The definition of efficient operating costs is a central
point in incentive regulation, because it has been chosen for regulating natural
Methodology
The most commonly used benchmarking frontier techniques include Data
Envelopment Analysis (DEA), Corrected Ordinary Least Squares (COLS) and Stochastic
Frontier Analysis (SFA) (Haney and Pollitt 2009). DEA is a non-parametric method
which requires the assumption of an increasing and concave production function
(Banker, Charnes and Cooper 1984). It provides a very flexible functional form, robust
to misspecification errors. Both SFA and COLS are parametric methods which require
the specification of a functional mathematical form, such as Cobb-Douglas or translog
λ_k ≥ 0, ∀k = 1,…,N

θ_j^VRS = max θ
s.t.  Σ_{k=1}^{N} λ_k y_rk ≥ θ y_rj,  ∀r = 1,…,R;
      Σ_{k=1}^{N} λ_k x_ik ≤ x_ij,  ∀i = 1,…,I;      (2)
      Σ_{k=1}^{N} λ_k = 1,  λ_k ≥ 0,  ∀k = 1,…,N

θ_j^NDRS = max θ
s.t.  Σ_{k=1}^{N} λ_k y_rk ≥ θ y_rj,  ∀r = 1,…,R;
      Σ_{k=1}^{N} λ_k x_ik ≤ x_ij,  ∀i = 1,…,I;      (4)
      Σ_{k=1}^{N} λ_k ≥ 1,  λ_k ≥ 0,  ∀k = 1,…,N
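Model (4) can be sketched as a linear program; this is a generic output-oriented NDRS model, not ANEEL's implementation:

```python
import numpy as np
from scipy.optimize import linprog

def ndrs_output_score(X, Y, j):
    """Output-oriented NDRS DEA score for DMU j (model (4)).
    X: (N, I) inputs, Y: (N, R) outputs; theta >= 1, 1 = efficient."""
    N, I = X.shape
    R = Y.shape[1]
    c = np.zeros(N + 1)
    c[0] = -1.0                                   # variables [theta, lam]; max theta
    rows, rhs = [], []
    for r in range(R):                            # theta*y_rj - sum lam*y_rk <= 0
        rows.append(np.r_[Y[j, r], -Y[:, r]])
        rhs.append(0.0)
    for i in range(I):                            # sum lam*x_ik <= x_ij
        rows.append(np.r_[0.0, X[:, i]])
        rhs.append(X[j, i])
    rows.append(np.r_[0.0, -np.ones(N)])          # sum lam >= 1 (NDRS)
    rhs.append(-1.0)
    res = linprog(c, A_ub=np.array(rows), b_ub=rhs,
                  bounds=[(0, None)] * (N + 1))
    return res.x[0]
```

The only difference from the VRS model (2) is that the convexity equality Σλ = 1 is relaxed to Σλ ≥ 1.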
ANEEL Proposal
ANEEL proposed, in a first stage of the public hearing (040/2010), two DEA models
organized in two stages. Both models for the first stage used the operational cost as
their input variable and the network extension (km) as the output variable. Model 1
includes the number of customers as the second output variable, while model 2
includes the energy consumption (MWh) as the second output variable. Figure 1
shows the inputs and outputs of each model. Both models assume a non-decreasing
returns to scale (NDRS).
Inputs Outputs
Figure 1. Proposed model by ANEEL in the first stage of the Public Hearing 040
The data used were from 2003-2010. The electrical distribution companies were split
into two groups. Group A is composed of companies with an annual energy
consumption greater than 1 Terawatt-hour (TWh), whereas group B is composed of
companies with an annual energy consumption below 1 TWh. The DEA method was
applied separately to each group in order to estimate the final efficiency scores.
After obtaining the efficient operational cost for each energy distribution company,
ANEEL proposed to adjust these scores for environmental variables in a two-stage
model.
Figure 3. Differences in percentage between COLS and DEA scores for Group A.
Figure 4. Differences in percentage between COLS and DEA scores for Group B.
Syrjanen, Bogetoft and Agrell (2006) pointed out this problem in their report to the
Finnish regulator [1]: "the Cobb-Douglas function is not a cost function". After
concluding that the linear function has heteroscedasticity problems that can be
solved by using a log-linear function (Cobb-Douglas), the authors argue that the latter
has significant conceptual problems and suggest not using it in the regulatory cost
modeling for Finnish distributors: "Due to conceptual problems related to the log-
linear model we suggest discarding this model" (p. 50).
Figure 6 shows that although, on the log scale, the Cobb-Douglas function seems to
fit the data properly (left panel), in the original space it produces a curve bending in
the opposite direction (blue curve) from what would be expected of a cost function
(pink curve). Therefore, the Cobb-Douglas form is not appropriate as a cost
function.
Table 1: Confidence intervals for the efficiency score of Cemig using the
adjusted multiple regression model
Description Efficiency Score
Estimated Value 0.4519956
Lower Limit 0.3412855
Upper Limit 0.5986191
Figure 8. Scatter plots of DEA and COLS efficiency scores for groups A (a) and B (b).
Table 2 presents the estimated values of the parameter α for groups A and B, with
the respective 95% and 99% confidence intervals. The value α = 1 is not included in
any of the confidence intervals. From these results it can be concluded that the DEA
and COLS efficiency scores are not statistically similar. On average, for Group A, the
efficiency scores calculated by COLS are 10.33% smaller than the DEA efficiency
scores. For Group B the efficiency scores calculated by COLS are, on average, 21.66%
smaller than those generated by DEA.
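One plausible reading of this comparison (the table labels α an "intercept", but the reported percentage differences match the slope of a no-constant regression of COLS scores on DEA scores) can be sketched as:

```python
import numpy as np

def slope_through_origin(dea, cols, z=1.96):
    """OLS of cols = alpha * dea with no constant; returns alpha and an
    approximate 95% confidence interval, so alpha = 1 can be checked."""
    dea = np.asarray(dea, dtype=float)
    cols = np.asarray(cols, dtype=float)
    alpha = (dea @ cols) / (dea @ dea)             # least-squares slope
    resid = cols - alpha * dea
    s2 = (resid @ resid) / (dea.size - 1)          # residual variance
    se = np.sqrt(s2 / (dea @ dea))                 # standard error of alpha
    return alpha, (alpha - z * se, alpha + z * se)
```

If the interval excludes 1, the two sets of scores are declared statistically different.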
Group B: estimated value for the intercept α = 0.78342
Conclusions
The evaluation and implementation of benchmarking models are certainly a difficult
task, primarily due to the difficulty of getting the best data and then due to the wide
range of possibilities for methodologies and modeling. However, the available
techniques have been already applied for several years and the literature on the
subject is quite wide, having been already discussed the main problems and benefits
associated with each tool.
In order to determine the best method, it is important to consider the problem to be
solved, but it is also essential to avoid methodological problems that might disqualify
the results.
It is known that the goal of mathematical and statistical models is, through simplified
simulations, capture the most important aspects of reality. To consider all the variables
The purpose of this paper was to demonstrate some problems of the model presented
by ANEEL. As demonstrated, the model should be improved, but, in addition, there
must be a more complete study to identify the main variables that explain the costs in
the electricity distribution service in Brazil.
Acknowledgments
This work was supported by Fundação de Amparo à Pesquisa de Minas Gerais –
FAPEMIG, Project PPM-00543-11 and Fundação de Amparo à Pesquisa de Minas
Gerais – FAPEMIG, Agência Nacional de Energia Elétrica- ANEEL and Centrais Elétricas
de Minas Gerais – CEMIG, Project SHA-APQ-03165-11.
References
AGRELL, P. J.; BOGETOFT, P.; TIND, J. DEA and Dynamic Yardstick Competition in
Scandinavian Electricity Distribution. Journal of Productivity Analysis, v.23, n.2, p. 173-
201, 2005.
AROCENA, P. Cost and quality gains from diversification and vertical integration
in the electricity industry: A DEA approach. Energy Economics, v.30, n.1, p. 39-58,
2008.
BANKER, R.D.; CHARNES, A.; COOPER, W. W. Models for the estimation of technical
and scale inefficiencies in Data Envelopment Analysis. Management Science, v.30, p.
1078-1092, 1984.
BOGETOFT, P., OTTO, L. Benchmarking with DEA, SFA and R. Springer Science,
2011.
CEPA (2003). Background to work on assessing efficiency for the 2005 distribution price
control review. Scoping study, final report, prepared for the UK Office of Gas and
Electricity Markets (Ofgem). Cambridge Economic Policy Associates. Available at:
www.ofgem.gov.uk.
ESTELLITA LINS, M. P.; SOLLERO, M. K. V.; CALÔBA, G. M. Integrating the regulatory
and utility firm perspectives when measuring the efficiency of electricity distribution.
European Journal of Operational Research, v. 181, p. 1413-1424, 2007.
NERA. Commentary on CEPA Benchmarking Paper, A Report for EDF Energy. 2003.
Valerio A. P. Salomon
Department of Production Engineering at Sao Paulo State University (UNESP),
salomon@feg.unesp.br
Abstract
We evaluate the efficiency of the primary public health service in a typical Brazilian
medium-sized town. This paper describes an integration of the Network DEA model, the
Nash bargaining model, and the BSC. Network DEA allows establishing a hierarchical
structure that corresponds to the BSC perspectives by adopting a sequence of stages
in which a set of indicators at one stage impacts the subsequent stage. We used the
Nash bargaining model to negotiate the desired input-output levels from one stage to
the next. Public health services were compared, and benchmarks were identified for
the inefficient services.
Keywords: Network DEA, Game Theory, BSC, Bargaining Problem.
Introduction
Many researchers have focused on the measurement of efficiency in healthcare.
According to Emrouznejad et al. (2008), DEA (Data Envelopment Analysis) has been
applied to the evaluation of health service performance with good results. Various DEA
approaches have been widely developed and applied in both the public and private
sectors (HADAD et al., 2011).
Here, we evaluate the efficiency of the primary public health service in a typical
Brazilian medium-sized town, whose main challenge is to find ways to achieve its
goals efficiently.
Moreover, this paper describes a method that integrates the Network DEA model (FÄRE
and GROSSKOPF, 2000), the Nash bargaining model, and the BSC - Balanced Scorecard
(KAPLAN and NORTON, 1992).
Method
With the help of the Health Department personnel of the studied city, we identified
which resources (inputs) and service levels (outputs) would be considered in the
modeling: the medical specialties (internal medicine, gynecology and pediatrics)
offered by a typical HBU (Health Basic Unit) were defined as the DMUs. Also, the
DEA variables were assigned to each BSC perspective customized for the Health
System: Patient Perspective, Internal Processes Perspective, Learning and Growth
Perspective, and Financial Perspective.
As the next step, a survey was planned and performed in each HBU to obtain data
about the "Patient Perspective", the "Learning and Growth Perspective" and other
organizational characteristics of the DMUs. The Key Performance Indicators
considered were: Financial Perspective - number of physicians, number of staff, and
expenses; Learning & Growth Perspective - working conditions; Internal Process
Perspective - medical service time (minutes), wait time (minutes), percentage of
available medications, and capacity used; and Patient Perspective - percentage of
patients satisfied.
Finally, we developed three models in order to evaluate the performance in each
medical specialty. Each model was applied in three phases. Consider the three-stage
process shown in Figure 1. Suppose we have k DMUs, and that each DMU has a
vector of inputs (indicators for the Financial Perspective) to the first stage and a
vector of outputs from this stage (indicators for the Learning & Growth Perspective).
These outputs (Learning & Growth Perspective) then become the inputs to the second
stage. Indicators assigned to the Internal Process Perspective form the vector of
outputs from the second stage; this vector (Internal Process Perspective) becomes the
vector of inputs to the third stage. Finally, indicators assigned to the Patient
Perspective form the vector of outputs from the third stage.
F(N, S, d) = max_{u ∈ S, u ≥ d}  ∏_{i=1}^{N} (u_i − d_i)   (1)
The optimal value of the function F(N, S, d) satisfies four properties: Pareto
efficiency, invariance with respect to affine transformations, independence of
irrelevant alternatives, and symmetry.
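The Nash product in (1) can be maximized numerically; a minimal sketch using scipy's SLSQP solver on a toy feasible set (the set, names and starting point are illustrative, not the paper's DEA stages):

```python
import numpy as np
from scipy.optimize import minimize

def nash_bargaining(d, constraints, x0):
    """Maximize the Nash product prod_i (u_i - d_i) subject to u in S, u >= d.

    S is described by inequality functions g(u) >= 0 (illustrative interface)."""
    d = np.asarray(d, dtype=float)
    # Maximizing the product is equivalent to minimizing -sum(log(u_i - d_i))
    objective = lambda u: -np.sum(np.log(np.asarray(u) - d))
    cons = [{"type": "ineq", "fun": g} for g in constraints]
    bounds = [(di + 1e-9, None) for di in d]   # enforce u > d
    res = minimize(objective, x0, bounds=bounds, constraints=cons)
    return res.x

# Toy bargaining set: u1 + u2 <= 4 with disagreement point d = (1, 1);
# by symmetry the Nash solution is u = (2, 2).
u_star = nash_bargaining([1.0, 1.0], [lambda u: 4.0 - u[0] - u[1]], x0=[1.5, 1.5])
```

The log transform keeps the objective concave on u > d, which is why a local solver recovers the global Nash solution here.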
In the first model (2), we used the Nash Bargaining problem in order to negotiate
the desired levels of input-output in each stage of Network DEA, as expressed
below. This model (2) is the first phase of our analysis.
max z_1 + z_2 + z_3   s.t.   (2.1)
y_o^{P2}(β_2 − 1) ≤ Y^{P2}(λ_2 − λ_1)   (2.3)
y_o^{P3}(β_3 − 1) ≤ Y^{P3}(λ_3 − λ_2)   (2.4)
x_o^{P4} α ≥ X^{P4} λ_3   (2.5)
(β_1 − δ_1)(β_2 − δ_2) = z_1   (2.6)
(β_2 − δ_2)(β_3 − δ_3) = z_2   (2.7)
(β_3 − δ_3)(δ_4 − α) = z_3   (2.8)
λ_1 = 1, λ_2 = 1, λ_3 = 1   (2.9)
β_1 ≥ δ_1, β_2 ≥ δ_2, β_3 ≥ δ_3, α ≤ δ_4   (2.10)
λ_1 ≥ 0, λ_2 ≥ 0, λ_3 ≥ 0, α ≥ 0.   (2.11)
y_o^{P2}(β_2 − 1) ≤ Y^{P2}(λ_2 − λ_1)   (3.4)
y_o^{P3}(β_3 − 1) ≤ Y^{P3}(λ_3 − λ_2)   (3.5)
x_o^{P4} α ≥ X^{P4} λ_3   (3.6)
λ_1 = 1, λ_2 = 1, λ_3 = 1   (3.7)
β_1 ≥ δ_1, β_2 ≥ δ_2, β_3 ≥ δ_3, α ≤ δ_4   (3.8)
λ_1 ≥ 0, λ_2 ≥ 0, λ_3 ≥ 0, α ≥ 0.   (3.9)
In this model, α* and β_i* are the values of the optimal solution of the 1st model.
Note that if some β_i* or α* does not satisfy β_i* > δ_i or α* < δ_i, then the output
associated with that β_i* or the input associated with that α* does not participate as
a player in model (3).
In the third phase, we used model (4) in order to optimize the β_i and α that do not
participate as players in the second model (3); in other words, model (4) optimizes
the values of β_i* and α* for which β_i* = δ_i or α* = δ_i in the first model.
y_o^{P2}(β_2 − 1) ≤ Y^{P2}(λ_2 − λ_1)   (4.3)
y_o^{P3}(β_3 − 1) ≤ Y^{P3}(λ_3 − λ_2)   (4.4)
λ_1 = 1, λ_2 = 1, λ_3 = 1   (4.6)
λ_1 ≥ 0, λ_2 ≥ 0, λ_3 ≥ 0, α ≥ 0.   (4.8)
In this model (4), α* and β_i* are the optimal solution of the 2nd model. Model (4),
however, does not provide information on the efficiency of each individual stage; we
therefore calculated the efficiency of each stage with the multiplier model (5).
min  u_2 y_o^{P2} + u_3 y_o^{P3} + u_2* + u_3* + v* − d_1 β_1* − d_2 β_2* − d_3 β_3* + d_4 α*   s.t.
with the stage efficiencies given by
e_3 = u_1 y_o^{P1} / (u_2 y_o^{P2} + u_2*)   (stage 3)   (5.1)
e_2 = u_2 y_o^{P2} / (u_3 y_o^{P3} + u_3*)   (stage 2)   (5.2)
e_1 = u_3 y_o^{P3} / (v x_o^{P4} + v*)   (stage 1)   (5.3)
According to Cook et al. (2010) and Chen et al. (2009), the extra free-in-sign
variables in the VRS DEA model make it difficult to use the geometric mean of the
efficiency scores of the three individual stages (KAO and HWANG, 2008), since the
geometric decomposition of the overall efficiency is restricted to CRS situations.
Thus, we adopted the additive efficiency decomposition (arithmetic mean) approach
(CHEN et al., 2009).
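The additive decomposition combines the stage scores as a weighted arithmetic mean; a minimal sketch (equal weights are a simplification here — Chen et al. (2009) derive the weights from the relative sizes of the stages):

```python
def additive_overall_efficiency(stage_scores, weights=None):
    """Additive (arithmetic-mean) decomposition of overall efficiency:
    a weighted arithmetic mean of the individual stage scores."""
    n = len(stage_scores)
    if weights is None:
        weights = [1.0 / n] * n        # simplification: equal stage weights
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to one"
    return sum(w * e for w, e in zip(weights, stage_scores))

# Three stage scores combined into one overall score
overall = additive_overall_efficiency([0.90, 0.75, 0.60])
```

Unlike the geometric (multiplicative) decomposition, this form remains well defined under VRS, which is the reason given in the text for adopting it.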
Table 6. The data of medical specialties for five Health Basic Units
DMU Satisfaction Medical service time Wait time Medicine Available Capacity used
A-Internal medicine 86.22% 6.1 77.8 61.83% 81.25%
A-Pediatrics 95.14% 7.8 116.5 84.38% 81.25%
A-gynecology 77.38% 9.4 106.3 53.27% 64.58%
B-Internal medicine 74.13% 5.3 108.6 44.94% 65.25%
C-Pediatrics 58.87% 7.7 268.0 82.59% 68.75%
D-Internal medicine 80.38% 7.7 75.6 100.00% 68.75%
E-Internal Medicine 67.46% 14.2 171.2 62.92% 62.50%
E-Pediatrics 46.41% 15.2 96.8 54.64% 56.25%
E-Ginecology 80.02% 21.5 66.3 100.00% 25.00%
DMU Working conditions Physicians Functionaries Expenses
A-Internal medicine 68.57% 6 40 R$ 1240.18
A-Pediatrics 68.57% 2 40 R$ 1132.34
A-gynecology 68.57% 4 40 R$ 1186.26
B-Internal medicine 23.18% 4 13 R$ 929.76
C-Pediatrics 25.70% 2 12 R$ 513.17
D-Internal medicine 14.39% 4 15 R$ 1688.13
E-Internal Medicine 27.29% 2 16 R$ 588.04
E-Pediatrics 27.29% 2 16 R$ 588.04
E-Ginecology 27.29% 3 16 R$ 620.71
In our preliminary study, we set the disagreement point δ_i = 1 for i ∈ {1, 2, 3, 4};
thus it is possible to guarantee that S is a feasible set. We obtained the efficiency
scores (Table 2) for each stage from equations (5.1) to (5.3).
DMU Satisfaction Medical service time Wait time Medicine Available Capacity used
A-Internal medicine 86.22% 9.2 77.8 91.42% 81.25%
A-Pediatrics 95.14% 7.8 116.5 84.38% 81.25%
A-gynecology 77.38% 10.4 89.1 89.94% 71.78%
B-Internal medicine 74.76% 7.7 75.6 100.00% 73.08%
C-Pediatrics 83.55% 7.7 75.6 100.00% 68.75%
D-Internal medicine 80.38% 8.0 72.8 103.83% 71.39%
E-Internal Medicine 75.08% 16.1 91.7 88.33% 70.53%
E-Pediatrics 74.91% 17.2 85.8 90.33% 67.12%
E-Ginecology 80.02% 22.5 63.4 104.46% 68.75%
DMU Working conditions Physicians Functionaries Expenses
A-Internal medicine 97.90% 2 25 R$ 816.53
A-Pediatrics 68.57% 2 40 R$ 1132.34
A-gynecology 95.77% 2 24 R$ 793.29
B-Internal medicine 34.49% 2 12 R$ 513.17
C-Pediatrics 37.02% 2 12 R$ 513.17
D-Internal medicine 25.70% 2 12 R$ 513.17
E-Internal Medicine 39.73% 2 15 R$ 588.04
E-Pediatrics 40.43% 2 15 R$ 588.04
E-Ginecology 38.61% 2 12 R$ 513.17
The dual values of λ_k represent the weights of a linear combination (a virtual
DMU that serves as a benchmark) of the medical specialties considered efficient.
Drawing an analogy with BSC theory, it would be interesting to establish objectives,
measures and targets for the medical specialties, as well as to develop the Strategy
Map. The BSC allows the identification of good initiatives, which should be adopted
by the inefficient DMUs, based on the actions implemented by a benchmark DMU.
Table 10. Benchmarks for each medical specialty.
Benchmarks
Stage 1 A-Pediatrics C-Pediatrics
A-Internal medicine 49.0% 51.0%
A-Pediatrics 100.0% 0%
A-gynecology 45.2% 54.8%
B-Internal medicine 0% 100.0%
C-Pediatrics 0% 100.0%
D-Internal medicine 0% 100.0%
E-Internal Medicine 12.1% 87.9%
E-Pediatrics 12.1% 87.9%
E-Ginecology 0% 100.0%
Conclusions
This research demonstrated how DEA can be used to measure (and to rank) the
relative efficiency of the medical specialties offered by Health Basic Units (HBU) in
a medium-sized town in Brazil. The ranking of efficiencies provides important
information for the Health Secretary to decide on actions to better distribute the
available resources, making the HBUs more efficient and therefore improving the
performance of the Health System as a whole.
Regarding the set of medical specialties analyzed, it should be noted that the results
obtained are based on the selected inputs and outputs and their priorities. Sometimes
the targets proposed by the DEA technique cannot be applied in practice, or they are
very difficult to achieve; in these cases, managers ought to take them as a reference.
Each medical specialty must be analyzed according to its own reality or
organizational context; therefore, even those that achieved 100% relative efficiency
sometimes do not have the performance desired by the organization.
This occurs because the concept of efficiency represents the best relation between the
benefits obtained and the resources used, but does not mean that these benefits are
the desired results.
References
Amado C.A.F., Santos S. P., Marques P.M. (2012) Integrating the Data Envelopment
Analysis and the Balanced Scorecard approaches for enhanced performance
assessment, Omega (40): 390-403.
Eilat H. et al. (2008) R&D project evaluation: An integrated DEA and balanced
scorecard approach. Omega (36): 895-912.
Färe R., Grosskopf S. (2000) Network DEA, Socio-Economic Planning Sciences (34):
35-49.
Kao C., Hwang S-N. (2008) Efficiency decomposition in two-stage data envelopment
analysis: An application to non-life insurance companies in Taiwan, European Journal
of Operational Research (185): 418-429.
Kaplan R.S., Norton D.P. (1992) The Balanced Scorecard: measures that drive
performance, Harvard Business Review (70): 71-79.
Aljar J. Meesters
University of Groningen, Netherlands, A.J.Meesters@rug.nl
Abstract
This paper proposes an alternative class of stochastic frontier estimators. Instead of
making distributional assumptions on the error and efficiency component in the
econometric specification of a cost function model (or any other model), this class is
based on the idea that some observations contain more information about the true
frontier than others. If an observation is likely to contain much information, it will get
a large weight in the regression analysis. In order to establish the weights, we
propose an iterative procedure. In each step weights can be determined by the
residuals obtained earlier and a user-specified weighting function. In each step the
weights will be updated and a next stage WLS regression will be carried out.
The advantages of this approach are its high transparency, its easy application to a
fully specified model and its flexibility. It allows one to observe directly which
observations largely determine the frontier. The easy expansion to a fully specified
model refers to a model that includes a cost function and its corresponding share
equations. Its flexibility refers to the use of several alternative weighting functions
and the ease of testing the sensitivity of the outcomes.
The model has been applied to a set of Dutch hospital data. The outcomes of this
application are promising. The model converges rather quickly and presents reliable
estimates for the parameters, the cost efficiencies and the error components.
Key words: weighted least squares, SFA, hospitals
Introduction
The Stochastic Frontier Analysis (SFA) methodology, suggested by Aigner, Lovell and
Schmidt (1977) and Meeusen and Van den Broeck (1977), has become a standard in the
econometric estimation of production and cost (or any other value) functions. It is
based on the idea that empirically, production (or cost) can be described as a function
of a number of inputs (or outputs and input prices), a stochastic term, reflecting
errors, and another stochastic term, reflecting efficiency. With maximum likelihood
techniques, the parameters of the function and the parameters of the distribution of
Methods
We start from the cost function, although the method may be applied to any other
model (see e.g. Färe & Primont, 1995). We assume that the firm is cost-minimizing and
that the total cost can be represented by a cost function c(y,w) that meets all the
requirements it entails. Input demand equations xn(y,w) can be derived from the cost
function, by applying Shephard’s Lemma. For reasons of convenience, we rewrite the
cost equations and input demand equations in terms of logarithms and cost shares,
and add an error term.
ln(𝐶) = 𝑐(ln(𝑦) , ln(𝑤)) + 𝜀0 (1)
𝑆𝑛 = 𝑠𝑛 (ln(𝑦) , ln(𝑤)) + 𝜀𝑛 (2)
With:
C = total costs;
y = vector of outputs;
w = vector of input prices;
Sn = optimal cost share for input n (n = 1,.., N).
ε0, εn error terms
Equations (1) and (2) can be estimated by a minimum distance estimator or, if one
wants to check for heterogeneity, with fixed or random effects, which will result in
consistent estimates of the parameters if E[ε|y, w] = 0. However, if some firms are
inefficient, i.e. they have higher costs than can be explained by the cost function or
random noise, then E[ε] > 0, causing biases in the parameters of Equations (1) and (2).
Our suggestion for reducing these biases consists of estimating Equations (1) and (2)
with weighted least squares and assigning the ‘ill-behaving’ observations with a low
weight, while the ‘well-behaving’ observations will be assigned with a high weight.
Weighted least squares (WLS), which is also referred to as generalized least squares
(GLS), is a widely used econometric technique; however, since the weights are
generally not observable, they have to be estimated (see e.g. Verbeek, 2012). Our
proposed weighting scheme is based on the residuals obtained after Equations (1) and
(2) have been estimated with LSQ 3, as we know that firms that are highly inefficient,
and thus likely to bias the results, will have a large residual 𝜀̂, where 𝜀̂ is the estimate
of 𝜀.
³ If Equations (1) and (2) are estimated with fixed effects, the weights can also be based on the fixed
effects, which would make our estimator a generalized version of the estimator suggested by
Wagenvoort and Schure (2006).
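The iterative scheme described above — plain least squares first, then residual-based weights, then repeated WLS until the parameters settle — can be sketched as follows (the weighting function is illustrative, not the paper's exact scheme):

```python
import numpy as np

def iterative_wls(X, y, weight_fn, tol=0.01, max_iter=100):
    """Iteratively weighted least squares for frontier estimation (sketch).

    Step 0 is plain least squares; each subsequent step maps the current
    residuals to weights with a user-specified weight_fn and re-estimates by
    WLS, stopping when the largest relative parameter change is below tol
    (1% in the text)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # step 0: plain LSQ
    for _ in range(max_iter):
        resid = y - X @ beta
        w = weight_fn(resid)                             # residual-based weights
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        change = np.max(np.abs(beta_new - beta) / np.maximum(np.abs(beta), 1e-12))
        beta = beta_new
        if change < tol:
            break
    return beta, y - X @ beta

# Example weighting: 'well-behaving' observations (residual <= 0, i.e. at or
# below the cost frontier) keep weight 1; large positive residuals (likely
# inefficient firms) are downweighted exponentially.
def example_weights(resid, scale=1.0):
    return np.where(resid <= 0, 1.0, np.exp(-resid / scale))
```

Because inefficient observations lose influence at each pass, the fitted function is pulled toward the frontier rather than toward the conditional mean.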
with:
σ̂_u² = σ̂_ε² − σ̂_v²
The efficiency score then equals:
Eff_i = exp(−M(û_i | ε̂_i))   (4)
Obviously, other alternatives are available (see e.g. Kumbhakar & Lovell, 2000).
Note that, in comparison with the original Jondrow et al. (1982) paper, we have
swapped the roles of the random error and efficiency components in the model. It is
important to stress that we do not apply the distributional assumptions to the error
and efficiency components in the estimation procedure, but only in the derivation of
the efficiency scores. However, this procedure has some consequences for the
weighting scheme in the estimation procedure. Since we assume that all negative
residuals indicate efficient firms and that their residuals only reflect random noise,
there is no reason to assign different weights to these observations; therefore, for
reasons of consistency, we use a weighting scheme that assigns a weight of 1 to all
the observations in the efficient subset.
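The score-derivation step can be sketched as below, using the standard Jondrow et al. (1982) conditional mean for a cost frontier (ε = v + u, u ≥ 0) as the M(·|·) operator; whether this matches the paper's exact choice of M is an assumption, and sigma_v estimated from the negative (pure-noise) residuals is taken as given:

```python
import numpy as np
from scipy.stats import norm

def efficiency_scores(resid, sigma_v):
    """Derive efficiency scores from residuals (sketch): distributional
    assumptions enter only here, not in the estimation itself.

    sigma_u^2 = sigma_eps^2 - sigma_v^2, as in the preamble to Eq. (4);
    E[u | eps] is the Jondrow-type truncated-normal mean (assumed form)."""
    resid = np.asarray(resid, dtype=float)
    s2_u = max(np.var(resid) - sigma_v**2, 1e-12)
    s2 = s2_u + sigma_v**2
    mu_star = resid * s2_u / s2                   # mean of u | eps
    s_star = np.sqrt(s2_u * sigma_v**2 / s2)      # std dev of u | eps
    z = mu_star / s_star
    e_u = mu_star + s_star * norm.pdf(z) / norm.cdf(z)   # E[u | eps] > 0
    return np.exp(-e_u)                           # Eff_i = exp(-E[u_i | eps_i])
```

Larger positive residuals yield larger conditional inefficiency and hence lower scores, which is the behavior the weighting scheme relies on.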
Model specification
We apply the well-known translog cost function model (Christensen et al., 1973;
Christensen & Greene, 1976). The cost function model consists of a translog cost
function and the corresponding cost share equations. The model includes first and
second order terms, as well as cross-terms between outputs and input prices, on the
one hand, and a time trend, on the other hand. These cross-terms with a time trend
represent the possible different natures of technical change. Cross-terms with outputs
refer to output-biased technical change and cross-terms with input prices refer to
input-biased technical change.
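A generic translog cost function with the cross-terms described above can be written as follows (generic symbols, not the paper's exact parameterization):

```latex
\ln C = \alpha_0 + \sum_m \beta_m \ln y_m + \sum_n \gamma_n \ln w_n
      + \tfrac{1}{2}\sum_{m}\sum_{m'} \beta_{mm'}\,\ln y_m \ln y_{m'}
      + \tfrac{1}{2}\sum_{n}\sum_{n'} \gamma_{nn'}\,\ln w_n \ln w_{n'}
      + \sum_{m}\sum_{n} \delta_{mn}\,\ln y_m \ln w_n
      + \theta_0\, t + \sum_m \theta_m\, t \ln y_m + \sum_n \phi_n\, t \ln w_n
```

Here the terms $\theta_m\, t \ln y_m$ capture output-biased and $\phi_n\, t \ln w_n$ input-biased technical change, and the cost shares follow from Shephard's lemma as $S_n = \partial \ln C / \partial \ln w_n$.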
Data
The data for this study cover the period 2003-2009 and were obtained from the Dutch
Hospitals Association. Annual financial, patient and personnel data were collected by
means of surveys. The surveys cover all the general hospitals in the Netherlands. For
Estimation results
The models are estimated as multivariate regression systems of several equations
with a joint density, which we assume to be normal. Because the disturbances are
likely to be correlated across equations, Zellner's Seemingly Unrelated Regression
method is used for estimation (Zellner, 1962). As usual, since the shares add up to
one, causing the variance-covariance matrix of the error terms to be singular, one
share equation in the direct cost function model is eliminated. Since we are dealing
with a relatively large number of cross-sectional units and a limited number of
periods, we ignore the fact that we are dealing with panel data (with respect to
intra-firm correlations); the between variance is clearly far more important than the
within variance.
An interesting aspect of this weighting scheme is that the weights are directly related
to the efficiency scores. Efficient firms have weights equal to 1 and inefficient firms
have efficiency scores equaling the weights multiplied by a constant (equal to the ratio
of variances).
However, it is easy to implement other weighting schemes and see whether the results
differ. This is another advantage of our approach over the SFA approach, which
requires calculating the convolution of two random variables and deriving the
maximum likelihood if one wants to use another distribution. As it turns out, our
results were quite robust to another weighting scheme, based on rank numbers. In the
case of IWLS estimation, the procedure stops when the maximum change in the
parameters is less than 1%. The number of iterations in our application equals 12.
Besides the imposed
Table 1 shows that most parameter estimates are significant at the 5% level. For most
variables, the estimated parameters also have the expected signs. We have checked
the theoretical conditions for monotonicity and concavity for the average firm. Since
the fitted cost shares are positive for the average firm, the theoretical condition for
monotonicity is satisfied for all inputs. A necessary condition for concavity of the cost
function is that the own partial elasticities of substitution are less than zero for all
inputs; this condition also holds for all inputs. A sufficient condition is that the
matrix of partial elasticities of substitution is negative semi-definite. Unfortunately,
this condition does not hold, since one of the eigenvalues is (slightly) positive, while
all other eigenvalues are negative; the sufficient condition is therefore too tight.
These statements hold for the outcomes of both estimation procedures.
Comparing the outcomes of the plain LSQ estimates and the iterative weighted LSQ, a
number of the estimated parameters are quite similar. Especially the estimates of the
parameters, corresponding to input prices and fixed resources, show great similarities.
On the other hand, there are also a few striking differences, in particular with respect to
the trend parameters. The IWLS estimated parameters a2-a7, representing the frontier
shift from year to year, are lower than the parameter estimates from the plain LSQ
estimation, implying that technical change is slower in comparison to the average cost
function, which may also take account of some cost efficiency changes. The
parameters, corresponding to the services produced, also show some substantial
differences. However, the calculated cost flexibilities for the average firm are identical
up to the third decimal ( ∑ bm = 1.230 ). Bigger differences can be found between the
parameters of the cross-terms of services produced. However, the LSQ estimates (and
partly also the IWLS-estimates) are rather unreliable. One of the most striking results is
that, apart from very few exceptions, the parameter estimates according to the IWLS-
estimation are far more efficient.
Figure 2 shows that, in 2009, approximately one quarter of the hospitals were (almost)
efficient. Furthermore, the inefficient hospitals show a plausible pattern of
inefficiencies. The average efficiency equals 95% with a standard deviation of 5%. The
minimum efficiency score equals 81%. When comparing efficiency scores between the
years, it appears that they are very robust (not presented in the figure). In 2003, the
average efficiency is a little bit lower (94%) and in 2008, it is a little bit higher (96%).
One of the serious drawbacks of the thick frontier approach is that it requires
sampling from a stratified sample. Since, in this procedure, we haven’t stratified the
sample at all, it is questionable whether, regardless of certain characteristics, each
hospital has an equal probability of being identified as an efficient hospital. Obvious
characteristics that may affect the probability of being (in)efficient are the size and the
year. Therefore, we inspect the distribution of the efficiency scores, related to year
and size. Figure 3 reflects the number of efficient hospitals in each year of the sample.
Figure 3 shows that the final selection of efficient hospitals is quite uniformly
distributed over the years, varying between 18 and 29. This shows that the procedure
does not tend to favour a particular year.
Another potential selection bias may occur with respect to the size of the hospitals.
Figure 4 reflects the frequency distribution with respect to the size (divided in four
quartiles with respect to the number of beds).
Conclusions
This paper proposes an alternative class of stochastic frontier estimators. Instead of
making distributional assumptions on the error and efficiency component in the
econometric specification of a cost function model (or any other model), this class is
based on the idea that some observations contain more information about the true
frontier than others. If an observation is likely to contain much information, it will be
assigned a large weight in the regression analysis. In order to establish the weights,
we propose an iterative procedure. Since no a priori information is available, the first
step consists of running a standard Least Squares method. Subsequently, weights can
be determined by the residuals obtained and a user-specified weighting function. The
weights obtained allow for Weighted Least Squares (WLS) to be applied. Since the
WLS residuals will differ from the LSQ residuals, new weights will be determined by
means of an iterative procedure. In each step, the weights will be updated and a new
WLS regression will be estimated. Since the negative residuals, by definition, represent
the error component, the variance of these errors can easily be calculated and used as
an estimator of the variance of the normal distribution of the noise. Similar to SFA,
(expected) inefficiency and noise can be derived for all the other observations. The
iterative procedure stops as soon as the change in the parameters between two
iterations is less than a given threshold value.
The advantages of this approach are its high transparency, its easy application to a
fully specified model and its flexibility. It allows one to observe directly which
observations largely determine the frontier. The easy expansion to a fully specified
model refers to a model that includes a cost function and its corresponding share
equations. Its flexibility refers to the use of several alternative weighting functions
and the ease of testing the sensitivity of the outcomes.
The model is applied to a set of Dutch hospital data (including about 550
observations). The outcomes of this application are promising. The model converges
rather quickly and presents reliable estimates for the parameters, the cost efficiencies
and the error components. About 25% of the hospitals are designated as efficient. The
average efficiency score is approximately 93%.
References
Aigner, D., Lovell, C. A. K., & Schmidt, P. (1977). Formulation and estimation of
stochastic frontier production function models. Journal of Econometrics, 6(1), 21-37.
Berger, A. N., & Humphrey, D. B. (1991). The Dominance of Inefficiencies over Scale
and Product Mix Economies in Banking. Journal of Monetary Economics, 28(1),
117-148.
Christensen, L. R., & Greene, W. H. (1976). Economies of Scale in U.S. Electric Power
Generation. Journal of Political Economy, 84(4), 655-676.
Färe, R., & Primont, D. (1995). Multi-Output Production and Duality: Theory and
applications. Dordrecht: Kluwer Academic Publishers.
Fried, H. O., Lovell, C. A. K., & Schmidt, S. S. (2008). The measurement of productive
efficiency and productivity growth. New York: Oxford University Press.
Guitton, A. (2000). Stanford Lecture Notes on the IRLS algorithm. Retrieved from
http://sepwww.stanford.edu/public/docs/sep103/antoine2/paper_html/index.html
Kumbhakar, S. C., & Lovell, C. A. K. (2000). Stochastic frontier analysis. New York:
Cambridge University Press.
Meeusen, W., & Van den Broeck, J. (1977). Efficiency estimation from Cobb-Douglas
production functions with composed error. International Economic Review, 18(2),
435-444.
Ondrich, J., & Ruggiero, J. (2001). Efficiency measurement in the stochastic frontier
model. European Journal of Operational Research, 129(2), 434-442.
Verbeek, M. (2012). A guide to modern econometrics (4 ed.). Chichester: John Wiley &
sons, Ltd.
Kazuyuki Sekitani
Shizuoka University, Japan
Jianming Shi
Muroran Institute of Technology, Japan,
Abstract
In this study, we introduce and investigate least distance p-norm efficiency measures
that satisfy strong monotonicity on the efficient frontier.
Keywords: DEA, least distance measures, p-norm, strong monotonicity, shadow profit-
based dominated set, free disposal input-output set, efficient frontier
Introduction
In data envelopment analysis (DEA), mathematical programming is applied to
observed input-output data in order to assess the efficiency performance and provide
target information for managers of entities such as banks, hospitals and business units.
Each entity responsible for transforming multiple inputs to multiple outputs is called a
decision-making unit (DMU). In DEA, there are basically two frameworks for the
efficiency assessment and targeting: the greatest and the least distance frameworks.
Greatest distance measures generally provide efficiency targets that are obtained as the
farthest projections from the DMU to be assessed. Such measures include Tone’s
(2001) slacks-based measure of efficiency (SBM) and Cooper et al.’s (1999) range-
adjusted measure of efficiency (RAM). These greatest distance efficiency measures
and projections are often used because of their computational ease. However, closest
or least distance projections are often more relevant than greatest distance projections
from the perspective of private or public managers. This is so because closer
efficiency targets may be reached with less effort. In a p-norm framework, RAM and
SBM are considered to be greatest distance 1-norm measures, i.e., they are obtained
by maximizing the 1-norm from the DMU being evaluated to the efficient frontier E .
In this research we utilize two DEA representations of T: the constant returns to scale
technology T^c and the variable returns to scale technology T^v, expressed by
T^c ≡ { (x, y) ∈ R₊^{N+M} | Σ_{j=1}^{J} x_j λ_j ≤ x,  Σ_{j=1}^{J} y_j λ_j ≥ y,  λ ≥ 0 },   (2)

T^v ≡ { (x, y) ∈ R₊^{N+M} | Σ_{j=1}^{J} x_j λ_j ≤ x,  Σ_{j=1}^{J} y_j λ_j ≥ y,  Σ_{j=1}^{J} λ_j = 1,  λ ≥ 0 },   (3)
where 0 is a zero vector with appropriate dimension. The weakly efficient frontier or
boundary of (1) is defined by
From this definition, the weakly efficient frontier is the set of all the inputs and
outputs that are not strongly dominated. The efficient frontier of (1) is defined by
E ≡ { (x, y) ∈ T | (x′, −y′) ≤ (x, −y), (x′, −y′) ≠ (x, −y)  ⇒  (x′, y′) ∉ T }.   (5)
‖z‖_p = ( Σ_{l=1}^{n} |z_l|^p )^{1/p}   if p ∈ [1, ∞),
‖z‖_∞ = max{ |z_1|, …, |z_n| }   if p = ∞,   (6)
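The p-norm of (6) can be computed directly; a minimal sketch:

```python
import numpy as np

def p_norm(z, p):
    """The p-norm of Eq. (6): (sum_l |z_l|^p)^(1/p) for p in [1, inf),
    and max_l |z_l| for p = inf."""
    z = np.abs(np.asarray(z, dtype=float))
    if p == np.inf:
        return z.max()
    return (z ** p).sum() ** (1.0 / p)

z = [3.0, -4.0]
norms = (p_norm(z, 1), p_norm(z, 2), p_norm(z, np.inf))
```

As p grows, the norm interpolates between the sum of absolute slacks (p = 1, the RAM/SBM setting mentioned above) and the largest single slack (p = ∞).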
which is an inefficiency measure that identifies the closest point on E from an activity
( x, y ) . Pastor and Aparicio (2010) showed that an efficiency measure defined by
1 − min{ ‖(x − x′, y − y′)‖_{Z2} | (x′, y′) ∈ E }   (8)
is not a strongly monotonic efficiency measure that seeks closest points over the
efficient frontier E . Recently, Ando et al. (2012) considered the following axiom
(Axiom C ′ ) by relaxing Axiom C.
Ando et al. (2012) developed a least distance inefficiency measure satisfying weak
monotonicity over E by modifying (7) as follows:
The situation where an activity (x′, y′) belongs to D(x, y) means that x′ can produce
y′ if (x, y) is feasible. Hence, D(x, y) represents a free disposal input-output set of
the activity (x, y). D(x, y) is utilized to define a slacks-based super-efficiency
measure in Cooper et al. (2007, p. 313). By incorporating D(x, y) into
min{ ‖(x, y) − (x′, y′)‖_p | (x′, y′) ∈ E }, Ando et al. (2012) show that f_p, defined
in (9), is weakly monotonic over E for all p ∈ [1, ∞].
This study extends (9) to a least distance efficiency measurement setting in which
strong monotonicity is satisfied, i.e., all of Axioms A, B and C are satisfied. For this
purpose, we replace D(x, y) of (10) by D_ε(x, y), the following ε (≥ 0)-dependent set
of (x, y):
D_ε(x, y) = { (x̄, ȳ) | (x̄, ȳ) = (x + d_x, y − d_y), 0 ≤ I(ε)(d_x; d_y) },   (11)
where I(ε) is the square matrix with ones on the diagonal and ε in every off-diagonal entry:
| 1 ε ⋯ ε |
| ε 1 ⋯ ε |
| ⋮    ⋱ ⋮ |
| ε ε ⋯ 1 |
In what follows, we show that the profits associated with any activities in D_ε(x, y) do not
exceed the profit of (x, y). That is, we utilize shadow prices of inputs and outputs of
a DMU in place of observed prices to interpret D_ε(x, y), where (x, y) attains the
maximum shadow profit. Therefore, we call D_ε(x, y) the shadow profit-based
dominated set.
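Reading I(ε) as the matrix with ones on the diagonal and ε off the diagonal — an assumption consistent with D_0(x, y) collapsing to the free-disposal set D(x, y) — membership in D_ε(x, y) can be checked directly; a hedged numpy sketch under that reading:

```python
import numpy as np

def in_D_eps(x, y, x_bar, y_bar, eps):
    """Check (x_bar, y_bar) in D_eps(x, y): writing x_bar = x + d_x and
    y_bar = y - d_y, require I(eps) @ (d_x, d_y) >= 0, where I(eps) is
    assumed to have ones on the diagonal and eps elsewhere."""
    d = np.concatenate([np.asarray(x_bar) - np.asarray(x),
                        np.asarray(y) - np.asarray(y_bar)])
    n = d.size
    I_eps = np.full((n, n), eps) + (1.0 - eps) * np.eye(n)
    return bool(np.all(I_eps @ d >= 0))

x, y = [2.0, 3.0], [1.0]
print(in_D_eps(x, y, [2.5, 3.0], [0.8], eps=0.0))  # plain free disposal
print(in_D_eps(x, y, [1.9, 3.5], [0.8], eps=0.0))  # d_x has a negative entry
print(in_D_eps(x, y, [1.9, 3.5], [0.8], eps=0.3))  # eps > 0 enlarges the cone
```

With eps = 0 the matrix is the identity and the test is exactly d_x ≥ 0, d_y ≥ 0, i.e. membership in D(x, y).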
f_ε^p(x, y) ≡ min{ ‖(x, y) − (x′, y′)‖_p | (x′, y′) ∈ E ∩ D_ε(x, y) },  (12)

then it follows from D_0(x, y) = D(x, y) that f_0^p(x, y) = f^p(x, y); hence f_ε^p(x, y)
is a natural extension of (9).
Now, let us explain how Dε (x, y ) and prices in dual space are related to ε . For
this purpose, we define, for a given ε > 0 , the following set of allowable multiplier
weights (shadow prices):
Proposition 1: (x̄, ȳ) ∈ D_ε(x, y) ⇔ wy − vx ≥ wȳ − vx̄, ∀(v, w) ∈ Ω(ε).
Proposition 1 indicates that, in D_ε(x, y), there does not exist an activity whose profit
exceeds that of the activity (x, y) for any price vector (v, w) in Ω(ε). Therefore, any
activities in Dε ( x, y ) are no better than (x, y ) from the viewpoint of shadow profit
values as far as ( v, w ) is in Ω ( ε ) . At this point, it is appropriate to state that we have
restricted the existence region of allowable shadow prices to Ω ( ε ) , and thus we need
to explicitly specify Ω ( ε ) in relation to each fully efficient DMU. We suggest
constructing Ω ( ε ) so that ( x, y ) ∈ T satisfies the following condition:
(x, y) ∈ E ⇔ ∃(v, w) ∈ Ω(ε) such that wy − vx = 0,  (a)
             and ∀(v, w) ∈ Ω(ε), wy − vx ≤ 0.        (b)   (14)
It is known that, for any efficient activity (x, y ) ∈ E , there is at least one positive
shadow price vector ( v, w ) satisfying wy − vx ≥ wy − vx ∀ ( x, y ) ∈ T . Furthermore,
since the efficient frontier is the union of finitely many faces, we can limit to finitely
many positive shadow price vectors for each efficient activity. Consequently, if we
select a sufficiently small ε > 0 , then the existence of Ω ( ε ) is guaranteed. The
collection of shadow price vectors contains at least one positive price vector ( v, w )
satisfying (a) in (14) for any efficient activity (x, y ) ∈ E . That is, Ω ( ε ) contains
information on shadow prices enough to describe the efficient frontier E . For all
shadow price vectors ( v, w ) ∈ Ω ( ε ) , any activities ( x, y ) in Dε ( x, y ) cannot have
higher shadow profits than the evaluated DMU (x, y ) . Mathematically, we can
express this situation as follows:
max_{(x̄, ȳ) ∈ D_ε(x, y)} { wȳ − vx̄ } = wy − vx ≤ 0,  ∀(v, w) ∈ Ω(ε), ∀(x, y) ∈ T,

max_{(x̄, ȳ) ∈ D_ε(x, y), (v, w) ∈ Ω(ε)} { wȳ − vx̄ } = 0,  ∀(x, y) ∈ E.
Lemma 3: Choose (x, y ) ∈ T and ε > 0 arbitrarily and let (xˆ , yˆ ) ∈ D(x, y ) \ {(x, y )} .
If E ∩ Dε (x, y ) = ∅ or E ∩ Dε (x, y ) = {(x, y )} , then
The inequality (17) means that f_ε^p(x, y) satisfies strong monotonicity over E under
the assumption E ∩ D_ε(x, y) ⊆ {(x, y)}. For a given positive value ε̄, any ε ∈ [0, ε̄)
and the cone D_ε(x, y) satisfy the two following conditions:
The existence and choice of the positive value ε̄ were discussed and provided in our
previous work (Fukuyama and Sekitani 2012) in a different context. The existence of
ε̄ > 0 guarantees Axiom C as well as Axiom A. For any face F of T, we define
VW(F) ≡ { (v, w) | Σ_{n=1}^N v_n + Σ_{m=1}^M w_m = 1, v x_j ≥ w y_j ∀j = 1, …, J, v x = w y ∀(x, y) ∈ F }.  (20)
[Figure: illustration in the input space (x_1, x_2) of the sets D(x, y) and D_ε(x, y) and the normal cones N(x, y) for units B, C and E, together with Ω(ε).]
Lemma 4: Choose (x, y) ∈ T and ε ∈ (0, ε̄) arbitrarily and let (x̄, ȳ) ∈ D_ε(x, y); then
for every l = 1, …, L, there exists (v^l, w^l) ∈ VW(F_l) such that

v^l x̄ − w^l ȳ ≥ v^l x − w^l y ≥ 0.  (22)
References
Ando K., A. Kai, T. Maeda, K. Sekitani (2012) Least Distance Based Inefficiency
Measures in the Pareto-Efficient Frontier in DEA, Journal of the Operational Research
Society of Japan 55: 73-91.
Pastor J.T., J. Aparicio (2010) The Relevance of DEA Benchmarking Information and
the Least-Distance Measure: Comment, Mathematical and Computer Modelling 52:
397-399.
Sahand Daneshvar
Tabriz Branch, Islamic Azad University, Tabriz, Iran, sahanddaneshvar1@yahoo.com
Abstract
In this paper, we investigate the problem of consensus-making among institutions in a
stock exchange with multiple criteria for evaluating performance, when the players
(institutions) are supposed to be egoistic and the score of each criterion for a player
is supposed to be positive. Each player sticks to his superiority regarding the
criteria. This paper introduces models for computing the minimal cost ratio and the
maximal benefit ratio for institutions.
Keywords: Cooperative Game, DEA, Game Theory, Stock Exchange, Weight
selection.
Introduction
Let us suppose n players each have m criteria for evaluating their competency or
ability, represented by a positive score for each criterion. As with a usual
classroom examination, the higher the score for a criterion, the better the player is
judged to perform on that criterion. For example, consider three students A, B and C,
with three criteria: linear algebra, real analysis and numerical analysis. The scores
are their records for the three subjects, measured by positive cardinal numbers. All
players are supposed to be selfish in the sense that they insist on their own advantage
in the scores. Similar situations exist in many societal problems. We now present
some potential applications of the DEA game. In the literature on cooperative game
theory, there have been many applications to cost- or benefit-sharing problems. The
proposed DEA game models are in sharp contrast to them, in that we can deal with
these problems in the multi-criteria environments that are common to real conflicts
in our society. This paper, through allocating and imputing the given benefit [6],
proposes a new scheme for computing the maximal allocated benefit and minimal
allocated cost for institutions under the framework of game theory and data
envelopment analysis (DEA). The remaining sections of this paper are organized as
follows:
Selfish behavior
Let X = (x_ij) ∈ R_+^{m×n} be the score matrix, where x_ij > 0 is the score of player j
on criterion i, for i = 1, …, m and j = 1, …, n. It is assumed that the higher the score
for a criterion, the better the player is judged to perform on that criterion. Each
player k has the right to choose a set of nonnegative weights w^k = (w_1^k, …, w_m^k)
for the criteria that are most preferable to him. Using the weights w^k, the relative
score of player k to the total score is defined as follows:

Σ_{i=1}^m w_i^k x_ik / Σ_{i=1}^m w_i^k ( Σ_{j=1}^n x_ij ).  (1)
The denominator represents the total score of all players as measured by player k's
weight selection, while the numerator indicates player k's self-evaluation using the
same weights. Hence, expression (1) gives player k's relative importance (share)
under the weight (or value) selection w^k. We assume that the weighted scores are
transferable. Player k wishes to maximize this ratio by selecting the most preferable
weights, resulting in the following fractional program:
max_{w^k}  Σ_{i=1}^m w_i^k x_ik / Σ_{i=1}^m w_i^k ( Σ_{j=1}^n x_ij )
s.t.  w_i^k ≥ 0  (∀i).  (2)
The motivation behind this program is that player k aims to maximize his relative
value as measured by the ratio: the weighted sum of his records vs. the weighted sum
of all players' records. This arbitrary weight selection is the fundamental concept
underlying DEA, initiated by Charnes et al. [2]. DEA terms these variable weights, in
contrast to a priori fixed ones. Refer to Cooper et al. [10] for further explanation of this
issue.
Before continuing, we reformulate the problem, without loss of generality.
We normalize the data set X row-wise, i.e., Σ_{j=1}^n x_ij = 1 (∀i).
The program (2) is not affected by this operation. Thus, using the Charnes–Cooper
transformation scheme, the fractional program (2) can be expressed as the following
linear program:

c(k) = max  Σ_{i=1}^m w_i^k x_ik
s.t.  Σ_{i=1}^m w_i^k = 1,  w_i^k ≥ 0  ∀i.  (3)
Apparently, the optimal solution is given by assigning 1 to w_{i(k)}^k for the criterion
i(k) such that x_{i(k)k} = max{ x_ik | i = 1, …, m } and assigning 0 to the weights of the
remaining criteria. We denote this optimal value by c(k):

c(k) = x_{i(k)k},  k = 1, …, n.  (4)

The value c(k) indicates the highest relative score for player k, obtained by this
optimal weight-selecting behavior. The optimal weight w_{i(k)}^k may differ from one
player to another.
Theorem 1.  Σ_{k=1}^n c(k) ≥ 1.  (5)

The inequality above follows from x_{i(k)k} ≥ x_{1k}, and the last equality follows from
the row-wise normalization.
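Since program (3) puts the whole weight on one criterion, c(k) is simply the largest entry of player k's column after row-wise normalization; a small numpy illustration (the score matrix is invented for the example):

```python
import numpy as np

# invented score matrix: rows = criteria, columns = players
X = np.array([[5.0, 3.0, 2.0],
              [1.0, 4.0, 5.0]])

# row-wise normalization: each criterion's scores sum to 1
Xn = X / X.sum(axis=1, keepdims=True)

# c(k) = max_i x_ik on normalized data (optimal weight 1 on that criterion)
c = Xn.max(axis=0)
print(c, c.sum())   # the sum exceeds 1 unless every column is constant
```

Here c = (0.5, 0.4, 0.5) and the shares sum to 1.4 > 1, illustrating Theorem 1.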
This theorem asserts that, if each player sticks to his egoistic sense of value and insists
on getting the portion of the benefit designated by c(k), the sum of shares usually
exceeds 1, and hence c(k) cannot serve as a division of the benefit. If the sum of c(k)
turns out to be 1, all players will agree to accept the division c(k), since it is obtained
by each player's most preferable weight selection. The latter case occurs when all
players share the same optimal weight selection. More concretely, we have the
following theorem.
Theorem 2. The equality Σ_{k=1}^n c(k) = 1 holds if and only if the data satisfy the
condition

x_{1k} = x_{2k} = … = x_{mk},  ∀k = 1, …, n.
That is, each player has the same score with respect to the m criteria.
Proof. The (if) part can be seen as follows:
The (only if) part can be proved as follows. Suppose x_11 > x_21; then there must be a
column h ≠ 1 such that x_1h < x_2h, otherwise the second row sum cannot attain 1. Thus
we have c(1) ≥ x_11, c(h) ≥ x_2h > x_1h and c(j) ≥ x_1j (∀j ≠ 1, h). Hence it holds that

Σ_{k=1}^n c(k) ≥ Σ_{j=1, j≠h}^n x_1j + x_2h > Σ_{j=1}^n x_1j = 1.

This leads to a contradiction. Therefore player 1 must have the same score in all
criteria. The same relation must hold for the other players.
In the above case, only one criterion is needed for describing the game and the
division proportional to this score is a fair division.
However, such situations might only occur in rare instances. In the majority of cases,
we have Σ_{k=1}^n c(k) > 1.
The c(S), with c(∅) = 0, defines a characteristic function of the coalition S. Thus this
game is represented by (N, c).
Definition 1. A function f is called sub-additive if for any S ⊂ N and T ⊂ N with
S ∩ T = ∅ the following holds: f(S ∪ T) ≤ f(S) + f(T).
Definition 2. A function f is called super-additive if for any S ⊂ N and T ⊂ N with
S ∩ T = ∅ the following holds: f(S ∪ T) ≥ f(S) + f(T).
Theorem 3. The characteristic function c is sub-additive: for any S ⊂ N and T ⊂ N
with S ∩ T = ∅ we have c(S ∪ T) ≤ c(S) + c(T).  (8)
Proof. By renumbering the indexes, we can assume that S = {1, …, h}, T = {h+1, …, k}
and S ∪ T = {1, …, k}. For these sets, it holds that

c(S ∪ T) = max_i Σ_{j=1}^k x_ij ≤ max_i Σ_{j=1}^h x_ij + max_i Σ_{j=h+1}^k x_ij = c(S) + c(T).
Theorem 4. c(N) = 1.
The optimal value d(k) assures the minimum division that player k can expect from
the game.

Theorem 5.  Σ_{k=1}^n d(k) ≤ 1.  (10)

Thus this game starts from d(k) > 0, k = 1, …, n, and enlarges the gains by coalition
until the grand coalition N, with d(N) = 1, is reached.
Theorem 7. d(S) + c(N \ S) = 1, ∀S ⊂ N.
Proof. By renumbering the indexes, we can assume that S = {1, …, h}, N = {1, …, n}
and N \ S = {h+1, …, n}. For these sets, it holds that

d(S) + c(N \ S) = min_i Σ_{j=1}^h x_ij + max_i Σ_{j=h+1}^n x_ij
= min_i ( Σ_{j=1}^n x_ij − Σ_{j=h+1}^n x_ij ) + max_i Σ_{j=h+1}^n x_ij
= min_i ( 1 − Σ_{j=h+1}^n x_ij ) + max_i Σ_{j=h+1}^n x_ij
= 1 − max_i Σ_{j=h+1}^n x_ij + max_i Σ_{j=h+1}^n x_ij = 1.
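On row-normalized data, the coalition values reduce to c(S) = max_i Σ_{j∈S} x_ij and d(S) = min_i Σ_{j∈S} x_ij, so the duality of Theorem 7 can be checked numerically; a sketch with an invented matrix:

```python
import numpy as np

# invented score matrix: rows = criteria, columns = players
X = np.array([[5.0, 3.0, 2.0],
              [1.0, 4.0, 5.0]])
Xn = X / X.sum(axis=1, keepdims=True)    # row-wise normalization

def c_val(S):
    """Max over criteria of the coalition's normalized score sum."""
    return Xn[:, S].sum(axis=1).max()

def d_val(S):
    """Min over criteria of the coalition's normalized score sum."""
    return Xn[:, S].sum(axis=1).min()

S = [0]                                   # coalition {player 1}
comp = [1, 2]                             # its complement N \ S
print(d_val(S) + c_val(comp))             # Theorem 7: the sum equals 1
```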
Σ_{i=1}^s u_i y_ij / Σ_{i=1}^s u_i ( Σ_{j=1}^n y_ij ).  (12)
Player j wishes to maximize his benefit. We can express this situation by the linear
program below:

max  Σ_{i=1}^s u_i y_ij
s.t.  Σ_{i=1}^s u_i ( Σ_{j=1}^n y_ij ) = 1,
      Σ_{i=1}^s u_i y_ij ≥ 0  (j = 1, …, n),
      u_i ≥ 0  ∀i.  (13)
The weights of benefits are nonnegative. A characteristic function of the coalition S is
defined by the linear program below:

c(S) = max  Σ_{i=1}^s u_i Σ_{j∈S} y_ij
s.t.  Σ_{i=1}^s u_i ( Σ_{j=1}^n y_ij ) = 1,
      Σ_{i=1}^s u_i y_ij ≥ 0  (j = 1, …, n),
      u_i ≥ 0  ∀i.  (14)
In the program (14), the benefits of all players are nonnegative. Since the constraints
of program (14) are the same for all coalitions, we have the following theorem.
Theorem 8. The maximal allocated benefits game satisfies a sub- additive property.
Proof. For any S ⊂ N and T ⊂ N with S ∩ T = ∅, we have:

c(S ∪ T) = max_u Σ_{i=1}^s u_i Σ_{j∈S∪T} y_ij = max_u Σ_{i=1}^s u_i ( Σ_{j∈S} y_ij + Σ_{j∈T} y_ij )
≤ max_u Σ_{i=1}^s u_i Σ_{j∈S} y_ij + max_u Σ_{i=1}^s u_i Σ_{j∈T} y_ij = c(S) + c(T).
min  Σ_{i=1}^m v_i x_ij
s.t.  Σ_{i=1}^m v_i ( Σ_{j=1}^n x_ij ) = 1,  Σ_{i=1}^m v_i x_ij ≥ 0  (j = 1, …, n),
      v_i ≥ 0  ∀i.  (15)

The weights of costs are nonnegative. A characteristic function of the coalition S is
defined by the linear program below:
d(S) = min  Σ_{i=1}^m v_i Σ_{j∈S} x_ij
s.t.  Σ_{i=1}^m v_i ( Σ_{j=1}^n x_ij ) = 1,
      Σ_{i=1}^m v_i x_ij ≥ 0  (j = 1, …, n),
      v_i ≥ 0  ∀i.  (16)
In the program (16), the costs of all players are nonnegative. The minimal allocated
costs game satisfies a super-additive property.
Theorem 9. The maximal allocated benefit game (N, c) and the minimal allocated cost
game (N, d) are dual games: for any S ⊂ N, we have d(S) + c(N \ S) = 1.
Proof.

c(N \ S) = max_u Σ_{i=1}^s u_i Σ_{j∈N\S} y_ij = max_u Σ_{i=1}^s u_i ( Σ_{j∈N} y_ij − Σ_{j∈S} y_ij )
= max_u ( Σ_{i=1}^s u_i Σ_{j∈N} y_ij − Σ_{i=1}^s u_i Σ_{j∈S} y_ij )
= max_u ( 1 − Σ_{i=1}^s u_i Σ_{j∈S} y_ij ) = 1 − min_u ( Σ_{i=1}^s u_i Σ_{j∈S} y_ij ) = 1 − d(S).
In the above programs we presented a scheme for computing the maximal benefit ratio
and the minimal cost ratio for coalitions. We can also compute these ratios for
individual members of a coalition, using the extended programs in this paper. These
results are applied in the following section.
w_i ≥ 0  ∀i.  (17)

Similarly, we can avoid the occurrence of zero weights in the linear programs for
maximal allocated benefits and minimal allocated costs. Then we have:
max  Σ_{i=1}^s u_i y_ij
s.t.  Σ_{i=1}^s u_i ( Σ_{j=1}^n y_ij ) = 1,
      Σ_{i=1}^s u_i y_ij ≥ 0  (j = 1, …, n),
      L_i ≤ u_i / u_1 ≤ U_i  (i = 2, …, s),  u_i ≥ 0  ∀i.  (18)

min  Σ_{i=1}^m v_i x_ij
s.t.  Σ_{i=1}^m v_i ( Σ_{j=1}^n x_ij ) = 1,
      Σ_{i=1}^m v_i x_ij ≥ 0  (j = 1, …, n),
      L_i ≤ v_i / v_1 ≤ U_i  (i = 2, …, m),  v_i ≥ 0  ∀i.  (19)
c(S) = max  Σ_{i=1}^s u_i Σ_{j∈S} y_ij
s.t.  Σ_{i=1}^s u_i ( Σ_{j=1}^n y_ij ) = 1,  Σ_{i=1}^s u_i y_ij ≥ 0  (j = 1, …, n),
      L_i ≤ u_i / u_1 ≤ U_i  (i = 2, …, s),  u_i ≥ 0  ∀i.  (20)

d(S) = min  Σ_{i=1}^m v_i Σ_{j∈S} x_ij
s.t.  Σ_{i=1}^m v_i ( Σ_{j=1}^n x_ij ) = 1,  Σ_{i=1}^m v_i x_ij ≥ 0  (j = 1, …, n),
      L_i ≤ v_i / v_1 ≤ U_i  (i = 2, …, m),  v_i ≥ 0  ∀i.  (21)
Programs (14) and (16) are modified into (20) and (21), respectively.
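The ratio bounds L_i ≤ u_i/u_1 ≤ U_i of programs (18)-(21) stay linear once rewritten as L_i u_1 − u_i ≤ 0 and u_i − U_i u_1 ≤ 0, so program (20) remains an ordinary LP; a scipy sketch along these lines, with invented data and bounds:

```python
import numpy as np
from scipy.optimize import linprog

def coalition_benefit_ar(Y, S, L, U):
    """Program (20): coalition benefit with assurance-region bounds
    L_i <= u_i/u_1 <= U_i for i = 2,...,s, linearized as
    L_i*u_1 - u_i <= 0 and u_i - U_i*u_1 <= 0."""
    s = Y.shape[0]
    obj = -Y[:, S].sum(axis=1)            # linprog minimizes
    A_eq = Y.sum(axis=1).reshape(1, -1)   # normalization constraint
    A_ub, b_ub = [], []
    for i in range(1, s):                 # criteria 2..s against criterion 1
        lo = np.zeros(s); lo[0] = L[i - 1]; lo[i] = -1.0
        hi = np.zeros(s); hi[0] = -U[i - 1]; hi[i] = 1.0
        A_ub += [lo, hi]; b_ub += [0.0, 0.0]
    res = linprog(obj, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * s)
    return -res.fun

# invented benefit scores: rows = criteria, columns = players
Y = np.array([[5.0, 3.0, 2.0],
              [1.0, 4.0, 5.0]])
# illustrative bounds: u_2/u_1 must lie in [0.5, 2]
print(coalition_benefit_ar(Y, [0], L=[0.5], U=[2.0]))
```

The bounds force u_1 > 0 at any feasible point, which is how the zero-weight problem discussed above is avoided.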
[Three result tables for the 15 institutions follow; the column headings were lost in extraction. In the first table the first column is restored from flattened scientific notation (e.g. "3.3464320121013" read as 3.346432012×10^13).]

No.  Column 1            Column 2
1    3.346432012×10^13   67.5846
2    4.33461703×10^11    7.314482759
3    1.5112682804×10^11  87.40897756
4    1.652446705×10^12   4006.200769
5    1.03719651×10^12    120.6611866
6    1.683680585×10^13   2037.625
7    2.8709442328×10^11  2488.081878
8    1.23737877×10^12    418.670247
9    4.458106471×10^12   2126.74093
10   2.323151952×10^11   760.6495957
11   5.076028164×10^12   10551.21025
12   1.68024822×10^13    8194.995143
13   5.643958511×10^11   1528.364516
14   2.199245032×10^12   4975.210573
15   2.703307396×10^12   151.4166932

No.  Column 1  Column 2
1    2.66      66.619143
2    4         7.156545209
3    3.97      0.450634985
4    4         2.88184438
5    3.93      13.95348837
6    3.98      2.411575563
7    3.94      5.5555
8    3.95      3.421166307
9    3.98      1.055980861
10   3.92      2.631578947
11   3.98      8.571428571
12   3.98      0.45958367
13   3.91      1488.354974
14   3.91      17.17030568
15   3.97      3.13577924

No.  Column 1  Column 2
1    0.0458    0.3859
2    0.0689    0.0052
3    0.0684    0.0017
4    0.0689    0.0190
5    0.0677    0.0151
6    0.0685    0.2478
7    0.0678    0.0329
8    0.0680    0.0254
9    0.0685    0.0512
10   0.0675    0.0027
11   0.0685    0.0583
12   0.0685    0.1928
13   0.9166    0.9166
14   0.0673    0.0065
15   0.0684    0.0351
The first company has the minimal allocated benefit and the 13th has the maximal
allocated benefit. This scheme is thus a way of attaining a reasonable and fair
division among institutions.
Conclusions
In this paper, we have studied the common-weight issues that connect the game
solution with the arbitrary weight-selection behavior of the players (institutions). In
this regard, we have proposed a method for computing the maximal allocated benefit
and minimal allocated cost for institutions. An extension of the problem, in which
both superiority and inferiority criteria can be considered for the players
(institutions), has been discussed. Furthermore, a numerical example, in which some
stock-exchange institutions are evaluated with the proposed methods, has been
presented.
W.W. Cooper, L.M. Seiford, K. Tone, Data Envelopment Analysis: A Comprehensive
Text with Models, Applications, References and DEA-Solver Software, Boston: Kluwer
Academic Publishers, 2000.
Acknowledgements
The authors acknowledge the support from Young Researchers Club. This research is
supported by Tabriz Branch, Islamic Azad University research budget and
sponsorship.
Finn R. Førsund,
Department of Economics, University of Oslo, Norway, f.r.forsund@econ.uio.no
Andrey V. Lychev
National University of Science and Technology «MISiS», Russia, lychev@misis.ru
Abstract
Some specific features of non-radial DEA (data envelopment analysis) models cause
problems in returns-to-scale (RTS) measurement. In the scientific literature on DEA,
several methods have been suggested to deal with RTS measurement in non-radial
DEA models. These methods are based on the strong complementary slackness
conditions (SCSC) of optimization theory. In this paper, we propose and substantiate
a direct method for RTS measurement in non-radial DEA models. Our computational
experiments document that the proposed method works reliably and efficiently on
real-life data sets.
Introduction
The measurement of scale properties of frontier functions estimated using DEA models
may in some cases be problematic. In particular the non-radial DEA models (Banker
et al., 2004) can possess some specific features that give rise to estimation problems.
First, multiple reference sets may exist for a production unit. Second, multiple
supporting hyperplanes may occur on optimal units of the frontier. Third, multiple
projections (a projection set) may occur in the space of input and output variables. All
these features cause certain difficulties under measurement of returns to scale of
production units.
Banker et al. (2004) proposed a two-stage approach to determine returns to scale in
the non-radial models. Sueyoshi and Sekitani (2007) showed that this approach may
generate incorrect results in some cases. An interesting approach was proposed for
measurement of returns to scale based on using strong complementary slackness
conditions (SCSC) in the non-radial DEA models (Sueyoshi and Sekitani, 2007).
However, our theoretical consideration and computational experiments show that the
SCSC non-radial model may not be efficient from the computational point of view.
The SCSC non-radial model generates ill-conditioned basic matrices during the
Problem statement
The non-radial DEA model can be written in the following form (Banker et al., 2004;
Sueyoshi and Sekitani, 2007)
max  h = C^{+T} S^+ + C^{−T} S^−
subject to
Σ_{j=1}^n X_j λ_j + S^− = X_o,
Σ_{j=1}^n Y_j λ_j − S^+ = Y_o,
Σ_{j=1}^n λ_j = 1,  λ_j ≥ 0,  j = 1, …, n,
S^+ ≥ 0,  S^− ≥ 0,  (1)
where X_j = (x_1j, …, x_mj) and Y_j = (y_1j, …, y_rj) represent the observed inputs and
outputs of production units (X_j, Y_j), j = 1, …, n, and S^− = (s_1^−, …, s_m^−) and
S^+ = (s_1^+, …, s_r^+) are vectors of slack variables. The superscript "T" indicates a
vector transpose. The components of the objective-function vectors C^+ and C^− are
specified as follows:

c_k^− = (m + r)^{−1} ( max{x_kj | j = 1, …, n} − min{x_kj | j = 1, …, n} )^{−1},  k = 1, …, m,
c_i^+ = (m + r)^{−1} ( max{y_ij | j = 1, …, n} − min{y_ij | j = 1, …, n} )^{−1},  i = 1, …, r.

The model (1) is also called the range-adjusted measure (RAM) model (Cooper et al., 2000).
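Model (1) can be solved with any LP code; a minimal scipy sketch of the RAM model, with a tiny invented data set (one input, one output, three units):

```python
import numpy as np
from scipy.optimize import linprog

def ram(X, Y, o):
    """Range-adjusted measure, model (1).  X is the (m, n) input matrix,
    Y the (r, n) output matrix, o the index of the evaluated unit.
    Returns the optimal inefficiency h* (0 for efficient units)."""
    m, n = X.shape
    r = Y.shape[0]
    c_minus = 1.0 / ((m + r) * (X.max(axis=1) - X.min(axis=1)))
    c_plus = 1.0 / ((m + r) * (Y.max(axis=1) - Y.min(axis=1)))
    # variables: [lambda (n), s_minus (m), s_plus (r)]; linprog minimizes
    obj = np.concatenate([np.zeros(n), -c_minus, -c_plus])
    A_eq = np.zeros((m + r + 1, n + m + r))
    A_eq[:m, :n] = X
    A_eq[:m, n:n + m] = np.eye(m)          # X lam + s- = X_o
    A_eq[m:m + r, :n] = Y
    A_eq[m:m + r, n + m:] = -np.eye(r)     # Y lam - s+ = Y_o
    A_eq[-1, :n] = 1.0                     # sum lam = 1 (VRS)
    b_eq = np.concatenate([X[:, o], Y[:, o], [1.0]])
    res = linprog(obj, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + m + r))
    return -res.fun

# invented data: 1 input, 1 output, 3 units; unit 2 lies below the frontier
X = np.array([[2.0, 4.0, 6.0]])
Y = np.array([[2.0, 3.0, 6.0]])
print([ram(X, Y, o) for o in range(3)])
```

Units 1 and 3 are efficient (h* = 0), while unit 2 receives a positive range-adjusted inefficiency.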
Corollary 1. Let two different faces Γ1 and Γ2 of the set T intersect. Then exactly one
of the following cases occurs:
(i) one face belongs entirely to the other, say Γ1 ⊂ Γ2, and the set Γ1 ∩ Γ2 is part of
the boundary of Γ2;
(ii) the set Γ1 ∩ Γ2 is part of the boundary of both Γ1 and Γ2; moreover, Γ1 ∩ Γ2 is
itself a face whose dimension is less than the dimensions of Γ1 and Γ2.
From the assertions written above, it follows that faces can intersect only along their
boundaries. Taking into account also that the number of faces of the set T is finite, we
obtain that there exists a face of minimum dimension Γmin.
Since the optimal solution is obtained with the help of the simplex method, every
non-basic variable is equal to zero. However, some basic variables may also be equal
to zero; in this case the optimal solution is considered degenerate.
For the dual problem (2), the set (J* ∪ J_v ∪ J_u) contains a basic set of indices.
However, the set (J* ∪ J_v ∪ J_u) may also contain non-basic indices, in which case
the dual problem (2) is considered degenerate. Thus, the following relations hold:

( I* ∪ I_x^− ∪ I_y^+ ) ⊆ J_B ⊆ ( J* ∪ J_v ∪ J_u ),
Problem Q_o^l ( l ∈ J* ):

max  f_l = λ_l
subject to
Σ_{j∈J*} X_j λ_j + Σ_{k∈J_v} e_k^− s_k^− = X_o,
Σ_{j∈J*} Y_j λ_j − Σ_{i∈J_u} e_i^+ s_i^+ = Y_o,
Σ_{j∈J*} λ_j = 1,  λ_j ≥ 0,  j ∈ J*,
s_k^− ≥ 0,  k ∈ J_v,  s_i^+ ≥ 0,  i ∈ J_u,  (6)

where e_k^− ∈ E^m and e_i^+ ∈ E^r are identity vectors associated with variables s_k^−
and s_i^+, respectively.
Notice that problem (6) includes only those variables for which the corresponding
dual constraints (5) hold as equalities at the optimal dual variables. According to the
duality theorems of linear programming, this means that optimal variables of problem
(6) will also be optimal variables of problem (1).
The Procedure that finds all production units belonging to the minimum face Γmin and
to the set Λ* is described as follows:
Step 1. Initialize the sets J_o = ∅, JH = J*, J_1 = ∅. If the set JH is not empty, go to
step 2; if JH is empty, go to step 3.
Step 2. Choose an index l ∈ JH and solve problem (6). If f_l* > 0, then set
J_o = J_o ∪ {l}; if f_l* = 1, then set J_1 = J_1 ∪ {l}. Delete index l from JH
(JH = JH \ {l}). If JH is not empty, repeat step 2; otherwise go to step 3.
Step 3. The set J_o determines the units belonging to the face Γmin, and the set J_1
determines the units belonging to the set Λ*. The Procedure is completed.
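The Procedure is a plain loop over the candidate set J*; a sketch in which solve_problem_6 is a placeholder (assumed here, not from the paper) for an LP routine returning the optimal value f_l* of problem (6):

```python
def minimum_face_procedure(J_star, solve_problem_6):
    """Steps 1-3 of the Procedure.  solve_problem_6(l) must return the
    optimal value f_l* of problem (6) for unit l; here it is an injected
    callable, standing in for an actual LP solver."""
    J_o, J_1 = set(), set()        # step 1: initialize
    JH = set(J_star)
    while JH:                      # step 2: loop until JH is empty
        l = JH.pop()
        f_l = solve_problem_6(l)
        if f_l > 0:
            J_o.add(l)             # unit l lies on the minimum face
        if f_l == 1:
            J_1.add(l)             # unit l belongs to the optimal set
    return J_o, J_1                # step 3

# toy stand-in for the LP: optimal values chosen purely for illustration
fake_f = {1: 1.0, 2: 0.4, 3: 0.0}
print(minimum_face_procedure([1, 2, 3], fake_f.get))
```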
The standard present-day optimization software generates only one point in the
multidimensional space as an optimal solution. However, this may not be sufficient to
determine returns to scale on the whole minimum face, since different vertices of the
face may display different returns to scale. Any unit from set J* may belong to the
minimum face. The standard software generates set J* as a by-product.
So, the Procedure enables one to check whether some unit from set J * belongs to the
minimum face or not. The validity of this assertion is based on the theorems given
below.
After running the Procedure, the minimum face Γmin, containing the optimal set Λ*,
can be written in the form:

Γmin = { (X, Y) | X = Σ_{j∈J_o} X_j λ_j, Y = Σ_{j∈J_o} Y_j λ_j, Σ_{j∈J_o} λ_j = 1, λ_j ≥ 0, j ∈ J_o }.  (7)

However, some points of the set Γmin may not belong to the set Λ* on the frontier.
The set Λ* is written as:

Λ* = { (X, Y) | (X, Y) ∈ Γmin, X ≤ X_o, Y ≥ Y_o,
      C^{+T} Y − C^{−T} X = (v* − C^−)^T X_o − (u* − C^+)^T Y_o + u_0* }.  (8)
The Procedure enables one to find all units belonging to the face Γmin and to the set
Λ* . The validity of this assertion is based on the following theorems.
Theorem 1. Let unit Z t ∈ E m+ r be an interior point of polyhedron Γ ⊂ E m+ r , let also
unit Z p ∈ E m+ r be any point of this polyhedron, which is distinct from point Z t . Then
unit Z t can be represented as a convex combination of (m + r + 1) units of set Γ and
unit Z p enters this combination with a nonzero coefficient.
Theorem 2. The optimal value of problem (6) is strictly positive f l * > 0 if and only if
unit ( X l , Yl ) belongs to the minimum face Γmin that contains the set Λ* .
In essence, Theorem 1 says that, if some unit Z p is a vertex of face Γmin or belongs to
the face, then it is necessary that there exists such solution that variable λ*p enters this
solution with a nonzero coefficient.
Corollary 3. If the optimal value of problem (6) is f_l* = 1, then unit (X_l, Y_l) belongs
to the set Λ*.
Indeed, if f_l* = 1, then λ_l* = 1; this means that λ_l* is the only nonzero λ-variable in
the optimal basis, hence unit (X_l, Y_l) belongs to the set Λ*.
It was proved in Krivonozhko et al. (2012) that interior points of a face have the same
returns to scale, so it is sufficient to determine returns to scale at any interior point of
this face. An interior point (X̄, Ȳ) of the face Γmin can be chosen as a strong convex
combination of units from the set J_o, that is

X̄ = Σ_{j∈J_o} X_j λ_j,  Ȳ = Σ_{j∈J_o} Y_j λ_j,  Σ_{j∈J_o} λ_j = 1,  λ_j > 0,  j ∈ J_o.
Returns to scale of unit (X̄, Ȳ) can be measured by at least two methods. In the first
(indirect) method (Banker et al., 2004), the BCC model is solved at the first step; the
dimension of this problem is (m + r + 1) × (m + r + n); then at the second step
Computational experiments
However, the question arises: “What is the ‘payment’ for discovering all vertices of the
face Γmin ?”
We calculated the number of iterations performed by the CPLEX software to solve
problem (1) for all units (X_j, Y_j), j = 1, …, n, of Model 1. This number equals
α = 2876 iterations. Next, we calculated the number of iterations performed by
CPLEX to solve problems (6) for all units (X_j, Y_j), j = 1, …, n. This number makes
up β = 1782 ≈ 0.62α iterations in problems of type (6). That is really not a heavy
burden, even for an ordinary notebook.
Finding. The non-radial DEA models possess some specific features. However, this is
not a problem for finding returns to scale of the set of optimal points. For this
purpose, it is sufficient to find an interior point of the minimum face that contains the
set of optimal solutions of problem (1) and to determine returns to scale at this
interior point. Such a solution requires far fewer computations than solving problem
(1) for the specific unit.
Conclusions
In Sueyoshi and Sekitani (2007; 2009), a method was proposed in order to measure
returns to scale in the non-radial DEA models using strong complementary slackness
conditions. However, the size of the SCSC non-radial model (3) increases significantly
in comparison with the model (1). In particular, for the banks data set the size of basic
matrices during the solution process becomes ( 1840 × 1840 ) instead of ( 7 × 7 ) in the
model (1). In addition, some constraints in model (3) do not make sense from an
economic point of view. Unreliable solutions may follow due to these reasons.
In our method, it is sufficient to solve several problems of the form (6), which have
far fewer variables than problem (1). Thus, the proposed approach is reliable and
efficient for the solution of real-life problems.
Moreover, it was stressed in Sueyoshi and Sekitani (2007) that the method of Banker
et al. (2004) cannot always generate reliable results because of the difficulties in
non-radial DEA models described above. However, from our point of view, the
method of Banker et al. (2004) can also be used to measure returns to scale. For this
purpose, it is sufficient to take an interior point of the minimum face, found by the
method proposed in this paper, and then apply the method of Banker et al. (2004).
Cooper, W. W., Park, K. S., and Pastor, J. T. (2000) RAM: A range adjusted measure of
efficiency. Journal of Productivity Analysis 11: 5–42.
Førsund, F. R., Hjalmarsson, L., Krivonozhko, V. E., and Utkin, O. B. (2007) Calculation
of scale elasticities in DEA models: direct and indirect approaches. Journal of
Productivity Analysis 28: 45–56.
Acknowledgements
The research is carried out with financial support of the Programme of Creation and
Development of the National University of Science and Technology «MISiS». The
reported study was partially supported by RFBR, research projects No.11-07-00698 and
No.12-07-31136.
Abstract
This work presents a strategic approach to the formulation and structuring of the
health problem in 5,565 Brazilian municipalities using Concept Maps and Data
Mining. In addition, the Operational Research method DEA (Data Envelopment
Analysis) is used to determine municipal targets and performance indicators and to
establish benchmarks for regulating the public health sector in Brazil. The analytical
results contribute to a progressive incentive for greater productivity in Brazilian
municipal health.
Keywords: Data Envelopment Analysis (DEA), Data Mining, Concept Map,
Multimethodology, Public Health
Introduction
Data Envelopment Analysis (DEA) has been used to support public policy in
regulatory processes because of its ability to handle multidisciplinary problems and
deliver efficient targets. However, public policy is also characterized by its inherent
complexity, which requires taking a particular arbitrary perspective regarding the
system purpose and relevant factors.
As a comparative method in its essence, DEA requires that homogeneity prevail as
one of the key issues for the Decision Making Units under assessment. Nevertheless,
not much research has been devoted to either complexity or homogeneity in DEA
applications. One reason could be the small size of most databases in regulatory
problems, which does not allow the application of clustering procedures to data.
This work presents a strategic approach to the formulation and structuring of the
health problem, taking advantage of the large size database comprising 5565 Brazilian
municipalities. First it uses Concept Maps to represent the many factors that
characterize the complex issues involved in health performance management in Brazil.
Then heterogeneity is investigated and a Data Mining technique is applied to find
appropriate clusters of around a thousand municipalities.
Methods
According to Rosenhead and Mingers (2001), Concept Maps are important for making
problems explicit when we seek proposals for dealing with complex problems, rather
than just solving a simplified part of a problem from a particular perspective. In this
approach, the structuring of matters, issues and situations is one of the stages of
modeling at the very beginning of the decision-making process.
For Weiss and Indurkhya (1999), data mining is the search for valuable information in
large databases, as a cooperative effort between humans and computers. Humans
design databases, describe problems and define goals. Computers check data and look
for patterns that match the goals established by humans.
Data Envelopment Analysis (DEA) has been used in the calculation of performance
indicators and to establish benchmarks for the regulation of public sectors. The
method lends itself to multidisciplinary, multi-agent issues and may be used in the
estimation of production frontier functions or for incorporating the opinion of
specialists, like a multicriteria method.
Multimethodology is the "art" of using, in combination, more than one methodology
or parts of methodologies, in order to address the various problems in the best way,
as proposed by Mingers and Brocklesby (1997). The multimethodology approach
assumes that there is no single method that is most appropriate, but that all methods
have advantages and disadvantages that can be balanced.
Antoun Netto (2012) has more details about the clustering technique in Brazilian
municipalities.
For the determination of municipal development targets and indicators in the area of
health we will consider clusters 1 and 3. Such a choice results from the following
facts:
DIMENSION | Circulatory Diseases | External Causes | Infant Mortality
INPUT | Deaths, Chapter IX (Circulatory Diseases) | Deaths, Chapter XX (External causes of morbidity and mortality) | Deaths, Chapter XVI (Some disorders originating in the perinatal period) and Chapter XVII (Congenital malformations, deformities and chromosomal abnormalities)
OUTPUT | Resident Population, 40 to 59 years old | Resident Population, 20 to 29 years old | Resident Population, < 1 year old
40 to 59 years old
• In the first stage, we considered the output-oriented VRS basic models for
the three dimensions: infant mortality, circulatory diseases and external
causes.
• In the second stage, we determined the final efficiencies for these
municipalities, using the weighted average.
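The output-oriented VRS (BCC) model used in the first stage can be written as one linear program per DMU. Below is a minimal sketch, assuming SciPy's `linprog` is available; the function name and toy data are ours, not the authors':

```python
import numpy as np
from scipy.optimize import linprog

def bcc_output_efficiency(X, Y):
    """Output-oriented VRS (BCC) DEA: returns phi_o >= 1 for each DMU,
    where phi_o is the factor by which DMU o could expand its outputs."""
    n, m = X.shape                 # n DMUs, m inputs
    s = Y.shape[1]                 # s outputs
    scores = []
    for o in range(n):
        # decision variables: [phi, lambda_1, ..., lambda_n]; maximize phi
        c = np.r_[-1.0, np.zeros(n)]
        A_ub = np.vstack([
            np.c_[np.zeros((m, 1)), X.T],       # sum_j lambda_j x_ij <= x_io
            np.c_[Y[o].reshape(-1, 1), -Y.T],   # phi * y_ro <= sum_j lambda_j y_rj
        ])
        b_ub = np.r_[X[o], np.zeros(s)]
        A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)   # sum_j lambda_j = 1 (VRS)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(None, None)] + [(0, None)] * n, method="highs")
        scores.append(res.x[0])
    return np.array(scores)
```

With one input and one output, a DMU producing y = 1 at x = 1 next to a peer producing y = 2 at the same input level obtains phi = 2: its output could be doubled. The second-stage score is then a weighted average of the three dimension scores.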
Benchmark municipalities identified by dimension include: Manaus (AM), Abaetetuba (PA), Belém (PA), Frutal (MG), Porto Ferreira (SP), Santarém (PA), Almirante Tamandaré (PR), Barcarena (PA), Monte Santo (BA), Concórdia (SC), Laguna (SC) and Viamão (RS).
Cluster | Benchmark
1 | Manaus (AM)
3 | São Paulo (SP)
Conclusions
This work illustrated the use of conceptual maps for the design and structuring of the
Brazilian public health assessment. Knowledge mapping was used to establish a
context for the qualitative and quantitative variables, to be used in Data Mining and
Data Envelopment Analysis, respectively.
The approach used encompasses different major diseases that are prevalent and
representative of death risks at three different age ranges. The methodology intends to
evaluate three different processes and respective sets of variables that are not directly
related, and thus assessed in three different DEA models, constituting a first stage.
Unlike other approaches, we restricted the analysis to terminal health outcomes,
represented by the worst situation, namely the occurrence of death. We intend to
extend the method to include other, less severe diseases.
Other recommendations for future research concern the evolution of efficiency over
the period 2009-2012, since the first year coincided with the beginning of the current
mandate of mayors in Brazilian municipalities. We suggest using the Malmquist index
to evaluate municipal health actions and programs during this mandate, considering
the change in the health performance of municipalities between the two periods
(2009 and 2012).
Finally, it is important to facilitate the validation of these results by qualified
healthcare professionals, regarding the potential for improvements and the
observation of the recommendations in the National Policy of Health Promotion,
whose general objective is to promote the quality of life and reduce vulnerability and
health risks related to its determinants and conditioning – ways of living, working
conditions, housing, environment, education, leisure, culture and access to essential
goods and services.
References
Antoun Netto, S. O. (2012). O uso de Multimetodologia para a determinação de Metas
e Indicadores de Desenvolvimento Municipal na Área da Saúde. Tese (doutorado) –
UFRJ/ COPPE/ Programa de Engenharia de Produção.
Lins, M. E. et al. (2007). O uso da Análise Envoltória de Dados (DEA) para avaliação
de hospitais universitários brasileiros. Ciência & Saúde Coletiva, 12(4): 985-998.
Reisman, A.; Oral, M. (2005). Soft systems methodology: A context within a 50-year
retrospective of OR/MS. Interfaces, 35(2): 164-178.
Rosenhead, J.; Mingers, J. (2001). Rational analysis for a problematic world: problem
structuring methods for complexity, uncertainty and conflict. 2nd ed. West Sussex:
John Wiley & Sons, 375p.
Lúcio Silva
Department of Production Engineering, Federal University of Pernambuco,
lucio_camara@hotmail.com
Abstract
Do Microfinance Institutions trade-off between financial and social efficiencies?
Previous evidence shows that a positive correlation exists. In this paper we argue that
previous results are biased because they do not account for differences between
countries. We propose a decomposition in which efficiency is broken down into a
MFI and a country component. Results are quite different from those previously
presented, since we isolate any country-specific characteristics that might bias
efficiency estimates. The correlation we estimate between financial and social
efficiency is about 35% of what was previously reported. Also, this correlation varies
within the financial efficiency distribution. Looking at profits, we obtain that they are
indeed correlated with efficiencies; the financial efficiency correlation is positive but,
contrary to previous findings, the social efficiency correlation is negative. Finally, we
show that countries indeed face different frontiers implying that efficiencies should be
assessed by comparing units within the same country.
Keywords: DEA, microfinance institutions (MFIs), social efficiency
Introduction
Microfinance Institutions (MFIs) are nowadays present in most countries around the
world. Their main purpose is to provide credit to the poor who are excluded from the
traditional banking system and, for this reason, they are widely recognized for their
contribution to the development of financial markets in the developing world.
Contrary to what is observed in the literature on the traditional banking sector, little
is known about the performance of MFIs, especially considering their dual purpose
of being financially and socially efficient (Gutiérrez-Nieto et al., 2009). Berguiga
(2009), for example, concludes after an extensive review that MFIs do not trade-off
between being financially and socially efficient, that is, the literature actually suggests
that these two requirements are compatible and may even be complementary. This
question is also addressed in Gutiérrez-Nieto et al. (2009). They analyze both the
financial and social efficiency of Microfinance Institutions using data for 89 MFIs.
Data
The data source, as in most of the recent studies looking at MFIs (Gutiérrez-Nieto et
al. (2009), among others), comes from the Microfinance Information Exchange (MIX)
database. The sample we use in this article was obtained by first dropping all MFIs
that had missing values on any of the variables used in the analysis and then
restricting our sample to include only countries that had at least 20 MFIs, given we
want to construct country-specific frontiers. All the data refer to the year 2008. This
leaves us with 483 MFIs distributed across 14 countries. The two largest
countries, according to the number of MFIs analyzed, are Russia, with 78, and India,
with 57 microfinance institutions.
We estimate DEA efficiency scores using the CCR model of constant returns to scale,
which has been the model used in this specific literature. Given the main purpose of
our article is to improve upon what has been done in the literature, we follow the
input and output selection of Gutiérrez-Nieto et al. (2009). There are three inputs: total
assets (A), represented by the total of all net asset accounts ($), taken directly from the
Mixmarket dataset; Operating Cost (C), defined as expenses related to operations,
such as transportation, office, and personnel expenses; and number of active
employees (E) employed by the MFI. The two financial outcomes considered are:
gross loan portfolio (L), defined as the outstanding balance of all of the MFI's loans,
including current, delinquent and restructured loans, but not loans that have been
written off (it does not include interest receivable); and financial
revenue (R), defined as the revenue generated from the gross loan portfolio and from
investments plus other operating revenue. The two social outcomes considered are:
number of active borrowers who are female (W); and an indicator of benefit to the
poorest (P), defined through the average loan balance per borrower and the per
capita Gross National Income.
The number of active borrowers who are female is taken directly from the Mixmarket
dataset. We interpret this as a measure of female empowerment at home or, in some
degree, within her society. The indicator of benefit to the poorest is another social
outcome of MFIs widely considered in the literature, since MFIs are partially designed
to provide financial services to the poorest in a given country. We follow Gutierrez-
Nieto et al. (2009) and define an indicator to benefit the poorest as the ratio between
the average loan balance per borrower and per capita Gross National Income
(pcGNI). The intuition is that wealthier individuals are able to borrow larger amounts.
Thus, if the average loan balance per borrower is low, then a larger fraction of poor
individuals are being financed. The division of the average loan balance per borrower
by per capita Gross National Income (pcGNI) is only intended to capture differences
in countries' average per capita income; hence it is a normalization. This indicator is
then normalized again to lie on the interval [0,1] and inverted (one minus the
indicator), such that a number close to 1 represents a higher percentage of loans to
the poor. Finally, this new indicator is multiplied by the number of active borrowers,
yielding an approximate measure of the number of poor borrowers. In this paper we
consider two main empirical DEA specifications: the financial frontier, with inputs
(assets, costs and employees) and outputs (gross loan portfolio and financial
revenue), named ACE-LR; and the social frontier, with the same inputs but with the
number of active borrowers who are female and the indicator of benefit to the
poorest as outputs, named ACE-WP.
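The construction of the benefit-to-the-poorest output described above (ratio to pcGNI, rescaling to [0,1], inversion, multiplication by active borrowers) can be sketched as follows; the function name and the numbers in the usage note are illustrative, not from the authors' data:

```python
import numpy as np

def poorest_benefit_output(avg_loan_balance, pc_gni, active_borrowers):
    # ratio of average loan balance per borrower to per-capita GNI:
    # a cross-country normalization of loan size
    ratio = np.asarray(avg_loan_balance) / np.asarray(pc_gni)
    # rescale to [0, 1] across the sample and invert, so values near 1
    # indicate smaller loans, i.e. a poorer clientele
    indicator = 1.0 - (ratio - ratio.min()) / (ratio.max() - ratio.min())
    # approximate number of poor borrowers served
    return indicator * np.asarray(active_borrowers)
```

For two MFIs with average loans of 100 and 600 against a pcGNI of 1000, the first MFI's 10 borrowers all count as poor borrowers, while the second's contribute nothing after the inversion.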
The decomposition we adopt in this paper follows closely the one proposed by
Portela and Thanassoulis (2001). For our case, we will first compare MFIs within the
same country, such that country characteristics are isolated from the final estimates.
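A minimal sketch of a multiplicative decomposition in the spirit of Portela and Thanassoulis (2001), under the assumption that the overall (pooled-frontier) score factors into a within-country component and a country component; the function name and numbers are illustrative:

```python
def country_component(overall_eff, within_country_eff):
    # overall efficiency (against the pooled frontier) is decomposed as
    # overall = within-country efficiency * country efficiency,
    # so the country component is their ratio
    return overall_eff / within_country_eff
```

For an MFI scoring 0.6 against the pooled frontier and 0.8 against its own country's frontier, the country component is 0.75: part of its measured inefficiency is attributable to the country, not the institution.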
We now estimate the relationship between efficiencies and profitability. This exercise
was also done by Gutiérrez-Nieto et al. (2009) in which they obtained that the
correlation between social efficiency and profits was positive but never statistically
different from zero. We take the return on equity (ROE) as a measure of profitability
and compute correlations for financial and social efficiencies using our sample. Results
are presented in Table 4. As expected, financial efficiency is positively and
significantly correlated with ROE. On the other hand, the correlation between social
efficiency and ROE is found to be negative.
Table 4: Estimated correlations between ROE and financial and social efficiencies.
Variables: Financial (.000145); Social (.000072).
We now look at Country Efficiencies (CE) estimated by equation 1. As one can see in
Table 5, financial and social frontiers vary significantly between the countries in our
sample. The correlation between efficiencies is -.474 suggesting that countries with
high financial frontiers have low social frontiers. This, in our view, might be a result of
two effects: (a) MFIs located in different countries may face different conditions and
(b) different MFI types might be sorting themselves in different countries given some
country-specific characteristic. Both effects, however, imply that DEA efficiency
coefficients should be assessed by comparing units from the same country.
Conclusions
In this paper we extend the analysis carried out by Gutiérrez-Nieto et al. (2009) by
looking at the interactions between financial and social efficiencies but taking into
account differences between countries. Results are quite different from the ones
observed in Gutiérrez-Nieto et al. (2009). Our estimate of the correlation between
financial and social efficiencies is much smaller (correlation is about 35% of what they
find). Looking at profits, we show that, as expected, financial efficiency is positively
and significantly correlated with ROE. On the other hand, the correlation between
social efficiency and ROE is found to be negative.
References
Amores AF & Contreras I. (2009). New approach for the assignment of new European
agricultural subsidies using scores from data envelopment analysis: Application to
olive-growing farms in Andalusia (Spain). European Journal of Operational Research,
193: 718-729.
Portela MCAS & Thanassoulis E. (2001). Decomposing school and school type
efficiency. European Journal of Operational Research, 132: 114-130.
Abstract
Hospitals have been forced to change because of both users’ pressure for higher
quality services and regulatory agencies’ pressure for better resources management.
Notably, low-quality health services derive directly from poor hospital
management and cause substantial user dissatisfaction. Against this background, this
paper reports on a performance evaluation of hospitals financed by the Brazilian
Unified Health System (SUS). It analyzes the performance of 20 hospitals (among
public and voluntary organizations) in seven States in 2008. Focusing on financial
management, it uses a set of operational ratios (i.e., Occupancy Rate, Average Length
of Stay, and Full Time Employees per Bed) as inputs, and financial ratios (i.e., EBIT
Margin, EBITDA Margin, Return on Assets, Return on Invested Capital, and Net
Margin) as outputs. The assessment framework confirms the hypotheses, and shows
that financial management efficiency differs between public and voluntary hospitals.
Keywords: Efficiency, Data Envelopment Analysis (DEA), Hospitals, Financial Ratios,
Operational Ratios, Public and Voluntary Hospitals.
Methods
DEA Overview
Three models were set up for data analysis. The first (M1) approaches efficiency
focusing on the public nature of the hospitals and thus uses the indicators shown in
Chart 1. This model expects relative homogeneity in the way profitability is measured
in public hospitals and voluntary (non-profit) hospitals, as public hospitals usually do
not register depreciation. Depreciation can be seen as a non-cash expense that
reduces the accounting profit of an organization, which could thus benefit the results
of public hospitals.
Chart 1: Models 1, 2 and 3 – inputs and outputs.
Model | Inputs | Outputs
M1 | Mean time of stay (TMP); Occupancy rate (TO); FTE/Bed (FL) | EBITDA Margin (ME1); Return on Invested Capital (ROIC)
M2 | Mean time of stay (TMP); Occupancy rate (TO); FTE/Bed (FL) | EBIT Margin (ME2); Return on Assets (ROA); Net Margin (ML)
The second model (M2) was estimated focusing on business indicators of profitability,
including depreciation and amortization expenses (cf. Chart 1). This model is assumed
to provide better performance results for voluntary hospitals, since their operational
results include revenues from private health insurance, which usually yields greater
mean net margins and returns on assets.
The third model (M3) is a combination of the other two (cf. Chart 1). The aim is to
assess how the efficiency scores behave, assuming that this behavior will resemble
one of the two other models, since DEA will adjust the weights of the indicators
that lead to better scores.
The efficiency scores of the public hospitals were higher than those of the voluntary
hospitals in all the models under scrutiny. The largest difference is found in M1, which
is the model in which the financial results are included more similarly for all
organizations in the sample.
Considering the number of beds, the results show that the organizations offering from
100 through 200 beds are those with the highest mean scores in all the models (Table
3). The results suggest that this amount of beds can provide the best economies of
scale to hospital organizations.
As for income (BRL million) in 2008, the results show that the hospitals earning 7-20
million Brazilian Reais had the best performance (Table 4). In order to understand
this, further studies should investigate the complexity of the services provided, as well
as the synergy among the procedures offered in the hospitals. This result could
suggest that these organizations have economies of scope.
Conclusions
This study aimed to analyze the financial management efficiency of Brazilian hospitals
building on operational indicators as inputs and financial indicators as outputs. The
results show that public hospitals have higher efficiency scores than the voluntary
hospitals in all the three models applied to the sample. This result, however, should
be observed with caution.
One limitation of this study is that human resources expenses are accounted for
differently in voluntary and public hospitals. This expense is usually not allocated to
public hospital organizations, but rather to a central agency within the public
administration. This can skew the results, but the accounting information of this
central agency is not fully available.
Besides, DEA is a tool that is sensitive to a number of factors, including sample,
inputs, outputs, and period of observation. The relative nature of the tool should
always be regarded, especially in exploratory studies like this. Therefore, results vary
according to DMU inputs and/or outputs, and indicators. As this study was meant to
be a first attempt to design a model for hospital financial assessment, we hope that
further research draws on our results to develop more robust models.
References
Azevedo G. H. I., Roboredo M. C., Aizemberg L., Silveira J. Q., Soares de Mello J. C. C.
B. (2012) Uso de análise envoltória de dados para mensurar eficiência temporal de
rodovias federais concessionadas, Journal of Transport Literature 6 (1): 37-56.
Charnes A., Cooper W. W., Rhodes E. (1978) Measuring the Efficiency of Decision
Making Units, European Journal of Operational Research 2: 429-444.
Guerra M., Souza A. A., Moreira D. R. (2012) Performance Analysis: A Study Using
Data Envelopment Analysis in 26 Brazilian Hospitals, Journal of Health Care Finance
38 (4): 19-35.
Abstract
This study aimed to analyze the performance of cities of Brazil’s southeastern region
in resource allocation on primary care, from 2007 to 2010. In order to do the
performance analysis, we used in this study the technical efficiency scores produced
by the Data Envelopment Analysis (DEA) methodology. But, before starting the cities’
efficiency analysis, the cluster analysis was applied to group similar cities. This study
proposes an analytical model of the performance of 1097 cities, based on the National
Primary Care Policy. The efficiency scores obtained highlight the disparities in the
allocation of resources and the results obtained in primary care. This could be
explained by the absence of procedures for relative comparison between cities and
by the decentralization of public healthcare expenditure. The results shed light on the
possibility to improve the performance of primary care, given the current level of
resource allocation.
Keywords: Brazil. Primary Care. Performance. Public Expenditure. DEA.
Methods
From the population of 1668 cities in the Southeastern region of Brazil, we selected
only those with at least one Family Health Team – the group of physicians, nurses,
nurse assistants and other professionals that constitutes the main strategy of the
National Primary Care Policy. Then, after an exploratory analysis of the data, we
excluded from the sample cities with inconsistent variables or with missing
observations for one or more of the analyzed years (2007-2010). This excluded 570
cities.
The final sample comprised about 70% of the cities in Minas Gerais, 60% of the
cities in the State of Espírito Santo, 50% of the cities in Rio de Janeiro and 52% of the
cities in São Paulo. In some cases, existing secondary data create an opportunity for
evaluative and simplified comparisons.
Analysis procedures
In order to perform the analysis, we used the technical efficiency scores produced by
the Data Envelopment Analysis (DEA) methodology. Before starting the cities'
efficiency analysis, cluster analysis, following Ferguson et al. (2000), was applied to
group similar cities.
In the cluster analysis we used the non-hierarchical k-means method quoted in
Maroco et al. (2005) and Hair et al. (2005) for partitioning the city groups.
According to Sugar and James (2003), a fundamental problem in cluster analysis is to
determine the best number of groups. In order to determine this number that shall be
used in the study we employed the Calinski and Harabaz index whose equations are
in Milligan and Cooper (1985) and Sugar and James (2003). According to Milligan and
Cooper (1985), the Calinski and Harabaz index is considered to be a robust index for
defining the number of clusters.
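The Calinski and Harabasz index compares between-cluster to within-cluster dispersion; a minimal NumPy sketch of the computation (our own illustration, not the authors' code):

```python
import numpy as np

def calinski_harabasz(X, labels):
    """Calinski-Harabasz index: higher values indicate better-separated,
    more compact clusters; used to choose the number of groups k."""
    n = X.shape[0]
    ks = np.unique(labels)
    k = len(ks)
    overall_mean = X.mean(axis=0)
    B = W = 0.0
    for c in ks:
        pts = X[labels == c]
        centroid = pts.mean(axis=0)
        # between-cluster dispersion, weighted by cluster size
        B += len(pts) * np.sum((centroid - overall_mean) ** 2)
        # within-cluster dispersion
        W += np.sum((pts - centroid) ** 2)
    return (B / (k - 1)) / (W / (n - k))
```

In practice, k-means is run for a range of k values and the k maximizing this index is retained.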
After the definition of the clusters, the measuring of efficiency was performed through
the use of the Data Envelopment Analysis (DEA) methodology, output-oriented
supposing variable returns to scale (BANKER et al., 1984).
The output orientation was applied because the objective of the cities must be to
increase primary care service production, not to reduce the budget allocated to the
field. Moreover, the cited authors have discussed the scarcity of resources for this
sector, indicating that an increase in allocation efficiency is necessary (FLEURY, 2001;
WORLD BANK, 2004; MINISTRY OF HEALTH, 2008).
In this study, we have the availability of a panel of efficiency scores for verifying
changes in the productivity of these cities over the years in the allocation of resources
in primary care. According to Banker et al. (2005), the Malmquist index was proposed
by Caves, Christensen and Diewert (1982) with the objective of measuring changes in
productivity between two periods of time by the distance between a DMU and the
frontier of production for each period.
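In its standard form (following Caves, Christensen and Diewert, as described above), the Malmquist index between periods t and t+1 is the geometric mean of the two period-specific distance-function ratios:

```latex
M(x^{t},y^{t},x^{t+1},y^{t+1}) =
\left[
\frac{D^{t}(x^{t+1},y^{t+1})}{D^{t}(x^{t},y^{t})} \cdot
\frac{D^{t+1}(x^{t+1},y^{t+1})}{D^{t+1}(x^{t},y^{t})}
\right]^{1/2}
```

A value greater than one indicates productivity growth between the two periods; the index further factors into an efficiency-change term and a technical-change term.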
After the definition of the clusters, we added further variables made available by the
Ministry of Health for additional characterization of the cities. These variables relate
to the cities' infrastructure, as follows:
Proportion of houses with water piping (x102);
Proportion of houses with garbage collection (x102);
Proportion of houses with sewage piping (x102);
Proportion of houses made of bricks (x102);
Proportion of houses which have electricity (x102).
These variables were used because, according to the World Bank (2007), other factors
such as the access to drinking water and sanitation may also have an influence on
comparisons between expenditures and results.
With the formation of the groups it was possible to perform the relative technical
efficiency analysis. In this direction, the allocation of resources for the primary care
will be represented by the number of Family Health Teams and number of health
establishments. These two variables according to National Primary Care Policy
represent the sector’s resource allocation.
The resource allocation model for performance evaluation in Figure 2 was built with
two inputs and three outputs. The number of people registered with the Family
Health Teams, the number of home visits by the Family Health Teams and the
ambulatory production in primary care were the outputs of the model.
Conclusions
One of this study’s main contributions was the proposal of the analytic model of
performance in primary care expenditures. Furthermore, this study took into
consideration a longitudinal analysis which is not taken into consideration in most
studies which use efficiency indexes in Brazil.
Analyzing the most similar possible cities, in four different groups, the efficiency
scores evidenced the disparities in resource allocation in the southeastern region, a
fact which could be justified by their autonomy in allocating their resources and the
absence of relative comparison procedures between them for this allocation. This
study proposes a model to address this lack of relative comparisons. Analysis of the
quartiles also shows that the median scores are far from technical efficiency. There is
room to increase the primary care service offer, given the current level of
expenditures.
References
BANKER, R.D.; CHARNES, A.; COOPER, W.W. (1984) Some models for estimating
technical and scale inefficiencies in data envelopment analysis. Management Science,
30 (9): 1078-1092.
HAIR, J.F.; ANDERSON, R.E.; TATHAM, R.L.; BLACK, W.C. (2005) Multivariate Data
Analysis. New Jersey: Upper Saddle River, 2005.
RAY, S.C., DESLI, E. (1997) Productivity growth, technical progress, and efficiency
change in industrialized countries: comment. The American Economic Review, 87 (5):
1033-1039.
SUGAR, C.A; JAMES, G.M. (2003) Finding the number of clusters in a dataset: an
information-theoretic approach. Journal of the American Statistical Association, 98
(463): 750-763.
WORLD BANK. (2007) Brazil: governance in Brazil’s Unified Health System (SUS):
raising the quality of public spending and resource management. Washington,DC:
World Bank.
Francisco Vargas
Economics Department, Universidad de Sonora, E. de Zubeldia No. 27, Hermosillo, Sonora, Mexico,
fvargas@guaymas.uson.mx
Gang Cheng
China Center for Health Development Studies, Peking University, 38 Xueyuan Rd, Beijing, China,
chenggang@bjmu.edu.cn
Abstract
This paper emphasizes the sensitivity of the Data Envelopment Analysis (DEA)
efficiency scores to sampling variations of the best-practice frontier and to
dimensionality issues. Despite its non-parametric nature, DEA has been proven to
yield consistent estimators when large sample sizes are used. The DEA Bootstrap
method is widely applied to tackle the inaccuracy of DEA estimators. The
combination of DEA and a modified Bootstrap expression enhances the statistical
properties of DEA estimators without overcoming the inherent limitations of the two
methods. This paper provides a non-resampling multi-parametric
the two methods. This paper provides a non-resampling multi-parametric
methodology to deal with the sensitivity of DEA estimators when small samples are
available. A comparative analysis between the DEA Bootstrap and the new method’s
estimations convergence rate shows that the new method performs better than the
DEA Bootstrap in the presence of small samples.
Keywords: Data Envelopment Analysis; Bootstrap; Small samples; Bias correction
Introduction
Data Envelopment Analysis (DEA) is a non-parametric technique that draws on linear
programming to measure relative efficiency of decision making units (DMUs). The
efficiency scores are defined against a best-practice frontier, which consists of the top-
performing DMUs of the sample under evaluation. In this context, the accuracy and
reliability of the obtained efficiency scores depend on the robustness of the frontier to
sampling variations, dimensionality issues, and noisy data. In the presence of these
uncertainties, bias is involved in the efficiency measures.
Method
Considering that DEA is not appropriate for measuring efficiency scores as a stand-
alone method in the case of small samples because it yields upward-biased results,
and also that Bootstrap has limited accuracy in approximating the true variability of
population efficiency scores when it is applied in the same context, we develop a
DEA-based bias-correction method that has particular applicability in the presence of
small samples ( n < 50 ) and complex production processes. The proposed method
(MPBC) does not draw on resampling such as Bootstrap. It rather employs truncated
random data generation processes to estimate the unknown population distribution F
from the empirical distribution F̂. The scope of the new method is to estimate the
population efficiency scores Θ = {θ_p, p = 1, 2, ..., m} by producing an estimator F̂
of the efficiency scores {θ̂_i : 0 ≤ θ̂_i ≤ 1}, i = 1, ..., n, for every DMU_i. In the
following analysis we presume that a truncated random data generation process T is
utilized to produce a sequence of pseudo-numbers {x*_ψ}, ψ = 1, ..., ω, for every
DMU. Every sequence of pseudo-numbers originates from every single efficiency
score or from a combination of a selected efficiency score and the average scores of
the sample:

T(θ̂_i) produces x*_iψ for all ψ = 1, 2, ..., ω and i = 1, 2, ..., n, with
x*_ioψ = min{x*_iψ, θ̂_i}.

In addition, T(x*) ~ N(θ, se²), where

θ_i(·) = z_ud θ̂_i + (1 − z_ud) n⁻¹ Σ_{i=1}^{n} θ̂_i.

The standard error of the proposed MPBC method is

se_i^MPBC = [ ω⁻¹ Σ_{ψ=1}^{ω} (x*_ioψ − s_i(·))² ]^{1/2}    (5)

where s_i(·) = ω⁻¹ Σ_{ψ=1}^{ω} x*_ioψ.
Taking into account equations (3) and (5), and a large enough sequence size ω of
pseudo-data so that the asymptotic normality of their distribution is assured for
ψ = 1, ..., ω (where ω denotes a proper length for the sequence rather than ω → ∞),
the resulting expression stands for the upper bound of the confidence interval of the
bias-corrected efficiency scores.
Acknowledging the inherent randomness in the proposed method, all the provided
proofs and statements result from iterative procedures. In formula (8), the probability,
that is, the average of L = 1000 iterations, is equal to an infinitesimal value.
The inherent randomness in the proposed method is regarded as a drawback because
it is a source of instability for the obtained results when the method is applied
repeatedly. To overcome this drawback, a stabilization parameter γ is introduced in
the procedure, which eliminates up to 99% of the variation of the bias-corrected
scores. The parameter γ expresses the number of iterations of formulas (1)-(6). The
reported results are average scores.
The proposed method for dealing with uncertainties in DEA is expressed by the
following formula:

f_θ̂(α, cv, z, ω, γ, n^ex, var^ex) ≡ θ*_MPBC    (9)
where α denotes the level of significance, cv is preferably a low-variance parameter (
cv < 1), 0 ≤ z ≤ 1, the interval of ω was discussed above, and γ should be at least
equal to unity, where γ = 1 means that no iteration of the proposed bias-correction
procedure is applied.
In formula (9), two exogenous parameters n ex and var ex are included; they denote the
number of DMUs in the original sample and the number of input and output
variables, respectively, that are utilized to define the efficiency scores through DEA.
Based on a numerical example and on results that are tested through Monte Carlo to
eliminate randomness, the proposed method yields better estimators ( θ *MPBC ) for the
population efficiency scores ( θ ) than the DEA Bootstrap ( θ *Boot ) when the original
sample consists of fewer than 50 DMUs. In addition, the convergence rate of the
θ*_MPBC to the θ increases relative to the θ*_Boot when the number of input and
output variables increases.
Results
From the dataset, we draw 18 samples, with sizes of 10, 30, 40, 50, 60, and 80
DMUs. DEA efficiency estimators are computed for each of the 6 sample sizes when
4 (Case 1: 3 inputs & 1 output), 7 (Case 2: 5 inputs & 2 outputs), and 10 (Case 3: 7
inputs & 3 outputs) variables are employed. Additionally, DEA efficiency scores
(The algorithm for the MPBC method has been developed in Matlab.)
For a z ∈ [0.0, 0.5) , the MPBC efficiency estimators report better convergence than the
smoothed Bootstrap efficiencies for samples with up to 38 DMUs. If a z ∈ [0.5, 1.0]
is chosen by the user, then the maximum sample size for which the MPBC
approximates the true efficiency more closely than the smoothed Bootstrap varies
according to the number of variables. For instance, for 4 variables, the
maximum sample size is 31; for 7 and 10 variables, it is 35 and 39, respectively. In all
these cases, the maximum sample size meets the criterion/rule of thumb of
n ≥ max { x * y , 3 * ( x + y )} for preventing dimensionality effects in the DEA efficiency
estimations.
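The rule of thumb above is straightforward to express in code; a one-function sketch (the function name is ours):

```python
def min_dmus(n_inputs, n_outputs):
    # rule of thumb n >= max(x * y, 3 * (x + y)) to limit
    # dimensionality effects in DEA, for x inputs and y outputs
    return max(n_inputs * n_outputs, 3 * (n_inputs + n_outputs))
```

For Case 1 (3 inputs, 1 output) this gives a minimum of 12 DMUs; for Case 3 (7 inputs, 3 outputs) it gives 30.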
Taking into account formulas (13) and (14), we develop the following roadmap to
facilitate the application of MPBC towards the optimum relative convergence of the
efficiency estimators to the true efficiency scores. In practice,
if variables ≤ 7:
  n < 31 → γ = [0, 2, 10] and z ∈ [0.5, 1.0]
  31 ≤ n ≤ 38 → γ = 10 and z ∈ [0.0, 0.5)
if variables = 8:
  n < 32 → γ = 10 and z ∈ [0.5, 1.0]
  32 ≤ n ≤ 38 → γ = 10 and z ∈ [0.0, 0.5)
if variables = 9:
  n < 36 → γ = 10 and z ∈ [0.5, 1.0]
  36 ≤ n ≤ 38 → γ = 10 and z ∈ [0.0, 0.5)
if variables ≥ 10: ∀ n → γ = 10 and z ∈ [0.5, 1.0] (maximum n adjusted to the number of variables)
[6] The MPBC method yields efficiency estimators that are more consistent than those of the smoothed bootstrap (i.e., relatively consistent), and also free of dimensionality effects, for up to 43 DMUs when 13 variables are incorporated in the DEA evaluation process.
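The roadmap above can be sketched as a small parameter-selection function; the function name and return convention are illustrative, not part of the paper:

```python
def mpbc_parameters(n_vars, n):
    """Parameter roadmap for the MPBC method (sketch of the stated rules).

    n_vars : total number of input + output variables.
    n      : sample size (number of DMUs).
    Returns (gamma, z_interval) where the MPBC is recommended, or None
    when the smoothed bootstrap should be preferred instead."""
    if n_vars >= 10:
        return 10, (0.5, 1.0)        # valid for all n, up to the dimensional limit
    if n_vars <= 7:
        if n < 31:
            return [0, 2, 10], (0.5, 1.0)
        if 31 <= n <= 38:
            return 10, (0.0, 0.5)
    elif n_vars == 8:
        if n < 32:
            return 10, (0.5, 1.0)
        if 32 <= n <= 38:
            return 10, (0.0, 0.5)
    elif n_vars == 9:
        if n < 36:
            return 10, (0.5, 1.0)
        if 36 <= n <= 38:
            return 10, (0.0, 0.5)
    return None                      # beyond the roadmap's stated limits
```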
Conclusions
In this paper, we developed a multi-parametric method for bias correction (MPBC) of
DEA efficiency estimators. The new method enhances the applicability and reliability
of DEA as a comparative efficiency measurement technique when small or inadequate
samples are available. In the presence of small or inadequate samples, DEA efficiency
estimators are not regarded as consistent because they are biased by sampling
variations and dimensionality.
To prevent any confusion among users of the method when selecting the parameters that attain greater consistency for the efficiency estimators than the smoothed bootstrap, we provide a detailed roadmap. Based on this roadmap, we show that efficiency estimators obtained by DEA in conjunction with MPBC converge with higher probability to the true efficiency scores than the efficiency estimators yielded by the smoothed bootstrap, under certain circumstances. MPBC is regarded as an appropriate bias-correction method for DEA efficiency estimators when samples consist of at most 43 DMUs.
References
Banker, R. D. (1993). Maximum-Likelihood, Consistency and Data Envelopment
Analysis - a Statistical Foundation. Management Science, 39, 1265-1273.
Banker, R. D., Charnes, A., & Cooper, W. W. (1984). Some models for estimating
technical and scale inefficiencies in data envelopment analysis. Management Science,
30, 1078-1092.
Cooper, W. W., Seiford, L. M., & Tone, K. (2007). Data envelopment analysis: a
comprehensive text with models, applications, references and DEA-Solver software
(2nd ed.). New York: Springer Science + Business Media.
Simar, L., & Wilson, P. W. (1998). Sensitivity analysis of efficiency scores: How to
bootstrap in nonparametric frontier models. Management Science, 44, 49-61.
Simar, L., & Wilson, P. W. (1999). Estimating and bootstrapping Malmquist indices.
European Journal of Operational Research, 115, 459-471.
Ludmila Neumann
Technische Universität Braunschweig, Institute of Management Control and Business Accounting,
Germany, l.neumann@tu-bs.de
Abstract
One of the major strengths of Data Envelopment Analysis (DEA) is the endogenous
determination of the weights of performance criteria, assigning to each decision
making unit (DMU) its best possible efficiency score. However, this property also
leads to a significant shortcoming: it allows zero-value weights that exclude criteria
from the evaluation. While many approaches that deal with this problem incorporate
value judgments into analysis, our approach supports management’s efficiency
analysis with a complementary performance measure that is derived from the given
data set. The respective balance score evaluates the extent to which a DMU avoids
concentration on only some of the crucial performance criteria. One of the possible
decisions resulting from a balance analysis is to reduce the set of DMUs considered to
serve as benchmarks. For this case, a modified CCR-O model is presented.
Keywords: DEA, Balance score, Inappropriate benchmarks
Introduction
Data Envelopment Analysis (DEA) is a meaningful approach to relative efficiency
measurement of decision-making units (DMUs) that assesses DMUs’ performance by
aggregating their input and output values into a single efficiency score. The weights of
these performance criteria are endogenously determined, assigning to each DMU its
best possible efficiency score. This property constitutes one of the major advantages
of the basic DEA models but it also represents a source of pitfalls concerning
performance assessment and performance control. Such problems result from the
possibility of zero-value weights that eliminate from the analysis any input and/or
output criteria in which the performance of a DMU is weak, with the aim of raising its
efficiency score to its maximum level (Dyson and Thanassoulis, 1988; Allen et al.,
1997; Thanassoulis et al., 2004). Unlike approaches that deal with this issue by incorporating value judgments into the analysis, the approach presented here derives a complementary balance score from the given data set.
Numerical example
We refer to the example of a European pharmacy chain originally presented in Ahn et
al. (2012). As Table 1 shows, 20 pharmacy stores were analyzed, based on two inputs:
worked hours (WH) and store square meters (SQM) and three outputs: number of
customers buying medications available only on prescription (MP), number of
customers purchasing over the counter products (OTC) and total number of
prescriptions (PR). The outputs represent the main goals to be pursued by the stores.
The main numerical results from running the balance model are presented in Table 1. The DMUs A, E, F, O, Q, R, and T do not attain all outputs in an acceptable proportion. For example, DMU A has a balance score βA of 0.88, since it requires a minimal output variation of 12% to be projected onto C: an increase of 11% in MP and 12% in PR is necessary, while a reduction of 12% in OTC is allowed.
In the extreme case, where at least one output is not produced at all, the respective
DMU will be characterized by a balance score of 0. The DMU R represents such an
extreme DMU because it has no OTC sales. The ϖo,r values project R (nearly) onto the
origin of the coordinate system, which obviously cannot be understood as a
benchmark. However, the efficiency analysis by means of the traditional CCR-O model designates R as a benchmark for F and K (see Table 2), although Table 1 shows that R is specialized in two outputs, while F and K also take a third output into account. For this reason, we reanalyze the efficiency of the pharmacy stores by means of a modified CCR-O model which excludes unbalanced DMUs from the reference set of other DMUs. Table 2 presents the results of the modified CCR-O model alongside those of the traditional CCR-O model.
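The idea of excluding unbalanced DMUs from the reference set can be sketched as an output-oriented CCR envelopment LP in which the corresponding intensity variables are fixed at zero. This is an illustrative reading of the modified CCR-O model, not the paper's exact formulation:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_output_efficiency(X, Y, j0, excluded=()):
    """Output-oriented CCR envelopment LP for DMU j0 (illustrative sketch).

    X: (n, m) input matrix, Y: (n, s) output matrix, one row per DMU.
    `excluded` lists DMU indices (e.g. unbalanced stores) whose intensity
    variables are fixed at zero, removing them from the reference set.
    Decision variables: [phi, lambda_1..lambda_n]; phi >= 1, and 1/phi
    is the usual output-oriented efficiency score."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[-1.0, np.zeros(n)]               # maximize phi
    A_in = np.c_[np.zeros((m, 1)), X.T]        # sum_j lam_j x_ij <= x_i,j0
    A_out = np.c_[Y[j0].reshape(s, 1), -Y.T]   # phi*y_r,j0 - sum_j lam_j y_rj <= 0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[X[j0], np.zeros(s)]
    bounds = [(0.0, None)] + [
        (0.0, 0.0) if j in excluded else (0.0, None) for j in range(n)
    ]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return -res.fun                            # phi; efficiency = 1/phi
```

With a toy two-DMU data set, excluding the dominant DMU from the reference set raises the evaluated DMU's score to 1, mirroring how removing unbalanced benchmarks increases the efficiency of stores that previously referenced them.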
The balance and efficiency analysis for period 1 is represented by the results
of iteration 1. The six pharmacy stores originally identified as efficient keep this
property. While the balanced DMUs I, M and N also continue to serve as benchmarks
for other stores, the unbalanced DMUs O, Q and R are eliminated from the reference
set of other stores. This increases the efficiency score of those inefficient stores which
formerly had unbalanced DMUs in their reference sets. For example, the new
benchmark of A is N, raising its efficiency score from 0.44 to 0.70. As a special case, L
is now characterized as efficient, and it becomes an appropriate benchmark for K and
P. It should be noted that the balance analysis indicates R as a DMU following a
different business strategy. In this case, R is not comparable to the other DMUs and
should be excluded from the sample.
On the basis of the data analysis of period 1, the management of the pharmacy stores
should take decisions according to the respective results. If a store is unbalanced, a
change of its output mix in reference to the determined reference point on the frontier
of the multicone is required. Hence, the DMUs A, E, F, O, Q and T should adjust their
output values according to the proposed ϖo,r values in Table 1. Such changes in the
output mix may induce a negative change in the efficiency score in the subsequent
period 2 indicating the overestimation of the actual performance of the DMUs.
Assuming that the balanced DMUs keep their balanced output mix, the pharmacy
stores are again evaluated in period 2, but without the incomparable DMU R. The
results of this second balance analysis are shown in Table 2, iteration 2. The DMUs A
(βA=0.99), E (βE=0.97), F (βF=0.97), Q (βQ=0.99), and T (βT=0.99) still remain
unbalanced, while DMU O becomes balanced and the balance score of DMU B
decreases to 0.97. The efficiency analysis by running the modified CCR-O model
shows a decrease of the efficiency scores of unbalanced DMUs while the efficiency
scores of balanced DMUs as well as the reference set of inefficient DMUs remain
unaffected.
This iterative process of balance and efficiency analysis could be continued. Assuming that the output mix of the balanced DMUs remains constant, the unbalanced DMUs will turn into balanced ones as further iterations are conducted.
Tom Weyman-Jones
Loughborough University UK, T.G.Weyman-jones@lboro.ac.uk
Karligash Glass
Abstract
The purpose of this paper is to apply a quadratic data envelopment analysis model
with bootstrap and second stage regression to estimate the efficiency of a sample of
investment funds, obtain the statistical inference of the efficiency scores and detect the
determinants of inefficiency. Morey and Morey (1999) developed a mutual funds
efficiency measure in a traditional mean-variance model. It is derived from the
standard data envelopment analysis but differs from it in having non-linear constraints
in the envelopment version of the model’s structure. This paper first applies the
procedures in Morey and Morey (1999) to a new modern data set comprising a multi-
year sample of investment funds and then utilises Simar-Wilson (2008) bootstrapping
algorithms to develop statistical inference and confidence intervals for the indexes of
efficient investment fund performance. For the second stage analysis, robust-OLS
regression, Tobit models and Papke-Wooldridge (PW) models are conducted and
compared to evaluate contextual variables affecting the performance of investment
funds. The DEA efficiency scores are regressed on potential variables to test the
statistical significance of those factors. Results and inferences are drawn from an
extensive new dataset of investment funds.
Keywords: nonlinear-DEA, portfolios, bootstrapping, second stage DEA
Introduction
The history of portfolio evaluation dates from the 1960s (Sharpe, 1966; Treynor, 1965; Jensen, 1968), with emphasis on both expected return and risk. Investment fund managers attempt to find efficient portfolios – those promising the greatest expected return for any given degree of risk, i.e. the best risk-adjusted return. However, many criticisms of traditional portfolio analysis focus on its sensitivity to the chosen benchmarks. Murthi et al. (1997) were the first to apply the DEA methodology to fund performance evaluation. A large proportion of DEA models applied to investment funds assume a piecewise linear correspondence between multiple inputs and outputs.
Methods
In the spirit of data envelopment analysis, the Morey and Morey (1999) quadratic models use the idea of 'funds of funds': for each fund there is a corresponding composite benchmarking fund that lies on the efficient frontier. These are hypothetical but potentially efficient combinations of the actual observations. DEA scores are obtained by measuring the direct distance from the position of the fund in question in mean-variance space to that of the efficient composite benchmarking fund.
[Graph 1: An inefficient fund in mean-variance (risk-return) space, with the origin O; mean return augmentation follows a vertical path to the efficient frontier, while risk contraction follows a horizontal path.]
The mean return augmentation method can be seen as output-oriented DEA, representing a vertical path towards the efficient frontier, while the risk contraction model is input-oriented DEA, following the horizontal path.
Consider N mutual funds to be evaluated, indexed j = 1, 2, …, N, where j0 denotes the fund under evaluation in each run (j0 = 1, 2, …, N, so there are N runs in total). Let T denote the number of time horizons, t = 1, 2, …, T. Denote by E(R_{j,t}) the mean return of fund j, by σ²_{j,t} its variance, and by Cov(R_{i,t}, R_{j,t}) its covariance with fund i. Let w_j be the weight allocated to fund j in forming the benchmarking fund in each run. The mean return augmentation program is as follows:
Determine w_j ≥ 0 (j = 1, 2, …, j0, …, N) so that:

Max θ

s.t.  Σ_{j=1}^{N} w_j = 1

Σ_{j=1}^{N} w_j² σ²_{j,t} + Σ_{i=1}^{N} Σ_{j=1, j≠i}^{N} w_i w_j Cov(R_{i,t}, R_{j,t}) ≤ σ²_{j0,t}   (t = 1, 2, …, T)

Σ_{j=1}^{N} w_j E(R_{j,t}) ≥ θ E(R_{j0,t})   (t = 1, 2, …, T)

where θ is the efficiency score and θ ≥ 1.
θ is calculated by running the above programming problem once for each fund. Efficient funds obtain a value of one, while inefficient ones obtain a value greater than one, which shows by how much the actual return should be expanded for the fund to be considered technically efficient.
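For a single horizon (T = 1), the mean return augmentation program can be sketched as a nonlinear program in SciPy; the function name and the toy data are illustrative, not from the paper:

```python
import numpy as np
from scipy.optimize import minimize

def mean_return_augmentation(mean_returns, cov, j0):
    """Morey-Morey mean return augmentation score for fund j0 (sketch, T = 1).

    mean_returns : (N,) vector of expected returns E(R_j).
    cov          : (N, N) covariance matrix (diagonal holds variances).
    Decision variables are x = [theta, w_1..w_N]; theta >= 1 measures
    how much fund j0's mean return could be expanded."""
    n = len(mean_returns)
    x0 = np.zeros(n + 1)
    x0[0] = 1.0
    x0[1 + j0] = 1.0  # start at the fund itself (feasible, theta = 1)

    cons = [
        {"type": "eq", "fun": lambda x: x[1:].sum() - 1.0},          # weights sum to 1
        {"type": "ineq",                                              # portfolio variance <= own variance
         "fun": lambda x: cov[j0, j0] - x[1:] @ cov @ x[1:]},
        {"type": "ineq",                                              # portfolio return >= theta * own return
         "fun": lambda x: x[1:] @ mean_returns - x[0] * mean_returns[j0]},
    ]
    bounds = [(1.0, None)] + [(0.0, None)] * n
    res = minimize(lambda x: -x[0], x0, method="SLSQP",
                   bounds=bounds, constraints=cons)
    return res.x[0], res.x[1:]                                        # theta, weights
```

Note that the full quadratic form w'Σw equals the variance-plus-covariance expression in the program above, which is why a single matrix product suffices in the constraint.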
The second quadratic program is risk contraction:

Min Z

s.t.  Σ_{j=1}^{N} w_j = 1

Σ_{j=1}^{N} w_j² σ²_{j,t} + Σ_{i=1}^{N} Σ_{j=1, j≠i}^{N} w_i w_j Cov(R_{i,t}, R_{j,t}) ≤ Z σ²_{j0,t}   (t = 1, 2, …, T)

Σ_{j=1}^{N} w_j E(R_{j,t}) ≥ E(R_{j0,t})   (t = 1, 2, …, T)

with w_j ≥ 0. Here Z measures the feasible proportional contraction of risk; the efficient frontier is attained by funds with Z equal to one.
Conclusions
The purpose of this paper is to apply a quadratic data envelopment analysis model
with bootstrap and second stage regression to estimate the efficiency of a sample of
investment funds, obtain the statistical inference of the efficiency scores and detect the
determinants of inefficiency.
References
Hoff, A. (2007). Second stage DEA: Comparison of approaches for modelling the DEA score. European Journal of Operational Research, 181(1), 425-435.
Banker, R. D., & Natarajan, R. (2008). Evaluating contextual variables affecting
productivity using data envelopment analysis. Operations Research, 56(1), 48-58.
Morey, M. R., & Morey, R. C. (1999) Mutual fund performance appraisals: A multi-
horizon perspective with endogenous benchmarking. Omega, 27(2), 241-258.
Simar, L., & Wilson, P. W. (1998) Sensitivity analysis of efficiency scores: How to
bootstrap in nonparametric frontier models. Management Science, 44(1), 49-61.
Abstract
This study applied the Data Envelopment Analysis (DEA) methodology to the Brazilian steel industry. DEA is based on non-parametric mathematical models and evaluates the performance of each unit of observation from a multidimensional perspective. The study used a portfolio of 12 product development projects, for which the scores of technical and scale efficiency were obtained. The technical efficiency scores indicated the benchmark projects, demonstrating the potential of DEA. For purposes of analysis, this research used Data Envelopment Analysis with the input-oriented BCC (Banker, Charnes, Cooper) model, content analysis, and the NTCR model (Novelty, Technology, Complexity and Rhythm). It can be concluded that the adoption of methodologies aimed at efficient project management is an essential tool for the competitiveness of the organization, motivating employees, improving management processes and reducing product delivery times.
Keywords: Data Envelopment Analysis (DEA); BCC, innovation and product
development management; Multifunctional Integration; steel industry; technical
efficiency; scale efficiency.
Introduction
In the current business scenario, the capacity for innovation management and product development stands out as a determinant of the survival of organizations. Researchers and practitioners in this area have provided significant contributions to management systems and integrated processes. In this context, the management of new product and process development emerges as a solution for industry. However, as Rozenfeld et al. (2006) pointed out, an organization's success in developing new products is not guaranteed by the genius and creativity of R&D professionals, or by the number of resources allocated to the projects.
Clark and Wheelwright (1993) show that one essential practice in this discussion is the creation of a product development framework that brings a broad perspective to the process and facilitates cross-functional integration.
Shenhar and Dvir (2010) identified five dimensions of project success: project efficiency; impact on the customer; impact on the team; business and direct success; and preparation for the future.
In this context, this research applied Data Envelopment Analysis (DEA) to product development in a Brazilian steel company in order to measure technical efficiency, identify the benchmark projects, and measure scale efficiency. The analysis concentrated on a portfolio of 12 product development projects, and the scores of technical and scale efficiency were calculated. The technical efficiency scores indicated the benchmark projects, demonstrating the capability of the method. It can be concluded that the adoption of methodologies aimed at efficient project management is an indispensable tool for the competitiveness of the organization.
Methods
Regarding methodological aspects, this is a case study with a qualitative and quantitative approach. Semi-structured interview scripts were used for data collection.
The study used a portfolio of 12 product development projects, for which the scores of technical and scale efficiency were obtained. The technical efficiency scores indicated the benchmark projects.
For purposes of analysis, three techniques were applied:
• Data Envelopment Analysis (DEA) with the input-oriented BCC (Banker, Charnes, Cooper) model – VRS (Variable Returns to Scale);
• Content analysis;
• The NTCR model (Novelty, Technology, Complexity and Rhythm).
The software selected to perform the method of this research was DEA Frontier, available at www.deafrontier.com.
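The input-oriented BCC (VRS) envelopment LP that such software solves can be sketched with SciPy; this is an illustrative re-implementation under standard DEA assumptions, not the DEA Frontier code:

```python
import numpy as np
from scipy.optimize import linprog

def bcc_input_efficiency(X, Y, j0):
    """Input-oriented BCC (VRS) envelopment LP for DMU j0 (sketch).

    X: (n, m) input matrix, Y: (n, s) output matrix, one row per DMU.
    Decision variables: [theta, lambda_1..lambda_n].
    Minimizes theta subject to: reference inputs <= theta * own inputs,
    reference outputs >= own outputs, and sum(lambda) = 1 (VRS)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                       # minimize theta
    A_in = np.c_[-X[j0].reshape(m, 1), X.T]           # sum_j lam_j x_ij - theta x_i,j0 <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]             # -sum_j lam_j y_rj <= -y_r,j0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[j0]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)      # VRS convexity: sum(lambda) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0.0, None)] * (n + 1))
    return res.fun                                    # theta (1.0 = efficient)
```

Dropping the convexity row (A_eq) turns the same LP into the CCR (CRS) model, which is the second model used in this study.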
No.  Program              Projects         DMU Name              Duration  Cost (R$)  Products Delivered  Tons Sold
2    Naval                Naval 7, 8, 9    Project_Naval 2       180       473650     3                   9000
3    Naval                Naval 10         Project_Naval 3       242       622450     1                   8000
4    Tubes (Oil and Gas)  Tubo 1           Project_Tubo 1        395       736000     1                   0
5    Tubes (Oil and Gas)  Tubo 2           Project_Tubo 2        395       736000     1                   5000
6    Tubes (Oil and Gas)  Tubo 3           Project_Tubo 3        395       736000     1                   7000
7    Tubes (Oil and Gas)  Tubo 4           Project_Tubo 4        395       736000     1                   7000
8    Estrutural           Estrutural 1     Project_Estrutural 1  152       298600     1                   8000
9    Estrutural           Estrutural 2     Project_Estrutural 2  152       298600     1                   8500
10   Estrutural           Estrutural 3     Project_Estrutural 3  181       350800     1                   7800
11   Estrutural           Estrutural 4     Project_Estrutural 4  120       241000     1                   7850
12   Estrutural           Estrutural 5, 6  Project_Estrutural 5  183       354400     2                   6000
As shown in Table 2, the technical efficiency scores of each analyzed project were obtained. Two efficient DMUs were identified, 1 and 11, because they reached the maximum value of 1. These are therefore the benchmark DMUs, and the others can be considered inefficient.
Table 2. Input-oriented VRS (BCC) efficiency with optimal lambdas and benchmarks

DMU No.  DMU Name              Efficiency  Optimal lambdas with benchmarks
1        Project_Naval 1       1.00000     1.000 Project_Naval 1
2        Project_Naval 2       0.80000     0.400 Project_Naval 1; 0.600 Project_Estrutural 4
3        Project_Naval 3       0.50107     0.021 Project_Naval 1; 0.979 Project_Estrutural 4
4        Project_Tubo 1        0.32745     1.000 Project_Estrutural 4
5        Project_Tubo 2        0.32745     1.000 Project_Estrutural 4
6        Project_Tubo 3        0.32745     1.000 Project_Estrutural 4
7        Project_Tubo 4        0.32745     1.000 Project_Estrutural 4
8        Project_Estrutural 1  0.82345     0.021 Project_Naval 1; 0.979 Project_Estrutural 4
9        Project_Estrutural 2  0.87793     0.091 Project_Naval 1; 0.909 Project_Estrutural 4
10       Project_Estrutural 3  0.68700     1.000 Project_Estrutural 4
11       Project_Estrutural 4  1.00000     1.000 Project_Estrutural 4
12       Project_Estrutural 5  0.81131     0.200 Project_Naval 1; 0.800 Project_Estrutural 4
The investigation carried out with the input-oriented BCC model indicated DMUs 2, 3, 4, 5, 6, 7, 8, 9, 10 and 12 as inefficient, although, as Table 2 shows, each one has a different degree of inefficiency.
According to the analysis of Table 3, two DMUs achieved maximum efficiency under the CCR model, since they reached the maximum possible value of 1: Projects 1 and 11, belonging to the Naval and Estrutural programs, respectively. It is noteworthy that Projects 1 and 11 also obtained maximum technical efficiency under the BCC model, which confirms that every project efficient under the CCR model is also efficient under the BCC model (Ferreira and Gomes, 2009), whereas a DMU efficient under the BCC model need not be efficient under the CCR model.
After the technical efficiencies are calculated under the BCC and CCR models, scale efficiency is computed as the ratio of the CCR (CRS) score to the BCC (VRS) score; the results are presented in Table 4.
Table 3. Input-oriented CRS (CCR) efficiency with returns to scale and benchmarks

DMU No.  DMU Name              Efficiency  Sum of lambdas  RTS         Optimal lambdas with benchmarks
1        Project_Naval 1       1.00000     1.000           Constant    1.000 Project_Naval 1
2        Project_Naval 2       0.60000     0.600           Increasing  0.600 Project_Naval 1
3        Project_Naval 3       0.40498     0.570           Increasing  0.493 Project_Naval 1; 0.078 Project_Estrutural 4
4        Project_Tubo 1        0.10726     0.167           Increasing  0.167 Project_Naval 1
5        Project_Tubo 2        0.21015     0.556           Increasing  0.089 Project_Naval 1; 0.467 Project_Estrutural 4
6        Project_Tubo 3        0.29246     0.868           Increasing  0.026 Project_Naval 1; 0.841 Project_Estrutural 4
7        Project_Tubo 4        0.29246     0.868           Increasing  0.026 Project_Naval 1; 0.841 Project_Estrutural 4
8        Project_Estrutural 1  0.82252     1.019           Decreasing  1.019 Project_Estrutural 4
9        Project_Estrutural 2  0.87393     1.083           Decreasing  1.083 Project_Estrutural 4
10       Project_Estrutural 3  0.68268     0.992           Increasing  0.002 Project_Naval 1; 0.991 Project_Estrutural 4
11       Project_Estrutural 4  1.00000     1.000           Constant    1.000 Project_Estrutural 4
12       Project_Estrutural 5  0.53097     0.489           Increasing  0.302 Project_Naval 1; 0.187 Project_Estrutural 4
Inputs: Duration of Project; Cost (R$). Outputs: Number of Products Delivered; Opportunities (Tons Sold).
Table 4. Input-oriented CRS, VRS and scale efficiency

DMU No.  DMU Name              CRS Efficiency  VRS Efficiency  Scale Efficiency
1        Project_Naval 1       1.00000         1.00000         1.00000
2        Project_Naval 2       0.60000         0.80000         0.75000
3        Project_Naval 3       0.40498         0.50107         0.80823
4        Project_Tubo 1        0.10726         0.32745         0.32756
5        Project_Tubo 2        0.21015         0.32745         0.64178
6        Project_Tubo 3        0.29246         0.32745         0.89316
7        Project_Tubo 4        0.29246         0.32745         0.89316
8        Project_Estrutural 1  0.82252         0.82345         0.99888
9        Project_Estrutural 2  0.87393         0.87793         0.99544
10       Project_Estrutural 3  0.68268         0.68700         0.99372
11       Project_Estrutural 4  1.00000         1.00000         1.00000
12       Project_Estrutural 5  0.53097         0.81131         0.65445

Source: output from DEA Frontier.
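The scale efficiency column is the ratio of the CRS score to the VRS score, which can be verified directly from the reported values (a few rows shown):

```python
# Scale efficiency = CRS efficiency / VRS efficiency, using scores from the tables above.
crs = {"Project_Naval 2": 0.60000, "Project_Tubo 1": 0.10726, "Project_Estrutural 5": 0.53097}
vrs = {"Project_Naval 2": 0.80000, "Project_Tubo 1": 0.32745, "Project_Estrutural 5": 0.81131}
scale = {dmu: round(crs[dmu] / vrs[dmu], 4) for dmu in crs}
# Project_Naval 2 -> 0.75, Project_Tubo 1 -> 0.3276, Project_Estrutural 5 -> 0.6545
```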
These results established practices to be implemented for the projects that showed technical and scale inefficiency. Among the critical success factors identified, the following practices contributed to the success of Projects Naval 1 and Estrutural 4, which were considered the benchmarks of the portfolio analysis:
- Working in cross-functional teams, including customers and suppliers;
- A well-defined product development process, from concept to launch;
- The ability to capture customer requirements;
- A project leader with the technical, managerial and interpersonal skills needed to manage conflicts and problems.
By applying the NTCR model, the projects are classified as: innovation, low technology, system, and critical time, except for the four projects from the Tubes program, which are classified as: innovation, high technology, system, and critical time. The input-oriented BCC (Banker, Charnes, Cooper) DEA model would therefore apply to 8 projects instead of 12.
Conclusions
It can be concluded that the adoption of methodologies aimed at efficiency project
management is an essential tool for competitiveness of the organization, motivating
employees, improving management processes and reducing time for product delivery.
An attempt was made to apply DEA methodology on development product
management, using a case study of Brazilian steel industry. The main results were the
identification of benchmark projects, which had maximum technical efficiency and the
indication that 80% of projects in the portfolio analysis showed technical and scale
inefficiency.
References
BANKER, R. D.; CHARNES, A.; COOPER, W. W. Some models for estimating technical and scale inefficiencies in data envelopment analysis. Management Science, Vol. 30, No. 9, pp. 1078-1092, Sep. 1984.
CHARNES, A.; COOPER, W. W.; RHODES, E. Measuring the efficiency of decision making units. European Journal of Operational Research, 2, 1978, 429-444.
CHARNES, A.; COOPER, W. W.; LEWIN, A. Y.; SEIFORD, L. M. Data Envelopment Analysis: Theory, Methodology and Application. 3rd ed. Massachusetts, USA, 1997. 513p.
KERZNER, Harold. Gestão de Projetos: as melhores práticas. 2ª. Edição - Porto Alegre:
Bookman, 2006. 824p.
ROZENFELD, H.; FORCELLINI, F. A.; AMARAL, D.C.; TOLEDO, J.C.; SILVA, S.L.;
ALLIPRANDINI, D.H; SCALICE, R.K. (2006). Gestão de Desenvolvimento de Produtos:
uma referência para melhoria do processo. São Paulo: 542p, Editora Saraiva.
ZHU, J.; COOK; W. D.. Data Envelopment Analysis: Modeling Operational Processes
and Measuring Productivity. York University, Canada. Wade D. Cook, 2008.
Simon TIENDREBEOGO
Institut de Recherche en Sciences de la Santé, stiendrebeogo@irss.bf (corresponding author)
Séni KOUANDA
Institut de Recherche en Sciences de la Santé, skouanda@irss.bf (corresponding author)
Abstract
The 4th and 5th Millennium Development Goals aim to reduce child mortality and improve maternal health by the year 2015. However, in Burkina Faso (West Africa), these indicators remain worrying. To face this situation, Burkina Faso has been extending its Emergency Obstetric and Neonatal Care (EmONC, or SONU in French) strategy since 2004. The main objective of this paper is to assess the technical efficiency of the public basic health care facilities (CSPS) with respect to EmONC. The great number of such facilities and their spatial distribution make them more accessible to women who need EmONC. Using an output-oriented model, we found that the median efficiency score of the CSPS is 1.17, that less than 20% of the CSPS are fully efficient, and that none of the sixty-three health districts of the country is efficient. Rural location of the CSPS and lack of electricity seem to be the main environmental factors of inefficiency according to our data.
Keywords: Data Envelopment Analysis (DEA), Two-stage estimation, Obstetrical and
Neonatal Emergency Care, Burkina Faso.
Introduction
Developing countries remain the most affected by maternal mortality in the world: according to the United Nations, 536,000 women die each year worldwide due to complications of pregnancy, childbirth and the puerperium. The 4th and 5th Millennium Development Goals (MDGs) aim to reduce child mortality and improve maternal health by the year 2015. However, in Burkina Faso, these two indicators remain worrying. The maternal mortality ratio was successively 566, 484 and 307.3 per 100,000 live births in 1993, 1998 and 2006. Other indicators show that Burkina Faso has a poor health status. Public expenditure on health is extremely low, amounting to less than 6.1% of GDP. In addition, the distribution of health resources is highly skewed, reflecting significant regional disparities in health services across the country.
Health System
The health system comprises two types of structures: administrative structures, and hospitals and social-health facilities. The health system is pyramidal and has three levels: the
central level represented by the office of Minister of Health, the General Secretariat
and the central departments; the intermediate or regional level (there are 13 Regional
Directorates of Health); the peripheral level or operational which includes 63 health
districts. The central level is the level of direction, design and policy making and
national development plans in health. The intermediate or regional level is the level of
monitoring of the implementation of policy and national development plans in health.
The peripheral level is managed by a team headed by a district medical officer. This is
the operational level where the development plans for health are implemented. The
health district is the operational entity.
Methods
Data Envelopment Analysis
Inclusion criteria
All health districts were included. In each district, all primary health care centers that
met the following criteria were included: Public facility, practice of delivery, no
missing data for the study variables. Outlier primary health care centers according to
efficiency score were also excluded.
The analysis was performed on a sample of 996 primary health facilities in Burkina Faso. Most of the primary health care centers are rural (90.8%). More than a quarter (28%) had no electricity supply, and 15% were not supplied with water.
These facilities vary in terms of equipment and resource endowment. In terms of obstetric capacity, the range is between 1 and 20. More than 75% have no midwife, while the maximum number found is 8 (Table 1).
The variation is much wider in outputs than in inputs: the range of the number of deliveries is greater than 100 (Table 3).
Efficiency estimation
Only 20% of primary health care centers are fully efficient. Most of them (80.42%) have slacks, and more than 70.5% of primary health care centers are inefficient (Table 4). Most CSPS (75.2%) are inefficient at each step of their production process.
Many health centers can improve their performance. Indeed, 50% of the CSPS could proportionally increase their output quantities by 17% with their current inputs (the median efficiency index is 1.17). Moreover, at the aggregate level, the outputs of Burkina Faso's CSPS could be improved by 23%.
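The interpretation of these output-oriented (Farrell) scores is simple arithmetic; a score of φ ≥ 1 translates directly into the feasible proportional output expansion:

```python
# Output-oriented (Farrell) score: phi = 1 means fully efficient;
# the feasible proportional output increase is (phi - 1) * 100 percent.
def output_expansion_pct(phi):
    return (phi - 1.0) * 100.0

# Median CSPS score of 1.17 -> half the centers could raise all outputs by at least 17%.
```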
Regardless of the region, the proportion of efficient CSPS is less than 40%. Nevertheless, we notice variation in efficiency among regions. For instance, the Centre-Sud, Boucle du Mouhoun and Sahel regions have high proportions of efficient CSPS (more than 25%). The lowest proportions are found in the Hauts-Bassins, Centre-Est and Est regions, where the proportion of efficient CSPS is less than 15%. The aggregated efficiency scores show that Boucle du Mouhoun, Centre-Sud and Sahel are, in decreasing order, the most efficient regions, and Plateau Central, Est and Cascades the least efficient.
In the same way, the analyses show that the Nouna and Pô health districts are the most efficient, while Banfora and Boussé are the least efficient.
Discussion
Seventy per cent of the CSPS in the sample were technically inefficient and 20% were fully efficient. A similar study of 155 primary health care clinics in KwaZulu-Natal found 30% of the clinics to be technically efficient. Our rate of inefficiency is considerably higher than the rates obtained in the Nouna district of Burkina Faso (30%) by Marshall and Flessa (2011) and in Ghana (65%) by Akazili (2008). These differences might be explained by a rather homogeneous structure of the health centers (in single-district studies), a limited number of health centers (regional studies), the strength and organization of the health system and the income of the country (country-level studies), or the chosen inputs and outputs.
Most primary health care centers (80%) use more inputs than needed at their current operational level. These health centers could produce the same level of outputs using 17% less of each input. It should, however, be noted that this does not imply the presence of excess capacity relative to needs; it might be explained by the low utilization rate of health services due to demand-side barriers of any type.
Centre-Sud, Boucle du Mouhoun and Sahel are the most efficient regions. For the Sahel and Boucle du Mouhoun regions, this might be related to the high proportion of health centers offering normal deliveries free of charge (41% and 31%, respectively). Centre-Sud has the highest percentage of health centers (55%) with a formal subsidy system for poor women and the second-lowest rate of home births (15%). These regions host the most efficient primary health centers: Nouna, the best one, is in the Boucle du Mouhoun region, while Pô, the second, is in Centre-Sud. Marshall and Flessa found that 70% of the primary health centers in Nouna are technically efficient. Our results show that health center location, power supply, and subsidized care for poor women are, in increasing order, factors affecting health center inefficiency. Rural location increases inefficiency; this could probably be related to, among other things, the care-seeking behavior of the catchment population and population density. For example, the highest proportion of home births is found in rural locations (39%). This factor is found by Tamiru Balchia in
Conclusions
The study shows that 80% of primary health care centers are inefficient. These
facilities are not operating at full technical capacity and could increase their outputs
within existing budgets. The evidence indicates that capacity utilization is insufficient
in rural health centers and in those without a power supply or a system of care for
indigent women. Our study recommends adopting policies for better management of
health centers and better use of health services.
References
Charnes A., Cooper W.W., Rhodes E. Measuring the efficiency of decision making units.
European Journal of Operational Research, 2:429–444, 1978.
Rolf Färe and Valentin Zelenyuk. On aggregate Farrell efficiency. European Journal
of Operational Research, 146:615–620, 2003.
Léopold Simar and Valentin Zelenyuk. Statistical inference for aggregates of Farrell-type
efficiencies. Journal of Applied Econometrics, 22:1367–1394, 2007.
Paul Marshall and Steffen Flessa. Efficiency of primary care in rural Burkina Faso:
a two-stage DEA analysis. Health Economics Review, pages 1–5, 2011.
Jacobs R., Smith P.C., Street A. Measuring Efficiency in Health Care: Analytic Techniques
and Health Policy. Cambridge: Cambridge University Press, 2006.
Mariana De Santis
Quantum, mdsantis@quantumamerica.com
Introduction
The aim of this study is to estimate the efficiency of the electricity distribution
companies in Brazil during 2004 to 2009. For this purpose, a semi-parametric
methodology that incorporates the impact of environmental variables and statistical
noise is applied. DEA is employed supplemented with SFA methodology, following
Fried et al (2002). This methodology adjusts the amounts of inputs considering
environmental variables and statistical noise using a stochastic frontier approach,
taking all companies to a common scenario, recalculating then the percentages of
efficiency by DEA. Unlike other methodologies, the amount of each input is adjusted
instead of the efficiency score, allowing environmental variables affect differently to
each of the inputs. This approach is particularly advantageous when estimates of
efficiency include more than one input, such as total expenditures, operating
expenditures and capital expenditures.
The practice of benchmarking for price-setting purposes is present in the most advanced
regulatory regimes worldwide. In Brazil, in the second tariff review (2007-2010),
benchmarking was used in the regulation of the distribution companies to analyze the
global consistency of the operating costs obtained by applying the "model company"
method, to relate technical losses to delinquency rates, and to determine the quality
of service. With the analysis of global consistency, the Brazilian regulator (ANEEL)
attempted to validate the reasonableness of the costs of the reference company,
obtained "top-down", against an alternative methodology. These analyses were not
made explicit in the discussions with the agents. However, the regulator committed to
explaining the global-consistency analyses to be used in the third review, where
Quantum – Experts in Public Utilities Regulation – www.quantumamerica.com
Methodology
The methodology used in this study is a three-stage approach according to Fried et al
(2002). This method combines the advantages of DEA and SFA methods. A complete
analysis of the advantages of using a three stage model can be found at Fried et al
(2002). In the first stage, we apply DEA to input and output data to obtain the initial
efficiency scores of firms. In the second stage SFA is used to attribute variation in first
stage electricity distribution companies’ performance to environmental variables,
managerial inefficiency and random noise. In the third stage, firms’ inputs are adjusted
to take into account the environmental effects and statistical noise uncovered in the
initial DEA. Finally, DEA is applied to the adjusted inputs to obtain improved
measures of managerial efficiency, net of environmental variables and random noise
effects.
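As a sketch of how the first stage might be computed, the input-oriented, constant-returns DEA score of each firm solves a small linear program. The helper below, with invented toy data, is our own illustration under the standard CCR formulation, not code from the study:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, o):
    """Stage-1 style input-oriented CCR (CRS) efficiency of DMU `o`.
    X is (n_dmus, n_inputs), Y is (n_dmus, n_outputs); returns theta in (0, 1]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                    # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):                            # sum_j lambda_j x_ij <= theta * x_io
        A_ub.append(np.r_[-X[o, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(s):                            # sum_j lambda_j y_rj >= y_ro
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (1 + n))
    return res.fun
```

Applied to every distributor, these first-stage scores and the associated input slacks are the raw material for the second-stage SFA regressions.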
Data
The data used in this study are those used by the Brazilian National Electricity Agency
(ANEEL) to determine the methodology that will be applied for calculating the
electricity rates for the third regulatory period. This database was used by ANEEL, in
the Public Hearing 40, for estimating the efficiency of the distribution companies. The
input and output information was provided by the companies to the regulator. The
environmental variables were surveyed by ANEEL from different sources which are
mentioned below. Additionally we segregated the total number of consumers by type
in small, industrial and rural with information from various public sources.
ANEEL’s database is available at its official website and it includes 61 electricity
distributors for the period 2004 – 2009. Years before 2003 were not considered due to
data limitations and to exclude the period of energy rationing that took place in 2001
and 2002. In this study 17 of these companies were excluded due to lack of specific
data mainly related to consumer type disaggregation. Details about the definition of
the variables can be found in the Technical Note 294 by ANEEL.
The variables used in this study are the following:
(See Technical Note Number 294/2011, SRE/ANEEL.)
Capital Costs
The capital cost considered in this study was calculated according to an economic
concept, including the Annual Depreciation of physical capital and the Opportunity
Cost of Capital. Since the aim of this work is to measure input-oriented efficiency,
it was considered more appropriate to compute the cost of capital as an annuity:
allocating the cost of investments evenly over the asset life neither penalizes the
efficiency of companies whose net assets are newer nor rewards, with a low cost of
capital, those with older assets. For the calculation of the annuity, an opportunity
cost of capital of 16.07% and the average asset life of each company were considered,
both parameters used by the regulator in its efficiency estimates.
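As an illustration, the annuity can be computed with the standard capital recovery factor. The 25-year life below is a placeholder of ours, since the study uses each company's own average asset life:

```python
def annuity(capital, rate=0.1607, life_years=25):
    """Equal annual payment that recovers `capital` over `life_years`
    at opportunity cost `rate` (the capital recovery factor).
    rate=0.1607 follows the regulator's 16.07% opportunity cost of capital;
    life_years=25 is an illustrative assumption, not a study parameter."""
    growth = (1.0 + rate) ** life_years
    crf = rate * growth / (growth - 1.0)
    return capital * crf
```

For long asset lives the annuity approaches the pure opportunity cost `rate * capital`, which is why newer and older asset bases end up treated symmetrically.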
Consumers
It includes Residential, Commercial, Industrial, Rural and other categories in
December of each year of the period.
Average wage
The salary was calculated by ANEEL for its estimates of efficiency and it is expressed
in local currency of December 2010. This variable measures the labor cost a
distribution company faces when hiring employees.
Estimated models
In order to obtain the efficiency of Brazilian electricity distributors during 2004-2009
three alternative models were estimated:
The first model includes operating costs and capital costs as inputs, following
the premise that the measurement of efficiency must take a comprehensive approach,
including all inputs involved in the production process jointly. Thus, the risk of
measuring the efficiency of operating costs conditional on the quantity and quality of
the distribution network, which may differ significantly among compared companies,
is minimized. In other words, it avoids the problems associated with the
trade-off between the remuneration of two different production factors.
The outputs considered are the number of consumers and network length per year.
For the correction of the slacks of both inputs (stage 2), we considered the following
environmental variables:
(See Technical Note N. 271/2010 by SRE/ANEEL.)
Regarding the signs of the estimated coefficients associated with the environmental
variables in the SFA adjustment of the optimal slacks, average wages are expected to
have a positive impact: distributors operating in areas where labor receives higher
wages necessarily face higher costs than those located in areas where the formal
labor market is more depressed, and this difference should not be attributed to
inefficient management.
It is expected that companies located in areas with high proportion of social
vulnerability face greater difficulties in reducing non-technical losses. It is therefore
likely that operating costs to combat fraud and costs of anti-fraud network be in direct
relation with indices of socioeconomic complexity. Accordingly, this variable is
expected to be positive in the optimum adjustment of the slack.
The coefficient associated with rain is expected to be positive, since companies
operating in rainy areas are exposed to higher operational costs for service restoring
such as network reconnection costs, posts and line replacement, etc.
The second model includes the same inputs and outputs as the first, with the
difference that consumers are broken into three categories: small (composed of
residential and commercial consumers), industrial, and rural.
The third model considers as input only the operation and maintenance costs.
Unlike the previous models, it focuses only on OPEX efficiency, controlling for the
magnitude of invested capital through the network length. It is worth mentioning that
the benchmarking techniques used by the Brazilian regulator to promote efficiency in
the last review, conducted last year, were applied only to OPEX. The outputs
considered are the number of served customers, the total energy delivered and the
network length. The latter is also treated as an output by the Brazilian regulator, as
well as by other authors, who consider the number of kilometers of maintained network
as an output. Estache et al (2010), however, show that numerous benchmarking studies
use the length of the network as an input, as a proxy for capital, usually in
estimates of cost functions or distance functions (p. 143).
Environmental variables used in the adjustment of the optimal slacks are the same as
in the previous models.
From the results of the first stage, we calculated each company's slacks, which were
then adjusted to incorporate the effect of environmental and stochastic variables. A
regression was run for the slack of each input of the model, under the assumption
that the error term capturing inefficiency follows a normal distribution truncated
at 0. We adopted a panel data model with random effects, in which inefficiency varies
over the period according to the Battese-Coelli model. The estimated coefficients
present the expected sign in all cases and are statistically significant. The rain
variable was statistically significant only in Model 3. Companies operating in areas
with higher wages face higher costs, as do those operating in areas of higher social
conflict. The parameter lambda is statistically different from zero, indicating that
managerial inefficiency explains the variability of slacks across sample firms and
time.
The impacts of the environmental and stochastic variables estimated in the second
stage were incorporated into the inputs of each company, which were adjusted to bring
all distributors to a common operating environment. To do so, the input quantities of
producers who benefited from a favorable environment and "good luck" were adjusted
upward.
Finally, the efficiency scores were recalculated using the inputs adjusted in the
second stage. A summary of the results is presented in Table 3, which shows the
average efficiency scores for the total sample and for some groups of distributors.
As expected, in all cases efficiency is higher after the non-managerial component of
total slacks is separated out. In general, the average efficiency rises from
0.80/0.86 to 0.93/0.92 in the estimations considering OPEX and capital costs as
inputs (Models 1 and 2), whereas in Model 3 the initial mean score of 0.74 increases
to 0.89 in the last stage.
Note that the average score of the companies serving larger numbers of customers
(more than 3 million) is higher than that of companies classified as small or medium,
both in the first and third stages of the estimations in all models, while no
significant disparities are observed between the smallest and medium firms. Notice
that after "leveling the playing field," in the words of Fried et al (2002), the size
of the distribution companies seems to have a reduced impact on the final efficiency
estimates. However, the results suggest that larger companies retain advantages over
smaller ones, attributable to higher density reducing unit costs.
It is also worth noting that Model 3 yields smaller efficiency scores than Models 1
and 2, suggesting that when the analysis is more complete, considering both OPEX and
capital costs, the existing trade-offs between capital and labor are better captured
by the model, whereas when estimating OPEX efficiency alone, companies that are more
labor intensive may be deemed inefficient. It seems clear that when both inputs are
considered, inefficiency is better isolated from the other variables that affect
costs.
Conclusions
In this study we have obtained estimates of the efficiency of electricity distributors
in Brazil in the period 2004-2009, applying the DEA and SFA methodologies. The results
show that the impact of non-managerial variables, such as the level of input prices
and socioeconomic complexity, as well as statistical noise, significantly affects the
efficiency of the firms in the sample, and this effect is more pronounced among small
and medium firms. A notable contribution is the incorporation of stochastic variables
in the models, in contrast with the methodology recently used by the Brazilian
regulator, which explicitly included environmental variables but excluded the effect
of statistical noise. Another interesting aspect is the inclusion of capital costs in
the models jointly with OPEX, capturing the effect of a possible trade-off between
these two inputs. Moreover, the efficiency scores of each electricity distributor show
small variations across the years, unlike the results obtained by the regulator in the
last revision, which in some cases showed sharp fluctuations in the scores.
It is worth noting that in this study a more complete model was estimated in
comparison with common practice, by considering different types of consumers, which
allows taking into account the different degrees of effort required to serve diverse
users. The results indicate a trade-off between OPEX and CAPEX, since the average
score increases when the input representing capital is explicitly included.
References
Banker, R.D. and R. Natarajan (2008). Evaluating contextual variables affecting
productivity using data envelopment analysis. Operations Research 56(1): 48-58.
Baumol, W., Panzar, J. and Willig, R. (1988) Contestable markets and the theory of
industry structure, Harcourt Brace Jovanovich, Inc.
Burns, P. and Weyman-Jones, T.G, (1996). "Cost Functions and Cost Efficiency in
Electricity Distribution: A Stochastic Frontier Approach," Bulletin of Economic
Research, Wiley Blackwell, vol. 48(1), pages 41-64, January.
Farsi, M. and Filippini, M. (2004). ‘Regulation and Measuring Cost Efficiency with
Panel Data Models: Application to Electricity Distribution Utilities’, Review of Industrial
Organization, 25 (1): 1-19.
Farsi, M., Filippini, M. and Greene, W. (2005). "Application of Panel Data Models in
Benchmarking Analysis of the Electricity Distribution Sector," CEPE Working Paper
Series 05-39, CEPE Center for Energy Policy and Economics, ETH Zurich.
Fried, H.O.; Lovell C.A.K.; Schnidt S.S.; Yaisawarng S. (2002), Accounting for
Environmental Effects and Statistical Noise in Data Envelopment Analysis, Journal of
Productivity Analysis 17, 157-174.
Growitsch, C., Jamasb, T. and Pollitt, M. (2005). "Quality of Service, Efficiency, and
Scale in Network Industries: An Analysis of European Electricity Distribution," IWH
Discussion Papers 3, Halle Institute for Economic Research.
Hess, B. and Cullmann, A. (2007). "Efficiency analysis of East and West German
electricity distribution companies - Do the "Ossis" really beat the "Wessis"?," Utilities
Policy, Elsevier, vol. 15(3), pages 206-214, September.
Mota, R., (2004). Comparing Brazil and USA electricity distribution performance: what
was the impact of Privatization? Cambridge Working Papers in economics, CWPE
0423. The Cambridge-MIT Institute.
Pérez-Reyes, R., and Tovar, B. (2009): Measuring Efficiency and Productivity Change
(PTF) in The Peruvian Electricity Distribution Companies after Reforms, Energy Policy,
37, 2249-2261.
Ramos-Real, F., Tovar, B., Iootty, M, Fagundes de Almeida, E. and Queiroz, H. (2009).
“The evolution and main determinants of productivity in Brazilian electricity
distribution 1998-2005: an empirical analysis”, Energy Economics, 31 (2) 298-305.
Silva, Hamilton (2011). “Cost Efficiency in Periodic Tariff Reviews: The Reference
Utility Approach and the Role of Interest Groups.” University of Florida, Department
of Economics, PURC Working Paper.
Tovar, B., Ramos-Real, F. and Fagundes de Almeida, E. (2011). “Firm size and
productivity. Evidence from the electricity distribution industry in Brazil”, Energy
Policy, 39 826-833.
Abstract
This study evaluates production efficiency in sugarcane farms using DEA. Seventeen
DMUs from the 2010-2011 harvest were considered. The inputs were land rent, raw
material, and the costs of harvesting, loading and transporting the sugarcane. Revenue
from sugarcane sales was considered as the output. The input-oriented CCR model was
used. Stalk productivity ranged from 76 to 114 t of stalks/ha, with an average of
81 t/ha. Fertilizer costs ranged from R$14.33 to R$366.59/ha; the costs of harvesting,
loading and transporting the stalks were the highest, ranging from R$16.43 to R$27.16.
A relationship was found between crop productivity and profit in R$/ha. Of the 17
DMUs, six were efficient; these six DMUs served as benchmarks for the inefficient
ones. The DEA method helped identify efficient sugarcane farmers and explore their
knowledge in order to improve the inefficient farms.
Keywords: Data Envelopment Analysis; sugarcane; agricultural management.
Introduction
On the world stage, Brazil stands out as the largest producer of sugarcane, with
approximately 33% of world production, and ranks first in the production of sugar and
ethanol, accounting for over half of the sugar traded worldwide (MAPA, 2011). This
position is mainly due to the relatively large cropped area, the high productivity
levels achieved in the main producing regions of the country, and the increased
productive potential of new cultivars (MARTINELLI, 2011; OLIVEIRA et al., 2011).
In recent decades, interest in the sugarcane crop has grown, from both a
socioeconomic and an environmental standpoint.
Methods
With respect to its objective, this research is characterized as exploratory and
descriptive; with respect to its results, it is classified as applied.
The data were obtained from SEBRAE/MG for the period 2006 to 2011 (five sugarcane
crops) and, complementarily, technical visits were made to sugarcane
Among the three input variables under analysis, total raw materials showed, on
average, the greatest reduction in the expenses required for the DMUs to achieve the
reduction goal (17.8%), although the single largest indicated reduction refers to the
variable 'land rent' for DMU 2 (36.5%).
The farms considered efficient are benchmarks for the other units in the analysis.
This reference weight ranges from 0 to 1 according to how close the input and output
levels of the farms are: the closer to 1, the more important the efficient DMU is as
a reference for the inefficient one.
Table 3 presents the results of the benchmark analysis, considering the farm which is
the main reference for the index and the importance of reference (lambda).
DMU 12 is the benchmark for DMUs 11, 4 and 1. In the case of DMU 1, however, the
importance index of this reference is relatively low; for this farm, mirroring the
practices of DMU 12 is therefore a more distant goal.
DMUs 10, 13 and 16, although efficient, were not main benchmarks for any other DMU.
Analyzing their data, these three DMUs are efficient but are not close to the other
DMUs in the dimension of their variables, and therefore do not stand as main
benchmarks.
In this study, a relationship was also verified between crop productivity and profit
in R$/ha, described by the equation y = 24.2x - 938.29 (R² = 0.7845), shown in
Figure 2.
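The fitted line gives a quick way to read expected profit off observed productivity; for instance, evaluating it at the sample's average productivity of 81 t/ha:

```python
def predicted_profit(productivity_t_ha):
    """Profit in R$/ha predicted by the fitted relation y = 24.2x - 938.29,
    where x is stalk productivity in t/ha."""
    return 24.2 * productivity_t_ha - 938.29
```

At 81 t/ha the line predicts roughly R$ 1,022/ha, and it crosses zero near 38.8 t/ha, the implied break-even productivity.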
References
BANKER, R. D.; COOPER, W.W.; SEIFORD, L. M.; ZHU, J. (2004). Return to scale in
DEA. In: COOPER, W.W.; SEIFORD, L.M., ZHU, J. (Eds.). Handbook on data
envelopment analysis. Boston: Kluwer Academic.
COOPER, W. W.; LI, S.; SEIFORD, L. M.; ZHU, J. (2004). Sensitivity analysis in DEA. In:
Handbook on data envelopment analysis. Boston: Kluwer Academic, chap. 3, p. 75-97.
GALLARDO A.L.C.F & BOND A. (2010). Capturing the implications of land use change
in Brazil through environmental assessment: Time for a strategic approach? Environ
Impact Asses Rev.
MACEDO, I.C. et al. (2004). Balanço das emissões de gases do efeito estufa na
produção e no uso do etanol no Brasil. São Paulo: Secretaria de Meio Ambiente do
Estado de São Paulo.
M.A. Raayatpanah
School of Mathematics, Statistics and Computer Science, University of Tehran, Tehran, Iran,
raayatpanah@khayam.ut.ac.ir
Abstract
This paper presents an application of robust optimization in which data envelopment
analysis (DEA) is used to measure overall profit efficiency when data are uncertain.
When the inputs and outputs as well as the input and output price vectors of decision
making units (DMUs) in the constraints belong to an uncertain set (a set with data
uncertainty), we compute overall profit efficiency under the assumption of the worst
case scenario in relation to the uncertainty and the adjustment of the level of
robustness of the solution is also allowed to trade off performance with protection
among these uncertain elements. We show that the maximum overall profit efficiency
score may not always occur in an optimistic case and the decision maker (DM) can
obtain the maximum overall profit efficiency score corresponding to a value between
the optimistic and pessimistic cases. The results of the study have been exemplified by
real applications.
Keywords: Data Envelopment Analysis, Robust Optimization, Overall profit efficiency,
Uncertain Data.
Introduction
Data envelopment analysis (DEA) is concerned with the evaluation of performance,
particularly the activities of organizations. Charnes et al. (1978) first introduced
the CCR model under constant returns to scale (CRS) by extending linear programming
to production analysis. The original CCR model was applicable only to technologies
characterized by global CRS; this was modified in the model introduced by Banker et
al. (1984), which assumes variable returns to scale (VRS). In traditional DEA,
decision making units (DMUs) are evaluated assuming data certainty, so traditional
DEA cannot appraise DMUs under uncertainty conditions such as imprecision, vagueness,
or inconsistency. The concept of uncertainty is thus one of the interesting subjects
in DEA. In the real world, data uncertainty appears in different forms, for instance
as interval, fuzzy, or stochastic data. Sengupta (1992) was the first DEA study to
change the traditional view of uncertainty. Cooper et al. (1999) presented the
interval approach to deal with interval
The robust overall profit efficiency model with input and output data uncertainty is:
\[
\begin{aligned}
\max\ & \varphi-\theta\\
\text{s.t.}\ & \sum_{r=1}^{s} p_{rj}\Big(\varphi y_{ro}-\sum_{l=1}^{n} y_{rl}\lambda_{l}\Big)
 +\max_{\{s_{j}^{y}\,\mid\, s_{j}^{y}\subseteq J_{j}^{y},\,|s_{j}^{y}|\le\Gamma_{j}^{y}\}}
 \Big[\sum_{r\in s_{j}^{y}} p_{rj}\Big(d_{ro}^{y}(\varphi-\lambda_{o})-\sum_{\substack{l=1\\ l\neq o}}^{n}\lambda_{l}d_{rl}^{y}\Big)\\
&\quad+(\Gamma_{j}^{y}-\lfloor\Gamma_{j}^{y}\rfloor)\,p_{t_{j}j}\Big(d_{t_{j}o}^{y}(\varphi-\lambda_{o})-\sum_{\substack{l=1\\ l\neq o}}^{n}\lambda_{l}d_{t_{j}l}^{y}\Big)\Big]\le 0,
 \qquad j=1,\dots,n,\\
& \sum_{i=1}^{m} c_{ij}\Big((-\theta+\lambda_{o})x_{io}+\sum_{\substack{l=1\\ l\neq o}}^{n}\lambda_{l}x_{il}\Big)
 +\max_{\{s_{j}^{x}\,\mid\, s_{j}^{x}\subseteq J_{j}^{x},\,|s_{j}^{x}|\le\Gamma_{j}^{x}\}}
 \Big[\sum_{i\in s_{j}^{x}} c_{ij}\Big(d_{io}^{x}(-\theta+\lambda_{o})+\sum_{\substack{l=1\\ l\neq o}}^{n}\lambda_{l}d_{il}^{x}\Big)\\
&\quad+(\Gamma_{j}^{x}-\lfloor\Gamma_{j}^{x}\rfloor)\,c_{t_{j}j}\Big(d_{t_{j}o}^{x}(-\theta+\lambda_{o})+\sum_{\substack{l=1\\ l\neq o}}^{n}\lambda_{l}d_{t_{j}l}^{x}\Big)\Big]\le 0,
 \qquad j=1,\dots,n,\\
& \sum_{j=1}^{n}\lambda_{j}=1,\qquad \lambda_{j}\ge 0,\quad j=1,\dots,n. && (1)
\end{aligned}
\]
Model (1) is non-linear; applying the proposition of Bertsimas and Sim (2004), it can
be rewritten as the equivalent linear program:
\[
\begin{aligned}
\max\ & \varphi-\theta\\
\text{s.t.}\ & \sum_{r=1}^{s} p_{rj}\Big(\varphi y_{ro}-\sum_{l=1}^{n} y_{rl}\lambda_{l}\Big)+z_{j}^{y}\Gamma_{j}^{y}+\sum_{r\in J_{j}^{y}} q_{rj}^{y}\le 0,\qquad j=1,\dots,n,\\
& \sum_{i=1}^{m} c_{ij}\Big(-\theta x_{io}+\sum_{l=1}^{n} x_{il}\lambda_{l}\Big)+z_{j}^{x}\Gamma_{j}^{x}+\sum_{i\in J_{j}^{x}} q_{ij}^{x}\le 0,\qquad j=1,\dots,n,\\
& z_{j}^{y}+q_{rj}^{y}\ge p_{rj}(\varphi-\lambda_{o})\,d_{ro}^{y},\qquad
  z_{j}^{y}+q_{rj}^{y}\ge \sum_{\substack{l=1\\ l\neq o}}^{n}\lambda_{l}\,p_{rj}\,d_{rl}^{y},\qquad j=1,\dots,n,\ r\in J_{j}^{y},\\
& z_{j}^{x}+q_{ij}^{x}\ge c_{ij}(-\theta+\lambda_{o})\,d_{io}^{x},\qquad
  z_{j}^{x}+q_{ij}^{x}\ge \sum_{\substack{l=1\\ l\neq o}}^{n}\lambda_{l}\,c_{ij}\,d_{il}^{x},\qquad j=1,\dots,n,\ i\in J_{j}^{x},\\
& \sum_{j=1}^{n}\lambda_{j}=1,\qquad \lambda_{j}\ge 0,\ z_{j}^{y}\ge 0,\ z_{j}^{x}\ge 0,\ q_{rj}^{y}\ge 0,\ q_{ij}^{x}\ge 0,
 \quad j=1,\dots,n,\ r\in J_{j}^{y},\ i\in J_{j}^{x}. && (2)
\end{aligned}
\]
It should be mentioned that Model (2) can be applied in cases where the lower
bound of each interval input and output is non-negative.
Robust Overall Profit Efficiency (ROPE) Model with the Input and
Output Price Vectors Uncertainty
In this section, a model is formulated, based on robust optimization, to measure
overall profit efficiency with the input and output price vector uncertainty. In real
applications, it is unlikely for all coefficients of price vectors to be equal to their
nominal value; it is also unlikely for them all to be equal to their worst-case value.
Thus, it is desirable to adjust the level of conservativeness of the solution, so that a
reasonable trade-off between robustness and performance is achieved. The numbers
Γ^p_j, j = 1,…,n, and Γ^c_j, j = 1,…,n, are introduced, taking values in the intervals
[0, |J^p_j|] and [0, |J^c_j|], where J^p_j and J^c_j are the index sets of the
uncertain parameters of the output and input prices, respectively; they are not
necessarily integer-valued. Γ^p_j and Γ^c_j adjust the robustness of the proposed
method against the level of conservatism of the solution; in fact, they are the
protection levels for the j-th constraint. The number of coefficients allowed to vary
is at most Γ^p_j for the output price vector and Γ^c_j for the input price vector.
Thus, the main purpose is to introduce a model that is protected against the maximum
Γ^p_j and Γ^c_j, where one additional output price p_{t_j j} and one additional input
price c_{t_j j} vary by at most (Γ^p_j − ⌊Γ^p_j⌋) d^p_{t_j j} and
(Γ^c_j − ⌊Γ^c_j⌋) d^c_{t_j j}, respectively. That is, it is assumed that only a subset
of the output prices and a subset of the input prices change, yielding the solution
approach presented by Bertsimas and Sim (2004). We propose the robust counterpart of
Model (5) as follows:
\[
\begin{aligned}
d_{o}=\max\ & \varphi-\theta\\
\text{s.t.}\ & \sum_{r=1}^{s} p_{rj}\Big(\varphi y_{ro}-\sum_{l=1}^{n} y_{rl}\lambda_{l}\Big)
 +\max_{\{s_{j}^{p}\,\mid\, s_{j}^{p}\subseteq J_{j}^{p},\,|s_{j}^{p}|\le\Gamma_{j}^{p}\}}
 \Big[\sum_{r\in s_{j}^{p}} d_{rj}^{p}\Big(\varphi y_{ro}-\sum_{l=1}^{n} y_{rl}\lambda_{l}\Big)\\
&\quad+(\Gamma_{j}^{p}-\lfloor\Gamma_{j}^{p}\rfloor)\,d_{t_{j}j}^{p}\Big(\varphi y_{t_{j}o}-\sum_{l=1}^{n} y_{t_{j}l}\lambda_{l}\Big)\Big]\le 0,
 \qquad j=1,\dots,n,\\
& \sum_{i=1}^{m} c_{ij}\Big(-\theta x_{io}+\sum_{l=1}^{n} x_{il}\lambda_{l}\Big)
 +\max_{\{s_{j}^{c}\,\mid\, s_{j}^{c}\subseteq J_{j}^{c},\,|s_{j}^{c}|\le\Gamma_{j}^{c}\}}
 \Big[\sum_{i\in s_{j}^{c}} d_{ij}^{c}\Big(-\theta x_{io}+\sum_{l=1}^{n} x_{il}\lambda_{l}\Big)\\
&\quad+(\Gamma_{j}^{c}-\lfloor\Gamma_{j}^{c}\rfloor)\,d_{t_{j}j}^{c}\Big(-\theta x_{t_{j}o}+\sum_{l=1}^{n} x_{t_{j}l}\lambda_{l}\Big)\Big]\le 0,
 \qquad j=1,\dots,n,\\
& \sum_{j=1}^{n}\lambda_{j}=1,\qquad \lambda_{j}\ge 0,\quad j=1,\dots,n. && (3)
\end{aligned}
\]
The above model is non-linear. We can obtain the linear form of Model (3) using the
proposition given by Bertsimas and Sim (2004), as follows:
\[
\begin{aligned}
d_{o}=\max\ & \varphi-\theta\\
\text{s.t.}\ & \sum_{r=1}^{s} p_{rj}\Big(\varphi y_{ro}-\sum_{l=1}^{n} y_{rl}\lambda_{l}\Big)+z_{j}^{p}\Gamma_{j}^{p}+\sum_{r\in J_{j}^{p}} q_{rj}^{p}\le 0,\qquad j=1,\dots,n,\\
& \sum_{i=1}^{m} c_{ij}\Big(-\theta x_{io}+\sum_{l=1}^{n} x_{il}\lambda_{l}\Big)+z_{j}^{c}\Gamma_{j}^{c}+\sum_{i\in J_{j}^{c}} q_{ij}^{c}\le 0,\qquad j=1,\dots,n,\\
& z_{j}^{p}+q_{rj}^{p}\ge d_{rj}^{p}\,f_{r}^{p},\qquad -f_{r}^{p}\le \varphi y_{ro}-\sum_{l=1}^{n} y_{rl}\lambda_{l}\le f_{r}^{p},\qquad r=1,\dots,s,\\
& z_{j}^{c}+q_{ij}^{c}\ge d_{ij}^{c}\,f_{i}^{c},\qquad -f_{i}^{c}\le \theta x_{io}-\sum_{l=1}^{n} x_{il}\lambda_{l}\le f_{i}^{c},\qquad i=1,\dots,m,\\
& \sum_{j=1}^{n}\lambda_{j}=1,\qquad \lambda_{j},\,z_{j}^{p},\,z_{j}^{c},\,q_{rj}^{p},\,q_{ij}^{c},\,f_{r}^{p},\,f_{i}^{c}\ge 0. && (4)
\end{aligned}
\]
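Both robust counterparts rest on the same inner maximization, the protection term of Bertsimas and Sim (2004), which has a simple closed form: keep the ⌊Γ⌋ largest possible deviations and weight the next one by the fractional part of Γ. A small helper of our own (not code from the paper) illustrates it:

```python
import math

def protection(devs, x, gamma):
    """Worst-case growth of a constraint value a.x when at most `gamma`
    of its coefficients deviate, coefficient i by up to devs[i].
    0 <= gamma <= len(devs); gamma need not be an integer."""
    vals = sorted((d * abs(v) for d, v in zip(devs, x)), reverse=True)
    k = math.floor(gamma)           # number of full deviations allowed
    frac = gamma - k                # fractional budget for the next one
    return sum(vals[:k]) + (frac * vals[k] if k < len(vals) else 0.0)
```

In the linearized models, the auxiliary variables z and q play the role of the LP dual of this inner maximization.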
Conclusions
In this paper, a deterministic methodology was proposed to address the problem of
measuring overall profit efficiency when the input, output, and price-vector
parameters lie within an uncertainty set. Using the robust optimization concept, an
equivalent model of the same class, free of uncertainty, was built. Specifically, the
Acknowledgement
This research is partially supported by Ardabil Islamic Azad University.
References
Banker, R., Charnes, A., Cooper, W.W. (1984) Some models for estimating technical
and scale inefficiencies in data envelopment analysis, Management Science 30 (9):
1078-1092.
Bertsimas, D., and Sim, M. (2004) The price of robustness, Operations Research 52 (1):
35-53.
Charnes, A., Cooper, W. W., Rhodes, E. (1978) Measuring the efficiency of decision
making units, European Journal of Operational Research; 2 (6): 429-444.
Cooper, W.W., Park, K.S., Yu, G. (1999) IDEA and AR-IDEA: models for dealing with
imprecise data in DEA, Management Science 45: 597-607.
Entani, T., Maeda, Y., Tanaka, H. (2002) Dual models of interval DEA and its
extension to interval data, European Journal of Operational Research 136: 32-45.
Table 1: The input, output, and nominal price data for 4 DMUs.

DMU j | I1 | I2  | O1  | O2 | c_1j | c_2j | p_1j | p_2j
1     | 20 | 151 | 100 | 90 | 500  | 100  | 550  | 2010
2     | 19 | 131 | 150 | 50 | 350  | 80   | 400  | 1800
3     | 25 | 160 | 160 | 55 | 450  | 90   | 480  | 2200
4     | 27 | 168 | 180 | 72 | 600  | 120  | 600  | 3500

DMU j | k^1_j | k^2_j | k^U_j      DMU j | d^1_j | d^2_j | d^U_j
1     | 0.19  | 0.05  | 0.29       1     | 0.00  | 0.00  | 0.12
2     | 1.16  | 0.21  | 3.07       2     | 0.17  | 0.08  | 0.32
3     | 0.16  | 0.61  | 1.45       3     | 0.29  | 0.15  | 0.46
4     | 0.00  | 0.00  | 0.06       4     | 0.13  | 0.06  | 0.24
Josiane Baldo
Federal University of Espírito Santo, Brazil, Master in Civil Engineering, josianebaldo@hotmail.com
Abstract
Tourism has become an increasingly important sector for the economic and social
development of a country, contributing to the reduction of regional disparities and
favoring growth. Transport services, as well as road infrastructure, are an essential
part of tourism, promoting the movement of tourists to their intended destinations.
This paper presents the use of DEA to measure efficiency in attracting tourism
investment relative to investment in highways. Government tourism planning in
Espirito Santo State is based on eight tourist routes. Each route originates in
Vitoria, the capital of Espirito Santo State, and has a final destination that
defines the route. For this study, 87 road stretches linking Vitoria to each urban
community composing these routes were analyzed. These 87 road stretches were defined
as DMUs and divided into two groups according to road type, single or dual
carriageway. The inputs are the distance between cities, road investment, road
maintenance investment and traffic volume. Tourism investment was used as the
product. The DEA model used was the output-oriented BCC. The results show that the
level of tourism investment can increase in comparison with road investments.
Keywords: Data Envelopment Analysis, Efficiency Analysis, Highways of the Espírito
Santo, Tourism Investment and Highways
Introduction
According to Porter (1990), the infrastructure is essential to the promotion of systemic
conditions of competitiveness in service systems - transport, energy, water,
telecommunications - key to economic activity.
In the economic context, according to the Brazilian Institute of Tourism (2010), the
country reached the goal of $ 5.8 billion in international currencies generated by
tourism. In the Espirito Santo State, according to a survey conducted by the State
Secretariat of Tourism of the Espirito Santo State - SETUR in February 2010, the
Identification DMUs
The road stretches linking the capital of Espírito Santo (Vitoria) to the cities
that make up each route can be considered homogeneous decision-making units, because
they use the same input/product vectors with the same goal, namely road transport
carried out by passenger vehicles, public transportation vehicles, cargo vehicles
and motorcycles; they can therefore be evaluated by their relative efficiencies,
with some units identified as composing the efficient production frontier.
Note that some road stretches connecting Vitoria to a given municipality share
inputs and outputs with another stretch that connects Vitoria to another city. This
happens because stretches are composed of segments passing through one or more
cities.
The decision to take the city of Vitoria as the starting point was due to several
factors: first, its geographical location, being equidistant from most cities that
make up the routes, except for the one route to which the municipality itself
belongs. The second factor is important
DEA Variables
The initial selection was based on data availability at DER-ES, the State Secretariat of
Tourism of Espírito Santo State, the Undersecretary of State Budget, and the
literature review. Six variables were initially chosen to analyze the efficiency of the
road stretches. Taking into consideration the opinion of experts, the variables relevant
for the final analysis of the efficiency of the road stretches connecting Vitória to each
municipality in the Tourist Routes of Espírito Santo State were:
• Inputs: distance, investment in roads, investment in maintenance, and traffic volume.
• Output: investment in tourism.
In this study we adopted the BCC model (BANKER, CHARNES, COOPER, 1984), with
variable returns to scale and output (product) orientation, with and without weight
restrictions, using the SIAD software.
SIAD (ANGULO MEZA et al., 2005) was developed to compute all the results of
DEA models, such as DMU efficiencies, the weights of each variable, targets,
benchmarks and slacks. Furthermore, it offers the option of inserting weight
restrictions and supports up to 150 DMUs. Through SIAD one can also see which
units operate with constant returns to scale and which operate with variable returns
to scale.
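For readers without access to SIAD, the output-oriented BCC envelopment model it solves can be reproduced with any linear-programming solver. The sketch below is a minimal illustration using scipy.optimize.linprog; the four road stretches and their input/output values are hypothetical examples, not the study's data.

```python
# Minimal sketch of the output-oriented BCC (variable returns to scale) DEA
# envelopment model, solved with scipy.optimize.linprog. Data are hypothetical.
import numpy as np
from scipy.optimize import linprog

def bcc_output_efficiency(X, Y, j0):
    """X: (m, n) input matrix, Y: (s, n) output matrix, j0: DMU under evaluation.
    Returns the expansion factor phi >= 1; DEA efficiency = 1 / phi."""
    m, n = X.shape
    s = Y.shape[0]
    # decision variables: [phi, lambda_1, ..., lambda_n]; maximize phi
    c = np.zeros(n + 1)
    c[0] = -1.0
    # input constraints: sum_j lambda_j * x_ij <= x_i,j0
    A_in = np.hstack([np.zeros((m, 1)), X])
    b_in = X[:, j0]
    # output constraints: phi * y_r,j0 - sum_j lambda_j * y_rj <= 0
    A_out = np.hstack([Y[:, [j0]], -Y])
    b_out = np.zeros(s)
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([b_in, b_out])
    # BCC convexity constraint: sum_j lambda_j = 1
    A_eq = np.hstack([[0.0], np.ones(n)]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# four hypothetical road stretches: 2 inputs (distance, traffic), 1 output
X = np.array([[50., 80., 60., 90.],
              [30., 40., 35., 55.]])
Y = np.array([[100., 120., 90., 130.]])
for j in range(4):
    phi = bcc_output_efficiency(X, Y, j)
    print(f"DMU {j}: efficiency = {1 / phi:.3f}")
```

Efficient units obtain phi = 1 (efficiency 1.0); inefficient units obtain phi > 1, indicating how much their output could be radially expanded with the same inputs.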
Conclusions
Besides the analysis of tourism investment relative to investment in roads, several
other important indicators for the efficiency analysis of the Tourist Routes were not
included here. It is therefore extremely important that the responsible sectors be
asked to include indicators such as:
• Number of tourists;
• Accident rate on highways;
• Cost of accidents on highways.
Another suggestion would be to analyze the efficiency of the Tourist Routes in relation
to their infrastructure (the municipality's power of attraction, number of beds, number
of tourist attractions, number of service stations, among other tourism products).
References
ANGULO MEZA, L.; BIONDI NETO, L.; SOARES DE MELLO, J.C.C.B.; GOMES, E.G.
(2005) ISYDS – Integrated System for Decision Support (SIAD – Sistema Integrado de
Apoio a Decisão): a software package for data envelopment analysis models. Pesquisa
Operacional, v. 25, n. 3, p. 493-503.
BANKER, R.D.; CHARNES, A.; COOPER, W.W. (1984) Some models for estimating
technical and scale inefficiencies in Data Envelopment Analysis. Management
Science, v. 30, n. 9, p. 1078-1092.
PORTER, M. (1990) The Competitive Advantage of Nations. New York: The Free Press.
Abstract
Considering the amount of resources needed by port enterprises, it is important to
evaluate the level of infrastructure utilization at port terminals to help managers
make decisions on improving performance or investing in additional capacity. In this
line, this paper analyses the efficiency of resource use at public port terminals
located in the state of Espírito Santo, Brazil, in order to obtain inputs for their
planning and development. To this end, a performance comparison between regional
terminals was made using DEA modelling and the DMU homogenization criteria
presented by Bertoloto (2010). The results indicate potential to increase cargo
handling and can contribute to management decisions at the operational and
strategic levels.
Keywords: Port efficiency; Port infrastructure; DEA.
Introduction
According to the Secretariat of Ports (SEP, 2011), the Brazilian port sector handles
approximately 700 million tons of goods per year, which underscores its importance
in major trade operations (IPEA, 2010; SEP, 2011).
In the context of Brazilian international trade, the port complex located in the state of
Espírito Santo has a prominent position (ESPÍRITO SANTO, 2006). In 2010, the ports
and terminals that belong to that complex moved around 171.46 million tons of cargo
(SINDIEX, 2011), about 24% of the total flow in the country.
Given its importance, the development of the regional economy is closely linked to
the strategies of port activities, and to support existing demand through public
terminals it is indispensable to increase efficiency in the use of the available
infrastructure (ESPÍRITO SANTO, 2006).
As defined in Brazilian port-sector legislation, port facilities can be classified as private
or public, and public facilities can be operated by public or private companies
through prior bidding and agreement (BRASIL, 1993; IPEA, 2009; IPEA, 2010).
Among the elements that ensure port competitiveness are the berths and their depth,
both determinants of the size of the ships that can dock at the port, which also determine the
In this sense, the analysis of efficiency, defined as the ratio between what is produced
and what could be produced with the same resources (SOARES DE MELLO et al.,
2005), can be useful to identify efforts to improve port performance. One of the
best-known and most widely used methods to obtain benchmarks for port
infrastructure is Data Envelopment Analysis (FONTES AND SOARES DE MELLO, 2006;
COSTA, 2007; PIRES et al., 2009; BERTOLOTO, 2010).
Data Envelopment Analysis (DEA) is a technique for analysis and diagnosis based
on mathematical programming, developed by Charnes, Cooper and Rhodes (1978),
that uses input and output data and production function theory to estimate the
efficiency frontier of a set of decision-making units (SOARES DE MELLO et al., 2003;
BLONINGEN AND WILSON, 2008; WANKE, 2009a; BERTOLOTO, 2010).
To analyze the efficiency of infrastructure utilization at public terminals located in
Espírito Santo and to obtain information for their planning and development, this
paper presents a DEA study of nine ports and terminals with similar location and
connectivity to other transport modes, based on the maximum draft allowed, the total
berth length and the total amount of cargo handled. According to the different forms
of exploitation to which they are subjected, these ports and terminals were classified
into three groups (Direct Administration, Rented and Private), to which the
compensation technique for non-homogeneous units presented by Bertoloto (2010)
was applied. The results indicate that there is potential to increase cargo handling at
public terminals and suggest that the specialization and form of operation of
terminals and ports influence their efficiency.
The performance measures obtained can help to identify the level of utilization of
ports and terminals, trace the causes of inefficiencies, define the best form of
exploitation and support decisions on berth specialization.
Methods
This study combines theory and practice by applying a Data Envelopment Analysis
model to a set of ports and terminals located in the state of Espírito Santo, Brazil, to
evaluate the efficiency of infrastructure utilization at local public terminals.
The DEA model chosen was BCC (BANKER et al., 1984), input-oriented.
Nine ports or terminals were analyzed over the period between 2008 and 2009.
The maximum draft allowed and the total berth length, both measured in meters,
were defined as inputs, and the total amount of cargo handled quarterly, measured
in tons, as the output (BERTOLOTO, 2010). Data were obtained from the analyzed
ports and terminals and from the local Port Authority.
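The input-oriented BCC model used here contracts a terminal's inputs (draft and berth length) as far as possible while keeping its cargo output attainable. A minimal sketch with scipy.optimize.linprog follows; the four terminals and their draft/berth/cargo figures are hypothetical, not the nine units of the study.

```python
# Sketch of the input-oriented BCC (VRS) DEA envelopment model via
# scipy.optimize.linprog. Terminal data below are illustrative only.
import numpy as np
from scipy.optimize import linprog

def bcc_input_efficiency(X, Y, j0):
    """X: (m, n) inputs, Y: (s, n) outputs, j0: DMU under evaluation.
    Returns theta in (0, 1]; theta = 1 means BCC-efficient."""
    m, n = X.shape
    s = Y.shape[0]
    # decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta
    c = np.zeros(n + 1)
    c[0] = 1.0
    # input constraints: sum_j lambda_j * x_ij - theta * x_i,j0 <= 0
    A_in = np.hstack([-X[:, [j0]], X])
    b_in = np.zeros(m)
    # output constraints: y_r,j0 - sum_j lambda_j * y_rj <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, j0]
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([b_in, b_out])
    # BCC convexity constraint: sum_j lambda_j = 1
    A_eq = np.hstack([[0.0], np.ones(n)]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# hypothetical terminals: inputs = max draft (m), berth length (m);
# output = quarterly cargo handled (thousand tons)
X = np.array([[12., 14., 10., 15.],
              [600., 900., 500., 1100.]])
Y = np.array([[800., 1200., 500., 1600.]])
for j in range(4):
    print(f"Terminal {j}: theta = {bcc_input_efficiency(X, Y, j):.3f}")
```

A theta below 1 indicates that the same cargo volume could, in principle, be handled with proportionally less infrastructure, which is the kind of signal the study uses to flag under-utilized terminals.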
Conclusions
Based on data on maximum draft allowed, berth length and total cargo handled for a
group of regional ports and terminals, an analysis of the efficiency of infrastructure
utilization at public terminals located in the state of Espírito Santo, Brazil, was carried
out.
Ports and terminals were divided into three groups according to the form of
exploitation to which they are subjected (Direct Administration, Rented and Private),
to which the DEA mathematical model was applied, using the compensation method
for non-homogeneous units presented by Bertoloto (2010).
In the analyzed universe, the average performance of private ports is higher than that
of public terminals (Direct Administration and Rented units); within the latter group,
non-specialized terminals presented the highest percentage of DMUs with
inefficiencies below 50.00%. These observations suggest that specialization influences
terminal efficiency, as already stated by Wanke (2009b).
The results of the study contribute both to operational management, assisting in
the identification of the determinants of low performance, and to strategic
management, as they provide important information for decisions on expansion, the
direction of port activities and opportunities for berth specialization.
References
Banker, R. D.; Charnes, A.; Cooper, W.W. (1984) Some models for estimating technical
and scale inefficiencies in data envelopment analysis. Management Science 30:
1078-1092.
Brasil (1993) Lei 8.630, de 25 de fevereiro de 1993. Dispõe sobre o regime jurídico da
exploração dos portos organizados e das instalações portuárias e dá outras
providências. Available from: <www.planalto.gov.br/ccivil_03/LEIS/l8630.htm>,
retrieved April 22, 2011.
Charnes, A.; Cooper, W. W.; Rhodes, E. (1978) Measuring the efficiency of decision-
making units. European Journal of Operational Research, v. 2, p. 429-444.
Espírito Santo (2006) Plano de desenvolvimento Espírito Santo 2025: nota técnica:
desenvolvimento da logística e dos transportes no Espírito Santo. Macroplan, Espírito
Santo.
Meza, L. A.; Neto, L. B.; Soares de Mello, J. C. C. B.; Gomes, E. G. (2005) ISYDS –
Integrated System for Decision Support (SIAD – Sistema Integrado de Apoio a
Decisão): a software package for data envelopment analysis models. Pesquisa
Operacional, v. 25, n. 3, p. 493-503.
Soares de Mello, J. C. C. B.; Meza, L. A.; Gomes, E. G.; Neto, L. B. (2005) Curso de
Análise Envoltória de Dados. In: XXXVII Simpósio Brasileiro de Pesquisa Operacional –
SBPO 2005, Gramado, Anais do SBPO 2005.
Wanke, P. F. (2009a) Infraestrutura Portuária. In: Wanke, P. F.; Silveira, R. V.; Barros,
F. G. (eds) Introdução ao Planejamento da Infraestrutura e Operações Portuárias:
Aplicações de Pesquisa Operacional. Atlas, São Paulo.
Abstract
This paper deals with the efficiency evaluation and analysis of 44 Third-Party Logistics
(3PL) providers working in Brazil by means of Data Envelopment Analysis (DEA).
Since the analyzed companies have diverse sizes, the DEA model considered
variable returns to scale and was oriented to maximize the outputs of the 3PLs. The
use of variable selection techniques was critical in this research to reach a subset of
variables with greater representativeness. As the main practical result of this study, it
was possible to identify the relative efficiency of 3PLs with national coverage and
their returns to scale. Furthermore, the results help in making investment decisions
based on the enterprises' capacity, considering the expected return of each situation.
Introduction
The process of partial or total outsourcing of logistics operations is characterized by
using outside companies to perform typical Third-Party Logistics (3PL) provider
functions, which traditionally were performed by the companies themselves
(Sohail and Sohal, 2003; Kayakutlu and Buyukozkan, 2011). This trend is generally
found among organizations that seek, by outsourcing nonessential activities, to
reduce internal costs and raise service levels through greater supply chain
management efficiency (Min and Joo, 2006; Seth et al., 2006).
In line with this worldwide phenomenon, fast growth of logistics activities has been
observed in Brazil, which has led companies to reach new competitive levels in
several economic dimensions (Wanke and Affonso, 2011). However, even with
accelerated growth and improvements in the national supply chain, Brazil still shows
low efficiency when compared with countries of medium or high development
(Wanke and Fleury, 2006).
Methodology
The research was conducted with data from the special issue Operadores Logísticos
(2011), published annually by the Brazilian magazine Tecnologística, which profiles
the main companies in the Third-Party Logistics (3PL) provider market in Brazil. The
analysis covers companies working in Brazil and businesses of different sizes.
The analysis took into account four input variables: i) number of employees; ii) total
area used for storage; iii) number of warehouses owned by the companies; and
iv) number of warehouses within client facilities. As outputs, the following variables
were considered: i) number of customers under contract; ii) the company's operating
revenue; iii) total volume of managed products; and iv) revenue growth as compared
with the previous year.
According to Dyson et al. (2001), the set of Decision Making Units (DMUs) must be
homogeneous with regard to the activities they perform: the units must run similar
tasks with the same goals and present the same inputs and outputs, varying only in
the intensity of these variables. In other words, the DEA methodology requires
common variables among the 3PLs in order to compare them, making it necessary to
obtain a list of organizations limited by data availability. In that sense, a sample of
44 3PLs was selected.
Min  Σ_{i=1}^{m} v_i x_{i0} + v_0

Subject to:

Σ_{r=1}^{s} u_r y_{r0} = 1                                                (1)

Σ_{r=1}^{s} u_r y_{rj} − v_0 − Σ_{i=1}^{m} v_i x_{ij} ≤ 0,   j = 1, …, n,

v_i, u_r ≥ 0;   r = 1, …, s;   i = 1, …, m;   v_0 free.
Here y_rj and x_ij, all positive, are the known outputs and inputs of a set of
j = 1, …, n organizations. The data y_rj and x_ij are constants and will usually be
observations from past decisions on inputs and the outputs that resulted from them.
The variables v_i, u_r ≥ 0 are weights to be determined by the solution of this
problem, using the data of all the DMUs in the reference set. Note the presence of
the variable v_0, which represents the returns to scale of the analyzed unit '0',
determining whether its operations are conducted with increasing, constant or
decreasing returns to scale. Thus, if v_0 > 0, returns to scale are decreasing; if
v_0 = 0, they are constant; and if v_0 < 0, they are increasing.
The efficiency measure of any DMU is obtained as the inverse of the minimum of the
weighted inputs, evaluated relative to the other units in the set under the most
favorable weighting the constraints allow. Thus, the weights are objectively
determined so as to obtain a scalar measure of efficiency in every case. The
constraints ensure that the ratio of weighted outputs to weighted inputs for every
DMU is less than or equal to unity.
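Model (1) can be solved directly with any linear-programming solver. The sketch below uses scipy.optimize.linprog; the four providers and their figures are hypothetical, not the 44 3PLs of the study. The optimal objective value is 1 for an efficient DMU and exceeds 1 otherwise, and the sign of v_0 in the solution indicates the returns to scale as described above.

```python
# Sketch of the output-oriented BCC multiplier model (1) solved with
# scipy.optimize.linprog. The four 3PLs below are hypothetical examples.
import numpy as np
from scipy.optimize import linprog

def bcc_multiplier(X, Y, j0):
    """Solve model (1) for DMU j0. Variables: [v0, v_1..v_m, u_1..u_s].
    Returns (objective, v0); objective = 1 iff DMU j0 is BCC-efficient."""
    m, n = X.shape
    s = Y.shape[0]
    # objective: min v0 + sum_i v_i * x_i,j0
    c = np.concatenate([[1.0], X[:, j0], np.zeros(s)])
    # normalization constraint: sum_r u_r * y_r,j0 = 1
    A_eq = np.concatenate([[0.0], np.zeros(m), Y[:, j0]]).reshape(1, -1)
    # sum_r u_r * y_rj - v0 - sum_i v_i * x_ij <= 0 for every DMU j
    A_ub = np.hstack([-np.ones((n, 1)), -X.T, Y.T])
    b_ub = np.zeros(n)
    bounds = [(None, None)] + [(0, None)] * (m + s)   # v0 is free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=bounds)
    return res.fun, res.x[0]

# hypothetical 3PLs: inputs = employees, warehouses; output = revenue
X = np.array([[120., 300., 150., 400.],
              [4., 9., 5., 12.]])
Y = np.array([[20., 55., 22., 70.]])
for j in range(4):
    obj, v0 = bcc_multiplier(X, Y, j)
    print(f"3PL {j}: objective = {obj:.3f}, v0 = {v0:+.3f}")
```

By LP duality, the objective here equals the output expansion factor of the envelopment model, so efficiency can be read off as its inverse.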
Efficiency scores of the 44 3PLs: average 75.74%, standard deviation 28.05,
minimum 13.14%.

Efficiency range    Number of DMUs
100%                      20
99%–70%                    7
69%–40%                    9
Low (≤39%)                 8

Returns to scale    Total    Efficient    Inefficient
Increasing             11        5             6
Constant                0        0             0
Decreasing             33       15            18
REFERENCES
Adler, N. et al. (2002), “Review of ranking methods in the data envelopment analysis
context”, European Journal of Operational Research, Vol. 140, No. 2, pp. 249-265.
Banker, R.D. et al. (1984), “Some models for estimating technical and scale
inefficiencies in data envelopment analysis”, Management Science, Vol. 30, No. 9, pp.
1078-1092.
Cook, W.D., Seiford, L.M. (2009), “Data envelopment analysis (DEA): Thirty years on”,
European Journal of Operational Research, Vol. 192, No. 1, pp. 1-17.
Cooper, W. et al. (2006), Introduction to Data Envelopment Analysis and Its Uses: With
DEA-Solver Software and References, Springer, New York, NY.
Dyson, R.G. et al. (2001), “Pitfalls and protocols in DEA”, European Journal of
Operational Research, Vol. 132, No. 2, pp. 245-259.
Hamdan, A., Rogers, K.J.J. (2008), “Evaluating the efficiency of 3PL logistics
operations”, International Journal of Production Economics, Vol. 113, No. 1, pp. 235-
244.
Koster, M.B.M. et al. (2009), “On using DEA for benchmarking container terminals”,
International Journal of Operations & Production Management, Vol. 29, No. 11, pp.
1140-1155.
Liu, B., Fu, S. (2009), “Efficiency Measurement of Logistics Public Companies Basing
on the Modified DEA Model”, in International Conference on Computational
Intelligence and Security, Beijing, 2009, IEEE, Washington, pp. 601-605.
Sohail, M.S., Sohal, A.S. (2003), “The use of third party logistics services: a Malaysian
perspective”, Technovation, Vol. 23, No. 5, pp. 401-408.
Seth, N. et al. (2006), “A conceptual model for quality of service in the supply chain”,
International Journal of Physical Distribution & Logistics Management, Vol. 36, No. 7,
pp. 547-575.
Wanke, P.F., Fleury, P.F. (2006), “Transporte de cargas no Brasil: estudo exploratório
das principais variáveis relacionadas aos diferentes modais e às suas estruturas de
custos”, in Negri, J.A., Kubota, L.C. Estrutura e dinâmica do setor de serviços no Brasil,
IPEA, Brasília, pp 409-464.