Proceedings of the International Conference on Mathematical and Computer Sciences
Jatinangor, October 23rd-24th, 2013
ISBN: 978-602-19590-4-6
Proceedings of
INTERNATIONAL CONFERENCE ON
MATHEMATICAL AND COMPUTER SCIENCES
PREFACE
This event is a forum for mathematicians and computer scientists to discuss and exchange information and knowledge in their areas of interest. It aims to promote activities in research, development, and application not only in mathematics and the computer sciences, but also in all areas related to those two fields.
These proceedings contain selected papers from the International Conference on Mathematical and Computer Sciences (ICMCS) 2013. ICMCS 2013 is the inaugural international event organized by the Mathematics Department, Faculty of Mathematics and Natural Sciences, University of Padjadjaran, Indonesia.
In these proceedings, readers can find the accepted papers organized into three track sections based on research interest: (1) Mathematics, (2) Applied Mathematics, and (3) Computer Sciences and Informatics.
We would like to express our gratitude to all of the keynote and invited speakers:
Prof. Dr. M. Ansjar (Indonesia)
Assoc. Prof. Dr. Q. J. Khan (Oman)
Prof. Dr. Ismail Bin Mohd (Malaysia)
Prof. Dr. rer. nat. Dedi Rosadi (Indonesia)
Prof. Dr. T. Basarudin (Indonesia)
Assoc. Prof. Abdul Thalib Bin Bon (Malaysia)
Prof. Dr. Asep K. Supriatna (Indonesia)
We would also like to express our gratitude to all of the technical committee members who have given their efforts to support this conference.
Finally, we would like to thank all of the authors and participants of ICMCS 2013 for their contributions. We look forward to your participation in the next ICMCS.
Editorial Team
PROCEEDINGS
EDITORS
PROCEEDINGS
REVIEWERS
PROCEEDINGS
SCIENTIFIC COMMITTEE
PROCEEDINGS
ORGANIZING COMMITTEE
TABLE OF CONTENTS
PREFACE ................................................................................................................................ iii
EDITORS .................................................................................................................................. iv
REVIEWERS ............................................................................................................................. v
A Noble Great Hope for Future Indonesian Mathematicians, Muhammad ANSJAR ............ 1
Teaching Quotient Group Using GAP, Ema CARNIA, Isah AISAH &
Sisilia SYLVIANI ..................................................................................................................... 60
Network Flows and Integer Programming Models for The Two Commodities
Problem, Lesmana E. .............................................................................................................. 77
A Property of the z⁻¹Fm[[z⁻¹]] Subspace, Isah AISAH, Sisilia SYLVIANI ............................. 119
Controlling Robotic Arm Using a Face, Asep SHOLAHUDDIN, Setiawan HADI ................ 202
Image Guided Biopsies For Prostate Cancer, Bambang Krismono TRIWIJOYO ............ 214
Mining Co-occurrence Crime Type Patterns for Spatial Crime Data, Arief F HUDA,
Ionia VERITAWATI ........................................................................................................... 267
KEYNOTE SPEAKER
Abstract: Mathematics has had strong interactions with human life since ancient times all over the world. Mathematics has also contributed to developing the human mind. Mathematics is never absent from the efforts to develop new science, technology, and engineering to improve the quality of human life. Besides the contribution of the body of mathematical knowledge to the development of science, technology, and engineering, there are values in mathematics which are worth adopting for a pleasant and peaceful life in a community, or in a country.
It is fair to imagine that at one time Indonesia, with such a great population, will contribute a significant number of mathematicians actively participating with other scientists and engineers in developing modern science and technology in this country, acknowledged also by other modern countries.
The dream, better called A Noble Hope, will come true if Indonesian mathematicians start to work today, perhaps starting with a small-scale, well-calculated action and slowly but surely expanding widely. It may take decades to come, but it might also be sooner.
There are two main programs that could be followed.
Program one is about developing mathematics graduate research. Firstly, it is necessary to strengthen and improve the mathematics graduate and undergraduate programs in all departments. These activities should be along the line of, or parallel to, the existing government programs, besides proposing new programs.
Program two concerns all levels of pre-university mathematics education. Mathematicians should establish contacts with groups of mathematics teachers. Through these contacts, mathematicians help the teachers to master and correctly understand the concepts they are teaching, make them accustomed to the way of thinking and reasoning in mathematics, and adopt the values in mathematics. This will enable the teachers to make their students also understand correctly the concepts they are learning, and to introduce thinking and reasoning in mathematics slowly. They could also make the students familiar with the values in mathematics. However, this should be done within whatever curriculum is being used and whatever ways of teaching are being applied.
This is purely to improve and correct the mathematical background of the teachers, so that they can provide the students with a proper and correct understanding of the mathematical concepts they are learning, while staying free from interfering with teaching practice. The most expected result is the mass improvement of pre-university mathematics education, however slow it might be. This also guarantees the possible continuation of the first program.
Meanwhile, mathematicians should always give input to the government and the community about the correct mastery of mathematics, should welcome any invitation to improve mathematics education, and should participate actively.
A strong message behind these programs is the responsibility of each Indonesian mathematician for improving the quality of pre-university mathematics education.
Keywords: mathematics and human life, mathematics and the human mind, mathematics and science and technology, a noble hope, values in mathematics, improving the quality of pre-university mathematics education.
1. Introduction.
The main desire of Indonesians, as of the people of all nations, is to live in prosperity and be respected among the nations. Continuous effort by all citizens, generation by generation, is the main key to achieving this goal. High-quality education at all levels is the only basic and effective prescription to prepare each generation to hold this responsibility.
Mathematics has had interactions with human life since the birth of mathematics, whose date remains unknown to this day, and it will continue into the future. Mathematics has contributed to human life, culture, and humanity. On the other hand, the demand to fulfill the needs of human life has triggered various developments in mathematics.
The pyramids of ancient Egypt (around 2500 B.C.) show that the ancient Egyptians had been using some mathematical concepts known today. Restoring land boundaries covered by the mud brought by the annual flood of the Nile might have triggered the formulation of the basic concepts of current geometry, whatever that formulation was. However, this is far from the estimated date of the written ancient Egyptian mathematics known as the Moscow Mathematical Papyrus and the Rhind Papyrus. Both papyri, discovered in 1858, date from about 1700 B.C. and contain problems and their solutions. It is most likely that the problems relate to the work faced by the people as far back as 3500 B.C.; Egyptian mathematics must have existed since that time.
At the same time, mathematics also developed in Babylonia. Babylonian mathematics related not only to astronomy, as in Egypt, but also to various daily activities such as counting, trading, etc. The existence of dams, canals, irrigation, and other engineering works in Babylonia also
suggested that they had been using mathematics, although possibly in much simpler versions. It is probably more precise to say that engineering work created mathematics in Babylonia.
In 332 B.C., Egyptian and Babylonian mathematics merged with Greek mathematics, after Alexander the Great conquered Egypt and Babylonia. Since then, mathematics belonged to the Greeks and developed until about the year 600.
Meanwhile, mathematics also grew and developed independently in China from 1300 B.C.; the first contact with the rest of the world is estimated at around 400 B.C. Mathematics in China developed in response to the needs of trading, government administration, architecture, the calendar, etc.
However, mathematics as an organized, independent, and reasoned discipline was first introduced in the Classical Greek period, from 600 to 300 B.C. The Greeks made mathematics abstract, so that an idea is processed only by the human mind. They insisted on deductive proof; this is one of the great contributions of Greek mathematics. The Greeks of the Classical period, followed by those of the Alexandrian period, also created many other foundations of current mathematics. Euclid created the geometry known as Euclidean geometry in the Alexandrian period.
The interaction between mathematics and real phenomena, as introduced in Greek mathematics, is another vital contribution of the Greeks. The Greeks identified mathematics as an abstraction of the physical world, and saw in it the ultimate truth about the structure and design of the universe. This means they considered mathematical concepts the abstract form of nature. The phrase 'mathematics is the essence of nature's reality' appeared as a doctrine. Mathematical equations express various real phenomena, a practice familiar today in the form of mathematical models of the respective phenomena. The Greeks also identified mathematics with music. They even considered mathematics an art; they felt and saw beauty, harmony, simplicity, clarity, and orderliness in mathematics.
The vitality of Greek mathematical activity started to decline in the Christian era, especially after Alexandrian Greece in North Africa was defeated by the Roman Empire. The situation became worse when the Arab kingdom conquered Alexandria in 640. However, a century later the Arab kingdom opened its doors and invited Greek and Persian scientists to work in the kingdom. Greek mathematics started blooming again as Arab mathematics. The mathematics community in the Arab world translated and completed the former work of Greek mathematicians; this later became the source for European mathematics, replacing the original manuscripts missing in that period. Mathematics was practiced in the Arab world through astronomy, to determine the praying and fasting calendar, the direction of the qibla for praying, etc.
Outside of Greece and Italy, mathematics started to develop in Europe only around the 12th century, after churches were established. In church circles, learning mathematics was relatively important: with its emphasis on deductive reasoning, learning mathematics was considered an exercise for debating and arguing, abilities a priest needs in theology and in spreading the religion. Besides, they considered arithmetic a science of number applied to music, geometry a study of stationary objects, and astronomy a study of moving objects.
In the period 1400–1600, the humanists studied abstract mathematics together with physics, architecture, and other sciences to support the development of their works of art. They started to use perspective in the fine arts as a direct involvement of mathematics. They wrote various books on mathematics for the arts, some of which could be categorized as mathematics books.
At the end of the 16th century, the role of mathematics in the sciences, especially in astronomy, was increasing. Copernicus and Kepler strongly believed in the laws of astronomy and mechanics obtained through mathematics. The heliocentric concept replaced the geocentric concept in astronomy. At the beginning of the 17th century, namely in 1610, Galileo wrote a famous statement:
"Philosophy [nature] is written in the great book whichever lies before our eyes – I mean the universe – but we cannot understand it if we do not first learn the language and grasp the symbols in which it is written. The book is written in the mathematical language, and the symbols are triangles, circles and other geometrical figures, without whose help it is impossible to comprehend a single word of it; without which one wanders in vain through a dark labyrinth."
This statement agrees with current perceptions. A mathematical model is a mathematical representation of nature or of other real phenomena. We can explore nature, as well as other real phenomena, and
solve their problems mathematically through the respective mathematical models. Descartes even said that the essence of science is mathematics.
Newton and Leibniz created calculus separately in the 17th century; it is considered one of the greatest creations in mathematics, next to Euclidean geometry. Work in calculus to solve physical problems led to the creation of ordinary differential equations, extended later to partial differential equations. These are among the most powerful mathematical models for solving various real-world problems. It began with Newton's law of motion.
The interaction between mathematics and various activities for the benefit of human life has increased in modern times through its applications in science, technology, and engineering, as well as in art and the social sciences. At one time, physics needed a particular function with properties considered impossible at the time. The function, familiar today as the Dirac-δ function, is impossible as a conventional function: it is a discontinuous function which is zero along the real axis except at the origin, yet integrable along the whole axis with the value 1. To back up the existence of such a function, the theory of generalized functions was created. This function opened a new era in quantum mechanics in modern physics, which has provided many benefits for human life. The application of mathematics to win World War II was the starting point of operational research, which has many applications supporting activities related to better living. The genetic algorithm is a useful method in engineering; it is based on an abstract mathematical expression of the theory of genetics in biology. The finite element method is another powerful method in structural engineering. Its formulation is based on physical views of a structure; through abstract mathematical formulation, the method is also applicable as a powerful method in fluid dynamics.
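The defining property of the Dirac-δ described above (zero away from the origin, yet integrating to 1) can be illustrated numerically. The sketch below is our own illustration, not part of the paper: it approximates δ by an ever-narrower Gaussian and checks that the integral stays at 1.

```python
import math

def gaussian(x, eps):
    """A narrow Gaussian that approximates the Dirac delta as eps -> 0."""
    return math.exp(-(x / eps) ** 2) / (eps * math.sqrt(math.pi))

def integrate(f, a, b, n=100000):
    """Simple midpoint-rule quadrature on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# As eps shrinks, the peak narrows and grows, but the area stays 1.
for eps in (0.1, 0.01, 0.001):
    print(eps, round(integrate(lambda x: gaussian(x, eps), -1.0, 1.0), 6))
```

Each printed integral is close to 1, while the peak value 1/(ε√π) grows without bound, mirroring the informal description of δ.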
Mathematics has also played a significant role in creating and developing new sciences and technologies such as the nano sciences, nanotechnology, information technology, etc., which also provide valuable contributions to human life. We must not neglect the progress of computer science, side by side with numerical analysis. Mathematical computation provides strong support to science, technology, and engineering, leading to prosperous living, as well as to mathematics itself.
When the classical Greeks introduced mathematics as an abstract concept, it meant that any idea in mathematics is the result of processes of the human mind. The Greek mathematicians insisted on using deductive proof. In the last century, Freudenthal stated that mathematics is not only a body of knowledge but also a human activity; considering human activity also means the human mind. Therefore, this statement agrees with the earlier statement of Richard Courant, who called mathematics an expression of the human mind.
This statement points out that thinking in mathematics is active and dynamic thinking: strong reasons always support every statement. Thinking in mathematics aims not just for a perfect outcome, but for an outcome with aesthetic perfection. Strong intuition should accompany logical thinking. Besides paying comprehensive attention to existing situations, mathematical thinking also looks forward to building up something new. Besides aiming at the general situation, mathematics never neglects particular cases. It is common to say that mathematical thinking is logical, critical, and systematic, as well as consistent, creative, and innovative. Clearly, this is also a way of thinking of great worth in real life.
This way of thinking has built up the strong structure of mathematical theories, which keeps mathematics stable in its development. Each mathematical theory consists of a chain of neatly ordered truths known as axioms, theorems, lemmas, propositions, etc. There are no contradictions among the truths in mathematics, and they are obeyed consistently. A truth in mathematics is a relative truth: a statement becomes a new truth if it is a logical consequence of previous truths, or at least does not contradict them. An axiom, however, is an initial truth, one of the bases of other truths, accepted without proof.
It has been shown that the general wish of every nation is to live in prosperity and be respected among the world community. To achieve this, mastery of science, technology, and engineering is imperative for every nation. Meanwhile, mathematics has played significant roles in the development of science and technology, has interacted with real life since early history, and has contributed to the development of the human mind over a long history.
Therefore, it is a great hope, or probably a noble great hope, to see that at one time Indonesia, with such an enormous population, will contribute a significant number of mathematicians directly participating in the modern development of science, technology, and engineering in this country. One may say that this is only a nice dream. However, it is a dream that may come true, although frankly it is not easy and may take a very, very long time. It needs a very strong will, followed by continuous, strong, big efforts and tireless work over a long time. It may take decades to come, but maybe less. This is a
challenge for Indonesian mathematicians; a challenge to reach what one may call A Noble Great Hope.
If there is a will, this effort has to start by improving and intensifying research activities, improving undergraduate and graduate mathematics programs, and, most importantly, improving pre-university mathematics teaching at all levels.
This may sound like an old song, a song that so far nobody can sing. However, there are roles for individual mathematicians: good mathematicians with graduate or undergraduate degrees. They could do something along the lines of government policy and government programs and provide indirect but valuable support. These roles should be played continuously, gradually more intensively, and this will benefit our education from time to time.
4.1. A Noble Great Hope program 1: Developing mathematics graduate research
This program should start with standardizing the mathematical knowledge and mathematical abilities among the mathematicians in each mathematics department in every university, as a prerequisite for intensifying research programs.
The existing research activities should be pushed more intensively, and, whenever possible, new well-planned proposals should be designed. Whenever possible, try to start collaborative or joint research with colleagues in any branch of science or engineering. Mathematicians could contribute, for example, through modeling, identifying the mathematical aspects of a problem, or introducing mathematical tools and computation to solve the problem.
To make this more feasible in the long range, the undergraduate and graduate mathematics curricula should allow, even encourage, students to take one or two courses in other branches of science or engineering. Mathematics departments should establish cooperation and good relations with the related departments.
If this program runs as expected, we hope that before very long a small group of mathematicians, engineers, or other scientists will come out with respectable results. This group, as an embryo, should develop in quality and quantity, and several other embryos should grow in the near future.
In this case, mathematicians work completely within the framework of government programs, with some innovations.
4.2. A Noble Great Hope program 2: Improving mathematics in pre-university education
The weakness of mathematics in current pre-university education is a reality we have to admit. This situation has weakened all efforts to develop university mathematics education.
The main factor weakening pre-university mathematics education at all levels is the teachers. There are a very limited number of teachers, and most of them are not properly equipped with the mathematical knowledge they have to teach. Many of them do not even correctly understand the concepts they have to teach. This must not happen. They must get help. They must, and need to, correctly understand the concepts they are teaching. With this help, they will be able to make their students also understand correctly, free from misconceptions.
Besides, the teachers must become used to thinking and reasoning mathematically. They should also understand some values in mathematics appropriate to real life, so that they can pass these on to their students.
This could be done if an individual or a small group of mathematicians organized regular meetings with small groups of teachers. The meetings would be devoted entirely to the correct understanding of the concepts to be taught, to thinking and reasoning mathematically, and to understanding the values in mathematics. So it has nothing to do with the curriculum or the way of teaching being adopted.
The activities should extend to many more groups; however, the old groups must be closely monitored. The emphasis on small groups is only for reasons of effectiveness.
This is a contribution of Indonesian mathematicians to improving pre-university mathematics education, without interfering with government programs.
Meanwhile, mathematicians should provide regular information on mathematics to the government and society, for instance comments on school mathematics books, popular clarifications about mathematics, etc.
Improving pre-university mathematics education should not be left to the government alone. Mathematicians have an obligation to play a role outside government programs.
5. Concluding remark.
This paper should have contained more analysis, especially of pre-university mathematics education. The same holds for the Noble Great Hope programs 1 and 2; more detail should have been outlined. But my illness restricts me from working much. Therefore, in program 2, everybody should arrange the activity individually.
I thank and highly appreciate the understanding and patience of the committee.
Finally, there are still small things that can be drawn from this short paper. I apologize for the imperfections of this paper due to my illness.
INVITED SPEAKERS
Abstract
In this paper, we propose an idea, as well as a method, for managing the convergence of Newton's method when its iteration process encounters a local extremum. The idea builds the osculating circle at a local extremum and uses its radius, known as the radius of curvature, as an offset from the local extremum: the sum of the local extremum and the radius of curvature at that extremum is taken as the initial guess for finding a root close to that extremum. Several examples demonstrating that our idea succeeds in fulfilling the aim of this paper will also be given.
1. Introduction
One of the most frequently occurring problems in scientific work is to find the roots of equations of the form

f(x) = 0.  (1)

Iterative procedures for solving (1) are routinely employed. Starting with the classical Newton's method, a number of root-finding methods have come to exist, each of which has its own advantages and limitations.
The Newton’s method of root finding based on the iterative formula is given by
f ( xk )
xk 1 xk . (2)
f '( xk )
Newton’s method displays a faster quadratic convergence near the root while it requires evaluation of
the function and its derivative at each step of the iteration.
However, when the derivative evaluated is zero, Newton’s method stalls ([1]).
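Formula (2) can be sketched in a few lines of Python. This fragment is our own illustration (the function names are ours, not the paper's); it includes a guard for the vanishing derivative that causes the stall just described.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k), guarding against
    a vanishing derivative, which makes the method stall."""
    x = x0
    for _ in range(max_iter):
        d = fprime(x)
        if d == 0:
            raise ZeroDivisionError("f'(x) = 0: Newton's method stalls")
        step = f(x) / d
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence within max_iter iterations")

# Example: the root of f(x) = x^2 - 2 (i.e. sqrt(2)), starting near the root.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
print(root)  # ≈ 1.4142135623731
```

Near a simple root the step sizes shrink quadratically, which is the fast convergence noted above; the exception paths correspond to the failure modes discussed next.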
Newton’s method will face several obstacles if it has low values of the derivative, the Newton
iteration offshoots away from the current point of iteration and may possible converge to a root far away
from the intended domain. For certain forms of equations, Newton’s method diverges or oscillates and
fails to converge to the desire root. We observe these obstacles by considering the function with
expression
f(x) = x³ − x² − x + 3
whose graph is given in Figure 1 ([2]). If we start the iteration at x₀, where x₀ is a fixed number in the interval (1.5, 1.6), then we obtain an infinite sequence like

x₀, x₁, x₀, x₁, …

which does not converge to x*.

Figure 1: the oscillating iterates x₀, x₁, x₀, x₁, …   Figure 2: the iteration starting from x₀ = 0.999…
If we start at x₀ = 0.999…, as shown in Figure 2, we may get an x₁ which grows without bound (exceeding the largest machine number), so the algorithm cannot proceed.
In this paper, we would like to compute all the zeroes of a function whose graph is much like that in Figure 3.
2. Curvature of a Function
Curvature is a measure of how sharply a curve bends. We would expect the curvature to be 0 for a straight line and very large for curves which bend sharply. If we move along a curve, the direction of the tangent line does not change as long as the curve is flat; its direction changes where the curve bends, and the more the curve bends, the more the direction of the tangent line changes. As we know, the movement of Newton's method's searching process depends on the tangent line at each iteration. We are thus led to the following definition and theorems, which are taken from [2].
Definition 1
Let the curve C be given by the differentiable vector function f(t) = f₁(t)i + f₂(t)j, and suppose that φ(t) denotes the direction of f′(t).
(i) The curvature of C, denoted κ(t), is the absolute value of the rate of change of the direction with respect to arc length s, that is,

κ(t) = |dφ/ds|;

note that κ(t) ≥ 0.
(ii) The radius of curvature ρ(t) is defined by

ρ(t) = 1/κ(t), if κ(t) ≠ 0.
Theorem 2
If T(t) denotes the unit tangent vector to f, then κ(t) = |dT/ds|.
Theorem 3
If C is a curve with equation y = f(x), where f is twice differentiable, then

κ = |f″(x)| / (1 + [f′(x)]²)^{3/2}.
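Theorem 3 gives a directly computable formula. The sketch below is our own (the helper names are ours): it evaluates κ and ρ = 1/κ for a curve y = f(x), and sanity-checks them on a semicircle of radius 3, whose radius of curvature is 3 at every point.

```python
import math

def curvature(fp, fpp, x):
    """Curvature of y = f(x) per Theorem 3: |f''(x)| / (1 + f'(x)^2)^(3/2)."""
    return abs(fpp(x)) / (1.0 + fp(x) ** 2) ** 1.5

def radius_of_curvature(fp, fpp, x):
    """Radius of curvature rho = 1/kappa (kappa assumed nonzero)."""
    return 1.0 / curvature(fp, fpp, x)

# Sanity check: upper half of the circle of radius 3, y = sqrt(9 - x^2).
fp = lambda x: -x / math.sqrt(9 - x * x)        # y'
fpp = lambda x: -9 / (9 - x * x) ** 1.5         # y''
print(radius_of_curvature(fp, fpp, 1.0))  # ≈ 3.0
```

Note that at a local extremum f′(x) = 0, so the formula reduces to ρ = 1/|f″(x)|, which is the quantity the method of this paper adds to the extremum.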
According to [3], when the curvature κ(t) > 0, the center of curvature lies along the direction of N(t) at distance 1/κ from the point f(t); when κ(t) < 0, the center of curvature lies along the direction −N(t) at distance 1/|κ| from f(t). In either case, the center of curvature is located at

f(t) + (1/κ(t)) N(t).

The osculating circle, when κ ≠ 0, is the circle at the center of curvature with radius 1/κ, which is called the radius of curvature. The osculating circle approximates the curve locally up to second order (the illustration is in Figure 4).
Figure 4: the osculating circle of a curve, with unit normal N.
Figure 5: initial guesses placed a radius of curvature away from a local extremum x_k*.
Basically, in order to make Newton's method converge to x*, the zero of a function f, we need an initial estimate that is close enough to x*. Therefore, we would like to find a number ρ such that x_k* + ρ becomes the best initial estimate for Newton's method. We use the radius of curvature of f(x) at x_k* as the number ρ, so that x_k* + ρ is the best initial estimate when we use
the Newton’s method to find the root of f x . We will prove that xk x is the radius of the
* *
largest interval around x* in which the application of Newton’s method to any point in x *
, x*
Definition 4 ([4])
A function f: D ⊂ R → R is Lipschitz continuous with constant γ in an interval D, written f ∈ Lip_γ(D), if for every x, y ∈ D,

|f(x) − f(y)| ≤ γ|x − y|.
For the convergence of Newton's method, we need to show that f′ ∈ Lip_γ(D). This condition has been shown in [4] through the following lemma.
Lemma 5 ([4])
If (i) f: D → R for an open interval D, and (ii) f′ ∈ Lip_γ(D), then for any x, y ∈ D,

|f(y) − f(x) − f′(x)(y − x)| ≤ (γ/2)(y − x)².
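Lemma 5's bound can be checked numerically. The fragment below is our own sketch, using f(x) = sin x, whose derivative cos x is Lipschitz continuous with constant γ = 1, so the affine-model error should never exceed (y − x)²/2.

```python
import math

# Check |f(y) - f(x) - f'(x)(y - x)| <= (gamma/2)(y - x)^2 for f = sin,
# f' = cos, gamma = 1, over a small grid of sample points.
gamma = 1.0
ok = True
for x in (-1.0, 0.0, 0.7, 2.0):
    for y in (-0.5, 0.3, 1.5, 2.5):
        lhs = abs(math.sin(y) - math.sin(x) - math.cos(x) * (y - x))
        rhs = gamma * (y - x) ** 2 / 2
        ok = ok and lhs <= rhs + 1e-15
print(ok)  # True
```

The bound is exactly the Taylor remainder estimate: the error of the tangent-line (affine) model grows at most quadratically in the distance from the expansion point, which is what drives the quadratic convergence in Theorem 6.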
For most problems, Newton's method converges q-quadratically to the root of one nonlinear equation in one unknown. We shall now state the fundamental theorem of numerical mathematics.
Theorem 6 ([4])
If (i) f: D → R for an open interval D, (ii) f′ ∈ Lip_γ(D), (iii) for some ρ > 0, |f′(x)| ≥ ρ for every x ∈ D, and (iv) f(x) = 0 has a solution x* ∈ D, then there is some η > 0 such that if |x₀ − x*| < η, the sequence {x_n} generated by Newton's method exists and converges q-quadratically to x*.
Proof
Let ρ̂ be the radius of curvature of f(x) at x_k*. Let η̂ be the radius of the largest interval around x* that is contained in D, and define

η = min{η̂, 2ρ/γ}.

We will show by induction that for n = 0, 1, 2, …, equation (4) holds and

|x_{n+1} − x*| ≤ |x_n − x*|.

Take η̂ = |x_k* + ρ̂ − x*| as the radius of the largest interval around x* ∈ D, and let x₀ = x_k* + ρ̂ be an initial point which is a lower or an upper bound of (x* − η̂, x* + η̂). The proof simply shows at each iteration that the new error |x_{n+1} − x*| is bounded by a constant times the error the affine model makes in approximating f at x*.
For n 0 , we have
x1 x x0 x
* f x0
*
x0 x
*
f x0 f x*
f ' x0 f ' x0
xk* ˆ x *
f xk* ˆ f x*
f ' ˆ xk*
x f ' x ˆ ˆ f ' x ˆ x f ' x ˆ f x ˆ f x
*
k
*
k
*
k
* *
k
*
k
*
f ' x ˆ
*
k f ' x ˆ f ' x ˆ
*
k f ' x ˆ
*
k
*
k
1 f x f x ˆ x f ' x ˆ x f ' x ˆ ˆ f ' x
* * * * * * *
ˆ
f ' x ˆ
* k k k k k
k
1 f x f x ˆ f ' x ˆ x x ˆ
* * * * *
f ' x ˆ
* k k k
k
x1 x*
1
f x* f xk* ˆ f ' xk* ˆ x
*
xk* ˆ .
f ' xk* ˆ
4. Computation Examples
In this section, we try to obtain the nearest root to an extreme point with the initial guesses x_k* + ε
and x_k* − ε, where x_k* is an extreme point and ε is the radius of curvature at this extreme point. We
use the five examples (Exp.) given in Table 1, and try to obtain a root to the right and a root to the left
of the extreme point of each function.
Exp. 2: f2(x) = x² − 2x − 3 on [−5, 3]
Exp. 3: f3(x) = x³ − x² on [−1.5, 1]
Exp. 4: f4(x) = sin(x) + sin(2x/3) on [3, 10]
Exp. 5: f5(x) = cos(x) + cos(2x) + sin(3x/5) on [9, 13]
Table 2 shows that using the initial guess x_k* ± ε, where x_k* is a local extremum of a function and ε
is the radius of curvature at that local extremum, makes Newton's method (N) converge (C) to a root
of the function closest to the local extremum. However, when the radius of curvature is too small,
the iterative process of Newton's method fails (F) to reach the expected solution; this can be seen in Exp.
5, in the column colored gray. To overcome this obstacle, we have made a modification to the radius of
curvature, which is discussed in the next section.
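The procedure of Table 2 can be sketched as follows; this assumes the quadratic f(x) = x² − 2x − 3 with extremum at x = 1 (Exp. 2 as printed here), and uses the standard curvature formula ε = (1 + f′(x)²)^(3/2) / |f″(x)|:

```python
def newton(f, fp, x, tol=1e-12, it=100):
    for _ in range(it):
        if abs(f(x)) < tol:
            break
        x -= f(x) / fp(x)
    return x

f  = lambda x: x * x - 2.0 * x - 3.0       # roots at -1 and 3
fp = lambda x: 2.0 * x - 2.0
xk = 1.0                                   # minimizer of f: f'(1) = 0
fpp = 2.0                                  # f''(x) = 2 everywhere

# radius of curvature eps = (1 + f'(xk)^2)^(3/2) / |f''(xk)| = 0.5 here
eps = (1.0 + fp(xk) ** 2) ** 1.5 / abs(fpp)

right = newton(f, fp, xk + eps)            # nearest root on the right: 3
left  = newton(f, fp, xk - eps)            # nearest root on the left: -1
```

Starting exactly at x_k would divide by f′(x_k) = 0; stepping off by the curvature radius avoids this and lands in the basin of the nearest root on each side.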
In the unfortunate case when Newton's method encounters a trial guess near such a local extremum of
f(x), it will send its solution far away from the desired solution (see Figure 6). This situation
happened in Exp. 5 of Table 2, where the radius of curvature added to the minimum point is
not large enough to bring that point to the expected root.
In detail, in Exp. 5, x_k* = 10.9598 is a minimizer of f5(x) and ε = 0.191837 is the radius of curvature at
x_k*; then x_k* ± ε is the initial guess for finding the nearest root of x_k*, on the right (x_k* + ε) and on the
left (x_k* − ε). In Table 2, a sign indicates that Exp. 5 failed to find the root on the left side of x_k*. If
we double the radius of curvature to 2ε and use x_k* − 2ε as the new initial guess, then
with that new initial guess we obtain x* = 9.93822, which is the nearest root of x_k* on the left. So
we conclude that the failure in Exp. 5 was caused by the small radius of curvature. To overcome this
obstacle, we decided to restrict the radius of curvature so that
ε ≥ r for some r ∈ (0, 1).
The modified radius of curvature can be described in Algorithm M.
Algorithm M
This simple algorithm computes ε using the data (x₀, δ, r, m), where x₀ is a local
extremum of a function, δ is a tolerance, r is a real number, and m is the maximum number of
iterations.
1. ε_1 ← (1 + f′(x₀)²)^(3/2) / |f″(x₀)|
2. ε := ε_1
3. i := 1
4. while ε_i < r and i ≤ m do
   4.1. ε_{i+1} := ε_i + ε_1
   4.2. i := i + 1
   4.3. ε := ε_i
5. return ε.
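A direct transcription of Algorithm M as reconstructed above might look like this; treating the iteration cap m as a loop bound is an assumption, since the garbled original does not show where m enters:

```python
def modified_radius(fp, fpp, x0, r=0.1, m=100):
    """Algorithm M (sketch): enlarge the radius of curvature at a local
    extremum x0 until it is at least r, by repeatedly adding the base
    radius eps_1 = (1 + f'(x0)^2)^(3/2) / |f''(x0)|."""
    eps1 = (1.0 + fp(x0) ** 2) ** 1.5 / abs(fpp(x0))
    eps, i = eps1, 1
    while eps < r and i <= m:
        eps += eps1
        i += 1
    return eps

# For f(x) = x^2 - 2x - 3 the radius at the extremum x0 = 1 is already
# 0.5 >= r, so it is returned unchanged.
eps = modified_radius(lambda x: 2.0 * x - 2.0, lambda x: 2.0, 1.0)  # 0.5
```

A function with very large |f″| at the extremum (hence a tiny curvature radius) would instead have its radius stepped up to at least r, mirroring the doubling trick used for Exp. 5.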
6. Numerical Results
In this section, we employ our method to solve some nonlinear equations. All experiments were
performed on a personal computer with AMD Dual-Core Processor E-350 1.6 GHz and 2 GB memory.
The operating system was Windows 7 Starter (32-bit) and the implementations were done in Microsoft
Visual C++ 6.0.
We used the following 20 test functions and display their approximate zeros x*.
f1(x) = sin(x) + sin(2x/3)
f2(x) = cos(x) + cos(2x) + sin(3x/5)
f3(x) = cos(x) + sin(2x/5) + cos(x/10)
f4(x) = sin(x) + sin(4x/9)
f5(x) = cos(x) + (1/2) sin(2x)
f6(x) = sin(2x) sin(x/3) + sin(2x/3)
f7(x) = Σ_{i=1}^{5} i cos((i+1)x + i), x ∈ [−10, 10]
f8(x) = sin(x) + sin(10x/3) + ln(x) − 0.84x + 3
f9(x) = exp(0.1x) + Σ_{i=1}^{5} ((i+1)x + i), x ∈ [−10, 10]
f10(x) = cos(x) + cos(x/3) − 0.84x + 3
f11(x) = Σ_{i=1}^{5} i sin((i+1)x + i), x ∈ [−10, 10]
f12(x) = sin(x) + sin(x/3) + ln(x) − 0.84x
f13(x) = Σ_{i=1}^{5} sin((i+1)x + i)
f14(x) = x² − 1
f15(x) = x² − 2x − 3
f16(x) = x³ − x²
f17(x) = 2x² − 1.05x⁴ + (1/6)x⁶ + x
f18(x) = x⁴ − 4x³ + 4x² − 0.5
f19(x) = 3x − x³
f20(x) = x⁶ − 22x⁴ + 9x² − 102
[Table 3: for each test function, the local extrema x_k*, the (possibly modified) radius of curvature ε, the value of r (0 or 0.1), and the roots x* obtained by Newton's method from the initial guesses x_k* ± ε; the function values at the computed roots are of order 10⁻¹⁴ to 10⁻¹⁷, i.e., the method converged in every case.]
Table 3 shows that the use of the radius of curvature at the extreme point makes Newton's method
always converge to the roots closest to this extreme point. A nonzero value of r indicates that the function
has a small radius of curvature at its extreme points.
7. Conclusion
In this paper, we have shown that the radius of curvature at a maximizer or minimizer can be
used as an increment to that extremum in the attempt to find the radius of convergence of
Newton's method near the said maximizer or minimizer of a function. Numerical results show that
our method succeeds in finding the desired solutions.
References
[1] T. T. Ababu, "A Two-Point Newton Method Suitable for Nonconvergent Cases and with Super-
Quadratic Convergence", Advances in Numerical Analysis, Hindawi Publishing Corporation,
Article ID 687382, http://dx.doi.org/10.1155/2013/687382, 2013.
[2] I. B. Mohd, The Width Is Unreachable, The Travel Is At The Speed Of Light, Siri Syarahan Inaugural
KUSTEM: 6 (2002), Inaugural Lecture of Prof. Dr. Ismail Bin Mohd, 14th September 2002.
[3] S. I. Grossman, Calculus Third Edition, Academic Press, 1984.
[4] J. E. Dennis and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and
Nonlinear Equations, Prentice-Hall, 1983.
Abstract: In today's globalized world, transportation is the main means for humans to
go anywhere, especially for university students. Basically, transportation means
"any device used to move an item from one location to another". Looking at our
surroundings, almost all people in this world have their own transportation. Referring to
the transportation management in Universiti Tun Hussein Onn Malaysia (UTHM),
transportation is very important for the students. For the students who live at a
residential college, the bus is their main transportation to and from class, as they
do not have any transportation of their own. It became an issue when students had to face
transportation problems. The issue of bus services arose because the current
requirements cannot be met. A comparison between the Northwest Corner Method
(NWCM) and the Vogel Approximation Method (VAM) is used to solve these
issues. Observation, interviews, and data collection were carried out on the bus service
that sends students to class from the residential colleges involved, namely the
Residential College of Perwira and the Residential College of Taman Universiti, to ensure
that the research objectives will be achieved.
1. Introduction
According to Tran and Kleiner (2005), public transportation is defined as transportation services
provided on an ongoing basis, generally and specifically to the public. It does not include school
buses, charter buses, or bus services offering sightseeing. Examples of public transportation used by
people other than buses are trains and ferries.
The study was conducted in Universiti Tun Hussein Onn Malaysia (UTHM) and involves the daily
bus transportation of students to and from class. The study involved six (6) faculties, among them the
Faculty of Mechanical and Manufacturing Engineering (FKMP), the Faculty of Civil and Environmental
Engineering (FKAAS), and the Faculty of Electrical and Electronic Engineering (FKEE). Some of the
students of these faculties are placed in the Residential College of Perwira, while
the rest are in the Residential College of Tun Dr. Ismail and the Residential College of Tun Fatimah. As for
students from the Faculty of Computer Science and Information Technology (FSKTM), the Faculty of
Technical and Vocational Education (FPTV) and the Faculty of Technology Management and Business
(FPTP), most of the students are staying at the Residential College of Perwira, and
only some of them, namely third-year and fourth-year students, are living in specific
rented houses.
Problems can be identified when the increase in the number of students who live in the
Residential College of Perwira and the Residential College of Taman Universiti causes the number of buses
provided to be unable to accommodate the students. This leads to an
existing bus being badly damaged every two months. The estimated average cost of bus maintenance per month is
about RM 500, while the maintenance cost for a bus with less than five (5) years of service is at least RM
3000. Lastly, a bus with more than five (5) years of service takes about RM 6000 and above. When this
happens, the parties involved have to spend a large amount of money repairing a broken-down bus in
order to return it to immediate use. Bus travel times also play an important role, since they
decide whether a student arrives at class sooner or later. Students often complain that the bus is always
late in sending them to class, without knowing the exact causes of the delay.
This study covers the UTHM area and includes two (2) daily bus transportation
providers, Colourplus and Sikun Jaya. Colourplus handles the journey from the
Residential College of Perwira to UTHM, while Sikun Jaya handles the journey from the Residential College of
Taman Universiti to UTHM.
The study helps to identify the problems that often occur in the daily bus transportation system and then
helps to find the best solution to prevent them from occurring again. Therefore, it is very important to come out
with the models that are inherent in the transportation system in order to propose the appropriate model
to be used by UTHM in the current situation. In addition, it is intended to facilitate the journey of the bus
drivers by studying the suitable routes for the bus drivers to use, which can save travel time and the
daily operating cost of bus transportation. For students, the benefit is that a shorter
journey time will allow them to arrive early to class, and the university will not have to bear as much
cost to add more buses in order to accommodate the number of students going to class,
particularly during peak periods.
2. Literature Review
2.1 Introduction
Transportation means the different types of transport vehicles used, whether by air, land, or water, to move
or carry goods and passengers from one place to another.
2.2 Public Transportation
Public transport is defined as a system of motorized transport such as buses, taxis, and trains used by
people in a specific area with fixed fares (Kamus Dewan Bahasa dan Pustaka, 4th Edition).
According to Sudirga (2009), after the data on supply and demand has been received from a number
of places, it is compiled into a table. Researchers should then determine the initial most feasible solution
of the transportation problem.
According to Samuel and Venkatachalapathy (2011), VAM was improved by using the total
opportunity cost (TOC) matrix, taking the cost of replacement provision into account. The total opportunity cost
matrix is obtained by adding the row opportunity cost matrix (for
each row of the actual transportation cost matrix, the smallest cost in the row is subtracted from every element in
the same row) and the column opportunity cost matrix (for each column of the
actual transportation cost matrix, the smallest cost in the column is subtracted from
every element in the same column).
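The row-plus-column construction described above can be sketched as follows; the cost matrix here is made up for illustration, not taken from the study:

```python
def toc_matrix(cost):
    """Total opportunity cost matrix: (row opportunity costs) +
    (column opportunity costs), both computed from the actual costs."""
    # subtract each row's minimum from every element of that row
    rows = [[c - min(row) for c in row] for row in cost]
    # subtract each column's minimum from every element of that column
    col_mins = [min(col) for col in zip(*cost)]
    cols = [[c - col_mins[j] for j, c in enumerate(row)] for row in cost]
    # the TOC matrix is the element-wise sum of the two
    return [[r + c for r, c in zip(rr, cc)] for rr, cc in zip(rows, cols)]

cost = [[4, 20, 8],
        [14, 20, 6],
        [9, 11, 5]]
toc = toc_matrix(cost)
```

In the modified VAM, the penalty and allocation steps are then carried out on this TOC matrix instead of the raw cost matrix.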
According to Rahardiantoro (2006), the Minimal Spanning Tree technique is a technique used to
find a way to connect all the points in a network at the minimum total distance.
The Minimal Spanning Tree problem has similarities with the shortest path problem (shortest route), but
the purpose of using this technique is to link all the nodes in a network so that the total path length is
minimized. The resulting network connects all nodes in the network at the minimum total distance.
The technical steps of the Minimal Spanning Tree are:
1. Select a node in the network in an arbitrary way.
2. Connect this node to the closest node so as to minimize the total distance.
3. Check all the nodes to determine whether there are still nodes that have not been
connected; find and connect the nodes that have not yet been connected.
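The three steps above are essentially Prim's algorithm. A minimal sketch on a toy network follows; the node names and distances (in km) are illustrative, not the UTHM route data:

```python
def prim_mst(nodes, dist):
    """Prim's algorithm: grow a spanning tree from an arbitrary node,
    always adding the cheapest edge to a not-yet-connected node."""
    connected = {nodes[0]}                 # step 1: arbitrary start node
    edges, total = [], 0.0
    while len(connected) < len(nodes):     # step 3: repeat until all joined
        # step 2: cheapest edge from a connected to an unconnected node
        u, v = min(((a, b) for a in connected for b in nodes
                    if b not in connected), key=lambda e: dist[e])
        connected.add(v)
        edges.append((u, v))
        total += dist[(u, v)]
    return edges, total

nodes = ["KK Perwira", "Susur Gajah G3", "Library"]
d = {("KK Perwira", "Susur Gajah G3"): 0.5,
     ("KK Perwira", "Library"): 0.9,
     ("Susur Gajah G3", "Library"): 0.3}
dist = {**d, **{(b, a): w for (a, b), w in d.items()}}  # symmetric
edges, total = prim_mst(nodes, dist)       # picks the 0.5 and 0.3 edges
```

On the real network, the same procedure applied to the measured segment distances yields the 5.05 km route computed later in the paper.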
According to Reeb and Leavengood (2002), the transportation problem is known as one of the most
important and successful applications of quantitative analysis to solving business problems involving the
distribution of products. In essence, it aims to minimize the cost of shipping goods from one location
to another so that the needs of each area can be filled and every vehicle operates at its leading capacity of
goods to a predetermined location.
According to Iles (2005) in his book entitled "Public Transport in Developing Countries", public
transportation is important to a broad majority who do not have private transport. The need for personal
mobility, in particular access to job opportunities, combined with low income levels, is a
common problem, and the service provided is always in demand because it is not sufficient.
3. Research Methodology
3.1 Introduction
The research methodology is the most important aspect of chapter three (3) because it discusses the
methods of the study conducted by the researchers to establish whether the study is authentic or not. It is
carefully constructed based on the guidance of available reference resources to ensure that the
data collected is systematic.
The research conducted is a case study carried out at Universiti Tun Hussein Onn Malaysia
(UTHM) using qualitative methods. This approach was chosen to enable the researcher to understand
deeply why it is necessary to build an effective transport system model to solve the existing transport
problem.
Respondents were randomly selected for observation based on the two residential colleges studied.
This refers to the collection of data made on a daily basis from one month to the next.
For the interviews, the respondents were selected from among bus supervisors and bus drivers.
Selection was based on the students who inhabit residential colleges that offer daily bus transport services,
namely the Residential College of Perwira and the Residential College of Taman Universiti. The total population
identified for the residential colleges involved is 3267 students.
Samples were selected from students, boys or girls, who use this daily bus service as their main
transportation to get to class. The main focus is on the group of first-year students because they are the
biggest users of the bus services. The study sample was taken from the two residential colleges
involved, while for the interviews, the sample selected was the group of people related to
daily bus transportation, such as the supervisors and bus drivers.
The instruments suitable for this study are the observation and
interview methods. Through observation, passenger volume data can be taken at any time and
recorded accurately and systematically, while through the interviews conducted with the respondents,
particularly the supervisors and bus drivers, the data obtained support the results of the observations,
and the interview data are recorded in detail.
Primary data are collected through two methods, which are:
The observation is a daily method carried out from one month to the next. The purpose
of this method is to identify the estimated number of students who use the bus service on a daily basis,
either at peak or at usual times. The data obtained reflect the fluctuations in the number of
passengers on the bus at any time in a month.
The researchers also used the interview technique to support the data derived from observation. The
interviews conducted are focused interviews, concentrating on the
group of respondents who are involved with the daily bus transportation services. The researcher
selected three respondents from the two daily bus transportation companies, the Colourplus-
Tunas Tinggi Pt. Ltd. Company and the Sikun Jaya Pt. Ltd. Company, to be interviewed regarding
the issue of bus transportation problems.
Secondary data are data obtained from studies that have already been done, divided into data obtained
from printed and non-printed sources. Printed resources are available through magazines, articles,
and books in the library. For non-printed sources, the researcher acquired existing reference sources
through data from the internet; the desired journals can be obtained through sites such as
Emerald, EBSCO HOST, ProQuest, Science Direct, and IEEE.
Data analysis is based on the data obtained through the observations made from one month to the next
and on a structured interview conducted with the respondents. The analysis of the data is very important
because it determines whether the study fulfills the research objectives or not. In
this study, the researchers used the Production and Operation Management (POM) software
to calculate the overall data. In addition, the researchers used the QSR NVivo 10 software to translate the
results obtained from words into statistics.
4. Data Analysis
4.1 Introduction
The data analysis discusses the calculation steps of the two models used for comparison in deciding
which model is the best, how to calculate the frequency of respondents that use the daily bus
transportation service, and how to identify the best route to save the cost and time of a bus
journey.
The analysis of the data is made based on the three (3) main objectives that need to be achieved, which are:
a) To identify the best method, in terms of capacity, to solve problems inherent in the existing
transportation model system, using either the Northwest Corner Method (NWCM)
or the Vogel Approximation Method (VAM).
b) To determine the frequency of students using the daily bus service to go to class.
c) To identify the best route to save the time of a bus trip through the Minimum Spanning
Tree technique.
A table is generated after the data is entered. It shows very clearly how the supply of available transport meets the
needs of the current student demand.
The optimal cost obtained from the analysis using the Northwest Corner Method
(NWCM) is RM 392.00.
The marginal costs arise from the analysis of the Production and Operation Management (POM)
software.
Here it can be seen how the costs incurred by bus travel are calculated by way of
the Northwest Corner Method.
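A minimal sketch of the Northwest Corner allocation itself may help: starting from the top-left cell, allocate as much as possible, then move right or down as supply or demand is exhausted. The supplies, demands, and unit costs below are made up for illustration, not the UTHM figures:

```python
def northwest_corner(supply, demand):
    """Initial feasible solution of a balanced transportation problem."""
    supply, demand = supply[:], demand[:]
    alloc = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])      # ship as much as possible
        alloc[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:                 # row exhausted: move down
            i += 1
        else:                              # column exhausted: move right
            j += 1
    return alloc

alloc = northwest_corner([60, 54, 44], [44, 54, 60])
cost = [[0, 4, 20], [4, 0, 20], [20, 4, 0]]
total = sum(a * c for ra, rc in zip(alloc, cost) for a, c in zip(ra, rc))
```

POM software then improves this initial solution iteratively (the stepping-stone or MODI steps behind the iteration tables shown below in the paper).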
4.3.5 Iteration
Iteration 2
  KK Perwira            16    44    (-90)
  KK Taman Universiti   (24)  (90)  54
  Bus Stop ATM          38    (44)  6
Iteration 3
  KK Perwira            10    44    6
  KK Taman Universiti   (-66) (0)   54
  Bus Stop ATM          44    (44)  (90)
Iteration 4
  KK Perwira            (66)  44    16
  KK Taman Universiti   10    (0)   44
  Bus Stop ATM          44    (-22) (24)
Iteration 5
  KK Perwira            (66)  (22)  60
  KK Taman Universiti   54    (22)  (0)
  Bus Stop ATM          (0)   44    (24)
The Northwest Corner Method (NWCM), analysed through the Production and Operation Management (POM)
software, has five (5) iterations in its calculation process, as shown in the table above.
The figure above shows the delivery schedule and costs for the students: from the
Perwira Residential College to the Tun Dr. Ismail/Tun Fatimah Residential College, 60 students at RM 0; from the Taman
Universiti Residential College to "Susur Gajah G3", 54 students at a cost of RM 216; from
the Taman Universiti Residential College to the Tun Dr. Ismail/Tun Fatimah Residential College,
0 students at RM 0; from the ATM Bus Stop to "Susur Gajah G3", 0 students at RM 0; and lastly,
from the ATM Bus Stop to the Library, 44 students at RM 176.
Through this list, the researcher can identify that the cost per unit of shipping from the Taman Universiti
Residential College to "Susur Gajah G3" is RM 4.00, while from the Taman Universiti Residential College
to the Tun Dr. Ismail/Tun Fatimah Residential College it is RM 20.00. Lastly, from the ATM Bus Stop
to the Library it is RM 4.00.
These data enable the researcher to compute the optimal total cost required for the movement of a bus
trip around the university.
The marginal cost analysis results from the Production and Operation Management (POM) software.
Here it can be seen how the costs incurred by bus travel are calculated.
4.4.5 Iteration
Table 4.4.5: Iteration
                        Susur Gajah G3   Library   KK TDI/TF
Iteration 1
  KK Perwira            54               (60)      6
  KK Taman Universiti   (-6)             44        10
  Bus Stop ATM          (60)             (44)      44
Iteration 2
  KK Perwira            44               (54)      16
  KK Taman Universiti   10               44        (6)
  Bus Stop ATM          (60)             (38)      44
The Vogel Approximation Method (VAM), analysed through the Production and Operation Management
(POM) software, has two (2) iterations in its calculation process, as shown in the table above.
The figure above shows the delivery schedule of the students: from the Perwira Residential
College to "Susur Gajah G3", 44 students at RM 0; from the Taman Universiti Residential College to "Susur
Gajah G3", 10 students at a cost of RM 140; from the Residential College of Taman
Universiti to the Library, 44 students at RM 0, and to the Residential College of Tun Dr. Ismail/
Tun Fatimah, 16 students at RM 320; and lastly, from the ATM Bus Stop to the Residential College of
Tun Dr. Ismail/Tun Fatimah, 44 students at RM 0.
Through this list, the researcher can identify that the cost per unit of shipping from the Perwira Residential
College to the Tun Dr. Ismail/Tun Fatimah Residential College is RM 20.00, while from the Taman Universiti
Residential College to "Susur Gajah G3" it is RM 14.00.
The researcher decided to propose that the university use the Northwest Corner Method model, because
through the use of this model the university can save cost by paying the minimum
optimal cost of RM 392 to the daily bus transportation contract company for one day of
bus operation. This is because the shipping costs using the provided transportation routes are calculated only
from the location of the Taman Universiti Residential College to "Susur Gajah G3" and from the
location of the ATM Bus Stop to the Library.
[Figure: bar chart of "No. of Students (Persons)" against "Mid Point (Hours)"; the mid points run 29.5, 89.5, …, 929.5, and the frequencies range from 0 to about 600 students.]
The graph above shows that the highest frequency is during the first hour of bus operation, from 7.00 am to 8.00 am,
with 512 students. It also shows a low frequency in the last hour of bus operation, for example from 10.00
pm to 11.00 pm.
[Figure: bar chart of the number of students against the mid point; the mid points run 329.5, 389.5, …, 929.5, and the frequencies range from 0 to about 200 students.]
The figure above shows that the highest daily frequency is recorded in the first hour of bus service
operation. This shows that during the peak hour range, between 7.00 am and 8.00 am, there were 260 students
who used the bus service to go to class, while the lowest frequency, in the sixth hour of the bus
service operation, was 49 students. The sixth hour covers 12.10 pm to 1.00
pm.
Total Distance:
= 0.3 + 0.2 + 0.5 + 0.2 + 0.6 + 0.9 + 0.2 + 0.2 + 0.05 + 0.2 + 0.5 + 0.9 + 0.3
= 5.05 km
5.1 Introduction
In this chapter, the researcher describes and discusses the findings obtained from the data analysis. The
findings come from the comparison of the models involved, namely the Northwest Corner Method
model and the Vogel Approximation Method model. The researcher also studied the frequency of
students who use the bus services daily and subsequently identified the best route to shorten the travel
distance and save cost. The researcher has also taken the opportunity to highlight some
recommendations relevant to the research topic.
5.2 Recommendations
The researcher has identified a number of recommendations that need attention, and these
recommendations require actions to be taken by the specific parties to ensure the smooth operation
of daily bus transportation for students.
The researcher suggests that each student should start their journey to class as early as possible
in the morning, as they all already know that 7.30 am until 8.10 am is the peak time. Students
may wait for the bus as early as 7.10 am to avoid the congestion.
The researchers suggest that students themselves must change their attitudes to be more disciplined and
not take some matters too easily. The research also suggests that students need not return to the
residential college or home if they only have a short time off.
Looking at the existing schedule, at certain times it is very suitable for the drivers, and it serves as a guide
that drivers should try to follow as best as possible. With a given schedule, the congestion caused
by the many students waiting for the arrival of the bus can be reduced. If the drivers comply well
with the given times, the problem of students complaining about late buses will not arise.
The researcher emphasizes the element of respect and tolerance. The researchers found that some
students complained that when they asked some of the bus drivers questions, they were scolded by the drivers.
In addition, most of the bus drivers work based on their mood or feelings. This situation can be changed if
a tolerant attitude towards each other is nurtured.
In front of the F2 examination hall, the researcher suggests that a security officer is required
to keep traffic running smoothly, so that the movement of vehicles, especially in the morning when
the staff and students want to get into the university and in the evening when the staff and students
want to go home, goes well.
The researchers suggest that the management of UTHM play an important role in preventing this matter
from happening again. An in-depth briefing on transportation should be given as early as possible
during the "Minggu Haluan Siswa" (MHS).
5.7 Conclusion
Overall, this study has achieved the three (3) objectives set out by the researcher at the start of her
research. The researcher hopes that the Northwest Corner Method (NWCM) model that she has
suggested can help UTHM solve its cost problem, and thus save budget so that UTHM can
also provide better transport facilities.
References
Baxter, P. and Jack, S. (2008). Qualitative Case Study Methodology: Study Design and
Implementation for Novice Researchers. The Qualitative Report, Vol 13(4), 544-559.
Maksud Pengangkutan. Retrieved May 2, 2012, from http://www.scribd.com/doc/46305862/Maksud-Pengangkutan.
Minimal Spanning Tree. Retrieved December 11, 2012, from http://dickyrahardi.blogspot.com/2008/05/minimal-spanning-tree.html.
Iles, R. (2005). Public Transport in Developing Countries. 1st ed. United Kingdom: Elsevier.
Reeb, J. and Leavengood, S. (2002). Transportation Problem: A Special Case for Linear Programming
Problems. Operational Research, Vol 1, 1-36.
Samuel, A. E. and Venkatachalapathy, M. (2011). Modified Vogel’s Approximation Method for Fuzzy
Transportation Problems. Applied Mathematical Sciences, Vol 5(28), 1367-1372.
Sudirga, R. S. (2009). Perbandingan Pemecahan Masalah Transportasi Antara Metode Northwest
Corner Rule Dan Stepping-Stone Method Dengan Assignment Method. Business & Management. Vol
4 (1), 29-50.
Tran, T. and Kleiner, B. H. (2005). Managing for Excellence in Public Transportation. Management
Research News, Volume 28, 154-163.
Nuruljannah Samsudin (2009). Mengkaji Kualiti Perkhidmatan Pengangkutan Bas Dan Van Dari
Perspektif Pelanggan: Sebuah Kajian Kes Di Universiti Tun Hussein Onn Malaysia. Universiti
Tun Hussein Onn Malaysia. Bachelor's Degree Thesis.
PRESENTERS
1. Introduction
Uncertainty about future events carries risk. When the risk is high or difficult to control, most people and companies prefer to shift it to an insurance company, which takes over or bears part of the risk; in return, the policyholder must pay insurance premiums. If the insured risk occurs, the insurance company must pay the claim to the insured.
In practice, the premiums collected are sometimes not balanced by the claims filed by the insureds. If too many claims are filed, the stability of the insurance company is threatened, so insurance companies need a way to deal with this. One approach is to determine the outstanding claims reserves.
Several methods can be used to determine the amount of the outstanding claims reserves. In this paper, the Bornhuetter-Ferguson method is used.
2. Methodology
Without loss of generality, assume that the data consist of incremental claims arranged in a run-off triangle. The incremental claims can be written as {S_{i,k} : i = 1, …, n ; k = 1, …, n + 1 − i}, where i is the year in which the incident occurred, called the accident year, and k is the number of periods until payment completion, called the development year.
Summing each row of the run-off triangle over consecutive development periods gives the cumulative claims, written mathematically as

C_{i,k} = ∑_{j=1}^{k} S_{i,j},  (2.1)

where C_{i,k} denotes the cumulative claims of accident year i after k development years, 1 ≤ i, k ≤ n.
The Bornhuetter-Ferguson method avoids dependence on the current amount of claims; its claims reserve is

R̂_i^BF = Û_i (1 − ẑ_{n+1−i}),  (2.2)

where

Û_i = v_i q̂_i  (2.3)

and ẑ_k ∈ [0, 1].
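As a minimal sketch of equations (2.1)-(2.3) (with made-up numbers, not the paper's data), the cumulative triangle and the Bornhuetter-Ferguson reserve can be written as:

```python
from math import fsum

def cumulative(triangle):
    """Row-wise cumulative sums of an incremental run-off triangle (eq. 2.1):
    C[i][k] = S[i][0] + ... + S[i][k]."""
    return [[fsum(row[:k + 1]) for k in range(len(row))] for row in triangle]

def bf_reserve(v_i, q_i, z):
    """Bornhuetter-Ferguson reserve (eqs. 2.2-2.3):
    U_i = v_i * q_i and R_i = U_i * (1 - z_{n+1-i})."""
    U_i = v_i * q_i
    return U_i * (1.0 - z)

# toy incremental triangle S_{i,k} (illustrative values only)
S = [[100.0, 60.0, 20.0],
     [110.0, 70.0],
     [120.0]]
print(cumulative(S))                 # [[100.0, 160.0, 180.0], [110.0, 180.0], [120.0]]
print(bf_reserve(200.0, 0.9, 0.75))  # 45.0
```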
The Bornhuetter-Ferguson method aims at constructing an estimate for q_i that does not directly depend on the cumulative claims amount C_{i,n+1−i}. The first step is to consider the ratio of the average incremental claims

m̂_k = (∑_{i=1}^{n+1−k} S_{i,k}) / (∑_{i=1}^{n+1−k} v_i)  (2.4)

of the development years k observed to date. Then a weighted average r_i of the ratios S_{i,k}/v_i and m̂_k is used, that is,

r_i = ∑_{k=1}^{n+1−i} (m̂_k / ∑_{j=1}^{n+1−i} m̂_j) · (S_{i,k}/v_i) / m̂_k = ∑_{k=1}^{n+1−i} S_{i,k} / (v_i ∑_{j=1}^{n+1−i} m̂_j) = C_{i,n+1−i} / (v_i ∑_{j=1}^{n+1−i} m̂_j).  (2.5)

Because the development year is indexed by k, the sum ∑_{j=1}^{n+1−i} m̂_j in r_i can be written with index k, giving

r_i = C_{i,n+1−i} / (v_i ∑_{k=1}^{n+1−i} m̂_k).  (2.6)

So r_i is based on the ratio of the individual claims C_{i,n+1−i}/v_i, and the paid and incurred versions are combined geometrically as

r̄_i = √(r_i^paid · r_i^incurred).  (2.7)
m̂* = m̂_1* + ⋯ + m̂_n* + m̂_{n+1}*.  (2.9)

This eventually results in the prior estimates

q̂_i = r_i* m̂*  (2.10)

for the ultimate claims ratio of accident year i, and the corresponding ultimate claims amount

Û_i = v_i q̂_i = v_i r_i* m̂*.  (2.11)
Because ζ̂_k* ≈ ζ̂_k ≈ (∑_{j=1}^{n+1−k} S_{j,k}) / (∑_{j=1}^{n+1−k} x_j), it can be assumed that

Var(ζ̂_k*) ≈ Var((∑_{j=1}^{n+1−k} S_{j,k}) / (∑_{j=1}^{n+1−k} x_j)) = s_k² / ∑_{j=1}^{n+1−k} x_j,  (2.35)

with 1 ≤ k ≤ n. Therefore Var(ζ̂_k*) is estimated by

(s.e.(ζ̂_k*))² = ŝ_k²* / ∑_{j=1}^{n+1−k} Û_j,  (2.36)

with 1 ≤ k ≤ n.
Altogether, an estimate (s.e.(ẑ_k*))² for Var(ẑ_k*) is

(s.e.(ẑ_k*))² = min((s.e.(ζ̂_1*))² + ⋯ + (s.e.(ζ̂_k*))², (s.e.(ζ̂_{k+1}*))² + ⋯ + (s.e.(ζ̂_{n+1}*))²).  (2.37)
So finally the estimator obtained for the mean square error of prediction is

msep(R̂_i^BF) = Û_i (ŝ_{n+2−i}²* + ⋯ + ŝ_{n+1}²*) + (Û_i² + (s.e.(Û_i))²)(s.e.(ẑ_{n+1−i}*))² + (s.e.(Û_i))²(1 − ẑ_{n+1−i}*)².  (2.38)

The prediction error is

PE(R̂_i^BF) = √(msep(R̂_i^BF)),  (2.39)

and the percentage prediction error is

%PE(R̂_i^BF) = PE(R̂_i^BF) / R̂_i^BF × 100%.  (2.40)
To check the significance of the difference between the estimated reserves, or alternatively to build a confidence interval for E(U_i), only the pure estimation error is needed:

(s.e.(R̂_i^BF))² = (Û_i² + (s.e.(Û_i))²)(s.e.(ẑ_{n+1−i}*))² + (s.e.(Û_i))²(1 − ẑ_{n+1−i}*)².  (2.41)

For the total reserve R = R_1 + ⋯ + R_n, an unbiased estimate of the total reserve is R̂^BF = R̂_1^BF + ⋯ + R̂_n^BF. The mean square error of prediction of the total reserve is

msep(R̂^BF) = Var(R̂^BF) + Var(R),  (2.42)

with

V̂ar(R) = ∑_{i=1}^{n} Û_i (ŝ_{n+2−i}²* + ⋯ + ŝ_{n+1}²*).  (2.43)

The estimation error Var(R̂^BF) requires more care because R̂_1^BF, …, R̂_n^BF are positively correlated,
with

Ĉov(R̂_i^BF, R̂_j^BF) = ρ̂_{i,j}^U s.e.(Û_i) s.e.(Û_j)(1 − ẑ_{n+1−i}*)(1 − ẑ_{n+1−j}*) + ρ̂_{i,j}^z s.e.(ẑ_{n+1−i}*) s.e.(ẑ_{n+1−j}*) Û_i Û_j.  (2.50)

So finally the mean square error for the prediction of the total claims reserve is

msep(R̂^BF) = Var(R̂^BF) + Var(R)
            = ∑_{i=1}^{n} Û_i (ŝ_{n+2−i}²* + ⋯ + ŝ_{n+1}²*) + ∑_{i=1}^{n} (s.e.(R̂_i^BF))² + 2 ∑_{i<j} Ĉov(R̂_i^BF, R̂_j^BF),

the total prediction error is

PE(R̂^BF) = √(msep(R̂^BF)),  (2.51)

and the percentage of total prediction error is

%PE(R̂^BF) = PE(R̂^BF) / R̂^BF × 100%.  (2.52)
The data used in this paper are taken from Mack (2006). The data are incremental claims over the 13 years from 1992 to 2004, with 13 development years.
Table 3.3 Incremental claims ratios and estimated average development patterns for incurred claims data

 k     m̂_k      m̃_k      m̂_k*    ∑m̂_k*   ζ̂_k      ẑ_k      ζ̂_k*     ẑ_k*
 1     0.0593    0.0692    0.0692   0.0692   0.0502    0.0502   0.0502   0.0502
 2     0.1844    0.1998    0.1998   0.2689   0.1449    0.1950   0.1449   0.1950
 3     0.2804    0.2752    0.2752   0.5441   0.1996    0.3946   0.1996   0.3946
 4     0.3225    0.3006    0.3006   0.8448   0.2180    0.6126   0.2180   0.6126
 5     0.2243    0.2039    0.2039   1.0487   0.1479    0.7604   0.1479   0.7604
 6     0.1157    0.1166    0.1166   1.1653   0.0845    0.8450   0.0845   0.8450
 7     0.0962    0.1258    0.1258   1.2911   0.0912    0.9362   0.0912   0.9362
 8     0.0143    0.0230    0.05     1.3411   0.0167    0.9528   0.0363   0.9724
 9     0.0206    0.0381    0.02     1.3611   0.0276    0.9805   0.0145   0.9869
 10   −0.0034   −0.0069    0.01     1.3711  −0.0050    0.9755   0.0073   0.9942
 11    0.0020    0.0039    0.005    1.3761   0.0028    0.9783   0.0036   0.9978
 12    0.0127    0.0238    0.002    1.3781   0.0173    0.9956   0.0015   0.9993
 13   −0.0028   −0.0049    0.001    1.3791  −0.0036    0.9921   0.0007   1
 Tail                      0        1.3791                      0        1
Table 3.4 Incremental claims ratios and estimated average development patterns for paid claims data

 k     m̂_k        m̃_k        m̂_k*   ∑m̂_k*   ζ̂_k     ẑ_k      ζ̂_k*    ẑ_k*
 1     0.0074      0.0086      0.0086  0.0086   0.0063   0.0063   0.0063   0.0063
 2     0.0564      0.0611      0.0611  0.0697   0.0443   0.0505   0.0443   0.0505
 3     0.1793      0.1760      0.1760  0.2457   0.1276   0.1782   0.1276   0.1782
 4     0.2812      0.2621      0.2621  0.5078   0.1901   0.3682   0.1901   0.3682
 5     0.2270      0.2064      0.2064  0.7143   0.1497   0.5179   0.1497   0.5179
 6     0.1450      0.1462      0.1462  0.8604   0.1060   0.6239   0.1060   0.6239
 7     0.1307      0.1709      0.1709  1.0313   0.1239   0.7478   0.1239   0.7478
 8     0.0562      0.0903      0.11    1.1413   0.0655   0.8133   0.0798   0.8276
 9     0.0294      0.0545      0.07    1.2113   0.0395   0.8529   0.0508   0.8784
 10    0.0079      0.0159      0.05    1.2613   0.0115   0.8644   0.0363   0.9146
 11    0.0105      0.0205      0.03    1.2913   0.0149   0.8793   0.0218   0.9364
 12    0.0137      0.0257      0.02    1.3113   0.0186   0.8979   0.0145   0.9509
 13   −0.000031   −0.000043    0.02    1.3313   0.0000   0.8979   0.0145   0.9654
 Tail                          0.0478  1.3791                     0.0346   1
Table 3.5 Individual claims ratios, initial ultimate claims ratio, ultimate claims,
and estimated outstanding claims reserves for incurred claims data

 i    r_i(inc)  r_i(paid)  r̄_i     r_i*    q̂_i     Û_i           R̂_i^BF(inc)   R̂_i^BF(paid)
 1    0.5319    0.6129     0.5710   0.5710  0.7874   32299.9191    0             1118.5948
 2    0.4737    0.5438     0.5075   0.5075  0.6999   40279.1124    29.2069       1979.0648
 3    0.4581    0.5104     0.4835   0.4835  0.6668   40634.6177    88.3941       2585.8263
 4    0.4375    0.4744     0.4556   0.4556  0.6283   39604.2625    229.7407      3381.7862
 5    0.6536    0.7323     0.6918   0.6918  0.9540   58440.5745    762.7689      7109.0110
 6    0.9787    1.0853     1.0307   1.0307  1.4214   81346.9434    2241.4592     14024.4629
 7    1.3554    1.2448     1.2989   1.2989  1.7914   163258.6555   10417.5331    41168.2102
 8    2.0094    2.0028     2.0061   2.0061  2.7666   268150.6343   41569.6714    100850.1239
 9    1.4647    1.4174     1.4409   1.4409  1.9871   331893.0957   79509.4046    160000.5501
 10   1.0245    0.8716     0.9450   0.9450  1.3032   193519.8073   74977.1006    122259.4617
 11   0.7494    0.7373     0.7433   0.7433  1.0251   169559.7233   102656.5288   139350.9305
 12   0.4395    0.2777     0.3494   0.5     0.6895   157381.5643   126690.1157   149426.7861
 13   0.5819    1.4354     0.9139   0.5     0.6895   156150.7225   148318.1288   155171.5582
 Total                                                             587490.0529   898426.3667
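A quick sanity check of equation (2.7) against the first three rows of Table 3.5: the r̄_i column should be the geometric mean of the incurred and paid claims ratios.

```python
from math import sqrt

# first three accident years from Table 3.5 (incurred and paid claims ratios)
r_incurred = [0.5319, 0.4737, 0.4581]
r_paid = [0.6129, 0.5438, 0.5104]

# eq. (2.7): r_bar is the geometric mean of the paid and incurred ratios
r_bar = [round(sqrt(a * b), 4) for a, b in zip(r_incurred, r_paid)]
print(r_bar)  # matches the table's r_bar column: [0.571, 0.5075, 0.4835]
```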
Table 3.6 Variability constants, standard error of ζ̂_k* and standard error of ẑ_k* for incurred claims data

 k     s̃_k       ŝ_k²*     (s.e.(ζ̂_k*))²   s.e.(ζ̂_k*)   (s.e.(ẑ_k*))²   s.e.(ẑ_k*)
 1     157.9638   157.9638   0.000091         0.0095        0.000091         0.0095
 2     258.7319   258.7319   0.000164         0.0128        0.000255         0.0160
 3     241.0382   241.0382   0.000170         0.0130        0.000425         0.0206
 4     193.4240   193.4240   0.000155         0.0124        0.000580         0.0241
 5     793.3281   793.3281   0.000751         0.0274        0.001331         0.0365
 6     677.8112   444.7212   0.000614         0.0248        0.001946         0.0441
 7     359.4531   570.0033   0.001250         0.0354        0.001655         0.0407
 8     74.5968    73.8335    0.000252         0.0159        0.001402         0.0374
 9     80.2661    32.8787    0.000156         0.0125        0.001247         0.0353
 10    12.3862    25.1074    0.000164         0.0128        0.001082         0.0329
 11    10.7283    21.9404    0.000194         0.0139        0.000889         0.0298
 12    35.3697    20.2354    0.000279         0.0167        0.000610         0.0247
 13               19.6970    0.000610         0.0247        0                0
 Tail              19.1729    0                0             0                0
Table 3.7 Variability constants, standard error of ζ̂_k* and standard error of ẑ_k* for paid claims data

 k     s̃_k       ŝ_k²*     (s.e.(ζ̂_k*))²   s.e.(ζ̂_k*)   (s.e.(ẑ_k*))²   s.e.(ẑ_k*)
 1     12.5625    12.5625    0.000007         0.0027        0.000007         0.0027
 2     97.2829    97.2829    0.000062         0.0079        0.000069         0.0083
 3     80.2233    80.2233    0.000057         0.0075        0.000125         0.0112
 4     359.6533   359.6533   0.000288         0.0170        0.000413         0.0203
 5     204.5555   281.9872   0.000267         0.0163        0.000680         0.0261
 6     111.6348   121.1375   0.000167         0.0129        0.000848         0.0291
 7     283.9844   171.3740   0.000376         0.0194        0.001224         0.0350
 8     69.3003    72.9480    0.000249         0.0158        0.001473         0.0384
 9     36.7752    41.6315    0.000197         0.0140        0.001639         0.0405
 10    37.5430    31.4504    0.000206         0.0143        0.001434         0.0379
 11    22.3647    23.7591    0.000210         0.0145        0.001224         0.0350
 12    19.7605    20.6506    0.000285         0.0169        0.000939         0.0306
 13               20.6506    0.000639         0.0253        0.000300         0.0173
 Tail              30.4780    0.0002998        0.0173        0                0
4. Conclusion
The following can be inferred from the results of the discussion. The outstanding claims reserves estimated using the Bornhuetter-Ferguson method are 587,490.0529 for the incurred claims data and 898,426.3667 for the paid claims data. This means that the estimated outstanding claims reserves the insurance company must provide amount to 587,490.0529 based on incurred claims and 898,426.3667 based on paid claims. The prediction errors of the Bornhuetter-Ferguson reserve estimates are 8.51% for the incurred claims data and 4.01% for the paid claims data.
5. References
Mack, Thomas (Munich Re). 2006. Parameter Estimation for Bornhuetter/Ferguson. Casualty Actuarial Society Forum, Fall 2006, 141-157.
Mack, Thomas. 2008. The Prediction Error of Bornhuetter/Ferguson. Astin Bulletin, 38, 87-103.
Verrall, R. J. 2004. A Bayesian Generalized Linear Model for The Bornhuetter-Ferguson Method of
Claims Reserving. North American Actuarial Journal, 8, 67-89.
Abstract: Most corporations considering debt liabilities issue risky coupon bonds for a finite maturity which typically matches the expected life of the assets being financed. For valuing these coupon bonds, we can consider the common stock and coupon bonds as a compound option. Another problem is that bond indenture provisions often include safety covenants that give bond investors the right to reorganize a firm if its value falls below a given barrier. This paper shows how to value coupon bonds based on the first passage time approach. We construct a formula for the probability of default at the maturity date by computing the historical low of firm values. Using Indonesian corporate coupon bond data, we predict the bankruptcy of a firm.
1. Introduction
Credit risk management is one of the most important recent developments in the finance industry. It has been the subject of considerable research interest in the banking and finance communities, and has recently drawn the attention of statistical researchers. Credit risk is the risk induced by credit events such as credit rating changes, restructuring, failure to pay, and bankruptcy. More formally, credit risk is the distribution of financial losses due to unexpected changes in the credit quality of a counterparty in a financial agreement (Giesecke, 2004). Central to credit risk is the default event, which occurs if the debtor is unable to meet its legal obligation according to the debt contract.
Merton (1974) first built a model based on the capital structure of the firm, which became the basis of the structural approach. He assumes that the firm is financed by equity and a zero coupon bond with face value K and maturity date T. In this approach, the company defaults at the bond maturity time T if its asset value falls below the face value of the bond at time T.
Black and Cox (1976) extended the definition of the default event and generalized Merton's method into the first passage approach. In this approach, the firm defaults when the historical low of the firm's asset value falls below some barrier D, so the default event can take place before the maturity date T. This theory also assumes that the corporation issues only one zero coupon bond. Reisz and Perlich (2004) point out that if the barrier is below the bond's face value, the default time definition of the Black-Cox theory no longer reflects economic reality. In their paper, they modified the classic first passage time approach and redefined the formula for the default time.
To date, most corporations tend to issue risky coupon bonds. At every coupon date until the final payment, the firm has to pay the coupon, and at the maturity date the bondholder receives the face value of the bond. The firm goes bankrupt when it fails to pay a coupon on a coupon payment date and/or the face value of the bond at the maturity date. Geske (1977) derived formulas for valuing coupon bonds. In a later paper, Geske (1979) suggested that when a company has coupon bonds outstanding, the common stock and coupon bonds can be viewed as a compound option.
In this paper we propose a method for unifying the theories above. We want to produce a new credit risk theory that fulfills the assumptions of the real finance industry. We will derive a probability of default
formula for a risky coupon bond using the modified first passage time approach. We construct the formula by computing the historical low of firm values.
2. Theoretical Framework
Consider a firm with market value V_t at time t, financed by equity and a zero coupon bond with face value K and maturity date T. The firm's contractual obligation is to repay the amount K to the bondholder at time T. Merton and Black & Scholes (1973) indicated that most corporate liabilities may be viewed as an option. They derived a formula for valuing call options and discussed the pricing of a firm's common stock and bonds when the stock is viewed as an option on the value of the firm. Thus, valuing the equity of the firm is identical to valuing a European call option.
The firm is assumed to default at the bond maturity date T if the total asset value of the firm is not sufficient to pay its obligation to the bondholder. Thus the default time τ is a discrete random variable given by

τ = T if V_T < K;  τ = ∞ if V_T ≥ K.  (1)
To calculate the probability of default, we assume that the standard model for the evolution of asset prices over time is geometric Brownian motion:

dV_t = μ V_t dt + σ V_t dW_t,  V_0 > 0,  (2)

where μ is a drift parameter, σ > 0 is a volatility parameter, and W is a standard Brownian motion. Setting m = μ − σ²/2, Itô's lemma implies that

V_t = V_0 exp(mt + σW_t).  (3)

Since W_T is normally distributed with mean zero and variance T, the probability of default is given by

P(τ = T) = P[V_T < K] = P[σW_T < log L − mT] = Φ((log L − mT)/(σ√T)),  (4)

where L = K/V_0 and Φ is the cumulative standard normal distribution function.
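Equation (4) can be evaluated directly; a minimal sketch in Python (the input values below are illustrative, not the paper's data) uses the error function for the standard normal CDF:

```python
from math import erf, log, sqrt

def norm_cdf(x):
    # standard normal CDF written via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton_pd(V0, K, mu, sigma, T):
    """Probability of default at maturity (eq. 4):
    P(tau = T) = Phi((log(K/V0) - m*T) / (sigma*sqrt(T))), with m = mu - sigma**2/2."""
    m = mu - 0.5 * sigma ** 2
    return norm_cdf((log(K / V0) - m * T) / (sigma * sqrt(T)))

# illustrative inputs (not the paper's data)
pd = merton_pd(V0=100.0, K=70.0, mu=0.08, sigma=0.2, T=1.0)
print(round(pd, 4))
```

The probability rises as the face value K approaches the current asset value V_0, as expected.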
In Merton's model, the firm can only default at the maturity date T. As noted by Black & Cox (1976), bond indenture provisions often include safety covenants that give the bondholder the right to reorganize a firm if its value falls below a given barrier.
We still use geometric Brownian motion to model the total assets V_t of the firm. Suppose the default barrier B is a constant valued in (0, V_0); then the default time τ is modified to

τ = inf{t > 0 : V_t < B}.  (5)

This definition says a default takes place when the assets of the firm fall to some positive level B for the first time. The firm is assumed not to be in default at time t = 0.
So the probability of default is calculated as

P(τ ≤ T) = P[M_T < B] = P[min_{s≤T}(ms + σW_s) < log(B/V_0)],

where M is the historical low of firm values, M_t = min_{s≤t} V_s. Since the distribution of the historical low of an arithmetic Brownian motion is inverse Gaussian, the probability of default can be calculated explicitly as

P(τ ≤ T) = Φ((ln(B/V_0) − mT)/(σ√T)) + (B/V_0)^{2m/σ²} Φ((ln(B/V_0) + mT)/(σ√T)).  (6)

Figure 2 shows the default event graphically for Black & Cox's model.
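Equation (6) is likewise straightforward to compute; a small sketch (illustrative inputs, not the paper's data):

```python
from math import erf, log, sqrt

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def first_passage_pd(V0, B, mu, sigma, T):
    """Black-Cox first passage default probability (eq. 6)."""
    m = mu - 0.5 * sigma ** 2
    b = log(B / V0)
    s = sigma * sqrt(T)
    return norm_cdf((b - m * T) / s) + (B / V0) ** (2.0 * m / sigma ** 2) * norm_cdf((b + m * T) / s)

# the default probability rises as the barrier approaches V0
print(first_passage_pd(100.0, 60.0, 0.08, 0.2, 1.0) < first_passage_pd(100.0, 80.0, 0.08, 0.2, 1.0))  # True
```

Note that the first term alone is the probability that V_T ends below B, so the first passage probability always exceeds the purely terminal one.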
In practice, the most common form of debt instrument is a coupon bond. In the U.S. and in many other countries, coupon bonds pay coupons every six months and the face value at maturity. Suppose the firm has only common stock and a coupon bond outstanding, and the coupon bond has n interest payments of c dollars each. The firm is assumed to default at a coupon date if the total asset value of the firm is not sufficient to pay the coupon to the bondholder; at the maturity date, the firm defaults if the total assets are below the face value of the bond. In this setting, if the firm defaults on a coupon payment, then all subsequent coupon payments (and the payment of the face value) are also defaulted on.
Geske (1979) proposed a theory for valuing risky coupon bonds. When the corporation has coupon bonds outstanding, the common stock can be considered a compound option (Geske, 1977). A compound option is an option on an option; in other words, the underlying asset is another option (Wee, 2010). For a coupon bond, valuing the equity is identical to valuing a European call option on a call option.
At every coupon date until the final payment, the firm has the option of making the coupon payment or forfeiting the firm to the bondholders. The final firm option is to repurchase the claims on the firm from the
bondholders by paying off the principal at maturity. The financing arrangements for making or missing the interest payments are specified in the indenture conditions of the bond. Figure 3 illustrates the default event of Geske's model.
3.1 Modified First Passage Time Approach (Reisz & Perlich's Model)
In their paper, Reisz & Perlich (2004) point out that if the barrier is below the face value of the bond, then our earlier definition (5) no longer reflects economic reality. It does not capture the situation in which the firm is in default because V_T < K although M_T > B.
They therefore proposed to redefine default as the firm value falling below the barrier B < K at any time before maturity, or the firm value falling below the face value K at maturity. Formally, the default time is now given by

τ = min(τ_1, τ_2),  (7)

where
τ_1 = the maturity time T if assets V_T < K at T,
τ_2 = the first passage time of assets to the barrier B.
In other words, the default time is defined as the minimum of the first passage default time (5) and Merton's default time (1). This definition of default is consistent with the payoffs to equity and bonds: even if the firm value does not fall below the barrier, the firm defaults if assets are below the bond's face value at maturity. The default event for Reisz & Perlich's model is shown in Figure 4.
Assuming that the firm can neither repurchase shares nor issue new senior debt, the payoffs to the firm's liabilities at debt maturity T are summarized in Table 1 and Table 2.

Table 1. Payoffs at Maturity in the Modified First Passage Time Approach for B ≥ K

State of the firm   Assets             Bond   Equity
No Default          M_T > B            K      V_T − K
Default             M_T ≤ B, B > K     B      0
                    M_T ≤ B, B = K     K      0
48
Proceedings of the International Conference on Mathematical and Computer Sciences
Jatinangor, October 23rd-24th , 2013
Table 2. Payoffs at Maturity in the Modified First Passage Time Approach for B < K

State of the firm   Assets             Bond   Equity
No Default          M_T > B, V_T ≥ K   K      V_T − K
Default             M_T > B, V_T < K   V_T    0
                    M_T ≤ B            B      0
3.2 Valuation of a Coupon Bond with the Modified First Passage Time Approach
In this section, we assume that the firm has asset value V_t, financed by equity and a single coupon bond with face value K and only one coupon payment, at time t_c, during the bond's term.
Suppose the default barrier B is a constant valued in (0, V_0) and c < B < K; then the default time τ is given by

τ = min(τ_1, τ_2, τ_3, τ_4, τ_5),  (8)

where
τ_1 = the maturity time T if assets V_T < K at T,
τ_2 = the first passage time of assets to the barrier B in (t_c, T) = inf{t_c < t < T : V_t < B},
τ_3 = the coupon payment date t_c if assets V_{t_c} < B or V_{t_c} < c at time t_c,
τ_4 = the first passage time of assets to the barrier B in (0, t_c] = inf{0 < t ≤ t_c : V_t < B},
τ_5 = ∞ otherwise.
With the definitions above, we can summarize the default time as

τ = min(τ_1, τ_2*),  (9)

where
τ_2* = the first passage time of assets to the barrier B in (0, T) = inf{0 < t < T : V_t < B}.
The default event is shown in Figure 5.
Figure 5. Default Event for Coupon Bond with Modified First Passage Time Approach
We have to check whether this default definition is consistent with the payoff to investors. We need to consider two scenarios.
1. B ≥ c + K
a. If the firm value never falls below the barrier B over the term of the bond (M_T > B), then at the coupon date the bond investor receives the coupon c and at the maturity date receives the face value, for a total of c + K. The equity holders receive the remaining V_T − (c + K) at the maturity date.
b. If the firm value falls below the barrier at some point during the bond's term (M_T ≤ B), then the firm defaults. In this case, the firm stops operating, bond investors take over its assets B, and equity investors receive nothing. The bond investor is fully protected: they receive at least the coupon plus face value, c + K, upon default, and the bond is not subject to default risk anymore.
c. If the asset value V_T is less than c + K, the ownership of the firm is transferred to the bondholders, who lose the amount (c + K) − V_T. Equity is worthless because of limited liability.
2. B < c + K
This anomaly does not occur if we assume B < c + K, so that the bondholder is both exposed to some default risk and compensated for bearing that risk.
a. If the firm value never falls below the barrier B over the term of the bond (M_T > B) and V_T ≥ c + K, then at the coupon date the bond investor receives the coupon c and at the maturity date receives the face value, for a total of c + K. The equity holders receive the remaining V_T − (c + K) at the maturity date.
b. If M_T > B but V_T < c + K, then the firm defaults, since the remaining assets are not sufficient to pay off the debt in full. Bondholders collect the remaining assets V_T and equity becomes worthless.
c. If M_T ≤ B, then the firm defaults as well. Bond investors receive B < K at default and equity becomes worthless.
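The case analysis in scenario 2 can be sketched as a small payoff function. This is an illustration of the cases above, treating the coupon and the principal together at maturity for simplicity; the names and values are hypothetical, not from the paper.

```python
def payoffs_at_maturity(V_T, M_T, B, c, K):
    """Scenario 2 (B < c + K): return (bond, equity) payoffs implied by the cases above.
    M_T is the historical low of firm value over the bond's term."""
    if M_T <= B:                     # case c: barrier hit, bondholder takes B
        return B, 0.0
    if V_T < c + K:                  # case b: no barrier hit, but assets insufficient at maturity
        return V_T, 0.0
    return c + K, V_T - (c + K)      # case a: no default

print(payoffs_at_maturity(V_T=120.0, M_T=90.0, B=50.0, c=5.0, K=80.0))  # (85.0, 35.0)
```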
To calculate the probability of default for this case, we first define M as the historical low of firm values,

M_t = min_{s≤t} V_s.

Then the probability of default is

P(τ ≤ T) = Φ((ln(K/V_0) − mT)/(σ√T)) + (B/V_0)^{2m/σ²} Φ((ln(B²/(KV_0)) + mT)/(σ√T)).  (10)

The probability of default for a coupon bond under the modified first passage time approach is higher than
the corresponding probability in the classical approach, equation (6).
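A sketch of equation (10) in Python, under the assumption spelled out above that default occurs if the running minimum hits B before T or V_T < K at T (the numeric inputs are illustrative, not the Bank Lampung data):

```python
from math import erf, log, sqrt

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def modified_fpt_pd(V0, K, B, mu, sigma, T):
    """Default probability under the modified first passage approach (eq. 10),
    for a barrier B < K: default if min V_t < B before T, or V_T < K at T."""
    m = mu - 0.5 * sigma ** 2
    s = sigma * sqrt(T)
    term_at_maturity = norm_cdf((log(K / V0) - m * T) / s)
    term_barrier = (B / V0) ** (2.0 * m / sigma ** 2) * norm_cdf((log(B * B / (K * V0)) + m * T) / s)
    return term_at_maturity + term_barrier

# illustrative inputs (not the paper's data)
pd = modified_fpt_pd(V0=100.0, K=70.0, B=50.0, mu=0.08, sigma=0.2, T=1.0)
print(0.0 < pd < 1.0)  # True
```

Because the second term is nonnegative, this probability is at least the Merton-style terminal probability, consistent with the comparison made in the text.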
In this case study we use data sets from the Indonesian Bond Market Directory 2011, published by the Indonesian Stock Exchange (IDX) and the Indonesian Bond Pricing Agency (IBPA). We use a bond issued by PT Bank Lampung (BPD Lampung), namely Obligasi II Bank Lampung Tahun 2007, with code number BLAM02 IDA000035208. The profile structure of this bond is given in Table 2. Total assets data of the firm, published by Bank Indonesia, are given in Table 3.
To derive the probability of default of the bond, we construct the formula by computing the historical low of firm values. All computation is done in R. In this study, we use a fixed barrier level of 2,000,000,000,000.
Using formula (10), the probability of default for Obligasi II Bank Lampung Tahun 2007 is 0.00003627191. This probability of default is very small because the outstanding amount of the bond is much lower than the total asset value. It can be seen from Table 2 and Table 3 that the face value of the bond is 300,000,000,000 and the total asset value at the end of 2012 is 4,221,274,000,000. In a normal situation, the total asset value is more than sufficient to pay the principal of the bond.
Acknowledgements
This research was supported by the Hibah Disertasi Doktor grant from DIKTI in 2013.
References
Black, F. & Cox, J., 1976, Valuing Corporate Securities: Some Effects of Bond Indenture Provisions,
Journal of Finance, 31 (2), 351-367.
Black, F. & Scholes, M., 1973, The Pricing of Options and Corporate Liabilities, Journal of Political
Economy, 81, 637-654.
Geske, R., 1977, The Valuation of Corporate Liabilities as Compound Options, Journal of Financial
and Quantitative Analysis, 12, 541-552.
Geske, R., 1979, The Valuation of Compound Options, Journal of Financial Economics, 7, 63-81.
Giesecke, K., 2004, Credit Risk Modeling and Valuation: An Introduction, Credit Risk: Models and
Management, Vol.2, D. Shimko (Ed.), Wiley. New York.
Merton, R., 1974, On The Pricing of Corporate Debt: The Risk Structure of Interest Rates, Journal of
Finance, 29 (2), 449-470.
Reisz, A. & Perlich, C., 2004, A Market-Based Framework for Bankruptcy Prediction, Working Paper,
Baruch College and New York University.
Wee, L.T., 2010, Compound Option, Teaching Note.
Website of Bank Indonesia (BI), 2013, Data Total Aset Bank. www.bi.go.id. [May 20, 2013]
Website of Bursa Efek Indonesia (BEI), 2012, Indonesian Bond Market Directory 2011, www.idx.co.id
[May 20, 2013]
Website of Indonesia Bond Pricing Agency (IBPA), 2012, Data Obligasi, www.ibpa.co.id. [May 20,
2013]
Abstract: In this paper, we discuss when the subdirect sum of two nonsingular Z-matrices is a nonsingular M-matrix and how to obtain its inverse. A matrix is shown to be a nonsingular M-matrix by checking that the entries of its inverse are nonnegative. In particular, for two blocks of lower and upper triangular nonsingular M-matrices, the k-subdirect sum of the two matrices is a nonsingular M-matrix. Properties and theorems covering all of the above cases are given in this paper, together with examples.
1. Introduction
Let 𝐴 and 𝐵 be two square matrices of order 𝑛1 and 𝑛2 , respectively, and let 𝑘 be an integer such that1 ≤
𝑘 ≤ min(𝑛1 , 𝑛2 ). Let 𝐴 and 𝐵 be partitioned into 2 × 2 blocks as follows:
𝐴 𝐴12 𝐵 𝐵12
𝐴 = [ 11 ], and 𝐵 = [ 11 ] (1.1)
𝐴21 𝐴22 𝐵21 𝐵22
where A22 and B11 are square matrices of order k. The 𝑘-subdirect sum of 𝐴 and 𝐵 and denote it by
𝐴11 𝐴12 0
𝐶 = 𝐴 ⨁𝑘 𝐵 = [𝐴21 𝐴22 + 𝐵11 𝐵12 ], (1.2)
0 𝐵21 𝐵22
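Definition (1.2) can be sketched directly: the trailing k × k block of A overlaps, and is added to, the leading k × k block of B (the matrices below are small illustrative examples).

```python
def subdirect_sum(A, B, k):
    """k-subdirect sum C = A (+)_k B of square matrices A (n1 x n1) and B (n2 x n2),
    per eq. (1.2): the overlapping k x k blocks are added."""
    n1, n2 = len(A), len(B)
    n = n1 + n2 - k
    C = [[0] * n for _ in range(n)]
    for i in range(n1):            # A occupies the leading n1 x n1 block
        for j in range(n1):
            C[i][j] += A[i][j]
    off = n1 - k
    for i in range(n2):            # B occupies the trailing n2 x n2 block
        for j in range(n2):
            C[off + i][off + j] += B[i][j]
    return C

A = [[2, 1], [2, 2]]
B = [[1, 1], [2, 3]]
print(subdirect_sum(A, B, 1))  # [[2, 1, 0], [2, 3, 1], [0, 2, 3]]
```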
The inverses of A and B are partitioned conformably as

A^{-1} = [ Â11  Â12 ]        B^{-1} = [ B̂11  B̂12 ]
         [ Â21  Â22 ],                [ B̂21  B̂22 ],   (1.3)

where Â22 and B̂11 are square matrices of order k. In the following result we show that nonsingularity of the matrix Â22 + B̂11 is a necessary and sufficient condition for the k-subdirect sum C to be nonsingular. The proof is based on using the relation n = n_1 + n_2 − k to properly partition the indicated matrices.
Some definitions, properties and theorems will be given in this paper. A nonsingular Z-matrix is a matrix with positive diagonal entries and nonpositive off-diagonal entries. A nonsingular M-matrix is a nonsingular Z-matrix whose inverse has nonnegative entries. In this paper, we want to obtain the k-subdirect sum of two nonsingular Z-matrices by using some definitions, properties and theorems, and determine whether it is a nonsingular M-matrix or not. Furthermore, we give some examples which help illustrate the theoretical results.
Theorem 2.1. Let A and B be nonsingular matrices of order n_1 and n_2, respectively, and let k be an integer such that 1 ≤ k ≤ min(n_1, n_2). Let A and B be partitioned as in (1.1) and their inverses be partitioned as in (1.3). Let C = A ⊕_k B. Then C is nonsingular if and only if Ĥ = Â22 + B̂11 is nonsingular.
Proof. Let I_m be the identity matrix of order m. The theorem follows from the relation

[ A^{-1}  0        ]     [ I_{n−n2}  0      ]   [ Â11  Â12  0        ] [ A11  A12        0   ] [ I_{n−n2}  0     0    ]
[ 0       I_{n−n1} ]  C  [ 0         B^{-1} ] = [ Â21  Â22  0        ] [ A21  A22 + B11  B12 ] [ 0         B̂11  B̂12 ]
                                                [ 0    0    I_{n−n1} ] [ 0    B21        B22 ] [ 0         B̂21  B̂22 ]

                                              = [ I_{n−n2}  Â12  0        ]
                                                [ 0         Ĥ    B̂12     ]
                                                [ 0         0    I_{n−n1} ].   (1.4)
We first consider the k-subdirect sum of nonsingular Z-matrices. From (1.4) we can explicitly write

C^{-1} = [ I_{n−n2}  0      ] [ I_{n−n2}  −Â12 Ĥ^{-1}   Â12 Ĥ^{-1} B̂12 ] [ A^{-1}  0        ]
         [ 0         B^{-1} ] [ 0          Ĥ^{-1}       −Ĥ^{-1} B̂12    ] [ 0       I_{n−n1} ],
                              [ 0          0             I_{n−n1}       ]

from which we can obtain

C^{-1} = [ Â11 − Â12 Ĥ^{-1} Â21   Â12 − Â12 Ĥ^{-1} Â22   Â12 Ĥ^{-1} B̂12         ]
         [ B̂11 Ĥ^{-1} Â21         B̂11 Ĥ^{-1} Â22         −B̂11 Ĥ^{-1} B̂12 + B̂12 ]
         [ B̂21 Ĥ^{-1} Â21         B̂21 Ĥ^{-1} Â22         −B̂21 Ĥ^{-1} B̂12 + B̂22 ].   (2.1)
Example 2.2. Let

A = [ 2  1 ]        B = [ 1  1 ]
    [ 2  2 ],           [ 2  3 ].

Then

A^{-1} = [ 1   −1/2 ]        B^{-1} = [ B̂11  B̂12 ] = [ 3   −1 ]
         [ −1   1   ],                [ B̂21  B̂22 ]   [ −2   1 ],

where B̂11 = 3, B̂12 = −1, B̂21 = −2, B̂22 = 1, and Ĥ = Â22 + B̂11 = 1 + 3 = 4 is nonsingular.

C = A ⊕_1 B = [ A11  A12        0   ]   [ 2  1    0 ]   [ 2  1  0 ]
              [ A21  A22 + B11  B12 ] = [ 2  2+1  1 ] = [ 2  3  1 ],
              [ 0    B21        B22 ]   [ 0  2    3 ]   [ 0  2  3 ]

and from here we have

[ A^{-1}  0 ]    [ I_{n−n2}  0      ]   [ 1   −1/2  0 ] [ 2  1  0 ] [ 1   0   0 ]
[ 0       I ]  C [ 0         B^{-1} ] = [ −1   1    0 ] [ 2  3  1 ] [ 0   3  −1 ]
                                        [ 0    0    1 ] [ 0  2  3 ] [ 0  −2   1 ]

  [ 1  −1/2  −1/2 ] [ 1   0   0 ]   [ 1  −1/2   0 ]
= [ 0   2     1   ] [ 0   3  −1 ] = [ 0   4    −1 ].
  [ 0   2     3   ] [ 0  −2   1 ]   [ 0   0     1 ]

Thus, C is nonsingular.
Theorem 2.3. Let A and B be nonsingular Z-matrices of order n_1 and n_2, respectively, and let k be an integer such that 1 ≤ k ≤ min(n_1, n_2). Let A and B be partitioned as in (1.1) and their inverses be partitioned as in (1.3). Let C = A ⊕_k B, and let Ĥ = Â22 + B̂11 be nonsingular. Then C is a nonsingular M-matrix if and only if every entry of C^{-1} is nonnegative.
Example 2.4.
A = [2 −4; −1 1] and B = [4 −3; −2 1] are nonsingular Z-matrices.
We have A⁻¹ = −(1/2)[1 4; 1 2] = [−1/2 −2; −1/2 −1], where Â11 = −1/2, Â12 = −2, Â21 = −1/2, Â22 = −1, and
B⁻¹ = −(1/2)[1 3; 2 4] = [−1/2 −3/2; −1 −2], where B̂11 = −1/2, B̂12 = −3/2, B̂21 = −1, B̂22 = −2, and
Ĥ = Â22 + B̂11 = −1 − 1/2 = −3/2, so Ĥ⁻¹ = −2/3.
The k-subdirect sum of A and B for k = 1 is
C = A ⊕1 B = [A11 A12 0; A21 A22+B11 B12; 0 B21 B22] = [2 −4 0; −1 5 −3; 0 −2 1], and from (2.1) we have
C⁻¹ = [Â11−Â12Ĥ⁻¹Â21  Â12−Â12Ĥ⁻¹Â22  Â12Ĥ⁻¹B̂12; B̂11Ĥ⁻¹Â21  B̂11Ĥ⁻¹Â22  −B̂11Ĥ⁻¹B̂12+B̂12; B̂21Ĥ⁻¹Â21  B̂21Ĥ⁻¹Â22  −B̂21Ĥ⁻¹B̂12+B̂22]
= [1/6 −2/3 −2; −1/6 −1/3 −1; −1/3 −2/3 −1].
We can see from here that not all entries of C⁻¹ are nonnegative. Thus, C is not a nonsingular M-matrix.
Example 2.5.
A = [1 0 0; −1 3 −1; −2 −1 1] and B = [1 0 0; −2 4 −2; −1 −1 1] are nonsingular Z-matrices. We have
A⁻¹ = (1/2)[2 0 0; 3 1 1; 7 1 3] = [1 0 0; 3/2 1/2 1/2; 7/2 1/2 3/2], where Â11 = [1], Â12 = [0 0], Â21 = [3/2; 7/2], Â22 = [1/2 1/2; 1/2 3/2], and
B⁻¹ = (1/2)[2 0 0; 4 1 2; 6 1 4] = [1 0 0; 2 1/2 1; 3 1/2 2], where B̂11 = [1 0; 2 1/2], B̂12 = [0; 1], B̂21 = [3 1/2], B̂22 = [2].
Since the entries of A⁻¹ and B⁻¹ are nonnegative, the matrices A and B are also nonsingular M-matrices.
Ĥ = Â22 + B̂11 = [1/2 1/2; 1/2 3/2] + [1 0; 2 1/2] = [3/2 1/2; 5/2 2] and Ĥ⁻¹ = (4/7)[2 −1/2; −5/2 3/2] = [8/7 −2/7; −10/7 6/7].
The k-subdirect sum of A and B for k = 2 is
C = A ⊕2 B = [1 0 0 0; −1 3+1 −1+0 0; −2 −1−2 1+4 −2; 0 −1 −1 1] = [1 0 0 0; −1 4 −1 0; −2 −3 5 −2; 0 −1 −1 1], and from (2.1) we have
from which we obtain
C⁻¹ = [1 0 0 0; 5/7 3/7 1/7 2/7; 13/7 5/7 4/7 8/7; 18/7 8/7 5/7 17/7].
Since the entries of C⁻¹ are nonnegative, C is a nonsingular M-matrix.
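Theorem 2.3's criterion in Example 2.5 is easy to verify numerically. Below is a small sketch (plain Python with exact fractions; `inverse` is our own helper, not part of the paper) that inverts C = A ⊕2 B and checks that every entry of C⁻¹ is nonnegative.

```python
from fractions import Fraction

def inverse(M):
    # Gauss-Jordan elimination on the augmented matrix [M | I], exact arithmetic.
    n = len(M)
    aug = [[Fraction(x) for x in row] + [Fraction(i == j) for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

# C = A (+)_2 B from Example 2.5.
C = [[ 1,  0,  0,  0],
     [-1,  4, -1,  0],
     [-2, -3,  5, -2],
     [ 0, -1, -1,  1]]
Cinv = inverse(C)
# Third row matches the paper: [13/7, 5/7, 4/7, 8/7].
assert Cinv[2] == [Fraction(13, 7), Fraction(5, 7), Fraction(4, 7), Fraction(8, 7)]
# Every entry of C^{-1} is nonnegative, so C is a nonsingular M-matrix.
assert all(x >= 0 for row in Cinv for x in row)
```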
Example 2.6.
A = [2 0 −1; −2 1 −3; 0 −1 1] and B = [1 −1 0; −1 2 −1; −2 −4 3] are nonsingular Z-matrices but not nonsingular M-matrices.
A⁻¹ = −(1/6)[−2 1 1; 2 2 8; 2 2 2] = [1/3 −1/6 −1/6; −1/3 −1/3 −4/3; −1/3 −1/3 −1/3],
where Â11 = [1/3], Â12 = [−1/6 −1/6], Â21 = [−1/3; −1/3], Â22 = [−1/3 −4/3; −1/3 −1/3].
B⁻¹ = −(1/3)[2 3 1; 5 3 1; 8 6 1] = [−2/3 −1 −1/3; −5/3 −1 −1/3; −8/3 −2 −1/3],
where B̂11 = [−2/3 −1; −5/3 −1], B̂12 = [−1/3; −1/3], B̂21 = [−8/3 −2], B̂22 = [−1/3].
Ĥ = Â22 + B̂11 = [−1/3 −4/3; −1/3 −1/3] + [−2/3 −1; −5/3 −1] = [−1 −7/3; −2 −4/3] and
Ĥ⁻¹ = −(3/10)[−4/3 7/3; 2 −1] = [2/5 −7/10; −3/5 3/10].
The k-subdirect sum of A and B for k = 2 is
C = A ⊕2 B = [2 0 −1 0; −2 1+1 −3−1 0; 0 −1−1 1+2 −1; 0 −2 −4 3] = [2 0 −1 0; −2 2 −4 0; 0 −2 3 −1; 0 −2 −4 3], and from (2.1) we have
from here we obtain
C⁻¹ = [11/30 −2/15 −1/10 −1/30; −1/6 −1/6 −1/2 −1/6; −4/15 −4/15 −1/5 −1/15; −7/15 −7/15 −3/5 2/15].
Since the entries of C⁻¹ are not all nonnegative, C is not a nonsingular M-matrix.
In the special case where A and B are block lower and upper triangular nonsingular M-matrices, respectively, the result of Theorem 2.2 is easy to establish.
Let A = [A11 0; A21 A22] and B = [B11 B12; 0 B22], (2.2)
with A22 and B11 square matrices of order k.
Theorem 2.7. Let A and B be lower and upper block triangular nonsingular M-matrices, respectively, partitioned as in (2.2). Then C = A ⊕k B is a nonsingular M-matrix.
In this particular case of block triangular matrices we have
Â12 = 0, B̂21 = 0, Â22 = A22⁻¹, B̂11 = B11⁻¹, A22 = B11, and Ĥ = Â22 + B̂11, with
A⁻¹ = [Â11 0; Â21 Â22], B⁻¹ = [B̂11 B̂12; 0 B̂22].
For C = A ⊕1 B = [A11 0 0; A21 A22+B11 B12; 0 0 B22] the cofactors are
C11 = (−1)²(A22+B11)B22, C21 = (−1)³(0) = 0, C31 = (−1)⁴(0) = 0,
C12 = (−1)³A21B22, C22 = (−1)⁴A11B22, C32 = (−1)⁵A11B12,
C13 = (−1)⁴(0) = 0, C23 = (−1)⁵(0) = 0, C33 = (−1)⁶A11(A22+B11).
Example 2.8.
A = [2 0; −1 3] and B = [3 −3; 0 2] are lower and upper block triangular nonsingular M-matrices, respectively. Then
A⁻¹ = (1/6)[3 0; 1 2] = [1/2 0; 1/6 1/3], where Â11 = 1/2, Â12 = 0, Â21 = 1/6, Â22 = 1/3, and
B⁻¹ = (1/6)[2 3; 0 3] = [1/3 1/2; 0 1/2], where B̂11 = 1/3, B̂12 = 1/2, B̂21 = 0, B̂22 = 1/2, and A22 = B11.
The k-subdirect sum of A and B for k = 1 is
C = A ⊕1 B = [2 0 0; −1 3+3 −3; 0 0 2] = [2 0 0; −1 6 −3; 0 0 2],
then from (2.3) we obtain
C⁻¹ = [A11⁻¹ 0 0; −(1/2)A22⁻¹A21A11⁻¹ (1/2)A22⁻¹ −(1/2)A22⁻¹B12B22⁻¹; 0 0 B22⁻¹]
= [1/2 0 0; −(1/2)(1/3)(−1)(1/2) (1/2)(1/3) −(1/2)(1/3)(−3)(1/2); 0 0 1/2]
= [1/2 0 0; 1/12 1/6 1/4; 0 0 1/2],
and therefore C is a nonsingular M-matrix, as expected.
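The closed form in Example 2.8 can be checked entry by entry. The sketch below (plain Python with exact fractions; our own illustrative code, assuming the scalar-block case with A22 = B11 as in the example) computes the middle row of C⁻¹ from the (2.3)-type formula and cross-checks it against C directly.

```python
from fractions import Fraction as F

# Example 2.8: A = [[2,0],[-1,3]] is lower triangular, B = [[3,-3],[0,2]] is upper
# triangular, A22 = B11 = 3, so C = A (+)_1 B = [[2,0,0],[-1,6,-3],[0,0,2]].
A11, A21, A22 = F(2), F(-1), F(3)
B11, B12, B22 = F(3), F(-3), F(2)

# Middle row of C^{-1} from the block-triangular formula (2.3).
mid = [-F(1, 2) / A22 * A21 / A11,    # -(1/2) A22^{-1} A21 A11^{-1}
       F(1, 2) / A22,                 #  (1/2) A22^{-1}
       -F(1, 2) / A22 * B12 / B22]    # -(1/2) A22^{-1} B12 B22^{-1}
assert mid == [F(1, 12), F(1, 6), F(1, 4)]

# Cross-check: the full C^{-1} assembled from (2.3) really inverts C.
Cinv = [[F(1, 2), 0, 0], mid, [0, 0, F(1, 2)]]
C = [[2, 0, 0], [-1, 6, -3], [0, 0, 2]]
prod = [[sum(C[i][k] * Cinv[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
assert prod == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# All entries of C^{-1} are nonnegative, so C is a nonsingular M-matrix.
assert all(x >= 0 for row in Cinv for x in row)
```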
3. Conclusion
In the special case where A and B are block lower and upper triangular nonsingular M-matrices, respectively, we have B̂11 = B11⁻¹, B̂21 = 0, B̂22 = B22⁻¹, and A22 = B11, and C⁻¹ is obtained from
C⁻¹ = [A11⁻¹ 0 0; −(1/2)A22⁻¹A21A11⁻¹ (1/2)A22⁻¹ −(1/2)A22⁻¹B12B22⁻¹; 0 0 B22⁻¹].
Acknowledgment
We would like to thank all those who helped with this paper, and the Department of Mathematics, which organized this seminar.
References
Ayres, F., Jr. (1974). Matriks. Translated by I Nyoman Susila (1994). Jakarta: Erlangga.
Bru, R., Pedroche, F., and Szyld, D. B. (2005). Subdirect sums of nonsingular M-matrices and of their inverses. Electronic Journal of Linear Algebra, 13, 162-174. Retrieved June 1, 2013, from http://hermite.cii.fc.ul.pt/iic/ela/ela-articles/13.html.
Fallat, S. M., and Johnson, C. R. (1999). Sub-direct sums and positivity classes of matrices. Linear Algebra and its Applications, 288, 149-173.
Abstract: The quotient group, as one of the subjects in abstract algebra, is often perceived as difficult by most undergraduate students. Therefore, we need a method that can help students understand the concept of the quotient group. One alternative is to use the GAP (Groups, Algorithms, and Programming) software as an instrument in learning the quotient group. GAP can make the presentation of the quotient group concept more attractive. By using GAP, students are expected to gain a better understanding of the quotient group concept.
1. Introduction
The quotient group is one of the topics studied in the abstract algebra course at the undergraduate level. Teachers often encounter students' difficulty in accepting and understanding the material. Most students have difficulty understanding the concept of sets whose members are themselves sets. Orit Hazzan, in her paper (1999), also mentioned that many abstract algebra teachers report students' difficulties in understanding the material. In 1994, Dubinsky, Dautermann, Leron, and Zazkis pointed out that the major difficulties in understanding group theory appear to begin with the concepts leading up to Lagrange's Theorem and quotient groups: cosets, coset multiplication, and normality. Therefore, this paper focuses on the teaching of the quotient group.
Various attempts have been made to help students understand the material, for example, conducting tutorials and explaining quotient group material in detail, accompanied by more concrete examples. Many researchers have worked on developing materials for teaching abstract algebra. Brown (1990), Kiltinen and Mansfield (1990), Czerwinski (1994), and Leganza (1995) all provided examples of specific abstract algebra tasks for students and then examined the responses given by the students. Dubinsky, Dautermann, Leron, and Zazkis, in 1994, conducted a study on the development of learning for some topics in abstract algebra, including cosets, normality, and quotient groups. In 1997, Asiala, Dubinsky, Mathews, and Morics conducted research concentrating on developing students' understanding of cosets, normality, and quotient groups.
Some researchers have used programming languages for teaching abstract algebra. For example, in 1976 Gallian used a computer program written in the Fortran programming language to investigate finite groups (Gallian, 1976). Other researchers used "Exploring Small Groups" (Geissinger, 1989) and "Cayley" (O'Bryan & Sherman, 1992). Some of them used software packages that do not specialize in computation in discrete abstract algebra, such as Matlab (Makiw, 1996). In contrast, to help teach abstract algebra, Dubinsky and Leron (1994) used a software package that does specialize in computation in discrete abstract algebra, namely ISETL. However, over time, the use of ISETL became less effective, because the program has many limitations in terms of its functions and library. In addition, the program was designed specifically for teaching, so it cannot be used for research purposes. When we teach abstract algebra we should remember that we are educating undergraduate students who may become graduate students in the future, so we should provide them with tools they can use for research later on.
In this paper, to address the same problem, GAP software is used as a tool for teaching the concept of the quotient group to undergraduate students, and we review the role of GAP in deepening students' understanding of the material. GAP is a software package used for computation in discrete abstract algebra (The GAP Group, 2013). Compared with ISETL, GAP has many advantages. Besides being usable as a tool for teaching abstract algebra materials, it can also
be used for research purposes. Another advantage of GAP is that the software is still being developed today.
2. About GAP
GAP stands for Groups, Algorithms, and Programming. GAP is a free, open, and extensible software
package which is used for computation in abstract algebra, with particular emphasis on Computational
Group Theory. This software is used in research and teaching for studying groups and their
representations, rings, vector spaces, algebras, combinatorial structures, and more (The GAP Group,
2013).
GAP was first developed in 1985 at Lehrstuhl D für Mathematik, RWTH Aachen Germany by
Joachim Neubüser, Johannes Meier, Alice Niemayer, Werner Nickel, and Martin Schönert. The first
version of GAP, which was released to the public in 1988, is version 2.4. Then, in 1997 coordination
of GAP development was moved to St. Andrews, Scotland. GAP version 4.1, which was released in
July 1999, is a complete internal redesign and almost complete rewrite of the system. In 2008, GAP
received the ACM/SIGSAM Richard Dimick Jenks Memorial Prize for excellence in software engineering applied to computer algebra. GAP is still being developed today at the
University of St. Andrews, in St. Andrews, Scotland (The GAP Group, 2013). The current version of
GAP is 4.6.5 which was released on 20 July 2013. Figure 1 shows the GAP user interface version 4.6.5.
Alexander Hulpke developed an installer for GAP version 4.4.12. The installer installs GAP and GGAP, a graphical user interface for the system. Figure 2 shows the GGAP user interface.
Although GGAP looks better in terms of appearance, it still uses GAP version 4.4, which has some drawbacks. There are many changes between GAP versions 4.4 and 4.6.5: GAP 4.6.5 has more packages and improved functionality, and some bugs found in 4.4, which could lead to incorrect results, are fixed in 4.6.5. Therefore, although we can still use GGAP to execute some specific commands, the use of GAP version 4.6.5 is recommended.
GAP has more than 100 packages that serve as algorithms, methods, or libraries. From a programming standpoint the software has many functions and operations; GAP currently has more than 800 default functions for studying topics in algebra. GAP can therefore be used to provide many examples, from the simple to the complex, in a relatively short time compared to a manual search. There are at least five ways GAP can be a useful educational tool: as a fancy calculator, as a way to provide large or complicated examples, as a way for students to write simple computer algorithms, as a way to produce large amounts of data from which students can formulate a conjecture, and as a means for students to work in collaboration (Gallian, J.A., 2010).
GAP is an interactive system based on a "read-eval-print" loop: the system takes in user input, given in text form, evaluates this input (which typically performs the calculations), and then prints the result of the calculation (Hulpke, A., 2011).
The interactive nature of GAP allows the user to write an expression or command and see its value immediately. Users can define a function and apply it to arguments to see how the function works (The GAP Group, 2006).
3. Teaching the Quotient Group Concept Using GAP
When students have sufficient knowledge of groups and subgroups, they can be given an explanation of the right and left relations in group theory. The rules given to these relations lead to a necessary and sufficient condition for a subset to be a subgroup. Both the right and the left relation are equivalence relations. Under these relations, the group is partitioned into equivalence classes called cosets. In particular, the left relation results in the formation of left cosets, and the right relation generates the right cosets.
Figure 3 gives an overview of the steps a student can follow to find cosets. First, in the GAP worksheet, the students are asked to define a group and a subgroup of it. After that, using GAP, the students are assigned to find all the right and left cosets, based on the understanding they have gained. Students can also view and compare the definitions of right and left cosets.
The teacher can give some examples to help the students understand how to use GAP to find cosets. For example, first define G as the group generated by the permutation (1 2 3 4) and use a GAP command to print all of its elements. Then, in a similar way, define H, the subgroup of G generated by the permutation (1 3)(2 4), and print all of its elements. Now, find all left and right cosets of H in G one at a time. After that, the teacher can give the students another coset-finding example.
By using GAP, students can be trained to find cosets in a more pleasant way, because they can directly see the concrete form of all the cosets they are looking for. This makes it easier for them to understand the concept of coset they were previously given.
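While the classroom exercise uses GAP commands, the same computation can be mirrored in any language. The pure-Python sketch below (our own illustrative code, not a GAP transcript) follows the example above: G generated by (1 2 3 4), H generated by (1 3)(2 4), with left and right cosets computed and compared directly.

```python
from itertools import product

def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations are tuples acting on {0, 1, 2, 3}.
    return tuple(p[q[i]] for i in range(len(p)))

def generate(gens):
    # Close a generating set under composition to obtain the whole (finite) group.
    identity = tuple(range(len(gens[0])))
    G = {identity}
    frontier = {identity}
    while frontier:
        frontier = {compose(g, s) for g, s in product(frontier, gens)} - G
        G |= frontier
    return G

r = (1, 2, 3, 0)    # the 4-cycle (1 2 3 4) in 0-based form
s = (2, 3, 0, 1)    # the double transposition (1 3)(2 4) in 0-based form
G = generate([r])
H = generate([s])
left  = {frozenset(compose(g, h) for h in H) for g in G}   # left cosets gH
right = {frozenset(compose(h, g) for h in H) for g in G}   # right cosets Hg
print(len(G), len(H), len(left))   # 4 2 2
assert left == right               # G is abelian, so H is normal in G
```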
Once they are able to find the cosets one by one, the teacher can raise the question of what to do if we want to obtain all of these cosets in just one GAP command line. This question is raised so that the students not only understand the concept of cosets theoretically, but can also translate the concepts they have acquired into a computer algorithm. This exercises their creativity in learning mathematics.
Next, the students are asked to observe all the cosets they obtained. The question that can be raised is: are all the cosets different from each other, or do some cosets coincide? There are several ways to answer these questions. One is to compare the cosets manually, i.e., to check whether each right coset is the same as a left coset: the students compare the cosets they obtained and then write down their conclusions. From this, the teacher can ask whether this is still an effective approach for cosets with many elements. The answer to that question leads to another way of checking whether the right cosets are equivalent to the left cosets, which again exercises the students' creativity, because they are required to translate the formal definition they have learned into a computer algorithm. Figure 4 shows another way of checking whether every right coset is a left coset.
Once the students understand how to find the right and left cosets, as well as how to check that they are equivalent, the teacher can introduce normal subgroups. A normal subgroup is a subgroup whose right and left cosets coincide. The students can thus understand the concept of normal subgroups more easily, because they have indirectly applied the definition in GAP.
From the exercises above, the students already know what is meant by a coset, left and right cosets, and a normal subgroup. Thus they have enough knowledge to approach the quotient group material.
To introduce the concept of the quotient group G/H, which is the set of cosets Hg or gH, the teacher shows the necessary and sufficient condition on the subgroup H for the operation on G/H to be well defined. The condition is that H is a normal subgroup of G (its right cosets are equivalent to its left cosets).
After introducing the concept of the quotient group, students are shown some examples of quotient groups. However, sometimes students still cannot understand very well how to find a quotient group. Therefore, after doing such examples manually, the students are assigned to work on the GAP worksheet. By using GAP, students can see and understand the concept of the quotient group more easily. Figure 5 shows how to obtain a quotient group and its elements using GAP.
For example, define the symmetric group S4. Then, using GAP commands, find all of its elements. After that, define N, the subgroup of S4 generated by (1 2)(3 4) and (1 3)(2 4), and find all elements of N. Check whether N is a normal subgroup of S4. If it is, print the quotient group S4/N; otherwise, find another subgroup of S4 that is normal in S4.
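The same S4 exercise can also be sketched outside GAP. The pure-Python code below (illustrative only; in class the equivalent GAP commands would be used) builds S4, forms the subgroup N generated by (1 2)(3 4) and (1 3)(2 4), verifies that N is normal, and counts the cosets that make up the quotient S4/N.

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations are tuples acting on {0, 1, 2, 3}.
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

S4 = set(permutations(range(4)))    # all 24 permutations of {0, 1, 2, 3}
a = (1, 0, 3, 2)                    # (1 2)(3 4) in 0-based form
b = (2, 3, 0, 1)                    # (1 3)(2 4) in 0-based form
e = (0, 1, 2, 3)
N = {e, a, b, compose(a, b)}        # the Klein four-group

# N is normal: g N g^{-1} = N for every g in S4.
assert all({compose(compose(g, n), inverse(g)) for n in N} == N for g in S4)

# The quotient S4/N is the set of (left) cosets gN.
quotient = {frozenset(compose(g, n) for n in N) for g in S4}
print(len(quotient))    # 6 cosets, so |S4/N| = 24 / 4 = 6
assert len(quotient) == 6
```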
Based on the explanation above, it can be concluded that by using GAP the teacher can introduce the concept of the quotient group to the students in an enjoyable way. Using GAP, the students are also trained to improve their creativity, particularly in mathematics, because they practice expressing the algebraic concepts they have learned as programming algorithms. They can also grasp a new concept more easily, because they can directly see its concrete form. The use of GAP in learning abstract algebra is expected to motivate students in a way that is not monotonous, which in turn can increase their understanding of the course.
4. Conclusion
Abstract algebra is one of the subjects most often perceived as difficult by students. Dubinsky et al. found that the great difficulties encountered by students begin with the concepts leading up to the quotient group. Therefore we need an innovative method for learning the quotient group concept. One option is to use GAP as an instrument in learning quotient groups. The use of GAP in learning the concept of the quotient group is expected to reduce the abstractness of the concept, thus helping students to understand it.
Acknowledgements
We would like to thank all the people who prepared and revised previous versions of this document.
References
Asiala, M., Dubinsky, E., Mathews, D. M., Morics, S. and Oktaç, A. (1997). Development of Students’
Understanding of Cosets, Normality, and Quotient groups. Journal of Mathematical Behavior, 16(3), 241–
309.
Dubinsky, Ed & Leron, Uri. (1994). Learning Abstract Algebra with ISETL. New York: Springer-Verlag.
Gallian, J.A. (1976). Computers in Group Theory. Mathematics Magazines, 49, 69-73.
Gallian, J. A. (2010), Abstract Algebra with GAP for Contemporary Abstract Algebra 7th edition. Brooks/Cole,
Cengage Learning. Boston.
Geissinger, Ladnor. (1989). Exploring Small Groups (Ver1.2B). San Diego: Harcourt Brace Jovanovich.
Hazzan, O. (1999). Reducing Abstraction Level When Learning Abstract Algebra Concepts. Education Studies
in Mathematics, 40 (1), 71-90.
Hulpke, A. (2011). Abstract Algebra in GAP. The Creative Commons Attribution-Noncommercial-Share Alike
3.0. United States, California.
Makiw, George. (1996). Computing in abstract algebra. The College Mathematics Journal, 27, 136-142.
O’Bryan, John & Sherman, Gary. (1992). Undergraduates, CAYLEY, and mathematics. PRIMUS, 2, 289-308.
Rainbolt, J.G. (2002). Teaching Abstract Algebra with GAP. Saint Louis.
The GAP Group. (2013). GAP – Groups, Algorithms, and Programming, Version 4.6.5. http://www.gap-
system.org.
Abstract: Inflation is an economic ill that cannot be ignored, because it can bring economic instability, slow economic growth, and increase unemployment. Inflation is usually a target of government policy. A failure or shock will cause price fluctuations in the domestic market and end in inflation in the economy (Baasir, 2003:265). Several factors cause inflation, such as the money supply, the fuel price, the exchange rate, and the BI rate. Because inflation fluctuates, controlling it in order to maintain economic stability is hard. Identifying the most influential factor behind inflation is one method of controlling it. The theory of inflation and Keynes's theory are used to analyze inflation factors. The error correction model method is used to find the most influential factor of inflation, which turns out to be the fuel price.
1. Introduction
Inflation is a general and continuous increase in prices, associated with the market mechanism, that can be caused by various factors. The process of a continuous decrease in the value of the currency is also called inflation. Inflation is an economic disease that cannot be ignored, because it can cause economic instability, slow economic growth, and ever-rising unemployment.
Inflation is frequently a target of government policy. A failure or shock in the country will lead to price fluctuations in the domestic market and end up with inflation in the economy (Baasir, 2003:265).
The fluctuation of Indonesia's inflation rate, together with the variety of factors that affect it, makes inflation increasingly difficult to control, so to control it the government must know the factors that form inflation. Inflation in Indonesia is not only a short-term phenomenon, as in the quantity theory and Keynes's theory of inflation, but also a long-term phenomenon (Baasir, 2003:267).
Because inflation fluctuates, controlling it in order to maintain stability in the economy is very difficult. An attempt to keep it stable is therefore important. One way to control it is to look at the factors that most influence inflation. This requires further analysis of the causes of inflation, studying which factors affect inflation in Indonesia and the influence of these factors in the long term.
This paper examines the relationship between the factors that cause inflation (the money supply, national income, the exchange rate, and the interest rate) and inflation in the long term, so as to determine which factor is most influential.
2. Methodology
This paper uses monthly time series data for the years 2005 to 2012. The data used are secondary data obtained from institutions or agencies, among others Bank Indonesia (BI) and the Central Statistics Agency (BPS). The data used are:
1. Data on inflation in Indonesia for the years 2005-2012.
2. Data on the money supply (M2) in Indonesia for the years 2005-2012.
ECM is an econometric method used in the analysis of time series data. ECM involves an econometric measurement concept called cointegration. The ECM method is used to look at short-term dynamic movements, so that the short-term equilibrium can be seen, while cointegration is used to look at the long-term equilibrium. Before discussing the ECM, we first discuss the concept of stationarity.
To test stationarity of the data, unit root tests are performed to see whether the time series data used are stationary or not. The stationarity test in this study uses the Augmented Dickey Fuller (ADF) test, which compares the ADF test statistic with the MacKinnon critical values at 1%, 5%, and 10%, using the following equation (Gujarati, 2003:817):
ΔZt = β0 + β1 T + δ Zt−1 + Σ(i=1..m) αi ΔZt−i + εt (1)
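The study runs this test in EViews. Purely to illustrate what equation (1) computes, here is a minimal sketch (Python with NumPy; our own simplified implementation with one lagged difference and no trend term, so it does not reproduce EViews output or MacKinnon p-values).

```python
import numpy as np

def adf_stat(z, lags=1):
    """t-statistic of delta in: dZ_t = b0 + delta*Z_{t-1} + sum_i a_i*dZ_{t-i} + e_t."""
    dz = np.diff(z)
    y = dz[lags:]                                 # dependent variable dZ_t
    cols = [np.ones_like(y), z[lags:-1]]          # constant and the level Z_{t-1}
    cols += [dz[lags - i:-i] for i in range(1, lags + 1)]   # lagged differences
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])    # residual variance
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])   # std. error of delta
    return beta[1] / se

rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=300))   # a random walk: contains a unit root
# For a unit-root series the statistic typically stays above the ~ -2.86
# 5% critical value, so the unit root is not rejected.
print(adf_stat(walk))
```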
The cointegration test was popularized by Engle and Granger (1987) (Gujarati, 2009). The cointegration approach is closely related to testing for the long-term equilibrium relationships between economic variables required by economic theory. The cointegration approach can also be seen as a test of theory and is an important part of the formulation and estimation of a dynamic model (Engle and Granger, 1987).
In the concept of cointegration, two or more nonstationary time series variables are cointegrated if a linear combination of them is stationary over time, even though each variable individually is not stationary. When the time series variables are cointegrated, there is a stable long-run relationship between them. If the two nonstationary series Xt and Zt are cointegrated, then there is a special representation as follows:
Zt = β0 + β1 Xt + εt (7)
The hypotheses used to test cointegration according to equation (7) are as follows:
H0 : δ = 0, meaning that the time series data contain a unit root
H1 : δ ≠ 0, meaning that the time series data do not contain a unit root
H0 is rejected when δ ≠ 0, in which case the time series data do not contain a unit root, and H0 is accepted when δ = 0, in which case the time series data contain a unit root.
In the long term, a cointegrated time series model represents an equilibrium (stable) relationship, but in the short term the model may be out of balance because of disturbances caused by the error term (εt). Adjustment for short-term deviations is done by inserting an error correction term derived from the residuals of the long-term equation. Correcting short-term imbalances toward the long-term equilibrium is called the Error Correction Mechanism.
The ECM model of the relationship between the independent variable (X) and the dependent variable (Y) has the form:
ΔYt = a0 + a1 ΔXt + a2 εt−1 + et (10)
where εt−1 is the cointegration error at lag 1. When εt−1 is not zero, the model is out of equilibrium. If εt−1 is positive, then a2 εt−1 is negative, which pushes ΔYt down so that Yt falls back toward equilibrium and the error is corrected; if εt−1 is negative, then a2 εt−1 is positive, which pushes ΔYt up so that Yt rises in period t to correct the equilibrium error. The absolute value of a2 describes how quickly equilibrium can be reached again.
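As a purely illustrative sketch of this two-step procedure (Python with NumPy; synthetic data and our own variable names, not the study's EViews estimation): step one estimates the cointegrating regression (7) and saves its residuals; step two estimates the ECM (10) with the lagged residual as the error correction term.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = np.cumsum(rng.normal(size=n))            # nonstationary regressor (random walk)
y = 2.0 * x + rng.normal(scale=0.5, size=n)  # cointegrated with x: y - 2x is stationary

def ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Step 1: cointegrating (long-run) regression y_t = b0 + b1*x_t + eps_t.
X1 = np.column_stack([np.ones(n), x])
b0, b1 = ols(X1, y)
eps = y - (b0 + b1 * x)                      # long-run equilibrium errors

# Step 2: ECM  dY_t = a0 + a1*dX_t + a2*eps_{t-1} + e_t.
dy, dx = np.diff(y), np.diff(x)
X2 = np.column_stack([np.ones(n - 1), dx, eps[:-1]])
a0, a1, a2 = ols(X2, dy)
print(b1, a1)          # both close to the true slope of 2 on this synthetic data
assert a2 < 0          # negative a2: deviations are corrected back to equilibrium
```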
One important step in estimating a model is to test whether the estimated model deserves to be used or not. This feasibility testing, or diagnostic testing, tests for serial correlation between the residuals at some lags. In this study, the diagnostic test used is the Portmanteau test, with the following test statistic (Anastia, 2012):
Q = n(n+2) Σ(k=1..m) r̂k² / (n−k) (11)
3. Results and Discussion
Stationarity of the time series data for the variables inflation, the money supply, the fuel price, the U.S. dollar exchange rate, and the interest rate is tested using graphs and the Augmented Dickey Fuller (ADF) test in equation (2), with the help of the software EViews 6. The hypotheses used are:
H0 : ADFtest > MacKinnon critical value (there is a unit root at level)
H1 : ADFtest < MacKinnon critical value (there is no unit root at level)
Based on Table 1, ADFtest > MacKinnon critical value only for the money supply variable, so the money supply variable is not stationary at level at the 1%, 5%, and 10% significance levels. Since ADFtest < MacKinnon critical value for the inflation rate, fuel price, U.S. dollar exchange rate, and interest rate variables, those variables are already stationary at level at the 1%, 5%, and 10% significance levels.
Because the money supply variable is not stationary at level, the data are differenced to make them stationary and re-tested using the Augmented Dickey Fuller (ADF) test, with the help of the software EViews 6, at the 1st difference. The hypotheses used are:
H0 : ADFtest > MacKinnon critical value (there is a unit root at 1st difference)
H1 : ADFtest < MacKinnon critical value (there is no unit root at 1st difference)
Based on Table 2, ADFtest < MacKinnon critical value for the money supply variable, so the money supply variable is stationary at 1st difference.
When variables are not stationary at level but stationary at 1st difference, cointegration is likely to occur, which means there is a long-term relationship between the variables. To find out whether the variables are truly cointegrated, they are tested with the Augmented Engle-Granger test using equation (7), with the help of the software EViews 6. This yields the long-term cointegration model.
Table 3 shows that ADFtest < MacKinnon critical value at the 1%, 5%, and 10% significance levels, so the null hypothesis is rejected. It can be concluded that the inflation, money supply, fuel price, U.S. dollar exchange rate, and interest rate variables are cointegrated.
The error correction model proposed by Engle-Granger requires two stages, and is therefore called EG two-step. The first stage calculates the residuals from the cointegrating regression results in Table 3. The second stage is a regression analysis that includes the residuals from the first stage. The result of the first stage is the residual series from the cointegration.
Table 4 shows that some of the variables are not significant. The non-significant variables are therefore removed and the model is re-estimated, starting from the least significant variable (the one with the largest prob value greater than the significance level α = 5%): first the constant, then the variables d(b3), d(b1), and d(b4). The re-estimation results after eliminating the constant, obtained with the help of EViews 6, are given in Table 5.
In Table 5 the variables d(b1), d(b3), and d(b4) are not significant, so the model is re-estimated by eliminating d(b3), the least significant variable (the largest prob value). The result of re-estimation after eliminating d(b3), obtained with the help of EViews 6, is given in Table 6.
In Table 6 the variables d(b1) and d(b4) are not significant, so the model is re-estimated by eliminating d(b4), the least significant variable (the largest prob value). The result of re-estimation after eliminating d(b4), obtained with the help of EViews 6, is given in Table 7.
In Table 7 the variable d(b1) is not significant, so the model was re-estimated after eliminating d(b1). The result of the re-estimation after eliminating d(b1), obtained with EViews 6, is given in Table 8.
H0 : the residuals are homoscedastic
H1 : the residuals are heteroscedastic
If Q > χ²(p) then reject H0; otherwise, if Q ≤ χ²(p), accept H0.
The results of the diagnostic test with the help of software EViews 6 are given in Table 9.
Test Equation:
Dependent Variable: RESID^2
Method: Least Squares
Date: 09/22/13 Time: 14:18
Sample: 2005M08 2011M12
Included observations: 77
Since Obs*R-squared = 0.112236 < 9.48773 (the critical chi-square (χ²) value at α = 5%), it can be concluded that the estimated residuals are homoscedastic. Another way is to look at the chi-square probability value: in the output above, the probability value of 0.9903 means that heteroscedasticity does not occur at the 1%, 5%, or 10% significance level. The larger the probability value, the stronger the evidence that heteroscedasticity is absent.
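The decision rule above can be stated directly in code. The numbers are the Obs*R-squared and the 5% chi-square critical value quoted in the text (9.48773 corresponds to 4 degrees of freedom; the degrees of freedom are an assumption inferred from that value).

```python
# Heteroscedasticity decision rule: compare Obs*R^2 from the auxiliary
# regression of the squared residuals with the chi-square critical value.

obs_r_squared = 0.112236     # from the EViews output quoted above
chi2_critical = 9.48773      # chi-square critical value at alpha = 5%
prob_value = 0.9903          # probability reported by EViews

homoscedastic = obs_r_squared < chi2_critical
print("homoscedastic:", homoscedastic, "| prob =", prob_value)
```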
The model above explains that if inflation d(Y) increases by 1%, the crude fuel price variable d(X2) will increase by 0.799148%. The error-correction term e(t-1) shows a residual error correction at lag 1 of 0.770966, which means that short-term equilibrium will be reached.
4. Conclusion
The conclusions that can be drawn from this study are as follows. For the variable d(X2): if inflation increases by 1%, the raw fuel price will increase by 0.799148% in both the short term and the long term. For the variables d(X1), d(X3), and d(X4): if inflation increases by 1%, the money supply, U.S. dollar exchange rate, and interest rate have an effect in the long term but no effect in the short term. The error-correction term e(t-1) shows a residual error correction at lag 1 of 0.77097. Because the variables d(b1), d(b3), and d(b4) were removed, the money supply, U.S. dollar exchange rate, and interest rate are not significant enough to be included in the model. Only the raw fuel price is significant in the model, so the raw fuel price is the variable most influential on inflation.
5. References
Agustina, R. 2012. Analisis Hubungan Kausalitas dan Keseimbangan Jangka Panjang Pertumbuhan
Penduduk dan Pertumbuhan Ekonomi Jawa Barat Menggunakan Pendekatan Model Vector
Autoregressive (VAR). Skripsi Tidak Dipublikasikan. Bandung : Jurusan Matematika Fakultas
Matematika dan Ilmu Pengetahuan Alam Universitas Padjadjaran.
Anastia, J. N. 2012. Perbandingan Tiga Uji Statistik Dalam Verifikasi Model Runtun Waktu. Skripsi
Tidak Diterbitkan. Bandung : Jurusan Matematika Universitas Pendidikan Indonesia.
Cryer, J.D. 1986. Time Series Analysis. Boston: PWS-KENT Publishing Company.
Gujarati, D. 2003. Basic Econometrics. Second Edition. New York: McGraw-Hill.
Rosadi, D. 2012. Ekonometrika & Analisis Runtun Waktu Terapan dengan EViews. Yogyakarta: Andi.
Wei, W.W.S. 2006. Time Series Analysis: Univariate and Multivariate Methods. Second Edition. USA: Addison-Wesley Publishing Company.
Abstract: Network flow models generally describe problems under the assumption that a single commodity is sent through a network. Sometimes the network can carry different types of commodities. The multi-commodity problem aims to minimize the total cost when different types of goods are shipped through the same network. Commodities can be distinguished by their physical characteristics or only by certain attributes.
This paper focuses on network flow and integer programming models for two commodities.
Keywords: network, integer programming, commodities
1. Introduction
Network problems arise frequently in transportation, electricity, telephone, and communication systems. Networks can also be used in production, distribution, project planning, layout planning, resource management, and financial planning. One network optimization model is the minimum cost flow problem, which concerns flow through a network with limited arc capacities, such as the shortest path problem that accounts for the cost or distance along each arc.
a. Networks
A network is defined as a collection of points (nodes) and a collection of lines (arcs) joining these points. There is normally some flow along these lines, going from one point (node) to another.
[Figure: two nodes i and j joined by an arc carrying a flow of x_ij = 100 passengers]
If the flow through an arc is allowed only in one direction, then the arc is said to be a directed arc. Directed arcs are represented graphically by arrows pointing in the direction of the flow.
[Figure 1.3: a directed arc from node i to node j]
When the flow on an arc (between two nodes) can move in either direction, it is called an undirected arc. Undirected arcs are represented graphically by a single line (without arrows) connecting the two nodes.
d. Arc capacity
Arc capacity is the maximum amount of flow on an arc. Examples include restrictions on the number of flights between two cities.
e. Supply Nodes
Supply nodes are nodes where the amount of flow leaving them is greater than the amount of flow coming to them, i.e., nodes with positive net flow.
f. Demand Nodes
Demand nodes are nodes with negative net flow, i.e., inflow greater than outflow.
g. Transshipment Nodes
Transshipment nodes are nodes with the same amount of flow arriving and leaving or nodes with
zero net flow.
h. Path
A path is a sequence of distinct arcs that connect two nodes in this fashion.
[Figure 1.7: a network with 3 paths from A to G]
i. Cycle
A cycle is a sequence of directed arcs that begins and ends at the same node.
[Figure: a directed cycle through nodes 1, 2, 3, and 4]
j. Connected Network
A connected network is a network in which every two nodes are linked by at least one path.
[Figure 1.9: a connected network with five nodes]
2. Model Formulation
This section first explains the basic assumptions of network problems, then lists the input parameters and the decision variables of the minimum cost flow problem, and finally builds the mathematical formulation in stages.
The minimum cost flow problem on a network attempts to minimize the total cost of shipping the available supplies through the network to meet demand. It often arises in transportation, transshipment, and shortest path problems. The problem assumes that the cost per unit of flow and the capacity associated with each arc are known. In general, the minimum cost flow problem can be described as follows:
With the aim of minimizing the total cost of delivery through the network while meeting demand, the mathematical model is as follows:

Minimize Z = Σ_{i=1}^{n} Σ_{j=1}^{n} c_ij x_ij    (2-1)

s.t.  Σ_{j=1}^{n} x_ij - Σ_{j=1}^{n} x_ji = b_i  for each node i
The first summation in the node constraint gives the total flow out of node i, while the second gives the total flow into node i, so their difference is the net flow generated at that node. In practice b_i and u_ij are integers, and consequently all the basic variables in every basic feasible solution, including an optimal solution, must be integers. Hence the optimal solution to this problem is typically found using integer programming.
Example:
Consider the network presented in Figure 2.1 (adapted from Anderson et al., 2003). An airline is tasked with transporting goods from nodes 1 and 2 to nodes 5, 6, and 7. The airline does not have direct flights from the source nodes to the destination nodes; instead, they are connected through its hubs at nodes 3 and 4. The numbers next to the nodes represent the supply/demand in tons. The numbers on the arcs represent the unit cost of transporting the goods; the goal is to route the goods from sources to destinations so that the total cost is minimized. Aircraft flying to and from node 4 can carry a maximum of 50 tons of cargo.
[Figure 2.1: network presentation for the minimum cost flow problem, with supplies of 75 tons at nodes 1 and 2, demands of 50, 60, and 40 tons at nodes 5, 6, and 7, and unit costs shown on the arcs]
We need to write one constraint for each node. For example, for node 1 we have :
x1,3 + x1,4 ≤ 75
for node 2 we have :
x2,3 + x2,4 ≤ 75
Similarly, we write constraints for the other nodes. Note that the net flow for nodes 3 and 4 should be
zero as these are transshipment nodes.
All flights to and from node 4 can carry a maximum of 50 tons. Therefore, every flow to and from this node must be limited to 50 as follows:
x1,4 ≤ 50
x2,4 ≤ 50
x4,5 ≤ 50
x4,6 ≤ 50
x4,7 ≤ 50
The non-negativity constraint is xij ≥ 0, with xij integer.
Solving this problem using the software POM-QM for Windows gives a total minimum cost of $1,250.
The solution for this problem is presented in Figure 2.2.
[Figure 2.2: solution to the minimum cost flow problem, with optimal arc flows shown on the network; total cost $1,250]
The general model is mathematically expressed as follows (Bazaraa et al 1990) :
Sets
M = set of nodes
Index
i,j,k = index for nodes
Parameters
ci,j = unit cost of flow from node i to node j
bi = amount of supply/demand for node i
Li,j = lower bound on flow through arc (i,j)
Ui,j = upper bound on flow through arc (i,j)
Decision variable
xi,j = amount of flow from node i to node j
Objective function
Minimize Z = Σ_{i∈M} Σ_{j∈M} c_i,j x_i,j    (2.2)
Subject to
Σ_{j∈M} x_i,j - Σ_{k∈M} x_k,i = b_i ,  i = 1, 2, 3, ..., |M|    (2.3)
L_i,j ≤ x_i,j ≤ U_i,j  for each arc (i, j)
The objective function (2.2) minimizes the total cost over the network. The constraints (2.3) satisfy the requirements of each node by balancing the amount of inflow and outflow at that node, and the bounds impose the lower and upper flow restrictions along the arcs.
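As a concrete sketch of solving the model (2.2)-(2.3), the following Python implementation uses the successive-shortest-path method. The tiny three-node instance at the bottom is made up for illustration; it is not the paper's Figure 2.1 network, which is solved with POM-QM.

```python
# Successive-shortest-path solver for the minimum cost flow model
# (2.2)-(2.3). Illustrative sketch; the example network is made up.

def min_cost_flow(n, arcs, supply):
    """n nodes 0..n-1; arcs = [(u, v, capacity, unit_cost)];
    supply[i] > 0 at supply nodes, < 0 at demand nodes. Returns total cost."""
    graph = [[] for _ in range(n + 2)]  # residual graph; edge = [to, cap, cost, rev]

    def add_edge(u, v, cap, cost):
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])

    for u, v, cap, cost in arcs:
        add_edge(u, v, cap, cost)
    s, t = n, n + 1                     # super-source and super-sink
    need = 0
    for i, b in enumerate(supply):
        if b > 0:
            add_edge(s, i, b, 0)
            need += b
        elif b < 0:
            add_edge(i, t, -b, 0)

    INF = float("inf")
    total_cost, flow = 0, 0
    while flow < need:
        # Bellman-Ford shortest path s -> t in the residual graph
        dist = [INF] * (n + 2)
        prev = [None] * (n + 2)
        dist[s] = 0
        changed = True
        while changed:
            changed = False
            for u in range(n + 2):
                if dist[u] == INF:
                    continue
                for e in graph[u]:
                    if e[1] > 0 and dist[u] + e[2] < dist[e[0]]:
                        dist[e[0]] = dist[u] + e[2]
                        prev[e[0]] = (u, e)
                        changed = True
        if dist[t] == INF:
            raise ValueError("infeasible: demand cannot be met")
        # push as much flow as the bottleneck on the path allows
        push, v = INF, t
        while v != s:
            u, e = prev[v]
            push = min(push, e[1])
            v = u
        v = t
        while v != s:
            u, e = prev[v]
            e[1] -= push
            graph[e[0]][e[3]][1] += push
            v = u
        flow += push
        total_cost += push * dist[t]
    return total_cost

# Made-up example: node 0 supplies 10 units, node 2 demands 10 units,
# node 1 is a transshipment node; the cheap route 0-1-2 has capacity 6,
# so 4 units must take the expensive direct arc.
cost = min_cost_flow(3, [(0, 1, 6, 1), (1, 2, 10, 1), (0, 2, 5, 5)], [10, 0, -10])
print(cost)   # 6 units at cost 2 each plus 4 units at cost 5 each = 32
```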
In general, network models describe problems under the assumption that a single type of commodity or entity is sent through the network. Sometimes the network can carry different types of commodities. The minimum cost flow problem for two commodities attempts to minimize the total cost when different types of goods are shipped through the same network. The two commodities can be distinguished by their physical characteristics or only by certain attributes. Two-commodity problems are widely used in the transportation industry; in the airline industry, two-commodity models are adopted to formulate crew pairing and fleet assignment models.
Example
We modify the example that was presented for the minimum cost flow problem discussed earlier to
explain the two commodities model formulation. Figure 3.1 presents the modified example.
[Figure 3.1: network presentation for the two-commodity problem; node 1 supplies 40 and 35 tons of cargo 1 and cargo 2, node 2 supplies 50 and 25 tons, and the demands at nodes 5, 6, and 7 are 30/20, 30/30, and 30/10 tons respectively]
As we see in this figure, the scenario is very similar to the earlier case. The only difference is that instead of having only one type of cargo, we now have two types (two commodities). The numbers next to each node represent the supply/demand for each cargo at that node. As an example, node 1 supplies 40 and 35 tons of cargo 1 and cargo 2, respectively. The transportation costs per ton are also similar. We want to determine how much of each cargo should be routed on each arc so that the total transportation cost is minimized.
Subject to :
x1,3,1 + x1,4,1 ≤ 40
x1,3,2 + x1,4,2 ≤ 35
x2,3,1 + x2,4,1 ≤ 50
x2,3,2 + x2,4,2 ≤ 25
x3,5,1 + x4,5,1 ≤ 30
x3,5,2 + x4,5,2 ≤ 20
x3,6,1 + x4,6,1 ≤ 30
x3,6,2 + x4,6,2 ≤ 30
x3,7,1 + x4,7,1 ≤ 30
x3,7,2 + x4,7,2 ≤ 10
Recall that all flights to and from node 4 can carry a maximum of 50 tons. Therefore:
x1,4,1 + x1,4,2 ≤ 50
x2,4,1 + x2,4,2 ≤ 50
x4,5,1 + x4,5,2 ≤ 50
x4,6,1 + x4,6,2 ≤ 50
x4,7,1 + x4,7,2 ≤ 50
Solving this problem using the software POM-QM (Production and Operations Management Quantitative Methods) version 3.0 gives a total minimum cost of $1,150. The solution for this problem is presented in Figure 3.2.
[Figure 3.2: solution of the minimum cost flow problem for two commodities, with optimal per-commodity arc flows shown on the network; total cost $1,150]
The general model is mathematically expressed as follows (Ahuja et al. 1993) :
Sets
M = set of nodes
K = set of commodities
Indices
i,j = index for nodes
k = index for commodities
Parameters
ci,j,k = unit cost of flow from node i to j for
commodity k
bi,k = amount of supply / demand at node i
for commodity k
ui,j = flow capacity on arc (i,j)
Decision variable
xi,j,k = amount of flow from node i to node j
for commodity k
Objective function
Minimize Z = Σ_{k∈K} Σ_{i∈M} Σ_{j∈M} c_i,j,k x_i,j,k    (3.1)
Subject to:
Σ_{t∈M} x_i,t,k - Σ_{t∈M} x_t,i,k = b_i,k    (3.2)
Σ_{k∈K} x_i,j,k ≤ u_i,j    (3.3)
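A tiny brute-force check shows how the bundle constraint (3.3) couples the two commodities on a shared arc. The network, demands, and costs below are made up for illustration; they are not the Figure 3.1 data.

```python
# Brute-force illustration of the two-commodity bundle constraint (3.3):
# both commodities share the capacity of arc (0, 1). All values made up.

CAP_01 = 6                    # shared capacity u_{0,1} on arc (0, 1)
DEMAND = 5                    # each commodity ships 5 units from node 0 to node 2
cost_via = {1: 1, 2: 3}       # per-unit cost of routing via node 1
cost_direct = {1: 2, 2: 2}    # per-unit cost of the direct arc (0, 2)

best = None
for x1 in range(DEMAND + 1):          # commodity 1 units routed via node 1
    for x2 in range(DEMAND + 1):      # commodity 2 units routed via node 1
        if x1 + x2 > CAP_01:          # bundle constraint on arc (0, 1)
            continue
        cost = (x1 * cost_via[1] + (DEMAND - x1) * cost_direct[1]
                + x2 * cost_via[2] + (DEMAND - x2) * cost_direct[2])
        if best is None or cost < best[0]:
            best = (cost, x1, x2)

print(best)   # minimum total cost and the per-commodity routing split
```

Commodity 1 takes the cheap shared arc while commodity 2 stays on the direct arc; without the bundle constraint each commodity could be routed independently.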
References
Ahuja, R., Magnanti, T., and Orlin, J. 1993. Network Flows: Theory, Algorithms, and Applications. Prentice Hall.
Anderson,D.,Sweeney D.,and Williams,T. 2003. Quantitative Methods for Business. 9th Edition. South-Western
Bazaraa, M., Jarvis, J., and Sherali, H. 1990. Linear Programming and Network Flows. John Wiley
Bazargan, M. 2010. Airline Operations and Scheduling. 2nd Edition. MPG Books Group
Hillier, F. and Lieberman, G. 2001. Introduction to Operations Research. 7th Edition. McGraw-Hill.
1. Introduction
Just as the continuity of a function matters in the convergence of a sequence of functions [4], for a sequence of numbers we also need to study the conditions that must be satisfied for the sequence to converge. To support research related to the convergence of sequences, projections, continuity of functions, existence of convergence points, and fractional derivatives, this paper discusses the convergence of a number sequence through a geometric approach.
As is well known, the Fibonacci sequence is a sequence (xn) with the recursion xn = xn-1 + xn-2 and initial conditions x0 = 1 and x1 = 1. One feature of the Fibonacci sequence is that if the greatest common divisor of the numbers m and n is k, then the greatest common divisor of the m-th term xm and the n-th term xn is the k-th term xk. Similarly, xk is always a divisor of xnk for every natural number n [5]. Another feature is that any four consecutive Fibonacci numbers w, x, y, z always form a Pythagorean triple, namely wz, 2xy, and (yz - xw) [6]. Besides these three features, it is also known that although the Fibonacci sequence itself does not converge, the sequence of Fibonacci ratios converges to a number called the Golden Ratio [5].
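The convergence of the Fibonacci ratio sequence to the Golden Ratio can be checked numerically with a short sketch:

```python
# Numerical check: ratios of consecutive Fibonacci numbers converge to
# the Golden Ratio (1 + sqrt(5)) / 2.
import math

def fib_ratios(n):
    a, b = 1, 1                 # x0 = x1 = 1 as in the text
    ratios = []
    for _ in range(n):
        a, b = b, a + b
        ratios.append(b / a)    # ratio x_k / x_{k-1}
    return ratios

golden = (1 + math.sqrt(5)) / 2
r = fib_ratios(30)
print(r[-1], golden)
```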
Furthermore, the generalized Fibonacci sequence is (yn) with rule
yn = α yn-1 + β yn-2    (1)
with non-zero real constants α and β and initial conditions y0 and y1 [2].
In [8], J. M. Tuwankotta has shown that the sequence (yn) with β = 1 - α and 0 < α < 2 is a contractive sequence and therefore converges in R (the set of real numbers) [2], with convergence point
L(α) = y0 + (y1 - y0) / (2 - α).
In this paper, the author discusses (rn), the sequence of ratios of two successive terms of the generalized Fibonacci sequence (1), in the form
rn = yn / yn-1 .    (2)
The first problem studied in this paper is the set of conditions on the constants α and β under which the sequence (rn) is well-defined, and the relationship between α and β under which (rn) converges. In addition to the proof of convergence, the convergence point of the sequence will also be determined.
Suppose a ratio sequence as in (3) is given. The sequence (rn) is well-defined if rn ≠ 0 for all n. Thus we must choose the initial condition r1 such that r2 ≠ 0, r3 ≠ 0, r4 ≠ 0, and so on.
From (3), rn can also be expressed as the finite continued fraction
rn = α + β/(α + β/(α + ... + β/(α + β/r1)))    (n - 1 divisions).
Hence
1. r1 = -β/α results in r2 = 0;
2. r1 = -β/(α + β/α) results in r3 = 0;
3. r1 = -β/(α + β/(α + β/α)) results in r4 = 0;
and so on.
Thus (rn) is well-defined if the initial condition r1 ∉ CF(α, β), where CF(α, β) is the set of continued fractions
{ -β/α , -β/(α + β/α) , -β/(α + β/(α + β/α)) , ... }.
In particular, if the sequence CF(α, β) = (fn) converges to f, then
lim_{n→∞} fn = f = -β/(α - lim_{n→∞} fn) = -β/(α - f),
hence
f = (α ± √(α² + 4β)) / 2 .
In the case α > 0 and β > 0, fn < 0 for all n, so the value that satisfies this is
f = (α - √(α² + 4β)) / 2 < 0.
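The convergence of the "forbidden" initial values fn can be checked numerically. This sketch iterates the recursion f1 = -β/α, f(n+1) = -β/(α - fn), which generates exactly the elements of CF(α, β).

```python
# The "forbidden" initial values f_n of CF(alpha, beta) satisfy
# f_1 = -beta/alpha and f_{n+1} = -beta / (alpha - f_n); for alpha > 0 and
# beta > 0 they converge to f = (alpha - sqrt(alpha^2 + 4*beta)) / 2.
import math

def forbidden_limit(alpha, beta, n=200):
    f = -beta / alpha
    for _ in range(n - 1):
        f = -beta / (alpha - f)
    return f

f_star = (1.0 - math.sqrt(5.0)) / 2    # the limit for alpha = beta = 1
print(forbidden_limit(1.0, 1.0), f_star)
```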
3. Necessary Condition for Convergence
The necessary condition for convergence of the sequence (rn) is determined by the relationship between α and β. If the sequence (rn) is assumed to converge to a number r, then from equation (3) we obtain:
lim_{n→∞} rn = lim_{n→∞} (α + β/rn-1),
resulting in the equation
r = α + β/r ,  or  r² - αr - β = 0 .    (4)
In this case, r has a real value if α² + 4β ≥ 0.
Hence, a necessary condition for the convergence of (rn) is α² + 4β ≥ 0.
4. Proof of Convergence
To prove that (rn) is a convergent sequence, it will be shown that (rn) is a contractive sequence, i.e., there is a constant C with 0 < C < 1 such that for every n:
|rn - rn-1| < C |rn-1 - rn-2| .
Using equation (3), we obtain
|rn - rn-1| = |(α + β/rn-1) - (α + β/rn-2)|
= |β/rn-1 - β/rn-2|
= (|β| / (|rn-1| |rn-2|)) |rn-1 - rn-2| ,    (5)
where
|rn-1| |rn-2| = |yn-1/yn-2| · |yn-2/yn-3| = |yn-1/yn-3| = |(α yn-2 + β yn-3)/yn-3| = |α rn-2 + β| .
Thus, whether or not the sequence (rn) is contractive depends on α and β. The author divides this into the following cases.
Case-1: α > 0, β > 0, and rn > 0 for all n.
Case-2: α > 0, β > 0, and rn < 0 for all n.
Case-3: α > 0, β < 0, and rn > 0 for all n.
Case-4: α > 0, β < 0, and rn < 0 for all n.
Case-5: α < 0, β > 0, and rn > 0 for all n.
Case-6: α < 0, β > 0, and rn < 0 for all n.
Case-7: α < 0, β < 0, and rn > 0 for all n.
Case-8: α < 0, β < 0, and rn < 0 for all n.
For Case-1 and Case-6 above, we obtain α rn-2 + β > β, so if we let
This proves that (rn) is contractive. By the theorem that a contractive sequence is Cauchy, and a Cauchy sequence in R is convergent [2], it follows that (rn) is convergent.
For Case-4, α rn-2 + β > β would require rn-2 < 0 for all n, but this is impossible, because if ri < 0 for some i then ri+1 = α + β/ri > 0, so (rn) is not contractive.
Similarly for Case-7, the inequality α rn-2 + β > β would require rn-2 > 0 for all n, but this is impossible because if ri > 0 for some i then ri+1 = α + β/ri < 0, so (rn) is not contractive.
For Cases 2, 3, 5, and 8, α rn-2 + β < β holds, so that |β| / |α rn-2 + β| = C > 1. Thus (rn) is not contractive, and convergence is not guaranteed.
5. Convergence point
Next we determine the convergence point of the sequence (rn). If (rn) converges to r, then from equation (4) we obtain
r = (α ± √(α² + 4β)) / 2 ,
so that we have two possible values of r, namely:
r* = (α + √(α² + 4β)) / 2  or  r** = (α - √(α² + 4β)) / 2 .    (7)
In Case-1 above, where α > 0 and β > 0, we have r* > 0 and r** < 0. But since rn > 0 for all n, lim_{n→∞} rn ≥ 0 [1], which means that (rn) converges to r*.
Similarly in Case-6, where α < 0 and β > 0, we have r* > 0 and r** < 0. But because rn < 0 for all n, lim_{n→∞} rn ≤ 0 [1], which means that (rn) converges to r**.
For the other cases, the convergence of (rn) still needs further investigation, including through the geometric approach.
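The fixed-point iteration rn = α + β/rn-1 and its limit r* = (α + √(α² + 4β))/2 in Case-1 (α > 0, β > 0) can be checked numerically:

```python
# Fixed-point iteration r_n = alpha + beta / r_{n-1} (equation (3)),
# converging to r* = (alpha + sqrt(alpha^2 + 4*beta)) / 2 in Case-1.
import math

def ratio_limit(alpha, beta, r1, n=200):
    r = r1
    for _ in range(n):
        r = alpha + beta / r
    return r

alpha, beta = 2.0, 3.0
r_star = (alpha + math.sqrt(alpha ** 2 + 4 * beta)) / 2   # equals 3.0 here
print(ratio_limit(alpha, beta, r1=1.0), r_star)
```

With α = β = 1 the same iteration reproduces the Golden Ratio limit of the classical Fibonacci ratio sequence.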
6. Geometric approach
As noted earlier, the convergence point of (rn), i.e., r* or r**, depends on the values of α and β, so in this geometric approach the eight cases above can be simplified into four cases, namely:
case-1: α > 0, β > 0;  case-2: α < 0, β > 0;
case-3: α > 0, β < 0;  case-4: α < 0, β < 0.
A geometric approach can be used to see the convergence of (rn) by comparing the recurrence relation in (3) with the hyperbolic function
y = α + β/x ,  or  y = (αx + β)/x .    (8)
If we set r1 = x, then substituting r1 into (3), or x into (8), yields r2 = y. Next, by projecting the value y = r2 onto the x-axis through the line y = x, we set r2 = x. Then substituting r2 into (3), or x into (8), yields r3 = y. The process continues so that r1, r2, r3, ... all lie on the x-axis and move toward the abscissa of the convergence point on the curve (8).
Because there are two points, r* and r**, one of which may be the convergence point, the graph will show the values rn moving toward one of these two points. These changes depend on α and β.
In case-1, α > 0 and β > 0, from (7) we have that r* is positive and r** is a negative number.
From (3), rn = α + β/rn-1 for n ≥ 2. So if r1 > 0 then rn > 0 for all n, and if r1 < 0 then there exists a natural number k such that rn > 0 for all n > k. Hence for case-1, (rn) converges to r* > 0.
In Figure-1, the hyperbola of equation (8) has a horizontal asymptote y = α > 0 and intersects the x-axis at x = -β/α < 0; the values r1, r2, r3, ... move toward r*, which means that (rn) converges to r*.
Figure-1 Figure-2
In case-2, α < 0 and β > 0, from (7) we obtain that r* is positive and r** is negative, so (rn) converges to r** < 0. The hyperbola has a horizontal asymptote y = α < 0 and intersects the x-axis at x = -β/α > 0. The graph is as in Figure-2 above.
In case-3, α > 0 and β < 0, we obtain r* > r**, both positive, so (rn) converges to r* > 0. The hyperbola has a horizontal asymptote y = α > 0 and intersects the x-axis at x = -β/α > 0. The graph is as in Figure-3 below.
Figure-3 Figure-4
In case-4, α < 0 and β < 0, we obtain r* > r**, both negative, so (rn) converges to r** < 0. The hyperbola has a horizontal asymptote y = α < 0 and intersects the x-axis at x = -β/α < 0. The graphs are as in Figure-4 above.
7. Conclusion
Based on the discussion above, a necessary condition for convergence of the generalized Fibonacci ratio sequence (rn) = (yn/yn-1) is α² + 4β ≥ 0, with convergence points
r* = (α + √(α² + 4β)) / 2 for α > 0, β > 0 or α > 0, β < 0, and
r** = (α - √(α² + 4β)) / 2 for α < 0, β > 0 or α < 0, β < 0.
One interesting thing is that the sequence of numbers in CF(α, β), the numbers that are "banned" as the initial condition r1, itself converges to a number f which is one of the convergence points of (rn). Moreover, for the case α < 0 and β > 0, the sequence (rn) and CF(α, β) have the same convergence point, i.e., r** = f.
From the discussion above it is also seen that the convergence point does not depend on the initial value r1, but only on α and β. Similarly, the graphs show that the convergence point is the point on the curve where the slope is gentler, meaning that
if f′(r*) < f′(r**) then (rn) converges to r*, and
if f′(r*) > f′(r**) then (rn) converges to r**.
8. Acknowledgement
This work was fully supported by Universitas Padjadjaran under the Penelitian Unggulan Perguruan Tinggi program, Hibah Desentralisasi No. 2002/UN6.RKT/KU/2013.
9. References
[1] Apostol, Introduction to Mathematical Analysis, Addison-Wesley, 1974
[2] Bartle, R.G. & Sherbert, Introduction to Real Analysis, second ed., John Wiley & Sons, Inc., 1992
[3] Dominic & Vella, Alfred, When is a Member of a Pythagorean Triple, phitagoras@fellas.com, 2002
[4] Endang Rusyaman, Kankan Parmikanti, Eddy Djauhari, and Ema Carnia, Syarat Kekontinuan Fungsi Konvergensi Pada Barisan Fungsi Turunan Berorde Fraksional, Seminar Nasional Sains dan Teknologi Nuklir, Bandung, 2013
[5] Kalman & Mena, Robert, The Fibonacci Numbers Exposed, Mathematics Magazine, 2003 (3:167-181)
[6] Parmikanti, K., Pendekatan Geometri Untuk Masalah Konvergensi Barisan, Seminar Nasional Matematika, Unpad, 2006
[7] Rusyaman, E., Konvergensi Barisan Fibonacci yang Diperumum, Seminar Nasional Matematika, Unpad, 2006
[8] Tuwankotta, J.M., Contractive Sequence, ITB, 2005
Abstract: In investing in Islamic stocks, investors are also faced with risk, because daily Islamic stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. Forming a portfolio consisting of several Islamic stocks is intended to obtain the optimal composition of the investment portfolio. This paper discusses Mean-Variance optimization of an investment portfolio of Islamic stocks using non-constant mean and volatility approaches. The non-constant mean is analyzed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analyzed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. The optimization process is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is used to analyze several Islamic stocks in Indonesia. The expected result is the proportion of investment in each Islamic stock analyzed.
1. Introduction
Investment basically means placing some capital into some form of instrument (asset), either fixed assets or financial assets. Investing in financial assets can generally be done by buying shares in the stock market. When investing in stocks, investors are exposed to a risk whose magnitude grows along with the magnitude of the expected return (Kheirollah & Bjarnbo, 2007). The greater the expected return, generally the greater the risk to be faced. Investment risk, which describes the rise and fall of stock price changes over time, can be measured by the variance (Sukono et al., 2011).
A strategy often used by investors to face investment risk is to form an investment portfolio. Forming an investment portfolio essentially means allocating capital across a few selected stocks, often referred to as diversifying investments (Panjer et al., 1998). The purpose of forming an investment portfolio is to obtain a certain return with a minimum level of risk, or to obtain a maximum return with limited risk. To achieve these objectives, the investor needs to conduct an analysis of optimal portfolio selection. Portfolio selection analysis can be done with portfolio optimization techniques (Shi-Jie Deng, 2004).
Therefore, this paper studies the Mean-Variance portfolio optimization model, where the mean and the volatility (variance) are assumed to be non-constant and are analyzed using a time series model approach. The non-constant mean is analyzed using Autoregressive Moving Average (ARMA) models, whereas the non-constant volatility is analyzed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models (Shi-Jie Deng, 2004). This analysis method is then used to analyze Islamic stocks in Indonesia. The purpose of the analysis is to obtain the proportions of investment capital allocated to the Islamic stocks analyzed, which can provide a maximum return at a certain level of risk.
2. Methodology
This section discusses the stages of the analysis, including the calculation of stock returns, mean modeling, volatility modeling, and portfolio optimization.
2.1 Stock Returns
Suppose Pit is the price of Islamic stock i at time t, and rit is the return of Islamic stock i at time t. The value of rit can be calculated using the following equation:

rit = ln(Pit / Pit-1),    (1)

where i = 1, ..., N with N the number of stocks analyzed, and t = 1, ..., T with T the number of stock price observations (Tsay, 2005; Sukono et al., 2011).
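Equation (1) in code, applied to a short made-up price series (not the paper's Islamic stock data):

```python
# Log returns from equation (1): r_it = ln(P_it / P_i,t-1).
import math

def log_returns(prices):
    return [math.log(prices[t] / prices[t - 1]) for t in range(1, len(prices))]

p = [100.0, 102.0, 101.0, 104.0]   # made-up daily prices
r = log_returns(p)
print(r)
```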
with mean 0 and variance σi². The sequence {rit} follows an ARMA(p, q) model with mean μit if {rit - μit} follows a zero-mean ARMA(p, q) model (Gujarati, 2004; Shewhart et al., 2004).
The stages of the mean modeling process include: (i) model identification, (ii) parameter estimation, (iii) diagnostic testing, and (iv) prediction (Tsay, 2005).
2.3 Volatility Models
Volatility in time series data can generally be analyzed using GARCH models. Suppose {rit}, the return of Islamic stock i at time t, is stationary; the residual of the mean model for Islamic stock i at time t is ait = rit - μit. The residual sequence {ait} follows a GARCH(g, s) model when it satisfies:

ait = σit εit ,  σit² = αi0 + Σ_{k=1}^{g} αik a²i,t-k + Σ_{j=1}^{s} βij σ²i,t-j ,    (3)

where {εit} is the sequence of volatility-model residuals, i.e., a sequence of independent and identically distributed (IID) random variables with mean 0 and variance 1. The parameter coefficients satisfy αi0 > 0, αik ≥ 0, βij ≥ 0, and Σ_{k=1}^{max(g,s)} (αik + βij) < 1 (Shi-Jie Deng, 2004; Tsay, 2005).
The stages of the volatility modeling process include: (i) estimation of the mean model, (ii) test for ARCH effects, (iii) model identification, (iv) estimation of the volatility model, (v) diagnostic testing, and (vi) prediction (Tsay, 2005).
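For g = s = 1, the recursion in (3) specializes to σt² = α0 + α1 a²t-1 + β1 σ²t-1. The following is a minimal sketch with made-up coefficients, initialized at the unconditional variance α0 / (1 - α1 - β1); it is an illustration, not the paper's estimation procedure.

```python
# GARCH(1,1) conditional-variance recursion from (3), with illustrative
# (made-up) coefficients. Stationarity requires alpha1 + beta1 < 1.

def garch_variance(residuals, omega, alpha1, beta1):
    """Conditional variance series sigma_t^2 given mean-model residuals a_t."""
    assert omega > 0 and alpha1 >= 0 and beta1 >= 0 and alpha1 + beta1 < 1
    # start at the unconditional variance omega / (1 - alpha1 - beta1)
    sigma2 = [omega / (1 - alpha1 - beta1)]
    for a in residuals[:-1]:
        sigma2.append(omega + alpha1 * a ** 2 + beta1 * sigma2[-1])
    return sigma2

s2 = garch_variance([0.0, 0.0, 0.1], omega=0.04, alpha1=0.1, beta1=0.8)
print(s2)
```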
2.4 Prediction of l-Steps Ahead
Using the mean and volatility models, the aim is to calculate the predictions of the mean, μ̂it = r̂ih(l), and the volatility, σ̂it² = σ̂ih²(l), for l periods ahead of the prediction origin h (Tsay, 2005; Febrian &
Herwany, 2009). The predicted mean μ̂it = r̂ih(l) and volatility σ̂it² = σ̂ih²(l) will then be used in forming the portfolio.
Portfolio return can be expressed as rp = w′r with w′e = 1 (Zhang, 2006; Panjer et al., 1998). Suppose μ′ = (μ1t, ..., μNt); the expected return of portfolio p can be expressed as:
μp = E[rp] = w′μ .    (4)
Suppose the covariance matrix Σ = (σij), i, j = 1, ..., N, with σij = Cov(rit, rjt), is given. The variance of the portfolio return can be expressed as:
σp² = w′Σw .    (5)
Definition 1. (Panjer et al., 1998). A portfolio p * called (Mean-variance) efficient if there is
no portfolio p with p p* and 2p 2p* (Panjer et al., 1998).
To obtain an efficient portfolio, one typically maximizes the objective function

$$2\tau\mu_p - \sigma_p^2, \qquad \tau \ge 0,$$

where $\tau$ is the parameter of the investor's risk tolerance. That is, an investor with risk tolerance $\tau\ (\ge 0)$ needs to solve the portfolio problem

Maximize $2\tau\mathbf{w}'\boldsymbol{\mu} - \mathbf{w}'\boldsymbol{\Sigma}\mathbf{w}$ (6)
subject to the condition $\mathbf{w}'\mathbf{e} = 1$.

Note that the solutions of (6), for all $\tau \in [0, \infty)$, form the complete set of efficient portfolios. The set of all points in the $(\sigma_p, \mu_p)$ diagram related to efficient portfolios is the so-called efficient frontier.
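One standard way to solve (6) under the budget constraint is by Lagrange multipliers, which gives $\mathbf{w}(\tau) = \tau\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu} + \lambda\boldsymbol{\Sigma}^{-1}\mathbf{e}$ with $\lambda = (1 - \tau\mathbf{e}'\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu})/(\mathbf{e}'\boldsymbol{\Sigma}^{-1}\mathbf{e})$; sweeping $\tau$ then traces the efficient set. A sketch under hypothetical inputs (the vectors and matrices below are placeholders, not the paper's data):

```python
import numpy as np

def efficient_weights(mu, Sigma, tau):
    """Maximize 2*tau*w'mu - w'Sigma w subject to w'e = 1 (equation (6)).
    The Lagrange conditions give w = tau*inv(Sigma)@mu + lam*inv(Sigma)@e,
    with lam chosen so that the budget constraint holds."""
    e = np.ones(len(mu))
    Si_mu = np.linalg.solve(Sigma, mu)
    Si_e = np.linalg.solve(Sigma, e)
    lam = (1.0 - tau * (e @ Si_mu)) / (e @ Si_e)
    return tau * Si_mu + lam * Si_e

mu = np.array([0.006, 0.004, 0.005])
Sigma = np.array([[0.0012, 0.0002, 0.0003],
                  [0.0002, 0.0018, 0.0001],
                  [0.0003, 0.0001, 0.0011]])
# a sweep of risk tolerances traces out (mean, variance) frontier points
frontier = [(w @ mu, w @ Sigma @ w)
            for tau in np.arange(0.0, 0.036, 0.001)
            for w in [efficient_weights(mu, Sigma, tau)]]
```

At τ = 0 the formula reduces to the minimum-variance portfolio, matching the intuition that a zero-risk-tolerance investor cares only about variance.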
3. Illustrations
This section discusses the application of the method and the results of each stage of the analysis, covering the Islamic stock data, the calculation of Islamic stock returns, modeling of the mean of the Islamic stocks, volatility modeling, prediction of the mean and variance values, and the optimization process.
Figure-1. Plots of the return series of the five Islamic stocks analyzed.
The plots in Figure-1 suggest that the five Islamic stock return series analyzed are stationary. Stationarity testing is done using the ADF test; the resulting test statistic values are, respectively: -34.24848, -33.79008, -30.20451, -40.04979, and -28.36974. Further, at the specified significance level of 5%, the critical value is -2.863461. It is clear that the ADF test statistics of all the Islamic stocks analyzed lie in the rejection region, so that all the series are stationary.
Based on the correlogram of the squared residuals $a_t^2$ and on the ACF and PACF graphs of each series, tentative volatility models were selected. The volatility model of each Islamic stock return was estimated simultaneously with its mean model. After significance tests for the parameters and for the models, all the equations written below are significant. As a result, the best models obtained are, respectively:
Islamic stock AKRA follows an ARMA(1,0)-GARCH(1,1) model with equations:
$r_t = 0.073891\, r_{t-1} + a_t$ and $\sigma_t^2 = 0.000014 + 0.040015\, a_{t-1}^2 + 0.940431\, \sigma_{t-1}^2$
Islamic stock CPIN follows an ARMA(1,0)-GARCH(1,1) model with equations:
$r_t = 0.089639\, r_{t-1} + a_t$ and $\sigma_t^2 = 0.000052 + 0.134049\, a_{t-1}^2 + 0.820716\, \sigma_{t-1}^2$
Islamic stock ITMG follows an ARMA(1,0)-GARCH(1,1) model with equations:
$r_t = 0.193825\, r_{t-1} + a_t$ and $\sigma_t^2 = 0.000012 + 0.066024\, a_{t-1}^2 + 0.923108\, \sigma_{t-1}^2$
Islamic stock MYOR follows an ARMA(7,0)-GARCH(1,1) model with equations:
$r_t = 0.102007\, r_{t-7} + a_t$ and $\sigma_t^2 = 0.000009 + 0.044332\, a_{t-1}^2 + 0.945801\, \sigma_{t-1}^2$
Islamic stock TLKM follows an ARMA(2,0)-GARCH(1,1) model with equations:
$r_t = 0.084289\, r_{t-2} + a_t$ and $\sigma_t^2 = 0.000019 + 0.139166\, a_{t-1}^2 + 0.824540\, \sigma_{t-1}^2$
Based on the ARCH-LM test statistics, the residuals of the models for the Islamic stocks AKRA, CPIN, ITMG, MYOR, and TLKM contain no remaining ARCH effect and are white noise. The mean and volatility models are then used to calculate the values $\hat{\mu}_t = \hat{r}_t(l)$ and $\hat{\sigma}_t^2 = \hat{\sigma}_t^2(l)$ recursively.
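The recursive l-step-ahead volatility forecast for a GARCH(1,1) model (Tsay, 2005) can be sketched as follows, using the AKRA coefficients estimated above; the end-of-sample values `a_h` and `sigma2_h` in the example call are hypothetical:

```python
# l-step-ahead GARCH(1,1) volatility forecast (Tsay, 2005):
#   sigma2_h(1) = alpha0 + alpha1 * a_h^2 + beta1 * sigma2_h
#   sigma2_h(l) = alpha0 + (alpha1 + beta1) * sigma2_h(l-1),  l >= 2
def garch11_forecast(alpha0, alpha1, beta1, a_h, sigma2_h, steps):
    forecasts = []
    s2 = alpha0 + alpha1 * a_h ** 2 + beta1 * sigma2_h   # one step ahead
    forecasts.append(s2)
    for _ in range(steps - 1):
        s2 = alpha0 + (alpha1 + beta1) * s2              # recursion for l >= 2
        forecasts.append(s2)
    return forecasts

# AKRA coefficients from the estimated model above; a_h, sigma2_h hypothetical
f = garch11_forecast(0.000014, 0.040015, 0.940431,
                     a_h=0.01, sigma2_h=0.0004, steps=20)
```

As the horizon grows, the forecast converges to the unconditional variance α₀/(1 − α₁ − β₁).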
$$\boldsymbol{\Sigma} = \begin{pmatrix} 0.001200 & 0.000136 & 0.000251 & 0.000113 & 0.000401 \\ 0.000136 & 0.001840 & 0.000092 & 0.000315 & 0.000225 \\ 0.000251 & 0.000092 & 0.001078 & 0.000512 & 0.000133 \\ 0.000113 & 0.000315 & 0.000512 & 0.000956 & 0.000075 \\ 0.000401 & 0.000225 & 0.000133 & 0.000075 & 0.001399 \end{pmatrix}$$
and
$$\boldsymbol{\Sigma}^{-1} = 10^{3} \begin{pmatrix} 0.9613 & 0.0345 & 0.2020 & 0.0257 & 0.2522 \\ 0.0345 & 0.5904 & 0.0738 & 0.2237 & 0.0801 \\ 0.2020 & 0.0738 & 1.3037 & 0.6955 & 0.0406 \\ 0.0257 & 0.2237 & 0.6955 & 1.4880 & 0.0150 \\ 0.2522 & 0.0801 & 0.0406 & 0.0150 & 0.8030 \end{pmatrix}$$
Optimization is done in order to determine the composition of the portfolio weights; the portfolio weight vector is determined using equation (8). In the weight vector calculation process, the values of the risk tolerance $\tau$ are determined by simulation, starting at $\tau = 0.000$ with increments of 0.001. If it is assumed that short sales are not allowed, the simulation is stopped at $\tau = 0.036$, because at that value the portfolio weights first contain a negative value. The portfolio weight calculation results are given in Table-2.
Table-2. Portfolio weight calculation results.
τ | w1 (AKRA) | w2 (CPIN) | w3 (ITMG) | w4 (MYOR) | w5 (TLKM) | w′e (Sum) | μ̂p (Mean) | σ̂²p (Variance) | μ̂p − σ̂²p (Maximum) | μ̂p/σ̂²p (Ratio)
0.000 0.2150 0.1406 0.1895 0.2629 0.1920 1 0.0059 0.00043136 0.00546864 13.7161
0.001 0.2093 0.1411 0.2025 0.2555 0.1916 1 0.0061 0.00043151 0.00566849 14.0507
0.002 0.2036 0.1417 0.2155 0.2480 0.1913 1 0.0062 0.00043195 0.00576805 14.6893
0.003 0.1978 0.1422 0.2285 0.2406 0.1909 1 0.0064 0.00043268 0.00596732 14.6893
0.004 0.1921 0.1428 0.2414 0.2331 0.1905 1 0.0065 0.00043370 0.00606630 14.9921
0.005 0.1864 0.1433 0.2544 0.2257 0.1902 1 0.0066 0.00043502 0.00616498 15.2832
0.006 0.1807 0.1439 0.2674 0.2182 0.1898 1 0.0068 0.00043663 0.00636337 15.5621
0.007 0.1750 0.1444 0.2803 0.2108 0.1894 1 0.0069 0.00043853 0.00646147 15.8284
0.008 0.1693 0.1450 0.2933 0.2033 0.1891 1 0.0071 0.00044073 0.00665927 16.0817
0.009 0.1636 0.1455 0.3063 0.1959 0.1887 1 0.0072 0.00044322 0.00675678 16.3217
0.010 0.1579 0.1461 0.3193 0.1884 0.1883 1 0.0074 0.00044600 0.00695400 16.5481
0.011 0.1522 0.1466 0.3322 0.1810 0.1880 1 0.0075 0.00044907 0.00705093 16.7608
0.012 0.1465 0.1472 0.3452 0.1736 0.1876 1 0.0077 0.00045344 0.00724656 16.9597
0.013 0.1407 0.1477 0.3582 0.1661 0.1872 1 0.0078 0.00045610 0.00734390 17.1445
0.014 0.1350 0.1483 0.3711 0.1587 0.1587 1 0.0080 0.00046005 0.00753995 17.3154
0.015 0.1293 0.1488 0.3841 0.1512 0.1865 1 0.0081 0.00046430 0.00763570 17.4724
0.016 0.1236 0.1494 0.3971 0.1438 0.1461 1 0.0083 0.00046884 0.00783116 17.6155
0.017 0.1179 0.1499 0.4101 0.1363 0.1858 1 0.0084 0.00047367 0.00792633 17.7449
0.018 0.1122 0.1505 0.4230 0.1289 0.1854 1 0.0086 0.00047879 0.00812121 17.8608
0.019 0.1065 0.1510 0.4360 0.1214 0.1850 1 0.0087 0.00048421 0.00821579 17.9633
0.020 0.1008 0.1516 0.4490 0.1140 0.1847 1 0.0088 0.00048992 0.00831008 18.0528
0.021 0.0951 0.1521 0.4619 0.1065 0.1843 1 0.0090 0.00049592 0.00850408 18.1295
0.022 0.0893 0.1527 0.4749 0.0991 0.1840 1 0.0091 0.00050221 0.00859779 18.1937
0.023 0.0836 0.1533 0.4879 0.0916 0.1836 1 0.0093 0.00050880 0.00879120 18.2459
0.024 0.0779 0.1538 0.5009 0.0842 0.1832 1 0.0094 0.00051568 0.00888432 18.2863
0.025 0.0722 0.1544 0.5138 0.0767 0.1829 1 0.0096 0.00052285 0.00907715 18.3154
0.026 0.0665 0.1549 0.5268 0.0693 0.1825 1 0.0097 0.00053032 0.00916968 18.3336
0.027 0.0608 0.1555 0.5398 0.0619 0.1821 1 0.0099 0.00053808 0.00936192 18.3413
0.028 0.0551 0.1560 0.5527 0.0544 0.1818 1 0.0100 0.00054613 0.00945387 18.3390
0.029 0.0494 0.1566 0.5657 0.0470 0.1814 1 0.0102 0.00055447 0.00964553 18.3270
0.030 0.0437 0.1571 0.5787 0.0395 0.1810 1 0.0103 0.00056311 0.00973689 18.3059
0.031 0.0380 0.1577 0.5917 0.0321 0.1807 1 0.0105 0.00057204 0.00992796 18.2760
0.032 0.0322 0.1582 0.6046 0.0246 0.1803 1 0.0106 0.00058126 0.01001874 18.2379
0.033 0.0265 0.1588 0.6176 0.0172 0.1799 1 0.0107 0.00059078 0.01010922 18.1919
0.034 0.0208 0.1593 0.6306 0.0097 0.1796 1 0.0109 0.00060059 0.01029941 18.1386
0.035 0.0151 0.1599 0.6435 0.0023 0.1792 1 0.0110 0.00061069 0.01038931 18.0783
0.036 0.0094 0.1604 0.6565 -0.0052 0.1788 1 0.0112 0.00062108 0.01057892 18.0115
Based on the results of the optimization process given in Table-2, the pairs of points ($\hat{\mu}_p$, $\hat{\sigma}_p^2$) of efficient portfolios can be formed, the so-called efficient frontier, as given in Figure-2.a. This graph shows the efficient frontier, the feasible region within which investors with different levels of risk tolerance can invest. Also, using the optimization results in Table-2, the ratio of $\hat{\mu}_p$ to $\hat{\sigma}_p^2$ can be calculated for each level of risk tolerance. The ratio calculation results are shown in Figure-2.b. This ratio shows the relationship between the expected optimum portfolio return and the variance as a measure of risk.
Figure-2. (a) Efficient frontier (Mean versus variance); (b) Ratio versus risk tolerance.
Based on the results of the portfolio optimization calculation, the optimum value is achieved at a risk tolerance of $\tau = 0.027$. The portfolio produces a mean value of $\hat{\mu}_p = 0.0099$ with a risk value, as the variance, of $\hat{\sigma}_p^2 = 0.00053808$.
The weight composition of the maximum portfolio is, respectively: 0.0608, 0.1555, 0.5398, 0.0619, and 0.1821. This provides a reference for investors in the Islamic stocks AKRA, CPIN, ITMG, MYOR, and TLKM: in order to achieve the maximum portfolio value, the portfolio weights should be composed as above.
4. Conclusions
In this paper we analyzed mean-variance portfolio optimization with non-constant mean and volatility models for some Islamic stocks traded in the Islamic capital market in Indonesia. The analysis showed that all of the Islamic stocks analyzed follow ARMA($p,q$)-GARCH($g,s$) models. Based on the results of the portfolio optimization calculation, the optimum is achieved when the composition of the investment weights in the Islamic stocks AKRA, CPIN, ITMG, MYOR, and TLKM is, respectively: 0.0608, 0.1555, 0.5398, 0.0619, and 0.1821. This weight composition produces a portfolio with a mean value of 0.0099 and a risk value, measured as the variance, of 0.00053808.
References
Febrian, E. & Herwany, A. (2009). Volatility Forecasting Models and Market Co-Integration: A Study on South-
East Asian Markets. Working Paper in Economics and Development Studies. Department of Economics,
Padjadjaran University.
Goto, S. & Yan Xu. (2012). On Mean Variance Portfolio Optimization: Improving Performance Through Better
Use of Hedging Relations. Working Paper. Moore School of Business, University of South Carolina. email:
shingo.goto@moore.sc.edu.
Gujarati, D.N. (2004). Basic Econometrics. Fourth Edition. The McGraw−Hill Companies, Arizona.
Kheirollah, A. & Bjarnbo, O., (2007). A Quantitative Risk Optimization of Markowitz Model: An Empirical
Investigation on Swedish Large Cap List, Master Thesis, in Mathematics/Applied Mathematics, University
Sweden, Department of Mathematics and Physics, www.mdh.se/polopoly_fs/ 1.16205!MasterTheses.pdf.
Panjer, H.H. (Ed.), et al. (1998). Financial Economics: With Applications to Investments, Insurance, and Pensions. Schaumburg, Ill.: The Actuarial Foundation.
Rifqiawan, R.A. (2008). Analisis Perbedaan Volume Perdagangan Saham-Saham yang Optimal Pada Jakarta
Islamic Index (JII) di Bursa Efek Indonesia (BEI). Tesis Program Magister. Program Studi Magister Sains
Akuntansi, Program Pascasarjana, Universitas Diponegoro, Semarang, 2008.
Enders, W. (2004). Applied Econometric Time Series. John Wiley & Sons, Inc., United States of America.
Shi-Jie Deng. (2004). Heavy-Tailed GARCH models: Pricing and Risk Management Applications in Power
Market, IMA Control & Pricing in Communication & Power Networks. 7-17 Mar
http://www.ima.umn.edu/talks/.../deng/power_ workshop_ ima032004-deng.pdf.
Sukono, Subanar & Dedi Rosadi. (2011). Pengukuran VaR Dengan Volatilitas Tak Konstan dan Efek Long
Memory. Disertasi. Program Studi S3 Statistika, Jurusan Matematika, Fakultas Matematika dan Ilmu
Pengetahuan Alam, Universitas Gadjah Mada, Yogyakarta, 2011.
Tsay, R.S. (2005). Analysis of Financial Time Series, Second Edition, USA: John Wiley & Sons, Inc.
Yoshimoto, A. (1996). The Mean-Variance Approach To Portfolio Optimization Subject To Transaction Costs.
Journal of the Operations Research Society of Japan, Vol. 39, No. 1, March 1996
Zhang, D., (2006). Portfolio Optimization with Liquidity Impact, Working Paper, Center for Computational
Finance and Economic Agents, University of Essex, www.orfe.princeton.edu/
oxfordprinceton5/slides/yu.pdf.
1. Introduction
The objective of portfolio selection is to find the right asset mix that provides the appropriate combination of return and risk and allows investors to achieve their financial goals. Portfolio selection problems were first formulated by Markowitz in 1952. In the proposed model, the return is measured by the expected value of the random portfolio return, while the risk is quantified by the variance of the portfolio (mean-variance portfolio).
The mean-variance portfolio (MVP) requires only estimates of the mean 𝝁 and covariance matrix 𝚺 of asset returns. Traditionally, the sample mean and covariance matrix have been used for this purpose. However, because of estimation error, policies constructed using these estimators are extremely unstable, so the resulting portfolio weights fluctuate substantially over time; see Chopra and Ziemba (1993), Broadie (1993), Bengtsson (2004), and Ceria and Stubbs (2006).
The instability of the mean-variance portfolios can be explained since sample mean and
covariance matrix are maximum likelihood estimators under normality. These estimators possess
desirable statistical properties under the true model. However, their asymptotic breakdown point is
equal to zero (Maronna et al., 2006), i.e., they are badly affected by atypical observations.
Several techniques have been suggested to reduce the sensitivity of mean-variance portfolios. One of them is represented by robust statistics. The theory of robust statistics is concerned with the construction of statistical procedures that are stable even when the empirical (sample) distribution deviates from the assumed (normal) distribution (see Huber 2004, Staudte and Sheather 1990, Maronna et al. 2006). Other researchers have proposed portfolio policies based on robust estimation techniques; see Lauprete (2002), Vaz-de Melo and Camara (2003), Perret-Gentil and Victoria-Feser (2004), Welsch and Zhou (2007), DeMiguel and Nogales (2009), and Hu (2012).
Based on the previous analysis, this paper examines portfolio policies using robust estimators. These policies should be less sensitive than the traditional policies to deviations of the empirical distribution of returns from normality. We focus on two robust estimators with a high breakdown point: the Minimum Volume Ellipsoid (MVE) and the Fast Minimum Covariance Determinant (Fast-MCD) (see Rousseeuw and Van Driessen, 1999).
2. Robust Statistics
Robust statistics is an extension of classical statistics that takes into account the possibility of model misspecification (including outliers). In this case, the parametric model is the multivariate normal model with parameters 𝝁 and 𝚺. Robust estimators for location and scale with multivariate data were first proposed by Gnanadesikan and Kettenring (1972). An important property is affine equivariance, which is fulfilled by location estimators 𝝁̂(𝒓) and scale estimators 𝚺̂(𝒓) that satisfy (see Maronna et al. 2006):

𝝁̂(𝑨𝒓 + 𝒃) = 𝑨𝝁̂(𝒓) + 𝒃 (1)
𝚺̂(𝑨𝒓 + 𝒃) = 𝑨𝚺̂(𝒓)𝑨′ (2)

The most widely used estimators of this type are the minimum volume ellipsoid (MVE) estimator of Rousseeuw (1985) and the Fast Minimum Covariance Determinant estimator constructed by Rousseeuw and Van Driessen (1999).
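The affine equivariance properties (1)-(2) can be checked numerically for the classical sample mean and covariance, which satisfy them exactly. A small sketch with arbitrary illustrative values of A and b:

```python
import numpy as np

# Numerical check of (1)-(2): the sample mean and covariance are affine
# equivariant: mean(A r + b) = A mean(r) + b and cov(A r + b) = A cov(r) A'.
rng = np.random.default_rng(4)
r = rng.normal(size=(500, 3))
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 3.0]])
b = np.array([1.0, -2.0, 0.5])
t = r @ A.T + b                       # the transformed sample A r + b

lhs_mean = t.mean(axis=0)
rhs_mean = A @ r.mean(axis=0) + b
lhs_cov = np.cov(t, rowvar=False)
rhs_cov = A @ np.cov(r, rowvar=False) @ A.T
```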
The constant $c$ is chosen as $\chi^2_{p,0.5}$, and $\#$ denotes the cardinality.
The computation of the MCD estimator is far from trivial. A naive algorithm would proceed by exhaustively investigating all subsets of size h out of n to find the subset with the smallest determinant of its covariance matrix, but this is feasible only for very small data sets. In 1999, Rousseeuw and Van Driessen constructed a very fast algorithm to calculate the MCD estimator. The new algorithm is called FAST-MCD and is based on the C-step.
where: $u_i = \begin{cases} 1, & \text{if } d_{(T_{MCD},\,C_{MCD})}(i) \le \sqrt{\chi^2_{p,0.975}} \\ 0, & \text{otherwise} \end{cases}$
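A sketch of the reweighting step above: given robust estimates T_MCD and C_MCD, observations whose robust Mahalanobis distance exceeds $\sqrt{\chi^2_{p,0.975}}$ receive weight 0, and the reweighted mean and covariance are computed from the rest. The data, and the use of the true mean and identity covariance in place of actual MCD estimates, are hypothetical:

```python
import numpy as np
from scipy.stats import chi2

def reweight(X, T_mcd, C_mcd):
    """Weights u_i = 1 if the robust squared distance of observation i is
    <= chi2_{p,0.975}, else 0; then recompute mean and covariance from
    the kept observations."""
    p = X.shape[1]
    diff = X - T_mcd
    # squared Mahalanobis distances under the robust estimates
    d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(C_mcd), diff)
    u = (d2 <= chi2.ppf(0.975, p)).astype(int)
    kept = X[u == 1]
    return u, kept.mean(axis=0), np.cov(kept, rowvar=False)

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=200)
X[:5] += 10.0                     # a few gross outliers
u, mean_rw, cov_rw = reweight(X, np.zeros(2), np.eye(2))
```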
3. Optimal Portfolio
Let the random vector 𝒓 = (r₁, r₂, …, r_N)′ denote the random returns of the N risky assets, with mean vector 𝝁 and covariance matrix 𝚺, and let 𝒘 = (w₁, w₂, …, w_N)′ denote the proportions of the portfolio invested in the N risky assets. The target of the investor is then to choose an optimal portfolio 𝒘 that lies on the mean-risk efficient frontier. In the Markowitz model, the "mean" of a portfolio is defined as the expected value of the portfolio return, 𝒘ᵀ𝒓, and the "risk" is defined as the variance of the portfolio return, 𝒘ᵀ𝚺𝒘.
Mathematically, minimizing the variance subject to target and budget constraints leads to a formulation like:

min 𝒘ᵀ𝚺𝒘 (6)
subject to: 𝒘ᵀ𝝁 ≥ μ₀ (7)
𝒆ᵀ𝒘 = 1 (8)
𝒘 > 0 (9)
where μ₀ is the minimum expected return, 𝒆ᵀ𝒘 = 1 is the budget constraint, and 𝒘 > 0 stands for no short-selling.
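A minimal numerical sketch of problem (6)-(9) using a general-purpose solver (scipy's SLSQP). All inputs below are hypothetical placeholders:

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_portfolio(mu, Sigma, mu0):
    """Solve (6)-(9): minimize w'Sigma w subject to w'mu >= mu0,
    e'w = 1, and w >= 0 (no short selling)."""
    N = len(mu)
    cons = [{'type': 'eq',   'fun': lambda w: w.sum() - 1.0},
            {'type': 'ineq', 'fun': lambda w: w @ mu - mu0}]
    bounds = [(0.0, 1.0)] * N
    res = minimize(lambda w: w @ Sigma @ w, np.full(N, 1.0 / N),
                   method='SLSQP', bounds=bounds, constraints=cons)
    return res.x

# hypothetical inputs for illustration
mu = np.array([0.0013, 0.0018, 0.0015])
Sigma = np.array([[0.00009, 0.00002, 0.00001],
                  [0.00002, 0.00014, 0.00003],
                  [0.00001, 0.00003, 0.00011]])
w = min_variance_portfolio(mu, Sigma, mu0=0.0015)
```

The same routine solves the robust variant (10)-(13) below by passing the robust estimates in place of the sample mean and covariance.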
In the above formulation, if the parameters are known, then the optimization problem (6)-(9) can be solved numerically. However, the parameters are never known in practice and have to be estimated from an unknown distribution with limited data. Traditionally, the Maximum Likelihood Estimator (MLE) has been used to estimate the sample mean and covariance matrix.
If the data follow a multivariate normal distribution, then 𝚺̂_MLE and 𝝁̂_MLE are the optimal estimators for the solution of problem (6)-(9). But in the actual financial market, the Gaussian model may be unsatisfactory, since the empirical distribution of asset returns may in fact be asymmetric, skewed, and heavy-tailed.
Robust statistics can deal with data that are not fully compatible with the distribution implied by the assumed model, i.e., when model misspecification exists, and in particular in the presence of outlying observations. The optimal portfolio weights based on robust estimators can then be obtained by solving:

min 𝒘ᵀ𝚺̂_rob 𝒘 (10)
subject to: 𝒘ᵀ𝝁̂_rob ≥ μ₀ (11)
𝒆ᵀ𝒘 = 1 (12)
𝒘 > 0 (13)
4. Research Methodology
The research utilizes historical daily rates of return for 8 companies from the Jakarta Islamic Index (JII). These are Alam Sutera Realty Tbk (ASRI), Indofood Sukses Makmur Tbk (INDF), Jasa Marga Tbk (JSMR), Telekomunikasi Indonesia (TLKM), Timah Tbk (TINS), AKR Corporindo Tbk (AKRA), Charoen Pokphand Indonesia Tbk (CPIN), and XL Axiata Tbk (EXCL). The data are taken from January 2012 to December 2012 (see www.finance.yahoo.com).
The classical portfolio, established using the sample mean and covariance matrix, will be compared with the following robust methods: Minimum Volume Ellipsoid (MVE) and Fast Minimum Covariance Determinant (FMCD). For both robust estimators, the fraction of rejected observations is set at 10%.
In this study, the Sharpe ratio is employed to evaluate the performance of the three portfolios. This ratio measures the additional return (or risk premium) per unit of dispersion of the investment asset or trading strategy, which is taken as the risk, that is, a variance risk measure.
definition of Sharpe Ratio in portfolio is:
𝐸(𝑅𝑝) − 𝑅𝑓
𝑆𝑅 =
𝜎𝑝
Where 𝑅𝑝 is the portfolio return, 𝑅𝑓 is the risk free return, 𝜎𝑝 is the standard deviation of the
excess of the portfolio return. In practice, the higher Sharpe ratio it has, the better performance portfolio
will have.
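The Sharpe ratio formula above is a one-liner on a return series. A minimal sketch with hypothetical returns and risk-free rate:

```python
import numpy as np

def sharpe_ratio(returns, rf=0.0):
    """SR = (E(Rp) - Rf) / sigma_p, computed from a return series."""
    returns = np.asarray(returns)
    return (returns.mean() - rf) / returns.std(ddof=1)

# hypothetical daily returns and risk-free rate
sr = sharpe_ratio([0.002, -0.001, 0.003, 0.001, 0.000], rf=0.0001)
```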
5. Results
The data consist of 259 historical daily arithmetic returns (January 3, 2012 - December 31, 2012) of eight stocks chosen from the Jakarta Islamic Index. These data are used as the scenario for comparing the performance of the three portfolios (MV, MVE and Fast-MCD). Table 1 presents the mean of the eight stocks, including the standard deviation.
As expected, it can be observed from Table 3 that the eight return series are not normally distributed. This is indicated by Sig. < 0.05, which means there is sufficient evidence to reject H0 (that the data are normally distributed).
In this section, the portfolio composition of the classic portfolio is compared with that of the robust portfolios. Optimal portfolios are established for various values of µ0, namely 0.0013 - 0.0021. Tables 3, 4 and 5 show the composition of each portfolio.
It can be noticed that an increase in µ0 causes an increase in the weights of the assets INDF and CPIN. Meanwhile, an increasing µ0 causes a decrease in the weights of the assets ASRI, JSMR, TLKM, TINS, AKRA and EXCL.
Meanwhile, the MVE portfolio obtains the following results:
It can be seen that in the formation of the optimal MVE portfolio at µ0 = 0.001, the weight of each asset is as follows: ASRI is 4.63%, INDF is 16.6%, JSMR is 27.47%, TLKM is 18.1%, AKRA is 4.36%, CPIN is 6.37%, and EXCL is 13.53%. Interestingly, an increase in µ0 causes a rise in the weights of both CPIN and EXCL, while the weights of the other assets decrease.
The establishment of optimal portfolios through Fast-MCD is presented in the table below:
Table 5 presents the optimal Fast-MCD portfolios for various expected returns. It can be observed that ASRI is never involved in the formation of the portfolio, as indicated by its 0% weight; the same happens to TLKM, TINS and AKRA. Table 5 also shows that the CPIN, EXCL and INDF stocks contribute the dominant share compared to the other stocks.
Based on the analysis of the portfolio weights, we can conclude that these three approaches yield different portfolio weights. At various levels of expected return, the classical model's diversified portfolio contrasts with those of the robust models. However, the difference between the classic and robust portfolio compositions becomes smaller as the given return increases.
In this section, the risk and Sharpe ratio performance of the classical and robust models are compared. The results are presented in the following table:
Table 6. Standard Deviation and Sharpe Ratio for Given Expected Return
µ0 MV MVE FastMCD
Stdev Sharpe Stdev Sharpe Stdev Sharpe
0.0013 0.009274 0.010783 0.009165 0.010911 0.006633 0.015080
0.0014 0.009381 0.021320 0.009487 0.021081 0.007483 0.026730
0.0015 0.009695 0.030944 0.009798 0.030618 0.008367 0.035860
0.0016 0.010392 0.038491 0.010250 0.039024 0.009487 0.042160
0.0017 0.011225 0.044543 0.010954 0.045645 0.010583 0.047250
0.0018 0.012247 0.048992 0.011662 0.051449 0.011747 0.051077
0.0019 0.013191 0.053066 0.012490 0.056045 0.012961 0.054008
0.0020 0.014142 0.056569 0.013416 0.059630 0.014142 0.056569
0.0021 0.016125 0.058140 0.015492 0.058090 0.015492 0.058090
Table 6 presents the risk and Sharpe ratio at different expected returns. It shows that the Fast-MCD portfolio gives the smallest risk of the three. Similarly, the Sharpe ratio performance of Fast-MCD is the highest. The greater the value of the Sharpe ratio, the better the portfolio, since the Sharpe ratio measures the expected return per unit of risk. Therefore, in the context of risk and Sharpe ratio, we can conclude that the Fast-MCD portfolio is superior to the classical and MVE portfolios.
Another way to look at the performance of the portfolios is to construct the efficient frontier. An efficient frontier is the curve that shows all efficient portfolios in a risk-return framework. An efficient portfolio is defined as the portfolio that maximizes the expected return for a given level of risk (standard deviation), or that minimizes the risk subject to a given expected return. The following figure shows the behaviour of the efficient frontier for each portfolio.
Figure 1. Efficient frontiers of the MV, MVE and Fast-MCD portfolios (expected return versus standard deviation).
Based on Figure 1, it can be observed that the Fast-MCD efficient frontier is superior compared
to the MVE and MV efficient frontier.
6. Conclusion
This study mainly compares the performance of three different portfolios, i.e., the mean-variance portfolio, the MVE portfolio and the Fast-MCD portfolio. The empirical results show that, for a set of given returns, the compositions of the three portfolios differ. Meanwhile, there is no significant difference between the MV portfolio and the Fast-MCD portfolio when the expected return grows.
Through the comparison of the risk (standard deviation), the Sharpe ratio and the efficient frontier, it is clear that the Fast-MCD portfolio performs better than the mean-variance portfolio and the MVE portfolio.
References
Bengtsson, C. (2004). The Impact of Estimation Error on Portfolio Selection for Investors with Constant Relative Risk Aversion.
Best, M. J., and Grauer, R. R. (1991). On the sensitivity of mean-variance efficient portfolios to changes
in asset means: some analytical and computational results. Review of Financial Studies, 4(2),
315-342.
Broadie, M. (1993). Computing efficient frontiers using estimated parameters. Annals of Operations
Research, 45, 21-58.
Ceria, S., and Stubbs, R. A. (2006). Incorporating estimation errors into portfolio selection: robust
portfolio construction. Journal of Asset Management, 7(2), 109-127.
Chopra, V. K. and Ziemba,W. T. (1993). The effects of errors in means, variances, and covariances on
optimal portfolio choice. Journal of Portfolio Management, 19(2), 6-11.
DeMiguel, V. and Nogales, F. J. (2008). Portfolio selection with robust estimation. Technical Report,
London Business School.
Perret-Gentil, C. and Victoria-Feser, M.-P. (2004). Robust Mean-Variance Portfolio Selection. Working Paper 173, National Centre of Competence in Research NCCR FINRISK.
Hu, J. (2012). An Empirical Comparison of Different Approaches in Portfolio Selection. U.U.D.M.
Project Report 2012:7.
Huber, P. J. (1981). Robust Statistics. New York: Wiley.
Lauprete, G.J. (2001). Portfolio risk minimization under departures from normality. PhD thesis, Sloan
school of Management, Massachusetts Institute of Technology, Cambridge,MA.
Markowitz, H. M. (1952). Portfolio selection. Journal of Finance, 7: 77-91.
Rousseeuw, P.J. and K. Van Driessen (1999). A Fast Algorithm for the Minimum Covariance
Determinant Estimator. Technometrics, 41, 212–223.
Staudte, R.G. and Sheather, S.J. (1990). Robust Estimation and Testing. John Wiley and Sons Inc.
Vaz-de Melo, B., R. P. Camara. 2003. Robust modeling of multivariate financial data. Coppead
Working Paper Series 355, Federal University at Rio de Janeiro, Rio de Janeiro, Brazil.
Welsch, R.E. and Zhou, X. (2007). Application of Robust Statistics to Asset Allocation Models. Statistical Journal. 5(1): 97-114.
ABSTRACT: This research discusses a multivariate model for predicting the efficiency of the financial performance of insurance companies. The multivariate models used are the discriminant model and the logistic regression model. The change in net profit is the basis for grouping the data into two categories, because profit is often used as an indicator for measuring company performance. The predictor variables are represented by 7 financial ratios. A multivariate model is obtained by comparing the results of discriminant analysis and logistic regression analysis. Five of the seven financial ratios have a significant influence in predicting the efficiency of the financial performance of insurance companies.
1. Introduction
Profit is one indicator of the performance of a company. Earnings growth that increases constantly from year to year can give a positive signal about the prospects of the company's future performance (Margaretta, 2010). Financial ratio analysis can be used as a tool for predicting the financial performance of a company. The financial performance of a company is a picture of the company's financial statements, as the financial statements contain items such as assets, liabilities, capital and profits of the company. One use of financial statements is to give a picture of the company's growth or decline from one period to the next, and to allow comparison with other companies in similar industries.
Beaver (1966) used financial ratios as predictors of failure and stated that the usefulness of ratios can only be tested in relation to some specific purpose. Ratios are now widely used as predictors of failure. BarNiv and Hershbarger (1990) presented a model that incorporates variables designed to identify the financial solvency of life insurers. Three multivariate analyses (multidiscriminant, nonparametric, and logit) have been used to examine the implementation and efficiency of alternative multivariate models for life insurance solvency (Mahmoud, 2008).
In this study, the authors present a multivariate model to predict the efficiency of financial performance based on the net profit of insurance companies listed on the Stock Exchange, using financial ratios. The multivariate models used are discriminant models and logistic regression models.
2. Literature Review
2.1 Earnings and Earnings Growth
Profit or gain in accounting is defined as the difference between selling price and cost of production (Wikipedia, 2011). Corporate profit growth is the profit in year t minus the profit in year t-1, divided by the profit in year t-1 (Zainuddin and Jogiyanto, 1992). Earnings growth forecasts
are often used by investors, creditors, companies and governments to advance their business. The earnings growth formula is:

$$Y = \frac{X_t - X_{t-1}}{X_{t-1}} \qquad (1)$$

where $X_t$ is the profit in year t, $X_{t-1}$ is the profit in year t-1, and Y is the earnings growth.
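Equation (1) is simple arithmetic; a minimal sketch with hypothetical profit figures:

```python
def earnings_growth(x_t, x_prev):
    """Equation (1): Y = (X_t - X_{t-1}) / X_{t-1}."""
    return (x_t - x_prev) / x_prev

# hypothetical profits: 100 in year t-1, 120 in year t -> 20% growth
g = earnings_growth(120.0, 100.0)
```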
The distances $d_1^2, d_2^2, \ldots, d_n^2$ are random variables that are Chi-Square distributed (Johnson and Wichern, 1982). Although the distances are not independent and not exactly Chi-Square distributed, it is still very useful to plot them. The result of this plot is also known as a Chi-Square plot. The algorithm for the formation of the Chi-Square plot is as follows:
1. Sort the squared distance values from the calculation in equation (2), from the smallest to the largest.
2. Plot the pairs $\bigl(d_{(j)}^2,\ \chi_p^2((j - \tfrac{1}{2})/n)\bigr)$, where $\chi_p^2((j - \tfrac{1}{2})/n)$ is the $100(j - \tfrac{1}{2})/n$ percentile of the Chi-Square distribution with p degrees of freedom.
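The two steps of the Chi-Square plot algorithm can be sketched as follows; the sample of squared distances is simulated for illustration:

```python
import numpy as np
from scipy.stats import chi2

def chi_square_plot_points(d2, p):
    """Pairs for a Chi-Square plot: sort the squared distances, then pair
    the j-th order statistic with the 100*(j - 1/2)/n percentile of the
    Chi-Square distribution with p degrees of freedom."""
    d2_sorted = np.sort(np.asarray(d2))
    n = len(d2_sorted)
    q = chi2.ppf((np.arange(1, n + 1) - 0.5) / n, df=p)
    return d2_sorted, q

# hypothetical squared distances drawn from a chi-square(3) distribution
d2 = chi2.rvs(df=3, size=500, random_state=np.random.default_rng(0))
x, y = chi_square_plot_points(d2, p=3)
```

For well-behaved data the plotted points should fall close to a straight 45° line.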
Equation (3) will follow the $T_{p,\,n_1+n_2-2}^2$ distribution when $H_0$ is true, and

$$\frac{n_1 + n_2 - p - 1}{(n_1 + n_2 - 2)\,p}\, T^2 \sim F_{p,\,n_1+n_2-p-1} \qquad (4)$$

where p, the dimension of the statistic $T^2$, is the first number of degrees of freedom of the statistic F.
Fisher classifies an observation based on the score calculated from the linear function $Y = \boldsymbol{\ell}'X$, where $\boldsymbol{\ell}'$ is the vector containing the explanatory variable coefficients that form the linear equation of the response variable:

$$\boldsymbol{\ell}' = [\ell_1, \ell_2, \ldots, \ell_p], \qquad X = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$$

$X_k$ denotes the data matrix of the k-th group:

$$X_k = \begin{bmatrix} x_{11k} & x_{12k} & \cdots & x_{1pk} \\ x_{21k} & x_{22k} & \cdots & x_{2pk} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1k} & x_{n2k} & \cdots & x_{npk} \end{bmatrix}$$

with i = 1, 2, ..., n; j = 1, 2, ..., p; and k = 1 and 2, where $x_{ijk}$ denotes the i-th observation of the j-th variable in the k-th group.
The best linear combination according to Fisher is the one maximizing the ratio between the squared distance of the means of Y, obtained from X of groups 1 and 2, and the variance of Y, formulated as follows:

$$\frac{(\mu_{1Y} - \mu_{2Y})^2}{\sigma_Y^2} = \frac{\boldsymbol{\ell}'(\mu_1 - \mu_2)(\mu_1 - \mu_2)'\boldsymbol{\ell}}{\boldsymbol{\ell}'\Sigma\boldsymbol{\ell}} \qquad (5)$$

If $(\mu_1 - \mu_2) = \delta$, then equation (5) becomes $\dfrac{(\boldsymbol{\ell}'\delta)^2}{\boldsymbol{\ell}'\Sigma\boldsymbol{\ell}}$. Because $\Sigma$ is a positive definite matrix, by the Cauchy-Schwarz inequality this ratio is maximized when

$$\boldsymbol{\ell} = c\Sigma^{-1}\delta = c\Sigma^{-1}(\mu_1 - \mu_2)$$

Choosing c = 1 produces the so-called Fisher linear combination:

$$Y = \boldsymbol{\ell}'X = (\mu_1 - \mu_2)'\Sigma^{-1}X \qquad (6)$$
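Fisher's rule (6) can be sketched numerically, with the population quantities replaced by sample means and a pooled covariance estimate. The two simulated groups below are hypothetical:

```python
import numpy as np

def fisher_direction(X1, X2):
    """Fisher's linear discriminant, equation (6): the score is
    Y = l'X with l = inv(Sigma) @ (mu1 - mu2), Sigma pooled."""
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    n1, n2 = len(X1), len(X2)
    Sp = ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)  # pooled covariance
    return np.linalg.solve(Sp, mu1 - mu2)

rng = np.random.default_rng(2)
X1 = rng.multivariate_normal([2.0, 0.0], np.eye(2), size=100)
X2 = rng.multivariate_normal([-2.0, 0.0], np.eye(2), size=100)
l = fisher_direction(X1, X2)
# classify by score: assign to group 1 when Y lies above the midpoint score
m = 0.5 * (X1.mean(axis=0) + X2.mean(axis=0)) @ l
acc = ((X1 @ l > m).mean() + (X2 @ l < m).mean()) / 2
```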
The logistic regression model relates the probability of success $\pi(X)$ to the predictors through:

$$\pi(X) = \frac{e^{\beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_k X_k}}{1 + e^{\beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_k X_k}} \qquad (10)$$

The principle of the maximum likelihood method is to determine the parameter values that maximize the likelihood function. The estimated parameter values can be obtained with the aid of the IBM SPSS 19 statistical software.
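As an alternative to the SPSS computation mentioned above, the maximum likelihood estimates of model (10) can be sketched with Newton-Raphson iterations; the simulated data and the true coefficients (0.5 and 2.0) are hypothetical:

```python
import numpy as np

def logistic_mle(X, y, iters=25):
    """Maximum-likelihood fit of pi(X) = exp(g(X)) / (1 + exp(g(X))),
    g(X) = b0 + b1*X1 + ... + bk*Xk, via Newton-Raphson iterations."""
    Xd = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    b = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xd @ b))        # fitted probabilities
        W = p * (1.0 - p)
        grad = Xd.T @ (y - p)                    # score vector
        H = Xd.T @ (Xd * W[:, None])             # observed information matrix
        b = b + np.linalg.solve(H, grad)         # Newton-Raphson update
    return b

rng = np.random.default_rng(3)
x = rng.normal(size=(300, 1))
true_p = 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x[:, 0])))
y = (rng.random(300) < true_p).astype(float)
beta = logistic_mle(x, y)
```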
b. Wald Test
The Wald test is used to test partially (individually) which independent variables are significant, and which are not, in the multiple logistic regression model. The Wald test uses the statistic $Z$, which follows the standard normal distribution:
$$Z = \frac{\hat{\beta}_i}{SE(\hat{\beta}_i)} \qquad (12)$$

where $\hat{\beta}_i$ is the estimator of the parameter $\beta_i$ and $SE(\hat{\beta}_i)$ is the estimator of the standard error of the coefficient $\beta_i$. If $Z > Z_{1-\alpha/2}$ or $Z < -Z_{1-\alpha/2}$, then $H_0$ is rejected and $H_1$ is accepted.
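The test of equation (12) can be sketched directly (an illustrative sketch; the function name is ours):

```python
from scipy.stats import norm

def wald_test(beta_hat, se, alpha=0.05):
    """Wald statistic Z = beta_hat / SE(beta_hat) and the two-sided
    decision |Z| > z_{1 - alpha/2}."""
    z = beta_hat / se
    z_crit = norm.ppf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    return z, abs(z) > z_crit
```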
d. R-Square
The value of $R^2$ in logistic regression analysis shows the strength of the relationship between the independent variables and the dependent variable. The value of $R^2$ is determined using the formula:
$$R^2 = 1-\exp\!\left(-\frac{L^2}{n}\right) \qquad (13)$$

where $L$ is the log-likelihood value of the model and $n$ is the number of data points.
From Table 1 it can be concluded that in the group coded 0 only the independent variables ROI (x2), PER (x5), and LR (x7) are normally distributed, and in the group coded 1 only ROI (x2), ROE (x3), and ER (x6) are normally distributed. Thus, most of the independent variables are not normally distributed.
In this section we check whether there is a multicollinearity relationship between the independent variables. The multicollinearity check was performed using IBM SPSS 19 statistical software; the results are given in Table 2.
From the Table 2 it can be seen that there is some multicollinearity between the independent variables,
as follows:
- Correlation between the independent variables ROI (x2) and ROE (x3) is 0.870 > 0.5 (strong
correlation)
- Correlation between the independent variables SR (x4) and ER (x6) is 0.532 > 0.5 (strong correlation)
- Correlation between the independent variables SR (x4) and LR(x7) is 0.568 > 0.5 (strong correlation)
- Correlation between the independent variables ER (x6) and LR (x7) is 0.652 > 0.5 (strong correlation)
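The screening above flags variable pairs whose correlation exceeds 0.5; a sketch of the same check (done here in SPSS) in code, with names of our own choosing:

```python
import numpy as np

def strong_correlations(X, names, threshold=0.5):
    """Flag pairs of independent variables whose absolute Pearson
    correlation exceeds the 0.5 cut-off used in the text."""
    R = np.corrcoef(np.asarray(X, float), rowvar=False)
    flagged = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(R[i, j]) > threshold:
                flagged.append((names[i], names[j], round(float(R[i, j]), 3)))
    return flagged
```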
The test of the mean vectors was conducted to determine whether there is a difference between the groups. It was performed using IBM SPSS 19 statistical software; the results are given in Table 3.
From Table 3 it can be seen that, of the seven independent variables, only two differ significantly between the two groups discriminating the efficiency of the financial performance of the insurance companies: ROI (x2) and ROE (x3).
The equality of the variance-covariance matrices was tested with Box's M test using IBM SPSS 19 statistical software. The result of the variance-covariance equality test is given in Table 4.
Box's M test results: df1 = 28, df2 = 2070.719, Sig. = 0.000.
Because Sig. = 0.000 < 0.05, the hypothesis H0 is rejected, meaning the group covariance matrices are significantly different. Based on the results of the normality testing of the independent variables, the multicollinearity check, and the variance-covariance equality test, the assumptions are violated. In the case of only two groups/categories, if the group covariance matrices differ significantly, the process should not be continued (Santoso, 2005). This is also supported by an earlier study by Cooper and Emory (1995): when the distributions of most of the independent variables are not normal, testing with parametric analyses such as the t-test, Z-test, ANOVA, and discriminant analysis is not appropriate (Almilia, 2004).
Data that do not meet the assumption of multivariate normality can cause problems in the estimation of the discriminant function; therefore, where possible, logistic regression analysis can be used as an alternative (Gessner et al., 1998; Huberty, 1984; Johnson and Wichern, 1982).
From Table 5 it can be seen that the value of the log-likelihood is −16.402.
From Table 6 the following multiple logistic regression model is obtained:

$$\hat{\pi}(x) = \frac{e^{g(x)}}{1+e^{g(x)}}, \qquad g(x) = 1.702+0.022x_1+27.891x_2+16.267x_3+0.011x_4-0.034x_5+2.431x_6-2.286x_7$$
b. Wald Test
Based on the Wald tests reported in Table 6, it can be concluded that the independent variables Current Ratio (x1) and Solvency Ratio (x4) do not have a significant partial (individual) effect on the efficiency of the insurance companies' financial performance. The independent variables with a significant partial effect are ROI (x2), ROE (x3), PER (x5), Expenses Ratio (x6), and Loss Ratio (x7). Since Current Ratio (x1) and Solvency Ratio (x4) are not significant, they are removed from the model, and the parameter estimation is repeated with the five significant independent variables.
From Table 7 it can be seen that the value of the log-likelihood is −16.4255.
From Table 8 the multiple logistic regression model obtained is:

$$\hat{\pi}(x) = \frac{e^{g(x)}}{1+e^{g(x)}}, \qquad g(x) = 1.686+25.704x_2+16.927x_3-0.034x_5+2.306x_6-2.230x_7$$
$$G = 2\left\{\sum_{i=1}^{n}\left[y_i\ln\hat{\pi}_i+(1-y_i)\ln(1-\hat{\pi}_i)\right]-\left[n_1\ln n_1+n_0\ln n_0-n\ln n\right]\right\}$$
$$= 2\left\{-16.4255-\left[27\ln 27+13\ln 13-40\ln 40\right]\right\} = 2\left\{-16.4255-\left[88.9876+33.3443-147.5552\right]\right\} = 17.5956$$

Since $G = 17.5956 > \chi^2_{0.05;\,5} = 11.0705$, $H_0$ is rejected.
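The arithmetic of the likelihood-ratio statistic above can be verified directly with the values from the text ($L = -16.4255$, $n_1 = 27$, $n_0 = 13$, $n = 40$):

```python
import math

# Likelihood-ratio statistic G, using the log-likelihood from Table 7.
L, n1, n0, n = -16.4255, 27, 13, 40
G = 2 * (L - (n1 * math.log(n1) + n0 * math.log(n0) - n * math.log(n)))
```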
b. Wald Test
Based on the results in Table 8, after re-estimating the parameters and testing all the independent variables partially with the Wald test, the five independent variables ROI (x2), ROE (x3), PER (x5), Expenses Ratio (x6), and Loss Ratio (x7) all have a significant partial effect on the efficiency of the financial performance of the insurance companies.
d. R-Square
The value of $R^2$ in the logistic regression analysis shows the strength of the relationship between the independent variables and the dependent variable. The value of $R^2$ is determined as follows:
$$R^2 = 1-\exp\!\left(-\frac{L^2}{n}\right) = 1-\exp\!\left(-\frac{(-16.4255)^2}{40}\right) = 1-\exp(-6.7449) = 0.9988$$
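This computation can be checked directly with the values from the text:

```python
import math

# The paper's R-squared (equation (13)) with L = -16.4255 and n = 40.
L, n = -16.4255, 40
R2 = 1 - math.exp(-(L ** 2) / n)
```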
Because $R^2 = 0.9988$ (99.88%), the independent variables ROI (x2), ROE (x3), PER (x5), Expenses Ratio (x6), and Loss Ratio (x7) have a strong relationship with the efficiency of the financial performance of insurance companies.
From Table 9, the classification accuracy of the logistic regression model in predicting the efficiency of the insurance companies' financial performance is 77.5%.
5. Conclusions
In this case, the assumptions of multivariate normality, absence of multicollinearity, and equality of the variance-covariance matrices required by discriminant analysis are violated. The appropriate model for predicting the efficiency of the financial performance of insurance companies listed on the Indonesia Stock Exchange is therefore the logistic regression model, with a model accuracy of 77.5%; the rest is influenced by other factors. The variables that affect the efficiency of financial performance based on changes in income of insurance companies listed on the Indonesia Stock Exchange are ROI (x2), ROE (x3), PER (x5), Expenses Ratio (x6), and Loss Ratio (x7).
6. References
Agresti, A. 1996. An Introduction to Categorical Data Analysis. New York: John Wiley & Sons, Inc.
Almilia, L.S. 2004. Analisis Faktor-Faktor yang Mempengaruhi Kondisi Financial Distress Suatu Perusahaan yang Terdaftar di Bursa Efek Jakarta. Jurnal Riset Akuntansi Indonesia, Vol. 7, No. 1, January, pp. 1-22.
Hajarisman, N. 2008. Seri Buku Ajar Statistika Multivariat. Bandung: Program Studi Statistika
Universitas Islam Bandung.
Johnson, R.A., and Wichern, D.W. 1982. Applied Multivariate Statistical Analysis. New Jersey:
Prentice-Hall, Inc., Englewood Cliffs.
Mahmoud, O.H. 2008. A Multivariate Model for Predicting the Efficiency of Financial Performance
for Property and Liability Egyptian Insurance Companies. Casualty Actuarial Society.
Margaretta, Y. 2010. Analisis Rasio Keuangan, Kebijakan Deviden dengan Ukuran Perusahaan sebagai
Variabel Kontrol dalam Memprediksi Pertumbuhan Laba pada Perusahaan Manufaktur yang
Terdaftar di Bursa Efek Indonesia. Surabaya: Skripsi Program S1 Akuntansi STIE PERBANAS.
Santoso, S. 2005. Menggunakan SPSS untuk Statistika Multivariat. Jakarta: Elex Media Komputindo.
Santoso, S. 2010. Statistika Multivariat Konsep dan Aplikasinya. Jakarta: Elex Media Komputindo.
Zainudin and Jogiyanto, H. 1999. Manfaat Rasio Keuangan dalam Memprediksi Pertumbuhan Laba (Studi Empiris pada Perusahaan Perbankan yang Terdaftar di Bursa Efek Jakarta). Jurnal Riset Ekonomi dan Akuntansi Indonesia, Vol. 2, No. 1, January, pp. 66-90.
Tobing, M. 2011. http://www.martintobing.com/view/214 (accessed 31 January 2012).
_____________. http://cafe-ekonomi.blogspot.com/2009/09/artikel-tentang-laba.html (accessed 27 March 2012).
Abstract: The set $z^{-1}F^m[[z^{-1}]]$ is a vector space consisting of power series in $z^{-1}$ with coefficients in $F^m$. The subspaces of $z^{-1}F^m[[z^{-1}]]$ have many properties that are often used in behavior theory. In this paper we discuss one such property: the necessary and sufficient condition under which the annihilator's preannihilator of a subspace of $z^{-1}F^m[[z^{-1}]]$ is equal to the subspace itself.
1. Introduction
The behavior theory is an interesting subject to study from an algebraic point of view. Willems [1] defined a behavior as the set of all trajectories of a dynamical system. From the algebraic point of view, a behavior can be seen as a linear, shift-invariant, and complete subspace of $z^{-1}F^m[[z^{-1}]]$. In this paper we are not focusing on behavior theory itself; rather, we discuss a property of subspaces of $z^{-1}F^m[[z^{-1}]]$ that is useful for studying behavior theory from the algebraic point of view.
2. Preliminaries
Let $F$ be an arbitrary field and $F^m$ the space of all $m$-vectors with coordinates in $F$. The set $F((z^{-1}))$ is defined as follows:

$$F((z^{-1})) = \left\{\sum_{i=-\infty}^{n_f} f_i z^i \;\middle|\; f_i \in F,\; n_f \in \mathbb{Z}\right\}. \qquad (1)$$
If $f(z) = \sum_{i=-\infty}^{n_f} f_i z^i$ and $g(z) = \sum_{i=-\infty}^{n_g} g_i z^i$ are both elements of $F((z^{-1}))$, then the operations of addition and multiplication are defined by

$$(f+g)(z) = \sum_{i=-\infty}^{\max\{n_f,\,n_g\}} (f_i+g_i)\, z^i,$$

and

$$(fg)(z) = \sum_{k=-\infty}^{n_f+n_g} h_k z^k, \qquad \text{where } h_k = \sum_{i} f_i\, g_{k-i}$$

(P.A. Fuhrmann, 2010).
The set $F[z]$, the polynomial ring over the field $F$, is the set of all polynomials of the form $\sum_{i=0}^{n_f} f_i z^i$, where $f_i \in F$ and $n_f$ is a nonnegative integer. It can also be expressed as a subset of $F((z^{-1}))$ as follows:
$$F[z] = \left\{\sum_{i=0}^{n_f} f_i z^i \;\middle|\; f_i \in F,\; n_f \in \mathbb{N}\cup\{0\}\right\}.$$

The set $F[z]$ is the polynomial ring over the field $F$ [2].
The set of all formal power series in $z^{-1}$ with coefficients in the field $F$, denoted $F[[z^{-1}]]$, can be expressed as follows:

$$F[[z^{-1}]] = \left\{\sum_{i=0}^{\infty} f_i z^{-i} \;\middle|\; f_i \in F\right\}.$$

The set $z^{-1}F[[z^{-1}]]$ is defined by

$$z^{-1}F[[z^{-1}]] = \left\{\sum_{i=1}^{\infty} f_i z^{-i} \;\middle|\; f_i \in F\right\}.$$
We have defined the sets $F((z^{-1}))$, $F[z]$, and $F[[z^{-1}]]$ at the beginning of this section. Now we discuss the sets $F^m((z^{-1}))$, $F^m[z]$, and $F^m[[z^{-1}]]$, which are vector spaces over the field $F$. The set $F^m((z^{-1}))$ is defined as follows:

$$F^m((z^{-1})) = \left\{\sum_{i=-\infty}^{n_f} f_i z^i \;\middle|\; f_i \in F^m,\; n_f \in \mathbb{Z}\right\}.$$

Elements of $F[z]^m$ can be identified with elements of the set $F^m[z]$, that is,

$$F^m[z] = \left\{\sum_{i=0}^{n_f} f_i z^i \;\middle|\; f_i \in F^m,\; n_f \in \mathbb{N}\cup\{0\}\right\}.$$

The set $F^m[z]$ is a module over the ring $F[z]$. The set $F^m[[z^{-1}]]$ is defined as

$$F^m[[z^{-1}]] = \left\{\sum_{i=0}^{\infty} f_i z^{-i} \;\middle|\; f_i \in F^m\right\},$$

and the set $z^{-1}F^m[[z^{-1}]]$ by

$$z^{-1}F^m[[z^{-1}]] = \left\{\sum_{i=1}^{\infty} f_i z^{-i} \;\middle|\; f_i \in F^m\right\}.$$

The set $z^{-1}F^m[[z^{-1}]]$ is a vector space over the field $F$.
The truncation map $P_n$ is defined by

$$P_n\!\left(\sum_{i=1}^{\infty} h_i z^{-i}\right) = \sum_{i=1}^{n} h_i z^{-i}. \qquad (2)$$

A subset $B \subseteq z^{-1}F^m[[z^{-1}]]$ is complete if, for any $w = \sum_{i=1}^{\infty} w_i z^{-i} \in z^{-1}F^m[[z^{-1}]]$ such that for every $n$ there exists $h \in B$ with $P_n(h) = P_n(w)$, we have $w \in B$.

A mapping $(\cdot,\cdot) : E \times F \to \Gamma$ satisfying

$$(x_1+x_2,\, y) = (x_1,y)+(x_2,y), \qquad x_1, x_2 \in E,\; y \in F,$$

and

$$(x,\, y_1+y_2) = (x,y_1)+(x,y_2), \qquad x \in E,\; y_1, y_2 \in F,$$

is called a bilinear function on $E \times F$ (Greub, G.H., 1967).
The bilinear form on $F^m((z^{-1}))$ is defined by

$$[\cdot,\cdot] : F^m((z^{-1})) \times F^m((z^{-1})) \to F, \qquad (3)$$

with the following rule: for all $f, g \in F^m((z^{-1}))$, where $f(z) = \sum_{j=-\infty}^{n_f} f_j z^j$ and $g(z) = \sum_{j=-\infty}^{n_g} g_j z^j$ with $f_j, g_j \in F^m$,

$$[f, g] = \sum_{j=-\infty}^{\infty} g_j^T f_{-j-1} = g_{n_g}^T f_{-n_g-1} + \cdots + g_0^T f_{-1} + g_{-1}^T f_0 + \cdots + g_{-n_f-1}^T f_{n_f}. \qquad (4)$$

The sum on the right-hand side has only finitely many nonzero terms, so $[\cdot,\cdot]$ is well defined. We can also show that $[\cdot,\cdot]$ is a bilinear form on $F^m((z^{-1})) \times F^m((z^{-1}))$. Next, consider the following definitions and results.
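As an illustration of this pairing, for scalar series ($m = 1$) the sum in equation (4) can be computed from coefficient lists (a sketch; the list convention is our own):

```python
def pairing(f_neg, g_poly):
    """The bilinear form [f, g] = sum_j g_j * f_{-j-1} of equation (4)
    for m = 1.  f_neg[i] holds the coefficient f_{-(i+1)} of z^{-(i+1)};
    g_poly[j] holds the polynomial coefficient g_j.  Terms beyond either
    list are zero, so the sum is finite."""
    return sum(g_poly[j] * f_neg[j] for j in range(min(len(g_poly), len(f_neg))))
```

For example, $[z^{-1}, 1] = g_0 f_{-1} = 1$ and $[z^{-2}, z] = g_1 f_{-2} = 1$.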
m m
Definition 2 (Fuhrmann P.A., 2002). Let $M \subseteq F^m[z]$ be a subspace. We define the annihilator of $M$ by

$$M^{\perp} = \{f \in z^{-1}F^m[[z^{-1}]] \mid [g,f] = 0 \;\; \forall g \in M\},$$

which is also a subspace of $z^{-1}F^m[[z^{-1}]]$. Let $V \subseteq z^{-1}F^m[[z^{-1}]]$ be a subspace; the preannihilator $^{\perp}V$ is defined as

$$^{\perp}V = \{k \in F^m[z] \mid [h,k] = 0 \;\; \forall h \in V\},$$

which is also a subspace of $F^m[z]$.
Lemma 3 (Fuhrmann P.A., 2002). Let $A = P_{n_0}(z^{-1}F^m[[z^{-1}]])$ and let $A^*$ denote its dual. Every functional in $A^*$ can be identified with an element of $F^m[z]$.
Proposition 4 (Fuhrmann P.A., 2002). Let $V \subseteq z^{-1}F^m[[z^{-1}]]$ be a subspace. Then $({}^{\perp}V)^{\perp} = V$ if and only if for every $h \in z^{-1}F^m[[z^{-1}]] \setminus V$ there exists $k \in {}^{\perp}V$ such that $[k,h] \neq 0$.

Proof. [$\Rightarrow$] Assume $({}^{\perp}V)^{\perp} = V$ holds. We will show that for any $h \in z^{-1}F^m[[z^{-1}]] \setminus V$ there exists $k \in {}^{\perp}V$ such that $[k,h] \neq 0$. Let $h$ be an arbitrary element of $z^{-1}F^m[[z^{-1}]] \setminus V$. From the assumption, $h \notin ({}^{\perp}V)^{\perp}$. This implies that there exists $k \in {}^{\perp}V$ such that $[h,k] \neq 0$.

[$\Leftarrow$] We prove this implication by contradiction. Assume $({}^{\perp}V)^{\perp} \neq V$. It is obvious that $V \subseteq ({}^{\perp}V)^{\perp}$, so there exists $h \in ({}^{\perp}V)^{\perp} \setminus V$ such that for all $k \in {}^{\perp}V$ we have $[h,k] = 0$. This contradicts the assumption that for all $h \in z^{-1}F^m[[z^{-1}]] \setminus V$ there exists $k \in {}^{\perp}V$ such that $[k,h] \neq 0$. Thus $({}^{\perp}V)^{\perp} = V$.
is a finite-dimensional vector space. Let $y = P_{n_0}(h)$ and $V_{n_0} = P_{n_0}(V)$. Based on the assumption, $y \notin V_{n_0}$; so if $B = \{b_1, \ldots, b_r\}$ is a basis for $V_{n_0}$, then $B \cup \{y\}$ is a basis for $\mathrm{span}\{V_{n_0}, y\}$. Hence there exists a linear functional

$$\varphi : \mathrm{span}\{V_{n_0}, y\} \to F \qquad \text{with} \qquad \varphi(b_i) = 0, \;\; \varphi(y) = 1.$$

In other words, there exists $\varphi$ such that $\varphi(V_{n_0}) = 0$ and $\varphi(y) \neq 0$. Extending $\varphi$ to $\varphi : X_n^* \to F$ implies $\varphi \in (X_n^*)^* = (P_{n_0}(X^*))^*$. From Lemma 3, we can identify $\varphi$ with an element of $F^m[z]$. In other words, there exists $f \in {}^{\perp}V$ such that $[f,h] \neq 0$. Thus, based on Proposition 4, we have proved that $V$ is complete if and only if $({}^{\perp}V)^{\perp} = V$.
4. Conclusion
The set $z^{-1}F^m[[z^{-1}]]$ is a vector space consisting of power series in $z^{-1}$ with coefficients in $F^m$. Its subspaces have many properties that are often used in behavior theory. One of them is described in Proposition 5: a subspace $V \subseteq z^{-1}F^m[[z^{-1}]]$ is complete if and only if its annihilator's preannihilator is equal to the subspace itself.
Acknowledgements
We would like to thank all the people who prepared and revised previous versions of this document.
References
Fuhrmann, P.A. (2002). A Study of Behavior, Linear Algebra and Its Applications, vol. 351-352, 2002, pp. 303-
380.
Fuhrmann, P.A. (2010). A Polynomial Approach to Linear Algebra. Springer.
Greub, G.H. (1967). Linear Algebra. Springer-Verlag.
Willems, J.C. (1986). From Time Series to Linear Systems. Part I: Finite-Dimensional Linear Time Invariant
Systems. Automatica 22.
Abstract: In this paper we investigate a graph-coloring result of Michael Larsen, James Propp and Daniel Ullman from 1995, namely "The Fractional Chromatic Number of Mycielski's Graphs": the fractional clique number of a graph G is bounded below by the integer clique number, is equal to the fractional chromatic number, and is bounded above by the integer chromatic number. In other words,
𝜔(𝐺) ≤ 𝜔𝐹 (𝐺) = 𝜒𝐹 (𝐺) ≤ 𝜒(𝐺)
Given this relationship, a natural question is whether the differences 𝜔𝐹 (𝐺) − 𝜔(𝐺) and 𝜒(𝐺) − 𝜒𝐹 (𝐺) can be made arbitrarily large. This question is answered in the affirmative, using a sequence of graphs to show that the differences grow without bound. To determine fractional colorings and the fractional chromatic number, we proceed in two different ways: first intuitively, in a combinatorial manner characterized in terms of graph homomorphisms, and then in terms of independent sets, with calculations using linear programming. In this second context, fractional cliques are defined, and we see how they relate to fractional colorings. The relationship between fractional colorings and fractional cliques is the key to the proof of Larsen, Propp, and Ullman.
Keywords: fractional clique number, fractional chromatic number, Mycielski graphs, linear programming.
1. Introduction
In this paper, we discuss a result about graph colorings from 1995. The paper we will be investigating
is "The Fractional Chromatic Number of Mycielski's Graphs," by Michael Larsen, James Propp and
Daniel Ullman [3].
We will begin with some preliminary definitions, examples, and results about graph colorings.
Then we will define fractional colorings and the fractional chromatic number, which are the focus of
Larsen, Propp and Ullman's paper. We will define fractional colorings in two different ways: first in a
fairly intuitive, combinatorial manner that is characterized in terms of graph homomorphisms, and then
in terms of independent sets, which as we shall see, lends itself to calculation by means of linear
programming. In this second context, we shall also define fractional cliques, and see how they relate to
fractional colorings. This connection between fractional colorings and fractional cliques is the key to
Larsen, Propp and Ullman's proof.
A graph is defined as a set of vertices and a set of edges joining pairs of vertices. The precise definition
of a graph varies from author to author; in this paper, we will consider only finite, simple graphs, and
shall tailor our definition accordingly.
A graph G is an ordered pair (V (G), E(G)), consisting of a vertex set, V (G), and an edge set,
E(G). The vertex set can be any finite set, as we are considering only finite graphs. Since we are only
considering simple graphs, and excluding loops and multiple edges, we can define E(G) as a subset of
the set of all unordered pairs of distinct elements of V (G).
If u and v are elements of V (G), and {u, v} ∈ E(G), then we say that u and v are adjacent, denoted u ~
v. Adjacency is a symmetric relation, and in the case of simple graphs, anti-reflexive. A set of pairwise
adjacent vertices in a graph is called a clique and a set of pairwise non-adjacent vertices is called an
independent set.
For any graph G, we define two parameters: 𝛼(G), the independence number, and 𝜔(G), the
clique number. The independence number is the size of the largest independent set in V (G), and the
clique number is the size of the largest clique.
2.1.2 Examples
As examples, we define two families of graphs, the cycles and the complete graphs.
The cycle on n vertices (n > 1), denoted Cn, is a graph with V (Cn) = {1,. . . , n} and x ~ y in Cn if and
only if 𝑥 − 𝑦 ≡ ± 1 (mod n). We often depict Cn as a regular n-gon. The independence and clique numbers are easy to calculate: we have 𝛼(𝐶𝑛 ) = ⌊𝑛/2⌋ and 𝜔(Cn) = 2 (except for C3, which has a clique number of 3).
The complete graph on n vertices, Kn, is a graph with V (Kn) = {1, . . . , n} and x ~ y in Kn for all x ≠ y. It is immediate that 𝛼(Kn) = 1 and 𝜔(Kn) = n. The graphs C5 and K5 are shown in Figure 1.
Figure 1: The graphs C5 and K5.
A proper n-coloring (or simply a proper coloring) of a graph G can be thought of as a way of assigning,
from a set of n "colors", one color to each vertex, in such a way that no adjacent vertices have the same
color. A more formal definition of a proper coloring relies on the idea of graph homomorphisms.
If G and H are graphs, a graph homomorphism from G to H is a mapping 𝜙 : V (G) → V (H) such that u ~ v in G implies 𝜙(u) ~ 𝜙(v) in H. A bijective graph homomorphism whose inverse is also a graph
homomorphism is called a graph isomorphism.
Now we may define a proper n-coloring of a graph G as a graph homomorphism from G to Kn.
This is equivalent to our previous, informal definition, which can be seen as follows. Given a "color"
for each vertex in G, with adjacent vertices always having different colors, we may define a
homomorphism that sends all the vertices of the same color to the same vertex in Kn. Since adjacent
vertices have different colors assigned to them, they will be mapped to different vertices in Kn, which
are adjacent. Conversely, any homomorphism from G to Kn assigns to each vertex of G an element of
{1, 2, . . . , n}, which may be viewed as colors. Since no vertex in Kn is adjacent to itself, no adjacent
vertices in G will be assigned the same color.
In a proper coloring, if we consider the inverse image of a single vertex in Kn, i.e., the set of all
vertices in G with a certain color, it will always be an independent set. This independent set is called a
color class associated with the proper coloring. Thus, a proper n-coloring of a graph G can be thought
of as a covering of the vertex set of G with independent sets.
We define a graph parameter 𝜒(G), the chromatic number of G, as the smallest positive integer n
such that there exists a proper n-coloring of G. Equivalently, the chromatic number is the smallest
number of independent sets required to cover V (G). Any finite graph with k vertices can certainly be
colored with k colors, so we see that 𝜒(G) is well-defined for a finite graph G, and bounded from above
by |𝑉 (𝐺)|. It is also clear that, if we have a proper n-coloring of G, then 𝜒(G) ≤ n.
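The homomorphism view of a proper coloring amounts to a simple check that adjacent vertices receive different colors. A minimal sketch (function names are ours):

```python
def cycle_edges(n):
    # C_n: x ~ y iff x - y = +/-1 (mod n); vertices are 0..n-1 here
    return [(i, (i + 1) % n) for i in range(n)]

def is_proper_coloring(edges, coloring):
    """True iff no edge joins two vertices of the same color, i.e.
    `coloring` is a graph homomorphism into a complete graph."""
    return all(coloring[u] != coloring[v] for u, v in edges)
```

For C5, three colors suffice but two do not, consistent with 𝜒(C5) = 3.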
We can establish some inequalities relating the chromatic number to the other parameters we
have defined. First, 𝜔(G) ≤ 𝜒(G), since all the vertices in a clique must be different colors. Also, since each color class is an independent set, we have |𝑉(𝐺)|/𝛼(𝐺) ≤ 𝜒(𝐺), where equality is attained if and only if each color class in an optimal coloring is the size of the largest independent set.
We can calculate the chromatic number for our examples. For the complete graphs, we have
𝜒(Kn) = n, and for the cycles we have 𝜒(Cn) = 2 for n even and 3 for n odd. In Figure 2, we see C5 and
K5 colored with three and five colors, respectively.
Figure 2: C5 and K5 colored with three and five colors, respectively.
We now generalize the idea of a proper coloring to that of a fractional coloring (or a set coloring),
which allows us to define a graph's fractional chromatic number, denoted 𝜒𝐹 (G), which can assume
non-integer values.
Given a graph, integers 0 < b ≤ a, and a set of a colors, a proper a/b-coloring is a function that
assigns to each vertex a set of b distinct colors, in such a way that adjacent vertices are assigned disjoint
sets. Thus, a proper n-coloring is equivalent to a proper n/1-coloring.
The definition of a fractional coloring can also be formalized by using graph homomorphisms.
To this end, we define another family of graphs, the Kneser graphs. For each ordered pair of positive
integers (a, b) with a ≥ b, we define a graph Ka:b. As the vertex set of Ka:b, we take the set of all b-
element subsets of the set {1, . . . , a}. Two such subsets are adjacent in Ka:b if and only if they are
disjoint. Note that Ka:b is an empty graph (i.e., its edge set is empty) unless a ≥ 2b.
Just as a proper n-coloring of a graph G can be seen as a graph homomorphism from G to the
graph Kn, so a proper a/b-coloring of G can be seen as a graph homomorphism from G to Ka:b.
The fractional chromatic number of a graph, 𝜒𝐹 (G), is the infimum of all rational numbers a/b
such that there exists a proper a/b-coloring of G. From this definition, it is not immediately clear that
𝜒𝐹 (G) must be a rational number for an arbitrary graph. In order to show that it is, we will use a different
definition of fractional coloring, but first, we establish some bounds for 𝜒𝐹 (G) based on our current
definition.
We can get an upper bound on the fractional chromatic number using the chromatic number. If we have a proper n-coloring of G, we can obtain a proper nb/b-coloring for any positive integer b by replacing each individual color with b different colors. Thus, we have 𝜒𝐹 (G) ≤ 𝜒(G), or in terms of
homomorphisms, we can simply note the existence of a homomorphism from Kn to Knb:b (namely, map
i to the set of j ≡ i (mod n)).
To obtain one lower bound on the fractional chromatic number, we note that a graph containing
an n-clique has a fractional coloring with b colors on each vertex only if we have at least n . b colors to
choose from; in other words, 𝜔(G) ≤ 𝜒𝐹 (G).
Just as with proper colorings, we can obtain another lower bound from the independence number.
Since each color in a fractional coloring is assigned to an independent set of vertices (the fractional color class), we have |𝑉(𝐺)| · b ≤ 𝛼(G) · a, or |𝑉(𝐺)|/𝛼(𝐺) ≤ 𝜒𝐹 (𝐺).
Another inequality, which will come in handy later, regards fractional colorings of subgraphs. A
graph H is said to be a subgraph of G if V (H) ⊆ V (G) and E(H) ⊆ E(G). Notice that if H is a subgraph
of G, then any proper a/b-coloring of G, restricted to V (H), is a proper a/b-coloring of H. This tells us
that 𝜒𝐹 (H) ≤ 𝜒𝐹 (G).
Figure: a proper 5/2-coloring of C5, assigning the 2-sets {1, 2}, {3, 4}, {5, 1}, {2, 3}, {4, 5} so that adjacent vertices receive disjoint sets.
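The disjointness condition defining a proper a/b-coloring (equivalently, a homomorphism into the Kneser graph Ka:b) can be checked directly; a sketch with names of our own choosing:

```python
def is_proper_set_coloring(edges, sets):
    """True iff adjacent vertices receive disjoint color sets, i.e.
    the assignment is a homomorphism into a Kneser graph."""
    return all(sets[u].isdisjoint(sets[v]) for u, v in edges)
```

Applied to C5 with the 2-sets above, the check confirms a proper 5/2-coloring.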
∑𝐽∈𝐼(𝐺,𝑢) 𝑓(𝐽) = 1
for each vertex u. The weight of this fractional coloring is simply the number of colors.
Next, suppose we have a graph G with a proper a/b-coloring as defined above, with a b-element set of colors associated with each vertex. Again, each color determines a color class, which is an independent set. If we define a function that sends each color class to 1/𝑏 and every other independent set to 0, then again we have, for each vertex u, ∑𝐽∈𝐼(𝐺,𝑢) 𝑓(𝐽) = 1, so we have a fractional coloring by our new definition, with weight a/b.
Finally, let us consider translating from the new definition to the old one. Suppose we have a
graph G and a function f mapping from I(G) to [0, 1] ⋂ Q. (We will see below why we are justified in
restricting our attention to rational valued functions.) Since the graph G is finite, the set I(G) is finite,
and the image of the function f is a finite set of rational numbers. This set of numbers has a lowest
common denominator, b. Now suppose we have an independent set I which is sent to the number m/b.
Thus, we can choose m different colors, and let the set I be the color class for each of them. Proceeding
in this manner, we will assign at least b different colors to each vertex, because of our condition that for
all u ∑𝐽∈𝐼(𝐺,𝑢) 𝑓( 𝐽 ) ≥ 1. If some vertices are assigned more than b colors, we can ignore all but b of
them, and we have a fractional coloring according to our old definition. If the weight of f is a/b and we do not ignore any colors completely, then we will have obtained a proper a/b-coloring. If some colors are ignored, then we actually have a proper d/b fractional coloring, for some d < a.
The usefulness of this new definition of fractional coloring and fractional chromatic number in terms
of independent sets is that it leads us to a method of calculation using the tools of linear programming.
To this end, we will construct a matrix representation of a fractional coloring.
For a graph G, define a matrix A(G), with columns indexed by V (G) and rows indexed by I(G).
Each row is essentially the characteristic function of the corresponding independent set, with entries
equal to 1 on columns corresponding to vertices in the independent set, and 0 otherwise.
Now let f be a fractional coloring of G and let y(G) be the vector indexed by I(G) with entries
given by f. With this notation, and letting 1 denote the all 1's vector, the inequality y(G)TA(G) ≥ 1T
expresses the condition that
∑ 𝑓( 𝐽 ) ≥ 1
𝐽∈𝐼(𝐺,𝑢)
for all u ∈V (G).
In this algebraic representation of a fractional coloring, the determination of fractional chromatic
number becomes a linear programming problem. The entries of the vector y(G) are a set of variables,
one for each independent set in V (G), and our task is to minimize the sum of the variables (the weight
of the fractional coloring), subject to the set of constraints that each entry in the vector y(G)TA(G) be
at least 1, and that each variable be in the interval [0, 1]. This amounts to minimizing a linear
function within a convex polyhedral region in n-dimensional space defined by a finite number of linear
inequalities, where n = |𝐼(𝐺)|. This minimum must occur at a vertex of the region. Since each
hyperplane forming a face of the region is determined by a linear equation with integer coefficients,
then each vertex has rational coordinates, so our optimal fractional coloring will indeed take on rational
values, as promised.
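Under this independent-set formulation, the fractional chromatic number of a small graph can be computed with an off-the-shelf LP solver. The following sketch (assuming numpy and scipy are available; names are ours) recovers 𝜒𝐹(C5) = 5/2:

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def fractional_chromatic_number(n_vertices, edges):
    """Solve min sum(y) subject to y^T A >= 1, y >= 0, where the rows
    of A are the characteristic vectors of the independent sets."""
    edge_set = {frozenset(e) for e in edges}
    verts = range(n_vertices)
    ind_sets = [set(s) for r in range(1, n_vertices + 1)
                for s in combinations(verts, r)
                if not any(frozenset(p) in edge_set for p in combinations(s, 2))]
    A = np.array([[1 if v in s else 0 for v in verts] for s in ind_sets])
    # y^T A >= 1  becomes  -A^T y <= -1 in scipy's A_ub convention
    res = linprog(c=np.ones(len(ind_sets)), A_ub=-A.T,
                  b_ub=-np.ones(n_vertices), bounds=(0, None), method="highs")
    return res.fun

chi_f_C5 = fractional_chromatic_number(5, [(i, (i + 1) % 5) for i in range(5)])
```

The optimum puts weight 1/2 on each of the five maximum independent sets of C5, giving total weight 5/2.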
The regular integer chromatic number can be calculated with the same linear program by
restricting the values in the vector y(G) to 0 and 1. This is equivalent to covering the vertex set by
independent sets that may only have weights of 1 or 0. Although polynomial time algorithms exist for
calculating optimal solutions to linear programs, this is not the case for integer programs or 0-1
programs. In fact, many such problems have been shown to be NP-hard. In this respect, fractional
chromatic numbers are easier to calculate than integer chromatic numbers.
The linear program that calculates a graph's fractional chromatic number is the dual of another
linear program, in which we attempt to maximize the sum of elements in a vector x(G), subject to the
constraint A(G)x(G) ≤ 1. We can pose this maximization problem as follows: we want to define a
function h : V (G) → [0, 1], with the condition that, for each independent set in I(G), the sum of function
values on the vertices in that set is no greater than 1. Such a function is called a fractional clique, the
dual concept of a fractional coloring. As with fractional colorings, we define the weight of a fractional
clique to be the sum of its values over its domain. The supremum of weights of fractional cliques
defined for a graph is a parameter, 𝜔𝐹 (G), the fractional clique number.
Just as we saw a fractional coloring as a relaxation of the idea of an integer coloring, we would
like to understand a fractional clique as a relaxation of the concept of an integer clique to the rationals
(or reals). It is fairly straightforward to understand an ordinary clique as a fractional clique: we begin
by considering a graph G, and a clique, C ⊆ V (G). We can define a function h : V (G) → [0, 1] that
takes on the value 1 for each vertex in C and 0 elsewhere. This function satisfies the condition that its
values sum to no more than 1 over each independent set, for no independent set may intersect the clique
C in more than one vertex. Thus the function is a fractional clique, whose weight is the number of
vertices in the clique.
127
Proceedings of the International Conference on Mathematical and Computer Sciences
Jatinangor, October 23rd-24th , 2013
Since an ordinary n-clique can be interpreted as a fractional clique of weight n, we can say that
for any graph G, 𝜔(G) ≤ 𝜔𝐹 (G).
The most important identity we will use to establish our main result is the equality of the
fractional chromatic number and the fractional clique number. Since the linear programs which
calculate these two parameters are dual to each other, we apply the Strong Duality Theorem of Linear
Programming. We state the theorem in full. The reader is referred to [4] for more information about
linear programming.
Consider a primal LP of the form:

Maximize cᵀx
subject to Ax ≤ b
and x ≥ 0

with its dual, of the form:

Minimize yᵀb
subject to yᵀA ≥ cᵀ
and y ≥ 0

If both LPs are feasible, i.e., have non-empty feasible regions, then both can be optimized, and the two
objective functions have the same optimal value.
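As an illustration (not part of the original text), strong duality can be checked numerically on a small LP whose primal and dual optima are worked out by hand; the particular LP below is an invented example:

```python
# Primal LP: maximize x1 + x2  s.t.  x1 + 2*x2 <= 4,  3*x1 + x2 <= 6,  x >= 0.
# Its dual:  minimize 4*y1 + 6*y2  s.t.  y1 + 3*y2 >= 1,  2*y1 + y2 >= 1,  y >= 0.
x = (1.6, 1.2)                     # candidate primal optimum (intersection of both constraints)
y = (0.4, 0.2)                     # candidate dual optimum

# Both candidates are feasible for their respective programs.
assert x[0] + 2 * x[1] <= 4 + 1e-12 and 3 * x[0] + x[1] <= 6 + 1e-12
assert y[0] + 3 * y[1] >= 1 - 1e-12 and 2 * y[0] + y[1] >= 1 - 1e-12

primal_obj = x[0] + x[1]           # c^T x
dual_obj = 4 * y[0] + 6 * y[1]     # y^T b

# Weak duality gives primal_obj <= dual_obj; equality certifies both are optimal.
assert abs(primal_obj - dual_obj) < 1e-9
```

Equal objective values are exactly the certificate of optimality that the theorem provides.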
In the case of the fractional chromatic number and the fractional clique number, our primal LP is the
one that calculates the fractional clique number of a graph G. The vector c determining the objective
function is the all-1s vector, of dimension |V(G)|, and the constraint vector b is the all-1s vector, of
dimension |I(G)|. The matrix A is the matrix described above, whose rows are the characteristic vectors
of the independent sets in I(G), defined over V(G). The vector x for which we seek to maximize the
objective function cᵀx has as its entries the values of a fractional clique at each vertex. The vector y for
which we seek to minimize the objective function yᵀb has as its entries the values of a fractional coloring
on each independent set.
In order to apply the Strong Duality Theorem, we need only establish that both LPs are feasible.
Fortunately, this is easy: the zero vector is in the feasible region for the primal LP, and any proper
coloring is in the feasible region for the dual. Thus, we may conclude that both objective functions have
the same optimal value; i.e., that for a graph G, we have 𝜔𝐹 (G) = 𝜒𝐹 (G).
This equality gives us a means of calculating these parameters. Suppose that, for a graph G, we
find a fractional clique with weight equal to r. Since the fractional clique number is the supremum of
weights of fractional cliques, we can say that r ≤ 𝜔𝐹 (G). Now suppose we also find a fractional coloring
of weight r. Then, since the fractional chromatic number is the infimum of weights of fractional
colorings, we obtain 𝜒𝐹 (G) ≤ r. Combining these with the equality we obtained from duality, we get
that 𝜔𝐹 (G) = r = 𝜒𝐹 (G). This is the method we use to prove our result.
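As a small illustration of this certificate method (the 5-cycle and the value 5/2 are standard examples, not taken from this paper), the following sketch checks a fractional coloring and a fractional clique of equal weight on C5:

```python
from itertools import combinations

# The 5-cycle C5: vertices 0..4, edges between consecutive vertices mod 5.
n = 5
edges = {frozenset((i, (i + 1) % n)) for i in range(n)}

def independent(s):
    return all(frozenset(p) not in edges for p in combinations(s, 2))

# All independent sets of C5 (needed for the fractional-clique constraint).
ind_sets = [set(s) for r in range(n + 1)
            for s in combinations(range(n), r) if independent(s)]

# Fractional coloring: weight 1/2 on each of the five maximal independent sets.
coloring = {frozenset({i, (i + 2) % n}): 0.5 for i in range(n)}
assert all(independent(s) for s in coloring)                  # color classes are independent
assert all(sum(w for s, w in coloring.items() if v in s) >= 1 # every vertex is covered
           for v in range(n))
w_coloring = sum(coloring.values())                           # so chi_F(C5) <= 5/2

# Fractional clique: h(v) = 1/2 on every vertex.
h = {v: 0.5 for v in range(n)}
assert all(sum(h[v] for v in s) <= 1 for s in ind_sets)       # sums to <= 1 on independent sets
w_clique = sum(h.values())                                    # so omega_F(C5) >= 5/2

# Equal weights certify chi_F(C5) = omega_F(C5) = 5/2.
assert w_coloring == w_clique == 2.5
```

The two matching weights play exactly the roles of r in the argument above.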
We have noted that the fractional clique number of a graph G is bounded from below by the integer
clique number, and that it is equal to the fractional chromatic number, which is bounded from above by
the integer chromatic number. In other words, 𝜔(G) ≤ 𝜔𝐹 (G) = 𝜒𝐹 (G) ≤ 𝜒(G).
Given these relations, one natural question to ask is whether the differences 𝜔𝐹 (G) − 𝜔(G) and
𝜒(G) − 𝜒𝐹 (G) can be made arbitrarily large. We shall answer this question in the affirmative, by
displaying a sequence of graphs for which both differences increase without bound.
The sequence of graphs we will consider is obtained by starting with a single edge K2, and
repeatedly applying a graph transformation, which we now define. Suppose we have a graph G, with V
(G) = {v1, v2, . . . , vn}. The Mycielski transformation of G, denoted 𝜇(G), has for its vertex set the set
{x1, x2, . . . , xn, y1, y2, . . . , yn, z} for a total of 2n + 1 vertices. As for adjacency, we put
xi ~ xj in 𝜇(G) if and only if vi ~ vj in G,
xi ~ yj in 𝜇(G) if and only if vi ~ vj in G,
and yi ~ z in 𝜇(G) for all i ∈ {1, 2, . . . , n}. See Figure 4 below.
[Figure 4. The Mycielski transformation: K2 on {v1, v2} with μ(K2) on {x1, x2, y1, y2, z}, and C5 on
{v1, . . . , v5} with μ(C5) on {x1, . . . , x5, y1, . . . , y5, z}.]
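The adjacency rules above translate directly into a short construction. The sketch below (with an assumed dict-of-sets adjacency representation on vertices 0..n-1) builds μ(G):

```python
def mycielski(adj):
    """Mycielski transformation. Assumption: adj maps each vertex 0..n-1 to its
    neighbour set. Returns mu(G) on 2n+1 vertices: x_i = i, y_i = n+i, z = 2n."""
    n = len(adj)
    z = 2 * n
    mu = {v: set() for v in range(2 * n + 1)}
    for i in range(n):
        for j in adj[i]:
            mu[i].add(j)          # x_i ~ x_j  iff  v_i ~ v_j
            mu[i].add(n + j)      # x_i ~ y_j  iff  v_i ~ v_j
            mu[n + j].add(i)
        mu[n + i].add(z)          # y_i ~ z for every i
        mu[z].add(n + i)
    return mu

# Starting from K2, one application gives a 5-cycle; a second gives the
# 11-vertex Grotzsch graph.
k2 = {0: {1}, 1: {0}}
c5 = mycielski(k2)
grotzsch = mycielski(c5)
```

Note that μ(G) has 3|E(G)| + n edges: the original edges, two x-y copies of each, and the n edges at z.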
The theorem that we shall prove states that this transformation, applied to a graph G with at least
one edge, results in a graph 𝜇(G) with
(a) 𝜔(𝜇(G)) = 𝜔(G),
(b) 𝜒(𝜇(G)) = 𝜒(G) + 1, and
(c) 𝜒𝐹(𝜇(G)) = 𝜒𝐹(G) + 1/𝜒𝐹(G).
First we note that the vertices x1, x2, . . . ,xn form a subgraph of 𝜇(G) which is isomorphic to G.
Thus, any clique in G also appears as a clique in 𝜇(G), so we have that 𝜔(𝜇(G)) ≥ 𝜔(G).
To obtain the opposite inequality, consider cliques in 𝜇(G). First, any clique containing the vertex
z can contain only one other vertex, since z is only adjacent to the y vertices, none of which are adjacent
to each other. Now consider a clique {xi(1), . . . ,xi(r), yj(1), . . . , yj(s)}. From the definition of the Mycielski
transformation, we can see that the sets {i(1), . . . , i(r)} and {j(1), . . . , j(s)} are disjoint, and that the
set {vi(1), . . . ,vi(r), vj(1), . . . , vj(s)} is a clique in G. Thus, having considered cliques with and without
vertex z, we see that for every clique in 𝜇(G), there is a clique of equal size in G, or in other words,
𝜔(𝜇(G)) ≤ 𝜔(G). Combining these inequalities, we have 𝜔(𝜇(G)) = 𝜔(G), as desired.
To prove (c), let f be an optimal fractional clique on G, of weight 𝜔𝐹(G), and define a function
g : V(𝜇(G)) → [0, 1] by

𝑔(𝑥𝑖) = (1 − 1/𝜔𝐹(𝐺)) 𝑓(𝑣𝑖)

𝑔(𝑦𝑖) = (1/𝜔𝐹(𝐺)) 𝑓(𝑣𝑖)

𝑔(𝑧) = 1/𝜔𝐹(𝐺)

We must show that this is a fractional clique. In other words, we must establish that it maps its
domain into [0, 1], and that its values sum to at most 1 on each independent set in 𝜇(G). The codomain
is easy to establish: the range of f lies between 0 and 1, and since 𝜔𝐹(G) ≥ 𝜔(G) ≥ 2 > 1, we have
0 < 1/𝜔𝐹(G) < 1. Thus each expression in the definition of 𝑔 yields a number between 0 and 1. It remains to
show that the values of 𝑔 are sufficiently bounded on independent sets.
We introduce a notation: for M ⊆ V (G), we let 𝑥(𝑀) = {𝑥𝑖 |𝑣𝑖 ∈ 𝑀} and 𝑦(𝑀) = {𝑦𝑖 |𝑣𝑖 ∈ 𝑀}.
Now we will consider two types of independent sets in 𝜇(G): those containing z and those not containing
z.
Any independent set S ⊆ V (𝜇(G)) that contains z cannot contain any of the yi vertices, so it must
be of the form S = {z} ∪ x(M) for some independent set M in V (G). Summing the values of 𝑔 over all
vertices in the independent set, we obtain:
∑_{𝑣∈𝑆} 𝑔(𝑣) = 1/𝜔𝐹(𝐺) + (1 − 1/𝜔𝐹(𝐺)) ∑_{𝑣∈𝑀} 𝑓(𝑣)
≤ 1/𝜔𝐹(𝐺) + (1 − 1/𝜔𝐹(𝐺)) = 1
Now consider an independent set S ⊆ V(𝜇(G)) with z ∉ S. We can therefore say S = x(M) ⋃ y(N)
for some subsets of V (G), M and N, and we know that M is an independent set. Since S is independent,
then no vertex in y(N) is adjacent to any vertex in x(M), so we can express N as the union of two sets A
and B, with A ⊆ M and with none of the vertices in B adjacent to any vertex in M. Now we can sum the
values of g over the vertices in S = x(M) ⋃ y(N) = x(M) ⋃ y(A) ⋃ y(B):
∑_{𝑣∈𝑆} 𝑔(𝑣) = (1 − 1/𝜔𝐹(𝐺)) ∑_{𝑣∈𝑀} 𝑓(𝑣) + (1/𝜔𝐹(𝐺)) ∑_{𝑣∈𝑁} 𝑓(𝑣)
= (1 − 1/𝜔𝐹(𝐺)) ∑_{𝑣∈𝑀} 𝑓(𝑣) + (1/𝜔𝐹(𝐺)) ∑_{𝑣∈𝐴} 𝑓(𝑣) + (1/𝜔𝐹(𝐺)) ∑_{𝑣∈𝐵} 𝑓(𝑣)
≤ (1 − 1/𝜔𝐹(𝐺)) ∑_{𝑣∈𝑀} 𝑓(𝑣) + (1/𝜔𝐹(𝐺)) ∑_{𝑣∈𝑀} 𝑓(𝑣) + (1/𝜔𝐹(𝐺)) ∑_{𝑣∈𝐵} 𝑓(𝑣)
= ∑_{𝑣∈𝑀} 𝑓(𝑣) + (1/𝜔𝐹(𝐺)) ∑_{𝑣∈𝐵} 𝑓(𝑣)
The first two equalities above are simply partitions of the sum into sub-sums corresponding to
subsets. The inequality holds because A ⊆ M, and the final equality is just a simplification. It will now
suffice to show that the final expression obtained above is less than or equal to 1.
Let us consider H, the subgraph of G induced by B. The graph H has some fractional chromatic
number, say r/s. Suppose we have a proper r/s-coloring of H. Recall that the color classes of a fractional
coloring are independent sets, so we have r independent sets of vertices in V (H) = B; let us call them
C1, . . . , Cr. Not only is each of the sets Ci independent in H, but it is also independent in G, and also Ci
⋃ M is independent in G as well, because Ci ⊆ B.
For each i, we note that f is a fractional clique on G, and sum over the independent set Ci ∪ M to
obtain:

∑_{𝑣∈𝑀} 𝑓(𝑣) + ∑_{𝑣∈𝐶𝑖} 𝑓(𝑣) ≤ 1
Summing these inequalities over i = 1, . . . , r, we obtain:

𝑟 ∑_{𝑣∈𝑀} 𝑓(𝑣) + 𝑠 ∑_{𝑣∈𝐵} 𝑓(𝑣) ≤ 𝑟

The second term on the left side of the inequality results because each vertex in B belongs to s
different color classes in our proper r/s-coloring. Now we divide by r to obtain:
∑_{𝑣∈𝑀} 𝑓(𝑣) + (𝑠/𝑟) ∑_{𝑣∈𝐵} 𝑓(𝑣) ≤ 1
Since r/s is the fractional chromatic number of H, and H is a subgraph of G, we can say that
𝑟/𝑠 ≤ 𝜒𝐹(G) = 𝜔𝐹(G), or equivalently, 1/𝜔𝐹(𝐺) ≤ 𝑠/𝑟. Thus:

∑_{𝑣∈𝑀} 𝑓(𝑣) + (1/𝜔𝐹(𝐺)) ∑_{𝑣∈𝐵} 𝑓(𝑣) ≤ ∑_{𝑣∈𝑀} 𝑓(𝑣) + (𝑠/𝑟) ∑_{𝑣∈𝐵} 𝑓(𝑣) ≤ 1
as required. We have shown that the mapping g that we defined is indeed a fractional clique on 𝜇(G).
We now check its weight.
∑_{𝑣∈𝑉(𝜇(𝐺))} 𝑔(𝑣) = (1 − 1/𝜔𝐹(𝐺)) ∑_{𝑣∈𝑉(𝐺)} 𝑓(𝑣) + (1/𝜔𝐹(𝐺)) ∑_{𝑣∈𝑉(𝐺)} 𝑓(𝑣) + 1/𝜔𝐹(𝐺)
= ∑_{𝑣∈𝑉(𝐺)} 𝑓(𝑣) + 1/𝜔𝐹(𝐺)
= 𝜔𝐹(𝐺) + 1/𝜔𝐹(𝐺) = 𝜒𝐹(𝐺) + 1/𝜒𝐹(𝐺)
This is the required weight, so we have constructed a fractional coloring and a fractional clique
on 𝜇(G), both with weight 𝜒𝐹(𝐺) + 1/𝜒𝐹(𝐺). We can now write the inequality

𝜒𝐹(𝜇(𝐺)) ≤ 𝜒𝐹(𝐺) + 1/𝜒𝐹(𝐺) ≤ 𝜔𝐹(𝜇(𝐺))

and invoke strong duality to declare the terms at either end equal to each other, and thus to the middle
term. ∎
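As a numerical sanity check (not part of the paper's argument), the constructed fractional clique g can be verified on a concrete instance: take G = C5 with the optimal fractional clique f ≡ 1/2 and ωF(C5) = 5/2, so that μ(C5) is the 11-vertex Grötzsch graph:

```python
from itertools import combinations

# mu(C5): x_i = i, y_i = 5 + i, z = 10.
n = 5
edges = set()
for i in range(n):
    j = (i + 1) % n
    edges |= {frozenset((i, j)), frozenset((i, 5 + j)), frozenset((j, 5 + i))}
    edges.add(frozenset((5 + i, 10)))

# f = 1/2 on each vertex of C5 is an optimal fractional clique, w_F(C5) = 5/2.
wF, f = 2.5, 0.5
g = {}
for i in range(n):
    g[i] = (1 - 1 / wF) * f        # g(x_i) = 0.3
    g[5 + i] = (1 / wF) * f        # g(y_i) = 0.2
g[10] = 1 / wF                     # g(z)   = 0.4

def independent(s):
    return all(frozenset(p) not in edges for p in combinations(s, 2))

# g sums to at most 1 on every independent set of mu(C5) ...
assert all(sum(g[v] for v in s) <= 1 + 1e-12
           for r in range(12) for s in combinations(range(11), r)
           if independent(s))
# ... and its weight is w_F(C5) + 1/w_F(C5).
assert abs(sum(g.values()) - (wF + 1 / wF)) < 1e-12
```

The weight 2.9 = 5/2 + 2/5 agrees with part (c) of the theorem.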
Now that we have a theorem telling us how the Mycielski transformation affects the three parameters
of clique number, chromatic number, and fractional chromatic number, let us apply this result in a
concrete case, and iterate the Mycielski transformation to obtain a sequence of graphs {Gn}, with Gn+1
= 𝜇(Gn) for n ≥ 2. For our starting graph G2 we take a single edge, K2, for which 𝜔(G2) = 𝜒𝐹(G2) = 𝜒(G2)
= 2.
Applying our theorem, first to clique numbers, we see that 𝜔(Gn) = 2 for all n. Considering
chromatic numbers, we have 𝜒(G2) = 2 and 𝜒(Gn+1) = 𝜒(Gn) + 1; thus 𝜒(Gn) = n for all n. Finally, the
fractional chromatic number of Gn is determined by a sequence {𝑎𝑛}, n ∈ {2, 3, . . . }, given by the recurrence:

a2 = 2 and an+1 = an + 1/an.
This sequence has been studied (see [5] or [1]), and it is known that for all n:

√(2𝑛) ≤ 𝑎𝑛 ≤ √(2𝑛) + (ln 𝑛)/𝑛
Clearly, an grows without bound, but less quickly than any sequence of the form n^r for r > 1/2. Thus, the
difference between the fractional clique number and the clique number grows without limit, as does the
difference between the chromatic number and the fractional chromatic number.
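A quick numerical check of this behaviour (an illustration, not a proof) iterates the recurrence and tests the provable lower bound, which follows from a_{n+1}² = a_n² + 2 + 1/a_n² ≥ a_n² + 2:

```python
from math import sqrt

# a_2 = 2, a_{n+1} = a_n + 1/a_n : the fractional chromatic numbers of the
# iterated Mycielski graphs G_2, G_3, ...
a = {2: 2.0}
for n in range(2, 1000):
    a[n + 1] = a[n] + 1 / a[n]

# Lower bound sqrt(2n) <= a_n holds for every computed term.
assert all(sqrt(2 * n) <= a[n] + 1e-12 for n in a)

# chi(G_n) - chi_F(G_n) = n - a_n grows without bound: at n = 1000 the
# fractional chromatic number is still only around sqrt(2000).
assert 1000 - a[1000] > 900
```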
References
Pramono SIDIa*, Ismail BIN MOHDb, Wan Muhamad AMIR WAN AHMADc,
Sudradjat SUPIANd, Sukonoe, Lusianaf
a Department of Mathematics, FMIPA, Universitas Terbuka, Indonesia
b,c Department of Mathematics, FST, Universiti Malaysia Terengganu, Malaysia
d,e,f Department of Mathematics, FMIPA, Universitas Padjadjaran, Indonesia
a* Email: pram@ut.ac.id
Abstract: Natural disasters such as floods are one unavoidable cause of property damage to
residential buildings, because the time of their occurrence cannot be known in advance. This
leads to a financing risk that should be optimized. In this paper, optimization is performed on a
combination of three methods of financing (insurance, credit, and savings). The optimization
is designed to ensure comprehensive coverage of losses from property damage to residential
buildings. A simulation is carried out to see the effect of changes in the factors that influence
the optimization: the value of the property, the amount of debt, and the annuity future value. As
a result, the values of the loss ratio and the cost ratio are shown numerically in tables and in
graphical form for three different situations.
1. Introduction
Natural disasters are categorized into four types. (1) Meteorological (hydro-meteorological) disasters
have climate-related causes, such as floods (events that occur when excess water flow inundates the
land). (2) Geological disasters occur on the surface of the Earth, such as earthquakes (vibrations or
shocks in the earth's surface caused by a sudden release of energy that creates seismic waves), volcanic
eruptions (events in which magma deposited in the earth's crust is pushed out by high-pressure gas), and
tsunamis (displacement of a body of water caused by a sudden vertical change of the sea surface).
(3) Disasters from space are the arrival of various celestial bodies, such as asteroids, or disturbances
such as solar storms; asteroids can be a threat to regions with large populations such as China, India,
the United States, Japan, and Southeast Asia. (4) An outbreak or epidemic is an infectious disease that
spreads through the human population on a large scale, e.g., across a country or around the world.
Natural disasters give rise to risks. Risks cannot be avoided, eliminated, or transferred entirely, but they
can be minimized. One of the risks caused by natural disasters is the financing risk arising from property
damage. There are various methods of financing (insurance, reserves, loans, etc.), and financial risk
management can be done with one method or with a combination of several. Research on this
problem was carried out by Hanak (2010): a sensitivity analysis, published in a journal, on the optimization
of funding for property damage repair in residential buildings using a combination of two
methods of financing. Building on that work, the present research focuses on a simulation analysis
of the optimization of funding for property damage repairs in residential buildings with a combination
of three methods of financing. Hanak's goal was to ensure that losses are covered comprehensively by
exploiting the advantages of the insurance and credit financing methods. The results obtained
by Hanak (2010) are valid for the parameters used in his case study.
This paper presents a literature-based simulation study of the model for optimizing the provision
of funds for repairing property damage to residential buildings, which was developed by Hanak
(2009; 2010). The purpose of this study is to use a simulation model to determine the factors that influence
the optimization of the property damage repair fund: the value of the property, the annuity future value, and the
amount of debt. These factors are examined for three situations characterized by different input
parameters and described by two indicator ratios (the loss ratio and the cost ratio).
2. Mathematical Model
This section discusses ex ante risk financing, ex post risk financing, the factors that
affect the provision of funds, and the optimization model for the supply of funds. This discussion
underpins the simulation study of optimizing the provision of funds to repair flood damage to
residential buildings.
AFV = As · ((1 + i)^n − 1) / i    (2)

where AFV is the annuity future value, As the annuity amount (the amount of each payment), i the
interest rate, and n the number of periods (in years).
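A minimal sketch of equation (2), assuming it is the standard future value of an ordinary annuity (the function name is illustrative):

```python
def annuity_future_value(As, i, n):
    """Eq. (2): AFV = As * ((1 + i)**n - 1) / i, the future value of n
    payments of As at periodic interest rate i."""
    return As * ((1 + i) ** n - 1) / i

# Two yearly payments of 100 at 10%: 100*1.1 + 100 = 210 (approximately, in floats).
afv = annuity_future_value(100, 0.10, 2)
```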
AC = D · ((1 + r)^p · r) / ((1 + r)^p − 1)    (3)

or (Frensidy, 2005:62):

AC = D · r / (1 − (1 + r)^(−p))    (4)

where AC is the annuity credit (the installment per period), D the debt amount, r the annual credit
interest rate (interest rate per period), and p the term of expiration (number of periods).
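Equations (3) and (4) are algebraically equivalent forms of the same installment formula (as is equation (11) below); a small sketch checks this, with invented example figures:

```python
def annuity_credit(D, r, p):
    """Eq. (4): AC = D * r / (1 - (1 + r)**-p), the per-period installment
    for a debt D at periodic rate r over p periods."""
    return D * r / (1 - (1 + r) ** -p)

def annuity_credit_alt(D, r, p):
    """The same installment in the form AC = D * (1+r)**p * r / ((1+r)**p - 1)."""
    return D * (1 + r) ** p * r / ((1 + r) ** p - 1)

# Example (invented figures): a debt of 1,000,000 at 10% over 2 periods.
ac = annuity_credit(1_000_000, 0.10, 2)
assert abs(ac - annuity_credit_alt(1_000_000, 0.10, 2)) < 1e-6
```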
Optimization Model
This paper examines two studies by Hanak (2009; 2010) related to flood disasters, covering ex ante
risk financing methods (insurance and savings) and an ex post risk financing method (credit). We then
study the optimization model for providing funds to repair property damage to residential
buildings. Simulations are performed on study data to examine two indicators, the loss ratio and the
cost ratio, as a basis of comparison. The simulations use the data presented by Hanak (2010), because
complete global data on the provision of funds for repairing property damage to residential buildings
are not available.
The values of the loss ratio and the cost ratio are shown graphically for three different situations.
The three situations are characterized by different parameter values for the factors that influence the
optimization model for funding property damage repair (Hanak, 2010), as follows:
Situation 1: the value of property VP is variable (not fixed), while the debt amount D and the annuity
future value AFV are included as constants.
Situation 2: VP is variable (not fixed), D is optimized by the model over a varying interval, and AFV
is included as a constant.
Situation 3: VP is variable (not fixed), and both D and AFV are optimized by the model over varying
intervals.
The mathematical model for optimizing the provision of repair funds has as its objective
function the total cost of repairing the damaged property. To model the financing of property damage
repair in residential buildings, the following variables are defined:
𝑁 = the number of years premiums are paid, i.e., how long the property is insured
𝑉𝑃 = value of property, the value of the insured property
𝐶𝐴 = capital assured, the total insured value of the property, to be multiplied by the
insurance rate.
From the variables in the model, the total cost of creating the financial backing is obtained as follows.
Total cost of insurance premiums:

TP = N · CA · IR · ((100 − DIS)/100) · ((100 + ADD)/100)    (5)
with
1 r 1
p
1 j
m
PL AC
1 r r
p
CA VP (6)
1 PL j
m
1 r 1
p
1 PL j AC 1 r p r
m
100 DIS 100 ADD
TP N VP BIR FIRB k
1 j
m
PL 100 100
(7)
Total payments beyond the limits of the insurance coverage (insurance benefit payment limit).
The calculation of the insurance benefit payment above the upper limit (PIBL) is influenced by the
particular loss (PL) and the upper insurance benefit limit (UIBL):

If PL > UIBL, then PIBL = PL − IB    (8)
If PL ≤ UIBL, then PIBL = 0    (9)

Total own-risk payment (deductible). For the flood risk case, the deductible payment is influenced only
by the magnitude of the particular loss (PL), i.e., the claim filed by the insured:

DP = DR · Σ_{j=1}^{m} PL_j    (10)
Total installment credit (annuity credit):

AC = D · ((1 + r)^p · r) / ((1 + r)^p − 1)    (11)
So, from the five mathematical models for the components of the total repair cost, the mathematical
model for the total cost of funding repairs to a damaged residential building is:

TC = a·TP + b·PIBL + c·DP + d·AC + AS    (13)

with a, b, c, d ∈ {0, 1} indicating which financing components are included.
Or
1 r 1
p
1 PL j AC 1 r p r
m
0 PIBL DR 1 PL j
b m
1 r 1 1 i 1
p n
3. Illustrations
3.1 Data Simulation
The data used for illustration here are simulated data, with the parameters given in Table 1 as follows:
LR = (Σ_{j=1}^{m} PL_j) / VP    (15)

CR = TC / VP    (16)
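The two indicator ratios (15) and (16) can be sketched directly (the figures in the example are invented):

```python
def loss_ratio(particular_losses, VP):
    """Eq. (15): LR = (sum of particular losses PL_j) / value of property VP."""
    return sum(particular_losses) / VP

def cost_ratio(TC, VP):
    """Eq. (16): CR = total cost of repair financing TC / value of property VP."""
    return TC / VP

# Two losses of 8,000,000 on a 100,000,000 property give LR = 16%.
lr = loss_ratio([8_000_000, 8_000_000], 100_000_000)
cr = cost_ratio(20_000_000, 100_000_000)
```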
The simulation of the optimization analysis is applied to a case study of the risk of property damage due
to flooding in residential buildings. The sensitivity analysis focuses on three factors only: the value of the
property, the annuity future value, and the amount of debt. All of these factors have an impact on the
model, and the three variables have discrete probability distributions. The LR and CR
values are shown on charts for three different situations, based on changes in the values of the selected
factors (value of property, annuity future value, and amount of debt).
Situation 1: the value of property is variable (not fixed), while the debt amount and the annuity future
value are fixed. The variable VP takes input values in the interval (80,000,000; 1,000,000,000); D is
constant at (0; 0); and AFV is constant at (0; 0). Simulation results are shown in graphical form in
Figure-1.
Situation 2: the value of property is variable (not fixed), the debt amount is variable (not fixed), and the
annuity future value is fixed. The variable VP takes input values in the interval (80,000,000;
10,000,000); D is optimized by the model on the interval (0; 8,000,000); and AFV is constant at (0; 0).
Simulation results are shown in graphical form in Figure-2.
Figure-1 Graph of functions 𝐿𝑅 and 𝐶𝑅1 – Situation 1
Figure-2 Graph of functions 𝑳𝑹 and 𝑪𝑹𝟐 – Situation 2
Based on Figure-1, conditions are favorable when LR > 16%, and become ineffective when the
CR curve lies above the LR curve (LR < 16%); for this reason a new financing method, credit, is added.
Meanwhile, based on Figure-2, conditions are favorable when LR > 11%, and become ineffective when
the CR curve lies above the LR curve (LR < 11%); for this reason another financing method, savings, is added.
Situation 3: the value of property is variable (not fixed), the debt amount is variable (not fixed), and the
annuity future value is variable (not fixed). The variable VP takes input values in the interval (1,000,000;
10,000,000); D is optimized by the model on the interval (0; 8,000,000); and AFV is optimized by the
model on the interval (0; 5,000,000). The results are shown in graphical form in Figure-3.
Figure-3 Graph of functions 𝑳𝑹 and 𝑪𝑹𝟑 – Situation 3
Figure-4 Graph of functions 𝐿𝑅, 𝐶𝑅1, 𝐶𝑅2, and 𝐶𝑅3
Based on Figure-3, there is a change in the slope of the CR curve, which reduces the inefficiency of
insurance and credit in the region where the LR curve lies under the CR curve (LR < CR). Figure-4
illustrates the effect of combining the three types of financial reserve (insurance, credit, and
savings) in the mathematical model. By combining the three types of reserve financing, the method
of financing can make the fund more efficient and effective.
4. Conclusions
This paper has studied, by simulation, the factors that influence the optimization of the provision of
funds for repairing flood-caused property damage to residential buildings. Two classes of financing
methods are considered: ex ante risk financing methods (insurance and savings) and an ex post risk
financing method (credit). Using these three instruments, namely insurance, savings, and credit,
simulations were carried out to determine the characteristics of the factors that influence the optimization of
the provision of repair funds. The optimization analysis is done by comparing two indicator ratios,
the loss ratio and the cost ratio. The results of the simulation studies on the combination of the three
financing reserves (insurance, credit, and savings), formulated in the mathematical model,
show that combining these three types of reserve financing makes the funding of repairs more
efficient and effective, i.e., closer to optimal.
References
Frensidy, Budi. 2005. Matematika Keuangan. Jakarta : Salemba Empat.
Alcrudo. (2003). Mathematical Modelling Techniques for Flood Propagation in Urban Areas. Working
Paper. Universidad de Zaragoza, SPAIN.
Firdaus, Fahmi. (2011). 1598 Bencana Alam Terjadi Di Indonesia. Online article.
http://news.okezone.com/read/2011/12/30/337/549497/bnpb-1-598-bencana-alam-terjadi-
ditahun-2011. (accessed 16 November 2012).
Friedman, D.G. (2005). Insurance and Natural Hazard. Working Paper. The Travelers Insurance
Company, Hartford, Connecticut, USA.
Hanak, Tomas. 2009. Sensitivity Analysis of Selected Factors Affecting the Optimization of the Funds
Financing Recovery from Property Damage on Residential Building. Nehnuteľnosti a bývanie.
ISSN : 1336-944X.
Hanak, Tomas. 2010. How to Ensure Sufficiency of Financial Backing to Cover Future Losses on
Residential Buildings in Efficient Way?. Nehnuteľnosti a bývanie, Vol. 4, pp. 42-51. ISSN: 1336-
944X.
Irawan, D. & Riman. 2012. Apresiasi Kontraktor Dalam Penggunaan Asuransi Pada Pembangunan
Konstruksi di Malang. Jurnal Widya Teknika Vol.20 No.1; Maret 2012.
Jongman, B., Kreibich, H., Apel, H., Barredo, J.I., Bates, P.D., Feyen, L., Gericke, A., Neal, J., Aerts,
J.C.J.H., & Ward, P.J. (2012). Comparative Flood Damage Model Assessment: Toward a European
Approach. Natural Hazards and Earth System Sciences. 12, 3733-3752, 2012.
Marhusor, Hilda. 2004. Studi Perhitungan Premi Pada Asuransi Konstruksi Untuk Risiko Pada Banjir.
Skripsi, Jurusan Matematika, FMIPA, ITB.
Merz, B., Kreibich, H., Schwarze, R., and Thieken, A. (2010). Assessment of Economic Flood Damage
(Review Article). Natural Hazards and Earth System Sciences. 10, 1697-1724, 2010.
Sanders, R., Shaw, F., MacKay, H., Galy, H., & Foote, M. (2005). National Flood Modelling for
Insurance Purposes: Using FSAR for Flood Risk Estimation in Europe. Hydrology & Earth System
Sciences, 9(4), 446-456 (2005) © EGU.
Shrubsole, D., Brooks, G., Halliday, R., Haque, E., Kumar, A., Lacroix, J., Rasid, H., Rossulle, J., &
Simonovic, S.P. (2003). An Assessment of Flood Risk Management in Canada. Working Paper
No. 28. Institute for Catastrophic Loss Reduction.
Abstract: The stock market is one of the economic drivers of a country, because it provides capital
facilities and accumulates long-term funds directed at increasing community participation in
mobilizing funds to support the financing of the national economy. Almost all industries in the
country are represented in the stock market. The fluctuation of recorded stock prices is therefore
reflected in the movement of an index, better known as the Indeks Harga Saham Gabungan (IHSG).
The Indeks Harga Saham Gabungan (IHSG) is affected by internal and external factors. Factors
from overseas include foreign exchange indices, crude oil prices, and overseas market sentiment,
while domestic factors usually stem from the foreign exchange rate. In this study, we examine the
influence of both internal and external factors on the Indeks Harga Saham Gabungan (IHSG),
particularly on the Bursa Efek Indonesia (BEI).
The global economic crisis had a significant impact on the development of the capital market in
Indonesia. The impact of the world financial crisis, better known as the global economic crisis, which
originated in the United States, was strongly felt in Indonesia, because a large share of Indonesian
exports goes to the U.S., which greatly affects the Indonesian economy. Among the most significant
impacts of the American economic crisis were the weakening of the rupiah against the dollar, an
increasingly unhealthy Indeks Harga Saham Gabungan (IHSG), and exports hampered by reduced
demand from the U.S. market itself. In addition, the closure for several days and the suspension
of stock trading on the Bursa Efek Indonesia (BEI), the first in its history, are among the real impacts,
which reflect how large the global reach of this problem is.
The capital market is one of the movers of a country's economy, because it is a tool for capital
formation and for the accumulation of long-term funds, directed at increasing public participation in
the mobilization of funds to support the financing of national development. In addition, the
stock market is also representative for assessing the condition of companies in a country, because
almost all industry sectors in the country are represented in the capital market. Whether the capital
market is rising (bullish) or falling (bearish) is seen from the rise and fall of listed stock prices,
reflected in the movement of the index better known as the Indeks Harga Saham Gabungan
(IHSG). The IHSG is the value used to measure the combined performance of all stocks (companies /
issuers) listed on the Bursa Efek Indonesia (BEI).
The IHSG summarizes the simultaneous and complex effects of various influencing factors, primarily
economic phenomena. Today the IHSG is even used as a barometer of a country's economic health and as a
foundation for statistical analysis of current market conditions (Widoatmojo, S. 1996:189). Meanwhile,
according to Ang (1997:14.6), the Indeks Harga Saham Gabungan (IHSG/Stock Price Index) is a value that
is used to measure the performance of stocks listed on the stock exchanges. The IHSG is issued by the
respective stock exchange and is also issued officially by private institutions such as the financial media,
financial institutions, and others.
This research will demonstrate the variables that affect the movement of the Indeks
Harga Saham Gabungan (IHSG), namely foreign exchange indices, oil prices, exchange rates, interest
rates, and inflation on the Bursa Efek Indonesia (BEI), so that we can know which variables affect the
movement of the Indeks Harga Saham Gabungan (IHSG) the most.
Based on the background described above, the issues to be discussed can be identified as
follows:
1. What is the influence of foreign stock indices, oil prices, and the exchange rate on the
Indeks Harga Saham Gabungan (IHSG)?
2. Which variables are the most dominant in the movement of Indeks Harga Saham Gabungan
(IHSG)?
The IHSG is the earliest published index and reflects the development of prices on the Stock
Exchange in general. The IHSG captures the change in stock prices, both common and preferred stock,
relative to the price at the base date of the calculation. The unit of change in a stock price index is the
point. If today's IHSG on the BEJ is 1800 points while the previous day's was 1810 points, the index is
said to have fallen 10 points.
The IHSG is calculated as a market-value-weighted average (market value weighted average index).
First, each share's market-value weight is calculated: the number of shares multiplied by the share price.
This value is then compared to the overall market value to obtain the weight. The method of calculating
the index on the BEI is as follows:
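The index formula itself is not reproduced above; the sketch below assumes the usual market-value-weighted form, Index = (current market value / base-period market value) × 100 (the function name and figures are illustrative):

```python
def composite_index(prices, shares, base_value):
    """Market-value-weighted index: total market capitalisation
    (price * shares, summed over all listed stocks) divided by the
    base-period market value, times 100."""
    market_value = sum(p * s for p, s in zip(prices, shares))
    return 100.0 * market_value / base_value

# Two stocks whose combined capitalisation is 18x the base value
# give an index reading of 1800 points.
idx = composite_index([300, 400], [30, 22.5], base_value=1000)
```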
Linear regression analysis is a study of the functional relationship between variables in the data,
expressed as a mathematical equation (Sudjana, 2005:310). In use, simple linear regression
analysis is often unable to represent more complex problems, because it involves only one independent
variable; it is therefore extended to multiple linear regression analysis.
The multiple linear regression model can generally be written in matrix notation as
follows:

Y = Xβ + ε    (1)
The parameter vector of the linear regression model in equation (1) is unknown; therefore the model
must be estimated.
A common method used to estimate the parameters is least squares regression. The main
principle of this method is to estimate the parameters by minimizing the residual sum of
squares.
Equation (1) is the population regression model; the corresponding sample regression model
is expressed as

Y = Xβ̂ + e

In a simple way, the parameter estimates β̂₀, β̂₁, β̂₂, …, β̂ₖ are obtained by the least-squares
method (LSM) as

β̂ = (XᵀX)⁻¹XᵀY
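A minimal numerical sketch of the least-squares formula above, using NumPy on simulated data (all values are illustrative, not the paper's data):

```python
# Least-squares sketch for equation (1), Y = X beta + eps, with
# beta_hat = (X^T X)^{-1} X^T Y. Data are randomly generated illustrations.
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])  # intercept + k regressors
beta_true = np.array([1.0, 2.0, -0.5])
Y = X @ beta_true + rng.normal(scale=0.1, size=n)

beta_hat = np.linalg.inv(X.T @ X) @ X.T @ Y   # textbook formula
# In practice np.linalg.lstsq is numerically safer than forming the inverse:
beta_ls, *_ = np.linalg.lstsq(X, Y, rcond=None)

print(np.round(beta_hat, 2))
```

With low noise the estimates land close to the true coefficients; `lstsq` gives the same answer via a more stable decomposition.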
The sum of squares (SS) measures the deviation of the observations around their mean value.
It consists of two sources: the regression sum of squares, which states the influence of the
regression, and the residual sum of squares, which is the remainder that the regression cannot
explain (Sembiring, 2003: 45).
The residual sum of squares can be written as

SSR = ∑ᵢ₌₁ⁿ eᵢ²
Multiple Regression Model t-Test
The t-test shows how far an individual independent variable explains variation in the dependent
variable. The steps for testing a hypothesis with the t distribution are:
1) Formulate the hypotheses
H0 : βi = 0, meaning the independent variable has no relationship with the dependent variable
H1 : βi ≠ 0 (i = 1, 2, …, k), meaning the independent variable has a relationship with the
dependent variable
2) Determine the significance level
The significance level used is α = 1%, 5%, or 10%, with df = n − k,
where: df = degrees of freedom
n = number of samples
k = number of regression coefficients
3) Determine the decision region, i.e., whether the null hypothesis H0 is accepted or rejected,
with the following criteria:
Multiple Regression Model F-Test
The F-test determines the joint effect of the independent variables on the dependent variable.
The steps for testing the hypothesis are:
1) Formulate the hypotheses
H0 : β1 = β2 = β3 = ⋯ = βk = 0, meaning there is no relationship between the dependent and
independent variables.
H1 : at least one βi ≠ 0, meaning there is a relationship between the dependent and independent
variables.
2) Determine the significance level
The significance level used is α = 1%, 5%, or 10%.
3) Determine the decision region, i.e., whether the null hypothesis H0 is accepted or rejected:
H0 is accepted if Fcount ≤ Ftable, meaning that the independent variables together have no
relationship with the dependent variable.
H0 is rejected if Fcount > Ftable, meaning that the independent variables together have a
relationship with the dependent variable.
4) Calculate the F value
5) Draw conclusions
The decision is either acceptance or rejection of H0. The F value calculated in the previous
step is compared with the value of F obtained from the table. If the calculated F is greater than Ftable,
then H0 is rejected, and it can be concluded that there is a relationship between the independent
variables jointly and the dependent variable. But if the calculated F is less than or equal to
Ftable, then H0 is accepted, and it can be concluded that there is no joint relationship between
the independent variables and the dependent variable.
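The t- and F-tests above can be sketched from first principles on simulated data. Everything below (sample size, coefficients, significance level) is an illustrative assumption, not the paper's data:

```python
# t-tests on individual coefficients and a joint F-test, computed directly
# with NumPy/SciPy on simulated data (all values are illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, k = 120, 3                         # n samples, k coefficients incl. intercept
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
Y = X @ np.array([0.5, 1.5, 0.0]) + rng.normal(size=n)

beta = np.linalg.lstsq(X, Y, rcond=None)[0]
resid = Y - X @ beta
df = n - k
s2 = resid @ resid / df               # residual variance estimate
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))

t_stats = beta / se                   # one t-statistic per coefficient
t_crit = stats.t.ppf(1 - 0.05 / 2, df)
print(np.abs(t_stats) > t_crit)       # True where H0: beta_i = 0 is rejected

ssr = np.sum((X @ beta - Y.mean()) ** 2)   # regression sum of squares
sse = resid @ resid                        # residual sum of squares
F = (ssr / (k - 1)) / (sse / df)           # joint test of all slopes
print(F > stats.f.ppf(1 - 0.05, k - 1, df))
```

The second slope has a true coefficient of zero, so its t-test typically fails to reject, while the joint F-test rejects because the other slope is strongly nonzero.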
Stepwise Method
This method is a combination of two methods: forward selection and backward elimination,
applied alternately. The stepwise method reaches a similar conclusion from a different direction,
namely by entering variables one by one into the regression equation. The order in which variables
are entered is determined using the partial correlation coefficient as a measure of the importance
of the variables not yet in the equation.
Stepwise selection of a good equation resembles forward selection, except that at each step the
hypothesis H0 : βi = 0 is tested for every variable in the equation, and any variable whose
|t i |-statistic is less than the critical value is eliminated from the equation. The next variable,
chosen by its partial correlation as in the forward method, is then added to the equation. Stepwise
selection continues until no variable in the equation has an |t i |-statistic less than the
corresponding critical t value and no variables remain to be entered.
Performing stepwise regression analysis with SPSS is very easy, since a tool for processing data
by the stepwise method is already provided. Here, however, I will explain the stepwise procedure
without using the tools available in SPSS.
The first step of the stepwise method is to compute the correlation coefficient of each
independent variable with the dependent variable. The variable whose |R| is closest to 1 is entered
into the model first, and the one-variable regression model is estimated. Note the value of t: if
|t count | > t table(1−α;db) or p-value < α, the first variable entered is significant and is kept
in the model.
The next step is to calculate the partial correlation of the remaining independent variables
with the dependent variable, with the variable entered in the first step as the control variable.
From the results, the variable whose partial correlation coefficient is closest to |R| → 1 is
entered next, and the model with the first and second variables is estimated. If
|t count | > t table(1−α;db) or p-value < α for each variable, continue by calculating the partial
correlations of the remaining independent variables with the dependent variable, now controlling
for the variables obtained from the previous estimate. Any variable that fails
|t count | > t table(1−α;db) at the latest estimate is eliminated and not used in the model.
Repeat the above steps until no independent variables remain. Once all variables have been
considered, the next step is to build a model from the variables obtained by the previous process.
The model is formed only from the variables that were not eliminated, i.e., those satisfying
|t count | > t table(1−α;db) or p-value < α. The model thus established is the best model of the
stepwise method.
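The procedure above can be sketched as follows. This is a simplified illustration on simulated data, not the SPSS implementation; details such as tie-breaking and the rule that an eliminated variable never re-enters are assumptions of this sketch.

```python
# Stepwise selection sketch: repeatedly add the candidate with the largest
# partial correlation given the current model, keep variables only while their
# t-statistics stay significant, and stop when no candidate qualifies.
import numpy as np
from scipy import stats

def partial_corr(x, y, controls):
    """Correlation of x and y after removing the linear effect of the controls."""
    if controls.shape[1] == 0:
        return np.corrcoef(x, y)[0, 1]
    Z = np.column_stack([np.ones(len(y)), controls])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

def t_values(X, y):
    """t-statistics of all coefficients (incl. intercept) of an OLS fit."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(Z, y, rcond=None)[0]
    resid = y - Z @ beta
    s2 = resid @ resid / (len(y) - Z.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.inv(Z.T @ Z)))
    return beta / se, len(y) - Z.shape[1]

def stepwise(X, y, alpha=0.05):
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        # candidate with the largest partial correlation given the current model
        best = max(remaining,
                   key=lambda j: abs(partial_corr(X[:, j], y, X[:, selected])))
        trial = selected + [best]
        t, df = t_values(X[:, trial], y)
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        if abs(t[-1]) < t_crit:        # newest variable not significant: stop
            break
        # backward step: drop any variable that became non-significant
        selected = [v for v, tv in zip(trial, t[1:]) if abs(tv) >= t_crit]
        remaining.remove(best)
    return selected

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))                    # 5 candidate regressors
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(size=200)
print(stepwise(X, y))                            # columns 0 and 2 are relevant
```

On this simulated data, the two columns that actually generate y are picked up, while the pure-noise columns fail the t-criterion and stay out.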
3. Data Processing
This study uses monthly data from 2003 to 2012. The data used in this paper are simulation data
and secondary data. Simulation data are used to observe patterns that support conclusions about
the methods used. Secondary data were obtained from http://finance.yahoo.com/,
http://www.bi.go.id/, and http://www.esdm.go.id/.
The data used are:
1) Indeks Harga Saham Gabungan (IHSG) in Indonesia, 2003-2012.
2) Dow Jones Industrial Average, 2003-2012.
3) Financial Times Stock Exchange index, 2003-2012.
4) U.S. Dollar exchange rate, 2003-2012.
5) British Pound exchange rate, 2003-2012.
6) Crude oil prices, 2003-2012.
This section uses the secondary data displayed in the attachment, processed using SPSS 17.
The Dow Jones Industrial Average, the United States stock index, is hereinafter referred to as
DJIA; the Financial Times Stock Exchange index of Great Britain as FTSE; world crude oil prices
as MINYAK; the Rupiah exchange rate to the United States Dollar as USD; and the Rupiah exchange
rate to the British Pound Sterling as GBP.
The coefficient of determination is calculated to determine which variable enters the model
first; the result of the calculation is a value |R| between 0 and 1.
The oil price was the first variable entered into the model. The next step is to estimate the
model with this first variable, using the Enter method in SPSS.
Table 2 First Stage Estimation Results
Model         B          Std. Error   Beta    t          Sig.
(Constant)    -440.386   29.351               -15.004    .000
MINYAK        36.484     .390         .889    93.564     .000
(Unstandardized coefficients B with standard errors; standardized coefficient Beta.
Dependent variable: JKSE)
Based on Table 2, since |t count | > t table(1−α;db) or p-value < α, the oil price is retained
in the model.
The next step is the partial correlation calculation. The partial correlation is calculated
using as control variable the variable already in the model, namely the oil price.
Table 3 shows the results of the partial correlation calculation with the oil price as the
control variable. From the table, the variable whose |r_(xi,MINYAK)| is closest to 1 is GBP,
the pound sterling exchange rate.
Based on the results in Table 3, the GBP (pound sterling) exchange rate is entered as the second
variable. We then estimate the model with the two variables: the oil price and the pound sterling
exchange rate.
Table 4 Estimation Results of Phase Two
Model         B          Std. Error   Beta    t          Sig.
(Constant)    2318.355   98.454               23.548     .000
MINYAK        35.524     .336         .866    105.759    .000
GBP           -.167      .006         -.237   -28.983    .000
(Unstandardized coefficients B with standard errors; standardized coefficient Beta.
Dependent variable: JKSE)
Based on Table 4, since |t count | > t table(1−α;db) or p-value < α, the oil price and the pound
sterling exchange rate are both retained in the model.
At this stage the partial correlations of the remaining variables are recalculated, now using
two control variables taken from the previously estimated model. Because the oil price and the
pound sterling exchange rate were not eliminated from the model, these two variables become the
control variables for this stage's partial correlation calculation.
The DJIA, the U.S. stock price index, is the next variable entered into the model. Based on the
partial correlation results in Table 5, the DJIA enters the estimation as the third variable and
the model is re-estimated.
Table 6 Estimation Results Third Stage
Model         B          Std. Error   Beta    t          Sig.
(Constant)    1647.564   98.850               16.667     .000
MINYAK        29.830     .440         .727    67.848     .000
GBP           -.207      .006         -.293   -35.659    .000
DJIA          .155       .008         .202    18.482     .000
(Unstandardized coefficients B with standard errors; standardized coefficient Beta.
Dependent variable: JKSE)
Based on Table 6, since |t count | > t table(1−α;db) or p-value < α, the oil price, the pound
sterling exchange rate, and the U.S. stock index are all maintained in the model.
The next stage is to recalculate the partial correlations, this time using three control
variables since the DJIA was retained in the model.
Table 7 shows the partial correlation results with the oil price, the pound sterling exchange
rate, and the U.S. stock price index as control variables. From the table, the variable whose
|r_(xi,MINYAK;GBP;DJIA)| is closest to 1 is USD, the U.S. dollar exchange rate. This variable
is therefore incorporated into the model to be estimated.
USD, the U.S. dollar exchange rate, is thus the fourth variable entered; the model is then
estimated jointly with the three variables obtained previously.
Table 8 Estimation Results of Phase Four
Model         B           Std. Error   Beta    t          Sig.
(Constant)    -3381.638   142.699              -23.698    .000
MINYAK        21.988      .383         .536    57.336     .000
GBP           -.360       .006         -.511   -62.643    .000
DJIA          .375        .008         .489    45.300     .000
USD           .602        .015         .342    41.422     .000
(Unstandardized coefficients B with standard errors; standardized coefficient Beta.
Dependent variable: JKSE)
Based on Table 8, |t count | > t table(1−α;db) or p-value < α for every variable, so no variable
is eliminated. Thus the oil price, the pound sterling exchange rate, the U.S. stock price index,
and the U.S. dollar exchange rate all enter the model.
At this stage the model is estimated directly by entering the last variable, the FTSE (the UK
stock price index). The partial correlation is no longer calculated, because this is the only
variable that has not yet been entered into the model; the model is therefore estimated with all
variables.
4. CONCLUSION
Based on the model, the coefficient of MINYAK (the oil price) has the greatest value, meaning
that the oil price affects the movement of the index most, compared with the other variables.
The variable affecting the JCI least is the DJIA, the U.S. stock price index, since its
coefficient is small.
References
Abstract: Proofs in geometry are considered difficult and boring in school mathematics, so many
students and teachers alike try to avoid them. Indeed, solving proofs requires comprehensive
basic knowledge of geometry. This is preliminary research on how to learn geometry through
proofs. The instruction consists of several stages. Over one semester, three classes totalling
60 graduate preservice students enjoyed and had fun learning proofs, deepened their knowledge of
geometry, and enhanced their curiosity to learn geometry further.
1. Introduction
Conjecturing and demonstrating the logical validity of conjectures are the essence of
the creative act of doing mathematics (NCTM Standards, 2000).
It has long been a common story that geometry is a difficult and boring subject in learning
mathematics. As we know, geometry is a subject avoided by most high school teachers (in
Indonesia), since they perceive geometry as a difficult subject that needs a lot of spatial
thinking (imagination) and deductive reasoning. As a matter of fact, geometry should be more
interesting and easier to teach than algebra: look anywhere in or outside the class and you will
see plenty of geometric shapes! Furthermore, the mathematics curriculum contains far more
material on algebra than on geometry, and consequently teachers and students spend far more time
studying algebra than geometry.
Proof is one important subject in geometry, yet only a little of it is taught in high school or
at university (Burger & Shaughnessy, 1986; Usiskin, 1982). Solving proofs requires comprehensive
and continuous concepts from start to end. Numerous attempts have been made to improve students'
proof skills by teaching formal proof, albeit largely unsuccessful ones (Harbeck, 1973; Ireland,
1974; Martin, 1971; Summa, 1982; Van Akin, 1972). Moreover, Senk (1985) found that of over 1500
students enrolled in full-year geometry courses, only about 30 percent achieved a 75 percent
mastery level in proof writing.
In class, geometry is generally taught in a teacher-centered approach, where the teacher is the
center of attention and students are treated as a single group. In a typical session, problems
are assigned, corrected, and handed back with little feedback. This approach might work for some
students, but the problem is how to make every student a better life-long problem solver. The
purpose of this preliminary research is to describe an approach suitable for teaching geometry
through proofs.
2. Theoretical Background
In the development of geometrical thinking specifically, where proofs are the core of geometry,
students must pass through several stages: the initial stage is to identify the problem, that is,
what is given (and its picture) and what is asked; the middle stage is to retrieve all relevant
knowledge about what is given, such as definitions and properties, and to approach the solution
using intuitive perceptions; the final stage is to write down the proof rigorously. For discovery
or deduction, students will make conjectures based on the picture or pattern. Proofs may be
considered open-ended problems, in that there are many ways to construct a proof, so students
draw not only on their mathematical knowledge but also on their imagination and creativity. As
Silver et al. (2005) stated, "You can learn more from solving one problem in many different ways
than you can from solving many different problems."
Moore (1994) stated that there are seven major sources of students' difficulties in solving
proofs, including: inability to state definitions, inadequate concept images, inability to use a
definition to structure a proof, inability or unwillingness to generate examples, and difficulties
with mathematical language and notation. Nevertheless, in most geometry classes teachers seldom
discuss alternative solutions.
Solving proofs can also be considered a problem-solving procedure, but many mathematics teachers
teach by having students copy standard solution methods, and not surprisingly students find it
difficult to face new problems (Harskamp and Suhre, 2006).
More importantly, proofs are supposed to be a meaningful tool for learning mathematics, not a
formal and boring exercise for students and teachers.
Hanna (1995) argues that "the most important challenge to mathematics educators in the context
of proof is to enhance its role in the classroom by finding more effective ways of using it as a
vehicle to promote mathematical understanding."
3. Methodology
Three classes of preservice teachers in a master's program of mathematics education, 60 graduate
students in total, participated in the Geometry course. The topic was how to prove Euclidean
geometry problems, ranging from easy to hard. At the first and last sessions of the semester,
students filled in a questionnaire on their perception of geometry.
The instructional design is as follows:
The teacher reviewed some basic properties of several 2-dimensional shapes, including their
definitions.
The students were provided 30 proof problems. Starting from the first 10 problems, students
tried to prove them individually or by discussion. Each problem might admit different types of
proof. The teacher circulated, providing small hints to students who asked for help. If the
students could not finish, the problems became an assignment for the next session.
Next, the teacher checked all the students' assignments and carefully chose a particular
problem (or asked the students whether there was a problem they wished to discuss), along with 3
students who had solved it in different ways. These 3 students wrote their complete proofs on
the board.
The teacher, together with the students, analyzed these students' proofs, pointing out
mistakes, if any. In the last step, the teacher asked the students which way of proving was the
most effective, the most comprehensible, the 'easiest', and so on.
This process continued until all problems had been proven.
Throughout the course, students could use the dynamic power of GSP to explore and arrive at
conjectures.
Initially, students felt scared and not confident in learning geometry, and it appeared that a
lack of geometric concepts was the problem. Slowly, through proving problems, they came to
comprehend the concepts and gained confidence. At the end of the semester, there was a huge
difference in their perception of geometry. They concluded that mastering the basic concepts was
the most fundamental part of learning geometry. When facing problems, they should know what is
given and what is asked, use their intuition or reasoning to choose the appropriate rules or
properties and apply them toward the goal, and use their skills to write down an accurate and
rigorous proof. We believe that by writing proofs accurately and rigorously, students understand
the underlying concepts and ideas.
Some benefits of this instruction:
The dynamic geometry software helps students sharpen their intuition and reasoning.
Proofs are open-ended problems: there are many ways to prove.
Proofs can be considered problem-solving procedures, so they are good exercise in approaching
problems and making use of proper mathematical tools.
Solving proofs rigorously demands comprehensive, well-mastered concepts.
Students learn to write proofs accurately by thinking logically.
The approach is student-centered and process-oriented, with students communicating with each
other scientifically.
Finally, the students found that learning geometry was fun, exciting, and challenging, even for
the less successful, and they were ecstatic whenever a problem was proven. What matters for
students is that they learn with pleasure, and that will enhance their curiosity to learn
geometry!
References
Burger, W. F. and Shaughnessy, J.M. (1986). Characterizing the van Hiele Levels of Development in
Geometry. Journal for Research in Mathematics Education 17 :31-48.
Hanna, G. (1995). Some pedagogical Aspects of Proof. Interchange 21: 6-13.
Harbeck, S.C.A. (1973). Experimental Study of the Effect of Two Proof Formats in High School
geometry on Critical Thinking and Selected Student Attitudes. Ph.D diss. Dissertation Abstracts
International 33 :4243A.
Harskamp, E. and Suhre, C. (2006). Improving Mathematical Problem Solving: A Computerized
Approach. Computers in Human Behavior, 22, 801-815.
Ireland, S.H. (1974). The Effects of a One-Semester Geometry Course, Which Emphasizes the Nature
of Proof on Student Comprehension of Deductive Processes. Ph.D diss. Dissertation Abstracts
International 35 :102A-103A.
Martin, R.C. (1971). A Study of Methods of Structuring a Proof as an Aid to the Development of Critical
Thinking Skills in High School Geometry. Ph.D diss. Dissertation Abstracts International 31
:5875A.
Moore, R.C. (1994). Making the Transition to Formal Proof. Educational Studies in Mathematics, 27,
249-266.
NCTM. (2000). Principles and Standards for School Mathematics. Reston, VA: The National Council
of Teachers of Mathematics, Inc.
Senk, S. L. (1985). How Well Do Students Write Geometry Proofs? Mathematics Teacher 78
(September 1985):448-56.
Silver, E.A., et al. (2005). Moving from Rhetoric to Praxis: Issues Faced by Teachers in Having
Students Consider Multiple Solutions for Problems in the Mathematics Classroom. Journal of
Mathematical Behavior, 24, 287-301.
Summa, D.F. (1982). The Effects of Proof Format, Problem Structure, and the Type of Given
Information on Achievement and Efficiency in Geometric Proof. Ph.D diss. Dissertation Abstracts
International 42 :3084A.
Usiskin, Z. (1982) Van Hiele Levels and Achievement in Secondary School Geometry. Final report of
the Cognitive Development and Achievement in Secondary School Geometry Project. Chicago:
University of Chicago, Department of Education.
Van Akin, E. F. (1972). An Experimental Evaluation of Structure in Proof in High School Geometry.
Ph.D diss. Dissertation Abstracts International 33 (1972):1425A.
Willoughby, S. (1990). Mathematics education for a changing world. ASCD: Alexandria, VA.
Ziemek, T.R (2010). Evaluating the Effectiveness of Orientation Indicators with an Awareness of
Individual Differences. Ph.D dissertation. Univ of Utah.
Abstract: This research describes a mathematical model of dental pulp inflammation. The reaction
can be either reversible or irreversible pulpitis, depending on the intensity of stimulation, the
severity of the damaged tissue, and the host response. It causes pain ranging from the lightest,
namely sensitive-teeth complaints, to the most severe, spontaneous pain that is difficult to
localize. The aim of this research is to obtain a characteristic function for the level of dental
pulp inflammation (reversible and irreversible pulpitis) based on histogram analysis of
periapical radiographs.
1. Introduction
A mathematical model is a description of a system using mathematical concepts and language; the
process of developing a mathematical model is termed mathematical modelling. Mathematical models
are used not only in the natural sciences (such as physics, biology, earth science, meteorology)
and engineering disciplines (e.g. computer science, artificial intelligence), but also in the
social sciences (such as economics, psychology, sociology, and political science); physicists,
engineers, statisticians, operations research analysts, economists, and medical practitioners
(dentists) use mathematical models extensively. A model may help to explain a system, to study
the effects of its different components, and to make predictions about behaviour.
Mathematical models can take many forms, including but not limited to dynamical systems,
statistical models, differential equations, or game-theoretic models. These and other types of
models can overlap, with a given model involving a variety of abstract structures. In general,
mathematical models may include logical models, insofar as logic is taken as a part of
mathematics. In many cases, the quality of a scientific field depends on how well the
mathematical models developed on the theoretical side agree with the results of repeatable
experiments. Lack of agreement between theoretical mathematical models and experimental
measurements often leads to important advances as better theories are developed (Sudradjat, 2013).
The main limitation of conventional intraoral radiography for dentoalveolar disease imaging is
that it represents a 3D structure in a 2D image. This limitation also occurs for caries, pulp,
and periodontal imaging (Tyndall and Rathore, 2008). Technological development, especially in
computer and information technology, has affected dental radiology (White and Goaz, 2004).
Radiological imaging, one benefit of this development, is able to detect 70% of lesions. In
digital imaging there are two basic forms: indirect digital imaging and direct digital imaging
(Langlais, 2004). Indirect digital imaging began to be used after reports of no differences
between indirect and direct digital imaging in measuring demineralization below the enamel
surface. The same procedure was also reported as
successful by Eberhard et al. in monitoring in vitro dental demineralization, and by Ortman et
al. in detecting alveolar bone defect changes with bone loss of about 1-5%.
This paper presents a review of a mathematical model of dental pulp inflammation. The reaction
can be either reversible or irreversible pulpitis, depending on the intensity of stimulation, the
severity of the damaged tissue, and the host response. It causes pain ranging from the lightest,
namely sensitive-teeth complaints, to the most severe, spontaneous pain that is difficult to
localize. The aim of this research is to obtain a characteristic function for the level of dental
pulp inflammation (necrosis, pulpitis, and normal) based on histogram analysis of periapical
radiographs.
A digital image is a continuous image f(x, y) that has been mapped into a discrete image,
including its properties (i.e., spatial coordinates and brightness level). A digital image is an
M x N matrix whose row and column values correspond to pixel values, as shown in equation (1)
(Munir, R., 2004).
It is usual to digitize the values of the image function f(x, y) in addition to its spatial
coordinates. This process of quantisation involves replacing a continuously varying f(x, y) with
a discrete set of quantisation levels. The accuracy with which variations in f(x, y) are
represented is determined by the number of quantisation levels used: the more levels, the better
the approximation.
Conventionally, a set of n quantisation levels comprises the integers 0, 1, 2, …, n − 1; 0 and
n − 1 are usually displayed or printed as black and white, respectively, with intermediate levels
rendered in various shades of grey. Quantisation levels are therefore commonly referred to as
grey levels. The collective term for all the grey levels, ranging from black to white, is a
grayscale.
For convenient and efficient processing by a computer, the number of grey levels n is usually
an integral power of two. We may write

n = 2^K (2)

where K is the number of bits used for quantisation. K is typically 8, giving images with 256
possible grey levels ranging from 0 (black) to 255 (white) (Phillips, 1994).
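A small sketch of the quantisation in equation (2), mapping a continuous intensity onto one of n = 2^K grey levels; the normalized [0, 1] input range is an assumption of this illustration.

```python
# Quantisation per equation (2): n = 2**K grey levels, with 0 rendered as
# black and n-1 as white. Input intensities are assumed normalized to [0, 1].

def quantize(value, K=8):
    """Quantize a normalized intensity in [0, 1] to one of 2**K grey levels."""
    n = 2 ** K
    return min(int(value * n), n - 1)   # clamp so value == 1.0 maps to n-1

print(quantize(0.0))   # black -> 0
print(quantize(1.0))   # white -> 255 for K = 8
print(quantize(0.5))   # mid-grey -> 128
```

With fewer bits the same intensity lands on a coarser level, e.g. K = 1 yields only black and white.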
Quantitative analysis shows that the average grayscale pixel value can be used to assess
remineralization of dental caries together with lesion status; the quantitative value correlates
with lesion status in subjective analysis. Higher values indicate remineralization of the lesion,
lower values demineralization, and grayscale values close to 128 a stabilized lesion.
Quantitative grading of a dental caries lesion does not depend on the observer, because the
operator's job is limited to selecting the ROI; the software automatically reports the grayscale
pixel values (Carneiro LS, et al., 2009).
The ROI (Region of Interest) value for grading demineralization below the enamel surface is
obtained by setting the caries region as the ROI. In some cases, lesion boundaries are hard to
set because the radiolucency of the caries lesion is not well defined, especially when there is
overlap. These issues cannot be fully avoided in daily clinical routine, but they are important
when designing research: operator precision in measurement must be assured to avoid ROI
dispersion caused by the operator choosing a larger or smaller lesion (Wenzel A, 2002).
Measurement then continues with the grayscale pixel values, using a
histogram of the selected area. The mean and standard deviation of the grayscale pixel values
can also be read off in the software (Carneiro LS, et al., 2009).
The basic modeling process is shown in Figure 1.
This model describes the development of a mathematical model that can provide the basis for a
decision support system to aid dentists (or patients) in deciding how often to perform (or
receive) intraoral periapical radiographs. The model, which describes the initiation and
progression of approximal dental pulp disease, uses a simple descriptive method, namely forming
a characteristic function with three levels: necrosis, pulpitis, and normal (Walton and
Torabinejad, 2008).
As proposed by Walton and Torabinejad (2008), if x is the mean grayscale value (2), then we obtain the following characteristic function:
f(x) = necrosis, if 0 ≤ x < 64
       pulpitis, if 64 ≤ x < 128        (3)
       normal,   if x ≥ 128
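Characteristic function (3) can be sketched directly in code. The Python version below is an illustration, not part of the paper; the assignment of the boundary values 64 and 128 to the upper category is an assumption, since the printed inequalities do not show which side is strict.

```python
def pulp_condition(x):
    """Classify pulp condition from the mean grayscale value x (0-255).

    Thresholds follow characteristic function (3): necrosis below 64,
    pulpitis from 64 up to (but not including) 128, normal from 128 up.
    """
    if not 0 <= x <= 255:
        raise ValueError("mean grayscale must lie in [0, 255]")
    if x < 64:
        return "necrosis"
    elif x < 128:
        return "pulpitis"
    return "normal"

print(pulp_condition(30), pulp_condition(100), pulp_condition(200))
# necrosis pulpitis normal
```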
The study was conducted with a simple descriptive method and accidental sampling. Data were obtained from the results of clinical examination; the periapical radiographs with a diagnosis of reversible or irreversible pulpitis were then digitized using Matlab V.7.0.4, producing the histogram graph from which the characteristic function of dental pulp inflammation can be determined. For validation of model (1) we gathered a sample of 100 intraoral periapical radiographs of pathological pulp cases at the Department of Dentomaxillofacial Radiology, Dental Hospital, Padjadjaran University, Bandung, West Java, Indonesia, from September to December 2012, using accidental sampling.
From the 100 samples studied, 29 samples were in the normal category, 30 pulpitis, 30 necrosis, and 1 unclear; the results are presented in Tables 1-3.
The conclusion of this research is that dental pulp inflammation can be described by analysis of the histogram graph of the periapical radiograph: the image is more radiolucent than normal pulp and more radiopaque than necrotic pulp (1). The results showed that the levels of dental pulp inflammation based on the mean grayscale value x are: normal x ≥ 128, pulpitis 64 ≤ x < 128, and necrosis 0 ≤ x < 64.
References
Carneiro LS, Nunes CA, Silva MA, Leles CR, and Mendonc EF. 2009. In vivo study of pixel grey measurement
in digital subtraction radiography for monitoring caries remineralization. Dentomaxillofacial Radiology
Journal, 38, 73-78. Available online at http://dmfr.birjournals.org (accessed 17 April 2012).
Langlais, R. P. 2004. Exercises in Oral Radiology and Interpretation, 4th edition. Missouri: W.B. Saunders Company. Pp. 67-68.
Michael Shwartz, Joseph S. Pliskin, Hans-Göran Gröndahl and Joseph Boffa, 1987, A Mathematical Model of
Dental Caries Used to Evaluate the Benefits of Alternative Frequencies of Bitewing Radiographs
Management Science 1987 33:771-783; doi:10.1287/mnsc.33.6.771.
Munir, R. 2004. Pengolahan Citra Digital dengan Pendekatan Algoritmik. Bandung : Penerbit Informatika.
Phillips, D. 1994. Image Processing in C. Kansas : R&D Publications, Inc.
Sudradjat, 2013, Model and Simulation, Department of Mathematics, Faculty of Mathematics and Natural Sciences, Universitas Padjadjaran.
Tyndall and Rathore, 2008 .Cone-beam CT diagnostic applications: caries, periodontal bone assessment, and
endodontic applications. Dent Clin North Am. 2008 Oct;52(4):825-41, vii. doi: 10.1016/j.cden.2008.05.002.
Walton, E. Richard and Torabinejad, Mahmoud. 2008. Prinsip dan Praktik Ilmu Endodonsia, Ed 3. Translated by Narlan Sumawinata; Indonesian-language editor: Lilian Juwono. Jakarta: Penerbit Buku Kedokteran EGC.
Wenzel A. 2002. Computer-automated caries detection in digital bitewings: consistency of a program and its
influence on observer agreement. J Dent Res, 81, 590-593. Available online at
http://www.medicinaoral.com.
White P. Goaz, 1994, Oral Radiology; Principles and Interpretation. Missouri: Mosby. 1-5 pp.
Whaites, Eric. 2007. Essentials of Dental Radiology Principle and Interpretation. St. Louis : Mosby Inc.
Wilhelm Burger , Mark J. Burge, 2008, Principles of Digital Image Processing, Springer-Verlag London Limited
2009
Abstract: Transportation is a very important requirement for human life, because in everyday life humans often move from one place to another, over both short and long distances. Aircraft are the prime choice today for long-distance travel, especially international (cross-country) travel. This has prompted many airline companies to offer international routes. Ticket price competition has also arisen among airline companies, so every airline must be able to control its aircraft operational cost management. The selection of the route from a national airport to the international destination airport needs to be considered at the lowest cost. In this paper, the selection of the route with the lowest operating cost is done using network analysis. The results of the analysis are expected to provide an overview of one method by which the route with the lowest aircraft operating cost is selected.
1. Introduction
Transportation is an essential requirement for human life today. Rapid technological advances have driven the need for people to become more mobile (Azizah et al., 2012). Humans are always looking for ways to shorten the move from one place to another. As existing transportation facilities became more advanced, humans were no longer satisfied with land and sea transportation (Lee, 2000; Bazargan, 2004), and turned to air transport. With aircraft, humans can travel from one place to another much faster than with land or sea transportation (Forbes & Lederman, 2005). The existence of air transport has led a variety of flight service providers to emerge. Various airlines offer not only many destinations, but also varied competitive rates (Gustia, 2010).
Airlines are currently given freedom in their management, so they can freely manage their service systems and flights, and even set their own flight rates (Neven et al., 2005). The freedom to manage service systems, flight routes, and rates has stimulated the rapid growth of new airline companies, and competition in the airline market is increasingly tight (Sarndal & Statton, 2011). This means every aviation service company must be increasingly careful and cautious in cost control, by implementing good management (Prihananto, 2007). One element of good management is the selection of the aircraft route. This selection strongly influences the effectiveness and efficiency of flight planning, where each route the company runs must be feasible and meet the flight requirements. The flight requirements include, among others, that each required flight leg must use the optimal combination of aircraft, the number of aircraft planned to fly, and the turn-around time needed for each aircraft (Yan & Young, 1996). The fulfillment of these requirements is reflected in the total operating cost of each flight of the aircraft (Bower & Kroo, 2008).
Operational costs affect the financial ability of the airline company. The higher the operating costs, the lower the profits; conversely, the lower the operating costs, the higher the profits (Bae et al., 2010). Analysis is therefore needed to determine the selected route (Renaud & Boctor, 2002). Based on the above, this paper aims to analyze the selection of the international route that provides the lowest operational cost. Selection of the international route with the lowest cost is analyzed using network analysis. As a numerical illustration of the issue, the selection of the flight route with the lowest operational cost will be analyzed, as discussed in this paper.
2. Theoretical Basis
In this section we explore the theoretical basis, which includes: selection of transportation mode, international aviation, aircraft costs and operating costs, and the lowest-cost flight route problem.
The behavior of public transport service users in selecting transport modes is generally determined by three factors, namely: trip characteristics, traveler characteristics, and the characteristics of the transportation system. Trip characteristics include distance and travel purpose, while traveler characteristics include income level, ownership of a mode of transportation, and employment. Characteristics of the transport system include relative travel time, relative travel expenses, and relative service levels. Viewed from the side of the transport providers, mode selection behavior can be influenced by a change in the characteristics of the transportation system (Prihananto, 2007).
Ortuzar (1994) states that the selection of transport mode is a very important part of transportation planning models, because mode selection plays a key role for policy makers among transportation providers, especially in air transport. According to Morlok & Willumsen (1998), the main factors affecting public transport services relate to travel time or travel speed, while other quality factors can be ignored. Based on this theory of mode-choice behavior, airline companies should be able to select their routes considering these three characteristics.
In the world of aviation a distinction is made between civil aircraft and state aircraft. The difference between civil aircraft and state aircraft is regulated under the Paris Convention 1919, the Havana Convention 1928, the 1944 Chicago Convention, the 1958 Geneva Convention, and the United Nations UNCLOS Convention. Various national laws, such as those of the United States, Australia, the Netherlands, Britain, and Indonesia, also distinguish between civil aircraft and state aircraft (Prihananto, 2007).
The difference between civil aircraft on one hand and state aircraft on the other is based on the authority over each type of aircraft held by each agency. The difference is important because, according to international law, the treatment of civil aircraft differs from the treatment of state aircraft: state aircraft have certain immunity rights not held by civil aircraft. This treatment is in line with the Paris Convention 1919, the 1944 Chicago Convention, the 1958 Geneva Conventions, and the 1982 UN UNCLOS Convention mentioned above (Prihananto, 2007).
Thus, from the above description, international civil aviation is flight conducted by civil aircraft which carry nationality and registration marks and which, in times of peace, may cross the airspace of other member states of the international civil aviation organization (Prihananto, 2007).
Aircraft financing can basically be categorized into two types: non-operating costs and operating costs. Non-operating costs are costs that have nothing to do with the operation of the aircraft, while operating costs are the costs of operating the aircraft (Prihananto, 2007).
Operating costs consist of direct operating costs (DOC) and indirect operating costs (IOC). The DOC is the cost directly related to flying a plane, while the supporting IOC is heavily influenced by the airline company's management policy, though the need for it can be estimated. Both types of operating costs (DOC and IOC) are factors in considering the type of aircraft to be operated on a selected route (Bazargan, 2004).
Direct operating costs are all expenses directly related to and dependent on the type of aircraft operated; they will change with a different type of aircraft. Direct operating costs can be grouped as follows (Bazargan, 2004; Prihananto, 2007):
Flight operating cost: the cost incurred in connection with the operation of the aircraft. This cost component includes several elements, namely crew costs, fuel costs, leasing costs, and insurance costs.
Maintenance cost: costs that must be incurred as a result of aircraft maintenance, consisting of labor costs and material costs.
Depreciation and amortization costs: depreciation is an expense due to the declining nominal value or price of the aircraft over time since the product came out, while amortization is a periodic expense allowance for costs such as cabin crew training and pre-development costs associated with the operation or development of new aircraft.
Indirect operating costs are all fixed costs not affected by changes in aircraft type, because they do not depend directly on the operation of the aircraft. These costs consist of station and ground costs (the cost of handling and servicing aircraft on the ground), passenger service costs, ticketing, sales and promotion costs, and administrative costs (Prihananto, 2007).
The lowest-cost problem is the problem of searching for the path or route with the lowest cost from the location/airport of origin to the location/destination airport. These problems usually arise in air transport networks, both national and international (Haouari et al., 2009; Clarke et al., 1997). Since the applications of the lowest-cost problem are numerous, the discussion will start with the model in general. Suppose there is a network of flight costs between international airports, from the origin airport up to the destination airport. Such a network of flight costs between airports is generally as shown in Figure-1 below:
Suppose the cost of a flight between airport i and airport j is expressed as d_ij; the problem is to find the lowest total cost of flights from airport i to airport j, where d_ij is defined for every pair of airports i ≠ j. If an airport is not connected directly with the origin airport, then that connection is given the value d_ij = ∞. The problem is to determine temporary cost values for the airports, and then find the fixed (permanent) cost values. Once the fixed cost for the destination airport has been obtained, the lowest cost from the origin airport to the destination airport is obtained (Wu and Coppins, 1981).
The procedure to determine the lowest cost can be carried out in the following stages:
Step 1: Assign the destination airport cost y_i the value 0 (zero).
Step 2: Calculate the temporary costs from the other airports to the destination airport, charged directly to the destination airport. If an airport cannot reach the destination airport directly, it is given the value ∞, a very large value.
Step 3: From the temporary costs calculated, select the least cost and declare it the permanent cost to the destination airport. If there are temporary costs with the same value, select one of them as the least cost to the destination airport.
Step 4: Repeat the temporary-cost calculation back to the airport of origin. At each stage select min(d_ij + y_j), which gives the route with the lowest cost.
The calculation of the lowest cost with the above procedure can also be formulated as a linear program. The problem of determining the route with the lowest cost is the same as the mathematical methods for job assignment or transshipment. If the origin airport is seen as the source and the destination airport as the demand, then the other airports can be viewed as transshipment points or locations (Wu & Coppins, 1981).
If it is assumed that each arc on the route with the lowest cost has a coefficient of 1 (one), then the problem can be formulated as a special assignment problem, so that the formulation of the lowest-cost route model is (Wu and Coppins, 1981):
Minimize { Σ_{i=1}^{I} Σ_{j=1}^{J} d_ij x_ij } ; i ≠ j   (1)
The interpretation of this model is that the airline company wants to determine the lowest cost from the origin airport to the destination airport. The constraints in the model show that only one route is to be taken from the airport of origin, and one to the destination airport. Similarly, between the other airports only one route is taken (Wu and Coppins, 1981). As an illustration of the lowest-cost route selection model, a numerical example is analyzed as follows.
3. Illustrations
In Section 3 we discuss: a description of the problem, problem solving using network analysis, and the formulation as a linear program, as follows.
Suppose an airline company wants to develop a flight that originates in Bandung and ends in Saint Petersburg. The flight is planned to transit in several countries. Several alternatives and the operating cost of each route have been studied, for example, as given in Table-1 below.
Based on the operational cost data given in Table-1, a network model of the operational costs of each route between cities/countries can be composed, as given in Figure-1 below.
Applying the procedure discussed in section 2.4 to determine the cost of the route between the airports proceeds as follows:
Step 1: y_11 = 0
Step 3: The operating cost from airport 5 to airport 11 through airport 9 is equal to the operating cost from airport 5 straight to airport 11. Therefore one is chosen; for example, the operating cost from airport 5 to airport 11 through airports 6 and 9 is selected, or y_5 = $115. The operating cost from airport 6 to airport 11 through airport 9 is smaller than the operating cost from airport 6 to airport 11 through airport 5; therefore the operating cost from airport 6 to airport 11 through airport 9 is chosen as the route with the lowest cost, that is $105, or y_6 = $105. Similarly, for the same reason, the operating cost from airport 7 to airport 11 goes through airport 8, or y_7 = $115. The operating cost from airport 8 to airport 11 goes through airports 7 and 9, or y_8 = $130.
Based on the results of these calculations, it appears that the lowest operational cost is $240. The aircraft route with the lowest operating cost is thus obtained through the cities or states y_1 → y_4 → y_6 → y_9 → y_11, with operating costs of $75 + $60 + $55 + $50 = $240. This route selection is based solely on operating costs, so other factors beyond operational cost are not considered here; they will be studied further in subsequent research.
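The labeling result above can be cross-checked with a standard shortest-path computation. The Python sketch below is an illustration, not part of the paper: it runs Dijkstra's algorithm over arc costs transcribed from the objective function in section 3.2, so any transcription error there carries over; in particular, the arc (3, 4) = $35 is an assumption, since the printed objective repeats the term for x_{2,4}.

```python
import heapq

# Arc costs in $ between the numbered airports (1 = Bandung, 11 = Saint
# Petersburg), transcribed from the objective function in section 3.2.
COSTS = {
    (1, 2): 70, (1, 3): 65, (1, 4): 75, (2, 4): 50, (2, 5): 90,
    (3, 4): 35, (3, 8): 85, (4, 6): 60, (4, 7): 65, (5, 6): 40,
    (5, 11): 115, (6, 5): 40, (6, 7): 20, (6, 9): 55, (7, 6): 20,
    (7, 8): 45, (7, 9): 65, (8, 7): 45, (8, 10): 60, (9, 11): 50,
    (10, 11): 70,
}

def lowest_cost_route(costs, origin, dest):
    """Return (total cost, route) from origin to dest via Dijkstra's algorithm."""
    graph = {}
    for (i, j), d in costs.items():
        graph.setdefault(i, []).append((j, d))
    queue = [(0, origin, [origin])]   # (cost so far, node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dest:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, d in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + d, nxt, path + [nxt]))
    return float("inf"), []

cost, route = lowest_cost_route(COSTS, 1, 11)
print(cost, route)   # 240 [1, 4, 6, 9, 11]
```

With these costs the computation reproduces the paper's answer: total cost $240 along airports 1-4-6-9-11.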
The aircraft route selection problem with the lowest operational cost described in section 3.1 can be formulated as a linear programming model, following the theoretical discussion in section 2.4 above. The objective function is formulated from the operational cost data given in Table-1, or the network given in Figure-2, while the constraint functions are formulated from the directions of the network arcs shown in Figure-2, whose coefficients are described in Table-2 below.
The lowest-cost aircraft route selection problem can be expressed as a transshipment problem, so the linear programming formulation can be created using variables x_ij, where x_ij represents the route of the aircraft from airport i to airport j. The objective function of the linear programming model is the minimization of operational cost from airport i to airport j. Because the flight flow out of each intermediate airport equals the flow into it, the right-hand-side coefficients of those constraint functions are equal to zero. The constraint coefficients are +1 for a flight flow out of an airport and -1 for a flight flow into an airport.
Based on the operational cost data described in Table-1, or the alternative flight network given in Figure-2, the objective function of the linear programming model can be formulated as follows:
Minimize z = $70x_{1,2} + $65x_{1,3} + $75x_{1,4} + $50x_{2,4} + $90x_{2,5} + $35x_{3,4} + $85x_{3,8}
           + $60x_{4,6} + $65x_{4,7} + $40x_{5,6} + $115x_{5,11} + $40x_{6,5} + $20x_{6,7} + $55x_{6,9}
           + $20x_{7,6} + $45x_{7,8} + $65x_{7,9} + $45x_{8,7} + $60x_{8,10} + $50x_{9,11} + $70x_{10,11}
Subject to:
x_{1,2} + x_{1,3} + x_{1,4} = 1 ;
-x_{1,2} + x_{2,4} + x_{2,5} = 0 ;
-x_{1,3} + x_{3,4} + x_{3,8} = 0 ;
-x_{1,4} - x_{2,4} - x_{3,4} + x_{4,6} + x_{4,7} = 0 ;
-x_{2,5} + x_{5,6} + x_{5,11} = 0 ;
-x_{4,6} - x_{5,6} + x_{6,7} + x_{6,9} = 0 ;
-x_{4,7} - x_{6,7} + x_{7,8} + x_{7,9} = 0 ;
-x_{3,8} - x_{7,8} + x_{8,10} = 0 ;
-x_{6,9} - x_{7,9} + x_{9,11} = 0 ;
-x_{8,10} + x_{10,11} = 0 ;
x_{5,11} + x_{9,11} + x_{10,11} = 1 ;
x_{ij} ≥ 0 ; i, j = 1, ..., 11
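As an illustrative check (not part of the paper), the flow-balance structure of these constraints can be verified in a few lines of Python for the selected route 1 → 4 → 6 → 9 → 11: one unit of flow leaves the origin, every transshipment airport has equal inflow and outflow, and one unit arrives at the destination.

```python
# Arcs of the network, as listed in the objective function; the arc (3, 4)
# is an assumption (the printed objective repeats the term for x_{2,4}).
ARCS = [(1, 2), (1, 3), (1, 4), (2, 4), (2, 5), (3, 4), (3, 8), (4, 6),
        (4, 7), (5, 6), (5, 11), (6, 5), (6, 7), (6, 9), (7, 6), (7, 8),
        (7, 9), (8, 7), (8, 10), (9, 11), (10, 11)]

x = {arc: 0 for arc in ARCS}
for arc in [(1, 4), (4, 6), (6, 9), (9, 11)]:
    x[arc] = 1  # arcs on the chosen route each carry one unit of flow

def net_flow(node):
    """Outflow minus inflow at an airport, matching the +1/-1 coefficients."""
    out = sum(v for (i, j), v in x.items() if i == node)
    into = sum(v for (i, j), v in x.items() if j == node)
    return out - into

print(net_flow(1), net_flow(11))                     # 1 -1
print(all(net_flow(n) == 0 for n in range(2, 11)))   # True
```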
Using the coefficients described in Table-2, the constraint functions are formulated as described above. Solving the linear programming model produces a minimum total operating cost of $240.
4. Conclusions
This paper has discussed the selection of international routes with the lowest operating cost using network analysis. Based on the problem description, the alternative route networks for the aircraft are as given in Figure-2. The network analysis of Figure-2 selects the aircraft route from Bandung airport to Saint Petersburg airport through the airports of Singapore, Kuala Lumpur, and Moscow. This route has the lowest total cost, $240. The lowest-cost flight route selection problem can also be viewed as a transshipment problem and formulated as a linear programming model; solving the linear programming model also gives the lowest total operating cost of $240.
References
Azizah, F., Rusdiansyah, A., & Indah, N.A. (2012). Perancangan Alat Bantu Pengambilan Keputusan Untuk
Penentuan Jumlah dan Rute Armada Pesawat Terbang. Kertas Kerja. Jurusan Teknik Industri, Institut
Teknologi Sepuluh Nopember, Surabaya.
Bae, K.H., Sherali, H.D., Kolling, C.P., Sarin, S.C., Trani, A.A. (2010). Integrated Airline Operations: Schedule
Design, Fleet Assignment, Aircraft Routing, and Crew Scheduling. Dissertation. Doctor of Philosophy in
Industrial and System Engineering. Faculty of the Virginia Polytechnic Institute and State University.
Blacksburg, Virginia.
Bower, G.C. & Kroo, I.M. (2008). Multi-Objective Aircraft Optimization for Minimum Cost and Emissions Over
Specific Route Networks. 26th International Congress of the Aeronautical Sciences. ICAS 2008.
Bazargan, M., (2004). Airline Operations and Scheduling. Hampshire: Ashgate Publishing Limited,
Burlington,USA.
Clarke, L., Johnson, E., Nemhauser, G., and Zhu, Z. (1997). The Aircraft Rotation Problem. Annals of Operations
Research 69(1997)33-46.
Forbes, S.J. & Lederman, M. (2005). The Role of Regional Airlines in the U.S. Airline Industry. Working Paper.
Department of Economics, 9500 Gilman Drive, La Jolla, CA 92093-0508, USA.
Gustia, R.S. (2010). Penerapan Dynamic Programing Dalam Menentukan Rute Penerbangan International dari
Indonesia. Kertas Kerja. Program Studi Teknik Informatika, Institut Teknologi Bandung.
Haouari, M., Aissaoui, N. & Mansour, F.Z., (2009). Network flow-based approaches for integrated aircraft
fleeting and routing. European Journal of Operational Research, pp.591-99.
Lee, J.J. (2000). Historical and Future Trends in Aircraft Performance, Cost, and Emissions. Thesis. Master of
Science in Aeronautics and Astronautics and the Engineering System Division. Massachusetts Institute of
Technology.
Morlok, E.K. (1998). Introduction to Transportation Engineering and Planning. New York: McGraw-Hill Ltd.
Neven, D.J., Roller, L.H., & Zhang, Z. (2005). Endogenous Costs and Price-Cost Margin: An Application to the European Airline Industry. Working Paper. Graduate Institute of International Studies, University of Geneva.
Ortuzar, J.D. & Willumsen, L.G. (1994). Modelling Transportation. Second Edition, New York: John Wiley & Sons.
Prihananto, D. (2007). Pemilihan Tipe Pesawat Terbang Untuk Rute Yogyakarta- Jakarta Berdasarkan Perkiraan
Biaya Operasional. Seminar Nasional Teknologi 2007 (SNT 2007). Yogyakarta 24 November 2007. ISSN:
1978-9777.
Renaud, J. & Boctor, F.F., (2002). A sweep-based algorithm for the fleet size and mix vehicle routing problem.
European Journal of Operational Research, pp.618- 28.
Sarndal, C.E. & Statton, W.B. (2011). Factors Influencing Operating Cost in the Airline Industry. Working Paper. Centre for Transportation Studies, The University of British Columbia.
Wu, N. & Coppins, R. (1981). Linear Programming and Extensions. New York: McGraw-Hill Book Company.
Yan, S. & Young, H.F. (1996). A Decision Support Framework for Multi-Fleet Routing and Multi-Stop Flight Scheduling. Transpn Res.-A, Vol. 30, No. 5, pp. 379-398, 1996.
Abstract: This study examines the effect of a scientific debate instructional strategy on the enhancement of students' mathematical communication, reasoning, and connection abilities in the concept of the integral. The study is quasi-experimental with a static group comparison design involving 96 students from a Department of Mathematics Education. Research instruments include a test of students' prior knowledge of mathematics (PAM), tests of mathematical communication, reasoning, and connection ability, as well as teaching materials. The data are analyzed using the Mann-Whitney U test, two-way ANOVA, and the Kruskal-Wallis test. The study finds that the enhancement in the mathematical communication and reasoning abilities of students taught with scientific debate is not significantly different from that under conventional instruction. The mathematical connection ability of students who follow instruction with the scientific debate strategy is better than that of students who follow conventional instruction. There was no difference in the average increase in students' mathematical communication and connection skills across the interaction of PAM with the learning approach; there is no interaction between instructional factors and PAM factors on the increase in mathematical communication and connection skills. The enhancement of students' mathematical reasoning abilities with scientific debate, grouped by PAM, is not significantly different; on the other hand, the enhancement of students' mathematical reasoning abilities with conventional instruction, grouped by PAM, was considerably different. Under the scientific debate strategy, differences in students' educational background have no major effect on the enhancement of mathematical communication, reasoning, and connection ability, but under conventional instruction the effect is larger: students with a Senior High School background enhanced their mathematical communication, reasoning, and connection abilities better than students from Islamic Senior High Schools.
1. Introduction
Integration and differentiation are two important concepts in mathematics; integrals and derivatives are the two main operations in calculus. The principles of integration were formulated by Isaac Newton and Gottfried Leibniz in the 17th century by leveraging the close relationship that exists between the antiderivative and the definite integral, a relationship that allows the actual value of an integral to be calculated more easily without the need to use a Riemann sum. This relationship is called the fundamental theorem of calculus. Through the fundamental theorem of calculus, which they independently developed, integration is connected with differentiation, so that the integral can be defined as an antiderivative.
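The fundamental theorem of calculus mentioned here can be written compactly: for a function f continuous on [a, b] with antiderivative F,

```latex
\int_a^b f(x)\,dx = F(b) - F(a), \qquad \text{where } F'(x) = f(x).
```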
In France, the concept of the integral is introduced to secondary education students (17-18 years), presented in the traditional definition in the form of primitive functions. In 1972, an integral calculus course was introduced that included: the definition of the Riemann sum for numerical functions of a real variable on a bounded interval, and the integrability theorems for continuous functions and monotone functions. After the reforms of 1982, the curriculum returned to seeing the integral as a primitive function and the area under a positive function, and introduced examples of approximating integral values with a variety of numerical methods.
In Indonesia, the concept of the integral is given to senior high school students (SMU) and in the higher-education course Calculus 2. The integral abilities tested at the senior high school level and its equivalents are: (1) to calculate indefinite integrals, (2) to calculate definite integrals of algebraic and trigonometric functions, (3) to measure the area under curves, and (4) to calculate volumes of revolution. The abilities tested revolve around the understanding of concepts and fall in the low-level category of high-order mathematical thinking. This is characterized by problems of the form: recall, apply a formula routinely, calculate, and apply simple formulas or concepts in simple or similar cases. The competency standard (SKL) that must be achieved, however, is to understand the concept of the integral for algebraic and trigonometric functions and to be able to apply it in solving problems.
Although the abilities tested are in the low-level category and not in accordance with the SKL, several studies show that even at this low level, student learning outcomes for the integral concept are in the low category compared to other mathematical material. The low ability of students to understand the concept of the integral was reported by Orton (1983): the average evaluation score for integral material was the lowest, i.e. 1.895 and 1.685 for the school level and college level respectively on a scale of 0 to 4, compared with other material in calculus such as lines, limits, and derivatives. Sabella and Redish (2011) state that most college students in conventional classes have a superficial understanding and incomplete information about the basic concepts in calculus. Romberg and Tufte (1987) state that students view mathematics as a collection of static concepts and techniques to be solved step by step. In mathematics learning, students are only required to complete, to describe in graphic form, to locate, to evaluate, to define, and to quantify in a model that is obvious; they are rarely challenged to solve problems requiring high-order mathematical thinking (Ferrini-Mundy 627).
The results of the 2010 national examination (UN) trial given to 879 senior high school students in the city of Bandung showed that only 30.22% of students were able to answer the integral-concept questions correctly, a condition that certainly does not achieve group mastery. The 2011 UN test results, taken by 1578 students in the city of Bandung, also demonstrated that students' ability in the integral concept is still low: only 6.7% of students were able to answer correctly, compared to other calculus concepts such as limits (42.3%) and derivatives (11.5%). Students' failure to master integral material will have an impact on those who continue their education in a Department of Mathematics or Mathematics Education. One cause of students' low ability in the concept of integrals is that mathematics is taught in the form of basic concepts, the explanation of concepts through examples, and exercises on solving them. The learning process is generally carried out in line with the pattern of presentation available in the reference books. Such a learning process is more likely to encourage reproductive thinking, so the reasoning process that develops is imitative. This situation gives little room to enhance high-order mathematical thinking and critical and creative thinking in students. Students tend to solve integral problems by looking at existing examples, so that when given a non-routine problem, students have difficulties.
The development of higher-order mathematical thinking ability is very important, because every discipline and the world of work require a person to be able to: (1) present ideas through speech, writing, demonstration, and visual depiction in a variety of ways; (2) understand, interpret, and evaluate ideas presented orally, in writing, or in visual form; (3) construct, interpret, and connect different representations of ideas and relationships; (4) make investigations and conjectures, formulate questions, draw conclusions, and evaluate information; and (5) produce and present convincing arguments (Secretary's Commission on Achieving Necessary Skills, 1991). These abilities are closely related to mathematical communication, reasoning, and connection ability.
In mathematics education, communication, reasoning, and connection abilities are higher-order thinking abilities that one must have to solve mathematical problems and life problems in any situation, for instance through critical, logical, and systematic thinking. This is consistent with the character of mathematics as a valuable science, reflected in its role as a symbolic language and a powerful communication tool that is short, solid, accurate, precise, and free of double meaning (Wahyudin, 2003). This statement indicates that mathematics plays a very important role in the development of a person's patterns of thought, both as a representation of the understanding of mathematical
concepts and as a communication tool serving other fields of science. Through mathematical communication ability, students can exchange their knowledge and clarify their understanding. The communication process helps students construct meaning, complete their ideas, and avoid misconceptions. Communication also helps students convey ideas both orally and in writing. When a student is challenged to argue and to communicate the results of their thinking to others, orally or in writing, the student learns to explain and to convince others, to listen to or explain the ideas of others, and gains the opportunity to develop their experience. Besides communication ability, another ability that should be developed is reasoning ability. A person's reasoning ability can be seen in their ability to cope with life problems. A person with high reasoning ability will always be able to make decisions quickly when solving the various problems in their life. This capability is supported by the strength of their reasoning in connecting facts and evidence to arrive at a proper conclusion. Mathematical reasoning ability is necessary not only to resolve problems within mathematics itself, but also to solve the problems faced in life. Mathematical reasoning is needed when one is confronted with an issue in which one must evaluate arguments and select among several feasible solutions. This implies that when one faces a number of statements or arguments related to the issue at hand, mathematical reasoning ability is required to judge or evaluate those statements before making a decision. Thus, a person's mathematical ability is used not only for calculation but also to provide arguments or claims that require logical presentation to ensure that the line of thinking is right. The development of reasoning ability is therefore essential for every student, as preparation for analyzing a situation before making a decision and for constructing arguments in its defense.
Another important ability that students should develop is mathematical connection ability. This ability also appears in students' ability to communicate and to reason. Mathematical connection ability is closely related to relational understanding. Relational understanding requires a person to understand more than one concept and to relate these concepts to one another, while mathematical connection ability is the ability to connect a wide range of ideas within mathematics, with other areas, and with the real world.
The mathematical abilities developed above accord with the mathematical competencies proposed by Niss (in Kusumah, 2012: 3), namely: (1) mathematical thinking and reasoning, (2) mathematical argumentation, (3) mathematical communication, (4) modeling, (5) problem posing and problem solving, (6) representation, (7) symbols, and (8) tools and technology. NCTM (2000) has identified communication, reasoning, and problem solving as important processes in mathematics instruction in the effort to solve mathematical problems.
Mathematical communication, reasoning, and connection abilities can only be achieved through instruction that enhances these capabilities in the cognitive, affective, and psychomotor domains. Suryadi's study (2005) on the development of higher-order mathematical thinking through an indirect approach identified two fundamental matters that need further study: the student-material relationship and the student-teacher relationship. That study found that to encourage students' mental action, the instructional process must be preceded by a presentation containing problems that challenge students to think. The instructional process should also facilitate students in constructing knowledge or concepts themselves, so that they are able to rediscover knowledge (reinvention).
One learning model that can meet these demands is scientific debate. This is supported by the research of Legrand et al. (1986), which revealed that applying scientific debate in learning improved students' understanding of the integral concept on the final exam. Another result, reported by Alibert et al. (1987), is that with scientific debate the majority of students attained mastery in understanding the integral concept; in addition, students could explore their knowledge in solving problems for which no ready-made algorithm applied.
In the application of scientific debate, students are trained to communicate their knowledge and to sustain their arguments according to the truth of mathematical concepts through debate. This ability to argue spurs the development of mathematical reasoning and connection abilities, because students must be able to think logically and systematically, and to relate various concepts to sustain their arguments. This is consistent with constructivist theory, which states that knowledge is
gained by the students themselves, who construct their own knowledge through interaction, conflict, and re-equilibration involving mathematical knowledge, other students, and a variety of issues. The interaction is governed by a lecturer who chooses the fundamental issues. Based on the conditions above, the researcher is interested in reviewing and analyzing the application of scientific debate, under the research title: "Scientific Debate Instruction to Enhance Students' Mathematical Communication, Reasoning, and Connection Ability in the Concept of the Integral".
2. Problem Formulation
Based on the background above, the research problems can be formulated as follows:
a. Are there differences in the enhancement of mathematical communication, reasoning, and connection abilities between students who follow scientific debate instruction and students who follow conventional instruction?
b. Is there an interaction between the instructional factor and students' mathematical prior ability (PAM) on the enhancement of students' mathematical communication and connection abilities?
c. Are there differences in the enhancement of mathematical reasoning ability among students who follow scientific debate instruction and students who follow conventional instruction, based on PAM?
d. Are there differences in the enhancement of mathematical communication, reasoning, and connection abilities between students with different educational backgrounds?
3. Research Objectives
4. Research Methods
This quasi-experimental study with a static-group comparison design involved 96 students taking the Calculus 2 lecture. For purpose 1, the data were analyzed using the normalized gain, with the formula:
Normalized gain (g) = (posttest score − pretest score) / (ideal score − pretest score).
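The gain formula above can be sketched as a small function (an illustrative Python snippet, not part of the original study; the function and variable names are assumptions):

```python
def normalized_gain(pretest, posttest, ideal):
    """Normalized gain: the fraction of the possible improvement
    (ideal - pretest) that the student actually achieved."""
    if ideal == pretest:
        raise ValueError("pretest equals the ideal score; gain is undefined")
    return (posttest - pretest) / (ideal - pretest)

# A student scoring 40 on the pretest and 76 on the posttest,
# with an ideal score of 100, achieves g = 36/60 = 0.6.
print(normalized_gain(40, 76, 100))  # -> 0.6
```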
To test whether there are differences in the enhancement of students' mathematical communication, reasoning, and connection abilities between those who follow scientific debate instruction and those who follow conventional instruction, the Mann-Whitney U test was used. For purpose 2, ANOVA was used; for purpose 3, the Kruskal-Wallis test; and for purpose 4, the Mann-Whitney U test.
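As a sketch only (not the study's data or code), the nonparametric tests named here can be run with SciPy; the gain values below are invented purely for illustration:

```python
from scipy import stats

# Hypothetical normalized gains for the two instruction groups.
debate_gains       = [0.71, 0.66, 0.58, 0.74, 0.63, 0.69]
conventional_gains = [0.55, 0.62, 0.49, 0.66, 0.57, 0.60]

# Purposes 1 and 4: Mann-Whitney U test on two independent groups.
u = stats.mannwhitneyu(debate_gains, conventional_gains,
                       alternative="two-sided")

# Purpose 3: Kruskal-Wallis test across the three PAM levels
# (again with made-up gains).
high   = [0.70, 0.74, 0.68]
middle = [0.61, 0.66, 0.63]
low    = [0.52, 0.58, 0.55]
kw = stats.kruskal(high, middle, low)

print(u.pvalue, kw.pvalue)  # small p-values would reject "no difference"
```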
5. Research Results
The subjects of the study were 94 students: 60 from senior high schools and 34 from Islamic senior high schools. The enhancements of students' mathematical communication, reasoning, and connection abilities fall in the middle criteria, with average normalized gain scores of 0.660, 0.522, and 0.635, respectively. Mann-Whitney U tests of the hypothesis that the enhancement of these abilities under scientific debate instruction is better than under conventional instruction show:
1) There was no difference in the enhancement of mathematical communication ability between students who followed scientific debate instruction and students who followed conventional instruction.
2) There was no difference in the enhancement of mathematical reasoning ability between students who followed scientific debate instruction and students who followed conventional instruction.
3) There was a difference in the enhancement of mathematical connection ability between the two groups; in other words, the enhancement of mathematical connection ability was better for students who followed scientific debate instruction than for those who followed conventional instruction.
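The "middle criteria" label attached to these gains can be reproduced with the commonly used normalized-gain bands (g < 0.3 low, 0.3 ≤ g < 0.7 middle, g ≥ 0.7 high). Note that these cutoffs are an assumption, since the paper does not state its own:

```python
def gain_criteria(g):
    """Band a normalized gain using the widely used cutoffs.
    (Assumed bands; the paper does not state its own cutoffs.)"""
    if g < 0.3:
        return "low"
    if g < 0.7:
        return "middle"
    return "high"

# The three reported average gains all land in the middle band.
print([gain_criteria(g) for g in (0.660, 0.522, 0.635)])
# -> ['middle', 'middle', 'middle']
```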
Since the enhancement data for mathematical communication and connections showed normal distributions and homogeneous variances, the interaction between the instructional model and PAM was analyzed using ANOVA. The calculation results are presented in Table 2.
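The interaction analysis described here can be sketched for a balanced two-factor layout (2 instructional models × 3 PAM levels). The following is an illustrative hand-rolled computation on synthetic gains, not the study's data or a reproduction of Table 2:

```python
import numpy as np
from scipy.stats import f as f_dist

def interaction_test(cells):
    """Two-way ANOVA interaction F-test for a balanced design.
    cells has shape (a, b, n): a model levels, b PAM levels,
    n students per cell."""
    a, b, n = cells.shape
    grand = cells.mean()
    ss_total = ((cells - grand) ** 2).sum()
    ss_a = b * n * ((cells.mean(axis=(1, 2)) - grand) ** 2).sum()
    ss_b = a * n * ((cells.mean(axis=(0, 2)) - grand) ** 2).sum()
    ss_cells = n * ((cells.mean(axis=2) - grand) ** 2).sum()
    ss_ab = ss_cells - ss_a - ss_b           # interaction sum of squares
    ss_err = ss_total - ss_cells
    df_ab, df_err = (a - 1) * (b - 1), a * b * (n - 1)
    F = (ss_ab / df_ab) / (ss_err / df_err)
    return F, f_dist.sf(F, df_ab, df_err)    # F statistic and p-value

# Synthetic gains: 2 models x 3 PAM levels x 4 students per cell.
rng = np.random.default_rng(0)
means = np.array([[0.66, 0.60, 0.55],        # scientific debate
                  [0.58, 0.56, 0.50]])       # conventional
cells = rng.normal(means[..., None], 0.05, size=(2, 3, 4))
F, p = interaction_test(cells)
print(F, p)  # a large p retains the "no interaction" finding
```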
Graphically, the interaction between the instructional model and PAM on the enhancement of students' mathematical communication and connections is shown in Figure 1.
From the graphical representation, it appears that for the groups of students with middle and low PAM, the scientific debate model produced a larger average improvement in mathematical communication ability than conventional learning, while for the group with high PAM the average improvement was smaller than under conventional learning. The average increase in mathematical connection ability for every PAM group was always better in the scientific debate group than in the conventional group. This indicates that the scientific debate model has a better impact than conventional learning on increasing students' mathematical connection ability. The average improvement in mathematical reasoning ability under the scientific debate model did not differ significantly across the high, middle, and low PAM levels, whereas under the conventional model the average increases for the high, middle, and low levels differed significantly.
mathematical communication, reasoning, and connection abilities. This means that the enhancement of mathematical communication, reasoning, and connection abilities of students from senior high schools is better than that of students from Islamic senior high schools.
6. Discussion
The factors discussed in this study are the instructional model (conventional and scientific debate), prior knowledge of mathematics (high, middle, and low), and students' educational background (Senior High School and Islamic Senior High School).
a. Instructional Model
The characteristics of an instructional model give an idea of how teaching and learning occur in the classroom. From the literature study on the scientific debate and conventional instructional models, the characteristics of each model were obtained as shown in Table 4 below.
From the characteristics of the scientific debate instructional model, it appears that each student is required to construct and to understand knowledge independently. Thus, students have a very big role
in the effort to understand concepts, to develop procedures, to discover principles, and to apply concepts, procedures, and principles to solve a given problem. The lecturer's main role is that of a facilitator who must always support whatever development occurs in students while the learning process takes place.
To implement the scientific debate instructional model, the lecturer prepared teaching materials and Student Worksheets (LKS) for the experimental class. The teaching materials developed in this study were designed so that students could find the concepts, procedures, and principles for themselves and apply them to solve a given problem. The materials were developed in such a way that students could achieve the mathematical competencies relevant to the material being studied. In addition, the development of the teaching materials was directed at developing students' mathematical communication, reasoning, and connection abilities optimally. The LKS were used to equip students as a reference of their own in the debate and in problem solving. The LKS contain not only the problems to be solved by the students but also concepts, procedures, and principles, as well as example applications that students can study before preparing for the debate. In the scientific debate instructional model, students are required to solve problems in written form, and they must also account for their answers in the debate. This is consistent with Pugalee's (2001) statement that students must be trained to communicate their mathematical knowledge, to provide arguments for each answer, and to respond to the answers given by others, so that what is learned becomes more meaningful to them. In addition, Huggins (1999) stated that to enhance conceptual understanding of mathematics, students can express mathematical ideas to others.
The quantitative calculations showed no difference in the enhancement of mathematical communication and reasoning abilities between students who followed scientific debate instruction and those who followed conventional instruction. Although the calculations showed no difference, we should still compare the scientific debate model with conventional instruction. The scientific debate model challenges students to learn actively, so the question is whether active learning is better than passive learning. If the goal of learning is measured only by the ability to answer questions, which are generally cognitive, the results do not differ much: highly active learning correlates positively, but weakly, with academic achievement. However, everyday life demands the activity of studying and working actively; active learning is also more enjoyable and broadens one's horizons, so it remains very important. At a minimum, it helps one realize that passive people will hardly be needed in the days to come (Ruseffendi, 1991).
There is a difference in the enhancement of mathematical connection ability between students who followed scientific debate instruction and those who followed conventional instruction. This means that students who received scientific debate instruction showed a better enhancement of mathematical connection ability than those under conventional instruction. The rationale is that because students had to solve applied mathematics problems, their knowledge gained more insight, allowing them to interpret mathematics not only within mathematics itself but also in its role in other fields. This is consistent with Harnisch's research (in Qohar, 2011), which found that students can get an overview of the concepts and big ideas about the relations between mathematics and science, and gain more experience. Additionally, NCTM (2000) states that mathematics is not a collection of separate topics but a network of closely related ideas. The problems solved by students can lead them to improve their mathematical connections. Thus, students are trained to perform well in mathematics as well as to link it to other areas, so that they recognize the importance of mathematical connection ability.
This is consistent with Piaget's statement that "knowledge is actively constructed by the learner, not passively received from the environment" (Dougiamas, 1998). Piaget's statement means that students do not passively receive knowledge from the environment but must actively discover knowledge on their own, while the teacher's role is only to steer students toward seeking knowledge and understanding it meaningfully. This is consistent with Geoghegan's statement (2005) that learning and teaching become a reflective phenomenon based on the interconnections between teachers and students working together as co-instructors in the search for meaning and understanding. Salomon and Perkins (in Dewanto, 2007) suggested that 'acquisition' and 'participation' in learning relate to and interact with each other in a synergic way.
b. Prior Mathematical Ability (PAM)
Students' intelligence can be measured by the cumulative grade point average (GPA); although it sometimes misses, the GPA can indicate whether a student will be able to attend a particular school. In this study, students' mathematical prior ability (PAM) is their ability in the Calculus 1 material. PAM was used to gauge students' readiness to receive the course material and to relate it to other subject matter already received; students' success in their lessons depends on that readiness. PAM was classified into three groups: high, middle, and low.
From the quantitative analysis results, PAM has a significant effect on the enhancement of students' mathematical communication ability. This is consistent with Arends' opinion (2008a: 268) that students' ability to learn new ideas relies on their prior knowledge and existing cognitive structures. Meanwhile, according to Ruseffendi (1991), students' success in learning is almost entirely influenced by their intelligence, readiness, and talent. The effect of PAM examined against the difference in instructional model (scientific debate and conventional) indicates that the difference in instructional models does not have a significant effect on the enhancement of students' mathematical communication ability.
The quantitative analysis also showed that the difference in PAM level did not give a significant effect on the enhancement of students' mathematical connection ability, whereas the difference in learning model did. This means that the average enhancement of students' mathematical connections under the scientific debate model is always better than under conventional instruction, indicating that the scientific debate model produces a better effect on students' mathematical connection ability than conventional instruction.
There was no interaction between the instructional model and PAM on the enhancement of students' mathematical communication and connection abilities. This result is consistent with Nana (2009), who concluded that there is no interaction between the learning approach and students' PAM in mathematical problem-solving ability. Thus, the learning model factor does not exert a strong influence on the enhancement of students' mathematical communication and connection abilities. The scientific debate instructional model can enhance students' mathematical reasoning ability evenly across the various levels of PAM. This is possible because, in the scientific debate model, each student is challenged to put forward ideas and to reflect on their answers in the debate. This condition may spur increased mathematical reasoning and deeper understanding. Huggins (1999) stated that to enhance conceptual understanding of mathematics, students can express mathematical ideas to others. According to Brenner (1998: 108) and Kadir (2008: 346), through discussions with teachers and activity partners, students are expected to gain a better understanding of the basic concepts of mathematics and to become better problem solvers. Increasing students' conceptual understanding will eventually increase their mathematical reasoning ability. Goldberg and Larson (2006: 97) state that discussion can improve reasoning ability, human relations skills, and communication ability.
The average enhancement of students' mathematical reasoning ability under conventional instruction differed significantly by PAM. This means that in conventional instruction, students' PAM level has a strong influence on the increase in reasoning ability. This is possible because in conventional instruction students receive the teaching material passively, so that the enhancement of reasoning relies on prior knowledge; in addition, students depend on the textbook to deepen their knowledge. Yet according to
Baroody (1993: 99), the words and symbols of teachers and textbooks often have little meaning for children. Furthermore, Baroody noted that students are rarely asked to explain their ideas in any form. These conditions develop students' reasoning ability less than optimally, because students are not challenged to discover and to construct knowledge independently.
c. Educational Background
Students' educational background in this study groups students into those from Senior High School (SMU) and those from Islamic Senior High School (MA). In the group with scientific debate instruction, differences in educational background do not have a strong influence on the enhancement of students' mathematical communication, reasoning, and connection abilities. The absence of this influence reflects the great adaptability of each student. Students' adaptability in learning is affected by the application of scientific debate instruction, in which the instructional process confronts students with applied problems. These applied mathematical problems are the focus and stimulus of instruction and a vehicle for developing problem-solving ability. Instruction is thus student-centered, the lecturer acts as a facilitator or coach, and new information or concepts are acquired through self-directed learning. The importance of confronting students with applied problems accords with Walsh's opinion (2008) that doing so can form a positive attitude, foster creativity, deepen understanding, and develop problem-solving or investigative abilities that can be applied in various fields of life.
Scientific debate instruction emphasizes students' learning activities. Students work collaboratively to identify what they need in order to develop solutions, to find relevant sources, to share and synthesize findings, and to ask questions that lead to further learning. Here the teacher acts as a facilitator. According to the CIDR (2004), as a facilitator the teacher can ask students questions to sharpen or deepen their understanding of the relationships between the concepts they have built. The lecturer seeks a balance between providing direct guidance and encouraging students' self-directed learning. These conditions trigger an even enhancement of students' mathematical communication, reasoning, and connection abilities, because scientific debate instruction challenges students to be highly adaptable.
In the group of students with conventional instruction, the results showed that differences in educational background do influence the enhancement of students' mathematical communication, reasoning, and connection abilities. This is logical because in conventional instruction lecturers actively explain the subject matter, give examples, and assign exercises, while students act like machines: they listen, take notes, and do the exercises given by the lecturer. In these circumstances, students are not given much time to find knowledge on their own because learning is dominated by the lecturer. Discussions are not often held, so interaction and communication among students, and between students and the lecturer, do not appear. As a result, students' mathematical communication, reasoning, and connection abilities develop less than optimally. Students encounter few mathematical applications in social life, whereas mathematical literacy is very important in today's information era. Consequently, under conventional instruction students' educational background cannot enhance mathematical communication, reasoning, and connections evenly, because students will learn and receive course materials according to their own ability.
7. Conclusion
Based on the problem formulation, results, and discussion presented above, it can be concluded that:
a. The enhancement of students' mathematical communication, reasoning, and connection abilities with scientific debate instruction meets the middle criteria. The enhancement of students' mathematical communication and reasoning abilities with scientific debate instruction is not significantly different from that of the group with conventional instruction. The enhancement of students' mathematical connection ability with scientific debate instruction is better than with conventional instruction.
b. There is no interaction between PAM (high, middle, low) and the instructional model factor (scientific debate and conventional) on the increase of students' mathematical communication and connection abilities.
c. The average enhancement of mathematical reasoning ability in the group of students with scientific debate instruction did not differ significantly by PAM, whereas in the group with conventional instruction it differed significantly by PAM.
d. Differences in students' educational background have no influence on the enhancement of mathematical communication, reasoning, and connection abilities for students who follow scientific debate instruction; in the group with conventional instruction, these differences have a significant effect on the increase of mathematical communication, reasoning, and connection abilities.
8. References
Alibert, D., Legrand, M., & Richard, F. (1987). 'Alteration of didactic contract in codidactic situation'. Proceedings of PME 11, Montreal, 379-386.
Alibert, D. (1988). 'Towards New Customs in the Classroom'. For the Learning of Mathematics, 8(2), 31-35.
Arends, R. I. (2008). Learning to Teach. New York: McGraw-Hill Companies, Inc.
Brenner, M. E. (1998). Development of Mathematical Communication in Problem Solving Groups by Language Minority Students. Bilingual Research Journal, 22(2, 3 & 4).
Ferrini-Mundy, J., & Graham, K. G. (1991). An Overview of the Calculus Curriculum Reform Effort: Issues for Learning, Teaching, and Curriculum Development. American Mathematical Monthly, 98(7), 627-635.
Huggins, B., & Maiste, T. (1999). Communication in Mathematics. Master's Action Research Project, St. Xavier University & IRI/Skylight.
Kusumah, Y. S. (2012). Current Trends in Mathematics and Mathematics Education: Teacher Professional Development in the Enhancement of Students' Mathematical Literacy and Competency. Makalah Seminar Nasional: Universitas Pendidikan Indonesia.
Legrand, M., et al. (1986). "Le débat scientifique". Actes du Colloque franco-allemand de Marseille, 53-66.
NCTM (2000). Principles and Standards for School Mathematics. Reston, Virginia.
Niss, G. (1996). Goals of mathematics teaching. In A. J. Bishop, K. Clements, C. Keitel, J. Kilpatrick, & C. Laborde (Eds.), International Handbook of Mathematical Education. Netherlands: Kluwer Academic Publishers.
Orton, A. (1983). Students' Understanding of Integration. Educational Studies in Mathematics, 14, 1-18.
Polya, G. (1973). How to Solve It: A New Aspect of Mathematical Method. New Jersey: Princeton University Press.
Pugalee, D. A. (2001). Using Communication to Develop Students' Literacy. Journal of Research in Mathematics Education, 6(5), 296-299.
Romberg, T. A., & Tufte, F. W. (1987). Kurikulum Matematika Rekayasa: Beberapa Saran dari Cognitive Science. Monitoring Matematika Sekolah: Latar Belakang Makalah.
Ruseffendi, E. T. (1991). Pengantar kepada Membantu Guru Mengembangkan Kompetensinya dalam Pengajaran Matematika untuk Meningkatkan CBSA. Bandung: Tarsito.
Sabella, M. S., & Redish, E. F. Student Understanding of Topics in Calculus. University of Maryland Physics Education Research Group. http://www.physics.umd.edu/perg/plinks/calc.htm.
Suryadi, D. (2005). Penggunaan Pendekatan Pembelajaran Tidak Langsung Serta Pendekatan Gabungan Langsung dan Tidak Langsung dalam Rangka Meningkatkan Kemampuan Berfikir Matematik Tingkat Tinggi Siswa SLTP. Disertasi pada PPS UPI: tidak diterbitkan.
Suryadi, D. (2010). Model Antisipasi dan Situasi Didaktis dalam Pembelajaran Matematika Kombinatorik Berbasis Pendekatan Tidak Langsung. Bandung: UPI.
Wahyudin (2003). Kemampuan Guru Matematika, Calon Guru Matematika, dan Siswa dalam Mata Pelajaran Matematika. Disertasi pada PPS UPI: tidak diterbitkan.
182
Proceedings of the International Conference on Mathematical and Computer Sciences
Jatinangor, October 23rd-24th, 2013
Abstract: In an effort to give policymakers food crisis alerts, IFPRI has developed a new early-warning tool for identifying extreme price variability of agricultural products. The tool measures excessive food price variability. By providing an early warning system that alerts the world to price abnormalities for key commodities in the global agricultural markets, it lets policymakers make better-informed plans and decisions, including when to release stocks from emergency grain reserves. The World Bank is currently developing a framework for monitoring the food price crisis at the global and national levels. This framework focuses on price and does not examine the other factors that govern a food crisis. It is not designed to define the time of a food crisis, but to provide operational indicators that monitor how close food prices are to a level categorized as a crisis. The indicators were used to predict the global food price spikes that occurred in June 2008 and February 2011. Analyses were conducted to select the indicators that sounded an alert about the crisis, and the best one was then used to monitor where current global prices stand with respect to the selected crisis threshold. The weakness of the monitoring frameworks of the World Bank and IFPRI above is that we may never see a rice price hike and yet still see a rice crisis; we may have no natural disaster and yet still have a rice crisis caused by mismanagement. To overcome this weakness of the models studied in ten papers, the authors propose a different approach: the Quasi Static Rice Crisis Prediction Model with Justifiable Action.
Keywords: crisis, quasi static, interval forecasting, supply forecasting
1. Introduction
Extensive shrinkage of rice field area in Indonesian regions such as Tangerang, Serang, Bekasi, Karawang, Purwakarta, Bandung and Bogor, the regional centers of national rice production, is much higher than in other regions. Thousands of industrial sites have been established in these areas. In Bekasi, thousands of hectares of rice fields have been converted to industry, even fields with technical irrigation. The Central Bureau of Statistics indicates that the paddy-field conversion rate has reached 110 thousand hectares per year since 2000, while new rice fields are opened at a rate of only about 30-52 thousand hectares per year. In Kabupaten Bandung, rice fields shrink by up to 30 hectares per year, converted into residential or industrial complexes (http://oc.its.ac.id). According to the Bandung mayor's reports, in 2012 the city of Bandung had 14,725 hectares of land planted with rice. Rice production in the city of Bandung covered 4.5% of total requirements (Hidayat Yuyun, 2012). The rice resilience of Bandung is predicted to last at most 7 years from 2011; in the 7th year the supply of rice is still considered sufficient to prevent social unrest due to rice shortage. In the 8th year, the city of Bandung is predicted to be in a rice crisis, that is, public turmoil associated with rice shortage. It must be realized that this 8th-year crisis prediction assumes that the program extending rice cultivation to gardens and dry fields, and the intensification of all existing land, were carried out in 2011; if not, the predicted figures will be even worse and the rice crisis will occur sooner. The prediction also assumes the current conditions of rice supply, the current rate of decrease in production (due to decreased land and/or reduced productivity), and rice prices constant at today's level (Hidayat Yuyun, 2011). Sooner or later the rice crisis will happen, so accurate information about the predicted time of crisis in Bandung has become very pressing.
To address these issues, the aim of this research is to identify the rice crisis time in Bandung, Indonesia. To support this aim, the authors outlined five specific objectives: (1) to develop rice crisis criteria; (2) to forecast rice demand in Bandung; (3) to forecast rice supply in Bandung; (4) to determine a
simultaneous demand-supply-price model of rice in Bandung; and lastly (5) to forecast the rice crisis time in Bandung. In this paper, however, the discussion is limited to objective 1 and the methods used to achieve it. The expected results of this study will provide wide and deep insight into the rice scarcity situation in Bandung, Indonesia. The results are expected to give these benefits: (1) preventive policy to slow down or delay the rice crisis in the city of Bandung; and (2) a clear agricultural strategy for the Bandung city government, oriented to strengthening rice resilience.
The urgency of this study is that the food crisis is not a matter of whether, but of when. This thought is inspired by the theory of Malthus: demand follows a geometric progression, while supply behaves arithmetically. This is a consequence of the observation that the human population grows faster than food production. Malthusian theory clearly emphasizes the imbalance between a population growing geometrically and a food supply growing arithmetically. It effectively questions the carrying capacity of the environment: land, as a component of the natural environment, is not able to provide enough agricultural output to meet the needs of a growing population, and its carrying capacity declines as the human burden grows. In line with Malthusian theory, sooner or later a crisis will occur; thus there is a strong basis for research on the great question: when will the rice crisis happen? Perhaps the crisis could happen tomorrow or in the near future. The idea that a rice crisis could occur from 2012 onward is also supported by data on the low internal production of Bandung, covering only 4.5% of residents' needs. Other data strengthening the idea is that the paddy-field crisis occurred because rice farming is not financially attractive. The data show that the local minimum wage in Bandung is IDR 1,271,625 per worker per month, while the wage of a farm laborer in Bandung is IDR 395,390 per person per month, far below the minimum wage. With farm wages so much smaller than the minimum wage, how can we expect the supply of rice to the city of Bandung to increase? The trend is a decline, and this is where the Department of Agriculture and Food Security of Bandung must work to raise rice resilience. Of course, if a rice crisis will occur in the near future, effective action must be taken quickly, which demands knowing when the rice crisis will occur. The prediction that the city of Bandung will reach a time of crisis in 2018, reported in (Hidayat Yuyun, 2011), functions as a trigger for more profound research, given that the preliminary nature of those studies shows it is possible to predict the rice crisis time. Previous research had a major drawback in assuming that the conditions of rice supply, the rate of production decline (due to decreased land and/or reduced productivity), and rice prices remain constant as they are now. Of course this is unrealistic.
The limitations of the study are: (1) it is geographically limited to the city of Bandung, Republic of Indonesia; (2) it assumes that the rice demand behavior of people in Bandung is effectively "coded in their DNA", so that modified demand behavior is impossible or has small probability; the prediction of rice crisis time is made for the case when no food substitute is available; and (3) data for the study were collected from a number of secondary sources, except for the model of price and number of rioters; most of the data for the analysis were obtained from various publications of the Indonesian Central Bureau of Statistics.
To show the originality of the research, the authors conducted an efficient and effective literature study of the following papers:
(IRRI - International Rice Research Institute, 2008), (David Dawe & Tom Slayton, 2010), (Saifullah A., 2008), (Won W. Koo, M.H.K., & Gordon W. Erlandson, 1985), (Dawe D., 2010), (Parhusip U., 2012), (Cuesta J., 2012), (Ateneo Economics Association, 2008), (Berthelsen J., 2011), and (Maximo Torero, 2012).
2. Methodology
This research has limitations and does not try to answer everything. In other words, the state we take care of is one in which poor management of rice is the cause of the riots: when people come to the streets, whatever the overwhelming problem is, if that problem is rice, then it is our responsibility.
Step 1. Crisis Definition
What is a crisis anyway? There is no standard definition. In this paper, a crisis is defined as the situation in which the police cannot control a riot caused totally or partially by rice scarcity. Riots caused by factors other than rice scarcity are beyond our concern. We regard this as the most robust definition for our purpose.
Step 2. Riot Control Model
The next question is: when will it happen? The immediate answer requires a riot control model, because what we worry about is an uncontrollable riot causing the crisis. So we must know the riot control model and its validity period. This model is not for the present moment; its validity period must cover the time when the crisis happens, so it involves a time variable.
Step 3. People Stress Model
Assuming we have a riot control model, what next? Riots are somehow caused by people's stress. The crucial things in developing a people stress model are how many people are rioting and in, or because of, what condition; the questions thus include magnitude and duration. Why are people stressed? In this research the authors assume that the rice price causes the stress of the people. As a consequence, we must develop a price model.
Step 4. Price Econometric Model
We need a price model, but we do not want to model it with a time series approach, because the forecast is sometimes too late and sometimes contaminated. Another reason is that price does not indicate scarcity or abundance: does a cheap rice price show that there is a lot of rice? As an example, when the dollar was being held at a price of IDR 2,000, one day it jumped from IDR 2,000 to IDR 9,000. A time series of price data loses the dynamics of the dollar; the time series is then contaminated. Price does not show the dynamics; price dynamics cannot be seen through the time series alone.
The most important realization is that price is an effect, not a cause, so time series modeling of prices comes too late. That is why we use an econometric model instead of a time series model to forecast the price dynamics. There are two reasons for not forecasting price with the time series method: first, contamination, and second, lateness. It is too late because price is just an effect, and it is contaminated because changes in state-regulated (administered) prices mean the observed price is not the true price. Economically speaking, price is determined by the interaction of demand and supply. The rice price is controlled by a dynamic system with numerous factors, and a time series alone is not the appropriate way to analyze it. Prices are controlled by at least the demand and supply variables. As is well known, the price of a commodity and the quantity sold are determined by the intersection of the demand and supply curves for that commodity. We must therefore develop a simultaneous-equation model involving demand, supply, and price.
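The demand-and-supply intersection argument above can be made concrete with a minimal numeric sketch; all curve coefficients below are hypothetical, purely for illustration:

```python
# Equilibrium of hypothetical linear demand and supply curves.
# Qd = a - b*P (demand), Qs = c + d*P (supply); coefficients are invented.
a, b = 100.0, 2.0   # demand intercept and slope
c, d = 10.0, 1.0    # supply intercept and slope

p_star = (a - c) / (b + d)   # price at which Qd == Qs
q_star = a - b * p_star      # equilibrium quantity sold

print(p_star, q_star)  # 30.0 40.0
```

In the paper's simultaneous-equation setting the curves are estimated rather than given, but the equilibrium logic is the same.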
Step 5. Time Series Forecasting Models for Demand and Supply
As a consequence of the econometric model in Step 4, the authors must seek the best time series forecasting models for the demand and the supply of rice separately. The demand and supply forecasts produced in Step 5 become the input to Steps 4, 3 and 2, ending with the activity of determining when the rice crisis happens.
As an integral part of modeling rice crisis forecasting, this paper shows a genuine approach to rice demand forecasting in the city of Bandung. Here is the process.
2.2.1 Data
The data used are the rice supply to the city of Bandung (internal and external) from 2004 to 2012, compiled from three sources: the Bandung Central Bureau of Statistics, the Logistics Agency Sub Division of West Java, and the Department of Agriculture and Food Security of Bandung (Indonesia).
a. Naive Model
The equation of the naive model is [20]
$$F_{t+1} = X_t \qquad (2.1)$$
b. Naive Rate of Change Model
The equation of the naive rate of change model is [20]
$$F_{t+1} = X_t\,\frac{X_t}{X_{t-1}} \qquad (2.2)$$
c. Simple Linear Regression Model
The equation is:
$$F_t = a + b\,t \qquad (2.3)$$
d. Double Moving Averages Model
The equations used are:
$$S'_t = \frac{X_t + X_{t-1} + \cdots + X_{t-N+1}}{N} \qquad (2.4)$$
$$S''_t = \frac{S'_t + S'_{t-1} + \cdots + S'_{t-N+1}}{N} \qquad (2.5)$$
$$a_t = S'_t + (S'_t - S''_t) = 2S'_t - S''_t \qquad (2.6)$$
$$b_t = \frac{2}{N-1}\,(S'_t - S''_t) \qquad (2.7)$$
$$F_{t+m} = a_t + b_t\,m \qquad (2.8)$$
e. Single Exponential Smoothing
The equations of single exponential smoothing are written as follows [21]:
$$F_{t+1} = \alpha X_t + (1-\alpha)F_t \qquad (2.9)$$
$$e_t = X_t - F_t \qquad (2.10)$$
f. Double Exponential Smoothing: Brown One-Parameter Linear Method
The equations used to implement the method are [21]:
$$S'_t = \alpha X_t + (1-\alpha)S'_{t-1} \qquad (2.11)$$
$$S''_t = \alpha S'_t + (1-\alpha)S''_{t-1} \qquad (2.12)$$
$$a_t = 2S'_t - S''_t \qquad (2.13)$$
$$b_t = \frac{\alpha}{1-\alpha}\,(S'_t - S''_t) \qquad (2.14)$$
$$F_{t+m} = a_t + b_t\,m \qquad (2.15)$$
g. Double Exponential Smoothing: Holt Two-Parameter Method
$$S_t = \alpha X_t + (1-\alpha)(S_{t-1} + b_{t-1}) \qquad (2.16)$$
$$b_t = \gamma(S_t - S_{t-1}) + (1-\gamma)b_{t-1} \qquad (2.17)$$
$$F_{t+m} = S_t + b_t\,m \qquad (2.18)$$
h. Triple Exponential Smoothing: Brown One-Parameter Quadratic Method
$$S'_t = \alpha X_t + (1-\alpha)S'_{t-1} \qquad (2.19)$$
$$S''_t = \alpha S'_t + (1-\alpha)S''_{t-1} \qquad (2.20)$$
$$S'''_t = \alpha S''_t + (1-\alpha)S'''_{t-1} \qquad (2.21)$$
$$a_t = 3S'_t - 3S''_t + S'''_t \qquad (2.22)$$
$$b_t = \frac{\alpha}{2(1-\alpha)^2}\left[(6-5\alpha)S'_t - (10-8\alpha)S''_t + (4-3\alpha)S'''_t\right] \qquad (2.23)$$
$$c_t = \frac{\alpha^2}{(1-\alpha)^2}\,(S'_t - 2S''_t + S'''_t) \qquad (2.24)$$
$$F_{t+m} = a_t + b_t\,m + \tfrac{1}{2}c_t\,m^2 \qquad (2.25)$$
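To illustrate how the smoothing models above operate, here is a minimal sketch, not the authors' implementation, of single exponential smoothing (Eq. 2.9) and Brown's one-parameter linear method (Eqs. 2.11-2.15); the series and the value of alpha are invented:

```python
def ses(x, alpha):
    """Single exponential smoothing: F[t+1] = alpha*X[t] + (1-alpha)*F[t] (Eq. 2.9)."""
    f = [x[0]]                      # initialize the first forecast with the first observation
    for t in range(len(x)):
        f.append(alpha * x[t] + (1 - alpha) * f[t])
    return f                        # f[-1] is the one-step-ahead forecast

def brown_double(x, alpha, m=1):
    """Brown's one-parameter linear method, forecast m periods ahead (Eqs. 2.11-2.15)."""
    s1 = s2 = x[0]                  # initialize both smoothed series with the first observation
    for xt in x:
        s1 = alpha * xt + (1 - alpha) * s1      # Eq. 2.11
        s2 = alpha * s1 + (1 - alpha) * s2      # Eq. 2.12
    a = 2 * s1 - s2                             # Eq. 2.13
    b = alpha / (1 - alpha) * (s1 - s2)         # Eq. 2.14
    return a + b * m                            # Eq. 2.15

print(ses([10, 12, 14], 0.5))           # last element is the next-period forecast
print(brown_double([10, 12, 14], 0.5))  # trend-aware forecast one period ahead
```

On a trending series the Brown forecast extrapolates the trend, while SES lags behind it; this is exactly the distinction the model comparison in Section 4.3 exploits.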
3. Forecasting Quality
Forecasting quality is determined by three critical parameters, namely accuracy, precision, and visibility. These three parameters are the measures used to select the best forecasting method.
Visibility is the ability of a model to predict the future. Measures of the accuracy and precision of a forecasting method vary depending on how far ahead the method can predict. The visibility of the forecasting models in this study is 3 years, so the 4th year and beyond need re-examination.
Accuracy is the degree of closeness to the actual value. Accuracy is a must; it exists in any forecasting activity. Forecasting accuracy is represented by the forecast error, the difference between the actual value and the forecast value of the time series, which we call the error measure. The forecast error is simply e_t = Y_t - F_t, regardless of how the forecast was produced, where Y_t is the actual value at period t and F_t is the forecast for period t. The most commonly used metric is the Mean Absolute Percentage Error, MAPE = mean(|p_t|), where the percentage error is p_t = 100 e_t / Y_t [12].
Percentage errors have the advantage of being scale independent, so they are frequently used to compare forecast performance between different data series. A measurement system can be accurate but not precise, precise but not accurate, neither, or both; accuracy without precision is not meaningful. Forecast precision is associated with the width of the forecast interval: a very wide forecasting interval indicates low precision, so the narrower the forecast interval, the better. This study forecasts the demand variable in interval format so that the precision of the forecasting results can be assessed.
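The MAPE defined above can be sketched directly; the actual and forecast values here are invented:

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error: mean of |p_t| with p_t = 100*e_t/Y_t."""
    errors = [100 * (y - f) / y for y, f in zip(actual, forecast)]
    return sum(abs(p) for p in errors) / len(errors)

print(mape([100, 200, 400], [90, 210, 400]))  # (10 + 5 + 0) / 3 = 5.0
```

Because each error is scaled by its own actual value, the metric is comparable across series of very different magnitudes, which is why it is used to rank the nine candidate models later in the paper.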
In this paper, the focus so far has been on fitting a forecasting model as closely as possible to the time series data. The ultimate test is of course to see how the forecasts fit the future, but such tests can only be done retrospectively. Therefore it is common practice to do "blind tests" by splitting the data into two series: the first part is used to fit the model to the data, and the recent part of the time series is used as a holdout test, to see how accurately the model forecasts the "unknown" future. The model that most accurately describes the holdout series is then selected for making the actual forecast. These examples may be regarded as this last step.
The rationale behind this practice is the belief that the model that most accurately prolongs its fitted pattern from the initial series into the holdout series will also be the most accurate when prolonging the pattern from the complete data series into the real future. Just selecting the model with the best fit to the complete time series may result in selecting a model that "over-fits" the data, incorporating random fluctuations that do not repeat themselves [11]. Our stand is that we do not believe in accuracy at the model-building phase; we believe in accuracy at the testing phase. Whatever the performance measure in the model-building phase, to us it means nothing, because models should be tested in the way they will be used. How do we test the models? We use the holdout set as the testing ground, and this study makes sure the models are tested in the same way.
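The blind-test procedure described above can be sketched as follows. The two candidate "models" here are deliberately naive stand-ins, not the nine models of this study, and the series is invented:

```python
def holdout_select(series, candidates, n_holdout):
    """Fit each candidate on the head of the series, score it by MAPE on the
    held-out tail, and return the best candidate's name plus all scores."""
    train, test = series[:-n_holdout], series[-n_holdout:]
    scores = {}
    for name, fit in candidates.items():
        forecast = fit(train, n_holdout)        # list of n_holdout forecasts
        scores[name] = sum(abs(100 * (y - f) / y)
                           for y, f in zip(test, forecast)) / n_holdout
    return min(scores, key=scores.get), scores

candidates = {
    "naive": lambda tr, h: [tr[-1]] * h,        # repeat the last observed value
    "drift": lambda tr, h: [tr[-1] + (k + 1) * (tr[-1] - tr[0]) / (len(tr) - 1)
                            for k in range(h)], # extend the average slope
}
best, scores = holdout_select([10, 12, 14, 16, 18, 20], candidates, 2)
print(best)  # drift
```

On this upward-trending toy series the drift candidate tracks the holdout tail exactly, so it is selected; the paper applies the same logic with its nine smoothing and regression models.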
Sources: Regional Planning Development Board of Bandung in cooperation with the Central Bureau of Statistics of Bandung, the Bandung logistics agency, and the Department of Agriculture and Food Security of Bandung.
Looking at the data plot in Figure 1, it can be seen that the time series data show an upward trend pattern. Test results also indicate stationarity of the time series data.
4.3. Selecting the Best Forecasting Model Based on the Model-Building Data Set [First Stage]
Applying the forecasting methods to the first set of data (model building) yields the MAPE values. Table 2 recapitulates the MAPE values for the nine models based on the annual rice supply data. (The MAPE has received some criticism; see Rob J. Hyndman, 2006.)
Table 2 shows that the largest MAPE is produced by the double exponential smoothing model with Holt's two-parameter method, while the smallest MAPE value, 5.15, is given by the simple linear regression model. The model that most accurately describes the holdout series is then selected for making the actual forecasts [11]. We have nine candidates; how do we choose the best of the nine? The model will be used to predict 3 years ahead, where 3 is the visibility number. We do not need a tracking signal to measure visibility: because the test results show we can predict 3 years ahead accurately, we claim 3 as the visibility number. Why not simply use a visibility of 3 years and look for the model with the smallest MAPE? Because a smaller MAPE is not necessarily better. We must be careful with the holdout ground: we need grounds for believing the future is a copy of the holdout, and there is still a threat of over-fitting. The consideration used is that we should not choose a model whose error is too small or too big; in other words, not too fit, not too loose. This is the ultimate problem in forecasting. To overcome this difficulty we use the control chart concept from statistical quality control. A control chart can be used to screen out models with extreme values of forecast error. Following that mechanism we obtain the results shown in Figure 2: from the nine MAPE values above, a control chart is created to get rid of the extreme values of MAPE.
The control chart is formed by the upper control limit (UCL), the center line (CL) and the lower control limit (LCL). The control limits are created using the following formula:
$$UCL = \bar{X}_{MAPE} + \sigma_{MAPE} = 13.87 + 6.86 = 20.74$$
The control chart shows four models whose MAPE values lie outside the control limits, namely: the Naive model, Naive Rate of Change, Simple Linear Regression, and the Double Exponential Smoothing model with Holt's two-parameter method. These four models are not used in the next stage, while the other models (Double Moving Averages; Single Exponential Smoothing; Double Exponential Smoothing: Brown One-Parameter Linear Method; Triple Exponential Smoothing: Brown One-Parameter Quadratic Method; and the Chow Adaptive Control Method) are used in the next step.
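The control-chart screening mechanism described above can be sketched as follows. The MAPE values are invented, and the limits are taken as the mean plus or minus one standard deviation, one possible reading of the paper's limits (UCL = 13.87 + 6.86 = 20.74):

```python
import statistics

# Invented MAPE values for five candidate models.
mapes = {"naive": 25.0, "regression": 5.0, "dma": 12.0, "ses": 14.0, "brown": 13.0}

center = statistics.mean(mapes.values())      # center line (CL)
spread = statistics.stdev(mapes.values())     # dispersion of the MAPEs
ucl, lcl = center + spread, center - spread   # control limits

# Screen out models whose MAPE falls outside the limits (too loose OR too fit).
kept = {m for m, v in mapes.items() if lcl <= v <= ucl}
print(sorted(kept))
```

Note that the screen discards both the worst model and the suspiciously good one, which is exactly the "not too fit, not too loose" consideration stated above.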
4.4. Forecasting Models with Good Accuracy and Precision (Hidayat Yuyun, 2013)
The models that pass selection phase 1 are then tested using the holdout data set (the darkened data) twice. Precision assessment is then performed on the 5 models that are within the control limits. As already stated, forecast precision is associated with the bandwidth of the forecast results: a wide forecasting interval indicates low precision, and the narrower the forecasting interval, the better the precision, so we select the best model using the smallest-bandwidth criterion. Precision calculations using the concept of interval forecasting are presented in Table 3.
Having obtained the bandwidth value of each model, the models are then selected based on that value. For this purpose, control charts are again used to dispose of forecasting models with extreme bandwidth values. The control chart follows.
Based on the control chart above, there are two models whose bandwidth is out of control, namely the Single Exponential Smoothing model and Triple Exponential Smoothing: Brown One-Parameter Quadratic Method; these models are not used in the later stages. Three models remain that can be used to predict the rice supply to the city of Bandung. The models that qualify are Double Moving Averages; Double Exponential Smoothing: Brown One-Parameter Linear Method; and the Chow Adaptive Control Method. Chow Adaptive Control is the best model because it has the smallest bandwidth.
Using the Chow Adaptive Control Method, the forecast results are presented below in the form of intervals.
4.6. Conclusions
What is a crisis anyway? There is no standard definition. In this paper a crisis is defined as the situation in which the police cannot control a riot caused totally or partially by rice scarcity; riots caused by factors other than rice scarcity are out of our concern.
The weakness of the existing forecasting frameworks is that we may never see a price hike and yet still see a crisis; we may have no natural disaster and yet still have a crisis caused by mismanagement.
This paper outlines a different approach whereby good forecasts are found by screening out models using control charts. The advantage is that it avoids extreme error values on the test data, which counters the over-fitting problem. The paper also advises making forecasts in interval format to obtain the precision of the forecasting models.
We found that the Chow Adaptive Control Method shows the best results for both accuracy and precision.
Acknowledgements
We would like to thank all the people who prepared and revised previous versions of this document.
References
Abraham, Bovas and Ledolter, Johannes. (1983). Statistical Methods for Forecasting. John Wiley & Sons, Inc., Canada.
Ateneo Economics Association (AEA). (2008). Analyzing the Rice Crisis in the Philippines, 31 May 2008.
Berthelsen, J. (2011). Anatomy of a Rice Crisis. Global Asia, Vol. 3, No. 2: 6.
Cuesta, J. (2012). Global Price Trends - Toward a New Crisis? April 2012.
David Dawe & Tom Slayton. (2010). The Rice Crisis - Markets, Policies and Food Security: The World Rice Market Crisis of 2007-2008. FAO: 368.
Dawe, D. (2010). The Rice Crisis - Markets, Policies and Food Security: Can the Next Rice Crisis be Prevented? The Food and Agriculture Organization of the United Nations and Earthscan: 393.
Hidayat, Yuyun. (2011). Data Base Produksi Pangan Kota Bandung 2011.
Hidayat, Yuyun. (2012). Data Base Produksi Pangan Kota Bandung 2012.
Hidayat, Yuyun. (2013). Rice Demand Forecasting with sMAPE Error Measures, an Integrated Part of the Framework for Forecasting Rice Crisis Time in Bandung, Indonesia. Proceedings of The International Conference on Applied Statistics, 2013. Department of Statistics, Universitas Padjadjaran, Indonesia.
http://oc.its.ac.id, Sawah Digusur Petani Menganggur.
http://international.cgdev.org, Asian Rice Crisis Puts 10 Million or More at Risk: Q&A with Peter Timmer, April 21, 2008.
Hanke, John E. and Wichern, Dean W. (2005). Business Forecasting, Eighth Edition. Pearson Education, Inc., New Jersey.
IRRI - International Rice Research Institute. (2008). Responding to the Rice Crisis. IRRI: 20.
Martumpal, Chandra P.S. and Hidayat, Yuyun. (2013). Menentukan Model Peramalan Terbaik Untuk Meramal Suplai Beras Kota Bandung.
Makridakis, Spyros, Wheelwright, Steven C., and McGee, Victor E. (1999). Metode dan Aplikasi Peramalan, Edisi Kedua. Jakarta: Binarupa Aksara.
Maximo Torero. (2012). International Food Policy Research Institute (IFPRI), Food Security Portal: Excessive Food Price Variability Early Warning System, 2012.
Parhusip, U. (2012). Supply and Demand Analysis of Rice in Indonesia (1950-1972). Department of Agricultural Economics, Michigan State University.
Rasmus Rasmussen. (2004). On Time Series Data and Optimal Parameters. Omega 32 (2004): 111-120.
Rob J. Hyndman. (2006). Another Look at Forecast-Accuracy Metrics for Intermittent Demand. FORESIGHT, Issue 4, June 2006.
Saifullah, A. (2008). The Rice Crisis - Markets, Policies and Food Security: Indonesia's Rice Policy and Price Stabilization Programme: Managing Domestic Prices during the 2008 Crisis. The Food and Agriculture Organization of the United Nations and Earthscan.
Won W. Koo, M.H.K., and Gordon W. Erlandson. (1985). Analysis of Demand and Supply of Rice in Indonesia. Department of Agricultural Economics, North Dakota Agricultural Experiment Station, North Dakota State University: 24.
Abstract: The management of assets and liabilities is very important for any bank or other financial institution. Accordingly, banks and other financial institutions should have a system that can formulate a function connecting entrepreneurs or owners of capital with the real business sector. This paper studies and formulates a model that deals with the problem of global optimization of a portfolio under asset-liability models. The formulation covers the modeling of the distribution of asset returns, the liability equation, the Value-at-Risk risk measure, and the local and global optimization equations. Furthermore, the local and global optimum solutions are found using a genetic algorithm. The expected result of this research is a model that can be used effectively in the management of assets and liabilities.
1. Introduction
In general, financial institutions such as banks, insurance companies, mutual fund companies, pension fund management companies, pawnshops, and others are essentially intermediary institutions, connecting, directly or indirectly, depositors and owners of capital (shareholders) with employers or the real sector. A financial institution receives deposits and/or capital from shareholders and then distributes them in the form of loans, investments, and other instruments that can turn a profit. The profit gained is then partially distributed to depositors and/or shareholders in the form of interest and/or dividends. Depositors and shareholders are willing to give up their money because they believe that the financial institution is able to choose, professionally, investment alternatives that can generate quite attractive profits.
The investment selection process itself should be done carefully, because an error in the selection of investments can result in the financial institution being unable to meet its obligations to pay interest and dividends to depositors and/or shareholders. A common investment chosen by an asset-liability management committee is shares in the capital market. In investing, the asset-liability management committee will usually deal with investment risk. To anticipate movements of investment risk, the risk can be measured with a quantile approach, better known as Value-at-Risk (VaR). Investment risk basically cannot be eliminated, but it can be minimized. A strategy often used to minimize investment risk is forming portfolios. The essence of forming a portfolio is allocating funds across a variety of investment alternatives, i.e. investment diversification, so that the overall investment risk can be minimized.
Investment diversification will effectively reduce risk if the investor is able to form efficient portfolios. A portfolio is categorized as efficient when it lies on the efficient surface. Efficient portfolio selection is influenced by the risk preference of each investor, so along the efficient surface there will be many locally optimal (individual) portfolios; but among the locally optimal portfolios there will be one that is globally optimal. Therefore, with risk measured using Value-at-Risk (VaR), this paper formulates a portfolio that maximizes return and minimizes risk, both locally and globally, under asset-liability characteristics. This portfolio is better
known as Mean-VaR portfolio optimization under asset-liability models. The portfolio model is solved using a genetic algorithm. As a numerical illustration, the model is used to analyze some stocks traded on the capital market in Indonesia, with the goal of obtaining the proportion of funds allocated to each analyzed stock.
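A genetic algorithm searching for portfolio weights, in the spirit described above, can be sketched minimally as follows. The mean returns, covariances, risk penalty and GA settings are all illustrative assumptions, not the authors' actual Mean-VaR formulation:

```python
import random

random.seed(0)

MU = [0.05, 0.08, 0.12]                      # hypothetical mean returns
COV = [[0.010, 0.002, 0.001],
       [0.002, 0.020, 0.003],
       [0.001, 0.003, 0.040]]                # hypothetical covariance matrix

def fitness(w):
    """Mean return minus a risk penalty; stands in for a Mean-VaR objective."""
    mean = sum(wi * mi for wi, mi in zip(w, MU))
    var = sum(w[i] * COV[i][j] * w[j] for i in range(3) for j in range(3))
    return mean - 2.0 * var

def normalize(w):
    """Enforce the budget constraint sum(w) == 1."""
    s = sum(w)
    return [wi / s for wi in w]

def ga(pop_size=40, generations=60):
    pop = [normalize([random.random() for _ in range(3)]) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # arithmetic crossover
            i = random.randrange(3)
            child[i] = max(1e-9, child[i] + random.gauss(0, 0.05))  # mutation
            children.append(normalize(child))
        pop = parents + children
    return max(pop, key=fitness)

w = ga()
print([round(wi, 3) for wi in w])   # candidate weight vector, sums to 1
```

Real applications would replace the penalized objective with the VaR-constrained one and the toy moments with estimates from return data, but the selection-crossover-mutation loop is the core of the method.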
2. Methodology
This section discusses several mathematical models that are very useful for formulating the Mean-VaR portfolio model under asset-liability characteristics and finding its solution. The models discussed include: the calculation of stock returns, the stock return distribution model, asset-liability models, Mean-VaR portfolio optimization models, and genetic algorithms.
Suppose Pit stock price i ( i 1,..., N with N the number of stocks that analyzed) at time t ( t 1,..., T
with T the number of stock price data that observed). Suppose also rit is the stock return i at time t
can be calculated using the following equation:
rit ln Pit ln Pit 1 (1)
Stock return data will then be analyzed and the distribution model estimated the expected values and
the variance of each as follows.
Suppose $R_{it}$ is the random variable of the return of stock $i$ ($i = 1,\ldots,N$, with $N$ the number of stocks analyzed) at time $t$ ($t = 1,\ldots,T$, with $T$ the number of observed stock price data), which has a certain continuous distribution. The function $f(r_{it})$ is the probability density of the continuous random variable $R_{it}$, defined over the set of all real numbers $\mathbb{R}$, when: 1) $f(r_{it}) \ge 0$ for all $r_{it} \in \mathbb{R}$; 2) $\int_{-\infty}^{\infty} f(r_{it})\,dr_{it} = 1$; and 3) $P(a \le R_{it} \le b) = \int_{a}^{b} f(r_{it})\,dr_{it}$.
The expectation or mean value of $R_{it}$ is $\mu_{it} = E[R_{it}] = \int_{-\infty}^{\infty} r_{it}\, f(r_{it})\,dr_{it}$, and the variance of $R_{it}$ is $\sigma_{it}^{2} = \mathrm{Var}[R_{it}] = E[(R_{it} - \mu_{it})^{2}] = \int_{-\infty}^{\infty} (r_{it} - \mu_{it})^{2} f(r_{it})\,dr_{it}$. The expectation and the variance have several properties that are important in this study: i) $E[pR_{it} + qR_{jt}] = pE[R_{it}] + qE[R_{jt}]$; and ii) when $R_{it}$ and $R_{jt}$ are random variables with joint probability distribution $f(r_{it}, r_{jt})$, then $\mathrm{Var}[pR_{it} + qR_{jt}] = p^{2}\mathrm{Var}[R_{it}] + q^{2}\mathrm{Var}[R_{jt}] + 2pq\,\mathrm{Cov}(R_{it}, R_{jt})$, where $\mathrm{Cov}(R_{it}, R_{jt}) = E[(R_{it} - \mu_{it})(R_{jt} - \mu_{jt})]$.
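Property (ii) can be checked numerically on two illustrative return samples (the numbers below are made up, not the paper's data):

```python
# Check: Var[p*Ri + q*Rj] = p^2 Var[Ri] + q^2 Var[Rj] + 2pq Cov(Ri, Rj)
def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

ri = [0.01, -0.02, 0.03, 0.00]   # illustrative returns of stock i
rj = [0.02, 0.01, -0.01, 0.00]   # illustrative returns of stock j
p, q = 0.4, 0.6

lhs = var([p * a + q * b for a, b in zip(ri, rj)])
rhs = p ** 2 * var(ri) + q ** 2 * var(rj) + 2 * p * q * cov(ri, rj)
# lhs and rhs agree up to floating-point error.
```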
Suppose $L_0$ is the initial liability and $L_1$ the liability value after one period. The growth of liabilities is given by the random variable $R_L = (L_1 - L_0)/L_0$. Generally, $R_L$ depends on changes in the structure of interest rates, inflation, and real wages. Suppose also that $A_0$ is the initial value of assets and $A_1$ the value of assets after one period. Assume that all investment opportunities $i = 1,\ldots,N$ are risky. Suppose the investment strategy of the funds (deposits) is conducted by forming a portfolio $\mathbf{x}$. Then the value of assets after one period is given by $A_1 = A_0[1 + R_A(\mathbf{x})]$, where
$R_A(\mathbf{x}) = \sum_{i=1}^{N} x_i R_i$, with $x_i$ the weight of asset $i$ relative to liabilities (Gerstner et al. [4]; Keel & Muller).
It is assumed that the vector of asset means is $\boldsymbol{\mu}^T = (\mu_1,\ldots,\mu_N)$, with $\mu_i = E(R_i)$, $i = 1,\ldots,N$. The covariance matrix of the assets is $\boldsymbol{\Sigma} = (\sigma_{ij})$, $i, j = 1,\ldots,N$, with $\sigma_{ij} = \mathrm{Cov}(R_i, R_j)$. The asset-liability covariance vector is $\boldsymbol{\gamma}^T = (\gamma_1,\ldots,\gamma_N)$, with $\gamma_i$ the covariance between stock $i$ and the liability (economic index) return, $i = 1,\ldots,N$. It is assumed that the value of $f_0 = 1$. The portfolio weight vector is $\mathbf{x}^T = (x_1,\ldots,x_N)$ with $\sum_{i=1}^{N} x_i = 1$, or $\mathbf{x}^T\mathbf{e} = 1$, where $\mathbf{e}^T = (1,\ldots,1)$ is the unit vector. Based on these assumptions, the mean of the surplus can be expressed as
$\mu_S = \boldsymbol{\mu}^T\mathbf{x} - \mu_L$ , (6)
and its variance as
$\sigma_S^2 = \mathbf{x}^T\boldsymbol{\Sigma}\mathbf{x} + \sigma_L^2 - 2\boldsymbol{\gamma}^T\mathbf{x}$ . (7)
The Value-at-Risk ($VaR$) of the portfolio surplus can be formulated as
$VaR_S = -(\mu_S - z_\alpha \sigma_S) = z_\alpha(\mathbf{x}^T\boldsymbol{\Sigma}\mathbf{x} + \sigma_L^2 - 2\boldsymbol{\gamma}^T\mathbf{x})^{1/2} - (\boldsymbol{\mu}^T\mathbf{x} - \mu_L)$ , (8)
where $z_\alpha$ is the percentile of the standard normal distribution for significance level $\alpha$ (Khindanova & Rachev, [9]; Tsay, [13]).
A portfolio surplus $S^*$ is said to be (Mean-VaR) efficient if there is no portfolio surplus $S$ with $\mu_S \ge \mu_{S^*}$ and $VaR_S \le VaR_{S^*}$ (Panjer et al. [10]; Sukono et al. [11]). Selection of an efficient portfolio surplus can be performed using the objective function $\max\{2\tau\mu_S - VaR_S\}$ subject to $\sum_{i=1}^{N} x_i = 1$, that is:
Maximize $2\tau(\boldsymbol{\mu}^T\mathbf{x} - \mu_L) - z_\alpha(\mathbf{x}^T\boldsymbol{\Sigma}\mathbf{x} + \sigma_L^2 - 2\boldsymbol{\gamma}^T\mathbf{x})^{1/2} + \boldsymbol{\mu}^T\mathbf{x} - \mu_L$
Subject to $\mathbf{e}^T\mathbf{x} = 1$ , (9)
where $\tau$ is the risk tolerance factor (or risk aversion factor).
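As a sketch of how objective (9) can be evaluated for a candidate weight vector, the function below computes $2\tau\mu_S - VaR_S$. All numeric inputs (a two-asset mean vector, covariance matrix, asset-liability covariances, liability mean and variance, percentile, and risk tolerance) are made-up illustrations, not the paper's estimates.

```python
import math

def objective(x, mu, Sigma, gamma, mu_L, var_L, z_alpha, tau):
    """Objective of (9): 2*tau*mu_S - VaR_S, with
    mu_S = mu'x - mu_L and sigma_S^2 = x'Sigma x + var_L - 2*gamma'x."""
    n = len(x)
    mu_x = sum(m * w for m, w in zip(mu, x))
    quad = sum(x[i] * Sigma[i][j] * x[j] for i in range(n) for j in range(n))
    sigma_S = math.sqrt(quad + var_L - 2 * sum(g * w for g, w in zip(gamma, x)))
    mu_S = mu_x - mu_L
    return 2 * tau * mu_S - (z_alpha * sigma_S - mu_S)

# Illustrative two-asset inputs (all numbers made up):
x = [0.5, 0.5]
mu = [0.004, 0.002]
Sigma = [[0.0004, 0.0001], [0.0001, 0.0009]]
gamma = [0.0001, 0.0002]
val = objective(x, mu, Sigma, gamma, mu_L=0.001, var_L=0.0002,
                z_alpha=1.645, tau=12.8)
```

A genetic algorithm then searches over weight vectors $\mathbf{x}$ with $\mathbf{e}^T\mathbf{x} = 1$ for the one maximizing this value.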
in a chromosome) × (total population). To select the position of the mutated genes, a random number between 1 and the integer total_gen is generated. If the generated random number is less than the mutation rate variable ($\rho_m$), that position is selected as a sub-chromosome for mutation. Completing the mutation process means one iteration of the genetic algorithm, also called a generation, has been completed. This process is repeated until a predetermined number of generations is reached, and ultimately the chromosome with the optimum objective function value is obtained.
Pseudocode for the genetic algorithm in general is given in Figure-1 below.
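The generation loop just described (selection, crossover, then mutation, repeated for a fixed number of generations) can be sketched in Python as follows. This is a generic illustrative sketch, not the authors' implementation: the binary encoding, tournament selection, one-point crossover, and the toy one-max fitness function are all assumptions made for the example.

```python
import random

def genetic_algorithm(fitness, n_genes=8, pop_size=20, generations=50,
                      crossover_rate=0.8, mutation_rate=0.05, seed=42):
    """Generic binary-encoded GA: tournament selection,
    one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: keep the fitter of two random chromosomes.
            a, b = rng.choice(pop), rng.choice(pop)
            return list(a if fitness(a) >= fitness(b) else b)
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            if rng.random() < crossover_rate:        # one-point crossover
                cut = rng.randrange(1, n_genes)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(n_genes):             # bit-flip mutation
                    if rng.random() < mutation_rate:
                        child[i] = 1 - child[i]
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

# Illustrative fitness: "one-max" (count of 1-bits); the optimum is all ones.
best = genetic_algorithm(lambda c: sum(c))
```

For the portfolio problem, the fitness function would instead decode a chromosome into a weight vector $\mathbf{x}$ and evaluate objective (9).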
3. Illustrations
The asset data analyzed in this paper are accessible through the website http://www.finance.go.id//. The data include six (6) stocks and the rupiah exchange rate against the US dollar, for the period January 2, 2010 to June 4, 2013. The stock data cover the stocks INDF, DEWA, AALI, LSIP, ASII, and TURB, given the symbols $S_1$, $S_2$, $S_3$, $S_4$, $S_5$, and $S_6$ respectively, while the rupiah exchange rate against the US dollar is given the symbol $D_L$. The stock prices include the opening price, highest price, lowest price, and closing price, but only the closing prices are analyzed. In general, the analyzed stock prices and the rupiah exchange rate against the US dollar fluctuate up and down; in fact, they sometimes rise sharply and then fall again, and sometimes drop sharply and then rise again. For both the stock price data and the exchange rate, the return values are determined using equation (1). Subsequently, the distributions of both the stock returns and the exchange rate returns are estimated as discussed in Section 3.2 below.
In this section, the distributions of the return data of stocks $S_1$, $S_2$, $S_3$, $S_4$, $S_5$, and $S_6$ and of the rupiah exchange rate against the US dollar $D_L$ are estimated. Estimation is performed using the Maximum Likelihood Estimator method, and the goodness-of-fit of the distribution models is tested using the Anderson-Darling (AD) statistic. Based on the estimated distribution form, the mean estimator $\hat{\mu}_i$ and variance estimator $\hat{\sigma}_i^2$ ($i = 1,\ldots,6$ and $L$) can be estimated. The distribution estimation, fit testing, and estimation of the parameter values $\hat{\mu}_i$ and $\hat{\sigma}_i^2$ ($i = 1,\ldots,6$ and $L$) were done with the
assistance of the Minitab 14 software. The results of the estimation process and fit testing are given in Table-1 below.
The estimated mean $\hat{\mu}_i$ and variance $\hat{\sigma}_i^2$ values are then used to form the mean vector and covariance matrix for the portfolio optimization model, as discussed in Section 3.3 below.
In this section the Mean-VaR portfolio optimization is conducted using the genetic algorithm. The portfolio optimization model to be solved refers to equation (9). First, using the mean column $\hat{\mu}_i$ of Table-1, the stock mean vector is formed:
$\boldsymbol{\mu}^T = (0.004501\ 0.002873\ 0.001580\ 0.002693\ 0.009728\ 0.001510)$.
Secondly, also using the variance column $\hat{\sigma}_i^2$ of Table-1, together with the values of the covariances between stocks, the covariance matrix $\boldsymbol{\Sigma}$ is formed as follows:
Third, because six stocks were analyzed, the unit vector is defined as $\mathbf{e}^T = (1\ 1\ 1\ 1\ 1\ 1)$. Fourth, based on the calculated covariances between the stock returns and the return of the rupiah exchange rate against the US dollar, the asset-liability covariance vector is formed as
$\boldsymbol{\gamma}^T = (0.000542\ 0.000371\ 0.000447\ 0.000626\ 0.000812\ 0.000724)$.
Furthermore, substituting the estimated values of $\hat{\mu}_L$ and $\hat{\sigma}_L$, together with the vectors $\boldsymbol{\mu}^T$, $\mathbf{e}^T$, $\boldsymbol{\gamma}^T$ and the matrix $\boldsymbol{\Sigma}$, into equation (9), the composition of the weight vector $\mathbf{x}^T = (x_1,\ldots,x_6)$ that maximizes objective function (9) can be calculated.
The optimization solution is determined using the genetic algorithm, and the results are given in Table-2 below.
Table-2 (continuation). Columns: risk tolerance τ; portfolio weights x1 to x6; weight sum; portfolio mean; portfolio VaR; (unlabeled column); mean/VaR ratio.
τ    x1     x2     x3     x4     x5     x6     sum    mean   VaR    (?)    ratio
14.8 0.0049 0.0361 0.1478 0.2053 0.0115 0.5944 1.0000 0.0019 0.0160 0.2601 0.1202
15.0 0.0163 0.1952 0.2865 0.1512 0.0031 0.3477 1.0000 0.0020 0.0116 0.2564 0.1760
15.2 0.0274 0.0174 0.4892 0.0142 0.0296 0.4222 1.0000 0.0019 0.0136 0.2649 0.1408
15.4 0.0165 0.0347 0.1858 0.0008 0.0454 0.7168 1.0000 0.0020 0.0190 0.2693 0.1050
15.6 0.0009 0.1823 0.0268 0.0241 0.0420 0.7239 1.0000 0.0021 0.0196 0.2683 0.1090
15.8 0.0174 0.0665 0.4197 0.0383 0.0051 0.4530 1.0000 0.0018 0.0138 0.2788 0.1280
16.0 0.0615 0.0324 0.1890 0.0549 0.0073 0.6550 1.0000 0.0019 0.0175 0.2811 0.1073
16.2 0.0168 0.0383 0.4727 0.0231 0.0173 0.4317 1.0000 0.0018 0.0137 0.2835 0.1323
16.4 0.0691 0.0605 0.3054 0.0137 0.0029 0.5485 1.0000 0.0019 0.0153 0.2862 0.1218
16.6 0.0046 0.1248 0.3112 0.1576 0.0125 0.3894 1.0000 0.0020 0.0119 0.2822 0.1688
16.8 0.0240 0.0677 0.4645 0.0163 0.0264 0.4012 1.0000 0.0019 0.0130 0.2880 0.1500
17.0 0.0279 0.0224 0.5071 0.0915 0.0173 0.3338 1.0000 0.0019 0.0120 0.2916 0.1587
17.2 0.0672 0.0360 0.3427 0.0331 0.0019 0.5190 1.0000 0.0018 0.0147 0.2989 0.1250
17.4 0.0475 0.0199 0.4232 0.0303 0.0381 0.4410 1.0000 0.0021 0.0132 0.2933 0.1558
17.6 0.0264 0.0495 0.4280 0.0132 0.0205 0.4624 1.0000 0.0019 0.0140 0.3035 0.1340
17.8 0.0134 0.2300 0.2525 0.0759 0.0038 0.4244 1.0000 0.0020 0.0135 0.3016 0.1487
18.0 0.0180 0.0311 0.3048 0.1272 0.0352 0.4838 1.0000 0.0021 0.0134 0.3022 0.1542
18.2 0.0430 0.2299 0.2554 0.0048 0.0056 0.4614 1.0000 0.0020 0.0143 0.3075 0.1416
18.4 0.0072 0.0662 0.3502 0.1341 0.0208 0.4214 1.0000 0.0020 0.0124 0.3110 0.1588
18.6 0.0418 0.0575 0.2842 0.1474 0.0012 0.4678 1.0000 0.0019 0.0132 0.3167 0.1455
18.8 0.0220 0.1401 0.3664 0.0746 0.0108 0.3862 1.0000 0.0020 0.0122 0.3172 0.1608
19.0 0.0438 0.1669 0.0734 0.0034 0.0118 0.7007 1.0000 0.0020 0.0190 0.3249 0.1037
19.2 0.0061 0.1532 0.4306 0.0386 0.0268 0.3446 1.0000 0.0020 0.0121 0.3207 0.1687
19.4 0.0161 0.0198 0.4002 0.3096 0.0018 0.2525 1.0000 0.0020 0.0103 0.3242 0.1933
19.6 0.0500 0.0314 0.4100 0.0823 0.0080 0.4184 1.0000 0.0019 0.0128 0.3328 0.1481
19.8 0.0244 0.0842 0.1338 0.1775 0.0161 0.5640 1.0000 0.0020 0.0151 0.3312 0.1357
20.0 0.0624 0.0377 0.3820 0.1233 0.0004 0.3942 1.0000 0.0019 0.0121 0.3373 0.1593
Based on the optimization results shown in Table-2, it appears that different values of risk tolerance produce different compositions of investment allocation weights. Because the weight compositions differ, the resulting portfolio return and Value-at-Risk also differ. In the portfolio optimization process above, the global optimum portfolio is achieved at risk tolerance τ = 12.8, with investment fund allocations in $S_1$, $S_2$, $S_3$, $S_4$, $S_5$, and $S_6$ of 0.0123, 0.0395, 0.3524, 0.3118, 0.0044, and 0.2795 respectively. The global optimum portfolio produces an expected portfolio return of 0.0020, with a Value-at-Risk of 0.0103. At the global optimum, the ratio between portfolio return and Value-at-Risk, 0.1967, is the largest among all the ratio values. This can be used as a reference for investors in making investment decisions on the analyzed stocks.
4. Conclusion
In this paper we used the genetic algorithm to find the global optimum of the Mean-VaR portfolio under asset-liability models. At the global optimum position, the expected portfolio return obtained is 0.0020, with a Value-at-Risk of 0.0103. The global optimum is obtained when the portfolio weights $x_1$, $x_2$, $x_3$, $x_4$, $x_5$, and $x_6$ are 0.0123, 0.0395, 0.3524, 0.3118, 0.0044, and 0.2795 respectively, with risk tolerance 12.8.
References
Caglayan, M.O. & Pintér, J.D. (2010). Development and Calibration of Currency Market Strategies by Global Optimization. Working Paper. Özyeğin University, Istanbul, Turkey.
Dowd, K. (2002). An Introduction to Market Risk Measurement. John Wiley & Sons, Inc., New Delhi, India.
Elton, E.J. & Gruber, M.J. (1991). Modern Portfolio Theory and Investment Analysis, Fourth Edition,
John Wiley & Sons, Inc., New York.
Froot, K.A., Venter, G.G. & Major, J.A. (2007). Capital and Value of Risk Transfer. Working Paper. Harvard Business School, Boston, MA 02163. http://www.people.hbs.edu/kfroot/. (Downloaded in December 2012).
Gerstner, T., Griebel, M. & Holtz, M. (2007). A General Asset-Liability Management Model for the Efficient Simulation of Portfolios of Life Insurance Policies. Working Paper. Institute for
Abstract: In this paper, methods for controlling a robotic arm using the face have been investigated and applied successfully. Based on face detection and tracking using the KLT algorithm, the methods have been applied using 5 servo motors with three degrees of freedom on the Arduino microcontroller platform. The computer software consists of two parts: the first is a program for moving the robotic arm, which is stored in the microcontroller, and the second is a program for detecting the facial position, which is stored in computer storage. The face detection application operates on streaming images from a webcam. The result of this detection is the face position, which becomes the input for the selected servo number. The servo number and direction of movement are then transferred to the microcontroller via serial communication to trigger the movement of the robotic arm. One real application of this system is moving an object from one place to another. This method has been implemented on a robotic arm using a camera and a microcontroller.
1. Introduction
Similar to informatics, robotics has developed tremendously in various aspects of human life. The combination of the two fields of study, informatics and robotics, is very useful for improving human beings' quality of life, especially for those with limited abilities (the disabled). A person whose functional ability is limited to part of the head, for instance, may still be able to control devices using head movement, particularly movement of the face.
With this rapid development, robots have, noticed or not, become present in human life in a great variety of forms. There are even simple robot designs which can carry out simple or repeated activities. Generally, the public defines a robot as 'a living creature' in the shape of a human or animal, made from metal and powered by electricity. In a broader sense, a robot is a device which, within limited capacity, is able to work automatically according to instructions given by its designer. In this sense, there is a close relationship between robots and automation, so it can be understood that almost any modern activity depends more and more on robots and automation.
In the case of limited movement of body parts, in which only part of the head can be moved, a robotic arm can be a solution. The further problem is: how can this robotic arm be controlled by the head or the face?
2. LITERATURE REVIEW
Face detection can be carried out using the vision toolbox developed for MATLAB (Viola & Jones, 2001; Lucas & Kanade, 1981; Tomasi & Kanade, 1991; Shi & Tomasi, 1994; Kalal, Mikolajczyk & Matas, 2010). The function used detects a face, and its result is stored in the faceDetector variable:
faceDetector = vision.CascadeObjectDetector();
This function detects a face location in a frame. The face bounding box, which will be stored in "bbox", can then be obtained with a further toolbox function.
As a result, the detected face position, described as an array polygon, takes the form shown in Figure 2.1.
The robotic arm is moved by servo motors, which can be controlled using a microcontroller.
3. RESEARCH METHODS
This research employed literature study and experimentation. It was conducted by creating a robotic arm which can be moved by the face, to aid people with limited function of body parts. The scheme can be seen in Figure 3.1.
(Figure 3.1 comprises two stages: developing the face detection software and developing the robotic arm hardware.)
The central point position $(x_c, y_c)$ determines the position of the face, where $x_c$ is the x position of the center and $y_c$ is the y position of the center.
The criteria for the model and servo positions were determined as follows: there are two models, model = 1 to select a particular servo motor, and model = 2 to move the selected servo motor to the left or to the right. Its equation formula is:
(Flowchart: movement of the face's central point determines the selected servo motor and the servo motor position status.)
First of all, face detection is carried out through the webcam and processed in the computer, resulting in a face box. From the face box, the face's central point $(x_c, y_c)$ is computed, to determine the model selection, "1" or "2", and to select the servo motor number (1 to 5). Secondly, after obtaining the model and servo number, a message is sent to the robotic arm through the serial cable. Thirdly, the Arduino microcontroller responds and moves the robotic arm according to the received position and model.
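The selection logic just described can be sketched in plain code. Since the paper's actual formula falls on a page not reproduced here, the frame size, thresholds, and the five-band split below are hypothetical assumptions used only for illustration.

```python
# Hypothetical sketch: the face's central point (xc, yc) chooses a model
# and, in model 1, a servo number. All thresholds are assumptions, not
# the paper's actual formula.
FRAME_W, FRAME_H = 640, 480   # assumed webcam frame size

def select_model(yc):
    """Model 1 (select a servo) in the upper half of the frame, model 2 (turn it) below."""
    return 1 if yc < FRAME_H // 2 else 2

def select_servo(xc):
    """Model 1: split the frame width into 5 equal bands, one per servo (1..5)."""
    band = FRAME_W // 5
    return min(xc // band + 1, 5)

def turn_direction(xc):
    """Model 2: left of center turns counter-clockwise, right of center clockwise."""
    return "ccw" if xc < FRAME_W // 2 else "cw"

# A (model, payload) pair like this would then be sent to the Arduino over serial.
command = (select_model(100), select_servo(100))
```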
4. RESULT
The device developed in this research detects the face's central position to determine a particular servo and model position. The captured image result can be seen in Figure 4.1.
(a) (b)
Figure 4.1 (a) face detection in model “1” (b) face detection in model “2”
Figure 4.1(a) shows the movement of the face's central point, indicated in red, in model "1". This is the process of selecting the servo motor number. Figure 4.1(b) shows the model in use (in the picture, model = 2). Model "2" is used to turn the selected servo motor (for example, servo motor "4"). The servo is turned clockwise or counter-clockwise.
5. CONCLUSION
This research developed software and hardware to aid people with limited body function, for whom only part of the head can be moved. The device developed is able to move a robotic arm through the face. The robotic arm's movement is controlled through the face's movement, to select a servo motor and move it to the left or right as well as up or down. This research used 5 servo motors controlled through an Arduino microcontroller. The robotic arm can move an object from one place to another within its range.
References
Lucas, B.D. & Kanade, T. (1981). An Iterative Image Registration Technique with an Application to Stereo Vision. International Joint Conference on Artificial Intelligence.
Tomasi, C. & Kanade, T. (1991). Detection and Tracking of Point Features. Carnegie Mellon University Technical Report CMU-CS-91-132.
Shi, J. & Tomasi, C. (1994). Good Features to Track. IEEE Conference on Computer Vision and Pattern Recognition.
Viola, P.A. & Jones, M.J. (2001). Rapid Object Detection using a Boosted Cascade of Simple Features. IEEE CVPR.
Kalal, Z., Mikolajczyk, K. & Matas, J. (2010). Forward-Backward Error: Automatic Detection of Tracking Failures. International Conference on Pattern Recognition.
http://www.lynxmotion.com
Abstract: With so many e-commerce-based websites being developed, online transaction is one of the options for buying goods or services. In this condition, the development of a payment gateway infrastructure is needed to make this process easier and safer. This process exchanges buyer and seller information, so securing it is really needed. This research uses the Twofish algorithm, one of the cryptographic algorithms, implemented in a payment gateway as a tool to secure seller and buyer information.
1. Introduction
In the last few years, the security of the payment system in purchase processing for e-commerce has become a main concern. At present, many payment gateway corporations have appeared to provide services to e-commerce-based websites. To secure the purchasing data from various attacks and to preserve data integrity, the data must be encrypted before being transmitted or stored. Cryptography is the science of protecting clear, meaningful information using mathematical algorithms [1]. Cryptographic methods will be used to secure the buyer's data and the seller's data. This process not only encrypts the purchasing data, but also provides authorization between buyer and seller. In this research, three cryptographic methods are used in the security and authentication process: Twofish, RSA, and SHA-1.
2. Payment gateway
Payment gateway is defined as any type of payment method that uses Information and Communications Technology (ICT), including cryptography and telecommunication networks [2]. There are seven participants in the payment gateway process [3]:
1. Cardholder/client: a person who uses a credit card or bank account to purchase a product from a merchant.
2. Issuer: the bank that creates the account for the client.
3. Merchant/seller: a person who provides the product.
4. Acquirer: the bank that creates the merchant account.
5. Payment gateway: a tool operated by the acquirer to process purchasing messages.
6. Brand holder: a company that creates and develops credit cards.
7. Third party: sometimes the acquirer needs a third party to run the payment gateway.
In this research there are just three participants: the cardholder, the merchant, and the payment gateway.
3. Protocol
A communication protocol is needed in order to build secure payment processing. Three kinds of communication protocols are important to implement:
1. Hypertext Transfer Protocol Secure (HTTPS): a communication protocol for secure communication over a computer network, with especially wide deployment on the Internet [4].
2. Secure Electronic Transaction (SET): a standard communication protocol for securing credit card transactions over insecure networks, specifically the Internet [3]. In this research, DES is substituted by Twofish as the symmetric cryptographic method.
3. TLS/SSL: cryptographic protocols designed to provide communication security over the Internet [5].
4. Cryptography
Cryptography is the science of protecting clear, meaningful information using mathematical algorithms [1]. In general, cryptographic methods are divided into two forms: symmetric-key and asymmetric-key (public key cryptosystems).
1. Symmetric-key cryptography refers to encryption methods in which both the sender and receiver share the same key. This research uses the Twofish algorithm as the symmetric-key method [6].
2. A public key cryptosystem is a cryptographic technique that enables users to communicate securely on an insecure public network, and to reliably verify the identity of a user via digital signatures [6]. This research uses the RSA algorithm as the PKC.
5. Twofish
Twofish uses a 16-round Feistel-like structure with additional whitening of the input and output [1].
5.1 Whitening
Whitening is the technique of XORing key material before the first round and after the last round [7].
The function F is a key-dependent permutation on 64-bit values. It takes three arguments: two input words R0 and R1, and the round number r, used to select the appropriate subkeys [7].
The function g forms the heart of Twofish. The input word X is split into four bytes; each byte is run through its own key-dependent S-box [7].
The key schedule has to provide 40 words of expanded key K0, ..., K39, and the 4 key-dependent S-boxes used in the g function [7].
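The whitening step alone can be illustrated as follows. This is only the XOR step with made-up subkey values, not a Twofish implementation: the real cipher derives its subkeys from the key schedule described above.

```python
# Minimal illustration of whitening: XOR the four 32-bit input words with
# four subkeys before round 1 (and, symmetrically, after the last round).
# The subkey values below are made up for illustration.
MASK32 = 0xFFFFFFFF

def whiten(words, subkeys):
    """XOR each 32-bit word with its subkey (used for both pre- and post-whitening)."""
    return [(w ^ k) & MASK32 for w, k in zip(words, subkeys)]

plaintext_words = [0x01234567, 0x89ABCDEF, 0xDEADBEEF, 0x00000000]
subkeys = [0x0F0F0F0F, 0x12345678, 0xFFFFFFFF, 0xCAFEBABE]
whitened = whiten(plaintext_words, subkeys)

# XOR is an involution: whitening twice with the same subkeys restores the input.
restored = whiten(whitened, subkeys)
```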
6. RSA
RSA stands for Ron Rivest, Adi Shamir and Leonard Adleman, RSA is one of method from PKC,
RSA’s algorithm as follow:
1. Compute N as the product of two prime numbers p and q (N=p.q)
2. r=(p-1).(k-1)
3. Choose an integer e such that 1< e <r, e and r are coprime.
4. e.d≡1mod r, e is private key and d is public key
for the encrytion, c≡memod N with c is Chiper text and m is plain text.
for decryption m≡cd mod N,in this research RSA will be use to secure the key that be used in
symmetric key and as signature scheme
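Steps 1 to 4 above can be traced with a toy example. The small primes below are illustrative only; real RSA uses primes hundreds of digits long.

```python
# Toy RSA key generation, encryption, and decryption (illustrative primes).
p, q = 61, 53
N = p * q                 # step 1: N = p*q = 3233
r = (p - 1) * (q - 1)     # step 2: r = (p-1)(q-1) = 3120
e = 17                    # step 3: 1 < e < r, gcd(e, r) = 1 (public key)
d = pow(e, -1, r)         # step 4: e*d == 1 (mod r) (private key)

m = 65                    # plaintext block, must satisfy m < N
c = pow(m, e, N)          # encryption: c = m^e mod N
m2 = pow(c, d, N)         # decryption: m = c^d mod N, recovering the plaintext
```

The three-argument `pow` performs modular exponentiation; `pow(e, -1, r)` (Python 3.8+) computes the modular inverse.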
7. SHA-1
SHA-1 is a one of method to “hash” a message(plain text), the result is called “message digest”, this
method not same with symmetric key and asymmetric key that are reversible,but in hash function is
irreversible so message digest can’t be a message (plain text)again. In this research SHA-1 will use as
Signature scheme.
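A minimal sketch of the hashing step, using Python's standard hashlib; the message content is illustrative.

```python
import hashlib

# Hashing a message yields a fixed-size, irreversible message digest.
message = b"form of purchasing"
digest = hashlib.sha1(message).hexdigest()   # 40 hex characters = 160 bits

# Any change to the message changes the digest, which is what the later
# comparison of message digests in the signature scheme relies on.
tampered = hashlib.sha1(b"form of purchasing!").hexdigest()
```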
In this research the participants in the payment gateway are just the client (buyer), the merchant, and the payment gateway. In practice, the client, merchant, and payment gateway must each have a certificate from a certificate authority (CA), but in this research all participants are considered to have certificates and all processes related to the CA are considered to have been completed.
Figure 2. Scheme 1a
Figure 2 shows the scheme of the cryptographic method from client to merchant. A message (the purchasing form) is hashed with SHA-1; the result is called the message digest. The message digest is signed with the client's private key, yielding the digital signature. The message and digital signature are encrypted with the Twofish algorithm; the result is called the encrypted message. The key used by the Twofish algorithm is encrypted with the merchant's public key to obtain the digital envelope. So the result of this scheme is the encrypted message and the digital envelope.
In Figure 3, the digital envelope is decrypted with the merchant's private key; the result is the key of the Twofish algorithm. The key is used to decrypt the encrypted message, yielding the message and the client's digital signature. The message is hashed with SHA-1 to obtain "message digest a". The client's digital signature is verified with the client's public key to obtain "message digest b". "Message digest a" is compared with "message digest b": if they are the same, the authorization process succeeds; if not, the authorization process fails.
If the authorization process between client and merchant succeeds, the next step is the authorization process between merchant and payment gateway. The process is similar to the process between client and merchant, but in the digital signature process the merchant signs with the merchant's private key. For verification, the payment gateway verifies with the merchant's public key, then verifies with the buyer's public key, and the result is "message digest B". The next step is simply to compare "message digest A" and "message digest B": if the results are the same, the authorization is complete; if not, the authorization fails.
Figure 3. Scheme 1b
9. Conclusion
Cryptographic methods must be implemented to secure information, and to secure the information, more than one cryptographic method should be used.
References
[1] Irfan, L., Tasneem, B. & Pooja, N. (2012). Encryption and Decryption of Data Using Twofish Algorithm.
[2] Raja, J. & Velmurgan, M.S. (2008). E-Payments: Problems and Prospects. Journal of Internet Banking and Commerce, vol. 13, no. 1, April 2008.
[3] Secure Electronic Transaction, Journal (2000), Volume 6.
[4] HTTPS Everywhere FAQ. May 2012.
[5] Rescorla, E. (2001). SSL and TLS: Designing and Building Secure Systems. Addison-Wesley Pub Co. ISBN 0-201-61598-3.
[6] Krishnan, K. (2004). Computer Networks and Computer Security.
[7] Schneier, B. (1998). Twofish: A 128-bit Block Cipher.
Abstract: Systematic transrectal ultrasound-guided biopsy is a promising technique for the detection of prostate cancer, but it fails to detect a variety of tumors, while the magnetic resonance (MR)-guided biopsy technique has been widely used but no optimal technique exists. This paper proposes to develop a multi-modal image-guided biopsy technique for detecting prostate cancer accurately: to develop a prostate multi-modal image registration method, based on information theory and automated statistical shape analysis, to find slices that closely match between axial MR and TRUS; to design and evaluate efficient and accurate segmentation of the prostate in 2D TRUS image sequences, to facilitate multi-modal image fusion between TRUS and MRI and improve malignant tissue sampling for biopsy; and to design and evaluate techniques for determining and correcting movements of the prostate tissue during the biopsy procedure by incorporating biomechanical modelling.
1. Introduction
Prostate cancer is the type of cancer most often diagnosed in men [1]. Incidence increased dramatically after the introduction of the prostate-specific antigen (PSA) test [2],[3]. However, urologists face the dilemma of patients with elevated and/or increasing PSA levels and negative biopsy results. Because the serum PSA level, used for the early diagnosis of prostate cancer, is a very sensitive but not specific test, other tests are needed to diagnose prostate cancer. Transrectal ultrasound (TRUS) was introduced in 1968 as a means for diagnostic imaging of prostate cancer [4]. The sensitivity of this technique for the detection of prostate cancer is low (20-30% [5]) because more than 40% of prostate tumors are isoechoic and only the peripheral zone can be accurately detected [6],[7].
Doppler TRUS and the application of contrast agents increase the prostate cancer detection rate to 74-98% [8]-[12]. More than 1.2 million prostate needle biopsies are run each year in the United States [13]. TRUS-guided systematic biopsy (TRUSBx) is the gold standard for detecting prostate cancer. The systematic approach is characterized by low sensitivity (39-52%) and high specificity (81-82%) [14]. In cases of doubt, an additional biopsy session needs to be done. In some cases, the systematic protocol is extended with additional targeted biopsies of hypoechoic areas detected by TRUS, which increases the detection rate slightly [4].
The role of magnetic resonance imaging (MRI) in the detection of prostate cancer is increasing
but highly debated [15]. Anatomical T2-weighted MRI has been disappointing in detecting and
localizing prostate cancer. Estimate the sensitivity of MRI for the detection of prostate cancer using T2-
weighted sequences and endorectal coil varied from 60% to 96% [16]. Several groups have
convincingly shown that dynamic contrast enhancement and spectroscopy to improve detection and that
the sensitivity of MRI is comparable to and can exceed of transrectal biopsy [16]. Various MRI
techniques, such as proton magnetic resonance (MR) spectroscopy and dynamic contrast-enhanced
MRI, has been applied for more accurate detection, localization, and staging of prostate cancer [17].
A recent study reports an area under the receiver operating characteristic curve of 0.67 to 0.69 for prostate
cancer localization with conventional anatomical 1.5 T MRI [17]. Localization accuracy increased to 0.80 and 0.91
using MR spectroscopy and a contrast agent, respectively [17]. Diffusion-weighted MRI
is increasingly being used, which can lead to increased detection rates [18]-[20]. A prostate
segmentation approach based on multiple mean parametric models, derived from principal
component analysis of shape and posterior probability information in a multi-resolution framework, reached an
average Dice similarity coefficient of 0.91 ± 0.09 for 126 images (40 images of the
apex, 40 of the base and 46 of the central region) in a leave-one-patient-out validation
framework. The average segmentation time was 0.67 ± 0.02 s [51].
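The Dice similarity coefficient used in this evaluation measures the overlap between an automatic segmentation and its manual reference. A minimal NumPy sketch (an illustration, not the implementation of [51]):

```python
import numpy as np

def dice_coefficient(seg: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|), from 0 (no overlap) to 1 (identical)."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(seg, ref).sum() / denom

# Two overlapping 16-pixel square masks on a 10x10 grid
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool); b[4:8, 4:8] = True
print(round(dice_coefficient(a, b), 3))  # 2*4/(16+16) = 0.25
```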
A new approach combining multiple statistical shape models with posterior probability information of the
prostate region has been proposed for 2D prostate segmentation in TRUS images. The
approach is accurate and robust to significant variations in shape, size and image contrast when
compared to the traditional AAM applied to TRUS. Estimating the posterior intensity probabilities assists in
automatic initialization and yields a significant improvement in segmentation accuracy. Multiple mean
models further improve segmentation accuracy compared to a single AAM, with significant
improvements for base and apex slices compared to the
AAM. The execution time is 0.67 s with MATLAB code. We expect an optimized C++ implementation
to speed up execution.
A new method for non-rigid registration of transrectal ultrasound and magnetic resonance
images of the prostate has been proposed, based on a regularized framework of non-linear point correspondences obtained from
a statistical shape-context measure. The registration accuracy of this method was evaluated on 20 pairs of
mid-gland prostate ultrasound and magnetic resonance images. The results show an average Dice
similarity coefficient of 0.980 ± 0.004, an average 95% Hausdorff distance of 1.63
± 0.48 mm, and average target registration and target localization errors of 1.60 ± 1.17 mm and 0.15
± 0.12 mm, respectively [52].
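The 95% Hausdorff distance reported here is the 95th percentile of nearest-neighbor distances between the two boundary point sets, which makes the classical Hausdorff distance robust to a few outlier points. A NumPy sketch over 2D contour points (illustrative; the point sets are hypothetical):

```python
import numpy as np

def percentile_hausdorff(A, B, q=95.0):
    """Symmetric q-th percentile Hausdorff distance between point sets
    A (n,2) and B (m,2); q=100 gives the classical Hausdorff distance."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # (n, m) pairwise
    d_ab = np.percentile(d.min(axis=1), q)  # each A point to nearest B point
    d_ba = np.percentile(d.min(axis=0), q)  # each B point to nearest A point
    return max(d_ab, d_ba)

# Two unit squares (corner points) offset by 1.0 along x
A = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
B = A + np.array([1.0, 0.0])
print(percentile_hausdorff(A, B, q=100.0))  # 1.0
```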
A novel non-linear diffeomorphic framework with thin-plate splines (TPS) as the underlying transformation
has been proposed to register multimodal prostate images. A method for establishing point
correspondences between a pair of TRUS and MR images has also been proposed, computing the
Bhattacharyya distance between shape-context representations of the contour points [52]. The bijectivity of the
diffeomorphism is maintained by integrating over a set of non-linear functions for both the moving
and the fixed image. Regularized bending energy and the localization error of the point correspondences
established between the fixed and moving images are further added as constraints to the TPS system of
non-linear equations. This additional constraint ensures a regularized deformation of local anatomical
structures within the prostate that is clinically meaningful for interventions such as prostate biopsy.
The performance of the proposed method has been compared with two variants of the non-linear TPS
transformation: in the first, the control points are placed uniformly on a grid; in the second, the
control points are established by the proposed point-correspondence method. The second
variant does not involve regularization and relies only on the non-linear transformation function. Results
obtained on a real patient dataset show that the overall performance of the proposed method,
in terms of global and local registration accuracy, is better than the two variants as well as traditional
TPS- and B-spline-based deformable registration methods, and it can therefore be applied
to prostate biopsy procedures. The proposed method has been validated against varying numbers of control
points, which showed that control points inside the prostate are required to maintain
clinically meaningful deformation and that 8 boundary points capture the inflexions of the prostate contour
better than more or fewer control points. The proposed method has also been shown to be
unaffected by inaccuracies of the automatic segmentation, thanks to the robustness of the segmentation method
used. Evaluation of the registration on base and non-mid-gland slices demonstrated
high global and local registration accuracy, illustrating the robustness of the method.
The proposed regularized non-linear TPS framework can be applied to 3D prostate volume
registration. However, slice-by-slice point correspondences can only be established after resampling the prostate volume.
The TRUS-MR slice correspondences, selected manually in our experiments, could also be selected
automatically using an electromagnetic (EM) tracker attached to the TRUS probe, which would provide the spatial position
of TRUS slices within a pre-acquired TRUS/MR prostate volume for needle biopsy. An automated method
based on information theory and statistical shape analysis to find the MR slice that most closely matches an axial
TRUS slice is currently being investigated. The algorithm can be parallelized on a GPU and may
therefore be useful for real-time multimodal image fusion of the prostate during biopsy.
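The TPS transformation underlying this framework interpolates a deformation from point correspondences using the radial basis U(r) = r^2 log r. The following NumPy sketch solves the standard unregularized TPS system; the bending-energy and correspondence-error constraints of [52] are omitted, and the control points are a toy example:

```python
import numpy as np

def _U(d):
    """TPS radial basis U(r) = r^2 log r, with U(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(d > 0.0, d**2 * np.log(d), 0.0)

def tps_fit(src, dst):
    """Fit a 2D thin-plate spline mapping src control points (n,2) onto
    dst (n,2) by solving the standard TPS system [[K P],[P^T 0]]."""
    n = src.shape[0]
    K = _U(np.linalg.norm(src[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    rhs = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(L, rhs)  # (n+3, 2) coefficients [w; a]

def tps_apply(coef, src, pts):
    """Evaluate the fitted spline at query points pts (m,2)."""
    n = src.shape[0]
    U = _U(np.linalg.norm(pts[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((pts.shape[0], 1)), pts])
    return U @ coef[:n] + P @ coef[n:]

# Correspondences describing a pure translation by (2, 1)
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], float)
dst = src + np.array([2.0, 1.0])
coef = tps_fit(src, dst)
print(np.allclose(tps_apply(coef, src, src), dst))  # True: control points interpolated exactly
```

For a purely affine set of correspondences such as this translation, the non-linear weights vanish and the spline reduces to the affine part, so arbitrary query points are also translated exactly.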
MR-guided biopsy is a more advanced technique, but there is no current consensus on the
optimal approach [21]-[23]. Both open and closed MRI settings are used. Several types of biopsy robots
[24], some with complex software, are used to guide the needle.
The target area is determined using a combination of different MRI techniques. Some physicians
use a transrectal approach, while others prefer a transperineal methodology. Prostate motion during the
biopsy procedure is one of the biggest challenges in prostate biopsy [25]. Several solutions
to this problem have been proposed, from needle fixation to real-time image rendering [26].
Several MR-guided prostate biopsy approaches have been investigated, but so far there is no consensus
on this technique. The purpose of this research is to integrate real-time TRUS and MRI fusion-guided
biopsy and to develop methods for using high-contrast MR data, which is sensitive for detecting tumors, together
with real-time TRUS to follow prostate motion during the biopsy.
There is a great need for accurate imaging techniques for prostate cancer. Easy access to the prostate
gland allows high-resolution imaging. Among current radiographic technologies,
magnetic resonance imaging (MRI) is considered the most suitable for this application,
as it offers superior spatial resolution. With further improvements, MRI is expected to be
able to provide the shape and position of the tumor and will play a central role in guiding the treatment of
prostate cancer. Image-guided procedures are also required, although operating instrumentation in the
bore of a magnetic resonance (MR) scanner is full of technical challenges.
Pondman and colleagues [45] provide an excellent overview of the engineering and clinical
efforts published to date on MRI-guided prostate biopsy instrumentation and techniques. Several
groups have focused on this field and produced a variety of prototype devices capable of safely and
accurately placing a biopsy needle in the prostate gland, allowing sampling of tissue
identified in the MR images. These preliminary results need to be supported with further development of
the design elements of these devices, as well as validation studies of MRI for characterizing prostate pathology;
otherwise the true benefits of these efforts cannot be realized.
Ghose and colleagues [53] also provide an excellent overview of the methods developed for
segmentation of the prostate gland in TRUS, MR and CT images, the three major imaging modalities that aid
in the diagnosis and treatment of prostate cancer. The purpose of their paper is to study the similarities
and differences between the main methods, highlighting their strengths and weaknesses to aid
the choice of an appropriate segmentation methodology. They define a new taxonomy for prostate segmentation strategies
that allows the algorithms to be grouped, and then present the main advantages and disadvantages of
each strategy. A comprehensive overview of existing methods in all three modalities (TRUS, MR and CT)
is given, highlighting their key points and features. Finally, a discussion of selecting the most
appropriate segmentation strategy for a given imaging modality is provided.
The development of MR-compatible instrumentation for image-guided procedures provides a
springboard for a variety of diagnostic and therapeutic procedures on the gland in benign and malignant disease. In
addition to biopsies, needle-based procedures, such as tissue-based focal ablation or injection with
monitoring of the therapeutic effects, may now be equipped with sophisticated imaging
techniques such as MR thermography. Similarly, other endocavity pelvic procedures, including
gynecological and colorectal procedures, may be adapted to use such devices. The addition of remote
actuation, or ''robotic'' automation, further extends the potential of this approach to other forms
of procedures and other organ systems.
A large number of studies [46],[47] have recently shown that magnetic resonance (MR), in
addition to proton (1H) spectroscopic analysis and dynamic contrast-enhanced imaging (DCE-MR),
could represent a powerful tool for managing several aspects of prostate cancer, including early diagnosis,
localization of the cancer, road maps for surgery and radiotherapy, and early detection of local
recurrence. Ultrasound-guided biopsy is considered the preferred method for the detection of prostate
cancer; however, most studies have reported that sextant biopsies miss up to 30% of cancers, and
biopsy results showed an 83% positive predictive value and a 36% negative predictive value when
compared with radical prostatectomy for tumor localization [48]. Although MR and MR spectroscopic
imaging (MRSI) are not currently used as a first approach to diagnose prostate cancer, they can be useful
for directing targeted biopsy, particularly in cases where prostate-specific antigen (PSA) levels are
indicative of cancer but previous biopsies were negative.
In their article, Pondman et al. [45] summarize the technical and current clinical
applications of MR-guided prostate biopsy. In several centers [49],[50], MR-guided biopsy techniques are widely
available, but there is no current consensus on the optimal technique. In addition, among the relevant issues,
prostate motion during the biopsy is one of the biggest challenges in sampling the
gland. Robotic assistance for MR-guided prostate interventions may improve outcomes, but cost could
be the more relevant issue. For a long time, valid diagnostic imaging for prostate cancer has not been
available. MRSI can reduce the rate of false-negative biopsies and reduce the need for more extensive
or repeat biopsy procedures. MR-guided prostate biopsy will also have an increasing role in this
field. Extensive clinical studies are essential to analyze the real value and benefits of MR guidance
for biopsy.
In pattern recognition, a feature can be defined as a measurable quantity that can be used to distinguish
two or more regions [59]. More than one feature can be used to distinguish various regions, and a set of such
features is known as a feature vector. The vector space associated with these feature vectors is known as the
feature space. Supervised and unsupervised pattern-recognition (PR) classification techniques aim to partition
the feature space into a set of labels for the different regions. Classifier- and/or clustering-based
techniques are mainly used for this purpose. Classifiers use a training set of objects labeled with a priori
information to build a predictor that assigns labels to future unlabeled observations. In contrast, clustering
methods are given a set of feature vectors, and the goal is to identify groups of similar
objects on the basis of the feature vector associated with each. Proximity measures are used to group the data into
clusters of similar type.
1. Classifier-based Segmentation
In classifier-based methods, prostate segmentation is viewed as a prediction or learning problem. Each object in
the training set is associated with a response variable (class label) and a feature vector. The training set is
used to build a predictor that can assign a class label to an object on the basis of its observed feature
vector.
a. TRUS
Intensity heterogeneity, unreliable texture features and imaging artifacts pose a challenge in partitioning the feature
space. Zaim [60] used texture features, spatial values and gray-level information in a self-organizing
map neural network to segment the prostate. In more recent work [61], the same author used entropy
and energy features of symmetric, orthonormal, second-order wavelet coefficients [62] of overlapping
windows in a support vector machine (SVM) classifier. Mohammed et al. [63] used spatial- and frequency-
domain information from Gabor filters and multi-resolution prior knowledge of the location of the prostate
in TRUS images to identify the prostate. Parametric and non-parametric estimates of the Fourier
power spectrum density, along with ring and wedge filters [64] applied to the region of interest (ROI), were used
as feature vectors to classify TRUS images into prostate and non-prostate regions using a non-linear SVM.
b. MRI
Edge-detector operators applied to MR images can produce many false edges due to
the high soft-tissue contrast. Therefore, Zwiggelaar et al. [98] used Lindeberg's first- and second-order
directional derivatives [99] in a polar coordinate system to identify the edges. An inverse transformation
of the longest curve, selected after non-maximum suppression of interrupted curves in the vertical
direction, is used to obtain the prostate boundary. On the other hand, Samiee et al. [100] used prior
shape information of the prostate to refine the prostate boundary: average gradient values obtained
from a moving mask (guided by the prior shape information) are used to trace the prostate boundary.
In a similar way, Flores-Tapia et al. [101] used a priori shape information of the prostate to trace the
boundary with a small moving mask in a feature space constructed from the product of detail Haar wavelet
coefficients in a multi-resolution framework.
2. Clustering-based Segmentation
The purpose of clustering-based methods is to determine the intrinsic grouping of an unlabeled
data set using some distance measure. Each datum is associated with a feature vector, and the task is to identify
groups of similar objects on the basis of the associated feature vectors. The number of groups is implicitly
assumed to be known, and the relevant features and the algorithm used to measure the distance
must be chosen.
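The clustering step described above can be sketched with a plain k-means loop, using Euclidean distance as the proximity measure (a toy 2D feature space, not the texture features of the cited works):

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Plain k-means: assign each feature vector to the nearest centroid
    (Euclidean proximity measure), re-estimate centroids, repeat."""
    # Farthest-point initialization: start from X[0], then repeatedly add
    # the point farthest from the centroids chosen so far.
    centroids = [X[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centroids[None, :], axis=-1)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Two well-separated clusters in a 2D feature space
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
labels, cents = kmeans(X, k=2)
print(labels[:50].min() == labels[:50].max(), labels[50:].min() == labels[50:].max())  # True True
```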
a. TRUS
Richard et al. [65] used a mean-shift algorithm [66] in the texture space to determine the mean and
covariance matrix of each cluster. A probabilistic label assigned to each pixel determines its
membership with respect to each cluster. Finally, compatibility coefficients and pixel spatial
information are used for probabilistic relaxation and refinement of the prostate region.
3. Hybrid Segmentation
Combining a priori boundary, shape, region and feature information of the prostate gland increases
segmentation accuracy. These methods are robust to noise and produce superior results under variations in the shape
and texture of the prostate.
a. TRUS
Mid-gland prostate images in axial TRUS slices are often characterized by a hypoechoic mass
surrounded by a hyperechoic halo. To capture this feature, Liu et al. [67] proposed a radial
search from the prostate center to determine prostate edge points. The key boundary points are
identified as those of greatest variation in gray values along each ray. Average shape models built from manually segmented
contours are used to correct the key points. A similar scheme was adopted by Yan et al. [68]. In this
case, contrast variations in the profiles along the normal vectors perpendicular to the point distribution model (PDM) are used to automatically
determine the prominent points and the prostate boundary. Important points are determined by removing
points that fall in shadow areas.
Prior shape information helps determine prostate boundary points that are lost in shadow
areas of TRUS images. An optimal search through profiles perpendicular to the boundary for key points is used to
determine the prostate boundary with discrete deformable models in a multi-resolution, energy-minimization
framework. Modeling shape and texture features and using them to segment new images has been adopted
by many researchers; these schemes mainly vary in the approach used to build the shape and
texture models. For example, Zhan et al. [69] proposed to model the texture space by grouping texture
features, captured by rotation-invariant Gabor filters, into prostate and non-prostate regions with an SVM.
This classified feature space is then used as an external force in a deformable model framework to
segment the prostate. In subsequent work [70], the authors proposed to speed up the process by using
Zernike moments [71] to detect the edges at low and medium resolutions while retaining the texture
classification using Gabor features and an SVM at the finest resolution. In a different approach [72], the authors also proposed to reduce the
number of support vectors by introducing a penalty term in the objective function of the SVM, which punishes
and rejects outliers. Finally, Zhan et al. [57] proposed to combine texture and edge information to
improve segmentation accuracy. Rotation-invariant, multi-resolution Gabor features of the prostate and non-
prostate regions are used to train a Gaussian-kernel SVM to classify prostate texture regions. In
the deformable segmentation procedure, the SVM is used to label voxels around the surface of the
deformable model as prostate or non-prostate tissue.
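The oriented Gabor filters used as texture features in these works can be sketched as follows; this is a simplified real-valued, single-scale bank, whereas the cited papers use rotation-invariant, multi-resolution variants:

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=6.0):
    """Real part of a 2D Gabor kernel: a Gaussian envelope modulating a
    sinusoid oriented at angle theta (in radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, thetas):
    """Convolve an image with a bank of oriented Gabor kernels (FFT-based
    circular convolution) and stack the responses per pixel."""
    responses = []
    for t in thetas:
        k = gabor_kernel(theta=t)
        pad = np.zeros_like(img)
        pad[:k.shape[0], :k.shape[1]] = k
        responses.append(np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad))))
    return np.stack(responses, axis=-1)  # shape (H, W, len(thetas))

img = np.zeros((64, 64))
img[:, ::8] = 1.0  # vertical stripes vary along x: strongest response at theta = 0
feats = gabor_features(img, thetas=[0.0, np.pi / 4, np.pi / 2])
print(feats.shape)  # (64, 64, 3)
```

The per-pixel response vector across orientations is what a classifier such as an SVM would consume as a texture feature.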
Furthermore, the deformable model surface is driven to deform toward the voxels labeled as
prostate tissue. The labeling step and the label-based surface deformation depend
on each other, and the process is carried out iteratively until convergence. A similar scheme was adopted by Diaz
and Castañeda [73]. Asymmetric sticks and an anisotropic filter are first applied to reduce speckle in the TRUS
images. A discrete dynamic contour (DDC), produced by cubic interpolation of four user-initialized points, is
deformed under the influence of internal forces, large gradients and damping forces to produce the prostate
contour. Features such as the mean intensity, variance, back-projection filter output and stick-filter output
are used to build the feature vector, and pixels are classified into prostate and non-prostate regions using an
SVM.
Furthermore, the DDC is automatically initialized from the prostate boundary and used to obtain
the final contours of the prostate. Cosio et al. [58] used the position and gray-level value of the
pixels in TRUS images in a Gaussian mixture model of three Gaussian clusters, identifying prostate tissue, non-
prostate tissue, and the halo around the prostate in the TRUS images. A Bayes classifier is used to
identify the prostate regions. After pixel classification, an ASM is initialized on the binary image using a
global optimization method. The optimization problem consists of finding the optimal combination of
four pose and two shape parameters, which corresponds to an estimate of the prostate boundary in the
binary image.
A multi-population genetic algorithm with four pose and ten shape parameters is used to optimize
the ASM shape in a multi-resolution framework to segment the prostate. Another common hybrid
approach is to use both the shape and the intensity distribution of the prostate for segmentation. Medina et al. [74] used the AAM
framework [75] to model the shape and texture of the prostate region. In this framework, Gaussian
shape and intensity models built from PCA are combined to produce a combined
mean model. The prostate is segmented by exploiting this prior knowledge in an optimization that minimizes
the difference between the target image and the mean model. Ghose et al. [76] used the approximation
coefficients of the Haar wavelet transform to reduce speckle and improve segmentation accuracy. Later, Ghose et al.
[77] developed the model further by introducing contrast-invariant texture features extracted from log-
Gabor quadrature filters.
Recently, Ghose et al. [78] used information obtained in a probabilistic Bayesian
framework to build an appearance model; multiple mean shape and appearance models are then used as
priors to improve segmentation accuracy. Gong et al. [79] proposed using super-elliptical
deformable shape models to segment the prostate. Using a deformable super-ellipse as the prior
model for the prostate, the ultimate goal is to find the optimal parameter vector that best describes the prostate in a
given unsegmented image. The search is formulated as a maximum a posteriori criterion using Bayes' rule: the
initial parameters are used in a maximum a posteriori (MAP) framework to obtain optimized parameters
for the ellipse. Later, Tutar et al. [80] used an average of three manually delineated prostate contours to
construct a three-dimensional mesh with spherical harmonics to represent the mean prostate model.
With 8 harmonics, the 192-element feature vector was reduced to 20 elements using PCA. The user initializes the algorithm
by delineating the prostate boundary in the mid-gland axial and sagittal images. The problem of
finding the shape parameter vector that segments the prostate in the spatial domain is thereby reduced to finding
the optimal shape parameters in the parametric domain that maximize the posterior probability
density of a cost function measuring the level of agreement between the model and the edges of
the prostate in the image. Yang et al. [81] proposed to use min/max flow [82] to smooth the
contours of a 3D prostate model built from the user's 2D delineations.
The main modes of shape variation are identified with PCA, and morphological filters are used to
extract region-based information of the prostate gland. The shape model and region-based information are then
combined in a Bayesian framework to generate an energy function, which is minimized in a level-set framework.
Garnier et al. [83] used 8 user-defined points to initialize a 3D prostate mesh. Two algorithms are used to
determine the final prostate segmentation. First, a DDC with edges as external forces and 6 user-defined
points of the central gland as landmarks is used to segment the prostate. Then,
the initial mesh is used to create a graph, and in the second stage image features such as the gradient are
introduced to construct a cost function. Finally, a graph cut is used to determine the prostate volume, and the
graph-cut result is refined with the DDC to improve the results.
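The PCA shape modeling that recurs throughout these hybrid methods can be sketched as a point-distribution model; the training shapes here are toy circles of varying radius rather than aligned prostate contours:

```python
import numpy as np

def pca_shape_model(shapes, n_modes):
    """Point-distribution model: PCA of aligned training contours.
    shapes: (n_samples, 2*n_points) matrix of flattened (x, y) landmarks.
    Returns the mean shape, the main modes of variation, and their
    eigenvalues."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    vals, vecs = np.linalg.eigh(X.T @ X / (len(shapes) - 1))
    order = np.argsort(vals)[::-1][:n_modes]   # keep the largest modes
    return mean, vecs[:, order], vals[order]

def synthesize(mean, modes, b):
    """Generate a new shape instance: x = mean + modes @ b."""
    return mean + modes @ b

# Toy training set: 16-point circle contours with varying radius
t = np.linspace(0, 2 * np.pi, 16, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)]).ravel()
radii = np.array([0.8, 0.9, 1.0, 1.1, 1.2])
shapes = np.outer(radii, circle)
mean, modes, vals = pca_shape_model(shapes, n_modes=1)
new_shape = synthesize(mean, modes, b=np.array([0.5]))
print(mean.shape, modes.shape)  # (32,) (32, 1)
```

Here a single mode captures the radial scaling; constraining the coefficients b to a few standard deviations of the training set is what keeps deformable-model fits to plausible shapes.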
b. MRI
Prior shape and size information of the prostate was exploited by Vikal et al. [84] to build a mean
shape model from manually drawn contours. The authors used a
Canny edge filter to detect edges after pre-processing the image with a stick filter to suppress noise and
enhance contrast. The mean model is then used to discard pixels whose edges do not follow the same
orientation as the model. The contour obtained is further enhanced by removing gaps using polynomial
interpolation. The segmented contour obtained in the middle slice is used to initialize the slices
lying above and below the central slice. The use of a Bayesian framework to model prostate texture is
common in MR images. For example, Allen et al. [85] proposed to segment the prostate within an
EM framework, treating the three distinctive peaks in the intensity distribution as a mixture of three
Gaussians (background, central region and periphery of the prostate).
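The three-Gaussian intensity mixture fitted by expectation maximization in [85] can be sketched as follows; the sample intensities and component means are synthetic, chosen only to mimic a trimodal histogram:

```python
import numpy as np

def em_gmm_1d(x, k=3, iters=200):
    """Expectation-maximization for a 1D Gaussian mixture: alternate a soft
    assignment of samples to components (E-step) with re-estimation of the
    weights, means and variances (M-step)."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread-out initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility r[i, j] of component j for sample i
        p = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture parameters from the responsibilities
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var

# Synthetic trimodal intensities (e.g. background / central gland / periphery)
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 1000), rng.normal(10, 1, 1000), rng.normal(20, 1, 1000)])
w, mu, var = em_gmm_1d(x, k=3)
print(np.allclose(np.sort(mu), [0, 10, 20], atol=0.5))  # True
```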
A shape-constrained deformable model, with the clustered pixels providing the deformation force, is then used
to segment the prostate. Similarly, in Makni et al. [55], the intensity of the prostate region is modeled as a
mixture of Gaussians. They propose a Bayesian approach in which the prior probability of the voxel labeling is
obtained using a shape-constrained deformable model and Markov random field modeling. Conditional
probabilities are associated with the modeled intensity values, and segmentation is achieved by estimating the
optimal labels for the prostate boundary pixels within a MAP decision framework. Although
atlas-based registration and segmentation of the prostate has become popular in recent times, the
segmentation results obtained usually need to be refined by a deformable model to improve the accuracy. Martin et al.
[54] used a hybrid registration minimizing both intensity and geometric energies to register the atlas.
Minimizing the intensity-based energy matches the template image with the reference
image, while minimizing the geometric energy fits the model points of the template image to the scene points belonging
to the reference image. Finally, a shape-constrained deformable model is used to enhance the
results.
More recently, Martin et al. [86] used a probabilistic atlas to impose further spatial
constraints and to segment the prostate in three dimensions. Shape and texture modeling of the prostate have been
combined in the work of Tsai et al. [87], who used a shape- and region-based level-set framework to
segment the prostate in MR images. One contour is fixed and used as the reference frame, onto which all other
contours are affine-transformed to minimize their differences in a multi-resolution approach. PCA captures the main
modes of shape variation, which are incorporated into the level-set function together with
region-based information such as the area, the sum of intensities, the mean intensity and the variance.
Level-set minimization of the objective function produces the segmented prostate. The authors
also proposed coupled level-set models of the prostate, rectum and internal obturator muscles
to segment these structures simultaneously from MR images [88]. The algorithm is made robust by allowing
the shapes to overlap each other, and the final segmentation is achieved by maximizing the mutual
information of the three regions. Similarly, Liu et al. [56] used deformable ellipse fitting to segment the
prostate boundary after Otsu thresholding [89] of the image into prostate and non-
prostate regions. A shape-constrained level set, initialized from the ellipse fitted to the prostate, is used to further
refine the results. Finally, post-processing of the gradient maps of the prostate and rectum produces the final
segmentation. Firjani et al. [90] modeled the background and foreground pixels with a Gaussian
Markov random field and used the probability of a pixel belonging to the prostate to build a shape
model. Shape and intensity are jointly optimized with a graph-cut-based algorithm, and the authors extended
their work to 3D segmentation of the prostate [91]. Zhang et al. [92] proposed an interactive
environment for prostate segmentation: a region- and edge-based level set segments the prostate
from the background, depending on the foreground and background region information provided by the
user. Gao et al. [93] represented the training shapes as point clouds. Particle filters are used
to register the point clouds, built from a common reference prostate volume, to minimize the differences
in pose.
Prior shape and local image statistics are incorporated into an energy function that is minimized to
achieve segmentation of the prostate in a level-set framework. Recently, Toth et al. [94] used
a series of 50 variable Gaussian kernels to extract texture features of the prostate. An ASM, built from
contours manually drawn on the training images, is automatically initialized at the most likely
location of the prostate boundary to achieve segmentation. Later, Toth et al. [95], in addition to the intensity
values, used the mean, standard deviation, range, skewness and kurtosis of the local intensity
neighborhood to deploy an ASM automatically initialized from magnetic resonance spectroscopy (MRS)
information. The MRS information is clustered using replicated k-means clustering to identify the prostate in the mid-
slice and initialize the multi-feature ASM. Khurd et al. [96] localized the prostate gland center with
Gaussian-mixture-model-based clustering and expectation maximization after correcting the magnetic bias in
the image. Thresholding of the prostate probability maps obtained by a random-walker-based segmentation
algorithm [97] is used to segment the prostate.
4. Future Trends
Diagnostic imaging has become an indispensable procedure in medical science. Methods for imaging the
patient's anatomical structures have improved the diagnosis of pathology, creating new avenues of research in
the process. Automatic segmentation of anatomical structures from different imaging modalities such
as ultrasound (US), MRI and CT has become an important step for reducing inter- and intra-observer variability
and improving contouring time. This paper has examined the methods involved in prostate
segmentation. The strengths and limitations of the segmentation methodologies have been discussed, along with their
validation and performance evaluation. Finally, the choice of the
right segmentation methodology for a specific imaging modality has been discussed. It has been
underlined that prostate segmentation must use geometric, spatial, intensity and
texture techniques, together with prior knowledge of the imaging physics, to improve accuracy.
Prostate segmentation is still an open problem, and with advances in technology for the diagnosis,
treatment and follow-up of prostate diseases, new requirements must be met. Multimodal image fusion
of at least two imaging modalities provides valuable information. For example, fusion of MRI and TRUS
imaging should help in obtaining a more accurate sample during the biopsy. However, for these
methods to work in a real scenario, automatic, accurate and real-time fusion of the two imaging
modalities is needed. In such circumstances, automatic real-time segmentation and registration of the prostate
will increase the accuracy of prostate contouring and its efficiency. Automatic, accurate, real-time
prostate segmentation can be achieved with efficient algorithms designed for the graphics processing
unit. Moreover, per-frame prostate segmentation can be complemented by prostate tracking
across frames. Improvement of 3D prostate segmentation methods will become a trend in the
coming years due to the increasing use of 3D imaging modalities, where efficient and accurate algorithms are
necessary. In this sense, information from dynamic contrast-enhanced MRI and MR spectroscopy will be
used as additional features for automatic segmentation. In addition, registration of
prostate contours from the same modality over a period of time can also provide valuable information
about the progression of prostate disease.
Ghose et al. [102] have validated the accuracy and robustness of their approach on a public dataset of 15 MR
volumes with an image resolution of 256×256 pixels from the MICCAI prostate challenge [103] in a leave-
one-out evaluation strategy. During validation, the test dataset is removed, and the probabilistic atlas and
multiple mean models of the apex, central and the base regions are constructed with the remaining 14
datasets. To determine the region of interest for atlas based registration the center of a central slice is
manually provided by the user. Such an interaction is necessary to minimize the influence of intensity
heterogeneities around the prostate [104],[105]. The probabilistic atlas produces an initial soft
segmentation of the prostate. The centroid of each of the 2D slices of the prostate volume is computed
from probabilistic values of the soft segmentation. All the mean models of the corresponding regions
(apex, central and base) are initialized at the centroid of each of the slices to segment the prostate in
that slice. The segmentation result of the mean model producing the least fitting error is selected as the
final segmentation in 2D. The 2D labels are rigidly registered to the 3D labels generated using
probabilistic atlas to constrain pose variation and generate valid 3D shapes. Their method is implemented in Matlab 7 on an Intel Quad Core Q9550 processor with a clock speed of 2.83 GHz and 8 GB RAM. They used most of the popular prostate segmentation evaluation metrics, such as the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD), mean absolute distance (MAD), specificity, sensitivity, and accuracy, to evaluate their method. They compared their method with the results published in the MICCAI prostate challenge 2009 [104],[106] and with the work of Gao et al. [107]. They observe that their method performs better than some of the works in the literature. It should be noted that [106] also used a probabilistic atlas for segmentation; however, the hybrid framework of a probabilistic atlas and multiple SSAMs improves on overlap and contour accuracies. The accuracy of their method may be attributed to the use of a hybrid framework of optimized 2D segmentation that incorporates local variabilities and a 3D shape restriction that produces a valid prostate shape. Multiple mean models of shape and intensity priors for different regions of the prostate approximate local variabilities better, as each of these models is capable of producing new instances in a Gaussian space of shape and appearance. Also, the SSAM, being a region-based segmentation technique, performs well in the base and apex regions of the prostate for low-contrast images.
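As an illustrative aside, the overlap and boundary metrics named above are straightforward to compute from binary masks and contour point sets. The sketch below is in Python rather than the authors' Matlab, and is not their actual code; it shows minimal implementations of the Dice similarity coefficient and the symmetric mean absolute distance:

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice similarity coefficient between two binary masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    intersection = np.logical_and(seg, gt).sum()
    return 2.0 * intersection / (seg.sum() + gt.sum())

def mean_absolute_distance(contour_a, contour_b):
    """Symmetric mean absolute distance between two contours given as
    (N, 2) arrays of boundary points: average, in both directions, of
    each point's distance to the closest point of the other contour."""
    def one_way(a, b):
        # pairwise Euclidean distances, then the minimum over b for each a
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return d.min(axis=1).mean()
    return 0.5 * (one_way(contour_a, contour_b) + one_way(contour_b, contour_a))

# toy example: two overlapping 4x4 squares in a 10x10 image
a = np.zeros((10, 10)); a[2:6, 2:6] = 1
b = np.zeros((10, 10)); b[3:7, 3:7] = 1
dsc = dice_coefficient(a, b)   # 2*9 / (16+16) = 0.5625
```

The 95% Hausdorff distance used in the same studies replaces the mean in `one_way` with the 95th percentile of the per-point minimum distances.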
5. Robotics
Robotics is a relatively new field in medicine because of stringent safety criteria. Robots are most commonly used in minimally invasive procedures, such as cardiac, bladder, prostate, and neurosurgical interventions. Using MRI to guide the robot introduces additional problems: patient access is limited, and MR compatibility is essential. The robot must not interfere with the images acquired by the MR scanner; for this reason, ferromagnetic and electronic devices cannot be used inside the magnet. In addition, the robot must be
registered with the MR images in a common coordinate system so that it can target anatomical areas. These challenges apply to both manual and traditional electromechanical handling of the biopsy needle. Therefore, new methods and devices have been developed, including real-time, in-scanner guidance methods for operating the device.
Fichtinger et al [30] developed a transrectal prostate biopsy device that can be used in a conventional open MR scanner. The patient lies in a prone position, a sheath is introduced into the rectum, and the needle guidance device is slid into the sheath. To guide the device, microcoil antennas wound around small capsules containing a gadolinium solution are used as active fiducials. Their positions serve as rigid-body markers of the device, from which the exact orientation of the entire device is computed. The needle guidance device has 3 degrees of freedom (DOF) of motion: translation and rotation of the end-effector in the rectum. Images are acquired continuously, and the computer continuously calculates the motion parameters, allowing real-time dynamic control. The motion stages are manual today, but they may be motorized in the future.
An MR-biopsy device suitable for use in clinical practice was used by Beyersdorff et al [21]. A needle guide is inserted into the rectum and connected to the device, which allows rotation, angulation, and translation of the needle. The guide does not contain active fiducial coils, but it is visible on both T1- and T2-weighted MR images, and a fast imaging method is used to detect its position. Beyersdorff et al [21] performed MR-guided biopsy with this device in 12 patients with elevated PSA levels and a previous negative TRUSBx round. In seven patients, suspicious areas could be defined on a fast sequence; in the five other cases, areas of interest were clearly visible on the prebiopsy images and could be marked on the images obtained during biopsy. Positioning the needle guide was time consuming: the guide first had to be identified on the localizer image, and then, to adjust the position of the device, the patient had to be moved in and out of the scanner. After repositioning of the needle guide, MR images in two perpendicular planes had to be obtained to guide the device to the target. Prostate cancer was detected in 5 of 12 patients (42%).
Engelhard et al [31] used a similar, modified device in a study of 37 patients who had undergone a previous negative prostate biopsy. The researchers concluded that suspicious lesions with a diameter of 10 mm can be successfully punctured using this device. Prostate cancer was detected in 14 of 37 patients (38%). In a study of 27 patients with a previous negative TRUSBx, Anastasiadis et al [32] also used this device and found prostate cancer in 15 patients (55.5%). Detection rates after one round of negative biopsies thus ranged between 38% and 55.5% [21],[31],[32]; these data are promising and demonstrate the potential clinical value of MR-guided biopsy. In a second round of TRUSBx, only 15% to 20% of prostate cancers are detected, and in a third round of biopsies, only 8% [33].
The transrectal needle guide system called APT-MRI (standing for ''access to prostate tissue under MRI guidance'') was developed by Krieger et al [34] and Susil et al [35]. It can be used in a closed-bore 3 T magnet with the patient in the prone position. The needle placement device is, to some extent, similar to the devices used by Beyersdorff et al [21] and Fichtinger et al [30]. The APT-MRI device combines endorectal coil imaging with a hybrid tracking method that consists of passive fiducial marker tracking and MR-compatible fiber-optic joint encoders. The coordinates for positioning the interventional device in the scanner are obtained from MR images by segmenting a gadolinium fiducial marker tube inserted along the main axis of the device and two markers placed along the needle tract.
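Recovering the pose of such a device from segmented fiducial markers is, at its core, a point-based rigid registration problem. The following sketch is hypothetical (the APT-MRI targeting software is not public); it uses the standard closed-form SVD (Kabsch) solution for the rigid transform mapping known marker coordinates in the device frame onto their segmented positions in the image frame:

```python
import numpy as np

def rigid_transform(device_pts, image_pts):
    """Least-squares rigid transform (R, t) mapping device-frame marker
    coordinates onto image-frame positions, via the Kabsch/SVD method."""
    cd, ci = device_pts.mean(axis=0), image_pts.mean(axis=0)
    H = (device_pts - cd).T @ (image_pts - ci)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = ci - R @ cd
    return R, t

# toy check: markers rotated 90 degrees about z and shifted by (5, 2, 1) mm
device = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
image = device @ Rz.T + np.array([5.0, 2.0, 1.0])
R, t = rigid_transform(device, image)
```

With at least three non-collinear markers the transform is unique, which is why such devices carry several fiducials in a known geometry.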
Thin, 1-mm isotropic, sagittal, proton-density-weighted TSE images are obtained in the field of the markers. Automatic segmentation of the markers is achieved using custom targeting software, which reformats the sagittal images into the axial plane along the main axis of the device and along the needle axis. Three DOF are available to reach the target: rotation of the endorectal probe, pitch (angle) of the needle, and needle insertion depth. The targeting software provides the necessary guidance parameters, which are applied through a quick manual adjustment of the device. Commercially available MR-compatible core biopsy needles can be used with the APT-MRI device.
In three biopsy procedures performed by Susil et al, the average biopsy needle placement accuracy was 1.8 mm (range 0.4 to 4.0 mm) [34],[35]. Using this device in a study of 13 patients who had had at least one negative prostate biopsy 12 months earlier, Singh et al [20] found only one patient with a directed biopsy positive for prostate cancer.
DiMaio et al [29],[36] and Fischer et al [23] designed a robotic manipulator to perform transperineal biopsy with the patient in the supine position. This position may be more comfortable for patients than the more commonly used prone position. The system consists of visualization, planning and navigation software and a robotic needle placement machine. The robot is driven by pneumatic actuators, and position information is provided by passive tracking fiducials on the robot base. The system has proven to be MRI compatible, and in free space the needle tip localization accuracy is 0.25 mm and 0.5 mm. Zangos et al [37] used such a device for transgluteal biopsy in a closed-bore system.
Transgluteal biopsy minimizes the risk of injury to the bladder, bowel and iliac vessels, and no intestinal bacteria are carried into the prostate. The disadvantages of this method are the need for local or general anesthesia and the longer biopsy path. The device uses markers to guide a hydraulic arm, a basket system to control the drive, and optical sensors. Innomotion interactive software is used to plan and to control the biopsy. The device was used in a cadaver study, in which the body was placed in a prone or lateral position. The insertion point and the target are marked on T2-weighted MR images, and the biopsy path is calculated automatically. The needle is inserted manually by the physician after the body is moved out of the scanner. The average deviation of the needle tip was 0.35 mm (range 0.08 to 1.08 mm), and in all cases the target was reached on the first attempt. This technique has not yet been used in clinical practice. As mentioned above, immobilization of the prostate is a problem that affects targeting accuracy, and the transrectal approach in particular seems to have some difficulty with prostate movement.
The transrectal prostate biopsy devices used by Fichtinger et al [30] and by Krieger et al [34] have been modified to account for this problem. The device used by Beyersdorff et al [21] guides an MRI-compatible prostate biopsy needle. It has been described as having a problem with prostate motion during the biopsy, but the needle guide can also be used to immobilize the prostate.
3. Discussion
MR-guided prostate biopsy will play a greater role in diagnosing prostate cancer. One of the biggest challenges in taking a prostate biopsy is correcting for the motion of the prostate tissue during the biopsy procedure, and research is needed to design and evaluate techniques to determine and reduce this movement. Lattouf et al [38] evaluated 26 patients to determine whether the use of endorectal MRI before TRUSBx improved the diagnosis of prostate cancer in high-risk populations. They found that MRI before TRUSBx tended to produce more cancer diagnoses, but the difference was not statistically significant. Reasons for this finding may include suboptimal localization of the MRI findings and of the biopsy site. Real-time fusion of TRUS and MRI for guided biopsy has been proposed as a method for combining high-contrast MR data, which is sensitive for detecting tumors, with the real-time character of TRUS to follow the movement of the prostate during a biopsy [39],[40].
The techniques used for biopsy can also be used for the treatment of prostate cancer, for example with brachytherapy. In this case, accurate seed placement is very important to provide accurate dose coverage, but only a few reports with initial data are available. A study by Singh showed promising results in three patients with seed implantation using MR data fused with computer-assisted tomography (CT) data for treatment planning [41]. Future studies should investigate the role of robot-guided treatment. Robotic assistance for MR-guided prostate interventions in a closed-bore 3 T magnet has been investigated by researchers in the Urobotics Laboratory, Department of Urology, Johns Hopkins University, led by Dr. Dan Stoianovici. Their MR-compatible robotic device uses optical-pneumatic actuation and sensing mechanisms and pneumatic motors (PneuStep) [42]. The motor causes no magnetic or electromagnetic interference because it is made entirely of nonmagnetic and dielectric components [42].
Pneumatic actuation is an ideal choice for MRI compatibility as well as for achieving very high precision and reproducibility in accessing the target [43]. The robot was tested with impressive accuracy for image-guided needle targeting in mock-up, ex vivo, and animal studies [44]. Currently, a new holder is being developed so that the robot can handle the transperineal biopsy needle for access to the prostate. The robot is mounted on the imager table with the patient, who is placed in a decubitus position, and it is able to orient and operate a biopsy needle under direct MRI guidance. T2-weighted TSE sequence data transferred from the imager are used by custom software to determine the target and entry point for the needle in the three-dimensional coordinate system of the MR image. The software calculates the coordinates for each position in the coordinate system of the robot and guides the robot to perform automatic, target-centered needle placement.
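Deriving the guidance parameters described above (probe rotation, needle angle, insertion depth) from an entry point and a target point expressed in one coordinate frame is simple geometry. The conventions in the sketch below are assumed for illustration; the actual targeting software's conventions are not given in the source:

```python
import math

def guidance_parameters(entry, target):
    """Insertion depth, rotation about the device z-axis, and pitch
    (elevation out of the xy-plane) for a straight needle path from an
    entry point to a target point, both in the same 3D frame (mm).
    Axis and angle conventions here are illustrative assumptions."""
    dx = target[0] - entry[0]
    dy = target[1] - entry[1]
    dz = target[2] - entry[2]
    depth = math.sqrt(dx * dx + dy * dy + dz * dz)   # mm to advance the needle
    rotation = math.degrees(math.atan2(dy, dx))      # rotation about z
    pitch = math.degrees(math.asin(dz / depth))      # out-of-plane angle
    return depth, rotation, pitch

# 30 mm along x and 40 mm up in z: a 3-4-5 triangle, so depth is 50 mm
depth, rot, pitch = guidance_parameters((0, 0, 0), (30, 0, 40))
```

A real system would additionally transform both points from image to robot coordinates (via a registration such as the fiducial-based one above) before computing these parameters.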
4. Conclusions
Prostate biopsy is an essential procedure for determining optimal treatment. Systematic TRUSBx still fails to detect a variety of tumors, and using MR images during the procedure improves the quality of the diagnostic biopsy. Various MR-compatible robots have been developed. The robots investigated so far are mechanically powered devices that can be adjusted from outside the scanner, with the needle insertion executed manually. Unfortunately, little is known about the accuracy of these robots.
This paper proposes a multi-modal image-guided biopsy technique for detecting prostate cancer accurately. The proposal is to: develop a multimodal prostate image registration method based on information theory and automated statistical shape analysis to find the TRUS slice that closely matches the axial MR slice; design and evaluate efficient and accurate segmentation of the prostate in 2D TRUS image sequences to facilitate multi-modal image fusion between TRUS and MRI and thereby improve the sampling of malignant tissue for biopsy; use multiparametric MRI that merges spectroscopy with both T1- and T2-weighted MR images to improve prostate cancer detection; design and evaluate techniques for determining and correcting for the movement of the prostate tissue during the biopsy procedure by incorporating biomechanical modelling; implement software to define the targets and the entry points for the needle in the three-dimensional coordinate system of the image; and determine corresponding points on a pair of TRUS and MR images and calculate the coordinates of the respective positions in the coordinate system for target-centered needle placement.
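Registration "based on information theory", as proposed here, typically means maximizing the mutual information between the intensities of the two modalities. Below is a minimal histogram-based sketch of the similarity measure itself (illustrative only; a full registration would optimize a spatial transform against this score):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two equally sized
    images; higher values indicate stronger statistical dependence
    between the two intensity distributions."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint probability
    px = pxy.sum(axis=1, keepdims=True)          # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of img_b
    nz = pxy > 0                                 # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
a = rng.random((64, 64))
# an image is maximally informative about itself; unrelated noise is not
assert mutual_information(a, a) > mutual_information(a, rng.random((64, 64)))
```

Mutual information is the usual choice for TRUS-MR fusion precisely because the two modalities have no simple intensity mapping between them.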
5. Acknowledgment
The author would like to thank the College of Computer of Bumigora, Mataram, West Nusa-Tenggara,
Indonesia for funding this preliminary research.
References
[1] Jemal A, Siegel R, Ward E, Murray T, Xu J, Thun MJ. Cancer statistics, 2007. CA Cancer
J Clin 2007;57:43–66.
[2] Barry MJ. Clinical practice. Prostate-specific-antigen testing for early diagnosis of prostate
cancer. N Engl J Med 2001;344:1373–7.
[3] Frankel S, Smith GD, Donovan J, Neal D. Screening for prostate cancer. Lancet
2003;361:1122–8.
[4] Heijmink SW, van Moerkerk H, Kiemeney LA, Witjes JA, Frauscher F, Barentsz JO. A
comparison of the diagnostic performance of systematic versus ultrasoundguided biopsies
of prostate cancer. Eur Radiol 2006;16: 927–38.
[5] Terris MK, Wallen EM, Stamey TA. Comparison of midlobe versus lateral systematic
sextant biopsies in the detection of prostate cancer. Urol Int 1997;59:239–42.
[6] Salo JO, Rannikko S, Makinen J, Lehtonen T. Echogenic structure of prostatic cancer
imaged on radical prostatectomy specimens. Prostate 1987;10:1–9.
[7] Ellis WJ, Brawer MK. The significance of isoechoic prostatic carcinoma. J Urol
1994;152:2304–7.
[8] de la Rosette JJ, Aarnink RG. New developments in ultrasonography for the detection of
prostate cancer. J Endourol 2001;15:93–104.
[9] Ismail M, Petersen RO, Alexander AA, Newschaffer C, Gomella LG. Color Doppler
imaging in predicting the biologic behavior of prostate cancer: correlation with disease-
free survival. Urology 1997;50:906–12.
[10] Kimura G, Nishimura T, Kimata R, Saito Y, Yoshida K. Random systematic sextant biopsy
versus power doppler ultrasound-guided target biopsy in the diagnosis of prostate cancer:
positive rate and clinicopathological features. J Nippon Med Sch 2005;72:262–9.
[11] Lavoipierre AM, Snow RM, Frydenberg M, et al. Prostatic cancer: role of color Doppler
imaging in transrectal sonography. AJR Am J Roentgenol 1998;171:205–10.
[12] Scattoni V, Zlotta A, Montironi R, Schulman C, Rigatti P, Montorsi F. Extended and
saturationprostatic biopsy in the diagnosis and characterisation of prostate cancer: a critical
analysis of the literature. Eur Urol 2007;52:1309–22.
[13] Haker SJ, Mulkern RV, Roebuck JR, et al. Magnetic resonance-guided prostate
interventions. Top Magn Reson Imaging 2005;16:355–68.
[14] Norberg M, Egevad L, Holmberg L, Sparen P, Norlen BJ, Busch C. The sextant protocol
for ultrasound-guided core biopsies of the prostate underestimates the presence of cancer.
Urology 1997;50:562–6.
[15] Schiebler ML, Schnall MD, Pollack HM, et al. Current role of MR imaging in the staging
of adenocarcinoma of the prostate. Radiology 1993;189:339–52.
[16] Kirkham APS, Emberton M, Allen C. How good is MRI at detecting and characterising
cancer within the prostate? Eur Urol 2006;50:1163–75, discussion 1175.
[17] Futterer JJ, Heijmink SW, Scheenen TW, et al. Prostate cancer localization with dynamic
contrast-enhanced MR imaging and proton MR spectroscopic imaging. Radiology
2006;241:449–58.
[18] Hosseinzadeh K, Schwarz SD. Endorectal diffusionweighted imaging in prostate cancer to
differentiate malignant and benign peripheral zone tissue. J Magn Reson Imaging
2004;20:654–61.
225
Proceedings of the International Conference on Mathematical and Computer Sciences
Jatinangor, October 23rd-24th , 2013
[40] Kaplan I, Oldenburg NE, Meskell P, Blake M, Church P, Holupka EJ. Real time MRI-
ultrasound image guided stereotactic prostate biopsy. Magn Reson Imaging 2002;20:295–
9.
[41] Singh AK, Guion P, Sears-Crouse N, et al. Simultaneous integrated boost of biopsy
proven, MRI defined dominant intra-prostatic lesions to 95 Gray with IMRT: early results
of a phase I NCI study. Radiat Oncol 2007;2:36.
[42] Stoianovici D, Song D, Petrisor D, et al. ‘‘MRI Stealth’’ robot for prostate interventions.
Minim Invasive Ther Allied Technol 2007;16:241–8.
[43] Stoianovici D. Multi-imager compatible actuation principles in surgical robotics. Int J Med
Robotics Comput Assist Surg 2005;1:86–100.
[44] Muntener M, Patriciu A, Petrisor D, et al. Transperineal prostate intervention: robot for
fully automated MR imaging system description and proof of principle in a canine model.
Radiology 2008;47:543–9.
[45] Pondman KM, Fütterer JJ, ten Haken B, et al. MR-guided biopsy of the prostate: an
overview of techniques and a systematic review. Eur Urol 2008;54:517–27.
[46] Kirkham APS, Emberton M, Allen C. How good is MRI at detecting and characterising
cancer within the prostate? Eur Urol 2006;50:1163–75.
[47] Sciarra A, Panebianco V, Salciccia S, et al. Role of dynamic contrast-enhanced magnetic
resonance imaging and proton MR spectroscopic imaging in the detection of local
recurrence after radical prostatectomy for prostate cancer. Eur Urol 2008;54:589–600.
[48] Hricak H. MR imaging and MR spectroscopic imaging in the pre-treatment evaluation of
prostate cancer. Br J Radiol 2005;78:103–11.
[49] Hata N, Jinzaki M, Kacher D, et al. MR imaging-guided prostate biopsy with surgical
navigation software: device validation and feasibility. Radiology 2001;220:263–8.
[50] Zangos S, Eichler K, Engelmann K, et al. MR-guided transgluteal biopsies with an open
low-field system in patients with clinically suspected prostate cancer: technique and
preliminary results. Eur Radiol 2005;15:174–82.
[51] Ghose S , Oliver A, Mitra J, et al. A supervised learning framework of statistical shape and
probability priors for automatic prostate segmentation in ultrasound images. Medical
Image Analysis 2013; 17:587–600.
[52] Mitra J, Kato Z, Martí R, et al. A spline-based non-linear diffeomorphism for multimodal
prostate registration. Medical Image Analysis 2012; 16:1259–1279.
[53] Ghose S, Oliver A, Martí R, et al. A survey of prostate segmentation methodologies in
ultrasound, magnetic resonance and computed tomography images. Computer Methods
and Programs in Biomedicine 2012; 108:262–287.
[54] S. Martin, V. Daanen, J. Troccaz, Atlas-based prostate segmentation using a hybrid
registration, International Journal of Computer Assisted Radiology and Surgery 3 (2008)
485–492.
[55] N. Makni, P. Puech, R. Lopes, R. Viard, O. Colot, N. Betrouni, Combining a deformable
model and a probabilistic framework for an automatic 3d segmentation of prostate on MRI,
International Journal of Computer Assisted Radiology and Surgery 4 (2009) 181–188.
[56] X. Liu, D.L. Langer, M.A. Haider, Y. Yang, M.N. Wernick, I.S. Yetik, Prostate cancer
segmentation with simultaneous estimation of markov random field parameters and class,
IEEE Transactions on Medical Imaging 28 (2009) 906–915.
[57] Y. Zhan, D. Shen, Deformable segmentation of 3D ultrasound prostate images using
statistical texture matching method, IEEE Transactions on Medical Imaging 25 (2006)
256–272.
[58] F.A. Cosío, Automatic initialization of an active shape model of the prostate, Medical
Image Analysis 12 (2008) 469–483.
[59] N.N. Kachouie, P. Fieguth, A medical texture local binary pattern For TRUS prostate
segmentation, in: Proceedings of the 29th Annual International Conference of the IEEE
Engineering in Medicine and Biology Society, IEEE Computer Society Press, USA, 2007,
pp. 5605–5608.
[60] A. Zaim, Automatic segmentation of the prostate from ultrasound data using feature-based
self-organizing map, in: H. Kalviainen, J. Parkkinen, A. Kaarna (Eds.), Proceedings of
[93] Y. Gao, R. Sandhu, G. Fichtinger, A.R. Tannenbaum, A Coupled Global Registration and
Segmentation Framework with Application to Magnetic Resonance Prostate Imagery,
IEEE Transactions on Medical Imaging 10 (2010) 17–81.
[94] R. Toth, B.N. Bloch, E.M. Genega, N.M. Rofsky, R.E. Lenkinski, M.A. Rosen, A.
Kalyanpur, S. Pungavkar, A. Madabhushi, Accurate prostate volume estimation using
multifeature active shape models on t2-weighted mr, Academic Radiology 18 (2011) 745–
754.
[95] R. Toth, P. Tiwari, M. Rosen, G. Reed, J. Kurhanewicz, A. Kalyanpur, S. Pungavkar, A.
Madabhushi, A magnetic resonance spectroscopy driven initialization scheme for active
shape model based prostate segmentation, Medical Image Analysis 15 (2011) 214–225.
[96] P. Khurd, L. Grady, K. Gajera, M. Diallo, P. Gall, M. Requardt, B. Kiefer, C. Weiss, A.
Kamen, Facilitating 3D spectroscopic imaging through automatic prostate localization in
mr images using random walker segmentation initialized via boosted classifiers, in: A.
Madabhushi, J. Dowling, H.J. Huisman, D.C. Barratt (Eds.), Prostate Cancer Imaging,
volume 6963 of Lecture Notes in Computer Science, Springer, 2011, pp. 47–56.
[97] L. Grady, Random walks for image segmentation, IEEE Transactions on Pattern Analysis
and Machine Intelligence 28 (2006) 1768–1783.
[98] R. Zwiggelaar, Y. Zhu, S. Williams, Semi-automatic segmentation of the prostate, in: F.J.
Perales, A.J. Campilho, N.P. de la Blanca, A. Sanfeliu (Eds.), Pattern Recognition and
Image Analysis, Proceedings of First Iberian Conference, IbPRIA, Springer,
Berlin/Heidelberg/New York/Hong Kong/London/Milan/Paris/Tokyo, 2003, pp. 1108–
1116.
[99] T. Lindeberg, Edge detection and ridge detection with automatic scale selection, in:
Proceedings of Computer Vision and Pattern Recognition, IEEE Computer Society Press,
Los Alamitos/California/Washington/Brussels/Tokyo, 1996, pp. 465–470.
[100] M. Samiee, G. Thomas, R. Fazel-Rezai, Semi-automatic prostate segmentation of MR
images based on flow orientation, in: IEEE International Symposium on Signal Processing
and Information Technology, IEEE Computer Society Press, USA, 2006, pp. 203–207.
[101] D. Flores-Tapia, G. Thomas, N. Venugopal, B. McCurdy, S. Pistorius, Semi automatic
MRI prostate segmentation based on wavelet multiscale products, in: 30th Annual
International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE
Computer Society Press, USA, 2008, pp. 3020–3023.
[102] S. Ghose, J. Mitra, A. Oliver, R. Marti, X. Llado, J. Freixenet, J. C. Vilanova, D. Sidibe,
F. Meriaudeau, A coupled schema of probabilistic atlas and statistical shape and
appearance model for 3D prostate segmentation in MR images, in: IEEE ICIP, United
States, 2012, pp. 0069-5550.
[103] MICCAI, “2009 prostate segmentation challenge MICCAI,”
wiki.namic.org/Wiki/index.php, accessed on [1st April, 2011].
[104] A. Gubern-Merida et al., “Atlas based segmentation of the prostate in MR images,”
wiki.namic.org/Wiki/images/d/d3/Gubern-Merida Paper.pdf, accessed on [20th July,
2011], 2009.
[105] S. Klein et al., “Automatic Segmentation of the Prostate in 3D MR Images by Atlas
Matching Using Localized Mutual Information,” Med. Physics, vol. 35, pp. 1407–
1417, 2008.
[106] J. Dowling et al., “Automatic atlas-based segmentation of the prostate,”
wiki.namic.org/Wiki/images/f/f1/Dowling 2009 MICCAIProstate v2.pdf, accessed on
[20th July, 2011], 2009.
[107] Y. Gao et al., “A Coupled Global Registration and Segmentation Framework with
Application to Magnetic Resonance Prostate Imagery,” IEEE Trans. Med. Imaging, vol.
10, pp. 17–81, 2010.
Abstract: The research was carried out as a case study at the PT Best Stamp Indonesia head office, Trade Center Metro A-10 Bandung, a company engaged in the manufacture of color stamps. The result of this study is the maturity level of each IT investment management process, giving PT Best Stamp Indonesia an overview of the processes that need further development. The results show that most of the maturity indices are at maturity level 3 (defined).
1. Introduction
Investment in information technology (IT) is a business strategy that a company should pursue to remain competitive and not be left behind by other similar companies. On this subject, Hsin-Ginn Hwang (2005) explains, citing Lucas and Turner (1982), that information technology can be used to achieve corporate strategy by helping companies gain efficiency in operations, improve the planning process, and open new markets. In addition, the company's strategy should be considered in the planning phase of information technology, as information technology plays an important role in the implementation of corporate strategy (McFarlan, 1984). If an information technology investment is judged only by its cost at the procurement stage, the company will become reluctant to invest; as stated by Wina et al (2007: 31), costs are incurred not only at the start of procurement but continue throughout maintenance, for as long as the investment is used.
For PT Best Stamp Indonesia to be able to compete among similar companies, it should be aware that the strategy for implementing information technology needs to be improved.
For a company such as PT Best Stamp Indonesia, whose core business is the manufacture of stamps, information technology can help record transactions, track inventory, pay employees, buy new merchandise, and evaluate sales trends. Furthermore, to obtain maximum results and benefits for the organization from the development of technology, a thorough framework for calculating value is needed; one such framework is Val IT. Interpretation through the Val IT approach is used to provide a clear picture of the information technology assets in an organization.
2. Literature Review
Value is based on the benefits derived from competition, which are reflected in the performance of the business today and in the future; value increases competitive advantage over the company's competitors and drives management to invest. Information technology can be regarded as the discipline needed to manage information so that it can be found easily and accurately.
According to Ross and Beath, in their study published in the MIT Sloan Management Review (2002) titled Beyond the Business Case: New Approaches to IT Investment, there are two general approaches to investing in information technology: the Strategic Objective approach, which prioritizes short-term profits and long-term growth, and the Technology Scope approach, which implements information technology infrastructure as a business solution.
The IT Governance Institute (ITGI), the agency that publishes IT governance frameworks, in April 2006 issued a complementary framework for measuring the value of IT, called Val IT. Currently, Val IT focuses on new IT investments and will later be expanded to include all IT services and assets (Bell, Stephen, 2006). The goals of the Val IT initiative include research, publications and support services to help management understand the value of IT investments and to ensure that organizations obtain optimal value from IT investments at an acceptable cost and risk.
There are three main domains for measuring the value of IT investments. Each domain consists of multiple processes and has its own purpose, as follows:
1. Value Governance (VG) consists of 11 processes and aims to optimize the value of IT investments by:
a. Establishing the governance, control and monitoring framework.
b. Providing strategic direction for investments.
c. Defining the characteristics of the investment portfolio.
2. Portfolio Management (PM) consists of 14 processes and aims to ensure that the entire IT investment portfolio is aligned with, and contributes optimally to, the organization's strategic objectives by:
a. Establishing and managing resource profiles.
b. Defining the investment restrictions.
c. Evaluating, prioritizing and selecting, deferring or rejecting new investments.
d. Managing the overall portfolio.
e. Monitoring and evaluating portfolio performance.
3. Investment Management (IM) consists of 15 processes and aims to ensure that the organization's IT investment programs deliver optimal results at a reasonable cost and within an acceptable level of risk by:
a. Identifying business requirements.
b. Developing a clear understanding of candidate investment programs.
c. Analyzing the alternatives.
d. Defining the program and documenting a detailed business case, including a clear and detailed description of the program's benefits for the company.
e. Establishing clear accountability and ownership of the program.
f. Monitoring and reporting on program performance.
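The per-process maturity measurement that these domains feed into can be sketched as a simple aggregation of questionnaire scores on the 0-5 maturity scale. The process identifiers and the scoring rule below are placeholders for illustration, not the paper's exact method:

```python
# Hypothetical aggregation of Val IT maturity questionnaire answers.
# Each process maps to respondent scores on the 0-5 maturity scale.
def maturity_index(answers):
    """Maturity index of one process: the mean of its respondent scores."""
    return sum(answers) / len(answers)

def maturity_level(index):
    """Nearest whole maturity level (0-5), rounding halves up."""
    return int(index + 0.5)

# placeholder process IDs and toy scores for the VG, PM and IM domains
processes = {"VG01": [3, 3, 4, 2], "PM01": [3, 2, 3, 3], "IM01": [4, 3, 3, 3]}
levels = {name: maturity_level(maturity_index(a)) for name, a in processes.items()}
# with these toy scores, all three processes land at level 3 ("defined")
```

This mirrors the paper's reported outcome, in which most maturity indices fall at level 3 (defined).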
The Val IT framework defines several terms related to IT investments (Definitions of Key Terms Used in the Val IT Initiative, The Val IT Business Case, 2006):
1. Value: The expected results from investments that support the business.
2. Portfolio: A group of programs, projects, services or assets that are selected, managed and monitored to optimize the return of business value.
3. Programme: A structured group consisting of a variety of projects. The investment programme is the main unit of investment in Val IT.
4. Project: A set of activities that focus on producing a certain capability.
5. Implement: Covers the economic life cycle of the investment programme.
The Val IT framework can be implemented by building a business case for the project whose investment value will be measured. Through the business case, we can evaluate how much value a business proposal offers.
Building the business case consists of 8 stages, as follows:
1. Creating fact sheets with the relevant data and performing data analysis on them.
2. Alignment analysis.
3. Financial benefit analysis.
4. Analysis of non-financial benefits.
5. Risk analysis.
6. Assessment and optimization of the results and risks generated by the IT investment.
7. Structured recording of the results of the previous stages and documentation of the business case, keeping the end results up to date.
8. Evaluation of the business case during program execution, throughout the life cycle of the program.
3. Research Methodology
This section will explain the methodology used in conducting this study:
1. A literature study was conducted to collect data and information from a variety of sources
related to IT management and IT investment. The literature study and a preliminary survey
were carried out to study Val IT governance in applying IT. The Val IT analysis focused on
describing how the company's IT applications fit within the Val IT framework, in order to
formulate the problem. An analysis of the company was performed to map the state of its IT
applications as IT investment assets. The literature study also served to establish the
company profile, the management and organisational structure, and the work processes of the
IT applications.
[Figure: research methodology flow — literature study → selection of the research methodology →
questionnaire planning and selection of respondents → data processing → maturity levels (Phase 4) →
business case analysis (Phase 5).]
2. The Val IT framework was used to analyse and measure the IT investment in the company.
3. A questionnaire was used to survey the company. Its design was based on the Key Management
Practices and the Val IT Maturity Model, and the resulting scores were used to determine the
maturity level. The questionnaire measured the maturity of information systems governance
through questions answered on a scale of zero to five, corresponding to the levels of the
Val IT maturity model.
4. The respondents selected for this study were limited to staff working at the PT Best Stamp
Indonesia head office, A-10 MTC Bandung, due to the time and cost constraints of the research.
5. The completed questionnaires returned by the respondents were processed by computing the
average value of the responses.
6. The company's current condition was mapped against IT investment management under the Val IT
framework. The standard value of each process in the Val IT framework was compared with the
situation and condition of the company and its IT management, and then evaluated. The Val IT
reference processes support a maturity analysis of each IT application management process
under study. A gap analysis identifies the processes requiring further development and
provides input for improving IT management, so that the company has a picture of its
potential IT applications within the Val IT framework.
7. Recommendations based on the Val IT maturity levels were proposed to improve each process,
so that PT Best Stamp Indonesia can reach level 5 (Optimised), where processes follow best
practices resulting from continuous improvement.
8. In the final stage of this research, conclusions were drawn after the whole process was
completed and the findings were evaluated. Suggestions were then given as a reference for
future research.
The data was collected by distributing questionnaires to 30 respondents. While the questionnaires
were being filled out, the researcher accompanied the respondents in order to answer any questions
that might arise.
Value Governance
Process Maturity Level
VG1. Ensure informed and committed leadership 4
VG2. Define and implement processes 3
VG3. Define roles and responsibilities 4
VG4. Ensure appropriate and accepted accountability 4
VG5. Define information requirements 4
VG6. Establish reporting requirements 4
VG7. Establish organisational structures 3
VG8. Establish strategic direction 4
VG9. Define investment categories 4
VG10. Determine a target portfolio mix 4
VG11. Define evaluation criteria by category 4
Investment Management
Process Maturity Level
IM1. Develop a high-level definition of investment opportunity 3
IM2. Develop an initial programme concept business case 2
IM3. Develop a clear understanding of candidate programmes 2
IM4. Perform alternatives analysis 3
IM5. Develop a programme plan 3
IM6. Develop a benefits realisation plan 2
IM7. Identify full life cycle costs and benefits 3
IM8. Develop a detailed programme business case 3
IM9. Assign clear accountability and ownership 3
IM10. Initiate, plan and launch the programme 4
IM11. Manage the programme 3
IM12. Manage/track benefits 3
IM13. Update the business case 3
IM14. Monitor and report on programme performance 3
IM15. Retire the programme 3
Based on the calculation of the maturity model level across all Value Governance, Portfolio
Management and Investment Management processes, the resulting index is 3, meaning that PT Best
Stamp Indonesia is at the third level, Defined: procedures have been standardized, documented
and communicated, but implementation is still left to individuals.
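As an illustration of how such an overall index is obtained, the per-process scores can simply be averaged and rounded. The sketch below is illustrative, not the paper's exact computation: the Portfolio Management scores are omitted because that table is not reproduced here.

```python
def maturity_index(scores):
    """Average per-process maturity scores (0-5 scale) and report both the
    fractional index and the rounded overall maturity level."""
    avg = sum(scores) / len(scores)
    return avg, round(avg)

# Scores transcribed from the Value Governance and Investment Management tables.
vg = [4, 3, 4, 4, 4, 4, 3, 4, 4, 4, 4]                # VG1-VG11
im = [3, 2, 2, 3, 3, 2, 3, 3, 3, 4, 3, 3, 3, 3, 3]    # IM1-IM15
index, level = maturity_index(vg + im)
print(round(index, 2), level)  # the rounded level is 3, matching "Defined"
```

Averaging these 26 scores gives a fractional index slightly above 3, which rounds to the overall level 3 reported above.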
[Figure: radar chart of the maturity levels of the IM and PM processes.]
The business case can be prepared after analyzing the processes identified in Val IT that describe
the information technology investment programme. According to the survey, PT Best Stamp Indonesia
has invested in a web-based information system as a means of supporting the company's operations.
The results of the analysis are shown in Table 5 below.
4.3 Recommendation
The analysis of the IT investment implementation against the maturity level model produced a
number of recommendations for each process, to be carried out by PT Best Stamp Indonesia to
optimize the management of its IT investments using Val IT, as follows:
3.10 Initiate, plan and launch the programme. Plan the necessary resources and commission the
projects required to achieve the programme outcomes.
3.11 Manage the programme. Manage and monitor the performance of individual projects with respect
to delivery, schedule, cost and risk, in order to identify potential impacts on programme
performance and take timely corrective action when necessary.
3.12 Manage/track benefits. Monitor performance against targets on a regular basis to ensure that
the planned benefits are achieved, sustained and optimized.
3.13 Update the business case. Update the business case to reflect the status of programme
implementation; this should be done whenever the projected costs or programme benefits change,
when risks change, and at scheduled review stages.
3.14 Monitor and report on programme performance. Produce reports covering the performance of the
entire portfolio, the alignment of the IT strategy with policies and standards, benefits
realization, process maturity, end-user satisfaction and the status of internal IT controls.
3.15 Retire the programme. Remove the programme from the portfolio when it is terminated, with the
director's approval, and document it for lessons learned.
5.1 Conclusion
From the assessment of PT Best Stamp Indonesia's information technology investment, the following
conclusions can be drawn:
1. Measuring the value of IT investments is important so that the business strategy, supported by
the operational support system, stays in line with the company strategy.
2. The Val IT framework has detailed processes and can be customized to the characteristics of IT
investments such as PT Best Stamp Indonesia's web system application.
3. The business case analysis shows that the strategic objectives of PT Best Stamp Indonesia
should be matched with the planned investment in a web-based information technology system.
4. Based on the Val IT processes applied to the web system technology application, PT Best Stamp
Indonesia has reached only level 3 (Defined) of the maturity levels.
5. The maturity level can be raised to level 5 (Optimized) by continuously improving operational
procedures through the appropriate web application system and by providing best practice
recommendations to information users.
5.2 Suggestion
References
Bell, Stephen (2006). "Val IT: Helping Companies Add Dollar Value with ICT". Computerworld New
Zealand, 8 June 2006.
Hwang, Hsin-Ginn, Robert Yeh, James J. Jiang and Gary Klein (2005). "IT Investment Strategy and IT
Infrastructure Services". The Review of Business Information Systems Journal, Vol. 6, p. 56.
IT Governance Institute (2006). Enterprise Value: Governance of IT Investments, The Val IT
Framework. United States: IT Governance Institute.
IT Governance Institute (2006). Enterprise Value: Governance of IT Investments, The Business Case.
United States: IT Governance Institute.
Ross, Jeanne W. and Cynthia M. Beath (2002). "Beyond the Business Case: New Approaches to IT
Investment". MIT Sloan Management Review.
Winanti, Wina and Falahah (2007). "Val IT: Kerangka Kerja Evaluasi Investasi Teknologi Informasi".
Seminar Nasional Aplikasi Teknologi Informasi 2007 (SNATI 2007), Yogyakarta.
Abstract: The authors conducted a study on the use of the COBIT framework for assessing IT
governance, using the Plan and Organise and Acquire and Implement domains to measure the value
of information technology investments and align it with corporate goals. The assessment of IT
governance is intended to map the planning and organisation, acquisition and implementation
processes onto maturity levels. Using the maturity level, management can gauge the position of
the current information systems and assess what is needed to improve them. At PT. Best Stamp
Indonesia there are 17 information technology processes, and they are currently at maturity
level 3 (defined). From the results obtained, the COBIT models of IT governance and information
systems audit can be applied to the company's IT processes, although some adjustments or
modifications to the processes are necessary.
1. Introduction
In this globalization era, the use of Information Technology (IT) has become a necessity that plays an
important role in many aspects of life. IT also has provided many benefits for the development of
business processes. “Effective strategy development is becoming vital for today’s Organizations. As the
impact of IT has grown in organizations, IT strategy is finally getting the attention it deserves in
business”. (Heather, James and Satyendra, 2007). Companies are increasingly trying to be the best,
and PT. Best Stamp Indonesia likewise attempts to implement the best strategies in its use of IT.
The use of IT provides solutions and profit opportunities through the strategic role IT
plays in achieving the organization's vision and mission.
On the other hand, IT investment costs are relatively high and carry a risk of failure. This
condition demands consistency in management, so that well-tuned IT governance becomes an
essential requirement. Good IT governance enables a company to survive, and helps it develop
strategies and realize its goals.
COBIT (Control Objectives for Information and related Technology) is an IT governance
standard developed by the IT Governance Institute (ITGI), a United States-based organization that
conducts research on IT governance models. In contrast to other IT governance standards, COBIT
has a broader scope and takes a comprehensive, in-depth look at the process of managing IT. In
support of IT governance, COBIT provides a framework that ensures IT is aligned with business
processes, IT resources are used responsibly, and IT risks are dealt with appropriately.
2. Literature Review
2.1 Information Technology Governance
There are several definitions of IT governance such as: Definition of IT governance according to ITGI
(2007: 5):
IT governance is the responsibility of executives and the board of directors, and consists of the
leadership, organisational structures and processes that ensure that the enterprise’s IT sustains and
extends the organisation’s strategies and objectives.
Based on this definition, IT governance is the responsibility of an enterprise's top and executive
management. It is part of the overall management of the company, consisting of the leadership,
organisational structures and processes that ensure the continuity of the IT organisation and the
development of the organisation's strategies and goals.
According to Van Grembergen and De Haes (2004: 1):
IT governance is the organizational capacity exercised by the board, executive management
and IT management to control the formulation and implementation of IT strategy and in this way
ensure the fusion of business and IT.
In other words, IT governance comprises the organizational measures undertaken by the board,
executive management and IT management to control the formulation and implementation of IT
strategy, thereby ensuring that business and IT are fused.
Opportunities are created by optimizing IT resources across the organization's resources, such as
data, application systems, infrastructure and human resources. As expressed by Stacey Hamaker,
CISA, and Austin Hutton (2004):
“IT governance is an integral part of corporate governance in raising the bar of corporate
integrity and enhancing shareholder value. IT governance goes beyond IT audit and beyond what the
CIO can accomplish by himself/herself. Depending on the organization, IT governance may be the
enabler for an organization to move to the next level or it may be the only way an organization can
meet regulatory and legal requirements”.
Based on the definitions above, the emphasis of IT governance is on creating strategic alignment
between information technology and the management of the company; IT governance therefore plays a
very important role in the application of information technology.
Control Objectives for Information and Related Technology (COBIT) is a framework developed by the
IT Governance Institute, a United States-based organization that conducts research on IT
management models.
COBIT provides clear policies and good practices for information technology governance,
assisting senior management in understanding and managing the associated risks by supplying an IT
governance framework and detailed control objectives for management, business process owners,
users and auditors.
The COBIT process focus is described by an information technology process model that divides IT
into 4 domains and 34 processes, summarized by 210 detailed control objectives according to areas
of responsibility: planning, building, running and monitoring the implementation of information
technology. COBIT thus provides an end-to-end view of information technology.
The main characteristics of the COBIT framework are its business process orientation and its
reliance on control-based measurements.
The 34 COBIT IT processes (ITGI, 2007) are:
COBIT defines control as the policies, procedures, practices and organizational structures
designed to provide reasonable assurance that business objectives will be achieved and that
undesired events will be prevented, or detected and corrected. Information technology control
objectives, in turn, are statements of the intent or result expected from implementing control
procedures in a particular IT activity.
The COBIT control framework provides a clear link between information technology governance
requirements, IT processes and IT controls, because the control objectives are organized by IT
process. Each IT process in COBIT has a high-level control objective and several detailed control
objectives. Together, these controls characterize a well-managed process.
An understanding of the status of its information technology systems is necessary for an
organization to decide what level of management and control to apply. Companies therefore need to
know what should be measured and how the measurements are done, so that the status of their
performance level can be determined.
One tool for measuring the performance of a system is the information technology maturity model
(maturity level). The maturity model for managing and controlling IT processes is based on an
evaluation method by which an organization can rate itself from non-existent (0) to optimized (5).
The maturity model is intended to identify existing problems and to prioritize improvements. It is
designed as a profile of IT processes, so that the organization can recognize a description of the
current situation and of future possibilities. Applying the maturity model developed for each of
the 34 processes enables IT management to identify (ITGI, 2007):
1. the current condition of the company;
2. the current condition of the industry, for comparison;
3. the company's desired condition; and
4. the desired growth between 'as-is' and 'to-be'.
The maturity model is built from a generic qualitative model to which the following attributes are
progressively added (ITGI, 2007):
1. awareness and communication
2. policies, plans and procedures
3. tools and automation
4. skills and expertise
5. responsibility and accountability
6. goal setting and measurement
In general, the process maturity levels defined by the COBIT framework are as follows (ITGI, 2007):
1. Level 0: Non-existent. No IT processes are identified at all; the company has not yet
recognized that there is an issue to address.
2. Level 1: Initial/Ad hoc. There is evidence that the company is aware of issues that need to
be addressed. There are no standard processes; instead, ad hoc approaches tend to be applied
case by case. The overall management approach is disorganized.
3. Level 2: Repeatable but Intuitive. Processes have developed to the stage where similar
procedures are followed by different people performing the same task. There is no formal
training or communication of standard procedures, and responsibility is left to the
individual. There is heavy reliance on individual knowledge, so errors occur often.
4. Level 3: Defined Process. Procedures have been standardized, documented and communicated
through training. However, it is left to individuals to follow these processes, so deviations
are difficult to detect. The procedures themselves are not sophisticated but are a
formalization of existing practice.
5. Level 4: Managed and Measurable. Management monitors and measures compliance with procedures
and takes action where processes appear not to be working effectively. Processes are under
constant improvement and provide good practice. Automation and tools are used in a limited or
fragmented way.
6. Level 5: Optimized. Processes have been refined to the level of best practice, based on the
results of continuous improvement and maturity modelling with other enterprises. Integrated
information technology is used to automate workflow, providing tools that improve quality and
effectiveness and enabling the company to adapt quickly to change.
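The six levels above can be captured in a small lookup table; this is an illustrative sketch, not part of the COBIT toolset itself:

```python
# Names of the six COBIT maturity levels summarized above.
COBIT_MATURITY_LEVELS = {
    0: "Non-existent",
    1: "Initial / Ad hoc",
    2: "Repeatable but Intuitive",
    3: "Defined Process",
    4: "Managed and Measurable",
    5: "Optimized",
}

def level_name(level: int) -> str:
    """Look up the name of a maturity level, rejecting values outside 0-5."""
    if level not in COBIT_MATURITY_LEVELS:
        raise ValueError("maturity level must be an integer from 0 to 5")
    return COBIT_MATURITY_LEVELS[level]

print(level_name(3))  # prints "Defined Process"
```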
3. Research Methodology
This section explains the methodology the authors used in conducting this research. The
methodology is the manner and sequence of work used in this study.
[Figure: research methodology flow — literature study → determining the research method → data
collection (questionnaires I and II; interviews and documentation) → data processing →
identification of control objectives (Phase 4) → maturity level assessment (Phase 5) → design of
maturity level improvement proposals.]
The following research methods were used in preparing the research:
1. Studying literature sources such as the COBIT framework, the audit stages, and the system
procedures at PT. Best Stamp Indonesia.
2. Collecting data and internal company documents such as the vision, mission, goals,
objectives, organizational structure, IT architecture and development plan, including IT
management policies.
3. Determining the research method, using a case study.
4. Designing the questionnaire and selecting the respondents.
The recapitulation of the data shows that the DCO scores are moderate, or adequate, with an
average value of 2.28. Given that the best value in this assessment is 3.00 according to the
COBIT framework, it can be concluded that the performance level of information technology is
approaching a good position. These conditions must be maintained and, where possible, improved,
so that the expectations of all stakeholders can be met.
In this case, the IT planning process is reasonably sound and ensures that proper planning can be
implemented. However, authority over executing the process is given to individual managers, and
there is no procedure for checking the process; for future reference, a standard procedure should
be established for checking the processes in the IT strategic plan.
Table 2 shows the IT maturity values and levels. In this case the value and the level of IT
maturity differ: the index value, expressed as a fraction, describes the condition as a process,
a step toward maturity, while the maturity level is an integer giving the absolute measure of the
existing IT maturity.
Overall, the IT maturity at PT. Best Stamp Indonesia lies between level 3 and level 4. This
condition indicates that the company has defined standard procedures for all activities associated
with the web data management system, but that more supervision is still needed in carrying out the
procedures to avoid deviations.
Figure 3. Mapping of the company's IS position for each IS process against the maturity level
Table 3. Information system (IS) audit recommendations for PT. Best Stamp Indonesia
No. Process Recommendation
1 Planning and Organizing
1.1 Setting a Strategic IT Plan
1. The IT strategic plan should be planned and documented.
2. IT strategic planning should take into account all aspects of the institution's requirements.
3. Institutional leaders should be directly involved in the IT strategic planning process.
4. IT strategic planning should be monitored continuously by the head of the institution.
1.2 Establish Information Architecture
1. Information architecture planning should be well planned and documented.
2. Information architecture development should use appropriate techniques and methods.
3. A data dictionary should be established and documented.
1.3 Establish Technology Direction
1. Planning needs to consider the direction of technology trends or tendencies in technology.
2. The impact of the use of new technologies should be anticipated.
3. Guidance should be given to staff on plans that will use the new technology.
4. Maintenance needs to be anticipated for any adoption of new technology.
5. Documentation should be made regarding the institution's technology direction.
1.4 Establish IT Processes, Organization and their Relationships
1. There needs to be a clear distribution of tasks and responsibilities for the Web System.
2. The Web Systems department needs to be involved in the decision-making process.
3. Briefings should be held to raise awareness among all staff of the importance of internal
controls, security and discipline.
4. The Web Systems department needs to establish relationships with external parties to
facilitate its duties and responsibilities.
5.1. Conclusion
Based on the analysis and evaluation discussed previously, the following conclusions can be drawn:
1. The implementation of information technology has been adapted to the company's strategic
plan, which provides direction and guidance for applying information technology at PT. Best
Stamp Indonesia.
2. Overall, the seventeen information technology processes at PT. Best Stamp Indonesia are at
the defined level.
3. There are still some weaknesses in the company's IT processes, such as the lack of an
adequate Help Desk system.
4. Monitoring and evaluation of IT performance has not been done optimally; this is shown by the
fact that an information systems audit has never been performed on the overall information
systems running at PT. Best Stamp Indonesia.
5.2. Recommendation
Based on the research that has been done, the authors offer suggestions that can later be used as
a basis for further research. The suggestions concern inadequate information technology security
measures; one particular concern of the authors is the absence of an adequate disaster management
system.
In terms of internal controls, management is aware of the need to improve the company's internal
control conditions, but improvement has not been carried out optimally. The authors suggest paying
more attention to internal control management so that the company can later create better
information technology governance.
References
Abstract: Critical components made from metal with high temperature and mechanical resistance are
commonly used in power plants and refineries. A number of tubes made from ferritic carbon steel
SA-213 T22 are used in boilers to generate steam in power plants. Tube failure is commonly caused
by the creep phenomenon. To understand the creep behaviour of the material, it can be simulated
using an accelerated creep test in the laboratory. In this research, a model of creep behaviour
has been formulated for two different creep stages. Stage I is the region where the material,
given a sudden load, experiences strain hardening, followed by a balance between strain hardening
and softening due to high-temperature effects. Stage II is the region where this equilibrium is
broken because of the dominant effect of the metal temperature. The first step is
formulating the stage I creep model, which describes the strain hardening mechanism of the metal;
its equation is dε/dt ~ (k − ε) for 0 < t < t1, where t1 is the transition time. The stage II
creep model describes the failure mechanism of the material due to the effect of temperature,
which occurs very quickly; its equation is dε/dt ~ ε for t1 < t < t2. At time t = t1, the
functions in equations (3) and (4)
should be continuous and differentiable. The coefficients A, B, C, α and β are obtained by fitting
equations (3) and (4) to the creep curves from the accelerated laboratory creep test. The solution
of the stage I creep model is an exponential function that converges at a certain time (t1), while
the stage II solution is a positive exponential function. The transition time t1 physically
describes the broken balance between strain hardening and softening and is called the terminate
creep time. The higher the temperature, the shorter the terminate creep time; the higher the
stress, the shorter the terminate time. The coefficients A, B, C, α and β in the solution
functions of the creep models are unique, depending on the given temperature and stress.
Keywords: Curve fitting, accelerated creep, creep curve, transition time, creep stage
1. Introduction
Critical components made from metal with high temperature and mechanical resistance are commonly
used in power plants and refineries. An important characteristic that should be considered is
thermal resistance: if a metal is exposed to high temperature and pressure over the long term, it
undergoes plastic deformation called creep (Viswanathan, 1995). A number of tubes made from
ferritic carbon steel SA-213 T22 are used in boilers to generate steam in power plants. The creep
behaviour of the material can be simulated using an accelerated creep test in the laboratory: a
constant stress is applied to the material at a certain temperature for a long period of time,
and the elongation, or strain, of the material is measured as a function of time.
In this research, a model of creep behaviour has been formulated for two different creep stages.
Stage I is the region where the material, given a sudden load, experiences strain hardening,
followed by a balance between strain hardening and softening due to high-temperature effects.
Stage II is the region where this equilibrium is broken because of the dominant effect of the
metal temperature (Fujio Abe, 2008; Rusinko, 2011). In this stage the metal physically experiences
crack initiation; the cracks then propagate rapidly, eventually causing fracture.
The mathematical model formulated is tested by fitting it to data from accelerated laboratory
creep tests, to obtain a fitted curve function that matches the real conditions.
2. Methodology
The first step is formulating a creep model stage I that describes the strain hardening mechanism of
metal
dε
dt
~ k − ε at 0 < t < t1 (1)
dε
with is strain rate, k is a constant and ε instantaneous strain, t1 is the time when the equilibrium
dt
between strain hardening and softening is broken.
In this stage the strain rate is proportional to the part of the metal that has not yet experienced strain. The
stage II creep model describes the failure mechanism of the material due to temperature effects, which
happens very quickly:

dε/dt ~ ε   at t1 < t < t2   (2)
with t2 the time of fracture.
In this stage the strain rate is directly proportional to the strain, and it increases rapidly over time.
The solution of the stage I creep model is equation (3), which describes the strain as a function of time, ε(t).
At time t = t1 the functions in equations (3) and (4) should be continuous and differentiable.
The coefficients A, B, C, α, and β are obtained by fitting equations (3) and (4) to the creep curves from
the accelerated creep test in the laboratory.
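Equations (3) and (4) are not reproduced in this excerpt, but their general form follows directly from integrating models (1) and (2); as a sketch, with α and β denoting the proportionality constants, consistent with the fitted forms in equations (5) and (10)-(12):

```latex
% Stage I: d\varepsilon/dt = \alpha(k - \varepsilon) integrates to a
% saturating exponential, with A = k and B = k - \varepsilon_0.
% Stage II: d\varepsilon/dt = \beta\varepsilon integrates to a growing exponential.
\varepsilon(t) =
\begin{cases}
  A - B\,e^{-\alpha t}, & 0 < t < t_1 \quad \text{(stage I, eq.\ 3)}\\[4pt]
  C\,e^{\beta t},       & t_1 < t < t_2 \quad \text{(stage II, eq.\ 4)}
\end{cases}
```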
The second step is the accelerated creep test in the laboratory. The material is taken from a steam distribution
pipe of material SA-213 T22 in a power plant; the tube is then cut in the longitudinal direction as
shown in Figure 1.
In the next step, the accelerated creep test is carried out at B2TKS PUSPIPTEK Serpong, as shown in Figure 2.
Figure 2. (a) Creep machine; (b) schematic diagram of the creep test (J. Rösler, 2007)
The first test is done at a constant stress of 190 MPa, while the temperature is adjusted to the operating
conditions, about 550 °C to 580 °C; the results are shown in Figure 4. The second test is done at constant
temperature while the applied stress ranges from about 152 MPa to 240 MPa. The applied stress is higher than
the operational stress to accelerate creep, and the test is run until the samples have fractured, as shown in
Figure 3.
Figure 3. Photos of specimens before and after accelerated creep testing for constant stress
The creep test is done in two ways: constant stress and constant temperature (Marc Andre, 2009).
In Figure 4, the points in the graph are obtained from strain measurements as a function of time from the
accelerated creep test, while the continuous line is the curve fitted to those points using the creep model
functions in equations (3) and (4).
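As an illustration of this fitting step (a sketch, assuming SciPy is available; the strain samples below are synthetic placeholders, not the laboratory data), the two branches can be fitted with `scipy.optimize.curve_fit`:

```python
import numpy as np
from scipy.optimize import curve_fit

def stage1(t, A, B, alpha):
    """Stage I creep model: strain saturates as hardening balances softening."""
    return A - B * np.exp(-alpha * t)

def stage2(t, C, beta):
    """Stage II creep model: strain grows exponentially until fracture."""
    return C * np.exp(beta * t)

# Synthetic strain measurements (percent); placeholders for the creep-test data.
t = np.linspace(0.1, 6.0, 30)
eps = stage1(t, 2.0, 1.9, 0.7) + np.random.default_rng(0).normal(0, 0.01, t.size)

# Fit the stage I branch; popt holds the recovered A, B, alpha.
popt, _ = curve_fit(stage1, t, eps, p0=(1.0, 1.0, 1.0))
A, B, alpha = popt
print(f"A={A:.3f}  B={B:.3f}  alpha={alpha:.3f}")
```

The stage II coefficients C and β would be fitted the same way on the samples past the transition time.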
Figure 4. Curve fitting for creep behavior, creep strain versus time with constant stress 190 MPa
for the four specimens.
For T = 550 °C and tr = 10.59 hours with dε/dt = 0.20% per hour:

ε(t) = 2.01228 − 1.86667 exp(−0.697207t),  0 < t < 6.6 hours   (5)

For T = 560 °C and tr = 5.9 hours with dε/dt = 0.36% per hour.
For T = 580 °C and tr = 1.95 hours with dε/dt = 6.12% per hour.
Each creep curve has a different transition time: t1, t2, and t3. The transition time is the time
when the balance of strain hardening and softening is broken; the material then begins cracking and
finally fractures. For T = 580 °C the transition time is 1.75 hours, for T = 570 °C it is 1.8 hours, and for
T = 560 °C it is 4.6 hours. For T = 550 °C there is no transition time, which means the material will not
fracture for a long period of time. So the higher the temperature, the smaller the transition time, meaning
the remaining life of the material is shorter.
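One way to locate a transition time numerically from the fitted branches (a sketch, not the procedure the paper used, assuming SciPy's `brentq`) is to find where the stage I and stage II expressions intersect. The coefficients below are taken from equation (11), the σ = 229.25 MPa specimen:

```python
import numpy as np
from scipy.optimize import brentq

# Fitted branches for the sigma = 229.25 MPa specimen (equation 11).
stage1 = lambda t: 63.2105 - 63.0704 * np.exp(-0.0269024 * t)
stage2 = lambda t: 0.7606 * np.exp(0.781321 * t)

# The transition time is where the branches cross; the bracket [1.0, 3.0]
# is chosen by inspecting the curves.
t1 = brentq(lambda t: stage1(t) - stage2(t), 1.0, 3.0)
print(f"t1 = {t1:.2f} hours")  # near the reported 1.75 hours
```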
Figure 5. Curve fitting for creep behavior, creep strain versus time with constant temperature
580 °C for five specimens.
For σ = 152.51 MPa and tr = 31.8 hours with dε/dt = 0.39% per hour; from the figure above, the curve
does not experience stage II creep.

For σ = 228.17 MPa and tr = 4.3 hours with dε/dt = 0.66% per hour:

ε(t) = { 40.4643 − 40.5169 exp(−0.0137855t),  0.68 < t < 1.5 hours
       { 0.16345 exp(0.85979t),               1.5 ≤ t < 4.3 hours     (10)

For σ = 229.25 MPa and tr = 3.2 hours with dε/dt = 1.6% per hour:

ε(t) = { 63.2105 − 63.0704 exp(−0.0269024t),  0.06 < t ≤ 1.75 hours
       { 0.7606 exp(0.781321t),               1.75 < t < 3.3 hours    (11)

For σ = 234.05 MPa and tr = 0.48 hours with dε/dt = 13.3% per hour:

ε(t) = { 4.5344 − 4.34028 exp(−3.18695t),  0.1 < t < 0.2 hours
       { 0.559826 exp(5.5517t),            0.2 ≤ t < 0.5 hours        (12)

For σ = 239.94 MPa and tr = 0.3 hours, dε/dt increases rapidly because the stress is very high.
In Figure 5 each creep curve consists of two segments separated by a transition time from creep stage I to
stage II, expressed by t1, t2, t3, and t4. For the curve with σ = 152.51 MPa there is no
transition time because the stress is very low; the material will not fracture for a long period. For σ =
228.17 MPa the transition time is 1.5 hours, for σ = 229.25 MPa it is 1.75 hours, for σ = 234.05 MPa it is 0.2 hours,
and for σ = 239.94 MPa it is 0.15 hours. The higher the stress, the shorter the transition time, and the shorter the
remaining life. From the curve fitting, each curve has its own constants A, B, C, α, and β, as shown in
equations (5) to (13).
4. Conclusion
The mathematical model for stage I creep can be expressed by the differential equation dε/dt ~ k − ε at 0 < t < t1,
and for stage II by dε/dt ~ ε at t1 < t < t2. The solution of the stage I creep model is an exponential function that
converges at a certain time (t1), and the stage II solution is a positive exponential function. Here t1 is the transition
time; physically it describes the broken balance between strain hardening and softening, and it is called the terminate
creep time. The higher the temperature, the shorter the terminate creep time; the higher the stress, the shorter the
terminate time. The coefficients A, B, C, α, and β in the solution functions of the creep models are unique, depending
on the temperature and stress applied. The functions can be obtained by curve fitting.
Acknowledgements
We would like to thank Mr. Ilham Hatta from B2TKS BPPT Serpong, who helped us in collecting data
and testing materials during the accelerated creep tests, and Mr. Sari Purwono from the engineering division
of Indonesian Power Suralaya for providing the sample tested during the experiment.
References
Abstract: In this paper, the growing effects in metamorphic animation of plant-like fractals are
presented. Metamorphic animation is always interesting, especially animation
involving objects in nature that can be represented by fractals. Through the inverse
problem process, objects in nature can be encoded into IFS fractal form by means of the collage
theorem and self-affine functions. Multidirectional, bidirectional, and unidirectional growing
effects in metamorphic animation of plant-like fractals can be simulated based on a family of
transitional IFS codes between the source and target objects, rendered by an IFS rendering
algorithm.
Keywords: Fractal, metamorphic animation, growing effects, collage theorem, transitional IFS
code, self-affine function
1. Introduction
This paper has six sections. The first and last sections are the introduction and conclusion;
between them are four other sections: related works, transitional IFS code,
metamorphic animation, and simulation. This introductory section begins with basic
terminology such as fractal geometry, self-affine functions, IFS codes, and IFS rendering algorithms in conjunction
with metamorphic animation in fractal form.
The term fractal was first coined by Mandelbrot, from the Latin word fractus, meaning
fractured or broken (Mandelbrot, 1982). One way to generate a fractal object is the iterated
function system (IFS), first introduced by Barnsley on the mathematical foundation of Hutchinson's idea
(Barnsley, 1993 and Hutchinson, 1979). Another way to generate a fractal is the
Lindenmayer (L) system, first introduced by Lindenmayer and well suited to generating
tree-like objects (Lindenmayer et al., 1992). Fractal geometry, as a superset of Euclidean
geometry, allows dimension to range continuously over fractional numbers rather than only the
discrete integers of Euclidean geometry.
1.2 Self-affine
A self-affine function is a special case of an affine transformation function for fractal
objects that have the self-similarity property. Self-similarity means
that parts of an object can represent the object as a whole at a smaller scale and in different orientations.
The 2D self-affine function maps a point (x′, y′), the next position of a point in the object, from
the previous one (x, y) by a 2 × 2 matrix with four coefficients a, b, c, and d, plus a vector
with two coefficients e and f, so there are six coefficients in total, as described in equation (1)
below.
[x′; y′] = W([x; y]) = [a b; c d] [x; y] + [e; f]   (1)
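In code, one self-affine map is just six coefficients applied to a point; a minimal sketch of equation (1) (the coefficients here are illustrative, not taken from the IFS code tables):

```python
def apply_map(coeffs, point):
    """Apply one 2D self-affine map: W(x, y) = (a*x + b*y + e, c*x + d*y + f)."""
    a, b, c, d, e, f = coeffs
    x, y = point
    return (a * x + b * y + e, c * x + d * y + f)

# Illustrative coefficients: a contraction by 1/2 plus a translation.
w = (0.5, 0.0, 0.0, 0.5, 0.25, 0.25)
print(apply_map(w, (1.0, 1.0)))  # -> (0.75, 0.75)
```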
Fractal objects in iterated function system (IFS) form are represented by an IFS code set, which is actually
a collection of self-affine function coefficients. Typically, a 2D object in IFS fractal form is encoded
as one or more collections of six coefficients: a, b, c, d, e, and f. One IFS code represents one part
of the fractal object that is similar to the object as a whole, as mentioned in the previous
section. The coefficients a, b, c, and d represent and determine the shape of the fractal object in the x
and y directions, while the other two coefficients, e and f, represent and determine the position and
scale of the object (Barnsley, 1993). A typical example IFS code, with probability factors p, is
displayed in table-1 and the corresponding figure in figure-1 below. The first row of functions represents
the center part, the second and third rows represent the right and left side parts, and the last two rows represent
the trunk (right and left as a pair in opposite orientations
to fill the void area complementarily).
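The IFS rendering algorithm referred to above can be sketched as the classic random-iteration (chaos game) procedure, in which one map is picked at random at each step, weighted by its probability factor p. The three-map code below is an illustrative Sierpinski-style example, not one of the plant codes from the tables:

```python
import random

def render_ifs(maps, probs, n_points=10_000, seed=42):
    """Random-iteration (chaos game) rendering of an IFS.

    maps  : list of (a, b, c, d, e, f) self-affine coefficients
    probs : selection weight of each map
    """
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    points = []
    for i in range(n_points):
        a, b, c, d, e, f = rng.choices(maps, weights=probs)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        if i > 20:              # discard the initial transient points
            points.append((x, y))
    return points

# Illustrative three-map IFS (a Sierpinski-like triangle in the unit square).
maps = [(0.5, 0.0, 0.0, 0.5, 0.0, 0.0),
        (0.5, 0.0, 0.0, 0.5, 0.5, 0.0),
        (0.5, 0.0, 0.0, 0.5, 0.25, 0.5)]
pts = render_ifs(maps, [1/3, 1/3, 1/3])
```

Plotting `pts` as a scatter of pixels draws the attractor; the plant-like codes in tables 2-4 would be rendered the same way.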
2. Related Works
2.1 Algorithm
In their paper, Chen et al. presented a new fractal-based algorithm for metamorphic animation.
The conceptual roots of fractals can be traced to attempts to measure the size of objects for which
traditional definitions based on Euclidean geometry or calculus failed. The objective of their
study was to design a fractal-based algorithm and produce a metamorphic animation based on the fractal
idea. The main method is to weight two IFS codes, between the start and target objects, by an
interpolation function. Their study shows that the fractal idea can be effectively applied to
metamorphic animation. The main feature of the algorithm is that it can deal with fractal objects that
conventional algorithms cannot. In application, their algorithm has many practical values: it can
improve the efficiency of animation production and at the same time greatly reduce its cost (Chen et al.,
2006).
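The weighting of two IFS codes that Chen et al. describe can be sketched as a coefficient-wise linear blend (a minimal sketch; their actual interpolation function may differ). The source and target rows below are the first rows of the grass-like code sets in Table-2:

```python
def interpolate_ifs(source, target, s):
    """Blend two IFS code sets coefficient by coefficient, s in [0, 1].

    s = 0 gives the source code, s = 1 the target; intermediate s values
    produce the transitional IFS codes used for the animation frames.
    """
    return [tuple((1 - s) * a + s * b for a, b in zip(row_s, row_t))
            for row_s, row_t in zip(source, target)]

# First rows of the grass-like source and target code sets (Table-2).
src = [(0.03, 0.06, -0.06, -0.42, -0.033, 0.40)]
tgt = [(0.04, 0.06, -0.06, -0.50, -0.033, 0.40)]
mid = interpolate_ifs(src, tgt, 0.5)  # halfway transitional IFS code
```

Rendering `interpolate_ifs(src, tgt, s)` for a sequence of s values yields the transitional frames; a partial interpolation would apply the blend to only some coefficients.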
In their research, Zhang et al. proposed the general formula and the inverse algorithm for the multi-
dimensional piece-wise self-affine fractal interpolation model, presented in their paper to provide
the theoretical basis for the multi-dimensional piece-wise hidden-variable fractal model (Zhang et al.,
2007).
In his research, Chang proposed a hierarchical fixed-point-searching algorithm that can determine the
coarse shape, the original coordinates, and the scales of 2-D fractal sets directly from their IFS codes.
The IFS codes are then modified to generate new 2-D fractal sets that can be arbitrary affine
transformations of the original fractal sets. The transformations for 2-D fractal sets include translation,
scaling, shearing, dilation/contraction, rotation, and reflection. Composition of these
transformations can also be accomplished through matrix multiplication, represented by
a single matrix, and synthesized into a complicated image frame with elaborate design (Chang,
2009).
2.3 Interpolation
In his thesis, Scealy studied primarily V-variable fractals, as recently developed by Barnsley,
Hutchinson, and Stenflo. He extended fractal interpolation functions to the random (and in particular,
the V-variable) setting, and calculated the box-counting dimension of a particular class of V-variable
fractal interpolation functions. The extension of fractal interpolation functions to the V-variable setting
yields a class of random fractal interpolation functions for which the box-counting dimensions may be
approximated computationally, and may be computed exactly for the {V, k}-variable subclass. In
addition, he presented a sample application of V-variable fractal interpolation functions to the generation
of random families of curves, adapting existing curve-generation schemes based upon iteration
of affine transformations (Scealy, 2009).
4. Metamorphic Animation
In this paper, there are two kinds of metamorphic animation using the random iteration algorithm:
with partial interpolation of the IFS code and with whole interpolation of the IFS code between the
coefficients of the source and target functions.
Table-2. A pair of IFS code sets of grass-like fractals (a. Source; b. Target): 5 functions
a. IFS code set of grass-like fractal (source) b. IFS code set of grass-like fractal (target)
a b c d e f p a b c d e f p
0.03 0.06 -0.06 -0.42 -0.033 0.40 10.0 0.04 0.06 -0.06 -0.50 -0.033 0.40 10.0
0.10 -0.15 0.04 0.37 -0.015 0.20 20.0 0.24 -0.15 0.09 0.37 -0.015 0.20 20.0
0.08 -0.08 0.03 0.19 -0.025 0.30 05.0 0.18 -0.08 0.07 0.19 -0.025 0.30 05.0
0.15 -0.19 0.06 0.46 -0.031 0.40 25.0 0.36 -0.19 0.14 0.46 -0.031 0.35 25.0
0.23 -0.27 0.09 0.65 -0.035 0.40 40.0 0.54 -0.27 0.22 0.65 -0.033 0.40 40.0
Table-3. A pair of IFS code sets of bushes-like fractals (a. Source; b. Target): 4 functions
a. IFS code set of bushes-like fractal (source) b. IFS code set of bushes-like fractal (target)
a b c d e f p a b c d e f p
0.01 0.06 -0.004 0.17 0.00 0.00 5.0 0.01 0.06 -0.004 0.17 0.00 0.00 5.0
0.01 -0.09 0.004 0.26 0.00 0.00 5.0 0.01 -0.09 0.004 0.26 0.00 0.00 5.0
0.50 -0.30 0.208 0.79 -0.09 0.22 45.0 0.60 -0.30 0.238 0.79 -0.09 0.22 45.0
0.50 -0.30 -0.208 0.79 0.04 0.12 45.0 0.60 0.30 -0.238 0.79 0.04 0.12 45.0
Table-4. A pair of IFS code sets of tree-like fractals (a. Source; b. Target): 5 functions
a. IFS code set of tree-like fractal (source) b. IFS code set of tree-like fractal (target)
a b c d e f p a b c d e f p
-0.64 0.00 0.00 0.50 0.025 0.52 20.0 -0.64 0.00 0.00 0.50 0.025 0.57 20.0
0.19 -0.49 0.34 0.44 -0.007 0.92 20.0 0.21 -0.54 0.38 0.49 0.047 0.97 20.0
0.46 0.42 -0.25 0.36 -0.003 0.94 20.0 0.51 0.45 -0.28 0.40 -0.049 1.00 20.0
-0.06 -0.07 0.45 -0.11 0.110 0.62 20.0 -0.06 -0.07 0.45 -0.11 0.117 0.73 20.0
-0.03 0.07 -0.47 0.02 -0.098 0.48 20.0 -0.03 0.07 -0.47 0.02 -0.105 0.58 20.0
5. Simulation
In this paper, there are three kinds of simulation with different types of resulting growing effect:
the unidirectional, the bidirectional, and the multidirectional growing effects.
For the simulation, three different plant-like fractal objects are used. For the first simulation, which
shows the unidirectional growing effect, a grass-like fractal object is used; the transitional images
resulting from the metamorphic animation of that object are displayed in figure-2 below. For the second simulation,
which shows the bidirectional growing effect, a bushes-like fractal object is used; the transitional
images are displayed in figure-3 below. Finally, for the
third simulation, which shows the multidirectional growing effect, a tree-like fractal object is used; the
transitional images resulting from the metamorphic animation of that object are displayed in figure-4 below.
5.1 Unidirectional Growing Effect
The eight images below show the top object moving to the lower right (south-east
direction) as the object grows.
6. Conclusion
From the metamorphic animation and simulation in the sections above, we can conclude that there are
three kinds of growing effect resulting from metamorphic animations of plant-like fractals, depending
on the kind of interpolation chosen: unidirectional, bidirectional, and multidirectional growing
effects.
Acknowledgements
We would like to thank the Organizing Committee of IC-MCS 2013, which informed and invited
researchers and lecturers in the 'Aptikom' society forum through an official invitation letter.
References
Mandelbrot, Benoit B. 1982. The Fractal Geometry of Nature. W.H. Freeman and Company.
Hutchinson, John E. 1979. Fractals and Self-similarity. Indiana University Mathematics Journal 30.
http://gan.anu.edu.au/~john/Assets/Research%20Papers/fractals_self-similarity.pdf [visited 2013-07-31]
Barnsley, Michael F. 1993. Fractals Everywhere. 2nd edition. Academic Press / Morgan Kaufmann.
Lindenmayer, A., Fracchia, F.D., Hanan, J., Krithivasan, K., Prusinkiewicz, P. (1992). Lindenmayer Systems,
Fractals, and Plants. Springer Verlag.
http://www.booksrating.com/Lindenmayer-Systems-Fractals-and-Plants/p149012/ [visited 2013-07-31]
Chen, Chuan-bo, Zheng, Yun-ping, Sarem, M. (2006). A Fractal-based Algorithm for the Metamorphic
Animation. Information and Communication Technologies, ICTTA '06, 2nd, Volume 2, DOI:
10.1109/ICTTA.2006.1684885, pages 2957-2962.
Nikiel, S. (2007). Iterated Function Systems for Real-Time Image Synthesis. Springer-Verlag London
Limited.
Zhang, Tong, Zhuang, Zhuo. (2007). Multi-Dimensional Piece-Wise Self-Affine Fractal Interpolation
Model. Tsinghua Science and Technology, ISSN 1007-0214, Volume 12, Number 3, June 2007,
pp. 244-251. DOI: 10.1016/S1007-0214(07)70036-6
Kunze, H.E., La Torre, D., Vrscay, E.R. (2008). From Iterated Function Systems to Iterated
Multifunction Systems. Communications on Applied Nonlinear Analysis 15 (4), 1-14.
Chang, Hsuan T. (2009). Arbitrary Affine Transformations and Their Composition Effects for Two-
Dimensional Fractal Sets. Photonics and Information Laboratory, Department of Electrical
Engineering, National Yunlin University of Science and Technology, Taiwan.
http://teacher.yuntech.edu.tw/htchang/geof.pdf [visited 2013-07-29]
Scealy, Robert. (2009). V-Variable Fractals and Interpolation. Doctoral thesis, Australian National
University. http://www.superfractals.com/pdfs/scealyopt.pdf [visited 2013-07-29]
Zhuang, Yi-xin, Xiong, Yue-shan, Liu, Fa-yao. (2011). IFS Fractal Morphing based on Coarse Convex-
hull. IEEE, 978-1-4244-8625-0/11.
Darmanto, T., Suwardi, I.S., Munir, R. (2012). Cyclical Metamorphic Animation of Fractal Images
based on a Family of Multi-transitional IFS Code Approach. Control, Systems & Industrial
Informatics (ICCSII), IEEE Conference on, DOI: 10.1109/CCSII.2012.6470506, pages 231-234.
Abstract: Crime is an unlawful act that happens in a particular location. The linkage between
the type of crime and the socio-economic and geographic conditions of a region has been the subject of
much research; these conditions may give rise to certain types of crime. Because several types of crime
occur in locations where these conditions are the same, this study proposes a method
to observe the relationship between the types of crime in a particular location and the co-location
patterns of crime variation among regions. The proposed method uses aggregate data for each location
under the jurisdiction of a police station; the aggregate data are converted into non-numeric data, and
all types of crime in one location are arranged in a sequence. Clustering is then performed on the sequences
formed. The experiments were performed using crime data from the Jakarta Metro police area.
Co-locations were found for one type of crime against other types of crime, as well as similarities of
various types of crime across certain regions.
Keywords: Spatial analysis, co-location, crime dataset
1. Introduction
Crime is understood as an interaction between the victim, the offender, and the neighborhood of the crime
[Wang et al., 2013]. In practice, this concept can be measured using a variety of socio-economic
circumstances and the chance of occurrence of the crime, such as population density, type of location (market,
mall, or village), and the level of regional security [Anselin 2000, Ratcliffe 2010].
Criminal analysis is the effort of parsing (breaking up) violations of the law into parts, to
extract their basic properties and report the findings. The goal is to obtain useful information from
large amounts of data and deliver it to the parties concerned with crime prevention and apprehending
criminals [Osborne, 2003]. Bruce's analysis links the definition of a crime to location, time, certain
characteristics, and commonalities with other crimes [Bruce, 2004].
It can thus be understood that a criminal offense is related to location, time, geographic
conditions, economic conditions, and other crimes. Recently, several statistical methods for spatial data
[Anselin et al. 2000, Ahmadi 2003, Ratcliffe 2010, Patrick et al. 2011] and some data mining techniques
have been proposed for crime data analysis [Philip 2012, Wang et al. 2013], providing
methods that overcome some of the weaknesses of traditional methods. However, these methods still
contain some weaknesses, namely the use of aggregate data over all types of crime and the involvement
of other, non-crime data. Aggregating the types of crime reduces the accuracy
of the analysis results in their spatial elements, because several types of low-level crimes
are summed with high-level crimes (or vice versa), which lowers the spatial correlation compared with
analyzing each type separately [Andresen et al. 2012]. On the other hand, the availability of crime
data in a developing country such as Indonesia is not as complete as in developed countries like
America. The data recorded at the Jakarta Police are in the form of aggregates of each type of crime
at the police-resort (precinct) level. More detailed data, such as crime locations, are not recorded in
the reports. As a consequence, the data cannot be analyzed as spatial point patterns, but only as
spatial region (area/polygon) data.
To overcome these two shortcomings, in this study the researchers devised a spatial crime data analysis
method based on the type of crime per police jurisdiction at the resort level. The advantage of this
method is that the data used are only the crime-type data by region, so it does not require other data
related to an area, e.g. economic conditions or whether the area is residential, industrial, or commercial
(note: such data must be obtained from different agencies). Another advantage is that the results obtained
give the level of crime in an area (to determine hot-spot areas) and the pattern of crimes in an
area, forming a pattern for the areas that share the same pattern.
This method is based on methods for clustering data sequences [Dorohanceanu 2000,
Okkunen 1999]. Data in the form of crime types in an area are converted into non-numeric form and then
arranged in a sequence. The patterns obtained are the patterns whose sequence similarity is above a
specified threshold.
2. Related Work
Reported studies on criminal data started with the French ecologists Guerry and
Quetelet, who explained differences in the level of crime in terms of social conditions that vary by
locality. This was the beginning of studies using spatially defined population
aggregates as the unit of analysis [Anselin et al. 2000, Ratcliffe 2010]. The development of computer technology then
drove the development of spatial data analysis methods, including for criminal data, supported by
Geographic Information Systems.
Two approaches to spatial crime data analysis are the statistical approach and the data mining
approach. Statistical methods are usually based on spatial autocorrelation, using Moran's I and
Geary's c as global measures, while for the value of a particular location the local autocorrelation
measures LISA, Gi, and Gi* are used [Anselin et al. 2000, Chakravorty 1995, Ahmadi 2003, Zhang 2007,
Ratcliffe 2010].
3. Methodology
3.1 Model Data
The data model is region data whose contents are the aggregate of each kind of crime
in an area. The set of crime-type data in a particular location forms a single entity, which in this
study is called a particle. A particle is a sequence taken from the data for an observation area that consists
of several types of crime.
3.2 Particle Formation Patterns
A particle pattern is the arrangement of elements, i.e. the level of each crime type in a location.
Particle patterns are grouped according to the similarity of their constituent elements. The particle
similarity measure is:
- P is a particle with members {p1, p2, ..., p8}
- For n particles, there are P1, P2, ..., Pn
- The similarity of Pi and Pj, for i, j = 1, 2, ..., n, is:
This algorithm is implemented using simulated data, arranged so that the data are presented in the model
with 9 elements and values in non-numeric form.
Implementation steps are as follows:
1. Prepare spatial data in non-numeric form.
2. Define the particles; each location forms one particle.
Some particles obtained from the example data above are as follows:
These results consist of {subsequence, number of identical characters, starting character, number of
particles, [particle numbers]}.

|P1 ∩ P2| / |P1 ∪ P2| ≥ Φ   (2)

P1 and P2 are the particles that make up the 1st and 2nd candidates. The threshold we use is 10%.

|P1 ∩ P2| ≥ Φ   (3)

P1 and P2 are the particles that make up the 1st and 2nd candidates. The threshold can be determined in
accordance with the conditions of the data and the desired clusters. In the example described,
equation 2 is used.
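A sketch of the similarity test in equation 2, assuming the intersection and union are taken over (position, symbol) pairs, which is one possible reading of the set operations. The example patterns are taken from the converted Jakarta table in the experiment section, where MENTENG and TN. ABANG come out identical:

```python
def similarity(p1, p2):
    """Jaccard-style particle similarity (equation 2).

    Each particle is a sequence of crime-level symbols; the intersection
    and union are taken over (position, symbol) pairs, so identical
    sequences score 1.0 and disjoint ones score 0.0.
    """
    s1 = set(enumerate(p1))
    s2 = set(enumerate(p2))
    return len(s1 & s2) / len(s1 | s2)

menteng  = "acbbcbcba"   # converted pattern for MENTENG
tn_abang = "acbbcbcba"   # converted pattern for TN. ABANG (identical)
senen    = "0bbbd0dbc"   # converted pattern for SENEN
print(similarity(menteng, tn_abang))  # identical patterns -> 1.0
```

Two particles would be clustered together when this value is at or above the threshold Φ (10% in the example).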
4. Experiment
4.1 Data
The criminal data in this study are data on the number of crimes that occur in an area. There are
two approaches to defining the area of a crime: the area where the crime occurred, and the
area of the police station where the crime was reported. The Jakarta crime data use the second
model, where the crime is assigned to the police station where the incident was reported.
A crime may be reported to an Office of Police Resort (POLRES), an Office of
the Police Sector (POLSEK), or the Regional Police Office (POLDA). The number of POLRES under
POLDA METRO JAYA is 11, with 110 POLSEK and 2 Port Security Implementation Units
(KPPP): the Sukarno Hatta Airport Police Station and the Port of Tanjung Priok. The Jakarta Police work
area covers the administrative areas of DKI Jakarta, Tangerang Regency, Tangerang City, South Tangerang, Depok,
Bekasi City, and Bekasi Regency, so the METRO JAYA Police jurisdiction is Jakarta and its surrounding areas. An example
of crime data obtained from the City Police is as follows:
Table 1. Level of Crimes by Type reported to CENTRAL JAKARTA POLRES in the
Year 2011 (Distik, POLDA METRO JAYA, 2012)
Region ANIRAT CURAT TODONG RAMPAS RAMPOK BAJAK RODA2 RODA3 RODA4 JML
I POLRES
JAKPUS
1 GAMBIR 38 117 3 11 0 0 209 0 17 395
2 SW. BESAR 54 78 8 8 1 0 158 0 10 317
3 KEMAYORAN 31 121 6 16 1 0 223 0 32 430
4 MENTENG 23 74 6 0 2 0 74 0 8 187
5 TN. ABANG 28 78 0 8 2 0 61 0 7 184
6 SENEN 13 54 2 7 0 0 102 1 9 188
7 C. PUTIH 3 97 2 1 1 0 120 1 13 238
8 JH. BARU 7 50 3 1 0 0 157 0 7 225
From simple statistical analysis, the crime rate can be seen for the areas around the
police station where the crime was reported.
Elementary statistical methods can determine the level of crime, the average incidence, etc., but
they cannot reveal the pattern of relationships between one type of crime and another, nor the
pattern of linkage to the region where the crime took place or was reported. The method developed in this
research can answer these problems.
The method developed in this study can reveal a pattern of crimes for each area and point out
whether areas with the same or similar patterns exist. Results using simple statistics can be seen in
Figures 12 and 13. Figure 12 illustrates the incidence rate of crime categorized into very high, high,
medium, low, and none, whereas Figure 13 describes the incidence rate for all types of theft crimes
with the same categories as in Figure 12.
The steps of location-based clustering analysis of criminal data (spatial clustering analysis
of the data) are:
1. Conversion of the number of crimes into intervals, each interval symbolized by a letter:
a, b, ...
2. Apply the clustering steps as described previously (methodology section).
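Step 1 can be sketched as follows. The bin edges here are illustrative, since the paper does not give its exact intervals, so the letters produced will not match the paper's converted table:

```python
import string

def to_symbol(count, edges):
    """Map a crime count to '0' (zero incidents) or a letter per interval.

    edges: ascending upper bounds of the intervals for letters a, b, c, ...
    """
    if count == 0:
        return "0"
    for letter, upper in zip(string.ascii_lowercase, edges):
        if count <= upper:
            return letter
    return string.ascii_lowercase[len(edges)]  # beyond the last edge

# Illustrative edges; real ones should come from the data distribution.
edges = [10, 25, 50, 80, 120, 160, 200, 250]

# GAMBIR row from Table 1 (crime counts by type, without the total).
row = [38, 117, 3, 11, 0, 0, 209, 0, 17]
pattern = "".join(to_symbol(c, edges) for c in row)
print(pattern)
```

The resulting letter string is the particle for that location, ready for the similarity-based clustering of step 2.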
1 2 4 5 6 7 8 9 10 11 12 13
I POLRES JAKPUS
0 GAMBR 0 d c c h 0 g d a
1 SW. BESAR 0 f b d f a f b b
2 KEMAYORAN 0 d c e i a h g a
3 MENTENG a c b b c b c b a
4 TN. ABANG a c b b c b c b a
5 SENEN 0 b b b d 0 d b c
6 C. PUTIH a a b a e a d c b
7 JH. BARU 0 a a a f 0 f b a
Explanation :
Core ii {xxx xxx xxx, j, n, m [k,k,...] }
For the spatial crime data of POLDA METRO JAYA for the year 2011, 96 crime patterns were produced.
The pattern below is the pattern shared by the most areas, namely 17 areas. We use 4 of the forming
regions to see the pattern graphically.
Figure 5. Pattern number 0: clustering result for Tangerang, Batu Ceper, Jati Uwung, Tigaraksa
In the graph, it appears there is the same crime rate pattern for crime types 2, 3, 5, 6,
and 7, while the other types differ. There are 17 regions that have the same pattern, namely
Pamulang, Tangerang Kota, Batu Ceper, etc. The graph for the data before conversion is:
Figure 6. Pattern of Crime Type Level of Tangerang, Batu Ceper and Tigaraksa
The chart of crime-type patterns using the data before conversion (figure) shows a different
pattern from the graph after conversion. This is caused by the conversion method used: the
method uses the data distribution for each type of crime, divided into the intervals that were made.
Significant differences in the number of events for each type of crime cause differences in the
converted values. This is why the figure does not show the same pattern for the three areas above.
Pattern number 66:
Core 66 - {-ca-b0ba-, 1, 6, 3, [11, 21, 34]}
Pattern number 66 has 6 crime types the same, with 3 areas having this pattern.
Figure 7. Pattern number 66: Tanjung Priuk, Pasar Rebo, and Kembangan have a similar pattern
In pattern number 66 it is easier to see the differences and similarities in incidence rate for
each type of crime. Crime types 1, 4, and 9 have different incidence rates in different regions,
whereas the other types of crime are the same for all three constituent regions, namely Tanjung Priok,
Kembangan, and Pasar Rebo.
Figure 8. Crime level for the areas Tanjung Priok, Kembangan and Pasar Rebo
Looking at the graph of the source data (before conversion) for Tanjung Priok, Kembangan, and Pasar Rebo, the difference in incidence rates for crime types 1 and 4 is not clearly visible, and neither is the similarity for types 2 and 3. However, by using the data conversion method, the patterns and the pattern similarities can be found.
6. Acknowledgement
This work was funded by "Hibah Strategis Nasional", Ministry of Education, Indonesia, 2013.
Abstract: The increasing number of digital images stored in computer storage media, and the need for databases to serve users' information requirements rapidly, have become important demands today. Efficient search of large image databases in a CBIR (content-based image retrieval) system is therefore in high demand. This paper aims to optimize access time and improve the accuracy of CBIR on the WANG database of 1,000 image records at resolutions of 256 x 384 pixels and 384 x 256 pixels. The proposed solution is a system architecture that integrates query optimization and partition clustering, with the expectation of reducing image search time without reducing the accuracy of the search output. Cluster formation in this study uses the minimum and maximum PSNR (peak signal-to-noise ratio) values, obtained as image similarities from comparing the color feature extraction of each image record with base images, to form 2, 4, 8, 16, and 32 clusters as a cluster index for filtering and locating the cluster position of an image record. Meanwhile, the CBIR query application measures the similarity between the searched image and the image records using color feature extraction with color histogram measurements. The results of this model implementation show that the F-Score accuracy from the non-clustered query to the 5-cluster query in K-Means clustering with color histogram extraction increases from 0.22 to 0.23.
1. INTRODUCTION
Query optimization in computational data access is an alternative to consider when attempting to minimize the access time to database records through queries. Query operations are widely used in relational databases containing text, web, picture, multimedia, and other data. CBIR is a computer vision application concerned with digital image retrieval. 'Content' in this context means colors, shapes, textures, or other information that can be derived from the image. Several types of features are used in image retrieval, such as color, texture, and shape.
A new rank-join algorithm that did not depend on a union strategy, and that carried a proof of optimality, was proposed; it used an ordering of the input relations to produce values on the joined inputs (Ihab, 2004). The result obtained with the rank-join algorithm proved that optimality is related to the number of tuples accessed in the query joins. The statistical results of a query clustering process differed significantly from the results that could be obtained with a traditional query approach (Roussinov, 2000). A traditional query application requires users to be skilled in the query language when selecting keywords while searching for information on the web.
From the research conducted, it can be concluded that search effectiveness would be better if the information search process used adaptive search together with query clustering and summarizing. In the early 1990s, CBIR, which performs retrieval based on visual content in the form of image color compositions, began to be developed (Ghosh, 2002). Currently, retrieval systems also involve user feedback on whether or not a retrieved image is relevant (relevance feedback), which is used as a reference for modifying the retrieval process to obtain more accurate results. The query optimization proposed in this paper relates to CBIR in image databases, aiming to obtain records with a high image
content similarity level (above 90%) and a minimum access time during the image database record search. The proposal starts with a color extraction process for each image database record; a clustering of the image database records based on base images is then conducted; and the cluster of each image database record is then used for filtering based on the searched image, so that the record content search is conducted only on records that fall into the same cluster.
2. MEASUREMENT OF IMAGE QUALITY
Measurements of image quality fall into 2 (two) groups based on the HVS (Human Visual System). One group is the conventional measurement system, which uses techniques for computing SNR (Signal-to-Noise Ratio) and PSNR (Peak Signal-to-Noise Ratio) values. The other group is based on a criterion of the accuracy of changes in the information signal and of image distortions, computed through SSIM (Structural Similarity Index) and VroiWQI (Visual region of interest Weighted Quality Index) values.
The measurement of image quality can use the conventional measurement system, which applies techniques for computing SNR (Signal-to-Noise Ratio), MSE (Mean Square Error), and PSNR (Peak Signal-to-Noise Ratio) values. For an M x N original image I and its reconstruction K, the MSE is computed as:

MSE = (1 / (M·N)) Σi Σj [I(i, j) − K(i, j)]²
Meanwhile, PSNR is the ratio between the maximum possible pixel value of the reconstructed image and the mean square error (MSE). For an 8-bit image, the maximum pixel value is 255. The higher the PSNR, the better the image quality. PSNR is expressed in decibels (dB); the minimum PSNR value for an interpolation to be categorized as good is 30 dB. Mathematically, it is formulated as follows:

PSNR = 10 log10 (255² / MSE) dB
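The MSE and PSNR computations above can be sketched directly; this is a minimal Python sketch for 8-bit grayscale images represented as lists of rows.

```python
import math

def mse(original, reconstructed):
    """Mean Square Error between two equal-sized 8-bit grayscale images."""
    total, count = 0.0, 0
    for row_o, row_r in zip(original, reconstructed):
        for a, b in zip(row_o, row_r):
            total += (a - b) ** 2
            count += 1
    return total / count

def psnr(original, reconstructed):
    """PSNR in dB for 8-bit images: 10 * log10(255^2 / MSE)."""
    error = mse(original, reconstructed)
    if error == 0:
        return float("inf")  # identical images have unbounded PSNR
    return 10 * math.log10(255.0 ** 2 / error)
```

Two identical images give an infinite PSNR, while comparing an all-black image against an all-white one gives the worst case, 0 dB.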
K-means is one of the simplest unsupervised learning algorithms, in which each point is assigned to exactly one cluster. The procedure follows a simple, iterative way to classify a given data set into a certain number of clusters (assume k clusters) fixed a priori.
The determination of the K centroids for each cluster, used as initial estimates of the cluster centers, can apply several methods, namely:
1. selecting the first K objects in the sample as the mean vectors of the K initial clusters,
2. selecting K objects that are very far away from one another,
3. beginning with an experimental value of K greater than necessary, and initially spacing the cluster centers at intervals of one standard deviation in each variable, and
4. selecting K and the initial cluster configuration based on the prior explanation.
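The iterative k-means procedure (assignment and update steps repeated until the centers stop moving) can be sketched as follows. This is a minimal Python sketch on 2-D points such as (PSNRmin, PSNRmax) pairs; the random-sample seeding is a simplification, not the paper's special PSNR-based initialization.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means on 2-D points. Returns (centers, clusters)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # simplified seeding (illustrative)
    clusters = []
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        # update step: recompute each center as the mean of its cluster
        new_centers = []
        for j, cl in enumerate(clusters):
            if cl:
                new_centers.append((sum(p[0] for p in cl) / len(cl),
                                    sum(p[1] for p in cl) / len(cl)))
            else:
                new_centers.append(centers[j])  # keep an empty cluster's center
        if new_centers == centers:
            break  # converged: no center moved
        centers = new_centers
    return centers, clusters
```

On two well-separated groups of points the loop converges in a few iterations regardless of which points are sampled as seeds.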
Here i is a color in the color histogram, H[i] is the number of pixels of color i in the image, and n is the number of colors used in the color histogram.
The histogram value of each color component (x, y, z) is computed for the searched image (Hq) and for each image record (Hi); their resemblance is then calculated as a color-histogram distance using the known Histogram Intersection Technique (HIT), with the following formula:

S(Hq, Hi) = Σ(x∈X, y∈Y, z∈Z) min(Hq(x, y, z), Hi(x, y, z)) / Σ(x∈X, y∈Y, z∈Z) Hq(x, y, z)

With this formula the distance values tend to have only small differences, so the formula was developed into the following equation:

S(Hq, Hi) = Σ(x∈X, y∈Y, z∈Z) min(Hq(x, y, z), Hi(x, y, z)) / min[ Σ(x∈X, y∈Y, z∈Z) Hq(x, y, z), Σ(x∈X, y∈Y, z∈Z) Hi(x, y, z) ]
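The normalized intersection above can be written compactly; this minimal sketch assumes histograms are stored as mappings from color bin to pixel count, which is an illustrative representation.

```python
def histogram_intersection(hq, hi):
    """Normalized histogram intersection (HIT) similarity, per the second
    formula: sum of bin-wise minima divided by the smaller histogram total.
    hq, hi: mappings from color bin -> pixel count."""
    bins = set(hq) | set(hi)
    overlap = sum(min(hq.get(b, 0), hi.get(b, 0)) for b in bins)
    return overlap / min(sum(hq.values()), sum(hi.values()))
```

Two identical histograms give a similarity of 1.0; histograms with no common bins give 0.0, so the value is directly usable as a percentage-style similarity score.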
The clustering design follows a clustering algorithm that uses the minimum and maximum PSNR values as the basis for forming clusters, shown in Fig. 1. The image clustering stages were initiated with a specially designed cluster initialization, not taken from prior researchers, but based on an initial computation performed before the initialization. Before this special initialization was adopted, the authors had tried the available initialization models, but they could not be applied: during the subsequent image retrieval query stage, a considerably large number of records that should have had a similarity above 90.00% were not found.
The intended cluster initialization computes, for each record in the image database, the square root of the sum of squares of its minimum and maximum PSNR; this value is used to order the records. The distance between clusters is then determined by dividing the total number of records by the number of clusters, and finally each cluster is filled with consecutive records according to these distance boundaries.
// Compute the distance of each record based on PSNRMin and PSNRMax
while not tablex.Eof do
begin
  XPsnrMin := tablex.FieldByName('PSNRMin').AsExtended;
  XPsnrMax := tablex.FieldByName('PSNRMax').AsExtended;
  tablex.Edit;
  tablex.FieldByName('PSNRAvg').AsExtended :=
    Sqrt(XPsnrMin * XPsnrMin + XPsnrMax * XPsnrMax);
  tablex.Post;  // save the edited record
  tablex.Next;
end;
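The full initialization — order the records by the distance value above, then cut the ordered list into equal-sized blocks — can be sketched as follows. This is a minimal Python sketch; the dictionary record structure and field names are illustrative assumptions.

```python
import math

def initial_clusters(records, k):
    """Order records by sqrt(PSNRmin^2 + PSNRmax^2), then split the ordered
    list into k consecutive blocks as the initial clusters."""
    ranked = sorted(records, key=lambda r: math.hypot(r["PSNRMin"], r["PSNRMax"]))
    step = len(ranked) // k  # number of records between cluster boundaries
    clusters = [ranked[j * step:(j + 1) * step] for j in range(k - 1)]
    clusters.append(ranked[(k - 1) * step:])  # last cluster takes the remainder
    return clusters
```

Because the split follows the sorted order, records with similar PSNR magnitudes start out in the same cluster, which is the point of the ordering-based scheme.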
After the initial formation of each new cluster, each record from the image database table was assigned by first computing the distance of the image record to the nearest cluster, using the Euclidean distance dij. The distance dij was computed as the squared difference between the minimum PSNR and the jth cluster's minimum center
plus the squared difference between the maximum PSNR and the jth cluster's maximum center; the square root of this sum gives the distance sought. The center of the nearest cluster was then recomputed to obtain a new cluster-center value. This process was repeated from the beginning of the image database records until there was no further change in the formed cluster centers. After the clustering process, the next step was to run the cluster-based image retrieval query. This query begins by computing the minimum and maximum PSNR values of the base image; the image's cluster is then determined by computing the nearest distance to the records contained in the prcitra table. Once the cluster of the searched image is found, the records are filtered down to that cluster. The next steps follow the process applied to the ordinary retrieval query.
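The cluster-based retrieval flow described above — locate the query's cluster from its PSNR pair, filter the records to that cluster, then compare histograms — can be sketched as follows. All record fields, the `sim` callback, and the 90% threshold default are illustrative assumptions, not the paper's actual schema.

```python
import math

def cluster_based_query(query_psnr, query_hist, records, centers, sim, threshold=0.90):
    """Cluster-based image retrieval sketch:
    1) find the query's cluster via the nearest (PSNRmin, PSNRmax) center,
    2) filter the database records down to that cluster,
    3) keep records whose histogram similarity reaches the threshold."""
    # step 1: nearest cluster center by Euclidean distance
    j = min(range(len(centers)), key=lambda c: math.dist(query_psnr, centers[c]))
    # step 2: only records in cluster j are compared further
    candidates = [r for r in records if r["cluster"] == j]
    # step 3: score the surviving candidates against the query histogram
    hits = [(sim(query_hist, r["hist"]), r["name"]) for r in candidates]
    return sorted(((s, n) for s, n in hits if s >= threshold), reverse=True)
```

The filtering in step 2 is where the access-time saving comes from: only records sharing the query's cluster ever reach the (more expensive) histogram comparison.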
6. RESULTS
The image query testing using cluster indexes began with a clustering process over a total of 1,000 image records stored in an image database (WANG database) table that contains a blob field for the JPEG file. The clustering was applied to several cluster groups, namely 2, 4, and 8 clusters; the number of iterations for each case and the minimum and maximum PSNR values of each cluster are shown in Table 1 and Figure 2.
Fig. 2 Plot of K-Means clustering into 2, 4, and 8 clusters for the WANG database.
Table 2 shows in detail the results of the testing process using the CBIR query and the image query with cluster index, over the 1,000 records stored in the WANG database. The testing was conducted on 30 images using different search base images. From the CBIR query testing, an average F-Score of 0.22 was obtained, whereas from the image query with cluster index, average F-Scores of 0.21, 0.22, 0.22, and 0.23 for 2, 3, 4, and 5 clusters, respectively, were
obtained.
TABLE 2. THE RESULTS OF PRECISION (P), RECALL (R) AND F-SCORE GROUPED BY CATEGORY, USING THE CBIR QUERY AND THE IMAGE QUERY WITH CLUSTER INDEX AND THE COLOR HISTOGRAM METHOD

             Without Cluster  2 Clusters   3 Clusters   4 Clusters   5 Clusters   6 Clusters   7 Clusters   8 Clusters
Categories    P     R         P     R      P     R      P     R      P     R      P     R      P     R      P     R
Tribal        0.92  0.18      0.92  0.18   0.90  0.18   0.90  0.18   0.92  0.18   0.92  0.18   0.92  0.18   0.68  0.14
Beach         0.33  0.07      0.33  0.07   0.35  0.07   0.38  0.08   0.42  0.08   0.28  0.06   0.33  0.07   0.27  0.05
Monument      0.35  0.07      0.33  0.07   0.37  0.07   0.35  0.07   0.38  0.08   0.28  0.06   0.35  0.07   0.32  0.06
Bus           0.47  0.09      0.45  0.09   0.48  0.10   0.48  0.10   0.52  0.10   0.50  0.10   0.55  0.11   0.55  0.11
Dinosaur      1.00  0.20      1.00  0.20   1.00  0.20   1.00  0.20   1.00  0.20   1.00  0.20   1.00  0.20   1.00  0.20
Elephant      0.67  0.13      0.67  0.13   0.67  0.13   0.67  0.13   0.68  0.14   0.58  0.12   0.60  0.12   0.60  0.12
Rose          0.87  0.17      0.87  0.17   0.85  0.17   0.92  0.18   0.92  0.18   1.00  0.20   0.95  0.19   0.93  0.19
Horse         0.93  0.19      0.93  0.19   0.93  0.19   0.93  0.19   0.97  0.19   0.97  0.19   0.93  0.19   0.95  0.19
Mountain      0.30  0.06      0.28  0.06   0.27  0.05   0.23  0.05   0.25  0.05   0.17  0.03   0.13  0.03   0.23  0.05
Food Dish     0.77  0.15      0.65  0.13   0.75  0.15   0.75  0.15   0.78  0.16   0.68  0.14   0.70  0.14   0.67  0.13
Average       0.66  0.13      0.64  0.13   0.66  0.13   0.66  0.13   0.68  0.14   0.64  0.13   0.65  0.13   0.62  0.12
F-Score       0.22            0.21         0.22         0.22         0.23         0.21         0.22         0.21
Figure 3. Results of precision, recall and F-Score of the K-Means clustering for the WANG database.
Figure 3 shows the detailed overall average values of precision, recall and F-Score for the queries with clusters and the CBIR query without clusters. The highest overall average F-Score, 0.23, is achieved by the CBIR query using 5 clusters; all other configurations score below 0.23, and the overall average F-Score declines steadily, reaching its lowest value at 32 clusters.
7. CONCLUSION
1. The image database record clustering used in this research was based on computing the minimum and maximum PSNR (Peak Signal-to-Noise Ratio) values of each image database record against a random base image.
2. Clustering that begins with a cluster initialization process is a main key in the information retrieval process of a query. This research found a new algorithm for cluster initialization: computing in advance the square root of the sum of the square of the minimum PSNR and that of
maximum PSNR for each record in the image database, used as the basis of the record ordering; the distance between clusters was then determined by dividing the total number of records by the number of clusters; finally, each cluster was filled with ordered records according to the changes in their distances.
3. The results of implementing this model on the CBIR query in the WANG database, using 1,000 randomly taken records, showed that the highest accuracy level was achieved with 5 clusters, and that accuracy decreases as the number of clusters is enlarged. These results also show that the accuracy of the query with clusters, compared to the query without clusters, increased from 0.22 to 0.23.
8. REFERENCES
[1] Bandyopadhyay, S., and Maulik, U., 2002, An evolutionary technique based on k-means algorithm for optimal clustering in R^N. Information Sciences, 146:221-237.
[2] Boncz, P.A., Manegold, S., and Kersten, M.L., 1999, Database architecture optimized for the new bottleneck: memory access. In Proc. of VLDB, pages 54-65.
[3] Chaudhuri, S., and Shim, K., 1999, Optimization of queries with user-defined predicates. TODS, 24(2):177-228.
[4] Ganguly, S., 1998, Design and Analysis of Parametric Query Optimization Algorithms. Proc. of 24th Intl. Conf. on Very Large Data Bases (VLDB).
[5] Gassner, P., Lohman, G., Schiefer, K., and Wang, Y., 1993, Query Optimization in the IBM DB2 Family. Data Engineering Bulletin, 16(4).
[6] Ghosh, A., Parikh, J., Sengar, V., and Haritsa, J., 2002, Query Clustering for Plan Selection. Tech Report, DSL/SERC, Indian Institute of Science.
[7] Gopal, R., and Ramesh, R., 1995, The Query Clustering Problem: A Set Partitioning Approach. IEEE Trans. on Knowledge and Data Engineering, 7(6).
[8] Ilyas, I.F., Aref, W.G., and Elmagarmid, A.K., 2004, Supporting top-k join queries in relational databases. The VLDB Journal, 13:207-221.
[9] Ioannidis, Y., Ng, R., Shim, K., and Sellis, T., 1992, Parametric Query Processing. Proc. of Intl. Conf. on Very Large Data Bases (VLDB).
[10] Kossmann, D., and Stocker, K., 1998, Iterative Dynamic Programming: A New Class of Query Optimization Algorithms. The VLDB Journal.
[11] Park, J., and Segev, A., 1993, Using common sub-expressions to optimize multiple queries. Proc. of IEEE Intl. Conf. on Data Engineering (ICDE).
[12] Roy, P., Seshadri, S., Sudarshan, S., and Bhobe, S., 2000, Efficient and Extensible Algorithms for Multi Query Optimization. Proc. of ACM SIGMOD Intl. Conf. on Management of Data.
[13] Roussinov, D.G., and McQuaid, M.J., 2000, Information Navigation by Clustering and Summarizing Query Results. Proceedings of the 33rd Hawaii International Conference on System Sciences.
[14] Sellis, T., 1988, Multiple Query Optimization. ACM Trans. on Database Systems, 13(1).
[15] Shim, K., Sellis, T., and Nau, D., 1994, Improvements on a heuristic algorithm for multiple-query optimization. Data and Knowledge Engineering, 12.
[16] Smith, K.A., and Ng, A., 2003, Web page clustering using a self-organizing map of user navigation patterns. Decision Support Systems, 35:245-256.
[17] Zhang, T., Ramakrishnan, R., and Livny, M., 1996, BIRCH: An Efficient Data Clustering Method for Very Large Databases. Proc. of ACM SIGMOD Intl. Conf. on Management of Data.