
Journal of Ship Production and Design, Vol. 26, No. 3, August 2010, pp. 199–205

Application of an Artificial Neural Network to the Selection


of a Maximum Efficiency Ship Screw Propeller
Dunja Matulja, Roko Dejhalla, and Ozren Bukovac
Faculty of Engineering, University of Rijeka, Vukovarska 58, HR-51000 Rijeka, Croatia

The idea of the present study is to apply the advantages of neural networks to the
choice of an optimum ship screw propeller as an introduction to more complex ship
design problems. The neural network was created and trained to provide the characteristics of the maximum efficiency propeller. To train the network, data regarding the
blade number, advance speed, delivered power, rate of revolution, diameter, pitch
ratio, and expanded area ratio as well as thrust and efficiency were set as inputs and
outputs. The testing of the network proved its effectiveness, which makes it a reliable tool
for the preliminary screw propeller selection.
Keyword: propellers
1. Introduction
THE SHIP design process is traditionally described as an iterative procedure in the form of a design spiral, and the propulsion
is one of the major spiral spokes. In the preliminary ship design
stage, a ship screw propeller selection is almost invariably based
on charts giving the results of open-water tests on a series of
model propellers. Some of the traditional propeller diagrams
have been transformed into polynomial expressions allowing
easy interpolation and optimization within traditional propeller
geometries.
An artificial neural network (ANN), usually called a neural network (NN), is a mathematical or computational model
based on biological neural networks (Haykin 2005, Taylor 2006). It
consists of an interconnected group of artificial neurons and processes information using a connectionist approach to computation.
In most cases, a NN is an adaptive system that changes its structure based on external or internal information that flows through
the network during the learning phase. In more practical terms,
NNs are nonlinear statistical data modeling tools that can be used
to model complex relationships between inputs and outputs.
The NNs are excellent prediction tools, particularly in cases where
there is little information about the relationship between the inputs
and outputs of the problem. NNs possess a number of unique
characteristics that make them particularly attractive in complex
problems such as the ship design process. An overview of the application of NNs to various aspects of the ship design process is given
in Gougoulidis (2008), where representative examples are shown in
order to illustrate the power and versatility of such an approach.

Manuscript received by JSPD Committee December 2009; accepted April 2010.
This paper discusses the development of a NN model for the
selection of a maximum efficiency ship screw propeller. The
preliminary research (Matulja & Dejhalla 2008) revealed that
it is feasible to train an unaccustomed, black-box type of network
only if a large amount of data are used for training. Therefore, a new
NN was created, with a structure optimized to provide the most
accurate results for the propeller selection problem. The enhancement
lies in higher accuracy with a smaller amount of input data. The
propeller performance data were derived using the Wageningen
B-screw series. Computational examples are presented in order to
demonstrate the effectiveness and validity of the developed NN.

2. Wageningen B-screw series


The Wageningen B-screw series is considered to be the most
extensive and the most appropriate for a large range of ship types.
The open-water test data of the Wageningen B-screw series are
reported in Oosterveld and Van Oossanen (1975). The characteristics of the series are represented at a Reynolds number of 2 × 10^6 by
equations of the form:

K_T = \sum_{n=1}^{39} C_n J^{s_n} (P/D)^{t_n} (A_E/A_O)^{u_n} Z^{v_n}

K_Q = \sum_{n=1}^{47} C_n J^{s_n} (P/D)^{t_n} (A_E/A_O)^{u_n} Z^{v_n}


where KT is the thrust coefficient, KQ is the torque coefficient,


J is the propeller advance ratio, P/D is the pitch ratio, AE/AO is
the expanded area ratio, and Z is the number of propeller blades.
Cn, sn, tn, un, and vn are polynomial coefficients.
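The polynomial form above can be evaluated directly once the coefficient table is at hand. The Python sketch below is a minimal illustration only; `demo_terms` holds two placeholder (C, s, t, u, v) tuples, not the full published 39-term K_T set from Oosterveld and Van Oossanen (1975).

```python
# Minimal sketch of evaluating the B-series polynomial form
# K = sum_n C_n * J^s_n * (P/D)^t_n * (AE/AO)^u_n * Z^v_n.
def series_coefficient(terms, J, PdD, AeAo, Z):
    """Sum the polynomial terms; each term is a (C, s, t, u, v) tuple."""
    return sum(C * J**s * PdD**t * AeAo**u * Z**v for C, s, t, u, v in terms)

# Two placeholder terms for illustration only -- the published K_T set has 39.
demo_terms = [(0.00880496, 0, 0, 0, 0), (-0.204554, 1, 0, 0, 0)]

KT_demo = series_coefficient(demo_terms, J=0.5, PdD=1.0, AeAo=0.55, Z=4)
```

With the full coefficient tables substituted for `demo_terms`, the same function yields K_T and K_Q for any point within the series limits.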
The corrections ΔK_T and ΔK_Q have been included (Oosterveld
& Van Oossanen 1975) in order to extend the validity of the polynomials to Reynolds numbers up to 2 × 10^9:

K_T(Re) = K_T(Re = 2 \times 10^6) + \Delta K_T(Re)

K_Q(Re) = K_Q(Re = 2 \times 10^6) + \Delta K_Q(Re)
A reasonable indication of the required expanded area ratio AE/AO is obtained by means of the formula given by Keller
(Oosterveld & Van Oossanen 1975):

\frac{A_E}{A_O} = \frac{(1.3 + 0.3\,Z)\,T}{(p_O - p_V)\,D^2} + K

where pO is the static pressure at the centerline of a propeller shaft,


pV is the vapor pressure of water, T is the propeller thrust, and
D is the propeller diameter. The value of K is taken as zero for ships
with a smooth wake distribution, such as fast twin-screw ships,
0.1 for other twin-screw ships, and 0.2 for single-screw ships.
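The Keller formula transcribes directly into code. The sketch below assumes SI units; the sample values in the call are illustrative, not taken from the paper.

```python
def keller_min_blade_area(Z, T, p0, pv, D, K):
    """Keller's minimum expanded area ratio AE/AO to avoid cavitation.
    Z: blade number, T: thrust [N], p0: static pressure at the shaft
    centerline [Pa], pv: vapor pressure of water [Pa], D: diameter [m],
    K: 0.0 (fast twin screw), 0.1 (other twin screw), 0.2 (single screw)."""
    return (1.3 + 0.3 * Z) * T / ((p0 - pv) * D**2) + K

# Illustrative single-screw example (assumed values)
ae_ao_min = keller_min_blade_area(Z=4, T=800e3, p0=1.5e5, pv=1.7e3, D=6.0, K=0.2)
```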

3. Artificial neural networks


The human brain is a highly complicated system that is capable
of solving complex problems. One of its most important building
blocks is the neuron, and the brain contains approximately 10^11
neurons. These neurons are connected by nearly 10^15 connections,
creating a huge neural network. Neurons send impulses to each
other through their connections and these impulses produce brain
activity. The neural network also receives impulses from the five
senses and sends out impulses to muscles to achieve motion or
speech.
An individual neuron can be seen as an input-output device that
collects the impulses from the surrounding neurons, and, when it
has received enough impulses, it sends out an impulse to other
neurons.
Artificial neurons are similar to their biological counterparts.
They have input connections that are summed together to determine the strength of their output. The output is the result of the
sum being fed into an activation function.
In an artificial neural network, a neuron is an information-processing unit that is fundamental to the operation of a neural
network. The block diagram presented in Fig. 1 shows the model
of a neuron, which forms the basis for designing (artificial) neural
networks.

Fig. 1 Model of a neuron

Three basic elements of the neuronal model can be
identified:
1. A set of synapses, or connecting links, each of which is
characterized by a weight or strength of its own. Specifically, a
signal xj at the input of synapse j connected to neuron k is
multiplied by the synaptic weight wkj. It is important to note
the manner in which the subscripts of the synaptic weight wkj
are written: the first subscript refers to the neuron in question,
and the second refers to the input end of the synapse to which
the weight refers. Unlike a synapse in the brain, the synaptic
weight of an artificial neuron may lie in a range that includes
negative as well as positive values.
2. An adder for summing the input signals, weighted by the
respective synapses of the neuron; the operations described
here constitute a linear combiner.
3. An activation function for limiting the amplitude of the
output of a neuron. The activation function is also referred to as
a squashing function, in that it squashes (limits) the permissible
amplitude range of the output signal to some finite value.

Typically, the normalized amplitude range of the output of a
neuron is written as the closed unit interval [0,1] or, alternatively,
[−1,1].
The model of a neuron in Fig. 1 also includes an externally
applied bias, denoted by bk. The bias bk has the effect of increasing or lowering the net input of the activation function, depending
on whether it is positive or negative, respectively. In mathematical
terms, a neuron k can be described by writing the following pair of
equations:

u_k = \sum_{j=1}^{m} w_{kj}\, x_j

y_k = \varphi(u_k + b_k)

where x_1, x_2, . . ., x_m are the input signals; w_{k1}, w_{k2}, . . ., w_{km} are the
synaptic weights of neuron k; u_k is the linear combiner output
due to the input signals; b_k is the bias; \varphi(\cdot) is the activation
function; and y_k is the output signal of the neuron.
Different activation functions can be chosen, but the most
common is the sigmoid activation function, which outputs a number between 0 (for low input values) and 1 (for high input values).
The result of this function is then passed as the input to other
neurons through more connections. Each of these connections
has a certain weight, and these weights determine the behavior of
the network.
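The behavior of a single artificial neuron described above can be sketched in a few lines of Python; the weights, inputs, and bias below are arbitrary illustration values.

```python
import math

def sigmoid(v):
    """Logistic activation: squashes any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))

def neuron_output(weights, inputs, bias):
    """u_k = sum_j w_kj * x_j, then y_k = phi(u_k + b_k)."""
    u = sum(w * x for w, x in zip(weights, inputs))
    return sigmoid(u + bias)

y = neuron_output(weights=[0.5, -0.3, 0.8], inputs=[1.0, 2.0, 0.5], bias=0.1)
```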
In the human brain the neurons are connected in a seemingly
random order and send impulses asynchronously. However, a NN
is not organized that way, since its aim is to create a function
approximator and not the model of a brain.
Neural networks are usually ordered in layers with connections
going between the layers. The first layer contains the input neurons, and the last layer contains the output neurons. These input
and output neurons represent the input and output variables of the
approximated function. Between the input and the output layer
there is a certain number of hidden layers. The connections that
lead to and from these hidden layers, as well as their weights,
determine how well the NN will perform. While learning to
approximate a function, examples of the function are shown
to the NN, and the internal weights in the NN are slowly
adjusted so as to produce outputs as close as possible to the
example values. This process is called training.
Fig. 2 Layers of a NN

The objective of the training is a NN that will provide the correct


output when a new set of input variables is given.
In this paper, a multilayer perceptron network has been
applied to form the structure (Sherrod 2009). The selected learning algorithm was the back-propagation learning rule, and the
mean square error was chosen as the criterion of error estimation.
The code was created and adapted by using Fast Artificial Neural
Network Library (FANN 2009). FANN was chosen because of its
manageability and favorable prior experience with it.
The network diagram in Fig. 2 illustrates a fully connected,
three-layer, feed-forward, perceptron neural network. Fully
connected means that the output from each input and hidden
neuron is distributed to all of the neurons in the following layer.
Feed-forward means that the values only move from input to
hidden to output layers, while no values are fed back to earlier
layers. This network has an input layer (on the left) with three
neurons, one hidden layer (in the middle) with three neurons, and
an output layer (on the right) with three neurons. This means that
there is one neuron in the input layer for each predictor variable.
A vector of predictor variable values (x1, . . ., xp) is presented to
the input layer. The input layer (or processing before the input
layer) standardizes these values so that the range of each variable
is −1 to 1 or, more often, −0.9 to 0.9. The input layer distributes
the values to each of the neurons in the hidden layer. In addition to
the predictor variables, there is a constant input of 1.0, called the
bias, that is fed to each of the hidden layers. The bias is multiplied
by a weight and added to the sum going into the neuron.
Arriving at a neuron in the hidden layer, the value from each
input neuron is multiplied by a weight wji, and the resulting
weighted values are added together, producing a combined value
uj. The weighted sum uj is fed into a transfer function, σ, which
outputs a value hj. The outputs from the hidden layer are distributed to the output layer.
Arriving at a neuron in the output layer, the value from each
hidden-layer neuron is multiplied by a weight wkj, and the
resulting weighted values are added together, producing a combined value vk. The weighted sum vk is fed into a transfer function,
σ, which outputs a value yk. The y values are the outputs of the
network.
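The two paragraphs above amount to the following forward pass. This is a generic sketch with arbitrary example weights, not the trained network of the paper; tanh stands in for the symmetric sigmoid transfer function.

```python
import math

def layer(values, weights, biases):
    """One fully connected layer: weighted sum plus bias per neuron,
    followed by a symmetric sigmoid (tanh) transfer function."""
    return [math.tanh(sum(w * v for w, v in zip(row, values)) + b)
            for row, b in zip(weights, biases)]

def forward(x, hidden_w, hidden_b, out_w, out_b):
    """Feed-forward pass: input -> hidden -> output, no feedback paths."""
    h = layer(x, hidden_w, hidden_b)   # hidden-layer outputs h_j
    return layer(h, out_w, out_b)      # network outputs y_k

# Arbitrary 3-3-3 example weights, mirroring the topology of Fig. 2
hidden_w = [[0.2, -0.1, 0.4], [0.7, 0.3, -0.5], [-0.2, 0.6, 0.1]]
hidden_b = [0.1, -0.2, 0.0]
out_w = [[0.5, -0.4, 0.2], [0.1, 0.9, -0.3], [-0.6, 0.2, 0.4]]
out_b = [0.0, 0.1, -0.1]

y = forward([0.1, -0.2, 0.3], hidden_w, hidden_b, out_w, out_b)
```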

4. Training the neural network


The goal of the training process is to find the set of weight
values that will allow the output from the neural network to match
the actual target values as closely as possible.
As with most training algorithms, the cycle followed in order to
refine the weight values in the NN was:

1. Running a set of predictor variable values through the network using a tentative set of weights, with the setting of
initial weights of node connections as random values
2. Computing the difference between the predicted target value
and the actual target value for this case
3. Averaging the error information over the entire set of training cases
4. Propagating the error backward through the network and
computing the gradient (vector of derivatives) of the change
in error with respect to changes in weight values
5. Making adjustments to the weights to reduce the error. Each
cycle is called an epoch, and the number of epochs depends
on the problem complexity.
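The epoch cycle above can be illustrated on a deliberately tiny problem: one linear neuron fitted to y = 2x by batch gradient descent with a momentum term. This is a toy sketch of steps 1 to 5 only, not FANN's back-propagation, which applies the same cycle to multi-layer networks.

```python
import random

def train(samples, epochs=1000, lr=0.1, momentum=0.8):
    """Epoch loop for one linear neuron y = w*x + b, minimizing the
    mean square error by batch gradient descent plus momentum."""
    w, b = random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)  # step 1: random weights
    dw_prev = db_prev = 0.0
    for _ in range(epochs):                                      # one pass = one epoch
        gw = gb = 0.0
        for x, target in samples:
            err = (w * x + b) - target                           # step 2: error per case
            gw += err * x
            gb += err
        gw, gb = gw / len(samples), gb / len(samples)            # steps 3-4: mean gradient
        dw = -lr * gw + momentum * dw_prev                       # step 5: update, adding a
        db = -lr * gb + momentum * db_prev                       #   fraction m of the
        w, b = w + dw, b + db                                    #   previous update
        dw_prev, db_prev = dw, db
    return w, b

w, b = train([(x / 10, 2 * x / 10) for x in range(-5, 6)])
```

After 1000 epochs the weight converges close to 2 and the bias close to 0, illustrating how the repeated cycle drives the error down.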
Before starting the training, the inputs and the desired outputs
need to be defined. Because the network needs a certain number of
data to learn from, these data have to be adequately collected and
arranged. The data that should be known to the user are set as
inputs, while the expected resulting data are set as outputs. Generally, the larger the database, the more accurate is the NN. However, the gathering of data requires some time, and wasting too
much time before even starting the training would make the network inefficient and nullify its advantages. So the idea was to
create a NN capable of learning from a limited number of training
data.
A set of 389 samples of data was prepared using the B-screw
series polynomials incorporated into the computer program. The
computer program searches for the combination of parameters that
gives the maximum efficiency propeller. The calculations were
carried out using the values of delivered power, rate of revolution,
ship speed, and stern geometry taken from real ships. Among the
gathered set of data, 339 randomly chosen were used for the
training while the remaining 50 were set aside for testing.
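A random hold-out split of this kind can be sketched as follows; the fixed seed is an assumption purely for reproducibility of the sketch.

```python
import random

def split_samples(samples, n_test=50, seed=1):
    """Shuffle and split: hold out n_test samples for testing,
    keep the rest for training (339 / 50 in the present study)."""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)
    return shuffled[n_test:], shuffled[:n_test]

train_set, test_set = split_samples(range(389))
```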
It is common to start the propeller selection from the available
engine power at a given rate of revolution. Therefore, the considered input parameters were the blade number Z, the advance
speed VA, the delivered power PD, and the rate of revolution N. The
output variables were the diameter D, the pitch ratio P/D, the
expanded area ratio AE/AO, the thrust T, and the maximum open-water
efficiency ηO. In this manner, the obtained propeller is the one
with the highest efficiency, considering the limitations imposed
by the ship's draft as well as the minimum required expanded
area ratio to avoid cavitation. The data range of the considered
parameters is shown in Table 1.
The input parameters are shown in Figs. 3 to 6. On these figures
each symbol (filled rhomb) represents the value of the blade

Table 1 Data range

Parameter     Range
Z             3–7
VA (m/s)      3.8–9.8
PD (kW)       1,230–49,000
N (min⁻¹)     64.4–288.0
D (m)         2.20–11.40
P/D           0.5–1.12
AE/AO         0.36–1.10
T (kN)        125–3,690
ηO            0.40–0.75


Fig. 3 Input data, blade number

Fig. 4 Input data, advance speed

Fig. 5 Input data, delivered power

Fig. 6 Input data, rate of revolution

number, advance speed, delivered power, and rate of revolution


used for the training.
Because one hidden layer is sufficient for nearly all problems, a
single hidden layer has been created. The transfer function in the
hidden layer was set as symmetric sigmoid because of its good
performance in this particular problem. To ensure the best convergence of the results, in the processing before the input layer the
input values were standardized, so the range of each variable was
set to be between −0.9 and 0.9.
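This standardization step is a simple linear rescaling per variable, sketched below; the sample values in the call are illustrative.

```python
def standardize(values, lo=-0.9, hi=0.9):
    """Linearly rescale one variable's samples into [lo, hi],
    as done in the processing before the input layer."""
    vmin, vmax = min(values), max(values)
    return [lo + (hi - lo) * (v - vmin) / (vmax - vmin) for v in values]

# e.g. rescaling three rate-of-revolution values spanning Table 1's range
scaled = standardize([64.4, 150.0, 288.0])
```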
One of the most important characteristics of a perceptron network is the number of neurons in the hidden layer. If an inadequate number of neurons is used, the network will be unable to
model complex data, and the resulting fit will not be satisfying.
Another important characteristic that needs to be set is the learning momentum. It is used in the search of the global minimum, as
it enables the estimation of a certain weight at a given time.
Momentum simply adds a fraction m of the previous weight
update to the current one. The number of neurons and the momentum are parameters that can be estimated by different methods.
In order to determine the size of the NN, several tests were
done to define the number of neurons in the hidden layer that
gives satisfactory results. The results treated as satisfactory
were those that gave both the fastest computation and the lowest mean
square error. During the generation of the NN, it was presumed that it
would be more efficient to try different combinations than to
prepare specific software to determine the optimal
number of neurons.
A lower number of neurons is usually better for general learning. Based on prior experience, the data number (339) divided by
10 was taken as the starting number of neurons. However, in this
case one parameter turned out to be overtrained, while the others
showed an excessive mean square error, which is the usual consequence
of too many neurons. Therefore, it was decided to halve the number of neurons (bisection method), and after some trials 13 neurons
emerged as the best choice. This number of neurons gave better
accuracy than a lower number, while more than 13 neurons did not
increase the accuracy but only slowed down the computation time.
The learning momentum was chosen to be 0.8. Because four
parameters were set as inputs, and five as outputs, a reasonable
number of epochs needed to be set. After 1000 epochs, an acceptable convergence of the results was achieved.

To demonstrate the effectiveness of the training, the training


results are presented in Figs. 7 to 11. On these figures each symbol
(filled rhomb for B-series, circle for NN) represents the value
of the propeller diameter, expanded area ratio, thrust, pitch ratio,
and propeller efficiency, respectively. The percentage discrepancy
of all individual cases was within 10%, with the majority of all
data within 5%.

5. Results
After the NN was trained, the network was tested using the previously stored 50 sets of data. As examples of values, eleven input
data sets (blade number, advance speed, delivered power, rate of
revolution) are shown in Table 2.
As an illustration, Figs. 12 to 16 show results of all the 50 cases
used for testing. In these figures, each symbol (filled rhomb for
B-series, circle for NN) represents the value of the propeller
diameter, expanded area ratio, thrust, pitch ratio, and propeller
efficiency, respectively.

Fig. 7 Training results of the NN, propeller diameter

Fig. 8 Training results of the NN, expanded area ratio

Fig. 9 Training results of the NN, thrust

Fig. 10 Training results of the NN, pitch ratio

Fig. 11 Training results of the NN, propeller efficiency

Table 2 Examples of input data

Case      Z    VA (m/s)   PD (kW)     N (min⁻¹)
Case 2    3    6.15       5,274.76    136.0
Case 5    4    6.55       7,473.27    103.5
Case 8    6    5.6        7,926.16    85.0
Case 12   5    6.0        8,499.77    173.0
Case 16   3    4.85       1,554.66    256.0
Case 21   5    9.7        48,991.60   94.7
Case 27   4    5.2        13,043.0    88.1
Case 32   5    6.3        8,645.46    146.0
Case 39   6    5.5        8,496.50    173.0
Case 41   7    5.7        12,994.90   88.1
Case 46   4    4.85       1,545.37    256.0

Fig. 12 Testing results of the NN, propeller diameter

Fig. 13 Testing results of the NN, expanded area ratio

Fig. 14 Testing results of the NN, thrust

Fig. 15 Testing results of the NN, pitch ratio

Fig. 16 Testing results of the NN, propeller efficiency

Table 3 Testing results of the NN

Case                   D (m)     P/D     AE/AO    T (kN)    ηO
Case 2    NN output    5.717     0.674   0.449    565.8     0.655
          B-series     5.653     0.658   0.466    564.5     0.660
Case 5    NN output    6.533     0.781   0.459    747.77    0.667
          B-series     6.547     0.786   0.446    754.30    0.661
Case 8    NN output    6.758     0.872   0.500    847.35    0.608
          B-series     6.815     0.878   0.527    859.36    0.610
Case 12   NN output    4.954     0.655   0.738    780.48    0.560
          B-series     5.075     0.654   0.760    797.65    0.563
Case 16   NN output    3.048     0.621   0.554    178.93    0.553
          B-series     3.031     0.595   0.544    180.09    0.560
Case 21   NN output    10.013    0.842   0.756    3450.45   0.682
          B-series     9.912     0.834   0.736    3473.04   0.688
Case 27   NN output    8.551     0.648   0.475    1493.80   0.591
          B-series     8.432     0.639   0.455    1461.50   0.583
Case 32   NN output    5.461     0.708   0.682    805.90    0.595
          B-series     5.451     0.727   0.685    821.53    0.599
Case 39   NN output    4.895     0.649   0.814    804.33    0.517
          B-series     5.020     0.639   0.851    811.20    0.525
Case 41   NN output    7.369     0.872   0.632    1299.30   0.569
          B-series     7.189     0.881   0.621    1301.30   0.570
Case 46   NN output    2.915     0.642   0.609    171.09    0.544
          B-series     2.878     0.643   0.619    175.98    0.550

The corresponding NN outputs are shown in Table 3. For


comparison, the optimum propeller characteristics determined by
calculation based on B-series polynomials are reported in Table 3
as well.
Comparing the NN outputs to the calculated results, a close
agreement between the values can be noticed. For example, the
diameter difference is within a few centimeters, which is, considering the treated diameter range, accurate enough for a preliminary ship propeller selection.
The same level of accuracy was obtained at the training and at the
testing, which indicates a stable, reliable neural network. The
percentage discrepancy ranges between the NN outputs and the
B-series calculations are presented in Table 4.
Considering the achieved accuracy, which is within 8%, it can be
concluded that the developed NN has reached adequate precision for the problem at hand.
Although it is usual to start the propeller selection from the
available engine power and a rate of revolution, the developed
NN can also be used for the propeller selection when the rate of
revolution is to be determined if the delivered power, the propeller
diameter, and the advance velocity are specified. Moreover, the
NN can be used for the propeller selection when the diameter or
rate of revolution is to be determined if the thrust is taken as a
starting point. In this manner, with the simple exchange of input
data, all possible cases that may be of interest in practice are
covered.

Table 4 The percentage discrepancies

D        −3.57% < (100 ΔD/D) < 3.62%
P/D      −4.77% < [100 Δ(P/D)/(P/D)] < 4.42%
AE/AO    −7.95% < [100 Δ(AE/AO)/(AE/AO)] < 5.56%
T        −5.56% < (100 ΔT/T) < 3.69%
ηO       −1.78% < (100 ΔηO/ηO) < 2.16%

6. Conclusion
Artificial neural networks are relatively new tools in the field of
naval architecture and marine engineering. They possess characteristics that make them particularly attractive in complex problems such as the ship design process.
A neural network has been developed that enables
the selection of a maximum efficiency ship screw propeller. Different neural network architectures and learning parameters were
tried in order to find the most satisfactory one. The structure was
optimized to provide the highest accuracy with a relatively low
number of input data.
For a set of input data comprising the blade number, advance speed, delivered power, and rate of revolution, the neural network provides
the diameter, pitch ratio, expanded area ratio, and thrust of the
propeller with the maximum open water efficiency.
Because the obtained results are convincing, the developed procedure can be considered a solid base for the further development of
neural networks applicable to various aspects of the ship design process.

Acknowledgments
This work has been carried out within the research project No.
069-0691736-1667, financed by the Ministry of Science, Education, and Sports of the Republic of Croatia.

References
FANN 2009 Fast Artificial Neural Network Library. Available at http://leenissen.dk/fann/. Accessed March 11, 2009.
GOUGOULIDIS, G. 2008 The utilization of artificial neural networks in marine applications: an overview, Naval Engineers Journal, 120, 3, 19–26.
HAYKIN, S. 2005 Neural Networks: A Comprehensive Foundation, Pearson Education, Inc.
MATULJA, D., AND DEJHALLA, R. 2008 Neural network prediction of an optimum ship screw propeller, Proceedings, 19th International DAAAM Symposium, Vienna, Austria, 829–830.
OOSTERVELD, M. W. C., AND VAN OOSSANEN, P. 1975 Further computer-analyzed data of the Wageningen B-screw series, International Shipbuilding Progress, 22, 251, 251–262.
SHERROD, P. H. 2003–2009 Predictive Modeling Software. Available at http://www.dtreg.com. Accessed March 17, 2009.
TAYLOR, B. J., editor 2006 Methods and Procedures for the Verification and Validation of Artificial Neural Networks, Springer Science+Business Media, Inc.
