

GENETIC ALGORITHM & ANN MODEL FOR OPTIMIZATION OF RESISTANCE OF A SHIP


Danta Sandeep
Roll Number: 09NA3011
Department of Ocean Engineering & Naval Architecture
E-Mail: dantasandeep@gmail.com
Abstract

An Artificial Neural Network (ANN) is known to produce results of sufficient accuracy to be useful for preliminary prediction of vessel resistance. Its added benefits include: it is relatively easy and simple to set up and use; it is easily retrained with new data; and the Froude number (Fn) can be taken as an independent variable. The ANN fits the experimental data directly rather than through a set of smoothed curves along the Froude-number axis. For the present purpose, an ANN with two hidden layers performs well. The ANN can also be extended relatively easily with a Genetic Algorithm (GA) to optimise the design parameters for the best results; the GA uses the approximations provided by the ANN response surfaces as its objective functions. This combination is effective and acceptably efficient.

Introduction

ANNs are widely used for classification and prediction because of their ability to model nonlinear functions quickly and accurately. In Naval Architecture, interpolation and prediction of hull resistance from model experiments and tank testing are traditionally done using statistical regression analysis. Although a larger dataset would give superior results, the additional curve-fitting step may introduce new errors. It was therefore decided to attempt fitting the original experimental data directly, despite the data being highly non-linear, consisting of a relatively small number of points, and containing significant noise.

The non-linearity of the solution surface fitted to the resistance data by the neural network raises an additional problem when the output of the trained ANN is used within an optimisation procedure. Because the solution surface is multimodal, a gradient-based optimisation method cannot reliably find the optimum set of parameters. To address this problem, a simple Genetic Algorithm (GA) based optimisation framework is implemented. The framework was designed to be as flexible as possible, so that the GA parameters could be varied experimentally to determine the best combination of settings.

Neural networks are general-purpose, flexible, non-linear models that, given enough hidden neurons and enough data, can approximate virtually any function to any desired degree of accuracy; in other words, neural networks are universal approximators. They can be used when there is little knowledge about the form of the relationship between the independent and dependent variables. Most ANNs have some sort of training rule whereby the weights of the connections are adjusted on the basis of training data. There are many different types of ANN; the form that has found the widest application is the feed-forward multi-layer perceptron, or MLP.
MLPs have an input layer with a series of inputs, one or more hidden layers, and an output layer. The number of input elements equals the number of variables in the input dataset, and the number of outputs equals the number of result values required. The number of hidden layers and the number of elements in them can vary, and there are several techniques used to determine the optimum structure.
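As an illustration of the structure just described, a minimal forward pass through a two-hidden-layer MLP can be sketched in Python with NumPy. The layer sizes, the tanh activation, and the example inputs (Fn and B/T) are assumptions for illustration only, not the network actually trained in the paper:

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass through a feed-forward MLP: tanh on the hidden
    layers, linear output for a regression target such as resistance."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(a @ W + b)               # hidden layers
    return a @ weights[-1] + biases[-1]      # linear output layer

# Illustrative shape only: 2 inputs (say Fn and B/T), two hidden
# layers of 8 neurons each, and 1 output (predicted resistance).
rng = np.random.default_rng(0)
sizes = [2, 8, 8, 1]
weights = [rng.normal(scale=0.5, size=(m, n))
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

y = mlp_forward(np.array([0.35, 2.0]), weights, biases)
```

Training would then adjust `weights` and `biases` against the experimental data; only the untrained forward pass is shown here.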

Despite the similarities between ANNs and statistical methods, ANNs have some specific advantages:

1) Power: ANNs are capable of modelling extremely complex functions; in particular, ANNs are non-linear. For many years, linear modelling was the commonly used technique in most modelling domains, since linear models have well-known optimisation strategies. Where the linear approximation was not valid (which was frequently the case), the models themselves were not valid, which posed a major problem. ANNs also keep the dimensionality problem in check when modelling non-linear functions with large numbers of variables.

2) Ease of use: ANNs learn by example. The user gathers representative data and then invokes training algorithms that enable the ANN to learn the structure of the data. Although the user requires some knowledge of how to select and prepare the data, the level of user knowledge needed to successfully apply ANNs is much lower than that required for more traditional statistical methods.

Genetic Algorithm

A Genetic Algorithm is essentially a mathematical, computerised version of Darwin's theory of evolution: the survival of the fittest, so to say. It is mostly used for maximisation or minimisation problems, which are the most common industrial and engineering problems. For example, we may need to maximise profits or minimise production losses in industry, or we may have to select the routes where the profit is maximum or the expenses are minimum. The Genetic Algorithm provides optimal (or, let us say, near-optimal) solutions, but we need to specify the search area, i.e. the feasible and non-feasible regions and the boundary conditions. Each iteration may apply different operators: reproduction, cross-over, and mutation. The best or fittest individuals are selected from the population at the end of every iteration, and the process is repeated over and over until we obtain the optimal solution.
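The three operators named above can be sketched as small Python functions. This is a generic single-point-crossover illustration under assumed representations (real-valued parameter vectors as lists), not the specific operators of the paper:

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover: splice two parameter vectors at a random cut."""
    cut = random.randint(1, len(parent_a) - 1)
    return parent_a[:cut] + parent_b[cut:]

def mutate(individual, bounds, rate=0.1):
    """With probability `rate`, reset a gene to a random value in its bounds."""
    return [random.uniform(lo, hi) if random.random() < rate else gene
            for gene, (lo, hi) in zip(individual, bounds)]

def tournament_select(population, fitness_values, k=3):
    """Reproduction: return the fittest of k randomly chosen individuals."""
    picks = random.sample(range(len(population)), k)
    return population[max(picks, key=lambda i: fitness_values[i])]
```

Repeatedly selecting parents, crossing them over, and mutating the children produces the next generation from the current one.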

Non-Linear Programming Problem

A general Non-Linear Programming problem is either a maximisation or a minimisation. In practical applications we use constrained problems, since we need to maximise profit using the available (or minimum) resources. Any evolutionary computation, such as a GA, must address the handling of feasible and non-feasible solutions. Suppose the search space consists of two disjoint subsets: feasible and non-feasible. An optimal solution then comes from the feasible subset. The non-feasible subset may contain solutions that could be repaired and then optimised; for now, we do not consider repairing unfeasible solutions and assume that the non-feasible subset contains no optimal solution. Note that these subsets need not be convex or even connected.

A general Non-Linear Programming problem is given as

  Min f(x) or Max f(x), where x = (x1, x2, ..., xn) ∈ R^n.

That is, we have to either minimise or maximise the function f(x) subject to constraints on x; we call this a constrained problem. These constraints form the feasible set from which the solution must come. The search space can be pictured as an n-dimensional rectangle whose faces are formed by the upper and lower bounds on each variable, l(xi) ≤ xi ≤ u(xi) for i = 1, 2, ..., n. The feasible space is given by a further set of constraints gj(x) ≤ 0 for j = 1, 2, ..., m. Thus the problem is

  Min/Max f(x) subject to
  S = {x ∈ R^n | l(xi) ≤ xi ≤ u(xi), i = 1, 2, ..., n}   (search space)
  F = {x ∈ R^n | gj(x) ≤ 0, j = 1, 2, ..., m}            (feasible space/set)

with the obvious changes between minimisation and maximisation problems. The reference paper discusses an algorithm called M-COGA, the Modified Co-Evolutionary Genetic Algorithm. The principle of M-COGA is based on co-evolution and the repairing of unfeasible

solutions (which we are not considering, to avoid complexity), together with an elitist strategy which facilitates faster convergence to the best optimal solution.

Working Procedure of GA

Initialisation stage: the population is randomly initialised within the search space S.

Initial feasible point: the algorithm requires an initial (feasible) reference point to enter the evolutionary process. The procedure used to obtain it is beyond the scope of this paper.

Repairing of unfeasible solutions: this process is also beyond the scope of this paper; we assume it is done by external means or by any available algorithm.

Elitist strategy: the elitist individual is the fittest member of the population, i.e. the one with the highest fitness value. The elitist strategy ensures that the objective value of the best individual does not increase (minimisation) or decrease (maximisation) from one generation to the next.

Evolution process: the algorithm uses the objective function f(x) to evaluate the fitness of each individual. The fitness function is therefore

  fitness(x) = f(x)            for a maximisation problem,
  fitness(x) = 1/(1 + f(x))    for a minimisation problem.

Stopping rule: the algorithm terminates when one or both of the following conditions is satisfied: (a) the number of generations reaches the maximum value; (b) crossover (or any other variation operator) no longer has any effect.

Structure of GA (algorithm given in the paper):

  START
    Get a feasible point (initial reference point);
    t = 0;
    Initialisation;
    Repair population (unfeasible solutions);
    Keep the best (using selection operators);
    WHILE (t < max_gen)        // max_gen is the maximum number of generations
      t = t + 1;
      Select P(t) from P(t-1);
      Perform recombination on P(t);
      Repair population;
      Apply the elitist strategy;
      Stop if convergence occurs;
    END
  END

Training and Testing Data

The dataset used to train and test the ANN is a series of geometrically similar catamaran models, based on the hull shape shown, with the parameters described below.
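The procedure above can be sketched in Python. This is a minimal illustration, not the M-COGA algorithm of the reference paper: unfeasible children are simply discarded rather than repaired, all function and parameter names are assumptions, and the fitness transform and elitist step follow the description above.

```python
import random

def fitness(x, f, maximise):
    """Fitness transform: f(x) for maximisation,
    1/(1 + f(x)) for minimisation (assuming f(x) >= 0)."""
    return f(x) if maximise else 1.0 / (1.0 + f(x))

def feasible_point(bounds, constraints):
    """Rejection sampling inside the search space S until every gj(x) <= 0.
    (This stands in for the repair step, which the text leaves out of scope.)"""
    while True:
        x = [random.uniform(lo, hi) for lo, hi in bounds]
        if all(g(x) <= 0.0 for g in constraints):
            return x

def run_ga(f, bounds, constraints=(), pop_size=30, max_gen=100,
           mut_rate=0.1, maximise=False):
    # Initialisation: random feasible population within S.
    pop = [feasible_point(bounds, constraints) for _ in range(pop_size)]
    best = max(pop, key=lambda x: fitness(x, f, maximise))
    for _ in range(max_gen):                 # stopping rule (a): max generations
        scores = [fitness(x, f, maximise) for x in pop]

        def select():                        # tournament selection (reproduction)
            picks = random.sample(range(pop_size), 3)
            return pop[max(picks, key=lambda i: scores[i])]

        new_pop = [list(best)]               # elitist strategy: keep the best
        while len(new_pop) < pop_size:
            a, b = select(), select()
            cut = random.randint(1, len(bounds) - 1) if len(bounds) > 1 else 0
            child = a[:cut] + b[cut:]        # recombination (crossover)
            child = [random.uniform(lo, hi) if random.random() < mut_rate else g
                     for g, (lo, hi) in zip(child, bounds)]  # mutation
            if all(g(child) <= 0.0 for g in constraints):    # discard unfeasible
                new_pop.append(child)
        pop = new_pop
        best = max(pop, key=lambda x: fitness(x, f, maximise))
    return best

# Minimise a simple test function on [-5, 5]^2 with one inequality constraint.
best = run_ga(lambda x: sum(v * v for v in x),
              [(-5.0, 5.0), (-5.0, 5.0)],
              constraints=[lambda x: x[0] + x[1] - 8.0])
```

In the ship-design application, the objective `f` would instead be the trained ANN's resistance prediction, evaluated as a black box.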

B - Maximum beam at the waterline
T - Draft (depth to which the ship is immersed)
(Displacement is denoted by the displaced volume in L/disp^(1/3) below.)

Model         3b     4a     4b     4c     5a     5b     5c     6a     6b     6c
B/T           2      1.5    2      2.5    1.5    2      2.5    1.5    2      2.5
L/disp^(1/3)  6.27   7.40   7.41   7.39   8.51   8.50   8.49   9.50   9.50   9.50
WSA           0.434  0.348  0.338  0.34   0.282  0.276  0.277  0.24   0.233  0.234

(WSA = wetted surface area.)

Body Plan of the Sample Ship (Catamaran)

Example Optimization

Based on the test results, a GA was used to optimise catamaran hull forms for minimum resistance. The different phases were evenly divided over the total number of generations. In this case, resistance was optimised subject to range constraints on the B/T and L/disp^(1/3) ratios. Because of the multi-dimensional solution space (four free parameters in this case), it is not feasible to visualise the complete surface. The highest fitness values in the population are indicated by the dashed line (upper), the average fitness values by the solid line (middle), and the lowest fitness values by the dash-dot line (lower). As the three fitness plots meet, it can be seen that the GA is converging to an optimal part of the solution space.

Conclusions

The work presented in the paper shows that a sparse dataset with a relatively high degree of noise can be fitted effectively with a feed-forward neural network. There may also be benefits from investigating networks with two hidden layers rather than one. The work detailed in this paper has also demonstrated that a combination of GAs and ANNs can be successfully used as an optimisation tool for ship design parameters. The success of a GA is closely related to the selection methods and other parameters used; a particular combination of parameters may not produce a successful optimiser in a different problem domain. In particular, as the solution space becomes more multi-dimensional, the importance of selecting the correct GA parameters increases.

The Interface Used for Calculations

Future Work

The work described here has used a GA to find the optimal solution in a problem space with four parameters. More useful results could be achieved if additional hull parameters were included in the optimisation, such as the longitudinal centre of buoyancy position (LCB), the prismatic or block coefficients (CP or CB), or the midship area coefficient (CM). The GA could also be extended to perform multi-objective optimisation; example objectives might be the simultaneous optimisation of both resistance and seakeeping values. Furthermore, if fuzzy logic were applied to vary the GA limits, the search time could be reduced to a very large extent, thereby saving resources.

References

Andrew Mason, Formsys: research paper on optimisation of vessel resistance.
GA/ANN lecture notes.
