
Report: MECH2450 Assignment 2, Question 2c)

Aim: The aim of this question was to implement the Method of Steepest Descent as a MATLAB program, using Algorithm 9.1 from the MECH2450 textbook as a guide. By implementing this method, we were to find the local minimum of the function given in the problem statement, starting from the given initial point.

Method: The method used was to implement the Method of Steepest Descent algorithm from the textbook. This method finds the minimum of a function by repeatedly computing the minimum of a function g(t) of a single variable t, as follows. Assume that the function f(x, y) has a minimum at a point X0. Firstly, we choose a starting point x. Then we look for the local minimum closest to this point, along the straight line through x in the direction of -∇f(x), which is the direction of steepest descent. That is, we find the value of t and the corresponding point z(t) = x - t∇f(x) at which the function g(t) = f(z(t)) has a minimum. The value of z(t) is then used as a new approximation to X0. The process is repeated, with the original starting point x replaced by z(t).
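As a minimal sketch of how such an implementation could look (not the assignment code itself), the following MATLAB function follows the textbook description above. The names steepest_descent, f, gradf, x0, tol and maxit are illustrative assumptions, and fminsearch is used here as a convenient built-in way to minimise the single-variable function g(t); the actual assignment code may determine t differently.

% Minimal sketch of the Method of Steepest Descent (illustrative only).
% f      - handle to the objective function, taking a column vector x
% gradf  - handle returning the gradient of f at x
% x0     - starting point (column vector)
% tol    - convergence tolerance on the size of the step
% maxit  - maximum number of iterations
function [x, it] = steepest_descent(f, gradf, x0, tol, maxit)
    x = x0(:);
    for it = 1:maxit
        g = gradf(x);                 % -g is the direction of steepest descent
        phi = @(t) f(x - t*g);        % single-variable function g(t) = f(z(t))
        t = fminsearch(phi, 0);       % find the t that minimises f along the line
        xnew = x - t*g;               % z(t), the new approximation
        if norm(xnew - x) < tol       % stop once the step is within tolerance
            x = xnew;
            return
        end
        x = xnew;                     % replace the starting point and repeat
    end
end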

In general terms, this method works quite well for this problem. A limitation, though, is that it will never converge to the true answer, in this case [0 0]. It only works down to the limit of the convergence tolerance and then, depending on the initial starting point, returns a value close to the minimum. For example, with a tolerance of 1e-20 it returns x1 = 3.758981e-021 and x2 = -8.353292e-022; these are not equal to zero, merely within the tolerance. Another downside of this method is that the closer it gets to the answer, the slower it converges: it tends to zigzag, as the successive gradients point nearly orthogonally to the shortest direction to the minimum.
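To illustrate this behaviour, the following usage sketch calls the steepest_descent function from the sketch above on a simple stand-in quadratic with its minimum at the origin. The quadratic, the starting point and the iteration limit are assumptions for illustration only; they are not the function or values from the problem statement.

% Illustrative call only: the quadratic below is a stand-in with its minimum
% at the origin, not the actual function from the assignment.
f     = @(x) x(1)^2 + 2*x(2)^2;       % assumed test function
gradf = @(x) [2*x(1); 4*x(2)];        % its gradient
[x, it] = steepest_descent(f, gradf, [1; 1], 1e-10, 1000);
fprintf('x1 = %e, x2 = %e after %d iterations\n', x(1), x(2), it);
% The returned x1 and x2 are small but nonzero, within the tolerance of the
% true minimum at [0 0], which illustrates the limitation noted above.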

Results and discussion

For the given initial point and a tolerance of 1e-10, the algorithm converged after 34 iterations, with the values x1 = 4.359501e-011 and x2 = 2.179751e-011.

The figure shows the output when the program is run: init_g is the initial starting point, and the function result displayed is the original function from the problem statement evaluated at the determined values of x1 and x2.

It seemed that the starting point could have quite an effect on whether and how the algorithm converged, probably because different local minima were being evaluated. I initially had a problem implementing the separate functions for the gradient, t, and the original function, as I was unsure how to populate them with the respective values for the starting point; after a lot of trial and error, I was able to solve this problem. A major issue in writing this code was finding an appropriate tolerance and a corresponding maximum iteration value without compromising either the speed of the program or the desired accuracy of the result.

Conclusion

In conclusion, the Method of Steepest Descent successfully finds the local minimum closest to the starting point, though only to within a certain tolerance. The actual local minimum in this case lies at x1 = 0 and x2 = 0. My code found the minimum to be x1 = 4.359501e-011 and x2 = 2.179751e-011, with a tolerance of 1e-10.
