
How to Solve Optimization Problems

Marc Niethammer
Formulating an optimization problem needs to go hand in hand with choosing a solution method. Many solution methods have been proposed in the literature. Giving an exhaustive list is next to impossible, but most problems fall into a small set of subcategories, which are described in the following sections together with a brief discussion of their relation to computer vision and image analysis. For details see the references at the end of this document and the references therein.

1 Optimization Methods, Computer Vision, and Image Analysis

1.1 Overview

The material in this section is mainly based on the optimization tree of NEOS (Network Enabled Optimization System) [1]. See [2, 4, 7] for more background, in particular with respect to relations to computer vision and image analysis. Figure 1 shows a hierarchy of optimization problems. Optimization problems (and their respective solution methods) may be subdivided into continuous and discrete optimization problems.

Continuous problems: Involve continuous variables, e.g., $x \in \mathbb{R}^n$.

Discrete problems: Involve discrete variables, e.g., $x \in \mathbb{Z}$, which may be interpreted as group labels.

1.2 Continuous problems

Continuous problems frequently arise in computer vision and image analysis. Many parameter estimation problems (e.g., fitting a model to image information), as well as image reconstruction and segmentation methods, are formulated continuously.

1.2.1 Unconstrained continuous problems

Global optimization: $\min\{f(x)\}$. Aims at finding a globally optimal solution. This may be simple for convex functions or very hard for non-convex problems. General methods are simulated annealing, genetic algorithms, and graduated non-convexity. Useful to avoid local minima.

Nonlinear equations: $\min\{\|f(x)\| : x \in \mathbb{R}^n\}$. Very general problem setting for a given norm $\|\cdot\|$. No special structure that may readily be exploited by an optimization algorithm (other than that the function is continuous and differentiable). Newton's method or direct gradient descent form the basis of most general nonlinear equation solvers. The partial differential equations generated in the variational formulations of image analysis and computer vision mostly fall into this class (with $n$ very large).
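As an illustration of the Newton approach mentioned above, here is a minimal sketch in Python/NumPy; the 2D system $f$ and its Jacobian are made-up examples (not from the original text). It drives $f(x)$ to zero by iterating the Newton update $x \leftarrow x - J(x)^{-1} f(x)$:

```python
import numpy as np

def f(x):
    # Hypothetical 2D nonlinear system: intersect a circle with an exponential curve.
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     np.exp(x[0]) + x[1] - 1.0])

def jacobian(x):
    # Analytic Jacobian of f.
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [np.exp(x[0]), 1.0]])

def newton(x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:  # converged: ||f(x)|| is numerically zero
            break
        # Newton step: solve J(x) dx = -f(x) rather than forming J^{-1} explicitly.
        dx = np.linalg.solve(jacobian(x), -fx)
        x = x + dx
    return x

print(newton([1.0, -2.0]))  # converges to the intersection near (1.00, -1.73)
```

For the large $n$ arising from discretized variational problems, the linear solve would be replaced by an iterative method, or direct gradient descent would be used instead.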
Nonlinear least-squares: $\min\{r(x) : x \in \mathbb{R}^n\}$, $r(x) = \frac{1}{2}\|f(x)\|_2^2$. The nice structure of the problem allows (for sufficiently smooth $f$) easy computation of the energy gradient and its Hessian, which may be exploited by the optimizer. This problem frequently occurs in simple parameter-estimation problems.
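The structure can be made explicit with a Gauss-Newton iteration: stacking $f$ as a residual vector, the gradient of $r$ is $J^T f$ and the Hessian is well approximated by $J^T J$. A minimal sketch (Python/NumPy; the exponential model and synthetic data are hypothetical, chosen only to illustrate a simple parameter-estimation problem):

```python
import numpy as np

# Hypothetical parameter estimation: fit y = a * exp(b * t) to noisy samples.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 30)
y = 2.0 * np.exp(-1.3 * t) + 0.01 * rng.standard_normal(t.size)

def residuals(p):
    a, b = p
    return a * np.exp(b * t) - y            # f(x): one residual per data point

def jacobian(p):
    a, b = p
    e = np.exp(b * t)
    return np.column_stack([e, a * t * e])  # d(residual) / d(a, b)

p = np.array([1.0, 0.0])                    # initial guess
for _ in range(20):
    J, f = jacobian(p), residuals(p)
    # Gauss-Newton step: (J^T J) dp = -J^T f, i.e., gradient J^T f, Hessian approx J^T J.
    dp = np.linalg.solve(J.T @ J, -J.T @ f)
    p = p + dp
    if np.linalg.norm(dp) < 1e-10:
        break

print(p)  # approximately (2.0, -1.3)
```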

Figure 1: Optimization hierarchy. Image from [1].

Nondifferentiable optimization: Most iterative solution methods require smoothness of the objective function to be able to take derivatives to determine the best solution direction. However, this precludes their application to nondifferentiable objective functions (for example, in the case of minimax problems).

1.2.2 Constrained continuous problems

Linear programming: $\min\{c^T x : Ax = b, x \geq 0\}$. Used, for example, in an approximation of integer programming problems [6] (see Section 1.3) for image segmentation. A minimal sketch is given after this list.

Nonlinearly constrained/bound constrained: Allows for the incorporation of range, equality, and inequality constraints.

Network programming: At the core of graph-cut algorithms [5, 3], which have recently been used extensively for image segmentation. Minimum cut problems, maximum flow problems, shortest path problems, etc.

Stochastic programming: For optimization problems where the data is not known accurately, e.g., due to measurement error or dependency on future data. A solution is sought that is optimal for a set of scenarios. The Kalman filter is an example: it is an optimal estimator for linear systems with Gaussian noise.
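For the linear program referenced above, a minimal sketch using SciPy; the cost vector and constraints are made-up toy values, not from the text. SciPy's `linprog` accepts exactly the standard form $\min\{c^T x : Ax = b, x \geq 0\}$ via its `A_eq`/`b_eq` and `bounds` arguments:

```python
import numpy as np
from scipy.optimize import linprog

# Toy standard-form LP: min c^T x  subject to  A x = b,  x >= 0.
c = np.array([1.0, 2.0, 0.0])              # cost vector
A_eq = np.array([[1.0, 1.0, 1.0],          # equality constraints A x = b
                 [2.0, 0.5, 1.0]])
b_eq = np.array([1.0, 1.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))  # bounds=(0, None) encodes x >= 0
print(res.x, res.fun)
```

LP relaxations for labeling problems have the same shape, only with one variable per pixel-label indicator and consistency constraints in place of the toy rows above.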

1.3 Discrete problems

Segmentation problems may also be regarded as discrete optimization problems, where each segmentation class is assigned a discrete label and the energy to be optimized is based on the labelings and the interrelations (weightings) between them as induced through image information. Integer programming problems are in general NP-hard. However, approximate solutions may be computed; see, for example, the recent paper by Komodakis and Tziritas [6]. For certain forms of energies the globally optimal solutions may be found efficiently (e.g., for binary labelings with submodular energies, reformulated as a network problem and solved through graph cuts; see the discussion in Section 1.2).
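To make the graph-cut reformulation concrete, here is a minimal sketch using NetworkX; the tiny two-"pixel" graph and its capacities are invented for illustration (real segmentation graphs have one node per pixel plus the two terminals):

```python
import networkx as nx

# Tiny binary-labeling example: two "pixels" p and q, plus source s (one label)
# and sink t (the other label). Terminal edge capacities encode the data costs;
# the p-q edges encode the (submodular) smoothness cost for differing labels.
G = nx.DiGraph()
G.add_edge("s", "p", capacity=3.0)   # cost of giving p the sink label
G.add_edge("s", "q", capacity=1.0)
G.add_edge("p", "t", capacity=2.0)   # cost of giving p the source label
G.add_edge("q", "t", capacity=4.0)
G.add_edge("p", "q", capacity=1.0)   # smoothness penalty if p and q differ
G.add_edge("q", "p", capacity=1.0)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
print(cut_value)    # minimum energy of the binary labeling
print(source_side)  # pixels assigned the source label
print(sink_side)    # pixels assigned the sink label
```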

References
[1] NEOS guide. http://www-fp.mcs.anl.gov/otc/Guide.

[2] ICCV tutorial on discrete optimization in computer vision. http://www.csd.uoc.gr/~komod/ICCV07_tutorial/, 2007.

[3] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(11):1222–1239, 2001.

[4] John W. Chinneck. Practical optimization: A gentle introduction. http://www.sce.carleton.ca/faculty/chinneck/po.html.

[5] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(2):147–159, 2004.

[6] N. Komodakis and G. Tziritas. Approximate labeling via graph cuts based on linear programming. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(8):1436–1453, 2007.

[7] N. A. Thacker and T. F. Cootes. Vision through optimization. http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/BMVA96Tut/BMVA96Tut.html.
