Mathematical Optimization Terminology: A Comprehensive Glossary of Terms
Ebook, 856 pages, 6 hours

About this ebook

Mathematical Optimization Terminology: A Comprehensive Glossary of Terms is a practical book with the essential formulations, illustrative examples, real-world applications, and main references on the topic.

This book helps readers gain a practical understanding of optimization, enabling them to apply it to their own algorithms, and addresses the need for a practical publication that introduces these concepts and techniques.

  • Discusses real-world applications of optimization and how it can be used in algorithms
  • Explains the essential formulations of optimization in mathematics
  • Covers a more practical approach to optimization
Language: English
Release date: Nov 10, 2017
ISBN: 9780128052952


    Mathematical Optimization Terminology

    A Comprehensive Glossary of Terms

    First Edition

    André A. Keller

    Table of Contents

    Cover image

    Title page

    Copyright

    Author Biography

    Preface

    Acknowledgments

    Chapter 1: Elements of Mathematical Optimization

    Abstract

    1.1 Introduction

    1.2 History of Mathematical Optimization

    1.3 Formulation of Optimization Problems

    1.4 Classification of Optimization Methods

    1.5 Design and Choice of an Algorithm

    Chapter 2: Glossary of Mathematical Optimization Terminology

    Abstract

    2.1 Introduction

    2.2 Glossary of Terms Alphabet A

    2.3 Glossary of Terms Alphabet B

    2.4 Glossary of Terms Alphabet C

    2.5 Glossary of Terms Alphabet D

    2.6 Glossary of Terms Alphabet E

    2.7 Glossary of Terms Alphabet F

    2.8 Glossary of Terms Alphabet G

    2.9 Glossary of Terms Alphabet H

    2.10 Glossary of Terms Alphabet I

    2.11 Glossary of Terms Alphabet J

    2.12 Glossary of Terms Alphabet K

    2.13 Glossary of Terms Alphabet L

    2.14 Glossary of Terms Alphabet M

    2.15 Glossary of Terms Alphabet N

    2.16 Glossary of Terms Alphabet O

    2.17 Glossary of Terms Alphabet P

    2.18 Glossary of Terms Alphabet Q

    2.19 Glossary of Terms Alphabet R

    2.20 Glossary of Terms Alphabet S

    2.21 Glossary of Terms Alphabet T

    2.22 Glossary of Terms Alphabet U

    2.23 Glossary of Terms Alphabet V

    2.24 Glossary of Terms Alphabet W

    2.25 Glossary of Terms Alphabet Z

    Chapter 3: Elements of Technical Background

    Abstract

    3.1 Introduction

    3.2 Contextual A–Z Glossary

    Appendix A: Basic Features of Mathematical Optimization

    A.1 Introduction

    A.2 Types of Optimization

    A.3 Optimization Methods and Techniques

    A.4 Algorithms and Search Strategies

    A.5 Optimization Problems

    A.6 Application Areas of Optimization

    Acronyms

    Author Index

    Copyright

    Academic Press is an imprint of Elsevier

    125 London Wall, London EC2Y 5AS, United Kingdom

    525 B Street, Suite 1800, San Diego, CA 92101-4495, United States

    50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States

    The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom

    © 2018 Elsevier Ltd. All rights reserved.

    No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

    This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

    Notices

    Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary.

    Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

    To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

    Library of Congress Cataloging-in-Publication Data

    A catalog record for this book is available from the Library of Congress

    British Library Cataloguing-in-Publication Data

    A catalogue record for this book is available from the British Library

    ISBN: 978-0-12-805166-5

    For information on all Academic Press publications visit our website at https://www.elsevier.com/books-and-journals

    Publisher: Candice Janco

    Acquisition Editor: Glyn Jones

    Editorial Project Manager: Anna Valutkevich

    Production Project Manager: Omer Mukthar

    Cover Designer: Victoria Pearson

    Typeset by SPi Global, India

    Author Biography

    André A. Keller is an associate researcher at the Computer Science Laboratory of Lille University of Science and Technology, France. He received his PhD (doctorat d'Etat) in Economics/Operations Research from the University of Paris I Panthéon-Sorbonne. He is a reviewer for international journals, including AMM, Ecol. Model., and JMAA, and a member of the JAST editorial board. He has presented several plenary lectures at international conferences and has been a visiting professor in different countries. As a professor (Professeur des Universités), he has taught applied mathematics and optimization techniques, econometrics, microeconomics, game theory, and more at various universities in France. His experience centers on building, analyzing, and forecasting with large-scale macroeconomic systems within research groups of the French CNRS, notably at the University of Paris X-Nanterre. His other domains of experience notably include discrete mathematics, circuit analysis, time-series analysis, spectral analysis, and fuzzy logic. In addition to numerous articles and book chapters, he has published books on topics such as time-delay systems and multiobjective optimization.

    Update: March 12, 2017

    Preface

    Scientific information on contemporary mathematical optimization is broad and diverse, for reasons related to the long history of a cross-disciplinary field, to the recent and accelerated development of computer-based decentralized computation, and to the many real-life applications that accompany this progress. The academic literature includes many remarkable surveys that assess the origin and circumstances of a discovery (a concept, a methodology, a technique) and put the evolution of a particular method into perspective. Encyclopedias on optimization gather surveys on many aspects of a particular approach. Many textbooks present the theory and practice of an optimization approach with examples (such as multiobjective optimization, mixed optimization in discrete and continuous variables, or combinatorial optimization). Codes and commercial and free optimization software are available to users. Most glossaries on optimization are book annexes, though some are available online. These glossaries often consist of an undifferentiated set of general (sometimes formalized) definitions, a few lines each, of the main concepts and of the more technical terms to which these concepts refer. The user may also feel the need to turn to more focused, precise, and illustrated information that shows the value of an approach. This book proposes such an approach to facilitate an introduction to the domain.

    The presentation is based on a chosen (nonexhaustive) set of optimization terms and is oriented toward users who wish to be informed quickly and thoroughly, and to take up immediately both the standard and the more advanced techniques of optimization for their own applications. Thus, the project seeks to address a variety of dimensions of contemporary optimization, such as the optimization of univariate and multivariate continuous functions, the optimization of scalar- or vector-valued functions, constrained and unconstrained programming, convex and nonconvex optimization, continuous and discontinuous optimization, optimization with one or more objectives, hierarchical (or multilevel) optimization, combinatorial optimization, graph theory and networks, game theory, dynamic programming, uncertainty in decision making (e.g., a fuzzy environment with imprecise data), and decomposition methods specific to large real-world applications.

    This book can be considered a portable guide whose size may facilitate users' access to the field of optimization while allowing immediate implementation and deepening. The central part of the book (Chapter 2) defines and presents entries on mathematical optimization terminology in alphabetical order. Full completeness (unrealistic or difficult to achieve) is not the purpose of a glossary of limited scope. The list of retained entries is based instead on reflective personal experience and aims to be complete in the sense of covering the contemporary dimensions of optimization. The fields of application (engineering, industry, management, economics and finance, medicine, etc.) are varied, but not all of them can be retained within this scope. However, this list of terms should offer opportunities for future updating by using the proposed methodology for gathering information on optimization.

    Each entry, whether a single term (a concept) or a compound term, is subjected to a comparable (but not systematic) set of questions about the origin, definition, and scope of a concept, its range of applications, its mathematical formulation, an algorithm, and illustrative examples. An information block accompanying each entry includes practical data to facilitate further study: bibliographic references, MSC 2010 codes, cross-references to other terms directly related to optimization (Chapter 2) or to its technical context (Chapter 3), and presentations of online sites for particular terms.

    This glossary is dedicated to the items and expressions of optimization and programming, together with common acronyms. In all, 480 items are defined in this book. The book comprises three main parts and useful indexes, organized as follows. Chapter 1 presents the elements of mathematical optimization, including a short history, the standard formulation of mathematical optimization, methods and algorithms, the design and choice of algorithms, and references. Chapter 2 provides the main glossary of mathematical optimization terminology, with bibliography; this glossary includes 317 items. Chapter 3 specifies 163 further items from the technical background of mathematics, operations research, statistics, and probability. The book is introductory but not elementary: it provides the required background in fundamental mathematical analysis, mathematical programming, techniques of operations research, and probability theory.

    André A. Keller, Villeneuve d’Ascq, European Metropolis of Lille, France

    April 2017

    Acknowledgments

    The author of this book has benefited from various recent contributions in the field of mathematical optimization research: collaborations with colleagues at the University of Lille in France, plenary lectures given abroad in various countries (e.g., the United States, Canada, the UK, Germany, Japan, China, and Russia in 2009–17), and the electronic documentation of the University of Lille. The interlibrary loan service of Mrs Lebrun at the University UVHC of Valenciennes in France also provided the author valuable assistance in finding library sites, books, and copies of articles in France and abroad.

    The University of Lille enabled the author to teach game theory (a course for doctoral students in Game Theory and Industrial Organization in 1993–96) and to present academic conference papers, notably at the Annual Meeting on Mathematical Economics in 2010 and 2011. Prof. Nicolas Vaneecloo associated the author with his CNRS research group on socioeconomic studies in 2009–12. In this period, a seminar on the complex dynamics of economic systems was created with Assistant Professors N. Rahmania of the Paul Painlevé Mathematical Laboratory of Lille and B. Dupont of the Department of Economics. In particular, B. Dupont integrated the author's contribution on Time-Delay Systems with Application to Economic Dynamics and Control (a Lambert Academic Publishing book by Keller, 2011) into his teaching module on Economic Modelization with Maple.

    The author thanks Philippe Mathieu, professor at the University of Lille in France, for associating him to this day with his research unit on Multi-Agent Systems and Behavior. This research unit is part of the Interaction and Collective Intelligence division of the Center for Research in Computer Science, Signal, and Automatic Control of Lille. Prof. Philippe Mathieu showed interest in this project.

    The author is obliged to Prof. Nikos Mastorakis, President of the WSEAS International Conference, for giving him the opportunity to present invited plenary lectures on the subjects of this book. The author would also like to thank Prof. Elias C. Aifantis for encouraging research projects and publications in JMBM on reaction-diffusion systems (2012) and convex underestimating relaxation techniques (2015). Prof. Aifantis was Director of Mechanics and Materials at the Polytechnic School of the Aristotle University of Thessaloniki in Greece and also participated in activities at Michigan Technological University in the United States.

    The author expresses his gratitude to Anna Valutkevich, Editorial Project Manager at Elsevier, for her patient and stimulating assistance in preparing this book. Thanks also go to Omer Mukthar, Production Project Manager at Elsevier, for his professional cooperation in realizing this technical book.

    André A. Keller, Villeneuve d’Ascq, France

    Chapter 1

    Elements of Mathematical Optimization

    Abstract

    Optimization methods and applications represent a vast literature, and the intensity and variety of research papers have increased in recent years. This introduction gives a precise historical overview of these developments. Evolutionary algorithms have emerged in combinatorial optimization and in optimization over continuous or mixed design variables, because conventional methods fail when faced with the formal complexity of full-scale real-life applications. Two major types of problems stand out: decision problems involving a single objective and problems requiring several objectives to be considered jointly. Optimization problems can be classified by their technical characteristics; the classification of methods, however, differs for single- and multiobjective optimization. For single-objective problems, a distinction is drawn between direct search methods and derivative-based descent methods. Classification methods for multiobjective optimization are based on the number of Pareto solutions as well as the preferences of the decision maker.

    Keywords

    Classical method; Computational complexity; Evolutionary algorithm; Genetic algorithm; Hybrid evolution algorithm; Metaheuristics; Multiobjective optimization problem; Nondifferential/nonsmooth optimization; Simplex method; Game theory

    1.1 Introduction

    The Handbook of Global Optimization by Horst and Pardalos (1995) (volume 1) introduced optimization techniques such as concave optimization, DC optimization, quadratic optimization, complementarity problems, minimax, multiplicative programming problems, Lipschitz optimization, fractional programming, network flow optimization, interval methods, and stochastic programming (two-phase methods, random search methods, simulated annealing, etc.). The second volume, by Pardalos and Romeijn (2002), included various metaheuristics such as simulated annealing, genetic algorithms (GAs), neural networks, tabu search, shake-and-bake methods, and deformation methods.

    The Handbook of Applied Optimization (1095 pages) by Pardalos and Resende (2002, pp. 567–991) provided applications in a variety of domains: agriculture (e.g., forestry), aerospace, biology and chemistry, energy (e.g., electrical power systems, oil and gas, nuclear engineering), environment (e.g., air pollution), finance (e.g., portfolio selection) and economics, manufacturing, mechanics, telecommunications, and transportation. The five-volume Encyclopedia of Optimization (about 2710 pages) by Floudas and Pardalos (2001) collects papers on a broad range of methods and technical aspects of numerous approaches and applications; the contributions are arranged in alphabetical order of title. The Handbook of Test Problems in Local and Global Optimization (Floudas et al., 1999, 2010) contains test problems in local and global optimization for a wide range of real-world problems,¹ for example, quadratic programming, bilinear, biconvex, and DC (difference of convex functions) problems.

    Several books also cover most of the problems and applications in global optimization. Practical Optimization by Gill, Murray, and Wright (1981) treats optimality conditions, unconstrained methods for univariate functions, multivariate nonsmooth functions, nonderivative methods, methods for large-scale problems, and practicalities (use of software packages, properties of computed solutions, accuracy, scaling). The book Global Optimization: Deterministic Approaches by Horst and Tuy (1996) also covers parametric concave programming, outer approximation, the branch-and-bound technique, decomposition of large-scale problems, and the particular challenges of concave minimization (bilinear programming, complementarity problems). The relevant textbook by Geiger and Kanzow (2000) (in German) introduces the theory and numerical practice of constrained optimization, including optimality conditions, linear programming, nonlinear optimization, and nonsmooth optimization (Lagrangian duality, regularization processes to improve the optimality conditions, subgradient methods, and bounded approximation). The textbook Linear and Nonlinear Programming by Luenberger and Ye (2008) extends this presentation to constrained minimization problems using primal methods, penalty and barrier methods, dual and cutting-plane methods, and primal-dual methods. The book by Hastings (2006) introduces readers to the extended domain of operations research techniques using the software package Mathematica®. The electronic book (eBook) by Weise (2009) on global optimization algorithms focuses on evolutionary computation algorithms, including GAs, genetic programming (GP), learning classifier systems, evolution strategy (ES), differential evolution (DE), particle swarm optimization (PSO), and ant colony optimization. The second edition of this eBook includes 2335 references, for which links are generally provided.

    For nonconvex problems, a number of convexification techniques have been proposed, and other algorithms have been introduced to cope with the complexity of real-life optimization problems. Holland (1975) described two main factors that permitted the development of such GAs: the first is the computational power of parallel machines, and the second is interdisciplinary cooperation between researchers. The book Genetic Programming: On the Programming of Computers by Means of Natural Selection by Koza (1992) introduced GP and evolutionary computation (see also Jacob, 2001).

    The convexity of functions and sets in an optimization problem is a fundamental concept. The foundations of convex analysis (e.g., properties of convexity and duality correspondences) are notably presented in the book Convex Analysis by Rockafellar (1970) (see also Hiriart-Urruty & Lemaréchal, 2000). The book Convex Optimization by Boyd and Vandenberghe (2004) centers on the theory and practice of convex optimization (see also Bertsekas, 2009). Indeed, primal-dual methods require a convex structure (at least locally). In economics, a fundamental problem consists of allocating scarce resources among alternative purposes: one must determine the instruments within a feasible set (reflecting the scarcity of resources) so as to maximize an objective, as presented in Intriligator (1971), Intriligator (1981), and Arrow and Intriligator (1981). The convexity assumption is a necessary condition for the existence of an equilibrium allocation.

    1.2 History of Mathematical Optimization

    The historical development of mathematical optimization comprises three broad approaches: the classical methods, the evolutionary algorithms (EAs), and, more recently, the hybrid methods. Conventional methods focus on optimizing a single objective, with or without additional constraints. EAs have been developed more recently to solve the most difficult optimization problems, those with several objectives. Remarkably, both types of problems have origins dating back to the 19th century: vector optimization goes back to Edgeworth (1881) and Pareto (1896), the two economists who developed the theory of indifference curves and defined the basic concept of optimality in multiobjective optimization (MOO).

    1.2.1 Origin and Evolution of Classical Methods

    The foundation of mathematical programming relies on two major scientific works: the publication of the Theory of Games and Economic Behavior by von Neumann and Morgenstern (1953) and the discovery of the simplex method by George B. Dantzig in 1947 (see Dantzig & Wolfe, 1960). In the same year, John von Neumann developed the theory of duality.

    A short history by Minoux (1986) identified four decades of development in mathematical programming until 1987. The first ten years were devoted to linear programming and the theoretical foundations of nonlinear programming. The second decade saw the introduction of integer programming, network theory, nonconvex programming, dynamic programming, and control theory; decomposition techniques for solving large-scale systems were developed in the same period. The third decade saw the development of a theory of nondifferentiable/nonsmooth optimization and the combination of mathematical programming with graph theory, leading to combinatorial optimization (see Papadimitriou & Steiglitz, 1982). The fourth decade was influenced by the introduction of computational complexity (see Papadimitriou, 1995). More recently, Chiang (2009) divided the history of optimization theory into three major waves: the first was linear programming and the simplex method in the late 1940s, the second came with convex optimization and interior-point methods at the end of the 1980s, and the third was characterized by nonconvex optimization.

    1.2.2 Development of Evolutionary Algorithms


    The first use of heuristic algorithms goes back to 1948,³ when Turing (1948) described heuristic search, having earlier applied such ideas in breaking the German Enigma code during World War II (see also Angelov, 2016; Yang, 2014). Thereafter, heuristic and metaheuristic algorithms for solving programming problems arose from the difficulties encountered with classical optimization methods.

    Abido (2010) mentioned four drawbacks of solving MOO problems with conventional algorithms: (1) the need for repeated application of an algorithm to find the Pareto-optimal solutions, (2) the requirement of some knowledge about the problem, (3) the sensitivity of an algorithm to the shape of the Pareto-optimal front, and (4) the spread of the Pareto-optimal solutions depending on the chosen algorithm. Heuristic algorithms are suitable solvers for hard, high-dimensional real-life problems (see Tong, Chowdhury, & Messac, 2014).⁴

    Heuristics and metaheuristics refer to approximate resolution methods. Heuristics denote techniques that seek near-optimal solutions at low cost. Metaheuristics are characterized by a master strategy: they can guide and correct the operations of subordinate heuristics (see Reeves, 1995). Thus, metaheuristics such as EAs may refer to a higher-level procedure that combines different heuristic operations for exploring a search area.

    EAs notably include GAs, ES, and GP. EAs also include, but are not limited to, nature-inspired algorithms such as neural methods, simulated annealing, tabu search, ant colony systems, and other swarm intelligence techniques. The capacity of such methods to solve NP-hard⁵ combinatorial problems is well known (e.g., the traveling salesperson, scheduling, graph, and transportation problems). The book by Michalewicz (1999) introduced metaheuristics for solving numerical optimization problems. Overviews of evolutionary techniques with applications accompany N. Srinivas and K. Deb's nondominated sorting GA (Srinivas & Deb, 1994), C.M. Fonseca and P.J. Fleming's multiobjective GA (see Fonseca & Fleming, 1993, 1995), and P. Hajela and L. Lee's weighted-based GA (see Hajela & Lee, 1996; Zitzler, 1999), as well as Schaffer's vector-evaluated GA (see Schaffer, 1984).

    EAs are mainly based on the principles of Darwinian evolution, characterized as follows. Individuals within populations (or species) differ. Traits are passed on to offspring. More offspring are produced than can survive in each generation. The members that survive are naturally selected as those with the most favorable performances. This natural process acts on individuals, with consequences for the corresponding population. The evolution process is only partially deterministic (i.e., partially random); it is not perfect and can produce new traits besides existing ones. Such algorithms are regarded as population-based stochastic algorithms, whose elements include a population of individuals, fitness evaluation, genetic operators guiding evolution, and selection.
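    These elements can be sketched in a minimal GA. The code below is an illustrative sketch, not an algorithm from this book: it maximizes the OneMax fitness (the number of ones in a bit string) using tournament selection, one-point crossover, bit-flip mutation, and an elitist record of the best individual found; all parameter values are assumptions chosen for the example.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02):
    """Maximize `fitness` over bit strings via selection, crossover, mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # Tournament selection: the fitter of two random individuals survives.
        selected = [max(random.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        children = []
        for i in range(0, pop_size, 2):
            p1, p2 = selected[i], selected[i + 1]
            if random.random() < crossover_rate:
                cut = random.randint(1, n_bits - 1)          # one-point crossover
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            children += [p1, p2]
        # Bit-flip mutation introduces new traits besides existing ones.
        pop = [[b ^ (random.random() < mutation_rate) for b in c] for c in children]
        best = max(pop + [best], key=fitness)                # elitist record
    return best

# OneMax: fitness is simply the number of ones in the bit string.
random.seed(3)
solution = genetic_algorithm(sum)
```

    With the elitist record, the returned bit string rapidly approaches the all-ones optimum even with this small population.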

    The MOO approach developed quickly from the mid-1980s with the help of EAs.⁶ An early attempt to use GAs to solve MOO problems was made by Ito, Akagi, and Nishikawa (1983). Goldberg (1989) proposed Pareto-set fitness assignment to solve Schaffer's multiobjective problems. In the same period, two books were devoted to the theory and techniques of MOO, namely Chankong and Haimes (1983) and Sawaragi, Nakayama, and Tanino (1985). The fast expansion of this approach was stimulated by numerous real-world applications from science, technology, management, and finance. Rangaiah (2009) was the first publication on MOO with a focus on chemical engineering; the applications in this area are notably in the chemical, mineral processing, oil and gas, petroleum, and pharmaceutical industries. Lai and Hwang (1994) extended the MOO approach to fuzzy decision-making problems.⁷

    The first use of genetic-based search algorithms for MOO problems goes back to the pioneering work of Rosenberg (1967) (see also Coello, 1999). In his brief history of metaheuristics, Yang (2014, pp. 16–20) identified the relevant decades in the development of EAs. The 1960s and 1970s saw the development of GAs at the University of Michigan: John Holland (1975) proposed a search method based on Darwinian evolution concepts and the natural principles of biological systems, with crossover, mutation, and selection operators used to solve difficult combinatorial problems. In the same period, evolution strategies were initiated at the Technical University of Berlin, where Ingo Rechenberg in 1971 (see Rechenberg, 1973, in German) and Schwefel (1977, in German) proposed a search method for solving optimization problems. Fogel (1994) introduced evolutionary programming, using simulated evolution as a learning process.⁸

    Following Yang, the 1980s and 1990s were fruitful decades for metaheuristic algorithms. Kirkpatrick, Gelatt, and Vecchi (1983) pioneered the simulated annealing algorithm, inspired by the annealing process of metals. In 1986, Fred Glover's tabu search (Glover, 1986) introduced the use of memory. In 1992, Marco Dorigo's search technique (Dorigo, 1992) drew on the swarm intelligence of ant colonies, which use pheromones to communicate. Later, in 1995, Kennedy and Eberhart (1995) developed PSO, inspired by the swarm intelligence of fish and birds. In 1997, Storn and Price (1997) proposed the differential evolution (DE) algorithm, a vector-based EA that proved to be more efficient than a genetic algorithm.
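    The vector-based character of DE can be illustrated with a minimal sketch of the common DE/rand/1/bin variant (an assumption of this example, not code from the book): each target vector is perturbed by a scaled difference of two other population members, mixed by binomial crossover, and replaced only if the trial vector improves the score. The sphere test function and all parameter values are illustrative choices.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200):
    """Minimize f over a box using DE/rand/1 mutation and binomial crossover."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: base vector plus a scaled difference of two others.
            a, b, c = random.sample([x for j, x in enumerate(pop) if j != i], 3)
            mutant = [ai + F * (bi - ci) for ai, bi, ci in zip(a, b, c)]
            # Binomial crossover mixes mutant and target coordinates.
            j_rand = random.randrange(dim)
            trial = [mutant[j] if (random.random() < CR or j == j_rand)
                     else pop[i][j] for j in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            s = f(trial)
            if s < scores[i]:                      # greedy one-to-one selection
                pop[i], scores[i] = trial, s
    return min(zip(scores, pop))

# Sphere function: global minimum 0 at the origin.
random.seed(0)
best_score, best_x = differential_evolution(lambda x: sum(v * v for v in x),
                                            [(-5, 5)] * 3)
```

    The greedy one-to-one replacement keeps the population size constant while monotonically improving each slot, which is a key reason for DE's efficiency on continuous problems.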

    In recent years, other nature-inspired algorithms have been introduced, such as the harmony search (HS) algorithm for distribution, transport, and scheduling (2001), honeybee algorithms (2004), the firefly algorithm (FA, 2007), the cuckoo search algorithm (CSA, 2009), the bat algorithm (BA, 2010), based on echolocation behavior, and the flower pollination algorithm (FPA, 2012).

    1.2.3 Contemporary Emergence of Hybrid Approaches

    Hybrid evolutionary algorithms are also called memetic algorithms (MAs) (see Ishibuchi & Yoshida, 2002). Grosan and Abraham (2007) emphasized the need for hybrid EAs in handling real-world problems involving complexities and various uncertainties (e.g., noisy environments, imprecision of data, vagueness in decisions).

    Knowles and Corne (2005) reviewed MAs for MOO problems. Mashwani (2011) surveyed hybrid MOEAs, showing how hybridization can be designed (1) to use one algorithm and improve it with other techniques, (2) to use multiple operators in an EA, and (3) to improve MOGA solutions by implementing an effective local search. The Memetic-PAES algorithm proposed by Knowles and Corne (2005) combined the local search strategy of the Pareto-archived evolution strategy (PAES) with the use of a GA. Thangaraj, Pant, Abraham, and Bouvry (2011) reviewed hybrid optimization techniques in which the main algorithm, PSO, is combined with a local or a global search algorithm. Zamuda, Brest, Boskovic, and Zumer (2009) retained DE as the original algorithm, coupled with a local search strategy. Wang, Cai, Guo, and Zhou (2007) extended the hybrid algorithms with global and local search strategies for solving constrained MOO problems. Garrett and Dasgupta (2006) analyzed the performance of hybrid EAs for multiobjective quadratic assignment problems (QAPs). The inclusion of local searches generally improves the performance of MOEAs.

    The basic idea is to apply a local search to new offspring. The improved offspring then compete with the population of survivors for the next generation. Tang and Wang (2013) reviewed the new trend of developing hybrid MOEAs by combining concepts and components of different metaheuristics. Whitley, Gordon, and Mathias (1994) identified two forms of hybrid genetic search: the first type uses Lamarckian evolution, and the second introduces an additional local search. In this study, we present the Lamarckian search strategy with the MOGLS and AbYSS algorithms (i.e., adaptive scatter search algorithms). The ZDT4 test function demonstrates the ability of such algorithms to generate the Pareto-optimal front.
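    The offspring-refinement loop described above can be sketched in a minimal single-objective form. This is an illustrative toy (not MOGLS, AbYSS, or Memetic-PAES themselves): the test function, operators, and all parameter values are invented for the example.

```python
import math
import random

def rastrigin(x):
    """Multimodal test function; global minimum 0 at the origin."""
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

def local_search(x, f, step=0.1, iters=20):
    """Coordinate-wise hill climbing: the 'memetic' refinement of an offspring."""
    best, fbest = list(x), f(x)
    for _ in range(iters):
        for i in range(len(best)):
            for d in (-step, step):
                cand = list(best)
                cand[i] += d
                fc = f(cand)
                if fc < fbest:
                    best, fbest = cand, fc
    return best

def memetic(f, dim=2, pop_size=20, gens=50, bound=5.12):
    random.seed(1)  # reproducible run
    pop = [[random.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        offspring = []
        for _ in range(pop_size // 2):
            a, b = random.sample(pop[:pop_size // 2], 2)        # select fitter parents
            child = [(ai + bi) / 2.0 + random.gauss(0.0, 0.3)   # crossover + mutation
                     for ai, bi in zip(a, b)]
            offspring.append(local_search(child, f))            # refine each offspring
        pop = sorted(pop + offspring, key=f)[:pop_size]         # survivors to next generation
    return min(pop, key=f)

best = memetic(rastrigin)
print(rastrigin(best))
```

The local search is what distinguishes the memetic loop from a plain GA: each child is pushed to a nearby local optimum before competing for survival.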

    1.3 Formulation of Optimization Problems

    Decision problems usually involve only one goal to achieve. Decision makers may, however, also have several conflicting objectives, and such problems require a specific formal treatment. The set of feasible solutions is bounded by constraints that must be satisfied.

    1.3.1 Single-Objective Optimization

    Let the optimization problem be

    $$\min_{\mathbf{x} \in X} f(\mathbf{x}),$$

    where $X \subseteq \mathbb{R}^n$ is the feasible set (or opportunity set in economics).

    A nonstrict global minimum is a solution vector $\mathbf{x}^*$ such that $f(\mathbf{x}^*) \leq f(\mathbf{x})$ for all $\mathbf{x} \in X$. A local minimum of the objective function f over X satisfies $f(\mathbf{x}^*) \leq f(\mathbf{x})$ for all $\mathbf{x} \in X$ in some neighborhood of $\mathbf{x}^*$. More generally, a convex constrained optimization program is such that the objective function is convex and the constraints are concave (inequalities), or both concave and convex (equalities). Formally, a minimization problem with m inequality constraints and p equality constraints is represented by

    $$\min_{\mathbf{x}} \; f(\mathbf{x}) \quad \text{subject to} \quad g_i(\mathbf{x}) \geq 0, \; i = 1, \ldots, m, \qquad h_j(\mathbf{x}) = 0, \; j = 1, \ldots, p.$$
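    As a numerical illustration of such a convex program (convex objective, linear inequality constraint written in the g(x) ≥ 0 form), a small invented instance can be solved with a general-purpose NLP solver; any solver would do, and the problem data here are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2   # convex objective
g = lambda x: 1.0 - x[0] - x[1]                        # linear (hence concave) constraint, g(x) >= 0

# SLSQP handles inequality constraints in exactly this g(x) >= 0 convention.
res = minimize(f, x0=np.zeros(2), method="SLSQP",
               constraints=[{"type": "ineq", "fun": g}])
print(res.x)   # the minimizer, approximately [0., 1.]
```

Geometrically, the solution is the projection of the unconstrained minimizer (1, 2) onto the line x₀ + x₁ = 1, which is the point (0, 1).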

    An optimization problem is often illustrated by a maximization problem consisting of a quadratic objective function and linear constraints. The unconstrained and constrained problems are both examined here.

    The unconstrained problem is

    $$\max_{\mathbf{x}} \; f(\mathbf{x}) = \mathbf{c}^T \mathbf{x} + \tfrac{1}{2}\mathbf{x}^T Q \mathbf{x},$$

    where Q is a regular symmetric n×n matrix and c an n×1 vector of coefficients. If Q is negative definite, $f(\bar{\mathbf{x}})$ is a global maximum, attained at $\bar{\mathbf{x}} = -Q^{-1}\mathbf{c}$. The quadratic-linear problem is

    $$\max_{\mathbf{x}} \; \mathbf{c}^T \mathbf{x} + \tfrac{1}{2}\mathbf{x}^T Q \mathbf{x} \quad \text{subject to} \quad A\mathbf{x} \leq \mathbf{b},$$

    where A is an m×n matrix and c an n×1 vector of coefficients. Let y be the m×1 vector of multipliers associated with the m constraints. Using the n+m first-order conditions of the Lagrangian $L(\mathbf{x}, \mathbf{y}) = \mathbf{c}^T\mathbf{x} + \tfrac{1}{2}\mathbf{x}^T Q\mathbf{x} + \mathbf{y}^T(\mathbf{b} - A\mathbf{x})$, we finally get the two expressions:

    $$\mathbf{y}^* = (A Q^{-1} A^T)^{-1}(\mathbf{b} + A Q^{-1}\mathbf{c}), \qquad \mathbf{x}^* = Q^{-1}(A^T \mathbf{y}^* - \mathbf{c}) = \bar{\mathbf{x}} + Q^{-1}A^T\mathbf{y}^*,$$

    where $\bar{\mathbf{x}} = -Q^{-1}\mathbf{c}$ is the optimum of the unconstrained quadratic optimization problem.
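    The unconstrained and constrained optima of such a quadratic-linear problem can be checked numerically on a small invented instance. The computation below assumes an objective of the form f(x) = cᵀx + ½xᵀQx with Q negative definite and a single linear constraint that is active at the optimum; both the data and those assumptions are for illustration only.

```python
import numpy as np

# Maximize f(x) = c'x + (1/2) x'Qx subject to Ax <= b.
Q = -np.eye(2)                  # negative definite, so a global maximum exists
c = np.array([2.0, 2.0])
A = np.array([[1.0, 1.0]])      # single constraint: x0 + x1 <= 2
b = np.array([2.0])

Qinv = np.linalg.inv(Q)
x_unc = -Qinv @ c               # unconstrained optimum: gradient c + Qx = 0

# The unconstrained optimum (2, 2) violates x0 + x1 <= 2, so the constraint
# binds.  The KKT conditions c + Qx - A'y = 0 and Ax = b then give:
y = np.linalg.solve(A @ Qinv @ A.T, b + A @ Qinv @ c)   # multiplier, must be >= 0
x_con = Qinv @ (A.T @ y - c)    # constrained optimum

print(x_unc)   # -> [2. 2.]
print(y)       # -> [1.]
print(x_con)   # -> [1. 1.]
```

Since the multiplier comes out nonnegative, the active-constraint assumption is consistent: the constrained maximizer (1, 1) is the point of the line x₀ + x₁ = 2 closest (in the metric induced by −Q) to the unconstrained optimum.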

    Many areas, including manufacturing, the chemical and biological sciences, and engineering design, require nonconvex modeling. The nonconvexities may be due to multimodal objective functions, integer requirements, and so on. However, a multiplicity of local solutions may also be due to nonlinearities in the constraint set, even when the objective function is convex (see Tawarmalani & Sahinidis, 2002, pp. 1–5). Optimization problems such as bi-level programming are typically nonconvex and nondifferentiable. For such optimization problems, the standard nonlinear programming techniques, which mostly depend on the starting point (e.g., the steepest descent), will fail to find the global optimum solution. The consequences of nonconvexities are well known: the impossibility of defining a dual functional, the existence of a duality gap, and so on (see Bazaraa, Sherali, & Shetty, 2006, pp. 257–314; Bertsekas, 2009, pp. 216–242). Examples in economics show that nonconvex preferences can cause discontinuities in demand functions and thus the possible nonexistence of equilibrium prices (see Varian, 1992, pp. 393–394). The global optimization algorithms may be divided into two groups: deterministic approaches (e.g., branch-and-bound, outer approximation, cutting planes, decomposition) and stochastic heuristic methods (e.g., random search, GAs, ES, clustering algorithms). A typology of global optimization methods can also be based on mathematical structures, such as quadratic, bilinear, or fractional functions, as in Horst and Tuy (1996, pp. 3–51) and Hendrix and Toth (2010, pp. 147–159).
The main classes of global optimization identified by Horst and Tuy (1996) are concave minimization (i.e., a concave objective function with linear and convex constraints), reverse convex programming (i.e., convex minimization over the intersection of convex sets and complements of convex sets), DC programming (i.e., an objective function that can be expressed as the difference of two convex functions), and Lipschitz optimization (i.e., a Lipschitz-continuous objective function, whose slope is bounded).
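    The dependence of descent methods on the starting point, mentioned above, is easy to see on a one-dimensional double-well function (an invented toy example): gradient descent started on either side of the central hill converges to a different minimizer, so neither run alone can certify a global optimum.

```python
def f(x):
    """Double-well function (x^2 - 1)^2 with two minima, at x = -1 and x = +1."""
    return (x * x - 1.0) ** 2

def grad(x):
    return 4.0 * x * (x * x - 1.0)

def steepest_descent(x, lr=0.05, iters=200):
    """Fixed-step gradient descent from starting point x."""
    for _ in range(iters):
        x -= lr * grad(x)
    return x

print(steepest_descent(0.5))    # converges to the minimizer near +1
print(steepest_descent(-0.5))   # converges to the minimizer near -1
```

A deterministic global method would have to bound the function over whole regions (as in branch-and-bound), while a stochastic method would restart from many random points; both strategies exist precisely because local descent stops at whichever basin it starts in.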

    1.3.2 Multiobjective Optimization

    A general continuous MOO problem consists in finding n decision variables that simultaneously minimize (respectively, maximize) r objective functions. These decision variables and objectives are subject to restrictions such as bounds and constraints.

    Decision variables take their values¹⁰ in a closed interval defined by a lower and an upper bound, $x_i^L \leq x_i \leq x_i^U$, $i = 1, \ldots, n$. There are 2n such bound restrictions. These bounds define the decision space.

    The objectives are subject to restrictions represented by m inequalities and p equalities.

    The basic generic MOO problem takes the following form:

    $$\begin{aligned} \min_{\mathbf{x}} \; & \mathbf{F}(\mathbf{x}) = (f_1(\mathbf{x}), \ldots, f_r(\mathbf{x}))^T \\ \text{subject to} \; & g_j(\mathbf{x}) \leq 0, \quad j = 1, \ldots, m, \\ & h_k(\mathbf{x}) = 0, \quad k = 1, \ldots, p, \\ & x_i^L \leq x_i \leq x_i^U, \quad i = 1, \ldots, n. \end{aligned} \tag{1.1}$$

    The feasible space is defined by

    $$X = \{\mathbf{x} \in \mathbb{R}^n : g_j(\mathbf{x}) \leq 0, \; h_k(\mathbf{x}) = 0, \; x_i^L \leq x_i \leq x_i^U\}.$$

    A feasible solution to the MOO problem satisfies the 2n bound restrictions, the m inequality constraints, and the p equality constraints.

    A MOO problem (1.1) may also contain a vector of parameters p. The standard form then becomes

    $$\min_{\mathbf{x}} \; \mathbf{F}(\mathbf{x}; \mathbf{p}) \quad \text{subject to} \quad g_j(\mathbf{x}; \mathbf{p}) \leq 0, \; h_k(\mathbf{x}; \mathbf{p}) = 0, \; x_i^L \leq x_i \leq x_i^U.$$
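    The feasibility test implied by this formulation, together with the Pareto-dominance relation used to compare objective vectors in MOO, can be sketched as follows; the tiny bi-objective problem at the end is invented purely for illustration.

```python
def feasible(x, bounds, ineq=(), eq=(), tol=1e-9):
    """Check bounds x_l <= x_i <= x_u, inequalities g(x) <= 0, equalities h(x) = 0."""
    ok_bounds = all(lo - tol <= xi <= hi + tol for xi, (lo, hi) in zip(x, bounds))
    ok_ineq = all(g(x) <= tol for g in ineq)
    ok_eq = all(abs(h(x)) <= tol for h in eq)
    return ok_bounds and ok_ineq and ok_eq

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization):
    no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

# Toy bi-objective instance: one variable on [0, 1] with g(x) = x0 - 0.8 <= 0.
bounds = [(0.0, 1.0)]
g = lambda x: x[0] - 0.8

print(feasible([0.5], bounds, ineq=[g]))   # True: in bounds, g <= 0
print(feasible([0.9], bounds, ineq=[g]))   # False: violates the inequality
print(dominates((0.2, 0.3), (0.4, 0.3)))   # True: better in f1, equal in f2
print(dominates((0.2, 0.5), (0.4, 0.3)))   # False: the vectors are incomparable
```

Incomparable objective vectors, as in the last call, are exactly what makes MOO solutions form a Pareto front rather than a single optimum.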

    1.4 Classification of Optimization Methods

    A classification of optimization problems can be performed according to their technical characteristics. This will be our starting point, inspired by Sarker and Newton (2008, pp. 11–13), prior to the classification of solving methods. A distinction must be made between methods that solve single-objective problems and those for programming problems with multiple objectives. In the first case, we refer to S.S. Rao's book (Rao, 2009). In the second case, we retain the classification proposed by Miettinen (1999).

    1.4.1 Classification of Optimization Problems

    The classification by Sarker and Newton (2008) illustrates a minimization or maximization problem. This construction is based on the principal features of an optimization problem. The characteristics differentiating the optimization problems are related to the number of objectives (i.e., single or multiple objectives) and constraints (i.e., inequality and equality constraints), the type of design variables (i.e., continuous, discrete or mixed integer), and the mathematical properties of all the functions (i.e., linearity or not, convexity, and differentiability) (see Figure 1.1).

    Figure 1.1 Classification of optimization problems. Inspired from Sarker, R. A., & Newton, C. S. (2008). Optimization modelling: A practical approach. Boca Raton, FL/London, UK: CRC Press, p. 12, Figure 1.3.

    1.4.2 Classification of Single-Objective Optimization Methods

    Two classifications are proposed by Rao (2009): one is devoted to methods for unconstrained minimization problems, the other to constrained minimization techniques. In both cases, a distinction is made between direct search methods (i.e., those not requiring partial derivatives) and descent techniques that use derivatives.

    The direct search methods for unconstrained optimization problems (see Rao, 2009, pp. 309–334) include the random search method, the grid search method, the univariate method, and pattern search methods (e.g., Powell's method). The descent methods for the same type of optimization problems (see Rao, 2009, pp. 335–368) consist of the steepest descent (or Cauchy) method, the Fletcher-Reeves method, Newton's method, the Marquardt method, and the quasi-Newton methods (i.e., the Davidon-Fletcher-Powell method and the Broyden-Fletcher-Goldfarb-Shanno method).
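    The contrast between first-order descent and Newton's method can be seen on a simple ill-conditioned quadratic (an invented example): Newton's method, using second-order information, solves a quadratic exactly in one step, while fixed-step steepest descent zig-zags toward the minimizer.

```python
import numpy as np

H = np.diag([1.0, 10.0])            # ill-conditioned quadratic: f(x) = 0.5 * x'Hx
grad = lambda x: H @ x

def steepest_descent(x, lr=0.05, iters=100):
    """Fixed-step gradient descent (Cauchy's method with a constant step)."""
    for _ in range(iters):
        x = x - lr * grad(x)
    return x

def newton(x):
    """One Newton step: exact for a quadratic, since the Hessian H is constant."""
    return x - np.linalg.solve(H, grad(x))

x0 = np.array([1.0, 1.0])
print(newton(x0))               # -> [0. 0.] in a single iteration
print(steepest_descent(x0))     # near the origin only after many iterations
```

The step size here is chosen by hand; quasi-Newton methods such as DFP and BFGS sit between the two extremes, building up curvature information from gradients instead of computing the Hessian.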

    The processing techniques for optimization problems under constraints also include direct approaches, by which the constraints are handled explicitly, as well as indirect methods, which use a sequence of unconstrained optimization problems. The direct search methods
