Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach
About this ebook

Nonlinear Dynamical Systems and Control presents and develops an extensive treatment of stability analysis and control design of nonlinear dynamical systems, with an emphasis on Lyapunov-based methods. Dynamical system theory lies at the heart of mathematical sciences and engineering. The application of dynamical systems has crossed interdisciplinary boundaries from chemistry to biochemistry to chemical kinetics, from medicine to biology to population genetics, from economics to sociology to psychology, and from physics to mechanics to engineering. The increasingly complex nature of engineering systems requiring feedback control to obtain a desired system behavior also gives rise to dynamical systems.


Wassim Haddad and VijaySekhar Chellaboina provide an exhaustive treatment of nonlinear systems theory and control using the highest standards of exposition and rigor. This graduate-level textbook goes well beyond standard treatments by developing Lyapunov stability theory, partial stability, boundedness, input-to-state stability, input-output stability, finite-time stability, semistability, stability of sets and periodic orbits, and stability theorems via vector Lyapunov functions. A complete and thorough treatment of dissipativity theory, absolute stability theory, stability of feedback systems, optimal control, disturbance rejection control, and robust control for nonlinear dynamical systems is also given. This book is an indispensable resource for applied mathematicians, dynamical systems theorists, control theorists, and engineers.

Language: English
Release date: September 19, 2011
ISBN: 9781400841042

    Book preview

    Nonlinear Dynamical Systems and Control - Wassim M. Haddad

    Preface

    Dynamical system theory provides a paradigm for modeling and studying phenomena that undergo spatial and temporal evolution. A dynamical system consists of three elements—namely, a setting (called the state space) in which the dynamical behavior takes place, such as a torus, topological space, manifold, or locally compact metric space; a mathematical rule or dynamic which specifies the evolution of the system over time; and an initial condition or state from which the system starts at some initial time. Ever since its inception, the basic questions concerning dynamical system theory have involved the qualitative properties of the solutions of a dynamical system; questions such as: For a particular initial system state, does the dynamical system have at least one solution? What are the asymptotic properties of the system solutions? How are the system solutions dependent on the system initial conditions? How are the system solutions dependent on the form of the mathematical description of the dynamic of the system? How do system solutions depend on system parameters? And how do system solutions depend on the properties of the state space on which the system is defined?

    Ancient Greek astronomers and mathematicians such as Eudoxus, Ptolemy, and Archimedes were the first to use abstract mathematical models and attach them to the physical world. More importantly, using abstract thought and theoretical calculations they were able to deduce something that is true about the physical world. Newton’s greatest achievement, however, was the discovery that the motion of the planets and moons of the solar system resulted from a single fundamental source—the gravitational attraction of the heavenly bodies. As a consequence, Newtonian physics developed into the first field of modern science—dynamical systems as a branch of mathematical physics—wherein the circular, elliptical, and parabolic orbits of the heavenly bodies of our solar system were no longer fundamental determinants of motion, but rather approximations of the universal laws of the cosmos specified by governing differential equations of motion.

    In the past century, dynamical system theory quickly spread from the study of astronomical bodies to modeling and studying numerous physical phenomena occurring in nature involving changes over time, and it has become one of the most important and fundamental fields of modern science and engineering. The application of dynamical systems has crossed interdisciplinary boundaries from chemistry to biochemistry to chemical kinetics, from medicine to biology to population genetics, from economics to sociology to psychology, and from physics to mechanics to engineering. The increasingly complex nature of engineering systems requiring feedback control to obtain a desired system behavior also gives rise to dynamical systems. Feedback control theory involves the analysis and synthesis of a feedback controller that manipulates system inputs to obtain a desired effect on the output of the system in the face of system uncertainty and system disturbances. Furthermore, since most physical and engineering systems are inherently nonlinear, the resulting feedback dynamical system can exhibit a very rich dynamical behavior.

    One of the most important concepts in the study of dynamical system theory and control theory is the concept of stability. Stability theory concerns the behavior of the system trajectories of a dynamical system when the system initial state is near an equilibrium state. Since exogenous disturbances and system component uncertainty are always present in every actual system, stability theory plays a central role in dynamical systems and control. As in the study of dynamical system theory, the origins of stability theory can be traced back to the study of idealizations of astronomical problems involving persistent small oscillations (librations) about a state of motion. One of the first modern treatments of stability theory was given by Joseph-Louis Lagrange wherein he concluded that for a conservative mechanical system an isolated minimum of the system potential energy corresponds to a stable equilibrium system state. The most complete stability analysis framework for dynamical systems was developed by Aleksandr Lyapunov. Lyapunov’s method is based on construction of a function of the system state coordinates that serves as a generalized norm of the solution of the dynamical system. Its appeal comes from the fact that conclusions about the behavior of the dynamical system can be drawn without actually computing the system solution trajectories. As a result, Lyapunov stability theory has become one of the cornerstones of systems and control theory.

    Aleksandr Mikhailovich Lyapunov was born on the 6th of June, 1857 to the family of the prominent astronomer Mikhail Vasilyevich Lyapunov in Yaroslavl, Russia. After completing the gymnasium (high school) in Nizhny Novgorod, where he was taught Greek, Latin, and basic science and mathematics, in 1876 he entered the Physics and Mathematics Department of Saint Petersburg University. Upon graduating from Saint Petersburg University in 1880, he remained in the department of mechanics at the university to prepare for his academic career, which was greatly influenced by Chebyshev. His master’s work concentrated on the problem of a homogeneous, incompressible fluid mass held together by gravitational forces of its particles and rotating about a fixed axis. Chebyshev posed the question, “Under what conditions will the motion of the fluid be stable, and what possible equilibrium forms can this stable rotating fluid take?” This famous question of the piriform body, which was of considerable importance to cosmogony and had been studied by prominent mathematicians such as Newton, MacLaurin, Jacobi, and Poincaré, was eventually settled by Lyapunov in 1905.

    After receiving his Master’s degree in applied mathematics in 1884 for his work On the Stability of Ellipsoidal Forms in the Equilibrium of a Rotating Fluid, Lyapunov moved to Kharkov University as a Privatdozent (private reader or lecturer) in mechanics, where he continued his research towards his doctoral thesis. In 1892 the Kharkov Mathematical Society published Lyapunov’s seminal work on The General Problem of the Stability of Motion, which he defended as his doctoral thesis at Moscow State University (Moskovskii Gosudarstvennii Universitet) later that year. His paper Sur le Problème Général de la Stabilité du Mouvement, published in the Annales de la Faculté des Sciences de l’Université de Toulouse in 1907, marks the beginning of modern stability theory in the West. In 1893 Lyapunov was appointed as a professor at Kharkov University, and in 1902 he returned to Saint Petersburg University as Chair of Applied Mathematics. In 1900 he was elected as a corresponding member of the Russian Academy of Sciences, and in 1901 as a full member of the Academy in the field of applied mathematics.

    Even though Lyapunov is predominantly known for his work on the stability of equilibria and motion of mechanical systems with a finite number of degrees of freedom in the dynamical systems and control community, his scientific work includes fundamental contributions to stability of equilibrium figures of rotating fluids; equilibrium figures of uniformly rotating fluids; mathematical physics; probability theory; and theoretical mechanics. In all these disciplines Lyapunov established a deep level of accuracy and rigor that resulted in fundamental and classical mathematical results. Lyapunov tragically died on the 3rd of November, 1918 in Odessa, Russia (now Ukraine) as a result of a voluntary act three days after the death of his wife. His wish, which he had stated in an ante mortem note, was to be buried with his wife.

    The main objective of this book is to present and develop necessary mathematical tools for stability analysis and control design of nonlinear dynamical systems, with an emphasis on Lyapunov-based methods. This book is intended to be useful to applied mathematicians, dynamical system theorists, control theorists, and engineers. Since dynamical system theory and Lyapunov stability theory lie at the heart of mathematical sciences and engineering, researchers and graduate students in these fields who seek a fundamental understanding of the rich behavior of nonlinear dynamical systems and control will also find this textbook useful. The appropriate background for this book is a first course in linear system theory and a first course in advanced (multivariable) calculus.

    After a brief introduction on dynamical systems in Chapter 1, a systematic development of nonlinear ordinary differential equations is given in Chapter 2. In Chapters 3 and 4, we present fundamental stability theory as well as advanced stability theory for nonlinear dynamical systems. Chapter 5 provides a treatment of classical dissipativity theory and absolute stability theory. A detailed treatment of stability of feedback interconnections, control Lyapunov functions, feedback linearization, zero dynamics, and stability margins for nonlinear regulators is given in Chapter 6. Chapter 7 focuses on input-output stability and dissipativity theory. In Chapter 8, we present the nonlinear optimal control problem, while stability and optimality results for backstepping control problems are given in Chapter 9. Chapters 10–12 provide novel extensions to disturbance rejection and robust control of nonlinear dynamical systems. Finally, Chapters 13 and 14 present discrete-time extensions of the aforementioned topics.

    The authors would like to thank Dennis S. Bernstein, Sanjay P. Bhat, Tomohisa Hayakawa, V. Lakshmikantham, and Frank L. Lewis for their constructive comments and feedback. In some parts of the book we have relied on work we have done jointly with Chaouki T. Abdallah, Dennis S. Bernstein, Sanjay P. Bhat, Jerry L. Fausz, Qing Hui, Vikram Kapila, Alexander Leonessa, Sergey G. Nersesov, and David A. Wilson; it is a pleasure to acknowledge their contributions. In addition, we thank Sanjay P. Bhat, Tomohisa Hayakawa, Qing Hui, Vikram Kapila, and Sergey G. Nersesov for graciously agreeing to read chapters of the book for clarity and technical accuracy. We are also grateful to our students for their helpful comments, insightful suggestions, and corrections, all of which greatly improved the exposition of the book.

    The authors would also like to thank Paul Katinas for providing a translation of Anaximander’s, Herakleitos’, and Archimedes’ statements quoted in ancient Greek, which are central to the study of cosmology and modern mathematics. A common misconception among scholars has been that modern mathematicians, and not ancient Greek mathematicians, were the first to be able to address notions of infinity in a rigorous and precise manner. However, as has been recently discovered in the Archimedes Palimpsest, Archimedes was the first to rigorously address the science of infinity with infinitely large sets using precise mathematical proofs in his work on The Method of Mechanical Theorems.

    Herakleitos’ statement asserts that everything is in a state of flux and that there is an underlying order to this change—the Logos, as well as that energy is change and substance is change. With this, he arrives at a precursor to the principle of relativity and the mass-energy equivalence.

    Finally, Archimedes’ statement leads to the foundation of mathematical mechanics. His law of the lever—involving the earliest treatment of a constrained mechanical system—associated with this statement led to the idea of energy as the product of force and distance, to the concept of the conservation of energy, and to the principle of virtual velocities. In his treatise on The Method of Mechanical Theorems he established the foundations of integral calculus using infinitesimals, as well as the foundations of mathematical mechanics. In addition, in one of his problems he constructed the tangent at any given point of a spiral, establishing the origins of differential calculus.

    This body of ancient Greek thought led to the scientific revolution, which further led to the marvels of modern-day science and engineering.

    Atlanta, Georgia, USA, October 2007, Wassim M. Haddad

    Knoxville, Tennessee, USA, October 2007, VijaySekhar Chellaboina

    Chapter One

    Introduction

    A system is a combination of components or parts that is perceived as a single entity. The parts making up the system may be clearly or vaguely defined. These parts are related to each other through a particular set of variables, called the states of the system, that completely determine the behavior of the system at any given time. A dynamical system is a system whose state changes with time. Specifically, the state of a dynamical system can be regarded as an information storage or memory of past system events. The set of (internal) states of a dynamical system must be sufficiently rich to completely determine the behavior of the system for any future time. Hence, the state of a dynamical system at a given time is uniquely determined by the state of the system at the initial time and the present input to the system. In other words, the state of a dynamical system in general depends on both the present input to the system and the past history of the system. Even though it is often assumed that the state of a dynamical system is the least set of state variables needed to completely predict the effect of the past upon the future of the system, this is merely a convenient simplifying assumption.

    A dynamical system receives an input u(t) (e.g., matter, energy, information) and generates an output y(t); the precise interpretation of the input and the output depends on the physical description of the system. Each system input u(t) belongs to a fixed set U of admissible inputs, and each system output y(t) belongs to a fixed set Y of admissible outputs. The state x(t) of the dynamical system depends on both the initial state x(t0) = x0 and the input u(·). In other words, knowledge of both x0 and u(·) is necessary and sufficient to determine the present and future state x(t) = s(t, t0, x0, u) of the system.

    Hence, a dynamical system is characterized by the state space of the system together with a rule or dynamic defining the evolution of the state; an analogous description holds for discrete-time systems.

    Definition 1.1. A dynamical system on a state space D with input space U and output space Y is defined by a flow s and a read-out map h such that the following axioms hold:

    i) (Continuity): s(·, t0, x0, u) is continuous for all t0 ≥ 0, x0 ∈ D, and u ∈ U.

    ii) (Consistency): s(t0, t0, x0, u) = x0 for all t0 ≥ 0, x0 ∈ D, and u ∈ U.

    iii) (Determinism): s(t, t0, x0, u1) = s(t, t0, x0, u2) for all t ≥ t0, x0 ∈ D, and u1, u2 ∈ U satisfying u1(τ) = u2(τ), τ ≤ t.

    iv) (Group property): s(t2, t0, x0, u) = s(t2, t1, s(t1, t0, x0, u), u) for all t0 ≤ t1 ≤ t2, x0 ∈ D, and u ∈ U.

    v) (Read-out map): there exists y ∈ Y such that y(t) = h(t, s(t, t0, x0, u), u(t)) for all x0 ∈ D, u ∈ U, and t ≥ t0.

    We refer to the map s(·, t0, ·, u) as the flow or trajectory of the dynamical system, and for a given trajectory s(t, t0, x0, u) we refer to t0 as the initial time, x0 as the initial condition, and u as the input. A dynamical system is isolated if the input space consists of one element only, that is, u(t) = u*, and the dynamical system is undisturbed if u(t) ≡ 0. Note that the output y(t) depends on the state s(t, t0, x0, u) and the input up to time t, and is not influenced by the values of the input after time t. This is simply a statement of causality that holds for all physical systems. The notion of a dynamical system as defined in Definition 1.1 is far too general to develop useful practical deductions for dynamical systems. This notion of a dynamical system is introduced here to develop terminology and to introduce certain key concepts. As we will see in the next chapter, under additional regularity conditions the flow of a dynamical system describing the motion of the system as a function of time can generate a differential equation on the state space, allowing for the development of a large array of mathematical results leading to useful and practical analysis and control synthesis tools.
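    To make the flow axioms concrete, the following Python sketch (a hypothetical illustration, not from the text; the system, constants, and function names are chosen purely for demonstration) checks the consistency axiom ii) and the group property iv) numerically for the closed-form flow of the scalar linear system ẋ = ax + u with constant input u.

```python
import numpy as np

# Closed-form flow of the scalar linear system xdot = a*x + u (u constant):
# s(t, t0, x0, u) = e^{a(t - t0)} x0 + (u/a)(e^{a(t - t0)} - 1)
a, u = -0.5, 1.0

def s(t, t0, x0):
    return np.exp(a * (t - t0)) * x0 + (u / a) * (np.exp(a * (t - t0)) - 1.0)

t0, x0 = 0.0, 2.0

# Axiom ii) (Consistency): s(t0, t0, x0, u) = x0
assert np.isclose(s(t0, t0, x0), x0)

# Axiom iv) (Group property): s(t2, t0, x0, u) = s(t2, t1, s(t1, t0, x0, u), u)
t1, t2 = 1.3, 2.7
assert np.isclose(s(t2, t0, x0), s(t2, t1, s(t1, t0, x0)))
print("consistency and group property hold for this flow")
```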

    Determining the rule or dynamic that defines the state of physical and engineering systems at a given future time from a given present state is one of the central problems of science and engineering. Once the flow of a dynamical system describing the motion of the system starting from a given initial state is given, dynamical system theory can be used to describe the behavior of the system states over time for different initial conditions. Throughout the centuries—from the great cosmic theorists of ancient Greece to the present-day quest for a unified field theory—the most important dynamical system is our universe. By using abstract mathematical models and attaching them to the physical world, astronomers, mathematicians, and physicists have used abstract thought to deduce something that is true about the natural system of the cosmos.

    The quest by scientists, such as Brahe, Kepler, Galileo, Newton, Huygens, Euler, Lagrange, Laplace, and Maxwell, to understand the regularities inherent in the distances of the planets from the sun and their periods and velocities of revolution around the sun led to the science of dynamical systems as a branch of mathematical physics. One of the most basic issues in dynamical system theory that was spawned from the study of mathematical models of our solar system is the stability of dynamical systems. System stability involves the investigation of small deviations from a system’s steady state of motion. In particular, a dynamical system is stable if the system is allowed to perform persistent small oscillations about a system equilibrium, or about a state of motion. Among the first investigations of the stability of a given state of motion was that of Isaac Newton. In particular, in his Principia Mathematica [335] Newton investigated whether a small perturbation would make a particle moving in a plane around a center of attraction continue to move near the circle, or diverge from it. Newton used this analysis to study the motion of the moon orbiting the Earth.

    Numerous astronomers and mathematicians who followed made significant contributions to dynamical stability theory in an effort to show that the observed deviations of planets and satellites from fixed elliptical orbits were in agreement with Newton’s principle of universal gravitation. Notable contributions include the work of Torricelli [431], Euler [116], Lagrange [252], Laplace [257], Dirichlet [106], Liouville [283], Maxwell [310], and Routh [369]. The most complete contribution to the stability analysis of dynamical systems was introduced in the late nineteenth century by the Russian mathematician Aleksandr Mikhailovich Lyapunov in his seminal work entitled The General Problem of the Stability of Motion [293–295]. Lyapunov’s direct method states that if a positive-definite function (now called a Lyapunov function) of the state coordinates of a dynamical system can be constructed whose time rate of change following small perturbations from the system equilibrium is always negative or zero, then the system equilibrium state is stable. In other words, Lyapunov’s method is based on the construction of a Lyapunov function that serves as a generalized norm of the solution of a dynamical system. Its appeal comes from the fact that stability properties of the system solutions are derived directly from the governing dynamical system equations; hence the name, Lyapunov’s direct method.
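    As an informal illustration of Lyapunov's direct method, the following sketch (hypothetical: the damped-pendulum model and all constants are chosen for illustration only) evaluates the time rate of change of the total energy V(x) = (1/2)x2² + (g/l)(1 − cos x1) along the vector field of a damped pendulum on a grid of states and confirms that it is never positive, without computing a single trajectory.

```python
import numpy as np

g_over_l, c = 9.81, 0.4  # illustrative constants: gravity/length and damping

def f(x):
    # damped pendulum: x[0] = angle, x[1] = angular velocity
    return np.array([x[1], -g_over_l * np.sin(x[0]) - c * x[1]])

def Vdot(x):
    # time rate of change of V(x) = 0.5*x2^2 + (g/l)(1 - cos x1)
    # along the flow: grad V(x) . f(x), which reduces to -c*x2^2
    grad = np.array([g_over_l * np.sin(x[0]), x[1]])
    return grad @ f(x)

# check that Vdot <= 0 on a grid of states around the equilibrium (0, 0)
pts = [np.array([p, q]) for p in np.linspace(-3, 3, 61)
       for q in np.linspace(-3, 3, 61)]
assert all(Vdot(x) <= 1e-12 for x in pts)
print("V is nonincreasing along the flow")
```

    Here V̇(x) = −c x2² ≤ 0, so conclusions about the equilibrium (0, 0) follow without solving the equations of motion, which is precisely the appeal noted above.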

    Dynamical system theory grew out of the desire to analyze the mechanics of heavenly bodies and has become one of the most fundamental fields of modern science as it provides the foundation for unlocking many of the mysteries in nature and the universe that involve the evolution of time. Dynamical system theory is used to study ecological systems, geological systems, biological systems, economic systems, neural systems, and physical systems (e.g., mechanics, thermodynamics, fluids, magnetic fields, galaxies, etc.), to cite but a few examples. Dynamical system theory has also played a crucial role in the analysis and control design of numerous complex engineering systems. In particular, advances in feedback control theory have been intricately coupled to progress in dynamical system theory, and conversely, dynamical system theory has been greatly advanced by the numerous challenges posed in the analysis and control design of increasingly complex feedback control systems.

    Since most physical and engineering systems are inherently nonlinear, with system nonlinearities arising from numerous sources including, for example, friction (e.g., Coulomb, hysteresis), gyroscopic effects (e.g., rotational motion), kinematic effects (e.g., backlash), input constraints (e.g., saturation, deadband), and geometric constraints, system nonlinearities must be accounted for in system analysis and control design. Nonlinear systems, however, can exhibit a very rich dynamical behavior, such as multiple equilibria, limit cycles, bifurcations, jump resonance phenomena, and chaos, which can make general nonlinear system analysis and control notoriously difficult. Lyapunov’s results provide a powerful framework for analyzing the stability of nonlinear dynamical systems. Lyapunov-based methods have also been used by control system designers to obtain stabilizing feedback controllers for nonlinear systems. In particular, for smooth feedback, Lyapunov-based methods were inspired by Jurdjevic and Quinn [224], who give sufficient conditions for smooth stabilization based on the ability to construct a Lyapunov function for the closed-loop system. More recently, Artstein [13] introduced the notion of a control Lyapunov function whose existence guarantees a feedback control law which globally stabilizes a nonlinear dynamical system. Even though for certain classes of nonlinear dynamical systems a universal construction of a feedback stabilizer can be obtained using control Lyapunov functions [13, 406], there does not exist a unified procedure for finding a Lyapunov function that will stabilize the closed-loop system for general nonlinear systems. In light of this, advances in Lyapunov-based methods have been developed for analysis and control design for numerous classes of nonlinear dynamical systems. As a consequence, Lyapunov’s direct method has become one of the cornerstones of systems and control theory.

    The main objective of this book is to present necessary mathematical tools for stability analysis and control design of nonlinear systems, with an emphasis on Lyapunov-based methods. The main contents of the book are as follows. In Chapter 2, we provide a systematic development of nonlinear ordinary differential equations, which is central to the study of nonlinear dynamical system theory. Specifically, we develop qualitative solution properties, existence of solutions, uniqueness of solutions, continuity of solutions, and continuous dependence of solutions on system initial conditions for nonlinear dynamical systems.

    In Chapter 3, we develop stability theory for nonlinear dynamical systems. Specifically, Lyapunov stability theorems are developed for time-invariant nonlinear dynamical systems. Furthermore, invariant set stability theorems, converse Lyapunov theorems, and Lyapunov instability theorems are also considered. Finally, we present several systematic approaches for constructing Lyapunov functions as well as stability of linear systems and Lyapunov’s linearization method. Chapter 4 provides an advanced treatment of stability theory including partial stability, stability theory for time-varying systems, Lagrange stability, boundedness, ultimate boundedness, input-to-state stability, finite-time stability, semistability, and stability theorems via vector Lyapunov functions. In addition, Lyapunov and asymptotic stability of sets as well as stability of periodic orbits are also systematically addressed. In particular, local and global stability theorems are given using lower semicontinuous Lyapunov functions. Furthermore, generalized invariant set theorems are derived wherein system trajectories converge to a union of largest invariant sets contained on the boundary of the intersections over finite intervals of the closure of generalized Lyapunov level surfaces. These results provide transparent generalizations to standard Lyapunov and invariant set theorems.

    In Chapter 5, using generalized notions of system energy storage and external energy supply, we present a systematic treatment of dissipativity theory [456]. Dissipativity theory provides a fundamental framework for the analysis and design of nonlinear dynamical systems using an input, state, and output system description based on system-energy-related considerations. As a direct application of dissipativity theory, absolute stability theory involving the stability of a feedback system whose forward path contains a dynamic linear time-invariant system and whose feedback path contains a memoryless (possibly time-varying) nonlinearity is also addressed. The Aizerman conjecture and the Luré problem, as well as the circle and Popov criteria, are extensively developed.

    Using the concepts of dissipativity theory, in Chapter 6 we develop results on the stability of feedback interconnections, and in Chapter 7 we address input-output stability. In addition, we develop connections between dissipativity theory of input, state, and output systems, and input-output dissipativity theory.

    In Chapter 8, we develop a unified framework to address the problem of optimal nonlinear analysis and feedback control. Asymptotic stability of the closed-loop nonlinear system is guaranteed by means of a Lyapunov function which can clearly be seen to be the solution to the steady-state form of the Hamilton-Jacobi-Bellman equation, and hence guarantees both stability and optimality. The overall framework provides the foundation for extending linear-quadratic controller synthesis to nonlinear-nonquadratic problems. Guaranteed stability margins for nonlinear optimal and inverse optimal regulators that minimize a nonlinear-nonquadratic performance criterion are also established. Using the optimal control framework of Chapter 8, in Chapter 9, we give a unification between nonlinear-nonquadratic optimal control and backstepping control. Backstepping control has received a great deal of attention in the nonlinear control literature [247, 395]. The popularity of this control methodology is due in large part to the fact that it provides a systematic procedure for finding a Lyapunov function for nonlinear closed-loop cascade systems.

    In Chapter 10, we develop an optimality-based framework to address the problem of nonlinear-nonquadratic control for disturbance rejection of nonlinear systems with bounded exogenous disturbances. Specifically, using dissipativity theory with appropriate storage functions and supply rates we transform the nonlinear disturbance rejection problem into an optimal control problem by modifying a nonlinear-nonquadratic cost functional to account for the exogenous disturbances. As a consequence, the resulting solution to the modified optimal control problem guarantees disturbance rejection for nonlinear systems with bounded input disturbances. Furthermore, it is shown that the Lyapunov function guaranteeing closed-loop stability is a solution to the steady-state Hamilton-Jacobi-Isaacs equation for the controlled system. The overall framework generalizes the Hamilton-Jacobi-Bellman conditions developed in Chapter 8 to address the design of optimal controllers for nonlinear systems with exogenous disturbances.

    In Chapter 11, we concentrate on developing a unified framework to address the problem of optimal nonlinear robust control. As in the disturbance rejection problem, we transform the given robust control problem into an optimal control problem by properly modifying the cost functional to account for the system uncertainty. As a consequence, the resulting solution to the modified optimal control problem guarantees robust stability and performance for a class of nonlinear uncertain systems. In Chapter 12, we extend the framework developed in Chapter 11 to nonlinear systems with nonlinear time-invariant real parameter uncertainty. Robust stability of the closed-loop nonlinear system is guaranteed by means of a parameter-dependent Lyapunov function composed of a fixed (parameter-independent) and variable (parameter-dependent) part. The fixed part of the Lyapunov function can be seen to be the solution to the steady-state Hamilton-Jacobi-Bellman equation for the nominal system. Finally, in Chapters 13 and 14 we give a condensed presentation of the continuous-time analysis and control synthesis results developed in Chapters 2–12 for discrete-time systems.

    Chapter Two

    Dynamical Systems and Differential Equations

    2.1 Introduction

    In this chapter, we provide a thorough treatment of some of the basic results in nonlinear differential equations. The study of differential equations in dynamical systems and control is of the highest importance since differential equations arise in nearly all disciplines of science, engineering, medicine, economics, biocenology, and demography. Inasmuch as differential equations are central to these fields, their study has additionally led to the development of entire fields of abstract mathematics. In particular, algebraic topology and group theory were developed to solve problems in dynamical system theory, which was originally a branch of differential equations. Fourier analysis was developed for analyzing the heat equation, while topology and set theory were essential developments for the study of convergence problems for differential equations. And, most recently, control theory, whose strength has been its effective use of differential equations, has significantly contributed to the theory of differential equations. The study of differential equations is usually divided into two parts: qualitative theory and quantitative theory. In this chapter, we will focus almost exclusively on qualitative analysis of differential equations. The notions of openness, convergence, continuity, and compactness that we will use throughout this book will refer to the topology generated on Rn by the vector norm || · ||.

    As discussed in Chapter 1, a nonlinear dynamical system consists of a set of possible states such that the knowledge of these states at some time t = t0, together with the knowledge of an external input to the system for t ≥ t0, completely determines the behavior of the dynamical system at any time t ≥ t0. Hence, a dynamical system can generate a differential equation of the form

        ẋ(t) = F(t, x(t), u(t)), x(t0) = x0, t ≥ t0, (2.1)

    where x(t) ∈ D ⊆ Rn, t ≥ t0, is the state; u(t) ∈ U ⊆ Rm is the input or control; and F is piecewise continuous in t and continuous in x and u. The input or control u is assumed to be a piecewise continuous function of time with values in U. Here, the time index t takes values in [t0, ∞). In studying dynamical systems of the form (2.1), we can distinguish between system analysis and system synthesis. Specifically, in analyzing (2.1) we evaluate the behavior of the system for a specified fixed input or control signal. Alternatively, synthesis refers to a design procedure which yields a control input that achieves a desired behavior of the system state. In this book we consider both the analysis and control synthesis problems for nonlinear dynamical systems.

    In the remainder of this section we provide several classifications of (2.1) along with some definitions used throughout the book. First, we refer to (2.1) as a nonlinear disturbed or controlled time-varying dynamical system. Alternatively, in the case where u(t) ≡ 0, we let f(t, x) = F(t, x, 0) so that (2.1) becomes

        ẋ(t) = f(t, x(t)), x(t0) = x0, t ≥ t0, (2.2)

    where f is piecewise continuous in t and continuous in x. In this case, we refer to (2.2) as a nonlinear undisturbed or uncontrolled time-varying dynamical system. Note that if the input u(·) is a priori uniquely specified, then defining f(t, x) = F(t, x, u(t)), (2.1) can be written as (2.2). Hence, in this case, the distinction between a disturbed and undisturbed system is not precise since an undisturbed system can describe the case where u(t) ≡ 0 or the case when u(t) is specified over [t0, t1]. In particular, if u(·) is piecewise continuous, then f(t, x) = F(t, x, u(t)) is also piecewise continuous in t over [t0, t1]. The following definitions provide several classifications of the nonlinear dynamical system (2.2). Analogous classifications also hold for (2.1).
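    The remark above, that a system with an a priori specified input can be rewritten in the undisturbed form (2.2), can be mirrored numerically. The following sketch (hypothetical: the system, the input signal, and the use of forward-Euler integration are all illustrative choices) defines f(t, x) = F(t, x, u(t)) for a specified u(·) and integrates the resulting system (2.2).

```python
import numpy as np

def F(t, x, u):
    # a controlled scalar system of the form (2.1): xdot = -x + u
    return -x + u

def u(t):
    # an a priori specified piecewise-continuous input
    return np.sin(t)

def f(t, x):
    # the induced undisturbed dynamics of the form (2.2)
    return F(t, x, u(t))

# forward-Euler integration of (2.2) from x(t0) = x0
t0, t1, h, x = 0.0, 10.0, 1e-3, 1.0
for k in range(int((t1 - t0) / h)):
    x += h * f(t0 + k * h, x)
print("x(t1) =", round(x, 4))
```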

    Definition 2.1. Consider the nonlinear dynamical system (2.2). If f(t, x) = f(t0, x) for all t ≥ t0 and x ∈ D, then (2.2) is called a time-invariant or autonomous dynamical system.

    Definition 2.2. Consider the nonlinear dynamical system (2.2). If there exists T > 0 such that f(t, x) = f(t + T, x) for all t ≥ t0 and x ∈ D, then (2.2) is called a periodic dynamical system. Furthermore, the minimum T > 0 for which f(t, x) = f(t + T, x) holds is called the period.

    Definition 2.3. Consider the nonlinear dynamical system (2.2). If f(t, x) = A(t)x, where A(·) is piecewise continuous on [t0, t1], then (2.2) is called a linear time-varying dynamical system. If, in addition, there exists T > 0 such that A(t) = A(t + T), then (2.2) is called a linear periodic dynamical system. Finally, if f(t, x) = Ax, where A ∈ Rn×n, then (2.2) is called a linear autonomous dynamical system.

    Next, we introduce the concept of an equilibrium point of a nonlinear dynamical system.

    Definition 2.4. A point xe ∈ D is said to be an equilibrium point of (2.2) at time te ≥ t0 if f(t, xe) = 0 for all t ≥ te.

    Note that if f is time invariant, then xe is an equilibrium point at time te if and only if xe is an equilibrium point at all times. Furthermore, if xe is an equilibrium point at time te, then the change of time variable τ = t − te yields

        x′(τ) = f(τ + te, x(τ)), (2.3)

    where ′ denotes differentiation with respect to τ, and hence xe is an equilibrium point of (2.3) at τ = 0 since f(τ + te, xe) = 0 for all τ ≥ 0. Hence, we assume without loss of generality that te = 0. Furthermore, if xe is an equilibrium point of (2.2), then, without loss of generality, we can assume that xe = 0 so that f(t, 0) = 0, t ≥ 0. To see this, define the shifted state xs(t) ≜ x(t) − xe and note that

        ẋs(t) = f(t, xs(t) + xe) ≜ fs(t, xs(t)). (2.4)

    Now, the claim follows by noting that fs(t, 0) = f(t, xe) = 0, t ≥ 0. Analogously, for the controlled system (2.1), the pair (xe, ue) is an equilibrium if F(t, xe, ue) = 0 for all t. Finally, note that if xe is an equilibrium point of (2.2) and x(t0) = xe, then x(t) = xe, t ≥ t0, is a solution (see Section 2.7) to (2.2).

    Example 2.1. Consider the nonlinear dynamical system describing the motion of a simple pendulum with viscous damping given by

        ẋ1(t) = x2(t), x1(0) = x10, (2.5)
        ẋ2(t) = −(g/l) sin x1(t) − c x2(t), x2(0) = x20, (2.6)

    where c is the damping coefficient, g is the acceleration due to gravity, and l is the length of the pendulum. Note that (2.5) and (2.6) have the form of (2.2) with f(t, x) = f(x). Even though the physical pendulum has two equilibrium points, namely, (0, 0) and (π, 0), the dynamical system (2.5) and (2.6) has infinitely many equilibrium points given by f(xe) = 0. In particular, xe = (nπ, 0), n = 0, ±1, ±2, . . . .
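    Since the equilibria of a time-invariant system are the roots of f(xe) = 0, they can also be found numerically. The sketch below (hypothetical constants, matching the illustrative pendulum model used above) applies Newton's method from several initial guesses and recovers equilibria of the form (nπ, 0).

```python
import numpy as np

g_over_l, c = 9.81, 0.4  # illustrative constants

def f(x):
    return np.array([x[1], -g_over_l * np.sin(x[0]) - c * x[1]])

def jac(x):
    # Jacobian of f, used in the Newton iteration
    return np.array([[0.0, 1.0],
                     [-g_over_l * np.cos(x[0]), -c]])

def newton(x, iters=50):
    for _ in range(iters):
        x = x - np.linalg.solve(jac(x), f(x))
    return x

# different initial guesses converge to different equilibria (n*pi, 0)
for guess in ([0.3, 0.1], [3.0, -0.2], [6.5, 0.4]):
    xe = newton(np.array(guess))
    print("x1 =", round(xe[0] / np.pi, 3), "* pi,  x2 =", round(xe[1], 6))
```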

    Example 2.2. There are dynamical systems that have no equilibrium points. In particular, consider the system

    where α > 1. For this system there does not exist an xe = [x1e x2e]T such that f(xe) = 0.

    As seen in Examples 2.1 and 2.2, the nonlinear algebraic equation f(x) = 0 may have multiple solutions or no solutions. However, f(x) = 0 can also have a continuum of solutions. To see this, let f(x) = Ax, which corresponds to the dynamics of a linear autonomous system. Now, Ax = 0 has a unique solution xe = 0 if and only if det A ≠ 0. However, if det A = 0, then every point in the null space of A is an equilibrium point, and hence f(x) = 0 has a continuum of solutions (see Section 2.2).
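    The continuum of equilibria in the singular linear case can be computed explicitly. The following sketch (with a hypothetical example matrix) extracts a basis for the null space of A from the singular value decomposition and verifies that every point on the resulting line is an equilibrium of ẋ = Ax.

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [2.0, -2.0]])  # det A = 0, so equilibria form a continuum

# null space of A: right singular vectors with (near-)zero singular values
_, svals, Vt = np.linalg.svd(A)
basis = Vt[svals < 1e-12].T  # here a single vector spanning {x : Ax = 0}

# every point on the line x1 = x2 is an equilibrium of xdot = Ax
for z in (-2.0, 0.5, 3.0):
    xe = basis[:, 0] * z
    assert np.allclose(A @ xe, 0.0)
print("null-space basis vector:", np.round(basis[:, 0], 3))
```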

    Definition 2.5. An equilibrium point xe ∈ D of (2.2) is said to be an isolated equilibrium point if there exists ε > 0 such that Bε(xe) contains no equilibrium points other than xe.

    The next result gives a sufficient condition for an equilibrium point to be isolated. In the statement of this result we use the notion of an open ball Bε(xe) (see Section 2.3) and the notions of vector norms as defined in Section 2.2.

    Proposition 2.1. Consider the nonlinear dynamical system (2.2), where f(t, ·) is continuously differentiable and ∂f/∂x is piecewise continuous on [t0, t1], and let xe ∈ D be an equilibrium point of (2.2). If for every t ∈ [t0, t1],

        A(t) ≜ ∂f/∂x (t, xe)

    is nonsingular, then xe is an isolated equilibrium point.

    Proof. Since det A(t) ≠ 0, there exists ε > 0 such that ||A(t)(x − xe)|| ≥ ε||x − xe|| (see Problem 2.1). Next, expanding f(t, x) via a Taylor series expansion about the equilibrium point x = xe and using the fact that f(t, xe) = 0 yields¹

        f(t, x) = A(t)(x − xe) + r(t, x),

    where the remainder r(t, x) satisfies ||r(t, x)||/||x − xe|| → 0 as x → xe. Hence, there exists δ > 0 such that

        ||r(t, x)|| < ε||x − xe||, 0 < ||x − xe|| < δ,

    and it follows that

        ||f(t, x)|| ≥ ||A(t)(x − xe)|| − ||r(t, x)|| > 0, 0 < ||x − xe|| < δ.

    Hence, xe is the only equilibrium point in Bδ(xe), and thus xe is an isolated equilibrium point.
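    Proposition 2.1 suggests a quick numerical test: evaluate the Jacobian at a candidate equilibrium and check that it is nonsingular. The sketch below (same illustrative pendulum constants as above) performs this check at xe = (0, 0); note that a singular Jacobian would leave the test inconclusive rather than prove that the equilibrium is not isolated.

```python
import numpy as np

g_over_l, c = 9.81, 0.4  # illustrative constants

def jac(x):
    # Jacobian of the pendulum vector field of Example 2.1
    return np.array([[0.0, 1.0],
                     [-g_over_l * np.cos(x[0]), -c]])

xe = np.array([0.0, 0.0])
d = np.linalg.det(jac(xe))
print("det A =", round(d, 3),
      "-> xe is isolated" if abs(d) > 1e-12 else "-> test inconclusive")
```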

    Example 2.3. Consider the nonlinear dynamical system

    Note that with x = [x1 x2]T, f(x) = 0 implies that xe = [x1e x2e]T = [0 x2e]T, which shows that every point on the x2 axis is an equilibrium point of (2.13) and (2.14).

    In the remainder of this chapter we address the problems of existence and uniqueness of solutions to the nonlinear dynamical system (2.2) along with continuous dependence of solutions on the system initial conditions and system parameters. First, however, we review some basic definitions and results on vector and matrix norms, topology and analysis, and vector and Banach spaces.

    2.2 Vector and Matrix Norms

    For α ∈ R, the absolute value |α| of α denotes the magnitude of α while discarding the sign of α. For x ∈ Rn and A ∈ Rn×m, we can extend the definition of absolute value by replacing each component of x and each entry of A by its absolute value, that is, |x|(i) ≜ |x(i)| and |A|(i,j) ≜ |A(i,j)| for all i = 1, . . . , n and j = 1, . . . , m. For many applications it is more useful to have a scalar measure of the magnitude of x or A. Vector and matrix norms provide such measures.

    Definition 2.6. A vector norm on Rn is a function || · || : Rn → R that satisfies the following axioms:

    i) ||x|| ≥ 0 for all x ∈ Rn.

    ii) ||x|| = 0 if and only if x = 0.

    iii) ||αx|| = |α| ||x|| for all α ∈ R and x ∈ Rn.

    iv) ||x + y|| ≤ ||x|| + ||y|| for all x, y ∈ Rn.

    Condition iv) is known as the triangle inequality. It is important to note that if there exists α ≥ 0 such that x = αy or y = αx, then iv) holds as an equality. There are many different norms. The most useful class of vector norms is the class of p-norms (also called Hölder norms).

    Proposition 2.2. For 1 ≤ p < ∞, || · ||p defined by

        ||x||p ≜ ( Σ(i=1 to n) |xi|^p )^(1/p), x ∈ Rn,

    is a vector norm, as is || · ||∞ defined by ||x||∞ ≜ max(i=1,...,n) |xi|.

    Proof. The only difficulty is in proving the triangle inequality, which in this case is known as Minkowski’s inequality. If ||x + y||p = 0, then obviously ||x + y||p ≤ ||x||p + ||y||p since ||x||p and ||y||p are nonnegative for 1 ≤ p ≤ ∞. Suppose ||x + y||p > 0. The cases p = 1 and p = ∞ are immediate. Hence, assume 1 < p < ∞ and note that

        Σ(i=1 to n) |xi + yi|^p ≤ Σ(i=1 to n) |xi + yi|^(p−1)|xi| + Σ(i=1 to n) |xi + yi|^(p−1)|yi|. (2.15)

    Now, letting q = p/(p − 1) and applying Hölder’s inequality (2.18) to each of the summations on the right-hand side of (2.15) yields

        Σ(i=1 to n) |xi + yi|^p ≤ (||x||p + ||y||p) ( Σ(i=1 to n) |xi + yi|^p )^(1/q),

    and using the fact that 1 − 1/q = 1/p we obtain

        ||x + y||p = ( Σ(i=1 to n) |xi + yi|^p )^(1 − 1/q) ≤ ||x||p + ||y||p,

    which proves the result.

    The notation || · ||p does not display the dimension of the space to which it is applied. We assume this is implied by context. In the case where p = 1,

        ||x||1 = Σ(i=1 to n) |xi|

    is called the absolute sum norm; the case p = 2,

        ||x||2 = ( Σ(i=1 to n) xi² )^(1/2),

    is called the Euclidean norm; and the case p = ∞,

        ||x||∞ = max(i=1,...,n) |xi|,

    is called the infinity norm.
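    The p-norms and Minkowski's inequality are easy to check numerically. The following sketch (random vectors; pnorm is a hypothetical helper, not a library routine) compares a direct implementation against numpy's built-in norms and verifies the triangle inequality for several values of p.

```python
import numpy as np

def pnorm(x, p):
    # the Hoelder p-norm: ||x||_p = (sum_i |x_i|^p)^(1/p)
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)

# Minkowski's inequality: ||x + y||_p <= ||x||_p + ||y||_p
for p in (1.0, 1.5, 2.0, 3.0):
    assert pnorm(x + y, p) <= pnorm(x, p) + pnorm(y, p) + 1e-12

# special cases against numpy's built-ins
assert np.isclose(pnorm(x, 1), np.linalg.norm(x, 1))             # absolute sum norm
assert np.isclose(pnorm(x, 2), np.linalg.norm(x, 2))             # Euclidean norm
assert np.isclose(np.max(np.abs(x)), np.linalg.norm(x, np.inf))  # infinity norm
print("Minkowski's inequality verified for the sampled vectors")
```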

    The next result is known as Hölder’s inequality. For this result (and henceforth) we interpret 1/∞ = 0.

    Proposition 2.3. Let x, y ∈ Rn, and let 1 ≤ p ≤ ∞ and 1 ≤ q ≤ ∞ satisfy 1/p + 1/q = 1. Then

        Σ(i=1 to n) |xi yi| ≤ ||x||p ||y||q. (2.18)

    Proof. The cases p = 1, q = ∞ and p = ∞, q = 1 are straightforward and are left as an exercise for the reader. Assume 1 < p < ∞ so that 1 < q < ∞. For u ≥ 0, v ≥ 0, and α ∈ (0, 1) note that

        u^α v^(1−α) ≤ αu + (1 − α)v, (2.19)

    with equality holding in (2.19) if and only if u = v. To see this, define f(z) ≜ z^α − αz − (1 − α) and note that f′(z) = α(z^(α−1) − 1) > 0 for 0 < z < 1 and f′(z) < 0 for all z > 1. Now, it follows that for z ≥ 0, f(z) ≤ f(1) = 0 with equality holding for z = 1. Hence, z^α ≤ αz + 1 − α with equality holding for z = 1. Taking z = u/v, v ≠ 0, yields (2.19), while for v = 0, (2.19) is trivially satisfied.

    Now, for each i ∈ {1, . . . , n}, α = 1/p, and 1 − α = 1/q, on setting

        u = |xi|^p / ||x||p^p, v = |yi|^q / ||y||q^q,

    (2.19) yields

        |xi yi| / (||x||p ||y||q) ≤ |xi|^p / (p ||x||p^p) + |yi|^q / (q ||y||q^q).

    Summing over i = 1, . . . , n yields

        Σ(i=1 to n) |xi yi| / (||x||p ||y||q) ≤ 1/p + 1/q = 1,

    which proves (2.18).
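    A quick numerical spot check of Hölder's inequality (again with randomly sampled vectors; a sketch, not a proof) is shown below. Taking p = q = 2 reproduces the Cauchy-Schwarz inequality of the corollary that follows.

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=8), rng.normal(size=8)

for p in (1.25, 2.0, 4.0):
    q = p / (p - 1.0)  # conjugate exponent, so that 1/p + 1/q = 1
    lhs = np.sum(np.abs(x * y))
    rhs = np.linalg.norm(x, p) * np.linalg.norm(y, q)
    assert lhs <= rhs + 1e-12  # Hoelder's inequality (2.18)
print("Hoelder's inequality verified; p = q = 2 gives Cauchy-Schwarz")
```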

    The case p = q = 2 is known as the Cauchy-Schwarz inequality. Since this result is important, we state it as a corollary.

    Corollary 2.1. Let x, y ∈ Rn. Then

        |x^T y| ≤ ||x||2 ||y||2.

    Proof. Note that for x = 0 or y = 0 the result is immediate. For x ≠ 0 and y ≠ 0, noting that |x^T y| ≤ Σ(i=1 to n) |xi yi| and applying Hölder’s inequality (2.18) with p = q = 2 yields the result.

    Two norms || · || and || · ||′ on Rn are equivalent if there exist positive constants α and β such that

        α||x||′ ≤ ||x|| ≤ β||x||′ (2.25)

    for all x ∈ Rn. Note that (2.25) can be written as

        (1/β)||x|| ≤ ||x||′ ≤ (1/α)||x||.

    Hence, the word equivalent is justified. The next result shows that all norms are equivalent in finite-dimensional spaces.

    Theorem 2.1. If || · || and || · ||′ are vector norms on Rn, then || · || and || · ||′ are equivalent.

    Proof. It suffices to show that every vector norm || · || on Rn is equivalent to || · ||∞. Let {e1, . . . , en} denote the standard basis of Rn and note that

        ||x|| = ||Σ(i=1 to n) xi ei|| ≤ Σ(i=1 to n) |xi| ||ei|| ≤ C||x||∞,

    where C ≜ Σ(i=1 to n) ||ei||. Similarly, it can be shown that ||x − y|| ≤ C||x − y||∞, and hence || · || is continuous with respect to || · ||∞.

    Since S ≜ {x ∈ Rn : ||x||∞ = 1} is a compact set (see Definition 2.14), it follows from Weierstrass’ theorem (Theorem 2.13) that there exist α, β > 0 such that α ≤ ||x|| ≤ β for all x ∈ S. Now, since x/||x||∞ ∈ S for every x ≠ 0, it follows that

        α||x||∞ ≤ ||x|| ≤ β||x||∞. (2.28)

    Similarly, it can be shown that

        α′||x||∞ ≤ ||x||′ ≤ β′||x||∞ (2.29)

    for some α′, β′ > 0. Now it follows from (2.28) and (2.29) that

        (α/β′)||x||′ ≤ ||x|| ≤ (β/α′)||x||′,

    which proves the equivalence of || · || and || · ||′.
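    Theorem 2.1 only asserts that equivalence constants exist; in concrete cases they can be computed. For instance, ||x||∞ ≤ ||x||2 ≤ √n ||x||∞ on Rn. The sketch below (random sampling, purely illustrative) estimates the extremal ratios and compares them with these known constants.

```python
import numpy as np

n = 4
rng = np.random.default_rng(2)
X = rng.normal(size=(100_000, n))

# ratio ||x||_2 / ||x||_inf over many random directions
r = np.linalg.norm(X, 2, axis=1) / np.linalg.norm(X, np.inf, axis=1)
print("observed ratios in [%.3f, %.3f]" % (r.min(), r.max()))
print("theoretical bounds:  [1.000, %.3f]" % np.sqrt(n))
```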

    The notion of vector norms allows us to define a matrix norm by viewing a matrix A ∈ Rn×m as a vector, for example, as vec A, where vec(·) denotes the column stacking operator.

    Definition 2.7. A matrix norm on Rn×m is a function || · || : Rn×m → R that satisfies the following axioms:

    i) ||A|| ≥ 0 for all A ∈ Rn×m.

    ii) ||A|| = 0 if and only if A = 0.

    iii) ||αA|| = |α| ||A|| for all α ∈ R and A ∈ Rn×m.

    iv) ||A + B|| ≤ ||A|| + ||B|| for all A, B ∈ Rn×m.

    If || · || is a vector norm on Rnm, then || · ||′ defined by ||A||′ ≜ ||vec A|| is a matrix norm on Rn×m. For example, the p-norms define the matrix Hölder norms

        ||A||p ≜ ||vec A||p.

    Note that we use the same symbol || · ||p to denote both the vector and the matrix p-norms; if A is a column vector, then ||A||p coincides with the vector p-norm of A. The matrix p-norms in the cases p = 1, 2, ∞ are the most commonly used. In particular, when p = 2, ||A||2 = ||A||F, where || · ||F is called the Frobenius norm. Since ||A||2 = ||vec A||2, we have

        ||A||F = ( Σ(i=1 to n) Σ(j=1 to m) A²(i,j) )^(1/2).

    It is easy to see that ||A||F = (tr AA^T)^(1/2) = (tr A^T A)^(1/2).
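    The identities above are easily confirmed numerically. The sketch below (with a random matrix) checks that ||A||F equals both ||vec A||2 and (tr AA^T)^(1/2).

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 4))

fro = np.linalg.norm(A, 'fro')
assert np.isclose(fro, np.linalg.norm(A.flatten('F'), 2))  # ||A||_F = ||vec A||_2
assert np.isclose(fro, np.sqrt(np.trace(A @ A.T)))         # ||A||_F = (tr A A^T)^(1/2)
print("Frobenius norm identities verified")
```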

    Let || · ||, || · ||′, and || · ||″ be matrix norms on Rn×l, Rn×m, and Rm×l, respectively. We say (|| · ||, || · ||′, || · ||″) is a submultiplicative triple of matrix norms with constant δ > 0 if

        ||AB|| ≤ δ||A||′ ||B||″

    for all A ∈ Rn×m and B ∈ Rm×l. If δ = 1, we omit mention of the constant. If, in addition, n = m = l and || · || = || · ||′ = || · ||″, that is,

        ||AB|| ≤ ||A|| ||B||,

    then we say that || · || is submultiplicative. Hence, || · || is submultiplicative if (|| · ||, || · ||, || · ||) is a submultiplicative triple of matrix norms with constant 1. It can be shown, for example, that || · ||p, 1 ≤ p ≤ 2, is submultiplicative. Finally, if || · || satisfies ||In|| = 1, then we say || · || is normalized.

    A special case of submultiplicative triples is the case in which B is a vector, that is, l = 1. In this case, with vector norms || · || and || · ||′ on Rn and Rm, respectively, and a matrix norm || · ||″ on Rn×m, consider (setting δ = 1)

        ||Ax|| ≤ ||A||″ ||x||′ (2.35)

    for all A ∈ Rn×m and x ∈ Rm. If n = m and || · || = || · ||′, that is,

        ||Ax|| ≤ ||A||″ ||x||,

    then we say that || · ||″ and || · || are compatible.

    We now consider a very special class of matrix norms, namely, the induced matrix norms wherein (2.35) holds as an equality.

    Definition 2.8. Let || · || and || · ||′ be vector norms on Rn and Rm, respectively. Then || · ||″ : Rn×m → R defined by

        ||A||″ ≜ max{||Ax|| : x ∈ Rm, ||x||′ = 1} (2.36)

    is called an induced matrix norm.

    It can be shown that (2.36) is equivalent to each of the expressions

        ||A||″ = max{||Ax|| : ||x||′ ≤ 1}, (2.37)
        ||A||″ = max{||Ax||/||x||′ : x ≠ 0}. (2.38)

    In this case, we say || · ||″ is induced by || · || and || · ||′. If m = n and || · || = || · ||′, then we say that || · ||″ is induced by || · || and we call || · ||″ an equi-induced norm. It remains to be shown, however, that || · ||″ defined by (2.36) is indeed a matrix norm.

    Theorem 2.2. Every induced matrix norm is a matrix norm.

    Proof. Let || · ||″ be defined as in (2.36) or, equivalently, (2.38). Clearly, || · ||″ satisfies Axioms i)−iii) of Definition 2.7. To show that Axiom iv) holds, let A, B ∈ Rn×m and note that

        ||A + B||″ = max{||(A + B)x|| : ||x||′ = 1} ≤ max{||Ax|| + ||Bx|| : ||x||′ = 1} ≤ ||A||″ + ||B||″.

    Hence, || · ||″ is a matrix norm.

    In the notation of (2.36),

        ||Ax|| ≤ ||A||″ ||x||′, x ∈ Rm.

    Hence, every induced matrix norm satisfies (2.35) with respect to the vector norms that induce it. In this case, (|| · ||, || · ||″, || · ||′) is a submultiplicative triple. If m = n and || · || = || · ||′, then the induced norm || · ||″ is compatible with || · ||. The next result shows that submultiplicative triples of matrix norms can be obtained from induced matrix norms.
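    Induced matrix norms can be approximated directly from the defining maximization (2.36). The sketch below (Monte Carlo sampling, an illustrative approximation only) compares sampled estimates of the equi-induced 1-, 2-, and ∞-norms with numpy's exact values (max column sum, largest singular value, and max row sum, respectively); random sampling can only underestimate the maximum.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(3, 3))

def induced_estimate(A, p, samples=200_000):
    # Monte Carlo lower estimate of max_{x != 0} ||Ax||_p / ||x||_p
    X = rng.normal(size=(samples, A.shape[1]))
    return np.max(np.linalg.norm(X @ A.T, p, axis=1) /
                  np.linalg.norm(X, p, axis=1))

for p in (1, 2, np.inf):
    est, exact = induced_estimate(A, p), np.linalg.norm(A, p)
    print("p =", p, ": estimate %.4f <= exact %.4f" % (est, exact))
```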

    Proposition 2.4. Let || · ||, || · ||′, and || · ||″ be vector norms on Rn, Rm, and Rl, respectively. Let || · ||″′ be the matrix norm induced by || · || and || · ||′, let || · ||″″ be the matrix norm induced by || · ||′ and || · ||″, and let || · ||″″′ be the matrix norm induced by || · || and || · ||″. Then (|| · ||″″′, || · ||″′, || · ||″″) is a submultiplicative triple, that is, for all A ∈ Rn×m and B ∈ Rm×l,

        ||AB||″″′ ≤ ||A||″′ ||B||″″. (2.39)

    Proof. For x ∈ Rl, x ≠ 0,

        ||ABx|| ≤ ||A||″′ ||Bx||′ ≤ ||A||″′ ||B||″″ ||x||″,

    which implies (2.39).

    The following corollary to Proposition 2.4 is immediate.

    Corollary 2.2. Every equi-induced matrix norm is a normalized submultiplicative matrix norm.

    2.3 Set Theory and Topology

    A set is a collection of elements; x ∈ X denotes that x is an element of the set X, while x ∉ X denotes that x is not an element of X. A set can be specified by listing its elements, X = {x1, x2, . . . , xn}, or as the collection of elements {x : x has a certain property P}. The union of sets X and Y is the set of elements belonging to X or Y. Hence,

        X ∪ Y ≜ {x : x ∈ X or x ∈ Y}.

    The intersection of sets X and Y is the set of elements belonging to both X and Y. Hence,

        X ∩ Y ≜ {x : x ∈ X and x ∈ Y}.

    The complement of X is the set of elements not belonging to X. When it is clear that the complement is with respect to a universal set U, the complement is written as {x ∈ U : x ∉ X}. X ⊆ Y denotes that X is contained in Y, that is, every element of X is an element of Y. The set with no elements is called the empty set and is denoted by Ø. The sets X and Y are disjoint if X ∩ Y = Ø, and a collection of pairwise disjoint sets whose union is X is called a partition of X.

    Example 2.4.

    The Cartesian product of n sets X1, . . . , Xn is the set consisting of ordered elements of the form (x1, x2, . . . , xn), where xi ∈ Xi. Hence,

        X1 × X2 × · · · × Xn ≜ {(x1, . . . , xn) : xi ∈ Xi, i = 1, . . . , n},

    where xi is the ith element of the ordered n-tuple. The definition of the union, intersection, and product of sets can be extended from two sets (or n sets) to an arbitrary indexed family of sets {Xi}, i ∈ I, where I is an index set. The definition of the union and intersection of a collection of sets is

        ∪(i∈I) Xi ≜ {x : x ∈ Xi for some i ∈ I}, ∩(i∈I) Xi ≜ {x : x ∈ Xi for all i ∈ I}.

    The index set I is usually {1, . . . , n}, although there are occasional exceptions to this.

    Let x ∈ Rn and ε > 0, and define the open ball with radius ε and center x by

        Bε(x) ≜ {y ∈ Rn : ||y − x|| < ε}.

    The shape of Bε(x) depends on the norm used. For example, in R2 with the infinity norm, the open ball B1(0), for which the point x is the origin (0, 0), is the interior of the square max{|y1|, |y2|} < 1.

    Definition 2.9. Let S ⊆ Rn. A point x ∈ S is an interior point of S if there exists ε > 0 such that Bε(x) ⊆ S. The interior of S is the set

        int S ≜ {x ∈ S : x is an interior point of S}.

    S is open if S = int S.

    Note that Rn and the empty set are open, and every open ball is an open set.

    Example 2.5. The interval (0, 1) ⊂ R is open since, for every x ∈ (0, 1), there exists ε > 0 such that Bε(x) ⊆ (0, 1).

    Definition 2.10. A neighborhood of x ∈ Rn is an open set containing x.

    For example, for every ε > 0, the open ball Bε(x) is a neighborhood of x (or an ε-neighborhood of x).

    Definition 2.11. Let S ⊆ Rn. A point x ∈ Rn is a closure point of S if, for every ε > 0, Bε(x) contains a point of S. A point x ∈ S is an isolated point of S if there exists ε > 0 such that Bε(x) contains no points of S other than x. The closure of S is the set

        S̄ ≜ {x ∈ Rn : x is a closure point of S}.

    S is closed if S = S̄. Finally, the boundary of S is ∂S ≜ S̄ \ int S.

    A closure point is sometimes referred to as an adherent point or contact point. A closure point should be carefully distinguished from an accumulation point.

    Definition 2.12. Let S ⊆ Rn. A point x ∈ Rn is an accumulation point of S if, for every ε > 0, Bε(x) contains a point of S distinct from x. The set of all accumulation points of S is called the derived set of S.

    Hence, x is an accumulation point of S if and only if every open ball centered at x contains a point of S distinct from x or, equivalently, every open ball centered at x contains infinitely many points of S. Note that if x is an isolated point of S, then x is a closure point of S but not an accumulation point of S. A set S ⊆ Rn is closed if and only if it contains all of its accumulation points. An accumulation point is sometimes referred to as a cluster point or limit point in the literature.

    Example 2.6. Consider the set S = (0, 1] ⊂ R. Every α ∈ [0, 1] is an accumulation point of S since every open ball (an open interval, in this case) containing α will contain points of (0, 1] distinct from α. Hence, the derived set of S is given by [0, 1], and the closure of S is

        S̄ = [0, 1].

    Clearly, S ≠ S̄, and hence S is not closed. The boundary of S consists of the points x = 0 and x = 1, and every point x in (0, 1) is an interior point of S.

    The next proposition shows that the complement of an open set is closed and the complement of a closed set is open.

    Proposition 2.5. Let S ⊆ Rn. Then S is open if and only if its complement is closed.
