
Constraint Processing

Ebook, 923 pages

About this ebook

Constraint satisfaction is a simple but powerful tool. Constraints identify the impossible and reduce the realm of possibilities to effectively focus on the possible, allowing for a natural declarative formulation of what must be satisfied, without expressing how. The field of constraint reasoning has matured over the last three decades with contributions from a diverse community of researchers in artificial intelligence, databases and programming languages, operations research, management science, and applied mathematics. Today, constraint problems are used to model cognitive tasks in vision, language comprehension, default reasoning, diagnosis, scheduling, and temporal and spatial reasoning. In Constraint Processing, Rina Dechter synthesizes these contributions, along with her own significant work, to provide the first comprehensive examination of the theory that underlies constraint processing algorithms. Throughout, she focuses on fundamental tools and principles, emphasizing the representation and analysis of algorithms.
  • Examines the basic practical aspects of each topic and then tackles more advanced issues, including current research challenges
  • Builds the reader's understanding with definitions, examples, theory, algorithms and complexity analysis
  • Synthesizes three decades of researchers' work on constraint processing in AI, databases and programming languages, operations research, management science, and applied mathematics
Language: English
Release date: May 22, 2003
ISBN: 9780080502953
Author

Rina Dechter

Rina Dechter is a professor of Computer Science at the University of California, Irvine. She received her PhD in Computer Science from UCLA in 1985, an MS degree in Applied Mathematics from the Weizmann Institute, and a BS in Mathematics and Statistics from the Hebrew University, Jerusalem. Her research centers on computational aspects of automated reasoning and knowledge representation, including search, constraint processing, and probabilistic reasoning. Professor Dechter has authored over 50 research papers and has served on the editorial boards of Artificial Intelligence, the Constraints journal, the Journal of Artificial Intelligence Research, and the Encyclopedia of AI. She was awarded the Presidential Young Investigator Award in 1991 and is a fellow of the American Association for Artificial Intelligence.



    Preface

    A constraint is a restriction on a space of possibilities; it is a piece of knowledge that narrows the scope of this space. Because constraints arise naturally in most areas of human endeavor, they are the most general means for formulating regularities that govern our computational, physical, biological, and social worlds. Some examples: the angles of a triangle must sum to 180 degrees; the four nucleotides that make up DNA strands can only combine in particular sequences; the sum of the currents flowing into a node must equal zero; Susan cannot be married to both John and Bill at the same time. Although observable in diverse disciplines, they all share one feature in common: they identify the impossible, narrow down the realm of possibilities, and thus permit us to focus more effectively on the possible.

    Formulating problems in terms of constraints has proven useful for modeling fundamental cognitive activities such as vision, language comprehension, default reasoning, diagnosis, scheduling, and temporal and spatial reasoning, as well as having application for engineering tasks, biological modeling, and electronic commerce. Formulating problems in terms of constraints enables a natural, declarative formulation of what must be satisfied, without having to say how it should be satisfied.

    This book provides comprehensive, in-depth coverage of the theory that underlies constraint processing algorithms as they have emerged in the last three decades, primarily in the area of artificial intelligence. The intended audience is readers in diverse areas of computer science, including artificial intelligence, databases, programming languages, and systems, as well as practitioners of related fields such as operations research, management science, and applied mathematics.

    This book focuses on the fundamental tools and principles that underlie reasoning with constraints, with special emphasis on the representation and analysis of constraint satisfaction algorithms that operate over discrete and finite domains. We first describe the basic principles underlying relational representation and then present processing algorithms across two main categories: search based and inference based. Search algorithms are characterized by backtracking search and its various enhancements, while inference algorithms are presented through a variety of constraint propagation methods (also known as consistency-enforcing methods).

    In order to enhance the understanding and analysis of the various methods, we emphasize a graph-based view of both types of algorithms throughout the book. This graphical preference also allows us to tie in constraint networks and constraint processing with the emerging framework of graphical models that cuts across deterministic and probabilistic knowledge bases.

    We seek to take the reader on a step-by-step journey through the world of constraints, offering definitions, examples, theory, algorithms, and complexity analysis. The text is intended primarily for graduate students, senior undergraduates, researchers, and practitioners, and it could be used in a quarter/semester course dedicated to this topic, or as a reference book in a general class on artificial intelligence and operations research. To accommodate different levels of reading, some chapters can be skipped without disturbing the flow. Each chapter starts with the basic and practical aspects of its topic, and proceeds to more advanced issues, including those under current research.

    This book is divided into two parts. Chapters 2 through 7 provide the basic material, while Chapters 8 through 15 provide more advanced elaboration. Two chapters are contributed by outside authors, experts in their respective areas: Chapter 11, which focuses on tractable languages, and Chapter 15, which provides a bridging introduction into the whole area of constraint programming languages. Chapters 12, 13, and 14 extend constraint processing to temporal reasoning, optimization tasks, and probabilistic reasoning, respectively.

    The chapter flow tree in Figure P.1 provides a precedence order for reading the chapters. A detailed overview of the chapters is provided in Chapter 1.

    Figure P.1 Chapter flow diagram.

    Acknowledgments

    It is a pleasure for me to thank the many colleagues whose work influenced the content of this book. First and foremost, I would like to acknowledge the pioneering work of Ugo Montanari, Alan Mackworth, and Eugene Freuder, whose seminal papers triggered and influenced much of my work, as well as that of the entire constraint processing field. Second, I would like to thank collaborators and students in my group whose work forms the basis for large portions of this book: Itay Meiri, Rachel Ben-Eliyahu-Zohary, Dan Frost, Eddie Schwalb, Irina Rish, Kalev Kask, Javier Larrosa, Lluis Vila, and Yousri El-Fattah.

    Special thanks for many influential interactions to the members of the UCLA cognitive science lab: Hector Geffner, Dan Geiger, and Adnan Darwiche, and most of all, to Judea Pearl, who has inspired me throughout my academic life, and whose encouragement and support is invaluable.

    Special thanks also to Peter van Beek, with whom the idea for this book was born, and whose collaboration and advice throughout the years have always been beneficial, and to Krzysztof Apt for forcefully encouraging me to see this project through.

    I am grateful to every one of my colleagues who has contributed comments on the many versions of this manuscript: Pedro Meseguer, Francesca Rossi, Toby Walsh, Barbara Smith, Carla Gomes, Bart Selman, Henry Kautz, Roberto Bayardo, Manolis Koubarakis, Jean-Charles Régin, Fahiem Bacchus, and Weixiong Zhang. Pedro and Carla in particular provided thorough and in-depth comments. Thanks also go to the anonymous reviewers of the final draft.

    I am indebted to all the students who have perused many versions of this book in the past six years and have provided careful criticism, and in particular to my recent group: Robert Mateescu, Radu Marinescu, David Larkin, and Bozhena Bidyuk.

    The National Science Foundation deserves acknowledgment for sponsoring research that allowed the creation of this book, with special thanks to Larry Reaker. Other sponsors include the Office of Naval Research, with special thanks to Wendy Martinez, and the Air Force Office of Scientific Research, with special thanks to Abraham Waksman.

    Thanks to Nira Brand for her dedicated editing of the first six chapters of this book, and to Mario Espinoza for assisting in drawing many of the figures and incorporating revisions.

    Finally, I owe a great debt to my family for their encouragement throughout the years: to Gadi, for harnessing his wonderful language skills as a developmental editor for the book; to Dani, for his inspiring songs that influenced the final months of this work; and to Eyal, for his thought-provoking questions that forced me to deal with basic issues. Finally, thanks to Avi, my husband of more than 30 years, whose daily support and companionship have made this book, and many more things, possible.

    chapter 1

    Introduction

    The work under our labour grows luxurious by restraint.

    John Milton, Paradise Lost

    We regularly encounter constraints in our day-to-day lives—for instance, a finite amount of memory in our PCs, seats in the car, hours in the day, money in the bank. And we regularly engage in solving constraint satisfaction problems: how to live well but within our means, how to eat healthy but still enjoy food. Most of the time, we don't require sophisticated computer algorithms to figure out whether to splurge on a ski vacation or eat the triple-layer chocolate cake. But consider the complexity encountered when the number of constraints to be satisfied, and the number of variables involved, begins to grow. For example, we find it takes a surprisingly long time to determine the optimal seating arrangement for a dinner party, or to choose one movie rental for a large group of friends.

    Now imagine the difficulty in scheduling classrooms for a semester of university instruction. We need to allocate a classroom for every course while simultaneously satisfying the constraints that no two classes may be held in the same classroom at the same time, no professor can teach in two different classrooms at the same time, no class may be scheduled in the middle of the night, all classes must be offered in appropriately sized rooms or lecture halls, certain classes must not be scheduled at the same time, and so on.

    As the complexity of the problem grows, we turn to computers to help us find an acceptable solution. Computer scientists have devised languages to model constraint satisfaction problems and have developed methods for solving them. The language of constraints is used to model simple cognitive tasks such as vision, language comprehension, default reasoning, and abduction, as well as tasks that require high levels of human expertise such as scheduling, design, diagnosis, and temporal and spatial reasoning.

    In general, the tasks posed in the language of constraints are computationally intractable (NP-hard), which means that we cannot expect to design algorithms that scale efficiently with problem size in all cases. However, it is possible and desirable to identify special properties of a problem class that accommodate efficient solutions, and to develop general algorithms that are efficient for as many problems as possible.

    Indeed, over the last two to three decades, a great deal of theoretical and experimental research has focused on developing and improving the performance of general algorithms for solving constraint satisfaction problems, on identifying restricted subclasses that can be solved efficiently, called tractable classes, and on developing approximation algorithms. This book describes the theory and practice underlying such constraint processing methods.

    The remainder of this chapter is divided into three parts. First is an informal overview of constraint networks, starting with common examples of problems that can be modeled as constraint satisfaction problems. Second is an overview of the book by chapter. Third is a review of mathematical concepts and some preliminaries relevant to our discussion throughout the book.

    1.1 Basic Concepts and Examples

    In general, constraint satisfaction problems include two important components: variables with associated domains, and constraints. Let's define each component and then take a look at several examples that formally model constraint satisfaction problems. First, every constraint problem must include variables: objects or items that can take on a variety of values. The set of possible values for a given variable is called its domain. For example, in trying to find an acceptable seating arrangement for a dinner party, we may choose to see the chairs as our variables, each with the same domain, which is the list of all guests.

    The second component to every constraint problem is the set of constraints themselves. Constraints are rules that impose a limitation on the values that a variable, or a combination of variables, may be assigned. If the host and hostess must sit at the two ends of the table, then their choices of seats are constrained. If two feuding guests must not be placed next to or directly opposite one another, then we must include this constraint in our overall problem statement.

    Note that there is often more than one way to model a problem. In the previous example, we could just as logically have decided to call the guests our variables and their domains the set of chairs at the table. In this case, assuming a one-to-one correspondence between chairs and guests, the choice makes little difference, but in other cases, one formulation of a problem may lend itself more readily to solution techniques than another.

    A model that includes variables, their domains, and constraints is called a constraint network, also called a constraint problem. Use of the term network can be traced to the early days of constraint satisfaction work when the research focus was restricted to sets of constraints whose dependencies were naturally captured by simple graphs. We prefer this term because it emphasizes the importance of a constraint dependency structure in reasoning algorithms.

    A solution is an assignment of a single value from its domain to each variable such that no constraint is violated. A problem may have one, many, or no solutions. A problem that has one or more solutions is satisfiable or consistent. If there is no possible assignment of values to variables that satisfies all the constraints, then the network is unsatisfiable or inconsistent.

    Typical tasks over constraint networks are determining whether a solution exists, finding one or all solutions, finding whether a partial instantiation can be extended to a full solution, and finding an optimal solution relative to a given cost function. Such tasks are referred to as constraint satisfaction problems (CSPs).

    Let’s look at some common examples of problems that can be intuitively modeled as constraint satisfaction problems, including both simple puzzle problems that help illustrate the principles involved, as well as more complex real-world problems. At this point the specification of the constraints will be made informally. We will revisit some of these examples in greater detail in the following chapter and throughout the book.

    The n-Queens Problem

    The classic example used to illustrate a constraint satisfaction problem is the n-queens problem. The task is to place n queens on an n × n chessboard such that the placement of no queen constitutes an attack on any other. One possible constraint network formulation of the problem is the following: there is a variable for each column of the chessboard x1,…, xn, the domains of the variables are the possible row positions Di = {1,…, n}, and the constraint on each pair of columns is that the two queens must not share a row or diagonal. An interesting property of this problem is that the number of variables is always the same as the number of values in each domain.
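    The formulation above can be illustrated with a minimal brute-force sketch (ours, not the book's): generating permutations enforces that no two queens share a row, and the pairwise test rules out shared diagonals.

```python
from itertools import permutations

def n_queens_solutions(n):
    """Enumerate all solutions of the n-queens CSP: variables are
    the columns x1..xn, values are row positions. Permutations
    enforce distinct rows; the pairwise check forbids diagonals."""
    solutions = []
    for rows in permutations(range(1, n + 1)):
        if all(abs(rows[i] - rows[j]) != j - i
               for i in range(n) for j in range(i + 1, n)):
            solutions.append(rows)
    return solutions
```

    For n = 4 this enumeration finds exactly two placements. Brute force like this is exponential in n, which is precisely why the search and inference techniques developed in this book matter.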

    Crossword Puzzles

    Crossword puzzles have also been used in evaluating constraint satisfaction algorithms. The task is to assign words from the dictionary (or from a given set of words) into vertical or horizontal slots according to certain constraints. If we allow each word to be placed in any space of correct length, a possible constraint network formulation of the crossword puzzle in Figure 1.1 is the following: each white square is a variable, the domain of each variable is the alphabet, and the constraints are dictated by the input of possible words:

    Figure 1.1 A crossword puzzle.

    So, for example, there is a constraint over the variables x8, x9, x10, x11 that allows assigning only the four-letter words from the list to these four variables. This constraint can be described by the relation C = {(A, L, S, O), (E, A, R, N), (H, I, K, E), (I, R, O, N), (S, A, M, E)}. This means that x8, x9, x10, x11 can be assigned, respectively, either {A, L, S, O} or {E, A, R, N}, and so on. A solution to the constraint problem will generate an assignment of letters to the squares so that only these four-letter words are entered.
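    The relational view can be made concrete in a few lines (a sketch, not the book's notation): a constraint is just a set of allowed tuples, and checking an assignment is a membership test.

```python
# The relation C over (x8, x9, x10, x11), as listed above: only
# these four-letter words may fill the slot.
C = {("A", "L", "S", "O"), ("E", "A", "R", "N"), ("H", "I", "K", "E"),
     ("I", "R", "O", "N"), ("S", "A", "M", "E")}

def satisfies(letters):
    """An assignment to the four variables is consistent with the
    constraint exactly when its tuple is a member of the relation."""
    return tuple(letters) in C
```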

    Map Coloring and k-Colorability

    The map-coloring problem is a well-known problem that asks whether it is possible to color a map with only four colors such that no two adjacent countries share the same color. The problem was open for many years until it was solved in 1976 by a combination of mathematics and computer simulation (Appel and Haken 1976). Many resource allocation and communication problems can be abstracted to this problem, known in the field of classical graph theory as k-colorability. Examples of this type of problem are radio link frequency assignment problems—communication problems where the goal is to assign frequencies to a set of radio links in such a way that all the links may operate together without noticeable interference.

    In graph coloring, the map can be abstracted to a graph with a node for each country and an edge joining any two neighboring countries. Given a graph of arbitrary size, the problem is to decide whether or not the nodes can be colored with only k colors such that any two adjacent nodes in the graph must be of different colors. If the answer is yes, then a possible k-coloring of the graph’s nodes should be given.

    The graph k-colorability problem is formulated as a constraint satisfaction problem where each node in the graph is a variable, the domains of the variables are the possible colors (e.g., when k = 3, every variable xi has the domain Di = {red, blue, green} or Di = {1, 2, 3}), and the constraints are that every two adjacent nodes must be assigned different values (if xi is connected to xj, then xi ≠ xj). A specific example is given in Figure 1.2, where the variables are the countries denoted A, B, C, D, E, F, and G. A solution to this constraint satisfaction problem is a legal 3-coloring of the graph. A property of k-colorability problems is that their not-equal constraints appear in many problems and are among the loosest constraints: they forbid only k out of the k² possible value combinations between any two constrained variables. A solution to the graph-coloring problem in Figure 1.2, using three colors, is (A = green, D = red, E = blue, B = blue, F = blue, C = red, G = green).

    Figure 1.2 A map-coloring problem.
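    A brute-force sketch of this formulation is shown below; it uses a tiny hypothetical triangle graph rather than the map of Figure 1.2, whose edges are given only in the figure.

```python
from itertools import product

def k_colorings(nodes, edges, colors):
    """Enumerate all color assignments satisfying the not-equal
    constraints (one per edge), by brute force over the value space."""
    for values in product(colors, repeat=len(nodes)):
        assignment = dict(zip(nodes, values))
        if all(assignment[u] != assignment[v] for u, v in edges):
            yield assignment

# A hypothetical example: a triangle needs at least three colors.
triangle = [("A", "B"), ("B", "C"), ("A", "C")]
three = list(k_colorings(["A", "B", "C"], triangle, ["red", "blue", "green"]))
two = list(k_colorings(["A", "B", "C"], triangle, ["red", "blue"]))
```

    With three colors the triangle has six legal colorings; with only two it has none, i.e., the network is inconsistent.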

    Configuration and Design

    Configuration and allocation problems are a particularly interesting class for which the constraint satisfaction formalism is useful. These problems arise in applications as diverse as automobile transmission design, microcomputer system configuration, and floor plan layout. Let us consider a simple example from the domain of architectural site location. Consider the map in Figure 1.3, showing eight lots available for development.¹ Five developments are to be located on these lots: a recreation area, an apartment complex, a cluster of 50 single-family houses, a large cemetery, and a dump site. Assume the following information and conditions:

    Figure 1.3 Development map.

    • The recreation area must be near the lake.

    • Steep slopes must be avoided for all but the recreation area.

    • Poor soil must be avoided for developments that involve construction, namely, the apartments and the houses.

    • Because it is noisy, the highway must not be near the apartments, the houses, or the recreation area.

    • The dump site must not be visible from the apartments, the houses, or the lake.

    • Lots 3 and 4 have poor soil.

    • Lots 3, 4, 7, and 8 are on steep slopes.

    • Lots 2, 3, and 4 are near the lake.

    • Lots 1 and 2 are near the highway.

    The problem of siting the five developments on the eight available lots while satisfying the given conditions can be naturally formulated as a constraint satisfaction problem. A variable can be associated with each development:

    • Recreation, Apartments, Houses, Cemetery, and Dump

    The eight development sites constitute our variables’ common domain:

    • Domain = {1, 2, 3, 4, 5, 6, 7, 8}

    The information and conditions above suggest the constraints on the variables. For instance, the statement of the problem incorporates the implicit constraint that no two developments may occur on the same lot. This can be formulated using not-equal constraints, as we saw in graph coloring.
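    The stated conditions can be enumerated directly. The sketch below is our illustration, not the book's solution; note that the dump-visibility constraint is omitted, since the visibility relation is given only by the map in Figure 1.3, not by the text.

```python
from itertools import permutations

# Facts from the conditions above.
POOR_SOIL = {3, 4}
STEEP = {3, 4, 7, 8}
NEAR_LAKE = {2, 3, 4}
NEAR_HIGHWAY = {1, 2}
LOTS = range(1, 9)

def site_solutions():
    """Enumerate sitings (recreation, apartments, houses, cemetery,
    dump) consistent with the stated constraints. permutations()
    enforces the implicit not-equal (distinct lots) constraint."""
    sols = []
    for rec, apt, hou, cem, dmp in permutations(LOTS, 5):
        if rec not in NEAR_LAKE:
            continue                     # recreation must be near the lake
        if any(x in STEEP for x in (apt, hou, cem, dmp)):
            continue                     # steep slopes: all but recreation
        if apt in POOR_SOIL or hou in POOR_SOIL:
            continue                     # no construction on poor soil
        if any(x in NEAR_HIGHWAY for x in (apt, hou, rec)):
            continue                     # highway noise constraint
        sols.append((rec, apt, hou, cem, dmp))
    return sols
```

    Under these constraints alone, the recreation area is forced onto lot 3 or 4, the apartments and houses onto lots 5 and 6, and the cemetery and dump onto lots 1 and 2.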

    Huffman-Clowes Scene Labeling

    One of the earliest constraint satisfaction problems was introduced in the early 1970s. This problem concerns the three-dimensional interpretation of a two-dimensional drawing. Huffman and Clowes (1975) developed a basic labeling scheme of the arcs in a block world picture graph, where + stands for a convex edge, − for a concave edge, and → for occluding boundaries. They demonstrated that the possible combinations of legal labelings associated with different junctions in such scenes can be enumerated as in Figure 1.4. The interpretation problem is to label the junctions of a given drawing such that every junction is labeled according to one of its legal labeling combinations and edges common to two junctions receive the same label. One formulation of the cube instance as a constraint satisfaction problem assigns variables to the lines in the figure, with their domains being the possible labelings {+, −, →, ←}. The labelings of adjacent lines in a junction are constrained according to the junction type. The possible consistent labelings of a cube are presented in Figure 1.5.

    Figure 1.4 Huffman-Clowes junction labelings.

    Figure 1.5 Solutions: (a) stuck on left wall, (b) stuck on right wall, (c) suspended in midair, and (d) resting on floor.

    Scheduling Problems

    Scheduling problems naturally lend themselves to constraint satisfaction formulation. These problems typically involve scheduling activities or jobs while satisfying given resource and temporal precedence constraints. A well-known problem class is job shop scheduling, where a typical resource is the availability of a machine for processing certain tasks. Another class is time-tabling problems, which involve scheduling rooms and teachers to classes and time slots while satisfying natural constraints. In such problems it is common to associate each task with a variable whose domain is the possible starting times of that task, with the constraints restricting the relevant variables. A simple example will be given in Chapter 2.

    We have looked at several examples of problems, and described them—informally—as constraint problems. A formal treatment will be given in Chapter 2. We conclude this chapter with an overview of the remaining chapters, and with a discussion of the mathematical concepts and background used throughout the book.

    1.2 Overview by Chapter

    The remainder of this book is divided into two parts. The first (Chapters 2 through 7) contains the basic core information for constraint processing. The second (Chapters 8 through 15) includes more advanced areas and related material. Techniques for processing constraints can be classified into two main categories: (1) inference and (2) search, and these techniques may also be combined. Both inference and search techniques offer methods that are systematic and complete, as well as stochastic and incomplete. We call an algorithm complete if it is guaranteed to solve the problem or to prove that the problem is unsolvable. Incomplete (or approximation) algorithms sometimes solve hard problems quickly, but they are not guaranteed to solve the problem even with unbounded time and space.

    1.2.1 Inference

    Chapters 3 and 4 in the first part of the book and Chapters 8 and 9 in the second part focus on inference algorithms. Inference algorithms create equivalent problems through problem reformulation, usually producing simpler problems that are subsequently easier to solve by a search algorithm. Occasionally, inference methods can deliver a solution or prove the inconsistency of the network without requiring further search.

    Chapter 3 focuses on the main concepts associated with inference methods applied to constraint networks called local consistency algorithms. These algorithms take a telescopic view of subparts of the constraint network and demand that they contain no obviously extraneous or contradictory parts. For example, if a network consists of several variables x1,…, xn, all with domains D = {1,…, 10}, and if x1 is constrained such that its value must be strictly greater than the value of every other variable, then we can infer that there is no need for the value 1 in x1’s domain.

    In the language of constraint networks, the value 1 is inconsistent and should be removed from x1's domain because there is no solution to the problem that assigns x1 = 1. (We could similarly reason that the value 10 should be removed from the domains of x2, x3, and so on.)

    In general, local consistency algorithms (also known as constraint propagation) are polynomial algorithms that transform a given constraint network into an equivalent, yet more explicit, network by deducing new constraints, which are then added to the problem. For example, the most basic consistency algorithm we will present, called arc-consistency, ensures that any legal value in the domain of a single variable has a legal match in the domain of any other selected variable. This is the sort of consistency enforcing that we saw in the above example. Path-consistency ensures that any consistent solution to any two variables is extensible to any third variable, and, in general, i-consistency algorithms guarantee that any locally consistent instantiation of i − 1 variables is extensible to any ith variable. We can transform a given network to an i-consistent one by inferring some constraints. However, enforcing i-consistency is computationally expensive; it can be accomplished in time and space exponential in i. While these methods are not guaranteed to find a solution, sometimes they eliminate domain values to the point of completely emptying one or more domains, allowing us to declare the network inconsistent.
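    The domain pruning in the example above can be sketched as a single "revise" step, the building block of arc-consistency (a generic sketch, not the book's pseudocode):

```python
def revise(dom_x, dom_y, constraint):
    """One arc-consistency (revise) step for a binary constraint:
    keep only the values of x that have at least one supporting
    value in y's domain."""
    return {a for a in dom_x if any(constraint(a, b) for b in dom_y)}

# The example above: x1 must be strictly greater than another
# variable, both with domain {1, ..., 10}.
D = set(range(1, 11))
pruned = revise(D, D, lambda a, b: a > b)   # the unsupported value 1 is removed
```

    Running the step in the other direction (with the constraint reversed) would similarly remove the value 10 from the other variable's domain.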

    Chapters 4 and 8 continue our discussion of advanced inference methods, some of which are guaranteed to find a solution to the problem by making the problem globally consistent. We will present less expensive directional consistency algorithms that enforce global consistency only relative to a certain variable ordering, such as ADAPTIVE-CONSISTENCY, an inference algorithm typical of a class of variable elimination algorithms. A unifying description of such algorithms is the focus of Chapter 8. Chapters 4 and 8 will expose you to structure-based analysis and parameters, such as induced width, that accompany many of the subsequent chapters. Some tractable classes, exploiting the notions of tight domains and tight constraints, and row-convex constraints, are also introduced in Chapter 8. The chapter also includes the specialized language and consistency methods of propositional theories and the constraints expressed by linear inequalities.

    1.2.2 Search

    Chapters 5 and 6 focus on complete search algorithms. The most common algorithm for performing systematic search is backtracking, which traverses the space of partial solutions in a depth-first manner. Each step in the search represents the assignment of a value to one additional variable, thus extending a candidate partial solution. When a variable is encountered such that none of its possible values is consistent with the current candidate solution, a situation referred to as a dead-end, backtracking takes place. Namely, the algorithm returns to an earlier variable and attempts to assign it a new value, such that the dead-end can be avoided. The best scenario for a backtracking algorithm occurs when the algorithm is able to assign a value to every variable without encountering any dead-ends. In such problem instances, backtracking is very efficient. In the worst case, however, when many dead-ends are encountered, backtracking requires exponential time.
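    The plain backtracking scheme described above can be sketched in a few lines. This is a generic illustration with a static variable order and a hypothetical triangle-coloring instance, not an algorithm from a later chapter.

```python
def backtrack(variables, domains, consistent, assignment=None):
    """Depth-first backtracking: extend a partial assignment one
    variable at a time; on a dead-end, undo and try another value."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                    # all variables assigned
    var = variables[len(assignment)]         # static variable ordering
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):           # check the partial solution
            result = backtrack(variables, domains, consistent, assignment)
            if result is not None:
                return result
        del assignment[var]                  # dead-end below: retract
    return None                              # no value works: backtrack

# Demo (hypothetical): 3-coloring a triangle graph.
edges = [("A", "B"), ("B", "C"), ("A", "C")]
def no_conflict(a):
    return all(a[u] != a[v] for u, v in edges if u in a and v in a)
solution = backtrack(["A", "B", "C"],
                     {v: ["red", "green", "blue"] for v in "ABC"},
                     no_conflict)
```

    On the triangle instance the search succeeds without a single dead-end; on an unsatisfiable instance it exhausts the value choices and returns None, proving inconsistency.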

    Improvements to backtracking have focused on the two phases of the algorithm: moving-forward activity (look-ahead schemes), described in Chapter 5, and backtracking activity (look-back schemes), the focus of Chapter 6. When moving forward to extend a partial solution, look-ahead schemes perform some computation to decide which variable and even which of that variable’s values to assign next in order to enhance the efficiency of the search. For example, variables that participate in many constraints maximally constrain the rest of the search space and are thus preferred. In choosing a value from the domain of the selected variable, however, the least constraining value is preferred, in order to maximize options for future instantiations.

    Look-back schemes are described in Chapter 6. They are invoked when the algorithm encounters a dead-end and perform two functions: First, they decide how far to backtrack by analyzing the reasons for the dead-end, a process often referred to as backjumping. Second, they record the reasons for the dead-end in the form of new constraints, so that the same conflict does not arise again, a process known as constraint learning or no-good recording.

    Chapter 7 describes stochastic local search (SLS) strategies, which are approximate search methods. These methods move in a hill-climbing manner, traversing the space of complete instantiations of all the variables. The algorithm improves its current instantiation by iteratively changing (or flipping) the value of a variable so as to maximize the number of constraints satisfied. Such search algorithms are incomplete, may get stuck in a local optimum of their guiding cost function, and cannot prove inconsistency. Nevertheless, when equipped with heuristics for randomizing the search or for revising the guiding criterion function (constraint reweighting), they have proven successful in solving large and hard problems that are frequently too difficult for backtracking-style search. The chapter also shows how these local search algorithms can be combined with inference using a method known as the cycle-cutset scheme. This method is also elaborated on in Chapter 10.
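    The flipping strategy described above can be sketched in a min-conflicts style. This is an illustrative simplification of ours; the randomization and reweighting heuristics mentioned in the text are not included.

```python
import random

def min_conflicts(variables, domains, conflicts, max_flips=1000, seed=0):
    """Hill-climbing over complete assignments: start at random and
    repeatedly flip a conflicted variable to its least-conflicting
    value. Incomplete: may return None even if a solution exists."""
    rng = random.Random(seed)
    assign = {v: rng.choice(domains[v]) for v in variables}
    for _ in range(max_flips):
        bad = [v for v in variables if conflicts(v, assign[v], assign) > 0]
        if not bad:
            return assign                    # every constraint satisfied
        var = rng.choice(bad)                # pick a conflicted variable
        assign[var] = min(domains[var],      # flip to the best value
                          key=lambda val: conflicts(var, val, assign))
    return None                              # gave up: no proof either way

# Demo (hypothetical instance): two variables that must differ.
def conflicts(v, val, assign):
    other = "y" if v == "x" else "x"
    return 1 if val == assign[other] else 0

result = min_conflicts(["x", "y"], {"x": [0, 1], "y": [0, 1]}, conflicts)
```

    Note that the caller supplies the conflict count; the search itself never enumerates partial solutions and therefore cannot detect unsatisfiability, in line with the incompleteness discussed above.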

    Local search algorithms flourished initially for solving satisfiability, namely, finding a truth assignment to a set of propositional variables that satisfies a set of Boolean clauses. Satisfiability is a central constraint satisfaction problem that received targeted attention in a variety of communities. In this book we treat satisfiability as a special case of constraint solving in each of the relevant chapters.
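Viewed as a constraint problem, satisfiability has Boolean domains and one constraint per clause. The following brute-force checker (an illustrative sketch of ours; the clause encoding as (variable, sign) pairs is an assumption) makes that special-case view concrete.

```python
from itertools import product

def brute_force_sat(variables, clauses):
    # A clause is a list of (variable, sign) literals; it is satisfied when
    # at least one literal matches the truth assignment.
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[v] == sign for v, sign in clause)
               for clause in clauses):
            return assignment
    return None  # no satisfying truth assignment exists

# (x or y) and (not x or not y): satisfiable exactly when x != y
clauses = [[('x', True), ('y', True)], [('x', False), ('y', False)]]
```

Enumerating all 2^n truth assignments is of course exponential; the backtracking and local search methods of Chapters 5 through 7 are the practical alternatives.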

    The second part of the book, Chapters 8 through 15, focuses on advanced structure-based inference, search, and their hybrids, and discusses tractable classes that are derived from the constraint types. It also includes topics such as temporal constraints, constraint optimization, probabilistic networks, and constraint programming. Each of the latter topics constitutes an area of significance in its own right and deserves its own book. Here we only seek to open a window into these topics, as they emerge naturally from within the constraint processing framework.

    Chapter 8 was discussed earlier. Chapter 9 continues with advanced inference algorithms, concentrating on a structure-based compilation method called tree-clustering. This method belongs to the family of structure-driven algorithms that depict constraint networks as graphs where nodes represent variables and edges represent constraints. These techniques emerged from an attempt to characterize easy-to-solve constraint problems by their graph characteristics. The basic network structure that supports tractability is a tree, an observation made repeatedly in the areas of constraint networks, complexity theory, and database theory. Tree-clustering compiles a constraint problem into an equivalent tree of subproblems whose respective solutions can be efficiently combined into a solution to the whole problem. The ADAPTIVE-CONSISTENCY algorithm, described in Chapters 4 and 8, is closely related to tree-clustering; both are exponentially bounded in time and space by a parameter of the constraint graph called the induced width.

    Chapter 10 focuses on hybrids of search and inference. When a problem is computationally hard for inference, it can be solved by bounding the amount of constraint propagation and augmenting the algorithm with a search component.

    We start by presenting the cycle-cutset scheme, a search and inference hybrid, which is exponentially bounded by the graph’s cycle-cutset. We then extend this approach to a general parameterized hybrid scheme whose parameter b bounds the level of inference allowed. Such combined methods also allow trade-offs between time and space.

    Chapter 11 extends the theory of tractable constraint problems by restricting the language by which the constraints themselves can be expressed. It presents a theory of both the expressiveness and the complexity of constraint languages and characterizes both for constraints over finite domains. The tractable cases discussed include implicational and max-ordered constraints and linear constraints.

    In Chapter 12, we introduce special classes of constraints associated with temporal reasoning that have received much attention in the last decade. These tractable classes include subsets of the qualitative interval algebra, expressing relationships such as "time interval A overlaps or precedes time interval B," as well as quantitative binary linear inequalities over the real numbers of the form X − Y ≤ a.
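A set of such difference constraints X − Y ≤ a can be tested for consistency via shortest paths on a "distance graph" with an edge from Y to X of weight a: the set is inconsistent exactly when that graph contains a negative cycle. The sketch below (our illustration; the function and data names are assumptions) uses the Floyd-Warshall all-pairs shortest-path algorithm.

```python
def consistent(variables, constraints):
    # constraints is a list of triples (x, y, a), each encoding x - y <= a.
    INF = float('inf')
    d = {u: {v: (0 if u == v else INF) for v in variables} for u in variables}
    for x, y, a in constraints:
        d[y][x] = min(d[y][x], a)    # edge y -> x with weight a
    # Floyd-Warshall all-pairs shortest paths.
    for k in variables:
        for i in variables:
            for j in variables:
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # A negative diagonal entry signals a negative cycle: inconsistency.
    return all(d[v][v] >= 0 for v in variables)
```

For instance, {X − Y ≤ 5, Y − X ≤ −3} is consistent (take X − Y = 4), while {X − Y ≤ 5, Y − X ≤ −6} demands X − Y ≥ 6 as well, and the corresponding cycle of weight 5 − 6 = −1 exposes the contradiction.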

    Chapter 13 extends the constraint processing task to combinatorial optimization. It is often the case that the problem at hand requires specifying preferences among solutions. Such problems can be expressed by augmenting the constraint problem with a cost function. The chapter extends search and inference approaches to optimization. It summarizes the two best-known approaches for optimization developed in operations research: branch-and-bound and dynamic programming.
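The branch-and-bound idea can be sketched on a toy weighted constraint problem: minimize a sum of cost functions, pruning any partial assignment whose accumulated cost already matches or exceeds the best full solution found so far. This is our own minimal illustration under assumed names, not the book's presentation.

```python
def branch_and_bound(variables, domains, cost_fns):
    # cost_fns is a list of (scope, f) pairs; f takes the scope's values.
    best_cost = [float('inf')]
    best_sol = [None]

    def lower_bound(assignment):
        # Cost of all functions whose scope is already fully assigned.
        return sum(f(*(assignment[v] for v in scope))
                   for scope, f in cost_fns
                   if all(v in assignment for v in scope))

    def search(i, assignment):
        if lower_bound(assignment) >= best_cost[0]:
            return                   # prune: cannot improve the incumbent
        if i == len(variables):
            best_cost[0] = lower_bound(assignment)
            best_sol[0] = dict(assignment)
            return
        var = variables[i]
        for val in domains[var]:
            assignment[var] = val
            search(i + 1, assignment)
            del assignment[var]

    search(0, {})
    return best_sol[0], best_cost[0]
```

Here the lower bound is simply the cost of the assigned portion; stronger bounds, such as those obtained by partial dynamic programming, prune far more of the search space.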

    Chapter 14 takes you one step beyond deterministic constraints to networks involving probabilistic relationships such as Bayesian networks. It shows that the principles for computing answers to relevant queries over Bayesian networks can be addressed in a way similar to that of processing constraints and optimization problems, namely, by using inference and search. In particular, the inference-type bucket elimination algorithms for finding posterior probabilities and for computing most likely tuples are given.

    The practice of designing models for given problems is only casually addressed in this book, via sporadic examples and exercises. Although important, this aspect of constraint networks is something of an art and not well-understood. Our final chapter, Chapter 15, provides a window into a language approach for addressing the modeling issue. It also provides an introduction to a class of languages that exploit constraint processing algorithms known as constraint programming.

    Some chapters throughout the book include empirical testing of relevant algorithms. Our examples aim at giving the flavor of empirical testing, but they are by no means inclusive. For a comprehensive treatment of empirical testing, see Frost (1997).

    1.3 Mathematical Background

    The formalization of constraint networks relies upon concepts drawn from the related areas of discrete mathematics, logic, the theory of relational databases, and graph theory. This section is a summary of the mathematical knowledge needed for understanding the formalization of constraint networks and the analyses presented in subsequent chapters. Here we present the basic notations and definitions for sets, relations, operations on relations, and graphs. If you are already familiar with these topics, a skim of this material will suffice to ensure your understanding of the notation used in this book.
