Industrial Agents: Emerging Applications of Software Agents in Industry
Ebook, 1,031 pages, 21 hours

About this ebook

Industrial Agents explains how multi-agent systems improve collaborative networks to offer dynamic service changes, customization, improved quality and reliability, and flexible infrastructure. Learn how these platforms can offer distributed intelligent management and control functions with communication, cooperation and synchronization capabilities, and also provide for the behavior specifications of the smart components of the system. The book offers not only an introduction to industrial agents, but also clarifies and positions the vision, on-going efforts, example applications, assessment and roadmap applicable to multiple industries. This edited work is guided and co-authored by leaders of the IEEE Technical Committee on Industrial Agents who represent both academic and industry perspectives and share the latest research along with their hands-on experiences prototyping and deploying industrial agents in industrial scenarios.
  • Learn how new scientific approaches and technologies aggregate resources such as next-generation intelligent systems, manual workplaces, and information and material flow systems
  • Gain insight from experts presenting the latest academic and industry research on multi-agent systems
  • Explore multiple case studies and example applications showing industrial agents in a variety of scenarios
  • Understand implementations across the enterprise, from low-level control systems to autonomous and collaborative management units
Language: English
Release date: Mar 13, 2015
ISBN: 9780128004111


    Industrial Agents - Paulo Leitão


    Part I

    Industrial Agents: Concepts and Definitions

    Chapter 1

    Software Agent Systems

    Rainer Unland    Institute for Computer Science and Business Information Systems (ICB), University of Duisburg-Essen, Essen, Germany

    Department of Computer Science and Software Engineering, University of Canterbury, Christchurch, New Zealand

    Abstract

    Agents and multi-agent systems are among the most fascinating topics in computer science. They have attracted and unified not only researchers from nearly all areas of computer science but also researchers from other core disciplines such as psychology, sociology, biology, and control engineering. Meanwhile, agent-based systems have successfully proven their usefulness in many different real-life application areas, especially industrial ones. This is a clear sign that the discipline has become mature. This chapter presents a comprehensive state-of-the-art introduction to advanced software agents and multi-agent systems. Properties and types of agents and multi-agent systems are discussed, including precise definitions of both. Successful cooperation between agents is only possible if they can communicate in an efficient and semantically meaningful way; thus, relevant communication strategies are discussed. Agent-based applications can be very powerful, complex systems, and their development can profit greatly from adequate support tools. Different development support options and environments are discussed in some detail. Due to their nature, multi-agent systems are excellent candidates for the realization of comprehensive simulations, especially if the individuality and uniqueness of the components of the simulation environment play an important role. The second part of the chapter addresses supporting technologies and concepts: ontologies, self-organization and emergence, and swarm intelligence and stigmergy are introduced and discussed in some detail.

    Keywords

    Software agents

    Multi-agent systems

    Ontologies

    Self-organization

    Emergence

    Swarm intelligence

    Stigmergy

    1.1 Introduction

    At the beginning of the 1990s, agents and agent-based systems started to become a major research topic. Very soon, they became one of the hottest and most-funded research topics in computer science. One of the fascinating facets of agent-based research has always been that it attracted not only researchers from most computer science areas but also researchers from other core research disciplines, such as psychology, sociology, biology, and control engineering. Of course, these strong influences from many sides led to some chaotic and hardly controllable research. Since then, the tempest has calmed, and agent-based systems have slowly found their way into real-life applications in many disciplines, especially industrial ones. This is a clear sign that the discipline has started to mature.

    This chapter offers a general introduction to agents, agent-based systems, and related technologies, slightly influenced by the view and requirements of industrial applications. The remainder of the chapter is organized as follows. The next section discusses the fundamentals of agents and agent-based systems, especially the set of properties associated with them; different kinds of agent communication are also introduced, and the section closes with a discussion of development concepts for agent-based systems. Section 1.3 presents technologies and concepts that are closely related to, and substantially extend, the capabilities of agent technology; in particular, ontologies, self-organization and emergence, and swarm intelligence and stigmergy are discussed in more detail. Finally, Section 1.4 offers a summary of these developments.

    1.2 Fundamentals of Agents and Agent-Based Systems

    1.2.1 Agents and Agent Properties

    An agent can be regarded as an autonomous, problem-solving, and goal-driven computational entity with social abilities that is capable of effective, maybe even proactive, behavior in an open and dynamic environment in the sense that it is observing and acting upon it in order to achieve its goals (cf., e.g., Wooldridge and Jennings, 1995; Wooldridge, 2002). There are a number of definitions of intelligent agents that need to be extended in the light of long successful research in this area (cf., e.g., Weiss, 1999; Object Management Group, 2004). The set of features that is to be supported when the term (advanced) agent is used encompasses the properties listed in Table 1.1.

    Table 1.1

    Properties of (Advanced) Agents

    Autonomy: An intelligent agent has control over its behavior (i.e., it operates without the direct intervention of human beings or other entities from the outside world). It has sole control over its internal state and its goals and is the only instance that can change either

    Responsiveness/situatedness: An agent is equipped with sensors and actuators, which form its direct interface to its environment. It perceives its environment by receiving sensory inputs from it. It responds in a timely manner to relevant changes in it through its actuators. The reaction reflects its design goals in the sense that it always tries to steer toward these goals

    Proactiveness: A more sophisticated agent acts not only responsively but may even be opportunistic and act on initiative (i.e., it may proactively anticipate possible changes in its environment and react to them)

    Goal-orientation: An intelligent agent is goal-directed. This implies that it takes initiative whenever there is an opportunity to work toward its goals

    Smart behavior: An agent has comprehensive expertise and knowledge in a specific, well-defined area and is thus capable of dealing with and solving problems in this domain. Most commonly, it is equipped with an internal representation of the part of the world it has to act in

    Social ability: An agent interacts directly with humans and/or other agents in pursuit of its individual, organizational, and/or combined goals. In particular, more intelligent agents may have to deal with all kinds of (unpredictable) situations in which they may need help from other agents. Thus, they may collect and maintain knowledge about other agents (their contact information, (subjective) capabilities, reliability, trustworthiness, etc.) and about their acquaintances

    Learning capabilities: In order for agents to be adaptive and autonomous, they need to be able to learn without intervention from the outside. According to Maes (1994), learning is meant to be incremental, has to take noise into account, is unsupervised, and can make use of the background knowledge provided by the user and/or the developer of the system
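    Several of these properties (autonomy over internal state, situatedness via percepts, goal-orientation) can be made concrete with a minimal perceive-decide-act skeleton. This is an illustrative sketch only, not code from the literature; all class, goal, and action names are hypothetical:

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Minimal agent skeleton: sole control over its own state and goals (autonomy)."""
    def __init__(self, goals):
        self._goals = list(goals)   # only the agent itself changes these
        self._beliefs = {}          # internal state

    @abstractmethod
    def perceive(self, percept):    # situatedness: sensory input from the environment
        ...

    @abstractmethod
    def act(self):                  # responsiveness: choose an action toward the goals
        ...

class ThermostatAgent(Agent):
    """Toy example: keeps temperature near a target value (goal-orientation)."""
    def perceive(self, percept):
        self._beliefs["temp"] = percept

    def act(self):
        target = self._goals[0]
        temp = self._beliefs.get("temp")
        if temp is None:
            return "wait"
        if temp < target - 1:
            return "heat"
        if temp > target + 1:
            return "cool"
        return "idle"

agent = ThermostatAgent(goals=[21])
agent.perceive(18)
print(agent.act())  # -> heat
```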

    1.2.2 Types of Agents

    Agent research defines deliberative and reactive agents as the extreme points within the spectrum for the smartness of agents.

    Depending on the point of view, a deliberative (also called cognitive or intentional) agent is either a synonym for a proactive agent or a specialization of it. Its behavior and architecture are reasonably sophisticated (i.e., the internal processes and computations are comparatively complex and, thus, time- and resource-consuming). However, in contrast to human beings, an agent understands at most only a small, abstracted portion of the real world, although the intention has always been to equip it with comprehensive real-world knowledge. This goal was in the minds of researchers from the beginning, but has so far turned out to be too ambitious. Wooldridge (1995) defines a deliberative agent as one that possesses an explicitly represented, symbolic model of the world, and in which decisions (e.g., about what actions to perform) are made via symbolic reasoning. The most popular architecture for the implementation of such agents is the belief-desire-intention (BDI) architecture (cf. Bratman, 1987). The beliefs reflect the agent's abstract understanding of the comparatively small part of the real world it is an expert in. This understanding is subjective, and thus may vary from agent to agent. The desires represent the goals of the agent (i.e., they describe what the agent wants to achieve). A distinction can be drawn between short-term and long-term goals. The long-term goals are those that actually drive the behavior of an agent, and thus are comparatively stable and abstract. They form the underlying decision base for all (re)actions of the agent. Short-term goals reflect only what the agent wants to achieve in a specific situation, and so usually have a temporary character.
As Logan and Scheutz (2001) state, deliberativeness is often realized by applying the concept of symbolic representation with compositional semantics (e.g., a data tree) in all major functions, since an agent's deliberation is not limited to representing facts but extends to construing hypotheses about possible future states and, in doing so, potentially offering information about the past. These hypothetical states involve goals, plans, partial solutions, hypothetical states of the agent's beliefs, etc. On top of its symbolic representation, a deliberative agent has methods to interpret and predict the outside world in order to compare its state to the agent's desired state (goal). On the basis of these interpretations and assumptions, it develops the best possible plan (from its point of view) and executes it. Intelligent planning is a complex process, especially if the resulting plan is comparatively sophisticated and spans a large, exponentially growing solution space. During this planning time, the environment may change in a way that makes the execution of the current plan (partially) obsolete or suboptimal; an immediate re-planning may then be necessary. Vlahavas and Vrakas (2004) believe that deliberative agents are especially useful when a reasonable (although not real-time) reaction to a sophisticated situation is required, because of their ability to produce high-quality, domain-independent solutions.
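    The BDI concepts above (beliefs, desires, intentions, and re-deliberation when a plan runs out) can be sketched in a few lines. This is a toy illustration with hypothetical names, not a real BDI engine; the "planner" is just a lookup table standing in for symbolic reasoning:

```python
class BDIAgent:
    """Toy BDI-style deliberation loop (illustrative sketch only)."""
    def __init__(self, beliefs, desires):
        self.beliefs = dict(beliefs)      # subjective world model
        self.desires = list(desires)      # long-term goals, ordered by priority
        self.intentions = []              # the currently adopted plan (action list)

    def deliberate(self):
        """Adopt a plan for the highest-priority desire achievable under current beliefs."""
        for goal in self.desires:
            plan = self.plan_for(goal)
            if plan:
                self.intentions = list(plan)
                return goal
        self.intentions = []
        return None

    def plan_for(self, goal):
        # Trivial "planner": a lookup table standing in for symbolic reasoning.
        library = {
            "deliver_part": ["pick_up", "move", "drop"]
                            if self.beliefs.get("part_available") else None,
            "recharge": ["dock", "charge"],
        }
        return library.get(goal)

    def step(self):
        """Execute one intention step; re-deliberate if the plan ran out."""
        if not self.intentions:
            self.deliberate()
        return self.intentions.pop(0) if self.intentions else None

agent = BDIAgent(beliefs={"part_available": True},
                 desires=["deliver_part", "recharge"])
print(agent.step())  # -> pick_up
```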

    While deliberative agents are comparatively flexible in acting upon their environment, they may, on the other hand, become considerably complex and slow in their reactions. The architecture and behavior of a reactive agent are simpler because the agent neither has to deal with a representation of a symbolic world model nor utilizes complex symbolic reasoning. Instead, reactive behavior implies that the agent responds comparatively quickly to relevant stimuli from its environment. Based on this input, it produces output through simple situation-action associations, usually implemented via pattern matching. Reactive agents need few resources, and so can react much faster. On the negative side, they are not as flexible and dynamic as deliberative agents, and usually are not able to behave proactively. Nevertheless, Knight (1993) and other researchers believe the results of reactive agents are normally not (much) worse than the results of deliberative agents. In many cases, it may even be possible to replace one deliberative agent with several reactive ones without a loss of quality. This, however, seems to be more a reflection of the current state of the art in deliberative agent concepts and their inherent complexity. In the future, it can be expected that deliberative agents will be much more powerful than reactive ones.
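    The situation-action associations that characterize reactive agents can be sketched as an ordered rule list in which the first matching condition fires. This is an illustrative sketch; all rule and action names are hypothetical:

```python
# Reactive agent as simple situation-action rules, checked in priority order.
RULES = [
    (lambda p: p.get("obstacle"),      "turn_left"),
    (lambda p: p.get("part_detected"), "grasp"),
    (lambda p: True,                   "move_forward"),  # default action
]

def reactive_agent(percept):
    """Return the action of the first rule whose condition matches the percept."""
    for condition, action in RULES:
        if condition(percept):
            return action

print(reactive_agent({"obstacle": True}))  # -> turn_left
print(reactive_agent({}))                  # -> move_forward
```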

    As a rule of thumb, it can be said that purely reactive agent systems can reveal little smartness, can hardly exhibit goal-directed behavior, and usually come with very limited learning capabilities. On the positive side, their implementation is relatively easy to achieve, their reaction to relevant real-world incidents can be extremely fast, and their explanation capabilities for their behavior usually work very well. Deliberative agents have their strengths where reactive ones have their weaknesses. Because they are based on general-purpose reasoning mechanisms, their behavior is neither fully explainable nor deterministic. The analysis of real-world incidents and their influence on the agent’s goals may need a lot of computing power, which results in slow reaction times. On the other hand, their behavior can be regarded as being comparatively smart and flexible. Moreover, in principle, deliberative agents can learn very well.

    In reality, MASs often use agents that do not belong to either of the preceding extremes but realize an architecture somewhere in between. Such agents are called hybrid agents. The main idea is to structure the reasoning capabilities of a hybrid agent into two or more parts that interact with each other to achieve a coherent behavior of the agent as a whole. One part may produce a fast reaction of the agent, which is then fine-tuned by its deliberative capabilities. Whenever the real-time requirements of the environment demand it, intermediate planning results of the agent's reasoning can be executed.
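    The layered interplay described above can be sketched as follows: a fast reactive layer is consulted first and, only if it stays silent, the next step of the deliberative layer's plan is executed. This is an illustrative sketch with hypothetical names; the deliberative plan is assumed to be given rather than computed:

```python
class HybridAgent:
    """Toy hybrid architecture: reactive layer overrides the deliberative plan."""
    def __init__(self, plan):
        self.plan = list(plan)  # assumed output of a deliberative layer

    def reactive_layer(self, percept):
        if percept.get("collision_imminent"):
            return "emergency_stop"      # hard real-time reaction
        return None                      # stay silent, defer to deliberation

    def deliberative_layer(self):
        return self.plan.pop(0) if self.plan else "idle"

    def step(self, percept):
        # 'or' short-circuits: the reactive action wins when present.
        return self.reactive_layer(percept) or self.deliberative_layer()

agent = HybridAgent(plan=["move", "pick", "place"])
print(agent.step({}))                            # -> move
print(agent.step({"collision_imminent": True}))  # -> emergency_stop
```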

    1.2.3 Multi-Agent Systems and Their Properties

    Due to the limited capabilities of a single agent, more complex real-world problems require the common and cooperative effort of a number of agents in order to solve the problem at hand. A multi-agent system (MAS) is a federation of fully or semi-autonomous problem solvers that join forces to work toward a symbiosis of their individual goals as well as the overall goals of the federation or the involved set of agents. In order to succeed, they rely on communication, collaboration, negotiation, and responsibility delegation, all of which are based on the individual rationality and social intelligence of the involved agents (cf. Marík et al., 2002). The global/macro behavior of a MAS is defined by the emergent interactions among its agents, which implies that the capabilities of a MAS surpass those of each individual agent. The reduction of complexity is achieved by recursively decomposing a complex task into well-defined subtasks until each subtask can be dealt with by a single agent. However, unlike hardwired federations, a MAS may be highly dynamic and flexible. Depending on the organizational rules, agents may join or leave the coalition whenever they wish, provided their commitments are fulfilled. Such MASs are usually referred to as open MASs. In general, if agents can be heterogeneous in their structure and their communication skills and languages, and if they nevertheless live in an environment in which they can arbitrarily join and leave arbitrary institutions, such institutions are called open institutions or open MASs.
In order for such an environment, with possibly many different types of institutions (with different rules and architectures) and heterogeneous agents, to function, many issues need to be resolved, such as the heterogeneity of agents, communication languages and behavior, trust and accountability, the finding and joining of institutions, and exception handling in case of failures that may jeopardize the global operation of the system. In such an environment, standards are fundamental but cannot be assumed. In reality, most existing MASs are built with homogeneous agents and may only support a restricted admission policy. Or, as Dignum et al. (2008) state: "Currently, in practice, agents are designed so as to be able to operate exclusively with a single given institution, thus, basically defying the open nature of the institution." Instead, homogeneous MASs with often static structures, called closed MASs, still dominate most real agent-based applications.
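    The recursive decomposition of a complex task into subtasks, each eventually matching a single agent's field of expertise, can be sketched as a simple tree walk. All task names, skills, and agent names below are hypothetical:

```python
# Which leaf subtask each (hypothetical) agent can handle.
AGENT_SKILLS = {"drill": "drilling_agent", "paint": "painting_agent",
                "assemble": "assembly_agent"}

# A hypothetical product order decomposed into subtasks.
TASK_TREE = {
    "build_product": ["make_body", "assemble"],
    "make_body": ["drill", "paint"],
}

def assign(task):
    """Recursively decompose until each leaf subtask maps to a single agent."""
    if task in AGENT_SKILLS:
        return {task: AGENT_SKILLS[task]}
    assignment = {}
    for subtask in TASK_TREE[task]:
        assignment.update(assign(subtask))
    return assignment

print(assign("build_product"))
# -> {'drill': 'drilling_agent', 'paint': 'painting_agent', 'assemble': 'assembly_agent'}
```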

    Table 1.2 presents essential properties of (advanced) MASs.

    Table 1.2

    Properties of (Advanced) Multi-Agent Systems

    Decentralized control: Due to its agents' autonomy, a MAS always comes with a decentralized structure and control. This difference cannot be emphasized enough, because conventional application programs exhibit a centralized architecture

    Flexibility: In this chapter, flexibility refers to direct and efficient reactions to unforeseen sudden interferences in the execution phase of a plan (e.g., due to the unavailability of network connections, nodes, or involved agents). Often such problems are only of a temporary nature, and thus do not imply any permanent changes in the underlying execution plan. In general, flexibility means that a task can easily adapt itself during execution to changing real-world situations and requirements. Because a set of agents that has agreed on solving a complex task represents a set of loosely coupled problem solvers, specific agents can easily be replaced by other ones if necessary, and maybe even on the fly, if an agent is no longer available or temporarily unavailable

    Adaptability/reconfigurability: In contrast to flexibility, adaptability/reconfigurability refers to the evolutionary nature of execution plans. Better-fitting or more efficient services may appear, or the requirements for a complex task may change. Such changes do not occur during the actual execution of a complex task, but become relevant prior to it. Usually, they will lead to permanent changes in the underlying execution plan. In an open MAS, new and more appropriate agents may enter at any time and thereby improve its quality and functionality

    Scalability: MASs are inherently distributed computing systems that may run on an arbitrary number of computers connected through a network (e.g., the Internet). The addition of new agents or computers is, thus, a property that implicitly exists in such an environment, at least, if it is an open one

    Leanness: Agents in general, but especially agents in a MAS, are meant to be as lean as possible. In order to restrict complexity and to be able to understand the behavior of a MAS, it is essential that agents cover exactly a clearly defined, limited field of expertise. If more functionality is to be added, it is always worth checking whether this can better be realized by subdividing the functionality over two or more (cooperating) agents

    Robustness/fault tolerance: The idea behind organic (cf., e.g., Organic Computing, 2014; Müller-Schloer, 2004; Schmeck, 2005), respectively autonomic (cf., e.g., Kephart and Chess, 2003; Tianfield and Unland, 2004) computing is to equip computing systems with the ability to manage themselves autonomously even when severe problems or failures occur. This feature is closely related to flexibility. It comes with so-called self-properties, such as self-healing, self-configuration, self-organization, self-optimization, self-protection, etc. MASs may behave like autonomic systems. Due to their loose coupling, smartness, and ability to autonomously orchestrate the execution of a task, they provide a high level of robustness and fault-tolerance (i.e., they can adapt themselves to even unpredictable hardware and network situations and may recover autonomously from many kinds of software and hardware failures)
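    The self-healing idea behind the robustness property can be illustrated with a toy supervisor that restarts agents failing a heartbeat check. This is a sketch with hypothetical names, not a description of any particular platform's recovery mechanism:

```python
class Supervisor:
    """Restarts agents that fail a liveness check (toy self-healing sketch)."""
    def __init__(self, agent_factory, names):
        self.factory = agent_factory
        self.agents = {n: agent_factory(n) for n in names}

    def heartbeat_check(self):
        """Replace any agent whose is_alive() check fails; return restarted names."""
        restarted = []
        for name, agent in list(self.agents.items()):
            if not agent.is_alive():
                self.agents[name] = self.factory(name)
                restarted.append(name)
        return restarted

class WorkerAgent:
    def __init__(self, name):
        self.name, self.alive = name, True
    def is_alive(self):
        return self.alive

sup = Supervisor(WorkerAgent, ["a1", "a2"])
sup.agents["a1"].alive = False    # simulate a crashed agent
print(sup.heartbeat_check())      # -> ['a1']
```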

    1.2.4 Agent Communication

    Agents need a means for communication in order to be able to cooperate. Communication can be direct or indirect.

    Direct communication usually translates to an exchange of messages. Like letters, messages consist of an envelope and the actual contents. Still the most important, though slightly dated, communication languages are KQML (Knowledge Query and Manipulation Language, standardized by DARPA) and ACL (Agent Communication Language), a standard of the Foundation for Intelligent Physical Agents (FIPA, 2014) (cf. Dale, 2005). Both are based on so-called speech acts, which were introduced by Searle (1975) and enhanced by Winograd and Flores (1986). Speech acts are defined by a set of performatives and their meanings, such as agree, propose, refuse, request, and query-if. A performative can be seen as an envelope that contains an enhanced set of syntactical information. Only in rare cases will the envelope cover the complete communication act (e.g., when a refuse answer occurs). In most cases, the actual content of the communication (its semantic part) is contained within the envelope. While the performatives of a communication language are standardized, the actual content is not. The reason for this is that the envelope is syntax, while its content is a message that needs to be understood by the receiver. Computer systems can still only understand semantics in rare, well-defined, and closely limited situations. Thus, the underlying content language is usually application-specific and heavily limited in its expressiveness.
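    The envelope/content split described above can be made concrete with a small data structure whose fields follow the general shape of a FIPA-ACL message (performative, sender, receiver, content, content language, ontology). This is an illustrative sketch, not JADE or any real ACL implementation; the agent addresses and content string are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ACLMessage:
    """Sketch of an ACL-style message: performative is the standardized envelope."""
    performative: str   # e.g. request, agree, refuse, query-if
    sender: str
    receiver: str
    content: str        # application-specific, expressed in the content language
    language: str = "fipa-sl"
    ontology: str = ""

msg = ACLMessage(performative="request",
                 sender="scheduler@platform",
                 receiver="machine1@platform",
                 content="(action (drill hole-42))",
                 ontology="factory-ontology")
print(msg.performative)  # -> request
```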

    Swarm intelligence approaches (see later) and, especially, the pheromone trails of ant colonies are well-known examples of simple but effective kinds of indirect communication. Another popular form of indirect communication between agents is the well-known concept of a blackboard, on which agents can post messages that can then be read by other agents. Especially relevant in the e-business area is the contract net protocol. It is the digital version of the procedure that leads to a contract in normal life when, for example, a company asks suppliers to submit offers for a (public) announcement it has made (task announcement, bidding, awarding, solution providing, and rewarding). Finally, agents may also be involved in electronic marketplaces and auctions, which is also a form of indirect communication through bidding.
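    A single round of the contract net protocol (task announcement, bidding, awarding) can be sketched as follows, with the manager simply awarding the task to the cheapest bidder. All contractor names and bid values are hypothetical:

```python
def contract_net(task, contractors):
    """One contract-net round: announce, collect bids, award to the cheapest."""
    bids = {name: bid_fn(task)
            for name, bid_fn in contractors.items()
            if bid_fn(task) is not None}          # bidding phase
    if not bids:
        return None                               # no capable contractor
    winner = min(bids, key=bids.get)              # awarding phase
    return winner, bids[winner]

contractors = {
    "machine_a": lambda task: 5.0 if task == "drill" else None,
    "machine_b": lambda task: 3.5 if task == "drill" else None,
    "machine_c": lambda task: None,               # cannot perform the task
}
print(contract_net("drill", contractors))  # -> ('machine_b', 3.5)
```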

    1.2.6 Development Support for Agent-Based Systems

    When it comes to the realization of agent-based systems, at least three principal support options are available: development methodologies, agent-oriented programming methodologies and languages, and development toolkits or frameworks.

    1.2.6.1 Development Toolkits and Frameworks for MASs

    Similar to the OSI reference architecture for networks, a MAS relies on hierarchically organized layers in order to function. The first few layers can be seen as syntactical layers because they do not contribute anything to the actual intelligence of the system; that is added on higher levels, which are not dealt with in agent development toolkits or frameworks.

    The lowest layers are the network and communication layers. They allow the agents to abstract from their physical location and enable them to physically exchange messages. The next level realizes the actual agent infrastructure. Here, usually, a number of different agent types are provided, such as normal agents or broker agents; the latter usually offer white or yellow pages services. Additionally, agent life-cycle services are located here. They provide higher-level development facilities that allow the programmer to easily realize interaction protocols, service registration and look-up services, agent specification (state and behaviors), error handling, and so on. More advanced toolkits and frameworks offer first steps toward the integration of more semantics, especially by supporting the integration of ontology services. For these still mainly syntactical layers, a significant number of commercial and open-source platforms have been developed. Altogether, at least 90 proposals were published in the literature up to 2014. Akbari (2010), Vrba (2003), Nikolai and Madey (2009), Allan (2010), AgentLink (2014), and Wikipedia (2014) give good overviews and some comparisons, while Calisti et al. (2005) provide a comprehensive introduction to some relevant toolkits and platforms. Depending on the philosophy and the envisioned target area of the platforms, these tools provide different kinds of services and support different agent models. Table 1.3 lists only a small fraction of those published (commercial ones are in italics).

    Table 1.3

    Development Toolkits and Frameworks for Multi-Agent Systems

    Due to the similarities in the underlying concepts, object-oriented programming languages are excellent candidates not only for implementing MASs, but especially for implementing agent development toolkits and frameworks. Extended by agent-based concepts, they provide a high-level agent-oriented programming environment that can easily be extended by adding components implemented at the level of the object-oriented programming language. As one example of such a framework, we briefly introduce here perhaps the most popular open-source framework in this field, JADE (Java Agent DEvelopment framework) (cf. Akbari, 2010; Bellifemine et al., 2007). It is Java-based and is one of the few tools that conform to the FIPA standard (Foundation for Intelligent Physical Agents, 2014). It provides the mandatory components defined by FIPA to manage the agents' infrastructure: the Agent Communication Channel (ACC), the Agent Management System (AMS), and the Directory Facilitator (DF). The AMS agent provides white pages and agent life-cycle management services, maintaining a directory of agent identifiers and states. The DF provides yellow pages services and the capability of federation with DFs on other existing platforms. Communication among agents is done via message passing. Messages are encoded using FIPA-ACL, and their content is formatted according to the FIPA-SL (semantic language) language. Ontologies can be used to support a common understanding of the actual semantics and purpose of the message expressed in the message content. Ontologies can be designed using a knowledge representation tool, such as Protégé (2014), and can then be translated into Java classes according to the JADE guidelines, which follow the FIPA Ontology Service Recommendations (cf. Foundation for Intelligent Physical Agents, 2014).
JADE also provides a set of graphical tools that permits supervising the status of agents and supporting the debugging phase—a quite complex task in distributed systems. For example, the Sniffer agent is a debugging tool that allows tracking messages exchanged in a JADE environment using a notation similar to UML sequence diagrams. Jadex (cf. Braubach et al., 2005; Jadex, 2014), as an extension of JADE, is one of the few examples of a platform that also provides support for reasoning capabilities of agents, because it comes with a reasoning engine implementing the BDI architecture.

    There has always been an intense discussion about the differences between object-oriented and agent-based programming. The agent community has always seen agent-based programming as the next-generation programming paradigm that may finally replace object-oriented programming. On an abstract level, there are indeed a number of similarities, but also a number of distinct differences. Both paradigms rely on a world consisting of a large number of entities that have to collaborate in order to get a particular problem solved. In the object-oriented world, these entities are objects that belong to classes. The underlying class defines the functionality of its objects. However, objects are neither autonomous nor active. Thus, in order to get something done, the programmer first has to identify and implement the class hierarchy and then has to define the main program, which essentially lays down how these objects have to interact with each other in order to get the task at hand solved. In contrast to this, agents are autonomous, responsive, and possibly capable of learning. This especially means that a main program for managing and supervising the execution of tasks is neither necessary nor possible. The choreography and/or orchestration of the task execution is left to the MAS. In this sense, agent-oriented programming can indeed be seen as the next higher level of programming. While object-oriented software engineering by now offers a wide variety of sophisticated development tools, agent-based software engineering unfortunately still lacks mature tools and methodologies. Especially in the last decade, work on sound development tools has nearly come to a standstill. This is a problem because the philosophy behind agent-oriented programming requires a comprehensive, predictable, and sound programming methodology with appropriate and efficient development tools.

    1.2.6.2 Agent-Oriented Programming Languages

    An agent programming language, sometimes also called agent-oriented programming language (AOP), permits developing and programming intentional agents—in other words, the developed agents usually operate on a semantically higher level than those developed with the help of development toolkits. An AOP usually provides the basic building blocks to design and implement intentional agents by means of a set of programming constructs. These programming constructs facilitate the manipulation of the agents’ beliefs and goals and the structuring of their decision making. The language usually provides an intuitive programming framework based on symbolic or practical reasoning. Shoham (1993) suggests that an AOP system needs the following three elements in order to be complete:

    • A formal language with clear syntax for describing the mental state. This includes constructs for declaring beliefs and their structure (e.g., based on predicate calculus) and passing messages.

    • A programming language that permits defining agents. The semantics of this language should be closely related to those of the formal language.

    • A method for converting neutral applications into agents in order to allow an agent to communicate with a non-agent by attributing intentions.
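    These three ingredients can be caricatured in a few lines: a mental state (beliefs) plus simple commitment rules that map incoming messages to commitments, loosely in the spirit of Shoham's AGENT0. This is an illustrative toy, not a real agent-oriented programming language; all message names are hypothetical:

```python
class AOPAgent:
    """Toy mental-state agent: beliefs plus commitment rules (AGENT0-flavored sketch)."""
    def __init__(self):
        self.beliefs = set()
        self.commitments = []

    def receive(self, performative, content):
        if performative == "inform":
            self.beliefs.add(content)          # update the mental state
        elif performative == "request" and content not in self.beliefs:
            self.commitments.append(content)   # commit unless already believed done

agent = AOPAgent()
agent.receive("inform", "machine1_busy")
agent.receive("request", "drill_hole")
print(agent.commitments)  # -> ['drill_hole']
```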

    The most important AOPs are logic-based. Their research heyday has passed, which is why many of them are no longer maintained. Table 1.4 lists some relevant AOPs.

    Table 1.4

    Agent-Oriented Programming Languages

    1.2.6.3 Agent-Based Software Development Methodologies

    The development of industrial-strength applications requires sound software engineering methodologies, which typically consist of a set of methods, models, and techniques that facilitate a systematic development process covering the complete software life cycle in a coordinated and integrated way. Within FIPA, the agent unified modeling language (AUML) initiative extended UML with modeling capabilities for large-scale agent-based applications (cf. Bauer and Odell, 2005). The FIPA standardization efforts, as well as extensive experience in object-oriented software engineering, massively influenced the ideas behind agent-based software development and programming methodologies. Typical representatives are listed in Table 1.5.

    Table 1.5

    Agent-Based Development Methodologies

    1.2.7 MAS-Based Simulation Environments

    A simulation studies the resource consumption, behavior, and output of a physical or conceptual system over time. Agent-based systems have always been an excellent candidate for the design and implementation of simulations in many application areas (cf. Uhrmacher and Weyns, 2009). The most obvious advantage is that they provide an intuitive and direct way to model a simulation study, because real-world entities can be mapped one-to-one onto agents in the simulation environment. This means that not only real-world types can be modeled, but especially non-typed real-world instances with individual behaviors. The big advantage here is that the coarse-grained modeling level of many other simulation techniques, which only allow types to be defined and treat all instances of a type equally, is replaced by a much more flexible approach in which entities may still inherit properties from a type but can act individually. Industrial systems usually form a hierarchy or heterarchy of components, which makes them ideal candidates for agent-based simulation: each component is represented by its own agent, and all the communication among, and intelligence of, the individual agents comes for free. Such a simulation has the advantage that, if the agents were modeled and implemented properly, the agents used in the simulation environment can be transferred directly into a real-world application. Two approaches are possible. On the one hand, an intensive simulation study can be executed before the deployment of the industrial application. On the other hand, the simulation tool can be integrated into the application system and used whenever a reliable prediction of the (future) behavior of the overall application system, or parts of it, is requested. An example of such an online simulation approach is presented in Cardin and Castagna (2009).
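    The one-to-one mapping of real-world entities onto agents, including individual behavior per instance, can be illustrated with a minimal simulation sketch. All names (machines, speeds, jobs) are made up for the example; this is not taken from any particular tool.

```python
# Toy agent-based simulation: each real-world machine becomes one agent.
# Instances of the same "type" may still behave individually (here: speed).

class MachineAgent:
    def __init__(self, name, speed):
        self.name = name
        self.speed = speed        # individual property of this instance
        self.done = 0

    def step(self, queue):
        # Individual behavior: faster machines pull more jobs per tick.
        for _ in range(self.speed):
            if queue:
                queue.pop()
                self.done += 1


def simulate(agents, jobs, ticks):
    queue = list(range(jobs))
    for _ in range(ticks):
        for agent in agents:      # each entity acts for itself, no central plan
            agent.step(queue)
    return {a.name: a.done for a in agents}


result = simulate([MachineAgent("M1", 1), MachineAgent("M2", 3)],
                  jobs=20, ticks=5)
print(result)   # M2, though structurally the same "type", processes 3x the jobs
```

    In a type-based simulation formalism both machines would have to behave identically; here, each agent carries its own parameters and decision logic, which is exactly the flexibility the paragraph above describes.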

    On the basis of such simulations, the behavior of a system can be tested extensively, which may lead to improved control behavior as well as to the establishment of substantial trust in its functioning, reliability, and flexibility. Additionally, envisioned extensions and adaptations of the system can be tested beforehand. By now, a number of agent-based simulation tools are on the market. Table 1.6 lists some of them (freeware is in italics; for the homepage, see the Reference column).

    Table 1.6

    Agent-Based Simulation Tools

    The Multi-agent-based Simulation Workshops and Book Series (2014) are the most relevant events and publications on this topic and provide excellent insights. Good overview papers about agent-based simulation tools are Zhou et al. (2007), Michel et al. (2009), Theodoropoulos et al. (2009), Troitzsch (2009), and Allan (2010).

    1.3 Supporting Technologies and Concepts

    1.3.1 Ontologies

    On the one hand, intelligent agents are goal-directed problem solvers that autonomously, and often proactively, solve tasks and problems for their clients. On the other hand, in order to be able to act like that, they need to interoperate with other agents, and possibly human beings. As discussed in Section 1.2.2, to do so intelligent agents need to rely on an abstract model of their environment that allows them to reason about relevant changes in their environment and to define their reaction to them. Ontologies, or more precisely ontology languages, are an appropriate means to develop the foundation for such a model. They became very popular with the Semantic Web and with service-oriented architecture (SOA). By now, they also play a profound role in agent-based systems (cf. Runde and Fay, 2011).

    One of the biggest challenges in problem solving by MASs is the autonomous (recursive) decomposition of an assigned complex task into appropriate subtasks. This implies, in particular, that the involved agents can communicate with each other on a semantically meaningful level and, as a group, have enough common knowledge and reasoning capabilities to understand what they are doing on the macro-level. However, because cooperating agents are usually specialists in different fields of expertise, their vocabulary and knowledge may overlap only partially (or not at all) and may not be consistent (due to homonyms or synonyms in their vocabulary or different interpretations of real-world conditions). In such cases, ontologies can help. An ontology is a formal, machine-processable taxonomy of an underlying domain (cf. Gruber, 1993; Sycara and Paolucci, 2004). As such, it contains all relevant entities or concepts within that domain, the underlying axioms and properties, the relationships between them, and the constraints on their logically consistent application (i.e., it defines a domain of knowledge or discourse by a (shared) vocabulary and thereby permits reasoning about its properties). Ontologies are typically specified in languages that allow abstractions and expressions of semantics (e.g., first-order logic languages). If agents are steered in their behavior by their underlying ontologies, these ontologies need to be merged, or at least synchronized, whenever two agents are supposed to cooperate, in order to provide a common foundation for a meaningful conversation and cooperation between them (cf. Stumme and Mädche, 2001). Unfortunately, in reality an overwhelming, steadily growing number of ontologies for the same or overlapping areas exists. Despite their partial or complete overlap, their underlying terminology may vary substantially (e.g., because they model the same domain on different levels of abstraction or have a different overall view of it).
    Under these circumstances, ontology merging can become quite difficult. Problems such as homonyms, synonyms, different levels of abstraction, and possible contradictions in class and instance descriptions, axioms, and policies of the underlying ontologies need to be resolved, which, in the general case, is not yet possible. Thus, a sufficient merger of ontologies may often not be achievable. In general, a merger or an interoperation can only be achieved if the underlying ontologies overlap sufficiently, which implies that agents can only cooperate if their expertise overlaps sufficiently.
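    A tiny toy example can make the synonym problem concrete. The vocabularies and the alignment table below are entirely invented; real ontology alignment works on full concept definitions, not bare term lists.

```python
# Toy ontology alignment: two agents' vocabularies overlap only partially,
# and synonyms must be mapped before the agents can cooperate.

vocab_a = {"workpiece", "drill", "conveyor"}   # agent A's terms
vocab_b = {"part", "drill", "agv"}             # agent B's terms

# Alignment table (here maintained by hand): B's "part" means A's "workpiece".
synonyms = {"part": "workpiece"}


def shared_terms(a, b, mapping):
    """Terms both agents can use after applying the synonym mapping."""
    normalized_b = {mapping.get(term, term) for term in b}
    return a & normalized_b


print(sorted(shared_terms(vocab_a, vocab_b, synonyms)))  # ['drill', 'workpiece']
```

    Even in this trivial setting, "conveyor" and "agv" remain unmatched: whether they denote the same concept cannot be decided from the terms alone, which is precisely why general ontology merging remains an open problem.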

    One of the most prominent and powerful examples of an ontology language is the Web Ontology Language (cf. OWL, 2014). OWL provides an RDF/RDFS extension based on description logics that permits the description of concepts and instances in the real world. More specifically, it is a subset of first-order logic and permits the description of three different types of objects in a domain: classes, which describe the characteristics of relevant entities (called concepts) in the domain; individuals, which are concepts/objects/instances in the domain; and properties, which define relationships between objects/concepts. Properties are integrity constraints and axioms on the class as well as the object instance level. They define, among others, transitivity, symmetry, or inverse functions, as well as cardinality or type restrictions. Moreover, due to the underlying logic, OWL provides automatic reasoning capabilities that permit the inference of information that is not explicitly represented in the underlying ontology. In this way, consistency checks of concept/object definitions, subsumption testing, the completion of concept definitions, the classification of new instances and concepts, and the extraction of implicit knowledge are realized. Although OWL is comparatively powerful, the underlying description logic also exposes it to some weaknesses: description logic only permits the expression of static snapshots of the real world, not the expression of state transitions. Thus, processes especially, but also workflows and the interaction among agents within a task execution, cannot be modeled appropriately.
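    To make the three OWL object kinds and the inference step tangible, here is a deliberately simplified, pure-Python sketch that does not assume any OWL toolchain. The domain terms (Machine, partOf, etc.) are invented; a real OWL reasoner works over description logic, not over triples closed by hand.

```python
# Pure-Python illustration of OWL's three object kinds -- classes,
# individuals, properties -- plus one reasoning step: the transitive
# closure of a property, as OWL reasoners infer it automatically.

triples = {
    # classes and an individual
    ("Machine", "type", "owl:Class"),
    ("Cell",    "type", "owl:Class"),
    ("m1",      "type", "Machine"),
    # a transitive property asserted on the instance level
    ("cellA", "partOf", "lineA"),
    ("lineA", "partOf", "plantA"),
}


def infer_transitive(triples, prop):
    """Close the triple set under transitivity for one property."""
    closed = set(triples)
    changed = True
    while changed:
        changed = False
        facts = [(s, o) for (s, p, o) in closed if p == prop]
        for s1, o1 in facts:
            for s2, o2 in facts:
                if o1 == s2 and (s1, prop, o2) not in closed:
                    closed.add((s1, prop, o2))
                    changed = True
    return closed


closed = infer_transitive(triples, "partOf")
# The implicit fact has been made explicit:
print(("cellA", "partOf", "plantA") in closed)   # True
```

    Note what is missing from this snapshot: there is no way to express that m1 *changes* state over time, which is exactly the description-logic limitation mentioned above.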

    OWL-POLAR was developed in order to support OWL-based knowledge representation and reasoning on policies (cf. Sensoy et al., 2010). Policies, or norms, are machine-understandable declarations of constraints and rules on the overall global behavior within a distributed system or MAS. In OWL-POLAR, a policy comes with activation and expiration conditions, possible obligations, the policy addressee, and possible actions. It is active as long as its activation conditions hold and its expiration conditions do not. OWL-POLAR provides a reasonable foundation for the merger of ontologies.
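    The structure of such a policy can be sketched as follows. This is a minimal stand-in, not OWL-POLAR's actual representation: the real system expresses conditions in OWL and reasons over them, whereas here plain predicates and the example scenario are invented.

```python
# Minimal policy sketch in the spirit of OWL-POLAR: activation/expiration
# conditions, an addressee, and an obligation (all names illustrative).

class Policy:
    def __init__(self, addressee, activation, expiration, obligation):
        self.addressee = addressee
        self.activation = activation    # predicate over the world state
        self.expiration = expiration    # predicate over the world state
        self.obligation = obligation    # what the addressee must do

    def active(self, state):
        # A policy is in force once activated and not yet expired.
        return self.activation(state) and not self.expiration(state)


speed_limit = Policy(
    addressee="transport_agent",
    activation=lambda s: s["humans_present"],
    expiration=lambda s: not s["humans_present"],
    obligation="reduce speed",
)

print(speed_limit.active({"humans_present": True}))    # True
print(speed_limit.active({"humans_present": False}))   # False
```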

    An extension of OWL toward the definition of complex macro-services and their interoperability through loose coupling is OWL-S (cf. Martin et al., 2005; Sycara, 2006; Martin et al., 2014). It provides the foundation for the construction of complex web service profiles, process models, and service groundings. A service profile consists of preconditions, a set of conditions that need to be fulfilled prior to a service invocation; input parameters, the set of necessary inputs that the requester is supposed to provide in order to invoke the service; output parameters, the definition of the results that the requester expects to be delivered after the service has been executed; and effects, the set of consequences that need to hold true if the service was invoked successfully. Additionally, a service description is created that provides the nonfunctional properties of the service, such as provenance, quality of service, security issues, policy issues, or domain-specific characteristics. A process model specifies the workflow that coordinates the execution of the basic processes involved in a complex task execution. Vaculin and Sycara (2007) propose the necessary extensions for monitoring and error handling during the execution of a complex task. Finally, the task of service grounding is to map the complex task at hand onto an adequate WSDL file.
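    The four functional parts of a service profile map naturally onto a plain data structure. The sketch below is only an analogy under that assumption: real OWL-S profiles are OWL documents, and the drilling service and all field contents are invented for the example.

```python
# Hedged sketch: the functional parts of an OWL-S service profile as a
# plain data structure (illustrative names and example content).

from dataclasses import dataclass, field


@dataclass
class ServiceProfile:
    preconditions: list   # must hold before invocation
    inputs: list          # what the requester supplies
    outputs: list         # what the requester gets back
    effects: list         # what holds after successful execution
    description: dict = field(default_factory=dict)  # nonfunctional properties


drill = ServiceProfile(
    preconditions=["workpiece clamped"],
    inputs=["hole diameter", "hole depth"],
    outputs=["completion report"],
    effects=["workpiece has hole"],
    description={"quality_of_service": "high", "provenance": "cell 3"},
)


def invocable(profile, facts):
    """A service may be invoked once all of its preconditions hold."""
    return all(p in facts for p in profile.preconditions)


print(invocable(drill, {"workpiece clamped"}))   # True
print(invocable(drill, set()))                   # False
```

    The separation matters for matchmaking: a requester agent can compare its goal against outputs and effects without ever inspecting how the service is implemented, which is the point of loose coupling.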

    1.3.2 Self-Organization and Emergence

    In order to function, the agents of a MAS need to have a common basis and have to follow common rules. Self-organizing and emergent systems define such rules, which partially overlap with the general characteristics of MASs. Thus, it is no surprise that a MAS may be organized according to the rules of self-organizing or emergent systems.

    1.3.2.1 Self-Organization

    A self-organizing system is a dynamic and adaptive system, functioning without external direction, control, manipulation, interference, or pressure (cf., e.g., Di Marzo Serugendo et al., 2004; Brueckner et al., 2005; De Wolf and Holvoet, 2005a). It constantly improves its spatial, temporal, and/or functional structure by organizing its components in a more suitable way in order to improve its behavior, performance, and/or accuracy (Di Marzo Serugendo et al., 2006). While such a system may get input from the outside, this input is meant to exclude control instructions (cf. Klir, 1991). De Wolf and Holvoet (2005b) identify a set of characteristics that a self-organizing system is supposed to reveal (see Table 1.7).

    Table 1.7

    Characteristics of Self-Organizing Systems

    Increase in order: Order implies that the system is goal-directed. While in the beginning the system may not be organized in an appropriate way with respect to this goal, it will constantly adapt its spatial, temporal, and/or functional structure in order to fulfill its goal in a better way

    Autonomy: The system runs and organizes itself without interference from the outside

    Adaptability and/or robustness: Robustness here refers to adaptability in the presence of perturbations and change

    Dynamicity: This characteristic is related to the order characteristic. If a self-organizing system is located in a constantly changing (dynamic) environment, it is capable of always adapting itself to these changes
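    The "increase in order" and "autonomy" characteristics can be demonstrated with a deliberately tiny sketch (the scenario is invented): agents on a ring repeatedly average their value with a neighbor, with no external controller, and the spread of values shrinks on its own.

```python
# Toy self-organization: decentralized averaging on a ring of agents.
# No central controller issues instructions; order (small spread of
# values) increases purely through local interactions.

values = [10.0, 2.0, 7.0, 1.0]   # each agent holds one value


def step(values):
    # Each agent adapts locally toward its right-hand neighbor (ring topology).
    n = len(values)
    return [(values[i] + values[(i + 1) % n]) / 2 for i in range(n)]


spread_before = max(values) - min(values)
for _ in range(20):
    values = step(values)
spread_after = max(values) - min(values)

print(spread_after < spread_before)   # True: the system became more ordered
```

    Each agent only ever sees one neighbor, yet the global structure converges; perturbing a single value would simply be averaged away, illustrating the adaptability/robustness characteristic as well.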

    1.3.2.2 Emergence

    Emergence can be seen as an evolving process that leads to the creation of novel coherent structures, patterns of behavior, and properties at the macro-level, or interface, of a system. These dynamically arise from the interactions between the parts at the micro-level, often, but not only, during the process of self-organization in complex systems (cf., e.g., Kauffman, 1996; Nitschke, 2004). The functioning of the system can only be understood by looking at each of the parts in the context of the system as a whole, not by simply taking the system apart and looking at the parts (i.e., emergence is more than just the summed behavior of the underlying parts). Table 1.8 lists the relevant characteristics as identified by De Wolf and Holvoet (2005a).

    Table 1.8

    Characteristics of Emergent Systems

    Micro-macro effect: The structures, patterns of behavior, and properties visible at the macro-level of the system arise from the coherent (inter)actions of the entities at the micro-level

    Radical novelty: The novel structure and patterns of the global behavior are neither directly described by, nor in any way ingrained in, the defining patterns, rules, and entities of the micro-level

    Coherence, or organizational closure: The micro-macro effect relies on the logical and consistent correlation between entities on the micro-level, and thus spans and correlates the many lower-level entities into a coherent higher-level unity

    Dynamicity: Due to the micro-macro effect, a new kind of behavior arises as the system evolves in time

    Decentralized control: While the actions of the parts are controllable, the whole is not directly controllable because decentralized control only relies on local mechanisms to influence the global behavior

    Two-way link: In emergent systems, there is a bidirectional link between the macro- and the micro-level. On the one hand, the emergent structure evolves from the micro-level to the macro-level. On the other hand, higher-level properties have causal effects on the lower level

    Robustness, adaptability, and flexibility: The architecture of an emergent system guarantees robustness, adaptability, and flexibility because it implies that an individual entity cannot be a single point of failure. If a failure occurs, graceful degradation of performance may be the consequence, but there will be no sudden loss of any function because each entity can be replaced without compromising the emergent structure of the system

    Self-organization and emergence have some similarities and some differences (cf. De Wolf and Holvoet, 2005a). Both are self-sustained systems that can neither be directly controlled nor manipulated in any way from the outside, and both evolve over time. However, only self-organizing systems need to exhibit goal-directed behavior. Emergent systems consist of a large number of low-level (micro-)entities that collaborate in order to exhibit a higher-level (macro-)behavior. The unavailability of one or more of these lower-level entities does not abrogate the functioning of the system (graceful degradation), while this may be the case in self-organizing systems.

    1.3.3 Swarm Intelligence and Stigmergy

    Swarm intelligence (cf., e.g., IEEE, 2014; Dorigo et al., 2004, 2006; Panigrahi et al., 2011), an innovative distributed intelligence approach to optimization problems as well as to specific kinds of general problem solving, relies on the ideas of emergence. It uses social swarming behaviors observed in nature as a blueprint for the design of complex emergent systems. Depending on the underlying concept in nature, such as bird flocks, bee swarms, or ant colonies, different categories of swarm intelligence systems can be identified. Ants and ant colonies, as the most popular technique, will be discussed here in some
