

An Abstract on

AUTONOMIC COMPUTING
A HOLISTIC APPROACH

PREPARED BY K. AJAY KRISHNA TEJA (10501A0556, II-II) AND J. GEETA KRISHNA (10501A0539, II-II)

Abstract:
The increasing scale, complexity, heterogeneity and dynamism of networks, systems and applications have made our computational and information infrastructure brittle, unmanageable and insecure. This has necessitated the investigation of an alternate paradigm for system and application design, based on the strategies used by biological systems to deal with similar challenges, a vision that has been referred to as autonomic computing. The overarching goal of autonomic computing is to realize computer and software systems and applications that can manage themselves in accordance with high-level guidance from humans. Meeting the grand challenges of autonomic computing requires scientific and technological advances in a wide variety of fields, as well as new software and system architectures that support the effective integration of the constituent technologies. This paper presents an introduction to autonomic computing, its history, its characteristics, the need for autonomic computing, the architectures and models that bring autonomic computing into practice, and finally its applications, advantages, disadvantages and future expectations.

Introduction:
The proliferation of different technologies, combined with the ever-increasing complexity of software and more advanced business models, has led to a more complex networked world in which heterogeneous devices operate over a converged infrastructure to support multiple applications, each with different requirements. Service providers, equipment manufacturers and other actors deploy the latest technologies in order to gain competitive advantage.

This has created a number of challenging problems. The ever-increasing difficulty of managing multi-vendor environments drives cost in human resources as well as software. Most systems require manually intensive administration and management, which in turn mandates highly skilled, costly and labour-intensive processes that impact time to market. Most importantly, the business, technical and even social aspects of systems have increased dramatically in complexity, requiring new technologies, paradigms and functionality to be introduced to cope with these challenges. These increases in complexity have made it almost impossible for a human to manage the different operational scenarios that are possible in current, let alone future, systems. For example, customers want personalized services that adapt to the current context and task being performed; this requires devices and systems to be cognizant of the needs of the user, any environmental, administrative or social restrictions on the use of certain functions, the capabilities of the devices in use, and other services that can be employed to best satisfy the user.

History:
We now describe early work in autonomic computing that we believe has been influential on many later projects in the autonomic research field. Similarly to the birth of the Internet, one of the notable early self-managing projects was initiated by DARPA for a military application in 1997. The project was called the Situational Awareness System (SAS), part of the broader Small Unit Operations (SUO) programme. Its aim was to create personal communication and location devices for soldiers on the battlefield. Soldiers could enter status reports, e.g. discovery of enemy tanks, on their personal device, and this information would autonomously spread to all other soldiers, who could then call up the latest status report when entering an enemy area. Collected and transmitted data includes voice messages and data from unattended ground sensors and unmanned aerial vehicles. These personal devices have to be able to communicate with each other in difficult environmental conditions, possibly with enemy jamming equipment in operation, and must at the same time minimise the chance of enemy interception [Kenyon 2001]. The latter point is addressed by using multi-hop ad-hoc routing: a device sends its data only to its nearest neighbours, which then forward the data to their own neighbours until finally all devices receive it. This is a form of decentralised peer-to-peer mobile adaptive routing, which has proven to be a challenging self-management problem, especially because in this project the goal is to keep latency below 200 milliseconds from the time a soldier begins speaking to the time the message is received. The former point is addressed by enabling the devices to transmit in a wide band of possible frequencies, 20-2,500 MHz, with bandwidths ranging from 10 bps to 4 Mbps. For instance, when the distance to the next soldier is many miles, communication is only possible at low frequencies, which results in low bandwidth; this may still be enough to provide a brief but possibly crucial status report. Furthermore, there may be up to 10,000 soldiers on the battlefield, each with their own personal device connected to the network.

Another DARPA project related to self-management is the DASADA project, started in 2000. The objective of the DASADA programme was to research and develop technology that would enable mission-critical systems to meet high assurance, dependability and adaptability requirements. Essentially, it deals with the complexity of large distributed software systems, a goal not dissimilar to IBM's autonomic computing initiative. Indeed, this project pioneered the architecture-driven approach to self-management, and more broadly the notion of probes and gauges for monitoring the system and an adaptation engine for optimising the system based on monitoring data [Garlan et al. 2001; Cobleigh et al. 2002; Kaiser et al. 2002; Gross et al. 2001; Wolf et al. 2000].

In 2001, IBM introduced the concept of autonomic computing. In their manifesto [Horn 2001], complex computing systems are compared to the human body, which is a complex system but has an autonomic nervous system that takes care of most bodily functions, thus removing from our consciousness the task of coordinating them. IBM suggested that complex computing systems should also have autonomic properties, i.e. should be able to independently take care of regular maintenance and optimization tasks, thus reducing the workload on system administrators. IBM also distilled the four properties of a self-managing (i.e. autonomic) system: self-configuration, self-optimization, self-healing and self-protection.

The DARPA Self-Regenerative Systems programme, started in 2004, aims to develop technology for building military computing systems that provide critical functionality at all times, in spite of damage caused by unintentional errors or attacks [Badger 2004]. There are four key aspects to this project. First, software is made resistant to errors and attacks by generating a large number of versions that have similar functional behaviour but sufficiently different implementations, such that any attack will not be able to affect a substantial fraction of the versions of the program. Second, modifications to the binary code can be performed, such as pushing randomly sized blocks onto the memory stack, that make it harder for attackers to exploit vulnerabilities, such as specific branching address locations, because the vulnerability location changes. Furthermore, a trust model is used to steer the computation away from resources likely to cause damage; whenever a resource is used, the trust model is updated based on the outcome. Third, a scalable wide-area intrusion-tolerant replication architecture is being developed, which should provide accountability for authorised but malicious client updates. Fourth, technologies are being developed that are intended to allow a system to estimate the likelihood that a military system operator (an insider) is malicious and prevent them from initiating an attack on the system.

Finally, we would like to mention an interesting project that started at NASA in 2005, the Autonomous Nanotechnology Swarm (ANTS). As an exemplary mission, they plan to launch into an asteroid belt a swarm of 1,000 small spacecraft (so-called pico-class spacecraft) from a stationary factory ship in order to explore the asteroid belt in detail. Because as much as 60-70% of the swarm is expected to be lost as it enters the asteroid belt, the surviving craft must work together. This is done by forming small groups of worker craft with a coordinating ruler, which uses data gathered from the workers to determine which asteroids are of interest and to issue instructions. Furthermore, messenger craft will coordinate communications between members of the swarm and with ground control. In fact, NASA has already used autonomic behaviour in its DS1 (Deep Space 1) mission and the Mars Pathfinder [Muscettola et al. 1998]. Indeed, NASA has a strong interest in autonomic computing, in particular in making its deep-space probes more autonomous. This is mainly because there is a long round-trip delay between a probe in deep space and mission control on Earth. As mission control cannot rapidly send new commands to a probe, which may need to adapt quickly to extraordinary situations, it is extremely important to the success of an expensive space exploration mission that the probes be able to make certain critical decisions on their own.

Characteristics:
While the definition of autonomic computing will likely evolve as contributing technologies mature, the following list suggests eight defining characteristics of an autonomic system. An autonomic computing system needs to "know itself": its components must possess a system identity. Since a "system" can exist at many levels, an autonomic system will need detailed knowledge of its components, current status, ultimate capacity, and all connections to other systems in order to govern itself. It will need to know the extent of its "owned" resources, those it can borrow or lend, and those that can be shared or should be isolated. An autonomic computing system must configure and reconfigure itself under varying (and in the future, even unpredictable) conditions. System configuration or "setup" must occur automatically, as must dynamic adjustments to that configuration to best handle changing environments. An autonomic computing system never settles for the status quo; it always looks for ways to optimize its workings. It will monitor its constituent parts and fine-tune workflow to achieve predetermined system goals.

An autonomic computing system must also perform something akin to healing: it must be able to recover from routine and extraordinary events that might cause some of its parts to malfunction. It must be able to discover problems or potential problems, then find an alternate way of using resources or reconfigure the system to keep functioning smoothly. A virtual world is no less dangerous than the physical one, so an autonomic computing system must be an expert in self-protection: it must detect, identify and protect itself against various types of attacks to maintain overall system security and integrity. An autonomic computing system must know its environment and the context surrounding its activity, and act accordingly. It will find and generate rules for how best to interact with neighboring systems; it will tap available resources, even negotiate the use by other systems of its underutilized elements, changing both itself and its environment in the process, in a word, adapting. An autonomic computing system cannot exist in a hermetic environment: while independent in its ability to manage itself, it must function in a heterogeneous world and implement open standards; in other words, an autonomic computing system cannot, by definition, be a proprietary solution. Finally, an autonomic computing system will anticipate the optimized resources needed while keeping its complexity hidden. It must marshal IT resources to shrink the gap between the business or personal goals of the user and the IT implementation necessary to achieve those goals, without involving the user in that implementation. These could be simply abstracted as the following four self-* properties:

Self-configuring: the ability of the system to perform configurations according to predefined high-level policies and to seamlessly adapt to changes caused by autonomic configurations. Such systems adapt automatically to dynamically changing environments.
Self-healing: the system's ability to examine, find, diagnose and react to malfunctions. Self-healing components or applications must be able to observe system failures, evaluate the constraints imposed by the outside environment, and apply appropriate corrections; such systems discover, diagnose and react to disruptions.
Self-optimizing: the ability of the system to continuously monitor and control resources to improve performance and efficiency, maximizing resource allocation and utilization to satisfy user requests. The tuning actions could mean reallocating resources, such as in response to dynamically changing workloads, to improve overall utilization, or to ensure that particular business transactions can be completed in a timely fashion.
Self-protecting: the ability of the system to proactively anticipate, identify, detect and protect itself from malicious attacks from anywhere, or from cascading failures that are not corrected by self-healing measures.
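The four properties are often treated as separate management capabilities. The following minimal Java sketch simply makes that separation explicit for a self-managing element; the interface and class names are hypothetical illustrations, not part of IBM's toolkit or any standard.

```java
// Hypothetical capability interfaces for the four self-* properties;
// names and the toy DatabaseTier element are illustrative assumptions.
public class SelfStarPropertiesSketch {

    interface SelfConfiguring { void applyConfiguration(String highLevelPolicy); }
    interface SelfHealing     { void diagnoseAndRecover(String faultReport); }
    interface SelfOptimizing  { void tuneResources(); }
    interface SelfProtecting  { void defendAgainst(String threatSignature); }

    /** A toy managed element that exposes all four capabilities. */
    static class DatabaseTier implements SelfConfiguring, SelfHealing,
                                          SelfOptimizing, SelfProtecting {
        public void applyConfiguration(String p) { System.out.println("configure: " + p); }
        public void diagnoseAndRecover(String f) { System.out.println("heal: " + f); }
        public void tuneResources()              { System.out.println("optimize buffer pools"); }
        public void defendAgainst(String t)      { System.out.println("block: " + t); }
    }

    public static void main(String[] args) {
        DatabaseTier tier = new DatabaseTier();
        tier.applyConfiguration("keep replicas = 3");
        tier.diagnoseAndRecover("node-2 unreachable");
        tier.tuneResources();
        tier.defendAgainst("suspicious login burst");
    }
}
```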

NEED FOR AUTONOMIC COMPUTING:
Forecasts suggest that the number of computing devices in use will grow at 38% per year, and the average complexity of each device is increasing. Currently, this volume and complexity is managed by highly skilled humans, but the demand for skilled IT personnel is already outstripping supply, with labour costs exceeding equipment costs by a ratio of up to 18:1. Computing systems have brought great benefits of speed and automation, but there is now an overwhelming need to automate their maintenance. In a 2003 IEEE Computer magazine article, Kephart and Chess warn that the dream of interconnectivity of computing systems and devices could become the nightmare of pervasive computing, in which architects are unable to anticipate, design and maintain the complexity of interactions. They state that the essence of autonomic computing is system self-management, freeing administrators from low-level task management while delivering better system behavior. A general problem of modern distributed computing systems is that their complexity, and in particular the complexity of their management, is becoming a significant limiting factor in their further development. Large companies and institutions employ large-scale computer networks for communication and computation. The distributed applications running on these networks are diverse and deal with many tasks, ranging from internal control processes to presenting web content and customer support. Additionally, mobile computing is pervading these networks at an increasing speed: employees need to communicate with their companies while they are not in the office. They do so by using laptops, personal digital assistants, or mobile phones with diverse forms of wireless technology to access their companies' data. This creates enormous complexity in the overall computer network, which is hard to control manually by human operators. Manual control is time-consuming, expensive and error-prone, and the manual effort needed to control a growing networked computer system tends to increase very quickly. 80% of such infrastructure problems occur at the client-specific application and database layer, yet most 'autonomic' service providers only guarantee to solve problems at the lower layers (power, hardware, operating system, network and basic database parameters).

Architecture:
The architecture of an autonomic computing system varies with the context of its use. The vendor-specific classification includes the following sub-categories: single resource, homogeneous systems, heterogeneous systems and business systems.

Control loop:

The monitor function provides the mechanisms that collect, aggregate, filter and report details (such as metrics and topologies) collected from a managed resource. The analyze function provides the mechanisms that correlate and model complex situations (for example, time-series forecasting and queuing models); these mechanisms allow the autonomic manager to learn about the IT environment and help predict future situations. The plan function provides the mechanisms that construct the actions needed to achieve goals and objectives; the planning mechanism uses policy information to guide its work. The execute function provides the mechanisms that control the execution of a plan with consideration for dynamic updates. These four parts work together to provide the control loop functionality. Figure 4 shows a structural arrangement of the parts rather than a control flow; the four parts communicate and collaborate with one another and exchange appropriate knowledge and data.

The fundamental management element of an autonomic computing architecture is the control loop. Since resources have vendor-specific differences, this architecture assumes that any resource to be managed can be instrumented in a standard way, so that an autonomic manager can communicate with it in a standard way. The architecture also assumes that each autonomic manager governs a set of homogeneous resources (i.e., resources having the same functionality and programmed using the same language). The operation of the control loop is as follows: sensors retrieve vendor-specific data, which is then converted to a normalized form and analyzed to determine whether any correction to the monitored resources is needed (e.g., to correct non-optimal, failed, or error states). If so, those corrections are planned, and the appropriate actions are executed using effectors that translate normalized commands back into a form that the managed resources can understand.
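To make the loop concrete, here is a minimal, self-contained Java sketch of the monitor-analyze-plan-execute cycle described above. The Sensor and Effector interfaces, the cpuUtilization metric and the 80% threshold are illustrative assumptions rather than any vendor's actual API.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a MAPE control loop; names and the simple
// CPU-utilization policy are illustrative assumptions, not a real API.
public class AutonomicManagerSketch {

    /** Sensor: exposes normalized metrics collected from a managed resource. */
    interface Sensor {
        Map<String, Double> readMetrics();
    }

    /** Effector: applies normalized corrective actions to a managed resource. */
    interface Effector {
        void apply(String action);
    }

    private final Sensor sensor;
    private final Effector effector;

    AutonomicManagerSketch(Sensor sensor, Effector effector) {
        this.sensor = sensor;
        this.effector = effector;
    }

    /** One pass of the control loop: monitor, analyze, plan, execute. */
    void runOnce() {
        Map<String, Double> metrics = sensor.readMetrics();   // Monitor
        boolean overloaded = analyze(metrics);                 // Analyze
        List<String> actions = plan(overloaded);               // Plan
        actions.forEach(effector::apply);                      // Execute
    }

    private boolean analyze(Map<String, Double> metrics) {
        // High-level policy: keep CPU utilization below 80% (assumed threshold).
        return metrics.getOrDefault("cpuUtilization", 0.0) > 0.80;
    }

    private List<String> plan(boolean overloaded) {
        return overloaded ? List.of("addServerInstance") : List.of();
    }

    public static void main(String[] args) {
        // Stub sensor/effector standing in for a real touchpoint.
        Sensor sensor = () -> Map.of("cpuUtilization", 0.93);
        Effector effector = action -> System.out.println("Executing: " + action);
        new AutonomicManagerSketch(sensor, effector).runOnce();
    }
}
```

In a real deployment the analyze and plan steps would consult policies held in a knowledge source rather than a hard-coded threshold.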

In the case of a single resource or a homogeneous system, the architecture is quite simple and can be understood effectively in terms of a single control loop or a few control loops. When we have to describe an architecture for a heterogeneous network or a company-wide network, we need several control loops together with additional layers.

The five building blocks of an autonomic system are: the autonomic manager, the knowledge source, the touchpoint, the manual manager and the enterprise service bus.
Autonomic manager: An autonomic manager is an implementation that automates some management function and externalizes this function according to the behavior defined by management interfaces. The autonomic manager is the component that implements the control loop. For a system component to be self-managing, it must have an automated method to collect the details it needs from the system; to analyze those details to determine whether something needs to change; to create a plan, or sequence of actions, that specifies the necessary changes; and to perform those actions. When these functions can be automated, an intelligent control loop is formed.
Knowledge source: A knowledge source is an implementation of a registry, dictionary, database or other repository that provides access to knowledge according to the interfaces prescribed by the architecture. In an autonomic system, knowledge consists of particular types of data with architected syntax and semantics, such as symptoms, policies, change requests and change plans. This knowledge can be stored in a knowledge source so that it can be shared among autonomic managers.
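As a toy illustration of how such knowledge might be shared among managers, the sketch below implements a minimal in-memory knowledge source; the class and the topic names ("symptom", "policy") are assumptions for illustration, not the architecture's prescribed interfaces.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical in-memory knowledge source shared by several autonomic managers.
public class KnowledgeSourceSketch {

    private final Map<String, List<String>> store = new ConcurrentHashMap<>();

    /** Publish a piece of knowledge (e.g. a symptom or a change plan) under a topic. */
    public void publish(String topic, String entry) {
        store.computeIfAbsent(topic, k -> new CopyOnWriteArrayList<>()).add(entry);
    }

    /** Retrieve all knowledge recorded under a topic. */
    public List<String> lookup(String topic) {
        return store.getOrDefault(topic, List.of());
    }

    public static void main(String[] args) {
        KnowledgeSourceSketch knowledge = new KnowledgeSourceSketch();
        // One manager records a symptom and a policy; another reads them later.
        knowledge.publish("symptom", "disk latency above threshold on node-3");
        knowledge.publish("policy", "keep response time under 500 ms");
        System.out.println(knowledge.lookup("symptom"));
    }
}
```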

Touchpoint: A touchpoint is the component in a system that exposes the state and management operations for a resource in the system. An autonomic manager communicates with a touchpoint through the manageability interface, described below. A touchpoint, depicted in Figure 5, is the implementation of the manageability interface for a specific manageable resource or a set of related manageable resources. For example, a touchpoint might be implemented that exposes the manageability of a database server, the databases that the server hosts, and the tables within those databases.
Manual manager: A manual manager is an implementation of a user interface that enables an IT professional to perform some management function manually. The manual manager can collaborate with other autonomic managers at the same level or orchestrate autonomic managers and other IT professionals working at lower levels.
Enterprise service bus: An enterprise service bus is an implementation that assists in integrating the other building blocks (for example, autonomic managers and touchpoints) by directing the interactions among them.

Other components:
Touchpoint - The interface to an instance of a managed resource, such as an operating system or a server. A touchpoint implements sensor and effector behavior for the managed resource, and maps the sensor and effector interfaces to existing interfaces.
Touchpoint Autonomic Manager - An autonomic manager that works with managed resources through their touchpoints.
Integrated Solutions Console - A technology that provides a common, consistent user interface, based on industry standards and component reuse, which can host common system administrative functions. The IBM Integrated Solutions Console is a core technology of the IBM Autonomic Computing initiative that uses a portal-based interface to provide these common administrative functions for IBM server, software or storage products.
Orchestrating Autonomic Manager - An autonomic manager that works with other autonomic managers to provide coordination functions.
Manageability Interface - A service of the managed resource that includes the sensor and effector used by an autonomic manager. The autonomic manager uses the manageability interface to monitor and control the managed resource.
Event - Any significant change in the state of a system resource, network resource or network application. An event can be generated for a problem, for the resolution of a problem, or for the successful completion of a task.
Sensor - An interface that exposes information about the state and state transitions of a managed resource.
Effector - An interface that enables state changes for a managed resource.
Enterprise Service Bus - An implementation that assists in integrating the other building blocks (for example, autonomic managers and touchpoints) by directing the interactions among them.
Knowledge Source - An implementation of a registry, dictionary, database or other repository that provides access to knowledge according to the interfaces prescribed by the architecture.
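To make the sensor/effector idea concrete, here is a small hypothetical touchpoint that maps a normalized "restart" command onto a vendor-specific call. The VendorServerApi type and its method names are invented for illustration and are not part of the actual architecture specification.

```java
import java.util.Map;

// Hypothetical touchpoint: wraps a vendor-specific interface behind
// normalized sensor/effector operations (illustrative names only).
public class ServerTouchpointSketch {

    /** Stand-in for a vendor's proprietary management API. */
    static class VendorServerApi {
        double readCpuLoadPercent() { return 91.0; }
        void rebootNode(String nodeId) { System.out.println("vendor reboot of " + nodeId); }
    }

    private final VendorServerApi vendorApi;

    ServerTouchpointSketch(VendorServerApi vendorApi) {
        this.vendorApi = vendorApi;
    }

    /** Sensor side: expose vendor data in a normalized form. */
    Map<String, Double> sense() {
        return Map.of("cpuUtilization", vendorApi.readCpuLoadPercent() / 100.0);
    }

    /** Effector side: translate a normalized command into a vendor-specific action. */
    void effect(String normalizedCommand) {
        if (normalizedCommand.startsWith("restart:")) {
            vendorApi.rebootNode(normalizedCommand.substring("restart:".length()));
        }
    }

    public static void main(String[] args) {
        ServerTouchpointSketch touchpoint = new ServerTouchpointSketch(new VendorServerApi());
        System.out.println(touchpoint.sense());
        touchpoint.effect("restart:node-7");
    }
}
```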

Model for developing an autonomic system:

1. Basic: At the basic level, an individual manages different tasks and day-to-day operations. This is the starting point, in which individuals or IT professionals manage different tasks such as setting up the IT environment, monitoring it and applying updates.
2. Managed: At the managed level, information is collected from systems through various technologies and tools, which helps in making good and intelligent decisions. This enables the systems administrator to collect and analyze information more quickly.
3. Predictive: At the predictive level, predictions and optimal solutions are provided, making the system more intelligent. New technologies are used that correlate different components of a system to enable pattern recognition, prediction and suggestions for optimal solutions.
4. Adaptive: At the adaptive level, information and knowledge are used to initiate various actions automatically. The components use the available information and knowledge of the system to execute different actions automatically.

5. Autonomic: At the autonomic level, business policies and objectives are monitored, and the system itself can change business policies, objectives or both.

Applications:

1) IBM WebSphere Virtual Enterprise:

It uses autonomic management to provide an enhanced quality of service in dynamic operations and extended manageability for service-level management. Dynamic workload management offers better hardware utilization with lower over-provisioning requirements, and allows the amount of physical compute resource allocated to an application to vary as demand for that application varies.

Application edition management offers better availability. It allows smooth upgrades of applications without taking an outage, supports rolling upgrades in which only part of a cell is updated at a time, and allows users to try out a new version of an application.

Health management offers better availability and resiliency. It supports identification of risk factors that could lead to outages; examples of risk conditions are JVM heaps exceeding a defined threshold or JVM lifetime reaching a defined amount of time.

2) HP 3PAR Adaptive Optimization:

It is an autonomic storage solution that delivers service-level optimization for virtual and cloud data centres to reduce cost while increasing agility and minimizing risk. It uses policy rules to autonomically optimize data storage and movement in accordance with customer quality-of-service needs. The 3PAR solution has been integrated into many forms of HP's storage solutions, enabling users to realize elastic storage as demand changes. It enables IT managers to react swiftly to changing business needs while delivering service-level optimization over the entire application. It offers QoS gradients for application prioritisation modes to shift data at a granular level toward the most appropriate resources. It intelligently monitors sub-volume performance and then applies user-specified policies to autonomically and non-disruptively move data. It works with HP 3PAR System Reporter software to allow administrators to prioritize each application by configuring an adaptive optimization configuration with up to three storage tiers defined by drive type, RAID level and stripe width.

3) IBM Tivoli Change and Configuration Management:

It automatically tracks IT information, aiding IT staff in understanding the relationships and dependencies between products and their components. It delivers efficient, cost-effective management solutions by integrating IT processes and data and automating operational management product use. It facilitates internal and regulatory compliance by enforcing policies as well as tracking and recording changes across the organization. It visualizes all critical intelligence regarding the infrastructure through data consolidation and federation capabilities. It helps in employing best-practice change management processes with impact assessment and visibility of schedules to reduce business impact.

4) Oracle 11g:

It has several important autonomic features. The SQL Performance Analyzer enables the user to proactively assess the impact on SQL execution of any change in the database. Database Replay enables production workloads to be run on a staging system to test reliability before a change is deployed in production.

ADDM provides a near-real-time performance picture of the entire database. It can be run on demand or automatically to look for any performance-related problems; if a problem is detected, it will identify its root causes and make appropriate recommendations for solving it. It has a PL/SQL interface that enables developers to embed ADDM capabilities into their applications, and an Oracle Enterprise Manager interface for remote diagnosis.

Prospects of autonomic computing:

1. Power Management: Power management has been looked at from two systems perspectives, the data centre and wireless sensor networks (WSN); this section examines data centres only. It has been estimated that power equipment, cooling equipment and electricity together are responsible for 63% of the total cost of ownership of the physical IT infrastructure of a data centre. Such statistics have motivated research in self-adaptive systems that optimise resource management not only in terms of performance metrics but also in terms of the power that a given algorithm or service will consume on a given infrastructure.

2. Data Centres, Clusters and Grid Computing Systems: These kinds of systems can be grouped together, as they are essentially wide-area, high-performance, heterogeneous distributed clusters of computers used to run anything from scientific to business applications for many differing users. This brings extra complexity in maintaining such a geographically distributed system (which can potentially span the world). The allocation of resources is complicated by the fact that the system is expected to provide an agreed quality of service (QoS), which could be formalized in a Service Level Agreement (SLA). This typically means that the dynamic nature of the system's load, up-times, uncertainties and so on must be taken into account not only when initially allocating resources, but also while the application is using those resources. Hence it is an excellent challenge for autonomic computing, because maintaining a given QoS under such diverse dynamic conditions with ad-hoc manual tuning and component replacement is unachievable.
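Purely as an illustration of the kind of self-optimizing decision such systems must automate, the sketch below checks a measured response time against an assumed SLA target and decides whether to add capacity. The 500 ms target, the proportional rule and the method names are hypothetical, not taken from any particular platform.

```java
// Hypothetical SLA check for a self-optimizing resource allocator.
public class SlaScalingSketch {

    // Assumed service-level objective: 95th-percentile response time under 500 ms.
    private static final double TARGET_RESPONSE_MS = 500.0;

    /** Returns the number of extra instances to request (0 if the SLA is met). */
    static int instancesToAdd(double measuredP95Ms, int currentInstances) {
        if (measuredP95Ms <= TARGET_RESPONSE_MS) {
            return 0; // SLA met: no action, avoid needless reconfiguration.
        }
        // Naive proportional rule: grow capacity roughly in line with the overshoot.
        double overshoot = measuredP95Ms / TARGET_RESPONSE_MS;
        return (int) Math.ceil(currentInstances * (overshoot - 1.0));
    }

    public static void main(String[] args) {
        System.out.println(instancesToAdd(450.0, 4)); // 0: within SLA
        System.out.println(instancesToAdd(750.0, 4)); // 2: scale out
    }
}
```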

3. Ubiquitous Computing: Ubiquitous computing (ubicomp) is a post-desktop model of human-computer interaction in which information processing has been thoroughly integrated into everyday objects and activities. In the course of ordinary activities, someone "using" ubiquitous computing engages many computational devices and systems simultaneously, and may not necessarily even be aware of doing so. This model is usually considered an advancement over the desktop paradigm. More formally, ubiquitous computing is defined as "machines that fit the human environment instead of forcing humans to enter theirs". This paradigm is also described as pervasive computing or ambient intelligence, where each term emphasizes slightly different aspects. When primarily concerning the objects involved, it is also called physical computing, the Internet of Things, haptic computing, and things that think. Rather than propose a single definition for ubiquitous computing and these related terms, a taxonomy of properties for ubiquitous computing has been proposed, from which different kinds or flavours of ubiquitous systems and applications can be described.

Conclusion:
Though people buzz around cloud computing strategies, whose roots were proposed in the 1960s, autonomic computing has its own importance, and unknowingly it already plays a large part in everyday programming; for example, DBMS triggers in Oracle help queries execute autonomically. Finally, our bottom line would be:

Autonomic, an economic solution,
Modern resolution,
Happening revolution,
Let's be a part in its evolution.

Advantages:
It simplifies the user experience through a more responsive, real-time system.
It saves cost and is easy to use.
Scaled power, storage and costs optimize usage across both hardware and software.
Full use of idle processing power, including home PCs, through networked systems.
It provides server consolidation to maximize system availability, and minimizes the cost and human effort needed to manage large server farms.
Seamless access to multiple file types; open standards will allow users to pull data from all potential sources.
Stability, high availability and high security.
Fewer system or network errors due to self-healing.
Autonomic computing will enable e-sourcing: the ability to deliver information technology as a utility, when you need it, in the amount you need to accomplish the task at hand. Autonomic computing will create big opportunities for such services, which are emerging.


Disadvantages:
It remains a challenge to completely understand how biological systems work.
Lack of mathematical models.
Limited applicability of biologically-inspired architectural styles.
Over-dependence on the system to control data.
