
CLOUD COMPUTING

ACKNOWLEDGEMENT

With immense joy and pleasure, I hereby take the privilege of presenting this case study on Cloud Computing.

Prof. Neha Srivastava stands apart from all other contributors to this case study; she constantly inspired me with her guidance and suggestions during its preparation.

I would like to express my thanks to Prof. Rajesh Gaikwad (Head of Department). It would also not be out of place to extend a big thanks to the staff members and laboratory assistants for their contribution in helping me complete this case study.

Index
Sr. No.  Topic
1        Terminology
1.1      Definition
1.2      Benefits
1.3      Usage Scenario
2        Cloud Components
2.1      Clients
2.2      Data Center
2.3      Distributed Server
3        Infrastructure
3.1      Grid Computing
3.2      Full Virtualization
3.3      Para Virtualization
4        Taxonomy
5        Types of Cloud
5.1      Main Types
5.2      Deployment Types
6        Specific Characteristics
6.1      Non-Functional Aspect
6.2      Economic Aspect
6.3      Technological Aspect
7        Related Areas
8        Security Architecture of Cloud Computing
9        Gaps and Open Areas
9.1      Technical Gaps
9.2      Non-technical Gaps
10       State of Public Sector Cloud Computing
11       Customers Scenario

Terminology
A 'cloud' is an elastic execution environment of resources involving multiple stakeholders and providing a metered service at multiple granularities for a specified level of quality (of service).

Cloud computing is a term used to describe both a platform and a type of application. A cloud computing platform dynamically provisions, configures, reconfigures, and deprovisions servers as needed. Servers in the cloud can be physical machines or virtual machines. Advanced clouds typically include other computing resources such as storage area networks (SANs), network equipment, firewalls, and other security devices. Cloud computing also describes applications that are extended to be accessible through the Internet. These cloud applications use large data centers and powerful servers that host Web applications and Web services. Anyone with a suitable Internet connection and a standard browser can access a cloud application.

Definition
A cloud is a pool of virtualized computer resources. A cloud can:
- Host a variety of different workloads, including batch-style back-end jobs and interactive, user-facing applications
- Allow workloads to be deployed and scaled out quickly through the rapid provisioning of virtual machines or physical machines
- Support redundant, self-recovering, highly scalable programming models that allow workloads to recover from many unavoidable hardware/software failures
- Monitor resource use in real time to enable rebalancing of allocations when needed
Cloud computing environments support grid computing by quickly providing physical and virtual servers on which grid applications can run. Cloud computing should not be confused with grid computing. Grid computing involves dividing a large task into many smaller tasks that run in parallel on separate servers. Grids require many computers, typically in the thousands, and commonly use servers, desktops, and laptops.
Clouds also support non-grid environments, such as a three-tier Web architecture running standard or Web 2.0 applications. A cloud is more than a collection of computer resources because a cloud provides a mechanism to manage those resources. Management includes provisioning, change requests, reimaging, workload rebalancing, deprovisioning, and monitoring.

Benefits
Cloud computing infrastructures can allow enterprises to achieve more efficient use of their IT hardware and software investments. They do this by breaking down the physical barriers inherent in isolated systems, and automating the management of the group of systems as a single entity. Cloud computing is an example of an ultimately virtualized system, and a natural evolution for data centers that employ automated systems management, workload balancing, and virtualization technologies.
A cloud infrastructure can be a cost-efficient model for delivering information services, reducing IT management complexity, promoting innovation, and increasing responsiveness through real-time workload balancing.
The cloud makes it possible to launch Web 2.0 applications quickly and to scale up applications as much as needed, when needed. The platform supports traditional Java and Linux, Apache, MySQL, PHP (LAMP) stack-based applications as well as new architectures such as MapReduce and the Google File System, which provide a means to scale applications across thousands of servers instantly.

Large amounts of computer resource, in the form of Xen virtual machines, can be provisioned and made available for new applications within minutes instead of days or weeks. Developers can gain access to these resources through a portal and put them to use immediately. Several products are available that provide virtual machine capabilities, including proprietary ones such as VMware, and open source alternatives such as Xen. This case study describes the use of Xen virtualization.
Many customers are interested in cloud infrastructures to serve as platforms for innovation, particularly in countries that want to foster the development of a highly skilled, high-tech workforce. They want to provide startups and research organizations with an environment for idea exchange, and the ability to rapidly develop and deploy new product prototypes. In fact, HiPODS has been hosting IBM's innovation portal on a virtualized cloud infrastructure in its Silicon Valley Lab for nearly two years, with over seventy active innovations at a time, each lasting six months on average. Fifty percent of those innovations are Web 2.0 projects (search, collaboration, and social networking) and 27% turn into products or solutions. The success of the innovation portal is documented in the August 20 BusinessWeek cover story on global collaboration.

Usage scenarios
Cloud computing can play a significant role in a variety of areas including internal pilots, innovations, virtual worlds, e-business, social networks, and search. Here we summarize several basic but important usage scenarios that highlight the breadth and depth of impact that cloud computing can have on an enterprise.

Internal innovation
Innovators request resources online through a simple Web interface. They specify the desired start and end dates for their pilot. A cloud resource administrator approves or rejects the request. Upon approval, the cloud provisions the servers. The innovator has the resources available for use within a few minutes or an hour, depending on what type of resource was requested.

Virtual worlds
Virtual worlds require significant amounts of computing power, especially as those virtual spaces become large or as more and more users log in. Massively multiplayer online games (MMOGs) are a good example of significantly large virtual worlds. Several commercial virtual worlds have as many as nine million registered users and hundreds or thousands of servers supporting these environments.
A company that hosts a virtual world could have real-time monitors showing the utilization level of the current infrastructure or the average response time of the clients in any given realm of the virtual world. Realms are arbitrary areas within a virtual world that support a specific subset of people or a subset of the world. Suppose the company discovers that realm A has a significant increase in use and its response times are declining, whereas realms S and Z have decreased in use. The company initiates a cloud rebalance request to deprovision five servers each from realms S and Z and provision ten servers to realm A. After a couple of minutes the ten servers are relocated without interruption to any users in any of the realms, and the response time for realm A returns to acceptable levels. The company has achieved significant cost savings by reusing underutilized equipment, maintained high customer satisfaction, avoided help desk calls from users, and completed in minutes what would previously have taken days or weeks to accomplish.

e-business
In e-business, scalability can be achieved by making new servers available as needed. For example, during a peak shopping season, more virtual servers can be made available to cater to high shopper demand. In another example, a company may experience high workloads on weekends or evenings as opposed to early mornings and weekdays. If a company has a significantly large cloud, it could schedule computer resources to be provisioned each evening, weekend, or during a peak season. There are more opportunities to achieve efficiencies as the cloud grows. Another aspect of this scenario involves employing business policies to decide what applications receive higher priorities and thus more computing resources. Revenue-generating applications may be rated higher than research and development or innovation pilots. For several months IBM has been running a cloud infrastructure that adjusts computer resources appropriately and automatically according to business policies.

Personal hobbies
Innovation is no longer a concept developed and owned solely by companies and businesses. It is becoming popular at the individual level, and more individuals are coming up with innovations. These individuals could be requesting servers from a cloud to work on their innovations.
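The realm-rebalancing scenario above can be sketched in a few lines of code. The portal object and its method names below are purely illustrative assumptions; they do not correspond to any particular vendor's API.

    # Hypothetical sketch of the virtual-world rebalancing request described above.
    # CloudPortal, provision() and deprovision() are illustrative names only.
    class CloudPortal:
        def __init__(self, allocations):
            self.allocations = dict(allocations)   # servers currently assigned per realm

        def deprovision(self, realm, count):
            # Return servers from an underutilized realm to the free pool.
            self.allocations[realm] -= count
            return count

        def provision(self, realm, count):
            # Assign servers from the free pool to a realm under heavy load.
            self.allocations[realm] += count

    portal = CloudPortal({"realm_A": 20, "realm_S": 15, "realm_Z": 15})
    freed = portal.deprovision("realm_S", 5) + portal.deprovision("realm_Z", 5)
    portal.provision("realm_A", freed)             # the ten servers move to realm A
    print(portal.allocations)                      # {'realm_A': 30, 'realm_S': 10, 'realm_Z': 10}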

Cloud Components
In a simple, topological sense, a cloud computing solution is made up of several elements: clients, the datacenter, and distributed servers. As shown in Figure 1-3, these components make up the three parts of a cloud computing solution. Each element has a purpose and plays a specific role in delivering a functional cloud-based application, so let's take a closer look.

Clients
Clients are, in a cloud computing architecture, the exact same things that they are in a plain, old, everyday local area network (LAN). They are, typically, the computers that just sit on your desk. But they might also be laptops, tablet computers, mobile phones, or PDAs, all big drivers for cloud computing because of their mobility. In any case, clients are the devices that the end users interact with to manage their information on the cloud. Clients generally fall into three categories:
- Mobile: Mobile devices include PDAs or smartphones, like a BlackBerry, Windows Mobile smartphone, or an iPhone.
- Thin: Clients that do not have internal hard drives, but rather let the servers do all the work and then display the information.
- Thick: A regular computer, using a web browser like Firefox or Internet Explorer to connect to the cloud.
Thin clients are becoming an increasingly popular solution because of their price and effect on the environment. Some benefits of using thin clients include:
- Lower hardware costs: Thin clients are cheaper than thick clients because they do not contain as much hardware. They also last longer before they need to be upgraded or become obsolete.
- Lower IT costs: Thin clients are managed at the server and there are fewer points of failure.
- Security: Since the processing takes place on the server and there is no hard drive, there's less chance of malware invading the device. Also, since thin clients don't work without a server, there's less chance of them being physically stolen.
- Data security: Since data is stored on the server, there's less chance for data to be lost if the client computer crashes or is stolen.
- Less power consumption: Thin clients consume less power than thick clients. This means you'll pay less to power them, and you'll also pay less to air-condition the office.
- Ease of repair or replacement: If a thin client dies, it's easy to replace. The box is simply swapped out and the user's desktop returns exactly as it was before the failure.
- Less noise: Without a spinning hard drive, less heat is generated and quieter fans can be used on the thin client.

Datacenter
The datacenter is the collection of servers where the application to which you subscribe is housed. It could be a large room in the basement of your building or a room full of servers on the other side of the world that you access via the Internet. A growing trend in the IT world is virtualizing servers. That is, software can be installed allowing multiple instances of virtual servers to be used. In this way, you can have half a dozen virtual servers running on one physical server.
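As a small illustration of server virtualization, the sketch below lists the virtual servers running on one physical host. It assumes the libvirt Python bindings are installed and that the host runs a libvirt-managed hypervisor (such as Xen or KVM); on other setups the connection URI would differ.

    # Minimal sketch: enumerate the virtual servers hosted on one physical machine.
    # Assumes the libvirt-python package and a locally running hypervisor.
    import libvirt

    conn = libvirt.openReadOnly(None)        # connect to the local hypervisor
    for dom in conn.listAllDomains():        # each domain is one virtual server
        state = "running" if dom.isActive() else "stopped"
        print(dom.name(), state)
    conn.close()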

Distributed Servers
But the servers don't all have to be housed in the same location. Often, servers are in geographically disparate locations. But to you, the cloud subscriber, these servers act as if they're humming away right next to each other. This gives the service provider more flexibility in options and security. For instance, Amazon has its cloud solution in servers all over the world. If something were to happen at one site, causing a failure, the service could still be accessed through another site. Also, if the cloud needs more hardware, the provider need not throw more servers in the safe room; they can add them at another site and simply make it part of the cloud.

Infrastructure
Cloud computing isn't a one-size-fits-all affair. There are several different ways the infrastructure can be deployed. The infrastructure will depend on the application and how the provider has chosen to build the cloud solution. This is one of the key advantages of using the cloud. Your needs might be so massive that the number of servers required far exceeds your desire or budget to run those in-house. Alternatively, you may only need a sip of processing power, so you don't want to buy and run a dedicated server for the job. The cloud fits both needs.

Grid Computing
Grid computing is often confused with cloud computing, but they are quite different. Grid computing applies the resources of numerous computers in a network to work on a single problem at the same time. This is usually done to address a scientific or technical problem. A well-known example of this is the Search for Extraterrestrial Intelligence (SETI) @Home project. In this project, people all over the world allow the SETI project to use the unused cycles of their computers to search for signs of intelligence in thousands of hours of recorded radio data. This is shown in Figure 1-4.
Another well-used grid is the World Community Grid, which runs on the Berkeley Open Infrastructure for Network Computing (BOINC; see www.worldcommunitygrid.org). Here you can dedicate as much or as little of your idle CPU processing power as you choose to help conduct protein-folding experiments in an effort to create better and more durable rice crops to feed the world's hungry. I bet you didn't know you could feed the needy with your computer.
Grid computing necessitates software that can divide a program and then send out its pieces to thousands of computers. It can be done throughout the computers of an organization, or it can be done as a form of public collaboration. Sun Microsystems offers Grid Engine software that allows engineers at companies to pool the computer cycles on up to 80 workstations at a time. Grid computing is appealing for several reasons:
- It is a cost-effective way to use a given amount of computer resources.
- It is a way to solve problems that need a tremendous amount of computing power.
- The resources of several computers can be shared cooperatively, without one computer managing the others.
So what do grid computing and cloud computing have to do with one another? Not much directly, as they function in fundamentally different ways. In grid computing, a large project is divided among multiple computers to make use of their resources. Cloud computing does the opposite: it allows multiple smaller applications to run at the same time.
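To make the "divide a large task into many smaller tasks" idea concrete, here is a toy sketch in Python. Local worker processes stand in for grid nodes; a real grid framework (BOINC, Grid Engine) handles distribution, scheduling, and result collection across machines.

    # Toy illustration of the grid idea: one big job is split into chunks that
    # independent workers process in parallel, and the partial results are combined.
    from multiprocessing import Pool

    def work_on_chunk(bounds):
        start, end = bounds
        return sum(range(start, end))        # stand-in for a real scientific computation

    if __name__ == "__main__":
        chunks = [(i, i + 1_000_000) for i in range(0, 8_000_000, 1_000_000)]
        with Pool(processes=4) as pool:      # four local workers stand in for grid nodes
            partials = pool.map(work_on_chunk, chunks)
        print(sum(partials))                 # combine the partial results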

Full Virtualization
Full virtualization is a technique in which a complete installation of one machine is run on another. The result is a system in which all software running on the server is within a virtual machine.

In a fully virtualized deployment, the software running on the server is displayed on the clients.

This sort of deployment allows not only unique applications to run, but also different operating systems. Virtualization is relevant to cloud computing because it is one of the ways in which you will access services on the cloud. That is, the remote datacenter may be delivering your services in a fully virtualized format.
In order for full virtualization to be possible, it was necessary for specific hardware combinations to be used. It wasn't until 2005 that the introduction of the AMD Virtualization (AMD-V) and Intel Virtualization Technology (IVT) extensions made it easier to go fully virtualized. Full virtualization has been successful for several purposes:
- Sharing a computer system among multiple users
- Isolating users from each other and from the control program
- Emulating hardware on another machine
Paravirtualization

Paravirtualization allows multiple operating systems to run on a single hardware device at the same time by making more efficient use of system resources, like processors and memory. In full virtualization, the entire system is emulated (BIOS, drives, and so on), but in paravirtualization, the management module (the hypervisor) operates with an operating system that has been adjusted to work in a virtual machine. Paravirtualization typically performs better than the full virtualization model, simply because in a fully virtualized deployment all elements must be emulated.

The trade-off is reduced security and flexibility. For instance, flexibility is reduced because a particular OS or distribution may not be able to work. For example, a new Windows deployment may not be available as a guest OS for the solution. Security can be at risk because the guest OS has more control of the underlying hardware, and there is a risk of impacting the hardware and all the guest systems on the host.

Paravirtualization also allows for better scaling. For example, if a fully virtualized solution requires 10 percent of processor utilization per guest, then five systems are about the most that could be run on a host before performance takes a hit. Paravirtualization requires only about 2 percent of processor utilization per guest instance, leaving much more of the host's capacity available for additional guests (see the short calculation after the list below). This is illustrated in Table 1-1. Paravirtualization works best in these sorts of deployments:
- Disaster recovery: In the event of a catastrophe, guest instances can be moved to other hardware until the equipment can be repaired.
- Migration: Moving to a new system is easier and faster because guest instances can be removed from the underlying hardware.
- Capacity management: Because of easier migrations, capacity management is simpler to implement. It is easier to add more processing power or hard drive capacity in a virtualized environment.
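The scaling difference can be put into numbers with a short, back-of-the-envelope calculation. The 50 percent CPU budget below is only inferred from the "five systems at 10 percent" figure above; all values are illustrative.

    # Rough guest-capacity comparison using the utilization figures quoted above.
    cpu_budget = 0.50      # share of the processor assumed available to guests
    full_virt  = 0.10      # per-guest CPU cost under full virtualization
    para_virt  = 0.02      # per-guest CPU cost under paravirtualization

    print(int(cpu_budget / full_virt))   # -> 5 guests before performance suffers
    print(int(cpu_budget / para_virt))   # -> 25 guests on the same host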

Taxonomy
A taxonomy for cloud computing can be defined in terms of three roles:

In this taxonomy, Service Consumers use the services provided through the cloud, Service Providers manage the cloud infrastructure, and Service Developers create the services themselves. (Notice that open standards are needed for the interactions between these roles.) Each role is discussed in more detail in the following sections.

Service Consumer
The service consumer is the end user or enterprise that actually uses the service, whether it is Software, Platform or Infrastructure as a Service. Depending on the type of service and their role, consumers work with different user interfaces and programming interfaces. Some user interfaces look like any other application; the consumer does not need to know about cloud computing as they use the application. Other user interfaces provide administrative functions such as starting and stopping virtual machines or managing cloud storage. Consumers writing application code use different programming interfaces depending on the application they are writing. Consumers work with SLAs and contracts as well. Typically these are negotiated via human intervention between the consumer and the provider. The expectations of the consumer and the reputation of the provider are a key part of those negotiations.

Service Provider
The service provider delivers the service to the consumer. The actual task of the provider varies depending on the type of service:
- For Software as a Service, the provider installs, manages and maintains the software. The provider does not necessarily own the physical infrastructure in which the software is running. Regardless, the consumer does not have access to the infrastructure; they can access only the application.
- For Platform as a Service, the provider manages the cloud infrastructure for the platform, typically a framework for a particular type of application. The consumer's application cannot access the infrastructure underneath the platform.
- For Infrastructure as a Service, the provider maintains the storage, database, message queue or other middleware, or the hosting environment for virtual machines. The consumer uses that service as if it were a disk drive, database, message queue, or machine, but they cannot access the infrastructure that hosts it.
In the service provider diagram, the lowest layer of the stack is the firmware and hardware on which everything else is based. Above that is the software kernel, either the operating system or virtual machine manager that hosts the infrastructure beneath the cloud. The virtualized resources and images include the basic cloud computing services such as processing power, storage and middleware. The virtual images controlled by the VM manager include both the images themselves and the metadata required to manage them.
Crucial to the service provider's operations is the management layer. At a low level, management requires metering to determine who uses the services and to what extent, provisioning to determine how resources are allocated to consumers, and monitoring to track the status of the system and its resources. At a higher level, management involves billing to recover costs, capacity planning to ensure that consumer demands will be met, SLA management to ensure that the terms of service agreed to by the provider and consumer are adhered to, and reporting for administrators.
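A minimal sketch of the metering-to-billing path in the management layer is shown below. The record format and the per-unit rates are hypothetical; real providers meter far more resource types and apply tiered pricing.

    # Sketch: aggregate metered usage records into per-consumer invoices.
    from collections import defaultdict

    usage_records = [
        {"consumer": "acme",   "resource": "vm_hours",   "amount": 120},
        {"consumer": "acme",   "resource": "storage_gb", "amount": 500},
        {"consumer": "globex", "resource": "vm_hours",   "amount": 40},
    ]
    rates = {"vm_hours": 0.10, "storage_gb": 0.02}      # hypothetical price per unit

    invoices = defaultdict(float)
    for record in usage_records:                        # metering feeds billing
        invoices[record["consumer"]] += record["amount"] * rates[record["resource"]]

    for consumer, total in sorted(invoices.items()):
        print(f"{consumer}: ${total:.2f}")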

Security applies to all aspects of the service provider's operations. (The many levels of security requirements are beyond the scope of this case study.) Open standards apply to the provider's operations as well. A well-rounded set of standards simplifies operations within the provider and interoperability with other providers.

Service Developer
The service developer creates, publishes and monitors the cloud service. These are typically "line-of-business" applications that are delivered directly to end users via the SaaS model. Applications written at the IaaS and PaaS levels will subsequently be used by SaaS developers and cloud providers. Development environments for service creation vary. If developers are creating a SaaS application, they are most likely writing code for an environment hosted by a cloud provider. In this case, publishing the service means deploying it to the cloud provider's infrastructure. During service creation, analytics involve remote debugging to test the service before it is published to consumers. Once the service is published, analytics allow developers to monitor the performance of their service and make changes as necessary.

For example, Figure 1 illustrates the high-level architecture of the cloud computing platform. It comprises a data center, IBM Tivoli Provisioning Manager, IBM Tivoli Monitoring, IBM WebSphere Application Server, IBM DB2, and virtualization components. This architecture diagram focuses on the core back end of the cloud computing platform; it does not address the user interface.

Tivoli Provisioning Manager automates imaging, deployment, installation, and configuration of the Microsoft Windows and Linux operating systems, along with the installation and configuration of any software stack that the user requests.

Tivoli Provisioning Manager uses WebSphere Application Server to communicate the provisioning status and availability of resources in the data center, to schedule the provisioning and deprovisioning of resources, and to reserve resources for future use. As a result of the provisioning, virtual machines are created using the Xen hypervisor, or physical machines are created using Network Installation Manager, Remote Deployment Manager, or Cluster Systems Manager, depending upon the operating system and platform. IBM Tivoli Monitoring Server monitors the health (CPU, disk, and memory) of the servers provisioned by Tivoli Provisioning Manager. DB2 is the database server that Tivoli Provisioning Manager uses to store the resource data. IBM Tivoli Monitoring agents installed on the virtual and physical machines communicate with the Tivoli Monitoring server to obtain the health of the virtual machines and provide it to the user. The cloud computing platform has two user interfaces to provision servers; one of them is fully loaded with the WebSphere suite of products and is relatively more involved from a process perspective. Requests are handled by Web 2.0 components deployed on the WebSphere Application Server and are forwarded to Tivoli Provisioning Manager for provisioning and deprovisioning servers.

Open source
Open source solutions played an important role in the development of the cloud. In particular, a couple of projects have been foundations for common cloud services such as virtualization and parallel processing. Xen is an open-source virtual machine implementation that allows physical machines to host multiple copies of operating systems. Xen is used in the cloud to represent machines as virtual images that can be easily and repeatedly provisioned and deprovisioned. Hadoop, now under the Apache license, is an open-source framework for running large data processing applications on a cluster. It allows the creation and execution of applications using Google's MapReduce programming paradigm, which divides the application into small fragments of work that can be executed on any node in the cluster. It also transparently supports reliability and data migration through the use of a distributed file system. Using Hadoop, the cloud can execute parallel applications on a massive data set in a reasonable amount of time, enabling computationally intensive services such as retrieving information efficiently, customizing user sessions based on past history, or generating results based on Monte Carlo (probabilistic) algorithms.

Virtualization
Virtualization in a cloud can be implemented on two levels. The first is at the hardware layer. Using hardware like the IBM System p enables innovators to request virtualized, dynamic LPARs with IBM AIX or Linux operating systems. The LPAR's CPU resource is ideally managed by IBM Enterprise Workload Manager. Enterprise Workload Manager monitors CPU demand and use and employs business policies to determine how much CPU resource is assigned to each LPAR. The System p has micropartitioning capability, which allows the system to assign partial CPUs to LPARs. A partial CPU can be as granular as 1/10 of a physical CPU. Micropartitioning combined with the dynamic load balancing capabilities of Enterprise Workload Manager makes a powerful virtualized infrastructure available for innovators. In this environment pilots and prototypes are generally lightly used at the beginning of the life cycle. During the startup stage, CPU use is generally lower because there is typically more development work and fewer early adopters or pilot users. At the same time, other more mature pilots and prototypes may have hundreds or thousands of early adopters who are accessing the servers. Accordingly, those servers can take heavy loads at certain times of the day, or days of the week, and this is when Enterprise Workload Manager dynamically allocates CPU resources to the LPARs that need them.
The second implementation of virtualization occurs at the software layer. Here technologies such as Xen can provide tremendous advantages to a cloud environment. The current implementations of the cloud support Xen specifically, but the framework also allows for other software virtualization technologies such as VMware's ESX product. Software virtualization entails installing a hypervisor on an IBM System x or other IBM System physical server. The hypervisor supports multiple guest operating systems and provides a layer of virtualization so that each guest operating system resides on the same physical hardware without knowledge of the other guest operating systems. Each guest operating system is protected from the other operating systems and will not be affected by instability or configuration issues of the other operating systems. Software virtualization allows underutilized servers to become fully utilized, saving the company significant costs in hardware and maintenance. A Xen virtualization model provides significant benefits, including the ability to relocate virtual machines (guest operating systems) in a matter of seconds with zero downtime; the ability to take an unused server offline with no ill effect and later restore that same virtual machine and bring it back online in a matter of seconds; and the ability to place guests on physical machines that have unused resources (memory, CPU, disk). Basic provisioning can complete in seconds, though additional configuration or middleware and application provisioning may require additional time depending on the implementation. A SAN-based storage architecture must be used for some of these software virtualization benefits to be realized. This dynamic allocation of resources and the large number of active pilots enable cloud resources to be used extremely efficiently. A non-virtualized environment may well be able to handle less than half the number of projects of a virtualized cloud.

Storage architecture in the cloud
The storage architecture of the cloud includes the capabilities of the Google file system along with the benefits of a storage area network (SAN). Either technique can be used by itself, or both can be used together as needed. Computing without data is as rare as data without computing; the combination of data and computing power is important. Computing power is often measured by the cycle speed of a processor, but it also needs to account for the number of processors. The number of processors within an SMP and the number within a cluster may both be important. When looking at disk storage, the amount of space is often the primary measure. The number of gigabytes or terabytes of data needed is important, but access rates are often more important. Being able to read only sixty megabytes per second may limit your processing capabilities below your compute capabilities. Individual disks have limits on the rate at which they can process data. A single computer may have multiple disks, or, with a SAN file system, be able to access data over the network. So data placement can be an important factor in achieving high data access rates. Spreading the data over multiple computer nodes may be desired, or having all the data reside on a single node may be required for optimal performance.

The Google file structure can be used in the cloud environment. When used, it employs the disks inside the machines, along with the network, to provide a shared file system that is redundant. This can increase the total data processing speed when the data and processing power are spread out efficiently. The Google file system is a part of the storage architecture, but it is not considered to be a SAN architecture. SAN architecture relies on an adapter other than an Ethernet adapter in the computer nodes, and has a network similar to an Ethernet network that can then host various SAN devices.

Piloting innovations on a cloud
Many companies are creating innovation initiatives and funding programs to develop innovation processes. Because innovation is an evolving topic, the team leaders often don't know where to start. More often than not, they look at traditional or existing collaboration tools to try to meet the requirements for collaborative innovation. Through numerous engagements with clients, IBM has discovered that collaboration tools by themselves will not yield the desired results as effectively as having a structured innovation platform and program in place. IBM addressed this problem by developing a comprehensive innovation platform called Innovation Factory. The Innovation Factory removes most of the barriers that innovators experience by combining collaboration tools, search and tagging technologies, as well as site creation tools in a single unified portal. This type of innovation platform enables innovation by putting a structure around the innovation process and providing tools for innovators and early adopters to publish, experiment, provide feedback, and enhance innovations. The Innovation Factory is a perfect complement to cloud computing because the innovators making new pilots and technologies available usually need servers or other computing resources on which to develop, test, and provide those services and applications to the early adopters. By combining cloud computing and Innovation Factory, or any other innovation platform already in use, a company can benefit from a complete solution that provides both physical computing resources and an innovation process combined with collaboration tools. Adding cloud computing to a company's existing innovation process reduces the time needed to develop and deliver a product, reduces the barrier to entry, and reduces costs associated with procurement, setup, management, and reuse of physical assets. Cloud computing should be part of every innovation process when physical or virtual computing resources are needed for innovation pilots.

TYPES OF CLOUDS
Cloud providers typically center on one type of cloud functionality provisioning: Infrastructure, Platform, or Software / Application, though there is potentially no restriction on offering multiple types at the same time, which can often be observed in PaaS (Platform as a Service) providers that offer specific applications too, such as Google App Engine in combination with Google Docs. Due to this combinatorial capability, these types are also often referred to as components. Literature and publications typically differ slightly in the terminologies applied. This is mostly due to the fact that some application areas overlap and are therefore difficult to distinguish. As an example, platforms typically have to provide access to resources indirectly, and thus are sometimes confused with infrastructures. Additionally, more popular terms have been introduced in less technologically centered publications. The following list identifies the main types of clouds (currently in use):

(Cloud) Infrastructure as a Service (IaaS), also referred to as Resource Clouds, provides (managed and scalable) resources as services to the user; in other words, it basically provides enhanced virtualisation capabilities. Accordingly, different resources may be provided via a service interface:
- Data & Storage Clouds deal with reliable access to data of potentially dynamic size, weighing resource usage against access requirements and / or quality definitions. Examples: Amazon S3, SQL Azure.
- Compute Clouds provide access to computational resources, i.e. CPUs. So far, such low-level resources cannot really be exploited on their own, so they are typically exposed as part of a virtualized environment (not to be confused with PaaS below), i.e. hypervisors. Compute Cloud providers therefore typically offer the capability to provide computing resources (i.e. raw access to resources, unlike PaaS offerings that provide full software stacks to develop and build applications), typically virtualised, in which to execute cloudified services and applications. IaaS (Infrastructure as a Service) offers additional capabilities over a simple compute service. Examples: Amazon EC2, Zimory, ElasticHosts.
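As a concrete IaaS example, the sketch below starts a single virtual machine on Amazon EC2 (one of the providers listed above) using the boto3 SDK. It assumes AWS credentials are already configured; the AMI ID is a placeholder, not a real image.

    # Minimal IaaS sketch: request one small virtual machine from Amazon EC2.
    import boto3

    ec2 = boto3.resource("ec2")
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder image ID
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )
    print(instances[0].id)                 # identifier of the newly provisioned VM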

(Cloud) Platform as a Service (PaaS) provides computational resources via a platform upon which applications and services can be developed and hosted. PaaS typically makes use of dedicated APIs to control the behaviour of a server hosting engine which executes and replicates the execution according to user requests (e.g. access rate). As each provider exposes his / her own API according to the respective key capabilities, applications developed for one specific cloud provider cannot be moved to another cloud host; there are, however, attempts to extend generic programming models with cloud capabilities (such as MS Azure). Examples: Force.com, Google App Engine, Windows Azure (Platform).

(Cloud) Software as a Service (SaaS), also sometimes referred to as Service or Application Clouds, offers implementations of specific business functions and business processes that are provided with specific cloud capabilities, i.e. providers supply applications / services using a cloud infrastructure or platform, rather than providing cloud features themselves. Often, some kind of standard application software functionality is offered within a cloud. Examples: Google Docs, Salesforce CRM, SAP Business ByDesign.

Overall, Cloud Computing is not restricted to Infrastructure / Platform / Software as a Service systems, even though it provides enhanced capabilities which act as (vertical) enablers to these systems. As such, I/P/SaaS can be considered specific usage patterns for cloud systems which relate to models already approached by Grid, Web Services etc. Cloud systems are a promising way to implement these models and extend them further.

DEPLOYMENT TYPES (CLOUD USAGE)
Similar to P/I/SaaS, clouds may be hosted and employed in different fashions, depending on the use case and the business model of the provider. So far, there has been a tendency for clouds to evolve from private, internal solutions (private clouds) used to manage the local infrastructure and the amount of requests, e.g. to ensure availability of highly requested data. This is due to the fact that data centres initiating cloud capabilities made use of these features for internal purposes before considering selling the capabilities publicly (public clouds). Only now that the providers have gained confidence in publication and exposure of cloud features do the first hybrid solutions emerge. This movement from private via public to combined solutions is often considered a natural evolution of such systems, though there is no reason for providers not to start up with hybrid solutions once the necessary technologies have reached a mature enough position. We can hence distinguish between the following deployment types:

Private Clouds are typically owned by the respective enterprise and / or leased. Functionalities are not directly exposed to the customer, though in some cases services with cloud-enhanced features may be offered; this is similar to (Cloud) Software as a Service from the customer point of view. Example: eBay.

Public Clouds. Enterprises may use cloud functionality from others, or offer their own services to users outside of the company. Providing the user with the actual capability to exploit the cloud features for his / her own purposes also allows other enterprises to outsource their services to such cloud providers, thus reducing costs and the effort to build up their own infrastructure. As noted in the context of cloud types, the scope of functionalities may thereby differ. Examples: Amazon, Google Apps, Windows Azure.

Hybrid Clouds. Though public clouds allow enterprises to outsource parts of their infrastructure to cloud providers, they at the same time would lose control over the resources and the distribution / management of code and data. In some cases, this is not desired by the respective enterprise. Hybrid clouds consist of a mixed employment of private and public cloud infrastructures so as to achieve a maximum of cost reduction through outsourcing whilst maintaining the desired degree of control over e.g. sensitive data by employing local private clouds. There are not many hybrid clouds actually in use today, though initial initiatives such as the one by IBM and Juniper already introduce base technologies for their realization.

Community Clouds. Typically cloud systems are restricted to the local infrastructure, i.e. providers of public clouds offer their own infrastructure to customers. Though a provider could actually resell the infrastructure of another provider, clouds do not aggregate infrastructures to build up larger, cross-boundary structures. In particular, smaller SMEs could profit from community clouds to which different entities contribute with their respective (smaller) infrastructures. Community clouds can either aggregate public clouds or dedicated resource infrastructures. We may thereby distinguish between private and public community clouds. For example, smaller organizations may come together only to pool their resources for building a private community cloud. As opposed to this, resellers such as Zimory may pool cloud resources from different providers and resell them. Community Clouds as such are still just a vision, though there are already indicators for such development, e.g. through Zimory and RightScale. Community clouds show some overlap with Grid technology.

Special Purpose Clouds. In particular, IaaS clouds originating from data centres have a general-purpose appeal, as their capabilities can be used equally for a wide scope of use cases and customer types. As opposed to this, PaaS clouds tend to provide functionalities more specialized to specific use cases, which should not be confused with proprietariness of the platform: specialization implies providing additional, use-case-specific methods, whilst proprietary data implies that the structure of data and interfaces are specific to the provider. Specialized functionalities are provided e.g. by the Google App Engine, which provides specific capabilities dedicated to distributed document management. Similar to general service provisioning (web based or not), it can be expected that future systems will provide even more specialized capabilities to attract individual user areas, due to competition, customer demand and available expertise. Special Purpose Clouds are just extensions of normal cloud systems to provide additional, dedicated capabilities. The basis of such development is already visible.
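The hybrid trade-off described above, keeping control over sensitive data while outsourcing the rest, can be sketched as a simple placement rule. The classification criterion is purely illustrative.

    # Sketch of a hybrid-cloud placement decision: sensitive workloads stay on the
    # private cloud, everything else may run on a public provider.
    def placement(workload):
        return "private cloud" if workload.get("sensitive_data") else "public cloud"

    workloads = [
        {"name": "customer-records-db", "sensitive_data": True},
        {"name": "public-web-frontend", "sensitive_data": False},
    ]
    for w in workloads:
        print(w["name"], "->", placement(w))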

SPECIFIC CHARACTERISTICS / CAPABILITIES OF CLOUDS


Since clouds do not refer to a specific technology, but to a general provisioning paradigm with enhanced capabilities, it is mandatory to elaborate on these aspects. There is currently a strong tendency to regard clouds as just a new name for an old idea, which is mostly due to a confusion between the cloud concepts and the strongly related P/I/SaaS paradigms, but also due to the fact that similar aspects have already been addressed without the dedicated term "cloud" associated with them. This section specifies the concrete capabilities associated with clouds that are considered essential (required in any cloud environment) and relevant (ideally supported, but may be restricted to specific use cases). We can thereby distinguish non-functional, economic and technological capabilities addressed, or to be addressed, by cloud systems.
Non-functional aspects represent qualities or properties of a system, rather than specific technological requirements. Implicitly, they can be realized in multiple fashions and interpreted in different ways, which typically leads to strong compatibility and interoperability issues between individual providers as they pursue their own approaches to realize their respective requirements, which strongly differ between providers. Non-functional aspects are one of the key reasons why clouds differ so strongly in their interpretation (see also II.B).
Economic considerations are one of the key reasons to introduce cloud systems in a business environment in the first instance. The particular interest typically lies in the reduction of cost and effort through outsourcing and / or automation of essential resource management. As has been noted in the first section, relevant aspects to consider relate to the cut-off between loss of control and reduction of effort. With respect to hosting private clouds, the gain through cost reduction has to be carefully balanced against the increased effort to build and run such a system.
Obviously, technological challenges implicitly arise from the non-functional and economic aspects when trying to realize them. As opposed to these aspects, technological challenges typically imply a specific realization, even though there may be no standard approach as yet and deviations may hence arise. In addition to these implicit challenges, one can identify additional technological aspects to be addressed by cloud systems, partially as a pre-condition to realize some of the high-level features, but partially also because they directly relate to specific characteristics of cloud systems.

1. NON-FUNCTIONAL ASPECTS
The most important non-functional aspects are:

Elasticity is an essential core feature of cloud systems and circumscribes the capability of the underlying infrastructure to adapt to changing, potentially non-functional requirements, for example the amount and size of data supported by an application, the number of concurrent users, etc. One can distinguish between horizontal and vertical scalability, whereby horizontal scalability refers to the number of instances needed to satisfy, for example, a changing amount of requests, and vertical scalability refers to the size of the instances themselves and thus, implicitly, to the amount of resources required to maintain that size. Cloud scalability involves both (rapid) up- and down-scaling. Elasticity goes one step further, though, and also allows the dynamic integration and extraction of physical resources to and from the infrastructure. Whilst from the application perspective this is identical to scaling, from the middleware management perspective it poses additional requirements, in particular regarding reliability. In general, it is assumed that changes in the resource infrastructure are announced first to the middleware manager, but with large-scale systems it is vital that such changes can be maintained automatically.

Reliability is essential for all cloud systems; in order to support today's data-centre-type applications in a cloud, reliability is considered one of the main features needed to exploit cloud capabilities. Reliability denotes the capability to ensure constant operation of the system without disruption, i.e. no loss of data, no code reset during execution, etc. Reliability is typically achieved through redundant resource utilisation. Interestingly, many of the reliability aspects move from a hardware-based to a software-based solution (redundancy in the file systems vs. RAID controllers, stateless front-end servers vs. UPS, etc.). Notably, there is a strong relationship between availability (see below) and reliability; however, reliability focuses in particular on prevention of loss (of data or execution progress).

Quality of Service support is a relevant capability that is essential in many use cases where specific requirements have to be met by the outsourced services and / or resources. In business cases, basic QoS metrics like response time, throughput etc. must be guaranteed at least, so as to ensure that the quality guarantees of the cloud user are met. Reliability is a particular QoS aspect which forms a specific quality requirement.

Agility and adaptability are essential features of cloud systems that strongly relate to the elastic capabilities. They include on-time reaction to changes in the amount of requests and size of resources, but also adaptation to changes in the environmental conditions that, for example, require different types of resources, different quality or different routes. Implicitly, agility and adaptability require resources (or at least their management) to be autonomic and have to enable them to provide self-* capabilities.

Availability of services and data is an essential capability of cloud systems and was actually one of the core aspects that gave rise to clouds in the first instance. It lies in the ability to introduce redundancy for services and data so failures can be masked transparently. Fault tolerance also requires the ability to introduce new redundancy (e.g. previously failed or fresh nodes) in an online manner, non-intrusively (without a significant performance penalty). With increasing concurrent access, availability is particularly achieved through replication of data / services and distributing them across different resources to achieve load balancing. This can be regarded as the original essence of scalability in cloud systems.
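A horizontal-scaling decision of the kind described under elasticity can be expressed as a small policy function. The per-instance capacity, minimum, and maximum below are illustrative assumptions.

    # Sketch of a horizontal elasticity policy: instance count follows request load.
    import math

    def desired_instances(requests_per_second, capacity_per_instance=100,
                          minimum=1, maximum=50):
        needed = math.ceil(requests_per_second / capacity_per_instance)
        return max(minimum, min(maximum, needed))    # scale out and back within bounds

    for load in (30, 250, 4000, 9000):
        print(load, "req/s ->", desired_instances(load), "instances")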

2. ECONOMIC ASPECTS
In order to allow for economic considerations, cloud systems should help in realising the following aspects:

Cost reduction is one of the first concerns when building up a cloud system that can adapt to changing consumer behaviour and reduce the cost of infrastructure maintenance and acquisition. Scalability and pay-per-use are essential aspects of this issue. Notably, setting up a cloud system typically entails additional costs, be it by adapting the business logic to the cloud-host-specific interfaces or by enhancing the local infrastructure to be cloud-ready. See also return on investment below.

Pay per use. The capability to build up cost according to the actual consumption of resources is a relevant feature of cloud systems. Pay per use strongly relates to quality of service support, where specific requirements to be met by the system, and hence to be paid for, can be specified. One of the key economic drivers for the current level of interest in cloud computing is the structural change in this domain. By moving from the usual capital upfront investment model to an operational expense, cloud computing promises to enable especially SMEs and entrepreneurs to accelerate the development and adoption of innovative solutions.

Improved time to market is essential in particular for small to medium enterprises that want to sell their services quickly and easily with little delay caused by acquiring and setting up the infrastructure, in particular in a scope compatible and competitive with larger industries. Larger enterprises need to be able to publish new capabilities with little overhead to remain competitive. Clouds can support this by providing infrastructures, potentially dedicated to specific use cases, that take over essential capabilities to support easy provisioning and thus reduce time to market.

Return on investment (ROI) is essential for all investors and cannot always be guaranteed; in fact some cloud systems currently fail this aspect. Employing a cloud system must ensure that the cost and effort invested in it is outweighed by its benefits to be commercially viable; this may entail direct (e.g. more customers) and indirect (e.g. benefits from advertisements) ROI. Outsourcing resources versus increasing the local infrastructure and employing (private) cloud technologies therefore need to be weighed against each other and critical cut-off points identified.

Turning CAPEX into OPEX is an implicit, and much argued, characteristic of cloud systems, as the actual cost benefit (cf. ROI) is not always clear (see e.g. [9]). Capital expenditure (CAPEX) is required to build up a local infrastructure, but by outsourcing computational resources to cloud systems on demand and at scale, a company will instead incur operational expenditure (OPEX) for the provisioning of its capabilities, as it will acquire and use the resources according to operational need.

Going green is relevant not only to reduce the additional costs of energy consumption, but also to reduce the carbon footprint. Whilst the carbon emission of individual machines can be estimated quite well, this information is actually taken little into consideration when scaling systems up. Clouds principally allow reducing the consumption of unused resources (downscaling). In addition, up-scaling should be carefully balanced not only against cost, but also against carbon emission issues. Note that beyond software stack aspects, plenty of Green IT issues are subject to development on the hardware level.
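A rough worked example of the CAPEX-versus-OPEX cut-off point mentioned above: with the hypothetical figures below, the pay-per-use model is cheaper over short horizons, while the local investment pays off over longer ones.

    # Hypothetical break-even comparison between local CAPEX and cloud OPEX.
    capex_servers        = 50_000   # upfront purchase of local servers
    capex_monthly_upkeep = 500      # power, space, administration per month
    opex_monthly_cloud   = 2_000    # pay-per-use cloud bill for the same workload

    for months in (12, 24, 36):
        local = capex_servers + capex_monthly_upkeep * months
        cloud = opex_monthly_cloud * months
        cheaper = "cloud" if cloud < local else "local"
        print(f"{months} months: local={local}, cloud={cloud} -> {cheaper} is cheaper")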

3. TECHNOLOGICAL ASPECTS
The main technological challenges that can be identified, and that are commonly associated with cloud systems, are:

Virtualisation is an essential technological characteristic of clouds which hides the technological complexity from the user and enables enhanced flexibility (through aggregation, routing and translation). More concretely, virtualisation supports the following features:
- Ease of use: by hiding the complexity of the infrastructure (including management, configuration etc.), virtualisation can make it easier for the user to develop new applications, and it also reduces the overhead of controlling the system.
- Infrastructure independency: in principle, virtualisation allows for higher interoperability by making the code platform independent.
- Flexibility and adaptability: by exposing a virtual execution environment, the underlying infrastructure can change more flexibly according to different conditions and requirements (assigning more resources, etc.).
- Location independence: services can be accessed independently of the physical location of the user and the resource.

Multi-tenancy is a highly essential issue in cloud systems, where the location of code and / or data is principally unknown and the same resource may be assigned to multiple users (potentially at the same time). This affects infrastructure resources as well as data / applications / services that are hosted on shared resources but need to be made available in multiple isolated instances. Classically, all information is maintained in separate databases or tables, yet in more complicated cases information may be concurrently altered, even though it is maintained for isolated tenants. Multi-tenancy implies many potential issues, ranging from data protection to legislative issues.

Security, privacy and compliance are obviously essential in all systems dealing with potentially sensitive data and code.

Data management is an essential aspect in particular for storage clouds, where data is flexibly distributed across multiple resources. Implicitly, data consistency needs to be maintained over a wide distribution of replicated data sources. At the same time, the system always needs to be aware of the data location (when replicating across data centres), taking latencies and particularly workload into consideration. As the size of data may change at any time, data management addresses both horizontal and vertical aspects of scalability. Another crucial aspect of data management is the consistency guarantees provided (eventual vs. strong consistency, transactional isolation vs. no isolation, atomic operations over individual data items vs. multiple data items, etc.).

APIs and / or programming enhancements are essential to exploit the cloud features: common programming models require the developer to take care of the scalability and autonomic capabilities him- / herself, whilst a cloud environment provides these features in a fashion that allows the user to leave such management to the system.

Metering of any kind of resource and service consumption is essential in order to offer elastic pricing, charging and billing. It is therefore a pre-condition for the elasticity of clouds.

Tools are generally necessary to support development, adaptation and usage of cloud services.

RELATED AREAS
It has been noted that the cloud concept is strongly related to many other initiatives in the area of the Future Internet, such as Software as a Service and Service Oriented Architecture. New concepts and terminologies often bear the risk that they seemingly supersede preceding work and thus require a fresh start, where plenty of the existing results are lost and essential work is repeated unnecessarily. In order to reduce this risk, this section provides a quick summary of the main related areas and their potential impact on further cloud developments.

1. INTERNET OF SERVICES
Service-based application provisioning is part of the Future Internet as such, and therefore a similar statement applies to cloud and the Internet of Services as to cloud and the Future Internet. Whilst the cloud concept foresees essential support for service provisioning (making services scalable, providing a simple API for development etc.), its main focus does not primarily rest on service provisioning. As detailed, cloud systems are particularly concerned with providing an infrastructure on which any type of service can be executed with enhanced features. Clouds can therefore be regarded as an enabler for enhanced features of large-scale service provisioning. Much research was invested into providing base capabilities for service provisioning; accordingly, capabilities that overlap with cloud system features can be easily exploited for cloud infrastructures.

2. INTERNET OF THINGS
It is up for debate whether the Internet of Things is related to cloud systems at all: whilst the Internet of Things will certainly have to deal with issues related to elasticity, reliability, data management etc., there is an implicit assumption that resources in cloud computing are of a type that can host and / or process data, in particular storage and processors that can form a computational unit (a virtual processing platform). However, specialised clouds may, for example, integrate dedicated sensors to provide enhanced capabilities, and the issues related to reliability of data streams etc. are principally independent of the type of data source. Though sensors as yet do not pose essential scalability issues, metering of resources will already require some degree of sensor information integration into the cloud. Clouds may furthermore offer vital support to the Internet of Things, in order to deal with a flexible amount of data originating from the diversity of sensors and things. Similarly, cloud concepts for scalability and elasticity may be of interest to the Internet of Things in order to better cope with dynamically scaling data streams. Overall, the Internet of Things may profit from cloud systems, but there is no direct relationship between the two areas. There are, however, contact points that should not be disregarded. Data management and interfaces between sensors and cloud systems therefore show commonalities.

3. THE GRID
There is an ongoing confusion about the relationship between Grids and Clouds [17], sometimes seeing Grids as on top of Clouds, vice versa, or even identical. More surprisingly, even elaborate comparisons still have different views on what the Grid is in the first instance, thus making the comparison cumbersome. Indeed most ambiguities can be quickly resolved if the underlying concept of Grids is examined first: just like Clouds, Grid is primarily a concept rather than a technology, thus leading to many potential misunderstandings between individual communities.
With respect to research carried out in the Grid over the last years, it is therefore recommendable to distinguish (at least) between (1) Resource Grids, including in particular Grid Computing, and (2) eBusiness Grids, which centre mainly on distributed Virtual Organizations and are more closely related to Service Oriented Architectures (see below). Note that there may be combinations between the two, e.g. when capabilities of the eBusiness Grids are applied to commercial resource provisioning, but this has little impact on the assessment below.

Resource Grids try to make resources - such as computational devices and storage - locally available in a fashion that is transparent to the user. The main focus thereby lies on availability rather than scalability, in particular rather than dynamic scalability. In this context we may have to distinguish between HPC Grids, such as EGEE, which select and provide access to (single) HPC resources, as opposed to distributed computing Grids (cf. Service Oriented Architecture below), which also include P2P-like scalability - in other words, the more resources are available, the more code instances are deployed and executed. Replication capabilities may be applied to ensure reliability, though this is not an intrinsic capability of computational Grids in particular. Even though such Grid middleware(s) offer manageability interfaces, they typically act on a layer on top of the actual resources and thus rarely virtualise the hardware, but rather the computing resource as a whole (i.e. not on the IaaS level).

Overall, Resource Grids do address similar issues to Cloud Systems, yet typically on a different layer with a different focus - as such, Grids generally do not cater for horizontal and vertical elasticity. More important, though, is the strong conceptual overlap between the issues addressed by Grids and Clouds, which allows re-use of concepts and architectures, but also of parts of the technology. Specific shared concepts:

- Check-pointing

eBusiness Grids share their essential goals with Service Oriented Architecture, though the specific focus rests on the integration of existing services so as to build up new functionalities, and to enhance these services with business-specific capabilities. The eBusiness (or here, Virtual Organization) approach derives in particular from the distributed computing aspect of Grids, where parts of the overall logic are located at different sites. The typical Grid middleware thereby focuses mostly on achieving reliability of the overall execution through on-the-fly replacement and (re)integration. But eBusiness Grids also explore the specific requirements for commercial employment of service consumption and provisioning - even though this is generally considered an aspect more related to Service Oriented Architectures than to Grids.

Again, eBusiness Grids and Cloud Systems share common concepts and thus basic technological approaches. In particular, with the underlying SOA-based structure, capabilities may be exposed and integrated as stand-alone services, thus supporting the re-use aspect. Specific shared concepts:

- Pay-per-use / Payment models

- Self-management

It is worth noting that the comparison here is with deployed Grids. The original Grid concept had a vision of elasticity, virtualization and accessibility [48] [49] not unlike that claimed for the Clouds vision.

Security Architectures for Cloud Computing


Moving computing into the Cloud makes computer processing much more convenient for users, but it also presents them with new security problems concerning safety and reliability. To solve these problems, service providers must establish and provide security architectures for Cloud computing. This section describes domestic and international trends in security requirements for Cloud computing, along with security architectures proposed by Fujitsu, such as access control, authentication and identity (ID) management, and security visualization.

Security problems in Cloud computing

Users feel a sense of security and reliability when they understand exactly how a process is functioning and running. Although Cloud computing offers great user convenience by freeing users from the need to understand processing details, it forces them to trust the Cloud services provider, which worries many users. In today's market, awareness about Cloud computing problems is heavily weighted toward security and reliability problems. For example, a survey conducted by Fujitsu on problems in Cloud computing from the customer viewpoint (Figure 1) revealed that security, stable operation, and a support system - that is, safety and reliability - ranked highest among user concerns. Given that in Cloud computing the information technology (IT) system is invisible to the user, it is understandable that customers strongly want their information to be fully protected and services to be provided stably. The following concerns, in particular, are commonly raised by customers with regard to security:

1) Within the same data center, there are some cases in which information belonging to more than one customer resides on the same computer. In such a case, will the different sets of information be appropriately isolated?

2) Should we be concerned that operations in a data center might lead to information leakage or data corruption caused, for example, by one customer's information being mistaken for another's?

3) Since the system platform of a Cloud services provider is shared by a wide variety of customer environments, couldn't reliability be a problem? For example, if a malicious program such as a virus were to penetrate the service, mightn't all the environments using that service be affected?

4) When multiple Cloud services are used at the same time to perform work involving the linking of tasks between those services, can service reliability be assured?

Access control
The most outstanding feature of a Cloud-computing platform is across-the-board virtualization. The virtualization of each system level leads to the flexible system construction and operation essential to Cloud computing. In Fujitsu's Cloud services platform, called the Trusted-Service Platform, the network, operating-system, and data layers feature a logical separation of computing environments through advanced virtualization technology established, for example, by METI's secure platform project. This logical separation by virtualization achieves the same level of security as physical separation of computing environments (Figure 2). To ensure sufficient reliability, especially in the virtual-server layer, which is the focus of virtualization, source-code reviews of the virtualization software are conducted within Fujitsu. Moreover, through the combination of virtualization with more robust authentication of Cloud-computing clients and the addition of key functions such as ones for visualizing access activities, it has become possible to detect and prevent access-control problems and attack schemes and to create more effective security measures.
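The logical separation described above ultimately has to surface as tenant-scoped access decisions. The following is a minimal, generic sketch of such a check (not Fujitsu's implementation): a request is allowed only when the requesting tenant owns the resource. Resource identifiers and tenant names are hypothetical.

```python
class TenantIsolationError(Exception):
    pass

# Hypothetical resource catalogue: resource id -> owning tenant.
RESOURCE_OWNER = {"vm-101": "tenant-a", "vol-55": "tenant-b"}

def read_resource(resource_id: str, requesting_tenant: str) -> str:
    """Allow access only when the requesting tenant owns the resource."""
    owner = RESOURCE_OWNER.get(resource_id)
    if owner is None:
        raise KeyError(f"unknown resource {resource_id}")
    if owner != requesting_tenant:
        # In a multi-tenant platform this event would also be logged for audit / visualization.
        raise TenantIsolationError(f"{requesting_tenant} may not access {resource_id}")
    return f"contents of {resource_id}"

print(read_resource("vm-101", "tenant-a"))   # permitted
# read_resource("vm-101", "tenant-b")        # would raise TenantIsolationError
```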

Authentication and ID management

Proper authentication of users or user environments such as a client computer is basic to access control and other IT security functions. It is an essential technology for Cloud-computing environments, in which connections to external environments are common and risks are high. In Fujitsu's Cloud computing, there are plans to provide various options to fortify traditional authentication based on an identifier (ID) / password format. In one-time password authentication, for example, the user enters a temporary password displayed on a dedicated card or on a mobile phone into a field on a Web screen as authentication information (a minimal sketch of such a mechanism is given at the end of this section). This mechanism prevents the reuse of a password even if it should leak for some reason during transmission, and therefore makes authentication significantly safer. In addition, the use of device authentication technology based on digital certificates or other means can provide much safer authentication than password-based authentication, which is relatively easy to crack through guessing, leakage, etc.

The management of user information when a user operates multiple systems through Cloud computing is also a major issue. So-called ghost IDs - leftover IDs of users who have lost their usage rights - and IDs that have been given inappropriate rights because of management oversights are a problem in terms of not only security but also corporate internal controls. To solve this problem, one needs a mechanism for identity management common to multiple systems. In Fujitsu's Cloud computing, there are plans to provide customers with an ID management platform based on open ID management frameworks such as the Security Assertion Markup Language (SAML) and WS-Federation.

Security visualization

A characteristic of Cloud computing is that unnecessary details are invisible. However, this is exactly why necessary things must be clearly visible to reassure customers and instill confidence in Cloud computing. Fujitsu is taking various approaches to security visualization. For example, it developed a security dashboard in 2009 for visualizing security conditions within the Fujitsu Group. This dashboard has helped to improve security governance. Also in the same year, Fujitsu began to provide an information-security visualization service to enable customers to visualize the efficiency and cost-effectiveness of information-security measures.

In 2010, Fujitsu will begin providing a security monitoring service using the ArcSight monitoring platform adopted in the USA and elsewhere in the world. This platform gathers security-related information from a customer's various business systems on the Cloud, manages that information in a unified manner, and provides value-added reports from the viewpoints of information-security governance, internal controls, and the effects of security measures. Deploying such a general-purpose information-gathering platform can improve the efficiency of security management and, by extension, the efficiency of internal controls and corporate risk management in the Cloud-computing era. This service can also be used to efficiently and safely outsource the managing and archiving of corporate and organizational logs that require a considerable amount of storage. In this way, the service can reduce management costs and provide thorough maintenance of log data. The work of recording and monitoring security-related activities in the Fujitsu Cloud is performed by a special organization that is independent of the Fujitsu department providing Cloud services. This scheme allows Cloud computing security to be assessed independently of the service business (Figure 3).
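The one-time password mechanism mentioned in the authentication subsection can be illustrated with a time-based one-time password along the lines of RFC 6238. This is a generic sketch using only the Python standard library, not the vendor's actual implementation; the shared secret below is a placeholder, not a real credential.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238 style): HMAC-SHA1 over the current time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval              # number of elapsed time steps
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# The same shared secret on the server and on the user's card / phone yields the same
# short-lived code, so an intercepted password is useless once the interval expires.
shared_secret = "JBSWY3DPEHPK3PXP"   # example base32 secret for illustration only
print(totp(shared_secret))
```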

GAPS & OPEN AREAS


There is no full-scale middleware in existence which commonly addresses all cloud capabilities. What is more, not all capabilities can as yet be fulfilled to the necessary extent, even though an essential basis has been provided from both the commercial and the academic side. The current set of capabilities fulfills the requirements to realise simple cloud systems (as was to be expected, given their availability on the market). The particular issue of interest is therefore how far the available support fulfills the expectations towards cloud systems in their various appearances and use cases. The main gaps that can be identified relate to the following aspects:

1. TECHNICAL GAPS

Manageability and Self-*

Cloud systems focus on intelligent resource management so as to ensure the availability of services through their replication and distribution. In principle, this ensures that the amount of resources consumed per service / application reflects the degree of consumption, such as access through users, size of data etc. Whilst most cloud systems provide the main features related to elasticity and availability (see Table 1 and Table 4 above), the management features are nowhere near optimal: resource usage is relevant not only for cost reduction, but also for meeting the green agenda and for ensuring availability when resources are limited.

Management features are mostly use-case specific at the moment and are generally better at managing scale-up (e.g. when bandwidth usage exceeds a threshold) than scale-down (mostly because the duration of inactivity is unpredictable). There is little general support, in particular for new providers, with respect to how to manage resources, when to scale, how to meet the requirements of the user regarding quality of service etc.

This also involves self-detection of failures, of resource shortage, but also of idle capacity etc., and taking appropriate actions, in particular in hybrid environments where management has to act across different resource infrastructures and can generally not be centralized. A major criterion thereby consists in improving the performance of management. Obviously, interoperability plays a major role in distributed management across resource environments, but so does the capability to adapt to changes in the environment: this does not only apply to customer requirements (see above), but also to technological restrictions, such as those related to relevant libraries (IaaS & SaaS) or engines (PaaS). Adaptability and interoperability are thereby strongly linked to each other.

Management and manageability play a major role in many of the core cloud characteristics, e.g. Elasticity, Quality of Service, Adaptability, Cost Reduction, Going Green etc., but also implicitly Data Management and Programming Models.
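The scale-up / scale-down asymmetry just described can be made concrete with a small capacity-planning rule: scaling up reacts immediately to a load threshold, while scaling down additionally requires a sustained idle period because future demand is unknown. The thresholds and limits below are illustrative assumptions, not recommended values.

```python
def plan_capacity(instances: int, cpu_load: float, idle_minutes: int,
                  scale_up_at: float = 0.80, scale_down_at: float = 0.30,
                  min_idle_before_scale_down: int = 15,
                  min_instances: int = 1, max_instances: int = 20) -> int:
    """Return the desired instance count for the next interval.

    Scale-up reacts immediately to a load threshold; scale-down additionally
    requires a sustained idle period, since the duration of inactivity is unpredictable.
    """
    if cpu_load > scale_up_at and instances < max_instances:
        return instances + 1
    if (cpu_load < scale_down_at
            and idle_minutes >= min_idle_before_scale_down
            and instances > min_instances):
        return instances - 1
    return instances

print(plan_capacity(instances=4, cpu_load=0.9, idle_minutes=0))    # -> 5 (scale up)
print(plan_capacity(instances=4, cpu_load=0.2, idle_minutes=5))    # -> 4 (hold: not idle long enough)
print(plan_capacity(instances=4, cpu_load=0.2, idle_minutes=30))   # -> 3 (scale down)
```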

Data Management
The amount of data available on the web, as well as the throughput produced by applications, sensors etc., increases faster than storage and, in particular, bandwidth does. There is a strong tendency to host more and more public data sets in cloud infrastructures, so that improved means of managing and structuring the size of data will be necessary to deal with future requirements. Hence, in particular storage clouds should be able to cater for such means in order to maintain the availability of data and thus address quality requirements etc.

Not only data size poses a problem for cloud systems, but more importantly consistency maintenance, in particular when scaling up. As data may be shared between tenants partially or completely - i.e. either because the whole database is replicated or because a subset is subject to concurrent access (such as state information) - maintaining consistency over a potentially unlimited number of data instances becomes more and more important and difficult. One of the main research gaps and efforts in the area is how to provide truly transactional guarantees for software stacks (e.g. multi-tier architectures such as SAP NetWeaver, Microsoft .NET or IBM WebSphere) that provide large scalability (hundreds of nodes) without resorting to data partitioning or relaxed consistency (such as eventual consistency). Clearly, ACID two-phase commit transactions will not work (timing), and compensating transactions will be very complex. Worse, the use of caching on distributed database systems means we have to validate cache coherency.

At the moment, segmentation and distribution of data occurs more or less uncontrolled, leading not only to efficiency issues and (re)integration problems, but potentially also to clashes with legislation. In order to compensate for this, further control capabilities over distribution in the infrastructure are required that allow for context analysis (e.g. location) and QoS fulfilment (e.g. connectivity) - an aspect that is hardly addressed by commercial and / or research approaches so far.

As most data on the web is unstructured and heterogeneous due to the variety of data sources, sensible segmentation and usage information requires new forms of annotation. What is more, consistency maintenance strategies may vary between data formats, which can only be compensated by maintaining meta-information about usage and structure. Also, with the proprietary structures of individual cloud systems, moving data (and / or services) between these infrastructures is sometimes complicated, necessitating new standards to improve and guarantee long-term interoperability. Work on the eXternal Data Representation (XDR) standard for loosely coupled systems will play an important role in this context.

Cloud resources are potentially shared between multiple tenants; this does not only apply to storage (and CPUs, see below), but potentially also to data (where e.g. a database is shared between multiple users), so that changes can occur not only at different locations, but also in a concurrent fashion. This necessitates improved means to deal with multi-tenancy in distributed data systems. Classical data management systems break down with large numbers of nodes, even if clustered in a cloud. The latency of accessing disks means that classical transaction handling (two-phase commit) is unlikely to be sustainable if it is necessary to maintain an integral part of the system's global state. Efficiency efforts (such as caching) compound the problem, requiring cache coherency across a very large number of nodes. As current clouds typically use either centralized Storage Area Networks (e.g. Amazon EBS), unshared local disks (e.g. Amazon AMI) or cluster file systems (e.g. GFS; but for files, not entire disk images), commodity storage (such as desktop PCs) can currently not be easily integrated into cloud storage, even though Live Mesh already allows for the synchronization of local storage in / with the cloud.

In order to address these issues, the actual usage behaviour with respect to file and data access in cloud systems needs to be assessed more carefully. Only a few such studies are currently available, but the resulting information would help identify the typical distribution, access, consistency etc. requirements of the individual use cases.
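The trade-off between strong and eventual consistency over replicated data can be summarised with the classic quorum condition: with N replicas, a read quorum R and a write quorum W, overlapping quorums (R + W > N) guarantee that a read observes the latest acknowledged write, while smaller quorums trade consistency for latency and availability. A minimal sketch:

```python
def is_strongly_consistent(n_replicas: int, read_quorum: int, write_quorum: int) -> bool:
    """Overlapping quorums guarantee that a read sees the latest acknowledged write."""
    return read_quorum + write_quorum > n_replicas

# With 5 replicas:
print(is_strongly_consistent(5, 3, 3))   # True  -> strong consistency, higher latency
print(is_strongly_consistent(5, 1, 1))   # False -> eventual consistency, lowest latency
print(is_strongly_consistent(5, 1, 5))   # True  -> write-all / read-one
```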

Privacy & Security


Strongly related to the issues concerning legislation and data distribution is the concern of data protection and other potential security holes arising from the fact that the resources are shared between multiple tenants and the location of the resources is potentially unknown. In particular, sensitive data and protected applications are critical in outsourcing scenarios. In some use cases, the mere information that a certain industry is using the infrastructure at all is enough for industrial espionage.

Whilst essential security aspects are addressed by most tools, additional issues apply through the specifics of cloud systems, in particular related to the replication and distribution of data in potentially worldwide resource infrastructures. Whilst the data should be protected in a form that addresses legislative issues with respect to data location, it should at the same time still be manageable by the system.

In addition, the many usages of cloud systems and the variety of cloud types imply different security models and requirements on the part of the user. As such, classical authentication models may be insufficient to distinguish between the Aggregators / Vendors and the actual User, in particular in IaaS cloud systems, where the computational image may host services that are made accessible to users. In particular in cases of aggregation and resale of cloud systems, the mix of security mechanisms may not only lead to problems of compatibility, but may also lead to the user distrusting the model due to a lack of insight.

All in all, new security governance models & processes are required that cater for the specific issues arising from the cloud model.

Federation & Interoperability


One of the most pressing issues with respect to cloud computing is the current difference between the individual vendor approaches, and the implicit lack of interoperability. Whilst a distributed data environment (IaaS) cannot be easily moved to any platform provider (PaaS) and may even cause problems when used by a specific service (SaaS), it is also almost impossible to move a service / image / environment between providers on the same level (e.g. from Force.com to Amazon).

This issue is mostly caused by the proprietary data structures employed by each provider individually. The history of web service standardisation has shown that specifications may easily diverge rather than converge if too many parallel standardisation strands are pursued. Therefore, current standardisation approaches in the web service domain may prove insufficient to deal with the complexity of the problem, as standardisation tends to be slow and to diverge between multiple standardization bodies. Also, interoperability is typically driven more strongly by de facto standards than by de jure standardization efforts. In particular, cloud computing - with its strong industrial drivers and the initial uptake already in place - has a strong tendency to impel de facto standards (see also vendor lock-in). Traditionally, the US - with an emphasis on software innovation - favours a voluntary, market-driven approach to standardisation. Europe, with a strong track record in telecom standardisation, seems to favour an upfront approach, albeit mostly in hardware-related fields.

While innovations between domains usually benefit from an early focus on interoperability, the quest for disruptive innovations within domains benefits from a lower focus on interoperability requirements in this early phase. Too early a focus on interoperability and standardization issues may therefore be disruptive, as e.g. long-term requirements and structures cannot be assessed to their full extent today, and a bad specification may hinder interoperable development accordingly. A particular focus must hence rest on atomic, minimal, composable and adaptable standards. While nobody questions the usefulness and benefit of interoperability, it should also be noted that, with respect to the European research agenda, careful consideration is necessary as to in which fields and when those steps provide the biggest benefit.

New policies and approaches may therefore be needed to ensure convergence and thus achieve real interoperability rather than adding to the issue of divergence.

Virtualisation, Elasticity and Adaptability

Though virtualisation techniques have improved considerably over recent years, additional issues arise with the advent of cloud systems that have not been fully elaborated before, in particular related to the elasticity of the system (horizontal and vertical up- and down-scaling), interoperability, and the manageability & control of the resources. Changes in the configuration of the service / data need to be reflected by the setup of the underlying resources (according to their capabilities and capacities), but changes in the infrastructure also need to be exploited by the virtual environment without impacting on the hosted capabilities. For example, if another CPU is added to a virtual machine, the running code should make use of the additional resource without having to be restarted or even adapted. This obviously relates to the issue of programming models and resource control (cf. below); it should be noted in this context that actual resource integration in virtual machines is less of an issue than developing applications that actually exploit such dynamic changes.

To provide efficient elasticity that is capable of respecting the QoS and green requirements listed above, new, advanced scheduling mechanisms are required that also take the multi-tenancy aspect into consideration. For example, it may be more sensible to delay execution if resources will be available shortly, so as to avoid the employment of currently powered-down resources etc.

Virtualisation (and to a degree scheduling) thereby have to take the human factor into consideration: the degree of interaction with cloud systems, as well as the increasing connectivity, will require that the systems are capable of integrating humans not only as users, but also as an extended resource that can provide services, capabilities and data.

Currently, little support is available for cross-platform execution and migration, which global cloud structures will require (with the exception of specialized niche cloud systems). In particular, the movement of (parts of) an application between cloud structures (e.g. from a private cloud to a public cloud and back) is a key issue that is not supported yet.

All these capabilities will require a stronger self-* awareness of the resources and the virtual environment involved, so as to improve the adaptability to changes in the environment and thus maintain boundary conditions (such as QoS and business policies) - and, of course, implicitly new models to develop corresponding applications and tools that can easily exploit these features.
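What "exploiting a hot-added CPU without restarting" could look like at the application level is sketched below, under the assumption that the guest OS exposes the new vCPU through os.cpu_count(): the process periodically re-reads the visible CPU count and resizes its worker pool. The workload and resizing strategy are hypothetical, not a prescribed pattern.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def run_adaptively(tasks):
    """Process callables in batches, resizing the pool if the visible CPU count changes."""
    pool_size = os.cpu_count() or 1
    executor = ThreadPoolExecutor(max_workers=pool_size)
    try:
        while tasks:
            batch, tasks = tasks[:pool_size], tasks[pool_size:]
            list(executor.map(lambda task: task(), batch))   # run one batch of work
            detected = os.cpu_count() or 1
            if detected != pool_size:                        # a vCPU was hot-added or removed
                executor.shutdown(wait=True)
                pool_size, executor = detected, ThreadPoolExecutor(max_workers=detected)
    finally:
        executor.shutdown(wait=True)

# Example: 20 trivial tasks; on a VM whose vCPU count grows, later batches get wider.
run_adaptively([lambda i=i: i * i for i in range(20)])
```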

APIs, Programming Models & Resource Control


Cloud virtual machines tend to be built for fixed resource environments, thus supporting horizontal scalability (instance replication) better than vertical scalability (changes in the resource structure); however, future systems will have to show more flexibility in this respect, to adapt better to requirements, capabilities and, of course, green issues. In addition, more fine-grained control over e.g. the distribution of data must be granted to the developer in order to address legislation issues, but also to exploit specific code requirements.

Cloud systems will thus face issues similar to those HPC has faced before with respect to the description of connectivity requirements etc., but also with respect to ensuring reliability of execution, which is still a major obstacle in distributed systems. At the same time, the model must be simple enough to be employed by average developers and / or business users.

Cloud systems provide enhanced capabilities and features, ranging from dynamically scalable applications and data, over controlled distribution, to the integration of all types of resources (including humans). In order to exploit these features during the development of enhanced applications and services, the corresponding interfaces and features need to be provided in an easy and intuitive fashion for common users, but should also allow extended control for more advanced users.

In order to facilitate such enhanced control features, the cloud system needs to provide new means to manage resources and infrastructure, potentially taking quality of service, the green agenda and other customer specifications into consideration. This, however, implies that future cloud systems have to discard the classical layered model (see also [29]). Development support for new "cloudified" applications has to ensure movability of application (segments) across the network, enabling a more distributed execution and communication model within and between applications. Since cloud applications are likely to be used by many more tenants and users than non-cloud applications (the long tail), customizability must be considered from the outset.

This applies equally to distributed code and to distributed data. Data is expected to become exceedingly large (see Data Management above) - hence an interesting approach in cloud-system code management consists in moving the software to the data, rather than the other way round, since most code occupies less space than the data it processes. However, this is intrinsically against the current trend for clouds to be provided in remote data centres with code and data co-existing.
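The "move the software to the data" idea can be illustrated with a toy map-style dispatch: the (small) function travels to wherever each partition lives and only compact partial results move back, instead of shipping the (large) data to the code. Node and partition names below are hypothetical, and the remote execution is only simulated by local calls.

```python
# Hypothetical placement of data partitions on nodes; in a real system this map
# would come from the storage layer's metadata service.
PARTITIONS = {
    "node-1": [3, 5, 8],
    "node-2": [13, 21, 34],
    "node-3": [55, 89],
}

def ship_function_to_data(fn):
    """Apply fn where each partition resides; move only the small results back."""
    partial_results = {}
    for node, partition in PARTITIONS.items():
        # In a real cloud this call would execute on `node`; only fn and the
        # per-partition result cross the network, not the (much larger) data.
        partial_results[node] = fn(partition)
    return partial_results

print(ship_function_to_data(sum))                 # {'node-1': 16, 'node-2': 68, 'node-3': 144}
print(sum(ship_function_to_data(sum).values()))   # global aggregate: 228
```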

2. NON-TECHNICAL GAPS

Legislation, Government & Policies


Not only data (cf. above) is subject to specific legislation that may depend on the location where it is currently hosted, but also applications and services, in particular regarding their licensing models. Legislation issues arise from the fact that different countries put forward different laws regarding which kinds of data are allowed, but also regarding which data may be hosted where. With the cloud in principle hosting data / code anywhere within the distributed infrastructure, i.e. potentially anywhere in the world, new legislative models have to be initiated, and / or new means to handle legislative constraints during data distribution.

Related to that, governance of clouds needs to be more open to the actual user, who needs to be able to specify and enforce his / her requirements better (see also resource control above), such as data privacy issues, issues caused by business (process) requirements, and similar. Governance solutions could also help to select only those vendors providing open-source solutions, thus avoiding vendor lock-in.

Clouds generally benefit from economic globalisation, so that providers (and implicitly users) can make use of cheaper resources in other countries etc. Hence, issues similar to those of the global market apply to clouds, and new policies are required to deal with jurisdiction, data sovereignty and support for law enforcement agencies; new cross-country regulations have to be enacted etc.
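One conceivable way to handle such legislative constraints during data distribution is a placement policy that filters candidate regions by the jurisdictions in which a given class of data may be stored. The classifications, regions and rules below are purely illustrative assumptions, not actual legal requirements.

```python
# Hypothetical policy: which jurisdictions a data classification may be stored in.
ALLOWED_JURISDICTIONS = {
    "personal-data-jp": {"JP"},            # e.g. personal data that must stay in Japan
    "eu-customer-records": {"EU"},         # data restricted to EU member states
    "public-content": {"JP", "EU", "US"},  # no residency restriction
}

REGION_JURISDICTION = {"tokyo-1": "JP", "frankfurt-1": "EU", "virginia-1": "US"}

def placement_candidates(data_class: str):
    """Return the regions where this class of data may lawfully be replicated."""
    allowed = ALLOWED_JURISDICTIONS.get(data_class, set())
    return [region for region, jurisdiction in REGION_JURISDICTION.items()
            if jurisdiction in allowed]

print(placement_candidates("personal-data-jp"))   # ['tokyo-1']
print(placement_candidates("public-content"))     # ['tokyo-1', 'frankfurt-1', 'virginia-1']
```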

Economic Concerns
In order to provide a cloud infrastructure, a comparatively high amount of resources needs to be available, which implies a considerably high start-up investment. As it is almost impossible to estimate the uptake, and hence the profit, of services offered to customers, it remains difficult to assess the return on investment and hence the sensible amount of investment to maximise the profit. With the cloud outsourcing principle being comparatively new on the market, new knowledge about business models, the market situation, how to extract value and under what conditions etc. is required - in other words, new expert systems and best-use recommendations are needed.

This also includes issues related to the green agenda, namely policies, based on dedicated benchmarks, for when to reduce resource usage and / or switch between different power settings etc. This implies new scheduling mechanisms that weigh green against business (profit & quality) issues. In a cloud environment it would be possible to improve green credentials by utilising more efficient processors and memory. A few large data centres with clouds are likely to be greener than millions of smaller, but already large, data centres. Fan et al. argued that up to 50% savings in energy consumption are possible for data warehouses [30]. Notably, from a global perspective, sharing resources may be greener than down-powering idle resources, if this reduces their production (and hence the according carbon footprint) in the first instance.

In general, business control is in principle possible, yet the linkage between the technical and the economic perspective is still weak, and hence the maintenance of e.g. service quality respecting the economic descriptions still requires improvement.

An indirect economic issue that will have to be solved through e.g. means for improved interoperability (see below) consists in the current tendency towards vendor lock-in. Most vendors want to maintain this status in order to secure their customer base, yet with scope and competition growing in the near future, it is to be expected that even larger vendors will adopt more interoperable approaches. As a side note, it should be mentioned that some major key players are already basing their systems on more standards-based approaches, such as MS Azure.

State of Public Sector Cloud Computing


State of Michigan
Announced Project: MiCloud (Department of Technology, Management and Budget)

In March 2010, Michigan's Department of Information Technology consolidated with the State's Department of Management and Budget. The new Department of Technology, Management & Budget (DTMB) is now building a full array of services to provide across governments and the private sector. Michigan is moving toward leveraging cloud-based solutions to provide clients with rapid, secure, and lower-cost services through a program dubbed MiCloud. One key area of current action is the State's strategic investment in storage virtualization technologies, expected to go live in October 2010. Michigan is actively piloting MiCloud Storage for Users and Storage for Servers as internal government cloud functions delivered by DTMB. The consumption expectation is more than 250 terabytes in the first year of operation, at a projected storage cost that is 90 percent lower than today's lowest-cost storage tier. MiCloud provides self-service and automated delivery within 10 minutes of submitting an online request. Projected savings were estimated based on migration rates. It is important to note that this low-cost option represents a service alternative that is only appropriate for data that do not require 24x7 availability or real-time, block-level replication.

The State of Michigan's 2010-2014 strategic plan also outlines critical future investments in virtual server hosting and process automation. The State is in the proof-of-concept phase for the MiCloud Hosting for Development and Process Orchestrator functions in the internal government cloud. The hosting-for-development function automates the delivery of virtual servers within 30 minutes of submitting an online request. Michigan will also explore a hybrid cloud to deliver a more complex Application Platform as a Service (APaaS). The process orchestrator function enables agency business users, regardless of IT skill level, to create and test simple process definitions. Business users will be able to publish processes and related forms to the service catalog and, over time, analyze related metrics. Ultimately, the shift to cloud computing will allow Michigan to improve services to citizens and businesses while freeing up scarce capital, staff resources, and IT assets for critical investments.

Department of Defense
Project: Army Experience Center (United States Army)

The Army Experience Center (AEC), located in Philadelphia, PA, is an Army pilot program designed to explore new technologies and techniques that the Army can leverage to improve the efficiency and effectiveness of its marketing and recruiting operations. The AEC uses touch-screen career exploration kiosks, state-of-the-art presentation facilities, community events, virtual reality simulators, and social networking to help potential recruits learn about the Army and make informed decisions about enlisting. The Army required a customer relationship management system that would track personal and electronic engagements with prospects and help recruiting staff manage the recruiting process. The Army's legacy proprietary data system, the Army Recruiting Information Support System (ARISS), was over 10 years old. Despite regular upgrades over the years, it was infeasible to modify ARISS to meet the AEC's requirements, including integration with social networking and other Web 2.0 applications, real-time data access from multiple platforms including handheld devices, the ability to track AEC visitor and engagement data, and integration of marketing and recruiting data. Initial bids from traditional IT vendors to provide the required functionality ranged from $500,000 to over $1 million. Instead, the Army chose a customized version of the cloud-based customer relationship management tool Salesforce.com as its pilot solution to manage recruiting efforts at the Army Experience Center. The Army is piloting this cloud-based solution at an annual cost of $54,000. With the new system, the Army is able to track recruits as they participate in multiple simulations at the Army Experience Center. The solution integrates directly with email and Facebook, allowing recruiters to connect with participants more dynamically after they leave the Army Experience Center. By using Salesforce.com's mobile solution, Army recruiters can access recruit information from anywhere. The Army is currently in the second year of a two-year pilot of the customized Salesforce.com application. Using the cloud-based solution, the Army was able to have fewer recruiters handle the same workload as the five traditional recruiting centers the Army Experience Center replaced. The cloud application has resulted in faster application upgrades, dramatically reduced hardware and IT staff costs, and significantly increased staff productivity.

Customer Scenario
1. Payroll processing in the Cloud
1.1 Applicable Use Case Scenario

1.2 Customer scenario:

In this scenario, two servers were dedicated to payroll processing, a complex and time-consuming process. The organization decided to see how practical it would be to run the payroll process in the cloud. The existing payroll system was architected as a distributed application, so moving it to the cloud was relatively straightforward. The payroll application used an SQL database for processing employee data. Instead of rewriting the application to use a cloud database service, a VM with a database server was deployed. The database server retrieved data from a cloud storage system and constructed relational tables from it. Because of the size of the original (in-house) database, extraction tools were used to select only the information necessary for payroll processing. That extracted information was transferred to a cloud storage service and then used by the database server. The payroll application was deployed to four VMs that run simultaneously; those four VMs work with the VM hosting the database server. The configuration of the payroll application was changed to use the VM hosting the database server; otherwise the application was not changed.

1.3 Customer problem solved:


In the cloud-based version of the application, processing time for the payroll task was reduced by 80%. As an added benefit, the two servers formerly dedicated to processing payroll were freed up for other tasks. Finally, the cloud-based version is much more elastic; that will be a significant advantage as the organization expands.

1.4 Requirements and Capabilities:


The cloud services used were virtual machines and cloud storage (IaaS). The payroll application did not have to be modified; it was simply deployed to the virtual machine. The original application used a relational database. To avoid changing data structures and applications to use a cloud database, a relational database server was deployed into the cloud. The only API used was the S3 cloud storage API.
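The scenario states only that the S3 cloud storage API was used; a minimal sketch of the transfer step with the boto3 client is shown below. The bucket, key and file names are hypothetical, and credentials are assumed to come from the environment or an attached IAM role.

```python
import boto3

s3 = boto3.client("s3")  # credentials come from the environment / IAM role

# Upload the extracted payroll subset so the database VM in the cloud can load it.
s3.upload_file(Filename="payroll_extract.csv",
               Bucket="example-payroll-bucket",              # hypothetical bucket name
               Key="extracts/2024-06/payroll_extract.csv")

# On the database VM: fetch the extract before building the relational tables.
s3.download_file(Bucket="example-payroll-bucket",
                 Key="extracts/2024-06/payroll_extract.csv",
                 Filename="/tmp/payroll_extract.csv")
```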

1.5 Portability Concerns:

The payroll application runs on Fedora and Java 1.5, so it will run without changes on any cloud provider's platform that supports Fedora. Modifying the application to use a different cloud storage provider could be a problem if the other vendor doesn't support the specific S3 APIs used in the payroll process. Finally, changing the application to use a cloud database could be extremely difficult, particularly if it involved moving to a cloud database that does not support the relational model.

2. Logistics and Project Management in the Cloud


2.1 Applicable Use Case Scenario:

2.2 Customer scenario:

A small construction company with approximately 20 administrative employees needed a way to manage its resources, optimize project scheduling and track job costs. The company had very specific requirements that no commonly available system addressed, so it used a combination of Quickbooks and spreadsheets. This system was not elastic and was a huge waste of human resources. The solution to the problem was to build a custom client-side application. All of the business logic resides on the client. Data for the application is served from a Google App Engine (GAE) datastore. The datastore does not enforce any sort of schema other than an RDF graph, although it does host an RDF-OWL ontology. The client uses that ontology to validate data before displaying it to the user or sending it back to the GAE. Data operations are communicated to the datastore using an application-specific RESTful protocol over HTTP. The datastore maintains RDF graphs specific to the applications it is serving within silos managed on the server. Security is implemented separately for each silo, depending on the requirements of the application using a particular silo of data. Using this system, any number of applications can use the datastore without building a new code base for each one. Data was moved into the datastore on GAE from the locally hosted Quickbooks SQL server and the custom spreadsheets using a one-time data migration script that reconciled the data before uploading it to the GAE datastore. The data set was small and easily processed using local resources. The client application maintains a local datastore that contains a subset of the most recent changes to the data. The REST architecture of the application allows HTTP's built-in caching support to automatically propagate changes to the master datastore down to the client. In addition to the performance benefits of using a subset of the data, this design simplifies security: if a client application does not need access to certain fields or records, that portion of the datastore never leaves the server.
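The application-specific RESTful exchange and the reliance on HTTP's built-in caching could look roughly like the sketch below: the client revalidates its local subset with If-None-Match and only downloads the data when the server's ETag has changed. The endpoint URL and payload format are assumptions for illustration, not the company's actual protocol.

```python
import requests

DATASTORE_URL = "https://example-app.appspot.com/api/silo/projects"   # hypothetical endpoint
local_cache = {"etag": None, "body": None}

def fetch_projects():
    """Revalidate the local copy; download the graph only if it changed on the server."""
    headers = {}
    if local_cache["etag"]:
        headers["If-None-Match"] = local_cache["etag"]
    response = requests.get(DATASTORE_URL, headers=headers, timeout=10)
    if response.status_code == 304:            # unchanged: reuse the cached subset
        return local_cache["body"]
    response.raise_for_status()
    local_cache["etag"] = response.headers.get("ETag")
    local_cache["body"] = response.json()
    return local_cache["body"]
```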

2.3 Customer problem solved:


Data was moved from an inefficient system of software and spreadsheet macros into a cloud-based system. The resulting datastore can be used by a wide range of applications, making future development and maintenance much simpler. Although the original application infrastructure is still in use, the applications built on that infrastructure no longer rely on spreadsheets to analyze and manipulate the data. Significant savings will come from the fact that maintaining the spreadsheets will no longer be necessary. In addition, cutting and pasting data by hand is no longer part of the process, removing a tedious task and eliminating a source of errors.

2.4 Requirements and Capabilities:


The cloud service used was Google App Engine, a PaaS implementation that provides database support. The combination of a RESTful API and the cloud datastore made the application more elastic than an application built around a traditional relational database.

2.5 Portability Concerns:

The application runs on the Google App Engine and its BigTable database. BigTable is a sparse, distributed, persistent, multi-dimensional sorted map that achieves elasticity by prioritizing denormalization over normalization. This is a significant difference from most datastores, and it requires a fundamental rethinking of application development. Porting the application to run on top of a more traditional datastore would require major changes to the application's architecture.
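A toy illustration of the "sparse, multi-dimensional sorted map" model and of the denormalization it encourages: each row carries everything a read needs, and row keys are designed so that one prefix scan answers a query without joins. The keys and column names below are invented for the example and do not model the company's real schema.

```python
# Toy model of a sparse, sorted map: row_key -> {"family:qualifier": value}.
# Job data is denormalized into each cost row so no relational join is needed.
table = {
    "job#1042#cost#2010-06-01": {"cost:amount": "1250.00", "job:name": "Riverside remodel"},
    "job#1042#cost#2010-06-15": {"cost:amount": "310.50",  "job:name": "Riverside remodel"},
    "job#2077#cost#2010-06-03": {"cost:amount": "980.00",  "job:name": "Office fit-out"},
}

def prefix_scan(prefix: str):
    """Return rows whose key starts with prefix, in sorted (storage) order."""
    return [(key, table[key]) for key in sorted(table) if key.startswith(prefix)]

# All costs for job 1042 come back from one contiguous scan.
for key, row in prefix_scan("job#1042#cost#"):
    print(key, row["cost:amount"], row["job:name"])
```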

3. Central Government Services in the Cloud


3.1 Applicable Use Case Scenario:

3.2 Customer scenario:

The ministries of the Japanese Government have thousands of servers across their infrastructures. The central government has announced a private Kasumigaseki Cloud environment to provide a secure, centralized infrastructure for hosting government applications. Existing back-office systems, such as payroll, accounting and personnel management, will be virtualized and hosted in the private cloud. Some front-office systems, such as electronic procurement, will be virtualized to a public cloud, but that is outside the scope of this project. The ultimate goal of the project is to reduce the total cost of ownership by eliminating redundant systems and the need for administrators in each ministry.

3.3 Customer problem solved:


The three problems addressed by the Kasumigaseki Cloud are reduced costs, reduced energy consumption and reduced IT staffing needs.

3.4 Requirements and Capabilities:


The cloud infrastructure will be built on a private network provided by Japanese telecommunications companies. Because privacy and security are major concerns, a private cloud is required. It is illegal for many types of personal data to be stored on a server outside of Japan.

3.5 Portability Concerns:

Because the government is building its own private cloud to host its own applications, portability is not a concern. The government has no intention of moving its centralized applications and data from the private cloud to a public one.

Bibliography
References:
- Cloud Computing Use Cases White Paper, Version 4.1

Internet sources:
1. www.google.co.in
2. www.wikipedia.org
3. http://csrc.nist.gov/groups/SNS/cloud-computing/
4. http://iac.dtic.mil/iatac
