
Ensuring Distributed Data Sharing Accountability In The Cloud

ABSTRACT
1.1 PROJECT DESCRIPTION
Cloud computing enables highly scalable services to be easily consumed over the
Internet on an as-needed basis. A major feature of cloud services is that users' data
are usually processed remotely on unknown machines that users do not own or
operate. While enjoying the convenience brought by this new emerging technology,
users' fears of losing control of their own data (particularly financial and health data)
can become a significant barrier to the wide adoption of cloud services.
To address this problem, in this paper we propose a novel, highly decentralized
information accountability framework to keep track of the actual usage of users'
data in the cloud.
In particular, we propose an object-centered approach that encloses our
logging mechanism together with users' data and policies. We leverage the
programmable capabilities of JAR files both to create a dynamic and traveling object,
and to ensure that any access to users' data will trigger authentication and automated
logging local to the JARs. To strengthen users' control, we also provide distributed
auditing mechanisms. We provide extensive experimental studies that demonstrate the
efficiency and effectiveness of the proposed approaches.



PREAMBLE
2.1 GENERAL INTRODUCTION
Cloud computing is the use of computing resources (hardware and software) that are
delivered as a service over a network (typically the Internet). The name comes from
the use of a cloud-shaped symbol as an abstraction for the complex infrastructure it
contains in system diagrams. Cloud computing entrusts remote services with a user's
data, software and computation.


Figure 1.1: Cloud computing
There are many types of public cloud computing:

Infrastructure as a Service (IaaS)

Platform as a Service (PaaS)

Software as a Service (SaaS)

Network as a Service (NaaS)

Storage as a Service (STaaS)

Security as a Service (SECaaS)

Data as a Service (DaaS)

Desktop as a service (DaaS - see above)

Database as a service (DBaaS)

Test environment as a service (TEaaS)

API as a service (APIaaS)

Backend as a service (BaaS)

Integrated development environment as a service (IDEaaS)

Integration platform as a service (IPaaS)


Cloud computing presents a new way to supplement the current consumption and
delivery model for IT services based on the Internet, by providing for dynamically
scalable and often virtualized resources as a service over the Internet. Details of the
services provided are abstracted from the users. Moreover, users may not know the
machines which actually process and host their data. Users also start worrying about
losing control of their own data. The data processed on clouds are often outsourced,
leading to a number of issues related to accountability, including the handling of
personally identifiable information. Such fears are becoming a significant barrier to
the wide adoption of cloud services.
To overcome this problem, we make use of an approach, namely the Cloud
Information Accountability (CIA) framework, based on the notion of information
accountability given by D.J. Weitzner, H. Abelson, T. Berners-Lee, J. Feigenbaum,
J. Hendler, and G.J. Sussman. This approach offers lightweight and powerful
accountability that combines aspects of access control, usage control and authentication.

2.2 About the Project


2.2.1 Conceptual Methodology
Cloud computing presents a new way to supplement the current
consumption and delivery model for IT services based on the Internet, by
providing for dynamically scalable and often virtualized resources as a service
over the Internet. To date, there are a number of notable commercial and
individual cloud computing services, including those of Amazon, Google, Microsoft,
Yahoo, and Salesforce. Details of the services provided are abstracted from
the users, who no longer need to be experts in technology infrastructure.
Moreover, users may not know the machines which actually process and host
their data. While enjoying the convenience brought by this new technology,
users also start worrying about losing control of their own data. The data
processed on clouds are often outsourced, leading to a number of issues
related to accountability, including the handling of personally identifiable
information. Such fears are becoming a significant barrier to the wide
adoption of cloud services.
To allay users' concerns, it is essential to provide an effective
mechanism for users to monitor the usage of their data in the cloud. For
example, users need to be able to ensure that their data are handled according
to the service-level agreements made at the time they sign on for services in
the cloud. Conventional access control approaches developed for closed
domains such as databases and operating systems, or approaches using a
centralized server in distributed environments, are not suitable. First, data
handling can be outsourced by the direct cloud service provider (CSP) to
other entities in the cloud, and these entities can in turn delegate the tasks to
others, and so on. Second, entities are allowed to join and leave the cloud in a
flexible manner. As a result, data handling in the cloud goes through a
complex and dynamic hierarchical service chain which does not exist in
conventional environments.

2.2.2 Working Principle


To overcome the above problems, we propose a novel approach,
namely the Cloud Information Accountability (CIA) framework, based on the
notion of information accountability. Unlike privacy protection technologies
which are built on the hide-it-or-lose-it perspective, information accountability
focuses on keeping the data usage transparent and trackable. Our
proposed CIA framework provides end-to-end accountability in a highly
distributed fashion. One of the main innovative features of the CIA
framework lies in its ability to maintain lightweight and powerful
accountability that combines aspects of access control, usage control and
authentication. By means of the CIA, data owners can track not only whether
or not the service-level agreements are being honored, but also enforce access
and usage control rules as needed. Associated with the accountability feature,
we also develop two distinct modes for auditing: push mode and pull mode.
The push mode refers to logs being periodically sent to the data owner or
stakeholder, while the pull mode refers to an alternative approach whereby the
user (or another authorized party) can retrieve the logs as needed.
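A minimal sketch of how these two auditing modes might be exposed to a data owner is given below; the class and method names are our own illustration, not the framework's actual implementation:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch of the two auditing modes: push delivers pending
    // logs to the owner on a schedule; pull returns them on demand.
    public class AuditModes {
        private final Map<String, List<String>> logsByOwner =
                new HashMap<String, List<String>>();

        public void record(String ownerId, String logEntry) {
            List<String> logs = logsByOwner.get(ownerId);
            if (logs == null) {
                logs = new ArrayList<String>();
                logsByOwner.put(ownerId, logs);
            }
            logs.add(logEntry);
        }

        // Push mode: invoked periodically; delivers and clears pending logs.
        public void pushTo(String ownerId) {
            List<String> pending = logsByOwner.remove(ownerId);
            if (pending != null) {
                System.out.println("Pushing to " + ownerId + ": " + pending);
            }
        }

        // Pull mode: the owner (or another authorized party) retrieves logs as needed.
        public List<String> pull(String ownerId) {
            List<String> logs = logsByOwner.get(ownerId);
            return (logs == null) ? new ArrayList<String>() : logs;
        }
    }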
The design of the CIA framework presents substantial challenges,
including uniquely identifying CSPs, ensuring the reliability of the log,
adapting to a highly decentralized infrastructure, etc. Our basic approach
toward addressing these issues is to leverage and extend the programmable
capability of JAR (Java Archives) files to automatically log the usage of the
users' data by any entity in the cloud. Users will send their data along with any
policies such as access control policies and logging policies that they want to


enforce, enclosed in JAR files, to cloud service providers. Any access to the
data will trigger an automated and authenticated logging mechanism local to
the JARs. We refer to this type of enforcement as "strong binding" since the
policies and the logging mechanism travel with the data. However, logs sent by
the JARs may be lost or corrupted in transit; to cope with this issue, we provide
the JARs with a central point of contact which forms a link
between them and the user. It records the error correction information sent by
the JARs, which allows it to monitor the loss of any logs from any of the
JARs. Moreover, if a JAR is not able to contact its central point, any access to
its enclosed data will be denied. Currently, we focus on image files since
images represent a very common content type for end users and organizations
and are increasingly hosted in the cloud as part of the storage services
offered by the utility computing paradigm featured by cloud computing.
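To illustrate the "strong binding" idea, the sketch below gates every access to the enclosed data behind authentication and an automatic log write, in the style of a class that would travel inside the JAR; all names and the credential check are hypothetical:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical JAR-local guard: the data can only be reached through
    // access(), which authenticates the caller and appends a log record first.
    public class DataGuard {
        private final byte[] protectedData;
        private final List<String> log = new ArrayList<String>();

        public DataGuard(byte[] data) {
            this.protectedData = data;
        }

        public byte[] access(String userId, String credential) {
            if (!authenticate(userId, credential)) {
                log.add("DENIED " + userId + " at " + System.currentTimeMillis());
                throw new SecurityException("authentication failed");
            }
            log.add("GRANTED " + userId + " at " + System.currentTimeMillis());
            return protectedData.clone(); // logging and policies travel with the data
        }

        private boolean authenticate(String userId, String credential) {
            // Placeholder; a real JAR would verify against its enclosed policies.
            return credential != null && !credential.isEmpty();
        }
    }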
In summary, our main contributions are as follows:
We propose a novel automatic and enforceable logging mechanism in the
cloud. To our knowledge, this is the first time a systematic approach to
data accountability through the novel usage of JAR files is proposed.
Our proposed architecture is platform independent and highly decentralized,
in that it does not require any dedicated authentication or storage system in place.
We go beyond traditional access control in that we provide a certain
degree of usage control for the protected data even after they are delivered to
the receiver.
We conduct experiments on a real cloud testbed. We also provide a
detailed security analysis and discuss the reliability and strength of our
architecture.

2.3 STATEMENT OF THE PROBLEM




o First, data handling can be outsourced by the direct Cloud Service Provider
(CSP) to other entities in the cloud, and these entities can in turn delegate the
tasks to others, and so on.
o Second, entities are allowed to join and leave the cloud in a flexible manner.
o As a result, data handling in the cloud goes through a complex and dynamic
hierarchical service chain which does not exist in conventional environments.

2.4 OBJECTIVE OF STUDY


The main objective of this project is to keep track of accountability for the data
owner's data in the cloud. To allay users' concerns, it is essential to provide an
effective mechanism for users to monitor the usage of their data in the cloud. For
example, users need to be able to ensure that their data are handled according to the
service level agreements made at the time they sign on for services in the cloud.
Conventional access control approaches developed for closed domains such as
databases and operating systems, or approaches using a centralized server in
distributed environments, are not suitable, due to the following features characterizing
cloud environments.

2.5 SCOPE OF THE STUDY


The scope of this study is to enable data owners to track their data usage in the
cloud rather than making the data public in the cloud. It also supports protocols
for data accessibility and data usage.

2.6 COMPANY PROFILE


CMC Ltd
CMC Ltd is a leading IT solutions company and a subsidiary of Tata Consultancy
Services Ltd. They are involved in the design, development and implementation of
software technologies and applications. They specialize in providing a broad
range of information technology solutions to a diverse base of global as well as national
clients. They also provide professional services for export and procurement,
installation, commissioning, warranty & maintenance of imported and indigenous
computer systems, education & training and networking services. CMC's business is
organized into four strategic business units namely customer services, system
integration, IT enabled services and education & training. The customers of the
company include some of the biggest organizations in India. They are Reserve Bank
of India, Indian Railways, Indian Oil Corporation Ltd, Bharat Petroleum Corporation
Ltd, Oil and Natural Gas Corporation Ltd, United Western Bank, Bank of India and
Bank of Baroda. CMC Ltd was incorporated on December 26, 1975, as Computer
Maintenance Corporation Pvt. Ltd. The Government of India held 100% of the
equity share capital. On August 19, 1977, the company was converted into a public
limited company. In the year 1978, when IBM wound up their operations in India,
CMC Ltd took over the maintenance of IBM installations at over 800 locations
around India. In the year 1981, the company commenced work on Project
Interact, a UN-funded project which paved the way for the company's transformation
from a hardware maintenance company to a complete end-to-end IT solutions provider. This project
involved design, development and systems engineering of real-time, computer-based
systems dedicated to applications in the areas of power distribution, railway freight
operations management, and meteorology. In the year 1982, the company set up an
R&D facility to develop competencies in the frontier areas of technology. In the year
1984, the company diversified their activities to turnkey projects, IT education and
software development. To reflect their diversified business activities, the company
was renamed as CMC Ltd on August 27, 1984. In the year 1991, the company
acquired Baton Rouge International Inc, USA, which was renamed CMC America
Inc in the year 2003. In the year 1992, the Indian Government divested 16.69% of the
company equity to the General Insurance Corporation of India and their subsidiaries.
In the year 2000, the company opened a branch office in London in order to service
and develop the clientele in the UK and Europe. The Government of India divested


51% of their issued and paid-up equity capital in favour of Tata Sons Ltd at a
consideration of Rs 152 crore on October 16, 2001 and thus, CMC Ltd became a part
of the Tata group. Also, the status of the company has been changed from
Government Company to a public limited company. In the year 2003, the company
opened a branch office in Dubai to tap the hitherto unexplored markets of West Asia
and Africa. In the year 2004, the Government divested their remaining 26.5% stake in
CMC to the public. On March 29, 2004, Tata Sons Ltd transferred their entire
shareholding of 51.12% to Tata Consultancy Services Ltd, a subsidiary of Tata Sons
Ltd. The company made a tie up with Xilinx to establish Xilinx' first development
center in Hyderabad called Xilinx-CMC India Development Center. During the year
2006-07, the company set up a Software Technology Park (STP) at Kolkata, which is
engaged primarily in executing new international contracts. The company inaugurated
the phase I of the SEZ project, Synergy Park at their campus in Gachibowli,
Hyderabad on February 11, 2008. The phase II is under construction and the project
will be completed by the end of the year 2009.

2.7 REVIEW OF LITERATURE


JAVA
Java is both a programming language and a platform. The Java programming
language is a high-level language that can be characterized by all of the following
buzzwords:
Simple
Architecture neutral
Object oriented
Portable
Distributed
High performance
Interpreted


Multithreaded
Robust
Dynamic
Secure
With most programming languages, you either compile or interpret a program so that
you can run it on your computer. The Java programming language is unusual in that a
program is both compiled and interpreted. With the compiler, you first translate a
program into an intermediate language called Java byte code: platform-independent
code interpreted by the interpreter on the Java platform. The interpreter
parses and runs each Java byte code instruction on the computer. Compilation
happens just once; interpretation occurs each time the program is executed.
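As a minimal illustration of this compile-once, interpret-everywhere model, the class below (the file name and message are our own example) is compiled to byte code once and can then be run by any platform's interpreter:

    // Hello.java -- compile once with:  javac Hello.java  (produces Hello.class byte code)
    // then run on any Java platform:    java Hello
    public class Hello {
        public static void main(String[] args) {
            System.out.println("Compiled once, interpreted on any Java platform.");
        }
    }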
The original and reference implementation Java compilers, virtual machines,
and class libraries were developed by Sun from 1991 and first released in 1995. As of
May 2007, in compliance with the specifications of the Java Community Process, Sun
relicensed most of its Java technologies under the GNU General Public License.
Others have also developed alternative implementations of these Sun technologies,
such as the GNU Compiler for Java and GNU Classpath.
James Gosling, Mike Sheridan, and Patrick Naughton initiated the Java language
project in June 1991. Java was originally designed for interactive television, but it was
too advanced for the digital cable television industry at the time. The language was
initially called Oak, after an oak tree that stood outside Gosling's office; it went by the
name Green for a time, and was finally renamed Java, from Java coffee, said to be consumed
in large quantities by the language's creators. Gosling aimed to implement a virtual
machine and a language that had a familiar C/C++ style of notation.
Sun Microsystems released the first public implementation as Java 1.0 in 1995. It
promised "Write Once, Run Anywhere" (WORA), providing no-cost run-times on
popular platforms. Fairly secure and featuring configurable security, it allowed


network- and file-access restrictions. Major web browsers soon incorporated the
ability to run Java applets within web pages, and Java quickly became popular. With
the advent of Java 2 (released initially as J2SE 1.2 in December 1998), new
versions had multiple configurations built for different types of platforms. For
example, J2EE targeted enterprise applications and the greatly stripped-down
version J2ME targeted mobile applications (Mobile Java). J2SE designated the Standard
Edition. In 2006, for marketing purposes, Sun renamed the new J2 versions Java
EE, Java ME, and Java SE, respectively.
In 1997, Sun Microsystems approached the ISO/IEC JTC1 standards body, and
later Ecma International, to formalize Java, but it soon withdrew from the
process. Java remains a de facto standard, controlled through the Java Community
Process. At one time, Sun made most of its Java implementations available without
charge, despite their proprietary software status. Sun generated revenue from Java
through the selling of licenses for specialized products such as the Java Enterprise
System. Sun distinguishes between its Software Development Kit (SDK) and Runtime
Environment (JRE) (a subset of the SDK); the primary distinction involves
the JRE's lack of the compiler, utility programs, and header files.
On November 13, 2006, Sun released much of Java as free and open-source
software (FOSS) under the terms of the GNU General Public License (GPL). On
May 8, 2007, Sun finished the process, making all of Java's core code available
under free software/open-source distribution terms, aside from a small portion of code
to which Sun did not hold the copyright.
Following Oracle Corporation's acquisition of Sun Microsystems in 2009-2010,
Oracle has described itself as the "steward of Java technology with a relentless
commitment to fostering a community of participation and transparency".
Principles


There were five primary goals in the creation of the Java language:
1) It should be "simple, object-oriented and familiar"
2) It should be "robust and secure"
3) It should be "architecture-neutral and portable"
4) It should execute with "high performance"
5) It should be "interpreted, threaded, and dynamic".
Java Platform
One characteristic of Java is portability, which means that computer programs
written in the Java language must run similarly on any hardware/operating-system
platform. This is achieved by compiling the Java language code to an intermediate
representation called Java byte code, instead of directly to platform-specific machine
code. Java byte code instructions are analogous to machine code, but they are
intended to be interpreted by a virtual machine (VM) written specifically for the host
hardware. End-users commonly use a Java Runtime Environment (JRE) installed on
their own machine for standalone Java applications, or in a Web browser for
Java applets. Standardized libraries provide a generic way to access host-specific
features such as graphics, threading, and networking.
A major benefit of using byte code is portability. However, the overhead of
interpretation means that interpreted programs almost always run more slowly than
programs compiled to native executables would. Just-in-Time (JIT) compilers that
compile byte code to machine code at runtime were introduced at an early stage.
Automatic Memory Management
Java uses an automatic garbage collector to manage memory in the object
lifecycle. The programmer determines when objects are created, and the Java runtime


is responsible for recovering the memory once objects are no longer in use. Once no
references to an object remain, the unreachable memory becomes eligible to be freed
automatically by the garbage collector. Something similar to a memory leak may still
occur if a programmer's code holds a reference to an object that is no longer needed,
typically when objects that are no longer needed are stored in containers that are still
in use. If methods for a nonexistent object are called, a "null pointer exception" is
thrown.
One of the ideas behind Java's automatic memory management model is that
programmers can be spared the burden of having to perform manual memory
management. In some languages, memory for the creation of objects is implicitly
allocated on the stack, or explicitly allocated and deallocated from the heap. In the
latter case the responsibility of managing memory resides with the programmer. If the
program does not deallocate an object, a memory leak occurs. If the program attempts
to access or deallocate memory that has already been deallocated, the result is
undefined and difficult to predict, and the program is likely to become unstable and/or
crash. This can be partially remedied by the use of smart pointers, but these add
overhead and complexity. Note that garbage collection does not prevent "logical"
memory leaks, i.e. those where the memory is still referenced but never used.
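A brief sketch of these pitfalls follows; the class and field names are illustrative only:

    import java.util.ArrayList;
    import java.util.List;

    public class GcPitfalls {
        // Objects added to this cache stay reachable, so the garbage
        // collector can never reclaim them: a "logical" memory leak.
        private static final List<byte[]> cache = new ArrayList<byte[]>();

        public static void main(String[] args) {
            byte[] scratch = new byte[1024];
            scratch = null;            // no references remain; eligible for GC

            cache.add(new byte[1024]); // still referenced but never used again;
                                       // GC cannot free it

            String s = null;
            System.out.println(s.length()); // calling a method on null throws
                                            // a NullPointerException
        }
    }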
Garbage collection may happen at any time. Ideally, it will occur when a
program is idle. It is guaranteed to be triggered if there is insufficient free memory on
the heap to allocate a new object; this can cause a program to stall momentarily.
Explicit memory management is not possible in Java.
Java does not support C/C++ style pointer arithmetic, where object addresses
and unsigned integers (usually long integers) can be used interchangeably. This
allows the garbage collector to relocate referenced objects and ensures type safety and
security.


As in C++ and some other object-oriented languages, variables of
Java's primitive data types are not objects. Values of primitive types are either stored
directly in fields (for objects) or on the stack (for methods) rather than on the heap, as
commonly true for objects (but see Escape analysis). This was a conscious decision
by Java's designers for performance reasons. Because of this, Java was not considered
to be a pure object-oriented programming language. However, as of Java
5.0, autoboxing enables programmers to proceed as if primitive types were instances
of their wrapper class.
Java contains multiple types of garbage collectors. HotSpot ships with several
collectors, including the Concurrent Mark Sweep (CMS) collector, and others can be
selected to manage the heap. For the vast majority of Java applications, these
collectors are good enough.

INTRODUCTION TO JDBC
JDBC (Java Database Connectivity) is a front-end tool for connecting to a
server and is similar to ODBC in that respect; however, JDBC can connect only Java
clients, and it can use ODBC underneath for the connectivity. JDBC is essentially a
low-level API, since any data manipulation, storage and retrieval has to be done by
the program itself. Some tools which provide a higher-level abstraction are expected shortly.
The next question that needs to be answered is why we need JDBC once we have
ODBC on hand. We could use the same ODBC to connect to the entire range of
databases, and ODBC is a proven technology. The problem with doing this is that
ODBC exposes a C-language API, which uses pointers extensively. Since Java does
not have pointers and is object-oriented, Sun Microsystems, the inventor of Java,
developed JDBC to suit Java's needs.
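A minimal sketch of connecting to a database through JDBC is shown below; the MySQL URL, database name, table and credentials are illustrative assumptions, not values from this project:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class JdbcSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; adjust to the actual deployment.
            String url = "jdbc:mysql://localhost:3306/clouddb";
            Connection con = DriverManager.getConnection(url, "user", "password");
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery("SELECT name FROM users");
            while (rs.next()) {
                System.out.println(rs.getString("name")); // one row per user
            }
            rs.close();
            st.close();
            con.close();
        }
    }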

DATABASE MODELS
JDBC, and accessing the database through applets via the JDBC API and an
intermediate server, resulted in a new type of database model which is
different from the client-server model. Based on the number of intermediate servers
through which a request should go, it is named single-tier, two-tier or multi-tier
architecture.

Single Tier:
In a single tier, the server and the client are the same, in the sense that a client
program that needs information (the client) and the source of this information (the
server) reside in the same machine. This type of architecture is also possible in Java,
in case flat files are used to store the data. However, this is useful only for small
applications. The advantage is the simplicity and portability of the application developed.

Two Tier (client-server):


In two-tier architecture, the database resides in one machine and the client in a
different machine, connected through the network. In this type of architecture,
a database management system takes control of the database and provides access to
clients in a network. This software bundle is also called the server. Software in
different machines requesting information is called the client.

Three tier and N-Tier:


In the three-tier architecture, any number of servers can access the database that
resides on a database server, and these in turn serve clients in a network. For example,
if you want to access the database using Java applets, an applet running on some other
machine can send requests only to the server from which it was downloaded. For this
reason we need an intermediate server which accepts the requests from applets and
forwards them to the actual database server. This intermediate server also acts as a
two-way communication channel: the information or data from the database is
passed on to the applet that requested it. This can be extended to make n tiers of
servers, each server catering to a specific type of request from clients; however, in
practice only the three-tier architecture is popular.

JDBC Driver Types:



The JDBC drivers that we are aware of at this time fit into one of four
categories:

1.

JDBC-ODBC BRIDGE PLUS ODBC DRIVER


The JavaSoft bridge product provides JDBC access via ODBC drivers. Note
that ODBC binary code, and in many cases database client code, must be loaded on
each client machine that uses this driver. As a result, this kind of driver is most
appropriate on a corporate network where client installations are not a major
problem, or for application server code written in Java in a three-tier architecture.

2. NATIVE API PARTLY-JAVA DRIVER


This kind of driver converts JDBC calls into calls on the client API for Oracle,
Sybase, Informix, DB2, or other DBMSs. Note that, like the bridge driver, this style of
driver requires that some binary code be loaded on each client machine.

3. JDBC-NET ALL-JAVA DRIVER


This driver translates JDBC calls into a DBMS-independent net protocol, which
is then translated to a DBMS protocol by a server. This net server middleware is
able to connect its all-Java clients to many different databases. The specific
protocol used depends on the vendor. In general, this is the most flexible JDBC
alternative. It is likely that all vendors of this solution will provide products suitable
for intranet use. In order for these products to also support Internet access, they must
handle the additional requirements for security, access through firewalls, etc., that the
web imposes. Several vendors are adding JDBC drivers to their existing database
middleware products.

4. NATIVE PROTOCOL ALL-JAVA DRIVER


This kind of driver converts JDBC calls into the network protocol used by
the DBMS directly. This allows a direct call from the client machine to the DBMS
server and is a practical solution for intranet access. Since many of these protocols are
proprietary, the database vendors themselves will be the primary source; several
database vendors have these in progress. Eventually, we expect that driver categories
3 and 4 will be the preferred way to access databases from JDBC. Driver categories
one and two are interim solutions where direct all-Java drivers are not yet available.
Category 4 is in some sense the ideal; however, there are many cases where category
3 may be preferable: e.g., where a thin DBMS-independent client is desired, or if a
DBMS-independent protocol is standardized and implemented directly by many
DBMS vendors.

Introduction to Servlets
Servlets provide a Java(TM)-based solution used to address the problems
currently associated with doing server-side programming, including inextensible
scripting solutions, platform-specific APIs, and incomplete interfaces.
Servlets are objects that conform to a specific interface and can be plugged
into a Java-based server. Servlets are to the server side what applets are to the client
side: object byte codes that can be dynamically loaded off the net. They differ from
applets in that they are faceless objects (without graphics or a GUI component). They
serve as platform-independent, dynamically loadable, pluggable helper byte code
objects on the server side that can be used to dynamically extend server-side
functionality.
What is a Servlet?
Servlets are modules that extend request/response-oriented servers, such as
Java-enabled web servers. For example, a servlet might be responsible for taking data
in an HTML order-entry form and applying the business logic used to update a
company's order database.


Servlets are to servers what applets are to browsers. Unlike applets, however, Servlets
have no graphical user interface.
Servlets can be embedded in many different servers because the servlet API, which
you use to write Servlets, assumes nothing about the server's environment or protocol.
Servlets have become most widely used within HTTP servers; many web servers
support the Servlet API.

Servlet Lifecycle
Each servlet has the same life cycle:
1. A server loads and initializes the servlet
2. The servlet handles zero or more client requests
3. The server removes the servlet
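A minimal servlet sketch, with the lifecycle methods that correspond to the three stages above (the class name and response body are our own example):

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class HelloServlet extends HttpServlet {
        @Override
        public void init() {
            // Stage 1: the server loads and initializes the servlet.
        }

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Stage 2: handle a client request.
            resp.setContentType("text/html");
            resp.getWriter().println("<h1>Hello from a servlet</h1>");
        }

        @Override
        public void destroy() {
            // Stage 3: the server removes the servlet.
        }
    }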


JSP
Java Server Pages (JSP) is a technology that helps software developers create
dynamically generated web pages based on HTML, XML, or other document types.
Released in 1999 by Sun Microsystems, JSP is similar to PHP, but it uses the Java
programming language.

Overview
Architecturally, JSP may be viewed as a high-level abstraction of Java servlets. JSPs
are translated into servlets at runtime; each JSP's servlet is cached and re-used until
the original JSP is modified.
JSP can be used independently or as the view component of a server-side
model-view-controller design, normally with JavaBeans as the model and Java servlets
(or a framework such as Apache Struts) as the controller. This is a type of Model 2
architecture. It allows Java code and certain pre-defined actions to be interleaved with
static web markup content, with the resulting page being compiled and executed on
the server to deliver a document. The compiled pages, as well as any dependent Java
libraries, use Java byte code rather than a native software format. Like any other Java
program, they must be executed within a Java virtual machine (JVM) that integrates
with the server's host operating system to provide an abstract platform-neutral
environment.
JSPs are usually used to deliver HTML and XML documents, but through the use of
an OutputStream, they can deliver other types of data as well.
The web container creates JSP implicit objects like pageContext, servletContext,
session, request and response.

Syntax
JSP pages use several delimiters for scripting functions. The most basic is <% ... %>,
which encloses a JSP scriptlet. A scriptlet is a fragment of Java code that is run when
the user requests the page. Other common delimiters include <%= ... %> for
expressions, where the value of the expression is placed into the page delivered to the
user, and directives, denoted with <%@ ... %>.
Java code is not required to be complete or self-contained within its scriptlet element
block; it can straddle markup content, provided the page as a whole is syntactically
correct. For example, any Java if/for/while blocks opened in one scriptlet element
must be correctly closed in a later element for the page to successfully compile.
Markup which falls inside a split block of code is subject to that code, so markup
inside an if block will only appear in the output when the if condition evaluates to
true; likewise, markup inside a loop construct may appear multiple times in the output
depending upon how many times the loop body runs.
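A small illustrative JSP page combining these delimiters; the file name and variable are assumptions for the example:

    <%-- hello.jsp: directive, scriptlet and expression delimiters --%>
    <%@ page contentType="text/html" %>
    <html>
      <body>
        <% int visits = 3; // scriptlet: Java code run on each request %>
        <% if (visits > 0) { %>
          <p>You have visited <%= visits %> times.</p> <%-- expression --%>
        <% } %> <%-- the if block straddles markup, closed in a later scriptlet --%>
      </body>
    </html>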

Expression Language
Version 2.0 of the JSP specification added support for the Expression Language (EL),
used to access data and functions in Java objects. In JSP 2.1, it was folded into the
Unified Expression Language, which is also used in Java Server Faces.
An example of EL syntax:
The value of "variable" in the object "JavaBeans" is ${javabean.variable}.
Additional tags
The JSP syntax adds additional tags, called JSP actions, to invoke built-in
functionality. Additionally, the technology allows for the creation of custom JSP tag
libraries that act as extensions to the standard JSP syntax. One such library is the
JSTL, with support for common tasks such as iteration and conditionals (the
equivalent of "if" and "for" statements in Java).

Compiler
A Java Server Pages compiler is a program that parses JSPs, and transforms them into
executable Java Servlets. A program of this type is usually embedded into the
application server and run automatically the first time a JSP is accessed, but pages



may also be precompiled for better performance, or compiled as a part of the build
process to test for errors.
Some JSP containers support configuring how often the container checks JSP file
timestamps to see whether the page has changed. Typically, this timestamp would be
set to a short interval (perhaps seconds) during software development, and a longer
interval (perhaps minutes, or even never) for a deployed Web application.
Eclipse (Software)
Eclipse is a multi-language Integrated Development Environment (IDE)
comprising a base workspace and an extensible plug-in system for customizing the
environment. It can be used to develop applications in Java and, through plug-ins, in
other programming languages such as C, C++, PHP, Python and Erlang. It can also
be used to develop packages for the software Mathematica. Development
environments include the Eclipse Java development tools (JDT) for Java and Scala,
Eclipse CDT for C/C++ and Eclipse PDT for PHP, among others.
The initial codebase originated from IBM VisualAge. The Eclipse software
development kit (SDK), which includes the Java development tools, is meant for Java
developers. Users can extend its abilities by installing plug-ins written for the Eclipse
Platform, such as development toolkits for other programming languages, and can
write and contribute their own plug-in modules.
Released under the terms of the Eclipse Public License, Eclipse SDK is free
and open source software (although it is incompatible with the GNU General Public
License). It was one of the first IDEs to run under GNU Classpath and it runs without
problems under IcedTea.

Architecture
The Eclipse Platform uses plug-ins to provide all functionality within and on
top of the runtime system, in contrast to some other applications, in which
functionality is hard coded. The Eclipse Platform's runtime system is based
on Equinox, an implementation of the OSGi core framework specification.


This plug-in mechanism is a lightweight software component framework. In
addition to allowing the Eclipse Platform to be extended using other programming
languages such as C and Python, the plug-in framework allows the Eclipse Platform
to work with typesetting languages like LaTeX, networking applications such
as telnet, and database management systems. The plug-in architecture supports writing
any desired extension to the environment, such as for configuration management. Java
and CVS support is provided in the Eclipse SDK, with support for other version
control systems provided by third-party plug-ins.
With the exception of a small run-time kernel, everything in Eclipse is a plug-in.
This means that every plug-in developed integrates with Eclipse in exactly the
same way as other plug-ins; in this respect, all features are "created equal". Eclipse
provides plug-ins for a wide variety of features, some of which are through third
parties using both free and commercial models. Examples of plug-ins include
a UML plug-in for Sequence and other UML diagrams, a plug-in for DB Explorer,
and many others.
The Eclipse SDK includes the Eclipse Java development tools (JDT), offering
an IDE with a built-in incremental Java compiler and a full model of the Java source
files. This allows for advanced refactoring techniques and code analysis. The IDE also
makes use of a workspace, in this case a set of metadata over a flat file space allowing
external file modifications as long as the corresponding workspace "resource" is
refreshed afterwards.
Eclipse implements widgets through a widget toolkit for Java called SWT,
unlike most Java applications, which use the Java standard Abstract Window
Toolkit (AWT) or Swing. Eclipse's user interface also uses an intermediate graphical
user interface layer called JFace, which simplifies the construction of applications
based on SWT.


Rich Client Platform
Eclipse provides the Rich Client Platform (RCP) for developing general
purpose applications. The following components constitute the rich client platform:
o Equinox OSGi: a standard bundling framework
o Core platform: boots Eclipse, runs plug-ins
o Standard Widget Toolkit (SWT): a portable widget toolkit
o JFace: viewer classes to bring model-view-controller programming
to SWT, file buffers, text handling, text editors
o Eclipse Workbench: views, editors, perspectives, wizards
Server Platform
Eclipse supports development for Tomcat, Glassfish and many other servers
and is often capable of installing the required server (for development) directly from
the IDE. It supports remote debugging, allowing the user to watch variables and step
through the code of an application that is running on the attached server.
Eclipse: Juno
Eclipse Juno includes an easier way to work with more than one mobile SDK
at once, an Eclipse-based framework for embedded automotive software
development, and an IDE for Lua development and brings a whole new look to the
workbench. The release has seen input from 72 different project teams, who have
synchronized the release of improved versions of their projects to make it possible to
provide a ready packaged platform.



"Eclipse is a great example of open source distributed development that ships on a
predictable schedule, and scales to tens of millions of lines of code." Eclipse 4.2 now
becomes the mainstream platform for the Eclipse community, with the Eclipse 3.x
family of releases put into maintenance mode.
Eclipse 4.2 includes a compatibility layer that allows existing plug-in and RCP
applications to run under the new version.
An addition to the new version is a new plug-in called Code Recommenders.
This is designed to improve code completion in Eclipse by analyzing
how Java applications use the language's APIs. For the future, the team that worked on
this is building a database of coding best practices to recommend proper API usage to
developers as they type using the plug-in. The Eclipse for Mobile Developers package
provides a way to work with several mobile SDKs, and has improved integration with
the Android SDK. For Lua development, there is a new IDE from the Eclipse M2M
Industry Working Group. Lua is used extensively in machine-to-machine (M2M)
applications and video game programming.
The Eclipse 4.2 Java application server (Virgo) has had a kernel added (Nano)
that you can use to build very small web applications based on the OSGi
specification.
Eclipse's own OSGi framework, Equinox, has also been updated and released
as part of this update. Other Java improvements include integrated debugging support
for JVM-based DSLs in Xtext, and tighter integration with the Java Development
Tools (JDT). JDT now supports Java 7 directly; last year's release of Eclipse was
published before Java 7 was finalized and only supported it via a plug-in.
The final improvement of note is better integration between documentation and
software. You can now write your documentation within Eclipse and generate the
output in HTML format or in other formats by using plugins.

MYSQL


MySQL is the world's most used relational database management system
(RDBMS) that runs as a server providing multi-user access to a number of databases.
It is named after co-founder Michael Widenius' daughter, My. The SQL phrase stands
for Structured Query Language.

The MySQL development project has made its source code available under the
terms of the GNU General Public License, as well as under a variety of proprietary
agreements. MySQL was owned and sponsored by a single for-profit firm, the
Swedish company MySQL AB, now owned by Oracle Corporation.
Free-software-open source projects that require a full-featured database
management system often use MySQL. For commercial use, several paid editions are
available, and offer additional functionality. Applications which use MySQL
databases include: TYPO3, Joomla, WordPress, phpBB, Drupal and other software
built on the LAMP software stack. MySQL is also used in many high-profile,
large-scale World Wide Web products, including Wikipedia, Google (though not for
searches), Facebook, and Twitter.
MySQL is written in C and C++. Its SQL parser is written in yacc, with a
home-brewed lexical analyzer named sql_lex.cc. MySQL works on many different system
platforms, including AIX, BSDi, FreeBSD, HP-UX, eComStation, i5/OS, IRIX,
Linux, Mac OS X, Microsoft Windows, NetBSD, Novell NetWare, OpenBSD,
OpenSolaris, OS/2 Warp, QNX, Solaris, Symbian, SunOS, SCO OpenServer, SCO
UnixWare, Sanos and Tru64. A port of MySQL to OpenVMS also exists.
Many programming languages with language-specific APIs include libraries
for accessing MySQL databases. These include MySQL Connector/Net for integration
with Microsoft's Visual Studio (languages such as C# and VB are most commonly
used) and the JDBC driver for Java. In addition, an ODBC interface called MyODBC
allows additional programming languages that support the ODBC interface to
communicate with a MySQL database.

Apache Tomcat
Apache Tomcat is an open source software implementation of the Java Servlet
and JavaServer Pages technologies. The Java Servlet and JavaServer Pages
specifications are developed under the Java Community Process.
Apache Tomcat is developed in an open and participatory environment and
released under the Apache License version 2. Apache Tomcat is intended to be a
collaboration of the best-of-breed developers from around the world. We invite you to
participate in this open development project. Apache Tomcat powers numerous
large-scale, mission-critical web applications across a diverse range of industries and
organizations. Some of these users and their stories are listed on the Powered by wiki
page. Apache Tomcat, Tomcat, Apache, the Apache feather, and the Apache Tomcat
project logo are trademarks of the Apache Software Foundation.

2.8 METHODOLOGY
Design Methodology
The two basic modern design strategies employed in software design are
1. Top down Design
2. Bottom up Design
Top-down design is basically a decomposition process which focuses on the flow
of control. At later stages it concerns itself with the code production. The first step is


to study the overall aspects of the tasks at hand and to break it into a number of
independent modules. The second step is to break each one of these modules further
into independent sub-modules.
The process is repeated until one obtains modules which are small enough to grasp
mentally and to code in a straightforward manner. One important feature is that at
each level the details of the design at the lower level are hidden. Only the necessary
data and control that must be passed back and forth over the interface are defined.
In a bottom-up design, one first identifies and investigates the parts of the design that
are most difficult, and the necessary design decisions are made; the remainder of the
design is then tailored to fit around the design already chosen for the crucial part.
One strong point of the top-down method is that it postpones detailed design
decisions until the later stages of the design. It allows making small design changes
even when the design is halfway through. There is a danger that the specifications will
be incompatible, and this will not be discovered until late in the design process. By
contrast, the bottom-up strategy first focuses on the crucial parts, so that the feasibility
of the design is tested at an early stage. In this project we have applied the top-down
approach in our design.
The model that is basically being followed is the waterfall model, which states
that the phases are organized in a linear order. First of all the feasibility study is done.
Once that part is over the Requirement Analysis and project planning begins.
The design starts after the requirement analysis is complete and the coding begins
after the design is complete. Once the programming is completed, the testing is done.
In this model the sequence of activities performed in a software development project
is:
Requirement analysis
Project planning
System design
Detail design
Coding
Unit testing
System integration & testing
Here the linear ordering of these activities is critical. The end of one phase marks
the start of the next, and the output of one phase is the input of the next phase. The
output of each phase has to be consistent with the overall requirements of the system.
Some qualities of the spiral model are also incorporated: after the completion of each
phase, the people concerned with the project review the work done.
Waterfall Model was being chosen because all the requirements were known
beforehand and the objective of our software development is the computerization /
automation of an already existing manual working system.
The waterfall model derives its name from the cascading effect from one
phase to the next. In this model each phase has a well-defined starting and ending
point, with identifiable deliverables to the next phase.

Fig 2.7.1: Waterfall model



The model consists of six distinct stages, namely:
1. In the requirements analysis phase:
The problem is specified along with the desired service objectives
(goals)
The constraints are identified
2. In the specification phase the system specification is produced from the detailed
definitions of above two steps. This document should clearly define the product
function.
3. In the system and software design phase, the system specifications are translated
into a software representation. The software engineer at this stage is concerned
with:
Data structure
Software architecture
Algorithmic detail and
Interface representations
The hardware requirements are also determined at this stage along with a picture
of the overall system architecture. By the end of this stage the software engineer
should be able to identify the relationship between the hardware, software and the
associated interfaces. Any faults in the specification should ideally not be passed
downstream.
4. In the implementation and testing phase, the designs are translated into the
software domain.

Detailed documentation from the design phase can significantly reduce the
coding effort.

Testing at this stage focuses on making sure that any errors are identified and
that the software meets its required specification.



5. In the integration and system testing phase all the program units are
integrated and tested to ensure that the complete system meets the software
requirements. After this stage the software is delivered to the customer.
6. The maintenance phase is usually the longest stage of the software lifecycle. In
this phase the software is updated to:
Meet changing customer needs
Adapt to accommodate changes in the external environment
Correct errors and oversights previously undetected in the testing phases
Enhance the efficiency of the software
Observe that feedback loops allow for corrections to be incorporated into the model.
Advantages of waterfall model
Testing is inherent to every phase of the waterfall model
It is an enforced disciplined approach.
It is documentation driven, that is, documentation is produced at every
stage.

Disadvantages
The waterfall model is the oldest and the most widely used paradigm.
However, many projects rarely follow its sequential flow. This is due to the inherent
problems associated with its rigid format, namely:
It only incorporates iteration indirectly, thus changes may cause considerable
confusion as the project progresses.
As the client usually has only a vague idea of exactly what is required from
the software product, the waterfall model has difficulty accommodating the
natural uncertainty that exists at the beginning of a project.
In this project all the modules have been developed using object-oriented
programming methodology. This helps in maintaining and updating the project easily.
Further modules can be incorporated with the existing ones without much change to
the rest of the code.

HARDWARE/SOFTWARE REQUIREMENTS
3.1 HARDWARE REQUIREMENTS FOR CLIENT/SERVER
Processor: Pentium IV compatible or faster
Processor speed: 2.0 GHz or faster
RAM: 2 GB
Hard disk: 100 GB
Monitor: Higher-resolution display with 256 colors
Keyboard: 108 keys

3.2 SOFTWARE REQUIREMENTS FOR CLIENT/SERVER

Operating system: Windows 7 or later
Application software framework: Eclipse IDE
Front end: Java/J2EE, JSP
Back end: MySQL
Server: Apache Tomcat 7.0.35


SOFTWARE REQUIREMENT SPECIFICATION


4.1 INTRODUCTION
A software requirements specification (SRS), a requirements specification for
a software system, is a complete description of the behavior of a system to be
developed, and may include a set of use cases that describe interactions the users will
have with the software. In addition, it also contains non-functional requirements.
Non-functional requirements impose constraints on the design or implementation
(such as performance engineering requirements, quality standards, or design constraints).
The software requirements specification document enlists all necessary
requirements that are required for the project development. To derive the
requirements we need to have clear and thorough understanding of the products to be
developed. This is prepared after detailed communications with the project team and
customer.
A system requirements specification is normally produced in response to a user
requirements specification or other expression of requirements, and is then used as the
basis for system design. The system requirements specification typically differs from
the expression of requirements in both scope and precision: the latter may cover both


the envisaged system and the environment in which it will operate, but may leave
many broad concepts unrefined. Traditionally, system requirements specifications
took the form of natural-language documents. However, both the need for precision
and problems with the increasing size of specification documents have led to the
development of more formal notations. These are capable of being mathematically
manipulated so as to show that the system as designed and implemented actually
meets the specification. This may be especially important in connection with
safety-critical systems.
An SRS minimizes the time and effort required by developers to achieve
desired goals and also minimizes the development cost. A good SRS defines how
an application will interact with system hardware, other programs and human users in
a wide variety of real-world situations. Parameters such as operating speed, response
time, availability, portability, maintainability, footprint, security and speed of
recovery from adverse events are evaluated.

4.2 FUNCTIONAL REQUIREMENTS


The functional requirements for a system describe the functionality or services
that the system is expected to provide. These depend on the type of the software,
which is being developed.
Introduction:
This module is for authorizing the member who submits his/her registration
online by providing a name and password.
Input:
The inputs for the module are the user registration name, password and submit action.
Processing:
This module is processed by submitting the mandatory information of the new user
to the administration.
Output:


Once the registration is submitted, the database is updated with the entered
information, and the user is authorized and eligible to use the system.

4.2.1 Client Module:


Introduction:
This module is for entering the username and password of the client. Since each
user has his or her own authenticity and access permissions, each user is given a
unique username and password.
Input:
The input for this module is the username and password of the respective
authorized user, i.e., the client.
Processing:
This module is processed by checking for the correct username and password. If
either of them is wrong, the end user is alerted with a message such as "invalid
username and password".
Output:
Once both the username and password are correct, the user is taken to the
privileged pages.
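A minimal sketch of the credential check shared by the client, CSP and DataOwner modules; the table and column names, database and credentials are assumptions for illustration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class LoginCheck {
        // Returns true when a stored row matches the supplied credentials.
        // Hypothetical schema: users(username, password, role).
        public static boolean isValid(String user, String pass) throws SQLException {
            String url = "jdbc:mysql://localhost:3306/clouddb"; // assumed database
            String sql = "SELECT 1 FROM users WHERE username = ? AND password = ?";
            Connection con = DriverManager.getConnection(url, "app", "secret");
            try {
                PreparedStatement ps = con.prepareStatement(sql);
                ps.setString(1, user);
                ps.setString(2, pass);
                ResultSet rs = ps.executeQuery();
                return rs.next(); // a match means the user may proceed
            } finally {
                con.close();
            }
        }
    }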

4.2.2 CSP Module:


Introduction:
The Cloud Service Provider (CSP) is trusted to store verification parameters and
to offer public query services for these parameters. In our system the CSP can view
the user and the uploaded file, generate the key for the uploaded file, and then send a
notification to the DataOwner.
Input:
The input for this module is the username and password of the respective
authorized user, i.e., the CSP.
Processing:
This module is processed by checking for the correct username and password. If
either of them is wrong, the end user is alerted with a message such as "invalid
username and password".
Output:
Once both the username and password are correct, the user is taken to the
privileged pages.

4.2.3 DataOwner Module:


IntroductionDataOwner who is the one Uploads the file to cloud and set the price for the
file, he can validate the user and the file also ,only after upload of file done by
DataOwner the client download them
InputThe input for this module is the username and password respective authorize
DataOwner.
ProcessingThis module is processed by checking correct username and password. If either
one of it is wrong the end user will be alerted by message like invalid username and
password.
OutputOnce both the username and password is correct the user will be taken to the
privileges pages.

4.3 NON-FUNCTIONAL REQUIREMENTS


Correctness:
Since the project provides actual and correct information about the files, the
system should always give a correct response, and the data in all the databases
should be constantly updated with the latest information.


Reliability:
The system has to provide correct information in any situation. In case of an
error in input or operation, the system should show a proper message or give
helpful information.

Robustness:
It is vital that the system be fault tolerant with respect to illegal user
input. Error checking must be built into the system to prevent system failure.

Maintainability:
Since the product will be used for a long time, it must be easy to maintain and
easy to extend with future changes. The design of the system should be module
based, and changing the design of one module should not affect the proper
operation of the other modules.

Portability:
The system should be portable, so that it can run on any client system on any
platform with little or no modification.

4.4 FEASIBILITY STUDY


The preliminary investigation examines project feasibility, that is, the likelihood
that the system will be useful to the organization. The main objective of the
feasibility study is to test the technical, operational and economic feasibility of
adding new modules and debugging the old running system. Any system is feasible
given unlimited resources and infinite time. The following aspects make up the
feasibility study portion of the preliminary investigation:
Technical Feasibility
Operational Feasibility
Economic Feasibility
Time and Resource Feasibility
Functional Feasibility
4.4.1 Technical Feasibility
The technical issues usually raised during the feasibility stage of the
investigation include the following:
Does the necessary technology exist to do what is suggested?
Does the proposed equipment have the technical capacity to hold the
data required to use the new system?
Will the proposed system provide adequate responses to inquiries,
regardless of the number or location of users?
Can the system be upgraded if developed?
Are there technical guarantees of accuracy, reliability, ease of access
and data security?

4.4.2 Operational Feasibility


Proposed projects are beneficial only if they can be turned into information
systems that meet the organization's operating requirements. Operational
feasibility aspects of the project are to be taken as an important part of the
project implementation. Some of the important issues raised to test the
operational feasibility of a project include the following:
Is there sufficient support for the project from management and from the users?
Will the system be used and work properly once it is developed
and implemented?
Will there be any resistance from the users that will undermine the
possible application benefits?
This system is targeted to be in accordance with the above-mentioned issues.
Beforehand, the management issues and user requirements have been taken into
consideration. So there is no question of resistance from the users that could
undermine the possible application benefits. The well-planned design would ensure
the optimal utilization of the computer resources and would help in the
improvement of performance status.

4.4.3 Economic Feasibility


A system that can be developed technically and that will be used if installed must
still be a good investment for the organization. In the economic feasibility study,
the development cost of creating the system is evaluated against the ultimate
benefit derived from the new system. Financial benefits must equal or exceed the costs.

4.4.4 Time and Resource Feasibility


This system helps the user make the best use of resources by keeping track of
all the asset details and other necessary details over a period of time, thereby
making the decision-making process easier and worthwhile. It acts as a solution
provider in determining how many assets are coming in and moving out within a
particular location of a shop floor, and in finding ways to reduce time.

4.4.5 Functional Feasibility


People are inherently resistant to change, and computers have been known to
facilitate change. An estimate should be made of the reaction the user is likely
to have towards the new system.
Since this system is ready to use in the organization, it is operationally
feasible. As the system is technically, economically and functionally feasible,
the system is judged feasible.

SYSTEM ANALYSIS AND DESIGN


5.1 SYSTEM ANALYSIS
System analysis is the field dealing with the analysis of systems and the
interactions within those systems. This field is closely related to operations
research. The analysis is intended to capture and describe all the requirements of
the system and to make a model that defines the key domain classes in the system.
The purpose is to provide an understanding of, and to enable communication about,
the system between the developers and the customer establishing the requirements.
Therefore, the analysis is typically conducted in cooperation with the user or customer.
The developer should not think in terms of code or programs during this phase;
it is just the first step towards really understanding the requirements and the reality of
the system under design.

5.1.1 Existing System


It is essential to provide an effective mechanism for the users to monitor the
usage of their data in the cloud. For example, users need to be able to ensure that their
data is handled according to the service level agreements made at the time of
registration for services in the cloud. Conventional access control approaches
developed for closed domains such as databases and operating systems, and other
approaches that use a centralized server in a distributed environment, are not suitable.

Demerits of Existing System


Data handling can be outsourced by the direct cloud service provider (CSP) to
other entities in the cloud, and these entities can also delegate the tasks to others,
and so on. Entities are allowed to join and leave the cloud in a flexible manner. As a
result, data handling in the cloud goes through a complex and dynamic hierarchical
service chain which does not exist in conventional environments.

5.1.2 Proposed System


A novel approach, namely the Cloud Information Accountability (CIA) framework,
based on the notion of information accountability, is proposed. Unlike privacy
protection technologies, which are built on the hide-it-or-lose-it perspective,
information accountability focuses on keeping the data usage transparent and
trackable. The CIA framework provides end-to-end accountability in a highly distributed
fashion. One of the main innovative features of the CIA framework lies in its ability
of maintaining lightweight and powerful accountability that combines aspects of
access control, usage control and authentication. By means of the CIA, data owners
can track whether or not the service-level agreements are being honored and also
enforce access and usage control rules when needed. Associated with the accountability
feature, we also develop two distinct modes for auditing: push mode and pull mode.
The push mode refers to logs being periodically sent to the data owner or stakeholder
while the pull mode refers to an alternative approach whereby the user (or another
authorized party) can retrieve the logs as needed.
Main contributions are as follows:
I. A novel automatic and enforceable logging mechanism in the cloud. The
architecture is platform independent and highly decentralized, in that it does
not require any dedicated authentication or storage system in place.

II. We go beyond traditional access control and provide a certain degree of
usage control for the protected data after it is delivered to the receiver.
The approach was experimented on a real cloud testbed; the results demonstrate
its efficiency, scalability and reliability, and a detailed security analysis
is also provided.

Advantages
o Avoids local storage of data, reducing storage, maintenance and personnel costs.
o Reduces the chance of losing data through hardware failures while remaining
transparent to the owner.

5.2 SYSTEM DESIGN


5.2.1 Introduction
The design phase focuses on the detailed implementation of the system
recommended in the feasibility study. The design phase is a transition from a
user-oriented document to a document oriented to the programmers or database personnel.

5.2.1.1 Input Design


Input design is the process of converting user-oriented inputs to a computer-based
format. Inaccurate input data are one of the main causes of errors in data
processing. Errors in input data can be controlled by input design. Designing the
input data also makes data entry as easy, logical and error-free as possible. Source
data are captured initially on original paper or a source document. They can then be
input to the system in a variety of ways. Codes are developed to reduce input volume,
control errors and speed up the data-entry process.
The input forms allow the user to input data into the system. In this software,
the input forms are designed in such a way as to make the task of data entry easier
and error free. For example, combo boxes are used wherever possible so that the user
can select one of the given options, reducing the chance of errors that could occur
due to a spelling mistake in the input data. The employee form, for instance, allows
the user to add the details of a new employee; details such as job category and
domain are entered using list boxes. This reduces the chance of errors and makes
data entry easier.
Appropriate error messages are provided in all forms to guide the user in data
entry. Thus all the input forms are self-explanatory.

5.2.1.2 Output Design


Computer output is the most important and direct source of information to the
user. Efficient design improves the system-user relationship and helps in
decision-making. Output design was one of the main activities from the beginning of
the project. In the study phase, outputs were identified and described generally in
the project directive. An alternative output medium was then selected; commonly used
output media are the printer and the CRT screen. In the output design phase many
forms and reports are designed. The output forms include forms that allow the user
to view the desired details, for example employee details, department details and
job details. All data in the forms are displayed with appropriate captions. Reports
are produced to reflect up-to-date information. The different reports include the
Job Report and the Employee Report. The outputs of this system are of two types: on
screen and on printer. Screen output is the default, while printer output is optional.
Based on the user requirements and the detailed analysis of the existing
system, the new system must be designed. This is the phase of system designing. It is
the most crucial phase in the development of a system.
The logical system design arrived at as a result of systems analysis is
converted into physical system design. Normally, the design proceeds in two stages:
1. Preliminary or General Design
2. Structured or Detailed Design

Preliminary or General Design:


In the preliminary or general design, the features of the new system are
specified. The costs of implementing these features and the benefits to be derived are
estimated. If the project is still considered to be feasible, we move to the detailed
design stage.

Structured or Detailed Design:


In the detailed design stage, computer-oriented work begins in earnest. At this
stage, the design of the system becomes more structured. Structured design is a
blueprint of a computer system solution to a given problem, having the same
components and inter-relationships among the components as the original problem.
Input, output, databases, forms, codification schemes and processing specifications
are drawn up in detail.

5.3 MODULES DESCRIPTION


5.3.1 Cloud Information Accountability (CIA) Framework:
The strength of the CIA framework lies in its ability to maintain
lightweight and powerful accountability that combines aspects of access
control, usage control and authentication. By means of the CIA, data owners can
track not only whether or not the service-level agreements are being honored,
but also enforce access and usage control rules as needed.
5.3.2. Distinct modes for auditing:
Push mode:

The push mode refers to logs being periodically sent to the data owner or
stakeholder.
Pull mode:
The pull mode refers to an alternative approach whereby the user
(or another authorized party) can retrieve the logs as needed.
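
A minimal sketch of how the two modes could be wired together is shown below; the AuditScheduler class and the LogHarmonizer interface are assumptions for illustration, standing in for the harmonizer component described in Section 5.3.4.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class AuditScheduler {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final LogHarmonizer harmonizer;

    public AuditScheduler(LogHarmonizer harmonizer) {
        this.harmonizer = harmonizer;
    }

    // Push mode: the accumulated log records are sent to the owner at a fixed period.
    public void startPush(long periodHours) {
        scheduler.scheduleAtFixedRate(harmonizer::sendLogsToOwner, periodHours, periodHours, TimeUnit.HOURS);
    }

    // Pull mode: the owner (or another authorized party) retrieves the logs on demand.
    public byte[] pull(String ownerId) {
        return harmonizer.retrieveLogs(ownerId);
    }

    // Assumed interface; the transport (mail, HTTPS, etc.) is left open.
    interface LogHarmonizer {
        void sendLogsToOwner();
        byte[] retrieveLogs(String ownerId);
    }
}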

5.3.3. Logging and auditing Techniques:


i. The logging should be decentralized in order to adapt to the dynamic nature
of the cloud. More specifically, log files should be tightly bound to the
corresponding data being controlled, and should require minimal infrastructural
support from any server.

ii. Every access to the user's data should be correctly and automatically logged.
This requires integrated techniques to authenticate the entity who accesses
the data, and to verify and record the actual operations on the data as well as
the time at which the data have been accessed.

iii. Log files should be reliable and tamper proof to avoid illegal insertion,
deletion, and modification by malicious parties; a sketch of one such
tamper-evidence mechanism follows this list. Recovery mechanisms are also
desirable to restore log files damaged by technical problems.

iv. Log files should be sent back to their data owners periodically to inform
them of the current usage of their data. More importantly, log files should be
retrievable anytime by their data owners when needed, regardless of the
location where the files are stored.

v. The proposed technique should not intrusively monitor data recipients'
systems, nor should it introduce heavy communication and computation
overhead, which would otherwise hinder its feasibility and adoption in
practice.
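
As one way to satisfy requirement iii, the sketch below hash-chains log records with SHA-256, so that any insertion, deletion or modification breaks the chain on verification; the record layout is an assumption for illustration, not the system's exact log format.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

public class HashChainedLog {

    private final List<String> records = new ArrayList<>();
    private String lastHash = "0000"; // genesis value for the first record

    // Each stored record is "previousHash|entry|hashOfBoth".
    public synchronized void append(String entry) throws Exception {
        String body = lastHash + "|" + entry;
        lastHash = sha256(body);
        records.add(body + "|" + lastHash);
    }

    // Re-walks the chain; returns false as soon as a stored hash no longer matches.
    public synchronized boolean verify() throws Exception {
        String expected = "0000";
        for (String record : records) {
            int cut = record.lastIndexOf('|');
            String body = record.substring(0, cut);
            if (!body.startsWith(expected + "|")) return false;
            expected = sha256(body);
            if (!expected.equals(record.substring(cut + 1))) return false;
        }
        return true;
    }

    private static String sha256(String s) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(s.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }
}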

5.3.4. Major components of CIA


There are two major components of the CIA, the first being the logger, and the
second being the log harmonizer.
The logger is strongly coupled with the user's data (either single or multiple data
items). Its main tasks include automatically logging access to data items that it
contains, encrypting the log record using the public key of the content owner, and
periodically sending them to the log harmonizer. It may also be configured to ensure
that access and usage control policies associated with the data are honored. For
example, a data owner can specify that user X is only allowed to view but not to
modify the data. The logger will control the data access even after it is downloaded by
user X. The log harmonizer forms the central component which allows the user access
to the log files. The log harmonizer is responsible for auditing.
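
The logger's core duty, building a "who, what, when" record and sealing it with the owner's public key, can be sketched as follows; plain RSA is used here only because the records are short, and a real logger would combine RSA with a symmetric cipher and would additionally enforce the owner's access and usage policies.

import java.nio.charset.StandardCharsets;
import java.security.PublicKey;
import java.time.Instant;
import javax.crypto.Cipher;

public class AccessLogger {

    private final PublicKey ownerPublicKey; // assumed to travel with the JAR

    public AccessLogger(PublicKey ownerPublicKey) {
        this.ownerPublicKey = ownerPublicKey;
    }

    // Builds a "who, what, when" record and encrypts it so only the data owner can read it.
    public byte[] logAccess(String userId, String dataItem, String action) throws Exception {
        String record = String.join(",", userId, dataItem, action, Instant.now().toString());
        Cipher cipher = Cipher.getInstance("RSA"); // suits short records only
        cipher.init(Cipher.ENCRYPT_MODE, ownerPublicKey);
        return cipher.doFinal(record.getBytes(StandardCharsets.UTF_8)); // later pushed to the harmonizer
    }
}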

5.4 HIGH LEVEL DESIGN


High Level Design (HLD) gives the overall system design in terms of
functional architecture and database design. This is very useful for the developers to
understand the flow of the system. In this phase the design team, the review team
(testers) and the customers play a major role. The entry criterion for this phase is
the requirements document, that is, the SRS; the exit criteria are the HLD, the
project standards, the functional design documents, and the database design document.

Architecture

5.5 LOW LEVEL DESIGN


The Low Level Design Document gives the design of the actual program code,
which is designed based on the High Level Design Document. It defines the internal
logic of the corresponding submodules; designers prepare and map an individual LLD
to every module. A well-developed Low Level Design Document makes the program very
easy to develop, because if proper analysis is made and the Low Level Design
Document is prepared, then the code can be developed directly from it with minimal
debugging and testing effort.

5.5.1 Database Design (Tables)


Database design is the process of producing a detailed data model of a
database. This logical data model contains all the needed logical and physical design
choices and physical storage parameters needed to generate a design in a Data
Definition language, which can then be used to create a database. A fully attributed
data model contains detailed attributes for each entity.

Table 1: Client

Field      | Type        | Null | Key | Default | Extra
-----------|-------------|------|-----|---------|------
Uid        | Varchar(20) | NO   | PRI |         |
Pwd        | Varchar(20) | NO   |     |         |
username   | Varchar(20) | YES  |     | NULL    |
ownername  | Varchar(20) | YES  |     | NULL    |
Mobno      | Varchar(20) | YES  |     | NULL    |
Emailed    | Varchar(40) | YES  |     | NULL    |
Gender     | Varchar(20) | YES  |     | NULL    |
Uname      | Varchar(10) | YES  |     | NULL    |

Table 2: Service Provider

Field | Type        | Null | Key | Default  | Extra
------|-------------|------|-----|----------|------
Uid   | Varchar(20) | NO   | PRI | NOT NULL |
Pwd   | Varchar(20) | NO   |     |          |

Table 3: Owner

Field | Type        | Null | Key | Default | Extra
------|-------------|------|-----|---------|------
Uid   | Varchar(20) | NO   | PRI |         |
Pwd   | Varchar(20) | NO   |     |         |
Name  | Varchar(20) | NO   |     |         |

Table 4: file_status

Field     | Type        | Null | Key     | Default | Extra
----------|-------------|------|---------|---------|------
Id        | Varchar(20) | NO   | PRI     |         |
Uid       | Varchar(20) | YES  | Foreign |         |
ownername | Varchar(20) | YES  | Foreign |         |
Status    | Varchar(40) | YES  |         | NULL    |
filename  | Varchar(20) | YES  |         | NULL    |
Filekey   | Varchar(40) | YES  |         | NULL    |

Table 5: Download Log Files

Field        | Type        | Null | Key     | Default | Extra
-------------|-------------|------|---------|---------|------
File_id      | Varchar(20) | YES  | Foreign |         |
Uid          | Varchar(20) | YES  | Foreign |         |
Email        | Varchar(20) | YES  | Foreign |         |
Address      | Varchar(40) | YES  |         | NULL    |
MobileNo     | Varchar(20) | YES  |         | NULL    |
Payment Type | Varchar(40) | YES  |         | NULL    |
Date         | Date        |      |         |         |
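
For illustration, the Client table above could be created over JDBC as sketched below; the MySQL database name and credentials are assumptions.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SchemaSetup {
    public static void main(String[] args) throws Exception {
        // DDL mirrors Table 1: Client of Section 5.5.1.
        String ddl = "CREATE TABLE IF NOT EXISTS client ("
                + " Uid VARCHAR(20) NOT NULL PRIMARY KEY,"
                + " Pwd VARCHAR(20) NOT NULL,"
                + " username VARCHAR(20),"
                + " ownername VARCHAR(20),"
                + " Mobno VARCHAR(20),"
                + " Emailed VARCHAR(40),"
                + " Gender VARCHAR(20),"
                + " Uname VARCHAR(10))";
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/cloud_cia", "root", "secret"); // assumed
             Statement st = con.createStatement()) {
            st.execute(ddl);
        }
    }
}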

5.5.3 Use Cases / Class Diagrams


5.5.3.1 Use Cases
A use case diagram basically depicts the relationships among the actors and the use
cases within the system.
Actor:
An actor is a person, organization, or external system that plays a role in
one or more interactions with your system. Actors are drawn as stick figures.

Use case:
A use case describes a sequence of actions that provide something of
measurable value to an actor and is drawn as a horizontal ellipse.

Connector:
A connector simply connects a use case with an actor and is denoted by a
line.
Use case diagram showing the interaction between the User, the CSP and the Data
Owner.

5.5.3.2 Class Diagrams


The class diagram shows the building blocks of any object-oriented system.
Class diagrams depict the static view of the model, or part of the model, describing
what attributes and behavior it has rather than detailing the methods for achieving
operations. Class diagrams are most useful for illustrating relationships between
classes and interfaces. Generalizations, aggregations, and associations are all
valuable in reflecting inheritance, composition or usage, and connections, respectively.

Classes
A class is an element that defines the attributes and behaviors that an object is
able to generate. The behavior is described by the possible messages the class is
able to understand, along with the operations that are appropriate for each message.
Classes may also contain definitions of constraints, tagged values and stereotypes.

Class Notation
Classes are represented by rectangles which show the name of the class and,
optionally, the names of its operations and attributes. Compartments are used to
divide the class rectangle into these sections.

Associations
An association implies that two model elements have a relationship, usually
implemented as an instance variable in one class. This connector may include named
roles at each end, cardinality, direction and constraints. Association is the
general relationship type between elements. For more than two elements, a diagonal
representation toolbox element can be used as well. When code is generated for
class diagrams, associations become instance variables in the target class.

5.5.3.3 Sequence diagram


A sequence diagram is a form of interaction diagram which shows
objects as lifelines running down the page, with their interactions over time
represented as messages drawn as arrows from the source lifeline to the target lifeline.
Sequence diagrams are good at showing which objects communicate with which other
objects and what messages trigger those communications. Sequence diagrams are not
intended for showing complex procedural logic.

Lifelines
A lifeline represents an individual participant in a sequence diagram. A lifeline
will usually have a rectangle containing its object name. If its name is self, that
indicates that the lifeline represents the classifier which owns the sequence diagram.

Messages
Messages are displayed as arrows. Messages can be complete, lost or found;
synchronous or asynchronous; call or signal. Three types are shown here: the first
message, a synchronous message (denoted by a solid arrowhead), completes with an
implicit return message; the second message is asynchronous (denoted by a line
arrowhead); and the third is the asynchronous return message (denoted by a dashed
line).

Execution Occurrence
A thin rectangle running down the lifeline denotes the execution occurrence or
activation of a focus of control. There are three execution occurrences. The first is the
source object sending two messages and receiving two replies; the second is the target
object receiving a synchronous message and returning a reply; and the third is the
target object receiving an asynchronous message and returning a reply.

Sequence Diagram of Registration Process

Sequence Diagram of Accessing JAR Files

Sequence Diagram of User logs

5.5.3.4 Activity Diagrams


Registration Activity Diagram

Upload Activity Diagram

Download Activity Diagram


JAR Activity Diagram

TESTING
Software testing is a critical element of software quality assurance and
represents the ultimate review of specification, design and coding. Testing presents
an interesting anomaly for the software engineer.

6.1 Testing Objectives


1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a probability of finding an as yet
undiscovered error.
3. A successful test is one that uncovers an undiscovered error.

6.2 Testing Principles


All tests should be traceable to end user requirements.
Tests should be planned long before testing begins.
Testing should begin on a small scale and progress towards testing in
the large.
Exhaustive testing is not possible.
To be most effective, testing should be conducted by an independent
third party.

6.3 Testing Strategies


A strategy for software testing integrates software test cases into a series of
well-planned steps that result in the successful construction of software. Software
testing is part of a broader topic referred to as verification and validation.
Verification refers to the set of activities that ensure that the software correctly
implements a specific function. Validation refers to the set of activities that
ensure that the software that has been built is traceable to the customer's requirements.

6.4 Test Cases


A test case is a pair consisting of test data to be input to the program and the
expected output. The test data is a set of values, one for each input variable. A test
set is a collection of zero or more test cases.
Testing is one of the important steps in system development. Testing involves
unit tests, module tests, integration tests and so on. Testing may be carried out by
the developer or by a user who knows nothing about the software's internal
functionality. Accordingly, the testing process is divided into white box testing
and black box testing.
The goal of the software development process is to produce software that has no
errors or very few errors. But during software development, errors can occur at any
stage. Verification and validation activities intended to detect errors as soon as
they are introduced are mostly done manually and are unreliable, mainly because of
the absence of any executable code. All errors will be reflected in the code, since
code is the only product that can be executed to check the actual behavior. So the
testing phase is performed after coding to detect all the errors, provide quality
assurance and ensure the reliability of the software. System testing is aimed at
ensuring that the system works accurately and efficiently before live operation
commences. Testing is vital to the success of the system. System testing makes the
logical assumption that all parts of the system are correct.

Further testing methods were implemented to make the software developed
here completely error free and reliable. There are different levels of testing to
detect different types of errors.

6.4.1 Unit Testing


A unit test is a procedure used to validate that a particular unit of the source
code is working properly. The procedure is to write test cases for every subroutine
or function, so that whenever a change occurs, a fault can be quickly identified and
fixed. Ideally, each test case is separate from the others. This type of testing is
mostly done by the software developers or programmers and not by the end users.
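
As a hedged illustration, a unit test for the key-generator sketch of Section 4.2.2 might look like this (JUnit 4 API; the assertions are illustrative):

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotEquals;
import org.junit.Test;

public class KeyGeneratorTest {

    @Test
    public void keyFitsTheFilekeyColumn() {
        // 20 random bytes -> 40 hex characters, matching Varchar(40).
        assertEquals(40, KeyGenerator.newFileKey().length());
    }

    @Test
    public void consecutiveKeysDiffer() {
        assertNotEquals(KeyGenerator.newFileKey(), KeyGenerator.newFileKey());
    }
}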
Testing Scenario Name | Test Case | Test Objective | Expected Result/Response | Final Result | Defect Reason
----------------------|-----------|----------------|--------------------------|--------------|--------------
Client registration | 1 | Client must be able to register to the respective Data Owner. | Client should enter the personal data in the various fields; a password should be generated and the username and password sent to the client's email id. | Pass |
Client registration | 2 | Client must be able to register to the respective Data Owner. | Client should enter the personal data in the various fields; a password should be generated and the username and password sent to the client's email id. | Fail | Invalid entry of fields like email-id, contact number, etc.
Client Login | 3 | Client must be able to login successfully. | Client must be able to enter a valid username and password in the respective fields and login successfully. | Pass |
Client Login | 4 | Client must be able to login successfully. | Client must be able to enter a valid username and password in the respective fields and login successfully. | Fail | Invalid username or password.
Data Owner Login | 5 | Data Owner must be able to login successfully. | Data Owner must be able to enter a valid username and password in the respective fields and login successfully. | Pass |
Data Owner Login | 6 | Data Owner must be able to login successfully. | Data Owner must be able to enter a valid username and password in the respective fields and login successfully. | Fail | Invalid username or password.
CSP Login | 7 | CSP must be able to login successfully. | CSP must be able to enter a valid username and password in the respective fields and login successfully. | Pass |
CSP Login | 8 | CSP must be able to login successfully. | CSP must be able to enter a valid username and password in the respective fields and login successfully. | Fail | Invalid username or password.
Upload File by Client | 9 | Client must be able to upload a file to the CSP. | Client must be able to upload the file successfully to the CSP after verification of the file. | Pass |
Upload File by Client | 10 | Client must be able to upload a file to the CSP. | Client must be able to upload the file successfully to the CSP after verification of the file. | Fail | Invalid file type or corrupted file.
Generate Key by CSP | 11 | CSP must generate the key for the uploaded file. | CSP must approve the file uploaded by the client and then generate the key for that file. | Pass |
Generate Key by CSP | 12 | CSP must generate the key for the uploaded file. | CSP must approve the file uploaded by the client and then generate the key for that file. | Fail | Invalid file type or corrupted file.
Data Owner approves the file from CSP | 13 | Data Owner must approve files whose keys are generated by the CSP. | Data Owner must verify the client, approve the file, and set the amount for the files that are uploaded by the client with keys generated by the CSP. | Pass |
Data Owner approves the file from CSP | 14 | Data Owner must approve files whose keys are generated by the CSP. | Data Owner must verify the client, approve the file, and set the amount for the files that are uploaded by the client with keys generated by the CSP. | Fail | Invalid client or corrupted file.
Request for File Key by Client | 15 | Client must make a payment and receive a file key for a particular file. | Client must receive a file key through mail once the payment is done for the particular file. | Pass |
Request for File Key by Client | 16 | Client must make a payment and receive a file key for a particular file. | Client must receive a file key through mail once the payment is done for the particular file. | Fail | Payment credential invalid.
File Download by the Client | 17 | Client requests a download. | Client requests a file download by entering the valid file key sent to it. | Pass |
File Download by the Client | 18 | Client requests a download. | Client requests a file download by entering the valid file key sent to it. | Fail | Invalid file key.
View User Logs | 19 | Data Owner must be able to view user logs. | Data Owner must be able to view all the logs of the access and download of the file. | Pass |
View User Logs | 20 | Data Owner must be able to view user logs. | Data Owner must be able to view all the logs of the access and download of the file. | Fail | Invalid Data Owner.

6.4.2 Integration Testing


Integration testing (sometimes called integration and testing, abbreviated
I&T) is the phase of software testing in which individual software modules are
combined and tested as a group. It follows unit testing and precedes system testing.
Integration testing takes as its input modules that have been checked out by unit
testing, groups them into larger aggregates, applies tests defined in an integration
test plan to those aggregates, and delivers as its output the integrated system
ready for system testing. In this project all the modules were integrated and
tested for the required output.
6.4.3 System Testing
System testing is testing conducted on a complete, integrated system to
evaluate the system's compliance with its specified requirements. System testing
falls within the scope of black box testing and, as such, should require no knowledge
of the inner design of the code or logic. Alpha testing and beta testing are
subcategories of system testing.
As a rule, system testing takes as its input all of the integrated software
components that have successfully passed integration testing, together with the
software system itself integrated with any applicable hardware system(s). The
purpose of integration testing is to detect any inconsistencies between the software
units that are integrated together (called assemblages) or between any of the
assemblages and the hardware. System testing is a more limited type of testing; it
seeks to detect defects both within the inter-assemblages and within the system as a
whole.
Generally speaking, system testing is the first time that the entire system can
be tested against the Functional Requirements Specification(s) (FRS) and/or the
System Requirements Specification (SRS), the rules that describe the functionality
that the vendor (the entity developing the software) and a customer have agreed upon.

SYSTEM IMPLEMENTATION
Implementation is the stage of the project where the theoretical design is turned
into a working system. At this stage the main work load, the greatest upheaval and the
major impact on the existing system shifts to the user department. If the
implementation is not carefully planned or controlled, it can cause chaos and
confusion and may also mislead the end-users.
Implementation includes all those activities that take place to convert
from the old system to the modified or updated one. The new system may totally
replace an existing manual or automated system, or it may be a major modification
to an existing system to meet the organization's requirements. Successful
implementation does not guarantee improvement in the organization using the new
system, but improper installation will prevent it.
The process of putting the developed system in actual use is called system
implementation. This includes all those activities that take place to convert from the
old system to the new system. The system can be implemented only after thorough
testing is done and if it is found to be working according to the specifications. The
system personnel check the feasibility of the system.
The most crucial stage is achieving a successful new system and giving the user
confidence that it will work efficiently and effectively. It involves careful
planning and investigation of the current system and its constraints on
implementation; the more complex the design to be implemented, the more involved
the system analysis and design effort required.
The system implementation has three main aspects. They are education and
training, system testing and changeover.
The implementation stage involves following tasks:
Careful planning.
Investigation of the system and constraints.
Design of methods to achieve the changeover.
Training of the staff in the changeover phase.
Evaluation of the changeover method.
The method of implementation and the time scale to be adopted are determined
initially. Next, the system is tested properly, and at the same time users are
trained in the new procedures.
The existing system also has been implemented with many advanced
functionalities and features. This is done by implementing a formal project change
procedure that strictly controls project scope to avoid confusion, wrong expectations
and the possibility of project failure. The existing system also requires frequent
reviews of all changes that are done, to ensure all requests are dealt with in an
expeditious manner and to protect the integrity of the project plan.

7.1 Implementation Phases


The system implementation stages are as follows:
Project pre-initiation phase
Project initiation phase
Project control phase
The implementation phases are being depicted as shown below:
Project Pre-Initiation Phase

Project Initiation Phase

Project Control Phase

7.2 Implementation Procedures


Implementation of software refers to the final installation of the package in its
real environment, to the satisfaction of the intended users, and the operation of the
system. In many organizations, the software development project is commissioned by
someone who will not be operating the system. Users who are not sure of the software
may resist it, so we have to ensure that such resistance does not build up, by making
sure that:
The active user is made aware of the benefits of using the system.
The user's confidence in the software is built up.

Proper guidance is imparted to the user so that he is comfortable in using
the application.

7.3 User Training


To achieve the objectives and benefits expected from computer based system,
it is essential for the people who will be involved to be confident of their role in the
new system. As systems become more complex, the need for education and training is
more important.
Education is complementary to training. It brings life to formal training by
explaining the background of the system to the users. Education involves creating
the right atmosphere and motivating the user staff. Education sessions should
encourage participation from all staff, with protection of individuals from group
criticism. Education should start before any development work, to enable users to
maintain or regain the ability to participate in the development of their system.
Education can make training more interesting and more understandable. The
main aim should always be to make individuals feel that they can still make
important contributions, to explain how they participate in making system changes,
and to show that the computer and staff do not operate in isolation but are part of
the same organization.
Some of the main topics covered in the training part are:
1- How to manage documents in folders and subfolders.
2- Working with different options of the application.

3- Users were also trained on how to upload multiple documents.
4- Users were also trained on how to lock a particular file.
5- Apart from this, they were given a clear picture of the basic
step-by-step working of the project.
6- How to manage import and export of data.
7- How to manage backups.

7.4 Training on the Application Software


After providing the necessary basic training on computer awareness, the
users have to be trained on the new application software. This training covers the
underlying philosophy of the use of the new system, such as the screen flow, the
screen design, the type of help available on the screen, the types of errors that
can occur while entering data and the corresponding validation checks at each entry,
and the ways to correct the data entered. It should then cover the information
needed by specific users or groups to use the system, or parts of the system, while
imparting training on the application. This training may differ across different
user groups and across different levels of the hierarchy.

7.5 Operational Documentation


Once the implementation plan is decided, it is essential that the user of the
system is made familiar and comfortable with the environment. Education involves
creating the right atmosphere and motivating the user. Documentation covering the
whole operation of the system has been developed. The system is built to be user
friendly, so that the user can work with it from the tips given in the application
itself. Users have to be made aware of what can be achieved with the new system and
how it increases the performance of the system.

SCREEN SHOTS
Home page

Registration

Notification of Sending User-ID and Password through mail

Login

Welcome
File Upload

Client selecting files to upload

Client: Notification of file uploaded to the client

CSP PAGE

File validation

Change Status to Waiting-Owner by CSP

Generation of the key by CSP

Notification of the key generated


Data owner page with request for Approve

View of all Approved Files


Data owner setting the amount

Notification of the file uploaded to the cloud


Download of file, Uploaded by Client

List of files available to download for other Client, under same Data Owner

Provide details and Amount to send key for JAR download through E-Mail

Notification of Sending key through mail

Downloading the JAR

Access the JAR

Provide Authentication

Authentication successfully done and extraction begins

Finally the file is extracted

Download logs of the files that are downloaded


CONCLUSION
We proposed innovative approaches for automatically logging any access to
the data in the cloud together with an auditing mechanism. Our approach allows the
data owner to not only audit his content but also enforce strong back-end protection if
needed. Moreover, one of the main features of our work is that it enables the data
owner to audit even those copies of its data that were made without his knowledge.
This system design has met or exceeded all of the criteria and constraints
outlined in the design proposal, and we are able to implement the same for better
reliability, safety, accessibility and performance by ensuring an automatic logging
facility.

9.1 Future Enhancements


In the future, we plan to refine our approach to verify the integrity of the JRE
and the authentication of JARs. For example, we will investigate whether it is
possible to leverage the notion of a secure JVM [18] being developed by IBM. This
research is aimed at providing software tamper resistance to Java applications. In the
long term, we plan to design a comprehensive and more generic object-oriented
approach to facilitate autonomous protection of traveling content. We would like to
support a variety of security policies, like indexing policies for text files, usage
control for executables, and generic accountability and provenance controls.

BIBLIOGRAPHY
[1] S. Pearson and A. Charlesworth, "Accountability as a Way Forward for Privacy
Protection in the Cloud," Proc. First Int'l Conf. Cloud Computing, 2009.
[2] B. Chun and A.C. Bavier, "Decentralized Trust Management and Accountability
in Federated Systems," Proc. Ann. Hawaii Int'l Conf. System Sciences (HICSS), 2004.
[3] R. Corin, S. Etalle, J.I. den Hartog, G. Lenzini, and I. Staicu, "A Logic for
Auditing Accountability in Decentralized Systems," Proc. IFIP TC1 WG1.7 Workshop
on Formal Aspects in Security and Trust, pp. 187-201, 2005.
[4] B. Crispo and G. Ruffo, "Reasoning about Accountability within Delegation,"
Proc. Third Int'l Conf. Information and Comm. Security (ICICS), pp. 251-260, 2001.
[5] W. Lee, A. Cinzia Squicciarini, and E. Bertino, "The Design and Evaluation of
Accountable Grid Computing System," Proc. 29th IEEE Int'l Conf. Distributed
Computing Systems (ICDCS '09), pp. 145-154, 2009.
[6] Smitha Sundareswaran, Anna C. Squicciarini, and Dan Lin, "Ensuring Distributed
Accountability for Data Sharing in the Cloud," IEEE Transactions on Dependable
and Secure Computing, Aug. 2012.
[7] M.C. Mont, S. Pearson, and P. Bramhall, "Towards Accountable Management of
Identity and Privacy: Sticky Policies and Enforceable Tracing Services,"

Web Links
http://java.sun.com
http://www.sourcefordgde.com
http://www.networkcomputing.com/
http://www.roseindia.com/
http://www.java2s.com/
