
Mobile Banking

ABSTRACT:

"Mobile Banking refers to provision and availment of banking- and financial services
with the help of mobile telecommunication devices.The scope of offered services may
include facilities to conduct bank and stock market transactions, to administer accounts
and to access customised information."
According to this model Mobile Banking can be said to consist of three inter-related
concepts:

Mobile Accounting

Mobile Brokerage

Mobile Financial Information Services

Most services in the categories designated Accounting and Brokerage are transaction-based. The non-transaction-based services of an informational nature are, however,
essential for conducting transactions - for instance, a balance enquiry might be needed
before committing a money remittance. The accounting and brokerage services are
therefore invariably offered in combination with information services. Information
services, on the other hand, may be offered as an independent module.

Project Overall Description:


Many believe that mobile users have only just started to fully utilize the data capabilities of
their mobile phones. In Asian countries like India, China, Bangladesh, Indonesia and the
Philippines, where the mobile infrastructure is comparatively better than the fixed-line
infrastructure, and in European countries, where mobile phone penetration is very high
(at least 80% of consumers use a mobile phone), mobile banking is likely to appeal even
more.
Mobile devices, especially smartphones, are the most promising way to reach the masses
and to create stickiness among current customers, owing to their ability to provide
services anytime, anywhere, their high rate of penetration and their potential to grow. According to
Gartner, shipments of smartphones are growing fast and should top 20 million units (of
over 800 million sold) in 2006 alone.
In the last 4 years, banks across the globe have invested billions of dollars to build
sophisticated internet banking capabilities. As the trend is shifting to mobile banking,
there is a challenge for CIOs and CTOs of these banks to decide on how to leverage their
investment in internet banking and offer mobile banking, in the shortest possible time.

Mobile Banking is a web-based application developed to serve people's money-transfer
needs and to relieve customers of this workload in their busy lives. It helps transfer
money on time and in a hassle-free manner, while ensuring
that the money is securely transferred to the receiving party.
Nowadays, transferring money involves a lot of manual work and is a tedious job. It is a
difficult task for people with busy lives. Customers are forced to wait in a queue
at the bank for the transfer process and to fill in the details, and this is unavoidable. Although
these steps can be completed at multiple counters in different locations, working people and business
people find this unavoidable inconvenience all the more difficult.

To overcome these difficulties, Mobile Banking was developed. It ensures the transfer of
money from the sender's account to the receiver's account, provided the user has
supplied the correct account number, secret code and receiver's account number. The
end user can transfer money and check the information about every transfer and
withdrawal from the internet itself. The added advantage of this application
is that it ensures checkpoint reliability at every step, even if there are power shutdowns or
system crashes: since the transaction details are maintained at every moment, the transfer
process is guaranteed.

The customer also has the option of checking all previous transactions: whether a
transaction was successful, its date and exact time, the number
of transactions performed on a particular date, and so on.

Existing System:
Over the last few years, the mobile and wireless market has been one of the fastest
growing markets in the world and it is still growing at a rapid pace. According to the
GSM Association and Ovum, the number of mobile subscribers exceeded 2 billion in
September 2005, and now exceeds 2.5 billion (of which more than 2 billion are GSM). In
the last 4 years, banks across the globe have invested billions of dollars to build
sophisticated internet banking capabilities. As the trend is shifting to mobile banking,
there is a challenge for CIOs and CTOs of these banks to decide on how to leverage their
investment in internet banking and offer mobile banking, in the shortest possible time.

Proposed System:
With mobile banking, the customer may be sitting in any part of the world (true anytime,
anywhere banking), and hence banks need to ensure that their systems are up and running
in a true 24 x 7 fashion. As customers find mobile banking more and more useful,
their expectations of the solution will increase. Banks unable to meet the performance
and reliability expectations may lose customer confidence. There are systems, such as the
Mobile Transaction Platform, which enable quick and secure mobile enabling of various
banking services. Recently in India there has been phenomenal growth in the use of
mobile banking applications, with leading banks adopting the Mobile Transaction Platform
and the central bank (RBI) publishing guidelines for mobile banking operations.

Very fast and accurate.
No need of any extra manual effort.
No fear of data loss.
Does not require any additional hardware device.
Above all, it is very easy to transfer money in a few minutes.
Only a little knowledge is needed to operate the system.

Scope of the Project:


The scope decides the efficiency of the project.
The project Mobile Banking extends its scope to various layers of
users.
It can be used by all who access the internet and need to transfer
money within a very short time, with the added advantages of reliability
and speed.
This system is developed with the objective of automating the online
money transfer process in a hassle-free manner and with complete
reliability. Its main aim is to help every customer transfer their
money with confidence and to ensure that the amount has
been transferred successfully.

The system is very secure and prevents theft of the account numbers and
secret codes of the customers who are transferring their money
online.

Software Requirements Specification:


Hardware Interfaces
Processor Type      : Pentium IV
Speed               : 2.4 GHz
RAM                 : 256 MB
Hard disk           : 20 GB

Software Interfaces
Operating System    : Windows 2000 / Windows XP
Programming Package : Asp.Net, C#
Front End           : Microsoft Visual Studio 2.0 (Asp.Net), Microsoft async 4.0
Back End            : SQL Server
Server              : Local Host

HOW TO RUN:
The first step is to attach the database.
1. Open the database folder and copy the log and .mdf files to any one of the local
drives.
2. Go to Enterprise Manager, right-click on Databases and select Attach
Database.
3. Browse to the .mdf file on the local drive and click OK.
4. The database will be attached successfully.

Open Microsoft Visual Studio, then set the default page as the start page and run the
project.

Screen Shots:
Home Page:

Transaction Page ( Entering A/c number and secret code)

Mini Statement

Finance Process Main Page:

Finance Process Analyze the Customer query Page:

Finance Process Analyze the Customer query result Page:

User access info page :

Check Book Request process page:

Table Design

Table Name: tblsavmain
Column name    Data type    Length
userid         nvarchar     40
accountid      nvarchar     50
pwd            nvarchar     50
balance        int          4

Table Name: tblcurmain
Column name    Data type    Length
userid         nvarchar     40
accountid      nvarchar     50
pwd            nvarchar     50
balance        numeric      4

Table Name: tblsavtran
Column name    Data type    Length
accid          nvarchar     40
amount         numeric      50
dat            datetime     8
trandetail     varchar      50
trantype       varchar      50

Table Name: tblcurtran
Column name    Data type    Length
accid          nvarchar     40
amount         numeric      50
dat            datetime     8
trandetail     varchar      50
trantype       varchar      50

DATA FLOW DIAGRAM:

[Context-level data flow diagram: the user interacts with the Mobile Banking application,
which provides the Transaction, Mini Statement, Check book request, Finance and User
access info functions, built on Mobile Application Development, WML creation and IIS
generation.]

TRANSFER MODULE:

[Transfer DFD: the user supplies the account number and secret code for a current or
savings account; the system verifies the account number and code, determines the
account type, then verifies the transfer (destination) account number; when all checks are
correct the transfer is completed.]

ACCOUNT STATUS MODULE:

[Account status DFD: the user supplies the account number and secret code for a current
or savings account; the system verifies the account number and code before reporting the
account status.]

User Access MODULE:

[User access DFD: a user request is received, the user status is processed, and the user's
statements are displayed for viewing.]

Check Book Request module:

[Check book request DFD: a check book request is received, the user's current status is
processed, and the status information is returned.]

ENTITY RELATIONSHIP DIAGRAM:

[ER diagram: a Sender entity (attributes: account type, account number, secret code,
customer name) is linked to a Recipient entity (with the same attributes) through a
Transfer relationship.]

LIST OF MODULES:
1) Mobile Application Development.
2) Generate IIS.
3) WML Creation.
4) Transfers.
5) Accounts status module.
6) Finance Enquiry module.
7) Check book request.
8) Access User details.

Module Description:
Mobile Application Development.
In the mobile world we come across many varieties of mobile devices. Some
mobiles are capable of rendering rich graphics, a few are able to render only low-quality
graphics, and others are capable of displaying text only. Developing applications targeting
these devices had been a nightmare (prior to the .NET mobile development facilities):
developers wrote additional code to render the same application for different
devices, and it was not easy to do. The .NET mobile development facilities now make the
work easier; they relieve developers of having to understand the capabilities of the target
mobile device and guarantee that the same application runs on different mobile
platforms without any additional code. But again the question is, "How is this
possible?" The short answer is that everything is possible in .NET; the long answer is that
you need to understand the rendering process and the flow of communication between the
web application and the mobile devices. The mobile application can be run using the .NET
emulators; there are several types of emulator in .NET, such as Smartphone 2003
and Pocket PC 2003.
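
As an illustrative sketch only (the project's actual pages are not reproduced here), a mobile page code-behind in C# can inspect the requesting device through the MobileCapabilities class and adjust its output; lblWelcome and the HomePage class are assumed names, and the corresponding .aspx deck is not shown.

using System;
using System.Web.Mobile;
using System.Web.UI.MobileControls;

public class HomePage : MobilePage
{
    // Assumed mobile Label control declared in the corresponding .aspx page.
    protected Label lblWelcome;

    protected void Page_Load(object sender, EventArgs e)
    {
        // MobileCapabilities describes the requesting handset; the mobile runtime
        // then renders the page as WML, cHTML or HTML as appropriate.
        MobileCapabilities caps = (MobileCapabilities)Request.Browser;
        lblWelcome.Text = caps.IsColor
            ? "Welcome to Mobile Banking"
            : "Mobile Banking";   // shorter text for very limited devices
    }
}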

IIS Generation:
IIS (Internet Information Server) is a group of Internet servers (including a Web or
Hypertext Transfer Protocol server and a File Transfer Protocol server) with additional
capabilities for Microsoft's Windows NT and Windows 2000 Server operating systems.

IIS can create pages for Web sites using Microsoft's FrontPage product (with its
WYSIWYG user interface). Web developers can use Microsoft's Active Server Pages
(ASP) technology, which means that applications - including ActiveX controls - can be
embedded in Web pages that modify the content sent back to users. Developers can also
write programs that filter requests and return the correct Web pages for different users by
using Microsoft's Internet Server Application Program Interface (ISAPI). Using this IIS,
we generate the mobile application on the local host.
WML:
WML pages are often called "decks". A deck contains a set of cards. A card element can
contain text, markup, links, input fields, tasks, images and more. Cards can be related to
each other with links.
When a WML page is accessed from a mobile phone, all the cards in the page are
downloaded from the WAP server. Navigation between the cards is done by the phone
computer - inside the phone - without any extra trips to the server.
In our project we use Microsoft async 4.0 for the WML conversion.
Transfer:
The customer can transfer money to another account online from their savings
account or from their current account. The transaction is carried out with full authentication and
in full security. If a user has an insufficient balance, they are not allowed to transfer the money.
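
A minimal sketch of the transfer logic described above, assuming the tblsavmain table from the Table Design section and an illustrative local connection string; the project's actual pages may differ.

using System;
using System.Data.SqlClient;

class TransferExample
{
    static bool Transfer(string fromAcc, string toAcc, int amount)
    {
        using (SqlConnection con = new SqlConnection(
            "Data Source=localhost;Initial Catalog=MobileBanking;Integrated Security=True"))
        {
            con.Open();
            SqlTransaction tran = con.BeginTransaction();
            try
            {
                // Reject the transfer when the sender's balance is too low.
                SqlCommand check = new SqlCommand(
                    "SELECT balance FROM tblsavmain WHERE accountid = @acc", con, tran);
                check.Parameters.AddWithValue("@acc", fromAcc);
                int balance = (int)check.ExecuteScalar();
                if (balance < amount) { tran.Rollback(); return false; }

                SqlCommand debit = new SqlCommand(
                    "UPDATE tblsavmain SET balance = balance - @amt WHERE accountid = @acc", con, tran);
                debit.Parameters.AddWithValue("@amt", amount);
                debit.Parameters.AddWithValue("@acc", fromAcc);
                debit.ExecuteNonQuery();

                SqlCommand credit = new SqlCommand(
                    "UPDATE tblsavmain SET balance = balance + @amt WHERE accountid = @acc", con, tran);
                credit.Parameters.AddWithValue("@amt", amount);
                credit.Parameters.AddWithValue("@acc", toAcc);
                credit.ExecuteNonQuery();

                tran.Commit();   // both updates succeed or neither does
                return true;
            }
            catch
            {
                tran.Rollback();
                return false;
            }
        }
    }
}

Wrapping both updates in a single SqlTransaction is what provides the checkpoint reliability mentioned earlier: either the debit and the credit are both committed, or neither is.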

Mini statement:
In this module the customer can view their transfer details, date-wise, from their savings account or
from their current account. Here security is provided by means of the
account id and password.
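
As a hedged illustration, a date-wise statement for a savings account could be fetched from the tblsavtran table with a DataAdapter; the connection string and method name are assumptions, not the project's actual code.

using System;
using System.Data;
using System.Data.SqlClient;

class MiniStatementExample
{
    static DataTable GetStatement(string connStr, string accId, DateTime day)
    {
        // Select only the transactions recorded on the requested day.
        string sql = "SELECT dat, trandetail, trantype, amount FROM tblsavtran " +
                     "WHERE accid = @acc AND dat >= @day AND dat < @next ORDER BY dat";
        using (SqlDataAdapter da = new SqlDataAdapter(sql, connStr))
        {
            da.SelectCommand.Parameters.AddWithValue("@acc", accId);
            da.SelectCommand.Parameters.AddWithValue("@day", day.Date);
            da.SelectCommand.Parameters.AddWithValue("@next", day.Date.AddDays(1));
            DataTable statement = new DataTable("MiniStatement");
            da.Fill(statement);   // the adapter opens and closes the connection itself
            return statement;
        }
    }
}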

Finance Enquiry:

In this module the customer can make an online enquiry about different loan types,
such as car loans, two-wheeler loans, education loans and home loans, and then calculate the EMI
and the number of months for the corresponding finance enquiry. A sketch of the EMI
computation is given below.
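
For illustration, the standard EMI formula EMI = P*r*(1+r)^n / ((1+r)^n - 1), where P is the principal, r the monthly interest rate and n the number of months, could be computed as follows; this is a generic formula and not necessarily the project's exact calculation.

using System;

class EmiCalculator
{
    // principal in rupees, annualRate as a percentage (e.g. 10.0), months = loan tenure
    static decimal ComputeEmi(decimal principal, decimal annualRate, int months)
    {
        double p = (double)principal;
        double r = (double)annualRate / 12.0 / 100.0;      // monthly interest rate
        double factor = Math.Pow(1 + r, months);
        return (decimal)(p * r * factor / (factor - 1));   // EMI = P*r*(1+r)^n / ((1+r)^n - 1)
    }

    static void Main()
    {
        // Example: a 3,00,000 car loan at 10% per annum for 48 months
        Console.WriteLine("EMI: {0:F2}", ComputeEmi(300000m, 10m, 48));
    }
}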
Check book request:
In this module the user can submit a request for a check book for their account.
Hackers cannot apply for a check book request here.
Access User details:
In this module the details of users who maintain a low balance in the bank can be accessed,
as can the details of users who maintain a high balance.

HARDWARE INTERFACE:
Hardware includes any physical device that is connected to the
computer and is controlled by the computer's microprocessor. This
includes equipment that was connected to the computer when it was
manufactured, as well as peripheral equipment added later. Some
examples of such devices are modems, disk drives, printers and keyboards.

Hardware interfaces are the plugs, sockets, wires, and the electrical
pulses traveling through them in a particular pattern.
Every interface implies a function. At the hardware level, electronic
signals activate functions; data are read, written, transmitted,
serviced, analyzed for errors, and so on.

SOFTWARE DEVELOPMENT
The following is the software used in our project. We have used ASP.Net with C# as the
front end and SQL Server as the back end.

FRONT END OF SOFTWARE:


Introduction to .net framework
.NET (dot-net) is the name Microsoft gives to its general vision of the future of
computing, the view being of a world in which many applications run in a distributed
manner across the Internet. We can identify a number of different motivations driving this
vision.

Firstly, distributed computing is rather like object oriented programming, in that it


encourages specialized code to be collected in one place, rather than copied redundantly
in lots of places. There are thus potential efficiency gains to be made in moving to the
distributed model.

Secondly, by collecting specialized code in one place and opening up a generally


accessible interface to it, different types of machines (phones, handhelds, desktops, etc.)
can all be supported with the same code. Hence Microsoft's 'run-anywhere' aspiration.
Thirdly, by controlling real-time access to some of the distributed nodes
(especially those concerning authentication), companies like Microsoft can control more
easily the running of its applications. It moves applications further into the area of
'services provided' rather than 'objects owned'.
Interestingly, in taking on the .NET vision, Microsoft seems to have given up
some of its proprietary tendencies (whereby all the technology it touched was warped
towards its Windows operating system).
Because it sees its future as providing software services in distributed
applications, the .NET framework has been written so that applications on other
platforms will be able to access these services. For example, .NET has been built upon
open standard technologies like XML and SOAP.
At the development end of the .NET vision is the .NET Framework. This contains
the Common Language Runtime, the .NET Framework Classes, and higher-level features
like ASP.NET (the next generation of Active Server Pages technologies) and Win Forms
(for developing desktop applications).
The Common Language Runtime (CLR) manages the execution of code compiled
for the .NET platform. The CLR has two interesting features. Firstly, its specification has
been opened up so that it can be ported to non-Windows platforms. Secondly, any number
of different languages can be used to manipulate the .NET framework classes, and the
CLR will support them. This has led one commentator to claim that under .NET the
language one uses is a 'lifestyle choice'.
Not all of the supported languages fit entirely neatly into the .NET framework,
however (in some cases the fit has been somewhat Procrustean). But the one language
that is guaranteed to fit in perfectly is C#. This new language, a successor to C++, has

been released in conjunction with the .NET framework, and is likely to be the language of
choice for many developers working on .NET applications.
Asp.net
ASP.NET is a programming framework built on the common language runtime that
can be used on a server to build powerful Web applications. ASP.NET offers several
important advantages over previous Web development models:

Enhanced Performance. ASP.NET is compiled common language runtime code


running on the server. Unlike its interpreted predecessors, ASP.NET can take advantage
of early binding, just-in-time compilation, native optimization, and caching services right
out of the box. This amounts to dramatically better performance before you ever write a
line of code.

World-Class Tool Support. The ASP.NET framework is complemented by a rich


toolbox and designer in the Visual Studio integrated development environment.
WYSIWYG editing, drag-and-drop server controls, and automatic deployment are just a
few of the features this powerful tool provides.

Power and Flexibility. Because ASP.NET is based on the common language


runtime, the power and flexibility of that entire platform is available to Web application
developers. The .NET Framework class library, Messaging, and Data Access solutions are
all seamlessly accessible from the Web. ASP.NET is also language-independent, so you
can choose the language that best applies to your application or partition your application
across many languages.

Further, common language runtime interoperability guarantees that your existing


investment in COM-based development is preserved when migrating to ASP.NET.

Simplicity. ASP.NET makes it easy to perform common tasks, from simple form

submission and client authentication to deployment and site configuration. For example,
the ASP.NET page framework allows you to build user interfaces that cleanly separate
application logic from presentation code and to handle events in a simple, Visual
Basic-like forms processing model. Additionally, the common language runtime simplifies
development, with managed code services such as automatic reference counting and
garbage collection.

Manageability. ASP.NET employs a text-based, hierarchical configuration

system, which simplifies applying settings to your server environment and Web
applications. Because configuration information is stored as plain text, new settings may
be applied without the aid of local administration tools. This "zero local administration"
philosophy extends to deploying ASP.NET Framework applications as well. An ASP.NET
Framework application is deployed to a server simply by copying the necessary files to
the server. No server restart is required, even to deploy or replace running compiled code.

Scalability and Availability. ASP.NET has been designed with scalability in

mind, with features specifically tailored to improve performance in clustered and


multiprocessor environments. Further, processes are closely monitored and managed by
the ASP.NET runtime, so that if one misbehaves (leaks, deadlocks), a new process can be
created in its place, which helps keep your application constantly available to handle
requests.

Customizability and Extensibility. ASP.NET delivers a well-factored architecture


that allows developers to "plug-in" their code at the appropriate level. In fact, it is

possible to extend or replace any subcomponent of the ASP.NET runtime with your own
custom-written component. Implementing custom authentication or state services has
never been easier.

Security. With built in Windows authentication and per-application


configuration, you can be assured that your applications are secure.

ASP .NET has better language support, a large set of new controls and XML
based components, and better user authentication.

ASP .NET provides increased performance by running compiled code.

ASP .NET code is not fully backward compatible with ASP.

New in ASP .NET

Better language support

Programmable controls

Event-driven programming

XML-based components

User authentication, with accounts and roles

Higher scalability

Increased performance - Compiled code

Easier configuration and deployment

Not fully ASP compatible

Language Support

ASP .NET uses the new ADO .NET.

ASP .NET supports full Visual Basic, not VBScript.

ASP .NET supports C# (C sharp) and C++.

ASP .NET supports JScript as before.

ASP .NET Controls

ASP .NET contains a large set of HTML controls. Almost all HTML elements on
a page can be defined as ASP .NET control objects that can be controlled by scripts.

ASP .NET also contains a new set of object oriented input controls, like
programmable list boxes and validation controls.

A new data grid control supports sorting, data paging, and everything you expect
from a dataset control.

Event Aware Controls

All ASP .NET objects on a Web page can expose events that can be processed by
ASP .NET code.

Load, Click and Change events handled by code makes coding much simpler and
much better organized.

ASP .NET Components

ASP .NET components are heavily based on XML, like the new AdRotator, which
uses XML to store advertisement information and configuration.

User Authentication
ASP .NET supports forms-based user authentication, including cookie
management and automatic redirecting of unauthorized logins.
(You can still do your custom login page and custom user checking).
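
A minimal sketch of such a login page code-behind, assuming illustrative control names (txtUser, txtPwd, lblMessage) and a custom ValidateUser check against the bank's tables; this is not the project's actual login code.

using System;
using System.Web.Security;
using System.Web.UI.WebControls;

public class Login : System.Web.UI.Page
{
    // Assumed controls declared on the login page markup.
    protected TextBox txtUser;
    protected TextBox txtPwd;
    protected Label lblMessage;

    protected void btnLogin_Click(object sender, EventArgs e)   // wired to a Button in the markup
    {
        if (ValidateUser(txtUser.Text, txtPwd.Text))
        {
            // Issues the forms-authentication cookie and redirects to the
            // page that was originally requested.
            FormsAuthentication.RedirectFromLoginPage(txtUser.Text, false);
        }
        else
        {
            lblMessage.Text = "Invalid account id or secret code.";
        }
    }

    private bool ValidateUser(string user, string pwd)
    {
        // Hypothetical check against tblsavmain / tblcurmain; details omitted.
        return false;
    }
}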
User Accounts and Roles
ASP .NET allows for user accounts and roles, to give each user (with a given role)
access to different server code and executables.

High Scalability

Much has been done with ASP .NET to provide greater scalability. Server to server
communication has been greatly enhanced, making it possible to scale an application
over several servers. One example of this is the ability to run XML parsers, XSL
transformations and even resource hungry session objects on other servers.
Compiled Code
The first request for an ASP .NET page on the server will compile the ASP .NET code
and keep a cached copy in memory. The result of this is greatly increased performance.
Easy Configuration
Configuration of ASP .NET is done with plain text files.
Configuration files can be uploaded or changed while the application is running. No need
to restart the server. No more metabase or registry puzzle.
Easy Deployment
No more server restart to deploy or replace compiled code. ASP .NET simply redirects all
new requests to the new code.
Compatibility
ASP .NET is not fully compatible with earlier versions of ASP, so most of the old
ASP code will need some changes to run under ASP .NET.
To overcome this problem, ASP .NET uses a new file extension ".aspx". This will
make ASP .NET applications able to run side by side with standard ASP applications on
the same server.

HTML Server Controls


HTML elements in ASP.NET files are, by default, treated as text. To make these elements
programmable, add a runat="server" attribute to the HTML element. This attribute
indicates that the element should be treated as a server control.
Note: All HTML server controls must be within a <form> tag with the runat="server"
attribute!
Note: ASP.NET requires that all HTML elements must be properly closed and properly
nested.
HTML Server Control     Description
HtmlAnchor              Controls an <a> HTML element
HtmlButton              Controls a <button> HTML element
HtmlForm                Controls a <form> HTML element
HtmlGeneric             Controls other HTML elements not specified by a specific HTML
                        server control, like <body>, <div>, <span>, etc.
HtmlImage               Controls an <image> HTML element
HtmlInputButton         Controls <input type="button">, <input type="submit">, and
                        <input type="reset"> HTML elements
HtmlInputCheckBox       Controls an <input type="checkbox"> HTML element
HtmlInputFile           Controls an <input type="file"> HTML element
HtmlInputHidden         Controls an <input type="hidden"> HTML element
HtmlInputImage          Controls an <input type="image"> HTML element
HtmlInputRadioButton    Controls an <input type="radio"> HTML element
HtmlInputText           Controls <input type="text"> and <input type="password">
                        HTML elements
HtmlSelect              Controls a <select> HTML element
HtmlTable               Controls a <table> HTML element
HtmlTableCell           Controls <td> and <th> HTML elements
HtmlTableRow            Controls a <tr> HTML element
HtmlTextArea            Controls a <textarea> HTML element

Web Server Controls


Like HTML server controls, Web server controls are also created on the server and they
require a runat="server" attribute to work. However, Web server controls do not
necessarily map to any existing HTML elements and they may represent more complex
elements.
The syntax for creating a Web server control is:
<asp:control_name id="some_id" runat="server" />

Web Server Control


AdRotator
Button
Calendar
CheckBox
CheckBoxList
DataGrid
DataList
DropDownList
HyperLink
Image
ImageButton
Label

Description
Displays a sequence of images
Displays a push button
Displays a calendar
Displays a check box
Creates a multi-selection check box group
Displays fields of a data source in a grid
Displays items from a data source by using templates
Creates a drop-down list
Creates a hyperlink
Displays an image
Displays a clickable image
Displays static content which is programmable (lets you apply

LinkButton
ListBox
Literal

styles to its content)


Creates a hyperlink button
Creates a single- or multi-selection drop-down list
Displays static content which is programmable (does not let

Panel
PlaceHolder
RadioButton
RadioButtonList
Repeater

you apply styles to its content)


Provides a container for other controls
Reserves space for controls added by code
Creates a radio button
Creates a group of radio buttons
Displays a repeated list of items bound to the control

Table
TableCell
TableRow
TextBox
Xml

Creates a table
Creates a table cell
Creates a table row
Creates a text box
Displays an XML file or the results of an XSL transform

Validation Server Controls


A Validation server control is used to validate the data of an input control. If the data does
not pass validation, it will display an error message to the user.
The syntax for creating a Validation server control is:

<asp:control_name id="some_id" runat="server" />

Validation Server Control    Description
CompareValidator             Compares the value of one input control to the value
                             of another input control or to a fixed value
CustomValidator              Allows you to write a method to handle the validation
                             of the value entered
RangeValidator               Checks that the user enters a value that falls between
                             two values
RegularExpressionValidator   Ensures that the value of an input control matches a
                             specified pattern
RequiredFieldValidator       Makes an input control a required field
ValidationSummary            Displays a report of all validation errors that occurred
                             in a Web page

ADO .NET
Most applications need data access at some point, making it a crucial component
when working with applications. Data access means making the application interact with a
database, where all the data is stored. Different applications have different requirements
for database access. VB .NET uses ADO .NET (ActiveX Data Objects) as its data access
and manipulation protocol, which also enables us to work with data on the Internet. Let's
take a look at why ADO .NET came into the picture, replacing ADO.

Evolution of ADO.NET
The first data access model, DAO (Data Access Objects), was created for local databases
with the built-in Jet engine, and had performance and functionality issues. Next came
RDO (Remote Data Objects) and ADO (ActiveX Data Objects), which were designed for
client-server architectures, but soon ADO took over from RDO. ADO was a good architecture,
but as languages change, so does the technology. With ADO, all the data is contained in a
recordset object, which caused problems when implemented over the network and through
firewalls. ADO was a connected data access model, which means that when a connection to the
database is established the connection remains open until the application is closed.
Leaving the connection open for the lifetime of the application raises concerns about
database security and network traffic. Also, as databases are becoming increasingly
important and as they are serving more people, a connected data access model makes us
question its productivity. For example, an application with connected data access may
do well when connected to two clients, the same may do poorly when connected to 10,

and might be unusable when connected to 100 or more. Also, open database connections
use system resources to a maximum extent, making system performance less effective.

ADO.NET
To cope with some of the problems mentioned above, ADO .NET came into existence.
ADO .NET addresses the above-mentioned problems by maintaining a disconnected
database access model, which means that when an application interacts with the database, the
connection is opened to serve the request of the application and is closed as soon as the
request is completed. Likewise, if a database is updated, the connection is opened long
enough to complete the update operation and is then closed.

By keeping connections open for only a minimum period of time, ADO .NET
conserves system resources and provides maximum security for databases and also
has less impact on system performance.
Also, ADO .NET, when interacting with the database, uses XML and converts all
the data into XML format for database-related operations, making them more efficient.

The ADO.NET Data Architecture


Data Access in ADO.NET relies on two components: DataSet and Data Provider.
DataSet
The dataset is a disconnected, in-memory representation of data. It can be considered as a
local copy of the relevant portions of the database. The DataSet is persisted in memory
and the data in it can be manipulated and updated independent of the database. When the
use of this DataSet is finished, changes can be made back to the central database for

updating. The data in DataSet can be loaded from any valid data source like Microsoft
SQL server database, an Oracle database or from a Microsoft Access database.
Data Provider
The Data Provider is responsible for providing and maintaining the connection to
the database. A DataProvider is a set of related components that work together to provide
data in an efficient and performance driven manner. The .NET Framework currently
comes with two DataProviders: the SQL Data Provider which is designed only to work
with Microsoft's SQL Server 7.0 or later and the OleDb DataProvider which allows us to
connect to other types of databases like Access and Oracle. Each DataProvider consists of
the following component classes:
The Connection object, which provides a connection to the database.
The Command object, which is used to execute a command against the database.
The DataReader object, which provides a forward-only, read-only, connected recordset.
The DataAdapter object, which populates a disconnected DataSet with data and performs
updates.

Data access with ADO.NET can be summarized as follows:

A connection object establishes the connection for the application with the database. The
command object provides direct execution of the command to the database. If the
command returns more than a single value, the command object returns a DataReader to
provide the data. Alternatively, the DataAdapter can be used to fill a DataSet. The database
can then be updated using the command object or the DataAdapter.
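
The flow just summarized can be sketched as follows; the connection string is illustrative and the table names follow the earlier Table Design section.

using System;
using System.Data;
using System.Data.SqlClient;

class AdoNetFlow
{
    static void Main()
    {
        string connStr = "Data Source=localhost;Initial Catalog=MobileBanking;Integrated Security=True";

        // Connection object: establishes the link to the database.
        using (SqlConnection con = new SqlConnection(connStr))
        {
            con.Open();

            // Command object + DataReader: connected, forward-only access.
            SqlCommand cmd = new SqlCommand("SELECT accountid, balance FROM tblsavmain", con);
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1}", reader["accountid"], reader["balance"]);
            }
        }

        // DataAdapter + DataSet: disconnected access; Fill opens/closes the connection itself.
        SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM tblsavtran", connStr);
        DataSet ds = new DataSet();
        adapter.Fill(ds, "Transactions");
        Console.WriteLine("{0} transaction rows loaded.", ds.Tables["Transactions"].Rows.Count);
    }
}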

Component classes that make up the Data Providers

The Connection Object


The Connection object creates the connection to the database. Microsoft Visual Studio
.NET provides two types of Connection classes: the SqlConnection object, which is
designed specifically to connect to Microsoft SQL Server 7.0 or later, and the
OleDbConnection object, which can provide connections to a wide range of database
types like Microsoft Access and Oracle. The Connection object contains all of the
information required to open a connection to the database.
The Command Object
The Command object is represented by two corresponding classes: SqlCommand and
OleDbCommand. Command objects are used to execute commands to a database across a
data connection. The Command objects can be used to execute stored procedures on the
database, SQL commands, or return complete tables directly. Command objects provide
three methods that are used to execute commands on the database:
ExecuteNonQuery: Executes commands that have no return values, such as INSERT,
UPDATE or DELETE
ExecuteScalar: Returns a single value from a database query
ExecuteReader: Returns a result set by way of a DataReader object
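
A short, hedged example of the three execute methods against the tblsavmain table (the connection is assumed to be open already, and the account id is illustrative).

using System;
using System.Data.SqlClient;

class CommandMethodExamples
{
    static void Demo(SqlConnection con)
    {
        // ExecuteNonQuery: no result set; returns the number of affected rows.
        SqlCommand update = new SqlCommand(
            "UPDATE tblsavmain SET pwd = @pwd WHERE accountid = @acc", con);
        update.Parameters.AddWithValue("@pwd", "newSecret");
        update.Parameters.AddWithValue("@acc", "ACC001");   // hypothetical account id
        Console.WriteLine("Rows affected: {0}", update.ExecuteNonQuery());

        // ExecuteScalar: returns the first column of the first row.
        object count = new SqlCommand("SELECT COUNT(*) FROM tblsavmain", con).ExecuteScalar();
        Console.WriteLine("Accounts: {0}", count);

        // ExecuteReader: returns a forward-only, read-only result set.
        using (SqlDataReader r = new SqlCommand("SELECT userid FROM tblsavmain", con).ExecuteReader())
        {
            while (r.Read())
                Console.WriteLine(r["userid"]);
        }
    }
}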

The DataReader Object


The Data Reader object provides a forward-only, read-only, connected stream recordset
from a database. Unlike other components of the Data Provider, DataReader objects
cannot be directly instantiated. Rather, the DataReader is returned as the result of the
Command object's ExecuteReader method. The SqlCommand.ExecuteReader method
returns a SqlDataReader object, and the OleDbCommand.ExecuteReader method returns
an OleDbDataReader object.

The DataReader can provide rows of data directly to application logic when you do not
need to keep the data cached in memory. Because only one row is in memory at a time,
the DataReader provides the lowest overhead in terms of system performance but
requires the exclusive use of an open Connection object for the lifetime of the
DataReader.
The DataAdapter Object
The DataAdapter is the class at the core of ADO .NET's disconnected data access. It is
essentially the middleman facilitating all communication between the database and a
DataSet. The DataAdapter is used either to fill a DataTable or DataSet with data from the
database with its Fill method. After the memory-resident data has been manipulated, the
DataAdapter can commit the changes to the database by calling the Update method. The
DataAdapter provides four properties that represent database commands:
SelectCommand
InsertCommand
DeleteCommand
UpdateCommand
When the Update method is called, changes in the DataSet are copied back to the
database and the appropriate InsertCommand, DeleteCommand, or UpdateCommand is
executed.
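
A sketch of the disconnected update cycle under the same assumptions; here SqlCommandBuilder is used to derive the Insert/Update/Delete commands from the select command, which requires the select to include the table's key column.

using System.Data;
using System.Data.SqlClient;

class AdapterUpdateExample
{
    static void AdjustBalances(string connStr)
    {
        SqlDataAdapter adapter = new SqlDataAdapter("SELECT accountid, balance FROM tblsavmain", connStr);
        SqlCommandBuilder builder = new SqlCommandBuilder(adapter);   // generates the update commands

        DataTable accounts = new DataTable();
        adapter.Fill(accounts);                    // disconnected copy of the rows

        foreach (DataRow row in accounts.Rows)     // manipulate the in-memory data
            if ((int)row["balance"] < 0)
                row["balance"] = 0;

        adapter.Update(accounts);                  // copies the changes back to the database
    }
}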

BACK END OF SOFTWARE:


SQL Introduction:
SQL stands for Structured Query Language and is used to pull information from
databases. SQL offers many features, making it a powerfully diverse language that also
offers a secure way to work with databases.
SQL (commonly expanded to Structured Query Language) is the most popular
computer language used to create, modify, retrieve and manipulate data from relational
database management systems. The language has evolved beyond its original purpose to
support object-relational database management systems. It is an ANSI/ISO standard.
SQL alone can input, modify, and drop data from databases. In this tutorial we use
command-line examples to show you the basics of what we are able to accomplish. With
the use of web languages such as HTML and PHP, SQL becomes an even greater tool for
building dynamic web pages.

Database:
A database is nothing more than an empty shell, like a vacant warehouse. It offers no real
functionality whatsoever, other than holding a name. Tables are the next tier of our tree,
offering a wide scope of functionality. Following our warehouse example, a SQL table
would be the physical shelving inside our vacant warehouse. Each SQL table is capable
of housing 1024 columns (shelves). Depending on the situation, your goods may require
reorganization, reserving, or removal. SQL tables can be manipulated in this same way or
in any fashion the situation calls for.

SQL Server:
Microsoft's SQL Server is steadily on the rise in the commercial world gaining popularity
slowly. This platform has a GUI "Windows" type interface and is also rich with
functionality. A free trial version can be downloaded at the Microsoft web site, however it
is only available to Windows users.
SQL Queries:
Queries are the backbone of SQL. Query is a loose term that refers to a widely available
set of SQL commands called clauses. Each clause (command) performs some sort of
function against the database. For instance, the create clause creates tables and databases
and the select clause selects rows that have been inserted into your tables. We will dive
deeper in detail as this tutorial continues but for now let's take a look at some query
structure.
Views:
Views are nothing but saved SQL statements, and are sometimes referred to as Virtual
Tables. Keep in mind that Views cannot store data (except for Indexed Views); rather
they only refer to data present in tables.
Let's check out the basic syntax for creating a view:
CREATE VIEW <View_Name>
AS
<SELECT Statement>
GO

There are two important options that can be used when a view is created: SCHEMABINDING
and ENCRYPTION. We shall have a detailed look at both of these shortly, but first of all,
let's take a look at an example of a typical view creation statement without any options.
Data storage
The main unit of data storage is a database, which is a collection of tables with
typed columns. SQL Server supports different data types, including primary types such as
Integer, Float, Decimal, Char (including character strings), Varchar (variable length
character strings), binary (for unstructured blobs of data), Text (for textual data) among
others. It also allows user-defined composite types (UDTs) to be defined and used. SQL
Server also makes server statistics available as virtual tables and views (called Dynamic
Management Views or DMVs). A database can also contain other objects including
views, stored procedures, indexes and constraints, in addition to tables, along with a
transaction log. A SQL Server database can contain a maximum of 2^31 objects, and can
span multiple OS-level files with a maximum file size of 2^20 TB. The data in the database
are stored in primary data files with an extension .mdf. Secondary data files, identified
with an .ndf extension, are used to store optional metadata. Log files are identified with
the .ldf extension.
Storage space allocated to a database is divided into sequentially numbered pages,
each 8 KB in size. A page is the basic unit of I/O for SQL Server operations. A page is
marked with a 96-byte header which stores metadata about the page including the page
number, page type, free space on the page and the ID of the object that owns it. Page type
defines the data contained in the page - data stored in the database, index, allocation map
which holds information about how pages are allocated to tables and indexes, change
map which holds information about the changes made to other pages since last backup or
logging, or contain large data types such as image or text. While page is the basic unit of
an I/O operation, space is actually managed in terms of an extent which consists of 8
pages. A database object can either span all 8 pages in an extent ("uniform extent") or
share an extent with up to 7 more objects ("mixed extent").

A row in a database table cannot span more than one page, so is limited to 8 KB
in size. However, if the data exceeds 8 KB and the row contains Varchar or Varbinary
data, the data in those columns are moved to a new page (or possibly a sequence of pages,
called an Allocation unit) and replaced with a pointer to the data.
For physical storage of a table, its rows are divided into a series of partitions
(numbered 1 to n). The partition size is user defined; by default all rows are in a single
partition. A table is split into multiple partitions in order to spread a database over a
cluster. Rows in each partition are stored in either B-tree or heap structure. If the table has
an associated index to allow fast retrieval of rows, the rows are stored in-order according
to their index values, with a B-tree providing the index. The data is in the leaf node of the
leaves, and other nodes storing the index values for the leaf data reachable from the
respective nodes. If the index is non-clustered, the rows are not sorted according to the
index keys. An indexed view has the same storage structure as an indexed table. A table
without an index is stored in an unordered heap structure. Both heaps and B-trees can
span multiple allocation units.

Buffer management
SQL Server buffers pages in RAM to minimize disc I/O. Any 8 KB page can be
buffered in-memory, and the set of all pages currently buffered is called the buffer cache.
The amount of memory available to SQL Server decides how many pages will be cached
in memory. The buffer cache is managed by the Buffer Manager. Either reading from or
writing to any page copies it to the buffer cache. Subsequent reads or writes are
redirected to the in-memory copy, rather than the on-disc version.
The page is updated on the disc by the Buffer Manager only if the in-memory
cache has not been referenced for some time. While writing pages back to disc,
asynchronous I/O is used whereby the I/O operation is done in a background thread so
that other operations do not have to wait for the I/O operation to complete. Each page is
written along with its checksum when it is written.

When reading the page back, its checksum is computed again and matched with
the stored version to ensure the page has not been damaged or tampered with in the mean
time.
Logging and Transaction
SQL Server ensures that any change to the data is ACID-compliant, i.e., it uses
transactions to ensure that any operation either totally completes or is undone if it fails, but
never leaves the database in an intermediate state. Using transactions, a sequence of
actions can be grouped together, with the guarantee that either all actions will succeed or
none will. SQL Server implements transactions using a write-ahead log. Any changes
made to any page will update the in-memory cache of the page, simultaneously all the
operations performed will be written to a log, along with the transaction ID which the
operation was a part of.
Each log entry is identified by an increasing Log Sequence Number (LSN), which
ensures that no event overwrites another. SQL Server ensures that the log will be written
onto the disc before the actual page is written back. This enables SQL Server to ensure
integrity of the data, even if the system fails. If both the log and the page were written
before the failure, the entire data is on persistent storage and integrity is ensured. If only
the log was written (the page was either not written or not written completely), then the
actions can be read from the log and repeated to restore integrity.
If the log wasn't written, then integrity is still maintained, even though the
database is in a state as if the transaction never occurred. If it was only partially
written, then the actions associated with the unfinished transaction are discarded. Since
the log was only partially written, the page is guaranteed to have not been written, again
ensuring data integrity. Removing the unfinished log entries effectively undoes the
transaction. SQL Server ensures consistency between the log and the data every time an
instance is restarted.

Concurrency and locking


SQL Server allows multiple clients to use the same database concurrently. As
such, it needs to control concurrent access to shared data, to ensure data integrity - when
multiple clients update the same data, or clients attempt to read data that is in the process
of being changed by another client. SQL Server provides two modes of concurrency
control: pessimistic concurrency and optimistic concurrency. When pessimistic
concurrency control is being used, SQL Server controls concurrent access by using locks.
Locks can be either shared or exclusive. Exclusive lock grants the user exclusive access
to the data - no other user can access the data as long as the lock is held. Shared locks are
used when some data is being read - multiple users can read from data locked with a
shared lock, but not acquire an exclusive lock. The latter would have to wait for all
shared locks to be released. Locks can be applied on different levels of granularity - on
entire tables, pages, or even on a per-row basis on tables. For indexes, it can either be on
the entire index or on index leaves.
The level of granularity to be used is defined on a per-database basis by the
database administrator. While a fine grained locking system allows more users to use the
table or index simultaneously, it requires more resources, so it does not automatically
translate into a higher-performing solution. SQL Server also includes two more lightweight
mutual exclusion solutions - latches and spin locks - which are less robust than locks but
are less resource intensive.
SQL Server uses them for DMVs and other resources that are usually not busy.
SQL Server also monitors all worker threads that acquire locks to ensure that they do not
end up in deadlocks - in case they do, SQL Server takes remedial measures, which in
many cases is to kill one of the threads entangled in a deadlock and rollback the
transaction it started. To implement locking, SQL Server contains the Lock Manager.

The Lock Manager maintains an in-memory table that manages the database
objects and locks, if any, on them along with other metadata about the lock. Access to any
shared object is mediated by the lock manager, which either grants access to the resource
or blocks it.
SQL Server also provides the optimistic concurrency control mechanism, which is
similar to the multiversion concurrency control used in other databases. The mechanism
allows a new version of a row to be created whenever the row is updated, as opposed to
overwriting the row, i.e., a row is additionally identified by the ID of the transaction that
created the version of the row. Both the old as well as the new versions of the row are
stored and maintained, though the old versions are moved out of the database into a
system database identified as Tempdb.
When a row is in the process of being updated, any other requests are not blocked
(unlike locking) but are executed on the older version of the row. If the other request is an
update statement, it will result in two different versions of the rows - both of them will be
stored by the database, identified by their respective transaction IDs.
Data retrieval
The main mode of retrieving data from an SQL Server database is querying for it.
The query is expressed using a variant of SQL called T-SQL, a dialect Microsoft SQL
Server shares with Sybase SQL Server due to its legacy. The query declaratively specifies
what is to be retrieved. It is processed by the query processor, which figures out the
sequence of steps that will be necessary to retrieve the requested data.
The sequence of actions necessary to execute a query is called a query plan.
There might be multiple ways to process the same query. For example, for a query that
contains a join statement and a select statement, executing join on both the tables and
then executing select on the results would give the same result as selecting from each
table and then executing the join, but result in different execution plans. In such case,
SQL Server chooses the plan that is supposed to yield the results in the shortest possible
time. This is called query optimization and is performed by the query processor itself.

SQL Server includes a cost-based query optimizer which tries to optimize on the
cost, in terms of the resources it will take to execute the query. Given a query, the query
optimizer looks at the database schema, the database statistics and the system load at that
time.
It then decides in which sequence to access the tables referred to in the query, in which
sequence to execute the operations and which access method to use to access the
tables. For example, if the table has an associated index, whether the index should be
used or not - if the index is on a column which is not unique for most of the rows (low
"selectivity"), it might not be worthwhile to use the index to access the data. Finally, it
decides whether to execute the query concurrently or not.
While a concurrent execution is more costly in terms of total processor time,
the fact that the execution is actually split across different processors might mean it will execute
faster. Once a query plan is generated for a query, it is temporarily cached. For further
invocations of the same query, the cached plan is used. Unused plans are discarded after
some time.
SQL Server also allows stored procedures to be defined. Stored procedures are
parameterized T-SQL queries that are stored in the server itself (and not issued by the
client application as is the case with general queries). Stored procedures can accept
values sent by the client as input parameters, and send back results as output parameters.
They can also call other stored procedures, and can be selectively provided access
to. Unlike other queries, stored procedures have an associated name, which is used at
runtime to resolve into the actual queries. Also because the code need not be sent from
the client every time (as it can be accessed by name), it reduces network traffic and
somewhat improves performance. Execution plans for stored procedures are also cached
as necessary.
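
Calling a stored procedure from the client with ADO.NET could look like the following sketch; usp_GetMiniStatement is a hypothetical procedure name, not one defined by this project.

using System.Data;
using System.Data.SqlClient;

class StoredProcExample
{
    static DataTable GetMiniStatement(string connStr, string accId)
    {
        using (SqlConnection con = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand("usp_GetMiniStatement", con))
        {
            cmd.CommandType = CommandType.StoredProcedure;   // resolved by name on the server
            cmd.Parameters.AddWithValue("@accid", accId);    // input parameter sent by the client

            SqlDataAdapter adapter = new SqlDataAdapter(cmd);
            DataTable result = new DataTable();
            adapter.Fill(result);
            return result;
        }
    }
}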

SQL CLR
Microsoft SQL Server 2005 includes a component named SQL CLR via which it
integrates with .NET Framework. Unlike most other applications that use .NET
Framework, SQL Server itself hosts the .NET Framework runtime, i.e., memory,
threading and resource management requirements of .NET Framework are satisfied by
SQLOS itself, rather than the underlying Windows operating system.
SQLOS provides deadlock detection and resolution services for .NET code as
well. With SQL CLR, stored procedures and triggers can be written in any managed .NET
language, including C# and VB.NET. Managed code can also be used to define UDTs
which can be persisted in the database. Managed code is compiled to .Net assemblies and
after being verified for type safety, registered at the database. After that, they can be
invoked like any other procedure. However, only a subset of the Base Class Library is
available, when running code under SQL CLR. Most APIs relating to user interface
functionality are not available.
When writing code for SQL CLR, data stored in SQL Server databases can be
accessed using the ADO.NET APIs like any other managed application that accesses
SQL Server data. However, doing that creates a new database session, different from the
one in which the code is executing. To avoid this, SQL Server provides some
enhancements to the ADO.NET provider that allows the connection to be redirected to
the same session which already hosts the running code. Such connections are called
context connections and are set by setting context connection parameter to true in the
connection string. SQL Server also provides several other enhancements to the
ADO.NET API, including classes to work with tabular data or a single row of data as
well as classes to work with internal metadata about the data stored in the database. It
also provides access to the XML features in SQL Server, including XQuery support.
These enhancements are also available in T-SQL Procedures in consequence of the
introduction of the new XML Datatype (query, value, nodes functions).
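
A minimal SQL CLR stored-procedure sketch showing a context connection; the class, procedure name and query are illustrative, and the assembly registration steps (CREATE ASSEMBLY / CREATE PROCEDURE) are assumed to be done separately.

using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public class ClrProcedures
{
    [SqlProcedure]
    public static void AccountCount()
    {
        // "context connection=true" reuses the session that invoked the procedure,
        // instead of opening a separate connection back to the server.
        using (SqlConnection con = new SqlConnection("context connection=true"))
        {
            con.Open();
            SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM tblsavmain", con);
            // SqlContext.Pipe sends the result set back to the caller.
            SqlContext.Pipe.ExecuteAndSend(cmd);
        }
    }
}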

Services
SQL Server also includes an assortment of add-on services. While these are not
essential for the operation of the database system, these provide value added services on
top of the core database management system. These services either run as a part of some
SQL Server component or out-of-process as a Windows Service, and present their own APIs
to control and interact with them.

Service Broker
The Service Broker, which runs as a part of the database engine, provides a
reliable messaging and message queuing platform for SQL Server applications. Used
inside an instance, it is used to provide an asynchronous programming environment. For
cross instance applications, Service Broker communicates over TCP/IP and allows the
different components to be synchronized together, via exchange of messages.

Replication Services
SQL Server Replication Services are used by SQL Server to replicate and
synchronize database objects, either in entirety or a subset of the objects present, across
replication agents, which might be other database servers across the network, or database
caches on the client side. Replication follows a publisher/subscriber model, i.e., the
changes are sent out by one database server ("publisher") and are received by others
("subscribers"). SQL Server supports three different types of replication:

Transaction replication
Each transaction made to the publisher database (master database) is synced out to
subscribers, who update their databases with the transaction. Transactional
replication synchronizes databases in near real time.

Merge replication
Changes made at both the publisher and subscriber databases are tracked, and
periodically the changes are synchronized bi-directionally between the publisher
and the subscribers. If the same data has been modified differently in both the
publisher and the subscriber databases, synchronization will result in a conflict
which has to be resolved - either manually or by using pre-defined policies.

Snapshot replication
Snapshot replication publishes a copy of the entire database (the then-snapshot of
the data) and replicates it out to the subscribers. Further changes to the snapshot are
not tracked.

Analysis Services
SQL Server Analysis Services adds OLAP and data mining capabilities for SQL
Server databases. The OLAP engine supports MOLAP, ROLAP and HOLAP storage
modes for data. Analysis Services supports the XML for Analysis standard as the
underlying communication protocol. The cube data can be accessed using MDX queries.
Data mining specific functionality is exposed via the DMX query language. Analysis
Services includes various algorithms - decision trees, a clustering algorithm, the Naive Bayes
algorithm, time series analysis, a sequence clustering algorithm, linear and logistic
regression analysis, and neural networks - for use in data mining.

Reporting Services
SQL Server Reporting Services is a report generation environment for data
gathered from SQL Server databases. It is administered via a web interface. Reporting
services features a web services interface to support the development of custom reporting
applications. Reports are created as RDL files.
Reports can be designed using recent versions of Microsoft Visual Studio
(including Visual Studio.NET 2003 onwards) with Business Intelligence Development
Studio, installed or with the included Report Builder. Once created, RDL files can be
rendered in a variety of formats including Excel, PDF, CSV, XML, TIFF (and other
image formats), and HTML Web Archive.

Notification Services
Introduced and available only with SQL Server 2005, SQL Server Notification
Services is a platform for generating notifications, which are sent to Notification Services
subscribers. A subscriber registers for a specific event or transaction (which is registered
on the database server as a trigger); when the event occurs, Notification Services uses
Service Broker to send a message to the subscriber informing about the occurrence of the
event.

Integration Services
SQL Server Integration Services is used to integrate data from different data
sources. It is used for the ETL capabilities for SQL Server for data warehousing needs.
Integration Services includes GUI tools to build data extraction workflows integrating
various functions, such as extracting data from various sources, querying data,
transforming data (including aggregating, de-duplicating and merging data), and then loading
the transformed data onto other destinations, or sending e-mails detailing the status of the
operation.

System Analysis
Assuming that a new system is to be developed, the next phase is system analysis.
Analysis involves a detailed study of the current system, leading to specifications of a
new system. Analysis is a detailed study of the various operations performed by a system and
their relationships within and outside the system. During analysis, data are collected on
the available files, decision points and transactions handled by the present system.
Interviews, on-site observation and questionnaires are the tools used for system analysis.
All procedures and requirements must be analyzed and documented in the form of
detailed data flow diagrams (DFDs), a data dictionary, logical data structures and miniature
specifications. System analysis also includes sub-dividing the complex processes involving
the entire system, and identification of data stores and manual processes.
The important steps in system analysis are:

Specification of what the new system is to accomplish, based on the user
requirements.

Functional hierarchy showing the functions to be performed by the new
system and their relationship with each other.

Function network, which is similar to the function hierarchy but
highlights those functions which are common to more than one procedure.

List of attributes of the entities - these are the data items which need to be
held about each entity (record).

Feasibility Study

Feasibility is the determination of whether or not a project is worth doing. The
process followed in making this determination is called a feasibility study. A feasibility
study is the test of a system proposal according to its workability, its impact on the
organization, its ability to meet user needs, and its effective use of resources. The result of
a feasibility study is a formal proposal. This is simply a report - a formal document
detailing the nature and scope of the proposed solution. The main objective of a
feasibility study is to test the technical, social and economic feasibility of developing a
computer system. This is done by investigating the existing system in the area under
investigation and generating ideas about a new system. In studying the feasibility of the
system, the following major considerations are dealt with to find whether automation of the
system is feasible. They are discussed as follows:

TECHNICAL FEASIBILITY
A system that can be developed technically, and that will be used if installed, must
still be a good investment for the organization. The assessment of technical feasibility must
be based on an outline design of the system requirements in terms of inputs, outputs, files,
programs and procedures. Technical feasibility centers on the existing computer system
and the extent to which it can support the proposed system. The current technical resources
available in the organization, including the technical staff, are capable of handling the
requirements. Technical feasibility also involves investigations such as
whether the proposed system provides adequate response to inquiries and whether it can
be expanded once developed. The current project is designed to fit the
expectations of the various categories of people concerned with it. In addition, technical
experts who also have computer knowledge are to be trained on the project, enabling
them to take care of technical problems. The system is developed to meet the
demands of the existing environment, and it is also reliable and easy to use. So it is found that
this project is technically practicable, keeping the client's requirements in mind.

ECONOMIC FEASIBILITY
The technique of cost-benefit analysis is often used as a basis for assessing
economic feasibility. Economic feasibility deals with the analysis of costs against benefits,
i.e. whether the benefits to be enjoyed due to the new system are worth the costs to be
spent on the system. Economic analysis is the most frequently used technique for
evaluating the cost effectiveness of a proposed project. More commonly known as
cost/benefit analysis, the procedure is to determine whether the project will deliver benefits
and savings. Further, compared with the existing costs of the manual procedure, the current
project involves less investment.
The cost, when compared to the benefits of the system, is much lower. Hence the
system is economically feasible. The conversion of the staff from maintaining paper
records to other important work is possible, which may be taken as an added
advantage of this project. Accurate and reliable information exchange at reasonable
cost is possible. Taking this into consideration, the system is found to be economically
feasible.

OPERATIONAL FEASIBILITY
Proposed projects are beneficial only if they can be turned into information
systems that will meet the company's operating requirements. Simply stated, this test of
feasibility asks whether the system will work when it is developed and installed. There are
questions that help to test the operational feasibility of a project.
The following aspects are considered during the feasibility study:
1. The changes brought to the system.
2. The operational skills that will be required for entering data, and the training to be given.

TIME FEASIBILITY
The only point is: can the project be developed in time so that it can be used
before any new proposal comes to the company? The software is feasible with respect to
time, as it will be developed within the estimated time limit.
RESOURCE FEASIBILITY
The issue considered here is whether the developer has enough resources to
develop such software and to succeed with it, i.e. the resources that would be required to
develop and implement the software. The resources include not only hardware,
software and technology but also money and manpower. It also takes into
consideration the resources required at the client side once the software has been installed.

BEHAVIORAL FEASIBILITY
People are inherently resistant to change, and any new system brings change with it.
The introduction of a new system in place of an existing one is therefore a common reason
for resistance from people. So, for a project, behavioral feasibility is assessed in order to have
a complete picture of what problems may be faced after implementing the software.
In this software, the user is unlikely to face any such problems, as the software is highly
user friendly.

Software Requirements Specification:

Hardware Interfaces
Processor Type      : Pentium IV
Speed               : 2.4 GHz
RAM                 : 256 MB
Hard disk           : 20 GB

Software Interfaces
Operating System    : Windows 2000 / Windows XP
Programming Package : ASP.NET, C#
Front End           : Microsoft Visual Studio 2.0 (ASP.NET), Microsoft async 4.0
Back End            : MS-Access / SQL
Server              : Local Host

Code Listings:
Current Page
using System;
using System.Collections;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Web;
using System.Web.Mobile;
using System.Web.SessionState;
using System.Web.UI;
using System.Web.UI.MobileControls;
using System.Web.UI.WebControls;
using System.Web.UI.HtmlControls;
using System.Data.SqlClient;

public partial class _Default : System.Web.UI.MobileControls.MobilePage


{
SqlConnection con;
SqlCommand com;
//SqlDataAdapter da;
SqlDataReader dr;
string frm, to;
string a, d, p, q;
int b, c, o, r, s, t;
protected void Page_Load(object sender, EventArgs e)
{
con = new SqlConnection("Data Source=.;Initial Catalog=mobilebanking;User ID=sa");
if (!IsPostBack)
{
Label4.Text = DateTime.Today.ToShortDateString();
Label7.Text = DateTime.Now.ToShortTimeString();
}
}
protected void Command1_Click(object sender, EventArgs e)
{
if (TextBox1.Text == TextBox3.Text)
{
Label6.Visible = true;
Label6.Text = "the from and to account ids are the same, so the transaction cannot be performed";
}
else
{
ch();
}
}
public void from()

{
frm = "withdraw";
con.Open();
com = new SqlCommand("sp_ins41", con);
com.CommandType = CommandType.StoredProcedure;
com.Parameters.AddWithValue("@acc", TextBox1.Text.ToString());
com.Parameters.AddWithValue("@amo", TextBox2.Text.ToString());
com.Parameters.AddWithValue("@dat", Label4.Text.ToString());
com.Parameters.AddWithValue("@tim", Label7.Text.ToString());
com.Parameters.AddWithValue("@tran", frm.ToString());
com.Parameters.AddWithValue("@to", TextBox3.Text.ToString());
com.ExecuteNonQuery();
//con.Close();
}
public void to1()
{
con.Close();
to = "deposit";
con.Open();
com = new SqlCommand("sp_ins41", con);
com.CommandType = CommandType.StoredProcedure;
com.Parameters.AddWithValue("@acc", TextBox3.Text.ToString());
com.Parameters.AddWithValue("@amo", TextBox2.Text.ToString());
com.Parameters.AddWithValue("@dat", Label4.Text.ToString());
com.Parameters.AddWithValue("@tim", Label7.Text.ToString());
com.Parameters.AddWithValue("@tran", to.ToString());
com.Parameters.AddWithValue("@to", TextBox1.Text.ToString());
com.ExecuteNonQuery();
//con.Close();
}
public void upd1()
{
con.Close();
d = Convert.ToString(c);
con.Open();
com = new SqlCommand("sp_Up1", con);
com.CommandType = CommandType.StoredProcedure;
com.Parameters.AddWithValue("@acc", TextBox1.Text.ToString());
com.Parameters.AddWithValue("@q", d.ToString());
com.ExecuteNonQuery();
//con.Close();

}
public void upd2()
{
con.Close();
q = Convert.ToString(t);
con.Open();
com = new SqlCommand("sp_Up1", con);

com.CommandType = CommandType.StoredProcedure;
com.Parameters.AddWithValue("@acc", TextBox3.Text.ToString());
com.Parameters.AddWithValue("@q", q.ToString());
com.ExecuteNonQuery();
//con.Close();

}
public void fun()
{
// dr.Close();
con.Close();
from();
to1();
con.Close();
con.Open();
com = new SqlCommand("select balance from tblcurmain where accountid='" + TextBox1.Text + "'", con);
dr = com.ExecuteReader();
while (dr.Read())
{
a = dr[0].ToString();
}
//dr.Close();
//con.Close();
b = Convert.ToInt32(TextBox2.Text);
o = Convert.ToInt32(a.ToString());
c = o - b;
upd1();
con.Close();
con.Open();
com = new SqlCommand("select balance from tblcurmain where accountid='" + TextBox3.Text + "'", con);
dr = com.ExecuteReader();
while (dr.Read())
{
p = dr[0].ToString();
}
// dr.Close();
//con.Close();
r = Convert.ToInt32(TextBox2.Text);
s = Convert.ToInt32(p.ToString());
t = r + s;
upd2();
}
public void fun1()
{
fun();
}
public void check()
{

con.Close();
con.Open();
com = new SqlCommand("select * from tblcurmain where accountid = '" + TextBox3.Text.ToString() + "' ", con);
dr = com.ExecuteReader();
if (dr.Read())
{
//if (TextBox3.Text.ToString() == dr[0].ToString())
//{
fun1();
}
else
{
Label6.Visible = true;
Label6.Text = "to transfer account id is invalid";
}
// dr.Close();
// con.Close();
}
public void ch()
{
    //string ca = Session["cur"].ToString();
    con.Close();
    con.Open();
    // validate the sender's account id and password before starting the transfer
    com = new SqlCommand("select accountid,pwd from tblcurmain where accountid='" + TextBox1.Text.ToString() + "'", con);
    dr = com.ExecuteReader();
    if (dr.Read())
    {
        if (TextBox1.Text.ToString() == dr[0].ToString() &&
            TextBox4.Text.ToString() == dr[1].ToString())
        {
            bal();
        }
        else
        {
            Label6.Visible = true;
            Label6.Text = "invalid Account Number or Password";
        }
    }
    else
    {
        Label6.Visible = true;
        Label6.Text = "invalid Account Number or Password";
    }
    dr.Close();
    con.Close();
}
public void bal()
{
    int u, u1;
    con.Close();
    con.Open();
    // fetch the sender's current balance and compare it with the transfer amount
    com = new SqlCommand("select balance from tblcurmain where accountid='" + TextBox1.Text.ToString() + "'", con);
    dr = com.ExecuteReader();
    if (dr.Read())
    {
        u = Convert.ToInt32(dr[0].ToString());
        u1 = Convert.ToInt32(TextBox2.Text);
        if (u < u1)
        {
            Label6.Visible = true;
            Label6.Text = "your balance is too low for this transaction";
        }
        else
        {
            check();
            Label6.Visible = true;
            Label6.Text = "your transaction completed successfully";
        }
    }
    else
    {
        Label6.Visible = true;
        Label6.Text = "invalid account id";
    }
    //dr.Close();
    //con.Close();
}
}
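Note that, apart from the stored procedure calls, the listing above builds its SELECT statements by concatenating the text box values directly into the SQL text, which leaves the page open to SQL injection. A safer pattern, sketched here against the same table and column names used in the listing, is to pass the account id as a parameter, just as the sp_ins41 and sp_Up1 calls already do:

// Hedged sketch: parameterized form of the balance lookup used in fun() and bal().
com = new SqlCommand("select balance from tblcurmain where accountid = @acc", con);
com.Parameters.AddWithValue("@acc", TextBox1.Text);
dr = com.ExecuteReader();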

view Page
using System;
using System.Collections;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Web;
using System.Web.Mobile;
using System.Web.SessionState;
using System.Web.UI;
using System.Web.UI.MobileControls;
using System.Web.UI.WebControls;
using System.Web.UI.HtmlControls;
using System.Data.SqlClient;

public partial class cur1 : System.Web.UI.MobileControls.MobilePage


{
SqlConnection con;
SqlCommand com;
SqlDataAdapter da;
//SqlDataReader dr;
DataSet ds = new DataSet();
protected void Page_Load(object sender, EventArgs e)
{
con = new SqlConnection("Data Source=.;Initial Catalog=mobilebanking;User ID=sa");
if (!IsPostBack)
{
string ac, da1;
ac = Session["accid1"].ToString();
da1 = Session["dat1"].ToString();
con.Open();
com = new SqlCommand("select tim Time, trans Account_Id, amount Amount, trandetail Transaction_Type from tblcurtran where dat='" + da1.ToString() + "' and accid='" + ac.ToString() + "'", con);
da = new SqlDataAdapter(com);
da.Fill(ds);
ObjectList1.DataSource = ds;
ObjectList1.DataBind();
con.Close();
}
}
}
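The transaction-history query above could likewise be written with parameters so that the session values are never concatenated into the SQL text; a minimal sketch under the same table and column names:

// Hedged sketch: parameterized form of the history query in Page_Load.
com = new SqlCommand("select tim Time, trans Account_Id, amount Amount, trandetail Transaction_Type from tblcurtran where dat = @dat and accid = @acc", con);
com.Parameters.AddWithValue("@dat", da1);
com.Parameters.AddWithValue("@acc", ac);
da = new SqlDataAdapter(com);
da.Fill(ds);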

System Testing
Testing Stages
There are four main testing stages:
Unit Testing
Integration Testing
System Testing
User Acceptance Testing

Unit Testing:
This test demonstrates that a single program, module or unit of code functions as
designed. Unit testing is normally white-box oriented, and this step can be conducted
in parallel for multiple modules.
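As a simple illustration, a unit test for the transfer arithmetic might look like the sketch below; it assumes a hypothetical TransferCalculator helper that isolates the balance calculation from the page code and uses the NUnit framework.

using NUnit.Framework;

// Hypothetical helper that isolates the transfer arithmetic from the page code.
public class TransferCalculator
{
    public int Debit(int balance, int amount) { return balance - amount; }
    public int Credit(int balance, int amount) { return balance + amount; }
}

[TestFixture]
public class TransferCalculatorTests
{
    [Test]
    public void Debit_ReducesBalanceByAmount()
    {
        TransferCalculator calc = new TransferCalculator();
        Assert.AreEqual(700, calc.Debit(1000, 300));
    }

    [Test]
    public void Credit_IncreasesBalanceByAmount()
    {
        TransferCalculator calc = new TransferCalculator();
        Assert.AreEqual(1300, calc.Credit(1000, 300));
    }
}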

Integration Testing:
This test is done to validate that the multiple parts of the system interact according to
the system design. Each integrated portion of the system is tested together with the other
parts of the system. The objective is to take unit-tested modules and build the program
structure that has been dictated by the design.

System testing:
This test simulates operation of the entire system and confirms that it runs
correctly. The total system is also tested for recovery and fall-back after various major
failures, to ensure that no data is lost during an emergency.

User Acceptance Testing:


Internal staff, customers, vendors or other users interact with the system to ensure
that it will function as desired in accordance with the system requirements. An acceptance test
has the objective of selling the user on the validity and reliability of the system. It verifies
that the system's procedures operate to the system specification and that the integrity of vital
data is maintained.

Conclusion
The software was successfully developed to meet the needs of the client.
It was found to provide all the features required by the organization. The accuracy
and completeness of the software are also ensured.
The system provides benefits such as a user-friendly environment, effective
problem resolution and powerful search mechanisms. There is no limit on the number of
concurrent users.
Apart from the above benefits, the system also holds the benefits provided by the
technologies used in the development. They are:

Flexibilities
The system is flexible in the sense that the changing requirements of the
user can easily be added to the application, thereby keeping the application up to date in the
future too.

Since the design of the screens uses .NET technology, anyone who
knows the .NET design steps can continue the work from the point at which someone else
left off.
Since the system is a web-based one, the client can access the very same server
from anywhere in the globe.

Enhancements
All software products aim at a lesser degree of maintenance. This is quite natural,
but enhancements also pour in over the course of time, which is unavoidable. Better
technologies, developers aiming for sophistication, and the increasing needs of customers are
all part and parcel of the software.

Reference:
BIBLIOGRAPHY
ASP.NET Data Web Controls - Scott Mitchell
ASP.NET - Stephen Walther
ASP.NET for Web Designers - Peter Ladka
C#: The Complete Reference - Herbert Schildt
SQL: The Complete Reference, Second Edition - James R. Groff & Paul N. Weinberg
Transact-SQL Language Reference Guide - published by www.DyessConsulting.Com
