
Copyright 2008 ISACA. All rights reserved. www.isaca.org.

The Art of Database Monitoring


By Sushila Nair, CISA, CISSP, BS 7799 LA

Most of a company's business-critical data is stored in databases. Losing data confidentiality, availability or integrity can cost a company dearly in lost sales, damaged reputation and litigation costs. Best practices and, in many cases, government regulations mandate the use of controls to adequately safeguard business data. This article describes why a company should (or must) use database monitoring as a vital part of its security controls and how it should go about implementing it.

Sources of Data Breaches

According to a 2007 study,1 85 percent of businesses have experienced a data security breach. The survey also found that about 23 million adults had been notified that their data were compromised or lost; of those, 20 percent terminated their accounts immediately after notification, while another 40 percent were considering termination at the time of the survey. The estimated breaches cost US $182 per compromised record; data breaches remain the leading cause of financial losses. Using data from Privacy Rights Clearinghouse, a web site devoted to maintaining a record of all data security breaches in the US, laptops are the number one source of data breach incidents (47 percent), databases are next (40 percent), then tapes (11 percent) and e-mail (2 percent).2 Looking at the same data based on the amount of data lost (see figure 1), databases are the number one source (64 percent), laptops are next (25 percent), then tapes (10 percent) and finally e-mail (1 percent).

Monitoring is fairly common at the network layer, but monitoring at the application layer remains relatively rare. A survey conducted by Application Security and the Ponemon Institute at the 2007 Gartner IT Security Summit revealed that 40 percent of companies are not monitoring their databases for suspicious activity. According to the survey of 649 IT professionals (60 percent in chief information officer [CIO] or chief technology officer [CTO] positions), 78 percent of respondents said their databases are critical or important to their business and contain customer data.3 It is clear that, while loss of data through corporate databases represents a major risk for most organizations, very few have adequate controls in place to monitor data policy violations or attacks. This may be due to the fact that using the raw auditing capability for monitoring has been associated with performance degradation and there have been few viable alternative solutions.

Regulatory Requirements for Monitoring

The requirement to monitor log files is highlighted within best practices such as Control Objectives for Information and related Technology (COBIT) and industry requirements such as the Payment Card Industry (PCI) Data Security Standard (DSS). Some of the major regulations/requirements (PCI, the US Health Insurance Portability and Accountability Act [HIPAA], Title 21 Code of Federal Regulations [CFR] Part 11 of the US Food and Drug Administration guidelines, the US Gramm-Leach-Bliley Act, the North American Electric Reliability Corporation [NERC] Critical Infrastructure Protection [CIP] standards, the US Federal Information Security Management Act [FISMA] and the US Sarbanes-Oxley Act) and the pertinent log file requirements are compared in figure 2.

Figure 1: Sources of Data Breaches in the US by Volume of Data Lost (databases 64 percent, laptops 25 percent, tapes 10 percent, e-mail 1 percent)

Figure 2: Regulations and Log File Requirements, comparing PCI, HIPAA, 21 CFR Part 11, GLBA, NERC, FISMA and Sarbanes-Oxley requirements for regular review of logs (daily to at least monthly), online and offline retention of logs (ranging from 90 days to several years) and backup of audit trails to separate media

The US Securities and Exchange Commission approved the Public Company Accounting Oversight Board's (PCAOB) Auditing Standard No. 5 on 25 July 2007. Auditors use it to assess management's internal control over financial reporting in complying with the Sarbanes-Oxley Act.

As a result of these regulations, there is a growing focus on using enhanced continuous control monitoring (CCM) and continuous control auditing tools, which should reduce compliance costs and provide business efficiency.4 One way to automate application controls that are being checked manually is to use the information in log files to implement CCM.

Monitoring as a Service

Most network devices support the Syslog protocol to redirect their log files to a central log server. Applications and databases generally do not support Syslog, although Oracle 10g release 2 now supports writing audit records to the operating system using a Syslog audit trail,5 and other database vendors should follow suit. Databases generally store their audit information in a table within one of their system databases. An agent or script can be run to watch this table and convert any entries written to the audit table into Syslog, so they can be forwarded to a central logging server. Instead of using the native auditing capability of the database, which may impact performance, it is possible to use database application monitoring appliances to monitor the database for activity (this is described in more depth later in the article). The majority of these appliances support Syslog, enabling the information generated to be forwarded to a central log server.
It is possible to outsource monitoring to a managed security provider, enabling experts to help set up a robust and flexible monitoring infrastructure. Managed security monitoring (MSM) providers aggregate, correlate, analyze and store the log data to give organizations overall visibility into their network security, and work with customers to improve their incident response. At the same time, MSM providers can help satisfy auditors, as they provide a level of objectivity and are experienced in producing the reports that auditors require. Regulatory pressures, from legislation such as Sarbanes-Oxley and HIPAA to individual industry requirements, make log management and visibility into user access of systems and applications critical.

Database Monitoring Solutions

The solutions for monitoring databases are relatively new and yet form an important component in the drive toward automated CCM. The ability to be alerted if there is a violation of policy at the application layer is extremely important; it may be perfectly normal for a user to look at one credit card record, but an alert should be generated when someone accesses or downloads 90 million credit card records and associated account information. The requirement to protect data in this fashion can be expressed through a policy; for instance, there should be no access to the data unless the user accesses the system through an approved application and, hence, is constrained by the segregation of duties and controls within the application. This is an extremely important rule that requires monitoring, reporting and, if necessary, alert generation.
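To make the idea of a policy-driven alert concrete, the sketch below expresses the rule described above (card data should be touched only through the approved application, and bulk reads should raise an alert) as a small function. The field names, account list and threshold are illustrative assumptions, not any product's policy language.

# Illustrative sketch of a policy rule: flag access to card data that does
# not come through the approved application account, and flag bulk reads.
# Field names, accounts and thresholds are hypothetical.
APPROVED_APP_ACCOUNTS = {"crm_app"}   # accounts used by sanctioned applications
BULK_ROW_THRESHOLD = 1000             # alert when a single query returns more rows

def evaluate_policy(event):
    """Return a list of policy violations for one observed database event.

    `event` is a dict such as:
      {"db_user": "crm_app", "rows_returned": 12, "tables": ["cards"]}
    """
    violations = []
    if "cards" in event.get("tables", []):
        if event.get("db_user") not in APPROVED_APP_ACCOUNTS:
            violations.append("card data accessed outside the approved application")
        if event.get("rows_returned", 0) > BULK_ROW_THRESHOLD:
            violations.append("bulk access to card data")
    return violations

# A single record viewed through the application raises no alert,
# while a large direct extract raises two.
print(evaluate_policy({"db_user": "crm_app", "rows_returned": 1, "tables": ["cards"]}))
print(evaluate_policy({"db_user": "jsmith", "rows_returned": 90_000_000, "tables": ["cards"]}))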
The requirement for database protection is further exacerbated by home-grown applications. Good development practices reduce the number of coding errors, but they do not remove them. Limited resources, human error or insufficient testing result in applications being deployed with serious vulnerabilities. The burden of security cannot lie simply with the programmers; there is a vital need to move from network monitoring into the application layer. Industry analysts predict the growth of database application monitoring (DAM) and DAM appliances, and expect them to become as commonplace as intrusion detection system technology is today.

Most large organizations are struggling to know what data they have and where the data are located, making it virtually impossible to protect the information. Database monitoring projects need to start by determining what data are out there and what applications are touching the database. Then, the policy can be built from there. "It is not a trivial question to know where data are, so some of the database monitoring vendors have a discovery capability built into their appliance, which looks for any SQL traffic and highlights what databases there are and which ones are storing sensitive data. The database itself is not intelligent enough to see suspicious activity over the wire or if an authorized user is executing a command a million times; that is why you have to have these tools," said Noel Yuhanna, principal analyst at Forrester Research.6

Traditionally, enterprises have built tools and operational procedures to monitor database activity. This usually leverages some form of the auditing capabilities packaged with the database itself and entails writing some tools around those. This approach, over time, has become insufficient, as it is not policy-driven, selective, scalable and manageable across the heterogeneous data environments that exist today.

An emerging group of hardware and software vendors has begun to address this problem with out-of-the-box solutions. These products take a variety of approaches to the problem of monitoring database activity. There are three fundamental approaches to addressing this issue:

• A software-only approach: Database activity monitoring with a software agent requires installing and configuring software on each database host. This approach typically requires turning on some level of native database auditing from which the software agent gathers data. The challenge with this approach is that environments experiencing many database transactions may object to enabling native database auditing, as it may impact overall database performance. Many software-only solutions also do not have a centralized management, reporting and configuration console.

• A network appliance: A relatively new approach to database monitoring is to use a network appliance to monitor database traffic. These appliances either run as passive devices connected to a mirroring or Switched Port Analyzer (SPAN) port on a switch, or act as in-line devices, i.e., essentially database firewalls. The primary difference is that appliances acting as in-line devices are a point of failure in the data environment; any in-line device that goes down could result in lost business or database transactions and general application downtime. Most vendors support either in-line or passive modes. In-line mode may be regarded as most secure, since there is no way a hacker could bypass the appliance to gain access to the database. Passive devices tend to be simple to deploy, not impacting the data environment in any way.

• A combination of the above: A combination of network appliance and local software auditing is an ideal way to address data activity monitoring in an enterprise. This maximizes the overall coverage of the auditing solution. Network database traffic can be captured by the network appliance, and local host database traffic (e.g., a database administrator accessing the database directly) can be audited through the local software agent or native database auditing. The collection of this data is centralized and analyzed in one place, ensuring centralized policy management, reporting and configuration.
The majority of the network-based devices function simply by watching the SQL traffic and interpreting the activity. Though SQL is a standard, various vendors' implementations are slightly different, so the database monitoring devices are platform-specific. It is best to select a device that supports all of the database platforms in-house, so there is no danger of ending up with multiple monitoring platforms and the problems inherent in having diverse solutions. None of the network-based approaches, however, monitors local access, as only network traffic is being observed.

Database Monitoring Limitations


There are four main problems with database monitoring:
• Stored procedures and triggers
• Encrypted network traffic
• Connection-pooled environments
• Support for MSM or security incident and event management (SIEM) systems
Stored procedures are a key part of an overall database
management system (DBMS) architecture. Auditing and
monitoring stored procedures can be a challenge to some
vendors, as stored procedures generate SQL at runtime. The
stored procedure is a database object and is executed inside
the container of the database, which means visibility of the
SQL is available only inside the database, not over the
network. Monitoring the SQL contained within a stored
procedure requires local monitoring. Network-based
monitoring can see that the procedure was called and the
results of running the stored procedure, but not what
commands were used within the procedures. By ensuring that
the correct change control is in place with tight controls on
the changes to applications in the production environment and
generating reports on the creation of new stored procedures,
the fact that the individual SQL statements are not audited
within a stored procedure may not pose a risk. Triggers have
the same challenge as they are native to the database. Native
auditing is the only way to know the details of what is
happening inside stored procedures and triggers.
Another database auditing challenge arises when database traffic is encrypted: network-based monitoring cannot read the encrypted SQL on the wire.
Database monitoring appliances can address these challenges only by using an agent or agentless local auditing. The vendor may provide an agent to read the raw audit data from the database itself and then forward the information to the appliance. The agent most likely sets up finely tuned audit policies to monitor local access by privileged users and other events that the organization needs to audit. The agentless solution still requires local auditing to be turned on but, instead of software running on the host to forward the events to the database monitoring solution, the appliance logs in to retrieve the audit information. This highlights the need for a three-way auditing approach: one that leverages the selective native audit capabilities of the database, a network appliance and, where necessary, a software agent.
Another problem with database monitoring lies with connection-pooled environments, where authentication information is not passed along to the database. The application uses a configured account on the web server to communicate with the database server. In the database log or in the traffic, the name of the configured account used by the web server is shown rather than the actual user ID. Well-written web applications set some kind of client ID, sending the login details of the account along with the other details. This enables the actual user ID to be shown in the log files and on network monitoring devices. Home-grown applications, or applications that have not used a facility to set the client ID, do not record the actual user's login details. Some vendors have solved this problem by creating agents installed on the web servers that communicate with the monitoring devices to ensure that the user's identity is recorded. MSM service providers can correlate web log files with the database log files, and information such as the SPID (SQL Server's internal process ID) enables the username to be identified.
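For Oracle databases, one widely used way to preserve the end user's identity on a pooled connection is for the application to set a client identifier on the session before running work, so that audit records and monitoring tools can report the real user. The sketch below assumes an Oracle database reached through a DB-API driver such as cx_Oracle; the pool object and the web_user value are placeholders supplied by the application.

# Sketch: tag a pooled database session with the real end user's identity
# before executing work on that user's behalf. Assumes an Oracle database and
# a DB-API driver (e.g., cx_Oracle); `pool` and `web_user` are placeholders.
def run_query_as(pool, web_user, sql, params=None):
    conn = pool.acquire()                     # shared connection from the pool
    cur = conn.cursor()
    try:
        # DBMS_SESSION.SET_IDENTIFIER records the end user in the session's
        # client identifier, which native auditing and DAM tools can report on.
        cur.callproc("DBMS_SESSION.SET_IDENTIFIER", [web_user])
        cur.execute(sql, params or {})
        return cur.fetchall()
    finally:
        # Clear the identifier so the next borrower of this pooled
        # connection is not misattributed.
        cur.callproc("DBMS_SESSION.CLEAR_IDENTIFIER")
        cur.close()
        pool.release(conn)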
Most DAM solutions have not been developed with
supporting existing monitoring frameworks in mind. The
alerts and messages that come from the appliance are not
coded in any way or intelligible to anyone other than the
database administrator. The fact that there is no clear labeling
of the type of alert detected means that an SIEM or an MSM
service cannot take the message and correlate it with other log
messages to be able to identify the alert as a symptom of a
wider attack. A DAM tool that is unable to integrate into a
wider monitoring framework would result in a fragmented
approach to monitoring, which would create islands of
SIEM tools.
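Much of this integration gap is a labeling problem: if each alert carries a clear event type and severity in a machine-readable form, a SIEM or MSM service can parse it and correlate it with other logs. The snippet below sketches one possible key=value log format; the product name, categories and fields are illustrative, not any vendor's schema.

# Sketch of a machine-readable alert format that a SIEM could parse and
# correlate. The category names and fields are illustrative only.
from datetime import datetime, timezone

def format_alert(category, severity, db_host, db_user, detail):
    """Render a DAM alert as a single key=value log line."""
    fields = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "product": "dam-example",
        "category": category,   # e.g., SQL_INJECTION, BULK_EXPORT, POLICY_VIOLATION
        "severity": severity,    # e.g., 1 (low) to 5 (high)
        "db_host": db_host,
        "db_user": db_user,
        "detail": detail,
    }
    return " ".join(f'{key}="{value}"' for key, value in fields.items())

print(format_alert("BULK_EXPORT", 5, "db01", "jsmith",
                   "select returned 90000000 rows from cards"))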
Common actions that a monitoring appliance can be configured to take include blocking transactions that violate policy (using a TCP reset), automatically logging out users or shutting down the virtual private network.

Attack Recognition
Attack recognition generally is done by pattern matching,
anomaly detection or rule creation.
Pattern matching refers to matching known patterns of data as they pass through or are returned by the database. The most common examples are (US) Social Security numbers and credit card numbers. Some database appliances include credit card validation code to reduce false positives.
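A minimal sketch of this kind of pattern matching follows: a regular expression finds candidate card numbers, and a Luhn checksum (the validation step mentioned above) discards random digit strings to reduce false positives. It is illustrative only, not an appliance's actual implementation.

# Sketch: find candidate payment card numbers and validate them with the
# Luhn checksum to cut down on false positives. Illustrative only.
import re

CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number):
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    digits.reverse()
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:           # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text):
    """Return substrings that look like card numbers and pass the Luhn check."""
    return [m.group(0) for m in CARD_CANDIDATE.finditer(text) if luhn_ok(m.group(0))]

print(find_card_numbers("card 4111 1111 1111 1111 ok, 1234 5678 9012 3456 rejected"))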
Anomaly detection, or behavioral fingerprinting, refers to detecting behavior that the monitoring software defines as not normal. Most database appliances are put into an observation mode for a length of time so that they can baseline activity. Database activity tends to follow regular patterns, so this kind of alerting tends to be reasonably accurate. Some platforms have a feature called intelligent learning, whereby the appliance learns new sets of behavior; this is known as a behavioral dynamic and normally operates over a 30-day rolling period.
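The sketch below shows the baselining idea in its simplest form: learn how many statements each account typically issues per hour during the observation period, then flag hours that deviate sharply from that baseline. Real appliances profile many more dimensions (statement types, objects touched, time of day); the threshold and data structures here are illustrative assumptions.

# Sketch of baseline-and-deviate anomaly detection: learn the typical hourly
# statement count per database account, then flag large deviations.
# Thresholds and data structures are illustrative only.
from statistics import mean, pstdev

def build_baseline(history):
    """history: {user: [hourly statement counts observed during learning mode]}"""
    return {user: (mean(counts), pstdev(counts)) for user, counts in history.items()}

def is_anomalous(baseline, user, observed_count, sigmas=3.0):
    """Flag activity more than `sigmas` standard deviations above the learned mean."""
    if user not in baseline:
        return True                      # unknown account: treat as anomalous
    avg, sd = baseline[user]
    return observed_count > avg + sigmas * max(sd, 1.0)

learning = {"app_account": [120, 130, 115, 125], "dba_jane": [5, 3, 4, 6]}
baseline = build_baseline(learning)
print(is_anomalous(baseline, "dba_jane", 4))      # normal administrative activity
print(is_anomalous(baseline, "dba_jane", 500))    # sudden spike is flagged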

Rule creation allows the organization to define rules according to its policy of what information should be monitored. For example, it would be possible to report on additions or deletions to the master vendor table to ensure that enterprising employees have not added themselves as suppliers.

Signature-based detection refers to SQL commands that are associated with common attacks. MasterCard lists SQL injection attacks as the primary source of credit card security breaches.7 In an SQL injection attack, the attacker commonly uses UNION-based statements, and the monitoring tool detects the SQL statements commonly associated with this attack.
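A toy version of such signature matching follows: a few regular expressions catch SQL fragments commonly seen in UNION-based injection, and the same mechanism can carry custom rules such as the vendor-table example above (the vendor_master table name is hypothetical). Production signature sets are, of course, far more extensive.

# Sketch of signature-based detection over observed SQL statements.
# The patterns are deliberately simple examples, not a complete signature set.
import re

SIGNATURES = {
    "union_injection": re.compile(r"\bunion\b\s+(all\s+)?select\b", re.IGNORECASE),
    "comment_truncation": re.compile(r"(--|/\*)"),
    "vendor_master_change": re.compile(r"\b(insert\s+into|delete\s+from)\s+vendor_master\b",
                                       re.IGNORECASE),
}

def match_signatures(sql_text):
    """Return the names of signatures that the statement triggers."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(sql_text)]

print(match_signatures("SELECT name FROM products WHERE id = 1 "
                       "UNION SELECT username, password FROM users --"))
print(match_signatures("DELETE FROM vendor_master WHERE vendor_id = 42"))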
The way the data are reported from the monitoring
platform is extremely important. It is crucial to build a
monitoring framework. Network devices should send their log
files to a central log collector, and the information should be
used within a security incident and event environment.
Security breaches generally involve more than one system,
and centralizing log files enables the managed security
monitoring service to correlate the information and look for
attacks across multiple platforms. For example, if an SQL
injection attack starts out at the web server and then comes
out of the database, monitoring only one layer or the other
does not give the full picture.
Having a separate monitoring solution for every platform would increase costs and complexity. The database monitoring solution that an organization selects should be
able to plug into the existing monitoring framework. The
monitoring framework should be scalable and support
multiple platforms, including applications. Monitoring is a
24x7, 365-day-a-year service.
The majority of business-critical data is stored in
databases. Applications are written by human beings and,
therefore, no matter how good the development practices are
in an organization, they are subject to human error. It is
insufficient to rely on change control and application testing
to ensure that applications are not vulnerable to attack. Good
development practices do not necessarily protect organizations
from risks that result from a violation of policy or an internal
breach. Security must be approached as a layered defense, and the final layer must always be monitoring. It is
imperative for organizations that need to protect personal or
confidential data to build a monitoring framework that
extends beyond the network layer into the application layer,
and DAM is a key component within the framework.

Endnotes
1 Ponemon Institute, Survey on the Business Impact of Data Breach, commissioned by Scott & Scott LLP, www.scottandscottllp.com and www.ponemon.org
2 Privacy Rights Clearinghouse, "Chronology of Data Breaches," www.privacyrights.org/ar/ChronDataBreaches.htm
3 Gaudin, Sharon; "Despite Deluge of Data Losses, 40% Don't Monitor Databases," InformationWeek, 5 June 2007, www.informationweek.com/management/showArticle.jhtml?articleID=199900995&cid=RSSfeed_IWK_News
4 Committee of Sponsoring Organizations of the Treadway Commission, Internal Control - Integrated Framework, Guidance on Monitoring Internal Control Systems, September 2007, www.coso.org/Publications/COSO_Monitoring_discussiondoc.pdf
5 Oracle support for Syslog can be found at http://download-uk.oracle.com/docs/cd/B19306_01/network.102/b14266/whatsnew.htm#i970212.
6 Roiter, Neil; "Compliance, Data Breaches Heighten Database Security Needs," Information Security Magazine, 16 August 2007
7 MasterCard Worldwide, Site Data Protection, Program Update, 9 October 2006, www.mastercard.com/us/sdp/assets/pdf/SDP_Presentation.pdf

Author's Note:
The author would like to thank Scott B. Smith and Jenna Sindle for editorial and technical assistance.
Sushila Nair, CISA, CISSP, BS 7799 LA
is a product manager at BT Counterpane, responsible for
compliance products. Nair has 20 years of experience in
computing infrastructure and business security and a diverse
background including work in the telecommunications sector,
risk analysis and credit card fraud. She has worked with the
insurance industry in Europe and the US on methods of
quantifying risk for e-insurance based on ISO 27001. She was
instrumental in creating the first banking group in Malaysia
focused on using secondary authentication devices for
banking transactions. Nair has worked extensively with
customers of BT to develop monitoring solutions that meet
the needs of regulatory compliance, including that of the
Payment Card Industry Data Security Standard.

Information Systems Control Journal is published by ISACA. Membership in the association, a voluntary organization serving IT governance professionals, entitles one to receive an annual subscription to
the Information Systems Control Journal.
Opinions expressed in the Information Systems Control Journal represent the views of the authors and advertisers. They may differ from policies and official statements of ISACA and/or the IT
Governance Institute and their committees, and from opinions endorsed by authors' employers, or the editors of this Journal. Information Systems Control Journal does not attest to the originality of
authors' content.
© 2008 ISACA. All rights reserved.
Instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. For other copying, reprint or republication, permission must be obtained in writing from the
association. Where necessary, permission is granted by the copyright owners for those registered with the Copyright Clearance Center (CCC), 27 Congress St., Salem, Mass. 01970, to photocopy articles
owned by ISACA, for a flat fee of US $2.50 per article plus 25¢ per page. Send payment to the CCC stating the ISSN (1526-7407), date, volume, and first and last page number of each article.
Copying for other than personal use or internal reference, or of articles or columns not owned by the association without express permission of the association or the copyright owner is expressly
prohibited.
www.isaca.org
