
The current issue and full text archive of this journal is available at www.emeraldinsight.com/0968-5227.htm

IMCS
17,3

248
Received 12 September 2008
Revised 11 November 2008
Accepted 14 November 2008

Determinants of the critical success factor of disaster recovery planning for information systems
Wing S. Chow and Wai On Ha
Department of Finance and Decision Sciences, Hong Kong Baptist University,
Kowloon Tong, Hong Kong, China
Abstract
Purpose – Recent disaster recovery planning (DRP) literature has mainly focused on qualitative
research, while neglecting the quantification of critical success factors (CSFs) for the information
systems function (ISF). This paper aims to address this issue.
Design/methodology/approach – This paper first conducts an extensive literature review, and
then identifies 62 DRP measurement items for ISF. A questionnaire survey, based on these 62
measurement items, is used for data collection, and 129 managers of DRP in ISF participate in this
study.
Findings – Through the use of convergent factoring analysis, this paper identifies ten DRP CSFs for
ISF. They are: DRP documentation; DRP steering committee and DRP testing; DRP policy and goals;
DRP training; DRP maintenance and staff involvement; DRP minimum IS processing requirements;
top management commitment to DRP; prioritization of IS functions/services; external, off-site back-up
systems; and internal, on-site back-up systems.
Originality/value – This paper determines success factors based on a set of decision variables
gathered from an extensive literature review of DRP in information systems.
Keywords Information systems, Critical success factors, Disasters
Paper type Literature review

Information Management & Computer Security
Vol. 17 No. 3, 2009
pp. 248-275
© Emerald Group Publishing Limited
0968-5227
DOI 10.1108/09685220910978103

1. Introduction
People have typically associated disasters with catastrophic events such as fires,
hurricanes, and tornadoes (Rosenthal and Himel, 1991; Blake, 1992; Krousliss, 1993).
Wrobel (1997) summarizes that disasters can be caused by three different phenomena,
namely natural causes, human error, and intentional causes. The definition of a
disaster is thus not solely confined to natural catastrophic events like the Tsunami in
South-east Asia in 2004 or Hurricane Katrina on the Gulf Coast of the USA in 2005, but
may also include any event that significantly affects the operation of an organization,
such as human error in data entry, or intentional acts like the September 11 attacks in
the USA in 2001. The process of disaster recovery planning (DRP) sets out a series of
strategies that enable the resumption of critical functions for an organization to an
acceptable level of service in the event of any of these various types of disaster.
DRP is considered to be one of the most critical management issues for both private
and public organizations in our present age of digitalization. The reason for this is
that a halt of any information systems function (ISF) service may be truly devastating
for the operational capacity and reputation of an organization (Elstien, 1999).

Putting an effective DRP in place can reduce the severity of a potential disaster as well
as allaying staff anxiety, and can speed up the recovery process where
necessary (Pember, 1996; Rutherford and Myer, 2000).
DRP is not a new concept for large organizations around the world, because many
governments ordered organizations within their borders to implement DRP systems
for their ISF services, so that their computer systems could survive a possible Y2K
bug. To date, much anecdotal evidence has been published supporting the significance
of DRP for ISF. The published evidence includes that on IT disaster preparedness by
Nelson (2006) and Botha and von Solms (2004), the contingency planning guide for
information technology systems by the US National Institute of Standards and
Technology, and the ISO/IEC 27000 series of international standards that are reserved
for a family of information security management standards. However, crucially
missing from the DRP literature is any overview study that refers to the determination
of DRP critical success factors (CSFs) for ISF, based on relevant measurement items
collected from an extensive literature review.
The objectives of this paper are three-fold: to provide a literature review on the
critical success factors for DRP, to outline a methodology for assessing the CSFs in
an organization based on the perceived importance of DRP managers at a given time,
and to provide empirical research to substantiate the previous two objectives.
The following sections will review the DRP literature and relevant DRP
measurement items, discuss the model development, present the research
methodology, and discuss the findings, before offering a conclusion.
2. Disaster recovery planning
There is much literature documenting the success factors of DRP implementation.
These success factors, based on either an organizational or departmental level
perspective, are described in terms of events, items, components, operational steps, or
constructs.
At the organizational level, Francis (1993) examines the DRP process of Chi/Cor
Information Management, Inc. and reports that DRP implementation should comprise
the following ten processes: project organization, business impact analysis, security
review, strategy development, back-up and recovery, alternative site selection, disaster
recovery plan development, testing, maintenance, and periodic audit. Dwyer et al.
(1994) summarize nine DRP success factors from the reference guidelines of the
systems auditability and control report, namely organizing and managing the project,
performing organizational impact analysis, determining minimum processing
requirements, analyzing risks, prioritizing tasks to recover, analyzing alternatives
and selecting strategy, developing the plan, testing the plan, and maintaining the plan.
Similarly, Smith and Sherwood (1995) propose nine DRP success factors, namely policy
statement, planning responsibilities, incident management responsibility, business
impact analysis, recovery strategies, training and awareness, testing, maintenance and
review, and documentation. One study by Vartabedian (1999) details a total of 14
successful operation events for implementing DRP: problem recognition; need
justification; management support; dollar, time, and resource commitment; recovery
team selection; business impact analysis; risk analysis; plan content; responsibilities of
the recovery team; back-up procedures; disaster implementation task; post-plan
activities; testing; and maintenance.

CSF of DRP
for information
systems
249


At the departmental level, Wong et al. (1994) propose that an effective DRP for ISF
should consist of nine procedural steps: obtaining top management commitment,
establishing a planning commitment, performing risk and impact analysis, prioritizing
recovery needs, selecting a recovery plan, selecting a vendor and developing
agreement, developing and implementing the plan, testing the plan, and continually
testing and evaluating the plan. Blatnik (1998) describes the DRP implementation for
ISF as including nine successful events, namely enforcement of policy, threat analysis,
back-up strategies, training, testing, documentation, regular reviews, regular updates,
and ISF personnel participation. Adam (1999) suggests that DRP implementation in
human resources departments should take into account the following DRP
components: compiling an emergency schedule, assigning an emergency crew,
establishing an alternative site, training, back-up, and updating the plan. For a simple
DRP for use in human resources departments, Solomon (1994) proposes three basic
elements: assessment of the vulnerabilities and resources, examination of
communication options, and exploration of alternative locations. In a study of DRP
implementation within small and large accounting firms, Ivancevich et al. (1997) state
that the following measurement items can be used to ensure a successful DRP system:
management support, risk analysis, fire and water insurance, identification of critical
application, alternative site processing, off-site storage, file back-up procedures,
communication policies to handle disruption of phone line or cables, testing the
plan, documentation of immediate action, documentation of the plan, and updating
the plan.
3. DRP constructs and their measurement items
This section presents an extensive literature review on DRP constructs for ISF and
their measurement items. Table I shows a comprehensive list of DRP constructs
gathered from the IS-related literature. In this table, we have identified 14 DRP
constructs and surveyed their measurement items. These
constructs are as follows:
(1) obtaining the ISF top management commitment for DRP (hereafter, top
management commitment);
(2) establishing ISF policy and goals for DRP (policy and goals);
(3) forming the ISF steering committee for DRP (steering committee);
(4) performing IS risk assessment and impact analysis for DRP (risk assessment
and impact analysis);
(5) prioritizing IS functions for DRP (prioritization);
(6) determining the minimum IS processing requirement for DRP (minimum
processing requirement);
(7) selecting IS alternative sites for DRP (alternative site);
(8) developing IS back-up storage systems for DRP (back-up storage);
(9) forming an IS recovery team for DRP (recovery team);
(10) training ISF staff to activate DRP (training);
(11) launching DRP testing in ISF (testing);
(12) documenting DRP procedures for ISF (documentation);

(13) maintaining DRP in ISF (maintenance); and
(14) encouraging the participation of ISF personnel for DRP (ISF personnel
participation).

[Table I. A list of DRP constructs in literature: a matrix mapping the 14 DRP constructs above to the corresponding success factors reported in Francis (1993), Coleman (1993), Dwyer et al. (1994), Solomon (1994), Turner (1994), Wong et al. (1994), Baker (1995), Lee and Ross (1995), Smith and Sherwood (1995), Heng (1996), Ivancevich et al. (1997), Blatnik (1998), Leary (1998), Adam (1999), Devargas (1999), and Vartabedian (1999).]
The following sections describe each of these DRP constructs, together with their
relevant measurement items.
3.1 Top management commitment
Top management commitment is considered the most vital construct to the success of
DRP. Ginn (1989) states three reasons to support such a claim: first, top management
finalizes an annual budget to support DRP implementation in an organization; second,
top management decides when and how the DRP should be implemented in an
organization; third, top management dictates the level of cooperation and support that
should be provided by the various departments when a DRP is launched in an
organization. Furthermore, this construct is considered critically important because
DRP requires long-term planning and involves ongoing capital investment
(Chow, 2000; Cerullo et al., 1994). The DRP measurement items for the top management
commitment construct are to:
(1) provide adequate financial support for DRP development (Zolkos, 2000;
Pember, 1996; Ferraro and Hayes, 1998);
(2) commit to DRP development (Gluckman, 2000; Ginn, 1989; Rohde and
Haskett, 1990);
(3) support DRP development (Bodnar, 1993; Coleman, 1993); and
(4) accept responsibility for DRP quality of output (Warigon, 1999; Carlson and
Parker, 1998).
The Appendix reveals these four measurement items, which are Q1-Q4.
3.2 Policy and goals
The policy and goals of DRP must be clearly outlined so that the plan is understood by
everyone in the organization. The purpose of a policy statement is to set the
guidelines for disaster recovery and define who is accountable for the DRP planning
process (McNurlin, 1988; Turner, 1994). The policy statement should clearly define
realistic DRP goals and their objectives (Salzman, 1998). With clear goals and
objectives, recovery strategies can be established in a cost-effective manner (Rothstein,
1998). The DRP measurement items for the policy and goals construct are to:
(1) define the scope (Iyer and Sarkis, 1998);
(2) define goals and objectives (Meade, 1993; Kovacich, 1996);
(3) define realistic goals and objectives (Hiatt and Motz, 1990);
(4) establish policy (Petroni, 1999; Turner, 1994); and
(5) have a clear vision (Zolkos, 2000).
The Appendix reveals these five measurement items, which are Q5-Q9.


3.3 Steering committee


A steering committee must be formed and appointed by the top management, so that
all functional units offer their full cooperation. The DRP steering committee should
have adequate visibility and be granted an appropriate degree of authority (Bodnar,
1993). The steering committee should consist of representatives from each functional
unit so that their views on critical DRP events can be accurately gathered (Wong et al.,
1994). The steering committee may also include external DRP consultants because they
can make recommendations objectively, without the consideration of office politics
(Peach, 1991). The DRP construct of steering committee includes the following
measurement items, the:
(1) formation of a steering committee (Hawkins et al., 2000; Wong et al., 1994);
(2) participation of representatives from all functional units (Cerullo and Cerullo,
1998; Wong et al., 1994); and
(3) participation of external consultants (Leary, 1998; Smith and Sherwood, 1995).
The Appendix reveals these three measurement items, which are Q10-Q12.
3.4 Risk assessment and impact analysis
Risk assessment and impact analysis determine how long an organization can survive
without the support of critical business functions when a disaster strikes (Rothstein,
1998). All critical functions must be pre-determined before a DRP strategy is chosen for
an organization (Chow, 2000). Here, the risk assessment identifies the business
events/functions that are most likely to pose threats to a firm (Gallegos and Wright,
1988), and the impact analysis refers to the evaluation of the consequences of a
disaster; such as the financial and non-financial loss of business functions (Doughty,
1991). The DRP measurement items for this construct are to:
(1) analyze financial loss (Wong et al., 1994; Doughty, 1991);
(2) analyze security control (Gilbert, 1995; Bodnar, 1993);
(3) analyze adverse events (Haubner, 1994; Rosenthal and Himel, 1991);
(4) analyze security weakness (Menkus, 2000; Moch, 1999); and
(5) rank the security risk (Coult, 1999; Cerullo and Cerullo, 1998).
The Appendix reveals these five measurement items, which are Q13-Q17.
3.5 Prioritization
The prioritization construct refers to the ranking of the business functions on which a
firm is highly dependent for surviving a disaster. The prioritization
construct is an important element in DRP success because not all business functions
are equally important and susceptible to the disruption caused by a disaster (Kull,
1982). Thus, critical missions should be given high priority for recovery (Coleman,
1993). The DRP measurement items for this construct are to prioritize:
(1) critical functions (Wong et al., 1994; Murphy, 1991);
(2) critical applications (Wong et al., 1994; Kull, 1982);
(3) recovery activities (Lee and Ross, 1995; Frost, 1994);

(4) requirements (Lee and Ross, 1995; Frost, 1994); and


(5) recovery schedule (Cerullo and Cerullo, 1998).
The Appendix reveals these five measurement items, which are Q18-Q22.
3.6 Minimum processing requirement
When a disaster strikes, no organization has sufficient time or resources to recover
every business function in a short period of time. An effective DRP should, therefore,
address the minimum-processing requirement that would ensure that company
operations are recovered to an acceptable level (Coleman, 1993). The minimum
processing requirement determines an acceptable recovery time, that is, the point in
time to which data must be restored (Myatt, 1999) and the maximum allowable
downtime of business functions that a company or functional unit can withstand
(Wong et al., 1994). Relevant personnel should be consulted to form the minimum
processing requirement, so that the actual requirement is properly captured (Doughty,
1991). The DRP measurement items for this construct are to determine:
(1) the minimum processing requirement (Baker, 1995; Heng, 1996);
(2) the maximum allowable downtime (Myers, 1999; Wong et al., 1994);
(3) the recovery time (Gluckman, 2000; Douglas, 1998; Tilley, 1995); and
(4) what data must be stored (Myatt, 1999; Douglas, 1998; Tilley, 1995).
The Appendix reveals these four measurement items, which are Q23-Q26.
3.7 Alternative site
Firms that are highly dependent on IS applications must consider an alternative site at
which they can back up their IS resources/databases, so that these can be recovered
easily in the event of a disaster (Blake, 1992). Alternative sites can be operated on either
an external site or in-house site, and can be implemented in the mode of a hot site, a
cold site, mobile recovery facilities, or a mirrored site (Hawkins et al., 2000). The
choice among these alternative sites involves trade-offs, so one must establish
selection criteria and then perform a cost-benefit analysis (Yiu and Tse, 1995). The DRP
measurement items for this construct are to establish:
(1) a clear vision of an alternative site (Blake, 1992);
(2) an in-house site (Leary, 1998; Peach, 1991); and
(3) an external site (Hawkins et al., 2000; Rothstein, 1998; Wong et al., 1994).
The Appendix reveals these three measurement items, which are Q27-Q29.
3.8 Backup storage
Backup storage refers to how all relevant IS data should be backed up and kept in a
safe place from which they can, when needed, be retrieved quickly for restoration.
Backup storage can be based either on- or off-site (Rohde and Haskett, 1990). On-site
backup storage refers to the storage of data/resources in different locations within an
organization (Hawkins et al., 2000), whereas off-site backup storage refers to the
storage of data/resources on a remote site. Arnell (1990) indicates that off-site storage
should be located far enough from the present company that the backup
storage is not affected by the same disaster. The DRP measurement items for this
construct are to:
(1) establish off-site backup (Hawkins et al., 2000; Rohde and Haskett, 1990);
(2) establish on-site backup (Hawkins et al., 2000; Rohde and Haskett, 1990);
(3) establish a backup schedule (Haubner, 1994); and
(4) subscribe to insurance coverage (Coult, 1999; Eckerson, 1992).
The Appendix reveals these four measurement items, which are Q30-Q33.
3.9 Recovery team
The recovery team coordinates recovery tasks in an effective manner when a disaster
strikes (Donovan et al., 1999). Miller (1997) states that a team approach to managing the
recovery process in the event of a disaster is important for two reasons: first,
all relevant staff may not be present when a disaster strikes; second, when more of
the right people are involved, more intelligent answers to DRP problems may be
generated. ISF personnel must also participate in the recovery team so that a faster
restoration of ISF functions/services can be ensured in the event of a disaster. The DRP
measurement items for this construct are to:
(1) participate in the recovery team (Miller, 1997; Fong, 1991); and
(2) be involved in the recovery team (McNurlin, 1988; Fong, 1991).
The Appendix reveals these two measurement items, which are Q34-Q35.
3.10 Testing
A series of test programs needs to be developed to make sure the DRP is a complete
and accurate product. Testing can be designed in such a way that the weaknesses of a
DRP can be identified (Lee and Ross, 1995). The DRP should also be tested in such a
way that it renders minimal disturbance to the daily operations of an organization. The
DRP measurement items for this construct are to test the plan:
(1) as though a disaster had happened (Edwards and Cooper, 1994; Wong et al.,
1994);
(2) by duplicating processing (Wong et al., 1994);
(3) by recovering procedures (Wong et al., 1994);
(4) by recovering functions (Edwards and Cooper, 1994; Wong et al., 1994); and
(5) by simulating a disaster (Edwards and Cooper, 1994; Wong et al., 1994).
The Appendix reveals these five measurement items, which are Q36-Q40.
3.11 Training
Once the DRP is developed, all staff involved in the plan must know their roles and
duties. A training program is therefore required to ensure that all staff understand
their positions, which will subsequently reduce the potential for operational errors
and the opportunity for miscommunication when the plan is implemented during a real
disaster (Turner, 1994). Training is thus essential. The DRP measurement items for
this construct are to train staff in:

(1) disaster responsiveness (Salzman, 1998);
(2) stress management (Paton and Flin, 1999; Turner, 1994);
(3) team skills (Solomon, 1994; Paton and Flin, 1999);
(4) damage assessment (Turner, 1994);
(5) notification (Turner, 1994);
(6) responsibility and duties (Smith and Sherwood, 1995); and
(7) the relocation of departments (Morwood, 1998).

The Appendix reveals these seven measurement items, which are Q41-Q47.
3.12 Documentation
The documentation construct refers to a set of manuals and procedures that outlines for
the DRP recovery team all respective events related to DRP in the event of a disaster. In
addition, the exact details of the functions, personnel, responsibilities, contact names and
numbers, and equipment involved in a disaster response must be documented (Jacobs
and Weiner, 1997). The DRP measurement items for this construct are to document:
(1) recovery procedures (Salzman, 1998; Doughty, 1993);
(2) the procedures of emergency responsibility (Salzman, 1998; Ferraro and
Hayes, 1998);
(3) the roles and responsibilities of the recovery team (Salzman, 1998;
Doughty, 1993);
(4) team members and contact numbers (Miller, 1997; Jacobs and Weiner, 1997);
(5) backup operations (Salzman, 1998, Miller, 1997);
(6) the personnel to contact (Coult, 1999; Jacobs and Weiner, 1997);
(7) the notification procedure (Coult, 1999; Jacobs and Weiner, 1997); and
(8) the list of suppliers/vendors (Pember, 1996; Miller, 1997).
The Appendix reveals these eight measurement items, which are Q48-Q55.
3.13 Maintenance
Rothstein (1988) claims that the creation of a DRP without periodic testing and ongoing
maintenance is worse than not having a DRP. The maintenance construct is
important to reduce the likelihood of incorrect decisions being made and to decrease
the stress of disaster-team members during the recovery process (Frost, 1994). To keep
up with the ever-changing information system technology, the DRP should be
reviewed and tested on a regular basis (Peterson and Perry, 1999). Each time a DRP is
altered, those changes must be updated. The DRP measurement items for this
construct are to:
(1) test the DRP periodically (Lee and Ross, 1995; Karakasidis, 1997);
(2) evaluate the DRP continually (Mitome et al., 2001; Miller, 1997);
(3) review the DRP regularly (Ebling, 1996; Matthews, 1994);
(4) update DRP changes (Norman, 1993; Korzeniowski, 1990); and
(5) audit the DRP (Bodnar, 1993; Francis, 1993).


The Appendix reveals these five measurement items, which are Q56-Q60.
3.14 ISF personnel participation
ISF personnel must participate and monitor the development processes of DRP in an
organization (Wong et al., 1994). Although ISF personnel may not be DRP team
members, they should contribute their technical knowledge at all different stages
(Rutherford and Myer, 2000). Because ISF personnel should know their
duties and responsibilities within the disaster recovery process, they should review the
plan and check whether the recovery procedures operate as planned. In
addition, ISF personnel should review the DRP regularly from a technical standpoint
so that minimum ISF service disruption is sustained (Blatnik, 1998). The DRP
measurement items for this construct are to participate in:
(1) recovery duties and responsibility (Blatnik, 1998; Butler, 1997); and
(2) reviewing the DRP (Blatnik, 1998; Baker, 1995).
The Appendix reveals these two measurement items, which are Q61-Q62.
4. Methodology
4.1 Study subject
The research sample was based on the directory of Top 2000 Foreign Enterprise in
Hong Kong 1999. Two criteria were adopted when selecting potential respondents: first,
organizations of all the selected candidates had to have DRP for their ISF; second, the
selected candidates had experience and also in-charge of their DRP in their present
organization. ISF departments were directly contacted to explain to them the purpose of
this research, and to obtain names and positions of target respondents. Subsequently,
a total of 500 potential participants were contacted and a questionnaire, together with
a covering letter, was mailed to them.
4.2 Instrument development
The 62 measurement items of DRP, as shown in the Appendix, were used in the first
part of our questionnaire. Managers were asked to evaluate the importance of these
measurement items using a measurement scale of 1-5, where a value of 5 represented
the most important one, and a value of 1 represented the least important. These
measurement scores were later used to determine the critical success constructs of the
DRP for ISF. The second part was used to collect demographics data of our
respondents.
4.3 Data collection procedure
A structured questionnaire with a cover letter was used in the data collection.
A complete set of questionnaires was sent to each selected respondent with a return
envelope. About 139 replies were returned, though ten were incomplete and so were
discarded. Therefore, 129 questionnaires were used for the data analysis, which
constituted a response rate of 26.7 percent.
Table II shows the characteristics of the respondents. In total, 23 percent held an IS director
position, 70 percent were managers, and the remaining 7 percent held the position of IS
executives. About 80 percent of the respondents had more than three years working
experience with DRP system in ISF, with the remaining 20 percent having one to three

Table II. Characteristics of the respondents

Respondent characteristics                        Total numbers   Percentage
Position
  Director                                        30              23.3
  Manager                                         90              69.8
  Executives                                      9               7.0
Years handling DRP in ISF
  1-3 years                                       23              17.8
  Over 3 years                                    106             82.2
Company size
  1-50                                            8               6.2
  51-100                                          21              16.3
  101-250                                         24              18.6
  251-500                                         27              20.9
  Above 500                                       49              38.0
Industry type
  Banking/finance/insurance                       24              18.6
  Retail/wholesale                                11              8.5
  Manufacturing                                   25              19.4
  Restaurant/hotel                                4               3.1
  Transportation/shipping                         5               3.9
  Real estate                                     2               1.6
  Computers/telecommunications/networking         19              14.7
  Medicine/health                                 2               1.6
  Media/publishing                                6               4.7
  Utilities                                       5               3.9
  Others                                          25              20.2

years of DRP working experience. This information shows that the respondents had
gained in-depth DRP knowledge in their organizations. About 20 percent of the
respondents worked in financial institutions and in manufacturing, and about 15 percent
came from the IT industry. The information also showed that 38 percent of the
companies hired more than 500 employees, about 21 percent hired 251-500 employees,
and the remaining companies hired less than 250 employees.
About 85 percent of the respondents claimed that IS services are important to daily
operations in their organizations, with about 11 percent of the respondents rating the
dependency of IS services as crucial and critical, and the remaining 4 percent as
neutral. The range of DRP experience was ranging from seven to 25 years.
5. Results
This section first analyzes the content validity and the mean scores of the proposed
constructs, then identifies the CSFs of DRP, and finally reports their reliability
tests and labeling.
5.1 Content validity
In this part, the content validity of the instrument is assessed. Content validity
refers to the extent to which the measurement items of a construct/component actually
represent the meaning of that construct/component (Babbie, 1992). A content validity
test is judged subjectively by researchers rather than proven by statistical testing
(Saraph et al., 1989). The contents of the proposed instrument in this paper are derived

from an extensive literature review, as reported in Section 3; the content validity of
the instrument is thus reasonably justified.
5.2 Mean scores of proposed DRP constructs
Table III reports the mean scores of the proposed DRP constructs. In this table, a mean
score of 5 represents a highly important DRP construct, and a score of 1 an unimportant
one. The construct of backup storage has the highest score (mean score = 4.118), while
the construct of alternative site has the lowest score (mean score = 3.155).
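The construct scores in Table III are simple averages of the 1-5 Likert responses across a construct's items and all respondents. A minimal illustration (with invented responses, not the survey data) might look like:

```python
import numpy as np

# Invented 1-5 Likert responses of three respondents to a four-item
# construct (e.g. Q1-Q4); NOT the actual survey data.
responses = np.array([
    [4, 5, 4, 4],
    [3, 4, 4, 5],
    [5, 5, 4, 4],
])

construct_mean = responses.mean()        # pooled over items and respondents
construct_sd = responses.std(ddof=1)     # sample standard deviation
print(round(construct_mean, 2), round(construct_sd, 2))  # 4.25 0.62
```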
5.3 Convergent analysis for DRP CSFs
This study applied the construct convergent approach, which is a rigorous way to create
a factor structure matrix in which the correlations of all measurement items with every
construct are listed (Compeau and Higgins, 1995). Construct analysis with varimax
rotation was used to determine the DRP critical success constructs for the ISF. Table IV
shows the total variance explained by the convergent analysis: a total of 11 CSFs of
DRP is proposed, which together explain a cumulative 76 percent of the variance. The
Kaiser-Meyer-Olkin measure of sampling adequacy is 0.868. Table V reveals the
corresponding factor loadings; a loading of 0.5 or above was considered a significant
contribution to a CSF for DRP. Table V also shows that ten proposed measurement items
were removed from the data analysis: two of them (Q27 and Q56) were shared by two
constructs, and the other eight (Q14, Q15, Q16, Q17, Q19, Q35, Q61, and Q62) have all
factor loadings below 0.5. We also removed the 11th CSF in Table V, which has only one
measurement item (Q12), because a reliability value, which is examined for all CSFs in
the next section, cannot be computed for a single item.
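The extraction procedure just described (principal components from the correlation matrix, the eigenvalue > 1 retention rule, varimax rotation, and a 0.5 loading cut-off) can be sketched with NumPy alone. The data below are a synthetic stand-in for the paper's 129 x 62 response matrix, and the varimax routine is a generic implementation rather than the exact statistical-package procedure the authors used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 129 x 62 response matrix: 129 respondents,
# 10 Likert-style items driven by two latent factors.
F = rng.normal(size=(129, 2))
W = np.zeros((10, 2))
W[:5, 0] = 0.9            # items 1-5 load on factor 1
W[5:, 1] = 0.9            # items 6-10 load on factor 2
X = F @ W.T + 0.4 * rng.normal(size=(129, 10))

# Principal component extraction from the correlation matrix.
corr = np.corrcoef(X, rowvar=False)
eigval, eigvec = np.linalg.eigh(corr)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
k = int(np.sum(eigval > 1.0))                 # Kaiser criterion: eigenvalue > 1
loadings = eigvec[:, :k] * np.sqrt(eigval[:k])

# Kaiser-Meyer-Olkin measure of sampling adequacy.
inv = np.linalg.inv(corr)
scale = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
partial = -inv / scale                        # partial correlations
off = ~np.eye(corr.shape[0], dtype=bool)
kmo = (corr[off] ** 2).sum() / ((corr[off] ** 2).sum()
                                + (partial[off] ** 2).sum())

def varimax(L, max_iter=100, tol=1e-6):
    """Generic orthogonal varimax rotation of a loading matrix."""
    n, m = L.shape
    R = np.eye(m)
    prev = 0.0
    for _ in range(max_iter):
        LR = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (LR ** 3 - LR @ np.diag((LR ** 2).sum(axis=0)) / n))
        R = u @ vt
        if s.sum() < prev * (1 + tol):
            break
        prev = s.sum()
    return L @ R

rotated = varimax(loadings)
significant = np.abs(rotated) >= 0.5          # the paper's loading cut-off
print("components retained:", k)
print("significant loadings per item:", significant.sum(axis=1))
```

Because varimax is an orthogonal rotation, each item's communality (its row sum of squared loadings) is unchanged; only the distribution of loading across factors is simplified.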
5.4 Reliability and labeling of DRP CSFs
A reliability test refers to the examination of the internal consistency of a
measurement instrument. The Cronbach's alpha (α) coefficient is the most popular
method for assessing reliability, and a high alpha value (close to 1) represents high
reliability of the corresponding construct.

Table III. Mean scores and standard deviations of proposed DRP constructs

DRP construct                                Question numbers (a)   No. of items   Mean   SD
Top management commitment (F1)               Q1-Q4                  4              4.00   0.79
Policy and goals (F2)                        Q5-Q9                  5              3.92   0.79
Steering committee (F3)                      Q10-Q12                3              3.17   0.95
Risk assessment and impact analysis (F4)     Q13-Q17                5              3.67   0.84
Prioritization (F5)                          Q18-Q22                5              4.02   0.77
Minimum processing requirement (F6)          Q23-Q26                4              3.89   0.80
Alternative site (F7)                        Q27-Q29                3              3.16   1.06
Backup storage (F8)                          Q30-Q33                4              4.12   0.67
Recovery team (F9)                           Q34-Q35                2              3.66   0.95
Testing (F10)                                Q36-Q40                5              3.41   0.96
Training (F11)                               Q41-Q47                7              3.41   0.79
Documentation (F12)                          Q48-Q55                8              3.74   0.90
Maintenance (F13)                            Q56-Q60                5              3.53   0.88
ISF personnel participation (F14)            Q61-Q62                2              3.81   0.91

Note: (a) These questions correspond to the measurement items shown in the Appendix

Table IV. Total variance explained

            Initial eigenvalues                        Rotation sums of squared loadings
Component   Total    % of variance   Cumulative %     Total   % of variance   Cumulative %
1           26.723   43.102          43.102           6.718   10.835          10.835
2            3.990    6.436          49.538           6.279   10.128          20.964
3            2.944    4.748          54.286           5.431    8.760          29.723
4            2.699    4.353          58.639           5.064    8.168          37.892
5            2.063    3.327          61.966           4.699    7.579          45.470
6            1.905    3.073          65.039           4.405    7.106          52.576
7            1.658    2.674          67.713           4.379    7.063          59.639
8            1.490    2.403          70.115           4.149    6.691          66.331
9            1.308    2.110          72.225           2.217    3.575          69.906
10           1.231    1.985          74.211           2.135    3.444          73.350
11           1.130    1.823          76.034           1.664    2.684          76.034

Notes: Extraction method: principal component analysis; the extraction sums of squared
loadings equal the initial eigenvalues for the retained components
Table V. Rotated component matrix (factor loadings of the 62 measurement items, Q1-Q62,
on the 11 extracted components)

Notes: Extraction method: principal component analysis; rotation method: varimax with
Kaiser normalization; rotation converged in 11 iterations; loadings of 0.5 or above are
treated as significant contributions; measurement items with all loadings below 0.5 were
not retained; measurement items shared by two separate factors were removed
Nunnally (1967) suggests that an alpha value of 0.7 or higher is normally considered
acceptable. This study adopted the refinement process proposed by Norusis (1993), which
retains those measurement items of a DRP construct that contribute to the maximum
Cronbach's alpha value. Table VI reveals the maximum alpha values for the ten
significant CSFs of Table V. The α-values of all suggested DRP CSFs meet the minimum
acceptable level of 0.7, and the maximum α-values of the DRP CSFs range from 0.812 to
0.944. The last column of Table VI gives the label of each CSF that best describes the
constitution of its measurement items. Each of these labels is discussed further in the
next section.
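The refinement step can be approximated with a greedy sketch: compute Cronbach's alpha for a construct's items, and drop items one at a time while doing so raises alpha. The data here are invented, and the exact statistical-package refinement may differ in detail:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def refine(items):
    """Greedy sketch of the Norusis-style refinement: drop items one at a
    time while doing so raises alpha, keeping the maximum-alpha subset."""
    keep = list(range(items.shape[1]))
    best = cronbach_alpha(items[:, keep])
    improved = True
    while improved and len(keep) > 2:
        improved = False
        for j in keep:
            trial = [i for i in keep if i != j]
            a = cronbach_alpha(items[:, trial])
            if a > best:
                best, keep, improved = a, trial, True
                break
    return keep, best

# Synthetic construct: four internally consistent items plus one
# unrelated item (invented data, not the survey responses).
rng = np.random.default_rng(1)
latent = rng.normal(size=(129, 1))
data = np.hstack([latent + 0.3 * rng.normal(size=(129, 4)),
                  rng.normal(size=(129, 1))])
kept, alpha = refine(data)
print("retained items:", kept, "alpha:", round(alpha, 3))
```

On this synthetic construct the unrelated fifth item is the one dropped, mirroring how Q23, Q13, and Q33 were deleted in Table VI to raise the alpha of their constructs.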
6. Discussion
Many managers value highly the process of DRP for ISF, but the CSFs and
measurements around the process remain unclear. This paper reports an extensive
survey conducted on the topic of DRP applications in ISF: the survey identified
62 measurement items that can serve as a basis to determine DRP CSFs for ISF.
Based on the data collected from a questionnaire survey of DRP practices for ISF in
Hong Kong, this paper identifies ten DRP CSFs, each of which is further discussed in
this section.
The first DRP CSF, which consists of eight measurement items, is DRP
documentations. This DRP CSF shows that an effective DRP requires everyone
involved to know their DRP roles, and that the firm should adopt measures to
document all relevant events related to the activation of DRP when and if a disaster
strikes, such as who should be in charge of DRP, what actions should be taken (i.e. both
recovery and emergency responses), how to activate the communication channel
(including the person to contact and back-up operations), and the responsibility and
role of each individual during a disaster.
The second DRP CSF, which also consists of eight measurement items, is the DRP
steering committee and DRP testing. The steering committee should involve
representatives from all functional areas, so that different perspectives on DRP
problems can be clearly identified. In addition, the DRP should not only be actively
tested through simulation scenarios of all IS functions, but should also be examined
for its validity in both processing and customer services, should a real disaster
occur.
The third DRP CSF, which consists of five measurement items, is DRP policy and
goals. This refers to top management involvement in identifying the DRP vision, scope,
and goal of IS, with realistic applications for their firm. This involvement should lead to
an appreciation of the necessary scope of work, labor, and costs for an effective DRP
for ISF.
The fourth DRP CSF, which consists of six measurement items, is DRP training.
This refers not only to training with regard to the responsibility and duties of all staff
involved in DRP, but also to their training in the efficacy of stress management,
damage assessment, team collaboration, and DRP awareness.
The fifth DRP CSF, which consists of four measurement items, is DRP maintenance
and staff involvement. Active maintenance implies that the DRP should be regularly
and continually reviewed and evaluated, and updated immediately with any relevant
changes. All concerned personnel must be actively involved in this maintenance
process, so that everyone is aware of the changes.

Table VI. Reliability of the proposed ten CSFs

CSF (a)  Measurement items               No. of   Original   Items      Retained   Maximum   Label
                                         items    α-value    deleted    items      α-value
1        Q48, Q49, Q50, Q51, Q52,        8        0.944      none       8          0.944     DRP documentations
         Q53, Q54, Q55
2        Q10, Q11, Q36, Q37, Q38,        8        0.922      none       8          0.922     DRP steering committee and DRP testing
         Q39, Q40, Q47
3        Q5, Q6, Q7, Q8, Q9              5        0.915      none       5          0.915     DRP policy and goals
4        Q41, Q42, Q43, Q44, Q45, Q46    6        0.907      none       6          0.907     DRP training
5        Q34, Q57, Q58, Q59              4        0.901      none       4          0.901     DRP maintenance and staff involvement
6        Q23, Q24, Q25, Q26              4        0.877      Q23        3          0.923     DRP minimum IS processing requirements
7        Q1, Q2, Q3, Q4                  4        0.878      none       4          0.878     Top management commitment to DRP
8        Q13, Q18, Q20, Q21, Q22, Q33    6        0.913      Q13, Q33   4          0.918     Prioritization of IS functions/services
9        Q29, Q30                        2        0.824      none       2          0.824     External, off-site back-up system
10       Q28, Q31                        2        0.812      none       2          0.812     Internal, on-site back-up system

Note: (a) The CSF numbers correspond to the components in Table V
The sixth DRP CSF, which consists of three measurement items, is the DRP
minimum IS processing requirement. The four areas that dictate the survivability of
firms without IS support may include the maximum allowable downtime for firms, the
maximum allowable downtime for customers, the minimum functional services to all
concerned parties, and the maximum time by which all IS services are restored. These
areas of maximum downtime requirement are particularly crucial for those firms
highly dependent on electronic commerce such as e-trading and e-banking.
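The four survivability parameters above resemble the recovery objectives used in continuity planning and can be captured as a small data structure. This is an illustrative sketch; the field names and the example limits are invented, not taken from the paper's instrument:

```python
from dataclasses import dataclass

@dataclass
class MinimumProcessingRequirement:
    """The four survivability parameters described above; field names
    are illustrative, not taken from the paper's instrument."""
    max_firm_downtime_h: float      # maximum allowable downtime for the firm
    max_customer_downtime_h: float  # maximum allowable downtime for customers
    min_functional_services: int    # minimum services that must stay up
    max_full_restore_h: float       # deadline for restoring all IS services

    def recovery_acceptable(self, outage_h: float, services_up: int,
                            restore_h: float) -> bool:
        """Check a recovery outcome against the stated minimums."""
        return (outage_h <= min(self.max_firm_downtime_h,
                                self.max_customer_downtime_h)
                and services_up >= self.min_functional_services
                and restore_h <= self.max_full_restore_h)

# An e-banking profile would set far tighter limits than a back office.
req = MinimumProcessingRequirement(4.0, 2.0, 3, 24.0)
print(req.recovery_acceptable(outage_h=1.5, services_up=3, restore_h=20.0))
# prints True
```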
The seventh DRP CSF, which consists of four measurement items, is top management
commitment to DRP. Top management commitment must first be shown in the DRP budget
allocation, and then in leadership through involvement and participation in DRP
events, such as personally attending DRP meetings. These activities promote staff
enthusiasm toward DRP.
The eighth DRP CSF, which consists of four measurement items, is prioritization of
IS functions/services. It is suggested that firms should establish a rule to prioritize
which functions, activities, and system requirements are the most critical, and then
decide which of these should first be recovered and then the order of the rest, should a
disaster strike. Without the determination of these priorities, firms could easily fall into
a chaotic situation because DRP personnel would not be able to decide which of these
priority jobs to recover first.
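In its simplest form, the priority rule described above reduces to ranking IS functions by criticality and restoring them in that order. The function names and ranks below are invented for illustration, not drawn from the survey:

```python
# Hypothetical criticality ranks (1 = most critical); the function names
# and ranks are invented for illustration, not drawn from the survey.
functions = {
    "payment processing": 1,
    "customer web portal": 2,
    "e-mail": 3,
    "internal reporting": 4,
}

def recovery_order(funcs: dict[str, int]) -> list[str]:
    """Restore the most critical IS functions first."""
    return sorted(funcs, key=funcs.get)

print(recovery_order(functions))
# ['payment processing', 'customer web portal', 'e-mail', 'internal reporting']
```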
The ninth DRP CSF, which consists of two measurement items, is the external,
off-site back-up processing sites for DRP. This CSF suggests that an external back-up
processing site should be established, so that IS operations/services would still be
functional if a disaster were to strike at a local office. Many industries, such as
banking, have adopted this measure to ensure that their IS functions would not be
totally halted if such a disaster were to happen.
The tenth DRP CSF, which consists of two measurement items, is the internal,
on-site back-up processing sites for DRP. This final DRP CSF suggests that all firms
should establish an internal back-up processing site so that IS operations/services
could be immediately retrieved in the event of a disaster. The recovery of IS
operations/services as a result of this practice is claimed to be more efficient than that
as a result of the implementation of the ninth CSF (off-site back-up), especially when
the disaster is a small-scale one such as a small fire in a computer room or a virus
attack on local servers.
7. Conclusion
DRP is considered equally important at both organizational and developmental levels.
Most companies nowadays have developed their DRP, but there is a lack of quantitative
reports on the CSFs for DRP implementation. This paper addresses this problem: first,
by providing a comprehensive DRP literature review; second, by identifying 62 DRP
measurement items; and third, by verifying ten DRP CSFs for the information systems
function. These ten DRP CSFs are: DRP documentations; the DRP
steering committee and DRP testing; DRP policy and goals; DRP training; DRP
maintenance and staff involvement; DRP minimum IS processing requirements; top
management commitment to DRP; prioritization of IS functions/services; provision of
external, off-site back-up systems; and provision of an internal, on-site back-up system.

The findings of this paper can serve as a basis for many future DRP research studies.
One suggestion is to apply these CSFs to serve as a foundation for developing a corporate
DRP strategy, similar to the organizational system model proposed by Ravichandran
and Rai (2000). Other suggestions include studying how DRP CSFs contribute to the
success of DRP performance within a firm, or how their applicability to large enterprises
compares with their applicability to small- and medium-sized enterprises.


References
Adam, M. (1999), Preventing the chain react, HR Magazine, January, pp. 28-32.
Arnell, A. (1990), Handbook of Effective Disaster/Recovery Planning, McGraw-Hill,
New York, NY.
Babbie, E. (1992), The Practice of Social Research, 6th ed., Wadsworth, Belmont, CA.
Baker, G. (1995), Quick recoveries, CA Magazine, August, pp. 49-53.
Blake, W.F. (1992), Making recovery a priority, Security Management, Vol. 36 No. 4, pp. 71-4.
Blatnik, G.J. (1998), Point of failure recovery plan, IS Audit & Control Journal, Vol. 4 No. 1,
pp. 24-7.
Bodnar, G.H. (1993), Data security and contingency planning, Internal Auditing, Winter,
pp. 74-80.
Botha, J. and von Solms, R. (2004), Cyclic approach to business continuity planning,
Information Management & Computer Security, Vol. 12 No. 4, pp. 328-37.
Butler, J.G. (1997), Contingency Planning and Disaster Recovery: Protecting Your
Organization's Resources, Computer Technology Research Corp., Montgomery, AL.
Carlson, S.J. and Parker, D.J. (1998), Disaster recovery planning and accounting information
systems, Review of Business, Winter, pp. 10-15.
Cerullo, M.J. and Cerullo, V. (1998), Key factors to strengthen the disaster contingency and
recovery planning process, Information Strategy: The Executive's Journal, Winter,
pp. 37-43.
Cerullo, M.J., McDuffie, R.S. and Smith, L.M. (1994), Planning for disaster, The CPA Journal,
June, pp. 34-8.
Chow, W.S. (2000), Success factors for IS disaster recovery planning in Hong Kong,
Information Management & Computer Security, Vol. 8 No. 2, pp. 80-6.
Coleman, R. (1993), Six steps to disaster recovery, Security Management, February, pp. 61-2.
Compeau, D. and Higgins, C. (1995), Computer self-efficacy: development of a measure and
initial test, MIS Quarterly, Vol. 19 No. 2, pp. 189-211.
Coult, G. (1999), Disaster recovery, Managing Information, Vol. 6 No. 3, pp. 31-5.
Devargas, M. (1999), Survival is not compulsory: an introduction to business continuity
planning, Computers & Security, Vol. 18 No. 1, pp. 35-46.
Donovan, T., Rosson, B. and Eichstadt, B. (1999), Preparing carriers for . . ., Telephony, June,
pp. 180-8.
Doughty, K. (1991), Performing a business impact analysis, EDPACS, Vol. 18 No. 9, pp. 1-7.
Doughty, K. (1993), Auditing the disaster recovery plan, EDPACS, Vol. 21 No. 3, pp. 1-12.
Douglas, W.J. (1998), A systematic approach to continuous operations, Disaster Recovery
Journal, Summer, pp. 1-3.
Dwyer, P.D., Friedberg, A.H. and McKenzie, K.S. (1994), It can happen here: the importance of
continuity planning, IS Audit & Control Journal, Vol. 1, pp. 30-5.


Ebling, R.G. (1996), Establishing safe harbor: how to develop a successful disaster recovery
program, Risk Management, September, pp. 53-6.
Eckerson, W. (1992), Andrew reminds users of need for disaster planning, Network World,
Vol. 2, p. 56, September.
Edwards, B. and Cooper, J. (1994), Testing the disaster recovery plan, Information
Management & Computer Security, Vol. 3 No. 1, pp. 21-7.
Elstien, C. (1999), Reliance on technology, Enterprise Systems Journal, July, pp. 38-40.
Ferraro, A. and Hayes, S. (1998), Auditors add value to the business continuity program,
IS Audit & Control Journal, Vol. 5, pp. 47-50.
Fong, A. (1991), The Disaster Recovery Plan, Business Research Centre, School of Business,
Hong Kong Baptist University, Kowloon Tong, Hong Kong.
Francis, B. (1993), Recover from distributed disasters, Datamation, December, pp. 73-6.
Frost, C. (1994), Effective responses for proactive enterprises: business continuity planning,
Disaster Prevention and Management, Vol. 3 No. 1, pp. 7-15.
Gallegos, F. and Wright, D.M. (1988), Evaluating data security: the initial review, Data Security,
Spring, pp. 8-15.
Gilbert, L. (1995), The function of a business continuity plan, EDPACS, March, pp. 12-15.
Ginn, R.D. (1989), The case for continuity, Security Management, January, pp. 84-90.
Gluckman, D. (2000), Continuity . . . recovery, Risk Management, March, p. 45.
Haubner, D. (1994), Data processing disaster recovery planning considerations for tandem
network, EDPACS, Vol. 19 No. 11, pp. 1-7.
Hawkins, S.M., Yen, D.C. and Chou, D.C. (2000), Disaster recovery planning: a strategy for data
security, Information Management & Computer Security, Vol. 8 No. 5, pp. 222-9.
Heng, G.M. (1996), Developing a suitable business continuity planning methodology,
Information Management & Computer Security, Vol. 4 No. 2, pp. 11-13.
Hiatt, C. and Motz, A. (1990), Disaster recovery planning: what it should be, what it is, how to
improve it, EDPACS, Vol. 17 No. 9, pp. 1-9.
Hurwicz, M. (2000), When disaster strikes, Network Magazine, January, pp. 44-50.
Ivancevich, D.M., Hermanson, D.R. and Smith, L.M. (1997), Accounting information system
controls and disaster recovery plans, Internal Auditing, Spring, pp. 13-20.
Iyer, R. and Sarkis, J. (1998), Disaster recovery planning in an automated manufacturing
environment, IEEE Transactions on Engineering Management, Vol. 45 No. 2, pp. 163-75.
Jacobs, J. and Weiner, S. (1997), The CPA's role in disaster, The CPA Journal,
November, pp. 20-4.
Karakasidis, K. (1997), A project planning process for business continuity, Information
Management & Computer Security, Vol. 5 No. 2, pp. 72-8.
Korzeniowski, P. (1990), How to avoid disaster with a recovery plan, Software Magazine,
February, pp. 46-55.
Kovacich, G.L. (1996), Establishing: a network security programme, Computers & Security,
Vol. 15 No. 6, pp. 486-98.
Krivda, C.D. (1998), Planning for the worst, Midrange Systems, September, pp. 10-14.
Krousliss, B. (1993), Disaster recovery planning, Catalog Age, Vol. 10 No. 12, p. 98.
Kull, D. (1982), Disaster recovery: just in case, Computer Decisions, September, pp. 180-209.
Leary, M.K. (1998), A rescue plan for your LAN, Security Management, March, pp. 53-60.

Lee, S. and Ross, S. (1995), Disaster recovery planning for information systems, Information
Resources Management Journal, Summer, pp. 18-23.
McNurlin, B.C. (1988), Trends in disaster, I/S Analyzer, Vol. 26 No. 11, pp. 1-12.
Marcella, A. Jr and Rauff, J. (1994), Automated disaster recovery plan auditing prospects for
using expert systems to evaluate disaster recovery plans, EDPACS, Vol. 21 No. 9, pp. 1-16.
Matthews, G. (1994), Disaster management: controlling the plan, Managing Information, Vol. 1
Nos 7/8, pp. 24-7.
Meade, P. (1993), Taking the risk out of disaster recovery services, Risk Management,
February, pp. 20-6.
Menkus, B. (2000), Defining security threats to information processing application, EDPACS,
February, pp. 9-15.
Miller, H.J. (1997), A guide to planning for the business recovery of an administrative business
unit, EDPACS, April, pp. 9-16.
Mitome, Y., Speer, K.D. and Swift, B. (2001), Embracing disaster with contingency planning,
Risk Management, May, pp. 18-27.
Moch, C. (1999), Taking cover: Bell Atlantic presents businesses with security options, August,
Telephony, p. 62.
Morwood, G. (1998), Business continuity: awareness and training programmes, Information
Management & Computer Security, Vol. 6 No. 1, pp. 28-32.
Murphy, J.H. (1991), Taking the disaster out of recovery, Security Management, August, pp. 61-6.
Myatt, P.B. (1999), Going in for analysis, Security Management, April, pp. 75-9.
Myers, K.N. (1999), Manager's Guide to Contingency Planning for Disasters: Protecting Vital
Facilities and Critical Operations, 2nd ed., Wiley, New York, NY.
Nelson, K. (2006), Examining factors associated with IT preparedness, Proceedings of the 39th
Hawaii International Conference on System Science, pp. 1-10.
Newton, J. and Cudlipp, G. (1998), From chaos to control, Canadian Insurance, November,
pp. 20-22, 26.
Norman, G. (1993), Disaster recovery after downsizing, Computers & Security, Vol. 12 No. 3,
pp. 225-9.
Norusis, M.J. (1993), SPSS for Windows Professional Statistics Release 6.0, SPSS, Chicago, IL.
Nunnally, J.C. (1967), Psychometric Theory, McGraw-Hill, New York, NY.
Paton, D. and Flin, R. (1999), Disaster stress: an emergency management perspective, Disaster
Prevention and Management, Vol. 8 No. 4, pp. 261-7.
Peach, S. (1991), Disaster recovery: an unnecessary cost burden or an essential feature of any DP
installation, Computers & Security, Vol. 10 No. 6, pp. 565-8.
Pember, M.E. (1996), Information disaster planning: an integral component of corporate risk
management, Records Management Quarterly, April, pp. 31-7.
Peterson, D.M. and Perry, R.W. (1999), The impacts of disaster exercises on participants,
Disaster Prevention and Management, Vol. 8 No. 4, pp. 241-54.
Petroni, A. (1999), Managing information systems contingencies in banks: a case study,
Disaster Prevention and Management, Vol. 8 No. 2, pp. 101-10.
Ravichandran, T. and Rai, A. (2000), Quality management in systems development: an
organizational system perspective, MIS Quarterly, Vol. 24 No. 3, pp. 381-415.
Rohde, R. and Haskett, J. (1990), Disaster recovery planning for academic computing centers,
Communications of the ACM, Vol. 33 No. 6, pp. 652-7.


Rosenthal, P. and Himel, B. (1991), Business resumption planning: exercising your emergency
response teams, Computers & Security, Vol. 10 No. 6, pp. 497-514.
Rothstein, P.J. (1988), Up and running, Datamation, October, pp. 86-96.
Rothstein, P.J. (1998), Disaster recovery: in the line of fire, Managing Office Technology, May,
pp. 26-30.
Rutherford, K. and Myer, G. (2000), Business continuity: do you have a plan?, Canadian
Underwriter, April, pp. 38-41.
Salzman, T. (1998), An audit work program for reviewing IS disaster recovery plans
(conclusion), EDPACS, Vol. 25 No. 7, pp. 8-20.
Saraph, J.V., Benson, P.G. and Schroeder, R.G. (1989), An instrument for measuring the critical
factors of quality management, Decision Sciences, Vol. 20 No. 4, pp. 810-29.
Smith, M. and Sherwood, J. (1995), Business continuity planning, Computers & Security, Vol. 14
No. 1, pp. 14-23.
Solomon, C.M. (1994), Bracing for emergencies, Personnel Journal, April, pp. 74-83.
Tilley, K. (1995), Work area recovery planning: the key to corporate survival, Disaster
Prevention and Management, Vol. 13 Nos 9/10, pp. 49-53.
Turner, D. (1994), Resources for disaster recovery, Security Management, August, pp. 61-7.
Vartabedian, M. (1999), For want of a nail, Call Center Solutions, February, pp. 40-8.
Warigon, S. (1999), Preparing and auditing an IS security incident-handling plan, EDPACS,
Vol. 26 No. 10, pp. 1-13.
Wong, B.K., Monaco, J.A. and Sellaro, C.L. (1994), Disaster recovery planning: suggestions to top
management and information systems managers, Journal of Systems Management, May,
pp. 28-32.
Wrobel, L.A. (1997), The Definitive Guide to Business Resumption Planning, Artech House,
Norwood, MA.
Wroblewski, M.E. (1982), Contingency planning for DP, Data Management, May, pp. 25-7.
Yiu, K. and Tse, Y.Y. (1995), A model for disaster recovery planning, IS Audit & Control Journal,
Vol. 5, pp. 45-51.
Zolkos, R. (2000), To rebound from disaster requires advance plans, Business Insurance, Vol. 34
No. 9, pp. 2-4.
Further reading
Kerlinger, F.N. (1986), Foundation of Behavioral Research, 3rd ed., Rinehard & Winston,
New York, NY.
Zajac, B.P. Jr (1989), Disaster recovery: are you really ready?, Computers & Security, Vol. 8,
pp. 297-8.

Appendix

Question
numbers

CSF of DRP
for information
systems
Measurement items

Construct Top management commitment


Q1
Top management provides an adequate financial
support for DRP
Q2
Top management commits to DRP
Q3

Top management supports DRP

Q4

Top management assumes ultimate responsibility


for DRP
Construct Policy and goals
Q5
Top management defines the scope of DRP
Q6
Table AI. Measurement items for DRP constructs (continued)

Question | Measurement item | Sample references
Q1-Q5 | (Measurement items listed on the preceding page of the table) | Pember (1996) and Ferraro and Hayes (1998); Ginn (1989) and Rohde and Haskett (1990); Bodnar (1993) and Coleman (1993); Carlson and Parker (1998), Kull (1982) and Warigon (1999); Iyer and Sarkis (1998)
Q6 | Top management defines the goals and objectives of DRP | Meade (1993) and Kovacich (1996)
Q7 | Top management defines realistic goals and objectives of DRP | Hiatt and Motz (1990)
Q8 | Top management establishes the policy of DRP | Turner (1994), Dwyer et al. (1994) and Petroni (1999)
Q9 | Top management has a clear vision of DRP | Zolkos (2000)

Construct: Steering committee
Q10 | A formal steering committee has been formed for the disaster recovery plan | Karakasidis (1997) and Jacobs and Weiner (1997)
Q11 | Representatives from different departments participate in the steering committee | Cerullo and Cerullo (1998) and Wroblewski (1982)
Q12 | External consultants participate in the steering committee | Leary (1998) and Pember (1996)

Construct: Risk assessment and impact analysis
Q13 | The financial impact of losing business functions has been analyzed | Krivda (1998) and Wong et al. (1994)
Q14 | The degree of security control of business functions has been analyzed | Newton and Cudlipp (1998), Gilbert (1995) and Bodnar (1993)
Q15 | The likelihood of potential adverse events to business functions has been analyzed | Rosenthal and Himel (1991), Hiatt and Motz (1990) and Haubner (1994)
Q16 | The security weaknesses of business functions have been analyzed | Moch (1999) and Menkus (2000)
Q17 | The security risks of business functions have been ranked | Cerullo and Cerullo (1998) and Coult (1999)

Construct: Prioritization
Q18 | Prioritization has been established for critical functions | Gilbert (1995) and Peach (1991)
Q19 | Prioritization has been established for critical applications | Salzman (1998) and Marcella and Rauff (1994)
Q20 | Prioritization has been established for recovery activities | Blake (1992) and Smith and Sherwood (1995)
Q21 | Prioritization has been established for requirements | Karakasidis (1997) and Smith and Sherwood (1995)

Q22 | Prioritization has been established for recovery schedules | Cerullo and Cerullo (1998) and Cerullo et al. (1994)

Construct: Minimum processing requirement
Q23 | A minimum processing requirement of business functions has been determined | Heng (1996)
Q24 | A maximum allowable downtime of business functions has been determined | Hurwicz (2000) and Wong et al. (1994)
Q25 | An acceptable recovery time of business functions has been determined | Rothstein (1988), Myatt (1999) and Frost (1994)
Q26 | The point in time to which data of business functions must be restored has been determined | Tilley (1995) and Elstien (1999)

Construct: Alternative site
Q27 | A clear vision of an alternative processing site has been established | Blake (1992)
Q28 | An in-house alternative processing site has been established | Leary (1998)
Q29 | An external alternative processing site has been established | Hawkins et al. (2000) and Rothstein (1988)

Construct: Backup storage
Q30 | Off-site backup storage has been established for recovering business functions | Edwards and Cooper (1994), Haubner (1994) and Marcella and Rauff (1994)
Q31 | On-site backup storage has been established for recovering business functions | Krousliss (1993) and Miller (1997)
Q32 | A regular backup schedule has been established for business functions | Haubner (1994)
Q33 | Proper insurance coverage has been subscribed for business functions | Eckerson (1992) and Lee and Ross (1995)

Construct: Recovery team
Q34 | Relevant personnel participate in the recovery team | Fong (1991), Miller (1997) and McNurlin (1988)
Q35 | Relevant personnel are involved in the recovery team | Edwards and Cooper (1994) and McNurlin (1988)

Construct: Testing
Q36 | The disaster recovery plan is tested as though a real disaster had occurred | Wong et al. (1994), Edwards and Cooper (1994) and Leary (1998)
Q37 | The disaster recovery plan is tested by duplication of regular processing | Wong et al. (1994)
Q38 | The disaster recovery plan is tested by testing individual recovery procedures | Wong et al. (1994) and Edwards and Cooper (1994)
Q39 | The disaster recovery plan is tested by testing recovery functions | Wong et al. (1994) and Edwards and Cooper (1994)
Q40 | The disaster recovery plan is tested by simulating actual disaster conditions by interrupting service | Edwards and Cooper (1994) and Leary (1998)

Construct: Training
Q41 | Training in disaster response is given to recovery personnel | Peterson and Perry (1999) and Salzman (1998)
Q42 | Training in stress management is given to recovery personnel | Paton and Flin (1999) and Turner (1994)

Q43 | Training in team skills is given to recovery personnel | Solomon (1994) and Paton and Flin (1999)
Q44 | Training in damage assessment is given to recovery personnel | Turner (1994)
Q45 | Training in notification when a disaster strikes is given to recovery personnel | Turner (1994)
Q46 | Training in recovery responsibilities and duties is given to recovery personnel | Smith and Sherwood (1995) and Morwood (1998)
Q47 | Training in relocation of the department when a disaster strikes is given to recovery personnel | Morwood (1998)

Construct: Documentation
Q48 | Recovery procedures have been documented | Francis (1993) and Doughty (1993)
Q49 | Emergency response procedures have been documented | Salzman (1998)
Q50 | Individual disaster recovery team roles and responsibilities have been documented | Salzman (1998) and Doughty (1993)
Q51 | Recovery team members and contact numbers have been documented | Miller (1997)
Q52 | Backup operations have been documented | Miller (1997) and Salzman (1998)
Q53 | Personnel to contact when a disaster strikes have been documented | Coult (1999)
Q54 | Disaster notification procedures have been documented | Salzman (1998)
Q55 | A suppliers/vendors list has been documented | Pember (1996) and Miller (1997)

Construct: Maintenance
Q56 | The disaster recovery plan is tested periodically | Lee and Ross (1995) and Karakasidis (1997)
Q57 | The disaster recovery plan is evaluated continually | Miller (1997) and Mitome et al. (2001)
Q58 | The disaster recovery plan is reviewed regularly | Ebling (1996) and Matthews (1994)
Q59 | The disaster recovery plan is updated with changes | Norman (1993) and Korzeniowski (1990)
Q60 | The disaster recovery plan is audited | Bodnar (1993) and Francis (1993)

Construct: ISF personnel participation
Q61 | ISF personnel participate in recovery duties and responsibilities | Blatnik (1998) and Smith and Sherwood (1995)
Q62 | ISF personnel participate in reviewing the disaster recovery plan | Blatnik (1998) and Baker (1995)

Corresponding author
Wing S. Chow can be contacted at: vwschow@hkbu.edu.hk
