www.emeraldinsight.com/0968-5227.htm
IMCS
17,3
248
Received 12 September 2008
Revised 11 November 2008
Accepted 14 November 2008
1. Introduction
People have typically associated disasters with catastrophic events such as fires,
hurricanes, and tornadoes (Rosenthal and Himel, 1991; Blake, 1992; Krousliss, 1993).
Wrobel (1997) summarizes that disasters can be caused by three different phenomena,
namely natural causes, human error, and intentional causes. The definition of a
disaster is thus not solely confined to natural catastrophic events like the tsunami in
South-east Asia in 2004 or Hurricane Katrina on the Gulf Coast of the USA in 2005, but
may also include any event that significantly affects the operation of an organization,
such as human error in data entry, or intentional acts like the September 11 attacks in
the USA in 2001. The process of disaster recovery planning (DRP) sets out a series of
strategies that enable an organization to resume its critical functions to an
acceptable level of service in the event of any of these various types of disaster.
DRP is considered to be one of the most critical management issues for both private
and public organizations in our present age of digitalization. The reason for this is
that a halt of any information systems function (ISF) service may be truly devastating
for the operational capacity and reputation of an organization (Elstien, 1999).
Putting an effective DRP in place can reduce the severity of a potential disaster, allay
staff anxiety, and speed up the recovery process where necessary (Pember, 1996;
Rutherford and Myer, 2000).
DRP is not a new concept for large organizations around the world, because many
governments ordered organizations within their borders to implement DRP for their ISF
services, so that their computer systems could survive a possible Y2K bug. To date,
much anecdotal evidence has been published supporting the significance of DRP for ISF.
The published evidence includes that on IT disaster preparedness by Nelson (2006) and
Botha and von Solms (2004), the contingency planning guide for information technology
systems by the US National Institute of Standards and Technology, and the ISO/IEC
27000 series of international standards that are reserved
for a family of information security management standards. However, crucially
missing from the DRP literature is any overview study that refers to the determination
of DRP critical success factors (CSFs) for ISF, based on relevant measurement items
collected from an extensive literature review.
The objectives of this paper are three-fold: to provide a literature review on the
critical success factors for DRP, to outline a methodology for assessing the CSFs in
an organization, based on the importance perceived by DRP managers at a given time,
and to provide empirical research to substantiate the previous two objectives.
The following sections will review the DRP literature and relevant DRP
measurement items, discuss the model development, present the research
methodology, and discuss the findings, before offering a conclusion.
2. Disaster recovery planning
There is much literature documenting the success factors of DRP implementation.
These success factors, based on either an organizational or departmental level
perspective, are described in terms of events, items, components, operational steps, or
constructs.
At the organizational level, Francis (1993) examines the DRP process of Chi/Cor
Information Management, Inc. and reports that DRP implementation should comprise
the following ten processes: project organization, business impact analysis, security
review, strategy development, back-up and recovery, alternative site selection, disaster
recovery plan development, testing, maintenance, and periodic audit. Dwyer et al.
(1994) summarize nine DRP success factors from the reference guidelines of the
systems auditability and control report, namely organizing and managing the project,
performing organizational impact analysis, determining minimum processing
requirements, analyzing risks, prioritizing tasks to recover, analyzing alternatives
and selecting strategy, developing the plan, testing the plan, and maintaining the plan.
Similarly, Smith and Sherwood (1995) propose nine DRP success factors, namely policy
statement, planning responsibilities, incident management responsibility, business
impact analysis, recovery strategies, training and awareness, testing, maintenance and
review, and documentation. One study by Vartabedian (1999) details a total of 14
events for the successful implementation of DRP: problem recognition; need
justification; management support; dollar, time, and resource commitment; recovery
team selection; business impact analysis; risk analysis; plan content; responsibilities of
the recovery team; back-up procedures; disaster implementation task; post-plan
activities; testing; and maintenance.
At the departmental level, Wong et al. (1994) propose that an effective DRP for ISF
should consist of nine procedural steps: obtaining top management commitment,
establishing a planning committee, performing risk and impact analysis, prioritizing
recovery needs, selecting a recovery plan, selecting a vendor and developing an
agreement, developing and implementing the plan, testing the plan, and continually
testing and evaluating the plan. Blatnik (1998) describes the DRP implementation for
ISF as including nine successful events, namely enforcement of policy, threat analysis,
back-up strategies, training, testing, documentation, regular reviews, regular updates,
and ISF personnel participation. Adam (1999) suggests that DRP implementation in
human resources departments should take into account the following DRP
components: compiling an emergency schedule, assigning an emergency crew,
establishing an alternative site, training, back-up, and updating the plan. For a simple
DRP for use in human resources departments, Solomon (1994) proposes three basic
elements: assessment of the vulnerabilities and resources, examination of
communication options, and exploration of alternative locations. In a study of DRP
implementation within small and large accounting firms, Ivancevich et al. (1997) state
that the following measurement items can be used to ensure a successful DRP system:
management support, risk analysis, fire and water insurance, identification of critical
applications, alternative site processing, off-site storage, file back-up procedures,
communication policies to handle disruption of phone line or cables, testing the
plan, documentation of immediate action, documentation of the plan, and updating
the plan.
3. DRP constructs and their measurement items
This section presents an extensive literature review on DRP constructs for ISF and
their measurement items. Table I shows a comprehensive list of 14 DRP constructs
identified from the IS-related literature, together with their surveyed measurement
items. These constructs are as follows:
(1) obtaining the ISF top management commitment for DRP (hereafter, top
management commitment);
(2) establishing ISF policy and goals for DRP (policy and goals);
(3) forming the ISF steering committee for DRP (steering committee);
(4) performing IS risk assessment and impact analysis for DRP (risk assessment
and impact analysis);
(5) prioritizing IS functions for DRP (prioritization);
(6) determining the minimum IS processing requirement for DRP (minimum
processing requirement);
(7) selecting IS alternative sites for DRP (alternative site);
(8) developing IS back-up storage systems for DRP (back-up storage);
(9) forming an IS recovery team for DRP (recovery team);
(10) training ISF staff to activate DRP (training);
(11) launching DRP testing in ISF (testing);
(12) documenting DRP procedures for ISF (documentation);
(13) maintaining DRP procedures for ISF (maintenance); and
(14) involving ISF personnel in DRP (ISF personnel participation).
Table I. A list of DRP constructs in literature

[Table I maps the 14 DRP constructs to the success factors proposed in the reviewed
sources: Francis (1993), Coleman (1993), Turner (1994), Solomon (1994), Baker (1995),
Smith and Sherwood (1995), Heng (1996), Blatnik (1998), Leary (1998), Adam (1999),
Devargas (1999), and Vartabedian (1999).]
3.8 Back-up storage
An off-site location should also be selected so that the back-up storage is not affected
by the same disaster. The DRP measurement items for this construct are to:
(1) establish off-site backup (Hawkins et al., 2000; Rohde and Haskett, 1990);
(2) establish on-site backup (Hawkins et al., 2000; Rohde and Haskett, 1990);
(3) establish a backup schedule (Haubner, 1994); and
(4) subscribe to insurance coverage (Coult, 1999; Eckerson, 1992).
The Appendix reveals these four measurement items, which are Q30-Q33.
3.9 Recovery team
The recovery team coordinates recovery tasks in an effective manner when a disaster
strikes (Donovan et al., 1999). Miller (1997) states that a team approach to managing the
recovery process in the event of a disaster is important for two reasons: first,
not all relevant staff may be present when a disaster strikes; second, when more of
the right people are involved, more intelligent answers to DRP problems may be
generated. ISF personnel must also participate in the recovery team so that a faster
restoration of ISF functions/services can be ensured in the event of a disaster. The DRP
measurement items for this construct are to:
(1) participate in the recovery team (Miller, 1997; Fong, 1991); and
(2) be involved in the recovery team (McNurlin, 1988; Fong, 1991).
The Appendix reveals these two measurement items, which are Q34-Q35.
3.10 Testing
A series of test programs needs to be developed to make sure the DRP is a complete
and accurate product. Testing can be designed in such a way that the weaknesses of a
DRP can be identified (Lee and Ross, 1995). The DRP should also be tested in such a
way that it renders minimal disturbance to the daily operations of an organization. The
DRP measurement items for this construct are to test the plan:
(1) as though a disaster had happened (Edwards and Cooper, 1994; Wong et al.,
1994);
(2) by duplicating processing (Wong et al., 1994);
(3) by recovering procedures (Wong et al., 1994);
(4) by recovering functions (Edwards and Cooper, 1994; Wong et al., 1994); and
(5) by simulating a disaster (Edwards and Cooper, 1994; Wong et al., 1994).
The Appendix reveals these five measurement items, which are Q36-Q40.
3.11 Training
Once the DRP is developed, all staff involved in the plan must know their roles and
duties. A training program is therefore required to ensure that all staff understand
their positions, which will subsequently reduce the potential for operational errors
and the opportunity for miscommunication when the plan is implemented during a real
disaster (Turner, 1994). Training is thus essential. The DRP measurement items for
this construct are the seven staff-training items revealed in the Appendix, which are
Q41-Q47.
3.12 Documentation
The documentation construct refers to a set of manuals and procedures that outlines,
for the DRP recovery team, all events related to DRP in the event of a disaster. In
addition, the exact details of the functions, personnel, responsibilities, contact names
and numbers, and equipment involved in a disaster must be documented (Jacobs and
Weiner, 1997). The DRP measurement items for this construct are to document:
(1) recovery procedures (Salzman, 1998; Doughty, 1993);
(2) the procedures of emergency responsibility (Salzman, 1998; Ferraro and
Hayes, 1998);
(3) the roles and responsibilities of the recovery team (Salzman, 1998;
Doughty, 1993);
(4) team members and contact numbers (Miller, 1997; Jacobs and Weiner, 1997);
(5) backup operations (Salzman, 1998, Miller, 1997);
(6) the personnel to contact (Coult, 1999; Jacobs and Weiner, 1997);
(7) the notification procedure (Coult, 1999; Jacobs and Weiner, 1997); and
(8) the list of suppliers/vendors (Pember, 1996; Miller, 1997).
The Appendix reveals these eight measurement items, which are Q48-Q55.
3.13 Maintenance
Rothstein (1988) claims that the creation of a DRP without periodic testing and ongoing
maintenance is worse than not having a DRP at all. The maintenance construct is
important to reduce the likelihood of incorrect decisions being made and to decrease
the stress of disaster-team members during the recovery process (Frost, 1994). To keep
up with the ever-changing information system technology, the DRP should be
reviewed and tested on a regular basis (Peterson and Perry, 1999). Each time the DRP is
altered, the changes must be incorporated into the plan. The DRP measurement items for this
construct are to:
(1) test the DRP periodically (Lee and Ross, 1995; Karakasidis, 1997);
(2) evaluate the DRP continually (Mitome et al., 2001; Miller, 1997);
(3) review the DRP regularly (Ebling, 1996; Matthews, 1994);
(4) update DRP changes (Norman, 1993; Korzeniowski, 1990); and
(5) audit the DRP (Bodnar, 1993; Francis, 1993).
The Appendix reveals these five measurement items, which are Q56-Q60.
3.14 ISF personnel participation
ISF personnel must participate in and monitor the development process of DRP in an
organization (Wong et al., 1994). Although ISF personnel may not be DRP team
members, they should contribute their technical knowledge at all stages
(Rutherford and Myer, 2000). Because ISF personnel should know their duties and
responsibilities within the disaster recovery process, they should review the plan and
check whether the recovery operation procedures operate as planned. In addition,
ISF personnel should review the DRP regularly from a technical standpoint so that
ISF service disruption is kept to a minimum (Blatnik, 1998). The DRP
measurement items for this construct are to participate in:
(1) recovery duties and responsibility (Blatnik, 1998; Butler, 1997); and
(2) reviewing the DRP (Blatnik, 1998; Baker, 1995).
The Appendix reveals these two measurement items, which are Q61-Q62.
4. Methodology
4.1 Study subject
The research sample was based on the directory of Top 2000 Foreign Enterprise in
Hong Kong 1999. Two criteria were adopted when selecting potential respondents: first,
the organizations of all selected candidates had to have a DRP for their ISF; second,
the selected candidates had to have experience of, and be in charge of, DRP in their
present organization. ISF departments were contacted directly to explain the purpose of
this research, and to obtain names and positions of target respondents. Subsequently,
a total of 500 potential participants were contacted and a questionnaire, together with
a covering letter, was mailed to them.
4.2 Instrument development
The 62 measurement items of DRP, as shown in the Appendix, were used in the first
part of our questionnaire. Managers were asked to evaluate the importance of these
measurement items using a measurement scale of 1-5, where a value of 5 represented
the most important and a value of 1 the least important. These measurement scores
were later used to determine the critical success constructs of the DRP for ISF. The
second part collected demographic data about our respondents.
4.3 Data collection procedure
A structured questionnaire with a cover letter was used in the data collection.
A complete set of questionnaires was sent to each selected respondent with a return
envelope. A total of 139 replies were returned, though ten were incomplete and so were
discarded. Therefore, 129 questionnaires were used for the data analysis, which
constituted a response rate of 26.7 percent.
Table II shows the characteristics of the respondents. In total, 23.3 percent held an
IS director position, 69.8 percent were managers, and the remaining 7 percent held the
position of IS executives. About 80 percent of the respondents had more than three
years' working experience with DRP systems in ISF, with the remaining 20 percent
having one to three years' DRP working experience. This information shows that the
respondents had gained in-depth DRP knowledge in their organizations. About
20 percent of the respondents worked in financial institutions and in manufacturing,
and about 15 percent came from the IT industry. The information also shows that
38 percent of the companies hired more than 500 employees, about 21 percent hired
251-500 employees, and the remaining companies hired fewer than 250 employees.

Table II. Characteristics of the respondents

Respondent characteristics                    Total numbers   Percentage
Position
  Director                                    30              23.3
  Manager                                     90              69.8
  Executives                                  9               7.0
Years handling DRP in ISF
  1-3 years                                   23              17.8
  Over 3 years                                106             82.2
Company size
  1-50                                        8               6.2
  51-100                                      21              16.3
  101-250                                     24              18.6
  251-500                                     27              20.9
  Above 500                                   49              38.0
Industry type
  Banking/finance/insurance                   24              18.6
  Retail/wholesale                            11              8.5
  Manufacturing                               25              19.4
  Restaurant/hotel                            4               3.1
  Transportation/shipping                     5               3.9
  Real estate                                 2               1.6
  Computers/telecommunications/networking     19              14.7
  Medicine/health                             2               1.6
  Media/publishing                            6               4.7
  Utilities                                   5               3.9
  Others                                      26              20.2
About 85 percent of the respondents claimed that IS services are important to the daily
operations of their organizations, about 11 percent rated the dependency on IS services
as crucial or critical, and the remaining 4 percent rated it as neutral. DRP experience
ranged from seven to 25 years.
5. Results
This section analyzes the content validity of the instrument, the mean scores of the
proposed constructs, the convergent analysis for DRP CSFs, and the reliability and
labeling of the CSFs.
5.1 Content validity
In this part, the content validity of the instrument is assessed. Content validity refers
to the extent to which the measurement items of a construct/component actually
represent the meaning of that construct/component (Babbie, 1992). A content validity
test is judged subjectively by researchers rather than proven by statistical testing
(Saraph et al., 1989). The contents of the proposed instrument in this paper are derived
from an extensive literature review, as reported in Section 3; the content validity of
the instrument is thus reasonably justified.
5.2 Mean scores of proposed DRP constructs
Table III reports the mean scores of the proposed DRP constructs. In this table, a mean
score of 5 indicates that a DRP construct is highly important, and a score of 1 that it
is unimportant. The construct of back-up storage has the highest score (mean 4.118),
while the construct of alternative site has the lowest (mean 3.155).
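As an illustration of how such construct scores can be computed, the following sketch
averages a construct's items per respondent and then summarizes across respondents.
The response matrix is synthetic (numpy random data standing in for the survey), so the
numbers it prints are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 129 respondents rating the four back-up storage
# items (Q30-Q33) on the 1-5 importance scale.
responses = rng.integers(1, 6, size=(129, 4))

# Each respondent's construct score is the mean of that construct's items;
# the construct-level mean and SD summarize all respondents, as in Table III.
construct_scores = responses.mean(axis=1)
mean = construct_scores.mean()
sd = construct_scores.std(ddof=1)
print(round(mean, 2), round(sd, 2))
```

With real 1-5 Likert responses, the construct mean necessarily falls between 1 and 5,
which is why Table III's values range from 3.16 to 4.12.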
5.3 Convergent analysis for DRP CSFs
This study applied the construct convergent approach, a rigorous way to create a
factor structure matrix in which the correlations of all measurement items with every
construct are listed (Compeau and Higgins, 1995). Principal component analysis with
varimax rotation was used to determine the DRP critical success constructs for ISF.
Table IV shows the total variance explained by the convergent analysis; a total of
11 CSFs of DRP is proposed, which together explain a cumulative 76 percent of the
variance. The Kaiser-Meyer-Olkin measure of sampling adequacy is 0.868. Table V
reveals the corresponding factor loadings. In this table, a loading of 0.5 or above
was considered a significant contribution to a CSF for DRP. Table V also shows that
ten proposed measurement items were removed from the data analysis: two of them
(Q27 and Q56) were shared by two constructs, and the other eight (Q14, Q15, Q16, Q17,
Q19, Q35, Q61, and Q62) had all factor loadings below 0.5. We also removed the 11th
CSF in Table V, which has only one measurement item (Q12), because its reliability
value cannot be computed; reliability is studied for all CSFs in the next section.
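The extraction procedure described above can be sketched as follows. This is a minimal,
self-contained illustration with synthetic data, not the paper's analysis pipeline: the
numpy-based `varimax` routine and the random response matrix are assumptions. Components
with eigenvalues above 1 are retained (Kaiser criterion), their loadings are
varimax-rotated, and loadings of 0.5 or more are flagged as significant:

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    # Standard varimax rotation of a (variables x factors) loading matrix.
    p, k = loadings.shape
    rotation = np.eye(k)
    variance = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T
            @ (rotated**3 - rotated @ np.diag((rotated**2).sum(axis=0)) / p)
        )
        rotation = u @ vt
        if s.sum() - variance < tol:
            break
        variance = s.sum()
    return loadings @ rotation

# Hypothetical data: 129 respondents x 12 items (stand-ins for the survey).
rng = np.random.default_rng(1)
X = rng.normal(size=(129, 12))

# Principal components of the item correlation matrix.
corr = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]          # eigh returns ascending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

keep = eigvals > 1.0                        # Kaiser criterion
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
rotated = varimax(loadings)

cum_var = eigvals[keep].sum() / eigvals.sum()   # cumulative variance explained
significant = np.abs(rotated) >= 0.5            # the 0.5 loading cutoff
```

Because varimax is an orthogonal rotation, each item's communality is unchanged; the
rotation only redistributes loadings across factors so each item loads strongly on as
few factors as possible, which is what makes the 0.5 cutoff interpretable.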
5.4 Reliability and labeling of DRP CSFs
A reliability test refers to the examination of the internal consistency of a
measurement instrument. The Cronbach's alpha (α) coefficient is the most popular
method for assessing reliability, and a high alpha value (close to 1) indicates a
highly reliable measurement instrument.
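Cronbach's alpha can be computed from the item variances and the variance of the summed
scale. A minimal sketch with synthetic data (the `cronbach_alpha` helper and the
simulated scores are assumptions for illustration): four items that track a common
underlying rating closely should yield a high alpha, as with the paper's retained CSFs:

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents x n_items) matrix of scores.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical data: four items = common rating + small independent noise,
# so internal consistency should be high.
rng = np.random.default_rng(2)
base = rng.normal(size=200)
scores = np.column_stack(
    [base + 0.3 * rng.normal(size=200) for _ in range(4)]
)
alpha = cronbach_alpha(scores)
```

Dropping an item that correlates poorly with the rest raises alpha, which is the logic
behind the "items deleted" and "maximum alpha" columns reported in Table VI.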
Table III. Mean scores and standard deviations of proposed DRP constructs

DRP constructs                        Question numbers(a)   No. of items   Mean   SD
Top management commitment             Q1-Q4                 4              4.00   0.79
Policy and goals                      Q5-Q9                 5              3.92   0.79
Steering committee                    Q10-Q12               3              3.17   0.95
Risk assessment and impact analysis   Q13-Q17               5              3.67   0.84
Prioritization                        Q18-Q22               5              4.02   0.77
Minimum processing requirement        Q23-Q26               4              3.89   0.80
Alternative site                      Q27-Q29               3              3.16   1.06
Back-up storage                       Q30-Q33               4              4.12   0.67
Recovery team                         Q34-Q35               2              3.66   0.95
Testing                               Q36-Q40               5              3.41   0.96
Training                              Q41-Q47               7              3.41   0.79
Documentation                         Q48-Q55               8              3.74   0.90
Maintenance                           Q56-Q60               5              3.53   0.88
ISF personnel participation           Q61-Q62               2              3.81   0.91

Note: (a) These questions correspond to the measurement items shown in the Appendix
Table IV. Total variance explained

            Initial eigenvalues                        Rotation sums of squared loadings
Component   Total    % of variance   Cumulative %      % of variance   Cumulative %
1           26.723   43.102          43.102            10.835          10.835
2           3.990    6.436           49.538            10.128          20.964
3           2.944    4.748           54.286            8.760           29.723
4           2.699    4.353           58.639            8.168           37.892
5           2.063    3.327           61.966            7.579           45.470
6           1.905    3.073           65.039            7.106           52.576
7           1.658    2.674           67.713            7.063           59.639
8           1.490    2.403           70.115            6.691           66.331
9           1.308    2.110           72.225            3.575           69.906
10          1.231    1.985           74.211            3.444           73.350
11          1.130    1.823           76.034            2.684           76.034

Note: Extraction sums of squared loadings for the 11 retained components equal the
initial eigenvalues and are omitted here
Table V. Rotated component matrix

[Table V lists the varimax-rotated loadings of the 62 measurement items (Q1-Q62) on the
11 extracted components; loadings of 0.5 or above are treated as significant
contributions to a CSF.]

Notes: Extraction method: principal component analysis; rotation method: varimax with
Kaiser normalization; rotation converged in 11 iterations; measurement items with all
loadings less than 0.5 are not included in the results; measurement items shared by two
separate factors were removed
Table VI. Reliability of the proposed ten CSFs

[Table VI reports, for each of the ten retained CSFs, its measurement items, the number
of items, any items deleted to maximize reliability (Q13, Q23, Q28, Q29, Q30, Q31, and
Q33 among them), the number of retained items, and the original and maximum Cronbach's
alpha values, which range from 0.81 to 0.944. The ten CSFs are labeled: DRP
documentations; DRP steering committee and DRP testing; DRP policy and goals; DRP
training; DRP maintenance and staff involvement; DRP minimum IS processing
requirements; top management commitment to DRP; prioritization of IS
functions/services; external, off-site back-up system; and internal, on-site back-up
system.]
any relevant changes immediately. All concerned personnel must be actively involved
in this maintenance process, so that everyone is aware of the changes.
The sixth DRP CSF, which consists of three measurement items, is the DRP
minimum IS processing requirement. The four areas that dictate the survivability of
firms without IS support may include the maximum allowable downtime for firms, the
maximum allowable downtime for customers, the minimum functional services to all
concerned parties, and the maximum time by which all IS services are restored. These
areas of maximum downtime requirement are particularly crucial for those firms
highly dependent on electronic commerce such as e-trading and e-banking.
The seventh DRP CSF, which consists of four measurement items, is top management
commitment to DRP. Top management commitment must first be shown in the allocation of
the DRP budget, and then in exhibiting leadership through involvement and participation
in DRP events, such as personally attending DRP meetings. These activities promote
staff enthusiasm toward DRP.
The eighth DRP CSF, which consists of four measurement items, is prioritization of IS functions/services. Firms should establish a rule for deciding which functions, activities, and system requirements are the most critical, and hence the order in which they should be recovered should a disaster strike. Without such predetermined priorities, firms could easily descend into chaos, because DRP personnel would be unable to decide which jobs to recover first.
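A minimal sketch of such a priority rule follows; the function names and rankings are invented for illustration and are not part of the survey instrument:

```python
# Hypothetical priority table: 1 = most critical, recovered first.
RECOVERY_PRIORITY = {
    "e-trading": 1,
    "e-banking": 1,
    "payroll": 2,
    "email": 3,
    "intranet": 4,
}

def recovery_order(affected_functions):
    """Order the IS functions hit by a disaster by agreed priority.

    Functions missing from the table default to last (priority 99),
    so an unranked service never pre-empts a ranked one; ties are
    broken alphabetically to keep the order deterministic.
    """
    return sorted(affected_functions,
                  key=lambda f: (RECOVERY_PRIORITY.get(f, 99), f))
```

The point of fixing the table in advance is precisely the one the text makes: during a disaster, recovery staff execute an agreed order rather than debate one.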
The ninth DRP CSF, which consists of two measurement items, is the external, off-site back-up processing site for DRP. This CSF suggests that an external back-up processing site be established, so that IS operations/services remain functional if a disaster strikes a local office. Many industries, such as banking, have adopted this measure to ensure that their IS functions would not be totally halted by such a disaster.
The tenth DRP CSF, which consists of two measurement items, is the internal, on-site back-up processing site for DRP. This final CSF suggests that all firms establish an internal back-up processing site so that IS operations/services can be resumed immediately in the event of a disaster. Recovery from an on-site back-up is claimed to be more efficient than recovery under the ninth CSF (off-site back-up), especially when the disaster is a small-scale one, such as a small fire in a computer room or a virus attack on local servers.
7. Conclusion
DRP is considered equally important at both the organizational and developmental levels. Most companies nowadays have developed a DRP, but quantitative studies of the CSFs for DRP implementation are lacking. This paper addresses that gap: first, by providing a comprehensive DRP literature review; second, by identifying 62 DRP measurement items; and third, by verifying ten DRP CSFs for the information systems function. These ten DRP CSFs are: DRP documentation; the DRP steering committee and DRP testing; DRP policy and goals; DRP training; DRP maintenance and staff involvement; DRP minimum IS processing requirements; top management commitment to DRP; prioritization of IS functions/services; provision of an external, off-site back-up system; and provision of an internal, on-site back-up system.
The findings of this paper can serve as a basis for many future DRP research studies.
One suggestion is to apply these CSFs to serve as a foundation for developing a corporate
DRP strategy, similar to the organizational system model proposed by Ravichandran
and Rai (2000). Other suggestions include studying how DRP CSFs contribute to the
success of DRP performance within a firm, or how their applicability to large enterprises
compares with their applicability to small- and medium-sized enterprises.
References
Adam, M. (1999), Preventing the chain react, HR Magazine, January, pp. 28-32.
Arnell, A. (1990), Handbook of Effective Disaster/Recovery Planning, McGraw-Hill,
New York, NY.
Babbie, E. (1992), The Practice of Social Research, 6th ed., Wadsworth, Belmont, CA.
Baker, G. (1995), Quick recoveries, CA Magazine, August, pp. 49-53.
Blake, W.F. (1992), Making recovery a priority, Security Management, Vol. 36 No. 4, pp. 71-4.
Blatnik, G.J. (1998), Point of failure recovery plan, IS Audit & Control Journal, Vol. 4 No. 1,
pp. 24-7.
Bodnar, G.H. (1993), Data security and contingency planning, Internal Auditing, Winter,
pp. 74-80.
Botha, J. and von Solms, R. (2004), Cyclic approach to business continuity planning,
Information Management & Computer Security, Vol. 12 No. 4, pp. 328-37.
Butler, J.G. (1997), Contingency Planning and Disaster Recovery: Protecting Your Organization's Resources, Computer Technology Research Corp., Montgomery, AL.
Carlson, S.J. and Parker, D.J. (1998), Disaster recovery planning and accounting information
systems, Review of Business, Winter, pp. 10-15.
Cerullo, M.J. and Cerullo, V. (1998), Key factors to strengthen the disaster contingency and recovery planning process, Information Strategy: The Executive's Journal, Winter, pp. 37-43.
Cerullo, M.J., McDuffie, R.S. and Smith, L.M. (1994), Planning for disaster, The CPA Journal,
June, pp. 34-8.
Chow, W.S. (2000), Success factors for IS disaster recovery planning in Hong Kong,
Information Management & Computer Security, Vol. 8 No. 2, pp. 80-6.
Coleman, R. (1993), Six steps to disaster recovery, Security Management, February, pp. 61-2.
Compeau, D. and Higgins, C. (1995), Computer self-efficacy: development of a measure and
initial test, MIS Quarterly, Vol. 19 No. 2, pp. 189-211.
Coult, G. (1999), Disaster recovery, Managing Information, Vol. 6 No. 3, pp. 31-5.
Devargas, M. (1999), Survival is not compulsory: an introduction to business continuity
planning, Computers & Security, Vol. 18 No. 1, pp. 35-46.
Donovan, T., Rosson, B. and Eichstadt, B. (1999), Preparing carriers for . . ., Telephony, June,
pp. 180-8.
Doughty, K. (1991), Performing a business impact analysis, EDPACS, Vol. 18 No. 9, pp. 1-7.
Doughty, K. (1993), Auditing the disaster recovery plan, EDPACS, Vol. 21 No. 3, pp. 1-12.
Douglas, W.J. (1998), A systematic approach to continuous operations, Disaster Recovery
Journal, Summer, pp. 1-3.
Dwyer, P.D., Friedberg, A.H. and McKenzie, K.S. (1994), It can happen here: the importance of continuity planning, IS Audit & Control Journal, Vol. 1, pp. 30-5.
Ebling, R.G. (1996), Establishing safe harbor: how to develop a successful disaster recovery
program, Risk Management, September, pp. 53-6.
Eckerson, W. (1992), Andrew reminds users of need for disaster planning, Network World, Vol. 2, September, p. 56.
Edwards, B. and Cooper, J. (1994), Testing the disaster recovery plan, Information
Management & Computer Security, Vol. 3 No. 1, pp. 21-7.
Elstien, C. (1999), Reliance on technology, Enterprise Systems Journal, July, pp. 38-40.
Ferraro, A. and Hayes, S. (1998), Auditors add value to the business continuity program,
IS Audit & Control Journal, Vol. 5, pp. 47-50.
Fong, A. (1991), The Disaster Recovery Plan, Business Research Centre, School of Business, Hong Kong Baptist University, Kowloon Tong, Hong Kong.
Francis, B. (1993), Recover from distributed disasters, Datamation, December, pp. 73-6.
Frost, C. (1994), Effective responses for proactive enterprises: business continuity planning,
Disaster Prevention and Management, Vol. 3 No. 1, pp. 7-15.
Gallegos, F. and Wright, D.M. (1988), Evaluating data security: the initial review, Data Security,
Spring, pp. 8-15.
Gilbert, L. (1995), The function of a business continuity plan, EDPACS, March, pp. 12-15.
Ginn, R.D. (1989), The case for continuity, Security Management, January, pp. 84-90.
Gluckman, D. (2000), Continuity . . . recovery, Risk Management, March, p. 45.
Haubner, D. (1994), Data processing disaster recovery planning considerations for tandem
network, EDPACS, Vol. 19 No. 11, pp. 1-7.
Hawkins, S.M., Yen, D.C. and Chou, D.C. (2000), Disaster recovery planning: a strategy for data
security, Information Management & Computer Security, Vol. 8 No. 5, pp. 222-9.
Heng, G.M. (1996), Developing a suitable business continuity planning methodology,
Information Management & Computer Security, Vol. 4 No. 2, pp. 11-13.
Hiatt, C. and Motz, A. (1990), Disaster recovery planning: what it should be, what it is, how to
improve it, EDPACS, Vol. 17 No. 9, pp. 1-9.
Hurwicz, M. (2000), When disaster strikes, Network Magazine, January, pp. 44-50.
Ivancevich, D.M., Hermanson, D.R. and Smith, L.M. (1997), Accounting information system
controls and disaster recovery plans, Internal Auditing, Spring, pp. 13-20.
Iyer, R. and Sarkis, J. (1998), Disaster recovery planning in an automated manufacturing
environment, IEEE Transactions on Engineering Management, Vol. 45 No. 2, pp. 163-75.
Jacobs, J. and Weiner, S. (1997), The CPA's role in disaster, The CPA Journal, Vol. 20-24, November, pp. 56-8.
Karakasidis, K. (1997), A project planning process for business continuity, Information
Management & Computer Security, Vol. 5 No. 2, pp. 72-8.
Korzeniowski, P. (1990), How to avoid disaster with a recovery plan, Software Magazine,
February, pp. 46-55.
Kovacich, G.L. (1996), Establishing a network security programme, Computers & Security, Vol. 15 No. 6, pp. 486-98.
Krivda, C.D. (1998), Planning for the worst, Midrange Systems, September, pp. 10-14.
Krousliss, B. (1993), Disaster recovery planning, Catalog Age, Vol. 10 No. 12, p. 98.
Kull, D. (1982), Disaster recovery: just in case, Computer Decisions, September, pp. 180-209.
Leary, M.K. (1998), A rescue plan for your LAN, Security Management, March, pp. 53-60.
Lee, S. and Ross, S. (1995), Disaster recovery planning for information systems, Information
Resources Management Journal, Summer, pp. 18-23.
McNurlin, B.C. (1988), Trends in disaster, I/S Analyzer, Vol. 26 No. 11, pp. 1-12.
Marcella, A. Jr and Rauff, J. (1994), Automated disaster recovery plan auditing prospects for
using expert systems to evaluate disaster recovery plans, EDPACS, Vol. 21 No. 9, pp. 1-16.
Matthews, G. (1994), Disaster management: controlling the plan, Managing Information, Vol. 1
Nos 7/8, pp. 24-7.
Meade, P. (1993), Taking the risk out of disaster recovery services, Risk Management,
February, pp. 20-6.
Menkus, B. (2000), Defining security threats to information processing application, EDPACS,
February, pp. 9-15.
Miller, H.J. (1997), A guide to planning for the business recovery of an administrative business
unit, EDPACS, April, pp. 9-16.
Mitome, Y., Speer, K.D. and Swift, B. (2001), Embracing disaster with contingency planning,
Risk Management, May, pp. 18-27.
Moch, C. (1999), Taking cover: Bell Atlantic presents businesses with security options, Telephony, August, p. 62.
Morwood, G. (1998), Business continuity: awareness and training programmes, Information
Management & Computer Security, Vol. 6 No. 1, pp. 28-32.
Murphy, J.H. (1991), Taking the disaster out of recovery, Security Management, August, pp. 61-6.
Myatt, P.B. (1999), Going in for analysis, Security Management, April, pp. 75-9.
Myers, K.N. (1999), Manager's Guide to Contingency Planning for Disasters: Protecting Vital Facilities and Critical Operations, 2nd ed., Wiley, New York, NY.
Nelson, K. (2006), Examining factors associated with IT preparedness, Proceedings of the 39th Hawaii International Conference on System Sciences, pp. 1-10.
Newton, J. and Cudlipp, G. (1998), From chaos to control, Canadian Insurance, November,
pp. 20-22, 26.
Norman, G. (1993), Disaster recovery after downsizing, Computers & Security, Vol. 12 No. 3,
pp. 225-9.
Norusis, M.J. (1993), SPSS for Windows Professional Statistics Release 6.0, SPSS, Chicago, IL.
Nunnally, J.C. (1967), Psychometric Theory, McGraw-Hill, New York, NY.
Paton, D. and Flin, R. (1999), Disaster stress: an emergency management perspective, Disaster
Prevention and Management, Vol. 8 No. 4, pp. 261-7.
Peach, S. (1991), Disaster recovery: an unnecessary cost burden or an essential feature of any DP
installation, Computers & Security, Vol. 10 No. 6, pp. 565-8.
Pember, M.E. (1996), Information disaster planning: an integral component of corporate risk
management, Records Management Quarterly, April, pp. 31-7.
Peterson, D.M. and Perry, R.W. (1999), The impacts of disaster exercises on participants,
Disaster Prevention and Management, Vol. 8 No. 4, pp. 241-54.
Petroni, A. (1999), Managing information systems contingencies in banks: a case study,
Disaster Prevention and Management, Vol. 8 No. 2, pp. 101-10.
Ravichandran, T. and Rai, A. (2000), Quality management in systems development: an
organizational system perspective, MIS Quarterly, Vol. 24 No. 3, pp. 381-415.
Rohde, R. and Haskett, J. (1990), Disaster recovery planning for academic computing centers,
Communications of the ACM, Vol. 33 No. 6, pp. 652-7.
Rosenthal, P. and Himel, B. (1991), Business resumption planning: exercising your emergency response teams, Computers & Security, Vol. 10 No. 6, pp. 497-514.
Rothstein, P.J. (1988), Up and running, Datamation, October, pp. 86-96.
Rothstein, P.J. (1998), Disaster recovery: in the line of fire, Managing Office Technology, May,
pp. 26-30.
Rutherford, K. and Myer, G. (2000), Business continuity: do you have a plan?, Canadian
Underwriter, April, pp. 38-41.
Salzman, T. (1998), An audit work program for reviewing IS disaster recovery plans
(conclusion), EDPACS, Vol. 25 No. 7, pp. 8-20.
Saraph, J.V., Benson, P.G. and Schroeder, R.G. (1989), An instrument for measuring the critical
factors of quality management, Decision Sciences, Vol. 20 No. 4, pp. 810-29.
Smith, M. and Sherwood, J. (1995), Business continuity planning, Computers & Security, Vol. 14 No. 1, pp. 14-23.
Solomon, C.M. (1994), Bracing for emergencies, Personnel Journal, April, pp. 74-83.
Tilley, K. (1995), Work area recovery planning: the key to corporate survival, Disaster
Prevention and Management, Vol. 13 Nos 9/10, pp. 49-53.
Turner, D. (1994), Resources for disaster recovery, Security Management, August, pp. 61-7.
Vartabedian, M. (1999), For want of a nail, Call Center Solutions, February, pp. 40-8.
Warigon, S. (1999), Preparing and auditing an IS security incident-handling plan, EDPACS,
Vol. 26 No. 10, pp. 1-13.
Wong, B.K., Monaco, J.A. and Sellaro, C.L. (1994), Disaster recovery planning: suggestions to top
management and information systems managers, Journal of Systems Management, May,
pp. 28-32.
Wrobel, L.A. (1997), The Definitive Guide to Business Resumption Planning, Artech House,
Norwood, MA.
Wroblewski, M.E. (1982), Contingency planning for DP, Data Management, May, pp. 25-7.
Yiu, K. and Tse, Y.Y. (1995), A model for disaster recovery planning, IS Audit & Control Journal, Vol. 5, pp. 45-51.
Zolkos, R. (2000), To rebound from disaster requires advance plans, Business Insurance, Vol. 34
No. 9, pp. 2-4.
Further reading
Kerlinger, F.N. (1986), Foundations of Behavioral Research, 3rd ed., Holt, Rinehart & Winston, New York, NY.
Zajac, B.P. Jr (1989), Disaster recovery: are you really ready?, Computers & Security, Vol. 8, pp. 297-8.
Appendix

Table AI. Measurement items for DRP constructs

[The multi-column layout of this table (question numbers, measurement items, and sample references) was lost in extraction on this page. Only a question number (Q4) and the sample-references column survive, and the references (Pember (1996), Ferraro and Hayes (1998), Ginn (1989), Rohde and Haskett (1990), Bodnar (1993), Coleman (1993), Carlson and Parker (1998), Kull (1982), Warigon (1999), Iyer and Sarkis (1998), Meade (1993), Kovacich (1996), Hiatt and Motz (1990), Turner (1994), Dwyer et al. (1994), Petroni (1999), Zolkos (2000), Karakasidis (1997), Jacobs and Weiner (1997), Cerullo and Cerullo (1998), Wroblewski (1982), Leary (1998), Krivda (1998), Wong et al. (1994), Newton and Cudlipp (1998), Gilbert (1995), Rosenthal and Himel (1991), Haubner (1994), Moch (1999), Menkus (2000), Coult (1999), Peach (1991), Salzman (1998), Marcella and Rauff (1994), Blake (1992) and Smith and Sherwood (1995)) cannot be reliably realigned with their measurement items.]
Table AI (continued)

Construct: Testing
Q36. Disaster recovery plan is tested as though a real disaster occurred
Q37. Disaster recovery plan is tested by duplications of regular processing
Q38. Disaster recovery plan is tested by testing individual recovery procedures
Q39. Disaster recovery plan is tested by testing recovery functions
Q40. Disaster recovery plan is tested by simulating an actual disaster condition by interrupting service

Construct: Training
Q41. Training in disaster response is given to recovery personnel
Q42. Training in stress management is given to recovery personnel

[Q22's item text and the sample-references column on this page (Cerullo and Cerullo (1998), Cerullo et al. (1994), Heng (1996), Hurwicz (2000), Wong et al. (1994), Rothstein (1988), Myatt (1999), Frost (1994), Tilley (1995), Elstien (1999), Blake (1992), Leary (1998), Hawkins et al. (2000), Edwards and Cooper (1994), Haubner (1994), Marcella and Rauff (1994), Krousliss (1993), Miller (1997), Eckerson (1992), Lee and Ross (1995), Fong (1991), McNurlin (1988), Peterson and Perry (1999), Paton and Flin (1999) and Turner (1994)) could not be reliably realigned with individual items in extraction.]
Table AI (continued)

Q60. Disaster recovery plan is audited

Construct: ISF personnel participation
Q61. ISF personnel participate in recovery duties and responsibilities
Q62. ISF personnel participate in reviewing disaster recovery plan

[The item texts for Q43, Q44, Q58 and Q59, and the sample-references column for this page (which includes Turner (1994)), were lost in extraction.]
Corresponding author
Wing S. Chow can be contacted at: vwschow@hkbu.edu.hk