
MAIN MEMORY DATABASE RECOVERY

Margaret H. Eich

Department of Computer Science and Engineering


Southern Methodist University
Dallas, Texas 75275

The idea of having entire databases reside in main memory, or main memory databases (MMDB), has recently been an active research topic. It is recognized that in this framework the issues concerned with efficient database recovery are more complex than in traditional DBMS systems. While several authors have looked at different methods for MMDB recovery, an examination of MMDB recovery functions and how they differ from traditional DBMS recovery has not been performed. This paper examines MMDB recovery, identifies differences from traditional DBMS recovery, composes a "wish list" of MMDB recovery requirements, describes why previously proposed techniques do not satisfy these requirements, and proposes a new MMDB recovery technique which does.

Introduction

The declining cost of main memory and the need for high performance database systems have recently inspired research into systems with massive amounts of memory and the ability to store complete databases in main memory [1,3,8]. The use of memory resident databases, or main memory databases (MMDB), can achieve significant performance improvements over conventional database systems by eliminating the need for I/O to perform database applications.

Database recovery techniques are used to ensure that any erroneous database state due to transaction, system, or media failure can be repaired and restored into a usable state from which normal processing can resume [2,9,11,18]. Due to the volatility of main memory, MMDBs complicate database recovery issues. This problem has been recognized and several new techniques for MMDB recovery have been proposed [3,5,6,15]. The objectives of this paper are to examine recovery requirements in a MMDB environment, provide a survey of proposed techniques, and then propose a new MMDB recovery technique which better meets these requirements than previous methods.

The approach used in this paper is different from that in previous papers on MMDB recovery. Prior to defining a technique, we investigate what is meant by MMDB recovery. With MMDBs, the primary database copy is memory resident. This change in perspective from secondary storage to main memory not only causes problems with the recovery operations, but also changes some of the accepted methods for recovering from various failures. In the next section we identify the MMDB model to be used throughout the paper, defining the various database components used. Section 3 then uses this model to define MMDB recovery and compare its requirements for recovery to those in traditional database systems. From this evaluation, we construct a "wish list" of MMDB recovery requirements which we feel a MMDB recovery technique should satisfy. Section 4 reviews existing MMDB recovery proposals and shows that none meet all requirements on the wish list. Finally, in section 5 we propose a new recovery technique which does satisfy the wish list items.

MMDB Model

The DBMS model under evaluation is one where entire databases being accessed reside in main memory. Although only recently receiving much research interest, the concept is certainly not new. IMS has supported main memory databases, Main Storage Data Bases, for quite some time [12,13]. Unlike the IBM approach, however, we assume no practical limitation on the size or structure of a main memory database.

Figure 1 shows the MMDB model used throughout this paper. Sometime prior to access, a database must be loaded into main memory. To achieve this purpose, an Archive Database exists on secondary storage. The archive database is a complete image of some prior database state and is used solely for loading main memory. Since no access of the archive is made during database processing, the organization of this file should be for efficient loading of memory. Due to the volatility of main memory, all updates to main memory must be recorded on a Log located in some stable memory. Figure 1 shows the log on a secondary storage device, but it may actually exist in a nonvolatile main memory supported by a backup power supply, or in a combination of the two. During recovery processing, the log can be used to achieve UNDO and REDO processing and thus may contain before images (BFIM) and/or after images (AFIM) of modified data [11]. Together, the archive and log provide the ability to recover from a main memory failure. In actual use the archive database may be updated directly from data in main memory or from data in the log.

Fig. 1. - MMDB Model
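To make the archive/log cooperation concrete, the following minimal Python sketch (ours, not part of the model as published; all names and record shapes are illustrative) rebuilds main memory after a failure by loading the archive image and redoing the after images of committed transactions from the log.

```python
# Illustrative sketch of recovery from a main memory failure:
# reload the archive image, then replay committed AFIMs from the log.

from dataclasses import dataclass

@dataclass
class AfterImage:        # AFIM: the new value of one modified page
    tid: int
    page: int
    value: bytes

@dataclass
class Commit:            # commit record for one transaction
    tid: int

def rebuild_memory(archive, log):
    """Start from the prior state in the archive, then redo the
    after images of every transaction that committed on the log."""
    committed = {rec.tid for rec in log if isinstance(rec, Commit)}
    memory = dict(archive)                    # load the archive image
    for rec in log:
        if isinstance(rec, AfterImage) and rec.tid in committed:
            memory[rec.page] = rec.value      # redo committed updates
    return memory
```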
This view of a MMDB may seem almost identical to that of a conventional database. However, there exist several major differences:

1. Main memory is assumed to be large enough to hold all databases currently being accessed.
2. Any recovery schemes must deal with restoring the MMDB, not data on secondary storage.
3. No database access can be performed against the archive database. Its use is strictly as a backup to the main memory database.

There are several advantages to the use of MMDBs. Obviously, processing time and throughput rates should improve due to the elimination of I/O overhead. It has been suggested that the improved performance can eliminate the need for concurrency control by allowing the serial execution of MMDB transactions. While we don't agree with this observation, it is certainly conceivable that concurrency control mechanisms specifically designed for the MMDB environment can reduce the overhead and complexity usually associated with concurrency control techniques [3].

Some problems not existing in conventional DBMS systems are introduced in the MMDB environment. The major problem deals with the volatility of main memory. To reduce the impact of this problem, it has been proposed that a small stable main memory be used to support recovery processing [3,6]. If a stable memory is assumed, its size must be large enough to contain all updates of active transactions. Another major problem is the excessive overhead needed to initially load a database into memory for processing. This procedure requires merging data on the archive database and the log to obtain the data needed to recover to the most recent consistent state. To reduce this time, archives should be created very frequently and could be distributed across several secondary storage devices. Additional concerns center around the increase in the number of main memory components and the resulting reliability problems and increase in access time. A unique architecture is currently under investigation at Princeton to address these issues [7]. We are currently only addressing what appears to be the major problem: MMDB recovery.

MMDB Recovery Defined

The issues concerning traditional database recovery are well known and understood [2,9,11,18]. The purpose of this section is to examine aspects concerning database recovery under the MMDB assumption. We investigate the various types of failures affecting database processing, describe recovery operations necessary for MMDBs, and conclude by listing the desired features a MMDB recovery technique should possess.

When discussing MMDB recovery, it is important to realize that the objective is that of recovering data in main memory. With conventional databases, the current database state exists partly in main memory and partly in secondary storage. With MMDBs, the current state exists completely in main memory. Secondary storage is used solely for recovery purposes. Since MMDB processing incurs no I/O, any I/O required to ensure recoverability can have a significant impact on system performance and become the major bottleneck during processing.

As with traditional database processing, a transaction is assumed to be the unit of recovery and consistency, and the failures which must be anticipated are transaction, system, and media failures. The differences between conventional DBMS recovery and MMDB recovery are introduced when the specific operations required to accomplish recovery are examined. TABLE 1 shows the operations needed to recover from the three failure types in both the traditional and MMDB environments.

TABLE 1
DATABASE RECOVERY OPERATIONS

Failure Type          Traditional DBMS            MMDB
Transaction Failure   Transaction UNDO            Transaction UNDO
System Failure        Global UNDO, Partial REDO   Global REDO
Media Failure         Global REDO                 Global REDO, Partial REDO
Transaction failure occurs when a transaction does not successfully commit. This type of failure occurs more often than the other two, and thus efficient recovery from it is essential. A rule of thumb is that recovery should occur in a time frame similar to that of successful completion of the transaction [10,11]. The normal procedure for recovery after a transaction failure is a Transaction UNDO. This implies that all effects the aborted transaction has had on the primary database copy must be removed. The major concern with transaction UNDO in a MMDB environment is that it be done with as little I/O as possible. If the rule of thumb is to be achieved, no I/O should be required to accomplish transaction UNDO. Additional processing during transaction UNDO involves the removal of any dirty data from the log or archive database. Ideally, these operations would also be performed with no I/O.

Recovery from a system failure is quite different with MMDBs than with traditional DBMSs. Traditionally, the effects of any interrupted transactions must be undone, Global UNDO, and any completed transactions which have not had updates reflected in the database need to be redone, Partial REDO. When a system failure occurs, the entire MMDB contents are lost. MMDB recovery must therefore perform a Global REDO by completely reloading all databases in main memory. The archive database is used to reload the databases to some prior state and then any committed transactions reflected in the log are redone. The Global REDO operation is required in conventional systems only after a media failure causes the loss of the primary database copy on secondary storage.

System failures occur less often than transaction failures and more often than media failures [11]. A goal for system failure recovery is that it be accomplished in a time comparable to that required for successful completion of all active transactions. With MMDBs, a Global REDO requires I/O from the archive database, and thus it seems impossible to achieve this goal. One way to reload the MMDBs as quickly as possible is to ensure that as many of the recent database updates as possible are reflected in the archive database. This implies that log data must be frequently flushed to the archive database.

In traditional database systems a checkpoint is often used to reduce the work needed to recover from system failures [2,9,11,18]. We view a MMDB Checkpoint as recording all data concerning a prior database state into the archive database and writing a corresponding checkpoint record on the log. These checkpoints should be done with as little impact on transaction processing as possible. Frequent flushing to the archive database thus requires frequent checkpointing.

Global REDO loads MMDBs into main memory. However, not all transactions require all data; therefore, transactions can begin processing as soon as some of the data they need is available. This problem is similar to the fetching strategies associated with virtual memory management. At least four possibilities exist when loading databases into main memory:

1. Database Prefetching - Loading an entire database into main memory prior to scheduling any transactions accessing it.
2. Page Prefetching - Prefetch some subset of database pages and allow transactions to begin access of them as the remainder of the database is loaded.
3. Demand Loading - Only load a database when some transaction first accesses it.
4. Demand Paging - Load database pages after first access to them.

More research is needed to determine which of these provides the best performance for Global REDO. The method to be used depends on such factors as database storage structure, access methods used, storage location on disk, and whether any transactions have immediate need of the data.
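As a small illustration of the last option, the following Python sketch (ours; all names are illustrative) shows demand paging during Global REDO: a transaction touching a page that has not yet been reloaded faults it in from the archive, so processing can resume before the entire database is memory resident.

```python
# Illustrative sketch: demand paging while a MMDB is being reloaded.
# Database Prefetching would instead copy every archive page up front.

class DemandPagedMMDB:
    def __init__(self, archive):
        self.archive = archive   # page number -> image on secondary storage
        self.memory = {}         # pages reloaded into main memory so far

    def read(self, page):
        if page not in self.memory:              # page fault: one archive I/O
            self.memory[page] = self.archive[page]
        return self.memory[page]

# Usage: transactions issue reads immediately after a failure.
db = DemandPagedMMDB(archive={1: b"a", 2: b"b"})
assert db.read(2) == b"b"      # only page 2 has been reloaded so far
```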

Although media failure may only occur once or twice a year, the impact on recovery of traditional databases can be severe [11]. A memory failure with MMDBs can be treated as a system failure and a Global REDO performed. However, if the specific location of the failure can be identified, a Partial REDO of only the affected area would be warranted. This indicates that the archive database should be physically structured to correspond with memory addresses. Perhaps partitioning of memory and archive databases and the ability to recover by these partitions is needed. Future research will examine this idea of partitioning for partial redos. Media failures can affect the archive database or log. Restoring these files creates similar problems for conventional and MMDB systems. Differences do exist in that the archive database may be needed more frequently than in conventional DBMSs, and thus its quick recovery is more important with MMDBs. With the existence of prior archive databases and corresponding log data, recreation of archives simply requires redoing checkpoint processing.

Authors have ignored the problems associated with failure of stable memories. The use of stable memories does imply that the overhead for recovery from system failures is greatly reduced; however, there is really no such thing as a "forever stable" memory. Thus any reliable recovery technique must prepare for the failure of stable memory. This implies that any MMDB recovery technique needs to provide the facilities for Global and/or Partial REDO of all of memory - stable and non-stable.

Another issue concerning MMDB recovery is when log I/O operations occur. It is important that any I/O needed be performed asynchronously to normal database processing. This implies that log I/O not occur only at commit time, but that it be performed throughout transaction processing. Transaction processing should not be dependent on or held up by I/O to the log.

We close this section by summarizing the major requirements for MMDB recovery in the following wish list:

1. No I/O required to accomplish transaction UNDO.
2. Frequent checkpoints performed with minimum impact on transaction processing.
3. Asynchronous processing of log I/O and transaction processing.

Certainly an additional requirement is that a minimum amount of redundant data be used. For example, the use of before images on the log should be avoided if possible.
Previous Recovery Techniques

Prior to introducing the new MMDB recovery method, we examine previously proposed techniques and compare their processing to the items on the wish list.

As stated earlier, IBM has implemented MMDBs in the IMS/VS Fast Path Feature [2,13]. At initialization of IMS, the MMDBs are loaded into main memory. Updates are performed in special database buffers and MMDB pages are not modified until commit time. Commit processing ensures that all after images are written to the log prior to updating the MMDB. System-wide transaction-consistent checkpoints (TCC) [11] are accomplished by an asynchronous IMS task running in parallel with MMDB processing. The major disadvantages of this scheme are that log I/O is only performed at commit time and entire MMDBs must be read to perform checkpoints.

The Massive Memory Machine (MMM) project at Princeton University has described an architecture specifically designed to support massive amounts of primary storage [7,8]. Associated with this project is the design of a MMDB recovery scheme based upon a hardware logging device, HALO [6]. HALO intercepts all MMDB operations and creates BFIM and AFIM log data initially written to a nonvolatile main memory, and as time permits written out to the log on disk. The use of the stable main memory implies that commit processing need not wait until the log buffers have been flushed. The BFIM log data is needed to accomplish transaction UNDO. Continuous action-consistent checkpointing (ACC) [11] of log data to the archive database is performed. The asynchronous updating of the log and parallel, continuous updating of the archive database are certainly advantages of this scheme. However, the requirements for specialized hardware and stable main memory, the interception of all database calls, and the requirement for BFIM log data to accomplish transaction UNDOs are disadvantages.

Researchers at the University of California at Berkeley have investigated some of the implementation concerns for a MMDB [3]. Their recovery scheme assumes the use of a log with BFIM and AFIM plus frequent checkpointing. The notion of a pre-committed transaction is used to achieve asynchronous logging and database processing. When a transaction commits, the commit record is placed in the log buffer and other conflicting transactions are allowed to progress even though the log buffers have not been flushed to disk. The transaction completes commit processing only when this is done. ACC checkpointing occurs continuously and in parallel with transaction processing by reading the entire MMDB and identifying modified pages. Although not specifically discussed, it appears that an entire database must be loaded into main memory prior to access.

The concept of a Database Cache has been proposed for database systems with large amounts of main memory [5]. It is assumed that there is sufficient memory space to store all dirty pages plus some other pages which have been fixed for reading. The database cache and disk database together represent the current database just as with traditional DBMS systems. Even though this approach is not strictly a MMDB, an unusual recovery scheme appropriate to MMDBs is proposed. This approach recognizes that "shadow" main memory pages can be used to eliminate the need for transaction UNDO. To avoid the overhead of loading entire databases, a demand paging technique is used to bring pages into main memory. No log is used; rather, a safe located in nonvolatile memory containing data needed to reconstruct part of the cache after failure is maintained. As a minimum, the safe contains all pages not currently residing on the disk database. When memory pages are to be modified, a main memory shadow page is used for updating if the disk database does not contain a copy of the page to be modified. In the event of transaction failure, these shadow pages are simply deleted in the cache. Subsequent transactions will either access the other page in the cache or incur a page fault to bring in a new copy of the page. To commit a transaction all modified pages are written to the safe. Novel procedures are used to limit the number of records on the safe and to determine the exact state of each page in main memory. Only when a page is targeted for replacement is it written back to the disk database.

A design for a MMDB, including data structure representation and recovery technique, has been proposed at IBM [1]. It is assumed that a MMDB relation is loaded into main memory at the first reference and, if modified, written to disk at commit time. All recovery overhead is restricted to the commit operation, achieving a type of transaction-oriented checkpoint (TOC) [11]. No separate log is suggested, rather the use of shadow pages on the archive database. At commit time, all modified relations are written to shadow areas on the archive database. Once this has been accomplished, the new directory structure is updated and old database areas released.

A design for a MMDB system, MM-DBMS, is currently under way at the University of Wisconsin-Madison [14,15]. This study includes the design of an architecture, query processing, data structures, and recovery technique for a MMDB. Recovery processing uses a stable log buffer as well as a special log processor to perform checkpointing. Further details concerning the recovery strategy used are not yet available.

TABLE 2 summarizes the different MMDB recovery techniques described. Although some unique recovery ideas are introduced, none of the techniques satisfies all items on our wish list. The Fast Path Feature, DB Cache, and Ammann techniques don't meet the asynchronous log I/O and transaction processing requirement because all log I/O overhead occurs at transaction commit. The MMM and Berkeley methods require BFIM and AFIM log data and must have log I/O operations to accomplish a transaction UNDO. The information shown in the last row of TABLE 2 describes the type of checkpointing performed by the various techniques. Part of this data indicates whether checkpointing is accomplished by reading the MMDB or log data. Checkpoints for the Fast Path and Berkeley techniques require examining all MMDB pages and therefore must impact transaction processing when checkpointing occurs.
TABLE 2
COMPARISON OF PREVIOUS MMDB RECOVERY TECHNIQUES

A Better Technique

In this section we describe preliminary results concerning a new MMDB recovery technique which better satisfies the requirements identified in section 3 than previously proposed methods. The highlights of this new method are:

1. Main memory shadow pages
2. Pre-committed transactions
3. Automatic checkpointing
4. Recovery processor

Each of these is discussed in the following paragraphs. A followup paper will define this technique in more detail as well as provide simulation results examining its performance. We assume that no stable main memory exists, but note any changes which would be needed in the event nonvolatile main memory were available. The impact of concurrency control is not included in this discussion. It may be assumed that transactions are either run serially or that two-phase locking at the page level is used.

Main memory shadow pages (similar to those proposed for the DB Cache [5]) are used to achieve the goal of no I/O for transaction UNDO. Duplicate copies are made of any pages updated by a transaction and all modifications occur on these pages. As pages are modified, AFIM records are written into the log buffer for output to disk. At commit time, a commit record is also written in the buffer for output. Subsequent conflicting transactions can begin processing as soon as this occurs. They will use the data in the dirty pages and, if needed, create new copies for their updating. If a transaction commits (commit record written to disk), the previous clean pages are released and the dirty pages become the new clean ones. When a transaction abnormally terminates, the dirty pages are released.
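The shadow page rule can be sketched in a few lines of Python (ours; names are illustrative, and a single transaction at a time is assumed for brevity). Aborting merely discards the in-memory shadow copies, so transaction UNDO requires no I/O.

```python
# Illustrative sketch of main memory shadow pages.

class ShadowPagedMMDB:
    def __init__(self, pages):
        self.clean = pages       # last committed page images
        self.dirty = {}          # shadow copies made by the current transaction
        self.log_buffer = []     # AFIMs queued for asynchronous output

    def write(self, tid, page, value):
        self.dirty[page] = value                   # modify the shadow copy only
        self.log_buffer.append(("AFIM", tid, page, value))

    def commit(self, tid):
        self.log_buffer.append(("CT", tid))        # commit record buffered
        self.clean.update(self.dirty)              # dirty pages become clean
        self.dirty.clear()                         # old clean pages released

    def abort(self, tid):
        self.dirty.clear()                         # transaction UNDO: no I/O
```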

To accomplish asynchronous log I/O and transaction processing, the pre-commitment technique is used. As explained above, transactions need not wait until a conflicting transaction has completely committed prior to beginning execution. Also, log I/O is performed throughout a transaction's execution rather than just at commit time. Indeed, a transaction cannot commit until all its log buffers are written to disk, but without a stable main memory for the buffer this cannot be avoided. If stable main memory were available, the pre-commitment technique would not be necessary and completely independent log I/O and transaction processing would be possible.
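A minimal sketch of pre-commitment in Python (ours; names are illustrative): the commit record enters the log buffer immediately, releasing conflicting transactions, while durability arrives only with a later, asynchronous flush.

```python
# Illustrative sketch of pre-commitment: buffering the commit record
# releases waiters at once; the flush that makes it durable runs later.

class PrecommitLogger:
    def __init__(self):
        self.buffer = []   # log records not yet written to disk
        self.disk = []     # the stable log

    def precommit(self, tid):
        self.buffer.append(("CT", tid))
        # conflicting transactions may now use the pre-committed data

    def flush(self):
        # runs asynchronously, throughout transaction processing
        self.disk.extend(self.buffer)
        self.buffer.clear()

    def is_committed(self, tid):
        return ("CT", tid) in self.disk   # true only after the flush
```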
The log contains Begin-Transaction (BT), Commit-Transaction (CT), Abort-Transaction (AT), Checkpoint, and AFIM records. All log records except the checkpoint contain the ID of the corresponding transaction. The BT record contains a flag that indicates the state of the corresponding transaction. It is initially written with an indication that the transaction is active; when committed or aborted it is appropriately modified. This random access to the log implies that a random access device must be used. As explained below, this flag is used during checkpointing to avoid flushing dirty data to the archive database. It also eliminates the need for removing this dirty data during transaction UNDO.
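The record layout might look as follows (a Python sketch of our own; field names are illustrative). Because the BT record's status flag is rewritten in place once the outcome is known, a later reader can skip the AFIMs of aborted transactions without any cleanup pass.

```python
# Illustrative log record layout; the BT status flag is updated in place.

from dataclasses import dataclass

ACTIVE, COMMITTED, ABORTED = "active", "committed", "aborted"

@dataclass
class BT:                # Begin-Transaction; flag rewritten on the log
    tid: int
    status: str = ACTIVE

@dataclass
class CT:                # Commit-Transaction
    tid: int

@dataclass
class AT:                # Abort-Transaction
    tid: int

@dataclass
class AFIM:              # after image of one modified page
    tid: int
    page: int
    value: bytes

@dataclass
class Checkpoint:        # the only record without a transaction ID
    ckpt_id: int
```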
To accomplish automatic checkpointing, the logger keeps track of the state of the log. Assuming an initial state of 0, state transitions occur when BT, CT, or AT records are written to the log. The BT record increments the state value by 1, while the CT and AT decrement it by 1. A state value of 0 indicates that a TCC state exists on the disk. When the logger detects a state value of 0, a checkpoint record is written out to the disk before any other records already in the buffer. To find the most recent checkpoint record on the log, the logger also notes the address of this checkpoint record and records it along with its unique checkpoint ID in a predefined fixed disk address. Performing this automatic checkpointing only requires two additional I/O operations and is performed independently of transaction processing. To ensure that TCC checkpoints occur, the logger may need to periodically force checkpoints even though the checkpoint state has not occurred. To accomplish this, the logger must force all log records for actively executing transactions to be written to the log before allowing any new transactions to begin writing to the log. Unlike the techniques used in current database systems, where processing is quiesced to reach a TCC, this technique has no impact on executing transactions.
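The state counter is easy to express in code; the sketch below (ours; names are illustrative) counts active transactions and emits a checkpoint record whenever the count returns to zero, i.e., whenever the log prefix is transaction consistent.

```python
# Illustrative sketch of automatic checkpoint detection by the logger.

class CheckpointingLogger:
    def __init__(self):
        self.state = 0           # transactions begun but not yet ended
        self.next_ckpt_id = 0
        self.disk = []           # the log on a random access device

    def append(self, record):
        kind = record[0]
        if kind == "BT":
            self.state += 1      # a transaction became active
        elif kind in ("CT", "AT"):
            self.state -= 1      # a transaction ended
        self.disk.append(record)
        if self.state == 0:      # TCC state: checkpoint immediately
            address = len(self.disk)
            self.disk.append(("CKPT", self.next_ckpt_id))
            # one extra I/O for the record itself, one for noting its
            # ID and address at a predefined fixed disk location
            self.latest_checkpoint = (self.next_ckpt_id, address)
            self.next_ckpt_id += 1
```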
Actual checkpointing to the archive database is accomplished by a separate Recovery Processor (RP). Checkpointing is a two step process: creating a new copy of the archive database and then applying log AFIM records for committed transactions to bring the new archive database state up to that indicated by the latest checkpoint on the log. When an archive database is first created, the checkpoint ID and associated address of the latest log checkpoint record are written into a predefined location on the archive database. All AFIM records for successfully committed transactions (as identified by the BT log record) between the checkpoint address on the previous archive database and the address of the latest checkpoint are applied to the archive. If a checkpoint is attempted, but the checkpoint ID on the log is the same as that on the prior archive, then the RP must wait until a new checkpoint is taken. To avoid the overhead of copying the entire archive, many checkpoints could be applied to the same copy, and at periodic time intervals the creation of a new archive database could be requested.
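The RP's roll-forward step can be sketched as follows (a Python illustration of our own; names and record shapes are assumptions, matching the earlier sketches).

```python
# Illustrative sketch of the Recovery Processor's two-step checkpoint:
# copy the archive, then redo committed AFIMs between two checkpoints.

def rp_checkpoint(archive, log, committed, old_addr, new_addr):
    """Bring a fresh archive copy up to the state of the latest log
    checkpoint. `committed` holds the IDs whose BT flag says committed."""
    new_archive = dict(archive)              # step 1: copy the archive
    for record in log[old_addr:new_addr]:    # step 2: apply AFIM records
        if record[0] == "AFIM" and record[1] in committed:
            _, tid, page, value = record
            new_archive[page] = value        # redo the committed update
    return new_archive
```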
Figure 2 shows the model of the proposed MMDB recovery system. The use of main memory shadow pages ensures that only AFIMs are needed on the log and that no I/O is required for transaction UNDO. Checkpoint records are automatically written to the log, and continuous checkpointing to the archive database is performed by the RP. Both of these operations are performed in parallel with and asynchronously to MMDB transaction processing. Without stable main memory, pre-commitment of transactions and continuous writing to the log buffer provide asynchronous log I/O and transaction processing. With the use of stable memory, a nonvolatile log buffer removes the need for pre-commitment and achieves a truly asynchronous operation of log I/O and transaction execution.

Fig. 2. - Model of Proposed MMDB Recovery System

Summary and Future Research

After defining MMDB recovery, identifying its requirements, and surveying the literature, we have proposed a new MMDB recovery technique which better meets these requirements than previous methods. Our technique requires no transaction UNDO processing after a transaction failure, uses asynchronous log I/O and transaction processing to reduce recovery overhead during transaction processing, and provides continuous system checkpoints in parallel with, yet with no impact on, transaction processing. Recent MMDB recovery performance studies have shown that the major factor concerning efficient recovery is the use of stable memory [4,17]. Next to stable memory, the use of additional logging and checkpointing processors can also impact performance [4]. This is the only known technique proposing the use of a special checkpoint processor.

Many areas for future research remain. A major area of study will address the issue of efficient loading for MMDBs, including the idea of partitioning. This will examine methods for distributing log and archive database information across multiple secondary storage devices. Along this line we also intend to evaluate various storage techniques to be used for the archive databases.

Currently, a simulation study is being performed to more accurately compare the proposed technique to previous ones. A future paper will more precisely define our proposed technique and present the results of the simulation experiments.

References

[1] Arthur C. Ammann, Maria Butrico Hanrahan, and Ravi Krishnamurthy, "Design of a Memory Resident DBMS," Proceedings of the IEEE Spring Computer Conference, 1985, pp. 54-57.

[2] C. J. Date, An Introduction to Database Systems, Volume II, Addison-Wesley Publishing Company, July 1984, pp. 1-33.

[3] David J. DeWitt, Randy H. Katz, Frank Olken, Leonard D. Shapiro, Michael R. Stonebraker, and David Wood, "Implementation Techniques for Main Memory Database Systems," Proceedings of the ACM SIGMOD International Conference on Management of Data, 1984, pp. 1-8.

[4] Margaret H. Eich, "A Classification and Comparison of Main Memory Database Recovery Techniques," Southern Methodist University Department of Computer Science Technical Report 86-CSE-15, June 1986.

[5] Klaus Elhardt and Rudolf Bayer, "A Database Cache for High Performance and Fast Restart in Database Systems," ACM Transactions on Database Systems, Vol. 9, No. 4, December 1984, pp. 503-525.
[6] Hector Garcia-Molina, Richard J. Lipton, and Peter Honeyman, "A Massive Memory Database System," Princeton University Department of Electrical Engineering and Computer Science Technical Report, September 1983.

[7] Hector Garcia-Molina, Richard J. Lipton, and Jacobo Valdes, "A Massive Memory Machine," IEEE Transactions on Computers, Vol. C-33, No. 5, May 1984, pp. 391-399.

[8] Hector Garcia-Molina, Richard Cullingford, Peter Honeyman, and Richard Lipton, "The Case for Massive Memory," Princeton University Department of Electrical Engineering and Computer Science Technical Report 326, May 1984.

[9] J. N. Gray, "Notes on Data Base Operating Systems," Lecture Notes in Computer Science No. 60, Springer-Verlag, 1978, pp. 394-481.

[10] Jim Gray, Paul McJones, Mike Blasgen, Bruce Lindsay, Raymond Lorie, Tom Price, Franco Putzolu, and Irving Traiger, "The Recovery Manager of the System R Database Manager," Computing Surveys, Vol. 13, No. 2, June 1981, pp. 223-242.

[11] Theo Haerder and Andreas Reuter, "Principles of Transaction-Oriented Database Recovery," Computing Surveys, Vol. 15, No. 4, December 1983, pp. 287-317.

[12] IBM, IMS/VS Version 1 Fast Path Feature General Information Manual, GH20-9069-2, April 1978.

[13] IBM World Trade Systems Centers, IMS Version 1 Release 1.5 Fast Path Feature Description and Design Guide, G320-5775, 1979.

[14] Tobin J. Lehman and Michael J. Carey, "A Study of Index Structures for Main Memory Database Management Systems," University of Wisconsin-Madison Computer Sciences Department Technical Report #605, July 1985.

[15] Tobin J. Lehman and Michael J. Carey, "Query Processing in Main Memory Database Management Systems," Proceedings of the ACM SIGMOD International Conference on Management of Data, May 1986.

[16] Raymond A. Lorie, "Physical Integrity in a Large Segmented Database," ACM Transactions on Database Systems, Vol. 2, No. 1, March 1977, pp. 91-104.

[17] Kenneth Salem and Hector Garcia-Molina, "Crash Recovery Mechanisms for Main Storage Database Systems," Princeton University Department of Computer Science Technical Report CS-TR-034-86, April 1986.

[18] Joost S. M. Verhofstad, "Recovery Techniques For Database Systems," Computing Surveys, Vol. 10, No. 2, June 1978, pp. 167-195.
