Chapter 18
4. WAL Protocol: Whenever a change is made to a database object, the change is
first recorded in the log and the log is written to stable storage before the change
is written to disk.
5. If a steal policy is in effect, the changes made to an object in the buffer pool by a transaction can be written to disk before the transaction commits. This might happen because some other transaction "steals" the buffer frame currently holding the uncommitted transaction's changes.
A no-force policy is in effect if, when a transaction commits, we need not ensure that all the changes it has made to objects in the buffer pool are immediately forced to disk. (A small sketch illustrating both policies and the WAL rule appears below.)
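The following is a minimal Python sketch, not taken from the book, of how a buffer manager with a steal/no-force policy interacts with the WAL rule; the names Log, BufferManager, flush_to, and write_page are invented for illustration only:

# Sketch: steal/no-force buffer management under WAL.
class Log:
    def __init__(self):
        self.records = []           # in-memory log tail
        self.flushed_upto = 0       # largest LSN known to be on stable storage

    def append(self, record):
        lsn = (len(self.records) + 1) * 10
        self.records.append((lsn, record))
        return lsn

    def flush_to(self, lsn):
        # Force the log to stable storage up to and including lsn.
        self.flushed_upto = max(self.flushed_upto, lsn)

class BufferManager:
    def __init__(self, log):
        self.log = log
        self.page_lsn = {}          # pageID -> LSN of the last update applied in memory

    def update(self, page, lsn):
        self.page_lsn[page] = lsn   # the change is made in the buffer pool only

    def write_page(self, page):
        # STEAL: a dirty page may be written to disk before its transaction commits,
        # but WAL requires flushing the log up to the page's pageLSN first.
        self.log.flush_to(self.page_lsn[page])
        print(f"page {page} written to disk; log flushed to LSN {self.log.flushed_upto}")

log = Log()
bp = BufferManager(log)
lsn = log.append("update: T1 writes P5")
bp.update("P5", lsn)
bp.write_page("P5")                  # steal: T1 has not committed yet
commit_lsn = log.append("T1 commit")
log.flush_to(commit_lsn)             # NO-FORCE: only the log is forced at commit, not the pages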
Exercise 18.2
Briefly answer the following questions:
1. What are the properties required of LSNs?
2. What are the fields in an update log record? Explain the use of each field.
3. What are redoable log records?
4. What are the differences between update log records and CLRs?
Answer 18.2
Answer omitted.
Exercise 18.3
Briefly answer the following questions:
1. What are the roles of the Analysis, Redo, and Undo phases in ARIES?
2. Consider the execution shown in Figure 18.1.
(a) What is done during Analysis? (Be precise about the points at which Analysis begins and ends and describe the contents of any tables constructed in this phase.)
(b) What is done during Redo? (Be precise about the points at which Redo begins and ends.)
(c) What is done during Undo? (Be precise about the points at which Undo begins and ends.)
Answer 18.3
The answer to each question is given below.
1. The Analysis phase starts with the most recent begin_checkpoint record and proceeds forward in the log until the last log record. It determines
(a) The point in the log at which to start the Redo pass.
LSN 00  begin_checkpoint
LSN 10  end_checkpoint
LSN 20  update: T1 writes P5
LSN 30  update: T2 writes P3
LSN 40  T2 commit
LSN 50  T2 end
LSN 60  update: T3 writes P3
LSN 70  T1 abort
        CRASH, RESTART
Figure 18.1   Execution with a Crash
(b) The dirty pages in the buffer pool at the time of the crash.
(c) Transactions that were active at the time of the crash which need to be
undone.
The Redo phase follows Analysis and redoes all changes to any page that might
have been dirty at the time of the crash. The Undo phase follows Redo and undoes
the changes of all transactions that were active at the time of the crash.
2. (a) For this example, we will assume that the Dirty Page Table and Transaction Table were empty before the start of the log. Analysis determines that the last begin_checkpoint was at LSN 00 and starts from the corresponding end_checkpoint record (LSN 10).
We will denote Transaction Table (TT) entries as (transID, lastLSN) pairs and Dirty Page Table (DPT) entries as (pageID, recLSN) pairs. The Analysis phase then scans forward to LSN 70 and does the following:
LSN 20: Adds (T1, 20) to TT and (P5, 20) to DPT.
LSN 30: Adds (T2, 30) to TT and (P3, 30) to DPT.
LSN 40: Changes the status of T2 from "U" to "C".
LSN 50: Deletes the entry for T2 from the Transaction Table.
LSN 60: Adds (T3, 60) to TT. Does not change the P3 entry in the DPT.
LSN 70: Changes (T1, 20) to (T1, 70).
The final Transaction Table has two entries, (T1, 70) and (T3, 60), and the final Dirty Page Table has two entries, (P5, 20) and (P3, 30). (A short sketch of this pass appears below.)
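Below is a small Python sketch, with invented names and a simplified log format, of the Analysis pass applied to the log of Figure 18.1; it reproduces the Transaction Table and Dirty Page Table derived above.

# Sketch: ARIES Analysis pass over the Figure 18.1 log.
log = [
    (20, "update", "T1", "P5"),
    (30, "update", "T2", "P3"),
    (40, "commit", "T2", None),
    (50, "end",    "T2", None),
    (60, "update", "T3", "P3"),
    (70, "abort",  "T1", None),
]

transaction_table = {}   # transID -> [lastLSN, status]; "U" = to be undone, "C" = committed
dirty_page_table = {}    # pageID  -> recLSN (first LSN that may have dirtied the page)

for lsn, kind, txn, page in log:
    if kind == "end":
        transaction_table.pop(txn, None)          # transaction is completely finished
        continue
    entry = transaction_table.setdefault(txn, [lsn, "U"])
    entry[0] = lsn                                # lastLSN always advances
    if kind == "commit":
        entry[1] = "C"
    elif kind == "update" and page not in dirty_page_table:
        dirty_page_table[page] = lsn              # recLSN: earliest possibly-unwritten change

print(transaction_table)   # {'T1': [70, 'U'], 'T3': [60, 'U']}
print(dirty_page_table)    # {'P5': 20, 'P3': 30}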
(b) Redo Phase: Redo starts at LSN 20 (smallest recLSN in DPT).
[Figure 18.2   Aborting a Transaction: a log with LSNs 00 through 70 containing updates by T1 to P1 and P2, an update by T3 to P3, updates by T2 to P3 and (twice) to P5, a T2 abort, and a T3 commit.]
LSN 20: The change to P5 is redone (after fetching P5 and checking that its pageLSN is less than 20).
LSN 30: P3 is retrieved and its pageLSN is checked. If the page had already been written to disk before the crash (i.e., if pageLSN >= 30), nothing is redone; otherwise the change is redone.
LSN 40, 50: No action.
LSN 60: The change to P3 is redone, subject to the same pageLSN check.
LSN 70: No action.
(A sketch of this conditional redo test appears below.)
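The following Python sketch, again with invented names and made-up on-disk pageLSN values, illustrates the conditional redo test applied to the Figure 18.1 log: an update is reapplied only if the page appears in the Dirty Page Table, its recLSN does not exceed the record's LSN, and the pageLSN found on disk is smaller than the record's LSN.

# Sketch: ARIES conditional Redo over the Figure 18.1 log.
dirty_page_table = {"P5": 20, "P3": 30}          # pageID -> recLSN (from Analysis)
disk_page_lsn = {"P5": 0, "P3": 0}               # pageLSN as found on disk after the crash (invented)

log = [
    (20, "update", "T1", "P5"),
    (30, "update", "T2", "P3"),
    (40, "commit", "T2", None),
    (50, "end",    "T2", None),
    (60, "update", "T3", "P3"),
    (70, "abort",  "T1", None),
]

redo_start = min(dirty_page_table.values())      # LSN 20 in this example

for lsn, kind, txn, page in log:
    if lsn < redo_start or kind != "update":
        continue
    if page not in dirty_page_table:             # page was not dirty at crash time
        continue
    if dirty_page_table[page] > lsn:             # change was flushed before the page was re-dirtied
        continue
    if disk_page_lsn[page] >= lsn:               # pageLSN shows the update already reached disk
        continue
    disk_page_lsn[page] = lsn                    # reapply the change and advance pageLSN
    print(f"redo LSN {lsn}: {txn} writes {page}")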
(c) Undo Phase: Undo starts at LSN 70 (the highest lastLSN in TT). The Loser Set initially consists of LSNs 70 and 60.
LSN 70: This is T1's abort record, so nothing is undone; its prevLSN, 20, is added to the Loser Set. Loser Set = (60, 20).
LSN 60: Undoes the change on P3 and appends a CLR recording this undo. Loser Set = (20).
LSN 20: Undoes the change on P5 and appends a CLR recording this undo.
(A sketch of this pass appears below.)
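Here is a small Python sketch, with invented names and simplified bookkeeping, of the Undo pass for Figure 18.1: it repeatedly takes the largest LSN in the set to undo, writes a CLR whose undoNextLSN is the prevLSN of the record being undone, and follows prevLSN chains until each loser transaction can be ended.

# Sketch: ARIES Undo pass over the Figure 18.1 log.
# lsn -> (kind, transID, prevLSN, page); prevLSN links a transaction's records.
log = {
    20: ("update", "T1", None, "P5"),
    30: ("update", "T2", None, "P3"),
    40: ("commit", "T2", 30,  None),
    50: ("end",    "T2", 40,  None),
    60: ("update", "T3", None, "P3"),
    70: ("abort",  "T1", 20,  None),
}

to_undo = {70, 60}          # lastLSNs of the loser transactions T1 and T3
next_lsn = 80               # where newly appended records would go
new_records = []

while to_undo:
    lsn = max(to_undo)
    to_undo.remove(lsn)
    kind, txn, prev, page = log[lsn]
    if kind == "update":
        # Undo the change and append a compensation log record (CLR).
        undo_next = prev if prev is not None else "null"
        new_records.append((next_lsn, f"CLR: undo {txn} LSN {lsn}, undoNextLSN={undo_next}"))
        next_lsn += 10
    if prev is not None:
        to_undo.add(prev)   # continue up this transaction's chain
    else:
        new_records.append((next_lsn, f"{txn} end"))
        next_lsn += 10

for rec in new_records:
    print(rec)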
Exercise 18.4
Consider the execution shown in Figure 18.2.
1. Extend the figure to show prevLSN and undonextLSN values.
2. Describe the actions taken to roll back transaction T2.
3. Show the log after T2 is rolled back, including all prevLSN and undonextLSN values in log records.
Answer 18.4
Answer omitted.
Exercise 18.5
Consider the execution shown in Figure 18.3. In addition, the system crashes during recovery after writing two log records to stable storage and again after writing another two log records.
LSN 00  begin_checkpoint
LSN 10  end_checkpoint
LSN 20  update: T1 writes P1
LSN 30  update: T2 writes P2
LSN 40  update: T3 writes P3
LSN 50  T2 commit
LSN 60  update: T3 writes P2
LSN 70  T2 end
LSN 80  update: T1 writes P5
LSN 90  T3 abort
        CRASH, RESTART
Figure 18.3   Execution with Multiple Crashes
1. What is the value of the LSN stored in the master log record?
2. What is done during Analysis?
3. What is done during Redo?
4. What is done during Undo?
5. Show the log when recovery is complete, including all non-null prevLSN and undonextLSN values in log records.
Answer 18.5
The answer to each question is given below.
1. LSN 00 is stored in the master log record, as it is the LSN of the begin_checkpoint record.
2. During analysis the following happens:
LSN 20: Add (T1, 20) to TT and (P1, 20) to DPT.
LSN 30: Add (T2, 30) to TT and (P2, 30) to DPT.
LSN 40: Add (T3, 40) to TT and (P3, 40) to DPT.
LSN 50: Change the status of T2 to C.
LSN 60: Change (T3, 40) to (T3, 60).
LSN 70: Remove T2 from TT.
LSN 80: Change (T1, 20) to (T1, 80) and add (P5, 80) to DPT.
LSN 90: No action.
At the end of Analysis, the Transaction Table contains the entries (T1, 80) and (T3, 60), and the Dirty Page Table contains the entries (P1, 20), (P2, 30), (P3, 40), and (P5, 80).
3. Redo starts from LSN 20 (the minimum recLSN in the DPT).
LSN 20: Fetch P1 and check its pageLSN. If pageLSN >= 20 the change already reached disk and nothing is done; otherwise the change to P1 is redone.
LSN 30: Redo the change to P2.
LSN 40: Redo the change to P3.
LSN 50: No action.
LSN 60: Redo the change to P2.
LSN 70: No action.
LSN 80: Redo the change to P5.
LSN 90: No action.
(Each redo is subject to the same pageLSN check: the change is reapplied only if it did not reach disk before the crash.)
4. ToUndo consists of (80, 60).
LSN 80: Undo the change on P5. Append a CLR: Undo T1 LSN 80, set undonextLSN = 20. Add 20 to ToUndo. ToUndo consists of (60, 20).
LSN 60: Undo the change on P2. Append a CLR: Undo T3 LSN 60, set undonextLSN = 40. Add 40 to ToUndo. ToUndo consists of (40, 20).
LSN 40: Undo the change on P3. Append a CLR: Undo T3 LSN 40, followed by a T3 end record. ToUndo consists of (20).
LSN 20: Undo the change on P1. Append a CLR: Undo T1 LSN 20, followed by a T1 end record.
(A sketch showing how this rollback makes progress across the crashes during recovery appears below.)
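The following Python sketch, a simplification with invented names, illustrates why the repeated crashes described in the exercise never undo the same work twice: each CLR carries an undoNextLSN, CLRs themselves are redone but never undone, and so a restarted Undo pass resumes exactly where the previous attempt stopped.

# Sketch: Undo resuming across crashes during recovery (Figure 18.3 losers).
# prevLSN chains for the loser transactions.
prev_lsn = {80: 20, 20: None,        # T1: 20 <- 80
            60: 40, 40: None}        # T3: 40 <- 60
owner = {80: "T1", 20: "T1", 60: "T3", 40: "T3"}

def undo_pass(to_undo, stop_after=None):
    # Run Undo, optionally crashing after writing stop_after CLRs.
    # Returns the CLRs written and the set of LSNs still to be undone.
    clrs = []
    while to_undo:
        lsn = max(to_undo)
        to_undo.remove(lsn)
        nxt = prev_lsn[lsn]
        clrs.append(f"CLR: undo {owner[lsn]} LSN {lsn}, undoNextLSN={nxt}")
        if nxt is not None:
            to_undo.add(nxt)
        if stop_after is not None and len(clrs) == stop_after:
            break                     # simulated crash during recovery
    return clrs, to_undo

# First restart: crash after two CLRs have been written (for LSNs 80 and 60).
clrs1, remaining = undo_pass({80, 60}, stop_after=2)
# Second restart: the CLRs survive and are redone, so Undo effectively resumes
# from their undoNextLSNs, i.e. from LSNs 40 and 20.
clrs2, _ = undo_pass(remaining)
print(clrs1)
print(clrs2)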
5. The log looks like the following after recovery:
LSN 00:  begin_checkpoint
LSN 10:  end_checkpoint
LSN 20:  update: T1 writes P1
LSN 30:  update: T2 writes P2
LSN 40:  update: T3 writes P3
LSN 50:  T2 commit (prevLSN = 30)
LSN 60:  update: T3 writes P2 (prevLSN = 40)
LSN 70:  T2 end (prevLSN = 50)
LSN 80:  update: T1 writes P5 (prevLSN = 20)
LSN 90:  T3 abort (prevLSN = 60)
LSN 100: CLR: Undo T1 LSN 80 (undonextLSN = 20)
LSN 110: CLR: Undo T3 LSN 60 (undonextLSN = 40)
LSN 120, 125: CLR: Undo T3 LSN 40; T3 end
LSN 130, 135: CLR: Undo T1 LSN 20; T1 end
(A small consistency check of these undonextLSN values appears below.)
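As a small illustrative check, not part of the original solution, the following Python snippet verifies the invariant visible in the recovered log: each CLR's undonextLSN equals the prevLSN of the update record it compensates, i.e. it points at the next record of that transaction still to be undone.

# Sketch: checking the undonextLSN values in the recovered log.
# prevLSN values of the update records, taken from the log above.
prev_lsn = {20: None, 30: None, 40: None, 60: 40, 80: 20}

# CLRs written during recovery: (CLR LSN, LSN it undoes, undonextLSN it carries).
clrs = [
    (100, 80, 20),
    (110, 60, 40),
    (120, 40, None),   # followed by T3 end at LSN 125
    (130, 20, None),   # followed by T1 end at LSN 135
]

for clr_lsn, undone, undo_next in clrs:
    assert undo_next == prev_lsn[undone], f"CLR {clr_lsn} disagrees with the log"
print("all CLR undonextLSN values match the prevLSN chain")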
Exercise 18.6
Briefly answer the following questions:
1. How is checkpointing done in ARIES?
2. Checkpointing can also be done as follows: Quiesce the system so that only checkpointing activity can be in progress, write out copies of all dirty pages, and include the dirty page table and transaction table in the checkpoint record. What are the pros and cons of this approach versus the checkpointing approach of ARIES?
3. What happens if a second begin_checkpoint record is encountered during the Analysis phase?
4. Can a second end_checkpoint record be encountered during the Analysis phase?
5. Why is the use of CLRs important for the use of undo actions that are not the
physical inverse of the original update?
6. Give an example that illustrates how the paradigm of repeating history and the
use of CLRs allow ARIES to support locks of finer granularity than a page.
Answer 18.6
Answer omitted.
Exercise 18.7
Briefly answer the following questions:
1. If the system fails repeatedly during recovery, what is the maximum number of
log records that can be written (as a function of the number of update and other
log records written before the crash) before restart completes successfully?