Chapter 3 ➢ Introduction to Transaction Processing ➢ Transaction and System Concepts ➢ Desirable Properties of Transactions ➢ Characterizing Schedules based on Recoverability ➢ Characterizing Schedules based on Serializability ➢ Transaction Support in SQL
Introduction to Transaction Processing • Transaction: An executing program (process) that includes one or more database access operations – Read operations (database retrieval, such as SQL SELECT) – Write operations (modify the database, such as SQL INSERT, UPDATE, DELETE) – A transaction is a logical unit of database processing – Example: Bank transfer of $100 from a checking account to a savings account in a BANK database • Note: Each execution of a program is a distinct transaction, possibly with different parameters – Bank transfer program parameters: savings account number, checking account number, transfer amount
Introduction to Transaction Processing (cont.) • A transaction (set of operations) may be: – stand-alone, specified in a high-level language like SQL and submitted interactively, or – made up of database operations embedded within an application program (most transactions) • Transaction boundaries: Begin and End transaction. – Note: An application program may contain several transactions separated by Begin and End transaction boundaries
Introduction to Transaction Processing (cont.) • Transaction Processing Systems: Large multi-user database systems supporting thousands of concurrent transactions (user processes) per minute • Two modes of concurrency – Interleaved processing: concurrent execution of processes is interleaved on a single CPU – Parallel processing: processes are executed concurrently on multiple CPUs (Figure 21.1) – Basic transaction processing theory assumes interleaved concurrency
Introduction to Transaction Processing (cont.) For transaction processing purposes, a simple database model is used: • A database - collection of named data items • Granularity (size) of a data item - a field (data item value), a record, or a whole disk block – TP concepts are independent of granularity • Basic operations on an item X: – read_item(X): Reads a database item named X into a program variable. To simplify our notation, we assume that the program variable is also named X. – write_item(X): Writes the value of program variable X into the database item named X.
⚫ Basic unit of data transfer from the disk to the computer main memory is one disk block (or page). A data item X (what is read or written) will usually be the field of some record in the database, although it may be a larger unit such as a whole record or even a whole block. ⚫ read_item(X) command includes the following steps: • Find the address of the disk block that contains item X. • Copy that disk block into a buffer in main memory (if that disk block is not already in some main memory buffer). • Copy item X from the buffer to the program variable named X.
⚫ write_item(X) command includes the following steps: • Find the address of the disk block that contains item X. • Copy that disk block into a buffer in main memory (if it is not already in some main memory buffer). • Copy item X from the program variable named X into its correct location in the buffer. • Store the updated block from the buffer back to disk (either immediately or at some later point in time).
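The two commands can be pictured with a toy buffer manager. The following is a minimal sketch in Python under simplifying assumptions (a dict standing in for the disk, a single block, X as the checking balance and Y as the savings balance); it is not how a real DBMS implements these steps:

```python
# Minimal sketch (not the textbook's code): read_item / write_item modeled with
# Python dicts. "disk" maps block addresses to blocks; a block maps item names
# to values; buffer_pool holds blocks that are cached in main memory.
disk = {0: {"X": 100, "Y": 40}}          # hypothetical database on disk
directory = {"X": 0, "Y": 0}             # item name -> block address
buffer_pool = {}                         # block address -> in-memory copy

def read_item(name):
    addr = directory[name]               # 1. find the block containing the item
    if addr not in buffer_pool:          # 2. copy the block into a buffer if needed
        buffer_pool[addr] = dict(disk[addr])
    return buffer_pool[addr][name]       # 3. copy the item into a program variable

def write_item(name, value, flush=False):
    addr = directory[name]               # 1. find the block containing the item
    if addr not in buffer_pool:          # 2. copy the block into a buffer if needed
        buffer_pool[addr] = dict(disk[addr])
    buffer_pool[addr][name] = value      # 3. update the item in the buffer
    if flush:                            # 4. write the block back now or later
        disk[addr] = dict(buffer_pool[addr])

# Example: the $100 checking-to-savings transfer from the earlier slide.
X = read_item("X"); X = X - 100; write_item("X", X)
Y = read_item("Y"); Y = Y + 100; write_item("Y", Y, flush=True)
```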
Why Concurrency Control Is Needed • The following problems may occur with concurrent transactions (a short sketch of the lost update problem appears after these three examples): • The Lost Update Problem: Occurs when two transactions update the same data item but both read the same original value before either update is applied, so one of the updates is lost (Figure 21.3(a), next slide) • The Temporary Update (or Dirty Read) Problem: Occurs when one transaction T1 updates a database item X, which is then read by another transaction T2; T1 subsequently fails for some reason (Figure 21.3(b)), so X was read by T2 before its value was changed back (rolled back, or undone) after T1's failure
• The Incorrect Summary Problem: One transaction is calculating an aggregate summary function over a number of records (for example, the sum of all bank account balances) while other transactions are updating some of these records (for example, transferring a large amount between two accounts; see Figure 21.3(c)); the aggregate function may read some values before they are updated and others after they are updated.
• The Unrepeatable Read Problem: A transaction T1 may read an item (say, available seats on a flight); later, T1 reads the same item again and gets a different value because another transaction T2 has updated the item (reserved seats on the flight) between the two reads by T1.
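The lost update scenario above can be made concrete in a few lines of Python. This is a minimal sketch, not DBMS code: the shared item X, its starting value, and the two updates are hypothetical, and the interleaving is written out by hand to show why one update disappears:

```python
# Minimal sketch of the lost update problem: two "transactions" both read the
# same original value of X before either writes, so T1's update is lost.
X = 100                                   # shared database item (hypothetical)

# Interleaved order of operations: r1(X); r2(X); w1(X); w2(X);
x1 = X            # T1: read_item(X)
x2 = X            # T2: read_item(X)  (reads the same original value)
x1 = x1 - 20      # T1: subtract 20 (e.g., a withdrawal)
X = x1            # T1: write_item(X) -> X is now 80
x2 = x2 + 50      # T2: add 50 (e.g., a deposit), based on the stale value 100
X = x2            # T2: write_item(X) -> X is now 150; T1's update is lost

print(X)          # 150, although either serial execution would give 130
```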
Why Recovery Is Needed (what causes a transaction to fail): 1. A computer failure (system crash): A hardware or software error occurs during transaction execution. If the hardware crashes, the contents of the computer's internal main memory may be lost. 2. A transaction or system error: Some operation in the transaction may cause it to fail, such as integer overflow or division by zero. Transaction failure may also occur because of erroneous parameter values or because of a logical programming error. In addition, the user may interrupt the transaction during its execution.
3. Local errors or exception conditions detected by the transaction: Certain conditions necessitate cancellation of the transaction. For example, data for the transaction may not be found, or a condition such as insufficient account balance in a banking database may cause a transaction, such as a fund withdrawal, to be canceled; a programmed abort then causes the transaction to fail. 4. Concurrency control enforcement: The concurrency control method may decide to abort a transaction, to be restarted later, because it violates serializability or because several transactions are in a state of deadlock (see Chapter 22).
5. Disk failure: Some disk blocks may lose their data because of a read or write malfunction or because of a disk read/write head crash. This kind of failure and item 6 are more severe than items 1 through 4. 6. Physical problems and catastrophes: This refers to an endless list of problems that includes power or air-conditioning failure, fire, theft, sabotage, overwriting disks or tapes by mistake, and mounting of a wrong tape by the operator.
Transaction and System Concepts • A transaction is an atomic unit of work that is either completed in its entirety or not done at all. • A transaction passes through several states (Figure 21.4), similar to process states in operating systems. Transaction states: • Active state (executing read and write operations) • Partially committed state (ended, but waiting for system checks to determine success or failure) • Committed state (transaction succeeded) • Failed state (transaction failed and must be rolled back) • Terminated state (transaction leaves the system)
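The state transitions of Figure 21.4 can be captured as a small transition table. The sketch below is illustrative only, not from the textbook; the state and event names are assumptions:

```python
# Minimal sketch of the transaction state diagram as a transition table:
# state -> {event: next_state}. Event names are illustrative.
TRANSITIONS = {
    "active":              {"end_transaction": "partially_committed",
                            "abort": "failed"},
    "partially_committed": {"commit": "committed",
                            "abort": "failed"},
    "committed":           {"terminate": "terminated"},
    "failed":              {"terminate": "terminated"},
}

def next_state(state, event):
    """Return the next state, or raise KeyError if the transition is not allowed."""
    return TRANSITIONS[state][event]

# A successful transaction: active -> partially committed -> committed -> terminated
s = "active"
for ev in ("end_transaction", "commit", "terminate"):
    s = next_state(s, ev)
print(s)  # terminated
```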
Transaction and System Concepts (cont.) The DBMS recovery manager needs the system to keep track of the following operations (in the system log file): • begin_transaction: Marks the start of transaction execution. • read or write: Read or write operations on database items, executed as part of the transaction. • end_transaction: Specifies that the transaction's read and write operations have ended. The system may still have to check whether the changes (writes) introduced by the transaction can be permanently applied to the database (commit transaction) or whether the transaction has to be rolled back (abort transaction) because it violates concurrency control or for some other reason.
The recovery manager keeps track of the following operations (cont.): • commit_transaction: Signals the successful end of the transaction; any changes (writes) executed by the transaction can be safely committed to the database and will not be undone. • abort_transaction (or rollback): Signals that the transaction has ended unsuccessfully; any changes or effects that the transaction may have applied to the database must be undone.
❑ undo(X): Similar to rollback except that it applies to a single write operation rather than to a whole transaction. ❑ redo(X): This specifies that a write operation of a committed transaction must be redone to ensure that it has been applied permanently to the database on disk.
The System Log • An append-only file that keeps track of all operations of all transactions in the order in which they occurred; this information is needed during recovery from failures • The log is kept on disk, so it is not affected by failures other than disk or catastrophic failure • As with other disk files, a main memory log buffer holds the records being appended until the whole buffer is appended to the end of the log file on disk • The log is periodically backed up to archival storage (tape) to guard against catastrophic failures
Transaction and System Concepts (cont.) Types of records (entries) in log file: • [start_transaction,T]: Records that transaction T has started execution. • [write_item,T,X,old_value,new_value]: T has changed the value of item X from old_value to new_value. • [read_item,T,X]: T has read the value of item X (not needed in many cases). • [end_transaction,T]: T has ended execution • [commit,T]: T has completed successfully, and committed. • [abort,T]: T has been aborted.
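As an illustration, the log entries for a committed transaction can be pictured as follows. This is a minimal sketch, not the DBMS's actual log format: the transaction name T1, the items X and Y, and their values are hypothetical, and the records reuse the [write_item, T, X, old_value, new_value] layout listed above to show how undo and redo would use the old and new values:

```python
# Minimal sketch: a system log as a list of records for the hypothetical
# transfer transaction T1 (X: 100 -> 0, Y: 40 -> 140).
log = [
    ("start_transaction", "T1"),
    ("write_item", "T1", "X", 100, 0),    # old_value=100, new_value=0
    ("write_item", "T1", "Y", 40, 140),
    ("commit", "T1"),
]

def undo(db, record):
    """UNDO one write: restore the item's old_value (used for aborted transactions)."""
    _, _, item, old_value, _ = record
    db[item] = old_value

def redo(db, record):
    """REDO one write: reapply the item's new_value (used for committed transactions)."""
    _, _, item, _, new_value = record
    db[item] = new_value

db = {"X": 100, "Y": 40}
for rec in log:
    if rec[0] == "write_item":
        redo(db, rec)        # T1 committed, so its writes must be redone if lost
print(db)                    # {'X': 0, 'Y': 140}
```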
Transaction and System Concepts (cont.) Commit Point of a Transaction: ⚫ Definition: A transaction T reaches its commit point when all its operations that access the database have been executed successfully and the effect of all the transaction's operations on the database has been recorded in the log file (on disk). ⚫ The transaction is then said to be committed.
Transaction and System Concepts (cont.) Commit Point of a Transaction (cont.): ⚫ Log file buffers: Like database files on disk, whole disk blocks must be read or written to main memory buffers. ⚫ For log file, the last disk block (or blocks) of the file will be in main memory buffers to easily append log entries at end of file. ⚫ Force writing the log buffer: before a transaction reaches its commit point, any main memory buffers of the log that have not been written to disk yet must be copied to disk. ⚫ Called force-writing the log buffers before committing a transaction. ⚫ Needed to ensure that any write operations by the transaction are recorded in the log file on disk before the transaction commits
Desirable Properties of Transactions ❑ Called ACID properties ❑ Atomicity: A transaction is an atomic unit of processing; it is either performed in its entirety or not performed at all. ❑ Consistency preservation: A correct execution of the transaction must take the database from one consistent state to another.
• Isolation: Even though transactions are executing concurrently, they should appear to be executed in isolation – that is, their final effect should be as if each transaction was executed in isolation from start to finish.
• Durability or permanency: Once a transaction is committed, its changes (writes) applied to the database must never be lost because of subsequent failure.
• Consistency preservation: Specifies that each transaction, executed on its own, performs a correct action on the database; application programmers and DBMS constraint enforcement are responsible for this. • Isolation: The responsibility of the concurrency control protocol. • Durability or permanency: Enforced by the recovery protocol.
Schedules of Transactions • Transaction schedule (or history): When transactions are executing concurrently in an interleaved fashion, the order of execution of operations from the various transactions forms what is known as a transaction schedule (or history).
• Figure 21.5 (next slide) shows four possible schedules (A, B, C, D) of two transactions T1 and T2: – Order of operations is from top to bottom – Each schedule includes the same operations – The order of operations differs in each schedule
Schedules of Transactions (cont.) • Schedules can also be displayed in a more compact notation • Order of operations is from left to right • Include only read (r) and write (w) operations, with transaction id (1, 2, ...) and item name (X, Y, ...) • Can also include other operations such as b (begin), e (end), c (commit), a (abort) • For example, a schedule in which T1 reads and writes X, then T2 reads and writes X, and then T1 reads and writes Y is written as: r1(X); w1(X); r2(X); w2(X); r1(Y); w1(Y);
Schedules of Transactions (cont.) • Formal definition of a schedule (or history) S of n transactions T1, T2, ..., Tn : • An ordering of all the operations of the transactions subject to the constraint that, for each transaction Ti that participates in S, the operations of Ti in S must appear in the same order in which they occur in Ti.
Schedules of Transactions (cont.) • For n transactions T1, T2, ..., Tn, where each Ti has mi read and write operations, the number of possible schedules is (! is factorial function): (m1 + m2 + … + mn)! / ( (m1)! * (m2)! * … * (mn)! )
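A quick way to get a feel for this formula is to evaluate it for a small case. The sketch below (illustrative, not from the slides) computes the count in Python; for two transactions with two operations each it gives (2 + 2)! / (2! * 2!) = 6 possible schedules:

```python
from math import factorial

def num_schedules(ops_per_txn):
    """Number of possible schedules for transactions with the given operation
    counts, using (m1 + ... + mn)! / (m1! * ... * mn!)."""
    total = factorial(sum(ops_per_txn))
    for m in ops_per_txn:
        total //= factorial(m)
    return total

print(num_schedules([2, 2]))   # 6 possible schedules for two 2-operation transactions
print(num_schedules([4, 3]))   # 35; the count grows very quickly
```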
• The number of possible schedules is generally very large • Some schedules are easy to recover from after a failure, while others are not • Some schedules produce correct results, while others produce incorrect results • The rest of the chapter characterizes schedules by classifying them based on ease of recovery (recoverability) and correctness (serializability)
Characterizing Schedules based on Recoverability Schedules classified into two main classes: • Recoverable schedule: One where no committed transaction needs to be rolled back (aborted). • A schedule S is recoverable if no transaction T in S commits until all transactions T’ that have written an item that T reads have committed. • Non-recoverable schedule: A schedule where a committed transaction may have to be rolled back during recovery. • This violates Durability from ACID properties (a committed transaction cannot be rolled back) and so non-recoverable schedules should not be allowed.
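The recoverability condition can be checked directly from this definition. The following is a minimal sketch under simplifying assumptions (schedules given as lists of (operation, transaction, item) tuples, reads-from taken as the most recent writer of the item, aborts ignored); it is not a production algorithm:

```python
# Minimal sketch: check whether a schedule is recoverable.
# A schedule is a list of (op, txn, item) tuples with op in {'r', 'w', 'c'};
# the item is None for a commit.

def is_recoverable(schedule):
    last_writer = {}       # item -> transaction that wrote it most recently
    reads_from = {}        # txn -> set of transactions it has read from
    committed = set()

    for op, txn, item in schedule:
        if op == 'w':
            last_writer[item] = txn
        elif op == 'r':
            writer = last_writer.get(item)
            if writer is not None and writer != txn:
                reads_from.setdefault(txn, set()).add(writer)
        elif op == 'c':
            # T may commit only after every transaction it read from has committed
            if not reads_from.get(txn, set()) <= committed:
                return False
            committed.add(txn)
    return True

# T2 reads X written by T1 and commits before T1 commits: not recoverable.
print(is_recoverable([('w', 'T1', 'X'), ('r', 'T2', 'X'),
                      ('c', 'T2', None), ('c', 'T1', None)]))   # False
# If T1 commits first, the schedule is recoverable.
print(is_recoverable([('w', 'T1', 'X'), ('r', 'T2', 'X'),
                      ('c', 'T1', None), ('c', 'T2', None)]))   # True
```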
Characterizing Schedules Based on Recoverability (cont.) Summary: • Many schedules can exist for a set of transactions • The set of all possible schedules can be partitioned into two subsets: recoverable and non-recoverable • A subset of the recoverable schedules are cascadeless • If blind writes are allowed, a subset of the cascadeless schedules are strict • If blind writes are not allowed, the set of cascadeless schedules is the same as the set of strict schedules
Characterizing Schedules based on Serializability • Among the large set of possible schedules, we want to characterize which schedules are guaranteed to give a correct result • The consistency preservation property of the ACID properties states that each transaction, if executed on its own (from start to finish), will transform a consistent state of the database into another consistent state • Hence, each transaction is correct on its own
Characterizing Schedules based on Serializability (cont.) • Serial schedule: A schedule S is serial if, for every transaction T participating in the schedule, all the operations of T are executed consecutively (without interleaving of operations from other transactions) in the schedule. • Otherwise, the schedule is called nonserial. • Based on the consistency preservation property, any serial schedule will produce a correct result (assuming no inter-dependencies among different transactions)
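Using the same schedule representation as the recoverability sketch above, the definition of a serial schedule translates into a simple check that each transaction's operations form one contiguous block; again, this is only an illustrative sketch:

```python
# Minimal sketch: a schedule (same (op, txn, item) representation as above) is
# serial if all operations of each transaction appear consecutively.

def is_serial(schedule):
    seen_and_left = set()   # transactions whose block of operations has ended
    current = None
    for _, txn, _ in schedule:
        if txn != current:
            if txn in seen_and_left:
                return False            # txn re-appears after another transaction ran
            if current is not None:
                seen_and_left.add(current)
            current = txn
    return True

print(is_serial([('r', 'T1', 'X'), ('w', 'T1', 'X'),
                 ('r', 'T2', 'X'), ('w', 'T2', 'X')]))   # True (T1 then T2)
print(is_serial([('r', 'T1', 'X'), ('r', 'T2', 'X'),
                 ('w', 'T1', 'X'), ('w', 'T2', 'X')]))   # False (interleaved)
```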
Characterizing Schedules based on Serializability (cont.) • Serial schedules are not feasible for performance reasons: – No interleaving of operations – Long transactions force other transactions to wait – System cannot switch to other transaction when a transaction is waiting for disk I/O or any other event – Need to allow concurrency with interleaving without sacrificing correctness
Characterizing Schedules based on Serializability (cont.) • Serializability is generally hard to check at run-time: – Interleaving of operations is generally handled by the operating system through the process scheduler – Difficult to determine beforehand how the operations in a schedule will be interleaved – Transactions are continuously started and terminated
Characterizing Schedules Based on Serializability (cont.) Practical approach: • Come up with methods (concurrency control protocols) to ensure serializability. • DBMS concurrency control subsystem will enforce the protocol rules and thus guarantee serializability of schedules • Current approach used in most DBMSs: – Use of locks with two phase locking (see Section 22.1)
Characterizing Schedules based on Serializability (cont.) • View equivalence: A less restrictive definition of equivalence of schedules than conflict serializability when blind writes are allowed
• View serializability: A definition of serializability based on view equivalence. A schedule is view serializable if it is view equivalent to a serial schedule.
Transaction Support in SQL • In SQL, a single SQL statement is always considered to be atomic. Either the statement completes execution without error or it fails and leaves the database unchanged. • With SQL, there is no explicit Begin Transaction statement. Transaction initiation is done implicitly when particular SQL statements are encountered. • Every transaction must have an explicit end statement, which is either a COMMIT or a ROLLBACK.
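As a small illustration (not part of the slides), Python's sqlite3 module exhibits the same behavior: a transaction is started implicitly by the first data-changing statement and is ended explicitly with commit() or rollback(). The table and account values below are hypothetical:

```python
# Minimal illustration using Python's sqlite3 module: the transaction begins
# implicitly at the first data-changing statement and must end with an
# explicit COMMIT or ROLLBACK.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100), (2, 40)")
conn.commit()                                      # end of the first transaction

try:
    # A new transaction starts implicitly with the first UPDATE below.
    conn.execute("UPDATE account SET balance = balance - 100 WHERE id = 1")
    conn.execute("UPDATE account SET balance = balance + 100 WHERE id = 2")
    conn.commit()                                  # COMMIT: make both updates permanent
except sqlite3.Error:
    conn.rollback()                                # ROLLBACK: undo the partial transfer
```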