DBMS 22MCA21 IA3-SOS
Transactions
1. Atomicity:
o A transaction is treated as an indivisible unit. Either all its
operations are performed, or none are.
o If a transaction fails at any point, the system must ensure that all
changes made up to that point are undone (rolled back).
2. Consistency:
o A transaction should transition the database from one consistent
state to another. If the database was in a valid state before the
transaction, it should still be in a valid state after the transaction.
3. Isolation:
o Transactions should not interfere with each other. The
intermediate states of a transaction should not be visible to other
transactions until it is completed.
o This ensures that concurrent transactions produce the same result
as if they were executed serially.
4. Durability:
o Once a transaction is committed, its changes must be permanent,
even in the case of system failures like crashes.
1. Concurrency Control:
o Concurrency control mechanisms manage the simultaneous
execution of transactions to prevent conflicts. The goal is to
maintain isolation and ensure correctness when transactions are
executed concurrently.
o Common techniques include locking (e.g., read/write locks),
timestamp ordering, and optimistic concurrency control.
2. Locking:
o Locking is a method used to control access to data by multiple
transactions to ensure isolation.
o Types of locks:
Shared lock (S-lock): Allows multiple transactions to
read the data but prevents them from modifying it.
Exclusive lock (X-lock): Ensures that no other transaction
can read or modify the data until the lock is released.
o Two-phase locking (2PL): A common locking protocol ensuring
serializability. It has two phases:
Growing phase: The transaction acquires all the locks it
needs but cannot release any.
Shrinking phase: The transaction releases locks but
cannot acquire new ones.
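The lock compatibility rules above can be sketched as a small simulation (illustrative Python, not part of any real DBMS API): two locks are compatible only when both are shared.

```python
# Compatibility of shared (S) and exclusive (X) locks, as described above.
COMPATIBLE = {
    ("S", "S"): True,   # many readers may share an item
    ("S", "X"): False,  # a writer must wait for readers
    ("X", "S"): False,  # readers must wait for a writer
    ("X", "X"): False,  # writers are mutually exclusive
}

def can_grant(requested: str, held: list[str]) -> bool:
    """Return True if `requested` is compatible with every lock already held."""
    return all(COMPATIBLE[(h, requested)] for h in held)
```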
3. Deadlock:
o A deadlock occurs when two or more transactions are waiting for
each other to release locks, creating a cycle of dependencies that
prevents any of them from proceeding.
o Deadlock detection and deadlock prevention strategies are
employed to handle this.
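Deadlock detection is typically done by searching for a cycle in a wait-for graph; a minimal sketch (hypothetical transaction names, edge T1 → T2 meaning "T1 waits for a lock held by T2"):

```python
def has_deadlock(wait_for: dict[str, set[str]]) -> bool:
    """Detect a cycle in the wait-for graph with depth-first search."""
    visited, on_stack = set(), set()

    def dfs(txn):
        visited.add(txn)
        on_stack.add(txn)
        for nxt in wait_for.get(txn, set()):
            # A neighbour already on the DFS stack closes a cycle: deadlock.
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True
        on_stack.discard(txn)
        return False

    return any(dfs(t) for t in wait_for if t not in visited)
```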
4. Serializability:
o Serializability ensures that the result of executing multiple
transactions concurrently is equivalent to some serial execution of
the transactions (i.e., one after the other).
o This is the gold standard for correctness in transaction
processing.
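Conflict serializability can be tested by building a precedence graph over the schedule and checking it for cycles; an illustrative sketch, assuming the schedule is given as (transaction, operation, item) tuples:

```python
def is_conflict_serializable(schedule):
    """schedule: list of (txn, op, item) tuples, op in {'R', 'W'}."""
    # Add an edge for every pair of conflicting operations (same item,
    # different transactions, at least one write), earlier txn -> later txn.
    edges = set()
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            if t1 != t2 and x1 == x2 and "W" in (op1, op2):
                edges.add((t1, t2))

    # The schedule is conflict-serializable iff the graph is acyclic
    # (tested here with Kahn's topological-sort algorithm).
    nodes = {t for t, _, _ in schedule}
    indeg = {n: 0 for n in nodes}
    for _, dst in edges:
        indeg[dst] += 1
    ready = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while ready:
        n = ready.pop()
        seen += 1
        for src, dst in edges:
            if src == n:
                indeg[dst] -= 1
                if indeg[dst] == 0:
                    ready.append(dst)
    return seen == len(nodes)
```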
5. Commit and Rollback:
o Commit: When a transaction is successfully completed, its
changes are made permanent (i.e., committed to the database).
o Rollback: If an error occurs or the transaction cannot be
completed, all changes made by the transaction are undone,
restoring the database to its previous state.
6. Log-based Recovery:
o A transaction log (or journal) records all the changes made
during transactions. This log can be used to recover the database
in case of a crash.
o Undo logging and redo logging are common techniques to either
reverse incomplete transactions or redo committed ones that had
not yet been written to disk.
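A minimal sketch of log-based recovery, assuming a simplified log of (transaction, kind, ...) records rather than any real DBMS log format: after a crash, a redo pass replays all writes, then an undo pass reverses the writes of transactions that never committed.

```python
def recover(log):
    """log records: (txn, 'write', item, old, new), (txn, 'commit')."""
    db = {}
    committed = {t for t, kind, *_ in log if kind == "commit"}

    # Redo pass: replay new values in log order.
    for t, kind, *rest in log:
        if kind == "write":
            item, old, new = rest
            db[item] = new

    # Undo pass: walk backwards, restoring old values of uncommitted writes.
    for t, kind, *rest in reversed(log):
        if kind == "write" and t not in committed:
            item, old, new = rest
            db[item] = old
    return db
```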
7. Checkpointing:
o A checkpoint is a mechanism that saves the current state of the
database and transaction log at specific points. In case of a system
failure, the database can be recovered more efficiently from the
most recent checkpoint instead of replaying the entire log.
8. Transaction Isolation Levels:
o Isolation levels control the degree of visibility that one
transaction has into the operations of other transactions.
o Common isolation levels (defined by SQL standards) are:
Read Uncommitted: Transactions can see uncommitted
changes from other transactions, which may lead to dirty
reads.
Read Committed: A transaction can only read committed
data, avoiding dirty reads, but non-repeatable reads and
phantom reads may occur.
Repeatable Read: A transaction can repeatedly read the
same data without it changing, preventing non-repeatable
reads, though phantom reads can still occur.
Serializable: The highest level, ensuring full isolation,
where transactions appear to execute serially.
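The difference between Read Uncommitted and Read Committed can be illustrated with a toy in-memory store (assumed names; real databases implement isolation very differently):

```python
class TinyStore:
    def __init__(self):
        self.committed = {}    # durable, committed values
        self.uncommitted = {}  # values written but not yet committed

    def write(self, item, value):
        self.uncommitted[item] = value

    def commit(self):
        self.committed.update(self.uncommitted)
        self.uncommitted.clear()

    def read(self, item, isolation="READ COMMITTED"):
        if isolation == "READ UNCOMMITTED" and item in self.uncommitted:
            return self.uncommitted[item]  # dirty read of uncommitted data
        return self.committed.get(item)    # only committed data is visible
```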
9. Optimistic vs. Pessimistic Concurrency Control:
o Optimistic: Assumes that conflicts between transactions are rare
and allows transactions to execute without locking data, checking
for conflicts only at the end.
o Pessimistic: Assumes that conflicts are likely and locks data at
the beginning of a transaction to prevent issues.
10. Database Recovery Techniques:
o Immediate Update: Changes are written to the database as soon
as they are made.
o Deferred Update: Changes are not applied to the database until
the transaction commits. This simplifies recovery because
uncommitted transactions do not affect the database.
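A sketch of deferred update, where buffered writes reach the database only at commit (illustrative class and method names): an abort simply discards the buffer, so no undo is ever needed.

```python
class DeferredUpdateDB:
    def __init__(self):
        self.data = {}
        self.buffers = {}  # txn -> pending writes

    def write(self, txn, item, value):
        self.buffers.setdefault(txn, {})[item] = value

    def commit(self, txn):
        # Only now do the transaction's writes touch the database.
        self.data.update(self.buffers.pop(txn, {}))

    def abort(self, txn):
        self.buffers.pop(txn, None)  # nothing ever touched the database
```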
OR
b) What are the desirable properties of transactions? Discuss each property in detail. (10 Marks, CO3, L3)
1. Atomicity
Atomicity ensures that a transaction is treated as a single, indivisible unit of work: either all of its operations are executed, or none of them are. If any operation fails, all changes made so far must be undone (rolled back).
2. Consistency
Consistency ensures that a transaction transforms the database from one valid
state to another valid state, preserving the integrity of the database. If the
database was consistent before the transaction started, it must remain consistent
after the transaction is completed.
Detailed Breakdown:
Integrity Constraints: These are rules enforced on the data (e.g., foreign
key constraints, data types, uniqueness). Consistency ensures that these
constraints are not violated during or after the execution of a transaction.
For example, transferring money from one account to another should not
create or lose money, meaning the total balance in the system should
remain constant.
Business Rules: Besides system-enforced integrity constraints, business
rules like limits on certain types of transactions or values are also
maintained under the consistency property.
Before and After States: A transaction starts with a consistent state, and
after it finishes, the database should still be in a consistent state. The
DBMS and transaction management system are responsible for
maintaining this integrity.
3. Isolation
Isolation ensures that transactions are executed in isolation from each other,
meaning that the intermediate states of a transaction are not visible to other
transactions until the transaction is complete. Even when multiple transactions
run concurrently, the results should be as if the transactions were executed
serially (one after the other).
Detailed Breakdown:
Concurrency Problems: Without isolation, concurrently running transactions can interfere with each other, causing dirty reads, non-repeatable reads, and lost updates.
Enforcement: The DBMS enforces isolation through concurrency control mechanisms such as locking and timestamp ordering.
4. Durability
Durability guarantees that once a transaction has been committed, its effects are
permanently recorded in the database, even in the event of a system failure (e.g.,
a crash or power loss). Once a transaction is completed, its changes must not be
lost.
Example:
Imagine an online banking system where a user transfers money from one
account to another.
Atomicity: The transfer either completes fully (both debit and credit are
done) or doesn’t happen at all (if any error occurs, no change is made).
Consistency: After the transaction, the total amount of money in both
accounts remains the same as it was before the transaction, maintaining
the consistency of account balances.
Isolation: If another user is checking their balance during the transfer,
they will not see an intermediate state where the money has been
deducted from one account but not yet added to the other.
Durability: Once the transfer is confirmed, even if there’s a system crash
immediately after, the transaction will not be lost, and the changes will
be reflected when the system comes back online.
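The transfer above can be sketched as a single atomic operation that rolls back on failure (a toy in-memory model, not a real banking API): on any error the snapshot is restored, so the total balance is always preserved.

```python
def transfer(accounts, src, dst, amount):
    snapshot = dict(accounts)      # state to restore on rollback
    try:
        if accounts[src] < amount:
            raise ValueError("insufficient funds")
        accounts[src] -= amount    # debit
        accounts[dst] += amount    # credit
    except Exception:
        accounts.clear()
        accounts.update(snapshot)  # rollback: undo any partial changes
        return False
    return True                    # commit
```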
2. a) How are schedules characterized based on recoverability? Explain the concept. (10 Marks, CO3, L3)
Schedules can be categorized into different types based on whether they are
recoverable or not. These types are:
1. Recoverable Schedule
2. Cascadeless Schedule
3. Strict Schedule
Each type has different levels of recoverability, ensuring that transactions can be
correctly undone or committed based on the system's needs. Let’s explain each
in detail.
1. Recoverable Schedule
A schedule is recoverable if no transaction commits until every transaction whose changes it has read has committed first.
Example:
Consider two transactions, T1 and T2, where T1 writes a data item X and T2 then reads X. For the schedule to be recoverable, T2 must commit only after T1 commits; if T1 later aborts, T2 can still be safely rolled back.
2. Cascadeless Schedule
In a cascadeless schedule, a transaction may read a data item only after the transaction that last wrote it has committed, so no transaction ever reads uncommitted (dirty) data.
This eliminates the risk of cascading aborts, which happen when multiple
transactions need to be aborted because they have read data from a transaction
that has been rolled back. Cascadeless schedules improve system efficiency by
preventing chain reactions of rollbacks.
3. Strict Schedule
In a strict schedule:
A transaction can neither read nor overwrite a data item until the last
transaction that updated that item has committed or aborted.
This prevents both cascading rollbacks and dirty writes (writing based
on uncommitted data).
Example: if T1 writes a data item X, then in a strict schedule T2 is not allowed to read or write X until T1 either commits or aborts.
This guarantees that T2 will never work with uncommitted data, and no
cascading rollbacks can occur.
Strict schedules provide the highest level of safety by preventing both cascading
rollbacks and dirty writes, ensuring that transactions work only with committed
data. They also simplify recovery: once a transaction is committed, its changes
are guaranteed to be permanent.
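The recoverability condition can be checked mechanically from a schedule; an illustrative sketch, assuming (transaction, operation, item) tuples with 'C' marking a commit:

```python
def is_recoverable(schedule):
    """True iff no transaction commits before every transaction it read from."""
    last_writer = {}   # item -> txn that wrote it most recently
    committed = set()
    reads_from = {}    # reader txn -> set of uncommitted writers it read from

    for txn, op, item in schedule:
        if op == "W":
            last_writer[item] = txn
        elif op == "R":
            w = last_writer.get(item)
            if w is not None and w != txn and w not in committed:
                reads_from.setdefault(txn, set()).add(w)
        elif op == "C":
            # Every transaction this one read from must already be committed.
            if any(w not in committed for w in reads_from.get(txn, set())):
                return False
            committed.add(txn)
    return True
```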
1. ACID Properties
3. Isolation Levels
SQL provides isolation levels to define how and when the changes made by one
transaction become visible to other transactions. This helps control concurrency
issues like dirty reads, non-repeatable reads, and phantom reads. The SQL
standard defines four isolation levels:
SQL databases allow you to set the isolation level for a transaction explicitly
using commands like SET TRANSACTION ISOLATION LEVEL.
4. Concurrency Control
Try-Catch Blocks (in some databases like SQL Server): You can handle
errors inside transactions using BEGIN TRY and BEGIN CATCH blocks,
rolling back the transaction if an error occurs.
Transaction Timeouts: Some databases allow you to specify a
maximum time for a transaction to complete, after which the transaction
is automatically rolled back.
SQL also supports batch processing where multiple SQL statements are
executed in a batch within a transaction. This is useful for bulk operations such
as inserting or updating large datasets efficiently and ensuring that the operation
is atomic.
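Batch execution inside a single transaction can be demonstrated with Python's standard sqlite3 module, whose connection context manager commits on success and rolls back on error:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")

rows = [(1, 100), (2, 200), (3, 300)]
try:
    with conn:  # commits the whole batch, or rolls it all back on error
        conn.executemany("INSERT INTO accounts VALUES (?, ?)", rows)
except sqlite3.Error:
    pass  # on failure, nothing from the batch is visible

count = conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
```

If any statement in the batch fails (for example, a duplicate primary key), the context manager rolls back every row inserted so far, keeping the batch atomic.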
3. a) Discuss the two-phase locking technique for concurrency control. How does it ensure data consistency? (10 Marks, CO3, L3)
1. Growing Phase:
o The transaction can acquire locks (shared or exclusive) as
needed, but it cannot release any locks.
o This phase continues as long as the transaction is requesting
locks.
2. Shrinking Phase:
o Once the transaction releases its first lock, it enters the shrinking
phase.
o In this phase, the transaction can only release locks and cannot
acquire any new locks.
This process ensures that no transaction can change its lock set once it starts
releasing locks, preventing inconsistent states.
Lock Types:
Shared (S) locks allow several transactions to read an item concurrently; exclusive (X) locks are required for writing and conflict with all other locks on the item.
However, 2PL can lead to deadlocks, where two or more transactions are stuck
waiting for each other’s locks to be released. Deadlock detection or prevention
strategies (like timeout or ordering locks) are used to mitigate this.
In summary, 2PL ensures data consistency by enforcing strict rules about when
locks can be acquired or released, which guarantees that transactions are
serializable and, thus, ensures the database's correctness even in the presence of
concurrent transactions.
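The two-phase rule itself can be enforced with a simple flag per transaction; a minimal sketch (lock conflicts between transactions are omitted for brevity):

```python
class TwoPhaseTxn:
    def __init__(self):
        self.locks = set()
        self.shrinking = False   # becomes True after the first release

    def acquire(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violation: acquiring after a release")
        self.locks.add(item)     # growing phase

    def release(self, item):
        self.shrinking = True    # transaction enters the shrinking phase
        self.locks.discard(item)
```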
OR
b) What are multiversion concurrency control techniques? How do they handle read and write conflicts? (10 Marks, CO3, L3)
1. Locking Protocols
3. Timestamp-Based Protocols
Granularity of data items refers to the size or level of data being locked in a
database during transactions. Granularity can vary from fine-grained (smaller
units like individual rows or fields) to coarse-grained (larger units like tables or
the entire database). The granularity chosen impacts the balance between
concurrency and overhead in a system:
Key Concepts:
Fine granularity (fields, rows) permits more concurrent transactions but forces the lock manager to track many more locks; coarse granularity (tables, files, the whole database) keeps locking overhead low but blocks more transactions.
Handling Conflicts:
Multiple-granularity locking uses intention locks (IS, IX, SIX) on coarser-level items so that conflicts between locks requested at different levels can be detected cheaply.
Advantages:
The lock size can be matched to the operation, balancing concurrency against locking overhead.
Disadvantages:
Lock management becomes more complex, and extra intention locks must be acquired along the path from the coarsest item down to the item being accessed.
1. Read Phase:
o During this phase, transactions can read data freely and make
local changes without acquiring locks. They operate under the
assumption that conflicts are rare.
o Each transaction keeps track of the data it reads and writes, along
with their respective timestamps.
2. Validation Phase:
o Once a transaction completes its read operations and is ready to
commit, it enters the validation phase.
o The validation process checks whether the transaction can be
committed based on the following rules:
Read-Only Transactions: These can always be validated
successfully since they don’t modify data.
Read-Write Transactions: For a transaction to be valid,
it must ensure that no other transaction has modified the
data items it read since its start timestamp.
Conflicts are checked against the write timestamps of the
data items to ensure no other transaction has modified any
of the items that the current transaction has read or
written.
3. Write Phase:
o If a transaction passes validation, it proceeds to the write phase,
where its changes are committed to the database.
o If it fails validation, it is aborted, and all its changes are
discarded.
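The validation rule for read-write transactions can be sketched as follows (illustrative data structures, assuming per-item timestamps of the last committed write):

```python
def validate(txn, write_timestamps):
    """txn: dict with 'start_ts', 'read_set', 'write_set'.
    write_timestamps: item -> timestamp of its last committed write."""
    if not txn["write_set"]:   # read-only transactions always pass
        return True
    # Valid only if nothing the transaction read was committed after it began.
    return all(
        write_timestamps.get(item, 0) <= txn["start_ts"]
        for item in txn["read_set"]
    )
```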
Advantages:
No locks are held, so there is no locking overhead and no possibility of deadlock; concurrency is high when conflicts are rare (e.g., read-dominated workloads).
Disadvantages:
When conflicts are frequent, many transactions fail validation and must be restarted, wasting the work they have already done.