
UNIT 5

Transaction Management and Database Security


Contents
• Introduction to transaction processing, Transaction
and system concepts, Transaction properties,
Concurrency and integrity controls, Locking
Techniques for Concurrency Control, Recovery
concepts and Techniques
• Database Security - Types of System Failures, Security,
Audit trail and encryption, data masking
Transaction and System Concepts
• A transaction can be defined as a group of tasks. A single task is the minimum
processing unit which cannot be divided further.
• Let’s take an example of a simple transaction. Suppose a bank employee transfers
$500 from A's account to B's account.
• This very simple and small transaction
involves several low-level tasks.
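As a sketch, this transfer decomposes into low-level read and write tasks; the following illustrative Python uses an assumed in-memory store, not a real DBMS API:

# Illustrative sketch only: SimpleDB and its read/write methods are assumed
# stand-ins for the DBMS storage layer.
class SimpleDB:
    def __init__(self, data):
        self.data = dict(data)
    def read(self, item):
        return self.data[item]
    def write(self, item, value):
        self.data[item] = value

def transfer(db, amount=500):
    a = db.read("A")           # read A's balance
    db.write("A", a - amount)  # deduct $500 from A
    b = db.read("B")           # read B's balance
    db.write("B", b + amount)  # add $500 to B

db = SimpleDB({"A": 1000, "B": 200})
transfer(db)
print(db.data)  # {'A': 500, 'B': 700}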
Transaction and System Concepts
• A transaction is a single logical unit of work which accesses and possibly modifies
the contents of a database.
• A transaction is a very small unit of a program and it may contain several low-level
tasks.
• A transaction in a database system must maintain Atomicity, Consistency, Isolation,
and Durability − commonly known as ACID properties − in order to ensure accuracy,
completeness, and data integrity. These are known as Transaction Properties.
• Transactions access data using read and write operations.
• In order to maintain consistency in a database before and after a transaction,
certain properties are followed.
• These are called ACID properties.
ACID properties
• Atomicity − This property states that a transaction must be treated as an atomic unit; that is, either all
of its operations are executed or none. There must be no state in a database where a transaction is left
partially completed. States should be defined either before the execution of the transaction or after
the execution/abortion/failure of the transaction.
• Consistency − The database must remain in a consistent state after any transaction. No transaction
should have any adverse effect on the data residing in the database. If the database was in a consistent
state before the execution of a transaction, it must remain consistent after the execution of the
transaction as well.
• Durability − The database should be durable enough to hold all its latest updates even if the system
fails or restarts. If a transaction updates a chunk of data in a database and commits, then the database
will hold the modified data. If a transaction commits but the system fails before the data could be
written to disk, then that data will be updated once the system springs back into action.
• Isolation − In a database system where more than one transaction is being executed simultaneously
and in parallel, the property of isolation states that each transaction will be carried out and
executed as if it were the only transaction in the system. No transaction will affect the existence of any
other transaction.
ACID properties
Atomicity
• By this, we mean that either the entire transaction takes place at once
or doesn't happen at all. There is no midway, i.e., transactions do not
occur partially. Each transaction is considered one unit and either
runs to completion or is not executed at all. It involves the following
two operations:
—Abort: If a transaction aborts, changes made to the database are not
visible.
—Commit: If a transaction commits, changes made are visible.
• Atomicity is also known as the ‘All or nothing rule’.
ACID properties
Atomicity
• Consider the following transaction T consisting of T1 and T2: Transfer
of 100 from account X to account Y.
• If the transaction fails after completion of T1 but before completion
of T2 (say, after write(X) but before write(Y)), then the amount has been
deducted from X but not added to Y. This results in an inconsistent
database state. Therefore, the transaction must be executed in its
entirety in order to ensure the correctness of the database state.
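A minimal sketch of this failure mode in illustrative Python (the crash point is simulated; T1 and T2 are the operations from the example above):

# T consists of T1 (deduct 100 from X) and T2 (add 100 to Y).
db = {"X": 500, "Y": 200}

def t1(db):  # T1: read(X), X = X - 100, write(X)
    db["X"] = db["X"] - 100

def t2(db):  # T2: read(Y), Y = Y + 100, write(Y)
    db["Y"] = db["Y"] + 100

t1(db)
# If the system crashes here, X has been debited but Y has not been
# credited: db == {"X": 400, "Y": 200}, an inconsistent state.
# Atomicity requires that T1 and T2 either both complete or T1 is undone.
t2(db)
print(db)  # {'X': 400, 'Y': 300}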
ACID properties
Consistency
• This means that integrity constraints must be maintained so that the
database is consistent before and after the transaction. It refers to the
correctness of a database.
• Referring to the example above,
The total amount before and after the transaction must be maintained.
Total before T occurs = 500 + 200 = 700.
Total after T occurs = 400 + 300 = 700.
Therefore, the database is consistent. Inconsistency occurs in
case T1 completes but T2 fails, leaving T incomplete.
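The invariant from this example, checked in a small illustrative Python sketch:

X, Y = 500, 200
total_before = X + Y           # 700
X, Y = X - 100, Y + 100        # transaction T transfers 100 from X to Y
assert X + Y == total_before   # 400 + 300 == 700: consistency preserved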
ACID properties
Isolation
• This property ensures that multiple transactions can occur concurrently
without leading to the inconsistency of database state. Transactions
occur independently without interference.
• Changes occurring in a particular transaction will not be visible to any
other transaction until that particular change in that transaction is
written to memory or has been committed. This property ensures that
executing transactions concurrently will result in a state that is
equivalent to a state achieved if they were executed serially in some order.
Let X = 500, Y = 500.
Consider two transactions T and T''.
ACID properties
Isolation
• Suppose T has been executed till Read(Y) and then T'' starts. As a
result, interleaving of operations takes place, due to which T'' reads the
correct value of X but an incorrect value of Y, and the sum computed by
T'': (X+Y = 50,000 + 500 = 50,500)
is thus not consistent with the sum at the end of the transaction:
T: (X+Y = 50,000 + 450 = 50,450).
This results in database inconsistency, due to a difference of 50 units. Hence,
transactions must take place in isolation and changes should be
visible only after they have been made to the main memory.
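A sketch of this interleaving in illustrative Python (assuming, as the quoted sums imply, that T multiplies X by 100 and then deducts 50 from Y):

db = {"X": 500, "Y": 500}

# T executes up to Read(Y):
x = db["X"]
db["X"] = x * 100        # X is now 50,000
y = db["Y"]              # T has read Y = 500 but not yet updated it

# T'' now runs to completion, interleaved with T:
total = db["X"] + db["Y"]   # 50,000 + 500 = 50,500 (inconsistent)

# T finishes:
db["Y"] = y - 50            # Y is now 450; the consistent sum is 50,450
print(total, db["X"] + db["Y"])  # 50500 50450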
ACID properties
Durability
• This property ensures that once the transaction has completed
execution, the updates and modifications to the database are stored in
and written to disk and they persist even if a system failure occurs.
These updates now become permanent and are stored in non-volatile
memory. The effects of the transaction, thus, are never lost.
• The ACID properties, in totality, provide a mechanism to ensure
correctness and consistency of a database in such a way that each
transaction is a group of operations that acts as a single unit, produces
consistent results, acts in isolation from other operations, and makes
updates that are durably stored.
Transaction States
A transaction in a database can be in one of the following states −
Transaction States
• Active − In this state, the transaction is being executed. This is the initial state of every transaction.
• Partially Committed − When a transaction executes its final operation, it is said to be in a partially
committed state.
• Failed − A transaction is said to be in a failed state if any of the checks made by the database
recovery system fails. A failed transaction can no longer proceed further.
• Aborted − If any of the checks fails and the transaction has reached a failed state, then the
recovery manager rolls back all its write operations on the database to bring the database back to
its original state where it was prior to the execution of the transaction. Transactions in this state
are called aborted. The database recovery module can select one of the two operations after a
transaction aborts −
• Re-start the transaction
• Kill the transaction
• Committed − If a transaction executes all its operations successfully, it is said to be committed. All
its effects are now permanently established on the database system.
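A minimal sketch of this state machine in illustrative Python (state names from the list above; the transition structure shown is the standard one):

VALID_TRANSITIONS = {
    "active": {"partially committed", "failed"},
    "partially committed": {"committed", "failed"},
    "failed": {"aborted"},
    "aborted": set(),      # terminal: re-start or kill the transaction
    "committed": set(),    # terminal: effects are permanent
}

def transition(state, new_state):
    if new_state not in VALID_TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

s = "active"                              # initial state of every transaction
s = transition(s, "partially committed")  # final operation executed
s = transition(s, "committed")            # all checks passed
print(s)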
Concurrency and Integrity Controls
What is Concurrency Control?
• Concurrency Control in a Database Management System is a procedure for managing
simultaneous operations so that they do not conflict with each other. It ensures that database
transactions are performed concurrently and accurately to produce correct results
without violating the data integrity of the respective database.
• Concurrent access is quite easy if all users are just reading data, since there is no way they
can interfere with one another. Any practical database, however, has a mix
of READ and WRITE operations, and hence concurrency is a challenge.
• DBMS Concurrency Control is used to address such conflicts, which mostly occur with
a multi-user system. Therefore, Concurrency Control is the most important element
for proper functioning of a Database Management System where two or more
database transactions are executed simultaneously, which require access to the same
data.
Concurrency and Integrity
Controls
Potential Problems of Concurrency
Here are some issues you are likely to face while using the DBMS Concurrency Control method:
• Dirty Read Problem occurs when a transaction reads the data that has been updated by another
transaction that is still uncommitted. It arises due to multiple uncommitted transactions executing
simultaneously.
• Lost Updates occur when multiple transactions select the same row and update the row based on
the value selected.
• Uncommitted dependency issues occur when the second transaction selects a row which is updated
by another transaction (dirty read)
• Non-Repeatable Read occurs when a second transaction is trying to access the same row several
times and reads different data each time.
• The Incorrect Summary issue occurs when one transaction computes a summary over the values of all the
instances of a repeated data item while a second transaction updates a few instances of that specific data
item. In that situation, the resulting summary does not reflect a correct result.
Concurrency Management
1. Dirty Read Problem
• The dirty read problem in DBMS occurs when a transaction reads the
data that has been updated by another transaction that is still
uncommitted. It arises due to multiple uncommitted transactions
executing simultaneously.
• Example 1: Consider two transactions A and B performing read/write
operations on a data item DT in the database DB. The current value of DT is
1000. The read/write operations in the A and B transactions proceed as
described below.
Concurrency Management
1. Dirty Read Problem
• Transaction A reads the value of data DT as 1000 and modifies it to
1500 which gets stored in the temporary buffer.
• The transaction B reads the data DT as 1500 and commits it and the
value of DT permanently gets changed to 1500 in the database DB.
• Then a server error occurs in transaction A and it rolls back to its
initial value, i.e., 1000. Transaction B has committed a value that was
never valid, and the dirty read problem occurs.
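A minimal simulation of this scenario in illustrative Python (the temporary buffer and the committed database are modeled as two dictionaries):

# Committed database state vs. transaction A's uncommitted buffer.
committed = {"DT": 1000}
buffer = dict(committed)

buffer["DT"] = 1500          # A updates DT to 1500 (uncommitted)

b_value = buffer["DT"]       # B reads the uncommitted 1500: a dirty read
committed["DT"] = b_value    # B commits; DT is now permanently 1500

buffer["DT"] = 1000          # A fails and rolls back its change
print(committed["DT"])       # 1500 -- built on a value that was never committed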
Concurrency Management
1. Dirty Read
• The dirty read occurs in the case when one transaction updates an
item of the database, and then the transaction fails for some reason.
• The updated database item is accessed by another transaction before
it is changed back to the original value.
• A transaction T1 updates a record which is read by T2.
• If T1 aborts then T2 now has values which have never formed part of
the stable database.
• Consider example 2:
Concurrency Management
1. Dirty Read
• At time t2, Transaction-Y writes A's value.
• At time t3, Transaction-X reads A's value.
• At time t4, Transaction-Y rolls back, so it changes A's value back to
what it was prior to t1.
• So, Transaction-X now contains a value which has never become part
of the stable database.
• This is known as the Dirty Read Problem, as one
transaction reads a dirty value which has not been committed.
Concurrency Management
2. Lost Update Problem
• When two transactions that access the same database items interleave
their operations in a way that makes the value of some database item
incorrect, the lost update problem occurs.
• If two transactions T1 and T2 both read a record and then update it,
the effect of the first update will be overwritten by the
second update.
• Consider the example:
Concurrency Management
2. Lost Update Problem
Here:
• At time t2, Transaction-X reads A's value.
• At time t3, Transaction-Y reads A's value.
• At time t4, Transaction-X writes A's value on the basis of the value seen at time t2.
• At time t5, Transaction-Y writes A's value on the basis of the value seen at time t3.
• So at time t5, the update of Transaction-X is lost because Transaction-Y overwrites it
without looking at its current value.
• This is known as the Lost Update Problem, as the update made by one
transaction is lost.
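A short simulation of this interleaving in illustrative Python (the starting value 100 and the two updates, -10 and +20, are assumed for illustration):

A = 100                # assumed starting value of the record

x_local = A            # t2: Transaction-X reads A
y_local = A            # t3: Transaction-Y reads A

A = x_local - 10       # t4: X writes based on its stale read (A = 90)
A = y_local + 20       # t5: Y writes based on its stale read (A = 120)

print(A)               # 120 -- X's update has been silently lost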
Concurrency Management
3. Inconsistent Retrievals Problem
• Inconsistent Retrievals Problem is also known as unrepeatable read.
When a transaction calculates some summary function over a set of
data while the other transactions are updating the data, then the
Inconsistent Retrievals Problem occurs.
• A transaction T1 reads a record and then does some other processing,
during which the transaction T2 updates the record.
• When transaction T1 reads the record again, the new value
is inconsistent with the previous value.
Concurrency Management
3. Inconsistent Retrievals Problem
• Consider the example:
• Suppose two transactions operate on three accounts.
• Transaction-X is summing all the balances while Transaction-Y is
transferring an amount of 50 from Account-1 to Account-3.
• Here, Transaction-X produces the result 550, which is incorrect. If
we write this result to the database, the database will be left in an
inconsistent state, because the actual sum is 600.
• Here, Transaction-X has seen an inconsistent state of the database.
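A sketch of the interleaving in illustrative Python (balances of 100, 200, and 300 are assumed, chosen so the totals match the 550 and 600 quoted above):

accounts = {"Account-1": 100, "Account-2": 200, "Account-3": 300}

accounts["Account-1"] -= 50          # Y: debit Account-1 first

total = sum(accounts.values())       # X: sums mid-transfer -> 50+200+300 = 550

accounts["Account-3"] += 50          # Y: credit Account-3 afterwards

print(total, sum(accounts.values())) # 550 600 -- X's summary is inconsistent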
Concurrency Management
4. Incorrect Summary Issue
• The incorrect summary problem occurs when a transaction computes an
aggregate (such as a sum) over several data items and the value of one of
the items is changed by another transaction while the aggregate is being
computed, so the resulting sum is incorrect.
• Example: Consider two transactions A and B performing read/write
operations on two data items DT1 and DT2 in the database DB. The current
value of DT1 is 1000 and of DT2 is 2000. The read/write operations in the
A and B transactions proceed as described below.
Concurrency Management
4. Incorrect Summary Issue
• Transaction A reads the value of DT1 as 1000. It uses an aggregate
function SUM, which calculates the sum of the two data items DT1 and DT2
in a variable add, but in between, the value of DT2 gets changed from 2000
to 2500 by transaction B.
• The variable add uses the modified value of DT2 and gives the resultant
sum as 3500 instead of 3000.
Concurrency Management
5. Phantom Read Problem
• In the phantom read problem, data is read through two different read
operations in the same transaction. The first read operation obtains a value
for the data, but the second read fails with an error
saying the data does not exist.
• Example: Consider two transactions A and B performing read/write
operations on a data item DT in the database DB. The current value of DT is
1000. The read/write operations in the A and B transactions proceed as
described below.
Concurrency Management
5. Phantom Read Problem
• Transaction B initially reads the value of DT as 1000. Transaction A
then deletes the data item DT from the database DB, so when transaction
B reads the value again, it gets an error saying that DT does not exist
in the database DB.
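A minimal simulation in illustrative Python (a dictionary stands in for the database; the deleted key plays the role of the removed data item):

db = {"DT": 1000}

first = db["DT"]        # B's first read succeeds: 1000

del db["DT"]            # A deletes DT and commits

try:
    second = db["DT"]   # B's second read of the same item
except KeyError:
    print("error: data item DT does not exist in DB")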
Concurrency and Integrity Controls
Why use concurrency control methods?
Reasons for using concurrency control methods in a DBMS:
• To apply isolation through mutual exclusion between conflicting
transactions.
• To resolve read-write and write-write conflict issues.
• To preserve database consistency by constraining how the operations of
concurrent transactions may interleave.
• The system needs to control the interaction among the concurrent
transactions. This control is achieved using concurrent-control schemes.
• Concurrency control helps to ensure serializability.
Concurrency and Integrity Controls
Concurrency Control Protocols
Different concurrency control protocols offer different trade-offs
between the amount of concurrency they allow and the amount of
overhead that they impose. The following are the concurrency control
techniques in DBMS:
• Lock-Based Protocols
• Two Phase Locking Protocol
• Timestamp-Based Protocols
• Validation-Based Protocols
Concurrency and Integrity Controls
Lock-based Protocols
• Lock-Based Protocols in DBMS are a mechanism in which a transaction cannot
read or write data until it acquires an appropriate lock. Lock-based
protocols help to eliminate concurrency problems in DBMS for
simultaneous transactions by controlling which transactions may access a
data item at the same time.
• A lock is a data variable which is associated with a data item. The lock signifies
which operations can be performed on the data item. Locks in DBMS help
synchronize access to database items by concurrent transactions.
• All lock requests are made to the concurrency-control manager. Transactions
proceed only once the lock request is granted.
Concurrency and Integrity Controls
Lock-based Protocols
• To attain consistency, isolation between transactions is the most important tool.
Isolation is achieved by preventing a transaction from performing a conflicting
read/write operation; this is what locking an operation in a transaction means.
Through lock-based protocols, desired operations are freely allowed while
conflicting operations are blocked.
• There are two kinds of locks used in Lock-based protocols:
• Shared Lock (S): A lock which disables write operations but allows read
operations on a data item in a transaction is known as a shared lock. Multiple
transactions may hold shared locks on the same item at the same time. Shared
locks are also known as read-only locks and are represented by 'S'.
• Exclusive Lock (X): A lock which allows both read and write operations on a
data item in a transaction is known as an exclusive lock. Only one transaction
may hold an exclusive lock on a data item at a time; it cannot be shared.
Exclusive locks are represented by 'X'.
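A minimal sketch of S/X lock compatibility in illustrative Python (a real lock manager would make the requesting transaction wait rather than simply refuse):

locks = {}  # data item -> {"mode": "S" or "X", "holders": set of txn ids}

def can_grant(item, mode):
    held = locks.get(item)
    if held is None:
        return True                             # unlocked items grant anything
    return mode == "S" and held["mode"] == "S"  # only S + S are compatible

def lock(item, mode, txn):
    if not can_grant(item, mode):
        return False                            # the transaction would wait here
    entry = locks.setdefault(item, {"mode": mode, "holders": set()})
    entry["holders"].add(txn)
    return True

print(lock("A", "S", "T1"))  # True  -- first reader
print(lock("A", "S", "T2"))  # True  -- shared locks coexist
print(lock("A", "X", "T3"))  # False -- writer blocked by readers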
Concurrency and Integrity Controls
Lock-based Protocols
There are four kinds of lock-based protocols:
• Simplistic Lock Protocol: This protocol requires a transaction to lock a data item
before the item is updated. The transaction may
unlock the data item once the write operation is complete.
• Pre-claiming Lock Protocol: According to the pre-claiming lock protocol, initially
an assessment of the operations that are going to be performed is conducted.
Then a list is prepared containing the data items on which locks will be imposed.
The transaction requests all the locks from the system before starting the execution of
the operations. If all the locks are granted, the operations in the transaction
run smoothly and the locks are returned to the system on completion. The
transaction rolls back if not all of the locks are granted.
Concurrency and Integrity Controls
Lock-based Protocols
There are four kinds of lock-based protocols:
• Two-phase Locking Protocol: This protocol divides a transaction's execution into three
parts. The transaction
starts its execution with the first phase, where it asks for the locks it needs. Once the locks
are granted, the second phase begins, in which the transaction holds all of its locks.
When the transaction releases its first lock, the third phase begins, in which the
remaining locks are released as the transaction completes its operations.
• Strict Two-Phase Locking Protocol: Strict 2PL is almost identical to 2PL. The only
difference is that strict 2PL does not release locks just after the
execution of the operations; instead, it holds all the locks and releases them when the
commit is triggered.
Concurrency and Integrity Controls
Two Phase Locking Protocol
• The Two Phase Locking Protocol, also known as the 2PL protocol, is a method of concurrency
control in DBMS that ensures serializability by applying locks to the transaction's data,
which blocks other transactions from accessing the same data simultaneously. The Two Phase
Locking protocol helps to eliminate concurrency problems in DBMS.
• This locking protocol divides the execution phase of a transaction into three different
parts.
• In the first phase, when the transaction begins to execute, it requests permission for the
locks it needs.
• The second part is where the transaction obtains all the locks. When a transaction
releases its first lock, the third phase starts.
• In this third phase, the transaction cannot demand any new locks. Instead, it only
releases the acquired locks.
Concurrency and Integrity Controls
Two Phase Locking Protocol
• The Two-Phase Locking protocol allows each transaction to make a lock or unlock
request in two steps:
• Growing Phase: In this phase transaction may obtain locks but may not release any
locks.
• Shrinking Phase: In this phase, a transaction may release locks but not obtain any
new lock
• It is true that the 2PL protocol offers serializability. However, it does not ensure that
deadlocks do not happen.
• In distributed variants of 2PL, local and global deadlock detectors
search for deadlocks and resolve them by restoring the affected transactions to their
initial states.
Concurrency and Integrity Controls
Two Phase Locking Protocol
Strict Two-Phase Locking Method
• The strict two-phase locking system is almost similar to 2PL. The only difference is that strict 2PL never releases a
lock after using it. It holds all the locks until the commit point and releases them all in one go when the
process is over.
Centralized 2PL
• In centralized 2PL, a single site is responsible for the lock management process. There is only one lock manager for the
entire DBMS.
Primary copy 2PL
• In the primary copy 2PL mechanism, many lock managers are distributed to different sites. Each lock
manager is responsible for managing the locks for a set of data items. When the primary copy has been updated,
the change is propagated to the slaves.
Distributed 2PL
• In this kind of two-phase locking mechanism, lock managers are distributed to all sites. Each is responsible for
managing locks for the data at its site. If no data is replicated, it is equivalent to primary copy 2PL. The communication
costs of distributed 2PL are quite a bit higher than those of primary copy 2PL.
Concurrency and Integrity Controls
Two Phase Locking Protocol Examples
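A minimal sketch of the two-phase discipline in illustrative Python: every lock acquisition must precede the first release.

class TwoPhaseTxn:
    def __init__(self):
        self.locks = set()
        self.shrinking = False   # flips to True at the first unlock

    def acquire(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violation: no new locks after the first unlock")
        self.locks.add(item)     # growing phase

    def release(self, item):
        self.shrinking = True    # shrinking phase begins
        self.locks.discard(item)

t = TwoPhaseTxn()
t.acquire("X")
t.acquire("Y")      # growing phase: obtain all needed locks
t.release("X")      # first release: shrinking phase begins
# t.acquire("Z") would now raise -- forbidden under 2PL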
Concurrency and Integrity Controls
Time-based Protocols
• According to this protocol, every transaction has a timestamp attached to
it. The timestamp is based on the time at which the transaction entered
the system. A read timestamp and a write timestamp are associated with every
data item, recording the time at which the latest read and write
operations on the item were performed, respectively.
• Timestamp Ordering Protocol:
• The timestamp ordering protocol uses the timestamp values of the transactions
to resolve conflicting pairs of operations, thus ensuring serializability
among transactions. The following are the denotations of the terms used to
define the protocol for a transaction A on the data item DT:
Concurrency and Integrity Controls
Time-based Protocols
• TS(A) − the timestamp of transaction A, assigned when A enters the system.
• R-timestamp(DT) − the largest timestamp of any transaction that has read the
data item DT, i.e., the latest time at which DT has been read.
• W-timestamp(DT) − the largest timestamp of any transaction that has written
DT, i.e., the latest time at which DT has been updated.
Concurrency and Integrity Controls
Time-based Protocols
Following are the rules on which the Time-ordering protocol works:
1. When transaction A is going to perform a read operation on data item
DT:
• TS(A) < W-timestamp(DT): The transaction will roll back. If the timestamp of transaction
A at which it entered the system is less than the write timestamp of DT, that
is, the latest time at which DT has been updated, then the transaction will roll back.
• TS(A) >= W-timestamp(DT): The transaction will be executed. If the timestamp of
transaction A at which it entered the system is greater than or equal to the
write timestamp of DT, that is, the latest time at which DT has been updated, then
the read operation will be executed.
• After a successful read, R-timestamp(DT) is updated to the larger of its
current value and TS(A).
Concurrency and Integrity Controls
Time-based Protocols
Following are the rules on which the Time-ordering protocol works:
2. When transaction A is going to perform a write operation on data item DT:
• TS(A) < R-timestamp(DT): The transaction will roll back. If the timestamp of transaction A at which it
entered the system is less than the read timestamp of DT, that is, the latest time at which DT has
been read, then the transaction will roll back.
• TS(A) < W-timestamp(DT): The transaction will roll back. If the timestamp of transaction A at which it
entered the system is less than the write timestamp of DT, that is, the latest time at which DT has
been updated, then the transaction will roll back.
• In all other cases, the write operation is executed and W-timestamp(DT) is set to TS(A).
• Thomas' Write Rule: This rule alters the timestamp-ordering protocol to make the
schedule view serializable. For the case TS(A) < W-timestamp(DT), the basic timestamp-
ordering protocol rolls the transaction back, but according to Thomas' Write Rule,
the outdated write operation is simply ignored.
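A sketch of these checks in illustrative Python (per-item timestamps default to 0, and Thomas' Write Rule is modeled as an optional flag):

R_ts, W_ts = {}, {}   # per-item read/write timestamps

def read_item(ts, item):
    if ts < W_ts.get(item, 0):
        return "rollback"     # TS(A) < W-timestamp(DT)
    R_ts[item] = max(R_ts.get(item, 0), ts)
    return "read executed"

def write_item(ts, item, thomas=False):
    if ts < R_ts.get(item, 0):
        return "rollback"     # TS(A) < R-timestamp(DT)
    if ts < W_ts.get(item, 0):
        # TS(A) < W-timestamp(DT): ignore under Thomas' rule, else roll back
        return "write ignored" if thomas else "rollback"
    W_ts[item] = ts
    return "write executed"

print(write_item(5, "DT"))               # write executed
print(read_item(3, "DT"))                # rollback
print(write_item(4, "DT", thomas=True))  # write ignored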
Recoverability
A database may fail due to any of the following reasons,
• System failures are caused due to hardware or software problems in
the system.
• Transaction failures occur when a particular process that deals with
the modification of data can't be completed.
• Disk crashes may be due to the inability of the system to read the
disk.
• Physical damage includes problems like power failures or natural
disasters.
Recoverability
The data recovery techniques in DBMS make sure that the state of the data
is preserved to protect the atomicity property and that the data is always
recoverable to protect the durability property. The following techniques
are used to recover data in a DBMS:
• Log-based recovery in DBMS.
• Recovery through Deferred Update
• Recovery through Immediate Update
Recoverability
Log-Based Recovery
• Every DBMS has its own system logs that record all the activity that has occurred
in the system, along with timestamps for the time of the activity. Databases maintain
different log files for activities like errors, queries, and other changes in the database.
The log is stored in files in the following formats:
• The structure [start_transaction, T] denotes the start of execution of transaction T.
• [write_item, T, X, old_value, new_value] shows that the value of the variable, X is changed
from old_value to new_value by the transaction T.
• [read_item, T, X] represents that the value of X is read by the transaction T.
• [commit, T] indicates the changes in the data are stored in the database through
a commit and can't be further modified by the transaction. There will be no error after
a commit has been made to the database.
• [abort, T] is used to show that the transaction T is aborted.
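For instance, a transfer of 100 from X to Y by a transaction T1 might produce log records like the following (a sketch in Python list form; the starting values X = 500 and Y = 200 are assumed):

log = [
    ["start_transaction", "T1"],
    ["read_item", "T1", "X"],
    ["write_item", "T1", "X", 500, 400],  # old_value 500 -> new_value 400
    ["read_item", "T1", "Y"],
    ["write_item", "T1", "Y", 200, 300],  # old_value 200 -> new_value 300
    ["commit", "T1"],
]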
Recoverability
Conceded (Deferred) Update Method
• In the conceded (deferred) update method, the updates are not applied to the data
until the transaction reaches its final phase, at the commit operation.
• After this operation is performed, the data is modified and permanently
stored. The logs are maintained throughout the
operation and are used in case of failure to find the point of failure.
• This gives us an advantage: even if the system fails before
the commit stage, the data in the database will not have been modified, so no
undo is needed. If the system fails after the commit stage, we can
redo the changes to reach the new state, which is easier than the process involved
in an undo operation.
Recoverability
Quick (Immediate) Update Method
• In the quick (immediate) update method, updates to the data are made
as the transaction proceeds, before it reaches the commit stage. The logs
are recorded as soon as the changes to the data are made.
• In the case of failure, the data may be in a partial state of the
transaction, and undo operations can be performed to restore the data.
We can also mark the state of the transaction and recover our data to
the marked state using SQL commands. The following commands are
used to achieve this:
• The SAVEPOINT command is used to save the current state of data in a
transaction; ROLLBACK TO SAVEPOINT restores the data to a saved point.
Recoverability
What is the Difference Between a Deferred Update and an Immediate
Update?
• Deferred updates and immediate updates are database recovery techniques
in DBMS that are used to maintain the transaction log files of the DBMS.
• In the Deferred update, the state of the data in the database is not changed
immediately after a transaction is executed; once the commit
has been made, the changes are recorded in the log file, and the state of
the data is changed.
• In the Immediate update, the database is updated directly during every transaction,
and a log file containing the old and new values is also maintained.
Recoverability
• Transactions can perform “dirty reads”, i.e., read data written by an uncommitted
transaction.
If, in a schedule,
• a transaction performs a dirty read from an uncommitted transaction,
and its commit operation is delayed till the uncommitted transaction either
commits or rolls back, then such a schedule is known as a Recoverable Schedule.

Here,
• The commit operation of the transaction that performs the dirty read is delayed.
• This ensures that it still has a chance to recover if the uncommitted transaction
fails later
Recoverability
Example
• Consider a schedule in which T1 writes a data item that T2 then reads
before T1 has committed.
Here,
• T2 performs a dirty read operation.
• The commit operation of T2 is delayed till T1 commits or rolls back.
• T1 commits later.
• T2 is now allowed to commit.
• Had T1 failed, T2 would have had a chance to recover by rolling
back.
Recovery
Recovery Facilities
Backup mechanism
• Periodic backup copies of the database.
• Copies both the database and the log files without stopping the system.
• Complete or incremental backups, stored offline.
Logging facilities
• Keep track of the state of transactions and database changes.
Checkpoint facility
• Enables updates to be considered permanent.
Recovery manager
• Allows the system to restore the database to a consistent state following a failure.
Recovery
Recovery Techniques
• Undoing – If a transaction crashes, then the recovery manager may undo the transaction,
i.e., reverse its operations. This involves examining the log for each
entry write_item(T, x, old_value, new_value) made by the transaction and setting the value of item x in the
database to old_value. There are two major techniques for recovery from non-
catastrophic transaction failures: deferred updates and immediate updates.
• Deferred update – This technique does not physically update the database on disk until
a transaction has reached its commit point. Before reaching commit, all transaction
updates are recorded in the local transaction workspace. If a transaction fails before
reaching its commit point, it will not have changed the database in any way so UNDO is
not needed. It may be necessary to REDO the effect of the operations that are recorded
in the local transaction workspace, because their effect may not yet have been written
in the database. Hence, a deferred update is also known as the No-undo/redo algorithm
Recovery
Recovery Techniques
• Immediate update – In the immediate update, the database may be updated by some operations of a
transaction before the transaction reaches its commit point. However, these operations are recorded in a log
on disk before they are applied to the database, making recovery still possible. If a transaction fails to reach
its commit point, the effect of its operation must be undone i.e. the transaction must be rolled back hence
we require both undo and redo. This technique is known as undo/redo algorithm.
• Caching/Buffering – In this technique, one or more disk pages that include the data items to be updated are cached into
main memory buffers and updated in memory before being written back to disk. A collection of in-
memory buffers called the DBMS cache is kept under the control of the DBMS for holding these pages. A directory
is used to keep track of which database items are in the buffers. A dirty bit is associated with each buffer,
which is 0 if the buffer has not been modified and 1 if it has.
• Shadow paging – It provides atomicity and durability. A directory with n entries is constructed, where the ith
entry points to the ith database page on disk. When a transaction begins executing, the current directory
is copied into a shadow directory. When a page is to be modified, a new page is allocated, the
changes are made there, and when they are ready to become durable, all directory entries that refer to the
original page are updated to refer to the new replacement page.
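A minimal sketch of undoing from write_item log entries, in illustrative Python (the starting state assumes an uncommitted T1 has already applied its writes):

db = {"X": 400, "Y": 300}   # state after uncommitted T1 ran
log = [
    ("start_transaction", "T1"),
    ("write_item", "T1", "X", 500, 400),
    ("write_item", "T1", "Y", 200, 300),
]

def undo(db, log, txn):
    # Scan the log backwards, restoring old_value for each write by txn.
    for rec in reversed(log):
        if rec[0] == "write_item" and rec[1] == txn:
            _, _, item, old_value, _ = rec
            db[item] = old_value

undo(db, log, "T1")
print(db)  # {'X': 500, 'Y': 200} -- T1's effects are reversed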
Database Security
• Database security refers to the collective measures used to protect and secure a database
or database management software from illegitimate use and malicious threats and attacks.
• It is a broad term that includes a multitude of processes, tools and methodologies that
ensure security within a database environment.
• Database security covers and enforces security on all aspects and components of
databases. This includes:
1. Data stored in the database
2. The database server
3. The database management system (DBMS)
4. Other database workflow applications
• Database security is generally planned, implemented, and maintained by a database
administrator and/or other information security professionals.
Database Security
• Basically, database security is any form of security used to protect
databases and the information they contain from compromise. Examples of
how stored data can be protected include:
• Software – software is used to ensure that people can’t gain access to the
database through viruses, hacking, or any similar process.
• Physical controls – an example of a physical component of database security
could be the constant monitoring of the database by company personnel to
allow them to identify any potential weaknesses and/or compromises.
• Administrative controls – this refers to things like the use of passwords,
restricting the access of certain people to certain parts of the database, or
blocking the access of some company personnel altogether.
Database Security
Why is database security important?
• Database security is more than just important: it is essential to any company with any online component.
Sufficient database security prevents data being lost or compromised, which may have serious
ramifications for the company both in terms of finances and reputation. Database security helps:
1. Companies block attacks, including ransomware and breached firewalls, which in turn keeps sensitive
information safe.
2. Prevent malware or viral infections which can corrupt data, bring down a network, and spread to all
endpoint devices.
3. Ensure that physical damage to the server doesn't result in the loss of data.
4. Prevent data loss through corruption of files or programming errors.
• As you will see, database security places an obligation on you and your business to keep sensitive data
stored correctly and used appropriately. Complying with regulations and the applicable law not only
reduces the risk of information being mishandled, but also protects you from both costly legal ramifications
and lost customer confidence. Investment in database security ensures you have done your due
diligence in terms of data protection.
Database Security Steps
Ensure Physical Database Security
• In the traditional sense this means keeping your database server in a secure, locked environment with access controls in
place to keep unauthorized people out. But it also means keeping the database on a separate physical machine, removed
from the machines running application or web servers.
Use Web Application and Database Firewalls
• Your database server should be protected from database security threats by a firewall, which denies access to traffic by
default. The only traffic allowed through should come from specific application or web servers that need to access the
data. The firewall should also protect your database from initiating outbound connections unless there is a specific need to
do so.
Harden Your Database to Fullest Extent Possible
• Clearly it's important to ensure that the database you are using is still supported by the vendor or open source project
responsible for it, and that you are running the most up-to-date version of the database software with all database
security patches installed to remove known vulnerabilities.
Encrypt Your Data
• It is standard procedure in many organizations to encrypt stored data, but it's important to ensure that backup data is also
encrypted and stored separately from the decryption keys. (Not, for example, stored in encrypted form but alongside the
keys in plaintext.) As well as encrypting data at rest, it's also important to ensure confidential data is encrypted in motion
over your network to protect against database security threats.
Database Security Steps
Minimize Value of Your Database
• Attackers can only get their hands on what is stored in a database, so ensure that you are not storing any
confidential information that doesn't need to be there. Actively manage the data so you can delete any
information that you don't need from the database. Data that must be retained for compliance or other purposes
can be moved to more secure storage, perhaps offline, which is less susceptible to database security threats.
Manage Database Access Tightly
• You should aim for the least number of people possible to have access to the database. Administrators should
have only the bare minimum privileges they need to do their job, and only during periods while they need access.
For smaller organizations this may not be practical, but at the very least permissions should be managed using
groups or roles rather than granted directly.
Audit and Monitor Database Activity
• This includes monitoring logins (and attempted logins) to the operating system and database and reviewing logs
regularly to detect anomalous activity.
• Effective monitoring should allow you to spot when an account has been compromised, when an employee is
carrying out suspicious activities or when your database is under attack. It should also help you determine if users
are sharing accounts, and alert you if accounts are created without your permission (for example, by a hacker).
