Q1) Properties of Transaction
To ensure the accuracy, completeness and integrity of data, a database system maintains the following properties of a transaction.
1. Atomicity: The atomicity property ensures that at the end of a transaction, either no changes have occurred to the database or the database has been changed in a consistent manner. At the end of a transaction, the updates made by the transaction will be accessible to other transactions and processes outside the transaction.
2. Consistency: The consistency property implies that if the database was in a consistent state before the start of a transaction, then on termination of the transaction it will also be in a consistent state. In other words, data is in a consistent state when a transaction starts and when it ends.
3. Isolation: The isolation property indicates that actions performed by a transaction are hidden from outside the transaction until the transaction terminates. Thus each transaction is unaware of the other transactions executing concurrently in the system.
4. Durability: The durability property ensures that once a transaction completes successfully (commits), the changes it has made to the database persist, even if there are system failures.
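Atomicity can be demonstrated with a short runnable sketch: a simulated two-account transfer that "crashes" halfway and is rolled back, so no partial change survives. The table, account names and amounts are invented for the example.

```python
import sqlite3

# Set up a hypothetical accounts table (names and balances are made up).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [("A", 100), ("B", 100)])
conn.commit()

try:
    conn.execute("UPDATE account SET balance = balance - 50 WHERE name = 'A'")
    raise RuntimeError("simulated crash mid-transaction")
    # This second half of the transfer is never reached:
    conn.execute("UPDATE account SET balance = balance + 50 WHERE name = 'B'")
    conn.commit()
except RuntimeError:
    conn.rollback()  # undo the partial update, restoring the consistent state

balances = dict(conn.execute("SELECT name, balance FROM account"))
print(balances)  # {'A': 100, 'B': 100} -- no partial change survived
```

Because the debit and credit form one transaction, the rollback leaves both balances untouched: the "all or nothing" guarantee described above.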
Q2) States of a transaction with diagram:
1. Active State (Initial State): This is the initial state of a transaction. A transaction is active while it is executing. A transaction always starts in the active state, and remains in it until all commands of that transaction have been executed.
2. Partially Committed: When a transaction completes its last statement (command), it enters the partially committed state.
3. Failed: If the system decides that the normal execution of the transaction can no longer proceed, the transaction is termed failed. If some failure occurs in the active or partially committed state, the transaction enters the failed state.
4. Committed: When the transaction completes its execution successfully, it enters the committed state from the partially committed state.
5. Aborted: To ensure the atomicity property, the changes made by a failed transaction are undone, i.e. the transaction is rolled back. After rollback, the transaction enters the aborted state.

Q10) Advantages and disadvantages of single-tier and two-tier architectures
Advantages of Single-Tier:
1. The data is easily and quickly available since it is located on the same machine.
Disadvantages of Single-Tier:
1. This architecture is not scalable at all.
2. Only one user can access the system at a given time through the local client.
Advantages of Two-Tier Client/Server Architecture:
1. This structure is quite easy to maintain and modify.
2. The communication between the client and server, in the form of request-response messages, is quite fast.
Disadvantages of Two-Tier Client/Server Architecture:
1. If the client nodes are increased beyond capacity, the server cannot handle the request overflow and the performance of the system degrades.
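The five transaction states in Q2 and their legal transitions can be sketched as a small state machine; the state names follow the notes, and the `run` helper is invented for the illustration.

```python
# Legal transitions between the five transaction states described in Q2.
TRANSITIONS = {
    "active": {"partially_committed", "failed"},
    "partially_committed": {"committed", "failed"},
    "failed": {"aborted"},
    "committed": set(),   # terminal state
    "aborted": set(),     # terminal state
}

def run(path):
    """Return True if a sequence of states is a legal transaction history."""
    if not path or path[0] != "active":
        return False  # a transaction always starts in the active state
    for current, nxt in zip(path, path[1:]):
        if nxt not in TRANSITIONS[current]:
            return False
    return True

print(run(["active", "partially_committed", "committed"]))  # True
print(run(["active", "failed", "aborted"]))                 # True
print(run(["active", "committed"]))  # False: must pass through partially committed
```

The last call shows why the partially committed state exists: a transaction cannot jump straight from active to committed.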
Q3) Difference between timestamping and locking
1. Timestamping is used to decide whether a transaction should wait or roll back; locking is used for concurrency control.
2. Timestamping is used for deadlock prevention; locking is used to improve performance.
3. A timestamp is a unique identifier used to identify a transaction; a lock is a variable associated with a data item that describes its status with respect to the operations that can be applied to it.
4. Timestamping methods assign timestamps to transactions and enforce serializability by ensuring that the schedule follows the transaction timestamps; locking methods prevent unserializable schedules by stopping more than one transaction from accessing the same data elements.
5. Timestamping methods may cause more transactions to abort than a locking protocol; locking methods avoid transaction aborts by blocking potentially conflicting transactions.

Q4) What are Triggers? Concept, advantages and disadvantages
1. A trigger is a set of SQL statements stored in the database catalog.
2. A trigger is a set of actions that run automatically when a specified change operation (an SQL INSERT, UPDATE, DELETE or TRUNCATE statement) is performed on a specified table or view.
3. Definition: A trigger is "a statement that is executed automatically by the system as a side effect of a modification to the database".
Concept of triggers:
a) To design a trigger mechanism, we must:
1. Specify the conditions under which the trigger is to be executed.
2. Specify the actions to be taken when the trigger executes.
b) A trigger is a special user-defined function bound to a table. To create a new trigger, we must define a trigger function first, and then bind this trigger function to a table.
c) The difference between a trigger and a user-defined function is that a trigger is invoked automatically when an event occurs, whereas a stored procedure must be called explicitly.
d) Triggering events can be insert, delete or update.
e) A trigger can be set to fire BEFORE an event occurs or AFTER an event occurs, or the event can even be bypassed by using INSTEAD OF.
f) If a trigger is configured to fire BEFORE an INSERT or UPDATE, we have the additional benefit of modifying the new data to be inserted or updated, and even of skipping the operation itself.
g) There are two types of triggers: row-level triggers and statement-level triggers.
h) A row-level trigger is fired once for each affected row.
i) A statement-level trigger is fired only once per statement.
For example, consider the statement:
UPDATE account_current SET balance = balance + 100 WHERE balance > ...
j) Suppose executing this statement affects 20 rows. If a row-level trigger is defined on the table, the trigger is fired once for each of the 20 updated rows; a statement-level trigger would fire only once.
Advantages of Using Triggers:
1. Triggers provide an alternative way to check the integrity of data.
2. Triggers can catch errors in business logic in the database layer.
3. SQL triggers provide an alternative way to run scheduled tasks: because triggers are invoked automatically before or after a change is made to the data in the tables, you do not have to wait for a scheduler.
4. Triggers are very useful for auditing changes to data in tables.
Disadvantages of Using SQL Triggers:
1. SQL triggers can only provide extended validation; they cannot replace all validations. Some simple validations still have to be done in the application layer, for example validating user input on the client side with JavaScript, or on the server side with scripting languages such as JSP, PHP, ASP.NET or Perl.
2. SQL triggers are invoked and executed invisibly to the client application, so it is difficult to figure out what happens in the database layer.
3. Triggers may increase the overhead of the database server.
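The trigger concept in Q4 can be shown with a minimal runnable sketch: a row-level AFTER INSERT trigger that audits changes automatically. It uses SQLite (via Python's sqlite3) rather than PL/pgSQL, and the `account`/`audit_log` tables are invented for the example.

```python
import sqlite3

# A row-level AFTER INSERT trigger: every insert into `account` is logged
# automatically, without the application calling anything explicitly.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER);
CREATE TABLE audit_log (account_id INTEGER, note TEXT);
CREATE TRIGGER log_insert AFTER INSERT ON account
FOR EACH ROW
BEGIN
    INSERT INTO audit_log VALUES (NEW.id, 'row inserted');
END;
""")

conn.execute("INSERT INTO account VALUES (1, 500)")
rows = conn.execute("SELECT * FROM audit_log").fetchall()
print(rows)  # [(1, 'row inserted')] -- the trigger fired automatically
```

This illustrates point c) above: the trigger body ran as a side effect of the INSERT, with no explicit call from the application.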
Q5) Checkpoint:
When a system failure occurs, some transactions need to be redone and some need to be undone. The log records can determine this, but for that we would need to search the entire log.
There are two major difficulties with this approach:
1. The search process is time consuming.
2. Most of the transactions that would be redone have already written their updates into the database, so it is better to avoid such redo operations.
To reduce these overheads, checkpoints are introduced. During execution the system maintains the log, using the immediate or deferred database modification technique. In addition, the system periodically performs checkpoints, which require the following sequence of operations:
1. Output onto stable storage all log records currently stored in main memory.
2. Output to disk all modified buffer blocks.
3. Output onto stable storage a log record <checkpoint>.

Q11) What is deadlock recovery and what are its techniques?
Deadlock recovery is the process of resolving a situation where two or more processes in a system are unable to proceed because each is waiting for the other to release resources. Deadlocks can occur in systems such as databases or operating systems when processes hold resources and simultaneously request additional resources held by other processes.
1. Process Termination: To eliminate a deadlock, we can simply kill one or more processes, using one of two methods:
a) Abort all the processes involved in the deadlock.
b) Abort one process at a time until the deadlock is eliminated.
2. Resource Preemption: To eliminate deadlocks using resource preemption, we preempt some resources from processes and give those resources to other processes. Preempting means taking resources away from a process.

Q14) Rollback
1. The system rolls back a failed transaction Ti using the log.
2. The log is scanned backward; for every log record of the form <Ti, Xj, V1, V2> found in the log, the data item Xj is restored to its old value V1.
3. The scanning of the log terminates when the log record <Ti, start> is found.
4. Scanning the log backward is important, since a transaction may have updated a data item more than once. Consider the pair of log records:
<Ti, A, 10, 20>
<Ti, A, 20, 30>
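The backward log scan described in Q14 Rollback can be sketched in a few lines: each record is `(txn, item, old_value, new_value)`, and scanning backward restores old values, ending at the oldest one. The data is the `<Ti, A, 10, 20>`, `<Ti, A, 20, 30>` pair from the notes.

```python
# Sketch of log-based rollback: scan the log backward and restore each
# data item written by the failed transaction to its old value.
def rollback(txn, log, db):
    for t, item, old, new in reversed(log):
        if t == txn:
            db[item] = old  # restore the value recorded before the write
    return db

db = {"A": 30}  # A was updated twice by Ti: 10 -> 20 -> 30
log = [("Ti", "A", 10, 20), ("Ti", "A", 20, 30)]
print(rollback("Ti", log, db))  # {'A': 10}
```

Scanning backward matters exactly as the notes say: the last restoration applied is the oldest old value (10), not the intermediate one (20).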
Q6) Cursors:
a) A PL/pgSQL cursor allows us to encapsulate a query and process each individual row at a time. We use cursors when we want to divide a large result set into parts and process each part individually; processing it all at once could cause a memory overflow error. In addition, we can develop a function that returns a reference to a cursor. This is an efficient way to return a large result set from a function: the caller of the function can process the result set via the cursor reference.
b) Cursors can be of two types: implicit cursors and explicit cursors.
1. Implicit cursors are declared and managed by PL/pgSQL for all DML and PL/pgSQL SELECT statements.
2. Explicit cursors are declared and managed by the programmer.
c) Explicit cursor operations are as follows:
Step 1: Declare a cursor.
Step 2: Open the cursor.
Step 3: Fetch rows from the result set into a target.
Step 4: Check if there are more rows left to fetch. If yes, go to step 3; otherwise go to step 5.
Step 5: Close the cursor.

Q12) Methods for database security
1. Authorization: A DBMS typically includes a database security and authorization subsystem that is responsible for ensuring the security of portions of a database against unauthorized access.
2. Access Control: The security mechanism of a DBMS must include provisions for restricting access to the database as a whole. This function is called access control. An access control mechanism is a way to control the data that is accessible to a given user. It is handled by creating user accounts and passwords to control the login process of the DBMS.
3. Statistical Database Security: This is the problem of controlling access to a statistical database, which is used to provide statistical information or summaries of values based on various criteria. The countermeasures to the statistical database security problem are called inference control measures.
4. Database Encryption Techniques: These are used to protect sensitive data (such as credit card numbers) being transmitted over some type of communication network. The data is encoded using an encoding algorithm. An unauthorized user who accesses the encoded data will have difficulty deciphering it, but authorized users are given decoding or decrypting algorithms (or keys) to decipher the data.
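The five explicit-cursor steps in Q6 map directly onto Python's DB-API cursor, sketched here with sqlite3 and `fetchmany()` to process a large result set in parts; the table and batch size are invented for the example.

```python
import sqlite3

# Sample data: a table of the integers 0..9 (invented for the sketch).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

cur = conn.cursor()               # Step 1: declare a cursor
cur.execute("SELECT n FROM t")    # Step 2: open it by running the query
total = 0
while True:
    batch = cur.fetchmany(4)      # Step 3: fetch rows into a target
    if not batch:                 # Step 4: any rows left? if not, stop
        break
    total += sum(n for (n,) in batch)
cur.close()                       # Step 5: close the cursor
print(total)  # 45
```

Fetching in batches of 4 keeps only a few rows in memory at a time, which is exactly the memory-overflow argument made in point a).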
Q13) Types of failure
1. Transaction Failure: There are two types of errors that may lead to transaction failure:
(i) System Error: The system has entered an undesirable state, as a result of which a transaction cannot continue with its normal execution. The transaction can, however, be re-executed at a later time.
(ii) Logical Error: The transaction can no longer continue with its normal execution owing to some internal condition, such as data not found, bad input, or a resource limit being exceeded.
2. System Crash:
a) A hardware fault, or a bug in the database software or the operating system, causes the loss of the contents of volatile storage and brings transaction processing to a halt.
b) The contents of non-volatile storage remain undamaged and uncorrupted.
3. Disk Failure:
a) In a disk failure, a disk block loses its contents, for example as the result of a head crash during a data transfer operation. Copies of the data on other disks, or on tertiary media such as tapes, are used to recover from the failure.
b) To determine how the system should recover from a failure or crash, we need to identify the failure modes of the devices used for storing data.

Q7) Two-phase locking protocol
A locking protocol is a set of rules followed by all transactions while requesting and releasing locks. The 2PL protocol requires that each transaction issue its lock and unlock requests in two phases:
1. Growing Phase: A transaction may obtain locks, but may not release any lock.
2. Shrinking Phase: A transaction may release locks, but may not obtain any new locks.
Initially the transaction is in the growing phase, acquiring locks as needed. Once the transaction releases a lock, it enters the shrinking phase and can issue no more lock requests.
The point in the schedule where the transaction obtains its final lock (the end of its growing phase) is called the lock point of the transaction. Transactions can be ordered according to their lock points, and this ordering gives a serializability ordering for the transactions. The resulting serial schedule is conflict equivalent, i.e. the two-phase locking protocol ensures conflict serializability.
Example: The following two transactions, T3 and T4, are two-phase transactions.
T3:
lock-X(B);
read(B);
B := B - 50;
write(B);
lock-X(A);
read(A);
A := A + 50;
write(A);
unlock(B);
unlock(A).
T4:
lock-S(A);
read(A);
lock-S(B);
read(B);
display(A + B);
unlock(A);
unlock(B).
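The two-phase rule can be sketched as a small class that tracks its phase and refuses lock requests after the first unlock; the class and method names are invented for the illustration (real lock managers also handle lock modes and waiting).

```python
# Sketch of the 2PL rule: once a transaction releases any lock (entering
# the shrinking phase), it may not acquire new locks.
class TwoPhaseTxn:
    def __init__(self):
        self.held = set()
        self.shrinking = False  # becomes True at the first unlock

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violation: cannot lock in shrinking phase")
        self.held.add(item)     # growing phase: acquire as needed

    def unlock(self, item):
        self.shrinking = True   # the growing phase ends here
        self.held.discard(item)

t = TwoPhaseTxn()
t.lock("A")
t.lock("B")      # growing phase
t.unlock("A")    # shrinking phase begins
try:
    t.lock("C")  # illegal under 2PL
except RuntimeError as e:
    print(e)     # 2PL violation: cannot lock in shrinking phase
```

In the T3/T4 example above, both transactions obey this rule: every lock request precedes the first unlock.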
Q14) Concept of log
a) The most widely used structure for recording database modifications is the log.
b) It is a sequence of log records, and it maintains a record of all the update activities in the database.
c) There are several types of log records to record significant events during transaction processing.
d) An update log record describes a single database write. It has the following fields:
1. Transaction identifier: the unique identifier of the transaction that performed the write operation.
2. Data-item identifier: the unique identifier of the data item written. Typically it is the location of the data item on disk.
3. Old value: the value of the data item prior to the write.
4. New value: the value that the data item will have after the write.
Other special log records exist to record significant events during transaction processing, such as the start of a transaction and the commit or abort of a transaction.

Q15) ARIES algorithm
ARIES (Algorithms for Recovery and Isolation Exploiting Semantics) uses a steal/no-force approach for writing, and it is based on the following three concepts:
1. Write-Ahead Logging: Any change to an object is first recorded in the log, and the log must be written to stable storage before the changes to the object are written to disk.
2. Repeating History During Redo: ARIES retraces all actions of the database system prior to the crash to reconstruct the database state at the moment the crash occurred. Transactions that were uncommitted at the time of the crash (active transactions) are undone.
3. Logging Changes During Undo: This prevents ARIES from repeating completed undo operations if a failure occurs during recovery, which would cause a restart of the recovery process.

Q16) Relationship between recovery management and buffer management
1. A DBMS application program requires input/output (I/O) operations, which are performed by a component of the operating system. These I/O operations normally use buffers.
2. The recovery management system of the DBMS is responsible for recovery from hardware or software failures.
3. The recovery manager ensures that the database remains in a consistent state in the presence of failures. It is responsible for transaction commit and abort operations, for maintaining a log, and for restoring the system to a consistent state after a crash.
4. Buffer management effectively provides a temporary copy of a database page. It is therefore used in database recovery systems: modifications are made in this temporary copy while the original page remains unchanged in secondary storage.

Q17) Thomas Write Rule
1. The Thomas Write Rule is a modification of basic timestamp ordering, in which the rules for write operations differ slightly from those of basic timestamp ordering.
2. It can be summarized as "ignore outdated writes".
3. It states that if a more recent transaction has already written the value of an object, then a less recent transaction does not need to perform its own write, since it would eventually be overwritten by the more recent one.

Q17) Recoverable Schedule
1. For recoverability, we need to address the effect of transaction failures on concurrently running transactions.
2. Once a transaction T is committed, it should never be necessary to roll back T.
3. Schedules that meet this condition are called recoverable schedules, and those that do not are called non-recoverable.
4. In other words, a recoverable schedule is one where, for each pair of transactions Ti and Tj such that Tj reads a data item previously written by Ti, the commit operation of Ti appears before the commit operation of Tj. Otherwise, the schedule is non-recoverable.

Q18) Cascadeless Schedule
1. Even if a schedule is recoverable, recovering correctly from the failure of a transaction Ti may require rolling back several other transactions.
2. Suppose transaction T10 writes a value of A that is read by transaction T11, and T11 writes a value of A that is read by transaction T12. Suppose that at this point transaction T10 fails. T10 must be rolled back. Since T11 is dependent on T10, T11 must be rolled back; and since T12 is dependent on T11, T12 must be rolled back.
3. This phenomenon, in which a single transaction failure leads to a series of transaction rollbacks, is called cascading rollback.
4. A cascadeless schedule is one where, for each pair of transactions Ti and Tj such that Tj reads a data item previously written by Ti, the commit operation of Ti appears before the read operation of Tj.
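The recoverable-schedule condition (commit of the writer before commit of the reader) can be tested mechanically. The sketch below is an invented checker over a schedule given as `(txn, op, item)` triples; it tracks read-from dependencies and then verifies the commit order.

```python
# Sketch: a schedule is recoverable if, whenever Tj reads an item last
# written by Ti, Ti commits before Tj commits.
def is_recoverable(schedule):
    last_writer = {}   # item -> transaction that last wrote it
    reads_from = set() # (writer, reader) dependencies
    committed = []     # commit order
    for txn, op, item in schedule:
        if op == "write":
            last_writer[item] = txn
        elif op == "read" and item in last_writer and last_writer[item] != txn:
            reads_from.add((last_writer[item], txn))
        elif op == "commit":
            committed.append(txn)
    for writer, reader in reads_from:
        if reader in committed and (
            writer not in committed
            or committed.index(writer) > committed.index(reader)
        ):
            return False  # the reader committed before the writer
    return True

# T2 reads A written by T1 but commits first: non-recoverable.
bad = [("T1", "write", "A"), ("T2", "read", "A"),
       ("T2", "commit", None), ("T1", "commit", None)]
good = [("T1", "write", "A"), ("T2", "read", "A"),
        ("T1", "commit", None), ("T2", "commit", None)]
print(is_recoverable(bad), is_recoverable(good))  # False True
```

The `bad` schedule is exactly the danger described above: if T1 later fails, the already-committed T2 would have to be rolled back, which is never allowed.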

Q19) How is the DBA responsible for the security of database systems?
1. User Management: DBAs create, manage, and deactivate user accounts, ensuring only authorized individuals can access the database.
2. Data Encryption: DBAs implement encryption techniques to protect sensitive data both at rest and in transit. This involves encrypting data stored on disks and encrypting data during transmission over networks.
3. Vulnerability Assessment: DBAs conduct regular security audits to identify and address vulnerabilities in the database system. This includes scanning for weak configurations, outdated software, and potential security loopholes.
4. Patch Management: They apply security patches and updates promptly to address known vulnerabilities and improve the security posture.
5. Network Segmentation: DBAs may work with network administrators to segment the database network, limiting access to authorized users and systems.
6. Firewall Configuration: They configure firewalls to block unauthorized access to the database server.
7. Regular Backups: DBAs implement regular backup procedures to protect data from loss or corruption.
8. Disaster Recovery Planning: They develop and test disaster recovery plans to ensure data can be restored in case of a security breach or other disaster.
9. Activity Monitoring: DBAs monitor database activity to detect unusual patterns or suspicious behavior that may indicate a security threat.

Q20)Shadow paging
Shadow paging is a recovery technique used in database systems to ensure data consistency and recoverability. It is a copy-on-write approach that avoids in-place updates to database pages.
Advantages of Shadow Paging:
Simplicity: It's a relatively simple technique to implement.
Increased Concurrency: Multiple transactions can be executed concurrently without interfering with each other.
Fast Recovery: In case of a system crash, recovery is straightforward as the database can be rolled back to the
state of the last committed transaction.
Reduced Write Amplification: By avoiding unnecessary writes to disk, shadow paging can improve performance.
Disadvantages of Shadow Paging:
Storage Overhead: It requires additional storage space to store shadow pages.
Performance Overhead: Creating and managing shadow pages can introduce overhead, especially for frequent
updates.
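The copy-on-write idea behind shadow paging can be sketched as a page table that is copied at transaction start; commit swaps the table pointer, and abort simply discards the copy. The class and its page contents are invented for the illustration.

```python
# Copy-on-write sketch of shadow paging: writes go to a shadow page table,
# so the current (committed) pages are never updated in place.
class ShadowPagedDB:
    def __init__(self, pages):
        self.current = dict(pages)  # committed page table: page_id -> content
        self.shadow = None

    def begin(self):
        self.shadow = dict(self.current)  # copy the page table, not the pages

    def write(self, page_id, content):
        self.shadow[page_id] = content    # new page; the original is untouched

    def commit(self):
        self.current = self.shadow        # atomic pointer swap
        self.shadow = None

    def abort(self):
        self.shadow = None                # recovery: just keep the old table

db = ShadowPagedDB({1: "old"})
db.begin(); db.write(1, "new"); db.abort()
print(db.current[1])  # old -- the aborted change never touched committed state
db.begin(); db.write(1, "new"); db.commit()
print(db.current[1])  # new
```

This is why recovery is fast, as noted above: after a crash before commit, the old page table is still intact, so there is nothing to undo.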

Q21) Types of Distributed Databases:


There are two primary types of distributed databases:
1. Homogeneous Distributed Databases
* Definition: A network of two or more databases that utilize the same DBMS software. These databases can
be stored on one or more machines.
* Key Features:
* Data can be accessed and modified simultaneously across multiple databases within the network.
* They are generally easier to manage due to their uniformity.
    * Example: A company with three departments, each using Oracle 9i as the DBMS. If one department makes
changes, the other departments see the updated data automatically.
2. Heterogeneous Distributed Databases
* Definition: A network of two or more databases that use different DBMS software. These databases can
also be stored on one or more machines.
* Key Features:
* Data can be accessed across various databases in the network using generic connectivity standards like
ODBC and JDBC.
* They are more complex to manage due to the diversity of DBMS systems involved.
