

Distributed Database
Management Systems
Distributed DBMS
Outline
B Introduction
B Distributed DBMS Architecture
B Distributed Database Design
B Distributed Query Processing
B Distributed Concurrency Control
B Distributed Reliability Protocols
Distributed DBMS
Outline
O Introduction
= What is a distributed DBMS
= Problems
= Current state-of-affairs
O Distributed DBMS Architecture
O Distributed Database Design
O Distributed Query Processing
O Distributed Concurrency Control
O Distributed Reliability Protocols
Distributed DBMS
Motivation
[Figure: distributed database systems result from combining two technologies: database technology (integration and centralized control of data) and computer network technology (distribution).]
Distributed DBMS
What is a Distributed Database
System?
A distributed database (DDB) is a collection of multiple,
logically interrelated databases distributed over a
computer network.
A distributed database management system (DDBMS) is
the software that manages the DDB and provides an
access mechanism that makes this distribution
transparent to the users.
Distributed database system (DDBS) = DDB + DDBMS
Distributed DBMS
Centralized DBMS on Network
[Figure: Sites 1-5 connected by a communication network; the DBMS and the data reside at a single site and are accessed remotely from the others.]
Distributed DBMS
Distributed DBMS Environment
[Figure: Sites 1-5 connected by a communication network; data and DBMS functionality are distributed across the sites.]
Distributed DBMS
Implicit Assumptions
• Data stored at a number of sites; each site logically consists of a single processor.
• Processors at different sites are interconnected by a computer network (no multiprocessors)
  - contrast with parallel database systems
• A distributed database is a database, not a collection of files; the data is logically related, as exhibited in the users' access patterns
  - relational data model
• A D-DBMS is a full-fledged DBMS
  - not a remote file system, not a TP system
Distributed DBMS
Distributed DBMS Promises
0 Transparent management of distributed,
fragmented, and replicated data
O Improved reliability/availability through
distributed transactions
O Improved performance
O Easier and more economical system expansion
Distributed DBMS
Transparency
B Transparency is the separation of the higher
level semantics of a system from the lower level
implementation issues.
B Fundamental issue is to provide
data independence
in the distributed environment
= Network (distribution) transparency
= Replication transparency
= Fragmentation transparency
+horizontal fragmentation: selection
+vertical fragmentation: projection
+hybrid
Distributed DBMS
Example
EMP
ENO   ENAME       TITLE
E1    J. Doe      Elect. Eng.
E2    M. Smith    Syst. Anal.
E3    A. Lee      Mech. Eng.
E4    J. Miller   Programmer
E5    B. Casey    Syst. Anal.
E6    L. Chu      Elect. Eng.
E7    R. Davis    Mech. Eng.
E8    J. Jones    Syst. Anal.

PAY
TITLE        SAL
Elect. Eng.  40000
Syst. Anal.  34000
Mech. Eng.   27000
Programmer   24000

PROJ
PNO   PNAME              BUDGET
P1    Instrumentation    150000
P2    Database Develop.  135000
P3    CAD/CAM            250000
P4    Maintenance        310000

ASG
ENO   PNO   RESP         DUR
E1    P1    Manager      12
E2    P1    Analyst      24
E2    P2    Analyst      6
E3    P3    Consultant   10
E3    P4    Engineer     48
E4    P2    Programmer   18
E5    P2    Manager      24
E6    P4    Manager      48
E7    P3    Engineer     36
E7    P5    Engineer     23
E8    P3    Manager      40
Distributed DBMS
Transparent Access
SELECT ENAME, SAL
FROM   EMP, ASG, PAY
WHERE  DUR > 12
AND    EMP.ENO = ASG.ENO
AND    PAY.TITLE = EMP.TITLE

[Figure: Boston, Montreal, Paris, New York and Tokyo sites connected by a communication network; each site stores fragments of the database (e.g. Paris holds Paris employees, Paris projects, Paris assignments and Boston employees; Montreal holds Montreal employees, Montreal projects, Montreal assignments, Paris projects and the New York projects with budget > 200000; Boston holds Boston employees, Boston projects and Boston assignments; New York holds New York employees, New York projects, New York assignments and Boston projects). The query above is written as if all the data were at one site.]
Distributed DBMS
Distributed Database: User View
[Figure: the user sees a single, logically integrated distributed database.]
Distributed DBMS
Distributed DBMS - Reality
[Figure: in reality, DBMS software runs at several sites connected by a communication subsystem; user queries and user applications at different sites each access the system through their local DBMS software.]
Distributed DBMS
Potentially Improved
Performance
B Proximity of data to its points of use
= Requires some support for fragmentation and replication
B Parallelism in execution
= Inter-query parallelism
= Intra-query parallelism
Distributed DBMS
Parallelism Requirements
B Have as much of the data required by each
application at the site where the application
executes
= Full replication
B How about updates?
= Updates to replicated data require implementation of
distributed concurrency control and commit protocols
Distributed DBMS
System Expansion
B Issue is database scaling
B Emergence of microprocessor and workstation
technologies
= Demise of Grosch's law
= Client-server model of computing
B Data communication cost vs telecommunication
cost
Distributed DBMS
Distributed DBMS Issues
B Distributed Database Design
= how to distribute the database
= replicated & non-replicated database distribution
= a related problem in directory management
B Query Processing
= convert user transactions to data manipulation instructions
= optimization problem
= min{cost = data transmission + local processing}
= general formulation is NP-hard
Distributed DBMS
Distributed DBMS Issues
B Concurrency Control
= synchronization of concurrent accesses
= consistency and isolation of transactions' effects
= deadlock management
B Reliability
= how to make the system resilient to failures
= atomicity and durability
Distributed DBMS
Relationship Between Issues
[Figure: the design issues are interrelated: distribution design, directory management, query processing, concurrency control, deadlock management and reliability all influence one another.]
Distributed DBMS
Outline
B Introduction
O Distributed DBMS Architecture
=Implementation Alternatives
=Component Architecture
O Distributed Database Design
O Distributed Query Processing
O Distributed Concurrency Control
O Distributed Reliability Protocols
Distributed DBMS
DBMS Implementation
Alternatives
[Figure: a design space with three axes, distribution, heterogeneity and autonomy; points in the space include client/server systems, peer-to-peer distributed DBMSs, distributed multi-DBMSs, federated DBMSs and multi-DBMSs.]
Distributed DBMS
Dimensions of the Problem
B Distribution
= Whether the components of the system are located on the same
machine or not
B Heterogeneity
= Various levels (hardware, communications, operating system)
= DBMS important one
+ data model, query language, transaction management algorithms
B Autonomy
= Not well understood and most troublesome
= Various versions
+ Design autonomy: Ability of a component DBMS to decide on
issues related to its own design.
+ Communication autonomy: Ability of a component DBMS to
decide whether and how to communicate with other DBMSs.
+ Execution autonomy: Ability of a component DBMS to execute
local operations in any manner it wants to.
Distributed DBMS
Datalogical Distributed
DBMS Architecture
[Figure: external schemas ES1 ... ESn are defined on top of a single global conceptual schema (GCS); the GCS maps to local conceptual schemas LCS1 ... LCSn, each of which maps to a local internal schema LIS1 ... LISn.]
Distributed DBMS
Datalogical Multi-DBMS
Architecture
[Figure: global external schemas GES1 ... GESn are defined over a global conceptual schema (GCS); each component database additionally keeps its own local external schemas LES11 ... LESnm over its local conceptual schema LCS1 ... LCSn, which maps to its local internal schema LIS1 ... LISn.]
Distributed DBMS
Client/Server
[Figure: multiple clients / single server. Each client machine runs applications and client services on top of a communications layer; high-level requests travel over the LAN to the server's communications layer and DBMS services, and only the filtered data is returned to the clients. The database resides at the server.]
Distributed DBMS
Task Distribution
[Figure: at the client, the application uses the QL interface or the programmatic interface on top of a communications manager; SQL queries travel to the server, whose communications manager hands them to the query optimizer, lock manager, storage manager and page & cache manager, and the result table is sent back. The database is stored at the server.]
Distributed DBMS
Advantages of Client-
Server Architectures
B More efficient division of labor
B Horizontal and vertical scaling of resources
B Better price/performance on client machines
B Ability to use familiar tools on client machines
B Client access to remote data (via standards)
B Full DBMS functionality provided to client
workstations
B Overall better system price/performance
Distributed DBMS
Problems With Multiple-
Client/Single Server
B Server forms bottleneck
B Server forms single point of failure
B Database scaling difficult
Distributed DBMS
Multiple Clients/Multiple Servers
[Figure: several client machines and several server machines on a LAN, each server managing its own database. Handling requests in this setting raises issues of directory management, caching, query decomposition and commit protocols.]
Distributed DBMS
Server-to-Server
[Figure: the servers communicate directly with one another over the LAN; a client (with its SQL interface, programmatic interface and other application support environments) connects to one server, which cooperates with the other servers on its behalf. Each server manages its own database.]
Distributed DBMS
Peer-to-Peer
Component Architecture
[Figure: peer-to-peer component architecture. The USER PROCESSOR receives user requests and returns system responses; it contains the user interface handler (working against the external schema), the semantic data controller and the global query optimizer (working against the global conceptual schema and the global directory/dictionary, GD/D), and the global execution monitor (which writes the system log). The DATA PROCESSOR contains the local query processor (against the local conceptual schema), the local recovery manager and the runtime support processor (against the local internal schema), which accesses the stored database.]
Distributed DBMS
Outline
B Introduction
B Distributed DBMS Architecture
O Distributed Database Design
= Fragmentation
= Data Placement
O Distributed Query Processing
O Distributed Concurrency Control
O Distributed Reliability Protocols
Distributed DBMS
Design Problem
B In the general setting :
= Making decisions about the placement of data and
programs across the sites of a computer network as well as
possibly designing the network itself.
B In Distributed DBMS, the placement of
applications entails
= placement of the distributed DBMS software; and
= placement of the applications that run on the database
Distributed DBMS
Distribution Design
B Top-down
= mostly in designing systems from scratch
= mostly in homogeneous systems
B Bottom-up
= when the databases already exist at a number of sites
Distributed DBMS
Top-Down Design
[Figure: top-down design process. Requirements analysis produces the system objectives; conceptual design together with view design and view integration (both driven by user input) yields the global conceptual schema (GCS), the external schemas (ESs) and access information; distribution design maps the GCS to local conceptual schemas (LCSs); physical design then produces the local internal schemas (LISs).]
Distributed DBMS
Distribution Design
B Fragmentation
= Localize access
= Horizontal fragmentation
= Vertical fragmentation
= Hybrid fragmentation
B Distribution
= Placement of fragments on nodes of a network
Distributed DBMS
Horizontal Fragmentation

PROJ
PNO   PNAME              BUDGET   LOC
P1    Instrumentation    150000   Montreal
P2    Database Develop.  135000   New York
P3    CAD/CAM            250000   New York
P4    Maintenance        310000   Paris
P5    CAD/CAM            500000   Boston

PROJ1: projects with budgets less than $200,000
PNO   PNAME              BUDGET   LOC
P1    Instrumentation    150000   Montreal
P2    Database Develop.  135000   New York

PROJ2: projects with budgets greater than or equal to $200,000
PNO   PNAME              BUDGET   LOC
P3    CAD/CAM            250000   New York
P4    Maintenance        310000   Paris
P5    CAD/CAM            500000   Boston
Distributed DBMS
Vertical Fragmentation
PROJ
PNO   PNAME              BUDGET   LOC
P1    Instrumentation    150000   Montreal
P2    Database Develop.  135000   New York
P3    CAD/CAM            250000   New York
P4    Maintenance        310000   Paris
P5    CAD/CAM            500000   Boston

PROJ1: information about project budgets
PNO   BUDGET
P1    150000
P2    135000
P3    250000
P4    310000
P5    500000

PROJ2: information about project names and locations
PNO   PNAME              LOC
P1    Instrumentation    Montreal
P2    Database Develop.  New York
P3    CAD/CAM            New York
P4    Maintenance        Paris
P5    CAD/CAM            Boston
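A minimal Python sketch of the two fragmentations above, using the PROJ rows from the example as plain dictionaries (fragment and variable names are only for illustration):

# Horizontal and vertical fragmentation of PROJ(PNO, PNAME, BUDGET, LOC).
PROJ = [
    {"PNO": "P1", "PNAME": "Instrumentation",   "BUDGET": 150000, "LOC": "Montreal"},
    {"PNO": "P2", "PNAME": "Database Develop.", "BUDGET": 135000, "LOC": "New York"},
    {"PNO": "P3", "PNAME": "CAD/CAM",           "BUDGET": 250000, "LOC": "New York"},
    {"PNO": "P4", "PNAME": "Maintenance",       "BUDGET": 310000, "LOC": "Paris"},
    {"PNO": "P5", "PNAME": "CAD/CAM",           "BUDGET": 500000, "LOC": "Boston"},
]

# Horizontal fragmentation: a selection on BUDGET splits the tuples.
PROJ1_h = [t for t in PROJ if t["BUDGET"] < 200000]
PROJ2_h = [t for t in PROJ if t["BUDGET"] >= 200000]

# Vertical fragmentation: projections on attribute subsets (the key PNO is kept in both).
PROJ1_v = [{"PNO": t["PNO"], "BUDGET": t["BUDGET"]} for t in PROJ]
PROJ2_v = [{"PNO": t["PNO"], "PNAME": t["PNAME"], "LOC": t["LOC"]} for t in PROJ]

print(len(PROJ1_h), len(PROJ2_h))   # 2 3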
Distributed DBMS
Correctness of Fragmentation
• Completeness
  - The decomposition of relation R into fragments R1, R2, ..., Rn is complete iff each data item in R can also be found in some Ri.
• Reconstruction
  - If relation R is decomposed into fragments R1, R2, ..., Rn, then there should exist some relational operator ∇ such that R = ∇(1 ≤ i ≤ n) Ri.
• Disjointness
  - If relation R is decomposed into fragments R1, R2, ..., Rn, and data item di is in Rj, then di should not be in any other fragment Rk (k ≠ j).
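A small sketch, assuming the horizontal fragmentation of PROJ by budget shown earlier, that tests the three rules (only the key and BUDGET are carried, for brevity):

# Checking completeness, reconstruction and disjointness of a horizontal fragmentation.
PROJ  = {"P1": 150000, "P2": 135000, "P3": 250000, "P4": 310000, "P5": 500000}
PROJ1 = {k: b for k, b in PROJ.items() if b < 200000}
PROJ2 = {k: b for k, b in PROJ.items() if b >= 200000}

complete      = set(PROJ) == set(PROJ1) | set(PROJ2)   # every item is in some fragment
reconstructed = {**PROJ1, **PROJ2} == PROJ              # R = PROJ1 union PROJ2
disjoint      = not (set(PROJ1) & set(PROJ2))           # no item appears in two fragments

print(complete, reconstructed, disjoint)   # True True True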
Distributed DBMS
Allocation Alternatives
B Non-replicated
= partitioned : each fragment resides at only one site
B Replicated
= fully replicated : each fragment at each site
= partially replicated : each fragment at some of the sites
B Rule of thumb:
If (read-only queries / update queries) ≥ 1, replication is advantageous;
otherwise replication may cause problems.
Distributed DBMS
Fragment Allocation
B Problem Statement
= Given
+ F = {F1, F2, ..., Fn} fragments
+ S = {S1, S2, ..., Sm} network sites
+ Q = {q1, q2, ..., qq} applications
= Find the "optimal" distribution of F to S.
B Optimality
= Minimal cost
+ Communication + storage + processing (read & update)
+ Cost in terms of time (usually)
= Performance
+ Response time and/or throughput
= Constraints
+ Per site constraints (storage & processing)
Distributed DBMS
Allocation Model

General form:
  min(Total Cost)
  subject to
    response time constraint
    storage constraint
    processing constraint

Decision variable:
  x_ij = 1 if fragment F_i is stored at site S_j,
         0 otherwise
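A toy sketch of this kind of allocation problem, not the model of any particular system: fragment sizes, site capacities and access frequencies are all assumed values, and the only cost counted is the number of remote accesses. It enumerates every assignment x_ij and keeps the cheapest one that respects the storage constraint:

from itertools import product

fragments = {"F1": 50, "F2": 30, "F3": 20}            # fragment sizes (assumed)
sites     = {"S1": 60, "S2": 60}                       # storage capacity per site (assumed)
access    = [("F1", "S1", 10), ("F2", "S2", 5), ("F3", "S1", 3)]   # (fragment, issuing site, frequency)

def cost(assign):                                      # assign: fragment -> site
    # Count only accesses that have to cross the network.
    return sum(freq for frag, site, freq in access if assign[frag] != site)

best = None
for combo in product(sites, repeat=len(fragments)):    # every possible placement
    assign = dict(zip(fragments, combo))
    used = {s: 0 for s in sites}
    for frag, site in assign.items():
        used[site] += fragments[frag]
    if any(used[s] > sites[s] for s in sites):         # storage constraint violated
        continue
    c = cost(assign)
    if best is None or c < best[0]:
        best = (c, assign)

print(best)   # cheapest feasible placement and its cost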



Distributed DBMS
Outline
B Introduction
B Distributed DBMS Architecture
B Distributed Database Design
O Distributed Query Processing
= Query Processing Methodology
= Distributed Query Optimization
O Distributed Concurrency Control
O Distributed Reliability Protocols
Distributed DBMS
Query Processing
high-level user query → query processor → low-level data manipulation commands
Distributed DBMS
Query Processing Components
B Query language that is used
= SQL: intergalactic dataspeak
B Query execution methodology
= The steps that one goes through in executing high-level
(declarative) user queries.
B Query optimization
= How do we determine the best execution plan?
Distributed DBMS
Selecting Alternatives

SELECT ENAME
FROM   EMP, ASG
WHERE  EMP.ENO = ASG.ENO
AND    DUR > 37

Strategy 1:
  Π_{ENAME}(σ_{DUR>37 ∧ EMP.ENO=ASG.ENO}(EMP × ASG))

Strategy 2:
  Π_{ENAME}(EMP ⋈_{ENO} σ_{DUR>37}(ASG))

Strategy 2 avoids the Cartesian product, so it is "better".
Distributed DBMS
What is the Problem?
Assume EMP and ASG are horizontally fragmented as follows:
  EMP1 = σ_{ENO≤"E3"}(EMP)    EMP2 = σ_{ENO>"E3"}(EMP)
  ASG1 = σ_{ENO≤"E3"}(ASG)    ASG2 = σ_{ENO>"E3"}(ASG)
with EMP1 at Site 1, EMP2 at Site 2, ASG1 and ASG2 at Sites 3 and 4, and the result required at Site 5.

Strategy 1 (execute where the data is):
  Sites 3 and 4: ASG1' = σ_{DUR>37}(ASG1),  ASG2' = σ_{DUR>37}(ASG2)
  ASG1' → Site 1,  ASG2' → Site 2
  Site 1: EMP1' = EMP1 ⋈_{ENO} ASG1'    Site 2: EMP2' = EMP2 ⋈_{ENO} ASG2'
  EMP1' → Site 5,  EMP2' → Site 5
  Site 5: result = EMP1' ∪ EMP2'

Strategy 2 (ship everything to the result site):
  EMP1, EMP2, ASG1, ASG2 → Site 5
  Site 5: result = (EMP1 ∪ EMP2) ⋈_{ENO} σ_{DUR>37}(ASG1 ∪ ASG2)
Distributed DBMS
Cost of Alternatives
• Assume:
  - size(EMP) = 400, size(ASG) = 1000
  - tuple access cost = 1 unit; tuple transfer cost = 10 units
  - 20 tuples of ASG satisfy DUR > 37 (10 in each fragment), each joining with one EMP tuple
• Strategy 1
  1. produce ASG': (10+10) × tuple access cost = 20
  2. transfer ASG' to the sites of EMP: (10+10) × tuple transfer cost = 200
  3. produce EMP': (10+10) × tuple access cost × 2 = 40
  4. transfer EMP' to the result site: (10+10) × tuple transfer cost = 200
  Total cost = 460
• Strategy 2
  1. transfer EMP to Site 5: 400 × tuple transfer cost = 4,000
  2. transfer ASG to Site 5: 1,000 × tuple transfer cost = 10,000
  3. produce ASG': 1,000 × tuple access cost = 1,000
  4. join EMP and ASG': 400 × 20 × tuple access cost = 8,000
  Total cost = 23,000
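The same arithmetic as a small Python check, with the unit costs and cardinalities taken from the assumptions above:

access, transfer = 1, 10   # tuple access cost, tuple transfer cost

# Strategy 1 (distributed execution)
s1 = ((10 + 10) * access        # produce ASG' at sites 3 and 4
    + (10 + 10) * transfer      # ship ASG' to the EMP sites
    + (10 + 10) * access * 2    # produce EMP' (join) at sites 1 and 2
    + (10 + 10) * transfer)     # ship EMP' to the result site

# Strategy 2 (ship everything to site 5)
s2 = (400 * transfer            # ship EMP
    + 1000 * transfer           # ship ASG
    + 1000 * access             # select ASG'
    + 400 * 20 * access)        # join EMP with ASG'

print(s1, s2)   # 460 23000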
Distributed DBMS
Minimize a cost function
I/O cost + CPU cost + communication cost
These might have different weights in different
distributed environments
Wide area networks
= communication cost will dominate
+ low bandwidth
+ low speed
+ high protocol overhead
= most algorithms ignore all other cost components
Local area networks
= communication cost not that dominant
= total cost function should be considered
Can also maximize throughput
Query Optimization Objectives
Distributed DBMS
Query Optimization Issues
Types of Optimizers
B Exhaustive search
= cost-based
= optimal
= combinatorial complexity in the number of relations
B Heuristics
= not optimal
= regroup common sub-expressions
= perform selection, projection first
= replace a join by a series of semijoins
= reorder operations to reduce intermediate relation size
= optimize individual operations
Distributed DBMS
Query Optimization Issues
Optimization Granularity
B Single query at a time
= cannot use common intermediate results
B Multiple queries at a time
= efficient if many similar queries
= decision space is much larger
Distributed DBMS
Query Optimization Issues
Optimization Timing
B Static
= compilation = optimize prior to the execution
= difficult to estimate the size of the intermediate results =
error propagation
= can amortize over many executions
= R*
B Dynamic
= run time optimization
= exact information on the intermediate relation sizes
= have to reoptimize for multiple executions
= Distributed INGRES
B Hybrid
= compile using a static algorithm
= if the error in estimate sizes > threshold, reoptimize at run
time
= MERMAID
Distributed DBMS
Query Optimization Issues
Statistics
B Relation
= cardinality
= size of a tuple
= fraction of tuples participating in a join with another relation
B Attribute
= cardinality of domain
= actual number of distinct values
B Common assumptions
= independence between different attribute values
= uniform distribution of attribute values within their domain
Distributed DBMS
Query Optimization
Issues Decision Sites
B Centralized
= single site determines the best schedule
= simple
= need knowledge about the entire distributed database
B Distributed
= cooperation among sites to determine the schedule
= need only local information
= cost of cooperation
B Hybrid
= one site determines the global schedule
= each site optimizes the local subqueries
Distributed DBMS
Query Optimization Issues
Network Topology
B Wide area networks (WAN) point-to-point
= characteristics
+ low bandwidth
+ low speed
+ high protocol overhead
= communication cost will dominate; ignore all other cost
factors
= global schedule to minimize communication cost
= local schedules according to centralized query optimization
B Local area networks (LAN)
= communication cost not that dominant
= total cost function should be considered
= broadcasting can be exploited (joins)
= special algorithms exist for star networks
Distributed DBMS
Distributed Query Processing
Methodology
[Figure: query processing methodology. At the CONTROL SITE, a calculus query on distributed relations goes through query decomposition (using the global schema) to give an algebraic query on distributed relations, then through data localization (using the fragment schema) to give a fragment query, then through global optimization (using statistics on fragments) to give an optimized fragment query with communication operations. At the LOCAL SITES, local optimization (using the local schemas) produces the optimized local queries.]
Distributed DBMS
Step 1 Query Decomposition
Input : Calculus query on global relations
B Normalization
= manipulate query quantifiers and qualification
B Analysis
= detect and reject incorrect queries
= possible for only a subset of relational calculus
B Simplification
= eliminate redundant predicates
B Restructuring
= calculus query ⇒ algebraic query
= more than one translation is possible
= use transformation rules
Distributed DBMS
B Convert relational calculus to
relational algebra
B Make use of query trees
B Example: Find the names of employees other than J. Doe who worked on the CAD/CAM project for either 1 or 2 years.

Restructuring

SELECT ENAME
FROM   EMP, ASG, PROJ
WHERE  EMP.ENO = ASG.ENO
AND    ASG.PNO = PROJ.PNO
AND    ENAME ≠ "J. Doe"
AND    PNAME = "CAD/CAM"
AND    (DUR = 12 OR DUR = 24)

[Query tree: Π_{ENAME} (Project) at the top; below it the selections σ_{DUR=12 ∨ DUR=24}, σ_{PNAME="CAD/CAM"} and σ_{ENAME≠"J. Doe"} (Select); at the bottom the joins PROJ ⋈_{PNO} ASG ⋈_{ENO} EMP (Join).]
Distributed DBMS
Restructuring Transformation Rules (Examples)
• Commutativity of binary operations
  - R × S = S × R
  - R ⋈ S = S ⋈ R
  - R ∪ S = S ∪ R
• Associativity of binary operations
  - (R × S) × T = R × (S × T)
  - (R ⋈ S) ⋈ T = R ⋈ (S ⋈ T)
• Idempotence of unary operations
  - Π_{A'}(Π_{A''}(R)) = Π_{A'}(R)   where R[A], A' ⊆ A, A'' ⊆ A and A' ⊆ A''
  - σ_{p1(A1)}(σ_{p2(A2)}(R)) = σ_{p1(A1) ∧ p2(A2)}(R)
• Commuting selection with projection
Distributed DBMS
Example
Recall the previous example: find the names of employees other than J. Doe who worked on the CAD/CAM project for either one or two years.

SELECT ENAME
FROM   PROJ, ASG, EMP
WHERE  ASG.ENO = EMP.ENO
AND    ASG.PNO = PROJ.PNO
AND    ENAME ≠ "J. Doe"
AND    PROJ.PNAME = "CAD/CAM"
AND    (DUR = 12 OR DUR = 24)

[Query tree: Π_{ENAME} over the selections σ_{DUR=12 ∨ DUR=24}, σ_{PNAME="CAD/CAM"}, σ_{ENAME≠"J. Doe"} over the joins PROJ ⋈_{PNO} ASG ⋈_{ENO} EMP.]
Distributed DBMS
Equivalent Query
Π_{ENAME}(σ_{PNAME="CAD/CAM" ∧ (DUR=12 ∨ DUR=24) ∧ ENAME≠"J. Doe"}(PROJ ⋈_{PNO} ASG ⋈_{ENO} EMP))
Distributed DBMS
Restructuring
[Restructured query tree: selections and projections are pushed down to the relations. PROJ is reduced by σ_{PNAME="CAD/CAM"} and Π_{PNO}; ASG is reduced by σ_{DUR=12 ∨ DUR=24} and Π_{PNO,ENO}; EMP is reduced by σ_{ENAME≠"J. Doe"}. The reduced ASG and EMP are joined on ENO and projected on PNO, ENAME; that result is joined on PNO with the reduced PROJ, and Π_{ENAME} is applied at the root.]
Distributed DBMS
Step 2 Data Localization
Input: Algebraic query on distributed relations
B Determine which fragments are involved
B Localization program
= substitute for each distributed relation its materialization program
= optimize
Distributed DBMS
Example
Assume
  - EMP is fragmented into EMP1, EMP2, EMP3 as follows:
      EMP1 = σ_{ENO≤"E3"}(EMP)
      EMP2 = σ_{"E3"<ENO≤"E6"}(EMP)
      EMP3 = σ_{ENO>"E6"}(EMP)
  - ASG is fragmented into ASG1 and ASG2 as follows:
      ASG1 = σ_{ENO≤"E3"}(ASG)
      ASG2 = σ_{ENO>"E3"}(ASG)
Replace EMP by (EMP1 ∪ EMP2 ∪ EMP3) and ASG by (ASG1 ∪ ASG2) in any query.

[Localized query tree: Π_{ENAME} over σ_{DUR=12 ∨ DUR=24}, σ_{PNAME="CAD/CAM"}, σ_{ENAME≠"J. Doe"} over PROJ ⋈_{PNO} ((ASG1 ∪ ASG2) ⋈_{ENO} (EMP1 ∪ EMP2 ∪ EMP3)).]
Distributed DBMS
Provides Parallelism
[After distributing the join over the unions, the query becomes a union of joins that can run in parallel: (EMP1 ⋈_{ENO} ASG1) ∪ (EMP2 ⋈_{ENO} ASG2) ∪ (EMP3 ⋈_{ENO} ASG1) ∪ (EMP3 ⋈_{ENO} ASG2).]
Distributed DBMS
Eliminates Unnecessary Work
[Joins between fragments whose ENO ranges cannot overlap are dropped, leaving (EMP1 ⋈_{ENO} ASG1) ∪ (EMP2 ⋈_{ENO} ASG2) ∪ (EMP3 ⋈_{ENO} ASG2).]
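A small sketch of this reduction step: each fragment is represented only by its ENO range (the upper bounds beyond the example predicates are assumed), and a join EMPi ⋈ ASGj is kept only when the ranges can overlap:

# Fragment predicates as (low, high) ENO ranges; string comparison is fine for single-digit ENOs.
emp = {"EMP1": ("E1", "E3"), "EMP2": ("E4", "E6"), "EMP3": ("E7", "E9")}
asg = {"ASG1": ("E1", "E3"), "ASG2": ("E4", "E9")}

def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

useful = [(e, a) for e in emp for a in asg if overlaps(emp[e], asg[a])]
print(useful)   # [('EMP1', 'ASG1'), ('EMP2', 'ASG2'), ('EMP3', 'ASG2')]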
Distributed DBMS
Step 3 Global Query
Optimization
Input: Fragment query
B Find the best (not necessarily optimal) global
schedule
= Minimize a cost function
= Distributed join processing
+ Bushy vs. linear trees
+ Which relation to ship where?
+ Ship-whole vs ship-as-needed
= Decide on the use of semijoins
+ Semijoin saves on communication at the expense of
more local processing.
= Join methods
+ nested loop vs ordered joins (merge join or hash join)
Distributed DBMS
Cost-Based Optimization
B Solution space
= The set of equivalent algebra expressions (query trees).
B Cost function (in terms of time)
= I/O cost + CPU cost + communication cost
= These might have different weights in different distributed
environments (LAN vs WAN).
= Can also maximize throughput
B Search algorithm
= How do we move inside the solution space?
= Exhaustive search, heuristic algorithms (iterative improvement, simulated annealing, genetic algorithms, ...)
Distributed DBMS
Query Optimization Process
[Figure: the input query is expanded by search space generation (using transformation rules) into equivalent query execution plans (QEPs); the search strategy, guided by the cost model, then selects the best QEP.]
Distributed DBMS
Search Space
B Search space characterized by
alternative execution plans
B Focus on join trees
B For N relations, there are O(N!)
equivalent join trees that can be
obtained by applying
commutativity and associativity
rules
SELECT ENAME, RESP
FROM   EMP, ASG, PROJ
WHERE  EMP.ENO = ASG.ENO
AND    ASG.PNO = PROJ.PNO

[Figure: three equivalent join trees for this query, e.g. (EMP ⋈_{ENO} ASG) ⋈_{PNO} PROJ, (PROJ ⋈_{PNO} ASG) ⋈_{ENO} EMP, and (PROJ × EMP) ⋈_{ENO,PNO} ASG.]
Distributed DBMS
Search Space
B Restrict by means of heuristics
= Perform unary operations before binary operations
B Restrict the shape of the join tree
= Consider only linear trees, ignore bushy ones

[Figure: a linear join tree joins one relation at a time, e.g. ((R1 ⋈ R2) ⋈ R3) ⋈ R4; a bushy join tree may join two intermediate results, e.g. (R1 ⋈ R2) ⋈ (R3 ⋈ R4).]
Distributed DBMS
Search Strategy
B How to move in the search space.
B Deterministic
= Start from base relations and build plans by adding one
relation at each step
= Dynamic programming: breadth-first
= Greedy: depth-first
B Randomized
= Search for optimalities around a particular starting point
= Trade optimization time for execution time
= Better when > 5-6 relations
= Simulated annealing
= Iterative improvement
Distributed DBMS
Search Strategies
• Deterministic: build plans bottom-up, adding one relation at each step, e.g. R1 ⋈ R2, then (R1 ⋈ R2) ⋈ R3, then ((R1 ⋈ R2) ⋈ R3) ⋈ R4.
• Randomized: start from a complete join tree and move between neighbouring trees, e.g. from ((R1 ⋈ R2) ⋈ R3) to ((R3 ⋈ R1) ⋈ R2) by exchanging relations.
Distributed DBMS
B Total Time (or Total Cost)
= Reduce each cost (in terms of time) component individually
= Do as little of each cost component as possible
= Optimizes the utilization of the resources ⇒ increases system throughput
B Response Time
= Do as many things as possible in parallel
= May increase total time because of increased total activity
Cost Functions
Distributed DBMS
Total Cost
Summation of all cost factors:
  Total cost = CPU cost + I/O cost + communication cost
  CPU cost = unit instruction cost × no. of instructions
  I/O cost = unit disk I/O cost × no. of disk I/Os
  communication cost = message initiation cost + transmission cost
Distributed DBMS
B Wide area network
= message initiation and transmission costs high
= local processing cost is low (fast mainframes or
minicomputers)
= ratio of communication to I/O costs = 20:1
B Local area networks
= communication and local processing costs are more or less
equal
= ratio = 1:1.6
Total Cost Factors
Distributed DBMS
Response Time
Elapsed time between the initiation and the completion of a query:
  Response time = CPU time + I/O time + communication time
  CPU time = unit instruction time × no. of sequential instructions
  I/O time = unit I/O time × no. of sequential I/Os
  communication time = unit msg initiation time × no. of sequential msgs
                     + unit transmission time × no. of sequential bytes
Distributed DBMS
Example
[Figure: Site 1 sends x units of data to Site 3, and Site 2 sends y units of data to Site 3.]
Assume that only the communication cost is considered.
  Total time = 2 × message initialization time + unit transmission time × (x + y)
  Response time = max{time to send x from Site 1 to Site 3, time to send y from Site 2 to Site 3}
  where
  time to send x from Site 1 to Site 3 = message initialization time + unit transmission time × x
  time to send y from Site 2 to Site 3 = message initialization time + unit transmission time × y
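A small sketch of these two measures for the figure above; the message initialization time, unit transmission time and the volumes x and y are assumed values:

msg_init, unit_tx = 1.0, 0.1     # assumed time units
x, y = 100, 200                  # assumed data volumes

total_time    = 2 * msg_init + unit_tx * (x + y)
response_time = max(msg_init + unit_tx * x,     # send x from Site 1 to Site 3
                    msg_init + unit_tx * y)     # send y from Site 2 to Site 3 (in parallel)

print(total_time, response_time)   # 32.0 21.0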
Distributed DBMS
Join Ordering
• Alternatives
  - Ordering joins
  - Semijoin ordering
• With two relations only, ship the smaller one to the site of the larger one:
    R → S if size(R) < size(S);  S → R if size(R) > size(S)
• Multiple relations are more difficult because there are too many alternatives.
  - Compute the cost of all alternatives and select the best one.
    + This requires computing the size of intermediate relations, which is difficult.
  - Use heuristics
Distributed DBMS
Join Ordering Example
Consider the query PROJ ⋈_{PNO} ASG ⋈_{ENO} EMP,
with EMP stored at Site 1, ASG at Site 2 and PROJ at Site 3.
Distributed DBMS
Join Ordering Example
Execution alternatives:
1. EMP → Site 2; Site 2 computes EMP' = EMP ⋈ ASG; EMP' → Site 3; Site 3 computes EMP' ⋈ PROJ
2. ASG → Site 1; Site 1 computes EMP' = EMP ⋈ ASG; EMP' → Site 3; Site 3 computes EMP' ⋈ PROJ
3. ASG → Site 3; Site 3 computes ASG' = ASG ⋈ PROJ; ASG' → Site 1; Site 1 computes ASG' ⋈ EMP
4. PROJ → Site 2; Site 2 computes PROJ' = PROJ ⋈ ASG; PROJ' → Site 1; Site 1 computes PROJ' ⋈ EMP
5. EMP → Site 2 and PROJ → Site 2; Site 2 computes EMP ⋈ PROJ ⋈ ASG
Distributed DBMS
Semijoin Algorithms
• Consider the join of two relations:
  - R[A] (located at Site 1)
  - S[A] (located at Site 2)
• Alternatives:
  1. Do the join R ⋈_{A} S
  2. Perform one of the semijoin equivalents
       R ⋈_{A} S = (R ⋉_{A} S) ⋈_{A} S
                 = R ⋈_{A} (S ⋉_{A} R)
                 = (R ⋉_{A} S) ⋈_{A} (S ⋉_{A} R)
Distributed DBMS
Semijoin Algorithms
• Perform the join
  - send R to Site 2
  - Site 2 computes R ⋈_{A} S
• Consider the semijoin (R ⋉_{A} S) ⋈_{A} S
  - S' = Π_{A}(S)
  - S' → Site 1
  - Site 1 computes R' = R ⋉_{A} S'
  - R' → Site 2
  - Site 2 computes R' ⋈_{A} S
The semijoin is better if
  size(Π_{A}(S)) + size(R ⋉_{A} S) < size(R)
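A rough numeric sketch of this condition; all sizes are assumed values, in units of transferred data:

size_R        = 1000   # R at Site 1
size_proj_A_S = 100    # size of Π_A(S) shipped from Site 2 to Site 1
size_R_semi_S = 150    # size of R ⋉_A S shipped back from Site 1 to Site 2

join_transfer     = size_R                          # plain join: ship all of R to Site 2
semijoin_transfer = size_proj_A_S + size_R_semi_S   # semijoin program: ship Π_A(S), then R ⋉_A S

print(semijoin_transfer < join_transfer)   # True: the semijoin wins for these sizes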
Distributed DBMS
B Cost function includes local processing as well
as transmission
B Considers only joins
B Exhaustive search
B Compilation
B Published papers provide solutions to handling
horizontal and vertical fragmentations but the
implemented prototype does not
R* Algorithm
Distributed DBMS
Performing joins
B Ship whole
= larger data transfer
= smaller number of messages
= better if relations are small
B Fetch as needed
= number of messages = O(cardinality of external relation)
= data transfer per message is minimal
= better if relations are large and the selectivity is good
R* Algorithm
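A rough numeric sketch of this ship-whole versus fetch-as-needed trade-off; the cardinalities, tuple size, message size and join selectivity are all assumed values:

card_outer, card_inner = 1000, 5000     # cardinalities (assumed)
tuple_size, msg_size   = 100, 4000      # bytes (assumed)
matches_per_outer      = 2              # join selectivity (assumed)

# Ship-whole: send the inner relation once, packed into messages.
ship_whole_msgs  = (card_inner * tuple_size) // msg_size
ship_whole_bytes = card_inner * tuple_size

# Fetch-as-needed: one request plus one reply per outer tuple, little data each time.
fetch_msgs  = 2 * card_outer
fetch_bytes = card_outer * matches_per_outer * tuple_size

print(ship_whole_msgs, ship_whole_bytes)   # 125 500000  -> few messages, lots of data
print(fetch_msgs, fetch_bytes)             # 2000 200000 -> many messages, less data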
Distributed DBMS
1. Move outer relation tuples to the site of the inner
relation
(a) Retrieve outer tuples
(b) Send them to the inner relation site
(c) Join them as they arrive
Total Cost = cost(retrieving qualified outer tuples)
           + no. of outer tuples fetched ×
             cost(retrieving qualified inner tuples)
           + msg. cost × (no. of outer tuples fetched ×
             avg. outer tuple size) / msg. size
R* Algorithm
Vertical Partitioning & Joins
Distributed DBMS
2. Move inner relation to the site of outer relation
cannot join as they arrive; they need to be stored
Total Cost = cost(retrieving qualified outer tuples)
           + no. of outer tuples fetched ×
             cost(retrieving matching inner tuples
             from temporary storage)
           + cost(retrieving qualified inner tuples)
           + cost(storing all qualified inner tuples
             in temporary storage)
           + msg. cost × (no. of inner tuples fetched ×
             avg. inner tuple size) / msg. size
R* Algorithm
Vertical Partitioning & Joins
Distributed DBMS
3. Move both inner and outer relations to another site
Total cost = cost(retrieving qualified outer tuples)
           + cost(retrieving qualified inner tuples)
           + cost(storing inner tuples in storage)
           + msg. cost × (no. of outer tuples fetched ×
             avg. outer tuple size) / msg. size
           + msg. cost × (no. of inner tuples fetched ×
             avg. inner tuple size) / msg. size
           + no. of outer tuples fetched ×
             cost(retrieving inner tuples from
             temporary storage)
R* Algorithm
Vertical Partitioning & Joins
Distributed DBMS
4. Fetch inner tuples as needed
(a) Retrieve qualified tuples at outer relation site
(b) Send request containing join column value(s) for outer tuples
to inner relation site
(c) Retrieve matching inner tuples at inner relation site
(d) Send the matching inner tuples to outer relation site
(e) Join as they arrive
Total Cost = cost(retrieving qualified outer tuples)
           + msg. cost × (no. of outer tuples fetched)
           + no. of outer tuples fetched × (no. of
             inner tuples fetched × avg. inner tuple
             size × msg. cost / msg. size)
           + no. of outer tuples fetched ×
             cost(retrieving matching inner tuples
             for one outer value)
R* Algorithm
Vertical Partitioning & Joins
Distributed DBMS
Step 4 Local Optimization
Input: Best global execution schedule
B Select the best access path
B Use the centralized optimization techniques
Distributed DBMS
Outline
B Introduction
B Distributed DBMS Architecture
B Distributed Database Design
B Distributed Query Processing
B Distributed Concurrency Control
= Transaction Concepts & Models
= Serializability
= Distributed Concurrency Control Protocols
B Distributed Reliability Protocols
Distributed DBMS
Transaction
A transaction is a collection of actions that make consistent
transformations of system states while preserving system
consistency.
= concurrency transparency
= failure transparency
[Figure: the database is in a consistent state when the transaction begins, may be temporarily in an inconsistent state while the transaction executes, and is again in a consistent state when the transaction ends.]
Distributed DBMS
Example Database
Consider an airline reservation example with the
relations:
FLIGHT(FNO, DATE, SRC, DEST, STSOLD, CAP)
CUST(CNAME, ADDR, BAL)
FC(FNO, DATE, CNAME,SPECIAL)
Distributed DBMS
Example Transaction
Begin_transaction Reservation
begin
  input(flight_no, date, customer_name);
  EXEC SQL UPDATE FLIGHT
           SET    STSOLD = STSOLD + 1
           WHERE  FNO = flight_no AND DATE = date;
  EXEC SQL INSERT
           INTO   FC(FNO, DATE, CNAME, SPECIAL)
           VALUES (flight_no, date, customer_name, null);
  output("reservation completed")
end. {Reservation}
Distributed DBMS
Termination of Transactions
Begin_transaction Reservation
begin
  input(flight_no, date, customer_name);
  EXEC SQL SELECT STSOLD, CAP
           INTO   temp1, temp2
           FROM   FLIGHT
           WHERE  FNO = flight_no AND DATE = date;
  if temp1 = temp2 then
    output("no free seats");
    Abort
  else
    EXEC SQL UPDATE FLIGHT
             SET    STSOLD = STSOLD + 1
             WHERE  FNO = flight_no AND DATE = date;
    EXEC SQL INSERT
             INTO   FC(FNO, DATE, CNAME, SPECIAL)
             VALUES (flight_no, date, customer_name, null);
    Commit;
    output("reservation completed")
  endif
end. {Reservation}
Distributed DBMS
Properties of Transactions
ATOMICITY
= all or nothing
CONSISTENCY
= no violation of integrity constraints
ISOLATION
= concurrent changes invisible ⇒ serializable
DURABILITY
= committed updates persist
Distributed DBMS
Transactions Provide
B Atomic and reliable execution in the presence
of failures
B Correct execution in the presence of multiple
user accesses
B Correct management of replicas (if they support
it)
Distributed DBMS
Architecture Revisited
[Figure: the distributed execution monitor consists of a transaction manager (TM) and a scheduler (SC). Begin_transaction, Read, Write, Commit and Abort requests go to the TM, which returns the results; the TM communicates with the TMs at other sites, while the SC, responsible for scheduling/descheduling requests, communicates with the other SCs and passes operations to the data processor.]
Distributed DBMS
Centralized Transaction
Execution
[Figure: user applications issue Begin_Transaction, Read, Write, Abort and EOT to the transaction manager (TM) and receive results and user notifications; the TM passes Read, Write, Abort and EOT to the scheduler (SC), which forwards the scheduled operations to the recovery manager (RM); results flow back up the chain.]
Distributed DBMS
Distributed Transaction
Execution
[Figure: at the originating site, the user application issues Begin_transaction, Read, Write, EOT and Abort to the TM and receives results and user notifications. TMs at different sites coordinate through the distributed transaction execution model and the replica control protocol, the schedulers (SC) coordinate through the distributed concurrency control protocol, and the recovery managers (RM) implement the local recovery protocol.]
Distributed DBMS
Concurrency Control
B The problem of synchronizing concurrent
transactions such that the consistency of the
database is maintained while, at the same time,
maximum degree of concurrency is achieved.
B Anomalies:
= Lost updates
+ The effects of some transactions are not reflected on
the database.
= Inconsistent retrievals
+ A transaction, if it reads the same data item more than
once, should always read the same value.
Distributed DBMS
Serializable History
B Transactions execute concurrently, but the net
effect of the resulting history upon the database
is equivalent to some serial history.
B Equivalent with respect to what?
= Conflict equivalence: the relative order of execution of the
conflicting operations belonging to unaborted transactions in
two histories are the same.
= Conflicting operations: two incompatible operations (e.g.,
Read and Write) conflict if they both access the same data
item.
+ Incompatible operations of each transaction are assumed to conflict; their execution order is not changed.
+ If two operations from two different transactions conflict,
the corresponding transactions are also said to conflict.
Distributed DBMS
Serializability in Distributed
DBMS
B Somewhat more involved. Two histories have to
be considered:
= local histories
= global history
B For global transactions (i.e., global history) to
be serializable, two conditions are necessary:
= Each local history should be serializable.
= Two conflicting operations should be in the same relative
order in all of the local histories where they appear together.
Distributed DBMS
Global Non-serializability
The following two local histories are individually serializable (in fact serial), but the two transactions are not globally serializable.

T1: Read(x)          T2: Read(x)
    x ← x + 5            x ← x × 15
    Write(x)             Write(x)
    Commit               Commit

LH1 = {R1(x), W1(x), C1, R2(x), W2(x), C2}
LH2 = {R2(x), W2(x), C2, R1(x), W1(x), C1}
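A small sketch of this check for the two local histories above: it collects the relative order of conflicting operations in each history and detects that the union of those orders is cyclic, so no global serial order exists (the operation encoding is only for illustration):

LH1 = [("T1", "R", "x"), ("T1", "W", "x"), ("T2", "R", "x"), ("T2", "W", "x")]
LH2 = [("T2", "R", "x"), ("T2", "W", "x"), ("T1", "R", "x"), ("T1", "W", "x")]

def conflict_order(history):
    """Return ordered transaction pairs (Ti, Tj) such that a conflict forces Ti before Tj."""
    pairs = set()
    for i, (ti, oi, xi) in enumerate(history):
        for tj, oj, xj in history[i + 1:]:
            if ti != tj and xi == xj and "W" in (oi, oj):   # conflicting operations
                pairs.add((ti, tj))
    return pairs

orders = conflict_order(LH1) | conflict_order(LH2)
print(orders)                                    # {('T1', 'T2'), ('T2', 'T1')}
print(any((b, a) in orders for a, b in orders))  # True: a cycle, so not globally serializable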
Distributed DBMS
Concurrency Control
Algorithms
B Pessimistic
= Two-Phase Locking-based (2PL)
+ Centralized (primary site) 2PL
+ Primary copy 2PL
+ Distributed 2PL
= Timestamp Ordering (TO)
+ Basic TO
+ Multiversion TO
+ Conservative TO
= Hybrid
B Optimistic
= Locking-based
= Timestamp ordering-based
Distributed DBMS
Locking-Based Algorithms
B Transactions indicate their intentions by requesting
locks from the scheduler (called lock manager).
B Locks are either read lock (rl) [also called shared
lock] or write lock (wl) [also called exclusive lock]
B Read locks and write locks conflict (because Read and Write operations are incompatible):

        rl    wl
  rl    yes   no
  wl    no    no
B Locking works nicely to allow concurrent processing
of transactions.
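A minimal sketch of the read/write lock compatibility test implied by the table above:

COMPATIBLE = {("rl", "rl"): True, ("rl", "wl"): False,
              ("wl", "rl"): False, ("wl", "wl"): False}

def can_grant(requested, held_locks):
    """held_locks: lock modes already granted on the same data item."""
    return all(COMPATIBLE[(requested, h)] for h in held_locks)

print(can_grant("rl", ["rl", "rl"]))   # True  - readers share
print(can_grant("wl", ["rl"]))         # False - a write conflicts with a read lock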
Distributed DBMS
Centralized 2PL
B There is only one 2PL scheduler in the distributed system.
B Lock requests are issued to the central scheduler.
[Figure: the coordinating TM sends a lock request to the lock manager (LM) at the central site; the LM replies with lock granted; the TM sends the operation to the data processors at the participating sites; on end of operation, the TM asks the LM to release the locks.]
Distributed DBMS
Distributed 2PL
B 2PL schedulers are placed at each site. Each
scheduler handles lock requests for data at that
site.
B A transaction may read any of the replicated
copies of item x, by obtaining a read lock on
one of the copies of x. Writing into x requires
obtaining write locks for all copies of x.
Distributed DBMS
Distributed 2PL Execution
[Figure: the coordinating TM sends lock requests to the participating sites' lock managers (LMs), which pass the operations to their data processors (DPs); end-of-operation messages flow back to the TM, which then tells the participating LMs to release the locks.]
Distributed DBMS
Timestamp Ordering
1. Transaction Ti is assigned a globally unique timestamp ts(Ti).
2. The transaction manager attaches the timestamp to all operations issued by the transaction.
3. Each data item x is assigned a write timestamp wts(x) and a read timestamp rts(x):
   - rts(x) = largest timestamp of any read on x
   - wts(x) = largest timestamp of any write on x
4. Conflicting operations are resolved by timestamp order.

Basic T/O:
  for Ri(x):
    if ts(Ti) < wts(x) then reject Ri(x)
    else accept Ri(x); rts(x) ← max{rts(x), ts(Ti)}
  for Wi(x):
    if ts(Ti) < rts(x) or ts(Ti) < wts(x) then reject Wi(x)
    else accept Wi(x); wts(x) ← ts(Ti)
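A minimal sketch of the Basic T/O decision rules above for a single data item x (in-memory timestamps only; aborts and restarts are not modelled):

rts, wts = 0, 0          # read and write timestamps of x

def read(ts):
    global rts
    if ts < wts:
        return "reject"              # a younger transaction already wrote x
    rts = max(rts, ts)
    return "accept"

def write(ts):
    global wts
    if ts < rts or ts < wts:
        return "reject"              # a younger transaction already read or wrote x
    wts = ts
    return "accept"

print(write(5), read(3), read(7), write(6))   # accept reject accept reject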
Distributed DBMS
Outline
B Introduction
B Distributed DBMS Architecture
B Distributed Database Design
B Distributed Query Processing
B Distributed Concurrency Control
B Distributed Reliability Protocols
= Distributed Commit Protocols
= Distributed Recovery Protocols
Distributed DBMS
Reliability
Problem: how to maintain the atomicity and durability properties of transactions.
Distributed DBMS
Types of Failures
B Transaction failures
= Transaction aborts (unilaterally or due to deadlock)
= Avg. 3% of transactions abort abnormally
B System (site) failures
= Failure of processor, main memory, power supply,
= Main memory contents are lost, but secondary storage contents
are safe
= Partial vs. total failure
B Media failures
= Failure of secondary storage devices such that the stored data
is lost
= Head crash/controller failure (?)
B Communication failures
= Lost/undeliverable messages
= Network partitioning
Distributed DBMS
Distributed Reliability Protocols
B Commit protocols
= How to execute commit command for distributed transactions.
= Issue: how to ensure atomicity and durability?
B Termination protocols
= If a failure occurs, how can the remaining operational sites deal
with it.
= Non-blocking : the occurrence of failures should not force the
sites to wait until the failure is repaired to terminate the
transaction.
B Recovery protocols
= When a failure occurs, how do the sites where the failure
occurred deal with it.
= Independent : a failed site can determine the outcome of a
transaction without having to obtain remote information.
B Independent recovery ⇒ non-blocking termination
Distributed DBMS
Two-Phase Commit (2PC)
Phase 1 : The coordinator gets the participants
ready to write the results into the database
Phase 2 : Everybody writes the results into the
database
= Coordinator :The process at the site where the transaction
originates and which controls the execution
= Participant :The process at the other sites that participate
in executing the transaction
Global Commit Rule:
0 The coordinator aborts a transaction if and only if at least
one participant votes to abort it.
O The coordinator commits a transaction if and only if all of
the participants vote to commit it.
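A minimal sketch of the coordinator's Phase 2 decision under the Global Commit Rule above (the vote values are just illustrative strings):

def global_decision(votes):
    """votes: iterable of 'commit' / 'abort' values sent by the participants."""
    votes = list(votes)
    if votes and all(v == "commit" for v in votes):
        return "GLOBAL-COMMIT"       # commit iff *all* participants voted to commit
    return "GLOBAL-ABORT"            # abort iff at least one participant voted to abort

print(global_decision(["commit", "commit", "commit"]))  # GLOBAL-COMMIT
print(global_decision(["commit", "abort", "commit"]))   # GLOBAL-ABORT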
Distributed DBMS
Centralized 2PC
[Figure: in Phase 1 the coordinator C sends "ready?" to all participants P, and each replies yes or no; in Phase 2 C sends the commit/abort decision and the participants reply committed/aborted.]
Distributed DBMS
2PC Protocol Actions
[Figure: state diagrams of the coordinator and a participant.
Coordinator: in INITIAL, write begin_commit in the log, send PREPARE to the participants and enter WAIT. If any participant replies VOTE-ABORT, write abort in the log, send GLOBAL-ABORT and enter ABORT; if all reply VOTE-COMMIT, write commit in the log, send GLOBAL-COMMIT and enter COMMIT. After collecting the ACKs, write end_of_transaction in the log.
Participant: in INITIAL, on receiving PREPARE decide whether it is ready to commit; if not, write abort in the log (unilateral abort) and send VOTE-ABORT; if yes, write ready in the log, send VOTE-COMMIT and enter READY. On GLOBAL-ABORT, write abort in the log, abort locally, send ACK and enter ABORT; on GLOBAL-COMMIT, write commit in the log, commit locally, send ACK and enter COMMIT.]
Distributed DBMS
Problem With 2PC
B Blocking
= Ready implies that the participant waits for the coordinator
= If coordinator fails, site is blocked until recovery
= Blocking reduces availability
B Independent recovery is not possible
B However, it is known that:
= Independent recovery protocols exist only for single site
failures; no independent recovery protocol exists which is
resilient to multiple-site failures.
B So we search for these protocols ⇒ 3PC
