RDBMS With SQL Server 2005
EPITOME
PREFACE
Dear Students,
If this E-Book helps anyone to learn and to use the computer effectively, it
will be of great success.
CONTENTS
- Introduction to RDBMS 4
- Approaches to Data Management 5
- Introduction to DBMS 7
- Database Models 8
- Codd's 12 Rules for RDBMS 13
- Database Design 16
- Structured Query Language (SQL) 44
- SQL Server Data types 44
- Create Table statement 46
- Integrity Constraints 49
- Altering Table Statements 56
- Sub Queries 64
- Joins 66
- Sorting of Data 77
- Using Transact-SQL 81
- View 87
- Trigger 89
- Stored Procedure 93
- SQL Cursors 97
- Data Permissions 139
INTRODUCTION TO RDBMS
SQL Server is a Relational Database Management System (RDBMS) developed and marketed
by Microsoft. This system is an important part of Microsoft's BackOffice, an enterprise
suite of client-server applications.
SQL Server runs exclusively under Windows. Microsoft's decision to concentrate on only
their own operating system results in the following benefit:
Almost all relational database systems originated under the UNIX operating system. As a
consequence, the existing user interfaces provided by these systems are rather difficult to
use. Microsoft's goal is to make SQL Server the easiest database for implementing and
managing database applications.
The original SQL Server database system was developed and implemented by
Sybase Inc. Microsoft licensed this DBMS in 1988 for their OS/2 operating system and began
implementing it for Windows NT in the early 1990s. In 1994, Microsoft ended its
cooperative agreement with Sybase Inc.
• Data redundancy
• Data isolation
• Data Redundancy- Since the files and application programs are written over a long
period of time, data in the files is likely to get repeated. This may also lead to inconsistency
that is, the various copies of the same data may contain different information. Data
redundancy could also occur due to duplication of data at different locations - the need
for the same data at physically different locations.
• Risk to data integrity- The data values stored, must satisfy certain integrity constraints.
These constraints can be enforced in the system, by adding appropriate code in the
application programs. The problem is compounded when constraints involve several
data items from different files.
• Data isolation- Since data is scattered in various files, and files may be in different
formats, it is difficult to write new application programs to retrieve the appropriate data.
• Difficult access to data- Whenever there is a new request, that was not anticipated
earlier, there are two possible solutions. Either extract the records manually, or have the
system programmer write the necessary application.
Database management addressed all these problems, though at the price of increased
overheads and costs.
However, with the advancement of technology and the all-round development made in
Hardware, Software, Networking and Operating Systems, the drawbacks of Data Management
have been eliminated to a great extent.
Introduction to DBMS
• Use of 4 GL Tools
• Data View
• External View
• Conceptual View
• Internal View
Database Models
• Hierarchical
• Network
• Relational
• Hierarchical Model
This model was introduced in the Information Management System (IMS) developed by IBM
in 1968. This model is like a hierarchical tree structure, used to construct a hierarchy of
records in the form of nodes and branches. The data elements present in the structure have
a Parent-Child relationship. Closely related information in the parent-child structure is stored
together as a logical unit. A parent unit may have many child units, but a child is restricted to
have only one parent.
parts superior to suppliers. The user sees four individual trees, or hierarchical occurrences,
one for each part. Each tree consists of one part record occurrence, together with a set of
subordinate supplier record occurrences, one for each supplier of the part. Each supplier
occurrence includes the corresponding shipment quantity. Note that the set of supplier
occurrences for a given part may contain any number of members, including zero.
The record type at the top of the tree (the part record type in our example) is usually known
as the "root". In general, the root may have any number of dependents, each dependent may
in turn have any number of lower-level dependents, and so on, to any number of levels.
Such a file, however, is a more complex object than a table. In the first place, it contains
several types of records, not just one; in our example there are two, one for parts and one
for suppliers. Second, it also contains links connecting occurrences of these records; in our
example there are links between part occurrences and supplier occurrences, representing
the associated shipments.
The drawbacks of this model are:
1. The hierarchical structure is not flexible enough to represent all the relationships
which occur in the real world.
2. It cannot demonstrate the overall data model for the enterprise because of the non-availability
of actual data at the time of designing the data model.
3. The Hierarchical model can be used only when the concerned data has a clearly hierarchical
character with a single root, for example the DOS directory structure.
• Network Model
Relationships are maintained using pointers, and having to trace the pointers is the
drawback of this design.
Relational Model
• All values appearing in the columns are derived from the underlying domain
• Row must be unique
• Column Names must be unique
• All column values are atomic
• In a relational database, there are no hard-coded relationships defined between
tables. A relationship can be specified at any time using any column name.
The publication of the paper "A Relational Model of Data for Large Shared Data Banks" by
E.F. Codd in June 1970 in the Communications of the ACM set a trend for vigorous and extensive
investigation into a theoretical framework to support further work in the area of Data Modeling.
The end result is the Relational Database Management System (RDBMS).
• Supplier
• Part
• Shipment
Consider the tables shown in the sample database. Each supplier has a unique supplier
code, which uniquely identifies the entire row of the table, and exactly one name, status and city.
Likewise, each part has a unique PCode and exactly one name, size and city, and at any
given time no more than one shipment exists for a given supplier/part combination.
The term 'relation' is used to describe these tables, and is a more precise term than the
traditional data processing terms "file" or "table". Rows of such tables are referred to
as tuples, again a more precise term than rows or records, and columns are referred to
as attributes. A domain is a set of values from which the actual values appearing in a given
column are drawn. For example, the values appearing in the part column of both the parts and
shipment tables are drawn from the underlying domain of all valid part numbers. This domain
itself may not be explicitly recorded but will be defined in the appropriate schema and will
have a name of its own. We can see that the relations supplier and parts have a domain in
common; so do parts and shipment; so do supplier and shipment. Another important concept
in relational databases is that relationships between tables are not hard-coded in the structure
of the data. There are no pointers in the data to relate one table to another. The relationship
between two or more sets of data can be specified at development time rather than when
the tables are first created. This greatly improves the flexibility of the database management
system.
In the relational data structure, associations between tuples are represented solely
by data values in columns drawn from a common domain. The fact that supplier S03 and
part P02 are located in the same city is represented by the appearance of the same value in the
city column for the two tuples.
All information in the database must be represented in one and only one way, namely by values
in column positions within rows of tables.
All data must be accessible with no ambiguity. This rule is essentially a restatement of the
fundamental requirement for primary keys. It says that every individual scalar value in the
database must be logically addressable by specifying the name of the containing table, the
name of the containing column and the primary key value of the containing row.
The DBMS must allow each field to remain null (or empty). Specifically, it must support a
representation of "missing information and inapplicable information" that is systematic, distinct
from all regular values (for example, "distinct from zero or any other number," in the case of
numeric values), and independent of data type. It is also implied that such representations
must be manipulated by the DBMS in a systematic way.
The system must support an online, inline, relational catalog that is accessible to authorized
users by means of their regular query language. That is, users must be able to access the
database's structure (catalog) using the same query language that they use to access the
database's data.
All views that are theoretically updatable must be updatable by the system.
The system must support set-at-a-time insert, update, and delete operators.
The physical representation of the data must be invisible to the user, so that the user can
manage the data according to need rather than according to the way it is stored.
If a user's view of the data has been defined, then changing the logical structure of the data
(tables, columns, rows, and so on) must not change the user's view.
• Integrity independence
Integrity constraints must be specified separately from application programs and stored in
the catalog. It must be possible to change such constraints as and when appropriate without
unnecessarily affecting existing applications.
• Distribution independence
The distribution of portions of the database to various locations should be invisible to users
of the database. Existing applications should continue to operate successfully
(a) when a distributed version of the DBMS is first introduced, and
(b) when existing distributed data is redistributed around the system.
If the system provides a low-level (record-at-a-time) interface, then that interface cannot be
used to subvert the system, for example by bypassing a relational security or integrity constraint.
An RDBMS product has to satisfy at least six of Codd's 12 rules to be accepted as
a full-fledged RDBMS.
Database Design
Database design is the process of developing database structures to hold data to cater to
user requirements. The final design must satisfy user needs in terms of completeness, integrity,
performance and other factors. For a large enterprise, the database design will turn out to be
an extremely complex task leaving a lot to the skill and experience of the designer. A number
of tools and techniques, including computer-assisted techniques, are available to facilitate
database design.
The primary input to the database design process is the organization's statement of
requirements. Poor definition of these requirements is a major cause of poor database design,
resulting in databases of limited scope and utility which are unable to adapt to change.
Requirement analysis is the process of identifying and documenting the data that the user
requires in the database to meet present and future information needs. During this phase,
the analyst studies data flows and decision making processes in the organization and works
with the users.
• What are the operational requirements regarding Security, Integrity and Response
time?
• Conceptual Design
The major step in conceptual design is to identify the entities and relationships that reflect
the organization’s data, ‘naturally’. The objective of this step is to specify the conceptual
structure of the data and is often referred to as data modeling. The E-R model, with an
emphasis on the top-down approach, will be discussed as an information model to develop a
conceptual structure, later in this chapter.
Data Modeling
Data Modeling is achieved in two levels: the first level builds the conceptual model of the
data, using E-R Modeling. The second level organizes this model better, by removing
redundancies, through a process called Normalization. The Normalized model is then converted
into the Physical Database.
Entity Relationship Modeling is a technique for analysis and logical modeling of a system’s
data requirements. It uses three basic concepts: entities, their attributes, and the relationships
that exist between the entities. It uses graphical notations for representing these.
Entity
An Entity is any object, place, person, concept, activity about which an enterprise records
data. It is an object which can have instances or occurrences. Each instance should be
capable of being uniquely identified. Each entity has certain properties, or Attributes,
associated with it and operations applicable to it. An entity is represented by a rectangle in the E-R
Model.
Categories of objects under which an entity is identified:
Category Examples
Physical object Employee, Machine, Book, Client, Student, Item
Abstract object Application, Department
Event Application, Reservation, Invoice, Contract
Location Building, City, State
Entity type and entity instance are two important terms related to entities. An entity type is a
set of things which share common properties, e.g. STUDENT, COURSE. An entity
type is usually denoted in upper case. An entity instance is a specific individual thing,
e.g. "Mohit", "Physics". An entity instance is usually denoted in lower case.
• Attributes
Attributes are data elements that describe an entity. If the attribute of an entity has more
attributes that describe it, then it is not an attribute of that entity, but another entity. Attributes
can either be listed next to the entities, or placed in circles and attached to the entities.
Entity Attributes
Customer Name, Address, Status
Book ISBN, Title, Author, Price
Order Order Number, Order Date, Placed by
An identifier is one or more attributes of an Entity or Relationship whose values uniquely
identify an instance of that entity or relationship. The identifier attributes are underlined in the diagram.
Example:
PERSON: PERSON ID, NAME, DOB
PROJECT: PROJECT ID, START_DATE, BUDGET
Relationship
Entity Attributes
Student ID, Name, Address, DOB
Course Course Code, Duration, Start Date, Room No.
A relationship may associate an entity with itself. For example, in a company, one employee
may marry another employee. Hence the following relationship is possible.
Several relationships may exist between the same entities. For example:
Dependent Entities
• Depend on the existence of a parent entity
• Have composite identifiers, one from parent and another of its own
There are two types of entities, dependent and independent. An entity whose existence
depends on the existence of another entity is called a dependent entity. Consider the entity
type OFFERING. A course with the same code can be offered in different semesters and different
campuses. The existence of OFFERING depends on the existence of the entity type COURSE.
The dependent entity is depicted in the E-R diagram by a rectangular box with a second line
drawn across the top.
Dependent entities have composite identifiers. The identifier consists of the parent entity,
together with another attribute that uniquely identifies the dependent entity with the parent.
Example:
Fig. 4.10
The figure depicts entities PROJECTS and TASKS. A task obviously cannot be present
without the existence of a related Project. The entity Task is thus modeled as a Dependent
Entity of the Project Entity. The dependence is shown by the arrow from PROJECTS to
TASKS.
Thus the identifier of TASKS is composed of the attributes, PROJECT-ID (which identifies
the project) and TASK-NO (which identifies the particular task in the project). Every task in a
project may be assigned a Budget, which becomes an attribute of the Task.
An independent entity does not depend on any other entity for existence. For e.g. STUDENT
• Modeled by rectangular boxes, with a second line on the left hand side of the box
A subset is derived from another entity which is called superset. For example, consider the
entity EMPLOYEE. There are two types of employees - salaried employees and wage earning
employees. In this example, EMPLOYEE is the superset, and the SALARIED and WAGE-
EARNING employees are the subset. Subsets have certain attributes of their own, which are
unique to them. They also share some attributes with the other subsets.
The common attributes become attributes of the parent entity set, whereas attributes pertaining
to the subset alone are shown as attributes of that subset. For example, "name" and "address"
are common attributes of the subsets SALARIED and WAGE-EARNING, so they become
attributes of the parent entity set EMPLOYEE.
WAGE-EARNING employees will have some attributes, like “overtime”, “daily wages” that
do not belong to the subset SALARIED. The subset SALARIED has some attributes like
HRA, BASIC, ALLOWANCE that do not belong to the subset WAGE-EARNING.
A subentity or subset is always dependent on the superentity or superset for its existence.
The attributes of superset apply to all of its subsets. The converse is not true.
Degree of Relationship
The Degree of a Relationship indicates the link between two entities for a specified
occurrence of each. The degree of a relationship is also called Cardinality.
• One to One Relationship: (1:1)
For one occurrence of the first entity, there can be at most one related occurrence of the
second entity, and vice-versa.
• One to Many Relationship: (1:N)
For one occurrence of the first entity there can exist many related occurrences of the
second entity and for every occurrence of the second entity there exists only one associ-
ated occurrence of the first.
• Many to Many Relationship: (M:N)
For one occurrence of the first entity, there exist many related occurrences of the second
entity and for every occurrence of the second entity, there may exist many associated
occurrences of the first.
Normalization
Normalization Terminology
Normalization terminology consists of various concepts that are frequently used in
normalization, for example primary key and functional dependency.
Primary Key
The primary key of a relational table uniquely identifies each row in the table. A primary
key is either a column of the table whose values are unique, such as an identification number or
social security number, or it is generated by the DBMS, such as a Globally Unique Identifier
(GUID). A primary key may consist of a single column or of multiple columns of a table. For example,
consider a student records database that contains tables related to student information.
The first table, STUDENTS, contains a record for each student at the university. The
STUDENTS table consists of various attributes such as student_id, First_name and
student_stream.
A unique Student_id number of a student is a primary key in the STUDENTS table. You
cannot make the first or last name of a student a primary key because more than one
student can have the same name or the same stream.
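The uniqueness a primary key enforces can be observed directly. The following is an illustrative sketch using Python's built-in sqlite3 module (SQLite stands in for SQL Server here; the table and column names simply mirror the hypothetical STUDENTS example above):

```python
import sqlite3

# Minimal STUDENTS table in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE students (
        student_id     INTEGER PRIMARY KEY,
        first_name     TEXT,
        student_stream TEXT
    )
""")
conn.execute("INSERT INTO students VALUES (1, 'Mohit', 'Physics')")
try:
    # A second row with the same student_id violates the primary key,
    # so the DBMS rejects the insert.
    conn.execute("INSERT INTO students VALUES (1, 'Rahul', 'Physics')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
print(duplicate_rejected)  # True
```

Two students may share a first name or a stream, but the DBMS guarantees that no two rows ever share a student_id.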
Functional Dependency
In the table above, the various attributes of the EMPLOYEE are Employee_id,
Employee_name and Employee_dept. You can state that:
Employee_id → Employee_name
Functional dependencies are a type of constraint based on keys such as a primary key or
foreign key. For a relational table R, a column Y is said to be functionally dependent on a
column X of the same table if each value of the column X is associated with only one value
of the column Y at a given time. All the columns in the relational table R should be functionally
dependent on X if the column X is a primary key.
If the columns X and Y are functionally dependent, the functional dependency can be
represented as:
R.X → R.Y
For example, consider the following functional dependency in a table:
Employee_id → Salary
The column Employee_id functionally determines the Salary column because the salary of
each employee is unique and remains the same each time that employee appears in the table.
A functional dependency X → Y between two sets of attributes X and Y that are subsets of R
is termed a trivial functional dependency if Y is a subset of X. For example,
{Employee_id, Project} → Project is a trivial functional dependency.
A functional dependency X → Y between two sets of attributes X and Y that are subsets of R
is termed a non-trivial functional dependency if at least one of the attributes of Y is not an
attribute of X. For example, Employee_id → Salary is a non-trivial functional dependency.
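A functional dependency can be checked mechanically against sample rows. In the sketch below, fd_holds is a helper invented for this example and the employee data is hypothetical:

```python
def fd_holds(rows, lhs, rhs):
    """Return True if the functional dependency lhs -> rhs holds in rows.

    rows is a list of dicts; lhs and rhs are tuples of column names.
    The dependency holds when no two rows agree on the lhs columns
    but differ on the rhs columns.
    """
    seen = {}
    for row in rows:
        key = tuple(row[c] for c in lhs)
        val = tuple(row[c] for c in rhs)
        if key in seen and seen[key] != val:
            return False
        seen[key] = val
    return True

employees = [
    {"Employee_id": "E1", "Employee_name": "Asha", "Salary": 500},
    {"Employee_id": "E2", "Employee_name": "Ravi", "Salary": 700},
    {"Employee_id": "E3", "Employee_name": "Asha", "Salary": 600},
]
# Non-trivial: Employee_id -> Salary holds in this data.
print(fd_holds(employees, ("Employee_id",), ("Salary",)))           # True
# Employee_name -> Salary fails: two rows named Asha differ on Salary.
print(fd_holds(employees, ("Employee_name",), ("Salary",)))         # False
# Trivial: {Employee_id, Salary} -> Salary always holds.
print(fd_holds(employees, ("Employee_id", "Salary"), ("Salary",)))  # True
```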
• IR1 (reflexive rule): If X ⊇ Y, then X → Y. This rule states that if X ⊇ Y and two tuples
t1 and t2 exist in a relation instance r of relation R such that t1[X] = t2[X], then
t1[Y] = t2[Y] because X ⊇ Y. This implies that X → Y holds true in the relation instance r of
relation R.
• IR2 (augmentation rule): { X → Y } |= XZ → YZ. This rule states that if X → Y holds
true in a relation instance r of R but XZ → YZ does not, then tuples t1 and t2 must
exist in r that violate X → Y, which is a contradiction.
• IR3 (transitive rule): { X → Y, Y → Z } |= X → Z. This rule states that if both X → Y
and Y → Z hold true in a relation r, then for any two tuples t1 and t2 in r with
t1[X] = t2[X], you must have t1[Y] = t2[Y] and hence t1[Z] = t2[Z].
• IR4 (decomposition or projective rule): This rule states that if X → YZ holds true,
then X → Y and X → Z also hold true.
• IR5 (union or additive rule): This rule states that if X → Y and X → Z hold true, then
in the relation R, X → YZ also holds true.
• IR6 (pseudotransitive rule): This rule states that if X → Y and WY → Z hold true,
then WX → Z also holds true.
Attribute Closure
To compute the closure J+ of a given set J of attributes under a set F of functional
dependencies, you can apply the inference rules until they stop producing new attributes.
You can test whether a set of attributes J is a super key or not by finding the set of
attributes which are functionally determined by J. You can use the following algorithm to
compute the closure J+:
result := J
while (changes to result) do
    for each functional dependency X → Y in F do
        if X ⊆ result then result := result ∪ Y
The algorithm assumes that J is a set of attributes and F is the given set of functional
dependencies. The closure of J under F is denoted by J+.
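The algorithm above translates almost line for line into code. The following is a minimal Python sketch; representing each FD as a (lhs, rhs) pair of attribute sets is an assumption of this example:

```python
def closure(attrs, fds):
    """Compute the closure of attrs under the functional dependencies fds."""
    result = set(attrs)
    changed = True
    while changed:                 # "while (changes to result) do"
        changed = False
        for lhs, rhs in fds:       # "for each X -> Y in F"
            if lhs <= result and not rhs <= result:
                result |= rhs      # "result := result U Y"
                changed = True
    return result

fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
print(sorted(closure({"A"}, fds)))  # ['A', 'B', 'C']
# {A} is a super key of R(A, B, C): its closure covers every attribute.
print(closure({"A"}, fds) == {"A", "B", "C"})  # True
```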
According to the first normal form, every column value in a table should be atomic, which
implies that no multi-valued data exists within the same row of a table. For example, consider
the Books table shown below.
In the table above, since a book can have more than one author and a book can also be
included in different categories, the columns that consist of multi-valued elements
should be removed from the table. The Books table, therefore, should contain the book_ISBNno,
book_price and book_publisher columns.
The table below lists the various attributes of the Books table after the multi-valued elements
are removed.
The Books Table after the Multi-valued Elements are Removed
Book_ISBNno Book_price Book_publisher
8790478 35 ABC
8790388 25 PQR
8790689 77 ABC
Book_ISBNno Book_category
8790478 Sales
8790388 Accounts
8790689 Sales
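The same 1NF decomposition can be sketched in code. The rows below reuse the sample ISBNs and categories from the tables above; the list-valued Book_categories column is the hypothetical unnormalised starting point:

```python
books = [
    {"Book_ISBNno": "8790478", "Book_price": 35, "Book_publisher": "ABC",
     "Book_categories": ["Sales"]},
    {"Book_ISBNno": "8790388", "Book_price": 25, "Book_publisher": "PQR",
     "Book_categories": ["Accounts"]},
    {"Book_ISBNno": "8790689", "Book_price": 77, "Book_publisher": "ABC",
     "Book_categories": ["Sales"]},
]
# The Books table keeps only the single-valued columns.
books_table = [
    {k: r[k] for k in ("Book_ISBNno", "Book_price", "Book_publisher")}
    for r in books
]
# The repeating group moves to its own table, one category per row.
category_table = [
    {"Book_ISBNno": r["Book_ISBNno"], "Book_category": c}
    for r in books for c in r["Book_categories"]
]
print(len(books_table), len(category_table))  # 3 3
```

Each table now holds only atomic values, matching the two 1NF tables shown above.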
In the Books table, the super keys are Book_author and book_ISBNno. The super keys for
the author table are the combination of first_name and last_name. Similarly, for the catego-
ries table, the super key is category.
The primary key for the Books table is Book_ISBNno and the primary keys for the author
table are first_name and last_name. The primary key for the categories table is CategoryID.
To ensure that each row in the author table is unique, you can add the Author_city and
Author_zipcode columns in the primary key field.
Table below lists the various attributes in the author table.
PARTIAL DEPENDENCY
In a table, a primary key consists of one or more than one column to uniquely identify each
row in the table. Partial dependency occurs when a row of a table is uniquely identified by
one column that constitutes a primary key without requiring the entire primary key to uniquely
identify the row. For example, consider a table ‘Stocks’ with attributes cust_id, stock and
stock_price.
In Table above, suppose cust_id and stock are identified as the primary key for the Stocks
table. However, the column stock_price is partially dependent on the primary key because
only the stock column determines the stock_price. Also, the values in the stock_price column
do not need the cust_id column to uniquely identify the price of the stocks. Therefore, you
need to make a separate table for the stock_price where the stock column is the primary key.
In the new table, partial dependency is eliminated because the stock_price column is entirely
dependent on the primary key.
Partial dependencies can only occur when more than one field constitutes the primary key. If
there is only one field in the primary identifier, then partial dependencies cannot occur.
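The removal of a partial dependency can be sketched as follows; the stock symbols and prices are hypothetical sample data:

```python
stocks = [
    {"cust_id": "C1", "stock": "INFY", "stock_price": 1500},
    {"cust_id": "C2", "stock": "INFY", "stock_price": 1500},
    {"cust_id": "C1", "stock": "TCS",  "stock_price": 3200},
]
# stock_price is determined by stock alone, not by the full
# (cust_id, stock) key, so it moves to a table keyed only by stock.
stock_price = {r["stock"]: r["stock_price"] for r in stocks}
holdings = [{"cust_id": r["cust_id"], "stock": r["stock"]} for r in stocks]
print(stock_price["INFY"], len(holdings))  # 1500 3
```

Each price is now stored once per stock instead of once per customer/stock pair, which is exactly what eliminating the partial dependency buys.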
Table above conforms to 1NF since it does not contain repeated values and Emp_id and
Proj_no are identified as the primary key for the table. However, the table is not in 2NF because
some columns of the table depend on only a part of the primary key, which comprises
Emp_id and Proj_no. For example, the column Emp_name is dependent
on only the Emp_id and does not depend on the Proj_no part of the primary key. Similarly, the
Proj_name column is dependent on only the Proj_no column and not on the Emp_id part of the
primary key.
Therefore, to apply 2NF to the employee_project table, you need to make a separate table for
columns that depend on only a part of the primary key. The new table should contain columns
that are dependent on the entire primary key identified for the table. The tables formed after
applying 2NF to the employee_project table are the emp_proj, emp and proj tables.
Table below lists the various attributes in the emp_proj table.
Emp_id Proj_no Proj_hrs
H76320 W36 08
H76321 W37 02
In the table above, Order_no and Item_no are identified as the primary key for the table.
The table also conforms to 1NF since it does not contain repeated values. However, to apply
2NF to the ORDERS table, you need to create a separate table for the columns that do not
depend on the entire primary key of Order_no and Item_no.
The tables, which are created after 2NF is applied to the ORDERS table, are order_cust
table and orders table.
Table below lists the various attributes in the order_cust table.
Order_no Customer
H76320 ABC Corp
H76321 XYZ Co
In the above order_cust table, the customer column is dependent on the primary key order_no.
Similarly, another table is created in which all the columns, Order_no, Item_no, Item, Qty
and Price are dependent on the primary keys, Order_no and Item_no. The table below lists the
various attributes in the orders table.
TRANSITIVE DEPENDENCY
Transitive dependency occurs when a non-key column is uniquely identified by values in
another non-key column of a table. A non-key column of a table refers to the column that is
not identified as a key such as candidate or primary key. For example, consider a SUPPLIER
table with attributes supplier_id, supplier_status and supplier_address. The functional
dependencies that exist in the SUPPLIER table help to understand the concept of transitive
dependency.
Table below lists the various attributes in the SUPPLIER table.
In the above table, Subject_no is the only candidate key. Therefore, the following functional
dependencies exist for the Subject table:
Subject_no → Subject_name
Subject_no → Instructor
Instructor → Department
From the above functional dependencies, you can infer that Subject_no → Department, and
the above table is in 2NF. However, the table is not in 3NF, since Department is not
directly dependent on Subject_no. In the Subject table, the Department column is determined
by another non-key column, Instructor. Therefore, to apply 3NF to the Subject table, you
need to decompose the table into two tables, the subject_inst table and the instructor table.
Table below lists the various attributes in the subject_inst table.
Subject_no Subject_name Instructor
H76320 Data structure ABC
H76321 Advanced OS XYZ
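The 3NF decomposition described above can be sketched in code; the department values are hypothetical sample data:

```python
subjects = [
    {"Subject_no": "H76320", "Subject_name": "Data structure",
     "Instructor": "ABC", "Department": "CS"},
    {"Subject_no": "H76321", "Subject_name": "Advanced OS",
     "Instructor": "XYZ", "Department": "CS"},
]
# subject_inst keeps the columns directly dependent on the key Subject_no.
subject_inst = [
    {k: r[k] for k in ("Subject_no", "Subject_name", "Instructor")}
    for r in subjects
]
# instructor records the Instructor -> Department dependency exactly once,
# removing the transitive dependency Subject_no -> Instructor -> Department.
instructor = {r["Instructor"]: r["Department"] for r in subjects}
print(instructor["ABC"])  # CS
```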
A relation in 3NF need not necessarily be in BCNF. In 3NF, if a relation has more than one
candidate key, then anomalies can occur. In case of overlapping candidate keys, 3NF is
unable to stop the occurrence of anomalies. This provides the basis for BCNF, which is based
on the determinant concept. A determinant is an attribute on which some other attribute is
fully functionally dependent. The following shows a relation and its determinants:
R(a, b, c, d)
a, c → b, d
a, d → b
In the above relation, the first determinant states that you can change the primary key of
relation R from a,b to a,c. After applying this change, you can still determine all the non-key
attributes present in relation R. The second determinant indicates that a,d determines b, but
as a,d does not determine all the non-key attributes of R, it cannot be considered the primary
key of R. This implies that the first determinant is a candidate key, but the second determinant
is not, hence this relation is not in BCNF but is in 3NF.
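The two determinants can be tested with the attribute-closure idea from earlier; determines_all is a helper invented for this sketch:

```python
# FDs of R(a, b, c, d): {a,c} -> {b,d} and {a,d} -> {b}.
fds = [({"a", "c"}, {"b", "d"}), ({"a", "d"}, {"b"})]
all_attrs = {"a", "b", "c", "d"}

def determines_all(attrs):
    """Return True if attrs functionally determines every attribute of R."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result == all_attrs

print(determines_all({"a", "c"}))  # True: its closure is {a, b, c, d}
print(determines_all({"a", "d"}))  # False: its closure is only {a, b, d}
# A determinant ({a, d}) that is not a candidate key means R is not in BCNF.
```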
To be in BCNF, every determinant of the relation has to be a candidate key. The definition of
BCNF specifies that a relation schema R is in BCNF if, whenever a non-trivial functional
dependency X → A holds in R, X is a super key of R.
DECOMPOSITION
The relational database design algorithms start with a single universal relation schema, R =
{A1, A2, A3, ..., An}, which includes all the attributes of the database. The database designers
specify the set F of functional dependencies, which holds true for all the attributes of R. This
set F of functional dependencies is also provided to the design algorithms. With the help of the
functional dependencies, these algorithms decompose the universal relation schema R into
a set of relation schemas, D = {R1, ..., Rm}, which becomes the relational database schema.
In this case, D is referred to as a decomposition of R. The properties of decomposition are as
follows:
• Attribute preservation: It involves preserving all the attributes of the relation, which
is being decomposed by the design algorithms. While decomposing a relation, you
need to make sure that each attribute in R exists in at least one relation schema, Ri.
• Lossless-join decomposition: It ensures that the original relation can be recovered
by joining the decomposed relations. The decomposition of the relation
R into several relations, R1, R2, ..., Rn, is called a lossless-join decomposition if the
relation R is the natural join of the relations R1, R2, ..., Rn. To test whether a given
decomposition is a lossless join for a given set F of functional dependencies, you
need to decompose the relation R into R1 and R2. If the decomposition of the
relation R is a lossless join, then one of the following conditions has to be true:
– (R1 ∩ R2) → (R1 - R2)
– (R1 ∩ R2) → (R2 - R1)
• Dependency preservation: It states that each functional dependency X → Y
specified in F should either directly appear in one of the relation schemas Ri in the
decomposition D or be inferable from the dependencies that appear in some Ri.
The need for dependency preservation arises because each dependency in F
represents a constraint on the database. When a decomposition does not preserve
a dependency, that dependency is lost in the decomposition. You can check for a
lost dependency by creating a join of two or more relations in the decomposition to
get a relation that includes all the left-hand and right-hand side attributes of the
lost dependency, and then checking whether the dependency still holds on the
result of the join.
MULTI-VALUED DEPENDENCY
An entity in the E-R model can have multi-valued attributes. A multi-valued attribute is one that
does not have a single value but a collection of values. If you have to store such an entity in
one relation, you will have to repeat all the information other than the multi-valued attribute
value. In this way, the same instance of the entity will have many tuples. The situation becomes
much worse if an entity has more than one multi-valued attribute. The multi-valued
dependency gives a solution to the problem of more than one multi-valued attribute.
MVD: Let R(X, Y, Z) be a relation. The multi-valued dependency X →→ Y holds for relation
R if, for a given value of attribute X, there is a set of zero or more associated values
for the set of attributes Y. The values of Y depend only on the X values and have no
dependence on the set of attributes Z.
Suppose a Students table, which has Stud_id, Qualifications and Languages as attributes.
In this relation a student can have more than one qualification (Stud_id →→ Qualifications)
and know more than one language (Stud_id →→ Languages). This relation shows duplication
of data and inconsistency. You can decompose the Students relation into two relations, one
with the attributes Stud_id and Qualifications and one with the attributes Stud_id and Languages.
In this example, if there were a dependency between the Qualifications and Languages attributes,
then the Students relation would not have an MVD and could not be decomposed into two relations.
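This decomposition can be sketched in Transact-SQL (the table and column names here are illustrative assumptions, not taken from an actual schema):

```sql
-- Hypothetical decomposition of the Students relation:
-- one table per multi-valued attribute removes the MVD redundancy.
CREATE TABLE Student_Qualifications
(
    Stud_id       INT         NOT NULL,
    Qualification VARCHAR(30) NOT NULL,
    PRIMARY KEY (Stud_id, Qualification)
)

CREATE TABLE Student_Languages
(
    Stud_id  INT         NOT NULL,
    Language VARCHAR(30) NOT NULL,
    PRIMARY KEY (Stud_id, Language)
)
```

Each table now records one independent fact about a student, so adding a qualification no longer forces a row to be repeated for every language the student knows.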
employee may have various skills and may know various languages; therefore, the table has
two many-to-many relationships. Under the fourth normal form, the two many-to-many
relationships are not represented in a single row and you need to split R into two tables.
Therefore, the table R is split into a table with the attributes employee and skill and another
table with the attributes employee and language. A relation is considered to be in the fourth
normal form if each defined table contains no more than one multi-valued dependency per
key attribute.
For example, consider an Instructor table shown :
MID Database Instructor
1 Access Miller
8 Access Smith
The redundancy of data is easily perceived. For each MID there are multiple values of
Instructor and Database. This is a perfect example of a multi-valued dependency. The table
below shows the fourth normal form of the Instructor table.
MID-Database Table        MID-Instructor Table
MID   Database            MID   Instructor
1     Access              1     Miller
8     Access              1     John
1     DB2                 8     Smith
8     Oracle
If you were to add the MID 2 to New York, you would have to add a line to the table for each
instructor located in New York. If Jones were certified for MID 2 and could travel to New York,
you would have to add two lines to reflect this.
Table below shows the instructor-MID-Location table. It is decomposed into the fifth normal
form.
Instructor-Seminar Seminar-Location Instructor-Location
Table Table Table
Instructor MID MID Location Instructor Location
Smith 1 1 New York Smith New York
Smith 2 1 Chicago Smith Chicago
Jones 1 2 Chicago Jones Chicago
[Figure: Workstations 1-4 all connected to a single centralised database]
There are several advantages of using a centralised database system. These advantages are:
• It provides control over the data of the organization.
• It provides an easier way to control the information of the organization using well-
known technologies and experts.
• A centralized system provides simpler interfaces for interacting with its software
and hardware.
• Report generation is easier, as the whole data of the organization is located at a
single place.
Along with the advantages, there are certain disadvantages related to the centralised
system. These disadvantages are:
• A huge amount of money is required to establish and maintain a centralized system.
• The speed of accessing information in a centralized system is slow because only
one system processes all the information of the organization.
• A centralized system provides less security to the information due to the use of
one centralized computer. This also means that if the centralized computer fails,
the complete system will fail.
Decentralised Systems
In a decentralised system, the data of the organization is not stored at a single location. There
is no centralised computer to control the data. Control of the data is distributed among
several users and is individually handled by them. A decentralised system also provides an
easy way to access data from the tables. Information in a decentralised system is more
service-oriented towards the individual user. The following figure shows the diagram for a
decentralised database system.
[Figure: Workstations 1-4 within the organization, each with its own database]
There are several advantages of using a decentralized database system. These advantages
are:
• The initial cost and maintenance cost of starting a decentralized system are less.
• It provides processing and controlling of information on individual computers
of the organization, thus increasing control at the local level.
• It provides an easier way to involve new users in the IT system of the organization.
• The failure of an individual system does not result in the failure of the entire database
system.
There are certain disadvantages related to the decentralized database system.
These disadvantages are:
• In a decentralized system, it is difficult to control the information of the organization
as the control is distributed among several users.
• Decentralized control over the data of the organization results in a lack of
consistency.
• In a decentralized system, the technologies used by users are not common, which
results in inefficient information handling and a lack of standardization.
• There is a lot of wastage of money and effort due to duplication of information.
• The use of decentralized database systems results in redundancy of data, as the
same data may be stored at different places.
Centralized versus Decentralized Systems
The failure of a centralized system causes total loss of the data. In contrast, the
failure of a local computer in a decentralized system does not result in the total failure of the
system. The processing units of centralized systems are expensive, whereas the
processing units required for decentralized systems do not cost much, as data is stored at the
user's local computer. Data is stored and managed at a single place in a centralized
system, which accounts for the standardization of the stored data, whereas in decentralized
systems different users using different techniques manage data individually. Therefore,
the data in a decentralized system lacks uniformity and consistency.
In a centralized database design, the different objects, such as tables and views,
within the database are linked together. As a result, data can be exchanged between these
database objects. However, in a decentralized database design the different database
objects are not linked; therefore, exchange of data is not possible in decentralized database
design. In centralized database design there is no redundancy, as all users use the same
data. On the other hand, decentralized database design allows redundancy. The interaction
between different objects in a decentralized database design is more difficult than the
database object interaction in a centralized database.
In centralized database design, the process of retrieving information is a single-step process,
whereas in decentralized database design the process of retrieving information includes
multiple steps. In decentralized database design, ownership of data can be defined in a more
efficient manner than in centralized database design. In centralized database design, the
management can obtain all the data related to decision-making, planning and control, which
it cannot obtain in a decentralized database design. Centralized database design supports
data integration, whereas decentralized database design does not.
The Structured Query Language is a standardized set of statements that can be used across
RDBMSs. SQL statements can be classified broadly into three categories: DDL (Data
Definition Language), DML (Data Manipulation Language) and DCL (Data Control Language).
The DDL contains all the statements that are used to define databases and database
objects, e.g. CREATE DATABASE, CREATE TABLE, etc. The DML contains all the statements
that are used to manipulate the data stored in the databases, e.g. INSERT,
UPDATE and DELETE. The DCL contains all the statements that are used to control
the flow of data among the various users of the databases. All the statements related to the
permissions and security of the database, like GRANT, DENY and REVOKE, fall under this
category.
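As a quick sketch, one or two statements from each category (the object names are illustrative):

```sql
-- DDL: define a database object
CREATE TABLE emp (empno INT, ename VARCHAR(10))
-- DML: manipulate the stored data
INSERT INTO emp VALUES (1, 'Smith')
UPDATE emp SET ename = 'Smyth' WHERE empno = 1
-- DCL: control access to the data
GRANT SELECT ON emp TO public
```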
All the data values of a column in a table must be of the same data type. Transact-SQL uses
different data types, which can be categorized in the following way:
Numeric data types
SMALLDATETIME Specifies a date and time, with each value being stored as
an integer of 4 bytes. (January 1, 1900 to June 6, 2079)
Database Objects
Tables
Tables are database objects which are used to store data. A table consists of rows and
columns. The columns are also known as attributes. The rows, also known as records, are
groups of related attribute values. The columns of a table are defined with a certain data type.
Tables are created using the CREATE TABLE statement. The general syntax of the CREATE
TABLE statement is:
CREATE TABLE table_name
(
column_name1 type [IDENTITY (seed, increment)] [column_constraint],
column_name2 type [IDENTITY (seed, increment)] [column_constraint],
...
)
Where table_name is the name of the table. A column with the IDENTITY property allows
only integer values, which are usually implicitly assigned by the system. Each value which
should be inserted in the column is calculated by incrementing the last inserted value of the
column. Therefore, the definition of a column with the IDENTITY property contains an initial
and an increment. The default value for the initial and the increment are both 1. The following
restrictions apply to the IDENTITY property:
The column must be numeric. For NUMERIC or DECIMAL data types, the number
of digits to the right of the decimal point must be zero.
There can be at most one column in the table with the IDENTITY property.
The column with this property does not allow NULL values.
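A minimal sketch that respects these restrictions (the table and column names are illustrative):

```sql
-- empno is numeric, NOT NULL, and the only IDENTITY column;
-- it starts at 1 and increments by 1.
CREATE TABLE emp
(
    empno INT IDENTITY(1, 1) NOT NULL,
    ename VARCHAR(10)
)
-- empno is assigned by the system, so only ename is supplied.
INSERT INTO emp (ename) VALUES ('Smith')
```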
CREATE TABLE AND COLUMN CONSTRAINTS
One of the most important specifications made during the creation of tables is the column
constraints. These constraints, also known as integrity constraints, are used to check
modifications or insertions of data in the tables, thereby maintaining the integrity of the
data in the table. The most important benefits of having integrity constraints handled by the RDBMS
are:
Increased reliability of data.
Reduced programming time.
Simple maintenance.
There are two groups of integrity constraints handled by an RDBMS:
Declarative integrity constraints (included in the table definition)
Procedural integrity constraints (handled by triggers)
There are six declarative integrity constraints:
Primary key
Foreign key
Unique
Default
Check
Not null
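All six can be sketched together in one table definition (the names are illustrative, and the definition assumes a dept table already exists):

```sql
CREATE TABLE emp
(
    empno  INT         NOT NULL PRIMARY KEY,               -- primary key / not null
    ename  VARCHAR(10) NOT NULL,                           -- not null
    email  VARCHAR(30) UNIQUE,                             -- unique
    gender CHAR(1)     CHECK (gender IN ('M', 'F')),       -- check
    doj    DATETIME    DEFAULT GETDATE(),                  -- default
    deptno INT         FOREIGN KEY REFERENCES dept(deptno) -- foreign key
)
```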
CREATE TABLE dept
(
deptno INT,
dname VARCHAR(10),
location VARCHAR(10),
UNIQUE(dname)
)
To define the deptno column as foreign key to the deptno column of the dept table
ALTER TABLE emp ADD CONSTRAINT emp_fk_dept FOREIGN KEY(deptno)
REFERENCES dept(deptno)
To add a check constraint to the column gender without validating the existing values
ALTER TABLE emp WITH NOCHECK ADD CONSTRAINT check_gender
CHECK(gender IN (‘M’,’F’))
To disable the above constraint
ALTER TABLE emp NOCHECK CONSTRAINT check_gender
To enable an existing constraint
ALTER TABLE emp CHECK CONSTRAINT check_gender
To delete a column with a constraint, first we have to drop the constraint and then the column
ALTER TABLE emp DROP CONSTRAINT check_gender
ALTER TABLE emp DROP COLUMN gender
To add a primary key constraint to a nullable column, first we have to redefine the column as
not null
ALTER TABLE emp ALTER COLUMN empno INT NOT NULL
ALTER TABLE emp ADD CONSTRAINT emp_pf PRIMARY KEY(empno)
To add a column with a default value and fill the new column with the default value for the
existing rows. If the WITH VALUES clause is omitted, the new column contains NULL in the
existing rows.
ALTER TABLE emp ADD doj DATETIME CONSTRAINT doj_default DEFAULT
GETDATE() WITH VALUES
Deleting Tables
Tables can be deleted using the statement
DROP TABLE table_name1 [, table_name2 …]
All data, indexes and triggers belonging to the removed table are also dropped. In contrast,
views that are defined using the dropped table are not removed. (NOTE: If we want to
delete a table which is being referenced by some other table, we must delete the referencing
tables before deleting the target table.)
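For instance, assuming emp references dept through a foreign key, a sketch of the required order is:

```sql
-- emp references dept, so the referencing table is dropped first.
DROP TABLE emp
-- dept is no longer referenced and can now be dropped.
DROP TABLE dept
```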
After inserting the required value for the identity column we can reset IDENTITY_INSERT to
OFF for automatic insertion of values. SQL Server will continue providing values for the
identity column from the highest value in that column.
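A sketch of the whole sequence, assuming a table emp whose empno column has the IDENTITY property:

```sql
-- Allow an explicit value to be inserted into the identity column.
SET IDENTITY_INSERT emp ON
INSERT INTO emp (empno, ename) VALUES (100, 'Smith')
SET IDENTITY_INSERT emp OFF
-- Automatic numbering resumes from the highest value in the column.
INSERT INTO emp (ename) VALUES ('Jones')
```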
To display only selected columns (for e.g. empno, ename and hiredate)
Select empno, ename, hiredate from emp
E.g. 2 Get the last and first names of all employees with employee numbers greater than or
equal to 15000.
SELECT emp_lname, emp_fname
FROM employee
WHERE emp_no >= 15000
E.g. 3 Get the project names of all projects whose budget exceeds £60,000. The current rate
of exchange is £0.51 per $1.
SELECT project_name
FROM project
WHERE budget*0.51>60000
BOOLEAN OPERATORS
WHERE clause conditions can either be simple or contain multiple conditions. Multiple
conditions can be built using the Boolean operators AND, OR and NOT.
E.g.1 Get the employee and project numbers of all clerks who work on project p2.
SELECT emp_no, project_no
FROM works_on
WHERE project_no = 'p2'
AND job = 'clerk'
E.g.2 Get the employee numbers of all employees who work on project p1 or project p2
(or both).
SELECT project_no, emp_no
FROM works_on
WHERE project_no = 'p1'
OR project_no = 'p2'
The WHERE clause may include any number of the same or different Boolean
operations. The NOT operation has the highest priority, AND is evaluated next, and the OR
operation has the lowest priority.
E.g.1 SELECT * FROM employee
WHERE emp_no=25348 AND emp_lname= ‘Smith’
OR emp_fname= ‘Matthew’ AND dept_no= ‘d1’
In the first SELECT statement, the system evaluates both AND operators first and
then the OR operator. If the same statement is rewritten with parentheses, all expressions
within the parentheses are evaluated first, in sequence from left to right. As you can see, the
first statement returned one row while the parenthesized version returned zero rows.
E.g.2 Get the employee numbers and last names of all employees who do not belong to
department d2.
SELECT emp_no, emp_lname
FROM employee
WHERE NOT dept_no = 'd2'
In this case the NOT operator can be replaced by the comparison operator <>(not equal).
E.g.3 SELECT emp_no,emp_lname
FROM employee
WHERE dept_no <> ‘d2’
E.g.4 Get all columns for employees whose employee numbers are neither 10102 nor
9031.
SELECT * FROM employee
WHERE emp_no NOT IN (10102, 9031)
The BETWEEN operator specifies a range, which determines the lower and upper bounds
of qualifying values.
E.g.1 Get the names and budgets for all projects where the budget is in the range between
$95,000 and $ 120,000 inclusive.
SELECT project_name, budget FROM project
WHERE budget BETWEEN 95000 AND 120000
Like the BETWEEN operator, the NOT BETWEEN operator can be used to search
for column values that do not fall within the specified range.
E.g.1 Get the employee numbers of all analysts who did not enter their project in 1998.
SELECT emp_no FROM works_on
WHERE job = 'analyst'
AND enter_date NOT BETWEEN '01/01/1998' AND '12/31/1998'
Queries Involving Null Values
A NULL in the CREATE TABLE or ALTER TABLE statement specifies that a special value
called NULL is allowed in the column.
Any comparison with a NULL value evaluates to unknown, so such rows are not retrieved
by ordinary comparisons. To retrieve the rows with NULL values in a column, Transact-SQL
includes the operator IS [NOT] NULL.
E.g.1 Get the employee numbers and corresponding project numbers of employees with
unknown jobs who work on project p2.
SELECT emp_no, project_no
FROM works_on
WHERE project_no = 'p2'
AND job IS NULL
LIKE OPERATOR
The LIKE operator compares column values with a specified pattern. For example, the following
query retrieves the names of all employees whose last name begins with 'S'.
SELECT emp_fname, emp_lname FROM Employee
WHERE emp_lname LIKE 'S%'
'%' is a wildcard that represents zero or more characters. Other wildcard operators include
'_', which represents a single character, [], which represents any single character in the
specified range, and [^], which represents any character not within the specified range.
Example: the following query gets all records for the employees whose last name contains
‘h’ as the second character.
SELECT * FROM Employee
WHERE emp_lname LIKE ‘_h%’
Grouping Results
The GROUP BY clause organizes rows into groups, which can then be summarized with
aggregate functions and filtered with the HAVING clause. Consider the following example,
which returns the maximum price for each type of book from the books table:
SELECT type, “Max Price”=MAX(price) FROM books
GROUP BY type
The HAVING keyword can be used to select groups from the intermediate result set. Consider
the following example, which displays the type and the average price of those types where the
average price is greater than 20:
SELECT type, “Average Price”= AVG(price) FROM books
GROUP BY type
HAVING AVG(price) >20
Computing Results: The COMPUTE and COMPUTE BY Clause
The COMPUTE clause is used to generate summarized reports with the help of aggregate
functions. For example, the following query will generate a summarized report of the result
set returned by the SELECT statement.
SELECT type, price FROM books
ORDER BY type
COMPUTE AVG(price) BY type
The average price of all types can be obtained by modifying the above query as
SELECT type, price FROM books
ORDER BY type
COMPUTE AVG(price) BY type
COMPUTE AVG(price)
Sub Queries
Sub queries are SELECT statements that are nested within the WHERE clause of another
SELECT statement. The enclosing SELECT statement is called the outer query and the
nested SELECT statement is called the inner query. The inner query will be evaluated first and the
outer query will be evaluated on the basis of the results of the inner query. Sub Queries are
of two types:
Simple
Co-related
Simple sub queries
In simple sub queries, the inner query is evaluated only once. A co-related sub query
differs from a simple sub query in that its value depends on a variable of the outer query.
Example: Get the name of those employees who work in the department ‘technical’
SELECT emp_fname, emp_lname FROM emp
WHERE dept_no = (SELECT dept_no FROM dept WHERE
dept_name=‘technical’)
Consider another example to get the name of those employees who draw pay higher than
the average pay of the technical dept.
SELECT emp_fname, emp_lname FROM emp
WHERE pay > (SELECT AVG(pay) FROM emp WHERE dept_name = 'technical')
The IN operator is used in a sub query if the inner query returns a set of values and not a
single value. Consider the example where the query gets the records of employees whose
departments are located at 'Ghy':
SELECT * FROM emp
WHERE dept_no IN (SELECT dept_no FROM dept WHERE city = ‘Ghy’)
SUB-QUERIES (WITH EXISTS)
Sub queries which use the EXISTS clause always return data in terms of true or false. It
checks for the existence of data according to the condition specified in the nested query and
passes it to the outer query to produce the result.
E.g. To select the publishers name from publishers table if the city is ‘paris’.
SELECT pub_name FROM publishers
WHERE EXISTS (SELECT * FROM publishers WHERE city=‘paris’)
CO-RELATED SUB-QUERY
A corelated sub query can be defined as a query that depends on the outer query for its
evaluation.
E.g. Find the records from the sales table where the qty is less than the avg qty of sales
for that titles.
SELECT title_id, qty FROM sales A1
WHERE qty < (SELECT AVG(qty) FROM sales A2 WHERE
A1.title_id = A2.title_id)
Implementing Joins
Transact-SQL provides the join operator, which allows retrieval of data from more
than one table. This operator is probably the most important operator for relational database
systems, because it allows data to be spread over many tables and thus achieves a vital
property of database systems – nonredundant data.
The various types of joins are:
Unrestricted join
Natural join
Equi join
Self join
Outer join
Unrestricted Join:
A join that includes more than one table without any condition in the WHERE clause is called
an unrestricted join.
E.g. Consider the following example, where every combination of title name and publisher
name is selected from the titles and publishers tables, with no condition in the WHERE clause:
SELECT t.title, p.pub_name
FROM titles t, publishers p
Equi Join:
A join in which two or more tables are compared for equality is termed an equi join. An equi
join that uses an asterisk (*) in the SELECT list displays redundant column data in the result
set, because the joining columns appear once for each table.
E.g. SELECT *
FROM sales s, titles t, publishers p
WHERE s.title_id = t.title_id AND t.pub_id = p.pub_id
Aggregate Functions
COUNT (*) returns the number of rows in the resulting table. For example, the query:
SELECT COUNT (*) FROM EMP
will give the output:
13
Here the asterisk '*' counts the number of rows in the resulting table.
COUNT ([ALL | DISTINCT] column-name)
It returns the number of values in the expression, either all or distinct.
Further, for example, the query:
SELECT COUNT (DISTINCT DEPTNO) FROM EMP;
will give the output:
3
Here COUNT (DISTINCT) eliminates duplicate field values before calculating the COUNT.
It is to be noted that the COUNT, MAX and MIN functions can be used for both numeric and
character fields, whereas the SUM and AVG functions can only be used for numeric fields.
Mathematical Functions
SQL Server supports various functions that can easily be used to manipulate data.
Mathematical functions are one of them. They are used to solve mathematical
problems, such as calculating the power of a number. Some mathematical functions are
described below:
(a) ABS (numeric- expn): It will return the absolute value of the column or value passed.
For example the following query:
SELECT ABS (-20)
Will give the output as 20.
(b) ACOS | ASIN | ATAN (float-expn): It will return the angle in radians whose cosine,
sine, or tangent is the given floating-point value.
(c) ATAN2 (float-expn1, float-expn2): It will return the angle in radians whose tangent is
float-expn1 / float-expn2.
(d) COS | SIN | COT | TAN (float-expn): It will return the cosine, sine, cotangent, or
tangent of the specified angle, given in radians.
(e) CEILING (numeric-expn): It finds the smallest integer greater than or equal to the
specified value, i.e. numeric-expn. The value can be a column name.
For example the following query:
SELECT CEILING (9227.57)
Will give the output as 9228.
(f) DEGREES (numeric-expn): It will convert the numeric-expn from radians to degrees.
(g) EXP (float-expn): It will return the exponential value of the specified value, i.e. float-expn.
(h) FLOOR (numeric-expn): It finds the largest integer less than or equal to the specified
value, i.e. numeric-expn.
For example the following query:
SELECT FLOOR (9227.56)
will give the output as 9227.
(i) LOG (float-expn): It will return the natural logarithm of the specified value in
parentheses.
(j) LOG10 (float-expn): It will return the base-10 logarithm of the float-expn enclosed in
parentheses.
(k) PI (): It is a constant function; it returns the constant 3.141592653589793.
(l) POWER (num-expn1, num-expn2): This function returns num-expn1 raised to the
power num-expn2.
For example the following query:
SELECT POWER (2,8)
will give the output as 256.0
(m) RADIANS (num-expn): It is used to convert the specified value from degrees to radians.
(n) RAND ([seed]): It will return a random float number between 0 and 1.
(o) ROUND (num-expn, length): This function is used to round off the specified value to
the given length. For example the following query:
SELECT ROUND (1234.56, 0)
will give the output as 1235.00, whereas
SELECT ROUND (1234.56, 1)
will give the output as 1234.60, and
SELECT ROUND ($1234.56, 1)
will give the output as 1,234.60
(p) SIGN (num-expn): It will return +1, -1, or 0 according to whether the value is positive, negative, or zero.
(q) SQUARE (float-expn): It returns the square of specified value given in parenthesis.
For example the following query:
SELECT SQUARE (15)
will give the output as 225.
(r) SQRT (float-expn): It will return the square root of the specified value, i.e. float-expn.
For example the following query:
SELECT SQRT (64)
will give the output as 8.
String Functions
String functions are used to manipulate strings in different forms. Some of the string
functions are as follows:
(a) LOWER (string): This function converts all the characters in the string to lowercase
letters. For example
the query:
SELECT LOWER ('SHALINI')
will give the output: shalini
Similarly, the query:
SELECT LOWER (JOB) FROM EMP
will give the output:
LOWER (JOB)
...................
clerk
salesman
manager
salesman
manager
analyst
salesman
clerk
clerk
analyst
clerk
(b) UPPER (String) converts all the characters in the string to upper case letters.
For example the query:
SELECT UPPER (JOB) FROM EMP
will give the output
UPPER (JOB)
..............
CLERK
SALESMAN
SALESMAN
MANAGER
SALESMAN
MANAGER
MANAGER
ANALYST
SALESMAN
CLERK
CLERK
ANALYST
CLERK
(c) SOUNDEX (string): This function returns a phonetic representation of each word
and allows you to compare words that are spelled differently but sound alike. For example the
query,
SELECT ENAME FROM EMP WHERE SOUNDEX (ENAME)
= SOUNDEX ('RAJ')
will give the following output:
ENAME
RAJAN
(d) SUBSTRING (string, M, N): This function returns a sub-string, N characters long,
from the string, starting from position M. If the number of characters, N, is not specified, the
string is extracted from position M to the end. Note that blank spaces are also counted.
For example the query:
SELECT SUBSTRING ('I WILL KILL YOU', 8, 4)
will give the following output:
KILL
(e) 'expn' + 'expn': This operator is used to concatenate two or more character strings.
For example the query:
SELECT 'HELLO ' + 'SHALINI'
will give the output as HELLO SHALINI.
(f) ASCII (char-expn): This function returns the ASCII code value of the leftmost
character.
For example the query:
SELECT ASCII (‘HELLO’)
will give the output as 72.
(g) CHAR (integer-expn): This function returns the character equivalent of the ASCII
code value.
(h) CHARINDEX (pattern, expn): This string function returns the starting position of the
specified pattern.
(i) DIFFERENCE (char-expn1, char-expn2): It compares two strings and evaluates
their similarity. It returns a value from 0 to 4, with 4 being the best match.
(j) LEFT (char-expn, int-expn): It returns the leftmost int-expn characters of the
character string.
For example the following query:
SELECT LEFT ('HELLO', 3)
will give the output as HEL.
Date Functions
Date functions are used to manipulate the system date or date-type data fields. The
datetime values can be manipulated using the date functions. They can be used in the
column-list, in the WHERE clause, or wherever an expression can be used. The syntax of a date
function is as follows:
SELECT date-func (parameters)
The datetime values passed as parameters to date-func must be enclosed between single or
double quotation marks. Some functions take a parameter called datepart. The following are
the dateparts, their abbreviations and values.
datepart Abbreviation Values
day dd 1-31
day of year dy 1-366
hour hh 0-23
millisecond ms 0-999
minute mi 0-59
month mm 1-12
quarter qq 1-4
second ss 0-59
week wk 1-53
weekday dw 1-7 (Sun-Sat)
year yy 1753-9999
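For instance, these abbreviations can be passed to the DATEPART function described below (the results depend on the current date):

```sql
SELECT DATEPART(yy, GETDATE())   -- the year, e.g. 2005
SELECT DATEPART(qq, GETDATE())   -- the quarter, 1 to 4
SELECT DATEPART(dw, GETDATE())   -- the weekday, 1 (Sun) to 7 (Sat)
```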
(a) DATEADD (datepart, number, date): It adds the given number of dateparts to the
date. It means that any datepart like dd, mm, yy, mi, etc. can be changed in the date by
adding the number to it.
For example the following query
SELECT DATEADD (mm, 6, '1/1/07')
will generate the output as Jul 1 2007 12:00AM.
Another example as follows:
SELECT DATEADD (mm, 5, '1/1/07')
will generate the output as Jun 1 2007 12:00AM.
Note that the function takes the date in mm/dd/yy format.
(b) DATEDIFF (datepart, date1, date2): It returns the number of dateparts (days,
months, years, etc.) between the two dates.
For example the following query
SELECT DATEDIFF (mm, '1/1/07', '12/31/08')
will return the result as 23.
(c) DATENAME (datepart, date): It returns a character string (such as the name of the
month) for the specified datepart of the date listed.
(d) DATEPART (datepart, date): It returns the integer value for a specified datepart of
the date listed.
(e) DAY (date): It is used to get the integer value of the day.
(f) MONTH (date): It returns an integer value representing the month.
(g) YEAR (date): It returns an integer value representing the year.
(h) GETDATE (): It returns the current date and time in internal format.
Eliminating Duplicate Information
Preventing the Selection of Duplicate Rows
Unless you indicate otherwise, SQL Server displays the result of a query without eliminating
duplicate entries.
For instance, the following query:
SELECT DEPTNO
FROM EMP
Produces the following result.
DEPTNO
20
30
30
20
30
30
10
20
10
30
20
30
20
10
To eliminate duplicate values in the result, include the DISTINCT qualifier in the SELECT
command as follows:
SELECT DISTINCT DEPTNO
FROM EMP
This time the result will not contain duplicate values.
DEPTNO
20
30
10
Multiple columns may be specified after the DISTINCT qualifier, and DISTINCT
affects all the selected columns.
To display the distinct values of DEPTNO and JOB, enter:
SELECT DISTINCT DEPTNO, JOB
FROM EMP
The result is given below:
DEPTNO JOB
10 CLERK
10 MANAGER
10 PRESIDENT
20 ANALYST
20 CLERK
20 MANAGER
30 CLERK
30 MANAGER
30 SALESMAN
Sorting of Data
Sorting Data Using the ORDER BY Clause
In general, the rows displayed from a query do not have any specific order, either ascending
or descending. But that order can be controlled for the selected rows by adding the ORDER
BY clause to the SELECT command. So, the ORDER BY clause is used to sort the rows on a
particular field. If used, the ORDER BY clause must always be the last clause in the SELECT
statement.
The basic syntax for using the ORDER BY clause is as follows:
SELECT column-list
FROM table-list
[ORDER BY column-name | column-list-number [ASC | DESC]]
You can have any number of columns in your ORDER BY list, as long as they are no wider
than 900 bytes. You can also specify column names or use the ordinal number of the
columns in the column-list.
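As a quick sketch of ordering by ordinal number (here 3 refers to the third column in the column-list, SAL):
SELECT ENAME, JOB, SAL
FROM EMP
ORDER BY 3 DESC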
For example, in the table EMP, the query:
SELECT ENAME, JOB, SAL, DEPTNO
FROM EMP
ORDER BY ENAME
sorts the rows alphabetically by ENAME.
The default sort order is ASC, which defines the following sort order:
Numeric values lowest first
Date values earliest first
Character values alphabetically (a to z)
To reverse the order, the command word DESC is specified after the column name in the
ORDER BY clause.
To reverse the order of the DOJ column, so that the latest dates are displayed first, enter:
SELECT ENAME, JOB
DOJ FROM EMP
ORDER BY DOJ DESC
ENAME JOB DOJ
SUDHA CLERK 23-Jul-84
PRASHANT MANAGER 11-Jun-84
RAMESH SALESMAN 4-Jun-84
AMAN CLERK 4-Jun-84
MINU MANAGER 14-May-84
RAKESH SALESMAN 26-Mar-84
SUNIL ANALYST 5-Mar-84
RAJAN SALESMAN 5-Dec-83
NILU ANALYST 5-Dec-83
VINOD CLERK 21-Nov-83
KIRON MANAGER 31-Oct-83
ANIL SALESMAN 15-Aug-83
VIBHOR CLERK 13-Jun-83
It is possible to ORDER BY more than one column. The limit is the number of columns in the
table. In the ORDER BY clause, specify the columns to order by, separated by commas.
To order by two columns and display in reverse order of salary, enter:
SELECT DEPTNO, ENAME, JOB, SAL
FROM EMP
ORDER BY DEPTNO, SAL DESC
The result is shown below:
DEPTNO ENAME JOB SAL
10 MINU MANAGER 2,450.00
10 VINOD CLERK 1,300.00
20 SUNIL ANALYST 3,000.00
20 NILU ANALYST 3,000.00
20 KIRON MANAGER 2,975.00
20 AMAN CLERK 1,100.00
20 VIBHOR CLERK 800.00
30 PRASHANT MANAGER 2,850.00
30 ANIL SALESMAN 1,600.00
30 RAMESH SALESMAN 1,500.00
30 RAKESH SALESMAN 1,250.00
30 RAJAN SALESMAN 1,250.00
30 SUDHA CLERK 950.00
Aggregate Functions
SQL Server aggregate functions produce a single value for an entire group of table entries.
Aggregate functions are normally used with the GROUP BY clause and in the HAVING
clause or the column-list.
Aggregate or column functions can also be used with the SELECT command. These
functions are SUM, AVG, MAX, MIN, COUNT, etc.
SUM
SUM ([ALL | DISTINCT] column-name)
The SUM function is used to calculate the total of all the selected values of a given column.
It returns the total of the values in the numeric expression, either all or distinct.
For example the query:
SELECT SUM (SAL) FROM EMP
will give the following output:
23000.00
AVG ([ALL | DISTINCT] column-name)
This function is used to calculate the average of all selected values. It returns the average
of the values in the numeric expression, either all or distinct.
For example the query:
SELECT AVG (SAL) FROM EMP
will give the following output:
1769.23
MAX (column-name)
This function is used to calculate the largest of all selected values of a given column or given
expression.
For example the query,
SELECT MAX (SAL) FROM EMP
will give the output as shown below:
3000.00
MIN (column-name)
This function is used to calculate the smallest of all selected values of a given column or
given expression.
For example the query:
SELECT MIN (SAL) FROM EMP;
will give the output as shown below
800
Count (*)
This function is used to count the number of rows in the output table.
For example the query :
SELECT COUNT (*) FROM EMP
will give the output:
13
Here the asterisk ‘*’ counts the number of rows in the resulting table.
Count ( [ ALL/ DISTINCT] column-name)
It returns the number of values in the expression, either all or distinct.
Further, for example the query:
SELECT COUNT (DISTINCT DEPTNO) FROM EMP;
will give the following output:
3
Here COUNT (DISTINCT) eliminates duplicate field values before calculating the COUNT.
It is to be noted that the COUNT, MAX and MIN functions can be used for both numeric and
character fields, whereas the SUM and AVG functions can only be used for numeric fields.
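Since aggregate functions are most often combined with GROUP BY, the following sketch (using the same EMP table as above) computes one row of aggregates per department:
SELECT DEPTNO, COUNT (*), SUM (SAL)
FROM EMP
GROUP BY DEPTNO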
Using Transact-SQL
Introduction
You are already familiar with the commands used to modify the database, such as INSERT,
UPDATE and DELETE. We rarely, if ever, give our database users DELETE permissions; it
is far too easy for accidents to happen with that much power. Instead, create stored
procedures that perform the delete and ensure that data integrity is maintained.
This Unit focuses on the programming features of the Transact-SQL language. You start off
by learning about control-of-flow language elements, if-else blocks and while statements.
Wherever a single statement is expected, you can use just a single statement. To execute
multiple statements together, you must enclose them between the keywords BEGIN and
END. This construct is used in some of the following examples.
Print Statement
Till now, you have used only the SELECT statement to return information from SQL Server
to your client program. SQL Server also provides a PRINT statement.
SYNTAX : PRINT {'any ASCII text' | @local-variable | @@global-variable}
You can print an ASCII string (string constant) or a character-type variable. For example:
PRINT 'Hello'
PRINT @@version
To print something more complex, you must build the string in a character variable and then
print that variable.
Example
USE pubs
DECLARE @msg varchar (50),
@numWA tinyint
SELECT @numWA = COUNT (*) FROM stores
WHERE state = 'WA'
SELECT @msg = 'There are ' + CONVERT (varchar (3), @numWA)
+ ' stores in Washington'
PRINT @msg
(1 row (s) affected)
(1 row (s) affected)
IF...ELSE Statement
Syntax
IF Boolean-expression
{sql-statement | statement-block}
[ELSE
{sql-statement | statement-block}]
Example
The statements given below call procedure uspWeeklyReport if today is Friday; otherwise,
uspDailyReport is executed:
IF (DATENAME (dw, GETDATE ()) = 'Friday')
BEGIN
PRINT 'Weekly report'
EXEC uspWeeklyReport
END
ELSE
BEGIN
PRINT 'Daily report'
EXEC uspDailyReport
END
The Boolean expression that follows the IF can include a SELECT statement. If that SE-
LECT statement returns a single value, it can then be compared with another value to pro-
duce a Boolean expression.
Example
IF (SELECT AVG (price) FROM titles) > $15
PRINT ‘ Hold a big sale’
ELSE
PRINT ‘Time to raise prices’
If more than one value is returned by the SELECT statement following IF, then a special
form of IF, called IF EXISTS, is used.
Syntax
IF EXISTS(SELECT statement)
{sql-statement/statement-block}
[ ELSE
{ sql-Statement /Statement-block}]
IF EXISTS returns true if the SELECT statement returns any rows at all, and returns false if
the SELECT statement returns no rows.
Example
The following example returns all the information about any book published by the publisher
with ID 8453.
IF EXISTS (SELECT * FROM titles WHERE
pub-id = ‘8453’)
BEGIN
PRINT ‘The books are as follows :’
SELECT * FROM titles WHERE
pub-id = ‘8453’
END
ELSE
PRINT ‘No book is found for that publisher’
Case Expressions
The CASE expression allows T-SQL expressions to be simplified for conditional values. It
allows a statement to return different values depending on the value of a controlling
expression or condition.
Syntax
CASE input-expression
WHEN when-expression-1 THEN result-expression-1
[WHEN when-expression-2 THEN result-expression-2]
[.....]
[ELSE result-expression-N]
END
A searched CASE expression can also be used.
Syntax
CASE
WHEN Boolean-expression-1 THEN result-expression-1
[WHEN Boolean-expression-2 THEN result-expression-2]
[........]
[ELSE result-expression-N]
END
A simple CASE expression compares an initial expression with each expression in the list
and returns the associated result expression. If none of the expressions match, the result
expression after the keyword ELSE is returned.
Example
SELECT title_id, type = CASE type
WHEN 'business' THEN 'Business Book'
WHEN 'psychology' THEN 'Psychology Book'
WHEN 'mod_cook' THEN 'Modern Cooking Book'
WHEN 'popular_comp' THEN 'Popular Computing Book'
WHEN 'UNDECIDED' THEN 'No type determined yet'
END
FROM titles
Sometimes a CASE expression has the following form:
CASE
WHEN expr1 IS NOT NULL THEN expr1
[WHEN expr2 IS NOT NULL THEN expr2]
[......]
[ELSE exprN]
END
SQL Programming:
BATCH: A batch is a set of SQL statements that are sent together to the SQL Server for
execution.
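As a sketch, in tools such as the Query Analyzer the word GO marks the end of a batch, so the two statements below travel to the server together as a single batch:
SELECT COUNT (*) FROM titles
SELECT COUNT (*) FROM authors
GO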
BLOCK OF STATEMENTS:
A block allows the building of units with one or more SQL statements. Every block begins
with the BEGIN statement and ends with an END statement.
BEGIN
Statement – 1
Statement – 2
…………………
…………………
END
IF STATEMENT
This statement is used to execute a block if the condition following the keyword IF
evaluates to true.
E.g.
IF (SELECT sum (price) FROM titles where type= ‘business’)<60
BEGIN
PRINT 'The sum of these business books is less than 60'
SELECT title FROM titles
WHERE type= ‘business’
END
WHILE STATEMENT
This statement executes a block while the Boolean expression following it evaluates to true.
E.g.
WHILE (SELECT AVG(price) FROM titles)<60
BEGIN
SELECT title FROM titles
IF (SELECT MAX(price) FROM titles)>30
BREAK
ELSE
CONTINUE
END
Working with Views
Creating views
Views are database objects resembling a virtual table which are derived from one or more
base tables. Views do not contain any records. The records retrieved through the views are
actually retrieved from the underlying base table(s). The only information concerning the
views that is physically stored is the name of the view and the way in which the rows are to be
retrieved from the base tables. All the DML operations (INSERT, UPDATE, DELETE etc) that
are performed in the tables can be performed in the views as well. The changes being made
through the views are actually reflected in the base tables. The general syntax of creating
views is:
CREATE VIEW view_name
[WITH ENCRYPTION]
AS
select_statement
[WITH CHECK OPTION]
Where view_name is the name of the view to be created. The select_statement is the
SELECT statement which is used to retrieve the result-set of the view from the underlying
tables. The WITH ENCRYPTION option encrypts the select statement in the system table
syscomments. This option can be used to enhance the security of the SQL Server system.
Purpose of views
Views are used for the following purposes:
To restrict the use of particular columns and/or rows of a table. Therefore, views can
be used for controlling access to a particular part of one or more tables.
To hide the details of complicated queries. If database applications need queries
that involve complicated join operations, the creation of corresponding views can
simplify the use of such queries.
To restrict insertion and updation of values of certain ranges (with the use of WITH
CHECK OPTION).
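To illustrate the last point, a sketch (the view name, table and price range here are hypothetical) of a view created with WITH CHECK OPTION, which rejects any INSERT or UPDATE made through the view that falls outside its defining condition:
CREATE VIEW cheap_products
AS
SELECT prod_id, price FROM products WHERE price < 100
WITH CHECK OPTION
An attempt to insert a row with price 150 through cheap_products will fail.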
DML statements and views
The following example creates a view named dept10 consisting the empno, ename, hiredate
and deptno columns of all records of the employees of department no. 10.
CREATE VIEW dept10
AS
select empno,ename,hiredate,deptno from emp where deptno=10
After the above view is created, it can be used to retrieve information from the base table,
for example:
SELECT * FROM dept10
TRIGGER
A trigger is composed of two parts:
A SQL command to activate the trigger. The INSERT, DELETE, and UPDATE commands
can activate a trigger. The same trigger can be invoked when more than one action occurs.
An action executed by the trigger. The trigger executes a Transact-SQL block.
Limitations of Triggers
A trigger can execute the commands contained in its body or activate stored procedures
and other triggers in order to execute certain tasks.
Any SET command can be specified inside a trigger. It remains active during the execution
of the trigger.
You cannot create a trigger for a view. However, when the view is used, the triggers of the
base table normally are activated.
When a trigger is executed, the results are returned to the application that called it. To
avoid returning results, do not use SELECT commands that return results; assign their
contents to variables instead.
The Transact-SQL commands listed below cannot be used inside a trigger:
ALTER DATABASE ALTER PROCEDURE ALTER TABLE
ALTER TRIGGER ALTER VIEW CREATE DATABASE
CREATE DEFAULT CREATE INDEX CREATE PROCEDURE
CREATE RULE CREATE TABLE CREATE TRIGGER
Creating a Trigger
The syntax to create a trigger is:
CREATE TRIGGER trigger_name ON table_name FOR [INSERT | DELETE |
UPDATE]
AS commands
e.g.
CREATE TRIGGER newprod on test1 FOR INSERT
as
print “A new product was inserted”
To test the functioning of this trigger, execute the following in the Query Analyzer:
INSERT test1 VALUES (1, 'prod01', 10)
Now let’s create a trigger that is activated when the UPDATE command is executed. The
code for the trigger is the following:
CREATE TRIGGER updprod ON test1 FOR UPDATE
AS
Print ‘A product was updated’
Now execute the following code in the Query Analyzer to increase by 20 percent the value of
the Cost field of the codprod 2 record:
Update test1 Set cost=cost * 1.2 Where codprod='2'
Select * from test1
And see the result:
codprod nameprod cost
———— —————- ———-
1 prod01 10
2 prod02 24
The Inserted and Deleted Tables
When a trigger is executed, SQL Server creates two temporary tables that only
exist when the trigger is activated. One is called Inserted and the other is Deleted. When an
INSERT or UPDATE command is executed, the records that are created or changed are
copied to the Inserted table. When a DELETE command is executed, the deleted lines are
copied to the Deleted table.
These tables are useful when you want to have the lines added, deleted, or altered
during the process of executing the trigger. They are particularly useful for copying lines from
one table to another. To check the existence of these tables we will change the updprod
trigger as:
ALTER TRIGGER updprod ON test1 FOR UPDATE
As
Print ’A Product was updated’
SELECT * FROM INSERTED
The following code updates the record with prod02 and shows the result through the selection
of the altered record in Test1. The trigger shows the message “A product was updated” and
the contents of the Inserted table.
Update test1 Set cost=cost*1.2 Where codprod=’2’
Select * from test1
Eg. 1. Consider the Use of Triggers in the case of the following Tables :
ITEM: cItem_code, vItem_name, iQOH, mPrice
TRANSAC: cTransaction_code, dTran_Date, cItem_Code, iQty, mTotalAmt
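A plausible sketch of such a trigger (the table and column names are taken from the layout above; the trigger name and the stock-updating logic are assumed) would decrement the quantity on hand whenever a transaction is recorded:
CREATE TRIGGER trgUpdateStock ON TRANSAC FOR INSERT
AS
UPDATE ITEM
SET iQOH = ITEM.iQOH - i.iQty
FROM ITEM
JOIN INSERTED i ON ITEM.cItem_code = i.cItem_Code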
Changing a Trigger
A trigger can be changed directly with the ALTER TRIGGER command, as shown below.
Another option is to eliminate the trigger and create it again. This operation can also be
performed in the Trigger Properties dialog box. The syntax is:
ALTER TRIGGER trigger_name ON table_name FOR [INSERT | DELETE | UPDATE]
AS commands
Deleting a Trigger
In order to delete a trigger, first expand the Databases folder, expand the database in which
the table containing the trigger belongs, and then click Tables. In the details pane, right-click
the table in which the trigger exists. Point to Task, and then click Manage Triggers. In the
Name list, select the name of the trigger to delete, and select Delete. You can also use the
SQL DROP TRIGGER statement:
DROP TRIGGER trigger_name
Managing Database Transactions
Transactional control is the ability to manage the various transactions that may occur within
a relational database management system, such as the INSERT, UPDATE or DELETE of data. When
a transaction completes successfully, the target table is not changed immediately. Certain
control commands are used to finalize the transaction by either saving the changes made by
the transaction or by reversing the same. There are three commands used to control trans-
actions:
a. COMMIT
b. ROLLBACK
c. SAVE
The COMMIT Command
This command is used to save the changes made to the database tables since the last
commit or rollback statement. For example, consider the statement:
DELETE Products
WHERE cost<14
which deletes, say, eight rows from the table.
A COMMIT statement is issued to save the changes to the database, completing the
transaction
COMMIT
The ROLLBACK Command
This command is used to undo transactions that have not yet been saved to the database.
For example, consider the statement:
Update Products
Set Price=34.45
Where ProductID='11234'
A select statement will show the new values in the table if you execute the statement
Select * from Products
A rollback statement will undo the change made to the table
ROLLBACK
To verify that the changes have not occurred in the database, use the select statement
again. You will find that the original values are returned.
The Save command
A transaction can be rolled back to a savepoint. Consider the following example, which sets
savepoints and rolls a transaction back to one of them.
BEGIN TRANSACTION
SAVE TRANSACTION s1
DELETE Products WHERE ProductID='11234'
SAVE TRANSACTION s2
DELETE Products WHERE ProductID='11256'
ROLLBACK TRANSACTION s2
The changes made by the second DELETE statement will be undone.
Stored Procedures
Stored procedures are precompiled Transact-SQL (T-SQL) statements stored in a SQL
Server database. Because stored procedures are precompiled, they usually provide the
best performance of any type of query. Many system stored procedures, defined with an
sp_ prefix, gather information from system tables and are especially useful for
administration. You can create your own user-defined stored procedures as well. Stored
procedures are fast-running sets of T-SQL commands stored in a SQL Server database.
The first time a stored procedure is executed, it is compiled into a procedure plan and then
run. This saves the time of reparsing, resolving and compiling a query tree every time the
stored procedure is run.
Another benefit of a stored procedure is that after it’s executed, the procedure plan is stored
in the procedure cache. This means that the next time you use that stored procedure in the
same session, it will be read directly from the cache and run.
Syntax
CREATE PROC[EDURE] procedure-name [; number]
[{@parameter data-type} [VARYING] [= default] [OUTPUT]] [, ...n]
[WITH {RECOMPILE | ENCRYPTION | RECOMPILE, ENCRYPTION}]
[FOR REPLICATION]
AS sql-statement [...n]
Example
CREATE PROCEDURE pAuthors
AS SELECT au_fname, au_lname
FROM authors
ORDER BY au_lname DESC
This command did not return data, and it did not return any rows.
EXEC pAuthors
au_fname au_lname
Yokomoto Akiko
Stringer Dirk
[...] [...]
Blotchet-Halls Reginald
Bennet Abraham
(23 row (s) affected)
The result is a two-column table with the last and first names shown in descending order.
COMPONENTS OF A STORED PROCEDURE
Parameters:
Parameters are used to establish connection between a stored procedure and the outside
world. When a program executes a stored procedure, it can pass values to it in the form of
input parameters and receive values from it in the form of output parameters.
Parameters must have a unique name and always start with a ‘@’ symbol followed by the
data type definition. All the parameters created are considered input by default. An output
parameter can be created by using the keyword OUTPUT with the parameter.
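A minimal sketch (the procedure name is hypothetical; the stores table is from the pubs database used earlier) with one input and one output parameter:
CREATE PROCEDURE upCountByState
@state char (2),
@cnt int OUTPUT
AS
SELECT @cnt = COUNT (*) FROM stores WHERE state = @state
The caller supplies the input value and receives the output value as follows:
DECLARE @n int
EXEC upCountByState 'WA', @n OUTPUT
PRINT @n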
Return Codes:
Stored procedures can return an integer type of value called return code to indicate whether
the execution of the procedure was successful. For example:
Value Meaning
0 Procedure executed successfully
1 Input Parameter not specified
2 Invalid parameter contents
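A sketch of setting and checking a return code with the RETURN statement (the procedure is hypothetical; the values follow the table above):
CREATE PROCEDURE upCheckParam
@p int = NULL
AS
IF @p IS NULL
RETURN 1
RETURN 0
GO
DECLARE @rc int
EXEC @rc = upCheckParam
IF @rc = 1
PRINT 'Input parameter not specified'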
Apart from the above method, you can also use WITH RECOMPILE when you execute the
stored procedure.
SQL Cursors
Cursors are a way of taking a set of data and being able to interact with a single record at a
time. It doesn't happen nearly as often as one tends to think, but there are indeed times
when you just can't obtain the results you want by modifying or even selecting the data as
an entire set. The set is generated by something all of the rows have in common (as defined
by a SELECT statement), but then you need to deal with those rows on a one-by-one basis.
The result set you place in a cursor has several distinct features that set it apart from a
normal SELECT statement:
You declare the cursor separately from actually executing it.
The cursor and, therefore, its result set are named at declaration-you then refer to it
by name.
The result set in a cursor, once opened, stays open until you close it.
Cursors have a special set of commands used to navigate the recordset.
While SQL Server has its own engine to deal with cursors, there are actually a few different
object libraries that can also create cursors in SQL Server:
SQL Native Client (used by ADO.NET)
OLE DB (used by ADO)
ODBC ( used by RDO, DAO, and in some cases, OLE DB/ADO)
DB-Lib (used by VB-SQL)
These are the libraries that client applications typically use to access individual records.
Each provides its own syntax for navigating the recordset and otherwise managing the
cursor. Each, however, shares the same set of basic concepts, so once you have got one
object model down for cursors, you're most of the way there for all of them.
Every data access API out there (ADO. NET,ADO, ODBC,OLE DB etc.) returns
data to a client application or component in a cursor-it’s simply the only way that non-SQL
programming languages can currently deal with things. This is the source of a big difference
between this kind of cursor and SQL Server cursors. With SQL Server cursors, you usually
have a choice to perform things as a set operation, which is what SQL Server was designed
to do. With the API-based cursors, all you have is cursors, so you don’t have the same
cursors versus no cursor debate that you have in your server-side activities.
The client-side part of your data handling is going to be done using cursors-that’s a given, so
don’t worry about it. Instead, worry about making the server side of your data access as
efficient as possible- that means not using cursors on the server side if you can possibly help
it.
The Lifespan of a Cursor
Cursors have lots of little pieces to them, but I think that it’s best if we get right into looking
first at the most basic form of cursor and then build up from there.
Before we get into the actual syntax though, we need to understand that using a cursor
requires more than one statement-indeed, it takes several. The main parts include:
The Declaration
Opening
Utilizing/Navigating
Closing
Deallocating
That being said, the syntax for declaring a cursor looks like this:
DECLARE <cursor name> CURSOR
FOR <select statement>
Keep in mind that this is the super-simple rendition: create a cursor using defaults
wherever possible. We'll look at more advanced cursors a little later in the chapter.
The cursor name is just like any other variable name and, other than not requiring the '@'
prefix, it must obey the rules for SQL Server naming. The SELECT statement can be any
valid SELECT statement that returns a result set. Note that some result sets will not,
however, be updatable. (For example, if you use a GROUP BY, then what part of the group
is updated? The same holds true for calculated fields, for much the same reason.)
We’ll go ahead and start building a reasonably simple example. For now, we’re not really
going to use it for much, but we’ll see later that it will be the beginning of a rather handy tool
for administering your indexes :
DECLARE @SchemaName varchar (255)
DECLARE @TableName varchar (255)
DECLARE @IndexName varchar (255)
DECLARE @Fragmentation float
DECLARE TableCursor CURSOR FOR
SELECT SCHEMA_NAME (CAST (OBJECTPROPERTYEX (i.object_id,
'SchemaId') AS int)),
OBJECT_NAME (i.object_id),
i.name,
ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID (), NULL, NULL, NULL, NULL)
AS ps
JOIN sys.indexes AS i
ON ps.object_id = i.object_id
AND ps.index_id = i.index_id
WHERE avg_fragmentation_in_percent > 30
Note that this is just the beginning of what you will be building. One of the first things you
should notice about cursors is that they require a lot more code than the usual SELECT statement.
We've just declared a cursor called TableCursor that is based on a SELECT statement that
will select the fragmented indexes in our database. We also declare holding variables that
will contain the values of the current row while we are working with the cursor.
Just declaring the cursor isn’t enough though-we need to actually open it:
OPEN TableCursor
This actually executes the query that was the subject of the FOR clause, but we still don't
have anything in place with which to work with the rows. For that, we need to do a couple of things:
Grab-or FETCH-our first record
Loop through, as necessary, FETCHing the remaining records
We issue our first FETCH; this is the command that says to retrieve a particular record. We
must also say into which variables we want to place the values:
FETCH NEXT FROM TableCursor INTO @SchemaName, @TableName,
@IndexName, @Fragmentation
Now that we have a first record, we’re ready to move onto performing actions against the
cursor set:
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT @SchemaName + '.' + @TableName + '.' + @IndexName + ' is '
+ CAST (@Fragmentation AS varchar) + '% fragmented'
FETCH NEXT FROM TableCursor INTO @SchemaName, @TableName,
@IndexName, @Fragmentation
END
Every time we fetch a row, @@FETCH_STATUS is updated to tell us how our fetch went.
The possible values are:
0 Fetch succeeded-Everything’s fine.
-1 Fetch failed-Record missing (you’re not at the end, but a record has been deleted
since you opened the cursor).
-2 Fetch failed-This time it’s because you’re beyond the last (or before the first) record
in the cursor.
Once we exit this loop, we are, for our purposes here, done with the cursor, so we’ll close it:
CLOSE TableCursor
Closing the cursor does not, however, free up the memory associated with that cursor. It
does free up the locks associated with it. In order to be sure that you’ve totally freed up the
resources used by the cursor, you must deallocate it:
DEALLOCATE TableCursor
So, let’s bring it all together just for clarity :
DECLARE @SchemaName varchar (255)
DECLARE @TableName varchar (255)
DECLARE @IndexName varchar (255)
DECLARE @Fragmentation float
DECLARE TableCursor CURSOR FOR
SELECT SCHEMA_NAME (CAST (OBJECTPROPERTYEX (i.object_id, 'SchemaId') AS
int)),
OBJECT_NAME (i.object_id),
i.name,
ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID (), NULL, NULL, NULL, NULL) AS ps
JOIN sys.indexes AS i
ON ps.object_id = i.object_id
AND ps.index_id = i.index_id
WHERE avg_fragmentation_in_percent > 30
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @SchemaName, @TableName,
@IndexName, @Fragmentation
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT @SchemaName + '.' + @TableName + '.' + @IndexName + ' is ' +
CAST (@Fragmentation AS varchar) + '% fragmented'
FETCH NEXT FROM TableCursor INTO @SchemaName, @TableName,
@IndexName, @Fragmentation
END
CLOSE TableCursor
DEALLOCATE TableCursor
We now have something that runs, but as we’ve created it at the moment, it’s really nothing
more than if we had just run the SELECT statement by itself (technically, this isn’t true since
we can’t “PRINT” a SELECT statement, but you could do what amounts to the same thing).
What’s different is that, if we so chose, we could have done nearly anything to the individual
rows. Let’s go ahead and illustrate this by completing our little utility.
In days of old, there was no single statement that would rebuild all the indexes in an entire
database (fortunately, we now have an option in DBCC INDEXDEFRAG to do an entire
database). Keeping your indexes defragmented was, however, a core part of administering
your system. The cursor example we’re using here is something of a descendent of what
was the common way of getting this kind of index defragmentation done. In this newer ver-
sion, however, we’re making use of specific fragmentation information, and we’re making it
possible to allow for the use of ALTER INDEX (which allows for more options in how exactly
to do our defragmentation) instead of DBCC INDEXDEFRAG.
Okay, so we have a few different methods for rebuilding or reorganizing indexes without
entirely dropping and recreating them. ALTER INDEX is the most flexible in terms of letting
you select different underlying methods of defragmenting (online or offline, complete
rebuild or just a reorganization of what's there, etc.), so we're going to leverage this way of
doing things. The simple version of the syntax for ALTER INDEX looks like this:
ALTER INDEX <index name> | ALL
ON <object>
{ REBUILD | REORGANIZE }
Again, this is the hyper-simple version of ALTER INDEX.
The problem with trying to use this statement to rebuild all the indexes on all of your tables
is that it is designed to work on one table at a time. You can use the ALL option instead of
the index name if you want to rebuild all the indexes for a table, but you can't leave off the
table name to rebuild all the indexes for all the tables. Indeed, even if we had used a tool
like DBCC INDEXDEFRAG, which can do an entire database but just doesn't have as many
options, it would still be an all-or-nothing thing. That is, we can't tell it to do just the tables
above a certain level of fragmentation, or to exclude particular tables that we may want to
leave fragmented.
Remember that there are occasionally times when fragmentation is a good thing. In
particular, it can be helpful in tables where we are doing a large number of random inserts,
as it reduces the number of page splits.
Our cursor can get us around this by just dynamically building the ALTER INDEX command:
DECLARE @SchemaName varchar (255)
DECLARE @TableName varchar (255)
DECLARE @IndexName varchar (255)
DECLARE @Fragmentation float
DECLARE @Command varchar (512)
DECLARE TableCursor CURSOR FOR
SELECT SCHEMA_NAME (CAST (OBJECTPROPERTYEX (i.object_id, 'SchemaId') AS
int)),
OBJECT_NAME (i.object_id),
i.name,
ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID (), NULL, NULL, NULL, NULL) AS ps
JOIN sys.indexes AS i
ON ps.object_id = i.object_id
AND ps.index_id = i.index_id
WHERE avg_fragmentation_in_percent > 30
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @SchemaName, @TableName,
@IndexName, @Fragmentation
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT 'Reindexing ' + ISNULL (@SchemaName, 'dbo') + '.' +
@TableName + '.' + @IndexName
SET @Command = 'ALTER INDEX [' + @IndexName + '] ON [' +
ISNULL (@SchemaName, 'dbo') + '].[' + @TableName + ']
REBUILD'
EXEC (@Command)
FETCH NEXT FROM TableCursor
INTO @SchemaName, @TableName, @IndexName,
@Fragmentation
END
CLOSE TableCursor
DEALLOCATE TableCursor
We have now done what would be impossible using only set-based commands. The ALTER
INDEX command is expecting a single argument; providing a recordset won't work. We
get around the problem by combining the notion of a set operation (the SELECT that forms
the basis for the cursor) with single-data-point operations (the data in the cursor).
In order to mix these set based and individual data point operations, we had to walk through
a series of steps.
First, we declared the cursor and any necessary holding variables.We then “opened” the
cursor-- it was not until this point that the data was actually retrieved from the database.
Next, we utilized the cursor by navigating through it. In this case, we only navigated forward,
but, as we shall see, we could have created a cursor that could scroll forward and backward.
Moving on, we closed the cursor (if the cursor had still had any open locks, they were re-
leased at this point), but memory continues to be allocated for the cursor. Finally, we
deallocated the cursor. At this point, all resources in use by the cursor are freed for use by
other objects in the system.
So just that quick, we have our first cursor. Still, this is really only the beginning. There is
much more to cursors than meets the eye in this particular example. Next, we'll go on and
take a closer look at some of the powerful features that give cursors additional flexibility.
Cursors come in a variety of different flavors. The default cursor is forward-only (you can
only move forward through the records, not backward) and read-only, but cursors can also
be scrollable and updatable. They can also have a varying level of sensitivity to changes that
are made to the underlying data by other processes.
The forward-only, read-only cursor is the default type of cursor not only in the native
SQL Server cursor engine, but also in pretty much all of the cursor models. It is
extremely low in overhead by comparison to the other cursor choices, and is usually
referred to as a "firehose" cursor because of the sheer speed with which you can enumerate
the data. Like a firehose, it knows how to dump its contents in just one direction. Firehose
cursors simply blow away the other cursor-based options in most cases, but don't mistake
this for a performance choice over set operations; even a firehose cursor is slow by
comparison to most equivalent set operations.
Let’s start out by taking a look at a more extended syntax for cursors, and then we’ll look at
all of the options individually:
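Per the SQL Server 2005 documentation for DECLARE CURSOR, the extended T-SQL form looks roughly like this:

```sql
DECLARE <cursor name> CURSOR
[ LOCAL | GLOBAL ]
[ FORWARD_ONLY | SCROLL ]
[ STATIC | KEYSET | DYNAMIC | FAST_FORWARD ]
[ READ_ONLY | SCROLL_LOCKS | OPTIMISTIC ]
[ TYPE_WARNING ]
FOR <SELECT statement>
[ FOR UPDATE [ OF <column list> ] ]
```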
At first glance, it really looks like a handful, and indeed there are a good many things to think
about when declaring cursors. The bright side is that several of these options imply one another,
so once you've made one choice the others often start to fall into place quickly.
Let's go ahead and apply the specific syntax in a stepwise manner that attaches each part to the
important concepts that go with it.
Scope
The LOCAL versus GLOBAL option determines the scope of the cursor, that is, what
connections and processes can "see" the cursor. Most items that have scope default to the
more conservative approach, that is, the minimum scope (which would be LOCAL in this
case). SQL Server cursors are something of an exception to this: the default is actually
GLOBAL.
We are already dealing with something of an exception in that the default scope is set to
what we're calling global rather than the more conservative option of local. The exception
doesn't stop there, though. In SQL Server, the notion of something being global versus local
usually indicates that it can be seen by all connections rather than just the current connection.
For the purposes of our cursor declaration, however, it refers to whether all processes (batches,
triggers, sprocs) in the current connection can see it, versus just the current process.
Now let's think about what this means, and test it a bit.
The ramifications of the global default fall, as you might expect, on both the pro and the con
side of things. Being global means that you can create a cursor within one sproc and
refer to it from within a separate sproc; you don't necessarily have to pass references to it.
The downside of this, though, is that, if you try to create another cursor with the same name,
you're going to get an error.
Let's test this out with a brief example. What we're going to do here is create a sproc that will
create a cursor for us:
USE AdventureWorks
GO
SELECT @Counter = 1
OPEN CursorTest
FETCH NEXT FROM CursorTest INTO @OrderID, @CustomerID
PRINT 'Row ' + CAST(@Counter AS varchar) + ' has a SalesOrderID of ' +
    CONVERT(varchar, @OrderID) + ' and a CustomerID of ' +
    CAST(@CustomerID AS varchar)
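The sproc's declarations were lost to the page break above; a hedged reconstruction of the whole listing might look like this (the cursor's SELECT, the loop bound, and the variable types are assumptions inferred from the output shown later, not the book's exact listing):

```sql
-- Reconstructed sketch of spCursorScope; loop bound and types are assumptions
CREATE PROC spCursorScope
AS
DECLARE @Counter    int,
        @OrderID    int,
        @CustomerID varchar(30)

-- GLOBAL is the default, but stating it explicitly makes the point
DECLARE CursorTest CURSOR
GLOBAL
FOR
SELECT SalesOrderID, CustomerID
FROM Sales.SalesOrderHeader

SELECT @Counter = 1
OPEN CursorTest
FETCH NEXT FROM CursorTest INTO @OrderID, @CustomerID
WHILE @Counter <= 3 AND @@FETCH_STATUS = 0
BEGIN
    PRINT 'Row ' + CAST(@Counter AS varchar) + ' has a SalesOrderID of ' +
        CAST(@OrderID AS varchar) + ' and a CustomerID of ' +
        CAST(@CustomerID AS varchar)
    SELECT @Counter = @Counter + 1
    FETCH NEXT FROM CursorTest INTO @OrderID, @CustomerID
END
-- Note: no CLOSE or DEALLOCATE; the cursor is deliberately left open
```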
Next, we declare the actual cursor. If we had left off the GLOBAL keyword, then, by default,
we would have still received a cursor that was global in scope.
You do not have to live by this default. You can use sp_dboption or ALTER DATABASE to set
the "default to local cursor" option to True (set it back to False if you want to go back to
global).
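Under SQL Server 2005, the ALTER DATABASE form of this switch is the CURSOR_DEFAULT option:

```sql
-- Make new cursors LOCAL in scope by default for this database
ALTER DATABASE AdventureWorks SET CURSOR_DEFAULT LOCAL

-- And to return to the shipped default
ALTER DATABASE AdventureWorks SET CURSOR_DEFAULT GLOBAL
```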
We then go ahead and open the cursor and step through several records. Notice, however,
that we do not close or deallocate the cursor; we just leave it open and available as we exit
the sproc.
When we look at declaring our cursor for output, we will see a much more explicit and better
choice for situations where we want to allow outside interaction with our cursors.
Now that we've enumerated several records and proven that our sproc is operating, we will
then exit the sproc (remember, we haven't closed or deallocated the cursor). We'll then refer
to the cursor from outside the sproc:
EXEC spCursorScope
CLOSE CursorTest
DEALLOCATE CursorTest
First, we execute the sproc. As we've already seen, the sproc builds the cursor and then
enumerates several rows. It exits, leaving the cursor open.
Next, we declare the very same variables that were declared in the sproc. Why do we have
to declare them again, but not the cursor? Because it is only the cursor that is global by
default. That is, our variables went away as soon as the sproc went out of scope; we can't
refer to them anymore, or we'll get a variable-undefined error. We must redeclare them.
The next code structure looks almost identical to the one in our sproc; we're again looping
through to enumerate several records.
Finally, once we've proven our point that the cursor is still alive outside the realm of the
sproc, we're ready to close and deallocate the cursor. It is not until we close the cursor that
we free up the memory or tempdb space from the result set used in the cursor, and it is not
until we deallocate it that the memory taken up by the cursor variable and its query definition
is freed.
Now, go ahead and create the sproc in the system and execute the script. You should wind
up with a result that looks like this:
Row 1 has a SalesOrderID of 43659 and a CustomerID of 676
Row 2 has a SalesOrderID of 43660 and a CustomerID of 117
Row 3 has a SalesOrderID of 43661 and a CustomerID of 442
Row 4 has a SalesOrderID of 43662 and a CustomerID of 227
Row 5 has a SalesOrderID of 43663 and a CustomerID of 510
Row 6 has a SalesOrderID of 43664 and a CustomerID of 397
So, you can see that the cursor stayed open, and our loop outside the sproc was able to pick
up right where the code inside the sproc had left off.
Now let's see what happens if we alter our sproc to have local scope:
USE AdventureWorks
GO
SELECT @Counter = 1
OPEN CursorTest
FETCH NEXT FROM CursorTest INTO @OrderID, @CustomerID
PRINT 'Row ' + CAST(@Counter AS varchar) + ' has a SalesOrderID of ' +
    CAST(@OrderID AS varchar) + ' and a CustomerID of ' +
    CAST(@CustomerID AS varchar)
Row 5 has a SalesOrderID of 43663 and a CustomerID of 510
Row 6 has a SalesOrderID of 43664 and a CustomerID of 397
The big thing that you should have gotten out of this section is that you need to think about
the scope of your cursors. They do not behave quite the way that other items for which you
use the DECLARE statement do.
Scrollability
Scrollability applies to pretty much any cursor model you might face. The notion is
actually fairly simple: can we navigate in relatively any direction, or are we limited to only
moving forward? The default is no; we can only move forward.
FORWARD_ONLY
A forward-only cursor is exactly what it sounds like. Since it is the default, it
probably doesn't surprise you to hear that it is the only type of cursor that we've been using
up to this point. When you are using a forward-only cursor, the only navigation option that is
valid is FETCH NEXT. You need to be sure that you're done with each record before you
move on to the next because, once it's gone, there's no getting back to the previous record
unless you close and reopen the cursor.
SCROLLABLE
Again, this is just as it sounds. You can "scroll" the cursor backward and forward as
necessary. If you're using one of the APIs (ODBC, OLE DB, DB-Lib), then, depending on
what object model you're dealing with, you can often navigate right to a specific record.
Indeed, with ADO and ADO.NET you can even easily re-sort the data and add additional
filters.
The cornerstone of scrolling is the FETCH keyword. You can use FETCH to move forward
and backward through the cursor, as well as move to specific positions. The main arguments
to FETCH (NEXT, PRIOR, FIRST, LAST, ABSOLUTE) are described at the end of this chapter.
Let's do a brief example to get across the concept of a scrollable cursor. We'll actually just
use a slight variation of the sproc we created a little earlier.
USE AdventureWorks
GO
SELECT @Counter =1
OPEN CursorTest
CLOSE CursorTest
DEALLOCATE CursorTest
We went ahead and closed and deallocated the cursor in the sproc rather than using an
outside procedure.
The interesting part comes in the results:
EXEC spCursorScroll
And you'll see how the order values scroll forward and back:
As you can see, we were able to successfully navigate not only forward, as we did before,
but also backward.
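Most of that sproc's listing was lost to page breaks here; a hedged sketch of what the scrollable variation might look like follows (SCROLL is the key addition; the SELECT, names, and loop bound are assumptions, not the book's exact listing):

```sql
CREATE PROC spCursorScroll
AS
DECLARE @OrderID    int,
        @CustomerID varchar(30)

DECLARE CursorTest CURSOR
SCROLL                 -- the key difference: FETCH PRIOR/FIRST/LAST now work
FOR
SELECT SalesOrderID, CustomerID
FROM Sales.SalesOrderHeader

OPEN CursorTest

-- Walk forward a few rows...
FETCH NEXT FROM CursorTest INTO @OrderID, @CustomerID
WHILE @@FETCH_STATUS = 0 AND @OrderID < 43663
BEGIN
    PRINT @OrderID
    FETCH NEXT FROM CursorTest INTO @OrderID, @CustomerID
END

-- ...then scroll back toward the start
FETCH PRIOR FROM CursorTest INTO @OrderID, @CustomerID
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @OrderID
    FETCH PRIOR FROM CursorTest INTO @OrderID, @CustomerID
END

CLOSE CursorTest
DEALLOCATE CursorTest
```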
A forward-only cursor is far and away the more efficient choice of the two options. Think
about the overhead for a moment: if it is read-only, then SQL Server really needs to keep
track of only the next record (a linked list). In a situation where you may reposition the cursor
in other ways, extra information must be stored in order to reasonably seek out the
requested row. How exactly this is implemented depends on the specific cursor options you
choose.
Some types of cursors imply scrollability; others do not. Some types of cursors are sensitive
to changes in the data, and some are not.
Cursor Types
The various APIs generally break cursors into four types:
Static
Keyset driven
Dynamic
Forward-only
How exactly these four types are implemented (and what they're called) will vary somewhat
among the various APIs and object models, but their general nature is usually pretty
much the same.
What makes the various cursor types different is their ability to be scrollable and their sensi-
tivity to changes in the database over the life of the cursor.
Whether a cursor is sensitive or not defines whether it notices changes in the database
after the cursor is opened. It also defines just what it does about it once the change is
detected. Let's look at this in its most extreme versions: static versus dynamic. A static
cursor, once opened, notices no changes at all. A dynamic cursor, however, is effectively
aware of every change (inserted records, deletions, updates, you name it) to the database
as long as the cursor remains open. We'll explore the sensitivity issue as we look at each of
the cursor types.
Static Cursors
A static cursor is one that represents a "snapshot" in time. Indeed, at least one of
the data access object models refers to it as a snapshot recordset rather than a static one.
When a static cursor is created, the entire recordset is created in what amounts to a
temporary table in tempdb. After it's created, a static cursor changes for no one.
Some of the different object models will let you update information in a static cursor, some
won't, but the bottom line is always the same: you cannot write updates to the database via
a static cursor.
A static cursor is kept by SQL Server in a private table in tempdb. If that's how SQL Server is
going to be storing it anyway, why not just use a temporary table yourself? There are times
when that won't give you what you need (record rather than set operations). However, if you
are just after the concept of a snapshot in time, rather than record-based operations, build
your own temp table using SELECT INTO and save yourself (and SQL Server) a lot of
overhead.
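For instance, the snapshot this chapter keeps building could be a plain temp table with no cursor engine involved at all (the temp table name here is just illustrative):

```sql
-- A snapshot in time with none of the static cursor's overhead
SELECT SalesOrderID, CustomerID
INTO #OrderSnapshot
FROM Sales.SalesOrderHeader
WHERE SalesOrderID BETWEEN 43661 AND 43665
```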
If you're working in a client-server arrangement, static cursors are often better dealt with on
the client side. By moving the entire operation to the client, you can cut the number of
network roundtrips to the server substantially. Since you know that your cursor isn't going to
be affected by changes to the database, there's no reason to make contact with the server
again regarding the cursor after it is created.
Let us take this example:
USE AdventureWorks
/* Build the table that we'll be playing with this time */
SELECT SalesOrderID, CustomerID
INTO CursorTable
FROM Sales.SalesOrderHeader
WHERE SalesOrderID BETWEEN 43661 AND 43665
-- (Cursor and variable declarations reconstructed; the originals were
-- lost to a page break)
DECLARE CursorTest CURSOR
STATIC
FOR
SELECT SalesOrderID, CustomerID
FROM CursorTable

DECLARE @SalesOrderID int
DECLARE @CustomerID varchar(5)

OPEN CursorTest
FETCH NEXT FROM CursorTest INTO @SalesOrderID, @CustomerID

-- Now loop through them all
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT CAST(@SalesOrderID AS varchar) + ' ' + @CustomerID
    FETCH NEXT FROM CursorTest INTO @SalesOrderID, @CustomerID
END

UPDATE CursorTable
SET CustomerID = -111
WHERE SalesOrderID = 43663
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT CONVERT(varchar(5), @SalesOrderID) + ' ' + @CustomerID
    FETCH NEXT FROM CursorTest INTO @SalesOrderID, @CustomerID
END
CLOSE CursorTest
DEALLOCATE CursorTest
43661 442
43662 227
43663 510
43664 397
43665 146
There are several things to notice about what happened during the run of this script.
First, even though we had a result set open against the table, we were still able to
perform the update. In this case, it's because we have a static cursor: once it was created, it
was disconnected from the actual records and no longer maintained any locks.
Second, although we can clearly see that our update did indeed take place in the
actual table, it did not affect the data in our cursor. Again, this is because, once created, our
cursor took on something of a life of its own; it is no longer associated with the original data
in any way.
Under the heading of "one more thing," you might also notice that we made use of
a new argument to the FETCH keyword: this time we went back to the top of our result set by
using FETCH FIRST.
Keyset-Driven Cursors
With a keyset-driven cursor, we're talking about maintaining a set of keys, each of which
uniquely identifies an entire row in the database.
Keyset-driven cursors have the following high points:
They require a unique index to exist on the table in question.
Only the keyset is stored in tempdb, not the entire dataset.
They are sensitive to changes to the rows that are already part of the keyset, including
the possibility that they have been deleted.
They are, however, not sensitive to new rows that are added after the cursor is
created.
Keyset cursors can be used as the basis for a cursor that is going to perform updates
to the data.
Given that it has a name of "keyset," and that I've already said that the keyset uniquely
identifies each row, it probably doesn't shock you in any way that you must have a unique
index of some kind (usually a primary key, but it could also be any index that is explicitly
defined as unique) to create the keyset from.
The keys are all stored in a private table in tempdb. SQL Server uses this key as a method
to find its way back to the data as you ask for a specific row in the cursor. The point to take
note of here is that the actual data is being fetched, based on the key, at the time that you
issue the FETCH. The great part about this is that the data for that particular row is up to date
as of when the specific row is fetched. The downside (or upside, depending on what you're
using the cursor for) is that the lookup works from a keyset that has already been created.
This means that, once the keyset is created, those are all the rows that will be included in
your cursor. Any rows that were added after the cursor was created, even if they meet the
conditions of the WHERE clause in the SELECT statement, will not be seen by the cursor.
The rows that are already part of the cursor can, depending on the cursor options you
chose, be updated by a cursor operation.
Let's modify our earlier script to illustrate the sensitivity issue when we are making use of
keyset-driven cursors:
USE AdventureWorks
/* Build the table that we'll be playing with this time */
SELECT SalesOrderID, CustomerID
INTO CursorTable
FROM Sales.SalesOrderHeader
WHERE SalesOrderID BETWEEN 43661 AND 43665
-- (Cursor declaration header reconstructed; lost to a page break)
DECLARE CursorTest CURSOR
KEYSET
FOR
SELECT SalesOrderID, CustomerID
FROM CursorTable

DECLARE @SalesOrderID int
DECLARE @CustomerID varchar(5)
OPEN CursorTest
FETCH NEXT FROM CursorTest INTO @SalesOrderID, @CustomerID
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT CAST(@SalesOrderID AS varchar) + ' ' + @CustomerID
FETCH NEXT FROM CursorTest INTO @SalesOrderID, @CustomerID
END
UPDATE CursorTable
SET CustomerID = -111
WHERE SalesOrderID = 43663
DELETE CursorTable
WHERE SalesOrderID = 43664
WHILE @@FETCH_STATUS != -1
BEGIN
    IF @@FETCH_STATUS = -2
    BEGIN
DEALLOCATE CursorTest
Perhaps the most important thing that we've changed is the condition for the WHILE loop on
the final run through the cursor. Technically speaking, we should have made this change to
both loops, but there is zero risk of a deleted record the first time around in this example, and
I wanted the difference to be visible right within the same script.
The change was made to deal with something new we've added: the possibility that we
might get to a record only to find that it's now missing. More than likely, someone has deleted
it.
Let's take a look then at the results we get after running this:
43661 442
43662 227
43663 510
43664 397
43665 146
43661 442
43662 227
43663 -111
MISSING! It probably was deleted.
43665 146
Okay, let's walk through the highlights here.
Everything starts out pretty much as it did before. We see the same five rows in the result set
as we did last time. We then see an extra couple of "affected by" messages; these are for the
INSERT, UPDATE, and DELETE statements that we added. Next comes the second result
set. It's at this point that things get a bit more interesting.
In this result set, we see the actual results of our UPDATE, INSERT, and DELETE
statements. SalesOrderID 43664 has been deleted, and a new order with a SalesOrderID
of -99999 has been inserted. That's what's in the table, but things don't appear quite as
cozy in the cursor.
The next (and final) result set tells the tale on some differences in the way that things are
presented in the cursor versus actually re-running the query. As it happens, we have exactly
five rows, just like we started out with and just like our SELECT statement showed are in the
actual table. But that's entirely coincidental.
In reality, there are a couple of key differences between what the cursor is showing and what
the table is showing. The first presents itself rather boldly: our cursor actually knows that a
record is missing. The cursor continues to show the key position in the keyset but, when it
went to do the lookup on the data, the data wasn't there anymore. Our
@@FETCH_STATUS was set to -2, and we were able to test for it and report it. The SELECT
statement showed us what data was actually there without any remembrance of the record
ever having been there. The INSERT, on the other hand, is an entirely unknown quantity to
the cursor. The record wasn't there when the cursor was created, so the cursor has no
knowledge of its existence; it doesn't show up in our result set.
Keyset cursors can be very handy for dealing with situations where you need some sensitivity
to changes in the data but don't need to know about every insert right up to the minute.
They can, depending on the nature of the result set you're after and the keyset, also provide
some substantial savings in the amount of data that has to be duplicated and stored in
tempdb; this can have some favorable performance impacts for your overall server.
Dynamic Cursors
Dynamic cursors fall just short of fully dynamic in the sense that they won't proactively
tell you about changes to the underlying data. What gets them close enough to be called
dynamic is that they are sensitive to all changes to the underlying data.
In order to understand some of the impacts that a dynamic cursor can have, you just need to
realize a bit about how they work. You see, with a dynamic cursor, your cursor is essentially
rebuilt every single time you issue a FETCH. The SELECT statement that forms the basis of
your query, complete with its associated WHERE clause, is effectively re-run.
The dynamic cursor can actually be slightly faster than a keyset cursor in terms of raw speed
because it skips the tempdb keyset that keyset cursors require. While a lot more work has to
be done with each FETCH in order to deal with a dynamic cursor, the data for the requery will
often be completely in cache (depending on the sizing and loading of your system). This
means the dynamic cursor gets to work largely from RAM. The keyset for a keyset cursor, on
the other hand, is stored in tempdb, which is on disk (that is, much, much slower) for most
systems.
As your table size gets larger, as more diverse traffic hits your server, and as the memory
allocated to SQL Server gets smaller, keyset-driven cursors gain something of an advantage
over dynamic cursors.
Let's go ahead and re-run our last script with only one modification: the change from KEYSET
to DYNAMIC:
USE AdventureWorks
/* Build the table that we'll be playing with this time */
SELECT SalesOrderID, CustomerID
INTO CursorTable
FROM Sales.SalesOrderHeader
WHERE SalesOrderID BETWEEN 43661 AND 43665

-- (Cursor and variable declarations reconstructed; the originals were
-- lost to a page break. Note the change from KEYSET to DYNAMIC.)
DECLARE CursorTest CURSOR
DYNAMIC
FOR
SELECT SalesOrderID, CustomerID
FROM CursorTable

DECLARE @SalesOrderID int
DECLARE @CustomerID varchar(5)

OPEN CursorTest
FETCH NEXT FROM CursorTest INTO @SalesOrderID, @CustomerID
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT CAST (@SalesOrderID AS varchar) + ‘ ’ + @CustomerID
FETCH NEXT FROM CursorTest INTO @SalesOrderID, @CustomerID
END
UPDATE CursorTable
SET CustomerID = -111
WHERE SalesOrderID = 43663
DELETE CursorTable
WHERE SalesOrderID = 43664
/* And loop through again.
** This time, notice that we changed what we're testing for.
** Since we have the possibility of rows being missing (deleted)
** before we get to the end of the actual cursor, we need to do
** a little bit more refined testing of the status of the cursor.
*/
WHILE @@FETCH_STATUS != -1
BEGIN
    IF @@FETCH_STATUS = -2
    BEGIN
        PRINT 'MISSING! It probably was deleted.'
    END
    ELSE
    BEGIN
        PRINT CAST(@SalesOrderID AS varchar) + ' ' +
            CAST(@CustomerID AS varchar)
    END
    FETCH NEXT FROM CursorTest INTO @SalesOrderID, @CustomerID
END
CLOSE CursorTest
DEALLOCATE CursorTest
-99999 *
43661 442
43662 227
43663 -111
43665 146
The first two recordsets look exactly as they did last time. The change comes when we get
to the third (and final) result set:
- There is no indication of a failed fetch, even though we deleted a record (no notification).
- The updated record shows the update (just as it did with a keyset).
- The inserted record now shows up in the cursor set.
Dynamic cursors are the most sensitive of all cursors. They are affected by everything you
do to the underlying data. The downside is that they can provide some extra concurrency
problems, and they can pound the system when dealing with larger data sets.
FAST_FORWARD Cursors
Fast is the operative word on this one. This one is the epitome of the term "firehose
cursor" that is often used around forward-only cursors. With FAST_FORWARD cursors, you
open the cursor and do nothing else but deal with the data, move forward, and deallocate it.
Now, it's safe to say that calling this a cursor "type" is something of a misnomer. This kind of
cursor has several different circumstances where it is automatically converted to other cursor
types, but I think of it as being most like a keyset-driven cursor in the sense that
membership is fixed: once the members of the cursor are established, no new records are
added. Deleted rows show up as a missing record (@@FETCH_STATUS of -2).
Condition Converted to
It's worth noting that all FAST_FORWARD cursors are read-only in nature. You can explicitly
set the cursor to have the FOR UPDATE option but, as suggested in the preceding implicit
conversion table, the cursor will be implicitly converted to dynamic.
What exactly does a FAST_FORWARD cursor have that any of the other cursors wouldn't
have if they were declared as being FORWARD_ONLY? Well, a FAST_FORWARD cursor
will implement at least one of two tricks to help things along:
The first is pre-fetched data. That is, at the same time that you open the cursor, it
automatically fetches the first row; this means that you save a roundtrip to the server if you
are operating in a client-server environment using ODBC. Unfortunately, this is available
only under ODBC.
The second is the one that is a sure thing: auto-closing of the cursor. Since you are
running a cursor that is forward-only, SQL Server can assume that you want the cursor
closed once you reach the end of the recordset. Again, this saves a roundtrip and squeezes
out a tiny bit of additional performance.
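Declaring one is just a matter of the FAST_FORWARD keyword; a minimal sketch using this chapter's table:

```sql
DECLARE CursorTest CURSOR
FAST_FORWARD    -- implies FORWARD_ONLY and READ_ONLY
FOR
SELECT SalesOrderID, CustomerID
FROM Sales.SalesOrderHeader
```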
Choosing a cursor type is one of the most critical decisions when structuring a cursor. Choices
that have little apparent difference in the actual output of the cursor task can have major
differences in performance. Other effects can be seen in sensitivity to changes, concurrency
issues, and updatability.
Concurrency Options
We got our first taste of concurrency issues under the topic of transactions and locks. We
deal with concurrency issues whenever two or more processes try to get at the same data at
essentially the same time. When dealing with cursors, however, the issue becomes just
slightly stickier.
As with all concurrency issues, this tends to be more of a problem in a transaction
environment than in a single-statement situation. The longer the transaction, the more likely
you are to have concurrency problems.
SQL Server gives us three different options for dealing with this issue:
READ_ONLY
SCROLL_LOCKS (equates to Pessimistic in most terminologies)
OPTIMISTIC
READ_ONLY
In a read-only situation, you don't have to worry about whether your cursor is going to try to
obtain any kind of update or exclusive lock. You also don't have to worry about whether
anyone has edited the data while you've been busy making changes of your own. Both of
these make life considerably easier.
READ_ONLY is just what it sounds like. When you choose this option, you cannot update
any of the data, but you also skip most (but not all) of the notion of concurrency entirely.
SCROLL_LOCKS
Scroll locks equate to what is more typically referred to as pessimistic locking in the various
APIs and object models. In its simplest form, it means that, as long as you are editing this
record, no one else is allowed to edit it. The specifics of the implementation, and the duration
of the lock, vary depending on:
With update locks, we prevented other users from updating the data. This lock was held for
the duration of the transaction. If it was a single-statement transaction, then the lock was not
released until every row affected by the update was complete.
Scroll locks work identically to update locks with only one significant exception: the duration
for which the lock is held. With scroll locks, there is much more of a variance depending on
whether the cursor is participating in a multi-statement transaction or not. Assuming for the
moment that you do not have a transaction wrapped around the cursor, the lock is held only
on the current record in the cursor, that is, from the time the record is first fetched until the
next record (or the end of the result set) is fetched. Once you move on to the next record, the
lock is removed from the prior record.
USE AdventureWorks
/* Build the table that we'll be playing with this time */
SELECT SalesOrderID, CustomerID
INTO CursorTable
FROM Sales.SalesOrderHeader
WHERE SalesOrderID BETWEEN 43661 AND 43665
OPEN CursorTest
FETCH NEXT FROM CursorTest INTO @SalesOrderID, @CustomerID
What we've done is toss out most of the things that were happening, and we've refocused
ourselves back on the cursor. Perhaps the biggest thing to notice, though, is a couple of key
things that we have deliberately omitted even though they are things that would normally
cause problems if we tried to operate without them:
The reason we've left the cursor open is to create a situation where the state of the cursor
being open lasts long enough to play around with the locks somewhat. In addition, we fetch
only the first row because we want to make sure that there is an active row.
What you want to do is execute the preceding and then open a completely separate
connection window with AdventureWorks active. Then run a simple test in the new
connection window:
SalesOrderID CustomerID
------------------- ---------------------
43661 442
43662 227
43663 510
43664 397
43665 146
Based on what we know about locks, you would probably expect the preceding SELECT
statement to be blocked by the locks on the current record. Not so with scroll locks. The lock
is only on the record that is currently in the cursor and, perhaps more importantly, the lock
only prevents updates to the record. Any SELECT statement can see the contents of the
cursor without any problems.
CLOSE CursorTest
DEALLOCATE CursorTest
Don't forget to run the preceding cleanup code! If you forget, then you'll have an open
transaction sitting in your system until you terminate the connection. SQL Server should
clean up any open transactions (by rolling them back) when the connection is broken.
OPTIMISTIC
Optimistic locking creates a situation where no scroll locks of any kind are set on the cursor.
The assumption is that, if you do an update, you want people to still be able to get at your
data. You're being optimistic because you are essentially guessing that no one will edit your
data between when you fetched it into the cursor and when you applied your update.
The optimism is not necessarily misplaced. If you have a lot of records and not that many
users, then the chances of two people trying to edit the same record at the same time are
very small. Still, if you are going to be this optimistic, then you need to also be prepared for
the possibility that you will be wrong; that is, that someone has altered the data between
when you performed the fetch and when you went to actually update the database.
If you happen to run into a problem, SQL Server will issue an error with a value in @@ERROR
of 16934. When this happens, you need to completely re-fetch the data from the cursor and
either roll back the transaction or try the update again.
This is perhaps best understood with an example, so let's go back and run a variation of the
cursor that we've been using.
In this instance, we're going to take out the piece of code that creates a key for the table.
Remember that, without a unique index on a table, a keyset cursor will be implicitly converted
to a static cursor:
USE AdventureWorks
/* Build the table that we'll be playing with this time */
SELECT SalesOrderID, CustomerID
INTO CursorTable
FROM Sales.SalesOrderHeader
WHERE SalesOrderID BETWEEN 43661 AND 43665
-- (Cursor and variable declarations reconstructed; note the TYPE_WARNING
-- option discussed below)
DECLARE CursorTest CURSOR
KEYSET
TYPE_WARNING
FOR
SELECT SalesOrderID, CustomerID
FROM CursorTable

DECLARE @SalesOrderID int
DECLARE @CustomerID varchar(5)

OPEN CursorTest
FETCH NEXT FROM CursorTest INTO @SalesOrderID, @CustomerID
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT CONVERT(varchar(5), @SalesOrderID) + ' ' + @CustomerID
    FETCH NEXT FROM CursorTest INTO @SalesOrderID, @CustomerID
END
CLOSE CursorTest
DEALLOCATE CursorTest
The major changes are the removal of blocks of code that we don't need for this illustration,
along with the addition of the TYPE_WARNING option in the cursor declaration.
Everything ran okay; we just saw a statement that was meant solely as a warning. The
results may not be what you expected given that the cursor was converted.
The downside here is that you get a message sent out, but no error. Programmatically
speaking, there is essentially no way to tell that you received this message, which makes this
option fairly useless in a production environment. Still, it can often be quite handy when
you're trying to debug a cursor to determine why it isn't behaving in the expected fashion.
Almost any SELECT statement is valid, even those including an ORDER BY clause. As long
as your SELECT statement produces a single result set, you should be fine. Examples of
options that would create problems are any of the summary options, such as CUBE or
ROLLUP.
FOR UPDATE
By default, any cursor that is updatable at all is completely updatable; that is, if one column
can be edited, then any of them can.
The FOR UPDATE <column list> option allows you to specify that only certain columns
are to be editable within the cursor. If you include this option, then only the columns in your
column list will be updatable. Any columns not explicitly mentioned will be
considered read-only.
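As a brief sketch of how this looks in practice (the cursor name here is invented for illustration; the table and columns reuse the CursorTable example), a declaration that restricts updates to a single column might read:

```sql
-- A sketch of FOR UPDATE OF: only CustomerID may be edited
-- through this cursor; SalesOrderID is effectively read-only.
DECLARE CursorUpd CURSOR
KEYSET
FOR
SELECT SalesOrderID, CustomerID
FROM CursorTable
FOR UPDATE OF CustomerID
```

An UPDATE ... WHERE CURRENT OF that tried to change any column other than CustomerID through this cursor would then be rejected.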
NEXT - This moves you forward exactly one row in the result set and is the
backbone option. Ninety percent or more of your cursors won't need any
more than this; keep this in mind when deciding whether or not to declare a
cursor as FORWARD_ONLY. When you try to do a FETCH NEXT and it
results in moving beyond the last record, you will have a
@@FETCH_STATUS of -1.

PRIOR - As you have probably surmised, this one is the functional opposite
of NEXT. It moves backward exactly one row. If you perform a
FETCH PRIOR when you are at the first row in the result set,
then you will get a @@FETCH_STATUS of -1, just as if you had
moved beyond the end of the file.

FIRST - Like most cursor options, this one says what it is pretty clearly. If
you perform a FETCH FIRST, then you will be at the first record in
the result set. The only time this option should generate a
@@FETCH_STATUS of -1 is if the result set is empty.

LAST - The functional opposite of FIRST, FETCH LAST moves you to the
last record in the result set. Again, the only way you'll get a -1
for @@FETCH_STATUS on this one is if you have an empty result
set.

ABSOLUTE - With this one, you supply an integer value that indicates how many
rows you want from the beginning of the cursor. If the value supplied
is negative, then it is that many rows from the end of the cursor.
Note that this option is not supported with dynamic cursors. This
equates roughly to navigating to a specific "absolute position" in a
few of the client access object models.
We’ve already gotten a fair look at a few of these in our previous cursors. The other naviga-
tional choices work pretty much the same.
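The navigational options above can be sketched with a small SCROLL cursor. The cursor name here is invented for illustration, and the table reuses the CursorTable example; treat this as a sketch rather than part of the chapter's running listing:

```sql
DECLARE @SalesOrderID int
DECLARE @CustomerID int

-- SCROLL makes every FETCH direction available
DECLARE CursorNav CURSOR
SCROLL
FOR
SELECT SalesOrderID, CustomerID
FROM CursorTable

OPEN CursorNav
FETCH LAST FROM CursorNav INTO @SalesOrderID, @CustomerID       -- jump to the last row
FETCH PRIOR FROM CursorNav INTO @SalesOrderID, @CustomerID      -- back up one row
FETCH ABSOLUTE 2 FROM CursorNav INTO @SalesOrderID, @CustomerID -- second row from the start
FETCH FIRST FROM CursorNav INTO @SalesOrderID, @CustomerID      -- back to the first row
CLOSE CursorNav
DEALLOCATE CursorNav
```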
Since we're dealing with a specific row rather than set data, we need some special syntax to
tell SQL Server what we want to update. Happily, this syntax is actually quite easy given that
you already know how to perform an UPDATE or DELETE.
Essentially, we're going to update or delete data in the table that underlies our cursor. Doing
this is as simple as running the same UPDATE and DELETE statements that we're now
used to, but qualifying them with a WHERE clause that matches our cursor row. We just add
one piece of syntax to our DELETE or UPDATE statement:
USE AdventureWorks

/* Build the table that we'll be playing with this time */
SELECT SalesOrderID, CustomerID
INTO CursorTable
FROM Sales.SalesOrderHeader
WHERE SalesOrderID BETWEEN 43661 AND 43665

DECLARE @SalesOrderID int
DECLARE @CustomerID int

/* A keyset cursor is updatable, so we can use WHERE CURRENT OF */
DECLARE CursorTest CURSOR
KEYSET
FOR
SELECT SalesOrderID, CustomerID
FROM CursorTable

OPEN CursorTest
FETCH NEXT FROM CursorTest INTO @SalesOrderID, @CustomerID
WHILE @@FETCH_STATUS = 0
BEGIN
    IF (@SalesOrderID % 2 = 0)
    BEGIN
        UPDATE CursorTable
        SET CustomerID = -99999
        WHERE CURRENT OF CursorTest
    END
    ELSE
    BEGIN
        DELETE CursorTable
        WHERE CURRENT OF CursorTest
    END
    FETCH NEXT FROM CursorTest INTO @SalesOrderID, @CustomerID
END
/* Scroll back to the start so we can walk the results of our changes.
   (A FETCH FIRST is needed here; otherwise @@FETCH_STATUS is still -1
   from the loop above and we would never enter this one.) */
FETCH FIRST FROM CursorTest INTO @SalesOrderID, @CustomerID
WHILE @@FETCH_STATUS != -1
BEGIN
    IF @@FETCH_STATUS = -2
    BEGIN
        PRINT 'MISSING! It probably was deleted.'
    END
    ELSE
    BEGIN
        PRINT CAST(@SalesOrderID AS varchar) + ' ' +
              CAST(@CustomerID AS varchar)
    END
    FETCH NEXT FROM CursorTest INTO @SalesOrderID, @CustomerID
END
CLOSE CursorTest
DEALLOCATE CursorTest
Again, we are treating this one as an entirely new cursor. We've done enough deletions,
additions, and updates that we suspect you'll find it easier to just key things in a second
time rather than having to look through row by row to see what you might have missed.
We are also again using the modulus operator (%). Remember that it gives us nothing but
the remainder. Therefore, if the remainder of a number divided by 2 is zero, then we know
the number was even.
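As a quick sketch of the operator itself:

```sql
-- The modulus operator returns only the remainder of a division
SELECT 43662 % 2 AS EvenCheck,  -- 0, so 43662 is even
       43661 % 2 AS OddCheck    -- 1, so 43661 is odd
```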
In the results, you can see a "(1 row(s) affected)" message returned for each row that was
affected by the UPDATE and DELETE statements. When we get down to the last result set
enumeration, you can quickly tell that we deleted all the odd-numbered rows (which is what we told
our code to do), and that we updated the even-numbered rows with a new CustomerID.
DATA PERMISSIONS
Introduction
When you are working with a database, you must have permission to perform any action. SQL
Server is an inherently secure system: if you want to perform an action, you must have
been given permission to do so. By designing and implementing a good security plan for
SQL Server, you can eliminate many problems before they happen rather than spend your
time trying to figure out how your data (or SQL Server itself) became damaged. You can
restrict what data a user is allowed to see as well as what data modifications he can make.
You can also restrict whether a user can back up a database, back up the transaction log for
a database, or create and manipulate objects in a database.
Another benefit of enabling multiple logins, users, and permissions is that you can track what
individual users are allowed to do and audit their activity.
Permissions Precedence
It is as critical to understand how permissions are applied as it is to understand when a particular
permission is in effect. All permissions in SQL Server are cumulative, except DENY, which
overrides other permissions.
If a user has the SELECT permission from membership in one role and the INSERT
permission from membership in another role, he effectively has both the SELECT and INSERT
permissions. If the user were then denied the SELECT permission within either of these roles
or on his individual account, he would no longer have the SELECT permission.
DENY always overrides any other permission.
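A minimal sketch of this precedence (the role names are invented for illustration; the table and user names reuse examples from later in this chapter):

```sql
-- Two roles grant cumulative permissions...
GRANT SELECT ON SALES TO Role1
GRANT INSERT ON SALES TO Role2

-- ...but a DENY anywhere overrides them: a member of both roles
-- who is denied SELECT loses it regardless of the grants above.
DENY SELECT ON SALES TO [Rhome\John]
```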
Special Permissions
SQL Server supports several different levels of permissions. Most of these permissions
are database specific. However, fixed server roles are tied to logins, not database
users. Each role implies a specific set of permissions, and sysadmin role membership
implies its own particular set of permissions.
There are fixed database roles within each database, each of which is associated with a
particular set of permissions. Each database also has a special user known as dbo (the
database owner). Special permissions are inherent for anyone who's in this conceptual role
as well.
Example
If you run
EXEC sp_srvrolepermission 'dbcreator'
the output lists the permissions implied by the role, ending with:
(8 row(s) affected)
Sysadmin
Members of the sysadmin server role are granted an extremely powerful set of permissions,
and membership should be considered carefully. The sa login is always a member of this role and can't be
removed from the sysadmin role. Members of the sysadmin fixed server role are always
considered to be the database owner of every database they use. Members of the sysadmin
role can't be prevented from accessing any database on SQL Server.
A list of rights is provided by the user interface for sysadmin members; however, it's a little
misleading because a sysadmin role member can do anything.
Serveradmin
Server administrators who won't otherwise be administering databases or other objects are
best suited to be members of the serveradmin role. Members of this role can perform the
following operations:
• Add another login to the serveradmin fixed server role.
• Run the DBCC FREEPROCCACHE command.
• Run the sp_configure system stored procedure to change system options.
• Run the RECONFIGURE command to install changes made with sp_configure.
• Run the SHUTDOWN command to shut down SQL Server.
• Run the sp_fulltext_service system stored procedure to configure the full-text service of
SQL Server.
Setupadmin
Members of the setupadmin role are administrators who are configuring remote servers. Members
of this role can perform the following operations:
• Add another login to the setupadmin fixed server role.
• Add, drop, or configure linked servers.
• Mark a stored procedure to run at startup.
Securityadmin
Members of the securityadmin role can perform any operation related to serverwide security in
SQL Server. Members of this role can perform the following operations:
• Add members to the securityadmin fixed server role.
• Grant, revoke, or deny the CREATE DATABASE statement permission.
• Read the SQL Server error log by using the sp_readerrorlog system stored procedure.
• Run security-related system stored procedures.
Processadmin
Members of the processadmin role can control processes running on the database server.
This role typically involves “killing” runaway queries, and help desk personnel might need
this right. Members of this role can perform the following operations:
• Add members to the processadmin fixed server role.
• Run the KILL command to end a SQL Server process.
dbcreator
Members of the dbcreator fixed server role, which most likely includes senior database
administrators, can perform operations relating to creating and modifying databases. Members
of this role can perform the following operations:
• Add members to the dbcreator fixed server role.
• Run the sp_renamedb system stored procedure.
• Run the CREATE DATABASE, ALTER DATABASE, and DROP DATABASE commands.
• Restore a database or transaction log.
Diskadmin
Members of the diskadmin fixed server role can manage files. This role exists mostly for backward
compatibility with SQL Server 6.5. Members of the diskadmin fixed server role can perform
the following operations:
• Add members to the diskadmin fixed server role.
• Run the following DISK commands (used for backward compatibility): DISK INIT, DISK
REINIT, DISK REFIT, DISK MIRROR, and DISK REMIRROR.
• Run the sp_diskdefault and sp_dropdevice system stored procedures.
• Run the sp_addumpdevice system stored procedure to add backup devices.
Syntax
sp_dbfixedrolepermission [[@rolename =] 'role']
Here, role is the name of the fixed database role for which you want to see permissions.
The results of running this system stored procedure are discussed below:
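For instance, to list the permissions implied by the db_owner role:

```sql
-- Lists the permissions associated with the db_owner fixed database role
EXEC sp_dbfixedrolepermission @rolename = 'db_owner'
```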
db_owner
Members of the db_owner fixed database role are the "owners" of a database. They can do
almost everything the actual database owner can do. They can perform the following
operations within their databases:
• Add members to, or remove members from, any fixed database role except for db_owner.
• Run any data definition language (DDL) statement, including TRUNCATE TABLE.
• Run the BACKUP DATABASE and BACKUP LOG statements.
• Run the RESTORE DATABASE and RESTORE LOG statements.
• Issue a CHECKPOINT in a database.
• Run the following Database Consistency Checker (DBCC) commands, among others: DBCC
CHECKALLOC, DBCC CHECKFILEGROUP, DBCC CHECKDB, DBCC CHECKIDENT, and
DBCC CLEANTABLE.
• Change the owner of any object by using the sp_changeobjectowner system stored
procedure.
• Configure full-text services within the database by using the following system stored
procedures: sp_fulltext_catalog, sp_fulltext_column, sp_fulltext_database, and sp_fulltext_table.
db_accessadmin
Members of the db_accessadmin fixed database role manage which logins can access a
database. As with the securityadmin role, your help desk staff might be the best candidates
for membership in this role. Members can perform the following operation:
• Run the following system stored procedures:
sp_addalias, sp_adduser, sp_dropalias, sp_dropuser, sp_grantdbaccess, and
sp_revokedbaccess
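A sketch of one of these procedures in use (the login and user names are invented for illustration):

```sql
-- Gives the Windows login Rhome\John access to the current
-- database under the database user name John
EXEC sp_grantdbaccess 'Rhome\John', 'John'
```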
db_securityadmin
Members of the db_securityadmin fixed database role can administer security within a
database, and they can perform the following operations:
• Run the GRANT, REVOKE, or DENY statements.
• Run the following system stored procedures:
sp_addapprole, sp_addgroup, sp_addrole, sp_addrolemember,
sp_approlepassword, sp_changegroup, sp_changeobjectowner,
sp_dropapprole, sp_dropgroup, sp_droprole, and sp_droprolemember.
db_ddladmin
Members of the db_ddladmin fixed database role can perform the following operations:
• Run any DDL command except GRANT, REVOKE, and DENY.
• Grant the REFERENCES permission on any table.
• Recompile stored procedures by using the sp_recompile system stored procedure.
• Rename any object by using the sp_rename system stored procedure.
• Modify some table-specific options by using the sp_tableoption system stored procedure.
• Change the owner of any object by using the sp_changeobjectowner system stored
procedure.
• Run the following DBCC commands: DBCC CLEANTABLE, DBCC SHOW_STATISTICS,
and DBCC SHOWCONTIG.
• Control full-text services by using the sp_fulltext_column and sp_fulltext_table system
stored procedures.
db_backupoperator
Members of the db_backupoperator fixed database role can perform all operations related
to backing up a database. They can do the following:
• Run the BACKUP DATABASE and BACKUP LOG statements.
• Issue a CHECKPOINT in a database.
db_datareader
Members of the db_datareader fixed database role have the SELECT permission on any
table or view in a database. They can’t grant permission to or revoke it from anyone else.
db_datawriter
Members of the db_datawriter fixed database role have the INSERT, UPDATE, and DELETE
permissions on all tables or views in a database. They can't grant permission to or revoke it
from anyone else.
db_denydatareader
Members of the db_denydatareader fixed database role can't run the SELECT statement
on any table or view in the database. This option is useful if you want your database
administrator (DBA) to set up your objects but not be able to read any sensitive data in the
database.
db_denydatawriter
Members of the db_denydatawriter fixed database role can't run the INSERT, UPDATE, or
DELETE statement on any table or view in the database.
User Permissions
Most people using the database will be ordinary users. Database users have no inherent
rights or permissions. All rights must be explicitly granted or assigned to the user. Permissions
granted to users can be categorized as follows:
• Statement Permissions
• Object Permissions
Statement Permissions
Statement permissions allow users to create new databases, create new objects within an existing
database, or back up the database or transaction log. Statement permissions allow a user to
run particular commands rather than merely manipulate a particular object.
When a user creates an object, she becomes the owner of that object and has all the
permissions associated with database object ownership.
Statement permissions should be granted only when explicitly needed. The statement per-
missions that can be granted, revoked, or denied include:
• CREATE DATABASE
• CREATE TABLE
• CREATE PROCEDURE
• CREATE DEFAULT
• CREATE RULE
• CREATE VIEW
• CREATE FUNCTION
• BACKUP DATABASE
• BACKUP LOG
These permissions can be granted individually or all at once (using the keyword ALL).
The CREATE DATABASE Permission
The CREATE DATABASE permission enables users to create their own databases and thus
become the dbo (database owner) of those databases. Database ownership can later be changed
with the sp_changedbowner system stored procedure. Only members of the sysadmin or
dbcreator fixed server roles are allowed to grant a user the CREATE DATABASE
permission. This permission must be granted in the master database only. The
CREATE DATABASE permission also allows you to use the ALTER DATABASE command.
The Create Table, View, Function, Procedure, Default, and Rule Permissions
The CREATE TABLE, VIEW, FUNCTION, PROCEDURE, DEFAULT, and RULE permissions
enable users to run the referenced statements to create objects in the database where the
permissions were given. Programmers are frequently given these permissions to allow them
to create the resources they need in a database during development.
Assigning Statement Permissions
Transact-SQL or SQL Server Enterprise Manager can be used to grant, revoke, and deny
statement permissions. Let's now consider some of the statement permission commands:
(i) GRANT: This command gives a user statement permissions.
SYNTAX: GRANT {ALL | statement_list} TO account
where
• ALL stands for all possible statement permissions.
• statement_list is an enumerated list of the statement permissions you want to give to an
account.
• account is the name of a database user, database role, Windows user, or Windows group.
Example
(a) GRANT CREATE VIEW TO [Rhome\John]
(b) GRANT ALL TO [Rhome\John]
(ii) REVOKE: This command takes away statement permissions already granted.
SYNTAX: REVOKE {ALL | statement_list} FROM account
Example
REVOKE ALL FROM ROLE
(iii) DENY: This command explicitly takes away a statement permission. The permission
doesn't have to be granted to the user first.
SYNTAX: DENY {ALL | statement_list} TO account
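To round out the pattern, a sketch of DENY in use (reusing the illustrative user name from the GRANT examples):

```sql
-- Explicitly prevents John from creating tables, even if a role
-- he belongs to has been granted the CREATE TABLE permission
DENY CREATE TABLE TO [Rhome\John]
```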
Object Permissions
Object permissions allow a user to perform actions against a particular object in a
database. Object permissions apply only to the specific object named when granting
the permission, not to all the objects contained in the entire database. Object permissions
enable you to give individual user accounts the rights to run specific Transact-SQL
statements on an object. They are the most common types of permissions granted.
Some available object permissions are as follows:
SELECT - View data in a table, view, or column
INSERT - Add data to a table or a view
UPDATE - Modify existing data in a table, view, or column
DELETE - Remove data from a table or view
EXECUTE - Run a stored procedure
REFERENCES - Refer to a table with foreign keys
Assigning Object Permissions
You can use Transact-SQL or SQL Server Enterprise Manager to grant, revoke, and deny object
permissions. In this section we will study how to assign object permissions with Transact-SQL.
(i) GRANT: This command is used to give someone one or more object permissions. It also
removes a DENY permission.
SYNTAX
GRANT {ALL [PRIVILEGES] | permission_list [, ...n]}
{
    [(column [, ...n])] ON {table | view}
    | ON {table | view} [(column [, ...n])]
    | ON {stored_procedure | extended_stored_procedure}
    | ON {user_defined_function}
}
TO account [, ...n]
[WITH GRANT OPTION]
[AS {group | role}]
Here:
• ALL stands for all possible object permissions that apply to a particular object type.
• permission_list is an enumerated list of the object permissions you want to give to an
account.
• column is the level down to which object permissions can be granted (only for SELECT
or UPDATE).
• account is the name of a database user, database role, Windows user, or Windows
group.
• WITH GRANT OPTION allows the user who is the recipient of the grant to also give
the permission he has been granted away to other users.
• AS {group | role} specifies which group or role you are using for a particular grant.
Example
GRANT SELECT ON SALES TO [Rhome\John]
(ii) REVOKE: This command takes away one or more object permissions that have
already been granted. You will not receive an error message if you revoke a permission
that hasn't previously been granted; the command simply has no effect.
Syntax
REVOKE [GRANT OPTION FOR]
{ALL [PRIVILEGES] | permission_list [, ...n]}
{
    [(column [, ...n])] ON {table | view}
    | ON {table | view} [(column [, ...n])]
    | ON {stored_procedure | extended_stored_procedure}
    | ON {user_defined_function}
}
FROM account [, ...n]
[CASCADE]
[AS {group | role}]
Example
(a) REVOKE SELECT, INSERT ON AUTHORS FROM [Rhome\Mary],
[Rhome\John]
(b) REVOKE SELECT, INSERT ON PUBLISHERS FROM [Rhome\Ann]
CASCADE
(iii) DENY: Unlike a REVOKE command, DENY explicitly takes away an object
permission. The permission doesn't have to be granted to the user first.
Syntax
DENY
{ALL [PRIVILEGES] | permission_list [, ...n]}
{
    [(column [, ...n])] ON {table | view}
    | ON {table | view} [(column [, ...n])]
    | ON {stored_procedure | extended_stored_procedure}
    | ON {user_defined_function}
}
TO account [, ...n]
[CASCADE]
Permissions on Views
A view can be considered a stored query that appears as a table to users. The stored
query appears as a SELECT statement. You can create a view that refers only to selected
columns of a table. Permissions can then be assigned on the view for those users, and they
won't have any rights to see the underlying table; they will be able to view data from the table
only through the view.
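A brief sketch of this pattern (the view name is invented for illustration; the base table is the AdventureWorks table used earlier in this chapter):

```sql
-- Expose only the non-sensitive columns through a view...
CREATE VIEW vSalesSummary
AS
SELECT SalesOrderID, OrderDate
FROM Sales.SalesOrderHeader
GO

-- ...then grant SELECT on the view rather than the base table.
-- John can now query vSalesSummary but not Sales.SalesOrderHeader.
GRANT SELECT ON vSalesSummary TO [Rhome\John]
```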