SQL Server


What is normalization?
It is a set of rules, known as normalization, established to aid in the design of tables that are meant to be connected through relationships. Benefits of normalizing your database include:
1. Avoiding repetitive entries
2. Reducing required storage space
3. Preventing the need to restructure existing tables to accommodate new data.
4. Increasing the speed and flexibility of queries, sorts, and summaries.
What are different types of normalization?
Following are the three normal forms:
1. First Normal Form - For a table to be in first normal form, data must be broken up into the smallest meaningful units possible. In addition, tables in first normal form should not contain repeating groups of fields.
2. Second Normal Form - The second normal form states that each field in a table with a multiple-field primary key must be directly related to the entire primary key. In other words, each non-key field should be a fact about all the fields in the primary key.
3. Third Normal Form - A non-key field should not depend on another non-key field.
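To make the forms concrete, here is a minimal sketch under assumed, illustrative names (Suppliers, Orders, and OrderItems are hypothetical, not from any particular schema):

-- Before normalization one wide table held repeating product columns and
-- SupplierName, which depends on SupplierID rather than on the order.
-- After normalization, each fact lives in exactly one table:
CREATE TABLE Suppliers (
    SupplierID   int PRIMARY KEY,
    SupplierName varchar(50)            -- 3NF: depends only on SupplierID
);
CREATE TABLE Orders (
    OrderID    int PRIMARY KEY,
    OrderDate  datetime,
    SupplierID int REFERENCES Suppliers(SupplierID)
);
CREATE TABLE OrderItems (               -- 1NF: one row per product, no repeating groups
    OrderID   int REFERENCES Orders(OrderID),
    ProductID int,
    Quantity  int,
    PRIMARY KEY (OrderID, ProductID)    -- 2NF: Quantity depends on the whole key
);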
What is denormalization?
Denormalization is the process of putting one fact in numerous places (it is the reverse of normalization). Only one valid reason exists for denormalizing a relational design - to enhance performance. The trade-off for that performance gain is increased redundancy in the database.
What is a candidate key?
A table may have more than one combination of columns that could uniquely identify the rows in the table; each such combination is a candidate key. During database design you pick one of the candidate keys to be the primary key. For example, in a supplier table both supplierid and suppliername could be candidate keys, but you would pick supplierid as the primary key.
What are the different types of joins? What is the difference between them?
1. Inner Join - Inner join is the default type of join; it produces a result set that contains matched rows only.
Syntax: SELECT * FROM table1 INNER JOIN table2 ON table1.key = table2.key

2. Outer Join - An outer join produces a result set that contains matched rows and unmatched rows. There are three types of outer join:
1. Left Outer Join 2. Right Outer Join 3. Full Outer Join
Left Outer Join: produces a result set that contains all the rows from the left table and the matched rows from the right table. Syntax: SELECT * FROM table1 LEFT OUTER JOIN table2 ON table1.key = table2.key
Right Outer Join: produces a result set that contains all the rows from the right table and the matched rows from the left table. Syntax: SELECT * FROM table1 RIGHT OUTER JOIN table2 ON table1.key = table2.key
Full Outer Join: produces a result set that contains all the rows from the left table and all the rows from the right table. Syntax: SELECT * FROM table1 FULL OUTER JOIN table2 ON table1.key = table2.key

3. Cross Join - A join without any condition is known as a cross join; in a cross join every row in the first table is joined with every row in the second table. Syntax: SELECT * FROM table1 CROSS JOIN table2
Self Join: A join of a table with itself is called a self join; when working with self joins we use table aliases. A worked sketch follows below.
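As a worked sketch of the syntaxes above, assuming hypothetical Customers, Orders, and Employees tables:

SELECT c.CustomerName, o.OrderID
FROM Customers c
INNER JOIN Orders o ON o.CustomerID = c.CustomerID        -- matched rows only

SELECT c.CustomerName, o.OrderID
FROM Customers c
LEFT OUTER JOIN Orders o ON o.CustomerID = c.CustomerID   -- all customers, NULLs where no order

SELECT e.Name AS Employee, m.Name AS Manager              -- self join using aliases
FROM Employees e
LEFT OUTER JOIN Employees m ON m.EmployeeID = e.ManagerID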
What are indexes? What is the difference between clustered and nonclustered indexes?
Indexes in SQL Server are similar to the indexes in books. They help SQL Server retrieve the data
quickly. There are clustered and nonclustered indexes. A clustered index is a special type of index
that reorders the way in which records in the table are physically stored. Therefore, a table can have
only one clustered index. The leaf nodes of a clustered index contain the data pages. A nonclustered
index is a special type of index in which the logical order of the index does not match the physical
stored order of the rows on disk. The leaf node of a nonclustered index does not consist of the data
pages. Instead, the leaf nodes contain index rows.
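For example, a minimal sketch (the Orders table and column names are hypothetical):

CREATE CLUSTERED INDEX IX_Orders_OrderID ON Orders(OrderID)            -- defines physical order; one per table
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON Orders(CustomerID)   -- separate structure; leaf holds index rows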
How can you increase SQL performance?
Following are tips that will increase your SQL performance:
1. Every index increases the time it takes to perform INSERTs, UPDATEs, and DELETEs, so the number of indexes should not be too large. Try to use a maximum of 4-5 indexes on one table, not more. If you have a read-only table, then the number of indexes may be increased.
2. Keep your indexes as narrow as possible. This reduces the size of the index and reduces the
number of reads required to read the index.
3. Try to create indexes on columns that have integer values rather than character values.
4. If you create a composite (multi-column) index, the order of the columns in the key is very important. Try to order the columns in the key so as to enhance selectivity, with the most selective columns leftmost in the key.
5. If you want to join several tables, try to create surrogate integer keys for this purpose and create
indexes on their columns.
6. Create surrogate integer primary key (identity for example) if your table will not have many
insert operations.
7. Clustered indexes are preferable to nonclustered indexes if you need to select by a range of values or you need to sort the result set with GROUP BY or ORDER BY.
8. If your application will be performing the same query over and over on the same table, consider creating a covering index on the table (see the sketch after this list).
9. You can use the SQL Server Profiler Create Trace Wizard with "Identify Scans of Large Tables"
trace to determine which tables in your database may need indexes. This trace will show which
tables are being scanned by queries instead of using an index.
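For tip 8, a covering index includes every column the query touches, so the query can be answered from the index alone. A minimal sketch (table, column, and index names are hypothetical):

-- Query run over and over:
--   SELECT CustomerID, OrderDate FROM Orders WHERE CustomerID = @id
CREATE NONCLUSTERED INDEX IX_Orders_Covering
ON Orders(CustomerID, OrderDate)   -- both columns are in the key, so the index covers the query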
What is the use of OLAP?
OLAP (Online Analytical Processing) is useful because it provides fast and interactive access to aggregated data and the ability to
drill down to detail.
What is a measure in OLAP?
Measures are the key performance indicators that you want to evaluate. To determine which of the numbers in the data might be measures, a rule of thumb is: if a number makes sense when it is aggregated, then it is a measure.
What are dimensions in OLAP?
Dimensions are the categories of data analysis. For example, in a revenue report by month by sales
region, the two dimensions needed are time and sales region. Typical dimensions include product,
time, and region.
What are levels in dimensions?
Dimensions are arranged in hierarchical levels, with unique positions within each level. For
example, a time dimension may have four levels, such as Year, Quarter, Month, and Day. Or the
dimension might have only three levels, for example, Year, Week, and Day. The values within the
levels are called members. For example, the years 2002 and 2003 are members of the level Year in
the Time dimension.
What are fact tables and dimension tables in OLAP?
The dimensions and measures are physically represented by a star schema: dimension tables revolve around a central fact table. A fact table contains a column for each measure as well as a column for
each dimension. Each dimension column has a foreign-key relationship to the related dimension
table, and the dimension columns taken together are the key to the fact table.
What is DTS?
DTS (Data Transformation Services) is used to pull data from various sources into the star schema.
What is fillfactor?
The 'fill factor' option specifies how full SQL Server will make each index page. When there is no free space to insert a new row on an index page, SQL Server creates a new index page and transfers some rows from the previous page to the new one. This operation is called a page split. You can reduce the number of page splits by setting the appropriate fill factor option to reserve free space on each index page. The fill factor is a value from 1 through 100 that specifies the percentage of the index page to be filled. The default value for fill factor is 0; it is treated similarly to a fill factor value of 100, the difference being that SQL Server leaves some space within the upper level of the index tree for FILLFACTOR = 0. The fill factor percentage is used only at the time the index is created. If the table contains read-only data (or data that changes very rarely), you can set the 'fill factor' option to 100. When the table's data is modified very often, you can decrease the fill factor to 70% or whatever you think is best.
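For example, a minimal sketch (index and table names are hypothetical):

CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON Orders(CustomerID)
WITH FILLFACTOR = 70   -- fill each leaf page to 70%, leaving 30% free to absorb inserts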
What is RAID and how does it work?
Redundant Array of Independent Disks (RAID) is a term used to describe the technique of
improving data availability through the use of arrays of disks and various data-striping
methodologies. Disk arrays are groups of disk drives that work together to achieve higher data-
transfer and I/O rates than those provided by single large drives. An array is a set of multiple disk
drives plus a specialized controller (an array controller) that keeps track of how data is distributed
across the drives. Data for a particular file is written in segments to the different drives in the array
rather than being written to a single drive. For speed and reliability, it is better to have more disks.
When these disks are arranged in certain patterns and use a specific controller, they are called a
Redundant Array of Inexpensive Disks (RAID) set. There are several numbers associated with RAID,
but the most common are 1, 5 and 10. RAID 1 works by duplicating the same writes on two hard
drives. Let us assume you have two 20 Gigabyte drives. In RAID 1, data is written at the same time
to both the drives. RAID 1 is optimized for fast writes. RAID 5 works by writing parts of data across
all drives in the set (it requires at least three drives). If a drive failed, the entire set would be
worthless. To combat this problem, one of the drives stores a "parity" bit. Think of a math problem,
such as 3 + 7 = 10. You can think of the drives as storing one of the numbers, and the 10 is the parity
part. By removing any one of the numbers, you can get it back by referring to the other two, like
this: 3 + X = 10. Of course, losing more than one drive means the data cannot be reconstructed. RAID 5 is optimized for reads. RAID
10 is a bit of a combination of both types. It doesn't store a parity bit, so it is faster, but it duplicates
the data on two drives to be safe. You need at least four drives for RAID 10. This type of RAID is
probably the best compromise for a database server.
What is the difference between DELETE TABLE and TRUNCATE TABLE commands?
Following are the differences between them:
1. DELETE logs each deleted row, which makes the delete operation slow. TRUNCATE TABLE does not log individual row deletions; it only logs the deallocation of the table's data pages, so TRUNCATE TABLE is faster than DELETE.
2. DELETE can have criteria (a WHERE clause) while TRUNCATE cannot.
3. TRUNCATE TABLE cannot fire a trigger.
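A minimal illustration (the Orders table is hypothetical):

DELETE FROM Orders WHERE OrderDate < '2000-01-01'   -- row-by-row, fully logged, can have criteria, fires triggers
TRUNCATE TABLE Orders                               -- deallocates pages, no WHERE clause, no triggers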
What are the problems that can occur if you do not implement locking properly in SQL SERVER?
Following are the problems that occur if you do not implement locking properly in SQL Server:
1. Lost Updates - Lost updates occur if you let two transactions modify the same data at the same time and the change made by the transaction that completes first is lost. You need to watch out for lost updates with the READ UNCOMMITTED isolation level, which disregards locks, so two simultaneous data modifications are not aware of each other. Suppose that a customer has $2,000 due to be paid. He pays $1,000 and then buys a product for $500. Say these two transactions are entered from two different counters of the company, and both counter users start making their entries at the same time, 10:00 AM. At 10:01 AM the customer should have 2000 - 1000 + 500 = $1,500 pending. But with a lost update the first transaction is not considered and the second transaction overrides it, so the final pending amount is 2000 + 500 = $2,500.
2. Non-Repeatable Reads - Non-repeatable reads occur if a transaction is able to read the same row multiple times and gets a different value each time. Again, this problem is most likely to occur with the READ UNCOMMITTED isolation level. Because you let two transactions modify data at the same time, you can get some unexpected results. For instance, a customer wants to book a flight, so the travel agent checks for the flight's availability. The travel agent finds a seat and goes ahead to book it. While this travel agent is booking the seat, some other travel agent books it first. When the first travel agent goes to update the record, he gets an error saying "Seat is already booked". In short, the travel agent gets a different status for the seat at different times.
3. Dirty Reads - Dirty reads are a special case of non-repeatable reads. They happen if you run a report while transactions are modifying the data that you are reporting on. For example, suppose a customer invoice report runs at 1:00 PM, after which all invoices are sent to the respective customers for payment. One customer has $1,000 pending; the customer pays the $1,000 at 1:00 PM, at the same moment the report runs. The customer actually has no money pending but is still issued an invoice.
4. Phantom Reads - Phantom reads occur when a transaction is able to read a row on the first read but is not able to modify the same row later because another transaction has deleted rows from the same table. Say you open a record for editing; in the meantime somebody deletes it, and you then try to update a record that no longer exists. Interestingly, phantom reads can occur even with the default isolation level supported by SQL Server, READ COMMITTED. The only isolation level that does not allow phantoms is SERIALIZABLE, which ensures that each transaction is completely isolated from the others. In other words, no one can acquire any type of lock on an affected row while it is being modified.
What are different transaction levels in SQL SERVER?
The transaction isolation level decides how one process is isolated from another. Using transaction isolation levels you can implement locking in SQL Server. There are four transaction isolation levels in SQL Server:
1. READ COMMITTED - Shared locks are held while the data is being read, so no other transaction can change that data at the same time; the locks are released when the read completes. Other transactions can insert and modify data in the same table, however, as long as it is not locked by the first transaction.
2. READ UNCOMMITTED - No shared locks and no exclusive locks are honored. This is the least
restrictive isolation level resulting in the best concurrency but the least data integrity.
3. REPEATABLE READ - This setting disallows dirty and non-repeatable reads. However, even though locks are held on the data that has been read, new rows can still be inserted into the table and will subsequently be seen by the transaction.
4. SERIALIZABLE - This is the most restrictive setting, holding shared locks on the range of data. This setting does not allow the insertion of new rows in the range that is locked; therefore, no phantoms are allowed. Following is the syntax for setting the transaction isolation level in SQL Server:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
What are the different locks in SQL SERVER?
Depending on the transaction level, six types of locks can be acquired on data:
1. Intent - The intent lock shows the future intention of SQL Server's lock manager to acquire locks on a specific unit of data for a particular transaction. SQL Server uses intent locks to queue exclusive locks, thereby ensuring that these locks will be placed on the data elements in the order the transactions were initiated. Intent locks come in three flavors: intent shared (IS), intent exclusive (IX), and shared with intent exclusive (SIX). IS locks indicate that the transaction will read some (but not all) resources in the table or page by placing shared locks. IX locks indicate that the transaction will modify some (but not all) resources in the table or page by placing exclusive locks. SIX locks indicate that the transaction will read all resources and modify some (but not all) of them; this is accomplished by placing shared locks on the resources read and exclusive locks on the rows modified. Only one SIX lock is allowed per resource at one time; therefore, SIX locks prevent other connections from modifying any data in the resource (page or table), although they do allow reading the data in the same resource.
2. Shared - Shared locks (S) allow transactions to read data with SELECT statements. Other connections are allowed to read the data at the same time; however, no transactions are allowed to modify the data until the shared locks are released.
3. Update - Update locks (U) are acquired just prior to modifying the data. If a transaction modifies a row, the update lock is escalated to an exclusive lock; otherwise, it is converted to a shared lock. Only one transaction can acquire an update lock on a resource at one time. Using update locks prevents multiple connections that eventually want to modify a resource with an exclusive lock from all holding a shared lock on it. Shared locks are compatible with other shared locks, but are not compatible with update locks.
4. Exclusive - Exclusive locks (X) completely lock the resource from any type of access, including reads. They are issued when data is being modified through INSERT, UPDATE, and DELETE statements.
5. Schema - Schema modification locks (Sch-M) are acquired when data definition language statements, such as CREATE TABLE, CREATE INDEX, ALTER TABLE, and so on, are being executed. Schema stability locks (Sch-S) are acquired while stored procedures are being compiled.
6. Bulk Update - Bulk update locks (BU) are used when performing a bulk copy of data into a table with the TABLOCK hint. These locks improve performance while bulk copying data into a table; however, they reduce concurrency by effectively preventing any other connections from reading or modifying data in the table.
Can we suggest locking hints to SQL SERVER?
We can give locking hints that help you override the default decisions made by SQL Server. For instance, you can specify the ROWLOCK hint with your UPDATE statement to convince SQL Server to lock each row affected by that data modification (a minimal sketch follows below). Whether it is prudent to do so is another story; what will happen if your UPDATE affects 95% of the rows in the affected table? If the table contains 1000 rows, SQL Server will have to acquire 950 individual locks, which is likely to cost a lot more in terms of memory than acquiring a single table lock. So think twice before you bombard your code with ROWLOCK hints.
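A minimal sketch of the ROWLOCK hint mentioned above (table and column names are hypothetical):

UPDATE Orders WITH (ROWLOCK)   -- ask for row locks instead of page or table locks
SET Status = 'Shipped'
WHERE OrderID = 42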
What is LOCK escalation?
Lock escalation is the process of converting low-level locks (like row locks and page locks) into higher-level locks (like table locks). Every lock is a memory structure, so too many locks would mean more memory being occupied by locks. To prevent this from happening, SQL Server escalates many fine-grain locks into fewer coarse-grain locks. The lock escalation threshold was definable in SQL Server 6.5, but from SQL Server 7.0 onwards it is dynamically managed by SQL Server.
What are the different ways of moving data/databases between servers and databases in SQL Server?
There are lots of options available; you have to choose your option depending upon your requirements. Some of the options you have are: BACKUP/RESTORE, detaching and attaching
databases, replication, DTS, BCP, logshipping, INSERT...SELECT, SELECT...INTO, creating INSERT
scripts to generate data.
What is the difference between a HAVING CLAUSE and a WHERE CLAUSE?
The WHERE clause is applied to each row before the rows are grouped by GROUP BY; the HAVING clause is applied to the groups after GROUP BY (and aggregation) has been performed.
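A minimal sketch (table and column names are hypothetical):

SELECT CustomerID, SUM(Amount) AS Total
FROM Orders
WHERE OrderDate >= '2005-01-01'   -- WHERE filters rows before grouping
GROUP BY CustomerID
HAVING SUM(Amount) > 1000         -- HAVING filters groups after aggregation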
What is the difference between UNION and UNION ALL SQL syntax?
UNION is used to select information from two tables, but it selects only distinct records from both tables, while UNION ALL selects all records from both tables.
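A minimal sketch (table names are hypothetical):

SELECT City FROM Customers
UNION            -- distinct cities only; duplicates removed
SELECT City FROM Suppliers

SELECT City FROM Customers
UNION ALL        -- every row from both tables; duplicates kept (faster, no sort)
SELECT City FROM Suppliers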
How can you raise custom errors from stored procedure?
The RAISERROR statement is used to produce an ad hoc error message or to retrieve a custom message that is stored in the sysmessages table. You can use this statement with error handling code to implement custom error messages in your applications. The syntax of the statement is shown here:
RAISERROR ({msg_id | msg_str} {, severity, state} [, argument [, ...n]]) [WITH option [, ...n]]
A description of the components of the statement follows.
msg_id - The ID of an error message, which is stored in the error column in sysmessages.
msg_str - A custom message that is not contained in sysmessages.
severity - The severity level associated with the error. The valid values are 0-25. Severity levels 0-18 can be used by any user, but 19-25 are only available to members of the fixed server role sysadmin. When levels 19-25 are used, the WITH LOG option is required.
state - A value that indicates the invocation state of the error. The valid values are 0-127. This value is not used by SQL Server.
argument - One or more variables that are used to customize the message. For example, you could pass the current process ID (@@SPID) so it could be displayed in the message.
WITH option - The three values that can be used with this optional argument are described here.
LOG - Forces the error to be logged in the SQL Server error log and the NT application log.
NOWAIT - Sends the message immediately to the client.
SETERROR - Sets @@ERROR to the unique ID for the message, or 50,000.
The number of options available for the statement makes it seem complicated, but it is actually easy to use. The following shows how to create an ad hoc message with a severity of 10 and a state of 1.
RAISERROR ('An error occurred updating the NonFatal table', 10, 1)
--Results--
An error occurred updating the NonFatal table
The statement does not have to be used in conjunction with any other code, but for our purposes it will be used with the error handling code presented earlier. The following alters the ps_NonFatal_INSERT procedure to use RAISERROR.
USE tempdb
GO
ALTER PROCEDURE ps_NonFatal_INSERT
@Column2 int = NULL
AS
DECLARE @ErrorMsgID int
INSERT NonFatal VALUES (@Column2)
SET @ErrorMsgID = @@ERROR
IF @ErrorMsgID <> 0
BEGIN
  RAISERROR ('An error occurred updating the NonFatal table', 10, 1)
END
When an error-producing call is made to the procedure, the custom message is passed to the client.
What are the ACID fundamentals? What are transactions in SQL SERVER?
A transaction is a sequence of operations performed as a single logical unit of work. A logical unit of work must exhibit four properties, called the ACID (Atomicity, Consistency, Isolation, and Durability) properties, to qualify as a transaction:
1. Atomicity - A transaction must be an atomic unit of work; either all of its data modifications are performed or none of them is performed.
2. Consistency - When completed, a transaction must leave all data in a consistent state. In a relational database, all rules must be applied to the transaction's modifications to maintain all data integrity. All internal data structures, such as B-tree indexes or doubly-linked lists, must be correct at the end of the transaction.
3. Isolation - Modifications made by concurrent transactions must be isolated from the modifications made by any other concurrent transactions. A transaction either sees data in the state it was in before another concurrent transaction modified it, or it sees the data after the second transaction has completed, but it does not see an intermediate state. This is referred to as serializability because it results in the ability to reload the starting data and replay a series of transactions to end up with the data in the same state it was in after the original transactions were performed.
4. Durability - After a transaction has completed, its effects are permanently in place in the system. The modifications persist even in the event of a system failure.
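A minimal sketch of a transaction as a single logical unit of work (the Accounts table is hypothetical, and this sketch checks @@ERROR only for the last statement):

BEGIN TRANSACTION
UPDATE Accounts SET Balance = Balance - 100 WHERE AccountID = 1
UPDATE Accounts SET Balance = Balance + 100 WHERE AccountID = 2
IF @@ERROR <> 0
    ROLLBACK TRANSACTION   -- atomicity: undo both modifications
ELSE
    COMMIT TRANSACTION     -- durability: both changes persist together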
What is the purpose of Replication?
Replication is a way of keeping data synchronized across multiple databases. Replication has two important aspects: publisher and subscriber. Publisher - a database server that makes data available for replication is called a publisher. Subscriber - database servers that get data from a publisher are called subscribers.
What are the different types of replication supported by SQL SERVER?
There are three types of replication supported by SQL SERVER:-
1. Snapshot Replication - Snapshot replication takes a snapshot of one database and moves it to the other database. After the initial load, data can be refreshed periodically. The only disadvantage of this type of replication is that all data has to be copied each time the table is refreshed.
2. Transactional Replication - In transactional replication data is copied the first time as in snapshot replication, but afterwards only the transactions are synchronized rather than replicating the whole database. You can specify that it run either continuously or on a periodic basis.
3. Merge Replication - Merge replication combines data from multiple sources into a single central database. As usual, the initial load is like a snapshot, but afterwards it allows data to change on both the subscriber and the publisher; when they come online it detects the changes, combines them, and updates the databases accordingly.
What are the different types of triggers in SQL SERVER 2000?
There are two types of triggers :-
1. INSTEAD OF triggers - INSTEAD OF triggers fire in place of the triggering action. For example, if an INSTEAD OF UPDATE trigger exists on the Sales table and an UPDATE statement is executed against the Sales table, the UPDATE statement will not change a row in the Sales table. Instead, the UPDATE statement causes the INSTEAD OF UPDATE trigger to be executed, which may or may not modify data in the Sales table.
2. AFTER triggers - AFTER triggers execute following the SQL action, such as an insert, update, or delete. This is the traditional trigger that has always existed in SQL Server. INSTEAD OF triggers get executed automatically before the primary key and foreign key constraints are checked, whereas traditional AFTER triggers get executed after these constraints are checked. Unlike AFTER triggers, INSTEAD OF triggers can be created on views.
If we have multiple AFTER Triggers on table how can we define the sequence of the triggers?
If a table has multiple AFTER triggers, you can specify which trigger should be executed first and which should be executed last using the stored procedure sp_settriggerorder. All other triggers fire in an undefined order which you cannot control.
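A minimal sketch of the call (the trigger names are hypothetical):

EXEC sp_settriggerorder @triggername = 'trg_Audit',  @order = 'First', @stmttype = 'UPDATE'
EXEC sp_settriggerorder @triggername = 'trg_Notify', @order = 'Last',  @stmttype = 'UPDATE'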
What is SQL injection?
It is a form of attack on a database-driven Web site in which the attacker executes unauthorized SQL commands by taking advantage of insecure code on a system connected to the Internet, bypassing the firewall. SQL injection attacks are used to steal information from a database from which the data would normally not be available and/or to gain access to an organization's host computers through the computer that is hosting the database. SQL injection attacks are typically easy to avoid by ensuring that a system has strong input validation. As the name suggests, we inject SQL, which can be quite dangerous for the database. For example, consider this simple SQL:
SELECT email, passwd, login_id, full_name FROM members WHERE email = 'x'
Now suppose somebody does not put "x" as the input but puts "x'; DROP TABLE members; --". The actual SQL that will execute is:
SELECT email, passwd, login_id, full_name FROM members WHERE email = 'x'; DROP TABLE members; --'
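The standard way to avoid this is to keep user input out of the SQL string entirely, for example with a parameterized call; a minimal sketch reusing the members table from above:

EXEC sp_executesql
    N'SELECT email, passwd, login_id, full_name FROM members WHERE email = @email',
    N'@email varchar(100)',
    @email = 'x'   -- the input is treated as data, never as executable SQL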
What is the difference between Stored Procedure (SP) and User Defined Function (UDF)?
Following are some major differences between a stored procedure and a user defined function:
1. A UDF can be executed within a SELECT statement while an SP cannot be (see the sketch after this list).
2. A UDF cannot be used with the FOR XML clause but an SP can be.
3. A UDF does not return output parameters while an SP can return output parameters.
4. If there is an error in a UDF, it stops executing; an SP just ignores the error and moves to the next statement.
5. A UDF cannot make permanent changes to the server environment while an SP can change some of the server environment.
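A minimal sketch of point 1 (the function and table names are hypothetical):

CREATE FUNCTION dbo.fn_FullName (@first varchar(50), @last varchar(50))
RETURNS varchar(101)
AS
BEGIN
    RETURN (@first + ' ' + @last)
END
GO
SELECT dbo.fn_FullName(FirstName, LastName) FROM Employees   -- a UDF can be called inside SELECT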
What are Recursive Queries and Common Table Expressions (Sql Server 2005)?
Common Table Expressions (CTEs) evolved from SQL Server's derived tables. CTEs are referred to in the FROM clause of a query, just like a derived table or view, but unlike a view, CTEs are non-persistent: a CTE shares the life of the query to which it belongs. The big improvement here is support for recursion on, in effect, derived tables. With a derived table, you cannot create it once and use it several times in a query, and it cannot refer to itself in a statement. CTEs overcome this.
It's an excellent feature with major potential for report writing. This is one of the many low-level
feature additions in the BI push in SQL Server 2005.
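A minimal sketch of a recursive CTE walking a hypothetical Employees hierarchy:

WITH OrgChart (EmployeeID, ManagerID, Level) AS
(
    SELECT EmployeeID, ManagerID, 0 FROM Employees WHERE ManagerID IS NULL   -- anchor member
    UNION ALL
    SELECT e.EmployeeID, e.ManagerID, oc.Level + 1
    FROM Employees e
    INNER JOIN OrgChart oc ON e.ManagerID = oc.EmployeeID                    -- recursive member
)
SELECT * FROM OrgChart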
What are PIVOT and UNPIVOT Operators (Sql Server 2005)?
PIVOT and UNPIVOT are relational operators that, like CTEs, are geared toward BI applications. To
put it briefly, PIVOT provides the basic operation required for crosstabs, effectively converting
columns into rows. UNPIVOT does the reverse, converting rows into columns. This allows you to
perform quick crosstabs without having to use OLAP. This is a great feature for report building!
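A minimal sketch (the Sales table and year values are hypothetical):

SELECT Product, [2004], [2005]
FROM (SELECT Product, SaleYear, Amount FROM Sales) AS src
PIVOT (SUM(Amount) FOR SaleYear IN ([2004], [2005])) AS pvt   -- year rows become columns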
What are APPLY, CROSS APPLY and OUTER APPLY Operators(Sql Server 2005)?
APPLY is a relational operator that allows you to use a table-valued function once for each row of
outer table expressions. You use APPLY in the FROM clause much like a JOIN, and it comes in two
forms - CROSS APPLY and OUTER APPLY. CROSS APPLY invokes a table-valued function for each row in an outer table expression. OUTER APPLY is similar, but it will also return rows from the outer table for which the table-valued function returned an empty set. This is a reasonably useful feature, but
not life changing for most SQL programmers.
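A minimal sketch, assuming a hypothetical table-valued function fn_TopOrders that returns the top n orders for a customer:

SELECT c.CustomerName, t.OrderID, t.Amount
FROM Customers c
CROSS APPLY dbo.fn_TopOrders(c.CustomerID, 3) AS t   -- the function is invoked once per customer row
-- OUTER APPLY would also keep customers for which the function returns no rows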
How to handle Exceptions in Transactions(Sql Server 2005)?
In an effort to render the GOTO statements of previous TSQL implementations useless, SQL Server
2005 implements TRY/CATCH blocks. These blocks work as expected, just like blocks of the same
type in C# or VB.NET. This is a good feature for multipart transactions and will definitely demonstrate increased value for developers. The new features in TSQL offer a lot of benefit for SQL programmers and definitely warrant poking around with the server. A minimal sketch of a TRY/CATCH block, reusing the hypothetical NonFatal table from the earlier example:
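BEGIN TRY
    BEGIN TRANSACTION
    INSERT NonFatal VALUES (NULL)   -- suppose this violates a constraint
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION
    SELECT ERROR_NUMBER() AS ErrNum, ERROR_MESSAGE() AS ErrMsg   -- new 2005 error functions
END CATCH

Next up is the expanded XML support provided by SQL Server 2005. There is now an XML data type with methods built to operate on that data as well as support for XQuery and DML Extensions.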
How does SQL Server support the XML Data Type (Sql Server 2005)?
Standard SQL supports only simple scalar datatypes. XML, on the other hand, is capable of modeling
any type of data. The XML datatype allows you to store XML in the database as a supported data
type, as opposed to simply storing it as a string. With the XML data type, you can query, validate, and modify the contents of an XML document stored in that type. It "integrates traditional,
relational data with data in unstructured or semi-structured XML documents in ways that are not
possible with SQL Server 2000." I have to take issue with this because SQL is about data storage and
retrieval. SQL is not about data processing, especially not XML processing. I'll talk of this more later
in the article.
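Regardless, a minimal sketch of the xml data type and one of its XQuery methods (the document is illustrative):

DECLARE @doc xml
SET @doc = '<order id="1"><item sku="A12" qty="2"/></order>'
SELECT @doc.value('(/order/item/@qty)[1]', 'int') AS Qty   -- query inside the XML with XQuery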
What are SQL Management Objects (SMO)(Sql Server 2005)?
The SMO model is the management object model for SQL Server 2005. It is based on .NET
Framework managed code. It's the primary tool for creating database management applications
using the .NET Framework, and is used by every dialog box in the SQL Management Studio. SMO
replaces SQL-DMO but, where possible, implements similar objects for ease of use. SMO has a much
larger array of features, however, adding over 150 classes. Microsoft says the big advantages of
using SMO are its performance and scalability. It uses a cached object model, meaning you can
change multiple object properties before a call to the server is made. This is a nice management
feature that will definitely be useful for developers of custom database tools.
What are the new Application Framework features - Notifications, Reporting and Mobile Services (Sql Server 2005)?
I chose to sum these up in one section because the objectives are all similar - to provide enhanced BI
and reporting services. The big feature add for notifications and messaging is the SQL Service
Broker, built specifically to provide a scalable architecture for building asynchronous message
routing. It allows an internal or external process to send and receive streams of reliable,
asynchronous messages by using the extensions to normal TSQL data manipulation. The new
reporting services are based on the recently acquired ActiveViews technology to provide
developers with a complete set of tools for creating, managing, and viewing reports. They also
provide an engine for hosting and processing reports and an open interface based architecture for
embedding reports. The new mobile services allow use of objects similar to the core ADO.NET
objects for CE environments, as well as use of DTS and parameterized queries. All together, these
make a nice foundation for building business applications - but once again, they don't necessarily
belong in the database.
CLR Integration in Sql Server 2005
The last and most important change to SQL Server is the integration of the .NET CLR into the
database server query environment. According to Microsoft, "Using common language runtime
(CLR) integration, you can code your stored procedures, functions, and triggers in the .NET
Framework language of your choice." I suppose this is great for .NET programmers, but what about
the rest of the world using SQL Server with Java, PHP, COBOL, ADA, etc? How about this line: "Many
tasks that were awkward or difficult to perform in Transact-SQL can be better accomplished by
using managed code…" The idea here is that developers can perform complex processing on their
data before the next application tier ever sees it, but I must ask - what's the point? Those tasks are
not database tasks! SQL is for data management and retrieval, not output formatting, mathematical
computations, graphics or list processing. Why keep piling on the features? Why dilute SQL with
extension upon extension until you have this big mess that does nothing well? Please humor me
while I digress.
