2012 Progress Software Corporation and/or its subsidiaries or affiliates. All rights reserved. These materials and all Progress software products are copyrighted and all rights are reserved by Progress Software Corporation. The information in these materials is subject to change without notice, and Progress Software Corporation assumes no responsibility for any errors that may appear therein. The references in these materials to specific platforms supported are subject to change. Actional, Apama, Artix, Business Empowerment, Business Making Progress, Corticon, Corticon (and design), DataDirect (and design), DataDirect Connect, DataDirect Connect64, DataDirect Technologies, DataDirect XML Converters, DataDirect XQuery, DataXtend, Dynamic Routing Architecture, Empowerment Center, Fathom, Fuse Mediation Router, Fuse Message Broker, Fuse Services Framework, IONA, Making Software Work Together, Mindreef, ObjectStore, OpenEdge, Orbix, PeerDirect, Powered by Progress, PowerTier, Progress, Progress DataXtend, Progress Dynamics, Progress Business Empowerment, Progress Empowerment Center, Progress Empowerment Program, Progress OpenEdge, Progress Profiles, Progress Results, Progress Software Business Making Progress, Progress Software Developers Network, Progress Sonic, ProVision, PS Select, RulesCloud, RulesWorld, Savvion, SequeLink, Shadow, SOAPscope, SOAPStation, Sonic, Sonic ESB, SonicMQ, Sonic Orchestration Server, SpeedScript, Stylus Studio, Technical Empowerment, WebSpeed, Xcalia (and design), and Your Software, Our Technology-Experience the Connection are registered trademarks of Progress Software Corporation or one of its affiliates or subsidiaries in the U.S. and/or other countries. AccelEvent, Apama Dashboard Studio, Apama Event Manager, Apama Event Modeler, Apama Event Store, Apama Risk Firewall, AppsAlive, AppServer, ASPen, ASP-in-a-Box, BusinessEdge, Cache-Forward, CloudEdge, DataDirect Spy, DataDirect SupportLink, Fuse, FuseSource, Future Proof, GVAC, High Performance Integration, ObjectStore Inspector, ObjectStore Performance Expert, OpenAccess, Orbacus, Pantero, POSSE, ProDataSet, Progress Arcade, Progress CloudEdge, Progress Cloudware, Progress Control Tower, Progress ESP Event Manager, Progress ESP Event Modeler, Progress Event Engine, Progress RFID, Progress RPM, Progress Responsive Cloud, Progress Responsive Process Management, Progress Software, PSE Pro, SectorAlliance, SeeThinkAct, Shadow z/Services, Shadow z/Direct, Shadow z/Events, Shadow z/Presentation, Shadow Studio, SmartBrowser, SmartComponent, SmartDataBrowser, SmartDataObjects, SmartDataView, SmartDialog, SmartFolder, SmartFrame, SmartObjects, SmartPanel, SmartQuery, SmartViewer, SmartWindow, Sonic Business Integration Suite, Sonic Process Manager, Sonic Collaboration Server, Sonic Continuous Availability Architecture, Sonic Database Service, Sonic Workbench, Sonic XML Server, The Brains Behind BAM, WebClient, and Who Makes Progress are trademarks or service marks of Progress Software Corporation and/or its subsidiaries or affiliates in the U.S. and other countries. Java is a registered trademark of Oracle and/or its affiliates. Any other marks contained herein may be trademarks of their respective owners.
Preface
This book provides reference information for using Progress DataDirect Connect Series for ADO.NET.
Chapter 1 Using SQL Escape Sequences in .NET Applications on page 13 describes the scalar functions that are supported by the DataDirect Connect for ADO.NET data providers. Your data store may not support all of these functions.

Chapter 2 Locking and Isolation Levels on page 21 discusses locking and isolation levels and how their settings can affect the data you retrieve.

Chapter 3 Using Your Data Provider with the ADO.NET Entity Framework on page 25 describes how to create a model for the DataDirect ADO.NET Entity Framework data providers.

Chapter 4 Using the Microsoft Enterprise Library on page 47 describes how to configure the Data Access Application Block and Logging Application Block, and how to use them in your application code.

Chapter 5 Getting Schema Information on page 61 describes the columns that are returned by the GetSchemaTable method and how to retrieve schema metadata with the GetSchema method.

Chapter 6 Client Information for Connections on page 85 describes how you can use extensions provided by Progress DataDirect to store and return client information for a connection.

Chapter 7 Designing .NET Applications for Performance Optimization on page 89 provides recommendations for improving the performance of your applications by optimizing their code.

Chapter 8 Using ClickOnce Deployment on page 101 describes how you can deploy your Windows Forms application and a DataDirect Connect for ADO.NET data provider from a Web server.

Appendix A Using an .edmx File on page 105 explains the necessary changes to an .edmx file in order to provide Extended Entity Framework functionality to the Entity Data Model (EDM) layer.

Appendix B Using Enterprise Library 4.1 on page 111 provides configuration information for using the data providers with Microsoft Enterprise Library Version 4.1.
In addition, a "Glossary" on page 121 helps you with terminology referenced in this book.
NOTE: This book refers the reader to Web pages using URLs for more information about specific topics, including Web pages not maintained by Progress DataDirect. Because it is the nature of Web content to change frequently, Progress DataDirect can guarantee only that the URLs referenced in this book were correct at the time of publishing.
Typographical Conventions
This book uses the following typographical conventions:

italics: Introduces new terms with which you may not be familiar, and is used occasionally for emphasis.
bold: Emphasizes important information. Also indicates button, menu, and icon names on which you can act. For example, click Next.
UPPERCASE: Indicates keys or key combinations that you can use. For example, press the ENTER key. Also used for SQL reserved words.
monospace: Indicates syntax examples, values that you specify, or results that you receive.
monospaced italics: Indicates names that are placeholders for values that you specify. For example, filename.
forward slash /: Separates menus and their associated commands. For example, Select File / Copy means that you should select Copy from the File menu. The slash also separates directory levels when specifying locations under UNIX.
vertical rule |: Indicates an "OR" separator used to delineate items.
brackets [ ]: Indicates optional items. For example, in the following statement: SELECT [DISTINCT], DISTINCT is an optional keyword. Also indicates sections of the Windows Registry.
braces { }: Indicates that you must select one item. For example, {yes | no} means that you must specify either yes or no.
ellipsis . . .: Indicates that the immediately preceding item can be repeated any number of times in succession. An ellipsis following a closing bracket indicates that all information in that unit can be repeated.
About the Product Documentation

DataDirect Connect Series for ADO.NET Installation Guide details requirements and procedures for installing DataDirect Connect for ADO.NET.

DataDirect Connect Series for ADO.NET Users Guide provides information about configuring and using the product.

DataDirect Connect Series for ADO.NET Reference provides detailed reference information about the product.

DataDirect Connect Series for ADO.NET Troubleshooting Guide provides information about error messages and troubleshooting procedures for the product.
HTML Version
The product library, except for the installation guide, is placed on your system as HTML-based online help during a normal installation of the product. The help system is located in the help subdirectory of the product installation directory. To use the help, you must have one of the following Internet browsers installed:
Internet Explorer 5.x, 6.x, 7.x, 8.x, and 9.x
Mozilla Firefox 1.x, 2.x, 3.x, and 8.0
Netscape 4.x, 7.x, and 8.x
Safari 1.x, 2.x, 3.x, and 5.1.2
Opera 7.54u2, 8.x, and 9.x
On Windows, you can access the entire help system by selecting the help icon that appears in the DataDirect Connect for ADO.NET program group. On all platforms, you can access the entire help system by opening the following file from within your browser: install_dir/dotnethelp/help.htm where install_dir is the path to the product installation directory. After the browser opens, the left pane displays the Table of Contents, Index, and Search tabs for the entire documentation library. When you have opened the main screen of the help system in your browser, you can bookmark it in the browser for quick access later. NOTE: Security features set in your browser can prevent the help system from launching. A security warning message is displayed. Often, the warning message provides instructions for unblocking the help system for the current session. To allow the help system to launch without encountering a security warning message, the security settings in your browser can be modified. Check with your system administrator before disabling any security features.
PDF Version
The product documentation is also provided in PDF format. You can view or print the documentation, and perform text searches in the files. The PDF documentation is available on the Progress DataDirect Web site at: http://www.datadirect.com/support/product-info/documentation/by-product.html

You can download the entire library in a compressed file. When you uncompress the file, it appears in the correct directory structure. Maintaining the correct directory structure allows cross-book text searches and cross-references. If you download or copy the books individually outside of their normal directory structure, their cross-book search indexes and hyperlinked cross-references to other volumes will not work. You can view a book individually, but it will not automatically open other books to which it has cross-references.

To help you navigate through the library, a file called books.pdf is provided; it lists each online book provided for the product. We recommend that you open this file first and, from it, open the book you want to view.

NOTE: To use the cross-book search feature, you must use Adobe Reader 8.0 or higher. If you are using an earlier version of Adobe Reader, which does not support cross-book search, you can still view the books and use the Find feature within a single book.
Contacting Customer Support

When you contact Customer Support, have the following information available:

Your customer number or the serial number that corresponds to the product for which you are seeking support, or a case number if you have been provided one for your issue. If you do not have a SupportLink contract, the SupportLink representative assisting you will connect you with our Sales team.

Your name, phone number, email address, and organization. For a first-time call, you may be asked for full customer information, including location.

The Progress DataDirect product and the version that you are using.

The type and version of the operating system where you have installed your product.

Any database, database version, third-party software, or other environment information required to understand the problem.

A brief description of the problem, including, but not limited to, any error messages you have received, what steps you followed prior to the initial occurrence of the problem, any trace logs capturing the issue, and so on. Depending on the complexity of the problem, you may be asked to submit an example or reproducible application so that the issue can be re-created.

A description of what you have attempted to resolve the issue. If you have researched your issue on Web search engines, our Knowledgebase, or have tested additional configurations, applications, or other vendor products, carefully note everything you have already attempted.

A simple assessment of how the severity of the issue is impacting your organization.
June 2012, Release 4.0.0 of DataDirect Connect for ADO.NET
Chapter 1  Using SQL Escape Sequences in .NET Applications

The data providers support the following types of SQL escape sequences:

Date, time, and timestamp literals
Scalar functions such as numeric, string, and data type conversion functions
Stored procedures
Outer joins
SQL extensions
The data providers recognize and parse each escape sequence, replacing it with data store-specific grammar.
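For illustration, the following minimal C# sketch shows how a scalar function escape might be used. The connection string, table, and column names are placeholders, and the example assumes the Oracle data provider (DDTek.Oracle), although the same pattern applies to any of the data providers.

using System;
using System.Data.Common;
using DDTek.Oracle;   // assumed provider namespace; adjust for your data provider

class ScalarFunctionEscapeExample
{
    static void Main()
    {
        // Placeholder connection string and table/column names.
        using (OracleConnection conn = new OracleConnection("Host=myhost;SID=mysid;User ID=test;Password=secret"))
        {
            conn.Open();
            DbCommand cmd = conn.CreateCommand();
            // The {fn UCASE(...)} scalar function escape is recognized by the data provider
            // and rewritten into the data store's own SQL before execution.
            cmd.CommandText = "SELECT {fn UCASE(emp_name)} FROM emp";
            using (DbDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }
}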
Table 1-1. Scalar Functions Supported (cont.)

DB2 for z/OS
String Functions: CHAR_LENGTH, CHARACTER_LENGTH, CONCAT, INSERT, LCASE, LEFT, LENGTH, LOCATE, LTRIM, POSITION, REPEAT, REPLACE, RIGHT, RTRIM, SPACE, SUBSTRING, UCASE
Numeric Functions: ABS or ABSVAL, ACOS, ASIN, ATAN, ATAN2, BIGINT, CEILING or CEIL, COS, COT, DECIMAL, DEGREES, DIGITS, DOUBLE, EXP, FLOAT, FLOOR, INTEGER, LN, LOG, LOG10, MOD, POWER, RADIANS, RAND, REAL, ROUND, SIGN, SIN, SMALLINT, SQRT, TAN, TRUNCATE
Timedate Functions: CURDATE, CURTIME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, DAYNAME, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, HOUR, MINUTE, MONTH, MONTHNAME, NOW, QUARTER, SECOND, WEEK, YEAR
System Functions: DATABASE, NULLIF, USER
Table 1-1. Scalar Functions Supported (cont.)

DB2 for Linux/UNIX/Windows
String Functions: ASCII, CHAR, CHAR_LENGTH, CHARACTER_LENGTH, CONCAT, DIFFERENCE, INSERT, LCASE, LEFT, LENGTH, LOCATE, LTRIM, POSITION, REPEAT, REPLACE, RIGHT, RTRIM, SOUNDEX, SPACE, SUBSTRING, UCASE
Numeric Functions: ABS or ABSVAL, ACOS, ASIN, ATAN, ATAN2, BIGINT, CEILING or CEIL, COS, COT, DECIMAL, DEGREES, DIGITS, DOUBLE, EXP, FLOAT, FLOOR, INTEGER, LN, LOG, LOG10, MOD, POWER, RADIANS, RAND, REAL, ROUND, SIGN, SIN, SMALLINT, SQRT, TAN, TRUNCATE
Timedate Functions: CURDATE, CURTIME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, DAYNAME, DAYOFWEEK, DAYOFYEAR, HOUR, MINUTE, MONTH, MONTHNAME, NOW, QUARTER, SECOND, WEEK, YEAR
System Functions: DATABASE, NULLIF, USER

Oracle
String Functions: ASCII, BIT_LENGTH, CHAR, CONCAT, INSERT, LCASE, LEFT, LENGTH, LOCATE, LOCATE2, LTRIM, OCTET_LENGTH, REPEAT, REPLACE, RIGHT, RTRIM, SOUNDEX, SPACE, SUBSTRING, UCASE
Numeric Functions: ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, EXP, FLOOR, LOG, LOG10, MOD, PI, POWER, ROUND, SIGN, SIN, SQRT, TAN, TRUNCATE
Timedate Functions: CURDATE, DAYNAME, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, HOUR, MINUTE, MONTH, MONTHNAME, NOW, QUARTER, SECOND, WEEK, YEAR
System Functions: IFNULL, USER
Table 1-1. Scalar Functions Supported (cont.)

SQL Server
String Functions: ASCII, BIT_LENGTH, CHAR, CONCAT, DIFFERENCE, INSERT, LCASE, LEFT, LENGTH, LOCATE, LTRIM, OCTET_LENGTH, REPEAT, REPLACE, RIGHT, RTRIM, SOUNDEX, SPACE, SUBSTRING, UCASE
Numeric Functions: ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, DEGREES, EXP, FLOOR, LOG, LOG10, MOD, PI, POWER, RADIANS, RAND, ROUND, SIGN, SIN, SQRT, TAN, TRUNCATE
Timedate Functions: CURDATE, CURTIME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, DAYNAME, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, EXTRACT, HOUR, MINUTE, MONTH, MONTHNAME, NOW, QUARTER, SECOND, TIMESTAMPADD, TIMESTAMPDIFF, WEEK, YEAR
System Functions: CONVERT, DATABASE, IFNULL, USER

Sybase
String Functions: ASCII, CHAR, CONCAT, DIFFERENCE, INSERT, LCASE, LEFT, LENGTH, LOCATE, LTRIM, REPEAT, RIGHT, RTRIM, SOUNDEX, SPACE, SUBSTRING, UCASE
Numeric Functions: ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, DEGREES, EXP, FLOOR, LOG, LOG10, MOD, PI, POWER, RADIANS, RAND, ROUND, SIGN, SIN, SQRT, TAN
Timedate Functions: DAYNAME, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, HOUR, MINUTE, MONTH, MONTHNAME, NOW, QUARTER, SECOND, TIMESTAMPADD, TIMESTAMPDIFF, WEEK, YEAR
The stored procedure escape is recognized when:

The CommandType property of the data provider's Command object is set to either CommandType.StoredProcedure or to CommandType.Text.
The Text property of the Command object conforms to the defined escape syntax.
NOTE: Using a stored procedure escape does not change the existing behavior of CommandType.StoredProcedure (that is, if Command.Text is set only to the procedure name). It only adds to the existing support for calling stored procedures.
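For illustration, a minimal sketch of calling a procedure through the escape (the procedure name MY_PROC is a placeholder, and conn is assumed to be an open connection from one of the data providers):

using System.Data;
using System.Data.Common;

class StoredProcedureEscapeExample
{
    static void CallProcedure(DbConnection conn)
    {
        DbCommand cmd = conn.CreateCommand();
        cmd.CommandType = CommandType.Text;    // command text that contains the escape
        cmd.CommandText = "{call MY_PROC}";    // stored procedure escape; MY_PROC is a placeholder
        cmd.ExecuteNonQuery();
    }
}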
Table 1-2 lists the outer join escape sequences that the data providers support.
Table 1-2. Outer Join Escape Sequences Supported

DB2: Left outer joins, Right outer joins, Full outer joins
Oracle: Left outer joins, Right outer joins, Nested outer joins
SQL Server: Left outer joins, Right outer joins, Full outer joins
Sybase: Left outer joins, Right outer joins, Nested outer joins
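As a hedged sketch of the escape syntax (emp and dept are placeholder tables, and conn is assumed to be an open connection), a left outer join can be requested in a database-independent way:

using System;
using System.Data.Common;

class OuterJoinEscapeExample
{
    static void PrintEmployees(DbConnection conn)
    {
        DbCommand cmd = conn.CreateCommand();
        // The {oj ...} escape is rewritten by the data provider into the
        // data store's own outer join grammar.
        cmd.CommandText =
            "SELECT e.emp_name, d.dept_name " +
            "FROM {oj emp e LEFT OUTER JOIN dept d ON e.dept_id = d.id}";
        using (DbDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                string dept = reader.IsDBNull(1) ? "(no department)" : reader.GetString(1);
                Console.WriteLine("{0}  {1}", reader.GetString(0), dept);
            }
        }
    }
}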
SQL Extension Escape

Using the RowSetSize SQL escape extension has the same effect as setting the ProviderCommand.RowSetSize property. However, the effect is limited to the result set created by the SQL statement. The RowSetSize SQL escape extension does not set the RowSetSize property on the Command object.

Example:

SELECT * FROM mytable WHERE mycolumn2 > 100 {ext RowSetSize 100}

A maximum of 100 rows are returned from the result set. If the result set contains fewer than 100 rows, the SQL extension escape has no effect. The size of the result sets that are created by subsequent SQL statements is not limited. If the application contains both the RowSetSize SQL extension escape and the RowSetSize property for a command, the escape takes precedence.
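A minimal sketch (mytable and mycolumn2 are the placeholder names from the example above; conn is assumed to be an open connection):

using System.Data.Common;

class RowSetSizeEscapeExample
{
    static void ReadAtMost100Rows(DbConnection conn)
    {
        DbCommand cmd = conn.CreateCommand();
        // Only the result set of this statement is limited to 100 rows;
        // the RowSetSize property on the command itself is left unchanged.
        cmd.CommandText = "SELECT * FROM mytable WHERE mycolumn2 > 100 {ext RowSetSize 100}";
        using (DbDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                // Process at most 100 rows here.
            }
        }
    }
}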
Chapter 2  Locking and Isolation Levels
Locking
Locking is a database operation that restricts a user from accessing a table or record. Locking is used in situations where more than one user might try to use the same table or record at the same time. By locking the table or record, the system ensures that only one user at a time can affect the data. Locking is critical in multiuser databases, where different users can try to access or modify the same records concurrently. Although such concurrent database activity is desirable, it can create problems. Without locking, for example, if two users try to modify the same record at the same time, they might encounter problems ranging from retrieving bad data to deleting data that the other user needs. If, however, the first user to access a record can lock that record to temporarily prevent other users from modifying it, such problems can be avoided. Locking provides a way to manage concurrent database access while minimizing the various problems it can cause.
Isolation Levels
An isolation level represents a particular locking strategy that is employed in the database system to improve data consistency. The higher the isolation level, the more complex the locking strategy behind it. The isolation level that is provided by the database determines whether a transaction will encounter the following behaviors in data consistency:

Dirty reads: User 1 modifies a row. User 2 reads the same row before User 1 commits. User 1 performs a rollback. User 2 has read a row that has never really existed in the database. User 2 may base decisions on false data.

Non-repeatable reads: User 1 reads a row but does not commit. User 2 modifies or deletes the same row and then commits. User 1 rereads the row and finds that it has changed (or has been deleted).

Phantom reads: User 1 uses a search condition to read a set of rows but does not commit. User 2 inserts one or more rows that satisfy this search condition, then commits. User 1 rereads the rows using the search condition and discovers rows that were not present before.
Isolation levels represent the DBMS's ability to prevent these behaviors. The American National Standards Institute (ANSI) defines four isolation levels:
Read uncommitted (0)
Read committed (1)
Repeatable read (2)
Serializable (3)
In ascending order (0-3), these isolation levels provide an increasing amount of data consistency to the transaction. At the lowest level, all three behaviors can occur. At the highest level, none can occur. The success of each level in preventing these behaviors is due to the locking strategies that they employ, which are as follows:

Read uncommitted (0): Locks are obtained on modifications to the database and held until end of transaction (EOT). Reading from the database does not involve any locking.

Read committed (1): Locks are acquired for reading and modifying the database. Locks are released after reading, but locks on modified objects are held until EOT.

Repeatable read (2): Locks are obtained for reading and modifying the database. Locks on all modified objects are held until EOT. Locks obtained for reading data are held until EOT. Locks on unmodified access structures (such as indexes and hashing structures) are released after reading.

Serializable (3): A lock is placed on the affected rows of the DataSet until EOT. All access structures that are modified, and those used by the query, are locked until EOT.
Table 2-1 shows what data consistency behaviors can occur at each isolation level.
Table 2-1. Isolation Levels and Data Consistency

0, Read uncommitted: Dirty Read: Yes; Nonrepeatable Read: Yes; Phantom Read: Yes
1, Read committed: Dirty Read: No; Nonrepeatable Read: Yes; Phantom Read: Yes
2, Repeatable read: Dirty Read: No; Nonrepeatable Read: No; Phantom Read: Yes
3, Serializable: Dirty Read: No; Nonrepeatable Read: No; Phantom Read: No
Although higher isolation levels provide better data consistency, this consistency can be costly in terms of the concurrency that is provided to individual users. Concurrency is the ability of multiple users to access and modify data simultaneously. As isolation levels increase, so does the chance that the locking strategy used will create problems in concurrency. The higher the isolation level, the more locking involved, and the more time users may spend waiting for data to be freed by another user. Because of this inverse relationship between isolation levels and concurrency, you must consider how people use the database before choosing an isolation level. You must weigh the trade-offs between data consistency and concurrency, and decide which is more important.
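As a minimal sketch of how an application requests an isolation level (assuming the Oracle data provider, DDTek.Oracle; the connection string and SQL are placeholders, and the same ADO.NET pattern applies to the other data providers):

using System.Data;
using System.Data.Common;
using DDTek.Oracle;   // assumed provider namespace

class IsolationLevelExample
{
    static void Main()
    {
        // Placeholder connection string.
        DbConnection conn = new OracleConnection("Host=myhost;SID=mysid;User ID=test;Password=secret");
        using (conn)
        {
            conn.Open();
            // Request the isolation level that matches the consistency/concurrency trade-off you need.
            using (DbTransaction txn = conn.BeginTransaction(IsolationLevel.ReadCommitted))
            {
                DbCommand cmd = conn.CreateCommand();
                cmd.Transaction = txn;
                cmd.CommandText = "UPDATE emp SET salary = salary * 1.05 WHERE dept_id = 10";   // placeholder SQL
                cmd.ExecuteNonQuery();
                txn.Commit();
            }
        }
    }
}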
Chapter 3  Using Your Data Provider with the ADO.NET Entity Framework
The Oracle ADO.NET Entity Framework data provider can be used with applications that use the features of the standard .NET Framework 4.0 and the ADO.NET Entity Framework 4.1 and 4.2. To use Plain Old CLR Objects (POCO) entities or Model First, you must use .NET Framework 4.0. To use Code First, you must use ADO.NET Entity Framework 4.1 or 4.2. The DB2 and Sybase ADO.NET Entity Framework data providers can be used with applications that use the features of the standard .NET Framework 3.5 SP1, including ADO.NET Entity Framework functionality. This means that these data providers support the Database First approach. To use Plain Old CLR Objects (POCO) entities, you must use .NET Framework 4.0.
See the README text file shipped with your Progress DataDirect product for the file name of the ADO.NET Entity Framework data provider.
Using the Database First Model

3  Select ADO.NET Entity Data Model, and then click Add. The Choose Model Contents window appears.
4  Select Generate from database, and then click Next. The Choose Your Data Connection window appears.
5  If you want to use an established connection, select it from the drop-down list and continue at Step 7. To create a new connection, continue at Step 6.

6  Click New Connection... to create a new connection.
   a  On the Choose Data Source window, select Other in the Data source list, then select your data provider, for example, Progress DataDirect Connect for ADO.NET Oracle Data Provider, in the Data provider drop-down list.
   b  Click Continue. The Connection Properties window appears.
   c  Provide the necessary connection information; then, click OK.
7  The Wizard creates an Entity connection string.
   a  If the radio buttons are selectable, select Yes, include the sensitive data in the connection string.
   b  In the Save entity connection settings... field, type a name for the main data access class, or accept the default.
   c  Click Next. The Choose Your Database Objects window appears.
8  Select the database objects that you want to use in the model.
9  Click Finish. The model is generated and opened in the Model Browser.
Using Model First

3  Right-click the project and select Add / New Item.
4  Select ADO.NET Entity Data Model, and then click Add. The Entity Data Model Wizard is displayed.
5  Select Empty Model, and then click Finish. An empty model is added to your application.
6  Right-click the empty model and select Properties / DDL Generation Template. Click the drop-down menu and set the value to DataDirect SSDLToOracle.tt (VS).
7  Design your model (refer to MSDN for a wide assortment of tutorials).
8  When you are satisfied with the model design, right-click the model and select Generate database from model. The Generate Database wizard is displayed.

9  On the Choose Your Data Connection window, do the following steps:
   a  Select or create a connection: select an existing connection from the drop-down list, or click New Connection to create a new connection.
   b  Choose whether to include sensitive information.
   c  Click Next. The DDL is generated.
10  Click Finish. The SQL is added to your application.
11  Copy the DDL and execute it against the connection using any tool.
After executing the DDL, the backend database is ready for use, with all of the database objects mapped to your model.
Table 3-1. Mapping to Pseudo Stored Procedure Connection Property CurrentPassword CurrentUser
1 1 1
Pseudo Stored Procedure DDTek_Connection_Reauthenticate DDTek_Connection_Reauthenticate DDTek_Connection_Reauthenticate DDTek_Connection_EnableStatistics DDTek_Connection_DisableStatistics Pseudo Stored Procedure DDTek_Connection_ResetStatistics DDTek_Connection_RetrieveStatistics
1. Supported for the DB2 and Oracle Entity Framework data providers.
You can create a function mapping in the entity model to invoke the pseudo-stored procedure. Alternatively, applications can use the ObjectContext to create a stored procedure command, as shown in the following C# code fragment:

using (MyContext context = new MyContext())
{
    EntityConnection entityConnection = (EntityConnection)context.Connection;
    // The EntityConnection exposes the underlying store connection
    DbConnection storeConnection = entityConnection.StoreConnection;
    DbCommand command = storeConnection.CreateCommand();
    command.CommandText = "DDTek_Connection_EnableStatistics";
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.Add(new OracleParameter("cid", 1));

    bool openingConnection = command.Connection.State == ConnectionState.Closed;
    if (openingConnection)
    {
        command.Connection.Open();
    }
    int result;
    try
    {
        result = command.ExecuteNonQuery();
    }
    finally
    {
        if (openingConnection && command.Connection.State == ConnectionState.Open)
        {
            command.Connection.Close();
        }
    }
}
Configuration Options
The Entity Framework data provider defines options that configure performance and specific behaviors. These options exist in the machine.config, devenv.exe.config, edmgen.exe.config, app.config, and/or the web.config file. The product installer specifies default values for the Visual Studio 2008 and Visual Studio 2010 devenv.exe.config and the EdmGen.exe.config files. If necessary, you can alter the default values in the configuration files, for example, to enable the use of the Enterprise Library Logging Application Block. The performance and behavior of the EdmGen and Visual Studio tools, when using the Entity Framework data provider to create and manipulate ADO.NET Entity Data Models, can be affected by the data provider configuration options. For example, setting edmSchemaRestrictions to User can improve performance, but may not display all the database objects that you need for your entity model.
Suppose you have installed both Release 4.0 and Release 3.5 of the Oracle Entity Framework data provider. In the following example, the value of the edmSchemaRestrictions configuration option is set to User for Release 3.5, and to Accessible for Release 4.0.

<ddtek.oracle.entity.3.5 edmSchemaRestrictions="User" />
<ddtek.oracle.entity edmSchemaRestrictions="Accessible" />
Implementing Reauthentication
Typically, you can configure a connection pool to provide scalability for connections. In addition, to help minimize the number of connections required in a connection pool, you can switch the user associated with a connection to another user, a process known as reauthentication. For example, suppose you are using Kerberos authentication to authenticate users using their operating system user name and password. To reduce the number of connections that must be created and managed, you may want to switch the user associated with a connection to multiple users using reauthentication.

For example, suppose your connection pool contains a connection, Conn, which was established using the user ALLUSERS. You can have that connection service multiple users, User A, B, C, and so on, by switching the user associated with the connection Conn to User A, B, C, and so on. For more information about the data provider's support for reauthentication, refer to the DataDirect Connect Series for ADO.NET Users Guide.

This functionality is modeled in the XML file provided in Appendix A Using an .edmx File on page 105. By surfacing the DDTekConnectionStatistics and DDTekStatus entities, you can quickly model this code using the standard tooling. First, we establish an Entity Container, DDTekConnectionContext, in which we have two Entity Sets: DDTekConnectionStatistics and DDTekStatus. To interact with each Entity, include functions to retrieve results. The following C# code fragment shows how you gain access to these statistics (here, status is a DDTekStatus entity that was retrieved earlier through the function defined for the DDTekStatus entity set):

DDTekConnectionContext objCtx = new DDTekConnectionContext();
try
{
    MessageBox.Show("CurrentUser = " + status.CurrentUser);
    status = objCtx.Reauthenticate("login5", "login5", 600).First();
    MessageBox.Show("CurrentUser = " + status.CurrentUser);
}
catch (Exception ex)
{
    MessageBox.Show(ex.ToString());
}

where DDTekConnectionContext is declared in the app.config file:

<add name="DDTekConnectionContext"
     connectionString="metadata=res://*/Model1.csdl|res://*/Model1.ssdl|res://*/Model1.msl;provider=DDTek.Oracle;provider connection string=&quot;Host=nc-lnx02;Password=login4;Pooling=False;SID=CP31;User ID=login4;Reauthentication Enabled=true&quot;"
     providerName="System.Data.EntityClient" />
For more information, the following resources may be helpful:

The DataDirect Connect for ADO.NET web page provides additional information and examples about using the data provider.

The DataConnections blog provides the latest information about our support for the ADO.NET Entity Framework and provides other information about the DataDirect Connect ADO.NET data providers.

Programming Entity Framework by Julie Lerman provides a comprehensive discussion of using the ADO.NET Entity Framework.

ADO.NET Entity Framework introduces the Entity Framework and provides links to numerous detailed articles.

Connection Strings (Entity Framework) describes how connection strings are used by the Entity Framework. The connection strings contain information used to connect to the underlying ADO.NET data provider as well as information about the required Entity Data Model mapping and metadata.

Working with POCO Entities explains how you can use existing domain objects and other CLR objects with your data model.

Performance Considerations (Entity Framework) describes some implementation considerations for improving the performance of Entity Framework applications.

Entity Data Model Tools describes the tools that help you to build applications graphically with the EDM: the Entity Data Model Wizard, the ADO.NET Entity Data Model Designer (Entity Designer), and the Update Model Wizard. These tools work together to help you generate, edit, and update an Entity Data Model.

LINQ to Entities enables developers to write queries against the database from the same language used to build the business logic.
Chapter 4  Using the Microsoft Enterprise Library
Return scalar values. Determine which parameters are needed and create them. Involve commands in a transaction.
If your application needs to address specific DBMS functionality, you can use a DataDirect Connect for ADO.NET data provider.
The DAAB is used with ADO.NET, increasing efficiency and productivity when creating applications for ADO.NET. The abstract Database class provides a number of methods, such as ExecuteNonQuery, ExecuteReader, and ExecuteScalar, that are the same as the methods that are used by the DbCommand class, or, if you are using database-specific code, a data provider-specific class such as OracleCommand. Although using the default DAAB during development is convenient, the resulting application lacks portability. When you use the provider-specific DataDirect Connect for ADO.NET DAAB implementation, the application includes the DataDirect Connect data providers' SQL leveling capabilities. You have more flexibility, whether your application needs to access multiple databases, or whether you anticipate a change in your target data source.
The GenericDatabase class option is less suited to applications that need specific control of database behaviors. For portability, the GenericDatabase solution is the optimal approach. If your application needs to retrieve data in a specialized way, or if your code needs customization to take advantage of features specific to a DBMS, using the DataDirect Connect for ADO.NET data provider for that DBMS might be better suited to your needs.
"Adding a New DAAB Entry" on page 49 "Adding the Data Access Application Block to Your Application" on page 52
Adding a New DAAB Entry

4  Select File / Open. Then, select the App.config file you created in Step 3 and click OK. The App.config file is displayed.
6  Click the plus sign button in the Database Instances column and select Add Database Connection String. This adds a new connection string item to the configuration.
7  In the Name field, enter a name for the DAAB's connection string, for example, MyDB2conn.

8  In the Connection String field, click the ellipsis button to display the Edit Text Value dialog box. Type or paste a connection string in the text box and click OK. For example, type:

Database Name=DEV1DB9A;Host=dev1;Port=6070;User ID=TEST01;Encryption Method=SSL;AuthenticationMethod=Kerberos;

9  In the Database Provider drop-down list, select the data provider. For example, select DDTek.DB2.4.0 for the DB2 data provider.
10  Click the chevron button to the right of the Database Settings title. In the Default Database Instance drop-down list, select the instance that you want to use, in this example, dev1.DEV1DB9A.
Adding the Data Access Application Block to Your Application

2  Add the following directives to your C# source code:

using Microsoft.Practices.EnterpriseLibrary.Data;
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using System.Data;
3  Rebuild the solution to ensure that the new dependencies are functional.
4  Determine the output Debug or Release path location of your current solution, and switch back to the Enterprise Library Configuration window (see "Adding a New DAAB Entry" on page 49).

5  Right-click the connection string under the Application Node and select Save Application.

6  Navigate to the Debug or Release output directories of your current solution, and locate the .exe file of the current solution.

7  Using File Explorer, copy the DDTek.EnterpriseLibrary.Data.XXX.dll into your application's working directory.
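Once the block is configured, application code goes through the abstract Database class rather than a provider-specific command class. The following minimal C# sketch assumes Enterprise Library 5.0 and the instance name dev1.DEV1DB9A configured in the earlier example; the SQL text and column are placeholders.

using System;
using System.Data;
using System.Data.Common;
using Microsoft.Practices.EnterpriseLibrary.Data;

class DaabExample
{
    static void Main()
    {
        // Resolve the database instance that was configured in App.config.
        Database db = DatabaseFactory.CreateDatabase("dev1.DEV1DB9A");

        // Build a command from plain SQL text; the DAAB supplies the provider-specific command object.
        DbCommand cmd = db.GetSqlStringCommand("SELECT EMP_NAME FROM EMP");

        using (IDataReader reader = db.ExecuteReader(cmd))
        {
            while (reader.Read())
            {
                Console.WriteLine(reader.GetString(0));
            }
        }
    }
}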
Logging Application Blocks

The following procedure uses the configuration options for the .NET Framework 3.5. To configure the Logging Application Block on any supported platform:

1  Select Start / Programs / Microsoft patterns and practices / Enterprise Library 5.0 / Enterprise Library Configuration / EntLib Config .NET 3.5. The Enterprise Library Configuration window appears.
2  Select Blocks / Add Logging Settings. Additional fields appear on the New Configuration window.
3  Add a flat file trace listener file:
   a  Click the Logging Target Listeners plus sign button. Then, select Add Logging Target Listeners / Add Flat File Trace Listener.
   b  In the properties pane next to the File Name property, click the ellipsis button. The Open File window appears.
   c  Browse to the target location for the file. Then, type a file name for the trace listener file, and click Open. In this example, the file name is labtrace.log.
4  Click the plus sign button to the right of the Categories heading; then, select Add Category. The Category section is expanded.
5  Define the characteristics of the new category:
   a  In the Name field, type the name of the new category. In this example, the category DDTek Error will be created.
   b  Click the plus sign button next to the Listeners heading. Then, select Flat File Trace Listener from the drop-down list.
   c  From the Minimum Severity drop-down list, select Error.
6  Repeat Step 3 through Step 5 to create the following categories:
DDTek Information: Information not related to errors
DDTek Command: Enables SQL, Parameter, and DbCommandTree logging
Select File / Save As. The Save Configuration File window appears. Type a name for your configuration file. By default, the file is saved to C:\Program Files\Microsoft Enterprise Library 5.0\Bin\filename.exe.config, where filename is the name that you typed in the Save Configuration File window.
Table 4-1. LAB Configuration Settings

enableLoggingApplicationBlock: Enables the Logging Application Block.
labLogEntryTypeName: Specifies the LogEntry type name for the LogEntry object.
labLoggerTypeName: Specifies the Logger type name for the Logging Application Block.
labAssemblyName: Specifies the assembly name to which the Logging Application Block applies. NOTE: If you are using any version of the LAB other than the Microsoft Enterprise Library 5.0 binary release, you must set labAssemblyName. For example, if you are using an older or newer version of the LAB, or a version that you have customized, you must specify a value for labAssemblyName.
The following code fragment provides an example of a Logging Application Block that could be added to an Oracle data access application. <loggingConfiguration name="Logging Application Block" tracingEnabled="true" defaultCategory="" logWarningsWhenNoCategoriesMatch="true"> <listeners> <add fileName="rolling.log" footer="----------------------------------------" header="----------------------------------------" rollFileExistsBehavior="Overwrite" rollInterval="None" rollSizeKB="0" timeStampPattern="yyyy-MM-dd" listenerDataType= "Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.RollingFlatFileTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken= 31bf3856ad364e35" traceOutputOptions="None" filter="All" type= "Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.RollingFlatFileTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken= 31bf3856ad364e35" name="Rolling Flat File Trace Listener" /> </listeners> <formatters> <add template="Message: {message}
Category: {category}
Priority: {priority}
EventId: {eventid}
Severity: {severity}
Title:{title}
" type="Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.TextFormatter, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="Text Formatter" /> </formatters> <categorySources> <add switchValue="All" name="DDTek"> <listeners> <add name="Rolling Flat File Trace Listener" /> </listeners>
</add> </categorySources> <specialSources> <allEvents switchValue="All" name="All Events" /> <notProcessed switchValue="All" name="Unprocessed Category" /> <errors switchValue="All" name="Logging Errors & Warnings"> <listeners> <add name="Rolling Flat File Trace Listener" /> </listeners> </errors> </specialSources> </loggingConfiguration>
The Microsoft patterns & practices Developer Center includes an overview on Application Block topics: http://msdn.microsoft.com/en-us/library/ff632023.aspx.

The Microsoft patterns & practices Enterprise Library includes an FAQ section on using Logging Application Blocks: http://entlib.codeplex.com/Wiki/View.aspx?title=EntLib%20FAQ.

The Microsoft Enterprise Library 5.0 Hands-on Labs provide detailed examples that help you learn about the application blocks: http://entlib.codeplex.com/

"The Data Access Application Block" provides an overview of tasks and applications using Data Access Application Blocks: http://msdn.microsoft.com/en-us/library/ff664408(v=PandP.50).aspx.

The DataConnections blog provides the latest information about our support for the ADO.NET Entity Framework and provides other information about DataDirect Connect ADO.NET data providers.
Chapter 5  Getting Schema Information
Table 5-1. Columns Returned by GetSchemaTable on DataReader (cont.) Column ProviderType Description Specifies the provider-defined indicator of the column's data type. This column cannot be null. If the data type of the column varies from row to row, this must be Object. For the DB2 data provider, if Xml Describe Type is set to Binary, this should be a System.Byte[]. Otherwise, this should be System.String. IsLong Set if the column contains a BLOB, CLOB, LONG VARBINARY, LONG VARCHAR, or (for DB2) LONG VARGRAPHIC, that contains very long data. The definition of very long data is provider-specific. AllowDBnull IsReadOnly IsRowVersion IsUnique Set to true if the AllowDbNull constraint is set to true for the column. Otherwise, the value is false. The value is true if the column can be modified; otherwise, the value is false. Is set if the column contains a persistent row identifier that cannot be written to, and has no meaningful value except to identify the row. Specifies whether the column constitutes a key by itself or if there is a constraint of type UNIQUE that applies only to this column. When set to true, no two rows in the base table (the table returned in BaseTableName) can have the same value in this column. When set to false, the column can contain duplicate values in the base table. IsKey When set to true, the column is one of a set of columns that, taken together, uniquely identify the row in the DataTable. The set of columns with IsKey set to true must uniquely identify a row in the DataTable that may be generated from a DataTable primary key. When set to false, the column is not required to uniquely identify the row. IsAutoIncrement Specifies whether the column assigns values to new rows in fixed increments. When set to true, the column assigns values to new rows in fixed increments. When set to false, the column does not assign values to new rows in fixed increments. BaseSchemaName BaseCatalogName BaseTableName BaseColumnName Specifies the name of the schema in the database that contains the column. The value is null if the base schema name cannot be determined. Specifies the name of the catalog in the data store that contains the column. A null value is used if the base catalog name cannot be determined. Specifies the name of the table or view in the data store that contains the column. A null value is used if the base table name cannot be determined. Specifies the name of the column in the data store. This might be different than the column name returned in the ColumnName column if an alias was used. A null value is used if the base column name cannot be determined or if the rowset column is derived from, but is not identical to, a column in the database. IsAliased IsExpression Specifies whether the name of the column is an alias. The value true is returned if the column name is an alias; otherwise, false is returned. Specifies whether the name of the column is an expression. The value true is returned if the column is an expression; otherwise, false is returned.
Table 5-1. Columns Returned by GetSchemaTable on DataReader (cont.) Column IsIdentity IsHidden Description Specifies whether the name of the column is an identity column. The value true is returned if the column is an identity column; otherwise, false is returned. Specifies whether the name of the column is hidden. The value true is returned if the column is hidden; otherwise, false is returned.
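To see these columns for a specific result set, an application calls GetSchemaTable on an open DataReader. The following minimal C# sketch assumes the Oracle data provider (DDTek.Oracle) and uses placeholder connection and table names; the same pattern applies to any of the data providers.

using System;
using System.Data;
using DDTek.Oracle;   // assumed provider namespace; adjust for your data provider

class GetSchemaTableExample
{
    static void Main()
    {
        // Placeholder connection string and table name.
        using (OracleConnection conn = new OracleConnection("Host=myhost;SID=mysid;User ID=test;Password=secret"))
        {
            conn.Open();
            OracleCommand cmd = conn.CreateCommand();
            cmd.CommandText = "SELECT * FROM emp";
            using (OracleDataReader reader = cmd.ExecuteReader(CommandBehavior.SchemaOnly))
            {
                DataTable schema = reader.GetSchemaTable();
                foreach (DataRow row in schema.Rows)
                {
                    // Each row describes one result set column, using the columns listed in Table 5-1.
                    Console.WriteLine("{0} ({1})", row["ColumnName"], row["DataType"]);
                }
            }
        }
    }
}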
"MetaDataCollections Schema Collections" on page 63 "DataSourceInformation Schema Collection" on page 64 "DataTypes Collection" on page 65 "ReservedWords Collection" on page 67 "Restrictions Collection" on page 67
Additional collections are specified and must be supported to return schema information from the data provider. See "Additional Schema Metadata Collections" on page 68 for details about the other collections supported by the data providers.

NOTE: Refer to the .NET Framework documentation for additional background functional requirements, including the required data type for each ColumnName.
ParameterMarkerFormat
ParameterMarkerPattern
ParameterNameMaxLength
ParameterNamePattern
QuotedIdentifierCase
QuotedIdentifierPattern
StatementSeparatorPattern
StringLiteralPattern
SupportedJoinOperators
SupportsReauthentication
Table 5-4 lists the provider-specific ColumnNames:

Table 5-4. Provider-specific ColumnNames

Oracle, SID: SID of the data source (1)
Oracle, ServiceName: Service name from the tnsnames.ora file
SQL Server, NameInstance: An instance of SQL Server running on a host
1. SID and ServiceName are mutually exclusive in a connection string or data source.
DataTypes Collection
Table 5-5 describes the supported columns of the DataTypes schema collection. The columns can be returned in any order.

Table 5-5. ColumnNames Returned by the DataTypes Collection

ColumnSize: The length of a non-numeric column or parameter; refers to either the maximum or the length defined for this type by the data provider.

CreateFormat: Format string that represents how to add this column to a data definition statement, such as CREATE TABLE.

CreateParameters: The creation parameters that must be specified when creating a column of this data type. Each creation parameter is listed in the string, separated by a comma in the order they are to be supplied. For example, the SQL data type DECIMAL needs a precision and a scale. In this case, the creation parameters should contain the string "precision, scale". In a text command to create a DECIMAL column with a precision of 10 and a scale of 2, the value of the CreateFormat column might be DECIMAL({0},{1}) and the complete type specification would be DECIMAL(10,2).

DataType: The name of the .NET Framework type of the data type.

IsAutoIncrementable: Specifies whether values of a data type are auto-incremented. true: Values of this data type may be auto-incremented. false: Values of this data type may not be auto-incremented.

IsBestMatch: Specifies whether the data type is the best match between all data types in the data store and the .NET Framework data type that is indicated by the value in the DataType column. true: The data type is the best match. false: The data type is not the best match.

IsCaseSensitive: Specifies whether the data type is both a character type and case-sensitive. true: The data type is a character type and is case-sensitive. false: The data type is not a character type or is not case-sensitive.

IsConcurrencyType: true: The data type is updated by the database every time the row is changed and the value of the column is different from all previous values. false: The data type is not updated by the database every time the row is changed.

IsFixedLength: true: Columns of this data type created by the data definition language (DDL) will be of fixed length. false: Columns of this data type created by the DDL will be of variable length.

IsFixedPrecisionScale: true: The data type has a fixed precision and scale. false: The data type does not have a fixed precision and scale.

IsLiteralsSupported: true: The data type can be expressed as a literal. false: The data type cannot be expressed as a literal.

IsLong: true: The data type contains very long data. The definition of very long data is provider-specific. false: The data type does not contain very long data.

IsNullable: true: The data type is nullable. false: The data type is not nullable.

IsSearchable: true: The data type can be used in a WHERE clause with any operator except the LIKE predicate. false: The data type cannot be used in a WHERE clause with any operator except the LIKE predicate.

IsSearchableWithLike: true: The data type can be used with the LIKE predicate. false: The data type cannot be used with the LIKE predicate.

IsUnsigned: true: The data type is unsigned. false: The data type is signed.

LiteralPrefix: The prefix applied to a given literal.

LiteralSuffix: The suffix applied to a given literal.

MaximumScale: If the type indicator is a numeric type, this is the maximum number of digits allowed to the right of the decimal point. Otherwise, this is DBNull.Value.

NativeDataType: An OLE DB-specific column for exposing the OLE DB type of the data type.

ProviderDbType: The provider-specific type value that should be used when specifying a parameter's type.

TypeName: The provider-specific data type name.
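As a small sketch of how CreateFormat and CreateParameters fit together at run time (conn is assumed to be an open DbConnection; DECIMAL, 10, and 2 are illustrative values):

using System;
using System.Data;
using System.Data.Common;

class DataTypesCollectionExample
{
    static void PrintDecimalSpecification(DbConnection conn)
    {
        DataTable types = conn.GetSchema(DbMetaDataCollectionNames.DataTypes);
        foreach (DataRow row in types.Rows)
        {
            string typeName = row["TypeName"] as string;
            string createFormat = row["CreateFormat"] as string;
            // For a type such as DECIMAL, CreateFormat might be "DECIMAL({0},{1})" with
            // CreateParameters "precision, scale"; filling in 10 and 2 yields DECIMAL(10,2).
            if (typeName == "DECIMAL" && !string.IsNullOrEmpty(createFormat))
            {
                Console.WriteLine(string.Format(createFormat, 10, 2));
            }
        }
    }
}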
ReservedWords Collection
This schema collection exposes information about the words that are reserved by the database to which the data provider is connected. Table 5-6 describes the columns that the data provider supports.
Table 5-6. ReservedWords Schema Collection

Reserved Word: Provider-specific reserved words
Restrictions Collection
The Restrictions schema collection exposes information about the restrictions that are supported by the data provider that is currently connected to the database. Table 5-7 describes the columns that are returned by the data providers. The columns can be returned in any order. DataDirect Connect for ADO.NET data providers use standardized names for restrictions. If a data provider supports a restriction for a Schema method, it always uses the same name for the restriction. The case sensitivity of any restriction value is determined by the underlying database, and can be determined by the IdentifierCase and QuotedIdentifierCase values in the DataSourceInformation collection (see "DataSourceInformation Schema Collection" on page 64).

Table 5-7. ColumnNames Returned by the Restrictions Collection

CollectionName: The name of the collection to which the specified restrictions apply
RestrictionName: The name of the restriction in the collection
RestrictionDefault: Ignored
RestrictionNumber: The actual location in the collection restrictions for this restriction
IsRequired: Specifies whether the restriction is required
Get result column data using the name of the column, not the ordinal, for example, using the DataTable.Columns property rather than getColumn(1). This lets the client program know whether the column exists for a given Metadata collection.

Check for the existence of a Metadata collection before calling it. Use the MetaData collections to determine which collections are supported, for example, by calling GetSchema(DbMetaDataCollectionNames.MetaDataCollections) on the Connection object. This lets the program know whether the given collection exists for the given data provider.

Check for the existence of a particular restriction before using it. For example, get the restrictions Metadata collection with GetSchema(DbMetaDataCollectionNames.Restrictions).
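As a minimal sketch of those guidelines (conn is assumed to be an open DbConnection obtained from one of the data providers; the Tables collection, the number of restriction values, and the schema name MYSCHEMA are placeholders to adjust for your provider):

using System;
using System.Data;
using System.Data.Common;

class SchemaCollectionsExample
{
    static void ListTables(DbConnection conn)
    {
        // Which metadata collections does this data provider expose?
        DataTable collections = conn.GetSchema(DbMetaDataCollectionNames.MetaDataCollections);
        bool hasTables = false;
        foreach (DataRow row in collections.Rows)
        {
            if (string.Equals((string)row["CollectionName"], "Tables", StringComparison.OrdinalIgnoreCase))
            {
                hasTables = true;
                break;
            }
        }
        if (!hasTables)
        {
            return;   // this provider does not expose a Tables collection
        }

        // The Restrictions collection reports which positional restrictions each collection accepts.
        DataTable restrictions = conn.GetSchema(DbMetaDataCollectionNames.Restrictions);
        Console.WriteLine("{0} restriction definitions reported", restrictions.Rows.Count);

        // Restriction values are positional; null entries mean "no restriction".
        DataTable tables = conn.GetSchema("Tables", new string[] { null, "MYSCHEMA", null, null });
        foreach (DataRow row in tables.Rows)
        {
            // TABLE_NAME is a typical column name; check the returned columns for your provider.
            Console.WriteLine(row["TABLE_NAME"]);
        }
    }
}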
Table 5-8. Catalogs Schema Collection

CATALOG_NAME (String): Catalog name. Cannot be null.
DESCRIPTION (String): A description of the catalog (if any). If none, an empty string must be returned.
The maximum length of the column in characters, bytes, or bits, respectively, if one is defined. The maximum length of the data type in characters, bytes, or bits, respectively, if the column does not have a defined length. Zero (0) if neither the column or the data type has a defined maximum length, or if the column is not a character, binary, or bit column.
CHARACTER_OCTET_LENGTH
Int32
The maximum length in octets (bytes) of the column, if the type of the column is character or binary. A value of zero (0) means the column has no maximum length or that the column is not a character or binary column. Catalog name in which the character set is defined. This column does not exist if the provider does not support catalogs or different character sets.
CHARACTER_SET_CATALOG
String
Table 5-9. Columns Schema Collection (cont.) Column Name CHARACTER_SET_NAME CHARACTER_SET_SCHEMA .NET Framework DataType1 String String Description Character set name. This column does not exist if the provider does not support different character sets. Unqualified schema name in which the character set is defined. This column does not exist if the provider does not support schemas or different character sets. The catalog name in which the collation is defined. This column exists only if the data provider supports catalogs or different collations. Collation name. This column exists only if the provider supports different collations. Unqualified schema name in which the collation is defined. This column exists only if the data provider supports schemas or different collations. Default value of the column. true: The column has a default value. false: The column does not have a default value, or it is unknown whether the column has a default value. COLUMN_NAME String The name of the column; this might not be unique. This column is returned only if the data provider supports catalogs. The indicator of the column's data type. If Xml Describe Type is set to Clob, this should be a System.String. Otherwise, this should be System.Byte. This value cannot be null. DATETIME_PRECISION Int32 Datetime precision (number of digits in the fractional seconds portion) of the column if the column is a datetime. If the column's data type is not datetime, this is DbNull. true: The column might be nullable. false: The column is known not to be nullable. NATIVE_DATA_TYPE String The data source description of the type. This should be BLOB. This value cannot be null. NUMERIC_PRECISION Int32 If the column's data type is of a numeric data, this is the maximum precision of the column. This column is returned only if the data provider supports catalogs.
COLLATION_CATALOG
String
COLLATION_NAME COLLATION_SCHEMA
String String
COLUMN_DEFAULT COLUMN_HASDEFAULT
String Boolean
DATA_TYPE
Object
IS_NULLABLE
Boolean
Table 5-9. Columns Schema Collection (cont.) Column Name NUMERIC_PRECISION_RADIX .NET Framework DataType1 Int32 Description The radix indicates in which base the values in NUMERIC_PRECISION and NUMERIC_SCALE are expressed. It is only useful to return either 2 or 10. This column is returned only if the data provider supports catalogs. NUMERIC_SCALE Int32 If the column's type is a numeric type that has a scale, this is the number of digits to the right of the decimal point. This column is returned only if the data provider supports catalogs. The ordinal of the column. Columns are numbered starting from one. This column is returned only if the data provider supports catalogs. The data source defined type of the column is mapped to the type enumeration of the data provider. For example, for Oracle, this is the DDTek.Oracle.OracleDbType enumeration. This value cannot be null. PROVIDER_GENERIC_TYPE Int32 The provider-defined type of the column as mapped to the System.Data.DbType enumeration. This value cannot be null. TABLE_NAME TABLE_SCHEMA String String The table name. This column is returned only if the data provider supports catalogs. The unqualified schema name.
ORDINAL_POSITION
Int32
PROVIDER_DEFINED_TYPE
Int32
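The schema collections in this chapter are all retrieved through the standard ADO.NET GetSchema method on the connection object. The following minimal sketch reads the Columns collection for a single table; the connection string values and the restriction order (catalog, schema, table, column) are assumptions for illustration and should be verified against the Restrictions collection reported by your data provider.

// Minimal sketch: read the Columns schema collection for one table.
// Connection string values and the restriction order are placeholders.
using System;
using System.Data;
using DDTek.Oracle;   // assumed provider namespace, as used elsewhere in this book

class ColumnsSchemaSample
{
    static void Main()
    {
        using (OracleConnection conn = new OracleConnection(
            "Host=myhost;Port=1521;SID=ORCL;User ID=SCOTT;Password=secret"))
        {
            conn.Open();

            // A null entry leaves that restriction unfiltered.
            DataTable columns = conn.GetSchema("Columns",
                new string[] { null, "SCOTT", "EMP", null });

            foreach (DataRow row in columns.Rows)
            {
                // COLUMN_NAME and ORDINAL_POSITION are columns documented in Table 5-9.
                Console.WriteLine("{0} (ordinal {1})",
                    row["COLUMN_NAME"], row["ORDINAL_POSITION"]);
            }
        }
    }
}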
Table 5-10. ForeignKeys Schema Collection
DEFERRABILITY (String): The deferrability of the foreign key. The value is one of the following: INITIALLY DEFERRED, INITIALLY IMMEDIATE, or NOT DEFERRABLE.
DELETE_RULE (String): If a delete rule was specified, the value is one of the following: CASCADE: A referential action of CASCADE was specified. SET NULL: A referential action of SET NULL was specified. SET DEFAULT: A referential action of SET DEFAULT was specified. NO ACTION: A referential action of NO ACTION was specified. For some data providers, this column does not exist if they cannot determine the DELETE_RULE. In most cases, this implies a default of NO ACTION.
FK_COLUMN_NAME (String): Foreign key column name.
FK_NAME (String): Foreign key name. This column exists only if the data provider supports named foreign key constraints.
FK_TABLE_NAME (String): Foreign key table name.
FK_TABLE_SCHEMA (String): Unqualified schema name in which the foreign key table is defined. This column exists only if the data provider supports schemas.
ORDINAL (Int32): The order of the column names in the key. For example, a table might contain several foreign key references to another table. The ordinal starts over for each reference; for example, two references to a three-column key would return 1, 2, 3, 1, 2, 3.
PK_COLUMN_NAME (String): Primary key column name.
PK_NAME (String): Primary key name. This column exists only if the data provider supports named primary key constraints.
PK_TABLE_NAME (String): Primary key table name.
Table 5-10. ForeignKeys Schema Collection (cont.)
PK_TABLE_SCHEMA (String): Unqualified schema name in which the primary key table is defined. This column exists only if the data provider supports schemas.
UPDATE_RULE (String): If an update rule was specified, the value is one of the following: CASCADE: A referential action of CASCADE was specified. SET NULL: A referential action of SET NULL was specified. SET DEFAULT: A referential action of SET DEFAULT was specified. NO ACTION: A referential action of NO ACTION was specified. For some data providers, this column will not exist if they cannot determine the UPDATE_RULE. In most cases, this implies a default of NO ACTION.
1. All classes are System.XXX. For example, System.String.
Table 5-11. Indexes Schema Collection
CARDINALITY (Int32): The number of unique values in the index.
CLUSTERED (Boolean): Determines whether an index is clustered. This is one of the following: true: The leaf nodes of the index contain full rows, not bookmarks. This is a way to represent a table clustered by key value. false: The leaf nodes of the index contain bookmarks of the base table rows whose key value matches the key value of the index entry.
COLLATION (String): This is one of the following: ASC: The sort sequence for the column is ascending. DESC: The sort sequence for the column is descending. This column exists only when a column sort sequence is supported.
COLUMN_NAME (String): The column name.
FILL_FACTOR (Int32): For a B+-tree index, this property represents the storage utilization factor of page nodes during the creation of the index. The value is an integer from 0 to 100, representing the percentage of use of an index node. For a linear hash index, this property represents the storage utilization of the entire hash structure (the ratio of used area to total allocated area) before a file structure expansion occurs.
FILTER_CONDITION (String): The WHERE clause identifying the filtering restriction.
INDEX_CATALOG (String): The catalog name. This column exists only if the data provider supports catalogs.
INDEX_NAME (String): The index name.
INDEX_SCHEMA (String): The unqualified schema name. This column exists only if the data provider supports schemas.
INITIAL_SIZE (Int32): The total amount of bytes allocated to this structure at creation time.
INTEGRATED (Boolean): Whether the index is integrated, that is, whether all base table columns are available from the index. This is one of the following: true: The index is integrated. For clustered indexes, this value must always be true. false: The index is not integrated.
NULL_COLLATION (String): How NULLs are collated in the index. This is one of the following: END: NULLs are collated at the end of the list, regardless of the collation order. START: NULLs are collated at the start of the list, regardless of the collation order. HIGH: NULLs are collated at the high end of the list. LOW: NULLs are collated at the low end of the list.
NULLS (Int32): Whether NULL keys are allowed. This is one of the following: ALLOWNULL: The index allows entries where the key columns are NULL. DISALLOWNULL: The index does not allow entries where the key columns are NULL. If the consumer attempts to insert an index entry with a NULL key, the data provider returns an error. IGNORENULL: The index does not insert entries containing NULL keys. If the consumer attempts to insert an index entry with a NULL key, the data provider ignores that entry and no error code is returned. IGNOREANYNULL: The index does not insert entries where some column key has a NULL value. For an index having a multicolumn search key, if the consumer inserts an index entry with a NULL value in some column of the search key, the provider ignores that entry and no error code is returned.
ORDINAL_POSITION (Int32): The ordinal position of the column in the index, starting with 1.
PAGES (Int32): The number of pages that are used to store the index.
PRIMARY_KEY (Boolean): Determines whether the index represents the primary key on the table. This column does not exist if this is not known.
TABLE_NAME (String): The table name.
TABLE_SCHEMA (String): Unqualified schema name. This column exists only if the data provider supports schemas.
TYPE (String): The type of the index. This is one of the following: BTREE: The index is a B+-tree. HASH: The index is a hash file using, for example, linear or extensible hashing. CONTENT: The index is a content index. OTHER: The index is some other type of index.
UNIQUE (Boolean): Determines whether index keys must be unique. This is one of the following: true: The index keys must be unique. false: Duplicate keys are allowed.
1. All classes are System.XXX. For example, System.String.
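The collections themselves can be discovered at run time through the MetaDataCollections collection, which reports each collection name and the number of restrictions it accepts. A minimal sketch, reusing the hypothetical conn connection from the Columns example earlier in this chapter:

// List every schema collection the data provider exposes, together with the
// number of restrictions each accepts. CollectionName and NumberOfRestrictions
// are standard columns of the common MetaDataCollections collection.
DataTable collections = conn.GetSchema("MetaDataCollections");
foreach (DataRow row in collections.Rows)
{
    Console.WriteLine("{0} ({1} restrictions)",
        row["CollectionName"], row["NumberOfRestrictions"]);
}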
Table 5-13. ProcedureParameters Schema Collection (cont.)
CHARACTER_MAXIMUM_LENGTH (Int32): The maximum length of the parameter in characters, bytes, or bits, respectively, if one is defined. For example, a CHAR(5) parameter has a maximum length of 5. The maximum length of the data type in characters, bytes, or bits, respectively, if the parameter does not have a defined length. Zero (0) if neither the parameter nor the data type has a defined maximum length. DbNull for all other types of parameters.
CHARACTER_OCTET_LENGTH (Int32): The maximum length in octets (bytes) of the parameter, if the type of the parameter is character or binary. If the parameter has no maximum length, the value is zero (0). For all other types of parameters, the value is -1.
DATA_TYPE (Object): The indicator of the parameter's data type. This value cannot be null.
DESCRIPTION (String): The description of the parameter. For example, the description of the Name parameter in a procedure that adds a new employee might be Employee name.
IS_NULLABLE (Boolean): true: The parameter might be nullable. false: The parameter is not nullable.
NATIVE_DATA_TYPE (String): The data source description of the type. This value cannot be null.
NUMERIC_PRECISION (Int32): If the parameter's data type is numeric, this is the maximum precision of the parameter. If the parameter's data type is not numeric, this is DbNull.
NUMERIC_SCALE (Int32): If the parameter's type is a numeric type that has a scale, this is the number of digits to the right of the decimal point. Otherwise, this is DbNull.
ORDINAL_POSITION (Int32): If the parameter is an input, input/output, or output parameter, this is the one-based ordinal position of the parameter in the procedure call. If the parameter is the return value, this is DbNull.
PARAMETER_DEFAULT (String): The default value of the parameter. If the default value is NULL, the PARAMETER_HASDEFAULT column returns true and the PARAMETER_DEFAULT column does not exist. If PARAMETER_HASDEFAULT is set to false, the PARAMETER_DEFAULT column does not exist.
PARAMETER_HASDEFAULT (Int32): true: The parameter has a default value. false: The parameter does not have a default value, or it is unknown whether the parameter has a default value.
PARAMETER_NAME (String): The parameter name. DbNull if the parameter is not named.
PARAMETER_TYPE (String): This is one of the following: INPUT: The parameter is an input parameter. INPUTOUTPUT: The parameter is an input/output parameter. OUTPUT: The parameter is an output parameter. RETURNVALUE: The parameter is a procedure return value. UNKNOWN: The parameter type is unknown to the provider.
PROCEDURE_CATALOG (String): The catalog name. This column exists only if the data provider supports catalogs.
PROCEDURE_NAME (String): The procedure name.
PROCEDURE_SCHEMA (String): The unqualified schema name. This column exists only if the data provider supports schemas.
PROVIDER_DEFINED_TYPE (Int32): The data source-defined type of the parameter as mapped to the type enumeration of the data provider. For example, for the Oracle data provider, this is the DDTek.Oracle.OracleDbType enumeration. This value cannot be null.
PROVIDER_GENERIC_TYPE (Int32): The data source-defined type of the parameter as mapped to the System.Data.DbType enumeration. This value cannot be null.
1. All classes are System.XXX. For example, System.String.
Table 5-14. Procedures Schema Collection (cont.)
PROCEDURE_SCHEMA (String): Unqualified schema name. This column exists only if the data provider supports schemas.
PROCEDURE_TYPE (String): This is one of the following: UNKNOWN: It is not known whether there is a returned value. PROCEDURE: Procedure; there is no returned value. FUNCTION: Function; there is a returned value.
1. All classes are System.XXX. For example, System.String.
SCHEMA_NAME (String)
SCHEMA_OWNER (String)
TABLE_NAME (String)
TABLE_TYPE (String): One of the following: ALIAS, TABLE, SYNONYM, SYSTEM TABLE, VIEW, GLOBAL TEMPORARY, LOCAL TEMPORARY, or SYSTEM VIEW. This column cannot contain an empty string.
DESCRIPTION (String): A description of the table. DbNull if no description is associated with the column.
TABLE_NAME (String): The table name.
TABLE_SCHEMA (String): The unqualified schema name in which the table is defined. This column exists only if the data provider supports schemas.
VIEW_DEFINITION (String)
The following client information can be stored for a connection:
Name of the application currently using the connection
User ID for whom the application using the connection is performing work; this user ID may be different than the user ID that was used to establish the connection
Host name of the client on which the application using the connection is running
Product name and version of the driver on the client
Additional information that may be used for accounting or troubleshooting purposes, such as an accounting ID
For DB2 V9.5 for Linux/UNIX/Windows and DB2 for z/OS, this information can feed directly into the Workload Manager (WLM) for workload management and monitoring purposes. See "DB2 Workload Manager (WLM) Attributes" on page 87 for more information about using the WLM.
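For example, assuming the DataDirect DB2 provider classes and placeholder connection values, client information can be supplied on the connection string. The client information option names below are the ones listed above and in Table 6-1; everything else is a placeholder.

// Minimal sketch: pass client information on the connection string so the
// database (and, on DB2 V9.5 for Linux/UNIX/Windows or DB2 for z/OS, the
// Workload Manager) can see who is doing the work. Host, port, database,
// and credential values are placeholders.
using DDTek.DB2;   // assumed provider namespace

DB2Connection conn = new DB2Connection(
    "Host=db2host;Port=50000;Database Name=SAMPLE;User ID=appuser;Password=secret;" +
    "Application Name=OrderEntry;" +
    "Client User=jsmith;" +
    "Client Host Name=WKSTN042");
conn.Open();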
Table 6-1. Database Locations for Storing Client Information (cont.)
Application Name: Databases: Oracle, Microsoft SQL Server, Sybase.
Client Host Name: Host name of the client on which the application using the connection is running. Database: DB2.
Client User: User ID for whom the application using the connection is performing work.
  DB2: CURRENT CLIENT_USERID register (DB2 for Linux/UNIX/Windows) or CLIENT USERID register (DB2 for z/OS and DB2 for iSeries)
  Oracle: OSUSER value in the V$SESSION table
  Microsoft SQL Server: Local cache
  Sybase: clientname value in the sysprocesses table
Program ID:
  DB2: CLIENT_PRDID value. For DB2 V9.1 and higher for Linux/UNIX/Windows, the CLIENT_PRDID value is located in the SYSIBMADM.APPLICATIONS table.
  Oracle: PROCESS value in the V$SESSION table
  Microsoft SQL Server: The hostprocess value in the sysprocesses table
  Sybase: The hostprocess value in the sysprocesses table
Table 6-2. WLM Attributes for DB2 V9.5 for Linux/UNIX/Windows (cont.)
CURRENT CLIENT_USERID (connection string option: Client User): User ID for whom the application using the connection is performing work.
CURRENT CLIENT_WRKSTNNAME (connection string option: Client Host Name): Host name of the client on which the application using the connection is running.
Retrieving only required data
Selecting objects and methods that optimize performance
Managing connections and updates
Following these general rules will help you solve some common .NET system performance problems, such as those listed in the following table:
Problem: Network communication is slow. Solution: Reduce network traffic. See the guidelines in "Retrieving Data" on page 97.
Problem: Evaluation of complex SQL queries on the database is slow and can reduce concurrency. Solution: Simplify queries. See the guidelines in "Simplifying Automatically-generated SQL Queries" on page 90.
Problem: Excessive calls from the application to the data provider slow performance. Solution: Optimize application-to-data provider interaction. See the guidelines in "Retrieving Data" on page 97.
Problem: Disk input/output is slow. Solution: Limit disk input/output. See the guidelines in "Using Connection Pooling" on page 91.
Connection pooling in ADO.NET is not provided by the core components of the .NET Framework; it must be implemented in the ADO.NET data provider itself. Pre-allocate connections: decide which connection strings you will need, and remember that each unique connection string creates a new connection pool. Once created, connection pools are not destroyed until the active process ends or the connection lifetime is exceeded, and maintenance of inactive or empty pools involves minimal system overhead. Connection and statement handling should be addressed before implementation; spending time on thoughtful connection management improves application performance and maintainability. A minimal pool-priming sketch follows.
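The sketch below primes a pool at application startup and shows why the whole application should reuse one canonical connection string. The pooling-related option names (Pooling, Min Pool Size, Max Pool Size) and their placement on the connection string are assumptions; verify them against your data provider's connection string option reference.

// Minimal sketch of pre-allocating pooled connections at startup. Every part
// of the application must reuse exactly this string, because each unique
// connection string creates its own pool. Option names are assumptions.
using DDTek.Sybase;   // assumed provider namespace, as in the other Sybase examples

static class PoolWarmup
{
    const string PooledConnString =
        "Host=bowhead;Port=4100;User ID=test01;Password=test01;Database Name=Accounting;" +
        "Pooling=true;Min Pool Size=5;Max Pool Size=50";

    public static void WarmUpPool()
    {
        // Opening and closing one connection forces the pool to be created (and,
        // with a minimum pool size, pre-populated) before the first user request.
        using (SybaseConnection conn = new SybaseConnection(PooledConnString))
        {
            conn.Open();
        } // Close/Dispose returns the connection to the pool; it is not destroyed.
    }
}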
Implementing Reauthentication
Typically, you can configure a connection pool to provide scalability for connections. In addition, to help minimize the number of connections required in a connection pool, you can switch the user associated with a connection to another user, a process known as reauthentication. For example, suppose you are using Kerberos authentication to authenticate users using their operating system user name and password. To reduce the number of connections that must be created and managed, you can use reauthentication to switch the user associated with a connection among multiple users. Suppose your connection pool contains a connection, Conn, that was established using the user ALLUSERS. You can have that connection service multiple users, User A, B, C, and so on, by switching the user associated with the connection Conn to User A, B, C, and so on. For more information about the data provider's support for reauthentication, refer to the DataDirect Connect for ADO.NET User's Guide.
Although using transactions can help application performance, do not take this tip too far. Leaving transactions active can reduce throughput by holding locks on rows for long periods of time, preventing other users from accessing those rows. Commit transactions in intervals that allow maximum concurrency, as in the sketch that follows.
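The following minimal sketch commits a manual-commit transaction in batches so that row locks are released regularly. The connection string, the statements in workItems, and the batch size of 100 are placeholders, and the DDTek.Sybase classes are assumed to follow the standard ADO.NET transaction pattern.

// Commit every 100 statements so locks are released regularly, then start a
// new transaction for the next batch. All names and values are placeholders.
using System.Collections.Generic;
using DDTek.Sybase;   // assumed provider namespace

static void RunBatches(string connString, IList<string> workItems)
{
    using (SybaseConnection conn = new SybaseConnection(connString))
    {
        conn.Open();
        SybaseTransaction txn = conn.BeginTransaction();
        SybaseCommand cmd = conn.CreateCommand();
        cmd.Transaction = txn;

        for (int i = 0; i < workItems.Count; i++)
        {
            cmd.CommandText = workItems[i];    // for example, an UPDATE statement
            cmd.ExecuteNonQuery();

            if ((i + 1) % 100 == 0)
            {
                txn.Commit();                  // release the locks held so far
                txn = conn.BeginTransaction(); // begin the next batch
                cmd.Transaction = txn;
            }
        }
        txn.Commit();                          // commit whatever remains
    }
}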
Caching all of the prepared statements that an application uses might appear to offer increased performance. However, this approach may come at a cost of database memory if you implement statement caching with connection pooling. In this case, each pooled connection has its own statement cache that may contain all of the prepared statements that are used by the application. All of these pooled prepared statements are also maintained in the database's memory. See Chapter 3, "Using Your Data Provider with the ADO.NET Entity Framework," on page 25 for application programming contexts that use the ADO.NET Entity Framework.
Example 2
SybaseCommand DBCmd = new SybaseCommand("getCustName", Conn);
DBCmd.Parameters.Add("param1", SybaseDbType.Int, 10, "").Value = 12345;
DBCmd.CommandType = CommandType.StoredProcedure;
myDataReader = DBCmd.ExecuteReader();
In this example, the stored procedure can be optimized to use a server-side RPC. Because the application avoids literal arguments and calls the procedure by specifying all arguments as parameters, the ADO.NET data provider can optimize the execution by invoking the stored procedure directly inside the database as an RPC. This example avoids SQL language processing on the database server, and the execution time is greatly improved.
However, many ADO.NET data provider architectures must bridge outside the CLR into native code to establish network communication with the database server. The overhead and processing required to cross this bridge is slow in the current version of the CLR. Depending on your architecture, you may not realize that the underlying ADO.NET data provider is incurring this security risk and performance penalty. Be careful when choosing an ADO.NET data provider that advertises itself as a 100% or pure managed code data provider: if the "managed data provider" requires unmanaged database clients or other unmanaged pieces, it is not a 100% managed data access solution. Only a very few vendors produce true managed code providers that implement their entire stack as a managed component.
Retrieving Data
To retrieve data efficiently, return only the data that you need, and choose the most efficient method of doing so. The guidelines in this section will help you to optimize system performance when retrieving data with .NET applications.
DBCmd.Transaction = DBTxn;
// Execute the statement with ExecuteNonQuery, because we are not
// returning results
DBCmd.ExecuteNonQuery();
// Now commit the transaction
DBTxn.Commit();
// Close the connection
DBConn.Close();

Use the ExecuteScalar method of the Command object to return a single value, such as a sum or a count, from the database. The ExecuteScalar method returns only the value of the first column of the first row of the result set. Once again, you could use the ExecuteReader method to successfully execute such queries, but by using the ExecuteScalar method, you tell the ADO.NET data provider to optimize for a result set that consists of a single row and a single column. By doing so, the data provider can avoid a lot of overhead and improve performance. The following example shows how to retrieve the count of a group:

// Retrieve the number of employees who make more than $50000
// from the Employee table
// Open connection to Sybase database
SybaseConnection Conn;
Conn = new SybaseConnection(
    "host=bowhead;port=4100;User ID=test01;Password=test01;Database Name=Accounting");
Conn.Open();
// Make a command object
SybaseCommand salCmd = new SybaseCommand("SELECT COUNT(sal) FROM Employee " +
    "WHERE sal>'50000'", Conn);
try
{
    int count = (int)salCmd.ExecuteScalar();
}
catch (Exception ex)
{
    // Display any exceptions in a message box
    MessageBox.Show(ex.Message);
}
// Close the connection
Conn.Close();
Processing time is shortest for character strings, followed by integers, which usually require some conversion or byte ordering.
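A related point is to read each value with the typed accessor that matches the column's declared type, so the value is not converted a second time in application code. A minimal sketch, assuming the DBCmd command from the earlier examples and placeholder column ordinals:

// Read each column with the accessor that matches its database type so the
// value is converted only once. Column ordinals and names are placeholders.
using (SybaseDataReader reader = DBCmd.ExecuteReader())
{
    while (reader.Read())
    {
        int empno = reader.GetInt32(0);        // integer column read as an integer
        decimal sal = reader.GetDecimal(1);    // numeric column read as a decimal
        string ename = reader.GetString(2);    // character column read as a string
    }
}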
Updating Data
This section provides general guidelines to help you optimize system performance when updating data in databases.
Deploying the Windows Forms application with a DataDirect Connect for ADO.NET data provider requires that each client has installed either the Microsoft .NET Framework or the Microsoft .NET Framework Redistributable 2.0 or higher, which is available for download on the Microsoft Web site. Distributed transactions are not supported with data providers that are deployed with No-Touch Deployment.
4. In the Build Action drop-down list, select Embedded Resource.
To specify a target CPU for the application, continue to Step 5. Otherwise, skip to Step 7.
5. Optionally, select Configuration Manager from the Solutions Manager drop-down list to specify a target CPU. The Configuration Manager dialog box appears.
6. In the Platform column, you can select the target CPU for your application. We recommend that you use Any CPU, the default setting, for optimal portability of your application. Click Close to close the Configuration Manager dialog box.
7. Build the application.
8. Deploy the application to the Web server.
Refer to the Microsoft documentation for detailed instructions on using ClickOnce deployment.
Appendix A Using an .edmx File <Key> <PropertyRef Name="Id" /> </Key> <Property Name="SocketReadTime" Type="binary_double" Nullable="false" /> <Property Name="MaxSocketReadTime" Type="binary_double" Nullable="false" /> <Property Name="SocketReads" Type="number" Precision="20" Nullable="false" /> <Property Name="BytesReceived" Type="number" Precision="20" Nullable="false" /> <Property Name="MaxBytesPerSocketRead" Type="number" Precision="20" Nullable="false" /> <Property Name="SocketWriteTime" Type="binary_double" Nullable="false" /> <Property Name="MaxSocketWriteTime" Type="binary_double" Nullable="false" /> <Property Name="SocketWrites" Type="number" Precision="20" Nullable="false" /> <Property Name="BytesSent" Type="number" Precision="20" Nullable="false" /> <Property Name="MaxBytesPerSocketWrite" Type="number" Precision="20" Nullable="false" /> <Property Name="TimeToDisposeOfUnreadRows" Type="binary_double" Nullable="false" /> <Property Name="SocketReadsToDisposeUnreadRows" Type="number" Precision="20" Nullable="false" /> <Property Name="BytesRecvToDisposeUnreadRows" Type="number" Precision="20" Nullable="false" /> <Property Name="IDUCount" Type="number" Precision="20" Nullable="false" /> <Property Name="SelectCount" Type="number" Precision="20" Nullable="false" /> <Property Name="StoredProcedureCount" Type="number" Precision="20" Nullable="false" /> <Property Name="DDLCount" Type="number" Precision="20" Nullable="false" /> <Property Name="PacketsReceived" Type="number" Precision="20" Nullable="false" /> <Property Name="PacketsSent" Type="number" Precision="20" Nullable="false" /> <Property Name="ServerRoundTrips" Type="number" Precision="20" Nullable="false" /> <Property Name="SelectRowsRead" Type="number" Precision="20" Nullable="false" /> <Property Name="StatementCacheHits" Type="number" Precision="20" Nullable="false" /> <Property Name="StatementCacheMisses" Type="number" Precision="20" Nullable="false" /> <Property Name="StatementCacheReplaces" Type="number" Precision="20" Nullable="false" /> <Property Name="StatementCacheTopHit1" Type="number" Precision="20" Nullable="false" /> <Property Name="StatementCacheTopHit2" Type="number" Precision="20" Nullable="false" /> <Property Name="StatementCacheTopHit3" Type="number" Precision="20" Nullable="false" /> <Property Name="PacketsReceivedPerSocketRead" Type="binary_double" Nullable="false" /> <Property Name="BytesReceivedPerSocketRead" Type="binary_double" Nullable="false" /> <Property Name="PacketsSentPerSocketWrite" Type="binary_double" Nullable="false" /> <Property Name="BytesSentPerSocketWrite" Type="binary_double" Nullable="false" /> <Property Name="PacketsSentPerRoundTrip" Type="binary_double" Nullable="false" /> <Property Name="PacketsReceivedPerRoundTrip" Type="binary_double" Nullable="false" /> <Property Name="BytesSentPerRoundTrip" Type="binary_double" Nullable="false" /> <Property Name="BytesReceivedPerRoundTrip" Type="binary_double" Nullable="false" /> - <!-Oracle specific --> <Property Name="PartialPacketShifts" Type="number" Precision="20" Nullable="false" /> <Property Name="PartialPacketShiftBytes" Type="number" Precision="20" Nullable="false" /> <Property Name="MaxReplyBytes" Type="number" Precision="20" Nullable="false" /> <Property Name="MaxReplyPacketChainCount" Type="number" Precision="20" Nullable="false" /> <Property Name="Id" Type="number" Precision="10" Nullable="false" /> </EntityType> - <EntityType Name="Status"> - <Key> <PropertyRef Name="Id" /> </Key> <Property Name="ServerVersion" Type="varchar2" 
Nullable="false" /> <Property Name="Host" Type="varchar2" Nullable="false" /> <Property Name="Port" Type="number" Precision="10" Nullable="false" /> <Property Name="SID" Type="varchar2" Nullable="false" />
107 <!-- <Property Name="CurrentUser" Type="varchar2" Nullable="false" /> --> <!-- <Property Name="CurrentUserAffinityTimeout" Type="number" Precision="10" Nullable="false" /> --> <!-- <Property Name="SessionId" Type="number" Precision="10" Nullable="false" /> --> <Property Name="StatisticsEnabled" Type="number" Precision="1" Nullable="false" /> <Property Name="Id" Type="number" Precision="10" Nullable="false" /> </EntityType> </Schema> </edmx:StorageModels> Breaking the model down further, we establish a CSDL model at the conceptual layer this is what is exposed to the EDM. <edmx:ConceptualModels> <Schema Namespace="DDTek" Alias="Self" xmlns="http://schemas.microsoft.com/ado/2006/04/edm"> - <EntityContainer Name="DDTekConnectionContext"> <EntitySet Name="DDTekConnectionStatistics" EntityType="DDTek.DDTekConnectionStatistics" /> <EntitySet Name="DDTekStatus" EntityType="DDTek.DDTekStatus" /> <FunctionImport Name="RetrieveStatistics" EntitySet="DDTekConnectionStatistics" ReturnType= "Collection(DDTek.DDTekConnectionStatistics)" /> <FunctionImport Name="EnableStatistics" EntitySet="DDTekStatus" ReturnType= "Collection(DDTek.DDTekStatus)" /> <FunctionImport Name="DisableStatistics" EntitySet="DDTekStatus" ReturnType= "Collection(DDTek.DDTekStatus)" /> <FunctionImport Name="ResetStatistics" EntitySet="DDTekStatus" ReturnType= "Collection(DDTek.DDTekStatus)" /> - <FunctionImport Name="Reauthenticate" EntitySet="DDTekStatus" ReturnType= "Collection(DDTek.DDTekStatus)"> <Parameter Name="CurrentUser" Type="String" /> <Parameter Name="CurrentPassword" Type="String" /> <Parameter Name="CurrentUserAffinityTimeout" Type="Int32" /> </FunctionImport> </EntityContainer> - <EntityType Name="DDTekConnectionStatistics"> - <Key> <PropertyRef Name="Id" /> </Key> <Property Name="SocketReadTime" Type="Double" Nullable="false" /> <Property Name="MaxSocketReadTime" Type="Double" Nullable="false" /> <Property Name="SocketReads" Type="Int64" Nullable="false" /> <Property Name="BytesReceived" Type="Int64" Nullable="false" /> <Property Name="MaxBytesPerSocketRead" Type="Int64" Nullable="false" /> <Property Name="SocketWriteTime" Type="Double" Nullable="false" /> <Property Name="MaxSocketWriteTime" Type="Double" Nullable="false" /> <Property Name="SocketWrites" Type="Int64" Nullable="false" /> <Property Name="BytesSent" Type="Int64" Nullable="false" /> <Property Name="MaxBytesPerSocketWrite" Type="Int64" Nullable="false" /> <Property Name="TimeToDisposeOfUnreadRows" Type="Double" Nullable="false" /> <Property Name="SocketReadsToDisposeUnreadRows" Type="Int64" Nullable="false" /> <Property Name="BytesRecvToDisposeUnreadRows" Type="Int64" Nullable="false" /> <Property Name="IDUCount" Type="Int64" Nullable="false" /> <Property Name="SelectCount" Type="Int64" Nullable="false" /> <Property Name="StoredProcedureCount" Type="Int64" Nullable="false" /> <Property Name="DDLCount" Type="Int64" Nullable="false" /> <Property Name="PacketsReceived" Type="Int64" Nullable="false" /> <Property Name="PacketsSent" Type="Int64" Nullable="false" />
Appendix A Using an .edmx File <Property Name="ServerRoundTrips" Type="Int64" Nullable="false" /> <Property Name="SelectRowsRead" Type="Int64" Nullable="false" /> <Property Name="StatementCacheHits" Type="Int64" Nullable="false" /> <Property Name="StatementCacheMisses" Type="Int64" Nullable="false" /> <Property Name="StatementCacheReplaces" Type="Int64" Nullable="false" /> <Property Name="StatementCacheTopHit1" Type="Int64" Nullable="false" /> <Property Name="StatementCacheTopHit2" Type="Int64" Nullable="false" /> <Property Name="StatementCacheTopHit3" Type="Int64" Nullable="false" /> <Property Name="PacketsReceivedPerSocketRead" Type="Double" Nullable="false" /> <Property Name="BytesReceivedPerSocketRead" Type="Double" Nullable="false" /> <Property Name="PacketsSentPerSocketWrite" Type="Double" Nullable="false" /> <Property Name="BytesSentPerSocketWrite" Type="Double" Nullable="false" /> <Property Name="PacketsSentPerRoundTrip" Type="Double" Nullable="false" /> <Property Name="PacketsReceivedPerRoundTrip" Type="Double" Nullable="false" /> <Property Name="BytesSentPerRoundTrip" Type="Double" Nullable="false" /> <Property Name="BytesReceivedPerRoundTrip" Type="Double" Nullable="false" /> <Property Name="PartialPacketShifts" Type="Int64" Nullable="false" /> <Property Name="PartialPacketShiftBytes" Type="Int64" Nullable="false" /> <Property Name="MaxReplyBytes" Type="Int64" Nullable="false" /> <Property Name="MaxReplyPacketChainCount" Type="Int64" Nullable="false" /> <Property Name="Id" Type="Int32" Nullable="false" /> </EntityType> - <EntityType Name="DDTekStatus"> - <Key> <PropertyRef Name="Id" /> </Key> <Property Name="ServerVersion" Type="String" Nullable="false" /> <Property Name="Host" Type="String" Nullable="false" /> <Property Name="Port" Type="Int32" Nullable="false" /> <Property Name="SID" Type="String" Nullable="false" /> <Property Name="CurrentUser" Type="String" Nullable="false" /> <Property Name="CurrentUserAffinityTimeout" Type="Int32" Nullable="false" /> <Property Name="SessionId" Type="Int32" Nullable="false" /> <Property Name="StatisticsEnabled" Type="Boolean" Nullable="false" /> <Property Name="Id" Type="Int32" Nullable="false" /> </EntityType> </Schema> </edmx:ConceptualModels> The following simple mapping binds the pieces together. <!-C-S mapping content --> - <edmx:Mappings> - <Mapping Space="C-S" xmlns="urn:schemas-microsoft-com:windows:storage:mapping:CS"> - <EntityContainerMapping StorageEntityContainer="DDTek_Connection" CdmEntityContainer= "DDTekConnectionContext"> - <EntitySetMapping Name="DDTekConnectionStatistics"> - <EntityTypeMapping TypeName="DDTek.DDTekConnectionStatistics"> - <MappingFragment StoreEntitySet="Connection_Statistics"> - <!-StoreEntitySet="Connection_Statistics" TypeName="DDTek.DDTekConnectionStatistics"> --> <ScalarProperty Name="SocketReadTime" ColumnName="SocketReadTime" /> <ScalarProperty Name="MaxSocketReadTime" ColumnName="MaxSocketReadTime" />
109 <ScalarProperty Name="SocketReads" ColumnName="SocketReads" /> <ScalarProperty Name="BytesReceived" ColumnName="BytesReceived" /> <ScalarProperty Name="MaxBytesPerSocketRead" ColumnName="MaxBytesPerSocketRead" /> <ScalarProperty Name="SocketWriteTime" ColumnName="SocketWriteTime" /> <ScalarProperty Name="MaxSocketWriteTime" ColumnName="MaxSocketWriteTime" /> <ScalarProperty Name="SocketWrites" ColumnName="SocketWrites" /> <ScalarProperty Name="BytesSent" ColumnName="BytesSent" /> <ScalarProperty Name="MaxBytesPerSocketWrite" ColumnName="MaxBytesPerSocketWrite" /> <ScalarProperty Name="TimeToDisposeOfUnreadRows" ColumnName="TimeToDisposeOfUnreadRows" /> <ScalarProperty Name="SocketReadsToDisposeUnreadRows" ColumnName= "SocketReadsToDisposeUnreadRows" /> <ScalarProperty Name="BytesRecvToDisposeUnreadRows" ColumnName="BytesRecvToDisposeUnreadRows" /> <ScalarProperty Name="IDUCount" ColumnName="IDUCount" /> <ScalarProperty Name="SelectCount" ColumnName="SelectCount" /> <ScalarProperty Name="StoredProcedureCount" ColumnName="StoredProcedureCount" /> <ScalarProperty Name="DDLCount" ColumnName="DDLCount" /> <ScalarProperty Name="PacketsReceived" ColumnName="PacketsReceived" /> <ScalarProperty Name="PacketsSent" ColumnName="PacketsSent" /> <ScalarProperty Name="ServerRoundTrips" ColumnName="ServerRoundTrips" /> <ScalarProperty Name="SelectRowsRead" ColumnName="SelectRowsRead" /> <ScalarProperty Name="StatementCacheHits" ColumnName="StatementCacheHits" /> <ScalarProperty Name="StatementCacheMisses" ColumnName="StatementCacheMisses" /> <ScalarProperty Name="StatementCacheReplaces" ColumnName="StatementCacheReplaces" /> <ScalarProperty Name="StatementCacheTopHit1" ColumnName="StatementCacheTopHit1" /> <ScalarProperty Name="StatementCacheTopHit2" ColumnName="StatementCacheTopHit2" /> <ScalarProperty Name="StatementCacheTopHit3" ColumnName="StatementCacheTopHit3" /> <ScalarProperty Name="PacketsReceivedPerSocketRead" ColumnName="PacketsReceivedPerSocketRead" /> <ScalarProperty Name="BytesReceivedPerSocketRead" ColumnName="BytesReceivedPerSocketRead" /> <ScalarProperty Name="PacketsSentPerSocketWrite" ColumnName="PacketsSentPerSocketWrite" /> <ScalarProperty Name="BytesSentPerSocketWrite" ColumnName="BytesSentPerSocketWrite" /> <ScalarProperty Name="PacketsSentPerRoundTrip" ColumnName="PacketsSentPerRoundTrip" /> <ScalarProperty Name="PacketsReceivedPerRoundTrip" ColumnName="PacketsReceivedPerRoundTrip" /> <ScalarProperty Name="BytesSentPerRoundTrip" ColumnName="BytesSentPerRoundTrip" /> <ScalarProperty Name="BytesReceivedPerRoundTrip" ColumnName="BytesReceivedPerRoundTrip" /> <ScalarProperty Name="PartialPacketShifts" ColumnName="PartialPacketShifts" /> <ScalarProperty Name="PartialPacketShiftBytes" ColumnName="PartialPacketShiftBytes" /> <ScalarProperty Name="MaxReplyBytes" ColumnName="MaxReplyBytes" /> <ScalarProperty Name="MaxReplyPacketChainCount" ColumnName="MaxReplyPacketChainCount" /> <ScalarProperty Name="Id" ColumnName="Id" /> </MappingFragment> </EntityTypeMapping> </EntitySetMapping> - <EntitySetMapping Name="DDTekStatus"> - <EntityTypeMapping TypeName="DDTek.DDTekStatus"> - <MappingFragment StoreEntitySet="Status"> <ScalarProperty Name="ServerVersion" ColumnName="ServerVersion" /> <ScalarProperty Name="Host" ColumnName="Host" /> <ScalarProperty Name="Port" ColumnName="Port" /> <ScalarProperty Name="SID" ColumnName="SID" /> <!-- <ScalarProperty Name="CurrentUser" ColumnName="CurrentUser" /> <!-- <ScalarProperty Name="CurrentUserAffinityTimeout" <!-- 
ColumnName="CurrentUserAffinityTimeout" /> <!-- <ScalarProperty Name="SessionId" ColumnName="SessionId" /> <ScalarProperty Name="StatisticsEnabled" ColumnName="StatisticsEnabled" /> <ScalarProperty Name="Id" ColumnName="Id" />
Appendix A Using an .edmx File </MappingFragment> </EntityTypeMapping> </EntitySetMapping> <FunctionImportMapping FunctionImportName="RetrieveStatistics" FunctionName= "DDTek.Store.RetrieveStatistics" /> <FunctionImportMapping FunctionImportName="EnableStatistics" FunctionName= "DDTek.Store.EnableStatistics" /> <FunctionImportMapping FunctionImportName="DisableStatistics" FunctionName= "DDTek.Store.DisableStatistics" /> <FunctionImportMapping FunctionImportName="ResetStatistics" FunctionName= "DDTek.Store.ResetStatistics" /> <FunctionImportMapping FunctionImportName="Reauthenticate" FunctionName= "DDTek.Store.Reauthenticate" /> </EntityContainerMapping> </Mapping> </edmx:Mappings>
Return scalar values.
Determine which parameters are needed and create them.
Involve commands in a transaction.
If your application needs to address specific DBMS functionality, you can use a DataDirect Connect for ADO.NET data provider.
"Adding a New DAAB Entry" on page 112 "Adding the Data Access Application Block to Your Application" on page 113
3. In the Name field, enter a name for the DAAB's connection string, for example, MyOracle.
4. In the ConnectionString field, enter a connection string. For example:
Host=ntsl2003;Port=1521;SID=ORCL1252;User ID=SCOTT;Password=TIGER;Encryption Method=SSL;Authentication Method=Kerberos;
5. Right-click the ProviderName field, and select the data provider. For example, select DDTek.Oracle.4.0 for the Oracle data provider.
6. Right-click Custom Provider Mappings and select New / Provider Mappings.
7. In the Name field, select the data provider name you specified in Step 5.
8. Select the TypeName field, and then choose the browse (...) button to navigate to the Debug output directory of the DataDirect DAAB that you built. Then, select the TypeName. For example, the Oracle TypeName is DDTek.EnterpriseLibrary.Data.Oracle.dll.
Leave the Enterprise Library Configuration window open for now, and do not save this configuration until you complete the following section.
Enterprise Library Shared Library
Enterprise Library Data Access Application Block
Add the following directives to your C# source code:
using Microsoft.Practices.EnterpriseLibrary.Data;
using System.Data;
3. Rebuild the solution to ensure that the new dependencies are functional.
4. Determine the output Debug or Release path location of your current solution, and switch back to the Enterprise Library Configuration window (see "Adding a New DAAB Entry" on page 112).
5. Right-click the connection string under the Application Node and select Save Application.
6. Navigate to the Debug or Release output directories of your current solution, and locate the .exe file of the current solution.
7. Click the file name once, and add .config to the name, for example, MyOracle.config.
8. Ensure that Save as type 'All Files' is selected, and select Save.
9. Using File Explorer, copy the DDTek.EnterpriseLibrary.Data.XXX.dll from the DataDirect DAAB directories, where XXX indicates the data source.
10. Place the copy of this DLL into either the Debug or Release output directory of your current solution.
namespace DAAB_Test_App_1
{
    class Program
    {
        static void Main(string[] args)
        {
            Database database = DatabaseFactory.CreateDatabase("MyOracle");
            DataSet ds = database.ExecuteDataSet(CommandType.TableDirect,
                "SQLCOMMANDTEST_NC_2003SERVER_1");
        }
    }
}
The Microsoft Enterprise Library DAAB coding patterns are now at your disposal.
To configure the Logging Application Block:
1. Select Start / Programs / Microsoft patterns and practices / Enterprise Library 4.1 October 2008 / Enterprise Library Configuration. The Enterprise Library Configuration window appears.
3. Right-click the Application Configuration node and select New / Logging Application Block.
4. Right-click Category Sources, and select New / Category.
5. In the Name pane, select Name. Type the name of the new category, and then press ENTER. In the following example, the category name DDTek Error will be created.
6. From the SourceLevels drop-down list, set the logging level for the new category. By default, all logging levels are enabled.
7. Right-click the new category and select New / TraceListener Reference. A Formatted EventLog TraceListener node is added. From the ReferencedTraceListener drop-down list, select Formatted EventLog TraceListener.
Repeat Step 4 through Step 7 to create the following categories:
DDTek Information: Information not related to errors
DDTek Command: Enables SQL, Parameter, and DbCommandTree logging
Select File / Save Application. The Save As window appears. Type a name for your configuration file. By default, the file is saved to C:\Program Files\Microsoft Enterprise Library October 2008\Bin\filename.exe.config, where filename is the name that you typed in the Save As window.
3. In the TracingEnabled field, type True.
4. Save the Logging application block.
EnableLoggingApplicationBlock: Enables the Logging Application Block.
LABAssemblyName: Specifies the assembly name to which the Logging Application Block applies. NOTE: If you are using any version of the LAB other than the Microsoft Enterprise Library 4.1 (October 2008) binary release, for example, an older or newer version of the LAB or a version that you have customized, you must specify a value for LABAssemblyName.
LABLoggerTypeName: Specifies the type name for the Logging Application Block.
LABLogEntryTypeName: Specifies the type name for the LogEntry object.
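To picture how these options fit together, the following sketch adds them to a connection string. Whether each option belongs on the connection string, its exact spelling, and all of the values shown are assumptions that must be checked against the data provider's connection string option reference.

// Hypothetical sketch only: option names are taken from the list above; the
// host, credential, and type name values are placeholders.
using DDTek.Oracle;   // assumed provider namespace

string connString =
    "Host=ntsl2003;Port=1521;SID=ORCL1252;User ID=SCOTT;Password=secret;" +
    "EnableLoggingApplicationBlock=true;" +
    "LABAssemblyName=Microsoft.Practices.EnterpriseLibrary.Logging;" +             // placeholder
    "LABLoggerTypeName=Microsoft.Practices.EnterpriseLibrary.Logging.Logger;" +    // placeholder
    "LABLogEntryTypeName=Microsoft.Practices.EnterpriseLibrary.Logging.LogEntry;"; // placeholder

OracleConnection conn = new OracleConnection(connString);
conn.Open();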
Glossary
.NET Framework: Microsoft defines Microsoft .NET as a set of Microsoft software technologies for connecting information, people, systems, and devices. To optimize software integration, the .NET Framework uses small, discrete, building-block applications called Web services that connect to each other as well as to other, larger applications over the Internet. The .NET Framework has two key parts:
ASP.NET, an environment for building smart client applications (Windows Forms), and a loosely coupled data access subsystem (ADO.NET).
The common language runtime (CLR), the core runtime engine for executing applications in the .NET Framework. You can think of the CLR as a safe area, a sandbox, inside of which your .NET code runs. Code that runs in the CLR is called managed code.
ADO.NET: The data access component for the .NET Framework. ADO.NET is made of a set of classes that are used for connecting to a database; providing access to relational data, XML, and application data; and retrieving results.
ADO.NET Entity Framework: An object-relational mapping (ORM) framework for the .NET Framework. Developers can use it to create data access applications by programming against a conceptual application model instead of programming directly against a relational storage schema. This model allows developers to decrease the amount of code that must be written and maintained in data-centric applications.
assembly: A compiled representation of one or more classes. Each assembly is self-contained, that is, the assembly includes the metadata about the assembly as a whole. Assemblies can be private or shared. Private assemblies, which are used by a limited number of applications, are placed in the application folder or one of its subfolders. For example, even if the client has two different applications that call a private assembly named formulas, each client application loads the correct assembly. Shared assemblies, which are available to multiple client applications, are placed in the Global Assembly Cache (GAC). Each shared assembly is assigned a strong name to handle name and version conflicts.
assembly cache: A machine-wide code cache that is used for storing assemblies side-by-side. The cache is in two parts. The global assembly cache contains assemblies that are explicitly installed to be shared among many applications on the computer. The download cache stores code that is downloaded from Internet or intranet sites, which is isolated to the application that downloaded the code.
authentication: The process of identifying a user, typically based on a user ID and password. Authentication ensures that users are who they claim to be. See also client authentication, NTLM authentication, OS authentication, and user ID/password authentication.
bulk load: A method of inserting large amounts of data into a database table. Rows are sent from the database client to the database server in a continuous stream. The database server can optimize how rows are inserted. Also known as bulk copy.
client authentication: The process of identifying the user ID and password of the user logged onto the system on which the driver is running to authenticate the user to the database. The database server depends on the client to authenticate the user and does not provide additional authentication. See also authentication.
ClickOnce Deployment: A feature of the .NET Framework 2.0 that lets clients download the assemblies they need from a remote web server. The first time the assembly is referenced, it is downloaded to a cache on the client and executed. After that, when a client accesses the application, the application checks the server to find out whether any assemblies have been updated. Any new assemblies are downloaded to the download cache on the client, refreshing the application without any interaction with the end user.
client load balancing: A mechanism that distributes new connections in a computing environment so that no given server is overwhelmed with connection requests.
code access security: A mechanism that is provided by the common language runtime through which managed code is granted permissions by a security policy; permissions are enforced, limiting the operations that the code will be allowed to perform.
Code First: A development concept that is focused around defining your model using C#/Visual Basic .NET classes. These classes can then be mapped to an existing database or be used to generate a database schema. Additional configuration can be supplied using Data Annotations or via a fluent API.
collection: A set of similarly typed objects that are grouped together. For example, data collections include hash tables, queues, stacks, dictionaries, and lists. Some collections have specialized functions, such as the MetaDataCollection collections.
common language runtime (CLR): The core runtime engine in the Microsoft .NET Framework. The CLR supplies services such as cross-language integration, code access security, object lifetime management, and debugging support. Applications that run in the CLR are sometimes said to be running "in the sandbox."
connection failover: A mechanism that allows an application to connect to an alternate, or backup, database server if the primary database server is unavailable, for example, because of a hardware failure or traffic overload.
connection retry: Connection retry defines the number of times the data provider attempts to connect to the primary and, if configured, alternate database servers after the initial unsuccessful connection attempt. Connection retry can be an important strategy for system recovery.
connection pooling: The process by which connections can be reused rather than creating a new one every time the data provider needs to establish a connection to the underlying database.
Data Access Application Block (DAAB): A pre-defined code block that provides access to the most often used ADO.NET data access features. Applications can use the application block to pass data through application layers, and submit changed data back to the database.
data provider: An ADO.NET data provider communicates with the application and database and performs tasks such as establishing a connection to a database, executing commands, and returning results to the application.
destination table: In a DataDirect Bulk Load operation, the table on the database server into which the data is copied.
global assembly cache (GAC): The part of the assembly cache that stores assemblies that are specifically installed to be shared by many applications on the computer. Applications deployed in the Global Assembly Cache (GAC) must have a strong name to handle name and version conflicts.
isolation level: A particular locking strategy that is employed in the database system to improve data consistency. The higher the isolation level number, the more complex the locking strategy behind it. The isolation level provided by the database determines how a transaction handles data consistency. The American National Standards Institute (ANSI) defines four isolation levels: Read uncommitted (0), Read committed (1), Repeatable read (2), and Serializable (3).
Kerberos authentication: An OS authentication protocol that provides authentication using secret key cryptography. See also authentication and OS authentication.
load balancing: See client load balancing.
locking: A database operation that restricts a user from accessing a table or record. Locking is used in situations when more than one user might try to use the same table at the same time. By locking the table or record, the system ensures that only one user at a time can affect the data.
Logging Application Block (LAB): A component of the Microsoft Enterprise Libraries that simplifies the implementation of common logging functions. Developers can use the Logging Block to write information to a variety of locations, such as the event log, an e-mail message, or a database.
managed code: Code that is executed and managed by the .NET Framework, specifically by the CLR. Managed code must supply the information necessary for the CLR to provide services such as memory management and code access security.
metadata: Information about data, for example, a database schema that describes the fields, columns, and formats used in a database. Different database schema elements are exposed through schema collections.
Model First: A development concept that is focused on the ability to start with a conceptual model and create the database from it. Additional configuration can be supplied using Data Annotations or through a fluent API.
namespace: A logical naming scheme for grouping related types. The .NET Framework uses a hierarchical naming scheme for grouping types into logical categories of related functionality, such as the ASP.NET technology or remoting functionality. Design tools can use namespaces to make it easier for developers to browse and reference types in their code. A single assembly can contain types whose hierarchical names have different namespace roots, and a logical namespace root can span multiple assemblies. In the .NET Framework, a namespace is a logical design-time naming convenience, whereas an assembly establishes the name scope for types at run time.
No-Touch Deployment: A feature of the .NET Framework 1.x that lets clients download the assemblies they need from a remote web server. The first time the assembly is referenced, it is downloaded to a cache on the client and executed. After that, when a client accesses the application, the application checks the server to find out whether any assemblies have been updated. Any new assemblies are downloaded to the download cache on the client, refreshing the application without any interaction with the end user.
NTLM authentication: A network authentication protocol that provides a challenge response security mechanism for connections between Windows clients and servers, confirming the user's identification to a network service. It is used in later versions of Windows for backward compatibility. See authentication and OS authentication.
OS authentication: An authentication process that can take advantage of the user name and password that is maintained by the operating system to authenticate users to the database or use another set of user credentials specified by the application. By allowing the database to share the user name and password that is used for the operating system, users with a valid operating system account can log into the database without supplying a user name and password. See also authentication, Kerberos authentication, and NTLM authentication.
Performance Monitor: A tool in the Windows SDK that identifies areas in which performance problems exist.
Performance Wizard: A component that is built into DataDirect Connect for ADO.NET and accessible through Visual Studio that leads you through a series of questions about your application. Based on your answers, the Wizard provides the optimal settings for DataDirect Connect for ADO.NET connection string options that affect performance. Optionally, you can generate a new application that is pre-configured with a connection string that is optimized for your environment.
schema collection: Closely related schemas that can be handled more efficiently when grouped together. Database schema elements such as tables and columns are exposed through schema collections.
Secure Sockets Layer (SSL): An industry-standard protocol for sending encrypted data over database connections. SSL secures the integrity of your data by encrypting information and providing SSL client/SSL server authentication.
stream: An abstraction of a sequence of binary or text data. The Stream class and its derived classes provide a generic view of these different types of input and output.
strong name: A name that consists of an assembly's text name, version number, and culture information (if provided), with a public key and a digital signature generated over the assembly. Assemblies with the same strong name must be identical.
unmanaged code: Code that is executed directly by the operating system, outside of the CLR. Unmanaged code includes all code written before the .NET Framework was introduced. Because it is outside the .NET environment, unmanaged code cannot make use of any .NET managed facilities such as memory management and code access security.
user ID/password authentication: Authentication process that authenticates the user to the database using a database user name and password. See also authentication.
Index
Symbols
.edmx file 105 .NET ClickOnce Deployment 101 designing applications for performance 89 getting schema information 61
C
CATALOGS schema collection 68 ClickOnce Deployment 101 client information about 85 how databases store 85 location used for storing 86 storing 86 code example pseudo stored procedures 39 using
A
adding DAAB to your application 52, 113 new DAAB entry 49, 112 new Logging Application block entry 118 ADO.NET Entity Framework Code First Model 25 configuring the data providers designing an Entity Data Model 41 overview 40
DAAB in application code 53, 114 Logging Application Block in your application 58, 119
Code First support 25 using 38 code portability, increasing 47 Columns schema collection 69 CommandBuilder class, impact on performance 91 commands retrieving little or no data 98 using multiple times 94 compiled help file 10 configuring Data Access Application Block (DAAB) 48, 111 Logging Application Block 54, 115 connecting improving performance 91 start transaction after 93 connection statistics, implementing in Entity Framework application 42 contacting Customer Support 11 controlling the size of the Entity Data Model 41 conventions, typographical 8 creating a model 26, 31 Customer Support, contacting 11
B
books HTML version 9 PDF version 10 bridge, performance impact of using 96
D
Data Access Application Block (DAAB) adding a new DAAB entry 49, 112 adding to your application 52, 113 additional resources 60 configuring 48, 111 overview 47, 111 using in application code 53, 114 to increase code portability 47 when to use 47 data types, choosing to improve performance 99
Database First model 26 DataReader class, choosing when to use 96 DataSet choosing when to use 96 effect on performance 98 keeping result sets small 96 DataSourceInformation schema collection ColumnNames for Oracle data provider 65 ColumnNames supported 64 date, time, timestamp literal escape sequences 13 DB2 data provider outer join escape syntax 18 scalar functions supported 14 with ADO.NET Entity Framework 26 Workload Manager (WLM) 85 DB2 Workload Manager (WLM) attributes 87 deploying .NET applications ClickOnce Deployment 101 Windows Forms applications 101 with an ADO.NET data provider 101 designing .NET applications See performance optimization documentation, about 9
H
help file 10
I
IDBCommand 19 implementing Kerberos authentication in Entity Framework 43, 93 Indexes schema collection 73 interoperability using SQL extension escapes 19 using the GenericDatabase class option for DAAB implementation 48 iSeries and AS/400, scalar functions supported 14 isolation levels data consistency behavior compared 23 dirty reads 22 non-repeatable reads 22 phantom reads 22 data currency 23 description read committed 22 read uncommitted 22 repeatable read 22 serializable 22
E
Enterprise Library, using with the data providers version 4.1 41, 111 version 5.0 47 Entity Data Model (EDM) 105 Entity Framework See ADO.NET Entity Framework escape sequences, outer join 18 escapes date and time 13 RowSetSize property used in 19 SQL extension 19 stored procedure 18 example .edmx file 105 reauthentication 43, 93 using the LAB in application code 58, 119 ExecuteScalar and ExecuteNonQuery, performance implications 98
J
joins left outer 19 nested outer 19 outer join escape sequence 18 right outer 19
K
Kerberos authentication implementing in Entity Framework application 43, 93 using with reauthentication 43, 93
F
fetching random data 96 functions supported 14
L
left outer joins 19 license file 101 limiting the size of the result set, SQL extension escape 19 literals, escape sequence 13 location used for storing client information for a connection 86 locking modes and levels 21 overview 21
G
GenericDatabase class option for DAAB implementation 48 glossary 121
Logging Application Block (LAB) adding a new LAB entry 118 configuring 54, 115 using in application code 58, 119 when to use 54, 115 long data, performance impact of retrieving 97 managing connections 91 simplifyng automatically-generated SQL queries 90 size of data retrieved 98 turning off autocommit 93 using disconnected DataSet 96 native managed providers 96 POCO entities 25 prepared statements caching 94 performance 94 PrimaryKeys schema collection 76 ProcedureParameters schema collection 77 Procedures schema collection 79 pseudo stored procedures, using to provide functionality 39
M
managed code, performance advantages 96 MetaDataCollections schema collection 63 Microsoft Enterprise Library Application Blocks 47 Model First support 25, 44 using 31
R N
native managed providers, performance advantages 96 nested outer joins 19 numeric functions Oracle data provider 16 SQL Server data provider 17 Sybase data provider 17 random data, fetching 96 reauthentication example 43, 93 specifying support 64 ReservedWords schema collection 67 result set, impact of size on scalability 96 retrieving long data 97 right outer joins 19 RowsetSize property 19
O
obtaining connection statistics in Entity Framework 42 online books, installing 9 Oracle data provider outer join escape syntax 18 provider-specific ColumnNames supported 65 scalar functions supported 16 with ADO.NET Entity Framework 26 Oracle Entity Framework data provider using Code First 38 using Model First 31 outer join escape sequence 18
S
scalar functions DB2 data provider 14 Oracle data provider 16 overview 14 SQL Server data provider 17 Sybase data provider 17 schema collection CATALOGS 68 Columns 69 DataSourceInformation 64 Indexes 73 MetaDataCollections 63 PrimaryKeys 76 ProcedureParameters 77 Procedures 79 ReservedWords 67 Schemata 80 TablePrivileges 82 TablesSchemas 81 Views 83 Schemata schema collection 80 simplifyng automatically-generated SQL queries 90
P
page-level locking 21 parameter markers in stored procedures 95 performance optimization avoiding distributed transactions 94 use of CommandBuilder objects 91 choosing
SQL escape sequences date, time, timestamp 13 extension 19 general 13 outer join 18 RowSetSize property 19 scalar functions 14 support for 13 SQL leveling 47 SQL queries generated by Visual Studio wizards 90 SQL Server data provider outer join escape syntax 18 scalar functions supported 17 statement caching 94 stored procedures escapes 18 performance implications 94 using parameter markers as arguments 95 storing client information 86 string functions DB2 data provider 14 Oracle data provider 16 SQL Server data provider 17 Sybase data provider 17 Sybase data provider outer join escape syntax 18 scalar functions supported 17
W
Workload Manager (WLM) 85
X
Xml Describe Type connection string option 61, 62 XML, manipulating relational data as 96
T
TablePrivileges schema collection 82 TablesSchemas schema collection 81 time literal escape sequence 13 Timedate functions Oracle data provider 16 SQL Server data provider 17 Sybase data provider 17 timestamp literal escape sequence 13 transactions managing commits 93 performance considerations of using distributed 94 typographical conventions 8
U
unmanaged code, performance impact 96 using Command.Prepare 94 DAAB in application code 53, 114 LAB in application code 58, 119 schema metadata 63
V
Views schema collection 83