GemFire Enterprise Native Client Guide
Version 3.5
September 2010
COPYRIGHTS
This software product, its documentation, and its user interface © 1997-2008 GemStone Systems, Inc. All rights reserved by GemStone Systems, Inc.
Doxygen © 1997-2004 Dimitri van Heesch.
STLport © 1994 Hewlett-Packard Company, © 1996-1999 Silicon Graphics Computer Systems, Inc., © 1997 Moscow Center for SPARC
Technology, © 1999-2002 Boris Fomitchev.
ACE, TAO, CIAO, and CoSMIC (henceforth referred to as DOC software) are copyrighted by Douglas C. Schmidt and his research group at
Washington University, University of California, Irvine, and Vanderbilt University. © 1993-2006, all rights reserved.
PATENTS
GemFire is protected by U.S. patent 6,360,219. Additional patents pending.
TRADEMARKS
GemStone, GemFire, and the GemStone logo are trademarks or registered trademarks of GemStone Systems, Inc. in the United States and other
countries.
UNIX is a registered trademark of The Open Group in the U. S. and other countries.
Linux is a registered trademark of Linus Torvalds.
Red Hat and all Red Hat-based trademarks and logos are trademarks or registered trademarks of Red Hat, Inc. in the United States and other countries.
SUSE is a registered trademark of SUSE AG.
Sun, Sun Microsystems, Solaris, Forte, Java, Java Runtime Edition, JRE, and other Java-related marks and logos are trademarks or registered
trademarks of Sun Microsystems, Inc. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC
International, Inc. in the United States and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun
Microsystems, Inc.
Intel and Pentium are registered trademarks of Intel Corporation in the United States and other countries.
Microsoft, Windows, and Visual C++ are registered trademarks of Microsoft Corporation in the United States and other countries.
W3C is a registered trademark of the World Wide Web Consortium.
ACE, TAO, CIAO, and CoSMIC are trademarks of Douglas C. Schmidt and his research group at Washington University, University of California,
Irvine, and Vanderbilt University.
Berkeley DB and Sleepycat are trademarks or registered trademarks of Oracle Corporation.
Other company or product names mentioned herein may be trademarks or registered trademarks of their respective owners. Trademark specifications
are subject to change without notice. All terms mentioned in this documentation that are known to be trademarks or service marks have been
appropriately capitalized to the best of our knowledge; however, GemStone cannot attest to the accuracy of all trademark information. Use of a term
in this documentation should not be regarded as affecting the validity of any trademark or service mark.
GemStone Systems, Inc.
1260 NW Waterhouse Avenue, Suite 200
Beaverton, OR 97006
Table of Contents
List of Figures 11
List of Tables 13
List of Examples 15
Preface 19
About This Manual 19
How This Manual Is Organized 19
Typographical Conventions 20
Other Useful Documents 20
Technical Support 20
Contacting Technical Support 21
24x7 Emergency Technical Support 21
GemStone Support Web Site 21
Training and Consulting 22
Glossary 309
Index 313
Example 4.4 Using the API to Put Entries Into the Cache 91
Example 4.5 Using the get API to Retrieve Values From the Cache 92
Example 4.6 The Simple Class BankAccount 95
Example 4.7 Implementing a Serializable Class 97
Example 4.8 Extending a Serializable Class To Be a CacheableKey 98
Example 4.9 Using a BankAccount Object 102
Example 4.10 Creating New Statistics Programmatically 104
Example 5.1 Connecting and Creating the Cache 115
Example 5.2 Creating a Cache with a cache.xml File 115
Example 5.3 Creating a Region with Caching and LRU 116
Example 5.4 Creating a Region With Disk Overflow 117
Example 5.5 Using the API to Put Values Into the Cache 118
Example 5.6 Using the Get API to Retrieve Values From the Cache 119
Example 5.7 The Simple BankAccount Class 121
Example 5.8 Implementing a Serializable Class 122
Example 5.9 Extending an IGFSerializable Class to Be a Key 125
Example 5.10 Using a BankAccount Object 128
Example 5.11 Using ICacheLoader to Load New Integers in the Region 131
Example 5.12 Using ICacheWriter to Track Creates and Updates for a Region 132
Example 5.13 A Sample ICacheListener Implementation 133
Example 5.14 Simple C# Code 135
Example 5.15 A Sample .config File 137
Example 6.1 Defining System Properties Programmatically 149
Example 7.1 Setting the Server Redundancy Level in cache.xml 152
Example 7.2 Setting the Server Redundancy Level Programmatically 152
Example 7.3 Configuring a Durable Native Client Using gfcpp.properties 155
Example 7.4 Configuring a Durable Client Through the API (C++) 155
Example 7.5 API Client Durable Interest List Registration (C++) 155
Example 7.6 Durable Client Cache Ready Notification (C++) 156
Example 7.7 Durable Client Disconnect With Queues Maintained 156
Example 8.1 C++ Client Acquiring Credentials Programmatically 165
Example 8.2 .NET Client Acquiring Credentials Programmatically 165
Example 8.3 Implementing the Factory Method for Authentication (C++ and .NET) 166
Example 8.4 Implementing the Static create Method 167
Example 8.5 Determining Pre-Operation or Post-Operation Authorization 172
Example 9.1 C++ Class Definition and Corresponding Java Class Definition 182
Example 9.2 Retrieve all active portfolios 183
Example 9.3 Retrieve all portfolios that are active and have type xyz 184
Example 9.4 Get the ID and status of all portfolios with positions in secId yyy 184
Example 9.5 Get distinct positions from all active portfolios with at least a $25.00 market value 185
Example 9.6 Get distinct positions from portfolios with at least a $25.00 market value 185
Example 9.7 Get distinct entry keys and positions from active portfolios with at least a $25.00 market value 185
Example 9.8 Query Using Explicit Iterator Variables 196
Example 9.9 Using Imported Classes 196
Example 9.10 Query Using IMPORT and TYPE for Object Typing 197
Example 9.11 Query Using IMPORT and Typecasting for Object Typing 198
Example 9.12 Query Using Named Iterators for Object Typing 198
Example 9.13 Creating an Index on a Cache Server Using a Server XML File 209
Example 9.14 Query Returning a ResultSet for a Built-In Data Type 213
Example 9.15 Query Returning a ResultSet for a User-Defined Data Type 214
Example 9.16 Query Returning a StructSet for a Built-In Data Type 215
Example 9.17 Query Returning a StructSet for a User-Defined Data Type 216
Example 9.18 Returning Struct Objects 217
Example 9.19 Returning Collections 219
Example 10.1 CqListener Implementation (C++) 228
Example 10.2 CqListener Implementation (C#) 229
Example 10.3 CQ Creation, Execution, and Close (C++) 231
Example 10.4 CQ Creation, Execution, and Close (C#) 231
Example 11.1 Configuring a Pool 239
Example 11.2 Connection Pool Creation and Execution Using C++ 242
Example 11.3 Connection Pool Creation and Execution Using C# 242
Example 12.1 Running a function on a region in C++ 250
Example 12.2 Running a function on a region in C# 250
Example 12.3 Running a function on a server pool in C++ 251
Example 12.4 Running a function on a server pool in C# 251
Example 13.1 Enabling Cloning in cache.xml 259
Example 13.2 Enabling Cloning Through the C++ API 259
Example 13.3 XML file used for the examples 264
Example 13.4 Delta Example Implementation - C# 264
Example 13.5 Delta Example Implementation - C++ 267
Example 14.1 Declaring a Native Client Region Using cache.xml 272
Example 14.2 Demonstrating Gets and Puts Using the C# .NET API 273
Example 14.3 Implementing a Cache Loader Using the C++ API 275
Example 14.4 Implementing an Embedded Object Using the C++ API 276
Example 14.5 Implementing Complex Data Types Using the C++ API 279
Example 14.6 Implementing a User-Defined Serializable Object Using the C# API 282
Example 14.7 Implementing an Embedded Object Using the Java API 283
Example 14.8 Implementing Complex Data Types Using the Java API 285
Example B.1 gfcpp.properties File Format 293
Example B.2 Default gfcpp.properties File 294
Preface
Typographical Conventions
This document uses the following typographical conventions:
Methods, types, file names and paths, code listings, and prompts are shown in Courier New
typeface. For example:
put
Parameters and variables are shown in italic font. For example,
gfConnect(sysDir, connectionName, writeProtectAllowed)
In examples showing both user input and system output, the lines you type are distinguished
from system output by boldface type:
prompt> gemfire
If you are viewing this document online, the page, section, and chapter references are hyperlinks
to the places they refer to, like this reference to System Requirements on page 25 and this
reference to Chapter 1, Introducing the GemFire Enterprise Native Client, on page 23. Blue text
denotes a hyperlink.
Technical Support
GemStone provides several sources for product information and support. This manual, the GemFire
Enterprise Native Client Guide, and the API reference pages provide extensive documentation and
should always be your first source of information for the native client. GemStone Technical Support
engineers will refer you to these documents when applicable. However, you may need to contact
Technical Support for the following reasons:
Your technical question is not answered in the documentation
You receive an error message that directs you to contact GemStone Technical Support
You want to report a bug
You want to submit a feature request
Questions concerning product availability, pricing, license keyfiles, or future features should be
directed to your GemStone account manager.
Help Request is an online form that allows designated technical support contacts to submit
requests for information or assistance via e-mail to GemStone Technical Support.
Technotes provide answers to questions submitted by GemStone customers. They may contain
coding examples, links to other sources of information, or downloadable code.
Bugnotes identify performance issues or error conditions that you may encounter when using a
GemStone product. A bugnote describes the cause of the condition and, when possible, provides
an alternative means of accomplishing the task. In addition, bugnotes identify whether a fix is
available, either by upgrading to another version of the product or by applying a patch. Bugnotes
are updated regularly.
Patches provide code fixes and enhancements that have been developed after product release.
A patch generally addresses a specific group of behavior or performance issues. Most patches
listed on the GemStone web site are available for direct downloading.
Tips and Examples provide information and instructions for topics that usually relate to more
effective or efficient use of GemStone products. Some tips may contain code that can be
downloaded for use at your site.
Release Notes and Install Guides for product software are provided in PDF and HTML format.
Documentation for GemStone GemFire is provided in PDF and HTML format.
Community Links provide customer forums for discussion of GemStone product issues.
Technical information on the GemStone web site is reviewed and updated regularly. We recommend
that you check this site on a regular basis to obtain the latest technical information for GemStone
products. We also welcome suggestions and ideas for improving and expanding the site to better
serve you.
The GemFire Enterprise native client provides access for C++ and Microsoft .NET clients to the
GemFire Enterprise distributed system. The native client is written entirely in C++, so its initialization
process does not involve the creation of a Java virtual machine. The .NET native client provides native
operations for the .NET Framework application developer writing in .NET languages who needs to access
the GemFire cache server.
GemFire Enterprise native clients in C++, Java, and .NET languages communicate only with the cache
server and do not communicate with each other. The native clients interface with the server at the sockets
level and implement the same wire protocol to the server. These capabilities produce extremely high
performance and system scalability.
C++ and .NET native clients provide access to the full region API, including support for application plug-
ins, managed connectivity, highly available data, and reliable failover to a specified server list. All of this
is transparent to the end user.
The native client delivers the full set of capabilities supplied by Java clients communicating with the
GemFire Enterprise cache server. You can configure GemFire Enterprise native clients to cache data
locally, or they can act in a cacheless mode where they retrieve data from a cache server and directly pass
it to other system members without incurring the caching overhead. They can be configured as read-only
caches, or be configured to receive notifications from the server whenever a key of interest to the client
changes on the server.
This is a conceptual overview of how .NET and C++ applications access the cache server.
In this chapter:
System Requirements (page 25)
Installing and Uninstalling the Native Client (page 27)
Licensing (page 30)
Running Native Client Applications (page 31)
Running the Product Examples and QuickStart Guide (page 36)
Windows
The native client is built and tested on Windows XP Professional, Service Pack 2.
The native client is not supported on Windows NT 4.0.
The native client is supported with Microsoft .NET Framework 2.0, 3.0, and 3.5.
Microsoft .NET Framework Version 2.0 must be installed to support C++/CLI (Common Language
Infrastructure) for the native client.
You can download the .NET Framework Version 2.0 Redistributable Package (x86 for
32-bit or x64 for 64-bit) from http://www.microsoft.com/downloads. If it isn't listed on
the Download Center page, use the Search tool to search for .NET Framework
Version 2.0.
Linux
The GemFire Enterprise native client is built on Red Hat Enterprise ES 3, kernel version 2.4.21-47.EL.
The native client is tested on the following Linux versions:
SLES 9 SP3, kernel version 2.6.5-7.244-smp
SLES 10 SP1, kernel version 2.6.16.27-0.9-smp
Red Hat Enterprise ES 4, kernel version 2.6.9-27.ELsmp
Update 3 (Nahant) or later is recommended.
Red Hat Enterprise 5 release 5 (Tikanga), kernel version 2.6.18-8.EL5
Red Hat Enterprise WS 3 and WS 4, kernel version 2.4.21-40.ELsmp
If you are not sure of the kernel version on your system, use this command to list it:
prompt> uname -r
The following table lists the RPM package dependencies for several Linux distributions. The i386 or i686
after the package name indicates that you must install the package for that particular architecture
regardless of the native operating system architecture. All of the packages listed are available with the
default media for each distribution.
Table 1.1 GemFire Dependencies on Linux RPM Packages
For versions of Linux not listed in the table, you can verify that you meet the native client dependencies
at the library level by using the ldd tool and entering this command:
prompt> ldd $GFCPP/lib/libgfcppcache.so
The following libraries are external dependencies of the native library, libgfcppcache.so. Verify that
the ldd tool output includes all of these:
libdl.so.2
libm.so.6
libpthread.so.0
libc.so.6
libz.so.1
For details on the ldd tool, see its Linux online man page.
Solaris
The native client is supported on the following Solaris versions:
Solaris 9 kernel update 118558-38
Solaris 10 kernel update 118833-24
Linux
The Linux installer is NativeClient_xxxx_Lin.zip. The default installation path is
NativeClient_xxxx, where xxxx represents the four-digit product version.
Solaris
The Solaris installer is NativeClient_xxxx_Sol.zip. The default installation path is
NativeClient_xxxx, where xxxx represents the four-digit product version.
Installing on Windows
The native client can be installed on Windows by using the NativeClient_xxxx.msi Windows
installer, where xxxx represents the four-digit version of the product. The installer requires msiexec
version 3.0 or higher. Version 3.0 is standard on Windows XP SP2.
Double-click the MSI file to start the installation. You can accept the default installation path, or install
to a different location. The product is installed in NativeClient_xxxx, where xxxx represents the
four-digit product version.
You must be logged in with administrative privileges or the MSI installation will fail.
To complete the installation, the MSI installer automatically configures these native client system
environment settings:
Sets the GFCPP environment variable to productDir, where productDir represents the
NativeClient_xxxx directory (xxxx is the four-digit product version).
Adds %GFCPP%\bin to the Windows PATH.
Option                        Explanation
/q                            Creates a quiet installation with no interface or prompts.
/i                            Indicates that the product is to be installed or configured.
DEFAULT_INSTALLDIR=<path>     Specifies the destination directory, if different from the default.
/x                            Indicates a product uninstall procedure.
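For example, the following command line performs a quiet installation to a custom directory. The installer
file name and target path are placeholders; substitute your own:
prompt> msiexec /q /i NativeClient_xxxx.msi DEFAULT_INSTALLDIR=C:\GemFireNativeClient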
Windows
The native client can be uninstalled on Windows using the MSI installer graphical interface, its
command-line interface, or the Windows Control Panel.
1.3 Licensing
The native client requires a license for full functionality with GemFire. You must specifically request a
native client license when you contact GemStone Technical Support.
An application can reference the license file by using a gfcpp.properties configuration file that
points to where the license file is located. For example, this setting entered in gfcpp.properties
points to the license file that is in the directory where the gfcpp.properties file is located:
license-file=gfCppLicense.zip
If any licensing errors occur, a log file is generated for examination. The log file may help you
and the GemStone Technical Support team resolve the licensing problem.
Argument Explanation
-D_REENTRANT Required to compile Linux programs in a thread-safe way.
-m32 Enables 32-bit compilation.
-m64 Enables 64-bit compilation.
-I$GFCPP/include Specifies the native client include directory.
Dynamically Linking
The following table lists the linker switches that must be present on the command line when dynamically
linking to the GemFire library.
Table 1.4 Linker Switches (Dynamically Linking on Linux)
Argument              Explanation
-rpath $GFCPP/lib     Tells the linker to look in $GFCPP/lib for libraries on which the GemFire library depends.
-L$GFCPP/lib          Tells the linker where to find the named libraries.
-lgfcppcache          Links the GemFire C++ cache library to the compiled executable.
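Putting these switches together, a representative build line might look like the following sketch. The source
and output file names are placeholders, and -Wl,-rpath is used here to pass the -rpath switch through the
g++ driver to the linker:
prompt> g++ -D_REENTRANT -m64 -I$GFCPP/include myclient.cpp -o myclient -L$GFCPP/lib -Wl,-rpath,$GFCPP/lib -lgfcppcache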
Argument                             Explanation
/MD                                  Memory model.
/EHsc                                Catches C++ exceptions only and tells the compiler to assume that extern "C" functions never throw a C++ exception.
/GR                                  Runtime type information.
-I%GFCPP%\include                    Specifies the GemFire include directory.
%GFCPP%\lib\gfcppcache.lib           Specifies the library file for the shared library.
/D_CRT_SECURE_NO_DEPRECATE           Suppresses warnings. Required for Visual Studio 2005.
/D_CRT_NON_CONFORMING_SWPRINTFS      Suppresses warnings. Required for Visual Studio 2005.
Argument                 Explanation
-D_REENTRANT             Required to compile Solaris programs in a thread-safe way.
-xarch=v8plus            Enables 32-bit compilation.
-xarch=v9                Enables 64-bit compilation.
-ldl -lpthread -lc -lm -lsocket -lrt -lnsl -ldemangle -lkstat -lz    Additional libraries.
-library=stlport4        Solaris library compilation.
-I$GFCPP/include         Specifies the GemFire include directory.
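A representative Solaris compile line using these switches might look like the following sketch; the source
file name is a placeholder, and the additional libraries listed above are supplied at link time:
prompt> CC -D_REENTRANT -xarch=v9 -library=stlport4 -I$GFCPP/include -c myclient.cpp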
Product Examples
The product examples demonstrate the capabilities of the GemFire Enterprise native client, and provide
source code so you can examine how each example is designed. Both C++ and C# examples are available
to see how the native client performs as either a C++ or C# client. Each example has a README.HTML
that provides setup and operating instructions.
The examples are located in the subdirectories of the productDir/examples directory, where
productDir is the path to the native client product directory.
The C# examples have a .config file that should not be modified or deleted. The file
provides references to support files for running the example.
Configuring Output
You can control the output of the product examples through an optional system-level configuration file.
Follow these steps:
1. If it is not present, copy productDir/defaultSystem/gfcpp.properties to the example
directory.
2. To decrease the volume of messages, open the copy of gfcpp.properties in the example
directory, uncomment the log-level line, then change its setting to error:
log-level=error
3. To send the messages to a log file instead of stdout, uncomment the log-file line and specify a
name for the log file:
log-file=gemfire_cpp.log
QuickStart Guide
The QuickStart Guide for the native client consists of a set of compact programming samples that
demonstrate both C++ and C# client operations. Run the QuickStart Guide to rapidly become familiar
with native client functionality. The QuickStart has a README.HTML file in productDir/quickstart
that describes each programming sample, explains initial environment setup, and provides instructions
for running the QuickStart.
The GemFire Enterprise native client cache provides a framework for native clients to store, manage, and
distribute application data. The native client cache provides the following features:
Local and distributed data caching for fast access.
Data distribution between applications on the same or different platforms.
Local and remote data loading through application plug-ins.
Application plug-ins for synchronous and asynchronous handling of data events.
Automated and application-specific data eviction for freeing up space in the cache, including
optional overflow to disk.
System message logging, and statistics gathering and archiving.
In this chapter:
Using Thread Safety in Cache Management (page 38)
Caches (page 39)
Regions (page 42)
Entries (page 46)
Region Attributes (page 53)
Managing the Lifetime of a Cached Object (page 71)
Client to Server Connection Process (page 72)
Troubleshooting (page 73)
2.2 Caches
The cache is the entry point to native client data caching in GemFire. Through the cache, native clients
gain access to the GemFire caching framework for data loading, distribution, and maintenance.
The cache is composed of a number of data regions, each of which can contain any number of entries.
Region entries hold the cached data. Every entry has a key that uniquely identifies it within the region
and a value where the data object is stored.
Regions are created from the Cache instance. Regions provide the entry point to the interfaces for
instances of Region and RegionEntry.
This section describes cache management. Subsequent sections cover region and entry management.
RegionService
RegionService provides:
Access to existing cache regions.
Access to the standard query service for the cache, which sends queries to the servers. See Chapter
9, Remote Querying, on page 179 and Chapter 10, Continuous Querying, on page 221.
RegionService is inherited by Cache.
You do not use instances of RegionService except for secure client applications with many users. See
Creating Multiple Secure User Connections with RegionService on page 168.
Cache
Use the Cache to manage your client caches. You have one Cache per client.
The Cache inherits RegionService and adds management of these client caching features:
Region creation.
Subscription keepalive management for durable clients.
Access to the underlying distributed system.
RegionService creation for secure access by multiple users.
Cache Ownership
The distributed system defines how native client and cache server processes find each other. The
distributed system keeps track of its membership list and makes its members aware of the identities of
the other members in the distributed system.
A cache for a native client is referred to as its local cache. All other caches in the distributed system are
considered remote caches to the application. Every cache server and application process has its own
cache. The term distributed cache is used to describe the union of all caches in a GemFire distributed
system.
Cache Functionality
The Cache instance allows your process to do the following:
Set general parameters for communication between a cache and other caches in the distributed
system.
Create and access any region in the cache. This provides the entry point to region and entry
management in the cache.
2.3 Regions
You can create cache regions either programmatically or through declarative statements in the
cache.xml file. Generally, a cache is organized and populated through a combination of the two
approaches.
Region creation is subject to attribute consistency checks. A distributed region can be either
non-partitioned or partitioned. See the GemFire Enterprise Developer's Guide for detailed
descriptions of both non-partitioned and partitioned regions. The requirements for consistency between
attributes are detailed both in the online API documentation and throughout the discussion of Region
Attributes on page 53.
The following topics describe several scenarios for creating and accessing regions.
<cache>
<pool name="examplePool" subscription-enabled="true" >
<server host="localhost" port="40404" />
</pool>
<region name="A" refid="PROXY">
<region-attributes pool-name="examplePool"/>
</region>
<region name="A1">
<region-attributes refid="PROXY" pool-name="examplePool"/>
</region>
<region name="A2" refid="CACHING_PROXY">
<region-attributes pool-name="examplePool">
<region-time-to-live>
<expiration-attributes timeout="120" action="invalidate"/>
</region-time-to-live>
</region-attributes>
</region>
</cache>
The cache.xml file contents must conform to the DTD provided in the
productDir/dtd/gfcpp-cache3500.dtd file. For details, see Cache Initialization File on page 75.
RegionFactoryPtr regionFactory =
cachePtr->createRegionFactory(CACHING_PROXY);
RegionPtr regPtr0 = regionFactory->setLruEntriesLimit(20000)
->create("exampleRegion0");
Region Access
You can use Cache::getRegion to retrieve a reference to a specified region. RegionPtr returns
NULL if the region is not already present in the application's cache. A server region must already exist.
A region's name cannot contain the slash (/) character. Region names also cannot contain these
characters: < > : \ | ? *
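For example, this minimal sketch retrieves the region created earlier in this chapter and checks whether it
is present in the application's cache (the region name follows that earlier example):
RegionPtr regionPtr = cachePtr->getRegion("exampleRegion0");
if (regionPtr == NULLPTR) {
  // the region does not exist in the application's cache
}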
Figure 2.1 Data Flow Between Overflow Region and Disk Files
(Figure shows Region B entries X, Y, and Z: values of overflow entries are moved to disk files while their
keys are retained in memory; updates, invalidates, and destroys are propagated between the region and the
disk files.)
In this figure the value of the LRU entry X has been moved to disk to recover space in memory. The key
for the entry remains in memory. From the distributed system perspective, the value on disk is as much
a part of the region as the data in memory. A get performed on region B looks first in memory and then
on disk as part of the local cache search.
2.4 Entries
Region entries hold the cached application data. Entries are automatically managed according to region
attribute settings, and can be created, updated, invalidated, and destroyed through explicit API calls or
through operations distributed from other caches. When the number of entries is very large, a partitioned
region can provide the required data management capacity if the total size of the data is greater than the
heap in any single VM. See the GemFire Enterprise Developer's Guide for a detailed description of
partitioned regions and related data management.
When an entry is created, a new object is instantiated in the region containing:
The entry key.
The entry value. This is the application data object. The entry value may be set to NULL, which is the
equivalent of an invalid value.
Entry operations invoke callbacks to user-defined application plug-ins. This section highlights the calls
that may affect the entry operation itself (for example, by providing a value or aborting the operation);
it does not list all possible interactions. For details, see Application Plug-Ins on page 61.
DateTime objects must be stored in the cache in UTC, so that times correspond between client and
server. If you use a date with a different time zone, convert it when storing into and retrieving from the
cache.
Client Notification
In addition to the programmatic function calls, to register interest for a server region and receive updated
entries, you need to configure the region with the PROXY or CACHING_PROXY RegionShortcut setting.
Otherwise, when you register interest, you'll get an UnsupportedOperationException.
Both native clients and Java clients that have subscriptions enabled track and drop (ignore) any duplicate
notifications received. To reduce resource usage, a client expires tracked sources for which new
notifications have not been received for a configurable amount of time.
Notification Sequence
Notifications invoke CacheListeners of cacheless clients in all cases for keys that have been registered
on the server. Similarly, invalidates received from the server invoke CacheListeners of cacheless clients.
If you register to receive notifications, listener callbacks are invoked irrespective of whether the key is
in the client cache when a destroy or invalidate event is received.
The following programmatic code snippet shows how to register interest in specific keys; the key and
region pointers are assumed to have been created earlier:
VectorOfCacheableKey keys0;
VectorOfCacheableKey keys1;
keys0.push_back(keyPtr1);
keys1.push_back(keyPtr3);
regPtr0->registerKeys(keys0);
regPtr1->registerKeys(keys1);
The programmatic code snippet in the next example shows how to unregister interest in specific keys:
regPtr0->unregisterKeys(keys0);
regPtr1->unregisterKeys(keys1);
The next snippet registers interest in all keys for the two regions:
regPtr0->registerAllKeys();
regPtr1->registerAllKeys();
The following example shows a code sample for unregistering interest in all keys.
regPtr0->unregisterAllKeys();
regPtr1->unregisterAllKeys();
The following example shows interest registration for all keys whose first four characters are Key-,
followed by any string of characters. The characters .* represent a wildcard that matches any string.
regPtr1->registerRegex("Key-.*");
To unregister interest using regular expressions, you use the unregisterRegex function. The next
example shows how to unregister interest in all keys whose first four characters are Key-, followed by
any string (represented by the .* wildcard).
regPtr1->unregisterRegex("Key-.*");
Example 2.10 Using serverKeys to Retrieve the Set of Keys From the Server
VectorOfCacheableKey keysVec;
region->serverKeys( keysVec );
size_t vlen = keysVec.size();
bool foundKey1 = false;
bool foundKey2 = false;
for( size_t i = 0; i < vlen; i++ ) {
CacheableStringPtr strPtr = dynCast<CacheableStringPtr>( keysVec.at( i ) );
std::string veckey = strPtr->asChar();
if ( veckey == "skey1" ) {
printf( "found skey1" );
foundKey1 = true;
}
if ( veckey == "skey2" ) {
printf( "found skey2" );
foundKey2 = true;
}
}
DateTime objects must be stored in the cache in UTC, so that times correspond between client and
server. If you use a date with a different time zone, convert it when storing into and retrieving from the
cache. This example converts a local time to UTC for a put operation:
DateTime t1 = new DateTime( 2009, 8, 13, 4, 11, 0, DateTimeKind.Local );
region0.Put( 1, t1.ToUniversalTime() );
Updating Entries
A cached entry can be updated using these methods:
Explicitly, when a client invokes a put operation on an existing entry.
Implicitly, when a get is performed on an entry that has an invalid value in the cache. An entry can
become invalid through an explicit API call, through an automated expiration action, or by being
created with a value of null.
Automatically, when a new entry value is distributed from another cache.
Similar to entry creation, all of these operations can be aborted by a cache writer.
The get function returns a direct reference to the entry value object. A change made using that reference
is called an in-place change because it directly modifies the contents of the value in the local cache. For
details on safe cache access, see Changed Objects on page 71.
Accessing Entries
The API provides operations for the retrieval of the entry key, entry value, and the RegionEntry object
itself. A variety of functions provide information for individual entries and for the set of all entries
resident in the region. The online API documentation lists all available access functions.
A region's entry keys and RegionEntry objects are directly available from the local cache. Applications
can directly access the local cache's stored entry value through the RegionEntry::getValue function.
The getValue function either returns the value if a valid value is present in the local cache, or NULL if
the value is not valid locally. This function does no data loading, nor does it look elsewhere in the
distributed system for a valid value.
Direct access through RegionEntry::getValue does not reset an entry's
timestamp for LRU expiration. See Expiration Attributes on page 69 for more
information about LRU expiration.
In comparison, the standard Region::get functions consider all caches and all applicable loaders in the
distributed system in an attempt to return a valid entry value to the calling application. The primary
attribute setting affecting entry retrieval is CacheLoader (page 62).
The standard Region::get functions may implement a number of operations in order to retrieve a valid
entry value. The operations used depend on the region's attribute settings and on the state of the entry
itself. By default, the client retrieves entry values through calls to the get function. The client can
override this behavior for any region by defining a cache loader for the region.
The following sections discuss the get function and special considerations for entry retrieval.
Entry Retrieval
Entry values are retrieved through the Region::get function.
When an entry value is requested from a region, it is either retrieved from the cache server or fetched by
the region's locally defined cache loader in this sequence:
Whether carried out explicitly or through expiration activities, invalidation and destruction cause event
notification: The CacheEvent object has an isExpiration flag that is set to true for events resulting
from expiration activities, and set to false for all other events.
has been evicted by LRU or expiration during this period, so when an exception is received from the
server for an operation then local changes may not have been rolled back
RegionShortcuts
GemFire provides a number of predefined, shortcut region attributes settings for your use, in
RegionShortcut. You can also create custom region attributes and store them with an identifier for
later retrieval. Both types of stored attributes are referred to as named region attributes. You can create
and store your attribute settings in the cache.xml file and through the API.
Retrieve named attributes by providing the ID to the region creation. This example uses the shortcut
CACHING_PROXY attributes to create a region:
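A minimal C++ sketch of this, following the programmatic region creation shown earlier in this chapter
(the region name is a placeholder):
RegionFactoryPtr regionFactory =
cachePtr->createRegionFactory(CACHING_PROXY);
RegionPtr regionPtr = regionFactory->create("exampleRegion");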
You can modify named attributes as needed. For example, this adds a cache listener to the region:
In this example, the modified region shortcut is saved to the cache using the region attribute id, for
retrieval and use by a second region:
Example 2.13 Storing and Using a New Named Region Attributes Setting
Data Eviction
For the non-PROXY regions (the regions that store data in the client cache) you can add data eviction:
ENTRY_LRU causes least recently used data to be evicted from memory when the region reaches the
entry count limit.
The remaining region attributes identify expiration and cache listener, cache writer and cache loader
actions that are run from the defining client. The next table lists the mutable attributes that generally may
be modified after region creation by using the AttributesMutator for the region.
See Using AttributesMutator to Modify a Plug-In on page 67 for information about using
AttributesMutator with cache listeners, cache loaders and cache writers.
The remainder of this section examines these attributes in detail. Throughout the descriptions,
cache.xml file snippets show how each attribute can be set declaratively.
CachingEnabled
This attribute determines whether data is cached in this region. For example, you might choose to
configure the distributed system as a simple messaging service where clients run without a cache.
You can configure the most common of these options with GemFire's predefined
region attributes. See RegionShortcuts on page 53 and the Javadocs for
RegionShortcut.
If this attribute is false (no caching), an IllegalStateException is thrown if any of these attributes
are set:
InitialCapacity (page 56)
EntryTimeToLive (page 69)
EntryIdleTimeout (page 69)
LoadFactor (page 56)
ConcurrencyLevel (page 56)
LruEntriesLimit (page 57)
DiskPolicy (page 57)
The following declaration enables caching for the region:
<region-attributes caching-enabled="true">
</region-attributes>
InitialCapacity
This attribute, together with the LoadFactor attribute, sets the initial parameters on the underlying
hashmap used for storing region entries. This is the entry capacity that the region map will be ready to
hold when it is created.
This declaration sets the region's initial capacity to 10000:
<region-attributes initial-capacity="10000">
</region-attributes>
LoadFactor
This attribute, together with the InitialCapacity attribute, sets the initial parameters on the
underlying hashmap used for storing region entries. When the number of entries in the map exceeds the
LoadFactor times current capacity, the capacity is increased and the map is rehashed. The best
performance is achieved if you configure a properly sized region at the start and do not have to rehash it.
This declaration sets the region's load factor to 0.75:
<region-attributes load-factor="0.75">
</region-attributes>
ConcurrencyLevel
This attribute estimates the maximum number of application threads that concurrently access a region
entry at one time. This attribute helps optimize the use of system resources and reduce thread contention.
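A declaration following the pattern of the other attributes might look like the following sketch; the
concurrency-level attribute name is an assumption based on that pattern, so confirm it against the DTD
before use:
<region-attributes concurrency-level="16">
</region-attributes>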
LruEntriesLimit
This attribute sets the maximum number of entries to hold in a caching region. When the capacity of the
caching region is exceeded, a least-recently-used (LRU) algorithm is used to evict entries.
This is a tuning parameter that affects system performance.
This declaration limits the region to 20,000 entries:
<region-attributes lru-entries-limit="20000"
initial-capacity="20000"
load-factor="1">
</region-attributes>
Evicted entries can be destroyed or moved to disk as an extension of the cache. See DiskPolicy (page 57).
When CachingEnabled is false, do not set the LruEntriesLimit attribute. An
IllegalStateException is thrown if the attribute is set.
DiskPolicy
If the lru-entries-limit attribute is greater than zero, the optional disk-policy attribute
determines how over-limit LRU entries are handled. LRU entries over the limit are either destroyed by
default (disk-policy is none) or written to disk (overflows).
The following declaration sets the region's LRU eviction policy to overflow to disk:
<region-attributes lru-entries-limit="nnnnn"
disk-policy="overflows">
<persistence-manager .../>
</region-attributes>
If disk-policy is set to overflows, a persistence manager must be specified to implement the actual
disk writes and reads. The persistence manager provides the interface to the back-end disk or database.
See PersistenceManager (page 57) for more information.
If LruEntriesLimit is 0, or CachingEnabled is false, do not set the
disk-policy attribute. An IllegalStateException is thrown if the attribute is set.
PersistenceManager
For each region, if the disk-policy attribute is set to overflows then a persistence-manager
plug-in performs the cache-to-disk and disk-to-cache operations. See Application Plug-Ins on page 61.
Each BDB persistence manager (PM) stores its region data in a Berkeley database that is stored in disk
files, as shown in Figure 2.2 on page 58.
Each process must have a unique environment (DB) directory. See EnvironmentDirectory (page
59).
In a process, each region's BDB persistence manager shares a single Berkeley database environment
directory. Each region must have a unique persistence (overflow) directory. See
PersistenceDirectory (page 59).
To set these attributes programmatically, see Handling Cache LRU on page 89.
Table 2.3 Berkeley DB Persistence Manager Directory Attributes
Berkeley DB performance is optimized by configuring the environment and database with the settings
listed in Table 2.4 on page 60. The recommended settings are provided as default values.
Berkeley databases share a memory pool of database pages. The environment cache is a major factor that
affects DB performance. It is recommended that the cache size be set as large as possible without causing
performance degradation in the virtual memory subsystem.
By ensuring that each database file does not grow beyond a certain limit, performance of read/write
operations can be preserved as the number of entries in the region grows. The default value is
recommended for entry sizes of 1 KB. As average entry size decreases, the limit can be increased.
Application Plug-Ins
The plug-in attributes allow you to customize client region behavior for loading, updating, deleting, and
overflowing region data and for accessing data in server partitioned regions. All client plug-ins are
available through the C++ and .NET API.
Application plug-ins for cache regions in clients can be declared either programmatically or in the
cache.xml file.
The following XML declaration specifies a cache loader for a region when the region is created.
<region-attributes>
<cache-loader library-name="appl-lib"
library-function-name ="createCacheLoader">
</cache-loader>
</region-attributes>
The API provides the framework for application plug-ins with callback functions for the appropriate
events. Your classes and functions can customize these for your application's needs. When creating a
region, specify these as part of the region's attributes settings. For regions already in the cache, you can
specify new CacheLoader, CacheWriter, and CacheListener using the region's
AttributesMutator. The PartitionResolver is not mutable.
CacheLoader: A data loader called when an entry get operation fails to find a value for a given
key. A cache loader is generally used to retrieve data from an outside source such as a database, but
it may perform any operation defined by the user. Loaders are invoked as part of the distributed
loading activities for entry retrieval, described in Entry Retrieval on page 51.
CacheWriter: A synchronous event listener that receives callbacks before region events occur
and has the ability to abort the operations. Writers are generally used to keep a back-end data source
synchronized with the cache.
CacheListener: An asynchronous event listener for region events in the local cache.
PartitionResolver: Used for single-hop access to partitioned region entries on the server side.
This resolver implementation must match that of the PartitionResolver on the server side.
The rest of this section gives more detailed descriptions of these application plug-ins, followed by special
considerations for plug-ins in distributed regions and some guidelines for writing callbacks.
CacheLoader
A cache loader is an application plug-in used to load data into the region. When an entry is requested that
is unavailable in the region, a cache loader may be called upon to load it. Generally, a cache loader is
used to retrieve the data from a database or some other source outside the distributed system, but it may
perform any operation defined by the user.
The CacheLoader interface provides one function, load, for customizing region entry loading. A
distributed region may have cache loaders defined in any or all caches where the region is defined. When
loading an entry value, a locally defined cache loader is always used before a remote loader. In
distributed regions, loaders are available for remote entry retrieval.
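A minimal C++ sketch of a loader follows. The load signature shown here (region, key, and callback
argument in; a CacheablePtr out) is an assumption; consult the online API documentation for the exact
declaration:
class SimpleLoader : public CacheLoader {
public:
  // Called when a get cannot find a value for the key; returns the loaded value.
  CacheablePtr load(const RegionPtr& region, const CacheableKeyPtr& key,
                    const UserDataPtr& callbackArg)
  {
    // A real loader would fetch the value from an outside source such as a database.
    return CacheableString::create("loaded-value");
  }
  // Called when the region containing this loader is destroyed.
  void close(const RegionPtr& region) {}
};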
CacheWriter
A cache writer is an application plug-in used to synchronously handle changes to a region's contents. It
is generally used to keep back-end data sources synchronized with a cache region. A cache writer has
callback functions to handle region destruction and entry creation, update, and destruction. These
functions are all called before the modification has taken place and can abort the operation.
You can also use cache writers to store data that you want to make persistent.
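A minimal C++ sketch of a writer that vetoes entry updates follows. The beforeUpdate signature shown
here (an EntryEvent in, a bool out indicating whether the operation may proceed) is an assumption;
consult the online API documentation for the exact declaration:
class ReadOnlyWriter : public CacheWriter {
public:
  // Returning false aborts the update before it is applied to the region.
  bool beforeUpdate(const EntryEvent& event)
  {
    return false;
  }
};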
CacheListener
A cache listener is an application plug-in used to asynchronously handle changes to a region's contents.
A cache listener has callback functions to handle region destruction and invalidation, along with entry
creation, update, invalidation, and destruction. These functions are called asynchronously after the
modification has taken place.
This declarative XML example establishes a cache listener when a region is created:
<region name="region11">
<region-attributes>
<cache-listener library-name="appl-lib"
library-function-name ="createCacheListener" />
</region-attributes>
</region>
Unlike cache loaders and cache writers, cache listeners only receive events for entries to which the client
has performed operations or registered interest.
When the listener is attached to a region with caching disabled, the old value is always NULL.
Caution: Do not perform region operations inside the cache listener. Once you have
configured a cache listener, the event supplies the new entry values to the application.
Performing a get with a key from the EntryEvent can result in distributed deadlock.
For more about this, see the online API documentation for EntryEvent.
When a region disconnects from a cache listener, you can implement the afterRegionDisconnected
callback event. This callback event is only invoked when you use the pool API and subscription is
enabled on the pool. For example:
Example 2.16 Defining Callback After Region Disconnects From Cache Listener
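A minimal C++ sketch of such a listener follows. The afterRegionDisconnected signature shown here
(a RegionPtr argument) is an assumption; consult the online API documentation for the exact declaration:
class DisconnectListener : public CacheListener {
public:
  // Invoked when the region's pool loses its subscription connection to the servers.
  void afterRegionDisconnected(const RegionPtr& region)
  {
    printf("Region %s disconnected from the cache server.\n", region->getName());
  }
};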
PartitionResolver
This section pertains to data access in server regions that have custom partitioning. Custom partitioning
uses a Java PartitionResolver to colocate like data in the same buckets. For the client, you can use
a PartitionResolver that matches the server's implementation to access data in a single hop. With
single-hop data access, the client pool maintains information on where a partitioned region's data is
hosted. When accessing a single entry, the client directly contacts the server that hosts the key, in a
single hop.
Single hop is only used for these operations that are run on single data points: put,
get, getAll, destroy.
Implementing a PartitionResolver
See the GemFire Enterprise Developer's Guide for information on custom partitioning the server
partitioned regions.
Your implementation of PartitionResolver must match that of the server side.
1. Implement PartitionResolver in the same place as you did in the server: custom class, key, or
cache callback argument.
2. Program the resolver's functions the same way you programmed them in the Java implementation.
Your implementation must match the server's.
using System;
using System.Threading;
// Use the GemFire namespace
using GemStone.GemFire.Cache;
class TradeKeyResolver : IPartitionResolver
{
private int m_month = 0;
private int m_year = 0;
subscription-ack-interval="567" subscription-enabled="true"
subscription-message-tracking-timeout="900123"
subscription-redundancy="0" thread-local-connections="5"
pr-single-hop-enabled="true">
<locator host="localhost" port="34756"/>
</pool>
void setPartitionResolver()
{
CachePtr cachePtr = CacheFactory::createCacheFactory()->create();
PartitionResolverPtr resolver( new TradeKeyResolver());
RegionFactoryPtr regionFactory =
cachePtr->createRegionFactory(PROXY)
->setClientNotificationEnabled(true)
->setPartitionResolver(resolver);
RegionPtr regionPtr = regionFactory->create( "Trades" );
}
The plug-ins can also be implemented using a dynamically linked library. The class is not available to
the application code in this case, so a factory method is required by the set function along with the
name of the library.
This example shows how to use AttributesMutator along with the setCacheListener function to
obtain a new cache listener object using the factory function provided by the library. Next, the listener
is set for the region.
To use AttributesMutator to remove a plug-in from a region, set the plug-in's value to NULLPTR, as
shown in the following example.
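A minimal C++ sketch of both operations follows. The setCacheListener overload taking a library name
and factory function name, and the use of a null pointer to clear the plug-in, are assumptions based on the
description above; the library and factory names follow the earlier XML declaration:
AttributesMutatorPtr mutatorPtr = regionPtr->getAttributesMutator();
// Attach a listener created by the factory function exported from the plug-in library.
mutatorPtr->setCacheListener("appl-lib", "createCacheListener");
// Later, remove the listener by setting the plug-in value to a null pointer (NULLPTR).
CacheListenerPtr nullListener;
mutatorPtr->setCacheListener(nullListener);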
Expiration Attributes
Expiration attributes govern the automatic eviction of regions and region entries from the cache. Eviction
is based on the time elapsed since the last update or access to the object. This is referred to as the
least-recently-used (LRU) eviction process. Expiration options range from marking the expired object as
invalid to completely removing it from the distributed cache. Eviction can help keep data current by
removing outdated entries, prompting a reload the next time they are requested. Eviction may also be
used to recover space in the cache by clearing out unaccessed entries and regions.
Similar to application plug-ins, expiration activities are hosted by each application that defines a region
in its cache.
The following example shows a declaration that causes the region's entries to be invalidated in the local
cache after they have not been accessed for one minute.
<region-attributes>
<entry-idle-time>
<expiration-attributes timeout="60" action="local-invalidate"/>
</entry-idle-time>
</region-attributes>
Region and region entry expiration attributes are set at the region level. By default, regions and entries
do not expire. The following attributes cover two types of expiration: time-to-live (TTL) and idle
timeout.
RegionTimeToLive: The amount of time, in seconds, that the region remains in the cache after
the last creation or update before the expiration action occurs.
EntryTimeToLive: For entries, the counter is set to zero for create and put operations. Region
counters are reset when the region is created and when an entry has its counter reset. An update to
an entry causes the time-to-live (TTL) counters to be reset for the entry and its region.
RegionIdleTimeout: The amount of time, in seconds, that the region remains in the cache after
the last access before the expiration action occurs.
EntryIdleTimeout: The idle timeout counter for an object is reset when its TTL counter is reset.
An entry's idle timeout counter is also reset whenever the entry is accessed through a get operation.
The idle timeout counter for a region is reset whenever the idle timeout is reset for one of its entries.
Expiration Actions
You can specify one of the following actions for each expiration attribute:
Destroy: Remove the object completely from the cache. For regions, all entries are destroyed as
well. Destroy actions are distributed according to the region's distribution settings.
Invalidate: Mark the object as invalid. For entries, the value is set to NULL. Regions are invalidated
by invalidating all of their entries. Invalidate actions are distributed according to the region's
distribution settings. When an entry is invalid, a get causes the cache to retrieve the entry according
to the steps described in Entry Retrieval on page 51.
Local destroy: Destroy the object in the local cache but do not distribute the operation.
Local invalidate: Invalidate the object in the local cache but do not distribute the operation.
Destruction and invalidation cause the same event notification activities whether
carried out explicitly or through expiration activities.
Region Expiration
Expiration activities in distributed regions can be distributed or performed only in the local cache. So one
cache could control region expiration for a number of caches in the distributed system.
In the example, the act of object creation allocates memory and initializes the object.
When you assign the object to a SharedPtr, you relinquish control of the lifetime of that object
to the reference counting mechanism for the cache.
The put operation does not actually copy the object into the cache. Rather, it copies a
SharedPtr into the cache's hashmap. Consequently, the object remains alive in the cache
when the original SharedPtr goes away.
The client can make use of an object after you have initialized the object. For example, another
SharedPtr might issue a get to retrieve the object from the cache:
CacheableStringPtr p2 = region.get("key");
Because p (the original SharedPtr) and p2 point to the same object in memory, it is possible under
some circumstances for multiple SharedPtr types to work on the same object in data storage.
Once you have put an object into the cache, do not delete it explicitly. Attempting to do
so can produce undesirable results.
Changed Objects
If an object update is received, the cache no longer holds the same object. Rather, it holds a completely
different instance of the object. The client does not see the updates until it calls a get to fetch the object
again from the local cache, or (in a cache plug-in) calls EntryEvent::getNewValue.
For more about plug-ins, see Application Plug-Ins on page 61.
Object Expiration
When a cache automatically deletes an object as a result of an expiration action, the reference counting
pointers protect the client from situations that might otherwise result if the cache actually freed the
object's memory. Instead, the client disconnects the object from the cache by deleting the cache's
SharedPtr reference, while leaving untouched any client threads with a SharedPtr to that object.
2.8 Troubleshooting
To ease the task of managing the structure of the cache, you can define the default cache structure in an
XML-based initialization file. To modify the cache structure, you just edit cache.xml in your preferred
text editor. No changes to the application code are required.
This chapter describes the file format of the cache.xml file and discusses its contents.
In this chapter:
Cache Initialization File Basics (page 76)
Example cache.xml File (page 77)
Native Client Cache XML DTD (page 78)
File Contents
The contents of a declarative XML file correspond to APIs declared in the Cache.hpp and Region.hpp
header files. The cache initialization file allows you to accomplish declaratively many of the cache
management activities that you can program through the API.
The contents of the cache initialization file must conform to the XML definition in
productDir/dtd/gfcpp-cache3500.dtd (see Native Client Cache XML DTD on page 78).
The name of the declarative XML file is specified when establishing a connection to the distributed
system. You can define it by setting the cache-xml-file configuration attribute in the
gfcpp.properties file for the native client. For details about the gfcpp.properties file, see
Setting Properties on page 139.
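For example, this gfcpp.properties line names a declarative file in the working directory (the file name
itself is a placeholder):
cache-xml-file=clientCache.xml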
For details about the individual region attributes, see Region Attributes on page 53.
<!--
This is the XML DTD for the GemFire Enterprise -C++ distributed cache declarative
caching XML file.
The contents of a declarative XML file correspond to APIs found in the
Gemfire Enterprise -C++ product, more specifically in the
Cache.hpp and Region.hpp files in the product include directory
A declarative caching XML file is used to populate a Cache
when it is created.
-->
<!--
The "cache" element is the root element of the declarative cache file.
This element configures a GemFire Enterprise -C++ Cache and describes the
root regions it contains, if any.
-->
<!ELEMENT cache (pool*, root-region*, region*)>
<!ATTLIST cache
endpoints CDATA #IMPLIED
redundancy-level CDATA #IMPLIED
>
<!--
A "locator" element specifies the host and port that a server locator is listening on
-->
<!ELEMENT locator EMPTY>
<!ATTLIST locator
host CDATA #REQUIRED
port CDATA #REQUIRED
>
<!--
A "server" element specifies the host and port that a cache server is listening on
-->
<!ELEMENT server EMPTY>
<!ATTLIST server
host CDATA #REQUIRED
port CDATA #REQUIRED
>
<!--
A "pool" element specifies a client-server connection pool.
-->
<!--
A root-region" element describes a root region whose entries and
subregions will be stored in memory.
Note that the "name" attribute specifies the simple name of the region;
it cannot contain a "/".
-->
<!ELEMENT root-region (region-attributes?, region*)>
<!ATTLIST root-region
name CDATA #REQUIRED
>
<!--
A "region" element describes a region (and its entries) in GemFire
Enterprise -C++ distributed cache. It may be used to create a new region or may be
used to add new entries to an existing region. Note that the "name"
attribute specifies the simple name of the region; it cannot contain a
"/".
-->
<!ELEMENT region (region-attributes?, region*)>
<!ATTLIST region
name CDATA #REQUIRED
>
<!--
A "region-attributes" element describes the attributes of a region to
be created. For more details see the AttributesFactory header in the
product include directory
-->
<!ELEMENT region-attributes ((region-time-to-live |
region-idle-time | entry-time-to-live | entry-idle-time |
partition-resolver |
cache-loader | cache-listener | cache-writer | persistence-manager)*)>
<!ATTLIST region-attributes
caching-enabled (true | TRUE | false | FALSE) #IMPLIED
cloning-enabled (true | TRUE | false | FALSE) #IMPLIED
scope (local | distributed-no-ack | distributed-ack ) #IMPLIED
<!--
A "region-time-to-live" element specifies a Region's time to live
-->
<!ELEMENT region-time-to-live (expiration-attributes)>
<!--
A "region-idle-time" element specifies a Region's idle time
-->
<!ELEMENT region-idle-time (expiration-attributes)>
<!--
A "entry-time-to-live" element specifies a Region's entries' time to
live
-->
<!ELEMENT entry-time-to-live (expiration-attributes)>
<!--
A "entry-idle-time" element specifies a Region's entries' idle time
-->
<!ELEMENT entry-idle-time (expiration-attributes)>
<!--
A "properties" element specifies a persistence properties
-->
<!ELEMENT properties (property*)>
<!--
An "expiration-attributes" element describes expiration
-->
<!ELEMENT expiration-attributes EMPTY>
<!ATTLIST expiration-attributes
timeout CDATA #REQUIRED
action (invalidate | destroy | local-invalidate | local-destroy) #IMPLIED
>
<!--
A "cache-loader" element describes a region's CacheLoader
-->
<!ELEMENT cache-loader EMPTY >
<!ATTLIST cache-loader
library-name CDATA #IMPLIED
library-function-name CDATA #REQUIRED
>
<!--
A "cache-listener" element describes a region's CacheListener
-->
<!ELEMENT cache-listener EMPTY>
<!ATTLIST cache-listener
library-name CDATA #IMPLIED
library-function-name CDATA #REQUIRED
>
<!--
A "cache-writer" element describes a region's CacheWriter
-->
<!ELEMENT cache-writer EMPTY>
<!ATTLIST cache-writer
library-name CDATA #IMPLIED
library-function-name CDATA #REQUIRED
>
<!--
A "partition-resolver" element describes a region's PartitionResolver
-->
<!ELEMENT partition-resolver EMPTY>
<!ATTLIST partition-resolver
library-name CDATA #IMPLIED
library-function-name CDATA #REQUIRED
>
<!--
A "persistence-manager" element describes a region's persistence feature
-->
<!ELEMENT persistence-manager (properties)>
<!ATTLIST persistence-manager
library-name CDATA #IMPLIED
library-function-name CDATA #REQUIRED
>
<!--
A "property" element describes a persistence property
-->
<!ELEMENT property EMPTY>
<!ATTLIST property
name CDATA #REQUIRED
value CDATA #REQUIRED
>
The native client C++ caching API allows C++ clients to manage data in a GemFire Enterprise system.
The online C++ API documentation (included in the docs/cppdocs directory of the GemFire Enterprise
native client product installation) provides extensive implementation details for the C++ structures and
functions.
Several example API programs are included in the productDir/examples directory. See Running the
Product Examples and QuickStart Guide on page 36.
In this chapter:
The GemFire C++ API (page 84)
Creating a Cache (page 87)
Cache Eviction and Overflow (page 88)
Adding an Entry to the Cache (page 91)
Accessing an Entry (page 92)
Serialization (page 93)
Using a Custom Class (page 102)
Creating New Statistics (page 104)
Cache
CacheFactory: This class creates a Cache instance based on an instance of
DistributedSystem. If cache.xml is specified by DistributedSystem, the cache is created
based on the declarations loaded from that file.
Cache: This is the entry point to the client caching API. You create regions with this class. The
cache is created by calling the create function of the factory class, CacheFactory. When creating
a cache, you specify a DistributedSystem that tells the new cache where to find other caches on
the network and how to communicate with them.
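For example, a minimal cache creation sequence, shown here as a sketch following the pool-style creation used later in this manual, might look like this:
PropertiesPtr systemProps = Properties::create();
CacheFactoryPtr cacheFactoryPtr = CacheFactory::createCacheFactory( systemProps );
// Create the cache; the cache finds other members through the supplied properties.
CachePtr cachePtr = cacheFactoryPtr->create();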
Region
Region: This class provides functions for managing regions and cached data. The functions for
this class allow you to perform the following actions:
Retrieve information about the region, such as its parent region and region attribute objects.
Invalidate or destroy the region.
Create, update, invalidate, and destroy region entries.
Retrieve region entry keys, entry values, and RegionEntry objects, either individually or as
entire sets.
Retrieve the statistics object associated with the region.
Set and get user-defined attributes.
RegionEntry: This class contains the key and value for the entry, and provides all non-distributed
entry operations. This object's operations are not distributed and do not affect statistics.
Region Attributes
RegionAttributes: This class holds all attribute values for a region and provides functions for
retrieving all attribute settings. It can be modified by the AttributesMutator class after
region creation.
AttributesMutator: This class allows modification of an existing region's attributes for
application plug-ins and expiration actions. Each region has an AttributesMutator instance.
Application Plug-Ins
CacheLoader: Application plug-in class for loading data into a region on a cache miss.
CacheWriter: Application plug-in class for synchronously handling region and entry events
before the events occur. Entry events are create, update, invalidate, and destroy. Region
events are invalidate and destroy. This class has the ability to abort events.
CacheListener: Application plug-in class for handling region and entry events after they
occur. Entry events are create, update, invalidate, and destroy. Region events are
invalidate and destroy. A minimal listener sketch follows this list.
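A plug-in is typically written by deriving from the corresponding class and overriding the callbacks of interest. The following sketch is illustrative only; the class name LoggingListener is hypothetical, and the callback signatures are assumed from the CacheListener class in the online API documentation:
class LoggingListener : public CacheListener
{
public:
virtual void afterCreate( const EntryEvent& event )
{
// React to a new entry after it has been created.
}
virtual void afterUpdate( const EntryEvent& event )
{
// React to an updated entry value.
}
};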
Event Handling
RegionEvent: This class provides information about the event, such as what region the event
originated in, whether the event originated in a cache remote to the event handler, and whether the
event resulted from a distributed operation.
EntryEvent: This class provides all of the information available for the RegionEvent, and
provides entry-specific information such as the old and new entry values and whether the event
resulted from a load operation.
Statistics API
The StatisticsType API is a blueprint for Statistics instances of the same type; it is a
collection of StatisticDescriptors. Internally, each StatisticDescriptor describes the data
of one individual statistic. StatisticsFactory provides functionality for creating
StatisticDescriptor, StatisticsType, and Statistics instances.
CacheStatistics: This class defines common statistics functions. Region and
RegionEntry both have functions that return a CacheStatistics object for accessing and
resetting their statistics counts.
StatisticDescriptor: An instance of this class describes a statistic whose value is updated
by an application and may be archived by the native client. Each statistic has a type of either int,
long, or double, and is either a gauge or a counter. The value of a gauge can increase and decrease,
and the value of a counter strictly increases. A StatisticDescriptor is created by calling one
of these StatisticsFactory functions: createDoubleCounter, createDoubleGauge,
createIntCounter, createIntGauge, createLongCounter, createLongGauge.
StatisticsType: An instance of this class describes a logical collection of
StatisticDescriptors. These descriptions are used to create an instance of Statistics. A
StatisticsType is created by calling StatisticsFactory::createType.
Statistics: An instance of this class represents concrete statistics of the associated
StatisticsType. This class stores data for all of the individual statistics in the type. You can create
an instance by calling StatisticsFactory::createStatistics. This class has functions to get,
set, and increment statistic values.
StatisticsFactory: This class provides functions for creating instances of
StatisticDescriptor, StatisticsType, and Statistics objects. This is a singleton class,
and its instance can be acquired using StatisticsFactory::getExistingInstance.
If you need to create new statistics, see Creating New Statistics on page 104 for details.
RegionFactoryPtr regionFactory =
cachePtr->createRegionFactory(CACHING_PROXY);
regionPtr = regionFactory->setLruEntriesLimit( 20000 )
->setInitialCapacity( 20000 )
->create("exampleRegion");
Example 4.3 Creating a Region With Disk Overflow Based on Entry Capacity
Example 4.4 Using the API to Put Entries Into the Cache
Example 4.5 Using the get API to Retrieve Values From the Cache
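The bodies of Examples 4.4 and 4.5 are not reproduced in this extract. A minimal put and get sequence, assuming the regionPtr created above and helper functions whose exact names may vary slightly from the online API documentation, might look like the following sketch:
// Put a string value into the region under a string key.
CacheableKeyPtr keyPtr = CacheableKey::create( "Key1" );
CacheablePtr valuePtr = CacheableString::create( "Value1" );
regionPtr->put( keyPtr, valuePtr );
// Retrieve the value and narrow it back to a CacheableString.
CacheableStringPtr resultPtr = dynCast<CacheableStringPtr>( regionPtr->get( keyPtr ) );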
See the Region online API documentation for more information about using getAll.
4.6 Serialization
The native client provides a Serializable interface that you can use for fast and compact data
serialization. This section discusses serialization, and presents a simple implementation example.
Built-In Types
The following table describes the set of built-in serializable types that are automatically registered at
initialization.
Table 4.2 Built-In Cacheable Types Automatically Registered at Initialization
Complex Types
If your application uses more complex key types that you want to make more accessible or easier to
handle, you can derive a new class from CacheableKey. For details, see Custom Key Types on page 98.
Another option is for the application to do its own object serialization using the GemFire
CacheableBytes type or a custom type. See Handling Data as a Blob on page 101.
The GemFire Serializable interface does not support object graphs with multiple
references to the same object. If your application uses such circular graphs, you must
address this design concern explicitly.
When your application subsequently receives a byte array, GemFire takes the following steps:
1. Decodes the typeId, extracts the classId from the typeId, then creates an object of the
designated type using the registered factory functions.
2. Invokes the fromData function with input from the data stream.
3. Decodes the data, then populates the data fields.
class BankAccount
{
private:
int m_ownerId;
int m_accountId;
public:
int getOwner( )
{
return m_ownerId;
}
int getAccount( )
{
return m_accountId;
}
};
To make BankAccount serializable, you would need to derive the class from Serializable and
implement the following:
toData: a function to serialize the data.
fromData: a function to deserialize the data.
classId: a function to provide a unique integer for the class.
TypeFactoryMethod: a pointer to a function that returns a Serializable* to an uninitialized
instance of the type.
The next example shows a code sample that demonstrates how to implement a serializable class.
class BankAccount
: public Serializable
{
private:
int m_ownerId;
int m_accountId;
public:
BankAccount( int owner, int account )
: m_ownerId( owner ),
m_accountId( account )
{
}
int getOwner( )
{
return m_ownerId;
}
int getAccount( )
{
return m_accountId;
}
// Add the following for the Serializable interface
// Our TypeFactoryMethod
static Serializable* createInstance( )
{
return new BankAccount( 0, 0 );
}
int32_t classId( )
{
return 10; // must be unique per class.
}
virtual uint32_t objectSize() const
{
// Return an approximate size, in bytes, for this object.
return 10;
}
void toData( DataOutput& output )
{
output.writeInt( m_ownerId );
output.writeInt( m_accountId );
}
Serializable* fromData( DataInput& input )
{
input.readInt( &m_ownerId );
input.readInt( &m_accountId );
return this;
}
};
Typically, you would register the type before calling DistributedSystem::connect.
Each type ID must be unique to a single class.
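For example, to register the BankAccount type defined above using the factory function from the previous example (registerType is shown with user-defined types later in this chapter):
// Register the factory function for BankAccount before connecting.
Serializable::registerType( BankAccount::createInstance );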
class BankAccount
: public CacheableKey
{
private:
int m_ownerId;
int m_accountId;
public:
BankAccount( int owner, int account )
: m_ownerId( owner ),
m_accountId( account )
{
}
int getOwner( )
{
return m_ownerId;
}
int getAccount( )
{
return m_accountId;
}
// Our TypeFactoryMethod
static Serializable* createInstance( )
{
return new BankAccount( 0, 0 );
}
int32_t typeId( )
{
return 1000; // must be unique per class.
}
Instantiator.register
With the Instantiator.register method, a bridge client sends a RegistrationMessage to every
Java VM in its distributed system. The message announces the mapping between a user-defined
classId and class name. The other Java VMs can deserialize the byte array with the correct class.
If two bridge clients are in different distributed systems, then a RegistrationMessage cannot be sent
from one system to the other. For example, a put made by a client in one distributed system will hang when a client in
another distributed system performs a get in pure Java mode. Similarly, a put made by a C++ client will
cause a Java client to hang.
DataSerializable
Using the DataSerializable method, the user-defined object is serialized into the following byte
array:
45 <2-byte-length> <class-name>
Another Java client in a different distributed system can deserialize the byte array, but a C++ client
cannot convert the Java class name to a C++ class name.
Implementation
The DataSerializable method does not support using a nested object, while
Instantiator.register does support the use of nested objects. A workaround is to let each Java
client manually instantiate an object for each possible user object class a C++ client provides, using the
following code:
User u = new User("", 0);
See Example 14.7 on page 283 for a code sample that shows how to set up user object classes in a Java
client.
#include <gfcpp/GemfireCppCache.hpp>
#include "BankAccount.hpp"
#include "AccountHistory.hpp"
using namespace gemfire;
/*
This example connects, registers types, creates the cache, creates a
region, and then puts and gets user defined type BankAccount.
*/
int main( int argc, char** argv )
{
// Register the user-defined serializable type.
Serializable::registerType( AccountHistory::createDeserializable );
Serializable::registerType( BankAccount::createDeserializable );
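// The cache creation is elided in this extract; it is assumed to resemble:
PropertiesPtr cacheProps = Properties::create();
CacheFactoryPtr cacheFactoryPtr = CacheFactory::createCacheFactory( cacheProps );
CachePtr cachePtr = cacheFactoryPtr->create();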
// Create a region.
RegionFactoryPtr regionFactory =
cachePtr->createRegionFactory(CACHING_PROXY);
RegionPtr regionPtr = regionFactory->create("BankAccounts");
//Get StatisticsFactory
StatisticsFactory* statFactory = StatisticsFactory::getExistingInstance();
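// The descriptor array and the first (IntCounter) descriptor are elided in
// this extract; they are assumed to resemble:
StatisticDescriptor** statDescriptorArr = new StatisticDescriptor*[6];
statDescriptorArr[0] = statFactory->createIntCounter("IntCounter",
"Test Statistic Descriptor Int Counter.","TestUnit");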
statDescriptorArr[1] = statFactory->createIntGauge("IntGauge",
"Test Statistic Descriptor Int Gauge.","TestUnit");
statDescriptorArr[2] = statFactory->createLongCounter("LongCounter",
"Test Statistic Descriptor Long Counter.","TestUnit");
statDescriptorArr[3] = statFactory->createLongGauge("LongGauge",
"Test Statistic Descriptor Long Gauge.","TestUnit");
statDescriptorArr[4] = statFactory->createDoubleCounter("DoubleCounter",
"Test Statistic Descriptor Double Counter.","TestUnit");
statDescriptorArr[5] = statFactory->createDoubleGauge("DoubleGauge",
"Test Statistic Descriptor Double Gauge.","TestUnit");
//Create a StatisticsType
StatisticsType* statsType = statFactory->createType("TestStatsType",
"Statistics for Unit Test.",statDescriptorArr, 6);
//Statistics are created and registered. Set and increment individual values
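// Creation of the Statistics instance is elided in this extract; it is
// assumed to resemble the following, using the factory and type created above:
Statistics* testStat =
statFactory->createStatistics(statsType, "TestStatistics");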
int statIdIntCounter = statsType->nameToId("IntCounter");
testStat->setInt(statIdIntCounter, 10 );
testStat->incInt(statIdIntCounter, 1 );
int currentValue = testStat->getInt(statIdIntCounter);
The Microsoft .NET Framework interface for the GemFire Enterprise native client provides complete
access to the native client C++ functionality from any .NET Framework language (C#, C++/CLI,
VB.NET, J#). This enables C# clients, as well as those using other .NET languages, to use the capabilities
provided by the C++ API.
The GemFire Enterprise native client calls a C++/CLI (Common Language Infrastructure) managed set
of assemblies. C++/CLI includes the libraries and objects necessary for common language types, and it
is the framework for .NET applications. C# is used as the reference language in this chapter, although
any other .NET language works the same.
This chapter gives an overview of the functions of the caching API for .NET, and provides programming
examples of their use. It also describes how to use the .NET Framework interface with the native client.
The API documentation in the native client product docs directory provides extensive implementation
details for the C++ and .NET structures and functions. Example API programs are included in the
productDir/examples directory.
In this chapter:
Overview of the C# .NET API (page 106)
The GemFire C# .NET API (page 107)
C++ Class to .NET Class Mappings (page 111)
Object Lifetimes (page 113)
AppDomains (page 114)
Creating a Cache (page 115)
Creating a Region with Caching and LRU (page 116)
Adding an Entry to the Cache (page 118)
Accessing an Entry (page 119)
Serialization (page 120)
Using a Custom Class (page 128)
Application Callbacks (page 131)
Examples (page 135)
Troubleshooting .NET Applications (page 136)
(Figure: a C# .NET application layered on top of the native client C++ API.)
This section gives a general overview of the classes in the GemStone::GemFire::Cache namespace.
For the complete and current information on the classes listed, see the online .NET API documentation.
Cache
This is a list of the cache classes and interfaces, along with a brief description. See the online .NET API
documentation for more information.
CacheFactory: This class creates a Cache instance based on an instance of
DistributedSystem. If a cache.xml file is specified by DistributedSystem, the cache is
initialized based on the declarations loaded from that file. If a cache.xml file is used to create a
cache and some of the regions already exist, then a warning states that the regions exist and the cache
is created.
Cache: This class is the entry point to the GemFire caching API. This class allows you to create
regions. The cache is created by calling the create function of the CacheFactory class. When
creating a cache, you specify a DistributedSystem that tells the new cache where to find other
caches on the network and how to communicate with them.
IGFSerializable: This interface is the superclass of all user objects in the cache that can be
serialized and stored in the cache. GemFire provides built-in serializable types. See Table 4.2 on
page 93 for a list of built-in types. Any custom types defined for objects that need to be transmitted
as values should implement this interface. See Example 5.9 on page 125 for a code example.
Region
This is a list and brief description of the region classes and interfaces. See the online .NET API
documentation for more information.
Region: This class provides functions for managing regions and cached data. The functions for
this class allow you to perform the following actions:
Retrieve information about the region, such as its parent region and region attribute objects.
Invalidate or destroy the region.
Create, update, invalidate, and destroy region entries.
Determine, individually or as entire sets, the region's entry keys, entry values, and
RegionEntry objects.
RegionEntry: This class contains the key and value for the entry, and provides all non-distributed
entry operations. The operations of this object are not distributed and do not affect
statistics.
Region Attributes
RegionAttributes: This class holds all attribute values for a region and provides functions for
retrieving all attribute settings. This class can only be modified by the AttributesFactory class
before region creation, and by the AttributesMutator class after region creation.
AttributesMutator: This class allows modification of an existing region's attributes for
application plug-ins and expiration actions. Each region has an AttributesMutator instance.
Properties: Provides a collection of properties, each of which is a key/value pair. Each key is
a string, and the value can be a string or an integer.
Events
RegionEvent: This class provides information about the event, such as what region the event
originated in, whether the event originated in a cache remote to the event handler, and whether the
event resulted from a distributed operation.
EntryEvent: This class provides all of the information available for the RegionEvent. It also
provides entry-specific information, such as the old and new entry values and whether the event
resulted from a load operation.
5.5 AppDomains
AppDomains are the units of isolation, security boundaries, and loading and unloading for applications
in the .NET runtime. Multiple application domains can run in a single process. Each can have one or
many threads, and a thread can switch application domains at runtime. The .NET managed assemblies
assume that any interface methods invoked by the native C++ layer are in the same
AppDomain as the .NET DLL; otherwise an exception is thrown, because the call cannot cross
AppDomain boundaries.
Problem Scenarios
These scenarios describe processes and implementations that should be avoided when using
AppDomains.
For systems with security enabled, the credentials for a joining member are authenticated when the cache
is created and the system connection is made. See Security on page 163 for more information about
secure connections to a distributed system.
In the next example, the application creates the cache by calling the CacheFactory.Create function
with the new DistributedSystem object.
A cache can also be created by referencing a cache.xml file, as shown in the following example.
See Chapter 3, Cache Initialization File for more information about the cache.xml file.
RegionFactory regionFact =
cache.CreateRegionFactory(RegionShortcut.CACHING_PROXY);
region = regionFact.SetLruEntriesLimit(20000)
.SetInitialCapacity(20000)
.Create("exampleRegion");
Cache Overflow
By default, the cache is kept in memory. When LruEntriesLimit is exceeded, the least recently used
entries are destroyed. With cache overflow, the over-limit LRU entry values are instead written to disk,
keeping the keys in memory. When one of those entries is requested, its value is loaded back into memory.
The disk policy specifies whether to destroy or write overflow entries. See DiskPolicy (page 57) for more
information.
A persistence manager API is provided with the GemFire Enterprise native client for customers who need
to provide their own mechanism for writing to disk or to a backend database. The persistence manager
implementation shipped with the product uses the open source Berkeley DB library. A native C++ library
implementation of this API can be registered and used with the .NET API. The Berkeley DB
implementation is provided in bdbimpl.dll in the native client \bin directory. See Appendix A,
Installing the Berkeley DB Persistence Manager, on page 289 for information about installing and
configuring Berkeley DB.
RegionFactory regionFact =
cache.CreateRegionFactory(RegionShortcut.CACHING_PROXY);
region = regionFact.SetLruEntriesLimit(20000)
.SetInitialCapacity(20000)
.SetDiskPolicy(DiskPolicyType.Overflows)
.SetPersistenceManager("BDBImpl", "createBDBInstance", bdbProperties)
.Create("exampleRegion");
Example 5.5 Using the API to Put Values Into the Cache
Example 5.6 Using the Get API to Retrieve Values From the Cache
5.10 Serialization
The GemFire Enterprise native client .NET API provides an IGFSerializable interface for fast and
compact data serialization. This section discusses serialization and presents a simple implementation
example.
Serializable Types
The native client provides a set of built-in serializable types that are automatically registered at
initialization. See Table 4.2 on page 93 for a list of the built-in types and their descriptions.
If your application uses more complex key types that you want to make more accessible or easier to
handle, you can derive a new class from IGFSerializable. Another option is for the application to do
its own object serialization using the GemFire CacheableBytes type or a custom type.
The GemFire IGFSerializable interface does not support object graphs with
multiple references to the same object. If your application uses these types of circular
graphs, you must address this design concern explicitly.
Serialization Classes
The native client .NET API also provides two classes for convenient wrapping of generic objects as
IGFSerializable values: CacheableObject and CacheableObjectXml.
The following example shows a simple BankAccount class before serialization support is added:
public class BankAccount
{
private int m_customerId;
private int m_accountId;
public int Customer
{
get
{
return m_customerId;
}
}
public int Account
{
get
{
return m_accountId;
}
}
public BankAccount(int customer, int account)
{
m_customerId = customer;
m_accountId = account;
}
}
To make BankAccount serializable, you implement the IGFSerializable interface as shown in the
following example:
public IGFSerializable FromData(DataInput input)
{
m_customerId = input.ReadInt32();
m_accountId = input.ReadInt32();
return this;
}
public UInt32 ClassId
{
get
{
return 11;
}
}
public UInt32 ObjectSize
{
get
{
return (UInt32)(sizeof(Int32) + sizeof(Int32));
}
}
}
Using ClassId
A ClassId is an integer that returns the ClassId of the instance being serialized. The ClassId is used
by deserialization to determine what instance type to create and deserialize into.
Using DSFID
A DSFID is an integer that returns the data serialization fixed ID type. DSFID is used to determine what
instance type to create and deserialize into. DSFID should not be overridden by custom implementations,
and it is reserved only for built-in serializable types.
To extend a type that implements IGFSerializable to be a key, add the extra HashCode and
Equals(ICacheableKey) methods in ICacheableKey.
The following example shows how to extend an IGFSerializable class to be a key. Example 5.10 on
page 128 shows how to use the BankAccount custom key type.
// Our TypeFactoryMethod
public static IGFSerializable CreateInstance()
{
return new BankAccountKey(0, 0);
}
public UInt32 ClassId
{
get
{
return 11;
}
}
}
public class AccountHistory : IGFSerializable
{
private List<string> m_history;
public AccountHistory()
{
m_history = new List<string>();
}
m_history.Clear();
for (int i = 0; i < len; i++) {
m_history.Add(input.ReadUTF());
}
return this;
}
#endregion
}
public class TestBankAccount
{
public static void Main()
{
// Register the user-defined serializable type.
Serializable.RegisterType(AccountHistory.CreateInstance);
Serializable.RegisterType(BankAccountKey.CreateInstance);
// Create a cache.
CacheFactory cacheFactory = CacheFactory.CreateCacheFactory(null);
Cache cache = cacheFactory.Create();
// Create a region.
RegionFactory regionFactory =
cache.CreateRegionFactory(RegionShortcut.CACHING_PROXY);
Region region = regionFactory.Create("BankAccounts");
// Place some instances of BankAccount in the cache region.
BankAccountKey baKey = new BankAccountKey(2309, 123091);
AccountHistory ahVal = new AccountHistory();
ahVal.AddLog("Created account");
region.Put(baKey, ahVal);
Console.WriteLine("Put an AccountHistory in cache keyed with BankAccount.");
// Display the BankAccount information.
Console.WriteLine(baKey.ToString());
// Call custom behavior on instance of AccountHistory.
ahVal.ShowAccountHistory();
// Get a value out of the region.
AccountHistory history = region.Get(baKey) as AccountHistory;
if (history != null)
{
Console.WriteLine("Found AccountHistory in the cache.");
history.ShowAccountHistory();
history.AddLog("debit $1,000,000.");
region.Put(baKey, history);
Console.WriteLine("Updated AccountHistory in the cache.");
}
// Look up the history again.
The next example shows how to implement ICacheWriter to track create and update events for a
region.
Example 5.12 Using ICacheWriter to Track Creates and Updates for a Region
5.13 Examples
A Simple C# Example
This example shows how to connect to GemFire, create a cache and region, put and get keys and values,
and disconnect.
class FirstSteps
{
public static void Main()
{
// 1. Create a cache
CacheFactory cacheFactory = CacheFactory.CreateCacheFactory();
Cache cache = cacheFactory.Create();
// 2. Create default region attributes using region factory
RegionFactory regionFactory =
cache.CreateRegionFactory(RegionShortcut.CACHING_PROXY);
// 3. Create region
Region region = regionFactory.Create("exampleputgetregion");
// 4. Put some entries
int iKey = 777;
string sKey = "abc";
region.Put(iKey, 12345678);
region.Put(sKey, "testvalue");
// 5. Get the entries
CacheableInt32 ciValue = region.Get(iKey) as CacheableInt32;
Console.WriteLine("Get - key: {0}, value: {1}", iKey,
ciValue.Value);
CacheableString csValue = region.Get(sKey) as CacheableString;
Console.WriteLine("Get - key: {0}, value: {1}", sKey,
csValue.Value);
// 6. Close cache
cache.Close();
}
}
<configuration>
<runtime>
<assemblyBinding
xmlns="urn:schemas-microsoft-com:asm.v1">
<dependentAssembly>
<assemblyIdentity name="GemStone.GemFire.Cache"
publicKeyToken="126e6338d9f55e0c"
culture="neutral" />
<codeBase version="0.0.0.0"
href="../../bin/GemStone.GemFire.Cache.dll"/>
</dependentAssembly>
</assemblyBinding>
</runtime>
</configuration>
If the .config file contains errors, no warning or error messages are issued. The
application runs as if no .config file is present.
6 Setting Properties
This chapter describes the GemFire Enterprise native client configuration attributes that you can modify
through the gfcpp.properties configuration file, and also shows a programmatic approach.
In this chapter:
Configuration Overview (page 140)
Attributes in gfcpp.properties (page 143)
Defining Properties Programmatically (page 149)
Configuration Options
The typical configuration procedure for a native client includes the high-level steps listed below. The rest
of this chapter provides the details.
1. Place the gfcpp.properties file for the application in the working directory or in
productDir/defaultSystem. Use the configuration file that came with the application software
if there is one, or create your own. See Appendix B, gfcpp.properties Example File, on page 293 for
a sample of the file format and contents.
2. Place the cache.xml file for the application in the desired location and specify its path in the
gfcpp.properties file.
3. Add other attributes to the gfcpp.properties file as needed for the local system architecture. See
Table 6.2 on page 145 for the configurable attributes, and Appendix B, gfcpp.properties Example
File, on page 293 for a sample of the file format.
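For example, a minimal gfcpp.properties file might contain only a few attributes; the values below are illustrative, and the attribute names are described in the table later in this chapter:
cache-xml-file=cache.xml
log-file=nativeclient.log
log-level=config
statistic-sampling-enabled=true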
Table 6.1 Priority of Configuration Settings for Native Clients and Cache Servers
The gfcpp.properties files and programmatic configuration are optional. If they are not present, no
warnings or errors occur. For details on programmatic configuration through the Properties object,
see Defining Properties Programmatically on page 149.
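As a sketch, the same attributes can be supplied programmatically through a Properties object when the cache factory is created; the attribute values below are illustrative:
PropertiesPtr configPtr = Properties::create();
configPtr->insert( "cache-xml-file", "cache.xml" );
configPtr->insert( "log-level", "config" );
// Pass the properties to the cache factory instead of reading gfcpp.properties.
CacheFactoryPtr cacheFactoryPtr = CacheFactory::createCacheFactory( configPtr );
CachePtr cachePtr = cacheFactoryPtr->create();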
Alphabetical Lookup
archive-disk-space-limit (page 147)
archive-file-size-limit (page 147)
cache-xml-file (page 145)
conflate-events (page 145)
connection-pool-size (page 145)
crash-dump-enabled (page 145)
durable-client-id (page 147)
durable-timeout (page 147)
enable-time-statistics (page 147)
grid-client (page 145)
heap-lru-delta (page 145)
heap-lru-limit (page 145)
license-file (page 146)
license-type (page 146)
log-disk-space-limit (page 146)
log-file (page 146)
log-file-size-limit (page 146)
log-level (page 146)
max-socket-buffer-size (page 145)
notify-ack-interval (page 145)
notify-dupcheck-life (page 145)
ping-interval (page 146)
redundancy-monitor-interval (page 146)
security-client-auth-factory (page 148)
security-client-auth-library (page 148)
security-client-dhalgo (page 148)
security-client-kspath (page 148)
stacktrace-enabled (page 146)
statistic-sampling-enabled (page 147)
statistic-archive-file (page 147)
statistic-sample-rate (page 147)
For the system properties that relate to high availability, see Sending Periodic Acknowledgement on
page 153. For a list of security-related system properties and their descriptions, see Table 8.1 on
page 174.
Attribute Definitions
This table lists GemFire configuration attributes that can be stored in the gfcpp.properties file to be
read by a native client. The attributes are grouped according to the categories listed at the beginning of
this section.
General Properties

cache-xml-file
The name and path of the file whose contents are used by default to initialize a cache if one is created. If not specified, the native client starts with an empty cache, which is populated at runtime. See Chapter 3, Cache Initialization File, on page 75 for more information on the cache initialization file.
Default: no default

heap-lru-delta
When heap LRU is triggered, this is the amount that gets added to the percentage that is above the heap-lru-limit amount. LRU continues until the memory usage is below heap-lru-limit minus this percentage. This property is only used if heap-lru-limit is greater than 0.
Default: 10

heap-lru-limit
Maximum amount of memory, in megabytes, used by the cache for all regions. If this limit is exceeded by heap-lru-delta percent, LRU reduces the memory footprint as necessary. If not specified, or set to 0, memory usage is governed by each region's LRU entries limit, if any.
Default: 0

conflate-events
The client-side conflation setting, which is sent to the server.
Default: server

connection-pool-size
The number of connections per endpoint.
Default: 5

crash-dump-enabled
Whether crash dump generation for unhandled fatal errors is enabled. True is enabled, false otherwise.
Default: true

grid-client
If true, the client does not start various internal threads, so that startup and shutdown time is reduced.
Default: false

max-socket-buffer-size
The maximum size of the socket buffers, in bytes, that the native client will try to set for client-server connections.
Default: 65 * 1024

notify-ack-interval
The interval, in seconds, at which the client sends acknowledgements for subscription notifications.
Default: 1

notify-dupcheck-life
The amount of time, in seconds, that the client tracks subscription notifications before dropping the duplicates.
Default: 300

ping-interval
How often, in seconds, to communicate with the server to show the client is alive. Pings are only sent when the ping-interval elapses between normal client messages. This must be set lower than the server's maximum-time-between-pings.
Default: 10

redundancy-monitor-interval
The interval, in seconds, at which the subscription HA maintenance thread checks for the configured redundancy of subscription servers.
Default: 10

stacktrace-enabled
If true, the exception classes capture a stack trace that can be printed with their printStackTrace function. If false, the function prints a message that the trace is unavailable.
Default: false

Licensing Properties

license-file
The name and path of the file containing the license for the distributed system member, which is read in during connection to the distributed system. See Licensing on page 30.
Default: productDir/gfCppLicense.zip

license-type
The type of license used by the distributed system member: evaluation, development, or production. All applications in the distributed system must have the same type of license. See Licensing on page 30.
Default: evaluation

Logging Properties

log-disk-space-limit
The maximum amount of disk space, in megabytes, allowed for all log files, current and rolled. If set to 0, the space is unlimited.
Default: 0

log-file
The name and full path of the file where a running client writes log messages. If not specified, logging goes to stdout.
Default: no default file

log-file-size-limit
The maximum size, in megabytes, of a single log file. Once this limit is exceeded, a new log file is created and the current log file becomes inactive. If set to 0, the file size is unlimited.
Default: 0

log-level
Controls the types of messages that are written to the application's log. Levels for general operational use are: config, info, warning, and error, where config logs the most messages and error the least. Setting log-level to one of the ordered levels causes all messages of that level and greater severity to be printed. Lowering the log-level reduces system resource consumption while still providing some logging information for failure analysis.
Default: config

Statistics Archiving Properties

statistic-sampling-enabled
Controls whether the process creates a statistic archive file.
Default: true

statistic-archive-file
The name and full path of the file where a running system member archives statistics. If archive-disk-space-limit is not set, the native client appends the process ID to the configured file name, like statArchive-PID.gfs. If the space limit is set, the process ID is not appended, but each rolled file is renamed to statArchive-ID.gfs, where ID is the rolled number of the file.
Default: ./statArchive.gfs

archive-disk-space-limit
The maximum amount of disk space, in megabytes, allowed for all archive files, current and rolled. If set to 0, the space is unlimited.
Default: 0

archive-file-size-limit
The maximum size, in bytes, of a single statistic archive file. Once this limit is exceeded, a new statistic archive file is created and the current archive file becomes inactive. If set to 0, the file size is unlimited.
Default: 0

statistic-sample-rate
The rate, in seconds, at which statistics are sampled. Operating system statistics are updated only when a sample is taken. If statistic archival is enabled, these samples are written to the archive. Lowering the sample rate for statistics reduces system resource use while still providing some statistics for system tuning and failure analysis. You can view archived statistics with the optional VSD utility.
Default: 1

enable-time-statistics
Enables time-based statistics for the distributed system and caching. For performance reasons, time-based statistics are disabled by default. For more information, see Appendix D, System Statistics, on page 301.
Default: false

Durable Client Properties

auto-ready-for-events
Whether a non-durable client starts to receive and process subscription events automatically on startup. If set to false, event startup is not automatic and you need to call the Cache::readyForEvents method after your regions and their listeners are initialized.
Default: true

durable-client-id
The identifier that makes the client durable, if specified.
Default: empty

durable-timeout
The time, in seconds, that a durable client's subscription is maintained when it is not connected to the server, before being dropped.
Default: 300

Security Properties

security-client-dhalgo
The Diffie-Hellman secret key algorithm used to encrypt credentials.
Default: null

security-client-kspath
The keystore (.pem file) path.
Default: null

security-client-auth-factory
The factory method for the security AuthInitialize module.
Default: empty

security-client-auth-library
The path to the client security library for the AuthInitialize module.
Default: empty
7 Preserving Data
The GemFire Enterprise native client can be configured for data loss prevention in cases where the
cache server that the client is communicating with loses its connection, or where the native client is
temporarily disconnected or abnormally terminated from the distributed system.
Data preservation is accomplished by ensuring that reliable event messaging is maintained between the
client and the cache server. Any lost messages intended to be sent to a client as create, update or
invalidate events for cached entries can quickly cause the data in the client cache to be
unsynchronized and out of date with the rest of the distributed system.
This chapter describes the configuration and operation of native clients to prevent data loss in the event
of client or server failure.
In this chapter:
High Availability for Client-to-Server Communication (page 152)
Durable Client Messaging (page 154)
<cache>
<pool name="examplePool"
subscription-enabled="true" subscription-redundancy="1">
<server host="java_servername1" port="java_port1" />
<server host="java_servername2" port="java_port2" />
</pool>
<region name = "ThinClientRegion1" >
<region-attributes refid="CACHING_PROXY" pool-name="examplePool"/>
</region>
</cache>
You can set the redundancy level programmatically. This example creates a client cache with two
redundant cache servers configured in addition to the primary server.
The server redundancy level can be configured using the pool API. For more information about the pool
API, see Using Connection Pools on page 235.
PropertiesPtr pp = Properties::create( );
systemPtr = CacheFactory::createCacheFactory(pp);
// Create a cache.
cachePtr = systemPtr->setSubscriptionEnabled(true)
->addServer("localhost", 24680)
->addServer("localhost", 24681)
->addServer("localhost", 24682)
->setSubscriptionRedundancy(2)
->create();
When failover occurs to a secondary server, a new secondary is added to the redundancy set. If no new
secondary server is found, then the redundancy level is not satisfied but the failover procedure completes
successfully. Any new live server is added as a secondary and interest is registered on it.
The Pool API also provides attributes to configure periodic ack and duplicate message tracking timeout.
See subscription-message-tracking-timeout and subscription-ack-interval in the list
of pool attributes at Configuring Pools for Servers or Locators on page 238.
Client-Side Configuration
This section describes the settings for configuring and implementing durable messaging for C++ and
.NET clients. All durable messaging configurations are performed on the client.
To assist with tuning, GemFire Enterprise provides statistics that track message queues for durable
clients through the disconnect and reconnect cycles. The statistics are documented in the GemFire
Enterprise System Administrator's Guide.
When the queue is full it blocks further operations that add messages until the queue size drops to an
acceptable level. The action to take is specified on the server. For details on configuring the queue, see
the GemFire Enterprise Developer's Guide.
The following example shows gfcpp.properties settings to make the client durable and set the
durable timeout to 200 seconds.
durable-client-id=31
durable-timeout=200
This programmatic example creates a durable client using the DistributedSystem::connect call.
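The example itself is not reproduced in this extract. Using the connection-pool style shown earlier in this chapter, an equivalent sketch sets the durable properties before creating the cache; the identifier and timeout values are illustrative:
PropertiesPtr durableProps = Properties::create();
durableProps->insert( "durable-client-id", "31" );
durableProps->insert( "durable-timeout", "200" );
// Create the cache with durable properties and subscriptions enabled.
CacheFactoryPtr cacheFactoryPtr = CacheFactory::createCacheFactory( durableProps );
CachePtr cachePtr = cacheFactoryPtr->setSubscriptionEnabled( true )->create();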
You use the typical methods for interest registration and configure notification by subscription on the
server as usual. For details, see Registering Interest for Entries on page 46.
Changing interest registration after the durable client connects the first time can cause
data inconsistency and is not recommended.
At restart, if the client doesn't register durable interest for exactly the same keys as before, then the entries
in the interest list are not copied from the server during the registration. Instead, the client cache starts
out empty and entries are added during updates. If no updates come in for an entry, it never shows up in
the client cache.
To keep the client from losing events, do not call Cache::readyForEvents until all regions and listeners are created.
For more information, see Reconnection on page 158.
Initial Operation
The initial startup of a durable client is similar to the startup of any other client, except that it specifically
calls the Cache.readyForEvents method when all regions and listeners on the client are ready to
process messages from the server. See Sending the Cache Ready Message to the Server on page 156.
Disconnection
While the client and servers are disconnected, their operation varies depending on the circumstances.
Normal disconnect
When a durable client disconnects normally, the Cache.close request states whether to maintain the
client's message queue and durable subscriptions. The servers stop sending messages to the client and
release its connection. See Disconnecting From the Server on page 156 for more information.
If requested, the servers maintain the queues and durable interest list until the client reconnects or times
out. The non-durable interest list is discarded. The servers continue to queue up incoming messages for
entries on the durable interest list. All messages that were in the queue when the client disconnected
remain in the queue, including messages for entries on the non-durable list.
If the client requests to not have its subscriptions maintained, or if there are no durable subscriptions, the
servers unregister the client and perform the same cleanup as for a non-durable client.
Abnormal disconnect
If the client crashes or loses its connections to all servers, the servers automatically maintain its message
queue and durable subscriptions until the client reconnects or times out.
Reconnection
During initialization, operations on the client cache can come from multiple sources:
Cache operations by the application
Results returned by the cache server in response to the client's interest registrations
Callbacks triggered by replaying old events from the queue
These procedures can act on the cache concurrently, and the cache is never blocked from doing
operations.
GemFire Enterprise handles the conflicts between the application and interest registration, but you need
to prevent the callback problem. Writing callback methods that do cache operations is never
recommended, but it is a particularly bad idea for durable clients, as explained in Implementing Cache
Listeners for Durable Clients on page 161.
When the durable client reconnects, it performs these steps:
1. The client creates its cache and regions. This ensures that all cache listeners are ready. At this point,
the application hosting the client can begin cache operations.
2. The client calls Cache.readyForEvents, meaning that all regions and listeners on the client are
now ready to process messages from the server. The cache ready message triggers the queued
message replay process on the primary server.
3. The client issues its register interest requests. This allows the client cache to be populated with the
initial interest registration results. The primary server responds with the current state of those entries
if they still exist in the server's cache.
Interest registration must happen immediately after the cache ready message.
For an example that demonstrates Cache.readyForEvents, see Sending the Cache Ready Message to
the Server on page 156.
This figure shows the concurrent procedures that occur during the initialization process. The application
begins operations immediately on the client (step 1), while the client's cache ready message (also step 1)
triggers a series of queue operations on the cache servers (starting with step 2 on the primary server). At
the same time, the client registers interest (step 2 on the client) and receives a response from the server.
Message B2 applies to an entry in Region A, so the cache listener handles B2's event. Because B2 comes
before the marker, the client does not apply the update to the cache.
(Figure: the native client application, Region A with its cache writer and cache listener, and the distributed system's message tracking during initialization; queued message B2 is delivered to the region's cache listener.)
Only one region is shown for simplicity, but the messages in the queue could apply to multiple regions.
Also, the figure omits the concurrent cache updates on the servers, which would normally be adding more
messages to the clients message queue.
Implementation Notes
Redundancy management is handled by the client, so when the client is disconnected from the server the
redundancy of client events is not maintained. Even if the servers fail one at a time, so that running clients
have time to fail over and pick new secondary servers, an offline durable client cannot fail over. As a
result, the client loses its queued messages.
8 Security
This chapter describes the security framework for the GemFire Enterprise native client, and explains its
authentication and authorization processes with the GemFire distributed system.
The security framework authenticates clients attempting to connect to a GemFire cache server, and
authorizes client cache operations. It can also be configured for client authentication of servers. It allows
you to plug in your own implementations for authentication and authorization.
In this chapter:
Authentication (page 164)
Client Authorization (page 172)
Encrypting Credentials Using Diffie-Hellman (page 173)
System Properties (page 174)
SSL Client/Server Communication (page 175)
Using OpenSSL (page 177)
8.1 Authentication
A client is authenticated when it connects to a GemFire cache server that is configured with the client
Authenticator callback. The connection request must contain valid credentials for the client. Once
authenticated, the server assigns the client a unique ID and principal, used to authorize operations. The
client must trust all cache servers in the server system as it may connect to any one of them. For
information on configuring client/server, see Standard Client/Server Deployment on page 201 of the
GemFire Enterprise Developer's Guide.
GemFire has two types of client authentication:
Process level: Each pool creates a configured minimum number of connections across the server
group. The pool accesses the least loaded server for each cache operation.
Process level connections represent the overall client process and are the standard way a client
accesses the server cache.
Multi-user: Each user/pool pair creates a connection to one server and then sticks with it for
operations. If the server is unable to respond to a request, the pool selects a new one for the user.
Multi-user connections are generally used by application servers or web servers that act as clients to
GemFire servers. Multi-user allows a single app or web server process to service a large number of
users with varied access permissions.
By default, server pools use process level authentication. You can enable multi-user authentication by
setting a pool's multi-user-secure-mode-enabled attribute to true.
(Figure: a client's connection pools to the servers; the default pool uses process-wide connections under principal A, while multi-user mode creates per-user connections under principals B1 and B2.)
Credentials can be sent in encrypted form using the Diffie-Hellman key exchange algorithm. See
Encrypting Credentials Using Diffie-Hellman (page 173) for more information.
This example shows a C++ client connecting with credentials.
PropertiesPtr secProp = Properties::create();
secProp->insert("security-username", "gemfire6");
secProp->insert("security-password", "gemfire6Pass");
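// The connection itself is elided in this extract; with the pool API it is
// assumed to resemble:
CacheFactoryPtr cacheFactoryPtr = CacheFactory::createCacheFactory( secProp );
CachePtr cachePtr = cacheFactoryPtr->create();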
security-client-auth-factory
This is the system property for the factory function of the class implementing the AuthInitialize
interface (IAuthInitialize in .NET). The .NET clients can load both C++ and .NET
implementations. For .NET implementations, this property is the fully qualified name of the static factory
function (including the namespace and class).
security-client-auth-library
This is the system property for the library where the factory methods reside. The library is loaded
explicitly and the factory functions are invoked dynamically, returning an object of the class
implementing the AuthInitialize interface.
Other implementations of the AuthInitialize interface may be required to build credentials using
properties that are also passed as system properties. These properties also start with the security-
prefix. For example, the PKCS implementation requires an alias name and the corresponding keystore
path, which are specified as security-alias and security-keystorepath, respectively. Similarly,
UserPasswordAuthInit requires a username specified in security-username, and the
corresponding password is specified in the security-password system property.
The getCredentials function for the AuthInitialize interface is called to obtain the credentials.
All system properties starting with security- are passed to this callback as the first argument to the
getCredentials function, as shown in the following code snippet:
PropertiesPtr getCredentials(PropertiesPtr& securityprops, const char
*server);
The following example shows how to implement the factory method in both C++ and .NET:
Example 8.3 Implementing the Factory Method for Authentication (C++ and .NET)
C++ Implementation
LIBEXP AuthInitialize* createPKCSAuthInitInstance()
{
return new PKCSAuthInit( );
}
.NET Implementation
public static IAuthInitialize Create()
{
return new UserPasswordAuthInit();
}
Implementations of the factory method are user-provided. Credentials in the form of properties returned
by this function are sent by the client to the server for authentication during the client's handshake
process with the server.
The GemFire Enterprise native client installation provides sample security implementations in its
templates/security folder.
For details on the Authenticator interface and server side configuration, refer to the Security chapter
in the GemFire Enterprise System Administrator's Guide.
The implementation of authenticate on the server must be compatible with
getCredentials implemented on the client side. The set of properties returned by
the getCredentials method on the client side are sent over to the server side
authenticate method.
To use RegionService:
Regions must be configured as EMPTY. Depending on your data access requirements, this
configuration might affect performance, because the client goes to the server for every get.
If you are running durable CQs through the region services, stop and start the offline event storage
for the client as a whole. The server manages one queue for the entire client process, so you need to
request the stop and start of durable client queue (CQ) event messaging for the cache as a whole,
through the ClientCache instance. If you closed the RegionService instances, event processing
would stop, but the events from the server would continue, and would be lost.
Stop with:
cachePtr->close(true);
Note that the native client samples are provided in source form only in the "templates" directory within
the product directory.
When a client initiates a connection to a cache server, the client submits its credentials to the server and
the server submits those credentials to the LDAP server. To be authenticated, the credentials for the client
need to match one of the valid entries in the LDAP server. The credentials can consist of the entry name
and the corresponding password. If the submitted credentials result in a connection to the LDAP server
because the credentials match the appropriate LDAP entries, then the client is authenticated and granted
a connection to the server. If the server fails to connect to the LDAP server with the supplied credentials
then an AuthenticationFailedException is sent to the client and its connection with the cache server is
closed.
Configuration Settings
In the gfcpp.properties file for the client, you need to specify the UserPasswordAuthInit
callback, the username, and the password, like this:
security-client-auth-library=securityImpl
security-client-auth-factory=createUserPasswordAuthInitInstance
security-username=<username>
security-password=<password>
For server side settings and LDAP server configuration, see the Security chapter in the GemFire
Enterprise System Administrator's Guide.
Configuration Settings
In the gfcpp.properties file for the client, you need to specify the PKCSAuthInit callback, the
keystore path, the security alias, and the keystore password, like this:
security-client-auth-library=securityImpl
security-client-auth-factory=createPKCSAuthInitInstance
security-keystorepath=<PKCS#12 keystore path>
security-alias=<alias>
security-keystorepass=<keystore password>
For server side settings, see the description of PKCS sample in the Security chapter in the GemFire
Enterprise System Administrator's Guide.
Post-Operation Authorization
Authorization in the post-operation phase occurs on the server after the operation is complete and before
the results are sent to the client. The callback can modify the results of certain operations, such as query,
get and keySet, or even completely disallow the operation. For example, a post-operation callback for
a query operation can filter out sensitive data or data that the client should not receive, or even completely
fail the operation.
The security-client-accessor-pp system property in the server's gemfire.properties file
specifies the callback to invoke in the post-operation phase. For example:
security-client-accessor-pp=templates.security.XmlAuthorization.create
If an authorization failure occurs in a pre-operation or post-operation callback on the server, the operation
throws a NotAuthorizedException on the client.
For more information, see the Configuring Authorization section in the Security chapter for the GemFire
Enterprise System Administrator's Guide.
Enabling Diffie-Hellman
To encrypt credentials, set security-client-dhalgo in the gfcpp.properties file to the
name of the symmetric key cipher to use for encrypting credentials. Example:
security-client-dhalgo=Blowfish:128
This causes the server to authenticate the client using the Diffie-Hellman algorithm.
Unix
1. Make sure the OpenSSL header file directory is in your compiler's include path, and that OpenSSL
libraries are in your library link/load path, LD_LIBRARY_PATH.
2. If you installed OpenSSL into a set of directories unknown by the compiler, set the SSL_ROOT
environment variable to point to the top level directory of your OpenSSL distribution, the parent
directory of OpenSSL's include and lib directories.
3. Build ACE as described above. When building ACE, add ssl=1 to your make command line
invocation, or add it to your platform_macros.GNU file.
4. Set the ACE_ROOT environment variable.
5. Build the ACE_SSL library in the $ACE_ROOT/ace/SSL directory.
System properties
1. set ssl-enabled (page 174) to true
2. set ssl-keystore (page 174) and ssl-truststore (page 174) to point to your files
Limitations
The keys and keystores need to be in the JKS (Java KeyStore) format for the GemFire server and in the
clear PEM format for the Native Client.
Currently the native client only supports the NULL cipher with mutual authentication for SSL socket
communications.
Windows
> set GFCPP=<path to installation, typically C:\NativeClient_xxxx>
> set OPENSSL=<path to installed OpenSSL, typically C:\OpenSSL>
> set PATH=<path to Java JDK or JRE>\bin;%GFCPP%\bin;%OPENSSL%\bin;%PATH%
> set CLASSPATH=<path to GemFire installation>\lib\gfSecurityImpl.jar;%CLASSPATH%
Downloading OpenSSL
This section lists the web sites where you can download OpenSSL, and additional packages for each
supported platform. For Linux and Solaris, you can download the latest tarball archive from the OpenSSL
web site at http://www.openssl.org/source/.
Linux
OpenSSL is installed on Linux by default, but the OpenSSL development package may need to be
installed to successfully compile the security templates.
Solaris
For Solaris, you can also download a pre-built version of OpenSSL from the Sunfreeware site. See
Pre-Built OpenSSL on page 178 for more information.
Windows
An OpenSSL installer can be downloaded for Windows from
http://www.openssl.org/related/binaries.html
Linux
To build OpenSSL on Linux, copy the downloaded tarball file into
NativeClient_xxxx/templates/security/openssl/Linux and run buildit.sh.
Solaris
To build OpenSSL on Solaris, copy the downloaded tarball file into
NativeClient_xxxx/templates/security/openssl/SunOS and run buildit.sh.
Pre-Built OpenSSL
If you do not want to build OpenSSL, you can download a pre-built archive of OpenSSL for Solaris at
http://www.sunfreeware.com/. Follow the links for your specific Solaris operating system to download
a compatible version of OpenSSL and install it. To complete the installation, you may need to install
some additional listed support files.
Windows
Use the downloaded OpenSSL installer to install it on Windows. You can usually accept the default
installation path (C:\OpenSSL).
9 Remote Querying
You can use the GemFire Enterprise native client query API to query your cached data stored on a
GemFire Enterprise cache server. The query is evaluated and executed on the cache server, then the
results are returned to the native client. You can also optimize your queries by defining indexes on the
cache server.
The query language for the native client is essentially a subset of OQL (ODMG 3.0 Object Data
Management Group, www.odmg.org). OQL is an SQL-like language with extended functionality for
querying complex objects, object attributes and methods.
In this chapter:
Quick Introduction to Remote Querying (page 180)
Querying Object Data (page 186)
The Query Language (page 189)
Indexes (page 209)
Performance Considerations (page 210)
The Remote Query API (page 211)
Programming Examples (page 213)
The online C++ and .NET API documentation located in the docs directory for the native client provides
extensive details for all of the querying interfaces, classes and methods.
It is assumed that you have general familiarity with SQL querying and indexing, and with the information
on the native client cache provided in the previous chapters of this manual.
If you are using the new pool API, you should obtain the QueryService from the pool. For information
about the pool API, see Using Connection Pools on page 235.
Executing a Query
To execute a query from the native client:
1. Get a pointer to the QueryService. Example:
QueryServicePtr qsPtr = cachePtr->getQueryService();
2. Create a QueryPtr to a query that is compatible with the OQL specification. Example:
QueryPtr qry = qsPtr->newQuery(
"select distinct * from /Portfolios where status = 'active'");
3. Use the execute method for the Query interface to submit the query string to the cache server.
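// the optional argument is the query response timeout, in seconds (assumed)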
SelectResultsPtr results = qry->execute(10);
The server remotely evaluates the query string and returns the results to the client.
4. You can iterate through the returned objects as part of the query process. Example:
SelectResultsIterator iter = results->getIterator();
while(iter.hasNext())
{
PortfolioPtr portfolio = dynCast<PortfolioPtr >(iter.next());
}
The examples in this chapter all assume that you have already obtained a pointer to the QueryService.
For a cache server region, setting its data policy to replicate or persistent-replicate ensures
that it reflects the state of the entire distributed region. Without replication, some server cache entries
may not be available.
Depending on your use of the server cache, the non-global distributed scopes distributed-ack and
distributed-no-ack may encounter race conditions during entry distribution that cause the data set
to be out of sync with the distributed region. The global scope guarantees data consistency across the
distributed system, but at the cost of reduced performance.
Table 9.1 on page 181 summarizes the effects of cache server region scope and data policy settings on
the data available to your querying and indexing operations.
Table 9.1 The Effects of Cache Server Region Scope and Data Policy on the Data Available for Querying
For more information, see the Scope and Data Policy discussions in the GemFire Enterprise Developer's
Guide.
Example 9.1 C++ Class Definition and Corresponding Java Class Definition
User-defined data types must implement the Serializable interface on the native client side, while
corresponding Java classes must implement the DataSerializable interface. The C++ objects for the
native client must correspond to the Java objects for the GemFire Enterprise cache server. This means
that an object on one side should deserialize correctly to the other side. The following table lists the
sample data in the /portfolios region:
Example 9.3 Retrieve all portfolios that are active and have type xyz
Example 9.4 Get the ID and status of all portfolios with positions in secId yyy
Example 9.5 Get distinct positions from all active portfolios with at least a $25.00 market value
Example 9.6 Get distinct positions from portfolios with at least a $25.00 market value
Example 9.7 Get distinct entry keys and positions from active portfolios with at least a $25.00 market value
As you can see, the classic database storage model is flatter, with each table standing on its own. You
can access position data independent of the portfolio data, or you can join the two tables if you need to
by using multiple expressions in your FROM clause. With the object model, on the other hand, you use
multiple expressions in the FROM clause to drill down into the portfolio table and access the position data.
This can be helpful, since the position-to-portfolio relationship is implicit and does not require restating
in your queries. It may also hinder you when you want to ask general questions about the positions data
independent of the portfolio information. The two queries below illustrate these differences.
You want to list the market values for all positions of active portfolios. You query the two data models
with these statements:
The difference between the queries reflects the difference in data storage models. The database query
must explicitly join the portfolio and position data in the WHERE clause by matching the position
table's foreign key, id, with the portfolio table's primary key. In the object model, this one-to-many
relationship is implicitly specified by defining the positions map as a field of the portfolio class.
This difference is reflected again when you query the full list of positions' market values.
The position table is independent of the portfolio table, so your database query runs through the
single table. The cached position data, however, is only accessible via the portfolio objects. The cache
query aggregates data from each portfolio object's individual positions map to create the result set.
For the cache query engine, the positions data only becomes visible when the first
expression in the FROM clause, /portfolios, is evaluated.
The next section explores the drill-down nature of cache querying in more detail.
In cache querying, the FROM expression brings new data into the query scope. These concepts are
discussed further in Attribute Visibility on page 191.
In this section:
Accessing Cached Data (page 189)
What is a Query String? (page 193)
The SELECT Statement (page 194)
Additional Query Language Elements (page 202)
Object attributes
You can access the Region object's public fields and methods from a region path, referred to as the
region's attributes. Using this method, /portfolios.name returns portfolios and
/portfolios.name.length returns 10. An attribute is mapped to a Java class member in three
possible ways with the following priority until a match is found. If the attribute is named x, then:
public method getX()
public method x()
public field x
The term attribute in this context is not the same as a region attribute.
Region data
You can also access entry keys and entry data through the region:
/portfolios.keySet returns the Set of entry keys in the region
/portfolios.entrySet returns the Set of Region.Entry objects
/portfolios.values returns the Collection of entry values
/portfolios returns the Collection of entry values
These collections are immutable. Invoking modifier methods on them, such as add and
remove, results in an UnsupportedOperationException.
For the last two items, the FROM clause expressions /portfolios.values and /portfolios return the
same thing.
You cannot change the order of the expressions in this FROM clause. The second
expression depends on the scope created by the first expression.
Attribute Visibility
Within the current query scope, you can access any available object or object attribute. In querying, an
object's attribute is any identifier that can be mapped to a public field or method in the object. In the FROM
specification, any object that is in scope is valid, so at the beginning of a query all cached regions and
their attributes on the cache server are in scope.
This query is valid because name resolves to the Region method getName:
/portfolios.name
This query is valid because toArray resolves to the Collection method with the same name:
SELECT DISTINCT * FROM /portfolios.toArray
You cannot, however, refer to the attribute of a collection object in the region path expression where the
collection itself is specified. The following statement is invalid because neither Collection nor
Region contain an attribute named positions. The entry values collection (specified by
/portfolios) that does contain an attribute named positions is not yet part of the query name space.
/* INCORRECT: positions is not an attribute of Region or of Collection */
SELECT DISTINCT * FROM /portfolios.positions
The following SELECT statement is valid because positions is an element of the entry value collection
that is specified by /portfolios. The entry value collection is in scope as soon as the specification in
the FROM expression is complete (before WHERE or SELECT are evaluated).
SELECT DISTINCT positions FROM /portfolios
You can also refer to positions inside the FROM clause after the /portfolios entry value collection
is created. In this example, positions is an element of the /portfolios entry value collection and
values is an attribute of positions:
IMPORT javaobject.Position;
SELECT DISTINCT posnVal
FROM /portfolios, positions.values posnVal TYPE Position
WHERE posnVal.mktValue >= 25.00
After the comma in the FROM clause, /portfolios is in scope, so its value collection can be iterated. In
this case, this is done with the second FROM clause specification, positions.values. This is discussed
further in The SELECT Statement on page 194.
In this statement, the outer scope is created by the first FROM clause, /portfolios. The outer scope has
all the attributes of a Portfolio in it. The inner scope is created by the nested SELECT statement's FROM
clause, positions.values. The inner scope inherits all the attributes of a Portfolio from the outer scope
and has all the attributes of a Position as well.
IMPORT javaobject.Position;
SELECT DISTINCT * FROM /portfolios
WHERE NOT
(SELECT DISTINCT * FROM positions.values TYPE Position
WHERE secId='YYY').isEmpty
In the SELECT statement, the FROM clause data is input to the WHERE clause, and the WHERE clause filters
the data before passing it to the SELECT projection list. The SELECT statement output is the output from
the projection list operations. This is similar to SQL for database access, where the FROM clause specifies
the data tables, the WHERE clause specifies the rows, and the SELECT clause specifies the columns.
These are the semantics of the query SELECT statement:
The FROM clause specifies the cached data to be accessed. Each expression in this clause must
evaluate to a collection, which is a group of distinct objects of homogeneous type. There must be at
least one collection specified. The collections specified in the FROM clause are iterated over by the
rest of the query, including the rest of the FROM clause.
The WHERE clause specifies which of the elements in the FROM clause collections are to be passed on
to the SELECT clause, while the rest are dropped. The WHERE clause specifications are called WHERE
search criteria. The WHERE clause is optional. Without it, all elements of the collections are added
to the interim result.
The projection list in the SELECT clause specifies the attributes to retrieve from the result elements,
and may also perform transformations. The SELECT clause projection list can be one or more
expressions, or it can be * to put all elements in the return set without modification.
In the following query, the FROM clause accesses the collection of all entry values in the
/portfolios region. This is passed to the WHERE clause, which searches for entry values whose
status field is active. For each of the values found by the WHERE clause, the SELECT projection
retrieves the value of the type attribute into the result set.
SELECT DISTINCT "type" FROM /portfolios WHERE status = 'active'
The rest of this section explores the SELECT statement elements in greater detail. The subsections are:
The FROM Clause (page 195)
The WHERE Clause (page 198)
Joins (page 199)
The SELECT Projection List (page 199)
SELECT Statement Query Results (page 200)
In the following query, positions.values evaluates to a Collection because positions is a Map, and
the method values on Map returns a Collection.
IMPORT javaobject.Position;
SELECT DISTINCT "type"
FROM /portfolios, positions.values posnVal TYPE Position
WHERE posnVal.qty > 1000.00
Every expression in the FROM clause must evaluate to a Collection. For a Map, the
values method returns a Collection.
If positions were a List instead of a Map, this query could be used to retrieve the data:
IMPORT javaobject.Position;
SELECT DISTINCT "type"
FROM /portfolios, positions posnVal TYPE Position
WHERE posnVal.qty >= 1000.00
For each object type accessed in your FROM clause, use the method that returns a
Collection for that object.
Each expression in the FROM clause can be any expression that evaluates to a Collection. An
expression in the FROM clause is typically a path expression that resolves to a region in the cache so that
the values in the region become the collection of objects to filter.
For example, this is a simple SELECT statement that evaluates to a set of all the entry value objects of the
region /portfolios with active status. The collection of entry values provided by the FROM clause is
traversed by the WHERE clause, which accesses each elements status attribute for comparison.
SELECT DISTINCT * FROM /portfolios WHERE status = 'active'
If the FROM clause has just one expression in it, the result of the clause is the single collection that the
expression evaluates to. If the clause has more than one expression in it, the result is a collection of structs
that contain a member for each of those collection expressions. For example, if the FROM clause contains
three expressions that evaluate to collections C1, C2, and C3, then the FROM clause generates a set of
struct(x1, x2, x3) where x1, x2, and x3 represent nested iterations over the collections specified.
If the collections are independent of each other, this set of structs represents their Cartesian product.
In this query, the FROM clause produces a set of structs of portfolio and position pairs to be iterated. Each
struct contains the portfolio and one of its contained positions.
IMPORT javaobject.Position;
SELECT DISTINCT "type" FROM /portfolios, positions TYPE Position
WHERE qty > 1000.00
Iterator variables
With each collection expression in the FROM clause, you can associate an explicit iterator variable. The variable is
added to the current scope and becomes the iterator variable bound to the elements of the collection as
they are iterated over. In this example, pflo and posnVal are both explicit iterator variables.
IMPORT javaobject.Position;
SELECT DISTINCT pflo."type", posnVal.qty
FROM /portfolios pflo, positions.values posnVal TYPE Position
WHERE pflo.status = 'active' and posnVal.mktValue > 25.00
The first form of the import statement allows Portfolio to be used as the name of the class,
com.myFolder.Portfolio. The second form provides an alternative class name, MyPortfolio, to
be used. This is useful when a class name is not unique across packages and classes in a single query.
The following example uses imported classes:
IMPORT com.commonFolder.Portfolio;
IMPORT com.myFolder.Portfolio AS MyPortfolio;
SELECT DISTINCT mpflo.status
FROM /portfolios pflo TYPE Portfolio,
/myPortfolios mpflo TYPE MyPortfolio
WHERE pflo.status = 'active' and mpflo.id = pflo.id
This entire query string must be passed to the query engine, including the IMPORT statements.
Common type names do not require an IMPORT statement. The following table lists the types that are
defined by the system and the Java types they represent.
Table 9.3 Predefined Class Types

Type                                      Java Representation   Native Client Representation
short                                     short                 CacheableInt16
long                                      long                  CacheableInt64
int                                       int                   CacheableInt32
float                                     float                 CacheableFloat
double                                    double                CacheableDouble
char                                      char                  CacheableWideChar
string                                    java.lang.String      CacheableString
boolean                                   boolean               CacheableBoolean
byte or octet                             byte                  CacheableByte
date                                      java.sql.Date         CacheableDate
time                                      java.sql.Time         Unsupported
timestamp                                 java.sql.Timestamp    Unsupported
set<type>                                 java.util.Set         CacheableHashSet
list<type>                                java.util.List        CacheableVector
array<type>                               java.lang.Object[]    CacheableArray
map<type,type> or dictionary<type,type>   java.util.Map         CacheableHashMap
The type specification can be an imported type or any of these predefined types.
The object type information must be available when the query is created. To provide the appropriate
information to the query engine, specify the type for each of your FROM clause collection objects by
importing the object's class before running the query and typing the object inside the query. For the
example region, this query is valid (all of the examples in this chapter assume that this IMPORT statement
is provided):
Example 9.10 Query Using IMPORT and TYPE for Object Typing
IMPORT javaobject.Position;
SELECT DISTINCT mktValue
FROM /portfolios, positions.values TYPE Position
WHERE mktValue > 25.00
This entire query string must be passed to the query engine, including the IMPORT statement. Import the
object's class before running the query and typecast the object inside the query. For the example region,
both of these queries are valid:
Example 9.11 Query Using IMPORT and Typecasting for Object Typing
IMPORT javaobject.Position;
SELECT DISTINCT value.mktValue
FROM /portfolios, (map<string,Position>)positions
WHERE value.mktValue > 25.00
IMPORT cacheRunner.Position;
SELECT DISTINCT mktValue
FROM /portfolios, (collection<Position>)positions.values
WHERE mktValue > 25.00
This entire query string must be passed to the query engine, including the IMPORT statement. Use named
iterators in the FROM clause and explicitly prefix the path expression with iterator names.
Joins
If collections in the FROM clause are not related to each other, the WHERE clause can be used to join them.
The statement below returns all the persons from the /Persons region with the same name as a flower
in the /Flowers region.
SELECT DISTINCT p FROM /Persons p, /Flowers f WHERE p.name = f.name
Indexes are supported for region joins. To create indexes for region joins, you create single-region
indexes for both sides of the join condition. These are used during query execution for the join condition.
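For the example join above, the server-side indexes might be declared in the server's cache.xml along these lines (a hedged sketch following the declarative format shown in Example 9.13 later in this chapter; the index names are illustrative, and each index element goes inside its own region definition):
<index name="personNameIndex">
  <functional from-clause="/Persons" expression="name"/>
</index>
<index name="flowerNameIndex">
  <functional from-clause="/Flowers" expression="name"/>
</index>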
When a Struct is returned, the name of each field in the Struct is determined as follows:
If a field is specified explicitly using the fieldname:expression syntax, the fieldname is used.
If the SELECT projection list is * and an explicit iterator expression is used in the FROM clause, the
iterator variable name is used as the field name.
If the field is associated with a region or attribute path expression, the last attribute name in the
expression is used.
If names cannot be determined based on these rules, arbitrary unique names are generated by the query
processor.
These examples show how the projections and FROM clause expressions are applied:
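For instance (a hedged illustration, not one of the manual's original examples), the following query returns Structs with two fields named type and qty, the last attribute names in the two projection expressions:
IMPORT javaobject.Position;
SELECT DISTINCT "type", posnVal.qty
FROM /portfolios, positions.values posnVal TYPE Position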
Method Invocation
The query language supports method invocation inside query expressions. The query processor maps
attributes in query strings using the attribute rules described in Object attributes on page 189.
Methods declared to return void evaluate to null when invoked through the query
processor.
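For example (a hedged illustration; a positions map keyed by security ID is assumed from the sample region), a query can invoke the Map method get directly and compare the result with NULL, as allowed by the comparison operators described below:
SELECT DISTINCT * FROM /portfolios WHERE positions.get('YYY') != NULL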
Operators
GemFire supports the following operator types in expressions:
Comparison operators (page 203)
Logical operators (page 203)
Unary operators (page 204)
Map and index operators (page 204)
Dot and forward slash operators (page 204)
Comparison operators
Comparison operators compare two values and return the results as either true or false. The native
client query language supports the following comparison operators: = (equal), <> and != (not equal), <, <=, >, and >=.
The equal (=) and not equal (<>, !=) operators have lower precedence than the other comparison
operators.
The equal (=) and not equal (<>, !=) operators can be used with NULL. To perform equality or inequality
comparisons with UNDEFINED, use the IS_DEFINED and IS_UNDEFINED operators. For more
information, refer to Rules for UNDEFINED on page 205.
The following example selects all the portfolios with more than one position:
SELECT DISTINCT * FROM /portfolios WHERE positions.size >= 2
This example selects all the portfolios whose status is active:
SELECT DISTINCT * FROM /portfolios WHERE status = 'active'
This example selects all the portfolios whose id is not XYZ-1:
SELECT DISTINCT * FROM /portfolios WHERE id <> 'XYZ-1'
Logical operators
The operators AND and OR allow you to create more complex expressions by combining expressions to
produce a boolean result (TRUE or FALSE). When you combine two conditional expressions using the
AND operator, both conditions must evaluate to true for the entire expression to be true. When you
combine two conditional expressions using the OR operator, the expression evaluates to true if either one
or both of the conditions are true. You can create complex expressions by combining multiple simple
conditional expressions with AND and OR operators. When expressions are combined using multiple AND
and OR operators, the AND operator has higher precedence than the OR operator.
The following query string selects only those portfolios with a type of xyz that have active status. If
either condition is false for an element in the queried collection portfolios, that element is not included
in the result.
SELECT DISTINCT * FROM /portfolios
WHERE "type" = 'xyz' AND status = 'active'
The following query string selects only those portfolios whose ID is XYZ-1 or whose ID is ABC-1. If
either one of these conditions is true for an element of portfolios, that element is included in the result.
SELECT DISTINCT * FROM /portfolios
WHERE id = 'XYZ-1' OR id = 'ABC-1'
In the following example, the query selects only those portfolios with positions that have a market value
over 30.00 and that have an ID of XYZ-1 or ABC-1.
IMPORT cacheRunner.Position;
SELECT DISTINCT * FROM /portfolios, positions.values TYPE Position
WHERE mktValue > 30.00
AND (id = 'XYZ-1' OR id = 'ABC-1')
Unary operators
Unary operators operate on a single value or expression, and have lower precedence than comparison
operators in expressions. The native client supports the unary operator NOT. NOT is the negation operator,
which changes the value of the operand to its opposite. That is, if the expression evaluates to TRUE, NOT
changes this value to FALSE. The operand must be a boolean. The following query returns the set of
portfolios that have positions.
SELECT DISTINCT * FROM /portfolios WHERE NOT positions.isEmpty
Any other operation with any UNDEFINED operands results in UNDEFINED. This includes the dot
operator, method invocations with UNDEFINED arguments, and comparison operations. For example, the
operation X=UNDEFINED evaluates to UNDEFINED, but IS_UNDEFINED(UNDEFINED) evaluates to
TRUE.
Functions
The query language supports these functions:
ELEMENT(query): Extracts a single element from a collection or array. This function throws a
FunctionDomainException if the argument is not a collection or array with exactly one element.
IS_DEFINED(query): Returns TRUE if the expression does not evaluate to UNDEFINED.
IS_UNDEFINED(query): Returns TRUE if the expression evaluates to UNDEFINED.
NVL(expr1, expr2): Returns expr2 if expr1 is null. The expressions can be bind arguments,
path expressions, or literals.
TO_DATE(date_str, format_str): Returns a Java Date class object. The arguments must be
Strings with date_str representing the date and format_str representing the format used by date_str.
For information on defined and undefined values, see Rules for UNDEFINED on page 205.
The following example returns a boolean indicating whether the single portfolio with ID XYZ-1 is active.
ELEMENT(SELECT DISTINCT * FROM /portfolios
WHERE id = 'XYZ-1').status = 'active'
The next example returns all portfolios with an undefined value for status. For example, if status is null,
it is undefined.
SELECT DISTINCT * FROM /portfolios
WHERE IS_UNDEFINED(status)
In most queries, undefined values are not included in the query results. The IS_UNDEFINED function
allows undefined values to be included in a result set, so you can retrieve information about elements
with undefined values (or at least identify elements with undefined values).
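As hedged illustrations (not from the original manual; the createDate attribute is hypothetical), NVL and TO_DATE might be used like this:
SELECT DISTINCT * FROM /portfolios WHERE NVL(status, 'active') = 'active'
SELECT DISTINCT * FROM /portfolios WHERE createDate < TO_DATE('12/31/2009', 'MM/dd/yyyy')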
Construction Expressions
The construction expression implemented in the cache server is the set constructor. A set can be
constructed using SET(e1, e2, ..., en) where e1, e2, ..., en are expressions. This constructor
creates and returns the set containing the elements e1, e2, ..., en. For example, SET(1, 2, 3)
returns a Set containing the three elements 1, 2, 3.
The IN Expression
The IN expression is a boolean indicating if one expression is present inside a collection of expressions
of compatible type.
If e1 and e2 are expressions, e2 is a collection, and e1 is an object or a literal whose type is a subtype
or the same type as the elements of e2, then
e1 IN e2
is an expression of type boolean. The expression returns:
TRUE, if e1 is not UNDEFINED and belongs to collection e2
FALSE, if e1 is not UNDEFINED and does not belong to collection e2
UNDEFINED, if e1 is UNDEFINED
A very simple example of this is the expression, 2 IN SET(1, 2, 3), which returns TRUE.
For a more interesting example, look at the case where the collection you are querying into is defined by
a sub-query. Suppose that, in addition to the /portfolios region, there is another region named
/company with attributes id, name, and address. You can run the following query to retrieve the
names and addresses of all companies for whom you have an active portfolio on file.
SELECT name, address FROM /company
WHERE id IN (SELECT id FROM /portfolios WHERE status = 'active')
The interior SELECT statement returns a collection of ids for all /portfolios entries whose status is
active. The exterior SELECT iterates over /company, comparing each entry's id with this collection. For
each entry, if the IN expression returns TRUE, the associated name and address are added to the outer
SELECT's collection.
Literals
Query language expressions can contain literals as well as operators and attribute names. These are the
literal types that the native client supports.
boolean: A boolean value, either TRUE or FALSE.
integer and long: An integer literal is of type long if it is suffixed with the ASCII letter L.
Otherwise it is of type int.
floating point: A floating-point literal is of type float if it is suffixed with an ASCII letter F.
Otherwise its type is double and it can optionally be suffixed with an ASCII letter D. A double or
floating point literal can optionally include an exponent suffix of E or e, followed by a signed or
unsigned number.
string: String literals are delimited by single quotation marks. Embedded single quotation marks
are doubled. For example, the character string 'Hello' evaluates to the value Hello, while the
character string 'He said, ''Hello''' evaluates to He said, 'Hello'. Embedded newlines
are kept as part of the string literal.
char: A literal is of type char if it is a string literal prefixed by the keyword CHAR, otherwise it is
of type string. The CHAR literal for the single quotation mark character is CHAR '''' (four single
quotation marks).
date: A java.sql.Date object that uses the JDBC format prefixed with the DATE keyword:
DATE yyyy-mm-dd. In the Date, yyyy represents the year, mm represents the month, and dd
represents the day. The year must be represented by four digits; a two-digit shorthand for the year is
not allowed.
time: Not supported.
timestamp: Not supported.
UNDEFINED: A special literal that is a valid value for any data type. An UNDEFINED value is the
result of accessing an attribute of a null-valued attribute. Note that if you access an attribute that has
an explicit value of null, then it is not undefined. For example, if a query accesses the attribute
address.city and address is null, then the result is undefined. If the query accesses address,
then the result is not undefined, it is null. For more information, see Rules for UNDEFINED on
page 205.
Type Conversions
Within a query string, the query processor follows Java rules and performs implicit conversions and
promotions in certain cases in order to evaluate expressions that contain different types. The query
processor performs binary numeric promotion, method invocation conversion, and temporal type
conversion.
The query processor performs binary numeric promotion on the operands of the following operators:
comparison operators <, <=, >, and >=
equality operators = and <>
This is essentially the same behavior as in Java, except that chars are not considered to be numeric in the
native client query language.
Comments
You can include comments in the query string. To insert a one-line comment, begin the line with two
dashes (--). To insert a multiple-line comment, or to embed a comment in a line of code, begin the
comment with /* and end it with */. The following query strings include comments:
SELECT DISTINCT * FROM /portfolios /* here is a comment */
WHERE status = 'active'
9.4 Indexes
Indexes are created and maintained on the cache server. An index can provide significant performance
gains for query execution. A query run without the aid of an index iterates through every object in the
collection on the cache server. If an index is available that matches part or all of the query specification,
the query iterates only over the indexed set, and query processing time can be reduced.
When you create your indexes on the cache server, remember that indexes incur maintenance costs as
they must be updated when the indexed data changes. An index that requires many updates and is not
used very often may require more system resources than no index at all. Indexes also consume memory.
For information on the amount of memory used for indexes, see the system configuration information in
the GemFire Enterprise System Administrator's Guide.
An index for remote querying can be declaratively created on the cache server using a cache.xml file,
as shown in the next example:
Example 9.13 Creating an Index on a Cache Server Using a Server XML File
<region name="portfolios">
<region-attributes . . . >
<value-constraint>cacheRunner.Portfolio</value-constraint>
</region-attributes>
<index name="myFuncIndex">
<functional from-clause="/portfolios" expression="status"/>
</index>
<index name="myPrimIndex">
<primary-key field="id"/>
</index>
<entry> . . .
For detailed information about working with indexes configured on a cache server, see the Querying and
Indexing chapter in the GemFire Enterprise Developer's Guide.
Simplify these types of queries by optimizing the class relations or data storage. In the previous
example, performance is improved if positions are kept in a separate region with a pointer to
Portfolios. This simpler query determines which position objects belong to a particular portfolio:
select distinct port from /Portfolios port, /Positions pos where
pos.portfolio() = port.ID
Some queries may take more time to execute on the server, and the results size may be large and take
more time to be transmitted back to the client. Specify an optional timeout parameter in the query
methods to allow sufficient time for the operation to succeed.
QueryService
This interface is the entry point to the query package. To execute a query you must obtain a
QueryService from the cache. It is retrieved from the Cache instance through
Cache::getQueryService. If you are using the Pool API you must obtain the QueryService from
the pool and not from the cache.
Query
A Query is obtained from a QueryService, which is obtained from the Cache. The Query interface
provides methods for managing the compilation and execution of queries, and for retrieving an existing
query string.
A Query object must be obtained for each new query. The following example demonstrates the method
used to obtain a new instance of Query:
QueryPtr newQuery(const char * querystr);
The selectValue method retrieves one value object. In this call, you request the portfolio with ID
ABC-1:
SerializablePtr port = region->selectValue("ID='ABC-1'");
The existsValue method returns a boolean indicating if any entry exists that satisfies the
predicate. This call returns false because there is no entry with the indicated type:
bool entryExists = region->existsValue("'type' = 'QQQ' ");
For more information about these shortcut query methods, see the Region class description in the native
client online API documentation.
SelectResults
Executing the query on the cache server returns the results as a SelectResults, which is either a
ResultSet or a StructSet.
SelectResultsIterator
This class provides an iterator over the items available in a ResultSet or StructSet.
ResultSet
A ResultSet is obtained after executing a Query, which is obtained from a QueryService that is obtained
from a Cache class.
StructSet
This is used when a SELECT statement returns more than one field per result element. Each element is a
Struct, which provides the field definitions and contains the field values.
Query Management
Queries are created through the QueryService and then managed through the resulting Query object.
The newQuery method of the QueryService binds a query string to a Query object. Invoking the
execute method submits the query to the cache server and returns SelectResults, which is
either a ResultSet or a StructSet.
// Hedged reconstruction: the enclosing loops over the rows (Structs) and
// columns (fields) of a StructSet result are assumed, not verbatim from the
// original example.
SelectResultsIterator iter = results->getIterator();
while (iter.hasNext()) { // rows
  Struct* simpl = dynamic_cast<Struct*>(iter.next().ptr());
  if (simpl == NULL) continue;
  for (int32_t inner_field = 0; inner_field < simpl->length(); inner_field++) {
    SerializablePtr field = (*simpl)[inner_field];
    CacheableStringPtr str(dynamic_cast<CacheableString*>(field.ptr()));
    if (str != NULLPTR)
    {
      printf("\n Data for %s is %s ",
        simpl->getFieldName(inner_field), str->asChar());
    }
    else
    {
      printf("\n some other object type inside struct\n");
    }
  } //end of columns
} //end of rows
10 Continuous Querying
Continuous querying in GemFire Enterprise native client gives C++ and C# .NET clients a way to run
queries against events in the GemFire cache server region. The clients register interest in events using
simple query expressions. Events are sent to client listeners that you can program to do whatever your
application requires.
Continuous queries (CQs) provide these main features:
Standard GemFire Enterprise native client query syntax and semantics: CQ queries are
expressed in the same language used for other native client queries (see Chapter 9, Remote Querying,
on page 179).
Standard GemFire Enterprise events-based management of CQ events: The event handling
used to process CQ events is based on the standard GemFire Enterprise event handling framework.
The CQListener interface is similar to Application Plug-Ins on page 85 and Application Callback
Interfaces on page 110.
Complete integration with the client/server architecture: CQ functionality uses existing server-
to-client messaging mechanisms to send events. All tuning of your server-to-client messaging also
tunes the messaging of your CQ events. If your system is configured for high availability, then your
CQs are highly available, with seamless failover provided in case of server failure (see High
Availability for Client-to-Server Communication on page 152). If your clients are durable, you can
also define any of your CQs as durable (see Durable Client Messaging on page 154).
Interest criteria based on data values: CQ queries are run against the region's entry values.
Compare this to interest registration by reviewing Registering Interest for Entries on page 46.
Active query execution: Once initialized, the queries operate only on new events instead of on the
entire region data set. Events that change the query result are sent to the client immediately.
In this chapter:
How Continuous Querying Works (page 222)
Configuring for Continuous Querying (page 224)
The Native Client CQ API (page 225)
State and Life Cycle of a CQ (page 226)
Implementing Continuous Queries (page 227)
[Figure: Continuous querying overview. CQs (CQ1 through CQn) registered against a server region run in the server's CQ executor framework, and matching events are delivered to the corresponding CqListeners on the native client.]
CQs do not update the native client region. This is in contrast to other server-to-client messaging, like
the updates sent to satisfy interest registration and responses to get requests from the client. CQs serve
as notification tools for the CQ listeners, which can be programmed in any way your application requires.
When a CQ is running against a server region, each entry event is evaluated against the CQ query by the
thread that updates the server cache. If either the old or the new entry value satisfies the query, the thread
puts a CqEvent in the client's queue. The CqEvent contains information from the original cache event,
plus information specific to the CQ's execution. Once received by the client, the CqEvent is passed to
the onEvent method of all CqListeners defined for the CQ.
The following figure shows the typical CQ data flow when entries are updated in the server cache. The
steps are:
1. Entry events come to the server's cache from any source: the server or its peers, distribution from
remote sites, or updates from a client.
2. For each event, the server's CQ executor framework checks for a match with the CQs it has running.
3. If the old or new entry value satisfies a CQ query, a CQ event is sent to the CQ's listeners on the
client side. Each listener for the CQ gets the event. In the following figure:
Both the new and old prices for entry X satisfy the CQ query, so that event is sent indicating an
update to the query results.
The old price for entry Y satisfied the query, so it was part of the query results. The invalidation
of entry Y makes it not satisfy the query. Because of this, the event is sent indicating that it is
destroyed in the query results.
The price for the newly created entry Z does not satisfy the query, so no event is sent.
The region operations do not translate directly to specific query operations, and the
query operations do not specifically describe the region events. Instead, each query
operation describes how its corresponding region event affects the query results. For
more information on this, see The CqEvent Object on page 230.
[Figure: CQ event data flow for server region /tradeOrder and the CQ PriceTracker, whose query is "SELECT * FROM /tradeOrder t WHERE t.price > 100.00". An update of entry X (old price 101, new price 102) produces an update CqEvent, an invalidate of entry Y (old price 231) produces a destroy CqEvent, and a create of entry Z (price 42) produces no event.]
If you want your CQs to be durable, configure your native clients for durable messaging. When your
clients are durable, you can create durable CQs whose events are maintained during client
disconnects and replayed for the client when it reconnects. The process and data flow particular to
durable CQs is described in Durable Client Messaging on page 154.
GemStone::GemFire::Cache
QueryService: This interface provides methods to:
create a new CQ and specify whether it is durable (available for durable clients)
execute a CQ with or without an initial result
list all the CQs registered by the client
close and stop CQs
get a handle on CqStatistics for the client
The QueryService CQ methods are run against the server cache.
Query Syntax
This is the basic syntax for the CQ query:
SELECT * FROM /fullRegionPath [iterator] [WHERE clause]
The CQ query must satisfy the standard GemFire Enterprise native client querying specifications
described in The Query Language on page 189. It also must satisfy these restrictions:
The FROM clause must contain only a single region specification, with optional iterator variable.
The query must be a SELECT expression only, preceded by zero or more IMPORT statements. This
means the query cannot be a statement like /tradeOrder.name or (SELECT * from
/tradeOrder).size.
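For example, the following query string satisfies these requirements. It is the query run by the PriceTracker CQ in the figure earlier in this chapter:
SELECT * FROM /tradeOrder t WHERE t.price > 100.00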
// CqListener class
class TradeEventListener : public CqListener {
public:
void onEvent(const CqEvent& cqEvent) {
// Operation associated with the query op
CqOperation::CqOperationType queryOperation =
cqEvent.getQueryOperation();
// key and new value from the event
CacheableKeyPtr key = cqEvent.getKey();
TradeOrderPtr tradeOrder =
dynCast<TradeOrderPtr>(cqEvent.getNewValue());
if (queryOperation==CqOperation::OP_TYPE_UPDATE) {
// update data on the screen for the trade order
. . .
}
else if (queryOperation==CqOperation::OP_TYPE_CREATE) {
// add the trade order to the screen
. . .
}
else if (queryOperation==CqOperation::OP_TYPE_DESTROY) {
// remove the trade order from the screen
. . .
}
}
void onError(const CqEvent& cqEvent) {
// handle the error
}
void close() {
// close the output screen for the trades
. . .
}
};
// CqListener class
public class TradeEventListener : ICqListener {
public void onEvent(CqEvent cqEvent) {
// Operation associated with the query op
CqOperationType queryOperation = cqEvent.getQueryOperation();
// key and new value from the event
ICacheableKey key = cqEvent.getKey();
CacheableString keyStr = key as CacheableString;
IGFSerializable val = cqEvent.getNewValue();
TradeOrder tradeOrder = val as TradeOrder;
if (queryOperation==CqOperationType.OP_TYPE_UPDATE) {
// update data on the screen for the trade order
// . . .
}
else if (queryOperation== CqOperationType.OP_TYPE_CREATE) {
// add the trade order to the screen
// . . .
}
else if (queryOperation== CqOperationType.OP_TYPE_DESTROY) {
// remove the trade order from the screen
// . . .
}
}
public void onError(CqEvent cqEvent) {
// handle the error
}
// From CacheCallback
public void close() {
// close the output screen for the trades
// . . .
}
}
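Once a listener like the ones above is written, it is registered with a new CQ and the CQ is started. The following C++ sketch is hedged (not verbatim from the manual); the CQ name priceTracker and the variable qrySvcPtr, a QueryServicePtr obtained from the pool or cache, are assumptions:
CqAttributesFactory cqFac;
CqListenerPtr tradeListener(new TradeEventListener());
cqFac.addCqListener(tradeListener);
CqAttributesPtr cqAttr = cqFac.create();
CqQueryPtr priceTracker = qrySvcPtr->newCq("priceTracker",
    "SELECT * FROM /tradeOrder t WHERE t.price > 100.00", cqAttr);
priceTracker->execute(); // or executeWithInitialResults()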
CQ events do not change your client cache. They are provided as an event service only. This allows you
to have any collection of CQs without storing large amounts of data in your regions. If you need to persist
information from CQ events, program your listener to store the information where it makes the most
sense for your application.
Be very careful if you choose to update your cache from your CqListener. If your
listener updates the region that is queried in its own CQ, the update may be forwarded
to the server. If the update on the server satisfies the same CQ, it may be returned to
the same listener that did the update, which could put your application into an infinite
loop. This same scenario could be played out with multiple regions and multiple CQs
if the listeners are programmed to update each other's regions.
You can use the query operation to decide what to do with the CqEvent in your listeners. For example,
a CqListener that displays query results on screen might stop displaying the entry, start displaying the
entry, or update the entry display depending on the query operation.
CQ Execution Options
CQ execution can be done with or without an initial result set by calling CqQuery.Execute or
CqQuery.ExecuteWithInitialResults. The initial SelectResults returned from
ExecuteWithInitialResults is the same as the one you would get if you ran the query ad hoc by
calling QueryService.NewQuery.Execute on the server cache, but with the key added.
If you are managing a data set from the CQ results, you can initialize the set by iterating over the result
set and then updating it from your listeners as events arrive. For example, you might populate a new
screen with initial results and then update the screen from a listener.
Just as with the standalone query, the initial results represent a possibly moving snapshot of the cache.
If there are updates to the server region while the result set is being created, the result set and the
subsequent event-by-event CQ query execution might miss some events.
Accessing CQ Statistics
CQ runtime statistics are available for the client through the CqServiceStatistics and
CqStatistics interfaces described under The Native Client CQ API on page 225. You can get
information on the events generated by a specific CQ from the CqStatistics object returned by
CqQuery.GetStatistics. You can get higher-level information about the CQs the client has
registered, running, and so on, from the CqServiceStatistics object returned by
QueryService.GetCqStatistics.
For both the client and server, you can access these statistics by loading the statistics archive file into
VSD. The optional VSD (Visual Statistics Display) tool can be acquired from GemStone Technical
Support. See Contacting Technical Support on page 21 for details.
Client statistics are for the single client only. The server's statistics pertain to all clients with CQs on this server.
Modifying CQ Attributes
You can modify the attributes for an existing CQ using the methods provided by
CqQuery.GetCqAttributesMutator. The attributes consist of a list of listeners.
Executing CQs
Individual CQs are executed using CqQuery execute* methods. You can also execute all CQs for the
client or for a region through the client QueryService. CQs that are running can be stopped or closed.
Stopping CQs
Individual CQs are stopped using the CqQuery stop method. You can also stop all CQs for the client
through the QueryService. Stopped CQs are in the same state as new CQs that have not yet been
executed. You can close or execute a stopped CQ.
Closing CQs
Individual CQs are closed using the CqQuery close method. You can also close all CQs for the client
through the QueryService. Closed CQs cannot be executed. CQs are also closed in the following cases:
The client closes its cache after closing all of its CQs: Closing the client cache closes the
QueryService and all associated CQs on the client and server.
The client disconnects from its server: This might be because of a network outage or some other
failure. When a client disconnects, all CQs created by the client are removed from the server and put
into a CLOSED state on the client.
The server region is destroyed: When a server region is destroyed, all associated CQs are also
cleaned up on the server and the region destroy event is sent to the client. On the client, the
CqListener.Close method is called for all CQs on the region.
11 Using Connection Pools
The connection pool API supports connecting to servers through server locators or directly connecting to
servers. In a distributed system, servers can be added or removed and their capacity to service new client
connections may vary. The locators continuously monitor server availability and server load information
and provide clients with the connection information of the server with the least load at any given time.
Locators provide clients with dynamic server discovery and server load balancing. Clients are configured
with locator information, and depend on the locators for server connectivity.
Server locators provide these main features:
Automated Discovery of Servers and Locators: Adding and removing servers or locators is made
easy as each client does not require a list of servers to be configured at the time of pool creation.
Client Load Rebalancing: Spreading the client load over servers in a distributed system is made
efficient. Server locators give clients dynamic server information and provide server load
rebalancing after servers depart or join the system.
High Availability: When a client/server connection receives an exception, the connection is
automatically failed over to another available connection in the pool. Redundancy is also provided
for client subscriptions.
You can alternatively configure a pool statically with a list of endpoints. When the pools are statically
configured, a round-robin load balancing policy is used to distribute connections across the servers.
In this chapter:
How Client Load Balancing Works (page 236)
Configuring Pools for Servers or Locators (page 238)
The Native Client Pool API (page 240)
Working With Pools (page 241)
[Figure: Client load balancing. A locator tracks the cacheservers in the server farm and their load, and supplies each cache client's connection pool with the address and load information of the least-loaded server.]
When a connection receives an exception, the operation is failed over to another connection from the
pool. The failover mechanism obtains the endpoint to failover to from the locator or from the specified
endpoint list in the pool.
The locator, server, and pool settings can be configured declaratively in client XML or
programmatically through the PoolFactory interface.
Create an instance of PoolFactory through PoolManager.
GemStone::GemFire::Cache
Pool: This interface provides the API to retrieve pool attributes.
PoolFactory: This interface provides the API to configure pool attributes.
PoolManager: This interface provides the API to create a PoolFactory object and to find pool
objects.
The AttributesFactory class has a new method, setPoolName, which assigns a pool to a region.
Operations performed on the configured region use connections from the pool.
A region can have a pool attached to it. A pool may have multiple regions attached to
it.
Subscription Properties
Each connection pool has a single subscription connection, which can be to any server that matches
the requirements of the connection pool. When a client registers interest for a region, if the connection
pool does not already have a subscription channel, the connection pool sends a message to the server
locator and the server locator chooses servers to host the queue and return those server names to the
client. The client then contacts the chosen servers and asks them to create the queue. The client maintains
at least one connection with each server hosting a queue - if the server does not detect any connections
from a non-durable client, it drops the client queue and closes all artifacts for the client. For information
about durable client subscriptions, see Durable Client Messaging on page 154.
cachePtr = systemPtr->create();
PoolFactoryPtr poolFacPtr = PoolManager::createFactory();
//to create pool add either endpoints or add locators or servers
//pool with endpoint, adding to pool factory
//poolFacPtr->addServer("localhost", 12345 /*port number*/);
//pool with locator, adding to pool factory
poolFacPtr->addLocator("localhost", 34756 /*port number*/);
PoolPtr pptr = NULLPTR;
if ((PoolManager::find("examplePool")) == NULLPTR) { // Pool does not exist with the same name.
pptr = poolFacPtr->create("examplePool");
}
RegionFactoryPtr regionFactory =
cachePtr->createRegionFactory(CACHING_PROXY);
regionPtr = regionFactory
->setPoolName("examplePool")
->create("regionName");
QueryServicePtr qs = cachePtr->getQueryService("examplePool");
12 Function Execution
Using the GemFire Enterprise function execution service, you can execute application functions on a
single server member, in parallel on a subset of server members, or in parallel on all server members of
a distributed system. Achieving linear scalability is predicated upon being able to horizontally partition
the application data such that concurrent operations by distributed applications can be done
independently across partitions. In other words, if the application requirements for transactions can be
restricted to a single partition, and all data required for the transaction can be colocated to a single server
member or a small subset of server members, then true parallelism can be achieved by vectoring the
concurrent accessors to the ever-growing number of partitions.
Most scalable enterprise applications grow in terms of data volume, where the number of data items
managed, rather than the size of each item, grows over time. If the above logic holds (especially true for OLTP class
applications), then we can derive sizable benefits by routing the data dependent application code to the
fabric member hosting the data. The term we use to describe this routing of application code to the data
of interest is aptly called data-aware function routing, or behavior routing.
Function Execution can be used only along with the pool functionality. For more information about the
pool API, see Using Connection Pools on page 235.
In this chapter:
How Function Execution Works (page 244)
Executing Functions in GemFire (page 249)
Solutions and Use Cases (page 253)
Only C++ versions of Function Execution API interfaces, classes, and methods are
shown throughout the text in this chapter (like FunctionService::onRegion). The
code examples show C++ and C#.
[Figure: A function run from the client on all servers. The client calls FunctionService::onServers::execute, each server executes the function and returns its result, and the client retrieves the combined results with ResultCollector::getResult.]
This shows a data-dependent function run by a client. The specified region is connected to the server
system, so the function automatically goes there to run against all servers holding data for the region.
[Figure: A data-dependent function invoked through FunctionService::onRegion::execute on the client region; the call is routed to the servers that hold data for the region, and the client retrieves the results with ResultCollector::getResult.]
This shows the same data-dependent function with the added specification of a set of keys on which to
run. Servers that don't hold any of the keys are left out of the function execution.
[Figure: A data-dependent function invoked through FunctionService::onRegion::withFilter::execute with a filter of keys X and Y; only the servers whose partitioned region datastores hold those keys execute the function, and the client retrieves the results with ResultCollector::getResult.]
This scenario demonstrates the steps in a call to a highly available function. The call fails the first time
on one of the participating servers and is successfully run a second time on all servers.
Figure 12.4 Highly Available Data Dependent Function with Failure on First Execution
[Figure: A highly available data-dependent function (Function.isHA == true) invoked through FunctionService::onRegion::execute with a filter of keys X and Y. The first execution fails on one of the participating servers, the function is re-executed on the participating servers, and the client retrieves the results with ResultCollector::getResult.]
5. If your function returns results and you need special results handling, code a custom
ResultCollector implementation to replace the default provided by GemFire. Use the
Execution::withCollector method to define your custom collector.
6. Write the application code to run the function and, if the function returns results, to get the results
from the collector. For high availability, the function must return results.
regPtr0 = initRegion();
ExecutionPtr exc = FunctionService::onRegion(regPtr0);
CacheableVectorPtr routingObj = CacheableVector::create();
char buf[128];
bool getResult = true;
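// Alternative: target a server directly instead of a region.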
pptr = PoolManager::find(poolName);
ExecutionPtr exc = FunctionService::onServer(cache);
CacheableVectorPtr routingObj = CacheableVector::create();
char buf[128];
bool getResult = true;
sprintf(buf, "VALUE--%d", 10);
CacheablePtr value(CacheableString::create(buf));
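The fragments above stop before the function is actually invoked. A hedged sketch of the remaining calls follows (not verbatim from the manual): the function name getFunction is illustrative, execute and getResult correspond to the calls named in the figures earlier in this chapter, and withArgs is assumed.
// complete the call chain (a withFilter step applies only to the onRegion form)
ResultCollectorPtr rc = exc->withArgs(value)->execute("getFunction", getResult);
CacheableVectorPtr resultList = rc->getResult();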
13 Delta Propagation
In most distributed data management systems, the data stored in the system tends to be created once and
then updated frequently. These updates are sent to other members for event propagation, redundancy
management, and cache consistency in general. Tracking only the changes in an updated object and
sending only the updates, or deltas, mean lower network transmission costs and lower object
serialization/deserialization costs. Performance improvements can be significant, especially when
changes to an object instance are relatively small compared to the overall size of the instance.
In this chapter:
How Delta Propagation Works (page 256)
Supported Topologies and Limitations (page 257)
Delta Propagation API (page 258)
Delta Propagation Properties (page 259)
Implementing Delta Propagation (page 260)
Errors in Delta Propagation (page 261)
Performance Considerations and Limitations (page 262)
Examples of Delta Propagation (page 263)
[Figure: Delta propagation for an entry (k, v). The client application gets and updates k, then puts it; GemFire calls hasDelta and toDelta on the value and sends the serialized (k, delta) to the server; the server calls fromDelta to rebuild v from the delta and distributes the delta to any other clients and peers.]
1. get operation. The get works as usual; the cache returns the full entry object from the local cache
or, if it isn't available there, from a server cache or from a loader.
2. update methods. You need to add code to the object's update methods so that they save delta
information for object updates, in addition to the work they were already doing.
3. put operation. The put works as usual in the local cache, using the full value, then calls hasDelta
to see if there are deltas and toDelta to serialize the information.
4. receipt of delta. fromDelta extracts the delta information that was serialized by toDelta and
applies it to the object in the local cache. The delta is applied directly to the existing value or to a
clone, depending on how you configure it for the region.
5. additional distributions. As with full distributions, receiving members forward the delta according
to their configurations and connections to other members. In the example, the server would forward
the delta to its peers and its other clients as needed. Receiving members do not recreate the delta;
toDelta is only called in the originating member.
.NET - for C#
Your application class must implement:
GemStone::GemFire::Cache::IGFDelta
GemStone::GemFire::Cache::IGFSerializable
IGFDelta provides the methods HasDelta, ToDelta, and FromDelta, which you program to report
on, send, and receive deltas for your class.
Additionally, for cloning, your class must implement the standard .NET ICloneable interface and its
Clone method.
C++
Your application must publicly derive from:
gemfire::Delta
gemfire::Cacheable
Delta provides the methods hasDelta, toDelta, and fromDelta, which you program to report on,
send, and receive deltas for your class.
For cloning, use the clone method provided in the Delta interface.
cloning-enabled
A boolean region attribute, configured in cache.xml, that affects how fromDelta applies deltas
to the local client cache. When true, the updates are applied to a clone of the value, and the clone
is then saved to the cache. When false, the value is modified in place in the cache. The default value is false.
Cloning can be expensive, but it ensures that the new object is fully initialized with the delta before any
application code sees it.
When cloning is enabled, by default GemFire does a deep copy of the object, using serialization. You
may be able to improve performance by implementing the appropriate clone method for your API,
making a deep copy of anything to which a delta may be applied. The goal is to significantly reduce the
overhead of copying the object while still retaining the isolation needed for your deltas.
Without cloning:
It is possible for application code to read the entry value as it is being modified, possibly seeing the
value in an intermediate, inconsistent state, with just part of the delta applied. You may choose to
resolve this issue by having your application code synchronize on reads and writes.
GemFire loses any reference to the old value because the old value is transformed in place into the
new value. Because of this, your CacheListener sees the same new value returned for
EntryEvent.getOldValue and EntryEvent.getNewValue.
Exceptions thrown from fromDelta may leave your cache in an inconsistent state. Without cloning,
any interruption of the delta application could leave you with some of the fields in your cached
object changed and others unchanged. If you do not use cloning, keep this in mind when you
program your error handling in your fromDelta implementation.
<region name="exampleRegion">
<region-attributes refid="CACHING_PROXY" cloning-enabled="true"
pool-name="examplePool"/>
</region>
RegionFactoryPtr regionFactory =
cachePtr->createRegionFactory(CACHING_PROXY);
RegionPtr regionPtr = regionFactory
->setCloningEnabled(true)
->create("myRegion");
[Figure: Delta distribution between members. Server1 and Server2 each hold the full entry (k, v) in distributed Region A; the update travels between the servers, and to the client regions, as (k, delta).]
<cache>
<region name="root" refid="CACHING_PROXY">
<region-attributes cloning-enabled="true" pool-name="examplePool"/>
</region>
<pool name="examplePool" subscription-enabled="true" server-group=
"ServerGroup1">
<locator host="localhost" port="34756"/>
</pool>
</cache>
using System;
using GemStone.GemFire.Cache;
namespace GemStone.GemFire.Cache.QuickStart
{
public class DeltaExample : IGFDelta, IGFSerializable, ICloneable
{
// data members
private Int32 m_field1;
private Int32 m_field2;
private Int32 m_field3;
// delta indicators
private bool m_f1set;
private bool m_f2set;
private bool m_f3set;
// Clears the delta indicators. (The parameterized constructors and the
// HasDelta and ToDelta implementations are not shown in this excerpt.)
private void reset()
{
m_f1set = false;
m_f2set = false;
m_f3set = false;
}
public DeltaExample()
{
reset();
}
public void FromDelta(DataInput DataIn)
{
lock(this)
{
m_f1set = DataIn.ReadBoolean();
if (m_f1set)
{
m_field1 = DataIn.ReadInt32();
}
// REPEAT FOR OTHER FIELDS
}
}
public void ToData(DataOutput DataOut)
{
DataOut.WriteInt32(m_field1);
DataOut.WriteInt32(m_field2);
DataOut.WriteInt32(m_field3);
}
public IGFSerializable FromData(DataInput DataIn)
{
m_field1 = DataIn.ReadInt32();
m_field2 = DataIn.ReadInt32();
m_field3 = DataIn.ReadInt32();
return this;
}
public UInt32 ClassId
{
get
{
return 0x02;
}
}
public UInt32 ObjectSize
{
get
{
UInt32 objectSize = 0;
return objectSize;
}
}
#ifndef __Delta_Example__
#define __Delta_Example__
#include <gfcpp/GemfireCppCache.hpp>
using namespace gemfire;
class DeltaExample : public Cacheable, public Delta
{
private:
// data members
int32_t m_field1;
int32_t m_field2;
int32_t m_field3;
// delta indicators
mutable bool m_f1set;
mutable bool m_f2set;
mutable bool m_f3set;
public:
DeltaExample()
{
reset();
}
DeltaExample(DeltaExample * copy)
{
m_field1 = copy->m_field1;
m_field2 = copy->m_field2;
m_field3 = copy->m_field3;
reset();
}
int getField1()
{
return m_field1;
}
// REPEAT FOR OTHER FIELDS
void setField1(int val)
{
lock();
m_field1 = val;
m_f1set = true;
unlock();
}
// REPEAT FOR OTHER FIELDS
virtual void toDelta(DataOutput& out) const
{
lock();
out.writeBoolean(m_f1set);
if (m_f1set)
{
out.writeInt(m_field1);
}
// REPEAT FOR OTHER FIELDS
reset();
unlock();
}
virtual void fromDelta(DataInput& in)
{
lock();
in.readBoolean(&m_f1set);
if (m_f1set)
{
in.readInt(&m_field1);
}
// REPEAT FOR OTHER FIELDS
reset();
unlock();
}
// Serializable::fromData (the matching toData and other Serializable
// methods are not shown in this excerpt).
virtual Serializable* fromData(DataInput& input)
{
lock();
input.readInt(&m_field1);
input.readInt(&m_field2);
input.readInt(&m_field3);
unlock();
return this;
}
DeltaPtr clone()
{
return DeltaPtr(new DeltaExample(this));
}
virtual ~DeltaExample()
{
}
};
#endif
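Deltas are only sent when the application updates a cached value and puts it again. The following
sketch shows that flow; it assumes a connected cache in cachePtr, a region named exampleRegion,
and that the DeltaExample type has been registered for deserialization (registration is not shown
in the example above).

// Sketch: only the delta travels on the second put.
RegionPtr regionPtr = cachePtr->getRegion("exampleRegion");

DeltaExample* value = new DeltaExample();
CacheablePtr valuePtr(value);
regionPtr->put("Key1", valuePtr);    // the first put sends the full value

value->setField1(42);                // records the change as a pending delta
regionPtr->put("Key1", valuePtr);    // hasDelta/toDelta run; only the delta is sent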
14 Programming Examples
This chapter provides a set of programming examples to help you understand the GemFire Enterprise
native client API.
In this chapter:
Declaring a Native Client Region (page 272)
API Programming Example C# (page 273)
API Programming Example C++ (page 275)
Data Serialization Examples (page 276)
<cache>
<region name = "root1" >
<region-attributes refid="CACHING_PROXY" pool-name="poolName1"/>
</region>
<region name = "root2" >
<region-attributes refid="PROXY" pool-name="poolName2"/>
</region>
<pool name="poolName1" subscription-enabled="true">
<server host="localhost" port="40404" />
</pool>
<pool name="poolName2" subscription-enabled="true">
<server host="localhost" port="40404" />
</pool>
</cache>
The pool defines a list of cache servers that the native client region can communicate with.
The CACHING_PROXY setting causes the client region to cache data and to communicate with the
servers. The PROXY setting causes the client region to communicate with the servers, but cache no
data.
The pool subscription-enabled attribute, if true, indicates that the client should receive data
updates when server data changes.
Native clients do not specify cache loaders or writers, which are provided by the server.
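The same pool and region setup can also be created programmatically. The following sketch mirrors the
declarative example above; the pool name, server address, and region name are taken from the XML, and
the PoolFactory and RegionFactory method names should be checked against the API documentation for
your release.

// Sketch: create the connection pool and a caching proxy region through the API.
CacheFactoryPtr cacheFactory = CacheFactory::createCacheFactory();
CachePtr cachePtr = cacheFactory->create();

PoolFactoryPtr poolFactory = PoolManager::createFactory();
poolFactory->addServer("localhost", 40404);
poolFactory->setSubscriptionEnabled(true);
poolFactory->create("poolName1");

RegionPtr regionPtr = cachePtr->createRegionFactory(CACHING_PROXY)
    ->setPoolName("poolName1")
    ->create("root1");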
Example 14.2 Demonstrating Gets and Puts Using the C# .NET API
using System;
using GemStone.GemFire.Cache;

namespace GemStone.GemFire.Cache.QuickStart
{
  // The BasicOperations QuickStart example.
  class BasicOperations
  {
    static void Main(string[] args)
    {
      try
      {
        // Create a GemFire Cache.
        CacheFactory cacheFactory = CacheFactory.CreateCacheFactory(null);
        Cache cache = cacheFactory.Create();
        // Create the example Region programmatically.
        RegionFactory regionFactory =
          cache.CreateRegionFactory(RegionShortcut.CACHING_PROXY);
        Region region = regionFactory.Create("exampleRegion");
        // Put an Entry (Key and Value pair) into the Region using the
        // direct/shortcut method.
        region.Put("Key1", "Value1");
        // Get Entries back out of the Region.
        IGFSerializable result1 = region.Get("Key1");
      }
      catch (GemFireException gfex)
      {
        Console.WriteLine("BasicOperations GemFire Exception: {0}", gfex.Message);
      }
    }
  }
}
CacheablePtr TestCacheLoader::load(
const RegionPtr& region,
const CacheableKeyPtr& key,
const UserDataPtr& aCallbackArgument)
{
m_bInvoked = true;
printf( "CacheLoader.load : %s\n", printEvent(region, key,
aCallbackArgument).c_str());
CacheablePtr value = NULLPTR;
try {
value = region->get(key, aCallbackArgument);
} catch(Exception& ex) {
fprintf(stderr, "Exception in TestCacheCallback::printEvent [%s]\n",
ex.getMessage());
}
if (value != NULLPTR) {
printf( "Loader found value: ");
std::string formatValue = printEntryValue(value);
printf( "%s\n",formatValue.c_str());
} else {
printf( " Loader did not find a value");
}
return value;
}
C++ Example
Example 14.4 Implementing an Embedded Object Using the C++ API
class User
: public Serializable
{
private:
std::string name;
int32_t userId;
ExampleObject *eo;
public:
User( std::string name, int32_t userId )
: name( name ),userId( userId )
{
eo = new ExampleObject(this->userId);
}
~User() {
if (eo != NULL) delete eo;
eo = NULL;
}
User ()
{
name = "";
userId = 0;
eo = new ExampleObject(userId);
}
int32_t getUserId( )
{
return userId;
}
std::string getName( )
{
return name;
}
ExampleObject *getEO()
{
return eo;
}
// Serializable::fromData (toData and the other Serializable methods are
// not shown in this excerpt).
virtual Serializable* fromData( DataInput& input )
{
char *readbuf;
input.readASCII( &readbuf );
name = std::string(readbuf);
input.freeUTFMemory( readbuf );
input.readInt( &userId );
eo->fromData(input);
return this;
}
};
Example 14.5 Implementing Complex Data Types Using the C++ API
class ExampleObject
: public Serializable
{
private:
double double_field;
float float_field;
long long_field;
int int_field;
short short_field;
std::string string_field;
std::vector<std::string> string_vector;
public:
ExampleObject() {
double_field = 0.0;
float_field = 0.0;
long_field = 0;
int_field = 0;
short_field = 0;
string_field = "";
string_vector.clear();
}
~ExampleObject() {
}
ExampleObject(int id) {
char buf[64];
sprintf(buf, "%d", id);
std::string sValue(buf);
int_field = id;
long_field = int_field;
short_field = int_field;
double_field = (double)int_field;
float_field = (float)int_field;
string_field = sValue;
string_vector.clear();
for (int i=0; i<3; i++) {
string_vector.push_back(sValue);
}
}
ExampleObject(std::string sValue) {
int_field = atoi(sValue.c_str());
long_field = int_field;
short_field = int_field;
double_field = (double)int_field;
float_field = (float)int_field;
string_field = sValue;
string_vector.clear();
for (int i=0; i<3; i++) {
string_vector.push_back(sValue);
}
}
CacheableStringPtr toString() const {
char buf[1024];
std::string sValue = "ExampleObject: ";
sprintf(buf,"%f(double),%f(double),%ld(long),%d(int),%d(short),",
double_field,float_field,long_field,int_field,short_field);
sValue += std::string(buf) + string_field + "(string),";
if (string_vector.size() >0) {
sValue += "[";
for (unsigned int i=0; i<string_vector.size(); i++) {
sValue += string_vector[i];
if (i != string_vector.size()-1) {
sValue += ",";
}
}
sValue += "](string vector)";
}
return CacheableString::create( sValue.c_str() );
}
double getDouble_field() {
return double_field;
}
float getFloat_field() {
return float_field;
}
long getLong_field() {
return long_field;
}
int getInt_field() {
return int_field;
}
short getShort_field() {
return short_field;
}
std::string & getString_field() {
return string_field;
}
std::vector<std::string> & getString_vector( ) {
return string_vector;
}
// Serializable::fromData (toData and the reads of the numeric fields are
// not shown in this excerpt).
virtual Serializable* fromData( DataInput& input )
{
char *readbuf;
int32_t itemCount = 0;
input.readASCII( &readbuf );
string_field = std::string(readbuf);
input.freeUTFMemory( readbuf );
string_vector.clear();
input.readInt( (int32_t*) &itemCount );
for( int32_t idx = 0; idx < itemCount; idx++ ) {
// read from serialization buffer into a character array
input.readASCII( &readbuf );
// and store in the history list of strings.
string_vector.push_back( readbuf );
input.freeUTFMemory( readbuf );
}
return this;
}
};
typedef SharedPtr<ExampleObject> ExampleObjectPtr;
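Before ExampleObject can be exchanged with the cache, the type must also be registered with the
serialization framework. The registration call is not shown in the excerpt above; assuming the class
provides a static factory method (named createDeserializable here for illustration) and a classId
implementation, it takes this form:

// Register the user-defined type so GemFire can instantiate it when
// deserializing (the factory method name is an assumption).
Serializable::registerType(ExampleObject::createDeserializable);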
C# Example
Example 14.6 Implementing a User-Defined Serializable Object Using the C# API
// (The class declaration and earlier members of the User class are not
// shown in this excerpt.)
public ExampleObject EO
{
get
{
return m_eo;
}
set
{
m_eo = value;
}
}
// Our TypeFactoryMethod
public static IGFSerializable CreateInstance()
{
return new User();
}
#endregion
}
Java Example
Example 14.7 Implementing an Embedded Object Using the Java API
Example 14.8 Implementing Complex Data Types Using the Java API
// (The code for Example 14.7 and the beginning of Example 14.8 are not
// shown in this excerpt.)
public void setLong_field( long long_field ) {
this.long_field = long_field;
}
public float getFloat_field( ) {
return this.float_field;
}
public void setFloat_field( float float_field ) {
this.float_field = float_field;
}
public int getInt_field( ) {
return this.int_field;
}
public void setInt_field( int int_field ) {
this.int_field = int_field;
}
public short getShort_field( ) {
return this.short_field;
}
public void setShort_field( short short_field ) {
this.short_field = short_field;
}
public java.lang.String getString_field( ) {
return this.string_field;
}
public void setString_field( java.lang.String string_field ) {
this.string_field = string_field;
}
public Vector getString_vector( ) {
return this.string_vector;
}
public void setString_vector( Vector string_vector ) {
this.string_vector = string_vector;
}
public void toData(DataOutput out) throws IOException
{
out.writeDouble(double_field);
out.writeFloat(float_field);
out.writeLong(long_field);
out.writeInt(int_field);
out.writeShort(short_field);
out.writeUTF(string_field);
out.writeInt(string_vector.size());
for (int i = 0; i < string_vector.size(); i++)
{
out.writeUTF((String)string_vector.elementAt(i));
}
}
public void fromData(DataInput in) throws IOException,
ClassNotFoundException
{
this.double_field = in.readDouble();
this.float_field = in.readFloat();
this.long_field = in.readLong();
this.int_field = in.readInt();
this.short_field = in.readShort();
this.string_field = in.readUTF();
this.string_vector = new Vector();
int size = in.readInt();
for (int i = 0; i < size; i++)
{
this.string_vector.addElement(in.readUTF());
}
}
This appendix describes how to download, build and install the Berkeley database libraries for use with
disk overflow. The commands that you type are shown in boldface fixed font. See Berkeley DB
Persistence Manager on page 58 for additional information about the Berkeley database libraries.
In this appendix:
Linux Installation (page 290)
Windows Installation (page 291)
Solaris Installation (page 292)
3. You will be prompted to convert the project files to current Visual C++ format. Select "Yes to
All".
4. For 64-bit only, choose the project configuration from the drop-down menu on the tool bar ("Debug
AMD64", "Release AMD64"). Change the CPU type from Win32 to x64.
B gfcpp.properties Example File
The gfcpp.properties file provides a way to configure distributed system connections for the
GemFire Enterprise native client. The following example shows the format of a gfcpp.properties
file. The first two attributes in this example should be set by programmers during application
development, while other attributes are set on-site during system integration. The properties and their
default settings that can be set in this file are described in detail in Table 6.2 on page 145.
#
## .NET AppDomain support
#
#appdomain-enabled=false
#
## Misc
#
#conflate-events=server
#disable-shuffling-of-endpoints=false
#grid-client=false
#max-fe-threads=
#max-socket-buffer-size=66560
# the units are in seconds.
#connect-timeout=59
#notify-ack-interval=10
#notify-dupcheck-life=300
#ping-interval=10
#redundancy-monitor-interval=10
#auto-ready-for-events=true
#
## module name of the initializer pointing to sample
## implementation from templates/security
#security-client-auth-library=securityImpl
## static method name of the library mentioned above
#security-client-auth-factory=createUserPasswordAuthInitInstance
## credential for Dummy Authenticator configured in server.
## note: security-password property will be inserted by the initializer
## mentioned in the above property.
#security-username=root
C Query Language Grammar and Reserved Words
This appendix lists the reserved words and the grammar for the query language used in the GemFire
Enterprise native client.
Reserved Keywords
These words are reserved for the query language and may not be used as identifiers. The words with an
asterisk (*) after them are not currently used by the native client, but are reserved for future
implementations.
Language Grammar
symbol ::= expression
n A nonterminal symbol that has to appear at some place within the grammar on the
left side of a rule. All nonterminal symbols have to be derived to be terminal
symbols.
t The terminal symbol t (shown in bold)
x y x followed by y
x | y x or y
(x | y ) x or y
[ x ] x or empty
{ x } possibly empty sequence of x
query_program ::= [ imports ; ] query [semicolon]
imports ::= import { ; import }
import ::= IMPORT qualifiedName [ AS identifier ]
query ::= selectExpr | expr
selectExpr ::= SELECT DISTINCT projectionAttributes fromClause [ whereClause ]
projectionAttributes ::= * | projectionList
projectionList ::= projection { comma projection }
projection ::= field | expr [ AS identifier ]
field ::= identifier colon expr
fromClause ::= FROM iteratorDef { comma iteratorDef }
iteratorDef ::= expr [ [ AS ] identifier ] [ TYPE identifier ]
| identifier IN expr [ TYPE identifier ]
whereClause ::= WHERE expr
expr ::= castExpr
castExpr ::= orExpr | left_paren identifier right_paren castExpr
orExpr ::= andExpr { OR andExpr }
andExpr ::= equalityExpr { AND equalityExpr }
equalityExpr ::= relationalExpr { ( = | <> | != ) relationalExpr }
relationalExpr ::= inExpr { ( < | <= | > | >= ) inExpr }
inExpr ::= unaryExpr { IN unaryExpr }
unaryExpr ::= [ NOT ] unaryExpr
postfixExpr ::= primaryExpr { left_bracket expr right_bracket }
| primaryExpr { dot identifier [ argList ] }
argList ::= left_paren [ valueList ] right_paren
qualifiedName ::= identifier { dot identifier }
primaryExpr ::= functionExpr
| identifier [ argList ]
| undefinedExpr
| collectionConstruction
| queryParam
| literal
| ( query )
| region_path
functionExpr ::= ELEMENT left_paren query right_paren
| NVL left_paren query comma query right_paren
| TO_DATE left_paren query right_paren
undefinedExpr ::= IS_UNDEFINED left_paren query right_paren
| IS_DEFINED left_paren query right_paren
collectionConstruction ::= SET left_paren [ valueList ] right_paren
valueList ::= expr { comma expr }
queryParam ::= $ integerLiteral
Language Notes
Query language keywords such as SELECT, NULL, and DATE are case-insensitive. Identifiers such as
attribute names, method names, and path expressions are case-sensitive.
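For example, in the query SELECT DISTINCT * FROM /exampleRegion e WHERE e.status = 'active' (the region path and field name here are only illustrative), the keywords may be written in any case, but status must match the field name on the cached objects exactly.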
Comment lines begin with -- (double dash).
Comment blocks begin with /* and end with */.
String literals are delimited by single quotes. Embedded single quotes are doubled. Examples:
'Hello' value = Hello
'He said, ''Hello''' value = He said, 'Hello'
Character literals begin with the CHAR keyword followed by the character in single quotation marks.
The single quotation mark character itself is represented as CHAR'''' (with four single quotation
marks).
For the TIMESTAMP literal, there is a maximum of nine digits after the decimal point.
D System Statistics
This appendix provides information on the GemFire installation standard statistics for caching and
distribution activities.
In this appendix:
Sampling Statistics (page 301)
System Performance Statistics (page 302)
Operating System Statistics (page 305)
Sampling Statistics
When applications and cache servers join a distributed system, they indicate whether to enable statistics
sampling and whether to archive the statistics that are gathered. For more information about configuring
statistics, see Configuration Options on page 141. The following are statistics related to the statistic
sampler.
Region Statistics
These methods help you get the statistics of a region. The primary statistics are:
CQ Service Statistics
Using these methods, you can get aggregate statistical information about the continuous queries of a
client:
Pool Statistics
The pool object provides the following statistics.
Delta Statistics
deltaMessageFailures Total number of messages containing a delta that were received from the
server but could not be processed after reception.
deltaPuts Total number of puts containing a delta that have been sent from the client
to the server.
processedDeltaMessages Total number of messages containing a delta that were received from the
server and processed after reception.
processedDeltaMessagesTime Total time spent applying deltas (received from the server) to existing
values at the client.
handles The total number of handles currently open by this process. This number
is the sum of the handles currently open by each thread in this process.
priorityBase The current base priority of the process. Threads within a process can
raise and lower their own base priority relative to the process's base priority.
threads Number of threads currently active in this process. An instruction is the
basic unit of execution in a processor, and a thread is the object that exe-
cutes instructions. Every running process has at least one thread.
activeTime The elapsed time in milliseconds that all of the threads of this process
used the processor to execute instructions. An instruction is the basic
unit of execution in a computer, a thread is the object that executes
instructions, and a process is the object created when a program is run.
Code executed to handle some hardware interrupts and trap conditions
are included in this count.
pageFaults The total number of Page Faults by the threads executing in this process.
A page fault occurs when a thread refers to a virtual memory page that is
not in its working set in main memory. This will not cause the page to be
fetched from disk if it is on the standby list and hence already in main
memory, or if it is in use by another process with whom the page is
shared.
pageFileSize The current number of bytes this process has used in the paging file(s).
Paging files are used to store pages of memory used by the process that are not
contained in other files. Paging files are shared by all processes, and lack of
space in paging files can prevent other processes from allocating memory.
pageFileSizePeak The maximum number of bytes this process has used in the paging file(s).
Paging files are used to store pages of memory used by the process that are not
contained in other files. Paging files are shared by all processes, and lack of
space in paging files can prevent other processes from allocating memory.
privateSize The current number of bytes this process has allocated that cannot be
shared with other processes.
systemTime The elapsed time in milliseconds that the threads of the process have
spent executing code in privileged mode. When a Windows system service is
called, the service will often run in Privileged Mode to gain access to
system-private data. Such data is protected from access by threads executing in
user mode. Calls to the system can be explicit or implicit, such as page faults
or interrupts. Unlike some early operating systems, Windows uses process
boundaries for subsystem protection in addition to the traditional protection of
user and privileged modes. These subsystem processes provide additional
protection. Therefore, some work done by Windows on behalf of your application
might appear in other subsystem processes in addition to the privileged time in
your process.
userTime The elapsed time in milliseconds that this process's threads have spent
executing code in user mode. Applications, environment subsystems, and integral
subsystems execute in user mode. Code executing in User Mode cannot damage the
integrity of the Windows Executive, Kernel, and device drivers. Unlike some
early operating systems, Windows uses process boundaries for subsystem
protection in addition to the traditional protection of user and privileged
modes. These subsystem processes provide additional protection. Therefore, some
work done by Windows on behalf of your application might appear in other
subsystem processes in addition to the privileged time in your process.
virtualSize Virtual Bytes is the current size in bytes of the virtual address space the
process is using. Use of virtual address space does not necessarily imply
corresponding use of either disk or main memory pages. Virtual space is finite,
and by using too much, the process can limit its ability to load libraries.
virtualSizePeak The maximum number of bytes of virtual address space the process has
used at any one time. Use of virtual address space does not necessarily
imply corresponding use of either disk or main memory pages. Virtual
space is however finite, and by using too much, the process might limit
its ability to load libraries.
workingSetSize The current number of bytes in the Working Set of this process. The
Working Set is the set of memory pages touched recently by the threads
in the process. If free memory in the computer is above a threshold,
pages are left in the Working Set of a process even if they are not in use.
When free memory falls below a threshold, pages are trimmed from
Working Sets. If they are needed they will then be soft-faulted back into
the Working Set before they are paged out to disk.
workingSetSizePeak The maximum number of bytes in the Working Set of this process at any
point in time. The Working Set is the set of memory pages touched
recently by the threads in the process. If free memory in the computer is
above a threshold, pages are left in the Working Set of a process even if
they are not in use. When free memory falls below a threshold, pages are
trimmed from Working Sets. If they are needed they will then be soft
faulted back into the Working Set before they leave main memory.
cpuUsage Percentage CPU used by this process.
WindowsProcessStats Statistics for a Microsoft Windows process.
Glossary
API Application Programming Interface. GemFire provides APIs to cached data for C++ and
.NET applications.
application program A program designed to perform a specific function directly for the user or, in some cases,
for another application program. GemFire applications use the GemFire application
programming interfaces (APIs) to modify cached data.
cache A cache created by an application or cache server process. For the process, its cache is the
point of access to all caching features and the only view of the cache that is available. Cache
creation requires membership in the distributed system. See also local cache and remote
cache.
cache configuration file An XML file that declares the initial configuration of a cache, commonly named
cache.xml. C++ and .NET applications can configure the cache additionally through the
GemFire programming APIs.
cache listener User-implemented plug-in for receiving and handling region entry events. A region's cache
listener is called after an entry in the local cache is modified.
cache loader User-implemented plug-in for loading data into a region. A region's cache loader is used to
load data that is requested of the region but is not available in the distributed system. For a
distributed region, the loader that is used can be in a different cache from the one where the
data-request operation originated. See also cache writer and netSearch.
cache server A long-running, configurable caching process, generally used to serve cached data to the
applications. Usually, cache servers are configured to operate as servers in a client-server
topology and their regions are configured to be replicates. See also server.
cache writer User-implemented plug-in intended for synchronizing the cache with an outside data
source. A region's cache writer is a synchronous listener to cache data events. The cache
writer has the ability to abort a data modification. See also cache loader.
caching enabled Specifies whether data is cached in the region. GemFire gives you the option of running
applications without entry caching. For example, you can configure a distributed system as
a simple messaging service.
client In a client-server topology, clients can connect to cache servers, create new regions on the
cache server, and store data in the cache server region. Clients can also connect to existing
regions on a cache server and do directed gets and puts on the cache server. Clients do not
track membership information about other clients, nor do they share information with other
clients.
concurrency level An estimate of the number of threads expected to concurrently modify values in the region.
The actual concurrency may vary; this value is used to optimize the allocation of system
resources.
connection What an application uses to access a GemFire distributed system. An application can
connect to a GemFire system by calling the DistributedSystem::connect function
with the appropriate parameter settings. An application must connect to a distributed
system to gain access to GemFire functionality.
disk policy Determines whether LRU entries exceeding the entries limit for a caching region are
destroyed or written to disk.
distributed scope Enables a region to automatically send entry value updates to remote caches and
incorporate updates received from remote caches. The scope identifies whether distribution
operations must wait for acknowledgement from other caches before continuing. A
distributed region's cache loader and cache writer (defined in the local cache) can be
invoked for operations originating in remote caches.
distributed system One or more GemFire system members that have been configured to communicate with
each other, forming a single, logical system. Also used for the object that is instantiated to
create the connection between the distributed system members.
DTD Document Type Definition. A language that describes the contents of a Standard
Generalized Markup Language (SGML) document. The DTD is also used with XML. The
DTD definitions can be embedded within an XML document or in a separate file.
entry A data object in a region. A region entry consists of a key and a value. The value is either
null (invalid) or an object. A region entry knows what region it is in. See also region data,
entry key, and entry value.
expiration A cached object expires when its time-to-live or idle timeout counters are exhausted. A
region has one set of expiration attributes for itself and one set for all region entries.
expiration action The action to be taken when a cached object expires. The expiration action specifies
whether the object is to be invalidated or destroyed, and whether the action is to be
performed only in the local cache or throughout the distributed system. A destroyed object
is completely removed from the cache. A region is invalidated by invalidating all entries
contained in the region. An entry is invalidated by having its value marked as invalid.
Expiration attributes are set at the region level for the region and at the entry level for
entries. See also idle timeout and time-to-live.
factory method An interface for creating an object which at creation time can let its subclasses decide
which class to instantiate. The factory method helps instantiate the appropriate subclass by
creating the correct object from a group of related classes.
idle timeout The amount of time a region or region entry may remain in the cache unaccessed before
being expired. Access to an entry includes any get operation and any operation that resets
the entry's time-to-live counter. Region access includes any operation that resets an entry's
idle timeout, and any operation that resets the region's time-to-live.
Idle timeout attributes are set at the region level for the region and at the entry level for
entries. See also time-to-live and expiration action.
interest list A mechanism that allows a region to maintain information about receivers for a particular
key-value pair in the region, and send out updates only to those nodes. Interest lists are
particularly useful when you expect a large number of updates on a key as part of the entry
life cycle.
invalid The state of an object when the cache holding it does not have the current value of the
object.
invalidate Remove only the value of an entry in a cache, not the entry itself.
listener An event handler. The listener registers its interest in one or more events and is notified
when the events occur.
load factor A region attribute that sets initial parameters on the underlying hashmap used for storing
region entries.
local cache The part of the distributed cache that is resident in the current process. This term is used to
differentiate the cache where a specific operation is being performed from other caches in
the distributed system. See also remote cache.
local scope Enables a region to hold a private data set that is not visible to other caches. See also scope.
LRU Least Recently Used. Refers to a region entry or entries most eligible for eviction due to
lack of interest by client applications.
LRU entries limit A region attribute that sets the maximum number of entries to hold in a caching region.
When the capacity of the caching region is exceeded, LRU is used to evict entries.
membership Applications and cache servers connect to a GemFire distributed system by invoking the
static function DistributedSystem::connect. Through this connection, the
application gains access to the APIs for distributed data caches. When a C++ or .NET
application connects to a distributed system, it specifies the system it is connecting to by
indicating the communication protocol and address to use to find other system members.
netSearch The method used by GemFire to search remote caches for a data entry that is not found in
the local cache region. This operates only on distributed regions.
overflows An eviction option that causes the values of LRU entries to be moved to disk when the
region reaches capacity. See disk policy.
persistence manager The persistence manager manages the memory-to-disk and disk-to-memory actions for
LRU entries. See overflows.
region A logical grouping of data within a cache. Regions are used to store data entries (see entry).
Each region has a set of attributes governing activities such as expiration, distribution, data
loading, events, and eviction control.
region attributes The class of attributes governing the creation, location, distribution, and management of a
region and its entries.
remote cache Any part of the distributed cache that is resident in a process other than the current one. If
an application or cache server does not have a data entry in the region in its local cache, it
can do a netSearch in an attempt to retrieve the entry from the region in a remote cache.
See also local cache.
scope Region attribute. Identifies whether a region keeps its entries private or automatically sends
entry value updates to remote caches and incorporates updates received from remote
caches. The scope also identifies whether distribution operations must wait for
acknowledgement from other caches before continuing. See also distributed scope and
local scope.
server In a client-server topology, the server manages membership and allows remote operations.
The server maintains membership information for its clients in the distributed system,
along with information about peer applications and other servers in the system. See also
cache server.
time-to-live The amount of time a region or region entry may remain in the cache without being
modified before being expired. Entry modification includes creation, update, and removal.
Region modification includes creation, update, or removal of the region or any of its
entries.
Time-to-live attributes are set at the region level for the region, and at the entry level for
entries. See also idle timeout and expiration action.
XML EXtensible Markup Language. An open standard for describing data from the W3C, XML
is a markup language similar to HTML. Both are designed to describe and transform data,
but where HTML uses predefined tags, XML allows tags to be defined inside the XML
document itself. Using XML, virtually any data item can be identified. The XML
programmer creates and implements data-appropriate tags whose syntax is defined in a
DTD file.