Informix 11.7 Bootcamp
Informix High-Availability and Scalability
Information Management Technology Ecosystems

© 2010 IBM Corporation


Agenda
• High Availability BEFORE Informix 11.10
• High Availability Data Replication (HDR)
• Enterprise Replication (ER)
• New High Availability Features in Informix 11.10
• MACH 11 and required subcomponents
• Remote Standalone Secondary (RSS)
• Shared Disk Secondary (SDS)
• Continuous Log Restore (CLR)
• New Features in Informix 11.5
• Updatable Secondaries
• Connection Manager
• New Features in Informix 11.70
• Flexible Grid
• Connection Manager Grid Support
• Other Supporting Features
• Appendix
2 © 2010 IBM Corporation
High-Availability Data Replication (HDR)
• Use
  • Disaster recovery
  • Two identical servers on two identical machines
    • Primary server
    • Secondary server
• Primary server
  • Fully functional server
  • All database activity - inserts/updates/deletes - is performed on this instance
  • Sends logs to the secondary server
• Secondary server
  • Read-only server - allows read-only queries
  • Always in recovery mode
  • Receives logs from the primary and replays them to stay in sync with the primary

When the primary server goes down, the secondary server takes over as a standard server.

• Simple to administer
  • Little configuration required
  • Just back up the primary and restore to the secondary

[Diagram: HDR traffic flows from the Primary (Blade Server A, <New Orleans>, Building-A) to the read-only HDR Secondary (Blade Server B, <Memphis>); client applications connect to the primary, and read-only clients can use the secondary.]
© 2010 IBM Corporation


High-Availability Data Replication (HDR): Easy Setup
• Requirements:
• Same hardware (vendor and architecture)
• Logged databases
• Same storage paths on each machine

• Back up the primary system
  • ontape -s -L 0
• Set the type of the primary
  • onmode -d primary <secondary_server_name>
• Restore the backup on the secondary
  • ontape -p
• Change the type of the secondary
  • onmode -d secondary <primary_server_name>

• DONE!
© 2010 IBM Corporation
Enterprise Replication (ER)
• Use
  • Workload partitioning
  • Capacity relief
• The entire group of servers is the replication domain
  • Any node within the domain can replicate data with any other node in the domain
  • Servers in the domain can be configured to be root, non-root, and leaf
• Multiple topologies supported
  • Fully Connected
  • Hierarchical Routing
  • Hierarchical Tree
  • Forest of Trees
• Heterogeneous OS, Informix versions, and H/W for maximum implementation flexibility
• Secure data communication
• Update anywhere (bi-directional replication)
• Conflicting updates resolved by timestamp, stored procedure, or always-apply
• Based on log snooping rather than transaction based
• Low data transfer latency
• Already integrated in the server!

BENEFITS
• Flexible
  • Choose what to replicate - down to the column level!
  • Choose where to replicate - all nodes or a selected subset
• Scalable
  • Add or remove servers/nodes easily
© 2010 IBM Corporation


What is a High Availability Cluster?

• Extends HDR to support a primary server with many secondary servers

• Three new types of secondary instances:
  • Shared Disk Secondary (SDS)
  • Remote Standalone Secondary (RSS)
  • Continuous Log Restore (CLR) or “near-line” standby**

• A High Availability Cluster is not just 1-to-N HDR
  • It treats all of the new forms of secondary as a multi-tiered availability solution

© 2010 IBM Corporation


Supporting Infrastructure for High Availability Cluster

• Two new sub-components introduced to support the High Availability Cluster

• Server Multiplexer (SMX)
  • Automatically enabled internally to establish network connections between the instances
  • Supports multiple logical connections over a single TCP connection
  • Sends packets without waiting for a return “ack”

• Index Page Logging
  • Allows index pages to be copied to the logical log when initially creating the index
  • HDR currently transfers the index pages to the secondary when creating the index
  • Required for RSS
© 2010 IBM Corporation
Remote Standalone Secondary (RSS)

• Uses
  • Capacity relief
  • Web applications / reporting
  • Ideal for disaster recovery

• Extends HDR to include a new type of secondary (RSS)
  • Replication to multiple remote secondary nodes
  • Receives logs from the primary
  • Has its own set of disks to manage
• Primary performance does not affect RSS servers, and vice versa
• Only manual failover supported
• Requires Index Page Logging to be turned on
• Uses full-duplex communication (SMX) with RSS nodes
• Does not support SYNC mode
• Can have 0 to N asynchronous RSS nodes
• Supports HDR Secondary - RSS conversions

BENEFITS
• Allows simultaneous local and remote replication for HA
• Supports read-write operations
• Simple online setup and use

[Diagram: the primary node replicates to an HDR secondary and to remote secondary nodes RSS #1 and RSS #2.]
© 2010 IBM Corporation
Remote Standalone Secondary (RSS): Easy Setup
• Requirements: (similar to HDR)
• Same hardware (vendor and architecture)
• Logged databases
• Same storage paths on each machine

• Configuration:
  • Primary:
    • LOG_INDEX_BUILDS: Enable index page logging
      • Dynamically: onmode -wf LOG_INDEX_BUILDS=1

• Identify the RSS server on the primary
  • onmode -d add rss <rss_server_name>
• Back up the primary system
  • ontape -s -L 0
• Restore the backup on the secondary
  • ontape -p
• Identify the primary on the RSS server
  • onmode -d rss <primary_server_name>

• DONE!
© 2010 IBM Corporation
New RSS Configuration Parameters (11.50.xC5)
Dynamic – onmode –wf/wm
• DELAY_APPLY
• Used to configure RS secondary servers to wait for a
specified period of time before applying logs
• LOG_STAGING_DIR
• Specifies the location of log files received from the
primary server when configuring delayed application of
log files on RS secondary servers
• STOP_APPLY
  • Used to stop an RS secondary server from applying log files received from the primary server

Useful when a problem on the Primary should not be replicated to the Secondary server(s)
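For illustration, a minimal onconfig fragment combining the three parameters on an RS secondary might look like this (the one-hour delay and staging path are assumed example values, not defaults):

  DELAY_APPLY      1H            # apply logs received from the primary after a one-hour delay
  LOG_STAGING_DIR  /ifx/staging  # directory where received logs are staged (must be writable by informix)
  STOP_APPLY       0             # 0 = apply logs normally; set to 1 (onmode -wm STOP_APPLY=1) to pause the apply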

© 2010 IBM Corporation


Shared Disk Secondary (SDS)

• Extends HDR to include a new type of secondary (SDS)
• SDS nodes share data storage with the primary
  • Can have 0 to N SDS nodes

• Uses
  • Adjust capacity online as demand changes
  • Lower data storage costs

• How does it work?
  • The primary transmits the current Log Sequence Number (LSN) as it flushes its logs
  • The SDS instance receives the LSN from the primary and reads the logs from the shared disk
  • The SDS instance applies the log changes to its buffer cache
  • The SDS instance acknowledges the processed LSN back to the primary
  • Dirty reads are allowed on SDS nodes
  • The primary can fail over to any SDS node

BENEFITS
• Provides online capacity relief
• Multiple redundancy
• Simple to set up and flexible (easily scalable)
• Low cost - does not duplicate disk space
• Does not require specialized hardware
• Can coexist with ER, HDR, and RSS secondary nodes

[Diagram: the primary and the SDS nodes all attach to the same shared disk (optionally hardware mirrored); the primary sends the current LSN and each SDS node returns an ACK.]
© 2010 IBM Corporation


Shared Disk Secondary (SDS): Easy Setup
• Requirements:
  • Same hardware (vendor and architecture)
  • Logged databases
  • Same storage paths on each machine

• Summary - Primary: mark the primary as an SDS node; SDS: enable SDS in the onconfig and start the SDS node

• Configuration:
  • Primary
    • SDS_TIMEOUT: Wait time in seconds for acknowledgement from an SDS server
  • SDS
    • SDS_ENABLE: Set to 1 to enable the SDS server
    • SDS_PAGING: Path to two buffer paging files that may be used between checkpoints to save pages
    • SDS_TEMPDBS: Temporary dbspace used by an SDS server
  • Change the following ONCONFIG parameters to be unique for this SDS instance
    • DBSERVERALIASES, DBSERVERNAME, MSGPATH, SERVERNUM
    • Leave all other parameters the same

• On the primary, identify the primary as the shared disk primary
  • onmode -d set SDS primary <name_of_primary_instance>
• Start the shared disk secondary
  • oninit

• DONE!
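As a sketch, the SDS-related onconfig entries might look like the following (paths, sizes, and names are assumed example values):

  # On the primary
  SDS_TIMEOUT   20

  # On the SDS secondary
  SDS_ENABLE    1
  SDS_PAGING    /ifx/sds/page_1,/ifx/sds/page_2
  SDS_TEMPDBS   sdstmpdbs1,/ifx/sds/sdstmpdbs1,2,0,16000
  DBSERVERNAME  sds_1           # unique per SDS instance
  SERVERNUM     2               # unique per SDS instance
  MSGPATH       /ifx/sds_1.log  # unique per SDS instance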
© 2010 IBM Corporation
Continuous Log Restore (CLR)
• Also known as “Log Shipping”
• The standby server stays in roll-forward mode
• Logical log backups made from an IDS instance are continuously restored on a second machine
• Allows logical recovery to span multiple ‘ontape/onbar’ commands/logs

BENEFITS
• Provides a secondary instance with ‘log file granularity’
• Does not impact the primary server
• Can co-exist with “the cluster” (HDR/RSS/SDS) as well as ER
• Useful when the backup site is totally isolated (i.e. no network)
• Ideal for disaster recovery
• Replay logs on the standby when convenient

[Diagram: the primary ships logical log backups that are restored on standby servers CLR1, CLR2, and CLR3.]
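A minimal command sketch of the CLR cycle with ontape (assuming the -C continuous-restore option; onbar offers an equivalent):

  # On the primary: back up full logical logs, then ship the backup files to the standby site
  ontape -a

  # On the standby (previously restored from a level-0 backup): apply each shipped
  # log backup and remain in roll-forward mode
  ontape -l -C

  # When the standby must be brought into service, complete the logical restore
  ontape -l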
© 2010 IBM Corporation
Updatable Secondary Servers

• Client applications can update data on secondary servers by using


redirected writes
• Secondary servers (SDS, RSS, and HDR) now support both DDL
(CREATE, ALTER, DROP, etc) and DML (INSERT, UPDATE, and
DELETE) statements
• The secondary server is not updated directly
• Transaction is transferred to the primary server to check for conflict
resolution and then the change is propagated back to the secondary
server
Updatable secondaries give the appearance that
updates are occurring directly on the secondary
server when in fact the transaction is transferred to
the primary server and then the change is
replicated to the secondary server
21 © 2010 IBM Corporation
Configuring an Updatable Secondary Server

• New $ONCONFIG parameter for High Availability Cluster secondaries:
  UPDATABLE_SECONDARY

  • Determines the number of communication pipes and proxy dispatchers used for redirected writes
  • A value of 0 (zero) disables updatable secondaries
  • An additional SMX send/receive pair is allocated for each redirected-write pipe
• Example:
  UPDATABLE_SECONDARY 3
  • Generates 4 smxsnd/smxrcv threads (3 for the pipes + 1 for normal High Availability Cluster communication)

• The value of UPDATABLE_SECONDARY should be set to no more than two times the number of CPU virtual processors.

22 © 2010 IBM Corporation


Updatable Secondary Servers – 11.5 Supported Features

• Data types supported (logged types)


• Standard built-in types
• UDTs stored in the database
• Logged smart BLOBs – including partition BLOBs
• Starting with 11.50.xC2, dirty read, last committed, and committed
read isolation are supported on all secondary nodes

• Supports explicit and implicit temporary tables


• DDLs not supported
• Update statistics
• Create database (with no logging)
• Create raw table
• Create temp table (with logging)
• Create external table
• Creating any type of virtual table
23 © 2010 IBM Corporation
Updatable Secondary - Conflict Resolution
• Two options for detecting update conflicts between nodes:
  1. The secondary sends “before” and “after” images to the primary (optimistic concurrency)
     • The primary compares the before image to the current row
  2. The secondary sends row version information along with the after image
     • The primary compares the row version number and checksum to the current row to detect collisions

If the primary node fails and the HDR/RSS/SDS secondary is promoted to the new primary, then the writes are automatically sent to the new primary!

If the “before” image on the secondary is different than the current image on the primary, then the write operation is not allowed and an EVERCONFLICT (-7350) error is returned.

[Diagram: update operations from the HDR secondary are redirected to the primary over the HDR traffic connection.]
© 2010 IBM Corporation
Updatable Secondary Servers: Row Versioning

• Two Shadow Columns required for each row in an


updatable table

• ifx_insert_checksum
• insert checksum value
• remains constant for the life of the row

• ifx_row_version
• update version
• incremented with each update of the row

25 © 2010 IBM Corporation


Updatable Secondary Servers: Configure Row Versioning
• To add row versioning to an existing table
alter table tablename add vercols;

• To delete row versioning


alter table tablename drop vercols;

• To create a new table with row versioning


create table tablename (
column_name datatype,
column_name datatype,
column_name datatype
) with vercols;

26 © 2010 IBM Corporation


Updatable Secondary Servers: Row Versioning

• Only needs to be applied to tables used for updatable secondary


operations

• Small rows may not benefit from turning on row versioning

• Shadow columns are not seen when the following


SQL/commands are run
• select * from ...
• dbschema -d dbname ...

• Use of row versions can reduce the network traffic and improve
performance
ƒ If no vercols, the entire secondary “before” image is sent to primary
and compared to its image
ƒ SLOW and network hog!!!!
ƒ Row Versioning Optional but STRONGLY RECOMMENDED

27 © 2010 IBM Corporation


Connection model prior to Informix 11.5

[Diagram: client applications in Austin, Frisco, Dallas, Las Vegas, Paris, Tokyo, and Sao Paulo each connect directly to a specific Informix instance.]
© 2010 IBM Corporation
29
Connection Model starting with Informix 11.5

[Diagram: the same client locations (Austin, Frisco, Dallas, Las Vegas, Paris, Tokyo, Sao Paulo) now connect to named services - OLTP, CATALOG, and MART - instead of specific instances.]

30 © 2010 IBM Corporation


The Connection Manager (CM)
• A daemon program that:
  1. Accepts a client connection request and then re-routes that connection to one of the “best fit” nodes in the Informix cluster
  2. Monitors and manages instance failovers

• Connection Manager Utility: oncmsm (Online Connection Manager and Server Monitor)

[Diagram: a client asks “Which catalog / OLTP / Las Vegas instance?” and the Connection Manager routes the connection to the appropriate instance among Austin, Frisco, Dallas, Las Vegas, Paris, Tokyo, and Sao Paulo.]
31 © 2010 IBM Corporation


The Connection Manager is FREE!

• Delivered in the CSDK


• No additional software you have to buy
• Completely integrated into client and server connectivity, not an
add-on

• Works with CSDK, JDBC and JCC


• DRDA is available but waiting on DRDA API enhancements
• .NET, Ruby and other support for interaction with the Connection
Manager coming soon too

• Resolves connection requests based on Service Level


Agreements (SLA)

32 © 2010 IBM Corporation


SLA-Based Client Routing

• Applications can either connect to a specific


instance
• database stores@inst1_primary

OR

• Applications can connect to a “server cloud” - aka SLA -


and be routed to the “best choice” available instance in the
“cloud”
• database stores@payroll
• database stores@catalog

33 © 2010 IBM Corporation


Sample SLA
SLA oltp=primary
SLA report=rss_1+rss_2+rss_3
SLA accounting=SDS+HDR
SLA catalog=rss_4+rss_5+rss_6
SLA test=RSS

• The following are “reserved words” for SLA definitions:


• primary – the cluster Primary
• SDS – any SDS instance
• HDR – the HDR Secondary
• RSS – any RSS instance
• An SLA definition can include “reserved words” as well as specific instance
names
• NOTE: Each SLA has a separate SQLHOSTS entry on CM server

34 © 2010 IBM Corporation


How does CM re-route clients based on “best choice”?

• All servers in a cluster maintain a weighted history of their resource usage
  • Free CPU cycles, number of threads on the ready queue, number of active threads, etc.

• Every 5 seconds, each server in the cluster sends this resource information to the Connection Manager

• The Connection Manager uses this information to determine the “best choice” within an SLA class
  • Clients are directed to the node that has the most free resources

© 2010 IBM Corporation


Setting up SQLHOSTS – NO Connection Manager
• There are different content requirements for this file depending on whether it is on a client, a CMSM host, or an IDS server

• SQLHOSTS on physical servers hosting IDS instances without a


Connection Manager

• Should have entries for all instances in the cluster

• Any other instances in the enterprise where ER replication or


distributed operations occur

• Example
production onsoctcp mac_1 prod_tcp
production_shm onipcshm mac_1 place_holder
sds_1 onsoctcp mac_2 sds1_tcp Cluster instances
hdr1 onsoctcp mac_3 hdr1_tcp
rss_1 onsoctcp mac_4 rss1_tcp
dev_1 onsoctcp georgetown dev_1_tcp

36 © 2010 IBM Corporation


Setting up SQLHOSTS – Connection Manager Host
• SQLHOSTS on physical servers hosting CMSM agent(s):
• Must have
• Entry for each SLA.
• Entry for cluster primary
• Strongly recommended to have
• Entries for all instances in the cluster

• Example:
production onsoctcp mac_1 prod_tcp
sds_1 onsoctcp mac_2 sds1_tcp
Cluster instances
hdr1 onsoctcp mac_3 hdr1_tcp
rss_1 onsoctcp mac_4 rss1_tcp

oltp onsoctcp concord oltp_tcp


report onsoctcp concord report_tcp SLA definitions
test onsoctcp concord test_tcp

37 © 2010 IBM Corporation


Setting up SQLHOSTS - Client
• SQLHOSTS on physical servers hosting Client
• Must have
• Entry for each IDS “service” to be used
• SLA.
and/or
• Direct instance connection.
• Example (client):

oltp onsoctcp concord oltp_tcp SLA definitions


report onsoctcp concord report_tcp

cerberus onsoctcp boulder cerb_tcp


Direct connects
orpheus onsoctcp littleton orph_tcp

production onsoctcp mac_1 prod_tcp


sds_1 onsoctcp mac_2 sds1_tcp optional, cluster entries
hdr1 onsoctcp mac_3 hdr1_tcp
rss_1 onsoctcp mac_4 rss1_tcp

38 © 2010 IBM Corporation


Failover Arbitrator (part of Connection Manager)

• The Failover Arbitrator provides automatic failover logic for high-availability clusters

• Monitors all nodes, checking for primary failure

• Performs failover (i.e. promotes a secondary to primary) when it is confirmed that the primary is down

• Released as part of the Connection Manager

• Will support failover to RSS, SDS, and HDR Secondary nodes

[Diagram: before promoting the HDR secondary, the arbitrator confirms “Is the primary really down?”; HDR and RSS traffic flow from the primary to the secondaries.]
39 © 2010 IBM Corporation


Fail Over Configuration (FOC) Parameter

• Order of failover is defined by an entry in the Connection Manager


configuration file $INFORMIXDIR/etc/cmsm.cfg
• FOC parameter format
FOC failover_configuration,timeout_value
failover_configuration: One or more of primary, SDS, HDR, RSS, or specific
instances, separated by a plus (+); sub-groups can be
created within parentheses
timeout_value: An amount of time (in seconds) the CM Agent will wait to hear
from the primary before executing a failover

• Set timeout_value to a reasonable value to take into account


temporary network burps, etc
• Example
FOC serv1+(serv2+SDS)+HDR+RSS,10
• Default: FOC SDS+HDR+RSS,0

40 © 2010 IBM Corporation


Complete Connection Manager Configuration File
• File format:
NAME ConnectionManagerName
SLA name=value
[ SLA name=value ] . . .
FOC failover_config, timeout_value
DEBUG [ 1 | 0 ]
LOGFILE <path_to_log_file>
• value can be an instance name, an instance type, or a list of instance names and
types, separated by a “+”

• Example of a configuration manager configuration file


NAME doe_test
SLA oltp=primary
SLA report=HDR+SDS
SLA test=RSS
FOC inst1+SDS+RSS+HDR,10
DEBUG 1
LOGFILE /opt/IBM/informix/logs/cm.log
41 © 2010 IBM Corporation
Starting and Stopping a Connection Manager

• If there is only one connection manager to start, configuration


file is at default location and variables are set
oncmsm

• Other Examples
oncmsm -c /path_to_config_file

oncmsm cm1 -s oltp=primary -s payroll=HDR+primary
  -s report=SDS+HDR -l cm.log -f HDR+SDS,30

• To stop a connection manager agent
oncmsm -k agent_name
agent_name must be provided even if only one agent is running on the server

42 © 2010 IBM Corporation


Connection Manager Statistics
onstat -g cmsm
• Displays the various Connection Manager daemons and corresponding
details that are attached to a server instance.
• Display contents:
• All connection managers inside cluster.
• Associated hosts.
• SLA and corresponding defines.
• Arbitrator configuration (discussed next).
• Flags and statistics.

• Sample Output:
CM name host sla define foc flag connections
Cm1 bia oltp primary SDS+HDR+RSS,0 3 5
Cm1 bia report (SDS+RSS) SDS+HDR+RSS,0 3 16

43 © 2010 IBM Corporation


Connection Manager Failover Arbitrator: FAILOVER_CALLBACK

• FAILOVER_CALLBACK
• Valid for secondary instances
• Pathname to program/script to execute if the server is
promoted from secondary to primary
• Can be used to issue alert, take specific actions, etc
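For example, a hypothetical setup (the path, script contents, and mail address are assumptions, not shipped defaults):

  # onconfig entry on each secondary
  FAILOVER_CALLBACK /ifx/scripts/promote_alert.sh

  Contents of /ifx/scripts/promote_alert.sh:
  #!/bin/sh
  # Called by the server when it is promoted from secondary to primary
  echo "`date`: $INFORMIXSERVER promoted to primary" | mail -s "Informix failover" dba@example.com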

44 © 2010 IBM Corporation


Connection Manager – Proxy Mode

• Connection Manager Proxy Mode


• Introduced in 11.50.xC6
• Proxy Mode is an additional operation mode for an SLA
• Client connection to SLA with proxy mode will be “routed through”
CM
• Each SLA can be chosen to run in either redirect or proxy mode
• Default mode is redirect
• Add “<mode=proxy>” to SLA for proxy mode
• With “<mode=redirect>” after SLA you explicitly specify
redirect mode
• Old clients (before CSDK/JDBC 3.50) can connect to proxy mode
SLAs

45 © 2010 IBM Corporation


Connection Manager Config – mode + workers
• Additional mode and workers parameters for SLA
definition:
NAME cm1
SLA primary=PRI <mode=redirect>
SLA secondary=(SDS+RSS+HDR)

SLA oltp=primary <mode=proxy>


SLA report=(SDS+RSS+HDR) <mode=proxy workers=512>

LOGFILE /opt/informix/3.50.EVP5/tmp/cm1.log
DEBUG 1

• Please note the angle brackets are required!
• Also note the optional workers option
  • Worker threads can be configured per SLA, between 8 and 2048
  • For optimum performance, testing is recommended
46 © 2010 IBM Corporation
Connection Manager - Proxy Mode Overview

47 © 2010 IBM Corporation


Connection Manager – Configuration Parameter

• New Connection Manager configuration file parameter LOG:


• LOG 1
Print out all connections handled in redirect mode or proxy
mode

• LOG 2
Print out how many bytes are received and sent for each
session

• LOG 3
Dump each communication buffer for each session
(use with care, obviously a lot of data to be expected)

48 © 2010 IBM Corporation


Connection Manager Failover Arbitrator: DRAUTO

• New setting for DRAUTO configuration parameter


DRAUTO 3

• Arbitrator first verifies no other active primary servers in the


cluster before promoting a secondary to primary
• If another primary server is active, Arbitrator will reject
promotion request

• Should be set on all Secondary instances in a High Availability


Cluster and also the Primary

49 © 2010 IBM Corporation


Multiple Connection Managers

• Multiple CM agents can be active in a cluster at any time


• Each can have the same configuration **OR** they can have a
different configuration
• The file for the agent is passed in using the -c path_to_file syntax

• Each must be invoked while pointing to the cluster primary


• When invoked, it will connect to the primary to download
cluster instance list
• All CM agents are “active” and can control connections, so verify client connection strings!!
• Only the first invoked agent is the active fail-over control (if configured)
  • Fail-over control will cascade to another CM agent in the event of an Arbitrator failure, though
50 © 2010 IBM Corporation
Connection Manager - onpassword utility

• Used to encrypt/decrypt a centralized password file for access to all


instances in a cluster by the Connection Manager

• Information in the encrypted file includes instance IDs of all servers


in the cluster and their associated usernames and passwords

• User ID and password must already exist on the target physical


server for the instance

• Output from utility is encrypted and stored in


$INFORMIXDIR/etc/passwd_file

• Output required by the ONCMSM connection manager (discussed


later) to connect to instances

• NOT used by client applications

51 © 2010 IBM Corporation


onpassword: File Structure
• The password file is an ASCII text file with the following structure
instance_name alternate_instance username password

• instance_name: DBSERVERNAME/DBSERVERALIAS; must be TCP/IP based.


• alternate_instance: Alternate alias, must be TCP/IP based
• username: user ID for the connection
• password: password for user ID

• Example
• lx-rama lx-rama ravi foobar
• toru toru_2 usr2 fivebar
• seth_tcp seth_alias fred 9ocheetah
• cheetah panther anup cmpl1cate

• One instance in the cluster per line


• If the second/alternate instance name is different, the Connection
Manager will try that instance id if it cannot connect to the server using
the first instance name
52 © 2010 IBM Corporation
onpassword Utility: Examples

onpassword -k 6azy78op -e $HOME/my_passwd_file

onpassword -k 34RogerSippl1 -e /user_data/my_stuff/my_passwd_file

• The encrypted output file in the above examples is called passwd_file and is placed in $INFORMIXDIR/etc/

onpassword -k 6azy78op -d $HOME/out_file

onpassword -k 34RogerSippl1 -d /tmp/another_file

• The decrypted output file in the above examples is placed where


directed
53 © 2010 IBM Corporation
What is a Flexible Grid?

• A named set of interconnected replication servers for propagating


commands from an authorized server to the rest of the servers in
the set

• Useful if you have multiple replication servers and you often need
to perform the same tasks on every replication server

• Nodes in grid do not have to be identical


• Different tables, different hardware, different OS’s, different IDS
versions

• Requirements
• Enterprise Replication must be running
• Servers must be on Panther (11.70.xC1)
• Pre-panther servers within the ER domain cannot be part of
the GRID
© 2010 IBM Corporation
What are the features of the new Informix Flexible Grid?

• Simplify creation, management, and maintenance of a global grid


• Create grid, attach to grid, detach from grid, add/drop node to/from
Grid
• DDL/DML operations on any node propagated to all nodes in the
Grid
• Management of grid can be done by any node in the grid
• Run or create stored procedures or user-defined routines on one
or all nodes
• Simplified management and maintenance of replication
• Tables no longer require primary keys
• Easily set up the member servers and authorized users of the grid
• Flexibility in which servers grid routines can be run on
• Integration with OpenAdmin Tool (OAT)

© 2010 IBM Corporation


Define/Enable/Disable the Grid
OAT support enabled

• To setup/disable a GRID, use the cdr utility


• Define
• Defines the nodes within the grid
cdr define grid <grid_name> --all
cdr define grid <grid_name> <node1 node2 …>
• Enable
  • Defines the nodes and users within the grid that can perform grid-level operations
    cdr enable grid --grid=<grid_name> --user=<user> --node=<node>
• Disable
  • Used to remove a node or user from being able to perform grid operations
    cdr disable grid --grid=<grid_name> --node=<node_name> --user=<user_name>
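Putting the pieces together, a hypothetical three-node setup (grid, node, and user names are examples only) might be:

  cdr define grid grid1 --all
  cdr enable grid --grid=grid1 --user=informix --node=g_pan1
  cdr list grid grid1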
© 2010 IBM Corporation
Propagating Database Object Changes

• Once the Grid is “enabled,” a replset is created for all replicates


created within the grid

• Can propagate creating, altering, and dropping of database


objects to servers in the grid while connected

• The grid must exist and the grid routines must be executed as an
authorized user from an authorized server

• Grid operations do NOT, by default, replicate DML operations


• To replicate DML, must enable ER by executing procedure
ifx_set_erstate()

• To propagate database object changes:


• Connect to the grid by running the ifx_grid_connect() procedure
• Run one or more SQL DDL statements
• Disconnect from the grid by running the ifx_grid_disconnect() procedure

© 2010 IBM Corporation


Example of DDL propagation

execute procedure ifx_grid_connect('grid1', 'tag1');

create database tstdb with log;
create table tab1 (
    col1 int primary key,
    col2 int,
    col3 char(20)) lock mode row;
create index idx1 on tab1 (col2);
create procedure loadtab1(maxnum int)
    define tnum int;
    for tnum = 1 to maxnum
        insert into tab1 values (tnum, tnum * tnum, 'mydata');
    end for;
end procedure;

execute procedure ifx_grid_disconnect();

All of the statements above are executed on all nodes within the 'grid1' grid.
© 2010 IBM Corporation
Grid Operation Functions

• Operations can be run by any database in any node on the Grid


• ifx_grid_connect() – Opens a connection and any command run is
applied to the Grid
• ifx_grid_disconnect() – Closes a connection with the Grid
• ifx_grid_execute() – Executes a single command across the Grid
• ifx_grid_function() – Executes a routine across the Grid
• ifx_grid_procedure() – Executes a procedure across the Grid
• ifx_set_erstate() – Controls replication of DML across the Grid for
all tables that participate in a replicate
• ifx_get_erstate() – Reports whether replication is enabled on a
transaction that is propagated across the Grid
• ifx_grid_purge() – Purges metadata about operations that have been executed on the Grid
• ifx_grid_redo() – re-executes a failed and tagged grid operation
© 2010 IBM Corporation
Dynamically Enabling/Disabling ER

• Enable
• execute procedure ifx_set_erstate(‘on’)
• Disable
• execute procedure ifx_set_erstate(‘off’)
• Get current state
• execute function ifx_get_erstate();
• Return of 1 means that ER is going to snoop the logs for this transaction

• Example of enabling ER for the execution of a procedure


execute procedure ifx_grid_connect('grid1');
create procedure myproc()
    execute procedure ifx_set_erstate('on');
    execute procedure create_summary_report();
end procedure;
execute procedure ifx_grid_disconnect();
execute procedure ifx_grid_procedure('grid1', 'myproc()');

© 2010 IBM Corporation


Monitoring a Grid

• cdr list grid
  • View information about servers in the grid
  • View the commands that were run on servers in the grid
  • Without any options or a grid name, the output shows the list of grids

• Servers in the grid on which users are authorized to run grid commands are marked with an asterisk (*)

• When you add a server to the grid, any commands that were previously run through the grid have a status of PENDING for that server

• Options include:
  --source=<source_node>
  --summary
  --verbose
  --nacks
  --acks
  --pending
• Example: cdr list grid grid1

• NEW: Monitor a cluster with onstat -g cluster
© 2010 IBM Corporation
Connection Manager and Flexible Grids

• The oncmsm agent functionality (Connection Manager) has


been extended to include ER and Grid clusters!
• For ER/HA clusters
• The agent supports all Informix 11 instances!
• For Grid clusters
• The agent only supports Informix 11.70 instances

• Connection Manager is either for a cluster or for ER, no


mixing

• SLA is at the replicate set level

© 2010 IBM Corporation


Connection Manager Configuration File

• New parameters
• TYPE REPLSET
• Indicates that this is an ER / Grid agent
• NODES name=instname+instname+instname
• A named list of the Grid / ER instances to participate in this named list
• Can be more than one list of node names
• New option for SLA definition
• policy=[LATENCY | FAILURE]
• The SLA will select the server with the lowest replication latency, the
fewest replication failures or both if the “+” keyword is used
• Must be enabled before use - cdr define qod --start
• New parameter setting
• FOC DISABLED
• With a Grid, there is no list of “promotable primary” nodes to fail over to

© 2010 IBM Corporation


Example concsm.cfg for the Grid

NAME doe_test_1
TYPE REPLSET
NODES list_1=g_pan1+g_pan2
NODES list_2=g_pan3+g_pan4

SLA data_entry=REPLSET mytest_grid list_1 <policy=LATENCY+FAILURE>


SLA grid_reports=REPLSET mytest_grid list_2 <policy=FAILURE>

# Failover Configuration
FOC DISABLED

# worker threads for each SLA listener, default is 8


SLA_WORKERS 16

# Connection Manager message file


LOGFILE /opt/IBM/informix/logs/cm_test1.log
DEBUG 0

© 2010 IBM Corporation


Replicate tables without Primary Keys

• A primary key is no longer required for tables replicated by Enterprise Replication (ER)

• Use the WITH ERKEY keyword when defining tables


• Creates shadow columns (ifx_erkey_1, ifx_erkey_2, and
ifx_erkey_3)
• Creates a new unique index and a unique constraint that ER
uses for a primary key

• For most database operations, the ERKEY columns are hidden


• Not visible to statements like SELECT * FROM tablename;

• Example
CREATE TABLE customer (id INT) WITH ERKEY;
ALTER TABLE customer ADD ERKEY;

© 2010 IBM Corporation


Informix Flexible Grid - Quickly CLONE a Server
• Previously, to clone the Primary
1. Create a level-0 backup
2. Transfer the backup to the new system
3. Restore the image
4. Initialize the instance

• ifxclone utility
• Clones an instance from a single command
• Starts the backup and restore processes simultaneously (SMX
transfer)
• No need to read or write data to disk or tape
• Creates a standalone server ER node or a remote standalone
secondary (RSS) server
• If creating a new ER node, ER registration is cloned as well
• No Sync/Check is necessary
ifxclone -T -S machine2 -I 111.222.333.555 -P 456 -t machine1
-i 111.222.333.444 -p 123
© 2010 IBM Corporation
Easily Convert Cluster Servers to ER nodes

• RSS → ER
  • Use the rss2er() stored procedure located in the syscdr database
• Converts the RSS secondary server into an ER server
• Secondary will inherit the replication rules that the primary had
• Does not require a ‘cdr check’ or ‘cdr sync’

• HDR/RSS pair → ER pair (cdr start sec2er)


• Converts an HDR/RSS pair into an ER pair
• Automatically creates ER replication between primary and
secondary server
• Splits HDR/RSS pair into independent standard servers that
use ER
© 2010 IBM Corporation
Upgrading a Cluster while it is Online

• Use ‘cdr start sec2er’ and ‘ifxclone’ to perform a rolling


upgrade of an HDR/RSS pair so that planned down time is
not required during a server migration

• Basic Steps
1. Execute ‘cdr start sec2er’
2. Restrict applications to only one of the nodes
3. Migrate server on which the apps are not running
4. Move apps to the migrated server
5. Use ifxclone to switch back to RSS/HDR
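A command-level sketch of those steps for a hypothetical HDR pair prim_serv/sec_serv (server names, addresses, and the exact sec2er argument order are assumptions):

  # 1. Split the pair into two independent ER servers
  cdr start sec2er -c prim_serv prim_serv sec_serv

  # 2-4. Route all applications to prim_serv, upgrade Informix on sec_serv,
  #      move the applications to sec_serv, then upgrade prim_serv

  # 5. Re-create the secondary from the upgraded server with ifxclone
  #    (see the ifxclone example two slides back)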

© 2010 IBM Corporation


Full Suite of High Availability options to Lower Costs
[Diagram: a multi-site high-availability topology built from the full option suite - a primary with shared-disk SDS nodes in New Orleans (Blade Servers A and B, Buildings A and B), an HDR secondary in Memphis (Blade Server C), and an RSS node in Denver (Blade Server D), plus client applications and DBA administration through OAT. The build-up shows adding a local HDR copy, adding RSS nodes and new capacity, a disaster striking New Orleans with failover and promotion of a surviving node, clients continuing with no application changes, and reduced hardware costs.]
© 2010 IBM Corporation


RSS Usage – Customer Situations

You need to add additional capacity for your web applications

Adding additional RSS nodes may be the answer

You are currently using HDR and are uncomfortable with losing both primary and HDR secondary
instances
If the primary fails, it is possible to convert the existing HDR secondary into the primary instance. If
it appears that the original primary is going to be down for an extended period of time, the RSS
instance can be converted into an HDR secondary instance

You want to provide copies of the instance in remote locations, but testing shows that the ping rate is around 333 ms. You realize that this will cause problems on the primary if HDR is used
Since RSS uses the SMX protocol (full duplex) and does not require checkpoints to be processed in SYNC mode, it should not have a significant impact on the performance of the primary instance

You want to provide copies of the database in remote locations, but know there is a high latency
between the sites
RSS uses a fully duplexed communication protocol. This allows RSS to be used in places where
network communication is slow or not always reliable

You are currently using HDR for high availability but would like to have an additional backup of your system in the event of a disaster in which both primary and secondary servers are lost
Using HDR to provide high availability is a proven choice. Additional disaster availability is provided by using RSS to replicate to a secure ‘bunker’

© 2010 IBM Corporation


High Availability Decision Tree

[Flowchart: a decision tree walks through whether you need protection from node failure, protection from site failure, multilevel site-failure protection, or geographically dispersed processing, and leads to HDR, SDS, RSS, or ER accordingly.]

© 2010 IBM Corporation


New SYSMASTER Tables for Connection Manager
• Post events to the Connection Manager and to the
Open Admin Tool (OAT)
ƒ sysrepstats
ƒ Provides a light-weight communication between
the server, the connection manager, and Open Admin Tool
ƒ sysrepevtreg
ƒ Registers connectivity to the sysrepstats pseudo table

• Connection Manager information


ƒ syscmsmtab
ƒ Contains the Connection Manager information
ƒ syscmsmsla
ƒ Connection Manager service level agreement (SLA) information
ƒ syscmsm
ƒ View of the syscmsmtab and syscmsmsla tables
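A quick way to look at these from dbaccess is simply to select from them (a sketch; the exact columns vary by version):

  SELECT * FROM sysmaster:syscmsm;
  SELECT * FROM sysmaster:syscmsmsla;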
© 2010 IBM Corporation
Comparison of Client Redirection Methods

• DBPATH redirection (automatic and user redirection)
  • Automatic failover: No
  • When is a client redirected? When the client next tries to connect to a specified database.
  • Do clients need to be restarted to be redirected? No (automatic) / Yes (user)
  • Scope of the redirection: individual clients are redirected
  • Changes to environment variables required? No

• Connectivity-information redirection (automatic and user redirection)
  • Automatic failover: No
  • When is a client redirected? After the administrator changes the connectivity information, when the client next tries to establish a connection with a database server.
  • Do clients need to be restarted to be redirected? No (automatic) / Yes (user)
  • Scope of the redirection: all clients that use a given database server are redirected (automatic) / individual clients are redirected (user)
  • Changes to environment variables required? No

• INFORMIXSERVER redirection (user redirection)
  • Automatic failover: No
  • When is a client redirected? When the client restarts and reads a new value for the INFORMIXSERVER environment variable.
  • Do clients need to be restarted to be redirected? Yes
  • Scope of the redirection: individual clients are redirected
  • Changes to environment variables required? Yes

• Connection Manager redirection (user redirection)
  • Automatic failover: No
  • When is a client redirected? When the configured service level agreement is attained.
  • Do clients need to be restarted to be redirected? No
  • Scope of the redirection: individual clients are redirected
  • Changes to environment variables required? No
© 2010 IBM Corporation


Connection Manager Syntax
Full syntax, including CLI for arbitrator (discussed next):

ƒ oncmsm_ID : Name of the CMSM agent.


ƒ server_alias : Used to specify a server alias name.
ƒ -s listname : Used to identify the SLA of a given listener port.
ƒ -f failover_configuration : Specifies the secondary servers for failover.
ƒ timeout_value : Specifies the time (in seconds) to wait before performing failover.
ƒ -c configfile : Optional configuration file that will contain the SLA requirements.
ƒ -l logfile : Used to specify an optional log file.
ƒ -i/-u : (Windows Only) Install/Uninstall oncmsm as Windows service.

79 © 2010 IBM Corporation


Making the Connection Manager Redundant

• The key to making the Connection Manager redundant is to place the Connection Managers in server groups.
• From a client perspective, the client wants to connect to one of three SLAs: oltp, report, test.

[Diagram: clients ask “Which [catalog|oltp|test_x] instance?” of a group of Connection Managers holding the SLA definitions, which sit in front of the IDS instances.]
80 © 2010 IBM Corporation
Making the Connection Manager Redundant
• Clients requesting catalog connectivity get a response from a virtualized Connection Manager:
  • If the first CM agent doesn’t respond, an attempt is made to the next agent in the group definition.

[Diagram: Connection Manager agents on concord, walnut_creek, and pleasant_hill each answer the client question “Which [catalog|oltp|test_x] instance?” and route connections to the IDS instances.]

81 © 2010 IBM Corporation


Connection Manager Setup
• User passwords encrypted using onpassword (already discussed)
• Environment variables must point to cluster primary as well as
CSDK install location if CSDK is the only product on physical server
• INFORMIXDIR
• INFORMIXSERVER
• ONCONFIG
• INFORMIXSQLHOSTS – optional, if SQLHOSTS not at default
location
• The Connection Manager configuration file
• Default $INFORMIXDIR/etc/cmsm.cfg
• Can be overridden with the -c path_to_file syntax
• Correct SQLHOSTS entries for IDS instances, CMSM agents and
Clients
82 © 2010 IBM Corporation
New Threads to Support Updatable Secondary Servers

• Primary Server
ƒ ProxyTh
ƒ A pool of threads performing the low level portion of
a redirected write using optimistic concurrency
ƒ ProxyDispatch
ƒ Manages the ProxyTh threads and passes
redirected operations to the ProxyTh thread

• Secondary Servers
ƒ ProxySync
ƒ Receives status messages from the ProxyTh
threads on the primary and communicates those to
the sqlexec threads running on the secondary

© 2010 IBM Corporation


Benefits of using the Connection Manager
• Applications connected via SLAs get automatic connection re-routing in case of instance failure
  • The application must be written to recognize a connection time-out and attempt to re-connect to the down instance
  • The Connection Manager will re-route the request to an available instance
• How does this happen?
  1. A server fails, taking its IDS instance with it
  2. The application attempts an SQL operation and receives a “connection failed” error
  3. The application must be coded to attempt to reconnect
  4. A Connection Manager agent receives the “new” connection request and routes the client to another instance in the SLA “cloud”

84 © 2010 IBM Corporation


Configuring Updatable Secondaries: Optimistic Concurrency

• Optimistic Concurrency:
  • Introduced in IDS 11.
  • An update technique used by many web applications to allow disconnected transactions.
  • Briefly -- an update is allowed to proceed if the row about to be updated has not been updated by some other activity.
  • Relies on either a comparison of the “before” row image or some form of row versioning.

85 © 2010 IBM Corporation


High Availability Cluster Sub-Component - Server Multiplexer (SMX)

• Multiplexed network connection
  • Uses a full-duplex protocol
  • Sends packets without waiting for a return “ack”
  • Does monitor to make sure “acks” are eventually returned
  • HDR uses half-duplex, so the primary knows the secondary received a transaction before committing

• Supports encryption

• Automatically activated
  • Requires no configuration other than encryption

[Diagram: Server-A’s RSS send logic and Server-B’s RSS receive/apply threads exchange log records and ACKs over a single SMX connection.]
© 2010 IBM Corporation


High Availability Cluster Sub-Component - Index Page Logging

• HDR currently transfers the index pages to the secondary when creating the
index
• Requirement: HDR Secondary instance must be available
• Causes index usage on the primary to be delayed

• Index Page Logging alleviates the above issues


• Allows index pages to be copied to the logical log when initially creating the index
• The logging of the index can be split into multiple transactions and is not part of the
original user transaction
• Control not returned to user until logging is performed
• This logging is a requirement for RSS

• Index Logging implications


• Index page logging requires additional disk I/O
• More logical logs will fill in a shorter length of time -- more frequent log backups will be needed
• There may be additional logs to process during recovery if a failure occurs prior to
completion of index build
© 2010 IBM Corporation
Global High Availability Solution Suite
[Diagram: three regional HA clusters linked by ER - a US cluster (SFO-A/SFO-B, ATL, DEN), an Asia cluster (BOM-A/BOM-B, PNQ, CCU), and a Europe cluster (LHR-A/LHR-B, MUC, BAA) - each combining a primary, HDR secondary, SDS, and RSS nodes over shared and optionally mirrored disks.]

• Any node within the ER domain can also be an IDS cluster
• ER can be used to replicate complete or partial (schema-based) cluster data
• ER relieves the dependency on the primary in situations such as network outages
© 2010 IBM Corporation


Updatable Secondary Servers: BLOBs

• Secondary writes are supported only for logged SLOBs


• To log SLOBs
  • Store the SLOB in a logged sbspace
    onspaces -c -S .... -Df "Logging=ON"
  • Explicitly set each SLOB object created to be logged

• Without logging, updates are not replicated to secondary instances - secondary reads return 0 length

• Writes go to the primary
• Reads are executed on the secondary if possible

89 © 2010 IBM Corporation


What is a Service Level Agreement (SLA)?
SLA = Service Level Agreement

• A “contract” between client applications and Informix instances

• Can be based on probability of performance and data “currency”:


• If user requires the most current data as opposed to slight lag time while
replication occurs, user should connect to primary or HDR secondary
instance
• If user can tolerate some degree of latency, then user should be
connected to another secondary instance based on “currency”
requirement (SDS vs HDR secondary vs RSS)

• Can also be based on geographic or functional requirements:


• Connect to instances within X miles/km
or
• Connect to instances designated to support specific types of application
workloads (but you must define this by “known” instance names)

90 © 2010 IBM Corporation


Informix Grid - Initialization
• Informix Grid requires ER to be initialized first
• Means all ER pre-requisite configuration must occur
• Create required dbspaces
• Create required smart blobspaces
• Set the $ONCONFIG parameters
  • Create the $SQLHOSTS instance “group” definitions for Grid nodes, like ER nodes
    • These instances will be used in the examples that follow

g_pan1 group    -    -    i=200
pan_1  onsoctcp 127.0.0.1 pan1_tcp g=g_pan1

g_pan2 group    -    -    i=201
pan_2  onsoctcp 127.0.0.1 pan2_tcp g=g_pan2

g_pan3 group    -    -    i=202
pan_3  onsoctcp 127.0.0.1 pan3_tcp g=g_pan3

© 2010 IBM Corporation


Informix Grid - Initialization
• Define the instances as part of an ER cluster
Pan_3: cdr define serv -c g_pan1 --init g_pan1
Pan_3:
Pan_3: cdr define serv -c g_pan2 --init --sync=g_pan1 g_pan2
Pan_3:
Pan_3: cdr define serv -c g_pan3 --init --sync=g_pan1 g_pan3
Pan_3:

© 2010 IBM Corporation


Informix Grid - Initialization
• Once the Grid is “enabled,” a replset is created for all repls
created within the grid. As you execute replicated Grid DDL
operations, you’ll see individual repls created for each table.

Pan_1: cdr list replset


Ex T REPLSET PARTICIPANTS
-----------------------------------------------
N N ifx_internal_set _ifx_check_timestamp,
_ifx_grid_cm_er_serv,
_ifx_grid_cm_nodes,
_ifx_grid_cm_sla, _ifx_grid_cmd,
_ifx_grid_cmd_ddl_part,
_ifx_grid_def, _ifx_grid_node,
_ifx_grid_part, _ifx_grid_users,
_ifx_qod_clock_differences
N N mytest_grid
Pan_1:

Note: currently, the repl names are numeric and don’t include the table
name. That will change in a future release.
© 2010 IBM Corporation
Using the oncmsm agent
• With the H/A cluster, SLAs were defined at the
instance level. For example:

SLA oltp=primary
SLA report=rss_1+rss_2+rss_3
SLA accounting=(SDS+HDR)
SLA catalog=rss_4+rss_5+rss_6
SLA test=RSS

• With ER or Grid clusters, SLAs are defined at the


replicate set level
• Only the tables in the defined replicate set will be used for evaluating the health/welfare of the node
• Additional setup and configuration requirements for ER
and Grid oncmsm agents
© 2010 IBM Corporation
Oncmsm with ER / Grid

• The oncmsm agents with ER / Grid capture more detailed instance


load information but this needs to be explicitly turned on for each
instance
• Requires choosing one node as the “master” node for monitoring of
replication statistics specifically
• Latency
• Apply failures

• Use the cdr define qod command to enable monitoring of


replicated data quality
• Syntax:

• -C nodename – node to use as quality monitor, note – capital “C”


• --start | -s – begin monitoring replicated data quality
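For example, using one of the grid nodes from the earlier examples as the quality-of-data master:

  cdr define qod -C g_pan1 --start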

© 2010 IBM Corporation


Connection Manager Failover Arbitrator: HA_ALIAS
• New optional configuration parameter HA_ALIAS
ƒ Defines the name by which the server is known within a HA cluster
ƒ Must match either DBSERVERNAME or one of the DBSERVERALIASes
ƒ Must refer to a server whose connection type is a network protocol
ƒ Only affects HDR Secondary, RSS and SDS server types
ƒ When a secondary server connects to a primary server, the secondary
server sends the name of a network alias that can be used in case of
failover
ƒ The setting of HA_ALIAS is used to describe which network alias will be
sent
ƒ Connection Manager Arbitrator can use this alias name to fail over to a
secondary server
• When defining instances in the onpassword file, you can use
DBSERVERNAME, DBSERVERALIAS or HA_ALIAS interchangeably
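A small onconfig sketch for an RSS secondary (server and alias names are examples):

  DBSERVERNAME    rss_1_shm
  DBSERVERALIASES rss_1_tcp
  HA_ALIAS        rss_1_tcp   # network alias reported to the primary for use during failover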

96 © 2010 IBM Corporation


Ifx_grid_connect syntax

Syntax:
execute procedure ifx_grid_connect('gridname', 'tagname', er_enabled)
• Tag -- A character string to identify Grid operations
• Enables
• tracing of execution
• reapplying a failed operation
• er_enabled – a numeric value identifying whether master
replicates should be created
• enabling replicating of DML across nodes
• Values
• 0 – off (Default)
• 1 -- on
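For example, to open a tagged grid session that also creates master replicates and replicates DML (grid and tag names follow the earlier examples):

  execute procedure ifx_grid_connect('grid1', 'batch_price_change', 1);
  -- ... DDL/DML run here is propagated to every node in grid1 ...
  execute procedure ifx_grid_disconnect();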

© 2010 IBM Corporation


Using the Grid – The ifx_set_erstate() function

• Forces / disables replication of DML operations as part of


a Grid operation
• One input parameter
• [1 (one) | ‘on’] – ER-based replication on
• [0 (zero) | ‘off’] – ER-based replication off
• Can be set / changed dynamically within a command
block

© 2010 IBM Corporation


Enabling replication within a Transaction

• By default, the results of transactions run in the context of the grid are not
replicated by ER
• Can now enable replication within a transaction that is run in the context of
the grid
• Some situations require both propagation of a transaction to the servers in the grid and replication of the results of the transaction
• To enable replication within a transaction:
  1. Connect to the grid with the ifx_grid_connect() procedure
  2. Create a procedure that performs the following tasks:
    • Defines a data variable for the ER state information
    • Runs the ifx_get_erstate() function and saves its result in the data variable
    • Enables replication by running the ifx_set_erstate() procedure with argument 1
    • Runs the statements that need to be replicated
    • Resets the replication state to the previous value by running the ifx_set_erstate() procedure with the name of the data variable
  3. Disconnect from the grid with the ifx_grid_disconnect() procedure
  4. Run the newly-defined procedure by using the ifx_grid_procedure() procedure
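A minimal SPL sketch of those steps (procedure and table names are hypothetical):

  execute procedure ifx_grid_connect('grid1', 'price_update_tag');
  create procedure update_prices()
      define curstate integer;
      -- save the current ER state, then force replication on
      execute function ifx_get_erstate() into curstate;
      execute procedure ifx_set_erstate(1);
      -- statements whose results must be replicated by ER
      update price_list set price = price * 1.05;
      -- restore the previous replication state
      execute procedure ifx_set_erstate(curstate);
  end procedure;
  execute procedure ifx_grid_disconnect();

  -- run the procedure on every node in the grid
  execute procedure ifx_grid_procedure('grid1', 'update_prices()');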
© 2010 IBM Corporation
11.70.xC2 functionality changes

• The ability to run the SQL command ALTER TABLE in an ifx_grid_connect() session without er_enabled is disabled*
  • Correct method: ALTER TABLE with er_enabled set

• A shadow replicate will be created with the original replicate definition
  • The “real” replicate will have the new definition and be active
  • The shadow replicate will be dropped when the current logical log is closed and the new log receives a checkpoint

*Though doing ALTER TABLE without er_enabled set is available in 11.70.xC1, it should be avoided since it produces inconsistent results

© 2010 IBM Corporation


Informix Flexible Grid – ER Functionality

• A highly scalable multi-node availability solution on a global scope


• Provides a means of replicating DDL across multiple nodes
• CREATE TABLE, CREATE INDEX, CREATE PROCEDURE…
• Provides the ability to replicate data using ER without a primary key
• Provides a means of replicating the execution of a statement rather
than only the results of the execution
• Provides the ability to turn on/off ER replication within the
transaction and not just at the start of the transaction
• Provides the ability to automatically create ER replication as part of
a CREATE TABLE, ALTER TABLE, or DROP TABLE DDL
statement
• No additional administration is required
• Provides a means of supporting the Connection Manager on top of
ER
© 2010 IBM Corporation
Informix 11.7 Bootcamp
Informix HAC and Flexible Grid
Information Management Technology Ecosystem

© 2010 IBM Corporation
