HP UCMDB DICG Third Party Integrations
Copyright Notice
© Copyright 1996 - 2013 Hewlett-Packard Development Company, L.P.
Trademark Notices
Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated.
AMD and the AMD Arrow symbol are trademarks of Advanced Micro Devices, Inc.
Google™ and Google Maps™ are trademarks of Google Inc.
Intel®, Itanium®, Pentium®, and Intel® Xeon® are trademarks of Intel Corporation in the U.S. and other countries.
Java and Oracle are registered trademarks of Oracle Corporation and/or its affiliates.
Microsoft®, Windows®, Windows NT®, Windows® XP, and Windows Vista® are U.S. registered trademarks of Microsoft Corporation.
UNIX® is a registered trademark of The Open Group.
l This product includes software developed by the Apache Software Foundation (http://www.apache.org/).
l This product includes GNU code from Free Software Foundation, Inc. (http://www.fsf.org/).
l This product includes JiBX code from Dennis M. Sosnoski.
l This product includes the XPP3 XMLPull parser included in the distribution and used throughout JiBX, from Extreme! Lab, Indiana University.
l This product includes the Office Look and Feels License from Robert Futrell (http://sourceforge.net/projects/officelnfs).
l This product includes JEP - Java Expression Parser code from Netaphor Software, Inc. (http://www.netaphor.com/home.asp).
Documentation Updates
The title page of this document contains the following identifying information:
l Software Version number, which indicates the software version.
l Document Release Date, which changes each time the document is updated.
l Software Release Date, which indicates the release date of this version of the software.
To check for recent updates or to verify that you are using the most recent edition of a document, go to: http://h20230.www2.hp.com/selfsolve/manuals
This site requires that you register for an HP Passport and sign in. To register for an HP Passport ID, go to: http://h20229.www2.hp.com/passport-registration.html
Or click the New users - please register link on the HP Passport login page.
You will also receive updated or new editions if you subscribe to the appropriate product support service. Contact your HP sales representative for details.
Support
Visit the HP Software Support Online web site at: http://www.hp.com/go/hpsoftwaresupport
This web site provides contact information and details about the products, services, and support that HP Software offers.
HP Software online support provides customer self-solve capabilities. It provides a fast and efficient way to access interactive technical support tools needed to manage
your business. As a valued support customer, you can benefit by using the support web site to:
l Search for knowledge documents of interest
l Submit and track support cases and enhancement requests
l Download software patches
l Manage support contracts
l Look up HP support contacts
l Review information about available services
Most of the support areas require that you register as an HP Passport user and sign in. Many also require a support contract. To register for an HP Passport ID, go to:
http://h20229.www2.hp.com/passport-registration.html
To find more information about access levels, go to:
http://h20230.www2.hp.com/new_access_levels.jsp
HP Software Solutions Now accesses the HPSW Solution and Integration Portal Web site. This site enables you to explore HP product solutions to meet your
business needs, and includes a full list of integrations between HP products as well as a listing of ITIL processes. The URL for this Web site is
http://h20230.www2.hp.com/sc/solutions/index.jsp
Contents
Supported Versions 11
Topology 11
Discovery Mechanism 14
Supported Versions 17
Mapping Files 31
Supported Versions 37
Integration Mechanism 37
Integration Query 40
Supported Versions 43
Topology 43
Adapter 45
Trigger Query 45
Parameters 45
Adapter 45
Trigger Query 45
Adapter 46
Trigger Query 46
Discovery Flow 47
Discovery Mechanism 51
Supported Versions 55
Topology 55
Views 61
Reports 68
Supported Versions 73
Topology 73
Databases 84
Properties Files 84
Custom Converters 91
External_source_import Package 91
Topology 104
Topology 132
Adapter 134
Parameters 134
Views 154
Reports 162
Adapter 177
Overview 11
Supported Versions 11
Topology 11
Discovery Mechanism 14
Overview
Aperture VISTA is used to model datacenter information, including the exact location of a physical
server in a rack, row, space, and floor of a datacenter. VISTA also contains detailed information
about the power supply to racks and individual servers. This enables impact analysis from a power
supply point of view, and with integration it becomes possible to analyze the impact of power failure
on applications, business services and lines of business in UCMDB.
Integration is accomplished by running SQL queries on the Aperture VISTA SQL database.
Supported Versions
UCMDB supports integration with Aperture VISTA version 600. Aperture Integration Management
Software Package version 2.0 or later is required.
Topology
The following image displays datacenter topology from VISTA.
Note: For a list of discovered CITs, see "Discovered CITs" on the next page.
n v600_VIP_DAL_DV_Devices.sql
n v600_Device_to_PDU.sql
2. Prerequisite - Credentials
Population is accomplished using SQL queries over JDBC. The following credentials should be
defined:
For credential information, see "Supported Protocols" in the HP Universal CMDB Discovery
and Integration Content Guide - Supported Content document.
b. Run Database TCP Ports to discover SQL Server ports on the IP addresses discovered
above.
c. Run MSSQL Connection by SQL to discover the SQL Server instance used by Aperture
VISTA.
d. Run MSSQL Topology by SQL to discover database instances in the SQL Server
instance discovered above.
e. Create a new integration point, and use the Aperture VISTA by SQL adapter to discover
datacenter and power infrastructure from VISTA.
Input CIT
Microsoft SQL Server
Input Query
Triggered CI Data
Name Value
credentialsId ${SOURCE.credentials_id}
ip_address ${SOURCE.application_ip}
port ${PROCESS.application_port}
Used Script
Aperture_Vista_by_SQL.py
Discovered CITs
l Chassis
l Composition
l Containment
l Datacenter
l DatacenterResource
l Node
l PowerDistributionUnit
l Rack
l RemotePowerPanel
l Usage
Discovery Mechanism
Aperture VISTA uses a Microsoft SQL Server database as its data store. The Aperture Integration
Management software package (v2.0 or greater) adds some views to the VISTA database, and this
integration adapter collects information by running SQL Queries against these views.
The integration adapter in this package is triggered by SQL Server instances in UCMDB that have a
database instance with vista in their name. Out-of-the-box discovery jobs may be used to discover
these SQL Server database instances.
FROM vista.dbo.vip_dal_dv_devices
FROM vista.dbo.vip_dal_pwr_device_power_sources
ORDER BY downstream_device_name
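The query pattern above can be sketched in plain Python. This is illustrative only: sqlite3 stands in for the Microsoft SQL Server instance, and the column names are assumptions — the real views are defined by the Aperture Integration Management software package.

```python
# Illustrative only: sqlite3 stands in for SQL Server, and the column
# names below are assumptions about the Aperture integration views.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Mimic the vip_dal_pwr_device_power_sources view with two sample rows.
cur.execute("""CREATE TABLE vip_dal_pwr_device_power_sources (
                   downstream_device_name TEXT,
                   upstream_device_name TEXT)""")
cur.executemany(
    "INSERT INTO vip_dal_pwr_device_power_sources VALUES (?, ?)",
    [("server-02", "pdu-a"), ("server-01", "pdu-b")])

# Same shape as the documented query: a plain SELECT ordered by device name.
rows = cur.execute(
    "SELECT downstream_device_name, upstream_device_name "
    "FROM vista_dbo_view" if False else
    "SELECT downstream_device_name, upstream_device_name "
    "FROM vip_dal_pwr_device_power_sources "
    "ORDER BY downstream_device_name").fetchall()

for downstream, upstream in rows:
    print(downstream, "<-", upstream)
```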
Overview 17
Supported Versions 17
Mapping Files 31
Overview
UCMDB-Atrium integration consists of two independent, bi-directional parts: the Data Push into
Atrium and the Population from Atrium.
l The Data Push into Atrium in UCMDB replicates CIs and relationships to Atrium and
Remedy.
The out-of-the-box integration does not transfer a specific list of CIs and relationships, but does
enable you to replicate any CI or relationship from UCMDB to Remedy or Atrium.
For examples of enabling the integration with commonly used CIs and relationships, see
"Configure synchronization queries" on page 21.
l The Population from Atrium in UCMDB pulls CIs and relationships from Atrium to UCMDB.
Supported Versions
HP Universal CMDB integrates with the following BMC products:
l BMC Remedy Service Desk (Remedy) versions 7.0, 7.1, 7.5, and 7.6
l BMC Atrium CMDB (Atrium) versions 2.0, 2.1, 7.5.x, 7.6.x, and 8.1
Property Description
jythonScript.name The name of the Jython script that is invoked by this push adapter.
testConnNameSpace Must be set to the BMC NameSpace being used for test connection purposes (for example, BMC.CORE).
For Atrium 8.1: Copy the files arapi81_build001.jar and cmdbapi.jar from the BMC
server to the following directory on the Data Flow Probe server:
C:\hp\UCMDB\DataFlowProbe\runtime\probeManager\discoveryResources\
AtriumPushAdapter.
The directory
C:\hp\UCMDB\DataFlowProbe\runtime\probeManager\discoveryResources\
AtriumPushAdapter is automatically created once the AtriumPushAdapter package is
deployed on the UCMDB Server. If it is not present, ensure that the AtriumPushAdapter
package has been correctly deployed on the UCMDB Server.
For details on deploying packages, see "Package Manager" in the HP Universal CMDB
Administration Guide.
Table 1
JAR Files: arapi75.jar, arutil75.jar, cmdbapi75.jar, commons-beanutils.jar, commons-codec-1.3.jar, commons-collections-3.2.jar, commons-configuration-1.3.jar, commons-digester-1.7.jar, commons-lang-2.2.jar, log4j-1.2.14.jar, oncrpc.jar, spring.jar
DLL Files: arapi75.dll, arencrypt75.dll, arjni75.dll, arrpc75.dll, arutiljni75.dll, arutl75.dll, arxmlutil75.dll, cmdbapi75.dll, cmdbjni75.dll, icudt32.dll, icuinbmc32.dll, icuucbmc32.dll, Xalan-Cbmc_1_9.dll, XalanMessagesbmc_1_9.DLL, xerces-cbmc_2_6.dll, xerces-depdombmc_2_6.dll
Note:
o The AR System Java API is forward and backward compatible with other versions
of the AR System, except Atrium version 7.6.0.4 where you must use the SDK of
the same version. For a complete compatibility matrix, refer to the "API
Compatibility" section in the BMC Remedy/Atrium Developer Reference Guide.
o The arencrypt*.dll files are only required if encryption is enabled on the Remedy
server.
b. Edit the WrapperGateway.conf file (or WrapperManager.conf if the Probe Manager and
Gateway are running in separate mode) in the following directory:
C:\hp\UCMDB\DataFlowProbe\bin.
wrapper.java.library.path.3=%runtime%/probeManager
/discoveryResources/AtriumPushAdapter
c. For Atrium 7.6.04 and earlier versions only: Add the complete path to the Atrium DLL
files (for example,
C:\hp\UCMDB\DataFlowProbe\runtime\probeManager\discoveryResources\
AtriumPushAdapter) to the Windows System Path on the Data Flow Probe machine.
C:\hp\UCMDB\UCMDBServer\runtime\fcmdb\CodeBase\AtriumPushAdapter\
mappings
a. In the Integration Studio, create an integration point, selecting the Data Push into Atrium
adapter. Enter the following information:
Name Description
Is Integration Activated Select this check box to create an active integration point. Clear the check box if you want to deactivate an integration, for instance, to set up an integration point without actually connecting to a remote machine.
Data Flow Probe Select the Data Flow Probe that should run this integration.
b. Test the connection. If a connection is not successfully created, check the integration point
parameters and try again.
7. Define a Job
For details, see "New Integration Job/Edit Integration Job Dialog Box" in the HP Universal
CMDB Data Flow Management Guide.
Select the queries that will synchronize data between UCMDB and Remedy/Atrium. Save the
job definition and the integration point.
In the Integration Studio, on the Job Definition tool bar, click to run a full discovery job. For
details, see "Integration Jobs Pane" in the HP Universal CMDB Data Flow Management
Guide.
C:\hp\UCMDB\DataFlowProbe\runtime\probeManager\discoveryResources\AtriumImportAdapter
For Atrium 8.1: Locate the files arapi81_build001.jar and cmdbapi.jar on the Remedy
ARS and Atrium system and copy them to:
C:\hp\UCMDB\DataFlowProbe\runtime\probeManager\discoveryResources\AtriumImportAdapter
Table 2
JAR Files: arapi75.jar, arutil75.jar, cmdbapi75.jar, commons-beanutils.jar, commons-codec-1.3.jar, commons-collections-3.2.jar, commons-configuration-1.3.jar, commons-digester-1.7.jar, commons-lang-2.2.jar, log4j-1.2.14.jar, oncrpc.jar, spring.jar
DLL Files: arapi75.dll, arencrypt75.dll, arjni75.dll, arrpc75.dll, arutiljni75.dll, arutl75.dll, arxmlutil75.dll, cmdbapi75.dll, cmdbjni75.dll, icudt32.dll, icuinbmc32.dll, icuucbmc32.dll, Xalan-Cbmc_1_9.dll, XalanMessagesbmc_1_9.DLL, xerces-cbmc_2_6.dll, xerces-depdombmc_2_6.dll
Note:
o The AR System Java API is forward and backward compatible with other versions
of the AR System, except Atrium version 7.6.0.4 where you must use the SDK of
the same version. For a complete compatibility matrix, refer to the "API
Compatibility" section in the BMC Remedy/Atrium Developer Reference Guide.
o The arencrypt*.dll files are only required if encryption is enabled on the Remedy
server.
b. Edit the WrapperGateway.conf file (or WrapperManager.conf if the Probe Manager and
Gateway are running in separate mode) in the following directory:
C:\hp\UCMDB\DataFlowProbe\bin.
wrapper.java.library.path.3=%runtime%/probeManager
/discoveryResources/AtriumImportAdapter
c. For Atrium 7.6.04 and earlier versions only: Add the complete path to the Atrium DLL
files (for example, C:\hp\UCMDB\DataFlowProbe\runtime\probeManager\
discoveryResources\AtriumImportAdapter) to the Windows System Path on the Data
Flow Probe machine.
Note: While creating the generic protocol, set the protocol description to atrium.
b. Under Integration Properties > Adapter, select the Population from Atrium adapter.
o ARS_Server
o ARS_Port
o BMC_NameSpace
d. Under Adapter Properties > Data Flow Probe, select the Data Flow Probe to be used for
the integration.
i. Select Existing CI (if you have a valid, existing CI). The Select Existing CI pane
appears. Select the CI, or
ii. Create New CI (if you need to create a new CI). The Topology CI Creation Wizard
appears. Complete the creation of the CI using the Wizard.
Note: For details on running an integration job, see "Integration Studio" in the HP
Universal CMDB Data Flow Management Guide.
Integration Flow
Integration includes the following activities:
1. Querying the UCMDB for CIs and relationships. When an ad-hoc integration job is run in
the Integration Studio, the integration process:
a. Receives the names of the integration queries that are defined in the job definition for that
integration point.
b. Queries UCMDB for the results (new, updated, or deleted CIs and relationships) of these
defined queries.
c. Applies the mapping transformation according to the pre-defined XML mapping files for
every query.
2. Sending the data to BMC Remedy/Atrium. On the Data Flow Probe, the integration
process:
a. Receives the CI and relationship data sent from the UCMDB Server.
Used Script
pushToAtrium.py
Parameters
Parameter Description
Integration Flow
The integration flow has the following steps:
In this step, the integration adapter connects to the Atrium server and queries it for classes,
attributes and relationships, described in the XML mapping files. The result of this step is the
creation of intermediate XML files (in the
<probe>\runtime\probeManager\discoveryResources\TQLExport\Atrium\inter directory).
In this step, the data collected from the previous step and stored in the intermediate XML file, is
converted into the UCMDB data format based on the mappings defined in the XML mapping
files.
In this final step, after being mapped into the UCMDB object state holder vector format, the
data is sent to the UCMDB server.
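The conversion step above can be sketched in miniature. This is an illustration only: the intermediate XML layout and the BMC_ComputerSystem-to-unix mapping below are assumptions, not the adapter's real file formats.

```python
# Illustrative sketch of converting intermediate XML into UCMDB-format
# records. Element names and the class mapping are assumptions.
import xml.etree.ElementTree as ET

INTERMEDIATE_XML = """
<objects>
  <object class="BMC_ComputerSystem">
    <attribute name="Name">hostA</attribute>
    <attribute name="Domain">example.com</attribute>
  </object>
</objects>
"""

# Plays the role of the XML mapping files: Atrium class and attribute
# names on the left, UCMDB CI type and attribute names on the right.
CLASS_MAP = {"BMC_ComputerSystem": ("unix", {"Name": "name",
                                             "Domain": "domain_name"})}

def to_ucmdb_cis(xml_text):
    cis = []
    for obj in ET.fromstring(xml_text).findall("object"):
        ci_type, attr_map = CLASS_MAP[obj.get("class")]
        attrs = {attr_map[a.get("name")]: a.text
                 for a in obj.findall("attribute") if a.get("name") in attr_map}
        cis.append({"type": ci_type, "attributes": attrs})
    return cis

print(to_ucmdb_cis(INTERMEDIATE_XML))
```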
Input CIT
The input CIT for this adapter is discoveryprobegateway. The job uses an instance of the
Discovery Probe Gateway which has access to connect to the remote BMC Atrium server.
Used Scripts
The adapter uses the following scripts:
Script Description
atrium_map.py Used to map the queried data into data UCMDB can use.
Script Description
Discovered CITs
This integration can discover any CIT or relationship which is (a) mapped in the integration and (b)
can be queried and converted to its UCMDB equivalent.
Parameters
Parameter Detail
ARS_Port The port for connecting to the ARS server. If portmapper is being used, this
should be left as 0. Otherwise, specify the TCP port.
ChunkSize The chunk size in which data should be retrieved from the remote server.
DebugMode Set to true to run integration in debug mode; this does not send data to
UCMDB.
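The effect of a ChunkSize-style parameter can be sketched as follows; the offset/size arithmetic is illustrative, and the real adapter's retrieval API is not shown.

```python
# Minimal sketch of chunked retrieval driven by a ChunkSize parameter.
def iter_chunks(total_count, chunk_size):
    """Yield (offset, size) pairs covering total_count rows."""
    offset = 0
    while offset < total_count:
        yield offset, min(chunk_size, total_count - offset)
        offset += chunk_size

# With 2,500 matching rows and ChunkSize=1000, data is fetched in 3 requests.
chunks = list(iter_chunks(2500, 1000))
print(chunks)  # [(0, 1000), (1000, 1000), (2000, 500)]
```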
Mapping Files
This section includes:
Mapping files:
l Control the attributes for the CITs and relationships that are to be mapped.
l Map attributes of children CIs (those having a containment or composition relationship) to the
parent CI in the target data store. For example:
l Map attributes of parent CIs (those having a containment or composition relationship) in the
target data store CI. For example, in the Atrium target data store, set the value of a Container
Server attribute on the Installed Software CIT by retrieving the value of the UCMDB Installed
Software CI container node.
<pkey>...</pkey>
</targetprimarykey>
<target_attribute name="..." datatype="..." >
<map type="..." />
</target_attribute>
</target_ci_type>
</source_ci_type>
</integration>
l <integration>. The root element of the XML file. This element has no attributes.
l <info>. The source and target data stores being used, for example:
<info>
<source name="Atrium" versions="7.6" vendor="BMC" />
<target name="UCMDB" versions="9.0" vendor="HP" />
</info>
l <targetcis>. The element that encapsulates the mapping for all CI types.
l <targetrelations>. The element that encapsulates the mapping for all relationship types.
l <source_ci_type>. The element that defines a CI type of the source data store, for example:
n Attribute: namespace. Defines the namespace in the Atrium system that the CI type is
associated with.
n Attribute: mode. Defines the mode of the update in the target data store. The possible
values are: update, insert, and update_else_insert.
l <target_ci_type>. The element that defines the target CIT, for example:
<target_ci_type name="unix">
l <targetprimarykey>. The element that defines a list of all primary keys of the target CIT, for
example:
<targetprimarykey>
<pkey>host_key</pkey>
</targetprimarykey>
l <target_attribute>. The element that defines an attribute mapping from the source CI type to
the target CI type attribute. Attribute mapping can be of the following types:
n Constant. This type enables setting a constant value on the target attribute:
n Direct. This type enables setting a direct value of a source data store attribute on the target
data store:
n Compound String. This type enables the use of the above mapping types together to form
more complex values for the target attribute, for example:
l <link>. The element that defines a relationship mapping from the source data store to a target
data store, for example:
<link source_link_type="composition"
target_link_type="BMC_HostedSystemComponents"
source_ci_type_end1="unix"
source_ci_type_end2="cpu"
role1="Source"
role2="Destination"
mode="update_else_insert">
<target_ci_type_end1 name="BMC_ComputerSystem"
superclass="BMC_System" />
<target_ci_type_end2 name="BMC_Processor"
superclass="BMC_SystemComponent" />
... Relationship attribute mapping elements similar to the CI type attribute mapping elements ...
</link>
n Attribute: source_link_type. Defines the name of the source link. To specify namespace, use
the following format: "NAMESPACE:CLASSNAME". For example: BMC.J2EE:BMC_
J2EEApplicationServer. The same format applies to <antecedent> and <dependent>
elements.
n Attribute: target_link_type. Defines the name of the target link. To specify namespace, use
the following format: "NAMESPACE:CLASSNAME". For example: BMC.J2EE:BMC_
J2EEApplicationServer. The same format applies to <antecedent> and <dependent>
elements.
n <target_ci_type_end1>. Used to specify the value of the target link's end1 CI type.
n <target_ci_type_end2>. Used to specify the value of the target link's end2 CI type.
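A mapping-file skeleton with the elements described above can be generated programmatically. The attribute values below are sample data taken from the examples in this section, not a shipped mapping file.

```python
# Builds a skeleton mapping file with the elements described in this
# section: <integration>, <info>, <targetcis>, <source_ci_type>,
# <target_ci_type>, and <targetprimarykey>. Values are sample data.
import xml.etree.ElementTree as ET

integration = ET.Element("integration")
info = ET.SubElement(integration, "info")
ET.SubElement(info, "source", name="Atrium", versions="7.6", vendor="BMC")
ET.SubElement(info, "target", name="UCMDB", versions="9.0", vendor="HP")

targetcis = ET.SubElement(integration, "targetcis")
source_ci = ET.SubElement(targetcis, "source_ci_type",
                          name="BMC_ComputerSystem", mode="update_else_insert")
target_ci = ET.SubElement(source_ci, "target_ci_type", name="unix")
pk = ET.SubElement(target_ci, "targetprimarykey")
ET.SubElement(pk, "pkey").text = "host_key"

xml_text = ET.tostring(integration, encoding="unicode")
print(xml_text)
```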
Overview 37
Supported Versions 37
Integration Mechanism 37
Integration Query 40
Overview
The UCMDB - CA CMDB integration adapter allows pushing CIs and relationships from UCMDB
into CA CMDB.
This is achieved by querying the UCMDB for CIs and Relationships based on queries defined in the
push integration adapter. The output of the queried CIs and Relationships is saved in an XML file.
GRLoader, a utility provided with CA CMDB, transfers the CIs and Relationship data stored in the
XML file into CA CMDB. An XML mapping file is used to define how the CIs and Relationships in
UCMDB are related to the CIs and Relationships in CA CMDB.
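Producing the XML file consumed by GRLoader can be sketched as below. The <GRLoader>/<ci> element names are assumptions for illustration — consult the CA CMDB documentation for the exact input schema GRLoader accepts.

```python
# Hedged sketch of building a GRLoader-style input file; the element
# names here are assumptions, not CA's documented schema.
import xml.etree.ElementTree as ET

cis = [{"name": "hostA", "class": "Server"},
       {"name": "hostB", "class": "Server"}]

root = ET.Element("GRLoader")
for ci in cis:
    el = ET.SubElement(root, "ci")
    ET.SubElement(el, "name").text = ci["name"]
    ET.SubElement(el, "class").text = ci["class"]

payload = ET.tostring(root, encoding="unicode")
print(payload)
```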
Supported Versions
UCMDB supports integration with CA CMDB R12.5 and R12.0.
Integration Mechanism
This section describes the UCMDB - CA CMDB integration mechanism:
b. The integration process queries UCMDB for the results of these queries
(new/updated/deleted CIs and Relationships), and applies the mapping transformation
according to the pre-defined XML mapping files for every query.
2. Queried data is converted into temporary XML files on the Data Flow
Probe system
On the Data Flow Probe side, the integration process receives the CI and Relationship data
sent from the UCMDB server, and converts it into a format which can be used as input XML for
the GRLoader, a utility provided with CA CMDB used to transfer the CI and Relationship data
into CA CMDB.
2. Prerequisite - Other
Data Flow Probe System:
a. Copy all of the files in the CA CMDB system's %NX_ROOT%\java\lib directory to the
CaCmdbPushAdapter directory on the data flow probe system:
<UCMDB Installation>\DataFlowProbe\runtime\probeManager\
discoveryResources\CaCmdbPushAdapter
b. Locate the file, NX.ENV, in the CaCmdbPushAdapter directory. If the file does not exist,
create it in the CaCmdbPushAdapter directory and add the following text to it:
@NX_LOG=C:/CA/java/lib/log
db/oracle/*.*;db/mssqlserver/*.*;db/db2/*.*;db/sybase/*.*;nnm/*.*;AtriumPushAdapter/*.*;CaCmdbPushAdapter/*.*
For an example of such an integration query, see "Integration Query" on the next page.
<UCMDB Installation>\UCMDBServer\runtime\fcmdb\CodeBase\
CaCmdbPushAdapter\mappings
For more information about mapping files, see "Prepare the Mapping Files" in the HP Universal
CMDB Developer Reference Guide.
Attribute Description
Data Flow The name of the Data Flow Probe on which the integration runs.
Probe
d. Add a job definition to the integration point, selecting the queries to use to synchronize data
between UCMDB and CA CMDB. Define a synchronization schedule, if required.
e. Invoke the ad hoc job, Full Topology Sync, for a full synchronization of the data.
Integration Query
The integration query, Unix_SW_to_CACMDB, is included with the CA CMDB integration
package. This is an example of a query that can be used to query the CIs and relationships that
must be pushed from UCMDB to CA CMDB. This query is accessible from UCMDB's Modeling
Studio, among the query resources. For details, see "Modeling Studio Page" in the HP Universal
CMDB Modeling Guide.
l Debug Mode
To create an XML dump of the CIs and links being sent to the CA CMDB server for debug
purposes, in <UCMDB installation>\DataFlowProbe\
runtime\probeManager\discoveryConfigFiles\CaCmdbPushAdapter\
push.properties, set the value of the debugMode property to true and restart the Data Flow
Probe service.
This ensures that every time the integration is invoked, a set of XML files is created in the
<UCMDB installation>\DataFlowProbe\runtime\
probeManager\discoveryResources\CaCmdbPushAdapter\work directory. These files are
time-stamped and contain the CIs and links that UCMDB is trying to push to CA CMDB. This
information can be helpful in debugging a problem with the integration:
n If data is not being sent from UCMDB, there is a problem on the UCMDB side.
l During the export of more than 4,000 CIs (the default UCMDB configuration), the export TQL
result is divided into chunks. To make sure that chunks are created properly, you must configure
the export TQL as follows: In the container node's properties, set the Element Name field to
Root.
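The time-stamped dump files produced when the debugMode property is set to true can be sketched as below; the file-name pattern is an assumption for illustration.

```python
# Minimal sketch of a debugMode-style dump: write the outgoing XML to a
# time-stamped file in a work directory. The name pattern is assumed.
import os, tempfile, time

def dump_push_data(xml_text, work_dir):
    stamp = time.strftime("%Y%m%d-%H%M%S")
    path = os.path.join(work_dir, "push-%s.xml" % stamp)
    with open(path, "w") as fh:
        fh.write(xml_text)
    return path

work_dir = tempfile.mkdtemp()
path = dump_push_data("<cis><ci name='hostA'/></cis>", work_dir)
print(path)
```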
Overview 43
Supported Versions 43
Topology 43
Adapter 45
Trigger Query 45
Parameters 45
Adapter 45
Trigger Query 45
Adapter 46
Trigger Query 46
Discovery Flow 47
Discovery Mechanism 51
Overview
CiscoWorks LAN Management Solution (LMS) is a suite of management tools that simplify the
configuration, administration, monitoring, and troubleshooting of Cisco networks.
This integration synchronizes the devices, topology, and hierarchy of the network infrastructure in
UCMDB, as well as the relationships between various hardware and logical network entities, to
enable end-to-end mapping of the data network infrastructure. The integration enables change
management and impact analysis across all business services mapped in UCMDB, from a data
network point of view.
Supported Versions
This integration supports CiscoWorks LAN Management Solution Version 3.x.
Topology
The following image displays the CiscoWorks LAN Management Solution topology.
Note: For a list of discovered CITs, see "Discovered CITs" on page 47.
Add credentials for the Sybase database instances used by CiscoWorks LMS (RMENGDB
and ANIDB) to the Generic DB (SQL) protocol.
For credential information, see "Supported Protocols" in the HP Universal CMDB Discovery
and Integration Content Guide - Supported Content document.
b. Run the CiscoWorks LMS Database Ports job to discover the TCP ports at which the
Sybase databases used by CiscoWorks LMS are listening.
c. Create a new integration point, and use the CiscoWorks NetDevices adapter to discover
network device information from CiscoWorks.
d. Create a new integration point, and use the CiscoWorks Layer 2 adapter to discover node
(server) information from CiscoWorks.
Steps 2a and 2b are optional (although highly recommended - see note below) since
CiscoWorks adapters are available in the Integration Studio, allowing manual creation of
the necessary IpAddress, Node and IpServiceEndpoint CIs.
Note: The CiscoWorks Layer 2 job requires additional data about CIs created by the
CiscoWorks NetDevices adapter and already in UCMDB. This information is provided by
the Input Query, which contains CI Types (NetDevice and PhysicalPort) that provide this
data in addition to CI Types required to identify the integration target
(IpServiceEndpoint).
For this reason, it is highly recommended to execute steps 2a and 2b. If steps 2a and 2b
are not executed, creating the integration target CIs (while creating an integration point
using the CiscoWorks Layer 2 adapter) requires the creation of Node and PhysicalPort
CIs.
Trigger Query
l Trigger CI: IpAddress
l Trigger Query:
l CI attribute conditions:
CI Attribute Value
Parameters
Ports: 43443, 43455
Trigger Query
l Trigger CI: IpServiceEndpoint
l CI attribute conditions:
CI Attribute Value
Trigger Query
l Trigger CI: IpServiceEndpoint
l CI attribute conditions:
CI Attribute Value
PhysicalPort NOT Name Is null AND NOT Port VLAN Is null AND NOT
PortIndex Is null AND NOT Container Is null
Discovery Flow
Add IP addresses of the Sybase databases RMENGDB and ANIDB used by CiscoWorks LMS to
a discovery probe range:
3. CiscoWorks NetDevices
4. CiscoWorks Layer 2
Input CIT
IpServiceEndpoint: the TCP port at which the RMENGDB Sybase instance is listening. (The
default is 43455.)
Used Scripts
l ciscoworks_utils.py
l CiscoWorks_NetDevices.py
Discovered CITs
l Composition
l Containment
l HardwareBoard
l Interface
l IpAddress
l IpSubnet
l Layer2Connection
l Membership
l Node
l PhysicalPort
l Realization
l Vlan
Parameters
Parameter Description
Default: false.
Default: true.
Default: rmengdb.
Default: 250.
Input CIT
IpServiceEndpoint: the TCP port at which the ANIDB Sybase instance is listening. (The default is
43443.)
Input Query
CiscoWorks LMS Campus DB with PhysicalPorts
Node Conditions
Triggered CI Data
Name Value
db_port ${SOURCE.network_port_number}
ip_address ${IpAddress.ip_address}
netdevice_cmdbid ${NetDevice.global_id}
netdevice_name ${NetDevice.name}
port_cmdbid ${PhysicalPort.global_id}
port_container_cmdbid ${PhysicalPort.root_container}
Name Value
port_index ${PhysicalPort.port_index}
port_name ${PhysicalPort.name}
port_vlan ${PhysicalPort.port_vlan}
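The ${SOURCE.attr} and ${NetDevice.attr} placeholders in the Triggered CI Data table resolve against the CIs matched by the input query. A minimal resolver sketch (the sample attribute values are invented for illustration):

```python
# Sketch of resolving ${CI.attribute} placeholders from Triggered CI Data.
# The sample CI values below are invented for illustration.
import re

def resolve(template, cis):
    def repl(match):
        ci_name, attr = match.group(1).split(".", 1)
        return str(cis[ci_name][attr])
    return re.sub(r"\$\{([^}]+)\}", repl, template)

cis = {
    "SOURCE": {"network_port_number": 43443},
    "NetDevice": {"global_id": "abc123", "name": "switch-01"},
}

print(resolve("${NetDevice.name}", cis))  # switch-01
```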
Used Scripts
l ciscoworks_utils.py
l CiscoWorks_Layer2.py
Discovered CITs
l Composition
l Containment
l HardwareBoard
l Interface
l IpAddress
l IpSubnet
l Layer2Connection
l Membership
l Node
l PhysicalPort
l Realization
l Vlan
Parameters
Parameter Description
Default: false.
Default: true.
Default: anidb.
Default: 1,000.
Discovery Mechanism
The adapters in this package connect to the Sybase databases used by CiscoWorks LMS using
JDBC, and run SQL queries to retrieve information. The Sybase database instances are used as
part of the trigger for jobs in this package. This allows the jobs to be included in UCMDB's spiral
discovery schedule.
l CiscoWorks Layer 2.
CiscoWorks NetDevices triggers off the CiscoWorks Resource Manager Essentials database,
and retrieves network devices, VLAN and layer two infrastructure from it.
CiscoWorks Layer 2 triggers off the CiscoWorks Campus Manager database, and retrieves nodes
(servers). It associates them with VLANs and layer two infrastructure retrieved by CiscoWorks
NetDevices.
Database queries executed by this package on the CiscoWorks databases are as follows:
Note: The following query is used by the CiscoWorks NetDevices and CiscoWorks Layer 2
adapters on the RMENGDB and ANIDB database instances.
Get the database name to verify that queries are run on the correct database:
SELECT db_name()
Note: The following queries are used by the CiscoWorks NetDevices adapter on the
RMENGDB database instance.
Get a count of the number of network devices in the database (This is required to determine the
number of chunks to query. For details on chunking, see "Parameters" on page 48.)
SELECT netdevices.Device_Id,
deviceState.NetworkElementID, netdevices.Device_Display_Name,
netdevices.Host_Name, netdevices.Device_Category,
netdevices.Device_Model, netdevices.Management_IPAddress,
deviceState.Global_State
FROM lmsdatagrp.NETWORK_DEVICES netdevices JOIN dba.DM_Dev_State
deviceState ON netdevices.Device_Id=deviceState.DCR_ID
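The device count retrieved first drives how many chunked queries follow. A sketch of the arithmetic, assuming the NetDevices ChunkSize default of 250 shown in the Parameters table:

```python
# The device count determines the number of query chunks; ChunkSize
# defaults to 250 for the NetDevices adapter per the Parameters table.
import math

def chunk_count(total_devices, chunk_size=250):
    return int(math.ceil(float(total_devices) / chunk_size))

print(chunk_count(1001))  # 5 chunks of at most 250 devices each
```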
Note: The following queries are used by the CiscoWorks Layer 2 adapter on the ANIDB
database instance.
Get a count of the number of nodes (servers) in the database (This is required to determine if
chunking is required. See "Parameters" on page 51.)
If a database connection failure occurs after the driver is copied, it may be necessary to change
the driver classes in globalSettings.xml from:
<Sybase>com.sybase.jdbc.SybDriver</Sybase>
to
<Sybase>com.sybase.jdbc3.SybDriver</Sybase>
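The driver-class change can be applied programmatically. A minimal sketch, assuming the <Sybase> element sits under the globalSettings.xml root (the real file has more surrounding structure):

```python
# Minimal sketch of the globalSettings.xml edit described above; the
# surrounding file structure is simplified to a bare <settings> root.
import xml.etree.ElementTree as ET

settings = ET.fromstring(
    "<settings><Sybase>com.sybase.jdbc.SybDriver</Sybase></settings>")

for el in settings.iter("Sybase"):
    if el.text == "com.sybase.jdbc.SybDriver":
        el.text = "com.sybase.jdbc3.SybDriver"

print(ET.tostring(settings, encoding="unicode"))
```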
Overview 55
Supported Versions 55
Topology 55
Views 61
Reports 68
Overview
Integration between ECC and DFM involves synchronizing devices, topology, and hierarchy of
storage infrastructure in the UCMDB database (CMDB). This enables Change Management and
Impact Analysis across all business services mapped in UCMDB from a storage point of view.
DFM initiates discovery on the ECC database. Synchronized Configuration Items (CIs) include
Storage Arrays, Fibre Channel Switches, Hosts (Servers), Storage Fabrics, Storage Zones, Logical
Volumes, Host Bus Adapters, Storage Controllers, and Fibre Channel Ports. The integration also
synchronizes physical relationships between hardware, and logical relationships between Logical
Volumes and hardware devices, to enable end-to-end mapping of the storage infrastructure.
The integration includes the ECC_Integration.zip package, which contains the trigger TQL, DFM
script, adapter, and job for ECC integration.
Supported Versions
Target Platform OS Platform DFM Protocol ECC Version
EMC Control Center All Generic DB (SQL) over JDBC, SSL optional 6.0 and 6.1
Topology
The following diagram illustrates the storage topology and shows the relationships between logical
volumes on a storage array and those on servers:
Note: For details on running an integration job, see "Integration Studio" in the HP Universal
CMDB Data Flow Management Guide.
2. Under Integration Properties > Adapter, select the EMC Control Center adapter.
3. Under Adapter Properties > Data Flow Probe, select the Data Flow Probe.
a. Select Existing CI (if you have a valid, existing CI). The Select Existing CI pane appears.
Select the CI; or
b. Create New CI (if you need to create a new CI). The Topology CI Creation Wizard
appears. Complete the creation of the CI using the Wizard.
Note: For details on the Topology CI Creation Wizard, see "Topology CI Creation Wizard"
in the HP Universal CMDB Data Flow Management Guide.
5. Verify the credentials for the chosen CI instance. Right-click on Trigger CI instance and
select Actions > Edit Credentials Information.
Note: For details about the credentials, see "How to Run the ECC/UCMDB Integration
Job" on the previous page
Tip: You can include the ECC job in the DFM schedule. For details, see "New Integration
Job/Edit Integration Job Dialog Box" in the HP Universal CMDB Data Flow Management
Guide.
To connect to the ECC Oracle database with SSL communication, see "How to Run the
ECC/UCMDB Integration Job" on the previous page.
l "Adapter" on page 60
1. Connects to the ECC Oracle database instance using credentials from the Generic DB
Protocol (SQL). For details, see "How to Run the ECC/UCMDB Integration Job" on page 56.
2. Queries for fibre channel switches and ports on each switch and creates Fibre Channel
Switch CIs:
3. Queries for fibre channel adapters and ports on each Fibre Channel Switch and creates Fibre
Channel HBA and Fibre Channel Port CIs:
4. Queries for storage arrays and creates Storage Array CIs:
5. Queries for Fibre Channel ports, Fibre Channel host bus adapters (HBA), and logical volumes
on each storage array, and creates Fibre Channel Port, Fibre Channel Port HBA, and
Logical Volume CIs:
6. Queries for hosts/servers and creates appropriate Computer, Windows, or Unix CIs. Results
of this query are used to create host resource CIs, such as CPU, if this information is
available:
7. Queries for Fibre Channel ports, Fibre Channel host bus adapters (HBA), and logical volumes
on each host/server and creates Fibre Channel Port, Fibre Channel Port HBA, and
Logical Volume CIs:
8. Queries for logical volume mapping between logical volumes on hosts/servers and logical
volumes on storage arrays, and adds Dependency relationships between hosts/servers and
storage arrays:
9. Queries for paths between hosts/servers and storage arrays and adds Fibre Channel
Connect relationships between respective hosts/servers, switches, and storage arrays:
Trigger Query
Trigger CI: ECC Oracle database
Adapter
The following table gives details of the adapter parameters.
Parameter Description
allowDNSLookup If a node in the ECC database has no IP address but does have a
DNS name, the IP address can be resolved from the DNS name.
Default: False
Default: True
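The behavior that allowDNSLookup enables can be illustrated with a small sketch. This is not adapter code; the function name and the resolver argument are assumptions used for illustration:

```python
import socket

def resolve_node_ip(node_ip, dns_name, resolver=socket.gethostbyname):
    # Keep an existing IP address; otherwise try to resolve one from the
    # DNS name, as allowDNSLookup permits. Returns None on failure.
    if node_ip:
        return node_ip
    if dns_name:
        try:
            return resolver(dns_name)
        except socket.gaierror:
            return None
    return None
```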
l CPU
l Containment
l Composition (link)
l Dependency (link)
l Node
l IpAddress
l Logical Volume
l Membership (link)
l Storage Array
l Storage Processor
l Unix
l Windows
Views
The Storage_Basic package contains views that display common storage topologies. These are
basic views that can be customized to suit the integrated ECC applications.
To access the Storage_Basic package, go to Administration > Package Manager. For details,
see "Package Manager" in the HP Universal CMDB Administration Guide.
Storage Array does not require all components in this view to be functional. Composition links
stemming from the Storage Array have a cardinality of zero-to-many. The view may show Storage
Arrays even when there are no Logical Volumes or Storage Processors.
FC Switch Details
This view shows a Fibre Channel Switch and all connected Fibre Channel Ports.
Note: Although shown in the preceding graphic, the ECC job does not discover Storage
Fabrics. The view represented by this query is populated without Storage Fabrics.
SAN Topology
This view maps physical connections between Storage Arrays, Fibre Channel Switches, and
Hosts. The view shows Fibre Channel Ports below their containers. The view groups the Fibre
Channel Connect relationship CIT to prevent multiple relationships between the same nodes from
appearing in the top layer.
Storage Topology
This view maps logical dependencies between Logical Volumes on Hosts and Logical Volumes on
Storage Arrays. There is no folding in this view.
All impact analysis rules fully propagate both Change and Operation events. For details on impact
analysis, see "Impact Analysis Manager Page" and "Impact Analysis Manager Overview" in the HP
Universal CMDB Modeling Guide.
To access the Storage_Basic package: Administration > Package Manager. For details, see
"Package Manager" in the HP Universal CMDB Administration Guide.
Note: Impact analysis events are not propagated to Fibre Channel Ports for performance
reasons.
Note: Although shown in the preceding graphic, the ECC job does not discover Storage
Fabrics. The rule represented by this query is used without Storage Fabrics.
FC Port to FC Port
This rule propagates events on a Fibre Channel Port to another connected Fibre Channel Port.
l The event propagates from the HBA to the Storage Array and the Logical Volumes on the Array
because of the Storage Devices to Storage Array rule.
l The impact analysis event on the Logical Volume then propagates to other dependent Logical
Volumes through the Logical Volume to Logical Volume rule.
l Hosts using those dependent Logical volumes see the event next because of the Host Devices
to Host rule.
l Depending on business needs, you define impact analysis rules to propagate events from these
hosts to applications, business services, lines of business, and so on. This enables end-to-end
mapping and impact analysis using UCMDB.
Reports
The Storage_Basic package contains basic reports that can be customized to suit the integrated
ECC applications.
In addition to the system reports, Change Monitoring and Asset Data parameters are set on each
CIT in this package, to enable Change and Asset Reports in UCMDB.
To access the Storage_Basic package: Administration > Package Manager. For details, see
"Package Manager" in the HP Universal CMDB Administration Guide.
Host Configuration
This report shows detailed information on hosts that contain one or more Fibre Channel HBAs,
Fibre Channel Ports, or Logical volumes. The report lists hosts with sub-components as children of
the host.
Note: Although shown in the preceding graphic, the ECC job does not discover Storage
Fabrics. The report represented by this query is populated without Storage Fabrics.
Overview 73
Supported Versions 73
Topology 73
Overview
UCMDB integration with IDS Scheer ARIS IT Architect (ARIS) involves synchronizing business
services/processes and Enterprise Architecture (EA) information from ARIS to the UCMDB
database. This enables end-to-end Change Management and Impact Analysis from the IT
infrastructure (at the data center level) to the business service/process level.
The integration involves a UCMDB initiated pull of information from an XML export generated by
ARIS. Synchronized configuration items (CIs) include Business Service, Business Process,
Business Process Step, Ownership information and Business Application (software). The
integration requires manual reconciliation of business application instances in UCMDB.
The ARIS_Integration.zip package contains the views, scripts, adapters, and jobs for the IDS
Scheer ARIS integration.
Supported Versions
This integration supports ARIS IT Architect version 7.1.
Topology
The following image is a sample topology showing relationships between the IT infrastructure (data
center layer) and Business Processes/Services.
Note: For a list of discovered CITs, see "Discovered CITs" on page 80.
n The language of the output file must be the same as the language used for UCMDB.
o Assignments: No assignments
o Options to export users and groups and group structures should NOT be selected.
Note: Save the exported file to a location accessible to the Data Flow Probe.
For more details on exporting XML files in ARIS, contact your IDS Scheer support
representative or see the ARIS IT Architect documentation.
A user configurable mapping file (also in XML format) may be used to customize mapping of:
a. For each ARIS object type that you want to map, in the exported ARIS XML file (the
source XML) locate the relevant ObjDef tag, and note the TypeNum and AttrDef.Type
values.
b. In the mapping file, ARIS_To_UCMDB.xml, locate the <targetcis> section and enter
these values into the source_CI_type name and source_attribute attributes respectively.
Example:
In the following image of the source XML file, the object, ObjDef.4hzv--y-----p--, has the
following attribute values:
o TypeNum = OT_IS_FUNC
o AttrDef.Type = AT_NAME
These values are entered in the mapping file's source_CI_type name and source_
attribute attributes, as illustrated below:
Note: The section marked as Must be present for all CI Types must exist for ALL CI
type mappings defined in the mapping file. This section populates the unique object ID
used by ARIS in the "data_externalid" attribute of the UCMDB CI type.
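The TypeNum and AttrDef.Type values noted in the steps above can also be collected programmatically. The following is a hedged sketch; the sample document below is invented, and only the element and attribute names mentioned in the steps (ObjDef, AttrDef, TypeNum, AttrDef.Type) are assumed to match the real export:

```python
import xml.etree.ElementTree as ET

# Invented sample fragment using the tag layout described above.
sample = """
<Model>
  <ObjDef TypeNum="OT_IS_FUNC">
    <AttrDef AttrDef.Type="AT_NAME"/>
  </ObjDef>
</Model>
"""

def object_type_pairs(xml_text):
    # Collect (TypeNum, AttrDef.Type) pairs for each ObjDef element.
    pairs = []
    for obj in ET.fromstring(xml_text).iter("ObjDef"):
        for attr in obj.iter("AttrDef"):
            pairs.append((obj.get("TypeNum"), attr.get("AttrDef.Type")))
    return pairs
```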
c. For each ARIS link that you want to map, note the following values in the source XML file:
o Locate the relevant CxnDef tag and note the CxnDef.Type attribute.
o Locate the CxnDef tag's parent, ObjDef. Note the TypeNum value under this ObjDef.
o Under CxnDef, note the ToObjDef.IDRef attribute, and search for an ObjDef tag with
the identical value. Then, under this ObjDef, note the TypeNum attribute.
o For source_ci_type_end1, enter the TypeNum value of the CxnDef tag's parent.
o For source_ci_type_end2, enter the TypeNum value of the ObjDef that is equivalent to
the ToObjDef.IDRef
Example:
o CxnDef.Type = CT_CAN_SUPP_1
These values are entered in the mapping file's <link> tag, in the source_link_type,
source_ci_type_end1, and source_ci_type_end2 attributes respectively, as illustrated
below:
b. Under Integration Properties > Adapter, select the Software AG ARIS adapter.
c. Fill in the value for ARIS_XML_file. Set the value as the path to the XML file containing the
exported ARIS data. See "Export the ARIS model to an XML file" on page 74.
e. Under Adapter Properties > Data Flow Probe, select the Data Flow Probe.
o Select Existing CI (if you have a valid, existing CI). The Select Existing CI pane
appears.
o Create New CI (if you need to create a new CI). The Topology CI Creation Wizard
appears. Complete the creation of the CI using the Wizard.
Note: For details on the Topology CI Creation Wizard, see "Topology CI Creation
Wizard" in the HP Universal CMDB Data Flow Management Guide.
Note: For details on running an integration job, see "Integration Studio" in the HP
Universal CMDB Data Flow Management Guide.
l Trigger query:
Adapter
Discovered CITs
The UCMDB-ARIS integration discovers the following CITs:
l BusinessApplication
l BusinessFunction
l BusinessService
l Containment
l Node
l Organization
l Ownership
l Person
Overview 83
Databases 84
Properties Files 84
Custom Converters 91
External_source_import Package 91
Overview
Your data is probably stored in several formats, for example, in spreadsheets, databases, XML
documents, properties files, and so on. You can import this information into HP Universal CMDB
and use the functionality to model the data and work with it. External data are mapped to CIs in the
CMDB.
The default delimiter for CSV files is the comma, but any symbol can be used as a CSV delimiter,
for example, a horizontal tab.
Note: Microsoft Office Excel includes native support for the CSV format: Excel spreadsheets
can be saved to a CSV file and their data can then be imported into UCMDB. CSV files can be
opened in an Excel spreadsheet.
1. Select Adapter Management > Resources pane > Packages > External_source_import >
Adapters > Import_CSV.
By default, the value of the rowToStartIndex parameter is 1, that is, DFM retrieves data from
the first row.
4. Replace 1 with the number of the row at which to start retrieving data. For example, to skip the
first row and start with the second row, replace 1 with 2.
Databases
A database is a widely used enterprise approach to storing data. Relational databases consist of
tables and relations between these tables. Data is retrieved from a database by running queries
against it.
The following databases are supported: Oracle, Microsoft SQL Server, MySQL, and DB2.
Properties Files
A properties file is a file that stores data in the key = value format. Each row in a properties file
contains one key-to-value association. In code terms, a properties file represents an associative
array and each element of this array (key) is associated with a value.
A properties file is commonly used by an application to hold its configuration. If your application
uses a configuration file, you can model the application in UCMDB.
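As a sketch, the key = value format described above maps naturally onto a dictionary. This simplified parser ignores comments and blank lines; real properties files also support escapes and line continuations:

```python
def parse_properties(text):
    # Each non-empty, non-comment line holds one "key = value" association.
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:
            props[key.strip()] = value.strip()
    return props
```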
l "Prerequisites" below
l "Result" on page 89
1. Prerequisites
The admin opens the CSV file and analyzes the data:
The file includes the name, model, year of manufacture, and the date when the car was
purchased, that is, there are four columns of data:
1. Name (string)
2. Model (string)
There are three rows to the file, which means that the admin expects three CIs to be created in
UCMDB.
2. Create a CIT
a. The admin creates a CIT named Car to hold the attributes that are to be mapped to the data
in the CSV file (name, model, and so on):
For details, see "Create a CI Type" in the HP Universal CMDB Modeling Guide.
b. During the creation of the CIT, the admin adds these attributes as follows:
For details, see "Attributes Page" in the HP Universal CMDB Modeling Guide.
communication.downloader.cfgfiles.CiMappingConfigFile">
<ci type="car">
<map>
<attribute>name</attribute>
<column>1</column>
</map>
<map>
<attribute>model</attribute>
<column>2</column>
</map>
<map>
<attribute>year_of_manufacture</attribute>
<column>3</column>
</map>
<map>
<attribute>date_of_purchase</attribute>
<column>4</column>
</map>
</ci>
</mappings>
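To see how a column-index mapping like the one above behaves, here is a sketch that applies it to CSV rows. The mapping dictionary mirrors the XML; the function itself is illustrative, not DFM code:

```python
import csv
import io

# Attribute -> 1-based column index, mirroring the mapping file above.
MAPPING = {"name": 1, "model": 2, "year_of_manufacture": 3, "date_of_purchase": 4}

def rows_to_cis(csv_text, mapping, ci_type="car"):
    # Build one CI per CSV row, picking attribute values by column index.
    cis = []
    for row in csv.reader(io.StringIO(csv_text)):
        attrs = {attr: row[col - 1] for attr, col in mapping.items()}
        cis.append({"type": ci_type, "attributes": attrs})
    return cis
```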
All conversions between the values in the CSV file and the CI attributes are done by a
converter. Several converter types are included in the package by default. For details, see
"How to Convert Strings to Numbers" on page 90.
n Under Integration Properties > Adapter, select the Import from CSV adapter.
n Under Adapter Properties > Data Flow Probe, select the Data Flow Probe.
o Select Existing CI (if you have a valid, existing CI). The Select Existing CI pane
appears. Select the CI; or
o Create New CI (if you need to create a new CI). The Topology CI Creation Wizard
appears. Complete the creation of the CI using the Wizard.
Note: For details on the Topology CI Creation Wizard, see "Topology CI Creation
Wizard" in the HP Universal CMDB Data Flow Management Guide.
Note: For details on running an integration job, see "Integration Studio" in the HP
Universal CMDB Data Flow Management Guide.
This job uses the Shell Trigger CIT to discover the CSV file on a remote machine. The
Input CIT is Shell and the discovered CIT is declared as IT Universe. However, the actual
discovered CIs depend on the mapping configuration.
The admin activates the following job: Import from CSV file.
For details on activating jobs, see "Discovery Modules Pane" in the HP Universal CMDB
Data Flow Management Guide.
Note: This step only applies if using UCMDB 9.03 and earlier.
After activation, the admin locates the Shell CI (of the machine where the cars.csv file is
located) and adds it to the job. For details, see "Choose CIs to Add Dialog Box" in the HP
Universal CMDB Data Flow Management Guide.
6. Result
The admin accesses the CIT Manager and searches for instances of the Car CIT. UCMDB
finds the three instances of the CIT:
A CSV file contains records of type string. However, some of the record values need to be
handled as numbers. This is done by adding a converter element to the map element (in [your
mapping file name].xml):
<converter module="import_converters"></converter>
The import_converters.py file (Adapter Management > Resources pane > Packages>
External_source_import > Scripts) contains a set of the most commonly needed converters and
types:
l toString
l stringToInt
l stringToLong
l stringToFloat
l stringToBoolean
l stringToDate
l stringToDouble
l skipSpaces
l binaryIntToBoolean
l stringToBytesArray
l stringToZippedBytesArray
Example of a Converter
Suppose a CSV file contains the row Usain,21,Male. This row must be mapped to the Person CIT,
which includes name (Usain), age (21), and gender (Male) attributes. The age attribute should be of
type integer. Therefore, before the Person CIs can retrieve the age values, the string in the CSV file
must be converted to an integer to comply with the CIT attribute type.
<map>
<attribute>age</attribute>
<column>2</column>
<converter module="import_converters">stringToInt</converter>
</map>
stringToInt. The name of the converter. In the import_converters.py file, the method is written
as follows:
def stringToInt(value):
    if value is not None:
        return int(value.strip())
    else:
        return 0
Custom Converters
You can write your own custom converters: Add a new method to the import_converters.py file
or create your own script and add a set of converter methods to it. Call the method with the name of
the script, for example:
<converter module="your_converter_script">[your_converter_method]
</converter>
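For example, a custom converter script might look like the following. The script and method names are hypothetical; only the calling convention follows the pattern above:

```python
# your_converter_script.py - a hypothetical custom converter module,
# following the style of the methods in import_converters.py.
def stringToMegabytes(value):
    # Convert a string such as "2048 KB" to a whole number of megabytes.
    if value is None:
        return 0
    kilobytes = int(value.strip().split()[0])
    return kilobytes // 1024
```

It would then be referenced from the mapping file as `<converter module="your_converter_script">stringToMegabytes</converter>`.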
External_source_import Package
The External_source_import package consists of three jobs and three adapters. There is one job
and one adapter for each external source (CSV file, properties file, database):
The jobs are located under Universal Discovery > Discovery Modules/Jobs > Discovery
Modules > Others > Discovery Tools.
The adapters are located in the Integration Studio and are accessed when creating or editing an
Integration Point.
Job Details
The job details are as follows:
This job has no Trigger queries associated with it. That is, this job is not triggered automatically (nor
are the Import from Properties file and the Import from Database jobs). After you activate
the job, you must manually add input CIs to the job so that it runs against a particular destination.
For details, see "Add the discovered Shell CI to the job" on page 89.
Adapter Parameters
The following parameters are included by default:
Parameter Description
bulkSize This parameter applies only when flushObjects is true. In that
case, it sets the number of CIs in each chunk sent when reporting
discovery results.
ciType The CIT name. This job creates and reports CIs of this type to UCMDB,
based on data in the CSV file. For example, if the CSV file contains records
for UNIX hosts, you must set the ciType parameter to unix.
csvFile The full path to the CSV file on the remote machine. The job uses the Shell
CI Type as input to reach this path on the remote machine.
delimiter The delimiter used in the CSV file. The comma (,) delimiter is the default but
other delimiters are supported. For details, see "Delimiters" on page 94.
flushObjects If true, the probe divides the discovery result into chunks, and sends each
chunk to the UCMDB Server. This helps prevent out-of-memory issues
where a large amount of data is sent. The chunk size can be configured with
the bulkSize parameter.
If false (the default value), the probe sends the discovery result without
dividing it into chunks.
mappingFile For details of the mapping file, see "External Source Mapping Files" on
page 101.
mappingString The string containing mapping information used to map the CSV column
indexes and attributes to import. You define this mapping in the following
format:
rowToStartIndex For details on setting the row at which DFM starts collecting data, see
"CSV Files with Column Titles in First Row" on page 84.
skipEmptyValues This flag determines whether to skip empty values. If true, empty column
values are not sent.
For details on overriding an adapter parameter, see "Override Adapter Parameters" in the HP
Universal CMDB Developer Reference Guide.
l In an external XML file. You must specify the mappingFile parameter. For details, see
"External Source Mapping Files" on page 101.
l Directly in a job's ciType and mappingString parameters, without using an external file.
Note: When using this mapping method, you cannot specify attribute types or converters.
If the mappingFile parameter is specified, the job tries to retrieve mapping information from the
XML file. If it is not specified, the job uses the mapping information specified in the ciType and
mappingString parameters.
l Delimiters
The delimiter divides values in the same row of a CSV file. Supported delimiters are:
n Single symbol. Any symbol can be used as a delimiter, for example, the pipe sign (|), the
letter O. Delimiters are case sensitive.
n ASCII code. If an integer number is used as the value for a delimiter parameter, this value is
treated as ASCII code, and the related symbol is used as the delimiter. For example, 9 is a
valid delimiter because 9 is the ASCII code for the horizontal tab.
l Quotation Marks
You can use double or single quotation marks around values; everything between the two
quotation marks is treated as a single value.
n If a delimiter symbol is used in a value, the value must be surrounded with quotation marks.
For example, the following row includes a comma inside a value, so the value must be
quoted:
- April 4, 1915.
l Escaping Characters
n Backslash
n Single quote
n Double quote
n Delimiter, that is, the delimiter used in the same CSV file.
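The delimiter and quotation rules above can be exercised with Python's csv module. This is a sketch only; DFM's own parser is not exposed, and the function name is illustrative:

```python
import csv
import io

def read_rows(text, delimiter=","):
    # An all-digit delimiter value is treated as an ASCII code (9 -> tab),
    # matching the delimiter parameter behavior described above. Quoted
    # values may contain the delimiter symbol.
    if delimiter.isdigit():
        delimiter = chr(int(delimiter))
    return list(csv.reader(io.StringIO(text), delimiter=delimiter, quotechar='"'))
```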
Job Details
The job details are as follows:
This job has no trigger queries associated with it. The job tries to get the Instance name and Port
using the attributes Name and Application Listening Port Number of the Input Database CI. If
these attributes are empty, it uses the Instance Name and Port number defined in Generic
DB Protocol (SQL) credentials.
Parameter Description
bulkSize This parameter applies only when flushObjects is true. In that
case, it sets the number of CIs in each chunk sent when reporting
discovery results.
flushObjects If true, the probe divides the discovery result into chunks, and sends each
chunk to the UCMDB Server. This helps prevent out-of-memory issues where
a large amount of data is sent. The chunk size can be configured with the
bulkSize parameter.
If false (the default value), the probe sends the discovery result without
dividing it into chunks.
mappingString The string containing mapping information used to map the Database column
names and the attributes to import. You define this mapping in the following
format:
Example:
A_IP_ADDRESS:ip_address, A_IP_DOMAIN:ip_domain
sqlQuery If a SQL query is specified, mapping is performed against its result. This
parameter is ignored if tableName is defined.
tableName If a table name is specified, mapping is performed against the table's columns.
For details on overriding an adapter parameter, see "Override Adapter Parameters" in HP Universal
CMDB Developer Reference Guide.
l Import data using the schema name and table name parameters:
l Import data specifying an arbitrary SQL query as the source of the data:
The SQL query is generated from the defined query. For more details, see "Importing Data with
an SQL Query" on the next page.
l A schema name cannot be specified in a parameter but can be included in a SQL query.
Note: The default dbo schema is used for Microsoft SQL Server.
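The precedence between tableName and sqlQuery described above (sqlQuery is ignored if tableName is defined) can be sketched as follows. The query shape and function name are illustrative, not the adapter's actual SQL:

```python
def build_source_query(table_name=None, sql_query=None, schema="dbo"):
    # tableName wins over sqlQuery; the schema defaults to dbo,
    # as for Microsoft SQL Server in the note above.
    if table_name:
        return "SELECT * FROM %s.%s" % (schema, table_name)
    return sql_query
```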
Column Types
Types enable you to specify, in the mapping file, the type of column that exists in the external
source. For example, a database includes information about column types, and the value of this
type needs to be included in the CI's attributes. This is done by adding a type element to the map
element (in mapping_[your mapping file name].xml):
<column type="int"></column>
l string
l Boolean
l date
l int
l long
l double
l float
l timestamp
Note:
n If the column element does not include a type attribute, the element is mapped as a string.
A database column has an integer type and can be either 0 or 1. This integer must be mapped to a
Boolean attribute of a CIT in UCMDB. Use the binaryIntToBoolean converter, as follows:
<map>
<attribute>cluster_is_active</attribute>
<column type="int">cluster_is_active</column>
<converter module="import_converters">binaryIntToBoolean</converter>
</map>
type="int". This attribute specifies that the value of cluster_is_active should be retrieved as
an integer, and that the value passed to the converter method should be an integer.
If the cluster_is_active attribute of the CIT is of type integer, the converter is not needed
here, and the mapping file should say:
<map>
<attribute>cluster_is_active</attribute>
<column type="int">cluster_is_active</column>
</map>
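For reference, a binaryIntToBoolean converter in the style of the stringToInt method shown earlier might look like this. This is a sketch; the shipped implementation in import_converters.py may differ:

```python
def binaryIntToBoolean(value):
    # Map the database integers 0/1 to a Boolean CIT attribute value.
    return value == 1
```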
1. The root CI (for example: Node) was already populated into UCMDB.
2. It is possible to construct an SQL query to select the UCMDB ID of already imported root CIs
from an external database.
In such cases, you can map the selected UCMDB ID of the root CI to the root container attribute of
the contained CI.
Note: Validation of data (checking if the root CI already exists in UCMDB) is not supported. It
is your responsibility to correctly configure the population of the root container attribute.
Job Details
The job details are as follows:
Parameter Description
bulkSize This parameter applies only when flushObjects is true. In that
case, it sets the number of CIs in each chunk sent when reporting
discovery results.
flushObjects If true, the probe divides the discovery result into chunks, and sends each
chunk to the UCMDB Server. This helps prevent out-of-memory issues where
a large amount of data is sent. The chunk size can be configured with the
bulkSize parameter.
If false (the default value), the probe sends the discovery result without
dividing it into chunks.
mappingFile For details of the mapping file, see "External Source Mapping Files" on the
next page.
mappingString The string containing mapping information used to map the column indexes
and attributes to import. You define this mapping in the following format:
ip_address:ip_address,name:name,ip_address_type:ip_address_type
propertyFile The full path to the properties file located on a remote machine. The Input CI
runs the Shell discovery that is used to access this file on the remote machine.
For details on overriding an adapter parameter, see "Override Adapter Parameters" in the HP
Universal CMDB Developer Reference Guide.
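The mappingString format shown in the table above (source:attribute pairs separated by commas) can be parsed as in this sketch; the function name is illustrative:

```python
def parse_mapping_string(mapping_string):
    # Split "source:attribute" pairs, for example
    # "ip_address:ip_address,name:name,ip_address_type:ip_address_type".
    mapping = {}
    for item in mapping_string.split(","):
        source, _, attribute = item.partition(":")
        mapping[source.strip()] = attribute.strip()
    return mapping
```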
Each value must be set out in a single line. Use backslash+n (\n) to specify a new line. Values can
contain anything, including \n for a new line, quotes, tabs, and so on.
l mapping_template.xml. A template that serves as a source for creating the mapping file.
l mapping_schema.xsd. The XML schema used to validate the XML mapping file. The XML
mapping file must be compliant with this schema.
l mapping_doc.xml. A file that contains Help on creating a mapping file, including all valid
elements.
The mapping file describes the mapping only and does not include information about how data
should be obtained. In this way, you can use one mapping file across different jobs.
All the adapter files in the External_source_import package include a mappingFile parameter, for
example:
name="mappingFile". The value of this parameter is the mapping XML file. The mapping file is
always located on the server and is downloaded to the Data Flow Probe machine upon job
execution.
Solution: For details on defining from which row DFM should read the CSV file, see "CSV Files
with Column Titles in First Row" on page 84.
l Problem: When importing large CSV or properties files on the network, there may be time-out
issues.
l Limitation: When importing data from an external database, and the data includes a null value,
it is sent to UCMDB with an attribute value of None.
l Limitation: The DFM Probe breaks down the imported data into 20 KB chunks. This can cause
identification issues.
Overview 104
Topology 104
Overview
This chapter describes the usage and functionality of the XLS_Import discovery package
developed for importing UCMDB topology from a Microsoft Excel (*.xls, *.xlsx) file.
Supported Versions
This discovery supports:
l Microsoft Excel files, versions 97, 2000, XP, and 2003 (*.xls)
Topology
The following image displays the topology of the Import from Excel discovery.
Note: The topology discovered by the Import from Excel Workbook job relies on import file
content, so only root objects are enumerated as discovered CITs. For a list of discovered
CITs, see "Discovered CITs" on page 118.
Note:
l The Import from Excel Sample job is similar to the Import from Excel Workbook job. It
differs only by reference to the sample import file.
b. Under Integration Properties > Adapter, select the Import topology from Excel
Workbook adapter.
c. Under Adapter Properties > Data Flow Probe, select the Data Flow Probe.
d. Under Adapter Properties > Trigger CI instance select one of the following:
o Select Existing CI (if you have a valid, existing CI). The Select Existing CI pane
appears. Select the CI and click OK.
o Create New CI (if you need to create a new CI). The Topology CI Creation Wizard
appears. Create the CI using the Wizard.
Note: For details on the Topology CI Creation Wizard, see "Topology CI Creation
Wizard" in the HP Universal CMDB Data Flow Management Guide.
Note: For details on running an integration job, see "Integration Studio" in the HP
Universal CMDB Data Flow Management Guide.
l Two hosts
l "Prerequisite" below
1. Prerequisite
Open a new Excel file and name it tutorial.xls.
2. Add a CI type
Double-click the Sheet1 tab and rename it with the desired CI type. For this tutorial, use the
name node.
Note:
To import a node CI into UCMDB, you must set the host_key attribute. You may do this by one
of the following methods:
n Set host_key in the form <IP address> <Domain>. (For example: 192.168.100.100
DefaultDomain.) This is enough to import a node CI into UCMDB.
n Set host_key as the lowest MAC address of the attached network interface. This is not
enough to import a node CI into UCMDB. You must also configure the following attributes:
ii. Set values for the node attributes that allow the node to be identified by the
reconciliation rule. Note: The node reconciliation rule also allows identification of the
nodes linked to this node IP address or network interface CIs. If you prefer to identify
nodes by linked CIs, you must ensure the Excel document also has the imported
IP/Network interface CIs, and the relationships between node CIs and IP/ network
interface CIs.
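The first host_key form described above can be sketched as follows. The function name is illustrative; the DefaultDomain value comes from the example above:

```python
def build_host_key(ip_address, domain="DefaultDomain"):
    # host_key in the "<IP address> <Domain>" form,
    # e.g. "192.168.100.100 DefaultDomain".
    return "%s %s" % (ip_address, domain)
```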
Note:
You can show the node name and the operating system.
Note: Each row in the sheet (except the first one) represents a single CI.
b. Use the same procedure to define IP addresses in a second Excel sheet, for example,
Sheet2.
c. Use the same procedure to define a network CI in a third Excel sheet, for example, Sheet3.
running_software and process definitions are described in "Add CIs with containers" on the
next page
The Excel definition referencing style is recommended because only the tab name (CI type
name) and row number (the row number of the CI defined on the tab) are needed to identify
any imported CI - the presence or absence of any key fields is not necessary, reconciliation
rules are defined in UCMDB, and so on.
Typical links appear as =node!A2, meaning that the CI defined at row 2 on the node tab is
being referenced. It does not matter which column you reference; only the row numbers are
significant.
Note: Such references cannot be used if the Excel file was created from a CSV file or
using some other non-Excel format.
n By setting a composition of the desired object key fields divided by the pipe symbol
('|').
Note:
o The order of the key fields in the definition is important!
o Many objects have no keyed attributes and are identified with reconciliation rules.
For this reason, Excel references are preferred.
Note: To define an Excel reference, type an equal sign (=) in a cell, select the desired
reference cell, and press ENTER.
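Resolving such a reference needs only the tab name and row number. The following Python sketch is illustrative only (the function and pattern names are assumptions, not the import job's actual code):

```python
import re

# Illustrative sketch: resolve an Excel-style reference such as "=node!A2"
# into the CI type (tab name) and row number. Only the row number matters;
# the column letter is ignored.
REF_PATTERN = re.compile(r"^=(?P<sheet>\w+)!(?P<col>[A-Z]+)(?P<row>\d+)$")

def parse_ci_reference(cell_value):
    """Return (ci_type, row_number) for a reference cell, or None."""
    match = REF_PATTERN.match(cell_value.strip())
    if not match:
        return None
    return match.group("sheet"), int(match.group("row"))

print(parse_ci_reference("=node!A2"))        # ('node', 2)
print(parse_ci_reference("=ip_address!C5"))  # ('ip_address', 5)
```

Note that the column letter is deliberately discarded, matching the rule above that only row numbers are significant.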
7. Define relationships
To define relationships, create a sheet called relationships.
All links (relationships) in UCMDB are directed. This means each link has a start and end point.
Also, links have names and, like other CIs, may have attributes.
Start object reference -> link name -> End object reference [-> Attributes]
Link attribute definitions are described in "Add relationship attributes" on page 113.
The first row (column headings) is informational only. On this sheet, only the
order of the parameters is significant.
a. Using Excel references, add informative captions and define member links between the IP
subnet and first two IP addresses.
In this image, defined formulas are displayed (for example, =ip_address!A2). In actuality,
the values of referenced cells are shown.
b. Using key composition, define the relationships between the two IP addresses and their
routing domains as follows:
IP key fields are ip_address and routing_domain. The composite key looks like
192.168.100.100|MyDomain.
Note:
o Either type of reference can be chosen, but only one reference type can be used in a cell.
o Since the IP subnet CI has no key attributes in UCMDB 9.0x, it can be
referenced only by Excel reference.
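Building a composite key from key fields can be sketched in Python (an illustrative helper, not the job's actual code); the order of the fields is significant:

```python
# Illustrative sketch: build the composite key used to reference a CI by its
# key fields. Field order is significant, and the parts are joined with the
# pipe symbol ('|').
def composite_key(key_values):
    return "|".join(str(v) for v in key_values)

# IP key fields are ip_address and routing_domain, in that order:
print(composite_key(["192.168.100.100", "MyDomain"]))
# 192.168.100.100|MyDomain
```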
c. Add a containment reference from node to ip_address and add a dependency reference
from running_software to process:
Note: This use case is not widespread, but the Import from Excel Workbook job offers
this capability.
Since many different types of links can be defined on the relationships tab in Excel, it is
impossible to name columns with attribute names. For this purpose, the following notation is
used:
The description definition for the dependency link from running_software to process looks like
description|The Business app depends on the Sample process.
If many attributes must be added, they must be defined in additional columns in the
dependency row.
Note: On the relationships tab, no captions are needed for the attribute columns. If a
column heading is present, that column is treated as a comment column.
Note: If the attribute cannot be converted to the type defined in UCMDB, it is skipped and
you receive a warning in the UI.
Two list types exist in UCMDB: integer_list and string_list. To import such types,
value delimiters are provided: integer_list_delimiter and string_list_delimiter
respectively. The default delimiter for both is a comma (','), but this can be changed via the
corresponding job parameter.
If there is an attribute named some_int_list and it needs to be set using an integer list from 1 to
5, the cell in the relationships tab will look like:
some_int_list|1,2,3,4,5
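Splitting such a cell into an attribute name and its list values can be sketched as follows (illustrative names, not the job's actual code):

```python
# Illustrative sketch: split a list-typed cell such as "some_int_list|1,2,3,4,5"
# into an attribute name and its integer values. The default list delimiter is
# a comma; it is configurable via the integer_list_delimiter /
# string_list_delimiter job parameters.
def parse_list_attribute(cell_value, list_delimiter=","):
    name, _, raw_values = cell_value.partition("|")
    return name, [int(v) for v in raw_values.split(list_delimiter)]

print(parse_list_attribute("some_int_list|1,2,3,4,5"))
# ('some_int_list', [1, 2, 3, 4, 5])
```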
n Enumerate attribute types
Enumeration data types are supported for attributes. The job assumes the enumeration has
been entered in human-readable form and looks up the internal integer
representation used in UCMDB.
If a value is entered that is not an enumeration value, it is ignored and you receive a warning
in the log.
Because enumeration values are case sensitive in UCMDB, they are also case sensitive in
Excel.
For example, if SSN in the image below had been written in lower case letters, ssn, the job
would send an error message because it would not find the ssn string in UCMDB.
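The case-sensitive lookup can be sketched as follows; the mapping contents are hypothetical, and this is not the job's actual code:

```python
import logging

# Hypothetical enumeration mapping: human-readable value -> internal integer.
ENUM_VALUES = {"SSN": 1, "Passport": 2}

def resolve_enum(value):
    """Look up the internal integer for an enum value (case sensitive)."""
    if value not in ENUM_VALUES:
        logging.warning("Unknown enumeration value %r - skipping", value)
        return None
    return ENUM_VALUES[value]

print(resolve_enum("SSN"))  # 1
print(resolve_enum("ssn"))  # None (case sensitive)
```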
Discovery Mechanism
Each tab in the Excel file reflects a specific CI type. The CIT must be defined in the UCMDB data
model prior to importing file content. If only out-of-the-box CITs are imported, you do not have to
create the CITs because they already exist in UCMDB.
All attributes defined for a CIT must also already exist in UCMDB, or the data is rejected. Any
special rules for attributes, such as data type, obligation, and formatting, must also be
satisfied for the data to be successfully imported into UCMDB.
The data type of each attribute (string, long, integer, boolean, and so on) is taken from the UCMDB
data model, so you do not need to set attribute types manually. You must, however, specify the
attribute name in the document header line.
1. Verifies that the CITs on the tabs in the Excel spreadsheet exist in UCMDB.
2. Verifies that the attributes (the column names in the Excel spreadsheet) exist in UCMDB.
4. Processes all CITs that contain a root_container attribute after CITs that do not have this
type of attribute. This helps to ensure that the parent CI is created before a contained CI.
5. Processes the relationships tab last to create relationships between CIs that do not use the
containment (container_f) relationship.
For the relationship to be created, the keyed attributes of a CI must be used in the relationships
tab.
6. Relation attributes must also exist in the UCMDB class model.
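The processing order described above (tabs without a root_container attribute first, tabs with one next, the relationships tab last) can be sketched as follows; the function and parameter names are assumptions, not the job's actual code:

```python
# Illustrative sketch of the sheet processing order: plain CITs first,
# CITs with a root_container attribute next (so parents exist before
# contained CIs), and the relationships tab last.
def processing_order(sheet_names, cits_with_root_container):
    plain = [s for s in sheet_names
             if s != "relationships" and s not in cits_with_root_container]
    contained = [s for s in sheet_names if s in cits_with_root_container]
    tail = ["relationships"] if "relationships" in sheet_names else []
    return plain + contained + tail

order = processing_order(
    ["node", "ip_address", "process", "relationships"],
    cits_with_root_container={"process"},
)
print(order)  # ['node', 'ip_address', 'process', 'relationships']
```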
Trigger Query
The Import from Excel Workbook job has no trigger query. Therefore, you must manually add the
Probe that imports the data. For details, see "Probe Selection Pane" in the HP Universal CMDB
Data Flow Management Guide.
Job Parameters
l file_name. The import file name. Use an absolute path accessible from the Probe. For details on setting up this file, see "How to Set Up an Import File in Excel" on page 106.
l integer_list_delimiter. The delimiter used for values in the spreadsheet that are to be treated as the UCMDB data type integer_list.
l string_list_delimiter. The delimiter used for values in the spreadsheet that are to be mapped to the UCMDB data type string_list.
l relationship_attr_delimiter. The delimiter between a link attribute name and its value on the relationships tab of the source file. The default is the pipe symbol, as in attribute_name|attribute_value. This should be aligned with the actual data.
Adapter
l Input Query
Input query: Because the Import from Excel Workbook job's input CIT is Discovery Probe
Gateway, there is no need to supply an input TQL query.
l Scripts Used
The following scripts are used to import data from an Excel workbook.
n import_from_excel.py
n xlsutils.py
Note: The Import from Excel Workbook job may also use library scripts supplied in the Auto
Discovery content package.
Created/Changed Entities
l Import from Excel Sample (Job). Sample job that imports the predefined sample import file.
l xlsutils.py (Script). Contains utility methods for class model validation and fetching objects from Excel worksheets.
l poi-3.7-beta1-20100620.jar (Resource). Java library for working with the Excel 97-2003 file format.
l poi-ooxml-3.7-beta1-20100620.jar (Resource). Java library for working with the Excel 2007 file format.
l poi-ooxml-schemas-3.7-beta1-20100620.jar (Resource). Java library with the XML schemas used in Excel 2007 files.
Discovered CITs
l ConfigurationItem
l Managed Relationship
Solution: Verify that you have performed the instructions in the Prerequisite section of this
discovery. For details, see "Prerequisite - Set up permissions" on page 105.
Solution: Currently this problem is not resolvable on the job side. This can only be resolved by
defining reconciliation rules.
Solution: The date cannot be imported if it is represented in text format. This issue is not
resolvable because of localization. Represent the date in numerical format.
l Limitation: The DFM Probe breaks down the imported data into 20 KB chunks. This can cause
identification issues.
Overview
This document includes the main concepts, tasks, and reference information for integration of
Microsoft System Center Configuration Manager (SCCM)/Systems Management Server (SMS)
with HP Universal CMDB.
Integration occurs by populating the UCMDB database with devices, topology, and hierarchy from
SCCM/SMS and by federation with SCCM/SMS supported classes and attributes.
l Track deployment and use of software assets, and use this information to plan software
procurement and licensing
l Manage security on computers running Windows operating systems, with a minimal level of
administrative overhead
Supported Versions
Integration has been developed and tested on HP Universal CMDB version 8.03 or later, with
SCCM versions 2007 and 2012, and SMS version 2003.
SMS Adapter
Integration with SCCM/SMS is performed using an SMS adapter, which is based on the Generic
DB Adapter. This adapter supports full and differential population for defined CI types as well as
federation for other CI types or attributes.
l Identifying changes that have occurred in SCCM/SMS, to update them in UCMDB.
When a CI is removed from SCCM/SMS, it is physically deleted from the database and there is
no way to report the deletion. The SMS Adapter therefore supports a full synchronization
interval: the adapter transfers data for which the aging mechanism has been enabled, and the
interval determines when a full synchronization runs to simulate the touch mechanism.
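The interval check behind this behavior can be sketched in Python (function and parameter names are assumptions, not the adapter's actual code):

```python
from datetime import datetime, timedelta

# Illustrative sketch: when the full-synchronization interval has elapsed,
# run a full population instead of a differential one, so that aging can
# "touch" CIs that still exist in SCCM/SMS.
def needs_full_sync(last_full_sync, interval_hours):
    return datetime.now() - last_full_sync >= timedelta(hours=interval_hours)

last_run = datetime.now() - timedelta(hours=50)
print(needs_full_sync(last_run, interval_hours=48))  # True
```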
l Node (some of the attributes are populated and some are federated)
l Layer2 connection
l IP address
l Interface
l CPU
l File system
l Installed software
l Windows service
The following classes and attributes should be marked as federated by the SCCM/SMS adapter for
the proper functionality of the Actual State feature of Service Manager:
l Classes
n CPU
n Installed software
n Windows service
l Node attributes
n DiscoveredOsVendor
n DiscoveredModel
n Description
n DomainName
n NetBiosName
b. Click the New Integration Point button to open the New Integration Point dialog box.
Each out-of-the-box adapter comes predefined with the basic setup needed to perform
integration with UCMDB. For information about changing these settings, see
"Integration Studio Page" in the HP Universal CMDB Data Flow Management Guide.
l Credentials. Allows you to set credentials for integration points. For credential
information, see "Supported Protocols" in the HP Universal
CMDB Discovery and Integration Content Guide - Supported Content
document.
l Hostname/IP. The host name of the machine where the SCCM/SMS database is running.
l Is Integration Activated. Select this check box to create an active integration point. Clear
the check box to deactivate the integration, for instance, to set up an integration point
without actually connecting to a remote machine.
l Port. The port through which you access the MSSQL database.
d. Click Next and verify that the following message is displayed: A connection has been
successfully created. If it does not, check the integration point parameters and try again.
You can also create additional jobs. To do this, select the Population tab to define a population
job that uses the integration point you defined in "Define the SMS integration" on the previous
page. For details, see "New Integration Job/Edit Integration Job Dialog Box" in the HP
Universal CMDB Data Flow Management Guide.
n To immediately run a full population job, click . In a full population job, all appropriate
data is transferred, without taking the last run of the population job into consideration.
Note: The replicated CIs are controlled by the integration TQL query that is used. You can create
additional TQL queries that contain different topologies for use in other jobs.
2. Select the integration point that you defined in "Define the SMS integration" on page 123.
3. Click the Federation tab. The panel shows the CI types that are supported by the SMS
adapter.
5. Click Save.
Note:
n CI types that populate UCMDB should not be selected for federation. Specifically,
avoid federating node, IP address, interface, location, and Layer2, which populate
UCMDB out-of-the-box.
n Other CI types can be used in federation only after the node data has been replicated to
CMDB by the hostDataImport query. This is because the default reconciliation rule is
based on node identification.
1. Go to the orm.xml file as follows: Data Flow Management > Adapter Management > SMS
Adapter > Configuration Files > orm.xml.
2. Locate the generic_db_adapter.[CI type] to be changed, and add the new attribute.
3. Ensure that the TQL queries that include this CI type have the new attribute in their layouts as
follows:
a. In the Modeling Studio, right-click the node where you want to include the attribute.
For details about selecting attributes, see "Layout Settings Dialog Box" in the HP Universal
CMDB Modeling Guide. For limitations on creating this TQL query, see "Troubleshooting
and Limitations" on page 130.
1. In UCMDB, create the CI Type that you want to add to the adapter, if it does not already exist.
For details, see "Create a CI Type" in the HP Universal CMDB Modeling Guide.
2. Go to the orm.xml file as follows: Data Flow Management > Adapter Management > SMS
Adapter > Configuration Files > orm.xml.
3. Map the new CI type by adding a new entity called generic_db_adapter.[CI type].
For more details, see "The orm.xml File" in the HP Universal CMDB Developer Reference
Guide.
4. Create queries to support the new CI types that you have added. Make sure that all mapped
attributes are selected in the Advanced Layout settings:
a. In the Modeling Studio, right-click the node where you want to include the attribute.
For details about selecting attributes, see "Layout Settings Dialog Box" in HP Universal
CMDB Modeling Guide. For limitations on creating this TQL query, see "Troubleshooting
and Limitations" on page 130.
6. Edit the SMS integration point to support the new CI type by selecting it either for population or
for federation.
7. If the new CI type is for population, edit the population job that you created above.
l hostDataFromSMS. Imports nodes and their related data. Information also includes each
node's IP address and interface.
Transformations
Following is the list of transformations that are applied to values when they are transferred to or
from the SCCM/SMS database:
For example, a drive letter stored as C: in the SCCM/SMS database is transferred to UCMDB
as C; the standard AdapterToCmdbRemoveSuffixTransformer removes the colon. Other
transformed values include operating system names and versions (for example, Windows XP,
Professional, 2.0) and values such as ABCDEF012345.
SCCM/SMS Plug-in
The SmsReplicationPlugin provides functions that enhance those of the Generic Database
Adapter. It is called when:
l full topology is requested (getFullTopology) – this returns all the CIs that were found in the
external SCCM/SMS database.
l topology of changes is requested (getChangesTopology) – this returns only the CIs that are
modified or added after a specific time. The topology of the changes is calculated as follows:
n There is a specific date (fromDate) after which all changes are requested.
n Most of the entities in the SCCM/SMS database contain a Timestamp column that contains
the date and time of the last modification. This Timestamp column is mapped to the root_
updatetime attribute of a CI. Currently, some entities do not contain any creation time
information. The entities that have a timestamp column must be listed in the replication_
config.txt file.
n Using the plug-in, the integration TQL query is dynamically modified so that each Root entity
and all entities that are listed in the replication_config.txt file have an additional condition
causing the value of the root_updatetime attribute to be greater than or equal to the
fromDate value.
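The effect of the added condition can be sketched in Python (illustrative names and row layout; the real plug-in modifies the TQL query itself rather than filtering rows):

```python
from datetime import datetime

# Illustrative sketch: keep only rows whose Timestamp column (mapped to the
# root_updatetime attribute) is on or after fromDate.
def changed_since(rows, from_date):
    return [row for row in rows if row["root_updatetime"] >= from_date]

rows = [
    {"name": "host1", "root_updatetime": datetime(2013, 1, 10)},
    {"name": "host2", "root_updatetime": datetime(2013, 3, 5)},
]
print([row["name"] for row in changed_since(rows, datetime(2013, 2, 1))])
# ['host2']
```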
Reconciliation
The adapter uses the default reconciliation rule-based mapping engine.
l orm.xml. The Object Relational mapping file, which maps between SCCM/SMS database
tables and columns, and UCMDB classes and attributes. Both CIs and links are mapped.
l fixed_values.txt. Used by the Generic DB Adapter to set the ip_domain of IP Address CIs to
DefaultDomain.
l plugins.txt. Contains configuration information for the Generic DB Adapter. Also defines three
plug-ins that are used during replication: getFullTopology, getChangesTopology, and getLayout.
l transformations.txt. Contains the configuration for transformation of attribute values. For a list
of the transformations, see "Transformations" on page 127.
For details on adapter configuration, see "Developing Generic Database Adapters" in the HP
Universal CMDB Developer Reference Guide.
The root node is the main CI that is synchronized; the other nodes are the contained CIs of the
main CI. For example, when synchronizing the Node CI Type, the Node graph node is labeled
Root and its resources are not.
l A query that is used to synchronize relations should have the cardinality 1...* and an OR
condition between the relations.
l Entities that are added in SCCM/SMS are sent as updates to UCMDB by the SMS Adapter
during differential population.
l The TQL graph should contain only CI types and relations that are supported by the SCCM/SMS
adapter.
Overview
Integration between NetApp SANscreen and DFM uses a UCMDB-initiated integration adapter
based on the SANscreen WebService API, and synchronizes devices, topology, and the hierarchy
of the storage infrastructure in UCMDB. This enables Change Management and Impact Analysis
across all business services mapped in UCMDB, from a storage point of view.
Supported Versions
SANscreen integration supports version 5.1.2 (275) of NetApp SANscreen, and version 6.2x of the
product, which has been renamed NetApp OnCommand Insight.
Topology
The following diagram illustrates the storage topology and shows the relationships between logical
volumes on a storage array and those on servers:
The following diagram illustrates the SAN Topology and shows the fiber channel paths between
storage arrays, switches, and servers:
For credential information, see "Supported Protocols" in the HP Universal CMDB Discovery
and Integration Content Guide - Supported Content document.
b. Run the TCP Ports job in order to discover SANscreen WebService ports.
b. Select the NetApp SANscreen/On Command Insight adapter and enter the required
properties as follows:
l Data Flow Probe. Select the name of the Probe on which this integration will run.
Parameters
l ChunkSize. The number of CIs to pull per query. Default: 1000
Integration Flow
The adapter works as follows:
1. Connect to the SANscreen WebService API using credentials from the SANscreen protocol.
3. Query for logical volumes, fiber channel adapters (Host Bus Adapters), and fiber channel ports
on each storage array and create LOGICAL VOLUME, HBA, and FC PORT CIs.
5. Query for fiber channel adapters and ports on each fiber channel switch and create HBA and
FC PORT CIs.
6. Query for hosts/servers and create appropriate COMPUTER, WINDOWS, or UNIX CIs.
7. Query for logical volumes, fiber channel adapters (Host Bus Adapters), and fiber channel ports
on each host/server and create LOGICAL VOLUME, HBA, and FC PORT CIs.
8. Query for paths between hosts/servers and storage arrays and add FCCONNECT
relationships between respective hosts/servers, switches, and storage arrays.
9. Query for logical volume mapping between logical volumes on hosts/servers and logical
volumes on storage arrays and add DEPEND relationships between respective hosts/servers
and storage arrays.
SANscreen Adapter
This section contains information about the SANscreen adapter.
Input CIT
IpServiceEndpoint
Input Query
Triggered CI Data
Name Value
ip_address ${SOURCE.bound_to_ip_address}
port ${SOURCE.network_port_number}
Used Script
SANscreen_Discovery.py
Discovered CITs
l Composition
l Containment
l Cpu
l Dependency
l FabricZoneSet
l FiberChannelZone
l IpAddress
l LogicalVolume
l Membership
l Node
l Storage Array
l Storage Fabric
l Storage Processor
l Unix
l Windows
Parameters
ChunkSize: The number of CIs to pull from SANscreen/OnCommand Insight per query. The default
is 1,000.
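Chunked retrieval of this kind can be sketched as follows (an illustrative helper, not the adapter's actual code):

```python
# Illustrative sketch of the ChunkSize behavior: results are pulled from the
# SANscreen/OnCommand Insight API in fixed-size chunks (default 1000).
def chunks(items, chunk_size=1000):
    for start in range(0, len(items), chunk_size):
        yield items[start:start + chunk_size]

cis = list(range(2500))
print([len(c) for c in chunks(cis)])  # [1000, 1000, 500]
```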
Overview
This integration adapter provides the ability to push CIs and relationships from UCMDB to
ServiceNow. The adapter uses an XML mapping framework that enables users to dynamically map
CI Types between UCMDB and ServiceNow, without requiring code changes.
Supported Versions
This integration solution supports pushing CIs to ServiceNow from HP Universal CMDB version
9.02 and later.
1. Configure queries
The CIs and relationships to be pushed to ServiceNow have to be queried from UCMDB using
TQL queries. Create integration type queries to query the CIs and relationships that have to be
pushed to ServiceNow.
For every query created in the step above, create an XML mapping file with exactly the same
name (case-sensitive) as the integration query in the following directory:
<UCMDB>\UCMDBServer\runtime\fcmdb\
CodeBase\ServiceNowPushAdapter\mappings
For more information about mapping files, see the HP Universal CMDB Developer Reference
Guide.
a. In Data Flow Management, in the Integration Studio, define a new integration point:
l Proxy Server Name/IP. If an HTTP(S) proxy server is used to access the Internet, enter the
proxy server name.
l Data Flow Probe. Select the name of the Data Flow Probe to run this integration.
b. Test the connection to the target CMDB server. If the connection fails, verify that the
information provided is correct.
d. Add a new job definition to the integration point. Provide a name for the job definition and
select the queries to use to synchronize data from UCMDB to ServiceNow. Define a
synchronization schedule if required.
Integration Mechanism
The components responsible for the ServiceNow integration are bundled in the ServiceNow
Integration package, ServiceNow_Integration.zip.
When an ad-hoc job is run from the integration point in the Integration Studio, the integration
receives the names of the integration queries defined in the job definition, for that integration
point.
It queries UCMDB for the results of these queries (new, updated and deleted CIs and
relationships) and then applies the mapping transformation according to the pre-defined XML
mapping files for every TQL query.
Next, on the Data Flow Probe side, the integration process receives the CI and relationship
data sent from the UCMDB server, connects to the ServiceNow server using the Direct Web
Services SOAP API, and transfers the CIs and relationships.
Since the ServiceNow coalescing (CI reconciliation) mechanism is not available for the Direct
Web Services API, a mapping of UCMDB CI IDs to ServiceNow SysIds is maintained on the
discovery Probe. This mapping is used to update and delete CIs and relationships in
ServiceNow.
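This mapping can be sketched in Python (class and method names are assumptions, not the adapter's actual code):

```python
# Illustrative sketch of the UCMDB-ID-to-ServiceNow-SysId mapping kept on the
# Probe. Because the Direct Web Services API cannot coalesce (reconcile) CIs,
# the integration remembers the sys_id each pushed CI received and reuses it
# for updates and deletes.
class SysIdCache:
    def __init__(self):
        self._map = {}  # ucmdb_id -> servicenow sys_id

    def push(self, ucmdb_id, create_fn, update_fn):
        if ucmdb_id in self._map:
            update_fn(self._map[ucmdb_id])     # update the existing record
        else:
            self._map[ucmdb_id] = create_fn()  # insert and remember sys_id

    def delete(self, ucmdb_id, delete_fn):
        sys_id = self._map.pop(ucmdb_id, None)
        if sys_id is not None:
            delete_fn(sys_id)

cache = SysIdCache()
cache.push("ci-1", create_fn=lambda: "sys-abc", update_fn=lambda s: None)
print(cache._map)  # {'ci-1': 'sys-abc'}
```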
Supported CITs
The following CIs and their relationship are supported:
l Apache Tomcat
l Apache
l DB2
l ESX Server
l Host
l Interface
l Ip Address
l JBoss AS
l MySQL
l Net Device
l Oracle
l SQL Server
l Switch
l Unix Server
l Weblogic AS
l WebSphere AS
l Windows Server
Pushing additional CI Types requires corresponding JAR files to be generated using the WSDL
URL for each CI Type. The WSDL URL can be generated using information from the Direct Web
Services section at:
http://wiki.service-now.com/index.php?title=SOAP_Web_Service.
The resulting JAR files should be placed in the following directory on the UCMDB Data Flow Probe
server:
<hp>\UCMDB\DataFlowProbe\runtime\probeManager\discoveryResources\Service-Now\
JAR files may be generated using WSDL2JAVA or other similar utilities. An example using this
utility is available at:
http://roseindia.net/webservices/axis2/axis2-client.shtml.
For CIs containing reference fields, the target data type of the attribute being mapped to a reference
field should be set to the name of the reference table. For example, to populate the Manufacturer
field on a Windows Server CI, the data type should be the reference table name core_company
as shown below:
l Limitation: The integration mapping file only allows mapping concrete CITs and relationships to
the CITs and relationships in ServiceNow. That is, a parent CIT cannot be used to map its
child CITs.
l Limitation: This adapter uses the ServiceNow Direct Web Services API, which does not
support CI coalescing (reconciliation). If CIs being pushed from UCMDB were already present
in the ServiceNow CMDB before the integration with UCMDB was installed, those CIs are
duplicated, because UCMDB does not know that they already exist in the ServiceNow CMDB.
After the adapter is installed, UCMDB keeps track of the CIs it pushes to ServiceNow, to
prevent duplication.
l Limitation: ServiceNow Web Service Import Sets are currently not supported.
Overview
Integration involves synchronizing devices, topology, and the hierarchy of a customer storage
infrastructure in the Universal CMDB database (CMDB). This enables Change Management and
Impact Analysis across all business services mapped in UCMDB from a storage point of view.
When you activate the Storage Essentials integration, DFM retrieves data from the SE Oracle
database and saves CIs to the Universal CMDB database. Users can then view SE storage
infrastructure in UCMDB.
The data includes information on storage arrays, fibre channel switches, hosts (servers), storage
fabrics, logical volumes, host bus adapters, storage controllers, and fibre channel ports. Integration
also synchronizes physical relationships between the hardware, and logical relationships between
logical volumes, storage zones, storage fabrics, and hardware devices.
Supported Versions
The integration procedure supports SE versions 6.x, 9.4, 9.41 and 9.5.
1. Prerequisites.
For details on running integration jobs, see "Integration Studio" in the HP Universal CMDB
Data Flow Management Guide.
b. Under Integration Properties > Adapter, select the Storage Essentials adapter.
c. Under Adapter Properties > Data Flow Probe, select the Data Flow Probe.
i. Select Existing CI (if you have a valid, existing CI). The Select Existing CI pane
appears. Select the CI; or
ii. Select Create New CI (if you need to create a new CI). The Topology CI Creation Wizard
appears. Complete the creation of the CI using the wizard.
Note: For details on the Topology CI Creation Wizard, see "Topology CI Creation
Wizard" in the HP Universal CMDB Data Flow Management Guide.
e. Verify the credentials for the chosen CI instance. Right-click Trigger CI instance and
select Actions > Edit Credentials Information.
l SE_Discovery.zip. Contains the trigger query for SE discovery, discovery script, adapter, and
job.
l Storage_Basic.zip. Contains the new CI Type definitions, views, reports, and impact analysis
rules. This package is common to all Storage Management integration solutions.
For details, see "New Integration Job/Edit Integration Job Dialog Box" in the HP Universal
CMDB Data Flow Management Guide.
Adapter Parameters
This job runs queries against Oracle materialized views that are installed and maintained by
Storage Essentials in the Oracle database. The job uses a database CI as the trigger.
A switch or server in SE inherits from a Node CIT in UCMDB based on the following adapter
parameters:
l allowDnsLookup. If a node in the SE database does not have an IP address but has
a DNS name, the IP address can be resolved from the DNS name.
Default: False
Default: True
l ignorePortsWithoutWWN. If set to true, the integration ignores Fibre Channel Ports that do
not have a WWN.
Default: True
l Fibre Channel Connect. Represents a fibre channel connection between fibre channel ports.
l Fibre Channel HBA. Has change monitoring enabled on parameters such as state, status,
version, firmware version, driver version, WWN, and serial number. A Fibre Channel HBA
inherits from the Node Resource CIT.
l Fibre Channel Port. Has change monitoring enabled on parameters such as state, status,
WWN, and trunked state. Since a Fibre Channel Port is a physical port on a switch, it inherits
from the Physical Port CIT under the NodeElement Resource CIT.
l Fibre Channel Switch. Falls under the Node CIT because SE maintains an IP address for each
switch. Parameters such as status, state, total/free/available ports, and version are change
monitored.
This package retrieves Fibre Channel Switch details from the mvc_switchsummaryvw and
mvc_switchconfigvw views. The job retrieves detailed information about Fibre Channel Ports
on each switch from the mvc_portsummaryvw view.
l Logical Volume. Represents volumes on storage arrays and hosts with change monitoring on
availability, total/free/available space, and storage capabilities.
l Storage Array. Represents a storage array with change monitoring on details such as serial
number, version, and status. Since a storage array may not have a discoverable IP address, it
inherits from the Network Device CIT.
l NetApp Filer. This is a specialized storage array from the NetApp application.
Both Storage Array and NetApp Filer CITs retrieve details from the mvc_
storagesystemsummaryvw view. DFM retrieves detailed information on Storage Processors
and HBAs from the mvc_storageprocessorsummaryvw and mvc_cardsummaryvw tables
respectively.
The SE database may not be able to obtain IP address information on storage arrays, for
a variety of technical and policy-related reasons.
Since Fibre Channel Ports may be present on a storage array, Storage Processor, or HBA, DFM
uses three separate queries to retrieve Fibre Channel Ports for each storage array. Detailed
information about Fibre Channel Ports on each array is retrieved from the mvc_
portsummaryvw view. Since this view uses a container ID as the key, DFM queries the view
by container ID for each storage array, each Storage Processor on a storage array, and each
HBA on a storage array.
DFM retrieves detailed information about Logical Volumes on each storage Array from the mvc_
storagevolumesummaryvw view.
l Storage Fabric. Inherits from the Network Resource CIT and represents a storage fabric. This
CIT has no change monitoring enabled.
l Storage Processor. Represents other storage devices such as SCSI controllers, and inherits
from the Host Resource CIT. A Storage Processor CIT monitors change on parameters such as
state, status, version, WWN, roles, power management, and serial number.
l Storage Pool. Storage Pool information is also collected from each storage array using the
query below.
Node Details
DFM retrieves Host details from the mvc_hostsummaryvw view and detailed information on
HBAs from the mvc_cardsummaryvw view.
SE maintains information on Operating Systems, IP address, and DNS name on each host. DFM
uses this information to create Node CIs (UNIX or Windows) and IpAddress CIs.
Since UCMDB uses the IP address of a node as part of its primary key, DFM attempts to use the
IP address from SE for this purpose. If an IP address is not available, DFM attempts to resolve
the host's IP address using its DNS name. If neither an IP address nor a DNS name is available,
DFM ignores the host (see "Adapter Parameters" on page 148).
Similar to Storage Arrays, a node may have Fibre Channel Ports directly associated with itself or on
HBAs on the host. The DFM job uses three separate queries to retrieve Fibre Channel Ports for
each host. The job retrieves detailed information about Fibre Channel Ports on each host from the
mvc_portsummaryvw view. Since this view uses a ContainerID attribute as the key, the job
queries the view by containerID for each host, and each HBA on a host.
Finally, DFM retrieves detailed information about Logical Volumes on each host from the mvc_
hostvolumesummaryvw and mvc_hostcapacityvw views. The mvc_hostcapacityvw view
maintains capacity information for each volume over multiple instances in time, and the job uses
only the latest available information.
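The "latest available information" rule can be sketched as follows; the row shape used here for mvc_hostcapacityvw (volume, collection time, capacity) is an assumption for illustration.

```python
# Assumed row shape: (volume, collection_time, capacity). The view keeps one
# row per volume per collection time; only the newest row per volume is used.
rows = [
    ("vol1", "2013-01-01", 100),
    ("vol1", "2013-02-01", 120),   # newer snapshot of vol1 wins
    ("vol2", "2013-01-15", 50),
]

latest = {}
for volume, collected_at, capacity in rows:
    # Keep the row with the most recent collection time for each volume.
    if volume not in latest or collected_at > latest[volume][0]:
        latest[volume] = (collected_at, capacity)
```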
SAN Topology
SAN Topology consists of the Fibre Channel network topology and includes (fibre channel)
connections between Fibre Channel Switches, Hosts, and Storage Arrays. SE maintains a list of
WWNs that each Fibre Channel Port connects to, and this package uses this list of WWNs to
establish Fibre Channel Connection links.
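Building Fibre Channel Connection links from the per-port WWN lists can be sketched as follows; the shape of the port table is an assumption for illustration.

```python
# Hypothetical port table: each Fibre Channel Port knows its own WWN and the
# peer WWNs that SE reports it connects to.
ports = {
    "wwn-switch-1": {"connected_to": ["wwn-host-1", "wwn-array-1"]},
    "wwn-host-1":   {"connected_to": ["wwn-switch-1"]},
    "wwn-array-1":  {"connected_to": ["wwn-switch-1"]},
}

# Create one undirected link per connected pair of ports known to SE.
# Sorting each pair deduplicates the A->B and B->A directions.
links = set()
for wwn, port in ports.items():
    for peer in port["connected_to"]:
        if peer in ports:
            links.add(tuple(sorted((wwn, peer))))
```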
Storage Topology
Storage topology consists of relationships between Logical Volumes on a host and Logical
Volumes on a Storage Array. DFM uses multiple tables to identify this relationship as shown in the
query below. This view is a summary of all of the above information.
Views
The SE package contains views that display common storage topologies. These are basic views
that can be customized to suit the integrated SE applications.
Storage Array does not require all components in this view to be functional. Composition links
stemming from the Storage Array have a cardinality of zero-to-many. The view may show Storage
Arrays even when there are no Logical Volumes or Storage Processors.
FC Switch Details
This view shows a Fibre Channel Switch and all connected Fibre Channel Ports.
FC Switch Virtualization
FC Switch Virtualization consists of a physical switch or chassis, partitioned into multiple logical
switches. Unlike Ethernet virtualization, physical ports are not shared among multiple virtual
switches. Rather, each virtual switch is assigned one or more dedicated physical ports that are
managed independently by the logical switches.
The goal of this type of virtualization is to virtualize multiple disk arrays from different vendors,
scattered over the network, into a single monolithic storage device that can be managed uniformly.
SAN Topology
This view maps physical connections between Storage Arrays, Fibre Channel Switches, and
Hosts. The view shows Fibre Channel Ports below their containers. The view groups the Fibre
Channel Connect relationship CIT to prevent multiple relationships between the same nodes from
appearing in the top layer.
Storage Topology
This view maps logical dependencies between Logical Volumes on Hosts and Logical Volumes on
Storage Arrays. There is no folding in this view.
FC Port to FC Port
This rule propagates events on a Fibre Channel Port to another connected Fibre Channel Port.
l The event propagates from the HBA to the Storage Array and the Logical Volumes on the Array
because of the Storage Devices to Storage Array rule.
l The impact analysis event on the Logical Volume then propagates to other dependent Logical
Volumes through the Logical Volume to Logical Volume rule.
l Hosts using those dependent Logical volumes see the event next because of the Host Devices
to Host rule.
l Depending on business needs, you define impact analysis rules to propagate events from these
hosts to applications, business services, lines of business, and so on. This enables end-to-end
mapping and impact analysis using UCMDB.
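The propagation chain described in the bullets above can be modeled as a small directed graph. The role names below are illustrative only; the rule names in the comments come from the text.

```python
from collections import deque

# The named impact rules, modeled as directed edges between CI roles.
rules = [
    ("hba", "storage_array"),           # Storage Devices to Storage Array
    ("storage_array", "array_volume"),  # Storage Devices to Storage Array
    ("array_volume", "host_volume"),    # Logical Volume to Logical Volume
    ("host_volume", "host"),            # Host Devices to Host
    ("host", "application"),            # user-defined business rule
]

def impacted(start):
    """All CI roles that an event on `start` eventually reaches."""
    reached, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for src, dst in rules:
            if src == node and dst not in reached:
                reached.add(dst)
                queue.append(dst)
    return reached
```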
All impact analysis rules fully propagate both Change and Operation events. For details on impact
analysis, see "Impact Analysis Manager Page" and "Impact Analysis Manager Overview" in the HP
Universal CMDB Modeling Guide.
Note: Impact analysis events are not propagated to Fibre Channel Ports for performance
reasons.
FC Port to FC Port
This impact analysis rule propagates events between related Fibre Channel Ports.
Reports
The SE package contains basic reports that can be customized to suit the integrated SE
applications.
In addition to the system reports, Change Monitoring and Asset Data parameters are set on each
CIT in this package to enable Change and Asset Reports in Universal CMDB. For details see
"Storage Array Configuration" below, "Host Configuration" below, "Storage Array Dependency" on
the next page, and "Host Storage Dependency" on the next page.
Host Configuration
This report shows detailed information on hosts that contain one or more Fibre Channel HBAs,
Fibre Channel Ports, or Logical Volumes. The report lists hosts with sub-components as children of
the host.
l Problem: If the SE system has duplicate entries for nodes, switches, or arrays, the job produces
the following error message: "Process validator error: multiple updates in bulk...".
Solution: This is expected behavior and does not affect the population of valid CIs into UCMDB.
To prevent this error message, remove the duplicates from the SE system.
l Error message: "Please use the SE database with SID 'REPORT' for this integration."
Solution: For integration with SE 9.4 or later, configure the integration point for a
REPORT database instance (and not for an APPIQ instance, as used for SE versions up to
and including 6.3).
l Limitation: The credentials used for integration with SE should have access to the relevant
materialized view status table as detailed below:
appiq_system.mview_status
l Limitation: If the discovery does not find a valid IP address and serial number for a VMware
ESX server in the HP Storage Essentials database, that VMware ESX server is not reported to
UCMDB.
Overview
Troux is a vendor in the EA (Enterprise Architecture) tools market. EA tools allow business users to
understand the gaps between business demands and initiatives, review how a fixed budget aligns
to business capabilities, and see how discretionary spending is allocated across initiatives.
Future-state scenarios can be investigated before locking down a roadmap.
Although many use cases can be achieved using EA tools, two specific use cases were chosen for
the UCMDB-Troux integration. This does not preclude additional use cases in the future.
Depending on the use case, a provider of record is determined. For example, UCMDB would be the
provider of record for inventory information such as the server operating system, server hardware,
database, and other infrastructure CIs. Troux on the other hand provides component lifecycles for
server operating system, server hardware, and database versions.
Integration Overview
UCMDB-Troux integration consists of two independent adapters that together provide bi-directional
data transfer: the Troux Push Adapter and the Troux Population Adapter.
l The Troux Push Adapter in UCMDB replicates CIs and relationships to Troux. The Troux Push
Adapter is necessary to achieve both the Technology Standards Assessment and Business
Impact Analysis use cases discussed in the introduction above. The adapter also allows the
user to push to Troux CIs that are aged out of UCMDB or deleted.
l The Troux Population Adapter pulls CIs and relationships from Troux to UCMDB. It is necessary
only for the Business Impact Analysis use case.
Data is transferred as XML files placed in configured directories. Mapping files are used to
convert between the Troux (TUX) format and the UCMDB format, and vice versa.
Supported Versions
Supported versions of the products are listed below.
Use Cases
The use cases chosen for UCMDB-Troux integration are:
l Business Impact Analysis. Definition of the definitive source of application CIs to align IT with
business. These application CIs in Troux are related to server operating system, server
hardware, database, and other CIs discovered by UCMDB. Impact Analysis can be determined
using application, business function, and organization for planned change or unplanned
disruption of service.
of CIs and attributes to Troux components. This adapter also allows the user to push to Troux CIs
that are aged out of UCMDB or deleted.
Define queries
1. Create a query that defines the CIs and attributes you want to replicate to Troux. Two example
queries are supplied in the Integration > Troux folder.
For details, see "Topology Query Language" in the HP Universal CMDB Modeling Guide.
Note: This step is critical to the operation of the push adapter. You must define the
attributes that will be transferred to Troux.
For details, see "Query Node/Relationship Properties Dialog Box" in the HP Universal CMDB
Modeling Guide.
Example: Computers_for_Troux
In this example, the query requests UCMDB to send all computers with installed software to
Troux. You must define the mapping file with the same name as the query in order for the push
adapter to recognize the query.
The example mapping file and query (Servers_with_Software) included with the content package
send Windows computers with installed software to Troux, in the format Troux expects. If your
environment uses different CIs with Troux, make sure Troux handles those component types.
When you create the mapping file, give it exactly the same name as your query. For details about
the mapping file options, see "Prepare the Mapping Files" in the HP Universal CMDB Developer
Reference Guide. Use the example mapping files as reference examples for the mapping file
creation.
Note: The definitions in the mapping file (<adapter>.xml) must be the direct CITs and
relationships to be transferred to Troux. The mapping does not support inheritance of class
types. For example, if the query is transferring nt CITs, the mapping file must have definitions
for nt CITs, and not for general nodes or computers. That is, the definition must be an exact
match for what to transfer.
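The exact-match requirement in the note above can be illustrated with a hypothetical sanity check; the CIT names and the check itself are assumptions for illustration, not part of the adapter.

```python
# Hypothetical check: every CIT the query returns must appear verbatim in the
# mapping file. Inheritance is NOT applied, so a mapping entry for a general
# class (for example "node") does not cover a derived class such as "nt".
mapping_cits = {"nt", "installed_software"}        # CITs defined in <adapter>.xml
query_cits = ["nt", "installed_software", "unix"]  # CITs the TQL query returns

unmapped = [cit for cit in query_cits if cit not in mapping_cits]
# "unix" is flagged even though "nt" is mapped: no class inheritance applies.
```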
Example
2. Click the New Integration Point button to open the New Integration Point dialog box.
a. Select the Data Push into Troux adapter and click OK.
Name Description
TUX path The location of the TUX output file (created when the
integration job is run).
c. Click Test connection to verify the connectivity, and click OK. If the connection fails,
verify the provided information is correct.
1. Select the queries that you defined in "Define queries" on page 168.
2. To enable deletion, select the Allow Deletion check box. This is a master on/off flag
that controls whether the job sends deletions.
Note: Deletions are pushed based on changes in UCMDB. For example, if UCMDB discovers a
computer, and that computer is later deleted in UCMDB, this registers as a change to a
deleted computer. If the computer is part of the push query for Troux, and deletion is
enabled, a delete is pushed to Troux the next time the push job runs.
4. Click OK.
5. In the Integration Point pane, click Save. A full data push job will run according to schedule.
The Troux output file (TUX) is generated in the path that you specified in the Integration
Properties for the job.
The top section of the file defines the object or CIT mapping from Troux to UCMDB. The lower
section defines the relationship mapping.
Note: For details on running an integration job, see "Integration Studio" in the HP Universal
CMDB Data Flow Management Guide.
2. Under Integration Properties > Adapter, select the Population from Troux adapter.
3. Edit the TUX path field, if required; this sets the location of the TUX output file.
4. Under Adapter Properties > Data Flow Probe, select the Data Flow Probe.
b. Create New CI (if you need to create a new CI). The Topology CI Creation Wizard
appears. Complete the creation of the CI using the Wizard.
Note: For details on the Topology CI Creation Wizard, see "Topology CI Creation Wizard"
in the HP Universal CMDB Data Flow Management Guide.
Overview
The UCMDB to XML adapter exports the results (CIs and relationships) of TQL queries and
converts them to XML files.
Integration Mechanism
After defining an integration point with the UCMDB to XML adapter, TQL queries can be added to
the jobs of that integration point.
The adapter exports the result of the TQL queries into XML format, and creates XML files in the
Export Directory (as predefined in the integration point).
a. Create a directory on the UCMDB data flow probe system to which the adapter will write
the exported XML files.
b. In the Modeling Studio, create integration TQL queries for the data to be exported to XML.
Ensure the queries return valid results.
c. Under Adapter Properties > Export Directory, type the absolute path of the directory on
the probe system to which the adapter should export the XML files.
d. Under Adapter Properties > Data Flow Probe Name, select the name of the Data Flow
Probe to be used.
e. Click the Test Connection button to ensure the adapter can validate the defined export
directory.
g. Under Integration Jobs add a new integration job. Edit the job to add integration queries
under the job's definition.
For details on integration points, see Integration Studio > Work with Data Push Jobs >
Create an integration point in the HP Universal CMDB Data Flow Management Guide.
a. To run the job ad hoc, select the integration job and click the button. The job can also
be configured to run on a schedule.
b. Check the defined export directory on the probe system for the exported XML data, to
ensure the queries create valid XML files.
Note: The XML files have time stamps in the format YYMMDDHHMMSSZZZ. If the
integration query returns a large number of CIs, the export is split into chunks of 1,000
CIs. Each chunk is a separate XML file, and the file for the last chunk has the string
EOD (end of data) appended to its name.
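The chunking and naming scheme can be sketched as follows. Only the timestamp format and the EOD marker on the last chunk come from the note above; the exact file-name layout used here is an assumption for illustration.

```python
from datetime import datetime

def chunk_file_names(query_name, cis, chunk_size=1000, now=None):
    """Illustrative chunked-export naming; layout beyond the timestamp format
    (YYMMDDHHMMSSZZZ) and the EOD suffix is an assumption."""
    now = now or datetime.now()
    # Two-digit year down to milliseconds, e.g. 130501120000123.
    stamp = now.strftime("%y%m%d%H%M%S") + "%03d" % (now.microsecond // 1000)
    chunks = [cis[i:i + chunk_size] for i in range(0, len(cis), chunk_size)] or [[]]
    names = []
    for idx in range(len(chunks)):
        suffix = "-EOD" if idx == len(chunks) - 1 else ""  # mark the last chunk
        names.append("%s-%s-%d%s.xml" % (query_name, stamp, idx, suffix))
    return names
```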
For details on running an integration job, see "Integration Studio" in the HP Universal CMDB Data
Flow Management Guide.
Adapter
This job uses the adapter UCMDB to XML (XmlPushAdapter).
Used Script
pushToXml.py
Parameters
Parameter Description
Export Directory The absolute path (on the probe system) of the directory to which the XML files are exported.
Data Flow Probe The name of the Data Flow Probe to be used.