VIOS
Contents
This Readme contains installation and other information about VIOS Update Release 2.2.5.10.
Installation information
Pre-installation information and instructions
Installing the Update Release
Performing the necessary tasks after installation
Additional information
Package information
IOSLEVEL: 2.2.5.10
In June 2015, VIOS introduced the minipack as a new service stream delivery vehicle, along with a change
to the VIOS fix level numbering scheme. The VIOS "fix level" (the fourth number) changed to two digits.
For example, VIOS 2.2.5.1 became VIOS 2.2.5.10. Refer to the VIOS Maintenance Strategy for more
details on the change to the VIOS release numbering scheme.
Be sure to meet all minimum space requirements before installing. For this release, ensure that at least
900 MB is available in the '/' file system.
The VIOS Update Release 2.2.5.10 includes the IVM code, but it will not be enabled on HMC-managed
systems. Update Release 2.2.5.10, like all VIOS Update Releases, can be applied to either HMC-managed
or IVM-managed VIOS.
Update Release 2.2.5.10 updates your VIOS partition to ioslevel 2.2.5.10. To determine if Update
Release 2.2.5.10 is already installed, run the following command from the VIOS command line:
$ ioslevel
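The check above can be sketched as a small script; the echo below is a stand-in for the real ioslevel output on a VIOS:

```shell
# Minimal sketch: compare the reported ioslevel against the target release.
# On a real VIOS, replace the echo below with: current=$(ioslevel)
target="2.2.5.10"
current=$(echo "2.2.5.10")   # stand-in for the ioslevel output
if [ "$current" = "$target" ]; then
    echo "Update Release $target is already installed"
else
    echo "Current level is $current; update is required"
fi
```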
The following requirements and limitations apply to Shared Storage Pool (SSP) features and any
associated virtual storage enhancements.
•Platforms: POWER6 and later (includes Blades), IBM PureFlex Systems (Power Compute Nodes only)
Software Installation
•When installing Update Release 2.2.5.10 on a VIOS participating in a Shared Storage Pool, the Shared
Storage Pool services must be stopped on the node being upgraded.
•In order to take advantage of the new SSP features in 2.2.4.00 (including improvements in the min/max
levels), all nodes in the SSP cluster must be at 2.2.4.00.
•Supporting more than 250 client LPARs per VIOS requires that each VIOS have at least 4 CPUs and 8 GB of memory.
Other notes:
•Maximum number of physical volumes that can be added to or replaced from a pool at one time: 64
•The Shared Storage Pool cluster name must be less than 63 characters long.
•The Shared Storage Pool pool name must be less than 127 characters long.
•The maximum supported LU size is 4 TB. However, for high I/O workloads it is recommended to use
multiple smaller LUs, as this improves performance. For example, 16 separate 16 GB LUs would
yield better performance than a single 256 GB LU for applications that read and write to a
variety of storage locations concurrently.
•The size of the /var file system should be greater than or equal to 3 GB to ensure proper logging.
Network Configuration
•Uninterrupted network connectivity is required for operation; that is, the network interface used for
the Shared Storage Pool configuration must be on a highly reliable network that is not congested.
•A Shared Storage Pool configuration can use IPV4 or IPV6, but not a combination of both.
•A Shared Storage Pool configuration should configure the TCP/IP resolver routine for name resolution
to resolve host names locally first, and then use the DNS. For step by step instructions, refer to the
TCP/IP name resolution documentation in the IBM Knowledge Center.
•The forward and reverse lookup should resolve to the IP address/hostname that is used for Shared
Storage Pool configuration.
•It is recommended that the VIOSs that are part of the Shared Storage Pool configuration keep their
clocks synchronized.
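On AIX-based systems such as the VIOS, one common way to make the resolver try local lookup before DNS is the /etc/netsvc.conf ordering shown below. Treat this as a sketch only; follow the TCP/IP name resolution documentation referenced above for the supported procedure.

```
# /etc/netsvc.conf : resolve host names locally first, then via DNS
hosts = local, bind
```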
Storage Configuration
•Virtual SCSI devices provisioned from the Shared Storage Pool may drive higher CPU utilization than the
classic Virtual SCSI devices.
•Using SCSI reservations (SCSI Reserve/Release and SCSI-3 Reserve) for fencing physical disks in the
Shared Storage pool is not supported.
•On the client LPAR, the Virtual SCSI disk is the only peripheral device type supported by SSP at this time.
•When creating Virtual SCSI Adapters for VIOS LPARs, the option "Any client partition can connect" is
not supported.
•VIOSs configured for SSP require that Shared Ethernet Adapters (SEAs) be set up in Threaded mode
(the default mode). SEA in Interrupt mode is not supported with SSP.
•VIOSs configured for SSP can be used as a Paging Space Partition (PSP), but the storage for the PSP
paging spaces must come from logical devices not within a Shared Storage Pool. Using a VIOS SSP logical
unit (LU) as an Active Memory Sharing (AMS) paging space or as the suspend/resume file is not
supported.
Installation information
Ensure that your rootvg contains at least 30 GB and that at least 4 GB of free space is available before
you attempt to update to Update Release 2.2.5.10. Run the lsvg rootvg command, and then confirm that
there is enough free space.
Example:
$ lsvg rootvg
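The free-space check can be sketched as follows; the sample FREE PPs line is illustrative and stands in for the real lsvg rootvg output:

```shell
# Sketch: extract the free-space figure from the "FREE PPs" line of lsvg output
# and check for the required 4 GB. The sample line below stands in for real
# output; on a real VIOS use: lsvg rootvg | grep "FREE PPs"
free_line='FREE PPs:           146 (18688 megabytes)'
free_mb=$(echo "$free_line" | sed -n 's/.*(\([0-9][0-9]*\) megabytes).*/\1/p')
if [ "$free_mb" -ge 4096 ]; then
    echo "rootvg has ${free_mb} MB free: enough for the update"
else
    echo "rootvg has only ${free_mb} MB free: extend rootvg first"
fi
```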
If you are planning to update a version of VIOS lower than 2.1, you must first migrate your VIOS to VIOS
version 2.1.0 using the Migration DVD. After the VIOS is at version 2.1.0, the Update/Fixpack 2.2.5.10
must be applied to bring the VIOS to the latest Fix Pack VIOS 2.2.5.10 level.
Note that with this Update Release 2.2.5.10, a single boot alternative to this multiple step process is
available to NIM users. NIM users can update by creating a single, merged lpp_source that combines the
contents of the Migration DVD with the contents of this Update Release 2.2.5.10.
A single, merged lpp_source is not supported for VIOS that uses SDDPCM. However, if you use SDDPCM,
you can still enable a single boot update by using the alternate method described at the following
location:
SDD and SDDPCM migration procedures when migrating VIOS from version 1.x to version 2.x
After the VIOS migration from 1.X to 2.X is complete, you must set Processor Folding, as described
under "Migration DVD".
If the current level of the VIOS is between 2.2.1.1 and 2.2.4.x, you can place the 2.2.5.10 update files in a
directory and perform the update using the updateios command.
To check for a loaded media repository, and then unload it, follow these steps.
1. To list loaded media images, run the following command.
$ lsvopt
2. To unload media images, run the following commands on all Virtual Target Devices that have loaded
images.
3. To verify that all media are unloaded, run the following command again.
$ lsvopt
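The unload step can be sketched as follows. The device names here are examples only; on a real VIOS, derive the list of loaded virtual target devices from the lsvopt output and run each unloadopt command directly:

```shell
# Sketch: build the unload command for each loaded virtual optical device.
# Device names are placeholders; on a real VIOS take them from lsvopt output.
loaded_vtds="vtopt0 vtopt1"
cmds=""
for vtd in $loaded_vtds; do
    cmds="$cmds unloadopt -vtd $vtd;"
done
echo "$cmds"
```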
The Virtual I/O Server (VIOS) Version 2.2.2.1 or later supports rolling updates for SSP clusters. The VIOS
can be updated to Update Release 2.2.5.10 by using rolling updates.
If your current VIOS is running with Shared Storage Pool from 2.2.1.1 or 2.2.1.3, the following
information applies:
A cluster that is created and configured on VIOS Version 2.2.1.1 or 2.2.1.3 must be migrated to version
2.2.1.4 or 2.2.1.5 before rolling updates can be used. This allows the user to keep their Shared Storage
Pool devices. When the VIOS version is greater than or equal to 2.2.1.4 and less than 2.2.5.10, download
the 2.2.5.10 update images into a directory, and then update the VIOS to Update Release 2.2.5.10
using rolling updates.
If your current VIOS is configured with Shared Storage Pool from 2.2.1.4 or later, the following
information applies:
The rolling updates enhancement allows the user to apply Update Release 2.2.5.10 to the VIOS logical
partitions in the cluster individually without causing an outage in the entire cluster. The updated VIOS
logical partitions cannot use the new SSP capabilities until all VIOS logical partitions in the cluster are
updated.
To upgrade the VIOS logical partitions to use the new SSP capabilities, ensure that the following
conditions are met:
•All VIOS logical partitions must have VIOS Update Release version 2.2.1.4 or later installed. After the
update, you can verify that the logical partitions have the new level of software installed by typing the
cluster -status -verbose command from the VIOS command line. In the Node Upgrade Status field, if the
status of the VIOS logical partition is displayed as UP_LEVEL , the software level in the logical partition is
higher than the software level in the cluster. If the status is displayed as ON_LEVEL , the software level in
the logical partition and the cluster is the same.
•All VIOS logical partitions must be running. If any VIOS logical partition in the cluster is not running, the
cluster cannot be upgraded to use the new SSP capabilities.
The VIOS SSP software monitors node status and automatically upgrades the cluster to use the
new capabilities once all the nodes in the cluster have been updated and "cluster -status -verbose"
reports "ON_LEVEL".
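The status check can be sketched as follows; the sample line stands in for real command output, which on a VIOS would come from cluster -status -verbose:

```shell
# Sketch: check the Node Upgrade Status field for ON_LEVEL. The sample line
# below stands in for real output from: cluster -status -verbose
status_line='Node Upgrade Status:     ON_LEVEL'
if echo "$status_line" | grep -q "ON_LEVEL"; then
    echo "node reports ON_LEVEL: node and cluster software levels match"
else
    echo "node reports UP_LEVEL: other nodes still need the update"
fi
```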
--------------------------------------------------------------------------------
There is now a method to verify the VIOS update files before installation. This process requires that the
'padmin' user have access to openssl, which can be accomplished by creating a link.
$ oem_setup_env
# ln -s /usr/bin/openssl /usr/ios/utils/openssl
# exit
Use one of the following methods to install the latest VIOS Service Release. As with all maintenance, you
should create a VIOS backup before making changes.
If you are running a Shared Storage Pool configuration, you must follow the steps in Migrate Shared
Storage Pool Configuration.
Note: While running 'updateios' in the following steps, you may see accessauth messages; these
messages can safely be ignored.
If your current level is between 2.2.1.1 and 2.2.2.1, you can directly apply 2.2.5.10 updates. This fixes an
update problem with the builddate on bos.alt_disk_install.boot_images fileset.
If your current level is 2.2.2.1, 2.2.2.2, 2.2.2.3, or 2.2.3.1, you need to run the updateios command twice
to fix the update problem with the bos.alt_disk_install.boot_images fileset.
Run the following command after the "$ updateios -accept -install -dev <directory_name>" step
completes.
Depending on the VIOS level, one or more of the LPPs below may be reported as "Missing Requisites",
and these can be ignored.
MISSING REQUISITES:
--------------------------------------------------------------------------------
Applying updates
WARNING: If the target node to be updated is part of a redundant VIOS pair, ensure that the VIOS
partner node is fully operational before beginning to update the target node. NOTE that for VIOS nodes
that are part of an SSP cluster, the partner node must be shown in 'cluster -status ' output as having a
cluster status of OK and a pool status of OK. If the target node is updated before its VIOS partner is fully
operational, client LPARs may crash.
1. The current level of the VIOS must be 2.2.2.1 or later if you use Shared Storage Pools.
2. If you use one or more File Backed Optical Media Repositories, you need to unload media images
before you apply the Update Release. See details here.
3. If you use Shared Storage Pools, then Shared Storage Pool Services must be stopped.
4. To apply updates from a directory on your local hard disk, follow the steps:
•Using ftp, transfer the update file(s) to the directory you created.
To apply updates from a remotely mounted file system, and the remote file system is to be mounted
read-only, follow the steps:
The update release can be burned onto a CD by using the ISO image file(s). To apply updates from the
CD/DVD drive, follow the steps:
$ updateios -commit
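One plausible apply sequence can be sketched as below. The directory path and optical device name are illustrative only, not values mandated by this README, and the commands are echoed rather than executed:

```shell
# Sketch of the apply sequence; /home/padmin/update is an example path and
# /dev/cd0 an example device. Commands are echoed here rather than run.
dir="/home/padmin/update"
echo "updateios -commit"                        # commit any uncommitted updates
echo "updateios -accept -install -dev $dir"     # apply from a local directory
echo "updateios -accept -install -dev /dev/cd0" # or apply from the CD/DVD drive
```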
6. Verify the update files that were copied. This step can be performed only if the link to openssl was
created.
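The verification can be sketched with openssl as below; the filename and contents are placeholders, and the resulting digest would be compared against the published checksum values:

```shell
# Sketch: checksum an update file with openssl and print it for comparison
# with the published values. The filename and contents are placeholders.
f="sample_update.bff"
echo "placeholder contents" > "$f"
sum=$(openssl dgst -sha256 "$f" | awk '{print $NF}')
echo "$f: $sum"
rm -f "$f"
```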
$ shutdown -restart
Note: If the shutdown -restart command fails, run swrole -PAdmin so that padmin can set the proper
authorization and gain access to the shutdown command.
$ ioslevel
--------------------------------------------------------------------------------
After installing an Update Release, you can use this method to determine if you have encountered the
problem of a loaded media library.
$ lsrep
If the command reports: "Unable to retrieve repository date due to incomplete repository structure,"
then you have likely encountered this problem during the installation. The media images have not been
lost and are still present in the file system of the virtual media library.
To recover from this type of installation failure, unload any media repository images, and then reinstall
the ios.cli.rte package. Follow these steps:
$ oem_setup_env
# exit
$ shutdown -restart
$ lsrep
Additional information
Use of NIM to back up, install, and update the VIOS is supported.
For further assistance on the back up and install using NIM, refer to the NIM documentation.
Note: For installs, always create the SPOT resource directly from the VIOS mksysb image. Do NOT update
the SPOT from an LPP_SOURCE.
On the NIM Master, use the operation updateios to update the VIOS Server.
On the NIM Master, use the operation alt_disk_install to update an alternate disk copy of the VIOS
Server.
Sample:
If NIM is not used to update the VIOS, only the updateios or the alt_root_vg command from the padmin
shell can be used to update the VIOS.
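The NIM master operations named above can be sketched as below. The object names (vios_22510_lpp, vios1, hdisk1) are placeholders introduced for illustration, not values from this README, and the commands are echoed rather than executed:

```shell
# Sketch: NIM master invocations for the updateios and alt_disk_install
# operations. All object and disk names below are hypothetical examples.
upd="nim -o updateios -a lpp_source=vios_22510_lpp -a accept_licenses=yes vios1"
alt="nim -o alt_disk_install -a source=rootvg -a disk=hdisk1 vios1"
echo "$upd"
echo "$alt"
```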
This release of VIOS contains several enhancements. These enhancements are in the area of POWER
virtualization. The following list provides the features of each element by product area.
Note: Version 6.1.0, the previous version of Tivoli TSM, is still shipped and installed from the VIOS
installation DVD.
Tivoli TSM version 6.2.2
The Tivoli TSM filesets are now being shipped on the VIOS Expansion Pack, with the required GSKit8
libraries.
The following are sample installation instructions for the new Tivoli TSM filesets:
1. Insert the VIOS Expansion DVD into the DVD drive that is assigned to the VIOS partition.
Fileset Name
GSKit8.gskcrypt32.ppc.rte 8.0.14.7
GSKit8.gskcrypt64.ppc.rte 8.0.14.7
GSKit8.gskssl32.ppc.rte 8.0.14.7
GSKit8.gskssl64.ppc.rte 8.0.14.7
..
..
NOTE: Any prerequisite filesets will be pulled in from the Expansion DVD, including for TSM the
GSKit8.gskcrypt fileset.
4. If needed, install additional TSM filesets.
$ lssw
Sample output:
..
Interface
IV69355 SEA LARGESEND CAN CAUSE TCP BAD HEADER OFFSET ON VIRTUAL CLIENT
IV71906 LVS AND VGS GET DISPLAYED WHEN THEY SHOULD BE HIDDEN.
IV72893 CLCOMD CAN USE TOO MUCH CPU WHEN AHAFS NOT ACCESSIBLE
IV73838 caa: inactive node fails to generate remote node_up ahafs event
IV73952 cleanup SRC VIOS mappings after client lpar is remote restarted.
IV75538 caa: defined pv that matches repo pvid causes mkcluster to fail
IV75591 Resume can fail with HMC code HSCLA27F after validation pass.
IV75685 LPM can fail with HMC to MSP connection timed out error
IV76033 Some SSP VTDs are not restored during cluster restore
IV76151 IKED LOOPS AND CAUSES CPU LOAD WHEN 0-BYTE DATAGRAM IS PRESENT
IV76256 Unlock adapter is not required when lock did not happen earlier
IV76512 Packets with old MAC are received after changing MAC
IV76529 System crash while running hxecom with modified ent attributes
IV76538 SCTP heartbeats are not getting sent within RTO of the path
IV76723 CAA UNICAST CLU: DMS ON LAST NODE DUE TO INCORRECT DPCOM STATE
IV76817 LPM validation or some VIOS function may give ODM lock errors
IV77179 viosecure command fails during rule failure with file option
IV77472 lnc2ent kdb command fails due to corrupted command table entry.
IV77475 ADD FOR POWERVC RESERVE_LOCK POLICY USING EMC 5.3.0.6 OR OLDER
IV78144 cluster -status command does not always show pool status
IV78360 LDAP USER HAS ADMCHG SET AFTER CHANGING OWN PASSWORD
IV78897 LU-level validation for LPM fails with IBM i or Linux clients
IV79281 CLMIGCHECK MIGHT FAIL ON CLUSTERS WITH MORE THAN TWO NODES.
IV79634 FAILED PATH ON A CLOSED DISK MAY NOT RECOVER AFTER DISK REOPEN
IV79658 LEVEL.OT MIGRATIONS FAIL IF ALL NODES NOT AT THE SAME CAA
IV79874 SEA CONTROL CHANNEL & TRUNK VIRTUAL ADAPTER SHARE PVID CRASH
IV80569 MUSENTDD'S RECEIVE PATH HANGS WHEN IT RECEVIES CERTAIN SIZE PKT
IV80689 LNCENTDD: MULTICAST ADDRS LOST WHEN PROMISCUOUS MODE TURNED OFF
IV81023 NETSTAT & ENTSTAT FOR ETHERCHANNELS WITH SRIOV DO NOT SHOW MAC
IV81223 HMC reports that VIOS is busy and not responding to queries
IV81241 MELLANOX V2 DRIVER LOGS LINK UP AND LINK DOWN DURING REBOOT
IV81297 VIO CLIENT CRASHES IN DISK DRIVER DURING LPM WITH GPFS AND PR SH
IV82449 VIRTUAL ETHERNET ADAPTER DRIVER MAY GOES INTO DEAD STATE.
IV82461 VIOS may rarely crash during LUN-level validation for LPM
IV82463 Adapters using lncentdd driver may log EEH as permanent Error
IV82577 UNMIRRORVG ROOTVG MAY HANG WITH FW-ASSISTED DUMP & HD6
IV82596 SYN NOT RECEIVED WHEN VEA CLIENT TURN OFF CHECKSUM OFFLOAD
IV82728 SECONDARY GROUP NOT DETERMINED FOR OLDER NETIQ LDAP SERVERS
IV82983 viosbr -restore not restoring other nodes except initiator node
IV83078 VIOS may crash when doing LUN-level validation for LPM
IV83212 HOSTNAME ISSUES PREVENT NODE FROM JOINING THE CAA CLUSTER
IV84449 TCBCK -Y ALL IS RUN AT FIRSTBOOT FROM ALTERNATE DISK IN TCB ENV
IV84636 BOOTPD DOESN'T FIND ROUTE WITH MANY NETWORK INTERFACES IN USE
IV84704 LARGE RECEIVE ON VIOS IS SLOW WHEN VIO CLIENT HAS CHKSUM OFF
IV84985 EEH Resume failure can cause restart to set invalid device state
IV85145 CONSTANT DISK_ERR4 ERRORS FOR INTERNAL SCSI DISKS IF HEALTH CHEC
IV85311 Promiscuous or All mcast flags still shown as set after a close
IV85593 NFS3 OPEN FAILURE AFTER NFS3 OPEN WITH O_DIRECT AND O_CIO FAILS
IV86237 Possible race condition between ctrc suspend and trace operation
IV86535 system crash while using adapter with FC#EN0S and EN0W
IV86577 System crash due to stack overflow when errlogging on slih path
IV86666 mkvopt command allows vopt device names with special characters
IV86792 Removal of more than one node from a cluster was giving error
IV87053 IPTRACE -L DOES NOT ROTATE LOGFILE IF LOGSIZE MORE THAN 2GB
IV87734 CLUSTER HEARTBEAT SETTINGS SMIT MENU MAY SHOW VALUE AS ZERO
IV87957 FC5899 ENTSTAT PRINTS 1818495488 FOR BAD PACKET COUNT IN NON-C
IV90068 PCM SSP data collection status may not be updated properly