IBM PowerVM Getting Started Guide
ibm.com/redbooks
Redpaper
International Technical Support Organization IBM PowerVM Getting Started Guide February 2012
REDP-4815-00
Note: Before using this information and the product it supports, read the information in Notices on page v.
First Edition (February 2012) This edition applies to IBM Virtual I/O Server, versions 2.2.0 and 2.2.1; IBM Systems Director Management Console, version 6.7.4.0; and IBM Hardware Management Console.
Copyright International Business Machines Corporation 2012. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents

Notices  v
Trademarks  vi

Preface  vii
The team who wrote this paper  vii
Now you can become a published author, too!  viii
Comments welcome  ix
Stay connected to IBM Redbooks  ix

Chapter 1. Introduction to PowerVM  1
1.1 Overview  2
1.2 Planning  3
1.3 Terminology differences  6
1.4 Prerequisites  6

Chapter 2. Configuring PowerVM with Integrated Virtualization Manager  7
2.1 Setting up a single VIOS using IVM  8
2.1.1 Installing a VIOS  8
2.1.2 Creating a partition for the client operating system  11
2.1.3 Configuring a VIOS for a client network  12
2.1.4 Configuring a VIOS for client storage  12
2.1.5 Installing a client operating system  14
2.2 Setting up a dual VIOS with IVM  15
2.3 Setting up an N_Port ID Virtualization Fibre Channel with IVM  15

Chapter 3. Configuring PowerVM with the Hardware Management Console  17
3.1 Setting up a single VIOS using an HMC  18
3.1.1 Creating a VIOS partition profile  18
3.1.2 Installing a VIOS  23
3.1.3 Configuring a VIOS partition  26
3.1.4 Creating a logical partition profile for a client operating system  30
3.2 Setting up a dual VIOS using HMC  32
3.2.1 Creating dual VIOS partition profiles  32
3.2.2 Installing a VIOS  34
3.2.3 Configuring VIOS partitions for a dual setup  34
3.2.4 Creating a logical partition profile for the client operating system  39
3.3 Setting up a virtual Fibre Channel using HMC  40
3.4 Adding additional client partitions  44

Chapter 4. Configuring PowerVM with the IBM Systems Director Management Console  45
4.1 Implementing a dual VIOS setup using the SDMC  46
4.1.1 Creating the virtual servers for VIOS1 and VIOS2  46
4.1.2 Installing VIOS1 and VIOS2  49
4.1.3 Configuring the TCP/IP stack in VIOS1 and VIOS2  49
4.1.4 Creating the SEA failover configuration by using the SDMC  50
4.1.5 Configuring storage devices  51
4.1.6 Creating a virtual server for a client operating system  53
4.1.7 Installing a client operating system  54
4.1.8 Configuring virtual Fibre Channel adapters using the SDMC  54
4.2 Setting up a single VIOS using the SDMC  56
4.2.1 Creating a VIOS virtual server  57
4.2.2 Installing a VIOS  59
4.2.3 Configuring a VIOS  60
4.2.4 Creating a virtual server for a client operating system  64
4.2.5 Installing a client operating system  65
4.3 Setting up a dual VIOS using the SDMC  65
4.3.1 Creating a second VIOS virtual server  65
4.3.2 Installing a second VIOS using NIM  67
4.3.3 Configuring a second VIOS  69
4.4 Setting up a virtual Fibre Channel using the SDMC  74
4.4.1 Configuring a client virtual server for NPIV  74
4.4.2 Configuring a VIOS for NPIV  75
4.4.3 Configuring a second VIOS for NPIV  77

Chapter 5. Advanced configuration  79
5.1 Adapter ID numbering scheme  80
5.2 Partition numbering  81
5.3 VIOS partition and system redundancy  81
5.4 Advanced VIOS network setup  82
5.4.1 Using IEEE 802.3ad Link Aggregation  82
5.4.2 Enabling IEEE 802.1Q VLAN tagging  83
5.4.3 Multiple SEA configuration on VIOS  84
5.4.4 General network considerations  84
5.5 Advanced storage connectivity  85
5.6 Shared processor pools  87
5.7 Live Partition Mobility  87
5.8 Active Memory Sharing  87
5.9 Active Memory Deduplication  87
5.10 Shared storage pools  88

Related publications  89
IBM Redbooks  89
Online resources  89
Help from IBM  89
Notices
This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml. The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
Active Memory, AIX, BladeCenter, GPFS, IBM, POWER Hypervisor, Power Systems, POWER6, POWER7, PowerHA, PowerVM, POWER, Redbooks, Redpaper, Redbooks (logo), System i, System p5, System Storage, Tivoli
The following terms are trademarks of other companies:

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.
Preface
IBM PowerVM virtualization technology is a combination of hardware and software that supports and manages virtual environments on IBM POWER5, POWER5+, POWER6, and POWER7 processor-based systems. These systems are available on IBM Power Systems and IBM BladeCenter servers as optional editions, and are supported by the IBM AIX, IBM i, and Linux operating systems. With this set of comprehensive systems technologies and services, you can aggregate and manage resources with a consolidated, logical view. By deploying PowerVM virtualization and IBM Power Systems, you can take advantage of the following benefits:
- Lower energy costs through server consolidation
- Reduced cost of your existing infrastructure
- Better management of the growth, complexity, and risk of your infrastructure

This IBM Redpaper publication is a quick start guide to help you install and configure a complete PowerVM virtualization solution on IBM Power Systems. It highlights how to use the following management console interfaces to configure PowerVM:
- Integrated Virtualization Manager (IVM)
- Hardware Management Console (HMC)
- Systems Director Management Console (SDMC)

This paper also highlights advanced configuration of a dual Virtual I/O Server setup. This paper targets new customers who need assistance with quickly and easily installing, configuring, and starting a new PowerVM server in a virtualized environment.
Pavel Pokorn is an IT Consultant at GC System a.s., an IBM Business Partner in the Czech Republic. He has seven years of experience in the Information Technology field. His areas of expertise include PowerVM, Power Systems, AIX, Data Protection, IBM System Storage, and Storage Area Network. Pavel holds a master's degree in Information Technology from the University of Defence in Brno, Czech Republic. The project that produced this publication was managed by Scott Vetter, Certified Executive Project Manager, for the ITSO in Austin, Texas. Thanks to the following people for their contributions to this project: David Bennin, Richard M. Conway, Brian King, Ann Lund, Linda Robinson, Alfred Schwab, and Don S. Spangler of IBM US, and Nicolas Guerin of IBM France.
Comments welcome
Your comments are important to us! We want our papers to be as helpful as possible. Send us your comments about this paper or other IBM Redbooks publications in one of the following ways:
- Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
- Send your comments in an email to:
redbooks@us.ibm.com
- Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Chapter 1. Introduction to PowerVM
Businesses are turning to IBM PowerVM virtualization to consolidate multiple workloads onto fewer systems, to increase server utilization, and to reduce cost. PowerVM provides a secure and scalable virtualization environment for AIX, IBM i, and Linux applications that is built on the advanced reliability, availability, and serviceability features and the leading performance of the IBM Power platform. This chapter provides an overview of the key PowerVM concepts, a planning model, and preferred practices to follow. It includes the following sections:
- Overview
- Planning
- Terminology differences
- Prerequisites
1.1 Overview
With a PowerVM system, you can immediately install and configure virtual machines and have a fully functional logical partition (LPAR). This paper highlights how to use the following management console interfaces to install and configure PowerVM step by step:
- Integrated Virtualization Manager (IVM), in Chapter 2, "Configuring PowerVM with Integrated Virtualization Manager" on page 7
- Hardware Management Console (HMC), in Chapter 3, "Configuring PowerVM with the Hardware Management Console" on page 17
- IBM Systems Director Management Console (SDMC), in Chapter 4, "Configuring PowerVM with the IBM Systems Director Management Console" on page 45

These chapters use a cookbook-style approach that includes similar steps to accomplish the same task.

Logical partition: The term logical partition is used as a generic term in this document. Other terms that are used include guest partition, partition, and virtual server. All of these terms refer to virtualized guest servers that run their own operating systems (OS).

IBM BladeCenter: This paper does not address IBM BladeCenter.

All three management console interfaces manage virtualization on IBM Power Systems. Table 1-1 shows how these interfaces differ in managing virtualization.
Table 1-1 Feature comparison of management console interfaces (IVM, HMC, and SDMC) to manage virtualization. The compared features are: included in PowerVM; manage Power Blades; manage more than one server; hardware monitoring; service agent call home; graphical interface; requires a separate server to run on; runs on virtualized environments; advanced PowerVM features; high-end servers; low-end and midrange servers; redundant setup (for the SDMC, hardware appliance only); and supported server families (POWER5/POWER5+, POWER6/POWER6+, and POWER7).
Chapters 2 - 4 include the following basic tasks, which vary in order and complexity from one managing system to another:
- This paper guides you through all the installation and configuration from scratch. You can factory reset your server if you prefer; no previous configuration is needed.
  Important: Before performing a factory reset, back up all of your data.
- Depending on the case, install one or two Virtual I/O Servers (VIOS). A redundant VIOS is supported only by the HMC and SDMC.
- Configure your network and storage. This procedure might require information from your network administrator, storage administrator, or both.
- Create a client LPAR.

Each chapter guides you step by step to achieve a fully functional PowerVM solution with one LPAR ready for use.
1.2 Planning
During the development of this paper, a unique server was used for each of the management interfaces. Figure 1-1 shows the model, type, and management interface used.
Before you start to configure your environment, complete the following planning tasks:
- Check the firmware levels on the server and the HMC or SDMC.
- Decide whether to use Logical Volume Mirroring (LVM) in AIX LPARs or Multipath I/O (MPIO). The examples in this paper use MPIO.
  MPIO: MPIO is a fault-tolerance and performance-enhancement technique in which more than one path is used between a system and its storage devices.
- Make sure that your Fibre Channel switches and adapters are N_Port ID Virtualization (NPIV) capable.
  NPIV: N_Port ID Virtualization is a subset of the Fibre Channel standard that PowerVM uses to virtualize Fibre Channel adapters. (A quick check from a running VIOS is sketched after Table 1-3.)
- Make sure that your network is properly configured.
- Check the firewall rules on the HMC or SDMC.
- Plan how much processor and memory to assign to the VIOS for best performance.
- Plan the slot numbering scheme of the VIOS virtual adapters. This paper uses the scheme shown in Figure 1-2. The SDMC offers automatic handling of slot allocation. (A CLI sketch for auditing slot assignments appears after Table 1-3.)
Figure 1-2 Virtual adapter slot numbering scheme: each virtual server (VirtServer1 with LPAR ID 10, VirtServer2 with LPAR ID 11, through VirtServerXX) uses client adapter IDs 11 and 12 for VIOS1 and 21 and 22 for VIOS2, matched to VIOS server adapter IDs 101/102, 111/112, and so on (XX1/XX2)
- Plan for two VIOS. Use the dual VIOS architecture so that you can have serviceability and scalability.

Dual VIOS architecture: The dual VIOS architecture is available only when using the HMC or SDMC as managers. You cannot use dual VIOS with IVM.

The dual VIOS setup offers serviceability to a PowerVM environment on the managed system. It also provides added redundancy and load balancing of client network and storage. The mechanisms involved in setting up a dual VIOS configuration use Shared Ethernet Adapter (SEA) failover for the network and MPIO, by using shared drives on the VIOS partitions, for client storage. Other mechanisms can be employed, but SEA failover for networks and MPIO for storage require less configuration on the client partitions.

SEA: An SEA is a VIOS component that bridges a physical Ethernet adapter and one or more virtual Ethernet adapters. For more information, see the IBM Power Systems Hardware Information Center at the following address, and search for "POWER6 Shared Ethernet Adapters":
http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/ipha8/hwicwelcome.htm

SEA failover and MPIO allow for serviceability, redundancy, and load balancing with the VIOS partitions. One VIOS can act as a primary VIOS for networks and can be a standby for
storage. The other VIOS can act as a standby for networks and can be the primary VIOS for storage. The flexibility afforded by using a dual-VIOS setup caters to a range of client requirements. Table 1-2 shows the adapter allocation for VIOS1 (illustrated in Figure 1-2). This table describes the relationship between the virtual client adapter ID and the client adapter IDs of the virtual servers.
Table 1-2 VIOS1 adapter ID allocation

Virtual adapter      | Server adapter ID           | VLAN ID                     | Server adapter slot | Client partition or virtual server | Client adapter ID | Client adapter slot
Virtual Ethernet     | 2 (used default allocation) | 1 (used default allocation) | C2                  | All virtual servers                | -                 | -
Virtual Ethernet (a) | 3 (used default allocation) | 99 (default for SDMC only)  | C3                  | -                                  | -                 | -
Virtual Ethernet (b) | -                           | 1                           | -                   | VirtServer1                        | 2                 | C2
Virtual VSCSI        | 101                         | -                           | C101                | VirtServer1                        | 11                | C11
Virtual fiber        | 102                         | -                           | C102                | VirtServer1                        | 12                | C12
Virtual Ethernet (b) | -                           | 1                           | -                   | VirtServer2                        | 2                 | C2
Virtual VSCSI        | 111                         | -                           | C111                | VirtServer2                        | 11                | C11
Virtual fiber        | 112                         | -                           | C112                | VirtServer2                        | 12                | C12
a. Use this virtual Ethernet adapter as the control channel adapter (SEA failover adapter). b. This client virtual Ethernet adapter is not associated with a VIOS. The VLAN ID configured on the adapter is the link to the SEA configuration.
Similarly, Table 1-3 describes the adapter ID allocation for VIOS2 and its relationship to the client adapter IDs of the virtual servers.
Table 1-3 VIOS2 adapter ID allocation

Virtual adapter      | Server adapter ID           | VLAN ID                     | Server adapter slot | Client partition or virtual server | Client adapter ID | Client adapter slot
Virtual Ethernet     | 2 (used default allocation) | 1 (used default allocation) | C2                  | -                                  | -                 | -
Virtual Ethernet (a) | 3 (used default allocation) | 99 (default for SDMC only)  | C3                  | -                                  | -                 | -
Virtual VSCSI        | 101                         | -                           | C101                | VirtServer1                        | 21                | C21
Virtual fiber        | 102                         | -                           | C102                | VirtServer1                        | 22                | C22
Virtual VSCSI        | 111                         | -                           | C111                | VirtServer2                        | 21                | C21
Virtual fiber        | 112                         | -                           | C112                | VirtServer2                        | 22                | C22
a. Use this virtual Ethernet adapter as the control channel adapter (SEA failover adapter).
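For the NPIV planning item, one quick check from a VIOS that is already running is the lsnports command, which reports fabric 1 when the adapter port is attached to an NPIV-capable switch. This is a minimal sketch; the output row is the sample that also appears in Example 3-2 in Chapter 3, not a value from your setup:

$ lsnports
name  physloc                     fabric tports aports swwpns awwpns
fcs0  U5802.001.0087356-P1-C2-T1  1      64     64     2048   2046

For the slot numbering item, the HMC CLI can list the virtual adapter slots actually in use after partitions exist, which helps confirm that the plan in Tables 1-2 and 1-3 was applied. A minimal sketch, assuming a hypothetical managed system name p750; lshwres is a standard HMC command:

$ lshwres -r virtualio --rsubtype scsi -m p750 --level lpar \
  -F lpar_name,slot_num,remote_lpar_name,remote_slot_num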
1.3 Terminology differences

The management interfaces and x86 virtualization products use different terms for similar concepts. For example, the VIOS management partition corresponds to a management operating system, the VMware Service Console, or the KVM host partition; a logical partition corresponds to a virtual machine or virtual server; and the POWER Hypervisor corresponds to an x86 hypervisor.
1.4 Prerequisites
Verify the following prerequisites to get as close to an ideal scenario as possible:
- Ensure that your HMC or SDMC (the hardware or the virtual appliance) is configured, up, and running.
- Ensure that your HMC or SDMC is connected to the HMC port of the new server. Use either a private network or a direct cable connection.
- Ensure that TCP port 657 is open between the HMC or SDMC and the virtual server to enable dynamic LPAR functions. (A quick check follows this list.)
- Ensure that you properly assigned IP addresses for the HMC and SDMC.
- Ensure that the IBM Power server is ready to power on.
- Ensure that all your equipment is connected to 802.3ad-capable network switches with link aggregation enabled. For more information, see Chapter 5, "Advanced configuration" on page 79.
- Ensure that the Fibre Channel fabrics are redundant. For more information, see Chapter 5, "Advanced configuration" on page 79.
- Ensure that the Ethernet network switches are redundant.
- Ensure that SAN storage for the virtual servers (logical partitions) is ready to be provisioned.
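A simple way to confirm later that the RMC connectivity required for dynamic LPAR is in place is from the HMC command line. This is a minimal sketch; lspartition is a standard HMC command, and partitions listed with Active:<1> have a working RMC connection:

$ lspartition -dlpar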
Chapter 2. Configuring PowerVM with Integrated Virtualization Manager
3. Ensure that the server is in the normal boot mode as indicated on the operator panel (1 N V=N T). 4. Press the white power button on the front panel to power on the server. 5. On the ASCII console, if presented with options to set this panel as the active console, press the keys indicated on the panel. 6. If prompted about the license agreements or software maintenance terms, accept the agreements or terms.
7. After the first menu selection panel (Figure 2-1) is displayed, as soon as you see the word keyboard at the bottom of the panel, press the 1 key. If you delay, the system attempts to start any operating system (OS) that might be loaded on the server.
Figure 2-1 First menu selection panel (the panel fills with the word IBM repeated across the display)
8. Select the language if necessary.
9. Enter the Service Processor password for the admin user account. The default password is admin. If the default password does not work and you do not have the admin password, contact hardware support to walk through the sign-on process with an IBM service representative profile.
10.Insert the VIOS installation media in the CD or DVD drive.
11.To start the media from the CD or DVD drive, complete these steps:
a. Type 5 for Boot Options.
b. Type 1 for Install/Boot Device.
c. Type 3 for CD/DVD.
d. Type 9 for List All Devices.
e. Select the correct CD or DVD device from the list (probably the last device at the bottom of the list).
f. Select the media type from the list.
12.From the SMS menu, select Normal Mode Boot, and then select Exit.
13.Select the console number, and then press Enter.
14.Select the preferred language.
15.On the Installation and Maintenance menu, select option 1 to start with default settings. For the other panels, select the default options. A progress panel shows Approximate % Complete and Elapsed Time. This installation takes between 15 minutes and an hour to complete.
In Example 2-1, the top port (T1) of the Ethernet card in slot 4 (C4) of the CEC drawer (P1, serial number DNWKGPB) is assigned to ent4.
2. Enter the cfgassist command, and then select VIOS TCP/IP Configuration. Select the appropriate enn interface related to the adapter port chosen previously. In this case, the interface is en4, which is related to adapter port ent4.
entn: Each entn has an associated enn and etn (where n is the same number). In the example of ent4, en4 and et4 are all related to the same Ethernet port on the card. Always use the enn entry to assign TCP/IP addresses.
3. In the VIOS TCP/IP Configuration panel (Figure 2-2), enter TCP/IP configuration values for the VIOS connectivity.

VIOS TCP/IP Configuration
Type or select values in entry fields. Press Enter AFTER making all desired changes.
                                                  [Entry Fields]
* Hostname
* Internet ADDRESS (dotted decimal)
  Network MASK (dotted decimal)
* Network INTERFACE
  Default Gateway (dotted decimal)
  NAMESERVER Internet ADDRESS (dotted decimal)
  DOMAIN Name
  Cable Type
Figure 2-2 VIOS TCP/IP Configuration panel
Initializing the Ethernet might take a few minutes to complete. 4. Ping the Internet address set for VIOS from your PC. In this example, the address is 172.16.22.10.
5. Open a browser on your PC, and connect to the following address: HTTPS://internet-address
In this example, we use the following Internet address: HTTPS://172.16.22.10
Browser connection: Before you can proceed, you must get the browser to connect. Not all Windows browsers work with IVM. Use Microsoft Internet Explorer version 8 or earlier or Mozilla Firefox 3.6 or earlier. You must also enable pop-up windows in the browser. Complete steps 6 and 7 from a browser window.
6. Using the browser, sign on to VIOS with the padmin profile and the password set previously.
7. To check for updates to VIOS, from the Service Management section in the left panel, click Updates. (A sketch for checking the installed VIOS level follows this procedure.)
VIOS is now installed and ready for client partitions. At this time, VIOS owns all the hardware in the server. VIOS can supply virtual adapters to the various client partitions, or it can give up control of a hardware adapter or port for assignment to the client partition.
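Before applying updates, it can help to record the VIOS level that is installed. A minimal sketch using the standard ioslevel command from the padmin CLI; the level shown is illustrative only:

$ ioslevel
2.2.1.0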
7. If installing AIX or Linux, skip the virtual fiber connections. Click Next. 8. Confirm that one virtual optical device (CD or DVD) is selected. Click Next. 9. In the final summary panel, which shows the settings for this partition, if everything is correct, click Finish. VIOS finishes creating the partition environment.
3. On the Create Storage Pool tab, complete the following steps: a. Enter a name for the storage pool. b. Accept the default setting of Logical volume based. c. Optional: Leave the Assign as default storage pool check box selected to make logical volume creation easier later. d. Select as many of the physical volumes from the listing at the bottom of the panel as needed to size the storage pool. e. When you are finished, click OK. Error message: You might receive an error message about the disk being previously assigned to another storage pool. Correct the assignment if you made a mistake, or select the Force physical volume addition check box at the bottom of the panel to continue. 4. Back on the Storage Pool tab, view the size of the storage pool that was created. This pool is now ready to divide into individual virtual disks.
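The same storage pool tasks can also be performed from the VIOS command line. A minimal sketch, assuming hdisk2 is the chosen physical volume and using the hypothetical names clientpool and client_lv; mksp and mkbdsp are standard VIOS commands:

$ mksp -f clientpool hdisk2
$ mkbdsp -sp clientpool 20G -bd client_lv

The first command creates a logical volume based storage pool on hdisk2, and the second carves a 20 GB backing device (virtual disk) named client_lv from that pool.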
The physical CD or DVD drive in the server now belongs to that partition.
3. Select the IPL type for the IBM i partition, and verify the other partition settings:
a. In the left pane, click View/Modify Partitions.
b. In the right pane, select the partition, and complete these steps:
i. From the More Tasks pull-down list, select Properties.
ii. Change the IPL type to D (IPL from CD/DVD), and change the keylock position to Manual.
4. Place the I_Base_01 CD in the CD/DVD drive of the server. Click OK at the bottom of the panel.
5. Select the partition again, and use the Activate button to start the partition IPL.
Progress and reference codes: For IBM i, if the partition reaches the C600-4031 reference code, the partition is operating normally and is looking for the LAN console session. If the IBM i partition reaches reference code A600-5008, the partition was unsuccessful in contacting the console session. Therefore, you must troubleshoot the LAN console connectivity. Make sure that you bridged the correct VLAN ports and that the LAN console PC is on the same subnet as the bridged Ethernet port.
After you reach the language selection panel on the console, the installation of IBM i proceeds the same as installing on a stand-alone server. Continue with the Dedicated Service Tools functions to add the disk to the auxiliary storage pool (ASP) and to load the OS.
You have now installed and configured VIOS and at least one client partition. The following sections expand on this basic installation with more advanced features.
To configure N_Port ID Virtualization (NPIV) attached storage, create the virtual fiber adapters to generate the worldwide port names (WWPNs) that allow the configuration and assignment of the storage. To configure the virtual fiber adapters:
1. In the left pane, click View/Modify Partitions.
2. Select the partition.
3. From the More Tasks pull-down list, select Properties.
4. On the Storage tab, complete these steps:
a. Expand the Virtual Fiber Channel section.
b. If an interface is not shown, click Add to create the first interface. Select the first interface listed (listed as Automatically Generated), and select the correct physical port from the pull-down list.
c. Click OK to complete the generation of the WWPNs for this interface.
5. Return to the partition storage properties (steps 1 - 3) to view the WWPNs. Record these numbers to configure the fiber attached storage. (A CLI sketch for reviewing the NPIV mappings follows at the end of this section.)
After the OS is installed and the NPIV attached storage is provisioned, the storage is directly assigned to the OS of the partition. VIOS is unaware of the storage. Use the normal procedures to add newly attached storage to the OS (AIX, IBM i, or Linux).
After you finish the installation using the IVM, you can increase the reliability, availability, and serviceability (RAS) of the configuration by applying the information in Chapter 5, "Advanced configuration" on page 79.
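The NPIV mappings and the WWPNs generated for each virtual fiber adapter can also be reviewed from the VIOS command line. A minimal sketch using the standard lsmap command; the output depends on your configuration:

$ lsmap -all -npiv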
Chapter 3. Configuring PowerVM with the Hardware Management Console
3. In the Servers panel, complete these steps:
a. Select your managed system (the first check box in the first column on the right side of the table, circled in Figure 3-1).
b. In the Name field for your managed system, click the button. Then select Configuration → Create Logical Partition → VIO Server.
4. In the Create Partition panel, specify the name of your partition. In this example, we enter VIOS1. Then click Next.
5. In the Partition Profile panel, enter your profile name. In this example, we enter Normal. Then click Next.
6. In the Processors panel, verify that Shared is selected. Then click Next.
7. In the Processor Settings panel, complete the following settings:
a. For Desired processing units, enter 0.2.
b. For Maximum processing units, enter 10.
c. For Desired virtual processors, enter 2.
d. For Desired maximum processors, enter 10.
e. Select the Uncapped check box.
f. Update the Weight setting to 192.
Processor settings: The processor settings allow for the lowest utilization setting for the VIOS of 0.2 (Desired processing units), but it is scalable up to two processing units (Desired virtual processors) if necessary. The higher weight gives the VIOS priority over the other logical partitions (LPARs). For more information, see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.
8. In the Memory Settings panel, complete the following fields:
a. For Minimum Memory, enter 1 GB.
b. For Desired Memory, enter 4 GB.
c. For Maximum Memory, enter 8 GB.
9. In the I/O panel, complete the following steps:
a. Select the following check boxes:
- The RAID or SAS controller to which the internal disks are attached (disk controllers for the VIOS internal drives)
- The Ethernet adapter (newer adapters are described as PCI-to-PCI Bridge) that is cabled to the network
- The Fibre Channel adapter attached to the SAN fabric
b. Click Add as desired. c. Click Next. Figure 3-2 shows the selected adapters.
10.In the Virtual Adapters panel, update the Maximum virtual adapters setting to 1000. Adapter numbering scheme: You can plan your own adapter numbering scheme. You must set the Maximum virtual adapters setting in the Virtual Adapters panel to allow for your numbering scheme. The maximum setting is 65535. The higher the setting is, the more memory the managed system reserves to manage the adapters.
11.Create a virtual Ethernet adapter for Ethernet bridging. In the Virtual Adapters panel (Figure 3-3), complete these steps:
a. Select Actions → Create Virtual Adapter → Ethernet Adapter.
b. In the Create Virtual Ethernet Adapter panel, select the Use this adapter for Ethernet bridging check box, and click OK. The virtual Ethernet adapter is created and is shown in the Virtual Adapters panel.
Default settings: When creating the virtual Ethernet adapter, we accepted the default settings for Adapter ID, Port VLAN ID, and Ethernet Bridging Priority (Trunk Priority). These settings are customizable for a range of planning designs or standards.
12.Depending on your VIOS partition setup, choose one of the following options:
- For a single VIOS partition setup, skip to step 13.
- For a dual VIOS partition setup, continue to create a virtual Ethernet adapter for SEA failover. In the Virtual Adapters panel:
i. Select Actions → Create Virtual Adapter → Ethernet Adapter.
ii. In the Create Virtual Ethernet Adapter panel, update the Port Virtual Ethernet value to 99.
iii. Click OK. The virtual Ethernet adapter is created and is shown in the Virtual Adapters panel.
13.To create the virtual SCSI adapter, complete these steps:
a. In the Virtual Adapters panel, select Actions → Create Virtual Adapter → SCSI Adapter.
b. In the next panel, complete the following steps:
i. Select the Only selected client partition can connect check box.
ii. For Adapter, enter 101.
iii. For Client partition, enter 10.
iv. For Client adapter ID, enter 11.
v. Click OK to accept the settings.
Information: For the client partition, we begin at partition ID 10 (reserving partition IDs 2 - 9 for future VIOS or infrastructure servers). For the adapter ID, we chose 101 as a numbering scheme to denote the partition and virtual device 1. For the Client adapter ID, we chose 11 as the first disk adapter for the client partition. 14.In the Virtual Adapter panel (Figure 3-4), which shows the virtual adapters that you created, click Next.
Figure 3-4 Virtual Adapters panel with virtual Ethernet and virtual SCSI adapter defined
15.For the remaining panels, click Next until you reach the Profile Summary panel. 16.In the Profile Summary panel, verify your settings, and then click Finish. 17.Click your managed system to view the VIOS partition profile you created. You have now created an LPAR (virtual server) for the VIOS installation.
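The same VIOS partition profile can also be created from the HMC CLI with the mksyscfg command. This is a minimal sketch, assuming a hypothetical managed system name p750; the memory and processor values mirror the GUI steps above, while the minimum values (min_mem is set by the GUI steps, but min_proc_units and min_procs are not) are assumptions:

$ mksyscfg -r lpar -m p750 -i "name=VIOS1,profile_name=Normal,lpar_env=vioserver,\
min_mem=1024,desired_mem=4096,max_mem=8192,proc_mode=shared,\
min_proc_units=0.1,desired_proc_units=0.2,max_proc_units=10,\
min_procs=1,desired_procs=2,max_procs=10,sharing_mode=uncap,\
uncap_weight=192,max_virtual_slots=1000"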
3. In the Activate Logical Partition panel, click Advanced.
4. In the Advanced options panel, from the Boot mode list, select SMS, and then click OK.
5. In the Activate Logical Partition panel, click OK to activate the VIOS partition.
6. Open a terminal window to the VIOS partition. Observe the VIOS partition being booted into the SMS Main Menu.
7. Continue with steps 10 - 15 on page 9 in 2.1.1, "Installing a VIOS" on page 8. Then complete the steps in "Accepting the license agreements" on page 9.
The VIOS is ready to be configured for client network and storage service. For client storage, you can extract the worldwide port name (WWPN) from the Fibre Channel adapter interface and give it to the storage area network (SAN) administrator for zoning. You can use the following command to extract the WWPN:
lsdev -dev fcs0 -vpd | grep "Network Address"
Example 3-1 shows the WWPN for Fibre Channel adapter port fcs0. To obtain the WWPN for fcs1, run the command as shown, but replace fcs0 with fcs1.
Example 3-1 WWPN of fcs0 Fibre Channel adapter port
$ lsdev -dev fcs0 -vpd | grep "Network Address"
Network Address.............10000000C99FC3F6
To install a VIOS by using the installios command on the HMC console command-line interface (CLI), complete the following steps (a non-interactive sketch follows at the end of this section):
1. Insert the VIOS media into the HMC DVD drive. (If multiple media are available, insert the first DVD.)
2. Log on to the HMC CLI with an ASCII terminal emulator (SSH to the TCP/IP address of the HMC).
3. At the command prompt, enter installios.
4. Select your system from the list of systems connected to the HMC.
5. Select the VIOS partition on which you are conducting the installation.
6. Select the VIOS partition profile.
7. Press Enter to accept /dev/cdrom as the default source of the media.
8. Enter the VIOS TCP/IP address.
9. Enter the VIOS subnet mask.
10.Enter the VIOS TCP/IP gateway.
11.For the VIOS adapter speed, type auto.
12.For the VIOS adapter duplex, type auto.
13.To not configure the TCP/IP address on the VIOS after installation, type no.
14.Select the open TCP/IP address of the HMC.
Adapters: At least two adapters are shown with their TCP/IP addresses. One address is for the HMC open network. The other address is the private network to the Flexible Service Processor (FSP) port of your system.
15.After the HMC retrieves the Ethernet adapter details based on the VIOS partition profile configuration, select the Ethernet adapter port that is cabled in 3.1, "Setting up a single VIOS using an HMC" on page 18.
16.Press Enter to accept en_US as the language and the locale defaults.
Alternative: If en_US is not your default language and locale, enter the language and locale that you regularly use.
17.In the window that shows the details that you selected, press Enter.
18.Review the License Agreement details. At the end of the License Agreement window, type Y to accept the agreement.
19.If the installation media spans multiple DVDs, change DVDs when prompted. Then type c to continue.
Using the details that you provided, the HMC uploads the software from the installation media to a local file system within the HMC. NIM on Linux (NIMOL) features on the HMC are used to network boot the VIOS partition and network install the VIOS software.
20.Open a terminal window to the VIOS partition.
21.After the VIOS installation is completed and the VIOS partition prompts you to log in, enter the padmin user ID.
22.When prompted, change the password to a secure password.
23.To accept the VIOS software maintenance terms and conditions, type a.
24.To accept the VIOS license agreement, enter the following command:
license -accept
25.To list the physical Fibre Channel adapters on the VIOS, enter the lsnports command. Example 3-2 shows the Fibre Channel adapter ports configured on VIOS1. As explained in 3.1, "Setting up a single VIOS using an HMC" on page 18, the first port (T1) is planned for virtual SCSI. The second port (T2) is planned for virtual Fibre Channel, as explained later in this chapter.
Example 3-2 Fibre Channel adapter port listing on VIOS1
$ lsnports
name  physloc                     fabric tports aports swwpns awwpns
fcs0  U5802.001.0087356-P1-C2-T1  1      64     64     2048   2046
fcs1  U5802.001.0087356-P1-C2-T2  1      64     64     2048   2048
26.For client storage, extract the WWPN from the Fibre Channel adapter interface and give it to the SAN administrator for zoning by using the following command:
lsdev -dev fcsX -vpd | grep "Network Address"
Example 3-3 shows the WWPN for Fibre Channel adapter port fcs0. To obtain the WWPN for fcs1, run the command but replace fcs0 with fcs1.
Example 3-3 WWPN for fcs0 Fibre Channel adapter port
$ lsdev -dev fcs0 -vpd | grep "Network Address"
Network Address.............10000000C99FC3F6
You can now configure the VIOS for client network and storage service.
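If you prefer not to answer the interactive prompts, installios also accepts its parameters as flags. A minimal sketch, assuming the managed system is named p750 (a hypothetical name) and reusing the profile and TCP/IP values from this chapter; verify the flags against your HMC level before use:

$ installios -s p750 -S 255.255.252.0 -p VIOS1 -r Normal \
  -i 172.16.22.15 -g 172.16.20.1 -d /dev/cdrom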
If a NIM server is not available and you want to use NIM to build a PowerVM environment on your system, complete these steps:
1. Build the VIOS partition by using either the DVD or the installios command.
2. Build the first client partition as an AIX NIM server.
3. If you plan to build a second VIOS partition, build the second VIOS by using NIM.
4. Deploy any Linux or AIX client partitions by using NIM.
In Example 3-4 on page 26, ent0 (U78A0.001.DNWHZS4-P1-C2-T1) is the physical Ethernet adapter port that is cabled. The U78A0.001.DNWHZS4-P1-C2 Ethernet adapter is the adapter selected in Figure 3-2 on page 20. Adapter ent4 (U8233.E8B.061AB2P-V1-C2-T1) is the virtual Ethernet adapter shown in Figure 3-4 on page 22.
Virtual Ethernet adapter: For the virtual Ethernet adapter U8233.E8B.061AB2P-V1-C2-T1, the V in V1 indicates that it is a virtual adapter, and C2 indicates that it is a slot with adapter ID 2, as shown in step 14 on page 22.
If you plan to use 802.3ad Link Aggregation, your respective adapters must be cabled and the network switch ports must be configured for 802.3ad Link Aggregation. To create the Link Aggregation adapter, enter the following command:
mkvdev -lnaggr <entX> <entY> -attr mode=8023ad
Alternatively, create the adapter by using the cfgassist command:
a. On a command line, enter cfgassist.
b. Select Devices → Link Aggregation Adapter → Add a Link Aggregation Adapter.
c. In the Target Adapters field, enter the physical network adapters (with spaces between each physical network adapter).
d. In the ATTRIBUTES field, enter mode=8023ad.
List all physical Ethernet adapters and EtherChannel adapters available for creating an SEA:
lsdev -type ent4sea
3. Create the SEA, which bridges the physical adapter and the virtual adapter:
mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1
Where:
ent0  The physical adapter in step 2 (use the EtherChannel adapter if one is available for the SEA configuration).
ent4  The virtual adapter found in step 2.
1     The port VLAN ID of ent4, where you accepted the default Port VLAN ID allocation.
Example 3-5 shows creating the SEA virtual network devices, where:
ent5  An Ethernet network adapter device.
en5   A standard Ethernet network interface where TCP/IP addresses are assigned.
et5   An Institute of Electrical and Electronics Engineers (IEEE) 802.3 Ethernet network interface.
Example 3-5 Creating an SEA interface
$ mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1
ent5 Available
en5
et5
4. Configure the TCP/IP connection for the VIOS with details provided by the network administrator:
mktcpip -hostname vios1 -interface en5 -inetaddr 172.16.22.15 -netmask 255.255.252.0 -gateway 172.16.20.1
Where:
172.16.22.15   The network IP address.
255.255.252.0  The network subnet mask.
172.16.20.1    The network gateway.
Alternatively, configure the connection by using the cfgassist command:
a. On a command line, enter cfgassist.
b. Select VIOS TCP/IP Configuration.
c. Select en5, which is the SEA interface created in step 3. Then press Enter.
d. Enter the TCP/IP details listed previously for the mktcpip command.
Interface and port details: Interface en5 is the SEA created in step 3 on page 27. Alternatively, an additional virtual adapter can be created for the VIOS remote connection, or another physical adapter can be used (must be cabled) for the TCP/IP remote connection. TCP and UDP port 657 must be open between the HMC and the VIOS, which is required for dynamic LPAR (DLPAR; using the Resource Monitoring and Control (RMC) protocol).
To configure and map the disks to the client partition: 1. List any Fibre Channel adapter SCSI protocol devices that are configured on the VIOS: lsdev | grep fscsi In Example 3-6, fscsi0 and fscsi1 are the Fibre Channel adapter SCSI protocol devices configured on VIOS1. Their attributes must be updated to allow for dynamic tracking and fast failover (applicable for a multiple Fibre Channel adapter VIOS).
Example 3-6 List of Fibre Channel adapter SCSI protocol devices on VIOS1
$ lsdev | grep fscsi
fscsi0  Available  FC SCSI I/O Controller Protocol Device
fscsi1  Available  FC SCSI I/O Controller Protocol Device
2. Update the device attributes of the Fibre Channel adapter SCSI protocol devices listed in step 1 to enable dynamic tracking and fast failover:
chdev -dev fscsi0 -attr dyntrk=yes fc_err_recov=fast_fail
chdev -dev fscsi1 -attr dyntrk=yes fc_err_recov=fast_fail
Tip for a busy Fibre Channel adapter SCSI protocol device: If the Fibre Channel adapter SCSI protocol device is busy, append the -perm flag to the command to update the VIOS database only, as shown in the following example:
chdev -dev fscsi0 -attr dyntrk=yes fc_err_recov=fast_fail -perm
The attributes are not applied to the device until the VIOS is rebooted.
Tip for the fast_fail and dyntrk settings: The fast_fail and dyntrk settings are useful in a setup with multiple Fibre Channel adapters or dual VIOS. With the fast_fail setting, the I/O can immediately fail if the adapter detects link events such as a lost link between the Fibre Channel adapter and the SAN switch port. With the dyntrk setting, the VIOS tolerates cabling changes in the SAN.
3. To configure the disks on the VIOS, enter cfgdev.
4. List the disks on the VIOS partition, and see the disk type:
lsdev -type disk
In Example 3-7, VIOS1 lists two internal SAS disks and six DS4800 disks.
Example 3-7 List of disks with the disk type on VIOS1
$ lsdev -type disk
name    status     description
hdisk0  Available  SAS Disk Drive
hdisk1  Available  SAS Disk Drive
hdisk2  Available  MPIO DS4800 Disk
hdisk3  Available  MPIO DS4800 Disk
hdisk4  Available  MPIO DS4800 Disk
hdisk5  Available  MPIO DS4800 Disk
hdisk6  Available  MPIO DS4800 Disk
hdisk7  Available  MPIO DS4800 Disk
5. To confirm the SAN LUN ID on VIOS1, enter the following command for each disk listed in step 4 until the correct disk is found with the LUN ID provided by the SAN administrator:
lsdev -dev hdiskX -attr | grep -i -E "reserve|unique_id"
Example 3-8 shows the hdisk that the SAN administrator assigned. Also, the SCSI reserve policy is set to single_path. You must update this setting so that no SCSI reserve locks are held. The LUN ID is embedded in the unique_id string for hdisk6, beginning with the sixth character.
Example 3-8 Disk attributes of hdisk6
$ lsdev -dev hdisk6 -attr | grep -E "unique_id|reserve"
reserve_policy  single_path                                                Reserve Policy            True
unique_id       3E213600A0B8000114632000092784EC50F0B0F1815 FAStT03IBMfcp  Unique device identifier  False
Additional disk information: Disks using EMC PowerPath drivers, IBM Subsystem Device Driver Path Control Module (SDDPCM) drivers, and IBM Subsystem Device Drivers (SDDs) also have their LUN IDs embedded in the unique_id string. Use their supplied commands to view the LUN IDs in a more readable format. To obtain the disks complete with LUN IDs, see the driver manual. EMC disks are displayed with hdiskpowerX notation, and SDD disks are displayed with a vpathX notation. Use their disk notations with the lsdev command sequence instead of hdisk. Other disk subsystems can use different fields to set their SCSI reserve locks. Use the lsdev command sequence without the pipe to grep, as in the following example:
lsdev -dev sampledisk -attr
6. Deactivate the SCSI reserve lock on the disk, in this case hdisk6:
chdev -dev hdisk6 -attr reserve_policy=no_reserve
Disks using SDDPCM and SDD drivers: Ignore this step if the disks are using SDDPCM and SDD drivers because the SCSI reserve locks are already deactivated. For EMC disks and disks using native MPIO, you must deactivate the SCSI reserve locks. The SCSI reserve lock attribute differs among the disk subsystems. The IBM System Storage SCSI reserve lock attribute is reserve_policy, as shown in Example 3-8. The attribute on the EMC disk subsystem is reserve_lock. If you are unsure of the allowable value to use to deactivate the SCSI reserve lock, use the following command to provide a list of allowable values:
lsdev -dev hdisk6 -range reserve_policy
7. Determine the virtual adapter name of the virtual SCSI adapter created in step 13 on page 22:
lsdev -vpd | grep "Virtual SCSI"
In Example 3-9, the virtual SCSI adapter with server adapter ID C101 is vhost0, to be used in the next step.
Example 3-9 List of virtual SCSI devices
$ lsdev -vpd | grep "Virtual SCSI"
vhost0  U8233.E8B.061AB2P-V1-C101  Virtual SCSI Server Adapter
8. Use the MPIO setup to map a whole LUN to the client operating system (OS) partitions. To map hdisk6 to CLIENT1, enter the following command:
mkvdev -vdev hdisk6 -vadapter vhost0
Where:
hdisk6  The disk found in step 5 on page 28.
vhost0  The virtual server SCSI adapter with adapter ID 101 created for CLIENT1, identified in step 7.
In Example 3-10, the Virtual Target Device (VTD) vtscsi0 is created.
Example 3-10 Creating a disk mapping to a client partition
$ mkvdev -vdev hdisk6 -vadapter vhost0
vtscsi0 Available
9. Check the devices mapped to vhost0:
lsmap -vadapter vhost0
In Example 3-11, the vhost0 virtual SCSI adapter shows one disk mapped, where hdisk6 is mapped to the vtscsi0 device.
Example 3-11 vhost0 disk mapping
$ lsmap -vadapter vhost0
SVSA            Physloc                          Client Partition ID
--------------- -------------------------------- -------------------
vhost0          U8233.E8B.061AB2P-V1-C101        0x0000000a

VTD             vtscsi0
Status          Available
LUN             0x8100000000000000
Backing device  hdisk6
Physloc         U5802.001.0087356-P1-C2-T1-W202200A0B811A662-L5000000000000
Mirrored        false
4. In the Partition Profile panel, enter your profile name. In this example, we enter Normal. Then click Next.
5. In the Processors panel, ensure that the Shared option is selected, and then click Next.
6. In the Processor Settings panel, complete the following steps:
a. For Desired processing units, enter 0.4.
b. For Maximum processing units, enter 10.
c. For Desired virtual processors, enter 4.
d. For Desired maximum processors, enter 10.
e. Select the Uncapped check box.
7. In the Memory Settings panel, complete the following fields:
a. For Minimum Memory, enter 1 GB.
b. For Desired Memory, enter 16 GB.
c. For Maximum Memory, enter 24 GB.
8. In the I/O panel, click Next.
9. In the Virtual Adapters panel, for the Maximum virtual adapters setting, enter 50.
10.To create virtual Ethernet adapters, in the Virtual Adapters panel:
a. Select Actions → Create Virtual Adapter → Ethernet Adapter.
b. In the Create Virtual Ethernet Adapter panel, click OK. The virtual Ethernet adapter is created and displayed in the Virtual Adapters panel.
11.To create the virtual SCSI adapter, in the Virtual Adapters panel:
a. Select Actions → Create Virtual Adapter → SCSI Adapter.
b. In the Create Virtual SCSI Adapter panel, complete the following steps:
i. Select the Only selected client partition can connect check box.
ii. For Adapter, enter 11.
iii. For Server partition, enter 1.
iv. For Server adapter ID, enter 101.
v. Click OK to accept the settings.
The virtual SCSI adapter is created and is displayed in the Virtual Adapters panel.
12.Depending on your VIOS partition setup, choose one of the following options:
- For a single VIOS partition setup, skip to step 13.
- For a dual VIOS partition setup, create an additional virtual SCSI adapter to map to the VIOS2 virtual server SCSI adapter. To begin, select Actions → Create Virtual Adapter → SCSI Adapter. Then, in the next panel, complete the following steps:
i. Select the Only selected client partition can connect check box.
ii. For Adapter, enter 21.
iii. For Server partition, enter 2.
iv. For Server adapter ID, enter 101.
v. Click OK to accept the settings.
The virtual SCSI adapter is created and is displayed in the Virtual Adapters panel. 13.In the Virtual Adapter panel, which shows the virtual adapters you created, click Next. 14.For the remaining panels, click Next until you reach the Profile Summary panel. 15.In the Profile Summary panel, click Finish. 16.Click your managed system to view the partition profile you created.
iii. Select Actions → Create Virtual Adapter → Ethernet Adapter.
iv. In the Create Virtual Ethernet Adapter panel, change the Port Virtual Ethernet field to 99. Then click OK.
The virtual Ethernet adapter is created with Adapter ID 3 and is displayed in the Virtual Adapters panel, together with the bridging virtual Ethernet adapter with Adapter ID 2 and the server virtual SCSI adapter with Adapter ID 101 (Figure 3-5).
v. Click OK to dynamically add the virtual Ethernet adapter.
2. To save the DLPAR updates to VIOS1 to its profile, click the button at the end of the client partition, and select Configuration → Save Current Configuration.
DLPAR failure: DLPAR relies on RMC connectivity between the HMC and VIOS1. If DLPAR fails, use steps a - h on page 40 as a reference to create the virtual Ethernet adapter for the SEA failover.
3. In the Save Partition Configuration panel, click OK. Go to step 5 to create the VIOS2 partition profile.
4. To create the VIOS1 partition profile, follow the steps in 3.1.1, Creating a VIOS partition profile on page 18.
5. To create the VIOS2 partition profile, follow the steps in 3.1.1, Creating a VIOS partition profile on page 18.
Important changes to VIOS2: VIOS2 requires the following changes:
- Ensure that the priority of the virtual Ethernet adapters used for bridging is different. By default, VIOS1 is created with a priority of 1. For VIOS2, when the virtual Ethernet adapter used for bridging is created in step 11 on page 21, set the priority to 2.
- For VIOS1 and VIOS2, ensure that the virtual Ethernet adapters used for SEA failover in step 12 on page 21 are created with the same Port VLAN ID. This step is essential for inter-VIOS communication.
- For VIOS2, ensure that the virtual SCSI adapter in step 13 on page 22 is created with a different client adapter ID than VIOS1 by using the following settings:
  For Adapter, enter 101.
  For Client partition, enter 10.
  For Client adapter ID, enter 22.
Requirements for using virtual Ethernet adapters as SEA failover adapters: For virtual Ethernet adapters to be used as SEA failover adapters, keep in mind the following requirements:
- Ensure that the Port Virtual Ethernet ID (also known as the VLAN ID) is consistent for VIOS1 and VIOS2.
- Ensure that the Port Virtual Ethernet ID is not a known VLAN ID in the network.
- Ensure that the virtual Ethernet adapter is not configured for IEEE 802.1Q.
- Ensure that the virtual Ethernet adapter is not bridged to a physical adapter.
VIOS1 and VIOS2 are now displayed in your system server listing. Their partition profiles are ready to use for the installation process.
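After the VIOS are installed, you can verify these requirements from the VIOS command line before creating the SEA. The following checks are a sketch, assuming the failover adapter is ent5 on both VIOS1 and VIOS2 (substitute your own device names):
$ entstat -all ent5 | grep "Port VLAN ID"   # must report the same VLAN ID on VIOS1 and VIOS2
$ entstat -all ent5 | grep "VLAN Tag IDs"   # should report None (no IEEE 802.1Q tagging)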
4. Query the ent device that is the SEA:
lsdev -type sea
5. Update the SEA to add the SEA failover functions:
chdev -dev ent5 -attr ctl_chan=ent6 ha_mode=auto
Where:
ent5   The SEA in step 4.
ent6   The SEA failover virtual Ethernet adapter shown in Example 3-12.
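To confirm that the update took effect, you can list the SEA attributes. This check is a minimal sketch, assuming the SEA is ent5 as in step 5:
$ lsdev -dev ent5 -attr | grep -E "ctl_chan|ha_mode"   # expect ctl_chan=ent6 and ha_mode=auto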
In Example 3-14, note the following explanation:
ent0 (U78A0.001.DNWHZS4-P1-C2-T1)   The physical Ethernet adapter port that is cabled.
U78A0.001.DNWHZS4-P1-C2             The Ethernet adapter selected in Figure 3-2 on page 20.
ent4 (U8233.E8B.061AB2P-V1-C2-T1)   The virtual Ethernet adapter shown in Figure 3-5 on page 33.
ent5 (U8233.E8B.061AB2P-V2-C3-T1)   The virtual Ethernet adapter also shown in Figure 3-5 on page 33.
If you plan to use 802.3ad Link Aggregation, the adapters must be cabled, and the network switch ports must be configured for 802.3ad Link Aggregation. To create the Link Aggregation adapter, enter the following command:
mkvdev -lnagg <entX> <entY> -attr mode=8023ad
Alternatively, use the cfgassist command as explained in the following steps:
a. On the command line, enter cfgassist.
b. Select Devices → Link Aggregation Adapter → Add a Link Aggregation Adapter.
c. In the Target Adapters field, enter the physical network adapters (with spaces between each physical network adapter).
d. In the ATTRIBUTES field, enter mode=8023ad.
2. List all physical Ethernet adapters and EtherChannel adapters that are available for creating an SEA:
lsdev -type ent4sea
3. Create the SEA, which bridges the physical adapter and the virtual adapter:
mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 -attr ctl_chan=ent5 ha_mode=auto
Where:
ent0   The physical adapter in step 2 on page 35. (Use an EtherChannel adapter if one was created for the SEA configuration.)
ent4   The bridging virtual adapter in step 2 on page 35.
1      The port VLAN ID of ent4.
ent5   The SEA failover virtual adapter in step 2 on page 35.
Example 3-15 shows the SEA virtual network devices that are created, where:
ent6   An Ethernet network adapter device.
en6    A standard Ethernet network interface where TCP/IP addresses are assigned.
et6    An IEEE 802.3 Ethernet network interface.
Example 3-15 Creating an SEA interface
$ mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 -attr ctl_chan=ent5 ha_mode=auto
ent6 Available
en6
et6
4. Configure the TCP/IP connection for the VIOS with details provided by the network administrator:
mktcpip -hostname vios1 -interface en6 -inetaddr 172.16.22.15 -netmask 255.255.252.0 -gateway 172.16.20.1
Alternatively, you can use the cfgassist command:
a. Enter cfgassist.
b. Select VIOS TCP/IP Configuration.
c. Select en6, which is the SEA interface created in step 3, and then press Enter.
d. Enter the TCP/IP details shown in the following list.
Regardless of the method you choose, in this example, we use the following details:
Network IP address: 172.16.22.15
Network subnet: 255.255.252.0
Network gateway: 172.16.20.1
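After mktcpip completes, you can confirm the interface address and the default route from the VIOS shell. These commands are a sketch using standard VIOS lstcpip options; the output layout varies by VIOS level:
$ lstcpip -num        # show the configured IP addresses in numeric form
$ lstcpip -routtable  # confirm the default gateway (172.16.20.1 in this example)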
To configure and map the disks to the target client partition:
1. List any Fibre Channel adapter SCSI protocol devices that are configured on the VIOS:
lsdev | grep fscsi
In Example 3-16, fscsi0 and fscsi1 are the Fibre Channel adapter SCSI protocol devices configured on VIOS1. Their attributes are updated to allow for dynamic tracking and fast failover (applicable for a VIOS with multiple Fibre Channel adapters).
Example 3-16 List of Fibre Channel adapter SCSI protocol devices on VIOS1
$ lsdev | grep fscsi
fscsi0 Available FC SCSI I/O Controller Protocol Device
fscsi1 Available FC SCSI I/O Controller Protocol Device
2. Update the Fibre Channel adapter SCSI protocol device attributes listed in step 1 to enable dynamic tracking and fast failover:
chdev -dev fscsi0 -attr dyntrk=yes fc_err_recov=fast_fail
chdev -dev fscsi1 -attr dyntrk=yes fc_err_recov=fast_fail
Busy SCSI protocol device: If the SCSI protocol device of the Fibre Channel adapter is busy, append the -perm flag to the command, as shown in the following example, to update the VIOS database only. The attributes are not applied to the device until the VIOS is rebooted.
chdev -dev fscsi0 -attr dyntrk=yes fc_err_recov=fast_fail -perm
3. To configure the disks on the VIOS, enter cfgdev.
4. List the disks on the VIOS partition and show the disk type:
lsdev -type disk
In Example 3-17, VIOS1 lists two internal SAS disks and six DS4800 disks.
Example 3-17 List disks with their type on VIOS1
$ lsdev -type disk
name    status     description
hdisk0  Available  SAS Disk Drive
hdisk1  Available  SAS Disk Drive
hdisk2  Available  MPIO DS4800 Disk
hdisk3  Available  MPIO DS4800 Disk
hdisk4  Available  MPIO DS4800 Disk
hdisk5  Available  MPIO DS4800 Disk
hdisk6  Available  MPIO DS4800 Disk
hdisk7  Available  MPIO DS4800 Disk
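After the chdev updates in step 2, you can confirm that the attributes are active (or staged with -perm) at any time. A minimal check, assuming fscsi0:
$ lsdev -dev fscsi0 -attr | grep -E "dyntrk|fc_err_recov"   # expect dyntrk=yes and fc_err_recov=fast_fail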
5. To confirm the SAN LUN ID on VIOS1, enter the following command for each disk listed in step 4 until the correct disk is found with the LUN ID provided by the SAN administrator:
lsdev -dev hdiskX -attr | grep -i -E "reserve|unique_id"
Example 3-18 shows the hdisk that the SAN administrator assigned. Also, the SCSI reserve policy was set to single_path, which must be updated so that no SCSI reserve locks are held. The LUN ID is embedded in the unique_id string for hdisk6.
Example 3-18 Disk attributes of hdisk6
$ lsdev -dev hdisk6 -attr | grep -E "unique_id|reserve"
reserve_policy single_path                                               Reserve Policy           True
unique_id      3E213600A0B8000114632000092784EC50F0B0F1815 FAStT03IBMfcp Unique device identifier False
Additional information: Disks using EMC PowerPath, IBM SDDPCM, and IBM SDD drivers also have their LUN IDs embedded in the unique_id string. Use their supplied commands to show the LUN IDs in a more readable format. For the commands to obtain the disks complete with LUN IDs, see their manuals. EMC disks are displayed with the hdiskpowerX notation, and SDD disks are displayed with the vpathX notation. Use these disk notations with the lsdev command sequence instead of hdisk. Other disk subsystems might use different fields to set their SCSI reserve locks. Use the lsdev command sequence without the pipe to grep, as in the following example:
lsdev -dev sampledisk -attr
6. For users with EMC disks and disks using native MPIO: Deactivate the SCSI reserve lock on the disk, which is hdisk6 in this example:
chdev -dev hdisk6 -attr reserve_policy=no_reserve
Disks using SDDPCM and SDD drivers: Ignore this step if the disks are using SDDPCM and SDD drivers because the SCSI reserve locks are already deactivated.
SCSI reserve lock: The SCSI reserve lock attribute differs among disk subsystems. The IBM System Storage SCSI reserve lock attribute is reserve_policy, as displayed in Example 3-18. The attribute on the EMC disk subsystem is reserve_lock. If you are unsure of the allowable value to use to deactivate the SCSI reserve lock, the following command provides a list of allowable values:
lsdev -dev hdisk6 -range reserve_policy
7. Determine the virtual adapter name of the virtual SCSI adapter in step 13 on page 22:
lsdev -vpd | grep "Virtual SCSI"
In Example 3-19, the virtual SCSI adapter with server adapter ID C101 is vhost0, to use in the next step.
Example 3-19 List of virtual SCSI devices
$ lsdev -vpd | grep "Virtual SCSI"
vhost0 U8233.E8B.061AB2P-V1-C101 Virtual SCSI Server Adapter
The MPIO setup is used to map whole LUNs to client OS partitions.
8. To map hdisk6 to CLIENT1, enter:
mkvdev -vdev hdisk6 -vadapter vhost0
Where:
hdisk6   The disk found in step 5 on page 28.
vhost0   The virtual server SCSI adapter found in step 6 on page 29.
9. To check the devices mapped to vhost0, enter:
lsmap -vadapter vhost0
In Example 3-21, the vhost0 virtual SCSI adapter shows one disk mapped, where hdisk6 is mapped to the vtscsi0 virtual target device.
Example 3-21 Disk mapping for vhost0
$ lsmap -vadapter vhost0
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0          U8233.E8B.061AB2P-V1-C101                    0x0000000a

VTD                   vtscsi0
Status                Available
LUN                   0x8100000000000000
Backing device        hdisk6
Physloc               U5802.001.0087356-P1-C2-T1-W202200A0B811A662-L5000000000000
Mirrored              false
10. Repeat step 1 on page 35 through step 9 to configure the VIOS2 partition. For step 1, ensure that you log on to the VIOS2 terminal window.
3.2.4 Creating a logical partition profile for the client operating system
Create the client partition profile as explained in 3.1.4, Creating a logical partition profile for a client operating system on page 30. Alternatively, if the client partition profile exists and you want to configure an additional virtual SCSI adapter, you can choose one of the following methods:
Add the virtual SCSI adapter by using DLPAR. Then save the current configuration, overwriting the current profile. (The client partition must be running and have RMC connectivity to the HMC.)
a. Select your client partition.
b. Click the button at the end of your client partition, and then select Dynamic Logical Partitioning → Virtual Adapters.
c. Select Actions → Create Virtual Adapter → SCSI Adapter.
d. In the Create Virtual SCSI Adapter panel, complete the following steps:
i. Select the Only selected client partition can connect check box.
ii. For Adapter, type 22.
iii. For Server partition, type 2.
iv. For Server adapter ID, type 101.
v. Click OK to accept the settings.
e. Click OK to dynamically add the virtual SCSI adapter.
f. Click the button at the end of your client partition, and then select Configuration → Save Current Configuration.
g. In the Save Partition Configuration panel, click OK.
h. Click Yes to confirm the save.
Update the client partition profile to add the additional virtual SCSI adapter. Then shut down the client partition (if it is running), and activate the client partition.
Important: Shutting down the client partition and then activating it causes the client partition to re-read its profile. A partition restart does not re-read the partition profile.
a. Click the button at the end of your client partition, and then select Configuration → Manage Profiles.
b. Click the profile to update.
c. Click the Virtual Adapters tab.
d. Select Actions → Create Virtual Adapter → SCSI Adapter.
e. In the Create Virtual SCSI Adapter panel, complete the following steps:
i. Select the Only selected client partition can connect check box.
ii. In the Adapter field, type 22.
iii. In the Server partition field, type 2.
iv. In the Server adapter ID field, type 101.
v. Click OK to accept the settings.
f. Click OK to save the profile.
g. Run the shutdown command on the client partition.
h. After the client partition is displayed in the Not Activated state, activate the client partition.
A server virtual Fibre Channel adapter has a one-to-one relationship with a client virtual Fibre Channel adapter. For a single VIOS setup, the VIOS is configured with one virtual Fibre Channel adapter, and the client partition is configured with one virtual Fibre Channel adapter mapped to each other. For a dual VIOS setup, each VIOS is configured with one virtual Fibre Channel adapter, and the client partition is configured with two virtual Fibre Channel adapters mapped to each VIOS virtual Fibre Channel adapter. You can add the virtual Fibre Channel adapters to existing partition profiles similarly to the virtual SCSI adapters as explained in 3.2.4, Creating a logical partition profile for the client operating system on page 39. Use the DLPAR method as explained previously to add the server virtual Fibre Channel adapter to the VIOS and to add the corresponding client virtual Fibre Channel adapter to the client partition.
3. Select Actions → Create Virtual Adapter → Fibre Channel Adapter.
4. In the Create Virtual Fibre Channel Adapter panel, complete the following steps:
a. In the Adapter field, type 102.
b. In the Client partition field, type 10.
c. In the Client adapter ID field, type 12.
d. Click OK to accept the settings.
5. Click OK to dynamically add the virtual Fibre Channel adapter.
6. Click the button at the end of your client partition, and then select Configuration → Save Current Configuration (Figure 3-7).
7. In the Save Partition Configuration panel, click OK.
8. Click Yes to confirm the save, overwriting the existing profile.
4. In the Create Virtual Fibre Channel Adapter panel, complete the following steps:
a. In the Adapter field, type 12.
b. In the Server partition field, type 1.
c. In the Server adapter ID field, type 102.
d. Click OK to accept the settings.
5. In the Virtual Adapters panel, which shows the virtual Fibre Channel adapters that were created (Figure 3-8), click OK to dynamically add the virtual Fibre Channel adapter.
6. Click the button at the end of your client partition, and then select Configuration → Save Current Configuration.
7. In the Save Partition Configuration panel, click OK.
8. Click Yes to confirm the save.
3. To configure the virtual Fibre Channel adapter that was added by using DLPAR in step 4 on page 42, enter the cfgdev command.
4. List the virtual Fibre Channel adapters:
lsdev -vpd | grep vfchost
In Example 3-23, one virtual Fibre Channel adapter, called vfchost0, is listed with an adapter slot ID of 102 (C102), created in step 4 on page 42.
Example 3-23 List of virtual Fibre Channel adapters on the VIOS
$ lsdev -vpd | grep vfchost
vfchost0 U8233.E8B.061AB2P-V1-C102 Virtual FC Server Adapter
5. Map the client virtual Fibre Channel adapter to the physical Fibre Channel adapter that is zoned for NPIV:
vfcmap -vadapter vfchost0 -fcp fcs1
Tip: You can map multiple client virtual Fibre Channel adapters to a physical Fibre Channel adapter port. Up to 64 client virtual Fibre Channel adapters can be active at one time per physical Fibre Channel adapter.
6. Verify the virtual Fibre Channel mapping for vfchost0:
lsmap -vadapter vfchost0 -npiv
Alternatively, list all virtual Fibre Channel mappings on the VIOS:
lsmap -all -npiv
7. If you have a dual VIOS setup, repeat step 1 on page 43 through step 5 for VIOS2. Ensure that the client partition adapter IDs are unique.
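Before you run vfcmap in step 5, you can confirm which physical ports are NPIV-capable and whether the attached fabric supports NPIV. The lsnports command is standard on the VIOS; this usage is a sketch:
$ lsnports   # fabric=1 means the port is attached to an NPIV-capable fabric; aports shows the free virtual port capacity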
Chapter 4. Configuring PowerVM with the IBM Systems Director Management Console
Before you begin the tasks in this chapter, you must complete the following tasks:
1. Verify the prerequisites in Chapter 1, Introduction to PowerVM on page 1.
2. Verify that the SDMC is installed and configured.
3. Verify that the host is already discovered and visible to the SDMC.
4. Verify that your host is in a Started state.
To create the virtual server for VIOS1:
1. In the Name panel, complete the following steps:
a. In the Virtual Server name field, enter a name. We enter VIOS1 in this example.
b. For Environment, select VIOS.
c. For the other fields, accept the default values.
d. Click Next.
2. In the Memory panel, select the Dedicated for Memory Mode check box (if shown), and enter an appropriate amount of memory in the Assigned memory field. Use 4 GB of memory.
Amount of memory: The amount of memory your VIOS needs depends on the functions of VIOS that you will use. Start with 4 GB of memory, and then periodically monitor the memory usage on VIOS.
Click Next.
3. In the Processor panel, complete these steps:
a. Select Shared for Processing Mode.
b. In the Assigned Processors field, type 1 for a single shared processor.
Assigned Processors field: Start with one shared processor, and then periodically monitor the processor usage on VIOS.
c. Click Next.
Virtual Ethernet adapters: By default, the wizard creates two virtual Ethernet adapters. The first virtual Ethernet adapter uses adapter ID 2 and VLAN ID 1. The second virtual Ethernet adapter uses adapter ID 3 and VLAN ID 99. The second virtual Ethernet adapter is used for a control channel between two VIOSs in dual VIOS configurations and the Shared Ethernet Adapter failover configuration. For more information about the control channel and dual VIOS configuration for virtual Ethernet, see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.
4. In the Ethernet panel, complete the following steps:
a. Expand Virtual Ethernet.
b. Select the check box to the left of the first adapter (ID 2), and then click Edit.
c. In the Virtual Ethernet - Modify Adapter panel, complete these steps:
i. Select the Use this adapter for Ethernet bridging check box.
ii. In the Priority field, enter 1. For more information about the priorities, see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.
iii. Click OK to confirm the changes to the first Ethernet adapter.
d. Back in the Ethernet panel, click Next.
5. In the Virtual Storage Adapter panel, click Next. As the client virtual servers are added and assigned storage, the console automatically creates the virtual SCSI or virtual Fibre Channel server adapters.
6. In the Physical I/O Adapters panel, select the check boxes to the left of the Location Code and Description of the adapter that you need. These physical adapters and controllers are used later to virtualize devices to the virtual server for the client OS. To use all functions as explained in this paper, you must make the following selections:
- One SAS or SCSI disk controller (controller for internal disk drives)
- One Ethernet adapter (Ethernet adapter for connection to a LAN)
- One Fibre Channel adapter (Fibre Channel adapter for connection to a SAN and a virtual Fibre Channel configuration)
In our case, we selected the following physical adapters:
U78A0.001.DNWKF81-P1-T9      RAID Controller
U5802.001.RCH8497-P1-C7      Quad 10/100/1000 Base-TX PCI-Express Adapter
U5802.001.RCH8497-P1-C3      Fibre Channel Serial Bus
The RAID Controller that we selected also has the physical CD or DVD drive connected.
Verifying the adapters: Check the physical location codes of your adapters, and verify that you are using the correct adapters for your virtual server. Sometimes the description can be misleading. For example, the PCI-to-PCI bridge can be the Ethernet adapter.
Click Next.
7. In the Summary panel, verify the information, and then click Finish to confirm the creation of the virtual server.
To create the virtual server for VIOS2, follow step 1 on page 47 through step 7, but change the following values:
- In step 1, change the name for the virtual server to VIOS2. For Environment, make sure that you choose VIOS.
- In step 4, change the priority of the virtual Ethernet adapter to 2. Also, select the Use this adapter for Ethernet bridging check box.
- In step 6, select the different adapters and controller. We selected the following adapters:
U5802.001.RCH8497-P1-C2      PCI-E SAS Controller
U5802.001.RCH8497-P1-C6      Quad 10/100/1000 Base-TX PCI-Express Adapter
U5802.001.RCH8497-P1-C5      Fibre Channel Serial Bus
For a more detailed description of the options when creating a virtual server, see IBM Systems Director Management Console: Introduction and Overview, SG24-7860.
Select the correct Ethernet port from the listed ports: the port that is to be used for the LAN connection and that has the Ethernet cable plugged in. The interface device name of this physical adapter port is used in the next step. In this example, it is en0.
2. Enter the cfgassist command, and select VIOS TCP/IP Configuration. Then select the appropriate interface device name from the previous step.
3. In the VIOS TCP/IP Configuration panel, enter the TCP/IP configuration values for VIOS connectivity. For these values, consult your network administrator. Figure 2-2 on page 10 shows the TCP/IP configuration values for this example. After entering the needed values for TCP/IP configuration, press Enter.
4. When you see the output Command: OK, press F10. Alternatively, press Esc+0.
To configure TCP/IP on VIOS2, follow the same steps on the VIOS2 console, but in step 3, change the IP configuration.
SSH: From this point onward, you can use Secure Shell (SSH) to connect to VIOS1 and VIOS2.
The SDMC automatically creates the SEAs on both VIOS1 and VIOS2. The SDMC also configures the control channel as a part of this step. The virtual Ethernet adapter with the highest VLAN ID is used for the SEA control channel. 3. In the Virtual Network Management panel, confirm the created SEAs. Two SEAs are created, each with a different priority as shown in Figure 4-3.
In our case, we used the Fibre Channel port fcs0 for LUN masking of the SAN LUNs for the installation device of the client OS.
2. Find the worldwide port name (WWPN) address for the fcs0 device:
lsdev -dev fcs0 -vpd | grep Address
Your output is similar to the following example:
Network Address.............10000000C9E3AB56
3. Repeat steps 1 and 2 on VIOS2.
4. Provide the location codes and the WWPN addresses from the previous steps for both VIOS1 and VIOS2 to your storage administrator. At this time, your storage administrator provisions your SAN LUN. The storage administrator performs the LUN masking so that VIOS1 and VIOS2 both see the same SAN LUN. The storage administrator also gives you the SAN LUN ID of the disk for client OS installation. For this exercise, the SAN administrator allocated a disk with LUN ID 60:0a:0b:80:00:11:46:32:00:00:92:75:4e:c5:0e:78 and a size of 25 GB.
5. After the storage administrator provisions the storage, run the cfgdev command on the VIOS1 and VIOS2 command line to discover any new devices.
Before the SAN LUN can be virtualized and provisioned to the virtual server for the client OS, you must change the behavior of locking the SCSI reservation of the physical disk (here, the SAN LUN). You do not want the VIOS to lock the SCSI reservation (to be prepared for a dual VIOS configuration). To change the behavior of locking SCSI reservations, complete the following steps on both VIOS1 and VIOS2:
1. Log on to the VIOS console, and list the physical disks attached to your VIOS:
lsdev -type disk
Example 4-2 shows the listing. In the output, you can see four internal disks (SAS hdisk0 to hdisk3) and six external disks from the IBM DS4800 (MPIO hdisk4 to hdisk9).
Example 4-2 Listing of physical disk devices on VIOS
$ lsdev -type disk
hdisk0 Available SAS Disk Drive
hdisk1 Available SAS Disk Drive
hdisk2 Available SAS Disk Drive
hdisk3 Available SAS Disk Drive
hdisk4 Available IBM MPIO DS4800 Array Disk
hdisk5 Available IBM MPIO DS4800 Array Disk
hdisk6 Available IBM MPIO DS4800 Array Disk
hdisk7 Available IBM MPIO DS4800 Array Disk
hdisk8 Available IBM MPIO DS4800 Array Disk
hdisk9 Available IBM MPIO DS4800 Array Disk
2. Confirm the SAN LUN ID on the VIOS:
lsdev -dev hdisk4 -attr | grep unique_id
Example 4-3 shows the output with the LUN ID highlighted.
Example 4-3 Listing disk LUN ID on VIOS
$ lsdev -dev hdisk4 -attr | grep unique_id
unique_id 3E213600A0B8000114632000092754EC50E78... FAStT03IBMfcp PCM Unique device identifier False
Run the same lsdev command again to find the physical disk with the correct LUN ID that you received from the storage administrator. The device name for the provisioned external disk used in the following steps is hdisk4.
3. Change the behavior of locking SCSI reservations:
chdev -dev hdisk4 -attr reserve_policy=no_reserve
Complete steps 1 - 3 on both VIOS1 and VIOS2. Make a note of the correct device names of the SAN LUN on both VIOS1 and VIOS2. Use these device names in Chapter 5, Advanced configuration on page 79, to virtualize them to the virtual server for the client OS.
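You can verify the new setting on each VIOS before you continue. A minimal check, assuming the SAN LUN is hdisk4 on both VIOS1 and VIOS2:
$ lsdev -dev hdisk4 -attr reserve_policy   # expect the value no_reserve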
Device names: You might notice that the names of the devices on VIOS1 are not the same as the device names on VIOS2. The reason for this difference might be that VIOS2 has a different number of internal disks.
The reserve_policy attribute name: If you use a storage subsystem from a different vendor, the reserve_policy attribute can have a different name. For example, if you use EMC PowerPath drivers to connect LUNs from the EMC storage subsystem, you must use the reserve_lock attribute and the value no.
Same disk configuration: Make the disk configuration of both VIOS1 and VIOS2 the same. This approach makes management of dual VIOS configurations easier and less prone to administrator mistakes.
5. If the Storage selection panel opens, select Yes, Automatically manage the virtual storage adapters for this Virtual Server. You can provision the virtual disks, physical volumes, or virtualize the Fibre Channel adapters here. Select the check box to the left of the physical volumes. Click Next.
Virtual Fibre Channel adapter configuration: The virtual Fibre Channel adapters will be configured in Chapter 5, Advanced configuration on page 79.
6. In the Physical Volumes part of the panel, select the physical disk to virtualize to the virtual server for the client OS. These disks are the same disks on which you changed the SCSI reservation policy in 4.1.5, Configuring storage devices on page 51. You can also check the Physical Location Code column to find the correct physical disk.
Important: Make sure that you select the appropriate physical disk on both VIOS1 and VIOS2.
7. In the Optical devices panel, on the Physical Optical Devices tab, select the check box to the left of cd0 to virtualize the physical DVD drive to the virtual server for the client OS. Click Next.
8. In the Physical I/O Adapters panel, do not select any physical I/O adapters. The client OS is installed on the disk that is connected by using the virtual SCSI adapter, and all other devices are virtualized.
9. In the Summary panel, verify the information, and then click Finish to confirm the creation of the virtual server.
Before you create virtual Fibre Channel adapters for the virtual server for the client OS, complete these steps:
1. Log in to the SDMC environment.
2. From the home page, locate the host that contains the VirtServer1 virtual server. Click the host name.
3. In the Resource Explorer window that opens, select the check box to the left of the virtual server (VirtServer1), and then select Actions → System Configuration → Manage Virtual Server.
To create virtual Fibre Channel adapters for the virtual server for the client OS:
1. From the left pane, click Storage Devices.
2. Under Fibre Channel, click Add.
3. In the Add Fibre Channel panel, which shows the physical Fibre Channel adapters that support NPIV, select the physical Fibre Channel adapter that you want to virtualize to the virtual server for the client OS. In our case, we selected the physical Fibre Channel adapter with device name fcs1 for both VIOS1 and VIOS2. Click OK.
Physical Fibre Channel adapter fcs0: The physical Fibre Channel adapter with device name fcs0 was already used in 4.1.5, Configuring storage devices on page 51, to provision the SAN LUN.
4. Click Apply.
Now update the configuration profiles of VirtServer1, VIOS1, and VIOS2. To update the profile on the VirtServer1 virtual server, log on to the SDMC environment. Then complete the following steps:
1. From the home page, locate the host that contains the VirtServer1 virtual server, and then click the name of the host.
2. In the Resource Explorer window that opens, select the check box to the left of the virtual server VirtServer1, and then select Actions → System Configuration → Save Current Configuration.
3. Select the Overwrite existing profile check box, and then select the OriginalProfile profile.
4. Click OK.
5. In the Save Profile window, click Yes.
Repeat steps 1 - 5 for VIOS1 and VIOS2 to update their configuration profiles.
You now have a running virtual server with the following virtualized configurations:
- One virtual processor from the shared processor pool (can be adjusted dynamically to meet your needs)
- 4 GB of memory (can be adjusted dynamically to meet your needs)
- One virtual Ethernet adapter with high-availability failover mode
- Two virtual SCSI adapters for the OS disk. This disk uses two paths: one path to VIOS1 and a second path to VIOS2.
- Two virtual Fibre Channel adapters, likely for connecting the SAN LUNs for data. Each virtual Fibre Channel adapter is provided by a separate VIOS.
Example 4-4 shows devices from the virtual server running the AIX OS.
Example 4-4 List of the virtual devices from AIX
# lsdev -Cc adapter
ent0   Available        Virtual I/O Ethernet Adapter (l-lan)
fcs0   Available C5-T1  Virtual Fibre Channel Client Adapter
fcs1   Available C6-T1  Virtual Fibre Channel Client Adapter
vsa0   Available        LPAR Virtual Serial Adapter
vscsi0 Available        Virtual SCSI Client Adapter
vscsi1 Available        Virtual SCSI Client Adapter
# lsdev -Cc disk
hdisk0 Available        Virtual SCSI Disk Drive
# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1
In this section, you use commands that show the locations of the hardware cards and adapters. These location codes look similar to this example: DNWKGPB-P1-C4-T1. In the location code, the number after the C character indicates the card slot, and the number after the T character is the number of the port on the card. The portion preceding the P character is the serial number of the drawer. In this example, DNWKGPB-P1-C4-T1 refers to the first port on a card in slot C4 of the drawer with serial number DNWKGPB. This location code provides the information that you need to find the correct card and to plug cables into the correct ports.
Drawer serial number: You can find the serial number of the drawer on the front of the drawer under the plastic cover.
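You can also work backward from a location code to the device that occupies that slot. A minimal sketch, assuming slot C4 and the drawer serial number from the example above:
$ lsdev -vpd | grep DNWKGPB-P1-C4   # list the devices whose location code places them in slot C4 of this drawer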
Figure 4-4 Accessing the option to create the virtual server in the SDMC
To create the virtual server for VIOS1:
1. In the Name panel, complete the following steps:
a. In the Virtual Server name field, enter a name. In this example, we enter VIOS1.
b. For Virtual server ID, enter an ID. We follow the naming convention shown in 1.2, Planning on page 3. The default value is the next partition number that is available.
c. For Environment, select VIOS.
d. Click Next.
2. In the Memory panel, complete these steps:
a. Select the Dedicated for Memory Mode check box (if present).
b. In the Assigned memory field, enter 4 GB of memory or an appropriate amount of memory.
Amount of memory: The amount of memory your VIOS needs depends on the functions of VIOS that you will use. Start with 4 GB of memory, and then periodically monitor the memory usage on VIOS.
c. Click Next.
3. In the Processor panel, complete the following steps:
a. Select Shared for Processing Mode.
b. In the Assigned Processors field, type 1 for a single shared processor (from the Shared Processor Pool DefaultPool(0)).
c. Click Next.
Assigned Uncapped Processing Units: In the background, the value of Assigned Uncapped Processing Units is 0.1 by default. Start with one shared processor, and then periodically monitor the processor usage on the VIOS.
4. In the Ethernet panel, complete the following steps:
a. Expand Virtual Ethernet. By default, the wizard creates two virtual Ethernet adapters. The first virtual Ethernet adapter uses adapter ID 2 and VLAN ID 1. The second virtual Ethernet adapter uses adapter ID 3 and VLAN ID 99. The second virtual Ethernet adapter is used as a control channel between two VIOSs in a dual VIOS configuration and is not used in a single VIOS configuration. For more information about a control channel and dual VIOS configuration for virtual Ethernet, see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.
b. Select the check box to the left of the first adapter (ID 2), and then click Edit.
c. In the Virtual Ethernet - Modify Adapter panel, complete these steps:
i. Select the Use this adapter for Ethernet bridging check box.
ii. In the Priority field, type 1. For information about the priorities, see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.
iii. Click OK to confirm the changes to the first Ethernet adapter.
d. Back in the Ethernet panel, click Next.
5. In the Virtual Storage Adapter panel, complete the following steps:
a. In the Maximum number of virtual adapters field, type 200.
b. Click Create Adapter.
c. In the Create Virtual Adapter panel, complete the following steps:
i. Adapter ID: 101
ii. Adapter type: SCSI
iii. Connecting Virtual Server ID: 10
iv. Connecting adapter ID: 11
For the numbers for the virtual adapters and the virtual server ID, see the naming convention in 1.2, Planning on page 3.
v. Click OK.
d. Back in the Virtual Storage Adapter panel, click Next.
6. In the Physical I/O Adapters panel, select the check boxes to the left of the Location Code and Description of the adapter that you need. To use all the functions as explained in this paper, you must select the following components:
- One SAS or SCSI disk controller (controller for internal disk drives)
- One Ethernet adapter (Ethernet adapter for connection to a LAN)
- One Fibre Channel adapter (Fibre Channel adapter for connection to a SAN and a virtual Fibre Channel configuration)
Tip: If the busy icon called Working seems to hang, click another tab, and then come back to the previous window.
In our case, we selected the following physical adapters:
U78A0.001.DNWKF81-P1-T9      RAID Controller
U5802.001.RCH8497-P1-C7      Quad 10/100/1000 Base-TX PCI-Express Adapter
U5802.001.RCH8497-P1-C3      Fibre Channel Serial Bus
The RAID controller that is selected also has the physical CD or DVD drive connected.
Physical location codes: Check the physical location codes of your adapters, and be sure to use the correct adapters for your virtual server. Sometimes the description can be misleading. For example, the PCI-to-PCI bridge can be the Ethernet adapter.
In the Physical I/O Adapters panel, click Next.
7. In the Summary panel, verify the information, and then click Finish to confirm the creation of the virtual server.
For a more detailed description of the options available from the virtual server creation wizard, see IBM Systems Director Management Console: Introduction and Overview, SG24-7860.
In the terminal console window, install VIOS:
1. If you are prompted with options to set this console as the active console, press the appropriate key indicated in the panel.
2. Type 5 for Select Boot Options.
3. Type 1 for Select Install/Boot Device.
4. Type 7 for List All Devices.
5. Find the CD-ROM device in the list. You might need to type N to scroll down. Record the number of the device, and press Enter.
6. Type 2 for Normal Mode Boot, and then type 1 for Yes to exit the SMS menu.
7. Select the console number, and then press Enter.
8. Select the preferred language. To select English, press Enter.
9. When prompted with the Installation and Maintenance menu, type 2 for Change/Show Installation Settings and Install to open the installation settings panel.
10. Type 1 for the Disk(s) where you want to install option to select the target installation device. (The target installation device is marked with three closing angle brackets (>>>).) Usually this device is the first physical disk device. Therefore, you can leave the default value.
11. Type 99 for Previous Menu.
12. Type 5 for Select Edition to choose the correct PowerVM edition.
13. Type 0 for Install with the settings listed above to start the installation. A progress panel shows the Approximate % Complete and Elapsed Time.
14. Insert volume 2 when prompted by the installation process, and press Enter. This installation takes between 15 minutes and an hour to complete.
15. When VIOS1 first opens, log in with the padmin user name.
16. When prompted by the VIOS, change the password and accept the software maintenance terms. After you change the password and agree to the license, enter the following command:
license -accept
4. Find the device names for the physical Ethernet adapter ports:
lsdev -type ent4sea
To also find the physical locations, enter the following command (Example 4-5):
lsdev -vpd | grep ent | grep -v Virtual
Example 4-5 Listing of physical Ethernet adapter ports on VIOS
$ lsdev -vpd | grep ent | grep -v Virtual
  Model Implementation: Multiple Processor, PCI bus
ent0    U5802.001.RCH8497-P1-C7-T1    4-Port 10/100/1000
ent1    U5802.001.RCH8497-P1-C7-T2    4-Port 10/100/1000
ent2    U5802.001.RCH8497-P1-C7-T3    4-Port 10/100/1000
ent3    U5802.001.RCH8497-P1-C7-T4    4-Port 10/100/1000
From the listed ports, select the correct Ethernet port that is to be used for the LAN connection and that has the Ethernet cable plugged in. In this example, we select device ent0. This physical adapter port device name is used in the following steps as a value for the -sea attribute.
5. Find the device name for the virtual Ethernet adapter port with adapter ID 2:
lsdev -vpd | grep ent | grep Virtual | grep C2
The following example shows the output:
ent4 U8233.E8B.10F5D0P-V1-C2-T1 Virtual I/O Ethernet Adapter
Command value C2: The value C2 used in the command in this step is related to adapter ID 2 of the virtual Ethernet adapter created in 4.2.1, Creating a VIOS virtual server on page 57. You can also find this ID and the slot number in Table 1-2 on page 5.
The device name from the output is the virtual Ethernet adapter port. In this example, the name is device ent4. This virtual adapter port device name is used in the following step as a value for the -vadapter and -default attributes.
The virtual Ethernet adapter in this step must use VLAN ID 1. Confirm the VLAN ID by using the following command:
entstat -all ent4 | grep "Port VLAN ID"
VLAN ID 1 is confirmed by the following output:
Port VLAN ID: 1
6. Create a virtual bridge, or an SEA in VIOS terminology (Example 4-6):
mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 199
Example 4-6 Creating an SEA on VIOS
$ mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 199
main: 86 Recived SEA events bytes 164
ent6 Available
en6
et6
In this example, the SEA device name is ent6. Make a note of the name of the created device. This SEA device name is required in the first part of Configuring a second VIOS for a client network on page 69 for changing the attributes of the SEA on VIOS1.
Tip: The SEA bridges a virtual and a physical network by using the VIOS.
7. Run the cfgassist command, and then select the VIOS TCP/IP Configuration option. Select the appropriate interface device name from the previous step, which is en6 in this example.
8. In the VIOS TCP/IP Configuration panel, enter the TCP/IP configuration values for VIOS connectivity. For these values, consult your network administrator. Figure 2-2 on page 10 shows the TCP/IP configuration values used in this example. After you enter the required values for TCP/IP configuration, press Enter.
9. When you see the output Command: OK, press F10, or press Esc+0.
SSH: From this point onward, you can use SSH to connect to VIOS1.
2. Find the WWPN address for the fcs0 device:
lsdev -dev fcs0 -vpd | grep Address
This command has the following output:
Network Address.............10000000C9E3AB56
3. Provide the location code and the WWPN address from the previous steps to your storage administrator. At this time, your storage administrator provisions the necessary SAN LUN. The storage administrator also gives you the SAN LUN ID of the disk for the client OS installation. For this exercise, the SAN administrator allocated a disk with LUN ID 60:0a:0b:80:00:11:46:32:00:00:92:75:4e:c5:0e:78 and a size of 25 GB.
4. After the storage administrator provisions the storage, find any new devices by running the cfgdev command.
5. List the physical disks attached to your VIOS:
lsdev -type disk
In the output shown in Example 4-7, you can see four internal disks (SAS hdisk0 to hdisk3) and six external disks from the IBM DS4800 (MPIO hdisk4 to hdisk9).
Example 4-7 Listing of physical disk devices on VIOS
$ lsdev -type disk
hdisk0 Available SAS Disk Drive
hdisk1 Available SAS Disk Drive
hdisk2 Available SAS Disk Drive
hdisk3 Available SAS Disk Drive
hdisk4 Available IBM MPIO DS4800 Array Disk
hdisk5 Available IBM MPIO DS4800 Array Disk
hdisk6 Available IBM MPIO DS4800 Array Disk
hdisk7 Available IBM MPIO DS4800 Array Disk
hdisk8 Available IBM MPIO DS4800 Array Disk
hdisk9 Available IBM MPIO DS4800 Array Disk
6. Confirm the SAN LUN ID on the VIOS:
lsdev -dev hdisk4 -attr | grep unique_id
Example 4-8 shows the output with the LUN ID highlighted in bold.
Example 4-8 Listing disk LUN ID on VIOS
$ lsdev -dev hdisk4 -attr | grep unique_id
unique_id 3E213600A0B8000114632000092754EC50E78... FAStT03IBMfcp PCM Unique device identifier False
If you have more disk devices, repeat this lsdev command to find the physical disk with the correct LUN ID that you received from the storage administrator. The device name for the external disk used in the next steps is hdisk4.
7. If you do not plan to use a dual VIOS configuration, skip this step. In this step, you change the behavior of locking the SCSI reservation on the physical disk device. You do not want the VIOS to lock the SCSI reservation (to be prepared for a dual VIOS configuration). To change the behavior of locking SCSI reservations, enter the following command:
chdev -dev hdisk4 -attr reserve_policy=no_reserve
The reserve_policy attribute name: If you use a storage subsystem from a different vendor, the reserve_policy attribute can have a different name. For example, if you use EMC PowerPath drivers to connect LUNs from the EMC storage subsystem, you must use the reserve_lock attribute and the value no.
11. In the New virtual server assignment panel, select VirtServer1(10), and then click OK.
12. Click Close.
2. In the Memory panel, complete these steps:
a. Select the Dedicated for Memory Mode check box (if present).
b. In the Assigned memory field, enter an appropriate amount of memory for this virtual server.
c. Click Next.
3. In the Processor panel, complete these steps:
a. Select Shared for Processing Mode.
b. In the Assigned Processors field, type 1 or a value that reflects your needs.
c. Click Next.
By default, the wizard creates two virtual Ethernet adapters. Only the first virtual Ethernet adapter (with VLAN ID 1) is used for network connectivity.
4. Select the check box to the left of the second adapter (ID 3), and then click Delete.
5. If the Storage selection panel opens, select the No, I want to manage the virtual storage adapters for this Virtual Server check box.
6. In the Virtual Storage Adapter panel, complete the following steps:
a. In the Maximum number of virtual adapters field, type 30.
b. Click Create Adapter, and then complete the following steps:
i. In the Adapter ID field, type 11.
ii. In the Adapter type field, type SCSI.
iii. In the Connecting Virtual Server ID field, type VIOS (1).
iv. In the Connecting adapter ID field, type 101.
v. Click OK.
c. Click Next.
7. In the Physical I/O Adapters panel, do not select any physical I/O adapters. The client OS is installed on the disk that is connected by using the virtual SCSI adapter. The virtual Fibre Channel adapters are added in 4.4, Setting up a virtual Fibre Channel using the SDMC on page 74. Click Next.
8. In the Summary panel, verify the information, and then click Finish to confirm the creation of the virtual server.
2. In the Memory panel, select the Dedicated for Memory Mode check box (if shown). In the Assigned memory field, enter 4 GB of memory or an appropriate amount of memory.
Amount of memory: The amount of memory your VIOS needs depends on the functions of VIOS that you will use. Start with 4 GB of memory, and then periodically monitor the memory usage on VIOS.
Click Next.
3. In the Processor panel, complete the following steps:
a. Select Shared for Processing Mode.
b. In the Assigned Processors field, type 1 for a single shared processor.
c. Click Next.
Assigned Uncapped Processing Units: In the background, the value of Assigned Uncapped Processing Units is 0.1 by default. Start with one shared processor, and then periodically monitor the processor usage on the VIOS.
4. In the Ethernet panel, complete these steps:
a. Expand Virtual Ethernet. By default, the wizard creates two virtual Ethernet adapters. The first virtual Ethernet adapter uses adapter ID 2 and VLAN ID 1. The second virtual Ethernet adapter uses adapter ID 3 and VLAN ID 99. The second virtual Ethernet adapter is used for the control channel between two VIOSs in the dual VIOS configuration. For more information about the control channel and the dual VIOS configuration for virtual Ethernet, see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.
b. Select the check box to the left of the first adapter (ID 2), and then click Edit.
c. In the Virtual Ethernet - Modify Adapter panel, complete the following steps:
i. Select the Use this adapter for Ethernet bridging check box.
ii. In the Priority field, type 2. For an explanation of the priorities, see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.
iii. Click OK to confirm the changes to the first Ethernet adapter.
d. In the Ethernet panel, click Next.
5. In the Virtual Storage Adapter panel, complete the following steps:
a. In the Maximum number of virtual adapters field, type 200.
b. Click Create Adapter.
c. In the Create Virtual Adapter panel, complete the following steps:
i. In the Adapter ID field, type 101.
ii. In the Adapter type field, type SCSI.
iii. In the Connecting Virtual Server ID field, type 10, or select VirtServer1 (10).
iv. In the Connecting adapter ID field, type 21.
v. Click OK.
6. In the Physical I/O Adapters panel, select the check boxes to the left of the Location Code and Description of the required adapters. To use all the functions described in this paper, you must make the following selections:
- One SAS or SCSI disk controller (controller for internal disk drives)
- One Ethernet adapter (Ethernet adapter for connection to a LAN)
- One Fibre Channel adapter (Fibre Channel adapter for connection to a SAN)
In this example, we selected the following physical adapters:
U5802.001.RCH8497-P1-C2      PCI-E SAS Controller
U5802.001.RCH8497-P1-C6      Quad 10/100/1000 Base-TX PCI-Express Adapter
U5802.001.RCH8497-P1-C5      Fibre Channel Serial Bus
In the Physical I/O Adapters panel, click Next.
7. In the Summary panel, verify the information, and then click Finish to confirm the creation of the virtual server.
For a more detailed description of the options that are available, see IBM Systems Director Management Console: Introduction and Overview, SG24-7860.
Important: In the Install the Base Operating System on the Standalone Clients panel, ensure that the Remain NIM client after install attribute field is set to NO. With this setting, NIM does not set up the TCP/IP configuration on a newly installed VIOS, so you can create an SEA in this VIOS.
Now all the necessary resources are prepared for your NIM environment, and an installation of the second VIOS is initialized from NIM. For a detailed explanation of how to prepare NIM to install VIOS, see the NIM installation and backup of the VIOS Technote at:
https://www.ibm.com/support/docview.wss?uid=isg3T1011386#4
Before you install VIOS2 into the virtual server created in 4.3.1, Creating a second VIOS virtual server on page 65, complete these steps:
1. Log on to the SDMC.
2. From the SDMC home page, locate the host on which the virtual server for VIOS2 was created, and click its name.
3. Select the check box to the left of the virtual server name VIOS2, and then select Actions → Operations → Activate → Profile.
4. In the Activate Virtual server: VIOS2 panel, click Advanced.
a. Change Boot mode to SMS, and click OK.
b. Select the Open a terminal window or console session check box, and click OK.
5. In the terminal console for VIOS2 that opens, enter your SDMC user ID and password to open the terminal console.
To install a second VIOS:
1. If prompted with options to set this console as the active console, press the key indicated in the panel.
2. Type 2 for Setup Remote IPL (Initial Program Load).
3. Select the number of the port that is connected to the Ethernet switch and the subnet used during installation. In this example, we type 3 for Port 1.
4. Type 1 for IPv4 - Address Format 123.231.111.222.
5. Type 1 for BOOTP.
6. Type 1 for IP Parameters.
7. Enter the TCP/IP configuration parameters. We used these parameters:
1. Client IP Address    [172.16.22.13]
2. Server IP Address    [172.16.20.40]
3. Gateway IP Address   [172.16.20.1]
4. Subnet Mask          [255.255.252.0]
The Server IP Address is the TCP/IP address of your NIM server.
8. Press the Esc key.
9. Type 3 for Ping Test.
10. Type 1 for Execute Ping Test. You see the following message:
.-----------------.
|  Ping Success.  |
`-----------------'
11. Press any key to continue, and then press the Esc key five times to go to the Main Menu.
12. From the Main Menu, type 5 for Select Boot Options.
13. Type 1 for Select Install/Boot Device.
14. Type 6 for Network.
15. Type 1 for BOOTP.
16. In the Select Device panel, select the number of the port that is connected to the switch and subnet that are used during the installation. In this example, we typed 3 for Port 1.
17. Type 1 for Normal Mode Boot, and then type 1 for Yes to leave the SMS menu and start the installation.
18. When prompted to define the system console, type 1. The number that you type might be different for your installation. Press Enter.
19. To confirm English as the language to use during the installation, press Enter.
20. From the Installation and Maintenance menu, type 2 for Change/Show Installation Settings and Install to open the installation settings panel.
21. Type 1 for Disk(s) where you want to install to select the target installation device. Usually this device is the first physical disk device. Therefore, you can accept the default. After you select the target installation device, type 0 for Continue with choices indicated above to return to the main menu.
22. Type 5 for Select Edition to choose the PowerVM edition.
23. Start the installation by typing 0 for Install with the settings listed above. A progress panel shows Approximate % Complete and Elapsed Time. This installation takes between 15 minutes and one hour to complete.
24. When VIOS2 first starts, log in with the padmin user name.
25. When prompted by VIOS, change the password and accept the software maintenance terms. After you change the password and agree to the license, enter the following command:
license -accept
To change the appropriate attributes, log on to VIOS1, and complete the following steps:
1. Open the console for VIOS1 by using the SDMC. The process of opening a terminal console by using the SDMC is described in Configuring a VIOS for a client network on page 60.
2. Find the device name for the virtual port that will function as a control channel:
lsdev -vpd | grep ent | grep C3
The command produces the following output:
ent5 U8233.E8B.10F5D0P-V1-C3-T1 Virtual I/O Ethernet Adapter
This adapter is the second virtual Ethernet adapter with adapter ID 3 that was created by default in 4.2.1, Creating a VIOS virtual server on page 57. This device name is used in the next step in the ctl_chan attribute. In this example, the device name is ent5.
The virtual Ethernet adapter used in this step must use VLAN ID 99. Confirm the VLAN ID:
entstat -all ent5 | grep "Port VLAN ID"
VLAN ID 99 is confirmed by the following output:
Port VLAN ID: 99
3. Change the attributes of the SEA on VIOS1:
chdev -dev ent6 -attr ha_mode=auto ctl_chan=ent5
In this command, the -dev attribute contains the SEA device name from Configuring a VIOS for a client network on page 60. To confirm the attributes of the SEA on VIOS1, enter the following command:
lsdev -dev ent6 -attr
Now configure the virtual Ethernet bridge (known as the SEA) on the second VIOS (VIOS2), and also configure the management TCP/IP address for the second VIOS. Follow these steps in the VIOS2 console:
Important: Make sure that you are logged on to the second VIOS, which in this example is VIOS2.
1. Find the device names for the physical Ethernet adapter ports:
lsdev -vpd | grep ent | grep -v Virtual
Select one of the listed ports (Example 4-9) that is used for a LAN connection and has an Ethernet cable plugged in. In this case, the device is ent0. This physical adapter port device name is used in the next steps as the value for the -sea attribute.
Example 4-9 Listing of physical Ethernet adapter ports on VIOS
$ lsdev -vpd | grep ent | grep -v Virtual
  Model Implementation: Multiple Processor, PCI bus
ent0    U5802.001.RCH8497-P1-C6-T1    4-Port 10/100/1000
ent1    U5802.001.RCH8497-P1-C6-T2    4-Port 10/100/1000
ent2    U5802.001.RCH8497-P1-C6-T3    4-Port 10/100/1000
ent3    U5802.001.RCH8497-P1-C6-T4    4-Port 10/100/1000
2. Find the device name for the virtual port:
lsdev -vpd | grep ent | grep C2
This command has the following output:
ent4 U8233.E8B.10F5D0P-V2-C2-T1 Virtual I/O Ethernet Adapter (l-lan)
Command value C2: C2 used in the previous command is related to the number of the adapter created in 4.3.1, Creating a second VIOS virtual server on page 65.
The device name from the output is the virtual Ethernet adapter port. In our case, the device is ent4. This virtual adapter port device name is used in the next step as a value for the -vadapter and -default attributes.
The virtual port device name found in this step uses VLAN ID 1. Confirm the VLAN ID by using the following command:
entstat -all ent4 | grep "Port VLAN ID"
VLAN ID 1 is confirmed by the following output:
Port VLAN ID: 1
3. Find the device name for the virtual port that functions as a control channel in the output of the following command:
lsdev -vpd | grep ent | grep C3
This command has the following output:
ent5 U8233.E8B.10F5D0P-V2-C3-T1 Virtual I/O Ethernet Adapter (l-lan)
This adapter is the second virtual Ethernet adapter with adapter ID 3 that was created by default in 4.3.1, Creating a second VIOS virtual server on page 65. This device name is used in the next step in the ctl_chan attribute. In this example, the device name is ent5.
The virtual Ethernet adapter in this step must use VLAN ID 99. Confirm the VLAN ID by using the following command:
entstat -all ent5 | grep "Port VLAN ID"
VLAN ID 99 is confirmed by the following output:
Port VLAN ID: 99
4. Create a virtual bridge, or an SEA in VIOS terminology (Example 4-10):
mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 199 -attr ha_mode=auto ctl_chan=ent5
Example 4-10 Creating an SEA on a second VIOS
$ mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 199 -attr ha_mode=auto ctl_chan=ent5
main: 86 Recived SEA events bytes 164
ent6 Available
en6
et6
Note the name of the created SEA and interface. In this example, the device name of the interface is en6.
Important: Mismatching the SEA and the SEA failover configuration can cause broadcast storms in the network and affect network stability. For more information, see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.
5. Run the cfgassist command, and then select VIOS TCP/IP Configuration. Select the appropriate interface device name from the previous step. In this example, we selected en6.
6. In the VIOS TCP/IP Configuration panel, enter the TCP/IP configuration values for VIOS2 connectivity. For these values, consult your network administrator. See Figure 2-2 on page 10 for the TCP/IP configuration values in this example. After entering the necessary values, press Enter.
7. When you see the output Command: OK, press F10 or press ESC+0.
SSH: From this point onward, you can use SSH to connect to VIOS2.
From the Ethernet point of view, the virtual server for the client OS is already prepared for the dual VIOS configuration. You do not need to make changes to the virtual server for the client OS.
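If you prefer the command line to the cfgassist menus, the mktcpip command sets the same values in one step. The following line is a sketch only: the interface name en6 comes from this example, and the host name, address, netmask, and gateway are placeholders that your network administrator must supply.
mktcpip -hostname VIOS2 -inetaddr 172.16.20.201 -interface en6 -netmask 255.255.252.0 -gateway 172.16.20.1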
g. Click Apply to dynamically add the virtual SCSI adapter to the virtual server.
2. Update the configuration profile of VirtServer1:
a. Log on to the SDMC environment.
b. From the home page, locate the host that contains the VirtServer1 virtual server, and select the name of the host.
c. In the Resource Explorer window that opens, select the check box to the left of virtual server VirtServer1, and then select Actions → System Configuration → Save Current Configuration.
d. Select the Overwrite existing profile check box, and then select the OriginalProfile profile.
e. Click OK.
f. In the Save Profile window, click Yes.
Now configure the second VIOS (VIOS2) to provision the disk to the virtual server for the client OS. To attach the second VIOS to a SAN and configure the storage, complete the following steps in the VIOS2 console:
1. Provide the Fibre Channel card location codes and their WWPN addresses to your storage administrator. For the steps to find the location codes and WWPN addresses, see Configuring a VIOS for client storage on page 62. At this time, your storage administrator provides the same SAN LUN (and its LUN ID) that was provisioned and used in Configuring a VIOS for client storage on page 62.
2. After the storage administrator completes the provisioning, run the cfgdev command to find the new devices.
3. List the physical disks attached to your VIOS:
lsdev -type disk
Example 4-11 shows the system output from the lsdev command. In the output, you can see six internal disks and six external disks from the IBM DS4800 storage subsystem. Make sure that you find the correct physical disk device names as explained in Configuring a VIOS for client storage on page 62. In this example, the physical disk with LUN ID 60:0a:0b:80:00:11:46:32:00:00:92:75:4e:c5:0e:78 has the device name hdisk6. This device name is used in the following steps.
Example 4-11 Listing physical disks on the VIOS
$ lsdev -type disk
hdisk0    Available   SAS RAID 0 SSD Array
hdisk1    Available   SAS RAID 0 SSD Array
hdisk2    Available   SAS RAID 0 SSD Array
hdisk3    Available   SAS RAID 0 SSD Array
hdisk4    Available   SAS RAID 0 SSD Array
hdisk5    Available   SAS RAID 0 SSD Array
hdisk6    Available   IBM MPIO DS4800 Array
hdisk7    Available   IBM MPIO DS4800 Array
hdisk8    Available   IBM MPIO DS4800 Array
hdisk9    Available   IBM MPIO DS4800 Array
hdisk10   Available   IBM MPIO DS4800 Array
hdisk11   Available   IBM MPIO DS4800 Array
Device names: You might notice that the names of devices on VIOS1 are not the same as the device names on VIOS2. The reason for this difference is that VIOS2 has more internal disks, so the external disk has a higher disk number.
Same disk configuration: Make the disk configuration of both VIOS1 and VIOS2 the same. This approach makes management of a dual VIOS configuration easier and less prone to administrator mistakes.
4. Change the behavior of locking SCSI reservations:
chdev -dev hdisk6 -attr reserve_policy=no_reserve
5. Find the device name for the virtual adapter connected to the virtual server for the client OS:
lsdev -vpd | grep vhost | grep C101
C101 is the slot number from Table 1-3 on page 5. In this example, this command produces the following output:
vhost0   U8233.E8B.10F5D0P-V1-C101   Virtual SCSI Server Adapter
The device name for the virtual adapter is used in the next step. In this example, the device name is vhost0.
6. Map the external disk to the virtual server for the client OS:
mkvdev -vdev hdisk6 -vadapter vhost0
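To confirm that the mapping was created, you can list the mappings for the virtual adapter. This is a quick check using the device names from this example (vhost0 and hdisk6); the output should show hdisk6 as the backing device of the virtual target device.
lsmap -vadapter vhost0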
3. In the Create Virtual Storage Adapter window, complete the following steps:
a. In the Adapter ID field, type 12.
b. In the Adapter type field, select Fibre Channel.
c. In the Connecting virtual server field, type VIOS1(1).
d. In the Connecting adapter ID field, type 102.
e. Click Add.
4. Click Add.
5. In the Create Virtual Storage Adapter window, complete the following steps:
a. In the Adapter ID field, type 22.
b. In the Adapter type field, select Fibre Channel.
c. In the Connecting virtual server field, type VIOS2(2).
d. In the Connecting adapter ID field, type 102.
e. Click Add.
6. Click Apply to dynamically add the virtual Fibre Channel adapters to VirtServer1.
Now update the configuration profile of the VirtServer1 virtual server. To update the profile, log on to the SDMC environment and complete the following steps:
1. From the home page, locate the host that contains the VirtServer1 virtual server, and click the name of the host.
2. In the Resource Explorer window that opens, select the check box to the left of the virtual server VirtServer1, and then select Actions → System Configuration → Save Current Configuration.
3. Select the Overwrite existing profile check box, and then select the OriginalProfile profile.
4. Click OK.
5. In the Save Profile window, click Yes.
4. Click Apply to dynamically add the virtual Fibre Channel adapter to VIOS1.
5. On the VIOS1 command line, run the cfgdev command to check for newly added devices.
6. On the VIOS1 command line, list the virtual Fibre Channel adapters as shown in Example 4-12:
lsdev -type adapter | grep "Virtual FC"
Example 4-12 Listing virtual Fibre Channel adapters on VIOS1
$ lsdev -type adapter | grep "Virtual FC"
vfchost0   Available   Virtual FC Server Adapter

The device name vfchost0 is used in the following steps as the -vadapter attribute.
7. List the physical Fibre Channel ports and NPIV attributes by using the lsnports command (Example 4-13).
Example 4-13 Listing the NPIV capable Fibre Channel ports on VIOS1
NPIV-capable ports have a value of 1 in the fabric column. For Fibre Channel virtualization, select the physical port with the device name fcs1. This device name is used in the following steps to create the mapping. The physical port fcs0 was used for SAN LUN masking in Configuring a VIOS for client storage on page 62.
8. Create the virtual Fibre Channel adapter to physical Fibre Channel adapter mapping. You can perform this mapping by using the SDMC interface or the VIOS command line. Here, use the SDMC interface to create the mapping between the virtual Fibre Channel adapter and the physical Fibre Channel adapter. Creating this mapping by using the VIOS command line is explained in 4.4.3, Configuring a second VIOS for NPIV on page 77. To create the mapping by using the SDMC:
a. Log on to the SDMC.
b. From the home page, locate the host that contains the VIOS1 virtual server.
c. Select the check box to the left of the host that contains VIOS1, and then select Actions → System Configuration → Virtual Resources → Virtual Storage Management.
d. In the VIOS/SSP section, select VIOS1, and click Query.
e. Click Virtual Fibre Channel.
f. Select the check box to the left of the fcs1 physical Fibre Channel port. This device name was identified in the previous steps.
g. Click Modify virtual server connections.
h. Select the check box to the left of the VirtServer1 virtual server name.
i. Click OK.
Now update the configuration profile of the VIOS1 virtual server:
1. Log on to the SDMC environment.
2. From the home page, locate the host that contains the VIOS1 virtual server.
3. Click the name of the host.
4. In the Resource Explorer window that opens, select the check box to the left of the virtual server VIOS1, and then select Actions → System Configuration → Save Current Configuration.
5. Select the Overwrite existing profile check box, and then select the OriginalProfile profile.
6. In the Confirm window, click OK.
7. In the Save Profile window, click Yes.
4. Click Apply to dynamically add the virtual Fibre Channel adapter to VIOS2.
5. On the VIOS2 command line, run the cfgdev command to check for newly added devices.
6. On the VIOS2 command line, list the virtual Fibre Channel adapters as shown in Example 4-14:
lsdev -type adapter | grep "Virtual FC"
The device name vfchost0 is used in the following steps as the -vadapter attribute.
Example 4-14 Listing virtual Fibre Channel adapters on VIOS2
$ lsdev -type adapter | grep "Virtual FC" vfchost0 Available Virtual FC Server Adapter
7. List the physical Fibre Channel ports and NPIV attributes by using the lsnports command as shown in Example 4-15.
Example 4-15 Listing NPIV capable Fibre Channel ports on VIOS2
NPIV-capable ports have a value of 1 in the fabric column. For Fibre Channel virtualization, select the physical port with the device name fcs1. The physical port fcs0 was used for SAN LUN masking in Configuring a second VIOS for client storage on page 72.
8. Create the Fibre Channel virtualization:
vfcmap -vadapter vfchost0 -fcp fcs1
9. Verify the virtual Fibre Channel mapping:
lsmap -all -npiv
Example 4-16 shows the complete listing. The status of the virtual Fibre Channel adapter must be LOGGED_IN.
Checking for new devices: Make sure that the client OS in virtual server VirtServer1 checks for new devices after you add the devices in 4.4.1, Configuring a client virtual server for NPIV on page 74. In AIX, use the cfgmgr command to check for newly added devices.
Example 4-16 Listing virtual Fibre Channel mapping on VIOS
$ lsmap -all -npiv
Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost0      U8233.E8B.10F5D0P-V2-C102              10 VirtServer1    AIX

Status:LOGGED_IN
FC name:fcs1                     FC loc code:U5802.001.RCH8497-P1-C5-T2
Ports logged in:1
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:fcs1
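On the AIX client in VirtServer1, you can then discover and display the new virtual Fibre Channel devices. A minimal sketch follows; the fcs device numbers that appear on the client depend on the client configuration.
# cfgmgr
# lsdev -Cc adapter | grep fcs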
Now update the configuration profile of the VIOS2 virtual server. To update the profile, log on to the SDMC environment and complete the following steps:
1. From the home page, locate the host that contains the VIOS2 virtual server, and click the name of the host.
2. In the Resource Explorer window that opens, select the check box to the left of the virtual server VIOS2, and then select Actions → System Configuration → Save Current Configuration.
3. Select the Overwrite existing profile check box, and then select the OriginalProfile profile.
4. In the Confirm window, click OK.
5. In the Save Profile window, click Yes.
Chapter 5. Advanced configuration
This chapter describes additional configurations for a dual Virtual I/O Server (VIOS) setup and highlights other advanced configuration practices. The advanced setup addresses performance concerns beyond the single and dual VIOS setups. This chapter includes the following sections:
Adapter ID numbering scheme
Partition numbering
VIOS partition and system redundancy
Advanced VIOS network setup
Advanced storage connectivity
Shared processor pools
Live Partition Mobility
Active Memory Sharing
Active Memory Deduplication
Shared storage pools
Virtual adapter    VLAN ID   Additional VLANs   Client partition or virtual server   Client adapter ID   Client adapter slot
Virtual SCSI       -         -                  VirtServer2                          21                  C21
Virtual Fibre      -         -                  VirtServer2                          23                  C23
Virtual Fibre      -         -                  VirtServer2                          25                  C25
Virtual Ethernet   -         -                  VirtServer1                          10                  C10
Virtual Ethernet   -         -                  VirtServer1                          11                  C11
Virtual SCSI       -         -                  VirtServer3                          21                  C21
Virtual Fibre      -         -                  VirtServer3                          23                  C23
Virtual Fibre      -         -                  VirtServer3                          25                  C25
Redundancy can be applied to a system that spans multiple I/O drawers and multiple central electronics complex (CEC) servers. Separate CEC loops can be created as explained in the product manual of the system. If you have a system that spans multiple I/O drawers and CECs, allocate adapters from different I/O drawers or CECs to a VIOS partition for a highly redundant setup.
Figure 5-1 SEA failover configuration: VIOS1 (primary, SEA ent7 with priority=1, interface en7) and VIOS2 (standby, priority=2) each bridge a link-aggregated adapter (ent6) over physical ports (ent0 through ent3) to the client partition through the hypervisor; the bridged virtual adapters (ent4, PVID=1) and the control channel adapters (ent5) connect the two VIOS partitions, and each VIOS uplinks to a separate Ethernet switch (VLAN=1)
Each VIOS partition is configured with two physical Ethernet ports on different Ethernet adapters. If an Ethernet adapter fails, the external link is still active through the port on the other Ethernet adapter. The two Ethernet adapter ports are linked together to form a logical EtherChannel adapter.
EtherChannel: In this section, EtherChannel describes a link-aggregated adapter on VIOS or AIX. In this regard, EtherChannel is not the same as Cisco EtherChannel.
Figure 5-1 on page 82 also shows each VIOS partition connected to a network switch. Adapters linked in an 802.3ad Link Aggregation configuration cannot span multiple network switches unless the switches are virtualized to act as a single switch. To configure a logical EtherChannel adapter, configure the 802.3ad Link Aggregation settings on the VIOS partition and on the network switches. Activate Portfast on the network switches to allow faster failover times. To create a Link Aggregation adapter between physical Ethernet adapters entX and entY, use the following command syntax:
mkvdev -lnaggr <entX> <entY> -attr mode=8023ad
802.3ad setting: You must set the 802.3ad setting on both the VIOS partition side and the network switch side. The Link Aggregation adapter is nonresponsive if the setting is set on only one side.
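Putting these pieces together, the following sketch first aggregates two physical ports and then layers the SEA on top of the aggregated adapter. All device names are assumptions for illustration: ent0 and ent1 are the physical ports, ent4 is the bridged virtual adapter, ent5 is the control channel, and the first command is assumed to create ent6. Substitute the device names and the default PVID from your own configuration.
mkvdev -lnaggr ent0 ent1 -attr mode=8023ad
mkvdev -sea ent6 -vadapter ent4 -default ent4 -defaultid 1 -attr ha_mode=auto ctl_chan=ent5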
Figure 5-2 MPIO setup where each VIOS partition is connected to one SAN switch: the client partition runs MPIO over its client SCSI and client Fibre Channel adapters for the OS disks; VIOS1 and VIOS2 each have two physical Fibre Channel adapters (ports fcs0 through fcs3), with all ports of a VIOS cabled to the same SAN switch
Each VIOS partition can have its Fibre Channel adapter ports connected to different SAN switches as illustrated in Figure 5-3.
Figure 5-3 MPIO setup where the VIOS partitions are connected to two SAN switches: VIOS1 and VIOS2 each split the ports of their two physical Fibre Channel adapters (fcs0 through fcs3) across both SAN switches
Table 5-2 highlights the benefits and drawbacks of the two approaches.
Table 5-2 Fibre Channel cabling scenarios
Scenario: SAN switch 1 is brought down for maintenance.
VIOS partition connected to one SAN switch (Figure 5-2 on page 85): VIOS1 is unavailable for storage. The LUNs are accessible by using VIOS2.
VIOS partition connected to two SAN switches (Figure 5-3): Storage is available through both VIOS partitions.
Scenario: SAN switch 1 is misconfigured.
VIOS partition connected to one SAN switch: VIOS1 is affected. VIOS2 is unaffected.
VIOS partition connected to two SAN switches: VIOS1 and VIOS2 are both impacted and might lose connectivity to the SAN.
Scenario: Cabling issues.
VIOS partition connected to one SAN switch: Easier to pinpoint cabling problems because all connections on VIOS1 go to SAN switch 1, and all connections on VIOS2 go to SAN switch 2.
VIOS partition connected to two SAN switches: Harder to manage cable issues because VIOS1 and VIOS2 have connections to both SAN switch 1 and SAN switch 2.
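Whichever cabling approach you choose, you can verify from the AIX client that MPIO has a working path through each VIOS. A short sketch follows, assuming the OS disk on the client is hdisk0; both paths, one per VIOS, should report Enabled.
# lspath -l hdisk0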
because the savings can be used either to lower memory overcommitment levels or to create room to increase the memory footprint of LPARs. For more information about memory deduplication, see Power Systems Memory Deduplication, REDP-4827.
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this paper.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topics in this document. Note that some publications referenced in this list might be available in softcopy only.
IBM PowerVM Virtualization Introduction and Configuration, SG24-7940
IBM Systems Director Management Console: Introduction and Overview, SG24-7860
Integrated Virtualization Manager on IBM System p5, REDP-4061
You can search for, view, download, or order these documents and other Redbooks, Redpapers, Web Docs, drafts, and additional materials at the following website:
ibm.com/redbooks
Online resources
These websites are also relevant as further information sources:
IBM i Information Center
http://publib.boulder.ibm.com/infocenter/iseries/v6r1m0/index.jsp
NIM installation and backup of the VIO server technote
https://www.ibm.com/support/docview.wss?uid=isg3T1011386#4
PowerVM QuickStart by William Favorite
http://www.tablespace.net/quicksheet/powervm-quickstart.html