Oracle Linux

KVM User's Guide

F29966-18
June 2022
Oracle Linux KVM User's Guide,

F29966-18

Copyright © 2020, 2022, Oracle and/or its affiliates.


Contents

Preface
  Conventions
  Documentation Accessibility
  Access to Oracle Support for Accessibility
  Diversity and Inclusion

1 About Oracle Linux KVM
  Description of the Oracle Linux KVM Feature
  Guest Operating System Requirements
    Linux Guest Operating Systems
    Microsoft Windows Guest Operating Systems
    Oracle Solaris Guest Operating System
  System Requirements and Recommendations
  About Virtualization Packages

2 Installing KVM User Space Packages
  Configuring Yum Repositories and ULN Channels
    Oracle Linux 7
    Oracle Linux 8
  Installing Virtualization Packages
    Installing Virtualization Packages During an Oracle Linux System Installation
    Using the Installation Program to Install Virtualization Hosts
    Using a Kickstart File to Install Virtualization Hosts
    Installing Virtualization Packages on an Existing System
  Upgrading Virtualization Packages
  Switching Application Streams on Oracle Linux 8
    Switching to the Oracle KVM Stack
    Switching to the Default KVM Stack
  Validating the Host System

3 KVM Usage
  Checking the libvirt Daemon Status
  Base Operations
    Creating a New Virtual Machine
    Starting and Stopping Virtual Machines
    Deleting a Virtual Machine
    Configuring a Virtual Machine With a Virtual Trusted Platform Module
  Working With Storage for KVM Guests
    Storage Pools
    Storage Volumes
  Managing Virtual Disks
    Adding or Removing a Virtual Disk
    Removing a Virtual Disk
    Extending a Virtual Disk
  Working With Memory and CPU Allocation
    Configuring Virtual CPU Count
    Configuring Memory Allocation
  Setting Up Networking for KVM Guests
    Setting Up and Managing Virtual Networks
    Adding or Removing a vNIC
    Bridged and Direct vNICs
    Interface Bonding for Bridged Networks
  Cloning Virtual Machines
    Preparing a Virtual Machine for Cloning
    Cloning a Virtual Machine by Using the virt-clone Command
    Cloning a Virtual Machine by Using Virtual Machine Manager

4 Known Issues for Oracle Linux KVM
  Upgrading From QEMU 3.10 to Version 4.2.1 Can Prevent Existing KVM Guests From Starting on Oracle Linux 7
Preface
Oracle Linux: KVM User's Guide provides information about how to install, configure, and use
the Oracle Linux KVM packages to run guest systems on top of a bare metal Oracle Linux
system. This documentation describes how to use KVM on a standalone platform in an
unmanaged environment. Typical usage in this mode is for development and testing
purposes, although production-level deployments are supported. Oracle recommends that
customers use Oracle Linux Virtualization Manager for more complex deployments of a
managed KVM infrastructure.

Conventions
The following text conventions are used in this document:

Convention   Meaning
boldface     Boldface type indicates graphical user interface elements associated
             with an action, or terms defined in text or the glossary.
italic       Italic type indicates book titles, emphasis, or placeholder variables
             for which you supply particular values.
monospace    Monospace type indicates commands within a paragraph, URLs, code in
             examples, text that appears on the screen, or text that you enter.

Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility
Program website at https://www.oracle.com/corporate/accessibility/.
For information about the accessibility of the Oracle Help Center, see the Oracle Accessibility
Conformance Report at https://www.oracle.com/corporate/accessibility/templates/
t2-11535.html.

Access to Oracle Support for Accessibility


Oracle customers that have purchased support have access to electronic support through My
Oracle Support. For information, visit https://www.oracle.com/corporate/accessibility/learning-
support.html#support-tab.

Diversity and Inclusion


Oracle is fully committed to diversity and inclusion. Oracle respects and values having a
diverse workforce that increases thought leadership and innovation. As part of our initiative to
build a more inclusive culture that positively impacts our employees, customers, and
partners, we are working to remove insensitive terms from our products and
documentation. We are also mindful of the necessity to maintain compatibility with our
customers' existing technologies and the need to ensure continuity of service as
Oracle's offerings and industry standards evolve. Because of these technical
constraints, our effort to remove insensitive terms is ongoing and will take time and
external cooperation.

1 About Oracle Linux KVM
This chapter provides a high-level overview of the Kernel-based Virtual Machine (KVM)
feature on Oracle Linux, the user space tools that are available for installing and managing a
standalone instance of KVM, and the differences between KVM usage in this mode and
usage within a managed environment provided by Oracle Linux Virtualization Manager.

Description of the Oracle Linux KVM Feature


The KVM feature provides a set of modules that enable you to use the Oracle Linux kernel as
a hypervisor. KVM supports both x86_64 and aarch64 processor architectures and is
supported on Oracle Linux 7 and Oracle Linux 8 systems using either RHCK or any UEK
release since Unbreakable Enterprise Kernel Release 4.
By default, KVM is built into the Unbreakable Enterprise Kernel (UEK) release. KVM features
are actively developed and might vary depending on platform and kernel release. If you are
using Unbreakable Enterprise Kernel, refer to the release notes for the kernel release that you
are currently running for information about features and any known issues or limitations that
might apply. See the Unbreakable Enterprise Kernel documentation for more information.
For enterprise or clustered KVM deployments on Oracle Linux, consider using Oracle Linux
Virtualization Manager which is a server virtualization management platform. Through its
Administration or virtual machine (VM) portals, you can configure, monitor, and manage an
Oracle Linux KVM environment, including hosts, virtual machines, storage, networks, and
users. Oracle Linux Virtualization Manager also provides a REST API for managing your
Oracle Linux KVM infrastructure, allowing you to integrate Oracle Linux Virtualization
Manager with other management systems or to automate repetitive tasks with scripts. Find
out more at https://docs.oracle.com/en/virtualization/oracle-linux-virtualization-manager/.

Guest Operating System Requirements


The following guest operating systems can be used when installed within a standalone
instance of KVM.

Linux Guest Operating Systems


Linux Operating System               32-bit Architecture   64-bit Architecture
Oracle Linux 6                       Yes*                  Yes
Oracle Linux 7                       Not available         Yes
Oracle Linux 8                       Not available         Yes
Red Hat Enterprise Linux 6           Yes*                  Yes
Red Hat Enterprise Linux 7           Not available         Yes
Red Hat Enterprise Linux 8           Not available         Yes
CentOS 6                             Yes*                  Yes
CentOS 7                             Not available         Yes
CentOS 8                             Not available         Yes
SUSE Linux Enterprise Server 12 SP5  Not available         Yes
SUSE Linux Enterprise Server 15 SP1  Not available         Yes
Ubuntu 16.04                         Not available         Yes
Ubuntu 18.04                         Not available         Yes
Ubuntu 20.04                         Not available         Yes

Important:
* cloud-init is unavailable for 32-bit architectures

You can download Oracle Linux ISO images and disk images from Oracle Software
Delivery Cloud: https://edelivery.oracle.com/linux.
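Before using a downloaded image, it is good practice to verify it against the published checksum. The following is a minimal sketch, not an Oracle procedure: the ISO file name and expected checksum value are placeholders, and you would substitute the values published alongside your download.

```shell
#!/bin/sh
# Sketch: verify a downloaded ISO against an expected SHA-256 checksum.
# ISO and EXPECTED are placeholders; use the values published with your download.
ISO=${ISO:-OracleLinux-example.iso}
EXPECTED=${EXPECTED:-0000000000000000000000000000000000000000000000000000000000000000}

ACTUAL=$(sha256sum "$ISO" 2>/dev/null | awk '{ print $1 }')
if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "Checksum OK"
else
    echo "Checksum mismatch or file not found"
fi
```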

Microsoft Windows Guest Operating Systems


Microsoft Windows Operating System    32-bit Architecture   64-bit Architecture
Microsoft Windows Server 2019         Not available         Yes
Microsoft Windows Server 2016         Not available         Yes
Microsoft Windows Server 2012 R2      Not available         Yes
Microsoft Windows Server 2012         Not available         Yes
Microsoft Windows Server 2008 R2 SP1  Not available         Yes
Microsoft Windows Server 2008 SP1     Yes                   Yes
Microsoft Windows 10                  Yes                   Yes
Microsoft Windows 8.1                 Yes                   Yes
Microsoft Windows 8                   Yes                   Yes
Microsoft Windows 7 SP1               Yes                   Yes


Note:
Oracle recommends that you install the Oracle VirtIO Drivers for Microsoft Windows
in Windows virtual machines for improved performance for network and block (disk)
devices and to resolve common issues. The drivers are paravirtualized drivers for
Microsoft Windows guests running on Oracle Linux KVM hypervisors.

For instructions on how to obtain and install the drivers, see Oracle Linux: Oracle VirtIO
Drivers for Microsoft Windows for use with KVM.

Oracle Solaris Guest Operating System


Oracle Solaris 11.4 can be used as a guest operating system when installed within a
standalone instance of KVM.
Oracle Solaris 11.4.33 (Oracle Solaris 11.4 SRU 33) is the minimum version that provides
VirtIO driver support.
For best results, follow these recommendations:
• Use at least a two-core configuration for the Oracle Solaris VM.
• Use the most current QEMU system type (Custom Emulated Machine = pc-i440fx-4.2) for
the Oracle Solaris VM.
You can download Oracle Solaris ISO images and disk images from Oracle Software Delivery
Cloud: https://edelivery.oracle.com/.
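For illustration, the two recommendations above can be combined into a single virt-install invocation. This is a minimal sketch under stated assumptions, not a definitive procedure: the VM name, ISO path, memory size, and disk size are placeholders, and the command is echoed rather than executed so that you can review it first.

```shell
#!/bin/sh
# Sketch: a virt-install command reflecting the recommendations above
# (two vCPUs and the pc-i440fx-4.2 machine type). The name, ISO path,
# memory, and disk size are illustrative placeholders.
# Remove the leading `echo` to actually create the VM.
echo virt-install \
    --name solaris11-vm \
    --vcpus 2 \
    --memory 4096 \
    --machine pc-i440fx-4.2 \
    --cdrom /var/lib/libvirt/images/sol-11_4.iso \
    --disk size=40
```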

System Requirements and Recommendations


Although most systems running Oracle Linux 7 or Oracle Linux 8 are capable of using KVM,
there are some general hardware recommendations, requirements, and guidelines that you
should follow to run a guest on a host system. Many of these depend on the kinds of
applications being run on the VM and the amount of work they are expected to perform.
• Bare metal host
KVM is supported when it is run on a bare metal host. Nested virtualization scenarios are
not supported for KVM.
• CPU
The host system CPU must have Intel (VT-x) or AMD (AMD-V) virtualization features
enabled; Arm (aarch64) CPUs are also supported. If these features are reported as
unavailable, check that virtualization is enabled in the system firmware (BIOS or UEFI).
As a rule of thumb, you can start with the following virtual CPU to host CPU ratios (the
ratio is of distinct CPU cores and assumes SMT is enabled):
– 1:1 to 2:1 can typically achieve good VM performance
– 3:1 may cause some VM performance degradation
– 4:1 or greater may cause significant VM performance problems
The ratio of virtual CPUs to host CPUs should be determined by running performance
tests on your VM and host systems. Acceptable performance depends on many factors,
including:

– the tasks that your VM systems perform
– the volume of tasks to be processed
– the desired rate at which these tasks need to be processed
• Memory
Reserving 3 GB for the host is a good starting point, but memory requirements for the
host operating system scale with the amount of physical memory available. For systems
with a large amount of physical memory, increase the memory reserved for the host
operating system. For example, on a system with 1 TB of memory, Oracle recommends
reserving at least 20 GB for the host operating system. If the combined workload of the
host and all VMs exceeds the available physical RAM, the performance impact is severe.
However, if VMs are typically idle, you might not need to allocate as much RAM. Perform
performance testing to ensure that your applications always have sufficient memory.
• Storage
The minimum disk space required for the host operating system, usually 6 GB, must be
met. Each virtual machine also requires its own storage for the guest operating system
and for swap usage. Allow around 6 GB, at minimum, per virtual machine that you intend
to create, but consider the purpose of each virtual machine and scale accordingly.
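As a rough planning aid, the CPU ratio guidance above can be turned into a quick calculation. The following sketch uses hypothetical values: on a real host you would take HOST_CORES from a tool such as nproc and sum the vCPUs that you plan to assign across all VMs.

```shell
#!/bin/sh
# Sketch: compute the planned virtual-CPU-to-host-CPU ratio.
# HOST_CORES and TOTAL_VCPUS are example values; on a real host,
# HOST_CORES would typically come from `nproc`.
HOST_CORES=16
TOTAL_VCPUS=40

RATIO=$(awk -v v="$TOTAL_VCPUS" -v h="$HOST_CORES" 'BEGIN { printf "%.1f", v / h }')
echo "Planned vCPU:pCPU ratio is ${RATIO}:1"

# Flag ratios at or above 3:1, where performance degradation may begin.
if awk -v r="$RATIO" 'BEGIN { exit !(r >= 3) }'; then
    echo "Warning: a ratio of 3:1 or higher may degrade VM performance"
fi
```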

About Virtualization Packages


Oracle Linux provides several virtualization packages that enable you to work with KVM.
You can install virtualization packages from the Oracle Linux yum server or from the
Unbreakable Linux Network (ULN). Packages are provided from various upstream
projects, including:
• https://www.linux-kvm.org/page/Main_Page
• https://libvirt.org/
• https://www.qemu.org/
In most cases, the following packages are the minimum required for a virtualization
host:
• libvirt: This package provides an interface to KVM, as well as the libvirtd
daemon for managing guest virtual machines.
• qemu-kvm: This package installs the QEMU emulator that performs hardware
virtualization so that guests can access host CPU and other resources.
• virt-install: This package provides command line utilities for creating and
provisioning guest virtual machines.
• virt-viewer: This package provides a graphical utility that can be loaded into a
desktop environment to access the graphical console of a guest virtual machine.
As an alternative to installing virtualization packages individually, you can install
virtualization package groups.
The Virtualization Host package group contains the minimum set of packages that
are required for a virtualization host. If your Oracle Linux system includes a GUI
environment, you can also choose to install the Virtualization Client package
group.
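Whichever installation method you choose, you can confirm afterward that the minimum package set named above is present. The following sketch queries the RPM database for each package; rpm is assumed to be available, as it is on any Oracle Linux system.

```shell
#!/bin/sh
# Sketch: report whether each minimum virtualization package is installed.
for pkg in libvirt qemu-kvm virt-install virt-viewer; do
    if rpm -q "$pkg" >/dev/null 2>&1; then
        echo "$pkg: installed"
    else
        echo "$pkg: missing"
    fi
done
```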

2 Installing KVM User Space Packages
This chapter describes how to configure the appropriate ULN channels or yum repositories,
and how to install user space tools to manage a standalone instance of KVM. A final check is
performed to validate whether the system is capable of hosting guest virtual machines.

Configuring Yum Repositories and ULN Channels


Virtualization packages and their dependencies are available in a variety of locations on the
Oracle Linux yum server and on the Unbreakable Linux Network (ULN), depending on Oracle
Linux release, the system architecture and use case or support requirements.

Oracle Linux 7
Due to the availability of several very different kernel versions and the requirement for more
recent versions of user space tools that may break compatibility with RHCK, there are several
different yum repositories or ULN channels across the different supported architectures for
Oracle Linux 7. Packages in the different channels have different use cases and have
different levels of support. This section describes the available yum repositories and ULN
channels for each architecture.
Repositories and Channels That Are Available for x86_64 Platforms

Yum repository: ol7_latest
ULN channel: ol7_x86_64_latest

The virtualization packages that are provided in this repository or ULN channel maximize
compatibility with RHCK and with Red Hat Enterprise Linux. Packages from this repository or
ULN channel are fully supported for all kernels.

Yum repository: ol7_kvm_utils
ULN channel: ol7_x86_64_kvm_utils

The virtualization packages that are provided in this repository or ULN channel take
advantage of newer features and functionality available in upstream packages. These
packages are also engineered to work with KVM features that are enabled in the latest
releases of UEK. If you install these packages, you must also install the latest version of
either UEK R4 or UEK R5.

Note: The ol7_kvm_utils and ol7_x86_64_kvm_utils channels distribute 64-bit packages
only. If you manually installed any 32-bit packages, for example, libvirt-client, Yum
updates from these channels will fail. To use these channels, you must first remove any
32-bit versions of the packages distributed by these channels that are installed on your
system.

You may choose to configure on-premises virtualization the same way that you configure
systems on Oracle Cloud Infrastructure or other Oracle products that use KVM. Oracle
Linux provides specific virtualization packages in this channel to assist with the
configuration. Packages in this channel are delivered with limited support. Limited support
coverage is only available for packages that are tested on Oracle Linux 7 with UEK. The
following are the limitations and requirements:
• A minimum of Oracle Linux 7.4 is required.
• A minimum of Unbreakable Enterprise Kernel Release 4 is required.
• Guest operating systems, as supported on Oracle Cloud Infrastructure and described at
https://docs.oracle.com/iaas/Content/Compute/References/images.htm.
• KVM guests boot by using iSCSI, VirtIO, VirtIO-SCSI, or IDE device emulation.

Yum repositories: ol7_developer, ol7_developer_kvm_utils
ULN channels: ol7_x86_64_developer, ol7_x86_64_developer_kvm_utils

The virtualization packages that are provided in these repositories or ULN channels take
advantage of newer features and functionality that is available upstream, but are
unsupported and are made available for developer use only. If you are using the Oracle
Linux yum server, you can configure these repositories by installing the
oraclelinux-developer-release-el7 package and then enabling the repositories by editing
the repository files or by using yum-config-manager.
Repositories and Channels That Are Available for aarch64 Platforms

Yum repository: ol7_latest
ULN channel: ol7_aarch64_latest

The virtualization packages that are provided in this repository or ULN channel include the
latest virtualization packages, which are available and fully supported on Unbreakable
Enterprise Kernel Release 5.

Yum repository: ol7_developer
ULN channel: ol7_aarch64_developer

The virtualization packages that are provided in this repository or ULN channel take
advantage of newer features and functionality, which are available upstream, but are
unsupported and are made available for developer use only.


Caution:
Virtualization packages may also be available in the ol7_developer_EPEL yum
repository or the ol7_arch_developer_EPEL ULN channel. These packages are
unsupported, contain features that might never be tested on Oracle Linux, and
may conflict with virtualization packages from other channels. If you intend to
use packages from any of the previously listed repositories or channels, first
uninstall any virtualization packages that were installed from the EPEL
repository. You can also disable the repository or channel, or set exclusions
to prevent virtualization packages from being installed from it.

Depending on your use case and support requirements, you must enable the
repository or ULN channel that you require before installing the virtualization packages
from that repository or ULN channel.
If you are using ULN, follow these steps to ensure that the system is registered with
ULN and that the appropriate channel is enabled:
1. Log in to https://linux.oracle.com with your ULN user name and password.
2. On the Systems tab, from the list of registered systems, select the link name for
the specified system.
3. On the System Details page, select Manage Subscriptions.
4. On the System Summary page, from the list of available channels, select each of
the required channels, then click the right arrow to move each channel to the list of
subscribed channels.
5. Select Save Subscriptions.
If you are using the Oracle Linux yum server, you can either edit the repository
configuration files in /etc/yum.repos.d/ directly; or alternatively, if you have the yum-
utils package installed, you can use the yum-config-manager command, for
example:
sudo yum-config-manager --enable ol7_kvm_utils ol7_UEKR6

If you want to prevent yum from installing the package versions from a particular
repository, you can set an exclude option on these packages for that repository. For
instance, to prevent yum from installing the virtualization packages in the
ol7_developer_EPEL repository, use the following command:
sudo yum-config-manager --setopt="ol7_developer_EPEL.exclude=libvirt* qemu*" --save

Oracle Linux 8
The number of options available on Oracle Linux 8 is significantly reduced because the
available kernels are newer and there are fewer options to select from.
Repositories and Channels That Are Available for Oracle Linux 8


Yum repository: ol8_appstream
ULN channels: ol8_x86_64_appstream, ol8_aarch64_appstream

The virtualization packages that are provided in this repository or ULN channel maximize
compatibility with RHCK and with Red Hat Enterprise Linux. Packages from this repository
or ULN channel are fully supported for all kernels. Packages in this repository or ULN
channel are released as part of the default DNF module: virt.

Yum repository: ol8_kvm_appstream
ULN channels: ol8_x86_64_kvm_appstream, ol8_aarch64_kvm_appstream

The virtualization packages that are provided in this repository or ULN channel take
advantage of newer features and functionality available in upstream packages. These
packages are also engineered to work with KVM features that are enabled in the latest
releases of UEK. If you install these packages, you must also install the latest version of
UEK R6 to use these features.
The Oracle KVM stack packages released in this repository or ULN channel are available
as separate DNF module streams: virt:kvm_utils and virt:kvm_utils2. Additionally, some
associated non-modular packages, such as virt-manager, edk2, swtpm, and libtpms, are
available within this repository or channel. Packages that are included here are either not
available in the standard AppStream repository or are available at a more recent version to
take advantage of newer functionality.
See Switching Application Streams on Oracle Linux 8 for more information.

Since the Application Stream repository or channel is required for system software on Oracle
Linux 8, it is enabled by default on any Oracle Linux 8 system.


If you intend to use the virt:kvm_utils2 application stream for improved functionality
and integration with newer features released within UEK, you must subscribe to the
ol8_kvm_appstream yum repository or ol8_base_arch_kvm_utils ULN channel. Note
that the virt:kvm_utils application stream is now a legacy stream on Oracle Linux 8.

If you are using ULN, you can check that the system is registered with ULN and that
the appropriate channel is enabled:
1. Log in to https://linux.oracle.com with your ULN user name and password.
2. On the Systems tab, from the list of registered systems, select the link name for
the specified system.
3. On the System Details page, select Manage Subscriptions.
4. On the System Summary page, from the list of available channels, select each of
the required channels, then click the right arrow to move each channel to the list of
subscribed channels.
5. Select Save Subscriptions.
If you are using the Oracle Linux yum server, make sure that you have installed the
most recent version of the oraclelinux-release-el8 package and enable the
required repositories. For example:
sudo dnf install -y oraclelinux-release-el8
sudo dnf config-manager --enable ol8_appstream ol8_kvm_appstream
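After enabling repositories, you can confirm the result by searching the repository configuration files. This is a minimal sketch, assuming the standard dnf layout under /etc/yum.repos.d; the directory and repository id are parameterized so the same logic works against any location.

```shell
#!/bin/sh
# Sketch: check whether a repository id appears in the repo configuration files.
# REPO_DIR defaults to the standard dnf location; REPO_ID is the id to look for.
REPO_DIR=${REPO_DIR:-/etc/yum.repos.d}
REPO_ID=${REPO_ID:-ol8_kvm_appstream}

if grep -qs "^\[$REPO_ID\]" "$REPO_DIR"/*.repo; then
    echo "$REPO_ID is defined in $REPO_DIR"
else
    echo "$REPO_ID not found in $REPO_DIR"
fi
```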

Installing Virtualization Packages


Virtualization packages provide an interface to the KVM hypervisor, as well as user-
space tools.

Installing Virtualization Packages During an Oracle Linux System Installation
You can use the following procedures to install virtualization packages during system
installation. The Anaconda installation program can be used to install a single
virtualization host. You can use a kickstart file to install virtualization hosts over the
network.
Note that installation of virtualization software during system installation defaults to the KVM
stack most compatible with RHCK. If you wish to use an alternate KVM stack, you may need
to add other yum or dnf configuration; if you are running Oracle Linux 8, you may also need
to select an alternate application stream for the installation.

Using the Installation Program to Install Virtualization Hosts


The following steps describe how to install a virtualization host with the Oracle Linux
graphical installation program:
1. Boot the Oracle Linux installation media and proceed to the Software Selection
screen.
2. Select one of the following virtualization host types:


Minimum Virtualization Host
(Available on Oracle Linux 7 and Oracle Linux 8)
a. Select Virtualization Host in the Base Environment section.
b. Select Virtualization Host in the Add-ons for Selected Environment section.

Virtualization Host with GUI
(Available only on Oracle Linux 7)
a. Select Server with GUI in the Base Environment section.
b. Select the following package groups in the Add-ons for Selected Environment
section:
• Virtualization Client
• Virtualization Hypervisor
• Virtualization Tools
3. Follow the prompts to complete the installation.

Using a Kickstart File to Install Virtualization Hosts


You can install virtualization hosts by specifying individual packages or package groups in the
%packages section of a kickstart file.

Specify virtualization packages individually, as in the following example:


%packages
libvirt
qemu-kvm
virt-install

Specify the appropriate package groups for the installation type in the %packages section of
the kickstart file by using the @GroupID format:

Minimum Virtualization Host


%packages
@virtualization-hypervisor
@virtualization-tools
# The following group is optional. Uncomment line to include...:
#@virtualization-platform

Virtualization Host with GUI


%packages
@virtualization-hypervisor
@virtualization-client
@virtualization-platform
@virtualization-tools
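A %packages section like the ones above can also be generated and sanity-checked from the shell. The following sketch is illustrative only: it writes a minimal section for the Minimum Virtualization Host case to a temporary file and counts the group entries (the %end terminator is included for completeness; the file location is arbitrary).

```shell
#!/bin/sh
# Sketch: write a minimal kickstart %packages section and verify its contents.
KS=$(mktemp)

cat > "$KS" <<'EOF'
%packages
@virtualization-hypervisor
@virtualization-tools
%end
EOF

# Count the package-group entries that made it into the file.
grep -c '^@virtualization-' "$KS"

rm -f "$KS"
```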

Installing Virtualization Packages on an Existing System


1. Log in as the root user on the target Oracle Linux system.
2. Ensure that your system has the appropriate Yum repository or ULN channel enabled for
the virtualization package versions that you wish to install. See Configuring Yum
Repositories and ULN Channels for more information.


3. Update the system so that it has the most recent packages available.
• If you are using Oracle Linux 7, run the yum update command.
• If you are using Oracle Linux 8, run the dnf update command.
4. Install virtualization packages on the system.
• If you are using Oracle Linux 7, run the following commands to install the base
virtualization packages and additional utilities:
sudo yum groupinstall "Virtualization Host"
sudo yum install qemu-kvm virt-install virt-viewer

• If you are using Oracle Linux 8, run the following commands to install the base
virtualization packages and additional utilities:
sudo dnf module install virt
sudo dnf install virt-install virt-viewer

See also Switching Application Streams on Oracle Linux 8.

Upgrading Virtualization Packages


Virtualization packages are updated by using the standard yum update or dnf update
command. Note that if you want to change the versions of the virtualization packages to
match the versions that are shipped in a particular yum repository or ULN channel, you
might need to specify the channel or repository from which you are installing packages.
For example, you would update to the latest supported virtualization packages that are
available in the ol7_kvm_utils repository as follows:
sudo yum --disablerepo="*" --enablerepo="ol7_kvm_utils" update

If you want to downgrade packages to a version in an alternate repository or channel, for
example, to downgrade from the virtualization packages in the ol7_kvm_utils repository to
the version of the same packages in the ol7_latest repository, you must first remove the
existing packages before installing the packages from the alternate repository:
sudo yum remove libvirt* qemu* virt-install
sudo yum --disablerepo="*" --enablerepo="ol7_latest" install libvirt qemu-kvm virt-install

Switching Application Streams on Oracle Linux 8


Virtualization packages on Oracle Linux 8 are released as a DNF module: virt. The
default stream in the module contains packages that are capable of working with both
RHCK and UEK. Alternate versions of the packages that are capable of taking
advantage of features supported only in UEK are available within a separate
application stream, virt:kvm_utils2, that is provided along with some newer versions
of non-modular packages within the ol8_kvm_appstream repository.

For more information about DNF modules and application streams, see Oracle Linux:
Managing Software on Oracle Linux.


Switching to the Oracle KVM Stack


On an existing Oracle Linux 8 system, you can switch from the default KVM stack to the
Oracle KVM stack in the virt:kvm_utils2 stream by performing the following steps:

• Remove any packages from the existing default virt stream:
sudo dnf module remove virt -y --all

• Reset the virt module state so that it is neither enabled nor disabled:
sudo dnf module reset virt -y

• Enable the virt:kvm_utils2 module and stream:
sudo dnf module enable virt:kvm_utils2 -y

• Perform any necessary package upgrade or downgrade operations to handle
dependencies for the enabled module and stream:
sudo dnf --allowerasing distro-sync

• Install the base packages from the virt:kvm_utils2 stream:
sudo dnf module install virt:kvm_utils2 -y

Not Supported:
Pre-existing guests that were created by using the default KVM stack are not
compatible and do not start under the Oracle KVM stack.

Note that although you are able to switch to the Oracle KVM stack and install the packages
while using RHCK, the stack is not compatible. You must be running a current version of UEK
to use this software.

Switching to the Default KVM Stack


On an existing Oracle Linux 8 system, you can switch from the Oracle KVM stack to the
default KVM stack by performing the following steps:
• Remove any packages from the existing Oracle virt:kvm_utils or virt:kvm_utils2
streams:
sudo dnf module remove virt:kvm_utils -y --all
sudo dnf module remove virt:kvm_utils2 -y --all

• Reset the virt module state so that it is neither enabled nor disabled:
sudo dnf module reset virt -y

• Enable the virt module and stream:
sudo dnf module enable virt -y

• Perform any necessary package upgrade or downgrade operations to handle
dependencies for the enabled module and stream:
sudo dnf --allowerasing distro-sync

• Install the base packages from the virt stream:


sudo dnf module install virt -y

Not Supported:
Pre-existing guests that were created by using the Oracle KVM stack are not
compatible and do not start under the default KVM stack.

Validating the Host System


The libvirt tools provide a validation utility that checks whether a system is capable of
functioning correctly as a virtualization host. The utility can check several types of
virtualization functionality; KVM functionality is covered specifically by testing the
qemu virtualization type.

To test whether a system can act as a KVM host, run the following command:
sudo virt-host-validate qemu

If all of the checks return a PASS value, the system can host guest virtual machines. If
any of the tests fail, a reason is provided and information is displayed on how to
resolve the issue if such an option is available.
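When validating hosts from a script, it can be convenient to count FAIL results rather than read the output by hand. The sketch below parses a captured sample of virt-host-validate output; the sample lines are illustrative, and on a real host you would pipe the command's output directly.

```shell
#!/bin/sh
# Sketch: count FAIL results in virt-host-validate output.
# SAMPLE holds illustrative captured output; on a real host, replace it with:
#   virt-host-validate qemu
SAMPLE='QEMU: Checking for hardware virtualization : PASS
QEMU: Checking if device /dev/kvm exists : PASS
QEMU: Checking for cgroup cpu controller support : FAIL'

FAILS=$(printf '%s\n' "$SAMPLE" | grep -c ': FAIL')
if [ "$FAILS" -eq 0 ]; then
    echo "Host validation passed"
else
    echo "Host validation reported $FAILS failing check(s)"
fi
```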

Note:
If the following message is displayed, the system is not capable of
functioning as a KVM host:
QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are
available, performance will be significantly limited)

In the event that this message is displayed, attempts to create or start a virtual
machine on the host are likely to fail.

3
KVM Usage
Several tools exist for administering the libvirt interface with KVM. In most cases, a variety
of different tools are capable of performing the same operation. This document focuses on
the tools that you can use from the command line. However, if you are using a desktop
environment, you might consider using a graphical user interface (GUI) such as the Virtual
Machine Manager, to create and manage virtual machines (VMs). For more information about
Virtual Machine Manager, see https://virt-manager.org/.

Checking the libvirt Daemon Status


To check the status of the libvirt daemon, run the following command on the virtualization
host:
sudo systemctl status libvirtd

The output should indicate that the libvirtd daemon is running, as shown in the following
example output:
* libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset:
enabled)
Active: active (running) since time_stamp; xh ago

If the daemon is not running, start it by running the following command:


sudo systemctl start libvirtd

After you verify that the libvirtd service is running, you can start provisioning guest
systems.

Base Operations
This section describes base operations, including creating, starting and stopping, and
deleting virtual machines.

Creating a New Virtual Machine


The virt-install command is the most commonly used command-line tool for creating
and setting up new virtual machines. This utility has many options to allow you to customize
your virtual machine and control how it is created. For complete documentation on this tool,
view the VIRT-INSTALL(1) manual page; or, for a quick list of options, you can run the virt-
install --help command.

The following example illustrates the creation of a simple virtual machine and assumes that
virt-viewer is installed, so that the installer can be loaded in a graphical environment:
virt-install --name guest-ol8 --memory 2048 --vcpus 2 \
--disk size=8 --location OracleLinux-R8.iso --os-variant ol8.0


The following are detailed descriptions of each of the options that are specified in the
example:
• --name is used to specify a name for the virtual machine. This is registered as a
domain within libvirt.
• --memory is used to specify the RAM available to the virtual machine and is
specified in MB.
• --vcpus is used to specify the number of virtual CPUs that should be available to
the virtual machine.
• --disk is used to specify hard disk parameters. In this case, only the size is
specified in GB. If a path is not specified, the disk image is automatically created
as a qcow2 file. If virt-install is run as root, the disk image is created
in /var/lib/libvirt/images/ and is named using the name specified for the
virtual machine at install. If virt-install is run as an ordinary user, the disk
image is created in $HOME/.local/share/libvirt/images/.
• --location is used to provide the path to the installation media. This can be an
ISO file, or an expanded installation resource hosted at a local path or remotely on
an HTTP or NFS server.
• --os-variant is an optional specification but provides some default parameters
for each virtual machine that can help improve performance for a specific
operating system or distribution. For a complete list of options available, run
osinfo-query os.
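If you need several similar guests, the same options can be scripted. The following sketch only prints one virt-install command per guest (a dry run, with hypothetical guest names) so that the commands can be reviewed before being run:

```shell
# Dry run: collect one virt-install command per guest in a variable and
# print the commands; the guest names are hypothetical examples
commands=$(for name in guest-ol8-a guest-ol8-b; do
  echo virt-install --name "$name" --memory 2048 --vcpus 2 \
    --disk size=8 --location OracleLinux-R8.iso --os-variant ol8.0
done)
printf '%s\n' "$commands"
```

Removing the echo would execute each command instead of printing it.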
When the command is run, the virtual machine is created and automatically started to
boot using the install media specified in the location parameter. If you have the virt-
viewer package installed and the command has been run in a terminal within a
desktop environment, the graphical console is opened automatically and you can
proceed with the guest operating system installation within the console.

Starting and Stopping Virtual Machines


After a virtual machine is created within KVM, it is registered as a domain within libvirt
and you are able to manage it using the virsh command. To obtain a complete list of
all registered domains and their status, run the following command:
virsh list --all

Output:
Id Name State
----------------------------------------------------
1 guest-ol8 running

Use the virsh help command to view available options and syntax. For example, to
find out more about the options available to listings of virtual machines, run virsh
help list. This command shows options to view listings of virtual machines that are
stopped or paused or that are currently active.
To start a virtual machine, run the following command:
virsh start guest-ol8


Output:
Domain guest-ol8 started

To gracefully shut down a virtual machine, run the following command:


virsh shutdown guest-ol8

Output:
Domain guest-ol8 is being shutdown

To reboot a virtual machine, run the following command:


virsh reboot guest-ol8

Output:
Domain guest-ol8 is being rebooted

To suspend a virtual machine, run the following command:


virsh suspend guest-ol8

Output:
Domain guest-ol8 suspended

To resume a suspended virtual machine, run the following command:


virsh resume guest-ol8

Output:
Domain guest-ol8 resumed

To forcefully stop a virtual machine, run the following command:


virsh destroy guest-ol8

Output:
Domain guest-ol8 destroyed

Deleting a Virtual Machine


The following steps can be followed to remove a virtual machine from a system:
1. If you are unsure of where the disk for the virtual machine is located, obtain this
information before you remove the virtual machine, so that you can find it later and
remove it manually. You can do this by dumping information about the virtual machine
and checking for the source files. For example, run:
virsh dumpxml --domain guest-ol8 | grep 'source file'

Output:


<source file='/home/testuser/.local/share/libvirt/images/guest-ol8-1.qcow2'/>

2. Shut down the virtual machine, if possible. For example, run:


virsh shutdown guest-ol8

If the virtual machine can't be shut down gracefully, you can force it to stop by
running:
virsh destroy guest-ol8

3. To delete the virtual machine, run:


virsh undefine guest-ol8

This step removes all configuration information about the virtual machine from
libvirt. Storage artifacts such as virtual disks are left intact. If you need to remove
these as well, you can delete them manually from their location returned in the first
step in this procedure. For example, you could run:
rm /home/testuser/.local/share/libvirt/images/guest-ol8-1.qcow2

Note that it is not possible to delete a virtual machine while it has snapshots. Remove
any snapshots by using the virsh snapshot-delete command before attempting to
delete the virtual machine.
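The steps above can be wrapped in a small helper script. The following sketch uses a hypothetical remove_guest function that shuts the guest down (forcing a stop if a graceful shutdown fails) and then removes its configuration; disk images and snapshots must still be handled separately, as described above:

```shell
# Sketch: remove a guest's libvirt configuration; disk images are
# deliberately left in place so that they can be deleted manually
remove_guest() {
  local guest="$1"
  # Try a graceful shutdown first; fall back to a forced stop
  virsh shutdown "$guest" || virsh destroy "$guest"
  virsh undefine "$guest"
}
```

For example, remove_guest guest-ol8 runs the same virsh commands as the procedure above.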

Configuring a Virtual Machine With a Virtual Trusted Platform Module
A virtual Trusted Platform Module (vTPM) is a software-based representation of a
physical Trusted Platform Module 2.0 chip. A vTPM acts like any other virtual device
and provides security-related functions such as random number generation,
attestation, and key generation. When added to a virtual machine, a vTPM enables the
guest operating system to create and store keys that are private and not exposed to
the guest operating system. As a result, if a virtual machine is compromised, the
risk of its secrets being compromised is greatly reduced because the keys can be
used only by the guest operating system for encryption or signing.
You can add a vTPM to an existing Oracle Linux 7 or Oracle Linux 8 KVM virtual
machine. When you configure a vTPM, the virtual machine files are encrypted, but not
the disks. However, you can choose to add encryption explicitly for the virtual machine
and its disks.

Note:
Virtual Trusted Platform Module is available on Oracle Linux 7 and Oracle
Linux 8 KVM guests but not on QEMU.

To provide a vTPM to an existing Oracle Linux 7 or Oracle Linux 8 KVM virtual
machine, follow the steps below.


1. Install the vTPM packages:


yum -y install swtpm libtpms swtpm-tools

2. Shut down the KVM virtual machine.


3. Edit the KVM virtual machine configuration to include TPM settings. You can either
modify the KVM virtual machine XML directly, or you can use the virsh edit command
to edit the XML and get validation for your changes.
• Use the virsh edit command to update the configuration for the virtual machine:
virsh edit guest-ol8

• Modify the KVM virtual machine's XML to include the TPM, as shown in the tpm
section in the following example:
<devices>
...
</input>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<tpm model='tpm-crb'>
<backend type='emulator' version='2.0'/>
</tpm>
<graphics type='vnc' port='-1' autoport='yes'>
<listen type='address'/>
</graphics>
...
</devices>

Note that if you are creating a new virtual machine, the virt-install command on Oracle
Linux 8 also provides a --tpm option that allows you to specify the vTPM information at
installation time, for example:
virt-install --name guest-ol8-tpm2 --memory 2048 --vcpus 2 \
--disk path=/systest/images/guest-ol8-tpm2.qcow2,size=20 \
--location /systest/iso/ol8.iso --os-variant ol8 \
--network network=default --graphics vnc,listen=0.0.0.0 \
--tpm emulator,model=tpm-crb,version=2.0

If you are using Oracle Linux 7, the virt-install command does not provide this
option, but you can manually edit the configuration after the virtual machine is created.
4. Start the KVM virtual machine.

Working With Storage for KVM Guests


Libvirt handles a variety of different storage mechanisms that an administrator can configure
for use by virtual machines. These mechanisms are organized into different pools or units. By
default, libvirt uses directory-based storage pools for the creation of new disks, but pools can
be configured for different storage types, including physical disk, NFS, and iSCSI.
Depending on the storage pool type that is configured, different storage volumes can be
made available to your virtual machines to be used as block devices. In some cases, such as
when using iSCSI pools, volumes do not need to be defined, as the LUNs for the iSCSI target
are automatically presented to the virtual machine.
Note that you do not need to specifically define different storage pools and volumes to use
libvirt with KVM. These tools are used to help administrators manage how storage is used

and consumed by virtual machines as they need it. It is perfectly acceptable to use the
default directory-based storage and take advantage of manually mounted storage at
the default locations.
Oracle recommends using Oracle Linux Virtualization Manager to easily manage and
configure complex storage requirements for KVM environments.

Storage Pools
Storage pools provide logical groupings of storage types that are available to host the
volumes that can be used as virtual disks by a set of virtual machines. A wide variety
of different storage types are provided. Local storage can be used in the form of
directory-based storage pools, file system storage, and disk-based storage. Other
storage types such as NFS and iSCSI provide standard network-based storage, while
RBD and Gluster types provide support for distributed storage mechanisms. More
information is provided at https://libvirt.org/storage.html.
Storage pools help to abstract underlying storage resources from the virtual machine
configurations. This is particularly useful if you suspect that resources such as virtual
disks might change physical location or media type. This becomes even more important
when using network-based storage, where target paths, DNS, or IP addressing may
change over time. By abstracting this configuration information, an administrator can
manage resources in a consolidated way without needing to update multiple virtual
machine configurations.
You can create transient storage pools that are available until the host reboots, or you
can define persistent storage pools that are restored after a reboot.
Transient storage pools are started automatically as soon as they are created, and the
volumes within them are made available to virtual machines immediately. However,
any configuration information about a transient storage pool is lost after the pool is
stopped, the host reboots, or the libvirtd service is restarted. The storage itself
is unaffected, but virtual machines configured to use resources in a transient storage
pool lose access to these resources. Transient storage pools are created using the
virsh pool-create command.

For most use cases, you should consider creating persistent storage pools. Persistent
storage pools are defined as a configuration entry that is stored within /etc/libvirt.
Persistent storage pools can be stopped and started and can be configured to start
when the host system boots. This can be useful as libvirt can take care of
automatically mounting and enabling access to network based resources. Persistent
storage pools are created using the virsh pool-define command, and usually
need to be started after they have been created before you are able to use them.
The following examples show how you can set up directory based storage, and
perform basic operations on it.
To create a directory-based storage pool named pool_dir at /share/storage_pool on the
host system, run:
virsh pool-define-as pool_dir dir --target /share/storage_pool

You can verify that the pool was created by running:


virsh pool-list --all

To start the storage pool and make it accessible to any virtual machines, run:

virsh pool-start pool_dir

If you require the storage pool to start at boot, run:


virsh pool-autostart pool_dir

To stop the storage pool, run:


virsh pool-stop pool_dir

To remove the storage pool configuration completely, run:


virsh pool-undefine pool_dir

Other storage pool types can be easily created by using the same virsh pool-define-as
command, or the virsh pool-create-as command for transient pools. The options that are
used with these commands depend on the storage type that you select when you create your
storage pool. For example, to create transient file system based storage that mounts a
formatted block device, /dev/sdc1, at the mount point /share/storage_mount,
you can run:

Similarly, you can easily add an NFS share as a storage pool. For example, run:
virsh pool-create-as pool_nfs netfs --source-path /ISO --source-host nfs.example.com \
--target /share/storage_nfs

It is also possible to create an XML file representation of the storage pool configuration and
load the configuration information from file using the virsh pool-define command. For
example, you could create a storage pool for a Gluster volume by creating an XML file named
gluster_pool.xml with the following content:
<pool type='gluster'>
<name>pool_gluster</name>
<source>
<host name='192.0.2.1'/>
<dir path='/'/>
<name>gluster-vol1</name>
</source>
</pool>

This example assumes that a Gluster server is already configured and running on a host with
IP address 192.0.2.1 and that a volume named gluster-vol1 is exported. Note that the
glusterfs-fuse package must be installed on the host and you should verify that you are
able to mount the Gluster volume before attempting to use it with libvirt.
Run the following command to load the configuration information from the gluster_pool.xml
file into libvirt:
virsh pool-define gluster_pool.xml
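When scripting pool definitions, the XML can be generated from variables rather than written by hand; a minimal sketch using a shell here-document, reusing the example host address and volume name from above:

```shell
# Generate a Gluster storage pool definition from variables; the host
# address and volume name are the example values from the text
gluster_host=192.0.2.1
gluster_volume=gluster-vol1
cat > gluster_pool.xml <<EOF
<pool type='gluster'>
  <name>pool_gluster</name>
  <source>
    <host name='${gluster_host}'/>
    <dir path='/'/>
    <name>${gluster_volume}</name>
  </source>
</pool>
EOF
```

The resulting file can then be loaded with virsh pool-define gluster_pool.xml.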

Note that Oracle recommends using Oracle Linux Virtualization Manager when attempting to
use complex network based storage such as Gluster.


For more information on the XML format for a storage pool definition, see
https://libvirt.org/formatstorage.html#StoragePool.

Storage Volumes
Storage volumes are created within a storage pool and represent the virtual disks that
can be loaded as block devices within one or more virtual machines. Some storage
pool types do not need storage volumes to be created individually, as the storage
mechanism may already present these as block devices. For example, iSCSI
storage pools present the individual LUNs for an iSCSI target as separate block
devices.
In some cases, such as when using directory or file system based storage pools,
storage volumes are individually created for use as virtual disks. In these cases,
several disk image formats are supported, although some formats, such as qcow2, may
require additional tools such as qemu-img for creation.

For disk-based pools, standard partition type labels are used to represent individual
volumes, while for pools that are based on the logical volume manager, the volumes
themselves are presented individually within the pool.
Depending on the storage pool type, you can create new storage volumes by using the
virsh vol-create command. This command expects you to provide an XML file
representation of the volume parameters. For example, to create a new volume in a
storage pool named pooldir, you could create an XML file, volume1.xml, with the
required parameters and run:
virsh vol-create pooldir volume1.xml

The XML for a volume may depend on the pool type and the volume being created,
but in the case of a sparsely allocated 10 GB image in qcow2 format, the XML may
look similar to the following:
<volume>
<name>volume1</name>
<allocation>0</allocation>
<capacity unit="G">10</capacity>
<target>
<path>/home/testuser/.local/share/libvirt/images/volume1.qcow2</path>
<permissions>
<owner>107</owner>
<group>107</group>
<mode>0744</mode>
<label>virt_image_t</label>
</permissions>
</target>
</volume>

For more information, see https://libvirt.org/formatstorage.html#StorageVol.


You can use the virsh vol-create-as command to create a volume directly by
passing command-line arguments to it. Many of the available options, such as
the allocation or format, have default values set, so you can typically just specify the
name of the storage pool where the volume should be created, the name of the
volume, and the capacity that you require. For example, run the command:


virsh vol-create-as --pool pooldir --name volume1 --capacity 10G

Note that storage volumes can be sparsely allocated by setting the allocation value for the
initial size of the volume to a value lower than the capacity of the volume. The allocation
indicates the initial or current physical size of the volume, while the capacity indicates the size
of the virtual disk as it is presented to the virtual machine. Sparse allocation is frequently
used to over-subscribe physical disk space where virtual machines may ultimately require
more disk space than is initially available. For a non-sparsely allocated volume, the allocation
matches or exceeds the capacity of the volume. Exceeding the capacity of the disk provides
space for metadata, if required.
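The arithmetic behind over-subscription can be illustrated with a quick sketch; all of the figures below are hypothetical:

```shell
# Hypothetical example: three sparse volumes, each presenting 10 GiB of
# capacity but with only 2 GiB currently allocated, on 20 GiB of
# physical storage
capacity_gib=$(( 3 * 10 ))     # total virtual capacity presented to guests
allocation_gib=$(( 3 * 2 ))    # total physical space currently in use
physical_gib=20
echo "presented ${capacity_gib} GiB, using ${allocation_gib} GiB of ${physical_gib} GiB"
# The storage is over-subscribed because the presented capacity exceeds
# the physical space available
[ "$capacity_gib" -gt "$physical_gib" ] && echo "over-subscribed"
```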
You can use the virsh vol-info command to view information about a volume to
determine its type, capacity, and allocation. For example:
virsh vol-info --pool pooldir volume1

Output:
Name: volume1
Type: file
Capacity: 9.31 GiB
Allocation: 8.00 GiB

You can clone a storage volume by using the virsh vol-clone command. This command
simply takes the name of the original volume and the name of the cloned volume as
parameters, and the clone is created in the same storage pool with identical parameters. For
example:
virsh vol-clone --pool pooldir volume1 volume1-clone

Note that you can use the --pool option if you have volumes with matching names in
different pools on the same system and you need to specify which pool the operation should
take place in.
You can delete a storage volume by running the virsh vol-delete command. For
example, to delete the volume named volume1 in the storage pool named pooldir, run the
following command:
virsh vol-delete volume1 --pool pooldir

As long as a storage volume is not being used by a virtual machine, you can resize it using
the virsh vol-resize command. For example:
virsh vol-resize --pool pooldir volume1 15G

It is generally not advisable to reduce the size of an existing volume, as this can risk
destroying data. However, if you attempt to resize a volume to reduce it, you must specify the
--shrink option with the new size value.


Managing Virtual Disks


Virtual disks are attached to virtual machines, usually as block devices based on disk
images stored at a particular path. Virtual disks can be defined for a virtual machine
when it is created, or can be added to an existing virtual machine. The command-line
tools available for managing virtual disks are not completely consistent
in their handling of storage volumes and storage pools.

Adding or Removing a Virtual Disk


Storage volumes can be attached to a virtual machine as a virtual disk when the virtual
machine is created. The virt-install command allows you to specify the volume
or storage pool directly for any use of the --disk option. For example, to use an
existing volume when creating a virtual machine by using virt-install, specify the
disk as follows:
virt-install --name guest --disk vol=storage_pool1/volume1.qcow2
...

You can equally use virt-install to create a virtual disk as a volume within an
existing storage pool automatically at install. For example, to create a new disk image
as a volume within the storage pool named storage_pool1:
virt-install --name guest --disk pool=storage_pool1,size=10
...

Tools to attach a volume to an existing virtual machine are limited, and it is generally
recommended that you use a GUI tool such as virt-manager or Cockpit to assist with
this. If you expect that you might need to work with volumes frequently, consider using
Oracle Linux Virtualization Manager.
You can use the virsh attach-disk command to attach a disk image to an
existing virtual machine. This command requires that you provide the path to the disk
image when you attach it to the virtual machine. If the disk image is a volume, you can
obtain its correct path by first running the virsh vol-list command:
virsh vol-list storage_pool_1

Output:
Name Path
--------------------------------------------------------------------
volume1 /share/disk-images/volume1.qcow2

Attach the disk image within the existing virtual machine configuration so that it is
persistent and attaches itself on each subsequent restart of the virtual machine:
virsh attach-disk --config --domain guest1 --source /share/disk-images/volume1.qcow2 --target sdb1

Note that you can use the --live option with this command to temporarily attach a
disk image to a running virtual machine; or you can use the --persistent option to
attach a disk image to a running virtual machine and also update its configuration so
that the disk is attached on each subsequent restart.


Removing a Virtual Disk


You can remove a virtual disk from a virtual machine using the virsh detach-disk
command. For example, to remove the disk at the target sdb1 from the configuration for the
virtual machine named guest1, you could run:
virsh detach-disk --config guest1 sdb1

Note that you can use the --live option with this command to temporarily detach a disk
image from a running virtual machine; or you can use the --persistent option to detach a
disk image from a running virtual machine and also update its configuration so that the disk is
permanently detached from the virtual machine on subsequent restarts.
Where disks are attached as block devices within a guest virtual machine, you can obtain a
listing of the block devices attached to a guest so that you are able to identify the disk target
that is associated with a particular source image file, by running the virsh domblklist
command. For example, run:
virsh domblklist guest1

Detaching a virtual disk from the virtual machine does not delete the disk image file or
volume from the host system. If you need to delete a virtual disk, you can either manually
delete the source image file or delete the volume from the host.

Extending a Virtual Disk


You can extend a virtual disk image by using the virsh blockresize command
while the virtual machine is running. For example, to increase the size of the disk image at
the source location /share/disk-images/volume1.qcow2 on the running virtual machine named
guest1 to 20 GB, run:
virsh blockresize guest1 /share/disk-images/volume1.qcow2 20GB

You can check that the resize has worked by checking the block device information for the
running virtual machine, using the virsh domblkinfo command. For example, to list all
block devices attached to guest1 in human-readable format:
virsh domblkinfo guest1 --all --human

The virsh blockresize command allows you to scale up a disk on a live virtual machine,
but it does not guarantee that the virtual machine is able to immediately identify that the
additional disk resource is available. For some guest operating systems, restarting the virtual
machine may be required before the guest is capable of identifying the additional resources
available.
Individual partitions and file systems on the block device are not scaled by using this command.
You need to perform these operations manually from within the guest, as required.
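Inside the guest, that typically means growing the partition and then the file system. The sketch below only prints the commands, because device names, partition numbers, and file system tools vary; growpart is provided by the cloud-utils-growpart package, and resize2fs applies to ext4 file systems:

```shell
# Dry run: print typical in-guest steps for claiming newly added space
# (the device and partition names are hypothetical; adjust to your layout)
dev=/dev/sda
part=1
grow_cmd="growpart $dev $part"        # grow the partition to fill the disk
fs_cmd="resize2fs ${dev}${part}"      # then grow the ext4 file system
echo "$grow_cmd"
echo "$fs_cmd"
```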


Working With Memory and CPU Allocation


You can configure how many virtual CPUs are active and how much memory is
available for a given virtual machine. These configuration changes can be made on a
running virtual machine (hot plugging or hot unplugging) or can be stored in the virtual
machine XML configuration file. Note that changes can be limited by the virtual
machine host, the hypervisor, or by the original virtual machine description.

Configuring Virtual CPU Count


Optimizing vCPUs can impact the resource efficiency of your virtual machines. One
way to optimize is to adjust how many vCPUs are assigned to a virtual machine. Hot
plugging or hot unplugging vCPUs is when you configure vCPU count on a running
virtual machine.
You can change the number of virtual CPUs that are active in a guest virtual machine
by using the virsh setvcpus command. By default, virsh setvcpus works on
running guest virtual machines. If you want to change the virtual CPU count for a
stopped virtual machine, add the --config option.

For example, run the following command to set the number of virtual CPUs on a
running virtual machine:
virsh setvcpus domain count --live

Note that the count value cannot exceed the number of CPUs assigned to the guest
virtual machine. The count value also might be limited by the host, hypervisor, or from
the original description of the guest virtual machine.
The following command options are available:
• domain
A string value representing the virtual machine name, ID or UUID.
• count
A number value representing the number of virtual CPUs.
• --maximum
Controls the maximum number of virtual CPUs that can be hot-plugged the next
time the guest virtual machine is booted. Therefore, it can only be used with the --
config flag.
• --config
Changes the stored XML configuration for the guest virtual machine and takes
effect when the guest is started.
• --live
The guest virtual machine must be running and the change takes place
immediately, thus hot plugging a vCPU.
• --current
Affects the current guest virtual machine.


• --guest
Modifies the CPU state in the current guest virtual machine.
• --hotpluggable
Configures the vCPUs so they can be hot unplugged.
You can use the --config and --live options together if supported by the hypervisor.

If you do not specify --config, --live, or --current, the --live option is assumed. So, if
you do not select an option and the guest virtual machine is not running, the command fails.
In addition, if no options are specified, it is up to the hypervisor whether the --config option
is also assumed. This determines whether the XML configuration is adjusted to make the
change persistent.

Configuring Memory Allocation


To improve the performance of a virtual machine, you can assign additional host RAM to the
virtual machine. You can also decrease the amount of allocated memory to free up the
resource for other virtual machines or tasks. Hot plugging or hot unplugging memory is when
you configure memory size on a running virtual machine.
You use the virsh setmem command to change the available memory for a virtual
machine. If you want to change the maximum memory that can be allocated, use the virsh
setmaxmem command.

To change a virtual machine's memory allocation, run:
virsh setmem domain size

You must specify the size as a scaled integer in kibibytes, and the new value cannot exceed
the amount that you specified for the virtual machine. Values lower than 64 MB are unlikely to
work with most virtual machine operating systems. A higher maximum memory value does
not affect active virtual machines. If the new value is lower than the available memory, the
allocation shrinks, possibly causing the virtual machine to crash.
The following command options are available:
• domain
A string value representing the virtual machine name, ID or UUID.
• size
A number value representing the new memory size, as a scaled integer. The default unit
is KiB, but you can select from other valid memory units:
– b or bytes for bytes
– KB for kilobytes (10^3 or blocks of 1,000 bytes)
– k or KiB for kibibytes (2^10 or blocks of 1,024 bytes)
– MB for megabytes (10^6 or blocks of 1,000,000 bytes)
– M or MiB for mebibytes (2^20 or blocks of 1,048,576 bytes)
– GB for gigabytes (10^9 or blocks of 1,000,000,000 bytes)
– G or GiB for gibibytes (2^30 or blocks of 1,073,741,824 bytes)
– TB for terabytes (10^12 or blocks of 1,000,000,000,000 bytes)
– T or TiB for tebibytes (2^40 or blocks of 1,099,511,627,776 bytes)


• --config
Changes the stored XML configuration for the guest virtual machine and takes
effect when the guest is started.
• --live
The guest virtual machine must be running and the change takes place
immediately, thus hot plugging memory.
• --current
Affects the memory on the current guest virtual machine.
To set the maximum memory that can be allocated to a virtual machine, run:
virsh setmaxmem domain size --current

You must specify the size as a scaled integer in kibibytes unless you also specify a
supported memory unit; the supported units are the same as for the virsh setmem command.

All other options for virsh setmaxmem are the same as for virsh setmem, with one
caveat: if you specify the --live option, be aware that not all hypervisors allow live
changes of the maximum memory limit.
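Because both commands default to KiB when no unit suffix is given, it can help to convert familiar sizes first; a quick sketch:

```shell
# Convert 2 GiB to KiB, the default unit for virsh setmem and setmaxmem
mem_gib=2
mem_kib=$(( mem_gib * 1024 * 1024 ))
echo "$mem_kib"    # prints 2097152
```

You could then run, for example, virsh setmem guest-ol8 $mem_kib, where guest-ol8 is the example guest used earlier.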

Setting Up Networking for KVM Guests


KVM provides tools to add or remove vNICs of different types and to facilitate complex
networking architectures. Networking in KVM is achieved by creating virtual network
interface cards (vNICs) on the guest virtual machine. vNICs are mapped to the host
system's own network infrastructure in one of several ways: by connecting to a virtual
network running on the host itself; by directly using a physical interface on the host;
through the use of Single Root I/O Virtualization (SR-IOV) capabilities on a PCIe
device; or by using a network bridge that allows the vNIC to share a physical network
interface on the host.
In most cases, vNICs are defined when the virtual machine is first created. However,
the libvirt API allows vNICs of different types to be added to or removed from virtual
machines as required, and also facilitates hot plugging so that you can perform these
actions on a running virtual machine to avoid downtime.
Networking with KVM can be complex as it can involve components that are
configured directly on the host itself, configuration for the virtual machine within
libvirt and also configuration for the network within the running guest operating
system. As a result, for many development and testing environments, it is often
sufficient to configure each vNIC to use the virtual networking provided by libvirt. This driver creates a virtual network that uses Network Address Translation (NAT) to allow virtual machines to gain access to external resources. This approach is simple to configure and usually mirrors the network access that is already configured on the host system.
Where virtual machines may need to belong to specific subnetworks, a bridged
network can be used. Network bridges use virtual interfaces that are mapped to and
share a physical interface on the host. In this configuration, network traffic from a virtual machine behaves as if it is coming from an independent system on the same
physical network as the host system. Depending on the tools used, this may require some manual changes to the host network configuration before it can be set up for a virtual
machine.
Networking for virtual machines can also be configured to directly use a physical interface on
the host system. This can provide network behavior similar to using a bridged network
interface in that the vNIC behaves as if it is connected to the physical network directly. Direct connections use the macvtap driver to extend a physical network interface with a range of functions, including a virtual bridge that behaves similarly to a bridged network but is easier to configure and maintain and offers improved performance.
KVM can use SR-IOV for passthrough networking where a PCIe interface supports this functionality. The SR-IOV hardware must be set up and configured on the host system before you can attach the device to a virtual machine and configure the network to use it.
Where network configuration is likely to be complex, Oracle recommends using Oracle Linux
Virtualization Manager. Simple networking configurations and operations are described here
to facilitate the majority of basic deployment scenarios.

Setting Up and Managing Virtual Networks


If you are considering virtual networking with NAT for your virtual machine networking requirements, you can use the default virtual network that libvirt sets up, or you can create and manage separate virtual networks within KVM to group virtual machines on their own subnetworks.
Use the following command to list all virtual networks that are configured on the host:
virsh net-list --all

Output:
Name State Autostart Persistent
----------------------------------------------------------
default active yes yes

You can find out more about a network using the virsh net-info command. For example,
to find out about the default network, run:
virsh net-info default

Output:
Name: default
UUID: 16318035-eed4-45b6-99f8-02f1ed0661d9
Active: yes
Persistent: yes
Autostart: yes
Bridge: virbr0

Note that the virtual network uses a network bridge, called virbr0. This is not to be confused
with traditional bridged networking. The virtual bridge is not connected to a physical interface
and relies on NAT and IP forwarding to connect virtual machines to the physical network
beyond. Libvirt also handles IP address assignment for virtual machines using DHCP. The default network typically uses the 192.168.122.0/24 subnet. To see the full configuration
information about a network, use the virsh net-dumpxml command:
virsh net-dumpxml default


Output:
<network>
  <name>default</name>
  <uuid>16318035-eed4-45b6-99f8-02f1ed0661d9</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:82:75:1d'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
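As an illustrative sketch of creating an additional virtual network, a NAT-based network could be defined from a similar XML description. The network name, bridge name, and subnet below are assumptions, and a running libvirt daemon is required:

```shell
# Hypothetical network definition; adjust the name, bridge, and
# addresses to suit your environment and avoid clashes with the
# default network.
cat > /tmp/testnet.xml <<'EOF'
<network>
  <name>testnet</name>
  <forward mode='nat'/>
  <bridge name='virbr10' stp='on' delay='0'/>
  <ip address='192.168.150.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.150.2' end='192.168.150.254'/>
    </dhcp>
  </ip>
</network>
EOF

# Define the network persistently, start it, and mark it to
# start automatically with the libvirt daemon.
virsh net-define /tmp/testnet.xml
virsh net-start testnet
virsh net-autostart testnet
```

Guests can then be attached to this network by using --type network --source testnet with virsh attach-interface.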

Adding or Removing a vNIC


You can use the virsh attach-interface command to add a new vNIC to an existing virtual machine. This command can create a vNIC that uses any of the networking types that KVM supports.
virsh attach-interface --domain guest --type network --source default --config

You must specify the following parameters with this command:


• --domain
The virtual machine name, ID or UUID.
• --type
The type of networking that the vNIC should use. Available options include:
– network for a libvirt virtual network using NAT
– bridge for a bridge device on the host
– direct for a direct mapping to one of the host's network interfaces or bridges
– hostdev for a passthrough connection using a PCI device on the host.
• --source
The source that should be used for the network type specified. These vary
depending on the type:
– for a network, specify the name of the virtual network
– for a bridge specify the name of the bridge device
– for a direct connection specify the name of the host's interface or bridge
– for a hostdev connection specify the PCI address of the host's interface
formatted as domain:bus:slot.function.
• --config
Changes the stored XML configuration for the guest virtual machine and takes
effect when the guest is started.


• --live
The guest virtual machine must be running and the change takes place immediately, thus
hot plugging the vNIC.
• --current
Affects the current guest virtual machine.
Additional options are available to further customize the interface, such as setting the MAC address or configuring the target macvtap device when using some of the alternate network types. You can also use the --model option to change the model of network interface that is presented to the virtual machine. By default, the virtio model is used, but alternate models, such as e1000 or rtl8139, are available. Run virsh help attach-interface for more information, or refer to the VIRSH(1) man page.
Remove a vNIC from a virtual machine using the virsh detach-interface command. For
example, run:
virsh detach-interface --domain guest --type network --mac 52:54:00:41:6a:65 --config

Note that the domain or virtual machine name and type are required parameters. If the virtual
machine has more than one vNIC attached, you must specify the mac parameter to provide
the MAC address of the vNIC that you wish to remove. You can obtain this value by listing the
vNICs that are currently attached to a virtual machine. For example, you can run:
virsh domiflist guest

Output:
Interface Type Source Model MAC
-------------------------------------------------------
vnet0 network default virtio 52:54:00:8c:d2:44
vnet1 network default virtio 52:54:00:41:6a:65
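For example, a vNIC on the default virtual network can be hot plugged into a running guest and later detached. The guest name and MAC address below are illustrative, and a running libvirt guest is required:

```shell
# Attach a new vNIC to a running guest. Combining --live and
# --config applies the change immediately and also persists it
# in the stored configuration.
virsh attach-interface --domain guest --type network --source default \
    --live --config

# List the attached vNICs to find the MAC address of the new interface.
virsh domiflist guest

# Detach the vNIC by MAC address, again both live and persistently.
virsh detach-interface --domain guest --type network \
    --mac 52:54:00:41:6a:65 --live --config
```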

Bridged and Direct vNICs


Bridged vNICs are simple to configure and allow a virtual machine's network to act independently of the host's network configuration while sharing the same physical network interface to connect to the existing network infrastructure. This can reduce complexity and is relatively easy to manage.
Traditional network bridging using Linux bridges is supported using the bridge type when attaching an interface. The virsh iface-bridge command can be used to create a bridge on the host system and add a physical interface to it. For example, to create a bridge named vmbridge1 with the Ethernet port named enp0s31f6 attached, you can run:
virsh iface-bridge vmbridge1 enp0s31f6

Once the bridge is created, you can attach it using the virsh attach-interface
command as described in Adding or Removing a vNIC.
There are several issues that you may need to be aware of when using traditional Linux bridged networking for KVM guests. For instance, it is not simple to set up a bridge on a wireless interface due to the limited number of addresses available in 802.11 frames. Furthermore, the complexity of the code to handle software bridges can result in reduced throughput,
increased latency, and additional configuration complexity. The main advantage of this approach is that it allows the host system to communicate directly across the network stack with any guests configured to use bridged networking.
Most of the issues related to using traditional Linux bridges can be overcome by using the macvtap driver, which simplifies virtualized bridged networking significantly. For most bridged network configurations in KVM, this is the preferred approach because it offers better performance and is easier to configure. The macvtap driver is used when the network type is set to direct.

The macvtap driver creates endpoint devices that follow the tun/tap ioctl interface model, extending an existing network interface so that KVM can use it to connect to the physical network directly and support different network functions. These functions are controlled by setting a mode for the interface. The following modes are available:
• vepa (Virtual Ethernet Port Aggregator) is the default mode and forces all data
from a vNIC out of the physical interface to a network switch. If the switch supports
hairpin mode, different vNICs connected to the same physical interface are able to
communicate via the switch. Many switches currently do not support hairpin mode,
which means that virtual machines with direct connection interfaces running in
VEPA mode are unable to communicate, but can connect to the external network
via the switch.
• bridge mode connects all vNICs directly to each other so that traffic between
virtual machines using the same physical interface is not sent out to the switch and
is facilitated directly. This is the most useful option when using switches that do not
support hairpin mode, and when you need maximum performance for
communications between virtual machines. It is important to note that when
configured in this mode, unlike a traditional software bridge, the host is unable to
use this interface to communicate directly with the virtual machine.
• private mode behaves like a VEPA mode vNIC in the absence of a switch
supporting hairpin mode. However, even if the switch does support hairpin mode,
two virtual machines connected to the same physical interface are unable to
communicate with each other. This option has limited use cases.
• passthrough mode attaches a physical interface device or an SR-IOV Virtual
Function (VF) directly to the vNIC without losing the migration capability. All
packets are sent directly to the configured network device. There is a one-to-one
mapping between network devices and virtual machines when configured in
passthrough mode because a network device cannot be shared between virtual
machines in this configuration.
Unfortunately, the virsh attach-interface command does not allow you to specify the different modes available when attaching a direct type interface that uses the macvtap driver, and defaults to vepa mode. The graphical virt-manager utility makes setting up bridged networks using macvtap significantly easier and provides options for each mode.
Nonetheless, it is not very difficult to change the configuration of a virtual machine by
editing the XML definition for it directly. The following steps can be followed to
configure a bridged network using the macvtap driver on an existing virtual machine:

1. Attach a direct type interface to the virtual machine using the virsh attach-
interface command and specify the source for the physical interface that
should be used for the bridge. In this example, the virtual machine is called guest1
and the physical network interface on the host is a wireless interface called
wlp4s0:


virsh attach-interface --domain guest1 --type direct --source wlp4s0 --config

2. Dump the XML for the virtual machine configuration and copy it to a file that you can edit:
virsh dumpxml guest1 > /tmp/guest1.xml

3. Edit the XML for the virtual machine to change the vepa mode interface to use bridged
mode. If there are many interfaces connected to the virtual machine, or you wish to
review your changes, you can do this in a text editor. If you are happy to make this
change globally, run:
sed -i "s/mode='vepa'/mode='bridge'/g" /tmp/guest1.xml

4. Remove the existing configuration for this virtual machine and replace it with the modified
configuration in the XML file:
virsh undefine guest1
virsh define /tmp/guest1.xml

5. Restart the virtual machine for the changes to take effect. The direct interface is attached in bridge mode, is persistent, and is automatically started when the virtual machine boots.

Interface Bonding for Bridged Networks


The use of bonded interfaces for higher throughput is common where hosts may run several
concurrent virtual machines that are providing multiple services at once. Where a single
physical interface may have provided sufficient bandwidth for applications hosted on a
physical server, the increase in network traffic when running multiple virtual machines can
have a negative impact on network performance where a single physical interface is shared.
By using bonded interfaces, the throughput capability for your virtual machines can be increased significantly, and you can also take advantage of the high availability features that come with a network bond.
Since the physical network interfaces that a virtual machine may use are located on the host and not on the virtual machine, any form of bonded networking for greater throughput or for high availability must be configured on the host system itself. This approach allows you to configure network bonds on the host and then to attach a virtual network interface, using a network bridge, directly to the bonded network on the host.
Network bonding of physical interfaces for Oracle Linux 7 is described in Oracle Linux 7:
Setting Up Networking. For Oracle Linux 8, see Oracle Linux 8: Setting Up Networking. To achieve HA networking for your virtual machines, configure a network bond on the host system first.
When the bond is configured, you should configure your virtual machine networks to use the
bonded interface when you create a network bridge. This can be done using either the
bridge type interface or using a direct interface configured to use the macvtap driver's
bridge mode. The bond interface can be used instead of a physical network interface when
configuring a virtual network interface.
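Assuming a bond named bond0 has already been created on the host, attaching a vNIC that uses it might look like the following sketch; the guest, bond, and bridge names are assumptions:

```shell
# Option 1: use the bond through the macvtap driver (direct type).
# As noted earlier, virsh attach-interface defaults to vepa mode;
# switch the interface to bridge mode afterwards by editing the
# guest XML if required.
virsh attach-interface --domain guest1 --type direct --source bond0 --config

# Option 2: if a Linux bridge (for example vmbridge1) has been
# created on top of bond0, attach through the bridge type instead.
virsh attach-interface --domain guest1 --type bridge --source vmbridge1 --config
```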

Cloning Virtual Machines


You can use two types of virtual machine instances to create copies of virtual machines:
• Clone


A clone is an instance of a single virtual machine. You can use a clone to set up a
network of identical virtual machines which you can optionally distribute to other
destinations.
• Template
A template is an instance of a virtual machine that you can use as the cloning
source. You can use a template to create multiple clones and optionally make
modifications to each clone.
The difference between clones and templates is how they are used. For the created clone to work properly, ensure that you remove information and configurations unique to the virtual machine being cloned before you clone it. The information and configurations to remove differ based on how you will use the clones, for example:
• anything assigned to the virtual machine such as the number of Network Interface
Cards (NICs) and their MAC addresses.
• anything configured within the virtual machine such as SSH keys.
• anything configured by an application installed on the virtual machine such as
activation codes and registration information.
You must remove some of the information and configurations from within the virtual
machine. Other information and configurations must be removed from the virtual
machine using the virtualization environment.

Preparing a Virtual Machine for Cloning


Before cloning a virtual machine, you must prepare it by running the virt-sysprep
utility on its disk image or by completing the following steps.

Note:
For more information on how to use the virt-sysprep utility to prepare a virtual machine and understand the available options, see https://libguestfs.org/virt-sysprep.1.html.
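For instance, a minimal virt-sysprep invocation might look like the following; the guest name is illustrative, and the guest must be shut down first:

```shell
# Run the default set of sysprep operations (removing SSH host keys,
# persistent udev rules, log files, machine ID, and so on) against a
# shut-off guest named guest1.
sudo virt-sysprep -d guest1

# To see which operations are available and which run by default:
virt-sysprep --list-operations
```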

1. Build the virtual machine that you want to use for the clone or template.
a. Install any needed software.
b. Configure any non-unique operating system and application settings.
2. Remove any persistent or unique network configuration details.
a. Run the following command to remove any persistent udev rules:
rm -f /etc/udev/rules.d/70-persistent-net.rules

Note:
If you do not remove the udev rules, the name of the first NIC might be eth1 instead of eth0.


b. Modify /etc/sysconfig/network-scripts/ifcfg-eth[x] to remove the HWADDR and static lines as well as any other unique or non-desired settings, such as UUID, for example:
DEVICE=eth[x]
BOOTPROTO=none
ONBOOT=yes
#NETWORK=10.0.1.0 <- REMOVE
#NETMASK=255.255.255.0 <- REMOVE
#IPADDR=10.0.1.20 <- REMOVE
#HWADDR=xx:xx:xx:xx:xx <- REMOVE
#USERCTL=no <- REMOVE

After modification, your file should not include a HWADDR entry or any unique
information, and at a minimum include the following lines:
DEVICE=eth[x]
ONBOOT=yes

Important:
You must remove the HWADDR entry because if its address does not match the new guest's MAC address, the ifcfg file is ignored.

c. If you have /etc/sysconfig/networking/profiles/default/ifcfg-eth[x] and /etc/sysconfig/networking/devices/ifcfg-eth[x] files, ensure they have the same content as the /etc/sysconfig/network-scripts/ifcfg-eth[x] file.

Note:
Ensure that any additional unique information is removed from the ifcfg
files.

3. If the guest virtual machine from which you want to create a clone is registered with ULN,
you must de-register it. For more information, see the Oracle Linux: Unbreakable Linux
Network User's Guide for Oracle Linux 6 and Oracle Linux 7.
4. Run the following command to remove any sshd public/private key pairs:
rm -rf /etc/ssh/ssh_host_*

Note:
Removing ssh keys prevents problems with ssh clients not trusting these hosts.

5. Remove any other application-specific identifiers or configurations that might cause conflicts if running on multiple machines.
6. Configure the virtual machine to run the relevant configuration wizards the next time it
boots.


• For Oracle Linux 6 and below, run the following command to create an empty
file on the root file system called .unconfigured:
touch /.unconfigured

• For Oracle Linux 7, run the following commands to enable the first boot and
initial-setup wizards:
sed -ie 's/RUN_FIRSTBOOT=NO/RUN_FIRSTBOOT=YES/' /etc/sysconfig/firstboot
systemctl enable firstboot-graphical
systemctl enable initial-setup-graphical

Note:
The wizards that run on the next boot depend on the configurations
that have been removed from the virtual machine. Also, on the first
boot of the clone we recommend that you change the hostname.

Important:
Before proceeding with cloning, shut down the virtual machine. You can
clone a virtual machine using virt-clone or virt-manager.

Cloning a Virtual Machine by Using the virt-clone Command


You can use virt-clone to clone virtual machines from the command line; however, you need root privileges for virt-clone to complete successfully. The virt-clone command provides a number of options that can be passed on the command line, including general options, storage configuration options, networking configuration options, and other miscellaneous options. Only the --original option is required.

Run virt-clone --help to see a complete list of options, or refer to the VIRT-
CLONE(1) man page.
Run the following command to clone a virtual machine on the default connection,
automatically generating a new name and disk clone path:
virt-clone --original vm-name --auto-clone

Run the following command to clone a virtual machine with multiple disks:
virt-clone --connect qemu:///system --original vm-name --name vm-clone-name \
--file /var/lib/libvirt/images/vm-clone-name.img --file /var/lib/libvirt/images/vm-clone-data.img

Cloning a Virtual Machine by Using Virtual Machine Manager


Complete the following steps to clone a guest virtual machine using Virtual Machine
Manager.
1. Start Virtual Machine Manager in one of the following ways:


• Launch Virtual Machine Manager from the System Tools menu.


• Run the virt-manager command as root.
2. From the list of guest virtual machines, right-click the guest virtual machine you want to
clone and click Clone.
The Clone Virtual Machine window opens.
3. In the Name field, change the name of the clone or accept the default name.
4. To change the Networking information, click Details. Then, enter a new MAC address
for the clone and click OK.
5. For each disk in the cloned guest virtual machine, select one of the following options:
• Clone this disk - The disk is cloned for the cloned guest virtual machine.
• Share disk with guest-virtual-machine-name - The disk is shared by the guest
virtual machine to be cloned and its clone.
• Details - Opens the Change storage path window if you want to select a new path
for the disk.
6. Click Clone.

4
Known Issues for Oracle Linux KVM
This chapter provides information about known issues for Oracle Linux KVM. If a workaround
is available, that information is also provided.

Upgrading From QEMU 3.1.0 to Version 4.2.1 Can Prevent Existing KVM Guests From Starting on Oracle Linux 7
Attempting to upgrade a KVM host from QEMU version 3.1.0 to version 4.2.1 results in a libvirt server error that can prevent existing KVM guests from starting on an Oracle Linux 7 host.
An error similar to the following is displayed:
Upgrade qemu-3.1.0-7.el7.x86_64 to qemu-4.2.1-4.el7.x86_64, kvm can not be
started, got below libvirt service error:

Dec 21 15:10:48 ca-ex05db01.us.oracle.com libvirtd[23588]: Unable to read from monitor: Connection reset by peer
Dec 21 15:10:48 ca-ex05db01.us.oracle.com libvirtd[23588]: internal error:
qemu unexpectedly closed the monitor: 2020-12-21T23:10:48.306929Z
qemu-system-x86_64: We need to set caching-mode=on for intel-iommu to enable
device assignment with IOMMU protection.
Dec 21 15:10:52 ca-ex05db01.us.oracle.com libvirtd[23588]: internal error:
Failed to autostart VM 'ca-ex05db01vm01.us.oracle.com': internal error: qemu
unexpectedly closed the monitor: 2020-12-21T23:10:48.306929Z
qemu-system-x86_64: We need to set caching-mode=on for intel-iommu to enable
device assignment with IOMMU protection.
Dec 21 15:10:52 ca-ex05db01.us.oracle.com libvirtd[23588]: nl_recv returned
with error: No buffer space available

To work around this issue so that KVM guests can run the updated qemu version, edit the
XML file of each KVM guest, adding the caching_mode='on' parameter to the iommu section
for each driver sub-element, as shown in the following example:
<iommu model='intel'>
  <driver aw_bits='48' caching_mode='on'/>
</iommu>
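If many guests need the workaround, the attribute could be added with a scripted edit rather than manual editing. The following sketch operates on a sample fragment only and assumes the driver element matches exactly; in practice you would apply the same substitution to a copy of each guest's dumped XML before redefining it:

```shell
# Sample of the iommu section before the fix.
cat > /tmp/iommu-sample.xml <<'EOF'
<iommu model='intel'>
  <driver aw_bits='48'/>
</iommu>
EOF

# Insert caching_mode='on' into the driver sub-element.
sed -i "s|<driver aw_bits='48'/>|<driver aw_bits='48' caching_mode='on'/>|" /tmp/iommu-sample.xml

grep caching_mode /tmp/iommu-sample.xml
```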

(Bug ID 32312933)
