Oracle Linux: KVM User's Guide
F29966-18
June 2022
3 KVM Usage
Checking the libvirt Daemon Status 3-1
Base Operations 3-1
Creating a New Virtual Machine 3-1
Starting and Stopping Virtual Machines 3-2
Deleting a Virtual Machine 3-3
Configuring a Virtual Machine With a Virtual Trusted Platform Module 3-4
Working With Storage for KVM Guests 3-5
Storage Pools 3-6
Storage Volumes 3-8
Managing Virtual Disks 3-10
Adding or Removing a Virtual Disk 3-10
Removing a Virtual Disk 3-11
Extending a Virtual Disk 3-11
Working With Memory and CPU Allocation 3-12
Configuring Virtual CPU Count 3-12
Configuring Memory Allocation 3-13
Setting Up Networking for KVM Guests 3-14
Setting Up and Managing Virtual Networks 3-15
Adding or Removing a vNIC 3-16
Bridged and Direct vNICs 3-17
Interface Bonding for Bridged Networks 3-19
Cloning Virtual Machines 3-19
Preparing a Virtual Machine for Cloning 3-20
Cloning a Virtual Machine by Using the virt-clone Command 3-22
Cloning a Virtual Machine by Using Virtual Machine Manager 3-22
Preface
Oracle Linux: KVM User's Guide provides information about how to install, configure, and use
the Oracle Linux KVM packages to run guest systems on top of a bare metal Oracle Linux
system. This documentation provides information on using KVM on a standalone platform in
an unmanaged environment. Typical usage in this mode is for development and testing
purposes, although production level deployments are supported. Oracle recommends that
customers use Oracle Linux Virtualization Manager for more complex deployments of a
managed KVM infrastructure.
Conventions
The following text conventions are used in this document:
boldface: Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic: Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace: Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility
Program website at https://www.oracle.com/corporate/accessibility/.
For information about the accessibility of the Oracle Help Center, see the Oracle Accessibility
Conformance Report at https://www.oracle.com/corporate/accessibility/templates/t2-11535.html.
Diversity and Inclusion
Oracle is fully committed to diversity and inclusion. Oracle respects and values having a diverse workforce that increases thought leadership and innovation. As part of our initiative to build a more inclusive culture that positively impacts our employees, customers, and partners, we are working to remove insensitive terms from our products and documentation. We are also mindful of the necessity to maintain compatibility with our customers' existing technologies and the need to ensure continuity of service as Oracle's offerings and industry standards evolve. Because of these technical constraints, our effort to remove insensitive terms is ongoing and will take time and external cooperation.
1 About Oracle Linux KVM
This chapter provides a high-level overview of the Kernel-based Virtual Machine (KVM)
feature on Oracle Linux, the user space tools that are available for installing and managing a
standalone instance of KVM, and the differences between KVM usage in this mode and
usage within a managed environment provided by Oracle Linux Virtualization Manager.
Guest Operating System Requirements
Important:
* cloud-init is unavailable for 32-bit architectures
You can download Oracle Linux ISO images and disk images from Oracle Software
Delivery Cloud: https://edelivery.oracle.com/linux.
System Requirements and Recommendations
Note:
Oracle recommends that you install the Oracle VirtIO Drivers for Microsoft Windows
in Windows virtual machines for improved performance for network and block (disk)
devices and to resolve common issues. The drivers are paravirtualized drivers for
Microsoft Windows guests running on Oracle Linux KVM hypervisors.
For instructions on how to obtain and install the drivers, see Oracle Linux: Oracle VirtIO
Drivers for Microsoft Windows for use with KVM.
2 Installing KVM User Space Packages
This chapter describes how to configure the appropriate ULN channels or yum repositories,
and how to install user space tools to manage a standalone instance of KVM. A final check is
performed to validate whether the system is capable of hosting guest virtual machines.
Configuring Yum Repositories and ULN Channels
Oracle Linux 7
Due to the availability of several very different kernel versions and the requirement for more
recent versions of user space tools that may break compatibility with RHCK, there are several
different yum repositories or ULN channels across the different supported architectures for
Oracle Linux 7. Packages in the different channels have different use cases and have
different levels of support. This section describes the available yum repositories and ULN
channels for each architecture.
Repositories and Channels That Are Available for x86_64 Platforms
• Yum repository: ol7_latest; ULN channel: ol7_x86_64_latest
The virtualization packages that are provided in this repository or ULN channel maximize compatibility with RHCK and with Red Hat Enterprise Linux. Packages from this repository or ULN channel are fully supported for all kernels.
• Yum repository: ol7_kvm_utils; ULN channel: ol7_x86_64_kvm_utils
The virtualization packages that are provided in this repository or ULN channel take advantage of newer features and functionality available in upstream packages. These packages are also engineered to work with KVM features that are enabled in the latest releases of UEK. If you install these packages, you must also install the latest version of either UEK R4 or UEK R5.
Note:
The ol7_kvm_utils and ol7_x86_64_kvm_utils channels distribute 64-bit packages only. If you manually installed any 32-bit packages, for example, libvirt-client, yum updates from these channels will fail. To use the ol7_kvm_utils and ol7_x86_64_kvm_utils channels, you must first remove any 32-bit versions of the packages distributed by these channels that are installed on your system.
The following requirements apply to these packages:
• A minimum of Unbreakable Enterprise Kernel Release 4 is required.
• Guest operating systems, as supported on Oracle Cloud Infrastructure and described at https://docs.oracle.com/iaas/Content/Compute/References/images.htm.
• KVM guests boot by using iSCSI, VirtIO, VirtIO-SCSI, or IDE device emulation.
• Yum repository: ol7_developer; ULN channel: ol7_x86_64_developer
• Yum repository: ol7_developer_kvm_utils; ULN channel: ol7_x86_64_developer_kvm_utils
The virtualization packages that are provided in these repositories or ULN channels take advantage of newer features and functionality that is available upstream, but they are unsupported and are made available for developer use only. If you are using the Oracle Linux yum server, you can configure these repositories by installing the oraclelinux-developer-release-el7 package and then enabling them by editing the repository files or by using yum-config-manager.
Caution:
Virtualization packages may also be available in the ol7_developer_EPEL
yum repository or the ol7_arch_developer_EPEL ULN channel. These
packages are unsupported and contain features that might never be tested
on Oracle Linux and may conflict with virtualization packages from other
channels. If you intend to use packages from any of the repositories or
channels that are previously listed, first uninstall any virtualization packages
that installed from this repository. You can also disable this repository or
channel or set exclusions to prevent virtualization packages from being
installed from this repository.
Depending on your use case and support requirements, you must enable the
repository or ULN channel that you require before installing the virtualization packages
from that repository or ULN channel.
If you are using ULN, follow these steps to ensure that the system is registered with
ULN and that the appropriate channel is enabled:
1. Log in to https://linux.oracle.com with your ULN user name and password.
2. On the Systems tab, from the list of registered systems, select the link name for
the specified system.
3. On the System Details page, select Manage Subscriptions.
4. On the System Summary page, from the list of available channels, select each of
the required channels, then click the right arrow to move each channel to the list of
subscribed channels.
5. Select Save Subscriptions.
If you are using the Oracle Linux yum server, you can either edit the repository configuration files in /etc/yum.repos.d/ directly or, if you have the yum-utils package installed, use the yum-config-manager command. For example:
sudo yum-config-manager --enable ol7_kvm_utils ol7_UEKR6
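If you edit the repository files directly instead, the equivalent change is to set enabled=1 in the relevant repository stanza. The following fragment is illustrative only; the stanza name and remaining fields come from the repository files installed by the release packages:
[ol7_kvm_utils]
...
enabled=1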
If you want to prevent yum from installing the package versions from a particular
repository, you can set an exclude option on these packages for that repository. For
instance, to prevent yum from installing the virtualization packages in the
ol7_developer_EPEL repository, use the following command:
sudo yum-config-manager --setopt="ol7_developer_EPEL.exclude=libvirt* qemu*" --save
Oracle Linux 8
The number of options available on Oracle Linux 8 is significantly reduced because the available kernels are newer and there are fewer options to select from.
Repositories and Channels That Are Available for Oracle Linux 8
Since the Application Stream repository or channel is required for system software on Oracle
Linux 8, it is enabled by default on any Oracle Linux 8 system.
Installing Virtualization Packages
If you intend to use the virt:kvm_utils2 application stream for improved functionality
and integration with newer features released within UEK, you must subscribe to the
ol8_kvm_appstream yum repository or ol8_base_arch_kvm_utils ULN channel. Note
that the virt:kvm_utils application stream is now a legacy stream on Oracle Linux 8.
If you are using ULN, you can check that the system is registered with ULN and that
the appropriate channel is enabled:
1. Log in to https://linux.oracle.com with your ULN user name and password.
2. On the Systems tab, from the list of registered systems, select the link name for
the specified system.
3. On the System Details page, select Manage Subscriptions.
4. On the System Summary page, from the list of available channels, select each of
the required channels, then click the right arrow to move each channel to the list of
subscribed channels.
5. Select Save Subscriptions.
If you are using the Oracle Linux yum server, make sure that you have installed the
most recent version of the oraclelinux-release-el8 package and enable the
required repositories. For example:
sudo dnf install -y oraclelinux-release-el8
sudo dnf config-manager --enable ol8_appstream ol8_kvm_appstream
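After the repositories are enabled, you can confirm which virt module streams are available and which one is currently enabled (a quick check; the streams that are shown depend on your configured repositories):
sudo dnf module list virt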
Specify the appropriate package groups for the installation type in the %packages section of
the kickstart file by using the @GroupID format:
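A minimal sketch of such a %packages section follows. The group IDs shown here are assumptions for illustration; verify the exact group IDs that your release provides before using them:
%packages
@virtualization-hypervisor
@virtualization-client
@virtualization-platform
@virtualization-tools
%end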
3. Update the system so that it has the most recent packages available.
• If you are using Oracle Linux 7, run the yum update command.
• If you are using Oracle Linux 8, run the dnf update command.
4. Install virtualization packages on the system.
• If you are using Oracle Linux 7, run the following commands to install the base
virtualization packages and additional utilities:
sudo yum groupinstall "Virtualization Host"
sudo yum install qemu-kvm virt-install virt-viewer
• If you are using Oracle Linux 8, run the following commands to install the base
virtualization packages and additional utilities:
sudo dnf module install virt
sudo dnf install virt-install virt-viewer
For more information about DNF modules and application streams, see Oracle Linux:
Managing Software on Oracle Linux.
• Reset the virt module state so that it is neither enabled nor disabled:
sudo dnf module reset virt -y
NOT_SUPPORTED:
Pre-existing guests that were created using the default KVM stack are not compatible with the Oracle KVM stack and do not start under it.
Note that although you can switch to the Oracle KVM stack and install the packages while running RHCK, the stack is not compatible with RHCK. You must be running a current version of UEK to use this software.
• Reset the virt module state so that it is neither enabled nor disabled:
sudo dnf module reset virt -y
NOT_SUPPORTED:
Pre-existing guests that were created using the Oracle KVM stack are not compatible with the default KVM stack and do not start under it.
Validating the Host System
To test whether a system can act as a KVM host, run the following command:
sudo virt-host-validate qemu
If all of the checks return a PASS value, the system can host guest virtual machines. If any of the tests fail, a reason is provided and, where available, information about how to resolve the issue is displayed.
Note:
If the following message is displayed, the system is not capable of functioning as a KVM host:
QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are available, performance will be significantly limited)
3 KVM Usage
Several tools exist for administering the libvirt interface with KVM. In most cases, a variety
of different tools are capable of performing the same operation. This document focuses on
the tools that you can use from the command line. However, if you are using a desktop
environment, you might consider using a graphical user interface (GUI) such as Virtual Machine Manager to create and manage virtual machines (VMs). For more information about
Virtual Machine Manager, see https://virt-manager.org/.
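Checking the libvirt Daemon Status
Before you provision guests, check that the libvirtd service is running. A typical check, assuming that systemd manages the service, is:
sudo systemctl status libvirtd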
The output should indicate that the libvirtd daemon is running, as shown in the following
example output:
* libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset:
enabled)
Active: active (running) since time_stamp; xh ago
After you verify that the libvirtd service is running, you can start provisioning guest
systems.
Base Operations
This section describes base operations, including creating, starting and stopping, and deleting virtual machines.
The following example illustrates the creation of a simple virtual machine and assumes that
virt-viewer is installed and available to load the installer in a graphical environment:
virt-install --name guest-ol8 --memory 2048 --vcpus 2 \
--disk size=8 --location OracleLinux-R8.iso --os-variant ol8.0
The following are detailed descriptions of each of the options that are specified in the
example:
• --name is used to specify a name for the virtual machine. This is registered as a
domain within libvirt.
• --memory is used to specify the RAM available to the virtual machine and is
specified in MB.
• --vcpus is used to specify the number of virtual CPUs that should be available to
the virtual machine.
• --disk is used to specify hard disk parameters. In this case, only the size is
specified, in GB. If a path is not specified, the disk image is created automatically as a qcow2 file. If virt-install is run as root, the disk image is created
in /var/lib/libvirt/images/ and is named using the name specified for the
virtual machine at install. If virt-install is run as an ordinary user, the disk
image is created in $HOME/.local/share/libvirt/images/.
• --location is used to provide the path to the installation media. This can be an
ISO file, or an expanded installation resource hosted at a local path or remotely on
an HTTP or NFS server.
• --os-variant is an optional specification but provides some default parameters
for each virtual machine that can help improve performance for a specific
operating system or distribution. For a complete list of options available, run
osinfo-query os.
When the command is run, the virtual machine is created and automatically started, booting from the installation media specified by the --location option. If you have the virt-
viewer package installed and the command has been run in a terminal within a
desktop environment, the graphical console is opened automatically and you can
proceed with the guest operating system installation within the console.
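You can then list the running virtual machines by using the virsh list command, for example:
virsh list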
Output:
Id Name State
----------------------------------------------------
1 guest-ol8 running
Use the virsh help command to view available options and syntax. For example, to
find out more about the options available for listing virtual machines, run virsh
help list. This command shows options to view listings of virtual machines that are
stopped or paused or that are currently active.
Starting and Stopping Virtual Machines
To start a virtual machine, run the following command:
virsh start guest-ol8
Output:
Domain guest-ol8 started
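The remaining lifecycle operations follow the same pattern and are shown here for the same guest-ol8 domain. To request a graceful shutdown of the virtual machine, run:
virsh shutdown guest-ol8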
Output:
Domain guest-ol8 is being shutdown
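To reboot the virtual machine, run:
virsh reboot guest-ol8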
Output:
Domain guest-ol8 is being rebooted
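To pause the virtual machine, run:
virsh suspend guest-ol8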
Output:
Domain guest-ol8 suspended
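To resume a paused virtual machine, run:
virsh resume guest-ol8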
Output:
Domain guest-ol8 resumed
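To force the virtual machine to stop immediately, run:
virsh destroy guest-ol8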
Output:
Domain guest-ol8 destroyed
Deleting a Virtual Machine
Before you delete a virtual machine, find the location of any virtual disk images that are attached to it so that you can remove them manually afterward, if required.
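One way to do this (a typical approach; substitute your own domain name) is to filter the domain XML for disk source entries:
virsh dumpxml guest-ol8 | grep 'source file'
Output: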
<source file='/home/testuser/.local/share/libvirt/images/guest-ol8-1.qcow2'/>
If the virtual machine can't be shut down gracefully, you can force it to stop by
running:
virsh destroy guest-ol8
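You can then remove the virtual machine definition. A typical invocation, assuming the same guest, is:
virsh undefine guest-ol8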
This step removes all configuration information about the virtual machine from
libvirt. Storage artifacts such as virtual disks are left intact. If you need to remove
these as well, you can delete them manually from their location returned in the first
step in this procedure. For example, you could run:
rm /home/testuser/.local/share/libvirt/images/guest-ol8-1.qcow2
Note that it is not possible to delete a virtual machine if it has snapshots. You should remove any snapshots by using the virsh snapshot-delete command before attempting to remove the virtual machine.
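For example, a typical sequence, where snapshot-name is a name reported by virsh snapshot-list, is:
virsh snapshot-list guest-ol8
virsh snapshot-delete guest-ol8 snapshot-name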
Configuring a Virtual Machine With a Virtual Trusted Platform Module
Note:
Virtual Trusted Platform Module is available on Oracle Linux 7 and Oracle Linux 8 KVM guests but not on QEMU.
• Modify the KVM virtual machine's XML to include the TPM, as shown in the tpm
section in the following example:
<devices>
...
</input>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<tpm model='tpm-crb'>
<backend type='emulator' version='2.0'/>
</tpm>
<graphics type='vnc' port='-1' autoport='yes'>
<listen type='address'/>
</graphics>
...
</devices>
Note that if you are creating a new virtual machine, the virt-install command on Oracle Linux 8 also provides a --tpm option that you can use to specify the vTPM information at installation time, for example:
virt-install --name guest-ol8-tpm2 --memory 2048 --vcpus 2 \
--disk path=/systest/images/guest-ol8-tpm2.qcow2,size=20 \
--location /systest/iso/ol8.iso --os-variant ol8 \
--network network=default --graphics vnc,listen=0.0.0.0 \
--tpm emulator,model=tpm-crb,version=2.0
If you are using Oracle Linux 7, the virt-install command does not provide this
option but you can manually edit the configuration after the virtual machine is created.
4. Start the KVM virtual machine.
Working With Storage for KVM Guests
Storage for KVM guests is typically provisioned on the host and consumed by virtual machines as they need it. It is perfectly acceptable to use the
default directory-based storage and take advantage of manually mounted storage at
the default locations.
Oracle recommends using Oracle Linux Virtualization Manager to easily manage and
configure complex storage requirements for KVM environments.
Storage Pools
Storage pools provide logical groupings of storage types that are available to host the
volumes that can be used as virtual disks by a set of virtual machines. A wide variety
of different storage types is provided. Local storage can be used in the form of directory-based storage pools, file system storage, and disk-based storage. Other storage types such as NFS and iSCSI provide standard network-based storage, while
RBD and Gluster types provide support for distributed storage mechanisms. More
information is provided at https://libvirt.org/storage.html.
Storage pools help to abstract underlying storage resources from the virtual machine
configurations. This is particularly useful if you suspect that resources such as virtual
disks may change physical location or media type. This becomes even more important
when using network-based storage, where target paths, DNS, or IP addressing may change over time. By abstracting this configuration information, an administrator can
manage resources in a consolidated way without needing to update multiple virtual
machine configurations.
You can create transient storage pools that are available until the host reboots, or you
can define persistent storage pools that are restored after a reboot.
Transient storage pools are started automatically as soon as they are created, and the volumes within them are made available to virtual machines immediately. However, any configuration information about a transient storage pool is lost after the pool is stopped, the host reboots, or the libvirtd service is restarted. The storage itself is unaffected, but virtual machines configured to use resources in a transient storage pool lose access to those resources. Transient storage pools are created using the
virsh pool-create command.
For most use cases, you should consider creating persistent storage pools. Persistent
storage pools are defined as a configuration entry that is stored within /etc/libvirt.
Persistent storage pools can be stopped and started and can be configured to start
when the host system boots. This can be useful as libvirt can take care of
automatically mounting and enabling access to network based resources. Persistent
storage pools are created using the virsh pool-define command, and usually
need to be started after they have been created before you are able to use them.
The following examples show how you can set up directory based storage, and
perform basic operations on it.
To create a directory-based storage pool named pool_dir at /share/storage_pool on the
host system, run:
virsh pool-define-as pool_dir dir --target /share/storage_pool
To start the storage pool and make it accessible to any virtual machines, run:
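For example, assuming the pool defined in the previous step, a typical start command is:
virsh pool-start pool_dir
If you want the pool to be started automatically when the host boots, you can also mark it for autostart:
virsh pool-autostart pool_dir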
Other storage pool types can be easily created using the same virsh pool-define-as
command. Options that are used with this command depend on the storage type that you
select when you create your storage pool. For example, to create file system based storage that mounts a formatted block device, /dev/sdc1, at the mount point /share/storage_mount,
you can run:
virsh pool-create-as pool_fs fs --source-path /dev/sdc1 --target /share/storage_mount
Similarly, you can easily add an NFS share as a storage pool. For example, run:
virsh pool-create-as pool_nfs netfs --source-path /ISO --source-host nfs.example.com \
--target /share/storage_nfs
It is also possible to create an XML file representation of the storage pool configuration and
load the configuration information from file using the virsh pool-define command. For
example, you could create a storage pool for a Gluster volume by creating an XML file named
gluster_pool.xml with the following content:
<pool type='gluster'>
<name>pool_gluster</name>
<source>
<host name='192.0.2.1'/>
<dir path='/'/>
<name>gluster-vol1</name>
</source>
</pool>
This example assumes that a Gluster server is already configured and running on a host with
IP address 192.0.2.1 and that a volume named gluster-vol1 is exported. Note that the
glusterfs-fuse package must be installed on the host and you should verify that you are
able to mount the Gluster volume before attempting to use it with libvirt.
Run the following command to load the configuration information from the gluster_pool.xml
file into libvirt.
virsh pool-define gluster_pool.xml
Note that Oracle recommends using Oracle Linux Virtualization Manager when attempting to
use complex network based storage such as Gluster.
For more information on the XML format for a storage pool definition, see https://libvirt.org/formatstorage.html#StoragePool.
Storage Volumes
Storage volumes are created within a storage pool and represent the virtual disks that
can be loaded as block devices within one or more virtual machines. Some storage
pool types do not need storage volumes to be created individually, as the storage mechanism may already present these as block devices. For example, iSCSI
storage pools present the individual LUNs for an iSCSI target as separate block
devices.
In some cases, such as when using directory or file system based storage pools,
storage volumes are individually created for use as virtual disks. In these cases,
several disk image formats are supported although some formats, such as qcow2, may
require additional tools such as qemu-img for creation.
For disk based pools, standard partition type labels are used to represent individual
volumes; while for pools based on the logical volume manager, the volumes
themselves are presented individually within the pool.
Depending on the storage pool type, you can create new storage volumes using the
virsh vol-create command. This command expects you to provide an XML file
representation of the volume parameters. For example, to create a new volume in a storage pool named pooldir, you could create an XML file, volume1.xml, with the required parameters and run:
virsh vol-create pooldir volume1.xml
The XML for a volume may depend on the pool type and the volume being created,
but in the case of a sparsely allocated 10 GB image in qcow2 format, the XML may
look similar to the following:
<volume>
<name>volume1</name>
<allocation>0</allocation>
<capacity unit="G">10</capacity>
<target>
<path>/home/testuser/.local/share/libvirt/images/volume1.qcow2</path>
<permissions>
<owner>107</owner>
<group>107</group>
<mode>0744</mode>
<label>virt_image_t</label>
</permissions>
</target>
</volume>
Note that storage volumes can be sparsely allocated by setting the allocation value for the
initial size of the volume to a value lower than the capacity of the volume. The allocation
indicates the initial or current physical size of the volume, while the capacity indicates the size
of the virtual disk as it is presented to the virtual machine. Sparse allocation is frequently
used to over-subscribe physical disk space where virtual machines may ultimately require
more disk space than is initially available. For a non-sparsely allocated volume, the allocation
matches or exceeds the capacity of the volume. Exceeding the capacity of the disk provides
space for metadata, if required.
You can use the virsh vol-info command to view information about a volume to
determine its type, capacity, and allocation. For example:
virsh vol-info --pool pooldir volume1
Output:
Name: volume1
Type: file
Capacity: 9.31 GiB
Allocation: 8.00 GiB
You can clone a storage volume using the virsh vol-clone command. This command simply takes the names of the original volume and the cloned volume as parameters, and the clone is created in the same storage pool with identical parameters. For example:
virsh vol-clone --pool pooldir volume1 volume1-clone
Note that you can use the --pool option if you have volumes with matching names in
different pools on the same system and you need to specify which pool the operation should
take place in.
You can delete a storage volume by running the virsh vol-delete command. For
example, to delete the volume named volume1 in the storage pool named pooldir, run the
following command:
virsh vol-delete volume1 --pool pooldir
As long as a storage volume is not being used by a virtual machine, you can resize it using
the virsh vol-resize command. For example:
virsh vol-resize --pool pooldir volume1 15G
It is generally not advisable to reduce the size of an existing volume, as this can risk
destroying data. However, if you attempt to resize a volume to reduce it, you must specify the
--shrink option with the new size value.
You can equally use virt-install to create a virtual disk as a volume within an
existing storage pool automatically at installation time. For example, to create a new disk image as a volume within the storage pool named storage_pool1:
virt-install --name guest --disk pool=storage_pool1,size=10
...
Tools to attach a volume to an existing virtual machine are limited and it is generally
recommended that you use a GUI tool like virt-manager or cockpit to assist with
this. If you expect that you may need to work with volumes a lot, consider using Oracle
Linux Virtualization Manager.
You can use the virsh attach-disk command to attach a disk image to an
existing virtual machine. This command requires that you provide the path to the disk
image when you attach it to the virtual machine. If the disk image is a volume, you can
obtain its correct path by running the virsh vol-list command first.
virsh vol-list storage_pool_1
Output:
Name Path
--------------------------------------------------------------------
volume1 /share/disk-images/volume1.qcow2
Attach the disk image within the existing virtual machine configuration so that it is
persistent and attaches itself on each subsequent restart of the virtual machine:
virsh attach-disk --config --domain guest1 --source /share/disk-images/volume1.qcow2 --target sdb1
Note that you can use the --live option with this command to temporarily attach a
disk image to a running virtual machine; or you can use the --persistent option to
attach a disk image to a running virtual machine and also update its configuration so
that the disk is attached on each subsequent restart.
Note that you can use the --live option with this command to temporarily detach a disk
image from a running virtual machine; or you can use the --persistent option to detach a
disk image from a running virtual machine and also update its configuration so that the disk is
permanently detached from the virtual machine on subsequent restarts.
Where disks are attached as block devices within a guest virtual machine, you can obtain a
listing of the block devices attached to a guest so that you are able to identify the disk target
that is associated with a particular source image file by running the virsh domblklist
command. For example, run:
virsh domblklist guest1
Detaching a virtual disk from the virtual machine does not delete the disk image file or
volume from the host system. If you need to delete a virtual disk, you can either manually
delete the source image file or delete the volume from the host.
You can check that the resize has worked by checking the block device information for the
running virtual machine, using the virsh domblkinfo command. For example, to list all block devices attached to guest1 in human-readable format:
virsh domblkinfo guest1 --all --human
The virsh blockresize command allows you to scale up a disk on a live virtual machine,
but it does not guarantee that the virtual machine is able to immediately identify that the
additional disk resource is available. For some guest operating systems, restarting the virtual
machine may be required before the guest is capable of identifying the additional resources
available.
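As an illustration, a sketch of such a resize, assuming the guest1 domain and the disk path shown earlier, is:
virsh blockresize guest1 /share/disk-images/volume1.qcow2 20G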
Individual partitions and file systems on the block device are not scaled using this command.
You need to perform these operations manually from within the guest, as required.
Working With Memory and CPU Allocation
Configuring Virtual CPU Count
For example, run the following command to set the number of virtual CPUs on a running virtual machine:
virsh setvcpus domain count --live
Note that the count value cannot exceed the number of CPUs assigned to the guest virtual machine. The count value also might be limited by the host, the hypervisor, or the original description of the guest virtual machine.
The following command options are available:
• domain
A string value representing the virtual machine name, ID or UUID.
• count
A number value representing the number of virtual CPUs.
• --maximum
Controls the maximum number of virtual CPUs that can be hot-plugged the next
time the guest virtual machine is booted. Therefore, it can only be used with the --
config flag.
• --config
Changes the stored XML configuration for the guest virtual machine and takes
effect when the guest is started.
• --live
The guest virtual machine must be running and the change takes place
immediately, thus hot plugging a vCPU.
• --current
Affects the current guest virtual machine.
• --guest
Modifies the CPU state in the current guest virtual machine.
• --hotpluggable
Configures the vCPUs so they can be hot unplugged.
You can use the --config and --live options together if supported by the hypervisor.
If you do not specify --config, --live, or --current, the --live option is assumed. So, if
you do not select an option and the guest virtual machine is not running, the command fails.
In addition, if no options are specified, it is up to the hypervisor whether the --config option
is also assumed. This determines whether the XML configuration is adjusted to make the
change persistent.
Configuring Memory Allocation
You must specify the size as a scaled integer in kibibytes, and the new value cannot exceed the amount that you specified for the virtual machine. Values lower than 64 MB are unlikely to work with most virtual machine operating systems. A higher maximum memory value does not affect active virtual machines. If the new value is lower than the available memory, it shrinks, possibly causing the virtual machine to crash.
The following command options are available:
• domain
A string value representing the virtual machine name, ID or UUID.
• size
A number value representing the new memory size, as a scaled integer. The default unit
is KiB, but you can select from other valid memory units:
– b or bytes for bytes
– KB for kilobytes (10^3 or blocks of 1,000 bytes)
– k or KiB for kibibytes (2^10 or blocks of 1,024 bytes)
– MB for megabytes (10^6 or blocks of 1,000,000 bytes)
– M or MiB for mebibytes (2^20 or blocks of 1,048,576 bytes)
– GB for gigabytes (10^9 or blocks of 1,000,000,000 bytes)
– G or GiB for gibibytes (2^30 or blocks of 1,073,741,824 bytes)
– TB for terabytes (10^12 or blocks of 1,000,000,000,000 bytes)
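For example, a typical invocation that sets the current memory of a running guest to 4 GiB, assuming the guest-ol8 domain used earlier, is:
virsh setmem guest-ol8 4G --live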
You must specify the size as a scaled integer in kibibytes unless you also specify a supported memory unit; the supported units are the same as for the virsh setmem command.
All other options for virsh setmaxmem are the same as for virsh setmem, with one caveat: if you specify the --live option, be aware that not all hypervisors allow live changes of the maximum memory limit.
Setting Up Networking for KVM Guests
Bridged networking typically requires some manual changes to the host network configuration before it can be set up for a virtual
machine.
Networking for virtual machines can also be configured to directly use a physical interface on
the host system. This provides network behavior similar to using a bridged network interface, in that the vNIC behaves as if it is connected to the physical network directly. Direct connections tend to use the macvtap driver, which extends physical network interfaces to provide a virtual bridge that behaves similarly to a bridged network but is easier to configure and maintain and offers improved performance.
KVM is able to use SR-IOV for passthrough networking where a PCIe interface supports this
functionality. The SR-IOV hardware must be properly set up and configured on the host
system before you are able to attach the device to a virtual machine and configure the
network to use this device.
Where network configuration is likely to be complex, Oracle recommends using Oracle Linux
Virtualization Manager. Simple networking configurations and operations are described here
to facilitate the majority of basic deployment scenarios.
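You can list the virtual networks that libvirt knows about by using the virsh net-list command, for example:
virsh net-list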
Output:
Name State Autostart Persistent
----------------------------------------------------------
default active yes yes
You can find out more about a network using the virsh net-info command. For example,
to find out about the default network, run:
virsh net-info default
Output:
Name: default
UUID: 16318035-eed4-45b6-99f8-02f1ed0661d9
Active: yes
Persistent: yes
Autostart: yes
Bridge: virbr0
Note that the virtual network uses a network bridge, called virbr0. This is not to be confused
with traditional bridged networking. The virtual bridge is not connected to a physical interface
and relies on NAT and IP forwarding to connect virtual machines to the physical network
beyond. Libvirt also handles IP address assignment for virtual machines using DHCP. The
default network typically uses the 192.168.122.0/24 subnet. To see the full configuration
information about a network, use the virsh net-dumpxml command:
virsh net-dumpxml default
Output:
<network>
<name>default</name>
<uuid>16318035-eed4-45b6-99f8-02f1ed0661d9</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:82:75:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
• --live
The guest virtual machine must be running and the change takes place immediately, thus
hot plugging the vNIC.
• --current
Affects the current guest virtual machine.
Additional options are available to further customize the interface, such as setting the MAC
address or configuring the target macvtap device when using some of the alternate network
types. You can also use the --model option to change the model of network interface that is presented to the virtual machine. By default, the virtio model is used, but alternate models, such as e1000 or rtl8139, are available. Run virsh help attach-interface for more
information, or refer to the VIRSH(1) man page.
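For example, a typical command to add a new vNIC that is connected to the default virtual network, and to make the change persistent, assuming the guest domain used in the next example, is:
virsh attach-interface --domain guest --type network --source default --config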
Remove a vNIC from a virtual machine using the virsh detach-interface command. For
example, run:
virsh detach-interface --domain guest --type network --mac 52:54:00:41:6a:65 --config
Note that the domain or virtual machine name and type are required parameters. If the virtual
machine has more than one vNIC attached, you must specify the mac parameter to provide
the MAC address of the vNIC that you wish to remove. You can obtain this value by listing the
vNICs that are currently attached to a virtual machine. For example, you can run:
virsh domiflist guest
Output:
Interface Type Source Model MAC
-------------------------------------------------------
vnet0 network default virtio 52:54:00:8c:d2:44
vnet1 network default virtio 52:54:00:41:6a:65
Once the bridge is created, you can attach it using the virsh attach-interface
command as described in Adding or Removing a vNIC.
There are several issues that you may need to be aware of when using traditional Linux
bridged networking for KVM guests. For instance, it is not simple to set up a bridge on a
wireless interface due to the number of addresses available in 802.11 frames. Furthermore,
the complexity of the code to handle software bridges can result in reduced throughput,
increased latency and additional configuration complexity. The main advantage that this
approach offers is that it allows the host system to communicate across the network
stack directly with any guests configured to use bridged networking.
Most of the issues related to using traditional Linux bridges can be easily overcome by using the macvtap driver, which simplifies virtualized bridged networking significantly. For
most bridged network configurations in KVM, this is the preferred approach because it
offers better performance and it is easier to configure. The macvtap driver is used
when the network type is set to direct.
The macvtap driver creates endpoint devices that follow the tun/tap ioctl interface
model to extend an existing network interface so that KVM can use it to connect to the
physical network interface directly to support different network functions. These
functions can be controlled by setting a different mode for the interface. The following
modes are available:
• vepa (Virtual Ethernet Port Aggregator) is the default mode and forces all data
from a vNIC out of the physical interface to a network switch. If the switch supports
hairpin mode, different vNICs connected to the same physical interface are able to
communicate via the switch. Many switches currently do not support hairpin mode,
which means that virtual machines with direct connection interfaces running in
VEPA mode are unable to communicate, but can connect to the external network
via the switch.
• bridge mode connects all vNICs directly to each other so that traffic between
virtual machines using the same physical interface is not sent out to the switch and
is facilitated directly. This is the most useful option when using switches that do not
support hairpin mode, and when you need maximum performance for
communications between virtual machines. It is important to note that when
configured in this mode, unlike a traditional software bridge, the host is unable to
use this interface to communicate directly with the virtual machine.
• private mode behaves like a VEPA mode vNIC in the absence of a switch
supporting hairpin mode. However, even if the switch does support hairpin mode,
two virtual machines connected to the same physical interface are unable to
communicate with each other. This option has limited use cases.
• passthrough mode attaches a physical interface device or an SR-IOV Virtual
Function (VF) directly to the vNIC without losing the migration capability. All
packets are sent directly to the configured network device. There is a one-to-one
mapping between network devices and virtual machines when configured in
passthrough mode because a network device cannot be shared between virtual
machines in this configuration.
Unfortunately, the virsh attach-interface command does not allow you to
specify the different modes available when attaching a direct type interface that uses
the macvtap driver, and defaults to vepa mode. The graphical virt-manager utility
makes setting up bridged networks using macvtap significantly easier and provides
options for each different mode.
Nonetheless, it is not very difficult to change the configuration of a virtual machine by
editing the XML definition for it directly. The following steps can be followed to
configure a bridged network using the macvtap driver on an existing virtual machine:
1. Attach a direct type interface to the virtual machine using the virsh attach-
interface command and specify the source for the physical interface that
should be used for the bridge. In this example, the virtual machine is called guest1
and the physical network interface on the host is a wireless interface called
wlp4s0:
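A sketch of such a command, assuming the names given above (the exact options in the full guide may differ slightly), is:
virsh attach-interface --domain guest1 --type direct --source wlp4s0 --config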
2. Dump the XML for the virtual machine configuration and copy it to a file that you can edit:
virsh dumpxml guest1 > /tmp/guest1.xml
3. Edit the XML for the virtual machine to change the vepa mode interface to use bridged
mode. If there are many interfaces connected to the virtual machine, or you wish to
review your changes, you can do this in a text editor. If you are happy to make this
change globally, run:
sed -i "s/mode='vepa'/mode='bridge'/g" /tmp/guest1.xml
4. Remove the existing configuration for this virtual machine and replace it with the modified
configuration in the XML file:
virsh undefine guest1
virsh define /tmp/guest1.xml
5. Restart the virtual machine for the changes to take effect. The direct interface is attached
in bridge mode and is persistent and automatically started when the virtual machine
boots.
Cloning Virtual Machines
• Clone
A clone is an instance of a single virtual machine. You can use a clone to set up a network of identical virtual machines, which you can optionally distribute to other destinations.
• Template
A template is an instance of a virtual machine that you can use as the cloning
source. You can use a template to create multiple clones and optionally make
modifications to each clone.
The difference between clones and templates is how they are used. For the created
clone to work properly, ensure that you remove information and modify configurations
unique to the virtual machine that is being cloned before cloning. This information and these configurations differ based on how you plan to use the clones, for example:
• anything assigned to the virtual machine such as the number of Network Interface
Cards (NICs) and their MAC addresses.
• anything configured within the virtual machine such as SSH keys.
• anything configured by an application installed on the virtual machine such as
activation codes and registration information.
You must remove some of the information and configurations from within the virtual
machine. Other information and configurations must be removed from the virtual
machine using the virtualization environment.
Note:
For more information on how to use the virt-sysprep utility to prepare a
virtual machine and understand the available options, see https://libguestfs.org/virt-sysprep.1.html.
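For example, a typical invocation against a shut-down guest (illustrative only; review the list of operations that virt-sysprep will perform before running it) is:
sudo virt-sysprep -d guest-ol8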
1. Build the virtual machine that you want to use for the clone or template.
a. Install any needed software.
b. Configure any non-unique operating system and application settings.
2. Remove any persistent or unique network configuration details.
a. Run the following command to remove any persistent udev rules:
rm -f /etc/udev/rules.d/70-persistent-net.rules
Note:
If you do not remove the udev rules, the name of the first NIC might
be eth1 instead of eth0.
After modification, your file should not include a HWADDR entry or any unique
information, and at a minimum include the following lines:
DEVICE=eth[x]
ONBOOT=yes
Important:
You must remove the HWADDR entry because if its address does not match
the new guest's MAC address, the ifcfg file is ignored.
Note:
Ensure that any additional unique information is removed from the ifcfg
files.
3. If the guest virtual machine from which you want to create a clone is registered with ULN,
you must de-register it. For more information, see the Oracle Linux: Unbreakable Linux
Network User's Guide for Oracle Linux 6 and Oracle Linux 7.
4. Run the following command to remove any sshd public/private key pairs:
rm -rf /etc/ssh/ssh_host_*
Note:
Removing ssh keys prevents problems with ssh clients not trusting these hosts.
• For Oracle Linux 6 and below, run the following command to create an empty
file on the root file system called .unconfigured:
touch /.unconfigured
• For Oracle Linux 7, run the following commands to enable the first boot and
initial-setup wizards:
sed -ie 's/RUN_FIRSTBOOT=NO/RUN_FIRSTBOOT=YES/' /etc/sysconfig/firstboot
systemctl enable firstboot-graphical
systemctl enable initial-setup-graphical
Note:
The wizards that run on the next boot depend on the configurations
that have been removed from the virtual machine. Also, on the first
boot of the clone we recommend that you change the hostname.
Important:
Before proceeding with cloning, shut down the virtual machine. You can
clone a virtual machine using virt-clone or virt-manager.
Run virt-clone --help to see a complete list of options, or refer to the VIRT-
CLONE(1) man page.
Run the following command to clone a virtual machine on the default connection,
automatically generating a new name and disk clone path:
virt-clone --original vm-name --auto-clone
Run the following command to clone a virtual machine with multiple disks:
virt-clone --connect qemu:///system --original vm-name --name vm-clone-name \
--file /var/lib/libvirt/images/vm-clone-name.img --file /var/lib/libvirt/images/vm-clone-data.img
4 Known Issues for Oracle Linux KVM
This chapter provides information about known issues for Oracle Linux KVM. If a workaround
is available, that information is also provided.
To work around this issue so that KVM guests can run the updated qemu version, edit the XML file of each KVM guest, adding the caching_mode='on' parameter to the driver sub-element of the iommu element, as shown in the following example:
<iommu model='intel'>
<driver aw_bits='48' caching_mode='on'/>
</iommu>
(Bug ID 32312933)