Linux Notes


Inside Linux

 Kernel

o The core of the UNIX system. Loaded at system start up (boot). Memory-resident control
program.
o Manages all the resources of the system, presenting them to you and every other user
as a coherent system. Provides services to user applications such as device management,
process scheduling, etc.
o Example functions performed by the kernel are:
 Managing the machine's memory and allocating it to each process.
 Scheduling the work done by the CPU so that the work of each user is carried out
as efficiently as possible.
 Accomplishing the transfer of data from one part of the machine to another
 Interpreting and executing instructions from the shell
 Enforcing file access permissions

o You do not need to know anything about the kernel in order to use a UNIX system. These
details are provided for your information only.

 Shell

o Whenever you login to a Unix system you are placed in a shell program. The shell's
prompt is usually visible at the cursor's position on your screen. To get your work done,
you enter commands at this prompt.
o The shell is a command interpreter; it takes each command and passes it to the operating
system kernel to be acted upon. It then displays the results of this operation on your
screen.
o Several shells are usually available on any UNIX system, each with its own strengths and
weaknesses.
o Different users may use different shells. Initially, your system administrator will supply a
default shell, which can be overridden or changed. The most commonly available shells
are:
 Bourne shell (sh)
 C shell (csh)
 Korn shell (ksh)
 TC Shell (tcsh)
 Bourne Again Shell (bash)
o Each shell also includes its own programming language. Command files, called "shell
scripts", are used to accomplish a series of tasks (see the small example below).
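For example, a tiny shell script might chain a few commands together; this is only an illustrative sketch (the script name backup.sh and the destination folder are assumptions, not part of any standard system):

#!/bin/bash
# backup.sh - illustrative example only: copy system log files to a backup folder
mkdir -p "$HOME/log-backup"               # create the destination if it does not exist
cp /var/log/*.log "$HOME/log-backup/"     # copy all .log files (reading some may require root)
echo "Backup finished on $(date)"         # report when the script completed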

 Utilities

o UNIX provides several hundred utility programs, often referred to as commands.


o Accomplish universal functions
 editing
 file maintenance
 printing
 sorting
 programming support
 online info etc.
o Modular: single functions can be grouped to perform more complex tasks
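As a small illustration of this modularity, single-purpose utilities can be chained together with pipes; for instance, the following generic pipeline counts how many distinct users are currently logged in:

$ who | awk '{print $1}' | sort -u | wc -l    # list sessions, keep user names, de-duplicate, count
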
Operating system

An operating system, or OS, is a software program that enables the computer hardware to communicate
and operate with the computer software. Without an operating system, a computer and its software
programs would be useless.
An operating system (sometimes abbreviated as "OS") is the program that, after being initially loaded into
the computer by a boot program, manages all the other programs in a computer. The other programs are
called applications or application programs. The application programs make use of the operating system
by making requests for services through a defined application program interface (API). In addition, users
can interact directly with the operating system through a user interface such as a command language or a
graphical user interface (GUI).

An operating system performs these services for applications:


 In a multitasking operating system where multiple programs can be running at the same time,
the operating system determines which applications should run in what order and how much
time should be allowed for each application before giving another application a turn.
 It manages the sharing of internal memory among multiple applications.
 It handles input and output to and from attached hardware devices, such as hard disks, printers,
and dial-up ports.
 It sends messages to each application or interactive user (or to a system operator) about the status
of operation and any errors that may have occurred.
 It can offload the management of what are called batch jobs (for example, printing) so that the
initiating application is freed from this work.
 On computers that can provide parallel processing, an operating system can manage how to
divide the program so that it runs on more than one processor at a time.

Examples of computer operating systems


 Red Hat – Very popular Linux operating system from Red Hat
 Microsoft Windows - PC and IBM compatible operating system. Microsoft Windows is the most
commonly found and used operating system in PCs
 Apple macOS - The operating system for Apple's Macintosh computers.
 Ubuntu Linux - A popular variant of Linux used with PC and IBM compatible computers.
 Google Android - operating system used with Android compatible phones.
 iOS - Operating system used with the Apple iPhone.
Various Parts of an Operating System

UNIX and 'UNIX-like' operating systems (such as Linux) consist of a kernel and some system programs.
There are also some application programs for doing work. The kernel is the heart of the operating system.
In fact, it is often mistakenly considered to be the operating system itself, but it is not. An operating
system provides many more services than a plain kernel.

The operating system keeps track of files on the disk, starts programs and runs them concurrently, assigns memory and other
resources to various processes, receives packets from and sends packets to the network, and so on. The
kernel does very little by itself, but it provides tools with which all services can be built. It also prevents
anyone from accessing the hardware directly, forcing everyone to use the tools it provides. This way the
kernel provides some protection for users from each other. The tools provided by the kernel are used via
system calls.

The system programs use the tools provided by the kernel to implement the various services required
from an operating system. System programs, and all other programs, run `on top of the kernel', in what is
called the user mode. The difference between system and application programs is one of intent:
applications are intended for getting useful things done (or for playing, if it happens to be a game),
whereas system programs are needed to get the system working. A word processor is an application;
mount is a system program. The difference is often somewhat blurry, however, and is important only to
compulsive categorizers.

An operating system can also contain compilers and their corresponding libraries (GCC and the C library
in particular under Linux), although not all programming languages need be part of the operating
system. Documentation, and sometimes even games, can also be part of it.

Important parts of the kernel

The Linux kernel consists of several important parts:

 Process management
 Memory management
 Hardware device drivers
 Filesystem drivers
 Network management
 Various other bits and pieces
The following figure shows some of the more important parts of the Linux kernel

Probably the most important parts of the kernel (nothing else works without them) are memory
management and process management. Memory management takes care of assigning memory areas and
swap space areas to processes, parts of the kernel, and for the buffer cache. Process management creates
processes, and implements multitasking by switching the active process on the processor.

At the lowest level, the kernel contains a hardware device driver for each kind of hardware it supports.
Since the world is full of different kinds of hardware, the number of hardware device drivers is large.
There are often many otherwise similar pieces of hardware that differ in how they are controlled by
software. The similarities make it possible to have general classes of drivers that support similar
operations; each member of the class has the same interface to the rest of the kernel but differs in what it
needs to do to implement them. For example, all disk drivers look alike to the rest of the kernel, i.e., they
all have operations like `initialize the drive', `read sector N', and `write sector N'.
What is virtual memory?

Linux supports virtual memory, that is, using a disk as an extension of RAM so that the effective size of
usable memory grows correspondingly. The kernel will write the contents of a currently unused block of
memory to the hard disk so that the memory can be used for another purpose. When the original contents
are needed again, they are read back into memory. This is all made completely transparent to the user;
programs running under Linux only see the larger amount of memory available and don't notice that
parts of them reside on the disk from time to time. Of course, reading and writing the hard disk is slower
(on the order of a thousand times slower) than using real memory, so the programs don't run as fast. The
part of the hard disk that is used as virtual memory is called the swap space.

Linux can use either a normal file in the filesystem or a separate partition for swap space. A swap
partition is faster, but it is easier to change the size of a swap file (there's no need to repartition the whole
hard disk, and possibly install everything from scratch). When you know how much swap space you
need, you should go for a swap partition, but if you are uncertain, you can use a swap file first, use the
system for a while so that you can get a feel for how much swap you need, and then make a swap
partition when you're confident about its size.

You should also know that Linux allows one to use several swap partitions and/or swap files at the same
time. This means that if you only occasionally need an unusual amount of swap space, you can set up an
extra swap file at such times, instead of keeping the whole amount allocated all the time.
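As a rough sketch of that approach (run as root; the file name /extra-swap and the 1 GB size are just examples):

# dd if=/dev/zero of=/extra-swap bs=1M count=1024   # create a 1 GB file filled with zeros
# chmod 600 /extra-swap                             # restrict access to the swap file
# mkswap /extra-swap                                # write the swap signature
# swapon /extra-swap                                # start using it as additional swap
# free -h                                           # verify that total swap has grown
# swapoff /extra-swap                               # stop using it when no longer needed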

A note on operating system terminology: computer science usually distinguishes between swapping
(writing the whole process out to swap space) and paging (writing only fixed size parts, usually a few
kilobytes, at a time). Paging is usually more efficient, and that's what Linux does, but traditional Linux
terminology talks about swapping anyway.
Linux Structure

Linux is a layered operating system. The innermost layer is the hardware that provides the services for
the OS. The operating system, referred to in Linux as the kernel, interacts directly with the hardware and
provides the services to the user programs. These user programs don’t need to know anything about the
hardware. They just need to know how to interact with the kernel and it’s up to the kernel to provide the
desired service. One of the big appeals of Linux to programmers has been that most well written user
programs are independent of the underlying hardware, making them readily portable to new systems.

User programs interact with the kernel through a set of standard system calls. These system calls request
services to be provided by the kernel. Such services include accessing a file: open, close, read,
write, link, or execute a file; starting or updating accounting records; changing ownership of a file or
directory; changing to a new directory; creating, suspending, or killing a process; enabling access to
hardware devices; and setting limits on system resources.
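One way to see these system calls in action (assuming the strace utility is installed) is to trace a simple command; for example:

$ strace -e trace=open,openat,read,write,close cat /etc/hostname

Each line of strace output shows one system call made by cat, together with its arguments and return value.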

Linux is a multi-user, multi-tasking operating system. You can have many users logged into a system
simultaneously, each running many programs. It’s the kernel’s job to keep each process and user separate
and to regulate access to system hardware, including cpu, memory, disk and other I/O devices.
Linux vs. Windows

Linux and Windows. Each has its own set of unique features, advantages and disadvantages. While it is
difficult to say which one is the better choice, it is not as difficult to answer which is the better choice
given your needs.

Note: The operating system that you use on your desktop computer (the vast majority of people use some flavor of
Windows) has absolutely nothing to do with the one that your host needs to serve your web site. Most personal sites
are created with MS FrontPage and even though that is a Microsoft product, it can be hosted perfectly on a
LINUX web server with FrontPage Extensions installed.

Stability:
LINUX systems (we actually use Linux, but for comparison purposes they are identical) are hands-down
the winner in this category. There are many factors here, but to name just a couple of big ones: in our
experience LINUX handles high server loads better than Windows, and LINUX machines seldom require
reboots while Windows constantly needs them. Servers running on LINUX enjoy extremely high uptime
and high availability/reliability.

Performance:
While there is some debate about which operating system performs better, in our experience both
perform comparably in low-stress conditions; however, LINUX servers under high load (which is what is
important) are superior to Windows.

Scalability:
Web sites usually change over time. They start off small and grow as the needs of the person or
organization running them grow. While both platforms can often adapt to your growing needs, Windows
hosting is more easily made compatible with LINUX-based programming features like PHP and MySQL.
LINUX-based web software is not always 100% compatible with Microsoft technologies like .NET and VB
development. Therefore if you wish to use these, you should choose Windows web hosting.

Compatibility:
Web sites designed and programmed to be served under a LINUX-based web server can easily be hosted
on a Windows server, whereas the reverse is not always true. This makes programming for LINUX the
better choice.

Price:
Servers hosting your web site require operating systems and licenses just like everyone else. Windows
2003 and other related applications like SQL Server each cost a significant amount of money; on the other
hand, Linux is a free operating system to download, install and operate. Windows hosting results in
being a more expensive platform.
Conclusion:
To sum it up, LINUX-based hosting is more stable, performs faster, and is more compatible than Windows-
based hosting. You only need Windows hosting if you are going to develop in .NET or Visual Basic, or use
some other application that limits your choices.
Logging On To System

 Before you can begin to use the system you will need to have a valid username and a password.
Assignment of usernames and initial passwords is typically handled by the System
Administrator
 Your username, also called a userid, should be unique and should not change. Initial passwords
can be anything and should be changed after your first login.

To login to your account

 Type your username at the login prompt; it is typically the initial of your first name followed by your
last name (e.g., iafzal). LINUX is case sensitive - if your username is kellyk, do not type KellyK. Press the
RETURN or ENTER key after typing your username.
 When the password prompt appears, type in your password. Your password is never displayed
on the screen as a security measure. It also is case sensitive. Press the RETURN or ENTER key
after entering your password.
 What happens after you successfully login depends upon your system; many LINUX systems
will display a login banner or "message of the day". Make a habit of reading this since it may
contain important information about the system.
 Other LINUX systems will automatically configure your environment and open one or more
windows for you to do work in.
 You should see a prompt - usually a percent sign (%) or dollar sign ($). This is called the "shell
prompt" (the shell is discussed in detail later). It indicates that the system is ready to accept
commands from you.

If your login attempt was unsuccessful, there are several possible reasons:

 You made a typing error while entering your username or password


 The CAPS LOCK key is on and everything is being sent to the system in uppercase letters.
 You have an expired or invalid username or password, or the system security has changed
 There are system problems

Example of user login

login: kellyk
kellyk's Password:
************************************************************
* Welcome to the Linux Systems Training Class
************************************************************
*
* Hello! (Greetings)
*
* System maintenance is scheduled today from 2:00
* until 4:00 pm EST
*
* (Thank you very much)
*
************************************************************

Your Home Directory


 Each user has a unique "home" directory. Your home directory is that part of the file system
reserved for your files.
 After login, you are "put" into your home directory automatically. This is where you start your
work.
 You are in control of your home directory and the files which reside there. You are also in control
of the file access permissions (discussed later) to the files in your home directory. Generally, you
alone should be able to create/delete/modify files in your home directory. Others may have
permission to read or execute your files as you determine.
 In most LINUX systems, you can "move around" or navigate to other parts of the file system
outside of your home directory. This depends upon how the file permissions have been set by
others and/or the System Administrator
Linux File System

A file system is a logical collection of files on a partition or disk. A partition is a container for information
and can span an entire hard drive if desired.

Your hard drive can have various partitions, each of which usually contains only one file system, such as one
partition housing the / file system and another containing the /home file system.

One file system per partition allows for the logical maintenance and management of differing file
systems.

Everything in Linux is considered to be a file, including physical devices such as DVD-ROMs, USB
devices, floppy drives, and so forth.
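For example, device files can be inspected with the same commands used for ordinary files; in a long listing the first character of the permissions field is b for block devices and c for character devices (the device names below are just typical examples and may differ on your system):

$ ls -l /dev/sda /dev/tty1     # a block device (disk) and a character device (terminal), if present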

Directory Structure:
Linux uses a hierarchical file system structure, much like an upside-down tree, with root (/) at the base of
the file system and all other directories spreading from there.

A LINUX filesystem is a collection of files and directories that has the following properties:

It has a root directory (/) that contains other files and directories.

Each file or directory is uniquely identified by its name, the directory in which it resides, and a unique
identifier, typically called an inode.

By convention, the root directory has an inode number of 2 and the lost+found directory has an inode
number of 3. Inode numbers 0 and 1 are not used. File inode numbers can be seen by specifying the -i
option to the ls command.
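For example (the inode numbers printed will vary from filesystem to filesystem, so treat the idea rather than the values as the point):

$ ls -id /              # show the inode number of the root directory
$ ls -i /etc/hostname   # show the inode number of an ordinary file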

It is self-contained. There are no dependencies between one filesystem and any other.
File System:

What are filesystems?


A filesystem is the methods and data structures that an operating system uses to keep track of files on a
disk or partition; that is, the way the files are organized on the disk. The word is also used to refer to a
partition or disk that is used to store the files or the type of the filesystem. Thus, one might say "I have
two filesystems", meaning one has two partitions on which one stores files, or that one is using the
"extended filesystem", meaning the type of the filesystem.

The difference between a disk or partition and the filesystem it contains is important. A few programs
(including, reasonably enough, programs that create filesystems) operate directly on the raw sectors of a
disk or partition; if there is an existing file system there it will be destroyed or seriously corrupted. Most
programs operate on a filesystem, and therefore won't work on a partition that doesn't contain one (or
that contains one of the wrong types).

Before a partition or disk can be used as a filesystem, it needs to be initialized, and the bookkeeping data
structures need to be written to the disk. This process is called making a filesystem.

Most LINUX filesystem types have a similar general structure, although the exact details vary quite a bit.
The central concepts are superblock, inode, data block, directory block, and indirection block. The
superblock contains information about the filesystem as a whole, such as its size (the exact information
here depends on the filesystem). An inode contains all information about a file, except its name. The
name is stored in the directory, together with the number of the inode. A directory entry consists of a
filename and the number of the inode which represents the file. The inode contains the numbers of
several data blocks, which are used to store the data in the file. There is space only for a few data block
numbers in the inode, however, and if more are needed, more space for pointers to the data blocks is
allocated dynamically. These dynamically allocated blocks are indirect blocks; the name indicates that in
order to find the data block, one has to find its number in the indirect block first.

LINUX filesystems usually allow one to create a hole in a file (this is done with the lseek() system call;
check the manual page), which means that the filesystem just pretends that at a particular place in the file
there are just zero bytes, but no actual disk sectors are reserved for that place in the file (this means that the
file will use a bit less disk space). This happens especially often for small binaries, Linux shared libraries,
some databases, and a few other special cases. (Holes are implemented by storing a special value as the
address of the data block in the indirect block or inode. This special address means that no data block is
allocated for that part of the file, ergo, there is a hole in the file.)
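A quick way to see a hole in action is a sketch using the standard truncate, ls and du utilities (the file name is arbitrary):

$ truncate -s 100M sparse.img   # create a file whose 100 MB are entirely a hole
$ ls -lh sparse.img             # ls reports the logical size: 100M
$ du -h sparse.img              # du reports the disk space actually allocated: (almost) nothing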

Comparing Filesystem Features

FS Name     Year Introduced   Original OS   Max File Size    Max FS Size      Journaling
FAT16       1983              MSDOS V2      4GB              16MB to 8GB      N
FAT32       1997              Windows 95    4GB              8GB to 2TB       N
HPFS        1988              OS/2          4GB              2TB              N
NTFS        1993              Windows NT    16EB             16EB             Y
HFS+        1998              Mac OS        8EB              ?                N
UFS2        2002              FreeBSD       512GB to 32PB    1YB              N
ext2        1993              Linux         16GB to 2TB      2TB to 32TB      N
ext3        1999              Linux         16GB to 2TB      2TB to 32TB      Y
ReiserFS3   2001              Linux         8TB              16TB             Y
ReiserFS4   2005              Linux         ?                ?                Y
XFS         1994              IRIX          9EB              9EB              Y
JFS         ?                 AIX           8EB              512TB to 4PB     Y
VxFS        1991              SVR4.0        16EB             ?                Y
ZFS         2004              Solaris 10    1YB              16EB             N

This topic is loosely based on the Filesystem Hierarchy Standard (FHS), which attempts to set a standard
for how the directory tree in a Linux system should be organized. Such a standard has the advantage
that it will be easier to write or port software for Linux, and to administer Linux machines, since
everything should be in standardized places. There is no authority behind the standard that forces
anyone to comply with it, but it has gained the support of many Linux distributions. It is not a good idea
to break with the FHS without very compelling reasons. The FHS attempts to follow Linux tradition and
current trends, making Linux systems familiar to those with experience with other Linux systems, and
vice versa.

The full directory tree is intended to be breakable into smaller parts, each capable of being on its own disk
or partition, to accommodate disk size limits and to ease backup and other system administration
tasks. The major parts are the root (/ ), /usr , /var , and /home filesystems (see the following figure). Each
part has a different purpose. The directory tree has been designed so that it works well in a network of
Linux machines which may share some parts of the filesystems over a read-only device (e.g., a CD-ROM),
or over the network with NFS.

Parts of a Linux directory tree. Dashed lines indicate partition limits

The roles of the different parts of the directory tree are described below
 The root filesystem is specific for each machine (it is generally stored on a local disk, although it could be
a ramdisk or network drive as well) and contains the files that are necessary for booting the system up,
and to bring it up to such a state that the other filesystems may be mounted. The contents of the root
filesystem will therefore be sufficient for the single user state. It will also contain tools for fixing a broken
system, and for recovering lost files from backups.

 The /usr filesystem contains all commands, libraries, manual pages, and other unchanging files needed
during normal operation. No files in /usr should be specific for any given machine, nor should they be
modified during normal use. This allows the files to be shared over the network, which can be cost-
effective since it saves disk space (there can easily be hundreds of megabytes, increasingly multiple
gigabytes in /usr). It can make administration easier (only the master /usr needs to be changed when
updating an application, not each machine separately) to have /usr network mounted. Even if the
filesystem is on a local disk, it could be mounted read-only, to lessen the chance of filesystem corruption
during a crash.

 The /var filesystem contains files that change, such as spool directories (for mail, news, printers, etc), log
files, formatted manual pages, and temporary files. Traditionally everything in /var has been somewhere
below /usr , but that made it impossible to mount /usr read-only.

 The /home filesystem contains the users' home directories, i.e., all the real data on the system. Separating
home directories to their own directory tree or filesystem makes backups easier; the other parts often do
not have to be backed up, or at least not as often, since they seldom change. A big /home might have to be
broken across several filesystems, which requires adding an extra naming level below /home, for example
/home/students and /home/staff.

Although the different parts have been called filesystems above, there is no requirement that they
actually be on separate filesystems. They could easily be kept in a single one if the system is a small
single-user system and the user wants to keep things simple. The directory tree might also be divided
into filesystems differently, depending on how large the disks are, and how space is allocated for various
purposes. The important part, though, is that all the standard names work; even if, say, /var and /usr are
actually on the same partition, the names /usr/lib/libc.a and /var/log/messages must work, for example by
moving files below /var into /usr/var, and making /var a symlink to /usr/var.
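Schematically, that last arrangement could be set up with commands like the following; this is shown only to illustrate the idea and should not be run on a live system without knowing exactly what you are doing:

# mkdir /usr/var        # create the real location under /usr
# mv /var/* /usr/var/   # move the existing contents of /var there
# rmdir /var            # remove the now-empty directory
# ln -s /usr/var /var   # make /var a symbolic link to /usr/var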

The Linux filesystem structure groups files according to purpose, i.e., all commands are in one place, all
data files in another, documentation in a third, and so on. An alternative would be to group files
according to the program they belong to, i.e., all Emacs files would be in one directory, all TeX in another,
and so on. The problem with the latter approach is that it makes it difficult to share files (the program
directory often contains both static and sharable and changing and non-sharable files), and sometimes to
even find the files (e.g., manual pages in a huge number of places, and making the manual page
programs find all of them is a maintenance nightmare).

The root filesystem should generally be small, since it contains very critical files and a small, infrequently
modified filesystem has a better chance of not getting corrupted. A corrupted root filesystem will
generally mean that the system becomes unbootable except with special measures (e.g., from a floppy), so
you don't want to risk it.
The root directory generally doesn't contain any files, except perhaps on older systems where the
standard boot image for the system, usually called /vmlinuz, was kept there. (Most distributions have
moved those files to the /boot directory.)
1. / – Root

 Every single file and directory starts from the root directory.
 Only root user has write privilege under this directory.
 Please note that /root is the root user’s home directory, which is not the same as /.

2. /bin – User Binaries

 Contains binary executables.


 Common Linux commands you need to use in single-user mode are located under this
directory.
 Commands used by all the users of the system are located here.
 For example: ps, ls, ping, grep, cp.

3. /sbin – System Binaries

 Just like /bin, /sbin also contains binary executables.


 But, the Linux commands located under this directory are typically used by the system
administrator, for system maintenance purposes.
 For example: iptables, reboot, fdisk, ifconfig, swapon

4. /etc – Configuration Files

 Contains configuration files required by all programs.


 This also contains startup and shutdown shell scripts used to start/stop individual
programs.
 For example: /etc/resolv.conf, /etc/logrotate.conf

5. /dev – Device Files

 Contains device files.


 These include terminal devices, USB devices, or any other device attached to the system.
 For example: /dev/tty1, /dev/usbmon0

6. /proc – Process Information

 Contains information about system processes.


 This is a pseudo filesystem that contains information about running processes. For example: the
/proc/{pid} directory contains information about the process with that particular pid.
 This is a virtual filesystem with text information about system resources. For example:
/proc/uptime
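For example (PID 1 below is the init/systemd process, which always exists):

$ cat /proc/uptime   # seconds the system has been up, and seconds spent idle
$ ls /proc/1         # the directory describing the process with PID 1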

7. /var – Variable Files

 var stands for variable files.


 Content of the files that are expected to grow can be found under this directory.
 This includes — system log files (/var/log); packages and database files (/var/lib); emails
(/var/mail); print queues (/var/spool); lock files (/var/lock); temp files needed across
reboots (/var/tmp);

8. /tmp – Temporary Files

 Directory that contains temporary files created by the system and users.


 Files under this directory are deleted when the system is rebooted.

9. /usr – User Programs

 Contains binaries, libraries, documentation, and source-code for second level programs.
 /usr/bin contains binary files for user programs. If you can’t find a user binary under /bin,
look under /usr/bin. For example: at, awk, cc, less, scp
 /usr/sbin contains binary files for system administrators. If you can’t find a system binary
under /sbin, look under /usr/sbin. For example: atd, cron, sshd, useradd, userdel
 /usr/lib contains libraries for /usr/bin and /usr/sbin
 /usr/local contains user programs that you install from source. For example, when you
install apache from source, it goes under /usr/local/apache2

10. /home – Home Directories

 Home directories for all users to store their personal files.


 For example: /home/john, /home/nikita

11. /boot – Boot Loader Files

 Contains boot loader related files.


 The kernel initrd, vmlinuz, and grub files are located under /boot
 For example: initrd.img-2.6.32-24-generic, vmlinuz-2.6.32-24-generic

12. /lib – System Libraries

 Contains library files that support the binaries located under /bin and /sbin
 Library filenames are either ld* or lib*.so.*
 For example: ld-2.11.1.so, libncurses.so.5.7

13. /opt – Optional add-on Applications

 opt stands for optional.


 Contains add-on applications from individual vendors.
 Add-on applications should be installed under /opt/ or an /opt/ sub-directory.

14. /mnt – Mount Directory


 Temporary mount directory where sysadmins can mount filesystems.

15. /media – Removable Media Devices

 Temporary mount directory for removable devices.


 For examples, /media/cdrom for CD-ROM; /media/floppy for floppy drives;
/media/cdrecorder for CD writer

16. /srv – Service Data

 srv stands for service.


 Contains server-specific, service-related data.
 For example, /srv/cvs contains CVS related data
File Names

 LINUX permits file names to use most characters, but avoid spaces, tabs and characters that have
a special meaning to the shell, such as:

& ; ( ) | ? \ ' " ` [ ] { } < > $ - ! /

 Case Sensitivity: uppercase and lowercase are not the same! These are three different files:

NOVEMBER November november

 Length: can be up to 256 characters

 Extensions: may be used to identify types of files


libc.a - archive, library file
program.c - C language source file
alpha2.f - Fortran source file
xwd2ps.o - Object/executable code
mygames.Z - Compressed file

 Hidden Files: have names that begin with a dot (.) For example:
.cshrc .login .mailrc .mwmrc

 Uniqueness: as with children in a family, no two files in the same parent directory can have the
same name. Files located in separate directories can have identical names.

 Reserved Filenames:
/ - the root directory (slash)
. - current directory (period)
.. - parent directory (double period)
~ - your home directory (tilde)
Passwords Standards

When your account is issued, you will be given an initial password. It is important for system and
personal security that the password for your account be changed to something of your choosing. The
command for changing a password is "passwd". You will be asked both for your old password and to
type your new selected password twice. If you mistype your old password or do not type your new
password the same way twice, the system will indicate that the password has not been changed.
Some system administrators have installed programs that check a new password for appropriateness (is it
cryptic enough for reasonable system security). A password change may be rejected by this program.
When choosing a password, it is important that it be something that could not be guessed -- either by
somebody unknown to you trying to break in, or by an acquaintance who knows you. Suggestions for
choosing and using a password follow:

Don't
use a word (or words) in any language
use a proper name
use information that can be found in your wallet
use information commonly known about you (car license, pet name, etc)
use control characters. Some systems can't handle them
write your password anywhere
ever give your password to *anybody*

Do
use a mixture of character types (alphabetic, numeric, special)
use a mixture of upper case and lower case
use at least 6 characters
choose a password you can remember
change your password often
make sure nobody is looking over your shoulder when you are entering your password
Change Password in LINUX

How do I change the password in LINUX?

To modify a user's password or your own password in LINUX, use the passwd command. Open the
terminal and then type the passwd command, entering the new password when prompted; the characters
entered are not displayed on the screen, in order to avoid the password being seen by a passer-by. The
passwd command prompts for the new password twice in order to detect any typing errors. The
encrypted password is stored in the /etc/shadow file.

Change Any User's Password


Login as the root user and type the command:
# passwd userName
# passwd vivek
# passwd tom

Sample outputs:
Enter new LINUX password:
Retype new LINUX password:
passwd: password updated successfully

Change Your Own Password


Simply type the passwd command:
$ passwd

Sample outputs:
(current) LINUX password:
Enter new LINUX password:
Retype new LINUX password:
passwd: password updated successfully
Difference between locate and find command in Linux

Two popular commands for locating files on Linux are find and locate. Depending on the size of your
file system and the depth of your search, the find command can sometimes take a long time to scan all of
the data. For example, if you search your entire filesystem for the files named data.txt:

# find / -name data.txt

More likely than not, this will take on the order of minutes, if not longer to return. A quicker method is
to use the locate command:

# locate data.txt

However, this efficiency comes at a cost: the data reported in the output of locate isn’t as fresh as the
data reported by the find command. By default, the system runs updatedb, which takes a snapshot of
the system files once a day; locate uses this snapshot to quickly report what files are where. However,
recent file additions or removals (within 24 hours) are not recorded in the snapshot and are unknown
to locate.

The find command has a number of options and is very configurable. There are many ways to reduce
the depth and breadth of your search and make it more efficient.
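For example, limiting the starting directory and the depth of the search (using standard GNU find options) already helps a lot:

# find /home -maxdepth 2 -type f -name data.txt   # search only /home, at most two directory levels deep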

locate uses a previously built database. If the database is not updated, the locate command will not show
up-to-date output; to sync the database, you must execute the updatedb command.

# updatedb
How to Use Wildcards

A wildcard is a character that can be used as a substitute for any of a class of characters in a search,
thereby greatly increasing the flexibility and efficiency of searches.

Wildcards are commonly used in shell commands in Linux and other Unix-like operating systems. A
shell is a program that provides a text-only user interface and whose main function is to execute
commands typed in by users and display their results.

Wildcards are also used in regular expressions and programming languages. Regular expressions are a
pattern matching system that uses strings (i.e., sequences of characters) constructed according to pre-
defined syntax rules to find desired strings in text.

The term wildcard or wild card was originally used in card games to describe a card that can be assigned
any value that its holder desires. However, its usage has spread so that it is now used to describe an
unknown or unpredictable factor in a variety of fields.

Star Wildcard

Three types of wildcards are used with Linux commands. The most frequently employed and usually the
most useful is the star wildcard, which is the same as an asterisk (*). The star wildcard has the broadest
meaning of any of the wildcards, as it can represent zero characters, all single characters or any string.

As an example, the file command provides information about any filesystem object (i.e., file, directory or
link) that is provided to it as an argument (i.e., input). Because the star wildcard represents every string,
it can be used as the argument for file to return information about every object in the specified directory.
Thus, the following would display information about every object in the current directory (i.e., the
directory in which the user is currently working):

file *

If there are no matches, an error message is returned, such as *: can't stat `*' (No such file or directory). In
the case of this example, the only way that there would be no matches is if the directory were empty.

Wildcards can be combined with other characters to represent parts of strings. For example, to represent
any filesystem object that has a .jpg filename extension, *.jpg would be used. Likewise, a* would
represent all objects that begin with a lower case (i.e., small) letter a.

As another example, the following would tell the ls command (which is used to list files) to provide the
names of all files in the current directory that have an .html or a .txt extension:

ls *.html *.txt

Likewise, the following would tell the rm command (which is used to remove files and directories) to
delete all files in the current directory that have the string xxx in their name:

rm *xxx*
Question Mark Wildcard

The question mark (?) is used as a wildcard character in shell commands to represent exactly one
character, which can be any single character. Thus, two question marks in succession would represent
any two characters in succession, and three question marks in succession would represent any string
consisting of three characters.

Thus, for example, the following would return data on all objects in the current directory whose names,
inclusive of any extensions, are exactly three characters in length:

file ???

And the following would provide data on all objects whose names are one, two or three characters in
length:

file ? ?? ???

As is the case with the star wildcard, the question mark wildcard can be used in combination with other
characters. For example, the following would provide information about all objects in the current
directory that begin with the letter a and are five characters in length:

file a????

The question mark wildcard can also be used in combination with other wildcards when separated by
some other character. For example, the following would return a list of all files in the current directory
that have a three-character filename extension:

ls *.???

Square Brackets Wildcard

The third type of wildcard in shell commands is a pair of square brackets, which can represent any of the
characters enclosed in the brackets. Thus, for example, the following would provide information about all
objects in the current directory that have an x, y and/or z in them:

file *[xyz]*

And the following would list all files that had an extension that begins with x, y or z:

ls *.[xyz]*

The same results can be achieved by merely using the star and question mark wildcards. However, it is
clearly more efficient to use the bracket wildcard.

When a hyphen is used between two characters in the square brackets wildcard, it indicates a range
inclusive of those two characters. For example, the following would provide information about all of the
objects in the current directory that begin with any letter from a through f:
file [a-f]*

And the following would provide information about every object in the current directory whose name
includes at least one numeral:

file *[0-9]*

The use of the square brackets to indicate a range can be combined with its use to indicate a list. Thus, for
example, the following would provide information about all filesystem objects whose names begin with
any letter from a through c or begin with s or t:

file [a-cst]*

Likewise, multiple sets of ranges can be specified. Thus, for instance, the following would return
information about all objects whose names begin with the first three or the final three lower case letters of
the alphabet:

file [a-cx-z]*

Sometimes it can be useful to have a succession of square bracket wildcards. For example, the following
would display all filenames in the current directory that consist of jones followed by a three-digit
number:

ls jones[0-9][0-9][0-9]

Other Wild Cards

\ (backslash) = is used as an "escape" character, i.e. to protect a subsequent special character. Thus, "\\"
searches for a backslash. Note you may need to use quotation marks and backslash(es).

^ (caret) = means "the beginning of the line". So "^a" means find a line starting with an "a".

$ (dollar sign) = means "the end of the line". So "a$" means find a line ending with an "a".

For example, this command searches the file myfile for lines starting with an "s" and ending with an "n",
and prints them to the standard output (screen):

cat myfile | grep '^s.*n$'


Soft Link and Hard Links

Example:
Create two files:
$ touch blah1
$ touch blah2

Enter some data into them:

$ echo "Cat" > blah1


$ echo "Dog" > blah2

And as expected:

$ cat blah1; cat blah2


Cat
Dog

Let's create hard and soft links:

$ ln blah1 blah1-hard
$ ln -s blah2 blah2-soft

Let's see what just happened:

$ ls -l

blah1
blah1-hard
blah2
blah2-soft -> blah2

Changing the name of blah1 does not matter:

$ mv blah1 blah1-new
$ cat blah1-hard
Cat

blah1-hard points to the inode (the contents) of the file, and that wasn't changed.
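One way to confirm this is to compare inode numbers with ls -i, continuing the example above (the actual numbers will differ on your system):

$ ls -li blah1-new blah1-hard   # both names show the same inode number and a link count of 2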

$ mv blah2 blah2-new
$ ls blah2-soft
blah2-soft
$ cat blah2-soft
cat: blah2-soft: No such file or directory

The contents of the file could not be found because the soft link points to the name, which was changed,
and not to the contents.
Similarly, if blah1 is deleted, blah1-hard still holds the contents; if blah2 is deleted, blah2-soft is just a link
to a non-existing file.
List folders and files in a directory
Written By: Alexandros Mavridis

The command:
ls - list directory contents

Information commands:
ls --version
ls --help
info ls
man ls

Contents:
Listing Folders: Non Hidden Folders; Hidden Folders; Non Hidden And Hidden Folders
Listing Files: Non Hidden Files; Hidden Files; Non Hidden And Hidden Files
Listing Folders and Files: Non Hidden Folders and Files; Hidden Folders And Files; Non Hidden And
Hidden Folders And Files
Sources

Options used in this document:
-r, --reverse                  reverse order while sorting
-l                             use a long listing format
-t                             sort by modification time, newest first
-i, --inode                    print the index number of each file
-a, --all                      do not ignore entries starting with .
-d, --directory                list directories themselves, not their contents
-p, --indicator-style=slash    append / indicator to directories
--group-directories-first      group directories before files

All the commands in the current document can be piped to the wc -l command to print the number of
folders or files instead of the folders and files themselves. For example:

ls -d */ | wc -l
A. Listing Folders

Non hidden folders

ls -d */
    Prints all non hidden folders in the current working directory in alphabetical order.

ls -dr */
    Prints all non hidden folders in the current working directory in reverse alphabetical order.

ls -dl */
ls -l | grep ^d
ls -l | awk '{if ($1 ~ /d/) print $0}'
    Prints in detail all non hidden folders in the current working directory in alphabetical order.

ls -dlr */
ls -lr | grep ^d
ls -lr | awk '{if ($1 ~ /d/) print $0}'
    Prints in detail all non hidden folders in the current working directory in reverse alphabetical order.

ls -dt */
    Prints all non hidden folders in the current working directory in chronological order, going from newest to oldest.

ls -dtr */
    Prints all non hidden folders in the current working directory in reverse chronological order, going from oldest to newest.

ls -dlt */
ls -lt | grep ^d
ls -lt | awk '{if ($1 ~ /d/) print $0}'
    Prints in detail all non hidden folders in the current working directory in chronological order, going from newest to oldest.

ls -dltr */
ls -ltr | grep ^d
ls -ltr | awk '{if ($1 ~ /d/) print $0}'
    Prints in detail all non hidden folders in the current working directory in reverse chronological order, going from oldest to newest.

ls -di */
    Prints all non hidden folders in the current working directory, including inode numbers, in alphabetical order.

ls -dri */
    Prints all non hidden folders in the current working directory, including inode numbers, in reverse alphabetical order.

ls -dli */
    Prints in detail all non hidden folders in the current working directory, including inode numbers, in alphabetical order.

ls -drli */
    Prints in detail all non hidden folders in the current working directory, including inode numbers, in reverse alphabetical order.

ls -dti */
    Prints all non hidden folders in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -dtri */
    Prints all non hidden folders in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

ls -dlti */
    Prints in detail all non hidden folders in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -dltri */
    Prints in detail all non hidden folders in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.
Hidden folders

ls -d .*/
    Prints all hidden folders in the current working directory in alphabetical order.

ls -dr .*/
    Prints all hidden folders in the current working directory in reverse alphabetical order.

ls -dl .*/
    Prints in detail all hidden folders in the current working directory in alphabetical order.

ls -dlr .*/
    Prints in detail all hidden folders in the current working directory in reverse alphabetical order.

ls -dt .*/
    Prints all hidden folders in the current working directory in chronological order, going from newest to oldest.

ls -dtr .*/
    Prints all hidden folders in the current working directory in reverse chronological order, going from oldest to newest.

ls -dlt .*/
    Prints in detail all hidden folders in the current working directory in chronological order, going from newest to oldest.

ls -dltr .*/
    Prints in detail all hidden folders in the current working directory in reverse chronological order, going from oldest to newest.

ls -di .*/
    Prints all hidden folders in the current working directory, including inode numbers, in alphabetical order.

ls -dri .*/
    Prints all hidden folders in the current working directory, including inode numbers, in reverse alphabetical order.

ls -dli .*/
    Prints in detail all hidden folders in the current working directory, including inode numbers, in alphabetical order.

ls -drli .*/
    Prints in detail all hidden folders in the current working directory, including inode numbers, in reverse alphabetical order.

ls -dti .*/
    Prints all hidden folders in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -dtri .*/
    Prints all hidden folders in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

ls -dlti .*/
    Prints in detail all hidden folders in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -dltri .*/
    Prints in detail all hidden folders in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

Non hidden and hidden folders

ls -d */ .*/
    Prints all non hidden and hidden folders in the current working directory in alphabetical order.

ls -dr */ .*/
    Prints all non hidden and hidden folders in the current working directory in reverse alphabetical order.

ls -dl */ .*/
    Prints in detail all non hidden and hidden folders in the current working directory in alphabetical order.

ls -dlr */ .*/
    Prints in detail all non hidden and hidden folders in the current working directory in reverse alphabetical order.

ls -dt */ .*/
    Prints all non hidden and hidden folders in the current working directory in chronological order, going from newest to oldest.

ls -dtr */ .*/
    Prints all non hidden and hidden folders in the current working directory in reverse chronological order, going from oldest to newest.

ls -dlt */ .*/
    Prints in detail all non hidden and hidden folders in the current working directory in chronological order, going from newest to oldest.

ls -dltr */ .*/
    Prints in detail all non hidden and hidden folders in the current working directory in reverse chronological order, going from oldest to newest.

ls -di */ .*/
    Prints all non hidden and hidden folders in the current working directory, including inode numbers, in alphabetical order.

ls -dri */ .*/
    Prints all non hidden and hidden folders in the current working directory, including inode numbers, in reverse alphabetical order.

ls -dli */ .*/
    Prints in detail all non hidden and hidden folders in the current working directory, including inode numbers, in alphabetical order.

ls -drli */ .*/
    Prints in detail all non hidden and hidden folders in the current working directory, including inode numbers, in reverse alphabetical order.

ls -dti */ .*/
    Prints all non hidden and hidden folders in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -dtri */ .*/
    Prints all non hidden and hidden folders in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

ls -dlti */ .*/
    Prints in detail all non hidden and hidden folders in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -dltri */ .*/
    Prints in detail all non hidden and hidden folders in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

B. Listing Files

Non hidden files

ls -p | grep -v /
    Prints all non hidden files in the current working directory in alphabetical order.

ls -pr | grep -v /
    Prints all non hidden files in the current working directory in reverse alphabetical order.

ls -pl | grep -v /
ls -l | grep -v ^d
ls -l | grep '^\-'
    Prints in detail all non hidden files in the current working directory in alphabetical order.

ls -plr | grep -v /
ls -lr | grep -v ^d
ls -lr | grep '^\-'
    Prints in detail all non hidden files in the current working directory in reverse alphabetical order.

ls -pt | grep -v /
    Prints all non hidden files in the current working directory in chronological order, going from newest to oldest.

ls -ptr | grep -v /
    Prints all non hidden files in the current working directory in reverse chronological order, going from oldest to newest.

ls -plt | grep -v /
ls -lt | grep -v ^d
ls -lt | grep '^\-'
    Prints in detail all non hidden files in the current working directory in chronological order, going from newest to oldest.

ls -pltr | grep -v /
ls -ltr | grep -v ^d
ls -ltr | grep '^\-'
    Prints in detail all non hidden files in the current working directory in reverse chronological order, going from oldest to newest.

ls -pi | grep -v /
    Prints all non hidden files in the current working directory, including inode numbers, in alphabetical order.

ls -pri | grep -v /
    Prints all non hidden files in the current working directory, including inode numbers, in reverse alphabetical order.

ls -pli | grep -v /
ls -li | grep -v ^d
    Prints in detail all non hidden files in the current working directory, including inode numbers, in alphabetical order.

ls -plri | grep -v /
ls -lri | grep -v ^d
    Prints in detail all non hidden files in the current working directory, including inode numbers, in reverse alphabetical order.

ls -pti | grep -v /
    Prints all non hidden files in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -ptri | grep -v /
    Prints all non hidden files in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

ls -plti | grep -v /
ls -lti | grep -v ^d
    Prints in detail all non hidden files in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -pltri | grep -v /
ls -ltri | grep -v ^d
    Prints in detail all non hidden files in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.
Hidden Files

ls -d .?*
ls -a | grep '^\.'
    Prints all hidden files in the current working directory in alphabetical order.

ls -dr .?*
ls -ar | grep '^\.'
    Prints all hidden files in the current working directory in reverse alphabetical order.

ls -ld .?*
    Prints in detail all hidden files in the current working directory in alphabetical order.

ls -ldr .?*
    Prints in detail all hidden files in the current working directory in reverse alphabetical order.

ls -dt .?*
ls -at | grep '^\.'
    Prints all hidden files in the current working directory in chronological order, going from newest to oldest.

ls -dtr .?*
ls -atr | grep '^\.'
    Prints all hidden files in the current working directory in reverse chronological order, going from oldest to newest.

ls -dlt .?*
    Prints in detail all hidden files in the current working directory in chronological order, going from newest to oldest.

ls -dltr .?*
    Prints in detail all hidden files in the current working directory in reverse chronological order, going from oldest to newest.

ls -di .?*
    Prints all hidden files in the current working directory, including inode numbers, in alphabetical order.

ls -dir .?*
    Prints all hidden files in the current working directory, including inode numbers, in reverse alphabetical order.

ls -ldi .?*
    Prints in detail all hidden files in the current working directory, including inode numbers, in alphabetical order.

ls -ldri .?*
    Prints in detail all hidden files in the current working directory, including inode numbers, in reverse alphabetical order.

ls -dti .?*
    Prints all hidden files in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -dtri .?*
    Prints all hidden files in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

ls -dlti .?*
    Prints in detail all hidden files in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -dltri .?*
    Prints in detail all hidden files in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

Non hidden and hidden files

Command Output

ls -pa | grep -v / Prints all non hidden and hidden files in the
current working directory in alphabetical order.
ls -pra | grep -v / Prints all non hidden and hidden files in the
current working directory in reverse alphabetical
order.
ls -pla | grep -v / Prints in detail all non hidden and hidden files
ls -la | grep -v ^d in the current working directory in alphabetical
ls -la | grep '^\-' order.
ls -prla | grep -v / Prints in detail all non hidden and hidden files in
ls -rla | grep -v ^d the current working directory in reverse
ls -lra | grep '^\-' alphabetical order.

Prints all non hidden and hidden files in the


ls -pta | grep -v / current working directory in chronological
order, going from newest to oldest.
Prints all non hidden and hidden files in the
current working directory in reverse
ls -ptra | grep -v /
chronological order, going from oldest to
newest.
ls -plta | grep -v / Prints in detail all non hidden and hidden files in
ls -lta | grep -v ^d the current working directory in chronological
ls -lta | grep '^\-' order, going from newest to oldest.

ls -pltra | grep -v / Prints in detail all non hidden and hidden files in
ls -ltra | grep -v ^d the current working directory in reverse
chronological order, going from oldest to
ls -ltra | grep '^\-' newest.
ls -pai | grep -v /      Prints all non hidden and hidden files in the current working directory, including inode numbers, in alphabetical order.

ls -prai | grep -v /     Prints all non hidden and hidden files in the current working directory, including inode numbers, in reverse alphabetical order.
ls -plai | grep -v / Prints in detail all non hidden and hidden files in
the current working directory, including inode
numbers, in alphabetical order.
ls -prlai | grep -v / Prints in detail all non hidden and hidden files in
the current working directory, including inode
numbers, in reverse alphabetical order.
ls -ptia | grep -v / Prints all non hidden and hidden files in the
current working directory, including inode
numbers, in chronological order, going from
newest to oldest.
ls -ptrai | grep -v / Prints all non hidden and hidden files in the
current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest.
ls -pltai | grep -v / Prints in detail all non hidden and hidden files in
the current working directory, including inode
numbers, in chronological order, going from
newest to oldest.
ls -pltrai | grep -v / Prints in detail all non hidden and hidden files in
the current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest.

C. Listing Folders And Files


Non hidden folders and files

Command Output

ls                       Prints all non hidden folders and files in the current working directory in alphabetical order.
ls --group-directories-first Prints all non hidden folders in the current
working directory in alphabetical order,
followed by all non hidden files in the current
working directory in alphabetical order.
ls -r Prints all non hidden folders and files in the current
working directory in reverse alphabetical order.
ls -r --group-directories-first Prints all non hidden folders in the current
working directory in reverse alphabetical order,
followed by all non hidden files in the current
working directory in reverse alphabetical order.
ls -l Prints in detail all non hidden folders and files in
the current working directory in alphabetical
order.
ls -l --group-directories-first Prints in detail all non hidden folders in the
current working directory in alphabetical order,
followed by all non hidden files in the current
working directory in alphabetical order.
ls -lr Prints in detail all non hidden folders and files in
the current working directory in reverse
alphabetical order.
ls -lr --group-directories-first Prints in detail all non hidden folders in the
current working directory in reverse alphabetical
order, followed by all non hidden files in the
current working directory in reverse alphabetical
order.
ls -t Prints all non hidden folders and files in the
current working directory in chronological order,
going from newest to oldest.
ls -t --group-directories-first Prints all non hidden folders in the current
working directory in chronological order, going
from newest to oldest, followed by all non
hidden files in the current working directory in
chronological order, going from newest to
oldest.
ls -tr Prints all non hidden folders and files in the
current working directory in reverse
chronological order, going from oldest to
newest.
ls -tr --group-directories-first Prints all non hidden folders in the current
working directory in reverse chronological
order, going from oldest to newest, followed by
all non hidden files in the current working
directory in reverse chronological order, going
from oldest to newest.
ls -lt Prints in detail all non hidden folders and files in
the current working directory in chronological
order, going from newest to oldest.
ls -lt --group-directories-first Prints in detail all non hidden folders in the
current working directory in chronological order,
going from newest to oldest, followed by all non
hidden files in the current working directory in
chronological order, going from newest to
oldest.
ls -ltr Prints in detail all non hidden folders and files in
the current working directory in reverse
chronological order, going from oldest to newest.
ls -ltr --group-directories-first Prints in detail all non hidden folders in the
current working directory in reverse
chronological order, going from oldest to newest,
followed by all non hidden files in the current
working directory in reverse chronological order,
going from oldest to newest.
ls -i Prints all non hidden folders and files in the
current working directory, including inode
numbers, in alphabetical order.
ls -i --group-directories-first Prints all non hidden folders in the current
working directory, including inode numbers, in
alphabetical order, followed by all non hidden
files in the current working directory, including
inode numbers, in alphabetical order.
ls -ri Prints all non hidden folders and files in the
current working directory, including inode
numbers, in reverse alphabetical order.
ls -ri --group-directories-first Prints all non hidden folders in the current
working directory, including inode numbers, in
reverse alphabetical order, followed by all non
hidden files in the current working directory,
including inode numbers, in reverse alphabetical
order.
ls -li Prints in detail all non hidden folders and files in
the current working directory, including inode
numbers, in alphabetical order.
ls -li --group-directories-first Prints in detail all non hidden folders in the
current working directory, including inode
numbers, in alphabetical order, followed by all
non hidden files in the current working
directory, including inode numbers, in
alphabetical order.
ls -lri Prints in detail all non hidden folders and files in
the current working directory, including inode
numbers, in reverse alphabetical order.
ls -lri --group-directories-first Prints in detail all non hidden folders in the
current working directory, including inode
numbers, in reverse alphabetical order, followed
by all non hidden files in the current working
directory, including inode numbers, in reverse
alphabetical order.
ls -ti Prints all non hidden folders and files in the
current working directory, including inode
numbers, in chronological order, going from
newest to oldest.
ls -ti --group-directories-first Prints all non hidden folders in the current
working directory, including inode numbers, in
chronological order, going from newest to
oldest, followed by all non hidden files in the
current working directory, including inode
numbers, in chronological order, going from
newest to oldest.
ls -tri Prints all non hidden folders and files in the
current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest.
ls -tri --group-directories-first Prints all non hidden folders in the current
working directory, including inode numbers, in
reverse chronological order, going from oldest to
newest, followed by all non hidden files in the
current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest.
ls -lti Prints in detail all non hidden folders and files in
the current working directory, including inode
numbers, in chronological order, going from
newest to oldest.
ls -lti --group-directories-first Prints in detail all non hidden folders in the
current working directory, including inode
numbers, in chronological order, going from
newest to oldest, followed by all non hidden
files in the current working directory, including
inode numbers, in chronological order, going
from newest to oldest.
ls -ltri Prints in detail all non hidden folders and files in
the current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest.
ls -ltri --group-directories-first Prints in detail all non hidden folders in the
current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest, followed by all non
hidden files in the current working directory,
including inode numbers, in reverse
chronological order, going from oldest to
newest.
Hidden folders and files

Command Output

ls -d .*                 Prints all hidden folders and files in the current working directory in alphabetical order.
ls -d .[^.]* Prints all hidden folders and files in the current
working directory in alphabetical order. Returns
an error if at least one hidden folder or at least
one hidden file does not exist.
ls -d .* --group-directories-first Prints all hidden folders in the current working
directory in alphabetical order, followed by all
hidden files in the current working directory in
alphabetical order.
ls -d .[^.]* --group-directories-first Prints all hidden folders in the current working
directory in alphabetical order, followed by all
hidden files in the current working directory in
alphabetical order. Returns an error if at least
one hidden folder or at least one hidden file
does not exist.
ls -dr .* Prints all hidden folders and files in the current
working directory in reverse alphabetical order.
ls -dr .[^.]* Prints all hidden folders and files in the current
working directory in reverse alphabetical order.
Returns an error if at least one hidden folder or
at least one hidden file does not exist.
ls -dr .* --group-directories-first Prints all hidden folders in the current working
directory in reverse alphabetical order, followed
by all hidden files in the current working
directory in reverse alphabetical order.
ls -dr .[^.]* --group-directories-first Prints all hidden folders in the current working
directory in reverse alphabetical order, followed
by all hidden files in the current working
directory in reverse alphabetical order. Returns
an error if at least one hidden folder or at least
one hidden file does not exist.
ls -dl .* Prints in detail all hidden folders and files in the
current working directory in alphabetical order.
ls -dl .[^.]* Prints in detail all hidden folders and files in the
current working directory in alphabetical order.
Returns an error if at least one hidden folder or
at least one hidden file does not exist.
ls -dl .* --group-directories-first Prints in detail all hidden folders in the current
working directory in alphabetical order,
followed by all hidden files in the current
working directory in alphabetical order.
ls -dl .[^.]* --group-directories-first Prints in detail all hidden folders in the current
working directory in alphabetical order,
followed by all hidden files in the current
working directory in alphabetical order. Returns
an error if at least one hidden folder or at least
one hidden file does not exist.
ls -dlr .* Prints in detail all hidden folders and files in the
current working directory in reverse alphabetical
order.
ls -dlr .[^.]* Prints in detail all hidden folders and files in the
current working directory in reverse alphabetical
order. Returns an error if at least one hidden
folder or at least one hidden file does not exist.
ls -dlr .* --group-directories-first Prints in detail all hidden folders in the current
working directory in reverse alphabetical order,
followed by all hidden files in the current
working directory in reverse alphabetical order.
ls -dlr .[^.]* --group-directories-first Prints in detail all hidden folders in the current
working directory in reverse alphabetical order,
followed by all hidden files in the current
working directory in reverse alphabetical order.
Returns an error if at least one hidden folder or
at least one hidden file does not exist.
ls -dt .* Prints all hidden folders and files in the current
working directory in chronological order, going
from newest to oldest.
ls -dt .[^.]* Prints all hidden folders and files in the current
working directory in chronological order, going
from newest to oldest. Returns an error if at least
one hidden folder or at least one hidden file does
not exist.
ls -dt .* --group-directories-first Prints all hidden folders in the current working
directory in chronological order, going from
newest to oldest, followed by all hidden files in
the current working directory in chronological
order, going from newest to oldest.
ls -dt .[^.]* --group-directories-first Prints all hidden folders in the current working
directory in chronological order, going from
newest to oldest, followed by all hidden files in
the current working directory in chronological
order, going from newest to oldest. Returns an
error if at least one hidden folder or at least one
hidden file does not exist.
ls -dtr .* Prints all hidden folders and files in the current
working directory in reverse chronological
order, going from oldest to newest.
ls -dtr .[^.]* Prints all hidden folders and files in the current
working directory in reverse chronological
order, going from oldest to newest. Returns an
error if at least one hidden folder or at least one
hidden file does not exist.
ls -dtr .* --group-directories-first Prints all hidden folders in the current working
directory in reverse chronological order, going
from oldest to newest, followed by all hidden
files in the current working directory in reverse
chronological order, going from oldest to
newest.
ls -dtr .[^.]* --group-directories-first Prints all hidden folders in the current working
directory in reverse chronological order, going
from oldest to newest, followed by all hidden
files in the current working directory in reverse
chronological order, going from oldest to
newest. Returns an error if at least one hidden
folder or at least one hidden file does not exist.
ls -dtl .* Prints in detail all hidden folders and files in the
current working directory in chronological order,
going from newest to oldest.
ls -dtl .[^.]* Prints in detail all hidden folders and files in the
current working directory in chronological order,
going from newest to oldest. Returns an error if
at least one hidden folder or at least one hidden
file does not exist.
ls -dtl .* --group-directories-first Prints in detail all hidden folders in the current
working directory in chronological order, going
from newest to oldest, followed by all hidden
files in the current working directory in
chronological order, going from newest to
oldest.
ls -dtl .[^.]* --group-directories-first Prints in detail all hidden folders in the current
working directory in chronological order, going
from newest to oldest, followed by all hidden
files in the current working directory in
chronological order, going from newest to
oldest. Returns an error if at least one hidden
folder or at least one hidden file does not exist.
ls -dtrl .* Prints in detail all hidden folders and files in the
current working directory in reverse
chronological order, going from oldest to newest.
ls -dtrl .[^.]* Prints in detail all hidden folders and files in the
current working directory in reverse
chronological order, going from oldest to newest.
Returns an error if at least one hidden folder or
at least one hidden file does not exist.
ls -dtrl .* --group-directories-first Prints in detail all hidden folders in the current
working directory in reverse chronological
order, going from oldest to newest, followed by
all hidden files in the current working directory
in reverse chronological order, going from oldest
to newest.
ls -dtrl .[^.]* --group-directories-first Prints in detail all hidden folders in the current
working directory in reverse chronological
order, going from oldest to newest, followed by
all hidden files in the current working directory
in reverse chronological order, going from oldest
to newest. Returns an error if at least one hidden
folder or at least one hidden file does not exist.
ls -di .* Prints all hidden folders and files in the current
working directory, including inode numbers, in
alphabetical order.
ls -di .[^.]* Prints all hidden folders and files in the current
working directory, including inode numbers, in
alphabetical order. Returns an error if at least
one hidden folder or at least one hidden file does
not exist.
ls -di .* --group-directories-first Prints all hidden folders in the current working
directory, including inode numbers, in
alphabetical order, followed by all hidden files
in the current working directory, including inode
numbers, in alphabetical order.
ls -di .[^.]* --group-directories-first Prints all hidden folders in the current working
directory, including inode numbers, in
alphabetical order, followed by all hidden files
in the current working directory, including inode
numbers, in alphabetical order. Returns an error
if at least one hidden folder or at least one
hidden file does not exist.
ls -dri .* Prints all hidden folders and files in the current
working directory, including inode numbers, in
reverse alphabetical order.
ls -dri .[^.]* Prints all hidden folders and files in the current
working directory, including inode numbers, in
reverse alphabetical order. Returns an error if at
least one hidden folder or at least one hidden file
does not exist.
ls -dri .* --group-directories-first Prints all hidden folders in the current working
directory, including inode numbers, in reverse
alphabetical order, followed by all hidden files
in the current working directory, including inode
numbers, in reverse alphabetical order.
ls -dri .[^.]* --group-directories-first Prints all hidden folders in the current working
directory, including inode numbers, in reverse
alphabetical order, followed by all hidden files
in the current working directory, including inode
numbers, in reverse alphabetical order. Returns
an error if at least one hidden folder or at least
one hidden file does not exist.
ls -dli .* Prints in detail all hidden folders and files in the
current working directory, including inode
numbers, in alphabetical order.
ls -dli .[^.]* Prints in detail all hidden folders and files in the
current working directory, including inode
numbers, in alphabetical order. Returns an error
if at least one hidden folder or at least one
hidden file does not exist.
ls -dli .* --group-directories-first Prints in detail all hidden folders in the current
working directory, including inode numbers, in
alphabetical order, followed by all hidden files
in the current working directory, including inode
numbers, in alphabetical order.
ls -dli .[^.]* --group-directories-first Prints in detail all hidden folders in the current
working directory, including inode numbers, in
alphabetical order, followed by all hidden files
in the current working directory, including inode
numbers, in alphabetical order. Returns an error
if at least one hidden folder or at least one
hidden file does not exist.
ls -dlri .* Prints in detail all hidden folders and files in the
current working directory, including inode
numbers, in reverse alphabetical order.
ls -dlri .[^.]* Prints in detail all hidden folders and files in the
current working directory, including inode
numbers, in reverse alphabetical order. Returns
an error if at least one hidden folder or at least
one hidden file does not exist.
ls -dlri .* --group-directories-first Prints in detail all hidden folders in the current
working directory, including inode numbers, in
reverse alphabetical order, followed by all hidden
files in the current working directory, including
inode numbers, in reverse alphabetical order.
ls -dlri .[^.]* --group-directories-first Prints in detail all hidden folders in the current
working directory, including inode numbers, in
reverse alphabetical order, followed by all hidden
files in the current working directory, including
inode numbers, in reverse alphabetical order.
Returns an error if at least one hidden folder or
at least one hidden file does not exist.
ls -dti .* Prints all hidden folders and files in the current
working directory, including inode numbers, in
chronological order, going from newest to
oldest.
ls -dti .[^.]* Prints all hidden folders and files in the current
working directory, including inode numbers, in
chronological order, going from newest to
oldest. Returns an error if at least one hidden
folder or at least one hidden file does not exist.
ls -dti .* --group-directories-first Prints all hidden folders in the current working
directory, including inode numbers, in
chronological order, going from newest to
oldest, followed by all hidden files in the current
working directory, including inode numbers, in
chronological order, going from newest to
oldest.
ls -dti .[^.]* --group-directories-first Prints all hidden folders in the current working
directory, including inode numbers, in
chronological order, going from newest to
oldest, followed by all hidden files in the current
working directory, including inode numbers, in
chronological order, going from newest to
oldest. Returns an error if at least one hidden
folder or at least one hidden file does not exist.
ls -dtri .* Prints all hidden folders and files in the current
working directory, including inode numbers, in
reverse chronological order, going from oldest to
newest.
ls -dtri .[^.]* Prints all hidden folders and files in the current
working directory, including inode numbers, in
reverse chronological order, going from oldest to
newest. Returns an error if at least one hidden
folder or at least one hidden file does not exist.
ls -dtri .* --group-directories-first Prints all hidden folders in the current working
directory, including inode numbers, in reverse
chronological order, going from oldest to newest,
followed by all hidden files in the current working
directory, including inode numbers, in reverse
chronological order, going from oldest to newest.
ls -dtri .[^.]* --group-directories-first Prints all hidden folders in the current working
directory, including inode numbers, in reverse
chronological order, going from oldest to newest,
followed by all hidden files in the current working
directory, including inode numbers, in reverse
chronological order, going from oldest to newest.
Returns an error if at least one hidden folder or at
least one hidden file does not exist.
ls -dtli .* Prints in detail all hidden folders and files in the
current working directory, including inode
numbers, in chronological order, going from
newest to oldest.
ls -dtli .[^.]* Prints in detail all hidden folders and files in the
current working directory, including inode
numbers, in chronological order, going from
newest to oldest. Returns an error if at least one
hidden folder or at least one hidden file does not
exist.
ls -dtli .* --group-directories-first Prints in detail all hidden folders in the current
working directory, including inode numbers, in
chronological order, going from newest to
oldest, followed by all hidden files in the current
working directory, including inode numbers, in
chronological order, going from newest to
oldest.
ls -dtli .[^.]* --group-directories-first Prints in detail all hidden folders in the current
working directory, including inode numbers, in
chronological order, going from newest to
oldest, followed by all hidden files in the current
working directory, including inode numbers, in
chronological order, going from newest to
oldest. Returns an error if at least one hidden
folder or at least one hidden file does not exist.
ls -dtrli .* Prints in detail all hidden folders and files in the
current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest.
ls -dtrli .[^.]* Prints in detail all hidden folders and files in the
current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest. Returns an error if at least
one hidden folder or at least one hidden file does
not exist.
ls -dtrli .* --group-directories-first Prints in detail all hidden folders in the current
working directory, including inode numbers, in
reverse chronological order, going from oldest to
newest, followed by all hidden files in the
current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest.
ls -dtrli .[^.]* --group-directories-first Prints in detail all hidden folders in the current
working directory, including inode numbers, in
reverse chronological order, going from oldest to
newest, followed by all hidden files in the
current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest. Returns an error if at least
one hidden folder or at least one hidden file does
not exist.
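A note on the two glob patterns used above: .* is expanded by the shell and also matches the special entries . and .., so ls -d .* always has something to list. .[^.]* skips . and .. (and any name beginning with two dots); if the directory contains no other hidden entries, bash leaves the pattern unexpanded and ls reports an error for the literal name. A quick sketch, assuming bash's default globbing behaviour and an empty directory named demo (the name is illustrative):

mkdir demo && cd demo
ls -d .*          prints:  .  ..
ls -d .[^.]*      prints an error similar to:  ls: cannot access '.[^.]*': No such file or directory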

Non hidden and hidden folders and files

Command Output

ls -a                    Prints all non hidden and hidden folders and files in the current working directory in alphabetical order.
ls -a --group-directories-first Prints all non hidden and hidden folders in the
current working directory in alphabetical order,
followed by all non hidden and hidden files in
the current working directory in alphabetical
order.
ls -ar Prints all non hidden and hidden folders and
files in the current working directory in reverse
alphabetical order.
ls -ar --group-directories-first Prints all non hidden and hidden folders in the
current working directory in reverse alphabetical
order, followed by all non hidden and hidden
files in the current working directory in reverse
alphabetical order.
ls -la Prints in detail all non hidden and hidden folders
and files in the current working directory in
alphabetical order.
ls -la --group-directories-first Prints in detail all non hidden and hidden folders
in the current working directory in alphabetical
order, followed by all non hidden and hidden
files in the current working directory in
alphabetical order.
ls -lra Prints in detail all non hidden and hidden folders
and files in the current working directory in
reverse alphabetical order.
ls -lra --group-directories-first Prints in detail all non hidden and hidden folders in
the current working directory in reverse alphabetical
order, followed by all non hidden and hidden files in
the current working directory in reverse alphabetical
order.
ls -ta Prints all non hidden and hidden folders and
files in the current working directory in
chronological order, going from newest to
oldest.
ls -ta --group-directories-first Prints all non hidden and hidden folders in the
current working directory in chronological order,
going from newest to oldest, followed by all non
hidden and hidden files in the current working
directory in chronological order, going from
newest to oldest.
ls -tra Prints all non hidden and hidden folders and
files in the current working directory in reverse
chronological order, going from oldest to
newest.
ls -tra --group-directories-first Prints all non hidden and hidden folders in the
current working directory in reverse
chronological order, going from oldest to
newest, followed by all non hidden and hidden
files in the current working directory in reverse
chronological order, going from oldest to
newest.
ls -tla Prints in detail all non hidden and hidden folders
and files in the current working directory in
chronological order, going from newest to
oldest.
ls -tla --group-directories-first Prints in detail all non hidden and hidden folders
in the current working directory in chronological
order, going from newest to oldest, followed by
all non hidden and hidden files in the current
working directory in chronological order, going
from newest to oldest.
ls -trla Prints in detail all non hidden and hidden folders
and files in the current working directory in
reverse chronological order, going from oldest to
newest.
ls -trla --group-directories-first Prints in detail all non hidden and hidden folders
in the current working directory in reverse
chronological order, going from oldest to
newest, followed by all non hidden and hidden
files in the current working directory in reverse
chronological order, going from oldest to
newest.
ls -ia Prints all non hidden and hidden folders and
files in the current working directory, including
inode numbers, in alphabetical order.
ls -ia --group-directories-first Prints all non hidden and hidden folders in the
current working directory, including inode
numbers, in alphabetical order, followed by all
non hidden and hidden files in the current
working directory, including inode numbers, in
alphabetical order.
ls -iar Prints all non hidden and hidden folders and
files in the current working directory, including
inode numbers, in reverse alphabetical order.
ls -iar --group-directories-first Prints all non hidden and hidden folders in the
current working directory, including inode
numbers, in reverse alphabetical order, followed
by all non hidden and hidden files in the current
working directory, including inode numbers, in
reverse alphabetical order.
ls -ial Prints in detail all non hidden and hidden folders
and files in the current working directory,
including inode numbers, in alphabetical order.
ls -ial --group-directories-first Prints in detail all non hidden and hidden folders
in the current working directory, including inode
numbers, in alphabetical order, followed by all
non hidden and hidden files in the current
working directory, including inode numbers, in
alphabetical order.
ls -lrai Prints in detail all non hidden and hidden folders
and files in the current working directory,
including inode numbers, in reverse alphabetical
order.
ls -lrai --group-directories-first Prints in detail all non hidden and hidden folders
in the current working directory, including inode
numbers, in reverse alphabetical order, followed
by all non hidden and hidden files in the current
working directory, including inode numbers, in
reverse alphabetical order.
ls -tia Prints all non hidden and hidden folders and files
in the current working directory, including inode
numbers, in chronological order, going from
newest to oldest.
ls -tia --group-directories-first Prints all non hidden and hidden folders in the
current working directory, including inode numbers,
in chronological order, going from newest to oldest,
followed by all non hidden and hidden files in the
current working directory, including inode numbers,
in chronological order, going from newest to oldest.
ls -tiar Prints all non hidden and hidden folders and files in
the current working directory, including inode
numbers, in reverse chronological order, going from
oldest to newest.
ls -tiar --group-directories-first Prints all non hidden and hidden folders in the
current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest, followed by all non
hidden and hidden files in the current working
directory, including inode numbers, in reverse
chronological order, going from oldest to
newest.
ls -tlia Prints in detail all non hidden and hidden folders
and files in the current working directory,
including inode numbers, in chronological order,
going from newest to oldest.
ls -tlia --group-directories-first Prints in detail all non hidden and hidden folders
in the current working directory, including inode
numbers, in chronological order, going from
newest to oldest, followed by all non hidden and
hidden files in the current working directory,
including inode numbers, in chronological order,
going from newest to oldest.
ls -tlari Prints in detail all non hidden and hidden folders
and files in the current working directory,
including inode numbers, in reverse
chronological order, going from oldest to
newest.
ls -tlari --group-directories-first Prints in detail all non hidden and hidden folders
in the current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest, followed by all non
hidden and hidden files in the current working
directory, including inode numbers, in reverse
chronological order, going from oldest to newest.

_____________________

Sources:
https://stackoverflow.com/questions/14352290/listing-only-directories-using-ls-in-bash-an-examination

https://serverfault.com/questions/368370/how-do-i-exclude-directories-when-listing-files

https://www.cyberciti.biz/faq/bash-shell-display-only-hidden-dot-files/

https://askubuntu.com/questions/468901/how-to-show-only-hidden-files-in-terminal

Imran Afzal for the command: ls -a | grep '^\.'


________________________________________________________________________________
Linux Command Line Structure

A command is a program that tells the Linux system to do something. It has the form:
command [options] [arguments]

where an argument indicates on what the command is to perform its action, usually a file or series of
files. An option modifies the command, changing the way it performs. Commands are case sensitive.
command and Commands are not the same.

Options are generally preceded by a hyphen (-), and for most commands, more than one option can be
strung together, in the form:
command -[option][option][option]
e.g.:
ls -alR = will perform a long list on all files in the current directory and recursively
perform the list through all sub-directories.

For most commands you can separate the options, preceding each with a hyphen, e.g.:
command -option1 -option2 -option3
as in: ls -a -l -R

Some commands have options that require parameters. Options requiring parameters are usually
specified separately, e.g.:
lpr -Pprinter3 -# 2 file
will send 2 copies of file to printer3.

These are the standard conventions for commands. However, not all Linux commands will follow the
standard. Some don’t require the hyphen before options and some won’t let you group options
together, i.e. they may require that each option be preceded by a hyphen and separated by whitespace
from other options and arguments.

Options and syntax for a command are listed in the man page for the command.
File Permissions

• UNIX is a multi-user system. Every file and directory in your account can be protected from or
made accessible to other users by changing its access permissions. Every user has responsibility
for controlling access to their files.

• Permissions for a file or directory may be restricted by type

• There are 3 types of permissions


• r - read
• w - write
• x - execute = running a program

• Each permission (rwx) can be controlled at three levels:


• u - user = yourself
• g - group = can be people in the same project
• o - other = everyone on the system

• File or directory permissions can be displayed by running the ls -l command


• -rwxrwxrwx

• Command to change permission


• chmod

Example:

Type User Group Everyone else

rwx rwx rwx

- = the first character identifies the file type

--- = the next 3 bits define the permissions for the user (file or directory owner)
--- = the following 3 bits define the permissions for the group
--- = the last 3 bits define the permissions for everyone else
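For example, one line of ls -l output might look like the following (the file name, owner, group, size, and date are illustrative):

-rwxr-xr--  1 alice  staff  1024 Jan  1 12:00 report.txt

-      first character: file type (- = regular file, d = directory, l = symbolic link)
rwx    user/owner (alice) can read, write, and execute
r-x    group (staff) can read and execute
r--    everyone else can only read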
Permissions can also be changed through the numeric method. Each of the permission types is represented
by either a numeric equivalent:
read=4, write=2, execute=1
or a single letter:
read=r, write=w, execute=x

A permission of 4 or r would specify read permissions. If the permissions desired are read and write,
the 4 (representing read) and the 2 (representing write) are added together to make a permission of 6.
Therefore, a permission setting of 6 would allow read and write permissions.
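Applying the same arithmetic at all three levels gives the full numeric mode. A small worked example (notes.txt is an illustrative file name):

user:  read + write = 4 + 2 = 6
group: read         = 4
other: none         = 0

chmod 640 notes.txt
ls -l notes.txt      (the permission string shown would be -rw-r-----)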

Common Options
-f force (no error message is generated if the change is unsuccessful)
-R recursively descend through the directory structure and change the modes

Examples
If the permission desired for file1 is user: read, write, execute, group: read, execute, other: read,
execute, the command to use would be

chmod 755 file1 or chmod u=rwx,go=rx file1

Reminder: When giving permissions to group and other to use a file, it is necessary to allow at least
execute permission to the directories for the path in which the file is located. The easiest way to do
this is to be in the directory for which permissions need to be granted:

chmod 711 . or chmod u=rw,+x . or chmod u=rwx,go=x .

where the dot (.) indicates this directory.

File Ownership

chown - change ownership


Ownership of a file can be changed with the chown command. On most versions of Unix this can
only be done by the super-user, i.e. a normal user can’t give away ownership of their files. chown is
used as below, where # represents the shell prompt for the super-user:

Syntax
chown [options] user[:group] file (SVR4)
chown [options] user[.group] file (BSD)

Common Options
-R recursively descend through the directory structure
-f force, and don’t report any errors

Examples
# chown new_owner file
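To change the owner of an entire directory tree, the -R option described above can be combined with the same syntax (new_owner and project_dir are placeholders; as above, this normally requires super-user privileges):

# chown -R new_owner project_dir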

chgrp - change group


Anyone can change the group of files they own, to another group they belong to, with the chgrp
command.

Syntax
chgrp [options] group file

Common Options
-R recursively descend through the directory structure
-f force, and don’t report any errors

Examples
% chgrp new_group file
Getting Help

 The "man" command

o The man command gives you access to an on-line manual which potentially contains a
complete description of every command available on the system. In practice, the manual
usually contains a subset of all commands.
o man can also provide you with one line descriptions of commands which match a specified
keyword
o The online manual is divided into sections:

Section Description
------- -----------
1 User Commands
2 System Commands
3 Subroutines
4 Devices
5 File Formats
6 Games
7 Miscellaneous
8 System Administration
l Local Commands
n New Commands

o Examples of using the man command:

To display the manual page for the cp (copy files) command:


man cp
--More--23% at the bottom left of the screen means that only 23% of the man page is
displayed. Press the space bar to display more of it or type q to quit.

By default, the man page in section 1 is displayed if multiple sections exist. You can access a
different section by specifying the section. For example:
man 8 telnetd

Keyword searching: use the -k option followed by the keyword. Two examples appear below.
man -k mail
man -k 'copy files'

To view a one line description of what a command does:


whatis more
will display what the "more" command does:
more, page (1) - browse or page through a text file

 who - shows who is on the system


who
who am i

 finger - displays information about users, by name or login name


finger doe
finger userid
Prompting completion

The following example shows how command-line completion works in Bash. Other command line shells
may perform slightly differently.

First we type the first three letters of our command:

fir

Then we press Tab ↹ and because the only command in our system that starts with "fir" is "firefox", it
will be completed to:

firefox

Then we start typing the file name:

firefox i

But this time introduction-to-command-line-completion.html is not the only file in the current directory
that starts with "i". The directory also contains files introduction-to-bash.html and introduction-to-
firefox.html. The system can't decide which of these filenames we wanted to type, but it does know that
the file must begin with "introduction-to-", so the command will be completed to:

firefox introduction-to-

Now we type "c":

firefox introduction-to-c

After pressing Tab ↹ it will be completed to the whole filename:

firefox introduction-to-command-line-completion.html

In short we typed:

fir Tab ↹ i Tab ↹ c Tab ↹

This is just eight keystrokes, which is considerably fewer than the 52 keystrokes we would have needed
to type without using command-line completion.
Rotating completion

The following example shows how command-line completion works with rotating completion, such as
Windows's CMD uses.

We follow the same procedure as for prompting completion until we have:

firefox i

We press Tab ↹ once, with the result:

firefox introduction-to-bash.html

We press Tab ↹ again, getting:

firefox introduction-to-command-line-completion.html

In short we typed:

fir Tab ↹ i Tab ↹Tab ↹


Adding Text to Files:

echo command
 echo "Your text goes here" > filename (To add text and create a new file; see the short example at the end of this section)
 echo "Additional text" >> filename (To append to an existing file)

cp command
 cp existing-file new-filename (To copy an existing file to a new file)
 cat existing-file > new-filename (cat the content of an existing file and add to new file. This
command does the same as above)

vi command
 vi filename (Create a new file and enter text using vi insert mode)
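Putting the echo redirection forms above together, a short illustrative sequence (notes.txt is an example file name):

echo "first line" > notes.txt     (creates notes.txt, or overwrites it if it already exists)
echo "second line" >> notes.txt   (appends to notes.txt)
cat notes.txt                     (displays both lines)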
Pipes

 A pipe is used by the shell to connect the stdout of one command directly to the stdin of another
command.
 The symbol for a pipe is the vertical bar ( | ). The command syntax is:

command1 [arguments] | command2 [arguments]

 Pipes accomplish with one command what otherwise would take intermediate files and multiple
commands. For example, operation 1 and operation 2 are equivalent:

Operation 1
who > temp
sort temp

Operation 2
who | sort

 Pipes do not affect the contents of the input files.


 Two very common uses of a pipe are with the "more" and "grep" utilities. Some examples:

ls -al | more
who | more
ps ug | grep myuserid
who | grep kelly
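Several commands can also be chained into a single pipeline, each stage reading the previous stage's output. A small sketch (wc -l counts lines):

who | wc -l                  counts how many users are currently logged in
ls -l | grep '^d' | wc -l    counts the sub-directories in the current directory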
File Maintenance Commands

cp
Copies files. Will overwrite unless otherwise specified. Must also have write permission in the
destination directory.

Example:
cp sample.f sample2.f - copies sample.f to sample2.f
cp -R dir1 dir2 - copies contents of directory dir1 to dir2
cp -i file.1 file.new - prompts if file.new will be overwritten
cp *.txt chapt1 - copies all files with .txt suffix to directory
chapt1
cp /usr/doc/README ~ - copies file to your home directory
cp ~betty/index . - copies the file "index" from user betty's
home directory to current directory

rm
Deletes/removes files or directories if file permissions permit

Example:
rm sample.f - deletes sample.f
rm chap?.txt - deletes all files with chap as the first four
characters of their name and with .txt as the last
four characters of their name
rm -i * - deletes all files in current directory but asks
first for each file
rm -r /olddir - recursively removes all files in the directory
olddir, including the directory itself

mv
Moves files. It will overwrite unless otherwise specified. Must also have write permission in the
destination directory.

Example:
mv sample.f sample2.f - moves sample.f to sample2.f
mv dir1 newdir/dir2 - moves contents of directory dir1 to
newdir/dir2
mv -i file.1 file.new - prompts if file.new will be overwritten
mv *.txt chapt1 - moves all files with .txt suffix to
directory chapt1
mkdir
Make directory. Will create the new directory in your working directory by default.

Example:
mkdir /u/training/data
mkdir data2

rmdir
Remove directory. Directories must be empty before you remove them.
rmdir project1

To recursively remove nested directories, use the rm command with the -r option:
rm -r directory_name

chgrp
Changes the group ownership of a file or directory.

Syntax
chgrp [ -f ] [ -h ] [-R ] Group { File ... | Directory ... }
chgrp -R [ -f ] [ -H | -L | -P ] Group { File... | Directory... }

Description
The chgrp command changes the group of the file or directory specified by the File or Directory parameter
to the group specified by the Group parameter. The value of the Group parameter can be a group name
from the group database or a numeric group ID. When a symbolic link is encountered and you have not
specified the -h or -P flags, the chgrp command changes the group ownership of the file or directory
pointed to by the link and not the group ownership of the link itself.
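For example (the group and directory names below are illustrative):

chgrp staff report.txt          change the group of one file to staff
chgrp -R projects shared_dir    recursively change the group of a whole directory tree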

chown
The chown command is used to change the owner and group of files, directories and links. By default,
the owner of a filesystem object is the user that created it. The group is a set of users that share the same
access permissions (i.e., read, write and execute) for that object. The basic syntax for using chown to
change owners is

chown [options] new_owner object(s)

new_owner is the user name or the numeric user ID (UID) of the new owner, and object is the name of
the target file, directory or link. The ownership of any number of objects can be changed simultaneously.

For example, the following would transfer the ownership of a file named file1 and a directory named dir1
to a new owner named alice:
chown alice file1 dir1

In order to perform the above command, most systems are configured by default to require access to the
root (i.e., system administrator) account, which can be obtained on a personal computer by using the su
(i.e., substitute user) command. An error message will be returned in the event that the user does not
have the proper permissions or that the specified new owner or target(s) does not exist (or is spelled
incorrectly).

The ownership and group of a filesystem object can be confirmed by using the ls command with its -l (i.e.,
long) option. The owner is shown in the third column and the group in the fourth. Thus, for example, the
owner and group of file1 can be seen by using the following:

ls -l file1

The basic syntax for using chown to change groups is

chown [options] :new_group object(s)

or

chown [options] .new_group object(s)

The only difference between the two versions is that the name or numeric ID of the new group is
preceded directly by a colon in the former and by a dot in the latter; there is no functional difference. In
this case, chown performs the same function as the chgrp (i.e., change group) command.

The owner and group can be changed simultaneously by combining the syntaxes for changing owner and
group. That is, the name or UID of the new owner is followed directly (i.e., with no intervening spaces)
by a period or colon, which is followed directly by the name or numeric ID of the new group, which, in
turn, is followed by a space and then by the names of the target files, directories and/or links.

Thus, for example, the following would change the owner of a file named file2 to the user with the user
name bob and change its group to group2:

chown bob:group2 file2

If a user name or UID is followed directly by a colon or dot but no group name is provided, then the
group is changed to that user's login group. Thus, for example, the following would change the
ownership of file3 to cathy and would also change that file's group to the login group of the new owner
(which by default is usually the same as the new owner):
chown cathy: file3

Among chown's few options is -R, which operates on filesystem objects recursively. That is, when used
on a directory, it can change the ownership and/or group of all objects within the directory tree beginning
with that directory rather than just the ownership of the directory itself.

The -v (verbose) option provides information about every object processed. The -c (changes) option is
similar, but reports only when a change is made. The --help option displays brief usage information, and
the --version option outputs version information.

chmod
Change access permissions, change mode.

Syntax
chmod [Options]... Mode [,Mode]... file...
chmod [Options]... Numeric_Mode file...
chmod [Options]... --reference=RFile file...

Options
-f, --silent, --quiet suppress most error messages
-v, --verbose output a diagnostic for every file processed
-c, --changes like verbose but report only when a change is made
--reference=RFile use RFile's mode instead of MODE values
-R, --recursive change files and directories recursively
--help display help and exit
--version output version information and exit

chmod changes the permissions of each given file according to mode, where mode describes the
permissions to modify. Mode can be specified with octal numbers or with letters. Using letters is easier to
understand for most people.
Permissions:

Owner Group Other


Read
Write
Execute

Numeric mode:
From one to four octal digits
Any omitted digits are assumed to be leading zeros.
The first digit = selects attributes for set user ID (4), set group ID (2), and save text image / sticky bit (1)
The second digit = permissions for the user who owns the file: read (4), write (2), and execute (1)
The third digit = permissions for other users in the file's group: read (4), write (2), and execute (1)
The fourth digit = permissions for other users NOT in the file's group: read (4), write (2), and execute (1)

The octal (0-7) value is calculated by adding up the values for each digit
User (rwx) = 4+2+1 = 7
Group(rx) = 4+1 = 5
World (rx) = 4+1 = 5
chmod mode = 0755

Examples
chmod 400 file - Read by owner
chmod 040 file - Read by group
chmod 004 file - Read by world

chmod 200 file - Write by owner


chmod 020 file - Write by group
chmod 002 file - Write by world

chmod 100 file - execute by owner


chmod 010 file - execute by group
chmod 001 file - execute by world

To combine these, just add the numbers together:


chmod 444 file - Allow read permission to owner and group and world
chmod 777 file - Allow everyone to read, write, and execute file

Symbolic Mode
The format of a symbolic mode is a combination of the letters +-= rwxXstugoa
Multiple symbolic operations can be given, separated by commas.
The full syntax is [ugoa...][[+-=][rwxXstugo...]...][,...] but this is explained below.

A combination of the letters ugoa controls which users' access to the file will be changed:

User letter
The user who owns it u
Other users in the file's Group g
Other users not in the file's group o
All users a

If none of these are given, the effect is as if 'a' were given, but bits that are set in the umask are not affected.
All users a is effectively user + group + others

The operator '+' causes the permissions selected to be added to the existing permissions of each file; '-'
causes them to be removed; and '=' causes them to be the only permissions that the file has.

The letters 'rwxXstugo' select the new permissions for the affected users:

Permission letter
Read r
Write w
Execute (or access for directories) x
Execute only if the file is a directory
(or already has execute permission for some user) X
Set user or group ID on execution s
Save program text on swap device t

The permissions that the User who owns the file


currently has for it u
The permissions that other users in the file's
Group have for it g
Permissions that Other users not in the file's
group have for it o

Examples
Deny execute permission to everyone:
chmod a-x file

Allow read permission to everyone:


chmod a+r file

Make a file readable and writable by the group and others:


chmod go+rw file

Make a shell script executable by the user/owner


$ chmod u+x myscript.sh

Allow everyone to read, write, and execute the file and turn on the set group-ID:
chmod =rwx,g+s file
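The capital X from the table above is mainly useful with -R: it adds execute permission to directories (and to files that are already executable for someone) without making ordinary files executable. A sketch, assuming a directory tree named project:

chmod -R a+rX project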

Notes:
When chmod is applied to a directory:
read = list files in the directory
write = add new files to the directory
execute = access files in the directory
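For instance (shared is an illustrative directory name):

chmod 711 shared    others can cd into shared and open files they know by name,
                    but cannot list its contents (no read bit on the directory)
chmod 755 shared    others can also list the directory's contents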

chmod never changes the permissions of symbolic links. This is not a problem since the permissions of
symbolic links are never used. However, for each symbolic link listed on the command line, chmod
changes the permissions of the pointed-to file. In contrast, chmod ignores symbolic links encountered
during recursive directory traversals

Visit chmod calculator


http://www.onlineconversion.com/html_chmod_calculator.htm
File Display Commands

cat - concatenate a file


Display the contents of a file with the concatenate command, cat.

Syntax
cat [options] [file]

Common Options
-n precede each line with a line number
-v display non-printing characters, except tabs, new-lines, and form-feeds
-e display $ at the end of each line (prior to new-line) (when used with -v option)

Examples
% cat filename

You can list a series of files on the command line, and cat will concatenate them, starting each in turn,
immediately after completing the previous one, e.g.:
% cat file1 file2 file3

more, less, and pg - page through a file


more, less, and pg let you page through the contents of a file one screenful at a time. These may not
all be available on your Linux system. They allow you to back up through the previous pages and
search for words, etc.

Syntax
more [options] [+/pattern] [filename]
less [options] [+/pattern] [filename]
pg [options] [+/pattern] [filename]

Options
more less pg Action
-c -c -c clear display before displaying
-i ignore case
-w default default don’t exit at end of input, but prompt and wait
-lines -lines # of lines/screenful
+/pattern +/pattern +/pattern search for the pattern

Internal Controls
more displays (one screen at a time) the file requested
<space bar> to view next screen
<return> or <CR> to view one more line
q to quit viewing the file
h help
b go back up one screenful
/word search for word in the remainder of the file
See the man page for additional options
less similar to more; see the man page for options
pg the SVR4 equivalent of more (page)

-------------------------------------------------------------------------------

echo - echo a statement


The echo command is used to repeat, or echo, the argument you give it back to the standard output
device. It normally ends with a line-feed, but you can specify an option to prevent this.

Syntax
echo [string]

Common Options
-n don’t print <new-line> (BSD, shell built-in)
\c don’t print <new-line> (SVR4)
\0n where n is the 8-bit ASCII character code (SVR4)
\t tab (SVR4)
\f form-feed (SVR4)
\n new-line (SVR4)
\v vertical tab (SVR4)

Examples
% echo Hello Class or echo "Hello Class"
To prevent the line feed:
% echo -n Hello Class or echo "Hello Class \c"
where the style to use in the last example depends on the echo command in use.
The \x options must be within pairs of single or double quotes, with or without other string characters.

-------------------------------------------------------------------------------

head - display the start of a file


head displays the head, or start, of the file.

Syntax
head [options] file

Common Options
-n number number of lines to display, counting from the top of the file
-number same as above

Examples
By default head displays the first 10 lines. You can display more with the "-n number", or
"-number" options, e.g., to display the first 40 lines:
% head -40 filename or head -n 40 filename

-------------------------------------------------------------------------------

more
Browses/displays files one screen at a time.

 Use h for help


 spacebar to page
 b for back
 q to quit
 /string to search for string

Example:
more sample.f

-------------------------------------------------------------------------------

tail - display the end of a file


tail displays the tail, or end, of the file.

Syntax
tail [options] file

Common Options
-number number of lines to display, counting from the bottom of the file

Examples
The default is to display the last 10 lines, but you can specify different line or byte numbers, or a
different starting point within the file. To display the last 30 lines of a file use the -number style:
% tail -30 filename
Filter / Text Processing Commands

grep, awk, sed

grep
The grep utility is used to search for generalized regular expressions occurring in Linux files. Regular
expressions, such as those shown above, are best specified in apostrophes (or single quotes) when
specified in the grep utility. The egrep utility provides searching capability using an extended set of
meta-characters. The syntax of the grep utility, some of the available options, and a few examples are
shown below.

Syntax
grep [options] regexp [file[s]]
Common Options
-i ignore case
-c report only a count of the number of lines containing matches, not the matches
themselves
-v invert the search, displaying only lines that do not match
-n display the line number along with the line on which a match was found
-s work silently, reporting only the final status:
0, for match(es) found
1, for no matches
2, for errors
-l list filenames, but not lines, in which matches were found

Examples
Consider the following file:
cat num.list
 1 15 fifteen
 2 14 fourteen
 3 13 thirteen
 4 12 twelve
 5 11 eleven
 6 10 ten
 7 9 nine
 8 8 eight
 9 7 seven
10 6 six
11 5 five
12 4 four
13 3 three
14 2 two
15 1 one
(Note that the single-digit line numbers are right-aligned, so lines 1 through 9 begin with a space.)
Here are some grep examples using this file. In the first we’ll search for the number 15:
> grep '15' num.list
1 15 fifteen
15 1 one

Now we’ll use the "-c" option to count the number of lines matching the search criterion:
> grep -c '15' num.list
2
Here we’ll be a little more general in our search, selecting for all lines containing the character 1
followed by either of 1, 2 or 5:
> grep '1[125]' num.list
1 15 fifteen
4 12 twelve
5 11 eleven
11 5 five
12 4 four
15 1 one

Now we’ll search for all lines that begin with a space:
> grep '^ ' num.list
1 15 fifteen
2 14 fourteen
3 13 thirteen
4 12 twelve
5 11 eleven
6 10 ten
7 9 nine
8 8 eight
9 7 seven

Or all lines that don’t begin with a space:


> grep '^[^ ]' num.list
10 6 six
11 5 five
12 4 four
13 3 three
14 2 two
15 1 one

The latter could also be done by using the -v option with the original search string, e.g.:
> grep -v '^ ' num.list
10 6 six
11 5 five
12 4 four
13 3 three
14 2 two
15 1 one

Here we search for all lines that begin with the characters 1 through 9:
> grep '^[1-9]' num.list
10 6 six
11 5 five
12 4 four
13 3 three
14 2 two
15 1 one

This example will search for any instances of t followed by zero or more occurrences of e:
> grep 'te*' num.list
 1 15 fifteen
 2 14 fourteen
 3 13 thirteen
 4 12 twelve
 6 10 ten
 8 8 eight
13 3 three
14 2 two

This example will search for any instances of t followed by one or more occurrences of e:
> grep 'tee*' num.list
 1 15 fifteen
 2 14 fourteen
 3 13 thirteen
 6 10 ten

We can also take our input from a program, rather than a file. Here we report on any lines output by
the who program that begin with the letter l.
> who | grep '^l'
lcondron ttyp0 Dec 1 02:41 (lcondron-pc.acs.)
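The egrep utility mentioned above supports extended meta-characters such as alternation with |. As a sketch using the same num.list file, the following should select the lines containing ten or twelve:
> egrep 'ten|twelve' num.list
 4 12 twelve
 6 10 ten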
sed

The non-interactive, stream editor, sed, edits the input stream, line by line, making the specified
changes, and sends the result to standard output.

Syntax
sed [options] edit_command [file]
The format for the editing commands are:
[address1[,address2]][function][arguments]

where the addresses are optional and can be separated from the function by spaces or tabs. The
function is required. The arguments may be optional or required, depending on the function in use.

Line-number Addresses are decimal line numbers, starting from the first input line and incremented
by one for each. If multiple input files are given the counter continues cumulatively through the files.
The last input line can be specified with the "$" character.

Context Addresses are the regular expression patterns enclosed in slashes (/).

Commands can have 0, 1, or 2 comma-separated addresses with the following affects:


# of addresses lines affected
0 every line of input
1 only lines matching the address
2 first line matching the first address and all lines until, and including, the
line matching the second address. The process is then repeated on
subsequent lines.
Substitution functions allow context searches and are specified in the form:
s/regular_expression_pattern/replacement_string/flag

and should be quoted with single quotes (’) if additional options or functions are specified. These
patterns are identical to context addresses, except that while they are normally enclosed in slashes (/),
any normal character is allowed to function as the delimiter, other than <space> and <newline>.
The replacement string is not a regular expression pattern; characters do not have special meanings
here, except:

& substitute the string specified by regular_expression_pattern


\n substitute the nth string matched by regular_expression_pattern
enclosed in ’\(’, ’\)’ pairs.

These special characters can be escaped with a backslash (\) to remove their special meaning.
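As an illustration of the & replacement character, here is a sketch using the num.list file from the grep section; the command wraps the first number on each line in parentheses, turning " 1 15 fifteen" into " (1) 15 fifteen":
% sed 's/[0-9][0-9]*/(&)/' num.list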
Common Options
-e script edit script
-n don’t print the default output, but only those lines specified by p or s///p functions
-f script_file take the edit scripts from the file, script_file

Valid flags on the substitution function include:


g globally substitute the pattern, i.e. replace every occurrence on the line
p print the line

(Note that d is not a substitution flag; it is sed's separate delete function, which deletes the addressed lines.)

Examples
This example changes all incidents of a comma (,) into a comma followed by a space (, ) when doing
output:
% cat filey | sed s/,/,\ /g

The following example removes all incidents of Jr preceded by a space ( Jr) in filey:
% cat filey | sed s/\ Jr//g

To perform multiple operations on the input precede each operation with the -e (edit) option and
quote the strings. For example, to filter for lines containing "Date: " and "From: " and replace these
without the colon (:), try:
sed -e 's/Date: /Date /' -e 's/From: /From /'

To print only those lines of the file from the one beginning with "Date:" up to, and including, the one
beginning with "Name:" try:
sed -n '/^Date:/,/^Name:/p'

To print only the first 10 lines of the input (a replacement for head):
sed -n 1,10p

awk, nawk, gawk


awk is a pattern scanning and processing language. Its name comes from the last initials of the three
authors: Alfred V. Aho, Brian W. Kernighan, and Peter J. Weinberger. nawk is new awk, a newer
version of the program, and gawk is gnu awk, from the Free Software Foundation. Each version is a
little different. Here we’ll confine ourselves to simple examples which should be the same for all
versions. On some OSs awk is really nawk.

awk searches its input for patterns and performs the specified operation on each line, or fields of the
line, that contain those patterns. You can specify the pattern matching statements for awk either on
the command line, or by putting them in a file and using the -f program_file option.

Syntax
awk program [file]
where program is composed of one or more:
pattern { action }

fields. Each input line is checked for a pattern match with the indicated action being taken on a
match. This continues through the full sequence of patterns, then the next line of input is checked.

Input is divided into records and fields. The default record separator is <newline>, and the variable
NR keeps the record count. The default field separator is whitespace, spaces and tabs, and the
variable NF keeps the field count. Input field, FS, and record, RS, separators can be set at any time to
match any single character. Output field, OFS, and record, ORS, separators can also be changed to
any single character, as desired. $n, where n is an integer, is used to represent the nth field of the
input record, while $0 represents the entire input record.

BEGIN and END are special patterns matching the beginning of input, before the first field is read,
and the end of input, after the last field is read, respectively.

Printing is allowed through the print, and formatted print, printf, statements.
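Putting these pieces together, here is a minimal sketch (it assumes the num.list file used in the grep section) that prints a heading before any input is read, the second field of every record, and a record count at the end:
% awk 'BEGIN {print "numbers:"} {print $2} END {print NR, "lines read"}' num.list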

Patterns may be regular expressions, arithmetic relational expressions, string-valued expressions,


and boolean combinations of any of these. For the latter the patterns can be combined with the
boolean operators below, using parentheses to define the combination:
|| or
&& and
! not

Comma separated patterns define the range for which the pattern is applicable, e.g.:
/first/,/last/

selects all lines starting with the one containing first, and continuing inclusively, through the one
containing last.

To select lines 15 through 20 use the pattern range:


NR == 15, NR == 20

Regular expressions must be enclosed with slashes (/) and meta-characters can be escaped with the
backslash (\). Regular expressions can be grouped with the operators:
| or, to separate alternatives
+ one or more
? zero or one

A regular expression match can be either of:


~ contains the expression
!~ does not contain the expression

So the program:
$1 ~ /[Ff]rank/

is true if the first field, $1, contains "Frank" or "frank" anywhere within the field. To match a field
identical to "Frank" or "frank" use:
$1 ~ /^[Ff]rank$/

Relational expressions are allowed using the relational operators:


< less than
<= less than or equal to
== equal to
>= greater than or equal to
!= not equal to
> greater than

Offhand you don't know if variables are strings or numbers. If neither operand is known to be
numeric, then string comparisons are performed. Otherwise, a numeric comparison is done. In the
absence of any information to the contrary, a string comparison is done, so that:
$1 > $2
will compare the string values. To ensure a numerical comparison do something similar to:
( $1 + 0 ) > $2
The mathematical functions: exp, log and sqrt are built-in

Some other built-in functions include:


index(s,t) returns the position of string s where t first occurs, or 0 if it doesn’t
length(s) returns the length of string s
substr(s,m,n) returns the n-character substring of s, beginning at position m
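For example, a sketch using the num.list file from the grep section: print each word (the third field), its length, the position at which "ee" first occurs (0 if it does not occur), and its first three characters:
% awk '{print $3, length($3), index($3,"ee"), substr($3,1,3)}' num.list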

Arrays are declared automatically when they are used, e.g.:


arr[i] = $1
assigns the first field of the current input record to the ith element of the array.

Flow control statements using if-else, while, and for are allowed with C type syntax:
for (i=1; i <= NF; i++) {actions}
while (i<=NF) {actions}
if (i<NF) {actions}
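As a small sketch of the for loop, the following prints every field of each input line of num.list, preceded by the record and field numbers:
% awk '{ for (i = 1; i <= NF; i++) print NR, i, $i }' num.list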

Common Options
-f program_file read the commands from program_file
-Fc use character c as the field separator character
Examples
% cat filex | tr a-z A-Z | awk -F: '{printf ("7R %-6s %-9s %-24s \n",$1,$2,$3)}'>upload.file

cats filex, which is formatted as follows:


nfb791:99999999:smith
7ax791:999999999:jones
8ab792:99999999:chen
8aa791:999999999:mcnulty
changes all lower case characters to upper case with the tr utility, and formats the file into the
following which is written into the file upload.file:
7R NFB791 99999999 SMITH
7R 7AX791 999999999 JONES
7R 8AB792 99999999 CHEN
7R 8AA791 999999999 MCNULTY

cut - select parts of a line

The cut command allows a portion of a file to be extracted for another use.
Syntax

cut [options] file

Common Options
-c character_list character positions to select (first character is 1)
-d delimiter field delimiter (defaults to <TAB>)
-f field_list fields to select (first field is 1)
Both the character and field lists may contain comma-separated or blank-character-separated
numbers (in increasing order), and may contain a hyphen (-) to indicate a range. A number missing
before the hyphen (e.g. -5) indicates a range starting with the first character or field, and a number
missing after the hyphen (e.g. 5-) indicates a range ending with the last character or field.
Blank-character-separated lists must be enclosed in quotes. The field delimiter should be enclosed in
quotes if it has special meaning to the shell, e.g. when specifying a <space> or <TAB> character.

Examples
In these examples we will use the file users:

jdoe John Doe 4/15/96


lsmith Laura Smith 3/12/96
pchen Paul Chen 1/5/96
jhsu Jake Hsu 4/17/96
sphilip Sue Phillip 4/2/96

If you only wanted the username and the user's real name, the cut command could be used to get only
that information:

% cut -f 1,2 users


jdoe John Doe
lsmith Laura Smith
pchen Paul Chen
jhsu Jake Hsu
sphilip Sue Phillip

The cut command can also be used with other options. The -c option selects by character position
rather than by field. To select the first 4 characters:

% cut -c 1-4 users


This yields:
jdoe
lsmi
pche
jhsu
sphi
thus cutting out only the first 4 characters of each line.
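The -d option changes the field delimiter. For instance, assuming the standard colon-separated layout of /etc/passwd, the following sketch pulls out just the login name and shell of every account:
% cut -d: -f1,7 /etc/passwd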

paste - merge files

The paste command allows two files to be combined side-by-side. The default delimiter between the
columns in a paste is a tab, but options allow other delimiters to be used.

Syntax
paste [options] file1 file2

Common Options
-d list list of delimiting characters
-s concatenate lines
The list of delimiters may include a single character such as a comma; a quoted string, such as a
space; or any of the following escape sequences:
\n <newline> character
\t <tab> character
\\ backslash character
\0 empty string (non-null character)

It may be necessary to quote delimiters with special meaning to the shell.


A hyphen (-) in place of a file name is used to indicate that field should come from standard input.

Examples
Given the file users:
jdoe John Doe 4/15/96
lsmith Laura Smith 3/12/96
pchen Paul Chen 1/5/96
jhsu Jake Hsu 4/17/96
sphilip Sue Phillip 4/2/96
and the file phone:
John Doe 555-6634
Laura Smith 555-3382
Paul Chen 555-0987
Jake Hsu 555-1235
Sue Phillip 555-7623

the paste command can be used in conjunction with the cut command to create a new file, listing, that
includes the username, real name, last login, and phone number of all the users. First, extract the phone
numbers into a temporary file, temp.file:
% cut -f2 phone > temp.file
555-6634
555-3382
555-0987
555-1235
555-7623
The result can then be pasted to the end of each line in users and directed to the new file, listing:
% paste users temp.file > listing
jdoe John Doe 4/15/96 555-6634
lsmith Laura Smith 3/12/96 555-3382
pchen Paul Chen 1/5/96 555-0987
jhsu Jake Hsu 4/17/96 555-1235
sphilip Sue Phillip 4/2/96 555-7623

This could also have been done on one line without the temporary file as:
% cut -f2 phone | paste users - > listing

with the same results. In this case the hyphen (-) is acting as a placeholder for an input field (namely,
the output of the cut command).
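The -d option can be used to choose a different delimiter between the pasted columns. As a sketch, to join the two columns with a comma instead of the default tab:
% cut -f2 phone | paste -d, users -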

sort - sort file contents

The sort command is used to order the lines of a file. Various options can be used to choose the order as
well as the field on which a file is sorted. Without any options, the sort compares entire lines in the file
and outputs them in ASCII order (numbers first, upper case letters, then lower case letters).

Syntax
sort [options] [+pos1 [ -pos2 ]] file

Common Options
-b ignore leading blanks (<space> & <tab>) when determining starting and
ending characters for the sort key
-d dictionary order, only letters, digits, <space> and <tab> are significant
-f fold upper case to lower case
-k keydef sort on the defined keys (not available on all systems)
-i ignore non-printable characters
-n numeric sort
-o outfile output file
-r reverse the sort
-t char use char as the field separator character
-u unique; omit multiple copies of the same line (after the sort)
+pos1 [-pos2] (old style) provides functionality similar to the "-k keydef" option.

For the +/-position entries pos1 is the starting word number, beginning with 0 and pos2 is the ending
word number. When -pos2 is omitted the sort field continues through the end of the line. Both pos1 and
pos2 can be written in the form w.c, where w is the word number and c is the character within the word.
For c 0 specifies the delimiter preceding the first character, and 1 is the first character of the word. These
entries can be followed by type modifiers, e.g. n for numeric, b to skip blanks, etc.

The keydef field of the "-k" option has the syntax:


start_field [type] [ ,end_field [type] ]

where:
start_field, end_field define the keys to restrict the sort to a portion of the line
type modifies the sort, valid modifiers are given the single characters (bdfiMnr)
from the similar sort options, e.g. a type b is equivalent to "-b", but applies
only to the specified field

Examples
In the file users:
jdoe John Doe 4/15/96
lsmith Laura Smith 3/12/96
pchen Paul Chen 1/5/96
jhsu Jake Hsu 4/17/96
sphilip Sue Phillip 4/2/96
sort users yields the following:
jdoe John Doe 4/15/96
jhsu Jake Hsu 4/17/96
lsmith Laura Smith 3/12/96
pchen Paul Chen 1/5/96
sphilip Sue Phillip 4/2/96

If, however, a listing sorted by last name is desired, use the option to specify which field to sort on (fields
are numbered starting at 0):
% sort +2 users
pchen Paul Chen 1/5/96
jdoe John Doe 4/15/96
jhsu Jake Hsu 4/17/96
sphilip Sue Phillip 4/2/96
lsmith Laura Smith 3/12/96

To sort in reverse order:


% sort -r users
sphilip Sue Phillip 4/2/96
pchen Paul Chen 1/5/96
lsmith Laura Smith 3/12/96
jhsu Jake Hsu 4/17/96
jdoe John Doe 4/15/96
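On systems whose sort supports the -k option, the same last-name ordering can be requested with a keydef (a sketch; note that -k numbers fields starting at 1 rather than 0):
% sort -k 3 users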

A particularly useful sort option is the -u option, which eliminates any duplicate entries in a file while
ordering the file. For example, the file todays.logins:

sphillip
jchen
jdoe
lkeres
jmarsch
ageorge
lkeres
proy
jchen

shows a listing of each username that logged into the system today. If we want to know how many
unique users logged into the system today, using sort with the -u option will list each user only once.
(The command can then be piped into "wc -l" to get a number):

% sort -u todays.logins
ageorge
jchen
jdoe
jmarsch
lkeres
proy
sphillip

uniq - remove duplicate lines

uniq filters duplicate adjacent lines from a file.

Syntax
uniq [options] [+|-n] file [file.new]

Common Options
-d one copy of only the repeated lines
-u select only the lines not repeated
+n ignore the first n characters
-s n same as above (SVR4 only)
-n skip the first n fields, including any blanks (<space> & <tab>)
-f fields same as above (SVR4 only)

Examples
Consider the following file and example, in which uniq removes the 4th line from file and places the
result in a file called file.new.

$ cat file
1 2 3 6
4 5 3 6
7 8 9 0
7 8 9 0

$ uniq file file.new

$ cat file.new
1 2 3 6
4 5 3 6
7 8 9 0

Below, the -n option of the uniq command is used to skip the first 2 fields in file, and filter out lines
which are duplicates from the 3rd field onward.

$ uniq -2 file
1 2 3 6
7 8 9 0
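To see only the lines that were repeated, the -d option reports one copy of each duplicated adjacent line (a sketch using the same file):
$ uniq -d file
7 8 9 0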

tee - copy command output


tee sends standard in to specified files and also to standard out. It’s often used in command pipelines.

Syntax
tee [options] [file[s]]
Common Options
-a append the output to the files
-i ignore interrupts
Examples
In this first example the output of who is displayed on the screen and stored in the file users.file:
> who | tee users.file
condron ttyp0 Apr 22 14:10 (lcondron-pc.acs.)
frank ttyp1 Apr 22 16:19 (nyssa)
condron ttyp9 Apr 22 15:52 (lcondron-mac.acs)

> cat users.file


condron ttyp0 Apr 22 14:10 (lcondron-pc.acs.)
frank ttyp1 Apr 22 16:19 (nyssa)
condron ttyp9 Apr 22 15:52 (lcondron-mac.acs)

In this next example the output of who is sent to the files users.a and users.b. It is also piped to the
wc command, which reports the line count.
> who | tee users.a users.b | wc -l
3

> cat users.a


condron ttyp0 Apr 22 14:10 (lcondron-pc.acs.)
frank ttyp1 Apr 22 16:19 (nyssa)
condron ttyp9 Apr 22 15:52 (lcondron-mac.acs)

> cat users.b


condron ttyp0 Apr 22 14:10 (lcondron-pc.acs.)
frank ttyp1 Apr 22 16:19 (nyssa)
condron ttyp9 Apr 22 15:52 (lcondron-mac.acs)

In the following example a long directory listing is sent to the file files.long. It is also piped to the
grep command which reports which files were last modified in August.
> ls -l | tee files.long |grep Aug
1 drwxr-sr-x 2 condron 512 Aug 8 1995 News/
2 -rw-r--r-- 1 condron 1076 Aug 8 1995 magnus.cshrc
2 -rw-r--r-- 1 condron 1252 Aug 8 1995 magnus.login

> cat files.long


total 34
2 -rw-r--r-- 1 condron 1253 Oct 10 1995 #.login#
1 drwx------ 2 condron 512 Oct 17 1995 Mail/
1 drwxr-sr-x 2 condron 512 Aug 8 1995 News/
5 -rw-r--r-- 1 condron 4299 Apr 21 00:18 editors.txt
2 -rw-r--r-- 1 condron 1076 Aug 8 1995 magnus.cshrc
2 -rw-r--r-- 1 condron 1252 Aug 8 1995 magnus.login
7 -rw-r--r-- 1 condron 6436 Apr 21 23:50 resources.txt
4 -rw-r--r-- 1 condron 3094 Apr 18 18:24 telnet.ftp
1 drwxr-sr-x 2 condron 512 Apr 21 23:56 uc/
1 -rw-r--r-- 1 condron 1002 Apr 22 00:14 uniq.tee.txt
1 -rw-r--r-- 1 condron 1001 Apr 20 15:05 uniq.tee.txt~
7 -rw-r--r-- 1 condron 6194 Apr 15 20:18 Linuxgrep.txt
Finding System Information:

uname -a
cat /etc/redhat-release
dmidecode

uname:
Sometimes it is required to quickly determine details like kernel name, version, hostname, etc of the
Linux box you are using.

Even though you can find all these details in the respective files present under the proc filesystem, it is
easier to use the uname utility to get this information quickly.

The basic syntax of the uname command is:

uname [OPTION]...

Now let's look at some examples that demonstrate the usage of the ‘uname’ command.
uname without any option

When the ‘uname’ command is run without any option, it prints just the kernel name. So the output
below shows that it's the ‘Linux’ kernel that is used by this system.

$ uname
Linux

You can also use uname -s, which also displays the kernel name.

$ uname -s
Linux

Get the network node host name using -n option

Use uname -n option to fetch the network node host name of your Linux box.

$ uname -n
dev-server

The output above will be the same as the output of the hostname command.
Get kernel release using -r option
uname command can also be used to fetch the kernel release information. The option -r can be used for
this purpose.

$ uname -r
2.6.32-100.28.5.el6.x86_64

Get the kernel version using -v option

uname command can also be used to fetch the kernel version information. The option -v can be used for
this purpose.

$ uname -v
#1 SMP Wed Feb 2 18:40:23 EST 2011

Get the machine hardware name using -m option

uname command can also be used to fetch the machine hardware name. The option -m can be used for
this purpose. An output of x86_64, as shown below, indicates that it is a 64-bit system.

$ uname -m
x86_64

Get the processor type using -p option

uname command can also be used to fetch the processor type information. The option -p can be used for
this purpose. If the uname command is not able to fetch the processor type information then it produces
‘unknown’ in the output.

$ uname -p
x86_64

Sometimes you might see ‘unknown’ as the output of this command, if uname was not able to fetch the
information on processor type.
Get the hardware platform using -i option

uname command can also be used to fetch the hardware platform information. The option -i can be used
for this purpose. If the uname command is not able to fetch the hardware platform information then it
produces ‘unknown’ in the output.

$ uname -i
x86_64

Sometimes you might see ‘unknown’ as the output of this command, if uname was not able to fetch the
information about the platform.
Get the operating system name using the -o option

uname command can also be used to fetch the operating system name. The option -o can be used for this
purpose.

For example :

$ uname -o
GNU/Linux
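The -a option prints all of the above pieces on a single line. The sketch below is composed from the individual outputs shown above; the exact fields and their order may vary between systems:

$ uname -a
Linux dev-server 2.6.32-100.28.5.el6.x86_64 #1 SMP Wed Feb 2 18:40:23 EST 2011 x86_64 x86_64 x86_64 GNU/Linux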

cat /etc/redhat-release:
 This file provides information about your system distribution and its version
 You can also look at the /etc/*release* files on distributions that are not CentOS or Red Hat
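A sample of what the file might contain (the exact wording depends on your distribution and release; the release string below is only an illustration):

$ cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)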

Dmidecode:

dmidecode is a tool for dumping a computer's DMI (some say SMBIOS) table contents in a human-
readable format. This table contains a description of the system's hardware components, as well as other
useful pieces of information such as serial numbers and BIOS revision. Thanks to this table, you can
retrieve this information without having to probe for the actual hardware.

Take a look at

man dmidecode

to find out all options. The most common option is the --type switch which takes one or more of the
following keywords:

bios, system, baseboard, chassis, processor, memory, cache, connector, slot

You can as well specify one or more of the following numbers:

Type Information
----------------------------------------
0 BIOS
1 System
2 Base Board
3 Chassis
4 Processor
5 Memory Controller
6 Memory Module
7 Cache
8 Port Connector
9 System Slots
10 On Board Devices
11 OEM Strings
12 System Configuration Options
13 BIOS Language
14 Group Associations
15 System Event Log
16 Physical Memory Array
17 Memory Device
18 32-bit Memory Error
19 Memory Array Mapped Address
20 Memory Device Mapped Address
21 Built-in Pointing Device
22 Portable Battery
23 System Reset
24 Hardware Security
25 System Power Controls
26 Voltage Probe
27 Cooling Device
28 Temperature Probe
29 Electrical Current Probe
30 Out-of-band Remote Access
31 Boot Integrity Services
32 System Boot
33 64-bit Memory Error
34 Management Device
35 Management Device Component
36 Management Device Threshold Data
37 Memory Channel
38 IPMI Device
39 Power Supply

Each keyword is equivalent to a list of type numbers:

Keyword Types
------------------------------
bios 0, 13
system 1, 12, 15, 23, 32
baseboard 2, 10
chassis 3
processor 4
memory 5, 6, 16, 17
cache 7
connector 8
slot 9

Here are a few sample outputs from one of my servers:

dmidecode --type bios

server1:/home/admin# dmidecode --type bios


# dmidecode 2.8
SMBIOS 2.5 present.

Handle 0x0000, DMI type 0, 24 bytes


BIOS Information
Vendor: American Megatrends Inc.
Version: V1.5B2
Release Date: 10/31/2007
Address: 0xF0000
Runtime Size: 64 kB
ROM Size: 1024 kB
Characteristics:
ISA is supported
PCI is supported
PNP is supported
APM is supported
BIOS is upgradeable
BIOS shadowing is allowed
ESCD support is available
Boot from CD is supported
Selectable boot is supported
BIOS ROM is socketed
EDD is supported
5.25"/1.2 MB floppy services are supported (int 13h)
3.5"/720 KB floppy services are supported (int 13h)
3.5"/2.88 MB floppy services are supported (int 13h)
Print screen service is supported (int 5h)
8042 keyboard services are supported (int 9h)
Serial services are supported (int 14h)
Printer services are supported (int 17h)
CGA/mono video services are supported (int 10h)
ACPI is supported
USB legacy is supported
LS-120 boot is supported
ATAPI Zip drive boot is supported
BIOS boot specification is supported
Targeted content distribution is supported
BIOS Revision: 8.14

Handle 0x0028, DMI type 13, 22 bytes


BIOS Language Information
Installable Languages: 1
en|US|iso8859-1
Currently Installed Language: en|US|iso8859-1

server1:/home/admin#

dmidecode --type system

server1:/home/admin# dmidecode --type system


# dmidecode 2.8
SMBIOS 2.5 present.

Handle 0x0001, DMI type 1, 27 bytes


System Information
Manufacturer: MICRO-STAR INTERANTIONAL CO.,LTD
Product Name: MS-7368
Version: 1.0
Serial Number: To Be Filled By O.E.M.
UUID: Not Present
Wake-up Type: Power Switch
SKU Number: To Be Filled By O.E.M.
Family: To Be Filled By O.E.M.

Handle 0x0027, DMI type 12, 5 bytes


System Configuration Options
Option 1: To Be Filled By O.E.M.

server1:/home/admin#

dmidecode --type baseboard

server1:/home/admin# dmidecode --type baseboard


# dmidecode 2.8
SMBIOS 2.5 present.

Handle 0x0002, DMI type 2, 15 bytes


Base Board Information
Manufacturer: MICRO-STAR INTERANTIONAL CO.,LTD
Product Name: MS-7368
Version: 1.0
Serial Number: To be filled by O.E.M.
Asset Tag: To Be Filled By O.E.M.
Features:
Board is a hosting board
Board is replaceable
Location In Chassis: To Be Filled By O.E.M.
Chassis Handle: 0x0003
Type: Motherboard
Contained Object Handles: 0

Handle 0x0025, DMI type 10, 6 bytes


On Board Device Information
Type: Video
Status: Enabled
Description: To Be Filled By O.E.M.

server1:/home/admin#

dmidecode --type chassis

server1:/home/admin# dmidecode --type chassis


# dmidecode 2.8
SMBIOS 2.5 present.

Handle 0x0003, DMI type 3, 21 bytes


Chassis Information
Manufacturer: To Be Filled By O.E.M.
Type: Desktop
Lock: Not Present
Version: To Be Filled By O.E.M.
Serial Number: To Be Filled By O.E.M.
Asset Tag: To Be Filled By O.E.M.
Boot-up State: Safe
Power Supply State: Safe
Thermal State: Safe
Security Status: None
OEM Information: 0x00000000
Height: Unspecified
Number Of Power Cords: 1
Contained Elements: 0

server1:/home/admin#

dmidecode --type processor

server1:/home/admin# dmidecode --type processor


# dmidecode 2.8
SMBIOS 2.5 present.

Handle 0x0004, DMI type 4, 40 bytes


Processor Information
Socket Designation: CPU 1
Type: Central Processor
Family: Other
Manufacturer: AMD
ID: B2 0F 06 00 FF FB 8B 17
Version: AMD Athlon(tm) 64 X2 Dual Core Processor 5600+
Voltage: 1.5 V
External Clock: 200 MHz
Max Speed: 2800 MHz
Current Speed: 2900 MHz
Status: Populated, Enabled
Upgrade: Other
L1 Cache Handle: 0x0005
L2 Cache Handle: 0x0006
L3 Cache Handle: 0x0007
Serial Number: To Be Filled By O.E.M.
Asset Tag: To Be Filled By O.E.M.
Part Number: To Be Filled By O.E.M.

server1:/home/admin#

dmidecode --type memory


server1:/home/admin# dmidecode --type memory
# dmidecode 2.8
SMBIOS 2.5 present.

Handle 0x0008, DMI type 5, 20 bytes


Memory Controller Information
Error Detecting Method: 64-bit ECC
Error Correcting Capabilities:
None
Supported Interleave: One-way Interleave
Current Interleave: One-way Interleave
Maximum Memory Module Size: 512 MB
Maximum Total Memory Size: 1024 MB
Supported Speeds:
70 ns
60 ns
Supported Memory Types:
SIMM
DIMM
SDRAM
Memory Module Voltage: 3.3 V
Associated Memory Slots: 2
0x0009
0x000A
Enabled Error Correcting Capabilities:
None

Handle 0x0009, DMI type 6, 12 bytes


Memory Module Information
Socket Designation: DIMM0
Bank Connections: 0 5
Current Speed: 161 ns
Type: ECC DIMM
Installed Size: 1024 MB (Double-bank Connection)
Enabled Size: 1024 MB (Double-bank Connection)
Error Status: OK

Handle 0x000A, DMI type 6, 12 bytes


Memory Module Information
Socket Designation: DIMM1
Bank Connections: 0 5
Current Speed: 163 ns
Type: ECC DIMM
Installed Size: 1024 MB (Double-bank Connection)
Enabled Size: 1024 MB (Double-bank Connection)
Error Status: OK

Handle 0x0029, DMI type 16, 15 bytes


Physical Memory Array
Location: System Board Or Motherboard
Use: System Memory
Error Correction Type: None
Maximum Capacity: 8 GB
Error Information Handle: Not Provided
Number Of Devices: 2

Handle 0x002B, DMI type 17, 27 bytes


Memory Device
Array Handle: 0x0029
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 72 bits
Size: 1024 MB
Form Factor: DIMM
Set: None
Locator: DIMM0
Bank Locator: BANK0
Type: DDR2
Type Detail: Synchronous
Speed: 333 MHz (3.0 ns)
Manufacturer: Manufacturer0
Serial Number: SerNum0
Asset Tag: AssetTagNum0
Part Number: PartNum0

Handle 0x002D, DMI type 17, 27 bytes


Memory Device
Array Handle: 0x0029
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 72 bits
Size: 1024 MB
Form Factor: DIMM
Set: None
Locator: DIMM1
Bank Locator: BANK1
Type: DDR2
Type Detail: Synchronous
Speed: 333 MHz (3.0 ns)
Manufacturer: Manufacturer1
Serial Number: SerNum1
Asset Tag: AssetTagNum1
Part Number: PartNum1

server1:/home/admin#

dmidecode --type cache

server1:/home/admin# dmidecode --type cache


# dmidecode 2.8
SMBIOS 2.5 present.

Handle 0x0005, DMI type 7, 19 bytes


Cache Information
Socket Designation: L1-Cache
Configuration: Enabled, Not Socketed, Level 1
Operational Mode: Varies With Memory Address
Location: Internal
Installed Size: 256 KB
Maximum Size: 256 KB
Supported SRAM Types:
Pipeline Burst
Installed SRAM Type: Pipeline Burst
Speed: Unknown
Error Correction Type: Single-bit ECC
System Type: Data
Associativity: 4-way Set-associative

Handle 0x0006, DMI type 7, 19 bytes


Cache Information
Socket Designation: L2-Cache
Configuration: Enabled, Not Socketed, Level 2
Operational Mode: Varies With Memory Address
Location: Internal
Installed Size: 1024 KB
Maximum Size: 1024 KB
Supported SRAM Types:
Pipeline Burst
Installed SRAM Type: Pipeline Burst
Speed: Unknown
Error Correction Type: Single-bit ECC
System Type: Unified
Associativity: 4-way Set-associative

Handle 0x0007, DMI type 7, 19 bytes


Cache Information
Socket Designation: L3-Cache
Configuration: Disabled, Not Socketed, Level 3
Operational Mode: Unknown
Location: Internal
Installed Size: 0 KB
Maximum Size: 0 KB
Supported SRAM Types:
Unknown
Installed SRAM Type: Unknown
Speed: Unknown
Error Correction Type: Unknown
System Type: Unknown
Associativity: Unknown

server1:/home/admin#

dmidecode --type connector

server1:/home/admin# dmidecode --type connector


# dmidecode 2.8
SMBIOS 2.5 present.

Handle 0x000B, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J1A1
Internal Connector Type: None
External Reference Designator: PS2Mouse
External Connector Type: PS/2
Port Type: Mouse Port

Handle 0x000C, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J1A1
Internal Connector Type: None
External Reference Designator: Keyboard
External Connector Type: PS/2
Port Type: Keyboard Port

Handle 0x000D, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J2A2
Internal Connector Type: None
External Reference Designator: USB1
External Connector Type: Access Bus (USB)
Port Type: USB

Handle 0x000E, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J2A2
Internal Connector Type: None
External Reference Designator: USB2
External Connector Type: Access Bus (USB)
Port Type: USB

Handle 0x000F, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J4A1
Internal Connector Type: None
External Reference Designator: LPT 1
External Connector Type: DB-25 male
Port Type: Parallel Port ECP/EPP

Handle 0x0010, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J2A1
Internal Connector Type: None
External Reference Designator: COM A
External Connector Type: DB-9 male
Port Type: Serial Port 16550A Compatible

Handle 0x0011, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J6A1
Internal Connector Type: None
External Reference Designator: Audio Mic In
External Connector Type: Mini Jack (headphones)
Port Type: Audio Port

Handle 0x0012, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J6A1
Internal Connector Type: None
External Reference Designator: Audio Line In
External Connector Type: Mini Jack (headphones)
Port Type: Audio Port

Handle 0x0013, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J6B1 - AUX IN
Internal Connector Type: On Board Sound Input From CD-ROM
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Audio Port

Handle 0x0014, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J6B2 - CDIN
Internal Connector Type: On Board Sound Input From CD-ROM
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Audio Port

Handle 0x0015, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J6J2 - PRI IDE
Internal Connector Type: On Board IDE
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x0016, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J6J1 - SEC IDE
Internal Connector Type: On Board IDE
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x0017, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J4J1 - FLOPPY
Internal Connector Type: On Board Floppy
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x0018, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J9H1 - FRONT PNL
Internal Connector Type: 9 Pin Dual Inline (pin 10 cut)
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x0019, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J1B1 - CHASSIS REAR FAN
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x001A, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J2F1 - CPU FAN
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x001B, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J8B4 - FRONT FAN
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x001C, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J9G2 - FNT USB
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x001D, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J6C3 - FP AUD
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x001E, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J9G1 - CONFIG
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x001F, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J8C1 - SCSI LED
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x0020, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J9J2 - INTRUDER
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x0021, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J9G4 - ITP
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x0022, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J2H1 - MAIN POWER
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

server1:/home/admin#
dmidecode --type slot

server1:/home/admin# dmidecode --type slot


# dmidecode 2.8
SMBIOS 2.5 present.

Handle 0x0023, DMI type 9, 13 bytes


System Slot Information
Designation: AGP
Type: 32-bit AGP 4x
Current Usage: In Use
Length: Short
ID: 0
Characteristics:
3.3 V is provided
Opening is shared
PME signal is supported

Handle 0x0024, DMI type 9, 13 bytes


System Slot Information
Designation: PCI1
Type: 32-bit PCI
Current Usage: Available
Length: Short
ID: 1
Characteristics:
3.3 V is provided
Opening is shared
PME signal is supported
Linux Permissions Cheat Sheet


Permissions

Permissions on Unix and other systems like it are split into three classes:

 User
 Group
 Other

Files and directories are owned by a user.

Files and directories are also assigned to a group.

If a user is not the owner, nor a member of the group, then they are classified as other.

Changing permissions

In order to change permissions, we need to first understand the two notations of permissions.

1. Symbolic notation
2. Octal notation

Symbolic notation

Symbolic notation is what you'd see on the left-hand side if you ran a command like ls -l in a terminal.
The first character in symbolic notation indicates the file type and isn't related to permissions in any way. The
remaining characters are in sets of three, each representing a class of permissions.

The first class is the user class. The second class is the group class. The third class is the other class.

Each of the three characters for a class represents the read, write and execute permissions.

 r will be displayed if reading is permitted


 w will be displayed if writing is permitted
 x will be displayed if execution is permitted
 - will be displayed in the place of r, w, and x, if the respective permission is not permitted

Here are some examples of symbolic notation:


 -rwxr--r--: A regular file whose user class has read/write/execute, group class has only read
permissions, other class has only read permissions
 drw-rw-r--: A directory whose user class has read/write permissions, group class has read/write
permissions, other class has only read permissions
 crwxrw-r--: A character special file whose user has read/write/execute permissions, group class has
read/write permissions, other class has only read permissions

Octal notation

Octal (base-8) notation consists of at least 3 digits (sometimes 4, where the extra left-most digit represents the
setuid bit, the setgid bit, and the sticky bit).

Each of the three right-most digits are the sum of its component bits in the binary numeral system.

For example:

 The read bit (r in symbolic notation) adds 4 to its total


 The write bit (w in symbolic notation) adds 2 to its total
 The execute bit (x in symbolic notation) adds 1 to its total

So what number would you use if you wanted to set a permission to read and write? 4 + 2 = 6.

Symbolic notation   Octal notation   Plain English

-rwxr--r--          0744             user class can read/write/execute; group class can read; other class can read
-rw-rw-r--          0664             user class can read/write; group class can read/write; other class can read
-rwxrwxr--          0774             user class can read/write/execute; group class can read/write/execute; other class can read
----------          0000             None of the classes have permissions
-rwx------          0700             user class can read/write/execute; group class has no permissions; other class has no permissions
-rwxrwxrwx          0777             All classes can read/write/execute
-rw-rw-rw-          0666             All classes can read/write
-r-xr-xr-x          0555             All classes can read/execute
-r--r--r--          0444             All classes can read
--wx-wx-wx          0333             All classes can write/execute
--w--w--w-          0222             All classes can write
---x--x--x          0111             All classes can execute

All together now

Let's use the examples from the symbolic notation section and show how they'd convert to octal notation
(the leading file-type character, such as d or c, is not part of the permission digits):

-rwxr--r-- converts to 0744 (user: 4+2+1=7; group: 4; other: 4)
drw-rw-r-- converts to 0664 (user: 4+2=6; group: 4+2=6; other: 4)
crwxrw-r-- converts to 0764 (user: 4+2+1=7; group: 4+2=6; other: 4)

CHMOD commands

Now that we have a better understanding of permissions and what all of these letters and numbers mean, let's take
a look at how we can use the chmod command in our terminal to change permissions to anything we'd like!

Permission (symbolic notation)   CHMOD command                              Description

-rwxrwxrwx                       chmod 0777 filename; chmod -R 0777 dir     All classes can read/write/execute
-rwxr--r--                       chmod 0744 filename; chmod -R 0744 dir     user can read/write/execute; all others can read
-rw-r--r--                       chmod 0644 filename; chmod -R 0644 dir     user class can read/write; all others can read
-rw-rw-rw-                       chmod 0666 filename; chmod -R 0666 dir     All classes can read/write

These are just some examples. Using your new-found knowledge, you can set any permissions you'd like! Just be
careful and make sure you don't break your system.
Access Control Lists (ACL) in Linux

What is ACL ?
An access control list (ACL) provides an additional, more flexible permission mechanism for file systems. It
is designed to supplement the standard UNIX file permissions. ACL allows you to give permissions to any user
or group on any disk resource

Use of ACL :
Think of a scenario in which a particular user is not a member of a group created by you, but you still
want to give that user some read or write access. How can you do this without making the user a member of
the group? This is where Access Control Lists come into the picture: ACL helps us do this trick.

Basically, ACLs are used to make a flexible permission mechanism in Linux.

setfacl and getfacl are used for setting up ACL and showing ACL respectively.

For example :
getfacl test/seinfeld.txt

Output:
# file: test/seinfeld.txt
# owner: iafzal
# group: iafzal
user::rw-
group::rw-
other::r--

List of commands for setting up ACL :


1) To add permission for a user
setfacl -m "u:user:permissions" /path/to/file

2) To add permissions for a group


setfacl -m "g:group:permissions" /path/to/file

3) To allow all files or directories to inherit ACL entries from the directory it is within
setfacl -dm "entry" /path/to/dir

4) To remove a specific entry


setfacl -x "entry" /path/to/file

5) To remove all entries


setfacl -b /path/to/file
For example :
setfacl -m u:iafzal:rwx test/seinfeld.txt

Modifying ACL using setfacl :


To add permissions for a user (user is either the user name or ID):
# setfacl -m "u:user:permissions"

To add permissions for a group (group is either the group name or ID):
# setfacl -m "g:group:permissions"

To allow all files or directories to inherit ACL entries from the directory it is within:
# setfacl -dm "entry"

Example :

setfacl -m u:iafzal:r-x test/seinfeld.txt

setfacl and getfacl

View ACL :
To show permissions :
# getfacl filename

Observe the difference between output of getfacl command before and after setting up ACL permissions
using setfacl command.
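For instance, after the setfacl -m u:iafzal:r-x test/seinfeld.txt command shown earlier, the output might look like the sketch below; note the new user:iafzal entry and the mask entry that ACLs add:

# getfacl test/seinfeld.txt
# file: test/seinfeld.txt
# owner: iafzal
# group: iafzal
user::rw-
user:iafzal:r-x
group::rw-
mask::rwx
other::r--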

Remove ACL :
If you want to remove the set ACL permissions, use setfacl command with -b option.
For example :

setfacl -b test/seinfeld.txt

If you compare the output of the getfacl command before and after using the setfacl command with the -b
option, you can observe that there is no longer an entry for user iafzal in the later output.

You can also check if there are any extra permissions set through ACL by using the ls command:

ls -l test/seinfeld.txt

Observe the extra "+" sign after the permissions, e.g. -rw-rwxr--+; this indicates that there are extra ACL
permissions set, which you can inspect with the getfacl command
vi Commands

Entering vi

vi filename - The filename can be the name of an


existing file or the name of the file
you want to create.
view filename - Starts vi in "read only" mode. Allows
you to look at a file without the risk
of altering its contents.

Exiting vi

:q - quit - if you have made any changes, vi


will warn you of this, and you'll need
to use one of the other quits.
:w - write edit buffer to disk
:w filename - write edit buffer to disk as filename
:wq - write edit buffer to disk and quit
ZZ - write edit buffer to disk and quit
:q! - quit without writing edit buffer to disk

Positioning within text

By character
left arrow - left one character
right arrow - right one character
backspace - left one character
space - right one character
h - left one character
l - right one character

By word
w - beginning of next word
nw - beginning of nth next word
b - back to previous word
nb - back to nth previous word
e - end of next word
ne - end of nth next word
By line
down arrow - down one line
up arrow - up one line
j - down one line
k - up one line
+ - beginning of next line down
- - beginning of previous line up
0 - first column of current line (zero)
^ - first character of current line
$ - last character of current line

By block
( - beginning of sentence
) - end of sentence
{ - beginning of paragraph
} - end of paragraph

By screen
CTRL-f - forward 1 screen
CTRL-b - backward 1 screen
CTRL-d - down 1/2 screen
CTRL-u - up 1/2 screen
H - top line on screen
M - mid-screen
L - last line on screen

Within file
nG - line n within file
1G - first line in file
G - last line in file

Begin the vi editor exercises

Inserting text

a - append text after cursor *


A - append text at end of line *
i - insert text before cursor *
I - insert text at beginning of line *
o - open a blank line after the current
line for text input *
O - open a blank line before the current
line for text input *

* Note: hit ESC (escape) key when finished inserting!

Continue the vi exercises

Deleting text

x - delete character at cursor


dh - delete character before cursor
nx - delete n characters at cursor
dw - delete next word
db - delete previous word
dnw - delete n words from cursor
dnb - delete n words before cursor
d0 - delete to beginning of line
d$ - delete to end of line
D - delete to end of line
dd - delete current line
d( - delete to beginning of sentence
d) - delete to end of sentence
d{ - delete to beginning of paragraph
d} - delete to end of paragraph
ndd - delete n lines (start at current line)

Changing text

cw - replace word with text *


cc - replace line with text *
c0 - change to beginning of line *
c$ - change to end of line *
C - change to end of line *
c( - change to beginning of sentence *
c) - change to end of sentence *
c{ - change to beginning of paragraph *
c} - change to end of paragraph *
r - overtype only 1 character
R - overtype text until ESC is hit *
J - join two lines

* Note: hit ESC (escape) key when finished changing!


Copying lines

yy - "yank": copy 1 line into buffer


nyy - "yank": copy n lines into buffer
p - put contents of buffer after current
line
P - put contents of buffer before current
line

Moving lines (cutting and pasting)

ndd - delete n lines (placed in buffer)


p - put contents of buffer after current
line
P - put contents of buffer before current
line

Searching / Substituting

/str - search forward for str


?str - search backward for str
n - find next occurrence of current string
N - repeat previous search in reverse
direction

The substitution command requires a line range


specification. If it is omitted, the default
is the current line only. The examples below
show how to specify line ranges.

:s/old/new - substitute new for first occurrence


of old in current line
:s/old/new/g - substitute new for all occurrences
of old in current line
:1,10s/old/new - substitute new for first occurrence
of old in lines 1 - 10
:.,$s/old/new - substitute new for first occurrence
of old in remainder of file
:.,+5s/old/new - substitute new for first occurrence
of old in current line and next 5 lines
:.,-5s/old/new - substitute new for first occurrence
of old in current line and previous
5 lines
:%s/old/new/g - substitute new for all occurrences
of old in the entire file
:%s/old/new/gc - interactively substitute new for all
occurrences of old - will prompt for
y/n response for each substitution.

Miscellaneous commands

u - undo the last command (including undo)


. - repeat last command
xp - swap two adjacent characters
m[a-z] - set a marker (a - z)
'[a-z] - go to a previously set marker (a - z)
:!command - execute specified LINUX command
:r filename - read/insert contents of filename after
current line.
:1,100!fmt - reformat the first 100 lines
:!fmt - reformat the entire file

--------------------------------------------------------------------------------

vi Options

You can change the way vi operates by changing the value of certain options which control
specific parts of the vi environment.

To set an option during a vi session, use one of the commands below as required by the option:

:set option_name
:set option_name=value

Some examples of the more common options are described below.

:set all - shows all vi options in effect

:set ai - set autoindent - automatically indents


each line of text

:set noai - turn autoindent off

:set nu - set line numbering on


:set nonu - turn line numbering off

:set scroll=n - sets number of lines to be scrolled


to n. Used by screen scroll commands.

:set sw=n - set shiftwidth to n. Used by autoindent


option.

:set wm=n - set wrapmargin to n. Specifies number


of spaces to leave on right edge of the
screen before wrapping words to next
line.

:set showmode - reminds you when you are inserting


text.

:set ic - ignore case of characters when


performing a search.

Options can be set permanently by putting them in a file called .exrc in your home directory. A
sample .exrc file appears below. Note that you do not need the colon (:) as part of the option
specification when you put the commands in a .exrc file. Also note that you can put them all on
one line.

set nu ai wm=5 showmode ic


User Account Management:

Following are the basic user account management commands

 useradd
Creates a new user in Linux. Different options can be used to modify the user ID, home directory,
etc.

 userdel
This command is used to delete a user. Please note this command alone will not delete the user's
home directory; you will have to use the -r option to delete the home directory as well

 groupadd
Creates a new group

 groupdel
Removes an existing group

 usermod
Modifies user attributes such as the user's home directory, group, user ID, etc. (typical invocations of all of these commands are sketched just below)
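As a quick sketch of how these commands are typically invoked (the group developers and the user solider used here are just placeholder names):

groupadd developers             # create a new group
useradd -G developers solider   # create a user and add it to that group
usermod -s /bin/bash solider    # change the user's login shell
userdel -r solider              # delete the user along with its home directory
groupdel developers             # remove the group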

User Files
 /etc/passwd = This file has all user’s attributes
 /etc/shadow = This file contains encrypted user password and password policy
 /etc/group = All group and user group information
Creating User Accounts in Linux:

When we run ‘useradd‘ command in Linux terminal, it performs following major things:

It edits the /etc/passwd, /etc/shadow, /etc/group and /etc/gshadow files for the newly created user account.
It creates and populates a home directory for the new user.
It sets permissions and ownership on the home directory.

Basic syntax of command is:

useradd [options] username

In this article we will show you the most used 15 useradd commands with their practical examples in
Linux. We have divided the section into two parts from Basic to Advance usage of command.

Part I: Basic usage with 10 examples


Part II: Advance usage with 5 examples

Part I – 10 Basic Usage of useradd Commands

1. How to Add a New User in Linux

To add/create a new user, all you have to do is run the ‘useradd‘ or ‘adduser‘ command followed by the
‘username‘. The ‘username‘ is the user login name that is used by the user to log into the system.

Only one user can be added at a time, and that username must be unique (different from any username that
already exists on the system).

For example, to add a new user called ‘solider‘, use the following command.

[root@localhost ~]# useradd solider

When we add a new user in Linux with ‘useradd‘ command it gets created in locked state and to unlock
that user account, we need to set a password for that account with ‘passwd‘ command.

[root@localhost ~]# passwd solider


Changing password for user solider.
New LINUX password:
Retype new LINUX password:
passwd: all authentication tokens updated successfully.
Once a new user is created, its entry is automatically added to the ‘/etc/passwd‘ file. The file is used to store
user information, and each entry has the following format.

solider:x:504:504:solider:/home/solider:/bin/bash

The above entry contains a set of seven colon-separated fields, each of which has its own meaning. Let's see
what these fields are:

Username: User login name used to log into the system. It should be between 1 and 32 characters long.
Password: User password (or x character) stored in /etc/shadow file in encrypted format.
User ID (UID): Every user must have a User ID (UID) User Identification Number. By default UID 0 is
reserved for root user and UID’s ranging from 1-99 are reserved for other predefined accounts. Further
UID’s ranging from 100-999 are reserved for system accounts and groups.
Group ID (GID): The primary Group ID (GID) Group Identification Number stored in /etc/group file.
User Info: This field is optional and allow you to define extra information about the user. For example,
user full name. This field is filled by ‘finger’ command.
Home Directory: The absolute location of user’s home directory.
Shell: The absolute location of a user’s shell i.e. /bin/bash.

2. Create a User with Different Home Directory

By default ‘useradd‘ command creates a user’s home directory under /home directory with username.
Thus, for example, we’ve seen above the default home directory for the user ‘solider‘ is ‘/home/solider‘.

However, this action can be changed by using ‘-d‘ option along with the location of new home directory
(i.e. /home/newusers). For example, the following command will create a user ‘solider‘ with a home
directory ‘/home/newusers‘.

[root@localhost ~]# useradd -d /home/newusers solider

You can see the user home directory and other user related information like user id, group id, shell and
comments.

[root@localhost ~]# cat /etc/passwd | grep solider


solider:x:505:505::/home/newusers:/bin/bash

3. Create a User with Specific User ID

In Linux, every user has their own UID (Unique Identification Number). By default, whenever we create a
new user account in Linux, it assigns user IDs 500, 501, 502 and so on…
But, we can create users with a custom user ID with the ‘-u‘ option. For example, the following command will
create a user ‘navin‘ with the custom user ID ‘999‘.

[root@localhost ~]# useradd -u 999 navin

Now, let's verify that the user was created with the defined user ID (999) using the following command.

[root@localhost ~]# cat /etc/passwd | grep navin


navin:x:999:999::/home/navin:/bin/bash

NOTE: Make sure the value of a user ID must be unique from any other already created users on the
system.
4. Create a User with Specific Group ID

Similarly, every user has its own GID (Group Identification Number). We can create users with specific
group ID’s as well with -g option.

Here in this example, we will add a user ‘tarunika‘ with a specific UID and GID simultaneously with the
help of ‘-u‘ and ‘-g‘ options.

[root@localhost ~]# useradd -u 1000 -g 500 tarunika

Now, see the assigned user id and group id in ‘/etc/passwd‘ file.

[root@localhost ~]# cat /etc/passwd | grep tarunika


tarunika:x:1000:500::/home/tarunika:/bin/bash

5. Add a User to Multiple Groups

The ‘-G‘ option is used to add a user to additional groups. Each group name is separated by a comma,
with no intervening spaces.

Here in this example, we are adding a user ‘solider‘ into multiple groups like admins, webadmin and
developer.

[root@localhost ~]# useradd -G admins,webadmin,developers solider

Next, verify that the multiple groups assigned to the user with id command.

[root@localhost ~]# id solider


uid=1001(solider) gid=1001(solider)
groups=1001(solider),500(admins),501(webadmin),502(developers)
context=root:system_r:unconfined_t:SystemLow-SystemHigh
6. Add a User without Home Directory

In some situations we don't want to assign home directories to users, due to security reasons. In such a
situation, when a user logs into a system that has just been restarted, their home directory will be the root (/)
directory, and when such a user uses the su command, their login directory will be the previous user's home directory.

To create user’s without their home directories, ‘-M‘ is used. For example, the following command will
create a user ‘shilpi‘ without a home directory.

[root@localhost ~]# useradd -M shilpi

Now, let’s verify that the user is created without home directory, using ls command.

[root@localhost ~]# ls -l /home/shilpi


ls: cannot access /home/shilpi: No such file or directory

7. Create a User with Account Expiry Date

By default, when we add users with the ‘useradd‘ command, the user account never expires, i.e. the expiry
date is set to 0 (meaning it never expires).

However, we can set the expiry date using ‘-e‘ option, that sets date in YYYY-MM-DD format. This is
helpful for creating temporary accounts for a specific period of time.

Here in this example, we create a user ‘aparna‘ with an account expiry date of 27th March 2014, given in
YYYY-MM-DD format.

[root@localhost ~]# useradd -e 2014-03-27 aparna

Next, verify the account and password ageing information with the ‘chage‘ command for user ‘aparna‘ after
setting the account expiry date.

[root@localhost ~]# chage -l aparna


Last password change : Mar 28, 2014
Password expires : never
Password inactive : never
Account expires : Mar 27, 2014
Minimum number of days between password change :0
Maximum number of days between password change : 99999
Number of days of warning before password expires :7

8. Create a User with Password Expiry Date


The ‘-f‘ argument is used to define the number of days after a password expires before the account is
disabled. A value of 0 disables the account as soon as the password has expired. By default, this value is
set to -1, which means the inactivity feature is never used.

Here in this example, we set an account expiry date and a 45-day password-inactivity period for the user
‘solider’ using the ‘-e‘ and ‘-f‘ options together.

[root@localhost ~]# useradd -e 2014-04-27 -f 45 solider
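The resulting settings can be verified with the same ‘chage‘ command used in the previous example. The
equivalent values can also be changed later on an existing account with chage’s ‘-E‘ (expiry date) and
‘-I‘ (inactivity period) options:

[root@localhost ~]# chage -l solider
[root@localhost ~]# chage -E 2014-04-27 -I 45 solider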

9. Add a User with Custom Comments

The ‘-c‘ option allows you to add custom comments, such as the user’s full name, phone number, etc. to the
/etc/passwd file. The comment is a single field; if it contains spaces it must be quoted, as in the example below.

For example, the following command will add a user ‘mansi‘ and would insert that user’s full name,
Manis Khurana, into the comment field.

[root@localhost ~]# useradd -c "Manis Khurana" mansi

You can see your comments in ‘/etc/passwd‘ file in comments section.

[root@localhost ~]# tail -1 /etc/passwd


mansi:x:1006:1008:Manis Khurana:/home/mansi:/bin/sh

10. Change User Login Shell:

Sometimes we add users that have nothing to do with a login shell, or we need to assign different
shells to our users. We can assign a different login shell to each user with the ‘-s‘ option.

Here in this example, we will add a user ‘solider‘ with no interactive login shell, i.e. the ‘/sbin/nologin‘ shell.

[root@localhost ~]# useradd -s /sbin/nologin solider

You can check assigned shell to the user in ‘/etc/passwd‘ file.

[root@localhost ~]# tail -1 /etc/passwd


solider:x:1002:1002::/home/solider:/sbin/nologin

Part II – 5 Advanced Uses of the useradd Command


11. Add a User with Specific Home Directory, Default Shell and Custom Comment

The following command will create a user ‘ravi‘ with home directory ‘/var/www/ravi‘, default shell
/bin/bash and some extra information about the user.
[root@localhost ~]# useradd -m -d /var/www/ravi -s /bin/bash -c "Solider Owner" -U ravi

In the above command the ‘-m -d‘ options create a user with the specified home directory and the ‘-s‘ option sets
the user’s default shell, i.e. /bin/bash. The ‘-c‘ option adds extra information about the user and the ‘-U‘
argument creates a group with the same name as the user.
12. Add a User with Home Directory, Custom Shell, Custom Comment and UID/GID

The command is very similar to the one above, but here we define the shell as ‘/bin/zsh‘ and a custom UID and
GID for the user ‘tarunika‘, where ‘-u‘ defines the new user’s UID (i.e. 1000) and ‘-g‘ defines the GID (i.e. 1000).

[root@localhost ~]# useradd -m -d /var/www/tarunika -s /bin/zsh -c "Solider Technical Writer" -u 1000 -g 1000 tarunika

13. Add a User with Home Directory, No Shell, Custom Comment and User ID

The following command is very similar to the above two commands; the only difference here is that we
disable the login shell for a user called ‘avishek‘ with a custom user ID (i.e. 1019).

Here the ‘-s‘ option would normally set the default shell /bin/bash, but in this case we set it to
‘/usr/sbin/nologin‘. That means the user ‘avishek‘ will not be able to log in to the system.

[root@localhost ~]# useradd -m -d /var/www/avishek -s /usr/sbin/nologin -c "Solider Sr. Technical Writer" -u 1019 avishek

14. Add a User with Home Directory, Shell, Custom Skeleton Directory/Comment and User ID

The only change in this command is that we use the ‘-k‘ option to set a custom skeleton directory, i.e.
/etc/custom.skell, instead of the default /etc/skel. We also use the ‘-s‘ option to assign a different shell,
i.e. /bin/tcsh, to the user ‘navin‘.

[root@localhost ~]# useradd -m -d /var/www/navin -k /etc/custom.skell -s /bin/tcsh -c "No Active Member of Solider" -u 1027 navin

15. Add a User without Home Directory, No Shell, No Group and Custom Comment

The following command is quite different from the other commands explained above. Here we use the ‘-M‘
option to create the user without a home directory, and the ‘-N‘ argument tells the system not to create a
group with the same name as the user (the user is placed in the default group instead). The ‘-r‘ argument
creates a system account.

[root@localhost ~]# useradd -M -N -r -s /bin/false -c "Disabled Solider Member" clayton

For more information and options about useradd, run ‘useradd‘ command on the terminal to see
available options.
Switch Users and Sudo Access:

Switch Users:

Following is the user switch command that can be used to switch from one user to another

 su - username
su - invokes a login shell after switching the user. A login shell resets most environment variables,
providing a clean base.

 su username
just switches the user, providing a normal shell with an environment nearly the same as with the old user

Sudo Access:
 sudo command-name
The above command runs command-name with root privileges, as long as the invoking user is authorized
to run it in the /etc/sudoers file.

Configuring sudo Access

1. Log in to the system as the root user.

2. Create a normal user account using the useradd command. Replace USERNAME with
the user name that you wish to create.

# useradd USERNAME

3. Set a password for the new user using the passwd command.

# passwd USERNAME
Changing password for user USERNAME.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

4. Run visudo to edit the /etc/sudoers file. This file defines the policies applied by
the sudo command.

# visudo

5. Find the lines in the file that grant sudo access to users in the group wheel when enabled.

## Allows people in group wheel to run all commands
# %wheel ALL=(ALL) ALL

6. Remove the comment character (#) at the start of the second line. This enables the
configuration option.

7. Save your changes and exit the editor.

8. Add the user you created to the wheel group using the usermod command.

# usermod -aG wheel USERNAME

9. Test that the updated configuration allows the user you created to run commands using
sudo.

1. Use su to switch to the new user account that you created.

# su - USERNAME

2. Use groups to verify that the user is in the wheel group.

$ groups
USERNAME wheel

3. Use the sudo command to run the whoami command. As this is the first time you
have run a command using sudo from this user account, the banner message will
be displayed. You will also be prompted to enter the password for the user
account.

$ sudo whoami

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.

[sudo] password for USERNAME:
root

The last line of the output is the user name returned by the whoami command. If
sudo is configured correctly this value will be root.

You have successfully configured a user with sudo access. You can now log in to this user
account and use sudo to run commands as if you were logged in to the account of the root user.
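The steps above grant access through the wheel group, but sudo can also be granted to a single account. As a
minimal sketch (USERNAME is a placeholder; the line would be added with visudo), a per-user entry in
/etc/sudoers looks like this:

## allow USERNAME to run any command, as any user, on all hosts
USERNAME    ALL=(ALL)       ALL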
Linux Editors

 What is a text editor?


o A text editor is a program which enables you to create and manipulate character
data (text) in a computer file.
o A text editor is not a word processor although some text editors do include word
processing facilities.
o Text editors often require "memorizing" commands in order to perform editing
tasks. The more you use them, the easier it becomes. There is a "learning curve"
in most cases though.
 There are several standard text editors available on most LINUX systems:
o ed - standard line editor
o ex - extended line editor
o vi - a visual editor; full screen; uses ed/ex line-mode commands for global file
editing
o sed - stream editor for batch processing of files
 In addition to these, other local "favorites" may be available:
o emacs - a full screen editor and much more
o pico - an easy "beginner's" editor
o lots of others

The Standard Display Editor - vi

 vi supplies commands for:


o inserting and deleting text
o replacing text
o moving around the file
o finding and substituting strings
o cutting and pasting text
o reading and writing to other files
 vi uses a "buffer"
o While using vi to edit an existing file, you are actually working on a copy of the
file that is held in a temporary buffer in your computer's memory.
o If you invoked vi with a new filename, (or no file name) the contents of the file
only exist in this buffer.
o Saving a file writes the contents of this buffer to a disk file, replacing its contents.
You can write the buffer to a new file or to some other file.
o You can also decide not to write the contents of the buffer, and leave your
original file unchanged.
 vi operates in two different "modes":
o Command mode
 vi starts up in this mode
 Whatever you type is interpreted as a command - not text to be inserted
into the file.
 The mode you need to be in if you want to "move around" the file.
o Insert mode
 This is the mode you use to type (insert) text.
 There are several commands that you can use to enter this mode.
 Once in this mode, whatever you type is interpreted as text to be included
in the file. You cannot "move around" the file in this mode.
 Must press the ESC (escape) key to exit this mode and return to command
mode.
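As a quick illustration of the two modes described above, a minimal vi session might look like this (the file
name is just an example):

$ vi notes.txt          vi opens the file and starts in command mode
i                       switch to insert mode and type your text
ESC                     press the Escape key to return to command mode
:wq                     write the buffer to disk and quit (use :q! to quit without saving)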
Monitor User Commands:
Following are the basic user monitor commands

 who
 last
 w
 id

who
As a Linux user, sometimes it is required to know some basic information like :

 Time of last system boot


 List of users logged-in
 Current run level etc

Though this type of information can be obtained from various files in the Linux system, there is a command
line utility, 'who', that does exactly this for you. Below we discuss the capabilities and features provided
by the 'who' command.

The basic syntax of the who command is :


who [OPTION]... [ FILE | ARG1 ARG2 ]

Examples of 'who' command

1. Get the information on currently logged in users

This is done by simply running the 'who' command (without any options). Consider the
following example:
$ who
iafzal tty7 2012-08-07 05:33 (:0)
iafzal pts/0 2012-08-07 06:47 (:0.0)
iafzal pts/1 2012-08-07 07:58 (:0.0)

2. Get the time of last system boot

This is done using the -b option. Consider the following example:


$ who -b
system boot 2012-08-07 05:32
So we see that the above output gives the exact date and time of last system boot.
3. Get information on system login processes

This is done using the -l option. Consider the following example:


$ who -l
LOGIN tty4 2012-08-07 05:32 1309 id=4
LOGIN tty5 2012-08-07 05:32 1313 id=5
LOGIN tty2 2012-08-07 05:32 1322 id=2
LOGIN tty3 2012-08-07 05:32 1324 id=3
LOGIN tty6 2012-08-07 05:32 1327 id=6
LOGIN tty1 2012-08-07 05:32 1492 id=1
So we see that information related to system login processes was displayed in the output.

4. Get the hostname and user associated with stdin

This is done using the -m option. Consider the following example:


$ who -m
iafzal pts/1 2012-08-07 07:58 (:0.0)
So we see that the relevant information was produced in the output.

5. Get the current run level

This is done using the -r option. Consider the following example:


$ who -r
run-level 2 2012-08-07 05:32
So we see that the information related to current run level (which is 2) was produced in the
output.

6. Get the list of user logged in

This is done using the -u option. Consider the following example:


$ who -u
iafzal tty7 2012-08-07 05:33 old 1619 (:0)
iafzal pts/0 2012-08-07 06:47 00:31 2336 (:0.0)
iafzal pts/1 2012-08-07 07:58 . 2336 (:0.0)
So we see that a list of logged-in users was produced in the output.

7. Get number of users logged-in and their user names

This is done using the -q option. Consider the following example:


$ who -q
iafzal iafzal iafzal
# users=3
So we see that information related to number of logged-in users and their user names was
produced in the output.
8. Get all the information

This is done using the -a option. Consider the following example:


$ who -a
system boot 2012-08-07 05:32
run-level 2 2012-08-07 05:32
LOGIN tty4 2012-08-07 05:32 1309 id=4
LOGIN tty5 2012-08-07 05:32 1313 id=5
LOGIN tty2 2012-08-07 05:32 1322 id=2
LOGIN tty3 2012-08-07 05:32 1324 id=3
LOGIN tty6 2012-08-07 05:32 1327 id=6
LOGIN tty1 2012-08-07 05:32 1492 id=1
iafzal + tty7 2012-08-07 05:33 old 1619 (:0)
iafzal + pts/0 2012-08-07 06:47 . 2336 (:0.0)
iafzal + pts/1 2012-08-07 07:58 . 2336 (:0.0)
So we see that all the information that 'who' can print is produced in output.

last command:

To find out when a particular user last logged in to the Linux or Unix server.

Syntax

The basic syntax is:

last
last [userNameHere]
last [tty]
last [options] [userNameHere]

If no options are provided, the last command displays a list of all users logged in (and out). You can
filter the results by supplying user names or a terminal to show only those entries matching the
username/tty.

last command examples

To find out who has recently logged in and out on your server, type:
$ last
Sample outputs:

root pts/1 10.1.6.120 Tue Jan 28 05:59 still logged in


root pts/0 10.1.6.120 Tue Jan 28 04:08 still logged in
root pts/0 10.1.6.120 Sat Jan 25 06:33 - 08:55 (02:22)
root pts/1 10.1.6.120 Thu Jan 23 14:47 - 14:51 (00:03)
root pts/0 10.1.6.120 Thu Jan 23 13:02 - 14:51 (01:48)
root pts/0 10.1.6.120 Tue Jan 7 12:02 - 12:38 (00:35)

wtmp begins Tue Jan 7 12:02:54 2014

List all users last logged in/out time

last command searches back through the file /var/log/wtmp file and the output may go back to
several months. Just use the less command or more command as follows to display output one
screen at a time:
$ last | more
last | less

List a particular user last logged in

To find out when user iafzal last logged in, type:


$ last iafzal
$ last iafzal | less
$ last iafzal | grep 'Thu Jan 23'

Hide hostnames (Linux only)

To hide the display of the hostname field pass -R option:


$ last -R
last -R iafzal

Display complete login & logout times

By default the year is not displayed by the last command. You can force last to display full
login and logout times and dates by passing the -F option:
$ last -F
Display full user/domain names
$ last -w

Display last reboot time

The pseudo-user reboot logs in each time the system is rebooted. Thus the following command will show a
log of all reboots since the log file was created:
$ last reboot
$ last -x reboot

Display last shutdown time

Find out the system shutdown entries and run level changes:
$ last -x
$ last -x shutdown

Find out who was logged in at a particular time

The syntax is as follows to see the state of logins as of the specified time:
$ last -t YYYYMMDDHHMMSS
$ last -t YYYYMMDDHHMMSS userNameHere

w command:
Options:
-h, --no-header do not print header
-u, --no-current ignore current process username
-s, --short short format
-f, --from show remote hostname field
-o, --old-style old style output
-i, --ip-addr display IP address instead of hostname (if possible)

--help display this help and exit


-V, --version output version information and exit
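For example, the following invocations are commonly useful (the user name is the one used elsewhere in
these notes):

$ w                     header line plus one line per logged-in user
$ w -h                  same output, but without the header
$ w -i iafzal           show IP addresses instead of hostnames, limited to the named user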

id command:
Print user and group information for the specified USER,
or (when USER omitted) for the current user.
-a ignore, for compatibility with other versions
-Z, --context print only the security context of the current user
-g, --group print only the effective group ID
-G, --groups print all group IDs
-n, --name print a name instead of a number, for -ugG
-r, --real print the real ID instead of the effective ID, with -ugG
-u, --user print only the effective user ID
-z, --zero delimit entries with NUL characters, not whitespace;
not permitted in default format
--help display this help and exit
--version output version information and exit
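A few typical invocations (the user name is the one used in the earlier examples):

$ id                    print uid, gid and groups for the current user
$ id -un                print only the effective user name
$ id -Gn iafzal         print the names of all groups the user iafzal belongs to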
System Utility Commands:

 date
 uptime
 hostname
 uname
 which
 cal
 bc

date
Print or set the system date and time

Usage: date [OPTION]... [+FORMAT]


or: date [-u|--utc|--universal] [MMDDhhmm[[CC]YY][.ss]]
Display the current time in the given FORMAT, or set the system date.

Mandatory arguments to long options are mandatory for short options too.
-d, --date=STRING display time described by STRING, not 'now'
-f, --file=DATEFILE like --date once for each line of DATEFILE
-I[TIMESPEC], --iso-8601[=TIMESPEC] output date/time in ISO 8601 format.
TIMESPEC='date' for date only (the default),
'hours', 'minutes', 'seconds', or 'ns' for date
and time to the indicated precision.
-r, --reference=FILE display the last modification time of FILE
-R, --rfc-2822 output date and time in RFC 2822 format.
Example: Mon, 07 Aug 2006 12:34:56 -0600
--rfc-3339=TIMESPEC output date and time in RFC 3339 format.
TIMESPEC='date', 'seconds', or 'ns' for
date and time to the indicated precision.
Date and time components are separated by
a single space: 2006-08-07 12:34:56-06:00
-s, --set=STRING set time described by STRING
-u, --utc, --universal print or set Coordinated Universal Time (UTC)
--help display this help and exit
--version output version information and exit

uptime:
Tell how long the system has been running
uptime gives a one line display of the following information. The current time, how long the system has
been running, how many users are currently logged on, and the system load averages for the past 1, 5,
and 15 minutes

Options:
-p, --pretty show uptime in pretty format
-h, --help display this help and exit
-s, --since system up since
-V, --version output version information and exit

hostname
Show or set the system's host name

Program options:
-a, --alias alias names
-A, --all-fqdns all long host names (FQDNs)
-b, --boot set default hostname if none available
-d, --domain DNS domain name
-f, --fqdn, --long long host name (FQDN)
-F, --file read host name or NIS domain name from given file
-i, --ip-address addresses for the host name
-I, --all-ip-addresses all addresses for the host
-s, --short short host name
-y, --yp, --nis NIS/YP domain name

Description:
This command can get or set the host name or the NIS domain name. You can
also get the DNS domain or the FQDN (fully qualified domain name).
Unless you are using bind or NIS for host lookups you can change the
FQDN (Fully Qualified Domain Name) and the DNS domain name (which is
part of the FQDN) in the /etc/hosts file

uname
This command will give you system information. It is one of the important commands and is worth running
whenever you log in to a Linux/Unix machine.

Usage: uname [OPTION]...


Print certain system information. With no OPTION, same as -s.

-a, --all print all information, in the following order,


except omit -p and -i if unknown:
-s, --kernel-name print the kernel name
-n, --nodename print the network node hostname
-r, --kernel-release print the kernel release
-v, --kernel-version print the kernel version
-m, --machine print the machine hardware name
-p, --processor print the processor type or "unknown"
-i, --hardware-platform print the hardware platform or "unknown"
-o, --operating-system print the operating system
--help display this help and exit
--version output version information and exit

which
Shows the full path of (shell) commands

Usage: /usr/bin/which [options] [--] COMMAND [...]


Write the full path of COMMAND(s) to standard output.

--version, -[vV] Print version and exit successfully.


--help, Print this help and exit successfully.
--skip-dot Skip directories in PATH that start with a dot.
--skip-tilde Skip directories in PATH that start with a tilde.
--show-dot Don't expand a dot to current directory in output.
--show-tilde Output a tilde for HOME directory for non-root.
--tty-only Stop processing options on the right if not on tty.
--all, -a Print all matches in PATH, not just the first
--read-alias, -i Read list of aliases from stdin.
--skip-alias Ignore option --read-alias; don't read stdin.
--read-functions Read shell functions from stdin.
--skip-functions Ignore option --read-functions; don't read stdin.

cal and bc
The cal command simply displays a calendar, and bc is a calculator.
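For example:

$ cal                           calendar for the current month
$ cal 3 2024                    calendar for March 2024
$ echo "scale=2; 7/3" | bc      use bc non-interactively; prints 2.33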
Processes
 Whenever you enter a command at the shell prompt, it invokes a program. While this
program is running it is called a process. Your login shell is also a process, created for
you upon logging in and existing until you logout.
 LINUX is a multi-tasking operating system. Any user can have multiple processes
running simultaneously, including multiple login sessions. As you do your work within
the login shell, each command creates at least one new process while it executes.
 Process id: every process in a LINUX system has a unique PID - process identifier.
 ps - displays information about processes. Note that the ps command differs between
different LINUX systems - see the local ps man page for details.
To see your current shell's processes:

% ps
PID TTY TIME CMD
26450 pts/9 0:00 ps
66801 pts/9 0:00 -csh

To see a detailed list of all of your processes on a machine (current shell and all other
shells):

% ps uc
USER PID %CPU %MEM SZ RSS TTY STAT STIME TIME COMMAND
jsmith 26451 0.0 0.0 120 232 pts/9 R 21:01:14 0:00 ps
jsmith 43520 0.0 1.0 300 660 pts/76 S 19:18:31 0:00 elm
jsmith 66801 0.0 1.0 348 640 pts/9 S 20:49:20 0:00 csh
jsmith 112453 0.0 0.0 340 432 pts/76 S Mar 03 0:00 csh

To see a detailed list of every process on a machine:

% ps ug
USER PID %CPU %MEM SZ RSS TTY STAT STIME TIME COMMAND
root 0 0.0 0.0 8 8 - S Feb 08 32:57 swapper
root 1 0.1 0.0 252 188 - S Feb 08 39:16 /etc/init
root 514 72.6 0.0 12 8 - R Feb 08 28984:05 kproc
root 771 0.2 0.0 16 16 - S Feb 08 65:14 kproc
root 1028 0.0 0.0 16 16 - S Feb 08 0:00 kproc
{ lines deleted }
root 60010 0.0 0.0 1296 536 - S Mar 07 0:00 -ncd19:0
kdr 60647 0.0 0.0 288 392 pts/87 S Mar 06 0:00 -ksh
manfield 60968 0.0 0.0 268 200 - S 10:12:52 0:00 mwm
kelly 61334 0.0 0.0 424 640 - S 08:18:10 0:00 twm
sjw 61925 0.0 0.0 552 376 - S Mar 06 0:00 rlogin kanaha
mkm 62357 0.0 0.0 460 240 - S Feb 08 0:00 xterm
ishley 62637 0.0 0.0 324 152 pts/106 S Mar 06 0:00 xedit march2
tusciora 62998 0.0 0.0 340 448 - S Mar 06 0:05 xterm -e
dilfeath 63564 0.0 0.0 200 268 - S 07:32:45 0:00 xclock
tusciora 63878 0.0 0.0 548 412 - S Mar 06 0:41 twm

 kill - use the kill command to send a signal to a process. In most cases, this will be a kill
signal, hence the command name. However, other types of signals are usually
supported. Note that you can only kill processes which you own. The command syntax
is:
kill [-signal] process_identifier(PID)

Examples:

kill 63878 - kills process 63878


kill -9 1225 - kills (kills!) process 1225. Use if
simple kill doesn't work.
kill -STOP 2339 - stops process 2339
kill -CONT 2339 - continues stopped process 2339
kill -l - list the supported kill signals

You can also use CTRL-C to kill the currently running process.
 Suspend a process: Use CTRL-Z.
 Background a process: Normally, commands operate in the foreground - you can not do
additional work until the command completes. Backgrounding a command allows you
to continue working at the shell prompt.
To start a job in the background, use an ampersand (&) when you invoke the command:
myprog &

To put an already running job in the background, first suspend it with CTRL-Z and then
use the "bg" command:

myprog - execute a process


CTRL-Z - suspend the process
bg - put suspended process in background

 Foreground a process: To move a background job to the foreground, find its "job"
number and then use the "fg" command. In this example, the jobs command shows that
two processes are running in the background. The fg command is used to bring the
second job (%2) to the foreground.

jobs
[1] + Running xcalc
[2] Running find / -name core -print
fg %2

 Stop a job running in the background: Use the jobs command to find its job number, and
then use the stop command. You can then bring it to the foreground or restart execution
later.

jobs
[1] + Running xcalc
[2] Running find / -name core -print
stop %2
 Kill a job running in the background, use the jobs command to find its job number, and
then use the kill command. Note that you can also use the ps and kill commands to
accomplish the same task.

jobs
[1] + Running xcalc
[2] Running find / -name core -print
kill %2

 Some notes about background processes:


o If a background job tries to read from the terminal, it will automatically be
stopped by the shell. If this happens, you must put it in the foreground to supply
the input.
o The shell will warn you if you attempt to logout and jobs are still running in the
background. You can then use the jobs command to review the list of jobs and act
accordingly. Alternately, you can simply issue the logout command again and
you will be permitted to exit
Linux Programs

A program, or command, interacts with the kernel to provide the environment and perform the
functions called for by the user. A program can be: an executable shell file, known as a shell script; a
built-in shell command; or a source compiled, object code file.
The shell is a command line interpreter. The user interacts with the kernel through the shell. You can
write ASCII (text) scripts to be acted upon by a shell.

System programs are usually binary, having been compiled from C source code. These are located in
places like /bin, /usr/bin, /usr/local/bin, /usr/ucb, etc. They provide the functions that you normally
think of when you think of Linux. Some of these are sh, csh, date, who, more, and there are many
others.
crontab – Quick Reference
crontab is used to schedule task/jobs

Setting up cron jobs in Unix, Solaris & Linux


cron is a Unix, solaris, Linux utility that allows tasks to be automatically run in the background
at regular intervals by the cron daemon.

cron meaning – There is no definitive explanation, but the most accepted answer, reportedly from
Ken Thompson (author of the Unix cron), is that the name cron comes from 'chron-', the Greek prefix for
'time'.
What is cron? – cron is a daemon which is started at system boot from the /etc/init.d
scripts. If needed it can be stopped/started/restarted using its init script, or with a command such as
'service crond start' on Linux systems.

This document covers the following aspects of Unix/Linux cron jobs to help you understand
and implement cron jobs successfully:

1. What is crontab?
2. What is a cron job or cron schedule?
3. Crontab Restrictions
4. Crontab Commands
5. Crontab file – syntax
6. Crontab Example
7. Crontab Environment
8. Disable Email
9. Generate log file for crontab activity
10. Crontab file location

1. What is crontab?

Crontab (CRON TABle) is a file which contains the schedule of cron entries to be run at
specified times. The file location varies by operating system; see 'Crontab file location' at the end of
this document.

2. What is a cron job or cron schedule?

A cron job or cron schedule is a specific set of execution instructions specifying the day, time and
command to execute. A crontab can have multiple execution statements.

3. Crontab Restrictions
You can execute crontab if your name appears in the file /usr/lib/cron/cron.allow. If that file does
not exist, you can use
crontab if your name does not appear in the file /usr/lib/cron/cron.deny.
If only cron.deny exists and is empty, all users can use crontab. If neither file exists, only the root
user can use crontab. The allow/deny files consist of one user name per line.

4. Crontab Commands

export EDITOR=vi (to specify an editor to open the crontab file)

crontab -e Edit crontab file, or create one if it doesn’t already exist.


crontab -l crontab list of cronjobs , display crontab file contents.
crontab -r Remove your crontab file.
crontab -v Display the last time you edited your crontab file. (This option is only available on
a few systems.)
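A typical editing session using the commands above looks like this:

$ export EDITOR=vi      choose the editor that crontab -e will open
$ crontab -e            edit (or create) your crontab file
$ crontab -l            list the entries you just saved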

5. Crontab file

Crontab syntax:
A crontab file has five fields for specifying day, date and time, followed by the command to be
run at that interval.

* * * * * command to be executed
- - - - -
| | | | |
| | | | +----- day of week (0 - 6) (Sunday=0)
| | | +------- month (1 - 12)
| | +--------- day of month (1 - 31)
| +----------- hour (0 - 23)
+------------- min (0 - 59)

* in the value field above means all legal values for that column (as shown in the ranges above).
The value column can have a * or a list of elements separated by commas. An element is either a
number in the ranges shown above or two numbers in the range separated by a hyphen (meaning
an inclusive range).
Notes
A.) Step (repeat) patterns like */2 for every 2 minutes or */10 for every 10 minutes are not supported by all
operating systems. If you try to use them and crontab complains, they are probably not supported.

B.) The specification of days can be made in two fields: day of month and day of week. If both are
specified in an entry, they are cumulative, meaning the command runs when either of the two matches.

6. Crontab Examples

A line in crontab file like below removes the tmp files from /home/someuser/tmp each day at
6:30 PM.
30 18 * * * rm /home/someuser/tmp/*

Changing the parameter values as below will cause this command to run at the different times shown:

min     hour    day/month    month     day/week    Execution time
30      0       1            1,6,12    *           00:30 hrs on 1st of Jan, June & Dec
0       20      *            10        1-5         8:00 PM every weekday (Mon-Fri), only in Oct
0       0       1,10,15      *         *           midnight on 1st, 10th & 15th of each month
5,10    0       10           *         1           at 00:05 and 00:10 every Monday & on the 10th of every month

Note : If you inadvertently enter the crontab command with no argument(s), do not attempt to
get out with Control-d. This removes all entries in your crontab file. Instead, exit with Control-c.

7. Crontab Environment

cron invokes the command from the user’s HOME directory with the shell, (/usr/bin/sh).
cron supplies a default environment for every shell, defining:
HOME=user’s-home-directory
LOGNAME=user’s-login-id
PATH=/usr/bin:/usr/sbin:.
SHELL=/usr/bin/sh

Users who desire to have their .profile executed must explicitly do so in the crontab entry or in a
script called by the entry.

8. Disable Email

By default a cron job sends an email to the user account executing the cron job. If this is not needed,
put the following at the end of the cron job line:

>/dev/null 2>&1

9. Generate log file

To collect the cron execution log in a file:

30 18 * * * rm /home/someuser/tmp/* > /home/someuser/cronlogs/clean_tmp_dir.log

10. Crontab file location


User crontab files are stored under the login name in different locations in different Unix and Linux
flavors. These files are useful for backing up, viewing and restoring, but should be edited only
with the crontab command by the users.

 Mac OS X
/usr/lib/cron/tabs/
 BSD Unix
/var/cron/tabs/
 Solaris, HP-UX, Debian, Ubuntu
/var/spool/cron/crontabs/
 AIX, Red Hat Linux, CentOS, Fedora
/var/spool/cron/
System Resources Commands:

Command/Syntax What it will do


date report the current date and time
df report the summary of disk blocks and inodes free and in use
du report amount of disk space in use
hostname/uname display or set (super-user only) the name of the current machine
passwd set or change your password
whereis report the binary, source, and man page locations for the command
which reports the path to the command or the shell alias in use
who or w report who is logged in and what processes are running
cal displays a calendar
bc Calculator

df - summarize disk block and file usage


df is used to report the number of disk blocks and inodes used and free for each file system. The
output format and valid options are very specific to the OS and program version in use.

Syntax
df [options] [resource]
Common Options
-l local file systems only (SVR4)
-k report in kilobytes (SVR4)

du - report disk space in use


du reports the amount of disk space in use for the files or directories you specify.

Syntax
du [options] [directory or file]
Common Options
-a display disk usage for each file, not just subdirectories
-s display a summary total only
-k report in kilobytes (SVR4)

who - list current users


who reports who is logged in at the present time.

Syntax
who [am i]
Examples
> who
wmtell ttyp1 Apr 21 20:15 (apple.acs.ohio-s)
fbwalk ttyp2 Apr 21 23:21 (worf.acs.ohio-st)
stwang ttyp3 Apr 21 23:22 (127.99.25.8)

whereis - report program locations


whereis reports the filenames of source, binary, and manual page files associated with command(s).

Syntax
whereis [options] command(s)
Common Options
-b report binary files only
-m report manual sections only
-s report source files only
Examples
> whereis Mail
Mail: /usr/ucb/Mail /usr/lib/Mail.help /usr/lib/Mail.rc /usr/man/man1/Mail.1
> whereis -b Mail
Mail: /usr/ucb/Mail /usr/lib/Mail.help /usr/lib/Mail.rc
> whereis -m Mail
Mail: /usr/man/man1/Mail.1

which - report the command found


which will report the name of the file that would be executed when the command is invoked. This will be
the full path name or the alias that’s found first in your path.

Syntax
which command(s)
example--
> which Mail
/usr/ucb/Mail

hostname/uname –n = name of machine


hostname (uname -n on SysV) reports the host name of the machine the user is logged into, e.g.:
> hostname
yourcomputername

uname has additional options to print information about system hardware type and software version.
date - current date and time
date displays the current date and time. A superuser can set the date and time.

Syntax
date [options] [+format]
Common Options
-u use Universal Time (or Greenwich Mean Time)
+format specify the output format
%a weekday abbreviation, Sun to Sat
%h month abbreviation, Jan to Dec
%j day of year, 001 to 366
%n <new-line>
%t <TAB>
%y last 2 digits of year, 00 to 99
%D MM/DD/YY date
%H hour, 00 to 23
%M minute, 00 to 59
%S second, 00 to 59
%T HH:MM:SS time
Examples
> date
Mon Jun 10 09:01:05 EDT 1996
> date -u
Mon Jun 10 13:01:33 GMT 1996
> date +%a%t%D
Mon 06/10/96
> date '+%y:%j'
96:162
Terminal Control Keys

 Several key combinations on your keyboard usually have a special effect on the
terminal.
 These "control" (CTRL) keys are accomplished by holding the CTRL key while typing
the second key. For example, CTRL-c means to hold the CTRL key while you type the
letter "c".
 The most common control keys are listed below:

CTRL-u - erase everything you've typed on the command line

CTRL-c - stop/kill a command

CTRL-h - backspace (usually)

CTRL-z - suspend a command

CTRL-s - stop the screen from scrolling

CTRL-q - continue scrolling

CTRL-d - exit from an interactive program (signals end of data)
top command

Knowing what is happening in “real time” on your systems is the basis for using and
optimizing your OS. The top command can help here: it is a very useful system monitor that is
really easy to use, and it also allows us to understand why the OS is under load and which
processes use the most resources. The command to run on the terminal is:

$ top

And we’ll get a full-screen display similar to the one described below:

Let’s see now every single row of this output to explain all the information found within the
screen.

1st Row – top

This first line indicates in order:

 current time (11:37:19)


 uptime of the machine (up 1 day, 1:25)
 users sessions logged in (3 users)
 average load on the system (load average: 0.02, 0.12, 0.07) the 3 values refer to the last
minute, five minutes and 15 minutes.

2nd Row – task

The second row gives the following information:

 Processes running in totals (73 total)


 Processes running (2 running)
 Processes sleeping (71 sleeping)
 Processes stopped (0 stopped)
 Zombie processes, i.e. processes waiting to be cleaned up by their parent (0 zombie)

3rd Row – cpu
The third line indicates how the cpu is used. If you sum up all the percentages the total will be
100% of the cpu. Let’s see what these values indicate in order:

 Percentage of the CPU for user processes (0.3%us)


 Percentage of the CPU for system processes (0.0%sy)
 Percentage of the CPU processes with priority upgrade nice (0.0%ni)
 Percentage of the CPU not used (99.4%id)
 Percentage of the CPU processes waiting for I/O operations (0.0%wa)
 Percentage of the CPU serving hardware interrupts (0.3%hi - Hardware IRQ)
 Percentage of the CPU serving software interrupts (0.0%si - Software Interrupts)
 The amount of CPU ‘stolen’ from this virtual machine by the hypervisor for other tasks
(such as running another virtual machine) this will be 0 on desktop and server without
Virtual machine. (0.0%st — Steal Time)

4th and 5th Rows – memory usage

The fourth and fifth rows respectively indicate the use of physical memory (RAM) and swap. In
this order: total memory, memory in use, free memory, and buffers/cached.

Following Rows — Processes list

Finally, ordered by CPU usage (by default), there is the list of processes currently running.
Let's see what information we can get from the different columns:
 PID – the ID of the process (4522)
 USER – The user that is the owner of the process (root)
 PR – priority of the process (15)
 NI – The “NICE” value of the process (0)
 VIRT – virtual memory used by the process (132m)
 RES – physical memory used by the process (14m)
 SHR – shared memory of the process (3204)
 S – indicates the status of the process: S=sleep R=running Z=zombie (S)
 %CPU – This is the percentage of CPU used by this process (0.3)
 %MEM – This is the percentage of RAM used by the process (0.7)
 TIME+ –This is the total time of activity of this process (0:17.75)
 COMMAND – And this is the name of the process (bb_monitor.pl)
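Two command line options and a few interactive keys are also worth knowing (these are standard in the
procps version of top; the user name is just an example):

$ top -d 5              refresh the display every 5 seconds
$ top -u iafzal         show only processes owned by the given user

While top is running: P sorts by CPU usage, M sorts by memory usage, k kills a process (it prompts for a
PID and a signal), and q quits.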

Conclusions

Now that we have seen in detail all the information that the “top” command returns, it will be
easier to understand the reasons for an excessive load and/or slowdown of the system.
Recover/Reset Root Password

1 – In the GRUB boot menu, select the kernel entry you want to boot.

2 – Press (e) to edit that entry.

3 – Find the line containing 'ro' and replace 'ro' with 'rw init=/sysroot/bin/sh'.

4 – Now press Control+x to boot into single user mode.

5 – Now access the system with this command:


chroot /sysroot

6 – Reset the password.


passwd root

7 – Exit chroot
exit
8 - Reboot your system
reboot
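Put together, steps 5 to 8 are typed in the emergency shell like this. The touch line is an extra step that is
commonly required on SELinux-enabled systems, so that the modified password files are relabeled on the next
boot:

chroot /sysroot
passwd root
touch /.autorelabel
exit
reboot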
SIGHUP - The SIGHUP signal disconnects a process from the parent process. It can also be
used to restart processes. For example, "killall -SIGHUP compiz" will restart Compiz. This is useful
for daemons with memory leaks.

SIGINT - This signal is the same as pressing ctrl-c. On some systems, "delete" + "break" sends the
same signal to the process. The process is interrupted and stopped. However, the process can ignore
this signal.

SIGQUIT - This is like SIGINT with the ability to make the process produce a core dump.

SIGILL - When a process performs a faulty, forbidden, or unknown function, the system sends the
SIGILL signal to the process. This is the ILLegal SIGnal.

SIGTRAP - This signal is used for debugging purposes. When a process has performed an action or
a condition is met that a debugger is waiting for, this signal will be sent to the process.

SIGABRT - This kill signal is the abort signal. Typically, a process will initiate this kill signal on
itself.

SIGBUS - When a process is sent the SIGBUS signal, it is because the process caused a bus error.
Commonly, these bus errors are due to a process trying to use fake physical addresses or the process
has its memory alignment set incorrectly.

SIGFPE - Processes that divide by zero are killed using SIGFPE. Imagine if humans got the death
penalty for such math. NOTE: The author of this article was recently drug out to the street and shot
for dividing by zero.

SIGKILL - The SIGKILL signal forces the process to stop executing immediately. The program
cannot ignore this signal. This process does not get to clean-up either.

SIGUSR1 - This indicates a user-defined condition. This signal can be set by the user by
programming the commands in sigusr1.c. This requires the programmer to know C/C++.

SIGSEGV - When an application has a segmentation violation, this signal is sent to the process.

SIGUSR2 - This indicates a user-defined condition.

SIGPIPE - When a process tries to write to a pipe that lacks an end connected to a reader, this
signal is sent to the process. A reader is a process that reads data at the end of a pipe.

SIGALRM - SIGALRM is sent when the real time or clock time timer expires.
SIGTERM - This signal requests a process to stop running. This signal can be ignored. The process
is given time to gracefully shutdown. When a program gracefully shuts down, that means it is given
time to save its progress and release resources. In other words, it is not forced to stop. SIGINT is
very similar to SIGTERM.

SIGCHLD - When a parent process loses its child process, the parent process is sent the
SIGCHLD signal. This cleans up resources used by the child process. In computers, a child process
is a process started by another process known as a parent.

SIGCONT - To make processes continue executing after being paused by the SIGTSTP or
SIGSTOP signal, send the SIGCONT signal to the paused process. This is the CONTinue SIGnal.
This signal is beneficial to Unix job control (executing background tasks).

SIGSTOP - This signal makes the operating system pause a process's execution. The process cannot
ignore the signal.

SIGTSTP - This signal is like pressing ctrl-z. This makes a request to the terminal containing the
process to ask the process to stop temporarily. The process can ignore the request.

SIGTTIN - When a process attempts to read from a tty (computer terminal), the process receives
this signal.

SIGTTOU - When a process attempts to write to a tty (computer terminal), the process receives
this signal.

SIGURG - When a process has urgent data to be read or the data is very large, the SIGURG signal
is sent to the process.

SIGXCPU - When a process uses the CPU past the allotted time, the system sends the process this
signal. SIGXCPU acts like a warning; the process has time to save the progress (if possible) and
close before the system kills the process with SIGKILL.

SIGXFSZ - Filesystems have a limit to how large a file can be made. When a program tries to
violate this limit, the system will send that process the SIGXFSZ signal.

SIGVTALRM - SIGVTALRM is sent when CPU time used by the process elapses.

SIGPROF - SIGPROF is sent when CPU time used by the process and by the system on behalf of
the process elapses.

SIGWINCH - When a process is in a terminal that changes its size, the process receives this signal.
SIGIO - Alias to SIGPOLL or at least behaves much like SIGPOLL.

SIGPWR - Power failures will cause the system to send this signal to processes (if the system is still
on).

SIGSYS - Processes that give a system call an invalid parameter will receive this signal.

SIGRTMIN* - This is a set of signals that varies between systems. They are labeled
SIGRTMIN+1, SIGRTMIN+2, SIGRTMIN+3, ......., and so on (usually up to 15). These are user-
defined signals; they must be programmed in the Linux kernel's source code. That would require the
user to know C/C++.

SIGRTMAX* - This is a set of signals that varies between systems. They are labeled SIGRTMAX-
1, SIGRTMAX-2, SIGRTMAX-3, ......., and so on (usually up to 14). These are user-defined signals;
they must be programmed in the Linux kernel's source code. That would require the user to know
C/C++.

SIGEMT - Processes receive this signal when an emulator trap occurs.

SIGINFO - Terminals may sometimes send status requests to processes. When this happens,
processes will also receive this signal.

SIGLOST - Processes trying to access locked files will get this signal.

SIGPOLL - When a process causes an asynchronous I/O event, that process is sent the SIGPOLL
signal.
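Most of these signals can be caught by shell scripts with the trap built-in mentioned later in these notes
(SIGKILL and SIGSTOP cannot be caught). A minimal sketch, with a made-up script name and temporary file:

#!/bin/sh
# catchsig.sh - clean up a temporary work file when interrupted or terminated
trap 'echo "caught a signal, cleaning up"; rm -f /tmp/catchsig.$$; exit 1' INT TERM
touch /tmp/catchsig.$$      # create a scratch file to clean up
sleep 300                   # simulate work; press CTRL-C or send SIGTERM to trigger the trap
rm -f /tmp/catchsig.$$      # normal cleanup if no signal arrived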
UNIX Kernel:

Technically speaking, the UNIX kernel "is" the operating system. It provides the basic full time software
connection to the hardware. By full time, it means that the kernel is always running while the computer
is turned on. When a system boots up, kernel is loaded. Likewise, the kernel is only exited when the
computer is turned off.

The UNIX kernel is built specifically for a machine when it is installed. It has a record of all the pieces of
hardware it needs to talk to and knows what languages they speak (how to turn switches on and off to
get a desired result). Thus, a kernel is not easily ported to another computer. Each individual computer
will have its own tailor- made kernel. If the computer's hardware configuration changes during its life,
the kernel must be "rebuilt" (told about the new pieces of hardware).

However, though the connection between the kernel and the hardware is "hardcoded" to a specific
machine, the connection between the user and the kernel is generic. That is the beauty of the UNIX
kernel. From your perspective, regardless of how the kernel interacts with the hardware, no matter which
UNIX computer you use, you will have the same kernel interface to work with. That is because the
hardware is "hidden" by the kernel.

The kernel also handles memory management, input and output requests, and process scheduling for
time-shared operations (we'll talk more about what this means later).
To help it with its work, the kernel also executes daemon programs which stay alive as long as the
machine is turned on and help perform tasks such as printing or serving web documents.

However, the task of hiding the hardware is a pretty much full time job for the kernel. As such, it does
not have too much time to provide for a fancy user-friendly interface. Thus, though the kernel is much
easier to talk to than the hardware, the language of the kernel is still pretty cryptic.

Fortunately, the UNIX operating system has built in "shells" which wrap around the kernel and provide a
much more user-friendly interface. Let's take a look at shells.
Shells

The shell sits between you and the kernel, acting as a command interpreter. It reads your terminal input
and translates the commands into actions taken by the system. The shell is analogous to COMMAND.COM in
DOS. When you log into the system you are given a default shell. When the shell starts up it reads its
startup files and may set environment variables, command search paths, and command aliases, and
executes any commands specified in these files.

The original shell was the Bourne shell, sh. Every Linux platform will either have the Bourne shell, or a
Bourne compatible shell available. It has very good features for controlling input and output, but is not
well suited for the interactive user. To meet the latter need the C shell, csh, was written and is now found
on most, but not all, Linux systems. It uses C type syntax, the language Unix is written in, but has a more
awkward input/output implementation. It has job control, so that you can reattach a job running in the
background to the foreground. It also provides a history feature which allows you to modify and repeat
previously executed commands.

The default prompt for the Bourne shell is $ (or #, for the root user). The default prompt for C shell is %.

Numerous other shells are available from the network. Almost all of them are based on either sh or csh
with extensions to provide job control to sh, allow in-line editing of commands, page through previously
executed commands, provide command name completion and custom prompt, etc. Some of the more
well known of these may be on your favorite Linux system: the Korn shell, ksh, by David Korn and the
Bourne Again Shell, bash, from the Free Software Foundations GNU project, both based on sh, the T-C
shell, tcsh, and the extended C shell, cshe, both based on csh. Below we will describe some of the features
of sh and csh so that you can get started.

Built-in Commands
The shells have a number of built-in, or native commands. These commands are executed directly in the
shell and don’t have to call another program to be run. These built-in commands are different for the
different shells.

sh
For the Bourne shell some of the more commonly used built-in commands are:
: null command
. source (read and execute) commands from a file
case case conditional loop
cd change the working directory (default is $HOME)
echo write a string to standard output
eval evaluate the given arguments and feed the result back to the shell
exec execute the given command, replacing the current shell
exit exit the current shell
export share the specified environment variable with subsequent shells
for for conditional loop
if if conditional loop
pwd print the current working directory
read read a line of input from stdin
set set variables for the shell
test evaluate an expression as true or false
trap trap for a typed signal and execute commands
umask set a default file permission mask for new files
unset unset shell variables
wait wait for a specified process to terminate
while while conditional loop

csh
For the C shell the more commonly used built-in functions are:
alias assign a name to a function
bg put a job into the background
cd change the current working directory
echo write a string to stdout
eval evaluate the given arguments and feed the result back to the shell
exec execute the given command, replacing the current shell
exit exit the current shell
fg bring a job to the foreground
foreach for conditional loop
glob do filename expansion on the list, but no "\" escapes are honored
history print the command history of the shell
if if conditional loop
jobs list or control active jobs
kill kill the specified process
limit set limits on system resources
logout terminate the login shell
nice command lower the scheduling priority of the process, command
nohup command do not terminate command when the shell exits
set set a shell variable
setenv set an environment variable for this and subsequent shells
stop stop the specified background job
umask set a default file permission mask for new files
unalias remove the specified alias name
unset unset shell variables
while while conditional loop
Environment Variables

Environmental variables are used to provide information to the programs you use. You can have both
global environment and local shell variables. Global environment variables are set by your login
shell and new programs and shells inherit the environment of their parent shell. Local shell variables
are used only by that shell and are not passed on to other processes. A child process cannot pass a
variable back to its parent process.

The current environment variables are displayed with the "env" or "printenv" commands. Some
common ones are:

• DISPLAY The graphical display to use, e.g. nyssa:0.0


• EDITOR The path to your default editor, e.g. /usr/bin/vi
• GROUP Your login group, e.g. staff
• HOME Path to your home directory, e.g. /home/frank
• HOST The hostname of your system, e.g. nyssa
• IFS Internal field separators, usually any white space (defaults to tab, space
and <newline>)
• LOGNAME The name you login with, e.g. frank
• PATH Paths to be searched for commands, e.g. /usr/bin:/usr/ucb:/usr/local/bin
• PS1 The primary prompt string, Bourne shell only (defaults to $)
• PS2 The secondary prompt string, Bourne shell only (defaults to >)
• SHELL The login shell you’re using, e.g. /usr/bin/csh
• TERM Your terminal type, e.g. xterm
• USER Your username, e.g. frank

Many environment variables will be set automatically when you login. You can modify them or define
others with entries in your startup files or at any time within the shell. Some variables you might want
to change are PATH and DISPLAY. The PATH variable specifies the directories to be automatically
searched for the command you specify. Examples of this are in the shell startup scripts below.
You set a global environment variable with a command similar to the following for the C shell:
% setenv NAME value
and for Bourne shell:
$ NAME=value; export NAME
You can list your global environmental variables with the env or printenv commands. You unset them
with the unsetenv (C shell) or unset (Bourne shell) commands.
To set a local shell variable use the set command with the syntax below for C shell. Without options
set displays all the local variables.
% set name=value
For the Bourne shell set the variable with the syntax:
$ name=value
The current value of the variable is accessed via the "$name", or "${name}", notation
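For example, to append a directory to the command search path and check the result (the directory is just an
example):

% setenv PATH ${PATH}:/usr/local/bin          C shell
$ PATH=$PATH:/usr/local/bin; export PATH      Bourne shell
$ echo $PATH                                  display the current value using the $name notation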
Shell

Whenever you login to a Linux system you are placed in a shell program. The shell's prompt is
usually visible at the cursor's position on your screen. To get your work done, you enter
commands at this prompt.

The shell is a command interpreter; it takes each command and passes it to the operating
system kernel to be acted upon. It then displays the results of this operation on your screen.

Several shells are usually available on any UNIX system, each with its own strengths and
weaknesses.

Different users may use different shells. Initially, your system administrator will supply a
default shell, which can be overridden or changed. The most commonly available shells are:
 Bourne shell (sh)
 C shell (csh)
 Korn shell (ksh)
 TC Shell (tcsh)
 Bourne Again Shell (bash)

Each shell also includes its own programming language. Command files, called "shell scripts"
are used to accomplish a series of tasks.

Shell Script Execution


Once a shell script is created, there are several ways to execute it. However, before any Korn
shell script can be executed, it must be assigned the proper permissions. The chmod command
changes permissions on individual files, in this case giving execute permission to the simple file:

$ chmod +x simple

After that, you can execute the script by specifying the filename as an argument to the ksh
command:

$ ksh simple
You can also execute scripts by just typing its name alone. However, for that method to work,
the directory containing the script must be defined in your PATH variable. When looking at
your .profile earlier in the course, you may have noticed that the PATH=$PATH:$HOME
definition was already in place. This enables you to run scripts located in your home directory
($HOME) without using the ksh command. For instance, because of that pre-defined PATH
variable, the simple script can be run from the command line like this:

$ simple

(For the purposes of this course, we'll simplify things by running all scripts by their script name
only, not as an argument to the ksh command.)

You can also invoke the script from your current shell by opening a background subprocess - or
subshell - where the actual command processing will occur. You won't see it running, but it will
free up your existing shell so you can continue working. This is really only necessary when
running long, processing-intensive scripts that would otherwise take over your current shell
until they complete.

To run the script you created in the background, invoke it this way:

$ simple &

When the script completes, you'll see output similar to this in the current shell:

[1] - Done (127) simple

It is important to understand that Korn shell scripts run in a somewhat different way than they
would in other shells. Specifically, variables defined in the Korn shell aren't understood outside
of the defining - or parent - shell. They must be explicitly exported from the parent shell to work
in a subsequent script or subshell. If you use the export or typeset -x commands to make the
variable known outside the parent shell, any subshell will automatically inherit the values you
defined for the parent shell.

For example, here's a script named lookmeup that does nothing more than print a line to
standard output using the myaddress (defined as 123 Anystreet USA) variable:
$ cat lookmeup
print "I live at $myaddress"

If you open a new shell (using the ksh command) from the parent shell and run the script, you
see that myaddress is undefined:

$ ksh
$ lookmeup
I live at

However, if you export myaddress from the parent shell:

$ exit
$ export myaddress

and then open a new shell and run the lookmeup script again, the variable is now defined:

$ ksh
$ lookmeup
I live at 123 Anystreet USA

To illustrate further how the parent shell takes processing precedence, let's change the value of
myaddress in the subshell:

$ myaddress='Houston, Texas'

$ print $myaddress
Houston, Texas

Now, if you exit the new shell and go back to the parent shell and type the same command:

$ exit
$ print $myaddress
123 Anystreet USA
you see that the original value in the parent shell was not affected by what you did in the
subshell.

A way to export variables automatically is to use the set -o allexport command. This command
cannot export variables to the parent shell from a subshell, but can export variables created in
the parent shell to all subshells created after the command is run. Likewise, it can automatically
export variables created in subshells to new subshells created after running the command. set -o
allexport is a handy command to place in your .kshrc file.
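For example, a short sketch reusing the myaddress variable from above:

$ set -o allexport
$ myaddress='123 Anystreet USA'
$ ksh
$ print $myaddress
123 Anystreet USA
$ exit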

Shell Scripts - An Illustrated View


At the risk of sounding redundant, let's recap: shell scripts are simply files containing a list of
commands to be executed in sequence. Now let's go a bit further and look at a shell script, line-
by-line.

Any Korn shell script should contain this line at the very beginning:

#!/usr/bin/ksh

As you probably already know, the # sign marks anything that follows it on the line as a
comment - anything coming after it won't be interpreted or processed as part of the script. But,
when the # character is followed by a ! (commonly called "bang"), the meaning changes. The line
above specifies that the Korn shell will be (or should be) executing the script. If nothing is
specified, the system will attempt to execute the script using whatever its default shell type is,
not necessarily a Korn shell. Since the Korn shell supports some commands that other shells do
not, this can sometimes cause a problem. To be valid, this line must be on the very first line of
the script.

Shell scripts are often used to automate day-to-day tasks. For example, a system administrator
might use the following script, named diskuse here, to keep track of disk space usage:

#!/usr/bin/ksh
# diskuse
# Shows disk usage in blocks for /home
cd /var/log
cp disk.log disk.log.0
cd /home
du -sk * > /var/log/disk.log
cat /var/log/disk.log

Shown again - but this time with annotation - the script's processing steps are clear:

#!/usr/bin/ksh
# SCRIPT NAME: diskuse
# SCRIPT PURPOSE: Shows disk usage in blocks for /home

# change to the directory where disk.log resides


cd /var/log

# make a copy of disk.log


cp disk.log disk.log.0

# change to the target directory


cd /home

# run the du -sk * command on all files


# in /home and redirect the output
# to /var/log/disk.log
du -sk * > /var/log/disk.log

# display the output of the du -sk *


# command to standard output
cat /var/log/disk.log

It's not a good idea to hard-code pathnames into your scripts like we did in the previous
example. We specified /var/log as the target directory several times, but what if the location of
the files changed? In a short script like this one, the impact is not great. However, some scripts
can be hundreds of lines long, creating a maintenance headache if files are moved. A way
around this is to create a variable to take the place of the full pathname, such as:

LOGDIR=/var/log

The line of the script that copies disk.log would change from:

cp disk.log disk.log.0
to:

cp ${LOGDIR}/disk.log ${LOGDIR}/disk.log.0

Then, if the location of disk.log changes in the future, you would only have to change the
variable definition to update the script. Also note that since you are defining the pathname with
the LOGDIR variable, the cd /var/log line in the script is unnecessary. Likewise, the du -sk * >
/var/log/disk.log and cat /var/log/disk.log lines can substitute ${LOGDIR} for /var/log, as shown in the revised script below.
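Here is one possible revised version of diskuse with the hard-coded /var/log path replaced by the LOGDIR variable (a sketch based on the script above):

#!/usr/bin/ksh
# diskuse - shows disk usage in blocks for /home
LOGDIR=/var/log
# change the LOGDIR definition above if the log location ever moves
cp ${LOGDIR}/disk.log ${LOGDIR}/disk.log.0
cd /home
du -sk * > ${LOGDIR}/disk.log
cat ${LOGDIR}/disk.log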
Basic Shell Scripts:

Output to screen

#!/bin/bash
# Simple output script

echo "Hello World"

Defining Tasks

#!/bin/bash
# Define small tasks

whoami
echo
pwd
echo
hostname
echo
ls -ltr
echo

Defining variables

#!/bin/bash
# Example of defining variables

a=Imran
b=Afzal
c='Linux class'

echo "My first name is $a"


echo "My surname is $b"
echo 'My surname is $c'

Read Input

#!/bin/bash
# Read user input

echo "What is your first name?"


read a
echo

echo "What is your last name?"


read b
echo

echo Hello $a $b

Scripts to run commands within

#!/bin/bash
# Script to run commands within

clear
echo "Hello `whoami`"
echo
echo "Today is `date`"
echo
echo "Number of user login: `who | wc -l `"
echo

Read input and perform a task

#!/bin/bash
# This script will rename a file

echo Enter the file name to be renamed


read oldfilename

echo Enter the new file name


read newfilename

mv $oldfilename $newfilename
echo The file has been renamed as $newfilename
for loop Scripts:

Simple for loop output

#!/bin/bash

for i in 1 2 3 4 5
do
echo "Welcome $i times"
done

Simple for loop over a list of words

#!/bin/bash

for i in eat run jump play


do
echo See Imran $i
done

for loop to create 5 files named 1-5

#!/bin/bash

for i in {1..5}
do
touch $i
done

for loop to delete 5 files named 1-5

#!/bin/bash

for i in {1..5}
do
rm $i
done

Specify days in for loop

#!/bin/bash

i=1
for day in Mon Tue Wed Thu Fri
do
echo "Weekday $((i++)) : $day"
done

List all users one by one from /etc/passwd file

#!/bin/bash

i=1
for username in `awk -F: '{print $1}' /etc/passwd`
do
echo "Username $((i++)) : $username"
done
while loop Scripts:

Script to run for a number of times

#!/bin/bash

c=1
while [ $c -le 5 ]
do
echo "Welcone $c times"
(( c++ ))
done

Script to run for a number of seconds

#!/bin/bash

count=0
num=10
while [ $count -lt 10 ]
do
echo
echo $num seconds left to stop this process $1
echo
sleep 1

num=`expr $num - 1`
count=`expr $count + 1`
done
echo
echo $1 process is stopped!!!
echo
If-then Scripts:

Check the variable

#!/bin/bash

count=100
if [ $count -eq 100 ]
then
echo Count is 100
else
echo Count is not 100
fi

Check if a file error.txt exist

#!/bin/bash

clear
if [ -e /home/iafzal/error.txt ]

then
echo "File exist"
else
echo "File does not exist"
fi

Check if a variable value is met

#!/bin/bash

a=`date | awk '{print $1}'`

if [ "$a" == Mon ]

then
echo Today is $a
else
echo Today is not Monday
fi
Check the response and then output

#!/bin/bash

clear
echo
echo "What is your name?"
echo
read a
echo

echo Hello $a sir


echo

echo "Do you like working in IT? (y/n)"


read Like
echo

if [ "$Like" == y ]
then
echo You are cool

elif [ "$Like" == n ]
then
echo "You should try IT, it's a good field"
echo
fi

Other If statements

If the output is either Monday or Tuesday


if [ "$a" = Monday ] || [ "$a" = Tuesday ]

Test if the error.txt file exists and its size is greater than zero
if test -s error.txt

if [ $? -eq 0 ] If the exit status of the last command is zero (0)


if [ -e /export/home/filename ] If the file exists
if [ "$a" != "" ] If the variable is not empty
if [ "$error_code" != "0" ] If error_code is not equal to zero (0)

Comparisons:
-eq equal to (for numbers)
== equal to (for strings)
-ne not equal to (for numbers)
!= not equal to (for strings)
-lt less than
-le less than or equal to
-gt greater than
-ge greater than or equal to
File Operations:
-s file exists and is not empty
-f file exists and is not a directory
-d directory exists
-x file is executable
-w file is writable
-r file is readable
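For example, a short script combining a numeric comparison with a file test (a sketch; the file name /var/log/disk.log is just an example):

#!/bin/bash

count=7
if [ $count -gt 5 ] && [ -f /var/log/disk.log ]
then
echo "count is greater than 5 and the log file exists"
else
echo "condition not met"
fi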
case Scripts:

#!/bin/bash

echo
echo Please choose one of the options below
echo
echo 'a = Display Date and Time'
echo 'b = List file and directories'
echo 'c = List users logged in'
echo 'd = Check System uptime'
echo

read choices

case $choices in

a) date;;
b) ls;;
c) who;;
d) uptime;;
*) echo Invalid choice - Bye.

esac

This script will look at your current day and tell you the state of the
backup

#!/bin/bash

NOW=$(date +"%a")
case $NOW in
Mon)
echo "Full backup";;
Tue|Wed|Thu|Fri)
echo "Partial backup";;
Sat|Sun)
echo "No backup";;
*) ;;
esac
Aliases

 The alias command allows you to define new commands. Useful for creating shortcuts
for longer commands. The syntax is:

alias alias-name=executed_command

Some examples:

alias m=more
alias rm="rm -i"
alias h="history -r | more"

To view all current aliases:


alias

To remove a previously defined alias:


unalias alias_name
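Aliases defined at the command line last only for the current session. To keep them across logins, they can be added to your shell startup file (for example ~/.bashrc for bash or ~/.cshrc for the C shell); a minimal example:

# in ~/.bashrc
alias rm="rm -i"
alias ll="ls -ltr"

The aliases will then be defined automatically at every login.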
System Run Level

A run level is a preset operating state on a Unix-like operating system.

A system can be booted into (i.e., started up into) any of several runlevels, each of which is
represented by a single digit integer. Each runlevel designates a different system configuration
and allows access to a different combination of processes (i.e., instances of executing programs).

There are differences in the runlevels according to the operating system. Seven runlevels are
supported in the standard Linux kernel (i.e., core of the operating system). They are:

0 - System halt; no activity, the system can be safely powered down.

1 - Single user; rarely used.

2 - Multiple users, no NFS (network filesystem); also used rarely.

3 - Multiple users, command line (i.e., all-text mode) interface; the standard runlevel for most
Linux-based server hardware.

4 - User-definable

5 - Multiple users, GUI (graphical user interface); the standard runlevel for most Linux-based
desktop systems.

6 - Reboot; used when restarting the system.

By default Linux boots either to runlevel 3 or to runlevel 5. The former permits the system to
run all services except for a GUI. The latter allows all services including a GUI.

In addition to the standard runlevels, users can modify the preset runlevels or even create new
ones if desired. Runlevels 2 and 4 are usually used for user defined runlevels.

The program responsible for altering the runlevel is init, and it can be called using the telinit
command. For example, changing from runlevel 3 to runlevel 5, which allows the GUI to be
started, can be accomplished by the root (i.e., administrative) user by issuing the following
command:
telinit 5

Booting into a different runlevel can help solve certain problems. For example, if a change made
in the X Window System configuration on a machine that has been set up to boot into a GUI has
rendered the system unusable, it is possible to temporarily boot into a console (i.e., all-text
mode) runlevel (i.e., runlevels 3 or 1) in order to repair the error and then reboot into the GUI.
The X Window System is a widely used system for managing GUIs on single computers and on
networks of computers.

Likewise, if a machine will not boot due to a damaged configuration file or will not allow
logging in because of a corrupted /etc/passwd file (which stores user names and other data
about users) or because of a forgotten password, the problem can be solved by first booting into
single-user mode (i.e., runlevel 1).

The runlevel command can be used to find both the current runlevel and the previous runlevel
by merely typing the following and pressing the Enter key:

/sbin/runlevel

The runlevel executable file (i.e., the ready-to-run form of the program) is typically located in
the /sbin directory, which contains mostly administrative tools and which by default is not in
the user's PATH (i.e., the list of directories in which the system searches for programs). Thus, it
is usually necessary to type the full path of the command as shown above rather than just the
name of the command itself.

The default runlevel for a system is specified in the /etc/inittab file, which will contain an entry
such as id:3:initdefault: if the system starts in runlevel 3, or id:5:initdefault: if it starts in runlevel
5. This file can be easily (and safely) read with a command such as cat, i.e.,

cat /etc/inittab

As an alternative to telinit, the runlevel into which the system boots can be changed by
modifying /etc/inittab manually with a text editor. However, it is generally easier and safer (i.e.,
less chance of accidental damage to the file) to use telinit. It is always wise to make a backup
copy of /etc/inittab or any other configuration file before attempting to modify it manually.
Partitioning a Disk

Linux
# fdisk /dev/emcpowerp OR fdisk /dev/sdb
m (help)  n (new partition)  p (primary)  1 (partition number)  Enter (first sector)  Enter (last sector)  w (write)

Then format the new partition


# mkfs -t ext2 /dev/sdb1
OR

# mkfs.ext3 /dev/emcpowerL# OR mkfs.ext3 /dev/sdb1


Mount Disk Partitions:

 First make sure the slice has no data

Create directories that will be mounted on a slice

e.g:

# mkdir /rocket
# cd /rocket

# mkdir IFMX_ROCKET
# mkdir ROCKET_DATA

Mount slice to the directory


e.g:
Linux
# mount /dev/sdb1 /rocket/ROCKET_DATA
# mount /dev/sdb2 /rocket/IFMX_ROCKET

Add these entries to /etc/fstab file so the system can mount on boot up
# cp /etc/fstab /etc/fstab.bak

vi /etc/fstab and add the following lines


/dev/sdb1 /rocket/ROCKET_DATA ext4 defaults 1 1
/dev/sdb2 /rocket/IFMX_ROCKET ext4 defaults 1 1
 fdisk /dev/sdc
 n
 p
 Enter for first sector
 Enter for last sector
 p = print the partition table
 t = change a partition's system id
 L = type L to list all codes
 8e = Partition type from Linux to Linux LVM
 w

 Create Physical Volume (PV) = pvcreate /dev/sdc1


 Verify physical volume = pvdisplay

 Create Volume Group (VG) = vgcreate oracle_vg /dev/sdc1


 Verify Volume group = vgdisplay oracle_vg

 Create Logical Volumes (LV) = lvcreate -n oracle_lv -L 2G oracle_vg


 Verify logical volumes = lvdisplay

 Format Logical Volumes = mkfs.xfs /dev/oracle_vg/oracle_lv

 Create a new directory = mkdir /oracle

 Mount the new file system = mount /dev/oracle_vg/oracle_lv /oracle

 Verify = df -h
To extend filesystem of a Linux VM using LVM

Go to your virtualization product (VMWare or Oracle Virtual Box)


 Increase the disk space to desired number and then click ok

Now go to your Linux VM


 Reboot the VM to have the system re-scan the newly added disk Or
 cd /sys/class/scsi_disk/2:0:0:0
 echo '1' > device/rescan

 fdisk -l (To make sure the disk is increased)

 Create a new partition


o fdisk /dev/sdc
o n (for new partition)
o p (for primary partition)
o 2 (partition number; 2 for the new partition)
o Enter
o Enter
o t (Label the new partition)
o 3 (Pick default value)
o 8e (This will make the filesystem as LVM)
o w (Write)
o # reboot or init 6

Note: The above procedure will create /dev/sdc2 partition

 Extend the LVM group


o pvdisplay (To see which group associated with which disk)
o pvs (Info about physical volumes)
o vgdisplay oracle_vg (oracle_vg is the group name or you can simply run vgdisplay)
On vgdisplay you will notice Free PE / Size at the bottom

o pvcreate /dev/sdc2 (Initialize partition for use by LVM)


o vgextend oracle_vg /dev/sdc2 (use whichever partition was created above)
o Run vgdisplay oracle_vg
check Free PE / Size; the second value shows the free space. If it is shown in G, convert
it to M (e.g. 1G = 1024M)
o lvextend -L +1024M /dev/mapper/oracle_vg-oracle_lv
o resize2fs /dev/mapper/oracle_vg-oracle_lv
o OR
o xfs_growfs /dev/mapper/oracle_vg-oracle_lv
Use a File for Additional Swap Space:

What is swap? – CentOS.org


Swap space in Linux is used when the amount of physical memory (RAM) is full. If the
system needs more memory resources and the RAM is full, inactive pages in memory are
moved to the swap space. While swap space can help machines with a small amount of
RAM, it should not be considered a replacement for more RAM. Swap space is located on
hard drives, which have a slower access time than physical memory

Recommended swap size = Twice the size of RAM


Let's say,
M = Amount of RAM in GB, and S = Amount of swap in GB, then

If M < 2
then S = M *2
Else S=M+2
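Applying this rule with a quick shell calculation (a sketch; the RAM size of 4 GB is just an example value):

#!/bin/bash
M=4
# M is the amount of RAM in GB (example value)
if [ $M -lt 2 ]
then
S=$((M * 2))
else
S=$((M + 2))
fi
echo "Recommended swap: ${S} GB"
# prints "Recommended swap: 6 GB" for a 4 GB system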

Commands
dd
mkswap
swapon or swapoff

Steps to Create Swap Space from Existing Disk:


If you don’t have any additional disks, you can create a file somewhere on your filesystem, and
use that file for swap space.

The following dd command example creates a swap file with the name “newswap” under /
directory with a size of 1024MB (1.0GB).

# dd if=/dev/zero of=/newswap bs=1M count=1024


Where
if = read from FILE instead of stdin
of = write to FILE instead of stdout
bs = read and write BYTES at a time
count = total size of the file

Change the permission of the swap file so that only root can access it.
# chmod go-r /newswap OR
# chmod 0600 /newswap

Make this file as a swap file using mkswap command.


# mkswap /newswap

Enable the newly created newswap.


# swapon /newswap

To make this swap file available as a swap area even after the reboot, add the following line to
the /etc/fstab file.
# cat /etc/fstab
/newswap swap swap defaults 0 0

Verify whether the newly created swap area is available for your use.
# swapon -s
# free -h

If you don’t want to reboot to verify whether the system takes all the swap space mentioned in
the /etc/fstab, you can do the following, which will disable and enable all the swap partition
mentioned in the /etc/fstab
# swapoff -a
# swapon -a
Overview of systemd for RHEL 7
The systemd system and service manager is responsible for controlling how services are started,
stopped and otherwise managed on Red Hat Enterprise Linux 7 systems. By offering on-demand
service start-up and better transactional dependency controls, systemd dramatically reduces start
up times. As a systemd user, you can prioritize critical services over less important services.

Although the systemd process replaces the init process (quite literally, /sbin/init is now a
symbolic link to /usr/lib/systemd/systemd) for starting services at boot time and changing
runlevels, systemd provides much more control than the init process does while still supporting
existing init scripts. Here are some examples of the features of systemd:

 Logging: From the moment that the initial RAM disk is mounted to start the Linux kernel
to final shutdown of the system, all log messages are stored by the new systemd journal.
Before the systemd journal existed, initial boot messages were lost, requiring that you try
to watch the screen as messages scrolled by to debug boot problems.
Now, all system messages come in on a single stream and are stored in the /run directory.
Messages can then be consumed by the rsyslog facility (and redirected to traditional log
files in the /var/log directory or to remote log servers) or displayed using the journalctl
command across a variety of attributes.
 Dependencies: With systemd, an explicit set of dependencies can be defined for each
service, instead of being implied by boot order. This allows a service to start at any point
that its dependencies are met. In this way, many services can start at the same time,
making the boot process faster. Likewise, complex sets of dependencies can be set up, so
the exact requirements of a service (such as storage availability or file system checking)
can be met before a service starts.
 Cgroups: Services are identified by Cgroups, which allow every component of a service
to be managed. For example, the older System V init scripts would start a service by
launching a process which itself might start other child processes. When the service was
killed, it was hoped that the parent process would do the right thing and kill its children.
By using Cgroups, all components of a service have a tag that can be used to make sure
that all of those components are properly started or stopped.
 Activating services: Services don't just have to be always running or not running based
on runlevel, as they were previous to systemd. Services can now be activated based on
path, socket, bus, timer, or hardware activation. Likewise, because systemd can set up
sockets, if a process handling communications goes away, the process that starts up in its
place can pick up the next message from the socket. To the clients using the service, it
can look as though the service continued without interruption.
 More than services: Instead of just managing services, systemd can manage several
different unit types. These unit types include:
o Devices: Create and use devices.
o Mounts and automounts: Mount file systems upon request or automount a file
system based on a request for a file or directory within that file system.
o Paths: Check the existence of files or directories or create them as needed.
o Services: Start a service, which often means launching a service daemon and
related components.
o Slices: Divide up computer resources (such as CPU and memory) and apply them
to selected units.
o Snapshots: Take snapshots of the current state of the system.
o Sockets: Set up sockets to allow communication paths to processes that can
remain in place, even if the underlying process needs to restart.
o Swaps: Create and use swap files or swap partitions.
o Targets: Manage a set of services under a single unit, represented by a target
name rather than a runlevel number.
o Timers: Trigger actions based on a timer.
 Resource management
o The fact that each systemd unit is always associated with its own cgroup lets you
control the amount of resources each service can use. For example, you can set a
percent of CPU usage by service which can put a cap on the total amount of CPU
that service can use -- in other words, spinning off more processes won't allow
more resources to be consumed by the service. Prior to systemd, nice levels were
often used to prevent processes from hogging precious CPU time. With systemd's
use of cgroups, precise limits can be set on CPU and memory usage, as well as
other resources.
o A feature called slices lets you slice up many different types of system resources
and assign them to users, services, virtual machines, and other units. Accounting
is also done on these resources, which can allow you to charge customers for their
resource usage.
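To make the dependency and resource-management ideas above concrete, here is a sketch of what a simple service unit file might look like. The service name myapp.service, the path /usr/bin/myapp, and the limit values are hypothetical examples; the directives shown (After, Requires, ExecStart, Restart, CPUQuota, MemoryLimit, WantedBy) are standard systemd unit-file options.

# /etc/systemd/system/myapp.service (hypothetical example)
[Unit]
Description=My example application
After=network.target
Requires=network.target

[Service]
ExecStart=/usr/bin/myapp --serve
Restart=on-failure
CPUQuota=50%
MemoryLimit=512M

[Install]
WantedBy=multi-user.target

Enabling such a unit with systemctl enable myapp.service links it into the multi-user.target.wants directory, as described later in this section.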

Booting RHEL 7 with systemd


When you boot a standard X86 computer to run RHEL 7, the BIOS boots from the selected
medium (usually a local hard disk) and the boot loader (GRUB2 for RHEL 7) starts the RHEL 7
kernel and initial RAM disk. After that, the systemd process takes over to initialize the system
and start all the system services.

Although there is not a strict order in which services are started when a RHEL 7 (systemd)
system is booted, there is a structure to the boot process. The direction that the systemd process
takes at boot time depends on the default.target file. A long listing of the default.target file
shows you which target starts when the system boots:

# cd /etc/systemd/system
# ls -l default.target
lrwxrwxrwx. 1 root root 16 Aug 23 19:18 default.target ->
/lib/systemd/system/graphical.target

You can see here that the graphical.target (common for desktop systems or servers with
graphical interfaces) is set as the default.target (via a symbolic link). To understand what
targets, services and other units start up with the graphical target, it helps to work backwards, as
systemd does, to build the dependency tree. Here's what to look for:
 graphical.target: The /lib/systemd/system/graphical.target file includes these lines:
 Requires=multi-user.target
 Wants=display-manager.service
 Conflicts=rescue.service rescue.target
 After=multi-user.target rescue.service rescue.target display-
manager.service
 AllowIsolate=yes

This tells systemd to start everything in the multi-user.target before starting the graphical
target. Once that's done, the "Wants" entry tells systemd to start the display-
manager.service service (/etc/systemd/system/display-manager.service), which runs
the GNOME display manager (/usr/sbin/gdm).

 multi-user.target: The /usr/lib/systemd/system/multi-user.target starts the services


you would expect in a RHEL multi-user mode. The file contains the following line:

Requires=basic.target

This tells systemd to start everything in the /usr/lib/systemd/system/basic.target target


before starting the other multi-user services. After that, for the multi-user.target, all units
(services, targets, etc.) in the /etc/systemd/system/multi-user.target.wants and
/usr/lib/systemd/system/multi-user.target.wants directories are started. When you
enable a service, a symbolic link is placed in the /etc/systemd/system/multi-
user.target.wants directory. That directory is where you will find links to most of the
services you think of as starting in multi-user mode (printing, cron, auditing, SSH, and so
on). Here is an example of the services, paths, and targets in a typical multi-
user.target.wants directory:

# cd /etc/systemd/system/multi-user.target.wants
abrt-ccpp.service hypervkvpd.service postfix.service
abrtd.service hypervvssd.service remote-fs.target
abrt-oops.service irqbalance.service rhsmcertd.service
abrt-vmcore.service ksm.service rngd.service
abrt-xorg.service ksmtuned.service rpcbind.service
atd.service libstoragemgmt.service rsyslog.service
auditd.service libvirtd.service smartd.service
avahi-daemon.service mdmonitor.service sshd.service
chronyd.service ModemManager.service sysstat.service
crond.service netcf-transaction.service tuned.service
cups.path nfs.target vmtoolsd.service

 basic.target: The /usr/lib/systemd/system/basic.target file starts the basic services


associated with all running RHEL 7 systems. The file contains the following line:

Requires=sysinit.target

This points systemd to the /usr/lib/systemd/system/sysinit.target, which must start


before the basic.target can continue. The basic.target target file starts the firewalld and
microcode services from the /etc/systemd/system/basic.target.wants directory and
services for SELinux, kernel messages, and loading modules from the
/usr/lib/systemd/system/basic.target.wants directory.

 sysinit.target: The /usr/lib/systemd/system/sysinit.target file starts system initialization


services, such as mounting file systems and enabling swap devices. The file contains the
following line:

Wants=local-fs.target swap.target

Besides mounting file systems and enabling swap devices, the sysinit.target starts targets,
services, and mounts based on units contained in the
/usr/lib/systemd/system/sysinit.target.wants directory. These units enable logging, set
kernel options, start the udevd daemon to detect hardware, and allow file system
decryption, among other things. The /etc/systemd/system/sysinit.target.wants directory
contains services that start iSCSI, multipath, LVM monitoring and RAID services.

 local-fs.target: The local-fs.target is set to run after the local-fs-pre.target target, based
on this line:

After=local-fs-pre.target

There are no services associated with the local-fs-pre.target target (you could add some to
a "wants" directory if you like). However, units in the /usr/lib/systemd/system/local-
fs.target.wants directory import the network configuration from the initramfs, run a file
system check (fsck) on the root file system when necessary, and remount the root file
system (and special kernel file systems) based on the contents of the /etc/fstab file.

Although the boot process is built by systemd in the order just shown, it actually runs, in general,
in the opposite order. As a rule, a target on which another target is dependent must be running
before the units in the first target can start. To see more details about the boot process, see the
bootup man page (man 7 bootup).

Using the systemctl Command


The most important command for managing services on a RHEL 7 (systemd) system is the
systemctl command. Here are some examples of the systemctl command (using the nfs-server
service as an example) and a few other commands that you may find useful:

 Checking service status: To check the status of a service (for example, nfs-
server.service), type the following:
 # systemctl status nfs-server.service
 nfs-server.service - NFS Server
 Loaded: loaded (/usr/lib/systemd/system/nfs-server.service;
disabled)
 Active: active (exited) since Wed 2014-03-19 10:29:40 MDT; 57s ago
 Process: 5206 ExecStartPost=/usr/libexec/nfs-utils/scripts/nfs-
server.postconfig (code=exited, status=0/SUCCESS)
 Process: 5191 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS $RPCNFSDCOUNT
(code=exited, status=0/SUCCESS)
 Process: 5188 ExecStartPre=/usr/sbin/exportfs -r (code=exited,
status=0/SUCCESS)
 Process: 5187 ExecStartPre=/usr/libexec/nfs-utils/scripts/nfs-
server.preconfig (code=exited, status=0/SUCCESS)
 Main PID: 5191 (code=exited, status=0/SUCCESS)
 CGroup: /system.slice/nfs-server.service

 Mar 19 10:29:40 localhost.localdomain systemd[1]: Starting NFS
Server...
 Mar 19 10:29:40 localhost.localdomain systemd[1]: Started NFS Server.
 Stopping a service: To stop a service, use the stop option as follows:
 # systemctl stop nfs-server.service
 Starting a service: To start a service, use the start option as follows:
 # systemctl start nfs-server.service
 Enabling a service: To enable a service so it starts automatically at boot time, type the
following:
 # systemctl enable nfs-server.service
 Disable a service: To disable a service so it doesn't start automatically at boot time, type
the following:
 # systemctl disable nfs-server.service
 Listing dependencies: To see dependencies of a service, use the list-dependencies
option, as follows:
 # systemctl list-dependencies nfs-server.service
 nfs-server.service
 ├─nfs-idmap.service
 ├─nfs-mountd.service
 ├─nfs-rquotad.service
 ├─proc-fs-nfsd.mount
 ├─rpcbind.service
 ├─system.slice
 ├─var-lib-nfs-rpc_pipefs.mount
 └─basic.target
 ├─alsa-restore.service
 ├─alsa-state.service
 ...
 Listing units in targets: To see what services and other units (service, mount, path,
socket, and so on) are associated with a particular target, type the following:
 # systemctl list-dependencies multi-user.target
 multi-user.target
 ├─abrt-ccpp.service
 ├─abrt-oops.service
 ├─abrt-vmcore.service
 ├─abrt-xorg.service
 ├─abrtd.service
 ├─atd.service
 ├─auditd.service
 ├─avahi-daemon.service
 ├─brandbot.path
 ├─chronyd.service
 ├─crond.service
 ...
 List specific types of units: Use the following command to list specific types of units (in
these examples, service and mount unit types):
 # systemctl list-units --type service
 UNIT LOAD ACTIVE SUB DESCRIPTION
 abrt-ccpp.service loaded active exited Install ABRT
coredump hook
 abrt-oops.service loaded active running ABRT kernel log
watcher
 abrt-xorg.service loaded active running ABRT Xorg log
watcher
 abrtd.service loaded active running ABRT Automated Bug
Reporting
 accounts-daemon.service loaded active running Accounts Service
 ...

 # systemctl list-units --type mount
 UNIT LOAD ACTIVE SUB DESCRIPTION
 -.mount loaded active mounted /
 boot.mount loaded active mounted /boot
 dev-hugepages.mount loaded active mounted Huge Pages File
System
 dev-mqueue.mount loaded active mounted POSIX Message Queue
File Syst
 mnt-repo.mount loaded active mounted /mnt/repo
 proc-fs-nfsd.mount loaded active mounted RPC Pipe File System
 run-user-1000-gvfs.mount loaded active mounted /run/user/1000/gvfs
 ...
 Listing all units: To list all units installed on the system, along with their current states,
type the following:
 # systemctl list-unit-files
 UNIT FILE STATE
 proc-sys-fs-binfmt_misc.automount static
 dev-hugepages.mount static
 dev-mqueue.mount static
 proc-sys-fs-binfmt_misc.mount static
 ...
 arp-ethers.service disabled
 atd.service enabled
 auditd.service enabled
 ...
 View service processes with systemd-cgtop: To view processes associated with a
particular service (cgroup), you can use the systemd-cgtop command. Like the top
command (which sorts processes by such things as CPU and memory usage), systemd-
cgtop lists running processes based on their service (cgroup label). Once systemd-cgtop
is running, you can press keys to sort by memory (m), CPU (c), task (t), path (p), or I/O
load (i). Here is an example:
 # systemd-cgtop
 Recursively view cgroup contents: To output a recursive list of cgroup content, use the
systemd-cgls command:
 # systemd-cgls
 ├─user.slice
 │ ├─user-1000.slice
 │ │ ├─session-5.scope
 │ │ │ ├─2661 gdm-session-worker [pam/gdm-password]
 │ │ │ ├─2672 /usr/bin/gnome-keyring-daemon --daemonize --login
 │ │ │ ├─2674 gnome-session --session gnome-classic
 │ │ │ ├─2682 dbus-launch --sh-syntax --exit-with-session
 │ │ │ ├─2683 /bin/dbus-daemon --fork --print-pid 4 --print-address 6 --
session
 │ │ │ ├─2748 /usr/libexec/gvfsd
 ...
 View journal (log) files: Using the journalctl command you can view messages from
the systemd journal. Using different options you can select which group of messages to
display. The journalctl command also supports tab completion to fill in fields for which
to search. Here are some examples:
 # journalctl -h View help for the command
 # journalctl -k View kernel messages from current boot
 # journalctl -f Follow journal messages (like tail -f)
 # journalctl -u NetworkManager View messages for specific unit (can
tab complete)

Comparing systemd to Traditional init


Some of the benefits of systemd over the traditional System V init facility include:

 systemd never loses initial log messages


 systemd can respawn daemons as needed
 systemd records runtime data (i.e., captures stdout/stderr of processes)
 systemd doesn't lose daemon context during runtime
 systemd can kill all components of a service cleanly

Here are some details of how systemd compares to pre-RHEL 7 init and related commands:

 System startup: The systemd process is the first process ID (PID 1) to run on RHEL 7
system. It initializes the system and launches all the services that were once started by the
traditional init process.
 Managing system services: For RHEL 7, the systemctl command replaces service and
chkconfig. Prior to RHEL 7, once RHEL was up and running, the service command was
used to start and stop services immediately. The chkconfig command was used to
identify at which run levels a service would start or stop automatically.
Although you can still use the service and chkconfig commands to start/stop and
enable/disable services, respectively, they are not 100% compatible with the RHEL 7
systemctl command. For example, non-standard service options, such as those that start
databases or check configuration files, may not be supported in the same way for RHEL 7
services.
 Changing runlevels: Prior to RHEL 7, runlevels were used to identify a set of services
that would start or stop when that runlevel was requested. Instead of runlevels, systemd
uses the concept of targets to group together sets of services that are started or stopped. A
target can also include other targets (for example, the multi-user target includes an nfs
target).
There are systemd targets that align with the earlier runlevels. However the point of
targets is not to necessarily imply a level of activity (for example, runlevel 3 implied
more services were active than runlevel 1). Instead targets just represent a group of
services, so it's appropriate that there are many more targets available than there are
runlevels. The following list shows how systemd targets align with traditional runlevels:
 Traditional runlevel New target name Symbolically linked to...
 Runlevel 0 | runlevel0.target -> poweroff.target
 Runlevel 1 | runlevel1.target -> rescue.target
 Runlevel 2 | runlevel2.target -> multi-user.target
 Runlevel 3 | runlevel3.target -> multi-user.target
 Runlevel 4 | runlevel4.target -> multi-user.target
 Runlevel 5 | runlevel5.target -> graphical.target
 Runlevel 6 | runlevel6.target -> reboot.target
 Default runlevel: The default runlevel (previously set in the /etc/inittab file) is now
replaced by a default target. The location of the default target is
/etc/systemd/system/default.target, which by default is linked to the multi-user target.
 Location of services: Before systemd, services were stored as scripts in the /etc/init.d
directory, then linked to different runlevel directories (such as /etc/rc3.d, /etc/rc5.d, and
so on). Services with systemd are named something.service, such as firewalld.service,
and are stored in /lib/systemd/system and /etc/systemd/system directories. Think of the
/lib files as being more permanent and the /etc files as the place you can modify
configurations as needed.
When you enable a service in RHEL 7, the service file is linked to a file in the
/etc/systemd/system/multi-user.target.wants directory. For example, if you run
systemctl enable fcoe.service a symbolic link is created from
/etc/systemd/system/multi-user.target.wants/fcoe.service that points to
/lib/systemd/system/fcoe.service to cause the fcoe.service to start at boot time.
Also, the older System V init scripts were actual shell scripts. The systemd files tasked to
do the same job are more like .ini files that contain the information needed to launch a
service.
 Configuration files: The /etc/inittab file was used by the init process in RHEL 6 and
earlier to point to the initialization files (such as /etc/rc.sysinit) and runlevel service
directories (such as /etc/rc5.d) needed to start up the system. Changes to those services
were done in files (usually named after the service) in the /etc/sysconfig directory. For
systemd in RHEL 7, there are still files in /etc/sysconfig used to modify how services
behave. However, services can be modified by adding files to the /etc/systemd directory
to override the permanent service files in the /lib/systemd directories.
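For example, a drop-in file can override a single setting without touching the file in /lib/systemd (a sketch; the service chosen and the values are arbitrary examples):

# /etc/systemd/system/sshd.service.d/override.conf
[Service]
Restart=always
RestartSec=5

After adding or editing unit files or drop-ins, run systemctl daemon-reload so that systemd re-reads its configuration.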
Transitioning to systemd
If you are used to using the init process and System V init scripts prior to RHEL 7, there are a
few things you should know about transitioning to systemd:

 Using RHEL 6 commands: For the time being, you can use commands such as service,
chkconfig, runlevel, and init as you did in RHEL 6. They will cause appropriate systemd
commands to run, with similar, if not exactly the same, results. Here are some examples:
 # service cups restart
 Redirecting to /bin/systemctl restart cups.service
 # chkconfig cups on
 Note: Forwarding request to 'systemctl enable cups.service'.
 System V init Scripts: Although not encouraged, System V init scripts are still
supported. There are still some services in RHEL 7 that are implemented in System V init
scripts. To see System V init scripts that are available on your system and the runlevels
on which they start, use the chkconfig command as follows:
 # chkconfig --list
 ...
 iprdump 0:off 1:off 2:on 3:on 4:on 5:on 6:off
 iprinit 0:off 1:off 2:on 3:on 4:on 5:on 6:off
 iprupdate 0:off 1:off 2:on 3:on 4:on 5:on 6:off
 netconsole 0:off 1:off 2:off 3:off 4:off 5:on 6:off
 network 0:off 1:off 2:on 3:on 4:on 5:on 6:off
 rhnsd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
 ...

Using chkconfig, however, will not show you the whole list of services on your system. To see
the systemd-specific services, run the systemctl list-unit-files command, as described earlier.
Customizing motd
You can have the MOTD (message of the day) display messages that may be unique to the machine. One way
to do this is to create a script that runs when a user logs on to the system.
First, create a script in /etc/profile.d = touch /etc/profile.d/motd.sh
Make it executable = chmod a+x /etc/profile.d/motd.sh (make sure the file name ends in .sh)

#!/bin/bash
#
echo -e "
##################################
#
# Welcome to `hostname`
# This system is running `cat /etc/redhat-release`
# kernel is `uname -r`
#
# You are logged in as `whoami`
#
##################################
"

Next, edit /etc/ssh/sshd_config as follows:

PrintMotd no

This will disable the default motd printed by sshd, so the login message comes only from the script

Now restart the sshd service


systemctl restart sshd.service
That’s it! When you log in, you’d see something similar to:

#####################################
#
# Welcome to MyFirstLinuxVM
# This system is running CentOS Linux release 7.5.1804 (Core)
# kernel is 3.10.0-862.el7.x86_64
#
# You are logged in as iafzal
#
#####################################
Steps for NFS Server Configuration

• Install NFS packages

# yum install nfs-utils libnfsidmap (most likely they are installed)

• Once the packages are installed, enable and start NFS services

# systemctl enable rpcbind

# systemctl enable nfs-server

# systemctl start rpcbind nfs-server rpc-statd nfs-idmapd

• Create NFS share directory and assign permissions

# mkdir /mypretzels

# chmod a+rwx /mypretzels

• Modify /etc/exports file to add new shared filesystem

# /mypretzels 192.168.12.7(rw,sync,no_root_squash) = for only 1 host

# /mypretzels *(rw,sync,no_root_squash) = for everyone

• Export the NFS filesystem

# exportfs -rv

• Stop and disable firewalld

# systemctl stop firewalld

# systemctl disable firewalld

Steps for NFS Client Configuration

• Install NFS packages

# yum install nfs-utils rpcbind

• Once the packages are installed enable and start rpcbind service

# systemctl start rpcbind

• Make sure firewalld or iptables stopped (if running)

# ps -ef | egrep "firewall|iptable"

• Show mount from the NFS server


# showmount -e 192.168.1.5 (NFS Server IP)

• Create a mount point

# mkdir /mnt/kramer

• Mount the NFS filesystem

# mount 192.168.1.5:/mypretzels /mnt/kramer

• Verify mounted filesystem

# df -h

• To unmount

# umount /mnt/kramer
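• Optional: to make the NFS mount persistent across reboots, an entry such as the following can be added to /etc/fstab on the client (a sketch using the server IP, export, and mount point from the steps above):

192.168.1.5:/mypretzels /mnt/kramer nfs defaults 0 0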
Red Hat Enterprise Linux 7

Storage Administration Guide

Deploying and configuring single-node storage in Red Hat Enterprise Linux 7

Last Updated: 2018-08-28



Milan Navrátil
Red Hat Customer Content Services

Jacquelynn East
Red Hat Customer Content Services

Don Domingo
Red Hat Customer Content Services

Josef Bacik
Server Development Kernel File System
jwhiter@redhat.com
Disk Quotas

Kamil Dudka
Base Operating System Core Services - BRNO
kdudka@redhat.com
Access Control Lists

Hans de Goede
Base Operating System Installer
hdegoede@redhat.com
Partitions

Harald Hoyer
Engineering Software Engineering
harald@redhat.com
File Systems

Dennis Keefe
Base Operating Systems Kernel Storage
dkeefe@redhat.com
VDO

Doug Ledford
Server Development Hardware Enablement
dledford@redhat.com
RAID

Daniel Novotny
Base Operating System Core Services - BRNO
dnovotny@redhat.com
The /proc File System

Nathan Straz
Quality Engineering QE - Platform
nstraz@redhat.com
GFS2

Andy Walsh
Base Operating Systems Kernel Storage
awalsh@redhat.com
VDO

David Wysochanski
Server Development Kernel Storage
dwysocha@redhat.com
LVM/LVM2

Michael Christie
Server Development Kernel Storage
mchristi@redhat.com
Online Storage

Sachin Prabhu
Software Maintenance Engineering
sprabhu@redhat.com
NFS

Rob Evers
Server Development Kernel Storage
revers@redhat.com
Online Storage

David Howells
Server Development Hardware Enablement
dhowells@redhat.com
FS-Cache

David Lehman
Base Operating System Installer
dlehman@redhat.com
Storage configuration during installation

Jeff Moyer
Server Development Kernel File System
jmoyer@redhat.com
Solid-State Disks

Eric Sandeen
Server Development Kernel File System
esandeen@redhat.com
ext3, ext4, XFS, Encrypted File Systems

Mike Snitzer
Server Development Kernel Storage
msnitzer@redhat.com
I/O Stack and Limits

Red Hat Subject Matter Experts

Contributors

Edited by
Marek Suchánek
Red Hat Customer Content Services
msuchane@redhat.com

Apurva Bhide
Red Hat Customer Content Services
abhide@redhat.com

Legal Notice

Copyright © 2018 Red Hat, Inc.

This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0
Unported License. If you distribute this document, or a modified version of it, you must provide
attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat
trademarks must be removed.

Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity
logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.

Java ® is a registered trademark of Oracle and/or its affiliates.

XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.

MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.

Node.js ® is an official trademark of Joyent. Red Hat Software Collections is not formally related to
or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other countries
and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or
sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.

Abstract

This guide provides instructions on how to effectively manage storage devices and file systems on
Red Hat Enterprise Linux 7. It is intended for use by system administrators with basic to
intermediate knowledge of Red Hat Enterprise Linux or Fedora.
Table of Contents

CHAPTER 1. OVERVIEW
    1.1. NEW FEATURES AND ENHANCEMENTS IN RED HAT ENTERPRISE LINUX 7

PART I. FILE SYSTEMS

CHAPTER 2. FILE SYSTEM STRUCTURE AND MAINTENANCE
    2.1. OVERVIEW OF FILESYSTEM HIERARCHY STANDARD (FHS)
    2.2. SPECIAL RED HAT ENTERPRISE LINUX FILE LOCATIONS
    2.3. THE /PROC VIRTUAL FILE SYSTEM
    2.4. DISCARD UNUSED BLOCKS

CHAPTER 3. THE XFS FILE SYSTEM
    3.1. CREATING AN XFS FILE SYSTEM
    3.2. MOUNTING AN XFS FILE SYSTEM
    3.3. XFS QUOTA MANAGEMENT
    3.4. INCREASING THE SIZE OF AN XFS FILE SYSTEM
    3.5. REPAIRING AN XFS FILE SYSTEM
    3.6. SUSPENDING AN XFS FILE SYSTEM
    3.7. BACKING UP AND RESTORING XFS FILE SYSTEMS
    3.8. CONFIGURING ERROR BEHAVIOR
    3.9. OTHER XFS FILE SYSTEM UTILITIES
    3.10. MIGRATING FROM EXT4 TO XFS

CHAPTER 4. THE EXT3 FILE SYSTEM
    4.1. CREATING AN EXT3 FILE SYSTEM
    4.2. CONVERTING TO AN EXT3 FILE SYSTEM
    4.3. REVERTING TO AN EXT2 FILE SYSTEM

CHAPTER 5. THE EXT4 FILE SYSTEM
    5.1. CREATING AN EXT4 FILE SYSTEM
    5.2. MOUNTING AN EXT4 FILE SYSTEM
    5.3. RESIZING AN EXT4 FILE SYSTEM
    5.4. BACKING UP EXT2, EXT3, OR EXT4 FILE SYSTEMS
    5.5. RESTORING EXT2, EXT3, OR EXT4 FILE SYSTEMS
    5.6. OTHER EXT4 FILE SYSTEM UTILITIES

CHAPTER 6. BTRFS (TECHNOLOGY PREVIEW)
    6.1. CREATING A BTRFS FILE SYSTEM
    6.2. MOUNTING A BTRFS FILE SYSTEM
    6.3. RESIZING A BTRFS FILE SYSTEM
    6.4. INTEGRATED VOLUME MANAGEMENT OF MULTIPLE DEVICES
    6.5. SSD OPTIMIZATION
    6.6. BTRFS REFERENCES

CHAPTER 7. GLOBAL FILE SYSTEM 2

CHAPTER 8. NETWORK FILE SYSTEM (NFS)
    8.1. INTRODUCTION TO NFS
    8.2. PNFS
    8.3. CONFIGURING NFS CLIENT
    8.4. AUTOFS
    8.5. COMMON NFS MOUNT OPTIONS
    8.6. STARTING AND STOPPING THE NFS SERVER
    8.7. CONFIGURING THE NFS SERVER
    8.8. SECURING NFS
    8.9. NFS AND RPCBIND
    8.10. NFS REFERENCES

CHAPTER 9. SERVER MESSAGE BLOCK (SMB)
    9.1. PROVIDING SMB SHARES
    9.2. MOUNTING AN SMB SHARE

CHAPTER 10. FS-CACHE
    10.1. PERFORMANCE GUARANTEE
    10.2. SETTING UP A CACHE
    10.3. USING THE CACHE WITH NFS
    10.4. SETTING CACHE CULL LIMITS
    10.5. STATISTICAL INFORMATION
    10.6. FS-CACHE REFERENCES

PART II. STORAGE ADMINISTRATION

CHAPTER 11. STORAGE CONSIDERATIONS DURING INSTALLATION
    11.1. SPECIAL CONSIDERATIONS

CHAPTER 12. FILE SYSTEM CHECK
    12.1. BEST PRACTICES FOR FSCK
    12.2. FILE SYSTEM-SPECIFIC INFORMATION FOR FSCK

CHAPTER 13. PARTITIONS
    Manipulating Partitions on Devices in Use
    Modifying the Partition Table
    13.1. VIEWING THE PARTITION TABLE
    13.2. CREATING A PARTITION
    13.3. REMOVING A PARTITION
    13.4. SETTING A PARTITION TYPE
    13.5. RESIZING A PARTITION WITH FDISK

CHAPTER 14. CREATING AND MAINTAINING SNAPSHOTS WITH SNAPPER
    14.1. CREATING INITIAL SNAPPER CONFIGURATION
    14.2. CREATING A SNAPPER SNAPSHOT
    14.3. TRACKING CHANGES BETWEEN SNAPPER SNAPSHOTS
    14.4. REVERSING CHANGES IN BETWEEN SNAPSHOTS
    14.5. DELETING A SNAPPER SNAPSHOT

CHAPTER 15. SWAP SPACE
    15.1. ADDING SWAP SPACE
    15.2. REMOVING SWAP SPACE
    15.3. MOVING SWAP SPACE

CHAPTER 16. SYSTEM STORAGE MANAGER (SSM)
    16.1. SSM BACK ENDS
    16.2. COMMON SSM TASKS
    16.3. SSM RESOURCES

CHAPTER 17. DISK QUOTAS
    17.1. CONFIGURING DISK QUOTAS
    17.2. MANAGING DISK QUOTAS
    17.3. DISK QUOTA REFERENCES

CHAPTER 18. REDUNDANT ARRAY OF INDEPENDENT DISKS (RAID)
    18.1. RAID TYPES
    18.2. RAID LEVELS AND LINEAR SUPPORT
    18.3. LINUX RAID SUBSYSTEMS
    18.4. RAID SUPPORT IN THE ANACONDA INSTALLER
    18.5. CONVERTING ROOT DISK TO RAID1 AFTER INSTALLATION
    18.6. CONFIGURING RAID SETS
    18.7. CREATING ADVANCED RAID DEVICES

CHAPTER 19. USING THE MOUNT COMMAND
    19.1. LISTING CURRENTLY MOUNTED FILE SYSTEMS
    19.2. MOUNTING A FILE SYSTEM
    19.3. UNMOUNTING A FILE SYSTEM
    19.4. MOUNT COMMAND REFERENCES

CHAPTER 20. THE VOLUME_KEY FUNCTION
    20.1. VOLUME_KEY COMMANDS
    20.2. USING VOLUME_KEY AS AN INDIVIDUAL USER
    20.3. USING VOLUME_KEY IN A LARGER ORGANIZATION
    20.4. VOLUME_KEY REFERENCES

CHAPTER 21. SOLID-STATE DISK DEPLOYMENT GUIDELINES
    Deployment Considerations
    Performance Tuning Considerations

CHAPTER 22. WRITE BARRIERS
    22.1. IMPORTANCE OF WRITE BARRIERS
    22.2. ENABLING AND DISABLING WRITE BARRIERS
    22.3. WRITE BARRIER CONSIDERATIONS

CHAPTER 23. STORAGE I/O ALIGNMENT AND SIZE
    23.1. PARAMETERS FOR STORAGE ACCESS
    23.2. USERSPACE ACCESS
    23.3. I/O STANDARDS
    23.4. STACKING I/O PARAMETERS
    23.5. LOGICAL VOLUME MANAGER
    23.6. PARTITION AND FILE SYSTEM TOOLS

CHAPTER 24. SETTING UP A REMOTE DISKLESS SYSTEM
    24.1. CONFIGURING A TFTP SERVICE FOR DISKLESS CLIENTS
    24.2. CONFIGURING DHCP FOR DISKLESS CLIENTS
    24.3. CONFIGURING AN EXPORTED FILE SYSTEM FOR DISKLESS CLIENTS

CHAPTER 25. ONLINE STORAGE MANAGEMENT
    25.1. TARGET SETUP
    25.2. CREATING AN ISCSI INITIATOR
    25.3. FIBRE CHANNEL
    25.4. CONFIGURING A FIBRE CHANNEL OVER ETHERNET INTERFACE
    25.5. CONFIGURING AN FCOE INTERFACE TO AUTOMATICALLY MOUNT AT BOOT
    25.6. ISCSI
    25.7. PERSISTENT NAMING
    25.8. REMOVING A STORAGE DEVICE
    25.9. REMOVING A PATH TO A STORAGE DEVICE
    25.10. ADDING A STORAGE DEVICE OR PATH
    25.11. SCANNING STORAGE INTERCONNECTS
    25.12. ISCSI DISCOVERY CONFIGURATION
    25.13. CONFIGURING ISCSI OFFLOAD AND INTERFACE BINDING
    25.14. SCANNING ISCSI INTERCONNECTS
    25.15. LOGGING IN TO AN ISCSI TARGET
    25.16. RESIZING AN ONLINE LOGICAL UNIT
    25.17. ADDING/REMOVING A LOGICAL UNIT THROUGH RESCAN-SCSI-BUS.SH
    25.18. MODIFYING LINK LOSS BEHAVIOR
    25.19. CONTROLLING THE SCSI COMMAND TIMER AND DEVICE STATUS
    25.20. TROUBLESHOOTING ONLINE STORAGE CONFIGURATION
    25.21. CONFIGURING MAXIMUM TIME FOR ERROR RECOVERY WITH EH_DEADLINE

CHAPTER 26. DEVICE MAPPER MULTIPATHING AND VIRTUAL STORAGE
    26.1. VIRTUAL STORAGE
    26.2. DM-MULTIPATH

CHAPTER 27. EXTERNAL ARRAY MANAGEMENT (LIBSTORAGEMGMT)
    27.1. INTRODUCTION TO LIBSTORAGEMGMT
    27.2. LIBSTORAGEMGMT TERMINOLOGY
    27.3. INSTALLING LIBSTORAGEMGMT
    27.4. USING LIBSTORAGEMGMT

CHAPTER 28. PERSISTENT MEMORY: NVDIMMS
    NVDIMMs Interleaving
    Persistent Memory Access Modes
    28.1. CONFIGURING PERSISTENT MEMORY WITH NDCTL
    28.2. CONFIGURING PERSISTENT MEMORY FOR USE AS A BLOCK DEVICE (LEGACY MODE)
    28.3. CONFIGURING PERSISTENT MEMORY FOR FILE SYSTEM DIRECT ACCESS (DAX)
    28.4. CONFIGURING PERSISTENT MEMORY FOR USE IN DEVICE DAX MODE
    28.5. TROUBLESHOOTING

PART III. DATA DEDUPLICATION AND COMPRESSION WITH VDO

CHAPTER 29. VDO INTEGRATION
    29.1. THEORETICAL OVERVIEW OF VDO
    29.2. SYSTEM REQUIREMENTS
    29.3. GETTING STARTED WITH VDO
    29.4. ADMINISTERING VDO
    29.5. DEPLOYMENT SCENARIOS
    29.6. TUNING VDO
    29.7. VDO COMMANDS
    29.8. STATISTICS FILES IN /SYS

CHAPTER 30. VDO EVALUATION
    30.1. INTRODUCTION
    30.2. TEST ENVIRONMENT PREPARATIONS
    30.3. DATA EFFICIENCY TESTING PROCEDURES
    30.4. PERFORMANCE TESTING PROCEDURES
    30.5. ISSUE REPORTING
    30.6. CONCLUSION
. . . . . . . . . . A.
APPENDIX . . .RED
. . . .HAT
. . . . CUSTOMER
. . . . . . . . . . .PORTAL
. . . . . . . .LABS
. . . . . RELEVANT
. . . . . . . . . . TO
. . . STORAGE
. . . . . . . . . .ADMINISTRATION
. . . . . . . . . . . . . . . . . . . . . . .324
............
SCSI DECODER 324
FILE SYSTEM LAYOUT CALCULATOR 324
LVM RAID CALCULATOR 324
ISCSI HELPER 324

4
Table of Contents

SAMBA CONFIGURATION HELPER 324


MULTIPATH HELPER 324
NFS HELPER 325
MULTIPATH CONFIGURATION VISUALIZER 325
RHEL BACKUP AND RESTORE ASSISTANT 325

. . . . . . . . . . B.
APPENDIX . . .REVISION
. . . . . . . . .HISTORY
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .326
............

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .327
INDEX ............

5
Storage Administration Guide


CHAPTER 1. OVERVIEW
The Storage Administration Guide contains extensive information on supported file systems and data
storage features in Red Hat Enterprise Linux 7. This book is intended as a quick reference for
administrators managing single-node (that is, non-clustered) storage solutions.

The Storage Administration Guide is split into the following parts: File Systems, Storage Administration,
and Data Deduplication and Compression with VDO.

The File Systems part details the various file systems Red Hat Enterprise Linux 7 supports. It describes
them and explains how best to utilize them.

The Storage Administration part details the various tools and storage administration tasks Red Hat
Enterprise Linux 7 supports. It describes them and explains how best to utilize them.

The Data Deduplication and Compression with VDO part describes the Virtual Data Optimizer (VDO). It
explains how to use VDO to reduce your storage requirements.

1.1. NEW FEATURES AND ENHANCEMENTS IN RED HAT ENTERPRISE LINUX 7
Red Hat Enterprise Linux 7 features the following file system enhancements:

eCryptfs not included


As of Red Hat Enterprise Linux 7, eCryptfs is not included. For more information on encrypting file
systems, see Red Hat's Security Guide.

System Storage Manager


Red Hat Enterprise Linux 7 includes a new application called System Storage Manager which provides a
command-line interface to manage various storage technologies. For more information, see Chapter 16,
System Storage Manager (SSM).

XFS Is the Default File System


As of Red Hat Enterprise Linux 7, XFS is the default file system. For more information about the XFS file
system, see Chapter 3, The XFS File System.

File System Restructure


Red Hat Enterprise Linux 7 introduces a new file system structure. The directories /bin, /sbin, /lib,
and /lib64 are now nested under /usr.

Snapper
Red Hat Enterprise Linux 7 introduces a new tool called Snapper that allows for the easy creation and
management of snapshots for LVM and Btrfs. For more information, see Chapter 14, Creating and
Maintaining Snapshots with Snapper.

Btrfs (Technology Preview)


NOTE

Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has
been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a
future major release of Red Hat Enterprise Linux.

For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4
Release Notes.

Btrfs is a local file system that aims to provide better performance and scalability, including integrated
LVM operations. This file system is not fully supported by Red Hat and as such is a technology preview.
For more information on Btrfs, see Chapter 6, Btrfs (Technology Preview).

NFSv2 No Longer Supported


As of Red Hat Enterprise Linux 7, NFSv2 is no longer supported.


PART I. FILE SYSTEMS


The File Systems section provides information on the file system structure and maintenance, the Btrfs
Technology Preview, and file systems that Red Hat fully supports: ext3, ext4, GFS2, XFS, NFS, and FS-
Cache.

NOTE

Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has
been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a
future major release of Red Hat Enterprise Linux.

For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4
Release Notes.

For an overview of Red Hat Enterprise Linux file systems and storage limits, see Red Hat
Enterprise Linux technology capabilities and limits at Red Hat Knowledgebase.

XFS is the default file system in Red Hat Enterprise Linux 7, and Red Hat recommends using XFS unless
you have a strong reason to use another file system. For general information on
common file systems and their properties, see the following Red Hat Knowledgebase article: How to
Choose your Red Hat Enterprise Linux File System.


CHAPTER 2. FILE SYSTEM STRUCTURE AND MAINTENANCE


The file system structure is the most basic level of organization in an operating system. The way an
operating system interacts with its users, applications, and security model nearly always depends on
how the operating system organizes files on storage devices. Providing a common file system structure
ensures users and programs can access and write files.

File systems break files down into two logical categories:

Shareable and unsharable files


Shareable files can be accessed locally and by remote hosts. Unsharable files are only available
locally.

Variable and static files


Variable files, such as documents, can be changed at any time. Static files, such as binaries, do not
change without an action from the system administrator.

Categorizing files in this manner helps correlate the function of each file with the permissions assigned
to the directories which hold them. How the operating system and its users interact with a file determines
the directory in which it is placed, whether that directory is mounted with read-only or read and write
permissions, and the level of access each user has to that file. The top level of this organization is
crucial: access to the underlying directories can be restricted, but security problems can arise if access
rules do not adhere to a rigid structure from the top level down.

2.1. OVERVIEW OF FILESYSTEM HIERARCHY STANDARD (FHS)


Red Hat Enterprise Linux uses the Filesystem Hierarchy Standard (FHS) file system structure, which
defines the names, locations, and permissions for many file types and directories.

The FHS document is the authoritative reference to any FHS-compliant file system, but the standard
leaves many areas undefined or extensible. This section is an overview of the standard and a description
of the parts of the file system not covered by the standard.

The two most important elements of FHS compliance are:

Compatibility with other FHS-compliant systems

The ability to mount a /usr/ partition as read-only. This is crucial, since /usr/ contains
common executables and should not be changed by users. In addition, since /usr/ is mounted
as read-only, it should be mountable from the CD-ROM drive or from another machine via a
read-only NFS mount.

2.1.1. FHS Organization


The directories and files noted here are a small subset of those specified by the FHS document. For the
most complete information, see the latest FHS documentation at
http://refspecs.linuxfoundation.org/FHS_3.0/fhs-3.0.pdf; the file-hierarchy(7) man page also provides an
overview.

NOTE

The directories that are available depend on what is installed on any given system. The
following lists are only an example of what may be found.


2.1.1.1. Gathering File System Information

df Command

The df command reports the system's disk space usage. Its output looks similar to the following:

Example 2.1. df Command Output

Filesystem                        1K-blocks      Used  Available  Use%  Mounted on
/dev/mapper/VolGroup00-LogVol00    11675568   6272120    4810348   57%  /
/dev/sda1                            100691      9281      86211   10%  /boot
none                                 322856         0     322856    0%  /dev/shm

By default, df shows the partition size in 1 kilobyte blocks and the amount of used and available disk
space in kilobytes. To view the information in megabytes and gigabytes, use the command df -h. The
-h argument stands for "human-readable" format. The output for df -h looks similar to the following:

Example 2.2. df -h Command Output

Filesystem                        Size  Used  Avail  Use%  Mounted on
/dev/mapper/VolGroup00-LogVol00    12G  6.0G   4.6G   57%  /
/dev/sda1                          99M  9.1M    85M   10%  /boot
none                              316M     0   316M    0%  /dev/shm

NOTE

In the given examples, the mounted partition /dev/shm represents the system's virtual
memory file system.

du Command

The du command displays the estimated amount of space being used by files in a directory, displaying
the disk usage of each subdirectory. The last line in the output of du shows the total disk usage of the
directory. To see only the total disk usage of a directory in human-readable format, use du -hs. For
more options, see man du.
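
For example, to see the per-subdirectory usage and then only the total for a directory in human-readable
form (the /var/log path here is only an illustration; actual sizes depend on the system):

# du /var/log
# du -hs /var/log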

Gnome System Monitor

To view the system's partitions and disk space usage in a graphical format, use the Gnome System
Monitor by clicking on Applications → System Tools → System Monitor or using the command
gnome-system-monitor. Select the File Systems tab to view the system's partitions. The following
figure illustrates the File Systems tab.


Figure 2.1. File Systems Tab in GNOME System Monitor

2.1.1.2. The /boot/ Directory

The /boot/ directory contains static files required to boot the system, for example, the Linux kernel.
These files are essential for the system to boot properly.


WARNING

Do not remove the /boot/ directory. Doing so renders the system unbootable.

2.1.1.3. The /dev/ Directory

The /dev/ directory contains device nodes that represent the following device types:

devices attached to the system;

virtual devices provided by the kernel.

These device nodes are essential for the system to function properly. The udevd daemon creates and
removes device nodes in /dev/ as needed.

Devices in the /dev/ directory and subdirectories are defined as either character (providing only a serial
stream of input and output, for example, mouse or keyboard) or block (accessible randomly, such as a
hard drive or a floppy drive). If GNOME or KDE is installed, some storage devices are automatically
detected when connected (such as with USB) or inserted (such as a CD or DVD drive), and a pop-up
window displaying the contents appears.

Table 2.1. Examples of Common Files in the /dev Directory

File          Description

/dev/hda      The master device on the primary IDE channel.

/dev/hdb      The slave device on the primary IDE channel.

/dev/tty0     The first virtual console.

/dev/tty1     The second virtual console.

/dev/sda      The first device on the primary SCSI or SATA channel.

/dev/lp0      The first parallel port.

A valid block device can be one of two types of entries:

Mapped device
A logical volume in a volume group, for example, /dev/mapper/VolGroup00-LogVol02.

Static device
A traditional storage volume, for example, /dev/sdbX, where sdb is a storage device name and X is
the partition number. /dev/sdbX can also be /dev/disk/by-id/WWID, or /dev/disk/by-
uuid/UUID. For more information, see Section 25.7, “Persistent Naming”.
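
To see how these persistent names map to the kernel device names on a particular system, the symbolic
links under /dev/disk/ can be listed; for example (the entries shown depend entirely on the hardware
present):

# ls -l /dev/disk/by-uuid/
# ls -l /dev/disk/by-id/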

2.1.1.4. The /etc/ Directory

The /etc/ directory is reserved for configuration files that are local to the machine. It should not contain
any binaries; if there are any binaries, move them to /usr/bin/ or /usr/sbin/.

For example, the /etc/skel/ directory stores "skeleton" user files, which are used to populate a home
directory when a user is first created. Applications also store their configuration files in this directory and
may reference them when executed. The /etc/exports file controls which file systems export to
remote hosts.

2.1.1.5. The /mnt/ Directory

The /mnt/ directory is reserved for temporarily mounted file systems, such as NFS file system mounts.
For all removable storage media, use the /media/ directory. Automatically detected removable media is
mounted in the /media directory.


IMPORTANT

The /mnt directory must not be used by installation programs.

2.1.1.6. The /opt/ Directory

The /opt/ directory is normally reserved for software and add-on packages that are not part of the
default installation. A package that installs to /opt/ creates a directory bearing its name, for example,
/opt/packagename/. In most cases, such packages follow a predictable subdirectory structure; most
store their binaries in /opt/packagename/bin/ and their man pages in /opt/packagename/man/.

2.1.1.7. The /proc/ Directory

The /proc/ directory contains special files that either extract information from the kernel or send
information to it. Examples of such information include system memory, CPU information, and hardware
configuration. For more information about /proc/, see Section 2.3, “The /proc Virtual File System”.

2.1.1.8. The /srv/ Directory

The /srv/ directory contains site-specific data served by a Red Hat Enterprise Linux system. This
directory gives users the location of data files for a particular service, such as FTP, WWW, or CVS. Data
that only pertains to a specific user should go in the /home/ directory.

2.1.1.9. The /sys/ Directory

The /sys/ directory utilizes the new sysfs virtual file system specific to the kernel. With the increased
support for hot plug hardware devices in the kernel, the /sys/ directory contains information similar to
that held by /proc/, but displays a hierarchical view of device information specific to hot plug devices.

2.1.1.10. The /usr/ Directory

The /usr/ directory is for files that can be shared across multiple machines. The /usr/ directory is
often on its own partition and is mounted read-only. At a minimum, /usr/ should contain the following
subdirectories:

/usr/bin
This directory is used for binaries.

/usr/etc
This directory is used for system-wide configuration files.

/usr/games
This directory stores games.

/usr/include
This directory is used for C header files.

/usr/kerberos
This directory is used for Kerberos-related binaries and files.


/usr/lib
This directory is used for object files and libraries that are not designed to be directly utilized by shell
scripts or users.

As of Red Hat Enterprise Linux 7.0, the /lib/ directory has been merged with /usr/lib. Now it
also contains libraries needed to execute the binaries in /usr/bin/ and /usr/sbin/. These
shared library images are used to boot the system or execute commands within the root file system.

/usr/libexec
This directory contains small helper programs called by other programs.

/usr/sbin
As of Red Hat Enterprise Linux 7.0, /sbin has been moved to /usr/sbin. This means that it
contains all system administration binaries, including those essential for booting, restoring,
recovering, or repairing the system. The binaries in /usr/sbin/ require root privileges to use.

/usr/share
This directory stores files that are not architecture-specific.

/usr/src
This directory stores source code.

/usr/tmp linked to /var/tmp


This directory stores temporary files.

The /usr/ directory should also contain a /local/ subdirectory. As per the FHS, this subdirectory is
used by the system administrator when installing software locally, and should be safe from being
overwritten during system updates. The /usr/local directory has a structure similar to /usr/, and
contains the following subdirectories:

/usr/local/bin

/usr/local/etc

/usr/local/games

/usr/local/include

/usr/local/lib

/usr/local/libexec

/usr/local/sbin

/usr/local/share

/usr/local/src

Red Hat Enterprise Linux's usage of /usr/local/ differs slightly from the FHS. The FHS states that
/usr/local/ should be used to store software that should remain safe from system software
upgrades. Since the RPM Package Manager can perform software upgrades safely, it is not necessary
to protect files by storing them in /usr/local/.

Instead, Red Hat Enterprise Linux uses /usr/local/ for software local to the machine. For instance, if
the /usr/ directory is mounted as a read-only NFS share from a remote host, it is still possible to install
a package or program under the /usr/local/ directory.

2.1.1.11. The /var/ Directory

Since the FHS requires Linux to mount /usr/ as read-only, any programs that write log files or need
spool/ or lock/ directories should write them to the /var/ directory. The FHS states /var/ is for
variable data, which includes spool directories and files, logging data, transient and temporary files.

Following are some of the directories found within the /var/ directory:

/var/account/

/var/arpwatch/

/var/cache/

/var/crash/

/var/db/

/var/empty/

/var/ftp/

/var/gdm/

/var/kerberos/

/var/lib/

/var/local/

/var/lock/

/var/log/

/var/mail linked to /var/spool/mail/

/var/mailman/

/var/named/

/var/nis/

/var/opt/

/var/preserve/

/var/run/

/var/spool/


/var/tmp/

/var/tux/

/var/www/

/var/yp/

IMPORTANT

The /var/run/media/user directory contains subdirectories used as mount points for removable media
such as USB storage media, DVDs, CD-ROMs, and Zip disks. Note that previously, the /media/
directory was used for this purpose.

System log files, such as messages and lastlog, go in the /var/log/ directory. The
/var/lib/rpm/ directory contains RPM system databases. Lock files go in the /var/lock/ directory,
usually in directories for the program using the file. The /var/spool/ directory has subdirectories that
store data files for some programs. These subdirectories include:

/var/spool/at/

/var/spool/clientmqueue/

/var/spool/cron/

/var/spool/cups/

/var/spool/exim/

/var/spool/lpd/

/var/spool/mail/

/var/spool/mailman/

/var/spool/mqueue/

/var/spool/news/

/var/spool/postfix/

/var/spool/repackage/

/var/spool/rwho/

/var/spool/samba/

/var/spool/squid/

/var/spool/squirrelmail/

/var/spool/up2date/

/var/spool/uucp/


/var/spool/uucppublic/

/var/spool/vbox/

2.2. SPECIAL RED HAT ENTERPRISE LINUX FILE LOCATIONS


Red Hat Enterprise Linux extends the FHS structure slightly to accommodate special files.

Most files pertaining to RPM are kept in the /var/lib/rpm/ directory. For more information on RPM,
see man rpm.

The /var/cache/yum/ directory contains files used by the Package Updater, including RPM header
information for the system. This location may also be used to temporarily store RPMs downloaded while
updating the system. For more information about the Red Hat Network, see https://rhn.redhat.com/.

Another location specific to Red Hat Enterprise Linux is the /etc/sysconfig/ directory. This directory
stores a variety of configuration information. Many scripts that run at boot time use the files in this
directory.

2.3. THE /PROC VIRTUAL FILE SYSTEM


Unlike most file systems, /proc contains neither text nor binary files. Because it houses virtual files,
/proc is referred to as a virtual file system. These virtual files are typically zero bytes in size, even if
they contain a large amount of information.

The /proc file system is not used for storage. Its main purpose is to provide a file-based interface to
hardware, memory, running processes, and other system components. Real-time information can be
retrieved on many system components by viewing the corresponding /proc file. Some of the files within
/proc can also be manipulated (by both users and applications) to configure the kernel.

The following /proc files are relevant in managing and monitoring system storage:

/proc/devices
Displays various character and block devices that are currently configured.

/proc/filesystems
Lists all file system types currently supported by the kernel.

/proc/mdstat
Contains current information on multiple-disk or RAID configurations on the system, if they exist.

/proc/mounts
Lists all mounts currently used by the system.

/proc/partitions
Contains partition block allocation information.
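
These files can be inspected with standard tools such as cat; for example (output varies from system to
system):

# cat /proc/filesystems
# cat /proc/partitions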

For more information about the /proc file system, see the Red Hat Enterprise Linux 7 Deployment
Guide.


2.4. DISCARD UNUSED BLOCKS


Batch discard and online discard operations are features of mounted file systems that discard blocks not
in use by the file system. They are useful for both solid-state drives and thinly-provisioned storage.

Batch discard operations are run explicitly by the user with the fstrim command. This
command discards all unused blocks in a file system that match the user's criteria.

Online discard operations are specified at mount time, either with the -o discard option as
part of a mount command or with the discard option in the /etc/fstab file. They run in real
time without user intervention. Online discard operations only discard blocks that are
transitioning from used to free.
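
As an illustration, online discard can be enabled either on the command line or through /etc/fstab; the
device, mount point, and file system type below are placeholders:

# mount -o discard /dev/sdb1 /mnt/data

/dev/sdb1    /mnt/data    xfs    defaults,discard    0 0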

Both operation types are supported for use with ext4 file systems as of Red Hat Enterprise Linux 6.2 and
with XFS file systems as of Red Hat Enterprise Linux 6.4. Also, the block device underlying the
file system must support physical discard operations. Physical discard operations are supported if the
value stored in the /sys/block/device/queue/discard_max_bytes file is not zero.
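
For example, to check whether a particular device advertises discard support (sdb is a placeholder device
name; a non-zero value indicates support):

# cat /sys/block/sdb/queue/discard_max_bytes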

If you are executing the fstrim command on:

a device that does not support discard operations, or

a logical device (LVM or MD) comprised of multiple devices, where any one of the devices does
not support discard operations

the following message will be displayed:

fstrim -v /mnt/non_discard
fstrim: /mnt/non_discard: the discard operation is not supported

NOTE

The mount command allows you to mount a device that does not support discard
operations with the -o discard option.

Red Hat recommends batch discard operations unless the system's workload is such that batch discard
is not feasible, or online discard operations are necessary to maintain performance.

For more information, see the fstrim(8) and mount(8) man pages.


CHAPTER 3. THE XFS FILE SYSTEM


XFS is a highly scalable, high-performance file system which was originally designed at Silicon Graphics,
Inc. XFS is the default file system for Red Hat Enterprise Linux 7.

Main Features of XFS

XFS supports metadata journaling, which facilitates quicker crash recovery.

The XFS file system can be defragmented and enlarged while mounted and active.

In addition, Red Hat Enterprise Linux 7 supports backup and restore utilities specific to XFS.

Allocation Features
XFS features the following allocation schemes:

Extent-based allocation

Stripe-aware allocation policies

Delayed allocation

Space pre-allocation

Delayed allocation and other performance optimizations affect XFS the same way that they do ext4.
Namely, a program's writes to an XFS file system are not guaranteed to be on-disk unless the
program issues an fsync() call afterwards.

For more information on the implications of delayed allocation on a file system (ext4 and XFS), see
Allocation Features in Chapter 5, The ext4 File System.

NOTE

Creating or expanding files occasionally fails with an unexpected ENOSPC write failure even though the
disk space appears to be sufficient. This is due to XFS's performance-oriented design. In practice, it
does not become a problem since it only occurs if the remaining space is only a few blocks.

Other XFS Features


The XFS file system also supports the following:

Extended attributes (xattr)


This allows the system to associate several additional name/value pairs per file. It is enabled by
default.

Quota journaling
This avoids the need for lengthy quota consistency checks after a crash.

Project/directory quotas
This allows quota restrictions over a directory tree.

Subsecond timestamps

This allows timestamps to go to the subsecond.

Default atime behavior is relatime


Relatime is on by default for XFS. It has almost no overhead compared to noatime while still
maintaining sane atime values.

3.1. CREATING AN XFS FILE SYSTEM


Prerequisites
Create or reuse a partition on your disk. For information on creating MBR or GPT partitions, see
Chapter 13, Partitions.

Alternatively, use an LVM or MD volume or a similar layer below XFS.

Procedure

Procedure 3.1. Creating an XFS File System

To create an XFS file system, use the following command:

# mkfs.xfs block_device

Replace block_device with the path to a partition or a logical volume. For example,
/dev/sdb1, /dev/disk/by-uuid/05e99ec8-def1-4a5e-8a9d-5945339ceb2a, or
/dev/my-volgroup/my-lv.

In general, the default options are optimal for common use.

When using mkfs.xfs on a block device containing an existing file system, add the -f
option to overwrite that file system.

Example 3.1. mkfs.xfs Command Output

Following is a sample output of the mkfs.xfs command:

meta-data=/dev/device            isize=256    agcount=4, agsize=3277258 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=13109032, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=6400, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


NOTE

After an XFS file system is created, its size cannot be reduced. However, it can still be
enlarged using the xfs_growfs command. For more information, see Section 3.4,
“Increasing the Size of an XFS File System”).

Striped Block Devices


For striped block devices (for example, RAID5 arrays), the stripe geometry can be specified at the time of
file system creation. Using proper stripe geometry greatly enhances the performance of an XFS
filesystem.

When creating filesystems on LVM or MD volumes, mkfs.xfs chooses an optimal geometry. This may
also be true on some hardware RAIDs that export geometry information to the operating system.

If the device exports stripe geometry information, the mkfs utility (for ext3, ext4, and xfs) automatically
uses this geometry. If the mkfs utility does not detect the stripe geometry even though the storage does,
in fact, have one, you can specify the geometry manually when creating the file system using the
following options:

su=value
Specifies a stripe unit or RAID chunk size. The value must be specified in bytes, with an optional k,
m, or g suffix.

sw=value
Specifies the number of data disks in a RAID device, or the number of stripe units in the stripe.

The following example specifies a chunk size of 64k on a RAID device containing 4 stripe units:

# mkfs.xfs -d su=64k,sw=4 /dev/block_device

Additional Resources
For more information about creating XFS file systems, see:

The mkfs.xfs(8) man page

The Red Hat Enterprise Linux Performance Tuning Guide, chapter Tuning XFS

3.2. MOUNTING AN XFS FILE SYSTEM


An XFS file system can be mounted with no extra options, for example:

# mount /dev/device /mount/point

The default for Red Hat Enterprise Linux 7 is inode64.

NOTE

Unlike mke2fs, mkfs.xfs does not utilize a configuration file; all options are specified on
the command line.

Write Barriers


By default, XFS uses write barriers to ensure file system integrity even when power is lost to a device
with write caches enabled. For devices without write caches, or with battery-backed write caches,
disable the barriers by using the nobarrier option:

# mount -o nobarrier /dev/device /mount/point

For more information about write barriers, see Chapter 22, Write Barriers.

Direct Access Technology Preview


Since Red Hat Enterprise Linux 7.3, Direct Access (DAX) is available as a Technology Preview on
the ext4 and XFS file systems. It is a means for an application to directly map persistent memory into its
address space. To use DAX, a system must have some form of persistent memory available, usually in
the form of one or more Non-Volatile Dual Inline Memory Modules (NVDIMMs), and a file system that
supports DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax
mount option. Then, an mmap of a file on the dax-mounted file system results in a direct mapping of
storage into the application's address space.
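
A minimal sketch of a DAX mount, assuming an NVDIMM exposed as /dev/pmem0 with an XFS file
system already created on it (the device name and mount point are illustrative):

# mount -o dax /dev/pmem0 /mnt/pmem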

3.3. XFS QUOTA MANAGEMENT


The XFS quota subsystem manages limits on disk space (blocks) and file (inode) usage. XFS quotas
control or report on usage of these items on a user, group, or directory or project level. Also, note that
while user, group, and directory or project quotas are enabled independently, group and project quotas
are mutually exclusive.

When managing on a per-directory or per-project basis, XFS manages the disk usage of directory
hierarchies associated with a specific project. In doing so, XFS recognizes cross-organizational "group"
boundaries between projects. This provides a level of control that is broader than what is available when
managing quotas for users or groups.

XFS quotas are enabled at mount time, with specific mount options. Each mount option can also be
specified as noenforce; this allows usage reporting without enforcing any limits. Valid quota mount
options are:

uquota/uqnoenforce: User quotas

gquota/gqnoenforce: Group quotas

pquota/pqnoenforce: Project quota
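
For example, to enable user quotas with enforcement, or to account for group usage without enforcing
limits (the device and mount point are placeholders):

# mount -o uquota /dev/sdb1 /home
# mount -o gqnoenforce /dev/sdb1 /home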

Once quotas are enabled, the xfs_quota tool can be used to set limits and report on disk usage. By
default, xfs_quota is run interactively, and in basic mode. Basic mode subcommands simply report
usage, and are available to all users. Basic xfs_quota subcommands include:

quota username/userID
Show usage and limits for the given username or numeric userID

df
Shows free and used counts for blocks and inodes.


In contrast, xfs_quota also has an expert mode. The subcommands of this mode allow actual
configuration of limits, and are available only to users with elevated privileges. To use expert mode
subcommands interactively, use the following command:

# xfs_quota -x

Expert mode subcommands include:

report /path
Reports quota information for a specific file system.

limit
Modify quota limits.

For a complete list of subcommands for either basic or expert mode, use the subcommand help.

All subcommands can also be run directly from a command line using the -c option, with -x for expert
subcommands.

Example 3.2. Display a Sample Quota Report

For example, to display a sample quota report for /home (on /dev/blockdevice), use the
command xfs_quota -x -c 'report -h' /home. This displays output similar to the following:

User quota on /home (/dev/blockdevice)
                               Blocks
User ID          Used   Soft   Hard  Warn/Grace
----------  ---------------------------------
root                0      0      0  00 [------]
testuser       103.4G      0      0  00 [------]
...

To set a soft and hard inode count limit of 500 and 700 respectively for user john, whose home
directory is /home/john, use the following command:

# xfs_quota -x -c 'limit isoft=500 ihard=700 john' /home/

In this case, pass the mount point of the mounted XFS file system (here, /home/) as the last argument.

By default, the limit subcommand recognizes targets as users. When configuring the limits for a
group, use the -g option (as in the following example). Similarly, use -p for projects.

Soft and hard block limits can also be configured using bsoft or bhard instead of isoft or ihard.

Example 3.3. Set a Soft and Hard Block Limit

For example, to set a soft and hard block limit of 1000m and 1200m, respectively, to group
accounting on the /target/path file system, use the following command:

# xfs_quota -x -c 'limit -g bsoft=1000m bhard=1200m accounting' /target/path


NOTE

The bsoft and bhard options count by the byte.

IMPORTANT

While real-time blocks (rtbhard/rtbsoft) are described in man xfs_quota as valid units when setting
quotas, the real-time sub-volume is not enabled in this release. As such, the rtbhard and rtbsoft
options are not applicable.

Setting Project Limits


Before configuring limits for project-controlled directories, add them first to /etc/projects. Project
names can be added to /etc/projid to map project IDs to project names. Once a project is added
to /etc/projects, initialize its project directory using the following command:

# xfs_quota -x -c 'project -s projectname' project_path

Quotas for projects with initialized directories can then be configured, with:

# xfs_quota -x -c 'limit -p bsoft=1000m bhard=1200m projectname'
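
For reference, a hedged illustration of the format of these two files (the project name, ID, and directory
path are placeholders): each line in /etc/projects maps a project ID to a directory tree, and each line in
/etc/projid maps a project name to that ID:

# cat /etc/projects
11:/srv/exports/project1

# cat /etc/projid
project1:11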

Generic quota configuration tools (quota, repquota, and edquota for example) may also be used to
manipulate XFS quotas. However, these tools cannot be used with XFS project quotas.

IMPORTANT

Red Hat recommends the use of xfs_quota over all other available tools.

For more information about setting XFS quotas, see man xfs_quota, man projid(5), and man
projects(5).

3.4. INCREASING THE SIZE OF AN XFS FILE SYSTEM


An XFS file system may be grown while mounted using the xfs_growfs command:

# xfs_growfs /mount/point -D size

The -D size option grows the file system to the specified size (expressed in file system blocks).
Without the -D size option, xfs_growfs will grow the file system to the maximum size supported by
the device.

Before growing an XFS file system with -D size, ensure that the underlying block device is of an
appropriate size to hold the file system later. Use the appropriate resizing methods for the affected block
device.
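
For example, when the file system sits on an LVM logical volume, the sequence might look like the
following sketch (the volume group, logical volume, and mount point names are placeholders):

# lvextend -L +10G /dev/myvg/mylv
# xfs_growfs /mnt/data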


NOTE

While XFS file systems can be grown while mounted, their size cannot be reduced at all.

For more information about growing a file system, see man xfs_growfs.

3.5. REPAIRING AN XFS FILE SYSTEM


To repair an XFS file system, use xfs_repair:

# xfs_repair /dev/device

The xfs_repair utility is highly scalable and is designed to repair even very large file systems with
many inodes efficiently. Unlike other Linux file systems, xfs_repair does not run at boot time, even
when an XFS file system was not cleanly unmounted. In the event of an unclean unmount, xfs_repair
simply replays the log at mount time, ensuring a consistent file system.


WARNING

The xfs_repair utility cannot repair an XFS file system with a dirty log. To clear
the log, mount and unmount the XFS file system. If the log is corrupt and cannot be
replayed, use the -L option ("force log zeroing") to clear the log, that is,
xfs_repair -L /dev/device. Be aware that this may result in further
corruption or data loss.

For more information about repairing an XFS file system, see man xfs_repair.

3.6. SUSPENDING AN XFS FILE SYSTEM


To suspend or resume write activity to a file system, use the following command:

# xfs_freeze mount-point

Suspending write activity allows hardware-based device snapshots to be used to capture the file system
in a consistent state.

NOTE

The xfs_freeze utility is provided by the xfsprogs package, which is only available on
x86_64.

To suspend (that is, freeze) an XFS file system, use:

# xfs_freeze -f /mount/point

To unfreeze an XFS file system, use:


# xfs_freeze -u /mount/point

When taking an LVM snapshot, it is not necessary to use xfs_freeze to suspend the file system first.
Rather, the LVM management tools will automatically suspend the XFS file system before taking the
snapshot.

For more information about freezing and unfreezing an XFS file system, see man xfs_freeze.

3.7. BACKING UP AND RESTORING XFS FILE SYSTEMS


XFS file system backup and restoration involve these utilities:

xfsdump for creating the backup

xfsrestore for restoring from backup

3.7.1. Features of XFS Backup and Restoration

Backup

You can use the xfsdump utility to:

Perform backups to regular file images.

Only one backup can be written to a regular file.

Perform backups to tape drives.

The xfsdump utility also allows you to write multiple backups to the same tape. A backup can
span multiple tapes.

To back up multiple file systems to a single tape device, simply write the backup to a tape that
already contains an XFS backup. This appends the new backup to the previous one. By default,
xfsdump never overwrites existing backups.

Create incremental backups.

The xfsdump utility uses dump levels to determine a base backup to which other backups are
relative. Numbers from 0 to 9 refer to increasing dump levels. An incremental backup only backs
up files that have changed since the last dump of a lower level:

To perform a full backup, perform a level 0 dump on the file system.

A level 1 dump is the first incremental backup after a full backup. The next incremental
backup would be level 2, which only backs up files that have changed since the last level 1
dump; and so on, to a maximum of level 9.

Exclude files from a backup using size, subtree, or inode flags to filter them.

Restoration

The xfsrestore utility restores file systems from backups produced by xfsdump. The xfsrestore utility
has two modes:


The simple mode enables users to restore an entire file system from a level 0 dump. This is the
default mode.

The cumulative mode enables file system restoration from an incremental backup: that is, level 1
to level 9.

A unique session ID or session label identifies each backup. Restoring a backup from a tape containing
multiple backups requires its corresponding session ID or label.

To extract, add, or delete specific files from a backup, enter the xfsrestore interactive mode. The
interactive mode provides a set of commands to manipulate the backup files.

3.7.2. Backing Up an XFS File System


This procedure describes how to back up the content of an XFS file system into a file or a tape.

Procedure 3.2. Backing Up an XFS File System

Use the following command to back up an XFS file system:

# xfsdump -l level [-L label] -f backup-destination path-to-xfs-filesystem

Replace level with the dump level of your backup. Use 0 to perform a full backup or 1 to 9 to
perform consequent incremental backups.

Replace backup-destination with the path where you want to store your backup. The
destination can be a regular file, a tape drive, or a remote tape device. For example,
/backup-files/Data.xfsdump for a file or /dev/st0 for a tape drive.

Replace path-to-xfs-filesystem with the mount point of the XFS file system you want to back
up. For example, /mnt/data/. The file system must be mounted.

When backing up multiple file systems and saving them on a single tape device, add a
session label to each backup using the -L label option so that it is easier to identify them
when restoring. Replace label with any name for your backup: for example, backup_data.

Example 3.4. Backing up Multiple XFS File Systems

To back up the content of XFS file systems mounted on the /boot/ and /data/ directories
and save them as files in the /backup-files/ directory:

# xfsdump -l 0 -f /backup-files/boot.xfsdump /boot


# xfsdump -l 0 -f /backup-files/data.xfsdump /data

To back up multiple file systems on a single tape device, add a session label to each backup
using the -L label option:

# xfsdump -l 0 -L "backup_boot" -f /dev/st0 /boot


# xfsdump -l 0 -L "backup_data" -f /dev/st0 /data

Additional Resources

For more information about backing up XFS file systems, see the xfsdump(8) man page.


3.7.3. Restoring an XFS File System from Backup


This procedure describes how to restore the content of an XFS file system from a file or tape backup.

Prerequisites

You need a file or tape backup of XFS file systems, as described in Section 3.7.2, “Backing Up
an XFS File System”.

Procedure 3.3. Restoring an XFS File System from Backup

The command to restore the backup varies depending on whether you are restoring from a full
backup or an incremental one, or are restoring multiple backups from a single tape device:

# xfsrestore [-r] [-S session-id] [-L session-label] [-i] -f backup-location restoration-path

Replace backup-location with the location of the backup. This can be a regular file, a tape
drive, or a remote tape device. For example, /backup-files/Data.xfsdump for a file or
/dev/st0 for a tape drive.

Replace restoration-path with the path to the directory where you want to restore the file
system. For example, /mnt/data/.

To restore a file system from an incremental (level 1 to level 9) backup, add the -r option.

To restore a backup from a tape device that contains multiple backups, specify the backup
using the -S or -L options.

The -S lets you choose a backup by its session ID, while the -L lets you choose by the
session label. To obtain the session ID and session labels, use the xfsrestore -I
command.

Replace session-id with the session ID of the backup. For example, b74a3586-e52e-
4a4a-8775-c3334fa8ea2c. Replace session-label with the session label of the backup.
For example, my_backup_session_label.

To use xfsrestore interactively, use the -i option.

The interactive dialog begins after xfsrestore finishes reading the specified device.
Available commands in the interactive xfsrestore shell include cd, ls, add, delete, and
extract; for a complete list of commands, use the help command.

Example 3.5. Restoring Multiple XFS File Systems

To restore the XFS backup files and save their content into directories under /mnt/:

# xfsrestore -f /backup-files/boot.xfsdump /mnt/boot/


# xfsrestore -f /backup-files/data.xfsdump /mnt/data/

To restore from a tape device containing multiple backups, specify each backup by its session label
or session ID:


# xfsrestore -f /dev/st0 -L "backup_boot" /mnt/boot/


# xfsrestore -f /dev/st0 -S "45e9af35-efd2-4244-87bc-4762e476cbab"
/mnt/data/

Informational Messages When Restoring a Backup from a Tape


When restoring a backup from a tape with backups from multiple file systems, the xfsrestore utility
might issue messages. The messages inform you whether a match of the requested backup has been
found when xfsrestore examines each backup on the tape in sequential order. For example:

xfsrestore: preparing drive


xfsrestore: examining media file 0
xfsrestore: inventory session uuid (8590224e-3c93-469c-a311-fc8f23029b2a)
does not match the media header's session uuid (7eda9f86-f1e9-4dfd-b1d4-
c50467912408)
xfsrestore: examining media file 1
xfsrestore: inventory session uuid (8590224e-3c93-469c-a311-fc8f23029b2a)
does not match the media header's session uuid (7eda9f86-f1e9-4dfd-b1d4-
c50467912408)
[...]

The informational messages keep appearing until the matching backup is found.

Additional Resources

For more information about restoring XFS file systems, see the xfsrestore(8) man page.

3.8. CONFIGURING ERROR BEHAVIOR


When an error occurs during an I/O operation, the XFS driver responds in one of two ways:

Continue retries until either:

the I/O operation succeeds, or

an I/O operation retry count or time limit is exceeded.

Consider the error permanent and halt the system.

XFS currently recognizes the following error conditions for which you can configure the desired behavior
specifically:

EIO: Error while trying to write to the device

ENOSPC: No space left on the device

ENODEV: Device cannot be found

All other possible error conditions, which do not have specific handlers defined, share a single, global
configuration.

You can set the conditions under which XFS deems the errors permanent, both in the maximum number
of retries and the maximum time in seconds. XFS stops retrying when any one of the conditions is met.


There is also an option to immediately cancel the retries when unmounting the file system, regardless of
any other configuration. This allows the unmount operation to succeed even in case of persistent errors.

3.8.1. Configuration Files for Specific and Undefined Conditions


Configuration files controlling error behavior are located in the /sys/fs/xfs/device/error/
directory.

The /sys/fs/xfs/device/error/metadata/ directory contains subdirectories for each specific
error condition:

/sys/fs/xfs/device/error/metadata/EIO/ for the EIO error condition

/sys/fs/xfs/device/error/metadata/ENODEV/ for the ENODEV error condition

/sys/fs/xfs/device/error/metadata/ENOSPC/ for the ENOSPC error condition

Each one then contains the following configuration files:

/sys/fs/xfs/device/error/metadata/condition/max_retries: controls the maximum
number of times that XFS retries the operation.

/sys/fs/xfs/device/error/metadata/condition/retry_timeout_seconds: the time
limit in seconds after which XFS will stop retrying the operation

All other possible error conditions, apart from those described in the previous section, share a common
configuration in these files:

/sys/fs/xfs/device/error/default/max_retries: controls the maximum number of
retries

/sys/fs/xfs/device/error/default/retry_timeout_seconds: controls the time limit
for retrying

3.8.2. Setting File System Behavior for Specific and Undefined Conditions
To set the maximum number of retries, write the desired number to the max_retries file.

For specific conditions:

# echo value > /sys/fs/xfs/device/error/metadata/condition/max_retries

For undefined conditions:

# echo value > /sys/fs/xfs/device/error/default/max_retries

value is a number between -1 and the maximum possible value of int, the C signed integer type. This
is 2147483647 on 64-bit Linux.

To set the time limit, write the desired number of seconds to the retry_timeout_seconds file.

For specific conditions:


# echo value > /sys/fs/xfs/device/error/metadata/condition/retry_timeout_seconds

For undefined conditions:

# echo value > /sys/fs/xfs/device/error/default/retry_timeout_seconds

value is a number between -1 and 86400, which is the number of seconds in a day.

In both the max_retries and retry_timeout_seconds options, -1 means to retry forever and 0 to
stop immediately.

device is the name of the device, as found in the /dev/ directory; for example, sda.
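
For instance, a hedged example that allows up to five retries and a ten-minute time limit for ENOSPC
errors on the device sda (the values are chosen only for illustration):

# echo 5 > /sys/fs/xfs/sda/error/metadata/ENOSPC/max_retries
# echo 600 > /sys/fs/xfs/sda/error/metadata/ENOSPC/retry_timeout_seconds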

NOTE

The default behavior for each error condition is dependent on the error context. Some
errors, like ENODEV, are considered to be fatal and unrecoverable, regardless of the retry
count, so their default value is 0.

3.8.3. Setting Unmount Behavior


If the fail_at_unmount option is set, the file system overrides all other error configurations during
unmount, and immediately unmounts the file system without retrying the I/O operation. This allows the
unmount operation to succeed even in case of persistent errors.

To set the unmount behavior:

# echo value > /sys/fs/xfs/device/error/fail_at_unmount

value is either 1 or 0:

1 means to cancel retrying immediately if an error is found.

0 means to respect the max_retries and retry_timeout_seconds options.

device is the name of the device, as found in the /dev/ directory; for example, sda.

IMPORTANT

The fail_at_unmount option has to be set as desired before attempting to unmount the file system.
After an unmount operation has started, the configuration files and directories may be unavailable.

3.9. OTHER XFS FILE SYSTEM UTILITIES


Red Hat Enterprise Linux 7 also features other utilities for managing XFS file systems:

xfs_fsr


Used to defragment mounted XFS file systems. When invoked with no arguments, xfs_fsr
defragments all regular files in all mounted XFS file systems. This utility also allows users to suspend
a defragmentation at a specified time and resume from where it left off later.

In addition, xfs_fsr also allows the defragmentation of only one file, as in xfs_fsr
/path/to/file. Red Hat advises not to periodically defrag an entire file system because XFS
avoids fragmentation by default. System wide defragmentation could cause the side effect of
fragmentation in free space.

xfs_bmap
Prints the map of disk blocks used by files in an XFS filesystem. This map lists each extent used by a
specified file, as well as regions in the file with no corresponding blocks (that is, holes).

xfs_info
Prints XFS file system information.

xfs_admin
Changes the parameters of an XFS file system. The xfs_admin utility can only modify parameters of
unmounted devices or file systems.

xfs_copy
Copies the contents of an entire XFS file system to one or more targets in parallel.

The following utilities are also useful in debugging and analyzing XFS file systems:

xfs_metadump
Copies XFS file system metadata to a file. Red Hat only supports using the xfs_metadump utility to
copy unmounted file systems or read-only mounted file systems; otherwise, generated dumps could
be corrupted or inconsistent.

xfs_mdrestore
Restores an XFS metadump image (generated using xfs_metadump) to a file system image.

xfs_db
Debugs an XFS file system.

For more information about these utilities, see their respective man pages.

3.10. MIGRATING FROM EXT4 TO XFS


Starting with Red Hat Enterprise Linux 7.0, XFS is the default file system instead of ext4. This section
highlights the differences when using or administering an XFS file system.

The ext4 file system is still fully supported in Red Hat Enterprise Linux 7 and can be selected at
installation. While it is possible to migrate from ext4 to XFS, it is not required.

3.10.1. Differences Between Ext3/4 and XFS


File system repair


Ext3/4 runs e2fsck in userspace at boot time to recover the journal as needed. XFS, by comparison,
performs journal recovery in kernelspace at mount time. An fsck.xfs shell script is provided but
does not perform any useful action as it is only there to satisfy initscript requirements.

When an XFS file system repair or check is requested, use the xfs_repair command. Use the -n
option for a read-only check.

The xfs_repair command will not operate on a file system with a dirty log. To repair such a file
system, a mount and unmount must first be performed to replay the log. If the log is corrupt and cannot
be replayed, the -L option can be used to zero out the log.

For more information on file system repair of XFS file systems, see Section 12.2.2, “XFS”

Metadata error behavior


The ext3/4 file system has configurable behavior when metadata errors are encountered, with the
default being to simply continue. When XFS encounters a metadata error that is not recoverable, it will
shut down the file system and return an EFSCORRUPTED error. The system logs will contain details of
the error encountered and will recommend running xfs_repair if necessary.

Quotas
XFS quotas are not a remountable option. The -o quota option must be specified on the initial
mount for quotas to be in effect.

While the standard tools in the quota package can perform basic quota administrative tasks (tools
such as setquota and repquota), the xfs_quota tool can be used for XFS-specific features, such as
Project Quota administration.

The quotacheck command has no effect on an XFS file system. The first time quota accounting is
turned on XFS does an automatic quotacheck internally. Because XFS quota metadata is a first-
class, journaled metadata object, the quota system will always be consistent until quotas are
manually turned off.

File system resize


The XFS file system has no utility to shrink a file system. XFS file systems can be grown online via the
xfs_growfs command.

Inode numbers
For file systems larger than 1 TB with 256-byte inodes, or larger than 2 TB with 512-byte inodes, XFS
inode numbers might exceed 2^32. Such large inode numbers cause 32-bit stat calls to fail with the
EOVERFLOW return value. The described problem might occur when using the default Red Hat
Enterprise Linux 7 configuration: non-striped with four allocation groups. A custom configuration, for
example file system extension or changing XFS file system parameters, might lead to a different
behavior.

Applications usually handle such larger inode numbers correctly. If needed, mount the XFS file
system with the -o inode32 parameter to enforce inode numbers below 2^32. Note that using
inode32 does not affect inodes that are already allocated with 64-bit numbers.
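
A minimal sketch of such a mount (the device and mount point are placeholders):

# mount -o inode32 /dev/sdb1 /mnt/data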


IMPORTANT

Do not use the inode32 option unless it is required by a specific environment. The
inode32 option changes allocation behavior. As a consequence, the ENOSPC error
might occur if no space is available to allocate inodes in the lower disk blocks.

Speculative preallocation
XFS uses speculative preallocation to allocate blocks past EOF as files are written. This avoids file
fragmentation due to concurrent streaming write workloads on NFS servers. By default, this
preallocation increases with the size of the file and will be apparent in "du" output. If a file with
speculative preallocation is not dirtied for five minutes the preallocation will be discarded. If the inode
is cycled out of cache before that time, then the preallocation will be discarded when the inode is
reclaimed.

If premature ENOSPC problems are seen due to speculative preallocation, a fixed preallocation
amount may be specified with the -o allocsize=amount mount option.
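For instance, a fixed 64 MiB preallocation could be requested at mount time; the device and mount point are hypothetical:

# mount -o allocsize=64m /dev/sdb1 /mnt/xfs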

Fragmentation-related tools
Fragmentation is rarely a significant issue on XFS file systems due to heuristics and behaviors, such
as delayed allocation and speculative preallocation. However, tools exist for measuring file system
fragmentation as well as defragmenting file systems. Their use is not encouraged.

The xfs_db frag command attempts to distill all file system allocations into a single fragmentation
number, expressed as a percentage. The output of the command requires significant expertise to
interpret. For example, a fragmentation factor of 75% means only an average of 4
extents per file. For this reason, the output of xfs_db's frag is not considered useful; more careful
analysis of any fragmentation problems is recommended.
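Should the number still be of interest, it can be obtained with a read-only xfs_db invocation; the device name is hypothetical:

# xfs_db -r -c frag /dev/sdb1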


WARNING

The xfs_fsr command may be used to defragment individual files, or all files
on a file system. The latter is especially not recommended as it may destroy
locality of files and may fragment free space.
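If defragmentation of a specific file is nevertheless required, xfs_fsr can be pointed at that file only (the path below is hypothetical); running xfs_fsr with no arguments would reorganize all mounted XFS file systems, which the warning above discourages:

# xfs_fsr -v /path/to/large_file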

Commands Used with ext3 and ext4 Compared to XFS

The following table compares common commands used with ext3 and ext4 to their XFS-specific
counterparts.

Table 3.1. Common Commands for ext3 and ext4 Compared to XFS

Task                              ext3/4                    XFS

Create a file system              mkfs.ext4 or mkfs.ext3    mkfs.xfs

File system check                 e2fsck                    xfs_repair

Resizing a file system            resize2fs                 xfs_growfs

Save an image of a file system    e2image                   xfs_metadump and xfs_mdrestore

Label or tune a file system       tune2fs                   xfs_admin

Backup a file system              dump and restore          xfsdump and xfsrestore

The following table lists generic tools that function on XFS file systems as well, but the XFS versions
have more specific functionality and as such are recommended.

Table 3.2. Generic Tools for ext4 and XFS

Task              ext4        XFS

Quota             quota       xfs_quota

File mapping      filefrag    xfs_bmap

More information on many of the listed XFS commands is included in Chapter 3, The XFS File System. You
can also consult the manual pages of the listed XFS administration tools for more information.

CHAPTER 4. THE EXT3 FILE SYSTEM


The ext3 file system is essentially an enhanced version of the ext2 file system. These improvements
provide the following advantages:

Availability
After an unexpected power failure or system crash (also called an unclean system shutdown), each
mounted ext2 file system on the machine must be checked for consistency by the e2fsck program.
This is a time-consuming process that can delay system boot time significantly, especially with large
volumes containing a large number of files. During this time, any data on the volumes is unreachable.

It is possible to run fsck -n on a live filesystem. However, it will not make any changes and may
give misleading results if partially written metadata is encountered.

If LVM is used in the stack, another option is to take an LVM snapshot of the filesystem and run fsck
on it instead.
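A minimal sketch of this approach, assuming a volume group named vg0 with a logical volume lv_data and enough free extents for the snapshot (all names hypothetical):

# lvcreate --size 1G --snapshot --name lv_data_snap /dev/vg0/lv_data
# e2fsck -fn /dev/vg0/lv_data_snap
# lvremove /dev/vg0/lv_data_snap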

Finally, there is the option to remount the filesystem as read only. All pending metadata updates (and
writes) are then forced to the disk prior to the remount. This ensures the filesystem is in a consistent
state, provided there is no previous corruption. It is now possible to run fsck -n.

The journaling provided by the ext3 file system means that this sort of file system check is no longer
necessary after an unclean system shutdown. The only time a consistency check occurs using ext3 is
in certain rare hardware failure cases, such as hard drive failures. The time to recover an ext3 file
system after an unclean system shutdown does not depend on the size of the file system or the
number of files; rather, it depends on the size of the journal used to maintain consistency. The default
journal size takes about a second to recover, depending on the speed of the hardware.

NOTE

The only journaling mode in ext3 supported by Red Hat is data=ordered (default).

Data Integrity
The ext3 file system prevents loss of data integrity in the event that an unclean system shutdown
occurs. The ext3 file system allows you to choose the type and level of protection that your data
receives. With regard to the state of the file system, ext3 volumes are configured to keep a high level
of data consistency by default.

Speed
Despite writing some data more than once, ext3 has a higher throughput in most cases than ext2
because ext3's journaling optimizes hard drive head motion. You can choose from three journaling
modes to optimize speed, but doing so means trade-offs with regard to data integrity if the system
were to fail.

NOTE

The only journaling mode in ext3 supported by Red Hat is data=ordered (default).

Easy Transition


It is easy to migrate from ext2 to ext3 and gain the benefits of a robust journaling file system without
reformatting. For more information on performing this task, see Section 4.2, “Converting to an ext3
File System”.

NOTE

Red Hat Enterprise Linux 7 provides a unified extN driver. It does this by disabling the
ext2 and ext3 configurations and instead using ext4.ko for these on-disk formats. This
means that kernel messages always refer to ext4 regardless of the ext file system
used.

4.1. CREATING AN EXT3 FILE SYSTEM


After installation, it is sometimes necessary to create a new ext3 file system. For example, if a new disk
drive is added to the system, you may want to partition the drive and use the ext3 file system.

Prerequisites
Create or reuse a partition on your disk. For information on creating MBR or GPT partitions, see
Chapter 13, Partitions.

Alternatively, use an LVM or MD volume or a similar layer below ext3.

Procedure

Procedure 4.1. Creating an ext3 File System

1. Format the partition or LVM volume with the ext3 file system using the mkfs.ext3 utility:

# mkfs.ext3 block_device

Replace block_device with the path to a partition or a logical volume. For example,
/dev/sdb1, /dev/disk/by-uuid/05e99ec8-def1-4a5e-8a9d-5945339ceb2a, or
/dev/my-volgroup/my-lv.

2. Label the file system using the e2label utility:

# e2label block_device volume_label

Configuring UUID
It is also possible to set a specific UUID for a file system. To specify a UUID when creating a file system,
use the -U option:

# mkfs.ext3 -U UUID device

Replace UUID with the UUID you want to set: for example, 7cd65de3-e0be-41d9-b66d-
96d749c02da7.

Replace device with the path to an ext3 file system to have the UUID added to it: for example,
/dev/sda8.


To change the UUID of an existing file system, see Section 25.7.3.2, “Modifying Persistent Naming
Attributes”.

Additional Resources
The mkfs.ext3(8) man page

The e2label(8) man page

4.2. CONVERTING TO AN EXT3 FILE SYSTEM


The tune2fs command converts an ext2 file system to ext3.

NOTE

To convert ext2 to ext3, always use the e2fsck utility to check your file system before
and after using tune2fs. Before trying to convert ext2 to ext3, back up all file systems in
case any errors occur.

In addition, Red Hat recommends creating a new ext3 file system and migrating data to it,
instead of converting from ext2 to ext3 whenever possible.

To convert an ext2 file system to ext3, log in as root and type the following command in a terminal:

# tune2fs -j block_device

block_device contains the ext2 file system to be converted.

Issue the df command to display mounted file systems.
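Putting the preceding note into practice, a complete conversion might look like the following; /dev/sdb1 is a hypothetical device that is not mounted while it is checked:

# e2fsck -f /dev/sdb1
# tune2fs -j /dev/sdb1
# e2fsck -f /dev/sdb1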

4.3. REVERTING TO AN EXT2 FILE SYSTEM


In order to revert to an ext2 file system, use the following procedure.

For simplicity, the sample commands in this section use the following value for the block device:

/dev/mapper/VolGroup00-LogVol02

Procedure 4.2. Revert from ext3 to ext2

1. Unmount the partition by logging in as root and typing:

# umount /dev/mapper/VolGroup00-LogVol02

2. Change the file system type to ext2 by typing the following command:

# tune2fs -O ^has_journal /dev/mapper/VolGroup00-LogVol02

3. Check the partition for errors by typing the following command:

# e2fsck -y /dev/mapper/VolGroup00-LogVol02


4. Then mount the partition again as an ext2 file system by typing:

# mount -t ext2 /dev/mapper/VolGroup00-LogVol02 /mount/point

Replace /mount/point with the mount point of the partition.

NOTE

If a .journal file exists at the root level of the partition, delete it.

To permanently change the partition to ext2, remember to update the /etc/fstab file, otherwise it will
revert back after booting.

CHAPTER 5. THE EXT4 FILE SYSTEM


The ext4 file system is a scalable extension of the ext3 file system. With Red Hat Enterprise Linux 7, it
can support a maximum individual file size of 16 terabytes, and file systems to a maximum of 50
terabytes, unlike Red Hat Enterprise Linux 6 which only supported file systems up to 16 terabytes. It also
supports an unlimited number of sub-directories (the ext3 file system only supports up to 32,000), though
once the link count exceeds 65,000 it resets to 1 and is no longer increased. The bigalloc feature is not
currently supported.

NOTE

As with ext3, an ext4 volume must be unmounted in order to perform an fsck. For more
information, see Chapter 4, The ext3 File System.

Main Features
The ext4 file system uses extents (as opposed to the traditional block mapping scheme used by ext2
and ext3), which improves performance when using large files and reduces metadata overhead for
large files. In addition, ext4 also labels unallocated block groups and inode table sections
accordingly, which allows them to be skipped during a file system check. This makes for quicker file
system checks, which becomes more beneficial as the file system grows in size.

Allocation Features
The ext4 file system features the following allocation schemes:

Persistent pre-allocation

Delayed allocation

Multi-block allocation

Stripe-aware allocation

Because of delayed allocation and other performance optimizations, ext4's behavior of writing files to
disk is different from ext3. In ext4, when a program writes to the file system, it is not guaranteed to be
on-disk unless the program issues an fsync() call afterwards.

By default, ext3 automatically forces newly created files to disk almost immediately even without
fsync(). This behavior hid bugs in programs that did not use fsync() to ensure that written data
was on-disk. The ext4 file system, on the other hand, often waits several seconds to write out
changes to disk, allowing it to combine and reorder writes for better disk performance than ext3.


WARNING

Unlike ext3, the ext4 file system does not force data to disk on transaction
commit. As such, it takes longer for buffered writes to be flushed to disk. As with
any file system, use data integrity calls such as fsync() to ensure that data is
written to permanent storage.


Other ext4 Features


The ext4 file system also supports the following:

Extended attributes (xattr) — This allows the system to associate several additional name
and value pairs per file.

Quota journaling — This avoids the need for lengthy quota consistency checks after a crash.

NOTE

The only supported journaling mode in ext4 is data=ordered (default).

Subsecond timestamps — This gives timestamps to the subsecond.

5.1. CREATING AN EXT4 FILE SYSTEM


Prerequisites
Create or reuse a partition on your disk. For information on creating MBR or GPT partitions, see
Chapter 13, Partitions.

Alternatively, use an LVM or MD volume or a similar layer below ext4.

Procedure

Procedure 5.1. Creating an ext4 File System

To create an ext4 file system, use the following command:

# mkfs.ext4 block_device

Replace block_device with the path to a partition or a logical volume. For example,
/dev/sdb1, /dev/disk/by-uuid/05e99ec8-def1-4a5e-8a9d-5945339ceb2a, or
/dev/my-volgroup/my-lv.

In general, the default options are optimal for most usage scenarios.

Example 5.1. mkfs.ext4 Command Output

Below is a sample output of this command, which displays the resulting file system geometry and
features:

~]# mkfs.ext4 /dev/sdb1


mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
245280 inodes, 979456 blocks
48972 blocks (5.00%) reserved for the super user
First data block=0


Maximum filesystem blocks=1006632960


30 block groups
32768 blocks per group, 32768 fragments per group
8176 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done


Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

IMPORTANT

It is possible to use tune2fs to enable certain ext4 features on ext3 file systems.
However, using tune2fs in this way has not been fully tested and is therefore not
supported in Red Hat Enterprise Linux 7. As a result, Red Hat cannot guarantee
consistent performance and predictable behavior for ext3 file systems converted or
mounted by using tune2fs.

Striped Block Devices


For striped block devices (for example, RAID5 arrays), the stripe geometry can be specified at the time of
file system creation. Using proper stripe geometry greatly enhances the performance of an ext4 file
system.

When creating file systems on LVM or MD volumes, mkfs.ext4 chooses an optimal geometry. This
may also be true on some hardware RAIDs which export geometry information to the operating system.

To specify stripe geometry, use the -E option of mkfs.ext4 (that is, extended file system options) with
the following sub-options:

stride=value
Specifies the RAID chunk size.

stripe-width=value
Specifies the number of data disks in a RAID device, or the number of stripe units in the stripe.

For both sub-options, value must be specified in file system block units. For example, to create a file
system with a 64k stride (that is, 16 x 4096) on a 4k-block file system, use the following command:

# mkfs.ext4 -E stride=16,stripe-width=64 /dev/block_device

Configuring UUID
It is also possible to set a specific UUID for a file system. To specify a UUID when creating a file system,
use the -U option:

# mkfs.ext4 -U UUID device

Replace UUID with the UUID you want to set: for example, 7cd65de3-e0be-41d9-b66d-
96d749c02da7.


Replace device with the path to an ext4 file system to have the UUID added to it: for example,
/dev/sda8.

To change the UUID of an existing file system, see Section 25.7.3.2, “Modifying Persistent Naming
Attributes”.

Additional Resources
For more information about creating ext4 file systems, see:

The mkfs.ext4(8) man page

5.2. MOUNTING AN EXT4 FILE SYSTEM


An ext4 file system can be mounted with no extra options. For example:

# mount /dev/device /mount/point

The ext4 file system also supports several mount options to influence behavior. For example, the acl
parameter enables access control lists, while the user_xattr parameter enables user extended
attributes. To enable both options, use their respective parameters with -o, as in:

# mount -o acl,user_xattr /dev/device /mount/point

As with ext3, the option data_err=abort can be used to abort the journal if an error occurs in file data.

# mount -o data_err=abort /dev/device /mount/point

The tune2fs utility also allows administrators to set default mount options in the file system superblock.
For more information on this, refer to man tune2fs.
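For example, default mount options such as acl and user_xattr can be recorded in the superblock so they apply on every mount; the device name below is hypothetical:

# tune2fs -o acl,user_xattr /dev/sdb1
# dumpe2fs -h /dev/sdb1 | grep 'Default mount options'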

Write Barriers
By default, ext4 uses write barriers to ensure file system integrity even when power is lost to a device
with write caches enabled. For devices without write caches, or with battery-backed write caches,
disable barriers using the nobarrier option, as in:

# mount -o nobarrier /dev/device /mount/point

For more information about write barriers, refer to Chapter 22, Write Barriers.

Direct Access Technology Preview


Starting with Red Hat Enterprise Linux 7.3, Direct Access (DAX) provides, as a Technology Preview
on the ext4 and XFS file systems, a means for an application to directly map persistent memory into its
address space. To use DAX, a system must have some form of persistent memory available, usually in
the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs), and a file system that
supports DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax
mount option. Then, an mmap of a file on the dax-mounted file system results in a direct mapping of
storage into the application's address space.

5.3. RESIZING AN EXT4 FILE SYSTEM


Before growing an ext4 file system, ensure that the underlying block device is of an appropriate size to
hold the file system later. Use the appropriate resizing methods for the affected block device.

An ext4 file system may be grown while mounted using the resize2fs command:

# resize2fs /mount/device size

The resize2fs command can also decrease the size of an unmounted ext4 file system:

# resize2fs /dev/device size

When resizing an ext4 file system, the resize2fs utility reads the size in units of file system block size,
unless a suffix indicating a specific unit is used. The following suffixes indicate specific units:

s — 512 byte sectors

K — kilobytes

M — megabytes

G — gigabytes

NOTE

The size parameter is optional (and often redundant) when expanding. The resize2fs
command automatically expands to fill all available space of the container, usually a logical volume
or partition.
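As an illustration with a hypothetical logical volume whose underlying volume has already been extended:

# resize2fs /dev/vg0/lv_data
# resize2fs /dev/vg0/lv_data 50G

The first command grows the file system to fill its container; the second resizes it to an explicit 50 GiB using the G suffix.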

For more information about resizing an ext4 file system, refer to man resize2fs.

5.4. BACKING UP EXT2, EXT3, OR EXT4 FILE SYSTEMS


This procedure describes how to back up the content of an ext4, ext3, or ext2 file system into a file.

Prerequisites
If the system has been running for a long time, run the e2fsck utility on the partitions before
backup:

# e2fsck /dev/device

Procedure 5.2. Backing up ext2, ext3, or ext4 File Systems

1. Back up configuration information, including the content of the /etc/fstab file and the output of
the fdisk -l command. This is useful for restoring the partitions.

To capture this information, run the sosreport or sysreport utilities. For more information
about sosreport, see the What is a sosreport and how to create one in Red Hat Enterprise
Linux 4.6 and later? Knowledgebase article.

2. Depending on the role of the partition:


If the partition you are backing up is an operating system partition, boot your system into the
rescue mode. See the Booting to Rescue Mode section of the System Administrator's Guide.

When backing up a regular, data partition, unmount it.

Although it is possible to back up a data partition while it is mounted, the results of backing
up a mounted data partition can be unpredictable.

If you need to back up a mounted file system using the dump utility, do so when the file
system is not under a heavy load. The more activity happening on the file system during the
backup, the higher the risk of backup corruption.

3. Use the dump utility to back up the content of the partitions:

# dump -0uf backup-file /dev/device

Replace backup-file with a path to a file where you want to store the backup. Replace device
with the name of the ext4 partition you want to back up. Make sure that you are saving the
backup to a directory mounted on a different partition than the partition you are backing up.

Example 5.2. Backing up Multiple ext4 Partitions

To back up the content of the /dev/sda1, /dev/sda2, and /dev/sda3 partitions into
backup files stored in the /backup-files/ directory, use the following commands:

# dump -0uf /backup-files/sda1.dump /dev/sda1
# dump -0uf /backup-files/sda2.dump /dev/sda2
# dump -0uf /backup-files/sda3.dump /dev/sda3

To do a remote backup, use the ssh utility or configure a password-less ssh login. For more
information on ssh and password-less login, see the Using the ssh Utility and Using Key-based
Authentication sections of the System Administrator's Guide.

For example, when using ssh:

Example 5.3. Performing a Remote Backup Using ssh

# dump -0u -f - /dev/device | ssh root@remoteserver.example.com dd of=backup-file

Note that if using standard redirection, you must pass the -f option separately.

Additional Resources
For more information, see the dump(8) man page.

5.5. RESTORING EXT2, EXT3, OR EXT4 FILE SYSTEMS


This procedure describes how to restore an ext4, ext3, or ext2 file system from a file backup.

Prerequisites


You need a backup of partitions and their metadata, as described in Section 5.4, “Backing up
ext2, ext3, or ext4 File Systems”.

Procedure 5.3. Restoring ext2, ext3, or ext4 File Systems

1. If you are restoring an operating system partition, boot your system into Rescue Mode. See the
Booting to Rescue Mode section of the System Administrator's Guide.

This step is not required for ordinary data partitions.

2. Rebuild the partitions you want to restore by using the fdisk or parted utilities.

If the partitions no longer exist, recreate them. The new partitions must be large enough to
contain the restored data. It is important to get the start and end numbers right; these are the
starting and ending sector numbers of the partitions obtained from the fdisk utility when
backing up.

For more information on modifying partitions, see Chapter 13, Partitions

3. Use the mkfs utility to format the destination partition:

# mkfs.ext4 /dev/device

IMPORTANT

Do not format the partition that stores your backup files.

4. If you created new partitions, re-label all the partitions so they match their entries in the
/etc/fstab file:

# e2label /dev/device label

5. Create temporary mount points and mount the partitions on them:

# mkdir /mnt/device
# mount -t ext4 /dev/device /mnt/device

6. Restore the data from backup on the mounted partition:

# cd /mnt/device
# restore -rf device-backup-file

If you want to restore on a remote machine or restore from a backup file that is stored on a
remote host, you can use the ssh utility. For more information on ssh, see the Using the ssh
Utility section of the System Administrator's Guide.

Note that you need to configure a password-less login for the following commands. For more
information on setting up a password-less ssh login, see the Using Key-based Authentication
section of the System Administrator's Guide.

To restore a partition on a remote machine from a backup file stored on the same machine:

# ssh remote-address "cd /mnt/device && cat backup-file | /usr/sbin/restore -r -f -"

To restore a partition on a remote machine from a backup file stored on a different remote
machine:

# ssh remote-machine-1 "cd /mnt/device && RSH=/usr/bin/ssh /usr/sbin/restore -rf remote-machine-2:backup-file"

7. Reboot:

# systemctl reboot

Example 5.4. Restoring Multiple ext4 Partitions

To restore the /dev/sda1, /dev/sda2, and /dev/sda3 partitions from Example 5.2, “Backing up
Multiple ext4 Partitions”:

1. Rebuild partitions you want to restore by using the fdisk command.

2. Format the destination partitions:

# mkfs.ext4 /dev/sda1
# mkfs.ext4 /dev/sda2
# mkfs.ext4 /dev/sda3

3. Re-label all the partitions so they match the /etc/fstab file:

# e2label /dev/sda1 Boot1
# e2label /dev/sda2 Root
# e2label /dev/sda3 Data

4. Prepare the working directories.

Mount the new partitions:

# mkdir /mnt/sda1
# mount -t ext4 /dev/sda1 /mnt/sda1
# mkdir /mnt/sda2
# mount -t ext4 /dev/sda2 /mnt/sda2
# mkdir /mnt/sda3
# mount -t ext4 /dev/sda3 /mnt/sda3

Mount the partition that contains backup files:

# mkdir /backup-files
# mount -t ext4 /dev/sda6 /backup-files

5. Restore the data from backup to the mounted partitions:

# cd /mnt/sda1
# restore -rf /backup-files/sda1.dump
# cd /mnt/sda2
# restore -rf /backup-files/sda2.dump
# cd /mnt/sda3
# restore -rf /backup-files/sda3.dump

6. Reboot:

# systemctl reboot

Additional Resources
For more information, see the restore(8) man page.

5.6. OTHER EXT4 FILE SYSTEM UTILITIES


Red Hat Enterprise Linux 7 also features other utilities for managing ext4 file systems:

e2fsck
Used to repair an ext4 file system. This tool checks and repairs an ext4 file system more efficiently
than ext3, thanks to updates in the ext4 disk structure.

e2label
Changes the label on an ext4 file system. This tool also works on ext2 and ext3 file systems.

quota
Controls and reports on disk space (blocks) and file (inode) usage by users and groups on an ext4 file
system. For more information on using quota, refer to man quota and Section 17.1, “Configuring
Disk Quotas”.

fsfreeze
To suspend access to a file system, use the command # fsfreeze -f mount-point to freeze it
and # fsfreeze -u mount-point to unfreeze it. This halts access to the file system and creates
a stable image on disk.

NOTE

It is unnecessary to use fsfreeze for device-mapper drives.

For more information see the fsfreeze(8) manpage.

As demonstrated in Section 5.2, “Mounting an ext4 File System”, the tune2fs utility can also adjust
configurable file system parameters for ext2, ext3, and ext4 file systems. In addition, the following tools
are also useful in debugging and analyzing ext4 file systems:

debugfs
Debugs ext2, ext3, or ext4 file systems.

e2image
Saves critical ext2, ext3, or ext4 file system metadata to a file.


For more information about these utilities, refer to their respective man pages.

CHAPTER 6. BTRFS (TECHNOLOGY PREVIEW)

NOTE

Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has
been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a
future major release of Red Hat Enterprise Linux.

For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4
Release Notes.

Btrfs is a next generation Linux file system that offers advanced management, reliability, and scalability
features. It is unique in offering snapshots, compression, and integrated device management.

6.1. CREATING A BTRFS FILE SYSTEM


In order to make a basic btrfs file system, use the following command:

# mkfs.btrfs /dev/device

For more information on creating btrfs file systems with added devices and specifying multi-device
profiles for metadata and data, refer to Section 6.4, “Integrated Volume Management of Multiple
Devices”.

6.2. MOUNTING A BTRFS FILE SYSTEM


To mount any device in the btrfs file system use the following command:

# mount /dev/device /mount-point

Other useful mount options include:

device=/dev/name
Appending this option to the mount command tells btrfs to scan the named device for a btrfs volume.
This is used to ensure the mount will succeed as attempting to mount devices that are not btrfs will
cause the mount to fail.

NOTE

This does not mean all devices will be added to the file system, it only scans them.

max_inline=number
Use this option to set the maximum amount of space (in bytes) that can be used to inline data within a
metadata B-tree leaf. The default is 8192 bytes. For 4k pages it is limited to 3900 bytes due to
additional headers that need to fit into the leaf.

alloc_start=number
Use this option to set where in the disk allocations start.

thread_pool=number


Use this option to assign the number of worker threads allocated.

discard
Use this option to enable discard/TRIM on freed blocks.

noacl
Use this option to disable the use of ACLs.

space_cache
Use this option to store the free space data on disk to make caching a block group faster. This is a
persistent change and is safe to boot into old kernels.

nospace_cache
Use this option to disable the above space_cache.

clear_cache
Use this option to clear all the free space caches during mount. This is a safe option but will trigger
the space cache to be rebuilt. As such, leave the file system mounted in order to let the rebuild
process finish. This mount option is intended to be used once and only after problems are apparent
with the free space.

enospc_debug
This option is used to debug problems with "no space left".

recovery
Use this option to enable autorecovery upon mount.

6.3. RESIZING A BTRFS FILE SYSTEM


It is not possible to resize a btrfs file system but it is possible to resize each of the devices it uses. If there
is only one device in use then this works the same as resizing the file system. If there are multiple
devices in use then they must be manually resized to achieve the desired result.

NOTE

The unit size is not case specific; it accepts both G or g for GiB.

The command does not accept t for terabytes or p for petabytes. It only accepts k, m, and
g.

Enlarging a btrfs File System


To enlarge the file system on a single device, use the command:

# btrfs filesystem resize amount /mount-point

For example:


# btrfs filesystem resize +200M /btrfssingle


Resize '/btrfssingle' of '+200M'

To enlarge a multi-device file system, the device to be enlarged must be specified. First, show all devices
that have a btrfs file system at a specified mount point:

# btrfs filesystem show /mount-point

For example:

# btrfs filesystem show /btrfstest


Label: none uuid: 755b41b7-7a20-4a24-abb3-45fdbed1ab39
Total devices 4 FS bytes used 192.00KiB
devid 1 size 1.00GiB used 224.75MiB path /dev/vdc
devid 2 size 524.00MiB used 204.75MiB path /dev/vdd
devid 3 size 1.00GiB used 8.00MiB path /dev/vde
devid 4 size 1.00GiB used 8.00MiB path /dev/vdf

Btrfs v3.16.2

Then, after identifying the devid of the device to be enlarged, use the following command:

# btrfs filesystem resize devid:amount /mount-point

For example:

# btrfs filesystem resize 2:+200M /btrfstest


Resize '/btrfstest/' of '2:+200M'

NOTE

The amount can also be max instead of a specified amount. This will use all remaining
free space on the device.

Shrinking a btrfs File System


To shrink the file system on a single device, use the command:

# btrfs filesystem resize amount /mount-point

For example:

# btrfs filesystem resize -200M /btrfssingle


Resize '/btrfssingle' of '-200M'

To shrink a multi-device file system, the device to be shrunk must be specified. First, show all devices
that have a btrfs file system at a specified mount point:

# btrfs filesystem show /mount-point

For example:


# btrfs filesystem show /btrfstest


Label: none uuid: 755b41b7-7a20-4a24-abb3-45fdbed1ab39
Total devices 4 FS bytes used 192.00KiB
devid 1 size 1.00GiB used 224.75MiB path /dev/vdc
devid 2 size 524.00MiB used 204.75MiB path /dev/vdd
devid 3 size 1.00GiB used 8.00MiB path /dev/vde
devid 4 size 1.00GiB used 8.00MiB path /dev/vdf

Btrfs v3.16.2

Then, after identifying the devid of the device to be shrunk, use the following command:

# btrfs filesystem resize devid:amount /mount-point

For example:

# btrfs filesystem resize 2:-200M /btrfstest


Resize '/btrfstest' of '2:-200M'

Set the File System Size


To set the file system to a specific size on a single device, use the command:

# btrfs filesystem resize amount /mount-point

For example:

# btrfs filesystem resize 700M /btrfssingle


Resize '/btrfssingle' of '700M'

To set the file system size of a multi-device file system, the device to be changed must be specified.
First, show all devices that have a btrfs file system at the specified mount point:

# btrfs filesystem show /mount-point

For example:

# btrfs filesystem show /btrfstest


Label: none uuid: 755b41b7-7a20-4a24-abb3-45fdbed1ab39
Total devices 4 FS bytes used 192.00KiB
devid 1 size 1.00GiB used 224.75MiB path /dev/vdc
devid 2 size 724.00MiB used 204.75MiB path /dev/vdd
devid 3 size 1.00GiB used 8.00MiB path /dev/vde
devid 4 size 1.00GiB used 8.00MiB path /dev/vdf

Btrfs v3.16.2

Then, after identifying the devid of the device to be changed, use the following command:

# btrfs filesystem resize devid:amount /mount-point

For example:


# btrfs filesystem resize 2:300M /btrfstest


Resize '/btrfstest' of '2:300M'

6.4. INTEGRATED VOLUME MANAGEMENT OF MULTIPLE DEVICES


A btrfs file system can be created on top of many devices, and more devices can be added after the file
system has been created. By default, metadata will be mirrored across two devices and data will be
striped across all devices present, however if only one device is present, metadata will be duplicated on
that device.

6.4.1. Creating a File System with Multiple Devices


The mkfs.btrfs command, as detailed in Section 6.1, “Creating a btrfs File System”, accepts the
options -d for data, and -m for metadata. Valid specifications are:

raid0

raid1

raid10

dup

single

The -m single option instructs that no duplication of metadata is done. This may be desired when
using hardware raid.

NOTE

RAID 10 requires at least four devices to run correctly.

Example 6.1. Creating a RAID 10 btrfs File System

Create a file system across four devices (metadata mirrored, data striped).

# mkfs.btrfs /dev/device1 /dev/device2 /dev/device3 /dev/device4

Stripe the metadata without mirroring.

# mkfs.btrfs -m raid0 /dev/device1 /dev/device2

Use raid10 for both data and metadata.

# mkfs.btrfs -m raid10 -d raid10 /dev/device1 /dev/device2 /dev/device3 /dev/device4

Do not duplicate metadata on a single drive.

# mkfs.btrfs -m single /dev/device


Use the single option to use the full capacity of each drive when the drives are different sizes.

# mkfs.btrfs -d single /dev/device1 /dev/device2 /dev/device3

To add a new device to an already created multi-device file system, use the following command:

# btrfs device add /dev/device1 /mount-point

After rebooting or reloading the btrfs module, use the btrfs device scan command to discover all
multi-device file systems. See Section 6.4.2, “Scanning for btrfs Devices” for more information.

6.4.2. Scanning for btrfs Devices


Use btrfs device scan to scan all block devices under /dev and probe for btrfs volumes. This must
be performed after loading the btrfs module if running with more than one device in a file system.

To scan all devices, use the following command:

# btrfs device scan

To scan a single device, use the following command:

# btrfs device scan /dev/device

6.4.3. Adding New Devices to a btrfs File System


Use the btrfs filesystem show command to list all the btrfs file systems and which devices they
include.

The btrfs device add command is used to add new devices to a mounted file system.

The btrfs filesystem balance command balances (restripes) the allocated extents across all
existing devices.

An example of all these commands together to add a new device is as follows:

Example 6.2. Add a New Device to a btrfs File System

First, create and mount a btrfs file system. Refer to Section 6.1, “Creating a btrfs File System” for
more information on how to create a btrfs file system, and to Section 6.2, “Mounting a btrfs file
system” for more information on how to mount a btrfs file system.

# mkfs.btrfs /dev/device1
# mount /dev/device1 /mount-point

Next, add a second device to the mounted btrfs file system.

# btrfs device add /dev/device2 /mount-point

The metadata and data on these devices are still stored only on /dev/device1. It must now be
balanced to spread across all devices.


# btrfs filesystem balance /mount-point

Balancing a file system will take some time as it reads all of the file system's data and metadata and
rewrites it across the new device.

6.4.4. Converting a btrfs File System


To convert a non-raid file system to a raid, add a device and run a balance filter that changes the chunk
allocation profile.

Example 6.3. Converting a btrfs File System

To convert an existing single device system, /dev/sdb1 in this case, into a two device, raid1 system
in order to protect against a single disk failure, use the following commands:

# mount /dev/sdb1 /mnt


# btrfs device add /dev/sdc1 /mnt
# btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt

IMPORTANT

If the metadata is not converted from the single-device default, it remains as DUP. This
does not guarantee that copies of the block are on separate devices. If data is not
converted it does not have any redundant copies at all.

6.4.5. Removing btrfs Devices


Use the btrfs device delete command to remove an online device. It redistributes any extents in
use to other devices in the file system in order to be safely removed.

Example 6.4. Removing a Device on a btrfs File System

First create and mount a few btrfs file systems.

# mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd /dev/sde


# mount /dev/sdb /mnt

Add some data to the file system.

Finally, remove the required device.

# btrfs device delete /dev/sdc /mnt

6.4.6. Replacing Failed Devices on a btrfs File System


Section 6.4.5, “Removing btrfs Devices” can be used to remove a failed device provided the super block
can still be read. However, if a device is missing or the super block corrupted, the file system will need to
be mounted in a degraded mode:


# mkfs.btrfs -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

If, for example, /dev/sdd is destroyed or removed, use -o degraded to force the mount to ignore
missing devices:

# mount -o degraded /dev/sdb /mnt

Because 'missing' is a special device name, the missing device can then be removed:

# btrfs device delete missing /mnt

The command btrfs device delete missing removes the first device that is described by the file
system metadata but not present when the file system was mounted.

IMPORTANT

It is impossible to go below the minimum number of devices required for the specific raid
layout, even including the missing one. It may be required to add a new device in order to
remove the failed one.

For example, for a raid1 layout with two devices, if a device fails it is required to:

1. mount in degraded mode,

2. add a new device,

3. and, remove the missing device.

6.4.7. Registering a btrfs File System in /etc/fstab

If you do not have an initrd or it does not perform a btrfs device scan, it is possible to mount a multi-
volume btrfs file system by passing all the devices in the file system explicitly to the mount command.

Example 6.5. Example /etc/fstab Entry

An example of a suitable /etc/fstab entry would be:

/dev/sdb /mnt btrfs device=/dev/sdb,device=/dev/sdc,device=/dev/sdd,device=/dev/sde 0 0

Note that using universally unique identifiers (UUIDs) also works and is more stable than using device
paths.

6.5. SSD OPTIMIZATION


The btrfs file system can be optimized for SSDs. There are two ways this can be done.

The first way is that mkfs.btrfs turns off metadata duplication on a single device when
/sys/block/device/queue/rotational is zero for the single specified device. This is equivalent
to specifying -m single on the command line. It can be overridden and duplicate metadata forced by
providing the -m dup option. Duplication is not required because SSD firmware can potentially lose both
copies, so duplicating metadata wastes space and incurs a performance cost.

The second way is through a group of SSD mount options: ssd, nossd, and ssd_spread.

The ssd option does several things:

It allows larger metadata cluster allocation.

It allocates data more sequentially where possible.

It disables btree leaf rewriting to match key and block order.

It commits log fragments without batching multiple processes.

NOTE

The ssd mount option only enables the ssd option. Use the nossd option to disable it.

Some SSDs perform best when reusing block numbers often, while others perform much better when
clustering strictly allocates big chunks of unused space. By default, mount -o ssd will find groupings of
blocks where there are several free blocks that might have allocated blocks mixed in. The command
mount -o ssd_spread ensures there are no allocated blocks mixed in. This improves performance
on lower end SSDs.
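A hedged example of requesting the stricter clustering behavior (device and mount point are hypothetical); measure both this and plain -o ssd against your own workload before settling on either:

# mount -o ssd_spread /dev/sdb1 /mnt/btr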

NOTE

The ssd_spread option enables both the ssd and the ssd_spread options. Use the
nossd option to disable both of these options.

The ssd_spread option is never automatically set if none of the ssd options are provided
and any of the devices are non-rotational.

These options all need to be tested with your specific build to see whether their use improves or reduces
performance, as each combination of SSD firmware and application load is different.

6.6. BTRFS REFERENCES


The man page btrfs(8) covers all important management commands. In particular this includes:

All the subvolume commands for managing snapshots.

The device commands for managing devices.

The scrub, balance, and defragment commands.

The man page mkfs.btrfs(8) contains information on creating a btrfs file system including all the
options regarding it.

The man page btrfsck(8) contains information regarding fsck on btrfs systems.


CHAPTER 7. GLOBAL FILE SYSTEM 2


The Red Hat Global File System 2 (GFS2) is a native file system that interfaces directly with the Linux
kernel file system interface (VFS layer). When implemented as a cluster file system, GFS2 employs
distributed metadata and multiple journals.

GFS2 is based on 64-bit architecture, which can theoretically accommodate an 8 exabyte file system.
However, the current supported maximum size of a GFS2 file system is 100 TB. If a system requires
GFS2 file systems larger than 100 TB, contact your Red Hat service representative.

When determining the size of a file system, consider its recovery needs. Running the fsck command on
a very large file system can take a long time and consume a large amount of memory. Additionally, in the
event of a disk or disk-subsystem failure, recovery time is limited by the speed of backup media.

When configured in a Red Hat Cluster Suite, Red Hat GFS2 nodes can be configured and managed with
Red Hat Cluster Suite configuration and management tools. Red Hat GFS2 then provides data sharing
among GFS2 nodes in a Red Hat cluster, with a single, consistent view of the file system namespace
across the GFS2 nodes. This allows processes on different nodes to share GFS2 files in the same way
that processes on the same node can share files on a local file system, with no discernible difference.
For information about the Red Hat Cluster Suite, see Red Hat's Cluster Administration guide.

A GFS2 file system must be built on a logical volume (created with LVM) that is a linear or mirrored volume.
Logical volumes created with LVM in a Red Hat Cluster Suite are managed with CLVM (a cluster-wide
implementation of LVM), enabled by the CLVM daemon clvmd, and running in a Red Hat Cluster Suite
cluster. The daemon makes it possible to use LVM2 to manage logical volumes across a cluster,
allowing all nodes in the cluster to share the logical volumes. For information on the Logical Volume
Manager, see Red Hat's Logical Volume Manager Administration guide.

The gfs2.ko kernel module implements the GFS2 file system and is loaded on GFS2 cluster nodes.

For comprehensive information on the creation and configuration of GFS2 file systems in clustered and
non-clustered storage, see Red Hat's Global File System 2 guide.

CHAPTER 8. NETWORK FILE SYSTEM (NFS)


A Network File System (NFS) allows remote hosts to mount file systems over a network and interact with
those file systems as though they are mounted locally. This enables system administrators to
consolidate resources onto centralized servers on the network.

This chapter focuses on fundamental NFS concepts and supplemental information.

8.1. INTRODUCTION TO NFS


Currently, there are two major versions of NFS included in Red Hat Enterprise Linux:

NFS version 3 (NFSv3) supports safe asynchronous writes and is more robust at error handling
than the previous NFSv2; it also supports 64-bit file sizes and offsets, allowing clients to access
more than 2 GB of file data.

NFS version 4 (NFSv4) works through firewalls and on the Internet, no longer requires an
rpcbind service, supports ACLs, and utilizes stateful operations.

Red Hat Enterprise Linux fully supports NFS version 4.2 (NFSv4.2) since the Red Hat
Enterprise Linux 7.4 release.

Following are the features of NFSv4.2 in Red Hat Enterprise Linux 7.5:

Server-Side Copy: NFSv4.2 supports copy_file_range() system call, which allows the NFS
client to efficiently copy data without wasting network resources.

Sparse Files: Supports files with one or more holes, which are unallocated or uninitialized data
blocks consisting only of zeroes, to improve storage efficiency. The lseek() operation in NFSv4.2
supports seek_hole() and seek_data(), which allows applications to map out the locations of
holes in a sparse file.

Space Reservation: Permits storage servers to reserve free space, which prevents servers from
running out of space. NFSv4.2 supports the allocate() operation to reserve space, the
deallocate() operation to unreserve space, and the fallocate() operation to preallocate or
deallocate space in a file.

Labeled NFS: It enforces data access rights and enables SELinux labels between a client and a
server for individual files on an NFS file system.

Layout Enhancements: NFSv4.2 provides a new operation, layoutstats(), which the client can
use to notify the metadata server about its communication with the layout.

Versions of Red Hat Enterprise Linux earlier than 7.4 support NFS up to version 4.1.

Following are the features of NFSv4.1:

Enhances performance and security of the network, and also includes client-side support for Parallel
NFS (pNFS).

No longer requires a separate TCP connection for callbacks, which allows an NFS server to
grant delegations even when it cannot contact the client. For example, when NAT or a firewall
interferes.


It provides exactly once semantics (except for reboot operations), preventing a previous issue
whereby certain operations could return an inaccurate result if a reply was lost and the operation
was sent twice.

NFS clients attempt to mount using NFSv4.1 by default, and fall back to NFSv4.0 when the server does
not support NFSv4.1. The mount later falls back to NFSv3 when the server does not support NFSv4.0.

NOTE

NFS version 2 (NFSv2) is no longer supported by Red Hat.

All versions of NFS can use Transmission Control Protocol (TCP) running over an IP network, with
NFSv4 requiring it. NFSv3 can use the User Datagram Protocol (UDP) running over an IP network to
provide a stateless network connection between the client and server.

When using NFSv3 with UDP, the stateless UDP connection (under normal conditions) has less protocol
overhead than TCP. This can translate into better performance on very clean, non-congested networks.
However, because UDP is stateless, if the server goes down unexpectedly, UDP clients continue to
saturate the network with requests for the server. In addition, when a frame is lost with UDP, the entire
RPC request must be retransmitted; with TCP, only the lost frame needs to be resent. For these
reasons, TCP is the preferred protocol when connecting to an NFS server.

The mounting and locking protocols have been incorporated into the NFSv4 protocol. The server also
listens on the well-known TCP port 2049. As such, NFSv4 does not need to interact with rpcbind [1],
lockd, and rpc.statd daemons. The rpc.mountd daemon is still required on the NFS server to set
up the exports, but is not involved in any over-the-wire operations.

NOTE

TCP is the default transport protocol for NFS version 3 under Red Hat Enterprise Linux.
UDP can be used for compatibility purposes as needed, but is not recommended for wide
usage. NFSv4 requires TCP.

All the RPC/NFS daemons have a '-p' command line option that can set the port,
making firewall configuration easier.

After TCP wrappers grant access to the client, the NFS server refers to the /etc/exports configuration
file to determine whether the client is allowed to access any exported file systems. Once verified, all file
and directory operations are available to the user.

IMPORTANT

In order for NFS to work with a default installation of Red Hat Enterprise Linux with a
firewall enabled, configure IPTables with the default TCP port 2049. Without proper
IPTables configuration, NFS will not function properly.

The NFS initialization script and rpc.nfsd process now allow binding to any specified
port during system start up. However, this can be error-prone if the port is unavailable, or
if it conflicts with another daemon.
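As a minimal sketch of such a rule (persistence across reboots and the additional ports needed by NFSv3 side services are not handled here):

# iptables -A INPUT -p tcp --dport 2049 -j ACCEPT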

8.1.1. Required Services


Red Hat Enterprise Linux uses a combination of kernel-level support and daemon processes to provide


NFS file sharing. All NFS versions rely on Remote Procedure Calls (RPC) between clients and servers.
RPC services under Red Hat Enterprise Linux 7 are controlled by the rpcbind service. To share or
mount NFS file systems, the following services work together depending on which version of NFS is
implemented:

NOTE

The portmap service was used to map RPC program numbers to IP address port number
combinations in earlier versions of Red Hat Enterprise Linux. This service is now replaced
by rpcbind in Red Hat Enterprise Linux 7 to enable IPv6 support.

nfs
systemctl start nfs starts the NFS server and the appropriate RPC processes to service
requests for shared NFS file systems.

nfslock
systemctl start nfs-lock activates a mandatory service that starts the appropriate RPC
processes allowing NFS clients to lock files on the server.

rpcbind
rpcbind accepts port reservations from local RPC services. These ports are then made available (or
advertised) so the corresponding remote RPC services can access them. rpcbind responds to
requests for RPC services and sets up connections to the requested RPC service. This is not used
with NFSv4.

The following RPC processes facilitate NFS services:

rpc.mountd
This process is used by an NFS server to process MOUNT requests from NFSv3 clients. It checks that
the requested NFS share is currently exported by the NFS server, and that the client is allowed to
access it. If the mount request is allowed, the rpc.mountd server replies with a Success status and
provides the File-Handle for this NFS share back to the NFS client.

rpc.nfsd
rpc.nfsd allows explicit NFS versions and protocols the server advertises to be defined. It works
with the Linux kernel to meet the dynamic demands of NFS clients, such as providing server threads
each time an NFS client connects. This process corresponds to the nfs service.

lockd
lockd is a kernel thread which runs on both clients and servers. It implements the Network Lock
Manager (NLM) protocol, which allows NFSv3 clients to lock files on the server. It is started
automatically whenever the NFS server is run and whenever an NFS file system is mounted.

rpc.statd
This process implements the Network Status Monitor (NSM) RPC protocol, which notifies NFS clients
when an NFS server is restarted without being gracefully brought down. rpc.statd is started
automatically by the nfslock service, and does not require user configuration. This is not used with
NFSv4.


rpc.rquotad
This process provides user quota information for remote users. rpc.rquotad is started
automatically by the nfs service and does not require user configuration.

rpc.idmapd
rpc.idmapd provides NFSv4 client and server upcalls, which map between on-the-wire NFSv4
names (strings in the form of user@domain) and local UIDs and GIDs. For idmapd to function with
NFSv4, the /etc/idmapd.conf file must be configured. At a minimum, the "Domain" parameter
should be specified, which defines the NFSv4 mapping domain. If the NFSv4 mapping domain is the
same as the DNS domain name, this parameter can be skipped. The client and server must agree on
the NFSv4 mapping domain for ID mapping to function properly.
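A minimal /etc/idmapd.conf sketch, with example.com standing in for your NFSv4 mapping domain:

[General]
Domain = example.com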

NOTE

In Red Hat Enterprise Linux 7, only the NFSv4 server uses rpc.idmapd. The NFSv4
client uses the keyring-based idmapper nfsidmap. nfsidmap is a stand-alone
program that is called by the kernel on-demand to perform ID mapping; it is not a
daemon. Only if there is a problem with nfsidmap does the client fall back to using
rpc.idmapd. More information regarding nfsidmap can be found on the nfsidmap
man page.

8.2. PNFS
Support for Parallel NFS (pNFS) as part of the NFS v4.1 standard is available as of Red Hat
Enterprise Linux 6.4. The pNFS architecture improves the scalability of NFS, with possible improvements
to performance. That is, when a server implements pNFS as well, a client is able to access data through
multiple servers concurrently. It supports three storage protocols or layouts: files, objects, and blocks.

NOTE

The protocol allows for three possible pNFS layout types: files, objects, and blocks. While
the Red Hat Enterprise Linux 6.4 client only supported the files layout type, Red Hat
Enterprise Linux 7 supports the files layout type, with objects and blocks layout types
being included as a technology preview.

pNFS Flex Files


Flexible Files is a new layout for pNFS that enables the aggregation of standalone NFSv3 and NFSv4
servers into a scale out name space. The Flex Files feature is part of the NFSv4.2 standard as described
in the RFC 7862 specification.

Red Hat Enterprise Linux can mount NFS shares from Flex Files servers since Red Hat Enterprise
Linux 7.4.

Mounting pNFS Shares


To enable pNFS functionality, mount shares from a pNFS-enabled server with NFS version 4.1
or later:

# mount -t nfs -o v4.1 server:/remote-export /local-directory


After the server is pNFS-enabled, the nfs_layout_nfsv41_files kernel module is automatically
loaded on the first mount. The mount entry in the output should contain minorversion=1. Use
the following command to verify the module was loaded:

$ lsmod | grep nfs_layout_nfsv41_files

To mount an NFS share with the Flex Files feature from a server that supports Flex Files, use
NFS version 4.2 or later:

# mount -t nfs -o v4.2 server:/remote-export /local-directory

Verify that the nfs_layout_flexfiles module has been loaded:

$ lsmod | grep nfs_layout_flexfiles

Additional Resources
For more information on pNFS, refer to: http://www.pnfs.com.

8.3. CONFIGURING NFS CLIENT


The mount command mounts NFS shares on the client side. Its format is as follows:

# mount -t nfs -o options server:/remote/export /local/directory

This command uses the following variables:

options
A comma-delimited list of mount options; for more information on valid NFS mount options, see
Section 8.5, “Common NFS Mount Options”.

server
The hostname, IP address, or fully qualified domain name of the server exporting the file system you
wish to mount

/remote/export
The file system or directory being exported from the server, that is, the directory you wish to mount

/local/directory
The client location where /remote/export is mounted

The NFS protocol version used in Red Hat Enterprise Linux 7 is identified by the mount options
nfsvers or vers. By default, mount uses NFSv4 with mount -t nfs. If the server does not support
NFSv4, the client automatically steps down to a version supported by the server. If the nfsvers/vers
option is used to pass a particular version not supported by the server, the mount fails. The file system
type nfs4 is also available for legacy reasons; this is equivalent to running mount -t nfs -o
nfsvers=4 host:/remote/export /local/directory.
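For instance, to request NFSv3 explicitly (the server name and paths are hypothetical):

# mount -t nfs -o nfsvers=3 server.example.com:/exports/data /mnt/data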

For more information, see man mount.

If an NFS share was mounted manually, the share will not be automatically mounted upon reboot. Red
Hat Enterprise Linux offers two methods for mounting remote file systems automatically at boot time: the
/etc/fstab file and the autofs service. For more information, see Section 8.3.1, “Mounting NFS File
Systems Using /etc/fstab” and Section 8.4, “autofs”.

8.3.1. Mounting NFS File Systems Using /etc/fstab

An alternate way to mount an NFS share from another machine is to add a line to the /etc/fstab file.
The line must state the hostname of the NFS server, the directory on the server being exported, and the
directory on the local machine where the NFS share is to be mounted. You must be root to modify the
/etc/fstab file.

Example 8.1. Syntax Example

The general syntax for the line in /etc/fstab is as follows:

server:/usr/local/pub /pub nfs defaults 0 0

The mount point /pub must exist on the client machine before this command can be executed. After
adding this line to /etc/fstab on the client system, use the command mount /pub, and the mount
point /pub is mounted from the server.

A valid /etc/fstab entry to mount an NFS export should contain the following information:

server:/remote/export /local/directory nfs options 0 0

The variables server, /remote/export, /local/directory, and options are the same ones used when
manually mounting an NFS share. For more information, see Section 8.3, “Configuring NFS Client”.

NOTE

The mount point /local/directory must exist on the client before /etc/fstab is read.
Otherwise, the mount fails.

After editing /etc/fstab, regenerate mount units so that your system registers the new configuration:

# systemctl daemon-reload

Additional Resources

For more information about /etc/fstab, refer to man fstab.

8.4. AUTOFS
One drawback of using /etc/fstab is that, regardless of how infrequently a user accesses the NFS
mounted file system, the system must dedicate resources to keep the mounted file system in place. This
is not a problem with one or two mounts, but when the system is maintaining mounts to many systems at
one time, overall system performance can be affected. An alternative to /etc/fstab is to use the
kernel-based automount utility. An automounter consists of two components:

a kernel module that implements a file system, and


a user-space daemon that performs all of the other functions.

The automount utility can mount and unmount NFS file systems automatically (on-demand mounting),
therefore saving system resources. It can be used to mount other file systems including AFS, SMBFS,
CIFS, and local file systems.

IMPORTANT

The nfs-utils package is now a part of both the 'NFS file server' and the 'Network File
System Client' groups. As such, it is no longer installed by default with the Base group.
Ensure that nfs-utils is installed on the system first before attempting to automount an
NFS share.

autofs is also part of the 'Network File System Client' group.

autofs uses /etc/auto.master (master map) as its default primary configuration file. This can be
changed to use another supported network source and name using the autofs configuration (in
/etc/sysconfig/autofs) in conjunction with the Name Service Switch (NSS) mechanism. An
instance of the autofs version 4 daemon was run for each mount point configured in the master map
and so it could be run manually from the command line for any given mount point. This is not possible
with autofs version 5, because it uses a single daemon to manage all configured mount points; as
such, all automounts must be configured in the master map. This is in line with the usual requirements of
other industry standard automounters. Mount point, hostname, exported directory, and options can all be
specified in a set of files (or other supported network sources) rather than configuring them manually for
each host.

8.4.1. Improvements in autofs Version 5 over Version 4


autofs version 5 features the following enhancements over version 4:

Direct map support


Direct maps in autofs provide a mechanism to automatically mount file systems at arbitrary points in
the file system hierarchy. A direct map is denoted by a mount point of /- in the master map. Entries
in a direct map contain an absolute path name as a key (instead of the relative path names used in
indirect maps).
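
For example, a direct map setup might look like the following (an illustrative sketch; the map file
name, server name, and paths are placeholders). The master map entry points to a direct map file,
and each key in that file is an absolute path:

/etc/auto.master:
/-      /etc/auto.direct

/etc/auto.direct:
/mnt/data       fileserver.example.com:/export/data
/mnt/archive    fileserver.example.com:/export/archive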

Lazy mount and unmount support


Multi-mount map entries describe a hierarchy of mount points under a single key. A good example of
this is the -hosts map, commonly used for automounting all exports from a host under /net/host
as a multi-mount map entry. When using the -hosts map, an ls of /net/host creates autofs
trigger mounts for each export from host. These are then mounted and expired as they are
accessed. This can greatly reduce the number of active mounts needed when accessing a server
with a large number of exports.
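
For example, the -hosts map is typically enabled with a single master map entry such as the
following (shown only as an illustration); an ls of a directory under /net then triggers the behavior
described above:

/net    -hosts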

Enhanced LDAP support


The autofs configuration file (/etc/sysconfig/autofs) provides a mechanism to specify the
autofs schema that a site implements, thus precluding the need to determine this via trial and error
in the application itself. In addition, authenticated binds to the LDAP server are now supported, using
most mechanisms supported by the common LDAP server implementations. A new configuration file
has been added for this support: /etc/autofs_ldap_auth.conf. The default configuration file is
self-documenting, and uses an XML format.


Proper use of the Name Service Switch (nsswitch) configuration.


The Name Service Switch configuration file exists to provide a means of determining from where
specific configuration data comes. The reason for this configuration is to allow administrators the
flexibility of using the back-end database of choice, while maintaining a uniform software interface to
access the data. While the version 4 automounter is becoming increasingly better at handling the
NSS configuration, it is still not complete. Autofs version 5, on the other hand, is a complete
implementation.

For more information on the supported syntax of this file, see man nsswitch.conf. Not all NSS
databases are valid map sources and the parser will reject ones that are invalid. Valid sources are
files, yp, nis, nisplus, ldap, and hesiod.

Multiple master map entries per autofs mount point


One thing that is frequently used but not yet mentioned is the handling of multiple master map entries
for the direct mount point /-. The map keys for each entry are merged and behave as one map.

Example 8.2. Multiple Master Map Entries per autofs Mount Point

Following is an example in the connectathon test maps for the direct mounts:

/- /tmp/auto_dcthon
/- /tmp/auto_test3_direct
/- /tmp/auto_test4_direct

8.4.2. Configuring autofs


The primary configuration file for the automounter is /etc/auto.master, also referred to as the
master map, which may be changed as described in Section 8.4.1, “Improvements in autofs Version 5
over Version 4”. The master map lists autofs-controlled mount points on the system, and their
corresponding configuration files or network sources known as automount maps. The format of the
master map is as follows:

mount-point map-name options

The variables used in this format are:

mount-point
The autofs mount point, /home, for example.

map-name
The name of a map source which contains a list of mount points, and the file system location from
which those mount points should be mounted.

options
If supplied, these apply to all entries in the given map provided they do not themselves have options
specified. This behavior is different from autofs version 4 where options were cumulative. This has
been changed to implement mixed environment compatibility.


Example 8.3. /etc/auto.master File

The following is a sample line from /etc/auto.master file (displayed with cat
/etc/auto.master):

/home /etc/auto.misc

The general format of maps is similar to the master map; however, the "options" appear between the
mount point and the location instead of at the end of the entry as in the master map:

mount-point [options] location

The variables used in this format are:

mount-point
This refers to the autofs mount point. This can be a single directory name for an indirect mount or
the full path of the mount point for direct mounts. Each direct and indirect map entry key (mount-
point) may be followed by a space separated list of offset directories (subdirectory names each
beginning with a /) making them what is known as a multi-mount entry.

options
Whenever supplied, these are the mount options for the map entries that do not specify their own
options.

location
This refers to the file system location such as a local file system path (preceded with the Sun map
format escape character ":" for map names beginning with /), an NFS file system or other valid file
system location.

The following is a sample of contents from a map file (for example, /etc/auto.misc):

payroll -fstype=nfs personnel:/dev/hda3


sales -fstype=ext3 :/dev/hda4

The first column in a map file indicates the autofs mount point (sales and payroll from the server
called personnel). The second column indicates the options for the autofs mount while the third
column indicates the source of the mount. Following the given configuration, the autofs mount points will
be /home/payroll and /home/sales. The -fstype= option is often omitted and is generally not
needed for correct operation.

The automounter creates the directories if they do not exist. If the directories existed before the
automounter was started, the automounter will not remove them when it exits.

To start the automount daemon, use the following command:

# systemctl start autofs

To restart the automount daemon, use the following command:

# systemctl restart autofs


Using the given configuration, if a process requires access to an autofs unmounted directory such as
/home/payroll/2006/July.sxc, the automount daemon automatically mounts the directory. If a
timeout is specified, the directory is automatically unmounted if the directory is not accessed for the
timeout period.

To view the status of the automount daemon, use the following command:

# systemctl status autofs

8.4.3. Overriding or Augmenting Site Configuration Files


It can be useful to override site defaults for a specific mount point on a client system. For example,
consider the following conditions:

Automounter maps are stored in NIS and the /etc/nsswitch.conf file has the following
directive:

automount: files nis

The auto.master file contains:

+auto.master

The NIS auto.master map file contains:

/home auto.home

The NIS auto.home map contains:

beth fileserver.example.com:/export/home/beth
joe fileserver.example.com:/export/home/joe
* fileserver.example.com:/export/home/&

The file map /etc/auto.home does not exist.

Given these conditions, let's assume that the client system needs to override the NIS map auto.home
and mount home directories from a different server. In this case, the client needs to use the following
/etc/auto.master map:

/home /etc/auto.home
+auto.master

The /etc/auto.home map contains the entry:

* labserver.example.com:/export/home/&

Because the automounter only processes the first occurrence of a mount point, /home contains the
contents of /etc/auto.home instead of the NIS auto.home map.


Alternatively, to augment the site-wide auto.home map with just a few entries, create an
/etc/auto.home file map, and in it put the new entries. At the end, include the NIS auto.home map.
Then the /etc/auto.home file map looks similar to:

mydir someserver:/export/mydir
+auto.home

With these NIS auto.home map conditions, the ls /home command outputs:

beth joe mydir

This last example works as expected because autofs does not include the contents of a file map of the
same name as the one it is reading. As such, autofs moves on to the next map source in the
nsswitch configuration.

8.4.4. Using LDAP to Store Automounter Maps


LDAP client libraries must be installed on all systems configured to retrieve automounter maps from
LDAP. On Red Hat Enterprise Linux, the openldap package should be installed automatically as a
dependency of the automounter. To configure LDAP access, modify /etc/openldap/ldap.conf.
Ensure that BASE, URI, and schema are set appropriately for your site.

The most recently established schema for storing automount maps in LDAP is described by
rfc2307bis. To use this schema it is necessary to set it in the autofs configuration
(/etc/sysconfig/autofs) by removing the comment characters from the schema definition. For
example:

Example 8.4. Setting autofs Configuration

DEFAULT_MAP_OBJECT_CLASS="automountMap"
DEFAULT_ENTRY_OBJECT_CLASS="automount"
DEFAULT_MAP_ATTRIBUTE="automountMapName"
DEFAULT_ENTRY_ATTRIBUTE="automountKey"
DEFAULT_VALUE_ATTRIBUTE="automountInformation"

Ensure that these are the only schema entries not commented in the configuration. The automountKey
replaces the cn attribute in the rfc2307bis schema. Following is an example of an LDAP Data
Interchange Format (LDIF) configuration:

Example 8.5. LDIF Configuration

# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (&(objectclass=automountMap)(automountMapName=auto.master))
# requesting: ALL
#

# auto.master, example.com
dn: automountMapName=auto.master,dc=example,dc=com
objectClass: top
objectClass: automountMap
automountMapName: auto.master

# extended LDIF
#
# LDAPv3
# base <automountMapName=auto.master,dc=example,dc=com> with scope
subtree
# filter: (objectclass=automount)
# requesting: ALL
#

# /home, auto.master, example.com
dn: automountMapName=auto.master,dc=example,dc=com
objectClass: automount
cn: /home
automountKey: /home
automountInformation: auto.home

# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (&(objectclass=automountMap)(automountMapName=auto.home))
# requesting: ALL
#

# auto.home, example.com
dn: automountMapName=auto.home,dc=example,dc=com
objectClass: automountMap
automountMapName: auto.home

# extended LDIF
#
# LDAPv3
# base <automountMapName=auto.home,dc=example,dc=com> with scope subtree
# filter: (objectclass=automount)
# requesting: ALL
#

# foo, auto.home, example.com
dn: automountKey=foo,automountMapName=auto.home,dc=example,dc=com
objectClass: automount
automountKey: foo
automountInformation: filer.example.com:/export/foo

# /, auto.home, example.com
dn: automountKey=/,automountMapName=auto.home,dc=example,dc=com
objectClass: automount
automountKey: /
automountInformation: filer.example.com:/export/&

8.5. COMMON NFS MOUNT OPTIONS


Beyond mounting a file system with NFS on a remote host, it is also possible to specify other options at
mount time to make the mounted share easier to use. These options can be used with manual mount
commands, /etc/fstab settings, and autofs.

The following are options commonly used for NFS mounts:

intr
Allows NFS requests to be interrupted if the server goes down or cannot be reached.

lookupcache=mode
Specifies how the kernel should manage its cache of directory entries for a given mount point. Valid
arguments for mode are all, none, or pos/positive.

nfsvers=version
Specifies which version of the NFS protocol to use, where version is 3 or 4. This is useful for hosts
that run multiple NFS servers. If no version is specified, NFS uses the highest version supported by
the kernel and mount command.

The option vers is identical to nfsvers, and is included in this release for compatibility reasons.

noacl
Turns off all ACL processing. This may be needed when interfacing with older versions of Red Hat
Enterprise Linux, Red Hat Linux, or Solaris, since the most recent ACL technology is not compatible
with older systems.

nolock
Disables file locking. This setting is sometimes required when connecting to very old NFS servers.

noexec
Prevents execution of binaries on mounted file systems. This is useful if the system is mounting a
non-Linux file system containing incompatible binaries.

nosuid
Disables set-user-identifier or set-group-identifier bits. This prevents remote users
from gaining higher privileges by running a setuid program.

port=num
Specifies the numeric value of the NFS server port. If num is 0 (the default value), then mount
queries the remote host's rpcbind service for the port number to use. If the remote host's NFS
daemon is not registered with its rpcbind service, the standard NFS port number of TCP 2049 is
used instead.

rsize=num and wsize=num
These options set the maximum number of bytes to be transferred in a single NFS read or write
operation.

There is no fixed default value for rsize and wsize. By default, NFS uses the largest possible value
that both the server and the client support. In Red Hat Enterprise Linux 7, the client and server
maximum is 1,048,576 bytes. For more details, see the What are the default and maximum values for
rsize and wsize with NFS mounts? KBase article.


sec=mode
Its default setting is sec=sys, which uses local UNIX UIDs and GIDs. These use AUTH_SYS to
authenticate NFS operations.

sec=krb5 uses Kerberos V5 instead of local UNIX UIDs and GIDs to authenticate users.

sec=krb5i uses Kerberos V5 for user authentication and performs integrity checking of NFS
operations using secure checksums to prevent data tampering.

sec=krb5p uses Kerberos V5 for user authentication, integrity checking, and encrypts NFS traffic to
prevent traffic sniffing. This is the most secure setting, but it also involves the most performance
overhead.

tcp
Instructs the NFS mount to use the TCP protocol.

udp
Instructs the NFS mount to use the UDP protocol.

For more information, see man mount and man nfs.
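
For example, several of these options can be combined in a single mount command. The following is
an illustrative sketch only; the server name, paths, and transfer sizes are placeholders:

# mount -t nfs -o nfsvers=4.1,sec=sys,rsize=1048576,wsize=1048576 server:/remote/export /local/directory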

8.6. STARTING AND STOPPING THE NFS SERVER


Prerequisites

For servers that support NFSv2 or NFSv3 connections, the rpcbind[1] service must be running.
To verify that rpcbind is active, use the following command:

$ systemctl status rpcbind

To configure an NFSv4-only server, which does not require rpcbind, see Section 8.7.7,
“Configuring an NFSv4-only Server”.

On Red Hat Enterprise Linux 7.0, if your NFS server exports NFSv3 and is enabled to start at
boot, you need to manually start and enable the nfs-lock service:

# systemctl start nfs-lock


# systemctl enable nfs-lock

On Red Hat Enterprise Linux 7.1 and later, nfs-lock starts automatically if needed, and an
attempt to enable it manually fails.

Procedures
To start an NFS server, use the following command:

# systemctl start nfs

To enable NFS to start at boot, use the following command:

# systemctl enable nfs


To stop the server, use:

# systemctl stop nfs

The restart option is a shorthand way of stopping and then starting NFS. This is the most
efficient way to make configuration changes take effect after editing the configuration file for
NFS. To restart the server, type:

# systemctl restart nfs

After you edit the /etc/sysconfig/nfs file, restart the nfs-config service by running the
following command for the new values to take effect:

# systemctl restart nfs-config

The try-restart command only starts nfs if it is currently running. This command is the
equivalent of condrestart (conditional restart) in Red Hat init scripts and is useful because it
does not start the daemon if NFS is not running.

To conditionally restart the server, type:

# systemctl try-restart nfs

To reload the NFS server configuration file without restarting the service, type:

# systemctl reload nfs

8.7. CONFIGURING THE NFS SERVER


There are two ways to configure exports on an NFS server:

Manually editing the NFS configuration file, that is, /etc/exports, and

Through the command line, that is, by using the command exportfs

8.7.1. The /etc/exports Configuration File

The /etc/exports file controls which file systems are exported to remote hosts and specifies options.
It follows the following syntax rules:

Blank lines are ignored.

To add a comment, start a line with the hash mark (#).

You can wrap long lines with a backslash (\).

Each exported file system should be on its own individual line.

Any lists of authorized hosts placed after an exported file system must be separated by space
characters.


Options for each of the hosts must be placed in parentheses directly after the host identifier,
without any spaces separating the host and the first parenthesis.

Each entry for an exported file system has the following structure:

export host(options)

The aforementioned structure uses the following variables:

export
The directory being exported

host
The host or network to which the export is being shared

options
The options to be used for host

It is possible to specify multiple hosts, along with specific options for each host. To do so, list them on the
same line as a space-delimited list, with each hostname followed by its respective options (in
parentheses), as in:

export host1(options1) host2(options2) host3(options3)

For information on different methods for specifying hostnames, see Section 8.7.5, “Hostname Formats”.

In its simplest form, the /etc/exports file only specifies the exported directory and the hosts permitted
to access it, as in the following example:

Example 8.6. The /etc/exports File

/exported/directory bob.example.com

Here, bob.example.com can mount /exported/directory/ from the NFS server. Because no
options are specified in this example, NFS uses default settings.

The default settings are:

ro
The exported file system is read-only. Remote hosts cannot change the data shared on the file
system. To allow hosts to make changes to the file system (that is, read and write), specify the rw
option.

sync
The NFS server will not reply to requests before changes made by previous requests are written to
disk. To enable asynchronous writes instead, specify the option async.

wdelay
The NFS server will delay writing to the disk if it suspects another write request is imminent. This can
improve performance as it reduces the number of times the disk must be accessed by separate write
commands, thereby reducing write overhead. To disable this behavior, specify the no_wdelay option.
no_wdelay is only available if the default sync option is also specified.

root_squash
This prevents root users connected remotely (as opposed to locally) from having root privileges;
instead, the NFS server assigns them the user ID nfsnobody. This effectively "squashes" the power
of the remote root user to the lowest local user, preventing possible unauthorized writes on the remote
server. To disable root squashing, specify no_root_squash.

To squash every remote user (including root), use all_squash. To specify the user and group IDs that
the NFS server should assign to remote users from a particular host, use the anonuid and anongid
options, respectively, as in:

export host(anonuid=uid,anongid=gid)

Here, uid and gid are user ID number and group ID number, respectively. The anonuid and anongid
options allow you to create a special user and group account for remote NFS users to share.
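
For example, the following hypothetical /etc/exports line squashes every remote user from the
192.168.0.0/24 network to a dedicated local account with UID and GID 5000 (the path, network, and
ID values are placeholders chosen for illustration):

/exported/directory 192.168.0.0/24(rw,all_squash,anonuid=5000,anongid=5000)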

By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. To disable
this feature, specify the no_acl option when exporting the file system.

Each default for every exported file system must be explicitly overridden. For example, if the rw option is
not specified, then the exported file system is shared as read-only. The following is a sample line from
/etc/exports which overrides two default options:

/another/exported/directory 192.168.0.3(rw,async)

In this example, 192.168.0.3 can mount /another/exported/directory/ read and write, and all
writes to disk are asynchronous. For more information on exporting options, see man exportfs.

Other options are available where no default value is specified. These include the ability to disable sub-
tree checking, allow access from insecure ports, and allow insecure file locks (necessary for certain early
NFS client implementations). For more information on these less-used options, see man exports.

IMPORTANT

The format of the /etc/exports file is very precise, particularly with regard to the use of the
space character. Remember to always separate exported file systems from hosts and
hosts from one another with a space character. However, there should be no other space
characters in the file except on comment lines.

For example, the following two lines do not mean the same thing:

/home bob.example.com(rw)
/home bob.example.com (rw)

The first line allows only users from bob.example.com read and write access to the
/home directory. The second line allows users from bob.example.com to mount the
directory as read-only (the default), while the rest of the world can mount it read/write.

8.7.2. The exportfs Command


Every file system being exported to remote users with NFS, as well as the access level for those file
systems, is listed in the /etc/exports file. When the nfs service starts, the /usr/sbin/exportfs
command launches and reads this file, passes control to rpc.mountd (if NFSv3) for the actual mounting
process, then to rpc.nfsd where the file systems are then available to remote users.

When issued manually, the /usr/sbin/exportfs command allows the root user to selectively export
or unexport directories without restarting the NFS service. When given the proper options, the
/usr/sbin/exportfs command writes the exported file systems to /var/lib/nfs/xtab. Since
rpc.mountd refers to the xtab file when deciding access privileges to a file system, changes to the list
of exported file systems take effect immediately.

The following is a list of commonly-used options available for /usr/sbin/exportfs:

-r
Causes all directories listed in /etc/exports to be exported by constructing a new export list in
/var/lib/nfs/xtab. This option effectively refreshes the export list with any changes made to
/etc/exports.

-a
Causes all directories to be exported or unexported, depending on what other options are passed to
/usr/sbin/exportfs. If no other options are specified, /usr/sbin/exportfs exports all file
systems specified in /etc/exports.

-o file-systems
Specifies directories to be exported that are not listed in /etc/exports. Replace file-systems with
additional file systems to be exported. These file systems must be formatted in the same way they
are specified in /etc/exports. This option is often used to test an exported file system before
adding it permanently to the list of file systems to be exported. For more information on
/etc/exports syntax, see Section 8.7.1, “The /etc/exports Configuration File”.

-i
Ignores /etc/exports; only options given from the command line are used to define exported file
systems.

-u
Unexports all shared directories. The command /usr/sbin/exportfs -ua suspends NFS file
sharing while keeping all NFS daemons up. To re-enable NFS sharing, use exportfs -r.

-v
Verbose operation, where the file systems being exported or unexported are displayed in greater
detail when the exportfs command is executed.

If no options are passed to the exportfs command, it displays a list of currently exported file systems.
For more information about the exportfs command, see man exportfs.
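
For example, a typical workflow might look like the following (the host name and path are
placeholders): temporarily export a directory with -o to test it, list the current exports with -v, and
then refresh the export list with -r after the entry has been added to /etc/exports permanently:

# exportfs -o rw,sync bob.example.com:/srv/test
# exportfs -v
# exportfs -r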

8.7.2.1. Using exportfs with NFSv4

In Red Hat Enterprise Linux 7, no extra steps are required to configure NFSv4 exports as any filesystems
mentioned are automatically available to NFSv3 and NFSv4 clients using the same path. This was not
the case in previous versions.


To prevent clients from using NFSv4, turn it off by setting RPCNFSDARGS="-N 4" in
/etc/sysconfig/nfs.

8.7.3. Running NFS Behind a Firewall


NFS requires rpcbind, which dynamically assigns ports for RPC services and can cause issues for
configuring firewall rules. To allow clients to access NFS shares behind a firewall, edit the
/etc/sysconfig/nfs file to set which ports the RPC services run on.

The /etc/sysconfig/nfs file does not exist by default on all systems. If /etc/sysconfig/nfs
does not exist, create it and specify the following:

RPCMOUNTDOPTS="-p port"
This adds "-p port" to the rpc.mount command line: rpc.mount -p port.

To specify the ports to be used by the nlockmgr service, set the port number for the nlm_tcpport and
nlm_udpport options in the /etc/modprobe.d/lockd.conf file.
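
The following is a minimal sketch of such a configuration. The port numbers are commonly chosen
example values, not requirements, and the firewall-cmd commands assume the firewalld service is in
use; adjust both to your environment:

/etc/sysconfig/nfs:
RPCMOUNTDOPTS="-p 20048"

/etc/modprobe.d/lockd.conf:
options lockd nlm_tcpport=32803 nlm_udpport=32769

# firewall-cmd --permanent --add-service=nfs
# firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp
# firewall-cmd --permanent --add-port=20048/tcp --add-port=20048/udp
# firewall-cmd --permanent --add-port=32803/tcp --add-port=32769/udp
# firewall-cmd --reload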

If NFS fails to start, check /var/log/messages. Commonly, NFS fails to start if you specify a port
number that is already in use. On Red Hat Enterprise Linux 7.2 and earlier, after editing
/etc/sysconfig/nfs, you need to restart the nfs-config service for the new values to take effect:

# systemctl restart nfs-config

Then, restart the NFS server:

# systemctl restart nfs-server

Run rpcinfo -p to confirm the changes have taken effect.

NOTE

To allow NFSv4.0 callbacks to pass through firewalls, set
/proc/sys/fs/nfs/nfs_callback_tcpport and allow the server to connect to that
port on the client.

This process is not needed for NFSv4.1 or higher, and the other ports for mountd,
statd, and lockd are not required in a pure NFSv4 environment.
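
For example, the callback port could be pinned on the client and then opened in a firewalld-based
firewall as follows (a sketch; the port number 4737 is arbitrary and firewalld is an assumption):

# echo 4737 > /proc/sys/fs/nfs/nfs_callback_tcpport
# firewall-cmd --permanent --add-port=4737/tcp
# firewall-cmd --reload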

8.7.3.1. Discovering NFS exports

There are two ways to discover which file systems an NFS server exports.

On any server that supports NFSv3, use the showmount command:

$ showmount -e myserver
Export list for myserver
/exports/foo
/exports/bar

On any server that supports NFSv4, mount the root directory and look around.


# mount myserver:/ /mnt/
# cd /mnt/
exports
# ls exports
foo
bar

On servers that support both NFSv4 and NFSv3, both methods work and give the same results.

NOTE

On older NFS servers that predate Red Hat Enterprise Linux 6, depending on how they are
configured, it is possible to export file systems to NFSv4 clients at different paths. Because
these servers do not enable NFSv4 by default, this should not be a problem.

8.7.4. Accessing RPC Quota through a Firewall


If you export a file system that uses disk quotas, you can use the quota Remote Procedure Call (RPC)
service to provide disk quota data to NFS clients.

Procedure 8.1. Making RPC Quota Accessible Behind a Firewall

1. To enable the rpc-rquotad service, use the following command:

# systemctl enable rpc-rquotad

2. To start the rpc-rquotad service, use the following command:

# systemctl start rpc-rquotad

Note that rpc-rquotad is, if enabled, started automatically after starting the nfs-server
service.

3. To make the quota RPC service accessible behind a firewall, UDP or TCP port 875 needs to be
open. The default port number is defined in the /etc/services file.

You can override the default port number by appending -p port-number to the
RPCRQUOTADOPTS variable in the /etc/sysconfig/rpc-rquotad file.

4. Restart rpc-rquotad for changes in the /etc/sysconfig/rpc-rquotad file to take effect:

# systemctl restart rpc-rquotad

Setting Quotas from Remote Hosts


By default, quotas can only be read by remote hosts. To allow setting quotas, append the -S option to
the RPCRQUOTADOPTS variable in the /etc/sysconfig/rpc-rquotad file.
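
For example, the resulting line in /etc/sysconfig/rpc-rquotad might look like the following
(the -p 875 part simply restates the default port and is shown only to illustrate combining it with a
port override):

RPCRQUOTADOPTS="-S -p 875"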

Restart rpc-rquotad for changes in the /etc/sysconfig/rpc-rquotad file to take effect:

# systemctl restart rpc-rquotad

8.7.5. Hostname Formats


The host(s) can be in the following forms:

Single machine
A fully-qualified domain name (that can be resolved by the server), hostname (that can be resolved
by the server), or an IP address.

Series of machines specified with wildcards


Use the * or ? character to specify a string match. Wildcards are not to be used with IP addresses;
however, they may accidentally work if reverse DNS lookups fail. When specifying wildcards in fully
qualified domain names, dots (.) are not included in the wildcard. For example, *.example.com
includes one.example.com but does not include one.two.example.com.

IP networks
Use a.b.c.d/z, where a.b.c.d is the network and z is the number of bits in the netmask (for example
192.168.0.0/24). Another acceptable format is a.b.c.d/netmask, where a.b.c.d is the network and
netmask is the netmask (for example, 192.168.100.8/255.255.255.0).

Netgroups
Use the format @group-name, where group-name is the NIS netgroup name.

8.7.6. Enabling NFS over RDMA (NFSoRDMA)


The remote direct memory access (RDMA) service works automatically in Red Hat Enterprise Linux 7 if
there is RDMA-capable hardware present.

To enable NFS over RDMA:

1. Install the rdma and rdma-core packages.

The /etc/rdma/rdma.conf file contains a line that sets XPRTRDMA_LOAD=yes by default,
which requests the rdma service to load the NFSoRDMA client module.

2. To enable automatic loading of NFSoRDMA server modules, add SVCRDMA_LOAD=yes on a
new line in /etc/rdma/rdma.conf.

RPCNFSDARGS="--rdma=20049" in the /etc/sysconfig/nfs file specifies the port number
on which the NFSoRDMA service listens for clients. RFC 5667 specifies that servers must listen
on port 20049 when providing NFSv4 services over RDMA.

3. Restart the nfs service after editing the /etc/rdma/rdma.conf file:

# systemctl restart nfs

Note that with earlier kernel versions, a system reboot is needed after editing
/etc/rdma/rdma.conf for the changes to take effect.
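
On the client side, a mount over RDMA can then be requested with the rdma and port mount options.
This is only a sketch: the server name and export path are placeholders, and it assumes the server
listens on the RFC 5667 port 20049 as described above:

# mount -t nfs -o rdma,port=20049 server.example.com:/export /mnt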

8.7.7. Configuring an NFSv4-only Server


By default, the NFS server supports NFSv2, NFSv3, and NFSv4 connections in Red Hat Enterprise
Linux 7. However, you can also configure NFS to support only NFS version 4.0 and later. This minimizes
the number of open ports and running services on the system, because NFSv4 does not require the
rpcbind service to listen on the network.


When your NFS server is configured as NFSv4-only, clients attempting to mount shares using NFSv2 or
NFSv3 fail with an error like the following:

Requested NFS version or transport protocol is not supported.

Procedure 8.2. Configuring an NFSv4-only Server

To configure your NFS server to support only NFS version 4.0 and later:

1. Disable NFSv2, NFSv3, and UDP by adding the following line to the /etc/sysconfig/nfs
configuration file:

RPCNFSDARGS="-N 2 -N 3 -U"

2. Optionally, disable listening for the RPCBIND, MOUNT, and NSM protocol calls, which are not
necessary in the NFSv4-only case.

The effects of disabling these options are:

Clients that attempt to mount shares from your server using NFSv2 or NFSv3 become
unresponsive.

The NFS server itself is unable to mount NFSv2 and NFSv3 file systems.

To disable these options:

Add the following to the /etc/sysconfig/nfs file:

RPCMOUNTDOPTS="-N 2 -N 3"

Disable related services:

# systemctl mask --now rpc-statd.service rpcbind.service rpcbind.socket

3. Restart the NFS server:

# systemctl restart nfs

The changes take effect as soon as you start or restart the NFS server.

Verifying the NFSv4-only Configuration

You can verify that your NFS server is configured in the NFSv4-only mode by using the netstat utility.

The following is an example netstat output on an NFSv4-only server; listening for RPCBIND,
MOUNT, and NSM is also disabled. Here, nfs is the only listening NFS service:

# netstat -ltu

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:nfs             0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:ssh             0.0.0.0:*               LISTEN
tcp        0      0 localhost:smtp          0.0.0.0:*               LISTEN
tcp6       0      0 [::]:nfs                [::]:*                  LISTEN
tcp6       0      0 [::]:12432              [::]:*                  LISTEN
tcp6       0      0 [::]:12434              [::]:*                  LISTEN
tcp6       0      0 localhost:7092          [::]:*                  LISTEN
tcp6       0      0 [::]:ssh                [::]:*                  LISTEN
udp        0      0 localhost:323           0.0.0.0:*
udp        0      0 0.0.0.0:bootpc          0.0.0.0:*
udp6       0      0 localhost:323           [::]:*

In comparison, the netstat output before configuring an NFSv4-only server includes the
sunrpc and mountd services:

# netstat -ltu

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:nfs             0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:36069           0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:52364           0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:sunrpc          0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:mountd          0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:ssh             0.0.0.0:*               LISTEN
tcp        0      0 localhost:smtp          0.0.0.0:*               LISTEN
tcp6       0      0 [::]:34941              [::]:*                  LISTEN
tcp6       0      0 [::]:nfs                [::]:*                  LISTEN
tcp6       0      0 [::]:sunrpc             [::]:*                  LISTEN
tcp6       0      0 [::]:mountd             [::]:*                  LISTEN
tcp6       0      0 [::]:12432              [::]:*                  LISTEN
tcp6       0      0 [::]:56881              [::]:*                  LISTEN
tcp6       0      0 [::]:12434              [::]:*                  LISTEN
tcp6       0      0 localhost:7092          [::]:*                  LISTEN
tcp6       0      0 [::]:ssh                [::]:*                  LISTEN
udp        0      0 localhost:323           0.0.0.0:*
udp        0      0 0.0.0.0:37190           0.0.0.0:*
udp        0      0 0.0.0.0:876             0.0.0.0:*
udp        0      0 localhost:877           0.0.0.0:*
udp        0      0 0.0.0.0:mountd          0.0.0.0:*
udp        0      0 0.0.0.0:38588           0.0.0.0:*
udp        0      0 0.0.0.0:nfs             0.0.0.0:*
udp        0      0 0.0.0.0:bootpc          0.0.0.0:*
udp        0      0 0.0.0.0:sunrpc          0.0.0.0:*
udp6       0      0 localhost:323           [::]:*
udp6       0      0 [::]:57683              [::]:*
udp6       0      0 [::]:876                [::]:*
udp6       0      0 [::]:mountd             [::]:*
udp6       0      0 [::]:40874              [::]:*
udp6       0      0 [::]:nfs                [::]:*
udp6       0      0 [::]:sunrpc             [::]:*

8.8. SECURING NFS


NFS is suitable for transparent sharing of entire file systems with a large number of known hosts.
However, with ease-of-use comes a variety of potential security problems. To minimize NFS security
risks and protect data on the server, consider the following sections when exporting NFS file systems on
a server or mounting them on a client.

8.8.1. NFS Security with AUTH_SYS and Export Controls


Traditionally, NFS has given two options in order to control access to exported files.

First, the server restricts which hosts are allowed to mount which file systems either by IP address or by
host name.

Second, the server enforces file system permissions for users on NFS clients in the same way it does for
local users. Traditionally it does this using AUTH_SYS (also called AUTH_UNIX), which relies on the client
to state the UIDs and GIDs of the user. Be aware that this means a malicious or misconfigured client can
easily get this wrong and allow a user access to files that they should not.

To limit the potential risks, administrators often allow read-only access or squash user permissions to a
common user and group ID. Unfortunately, these solutions prevent the NFS share from being used in the
way it was originally intended.

Additionally, if an attacker gains control of the DNS server used by the system exporting the NFS file
system, the system associated with a particular hostname or fully qualified domain name can be pointed
to an unauthorized machine. At this point, the unauthorized machine is the system permitted to mount
the NFS share, since no username or password information is exchanged to provide additional security
for the NFS mount.

Wildcards should be used sparingly when exporting directories through NFS, as it is possible for the
scope of the wildcard to encompass more systems than intended.

It is also possible to restrict access to the rpcbind[1] service with TCP wrappers. Creating rules with
iptables can also limit access to ports used by rpcbind, rpc.mountd, and rpc.nfsd.


For more information on securing NFS and rpcbind, refer to man iptables.

8.8.2. NFS Security with AUTH_GSS

NFSv4 revolutionized NFS security by mandating the implementation of RPCSEC_GSS and the
Kerberos version 5 GSS-API mechanism. However, RPCSEC_GSS and the Kerberos mechanism are
also available for all versions of NFS. In FIPS mode, only FIPS-approved algorithms can be used.

Unlike AUTH_SYS, with the RPCSEC_GSS Kerberos mechanism, the server does not depend on the
client to correctly represent which user is accessing the file. Instead, cryptography is used to
authenticate users to the server, which prevents a malicious client from impersonating a user without
having that user's Kerberos credentials. Using the RPCSEC_GSS Kerberos mechanism is the most
straightforward way to secure mounts because after configuring Kerberos, no additional setup is
needed.

Configuring Kerberos
Before configuring an NFSv4 Kerberos-aware server, you need to install and configure a Kerberos Key
Distribution Centre (KDC). Kerberos is a network authentication system that allows clients and servers to
authenticate to each other by using symmetric encryption and a trusted third party, the KDC. Red Hat
recommends using Identity Management (IdM) for setting up Kerberos.

Procedure 8.3. Configuring an NFS Server and Client for IdM to Use RPCSEC_GSS

1. Create the nfs/hostname.domain@REALM principal on the NFS server side.

Create the host/hostname.domain@REALM principal on both the server and the client
side.

Add the corresponding keys to keytabs for the client and server.

For instructions, see the Adding and Editing Service Entries and Keytabs and Setting up a
Kerberos-aware NFS Server sections in the Red Hat Enterprise Linux 7 Linux Domain Identity,
Authentication, and Policy Guide.

2. On the server side, use the sec= option to enable the security flavors you want. To enable all
security flavors as well as non-cryptographic mounts:

/export *(sec=sys:krb5:krb5i:krb5p)

Valid security flavors to use with the sec= option are:

sys: no cryptographic protection, the default

krb5: authentication only

krb5i: integrity protection

krb5p: privacy protection

3. On the client side, add sec=krb5 (or sec=krb5i, or sec=krb5p, depending on the setup) to
the mount options:

# mount -o sec=krb5 server:/export /mnt


For information on how to configure an NFS client, see the Setting up a Kerberos-aware NFS
Client section in the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and
Policy Guide.

Although Red Hat recommends using IdM, Active Directory (AD) Kerberos servers are also supported.
For details, see the following Red Hat Knowledgebase article: How to set up NFS using Kerberos
authentication on RHEL 7 using SSSD and Active Directory.

For more information, see the exports(5) and nfs(5) manual pages, and Section 8.5, “Common NFS
Mount Options”.

For further information on the RPCSEC_GSS framework, including how gssproxy and rpc.gssd inter-
operate, see the GSSD flow description.

8.8.2.1. NFS Security with NFSv4

NFSv4 includes ACL support based on the Microsoft Windows NT model, not the POSIX model,
because of the Microsoft Windows NT model's features and wide deployment.

Another important security feature of NFSv4 is the removal of the use of the MOUNT protocol for mounting
file systems. The MOUNT protocol presented a security risk because of the way the protocol processed
file handles.

8.8.3. File Permissions


Once the NFS file system is mounted as either read-only or read/write by a remote host, the only
protection each shared file has is its permissions. If two users that share the same user ID value mount
the same NFS file system, they can modify each other's files. Additionally, anyone logged in as root on
the client system can use the su - command to access any files on the NFS share.

By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. Red Hat
recommends that this feature is kept enabled.

By default, NFS uses root squashing when exporting a file system. This sets the user ID of anyone
accessing the NFS share as the root user on their local machine to nobody. Root squashing is
controlled by the default option root_squash; for more information about this option, refer to
Section 8.7.1, “The /etc/exports Configuration File”. If possible, never disable root squashing.

When exporting an NFS share as read-only, consider using the all_squash option. This option makes
every user accessing the exported file system take the user ID of the nfsnobody user.
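
For example, a hypothetical read-only export that squashes all remote users, including root, to the
nfsnobody account could look like the following in /etc/exports (the path and network are
placeholders):

/exports/pub 192.168.0.0/24(ro,all_squash)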

8.9. NFS AND RPCBIND

NOTE

The following section only applies to NFSv3 implementations that require the rpcbind
service for backward compatibility.

For information on how to configure an NFSv4-only server, which does not need
rpcbind, see Section 8.7.7, “Configuring an NFSv4-only Server”.

The rpcbind[1] utility maps RPC services to the ports on which they listen. RPC processes notify
rpcbind when they start, registering the ports they are listening on and the RPC program numbers they
expect to serve. The client system then contacts rpcbind on the server with a particular RPC program
number. The rpcbind service redirects the client to the proper port number so it can communicate with
the requested service.

Because RPC-based services rely on rpcbind to make all connections with incoming client requests,
rpcbind must be available before any of these services start.

The rpcbind service uses TCP wrappers for access control, and access control rules for rpcbind
affect all RPC-based services. Alternatively, it is possible to specify access control rules for each of the
NFS RPC daemons. The man pages for rpc.mountd and rpc.statd contain information regarding the
precise syntax for these rules.

8.9.1. Troubleshooting NFS and rpcbind

Because rpcbind[1] provides coordination between RPC services and the port numbers used to
communicate with them, it is useful to view the status of current RPC services using rpcbind when
troubleshooting. The rpcinfo command shows each RPC-based service with port numbers, an RPC
program number, a version number, and an IP protocol type (TCP or UDP).

To make sure the proper NFS RPC-based services are enabled for rpcbind, use the following
command:

# rpcinfo -p

Example 8.7. rpcinfo -p command output

The following is sample output from this command:

program vers proto port service
100021 1 udp 32774 nlockmgr
100021 3 udp 32774 nlockmgr
100021 4 udp 32774 nlockmgr
100021 1 tcp 34437 nlockmgr
100021 3 tcp 34437 nlockmgr
100021 4 tcp 34437 nlockmgr
100011 1 udp 819 rquotad
100011 2 udp 819 rquotad
100011 1 tcp 822 rquotad
100011 2 tcp 822 rquotad
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100005 1 udp 836 mountd
100005 1 tcp 839 mountd
100005 2 udp 836 mountd
100005 2 tcp 839 mountd
100005 3 udp 836 mountd
100005 3 tcp 839 mountd


If one of the NFS services does not start up correctly, rpcbind will be unable to map RPC requests from
clients for that service to the correct port. In many cases, if NFS is not present in rpcinfo output,
restarting NFS causes the service to correctly register with rpcbind and begin working.

For more information and a list of options on rpcinfo, see its man page.

8.10. NFS REFERENCES


Administering an NFS server can be a challenge. Many options, including quite a few not mentioned in
this chapter, are available for exporting or mounting NFS shares. For more information, see the following
sources:

Installed Documentation
man mount — Contains a comprehensive look at mount options for both NFS server and client
configurations.

man fstab — Provides detail for the format of the /etc/fstab file used to mount file systems
at boot-time.

man nfs — Provides details on NFS-specific file system export and mount options.

man exports — Shows common options used in the /etc/exports file when exporting NFS
file systems.

Useful Websites
http://linux-nfs.org — The current site for developers where project status updates can be
viewed.

http://nfs.sourceforge.net/ — The old home for developers which still contains a lot of useful
information.

http://www.citi.umich.edu/projects/nfsv4/linux/ — An NFSv4 for Linux 2.6 kernel resource.

http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.111.4086 — An excellent whitepaper on
the features and enhancements of the NFS Version 4 protocol.

Related Books
Managing NFS and NIS by Hal Stern, Mike Eisler, and Ricardo Labiaga; O'Reilly & Associates
— Makes an excellent reference guide for the many different NFS export and mount options
available.

NFS Illustrated by Brent Callaghan; Addison-Wesley Publishing Company — Provides
comparisons of NFS to other network file systems and shows, in detail, how NFS communication
occurs.

[1] The rpcbind service replaces portmap, which was used in previous versions of Red Hat Enterprise Linux to
map RPC program numbers to IP address port number combinations. For more information, refer to Section 8.1.1,
“Required Services”.


CHAPTER 9. SERVER MESSAGE BLOCK (SMB)


The Server Message Block (SMB) protocol implements an application-layer network protocol used to
access resources on a server, such as file shares and shared printers. On Microsoft Windows, SMB is
implemented by default. If you run Red Hat Enterprise Linux, use Samba to provide SMB shares and the
cifs-utils utility to mount SMB shares from a remote server.

NOTE

In the context of SMB, you sometimes read about the Common Internet File System
(CIFS) protocol, which is a dialect of SMB. Both the SMB and CIFS protocols are
supported, and the kernel module and utilities involved in mounting SMB and CIFS shares
both use the name cifs.

9.1. PROVIDING SMB SHARES


See the Samba section in the Red Hat System Administrator's Guide.

9.2. MOUNTING AN SMB SHARE


On Red Hat Enterprise Linux, the cifs.ko file system module of the kernel provides support for the
SMB protocol. However, to mount and work with SMB shares, you must also install the cifs-utils
package:

# yum install cifs-utils

The cifs-utils package provides utilities to:

Mount SMB and CIFS shares

Manage NT Lan Manager (NTLM) credentials in the kernel's keyring

Set and display Access Control Lists (ACL) in a security descriptor on SMB and CIFS shares

9.2.1. Supported SMB Protocol Versions


The cifs.ko kernel module supports the following SMB protocol versions:

SMB 1

SMB 2.0

SMB 2.1

SMB 3.0

NOTE

Depending on the protocol version, not all SMB features are implemented.

9.2.1.1. UNIX Extensions Support


Samba uses the CAP_UNIX capability bit in the SMB protocol to provide the UNIX extensions feature.
These extensions are also supported by the cifs.ko kernel module. However, both Samba and the
kernel module support UNIX extensions only in the SMB 1 protocol.

To use UNIX extensions:

1. Set the server min protocol option in the [global] section in the
/etc/samba/smb.conf file to NT1. This is the default on Samba servers.

2. Mount the share using the SMB 1 protocol by providing the -o vers=1.0 option to the mount
command. For example:

# mount -t cifs -o vers=1.0,username=user_name //server_name/share_name /mnt/

By default, the kernel module uses SMB 2 or the highest later protocol version supported by the
server. Passing the -o vers=1.0 option to the mount command forces the kernel module to use
the SMB 1 protocol that is required for using UNIX extensions.

To verify if UNIX extensions are enabled, display the options of the mounted share:

# mount
...
//server/share on /mnt type cifs (...,unix,...)

If the unix entry is displayed in the list of mount options, UNIX extensions are enabled.

9.2.2. Manually Mounting an SMB Share


To manually mount an SMB share, use the mount utility with the -t cifs parameter:

# mount -t cifs -o username=user_name //server_name/share_name /mnt/
Password for user_name@//server_name/share_name: ********

In the -o options parameter, you can specify options that will be used to mount the share. For details,
see Section 9.2.6, “Frequently Used Mount Options” and the OPTIONS section in the mount.cifs(8) man
page.

Example 9.1. Mounting a Share Using an Encrypted SMB 3.0 Connection

To mount the \\server\example\ share as the DOMAIN\Administrator user over an
encrypted SMB 3.0 connection into the /mnt/ directory:

# mount -t cifs -o username=DOMAIN\Administrator,seal,vers=3.0 //server/example /mnt/
Password for user_name@//server_name/share_name: ********

9.2.3. Mounting an SMB Share Automatically When the System Boots


To mount an SMB share automatically when the system boots, add an entry for the share to the
/etc/fstab file. For example:


//server_name/share_name /mnt cifs credentials=/root/smb.cred 0 0

IMPORTANT

To enable the system to mount a share automatically, you must store the user name,
password, and domain name in a credentials file. For details, see Section 9.2.4,
“Authenticating To an SMB Share Using a Credentials File”.

In the fourth field of the /etc/fstab file, specify mount options, such as the path to the credentials file.
For details, see Section 9.2.6, “Frequently Used Mount Options” and the OPTIONS section in the
mount.cifs(8) man page.

To verify that the share mounts successfully, enter:

# mount /mnt/

9.2.4. Authenticating To an SMB Share Using a Credentials File


In certain situations, administrators want to mount a share without entering the user name and password.
To implement this, create a credentials file. For example:

Procedure 9.1. Creating a Credentials File

1. Create a file, such as ~/smb.cred, and specify the user name, password, and domain name in
that file:

username=user_name
password=password
domain=domain_name

2. Set the permissions to only allow the owner to access the file:

# chown user_name ~/smb.cred
# chmod 600 ~/smb.cred

You can now pass the credentials=file_name mount option to the mount utility or use it in the
/etc/fstab file to mount the share without being prompted for the user name and password.
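
For example, to mount the share manually with the credentials file created above (the server and
share names are placeholders, and /root/smb.cred is used here to match the earlier /etc/fstab
example):

# mount -t cifs -o credentials=/root/smb.cred //server_name/share_name /mnt/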

9.2.5. Performing a Multi-user SMB Mount


The credentials you provide to mount a share determine the access permissions on the mount point by
default. For example, if you use the DOMAIN\example user when you mount a share, all operations on
the share will be executed as this user, regardless of which local user performs the operation.

However, in certain situations, the administrator wants to mount a share automatically when the system
boots, but users should perform actions on the share's content using their own credentials. The
multiuser mount option lets you configure this scenario.


IMPORTANT

To use multiuser, you must additionally set the sec=security_type mount option to
a security type which supports providing credentials in a non-interactive way, such as
krb5 or the ntlmssp option with a credentials file. See the section called “Accessing a
Share as a User”.

The root user mounts the share using the multiuser option and an account that has minimal access
to the contents of the share. Regular users can then provide their user name and password to the current
session's kernel keyring using the cifscreds utility. If the user accesses the content of the mounted
share, the kernel uses the credentials from the kernel keyring instead of the one initially used to mount
the share.

Mounting a Share with the multiuser Option

To mount a share automatically with the multiuser option when the system boots:

Procedure 9.2. Creating an /etc/fstab File Entry with the multiuser Option

1. Create the entry for the share in the /etc/fstab file. For example:

//server_name/share_name /mnt cifs multiuser,sec=ntlmssp,credentials=/root/smb.cred 0 0

2. Mount the share:

# mount /mnt/

If you do not want to mount the share automatically when the system boots, mount it manually by
passing -o multiuser,sec=security_type to the mount command. For details about mounting an
SMB share manually, see Section 9.2.2, “Manually Mounting an SMB Share”.

Verifying if an SMB Share is Mounted with the multiuser Option

To verify if a share is mounted with the multiuser option:

# mount
...
//server_name/share_name on /mnt type cifs (sec=ntlmssp,multiuser,...)

Accessing a Share as a User

If an SMB share is mounted with the multiuser option, users can provide their credentials for the
server to the kernel's keyring:

# cifscreds add -u SMB_user_name server_name
Password: ********

Now, when the user performs operations in the directory that contains the mounted SMB share, the
server applies the file system permissions for this user, instead of the one initially used when the share
was mounted.


NOTE

Multiple users can perform operations using their own credentials on the mounted share
at the same time.

9.2.6. Frequently Used Mount Options


When you mount an SMB share, the mount options determine:

How the connection will be established with the server. For example, which SMB protocol
version is used when connecting to the server.

How the share will be mounted into the local file system. For example, if the system overrides
the remote file and directory permissions to enable multiple local users to access the content on
the server.

To set multiple options in the fourth field of the /etc/fstab file or in the -o parameter of a mount
command, separate them with commas. For example, see Procedure 9.2, “Creating an /etc/fstab
File Entry with the multiuser Option”.

The following list gives an overview of frequently used mount options:

Table 9.1. Frequently Used Mount Options

Option Description

credentials=file_name Sets the path to the credentials file. See Section 9.2.4, “Authenticating To an
SMB Share Using a Credentials File”.

dir_mode=mode Sets the directory mode if the server does not support CIFS UNIX
extensions.

file_mode=mode Sets the file mode if the server does not support CIFS UNIX extensions.

password=password Sets the password used to authenticate to the SMB server. Alternatively,
specify a credentials file using the credentials option.

seal Enables encryption support for connections using SMB 3.0 or a later
protocol version. Therefore, use seal together with the vers mount option
set to 3.0 or later. See Example 9.1, “Mounting a Share Using an
Encrypted SMB 3.0 Connection”.

sec=security_mode Sets the security mode. For example, ntlmsspi enables NTLMv2 password
hashing with packet signing enabled. For a list of supported values, see the
option's description in the mount.cifs(8) man page.

If the server does not support the ntlmv2 security mode, use
sec=ntlmssp, which is the default. For security reasons, do not use the
insecure ntlm security mode.

username=user_name Sets the user name used to authenticate to the SMB server. Alternatively,
specify a credentials file using the credentials option.


Option Description

vers=SMB_protocol_version Sets the SMB protocol version used for the communication with the server.

For a complete list, see the OPTIONS section in the mount.cifs(8) man page.


CHAPTER 10. FS-CACHE


FS-Cache is a persistent local cache that can be used by file systems to take data retrieved from over
the network and cache it on local disk. This helps minimize network traffic for users accessing data from
a file system mounted over the network (for example, NFS).

The following diagram is a high-level illustration of how FS-Cache works:

Figure 10.1. FS-Cache Overview

FS-Cache is designed to be as transparent as possible to the users and administrators of a system.


Unlike cachefs on Solaris, FS-Cache allows a file system on a server to interact directly with a client's
local cache without creating an overmounted file system. With NFS, a mount option instructs the client to
mount the NFS share with FS-cache enabled.

FS-Cache does not alter the basic operation of a file system that works over the network - it merely
provides that file system with a persistent place in which it can cache data. For instance, a client can still
mount an NFS share whether or not FS-Cache is enabled. In addition, cached NFS can handle files that
won't fit into the cache (whether individually or collectively) as files can be partially cached and do not
have to be read completely up front. FS-Cache also hides all I/O errors that occur in the cache from the
client file system driver.


To provide caching services, FS-Cache needs a cache back end. A cache back end is a storage driver
configured to provide caching services (i.e. cachefiles). In this case, FS-Cache requires a mounted
block-based file system that supports bmap and extended attributes (e.g. ext3) as its cache back end.

FS-Cache cannot arbitrarily cache any file system, whether through the network or otherwise: the shared
file system's driver must be altered to allow interaction with FS-Cache, data storage/retrieval, and
metadata setup and validation. FS-Cache needs indexing keys and coherency data from the cached file
system to support persistence: indexing keys to match file system objects to cache objects, and
coherency data to determine whether the cache objects are still valid.

NOTE

In Red Hat Enterprise Linux 7, the cachefilesd package is not installed by default and
needs to be installed manually.

10.1. PERFORMANCE GUARANTEE


FS-Cache does not guarantee increased performance; however, it ensures consistent performance by
avoiding network congestion. Using a cache back end incurs a performance penalty: for example,
cached NFS shares add disk accesses to cross-network lookups. While FS-Cache tries to be as
asynchronous as possible, there are synchronous paths (for example, reads) where this is not possible.

For example, using FS-Cache to cache an NFS share between two computers over an otherwise
unladen GigE network will not demonstrate any performance improvements on file access. Rather, NFS
requests would be satisfied faster from server memory than from local disk.

The use of FS-Cache, therefore, is a compromise between various factors. If FS-Cache is being used to
cache NFS traffic, for instance, it may slow the client down a little, but massively reduce the network and
server loading by satisfying read requests locally without consuming network bandwidth.

10.2. SETTING UP A CACHE


Currently, Red Hat Enterprise Linux 7 only provides the cachefiles caching back end. The
cachefilesd daemon initiates and manages cachefiles. The /etc/cachefilesd.conf file
controls how cachefiles provides caching services.

The first setting to configure in a cache back end is which directory to use as a cache. To configure this,
use the following parameter:

dir /path/to/cache

Typically, the cache back end directory is set in /etc/cachefilesd.conf as /var/cache/fscache, as in:

dir /var/cache/fscache

If you want to change the cache back end directory, the SELinux context of the new directory must be
the same as that of /var/cache/fscache:

# semanage fcontext -a -e /var/cache/fscache /path/to/cache


# restorecon -Rv /path/to/cache

Replace /path/to/cache with the directory name while setting up cache.


NOTE

If the preceding commands for setting the SELinux context do not work, use the following
commands:

# semanage permissive -a cachefilesd_t


# semanage permissive -a cachefiles_kernel_t

FS-Cache will store the cache in the file system that hosts /path/to/cache. On a laptop, it is
advisable to use the root file system (/) as the host file system, but for a desktop machine it would be
more prudent to mount a disk partition specifically for the cache.

File systems that support functionalities required by FS-Cache cache back end include the Red Hat
Enterprise Linux 7 implementations of the following file systems:

ext3 (with extended attributes enabled)

ext4

Btrfs

XFS

The host file system must support user-defined extended attributes; FS-Cache uses these attributes to
store coherency maintenance information. To enable user-defined extended attributes for ext3 file
systems (i.e. device), use:

# tune2fs -o user_xattr /dev/device

Alternatively, extended attributes for a file system can be enabled at mount time, as in:

# mount /dev/device /path/to/cache -o user_xattr

The cache back end works by maintaining a certain amount of free space on the partition hosting the
cache. It grows and shrinks the cache in response to other elements of the system using up free space,
making it safe to use on the root file system (for example, on a laptop). FS-Cache sets defaults on this
behavior, which can be configured via cache cull limits. For more information about configuring cache
cull limits, refer to Section 10.4, “Setting Cache Cull Limits”.

Once the configuration file is in place, start up the cachefilesd service:

# systemctl start cachefilesd

To configure cachefilesd to start at boot time, execute the following command as root:

# systemctl enable cachefilesd

10.3. USING THE CACHE WITH NFS


NFS will not use the cache unless explicitly instructed. To configure an NFS mount to use FS-Cache,
include the -o fsc option to the mount command:


# mount nfs-share:/ /mount/point -o fsc

All access to files under /mount/point will go through the cache, unless the file is opened for direct I/O
or writing. For more information, see Section 10.3.2, “Cache Limitations with NFS”. NFS indexes cache
contents using the NFS file handle, not the file name, which means hard-linked files share the cache
correctly.
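
To make such a mount persistent, the fsc option can also be added to the mount options in
/etc/fstab. The following entry is only a sketch with placeholder server and mount point names:

nfs-share:/ /mount/point nfs defaults,fsc 0 0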

Caching is supported in NFS versions 2, 3, and 4. However, each version uses different branches for
caching.

10.3.1. Cache Sharing


There are several potential issues to do with NFS cache sharing. Because the cache is persistent, blocks
of data in the cache are indexed on a sequence of four keys:

Level 1: Server details

Level 2: Some mount options; security type; FSID; uniquifier

Level 3: File Handle

Level 4: Page number in file

To avoid coherency management problems between superblocks, all NFS superblocks that wish to
cache data have unique Level 2 keys. Normally, two NFS mounts with the same source volume and
options share a superblock, and thus share the caching, even if they mount different directories within
that volume.

Example 10.1. Cache Sharing

Take the following two mount commands:

mount home0:/disk0/fred /home/fred -o fsc

mount home0:/disk0/jim /home/jim -o fsc

Here, /home/fred and /home/jim likely share the superblock as they have the same options,
especially if they come from the same volume/partition on the NFS server (home0). Now, consider
the following two mount commands:

mount home0:/disk0/fred /home/fred -o fsc,rsize=230

mount home0:/disk0/jim /home/jim -o fsc,rsize=231

In this case, /home/fred and /home/jim will not share the superblock as they have different
network access parameters, which are part of the Level 2 key. The same goes for the following mount
sequence:

mount home0:/disk0/fred /home/fred1 -o fsc,rsize=230

mount home0:/disk0/fred /home/fred2 -o fsc,rsize=231

Here, the contents of the two subtrees (/home/fred1 and /home/fred2) will be cached twice.


Another way to avoid superblock sharing is to suppress it explicitly with the nosharecache
parameter. Using the same example:

mount home0:/disk0/fred /home/fred -o nosharecache,fsc

mount home0:/disk0/jim /home/jim -o nosharecache,fsc

However, in this case only one of the superblocks is permitted to use cache since there is nothing to
distinguish the Level 2 keys of home0:/disk0/fred and home0:/disk0/jim. To address this,
add a unique identifier on at least one of the mounts, i.e. fsc=unique-identifier. For example:

mount home0:/disk0/fred /home/fred -o nosharecache,fsc

mount home0:/disk0/jim /home/jim -o nosharecache,fsc=jim

Here, the unique identifier jim is added to the Level 2 key used in the cache for /home/jim.

10.3.2. Cache Limitations with NFS


Opening a file from a shared file system for direct I/O automatically bypasses the cache. This is
because this type of access must be direct to the server.

Opening a file from a shared file system for writing will not work on NFS version 2 and 3. The
protocols of these versions do not provide sufficient coherency management information for the
client to detect a concurrent write to the same file from another client.

Opening a file from a shared file system for either direct I/O or writing flushes the cached copy of
the file. FS-Cache will not cache the file again until it is no longer opened for direct I/O or writing.

Furthermore, this release of FS-Cache only caches regular NFS files. FS-Cache will not cache
directories, symlinks, device files, FIFOs and sockets.

10.4. SETTING CACHE CULL LIMITS


The cachefilesd daemon works by caching remote data from shared file systems to free space on the
disk. This could potentially consume all available free space, which is problematic if the disk also
contains the root partition. To control this, cachefilesd tries to maintain a certain amount of free space
by discarding old objects (that is, objects accessed less recently) from the cache. This behavior is known
as cache culling.

Cache culling is done on the basis of the percentage of blocks and the percentage of files available in the
underlying file system. There are six limits controlled by settings in /etc/cachefilesd.conf:

brun N% (percentage of blocks) , frun N% (percentage of files)


If the amount of free space and the number of available files in the cache rise above both these
limits, then culling is turned off.

bcull N% (percentage of blocks), fcull N% (percentage of files)


If the amount of available space or the number of files in the cache falls below either of these limits,
then culling is started.

bstop N% (percentage of blocks), fstop N% (percentage of files)


If the amount of available space or the number of available files in the cache falls below either of
these limits, then no further allocation of disk space or files is permitted until culling has raised things
above these limits again.

The default value of N for each setting is as follows:

brun/frun - 10%

bcull/fcull - 7%

bstop/fstop - 3%

When configuring these settings, the following must hold true:

0 ≤ bstop < bcull < brun < 100

0 ≤ fstop < fcull < frun < 100

These limits are percentages of available space and available files; they do not correspond to 100
minus the percentages displayed by the df program.
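
As a sketch, an /etc/cachefilesd.conf file that spells out these defaults explicitly might look like the
following; the dir and tag lines are placeholders:

dir /var/cache/fscache
tag mycache
brun 10%
frun 10%
bcull 7%
fcull 7%
bstop 3%
fstop 3%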

IMPORTANT

Culling depends on both bxxx and fxxx pairs simultaneously; they cannot be treated
separately.

10.5. STATISTICAL INFORMATION


FS-Cache also keeps track of general statistical information. To view this information, use:

# cat /proc/fs/fscache/stats

FS-Cache statistics include information on decision points and object counters. For more information,
see the following kernel document:

/usr/share/doc/kernel-doc-version/Documentation/filesystems/caching/fscache.txt

10.6. FS-CACHE REFERENCES


For more information on cachefilesd and how to configure it, see man cachefilesd and man
cachefilesd.conf. The following kernel documents also provide additional information:

/usr/share/doc/cachefilesd-version-number/README

/usr/share/man/man5/cachefilesd.conf.5.gz

/usr/share/man/man8/cachefilesd.8.gz

For general information about FS-Cache, including details on its design constraints, available statistics,
and capabilities, see the following kernel document:
/usr/share/doc/kernel-doc-version/Documentation/filesystems/caching/fscache.txt


PART II. STORAGE ADMINISTRATION


The Storage Administration part begins with storage considerations for Red Hat Enterprise Linux 7,
followed by instructions on partitions, logical volume management, and swap partitions. Disk quotas
and RAID systems come next, followed by the mount command, volume_key, and ACLs. SSD tuning,
write barriers, I/O limits, and diskless systems follow. The large chapter on online storage comes next,
and the part finishes with device mapper multipathing and virtual storage.


CHAPTER 11. STORAGE CONSIDERATIONS DURING INSTALLATION

Many storage device and file system settings can only be configured at install time. Other settings, such
as file system type, can only be modified up to a certain point without requiring a reformat. As such, it is
prudent that you plan your storage configuration accordingly before installing Red Hat
Enterprise Linux 7.

This chapter discusses several considerations when planning a storage configuration for your system.
For installation instructions (including storage configuration during installation), see the Installation Guide
provided by Red Hat.

For information on what Red Hat officially supports with regards to size and storage limits, see the article
http://www.redhat.com/resourcelibrary/articles/articles-red-hat-enterprise-linux-6-technology-capabilities-and-limits.

11.1. SPECIAL CONSIDERATIONS


This section enumerates several issues and factors to consider for specific storage configurations.

Separate Partitions for /home, /opt, /usr/local


If it is likely that you will upgrade your system in the future, place /home, /opt, and /usr/local on a
separate device. This allows you to reformat the devices or file systems containing the operating system
while preserving your user and application data.

DASD and zFCP Devices on IBM System Z


On the IBM System Z platform, DASD and zFCP devices are configured via the Channel Command
Word (CCW) mechanism. CCW paths must be explicitly added to the system and then brought online.
For DASD devices, this means listing the device numbers (or device number ranges) as the DASD=
parameter at the boot command line or in a CMS configuration file.

For zFCP devices, you must list the device number, logical unit number (LUN), and world wide port name
(WWPN). Once the zFCP device is initialized, it is mapped to a CCW path. The FCP_x= lines on the boot
command line (or in a CMS configuration file) allow you to specify this information for the installer.

Encrypting Block Devices Using LUKS


Formatting a block device for encryption using LUKS/dm-crypt destroys any existing formatting on that
device. As such, you should decide which devices to encrypt (if any) before the new system's storage
configuration is activated as part of the installation process.

Stale BIOS RAID Metadata


Moving a disk from a system configured for firmware RAID without removing the RAID metadata from the
disk can prevent Anaconda from correctly detecting the disk.



WARNING

Removing/deleting RAID metadata from disk could potentially destroy any stored
data. Red Hat recommends that you back up your data before proceeding.

To delete RAID metadata from the disk, use the following command:

dmraid -r -E /device/

For more information about managing RAID devices, see man dmraid and Chapter 18, Redundant
Array of Independent Disks (RAID).

iSCSI Detection and Configuration


For plug and play detection of iSCSI drives, configure them in the firmware of an iBFT boot-capable
network interface card (NIC). CHAP authentication of iSCSI targets is supported during installation.
However, iSNS discovery is not supported during installation.

FCoE Detection and Configuration


For plug and play detection of Fibre Channel over Ethernet (FCoE) drives, configure them in the firmware
of an EDD boot-capable NIC.

DASD
Direct-access storage devices (DASD) cannot be added or configured during installation. Such devices
are specified in the CMS configuration file.

Block Devices with DIF/DIX Enabled


DIF/DIX is a hardware checksum feature provided by certain SCSI host bus adapters and block devices.
When DIF/DIX is enabled, errors occur if the block device is used as a general-purpose block device.
Buffered I/O or mmap(2)-based I/O will not work reliably, as there are no interlocks in the buffered write
path to prevent buffered data from being overwritten after the DIF/DIX checksum has been calculated.

This causes the I/O to later fail with a checksum error. This problem is common to all block device (or file
system-based) buffered I/O or mmap(2) I/O, so it is not possible to work around these errors caused by
overwrites.

As such, block devices with DIF/DIX enabled should only be used with applications that use O_DIRECT.
Such applications should use the raw block device. Alternatively, it is also safe to use the XFS file
system on a DIF/DIX enabled block device, as long as only O_DIRECT I/O is issued through the file
system. XFS is the only file system that does not fall back to buffered I/O when doing certain allocation
operations.

The responsibility for ensuring that the I/O data does not change after the DIF/DIX checksum has been
computed always lies with the application, so only applications designed for use with O_DIRECT I/O and
DIF/DIX hardware should use DIF/DIX.


CHAPTER 12. FILE SYSTEM CHECK


File systems may be checked for consistency, and optionally repaired, with file system-specific
userspace tools. These tools are often referred to as fsck tools, where fsck is a shortened version of
file system check.

NOTE

These file system checkers only guarantee metadata consistency across the file system;
they have no awareness of the actual data contained within the file system and are not
data recovery tools.

File system inconsistencies can occur for various reasons, including but not limited to hardware errors,
storage administration errors, and software bugs.

Before modern metadata-journaling file systems became common, a file system check was required any
time a system crashed or lost power. This was because a file system update could have been
interrupted, leading to an inconsistent state. As a result, a file system check is traditionally run on each
file system listed in /etc/fstab at boot-time. For journaling file systems, this is usually a very short
operation, because the file system's metadata journaling ensures consistency even after a crash.

However, there are times when a file system inconsistency or corruption may occur, even for journaling
file systems. When this happens, the file system checker must be used to repair the file system. The
following provides best practices and other useful information when performing this procedure.

IMPORTANT

Red Hat does not recommend disabling the boot-time file system check unless the machine does not
boot, the file system is extremely large, or the file system is on remote storage. You can disable the file
system check at boot by setting the sixth field in /etc/fstab to 0.
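
For example, a hypothetical /etc/fstab entry with the boot-time check disabled (sixth field set to 0)
might look like this:

/dev/sdb1 /data ext4 defaults 0 0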

12.1. BEST PRACTICES FOR FSCK


Generally, running the file system check and repair tool can be expected to automatically repair at least
some of the inconsistencies it finds. In some cases, severely damaged inodes or directories may be
discarded if they cannot be repaired. Significant changes to the file system may occur. To ensure that
unexpected or undesirable changes are not permanently made, perform the following precautionary
steps:

Dry run
Most file system checkers have a mode of operation which checks but does not repair the file system.
In this mode, the checker prints any errors that it finds and actions that it would have taken, without
actually modifying the file system.

NOTE

Later phases of consistency checking may print extra errors as the checker discovers
inconsistencies that would have been fixed in earlier phases had it been running in
repair mode.

Operate first on a file system image


Most file systems support the creation of a metadata image, a sparse copy of the file system which
contains only metadata. Because file system checkers operate only on metadata, such an image can
be used to perform a dry run of an actual file system repair, to evaluate what changes would actually
be made. If the changes are acceptable, the repair can then be performed on the file system itself.

NOTE

Severely damaged file systems may cause problems with metadata image creation.

Save a file system image for support investigations


A pre-repair file system metadata image can often be useful for support investigations if there is a
possibility that the corruption was due to a software bug. Patterns of corruption present in the pre-
repair image may aid in root-cause analysis.

Operate only on unmounted file systems


A file system repair must be run only on unmounted file systems. The tool must have sole access to
the file system or further damage may result. Most file system tools enforce this requirement in repair
mode, although some only support check-only mode on a mounted file system. If check-only mode is
run on a mounted file system, it may find spurious errors that would not be found when run on an
unmounted file system.

Disk errors
File system check tools cannot repair hardware problems. A file system must be fully readable and
writable if repair is to operate successfully. If a file system was corrupted due to a hardware error, the
file system must first be moved to a good disk, for example with the dd(8) utility.
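
As an illustration only, a sector-level copy to a replacement disk could be made with dd; the device
names are placeholders, and conv=noerror,sync keeps the copy going past unreadable sectors:

# dd if=/dev/failing_disk of=/dev/replacement_disk bs=4M conv=noerror,sync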

12.2. FILE SYSTEM-SPECIFIC INFORMATION FOR FSCK

12.2.1. ext2, ext3, and ext4


All of these file systems use the e2fsck binary to perform file system checks and repairs. The file names
fsck.ext2, fsck.ext3, and fsck.ext4 are hardlinks to this same binary. These binaries are run
automatically at boot time and their behavior differs based on the file system being checked and the state
of the file system.

A full file system check and repair is invoked for ext2, which is not a metadata journaling file system, and
for ext4 file systems without a journal.

For ext3 and ext4 file systems with metadata journaling, the journal is replayed in userspace and the
binary exits. This is the default action as journal replay ensures a consistent file system after a crash.

If these file systems encounter metadata inconsistencies while mounted, they record this fact in the file
system superblock. If e2fsck finds that a file system is marked with such an error, e2fsck performs a
full check after replaying the journal (if present).

e2fsck may ask for user input during the run if the -p option is not specified. The -p option tells
e2fsck to automatically do all repairs that may be done safely. If user intervention is required, e2fsck
indicates the unfixed problem in its output and reflects this status in the exit code.

Commonly used e2fsck run-time options include:

-n


No-modify mode. Check-only operation.

-b superblock
Specify the block number of an alternate superblock if the primary one is damaged.

-f
Force full check even if the superblock has no recorded errors.

-j journal-dev
Specify the external journal device, if any.

-p
Automatically repair or "preen" the file system with no user input.

-y
Assume an answer of "yes" to all questions.

All options for e2fsck are specified in the e2fsck(8) manual page.

The following five basic phases are performed by e2fsck while running:

1. Inode, block, and size checks.

2. Directory structure checks.

3. Directory connectivity checks.

4. Reference count checks.

5. Group summary info checks.

The e2image(8) utility can be used to create a metadata image prior to repair for diagnostic or testing
purposes. The -r option should be used for testing purposes in order to create a sparse file of the same
size as the file system itself. e2fsck can then operate directly on the resulting file. The -Q option should
be specified if the image is to be archived or provided for diagnostic purposes. This creates a more
compact file format suitable for transfer.
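
For example, assuming a hypothetical partition /dev/sda1, an image could be created and checked as
follows:

# e2image -r /dev/sda1 /tmp/sda1.img
# e2fsck -fn /tmp/sda1.img
# e2image -Q /dev/sda1 /tmp/sda1.qcow2

The first command creates a sparse raw image of the same size as the file system, the second runs a
forced, no-modify check against the image, and the third creates a compact image suitable for
archiving or transfer.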

12.2.2. XFS
No repair is performed automatically at boot time. To initiate a file system check or repair, use the
xfs_repair tool.

NOTE

Although an fsck.xfs binary is present in the xfsprogs package, this is present only to
satisfy initscripts that look for an fsck.file system binary at boot time. fsck.xfs
immediately exits with an exit code of 0.

Older xfsprogs packages contain an xfs_check tool. This tool is very slow and does not
scale well for large file systems. As such, it has been deprecated in favor of xfs_repair
-n.


A clean log on a file system is required for xfs_repair to operate. If the file system was not cleanly
unmounted, it should be mounted and unmounted prior to using xfs_repair. If the log is corrupt and
cannot be replayed, the -L option may be used to zero the log.

IMPORTANT

The -L option must only be used if the log cannot be replayed. The option discards all
metadata updates in the log and results in further inconsistencies.

It is possible to run xfs_repair in a dry run, check-only mode by using the -n option. No changes will
be made to the file system when this option is specified.

xfs_repair takes very few options. Commonly used options include:

-n
No modify mode. Check-only operation.

-L
Zero metadata log. Use only if log cannot be replayed with mount.

-m maxmem
Limit memory used during run to maxmem MB. 0 can be specified to obtain a rough estimate of the
minimum memory required.

-l logdev
Specify the external log device, if present.

All options for xfs_repair are specified in the xfs_repair(8) manual page.

The following eight basic phases are performed by xfs_repair while running:

1. Inode and inode blockmap (addressing) checks.

2. Inode allocation map checks.

3. Inode size checks.

4. Directory checks.

5. Pathname checks.

6. Link count checks.

7. Freemap checks.

8. Super block checks.

For more information, see the xfs_repair(8) manual page.

xfs_repair is not interactive. All operations are performed automatically with no input from the user.


If it is desired to create a metadata image prior to repair for diagnostic or testing purposes, the
xfs_metadump(8) and xfs_mdrestore(8) utilities may be used.
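
For example, with a hypothetical /dev/sda1, a metadata dump could be taken, restored to an image
file, and checked as follows:

# xfs_metadump /dev/sda1 /tmp/sda1.metadump
# xfs_mdrestore /tmp/sda1.metadump /tmp/sda1.img
# xfs_repair -n -f /tmp/sda1.img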

12.2.3. Btrfs

NOTE

Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has
been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a
future major release of Red Hat Enterprise Linux.

For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4
Release Notes.

The btrfsck tool is used to check and repair btrfs file systems. This tool is still in early development
and may not detect or repair all types of file system corruption.

By default, btrfsck does not make changes to the file system; that is, it runs check-only mode by
default. If repairs are desired the --repair option must be specified.

The following three basic phases are performed by btrfsck while running:

1. Extent checks.

2. File system root checks.

3. Root reference count checks.

The btrfs-image(8) utility can be used to create a metadata image prior to repair for diagnostic or
testing purposes.
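
For example, assuming a hypothetical /dev/sda1, a metadata image could be created and later
restored to another device with:

# btrfs-image /dev/sda1 /tmp/sda1.image
# btrfs-image -r /tmp/sda1.image /dev/sdb1

Note that restoring overwrites the metadata on the target device.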


CHAPTER 13. PARTITIONS


With the parted utility, you can:

View the existing partition table.

Change the size of existing partitions.

Add partitions from free space or additional hard drives.

The parted package is installed by default on Red Hat Enterprise Linux 7. To start parted, log in as root
and enter the following command:

# parted /dev/sda

Replace /dev/sda with the device name for the drive to configure.

Manipulating Partitions on Devices in Use


For a device to not be in use, none of the partitions on the device can be mounted, and no swap space
on the device can be enabled.

If you want to remove or resize a partition, the device on which that partition resides must not be in use.

It is possible to create a new partition on a device that is in use, but this is not recommended.

Modifying the Partition Table


Modifying the partition table while another partition on the same disk is in use is generally not
recommended because the kernel is not able to reread the partition table. As a consequence, changes
are not applied to a running system. In the described situation, reboot the system, or use the following
command to make the system register new or modified partitions:

# partx --update --nr partition-number disk

The easiest way to modify disks that are currently in use is:

1. Boot the system in rescue mode if the partitions on the disk are impossible to unmount, for
example in the case of a system disk.

2. When prompted to mount the file system, select Skip.

If the drive does not contain any partitions in use, that is, if no system processes use or lock the file
system and prevent it from being unmounted, you can unmount the partitions with the umount command
and turn off all the swap space on the hard drive with the swapoff command.
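
For example, with hypothetical partitions /dev/sdb1 (a mounted file system) and /dev/sdb2 (swap):

# umount /dev/sdb1
# swapoff /dev/sdb2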

To see commonly used parted commands, see Table 13.1, “parted Commands”.

IMPORTANT

Do not use the parted utility to create file systems. Use the mkfs tool instead.

Table 13.1. parted Commands


Command Description

help Display list of available commands

mklabel label Create a disk label for the partition table

mkpart part-type [fs-type] start-mb end-mb Make a partition without creating a new file system

name minor-num name Name the partition for Mac and PC98 disklabels only

print Display the partition table

quit Quit parted

rescue start-mb end-mb Rescue a lost partition from start-mb to end-mb

rm minor-num Remove the partition

select device Select a different device to configure

set minor-num flag state Set the flag on a partition; state is either on or off

toggle [NUMBER [FLAG]] Toggle the state of FLAG on partition NUMBER

unit UNIT Set the default unit to UNIT

13.1. VIEWING THE PARTITION TABLE


To view the partition table:

1. Start parted.

2. Use the following command to view the partition table:

(parted) print

A table similar to the following one appears:

Example 13.1. Partition Table

Model: ATA ST3160812AS (scsi)


Disk /dev/sda: 160GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number Start End Size Type File system Flags


1 32.3kB 107MB 107MB primary ext3 boot
2 107MB 105GB 105GB primary ext3


3 105GB 107GB 2147MB primary linux-swap


4 107GB 160GB 52.9GB extended root
5 107GB 133GB 26.2GB logical ext3
6 133GB 133GB 107MB logical ext3
7 133GB 160GB 26.6GB logical lvm

Following is the description of the partition table:

Model: ATA ST3160812AS (scsi): explains the disk type, manufacturer, model number, and
interface.

Disk /dev/sda: 160GB: displays the disk label type.

In the partition table, Number is the partition number. For example, the partition with minor
number 1 corresponds to /dev/sda1. The Start and End values are in megabytes. Valid
Types are metadata, free, primary, extended, or logical. The File system is the file system
type. The Flags column lists the flags set for the partition. Available flags are boot, root, swap,
hidden, raid, lvm, or lba.

The File system in the partition table can be any of the following:

ext2

ext3

fat16

fat32

hfs

jfs

linux-swap

ntfs

reiserfs

hp-ufs

sun-ufs

xfs

If a File system of a device shows no value, this means that its file system type is unknown.


NOTE

To select a different device without having to restart parted, use the following command
and replace /dev/sda with the device you want to select:

(parted) select /dev/sda

It allows you to view or configure the partition table of a device.

13.2. CREATING A PARTITION


WARNING

Do not attempt to create a partition on a device that is in use.

Procedure 13.1. Creating a Partition

1. Before creating a partition, boot into rescue mode, or unmount any partitions on the device and
turn off any swap space on the device.

2. Start parted:

# parted /dev/sda

Replace /dev/sda with the device name on which you want to create the partition.

3. View the current partition table to determine if there is enough free space:

(parted) print

If there is not enough free space, you can resize an existing partition. For more information, see
Section 13.5, “Resizing a Partition with fdisk”.

From the partition table, determine the start and end points of the new partition and what partition
type it should be. You can only have four primary partitions, with no extended partition, on a
device. If you need more than four partitions, you can have three primary partitions, one
extended partition, and multiple logical partitions within the extended. For an overview of disk
partitions, see the appendix An Introduction to Disk Partitions in the Red Hat Enterprise Linux 7
Installation Guide.

4. To create partition:

(parted) mkpart part-type name fs-type start end

Replace part-type with primary, logical, or extended as per your requirement.

Replace name with partition-name; name is required for GPT partition tables.


Replace fs-type with any one of btrfs, ext2, ext3, ext4, fat16, fat32, hfs, hfs+, linux-swap, ntfs,
reiserfs, or xfs; fs-type is optional.

Replace start and end with the start and end points of the partition in megabytes.

For example, to create a primary partition with an ext3 file system from 1024 megabytes until
2048 megabytes on a hard drive, type the following command:

(parted) mkpart primary 1024 2048

NOTE

If you use the mkpartfs command instead, the file system is created after the
partition is created. However, parted does not support creating an ext3 file
system. Thus, if you wish to create an ext3 file system, use mkpart and create
the file system with the mkfs command as described later.

The changes start taking place as soon as you press Enter, so review the command before
executing it.

5. View the partition table to confirm that the created partition is in the partition table with the
correct partition type, file system type, and size using the following command:

(parted) print

Also remember the minor number of the new partition so that you can label any file systems on
it.

6. Exit the parted shell:

(parted) quit

7. Use the following command after parted is closed to make sure the kernel recognizes the new
partition:

# cat /proc/partitions

The maximum number of partitions parted can create is 128. While the GUID Partition Table (GPT)
specification allows for more partitions by growing the area reserved for the partition table, common
practice used by parted is to limit it to enough area for 128 partitions.

13.2.1. Formatting and Labeling the Partition


To format and label the partition use the following procedure:

Procedure 13.2. Format and Label the Partition

1. The partition does not have a file system. To create the ext4 file system, use:

# mkfs.ext4 /dev/sda6



WARNING

Formatting the partition permanently destroys any data that currently exists
on the partition.

2. Label the file system on the partition. For example, if the file system on the new partition is
/dev/sda6 and you want to label it Work, use:

# e2label /dev/sda6 "Work"

By default, the installation program uses the mount point of the partition as the label to make
sure the label is unique. You can use any label you want.

3. Create a mount point (e.g. /work) as root.

13.2.2. Add the Partition to /etc/fstab

1. As root, edit the /etc/fstab file to include the new partition using the partition's UUID.

Use the command blkid -o list for a complete list of the partition's UUID, or blkid
device for individual device details.

In /etc/fstab:

The first column should contain UUID= followed by the file system's UUID.

The second column should contain the mount point for the new partition.

The third column should be the file system type: for example, ext4 or swap.

The fourth column lists mount options for the file system. The word defaults here means
that the partition is mounted at boot time with default options.

The fifth and sixth field specify backup and check options. Example values for a non-root
partition are 0 2.

2. Regenerate mount units so that your system registers the new configuration:

# systemctl daemon-reload

3. Try mounting the file system to verify that the configuration works:

# mount /work

Additional Information

If you need more information about the format of /etc/fstab, see the fstab(5) man page.
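
A complete entry following this layout might look like the following; the UUID and mount point are
hypothetical:

UUID=1b4e28ba-2fa1-11d2-883f-b9a761bde3fb /work ext4 defaults 0 2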


13.3. REMOVING A PARTITION


WARNING

Do not attempt to remove a partition on a device that is in use.

Procedure 13.3. Remove a Partition

1. Before removing a partition, do one of the following:

Boot into rescue mode, or

Unmount any partitions on the device and turn off any swap space on the device.

2. Start the parted utility:

# parted device

Replace device with the device on which to remove the partition: for example, /dev/sda.

3. View the current partition table to determine the minor number of the partition to remove:

(parted) print

4. Remove the partition with the command rm. For example, to remove the partition with minor
number 3:

(parted) rm 3

The changes start taking place as soon as you press Enter, so review the command before
committing to it.

5. After removing the partition, use the print command to confirm that it is removed from the
partition table:

(parted) print

6. Exit from the parted shell:

(parted) quit

7. Examine the content of the /proc/partitions file to make sure the kernel knows the partition
is removed:

# cat /proc/partitions


8. Remove the partition from the /etc/fstab file. Find the line that declares the removed
partition, and remove it from the file.

9. Regenerate mount units so that your system registers the new /etc/fstab configuration:

# systemctl daemon-reload

13.4. SETTING A PARTITION TYPE


The partition type, not to be confused with the file system type, is used by a running system only rarely.
However, the partition type matters to on-the-fly generators, such as systemd-gpt-auto-generator,
which use the partition type to, for example, automatically identify and mount devices.

You can start the fdisk utility and use the t command to set the partition type. The following example
shows how to change the partition type of the first partition to 0x83, the default on Linux:

# fdisk /dev/sdc
Command (m for help): t
Selected partition 1
Partition type (type L to list all types): 83
Changed type of partition 'Linux LVM' to 'Linux'.

The parted utility provides some control of partition types by trying to map the partition type to 'flags',
which is not convenient for end users. The parted utility can handle only certain partition types, for
example LVM or RAID. To remove, for example, the lvm flag from the first partition with parted, use:

# parted /dev/sdc 'set 1 lvm off'

For a list of commonly used partition types and hexadecimal numbers used to represent them, see the
Partition Types table in the Partitions: Turning One Drive Into Many appendix of the Red Hat
Enterprise Linux 7 Installation Guide.

13.5. RESIZING A PARTITION WITH FDISK


The fdisk utility allows you to create and manipulate GPT, MBR, Sun, SGI, and BSD partition tables.
On disks with a GUID Partition Table (GPT), using the parted utility is recommended, as fdisk GPT
support is in an experimental phase.

Before resizing a partition, back up the data stored on the file system and test the procedure, as the only
way to change a partition size using fdisk is by deleting and recreating the partition.

IMPORTANT

The partition you are resizing must be the last partition on a particular disk.

Red Hat only supports extending and resizing LVM partitions.

Procedure 13.4. Resizing a Partition

The following procedure is provided only for reference. To resize a partition using fdisk:

1. Unmount the device:


# umount /dev/vda

2. Run fdisk disk_name. For example:

# fdisk /dev/vda
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help):

3. Use the p option to determine the line number of the partition to be deleted.

Command (m for help): p


Disk /dev/vda: 16.1 GB, 16106127360 bytes, 31457280 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0006d09a

Device Boot Start End Blocks Id System


/dev/vda1 * 2048 1026047 512000 83 Linux
/dev/vda2 1026048 31457279 15215616 8e Linux LVM

4. Use the d option to delete a partition. If there is more than one partition available, fdisk
prompts you to provide a number of the partition to delete:

Command (m for help): d


Partition number (1,2, default 2): 2
Partition 2 is deleted

5. Use the n option to create a partition and follow the prompts. Allow enough space for any future
resizing. The fdisk default behavior (press Enter) is to use all space on the device. You can
specify the end of the partition by sectors, or specify a human-readable size by using
+<size><suffix>, for example +500M, or +10G.

Red Hat recommends using the human-readable size specification if you do not want to use all
free space, as fdisk aligns the end of the partition with the physical sectors. If you specify the
size by providing an exact number (in sectors), fdisk does not align the end of the partition.

Command (m for help): n


Partition type:
p primary (1 primary, 0 extended, 3 free)
e extended
Select (default p): *Enter*
Using default response p
Partition number (2-4, default 2): *Enter*
First sector (1026048-31457279, default 1026048): *Enter*
Using default value 1026048


Last sector, +sectors or +size{K,M,G} (1026048-31457279, default 31457279): +500M
Partition 2 of type Linux and of size 500 MiB is set

6. Set the partition type to LVM:

Command (m for help): t


Partition number (1,2, default 2): *Enter*
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

7. Write the changes with the w option when you are sure the changes are correct, as errors can
cause instability with the selected partition.

8. Run e2fsck on the device to check for consistency:

# e2fsck /dev/vda
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
ext4-1: 11/131072 files (0.0% non-contiguous), 27050/524128 blocks

9. Mount the device:

# mount /dev/vda

For more information, see the fdisk(8) manual page.


CHAPTER 14. CREATING AND MAINTAINING SNAPSHOTS WITH SNAPPER

A snapshot volume is a point in time copy of a target volume that provides a way to revert a file system
back to an earlier state. Snapper is a command-line tool to create and maintain snapshots for Btrfs and
thinly-provisioned LVM file systems.

14.1. CREATING INITIAL SNAPPER CONFIGURATION


Snapper requires discrete configuration files for each volume it operates on. You must set up the
configuration files manually. By default, only the root user is allowed to perform snapper commands.

The file system recommended by Red Hat with Snapper depends on your Red Hat Enterprise Linux
version:

In Red Hat Enterprise Linux 7.4 or earlier versions of Red Hat Enterprise Linux 7, use ext4 with
Snapper. Use the XFS file system on lvm-thin volumes only if you are monitoring the amount of
free space in the pool to prevent out-of-space problems that can lead to a failure.

In Red Hat Enterprise Linux 7.5 or later versions, use XFS with Snapper.

Note that the Btrfs tools and file system are provided as a Technology Preview, which makes them
unsuitable for production systems.

Although it is possible to allow a user or group other than root to use certain Snapper commands,
Red Hat recommends that you do not add elevated permissions to otherwise unprivileged users or
groups. Such a configuration bypasses SELinux and could pose a security risk. Red Hat recommends
that you review these capabilities with your Security Team and consider using the sudo infrastructure
instead.

NOTE

Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has
been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a
future major release of Red Hat Enterprise Linux.

For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4
Release Notes.

Procedure 14.1. Creating a Snapper Configuration File

1. Create or choose either:

A thinly-provisioned logical volume with a Red Hat supported file system on top of it, or

A Btrfs subvolume.

2. Mount the file system.

3. Create the configuration file that defines this volume.

For LVM2:


# snapper -c config_name create-config -f "lvm(fs_type)" /mount-point

For example, to create a configuration file called lvm_config on an LVM2 subvolume with an
ext4 file system, mounted at /lvm_mount, use:

# snapper -c lvm_config create-config -f "lvm(ext4)" /lvm_mount

For Btrfs:

# snapper -c config_name create-config -f btrfs /mount-point

The -c config_name option specifies the name of the configuration file.

The create-config tells snapper to create a configuration file.

The -f file_system tells snapper what file system to use; if this is omitted snapper will
attempt to detect the file system.

The /mount-point is where the subvolume or thinly-provisioned LVM2 file system is mounted.

Alternatively, to create a configuration file called btrfs_config on a Btrfs subvolume that is
mounted at /btrfs_mount, use:

# snapper -c btrfs_config create-config -f btrfs /btrfs_mount

The configuration files are stored in the /etc/snapper/configs/ directory.

14.2. CREATING A SNAPPER SNAPSHOT


Snapper can create the following kinds of snapshots:

Pre Snapshot
A pre snapshot serves as a point of origin for a post snapshot. The two are closely tied and designed
to track file system modification between the two points. The pre snapshot must be created before
the post snapshot.

Post Snapshot
A post snapshot serves as the end point to the pre snapshot. The coupled pre and post snapshots
define a range for comparison. By default, every new snapper volume is configured to create a
background comparison after a related post snapshot is created successfully.

Single Snapshot
A single snapshot is a standalone snapshot created at a specific moment. These can be used to
track a timeline of modifications and have a general point to return to later.

14.2.1. Creating a Pre and Post Snapshot Pair

14.2.1.1. Creating a Pre Snapshot with Snapper


To create a pre snapshot, use:

# snapper -c config_name create -t pre

The -c config_name option creates a snapshot according to the specifications in the named
configuration file. If the configuration file does not yet exist, see Section 14.1, “Creating Initial Snapper
Configuration”.

The create -t option specifies what type of snapshot to create. Accepted entries are pre, post, or
single.

For example, to create a pre snapshot using the lvm_config configuration file, as created in
Section 14.1, “Creating Initial Snapper Configuration”, use:

# snapper -c lvm_config create -t pre -p
1

The -p option prints the number of the created snapshot and is optional.

14.2.1.2. Creating a Post Snapshot with Snapper

A post snapshot is the end point of the snapshot and should be created after the parent pre snapshot by
following the instructions in Section 14.2.1.1, “Creating a Pre Snapshot with Snapper”.

Procedure 14.2. Creating a Post Snapshot

1. Determine the number of the pre snapshot:

# snapper -c config_name list

For example, to display the list of snapshots created using the configuration file lvm_config,
use the following:

# snapper -c lvm_config list


Type | # | Pre # | Date | User | Cleanup |
Description | Userdata
-------+---+-------+-------------------+------+----------+-------
------+---------
single | 0 | | | root | | current
|
pre | 1 | | Mon 06<...> | root | |
|

This output shows that the pre snapshot is number 1.

2. Create a post snapshot that is linked to a previously created pre snapshot:

# snapper -c config_file create -t post --pre-num pre_snapshot_number

The -t post option specifies the creation of the post snapshot type.

The --pre-num option specifies the corresponding pre snapshot.


For example, to create a post snapshot using the lvm_config configuration file and is linked to
pre snapshot number 1, use:

# snapper -c lvm_config create -t post --pre-num 1 -p
2

The -p option prints the number of the created snapshot and is optional.

3. The pre and post snapshots 1 and 2 are now created and paired. Verify this with the list
command:

# snapper -c lvm_config list


Type | # | Pre # | Date | User | Cleanup |
Description | Userdata
-------+---+-------+-------------------+------+----------+-------
------+---------
single | 0 | | | root | | current
|
pre | 1 | | Mon 06<...> | root | |
|
post | 2 | 1 | Mon 06<...> | root | |
|

14.2.1.3. Wrapping a Command in Pre and Post Snapshots

You can also wrap a command within a pre and post snapshot, which can be useful when testing. See
Procedure 14.3, “Wrapping a Command in Pre and Post Snapshots”, which is a shortcut for the following
steps:

1. Running the snapper create pre snapshot command.

2. Running a command or a list of commands to perform actions with a possible impact on the file
system content.

3. Running the snapper create post snapshot command.

Procedure 14.3. Wrapping a Command in Pre and Post Snapshots

1. To wrap a command in pre and post snapshots:

# snapper -c lvm_config create --command "command_to_be_tracked"

For example, to track the creation of the /lvm_mount/hello_file file:

# snapper -c lvm_config create --command "echo Hello > /lvm_mount/hello_file"

2. To verify this, use the status command:

# snapper -c config_file status first_snapshot_number..second_snapshot_number

For example, to track the changes made in the first step:


# snapper -c lvm_config status 3..4


+..... /lvm_mount/hello_file

Use the list command to verify the number of the snapshot if needed.

For more information on the status command, see Section 14.3, “Tracking Changes Between
Snapper Snapshots”.

Note that there is no guarantee that the command in the given example is the only thing the snapshots
capture. Snapper also records anything that is modified by the system, not just what a user modifies.

14.2.2. Creating a Single Snapper Snapshot


Creating a single snapper snapshot is similar to creating a pre or post snapshot, except that the
create -t option specifies single. The single snapshot is used to create a single snapshot in time
without relating it to any others. However, if you are interested in a straightforward way to create
snapshots of LVM2 thin volumes without the need to automatically generate comparisons or list
additional information, Red Hat recommends using the System Storage Manager instead of Snapper for
this purpose, as described in Section 16.2.6, “Snapshot”.

To create a single snapshot, use:

# snapper -c config_name create -t single

For example, the following command creates a single snapshot using the lvm_config configuration file.

# snapper -c lvm_config create -t single

Although single snapshots are not specifically designed to track changes, you can use the snapper
diff, xadiff, and status commands to compare any two snapshots. For more information on these
commands, see Section 14.3, “Tracking Changes Between Snapper Snapshots” .

14.2.3. Configuring Snapper to Take Automated Snapshots


Taking automated snapshots is one of the key features of Snapper. By default, when you configure
Snapper for a volume, Snapper starts taking a snapshot of the volume every hour.

Under the default configuration, Snapper keeps:

10 hourly snapshots, and the final hourly snapshot is saved as a “daily” snapshot.

10 daily snapshots, and the final daily snapshot for a month is saved as a “monthly” snapshot.

10 monthly snapshots, and the final monthly snapshot is saved as a “yearly” snapshot.

10 yearly snapshots.

Note that Snapper keeps by default no more than 50 snapshots in total. However, Snapper keeps by
default all snapshots created less than 1,800 seconds ago.

The default configuration is specified in the /etc/snapper/config-templates/default file. When
you use the snapper create-config command to create a configuration, any unspecified values are
set based on the default configuration. You can edit the configuration for any defined volume in the
/etc/snapper/configs/config_name file.
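
As a sketch, the timeline-related keys in /etc/snapper/configs/config_name that correspond to the
defaults described above look roughly as follows; verify the key names against your installed default
template:

TIMELINE_CREATE="yes"
TIMELINE_MIN_AGE="1800"
TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_DAILY="10"
TIMELINE_LIMIT_MONTHLY="10"
TIMELINE_LIMIT_YEARLY="10"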


14.3. TRACKING CHANGES BETWEEN SNAPPER SNAPSHOTS


Use the status, diff, and xadiff commands to track the changes made to a subvolume between
snapshots:

status
The status command shows a list of files and directories that have been created, modified, or
deleted between two snapshots, that is a comprehensive list of changes between two snapshots. You
can use this command to get an overview of the changes without excessive details.

For more information, see Section 14.3.1, “Comparing Changes with the status Command”.

diff
The diff command shows a diff of modified files and directories between two snapshots as received
from the status command if there is at least one modification detected.

For more information, see Section 14.3.2, “Comparing Changes with the diff Command”.

xadiff
The xadiff command compares how the extended attributes of a file or directory have changed
between two snapshots.

For more information, see Section 14.3.3, “Comparing Changes with the xadiff Command”.

14.3.1. Comparing Changes with the status Command

The status command shows a list of files and directories that have been created, modified, or deleted
between two snapshots.

To display the status of files between two snapshots, use:

# snapper -c config_file status first_snapshot_number..second_snapshot_number

Use the list command to determine snapshot numbers if needed.

For example, the following command displays the changes made between snapshot 1 and 2, using the
configuration file lvm_config.

# snapper -c lvm_config status 1..2


tp.... /lvm_mount/dir1
-..... /lvm_mount/dir1/file_a
c.ug.. /lvm_mount/file2
+..... /lvm_mount/file3
....x. /lvm_mount/file4
cp..xa /lvm_mount/file5

Read letters and dots in the first part of the output as columns:

+..... /lvm_mount/file3
||||||
123456


Column 1 indicates any modification of the file (directory entry) type. Possible values are:

Column 1

Output Meaning

. Nothing has changed.

+ File created.

- File deleted.

c Content changed.

t The type of directory entry has changed. For example, a former symbolic link has
changed to a regular file with the same file name.

Column 2 indicates any changes in the file permissions. Possible values are:

Column 2

Output Meaning

. No permissions changed.

p Permissions changed.

Column 3 indicates any changes in the user ownership. Possible values are:

Column 3

Output Meaning

. No user ownership changed.

u User ownership has changed.

Column 4 indicates any changes in the group ownership. Possible values are:

Column 4


Output Meaning

. No group ownership changed.

g Group ownership has changed.

Column 5 indicates any changes in the extended attributes. Possible values are:

Column 5

Output Meaning

. No extended attributes changed.

x Extended attributes changed.

Column 6 indicates any changes in the access control lists (ACLs). Possible values are:

Column 6

Output Meaning

. No ACLs changed.

a ACLs modified.

14.3.2. Comparing Changes with the diff Command

The diff command shows the changes of modified files and directories between two snapshots.

# snapper -c config_name diff first_snapshot_number..second_snapshot_number

Use the list command to determine the number of the snapshot if needed.

For example, to compare the changes made in files between snapshot 1 and snapshot 2 that were
made using the lvm_config configuration file, use:

# snapper -c lvm_config diff 1..2


--- /lvm_mount/.snapshots/13/snapshot/file4 19<...>
+++ /lvm_mount/.snapshots/14/snapshot/file4 20<...>
@@ -0,0 +1 @@
+words

This output shows that file4 was modified to add "words" to the file.

14.3.3. Comparing Changes with the xadiff Command


The xadiff command compares how the extended attributes of a file or directory have changed
between two snapshots:

# snapper -c config_name xadiff first_snapshot_number..second_snapshot_number

Use the list command to determine the number of the snapshot if needed.

For example, to show the xadiff output between snapshot number 1 and snapshot number 2 that were
made using the lvm_config configuration file, use:

# snapper -c lvm_config xadiff 1..2

14.4. REVERSING CHANGES IN BETWEEN SNAPSHOTS


To reverse changes made between two existing Snapper snapshots, use the undochange command in
the following format, where 1 is the first snapshot and 2 is the second snapshot:

snapper -c config_name undochange 1..2
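For example, to revert the changes recorded between snapshots 1 and 2 of the lvm_config configuration
used earlier in this chapter:

# snapper -c lvm_config undochange 1..2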

IMPORTANT

Using the undochange command does not revert the Snapper volume back to its original
state and does not provide data consistency. Any file modification that occurs outside of
the specified range, for example after snapshot 2, remains unchanged after reverting
back, for example, to the state of snapshot 1. For instance, if undochange is run to undo
the creation of a user, any files owned by that user can still remain.

There is also no mechanism to ensure file consistency as a snapshot is made, so any
inconsistencies that already exist can be transferred back to the snapshot when the
undochange command is used.

Do not use the Snapper undochange command with the root file system, as doing so is
likely to lead to a failure.

The following diagram demonstrates how the undochange command works:


Figure 14.1. Snapper Status over Time

The diagram shows the point in time at which snapshot_1 is created, file_a is created, and then file_b
is deleted. snapshot_2 is then created, after which file_a is edited and file_c is created. This is now
the current state of the system. The current system has an edited version of file_a, no file_b, and a
newly created file_c.

When the undochange command is called, Snapper generates a list of modified files between the first
listed snapshot and the second. In the diagram, if you use the snapper -c SnapperExample
undochange 1..2 command, Snapper creates a list of modified files (that is, file_a is created;
file_b is deleted) and applies them to the current system. Therefore:

the current system will not have file_a, as it had not yet been created when snapshot_1 was
created.

file_b will exist, copied from snapshot_1 into the current system.

file_c will exist, as its creation was outside the specified time.

Be aware that if file_b and file_c conflict, the system can become corrupted.

You can also use the snapper -c SnapperExample undochange 2..1 command. In this case,
the current system replaces the edited version of file_a with one copied from snapshot_1, which
undoes edits of that file made after snapshot_2 was created.

Using the mount and unmount Commands to Reverse Changes


The undochange command is not always the best way to revert modifications. With the status and
diff commands, you can make an informed decision, and use the mount and unmount commands
instead of the undochange command. The mount and unmount commands are only useful if you want to
mount snapshots and browse their content independently of the Snapper workflow.

If needed, the mount command activates the respective LVM Snapper snapshot before mounting. Use the
mount and unmount commands if you are, for example, interested in mounting snapshots and
extracting older versions of several files manually. To revert files manually, copy them from a mounted
snapshot to the current file system. The current file system, snapshot 0, is the live file system created in
Procedure 14.1, “Creating a Snapper Configuration File”. Copy the files to the subtree of the original
/mount-point.

Use the mount and unmount commands for explicit client-side requests. The
/etc/snapper/configs/config_name file contains the ALLOW_USERS= and ALLOW_GROUPS=
variables where you can add users and groups. Then, snapperd allows you to perform mount
operations for the added users and groups.
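A minimal sketch of this manual workflow, assuming the lvm_config configuration, a hypothetical file
file2 to be restored from snapshot 1, and that the mount and unmount operations above are the snapper
mount and umount subcommands (check snapper --help for the exact spelling):

# snapper -c lvm_config mount 1
# cp /lvm_mount/.snapshots/1/snapshot/file2 /lvm_mount/file2
# snapper -c lvm_config umount 1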

14.5. DELETING A SNAPPER SNAPSHOT


To delete a snapshot:

# snapper -c config_name delete snapshot_number

You can use the list command to verify that the snapshot was successfully deleted.
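For example, to delete snapshot number 2 from the lvm_config configuration used earlier in this chapter
and then confirm that it is gone:

# snapper -c lvm_config delete 2
# snapper -c lvm_config list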


CHAPTER 15. SWAP SPACE


Swap space in Linux is used when the amount of physical memory (RAM) is full. If the system needs
more memory resources and the RAM is full, inactive pages in memory are moved to the swap space.
While swap space can help machines with a small amount of RAM, it should not be considered a
replacement for more RAM. Swap space is located on hard drives, which have a slower access time than
physical memory. Swap space can be a dedicated swap partition (recommended), a swap file, or a
combination of swap partitions and swap files. Note that Btrfs does not support swap space.

In years past, the recommended amount of swap space increased linearly with the amount of RAM in the
system. However, modern systems often include hundreds of gigabytes of RAM. As a consequence,
recommended swap space is considered a function of system memory workload, not system memory.

Table 15.1, “Recommended System Swap Space” illustrates the recommended size of a swap partition
depending on the amount of RAM in your system and whether you want sufficient memory for your
system to hibernate. The recommended swap partition size is established automatically during
installation. To allow for hibernation, however, you need to edit the swap space in the custom partitioning
stage.

Recommendations in Table 15.1, “Recommended System Swap Space” are especially important on
systems with low memory (1 GB and less). Failure to allocate sufficient swap space on these systems
can cause issues such as instability or even render the installed system unbootable.

Table 15.1. Recommended System Swap Space

Amount of RAM in the system    Recommended swap space        Recommended swap space if allowing for hibernation

⩽ 2 GB                         2 times the amount of RAM     3 times the amount of RAM

> 2 GB – 8 GB                  Equal to the amount of RAM    2 times the amount of RAM

> 8 GB – 64 GB                 At least 4 GB                 1.5 times the amount of RAM

> 64 GB                        At least 4 GB                 Hibernation not recommended

At the border between each range listed in Table 15.1, “Recommended System Swap Space”, for
example a system with 2 GB, 8 GB, or 64 GB of system RAM, discretion can be exercised with regard to
chosen swap space and hibernation support. If your system resources allow for it, increasing the swap
space may lead to better performance. A swap space of at least 100 GB is recommended for systems
with over 140 logical processors or over 3 TB of RAM.

Note that distributing swap space over multiple storage devices also improves swap space performance,
particularly on systems with fast drives, controllers, and interfaces.


IMPORTANT

File systems and LVM2 volumes assigned as swap space should not be in use when
being modified. Any attempts to modify swap fail if a system process or the kernel is using
swap space. Use the free and cat /proc/swaps commands to verify how much and
where swap is in use.

You should modify swap space while the system is booted in rescue mode; see Booting
Your Computer in Rescue Mode in the Red Hat Enterprise Linux 7 Installation Guide.
When prompted to mount the file system, select Skip.

15.1. ADDING SWAP SPACE


Sometimes it is necessary to add more swap space after installation. For example, you may upgrade the
amount of RAM in your system from 1 GB to 2 GB, but there is only 2 GB of swap space. It might be
advantageous to increase the amount of swap space to 4 GB if you perform memory-intense operations
or run applications that require a large amount of memory.

You have three options: create a new swap partition, create a new swap file, or extend swap on an
existing LVM2 logical volume. It is recommended that you extend an existing logical volume.

15.1.1. Extending Swap on an LVM2 Logical Volume


By default, Red Hat Enterprise Linux 7 uses all available space during installation. If this is the case with
your system, then you must first add a new physical volume to the volume group used by the swap
space.

After adding additional storage to the swap space's volume group, it is now possible to extend it. To do
so, perform the following procedure (assuming /dev/VolGroup00/LogVol01 is the volume you want
to extend by 2 GB):

Procedure 15.1. Extending Swap on an LVM2 Logical Volume

1. Disable swapping for the associated logical volume:

# swapoff -v /dev/VolGroup00/LogVol01

2. Resize the LVM2 logical volume by 2 GB:

# lvresize /dev/VolGroup00/LogVol01 -L +2G

3. Format the new swap space:

# mkswap /dev/VolGroup00/LogVol01

4. Enable the extended logical volume:

# swapon -v /dev/VolGroup00/LogVol01

5. To test if the swap logical volume was successfully extended and activated, inspect active swap
space:


$ cat /proc/swaps
$ free -h

15.1.2. Creating an LVM2 Logical Volume for Swap


To add a 2 GB swap logical volume, assuming /dev/VolGroup00/LogVol02 is the swap
volume you want to add:

1. Create the LVM2 logical volume of size 2 GB:

# lvcreate VolGroup00 -n LogVol02 -L 2G

2. Format the new swap space:

# mkswap /dev/VolGroup00/LogVol02

3. Add the following entry to the /etc/fstab file:

/dev/VolGroup00/LogVol02 swap swap defaults 0 0

4. Regenerate mount units so that your system registers the new configuration:

# systemctl daemon-reload

5. Activate swap on the logical volume:

# swapon -v /dev/VolGroup00/LogVol02

6. To test if the swap logical volume was successfully created and activated, inspect active swap
space:

$ cat /proc/swaps
$ free -h

15.1.3. Creating a Swap File


To add a swap file:

Procedure 15.2. Add a Swap File

1. Determine the size of the new swap file in megabytes and multiply by 1024 to determine the
number of blocks. For example, a 64 MB swap file is 65536 blocks.

2. Create an empty file:

# dd if=/dev/zero of=/swapfile bs=1024 count=65536

Replace count with the value equal to the desired number of blocks.

3. Set up the swap file with the command:


# mkswap /swapfile

4. Change the security of the swap file so it is not world readable.

# chmod 0600 /swapfile

5. To enable the swap file at boot time, edit /etc/fstab as root to include the following entry:

/swapfile swap swap defaults 0 0

The next time the system boots, it activates the new swap file.

6. Regenerate mount units so that your system registers the new /etc/fstab configuration:

# systemctl daemon-reload

7. To activate the swap file immediately:

# swapon /swapfile

8. To test if the new swap file was successfully created and activated, inspect active swap space:

$ cat /proc/swaps
$ free -h

15.2. REMOVING SWAP SPACE


Sometimes it can be prudent to reduce swap space after installation. For example, you have
downgraded the amount of RAM in your system from 1 GB to 512 MB, but there is 2 GB of swap space
still assigned. It might be advantageous to reduce the amount of swap space to 1 GB, since the larger 2
GB could be wasting disk space.

You have three options: remove an entire LVM2 logical volume used for swap, remove a swap file, or
reduce swap space on an existing LVM2 logical volume.

15.2.1. Reducing Swap on an LVM2 Logical Volume


To reduce an LVM2 swap logical volume (assuming /dev/VolGroup00/LogVol01 is the volume you
want to reduce):

Procedure 15.3. Reducing an LVM2 Swap Logical Volume

1. Disable swapping for the associated logical volume:

# swapoff -v /dev/VolGroup00/LogVol01

2. Reduce the LVM2 logical volume by 512 MB:

# lvreduce /dev/VolGroup00/LogVol01 -L -512M

3. Format the new swap space:


# mkswap /dev/VolGroup00/LogVol01

4. Activate swap on the logical volume:

# swapon -v /dev/VolGroup00/LogVol01

5. To test if the swap logical volume was successfully reduced, inspect active swap space:

$ cat /proc/swaps
$ free -h

15.2.2. Removing an LVM2 Logical Volume for Swap


To remove a swap volume group (assuming /dev/VolGroup00/LogVol02 is the swap volume you
want to remove):

Procedure 15.4. Remove a Swap Volume Group

1. Disable swapping for the associated logical volume:

# swapoff -v /dev/VolGroup00/LogVol02

2. Remove the LVM2 logical volume:

# lvremove /dev/VolGroup00/LogVol02

3. Remove the following associated entry from the /etc/fstab file:

/dev/VolGroup00/LogVol02 swap swap defaults 0 0

4. Regenerate mount units so that your system registers the new configuration:

# systemctl daemon-reload

5. To test if the logical volume was successfully removed, inspect active swap space:

$ cat /proc/swaps
$ free -h

15.2.3. Removing a Swap File


To remove a swap file:

Procedure 15.5. Remove a Swap File

1. At a shell prompt, execute the following command to disable the swap file (where /swapfile is
the swap file):

# swapoff -v /swapfile


2. Remove its entry from the /etc/fstab file.

3. Regenerate mount units so that your system registers the new configuration:

# systemctl daemon-reload

4. Remove the actual file:

# rm /swapfile

15.3. MOVING SWAP SPACE


To move swap space from one location to another:

1. Remove the existing swap space as described in Section 15.2, “Removing Swap Space”.

2. Add new swap space as described in Section 15.1, “Adding Swap Space”.


CHAPTER 16. SYSTEM STORAGE MANAGER (SSM)


System Storage Manager (SSM) provides a command line interface to manage storage across various
technologies. Storage systems are becoming increasingly complicated through the use of Device
Mapper (DM), Logical Volume Manager (LVM), and Multiple Devices (MD). This creates systems that
are not user friendly and makes it easier for errors and problems to arise. SSM alleviates this by providing
a unified user interface that allows users to manage complicated storage stacks in a simple manner. For
example, creating and mounting a new file system without SSM requires five separate commands; with
SSM, only one is needed.

This chapter explains how SSM interacts with various back ends and some common use cases.

16.1. SSM BACK ENDS


SSM uses a core abstraction layer in ssmlib/main.py, which complies with the device, pool, and
volume abstraction and ignores the specifics of the underlying technology. Back ends can be registered in
ssmlib/main.py to handle specific storage technology methods, such as creating and snapshotting
volumes and pools, or removing them.

There are already several back ends registered in SSM. The following sections provide basic information
on them as well as definitions on how they handle pools, volumes, snapshots, and devices.

16.1.1. Btrfs Back End

NOTE

Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has
been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a
future major release of Red Hat Enterprise Linux.

For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4
Release Notes.

Btrfs, a file system with many advanced features, is used as a volume management back end in SSM.
Pools, volumes, and snapshots can be created with the Btrfs back end.

16.1.1.1. Btrfs Pool

The Btrfs file system itself is the pool. It can be extended by adding more devices or shrunk by removing
devices. SSM creates a Btrfs file system when a Btrfs pool is created. This means that every new Btrfs
pool has one volume of the same name as the pool which cannot be removed without removing the
entire pool. The default Btrfs pool name is btrfs_pool.

The name of the pool is used as the file system label. If there is already an existing Btrfs file system in
the system without a label, the Btrfs pool will generate a name for internal use in the format of
btrfs_device_base_name.

16.1.1.2. Btrfs Volume

Volumes created after the first volume in a pool are the same as sub-volumes. SSM temporarily mounts
the Btrfs file system if it is unmounted in order to create a sub-volume.


The name of a volume is used as the subvolume path in the Btrfs file system. For example, a subvolume
displays as /dev/lvm_pool/lvol001. Every object in this path must exist in order for the volume to
be created. Volumes can also be referenced by their mount point.

16.1.1.3. Btrfs Snapshot

Snapshots can be taken of any Btrfs volume in the system with SSM. Be aware that Btrfs does not
distinguish between subvolumes and snapshots. While this means that SSM cannot recognize the Btrfs
snapshot destination, it tries to recognize special name formats. If the name specified when creating
the snapshot does not match the specific pattern, the snapshot is not recognized and is instead listed as a
regular Btrfs volume.

16.1.1.4. Btrfs Device

Btrfs does not require any special device to be created on.

16.1.2. LVM Back End


Pools, volumes, and snapshots can be created with LVM. The following definitions are from an LVM
point of view.

16.1.2.1. LVM Pool

An LVM pool is the same as an LVM volume group. This means that it groups devices, and new logical
volumes can be created out of the LVM pool. The default LVM pool name is lvm_pool.

16.1.2.2. LVM Volume

An LVM volume is the same as an ordinary logical volume.

16.1.2.3. LVM Snapshot

When a snapshot is created from an LVM volume, a new snapshot volume is created, which can then
be handled just like any other LVM volume. Unlike Btrfs, LVM is able to distinguish snapshots from
regular volumes, so there is no need for a snapshot name to match a particular pattern.

16.1.2.4. LVM Device

SSM makes the requirement that an LVM back end be created on a physical device transparent to the user.

16.1.3. Crypt Back End


The crypt back end in SSM uses cryptsetup and the dm-crypt target to manage encrypted volumes.
Crypt back ends can be used as a regular back end for creating encrypted volumes on top of regular
block devices (or on other volumes such as LVM or MD volumes), or to create encrypted LVM volumes
in a single step.
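As a minimal sketch of the single-step case, assuming the ssm create command accepts an encryption
option (-e/--encrypt taking luks or plain) and a spare device /dev/vdd; both are assumptions to verify
against man ssm:

# ssm create --fs xfs -s 500M -e luks /dev/vdd

This would create an LVM volume, encrypt it with LUKS, and create an XFS file system on the resulting
crypt volume in one command.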

Only volumes can be created with a crypt back end; pooling is not supported and it does not require
special devices.

The following sections define volumes and snapshots from the crypt point of view.

16.1.3.1. Crypt Volume


Crypt volumes are created by dm-crypt and represent the data on the original encrypted device in an
unencrypted form. They do not support RAID or any device concatenation.

Two modes, or extensions, are supported: luks and plain. Luks is used by default. For more information
on the extensions, see man cryptsetup.

16.1.3.2. Crypt Snapshot

While the crypt back end does not support snapshotting, if the encrypted volume is created on top of an
LVM volume, the volume itself can be snapshotted. The snapshot can then be opened by using
cryptsetup.

16.1.4. Multiple Devices (MD) Back End


The MD back end is currently limited to gathering information about MD volumes in the system.

16.2. COMMON SSM TASKS


The following sections describe common SSM tasks.

16.2.1. Installing SSM


To install SSM use the following command:

# yum install system-storage-manager

There are several back ends that are enabled only if the supporting packages are installed:

The LVM back end requires the lvm2 package.

The Btrfs back end requires the btrfs-progs package.

The Crypt back end requires the device-mapper and cryptsetup packages.

16.2.2. Displaying Information about All Detected Devices


Displaying information about all detected devices, pools, volumes, and snapshots is done with the list
command. Running the ssm list command with no options displays the following output:

# ssm list
----------------------------------------------------------
Device Free Used Total Pool Mount point
----------------------------------------------------------
/dev/sda 2.00 GB PARTITIONED
/dev/sda1 47.83 MB /test
/dev/vda 15.00 GB PARTITIONED
/dev/vda1 500.00 MB /boot
/dev/vda2 0.00 KB 14.51 GB 14.51 GB rhel
----------------------------------------------------------
------------------------------------------------
Pool Type Devices Free Used Total
------------------------------------------------
rhel lvm 1 0.00 KB 14.51 GB 14.51 GB


------------------------------------------------
----------------------------------------------------------------------
-----------
Volume Pool Volume size FS FS size Free Type
Mount point
----------------------------------------------------------------------
-----------
/dev/rhel/root rhel 13.53 GB xfs 13.52 GB 9.64 GB linear /
/dev/rhel/swap rhel 1000.00 MB linear
/dev/sda1 47.83 MB xfs 44.50 MB 44.41 MB part
/test
/dev/vda1 500.00 MB xfs 496.67 MB 403.56 MB part
/boot
----------------------------------------------------------------------
-----------

This display can be further narrowed down by using arguments to specify what should be displayed. The
list of available options can be found with the ssm list --help command.

NOTE

Depending on the argument given, SSM may not display everything.

Running the devices or dev argument omits some devices. CD-ROMs and
DM/MD devices, for example, are intentionally hidden as they are listed as
volumes.

Some back ends do not support snapshots and cannot distinguish between a
snapshot and a regular volume. Running the snapshot argument on one of
these back ends causes SSM to attempt to recognize the volume name in order to
identify a snapshot. If the SSM regular expression does not match the snapshot
pattern, then the snapshot is not recognized.

With the exception of the main Btrfs volume (the file system itself), any
unmounted Btrfs volumes are not shown.
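For example, to narrow the output using the arguments mentioned above (other arguments are shown by
ssm list --help):

# ssm list dev
# ssm list snapshot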

16.2.3. Creating a New Pool, Logical Volume, and File System


In this section, a new pool with a default name is created. It has the devices /dev/vdb and
/dev/vdc, a 1 GB logical volume, and an XFS file system.

The command to create this scenario is ssm create --fs xfs -s 1G /dev/vdb /dev/vdc. The
following options are used:

The --fs option specifies the required file system type. Current supported file system types are:

ext3

ext4

xfs

btrfs


The -s specifies the size of the logical volume. The following suffixes are supported to define
units:

K or k for kilobytes

M or m for megabytes

G or g for gigabytes

T or t for terabytes

P or p for petabytes

E or e for exabytes

The two listed devices, /dev/vdb and /dev/vdc, are the two devices the pool is created on.

# ssm create --fs xfs -s 1G /dev/vdb /dev/vdc


Physical volume "/dev/vdb" successfully created
Physical volume "/dev/vdc" successfully created
Volume group "lvm_pool" successfully created
Logical volume "lvol001" created

There are two other options for the ssm command that may be useful. The first is the -p pool
option. This specifies the pool the volume is to be created on. If it does not yet exist, then SSM
creates it. This was omitted in the given example, which caused SSM to use the default name
lvm_pool. However, to use a specific name to fit in with any existing naming conventions, the -p
option should be used.

The second useful option is -n name. This names the newly created logical volume. As
with -p, this is needed in order to use a specific name to fit in with any existing naming conventions.

An example of these two options being used follows:

# ssm create --fs xfs -p new_pool -n XFS_Volume /dev/vdd


Volume group "new_pool" successfully created
Logical volume "XFS_Volume" created

SSM has now created two physical volumes, a pool, and a logical volume with the ease of only one
command.

16.2.4. Checking a File System's Consistency


The ssm check command checks the file system consistency on the volume. It is possible to specify
multiple volumes to check. If there is no file system on the volume, then the volume is skipped.

To check all devices in the volume lvol001, run the command ssm check
/dev/lvm_pool/lvol001.

# ssm check /dev/lvm_pool/lvol001


Checking xfs file system on '/dev/mapper/lvm_pool-lvol001'.
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- scan filesystem freespace and inode maps...


- found root inode chunk


Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

16.2.5. Increasing a Volume's Size


The ssm resize command changes the size of the specified volume and file system. If there is no file
system then only the volume itself will be resized.

For this example, we currently have one logical volume on /dev/vdb that is 900MB called lvol001.

# ssm list
-----------------------------------------------------------------
Device Free Used Total Pool Mount point
-----------------------------------------------------------------
/dev/vda 15.00 GB PARTITIONED
/dev/vda1 500.00 MB /boot
/dev/vda2 0.00 KB 14.51 GB 14.51 GB rhel
/dev/vdb 120.00 MB 900.00 MB 1.00 GB lvm_pool
/dev/vdc 1.00 GB
-----------------------------------------------------------------
---------------------------------------------------------
Pool Type Devices Free Used Total
---------------------------------------------------------
lvm_pool lvm 1 120.00 MB 900.00 MB 1020.00 MB
rhel lvm 1 0.00 KB 14.51 GB 14.51 GB
---------------------------------------------------------
----------------------------------------------------------------------
----------------------


Volume Pool Volume size FS FS size Free


Type Mount point
----------------------------------------------------------------------
----------------------
/dev/rhel/root rhel 13.53 GB xfs 13.52 GB 9.64 GB
linear /
/dev/rhel/swap rhel 1000.00 MB
linear
/dev/lvm_pool/lvol001 lvm_pool 900.00 MB xfs 896.67 MB 896.54 MB
linear
/dev/vda1 500.00 MB xfs 496.67 MB 403.56 MB
part /boot
----------------------------------------------------------------------
----------------------

The logical volume needs to be increased by another 500MB. To do so we will need to add an extra
device to the pool:

~]# ssm resize -s +500M /dev/lvm_pool/lvol001 /dev/vdc


Physical volume "/dev/vdc" successfully created
Volume group "lvm_pool" successfully extended
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
Extending logical volume lvol001 to 1.37 GiB
Logical volume lvol001 successfully resized
meta-data=/dev/mapper/lvm_pool-lvol001 isize=256 agcount=4,
agsize=57600 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0
data = bsize=4096 blocks=230400, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0


log =internal bsize=4096 blocks=853, version=2


= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 230400 to 358400

SSM runs a check on the device and then extends the volume by the specified amount. This can be
verified with the ssm list command.

# ssm list
------------------------------------------------------------------
Device Free Used Total Pool Mount point
------------------------------------------------------------------
/dev/vda 15.00 GB PARTITIONED
/dev/vda1 500.00 MB /boot
/dev/vda2 0.00 KB 14.51 GB 14.51 GB rhel
/dev/vdb 0.00 KB 1020.00 MB 1.00 GB lvm_pool
/dev/vdc 640.00 MB 380.00 MB 1.00 GB lvm_pool
------------------------------------------------------------------
------------------------------------------------------
Pool Type Devices Free Used Total
------------------------------------------------------
lvm_pool lvm 2 640.00 MB 1.37 GB 1.99 GB
rhel lvm 1 0.00 KB 14.51 GB 14.51 GB
------------------------------------------------------
----------------------------------------------------------------------
------------------------
Volume Pool Volume size FS FS size
Free Type Mount point
----------------------------------------------------------------------
------------------------
/dev/rhel/root rhel 13.53 GB xfs 13.52 GB 9.64 GB
linear /
/dev/rhel/swap rhel 1000.00 MB
linear
/dev/lvm_pool/lvol001 lvm_pool 1.37 GB xfs 1.36 GB 1.36 GB
linear
/dev/vda1 500.00 MB xfs 496.67 MB 403.56 MB
part /boot
----------------------------------------------------------------------
------------------------


NOTE

Decreasing a volume's size is only possible for LVM volumes; it is not supported with other volume
types. This is done by using a - instead of a +. For example, to decrease the size of an
LVM volume by 50M, the command would be:

# ssm resize -s-50M /dev/lvm_pool/lvol002


Rounding size to boundary between physical extents: 972.00 MiB
WARNING: Reducing active logical volume to 972.00 MiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lvol002? [y/n]: y
Reducing logical volume lvol002 to 972.00 MiB
Logical volume lvol002 successfully resized

Without either the + or -, the value is taken as absolute.

16.2.6. Snapshot
To take a snapshot of an existing volume, use the ssm snapshot command.

NOTE

This operation fails if the back end that the volume belongs to does not support
snapshotting.

To create a snapshot of the lvol001, use the following command:

# ssm snapshot /dev/lvm_pool/lvol001


Logical volume "snap20150519T130900" created

To verify this, use the ssm list command, and note the extra snapshot section.

# ssm list
----------------------------------------------------------------
Device Free Used Total Pool Mount point
----------------------------------------------------------------
/dev/vda 15.00 GB PARTITIONED
/dev/vda1 500.00 MB /boot
/dev/vda2 0.00 KB 14.51 GB 14.51 GB rhel
/dev/vdb 0.00 KB 1020.00 MB 1.00 GB lvm_pool
/dev/vdc 1.00 GB
----------------------------------------------------------------
--------------------------------------------------------
Pool Type Devices Free Used Total
--------------------------------------------------------
lvm_pool lvm 1 0.00 KB 1020.00 MB 1020.00 MB
rhel lvm 1 0.00 KB 14.51 GB 14.51 GB
--------------------------------------------------------
----------------------------------------------------------------------
------------------------
Volume Pool Volume size FS FS size
Free Type Mount point
----------------------------------------------------------------------


------------------------
/dev/rhel/root rhel 13.53 GB xfs 13.52 GB 9.64 GB
linear /
/dev/rhel/swap rhel 1000.00 MB
linear
/dev/lvm_pool/lvol001 lvm_pool 900.00 MB xfs 896.67 MB 896.54 MB
linear
/dev/vda1 500.00 MB xfs 496.67 MB 403.56 MB
part /boot
----------------------------------------------------------------------
------------------------
----------------------------------------------------------------------
------------
Snapshot Origin Pool Volume size
Size Type
----------------------------------------------------------------------
------------
/dev/lvm_pool/snap20150519T130900 lvol001 lvm_pool 120.00 MB 0.00 KB
linear
----------------------------------------------------------------------
------------

16.2.7. Removing an Item


The ssm remove command is used to remove an item: a device, a pool, or a volume.

NOTE

If a device is being used by a pool when it is removed, the operation fails. Removal can be
forced using the -f argument.

If a volume is mounted when it is removed, the operation fails. Unlike a device, this cannot
be forced with the -f argument.

To remove the lvm_pool and everything within it use the following command:

# ssm remove lvm_pool


Do you really want to remove volume group "lvm_pool" containing 2 logical
volumes? [y/n]: y
Do you really want to remove active logical volume snap20150519T130900?
[y/n]: y
Logical volume "snap20150519T130900" successfully removed
Do you really want to remove active logical volume lvol001? [y/n]: y
Logical volume "lvol001" successfully removed
Volume group "lvm_pool" successfully removed

16.3. SSM RESOURCES


For more information on SSM, see the following resources:

The man ssm page provides good descriptions and examples, as well as details on all of the
commands and options too specific to be documented here.


Local documentation for SSM is stored in the doc/ directory.

The SSM wiki can be accessed at http://storagemanager.sourceforge.net/index.html.

You can subscribe to the mailing list at https://lists.sourceforge.net/lists/listinfo/storagemanager-devel
and browse the mailing list archives at
http://sourceforge.net/mailarchive/forum.php?forum_name=storagemanager-devel. The mailing list is
where developers communicate. There is currently no user mailing list, so feel free to post questions
there as well.


CHAPTER 17. DISK QUOTAS


Disk space can be restricted by implementing disk quotas which alert a system administrator before a
user consumes too much disk space or a partition becomes full.

Disk quotas can be configured for individual users as well as user groups. This makes it possible to
manage the space allocated for user-specific files (such as email) separately from the space allocated to
the projects a user works on (assuming the projects are given their own groups).

In addition, quotas can be set not just to control the number of disk blocks consumed but to control the
number of inodes (data structures that contain information about files in UNIX file systems). Because
inodes are used to contain file-related information, this allows control over the number of files that can be
created.

The quota RPM must be installed to implement disk quotas.

NOTE

This chapter applies to all file systems; however, some file systems have their own quota
management tools. See the corresponding description for the applicable file systems.

For XFS file systems, see Section 3.3, “XFS Quota Management”.

Btrfs does not have disk quotas, so it is not covered.

17.1. CONFIGURING DISK QUOTAS


To implement disk quotas, use the following steps:

1. Enable quotas per file system by modifying the /etc/fstab file.

2. Remount the file system(s).

3. Create the quota database files and generate the disk usage table.

4. Assign quota policies.

Each of these steps is discussed in detail in the following sections.

17.1.1. Enabling Quotas

Procedure 17.1. Enabling Quotas

1. Log in as root.

2. Edit the /etc/fstab file.

3. Add either the usrquota or grpquota or both options to the file systems that require quotas.

Example 17.1. Edit /etc/fstab

For example, to use the text editor vim type the following:

# vim /etc/fstab


Example 17.2. Add Quotas

/dev/VolGroup00/LogVol00  /         ext3    defaults                     1 1
LABEL=/boot               /boot     ext3    defaults                     1 2
none                      /dev/pts  devpts  gid=5,mode=620               0 0
none                      /dev/shm  tmpfs   defaults                     0 0
none                      /proc     proc    defaults                     0 0
none                      /sys      sysfs   defaults                     0 0
/dev/VolGroup00/LogVol02  /home     ext3    defaults,usrquota,grpquota   1 2
/dev/VolGroup00/LogVol01  swap      swap    defaults                     0 0
. . .

In this example, the /home file system has both user and group quotas enabled.

NOTE

The following examples assume that a separate /home partition was created during the
installation of Red Hat Enterprise Linux. The root (/) partition can be used for setting
quota policies in the /etc/fstab file.

17.1.2. Remounting the File Systems


After adding either the usrquota or grpquota or both options, remount each file system whose fstab
entry has been modified. If the file system is not in use by any process, use one of the following
methods:

Run the umount command followed by the mount command to remount the file system. See the
man page for both umount and mount for the specific syntax for mounting and unmounting
various file system types.

Run the mount -o remount file-system command (where file-system is the name of
the file system) to remount the file system. For example, to remount the /home file system, run
the mount -o remount /home command.

If the file system is currently in use, the easiest method for remounting the file system is to reboot the
system.

17.1.3. Creating the Quota Database Files


After each quota-enabled file system is remounted, run the quotacheck command.

The quotacheck command examines quota-enabled file systems and builds a table of the current disk
usage per file system. The table is then used to update the operating system's copy of disk usage. In
addition, the file system's disk quota files are updated.

NOTE

The quotacheck command has no effect on XFS as the table of disk usage is completed
automatically at mount time. See the man page xfs_quota(8) for more information.


Procedure 17.2. Creating the Quota Database Files

1. Create the quota files on the file system using the following command:

# quotacheck -cug /file_system

2. Generate the table of current disk usage per file system using the following command:

# quotacheck -avug

Following are the options used to create quota files:

c
Specifies that the quota files should be created for each file system with quotas enabled.

u
Checks for user quotas.

g
Checks for group quotas. If only -g is specified, only the group quota file is created.

If neither the -u nor the -g option is specified, only the user quota file is created.

The following options are used to generate the table of current disk usage:

a
Check all quota-enabled, locally-mounted file systems

v
Display verbose status information as the quota check proceeds

u
Check user disk quota information

g
Check group disk quota information

After quotacheck has finished running, the quota files corresponding to the enabled quotas (either user
or group or both) are populated with data for each quota-enabled locally-mounted file system such as
/home.

17.1.4. Assigning Quotas per User


The last step is assigning the disk quotas with the edquota command.

Prerequisite

User must exist prior to setting the user quota.

Procedure 17.3. Assigning Quotas per User


1. To assign the quota for a user, use the following command:

# edquota username

Replace username with the user to which you want to assign the quotas.

2. To verify that the quota for the user has been set, use the following command:

# quota username

Example 17.3. Assigning Quotas to a user

For example, if a quota is enabled in /etc/fstab for the /home partition
(/dev/VolGroup00/LogVol02 in the following example) and the command edquota testuser
is executed, the following is shown in the editor configured as the default for the system:

Disk quotas for user testuser (uid 501):
  Filesystem                 blocks   soft   hard   inodes   soft   hard
  /dev/VolGroup00/LogVol02   440436      0      0    37418      0      0

NOTE

The text editor defined by the EDITOR environment variable is used by edquota. To
change the editor, set the EDITOR environment variable in your ~/.bash_profile file
to the full path of the editor of your choice.

The first column is the name of the file system that has a quota enabled for it. The second column shows
how many blocks the user is currently using. The next two columns are used to set soft and hard block
limits for the user on the file system. The inodes column shows how many inodes the user is currently
using. The last two columns are used to set the soft and hard inode limits for the user on the file system.

The hard block limit is the absolute maximum amount of disk space that a user or group can use. Once
this limit is reached, no further disk space can be used.

The soft block limit defines the maximum amount of disk space that can be used. However, unlike the
hard limit, the soft limit can be exceeded for a certain amount of time. That time is known as the grace
period. The grace period can be expressed in seconds, minutes, hours, days, weeks, or months.

If any of the values are set to 0, that limit is not set. In the text editor, change the desired limits.

Example 17.4. Change Desired Limits

For example:

Disk quotas for user testuser (uid 501):
  Filesystem                 blocks     soft     hard   inodes   soft   hard
  /dev/VolGroup00/LogVol02   440436   500000   550000    37418      0      0

To verify that the quota for the user has been set, use the command:

# quota testuser
Disk quotas for user username (uid 501):
  Filesystem   blocks   quota   limit   grace   files   quota   limit   grace
  /dev/sdb      1000*    1000    1000               0       0       0

17.1.5. Assigning Quotas per Group


Quotas can also be assigned on a per-group basis.

Prerequisite

Group must exist prior to setting the group quota.

Procedure 17.4. Assigning Quotas per Group

1. To set a group quota, use the following command:

# edquota -g groupname

2. To verify that the group quota is set, use the following command:

# quota -g groupname

Example 17.5. Assigning quotas to group

For example, to set a group quota for the devel group, use the command:

# edquota -g devel

This command displays the existing quota for the group in the text editor:

Disk quotas for group devel (gid 505):
  Filesystem                 blocks   soft   hard   inodes   soft   hard
  /dev/VolGroup00/LogVol02   440400      0      0    37418      0      0

Modify the limits, then save the file.

To verify that the group quota has been set, use the command:

# quota -g devel


17.1.6. Setting the Grace Period for Soft Limits


If a given quota has soft limits, you can edit the grace period (i.e. the amount of time a soft limit can be
exceeded) with the following command:

# edquota -t

This command works on quotas for inodes or blocks, for either users or groups.
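As an illustration only, the editor opened by edquota -t typically contains one line per quota-enabled
file system with its block and inode grace periods; the exact layout below is an assumption, not verbatim
output:

Grace period before enforcing soft limits for users:
Time units may be: days, hours, minutes, or seconds
  Filesystem                  Block grace period    Inode grace period
  /dev/VolGroup00/LogVol02    7days                 7days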

IMPORTANT

While other edquota commands operate on quotas for a particular user or group, the -t
option operates on every file system with quotas enabled.

17.2. MANAGING DISK QUOTAS


If quotas are implemented, they need some maintenance mostly in the form of watching to see if the
quotas are exceeded and making sure the quotas are accurate.

If users repeatedly exceed their quotas or consistently reach their soft limits, a system administrator has
a few choices to make depending on what type of users they are and how much disk space impacts their
work. The administrator can either help the user determine how to use less disk space or increase the
user's disk quota.

17.2.1. Enabling and Disabling


It is possible to disable quotas without setting them to 0. To turn all user and group quotas off, use the
following command:

# quotaoff -vaug

If neither the -u nor the -g option is specified, only the user quotas are disabled. If only -g is specified,
only group quotas are disabled. The -v switch causes verbose status information to display as the
command executes.

To enable user and group quotas again, use the following command:

# quotaon

To enable user and group quotas for all file systems, use the following command:

# quotaon -vaug

If neither the -u nor the -g option is specified, only the user quotas are enabled. If only -g is specified,
only group quotas are enabled.

To enable quotas for a specific file system, such as /home, use the following command:

# quotaon -vug /home


NOTE

The quotaon command is not always needed for XFS because it is performed
automatically at mount time. Refer to the man page quotaon(8) for more information.

17.2.2. Reporting on Disk Quotas


Creating a disk usage report entails running the repquota utility.

Example 17.6. Output of the repquota Command

For example, the command repquota /home produces this output:

*** Report for user quotas on device /dev/mapper/VolGroup00-LogVol02
Block grace time: 7days; Inode grace time: 7days
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --      36       0       0              4     0     0
kristin   --     540       0       0            125     0     0
testuser  --  440400  500000  550000          37418     0     0

To view the disk usage report for all (option -a) quota-enabled file systems, use the command:

# repquota -a

While the report is easy to read, a few points should be explained. The -- displayed after each user is a
quick way to determine whether the block or inode limits have been exceeded. If either soft limit is
exceeded, a + appears in place of the corresponding -; the first - represents the block limit, and the
second represents the inode limit.

The grace columns are normally blank. If a soft limit has been exceeded, the column contains a time
specification equal to the amount of time remaining on the grace period. If the grace period has expired,
none appears in its place.

17.2.3. Keeping Quotas Accurate


When a file system fails to unmount cleanly, for example due to a system crash, it is necessary to run
the following command:

# quotacheck

However, quotacheck can be run on a regular basis, even if the system has not crashed. Safe
methods for periodically running quotacheck include:

Ensuring quotacheck runs on next reboot

NOTE

This method works best for (busy) multiuser systems which are periodically rebooted.


Save a shell script into the /etc/cron.daily/ or /etc/cron.weekly/ directory, or schedule one
using the following command:

# crontab -e

The scheduled script or crontab entry should contain the touch /forcequotacheck command. This
creates an empty forcequotacheck file in the root directory, which the system init script looks for at
boot time. If it is found, the init script runs quotacheck. Afterward, the init script removes the
/forcequotacheck file; thus, scheduling this file to be created periodically with cron ensures that
quotacheck is run during the next reboot.

For more information about cron, see man cron.
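A minimal sketch of such a cron script follows (the file name force-quotacheck is hypothetical; place
the script in /etc/cron.daily/ or /etc/cron.weekly/ and make it executable):

#!/bin/sh
# /etc/cron.weekly/force-quotacheck (hypothetical path)
# Create the flag file that the init script checks for at boot time,
# so that quotacheck runs during the next reboot.
touch /forcequotacheck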

Running quotacheck in single user mode


An alternative way to safely run quotacheck is to boot the system into single-user mode to prevent
the possibility of data corruption in quota files and run the following commands:

# quotaoff -vug /file_system

# quotacheck -vug /file_system

# quotaon -vug /file_system

Running quotacheck on a running system


If necessary, it is possible to run quotacheck on a machine during a time when no users are logged
in, and thus there are no open files on the file system being checked. Run the command quotacheck
-vug file_system; this command fails if quotacheck cannot remount the given file_system as
read-only. Note that, following the check, the file system is remounted read-write.


WARNING

Running quotacheck on a live file system mounted read-write is not recommended due to the
possibility of quota file corruption.

See man cron for more information about configuring cron.

17.3. DISK QUOTA REFERENCES


For more information on disk quotas, refer to the man pages of the following commands:

quotacheck

edquota

repquota


quota

quotaon

quotaoff


CHAPTER 18. REDUNDANT ARRAY OF INDEPENDENT DISKS (RAID)

The basic idea behind RAID is to combine multiple small, inexpensive disk drives into an array to
accomplish performance or redundancy goals not attainable with one large and expensive drive. This
array of drives appears to the computer as a single logical storage unit or drive.

RAID allows information to be spread across several disks. RAID uses techniques such as disk striping
(RAID Level 0), disk mirroring (RAID Level 1), and disk striping with parity (RAID Level 5) to achieve
redundancy, lower latency, increased bandwidth, and maximized ability to recover from hard disk
crashes.

RAID distributes data across each drive in the array by breaking it down into consistently-sized chunks
(commonly 256K or 512k, although other values are acceptable). Each chunk is then written to a hard
drive in the RAID array according to the RAID level employed. When the data is read, the process is
reversed, giving the illusion that the multiple drives in the array are actually one large drive.

System Administrators and others who manage large amounts of data would benefit from using RAID
technology. Primary reasons to deploy RAID include:

Enhances speed

Increases storage capacity using a single virtual disk

Minimizes data loss from disk failure

18.1. RAID TYPES


There are three possible RAID approaches: Firmware RAID, Hardware RAID, and Software RAID.

Firmware RAID
Firmware RAID, also known as ATARAID, is a type of software RAID where the RAID sets can be
configured using a firmware-based menu. The firmware used by this type of RAID also hooks into the
BIOS, allowing you to boot from its RAID sets. Different vendors use different on-disk metadata formats
to mark the RAID set members. The Intel Matrix RAID is a good example of a firmware RAID system.

Hardware RAID
The hardware-based array manages the RAID subsystem independently from the host. It presents a
single disk per RAID array to the host.

A Hardware RAID device may be internal or external to the system, with internal devices commonly
consisting of a specialized controller card that handles the RAID tasks transparently to the operating
system and with external devices commonly connecting to the system via SCSI, Fibre Channel, iSCSI,
InfiniBand, or other high speed network interconnect and presenting logical volumes to the system.

RAID controller cards function like a SCSI controller to the operating system, and handle all the actual
drive communications. The user plugs the drives into the RAID controller (just like a normal SCSI
controller) and then adds them to the RAID controller's configuration. The operating system will not be
able to tell the difference.

Software RAID


Software RAID implements the various RAID levels in the kernel disk (block device) code. It offers the
cheapest possible solution, as expensive disk controller cards or hot-swap chassis [2] are not required.
Software RAID also works with cheaper IDE disks as well as SCSI disks. With today's faster CPUs,
Software RAID also generally outperforms Hardware RAID.

The Linux kernel contains a multi-disk (MD) driver that allows the RAID solution to be completely
hardware independent. The performance of a software-based array depends on the server CPU
performance and load.

Key features of the Linux software RAID stack:

Multithreaded design

Portability of arrays between Linux machines without reconstruction

Backgrounded array reconstruction using idle system resources

Hot-swappable drive support

Automatic CPU detection to take advantage of certain CPU features such as streaming SIMD
support

Automatic correction of bad sectors on disks in an array

Regular consistency checks of RAID data to ensure the health of the array

Proactive monitoring of arrays with email alerts sent to a designated email address on important
events

Write-intent bitmaps which drastically increase the speed of resync events by allowing the kernel
to know precisely which portions of a disk need to be resynced instead of having to resync the
entire array

Resync checkpointing so that if you reboot your computer during a resync, at startup the resync
will pick up where it left off and not start all over again

The ability to change parameters of the array after installation. For example, you can grow a 4-
disk RAID5 array to a 5-disk RAID5 array when you have a new disk to add. This grow operation
is done live and does not require you to reinstall on the new array.

18.2. RAID LEVELS AND LINEAR SUPPORT


RAID supports various configurations, including levels 0, 1, 4, 5, 6, 10, and linear. These RAID types are
defined as follows:

Level 0
RAID level 0, often called "striping," is a performance-oriented striped data mapping technique. This
means the data being written to the array is broken down into strips and written across the member
disks of the array, allowing high I/O performance at low inherent cost but providing no redundancy.

Many RAID level 0 implementations will only stripe the data across the member devices up to the size
of the smallest device in the array. This means that if you have multiple devices with slightly different
sizes, each device will get treated as though it is the same size as the smallest drive. Therefore, the
common storage capacity of a level 0 array is equal to the capacity of the smallest member disk in a
Hardware RAID or the capacity of smallest member partition in a Software RAID multiplied by the
number of disks or partitions in the array.


Level 1
RAID level 1, or "mirroring," has been used longer than any other form of RAID. Level 1 provides
redundancy by writing identical data to each member disk of the array, leaving a "mirrored" copy on
each disk. Mirroring remains popular due to its simplicity and high level of data availability. Level 1
operates with two or more disks, and provides very good data reliability and improves performance
for read-intensive applications but at a relatively high cost. [3]

The storage capacity of the level 1 array is equal to the capacity of the smallest mirrored hard disk in
a Hardware RAID or the smallest mirrored partition in a Software RAID. Level 1 redundancy is the
highest possible among all RAID types, with the array being able to operate with only a single disk
present.

Level 4

Level 4 uses parity [4] concentrated on a single disk drive to protect data. Because the dedicated
parity disk represents an inherent bottleneck on all write transactions to the RAID array, level 4 is
seldom used without accompanying technologies such as write-back caching, or in specific
circumstances where the system administrator is intentionally designing the software RAID device
with this bottleneck in mind (such as an array that will have little to no write transactions once the
array is populated with data). RAID level 4 is so rarely used that it is not available as an option in
Anaconda. However, it could be created manually by the user if truly needed.

The storage capacity of Hardware RAID level 4 is equal to the capacity of the smallest member
partition multiplied by the number of partitions minus one. Performance of a RAID level 4 array will
always be asymmetrical, meaning reads will outperform writes. This is because writes consume extra
CPU and main memory bandwidth when generating parity, and then also consume extra bus
bandwidth when writing the actual data to disks because you are writing not only the data, but also the
parity. Reads need only read the data and not the parity unless the array is in a degraded state. As a
result, reads generate less traffic to the drives and across the busses of the computer for the same
amount of data transfer under normal operating conditions.

Level 5
This is the most common type of RAID. By distributing parity across all of an array's member disk
drives, RAID level 5 eliminates the write bottleneck inherent in level 4. The only performance
bottleneck is the parity calculation process itself. With modern CPUs and Software RAID, that is
usually not a bottleneck at all since modern CPUs can generate parity very fast. However, if you have
a sufficiently large number of member devices in a software RAID5 array such that the combined
aggregate data transfer speed across all devices is high enough, then this bottleneck can start to
come into play.

As with level 4, level 5 has asymmetrical performance, with reads substantially outperforming writes.
The storage capacity of RAID level 5 is calculated the same way as with level 4.

Level 6
This is a common level of RAID when data redundancy and preservation, and not performance, are
the paramount concerns, but where the space inefficiency of level 1 is not acceptable. Level 6 uses a
complex parity scheme to be able to recover from the loss of any two drives in the array. This
complex parity scheme creates a significantly higher CPU burden on software RAID devices and also
imposes an increased burden during write transactions. As such, level 6 is considerably more
asymmetrical in performance than levels 4 and 5.

The total capacity of a RAID level 6 array is calculated similarly to RAID level 5 and 4, except that you
must subtract 2 devices (instead of 1) from the device count for the extra parity storage space.
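
For example (an illustrative calculation), a level 6 array built from six 1 TB members provides approximately (6 − 2) × 1 TB = 4 TB of usable capacity, with the equivalent of two members consumed by parity.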


Level 10
This RAID level attempts to combine the performance advantages of level 0 with the redundancy of
level 1. It also helps to alleviate some of the space wasted in level 1 arrays with more than 2 devices.
With level 10, it is possible to create a 3-drive array configured to store only 2 copies of each piece of
data, which then allows the overall array size to be 1.5 times the size of the smallest devices instead
of only equal to the smallest device (like it would be with a 3-device, level 1 array).

The number of options available when creating level 10 arrays as well as the complexity of selecting
the right options for a specific use case make it impractical to create during installation. It is possible
to create one manually using the command line mdadm tool. For more information on the options and
their respective performance trade-offs, see man md.
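
As an illustration, the 3-drive, 2-copy configuration described above could be created with a command of the following form (a sketch only; the device names are hypothetical and the near-2 layout is just one of the layout choices documented in man md):

# mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1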

Linear RAID
Linear RAID is a grouping of drives to create a larger virtual drive. In linear RAID, the chunks are
allocated sequentially from one member drive, going to the next drive only when the first is completely
filled. This grouping provides no performance benefit, as it is unlikely that any I/O operations will be split
between member drives. Linear RAID also offers no redundancy and decreases reliability; if any one
member drive fails, the entire array cannot be used. The capacity is the total of all member disks.

18.3. LINUX RAID SUBSYSTEMS


RAID in Linux is composed of the following subsystems:

Linux Hardware RAID Controller Drivers


Hardware RAID controllers have no specific RAID subsystem in Linux. Because they use special RAID
chipsets, hardware RAID controllers come with their own drivers; these drivers allow the system to detect
the RAID sets as regular disks.

mdraid
The mdraid subsystem was designed as a software RAID solution for Linux; it is also the preferred
solution for software RAID under Linux. This subsystem uses its own metadata format, generally
referred to as native mdraid metadata.

mdraid also supports other metadata formats, known as external metadata. Red Hat Enterprise Linux 7
uses mdraid with external metadata to access ISW / IMSM (Intel firmware RAID) sets. mdraid sets are
configured and controlled through the mdadm utility.

dmraid
Device-mapper RAID or dmraid refers to device-mapper kernel code that offers the mechanism to piece
disks together into a RAID set. This same kernel code does not provide any RAID configuration
mechanism.

dmraid is configured entirely in user-space, making it easy to support various on-disk metadata formats.
As such, dmraid is used on a wide variety of firmware RAID implementations. dmraid also supports
Intel firmware RAID, although Red Hat Enterprise Linux 7 uses mdraid to access Intel firmware RAID
sets.

18.4. RAID SUPPORT IN THE ANACONDA INSTALLER


The Anaconda installer automatically detects any hardware and firmware RAID sets on a system,
making them available for installation. Anaconda also supports software RAID using mdraid, and can
recognize existing mdraid sets.

Anaconda provides utilities for creating RAID sets during installation; however, these utilities only allow
partitions (as opposed to entire disks) to be members of new sets. To use an entire disk for a set, create
a partition on it spanning the entire disk, and use that partition as the RAID set member.
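
For example, to dedicate the hypothetical disk /dev/sdb to a RAID set, a single partition spanning the whole disk could be created with parted (a sketch; adjust the label type and start offset to your environment):

# parted /dev/sdb mklabel gpt
# parted /dev/sdb mkpart primary 1MiB 100%
# parted /dev/sdb set 1 raid on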

When the root file system uses a RAID set, Anaconda adds special kernel command-line options to the
bootloader configuration telling the initrd which RAID set(s) to activate before searching for the root
file system.
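
With dracut-based initrds, such an option typically takes a form like the following (a sketch; the exact options and values that Anaconda writes depend on the configuration):

rd.md.uuid=<uuid-of-the-md-raid-set>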

For instructions on configuring RAID during installation, see the Red Hat Enterprise Linux 7 Installation
Guide.

18.5. CONVERTING ROOT DISK TO RAID1 AFTER INSTALLATION


If you need to convert a non-raided root disk to a RAID1 mirror after installing Red Hat
Enterprise Linux 7, see the instructions in the following Red Hat Knowledgebase article: How do I
convert my root disk to RAID1 after installation of Red Hat Enterprise Linux 7?

On the PowerPC (PPC) architecture, take the following additional steps:

1. Copy the contents of the PowerPC Reference Platform (PReP) boot partition from /dev/sda1
to /dev/sdb1:

# dd if=/dev/sda1 of=/dev/sdb1

2. Update the PReP and boot flags on the first partition of both disks:

# parted /dev/sda set 1 prep on
# parted /dev/sda set 1 boot on

# parted /dev/sdb set 1 prep on
# parted /dev/sdb set 1 boot on

NOTE

Running the grub2-install /dev/sda command does not work on a PowerPC machine and
returns an error, but the system boots as expected.

18.6. CONFIGURING RAID SETS


Most RAID sets are configured during creation, typically through the firmware menu or from the installer.
In some cases, you may need to create or modify RAID sets after installing the system, preferably
without having to reboot the machine and enter the firmware menu to do so.

Some hardware RAID controllers allow you to configure RAID sets on-the-fly or even define completely
new sets after adding extra disks. This requires the use of driver-specific utilities, as there is no standard
API for this. For more information, see your hardware RAID controller's driver documentation.

mdadm


The mdadm command-line tool is used to manage software RAID in Linux, i.e. mdraid. For information
on the different mdadm modes and options, see man mdadm. The man page also contains useful
examples for common operations like creating, monitoring, and assembling software RAID arrays.
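
For instance, a two-disk RAID1 array could be created and then inspected as follows (a minimal sketch; the device names are hypothetical):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# mdadm --detail /dev/md0
# cat /proc/mdstat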

dmraid
As the name suggests, dmraid is used to manage device-mapper RAID sets. The dmraid tool finds
ATARAID devices using multiple metadata format handlers, each supporting various formats. For a
complete list of supported formats, run dmraid -l.

As mentioned earlier in Section 18.3, “Linux RAID Subsystems”, the dmraid tool cannot configure RAID
sets after creation. For more information about using dmraid, see man dmraid.
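
For example, discovered RAID sets can be listed and then activated with commands like the following (a sketch based on the standard dmraid options):

# dmraid -s
# dmraid -ay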

18.7. CREATING ADVANCED RAID DEVICES


In some cases, you may wish to install the operating system on an array that can't be created after the
installation completes. Usually, this means setting up the /boot or root file system arrays on a complex
RAID device; in such cases, you may need to use array options that are not supported by Anaconda. To
work around this, perform the following procedure:

Procedure 18.1. Creating Advanced RAID Devices

1. Insert the install disk.

2. During the initial boot up, select Rescue Mode instead of Install or Upgrade. When the system
fully boots into Rescue mode, the user will be presented with a command line terminal.

3. From this terminal, use parted to create RAID partitions on the target hard drives. Then, use
mdadm to manually create RAID arrays from those partitions using any and all settings and options
available. For more information on how to do these, see Chapter 13, Partitions, man parted,
and man mdadm.

4. Once the arrays are created, you can optionally create file systems on the arrays as well.

5. Reboot the computer and this time select Install or Upgrade to install as normal. As Anaconda
searches the disks in the system, it will find the pre-existing RAID devices.

6. When asked about how to use the disks in the system, select Custom Layout and click Next. In
the device listing, the pre-existing MD RAID devices will be listed.

7. Select a RAID device, click Edit and configure its mount point and (optionally) the type of file
system it should use (if you did not create one earlier) then click Done. Anaconda will perform
the install to this pre-existing RAID device, preserving the custom options you selected when you
created it in Rescue Mode.

NOTE

The limited Rescue Mode of the installer does not include man pages. Both the mdadm
and md man pages contain useful information for creating custom RAID arrays, and may
be needed throughout the workaround. As such, it can be helpful to either have access to
a machine with these man pages present, or to print them out prior to booting into Rescue
Mode and creating your custom arrays.


[2] A hot-swap chassis allows you to remove a hard drive without having to power-down your system.

[3] RAID level 1 comes at a high cost because you write the same information to all of the disks in the array. This
provides data reliability, but in a much less space-efficient manner than parity-based RAID levels such as level 5.
However, this space inefficiency comes with a performance benefit: parity-based RAID levels consume
considerably more CPU power in order to generate the parity while RAID level 1 simply writes the same data more
than once to the multiple RAID members with very little CPU overhead. As such, RAID level 1 can outperform the
parity-based RAID levels on machines where software RAID is employed and CPU resources on the machine are
consistently taxed with operations other than RAID activities.

[4] Parity information is calculated based on the contents of the rest of the member disks in the array. This
information can then be used to reconstruct data when one disk in the array fails. The reconstructed data can then
be used to satisfy I/O requests to the failed disk before it is replaced and to repopulate the failed disk after it has
been replaced.


CHAPTER 19. USING THE MOUNT COMMAND


On Linux, UNIX, and similar operating systems, file systems on different partitions and removable
devices (CDs, DVDs, or USB flash drives for example) can be attached to a certain point (the mount
point) in the directory tree, and then detached again. To attach or detach a file system, use the mount or
umount command respectively. This chapter describes the basic use of these commands, as well as
some advanced topics, such as moving a mount point or creating shared subtrees.

19.1. LISTING CURRENTLY MOUNTED FILE SYSTEMS


To display all currently attached file systems, use the following command with no additional arguments:

$ mount

This command displays the list of known mount points. Each line provides important information about
the device name, the file system type, the directory in which it is mounted, and relevant mount options in
the following form:

device on directory type type (options)

The findmnt utility, which allows users to list mounted file systems in a tree-like form, is also available
from Red Hat Enterprise Linux 6.1. To display all currently attached file systems, run the findmnt
command with no additional arguments:

$ findmnt

19.1.1. Specifying the File System Type


By default, the output of the mount command includes various virtual file systems such as sysfs and
tmpfs. To display only the devices with a certain file system type, provide the -t option:

$ mount -t type

Similarly, to display only the devices with a certain file system using the findmnt command:

$ findmnt -t type

For a list of common file system types, see Table 19.1, “Common File System Types”. For an example
usage, see Example 19.1, “Listing Currently Mounted ext4 File Systems”.

Example 19.1. Listing Currently Mounted ext4 File Systems

Usually, both / and /boot partitions are formatted to use ext4. To display only the mount points that
use this file system, use the following command:

$ mount -t ext4
/dev/sda2 on / type ext4 (rw)
/dev/sda1 on /boot type ext4 (rw)

To list such mount points using the findmnt command, type:


$ findmnt -t ext4
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda2 ext4 rw,relatime,seclabel,barrier=1,data=ordered
/boot /dev/sda1 ext4 rw,relatime,seclabel,barrier=1,data=ordered

19.2. MOUNTING A FILE SYSTEM


To attach a certain file system, use the mount command in the following form:

$ mount [option…] device directory

The device can be identified by:

a full path to a block device: for example, /dev/sda3

a universally unique identifier (UUID): for example, UUID=34795a28-ca6d-4fd8-a347-73671d0c19cb

a volume label: for example, LABEL=home
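
For example, assuming that a file system labeled home exists and that /home is its intended mount point (both assumptions made only for illustration), it could be mounted as root with:

# mount LABEL=home /home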

Note that while a file system is mounted, the original content of the directory is not accessible.

IMPORTANT

Linux does not prevent a user from mounting a file system to a directory with a file system
already attached to it. To determine whether a particular directory serves as a mount
point, run the findmnt utility with the directory as its argument and verify the exit code:

findmnt directory; echo $?

If no file system is attached to the directory, the given command returns 1.

When you run the mount command without all required information, that is without the device name, the
target directory, or the file system type, mount reads the contents of the /etc/fstab file to check if
the given file system is listed. The /etc/fstab file contains a list of device names and the directories in
which the selected file systems are set to be mounted as well as the file system type and mount options.
Therefore, when mounting a file system that is specified in /etc/fstab, you can choose one of the
following options:

mount [option…] directory


mount [option…] device
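
For example, given a hypothetical /etc/fstab entry such as the following (the UUID matches the blkid example below, and the /home mount point is an assumption made only for illustration):

UUID=34795a28-ca6d-4fd8-a347-73671d0c19cb /home ext3 defaults 1 2

running either mount /home or mount UUID=34795a28-ca6d-4fd8-a347-73671d0c19cb as root would mount that file system.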

Note that unless the command is run as root, the appropriate permissions are required to mount a file
system (see Section 19.2.2, “Specifying the Mount Options”).


NOTE

To determine the UUID and—if the device uses it—the label of a particular device, use the
blkid command in the following form:

blkid device

For example, to display information about /dev/sda3:

# blkid /dev/sda3
/dev/sda3: LABEL="home" UUID="34795a28-ca6d-4fd8-a347-
73671d0c19cb" TYPE="ext3"

19.2.1. Specifying the File System Type


In most cases, mount detects the file system automatically. However, there are certain file systems, such
as NFS (Network File System) or CIFS (Common Internet File System), that are not recognized, and
need to be specified manually. To specify the file system type, use the mount command in the following
form:

$ mount -t type device directory
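
For instance, to mount an NFS export (a minimal sketch; the server name, export path, and mount point are hypothetical, and the mount point must already exist), run as root:

# mount -t nfs server.example.com:/export /mnt/nfs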

Table 19.1, “Common File System Types” provides a list of common file system types that can be used
with the mount command. For a complete list of all available file system types, see the section called
“Manual Page Documentation”.

Table 19.1. Common File System Types

Type Description

ext2 The ext2 file system.

ext3 The ext3 file system.

ext4 The ext4 file system.

btrfs The btrfs file system.

xfs The xfs file system.

iso9660 The ISO 9660 file system. It is commonly used by optical media, typically CDs.

jfs The JFS file system created by IBM.

nfs The NFS file system. It is commonly used to access files over the network.

nfs4 The NFSv4 file system. It is commonly used to access files over the network.


ntfs The NTFS file system. It is commonly used on machines that are running the Windows
operating system.

udf The UDF file system. It is commonly used by optical media, typically DVDs.

vfat The FAT file system. It is commonly used on machines that are running the Windows
operating system, and on certain digital media such as USB flash drives or floppy disks.

See Example 19.2, “Mounting a USB Flash Drive” for an example usage.

Example 19.2. Mounting a USB Flash Drive

Older USB flash drives often use the FAT file system. Assuming that such a drive uses the /dev/sdc1
device and that the /media/flashdisk/ directory exists, mount it to this directory by typing the
following at a shell prompt as root:

~]# mount -t vfat /dev/sdc1 /media/flashdisk

19.2.2. Specifying the Mount Options


To specify additional mount options, use the command in the following form:

mount -o options device directory

When supplying multiple options, do not insert a space after a comma, or mount incorrectly interprets
the values following the space as additional parameters.

Table 19.2, “Common Mount Options” provides a list of common mount options. For a complete list of all
available options, consult the relevant manual page as referred to in the section called “Manual Page
Documentation”.

Table 19.2. Common Mount Options

Option Description

async Allows the asynchronous input/output operations on the file system.

auto Allows the file system to be mounted automatically using the mount -a command.

defaults Provides an alias for async,auto,dev,exec,nouser,rw,suid.

exec Allows the execution of binary files on the particular file system.

loop Mounts an image as a loop device.


noauto Prevents the file system from being mounted automatically with the mount -a
command.

noexec Disallows the execution of binary files on the particular file system.

nouser Disallows an ordinary user (that is, a user other than root) from mounting and unmounting the file
system.

remount Remounts the file system in case it is already mounted.

ro Mounts the file system for reading only.

rw Mounts the file system for both reading and writing.

user Allows an ordinary user (that is, other than root) to mount and unmount the file
system.

See Example 19.3, “Mounting an ISO Image” for an example usage.

Example 19.3. Mounting an ISO Image

An ISO image (or a disk image in general) can be mounted by using the loop device. Assuming that
the ISO image of the Fedora 14 installation disc is present in the current working directory and that
the /media/cdrom/ directory exists, mount the image to this directory by running the following
command:

# mount -o ro,loop Fedora-14-x86_64-Live-Desktop.iso /media/cdrom

Note that ISO 9660 is by design a read-only file system.

19.2.3. Sharing Mounts


Occasionally, certain system administration tasks require access to the same file system from more than
one place in the directory tree (for example, when preparing a chroot environment). This is possible, and
Linux allows you to mount the same file system to as many directories as necessary. Additionally, the
mount command implements the --bind option that provides a means for duplicating certain mounts.
Its usage is as follows:

$ mount --bind old_directory new_directory

Although this command allows a user to access the file system from both places, it does not apply to
file systems that are mounted within the original directory. To include these mounts as well, use the
following command:

$ mount --rbind old_directory new_directory
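
For instance, to make the contents of /home available inside a chroot being prepared under the hypothetical directory /srv/chroot, including any file systems mounted below /home, run as root:

# mount --rbind /home /srv/chroot/home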


Additionally, to provide as much flexibility as possible, Red Hat Enterprise Linux 7 implements the
functionality known as shared subtrees. This feature allows the use of the following four mount types:

Shared Mount
A shared mount allows the creation of an exact replica of a given mount point. When a mount point is
marked as a shared mount, any mount within the original mount point is reflected in it, and vice
versa. To change the type of a mount point to a shared mount, type the following at a shell prompt:

$ mount --make-shared mount_point

Alternatively, to change the mount type for the selected mount point and all mount points under it:

$ mount --make-rshared mount_point

See Example 19.4, “Creating a Shared Mount Point” for an example usage.

Example 19.4. Creating a Shared Mount Point

There are two places where other file systems are commonly mounted: the /media/ directory for
removable media, and the /mnt/ directory for temporarily mounted file systems. By using a
shared mount, you can make these two directories share the same content. To do so, as root,
mark the /media/ directory as shared:

# mount --bind /media /media


# mount --make-shared /media

Create its duplicate in /mnt/ by using the following command:

# mount --bind /media /mnt

It is now possible to verify that a mount within /media/ also appears in /mnt/. For example, if
the CD-ROM drive contains non-empty media and the /media/cdrom/ directory exists, run the
following commands:

# mount /dev/cdrom /media/cdrom


# ls /media/cdrom
EFI GPL isolinux LiveOS
# ls /mnt/cdrom
EFI GPL isolinux LiveOS

Similarly, it is possible to verify that any file system mounted in the /mnt/ directory is reflected in
/media/. For instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is
plugged in and the /mnt/flashdisk/ directory is present, type:

# mount /dev/sdc1 /mnt/flashdisk


# ls /media/flashdisk
en-US publican.cfg
# ls /mnt/flashdisk
en-US publican.cfg

Slave Mount


A slave mount allows the creation of a limited duplicate of a given mount point. When a mount point
is marked as a slave mount, any mount within the original mount point is reflected in it, but no mount
within a slave mount is reflected in its original. To change the type of a mount point to a slave mount,
type the following at a shell prompt:

mount --make-slave mount_point

Alternatively, it is possible to change the mount type for the selected mount point and all mount points
under it by typing:

mount --make-rslave mount_point

See Example 19.5, “Creating a Slave Mount Point” for an example usage.

Example 19.5. Creating a Slave Mount Point

This example shows how to get the content of the /media/ directory to appear in /mnt/ as well,
but without any mounts in the /mnt/ directory being reflected in /media/. As root, first mark the
/media/ directory as shared:

~]# mount --bind /media /media


~]# mount --make-shared /media

Then create its duplicate in /mnt/, but mark it as "slave":

~]# mount --bind /media /mnt


~]# mount --make-slave /mnt

Now verify that a mount within /media/ also appears in /mnt/. For example, if the CD-ROM
drive contains non-empty media and the /media/cdrom/ directory exists, run the following
commands:

~]# mount /dev/cdrom /media/cdrom


~]# ls /media/cdrom
EFI GPL isolinux LiveOS
~]# ls /mnt/cdrom
EFI GPL isolinux LiveOS

Also verify that file systems mounted in the /mnt/ directory are not reflected in /media/. For
instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and the
/mnt/flashdisk/ directory is present, type:

~]# mount /dev/sdc1 /mnt/flashdisk


~]# ls /media/flashdisk
~]# ls /mnt/flashdisk
en-US publican.cfg

Private Mount
A private mount is the default type of mount, and unlike a shared or slave mount, it does not receive
or forward any propagation events. To explicitly mark a mount point as a private mount, type the
following at a shell prompt:


mount --make-private mount_point

Alternatively, it is possible to change the mount type for the selected mount point and all mount points
under it:

mount --make-rprivate mount_point

See Example 19.6, “Creating a Private Mount Point” for an example usage.

Example 19.6. Creating a Private Mount Point

Taking into account the scenario in Example 19.4, “Creating a Shared Mount Point”, assume that
a shared mount point has been previously created by using the following commands as root:

~]# mount --bind /media /media


~]# mount --make-shared /media
~]# mount --bind /media /mnt

To mark the /mnt/ directory as private, type:

~]# mount --make-private /mnt

It is now possible to verify that none of the mounts within /media/ appears in /mnt/. For
example, if the CD-ROM drive contains non-empty media and the /media/cdrom/ directory
exists, run the following commands:

~]# mount /dev/cdrom /media/cdrom


~]# ls /media/cdrom
EFI GPL isolinux LiveOS
~]# ls /mnt/cdrom
~]#

It is also possible to verify that file systems mounted in the /mnt/ directory are not reflected in
/media/. For instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is
plugged in and the /mnt/flashdisk/ directory is present, type:

~]# mount /dev/sdc1 /mnt/flashdisk


~]# ls /media/flashdisk
~]# ls /mnt/flashdisk
en-US publican.cfg

Unbindable Mount
To prevent a given mount point from being duplicated in any way, use an unbindable mount. To
change the type of a mount point to an unbindable mount, type the following at a shell
prompt:

mount --make-unbindable mount_point

Alternatively, it is possible to change the mount type for the selected mount point and all mount points
under it:


mount --make-runbindable mount_point

See Example 19.7, “Creating an Unbindable Mount Point” for an example usage.

Example 19.7. Creating an Unbindable Mount Point

To prevent the /media/ directory from being shared, as root:

# mount --bind /media /media


# mount --make-unbindable /media

This way, any subsequent attempt to make a duplicate of this mount fails with an error:

# mount --bind /media /mnt


mount: wrong fs type, bad option, bad superblock on /media,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so

19.2.4. Moving a Mount Point


To change the directory in which a file system is mounted, use the following command:

# mount --move old_directory new_directory

See Example 19.8, “Moving an Existing NFS Mount Point” for an example usage.

Example 19.8. Moving an Existing NFS Mount Point

An NFS share containing user directories is already mounted in /mnt/userdirs/. As root,
move this mount point to /home by using the following command:

# mount --move /mnt/userdirs /home

To verify the mount point has been moved, list the content of both directories:

# ls /mnt/userdirs
# ls /home
jill joe

19.2.5. Setting Read-only Permissions for root

Sometimes, you need to mount the root file system with read-only permissions. Example use cases
include enhancing security or ensuring data integrity after an unexpected system power-off.

19.2.5.1. Configuring root to Mount with Read-only Permissions on Boot

1. In the /etc/sysconfig/readonly-root file, change READONLY to yes:


# Set to 'yes' to mount the file systems as read-only.
READONLY=yes
[output truncated]

2. Change defaults to ro in the root entry (/) in the /etc/fstab file:

/dev/mapper/luks-c376919e... / ext4 ro,x-systemd.device-timeout=0 1 1

3. Add ro to the GRUB_CMDLINE_LINUX directive in the /etc/default/grub file and ensure that it
does not contain rw:

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root
rd.lvm.lv=rhel/swap rhgb quiet ro"

4. Recreate the GRUB2 configuration file:

# grub2-mkconfig -o /boot/grub2/grub.cfg

5. If you need to add files and directories to be mounted with write permissions in the tmpfs file
system, create a text file in the /etc/rwtab.d/ directory and put the configuration there. For
example, to mount /etc/example/file with write permissions, add this line to the
/etc/rwtab.d/example file:

files /etc/example/file

IMPORTANT

Changes made to files and directories in tmpfs do not persist across boots.

See Section 19.2.5.3, “Files and Directories That Retain Write Permissions” for more information
on this step.

6. Reboot the system.

19.2.5.2. Remounting root Instantly

If root (/) was mounted with read-only permissions on system boot, you can remount it with write
permissions:

# mount -o remount,rw /

This can be particularly useful when / is incorrectly mounted with read-only permissions.

To remount / with read-only permissions again, run:

# mount -o remount,ro /


NOTE

This command mounts the whole / with read-only permissions. A better approach is to
retain write permissions for certain files and directories by copying them into RAM, as
described in Section 19.2.5.1, “Configuring root to Mount with Read-only Permissions on
Boot”.

19.2.5.3. Files and Directories That Retain Write Permissions

For the system to function properly, some files and directories need to retain write permissions. With root
in read-only mode, they are mounted in RAM in the tmpfs temporary file system. The default set of
such files and directories is read from the /etc/rwtab file, which contains:

dirs /var/cache/man
dirs /var/gdm
[output truncated]
empty /tmp
empty /var/cache/foomatic
[output truncated]
files /etc/adjtime
files /etc/ntp.conf
[output truncated]

Entries in the /etc/rwtab file follow this format:

how the file or directory is copied to tmpfs        path to the file or directory

A file or directory can be copied to tmpfs in the following three ways:

empty path: An empty path is copied to tmpfs. Example: empty /tmp

dirs path: A directory tree is copied to tmpfs, but empty (the structure without file contents). Example: dirs /var/run

files path: A file or a directory tree is copied to tmpfs intact. Example: files
/etc/resolv.conf

The same format applies when adding custom paths to /etc/rwtab.d/.

19.3. UNMOUNTING A FILE SYSTEM


To detach a previously mounted file system, use either of the following variants of the umount
command:

$ umount directory
$ umount device

Note that unless this is performed while logged in as root, the correct permissions must be available to
unmount the file system. For more information, see Section 19.2.2, “Specifying the Mount Options”. See
Example 19.9, “Unmounting a CD” for an example usage.


IMPORTANT

When a file system is in use (for example, when a process is reading a file on this file
system, or when it is used by the kernel), running the umount command fails with an
error. To determine which processes are accessing the file system, use the fuser
command in the following form:

$ fuser -m directory

For example, to list the processes that are accessing a file system mounted to the
/media/cdrom/ directory:

$ fuser -m /media/cdrom
/media/cdrom: 1793 2013 2022 2435 10532c 10672c

Example 19.9. Unmounting a CD

To unmount a CD that was previously mounted to the /media/cdrom/ directory, use the following
command:

$ umount /media/cdrom

19.4. MOUNT COMMAND REFERENCES


The following resources provide an in-depth documentation on the subject.

Manual Page Documentation


man 8 mount: The manual page for the mount command that provides a full documentation on
its usage.

man 8 umount: The manual page for the umount command that provides a full documentation
on its usage.

man 8 findmnt: The manual page for the findmnt command that provides a full
documentation on its usage.

man 5 fstab: The manual page providing a thorough description of the /etc/fstab file
format.

Useful Websites
Shared subtrees — An LWN article covering the concept of shared subtrees.


CHAPTER 20. THE VOLUME_KEY FUNCTION


The volume_key function provides two tools, libvolume_key and volume_key. libvolume_key is a library
for manipulating storage volume encryption keys and storing them separately from volumes.
volume_key is an associated command line tool used to extract keys and passphrases in order to
restore access to an encrypted hard drive.

This is useful when the primary user forgets their keys and passphrases, when an employee leaves
abruptly, or when data needs to be extracted after a hardware or software failure corrupts the header of the
encrypted volume. In a corporate setting, the IT help desk can use volume_key to back up the
encryption keys before handing over the computer to the end user.

Currently, volume_key only supports the LUKS volume encryption format.

NOTE

volume_key is not included in a standard install of Red Hat Enterprise Linux 7 server.
For information on installing it, refer to
http://fedoraproject.org/wiki/Disk_encryption_key_escrow_use_cases.

20.1. VOLUME_KEY COMMANDS


The format for volume_key is:

volume_key [OPTION]... OPERAND

The operands and mode of operation for volume_key are determined by specifying one of the following
options:

--save
This command expects the operand volume [packet]. If a packet is provided then volume_key will
extract the keys and passphrases from it. If packet is not provided, then volume_key will extract the
keys and passphrases from the volume, prompting the user where necessary. These keys and
passphrases will then be stored in one or more output packets.

--restore
This command expects the operands volume packet. It then opens the volume and uses the keys and
passphrases in the packet to make the volume accessible again, prompting the user where
necessary, such as allowing the user to enter a new passphrase, for example.

--setup-volume
This command expects the operands volume packet name. It then opens the volume and uses the
keys and passphrases in the packet to set up the volume for use of the decrypted data as name.

Name is the name of a dm-crypt volume. This operation makes the decrypted volume available as
/dev/mapper/name.

This operation does not permanently alter the volume by adding a new passphrase, for example. The
user can access and modify the decrypted volume, modifying volume in the process.
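
For example, following the operand order described above, a command of the following form makes the decrypted data available as /dev/mapper/my_volume (the escrow packet file name and the name my_volume are hypothetical):

volume_key --setup-volume /path/to/volume escrow-packet my_volume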

--reencrypt, --secrets, and --dump


These three commands perform similar functions with varying output methods. They each require the
operand packet, and each opens the packet, decrypting it where necessary. --reencrypt then
stores the information in one or more new output packets. --secrets outputs the keys and
passphrases contained in the packet. --dump outputs the content of the packet, though the keys and
passphrases are not output by default. This can be changed by appending --with-secrets to the
command. It is also possible to only dump the unencrypted parts of the packet, if any, by using the --
unencrypted command. This does not require any passphrase or private key access.

Each of these can be appended with the following options:

-o, --output packet


This command writes the default key or passphrase to the packet. The default key or passphrase
depends on the volume format. Ensure it is one that is unlikely to expire, and will allow --restore to
restore access to the volume.

--output-format format
This command uses the specified format for all output packets. Currently, format can be one of the
following:

asymmetric: uses CMS to encrypt the whole packet, and requires a certificate

asymmetric_wrap_secret_only: wraps only the secret, or keys and passphrases, and


requires a certificate

passphrase: uses GPG to encrypt the whole packet, and requires a passphrase

--create-random-passphrase packet
This command generates a random alphanumeric passphrase, adds it to the volume (without
affecting other passphrases), and then stores this random passphrase into the packet.

20.2. USING VOLUME_KEY AS AN INDIVIDUAL USER


As an individual user, you can use volume_key to save encryption keys by using the following
procedure.

NOTE

For all examples in this file, /path/to/volume is a LUKS device, not the plaintext
device contained within. blkid -s type /path/to/volume should report
type="crypto_LUKS".

Procedure 20.1. Using volume_key Stand-alone

1. Run:

volume_key --save /path/to/volume -o escrow-packet

A prompt will then appear requiring an escrow packet passphrase to protect the key.

2. Save the generated escrow-packet file, ensuring that the passphrase is not forgotten.


If the volume passphrase is forgotten, use the saved escrow packet to restore access to the data.

Procedure 20.2. Restore Access to Data with Escrow Packet

1. Boot the system in an environment where volume_key can be run and the escrow packet is
available (a rescue mode, for example).

2. Run:

volume_key --restore /path/to/volume escrow-packet

A prompt will appear for the escrow packet passphrase that was used when creating the escrow
packet, and for the new passphrase for the volume.

3. Mount the volume using the chosen passphrase.

To free up the passphrase slot in the LUKS header of the encrypted volume, remove the old, forgotten
passphrase by using the command cryptsetup luksKillSlot.
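
For example (a sketch; the key slot number depends on the volume), the key slots can be inspected and the forgotten passphrase removed as root with:

# cryptsetup luksDump /path/to/volume
# cryptsetup luksKillSlot /path/to/volume 0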

20.3. USING VOLUME_KEY IN A LARGER ORGANIZATION


In a larger organization, using a single password known by every system administrator and keeping
track of a separate password for each system is impractical and a security risk. To counter this,
volume_key can use asymmetric cryptography to minimize the number of people who know the
password required to access encrypted data on any computer.

This section will cover the procedures required for preparation before saving encryption keys, how to
save encryption keys, restoring access to a volume, and setting up emergency passphrases.

20.3.1. Preparation for Saving Encryption Keys


In order to begin saving encryption keys, some preparation is required.

Procedure 20.3. Preparation

1. Create an X509 certificate/private key pair (one possible way to do this is shown in the example after this procedure).

2. Designate users who are trusted not to compromise the private key. These users will be
able to decrypt the escrow packets.

3. Choose which systems will be used to decrypt the escrow packets. On these systems, set up an
NSS database that contains the private key.

If the private key was not created in an NSS database, follow these steps:

Store the certificate and private key in a PKCS#12 file.

Run:

certutil -d /the/nss/directory -N

At this point it is possible to choose an NSS database password. Each NSS database can
have a different password so the designated users do not need to share a single password if
a separate NSS database is used by each user.


Run:

pk12util -d /the/nss/directory -i the-pkcs12-file

4. Distribute the certificate to anyone installing systems or saving keys on existing systems.

5. For saved private keys, prepare storage that allows them to be looked up by machine and
volume. For example, this can be a simple directory with one subdirectory per machine, or a
database used for other system management tasks as well.
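
For step 1 of this procedure, one possible way to create a self-signed X509 certificate and private key pair is with openssl (a sketch only; the key size, validity period, and file names are illustrative assumptions, not requirements of volume_key):

# openssl req -x509 -newkey rsa:2048 -nodes -keyout privkey.pem -out cert.pem -days 3650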

20.3.2. Saving Encryption Keys


After completing the required preparation (see Section 20.3.1, “Preparation for Saving Encryption Keys”)
it is now possible to save the encryption keys using the following procedure.

NOTE

For all examples in this file, /path/to/volume is a LUKS device, not the plaintext
device contained within; blkid -s type /path/to/volume should report
type="crypto_LUKS".

Procedure 20.4. Saving Encryption Keys

1. Run:

volume_key --save /path/to/volume -c /path/to/cert escrow-packet

2. Save the generated escrow-packet file in the prepared storage, associating it with the system
and the volume.

These steps can be performed manually, or scripted as part of system installation.

20.3.3. Restoring Access to a Volume


After the encryption keys have been saved (see Section 20.3.1, “Preparation for Saving Encryption Keys”
and Section 20.3.2, “Saving Encryption Keys”), access can be restored to a volume where needed.

Procedure 20.5. Restoring Access to a Volume

1. Get the escrow packet for the volume from the packet storage and send it to one of the
designated users for decryption.

2. The designated user runs:

volume_key --reencrypt -d /the/nss/directory escrow-packet-in -o escrow-packet-out

After providing the NSS database password, the designated user chooses a passphrase for
encrypting escrow-packet-out. This passphrase can be different every time and only
protects the encryption keys while they are moved from the designated user to the target
system.

3. Obtain the escrow-packet-out file and the passphrase from the designated user.


4. Boot the target system in an environment that can run volume_key and have the escrow-
packet-out file available, such as in a rescue mode.

5. Run:

volume_key --restore /path/to/volume escrow-packet-out

A prompt will appear for the packet passphrase chosen by the designated user, and for a new
passphrase for the volume.

6. Mount the volume using the chosen volume passphrase.

To free up the passphrase slot in the LUKS header of the encrypted volume, remove the old, forgotten
passphrase with the command cryptsetup luksKillSlot device key-slot. For more information and
examples, see cryptsetup --help.

20.3.4. Setting up Emergency Passphrases


In some circumstances (such as traveling for business) it is impractical for system administrators to work
directly with the affected systems, but users still need access to their data. In this case, volume_key can
work with passphrases as well as encryption keys.

During the system installation, run:

volume_key --save /path/to/volume -c /path/to/cert --create-random-passphrase passphrase-packet

This generates a random passphrase, adds it to the specified volume, and stores it to passphrase-
packet. It is also possible to combine the --create-random-passphrase and -o options to
generate both packets at the same time.

If a user forgets the password, the designated user runs:

volume_key --secrets -d /your/nss/directory passphrase-packet

This shows the random passphrase. Give this passphrase to the end user.

20.4. VOLUME_KEY REFERENCES


More information on volume_key can be found:

in the readme file located at /usr/share/doc/volume_key-*/README

on volume_key's manpage using man volume_key

online at http://fedoraproject.org/wiki/Disk_encryption_key_escrow_use_cases


CHAPTER 21. SOLID-STATE DISK DEPLOYMENT GUIDELINES


Solid-state disks (SSD) are storage devices that use NAND flash chips to persistently store data. This
sets them apart from previous generations of disks, which store data in rotating, magnetic platters. In an
SSD, the access time for data across the full Logical Block Address (LBA) range is constant; whereas
with older disks that use rotating media, access patterns that span large address ranges incur seek
costs. As such, SSD devices have better latency and throughput.

Performance degrades as the number of used blocks approaches the disk capacity. The degree of
performance impact varies greatly by vendor. However, all devices experience some degradation.

To address the degradation issue, the host system (for example, the Linux kernel) may use discard
requests to inform the storage that a given range of blocks is no longer in use. An SSD can use this
information to free up space internally, using the free blocks for wear-leveling. Discards will only be
issued if the storage advertises support in terms of its storage protocol (be it ATA or SCSI). Discard
requests are issued to the storage using the negotiated discard command specific to the storage protocol
(TRIM command for ATA, and WRITE SAME with UNMAP set, or UNMAP command for SCSI).

Enabling discard support is most useful when the following points are true:

Free space is still available on the file system.

Most logical blocks on the underlying storage device have already been written to.

For more information about TRIM, see Data Set Management T13 Specifications.

For more information about UNMAP, see the section 4.7.3.4 of the SCSI Block Commands 3 T10
Specification.

NOTE

Not all solid-state devices on the market have discard support. To determine whether your
solid-state device supports discard, check for
/sys/block/sda/queue/discard_granularity, which reports the size of the device's
internal allocation unit.
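
For example (the device name and output are illustrative; a nonzero value generally indicates that the device supports discard):

# cat /sys/block/sda/queue/discard_granularity
512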

Deployment Considerations
Because of the internal layout and operation of SSDs, it is best to partition devices on an internal erase
block boundary. Partitioning utilities in Red Hat Enterprise Linux 7 choose sane defaults if the SSD
exports topology information. However, if the device does not export topology information, Red Hat
recommends that the first partition should be created at a 1MB boundary.

SSDs implement various types of TRIM mechanism, depending on the vendor's choice. Early generations of
drives improved performance at the cost of possible data leakage after the read command.

Following are the types of TRIM mechanism:

Non-deterministic TRIM

Deterministic TRIM (DRAT)

Deterministic Read Zero after TRIM (RZAT)

The first two types of TRIM mechanism can cause data leakage, because a read command sent to an LBA
after a TRIM may return either the old data or different data. RZAT returns zeroes after the read
command, and Red Hat recommends this TRIM mechanism to avoid data leakage. This concerns SSDs
only; choose a disk that supports the RZAT mechanism.

The type of TRIM mechanism used depends on the hardware implementation. To find the type of TRIM
mechanism on an ATA device, use the hdparm command, as in the following example:

# hdparm -I /dev/sda | grep TRIM


Data Set Management TRIM supported (limit 8 block)
Deterministic read data after TRIM

For more information, see man hdparm.

The Logical Volume Manager (LVM), the device-mapper (DM) targets, and MD (software raid) targets
that LVM uses support discards. The only DM targets that do not support discards are dm-snapshot, dm-
crypt, and dm-raid45. Discard support for the dm-mirror was added in Red Hat Enterprise Linux 6.1 and
as of 7.0 MD supports discards.

Using RAID level 5 over SSD results in low performance if SSDs do not handle discard correctly. You
can set discard in the raid456.conf file, or in the GRUB2 configuration. For instructions, see the
following procedures.

Procedure 21.1. Setting discard in raid456.conf

The devices_handle_discard_safely module parameter is set in the raid456 module. To enable
discard in the raid456.conf file:

1. Verify that your hardware supports discards:

# cat /sys/block/disk-name/queue/discard_zeroes_data

If the returned value is 1, discards are supported. If the command returns 0, the RAID code has
to zero the disk out, which takes more time.

2. Create the /etc/modprobe.d/raid456.conf file, and include the following line:

options raid456 devices_handle_discard_safely=Y

3. Use the dracut -f command to rebuild the initial ramdisk (initrd).

4. Reboot the system for the changes to take effect.

Procedure 21.2. Setting discard in the GRUB2 Configuration

The devices_handle_discard_safely module parameter is set in the raid456 module. To enable
discard in the GRUB2 configuration:

1. Verify that your hardware supports discards:

# cat /sys/block/disk-name/queue/discard_zeroes_data

If the returned value is 1, discards are supported. If the command returns 0, the RAID code has
to zero the disk out, which takes more time.


2. Add the following line to the /etc/default/grub file:

raid456.devices_handle_discard_safely=Y

3. The location of the GRUB2 configuration file is different on systems with the BIOS firmware and
on systems with UEFI. Use one of the following commands to recreate the GRUB2 configuration
file.

On a system with the BIOS firmware, use:

# grub2-mkconfig -o /boot/grub2/grub.cfg

On a system with the UEFI firmware, use:

# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

4. Reboot the system for the changes to take effect.

NOTE

In Red Hat Enterprise Linux 7, discard is fully supported by the ext4 and XFS file systems
only.

In Red Hat Enterprise Linux 6.3 and earlier, only the ext4 file system fully supports discard. Starting with
Red Hat Enterprise Linux 6.4, both ext4 and XFS file systems fully support discard. To enable discard
commands on a device, use the discard option of the mount command. For example, to mount
/dev/sda2 to /mnt with discard enabled, use:

# mount -t ext4 -o discard /dev/sda2 /mnt

By default, ext4 does not issue the discard command, primarily to avoid problems on devices which
might not properly implement discard. The Linux swap code issues discard commands to discard-
enabled devices, and there is no option to control this behavior.

Performance Tuning Considerations


For information on performance tuning considerations regarding solid-state disks, see the Solid-State
Disks section in the Red Hat Enterprise Linux 7 Performance Tuning Guide.


CHAPTER 22. WRITE BARRIERS


A write barrier is a kernel mechanism used to ensure that file system metadata is correctly written and
ordered on persistent storage, even when storage devices with volatile write caches lose power. File
systems with write barriers enabled also ensure that data transmitted via fsync() is persistent across a
power loss.

Enabling write barriers incurs a substantial performance penalty for some applications. Specifically,
applications that use fsync() heavily or create and delete many small files will likely run much slower.

22.1. IMPORTANCE OF WRITE BARRIERS


File systems safely update metadata, ensuring consistency. Journalled file systems bundle metadata
updates into transactions and send them to persistent storage in the following manner:

1. The file system sends the body of the transaction to the storage device.

2. The file system sends a commit block.

3. If the transaction and its corresponding commit block are written to disk, the file system assumes
that the transaction will survive any power failure.

However, file system integrity during power failure becomes more complex for storage devices with extra
caches. Storage target devices like local S-ATA or SAS drives may have write caches ranging from
32MB to 64MB in size (with modern drives). Hardware RAID controllers often contain internal write
caches. Further, high end arrays, like those from NetApp, IBM, Hitachi and EMC (among others), also
have large caches.

Storage devices with write caches report I/O as "complete" when the data is in cache; if the cache loses
power, it loses its data as well. Worse, as the cache de-stages to persistent storage, it may change the
original metadata ordering. When this occurs, the commit block may be present on disk without having
the complete, associated transaction in place. As a result, the journal may replay these uninitialized
transaction blocks into the file system during post-power-loss recovery; this will cause data inconsistency
and corruption.

How Write Barriers Work


Write barriers are implemented in the Linux kernel via storage write-cache flushes before and after the
order-critical I/O. After the transaction is written, the storage cache is flushed, the commit block
is written, and the cache is flushed again. This ensures that:

The disk contains all the data.

No re-ordering has occurred.

With barriers enabled, an fsync() call also issues a storage cache flush. This guarantees that file data
is persistent on disk even if power loss occurs shortly after fsync() returns.

22.2. ENABLING AND DISABLING WRITE BARRIERS


To mitigate the risk of data corruption during power loss, some storage devices use battery-backed write
caches. Generally, high-end arrays and some hardware controllers use battery-backed write caches.
However, because the cache's volatility is not visible to the kernel, Red Hat Enterprise Linux 7 enables
write barriers by default on all supported journaling file systems.


NOTE

Write caches are designed to increase I/O performance. However, enabling write barriers
means constantly flushing these caches, which can significantly reduce performance.

For devices with non-volatile, battery-backed write caches and those with write-caching disabled, you
can safely disable write barriers at mount time using the -o nobarrier option for mount. However,
some devices do not support write barriers; such devices log an error message to
/var/log/messages. For more information, see Table 22.1, “Write Barrier Error Messages per File
System”.
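
For example, for a device known to have a non-volatile, battery-backed write cache, barriers could be disabled at mount time as follows (the device and mount point are hypothetical):

# mount -o nobarrier /dev/sda2 /mnt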

Table 22.1. Write Barrier Error Messages per File System

File System Error Message

ext3/ext4 JBD: barrier-based sync failed on device - disabling barriers

XFS Filesystem device - Disabling barriers, trial barrier write failed

btrfs btrfs: disabling barriers on dev device

22.3. WRITE BARRIER CONSIDERATIONS


Some system configurations do not need write barriers to protect data. In most cases, other methods are
preferable to write barriers, since enabling write barriers causes a significant performance penalty.

Disabling Write Caches


An alternative way to avoid data integrity issues is to ensure that no write caches lose data on power
failure. When possible, the best way to achieve this is to disable the write cache. On a simple server
or desktop with one or more SATA drives (attached to a local SATA controller, such as an Intel AHCI part),
you can disable the write cache on the target SATA drives with the following command:

# hdparm -W0 /device/

Battery-Backed Write Caches


Write barriers are also unnecessary whenever the system uses hardware RAID controllers with battery-
backed write cache. If the system is equipped with such controllers and if its component drives have write
caches disabled, the controller acts as a write-through cache; this informs the kernel that the write cache
data survives a power loss.

Most controllers use vendor-specific tools to query and manipulate target drives. For example, the LSI
Megaraid SAS controller uses a battery-backed write cache; this type of controller requires the
MegaCli64 tool to manage target drives. To show the state of all back-end drives for LSI Megaraid
SAS, use:


# MegaCli64 -LDGetProp -DskCache -LAll -aALL

To disable the write cache of all back-end drives for LSI Megaraid SAS, use:

# MegaCli64 -LDSetProp -DisDskCache -Lall -aALL

NOTE

Hardware RAID cards recharge their batteries while the system is operational. If a system
is powered off for an extended period of time, the batteries will lose their charge, leaving
stored data vulnerable during a power failure.

High-End Arrays
High-end arrays have various ways of protecting data in the event of a power failure. As such, there is no
need to verify the state of the internal drives in external RAID storage.

NFS
NFS clients do not need to enable write barriers, since data integrity is handled by the NFS server side.
As such, NFS servers should be configured to ensure data persistence throughout a power loss (whether
through write barriers or other means).


CHAPTER 23. STORAGE I/O ALIGNMENT AND SIZE


Recent enhancements to the SCSI and ATA standards allow storage devices to indicate their preferred
(and in some cases, required) I/O alignment and I/O size. This information is particularly useful with
newer disk drives that increase the physical sector size from 512 bytes to 4k bytes. This information may
also be beneficial for RAID devices, where the chunk size and stripe size may impact performance.

The Linux I/O stack has been enhanced to process vendor-provided I/O alignment and I/O size
information, allowing storage management tools (parted, lvm, mkfs.*, and the like) to optimize data
placement and access. If a legacy device does not export I/O alignment and size data, then storage
management tools in Red Hat Enterprise Linux 7 will conservatively align I/O on a 4k (or larger power of
2) boundary. This will ensure that 4k-sector devices operate correctly even if they do not indicate any
required/preferred I/O alignment and size.

For information on determining the I/O alignment and size information that the operating system obtained
from the device, see Section 23.2, “Userspace Access”. This data is subsequently used by the storage
management tools to determine data placement.

The I/O scheduler has changed in Red Hat Enterprise Linux 7. The default I/O scheduler is now Deadline,
except for SATA drives, which use CFQ by default. For faster storage, Deadline outperforms CFQ, and using
it provides a performance increase without the need for special tuning.

If the default is not right for some disks (for example, SAS rotational disks), change the I/O scheduler to
CFQ. Whether this is appropriate depends on the workload.

23.1. PARAMETERS FOR STORAGE ACCESS


The operating system uses the following information to determine I/O alignment and size:

physical_block_size
Smallest internal unit on which the device can operate

logical_block_size
Used externally to address a location on the device

alignment_offset
The number of bytes that the beginning of the Linux block device (partition/MD/LVM device) is offset
from the underlying physical alignment

minimum_io_size
The device’s preferred minimum unit for random I/O

optimal_io_size
The device’s preferred unit for streaming I/O

For example, certain 4K sector devices may use a 4K physical_block_size internally but expose a
more granular 512-byte logical_block_size to Linux. This discrepancy introduces potential for
misaligned I/O. To address this, the Red Hat Enterprise Linux 7 I/O stack will attempt to start all data
areas on a naturally-aligned boundary (physical_block_size) by making sure it accounts for any
alignment_offset if the beginning of the block device is offset from the underlying physical alignment.

Storage vendors can also supply I/O hints about the preferred minimum unit for random I/O
(minimum_io_size) and streaming I/O (optimal_io_size) of a device. For example,
minimum_io_size and optimal_io_size may correspond to a RAID device's chunk size and stripe
size respectively.

23.2. USERSPACE ACCESS


Always take care to use properly aligned and sized I/O. This is especially important for Direct I/O access.
Direct I/O should be aligned on a logical_block_size boundary, and in multiples of the
logical_block_size.

With native 4K devices (that is, where the logical_block_size is 4K) it is now critical that applications
perform direct I/O in multiples of the device's logical_block_size. This means that applications that
perform 512-byte aligned I/O rather than 4k-aligned I/O will fail with native 4k devices.

To avoid this, an application should consult the I/O parameters of a device to ensure it is using the
proper I/O alignment and size. As mentioned earlier, I/O parameters are exposed through both the
sysfs and block device ioctl interfaces.

For more information, see man libblkid. This man page is provided by the libblkid-devel
package.

sysfs Interface
/sys/block/disk/alignment_offset

or

/sys/block/disk/partition/alignment_offset

NOTE

The file location depends on whether the disk is a physical disk (be that a local
disk, local RAID, or a multipath LUN) or a virtual disk. The first file location is
applicable to physical disks while the second file location is applicable to virtual
disks. The reason for this is because virtio-blk will always report an alignment
value for the partition. Physical disks may or may not report an alignment value.

/sys/block/disk/queue/physical_block_size

/sys/block/disk/queue/logical_block_size

/sys/block/disk/queue/minimum_io_size

/sys/block/disk/queue/optimal_io_size
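
For example, to read these values for a hypothetical disk named sda (the output shown is only
illustrative):

# cat /sys/block/sda/queue/logical_block_size
512
# cat /sys/block/sda/queue/physical_block_size
4096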

The kernel will still export these sysfs attributes for "legacy" devices that do not provide I/O parameters
information, for example:

Example 23.1. sysfs Interface

alignment_offset: 0
physical_block_size: 512
logical_block_size: 512
minimum_io_size: 512
optimal_io_size: 0

Block Device ioctls


BLKALIGNOFF: alignment_offset

BLKPBSZGET: physical_block_size

BLKSSZGET: logical_block_size

BLKIOMIN: minimum_io_size

BLKIOOPT: optimal_io_size
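
The blockdev utility from util-linux wraps these ioctls, so the same values can be queried from the
command line; a short sketch, assuming a hypothetical disk /dev/sda:

# blockdev --getss /dev/sda        # BLKSSZGET: logical_block_size
# blockdev --getpbsz /dev/sda      # BLKPBSZGET: physical_block_size
# blockdev --getiomin /dev/sda     # BLKIOMIN: minimum_io_size
# blockdev --getioopt /dev/sda     # BLKIOOPT: optimal_io_size
# blockdev --getalignoff /dev/sda  # BLKALIGNOFF: alignment_offset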

23.3. I/O STANDARDS


This section describes I/O standards used by ATA and SCSI devices.

ATA
ATA devices must report appropriate information via the IDENTIFY DEVICE command. ATA devices
only report I/O parameters for physical_block_size, logical_block_size, and
alignment_offset. The additional I/O hints are outside the scope of the ATA Command Set.

SCSI
I/O parameters support in Red Hat Enterprise Linux 7 requires at least version 3 of the SCSI Primary
Commands (SPC-3) protocol. The kernel will only send an extended inquiry (which gains access to the
BLOCK LIMITS VPD page) and READ CAPACITY(16) command to devices which claim compliance
with SPC-3.

The READ CAPACITY(16) command provides the block sizes and alignment offset:

LOGICAL BLOCK LENGTH IN BYTES is used to derive:

/sys/block/disk/queue/logical_block_size

LOGICAL BLOCKS PER PHYSICAL BLOCK EXPONENT is used to derive:

/sys/block/disk/queue/physical_block_size

LOWEST ALIGNED LOGICAL BLOCK ADDRESS is used to derive:

/sys/block/disk/alignment_offset

/sys/block/disk/partition/alignment_offset

The BLOCK LIMITS VPD page (0xb0) provides the I/O hints. It uses OPTIMAL TRANSFER
LENGTH GRANULARITY and OPTIMAL TRANSFER LENGTH to derive:

/sys/block/disk/queue/minimum_io_size

/sys/block/disk/queue/optimal_io_size


The sg3_utils package provides the sg_inq utility, which can be used to access the BLOCK LIMITS
VPD page. To do so, run:

# sg_inq -p 0xb0 disk

23.4. STACKING I/O PARAMETERS


All layers of the Linux I/O stack have been engineered to propagate the various I/O parameters up the
stack. When a layer consumes an attribute or aggregates many devices, the layer must expose
appropriate I/O parameters so that upper-layer devices or tools will have an accurate view of the storage
as it is transformed. Some practical examples are:

Only one layer in the I/O stack should adjust for a non-zero alignment_offset; once a layer
adjusts accordingly, it will export a device with an alignment_offset of zero.

A striped Device Mapper (DM) device created with LVM must export a minimum_io_size and
optimal_io_size relative to the stripe count (number of disks) and user-provided chunk size.

In Red Hat Enterprise Linux 7, Device Mapper and Software Raid (MD) device drivers can be used to
arbitrarily combine devices with different I/O parameters. The kernel's block layer will attempt to
reasonably combine the I/O parameters of the individual devices. The kernel will not prevent combining
heterogeneous devices; however, be aware of the risks associated with doing so.

For instance, a 512-byte device and a 4K device may be combined into a single logical DM device, which
would have a logical_block_size of 4K. File systems layered on such a hybrid device assume that
4K will be written atomically, but in reality it will span 8 logical block addresses when issued to the 512-
byte device. Using a 4K logical_block_size for the higher-level DM device increases potential for a
partial write to the 512-byte device if there is a system crash.

If combining the I/O parameters of multiple devices results in a conflict, the block layer may issue a
warning that the device is susceptible to partial writes and/or is misaligned.

23.5. LOGICAL VOLUME MANAGER


LVM provides userspace tools that are used to manage the kernel's DM devices. LVM will shift the start
of the data area (that a given DM device will use) to account for a non-zero alignment_offset
associated with any device managed by LVM. This means logical volumes will be properly aligned
(alignment_offset=0).

By default, LVM will adjust for any alignment_offset, but this behavior can be disabled by setting
data_alignment_offset_detection to 0 in /etc/lvm/lvm.conf. Disabling this is not
recommended.

LVM will also detect the I/O hints for a device. The start of a device's data area will be a multiple of the
minimum_io_size or optimal_io_size exposed in sysfs. LVM will use the minimum_io_size if
optimal_io_size is undefined (i.e. 0).

By default, LVM will automatically determine these I/O hints, but this behavior can be disabled by setting
data_alignment_detection to 0 in /etc/lvm/lvm.conf. Disabling this is not recommended.
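
For reference, both switches live in the devices section of /etc/lvm/lvm.conf; a minimal sketch showing
the default (enabled) values:

devices {
    # Compensate for a non-zero alignment_offset reported by the device.
    data_alignment_offset_detection = 1
    # Use minimum_io_size/optimal_io_size I/O hints when placing the data area.
    data_alignment_detection = 1
}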

23.6. PARTITION AND FILE SYSTEM TOOLS


This section describes how different partition and file system management tools interact with a device's
I/O parameters.

util-linux-ng's libblkid and fdisk


The libblkid library provided with the util-linux-ng package includes a programmatic API to
access a device's I/O parameters. libblkid allows applications, especially those that use Direct I/O, to
properly size their I/O requests. The fdisk utility from util-linux-ng uses libblkid to determine
the I/O parameters of a device for optimal placement of all partitions. The fdisk utility will align all
partitions on a 1MB boundary.

parted and libparted


The libparted library from parted also uses the I/O parameters API of libblkid. Anaconda, the
Red Hat Enterprise Linux 7 installer, uses libparted, which means that all partitions created by either
the installer or parted will be properly aligned. For all partitions created on a device that does not
appear to provide I/O parameters, the default alignment will be 1MB.

The heuristics parted uses are as follows:

Always use the reported alignment_offset as the offset for the start of the first primary
partition.

If optimal_io_size is defined (i.e. not 0), align all partitions on an optimal_io_size
boundary.

If optimal_io_size is undefined (i.e. 0), alignment_offset is 0, and minimum_io_size
is a power of 2, use a 1MB default alignment.

This is the catch-all for "legacy" devices which don't appear to provide I/O hints. As such, by
default all partitions will be aligned on a 1MB boundary.

NOTE

Red Hat Enterprise Linux 7 cannot distinguish between devices that don't provide
I/O hints and those that do so with alignment_offset=0 and
optimal_io_size=0. Such a device might be a single SAS 4K device; as such,
at worst 1MB of space is lost at the start of the disk.
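
To verify that an existing partition satisfies the detected alignment criteria, parted provides an
align-check command; a sketch, assuming a hypothetical disk /dev/sda and partition number 1:

# parted /dev/sda align-check optimal 1
1 aligned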

File System Tools


The different mkfs.filesystem utilities have also been enhanced to consume a device's I/O
parameters. These utilities will not allow a file system to be formatted to use a block size smaller than the
logical_block_size of the underlying storage device.

Except for mkfs.gfs2, all other mkfs.filesystem utilities also use the I/O hints to lay out on-disk data
structures and data areas relative to the minimum_io_size and optimal_io_size of the underlying
storage device. This allows file systems to be optimally formatted for various RAID (striped) layouts.
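
When the underlying device does not export I/O hints, equivalent geometry can be supplied by hand to
some of these utilities; a sketch for ext4, assuming a hypothetical RAID volume /dev/md0 with 64 KiB
chunks across 4 data disks and a 4 KiB file system block size (stride = 64 KiB / 4 KiB = 16,
stripe-width = 16 x 4 = 64):

# mkfs.ext4 -b 4096 -E stride=16,stripe-width=64 /dev/md0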


CHAPTER 24. SETTING UP A REMOTE DISKLESS SYSTEM


To set up a basic remote diskless system booted over PXE, you need the following packages:

tftp-server

xinetd

dhcp

syslinux

dracut-network

NOTE

After installing the dracut-network package, add the following line to


/etc/dracut.conf:

add_dracutmodules+="nfs"

Remote diskless system booting requires both a tftp service (provided by tftp-server) and a DHCP
service (provided by dhcp). The tftp service is used to retrieve the kernel image and initrd over the
network via the PXE loader.

NOTE

SELinux is only supported over NFSv4.2. To use SELinux, NFS must be explicitly
enabled in /etc/sysconfig/nfs by adding the line:

RPCNFSDARGS="-V 4.2"

Then, in /var/lib/tftpboot/pxelinux.cfg/default, change


root=nfs:server-ip:/exported/root/directory to root=nfs:server-
ip:/exported/root/directory,vers=4.2.

Finally, reboot the NFS server.

The following sections outline the necessary procedures for deploying remote diskless systems in a
network environment.

IMPORTANT

Some RPM packages have started using file capabilities (such as setcap and getcap).
However, NFS does not currently support these so attempting to install or update any
packages that use file capabilities will fail.

24.1. CONFIGURING A TFTP SERVICE FOR DISKLESS CLIENTS


Prerequisites
Install the necessary packages. See Chapter 24, Setting up a Remote Diskless System


Procedure
To configure tftp, perform the following steps:

Procedure 24.1. To Configure tftp

1. Enable PXE booting over the network:

# systemctl enable --now tftp

2. The tftp root directory (chroot) is located in /var/lib/tftpboot. Copy


/usr/share/syslinux/pxelinux.0 to /var/lib/tftpboot/:

# cp /usr/share/syslinux/pxelinux.0 /var/lib/tftpboot/

3. Create a pxelinux.cfg directory inside the tftp root directory:

# mkdir -p /var/lib/tftpboot/pxelinux.cfg/

4. Configure firewall rules to allow tftp traffic (one possible firewalld example follows this procedure).

As tftp supports TCP wrappers, you can also configure host access to tftp in the
/etc/hosts.allow configuration file. For more information on configuring TCP wrappers and
the /etc/hosts.allow configuration file, see the Red Hat Enterprise Linux 7 Security Guide.
The hosts_access(5) man page also provides information about /etc/hosts.allow.
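
For step 4, one way to allow tftp traffic with firewalld (a sketch; adapt it to your site's firewall policy):

# firewall-cmd --permanent --add-service=tftp
# firewall-cmd --reload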

Next Steps
After configuring tftp for diskless clients, configure DHCP, NFS, and the exported file system
accordingly. For instructions on configuring the DHCP, NFS, and the exported file system, see
Section 24.2, “Configuring DHCP for Diskless Clients” and Section 24.3, “Configuring an Exported File
System for Diskless Clients”.

24.2. CONFIGURING DHCP FOR DISKLESS CLIENTS


Prerequisites
Install the necessary packages. See Chapter 24, Setting up a Remote Diskless System

Configure the tftp service. See Section 24.1, “Configuring a tftp Service for Diskless Clients”.

Procedure
1. After configuring a tftp server, you need to set up a DHCP service on the same host machine.
For instructions on setting up a DHCP server, see Configuring a DHCP Server.

2. Enable PXE booting on the DHCP server by adding the following configuration to
/etc/dhcp/dhcpd.conf:

allow booting;
allow bootp;
class "pxeclients" {
    match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
    next-server server-ip;
    filename "pxelinux.0";
}

Replace server-ip with the IP address of the host machine on which the tftp and DHCP
services reside.

NOTE

When libvirt virtual machines are used as the diskless client, libvirt
provides the DHCP service and the stand alone DHCP server is not used. In this
situation, network booting must be enabled with the bootp file='filename'
option in the libvirt network configuration, virsh net-edit.
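
A minimal sketch of the relevant fragment of a libvirt network definition, assuming the default network
and hypothetical addresses (192.168.122.1/24) with pxelinux.0 served from the host:

# virsh net-edit default

<ip address='192.168.122.1' netmask='255.255.255.0'>
  <dhcp>
    <range start='192.168.122.2' end='192.168.122.254'/>
    <bootp file='pxelinux.0' server='192.168.122.1'/>
  </dhcp>
</ip>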

Next Steps
Now that tftp and DHCP are configured, configure NFS and the exported file system. For instructions,
see Section 24.3, “Configuring an Exported File System for Diskless Clients”.

24.3. CONFIGURING AN EXPORTED FILE SYSTEM FOR DISKLESS CLIENTS

Prerequisites
Install the necessary packages. See Chapter 24, Setting up a Remote Diskless System

Configure the tftp service. See Section 24.1, “Configuring a tftp Service for Diskless Clients”.

Configure DHCP. See Section 24.2, “Configuring DHCP for Diskless Clients”.

Procedure
1. The root directory of the exported file system (used by diskless clients in the network) is shared
via NFS. Configure the NFS service to export the root directory by adding it to /etc/exports.
For instructions on how to do so, see the Section 8.7.1, “The /etc/exports Configuration
File”.

2. To accommodate completely diskless clients, the root directory should contain a complete
Red Hat Enterprise Linux installation. You can either clone an existing installation or install a
new base system:

To synchronize with a running system, use the rsync utility:

# rsync -a -e ssh --exclude='/proc/*' --exclude='/sys/*' \
  hostname.com:/ exported-root-directory

Replace hostname.com with the hostname of the running system with which to
synchronize via rsync.

Replace exported-root-directory with the path to the exported file system.

To install Red Hat Enterprise Linux to the exported location, use the yum utility with the --
installroot option:


# yum install @Base kernel dracut-network nfs-utils \
  --installroot=exported-root-directory --releasever=/

The file system to be exported still needs to be configured further before it can be used by diskless
clients. To do this, perform the following procedure:

Procedure 24.2. Configure File System

1. Select the kernel that diskless clients should use (vmlinuz-kernel-version) and copy it to
the tftp boot directory:

# cp /boot/vmlinuz-kernel-version /var/lib/tftpboot/

2. Create the initrd (that is, initramfs-kernel-version.img) with network support:

# dracut initramfs-kernel-version.img kernel-version

3. Change the initrd's file permissions to 644 using the following command:

# chmod 644 initramfs-kernel-version.img


WARNING

If the initrd's file permissions are not changed, the pxelinux.0 boot loader
will fail with a "file not found" error.

4. Copy the resulting initramfs-kernel-version.img into the tftp boot directory as well.

5. Edit the default boot configuration to use the initrd and kernel in the /var/lib/tftpboot/
directory. This configuration should instruct the diskless client's root to mount the exported file
system (/exported/root/directory) as read-write. Add the following configuration in the
/var/lib/tftpboot/pxelinux.cfg/default file:

default rhel7

label rhel7
  kernel vmlinuz-kernel-version
  append initrd=initramfs-kernel-version.img root=nfs:server-ip:/exported/root/directory rw

Replace server-ip with the IP address of the host machine on which the tftp and DHCP
services reside.

The NFS share is now ready for exporting to diskless clients. These clients can boot over the network via
PXE.


CHAPTER 25. ONLINE STORAGE MANAGEMENT


It is often desirable to add, remove, or re-size storage devices while the operating system is running, and
without rebooting. This chapter outlines the procedures that may be used to reconfigure storage devices
on Red Hat Enterprise Linux 7 host systems while the system is running. It covers iSCSI and Fibre
Channel storage interconnects; other interconnect types may be added in the future.

This chapter focuses on adding, removing, modifying, and monitoring storage devices. It does not
discuss the Fibre Channel or iSCSI protocols in detail. For more information about these protocols, refer
to other documentation.

This chapter makes reference to various sysfs objects. Red Hat advises that the sysfs object names
and directory structure are subject to change in major Red Hat Enterprise Linux releases because the
upstream Linux kernel does not provide a stable internal API. For guidelines on how to reference
sysfs objects in a transportable way, refer to the document /usr/share/doc/kernel-
doc-version/Documentation/sysfs-rules.txt in the kernel source tree.


WARNING

Online storage reconfiguration must be done carefully. System failures or


interruptions during the process can lead to unexpected results. Red Hat advises
that you reduce system load to the maximum extent possible during the change
operations. This will reduce the chance of I/O errors, out-of-memory errors, or
similar errors occurring in the midst of a configuration change. The following
sections provide more specific guidelines regarding this.

In addition, Red Hat recommends that you back up all data before reconfiguring
online storage.

25.1. TARGET SETUP


Red Hat Enterprise Linux 7 uses the targetcli shell as a front end for viewing, editing, and saving the
configuration of the Linux-IO Target without the need to manipulate the kernel target's configuration files
directly. The targetcli tool is a command-line interface that allows an administrator to export local
storage resources, backed by files, volumes, local SCSI devices, or RAM disks, to remote systems.
The targetcli tool has a tree-based layout with built-in tab completion and inline documentation.

The hierarchy of targetcli does not always match the kernel interface exactly because targetcli is
simplified where possible.

IMPORTANT

To ensure that the changes made in targetcli are persistent, start and enable the
target service:

# systemctl start target


# systemctl enable target


25.1.1. Installing and Running targetcli


To install targetcli, use:

# yum install targetcli

Start the target service:

# systemctl start target

Configure target to start at boot time:

# systemctl enable target

Open port 3260 in the firewall and reload the firewall configuration:

# firewall-cmd --permanent --add-port=3260/tcp
Success
# firewall-cmd --reload
Success

Use the targetcli command, and then use the ls command for the layout of the tree interface:

# targetcli
:
/> ls
o- /........................................[...]
o- backstores.............................[...]
| o- block.................[Storage Objects: 0]
| o- fileio................[Storage Objects: 0]
| o- pscsi.................[Storage Objects: 0]
| o- ramdisk...............[Storage Objects: 0]
o- iscsi...........................[Targets: 0]
o- loopback........................[Targets: 0]

NOTE

In Red Hat Enterprise Linux 7.0, using the targetcli command from Bash, for example,
targetcli iscsi/ create, does not work and does not return an error. Starting with
Red Hat Enterprise Linux 7.1, an error status code is provided to make using targetcli
with shell scripts more useful.

25.1.2. Creating a Backstore


Backstores enable support for different methods of storing an exported LUN's data on the local machine.
Creating a storage object defines the resources the backstore uses.


NOTE

In Red Hat Enterprise Linux 6, the term 'backing-store' is used to refer to the mappings
created. However, to avoid confusion between the various ways 'backstores' can be used,
in Red Hat Enterprise Linux 7 the term 'storage objects' refers to the mappings created
and 'backstores' is used to describe the different types of backing devices.

The backstore devices that LIO supports are:

FILEIO (Linux file-backed storage)


FILEIO storage objects can support either write_back or write_thru operation. write_back
enables the local file system cache, which improves performance but increases the risk of data loss. It is
recommended to use write_back=false to disable write_back in favor of write_thru.

To create a fileio storage object, run the command /backstores/fileio create file_name
file_location file_size write_back=false. For example:

/> /backstores/fileio create file1 /tmp/disk1.img 200M write_back=false


Created fileio file1 with size 209715200

BLOCK (Linux BLOCK devices)


The block driver allows any block device that appears in /sys/block/ to be used with
LIO. This includes physical devices (for example, HDDs, SSDs, CDs, DVDs) and logical devices (for
example, software or hardware RAID volumes, or LVM volumes).

NOTE

BLOCK backstores usually provide the best performance.

To create a BLOCK backstore using any block device, use the following command:

# fdisk /dev/vdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table


Building a new DOS disklabel with disk identifier 0x39dc48fb.

Command (m for help): n


Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): *Enter*
Using default response p
Partition number (1-4, default 1): *Enter*
First sector (2048-2097151, default 2048): *Enter*
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151):
+250M
Partition 1 of type Linux and of size 250 MiB is set


Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.


Syncing disks.

/> /backstores/block create name=block_backend dev=/dev/vdb


Generating a wwn serial.
Created block storage object block_backend using /dev/vdb.

NOTE

You can also create a BLOCK backstore on a logical volume.

PSCSI (Linux pass-through SCSI devices)


Any storage object that supports direct pass-through of SCSI commands without SCSI emulation,
and with an underlying SCSI device that appears with lsscsi in /proc/scsi/scsi (such as a SAS
hard drive) can be configured as a backstore. SCSI-3 and higher is supported with this subsystem.


WARNING

PSCSI should only be used by advanced users. Advanced SCSI commands
such as those for Asymmetric Logical Unit Assignment (ALUA) or Persistent
Reservations (for example, those used by VMware ESX and vSphere) are
usually not implemented in the device firmware and can cause malfunctions or
crashes. When in doubt, use BLOCK for production setups instead.

To create a PSCSI backstore for a physical SCSI device, a TYPE_ROM device using /dev/sr0 in
this example, use:

/> backstores/pscsi/ create name=pscsi_backend dev=/dev/sr0


Generating a wwn serial.
Created pscsi storage object pscsi_backend using /dev/sr0

Memory Copy RAM disk (Linux RAMDISK_MCP)


Memory Copy RAM disks (ramdisk) provide RAM disks with full SCSI emulation and separate
memory mappings using memory copy for initiators. This provides capability for multi-sessions and is
particularly useful for fast, volatile mass storage for production purposes.

To create a 1GB RAM disk backstore, use the following command:

/> backstores/ramdisk/ create name=rd_backend size=1GB


Generating a wwn serial.
Created rd_mcp ramdisk rd_backend with size 1GB.


25.1.3. Creating an iSCSI Target


To create an iSCSI target:

Procedure 25.1. Creating an iSCSI target

1. Run targetcli.

2. Move into the iSCSI configuration path:

/> iscsi/

NOTE

The cd command is also accepted to change directories; alternatively, you can
simply enter the path to move into.

3. Create an iSCSI target using a default target name.

/iscsi> create
Created target
iqn.2003-01.org.linux-iscsi.hostname.x8664:sn.78b473f296ff
Created TPG1

Or create an iSCSI target using a specified name.

/iscsi > create iqn.2006-04.com.example:444


Created target iqn.2006-04.com.example:444
Created TPG1

4. Verify that the newly created target is visible when targets are listed with ls.

/iscsi > ls
o- iscsi.......................................[1 Target]
o- iqn.2006-04.com.example:444................[1 TPG]
o- tpg1...........................[enabled, auth]
o- acls...............................[0 ACL]
o- luns...............................[0 LUN]
o- portals.........................[0 Portal]

NOTE

As of Red Hat Enterprise Linux 7.1, whenever a target is created, a default portal is also
created.

25.1.4. Configuring an iSCSI Portal


To configure an iSCSI portal, an iSCSI target must first be created and associated with a TPG. For
instructions on how to do this, refer to Section 25.1.3, “Creating an iSCSI Target”.


NOTE

As of Red Hat Enterprise Linux 7.1 when an iSCSI target is created, a default portal is
created as well. This portal is set to listen on all IP addresses with the default port number
(that is, 0.0.0.0:3260). To remove this and add only specified portals, use /iscsi/iqn-
name/tpg1/portals delete ip_address=0.0.0.0 ip_port=3260 then create a
new portal with the required information.

Procedure 25.2. Creating an iSCSI Portal

1. Move into the TPG.

/iscsi> iqn.2006-04.com.example:444/tpg1/

2. There are two ways to create a portal: create a default portal, or create a portal specifying what
IP address to listen to.

Creating a default portal uses the default iSCSI port 3260 and allows the target to listen on all IP
addresses on that port.

/iscsi/iqn.20...mple:444/tpg1> portals/ create


Using default IP port 3260
Binding to INADDR_Any (0.0.0.0)
Created network portal 0.0.0.0:3260

To create a portal specifying what IP address to listen to, use the following command.

/iscsi/iqn.20...mple:444/tpg1> portals/ create 192.168.122.137


Using default IP port 3260
Created network portal 192.168.122.137:3260

3. Verify that the newly created portal is visible with the ls command.

/iscsi/iqn.20...mple:444/tpg1> ls
o- tpg.................................. [enabled, auth]
o- acls ......................................[0 ACL]
o- luns ......................................[0 LUN]
o- portals ................................[1 Portal]
o- 192.168.122.137:3260......................[OK]

25.1.5. Configuring LUNs


To configure LUNs, first create storage objects. See Section 25.1.2, “Creating a Backstore” for more
information.

Procedure 25.3. Configuring LUNs

1. Create LUNs of already created storage objects.

/iscsi/iqn.20...mple:444/tpg1> luns/ create


/backstores/ramdisk/rd_backend
Created LUN 0.


/iscsi/iqn.20...mple:444/tpg1> luns/ create


/backstores/block/block_backend
Created LUN 1.

/iscsi/iqn.20...mple:444/tpg1> luns/ create /backstores/fileio/file1


Created LUN 2.

2. Show the changes.

/iscsi/iqn.20...mple:444/tpg1> ls
o- tpg.................................. [enabled, auth]
o- acls ......................................[0 ACL]
o- luns .....................................[3 LUNs]
| o- lun0.........................[ramdisk/ramdisk1]
| o- lun1.................[block/block1 (/dev/vdb1)]
| o- lun2...................[fileio/file1 (/foo.img)]
o- portals ................................[1 Portal]
o- 192.168.122.137:3260......................[OK]

NOTE

Be aware that the default LUN name starts at 0, as opposed to 1 as was the case
when using tgtd in Red Hat Enterprise Linux 6.

IMPORTANT

By default, LUNs are created with read-write permissions. If a new LUN is added after ACLs
have been created, that LUN will be automatically mapped to all available ACLs. This can
cause a security risk. Use the following procedure to create a LUN as read-only.

Procedure 25.4. Create a Read-only LUN

1. To create a LUN with read-only permissions, first use the following command:

/> set global auto_add_mapped_luns=false


Parameter auto_add_mapped_luns is now 'false'.

This prevents the auto mapping of LUNs to existing ACLs allowing the manual mapping of LUNs.

2. Next, manually create the LUN with the command
iscsi/target_iqn_name/tpg1/acls/initiator_iqn_name/ create
mapped_lun=next_sequential_LUN_number tpg_lun_or_backstore=backstore write_protect=1.

/> iscsi/iqn.2015-06.com.redhat:target/tpg1/acls/iqn.2015-
06.com.redhat:initiator/ create mapped_lun=1
tpg_lun_or_backstore=/backstores/block/block2 write_protect=1
Created LUN 1.
Created Mapped LUN 1.
/> ls
o- / ...................................................... [...]
o- backstores ........................................... [...]


<snip>
o- iscsi ......................................... [Targets: 1]
| o- iqn.2015-06.com.redhat:target .................. [TPGs: 1]
| o- tpg1 ............................ [no-gen-acls, no-auth]
| o- acls ....................................... [ACLs: 2]
| | o- iqn.2015-06.com.redhat:initiator .. [Mapped LUNs: 2]
| | | o- mapped_lun0 .............. [lun0 block/disk1 (rw)]
| | | o- mapped_lun1 .............. [lun1 block/disk2 (ro)]
| o- luns ....................................... [LUNs: 2]
| | o- lun0 ...................... [block/disk1 (/dev/vdb)]
| | o- lun1 ...................... [block/disk2 (/dev/vdc)]
<snip>

The mapped_lun1 line now has (ro) at the end (unlike mapped_lun0's (rw)), indicating that it is
read-only.

25.1.6. Configuring ACLs


Create an ACL for each initiator that will be connecting. This enforces authentication when that initiator
connects, so that only the mapped LUNs are exposed to each initiator. Usually each initiator has exclusive
access to a LUN. Both targets and initiators have unique identifying names. The initiator's unique name
must be known to configure ACLs. For open-iscsi initiators, this can be found in
/etc/iscsi/initiatorname.iscsi.

Procedure 25.5. Configuring ACLs

1. Move into the acls directory.

/iscsi/iqn.20...mple:444/tpg1> acls/

2. Create an ACL. Either use the initiator name found in /etc/iscsi/initiatorname.iscsi


on the initiator, or if using a name that is easier to remember, refer to Section 25.2, “Creating an
iSCSI Initiator” to ensure ACL matches the initiator. For example:

/iscsi/iqn.20...444/tpg1/acls> create iqn.2006-04.com.example.foo:888
Created Node ACL for iqn.2006-04.com.example.foo:888
Created mapped LUN 2.
Created mapped LUN 1.
Created mapped LUN 0.

NOTE

The given example's behavior depends on the setting used. In this case, the
global setting auto_add_mapped_luns is used. This automatically maps LUNs
to any created ACL.

You can set user-created ACLs within the TPG node on the target server:

/iscsi/iqn.20...scsi:444/tpg1> set attribute generate_node_acls=1

3. Show the changes.


/iscsi/iqn.20...444/tpg1/acls> ls
o- acls .................................................[1 ACL]
o- iqn.2006-04.com.example.foo:888 ....[3 Mapped LUNs, auth]
o- mapped_lun0 .............[lun0 ramdisk/ramdisk1 (rw)]
o- mapped_lun1 .................[lun1 block/block1 (rw)]
o- mapped_lun2 .................[lun2 fileio/file1 (rw)]

25.1.7. Configuring Fibre Channel over Ethernet (FCoE) Target


In addition to mounting LUNs over FCoE, as described in Section 25.4, “Configuring a Fibre Channel
over Ethernet Interface”, exporting LUNs to other machines over FCoE is also supported with the aid of
targetcli.

IMPORTANT

Before proceeding, refer to Section 25.4, “Configuring a Fibre Channel over Ethernet
Interface” and verify that basic FCoE setup is completed, and that fcoeadm -i displays
configured FCoE interfaces.

Procedure 25.6. Configure FCoE target

1. Setting up an FCoE target requires the installation of the targetcli package, along with its
dependencies. Refer to Section 25.1, “Target Setup” for more information on targetcli basics
and set up.

2. Create an FCoE target instance on an FCoE interface.

/> tcm_fc/ create 00:11:22:33:44:55:66:77

If FCoE interfaces are present on the system, tab-completing after create will list available
interfaces. If not, ensure fcoeadm -i shows active interfaces.

3. Map a backstore to the target instance.

Example 25.1. Example of Mapping a Backstore to the Target Instance

/> tcm_fc/00:11:22:33:44:55:66:77

/> luns/ create /backstores/fileio/example2

4. Allow access to the LUN from an FCoE initiator.

/> acls/ create 00:99:88:77:66:55:44:33

The LUN should now be accessible to that initiator.

5. To make the changes persistent across reboots, use the saveconfig command and type yes
when prompted. If this is not done the configuration will be lost after rebooting.

6. Exit targetcli by typing exit or entering ctrl+D.


25.1.8. Removing Objects with targetcli

To remove a backstore, use the following command:

/> /backstores/backstore-type/ delete backstore-name

To remove parts of an iSCSI target, such as an ACL, use the following command:

/> /iscsi/iqn-name/tpg/acls/ delete iqn-name

To remove the entire target, including all ACLs, LUNs, and portals, use the following command:

/> /iscsi delete iqn-name
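
For example, to delete the fileio storage object file1 created earlier in this chapter (a sketch of the same
delete syntax):

/> /backstores/fileio delete file1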

25.1.9. targetcli References

For more information on targetcli, refer to the following resources:

man targetcli
The targetcli man page. It includes an example walk through.

The Linux SCSI Target Wiki


http://linux-iscsi.org/wiki/Targetcli

Screencast by Andy Grover


https://www.youtube.com/watch?v=BkBGTBadOO8

NOTE

This was uploaded on February 28, 2012. As such, the service name has changed
from targetcli to target.

25.2. CREATING AN ISCSI INITIATOR


After creating a target with targetcli as in Section 25.1, “Target Setup”, use the iscsiadm utility to
set up an initiator.

In Red Hat Enterprise Linux 7, the iSCSI service is lazily started by default: the service starts after
running the iscsiadm command.

Procedure 25.7. Creating an iSCSI Initiator

1. Install iscsi-initiator-utils:

# yum install iscsi-initiator-utils -y

2. If the ACL was given a custom name in Section 25.1.6, “Configuring ACLs”, modify the
/etc/iscsi/initiatorname.iscsi file accordingly. For example:


# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2006-04.com.example.node1

# vi /etc/iscsi/initiatorname.iscsi

3. Discover the target:

# iscsiadm -m discovery -t st -p target-ip-address


10.64.24.179:3260,1 iqn.2006-04.com.example:3260

4. Log in to the target with the target IQN you discovered in step 3:

# iscsiadm -m node -T iqn.2006-04.com.example:3260 -l


Logging in to [iface: default, target: iqn.2006-04.com.example:3260,
portal: 10.64.24.179,3260] (multiple)
Login to [iface: default, target: iqn.2006-04.com.example:3260,
portal: 10.64.24.179,3260] successful.

This procedure can be followed for any number of initiators connected to the same LUN so long
as their specific initiator names are added to the ACL as described in Section 25.1.6,
“Configuring ACLs”.

5. Find the iSCSI disk name and create a file system on this iSCSI disk:

# grep "Attached SCSI" /var/log/messages

# mkfs.ext4 /dev/disk_name

Replace disk_name with the iSCSI disk name displayed in /var/log/messages.

6. Mount the file system:

# mkdir /mount/point
# mount /dev/disk_name /mount/point

Replace /mount/point with the mount point of the partition.

7. Edit /etc/fstab to mount the file system automatically when the system boots:

# vim /etc/fstab
/dev/disk_name /mount/point ext4 _netdev 0 0

Replace disk_name with the iSCSI disk name.

8. Log off from the target:

# iscsiadm -m node -T iqn.2006-04.com.example:3260 -u

25.3. FIBRE CHANNEL


This section discusses the Fibre Channel API, native Red Hat Enterprise Linux 7 Fibre Channel drivers,
and the Fibre Channel capabilities of these drivers.


25.3.1. Fibre Channel API


Following is a list of /sys/class/ directories that contain files used to provide the userspace API. In
each item, host numbers are designated by H, bus numbers are B, targets are T, logical unit numbers
(LUNs) are L, and remote port numbers are R.

IMPORTANT

If your system is using multipath software, Red Hat recommends that you consult your
hardware vendor before changing any of the values described in this section.

Transport: /sys/class/fc_transport/targetH:B:T/

port_id — 24-bit port ID/address

node_name — 64-bit node name

port_name — 64-bit port name

Remote Port: /sys/class/fc_remote_ports/rport-H:B-R/

port_id

node_name

port_name

dev_loss_tmo: controls when the scsi device gets removed from the system. After
dev_loss_tmo triggers, the scsi device is removed.

In multipath.conf, you can set dev_loss_tmo to infinity, which sets its value to
2,147,483,647 seconds, or 68 years, and is the maximum dev_loss_tmo value.

In Red Hat Enterprise Linux 7, if you do not set the fast_io_fail_tmo option,
dev_loss_tmo is capped to 600 seconds. By default, fast_io_fail_tmo is set to 5
seconds in Red Hat Enterprise Linux 7 if the multipathd service is running; otherwise, it is
set to off.

fast_io_fail_tmo: specifies the number of seconds to wait before it marks a link as
"bad". Once a link is marked bad, existing running I/O or any new I/O on its corresponding
path fails.

If I/O is in a blocked queue, it will not be failed until dev_loss_tmo expires and the queue is
unblocked.

If fast_io_fail_tmo is set to any value except off, dev_loss_tmo is uncapped. If
fast_io_fail_tmo is set to off, no I/O fails until the device is removed from the system.
If fast_io_fail_tmo is set to a number, I/O fails immediately when the
fast_io_fail_tmo timeout triggers.
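
Both timers can be inspected and adjusted through sysfs; a sketch, assuming a hypothetical remote
port rport-5:0-1:

# cat /sys/class/fc_remote_ports/rport-5:0-1/dev_loss_tmo
# echo 5 > /sys/class/fc_remote_ports/rport-5:0-1/fast_io_fail_tmo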

Host: /sys/class/fc_host/hostH/

port_id


issue_lip: instructs the driver to rediscover remote ports.
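
For example, to trigger rediscovery on a hypothetical host0:

# echo 1 > /sys/class/fc_host/host0/issue_lip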

25.3.2. Native Fibre Channel Drivers and Capabilities


Red Hat Enterprise Linux 7 ships with the following native Fibre Channel drivers:

lpfc

qla2xxx

zfcp

bfa

IMPORTANT

The qla2xxx driver runs in initiator mode by default. To use qla2xxx with Linux-IO, enable
Fibre Channel target mode with the corresponding qlini_mode module parameter.

First, make sure that the firmware package for your qla device, such as ql2200-firmware
or similar, is installed.

To enable target mode, add the following parameter to the


/usr/lib/modprobe.d/qla2xxx.conf qla2xxx module configuration file:

options qla2xxx qlini_mode=disabled

Then, use the dracut -f command to rebuild the initial ramdisk (initrd), and reboot
the system for the changes to take effect.

Table 25.1, “Fibre Channel API Capabilities” describes the different Fibre Channel API capabilities of
each native Red Hat Enterprise Linux 7 driver. X denotes support for the capability.

Table 25.1. Fibre Channel API Capabilities

Capability                      lpfc    qla2xxx    zfcp    bfa
Transport port_id               X       X          X       X
Transport node_name             X       X          X       X
Transport port_name             X       X          X       X
Remote Port dev_loss_tmo        X       X          X       X
Remote Port fast_io_fail_tmo    X       X [a]      X [b]   X
Host port_id                    X       X          X       X
Host issue_lip                  X       X                  X

[a] Supported as of Red Hat Enterprise Linux 5.4

[b] Supported as of Red Hat Enterprise Linux 6.0

25.4. CONFIGURING A FIBRE CHANNEL OVER ETHERNET INTERFACE


Setting up and deploying a Fibre Channel over Ethernet (FCoE) interface requires two packages:

fcoe-utils

lldpad

Once these packages are installed, perform the following procedure to enable FCoE over a virtual LAN
(VLAN):

Procedure 25.8. Configuring an Ethernet Interface to Use FCoE

1. To configure a new VLAN, make a copy of an existing network script, for example
/etc/fcoe/cfg-eth0, and change the name to the Ethernet device that supports FCoE. This
provides you with a default file to configure. Given that the FCoE device is ethX, run:

# cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-ethX

Modify the contents of cfg-ethX as needed. Notably, set DCB_REQUIRED to no for networking
interfaces that implement a hardware Data Center Bridging Exchange (DCBX) protocol client.

2. If you want the device to automatically load during boot time, set ONBOOT=yes in the
corresponding /etc/sysconfig/network-scripts/ifcfg-ethX file. For example, if the
FCoE device is eth2, edit /etc/sysconfig/network-scripts/ifcfg-eth2 accordingly.

3. Start the data center bridging daemon (dcbd) by running:

# systemctl start lldpad

4. For networking interfaces that implement a hardware DCBX client, skip this step.

For interfaces that require a software DCBX client, enable data center bridging on the Ethernet
interface by running:

# dcbtool sc ethX dcb on


Then, enable FCoE on the Ethernet interface by running:

# dcbtool sc ethX app:fcoe e:1

Note that these commands only work if the dcbd settings for the Ethernet interface were not
changed.

5. Load the FCoE device now using:

# ip link set dev ethX up

6. Start FCoE using:

# systemctl start fcoe

The FCoE device appears soon if all other settings on the fabric are correct. To view configured
FCoE devices, run:

# fcoeadm -i

After correctly configuring the Ethernet interface to use FCoE, Red Hat recommends that you set FCoE
and the lldpad service to run at startup. To do so, use the systemctl utility:

# systemctl enable lldpad

# systemctl enable fcoe

NOTE

Running the # systemctl stop fcoe command stops the daemon, but does not reset
the configuration of FCoE interfaces. To do so, run the # systemctl -s SIGHUP
kill fcoe command.

As of Red Hat Enterprise Linux 7, Network Manager has the ability to query and set the DCB settings of
a DCB capable Ethernet interface.

25.5. CONFIGURING AN FCOE INTERFACE TO AUTOMATICALLY MOUNT AT BOOT

NOTE

The instructions in this section are available in /usr/share/doc/fcoe-utils-version/README
as of Red Hat Enterprise Linux 6.1. Refer to that document for
any possible changes throughout minor releases.

You can mount newly discovered disks via udev rules, autofs, and other similar methods. Sometimes,
however, a specific service might require the FCoE disk to be mounted at boot-time. In such cases, the
FCoE disk should be mounted as soon as the fcoe service runs and before the initiation of any service
that requires the FCoE disk.


To configure an FCoE disk to automatically mount at boot, add proper FCoE mounting code to the
startup script for the fcoe service. The fcoe startup script is
/lib/systemd/system/fcoe.service.

The FCoE mounting code is different per system configuration, whether you are using a simple formatted
FCoE disk, LVM, or multipathed device node.

Example 25.2. FCoE Mounting Code

The following is a sample FCoE mounting code for mounting file systems specified via wild cards in
/etc/fstab:

mount_fcoe_disks_from_fstab()
{
    local timeout=20
    local done=1
    local fcoe_disks=($(egrep 'by-path\/fc-.*_netdev' /etc/fstab | cut -d ' ' -f1))

    test -z $fcoe_disks && return 0

    echo -n "Waiting for fcoe disks . "
    while [ $timeout -gt 0 ]; do
        for disk in ${fcoe_disks[*]}; do
            if ! test -b $disk; then
                done=0
                break
            fi
        done

        test $done -eq 1 && break;
        sleep 1
        echo -n ". "
        done=1
        let timeout--
    done

    if test $timeout -eq 0; then
        echo "timeout!"
    else
        echo "done!"
    fi

    # mount any newly discovered disk
    mount -a 2>/dev/null
}

The mount_fcoe_disks_from_fstab function should be invoked after the fcoe service script starts
the fcoemon daemon. This will mount FCoE disks specified by the following paths in /etc/fstab:

/dev/disk/by-path/fc-0xXX:0xXX /mnt/fcoe-disk1 ext3 defaults,_netdev 0 0
/dev/disk/by-path/fc-0xYY:0xYY /mnt/fcoe-disk2 ext3 defaults,_netdev 0 0


Entries with fc- and _netdev sub-strings enable the mount_fcoe_disks_from_fstab function to
identify FCoE disk mount entries. For more information on /etc/fstab entries, refer to man 5 fstab.

NOTE

The fcoe service does not implement a timeout for FCoE disk discovery. As such, the
FCoE mounting code should implement its own timeout period.

25.6. ISCSI
This section describes the iSCSI API and the iscsiadm utility. Before using the iscsiadm utility, install
the iscsi-initiator-utils package first by running yum install iscsi-initiator-utils.

In Red Hat Enterprise Linux 7, the iSCSI service is lazily started by default. If root is not on an iSCSI
device or there are no nodes marked with node.startup = automatic then the iSCSI service will
not start until an iscsiadm command is run that requires iscsid or the iscsi kernel modules to be started.
For example, running the discovery command iscsiadm -m discovery -t st -p ip:port will
cause iscsiadm to start the iSCSI service.

To force the iscsid daemon to run and iSCSI kernel modules to load, run systemctl start
iscsid.service.

25.6.1. iSCSI API


To get information about running sessions, run:

# iscsiadm -m session -P 3

This command displays the session/device state, session ID (sid), some negotiated parameters, and the
SCSI devices accessible through the session.

For shorter output (for example, to display only the sid-to-node mapping), run:

# iscsiadm -m session -P 0

or

# iscsiadm -m session

These commands print the list of running sessions with the format:

driver [sid] target_ip:port,target_portal_group_tag proper_target_name

Example 25.3. Output of the iscsiadm -m session Command

For example:

# iscsiadm -m session

tcp [2] 10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311


tcp [3] 10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311


For more information about the iSCSI API, refer to /usr/share/doc/iscsi-initiator-
utils-version/README.

25.7. PERSISTENT NAMING


Red Hat Enterprise Linux provides a number of ways to identify storage devices. It is important to use the
correct option to identify each device in order to avoid inadvertently accessing the wrong device,
particularly when installing to or reformatting drives.

25.7.1. Major and Minor Numbers of Storage Devices


Storage devices managed by the sd driver are identified internally by a collection of major device
numbers and their associated minor numbers. The major device numbers used for this purpose are not
in a contiguous range. Each storage device is represented by a major number and a range of minor
numbers, which are used to identify either the entire device or a partition within the device. There is a
direct association between the major and minor numbers allocated to a device and numbers in the form
of sd<letter(s)>[number(s)]. Whenever the sd driver detects a new device, an available major
number and minor number range is allocated. Whenever a device is removed from the operating system,
the major number and minor number range is freed for later reuse.

The major and minor number range and associated sd names are allocated for each device when it is
detected. This means that the association between the major and minor number range and associated
sd names can change if the order of device detection changes. Although this is unusual with some
hardware configurations (for example, with an internal SCSI controller and disks that have their SCSI
target ID assigned by their physical location within a chassis), it can nevertheless occur. Examples of
situations where this can happen are as follows:

A disk may fail to power up or respond to the SCSI controller. This will result in it not being
detected by the normal device probe. The disk will not be accessible to the system and
subsequent devices will have their major and minor number range, including the associated sd
names shifted down. For example, if a disk normally referred to as sdb is not detected, a disk
that is normally referred to as sdc would instead appear as sdb.

A SCSI controller (host bus adapter, or HBA) may fail to initialize, causing all disks connected to
that HBA to not be detected. Any disks connected to subsequently probed HBAs would be
assigned different major and minor number ranges, and different associated sd names.

The order of driver initialization could change if different types of HBAs are present in the
system. This would cause the disks connected to those HBAs to be detected in a different order.
This can also occur if HBAs are moved to different PCI slots on the system.

Disks connected to the system with Fibre Channel, iSCSI, or FCoE adapters might be
inaccessible at the time the storage devices are probed, due to a storage array or intervening
switch being powered off, for example. This could occur when a system reboots after a power
failure, if the storage array takes longer to come online than the system takes to boot. Although
some Fibre Channel drivers support a mechanism to specify a persistent SCSI target ID to
WWPN mapping, this will not cause the major and minor number ranges, and the associated sd
names to be reserved, it will only provide consistent SCSI target ID numbers.

These reasons make it undesirable to use the major and minor number range or the associated sd
names when referring to devices, such as in the /etc/fstab file. There is the possibility that the wrong
device will be mounted and data corruption could result.


Occasionally, however, it is still necessary to refer to the sd names even when another mechanism is
used (such as when errors are reported by a device). This is because the Linux kernel uses sd names
(and also SCSI host/channel/target/LUN tuples) in kernel messages regarding the device.

25.7.2. World Wide Identifier (WWID)


The World Wide Identifier (WWID) can be used to reliably identify devices. It is a persistent, system-
independent ID that the SCSI Standard requires from all SCSI devices. The WWID identifier is
guaranteed to be unique for every storage device, and independent of the path that is used to access the
device.

This identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital
Product Data (page 0x83) or Unit Serial Number (page 0x80). The mappings from these WWIDs to the
current /dev/sd names can be seen in the symlinks maintained in the /dev/disk/by-id/ directory.

Example 25.4. WWID

For example, a device with a page 0x83 identifier would have:

scsi-3600508b400105e210000900000490000 -> ../../sda

Or, a device with a page 0x80 identifier would have:

scsi-SSEAGATE_ST373453LW_3HW1RHM6 -> ../../sda

Red Hat Enterprise Linux automatically maintains the proper mapping from the WWID-based device
name to a current /dev/sd name on that system. Applications can use the /dev/disk/by-id/ name
to reference the data on the disk, even if the path to the device changes, and even when accessing the
device from different systems.

If there are multiple paths from a system to a device, DM Multipath uses the WWID to detect this. DM
Multipath then presents a single "pseudo-device" in the /dev/mapper/wwid directory, such as
/dev/mapper/3600508b400105df70000e00000ac0000.

The command multipath -l shows the mapping to the non-persistent identifiers:
Host:Channel:Target:LUN, /dev/sd name, and the major:minor number.

3600508b400105df70000e00000ac0000 dm-2 vendor,product
[size=20G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 5:0:1:1 sdc 8:32 [active][undef]
 \_ 6:0:1:1 sdg 8:96 [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 5:0:0:1 sdb 8:16 [active][undef]
 \_ 6:0:0:1 sdf 8:80 [active][undef]

DM Multipath automatically maintains the proper mapping of each WWID-based device name to its
corresponding /dev/sd name on the system. These names are persistent across path changes, and
they are consistent when accessing the device from different systems.


When the user_friendly_names feature (of DM Multipath) is used, the WWID is mapped to a name
of the form /dev/mapper/mpathn. By default, this mapping is maintained in the file
/etc/multipath/bindings. These mpathn names are persistent as long as that file is maintained.

IMPORTANT

If you use user_friendly_names, then additional steps are required to obtain
consistent names in a cluster. Refer to the Consistent Multipath Device Names in a
Cluster section in the DM Multipath book.

In addition to these persistent names provided by the system, you can also use udev rules to implement
persistent names of your own, mapped to the WWID of the storage.

25.7.3. Device Names Managed by the udev Mechanism in /dev/disk/by-*

The udev mechanism consists of three major components:

The kernel
Generates events that are sent to user space when devices are added, removed, or changed.

The udevd service
Receives the events.

The udev rules
Specifies the action to take when the udev service receives the kernel events.

This mechanism is used for all types of devices in Linux, not just for storage devices. In the case of
storage devices, Red Hat Enterprise Linux contains udev rules that create symbolic links in the
/dev/disk/ directory allowing storage devices to be referred to by their contents, a unique identifier,
their serial number, or the hardware path used to access the device.

/dev/disk/by-label/
Entries in this directory provide a symbolic name that refers to the storage device by a label in the
contents (that is, the data) stored on the device. The blkid utility is used to read data from the device
and determine a name (that is, a label) for the device. For example:

/dev/disk/by-label/Boot

NOTE

The information is obtained from the contents (that is, the data) on the device so if the
contents are copied to another device, the label will remain the same.

The label can also be used to refer to the device in /etc/fstab using the following syntax:

LABEL=Boot

/dev/disk/by-uuid/


Entries in this directory provide a symbolic name that refers to the storage device by a unique
identifier in the contents (that is, the data) stored on the device. The blkid utility is used to read data
from the device and obtain a unique identifier (that is, the UUID) for the device. For example:

UUID=3e6be9de-8139-11d1-9106-a43f08d823a6

/dev/disk/by-id/
Entries in this directory provide a symbolic name that refers to the storage device by a unique
identifier (different from all other storage devices). The identifier is a property of the device but is not
stored in the contents (that is, the data) on the devices. For example:

/dev/disk/by-id/scsi-3600508e000000000ce506dc50ab0ad05

/dev/disk/by-id/wwn-0x600508e000000000ce506dc50ab0ad05

The id is obtained from the world-wide ID of the device, or the device serial number. The
/dev/disk/by-id/ entries may also include a partition number. For example:

/dev/disk/by-id/scsi-3600508e000000000ce506dc50ab0ad05-part1

/dev/disk/by-id/wwn-0x600508e000000000ce506dc50ab0ad05-part1

/dev/disk/by-path/
Entries in this directory provide a symbolic name that refers to the storage device by the hardware
path used to access the device, beginning with a reference to the storage controller in the PCI
hierarchy, and including the SCSI host, channel, target, and LUN numbers and, optionally, the
partition number. Although these names are preferable to using major and minor numbers or sd
names, caution must be used to ensure that the target numbers do not change in a Fibre Channel
SAN environment (for example, through the use of persistent binding) and that the use of the names
is updated if a host adapter is moved to a different PCI slot. In addition, there is the possibility that the
SCSI host numbers could change if a HBA fails to probe, if drivers are loaded in a different order, or
if a new HBA is installed on the system. An example of by-path listing is:

/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:0

The /dev/disk/by-path/ entries may also include a partition number, such as:

/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:0-part1

25.7.3.1. Limitations of the udev Device Naming Convention

The following are some limitations of the udev naming convention.

It is possible that the device may not be accessible at the time the query is performed because
the udev mechanism may rely on the ability to query the storage device when the udev rules
are processed for a udev event. This is more likely to occur with Fibre Channel, iSCSI or FCoE
storage devices when the device is not located in the server chassis.

The kernel may also send udev events at any time, causing the rules to be processed and
possibly causing the /dev/disk/by-*/ links to be removed if the device is not accessible.


There can be a delay between when the udev event is generated and when it is processed
(such as when a large number of devices are detected and the user-space udevd service takes
some amount of time to process the rules for each one). This could cause a delay between when
the kernel detects the device and when the /dev/disk/by-*/ names are available.

External programs such as blkid invoked by the rules may open the device for a brief period of
time, making the device inaccessible for other uses.

25.7.3.2. Modifying Persistent Naming Attributes

Although udev naming attributes are persistent, in that they do not change on their own across system
reboots, some are also configurable. You can set custom values for the following persistent naming
attributes:

UUID: file system UUID

LABEL: file system label

Because the UUID and LABEL attributes are related to the file system, the tool you need to use depends
on the file system on that partition.

To change the UUID or LABEL attributes of an XFS file system, unmount the file system and then
use the xfs_admin utility to change the attribute:

# umount /dev/device
# xfs_admin [-U new_uuid] [-L new_label] /dev/device
# udevadm settle

To change the UUID or LABEL attributes of an ext4, ext3, or ext2 file system, use the tune2fs
utility:

# tune2fs [-U new_uuid] [-L new_label] /dev/device
# udevadm settle

Replace new_uuid with the UUID you want to set; for example, 1cdfbc07-1c90-4984-b5ec-
f61943f5ea50. Replace new_label with a label; for example, backup_data.
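
As a worked sketch using the example values above (the device name /dev/sdb1 and the ext4 file
system are assumptions), relabeling a file system and waiting for udev to register the change
might look like:

# umount /dev/sdb1
# tune2fs -L backup_data -U 1cdfbc07-1c90-4984-b5ec-f61943f5ea50 /dev/sdb1
# udevadm settle
# ls -l /dev/disk/by-label/backup_data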

NOTE

Changing udev attributes happens in the background and might take a long time. The
udevadm settle command waits until the change is fully registered, which ensures that
your next command will be able to utilize the new attribute correctly.

You should also use the command after creating new devices; for example, after using the
parted tool to create a partition with a custom PARTUUID or PARTLABEL attribute, or after
creating a new file system.

25.8. REMOVING A STORAGE DEVICE


Before removing access to the storage device itself, it is advisable to back up data from the device first.
Afterwards, flush I/O and remove all operating system references to the device (as described below). If
the device uses multipathing, then do this for the multipath "pseudo device" (Section 25.7.2, “World Wide
Identifier (WWID)”) and each of the identifiers that represent a path to the device. If you are only
removing a path to a multipath device, and other paths will remain, then the procedure is simpler, as
described in Section 25.10, “Adding a Storage Device or Path”.

Removal of a storage device is not recommended when the system is under memory pressure, since the
I/O flush will add to the load. To determine the level of memory pressure, run the command vmstat 1
100; device removal is not recommended if:

Free memory is less than 5% of the total memory in more than 10 samples per 100 (the
command free can also be used to display the total memory).

Swapping is active (non-zero si and so columns in the vmstat output).
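
A rough, illustrative way to check both conditions from the shell (the awk field numbers assume
the default vmstat column layout, where si and so are the seventh and eighth columns):

# vmstat 1 100 | awk 'NR>2 && ($7>0 || $8>0) {n++} END {print n+0, "samples with active swapping"}'
# free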

The general procedure for removing all access to a device is as follows:

Procedure 25.9. Ensuring a Clean Device Removal

1. Close all users of the device and back up device data as needed.

2. Use umount to unmount any file systems that mounted the device.

3. Remove the device from any md and LVM volume using it. If the device is a member of an LVM
Volume group, then it may be necessary to move data off the device using the pvmove
command, then use the vgreduce command to remove the physical volume, and (optionally)
pvremove to remove the LVM metadata from the disk.

4. If the device uses multipathing, run multipath -l and note all the paths to the device.
Afterwards, remove the multipathed device using multipath -f device.

5. Run blockdev --flushbufs device to flush any outstanding I/O to all paths to the device.
This is particularly important for raw devices, where there is no umount or vgreduce operation
to cause an I/O flush.

6. Remove any reference to the device's path-based name, like /dev/sd, /dev/disk/by-path
or the major:minor number, in applications, scripts, or utilities on the system. This is important
in ensuring that different devices added in the future will not be mistaken for the current device.

7. Finally, remove each path to the device from the SCSI subsystem. To do so, use the command
echo 1 > /sys/block/device-name/device/delete where device-name may be sde,
for example.

Another variation of this operation is echo 1 > /sys/class/scsi_device/h:c:t:l/device/delete,
where h is the HBA number, c is the channel on the HBA, t is the SCSI target ID, and l is the LUN.

NOTE

The older form of these commands, echo "scsi remove-single-device 0 0 0 0" > /proc/scsi/scsi,
is deprecated.

You can determine the device-name, HBA number, HBA channel, SCSI target ID and LUN for a device
from various commands, such as lsscsi, scsi_id, multipath -l, and ls -l /dev/disk/by-*.
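
As an illustrative walk-through of the procedure above for a hypothetical multipath device named
mpathb with two paths, sdc and sdg (all names are assumptions):

# umount /mnt/data
# multipath -l mpathb
# multipath -f mpathb
# blockdev --flushbufs /dev/sdc
# blockdev --flushbufs /dev/sdg
# echo 1 > /sys/block/sdc/device/delete
# echo 1 > /sys/block/sdg/device/delete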


After performing Procedure 25.9, “Ensuring a Clean Device Removal”, a device can be physically
removed safely from a running system. It is not necessary to stop I/O to other devices while doing so.

Other procedures, such as the physical removal of the device, followed by a rescan of the SCSI bus (as
described in Section 25.11, “Scanning Storage Interconnects”) to cause the operating system state to be
updated to reflect the change, are not recommended. This will cause delays due to I/O timeouts, and
devices may be removed unexpectedly. If it is necessary to perform a rescan of an interconnect, it must
be done while I/O is paused, as described in Section 25.11, “Scanning Storage Interconnects”.

25.9. REMOVING A PATH TO A STORAGE DEVICE


If you are removing a path to a device that uses multipathing (without affecting other paths to the device),
then the general procedure is as follows:

Procedure 25.10. Removing a Path to a Storage Device

1. Remove any reference to the device's path-based name, like /dev/sd or /dev/disk/by-
path or the major:minor number, in applications, scripts, or utilities on the system. This is
important in ensuring that different devices added in the future will not be mistaken for the
current device.

2. Take the path offline using echo offline > /sys/block/sda/device/state.

This will cause any subsequent I/O sent to the device on this path to be failed immediately.
Device-mapper-multipath will continue to use the remaining paths to the device.

3. Remove the path from the SCSI subsystem. To do so, use the command echo 1 >
/sys/block/device-name/device/delete where device-name may be sde, for
example (as described in Procedure 25.9, “Ensuring a Clean Device Removal”).

After performing Procedure 25.10, “Removing a Path to a Storage Device”, the path can be safely
removed from the running system. It is not necessary to stop I/O while this is done, as device-mapper-
multipath will re-route I/O to remaining paths according to the configured path grouping and failover
policies.

Other procedures, such as the physical removal of the cable, followed by a rescan of the SCSI bus to
cause the operating system state to be updated to reflect the change, are not recommended. This will
cause delays due to I/O timeouts, and devices may be removed unexpectedly. If it is necessary to
perform a rescan of an interconnect, it must be done while I/O is paused, as described in Section 25.11,
“Scanning Storage Interconnects”.

25.10. ADDING A STORAGE DEVICE OR PATH


When adding a device, be aware that the path-based device name (/dev/sd name, major:minor
number, and /dev/disk/by-path name, for example) the system assigns to the new device may have
been previously in use by a device that has since been removed. As such, ensure that all old references
to the path-based device name have been removed. Otherwise, the new device may be mistaken for the
old device.

Procedure 25.11. Add a Storage Device or Path

1. The first step in adding a storage device or path is to physically enable access to the new
storage device, or a new path to an existing device. This is done using vendor-specific
commands at the Fibre Channel or iSCSI storage server. When doing so, note the LUN value for
the new storage that will be presented to your host. If the storage server is Fibre Channel, also
take note of the World Wide Node Name (WWNN) of the storage server, and determine whether
there is a single WWNN for all ports on the storage server. If this is not the case, note the World
Wide Port Name (WWPN) for each port that will be used to access the new LUN.

2. Next, make the operating system aware of the new storage device, or path to an existing device.
The recommended command to use is:

$ echo "c t l" > /sys/class/scsi_host/hosth/scan

In the previous command, h is the HBA number, c is the channel on the HBA, t is the SCSI
target ID, and l is the LUN.

NOTE

The older form of this command, echo "scsi add-single-device 0 0 0 0" > /proc/scsi/scsi,
is deprecated.

a. In some Fibre Channel hardware, a newly created LUN on the RAID array may not be visible
to the operating system until a Loop Initialization Protocol (LIP) operation is performed. Refer
to Section 25.11, “Scanning Storage Interconnects” for instructions on how to do this.

IMPORTANT

It will be necessary to stop I/O while this operation is executed if an LIP is required.

b. If a new LUN has been added on the RAID array but is still not being configured by the
operating system, confirm the list of LUNs being exported by the array using the sg_luns
command, part of the sg3_utils package. This will issue the SCSI REPORT LUNS command
to the RAID array and return a list of LUNs that are present.

For Fibre Channel storage servers that implement a single WWNN for all ports, you can
determine the correct h, c, and t values (i.e. HBA number, HBA channel, and SCSI target ID) by
searching for the WWNN in sysfs.

Example 25.5. Determine Correct h, c, and t Values

For example, if the WWNN of the storage server is 0x5006016090203181, use:

$ grep 5006016090203181 /sys/class/fc_transport/*/node_name

This should display output similar to the following:

/sys/class/fc_transport/target5:0:2/node_name:0x5006016090203181
/sys/class/fc_transport/target5:0:3/node_name:0x5006016090203181
/sys/class/fc_transport/target6:0:2/node_name:0x5006016090203181
/sys/class/fc_transport/target6:0:3/node_name:0x5006016090203181

This indicates there are four Fibre Channel routes to this target (two single-channel HBAs,
each leading to two storage ports). Assuming a LUN value is 56, then the following command
will configure the first path:


$ echo "0 2 56" > /sys/class/scsi_host/host5/scan

This must be done for each path to the new device.

For Fibre Channel storage servers that do not implement a single WWNN for all ports, you can
determine the correct HBA number, HBA channel, and SCSI target ID by searching for each of
the WWPNs in sysfs.

Another way to determine the HBA number, HBA channel, and SCSI target ID is to refer to
another device that is already configured on the same path as the new device. This can be done
with various commands, such as lsscsi, scsi_id, multipath -l, and ls -l
/dev/disk/by-*. This information, plus the LUN number of the new device, can be used as
shown above to probe and configure that path to the new device.

3. After adding all the SCSI paths to the device, execute the multipath command, and check to
see that the device has been properly configured. At this point, the device can be added to md,
LVM, mkfs, or mount, for example.

If the steps above are followed, then a device can safely be added to a running system. It is not
necessary to stop I/O to other devices while this is done. Other procedures involving a rescan (or a
reset) of the SCSI bus, which cause the operating system to update its state to reflect the current device
connectivity, are not recommended while storage I/O is in progress.

25.11. SCANNING STORAGE INTERCONNECTS


Certain commands allow you to reset, scan, or both reset and scan one or more interconnects, which
potentially adds and removes multiple devices in one operation. This type of scan can be disruptive, as it
can cause delays while I/O operations time out, and remove devices unexpectedly. Red Hat
recommends using interconnect scanning only when necessary. Observe the following restrictions when
scanning storage interconnects:

All I/O on the affected interconnects must be paused and flushed before executing the
procedure, and the results of the scan checked before I/O is resumed.

As with removing a device, interconnect scanning is not recommended when the system is
under memory pressure. To determine the level of memory pressure, run the vmstat 1 100
command. Interconnect scanning is not recommended if free memory is less than 5% of the total
memory in more than 10 samples per 100. Also, interconnect scanning is not recommended if
swapping is active (non-zero si and so columns in the vmstat output). The free command
can also display the total memory.

The following commands can be used to scan storage interconnects:

echo "1" > /sys/class/fc_host/host/issue_lip


This operation performs a Loop Initialization Protocol (LIP), scans the interconnect, and causes the
SCSI layer to be updated to reflect the devices currently on the bus. Essentially, an LIP is a bus
reset, and causes device addition and removal. This procedure is necessary to configure a new SCSI
target on a Fibre Channel interconnect.

Note that issue_lip is an asynchronous operation. The command can complete before the entire
scan has completed. You must monitor /var/log/messages to determine when issue_lip
finishes.
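
For example, to trigger an LIP on a single Fibre Channel host and watch the log for its
completion (the host number is an assumption):

# echo "1" > /sys/class/fc_host/host5/issue_lip
# tail -f /var/log/messages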


The lpfc, qla2xxx, and bnx2fc drivers support issue_lip. For more information about the API
capabilities supported by each driver in Red Hat Enterprise Linux, see Table 25.1, “Fibre Channel
API Capabilities”.

/usr/bin/rescan-scsi-bus.sh
The /usr/bin/rescan-scsi-bus.sh script was introduced in Red Hat Enterprise Linux 5.4. By
default, this script scans all the SCSI buses on the system, and updates the SCSI layer to reflect new
devices on the bus. The script provides additional options to allow device removal, and the issuing of
LIPs. For more information about this script, including known issues, see Section 25.17,
“Adding/Removing a Logical Unit Through rescan-scsi-bus.sh”.

echo "- - -" > /sys/class/scsi_host/hosth/scan


This is the same command as described in Section 25.10, “Adding a Storage Device or Path” to add
a storage device or path. In this case, however, the channel number, SCSI target ID, and LUN values
are replaced by wildcards. Any combination of identifiers and wildcards is allowed, so you can make
the command as specific or broad as needed. This procedure adds LUNs, but does not remove
them.

modprobe --remove driver-name, modprobe driver-name


Running the modprobe --remove driver-name command followed by the modprobe driver-name
command completely re-initializes the state of all interconnects controlled by the driver. Despite
being rather extreme, using the described commands can be appropriate in certain situations. The
commands can be used, for example, to restart the driver with a different module parameter value.
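
As a sketch, restarting a Fibre Channel driver with a different module parameter might look like
the following; the driver name and parameter are illustrative, so consult the driver
documentation for the parameters that apply to your hardware:

# modprobe --remove qla2xxx
# modprobe qla2xxx ql2xmaxqdepth=16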

25.12. ISCSI DISCOVERY CONFIGURATION


The default iSCSI configuration file is /etc/iscsi/iscsid.conf. This file contains iSCSI settings
used by iscsid and iscsiadm.

During target discovery, the iscsiadm tool uses the settings in /etc/iscsi/iscsid.conf to create
two types of records:

Node records in /var/lib/iscsi/nodes


When logging into a target, iscsiadm uses the settings in this file.

Discovery records in /var/lib/iscsi/discovery_type


When performing discovery to the same destination, iscsiadm uses the settings in this file.

Before using different settings for discovery, delete the current discovery records (i.e.
/var/lib/iscsi/discovery_type) first. To do this, use the following command: [5]

# iscsiadm -m discovery -t discovery_type -p target_IP:port -o delete

Here, discovery_type can be either sendtargets, isns, or fw.

For details on different types of discovery, refer to the DISCOVERY TYPES section of the iscsiadm(8)
man page.

There are two ways to reconfigure discovery record settings:


Edit the /etc/iscsi/iscsid.conf file directly prior to performing a discovery. Discovery
settings use the prefix discovery; to view them, run:

# iscsiadm -m discovery -t discovery_type -p target_IP:port

Alternatively, iscsiadm can also be used to directly change discovery record settings, as in:

# iscsiadm -m discovery -t discovery_type -p target_IP:port -o update -n setting -v %value

Refer to the iscsiadm(8) man page for more information on available setting options and valid
value options for each.

After configuring discovery settings, any subsequent attempts to discover new targets will use the new
settings. Refer to Section 25.14, “Scanning iSCSI Interconnects” for details on how to scan for new
iSCSI targets.

For more information on configuring iSCSI target discovery, refer to the man pages of iscsiadm and
iscsid. The /etc/iscsi/iscsid.conf file also contains examples on proper configuration syntax.
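
For instance, a discovery setting as it might appear in /etc/iscsi/iscsid.conf (the value shown
is illustrative, not a recommendation):

discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768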

25.13. CONFIGURING ISCSI OFFLOAD AND INTERFACE BINDING


This chapter describes how to set up iSCSI interfaces in order to bind a session to a NIC port when
using software iSCSI. It also describes how to set up interfaces for use with network devices that support
offloading.

The network subsystem can be configured to determine the path/NIC that iSCSI interfaces should use
for binding. For example, if portals and NICs are set up on different subnets, then it is not necessary to
manually configure iSCSI interfaces for binding.

Before attempting to configure an iSCSI interface for binding, run the following command first:

$ ping -I ethX target_IP

If ping fails, then you will not be able to bind a session to a NIC. If this is the case, check the network
settings first.

25.13.1. Viewing Available iface Configurations


iSCSI offload and interface binding is supported for the following iSCSI initiator implementations:

Software iSCSI
This stack allocates an iSCSI host instance (that is, scsi_host) per session, with a single
connection per session. As a result, /sys/class/scsi_host and /proc/scsi will report a
scsi_host for each connection/session you are logged into.

Offload iSCSI
This stack allocates a scsi_host for each PCI device. As such, each port on a host bus adapter will
show up as a different PCI device, with a different scsi_host per HBA port.


To manage both types of initiator implementations, iscsiadm uses the iface structure. With this
structure, an iface configuration must be entered in /var/lib/iscsi/ifaces for each HBA port,
software iSCSI, or network device (ethX) used to bind sessions.

To view available iface configurations, run iscsiadm -m iface. This will display iface information
in the following format:

iface_name transport_name,hardware_address,ip_address,net_ifacename,initiator_name

Refer to the following table for an explanation of each value/setting.

Table 25.2. iface Settings

Setting            Description

iface_name         iface configuration name.

transport_name     Name of driver

hardware_address   MAC address

ip_address         IP address to use for this port

net_iface_name     Name used for the vlan or alias binding of a software iSCSI session.
                   For iSCSI offloads, net_iface_name will be <empty> because this value
                   is not persistent across reboots.

initiator_name     This setting is used to override a default name for the initiator,
                   which is defined in /etc/iscsi/initiatorname.iscsi

Example 25.6. Sample Output of the iscsiadm -m iface Command

The following is a sample output of the iscsiadm -m iface command:

iface0 qla4xxx,00:c0:dd:08:63:e8,20.15.0.7,default,iqn.2005-06.com.redhat:madmax
iface1 qla4xxx,00:c0:dd:08:63:ea,20.15.0.9,default,iqn.2005-06.com.redhat:madmax

For software iSCSI, each iface configuration must have a unique name (with less than 65 characters).
The iface_name for network devices that support offloading appears in the format
transport_name.hardware_name.

Example 25.7. iscsiadm -m iface Output with a Chelsio Network Card

For example, the sample output of iscsiadm -m iface on a system using a Chelsio network card
might appear as:


default tcp,<empty>,<empty>,<empty>,<empty>
iser iser,<empty>,<empty>,<empty>,<empty>
cxgb3i.00:07:43:05:97:07 cxgb3i,00:07:43:05:97:07,<empty>,<empty>,<empty>

It is also possible to display the settings of a specific iface configuration in a more friendly way. To do
so, use the option -I iface_name. This will display the settings in the following format:

iface.setting = value

Example 25.8. Using iface Settings with a Chelsio Converged Network Adapter

Using the previous example, the iface settings of the same Chelsio converged network adapter (i.e.
iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07) would appear as:

# BEGIN RECORD 2.0-871
iface.iscsi_ifacename = cxgb3i.00:07:43:05:97:07
iface.net_ifacename = <empty>
iface.ipaddress = <empty>
iface.hwaddress = 00:07:43:05:97:07
iface.transport_name = cxgb3i
iface.initiatorname = <empty>
# END RECORD

25.13.2. Configuring an iface for Software iSCSI


As mentioned earlier, an iface configuration is required for each network object that will be used to bind
a session.

To create an iface configuration for software iSCSI, run the following command:

# iscsiadm -m iface -I iface_name --op=new

This will create a new empty iface configuration with a specified iface_name. If an existing iface
configuration already has the same iface_name, then it will be overwritten with a new, empty one.

To configure a specific setting of an iface configuration, use the following command:

# iscsiadm -m iface -I iface_name --op=update -n iface.setting -v hw_address

Example 25.9. Set MAC Address of iface0

For example, to set the MAC address (hardware_address) of iface0 to 00:0F:1F:92:6B:BF, run:

# iscsiadm -m iface -I iface0 --op=update -n iface.hwaddress -v 00:0F:1F:92:6B:BF



WARNING

Do not use default or iser as iface names. Both strings are special values
used by iscsiadm for backward compatibility. Any manually-created iface
configurations named default or iser will disable backwards compatibility.

25.13.3. Configuring an iface for iSCSI Offload


By default, iscsiadm creates an iface configuration for each port. To view available iface
configurations, use the same command for doing so in software iSCSI: iscsiadm -m iface.

Before using the iface of a network card for iSCSI offload, first set the iface.ipaddress value of the
offload interface to the initiator IP address that the interface should use:

For devices that use the be2iscsi driver, the IP address is configured in the BIOS setup
screen.

For all other devices, to configure the IP address of the iface, use:

# iscsiadm -m iface -I iface_name -o update -n iface.ipaddress -v initiator_ip_address

Example 25.10. Set the iface IP Address of a Chelsio Card

For example, to set the iface IP address to 20.15.0.66 when using a card with the iface name
of cxgb3i.00:07:43:05:97:07, use:

# iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07 -o update -n iface.ipaddress -v 20.15.0.66

25.13.4. Binding/Unbinding an iface to a Portal


Whenever iscsiadm is used to scan for interconnects, it will first check the iface.transport
settings of each iface configuration in /var/lib/iscsi/ifaces. The iscsiadm utility will then bind
discovered portals to any iface whose iface.transport is tcp.

This behavior was implemented for compatibility reasons. To override this, use the -I iface_name
option to specify which portal to bind to an iface, as in:

# iscsiadm -m discovery -t st -p target_IP:port -I iface_name -P 1 [5]

By default, the iscsiadm utility will not automatically bind any portals to iface configurations that use
offloading. This is because such iface configurations will not have iface.transport set to tcp. As
such, the iface configurations need to be manually bound to discovered portals.

It is also possible to prevent a portal from binding to any existing iface. To do so, use default as the
iface_name, as in:

# iscsiadm -m discovery -t st -p IP:port -I default -P 1

To remove the binding between a target and iface, use:

# iscsiadm -m node --targetname proper_target_name -I iface0 --op=delete [6]

To delete all bindings for a specific iface, use:

# iscsiadm -m node -I iface_name --op=delete

To delete bindings for a specific portal (e.g. for Equalogic targets), use:

# iscsiadm -m node -p IP:port -I iface_name --op=delete

NOTE

If there are no iface configurations defined in /var/lib/iscsi/ifaces and the -I
option is not used, iscsiadm will allow the network subsystem to decide which device a
specific portal should use.

25.14. SCANNING ISCSI INTERCONNECTS


For iSCSI, if the targets send an iSCSI async event indicating new storage is added, then the scan is
done automatically.

However, if the targets do not send an iSCSI async event, you need to manually scan them using the
iscsiadm utility. Before doing so, however, you need to first retrieve the proper --targetname and
the --portal values. If your device model supports only a single logical unit and portal per target, use
iscsiadm to issue a sendtargets command to the host, as in:

# iscsiadm -m discovery -t sendtargets -p target_IP:port [5]

The output will appear in the following format:

target_IP:port,target_portal_group_tag proper_target_name

Example 25.11. Using iscsiadm to issue a sendtargets Command

For example, on a target with a proper_target_name of iqn.1992-08.com.netapp:sn.33615311
and a target_IP:port of 10.15.85.19:3260, the output may appear as:

10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311
10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311

In this example, the target has two portals, each using target_ip:ports of 10.15.84.19:3260
and 10.15.85.19:3260.

To see which iface configuration will be used for each session, add the -P 1 option. This option will
also print session information in tree format, as in:

Target: proper_target_name
Portal: target_IP:port,target_portal_group_tag
Iface Name: iface_name

Example 25.12. View iface Configuration

For example, with iscsiadm -m discovery -t sendtargets -p 10.15.85.19:3260 -P 1,
the output may appear as:

Target: iqn.1992-08.com.netapp:sn.33615311
Portal: 10.15.84.19:3260,2
Iface Name: iface2
Portal: 10.15.85.19:3260,3
Iface Name: iface2

This means that the target iqn.1992-08.com.netapp:sn.33615311 will use iface2 as its
iface configuration.

With some device models a single target may have multiple logical units and portals. In this case, issue a
sendtargets command to the host first to find new portals on the target. Then, rescan the existing
sessions using:

# iscsiadm -m session --rescan

You can also rescan a specific session by specifying the session's SID value, as in:

# iscsiadm -m session -r SID --rescan[7]

If your device supports multiple targets, you will need to issue a sendtargets command to the hosts to
find new portals for each target. Rescan existing sessions to discover new logical units on existing
sessions using the --rescan option.


IMPORTANT

The sendtargets command used to retrieve --targetname and --portal values
overwrites the contents of the /var/lib/iscsi/nodes database. This database will
then be repopulated using the settings in /etc/iscsi/iscsid.conf. However, this will
not occur if a session is currently logged in and in use.

To safely add new targets/portals or delete old ones, use the -o new or -o delete
options, respectively. For example, to add new targets/portals without overwriting
/var/lib/iscsi/nodes, use the following command:

iscsiadm -m discovery -t st -p target_IP -o new

To delete /var/lib/iscsi/nodes entries that the target did not display during
discovery, use:

iscsiadm -m discovery -t st -p target_IP -o delete

You can also perform both tasks simultaneously, as in:

iscsiadm -m discovery -t st -p target_IP -o delete -o new

The sendtargets command will yield the following output:

ip:port,target_portal_group_tag proper_target_name

Example 25.13. Output of the sendtargets Command

For example, given a device with a single target, logical unit, and portal, with equallogic-iscsi1
as your target_name, the output should appear similar to the following:

10.16.41.155:3260,0 iqn.2001-05.com.equallogic:6-8a0900-ac3fe0101-
63aff113e344a4a2-dl585-03-1

Note that proper_target_name and ip:port,target_portal_group_tag are identical to the
values of the same name in Section 25.6.1, “iSCSI API”.

At this point, you now have the proper --targetname and --portal values needed to manually scan
for iSCSI devices. To do so, run the following command:

# iscsiadm --mode node --targetname proper_target_name \
--portal ip:port,target_portal_group_tag --login [8]

Example 25.14. Full iscsiadm Command

Using our previous example (where proper_target_name is equallogic-iscsi1), the full
command would be:


# iscsiadm --mode node --targetname \
iqn.2001-05.com.equallogic:6-8a0900-ac3fe0101-63aff113e344a4a2-dl585-03-1 \
--portal 10.16.41.155:3260,0 --login [8]

25.15. LOGGING IN TO AN ISCSI TARGET


As mentioned in Section 25.6, “iSCSI”, the iSCSI service must be running in order to discover or log into
targets. To start the iSCSI service, run:

# systemctl start iscsi

When this command is executed, the iSCSI init scripts will automatically log into targets where the
node.startup setting is configured as automatic. This is the default value of node.startup for all
targets.

To prevent automatic login to a target, set node.startup to manual. To do this, run the following
command:

# iscsiadm -m node --targetname proper_target_name -p target_IP:port -o update -n node.startup -v manual

Deleting the entire record will also prevent automatic login. To do this, run:

# iscsiadm -m node --targetname proper_target_name -p target_IP:port -o delete

To automatically mount a file system from an iSCSI device on the network, add a partition entry for the
mount in /etc/fstab with the _netdev option. For example, to automatically mount the iSCSI device
sdb to /mnt/iscsi during startup, add the following line to /etc/fstab:

/dev/sdb /mnt/iscsi ext3 _netdev 0 0

To manually log in to an iSCSI target, use the following command:

# iscsiadm -m node --targetname proper_target_name -p target_IP:port -l

NOTE

The proper_target_name and target_IP:port refer to the full name and IP
address/port combination of a target. For more information, refer to Section 25.6.1, “iSCSI
API” and Section 25.14, “Scanning iSCSI Interconnects”.

25.16. RESIZING AN ONLINE LOGICAL UNIT


In most cases, fully resizing an online logical unit involves two things: resizing the logical unit itself and
reflecting the size change in the corresponding multipath device (if multipathing is enabled on the
system).


To resize the online logical unit, start by modifying the logical unit size through the array management
interface of your storage device. This procedure differs with each array; as such, consult your storage
array vendor documentation for more information on this.

NOTE

In order to resize an online file system, the file system must not reside on a partitioned
device.

25.16.1. Resizing Fibre Channel Logical Units


After modifying the online logical unit size, re-scan the logical unit to ensure that the system detects the
updated size. To do this for Fibre Channel logical units, use the following command:

$ echo 1 > /sys/block/sdX/device/rescan

IMPORTANT

To re-scan Fibre Channel logical units on a system that uses multipathing, execute the
aforementioned command for each sd device (for example, sda, sdb, and so on) that represents a
path for the multipathed logical unit. To determine which devices are paths for a multipath
logical unit, use multipath -ll; then, find the entry that matches the logical unit being
resized. It is advisable that you refer to the WWID of each entry to make it easier to find
which one matches the logical unit being resized.

25.16.2. Resizing an iSCSI Logical Unit


After modifying the online logical unit size, re-scan the logical unit to ensure that the system detects the
updated size. To do this for iSCSI devices, use the following command:

# iscsiadm -m node --targetname target_name -R [5]

Replace target_name with the name of the target where the device is located.

NOTE

You can also re-scan iSCSI logical units using the following command:

# iscsiadm -m node -R -I interface

Replace interface with the corresponding interface name of the resized logical unit (for
example, iface0). This command performs two operations:

It scans for new devices in the same way that the command echo "- - -" >
/sys/class/scsi_host/host/scan does (refer to Section 25.14, “Scanning
iSCSI Interconnects”).

It re-scans for new/modified logical units the same way that the command echo
1 > /sys/block/sdX/device/rescan does. Note that this command is the
same one used for re-scanning Fibre Channel logical units.


25.16.3. Updating the Size of Your Multipath Device


If multipathing is enabled on your system, you will also need to reflect the change in logical unit size to
the logical unit's corresponding multipath device (after resizing the logical unit). This can be done through
multipathd. To do so, first ensure that multipathd is running using service multipathd
status. Once you've verified that multipathd is operational, run the following command:

# multipathd -k"resize map multipath_device"

The multipath_device variable is the corresponding multipath entry of your device in /dev/mapper.
Depending on how multipathing is set up on your system, multipath_device can be either of two
formats:

mpathX, where X is the corresponding entry of your device (for example, mpath0)

a WWID; for example, 3600508b400105e210000900000490000

To determine which multipath entry corresponds to your resized logical unit, run multipath -ll. This
displays a list of all existing multipath entries in the system, along with the major and minor numbers of
their corresponding devices.
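
For example, using the WWID form with the illustrative value from the list above, the resize
command would be:

# multipathd -k"resize map 3600508b400105e210000900000490000"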

IMPORTANT

Do not use multipathd -k"resize map multipath_device" if there are any


commands queued to multipath_device. That is, do not use this command when the
no_path_retry parameter (in /etc/multipath.conf) is set to "queue", and there
are no active paths to the device.

For more information about multipathing, refer to the Red Hat Enterprise Linux 7 DM Multipath guide.

25.16.4. Changing the Read/Write State of an Online Logical Unit


Certain storage devices provide the user with the ability to change the state of the device from
Read/Write (R/W) to Read-Only (RO), and from RO to R/W. This is typically done through a
management interface on the storage device. The operating system will not automatically update its view
of the state of the device when a change is made. Follow the procedures described in this chapter to
make the operating system aware of the change.

Run the following command, replacing XYZ with the desired device designator, to determine the
operating system's current view of the R/W state of a device:

# blockdev --getro /dev/sdXYZ

The following command is also available for Red Hat Enterprise Linux 7:

# cat /sys/block/sdXYZ/ro
1 = read-only
0 = read-write

When using multipath, refer to the ro or rw field in the second line of output from the multipath -ll
command. For example:

36001438005deb4710000500000640000 dm-8 GZ,GZ500
[size=20G][features=0][hwhandler=0][ro]
\_ round-robin 0 [prio=200][active]
 \_ 6:0:4:1 sdax 67:16 [active][ready]
 \_ 6:0:5:1 sday 67:32 [active][ready]
\_ round-robin 0 [prio=40][enabled]
 \_ 6:0:6:1 sdaz 67:48 [active][ready]
 \_ 6:0:7:1 sdba 67:64 [active][ready]

To change the R/W state, use the following procedure:

Procedure 25.12. Change the R/W State

1. To move the device from RO to R/W, see step 2.

To move the device from R/W to RO, ensure no further writes will be issued. Do this by stopping
the application, or through the use of an appropriate, application-specific action.

Ensure that all outstanding write I/Os are complete with the following command:

# blockdev --flushbufs /dev/device

Replace device with the desired designator; for a device mapper multipath, this is the entry for
your device in /dev/mapper. For example, /dev/mapper/mpath3.

2. Use the management interface of the storage device to change the state of the logical unit from
R/W to RO, or from RO to R/W. The procedure for this differs with each array. Consult
applicable storage array vendor documentation for more information.

3. Perform a re-scan of the device to update the operating system's view of the R/W state of the
device. If using a device mapper multipath, perform this re-scan for each path to the device
before issuing the command telling multipath to reload its device maps.

This process is explained in further detail in Section 25.16.4.1, “Rescanning Logical Units” .

25.16.4.1. Rescanning Logical Units

After modifying the online logical unit Read/Write state, as described in Section 25.16.4, “Changing the
Read/Write State of an Online Logical Unit”, re-scan the logical unit to ensure the system detects the
updated state with the following command:

# echo 1 > /sys/block/sdX/device/rescan

To re-scan logical units on a system that uses multipathing, execute the above command for each sd
device that represents a path for the multipathed logical unit. For example, run the command on sda, sdb
and all other sd devices. To determine which devices are paths for a multipath unit, use multipath -ll,
then find the entry that matches the logical unit to be changed.

Example 25.15. Use of the multipath -ll Command

For example, the multipath -ll output above shows the paths for the LUN with WWID
36001438005deb4710000500000640000. In this case, enter:

# echo 1 > /sys/block/sdax/device/rescan
# echo 1 > /sys/block/sday/device/rescan
# echo 1 > /sys/block/sdaz/device/rescan
# echo 1 > /sys/block/sdba/device/rescan


25.16.4.2. Updating the R/W State of a Multipath Device

If multipathing is enabled, after rescanning the logical unit, the change in its state will need to be reflected
in the logical unit's corresponding multipath drive. Do this by reloading the multipath device maps with
the following command:

# multipath -r

The multipath -ll command can then be used to confirm the change.

25.16.4.3. Documentation

Further information can be found in the Red Hat Knowledgebase. To access this, navigate to
https://www.redhat.com/wapps/sso/login.html?redirect=https://access.redhat.com/knowledge/ and log in.
Then access the article at https://access.redhat.com/kb/docs/DOC-32850.

25.17. ADDING/REMOVING A LOGICAL UNIT THROUGH RESCAN-SCSI-BUS.SH

The sg3_utils package provides the rescan-scsi-bus.sh script, which can automatically update
the logical unit configuration of the host as needed (after a device has been added to the system). The
rescan-scsi-bus.sh script can also perform an issue_lip on supported devices. For more
information about how to use this script, refer to rescan-scsi-bus.sh --help.

To install the sg3_utils package, run yum install sg3_utils.

Known Issues with rescan-scsi-bus.sh


When using the rescan-scsi-bus.sh script, take note of the following known issues:

In order for rescan-scsi-bus.sh to work properly, LUN0 must be the first mapped logical
unit. The rescan-scsi-bus.sh can only detect the first mapped logical unit if it is LUN0. The
rescan-scsi-bus.sh will not be able to scan any other logical unit unless it detects the first
mapped logical unit even if you use the --nooptscan option.

A race condition requires that rescan-scsi-bus.sh be run twice if logical units are mapped
for the first time. During the first scan, rescan-scsi-bus.sh only adds LUN0; all other logical
units are added in the second scan.

A bug in the rescan-scsi-bus.sh script incorrectly executes the functionality for recognizing
a change in logical unit size when the --remove option is used.

The rescan-scsi-bus.sh script does not recognize iSCSI logical unit removals.

25.18. MODIFYING LINK LOSS BEHAVIOR


This section describes how to modify the link loss behavior of devices that use either Fibre Channel or
iSCSI protocols.

25.18.1. Fibre Channel


If a driver implements the Transport dev_loss_tmo callback, access attempts to a device through a link
will be blocked when a transport problem is detected. To verify if a device is blocked, run the following
command:

$ cat /sys/block/device/device/state

This command will return blocked if the device is blocked. If the device is operating normally, this
command will return running.

Procedure 25.13. Determining the State of a Remote Port

1. To determine the state of a remote port, run the following command:

$ cat /sys/class/fc_remote_port/rport-H:B:R/port_state

2. This command will return Blocked when the remote port (along with devices accessed through
it) is blocked. If the remote port is operating normally, the command will return Online.

3. If the problem is not resolved within dev_loss_tmo seconds, the rport and devices will be
unblocked and all I/O running on that device (along with any new I/O sent to that device) will be
failed.

Procedure 25.14. Changing dev_loss_tmo

To change the dev_loss_tmo value, echo in the desired value to the file. For example, to set
dev_loss_tmo to 30 seconds, run:

$ echo 30 > /sys/class/fc_remote_port/rport-H:B:R/dev_loss_tmo

For more information about dev_loss_tmo, refer to Section 25.3.1, “Fibre Channel API”.

When a link loss exceeds dev_loss_tmo, the scsi_device and sdN devices are removed. Typically,
the Fibre Channel class will leave the device as is; i.e. /dev/sdx will remain /dev/sdx. This is
because the target binding is saved by the Fibre Channel driver so when the target port returns, the
SCSI addresses are recreated faithfully. However, this cannot be guaranteed; the sdx will be restored
only if no additional change is made to the in-storage box configuration of LUNs.

25.18.2. iSCSI Settings with dm-multipath

If dm-multipath is implemented, it is advisable to set iSCSI timers to immediately defer commands to
the multipath layer. To configure this, nest the following line under device { in
/etc/multipath.conf:

features "1 queue_if_no_path"

This ensures that I/O errors are retried and queued if all paths are failed in the dm-multipath layer.
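
A minimal sketch of how this line might sit inside /etc/multipath.conf (the vendor and product
strings are placeholders, not values for any specific array):

devices {
        device {
                vendor   "VENDOR"
                product  "PRODUCT"
                features "1 queue_if_no_path"
        }
}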

You may need to adjust iSCSI timers further to better monitor your SAN for problems. Available iSCSI
timers you can configure are NOP-Out Interval/Timeouts and replacement_timeout, which are
discussed in the following sections.


25.18.2.1. NOP-Out Interval/Timeout

To help monitor problems on the SAN, the iSCSI layer sends a NOP-Out request to each target. If a NOP-
Out request times out, the iSCSI layer responds by failing any running commands and instructing the
SCSI layer to requeue those commands when possible.

When dm-multipath is being used, the SCSI layer will fail those running commands and defer them to
the multipath layer. The multipath layer then retries those commands on another path. If dm-multipath
is not being used, those commands are retried five times before failing altogether.

Intervals between NOP-Out requests are 10 seconds by default. To adjust this, open
/etc/iscsi/iscsid.conf and edit the following line:

node.conn[0].timeo.noop_out_interval = [interval value]

Once set, the iSCSI layer will send a NOP-Out request to each target every [interval value] seconds.

By default, NOP-Out requests time out in 10 seconds[9]. To adjust this, open
/etc/iscsi/iscsid.conf and edit the following line:

node.conn[0].timeo.noop_out_timeout = [timeout value]

This sets the iSCSI layer to timeout a NOP-Out request after [timeout value] seconds.
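
For example, with dm-multipath it is common to shorten both values so that path problems are
detected quickly; the values below are illustrative, not a recommendation for every environment:

node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5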

SCSI Error Handler

If the SCSI Error Handler is running, running commands on a path will not be failed immediately when a
NOP-Out request times out on that path. Instead, those commands will be failed after
replacement_timeout seconds. For more information about replacement_timeout, refer to
Section 25.18.2.2, “replacement_timeout”.

To verify if the SCSI Error Handler is running, run:

# iscsiadm -m session -P 3

25.18.2.2. replacement_timeout

replacement_timeout controls how long the iSCSI layer should wait for a timed-out path/session to
reestablish itself before failing any commands on it. The default replacement_timeout value is 120
seconds.

To adjust replacement_timeout, open /etc/iscsi/iscsid.conf and edit the following line:

node.session.timeo.replacement_timeout = [replacement_timeout]

The 1 queue_if_no_path option in /etc/multipath.conf sets iSCSI timers to immediately defer
commands to the multipath layer (refer to Section 25.18.2, “iSCSI Settings with dm-multipath”). This
setting prevents I/O errors from propagating to the application; because of this, you can set
replacement_timeout to 15-20 seconds.
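
For example, the corresponding line in /etc/iscsi/iscsid.conf for such a setup might be:

node.session.timeo.replacement_timeout = 15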

By configuring a lower replacement_timeout, I/O is quickly sent to a new path and executed (in the
event of a NOP-Out timeout) while the iSCSI layer attempts to re-establish the failed path/session. If all
paths time out, then the multipath and device mapper layer will internally queue I/O based on the settings
in /etc/multipath.conf instead of /etc/iscsi/iscsid.conf.

IMPORTANT

Whether your considerations are failover speed or security, the recommended value for
replacement_timeout will depend on other factors. These factors include the network,
target, and system workload. As such, it is recommended that you thoroughly test any
new configuration of replacement_timeout before applying it to a mission-critical
system.

25.18.3. iSCSI Root


When accessing the root partition directly through an iSCSI disk, the iSCSI timers should be set so that
the iSCSI layer has several chances to try to reestablish a path/session. In addition, commands should not
be quickly re-queued to the SCSI layer. This is the opposite of what should be done when dm-
multipath is implemented.

To start with, NOP-Outs should be disabled. You can do this by setting both NOP-Out interval and
timeout to zero. To set this, open /etc/iscsi/iscsid.conf and edit as follows:

node.conn[0].timeo.noop_out_interval = 0
node.conn[0].timeo.noop_out_timeout = 0

In line with this, replacement_timeout should be set to a high number. This will instruct the system to
wait a long time for a path/session to reestablish itself. To adjust replacement_timeout, open
/etc/iscsi/iscsid.conf and edit the following line:

node.session.timeo.replacement_timeout = replacement_timeout

After configuring /etc/iscsi/iscsid.conf, you must perform a re-discovery of the affected storage.
This will allow the system to load and use any new values in /etc/iscsi/iscsid.conf. For more
information on how to discover iSCSI devices, refer to Section 25.14, “Scanning iSCSI Interconnects”.

Configuring Timeouts for a Specific Session

You can also configure timeouts for a specific session and make them non-persistent (instead of using
/etc/iscsi/iscsid.conf). To do so, run the following command (replace the variables accordingly):

# iscsiadm -m node -T target_name -p target_IP:port -o update \
  -n node.session.timeo.replacement_timeout -v $timeout_value

IMPORTANT

The configuration described here is recommended for iSCSI sessions involving root
partition access. For iSCSI sessions involving access to other types of storage (namely, in
systems that use dm-multipath), refer to Section 25.18.2, “iSCSI Settings with dm-
multipath”.

25.19. CONTROLLING THE SCSI COMMAND TIMER AND DEVICE STATUS

The Linux SCSI layer sets a timer on each command. When this timer expires, the SCSI layer will
quiesce the host bus adapter (HBA) and wait for all outstanding commands to either time out or
complete. Afterwards, the SCSI layer will activate the driver's error handler.

When the error handler is triggered, it attempts the following operations in order (until one successfully
executes):

1. Abort the command.

2. Reset the device.

3. Reset the bus.

4. Reset the host.

If all of these operations fail, the device will be set to the offline state. When this occurs, all I/O to that
device will be failed, until the problem is corrected and the user sets the device to running.

The process is different, however, if a device uses the Fibre Channel protocol and the rport is blocked.
In such cases, the drivers wait for several seconds for the rport to become online again before
activating the error handler. This prevents devices from becoming offline due to temporary transport
problems.

Device States
To display the state of a device, use:

$ cat /sys/block/device-name/device/state

To set a device to the running state, use:

# echo running > /sys/block/device-name/device/state

Command Timer
To control the command timer, modify the /sys/block/device-name/device/timeout file:

# echo value > /sys/block/device-name/device/timeout

Replace value in the command with the timeout value, in seconds, that you want to implement.

25.20. TROUBLESHOOTING ONLINE STORAGE CONFIGURATION


This section provides solutions to common problems users experience during online storage
reconfiguration.

Logical unit removal status is not reflected on the host.


When a logical unit is deleted on a configured filer, the change is not reflected on the host. In such
cases, lvm commands will hang indefinitely when dm-multipath is used, as the logical unit has
now become stale.

To work around this, perform the following procedure:

Procedure 25.15. Working Around Stale Logical Units


1. Determine which mpath link entries in /etc/lvm/cache/.cache are specific to the stale
logical unit. To do this, run the following command:

$ ls -l /dev/mpath | grep stale-logical-unit

Example 25.16. Determine Specific mpath Link Entries

For example, if stale-logical-unit is 3600d0230003414f30000203a7bc41a00, the
following results may appear:

lrwxrwxrwx 1 root root 7 Aug 2 10:33 /3600d0230003414f30000203a7bc41a00 -> ../dm-4
lrwxrwxrwx 1 root root 7 Aug 2 10:33 /3600d0230003414f30000203a7bc41a00p1 -> ../dm-5

This means that 3600d0230003414f30000203a7bc41a00 is mapped to two mpath links:
dm-4 and dm-5.

2. Next, open /etc/lvm/cache/.cache. Delete all lines containing stale-logical-unit
and the mpath links that stale-logical-unit maps to.

Example 25.17. Delete Relevant Lines

Using the same example in the previous step, the lines you need to delete are:

/dev/dm-4
/dev/dm-5
/dev/mapper/3600d0230003414f30000203a7bc41a00
/dev/mapper/3600d0230003414f30000203a7bc41a00p1
/dev/mpath/3600d0230003414f30000203a7bc41a00
/dev/mpath/3600d0230003414f30000203a7bc41a00p1
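
One illustrative way to remove exactly those lines from the shell, keeping a backup of the cache
file first (the patterns reuse the example WWID and dm names above):

# cp /etc/lvm/cache/.cache /etc/lvm/cache/.cache.bak
# grep -vE '3600d0230003414f30000203a7bc41a00|/dev/dm-4$|/dev/dm-5$' \
    /etc/lvm/cache/.cache.bak > /etc/lvm/cache/.cache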

25.21. CONFIGURING MAXIMUM TIME FOR ERROR RECOVERY WITH EH_DEADLINE

IMPORTANT

In most scenarios, you do not need to enable the eh_deadline parameter. Using the
eh_deadline parameter can be useful in certain specific scenarios, for example if a link
loss occurs between a Fibre Channel switch and a target port, and the Host Bus Adapter
(HBA) does not receive Registered State Change Notifications (RSCNs). In such a case,
I/O requests and error recovery commands all time out rather than encounter an error.
Setting eh_deadline in this environment puts an upper limit on the recovery time, which
enables the failed I/O to be retried on another available path by multipath.

However, if RSCNs are enabled, if the HBA does not register the link becoming
unavailable, or both, the eh_deadline functionality provides no additional benefit, as the
I/O and error recovery commands fail immediately, which allows multipath to retry.

The SCSI host object eh_deadline parameter enables you to configure the maximum amount of time
that the SCSI error handling mechanism attempts to perform error recovery before stopping and resetting
the entire HBA.

The value of the eh_deadline is specified in seconds. The default setting is off, which disables the
time limit and allows all of the error recovery to take place. In addition to using sysfs, a default value
can be set for all SCSI HBAs by using the scsi_mod.eh_deadline kernel parameter.
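
For example, the parameter can be set per HBA at run time through sysfs, or for all SCSI HBAs at
boot time via the kernel command line; the host number and the 30-second value are illustrative:

# echo 30 > /sys/class/scsi_host/host5/eh_deadline

To set a system-wide default instead, add scsi_mod.eh_deadline=30 to the kernel command line.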

Note that when eh_deadline expires, the HBA is reset, which affects all target paths on that HBA, not
only the failing one. As a consequence, I/O errors can occur if some of the redundant paths are not
available for other reasons. Enable eh_deadline only if you have a fully redundant multipath
configuration on all targets.

[5] The target_IP and port variables refer to the IP address and port combination of a target/portal, respectively. For
more information, refer to Section 25.6.1, “iSCSI API” and Section 25.14, “Scanning iSCSI Interconnects” .

[6] Refer to Section 25.14, “Scanning iSCSI Interconnects” for information on proper_target_name .

[7] For information on how to retrieve a session's SID value, refer to Section 25.6.1, “iSCSI API”.

[8] This is a single command split into multiple lines, to accommodate printed and PDF versions of this document.
All concatenated lines — preceded by the backslash (\) — should be treated as one command, sans backslashes.

[9] Prior to Red Hat Enterprise Linux 5.4, the default NOP-Out requests time out was 15 seconds.


CHAPTER 26. DEVICE MAPPER MULTIPATHING AND VIRTUAL STORAGE

Red Hat Enterprise Linux 7 supports DM-Multipath and virtual storage. Both features are documented in
detail in the Red Hat books DM Multipath and Virtualization Deployment and Administration Guide.

26.1. VIRTUAL STORAGE


Red Hat Enterprise Linux 7 supports the following file systems/online storage methods for virtual storage:

Fibre Channel

iSCSI

NFS

GFS2

Virtualization in Red Hat Enterprise Linux 7 uses libvirt to manage virtual instances. The libvirt
utility uses the concept of storage pools to manage storage for virtualized guests. A storage pool is
storage that can be divided up into smaller volumes or allocated directly to a guest. Volumes of a storage
pool can be allocated to virtualized guests. There are two categories of storage pools available:

Local storage pools


Local storage covers storage devices, files or directories directly attached to a host. Local storage
includes local directories, directly attached disks, and LVM Volume Groups.

Networked (shared) storage pools


Networked storage covers storage devices shared over a network using standard protocols. It
includes shared storage devices using Fibre Channel, iSCSI, NFS, GFS2, and SCSI RDMA
protocols, and is a requirement for migrating virtualized guests between hosts.

IMPORTANT

For comprehensive information on the deployment and configuration of virtual storage
instances in your environment, refer to the Virtualization Deployment and Administration
Guide provided by Red Hat.

26.2. DM-MULTIPATH
Device Mapper Multipathing (DM-Multipath) is a feature that allows you to configure multiple I/O paths
between server nodes and storage arrays into a single device. These I/O paths are physical SAN
connections that can include separate cables, switches, and controllers. Multipathing aggregates the I/O
paths, creating a new device that consists of the aggregated paths.

DM-Multipath is used primarily for the following reasons:

Redundancy
DM-Multipath can provide failover in an active/passive configuration. In an active/passive
configuration, only half the paths are used at any time for I/O. If any element of an I/O path (the cable,
switch, or controller) fails, DM-Multipath switches to an alternate path.


Improved Performance
DM-Multipath can be configured in active/active mode, where I/O is spread over the paths in a round-
robin fashion. In some configurations, DM-Multipath can detect loading on the I/O paths and
dynamically re-balance the load.

IMPORTANT

For comprehensive information on the deployment and configuration of DM Multipath in
your environment, refer to the DM Multipath guide provided by Red Hat.


CHAPTER 27. EXTERNAL ARRAY MANAGEMENT (LIBSTORAGEMGMT)
Red Hat Enterprise Linux 7 ships with a new external array management library called
libStorageMgmt.

27.1. INTRODUCTION TO LIBSTORAGEMGMT


The libStorageMgmt library is a storage array independent Application Programming Interface (API).
It provides a stable and consistent API that allows developers to programmatically manage
different storage arrays and leverage the hardware-accelerated features they provide.

This library is used as a building block for other higher-level management tools and applications.
System administrators can also use it to manage storage manually and to automate storage
management tasks with scripts.

The libStorageMgmt library allows operations such as:

List storage pools, volumes, access groups, or file systems.

Create and delete volumes, access groups, file systems, or NFS exports.

Grant and remove access to volumes, access groups, or initiators.

Replicate volumes with snapshots, clones, and copies.

Create and delete access groups and edit members of a group.

Server resources such as CPU and interconnect bandwidth are not utilized because the operations are
all done on the array.

The libstoragemgmt package provides:

A stable C and Python API for client application and plug-in developers.

A command-line interface that utilizes the library (lsmcli).

A daemon that executes the plug-in (lsmd).

A simulator plug-in that allows the testing of client applications (sim).

Plug-in architecture for interfacing with arrays.



WARNING

This library and its associated tool have the ability to destroy any and all data
located on the arrays it manages. It is highly recommended to develop and test
applications and scripts against the storage simulator plug-in to remove any logic
errors before working with production systems. Testing applications and scripts on
actual non-production hardware before deploying to production is also strongly
encouraged if possible.

The libStorageMgmt library in Red Hat Enterprise Linux 7 adds a default udev rule to handle the
REPORTED LUNS DATA HAS CHANGED unit attention.

When a storage configuration change has taken place, one of several Unit Attention ASC/ASCQ codes
reports the change. A uevent is then generated, and the device is rescanned automatically through sysfs.

The file /lib/udev/rules.d/90-scsi-ua.rules contains example rules to enumerate other
events that the kernel can generate.

The libStorageMgmt library uses a plug-in architecture to accommodate differences in storage arrays.
For more information on libStorageMgmt plug-ins and how to write them, refer to the Red Hat
Developer Guide.

27.2. LIBSTORAGEMGMT TERMINOLOGY


Different array vendors and storage standards use different terminology to refer to similar functionality.
This library uses the following terminology.

Storage array
Any storage system that provides block access (FC, FCoE, iSCSI) or file access through Network
Attached Storage (NAS).

Volume
Storage Area Network (SAN) Storage Arrays can expose a volume to the Host Bus Adapter (HBA)
over different transports, such as FC, iSCSI, or FCoE. The host OS treats it as a block device. One
volume can be exposed as many disk devices if multipathing is enabled.

This is also known as the Logical Unit Number (LUN), StorageVolume with SNIA terminology, or
virtual disk.

Pool
A group of storage spaces. File systems or volumes can be created from a pool. Pools can be
created from disks, volumes, and other pools. A pool may also hold RAID settings or thin provisioning
settings.

This is also known as a StoragePool with SNIA Terminology.

Snapshot
A point in time, read only, space efficient copy of data.


This is also known as a read only snapshot.

Clone
A point in time, read writeable, space efficient copy of data.

This is also known as a read writeable snapshot.

Copy
A full bitwise copy of the data. It occupies the full space.

Mirror
A continuously updated copy (synchronous and asynchronous).

Access group
Collections of iSCSI, FC, and FCoE initiators which are granted access to one or more storage
volumes. This ensures that storage volumes are accessible only by the specified initiators.

This is also known as an initiator group.

Access Grant
Exposing a volume to a specified access group or initiator. The libStorageMgmt library currently
does not support LUN mapping with the ability to choose a specific logical unit number. The
libStorageMgmt library allows the storage array to select the next available LUN for assignment. If
configuring a boot from SAN or masking more than 256 volumes, be sure to read the OS, Storage
Array, or HBA documentation.

Access grant is also known as LUN Masking.

System
Represents a storage array or a direct attached storage RAID.

File system
A Network Attached Storage (NAS) storage array can expose a file system to a host OS through an
IP network, using either NFS or CIFS protocol. The host OS treats it as a mount point or a folder
containing files depending on the client operating system.

Disk
The physical disk holding the data. This is normally used when creating a pool with RAID settings.

This is also known as a DiskDrive using SNIA Terminology.

Initiator
In Fibre Channel (FC) or Fibre Channel over Ethernet (FCoE), the initiator is the World Wide Port
Name (WWPN) or World Wide Node Name (WWNN). In iSCSI, the initiator is the iSCSI Qualified
Name (IQN). In NFS or CIFS, the initiator is the host name or the IP address of the host.

Child dependency
Some arrays have an implicit relationship between the origin (parent volume or file system) and the
child (such as a snapshot or a clone). For example, it is impossible to delete the parent if it has one
or more dependent children. The API provides methods to determine if any such relationship exists and


a method to remove the dependency by replicating the required blocks.

27.3. INSTALLING LIBSTORAGEMGMT


To install libStorageMgmt for command-line use, along with the required run-time libraries and
simulator plug-ins, use the following command:

$ sudo yum install libstoragemgmt libstoragemgmt-python

To develop C applications that utilize the library, install the libstoragemgmt-devel package with the
following command:

# yum install libstoragemgmt-devel

To install libStorageMgmt for use with hardware arrays, select one or more of the appropriate plug-in
packages with the following command:

$ sudo yum install libstoragemgmt-name-plugin

The following plug-ins are available:

libstoragemgmt-smis-plugin
Generic SMI-S array support.

libstoragemgmt-netapp-plugin
Specific support for NetApp files.

libstoragemgmt-nstor-plugin
Specific support for NexentaStor.

libstoragemgmt-targetd-plugin
Specific support for targetd.

The daemon is then installed and configured to run at start up but will not do so until the next reboot. To
use it immediately without rebooting, start the daemon manually.

Managing an array requires support through a plug-in. The base install package includes open source
plug-ins for a number of different vendors. Additional plug-in packages will be available separately as
array support improves. Currently supported hardware is constantly changing and improving.

The libStorageMgmt daemon (lsmd) behaves like any standard service for the system.

To check on the status of the libStorageMgmt service, use:

$ sudo systemctl status libstoragemgmt

To stop the service use:

$ sudo systemctl stop libstoragemgmt


To start the service use:

$ sudo systemctl start libstoragemgmt

27.4. USING LIBSTORAGEMGMT


To use libStorageMgmt interactively, use the lsmcli tool.

The lsmcli tool requires two things to run:

A Uniform Resource Identifier (URI) which is used to identify the plug-in to connect to the array
and any configurable options the array requires.

A valid user name and password for the array.

URI has the following form:

plugin+optional-transport://user-name@host:port/?query-string-parameters

Each plug-in has different requirements.

Example 27.1. Examples of Different Plug-in Requirements

Simulator Plug-in That Requires No User Name or Password


sim://

NetApp Plug-in over SSL with User Name root


ontap+ssl://root@filer.company.com/

SMI-S Plug-in over SSL for EMC Array


smis+ssl://admin@provider.com:5989/?namespace=root/emc

There are three options to use the URI:

1. Pass the URI as part of the command.

$ lsmcli -u sim://...

2. Store the URI in an environment variable.

$ export LSMCLI_URI=sim:// && lsmcli ...

3. Place the URI in the file ~/.lsmcli, which contains name-value pairs separated by "=". The
only currently supported configuration is 'uri'.

The URI is resolved in this order of precedence; if more than one is supplied, only the first one
found is used.
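
As a sketch of the third option, a ~/.lsmcli file pointing at the simulator used in the examples above would contain a single name-value pair:

uri=sim://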

Supply the password by specifying the -P option on the command line or by placing it in an
environment variable called LSMCLI_PASSWORD.


Example 27.2. Examples of lsmcli

The following example uses the command line to create a new volume and make it visible to an initiator.

List arrays that are serviced by this connection.

$ lsmcli list --type SYSTEMS


ID | Name | Status
-------+-------------------------------+--------
sim-01 | LSM simulated storage plug-in | OK

List storage pools.

$ lsmcli list --type POOLS -H


ID | Name | Total space | Free space |
System ID
-----+---------------+----------------------+----------------------
+-----------
POO2 | Pool 2 | 18446744073709551616 | 18446744073709551616 |
sim-01
POO3 | Pool 3 | 18446744073709551616 | 18446744073709551616 |
sim-01
POO1 | Pool 1 | 18446744073709551616 | 18446744073709551616 |
sim-01
POO4 | lsm_test_aggr | 18446744073709551616 | 18446744073709551616 |
sim-01

Create a volume.

$ lsmcli volume-create --name volume_name --size 20G --pool POO1 -H


ID | Name | vpd83 | bs | #blocks
| status | ...
-----+-------------+----------------------------------+-----+-------
---+--------+----
Vol1 | volume_name | F7DDF7CA945C66238F593BC38137BD2F | 512 | 41943040 |
OK | ...

Create an access group with an iSCSI initiator in it.

$ lsmcli --create-access-group example_ag --id iqn.1994-05.com.domain:01.89bd01 --type ISCSI --system sim-01

ID | Name | Initiator ID
|SystemID
---------------------------------+------------+---------------------
-------------+--------
782d00c8ac63819d6cca7069282e03a0 | example_ag | iqn.1994-
05.com.domain:01.89bd01 |sim-01

Create an access group with an iSCSI initiator in it:

$ lsmcli access-group-create --name example_ag --init iqn.1994-05.com.domain:01.89bd01 --init-type ISCSI --sys sim-01

ID | Name | Initiator IDs
| System ID


---------------------------------+------------+---------------------
-------------+-----------
782d00c8ac63819d6cca7069282e03a0 | example_ag | iqn.1994-
05.com.domain:01.89bd01 | sim-01

Allow the access group visibility to the newly created volume:

$ lsmcli access-group-grant --ag 782d00c8ac63819d6cca7069282e03a0 --vol Vol1 --access RW

The design of the library provides for a process separation between the client and the plug-in by means
of inter-process communication (IPC). This prevents bugs in the plug-in from crashing the client
application. It also provides a means for plug-in writers to write plug-ins with a license of their own
choosing. When a client opens the library passing a URI, the client library looks at the URI to determine
which plug-in should be used.

The plug-ins are technically stand-alone applications, but they are designed to have a file descriptor
passed to them on the command line. The client library then opens the appropriate Unix domain socket,
which causes the daemon to fork and execute the plug-in. This gives the client library a point-to-point
communication channel with the plug-in. The daemon can be restarted without affecting existing clients.
While the client has the library open for that plug-in, the plug-in process is running. After one or more
commands are sent and the plug-in is closed, the plug-in process cleans up and then exits.

The default behavior of lsmcli is to wait until the operation is complete. Depending on the requested
operation, this could potentially take many hours. To allow a return to normal usage, it is possible
to use the -b option on the command line. If the exit code is 0 the command is completed. If the exit
code is 7 the command is in progress and a job identifier is written to standard output. The user or script
can then take the job ID and query the status of the command as needed by using lsmcli --
jobstatus JobID. If the job is now completed, the exit value will be 0 and the results printed to
standard output. If the command is still in progress, the return value will be 7 and the percentage
complete will be printed to the standard output.

Example 27.3. An Asynchronous Example

Create a volume passing the -b option so that the command returns immediately.

$ lsmcli volume-create --name async_created --size 20G --pool POO1 -b


JOB_3

Check to see what the exit value was, remembering that 7 indicates the job is still in progress.

$ echo $?
7

Check to see if the job is completed.

$ lsmcli job-status --job JOB_3


33

Check to see what the exit value was, remembering that 7 indicates the job is still in progress so the
standard output is the percentage done or 33% based on the above screen.

$ echo $?


Wait some more and check it again, remembering that exit 0 means success and standard out
displays the new volume.

$ lsmcli job-status --job JOB_3


ID | Name | vpd83 | Block Size
| ...
-----+---------------+----------------------------------+-----------
--+-----
Vol2 | async_created | 855C9BA51991B0CC122A3791996F6B15 | 512 |
...
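
The same create-and-poll flow can be scripted. The following is only a rough sketch built on the commands and exit codes described above; the volume name and pool ID are illustrative, and it assumes LSMCLI_URI and LSMCLI_PASSWORD are already exported:

#!/bin/bash
# Create a volume asynchronously; with -b, exit code 7 means a job ID was printed.
job=$(lsmcli volume-create --name async_vol --size 20G --pool POO1 -b)
rc=$?
if [ "$rc" -eq 7 ]; then
    # Poll until job-status exits 0 (complete) or returns an unexpected code.
    while true; do
        out=$(lsmcli job-status --job "$job")
        rc=$?
        if [ "$rc" -eq 0 ]; then
            echo "$out"                       # volume details on completion
            break
        elif [ "$rc" -eq 7 ]; then
            echo "in progress: ${out}% complete"
            sleep 5
        else
            echo "job failed with exit code $rc" >&2
            exit 1
        fi
    done
elif [ "$rc" -ne 0 ]; then
    echo "volume-create failed with exit code $rc" >&2
    exit "$rc"
fi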

For scripting, pass the -t SeparatorCharacters option. This will make it easier to parse the output.

Example 27.4. Scripting Examples

$ lsmcli list --type volumes -t#


Vol1#volume_name#049167B5D09EC0A173E92A63F6C3EA2A#512#41943040#214748364
80#OK#sim-01#POO1
Vol2#async_created#3E771A2E807F68A32FA5E15C235B60CC#512#41943040#2147483
6480#OK#sim-01#POO1

$ lsmcli list --type volumes -t " | "


Vol1 | volume_name | 049167B5D09EC0A173E92A63F6C3EA2A | 512 | 41943040 |
21474836480 | OK | 21474836480 | sim-01 | POO1
Vol2 | async_created | 3E771A2E807F68A32FA5E15C235B60CC | 512 | 41943040
| 21474836480 | OK | sim-01 | POO1

$ lsmcli list --type volumes -s


---------------------------------------------
ID | Vol1
Name | volume_name
VPD83 | 049167B5D09EC0A173E92A63F6C3EA2A
Block Size | 512
#blocks | 41943040
Size | 21474836480
Status | OK
System ID | sim-01
Pool ID | POO1
---------------------------------------------
ID | Vol2
Name | async_created
VPD83 | 3E771A2E807F68A32FA5E15C235B60CC
Block Size | 512
#blocks | 41943040
Size | 21474836480
Status | OK
System ID | sim-01
Pool ID | POO1
---------------------------------------------


It is recommended to use the Python library for non-trivial scripting.

For more information on lsmcli, refer to the man pages or the command lsmcli --help.

CHAPTER 28. PERSISTENT MEMORY: NVDIMMS


Persistent memory (pmem), also called storage class memory, is a combination of memory and
storage. pmem combines the durability of storage with the low access latency and the high bandwidth of
dynamic RAM (DRAM):

Persistent memory is byte-addressable, so it can be accessed by using CPU load and store
instructions. In addition to read() or write() system calls that are required for accessing
traditional block-based storage, pmem also supports a direct load and store programming model.

The performance characteristics of persistent memory are similar to DRAM with very low access
latency, typically in the tens to hundreds of nanoseconds.

Contents of persistent memory are preserved when the power is off, like with storage.

Using persistent memory is beneficial for use cases like:

Rapid start: data set is already in memory.


Rapid start is also called the warm cache effect. A file server has none of the file contents in memory
after starting. As clients connect and read and write data, that data is cached in the page cache.
Eventually, the cache contains mostly hot data. After a reboot, the system must start the process
again.

Persistent memory allows an application to keep the warm cache across reboots if the application is
designed properly. In this instance, there would be no page cache involved: the application would
cache data directly in the persistent memory.

Fast write-cache
File servers often do not acknowledge a client's write request until the data is on durable media. Using
persistent memory as a fast write cache enables a file server to acknowledge the write request
quickly thanks to the low latency of pmem.

NVDIMMs Interleaving
Non-Volatile Dual In-line Memory Modules (NVDIMMs) can be grouped into interleave sets in the same
way as regular DRAM. An interleave set is like a RAID 0 (stripe) across multiple DIMMs.

Following are the advantages of NVDIMMS interleaving:

Like DRAM, NVDIMMs benefit from increased performance when they are configured into
interleave sets.

It can be used to combine multiple smaller NVDIMMs into one larger logical device.

Use the system BIOS or UEFI firmware to configure interleave sets.

In Linux, one region device is created per interleave set.

Following is the relation between region devices and labels:

If your ⁠NVDIMMs support labels, the region device can be further subdivided into namespaces.

If your NVDIMMs do not support labels, the region devices can only contain a single namespace.
In this case, the kernel creates a default namespace which covers the entire region.

Persistent Memory Access Modes


You can use persistent memory in sector, memory, dax (Direct Access) or raw mode:

sector mode
It presents the storage as a fast block device. Using sector mode is useful for legacy applications that
have not been modified to use persistent memory, or for applications that make use of the full I/O
stack, including the Device Mapper.

memory mode
It enables persistent memory devices to support direct access programming as described in the
Storage Networking Industry Association (SNIA) Non-Volatile Memory (NVM) Programming Model
specification. In memory mode, I/O bypasses the storage stack of the kernel, and many Device
Mapper drivers therefore cannot be used.

dax mode
The dax mode,also called device DAX, provides raw access to persistent memory by using a DAX
character device node. Data on a DAX device can be made durable using CPU cache flushing and
fencing instructions. Certain databases and virtual machine hypervisors might benefit from DAX
mode. File systems cannot be created on device dax instances.

raw mode
The raw mode namespaces have several limitations and should not be used.

28.1. CONFIGURING PERSISTENT MEMORY WITH NDCTL


Use the ndctl utility to configure persistent memory devices. To install ndctl utility, use the following
command:

# yum install ndctl

Procedure 28.1. Configuring Persistent Memory for a device that does not support labels

1. List the available pmem regions on your system. In the following example, the command lists an
NVDIMM-N device that does not support labels:

# ndctl list --regions


[
{
"dev":"region1",
"size":34359738368,
"available_size":0,
"type":"pmem"
},
{
"dev":"region0",
"size":34359738368,
"available_size":0,
"type":"pmem"
}
]


The OS creates a default namespace for each region because the NVDIMM-N device here does not
support labels. Hence, the available size is 0 bytes.

2. List all the inactive namespaces on your system:

# ndctl list --namespaces --idle


[
{
"dev":"namespace1.0",
"mode":"raw",
"size":34359738368,
"state":"disabled",
"numa_node":1
},
{
"dev":"namespace0.0",
"mode":"raw",
"size":34359738368,
"state":"disabled",
"numa_node":0
}
]

3. Reconfigure the inactive namespaces in order to make use of this space. For example, to use
namespace0.0 for a file system that supports DAX, use the following command:

# ndctl create-namespace --force --reconfig=namespace0.0 --mode=memory --map=mem

{
"dev":"namespace0.0",
"mode":"memory",
"size":"32.00 GiB (34.36 GB)",
"uuid":"ab91cc8f-4c3e-482e-a86f-78d177ac655d",
"blockdev":"pmem0",
"numa_node":0
}

Procedure 28.2. Configuring Persistent Memory for a device that supports labels

1. List the available pmem regions on your system. In the following example, the command lists an
NVDIMM-N device that supports labels:

# ndctl list --regions


[
{
"dev":"region5",
"size":270582939648,
"available_size":270582939648,
"type":"pmem",
"iset_id":-7337419320239190016
},
{
"dev":"region4",
"size":270582939648,
"available_size":270582939648,


"type":"pmem",
"iset_id":-137289417188962304
}
]

2. If an NVDIMM supports labels, default namespaces are not created, and you can allocate one or
more namespaces from a region without using the --force or --reconfig options:

# ndctl create-namespace --region=region4 --mode=memory --map=dev --size=36G

{
"dev":"namespace4.0",
"mode":"memory",
"size":"35.44 GiB (38.05 GB)",
"uuid":"9c5330b5-dc90-4f7a-bccd-5b558fa881fe",
"blockdev":"pmem4",
"numa_node":0
}

Now, you can create another namespace from the same region:

# ndctl create-namespace --region=region4 --mode=memory --map=dev --size=36G

{
"dev":"namespace4.1",
"mode":"memory",
"size":"35.44 GiB (38.05 GB)",
"uuid":"91868e21-830c-4b8f-a472-353bf482a26d",
"blockdev":"pmem4.1",
"numa_node":0
}

You can also create namespaces of different types from the same region, using the following
command:

# ndctl create-namespace --region=region4 --mode=dax --align=2M --size=36G

{
"dev":"namespace4.2",
"mode":"dax",
"size":"35.44 GiB (38.05 GB)",
"uuid":"a188c847-4153-4477-81bb-7143e32ffc5c",
"daxregion":
{
"id":4,
"size":"35.44 GiB (38.05 GB)",
"align":2097152,
"devices":[
{
"chardev":"dax4.2",
"size":"35.44 GiB (38.05 GB)"
}]
},
"numa_node":0
}


For more information on ndctl utility, see man ndctl.

28.2. CONFIGURING PERSISTENT MEMORY FOR USE AS A BLOCK DEVICE (LEGACY MODE)
To use persistent memory as a fast block device, set the namespace to sector mode.

# ndctl create-namespace --force --reconfig=namespace1.0 --mode=sector


{
"dev":"namespace1.0",
"mode":"sector",
"size":17162027008,
"uuid":"029caa76-7be3-4439-8890-9c2e374bcc76",
"sector_size":4096,
"blockdev":"pmem1s"
}

In the example, namespace1.0 is reconfigured to sector mode. Note that the block device name
changed from pmem1 to pmem1s. This device can be used in the same way as any other block device on
the system. For example, the device can be partitioned, you can create a file system on the device, the
device can be configured as part of a software RAID set, and the device can be the cache device for dm-
cache.
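
For example (a sketch only; the file system type and mount point are illustrative), the reconfigured device can be formatted and mounted like any other disk:

# mkfs.xfs /dev/pmem1s
# mkdir -p /mnt/pmem_sector
# mount /dev/pmem1s /mnt/pmem_sector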

28.3. CONFIGURING PERSISTENT MEMORY FOR FILE SYSTEM DIRECT ACCESS (DAX)
Direct access requires the namespace to be configured to memory mode. Memory mode allows for the
direct access programming model. When a device is configured in memory mode, a file system can be
created on top of it, and then mounted with the -o dax mount option. Then, any application that
performs an mmap() operation on a file on this file system gets direct access to its storage. See the
following example:

# ndctl create-namespace --force --reconfig=namespace0.0 --mode=memory --map=mem

{
"dev":"namespace0.0",
"mode":"memory",
"size":17177772032,
"uuid":"e6944638-46aa-4e06-a722-0b3f16a5acbf",
"blockdev":"pmem0"
}

In the example, namespace0.0 is converted to memory mode. With the --map=mem
argument, ndctl puts operating system data structures used for Direct Memory Access (DMA) in system
DRAM.

To perform DMA, the kernel requires a data structure for each page in the memory region. The overhead
of this data structure is 64 bytes per 4-KiB page. For small devices, the amount of overhead is small
enough to fit in DRAM with no problems. For example, the 16-GiB namespace only requires 256MiB for
page structures. Because the NVDIMM is small and expensive, storing the kernel’s page tracking data
structures in DRAM is preferable, as indicated by the --map=mem parameter.


In the future, NVDIMM devices might be terabytes in size. For such devices, the amount of memory
required to store the page tracking data structures might exceed the amount of DRAM in the system. One
TiB of persistent memory requires 16 GiB just for page structures. As a result, specifying the --map=dev
parameter to store the data structures in the persistent memory itself is preferable in such cases.

After configuring the namespace in memory mode, the namespace is ready for a file system. Starting
with Red Hat Enterprise Linux 7.3, both the Ext4 and XFS file system enable using persistent memory as
a Technology Preview. File system creation requires no special arguments. To get the DAX functionality,
mount the file system with the dax mount option. For example:

# mkfs -t xfs /dev/pmem0


# mount -o dax /dev/pmem0 /mnt/pmem/

Then, applications can use persistent memory and create files in the /mnt/pmem/ directory, open the
files, and use the mmap operation to map the files for direct access.

When creating partitions on a pmem device to be used for direct access, partitions must be aligned on
page boundaries. On the Intel 64 and AMD64 architecture, at least 4 KiB alignment is required for the
start and end of the partition, but 2 MiB is the preferred alignment. By default, the parted tool aligns partitions on 1 MiB
boundaries. For the first partition, specify 2MiB as the start of the partition. If the size of the partition is a
multiple of 2MiB, all other partitions are also aligned.
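
As a sketch (the device name is illustrative), parted can create a suitably aligned first partition as follows:

# parted -s /dev/pmem0 mklabel gpt
# parted -s /dev/pmem0 mkpart primary 2MiB 100%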

28.4. CONFIGURING PERSISTENT MEMORY FOR USE IN DEVICE DAX MODE
Device DAX provides a means for applications to directly access storage, without the involvement of a
file system. The benefit of device DAX is that it provides a guaranteed fault granularity, which can be
configured using the --align option of the ndctl utility:

# ndctl create-namespace --force --reconfig=namespace0.0 --mode=dax --align=2M

This command ensures that the operating system faults in 2 MiB pages at a time. For the Intel
64 and AMD64 architecture, the following fault granularities are supported:

4KiB

2MiB

1GiB

Device DAX nodes (/dev/daxN.M) support only the following system calls:

open()

close()

mmap()

fallocate()

read() and write() variants are not supported because the use case is tied to persistent memory
programming.


28.5. TROUBLESHOOTING
Some NVDIMMs support Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T.) interfaces for
retrieving health information.

NOTE

On some systems, the acpi_ipmi driver must be loaded before health information can be
retrieved. Load the driver using the following command:

# modprobe acpi_ipmi

To access the health information, use the following command:

# ndctl list --dimms --health --dimm=nmem0


[
{
"dev":"nmem0",
"id":"802c-01-1513-b3009166",
"handle":1,
"phys_id":22,
"health":
{
"health_state":"ok",
"temperature_celsius":25.000000,
"spares_percentage":99,
"alarm_temperature":false,
"alarm_spares":false,
"temperature_threshold":50.000000,
"spares_threshold":20,
"life_used_percentage":1,
"shutdown_state":"clean"
}
}
]


PART III. DATA DEDUPLICATION AND COMPRESSION WITH VDO
This part describes how to provide deduplicated block storage capabilities to existing storage
management applications by enabling them to utilize Virtual Data Optimizer (VDO).


CHAPTER 29. VDO INTEGRATION

29.1. THEORETICAL OVERVIEW OF VDO


Virtual Data Optimizer (VDO) is a block virtualization technology that allows you to easily create
compressed and deduplicated pools of block storage.

Deduplication is a technique for reducing the consumption of storage resources by eliminating
multiple copies of duplicate blocks.

Instead of writing the same data more than once, VDO detects each duplicate block and records
it as a reference to the original block. VDO maintains a mapping from logical block addresses,
which are used by the storage layer above VDO, to physical block addresses, which are used
by the storage layer under VDO.

After deduplication, multiple logical block addresses may be mapped to the same physical block
address; these are called shared blocks. Block sharing is invisible to users of the storage, who
read and write blocks as they would if VDO were not present. When a shared block is
overwritten, a new physical block is allocated for storing the new block data to ensure that other
logical block addresses that are mapped to the shared physical block are not modified.

Compression is a data-reduction technique that works well with file formats that do not
necessarily exhibit block-level redundancy, such as log files and databases. See Section 29.4.8,
“Using Compression” for more detail.

The VDO solution consists of the following components:

kvdo
A kernel module that loads into the Linux Device Mapper layer to provide a deduplicated,
compressed, and thinly provisioned block storage volume

uds
A kernel module that communicates with the Universal Deduplication Service (UDS) index on the
volume and analyzes data for duplicates.

Command line tools


For configuring and managing optimized storage.

29.1.1. The UDS Kernel Module (uds)

The UDS index provides the foundation of the VDO product. For each new piece of data, it quickly
determines if that piece is identical to any previously stored piece of data. If the index finds a match, the
storage system can then internally reference the existing item to avoid storing the same information more
than once.

The UDS index runs inside the kernel as the uds kernel module.

29.1.2. The VDO Kernel Module (kvdo)

The kvdo Linux kernel module provides block-layer deduplication services within the Linux Device
Mapper layer. In the Linux kernel, Device Mapper serves as a generic framework for managing pools of
block storage, allowing the insertion of block-processing modules into the storage stack between the


kernel's block interface and the actual storage device drivers.

The kvdo module is exposed as a block device that can be accessed directly for block storage or
presented through one of the many available Linux file systems, such as XFS or ext4. When kvdo
receives a request to read a (logical) block of data from a VDO volume, it maps the requested logical
block to the underlying physical block and then reads and returns the requested data.

When kvdo receives a request to write a block of data to a VDO volume, it first checks whether it is a
DISCARD or TRIM request or whether the data is uniformly zero. If either of these conditions holds, kvdo
updates its block map and acknowledges the request. Otherwise, a physical block is allocated for use by
the request.

Overview of VDO Write Policies

If the kvdo module is operating in synchronous mode:

1. It temporarily writes the data in the request to the allocated block and then acknowledges the
request.

2. Once the acknowledgment is complete, an attempt is made to deduplicate the block by
computing a MurmurHash-3 signature of the block data, which is sent to the VDO index.

3. If the VDO index contains an entry for a block with the same signature, kvdo reads the indicated
block and does a byte-by-byte comparison of the two blocks to verify that they are identical.

4. If they are indeed identical, then kvdo updates its block map so that the logical block points to
the corresponding physical block and releases the allocated physical block.

5. If the VDO index did not contain an entry for the signature of the block being written, or the
indicated block does not actually contain the same data, kvdo updates its block map to make the
temporary physical block permanent.

If kvdo is operating in asynchronous mode:

1. Instead of writing the data, it will immediately acknowledge the request.

2. It will then attempt to deduplicate the block in same manner as described above.

3. If the block turns out to be a duplicate, kvdo will update its block map and release the allocated
block. Otherwise, it will write the data in the request to the allocated block and update the block
map to make the physical block permanent.

29.1.3. VDO Volume


VDO uses a block device as a backing store, which can include an aggregation of physical storage
consisting of one or more disks, partitions, or even flat files. When a VDO volume is created by a
storage management tool, VDO reserves space from the volume for both a UDS index and the VDO
volume, which interact together to provide deduplicated block storage to users and applications.
Figure 29.1, “VDO Disk Organization” illustrates how these pieces fit together.


Figure 29.1. VDO Disk Organization

Slabs

The physical storage of the VDO volume is divided into a number of slabs, each of which is a contiguous
region of the physical space. All of the slabs for a given volume will be of the same size, which may be
any power of 2 multiple of 128 MB up to 32 GB.

The default slab size is 2 GB in order to facilitate evaluating VDO on smaller test systems. A single VDO
volume may have up to 8192 slabs. Therefore, in the default configuration with 2 GB slabs, the
maximum allowed physical storage is 16 TB. When using 32 GB slabs, the maximum allowed physical
storage is 256 TB.

For a recommendation on what slab size to choose depending on your physical volume size, see
Table 29.1, “Recommended VDO Slab Sizes by Physical Volume Size”.

At least one entire slab will be reserved by VDO for metadata, and therefore cannot be used for storing
user data.

The size of a slab can be controlled by providing the --vdoSlabSize=megabytes option to the vdo
create command.
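
For example, the following sketch requests 32 GB slabs, expressed in megabytes as the option name above suggests (the volume name and device path are illustrative):

# vdo create --name=vdo1 --device=/dev/sdb --vdoSlabSize=32768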

Table 29.1. Recommended VDO Slab Sizes by Physical Volume Size

Physical Volume Size    Slab Size
10–99 GB                1 GB
100 GB – 1 TB           2 GB
2–10 TB                 32 GB
11–50 TB                32 GB
51–100 TB               32 GB
101–256 TB              32 GB

Physical Size and Available Physical Size

Both physical size and available physical size describe the amount of disk space on the block device
that VDO can utilize:

Physical size is the same size as the underlying block device. VDO uses this storage for:

User data, which might be deduplicated and compressed

VDO metadata, such as the UDS index

Available physical size is the portion of the physical size that VDO is able to use for user data.

It is equivalent to the physical size minus the size of the metadata, minus the remainder after
dividing the volume into slabs by the given slab size.


For examples of how much storage VDO metadata require on block devices of different sizes, see
Section 29.2.3, “Examples of VDO System Requirements by Physical Volume Size”.

Logical Size

If the --vdoLogicalSize option is not specified, the logical volume size defaults to the available
physical volume size. Note that, in Figure 29.1, “VDO Disk Organization”, the VDO deduplicated storage
target sits completely on top of the block device, meaning the physical size of the VDO volume is the
same size as the underlying block device.

VDO currently supports any logical size up to 254 times the size of the physical volume with an absolute
maximum logical size of 4PB.

29.1.4. Command Line Tools


VDO includes the following command line tools for configuration and management:

vdo
Creates, configures, and controls VDO volumes

vdostats
Provides utilization and performance statistics

29.2. SYSTEM REQUIREMENTS


Processor Architectures
One or more processors implementing the Intel 64 instruction set are required: that is, a processor of the
AMD64 or Intel 64 architecture.

RAM
Each VDO volume has two distinct memory requirements:

The VDO module requires 370 MB plus an additional 268 MB for each 1 TB of physical storage
managed.

The Universal Deduplication Service (UDS) index requires a minimum of 250 MB of DRAM,
which is also the default amount that deduplication uses. For details on the memory usage of
UDS, see Section 29.2.1, “UDS Index Memory Requirements”.

Storage
A single VDO volume can be configured to use up to 256 TB of physical storage. See Section 29.2.2,
“VDO Storage Requirements” for the calculations to determine the usable size of a VDO-managed
volume from the physical size of the storage pool the VDO is given.

Additional System Software


VDO depends on the following software:

LVM

Python 2.7

The yum package manager will install all necessary software dependencies automatically.


Placement of VDO in the Storage Stack


As a general rule, you should place certain storage layers under VDO and others on top of VDO:

Under VDO: DM-Multipath, DM-Crypt, and software RAID (LVM or mdraid).

On top of VDO: LVM cache, LVM Logical Volumes, LVM snapshots, and LVM Thin Provisioning.

The following configurations are not supported:

VDO on top of VDO volumes: storage → VDO → LVM → VDO

VDO on top of LVM Snapshots

VDO on top of LVM Cache

VDO on top of the loopback device

VDO on top of LVM Thin Provisioning

Encrypted volumes on top of VDO: storage → VDO → DM-Crypt

Partitions on a VDO volume: fdisk, parted, and similar partitions

RAID (LVM, MD, or any other type) on top of a VDO volume

IMPORTANT

VDO supports two write modes: sync and async. When VDO is in sync mode, writes to
the VDO device are acknowledged when the underlying storage has written the data
permanently. When VDO is in async mode, writes are acknowledged before being
written to persistent storage.

It is critical to set the VDO write policy to match the behavior of the underlying storage. By
default, VDO write policy is set to the auto option, which selects the appropriate policy
automatically.

For more information, see Section 29.4.2, “Selecting VDO Write Modes”.

29.2.1. UDS Index Memory Requirements


The UDS index consists of two parts:

A compact representation is used in memory that contains at most one entry per unique block.

An on-disk component which records the associated block names presented to the index as they
occur, in order.

UDS uses an average of 4 bytes per entry in memory (including cache).

The on-disk component maintains a bounded history of data passed to UDS. UDS provides deduplication
advice for data that falls within this deduplication window, containing the names of the most recently
seen blocks. The deduplication window allows UDS to index data as efficiently as possible while limiting
the amount of memory required to index large data repositories. Despite the bounded nature of the
deduplication window, most datasets which have high levels of deduplication also exhibit a high degree
of temporal locality — in other words, most deduplication occurs among sets of blocks that were written
at about the same time. Furthermore, in general, data being written is more likely to duplicate data that


was recently written than data that was written a long time ago. Therefore, for a given workload over a
given time interval, deduplication rates will often be the same whether UDS indexes only the most recent
data or all the data.

Because duplicate data tends to exhibit temporal locality, it is rarely necessary to index every block in
the storage system. Were this not so, the cost of index memory would outstrip the savings of reduced
storage costs from deduplication. Index size requirements are more closely related to the rate of data
ingestion. For example, consider a storage system with 100 TB of total capacity but with an ingestion
rate of 1 TB per week. With a deduplication window of 4 TB, UDS can detect most redundancy among
the data written within the last month.

UDS's Sparse Indexing feature (the recommended mode for VDO) further exploits temporal locality by
attempting to retain only the most relevant index entries in memory. UDS can maintain a deduplication
window that is ten times larger while using the same amount of memory. While the sparse index provides
the greatest coverage, the dense index provides more advice. For most workloads, given the same
amount of memory, the difference in deduplication rates between dense and sparse indexes is
negligible.

The memory required for the index is determined by the desired size of the deduplication window:

For a dense index, UDS will provide a deduplication window of 1 TB per 1 GB of RAM. A 1 GB
index is generally sufficient for storage systems of up to 4 TB.

For a sparse index, UDS will provide a deduplication window of 10 TB per 1 GB of RAM. A 1 GB
sparse index is generally sufficient for up to 40 TB of physical storage.

For concrete examples of UDS Index memory requirements, see Section 29.2.3, “Examples of VDO
System Requirements by Physical Volume Size”

29.2.2. VDO Storage Requirements


VDO requires storage both for VDO metadata and for the actual UDS deduplication index:

VDO writes two types of metadata to its underlying physical storage:

The first type scales with the physical size of the VDO volume and uses approximately 1 MB
for each 4 GB of physical storage plus an additional 1 MB per slab.

The second type scales with the logical size of the VDO volume and consumes
approximately 1.25 MB for each 1 GB of logical storage, rounded up to the nearest slab.

See Section 29.1.3, “VDO Volume” for a description of slabs.

The UDS index is stored within the VDO volume group and is managed by the associated VDO
instance. The amount of storage required depends on the type of index and the amount of RAM
allocated to the index. For each 1 GB of RAM, a dense UDS index will use 17 GB of storage,
and a sparse UDS index will use 170 GB of storage.

For concrete examples of VDO storage requirements, see Section 29.2.3, “Examples of VDO System
Requirements by Physical Volume Size”

29.2.3. Examples of VDO System Requirements by Physical Volume Size


The following tables provide approximate system requirements of VDO based on the size of the
underlying physical volume. Each table lists requirements appropriate to the intended deployment, such
as primary storage or backup storage.


The exact numbers depend on your configuration of the VDO volume.

Primary Storage Deployment

In the primary storage case, the UDS index is between 0.01% and 25% of the size of the physical volume.

Table 29.2. VDO Storage and Memory Requirements for Primary Storage

Physical Volume Size    RAM Usage                     Disk Usage                    Index Type
10 GB – 1 TB            250 MB                        2.5 GB                        Dense
2–10 TB                 Dense: 1 GB; Sparse: 250 MB   Dense: 10 GB; Sparse: 22 GB   Dense or Sparse
11–50 TB                2 GB                          170 GB                        Sparse
51–100 TB               3 GB                          255 GB                        Sparse
101–256 TB              12 GB                         1020 GB                       Sparse

Backup Storage Deployment

In the backup storage case, the UDS index covers the size of the backup set but is not bigger than the
physical volume. If you expect the backup set or the physical size to grow in the future, factor this into
the index size.

Table 29.3. VDO Storage and Memory Requirements for Backup Storage

Physical Volume Size    RAM Usage    Disk Usage    Index Type
10 GB – 1 TB            250 MB       2.5 GB        Dense
2–10 TB                 2 GB         170 GB        Sparse
11–50 TB                10 GB        850 GB        Sparse
51–100 TB               20 GB        1700 GB       Sparse
101–256 TB              26 GB        3400 GB       Sparse

29.3. GETTING STARTED WITH VDO

29.3.1. Introduction
Virtual Data Optimizer (VDO) provides inline data reduction for Linux in the form of deduplication,
compression, and thin provisioning. When you set up a VDO volume, you specify a block device on
which to construct your VDO volume and the amount of logical storage you plan to present.

When hosting active VMs or containers, Red Hat recommends provisioning storage at a 10:1
logical to physical ratio: that is, if you are utilizing 1 TB of physical storage, you would present it
as 10 TB of logical storage.


For object storage, such as the type provided by Ceph, Red Hat recommends using a 3:1 logical
to physical ratio: that is, 1 TB of physical storage would present as 3 TB logical storage.

In either case, you can simply put a file system on top of the logical device presented by VDO and then
use it directly or as part of a distributed cloud storage architecture.

This chapter describes the following use cases of VDO deployment:

the direct-attached use case for virtualization servers, such as those built using Red Hat
Virtualization, and

the cloud storage use case for object-based distributed storage clusters, such as those built
using Ceph Storage.

NOTE

VDO deployment with Ceph is currently not supported.

This chapter provides examples for configuring VDO for use with a standard Linux file system that can
be easily deployed for either use case; see the diagrams in Section 29.3.5, “Deployment Examples”.

29.3.2. Installing VDO


VDO is deployed using the following RPM packages:

vdo

kmod-kvdo

To install VDO, use the yum package manager to install the RPM packages:

# yum install vdo kmod-kvdo

29.3.3. Creating a VDO Volume


Create a VDO volume for your block device. Note that multiple VDO volumes can be created for
separate devices on the same machine. If you choose this approach, you must supply a different name
and device for each instance of VDO on the system.

In all the following steps, replace vdo_name with the identifier you want to use for your VDO volume; for
example, vdo1.

NOTE

Before creating volumes, VDO uses LVM utilities such as pvcreate --test to validate the
block device.

1. Create the VDO volume using the VDO Manager:

# vdo create \
--name=vdo_name \
--device=block_device \
--vdoLogicalSize=logical_size


Replace block_device with the persistent name of the block device where you want to create
the VDO volume. For example, /dev/disk/by-id/scsi-3600508b1001c264ad2af21e903ad031f.

IMPORTANT

Use a persistent device name. If you use a non-persistent device name, then
VDO might fail to start properly in the future if the device name changes.

For more information on persistent names, see Section 25.7, “Persistent


Naming”.

Replace logical_size with the amount of logical storage that the VDO volume should present:

For active VMs or container storage, use logical size that is ten times the physical size
of your block device. For example, if your block device is 1 TB in size, use 10T here.

For object storage, use logical size that is three times the physical size of your block
device. For example, if your block device is 1 TB in size, use 3T here.

Example 29.1. Creating VDO for Container Storage

For example, to create a VDO volume for container storage on a 1 TB block device, you
might use:

# vdo create \
--name=vdo1 \
--device=/dev/disk/by-id/scsi-3600508b1001c264ad2af21e903ad031f \
--vdoLogicalSize=10T

When a VDO volume is created, VDO adds an entry to the /etc/vdoconf.yml configuration
file. The vdo.service systemd unit then uses the entry to start the volume by default.

IMPORTANT

If a failure occurs when creating the VDO volume, remove the volume to clean up.
See Section 29.4.3.1, “Removing an Unsuccessfully Created Volume” for details.

2. Create a file system:

For the XFS file system:

# mkfs.xfs -K /dev/mapper/vdo_name

For the ext4 file system:

# mkfs.ext4 -E nodiscard /dev/mapper/vdo_name

3. Mount the file system:


# mkdir -m 1777 /mnt/vdo_name


# mount /dev/mapper/vdo_name /mnt/vdo_name

4. To configure the file system to mount automatically, use either the /etc/fstab file or a
systemd mount unit:

If you decide to use the /etc/fstab configuration file, add one of the following lines to the
file:

For the XFS file system:

/dev/mapper/vdo_name /mnt/vdo_name xfs defaults,x-systemd.requires=vdo.service 0 0

For the ext4 file system:

/dev/mapper/vdo_name /mnt/vdo_name ext4 defaults,x-systemd.requires=vdo.service 0 0

Alternatively, if you decide to use a systemd unit, create a systemd mount unit file with the
appropriate filename. For the mount point of your VDO volume, create the
/etc/systemd/system/mnt-vdo_name.mount file with the following content:

[Unit]
Description = VDO unit file to mount file system
name = vdo_name.mount
Requires = vdo.service
After = multi-user.target
Conflicts = umount.target

[Mount]
What = /dev/mapper/vdo_name
Where = /mnt/vdo_name
Type = xfs

[Install]
WantedBy = multi-user.target

An example systemd unit file is also installed at /usr/share/doc/vdo/examples/systemd/VDO.mount.example.

5. Enable the discard feature for the file system on your VDO device. Both batch and online
operations work with VDO.

For information on how to set up the discard feature, see Section 2.4, “Discard Unused
Blocks”.

29.3.4. Monitoring VDO


Because VDO is thin provisioned, the file system and applications will only see the logical space in use
and will not be aware of the actual physical space available. Scripting should be used to monitor the
actual available space and generate an alert if use exceeds a threshold: for example, when the file
system is 80% full.


VDO space usage and efficiency can be monitored using the vdostats utility:

# vdostats --human-readable

Device                  1K-blocks    Used     Available    Use%    Space saving%
/dev/mapper/node1osd1 926.5G 21.0G 905.5G 2% 73%
/dev/mapper/node1osd2 926.5G 28.2G 898.3G 3% 64%
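
Scripted monitoring can be built directly on this output. The following is a sketch only; the volume name and threshold are illustrative, and the awk field and row are based on the column layout shown above:

#!/bin/bash
# Warn when a VDO volume's physical space usage exceeds a threshold.
VOLUME=vdo1          # illustrative VDO volume name
THRESHOLD=80         # warn at 80% physical usage

# Use% is the fifth column of the vdostats output shown above.
usage=$(vdostats --human-readable /dev/mapper/"$VOLUME" | awk 'NR==2 {gsub(/%/, "", $5); print $5}')

if [ -n "$usage" ] && [ "$usage" -ge "$THRESHOLD" ]; then
    echo "WARNING: VDO volume $VOLUME is ${usage}% full (physical space)"
fi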

29.3.5. Deployment Examples


The following examples illustrate how VDO might be used in KVM and other deployments.

VDO Deployment with KVM

To see how VDO can be deployed successfully on a KVM server configured with Direct Attached
Storage, see Figure 29.2, “VDO Deployment with KVM”.

Figure 29.2. VDO Deployment with KVM

More Deployment Scenarios

For more information on VDO deployment, see Section 29.5, “Deployment Scenarios”.

29.4. ADMINISTERING VDO

29.4.1. Starting or Stopping VDO


To start a given VDO volume, or all VDO volumes, and the associated UDS index(es), storage
management utilities should invoke one of these commands:

# vdo start --name=my_vdo


# vdo start --all

The VDO systemd unit is installed and enabled by default when the vdo package is installed. This unit
automatically runs the vdo start --all command at system startup to bring up all activated VDO
volumes. See Section 29.4.6, “Automatically Starting VDO Volumes at System Boot” for more

information.

To stop a given VDO volume, or all VDO volumes, and the associated UDS index(es), use one of these
commands:

# vdo stop --name=my_vdo


# vdo stop --all

If restarted after an unclean shutdown, VDO will perform a rebuild to verify the consistency of its
metadata and will repair it if necessary. Rebuilds are automatic and do not require user intervention. See
Section 29.4.5, “Recovering a VDO Volume After an Unclean Shutdown” for more information on the
rebuild process.

VDO might rebuild different writes depending on the write mode:

In synchronous mode, all writes that were acknowledged by VDO prior to the shutdown will be
rebuilt.

In asynchronous mode, all writes that were acknowledged prior to the last acknowledged flush
request will be rebuilt.

In either mode, some writes that were either unacknowledged or not followed by a flush may also be
rebuilt.

For details on VDO write modes, see Section 29.4.2, “Selecting VDO Write Modes”.

29.4.2. Selecting VDO Write Modes


VDO supports three write modes, sync, async, and auto:

When VDO is in sync mode, the layers above it assume that a write command writes data to
persistent storage. As a result, it is not necessary for the file system or application, for example,
to issue FLUSH or Force Unit Access (FUA) requests to cause the data to become persistent at
critical points.

VDO must be set to sync mode only when the underlying storage guarantees that data is written
to persistent storage when the write command completes. That is, the storage must either have
no volatile write cache, or have a write through cache.

When VDO is in async mode, the data is not guaranteed to be written to persistent storage
when a write command is acknowledged. The file system or application must issue FLUSH or
FUA requests to ensure data persistence at critical points in each transaction.

VDO must be set to async mode if the underlying storage does not guarantee that data is
written to persistent storage when the write command completes; that is, when the storage has a
volatile write back cache.

For information on how to find out if a device uses volatile cache or not, see the section called
“Checking for a Volatile Cache”.

The auto mode automatically selects sync or async based on the characteristics of each
device. This is the default option.

For a more detailed theoretical overview of how write policies operate, see the section called “Overview
of VDO Write Policies”.


To set a write policy, use the --writePolicy option. This can be specified either when creating a VDO
volume as in Section 29.3.3, “Creating a VDO Volume” or when modifying an existing VDO volume with
the changeWritePolicy subcommand:

# vdo changeWritePolicy --writePolicy=sync|async|auto --name=vdo_name

IMPORTANT

Using the incorrect write policy might result in data loss on power failure.

Checking for a Volatile Cache

To see whether a device has a writeback cache, read the
/sys/block/block_device/device/scsi_disk/identifier/cache_type sysfs file. For example:

Device sda indicates that it has a writeback cache:

$ cat '/sys/block/sda/device/scsi_disk/7:0:0:0/cache_type'

write back

Device sdb indicates that it does not have a writeback cache:

$ cat '/sys/block/sdb/device/scsi_disk/1:2:0:0/cache_type'

None

Additionally, in the kernel boot log, you can find whether the above mentioned devices have a write
cache or not:

sd 7:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 1:2:0:0: [sdb] Write cache: disabled, read cache: disabled, supports DPO and FUA

See the Viewing and Managing Log Files chapter in the System Administrator's Guide for more
information on reading the system log.

In these examples, use the following write policies for VDO:

async mode for the sda device

sync mode for the sdb device

NOTE

You should configure VDO to use the sync write policy if the cache_type value is none
or write through.
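
As a rough sketch of putting the check and the policy choice together, the following shell fragment
reads the cache_type value and applies the matching policy. The device path, the sysfs identifier
wildcard, and the volume name my_vdo are placeholders; adjust them for your system (the auto mode
performs an equivalent selection automatically).

cache_type=$(cat /sys/block/sda/device/scsi_disk/*/cache_type 2>/dev/null)

case "$cache_type" in
    "write back")
        # Volatile write-back cache: completed writes are not guaranteed to be durable.
        vdo changeWritePolicy --writePolicy=async --name=my_vdo
        ;;
    "None"|"write through")
        # No volatile cache, or write-through cache: completed writes are durable.
        vdo changeWritePolicy --writePolicy=sync --name=my_vdo
        ;;
esac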

29.4.3. Removing VDO Volumes


A VDO volume can be removed from the system by running:

# vdo remove --name=my_vdo

Prior to removing a VDO volume, unmount file systems and stop applications that are using the storage.
The vdo remove command removes the VDO volume and its associated UDS index, as well as logical
volumes where they reside.
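
For example, the full removal sequence for a mounted volume might look like this; the mount point
/mnt/my_vdo is a placeholder, and the volume is stopped first because vdo remove operates on stopped
volumes:

# umount /mnt/my_vdo
# vdo stop --name=my_vdo
# vdo remove --name=my_vdo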

29.4.3.1. Removing an Unsuccessfully Created Volume

If a failure occurs when the vdo utility is creating a VDO volume, the volume is left in an intermediate
state. This might happen when, for example, the system crashes, power fails, or the administrator
interrupts a running vdo create command.

To clean up from this situation, remove the unsuccessfully created volume with the --force option:

# vdo remove --force --name=my_vdo

The --force option is required because the administrator might have caused a conflict by changing the
system configuration since the volume was unsuccessfully created. Without the --force option, the
vdo remove command fails with the following message:

[...]
A previous operation failed.
Recovery from the failure either failed or was interrupted.
Add '--force' to 'remove' to perform the following cleanup.
Steps to clean up VDO my_vdo:
umount -f /dev/mapper/my_vdo
udevadm settle
dmsetup remove my_vdo
vdo: ERROR - VDO volume my_vdo previous operation (create) is incomplete

29.4.4. Configuring the UDS Index


VDO uses a high-performance deduplication index called UDS to detect duplicate blocks of data as they
are being stored. The deduplication window is the number of previously written blocks which the index
remembers. The size of the deduplication window is configurable. For a given window size, the index
requires a specific amount of RAM and a specific amount of disk space. The size of the window is
usually determined by specifying the size of the index memory using the --indexMem=size option. The
amount of disk space to use will then be determined automatically.

In general, Red Hat recommends using a sparse UDS index for all production use cases. This is an
extremely efficient indexing data structure, requiring approximately one-tenth of a byte of DRAM per
block in its deduplication window. On disk, it requires approximately 72 bytes of disk space per block.
The minimum configuration of this index uses 256 MB of DRAM and approximately 25 GB of space on
disk. To use this configuration, specify the --sparseIndex=enabled --indexMem=0.25 options to
the vdo create command. This configuration results in a deduplication window of 2.5 TB (meaning it
will remember a history of 2.5 TB). For most use cases, a deduplication window of 2.5 TB is appropriate
for deduplicating storage pools that are up to 10 TB in size.
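
For example, a create command for this minimum sparse-index configuration might look as follows (the
device path /dev/sdX is a placeholder):

# vdo create --name=my_vdo --device=/dev/sdX --sparseIndex=enabled --indexMem=0.25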

The default configuration of the index, however, is to use a dense index. This index is considerably less
efficient (by a factor of 10) in DRAM, but it has much lower (also by a factor of 10) minimum required
disk space, making it more convenient for evaluation in constrained environments.


In general, a deduplication window which is one quarter of the physical size of a VDO volume is a
recommended configuration. However, this is not an actual requirement. Even small deduplication
windows (compared to the amount of physical storage) can find significant amounts of duplicate data in
many use cases. Larger windows may also be used, but in most cases there will be little additional
benefit to doing so.

Speak with your Red Hat Technical Account Manager representative for additional guidelines on tuning
this important system parameter.

29.4.5. Recovering a VDO Volume After an Unclean Shutdown


If a volume is restarted without having been shut down cleanly, VDO will need to rebuild a portion of its
metadata to continue operating, which occurs automatically when the volume is started. (Also see
Section 29.4.5.2, “Forcing a Rebuild” to invoke this process on a volume that was cleanly shut down.)

Data recovery depends on the write policy of the device:

If VDO was running on synchronous storage and write policy was set to sync, then all data
written to the volume will be fully recovered.

If the write policy was async, then some writes may not be recovered if they were not made
durable by sending VDO a FLUSH command, or a write I/O tagged with the FUA flag (force unit
access). This is accomplished from user mode by invoking a data integrity operation like fsync,
fdatasync, sync, or umount.

29.4.5.1. Online Recovery

In the majority of cases, most of the work of rebuilding an unclean VDO volume can be done after the
VDO volume has come back online and while it is servicing read and write requests. Initially, the amount
of space available for write requests may be limited. As more of the volume's metadata is recovered,
more free space may become available. Furthermore, data written while the VDO is recovering may fail
to deduplicate against data written before the crash if that data is in a portion of the volume which has not
yet been recovered. Data may be compressed while the volume is being recovered. Previously
compressed blocks may still be read or overwritten.

During an online recovery, a number of statistics will be unavailable: for example, blocks in use and
blocks free. These statistics will become available once the rebuild is complete.

29.4.5.2. Forcing a Rebuild

VDO can recover from most hardware and software errors. If a VDO volume cannot be recovered
successfully, it is placed in a read-only mode that persists across volume restarts. Once a volume is in
read-only mode, there is no guarantee that data has not been lost or corrupted. In such cases, Red Hat
recommends copying the data out of the read-only volume and possibly restoring the volume from
backup. (The operating mode attribute of vdostats indicates whether a VDO volume is in read-only
mode.)

If the risk of data corruption is acceptable, it is possible to force an offline rebuild of the VDO volume
metadata so the volume can be brought back online and made available. Again, the integrity of the rebuilt
data cannot be guaranteed.

To force a rebuild of a read-only VDO volume, first stop the volume if it is running:

# vdo stop --name=my_vdo


Then restart the volume using the --forceRebuild option:

# vdo start --name=my_vdo --forceRebuild

29.4.6. Automatically Starting VDO Volumes at System Boot


During system boot, the vdo systemd unit automatically starts all VDO devices that are configured as
activated.

To prevent certain existing volumes from being started automatically, deactivate those volumes by
running either of these commands:

To deactivate a specific volume:

# vdo deactivate --name=my_vdo

To deactivate all volumes:

# vdo deactivate --all

Conversely, to activate volumes, use one of these commands:

To activate a specific volume:

# vdo activate --name=my_vdo

To activate all volumes:

# vdo activate --all

You can also create a VDO volume that does not start automatically by adding the
--activate=disabled option to the vdo create command.

For systems that place LVM volumes on top of VDO volumes as well as beneath them (for example,
Figure 29.5, “Deduplicated Unified Storage”), it is vital to start services in the right order:

1. The lower layer of LVM must be started first (in most systems, starting this layer is configured
automatically when the LVM2 package is installed).

2. The vdo systemd unit must then be started.

3. Finally, additional scripts must be run in order to start LVM volumes or other services on top of
the now running VDO volumes.

29.4.7. Disabling and Re-enabling Deduplication


In some instances, it may be desirable to temporarily disable deduplication of data being written to a
VDO volume while still retaining the ability to read from and write to the volume. While disabling
deduplication will prevent subsequent writes from being deduplicated, data which was already
deduplicated will remain so.

To stop deduplication on a VDO volume, use the following command:


# vdo disableDeduplication --name=my_vdo

This stops the associated UDS index and informs the VDO volume that deduplication is no
longer active.

To restart deduplication on a VDO volume, use the following command:

# vdo enableDeduplication --name=my_vdo

This restarts the associated UDS index and informs the VDO volume that deduplication is active
again.

You can also disable deduplication when creating a new VDO volume by adding the
--deduplication=disabled option to the vdo create command.

29.4.8. Using Compression

29.4.8.1. Introduction

In addition to block-level deduplication, VDO also provides inline block-level compression using the
HIOPS Compression™ technology. While deduplication is the optimal solution for virtual machine
environments and backup applications, compression works very well with structured and unstructured file
formats that do not typically exhibit block-level redundancy, such as log files and databases.

Compression operates on blocks that have not been identified as duplicates. When unique data is seen
for the first time, it is compressed. Subsequent copies of data that have already been stored are
deduplicated without requiring an additional compression step. The compression feature is based on a
parallelized packaging algorithm that enables it to handle many compression operations at once. After
first storing the block and responding to the requestor, a best-fit packing algorithm finds multiple blocks
that, when compressed, can fit into a single physical block. After it is determined that a particular
physical block is unlikely to hold additional compressed blocks, it is written to storage and the
uncompressed blocks are freed and reused. By performing the compression and packaging operations
after having already responded to the requestor, using compression imposes a minimal latency penalty.

29.4.8.2. Enabling and Disabling Compression

VDO volume compression is on by default.

When creating a volume, you can disable compression by adding the --compression=disabled
option to the vdo create command.

Compression can be stopped on an existing VDO volume if necessary to maximize performance or to
speed processing of data that is unlikely to compress.

To stop compression on a VDO volume, use the following command:

# vdo disableCompression --name=my_vdo

To start it again, use the following command:

# vdo enableCompression --name=my_vdo


29.4.9. Managing Free Space


Because VDO is a thinly provisioned block storage target, the amount of physical space VDO uses may
differ from the size of the volume presented to users of the storage. Integrators and systems
administrators can exploit this disparity to save on storage costs but must take care to avoid
unexpectedly running out of storage space if the data written does not achieve the expected rate of
deduplication.

Whenever the number of logical blocks (virtual storage) exceeds the number of physical blocks (actual
storage), it becomes possible for file systems and applications to unexpectedly run out of space. For that
reason, storage systems using VDO must provide storage administrators with a way of monitoring the
size of the VDO's free pool. The size of this free pool may be determined by using the vdostats utility;
see Section 29.7.2, “vdostats” for details. The default output of this utility lists information for all running
VDO volumes in a format similar to the Linux df utility. For example:

Device               1K-blocks   Used        Available   Use%
/dev/mapper/my_vdo   211812352   105906176   105906176   50%

If the size of VDO's free pool drops below a certain level, the storage administrator can take action by
deleting data (which will reclaim space whenever the deleted data is not duplicated), adding physical
storage, or even deleting LUNs.
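
A minimal monitoring sketch along these lines is shown below; it parses the default vdostats output
and warns when any volume crosses a threshold. The 80% value is only an example, and the sketch
assumes it runs with sufficient privileges to see complete output.

threshold=80
vdostats | tail -n +2 | while read device blocks used available use_pct rest; do
    # use_pct looks like "22%"; strip the percent sign before comparing
    if [ "${use_pct%\%}" -ge "$threshold" ]; then
        echo "WARNING: $device is ${use_pct} full" >&2
    fi
done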

Reclaiming Space on File Systems

VDO cannot reclaim space unless file systems communicate that blocks are free using DISCARD, TRIM,
or UNMAP commands. For file systems that do not use DISCARD, TRIM, or UNMAP, free space may be
manually reclaimed by storing a file consisting of binary zeros and then deleting that file.

File systems may generally be configured to issue DISCARD requests in one of two ways:

Realtime discard (also online discard or inline discard)


When realtime discard is enabled, file systems send REQ_DISCARD requests to the block layer
whenever a user deletes a file and frees space. VDO receives these requests and returns space to its
free pool, assuming the block was not shared.

For file systems that support online discard, you can enable it by setting the discard option at mount
time.
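
For example, with a hypothetical mount point:

# mount -o discard /dev/mapper/my_vdo /mnt/my_vdo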

Batch discard
Batch discard is a user-initiated operation that causes the file system to notify the block layer (VDO)
of any unused blocks. This is accomplished by sending the file system an ioctl request called
FITRIM.

You can use the fstrim utility (for example from cron) to send this ioctl to the file system.
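
For example, to trim an already mounted file system at a hypothetical mount point:

# fstrim /mnt/my_vdo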

For more information on the discard feature, see Section 2.4, “Discard Unused Blocks”.

Reclaiming Space Without a File System

It is also possible to manage free space when the storage is being used as a block storage target without
a file system. For example, a single VDO volume can be carved up into multiple subvolumes by
installing the Logical Volume Manager (LVM) on top of it. Before deprovisioning a volume, the
blkdiscard command can be used in order to free the space previously used by that logical volume.


LVM supports the REQ_DISCARD command and will forward the requests to VDO at the appropriate
logical block addresses in order to free the space. If other volume managers are being used, they would
also need to support REQ_DISCARD, or equivalently, UNMAP for SCSI devices or TRIM for ATA devices.
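
For example, with hypothetical volume group and logical volume names, the space can be returned to
VDO before the logical volume is deprovisioned:

# blkdiscard /dev/vdo_vg/lv1
# lvremove /dev/vdo_vg/lv1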

Reclaiming Space on Fibre Channel or Ethernet Network

VDO volumes (or portions of volumes) can also be provisioned to hosts on a Fibre Channel storage
fabric or an Ethernet network using SCSI target frameworks such as LIO or SCST. SCSI initiators can
use the UNMAP command to free space on thinly provisioned storage targets, but the SCSI target
framework will need to be configured to advertise support for this command. This is typically done by
enabling thin provisioning on these volumes. Support for UNMAP can be verified on Linux-based SCSI
initiators by running the following command:

# sg_vpd --page=0xb0 /dev/device

In the output, verify that the "Maximum unmap LBA count" value is greater than zero.

29.4.10. Increasing Logical Volume Size


Management applications can increase the logical size of a VDO volume using the vdo growLogical
subcommand. Once the volume has been grown, the management application should inform any devices or file
systems on top of the VDO volume of its new size. The volume may be grown as follows:

# vdo growLogical --name=my_vdo --vdoLogicalSize=new_logical_size

The use of this command allows storage administrators to initially create VDO volumes which have a
logical size small enough to be safe from running out of space. After some period of time, the actual rate
of data reduction can be evaluated, and if sufficient, the logical size of the VDO volume can be grown to
take advantage of the space savings.
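
For example, growing the logical size to a hypothetical 10 TB and then letting an ext4 file system on
top of the volume use the new space (use the tool appropriate to your file system, such as xfs_growfs
for XFS):

# vdo growLogical --name=my_vdo --vdoLogicalSize=10T
# resize2fs /dev/mapper/my_vdo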

29.4.11. Increasing Physical Volume Size


To increase the amount of physical storage available to a VDO volume:

1. Increase the size of the underlying device.

The exact procedure depends on the type of the device. For example, to resize an MBR
partition, use the fdisk utility as described in Section 13.5, “Resizing a Partition with fdisk”.

2. Use the growPhysical option to add the new physical storage space to the VDO volume:

# vdo growPhysical --name=my_vdo

It is not possible to shrink a VDO volume with this command.

29.4.12. Automating VDO with Ansible


You can use the Ansible tool to automate VDO deployment and administration. For details, see:

Ansible documentation: https://docs.ansible.com/

VDO Ansible module documentation:


https://docs.ansible.com/ansible/latest/modules/vdo_module.html


29.5. DEPLOYMENT SCENARIOS


VDO can be deployed in a variety of ways to provide deduplicated storage for both block and file access
and for both local and remote storage. Because VDO exposes its deduplicated storage as a standard
Linux block device, it can be used with standard file systems, iSCSI and FC target drivers, or as unified
storage.

29.5.1. iSCSI Target


As a simple example, the entirety of the VDO storage target can be exported as an iSCSI Target to
remote iSCSI initiators.

Figure 29.3. Deduplicated Block Storage Target

See http://linux-iscsi.org/ for more information on iSCSI Target.

29.5.2. File Systems


If file access is desired instead, file systems can be created on top of VDO and exposed to NFS or CIFS
users via either the Linux NFS server or Samba.

Figure 29.4. Deduplicated NAS

29.5.3. LVM
More feature-rich systems may make further use of LVM to provide multiple LUNs that are all backed by
the same deduplicated storage pool. In Figure 29.5, “Deduplicated Unified Storage”, the VDO target is
registered as a physical volume so that it can be managed by LVM. Multiple logical volumes (LV1 to
LV4) are created out of the deduplicated storage pool. In this way, VDO can support multiprotocol unified
block/file access to the underlying deduplicated storage pool.


Figure 29.5. Deduplicated Unified Storage

Deduplicated unified storage design allows for multiple file systems to collectively use the same
deduplication domain through the LVM tools. Also, file systems can take advantage of LVM snapshot,
copy-on-write, and shrink or grow features, all on top of VDO.

29.5.4. Encryption
Data security is critical today. More and more companies have internal policies regarding data
encryption. Linux Device Mapper mechanisms such as DM-Crypt are compatible with VDO. Encrypting
VDO volumes will help ensure data security, and any file systems above VDO still gain the deduplication
feature for disk optimization. Note that applying encryption above VDO results in little if any data
deduplication; encryption renders duplicate blocks different before VDO can deduplicate them.

Figure 29.6. Using VDO with Encryption

29.6. TUNING VDO

29.6.1. Introduction to VDO Tuning


As with tuning databases or other complex software, tuning VDO involves making trade-offs between
numerous system constraints, and some experimentation is required. The primary controls available for
tuning VDO are the number of threads assigned to different types of work, the CPU affinity settings for
those threads, and cache settings.

29.6.2. Background on VDO Architecture


The VDO kernel driver is multi-threaded to improve performance by amortizing processing costs across
multiple concurrent I/O requests. Rather than have one thread process an I/O request from start to finish,
it delegates different stages of work to one or more threads or groups of threads, with messages passed
between them as the I/O request makes its way through the pipeline. This way, one thread can serialize
all access to a global data structure without having to lock and unlock it each time an I/O operation is
processed. If the VDO driver is well-tuned, each time a thread completes a requested processing stage
there will usually be another request queued up for that same processing. Keeping these threads busy
reduces the overhead of context switching and scheduling, improving performance. Separate threads
are also used for parts of the operating system that can block, such as enqueueing I/O operations to the
underlying storage system or messages to UDS.

The various worker thread types used by VDO are:

Logical zone threads


The logical threads, with process names including the string kvdo:logQ, maintain the mapping
between the logical block numbers (LBNs) presented to the user of the VDO device and the physical
block numbers (PBNs) in the underlying storage system. They also implement locking such that two
I/O operations attempting to write to the same block will not be processed concurrently. Logical zone
threads are active during both read and write operations.

LBNs are divided into chunks (a block map page contains a bit over 3 MB of LBNs) and these chunks
are grouped into zones that are divided up among the threads.

Processing should be distributed fairly evenly across the threads, though some unlucky access
patterns may occasionally concentrate work in one thread or another. For example, frequent access
to LBNs within a given block map page will cause one of the logical threads to process all of those
operations.

The number of logical zone threads can be controlled using the --vdoLogicalThreads=thread
count option of the vdo command.

Physical zone threads


Physical, or kvdo:physQ, threads manage data block allocation and maintain reference counts.
They are active during write operations.

Like LBNs, PBNs are divided into chunks called slabs, which are further divided into zones and
assigned to worker threads that distribute the processing load.

The number of physical zone threads can be controlled using the
--vdoPhysicalThreads=thread count option of the vdo command.

I/O submission threads


kvdo:bioQ threads submit block I/O (bio) operations from VDO to the storage system. They take I/O
requests enqueued by other VDO threads and pass them to the underlying device driver. These
threads may communicate with and update data structures associated with the device, or set up
requests for the device driver's kernel threads to process. Submitting I/O requests can block if the
underlying device's request queue is full, so this work is done by dedicated threads to avoid
processing delays.


If these threads are frequently shown in D state by ps or top utilities, then VDO is frequently keeping
the storage system busy with I/O requests. This is generally good if the storage system can service
multiple requests in parallel, as some SSDs can, or if the request processing is pipelined. If thread
CPU utilization is very low during these periods, it may be possible to reduce the number of I/O
submission threads.

CPU usage and memory contention are dependent on the device driver(s) beneath VDO. If CPU
utilization per I/O request increases as more threads are added then check for CPU, memory, or lock
contention in those device drivers.

The number of I/O submission threads can be controlled using the --vdoBioThreads=thread
count option of the vdo command.

CPU-processing threads
kvdo:cpuQ threads exist to perform any CPU-intensive work such as computing hash values or
compressing data blocks that do not block or require exclusive access to data structures associated
with other thread types.

The number of CPU-processing threads can be controlled using the --vdoCpuThreads=thread
count option of the vdo command.

I/O acknowledgement threads


The kvdo:ackQ threads issue the callbacks to whatever sits atop VDO (for example, the kernel page
cache, or application program threads doing direct I/O) to report completion of an I/O request. CPU
time requirements and memory contention will be dependent on this other kernel-level code.

The number of acknowledgement threads can be controlled using the --vdoAckThreads=thread
count option of the vdo command.

Non-scalable VDO kernel threads:

Deduplication thread
The kvdo:dedupeQ thread takes queued I/O requests and contacts UDS. Since the socket buffer
can fill up if the server cannot process requests quickly enough or if kernel memory is constrained by
other system activity, this work is done by a separate thread so if a thread should block, other VDO
processing can continue. There is also a timeout mechanism in place to skip an I/O request after a
long delay (several seconds).

Journal thread
The kvdo:journalQ thread updates the recovery journal and schedules journal blocks for writing. A
VDO device uses only one journal, so this work cannot be split across threads.

Packer thread
The kvdo:packerQ thread, active in the write path when compression is enabled, collects data
blocks compressed by the kvdo:cpuQ threads to minimize wasted space. There is one packer data
structure, and thus one packer thread, per VDO device.

29.6.3. Values to tune

29.6.3.1. CPU/memory


29.6.3.1.1. Logical, physical, cpu, ack thread counts

The logical, physical, cpu, and I/O acknowledgement work can be spread across multiple threads, the
number of which can be specified during initial configuration or later if the VDO device is restarted.
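
For example, an illustrative reconfiguration of an existing volume might look as follows; the thread
counts are placeholders, and the volume must be restarted for the new values to take effect:

# vdo modify --name=my_vdo --vdoCpuThreads=4 --vdoAckThreads=2 \
    --vdoLogicalThreads=2 --vdoPhysicalThreads=2 --vdoHashZoneThreads=2
# vdo stop --name=my_vdo
# vdo start --name=my_vdo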

One core, or one thread, can do a finite amount of work during a given time. Having one thread compute
all data-block hash values, for example, would impose a hard limit on the number of data blocks that
could be processed per second. Dividing the work across multiple threads (and cores) relieves that
bottleneck.

As a thread or core approaches 100% usage, more work items will tend to queue up for processing.
While this may result in CPU having fewer idle cycles, queueing delays and latency for individual I/O
requests will typically increase. According to some queueing theory models, utilization levels above 70%
or 80% can lead to excessive delays that can be several times longer than the normal processing time.
Thus it may be helpful to distribute work further for a thread or core with 50% or higher utilization, even if
those threads or cores are not always busy.

In the opposite case, where a thread or CPU is very lightly loaded (and thus very often asleep), supplying
work for it to do is more likely to incur some additional cost. (A thread attempting to wake another thread
must acquire a global lock on the scheduler's data structures, and may potentially send an inter-
processor interrupt to transfer work to another core). As more cores are configured to run VDO threads, it
becomes less likely that a given piece of data will be cached as work is moved between threads or as
threads are moved between cores — so too much work distribution can also degrade performance.

The work performed by the logical, physical, and CPU threads per I/O request will vary based on the
type of workload, so systems should be tested with the different types of workloads they are expected to
service.

Write operations in sync mode involving successful deduplication will entail extra I/O operations (reading
the previously stored data block), some CPU cycles (comparing the new data block to confirm that they
match), and journal updates (remapping the LBN to the previously-stored data block's PBN) compared to
writes of new data. When duplication is detected in async mode, data write operations are avoided at the
cost of the read and compare operations described above; only one journal update can happen per write,
whether or not duplication is detected.

If compression is enabled, reads and writes of compressible data will require more processing by the
CPU threads.

Blocks containing all zero bytes (a zero block) are treated specially, as they commonly occur. A special
entry is used to represent such data in the block map, and the zero block is not written to or read from the
storage device. Thus, tests that write or read all-zero blocks may produce misleading results. The same
is true, to a lesser degree, of tests that write over zero blocks or uninitialized blocks (those that were
never written since the VDO device was created) because reference count updates done by the physical
threads are not required for zero or uninitialized blocks.

Acknowledging I/O operations is the only task that is not significantly affected by the type of work being
done or the data being operated upon, as one callback is issued per I/O operation.

29.6.3.1.2. CPU Affinity and NUMA

Accessing memory across NUMA node boundaries takes longer than accessing memory on the local
node. With Intel processors sharing the last-level cache between cores on a node, cache contention
between nodes is a much greater problem than cache contention within a node.

Tools such as top cannot distinguish between CPU cycles that do work and cycles that are stalled.
These tools interpret cache contention and slow memory accesses as actual work. As a result, moving a
thread between nodes may appear to reduce the thread's apparent CPU utilization while increasing the
number of operations it performs per second.

While many of VDO's kernel threads maintain data structures that are accessed by only one thread, they
do frequently exchange messages about the I/O requests themselves. Contention may be high if VDO
threads are run on multiple nodes, or if threads are reassigned from one node to another by the
scheduler. If it is possible to run other VDO-related work (such as I/O submissions to VDO, or interrupt
processing for the storage device) on the same node as the VDO threads, contention may be further
reduced. If one node does not have sufficient cycles to run all VDO-related work, memory contention
should be considered when selecting threads to move onto other nodes.

If practical, collect VDO threads on one node using the taskset utility. If other VDO-related work can
also be run on the same node, that may further reduce contention. In that case, if one node lacks the
CPU power to keep up with processing demands then memory contention must be considered when
choosing threads to move onto other nodes. For example, if a storage device's driver has a significant
number of data structures to maintain, it may help to move both the device's interrupt handling and
VDO's I/O submissions (the bio threads that call the device's driver code) to another node. Keeping I/O
acknowledgment (ack threads) and higher-level I/O submission threads (user-mode threads doing direct
I/O, or the kernel's page cache flush thread) paired is also good practice.
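
A minimal sketch of collecting the VDO worker threads onto one node is shown below; it assumes the
CPUs of the chosen NUMA node are 0-7 and simply matches thread names containing kvdo. Adjust the CPU
list and the match for your system.

for tid in $(ps -eLo tid,comm | awk '$2 ~ /kvdo/ {print $1}'); do
    taskset -cp 0-7 "$tid"
done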

29.6.3.1.3. Frequency throttling

If power consumption is not an issue, writing the string performance to the
/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor files, if they exist, might produce
better results. If these sysfs nodes do not exist, Linux or the system's BIOS may provide other options
for configuring CPU frequency management.
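
For example, one way to apply the performance governor on every CPU where the sysfs node exists:

for f in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    [ -e "$f" ] && echo performance > "$f"
done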

Performance measurements are further complicated by CPUs that dynamically vary their frequencies
based on workload, because the time needed to accomplish a specific piece of work may vary due to
other work the CPU has been doing, even without task switching or cache contention.

29.6.3.2. Caching

29.6.3.2.1. Block Map Cache

VDO caches a number of block map pages for efficiency. The cache size defaults to 128 MB, but it can
be increased with the --blockMapCacheSize=megabytes option of the vdo command. Using a
larger cache may produce significant benefits for random-access workloads.
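
For example, raising the cache to a hypothetical 512 MB on an existing volume (the change takes effect
the next time the volume is started):

# vdo modify --name=my_vdo --blockMapCacheSize=512M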

29.6.3.2.2. Read Cache

A second cache may be used for caching data blocks read from the storage system to verify VDO's
deduplication advice. If similar data blocks are seen within a short time span, the number of I/O
operations needed may be reduced.

The read cache also holds storage blocks containing compressed user data. If multiple compressible
blocks were written within a short period of time, their compressed versions may be located within the
same storage system block. Likewise, if they are read within a short time, caching may avoid the need
for additional reads from the storage system.

The vdo command's --readCache={enabled | disabled} option controls whether a read cache is
used. If enabled, the cache has a minimum size of 8 MB, but it can be increased with the
--readCacheSize=megabytes option. Managing the read cache incurs a slight overhead, so it may not
increase performance if the storage system is fast enough. The read cache is disabled by default.


29.6.3.3. Storage System I/O

29.6.3.3.1. Bio Threads

For generic hard drives in a RAID configuration, one or two bio threads may be sufficient for submitting
I/O operations. If the storage device driver requires its I/O submission threads to do significantly more
work (updating driver data structures or communicating with the device) such that one or two threads are
very busy and storage devices are often idle, the bio thread count can be increased to compensate.
However, depending on the driver implementation, raising the thread count too high may lead to cache or
spin lock contention. If device access timing is not uniform across all NUMA nodes, it may be helpful to
run bio threads on the node "closest" to the storage device controllers.

29.6.3.3.2. IRQ Handling

If a device driver does significant work in its interrupt handler and does not use a threaded IRQ handler,
it may prevent the scheduler from providing the best performance. CPU time spent servicing hardware
interrupts may look like normal VDO (or other) kernel thread execution in some ways. For example, if
hardware IRQ handling required 30% of a core's cycles, a busy kernel thread on the same core could
only use the remaining 70%. However, if the work queued up for that thread demanded 80% of the core's
cycles, the thread would never catch up, and the scheduler might simply leave that thread to run
impeded on that core instead of switching that thread to a less busy core.

Using such a device driver under a heavy VDO workload may require a large number of cycles to service
hardware interrupts (the %hi indicator in the header of the top display). In that case it may help to
assign IRQ handling to certain cores and adjust the CPU affinity of VDO kernel threads not to run on
those cores.

29.6.3.4. Maximum Discard Sectors

The maximum allowed size of DISCARD (TRIM) operations to a VDO device can be tuned via
/sys/kvdo/max_discard_sectors, based on system usage. The default is 8 sectors (that is, one 4
KB block). Larger sizes may be specified, though VDO will still process them in a loop, one block at a
time, ensuring that metadata updates for one discarded block are written to the journal and flushed to
disk before starting on the next block.
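
For example, to allow discards of up to 2048 sectors (1 MB) per request; the value is illustrative
only:

# echo 2048 > /sys/kvdo/max_discard_sectors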

When using a VDO volume as a local file system, Red Hat testing found that a small discard size works
best, as the generic block-device code in the Linux kernel will break large discard requests into multiple
smaller ones and submit them in parallel. If there is low I/O activity on the device, VDO can process
many smaller requests concurrently and much more quickly than one large request.

If the VDO device is to be used as a SCSI target, the initiator and target software introduce additional
factors to consider. If the target SCSI software is SCST, it reads the maximum discard size and relays it
to the initiator. (Red Hat has not attempted to tune VDO configurations in conjunction with LIO SCSI
target code.)

Because the Linux SCSI initiator code allows only one discard operation at a time, discard requests that
exceed the maximum size would be broken into multiple smaller discards and sent, one at a time, to the
target system (and to VDO). So, in addition to VDO processing a number of small discard operations in
serial, the round-trip communication time between the two systems adds additional latency.

Setting a larger maximum discard size can reduce this communication overhead, though that larger
request is passed in its entirety to VDO and processed one 4 KB block at a time. While there is no per-
block communication delay, additional processing time for the larger block may cause the SCSI initiator
software to time out.

For SCSI target usage, Red Hat recommends configuring the maximum discard size to be moderately
large while still keeping the typical discard time well within the initiator's timeout setting. An extra
round-trip cost every few seconds, for example, should not significantly affect performance, and SCSI
initiators with timeouts of 30 or 60 seconds should not time out.

29.6.4. Identifying Bottlenecks


There are several key factors that affect VDO performance, and many tools available to identify those
having the most impact.

Thread or CPU utilization above 70%, as seen in utilities such as top or ps, generally implies that too
much work is being concentrated in one thread or on one CPU. However, in some cases it could mean
that a VDO thread was scheduled to run on the CPU but no work actually happened; this scenario could
occur with excessive hardware interrupt handler processing, memory contention between cores or
NUMA nodes, or contention for a spin lock.

When using the top utility to examine system performance, Red Hat suggests running top -H to show
all process threads separately and then entering the 1 f j keys, followed by the Enter/Return key; the
top command then displays the load on individual CPU cores and identifies the CPU on which each
process or thread last ran. This information can provide the following insights:

If a core has low %id (idle) and %wa (waiting-for-I/O) values, it is being kept busy with work of
some kind.

If the %hi value for a core is very low, that core is doing normal processing work, which is being
load-balanced by the kernel scheduler. Adding more cores to that set may reduce the load as
long as it does not introduce NUMA contention.

If the %hi for a core is more than a few percent and only one thread is assigned to that core, and
%id and %wa are zero, the core is over-committed and the scheduler is not addressing the
situation. In this case the kernel thread or the device interrupt handling should be reassigned to
keep them on separate cores.

The perf utility can examine the performance counters of many CPUs. Red Hat suggests using the
perf top subcommand as a starting point to examine the work a thread or processor is doing. If, for
example, the bioQ threads are spending many cycles trying to acquire spin locks, there may be too
much contention in the device driver below VDO, and reducing the number of bioQ threads might
alleviate the situation. High CPU use (in acquiring spin locks or elsewhere) could also indicate contention
between NUMA nodes if, for example, the bioQ threads and the device interrupt handler are running on
different nodes. If the processor supports them, counters such as stalled-cycles-backend,
cache-misses, and node-load-misses may be of interest.

The sar utility can provide periodic reports on multiple system statistics. The sar -d 1 command
reports block device utilization levels (percentage of the time they have at least one I/O operation in
progress) and queue lengths (number of I/O requests waiting) once per second. However, not all block
device drivers can report such information, so the sar usefulness might depend on the device drivers in
use.

29.7. VDO COMMANDS


This section describes the following VDO utilities:

vdo
The vdo utility manages both the kvdo and UDS components of VDO.

It is also used to enable or disable compression.

vdostats
The vdostats utility displays statistics for each configured (or specified) device in a format similar to
the Linux df utility.

29.7.1. vdo
The vdo utility manages both the kvdo and UDS components of VDO.

Synopsis

vdo { activate | changeWritePolicy | create | deactivate |
      disableCompression | disableDeduplication | enableCompression |
      enableDeduplication | growLogical | growPhysical | list | modify |
      printConfigFile | remove | start | status | stop }
    [ options... ]

Sub-Commands

Table 29.4. VDO Sub-Commands

Sub-Command Description


create Creates a VDO volume and its associated index and makes it available. If
--activate=disabled is specified the VDO volume is created but not made available. Will not
overwrite an existing file system or formatted VDO volume unless --force is given. This command
must be run with root privileges. Applicable options include:

--name=volume (required)

--device=device (required)

--activate={enabled | disabled}

--indexMem=gigabytes

--blockMapCacheSize=megabytes

--blockMapPeriod=period

--compression={enabled | disabled}

--confFile=file

--deduplication={enabled | disabled}

--emulate512={enabled | disabled}

--sparseIndex={enabled | disabled}

--vdoAckThreads=thread count

--vdoBioRotationInterval=I/O count

--vdoBioThreads=thread count

--vdoCpuThreads=thread count

--vdoHashZoneThreads=thread count

--vdoLogicalThreads=thread count

--vdoLogLevel=level

--vdoLogicalSize=megabytes

--vdoPhysicalThreads=thread count

--readCache={enabled | disabled}

--readCacheSize=megabytes

--vdoSlabSize=megabytes

--verbose

--writePolicy={ auto | sync | async }

--logfile=pathname


remove Removes one or more stopped VDO volumes and associated indexes.
This command must be run with root privileges. Applicable options
include:

{ --name=volume | --all } (required)

--confFile=file

--force

--verbose

--logfile=pathname

start Starts one or more stopped, activated VDO volumes and associated
services. This command must be run with root privileges. Applicable
options include:

{ --name=volume | --all } (required)

--confFile=file

--forceRebuild

--verbose

--logfile=pathname

stop Stops one or more running VDO volumes and associated services. This
command must be run with root privileges. Applicable options include:

{ --name=volume | --all } (required)

--confFile=file

--force

--verbose

--logfile=pathname

activate Activates one or more VDO volumes. Activated volumes can be started using the start
command. This command must be run with root privileges. Applicable options include:

{ --name=volume | --all } (required)

--confFile=file

--logfile=pathname

--verbose


deactivate Deactivates one or more VDO volumes. Deactivated volumes cannot be started by the start
command. Deactivating a currently running volume does not stop it. Once stopped, a deactivated VDO
volume must be activated before it can be started again. This command must be run with root
privileges. Applicable options include:

{ --name=volume | --all } (required)

--confFile=file

--verbose

--logfile=pathname

status Reports VDO system and volume status in YAML format. This command does not require root
privileges, though the information will be incomplete if run without them. Applicable options include:

{ --name=volume | --all } (required)

--confFile=file

--verbose

--logfile=pathname

See Table 29.6, “VDO Status Output” for the output provided.

list Displays a list of started VDO volumes. If --all is specified it displays both started and
non-started volumes. This command must be run with root privileges. Applicable options include:

--all

--confFile=file

--logfile=pathname

--verbose


modify Modifies configuration parameters of one or all VDO volumes. Changes take effect the next
time the VDO device is started; already-running devices are not affected. Applicable options include:

{ --name=volume | --all } (required)

--blockMapCacheSize=megabytes

--blockMapPeriod=period

--confFile=file

--vdoAckThreads=thread count

--vdoBioThreads=thread count

--vdoCpuThreads=thread count

--vdoHashZoneThreads=thread count

--vdoLogicalThreads=thread count

--vdoPhysicalThreads=thread count

--readCache={enabled | disabled}

--readCacheSize=megabytes

--verbose

--logfile=pathname

changeWritePolicy Modifies the write policy of one or all running VDO volumes. This
command must be run with root privileges.

{ --name=volume | --all } (required)

--writePolicy={ auto | sync | async } (required)

--confFile=file

--logfile=pathname

--verbose


enableDeduplication Enables deduplication on one or more VDO volumes. This command must be run
with root privileges. Applicable options include:

{ --name=volume | --all } (required)

--confFile=file

--verbose

--logfile=pathname

disableDeduplication Disables deduplication on one or more VDO volumes. This command must be run
with root privileges. Applicable options include:

{ --name=volume | --all } (required)

--confFile=file

--verbose

--logfile=pathname

enableCompression Enables compression on one or more VDO volumes. If the VDO volume
is running, takes effect immediately. If the VDO volume is not running
compression will be enabled the next time the VDO volume is started.
This command must be run with root privileges. Applicable options
include:

{ --name=volume | --all } (required)

--confFile=file

--verbose

--logfile=pathname

disableCompression Disables compression on one or more VDO volumes. If the VDO volume
is running, takes effect immediately. If the VDO volume is not running
compression will be disabled the next time the VDO volume is started.
This command must be run with root privileges. Applicable options
include:

{ --name=volume | --all } (required)

--confFile=file

--verbose

--logfile=pathname


growLogical Adds logical space to a VDO volume. The volume must exist and must be
running. This command must be run with root privileges. Applicable
options include:

--name=volume (required)

--vdoLogicalSize=megabytes (required)

--confFile=file

--verbose

--logfile=pathname

growPhysical Adds physical space to a VDO volume. The volume must exist and must
be running. This command must be run with root privileges. Applicable
options include:

--name=volume (required)

--confFile=file

--verbose

--logfile=pathname

printConfigFile Prints the configuration file to stdout. This command requires root privileges.
Applicable options include:

--confFile=file

--logfile=pathname

--verbose

Options

Table 29.5. VDO Options

Option Description

--indexMem=gigabytes Specifies the amount of UDS server memory in gigabytes; the default
size is 1 GB. The special decimal values 0.25, 0.5, 0.75 can be used, as
can any positive integer.

--sparseIndex={enabled | disabled}    Enables or disables sparse indexing. The default is disabled.

--all Indicates that the command should be applied to all configured VDO
volumes. May not be used with --name .


--blockMapCacheSize=megabytes    Specifies the amount of memory allocated for caching block map
pages; the value must be a multiple of 4096. Using a value with a B(ytes), K(ilobytes), M(egabytes),
G(igabytes), T(erabytes), P(etabytes) or E(xabytes) suffix is optional. If no suffix is supplied, the
value will be interpreted as megabytes. The default is 128M; the value must be at least 128M and less
than 16T. Note that there is a memory overhead of 15%.

--blockMapPeriod=period    A value between 1 and 16380 which determines the number of block map
updates which may accumulate before cached pages are flushed to disk. Higher values decrease recovery
time after a crash at the expense of decreased performance during normal operation. The default value
is 16380. Speak with your Red Hat representative before tuning this parameter.

--compression={enabled | disabled}    Enables or disables compression within the VDO device. The
default is enabled. Compression may be disabled if necessary to maximize performance or to speed
processing of data that is unlikely to compress.

--confFile=file    Specifies an alternate configuration file. The default is /etc/vdoconf.yml.

--deduplication={enabled | disabled}    Enables or disables deduplication within the VDO device. The
default is enabled. Deduplication may be disabled in instances where data is not expected to have good
deduplication rates but compression is still desired.

--emulate512={enabled | disabled}    Enables 512-byte block device emulation mode. The default is
disabled.

--force Unmounts mounted file systems before stopping a VDO volume.

--forceRebuild Forces an offline rebuild before starting a read-only VDO volume so that
it may be brought back online and made available. This option may
result in data loss or corruption.

--help Displays documentation for the vdo utility.

--logfile=pathname Specify the file to which this script's log messages are directed. Warning
and error messages are always logged to syslog as well.

--name=volume Operates on the specified VDO volume. May not be used with --all.

--device=device Specifies the absolute path of the device to use for VDO storage.

--activate={enabled | disabled}    The argument disabled indicates that the VDO volume should only
be created. The volume will not be started or enabled. The default is enabled.


--vdoAckThreads=thread count    Specifies the number of threads to use for acknowledging completion
of requested VDO I/O operations. The default is 1; the value must be at least 0 and less than or
equal to 100.

--vdoBioRotationInterval=I/O count    Specifies the number of I/O operations to enqueue for each
bio-submission thread before directing work to the next. The default is 64; the value must be at
least 1 and less than or equal to 1024.

--vdoBioThreads=thread count    Specifies the number of threads to use for submitting I/O operations
to the storage device. The default is 4; the value must be at least 1 and less than or equal to 100.

--vdoCpuThreads=thread count    Specifies the number of threads to use for CPU-intensive work such
as hashing or compression. The default is 2; the value must be at least 1 and less than or equal to
100.

--vdoHashZoneThreads=thread count    Specifies the number of threads across which to subdivide parts
of the VDO processing based on the hash value computed from the block data. The default is 1; the
value must be at least 0 and less than or equal to 100. vdoHashZoneThreads, vdoLogicalThreads, and
vdoPhysicalThreads must be either all zero or all non-zero.

--vdoLogicalThreads=thread count    Specifies the number of threads across which to subdivide parts
of the VDO processing based on the hash value computed from the block data. The value must be at
least 0 and less than or equal to 100. A logical thread count of 9 or more will require explicitly
specifying a sufficiently large block map cache size, as well. vdoHashZoneThreads,
vdoLogicalThreads, and vdoPhysicalThreads must be either all zero or all non-zero. The default is 1.

--vdoLogLevel=level    Specifies the VDO driver log level: critical, error, warning, notice, info,
or debug. Levels are case sensitive; the default is info.

--vdoLogicalSize=megabytes    Specifies the logical VDO volume size in megabytes. Using a value with
a S(ectors), B(ytes), K(ilobytes), M(egabytes), G(igabytes), T(erabytes), P(etabytes) or E(xabytes)
suffix is optional. Used for over-provisioning volumes. This defaults to the size of the storage
device.

--vdoPhysicalThreads=thread count    Specifies the number of threads across which to subdivide parts
of the VDO processing based on physical block addresses. The value must be at least 0 and less than
or equal to 16. Each additional thread after the first will use an additional 10 MB of RAM.
vdoPhysicalThreads, vdoHashZoneThreads, and vdoLogicalThreads must be either all zero or all
non-zero. The default is 1.


--readCache={enabled | disabled}    Enables or disables the read cache within the VDO device. The
default is disabled. The cache should be enabled if write workloads are expected to have high
levels of deduplication, or for read intensive workloads of highly compressible data.

--readCacheSize=megabytes    Specifies the extra VDO device read cache size in megabytes. This space
is in addition to a system-defined minimum. Using a value with a B(ytes), K(ilobytes), M(egabytes),
G(igabytes), T(erabytes), P(etabytes) or E(xabytes) suffix is optional. The default is 0M. 1.12 MB
of memory will be used per MB of read cache specified, per bio thread.

--vdoSlabSize=megabytes    Specifies the size of the increment by which a VDO is grown. Using a
smaller size constrains the total maximum physical size that can be accommodated. Must be a power of
two between 128M and 32G; the default is 2G. Using a value with a S(ectors), B(ytes), K(ilobytes),
M(egabytes), G(igabytes), T(erabytes), P(etabytes) or E(xabytes) suffix is optional. If no suffix is
used, the value will be interpreted as megabytes.

--verbose    Prints commands before executing them.

--writePolicy={ auto | sync | async }    Specifies the write policy:

    auto: Select sync or async based on the storage layer underneath VDO. If a write back cache is
    present, async will be chosen. Otherwise, sync will be chosen.

    sync: Writes are acknowledged only after data is stably written. This is the default policy.
    This policy is not supported if the underlying storage is not also synchronous.

    async: Writes are acknowledged after data has been cached for writing to stable storage. Data
    which has not been flushed is not guaranteed to persist in this mode.

The status subcommand returns the following information in YAML format, divided into keys as
follows:

Table 29.6. VDO Status Output

Key Description

VDO Status Information in this key covers the name of the host and date and time at which the status
inquiry is being made. Parameters reported in this area include:

Node The host name of the system on which VDO is running.

Date The date and time at which the vdo status command is run.


Kernel           Information in this key covers the configured kernel.

  Module Loaded          Whether or not the kernel module is loaded (True or False).

  Version Information    Information on the version of kvdo that is configured.

Configuration    Information in this key covers the location and status of the VDO configuration file.

  File             Location of the VDO configuration file.

  Last modified    The last-modified date of the VDO configuration file.

VDOs             Provides configuration information for all VDO volumes. Parameters reported for each
VDO volume include:

  Block size              The block size of the VDO volume, in bytes.

  512 byte emulation      Indicates whether the volume is running in 512-byte emulation mode.

  Enable deduplication    Whether deduplication is enabled for the volume.

  Logical size            The logical size of the VDO volume.

  Physical size           The size of a VDO volume's underlying physical storage.

  Write policy            The configured value of the write policy (sync or async).

  VDO Statistics          Output of the vdostats utility.

29.7.2. vdostats
The vdostats utility displays statistics for each configured (or specified) device in a format similar to the
Linux df utility.

The output of the vdostats utility may be incomplete if it is not run with root privileges.

Synopsis

vdostats [ --verbose | --human-readable | --si | --all ] [ --version ] [ device ... ]


Options

Table 29.7. vdostats Options

Option Description

--verbose Displays the utilization and block I/O (bios) statistics for one (or more) VDO devices.
See Table 29.9, “vdostats --verbose Output” for details.

--human-readable    Displays block values in readable form (Base 2: 1 KB = 2^10 bytes = 1024 bytes).

--si    The --si option modifies the output of the --human-readable option to use SI units
(Base 10: 1 KB = 10^3 bytes = 1000 bytes). If the --human-readable option is not supplied, the --si
option has no effect.

--all This option is only for backwards compatibility. It is now equivalent to the --verbose
option.

--version Displays the vdostats version.

device ... Specifies one or more specific volumes to report on. If this argument is omitted,
vdostats will report on all devices.

Output

The following example shows sample output if no options are provided, which is described in Table 29.8,
“Default vdostats Output”:

Device               1K-blocks    Used        Available    Use%   Space Saving%
/dev/mapper/my_vdo   1932562432   427698104   1504864328   22%    21%

Table 29.8. Default vdostats Output

Item Description

Device The path to the VDO volume.

1K-blocks The total number of 1K blocks allocated for a VDO volume (= physical volume size *
block size / 1024)

Used The total number of 1K blocks used on a VDO volume (= physical blocks used * block
size / 1024)

Available The total number of 1K blocks available on a VDO volume (= physical blocks free *
block size / 1024)


Use% The percentage of physical blocks used on a VDO volume (= used blocks / allocated
blocks * 100)

Space Saving% The percentage of physical blocks saved on a VDO volume (= [logical blocks used -
physical blocks used] / logical blocks used)

The --human-readable option converts block counts into conventional units (1 KB = 1024 bytes):

Device               Size    Used     Available   Use%   Space Saving%
/dev/mapper/my_vdo   1.8T    407.9G   1.4T        22%    21%

The --human-readable and --si options convert block counts into SI units (1 KB = 1000 bytes):

Device               Size   Used   Available   Use%   Space Saving%
/dev/mapper/my_vdo   2.0T   438G   1.5T        22%    21%

The --verbose (Table 29.9, “vdostats --verbose Output” ) option displays VDO device statistics in
YAML format for one (or all) VDO devices.

Statistics printed in bold in Table 29.9, “vdostats --verbose Output” will continue to be reported in future
releases. The remaining fields are primarily intended for software support and are subject to change in
future releases; management tools should not rely upon them. Management tools should also not rely
upon the order in which any of the statistics are reported.

Table 29.9. vdostats --verbose Output

Item Description

Version The version of these statistics.

Release version The release version of the VDO.

Data blocks used The number of physical blocks currently in use by a VDO volume to store data.

Overhead blocks used The number of physical blocks currently in use by a VDO volume to store VDO
metadata.

Logical blocks used The number of logical blocks currently mapped.

Physical blocks The total number of physical blocks allocated for a VDO volume.

Logical blocks The maximum number of logical blocks that can be mapped by a VDO volume.

1K-blocks The total number of 1K blocks allocated for a VDO volume (= physical volume size
* block size / 1024)


1K-blocks used The total number of 1K blocks used on a VDO volume (= physical blocks used *
block size / 1024)

1K-blocks available The total number of 1K blocks available on a VDO volume (= physical blocks free
* block size / 1024)

Used percent The percentage of physical blocks used on a VDO volume (= used blocks /
allocated blocks * 100)

Saving percent The percentage of physical blocks saved on a VDO volume (= [logical blocks used
- physical blocks used] / logical blocks used)

Block map cache size The size of the block map cache, in bytes.

Write policy The active write policy (sync or async). This is configured via vdo
changeWritePolicy --writePolicy=auto|sync|async.

Block size The block size of a VDO volume, in bytes.

Completed recovery count    The number of times a VDO volume has recovered from an unclean shutdown.

Read-only recovery count    The number of times a VDO volume has been recovered from read-only mode (via
vdo start --forceRebuild).

Operating mode Indicates whether a VDO volume is operating normally, is in recovery mode, or is
in read-only mode.

Recovery progress (%) Indicates online recovery progress, or N/A if the volume is not in recovery mode.

Compressed fragments written    The number of compressed fragments that have been written since the VDO
volume was last restarted.

Compressed blocks written    The number of physical blocks of compressed data that have been written since
the VDO volume was last restarted.

Compressed fragments in packer    The number of compressed fragments being processed that have not yet been
written.

Slab count The total number of slabs.

Slabs opened The total number of slabs from which blocks have ever been allocated.

Slabs reopened The number of times slabs have been re-opened since the VDO was started.


Journal disk full count The number of times a request could not make a recovery journal entry because
the recovery journal was full.

Journal commits The number of times the recovery journal requested slab journal commits.
requested count

Journal entries batching The number of journal entry writes started minus the number of journal entries
written.

Journal entries started The number of journal entries which have been made in memory.

Journal entries writing The number of journal entries in submitted writes minus the number of journal
entries committed to storage.

Journal entries written    The total number of journal entries for which a write has been issued.

Journal entries committed    The number of journal entries written to storage.

Journal blocks batching The number of journal block writes started minus the number of journal blocks
written.

Journal blocks started The number of journal blocks which have been touched in memory.

Journal blocks writing    The number of journal blocks written (with metadata in active memory) minus
the number of journal blocks committed.

Journal blocks written    The total number of journal blocks for which a write has been issued.

Journal blocks committed    The number of journal blocks written to storage.

Slab journal disk full The number of times an on-disk slab journal was full.
count

Slab journal flush count The number of times an entry was added to a slab journal that was over the flush
threshold.

Slab journal blocked The number of times an entry was added to a slab journal that was over the
count blocking threshold.

Slab journal blocks The number of slab journal block writes issued.
written

Slab journal tail busy The number of times write requests blocked waiting for a slab journal write.
count


Slab summary blocks The number of slab summary block writes issued.
written

Reference blocks written The number of reference block writes issued.

Block map dirty pages The number of dirty pages in the block map cache.

Block map clean pages The number of clean pages in the block map cache.

Block map free pages The number of free pages in the block map cache.

Block map failed pages The number of block map cache pages that have write errors.

Block map incoming The number of block map cache pages that are being read into the cache.
pages

Block map outgoing The number of block map cache pages that are being written.
pages

Block map cache The number of times a free page was not available when needed.
pressure

Block map read count The total number of block map page reads.

Block map write count The total number of block map page writes.

Block map failed reads The total number of block map read errors.

Block map failed writes The total number of block map write errors.

Block map reclaimed The total number of block map pages that were reclaimed.

Block map read outgoing The total number of block map reads for pages that were being written.

Block map found in The total number of block map cache hits.
cache

Block map discard The total number of block map requests that required a page to be discarded.
required

Block map wait for page The total number of requests that had to wait for a page.

Block map fetch required The total number of requests that required a page fetch.

Block map pages loaded The total number of page fetches.


Block map pages saved The total number of page saves.

Block map flush count The total number of flushes issued by the block map.

Invalid advice PBN count    The number of times the index returned invalid advice.

No space error count    The number of write requests which failed due to the VDO volume being out of
space.

Read only error count The number of write requests which failed due to the VDO volume being in read-
only mode.

Instance The VDO instance.

512 byte emulation Indicates whether 512 byte emulation is on or off for the volume.

Current VDO IO requests in progress    The number of I/O requests the VDO is currently processing.

Maximum VDO IO requests in progress    The maximum number of simultaneous I/O requests the VDO has processed.

Current dedupe queries The number of deduplication queries currently in flight.

Maximum dedupe queries    The maximum number of in-flight deduplication queries.

Dedupe advice valid The number of times deduplication advice was correct.

Dedupe advice stale The number of times deduplication advice was incorrect.

Dedupe advice timeouts The number of times deduplication queries timed out.

Flush out The number of flush requests submitted by VDO to the underlying storage.


Bios in..., Bios in partial..., Bios out..., Bios meta..., Bios journal..., Bios page cache...,
Bios out completed..., Bios meta completed..., Bios journal completed..., Bios page cache completed...,
Bios acknowledged..., Bios acknowledged partial..., Bios in progress...

These statistics count the number of bios in each category with a given flag. The categories are:

bios in: The number of block I/O requests received by VDO.

bios in partial: The number of partial block I/O requests received by VDO. Applies only to
512-byte emulation mode.

bios out: The number of non-metadata block I/O requests submitted by VDO to the storage device.

bios meta: The number of metadata block I/O requests submitted by VDO to the storage device.

bios journal: The number of recovery journal block I/O requests submitted by VDO to the
storage device.

bios page cache: The number of block map I/O requests submitted by VDO to the storage device.

bios out completed: The number of non-metadata block I/O requests completed by the storage device.

bios meta completed: The number of metadata block I/O requests completed by the storage device.

bios journal completed: The number of recovery journal block I/O requests completed by the
storage device.

bios page cache completed: The number of block map I/O requests completed by the storage device.

bios acknowledged: The number of block I/O requests acknowledged by VDO.

bios acknowledged partial: The number of partial block I/O requests acknowledged by VDO.
Applies only to 512-byte emulation mode.

bios in progress: The number of bios submitted to the VDO which have not yet been acknowledged.

There are three types of flags:

read: The number of non-write bios (bios without the REQ_WRITE flag set)

write: The number of write bios (bios with the REQ_WRITE flag set)

discard: The number of bios with a REQ_DISCARD flag set

Read cache accesses The number of times VDO searched the read cache.

Read cache hits The number of read cache hits.

29.8. STATISTICS FILES IN /SYS


Statistics for a running VDO volume may be read from files in the
/sys/kvdo/volume_name/statistics directory, where volume_name is the name of the VDO
volume. This provides an alternate interface to the data produced by the vdostats utility, suitable for
access by shell scripts and management software.

There are files in the statistics directory in addition to the ones listed in the table below. These
additional statistics files are not guaranteed to be supported in future releases.

Table 29.10. Statistics files

File Description

dataBlocksUsed The number of physical blocks currently in use by a VDO volume to store data.

logicalBlocksUsed The number of logical blocks currently mapped.

physicalBlocks The total number of physical blocks allocated for a VDO volume.

logicalBlocks The maximum number of logical blocks that can be mapped by a VDO volume.

mode Indicates whether a VDO volume is operating normally, is in recovery mode, or is
in read-only mode.
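
For example, a minimal shell sketch (assuming a VDO volume named vdo0) could read the files listed
above instead of parsing vdostats output:

for stat in dataBlocksUsed logicalBlocksUsed physicalBlocks logicalBlocks mode; do
    printf '%-20s %s\n' "$stat" "$(cat /sys/kvdo/vdo0/statistics/$stat)"
done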

CHAPTER 30. VDO EVALUATION

30.1. INTRODUCTION
VDO is software that provides inline block-level deduplication, compression, and thin provisioning
capabilities for primary storage. VDO installs within the Linux device mapper framework, where it takes
ownership of existing physical block devices and remaps these to new, higher-level block devices with
data-efficiency properties. Specifically, VDO can multiply the effective capacity of these devices by ten or
more. These benefits require additional system resources, so it is necessary to measure
VDO's impact on system performance.

Storage vendors undoubtedly have existing in-house test plans and expertise that they use to evaluate
new storage products. Since the VDO layer introduces deduplication and compression, different
tests may be required. An effective test plan requires studying the VDO architecture and exploring these
items:

VDO-specific configurable properties (performance tuning end-user applications)

Impact of being a native 4 KB block device

Response to access patterns and distributions of deduplication and compression

Performance in high-load environments (very important)

Analyze cost vs. capacity vs. performance, based on application

Failure to consider such factors up front has created situations that have invalidated certain tests and
required customers to repeat testing and data collection efforts.

30.1.1. Expectations and Deliverables


This Evaluation Guide is meant to augment, not replace, a vendor's internal evaluation effort. With a
modest investment of time, it will help evaluators produce an accurate assessment of VDO's integration
into existing storage devices. This guide is designed to:

Help engineers identify configuration settings that elicit optimal responses from the test device

Provide an understanding of basic tuning parameters to help avoid product misconfigurations

Create a performance results portfolio as a reference to compare against "real" application


results

Identify how different workloads affect performance and data efficiency

Expedite time-to-market with VDO implementations

The test results will help Red Hat engineers assist in understanding VDO's behavior when integrated into
specific storage environments. OEMs will understand how to design their deduplication and compression
capable devices, and also how their customers can tune their applications to best use those devices.

Be aware that the procedures in this document are designed to provide conditions under which VDO can
be most realistically evaluated. Altering test procedures or parameters may invalidate results. Red Hat
Sales Engineers are available to offer guidance when modifying test plans.

30.2. TEST ENVIRONMENT PREPARATIONS

Before evaluating VDO, it is important to consider the host system configuration, VDO configuration, and
the workloads that will be used during testing. These choices will affect benchmarking both in terms of
data optimization (space efficiency) and performance (bandwidth and latency). Items that should be
considered when developing test plans are listed in the following sections.

30.2.1. System Configuration


Number and type of CPU cores available. This can be controlled by using the taskset utility.

Available memory and total installed memory.

Configuration of storage devices.

Linux kernel version. Note that Red Hat Enterprise Linux 7 provides only one Linux kernel
version.

Packages installed.

30.2.2. VDO Configuration


Partitioning scheme

File system(s) used on VDO volumes

Size of the physical storage assigned to a VDO volume

Size of the logical VDO volume created

Sparse or dense indexing

UDS Index in memory size

VDO's thread configuration
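
For illustration only, a vdo create invocation that exercises several of these parameters might look
like the following; the option names come from the vdo manager but should be verified against
vdo create --help on your release, and the values are arbitrary examples:

# vdo create --name=vdo0 --device=/dev/sdb \
    --vdoLogicalSize=1T \
    --indexMem=1 --sparseIndex=disabled \
    --vdoLogicalThreads=2 --vdoPhysicalThreads=2 \
    --writePolicy=auto --verbose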

30.2.3. Workloads
Types of tools used to generate test data

Number of concurrent clients

The quantity of duplicate 4 KB blocks in the written data

Read and write patterns

The working set size

VDO volumes may need to be re-created in between certain tests to ensure that each test is performed
on the same disk environment. Read more about this in the testing section.

30.2.4. Supported System Configurations


Red Hat has tested VDO with Red Hat Enterprise Linux 7 on the Intel 64 architecture.

For the system requirements of VDO, see Section 29.2, “System Requirements”.

The following utilities are recommended when evaluating VDO:

Flexible I/O Tester version 2.08 or higher; available from the fio package

sysstat version 8.1.2-2 or higher; available from the sysstat package

30.2.5. Pre-Test System Preparations


This section describes how to configure system settings to achieve optimal performance during the
evaluation. Testing beyond the implicit bounds established in any particular test may result in loss of
testing time due to abnormal results. For example, this guide describes a test that conducts random
reads over a 100 GB address range. To test a working set of 500 GB, the amount of DRAM allocated for
the VDO block map cache should be increased accordingly.

System Configuration

Ensure that your CPU is running at its highest performance setting.

Disable frequency scaling if possible using the BIOS configuration or the Linux cpupower
utility (see the example after this list).

Enable Turbo mode if possible to achieve maximum throughput. Turbo mode introduces
some variability in test results, but performance will meet or exceed that of testing without
Turbo.
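
For example, where the cpupower utility is installed, the available governors can be listed and the
performance governor selected as follows (a minimal sketch; the governors offered depend on the CPU
frequency driver):

# cpupower frequency-info --governors
# cpupower frequency-set --governor performance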

Linux Configuration

For disk-based solutions, Linux offers several I/O scheduler algorithms to handle multiple
read/write requests as they are queued. By default, Red Hat Enterprise Linux uses the CFQ
(completely fair queuing) scheduler, which arranges requests in a way that improves
rotational disk (hard disk) access in many situations. We instead suggest using the Deadline
scheduler for rotational disks, having found that it provides better throughput and latency in
Red Hat lab testing. Change the device settings as follows:

# echo "deadline" > /sys/block/device/queue/scheduler

For flash-based solutions, the noop scheduler demonstrates superior random access
throughput and latency in Red Hat lab testing. Change the device settings as follows:

# echo "noop" > /sys/block/device/queue/scheduler

Storage device configuration

File systems (ext4, XFS, etc.) may have unique impacts on performance; they often skew
performance measurements, making it harder to isolate VDO's impact on the results. If
reasonable, we recommend measuring performance on the raw block device. If this is not
possible, format the device using the file system that would be used in the target implementation.

30.2.6. VDO Internal Structures


We believe that a general understanding of VDO mechanisms is essential for a complete and successful
evaluation. This understanding becomes especially important when testers wish to deviate from the test
plan or devise new stimuli to emulate a particular application or use case. For more information, see
Chapter 29, VDO Integration.

The Red Hat test plan was written to operate with a default VDO configuration. When developing new
tests, some of the VDO parameters listed in the next section must be adjusted.

30.2.7. VDO Optimizations

High Load

Perhaps the most important strategy for producing optimal performance is determining the best I/O
queue depth, a characteristic that represents the load on the storage system. Most modern storage
systems perform optimally with high I/O depth. VDO's performance is best demonstrated with many
concurrent requests.

Synchronous vs. Asynchronous Write Policy

VDO might operate with either of two write policies, synchronous or asynchronous. By default, VDO
automatically chooses the appropriate write policy for your underlying storage device.

When testing performance, you need to know which write policy VDO selected. The following command
shows the write policy of your VDO volume:

# vdo status --name=my_vdo

For more information on write policies, see the section called “Overview of VDO Write Policies” and
Section 29.4.2, “Selecting VDO Write Modes”.
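
If a test requires a particular policy rather than the automatically selected one, it can be set
explicitly with the changeWritePolicy subcommand noted in Table 29.9, for example:

# vdo changeWritePolicy --name=my_vdo --writePolicy=sync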

Metadata Caching

VDO maintains a table of mappings from logical block addresses to physical block addresses, and VDO
must look up the relevant mapping when accessing any particular block. By default, VDO allocates
128 MB of metadata cache in DRAM to support efficient access to 100 GB of logical space at a time. The
test plan generates workloads appropriate to this configuration option.

Working sets larger than the configured cache size will require additional I/Os to service requests, in
which case performance degradation will occur. If additional memory is available, the block map cache
should be made larger. If the working set is larger than what the block map cache can hold in memory,
additional I/O overhead can occur to look up the associated block map pages.
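
As a hedged example, a volume intended for a roughly 500 GB working set could be created with a
proportionally larger cache; the --blockMapCacheSize option name and accepted values should be
confirmed with vdo create --help:

# vdo create --name=vdo0 --device=/dev/sdb --vdoLogicalSize=1T \
    --blockMapCacheSize=640M --verbose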

VDO Multithreading Configuration

VDO's thread configuration must be tuned to achieve optimal performance. Review the VDO Integration
Guide for information on how to modify these settings when creating a VDO volume. Contact your
Red Hat Sales Engineer to discuss how to design a test to find the optimal setting.

Data Content

Because VDO performs deduplication and compression, test data sets must be chosen to effectively
exercise these capabilities.

30.2.8. Special Considerations for Testing Read Performance


When testing read performance, these factors must be considered:

1. If a 4 KB block has never been written, VDO will not perform I/O to the storage and will
immediately respond with a zero block.

2. If a 4 KB block has been written but contains all zeros, VDO will not perform I/O to the storage
and will immediately respond with a zero block.

This behavior results in very fast read performance when there is no data to read. This makes it
imperative that read tests prefill with actual data.

30.2.9. Cross Talk


To prevent one test from affecting the results of another, it is suggested that a new VDO volume be
created for each iteration of each test.

30.3. DATA EFFICIENCY TESTING PROCEDURES


Successful validation of VDO is dependent upon following a well-structured test procedure. This section
provides a series of steps to follow, along with the expected results, as examples of tests to consider
when participating in an evaluation.

Test Environment
The test cases in the next section make the following assumptions about the test environment:

One or more Linux physical block devices are available.

The target block device (for example, /dev/sdb) is larger than 512 GB.

Flexible I/O Tester (fio) version 2.1.1 or later is installed.

VDO is installed.

The following information should be recorded at the start of each test in order to ensure that the test
environment is fully understood:

The Linux build used, including the kernel build number.

A complete list of installed packages, as obtained from the rpm -qa command.

Complete system specifications:

CPU type and quantity (available in /proc/cpuinfo).

Installed memory and the amount available after the base OS is running (available in
/proc/meminfo).

Type(s) of drive controller(s) used.

Type(s) and quantity of disk(s) used.

A complete list of running processes (from ps aux or a similar listing).

Name of the Physical Volume and the Volume Group created for use with VDO (pvs and vgs
listings).

File system used when formatting the VDO volume (if any).

Permissions on the mounted directory.

Contents of /etc/vdoconf.yaml.

Location of the VDO files.

You can capture much of the required information by running sosreport.
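
Alternatively, the following minimal shell sketch gathers most of the items above into a single
timestamped file (run as root; the commands and paths are the standard ones named in this list):

{
    echo "== Kernel ==";       uname -r
    echo "== Packages ==";     rpm -qa | sort
    echo "== CPU ==";          cat /proc/cpuinfo
    echo "== Memory ==";       cat /proc/meminfo
    echo "== Processes ==";    ps aux
    echo "== LVM ==";          pvs; vgs
    echo "== VDO config ==";   cat /etc/vdoconf.yaml
} > test-environment-$(date +%Y%m%d-%H%M).txt 2>&1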

Workloads
Effectively testing VDO requires the use of data sets that simulate real world workloads. The data sets
should provide a balance between data that can be deduplicated and/or compressed and data that
cannot in order to demonstrate performance under different conditions.

There are several tools that can synthetically generate data with repeatable characteristics. Two utilities
in particular, VDbench and fio, are recommended for use during testing.

This guide uses fio. Understanding the arguments is critical to a successful evaluation:

Table 30.1. fio Options

--size (Value: 100 GB)
The quantity of data fio will send to the target per job (see numjobs below).

--bs (Value: 4k)
The block size of each read/write request produced by fio. Red Hat recommends a 4 KB block size to
match VDO's 4 KB default.

--numjobs (Value: 1 for HDD, 2 for SSD)
The number of jobs that fio will create to run the benchmark. Each job sends the amount of data
specified by the --size parameter. The first job sends data to the device at the offset specified
by the --offset parameter. Subsequent jobs write the same region of the disk (overwriting) unless
the --offset_increment parameter is provided, which will offset each job from where the previous
job began by that value. To achieve peak performance on flash, at least two jobs are recommended.
One job is typically enough to saturate rotational disk (HDD) throughput.

--thread (Value: <N/A>)
Instructs fio jobs to be run in threads rather than being forked, which may provide better
performance by limiting context switching.

--ioengine (Value: libaio)
There are several I/O engines available in Linux that can be tested using fio. Red Hat testing uses
the asynchronous unbuffered engine (libaio). If you are interested in another engine, discuss that
with your Red Hat Sales Engineer. The Linux libaio engine is used to evaluate workloads in which one
or more processes are making random requests simultaneously. libaio allows multiple requests to be
made asynchronously from a single thread before any data has been retrieved, which limits the number
of context switches that would be required if the requests were provided by many threads via a
synchronous engine.

--direct (Value: 1 for libaio)
When set, direct allows requests to be submitted to the device, bypassing the Linux kernel's page
cache. Libaio Engine: libaio must be used with direct enabled (=1), or the kernel may resort to the
sync API for all I/O requests.

--iodepth (Value: 128 minimum)
The number of I/O buffers in flight at any time. A high iodepth will usually increase performance,
particularly for random reads or writes. High depths ensure that the controller always has requests
to batch. However, setting iodepth too high (greater than 1K, typically) may cause undesirable
latency. While Red Hat recommends an iodepth between 128 and 512, the final value is a trade-off and
depends on how your application tolerates latency.

--iodepth_batch_submit (Value: 16)
The number of I/Os to create when the iodepth buffer pool begins to empty. This parameter limits
task switching from I/O to buffer creation during the test.

--iodepth_batch_complete (Value: 16)
The number of I/Os to complete before submitting a batch. This parameter limits task switching from
I/O to buffer creation during the test.

--gtod_reduce (Value: 1)
Disables time-of-day calls to calculate latency. Latency measurement lowers throughput when these
calls are enabled (=0), so this option should be set to 1 unless latency measurement is necessary.

30.3.1. Configuring a VDO Test Volume

1. Create a VDO Volume with a Logical Size of 1 TB on a 512 GB Physical Volume

1. Create a VDO volume.

To test the VDO async mode on top of synchronous storage, create an asynchronous
volume using the --writePolicy=async option:

# vdo create --name=vdo0 --device=/dev/sdb \
    --vdoLogicalSize=1T --writePolicy=async --verbose

To test the VDO sync mode on top of synchronous storage, create a synchronous volume
using the --writePolicy=sync option:

# vdo create --name=vdo0 --device=/dev/sdb \
    --vdoLogicalSize=1T --writePolicy=sync --verbose

2. Format the new device with an XFS or ext4 file system.

For XFS:

# mkfs.xfs -K /dev/mapper/vdo0

For ext4:

# mkfs.ext4 -E nodiscard /dev/mapper/vdo0

3. Mount the formatted device:

# mkdir /mnt/VDOVolume
# mount /dev/mapper/vdo0 /mnt/VDOVolume && \
chmod a+rwx /mnt/VDOVolume

30.3.2. Testing VDO Efficiency

2. Test Reading and Writing to the VDO Volume

1. Write 32 GB of random data to the VDO volume:

$ dd if=/dev/urandom of=/mnt/VDOVolume/testfile bs=4096 count=8388608

2. Read the data from the VDO volume and write it to another location not on the VDO volume:

$ dd if=/mnt/VDOVolume/testfile of=/home/user/testfile bs=4096

3. Compare the two files using diff, which should report that the files are the same:

$ diff -s /mnt/VDOVolume/testfile /home/user/testfile

4. Copy the file to a second location on the VDO volume:

$ dd if=/home/user/testfile of=/mnt/VDOVolume/testfile2 bs=4096

5. Compare the third file to the second file. This should report that the files are the same:

$ diff -s /mnt/VDOVolume/testfile2 /home/user/testfile

3. Remove the VDO Volume

1. Unmount the file system created on the VDO volume:

# umount /mnt/VDOVolume

2. Run the command to remove the VDO volume vdo0 from the system:

# vdo remove --name=vdo0

3. Verify that the volume has been removed. There should be no listing in vdo list for the VDO
partition:

# vdo list --all | grep vdo

4. Measure Deduplication

1. Create and mount a VDO volume following Section 30.3.1, “Configuring a VDO Test Volume”.

2. Create 10 directories on the VDO volume named vdo01 through vdo10 to hold 10 copies of the
test data set:

$ mkdir /mnt/VDOVolume/vdo{01..10}

3. Examine the amount of disk space used according to the file system:

$ df -h /mnt/VDOVolume

Filesystem         Size  Used Avail Use% Mounted on
/dev/mapper/vdo0   1.5T  198M  1.4T   1% /mnt/VDOVolume

Consider tabulating the results in a table:

Statistic                Bare File System    After Seed    After 10 Copies

File System Used Size    198 MB

VDO Data Used

VDO Logical Used

4. Run the following command and record the values. "Data blocks used" is the number of blocks
used by user data on the physical device running under VDO. "Logical blocks used" is the
number of blocks used before optimization. It will be used as the starting point for measurements.

# vdostats --verbose | grep "blocks used"

data blocks used     : 1090
overhead blocks used : 538846
logical blocks used  : 6059434

5. Create a data source file in the top level of the VDO volume

$ dd if=/dev/urandom of=/mnt/VDOVolume/sourcefile bs=4096 count=1048576

4294967296 bytes (4.3 GB) copied, 540.538 s, 7.9 MB/s

6. Re-examine the amount of used physical disk space in use. This should show an increase in the
number of blocks used corresponding to the file just written:

$ df -h /mnt/VDOVolume

Filesystem         Size  Used Avail Use% Mounted on
/dev/mapper/vdo0   1.5T  4.2G  1.4T   1% /mnt/VDOVolume

# vdostats --verbose | grep "blocks used"

data blocks used     : 1050093 (increased by 4GB)
overhead blocks used : 538846  (did not change)
logical blocks used  : 7108036 (increased by 4GB)

7. Copy the file to each of the 10 subdirectories:

$ for i in {01..10}; do
cp /mnt/VDOVolume/sourcefile /mnt/VDOVolume/vdo$i
done

8. Once again, check the amount of physical disk space used (data blocks used). This number
should be similar to the result of step 6 above, with only a slight increase due to file system
journaling and metadata:

$ df -h /mnt/VDOVolume

Filesystem         Size  Used Avail Use% Mounted on
/dev/mapper/vdo0   1.5T   45G  1.3T   4% /mnt/VDOVolume

# vdostats --verbose | grep "blocks used"

data blocks used     : 1050836  (increased by 3M)
overhead blocks used : 538846
logical blocks used  : 17594127 (increased by 41G)

9. Subtract this new value of the space used by the file system from the value found before writing
the test data. This is the amount of space consumed by this test from the file system's
perspective.

10. Observe the space savings in your recorded statistics:

Note: In the following table, values have been converted to MB/GB. vdostats "blocks" are 4,096
bytes.

Statistic                Bare File System    After Seed    After 10 Copies

File System Used Size    198 MB              4.2 GB        45 GB

VDO Data Used            4 MB                4.1 GB        4.1 GB

VDO Logical Used         23.6 GB*            27.8 GB       68.7 GB

* File system overhead for 1.6 TB formatted drive
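
Using the sample values above: from the file system's perspective, usage grew from roughly 0.2 GB to
45 GB (about 44.8 GB consumed by the seed file and its ten copies), while VDO Data Used grew only
from 4 MB to about 4.1 GB, roughly one copy's worth of physical space. For this fully duplicated data
set, that is a space saving on the order of 10:1.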

5. Measure Compression

1. Create a VDO volume of at least 10 GB of physical and logical size. Add options to disable
deduplication and enable compression:

# vdo create --name=vdo0 --device=/dev/sdb \
    --vdoLogicalSize=10G --verbose \
    --deduplication=disabled --compression=enabled

2. Inspect VDO statistics before transfer; make note of data blocks used and logical blocks used
(both should be zero):

# vdostats --verbose | grep "blocks used"

3. Format the new device with an XFS or ext4 file system.

For XFS:

# mkfs.xfs -K /dev/mapper/vdo0

For ext4:

# mkfs.ext4 -E nodiscard /dev/mapper/vdo0

4. Mount the formatted device:

# mkdir /mnt/VDOVolume
# mount /dev/mapper/vdo0 /mnt/VDOVolume && \
chmod a+rwx /mnt/VDOVolume

5. Synchronize the VDO volume to complete any unfinished compression:

# sync && dmsetup message vdo0 0 sync-dedupe

6. Inspect VDO statistics again. Logical blocks used — data blocks used is the number of 4 KB
blocks saved by compression for the file system alone. VDO optimizes file system overhead as
well as actual user data:

# vdostats --verbose | grep "blocks used"

7. Copy the contents of /lib to the VDO volume. Record the total size:

# cp -vR /lib /mnt/VDOVolume

...
sent 152508960 bytes received 60448 bytes 61027763.20 bytes/sec
total size is 152293104 speedup is 1.00

8. Synchronize Linux caches and the VDO volume:

# sync && dmsetup message vdo0 0 sync-dedupe

9. Inspect VDO statistics once again. Observe the logical and data blocks used:

# vdostats --verbose | grep "blocks used"

Logical blocks used - data blocks used is the number of 4 KB blocks saved by compression for the
copy of your /lib files (subtract the values recorded in step 6 to exclude file system overhead).
Multiplying that difference by 4096 gives the bytes saved by compression, which can be compared
against the total size recorded in step 7 (and tabulated as in the section called "4. Measure
Deduplication") to gauge the compression ratio.
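
For illustration only (hypothetical numbers, not taken from an actual run): if copying /lib
increases logical blocks used by 37,200 and data blocks used by 30,000, then compression saved
(37,200 - 30,000) * 4096 ≈ 29.5 MB of the roughly 152 MB copied, or about 19%.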

10. Remove the VDO volume:

# umount /mnt/VDOVolume && vdo remove --name=vdo0

6. Test VDO Compression Efficiency

1. Create and mount a VDO volume following Section 30.3.1, “Configuring a VDO Test Volume”.

2. Repeat the experiments in the section called “4. Measure Deduplication” and the section called
“5. Measure Compression” without removing the volume. Observe changes to space savings in
vdostats.

3. Experiment with your own datasets.

7. Understanding TRIM and DISCARD

Thin provisioning allows a logical or virtual storage space to be larger than the underlying physical
storage. Applications such as file systems benefit from running on the larger virtual layer of storage, and
data-efficiency techniques such as data deduplication reduce the number of physical data blocks needed
to store all of the data. To benefit from these storage savings, the physical storage layer needs to know
when application data has been deleted.

Traditional file systems did not have to inform the underlying storage when data was deleted. File
systems that work with thin provisioned storage send TRIM or DISCARD commands to inform the storage
system when a logical block is no longer required. These commands can be sent whenever a block is
deleted using the discard mount option, or these commands can be sent in a controlled manner by
running utilities such as fstrim that tell the file system to detect which logical blocks are unused and
send the information to the storage system in the form of a TRIM or DISCARD command.

To see how this works:

1. Create and mount a new VDO logical volume following Section 30.3.1, “Configuring a VDO Test
Volume”.

2. Trim the file system to remove any unneeded blocks (this may take a long time):

# fstrim /mnt/VDOVolume

3. Record the initial state in the table below by entering:

$ df -m /mnt/VDOVolume

to see how much capacity is used in the file system, and run vdostats to see how many physical
and logical data blocks are being used.
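
The vdostats values can be captured with the same filter used earlier in this chapter, for example:

# vdostats --verbose | grep "blocks used"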

4. Create a 1 GB file with non-duplicate data in the file system running on top of VDO:

$ dd if=/dev/urandom of=/mnt/VDOVolume/file bs=1M count=1K

and then collect the same data. The file system should have used an additional 1 GB, and the
data blocks used and logical blocks used have increased similarly.

5. Run fstrim /mnt/VDOVolume and confirm that this has no impact after creating a new file.

6. Delete the 1 GB file:

$ rm /mnt/VDOVolume/file

Check and record the parameters. The file system is aware that a file has been deleted, but
there has been no change to the number of physical or logical blocks because the file deletion
has not been communicated to the underlying storage.

7. Run fstrim /mnt/VDOVolume and record the same parameters. fstrim looks for free
blocks in the file system and sends a TRIM command to the VDO volume for unused addresses,
which releases the associated logical blocks, and VDO processes the TRIM to release the
underlying physical blocks.

Step File Space Used (MB) Data Blocks Used Logical Blocks Used

Initial

Add 1 GB File

Run fstrim

Delete 1 GB File

Run fstrim

From this exercise we see that the TRIM process is needed so the underlying storage can have an accurate
knowledge of capacity utilization. fstrim is a command line tool that analyzes many blocks at once for
greater efficiency. An alternative method is to use the file system discard option when mounting. The
discard option will update the underlying storage after each file system block is deleted, which can slow
throughput but provides for greater utilization awareness. It is also important to understand that the need to
TRIM or DISCARD unused blocks is not unique to VDO; any thin-provisioned storage system has the
same challenge.
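
For example, to use online discards instead of periodic fstrim runs, the file system on the VDO
volume could be mounted as follows:

# mount -o discard /dev/mapper/vdo0 /mnt/VDOVolume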

30.4. PERFORMANCE TESTING PROCEDURES


The goal of this section is to construct a performance profile of the device with VDO installed. Each test
should be run with and without VDO installed, so that VDO's performance can be evaluated relative to
the performance of the base system.

30.4.1. Phase 1: Effects of I/O Depth, Fixed 4 KB Blocks


The goal of this test is to determine the I/O depth that produces the optimal throughput and the lowest
latency for your appliance. VDO uses a 4 KB sector size rather than the traditional 512 B used on legacy
storage devices. The larger sector size allows it to support higher-capacity storage, improve
performance, and match the cache buffer size used by most operating systems.

1. Perform four-corner testing at 4 KB I/O, and I/O depth of 1, 8, 16, 32, 64, 128, 256, 512, 1024:

Sequential 100% reads, at fixed 4 KB *

Sequential 100% write, at fixed 4 KB

Random 100% reads, at fixed 4 KB *

Random 100% write, at fixed 4 KB **

* Prefill any areas that may be read during the read test by performing a write fio job first

** Re-create the VDO volume after 4 KB random write I/O runs

Example shell test input stimulus (write):

# for depth in 1 2 4 8 16 32 64 128 256 512 1024 2048; do
    fio --rw=write --bs=4096 --name=vdo --filename=/dev/mapper/vdo0 \
      --ioengine=libaio --numjobs=1 --thread --norandommap --runtime=300 \
      --direct=1 --iodepth=$depth --scramble_buffers=1 --offset=0 \
      --size=100g
  done
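
For the read tests marked * above, the region can be prefilled with a single sequential write pass,
for example (a sketch reusing the fio parameters from the loop above):

# fio --rw=write --bs=4096 --name=prefill --filename=/dev/mapper/vdo0 \
    --ioengine=libaio --numjobs=1 --thread --direct=1 --iodepth=128 \
    --offset=0 --size=100g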

2. Record throughput and latency at each data point, and then graph.

3. Repeat test to complete four-corner testing: --rw=randwrite, --rw=read, and --


rw=randread.

The result is a graph as shown below. Points of interest are the behavior across the range and the points
of inflection where increased I/O depth provides diminishing throughput gains. Sequential
access and random access will likely peak at different values, and the peaks may differ across storage
configurations. In Figure 30.1, "I/O Depth Analysis", notice the "knee" in each performance curve. Marker
1 identifies the peak sequential throughput at point X, and marker 2 identifies peak random 4 KB
throughput at point Z.

This particular appliance does not benefit from sequential 4 KB I/O depth > X. Beyond that
depth, there are diminishing bandwidth gains, and average request latency will
increase 1:1 for each additional I/O request.

This particular appliance does not benefit from random 4 KB I​/​O depth > Z. Beyond that depth,
there are diminishing bandwidth gains, and average request latency will increase 1:1 for each
additional I​/​O request.

Figure 30.1. I/O Depth Analysis

Figure 30.2, “Latency Response of Increasing I/O for Random Writes” shows an example of the random
write latency after the "knee" of the curve in Figure 30.1, “I/O Depth Analysis”. Benchmarking practice
should test at these points for maximum throughput that incurs the least response time penalty. As we
move forward in the test plan for this example appliance, we will collect additional data with I​/​O depth =
Z.

Figure 30.2. Latency Response of Increasing I/O for Random Writes

30.4.2. Phase 2: Effects of I/O Request Size


The goal of this test is to understand the block size that produces the best performance of the system
under test at the optimal I/O depth determined in the previous step.

1. Perform four-corner testing at fixed I/O depth, with varied block size (powers of 2) over the range
8 KB to 1 MB. Remember to prefill any areas to be read and to recreate volumes between
tests.

2. Set the I/O Depth to the value determined in Section 30.4.1, “Phase 1: Effects of I/O Depth,
Fixed 4 KB Blocks”.

Example test input stimulus (write):

# z=[see previous step]
# for iosize in 4 8 16 32 64 128 256 512 1024; do
    fio --rw=write --bs=$iosize\k --name=vdo --filename=/dev/mapper/vdo0 \
      --ioengine=libaio --numjobs=1 --thread --norandommap --runtime=300 \
      --direct=1 --iodepth=$z --scramble_buffers=1 --offset=0 --size=100g
  done

3. Record throughput and latency at each data point, and then graph.

4. Repeat test to complete four-corner testing: --rw=randwrite, --rw=read, and --


rw=randread.

There are several points of interest that you may find in the results. In this example:

Sequential writes reach a peak throughput at request size Y. This curve demonstrates how
applications that are configurable or naturally dominated by certain request sizes may perceive
performance. Larger request sizes often provide more throughput because 4 KB I/Os may
benefit from merging.

Sequential reads reach a similar peak throughput at point Z. Remember that after these peaks,
overall latency before the I/O completes will increase with no additional throughput. It would be
wise to tune the device to not accept I/Os larger than this size.

Random reads achieve peak throughput at point X. Some devices may achieve near-sequential
throughput rates at large request size random accesses, while others suffer more penalty when
varying from purely sequential access.

Random writes achieve peak throughput at point Y. Random writes involve the most interaction
of a deduplication device, and VDO achieves high performance especially when request sizes
and/or I/O depths are large.

The results from this test (Figure 30.3, "Request Size vs. Throughput Analysis and Key Inflection Points")
help in understanding the characteristics of the storage device and the user experience for specific
applications. Consult with a Red Hat Sales Engineer to determine if further tuning is needed
to increase performance at different request sizes.

Figure 30.3. Request Size vs. Throughput Analysis and Key Inflection Points

30.4.3. Phase 3: Effects of Mixing Read & Write I/Os


The goal of this test is to understand how your appliance with VDO behaves when presented with mixed
I/O loads (read/write), analyzing the effects of read/write mix at the optimal random queue depth and
request sizes from 4 KB to 1 MB. You should use whatever is appropriate in your case.

1. Perform four-corner testing at fixed I/O depth, varied block size (powers of 2) over the 8 KB to
256 KB range, and set read percentage at 10% increments, beginning with 0%. Remember to
prefill any areas to be read and to recreate volumes between tests.

2. Set the I/O Depth to the value determined in Section 30.4.1, “Phase 1: Effects of I/O Depth,
Fixed 4 KB Blocks”.

Example test input stimulus (read/write mix):

# z=[see previous step]
# for readmix in 0 10 20 30 40 50 60 70 80 90 100; do
    for iosize in 4 8 16 32 64 128 256 512 1024; do
      fio --rw=rw --rwmixread=$readmix --bs=$iosize\k --name=vdo \
        --filename=/dev/mapper/vdo0 --ioengine=libaio --numjobs=1 --thread \
        --norandommap --runtime=300 --direct=0 --iodepth=$z \
        --scramble_buffers=1 --offset=0 --size=100g
    done
  done

3. Record throughput and latency at each data point, and then graph.

Figure 30.4, “Performance Is Consistent across Varying Read/Write Mixes” shows an example of how
VDO may respond to I/O loads:

Figure 30.4. Performance Is Consistent across Varying Read/Write Mixes

Performance (aggregate) and latency (aggregate) are relatively consistent across the range of mixing
reads and writes, trending from the lower max write throughput to the higher max read throughput.

This behavior may vary with different storage, but the important observation is that the performance is
consistent under varying loads and/or that you can understand performance expectation for applications
that demonstrate specific read/write mixes. If you discover any unexpected results, Red Hat Sales
Engineers will be able to help you understand if it is VDO or the storage device itself that needs
modification.

Note: Systems that do not exhibit a similar response consistency often signify a sub-optimal
configuration. Contact your Red Hat Sales Engineer if this occurs.

30.4.4. Phase 4: Application Environments

The goal of these final tests is to understand how the system with VDO behaves when deployed in a real
application environment. If possible, use real applications and use the knowledge learned so far;
consider limiting the permissible queue depth on your appliance, and if possible tune the application to
issue requests with those block sizes most beneficial to VDO performance.

Request sizes, I/O loads, read/write patterns, etc., are generally hard to predict, as they will vary by
application use case (for example, filers vs. virtual desktops vs. databases), and applications often vary in the
types of I/O based on the specific operation or due to multi-tenant access.

The final test shows general VDO performance in a mixed environment. If more specific details are
known about your expected environment, test those settings as well.

Example test input stimulus (read/write mix):

# for readmix in 20 50 80; do
    for iosize in 4 8 16 32 64 128 256 512 1024; do
      fio --rw=rw --rwmixread=$readmix --bsrange=4k-256k --name=vdo \
        --filename=/dev/mapper/vdo0 --ioengine=libaio --numjobs=1 --thread \
        --norandommap --runtime=300 --direct=0 --iodepth=$iosize \
        --scramble_buffers=1 --offset=0 --size=100g
    done
  done

Record throughput and latency at each data point, and then graph (Figure 30.5, “Mixed Environment
Performance”).

Figure 30.5. Mixed Environment Performance

30.5. ISSUE REPORTING


In the event that an issue is encountered while working with VDO, it is important to gather as much
information as possible to assist Red Hat Sales Engineers in attempting to reproduce the issue.

Issue reports should include the following:

A detailed description of the test environment; see the section called “Test Environment” for
specifics

The VDO configuration

The use case that generated the issue

The actions that were being performed at the time of the error

The text of any error messages on the console or terminal

The kernel log files

Kernel crash dumps, if available

The result of sosreport, which will capture data describing the entire Linux environment

30.6. CONCLUSION
Going through this or any other well-structured evaluation plan is an important step in integrating VDO
into any storage system. The evaluation process is important to understanding performance and
catching any potential compatibility issues. The collection of results from this evaluation not only
demonstrates deduplication and compression, but also provides a performance profile of your system
implementing VDO. The results help determine whether the results achieved in real applications are as
expected and plausible or whether they fall short of expectations. Finally, we can also use these results
to help predict the kinds of applications that will operate favorably with VDO.

APPENDIX A. RED HAT CUSTOMER PORTAL LABS RELEVANT TO STORAGE ADMINISTRATION
Red Hat Customer Portal Labs are tools designed to help you improve performance, troubleshoot issues,
identify security problems, and optimize configuration. This appendix provides an overview of Red Hat
Customer Portal Labs relevant to storage administration. All Red Hat Customer Portal Labs are available
at https://access.redhat.com/labs/.

SCSI DECODER
The SCSI decoder is designed to decode SCSI error messages in the /log/* files or log file snippets,
as these error messages can be hard to understand for the user.

Use the SCSI decoder to individually diagnose each SCSI error message and get solutions to resolve
problems efficiently.

FILE SYSTEM LAYOUT CALCULATOR


The File System Layout Calculator determines the optimal parameters for creating ext3, ext4, and xfs file
systems, after you provide storage options that describe your current or planned storage. Move the
cursor over the question mark ("?") for a brief explanation of a particular option, or scroll down to read a
summary of all options.

Use the File System Layout Calculator to generate a command that creates a file system with provided
parameters on the specified RAID storage. Copy the generated command and execute it as root to
create the required file system.

LVM RAID CALCULATOR


The LVM RAID Calculator determines the optimal parameters for creating logical volumes (LVMs) on a
given RAID storage after you specify storage options. Move the cursor over the question mark ("?") for a
brief explanation of a particular option, or scroll down to read a summary of all options.

The LVM RAID Calculator generates a sequence of commands that create LVMs on a given RAID
storage. Copy and execute the generated commands one by one as root to create the required LVMs.

ISCSI HELPER
The iSCSI Helper provides block-level storage over Internet Protocol (IP) networks, and enables the
use of storage pools within server virtualization.

Use the iSCSI Helper to generate a script that prepares the system for its role of an iSCSI target (server)
or an iSCSI initiator (client) configured according to the settings that you provide.

SAMBA CONFIGURATION HELPER


The Samba Configuration Helper creates a configuration that provides basic file and printer sharing
through Samba:

Click Server to specify basic server settings.

Click Shares to add the directories that you want to share.

Click Printers to add attached printers individually.

MULTIPATH HELPER
The Multipath Helper creates an optimal configuration for multipath devices on Red Hat
Enterprise Linux 5, 6, and 7. By following the steps, you can create advanced multipath configurations,
such as custom aliases or device blacklists.

The Multipath Helper also provides the multipath.conf file for a review. When you achieve the
required configuration, download the installation script to run on your server.

NFS HELPER
The NFS Helper simplifies configuring a new NFS server or client. Follow the steps to specify the export
and mount options. Then, generate a downloadable NFS configuration script.

MULTIPATH CONFIGURATION VISUALIZER


The Multipath Configuration Visualizer analyzes files in a sosreport and provides a diagram that
visualizes the multipath configuration. Use the Multipath Configuration Visualizer to display:

Hosts components including Host Bus Adapters (HBAs), local devices, and iSCSI devices on the
server side

Storage components on the storage side

Fabric or Ethernet components between the server and the storage

Paths to all mentioned components

You can either upload a sosreport compressed in the .xz, .gz, or .bz2 format, or extract a sosreport in a
directory that you then select as the source for a client-side analysis.

RHEL BACKUP AND RESTORE ASSISTANT


The RHEL Backup and Restore Assistant provides information on back-up and restore tools, and
common scenarios of Linux usage.

Described tools:

dump and restore: for backing up the ext2, ext3, and ext4 file systems.

tar and cpio: for archiving or restoring files and folders, especially when backing up to tape
drives.

rsync: for performing back-up operations and synchronizing files and directories between
locations.

dd: for copying files from a source to a destination block by block independently of the file
systems or operating systems involved.

Described scenarios:

Disaster recovery

Hardware migration

Partition table backup

Important folder backup

Incremental backup

Differential backup

APPENDIX B. REVISION HISTORY


Revision 4-06 Tue Aug 28 2018 Marek Suchanek
An asynchronous update

Revision 4-02 Thu May 10 2018 Marek Suchanek


An asynchronous update

Revision 4-00 Fri Apr 6 2018 Marek Suchanek


Document version for 7.5 GA publication.

Revision 3-95 Thu Apr 5 2018 Marek Suchanek


An asynchronous update

Revision 3-93 Mon Mar 5 2018 Marek Suchanek


New chapter: VDO Integration

Revision 3-92 Fri Feb 9 2018 Marek Suchanek


An asynchronous update

Revision 3-90 Wed Dec 6 2017 Marek Suchanek


Version for 7.5 Alpha publication.

Revision 3-86 Mon Nov 6 2017 Marek Suchanek


An asynchronous update.

Revision 3-80 Thu Jul 27 2017 Milan Navratil


Document version for 7.4 GA publication.

Revision 3-77 Wed May 24 2017 Milan Navratil


An asynchronous update.

Revision 3-68 Fri Oct 21 2016 Milan Navratil


Version for 7.3 GA publication.

Revision 3-67 Fri Jun 17 2016 Milan Navratil


An asynchronous update.

Revision 3-64 Wed Nov 11 2015 Jana Heves


Version for 7.2 GA release.

Revision 3-33 Wed Feb 18 2015 Jacquelynn East


Version for 7.1 GA

Revision 3-26 Wed Jan 21 2015 Jacquelynn East


Added overview of Ceph

Revision 3-22 Thu Dec 4 2014 Jacquelynn East


7.1 Beta

Revision 3-4 Thu Jul 17 2014 Jacquelynn East


Added new chapter on targetcli

Revision 3-1 Tue Jun 3 2014 Jacquelynn East


Version for 7.0 GA release

INDEX
Symbols
/boot/ directory, The /boot/ Directory
/dev/shm, df Command
/etc/fstab, Converting to an ext3 File System, Mounting NFS File Systems Using /etc/fstab,
Mounting a File System
/etc/fstab file
enabling disk quotas with, Enabling Quotas

/local/directory (client configuration, mounting)


NFS, Configuring NFS Client

/proc
/proc/devices, The /proc Virtual File System
/proc/filesystems, The /proc Virtual File System
/proc/mdstat, The /proc Virtual File System
/proc/mounts, The /proc Virtual File System
/proc/mounts/, The /proc Virtual File System
/proc/partitions, The /proc Virtual File System

/proc/devices
virtual file system (/proc), The /proc Virtual File System

/proc/filesystems
virtual file system (/proc), The /proc Virtual File System

/proc/mdstat
virtual file system (/proc), The /proc Virtual File System

/proc/mounts
virtual file system (/proc), The /proc Virtual File System

/proc/mounts/
virtual file system (/proc), The /proc Virtual File System

/proc/partitions
virtual file system (/proc), The /proc Virtual File System

/remote/export (client configuration, mounting)


NFS, Configuring NFS Client

A
adding paths to a storage device, Adding a Storage Device or Path
adding/removing

LUN (logical unit number), Adding/Removing a Logical Unit Through rescan-scsi-bus.sh

advanced RAID device creation


RAID, Creating Advanced RAID Devices

allocation features
ext4, The ext4 File System
XFS, The XFS File System

Anaconda support
RAID, RAID Support in the Anaconda Installer

API, Fibre Channel, Fibre Channel API


API, iSCSI, iSCSI API
ATA standards
I/O alignment and size, ATA

autofs , autofs, Configuring autofs


(see also NFS)

autofs version 5
NFS, Improvements in autofs Version 5 over Version 4

B
backup/restoration
XFS, Backing Up and Restoring XFS File Systems

battery-backed write caches


write barriers, Battery-Backed Write Caches

bcull (cache cull limits settings)


FS-Cache, Setting Cache Cull Limits

binding/unbinding an iface to a portal


offload and interface binding
iSCSI, Binding/Unbinding an iface to a Portal

block device ioctls (userspace access)


I/O alignment and size, Block Device ioctls

blocked device, verifying


Fibre Channel
modifying link loss behavior, Fibre Channel

brun (cache cull limits settings)

FS-Cache, Setting Cache Cull Limits

bstop (cache cull limits settings)


FS-Cache, Setting Cache Cull Limits

Btrfs
File System, Btrfs (Technology Preview)

C
cache back end
FS-Cache, FS-Cache

cache cull limits


FS-Cache, Setting Cache Cull Limits

cache limitations with NFS


FS-Cache, Cache Limitations with NFS

cache setup
FS-Cache, Setting up a Cache

cache sharing
FS-Cache, Cache Sharing

cachefiles
FS-Cache, FS-Cache

cachefilesd
FS-Cache, Setting up a Cache

CCW, channel command word


storage considerations during installation, DASD and zFCP Devices on IBM System Z

changing dev_loss_tmo
Fibre Channel
modifying link loss behavior, Fibre Channel

Changing the read/write state


Online logical units, Changing the Read/Write State of an Online Logical Unit

channel command word (CCW)


storage considerations during installation, DASD and zFCP Devices on IBM System Z

coherency data
FS-Cache, FS-Cache


command timer (SCSI)


Linux SCSI layer, Command Timer

commands
volume_key, volume_key Commands

configuration
discovery
iSCSI, iSCSI Discovery Configuration

configuring a tftp service for diskless clients


diskless systems, Configuring a tftp Service for Diskless Clients

configuring an Ethernet interface to use FCoE


FCoE, Configuring a Fibre Channel over Ethernet Interface

configuring DHCP for diskless clients


diskless systems, Configuring DHCP for Diskless Clients

configuring RAID sets


RAID, Configuring RAID Sets

controlling SCSI command timer and device status


Linux SCSI layer, Controlling the SCSI Command Timer and Device Status

creating
ext4, Creating an ext4 File System
XFS, Creating an XFS File System

cumulative mode (xfsrestore)


XFS, Restoration

D
DASD and zFCP devices on IBM System z
storage considerations during installation, DASD and zFCP Devices on IBM System Z

debugfs (other ext4 file system utilities)


ext4, Other ext4 File System Utilities

deployment
solid-state disks, Solid-State Disk Deployment Guidelines

deployment guidelines
solid-state disks, Solid-State Disk Deployment Guidelines

determining remote port states


Fibre Channel
modifying link loss behavior, Fibre Channel

dev directory, The /dev/ Directory


device status
Linux SCSI layer, Device States

device-mapper multipathing, DM-Multipath


devices, removing, Removing a Storage Device
dev_loss_tmo
Fibre Channel
modifying link loss behavior, Fibre Channel

dev_loss_tmo, changing
Fibre Channel
modifying link loss behavior, Fibre Channel

df, df Command
DHCP, configuring
diskless systems, Configuring DHCP for Diskless Clients

DIF/DIX-enabled block devices


storage considerations during installation, Block Devices with DIF/DIX Enabled

direct map support (autofs version 5)


NFS, Improvements in autofs Version 5 over Version 4

directories
/boot/, The /boot/ Directory
/dev/, The /dev/ Directory
/etc/, The /etc/ Directory
/mnt/, The /mnt/ Directory
/opt/, The /opt/ Directory
/proc/, The /proc/ Directory
/srv/, The /srv/ Directory
/sys/, The /sys/ Directory
/usr/, The /usr/ Directory
/var/, The /var/ Directory

dirty logs (repairing XFS file systems)


XFS, Repairing an XFS File System

disabling NOP-Outs
iSCSI configuration, iSCSI Root


disabling write caches


write barriers, Disabling Write Caches

discovery
iSCSI, iSCSI Discovery Configuration

disk quotas, Disk Quotas


additional resources, Disk Quota References
assigning per file system, Setting the Grace Period for Soft Limits
assigning per group, Assigning Quotas per Group
assigning per user, Assigning Quotas per User
disabling, Enabling and Disabling
enabling, Configuring Disk Quotas, Enabling and Disabling
/etc/fstab, modifying, Enabling Quotas
creating quota files, Creating the Quota Database Files
quotacheck, running, Creating the Quota Database Files

grace period, Assigning Quotas per User


hard limit, Assigning Quotas per User
management of, Managing Disk Quotas
quotacheck command, using to check, Keeping Quotas Accurate
reporting, Reporting on Disk Quotas

soft limit, Assigning Quotas per User

disk storage (see disk quotas)


parted (see parted)

diskless systems
DHCP, configuring, Configuring DHCP for Diskless Clients
exported file systems, Configuring an Exported File System for Diskless Clients
network booting service, Setting up a Remote Diskless System
remote diskless systems, Setting up a Remote Diskless System
required packages, Setting up a Remote Diskless System
tftp service, configuring, Configuring a tftp Service for Diskless Clients

dm-multipath
iSCSI configuration, iSCSI Settings with dm-multipath

dmraid
RAID, dmraid

dmraid (configuring RAID sets)


RAID, dmraid

drivers (native), Fibre Channel, Native Fibre Channel Drivers and Capabilities


du, du Command
dump levels
XFS, Backup

E
e2fsck, Reverting to an Ext2 File System
e2image (other ext4 file system utilities)
ext4, Other ext4 File System Utilities

e2label
ext4, Other ext4 File System Utilities

e2label (other ext4 file system utilities)


ext4, Other ext4 File System Utilities

enabling/disabling
write barriers, Enabling and Disabling Write Barriers

enhanced LDAP support (autofs version 5)


NFS, Improvements in autofs Version 5 over Version 4

error messages
write barriers, Enabling and Disabling Write Barriers

etc directory, The /etc/ Directory


expert mode (xfs_quota)
XFS, XFS Quota Management

exported file systems


diskless systems, Configuring an Exported File System for Diskless Clients

ext2
reverting from ext3, Reverting to an Ext2 File System

ext3
converting from ext2, Converting to an ext3 File System
creating, Creating an ext3 File System
features, The ext3 File System

ext4
allocation features, The ext4 File System
creating, Creating an ext4 File System
debugfs (other ext4 file system utilities), Other ext4 File System Utilities
e2image (other ext4 file system utilities), Other ext4 File System Utilities
e2label, Other ext4 File System Utilities


e2label (other ext4 file system utilities), Other ext4 File System Utilities
file system types, The ext4 File System
fsync(), The ext4 File System
main features, The ext4 File System
mkfs.ext4, Creating an ext4 File System
mounting, Mounting an ext4 File System
nobarrier mount option, Mounting an ext4 File System
other file system utilities, Other ext4 File System Utilities
quota (other ext4 file system utilities), Other ext4 File System Utilities
resize2fs (resizing ext4), Resizing an ext4 File System
resizing, Resizing an ext4 File System
stride (specifying stripe geometry), Creating an ext4 File System
stripe geometry, Creating an ext4 File System
stripe-width (specifying stripe geometry), Creating an ext4 File System
tune2fs (mounting), Mounting an ext4 File System
write barriers, Mounting an ext4 File System

F
FCoE
configuring an Ethernet interface to use FCoE, Configuring a Fibre Channel over Ethernet
Interface
Fibre Channel over Ethernet, Configuring a Fibre Channel over Ethernet Interface
required packages, Configuring a Fibre Channel over Ethernet Interface

FHS, Overview of Filesystem Hierarchy Standard (FHS), FHS Organization


(see also file system)

Fibre Channel
online storage, Fibre Channel

Fibre Channel API, Fibre Channel API


Fibre Channel drivers (native), Native Fibre Channel Drivers and Capabilities
Fibre Channel over Ethernet
FCoE, Configuring a Fibre Channel over Ethernet Interface

file system
FHS standard, FHS Organization
hierarchy, Overview of Filesystem Hierarchy Standard (FHS)
organization, FHS Organization
structure, File System Structure and Maintenance

File System
Btrfs, Btrfs (Technology Preview)


file system types


ext4, The ext4 File System
GFS2, Global File System 2
XFS, The XFS File System

file systems, Gathering File System Information


ext2 (see ext2)
ext3 (see ext3)

findmnt (command)
listing mounts, Listing Currently Mounted File Systems

FS-Cache
bcull (cache cull limits settings), Setting Cache Cull Limits
brun (cache cull limits settings), Setting Cache Cull Limits
bstop (cache cull limits settings), Setting Cache Cull Limits
cache back end, FS-Cache
cache cull limits, Setting Cache Cull Limits
cache sharing, Cache Sharing
cachefiles, FS-Cache
cachefilesd, Setting up a Cache
coherency data, FS-Cache
indexing keys, FS-Cache
NFS (cache limitations with), Cache Limitations with NFS
NFS (using with), Using the Cache with NFS
performance guarantee, Performance Guarantee
setting up a cache, Setting up a Cache
statistical information (tracking), Statistical Information
tune2fs (setting up a cache), Setting up a Cache

fsync()
ext4, The ext4 File System
XFS, The XFS File System

G
GFS2
file system types, Global File System 2
gfs2.ko, Global File System 2
maximum size, Global File System 2

GFS2 file system maximum size, Global File System 2


gfs2.ko
GFS2, Global File System 2


Global File System 2


file system types, Global File System 2
gfs2.ko, Global File System 2
maximum size, Global File System 2

gquota/gqnoenforce
XFS, XFS Quota Management

H
Hardware RAID (see RAID)
hardware RAID controller drivers
RAID, Linux Hardware RAID Controller Drivers

hierarchy, file system, Overview of Filesystem Hierarchy Standard (FHS)


high-end arrays
write barriers, High-End Arrays

host
Fibre Channel API, Fibre Channel API

how write barriers work


write barriers, How Write Barriers Work

I
I/O alignment and size, Storage I/O Alignment and Size
ATA standards, ATA
block device ioctls (userspace access), Block Device ioctls
Linux I/O stack, Storage I/O Alignment and Size
logical_block_size, Userspace Access
LVM, Logical Volume Manager
READ CAPACITY(16), SCSI
SCSI standards, SCSI
stacking I/O parameters, Stacking I/O Parameters
storage access parameters, Parameters for Storage Access
sysfs interface (userspace access), sysfs Interface
tools (for partitioning and other file system functions), Partition and File System Tools
userspace access, Userspace Access

I/O parameters stacking


I/O alignment and size, Stacking I/O Parameters

iface (configuring for iSCSI offload)


offload and interface binding


iSCSI, Configuring an iface for iSCSI Offload

iface binding/unbinding
offload and interface binding
iSCSI, Binding/Unbinding an iface to a Portal

iface configurations, viewing


offload and interface binding
iSCSI, Viewing Available iface Configurations

iface for software iSCSI


offload and interface binding
iSCSI, Configuring an iface for Software iSCSI

iface settings
offload and interface binding
iSCSI, Viewing Available iface Configurations

importance of write barriers


write barriers, Importance of Write Barriers

increasing file system size


XFS, Increasing the Size of an XFS File System

indexing keys
FS-Cache, FS-Cache

individual user
volume_key, Using volume_key as an Individual User

initiator implementations
offload and interface binding
iSCSI, Viewing Available iface Configurations

installation storage configurations


channel command word (CCW), DASD and zFCP Devices on IBM System Z
DASD and zFCP devices on IBM System z, DASD and zFCP Devices on IBM System Z
DIF/DIX-enabled block devices, Block Devices with DIF/DIX Enabled
iSCSI detection and configuration, iSCSI Detection and Configuration
LUKS/dm-crypt, encrypting block devices using, Encrypting Block Devices Using LUKS
separate partitions (for /home, /opt, /usr/local), Separate Partitions for /home, /opt, /usr/local
stale BIOS RAID metadata, Stale BIOS RAID Metadata


updates, Storage Considerations During Installation


what's new, Storage Considerations During Installation

installer support
RAID, RAID Support in the Anaconda Installer

interactive operation (xfsrestore)


XFS, Restoration

interconnects (scanning)
iSCSI, Scanning iSCSI Interconnects

introduction, Overview
iSCSI
discovery, iSCSI Discovery Configuration
configuration, iSCSI Discovery Configuration
record types, iSCSI Discovery Configuration

offload and interface binding, Configuring iSCSI Offload and Interface Binding
binding/unbinding an iface to a portal, Binding/Unbinding an iface to a Portal
iface (configuring for iSCSI offload), Configuring an iface for iSCSI Offload
iface configurations, viewing, Viewing Available iface Configurations
iface for software iSCSI, Configuring an iface for Software iSCSI
iface settings, Viewing Available iface Configurations
initiator implementations, Viewing Available iface Configurations
software iSCSI, Configuring an iface for Software iSCSI
viewing available iface configurations, Viewing Available iface Configurations

scanning interconnects, Scanning iSCSI Interconnects


software iSCSI, Configuring an iface for Software iSCSI
targets, Logging in to an iSCSI Target
logging in, Logging in to an iSCSI Target

iSCSI API, iSCSI API


iSCSI detection and configuration
storage considerations during installation, iSCSI Detection and Configuration

iSCSI logical unit, resizing, Resizing an iSCSI Logical Unit


iSCSI root
iSCSI configuration, iSCSI Root

K
known issues
adding/removing


LUN (logical unit number), Known Issues with rescan-scsi-bus.sh

L
lazy mount/unmount support (autofs version 5)
NFS, Improvements in autofs Version 5 over Version 4

levels
RAID, RAID Levels and Linear Support

limit (xfs_quota expert mode)


XFS, XFS Quota Management

linear RAID
RAID, RAID Levels and Linear Support

Linux I/O stack


I/O alignment and size, Storage I/O Alignment and Size

logging in
iSCSI targets, Logging in to an iSCSI Target

logical_block_size
I/O alignment and size, Userspace Access

LUKS/dm-crypt, encrypting block devices using


storage considerations during installation, Encrypting Block Devices Using LUKS

LUN (logical unit number)


adding/removing, Adding/Removing a Logical Unit Through rescan-scsi-bus.sh
known issues, Known Issues with rescan-scsi-bus.sh
required packages, Adding/Removing a Logical Unit Through rescan-scsi-bus.sh
rescan-scsi-bus.sh, Adding/Removing a Logical Unit Through rescan-scsi-bus.sh

LVM
I/O alignment and size, Logical Volume Manager

M
main features
ext4, The ext4 File System
XFS, The XFS File System

maximum size
GFS2, Global File System 2


maximum size, GFS2 file system, Global File System 2


mdadm (configuring RAID sets)
RAID, mdadm

mdraid
RAID, mdraid

mirroring
RAID, RAID Levels and Linear Support

mkfs, Formatting and Labeling the Partition


mkfs.ext4
ext4, Creating an ext4 File System

mkfs.xfs
XFS, Creating an XFS File System

mnt directory, The /mnt/ Directory


modifying link loss behavior, Modifying Link Loss Behavior
Fibre Channel, Fibre Channel

mount (client configuration)


NFS, Configuring NFS Client

mount (command), Using the mount Command


listing mounts, Listing Currently Mounted File Systems
mounting a file system, Mounting a File System
moving a mount point, Moving a Mount Point
options, Specifying the Mount Options
shared subtrees, Sharing Mounts
private mount, Sharing Mounts
shared mount, Sharing Mounts
slave mount, Sharing Mounts
unbindable mount, Sharing Mounts

mounting, Mounting a File System


ext4, Mounting an ext4 File System
XFS, Mounting an XFS File System

moving a mount point, Moving a Mount Point


multiple master map entries per autofs mount point (autofs version 5)
NFS, Improvements in autofs Version 5 over Version 4

N
native Fibre Channel drivers, Native Fibre Channel Drivers and Capabilities
network booting service
diskless systems, Setting up a Remote Diskless System

Network File System (see NFS)


NFS
/etc/fstab, Mounting NFS File Systems Using /etc/fstab
/local/directory (client configuration, mounting), Configuring NFS Client
/remote/export (client configuration, mounting), Configuring NFS Client
additional resources, NFS References
installed documentation, Installed Documentation
related books, Related Books
useful websites, Useful Websites

autofs
augmenting, Overriding or Augmenting Site Configuration Files
configuration, Configuring autofs
LDAP, Using LDAP to Store Automounter Maps

autofs version 5, Improvements in autofs Version 5 over Version 4


client
autofs, autofs
configuration, Configuring NFS Client
mount options, Common NFS Mount Options

condrestart, Starting and Stopping the NFS Server


configuration with firewall, Running NFS Behind a Firewall
direct map support (autofs version 5), Improvements in autofs Version 5 over Version 4
enhanced LDAP support (autofs version 5), Improvements in autofs Version 5 over Version 4
FS-Cache, Using the Cache with NFS
hostname formats, Hostname Formats
how it works, Introduction to NFS
introducing, Network File System (NFS)
lazy mount/unmount support (autofs version 5), Improvements in autofs Version 5 over
Version 4
mount (client configuration), Configuring NFS Client
multiple master map entries per autofs mount point (autofs version 5), Improvements in
autofs Version 5 over Version 4
options (client configuration, mounting), Configuring NFS Client
overriding/augmenting site configuration files (autofs), Configuring autofs
proper nsswitch configuration (autofs version 5), use of, Improvements in autofs Version 5
over Version 4
RDMA, Enabling NFS over RDMA (NFSoRDMA)
reloading, Starting and Stopping the NFS Server


required services, Required Services


restarting, Starting and Stopping the NFS Server
rfc2307bis (autofs), Using LDAP to Store Automounter Maps
rpcbind , NFS and rpcbind
security, Securing NFS
file permissions, File Permissions
NFSv3 host access, NFS Security with AUTH_SYS and Export Controls
NFSv4 host access, NFS Security with AUTH_GSS

server (client configuration, mounting), Configuring NFS Client


server configuration, Configuring the NFS Server
/etc/exports , The /etc/exports Configuration File
exportfs command, The exportfs Command
exportfs command with NFSv4, Using exportfs with NFSv4

starting, Starting and Stopping the NFS Server


status, Starting and Stopping the NFS Server
stopping, Starting and Stopping the NFS Server
storing automounter maps, using LDAP to store (autofs), Overriding or Augmenting Site
Configuration Files
TCP, Introduction to NFS
troubleshooting NFS and rpcbind, Troubleshooting NFS and rpcbind
UDP, Introduction to NFS
write barriers, NFS

NFS (cache limitations with)


FS-Cache, Cache Limitations with NFS

NFS (using with)


FS-Cache, Using the Cache with NFS

nobarrier mount option


ext4, Mounting an ext4 File System
XFS, Write Barriers

NOP-Out requests
modifying link loss
iSCSI configuration, NOP-Out Interval/Timeout

NOP-Outs (disabling)
iSCSI configuration, iSCSI Root

O
offline status


Linux SCSI layer, Controlling the SCSI Command Timer and Device Status

offload and interface binding


iSCSI, Configuring iSCSI Offload and Interface Binding

Online logical units


Changing the read/write state, Changing the Read/Write State of an Online Logical Unit

online storage
Fibre Channel, Fibre Channel
overview, Online Storage Management
sysfs, Online Storage Management

troubleshooting, Troubleshooting Online Storage Configuration

opt directory, The /opt/ Directory


options (client configuration, mounting)
NFS, Configuring NFS Client

other file system utilities


ext4, Other ext4 File System Utilities

overriding/augmenting site configuration files (autofs)


NFS, Configuring autofs

overview, Overview
online storage, Online Storage Management

P
Parallel NFS
pNFS, pNFS

parameters for storage access


I/O alignment and size, Parameters for Storage Access

parity
RAID, RAID Levels and Linear Support

parted, Partitions
creating partitions, Creating a Partition
overview, Partitions
removing partitions, Removing a Partition
resizing partitions, Resizing a Partition with fdisk
selecting device, Viewing the Partition Table
table of commands, Partitions
viewing partition table, Viewing the Partition Table


partition table
viewing, Viewing the Partition Table

partitions
creating, Creating a Partition
formatting
mkfs, Formatting and Labeling the Partition

removing, Removing a Partition


resizing, Resizing a Partition with fdisk
viewing list, Viewing the Partition Table

path to storage devices, adding, Adding a Storage Device or Path


path to storage devices, removing, Removing a Path to a Storage Device
performance guarantee
FS-Cache, Performance Guarantee

persistent naming, Persistent Naming


pNFS
Parallel NFS, pNFS

port states (remote), determining


Fibre Channel
modifying link loss behavior, Fibre Channel

pquota/pqnoenforce
XFS, XFS Quota Management

private mount, Sharing Mounts


proc directory, The /proc/ Directory
project limits (setting)
XFS, Setting Project Limits

proper nsswitch configuration (autofs version 5), use of


NFS, Improvements in autofs Version 5 over Version 4

Q
queue_if_no_path
iSCSI configuration, iSCSI Settings with dm-multipath
modifying link loss
iSCSI configuration, replacement_timeout

quota (other ext4 file system utilities)


ext4, Other ext4 File System Utilities


quota management
XFS, XFS Quota Management

quotacheck, Creating the Quota Database Files


quotacheck command
checking quota accuracy with, Keeping Quotas Accurate

quotaoff, Enabling and Disabling


quotaon, Enabling and Disabling

R
RAID
advanced RAID device creation, Creating Advanced RAID Devices
Anaconda support, RAID Support in the Anaconda Installer
configuring RAID sets, Configuring RAID Sets
dmraid, dmraid
dmraid (configuring RAID sets), dmraid
Hardware RAID, RAID Types
hardware RAID controller drivers, Linux Hardware RAID Controller Drivers
installer support, RAID Support in the Anaconda Installer
level 0, RAID Levels and Linear Support
level 1, RAID Levels and Linear Support
level 4, RAID Levels and Linear Support
level 5, RAID Levels and Linear Support
levels, RAID Levels and Linear Support
linear RAID, RAID Levels and Linear Support
mdadm (configuring RAID sets), mdadm
mdraid, mdraid
mirroring, RAID Levels and Linear Support
parity, RAID Levels and Linear Support
reasons to use, Redundant Array of Independent Disks (RAID)
Software RAID, RAID Types
striping, RAID Levels and Linear Support
subsystems of RAID, Linux RAID Subsystems

RDMA
NFS, Enabling NFS over RDMA (NFSoRDMA)

READ CAPACITY(16)
I/O alignment and size, SCSI

record types
discovery
iSCSI, iSCSI Discovery Configuration


Red Hat Enterprise Linux-specific file locations


/etc/sysconfig/, Special Red Hat Enterprise Linux File Locations
(see also sysconfig directory)

/var/cache/yum, Special Red Hat Enterprise Linux File Locations


/var/lib/rpm/, Special Red Hat Enterprise Linux File Locations

remote diskless systems


diskless systems, Setting up a Remote Diskless System

remote port
Fibre Channel API, Fibre Channel API

remote port states, determining


Fibre Channel
modifying link loss behavior, Fibre Channel

removing devices, Removing a Storage Device


removing paths to a storage device, Removing a Path to a Storage Device
repairing file system
XFS, Repairing an XFS File System

repairing XFS file systems with dirty logs


XFS, Repairing an XFS File System

replacement_timeout
modifying link loss
iSCSI configuration, SCSI Error Handler, replacement_timeout

replacement_timeout
iSCSI configuration, iSCSI Root

report (xfs_quota expert mode)


XFS, XFS Quota Management

required packages
adding/removing
LUN (logical unit number), Adding/Removing a Logical Unit Through rescan-scsi-bus.sh

diskless systems, Setting up a Remote Diskless System


FCoE, Configuring a Fibre Channel over Ethernet Interface

rescan-scsi-bus.sh
adding/removing
LUN (logical unit number), Adding/Removing a Logical Unit Through rescan-scsi-bus.sh


resize2fs, Reverting to an Ext2 File System


resize2fs (resizing ext4)
ext4, Resizing an ext4 File System

resized logical units, resizing, Resizing an Online Logical Unit


resizing
ext4, Resizing an ext4 File System

resizing an iSCSI logical unit, Resizing an iSCSI Logical Unit


resizing resized logical units, Resizing an Online Logical Unit
restoring a backup
XFS, Restoration

rfc2307bis (autofs)
NFS, Using LDAP to Store Automounter Maps

rpcbind, NFS and rpcbind


(see also NFS)
NFS, Troubleshooting NFS and rpcbind
rpcinfo, Troubleshooting NFS and rpcbind
status, Starting and Stopping the NFS Server

rpcinfo, Troubleshooting NFS and rpcbind


running sessions, retrieving information about
iSCSI API, iSCSI API

running status
Linux SCSI layer, Controlling the SCSI Command Timer and Device Status

S
scanning interconnects
iSCSI, Scanning iSCSI Interconnects

scanning storage interconnects, Scanning Storage Interconnects


SCSI command timer
Linux SCSI layer, Command Timer

SCSI Error Handler


modifying link loss
iSCSI configuration, SCSI Error Handler

SCSI standards
I/O alignment and size, SCSI

separate partitions (for /home, /opt, /usr/local)


storage considerations during installation, Separate Partitions for /home, /opt, /usr/local

server (client configuration, mounting)


NFS, Configuring NFS Client

setting up a cache
FS-Cache, Setting up a Cache

shared mount, Sharing Mounts


shared subtrees, Sharing Mounts
private mount, Sharing Mounts
shared mount, Sharing Mounts
slave mount, Sharing Mounts
unbindable mount, Sharing Mounts

simple mode (xfsrestore)


XFS, Restoration

slave mount, Sharing Mounts


SMB (see SMB)
software iSCSI
iSCSI, Configuring an iface for Software iSCSI
offload and interface binding
iSCSI, Configuring an iface for Software iSCSI

Software RAID (see RAID)


solid-state disks
deployment, Solid-State Disk Deployment Guidelines
deployment guidelines, Solid-State Disk Deployment Guidelines
SSD, Solid-State Disk Deployment Guidelines
throughput classes, Solid-State Disk Deployment Guidelines
TRIM command, Solid-State Disk Deployment Guidelines

specific session timeouts, configuring


iSCSI configuration, Configuring Timeouts for a Specific Session

srv directory, The /srv/ Directory


SSD
solid-state disks, Solid-State Disk Deployment Guidelines

SSM
System Storage Manager, System Storage Manager (SSM)
Back Ends, SSM Back Ends
Installation, Installing SSM
list command, Displaying Information about All Detected Devices


resize command, Increasing a Volume's Size


snapshot command, Snapshot

stacking I/O parameters


I/O alignment and size, Stacking I/O Parameters

stale BIOS RAID metadata


storage considerations during installation, Stale BIOS RAID Metadata

statistical information (tracking)


FS-Cache, Statistical Information

storage access parameters


I/O alignment and size, Parameters for Storage Access

storage considerations during installation


channel command word (CCW), DASD and zFCP Devices on IBM System Z
DASD and zFCP devices on IBM System z, DASD and zFCP Devices on IBM System Z
DIF/DIX-enabled block devices, Block Devices with DIF/DIX Enabled
iSCSI detection and configuration, iSCSI Detection and Configuration
LUKS/dm-crypt, encrypting block devices using, Encrypting Block Devices Using LUKS
separate partitions (for /home, /opt, /usr/local), Separate Partitions for /home, /opt, /usr/local
stale BIOS RAID metadata, Stale BIOS RAID Metadata
updates, Storage Considerations During Installation
what's new, Storage Considerations During Installation

storage interconnects, scanning, Scanning Storage Interconnects


storing automounter maps, using LDAP to store (autofs)
NFS, Overriding or Augmenting Site Configuration Files

stride (specifying stripe geometry)


ext4, Creating an ext4 File System

stripe geometry
ext4, Creating an ext4 File System

stripe-width (specifying stripe geometry)


ext4, Creating an ext4 File System

striping
RAID, RAID Levels and Linear Support
RAID fundamentals, Redundant Array of Independent Disks (RAID)

su (mkfs.xfs sub-options)
XFS, Creating an XFS File System


subsystems of RAID
RAID, Linux RAID Subsystems

suspending
XFS, Suspending an XFS File System

sw (mkfs.xfs sub-options)
XFS, Creating an XFS File System

swap space, Swap Space


creating, Adding Swap Space
expanding, Adding Swap Space
file
creating, Creating a Swap File, Removing a Swap File

LVM2
creating, Creating an LVM2 Logical Volume for Swap
extending, Extending Swap on an LVM2 Logical Volume
reducing, Reducing Swap on an LVM2 Logical Volume
removing, Removing an LVM2 Logical Volume for Swap

moving, Moving Swap Space


recommended size, Swap Space
removing, Removing Swap Space

sys directory, The /sys/ Directory


sysconfig directory, Special Red Hat Enterprise Linux File Locations
sysfs
overview
online storage, Online Storage Management

sysfs interface (userspace access)


I/O alignment and size, sysfs Interface

system information
file systems, Gathering File System Information
/dev/shm, df Command

System Storage Manager


SSM, System Storage Manager (SSM)
Back Ends, SSM Back Ends
Installation, Installing SSM
list command, Displaying Information about All Detected Devices
resize command, Increasing a Volume's Size


snapshot command, Snapshot

T
targets
iSCSI, Logging in to an iSCSI Target

tftp service, configuring


diskless systems, Configuring a tftp Service for Diskless Clients

throughput classes
solid-state disks, Solid-State Disk Deployment Guidelines

timeouts for a specific session, configuring


iSCSI configuration, Configuring Timeouts for a Specific Session

tools (for partitioning and other file system functions)


I/O alignment and size, Partition and File System Tools

tracking statistical information


FS-Cache, Statistical Information

transport
Fibre Channel API, Fibre Channel API

TRIM command
solid-state disks, Solid-State Disk Deployment Guidelines

troubleshooting
online storage, Troubleshooting Online Storage Configuration

troubleshooting NFS and rpcbind


NFS, Troubleshooting NFS and rpcbind

tune2fs
converting to ext3 with, Converting to an ext3 File System
reverting to ext2 with, Reverting to an Ext2 File System

tune2fs (mounting)
ext4, Mounting an ext4 File System

tune2fs (setting up a cache)


FS-Cache, Setting up a Cache

U
udev rule (timeout)


command timer (SCSI), Command Timer

umount, Unmounting a File System


unbindable mount, Sharing Mounts
unmounting, Unmounting a File System
updates
storage considerations during installation, Storage Considerations During Installation

uquota/uqnoenforce
XFS, XFS Quota Management

userspace access
I/O alignment and size, Userspace Access

userspace API files


Fibre Channel API, Fibre Channel API

usr directory, The /usr/ Directory

V
var directory, The /var/ Directory
var/lib/rpm/ directory, Special Red Hat Enterprise Linux File Locations
var/spool/up2date/ directory, Special Red Hat Enterprise Linux File Locations
verifying if a device is blocked
Fibre Channel
modifying link loss behavior, Fibre Channel

version
what is new
autofs, Improvements in autofs Version 5 over Version 4

viewing available iface configurations


offload and interface binding
iSCSI, Viewing Available iface Configurations

virtual file system (/proc)


/proc/devices, The /proc Virtual File System
/proc/filesystems, The /proc Virtual File System
/proc/mdstat, The /proc Virtual File System
/proc/mounts, The /proc Virtual File System
/proc/mounts/, The /proc Virtual File System
/proc/partitions, The /proc Virtual File System

virtual storage, Virtual Storage


volume_key
commands, volume_key Commands
individual user, Using volume_key as an Individual User

W
what's new
storage considerations during installation, Storage Considerations During Installation

World Wide Identifier (WWID)


persistent naming, World Wide Identifier (WWID)

write barriers
battery-backed write caches, Battery-Backed Write Caches
definition, Write Barriers
disabling write caches, Disabling Write Caches
enabling/disabling, Enabling and Disabling Write Barriers
error messages, Enabling and Disabling Write Barriers
ext4, Mounting an ext4 File System
high-end arrays, High-End Arrays
how write barriers work, How Write Barriers Work
importance of write barriers, Importance of Write Barriers
NFS, NFS
XFS, Write Barriers

write caches, disabling


write barriers, Disabling Write Caches

WWID
persistent naming, World Wide Identifier (WWID)

X
XFS
allocation features, The XFS File System
backup/restoration, Backing Up and Restoring XFS File Systems
creating, Creating an XFS File System
cumulative mode (xfsrestore), Restoration
dump levels, Backup
expert mode (xfs_quota), XFS Quota Management
file system types, The XFS File System
fsync(), The XFS File System
gquota/gqnoenforce, XFS Quota Management
increasing file system size, Increasing the Size of an XFS File System


interactive operation (xfsrestore), Restoration


limit (xfs_quota expert mode), XFS Quota Management
main features, The XFS File System
mkfs.xfs, Creating an XFS File System
mounting, Mounting an XFS File System
nobarrier mount option, Write Barriers
pquota/pqnoenforce, XFS Quota Management
project limits (setting), Setting Project Limits
quota management, XFS Quota Management
repairing file system, Repairing an XFS File System
repairing XFS file systems with dirty logs, Repairing an XFS File System
report (xfs_quota expert mode), XFS Quota Management
simple mode (xfsrestore), Restoration
su (mkfs.xfs sub-options), Creating an XFS File System
suspending, Suspending an XFS File System
sw (mkfs.xfs sub-options), Creating an XFS File System
uquota/uqnoenforce, XFS Quota Management
write barriers, Write Barriers
xfsdump, Backup
xfsprogs, Suspending an XFS File System
xfsrestore, Restoration
xfs_admin, Other XFS File System Utilities
xfs_bmap, Other XFS File System Utilities
xfs_copy, Other XFS File System Utilities
xfs_db, Other XFS File System Utilities
xfs_freeze, Suspending an XFS File System
xfs_fsr, Other XFS File System Utilities
xfs_growfs, Increasing the Size of an XFS File System
xfs_info, Other XFS File System Utilities
xfs_mdrestore, Other XFS File System Utilities
xfs_metadump, Other XFS File System Utilities
xfs_quota, XFS Quota Management
xfs_repair, Repairing an XFS File System

xfsdump
XFS, Backup

xfsprogs
XFS, Suspending an XFS File System

xfsrestore
XFS, Restoration


xfs_admin
XFS, Other XFS File System Utilities

xfs_bmap
XFS, Other XFS File System Utilities

xfs_copy
XFS, Other XFS File System Utilities

xfs_db
XFS, Other XFS File System Utilities

xfs_freeze
XFS, Suspending an XFS File System

xfs_fsr
XFS, Other XFS File System Utilities

xfs_growfs
XFS, Increasing the Size of an XFS File System

xfs_info
XFS, Other XFS File System Utilities

xfs_mdrestore
XFS, Other XFS File System Utilities

xfs_metadump
XFS, Other XFS File System Utilities

xfs_quota
XFS, XFS Quota Management

xfs_repair
XFS, Repairing an XFS File System

Red Hat Enterprise Linux 7.0 Beta
System Administrators Guide

Deployment, Configuration and Administration of Red Hat Enterprise Linux 7

Jaromír Hradílek Douglas Silas Martin Prpič


Stephen Wadeley Eva Kopalová Ella Lackey
Tomáš Čapek Petr Kovář Miroslav Svoboda
Petr Bokoč Peter Ondrejka Eliška Slobodová
John Ha David O'Brien Michael Hideo
Don Domingo
Red Hat Enterprise Linux 7.0 Beta System Administrators Guide

Deployment, Configuration and Administration of Red Hat Enterprise Linux 7

Jaromír Hradílek
Red Hat Engineering Content Services
jhradilek@redhat.com

Douglas Silas
Red Hat Engineering Content Services
silas@redhat.com

Martin Prpič
Red Hat Engineering Content Services
mprpic@redhat.com

Stephen Wadeley
Red Hat Engineering Content Services
swadeley@redhat.com

Eva Kopalová
Red Hat Engineering Content Services
ekopalova@redhat.com

Ella Lackey
Red Hat Engineering Content Services
dlackey@redhat.com

Tomáš Čapek
Red Hat Engineering Content Services
tcapek@redhat.com

Petr Kovář
Red Hat Engineering Content Services
pkovar@redhat.com

Miroslav Svoboda
Red Hat Engineering Content Services
msvoboda@redhat.com

Petr Bokoč
Red Hat Engineering Content Services
pbokoc@redhat.com

Peter Ondrejka
Red Hat Engineering Content Services
pondrejk@redhat.com

Eliška Slobodová
Red Hat Engineering Content Services
eslobodo@redhat.com

John Ha
Red Hat Engineering Content Services

David O'Brien
Red Hat Engineering Content Services

Michael Hideo
Red Hat Engineering Content Services

Don Domingo
Red Hat Engineering Content Services

Legal Notice

Copyright © 2014 Red Hat, Inc.

This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported
License. If you distribute this document, or a modified version of it, you must provide attribution to Red
Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be
removed.

Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section
4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo,
and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.

Java ® is a registered trademark of Oracle and/or its affiliates.

XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.

MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and other
countries.

Node.js ® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or
endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack ® Word Mark and OpenStack Logo are either registered trademarks/service marks or
trademarks/service marks of the OpenStack Foundation, in the United States and other countries and
are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored
by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.


Abstract
The System Administrator's Guide documents relevant information regarding the deployment,
configuration and administration of Red Hat Enterprise Linux 7. It is oriented towards system
administrators with a basic understanding of the system. Note: This document is under development, is
subject to substantial change, and is provided only as a preview. The included information and
instructions should not be considered complete, and should be used with caution.
Table of Contents


Preface 17
1. Target Audience 17
2. About This Book 17
3. How to Read this Book 17
4. Document Conventions 20
4.1. Typographic Conventions 20
4.2. Pull-quote Conventions 21
4.3. Notes and Warnings 22
5. Feedback 22
6. Acknowledgments 23

Part I. Basic System Configuration 24

Chapter 1. System Locale and Keyboard Configuration 25
1.1. Setting the System Locale 25
1.1.1. Displaying the Current Status 25
1.1.2. Listing Available Locales 26
1.1.3. Setting the Locale 26
1.2. Changing the Keyboard Layout 27
1.2.1. Displaying the Current Settings 27
1.2.2. Listing Available Keymaps 27
1.2.3. Setting the Keymap 27
1.3. Additional Resources 28
Installed Documentation 28
See Also 28

Chapter 2. Configuring the Date and Time 30
2.1. Using the timedatectl Command 30
2.1.1. Displaying the Current Date and Time 30
2.1.2. Changing the Current Date 31
2.1.3. Changing the Current Time 31
2.1.4. Changing the Time Zone 32
2.1.5. Synchronizing the System Clock with a Remote Server 32
2.2. Using the date Command 33
2.2.1. Displaying the Current Date and Time 33
2.2.2. Changing the Current Date 34
2.2.3. Changing the Current Time 34
2.3. Additional Resources 35
Installed Documentation 35
See Also 35

Chapter 3. Managing Users and Groups 36
3.1. Introduction to Users and Groups 36
3.1.1. User Private Groups 36
3.1.2. Shadow Passwords 36
3.2. Using the User Manager Tool 37
3.2.1. Viewing Users and Groups 37
3.2.2. Adding a New User 38
3.2.3. Adding a New Group 39
3.2.4. Modifying User Properties 39
3.2.5. Modifying Group Properties 40
3.3. Using Command Line Tools 41
3.3.1. Adding a New User 41


Explaining the Process 42


3.3.2. Adding a New Group 44
3.3.3. Creating Group Directories 44
3.4. Additional Resources 45
Installed Documentation 45
Online Documentation 46
See Also 46

Chapter 4. Gaining Privileges 47
4.1. The su Command 47
4.2. The sudo Command 48
4.3. Additional Resources 49
Installed Documentation 49
Online Documentation 49
See Also 50

Part II. Package Management 51

Chapter 5. Yum 52
5.1. Checking For and Updating Packages 52
5.1.1. Checking For Updates 52
5.1.2. Updating Packages 53
Updating a Single Package 53
Updating All Packages and Their Dependencies 55
Updating Security-Related Packages 55
5.1.3. Preserving Configuration File Changes 55
5.2. Working with Packages 55
5.2.1. Searching Packages 55
Filtering the Results 56
5.2.2. Listing Packages 56
Listing Repositories 58
5.2.3. Displaying Package Information 58
Using yumdb 59
5.2.4. Installing Packages 60
5.2.5. Downloading Packages 64
5.2.6. Removing Packages 64
5.3. Working with Package Groups 65
5.3.1. Listing Package Groups 65
5.3.2. Installing a Package Group 67
5.3.3. Removing a Package Group 68
5.4. Working with Transaction History 68
5.4.1. Listing Transactions 68
5.4.2. Examining Transactions 72
5.4.3. Reverting and Repeating Transactions 74
5.4.4. Starting New Transaction History 74
5.5. Configuring Yum and Yum Repositories 75
5.5.1. Setting [main] Options 75
5.5.2. Setting [repository] Options 79
5.5.3. Using Yum Variables 81
5.5.4. Viewing the Current Configuration 82
5.5.5. Adding, Enabling, and Disabling a Yum Repository 83
Adding a Yum Repository 83
Enabling a Yum Repository 84
Disabling a Yum Repository 84
5.5.6. Creating a Yum Repository 85
5.6. Yum Plug-ins 85


5.6.1. Enabling, Configuring, and Disabling Yum Plug-ins 86


5.6.2. Installing Additional Yum Plug-ins 86
5.6.3. Working with Plug-ins 87
Backup 87
Installation and Download 89
Security and Package Protection 90
System Registration 91
5.7. Additional Resources 92
Installed Documentation 92
Online Documentation 92
See Also 92

Chapter 6. PackageKit 94
6.1. Updating Packages with Software Update 94
Setting the Update-Checking Interval 95
6.2. Using Add/Remove Software 95
6.2.1. Refreshing Software Sources (Yum Repositories) 96
6.2.2. Finding Packages with Filters 96
6.2.3. Installing and Removing Packages (and Dependencies) 98
6.2.4. Installing and Removing Package Groups 100
6.2.5. Viewing the Transaction Log 101
6.3. PackageKit Architecture 101
6.4. Additional Resources 102
Online Documentation 102
See Also 103

Part III. Infrastructure Services 104

Chapter 7. Managing Services with systemd 105
7.1. Introduction to systemd 105
7.1.1. Main Features 105
7.1.2. Compatibility Changes 106
7.2. Managing System Services 107
7.2.1. Listing Services 108
7.2.2. Displaying Service Status 110
7.2.3. Starting a Service 111
7.2.4. Stopping a Service 111
7.2.5. Restarting a Service 112
7.2.6. Enabling a Service 112
7.2.7. Disabling a Service 113
7.3. Working with systemd Targets 114
7.3.1. Viewing the Default Target 115
7.3.2. Viewing the Current Target 116
7.3.3. Changing the Default Target 116
7.3.4. Changing the Current Target 117
7.3.5. Changing to Rescue Mode 117
7.3.6. Changing to Emergency Mode 118
7.4. Shutting Down, Suspending, and Hibernating the System 118
7.4.1. Shutting Down the System 119
7.4.2. Restarting the System 119
7.4.3. Suspending the System 119
7.4.4. Hibernating the System 120
7.5. Controlling systemd on a Remote Machine 120
7.6. Additional Resources 121
Installed Documentation 121
Online Documentation 121


See Also 122

Chapter 8. OpenSSH 123
8.1. The SSH Protocol 123
8.1.1. Why Use SSH? 123
8.1.2. Main Features 124
8.1.3. Protocol Versions 124
8.1.4. Event Sequence of an SSH Connection 125
8.1.4.1. Transport Layer 125
8.1.4.2. Authentication 126
8.1.4.3. Channels 126
8.2. Configuring OpenSSH 126
8.2.1. Configuration Files 126
8.2.2. Starting an OpenSSH Server 128
8.2.3. Requiring SSH for Remote Connections 128
8.2.4. Using Key-based Authentication 128
8.2.4.1. Generating Key Pairs 129
8.2.4.2. Configuring ssh-agent 131
8.3. OpenSSH Clients 133
8.3.1. Using the ssh Utility 134
8.3.2. Using the scp Utility 135
8.3.3. Using the sftp Utility 135
8.4. More Than a Secure Shell 136
8.4.1. X11 Forwarding 136
8.4.2. Port Forwarding 137
8.5. Additional Resources 138
Installed Documentation 138
Online Documentation 138
See Also 138

Chapter 9. TigerVNC 140
9.1. VNC Server 140
9.1.1. Installing VNC Server 140
9.1.2. Configuring VNC Server 140
9.1.3. Starting VNC Server 141
9.1.3.1. Troubleshooting 141
9.1.4. Terminating VNC session 142
9.2. VNC Viewer 142
9.2.1. Connecting to VNC Server 142
9.2.1.1. Switching off firewall to enable VNC connection 143
9.2.2. Connecting to VNC Server using SSH 143
9.3. Additional Resources 143

Part IV. Servers 144

Chapter 10. Web Servers 145
10.1. The Apache HTTP Server 145
10.1.1. Notable Changes 145
10.1.2. Updating the Configuration 148
10.1.3. Running the httpd Service 148
10.1.3.1. Starting the Service 148
10.1.3.2. Stopping the Service 149
10.1.3.3. Restarting the Service 149
10.1.3.4. Verifying the Service Status 149
10.1.4. Editing the Configuration Files 150
10.1.5. Working with Modules 150


10.1.5.1. Loading a Module 150


10.1.5.2. Writing a Module 150
10.1.6. Setting Up Virtual Hosts 151
10.1.7. Setting Up an SSL Server 151
10.1.7.1. An Overview of Certificates and Security 152
10.1.7.2. Enabling the mod_ssl Module 152
10.1.7.3. Using an Existing Key and Certificate 153
10.1.7.4. Generating a New Key and Certificate 153
10.1.8. Additional Resources 158
10.1.8.1. Installed Documentation 158
10.1.8.2. Useful Websites 158

Chapter 11. Mail Servers 159
11.1. Email Protocols 159
11.1.1. Mail Transport Protocols 159
11.1.1.1. SMTP 159
11.1.2. Mail Access Protocols 159
11.1.2.1. POP 160
11.1.2.2. IMAP 160
11.1.2.3. Dovecot 161
11.2. Email Program Classifications 162
11.2.1. Mail Transport Agent 162
11.2.2. Mail Delivery Agent 162
11.2.3. Mail User Agent 163
11.3. Mail Transport Agents 163
11.3.1. Postfix 163
11.3.1.1. The Default Postfix Installation 163
11.3.1.2. Basic Postfix Configuration 164
11.3.1.3. Using Postfix with LDAP 164
11.3.1.3.1. The /etc/aliases lookup example 165
11.3.2. Sendmail 165
11.3.2.1. Purpose and Limitations 165
11.3.2.2. The Default Sendmail Installation 166
11.3.2.3. Common Sendmail Configuration Changes 167
11.3.2.4. Masquerading 168
11.3.2.5. Stopping Spam 168
11.3.2.6. Using Sendmail with LDAP 169
11.3.3. Fetchmail 169
11.3.3.1. Fetchmail Configuration Options 170
11.3.3.2. Global Options 171
11.3.3.3. Server Options 171
11.3.3.4. User Options 172
11.3.3.5. Fetchmail Command Options 172
11.3.3.6. Informational or Debugging Options 172
11.3.3.7. Special Options 173
11.3.4. Mail Transport Agent (MTA) Configuration 173
11.4. Mail Delivery Agents 173
11.4.1. Procmail Configuration 174
11.4.2. Procmail Recipes 175
11.4.2.1. Delivering vs. Non-Delivering Recipes 175
11.4.2.2. Flags 176
11.4.2.3. Specifying a Local Lockfile 176
11.4.2.4. Special Conditions and Actions 176
11.4.2.5. Recipe Examples 177
11.4.2.6. Spam Filters 178
11.5. Mail User Agents 179


11.5.1. Securing Communication 179


11.5.1.1. Secure Email Clients 179
11.5.1.2. Securing Email Client Communications 180
11.6. Additional Resources 181
11.6.1. Installed Documentation 181
11.6.2. Useful Websites 182
11.6.3. Related Books 182

Chapter 12. Directory Servers 183
12.1. OpenLDAP 183
12.1.1. Introduction to LDAP 183
12.1.1.1. LDAP Terminology 183
12.1.1.2. OpenLDAP Features 184
12.1.1.3. OpenLDAP Server Setup 184
12.1.2. Installing the OpenLDAP Suite 185
12.1.2.1. Overview of OpenLDAP Server Utilities 185
12.1.2.2. Overview of OpenLDAP Client Utilities 186
12.1.2.3. Overview of Common LDAP Client Applications 187
12.1.3. Configuring an OpenLDAP Server 187
12.1.3.1. Changing the Global Configuration 188
12.1.3.2. Changing the Database-Specific Configuration 190
12.1.3.3. Extending Schema 192
12.1.4. Running an OpenLDAP Server 192
12.1.4.1. Starting the Service 192
12.1.4.2. Stopping the Service 192
12.1.4.3. Restarting the Service 193
12.1.4.4. Verifying the Service Status 193
12.1.5. Configuring a System to Authenticate Using OpenLDAP 193
12.1.5.1. Migrating Old Authentication Information to LDAP Format 193
12.1.6. Additional Resources 194
12.1.6.1. Installed Documentation 194
12.1.6.2. Useful Websites 195
12.1.6.3. Related Books 195

Chapter 13. File and Print Servers 197
13.1. Samba 197
13.1.1. Introduction to Samba 197
13.1.1.1. Samba Features 197
13.1.2. Samba Daemons and Related Services 198
13.1.2.1. Samba Daemons 198
13.1.3. Connecting to a Samba Share 199
13.1.3.1. Command Line 200
13.1.3.2. Mounting the Share 200
13.1.4. Configuring a Samba Server 201
13.1.4.1. Graphical Configuration 201
13.1.4.2. Command Line Configuration 201
13.1.4.3. Encrypted Passwords 202
13.1.5. Starting and Stopping Samba 202
13.1.6. Samba Network Browsing 203
13.1.6.1. Domain Browsing 203
13.1.6.2. WINS (Windows Internet Name Server) 204
13.1.7. Samba Distribution Programs 204
13.1.8. Additional Resources 208
13.1.8.1. Installed Documentation 209
13.1.8.2. Related Books 209
13.1.8.3. Useful Websites 209


13.2. FTP 209


13.2.1. The File Transfer Protocol 210
13.2.2. The vsftpd Server 211
13.2.2.1. Starting and Stopping vsftpd 211
13.2.2.2. Starting Multiple Copies of vsftpd 212
13.2.2.3. Encrypting vsftpd Connections Using SSL 213
13.2.2.4. SELinux Policy for vsftpd 213
13.2.3. Additional Resources 214
13.2.3.1. Installed Documentation 214
13.2.3.2. Online Documentation 214
13.3. Printer Configuration 215
13.3.1. Starting the Printer Configuration Tool 216
13.3.2. Starting Printer Setup 216
13.3.3. Adding a Local Printer 216
13.3.4. Adding an AppSocket/HP JetDirect printer 217
13.3.5. Adding an IPP Printer 218
13.3.6. Adding an LPD/LPR Host or Printer 219
13.3.7. Adding a Samba (SMB) printer 220
13.3.8. Selecting the Printer Model and Finishing 222
13.3.9. Printing a Test Page 224
13.3.10. Modifying Existing Printers 224
13.3.10.1. The Settings Page 225
13.3.10.2. The Policies Page 225
13.3.10.2.1. Sharing Printers 225
13.3.10.2.2. The Access Control Page 226
13.3.10.2.3. The Printer Options Page 227
13.3.10.2.4. Job Options Page 227
13.3.10.2.5. Ink/Toner Levels Page 228
13.3.10.3. Managing Print Jobs 229
13.3.11. Additional Resources 230
13.3.11.1. Installed Documentation 230
13.3.11.2. Useful Websites 231

Chapter 14. Configuring NTP Using the chrony Suite 232
14.1. Introduction to the chrony Suite 232
14.1.1. Differences Between ntpd and chronyd 232
14.1.2. Choosing Between NTP Daemons 233
14.2. Understanding chrony and Its Configuration 233
14.2.1. Understanding chronyd 233
14.2.2. Understanding chronyc 233
14.2.3. Understanding the chrony Configuration Commands 233
14.2.4. Security with chronyc 237
14.3. Using chrony 239
14.3.1. Checking if chrony is Installed 239
14.3.2. Installing chrony 239
14.3.3. Checking the Status of chronyd 239
14.3.4. Starting chronyd 239
14.3.5. Stopping chronyd 239
14.3.6. Checking if chrony is Synchronized 240
14.3.6.1. Checking chrony Tracking 240
14.3.6.2. Checking chrony Sources 242
14.3.6.3. Checking chrony Source Statistics 243
14.3.7. Manually Adjusting the System Clock 244
14.4. Setting Up chrony for Different Environments 244
14.4.1. Setting Up chrony for a System Which is Infrequently Connected 244
14.4.2. Setting Up chrony for a System in an Isolated Network 245


14.5. Using chronyc 245


14.5.1. Using chronyc to Control chronyd 245
14.5.2. Using chronyc for Remote Administration 246
14.6. Additional Resources 246
14.6.1. Installed Documentation 247
14.6.2. Useful Websites 247

Chapter 15. Configuring NTP Using ntpd 248
15.1. Introduction to NTP 248
15.2. NTP Strata 248
15.3. Understanding NTP 249
15.4. Understanding the Drift File 250
15.5. UTC, Timezones, and DST 250
15.6. Authentication Options for NTP 251
15.7. Managing the Time on Virtual Machines 251
15.8. Understanding Leap Seconds 251
15.9. Understanding the ntpd Configuration File 252
15.10. Understanding the ntpd Sysconfig File 253
15.11. Disabling chrony 253
15.12. Checking if the NTP Daemon is Installed 254
15.13. Installing the NTP Daemon (ntpd) 254
15.14. Checking the Status of NTP 254
15.15. Configure the Firewall to Allow Incoming NTP Packets 254
15.15.1. Change the Firewall Settings 255
15.15.2. Open Ports in the Firewall for NTP Packets 255
15.16. Configure ntpdate Servers 255
15.17. Configure NTP 256
15.17.1. Configure Access Control to an NTP Service 256
15.17.2. Configure Rate Limiting Access to an NTP Service 257
15.17.3. Adding a Peer Address 257
15.17.4. Adding a Server Address 258
15.17.5. Adding a Broadcast or Multicast Server Address 258
15.17.6. Adding a Manycast Client Address 258
15.17.7. Adding a Broadcast Client Address 259
15.17.8. Adding a Manycast Server Address 259
15.17.9. Adding a Multicast Client Address 259
15.17.10. Configuring the Burst Option 259
15.17.11. Configuring the iburst Option 260
15.17.12. Configuring Symmetric Authentication Using a Key 260
15.17.13. Configuring the Poll Interval 260
15.17.14. Configuring Server Preference 260
15.17.15. Configuring the Time-to-Live for NTP Packets 261
15.17.16. Configuring the NTP Version to Use 261
15.18. Configuring the Hardware Clock Update 261
15.19. Configuring Clock Sources 261
15.20. Additional Resources 262
15.20.1. Installed Documentation 262
15.20.2. Useful Websites 262

Chapter 16. Configuring PTP Using ptp4l 263
16.1. Introduction to PTP 263
16.1.1. Understanding PTP 263
16.1.2. Advantages of PTP 264
16.2. Using PTP 265
16.2.1. Checking for Driver and Hardware Support 265
16.2.2. Installing PTP 265


16.2.3. Starting ptp4l 266


16.2.3.1. Selecting a Delay Measurement Mechanism 267
16.3. Specifying a Configuration File 267
16.4. Using the PTP Management Client 268
16.5. Synchronizing the Clocks 268
16.6. Verifying Time Synchronization 269
16.7. Serving PTP Time with NTP 271
16.8. Serving NTP Time with PTP 271
16.9. Improving Accuracy 272
16.10. Additional Resources 272
16.10.1. Installed Documentation 272
16.10.2. Useful Websites 272

Part V. Monitoring and Automation 273

Chapter 17. System Monitoring Tools 274
17.1. Viewing System Processes 274
17.1.1. Using the ps Command 274
17.1.2. Using the top Command 275
17.1.3. Using the System Monitor Tool 276
17.2. Viewing Memory Usage 277
17.2.1. Using the free Command 277
17.2.2. Using the System Monitor Tool 278
17.3. Viewing CPU Usage 279
17.3.1. Using the System Monitor Tool 279
17.4. Viewing Block Devices and File Systems 280
17.4.1. Using the lsblk Command 280
17.4.2. Using the blkid Command 281
17.4.3. Using the findmnt Command 282
17.4.4. Using the df Command 283
17.4.5. Using the du Command 284
17.4.6. Using the System Monitor Tool 285
17.5. Viewing Hardware Information 285
17.5.1. Using the lspci Command 285
17.5.2. Using the lsusb Command 286
17.5.3. Using the lspcmcia Command 287
17.5.4. Using the lscpu Command 287
17.6. Monitoring Performance with Net-SNMP 288
17.6.1. Installing Net-SNMP 288
17.6.2. Running the Net-SNMP Daemon 289
17.6.2.1. Starting the Service 289
17.6.2.2. Stopping the Service 289
17.6.2.3. Restarting the Service 290
17.6.3. Configuring Net-SNMP 290
17.6.3.1. Setting System Information 290
17.6.3.2. Configuring Authentication 291
Configuring SNMP Version 2c Community 291
Configuring SNMP Version 3 User 291
17.6.4. Retrieving Performance Data over SNMP 292
17.6.4.1. Hardware Configuration 293
17.6.4.2. CPU and Memory Information 293
17.6.4.3. File System and Disk Information 295
17.6.4.4. Network Information 295
17.6.5. Extending Net-SNMP 296
17.6.5.1. Extending Net-SNMP with Shell Scripts 296
17.6.5.2. Extending Net-SNMP with Perl 298


17.7. Additional Resources 302


17.7.1. Installed Documentation 302

Chapter 18. OpenLMI 303
18.1. About OpenLMI 303
18.1.1. Main Features 303
18.1.2. Management Capabilities 303
18.2. Installing OpenLMI 304
18.2.1. Installing OpenLMI on a Managed System 304
18.2.2. Installing OpenLMI on a Client System 305
18.3. Configuring SSL Certificates for OpenPegasus 305
18.3.1. Managing Self-signed Certificates 306
18.3.2. Managing Authority-signed Certificates with Identity Management (Recommended) 307
18.3.3. Managing Authority-signed Certificates Manually 308
18.4. Using LMIShell 310
18.4.1. Starting, Using, and Exiting LMIShell 310
Starting LMIShell in Interactive Mode 310
Using Tab Completion 310
Browsing History 310
Handling Exceptions 311
Configuring a Temporary Cache 311
Exiting LMIShell 311
Running an LMIShell Script 312
18.4.2. Connecting to a CIMOM 312
Connecting to a Remote CIMOM 312
Connecting to a Local CIMOM 312
Verifying a Connection to a CIMOM 313
18.4.3. Working with Namespaces 313
Listing Available Namespaces 313
Accessing Namespace Objects 314
18.4.4. Working with Classes 314
Listing Available Classes 315
Accessing Class Objects 315
Examining Class Objects 316
Listing Available Methods 317
Listing Available Properties 317
Listing and Viewing ValueMap Properties 318
Fetching a CIMClass Object 321
18.4.5. Working with Instances 321
Accessing Instances 321
Examining Instances 322
Creating New Instances 323
Deleting Individual Instances 324
Listing and Accessing Available Properties 325
Listing and Using Available Methods 326
Listing and Viewing ValueMap Parameters 328
Refreshing Instance Objects 330
Displaying MOF Representation 330
18.4.6. Working with Instance Names 331
Accessing Instance Names 331
Examining Instance Names 332
Creating New Instance Names 332
Listing and Accessing Key Properties 333
Converting Instance Names to Instances 334
18.4.7. Working with Associated Objects 334
Accessing Associated Instances 334


Accessing Associated Instance Names 336


18.4.8. Working with Association Objects 336
Accessing Association Instances 337
Accessing Association Instance Names 338
18.4.9. Working with Indications 339
Subscribing to Indications 339
Listing Subscribed Indications 340
Unsubscribing from Indications 340
Implementing an Indication Handler 341
18.4.10. Example Usage 342
Using the OpenLMI Service Provider 343
Using the OpenLMI Networking Provider 344
Using the OpenLMI Storage Provider 347
Using the OpenLMI Hardware Provider 349
18.5. Using OpenLMI Scripts 350
18.6. Additional Resources 351
Installed Documentation 351
Online Documentation 351
See Also 351

Chapter 19. Viewing and Managing Log Files 353
19.1. Locating Log Files 353
19.2. Basic Configuration of Rsyslog 353
19.2.1. Filters 354
19.2.2. Actions 357
Specifying Multiple Actions 362
19.2.3. Templates 362
Generating Dynamic File Names 363
Properties 363
Template Examples 364
19.2.4. Global Directives 366
19.2.5. Log Rotation 366
19.2.6. Using the New Configuration Format 367
19.2.7. Rulesets 368
19.2.8. Compatibility with syslogd 369
19.3. Working with Queues in Rsyslog 369
19.3.1. Defining Queues 370
Direct Queues 370
Disk Queues 371
In-memory Queues 371
Disk-Assisted In-memory Queues 372
19.3.2. Managing Queues 372
Limiting Queue Size 372
Discarding Messages 373
Using Timeframes 373
Configuring Worker Threads 373
Batch Dequeuing 374
Terminating Queues 374
19.4. Using Rsyslog Modules 374
19.4.1. Importing Text Files 375
19.4.2. Exporting Messages to a Database 376
19.4.3. Enabling Encrypted Transport 377
19.4.4. Using RELP 377
19.5. Interaction of Rsyslog and Journal 377
19.6. Structured Logging with Rsyslog 378
19.6.1. Importing Data from Journal 379


19.6.2. Filtering Structured Messages 380


19.6.3. Parsing JSON 380
19.6.4. Storing Messages in the MongoDB 380
19.7. Debugging Rsyslog 381
19.8. Using the Journal 381
19.8.1. Viewing Log Files 382
19.8.2. Access Control 383
19.8.3. Using The Live View 383
19.8.4. Filtering Messages 384
Filtering by Priority 384
Filtering by Time 384
Advanced Filtering 385
19.8.5. Enabling Persistent Storage 386
19.9. Managing Log Files in Graphical Environment 386
19.9.1. Viewing Log Files 387
19.9.2. Adding a Log File 389
19.9.3. Monitoring Log Files 390
19.10. Additional Resources 390
Installed Documentation 390
Online Documentation 391
See Also 391

Chapter 20. Automating System Tasks 392
20.1. Cron and Anacron 392
20.1.1. Installing Cron and Anacron 392
20.1.2. Running the Crond Service 392
20.1.2.1. Starting and Stopping the Cron Service 393
20.1.2.2. Stopping the Cron Service 393
20.1.2.3. Restarting the Cron Service 393
20.1.3. Configuring Anacron Jobs 393
20.1.3.1. Examples of Anacron Jobs 394
20.1.4. Configuring Cron Jobs 395
20.1.5. Controlling Access to Cron 397
20.1.6. Black and White Listing of Cron Jobs 397
20.2. At and Batch 397
20.2.1. Installing At and Batch 398
20.2.2. Running the At Service 398
20.2.2.1. Starting and Stopping the At Service 398
20.2.2.2. Stopping the At Service 398
20.2.2.3. Restarting the At Service 399
20.2.3. Configuring an At Job 399
20.2.4. Configuring a Batch Job 400
20.2.5. Viewing Pending Jobs 400
20.2.6. Additional Command Line Options 400
20.2.7. Controlling Access to At and Batch 400
20.3. Additional Resources 401

Chapter 21. OProfile 402
21.1. Overview of Tools 402
21.1.1. operf vs. opcontrol 403
operf 403
Legacy Mode 403
21.2. Using operf 404
21.2.1. Specifying the Kernel 404
21.2.2. Setting Events to Monitor 404
21.2.3. Categorization of Samples 406


21.3. Configuring OProfile Using Legacy Mode 406


21.3.1. Specifying the Kernel 407
21.3.2. Setting Events to Monitor 407
21.3.2.1. Sampling Rate 410
21.3.2.2. Unit Masks 410
21.3.3. Separating Kernel and User-space Profiles 410
21.4. Starting and Stopping OProfile Using Legacy Mode 411
21.5. Saving Data in Legacy Mode 412
21.6. Analyzing the Data 412
21.6.1. Using opreport 413
21.6.2. Using opreport on a Single Executable 414
21.6.3. Getting more detailed output on the modules 416
21.6.4. Using opannotate 417
21.7. Understanding /dev/oprofile/ 417
21.8. Example Usage 418
21.9. OProfile Support for Java 418
21.9.1. Profiling Java Code 418
21.10. Graphical Interface 419
21.11. OProfile and SystemTap 422
21.12. Additional Resources 422
21.12.1. Installed Docs 422
21.12.2. Useful Websites 422

Part VI. Kernel, Module and Driver Configuration 423

Chapter 22. Working with the GRUB 2 Boot Loader 424
22.1. Configuring the GRUB 2 Boot Loader 424
22.2. Customizing GRUB 2 Menu 425
22.2.1. Changing the Default Boot Entry 426
22.2.2. Editing an Entry 426
Kernel Parameters 426
22.2.3. Adding a new Entry 426
22.2.4. Using only a Custom Menu 427
22.3. GRUB 2 Password Protection 429
22.3.1. Setting Up Users and Password Protection, Identifying Menu Entries 429
22.3.2. Preserving the Setup after GRUB 2 Updates 429
22.3.3. Password Encryption 430
22.4. Re-Installing GRUB 2 431
22.4.1. Using the grub2-install Command 431
22.4.2. Removing and Re-Installing GRUB 2 431
22.5. GRUB 2 over Serial Console 432
22.5.1. Configuring GRUB 2 432
22.5.2. Using screen to Connect to the Serial Console 432
22.6. Terminal Menu Editing During Boot 432
22.6.1. Booting to Rescue Mode 433
22.6.2. Booting to Emergency Mode 433
22.6.3. Recovering Root Password 433
22.6.4. Lost Root Password 434
22.7. Additional Resources 435

Chapter 23. Manually Upgrading the Kernel 436
23.1. Overview of Kernel Packages 436
23.2. Preparing to Upgrade 437
23.3. Downloading the Upgraded Kernel 438
23.4. Performing the Upgrade 438
23.5. Verifying the Initial RAM Disk Image 439


Verifying the Initial RAM Disk Image and Kernel on IBM eServer System i 440
23.6. Verifying the Boot Loader 441

Chapter 24. Working with Kernel Modules 442
24.1. Listing Currently-Loaded Modules 442
24.2. Displaying Information About a Module 443
24.3. Loading a Module 446
24.4. Unloading a Module 446
24.5. Setting Module Parameters 447
24.6. Persistent Module Loading 448
24.7. Specific Kernel Module Capabilities 449
24.7.1. Using Multiple Ethernet Cards 449
24.7.2. Using Channel Bonding 449
24.7.2.1. Bonding Module Directives 450
24.8. Additional Resources 456
Manual Page Documentation 456
Installable and External Documentation 456

RPM 457
A.1. RPM Design Goals 458
A.2. Using RPM 459
A.2.1. Finding RPM Packages 459
A.2.2. Installing and Upgrading 459
A.2.2.1. Package Already Installed 460
A.2.2.2. Conflicting Files 461
A.2.2.3. Unresolved Dependency 461
A.2.3. Configuration File Changes 462
A.2.4. Uninstalling 462
A.2.5. Freshening 463
A.2.6. Querying 464
A.2.7. Verifying 464
A.3. Checking a Package's Signature 465
A.3.1. Importing Keys 466
A.3.2. Verifying Signature of Packages 466
A.4. Practical and Common Examples of RPM Usage 466
A.5. Additional Resources 468
A.5.1. Installed Documentation 468
A.5.2. Useful Websites 468
A.5.3. Related Books 469

The X Window System 470
B.1. The X Server 470
B.2. Desktop Environments and Window Managers 470
B.2.1. Desktop Environments 471
B.2.2. Window Managers 471
B.3. X Server Configuration Files 472
B.3.1. The Structure of the Configuration 472
B.3.2. The xorg.conf.d Directory 473
B.3.3. The xorg.conf File 473
B.3.3.1. The InputClass section 473
B.3.3.2. The InputDevice section 474
B.3.3.3. The ServerFlags section 475
B.3.3.4. The ServerLayout Section 476
B.3.3.5. The Files section 477
B.3.3.6. The Monitor section 477
B.3.3.7. The Device section 478


B.3.3.8. The Screen section 479


B.3.3.9. The DRI section 479
B.4. Fonts 480
B.4.1. Adding Fonts to Fontconfig 480
B.5. Runlevels and X 481
B.5.1. Runlevel 3 481
B.5.2. Runlevel 5 481
B.6. Additional Resources 482
B.6.1. Installed Documentation 482
B.6.2. Useful Websites 483

Revision History 484

Index 484
Symbols 484
A 484
B 485
C 486
D 486
E 486
F 487
G 488
H 489
I 489
K 489
L 491
M 492
N 493
O 493
P 495
R 499
S 500
T 503
U 503
V 504
W 505
X 505
Y 507


Preface
The System Administrator's Guide contains information on how to customize the Red Hat
Enterprise Linux 7 system to fit your needs. If you are looking for a comprehensive, task-oriented guide
for configuring and customizing your system, this is the manual for you.

This manual discusses many intermediate topics such as the following:

Installing and managing packages using the graphical PackageKit and command line Yum package
managers
Configuring Apache HTTP Server, Postfix, Sendmail and other enterprise-class servers and
software
Gathering information about your system, including obtaining user-space crash data with the
Automatic Bug Reporting Tool
Working with kernel modules and upgrading the kernel

1. Target Audience
The System Administrator's Guide assumes you have a basic understanding of the Red Hat
Enterprise Linux operating system. If you need help with the installation of this system, refer to the
Red Hat Enterprise Linux 7 Installation Guide.

2. About This Book


The System Administrator's Guide is based on the material in the Deployment Guide. The networking
related material from the Deployment Guide, including information on DHCP and DNS servers, can
now be found in the Red Hat Enterprise Linux 7 Networking Guide. Reference material, such as that
found in the appendices of the Deployment Guide, is now in a separate guide, the Red Hat
Enterprise Linux 7 System Administrator's Reference Guide.

3. How to Read this Book


This manual is divided into the following main categories:

Part I, “Basic System Configuration”


This part covers basic system administration tasks such as keyboard configuration, date and
time configuration, managing users and groups, and gaining privileges.

Chapter 1, System Locale and Keyboard Configuration documents how to configure the system
locale and how to change the default keyboard layout. Read this chapter if you need to change
the language of your system or switch to a different keyboard layout.

Chapter 2, Configuring the Date and Time covers the configuration of the system date and time.
Read this chapter if you need to change the date and time, or configure the system to
synchronize the clock with a remote server.

Chapter 3, Managing Users and Groups covers the management of users and groups in a
graphical user interface and on the command line. Read this chapter if you need to manage
users and groups on your system, or enable password aging.

Chapter 4, Gaining Privileges documents how to gain administrative privileges. Read this
chapter to learn how to use the su and sudo commands.


Part II, “Package Management”


This part describes how to manage software packages on Red Hat Enterprise Linux using both
Yum and the PackageKit suite of graphical package management tools.

Chapter 5, Yum describes the Yum package manager. Read this chapter for information on how
to search, install, update, and uninstall packages on the command line.

Chapter 6, PackageKit describes the PackageKit suite of graphical package management tools.
Read this chapter for information on how to search, install, update, and uninstall packages using
a graphical user interface.

Part III, “Infrastructure Services”


This part provides information on how to configure system services and enable remote logins.

Chapter 7, Managing Services with systemd provides an introduction to the systemd system and
service manager. Read this chapter to learn how to manage system services and systemd
targets on your machine, or how to shut down, restart, suspend, or hibernate your machine on
the command line.

Chapter 8, OpenSSH describes how to enable a remote login via the SSH protocol. It covers the
configuration of the sshd service, as well as basic usage of the ssh, scp, and sftp client utilities.
Read this chapter if you need remote access to a machine.

Part IV, “Servers”


This part discusses various topics related to servers such as how to set up a web server or
share files and directories over the network.

Chapter 10, Web Servers focuses on the Apache HTTP Server 2.2, a robust, full-featured open
source web server developed by the Apache Software Foundation. Read this chapter if you
need to configure a web server on your system.

Chapter 11, Mail Servers reviews modern email protocols in use today, and some of the
programs designed to send and receive email, including Postfix, Sendmail, Fetchmail, and
Procmail. Read this chapter if you need to configure a mail server on your system.

Chapter 12, Directory Servers covers the installation and configuration of OpenLDAP 2.4, an
open source implementation of the LDAPv2 and LDAPv3 protocols. Read this chapter if you
need to configure a directory server on your system.

Chapter 13, File and Print Servers guides you through the installation and configuration of
Samba, an open source implementation of the Server Message Block (SMB) protocol, and
vsftpd, the primary FTP server shipped with Red Hat Enterprise Linux. Additionally, it explains
how to use the Printer Configuration tool to configure printers. Read this chapter if you need
to configure a file or print server on your system.

Chapter 14, Configuring NTP Using the chrony Suite covers the installation and configuration of
the chrony suite, a client and a server for the Network Time Protocol (NTP). Read this chapter
if you need to configure the system to synchronize the clock with a remote NTP server, or set up
an NTP server on this system.

Chapter 15, Configuring NTP Using ntpd covers the installation and configuration of the NTP
daemon, ntpd, for the Network Time Protocol (NTP). Read this chapter if you need to configure
the system to synchronize the clock with a remote NTP server, or set up an NTP server on this


system, and you prefer not to use the chrony application.

Chapter 16, Configuring PTP Using ptp4l covers the installation and configuration of the
Precision Time Protocol application, ptp4l, an application for use with network drivers that
support the Precision Time Protocol (PTP). Read this chapter if you need to configure
the system to synchronize the system clock with a master PTP clock.

Part V, “Monitoring and Automation”


This part describes various tools that allow system administrators to monitor system
performance, automate system tasks, and report bugs.

Chapter 17, System Monitoring Tools discusses applications and commands that can be used
to retrieve important information about the system. Read this chapter to learn how to gather
essential system information.

Chapter 18, OpenLMI documents OpenLMI, a common infrastructure for the management of
Linux systems. Read this chapter to learn how to use this infrastructure to monitor and manage
remote systems.

Chapter 19, Viewing and Managing Log Files describes the configuration of the rsyslog
daemon, and explains how to locate, view, and monitor log files. Read this chapter to learn how
to work with log files.

Chapter 20, Automating System Tasks provides an overview of the cron, at, and batch
utilities. Read this chapter to learn how to use these utilities to perform automated tasks.

Chapter 21, OProfile covers OProfile, a low overhead, system-wide performance monitoring
tool. Read this chapter for information on how to use OProfile on your system.

Part VI, “Kernel, Module and Driver Configuration”


This part covers various tools that assist administrators with kernel customization.

Chapter 22, Working with the GRUB 2 Boot Loader provides an introduction to GRUB 2 and
explains how to re-install and configure it on your system. Read this chapter if you need to
configure or interact with the GRUB 2 boot loader.

Chapter 23, Manually Upgrading the Kernel provides important information on how to manually
update a kernel package using the rpm command instead of yum. Read this chapter if you
cannot update a kernel package with the Yum package manager.

Chapter 24, Working with Kernel Modules explains how to display, query, load, and unload
kernel modules and their dependencies, and how to set module parameters. Additionally, it
covers specific kernel module capabilities such as using multiple Ethernet cards and using
channel bonding. Read this chapter if you need to work with kernel modules.

Appendix A, RPM
This appendix concentrates on the RPM Package Manager (RPM), an open packaging system
used by Red Hat Enterprise Linux, and the use of the rpm utility. Read this appendix if you need
to use rpm instead of yum.

Appendix B, The X Window System


This appendix covers the configuration of the X Window System, the graphical environment


used by Red Hat Enterprise Linux. Read this appendix if you need to adjust the configuration of
your X Window System.

4. Document Conventions
This manual uses several conventions to highlight certain words and phrases and draw attention to
specific pieces of information.

In PDF and paper editions, this manual uses typefaces drawn from the Liberation Fonts set. The
Liberation Fonts set is also used in HTML editions if the set is installed on your system. If not, alternative
but equivalent typefaces are displayed. Note: Red Hat Enterprise Linux 5 and later include the Liberation
Fonts set by default.

4.1. Typographic Conventions


Four typographic conventions are used to call attention to specific words and phrases. These
conventions, and the circumstances they apply to, are as follows.

Mono-spaced Bold

Used to highlight system input, including shell commands, file names and paths. Also used to highlight
keys and key combinations. For example:

To see the contents of the file my_next_bestselling_novel in your current working


directory, enter the cat my_next_bestselling_novel command at the shell prompt
and press Enter to execute the command.

The above includes a file name, a shell command and a key, all presented in mono-spaced bold and all
distinguishable thanks to context.

Key combinations can be distinguished from an individual key by the plus sign that connects each part of
a key combination. For example:

Press Enter to execute the command.

Press Ctrl+Alt+F2 to switch to a virtual terminal.

The first example highlights a particular key to press. The second example highlights a key combination:
a set of three keys pressed simultaneously.

If source code is discussed, class names, methods, functions, variable names and returned values
mentioned within a paragraph will be presented as above, in mono-spaced bold. For example:

File-related classes include filesystem for file systems, file for files, and dir for
directories. Each class has its own associated set of permissions.

Proportional Bold

This denotes words or phrases encountered on a system, including application names; dialog-box text;
labeled buttons; check-box and radio-button labels; menu titles and submenu titles. For example:

Choose System → Preferences → Mouse from the main menu bar to launch Mouse
Preferences. In the Buttons tab, select the Left-handed mouse check box and click
Close to switch the primary mouse button from the left to the right (making the mouse
suitable for use in the left hand).


To insert a special character into a gedit file, choose Applications → Accessories →


Character Map from the main menu bar. Next, choose Search → Find… from the
Character Map menu bar, type the name of the character in the Search field and click
Next. The character you sought will be highlighted in the Character Table. Double-click
this highlighted character to place it in the Text to copy field and then click the Copy
button. Now switch back to your document and choose Edit → Paste from the gedit menu
bar.

The above text includes application names; system-wide menu names and items; application-specific
menu names; and buttons and text found within a GUI interface, all presented in proportional bold and all
distinguishable by context.

Mono-spaced Bold Italic or Proportional Bold Italic

Whether mono-spaced bold or proportional bold, the addition of italics indicates replaceable or variable
text. Italics denotes text you do not input literally or displayed text that changes depending on
circumstance. For example:

To connect to a remote machine using ssh, type ssh username@domain.name at a shell
prompt. If the remote machine is example.com and your username on that machine is
john, type ssh john@example.com.

The mount -o remount file-system command remounts the named file system. For
example, to remount the /home file system, the command is mount -o remount /home.

To see the version of a currently installed package, use the rpm -q package command. It
will return a result as follows: package-version-release.

Note the words in bold italics above: username, domain.name, file-system, package, version and
release. Each word is a placeholder, either for text you enter when issuing a command or for text
displayed by the system.

Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and
important term. For example:

Publican is a DocBook publishing system.

4.2. Pull-quote Conventions


Terminal output and source code listings are set off visually from the surrounding text.

Output sent to a terminal is set in mono-spaced roman and presented thus:

books Desktop documentation drafts mss photos stuff svn


books_tests Desktop1 downloads images notes scripts svgs

Source-code listings are also set in mono-spaced roman but add syntax highlighting as follows:


static int kvm_vm_ioctl_deassign_device(struct kvm *kvm,


struct kvm_assigned_pci_dev *assigned_dev)
{
int r = 0;
struct kvm_assigned_dev_kernel *match;

mutex_lock(&kvm->lock);

match = kvm_find_assigned_dev(&kvm->arch.assigned_dev_head,
assigned_dev->assigned_dev_id);
if (!match) {
printk(KERN_INFO "%s: device hasn't been assigned before, "
"so cannot be deassigned\n", __func__);
r = -EINVAL;
goto out;
}

kvm_deassign_device(kvm, match);

kvm_free_assigned_device(kvm, match);

out:
mutex_unlock(&kvm->lock);
return r;
}

4.3. Notes and Warnings


Finally, we use three visual styles to draw attention to information that might otherwise be overlooked.

Note

Notes are tips, shortcuts or alternative approaches to the task at hand. Ignoring a note should
have no negative consequences, but you might miss out on a trick that makes your life easier.

Important

Important boxes detail things that are easily missed: configuration changes that only apply to the
current session, or services that need restarting before an update will apply. Ignoring a box
labeled “Important” will not cause data loss but may cause irritation and frustration.

Warning

Warnings should not be ignored. Ignoring warnings will most likely cause data loss.

5. Feedback
If you find a typographical error in this manual, or if you have thought of a way to make this manual
better, we would love to hear from you! Please submit a report in Bugzilla against the product Red Hat
Enterprise Linux 7.


When submitting a bug report, be sure to provide the following information:

Manual's identifier: doc-System_Administrators_Guide


Version number: 7

If you have a suggestion for improving the documentation, try to be as specific as possible when
describing it. If you have found an error, please include the section number and some of the surrounding
text so we can find it easily.

6. Acknowledgments
Certain portions of this text first appeared in the Deployment Guide, copyright © 2007 Red Hat, Inc.,
available at https://access.redhat.com/site/documentation/en-
US/Red_Hat_Enterprise_Linux/5/html/Deployment_Guide/index.html.

Section 17.6, “Monitoring Performance with Net-SNMP” is based on an article written by Michael Solberg.

The authors of this book would like to thank the following people for their valuable contributions: Adam
Tkáč, Andrew Fitzsimon, Andrius Benokraitis, Brian Cleary Edward Bailey, Garrett LeSage, Jeffrey
Fearn, Joe Orton, Joshua Wulf, Karsten Wade, Lucy Ringland, Marcela Mašláňová, Mark Johnson,
Michael Behm, Miroslav Lichvár, Radek Vokál, Rahul Kavalapara, Rahul Sundaram, Sandra Moore,
Zbyšek Mráz, Jan Včelák, Peter Hutterer and James Antill, among many others.


Part I. Basic System Configuration


This part covers basic system administration tasks such as keyboard configuration, date and time
configuration, managing users and groups, and gaining privileges.


Chapter 1. System Locale and Keyboard Configuration


The system locale specifies the language settings of system services and user interfaces. The keyboard
layout settings control the layout used on the text console and graphical user interfaces.

These settings can be configured by modifying the /etc/locale.conf configuration file or by using the
localectl utility. You can also use the graphical user interface to perform this task.

1.1. Setting the System Locale


System-wide locale settings are stored in the /etc/locale.conf file, which is read at early boot by
the systemd daemon. The locale settings configured in /etc/locale.conf are inherited by every
service or user, unless individual programs or individual users override them.

The basic file format of /etc/locale.conf is a newline-separated list of variable assignments. For
example, German locale with English messages in /etc/locale.conf looks as follows:

LANG=de_DE.UTF-8
LC_MESSAGES=C

Here, the LC_MESSAGES option determines the locale used for diagnostic messages written to the
standard error output. To further specify locale settings in /etc/locale.conf, you can use several
other options; the most relevant are summarized in Table 1.1, “Options configurable in /etc/locale.conf”. See
the locale(7) manual page for detailed information on these options. Note that the LC_ALL option,
which represents all possible options, should not be configured in /etc/locale.conf.

Table 1.1. Options configurable in /etc/locale.conf

Option Description
LANG Provides a default value for the system locale.
LC_COLLATE Changes the behavior of functions which compare
strings in the local alphabet.
LC_CTYPE Changes the behavior of the character handling
and classification functions and the multibyte
character functions.
LC_NUMERIC Describes the way numbers are usually printed,
with details such as decimal point versus decimal
comma.
LC_TIME Changes the display of the current time, 24-hour
versus 12-hour clock.
LC_MESSAGES Determines the locale used for diagnostic
messages written to the standard error output.

1.1.1. Displaying the Current Status


The localectl command can be used to query and change the system locale and keyboard layout
settings. To show the current settings, use the status option:

localectl status


Example 1.1. Displaying the Current Status

The output of the previous command lists the currently set locale, keyboard layout configured for the
console and for the X11 window system.

~]$ localectl status


System Locale: LANG=en_US.UTF-8
VC Keymap: us
X11 Layout: n/a

1.1.2. Listing Available Locales


To list all locales available for your system, type:

localectl list-locales

Example 1.2. Listing Locales

Imagine you want to select a specific English locale, but you are not sure if it is available on the
system. You can check that by listing all English locales with the following command:

~]$ localectl list-locales | grep en_


en_AG
en_AG.utf8
en_AU
en_AU.iso88591
en_AU.utf8
en_BW
en_BW.iso88591
en_BW.utf8

output truncated

1.1.3. Setting the Locale


To set the default system locale, use the following command as root:

localectl set-locale LANG=locale

Replace locale with the locale name, found with list-locales. With this command, you can also set
options from Table 1.1, “Options configurable in /etc/locale.conf”.

Example 1.3. Changing the Default Locale

For example, if you want to set British English as your default locale, first find the name of this locale
by using list-locales. Then, as the root user, type the command in the following form:

~]# localectl set-locale LANG=en_GB.utf8
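The set-locale command accepts more than one variable assignment at a time, so several of the
options from Table 1.1, “Options configurable in /etc/locale.conf” can be set in a single step. The
following is only a sketch; it assumes that both locale values appear in the output of
localectl list-locales on your system:

~]# localectl set-locale LANG=en_GB.utf8 LC_TIME=en_DK.utf8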


1.2. Changing the Keyboard Layout


The keyboard layout settings let the user control the layout used on the text console and graphical
user interfaces.

1.2.1. Displaying the Current Settings


As mentioned before, you can check your current keyboard layout configuration with the following
command:

localectl status

Example 1.4. Displaying the Keyboard Settings

In the following output, you can see the keyboard layout configured for the virtual console and for the
X11 window system.

~]$ localectl status


System Locale: LANG=en_US.utf8
VC Keymap: us
X11 Layout: us

1.2.2. Listing Available Keymaps


To list all available keyboard layouts that can be configured on your system, type:

localectl list-keymaps

Example 1.5. Searching for a Particular Keymap

You can use grep to search the output of the previous command for a specific keymap name. There
are often multiple keymaps compatible with your currently set locale. For example, to find available
Czech keyboard layouts, type:

~]$ localectl list-keymaps | grep cz


cz
cz-cp1250
cz-lat2
cz-lat2-prog
cz-qwerty
cz-us-qwertz
sunt5-cz-us
sunt5-us-cz

1.2.3. Setting the Keymap


To set the default keyboard layout for your system, use the following command as root:

localectl set-keymap map


Replace map with the name of the keymap taken from the output of list-keymaps. Unless the --no-
convert option is passed, the selected setting is also applied to the default keyboard mapping of the
X11 window system, after converting it to the closest matching X11 keyboard mapping. This also applies
in reverse: you can specify both keymaps with the following command (as root):

localectl set-X11-keymap map

If you want your X11 layout to differ from the console layout, use the --no-convert option:

localectl --no-convert set-X11-keymap map

With this option, the X11 keymap is specified without changing the previous console layout setting.

Example 1.6. Setting the X11 Keymap Separately

Imagine you want to use a German keyboard layout in the graphical interface, but for console
operations you want to retain the US keymap. To do so, type (as root):

~]# localectl --no-convert set-X11-keymap de

Then you can verify if your setting was successful by checking the current status:

~]$ localectl status


System Locale: LANG=de_DE.UTF-8
VC Keymap: us
X11 Layout: de

Apart from the keyboard layout (map), three other options can be specified:

localectl set-x11-keymap map model variant options

Replace model with the keyboard model name, and variant and options with keyboard variant and option
components, which can be used to enhance the keyboard behavior. These options are not set by
default. For more information on X11 Model, X11 Variant, and X11 Options, see the kbd(4) man page.
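For example, the following command selects the US layout on a 105-key PC keyboard with the Dvorak
variant and the right Alt key acting as a compose key. This is only an illustrative sketch; the model,
variant, and option values shown are assumptions and must exist in your X11 keyboard database:

~]# localectl set-x11-keymap us pc105 dvorak compose:ralt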

1.3. Additional Resources


For more information on how to configure the keyboard layout on Red Hat Enterprise Linux, refer to the
resources listed below.

Installed Documentation

localectl(1) — The manual page for the localectl command line utility documents how to use
this tool to configure the system locale and keyboard layout.
loadkeys(1) — The manual page for the loadkeys command provides more information on how to
use this tool to change the keyboard layout in a virtual console.

See Also

Chapter 4, Gaining Privileges documents how to gain administrative privileges by using the su and
sudo commands.


Chapter 7, Managing Services with systemd provides more information on systemd and documents
how to use the systemctl command to manage system services.


Chapter 2. Configuring the Date and Time


Modern operating systems distinguish between the following two types of clocks:

A real-time clock (RTC), commonly referred to as a hardware clock, is typically an integrated circuit on
the system board that is completely independent of the current state of the operating system and
runs even when the computer is shut down.
A system clock, also known as a software clock, is maintained by the kernel and its initial value is based
on the real-time clock. Once the system is booted and the system clock is initialized, the system clock
is completely independent of the real-time clock.

The real-time clock can use either local time or Coordinated Universal Time (UTC). If you configure the
real-time clock to use UTC, the system time is calculated by applying the offset for your time zone and, if
applicable, also daylight saving time (DST). In comparison, local time represents the actual time in your
current time zone. In most cases, it is recommended that you use UTC.

Red Hat Enterprise Linux 7 offers two command line tools that can be used to configure and display
information about the system date and time: the timedatectl utility, which is new in Red Hat
Enterprise Linux 7 and is part of systemd, and the traditional date command.

2.1. Using the timedatectl Command


The timedatectl utility is distributed as part of the systemd system and service manager and allows you
to review and change the configuration of the system clock. You can use this tool to change the current
date and time, set the time zone, or enable automatic synchronization of the system clock with a remote
server.

For information on how to display the current date and time in a custom format, see also Section 2.2,
“Using the date Command”.

2.1.1. Displaying the Current Date and Time


To display the current date and time along with detailed information about the configuration of the
system and hardware clock, run the timedatectl command with no additional command line options:

timedatectl

This displays the local, universal, and RTC time, the currently used time zone, the status of the NTP
configuration, and additional information related to DST.


Example 2.1. Displaying the Current Date and Time

The following is an example output of the timedatectl command on a system that does not use the
Network Time Protocol to synchronize the system clock with a remote server:

~]$ timedatectl
Local time: Mon 2013-09-16 19:30:24 CEST
Universal time: Mon 2013-09-16 17:30:24 UTC
Timezone: Europe/Prague (CEST, +0200)
NTP enabled: no
NTP synchronized: no
RTC in local TZ: no
DST active: yes
Last DST change: DST began at
Sun 2013-03-31 01:59:59 CET
Sun 2013-03-31 03:00:00 CEST
Next DST change: DST ends (the clock jumps one hour backwards) at
Sun 2013-10-27 02:59:59 CEST
Sun 2013-10-27 02:00:00 CET

2.1.2. Changing the Current Date


To change the current date, type the following at a shell prompt as root:

timedatectl set-time YYYY-MM-DD

Replace YYYY with a four-digit year, MM with a two-digit month, and DD with a two-digit day of the month.

Example 2.2. Changing the Current Date

To change the current date to 2 June 2013, run the following command as root:

~]# timedatectl set-time 2013-06-02

2.1.3. Changing the Current Time


To change the current time, type the following at a shell prompt as root:

timedatectl set-time HH:MM:SS

Replace HH with an hour, MM with a minute, and SS with a second, all typed in a two-digit form.

By default, the system is configured to use UTC. To configure your system to maintain the clock in the
local time, run the timedatectl command with the set-local-rtc option as root:

timedatectl set-local-rtc boolean

To configure your system to maintain the clock in the local time, replace boolean with yes. To configure
the system to use UTC, replace boolean with no (the default option).


Example 2.3. Changing the Current Time

To change the current time to 11:26 p.m., run the following command as root:

~]# timedatectl set-time 23:26:00
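The set-time option also accepts a combined value, so the date and the time can be changed in a
single step. The following is a minimal sketch using the illustrative values from the previous examples:

~]# timedatectl set-time "2013-06-02 23:26:00"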

2.1.4. Changing the Time Zone


To list all available time zones, type the following at a shell prompt:

timedatectl list-timezones

To change the currently used time zone, type as root:

timedatectl set-timezone time_zone

Replace time_zone with any of the values listed by the timedatectl list-timezones command.

Example 2.4. Changing the Time Zone

To identify which time zone is closest to your present location, use the timedatectl command with
the list-timezones command line option. For example, to list all available time zones in Europe,
type:

~]# timedatectl list-timezones | grep Europe


Europe/Amsterdam
Europe/Andorra
Europe/Athens
Europe/Belgrade
Europe/Berlin
Europe/Bratislava

To change the time zone to Europe/Prague, type as root:

~]# timedatectl set-timezone Europe/Prague

2.1.5. Synchronizing the System Clock with a Remote Server


As opposed to the manual setup described in the previous sections, the timedatectl command also
allows you to enable automatic synchronization of your system clock with a remote server over the
Network Time Protocol (NTP). To enable or disable this feature, type the following at a shell prompt as
root:

timedatectl set-ntp boolean

To configure your system to synchronize the system clock with a remote NTP server, replace boolean
with yes (the default option). To disable this feature, replace boolean with no.


Example 2.5. Synchronizing the System Clock with a Remote Server

To enable automatic synchronization of the system clock with a remote server, type:

~]# timedatectl set-ntp yes

2.2. Using the date Command


The date utility is available on all Linux systems and allows you to display and configure the current date
and time. It is frequently used in scripts to display detailed information about the system clock in a
custom format.
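For example, a short shell script can use date to build a time-stamped file name. The following is only
a sketch; the archive name and the choice of /etc as the directory to back up are assumptions, not
part of this guide:

#!/bin/bash
# Build a file name such as backup-2013-09-16-1730.tar.gz
timestamp=$(date +"%F-%H%M")
# Archive the chosen directory under that time-stamped name
tar -czf "backup-${timestamp}.tar.gz" /etc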

For information on how to change the time zone or enable automatic synchronization of the system clock
with a remote server, see Section 2.1, “Using the timedatectl Command”.

2.2.1. Displaying the Current Date and Time


To display the current date and time, run the date command with no additional command line options:

date

This displays the day of the week followed by the current date, local time, abbreviated time zone, and
year.

By default, the date command displays the local time. To display the time in UTC, run the command
with the --utc or -u command line option:

date --utc

You can also customize the format of the displayed information by providing the +"format" option on
the command line:

date +"format"

Replace format with one or more supported control sequences as illustrated in Example 2.6, “Displaying
the Current Date and Time”. See Table 2.1, “Commonly Used Control Sequences” for a list of the most
frequently used formatting options, or the date(1) manual page for a complete list of these options.


Table 2.1. Commonly Used Control Sequences

Control Sequence Description


%H The hour in the HH format (for example, 17).
%M The minute in the MM format (for example, 30).
%S The second in the SS format (for example, 24).
%d The day of the month in the DD format (for example, 16).
%m The month in the MM format (for example, 09).
%Y The year in the YYYY format (for example, 2013).
%Z The time zone abbreviation (for example, CEST).
%F The full date in the YYYY-MM-DD format (for example, 2013-09-16). This
option is equal to %Y-%m-%d.
%T The full time in the HH:MM:SS format (for example, 17:30:24). This option is
equal to %H:%M:%S.

Example 2.6. Displaying the Current Date and Time

To display the current date and time in UTC, type the following at a shell prompt:

~]$ date --utc


Mon Sep 16 17:30:24 UTC 2013

To customize the output of the date command, type:

~]$ date +"%Y-%m-%d %H:%M"


2013-09-16 17:30

2.2.2. Changing the Current Date


To change the current date, type the following at a shell prompt as root:

date +%F -s YYYY-MM-DD

Replace YYYY with a four-digit year, MM with a two-digit month, and DD with a two-digit day of the month.

Example 2.7. Changing the Current Date

To change the current date to 2 June 2013, run the following command as root:

~]# date +%F -s 2013-06-02

2.2.3. Changing the Current Time


To change the current time, run the date command with the --set or -s option as root:

date +%T -s HH:MM:SS


Replace HH with an hour, MM with a minute, and SS with a second, all typed in a two-digit form.

By default, the date command sets the system clock in the local time. To set the system clock in UTC
instead, run the command with the --utc or -u command line option:

date +%T --set HH:MM:SS --utc

Example 2.8. Changing the Current Time

To change the current time to 11:26 p.m., run the following command as root:

~]# date +%T --set 23:26:00
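The date command likewise accepts a combined date and time string, so both can be set at once. A
minimal sketch with illustrative values:

~]# date --set "2013-06-02 23:26:00"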

2.3. Additional Resources


For more information on how to configure the date and time in Red Hat Enterprise Linux 7, see the
resources listed below.

Installed Documentation

timedatectl(1) — The manual page for the timedatectl command line utility documents how
to use this tool to query and change the system clock and its settings.
date(1) — The manual page for the date command provides a complete list of supported command
line options.

See Also

Chapter 1, System Locale and Keyboard Configuration documents how to configure the keyboard
layout.
Chapter 4, Gaining Privileges documents how to gain administrative privileges by using the su and
sudo commands.
Chapter 7, Managing Services with systemd provides more information on systemd and documents
how to use the systemctl command to manage system services.


Chapter 3. Managing Users and Groups


The control of users and groups is a core element of Red Hat Enterprise Linux system administration.
This chapter explains how to add, manage, and delete users and groups in the graphical user interface
and on the command line, and covers advanced topics, such as enabling password aging or creating
group directories.

3.1. Introduction to Users and Groups


While users can be either people (meaning accounts tied to physical users) or accounts which exist for
specific applications to use, groups are logical expressions of organization, tying users together for a
common purpose. Users within a group can read, write, or execute files owned by that group.

Each user is associated with a unique numerical identification number called a user ID (UID). Likewise,
each group is associated with a group ID (GID). A user who creates a file is also the owner and group
owner of that file. The file is assigned separate read, write, and execute permissions for the owner, the
group, and everyone else. The file owner can be changed only by root, and access permissions can be
changed by both the root user and file owner.

Additionally, Red Hat Enterprise Linux supports access control lists (ACLs) for files and directories which
allow permissions for specific users outside of the owner to be set. For more information about this
feature, refer to the Red Hat Enterprise Linux 7 Storage Administration Guide.

3.1.1. User Private Groups


Red Hat Enterprise Linux uses a user private group (UPG) scheme, which makes UNIX groups easier to
manage. A user private group is created whenever a new user is added to the system. It has the same
name as the user for which it was created and that user is the only member of the user private group.

User private groups make it safe to set default permissions for a newly created file or directory, allowing
both the user and the group of that user to make modifications to the file or directory.

The setting which determines what permissions are applied to a newly created file or directory is called a
umask and is configured in the /etc/bashrc file. Traditionally on UNIX systems, the umask is set to
022, which allows only the user who created the file or directory to make modifications. Under this
scheme, all other users, including members of the creator's group, are not allowed to make any
modifications. However, under the UPG scheme, this “group protection” is not necessary since every
user has their own private group.
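To see the effect of the UPG scheme, compare the current umask with the permissions of a newly
created file. The following session is only a sketch; the value 0002 and the user name john are
assumptions based on a typical default configuration:

~]$ umask
0002
~]$ touch newfile
~]$ ls -l newfile
-rw-rw-r--. 1 john john 0 Sep 16 17:30 newfile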

3.1.2. Shadow Passwords


In environments with multiple users, it is very important to use shadow passwords provided by the
shadow-utils package to enhance the security of system authentication files. For this reason, the
installation program enables shadow passwords by default.

The following is a list of the advantages shadow passwords have over the traditional way of storing
passwords on UNIX-based systems:

Shadow passwords improve system security by moving encrypted password hashes from the world-
readable /etc/passwd file to /etc/shadow, which is readable only by the root user.
Shadow passwords store information about password aging.
Shadow passwords allow the /etc/login.defs file to enforce security policies.

Most utilities provided by the shadow-utils package work properly whether or not shadow passwords are
enabled. However, since password aging information is stored exclusively in the /etc/shadow file, any


commands which create or modify password aging information do not work. The following is a list of
utilities and commands that do not work without first enabling shadow passwords:

The chage utility.


The gpasswd utility.
The usermod command with the -e or -f option.
The useradd command with the -e or -f option.
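With shadow passwords enabled, the chage utility can be used to inspect and modify password aging.
The following sketch assumes that a user named john exists; the output shown is abbreviated and
illustrative:

~]# chage -l john
Last password change          : Sep 16, 2013
Password expires              : never
~]# chage -M 90 john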

3.2. Using the User Manager Tool


The User Manager application allows you to view, modify, add, and delete local users and groups in the
graphical user interface. To start the application, either select System → Administration → Users and
Groups from the panel, or type system-config-users at a shell prompt. Note that unless you have
superuser privileges, the application will prompt you to authenticate as root.

3.2.1. Viewing Users and Groups


The main window of the User Manager is divided into two tabs: The Users tab provides a list of local
users along with additional information about their user ID, primary group, home directory, login shell,
and full name. The Groups tab provides a list of local groups with information about their group ID and
group members.

Figure 3.1. Viewing users and groups

To find a specific user or group, type the first few letters of the name in the Search filter field and
either press Enter, or click the Apply filter button. You can also sort the items according to any of
the available columns by clicking the column header.

Red Hat Enterprise Linux reserves user and group IDs below 500 for system users and groups. By
default, the User Manager does not display the system users. To view all users and groups, select Edit
→ Preferences to open the Preferences dialog box, and clear the Hide system users and
groups checkbox.


3.2.2. Adding a New User


To add a new user, click the Add User button. A window as shown in Figure 3.2, “Adding a new user”
appears.

Figure 3.2. Adding a new user

The Add New User dialog box allows you to provide information about the newly created user. In order
to create a user, enter the username and full name in the appropriate fields and then type the user's
password in the Password and Confirm Password fields. The password must be at least six
characters long.

Password security advice

It is advisable to use a much longer password, as this makes it more difficult for an intruder to
guess it and access the account without permission. It is also recommended that the password
not be based on a dictionary term: use a combination of letters, numbers and special characters.

The Login Shell pulldown list allows you to select a login shell for the user. If you are not sure which
shell to select, accept the default value of /bin/bash.

By default, the User Manager application creates the home directory for a new user in
/home/username/. You can choose not to create the home directory by clearing the Create home
directory checkbox, or change this directory by editing the content of the Home Directory text
box. Note that when the home directory is created, default configuration files are copied into it from the
/etc/skel/ directory.


Red Hat Enterprise Linux uses a user private group (UPG) scheme. Whenever you create a new user, a
unique group with the same name as the user is created by default. If you do not want to create this
group, clear the Create a private group for the user checkbox.

To specify a user ID for the user, select Specify user ID manually. If the option is not selected,
the next available user ID above 500 is assigned to the new user. Because Red Hat Enterprise Linux
reserves user IDs below 500 for system users, it is not advisable to manually assign user IDs 1–499.

Clicking the OK button creates the new user. To configure more advanced user properties, such as
password expiration, modify the user's properties after adding the user.

3.2.3. Adding a New Group


To add a new user group, select Add Group from the toolbar. A window similar to Figure 3.3, “New
Group” appears. Type the name of the new group. To specify a group ID for the new group, select
Specify group ID manually and select the GID. Note that Red Hat Enterprise Linux also reserves
group IDs lower than 500 for system groups.

Figure 3.3. New Group

Click OK to create the group. The new group appears in the group list.

3.2.4. Modifying User Properties


To view the properties of an existing user, click on the Users tab, select the user from the user list, and
click Properties from the menu (or choose File → Properties from the pulldown menu). A window
similar to Figure 3.4, “User Properties” appears.


Figure 3.4. User Properties

The User Properties window is divided into multiple tabbed pages:

User Data — Shows the basic user information configured when you added the user. Use this tab
to change the user's full name, password, home directory, or login shell.
Account Info — Select Enable account expiration if you want the account to expire on a
certain date. Enter the date in the provided fields. Select Local password is locked to lock the
user account and prevent the user from logging into the system.
Password Info — Displays the date that the user's password was last changed. To force the user to
change passwords after a certain number of days, select Enable password expiration and
enter a desired value in the Days before change required: field. The number of days before
the user's password expires, the number of days before the user is warned to change passwords,
and days before the account becomes inactive can also be changed.
Groups — Allows you to view and configure the Primary Group of the user, as well as other groups
that you want the user to be a member of.

3.2.5. Modifying Group Properties


To view the properties of an existing group, select the group from the group list and click Properties
from the menu (or choose File → Properties from the pulldown menu). A window similar to Figure 3.5,
“Group Properties” appears.


Figure 3.5. Group Properties

The Group Users tab displays which users are members of the group. Use this tab to add or remove
users from the group. Click OK to save your changes.

3.3. Using Command Line Tools


The easiest way to manage users and groups on Red Hat Enterprise Linux is to use the User Manager
application as described in Section 3.2, “Using the User Manager Tool”. However, if you prefer command
line tools or do not have the X Window System installed, you can use command line utilities that are
listed in Table 3.1, “Command line utilities for managing users and groups”.

Table 3.1. Command line utilities for managing users and groups

Utilities Description
useradd, usermod, userdel Standard utilities for adding, modifying, and deleting user accounts.
groupadd, groupmod, groupdel Standard utilities for adding, modifying, and deleting groups.
gpasswd Standard utility for administering the /etc/group configuration file.
pwck, grpck Utilities that can be used for verification of the password, group, and associated shadow files.
pwconv, pwunconv Utilities that can be used for the conversion of passwords to shadow passwords, or back from shadow passwords to standard passwords.

3.3.1. Adding a New User


To add a new user to the system, type the following at a shell prompt as root:

useradd [options] username


…where options are command line options as described in Table 3.2, “useradd command line options”.

By default, the useradd command creates a locked user account. To unlock the account, run the
following command as root to assign a password:

passwd username

Optionally, you can set password aging policy. Refer to Red Hat Enterprise Linux 7 Security Guide for
information on how to enable password aging.
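As a brief illustration only (the chage utility and the value shown here are example choices, not a required setting), the following command would force the user to change the password at least every 90 days:

chage -M 90 username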

Table 3.2. useradd command line options

Option Description
-c 'comment' comment can be replaced with any string. This option is generally used to specify the full name of a user.
-d home_directory Home directory to be used instead of the default /home/username/.
-e date Date for the account to be disabled in the format YYYY-MM-DD.
-f days Number of days after the password expires until the account is disabled. If 0 is specified, the account is disabled immediately after the password expires. If -1 is specified, the account is not disabled after the password expires.
-g group_name Group name or group number for the user's default group. The group must exist prior to being specified here.
-G group_list List of additional (other than default) group names or group numbers, separated by commas, of which the user is a member. The groups must exist prior to being specified here.
-m Create the home directory if it does not exist.
-M Do not create the home directory.
-N Do not create a user private group for the user.
-p password The password encrypted with crypt.
-r Create a system account with a UID less than 500 and without a home directory.
-s shell User's login shell, which defaults to /bin/bash.
-u uid User ID for the user, which must be unique and greater than 499.
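For instance, the following hypothetical invocation combines several of the options listed above (the full name, shell, and username are example values only):

useradd -c 'Juan Santiago' -m -s /bin/bash juan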

Explaining the Process


The following steps illustrate what happens if the command useradd juan is issued on a system that
has shadow passwords enabled:

1. A new line for juan is created in /etc/passwd:

juan:x:501:501::/home/juan:/bin/bash

The line has the following characteristics:


It begins with the username juan.
There is an x for the password field indicating that the system is using shadow passwords.
A UID greater than 499 is created. Under Red Hat Enterprise Linux, UIDs below 500 are
reserved for system use and should not be assigned to users.


A GID greater than 499 is created. Under Red Hat Enterprise Linux, GIDs below 500 are
reserved for system use and should not be assigned to users.
The optional GECOS information is left blank. The GECOS field can be used to provide
additional information about the user, such as their full name or phone number.
The home directory for juan is set to /home/juan/.
The default shell is set to /bin/bash.
2. A new line for juan is created in /etc/shadow:

juan:!!:14798:0:99999:7:::

The line has the following characteristics:


It begins with the username juan.
Two exclamation marks (!!) appear in the password field of the /etc/shadow file, which
locks the account.

Note

If an encrypted password is passed using the -p flag, it is placed in the /etc/shadow
file on the new line for the user.

The password is set to never expire.


3. A new line for a group named juan is created in /etc/group:

juan:x:501:

A group with the same name as a user is called a user private group. For more information on
user private groups, refer to Section 3.1.1, “User Private Groups”.
The line created in /etc/group has the following characteristics:
It begins with the group name juan.
An x appears in the password field indicating that the system is using shadow group
passwords.
The GID matches the one listed for user juan in /etc/passwd.
4. A new line for a group named juan is created in /etc/gshadow:

juan:!::

The line has the following characteristics:


It begins with the group name juan.
An exclamation mark (!) appears in the password field of the /etc/gshadow file, which locks
the group.
All other fields are blank.
5. A directory for user juan is created in the /home/ directory:

~]# ls -l /home
total 4
drwx------. 4 juan juan 4096 Mar 3 18:23 juan


This directory is owned by user juan and group juan. It has read, write, and execute privileges
only for the user juan. All other permissions are denied.
6. The files within the /etc/skel/ directory (which contain default user settings) are copied into the
new /home/juan/ directory:

~]# ls -la /home/juan


total 28
drwx------. 4 juan juan 4096 Mar 3 18:23 .
drwxr-xr-x. 5 root root 4096 Mar 3 18:23 ..
-rw-r--r--. 1 juan juan 18 Jun 22 2010 .bash_logout
-rw-r--r--. 1 juan juan 176 Jun 22 2010 .bash_profile
-rw-r--r--. 1 juan juan 124 Jun 22 2010 .bashrc
drwxr-xr-x. 2 juan juan 4096 Jul 14 2010 .gnome2
drwxr-xr-x. 4 juan juan 4096 Nov 23 15:09 .mozilla

At this point, a locked account called juan exists on the system. To activate it, the administrator must
next assign a password to the account using the passwd command and, optionally, set password aging
guidelines.

3.3.2. Adding a New Group


To add a new group to the system, type the following at a shell prompt as root:

groupadd [options] group_name

…where options are command line options as described in Table 3.3, “groupadd command line
options”.

Table 3.3. groupadd command line options

Option Description
-f, --force When used with -g gid and gid already exists, groupadd will choose another unique gid for the group.
-g gid Group ID for the group, which must be unique and greater than 499.
-K, --key key=value Override /etc/login.defs defaults.
-o, --non-unique Allow creating a group with a duplicate (non-unique) GID.
-p, --password password Use this encrypted password for the new group.
-r Create a system group with a GID less than 500.
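For example, to create a group with a manually specified GID (the group name and GID below are example values only):

groupadd -g 600 webadmins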

3.3.3. Creating Group Directories


System administrators usually like to create a group for each major project and assign people to the
group when they need to access that project's files. With this traditional scheme, file management is difficult;
when someone creates a file, it is associated with the primary group to which they belong. When a single
person works on multiple projects, it becomes difficult to associate the right files with the right group.
However, with the UPG scheme, groups are automatically assigned to files created within a directory with
the setgid bit set. The setgid bit makes managing group projects that share a common directory very
simple because any files a user creates within the directory are owned by the group which owns the
directory.

For example, a group of people need to work on files in the /opt/myproject/ directory. Some people
are trusted to modify the contents of this directory, but not everyone.


1. As root, create the /opt/myproject/ directory by typing the following at a shell prompt:

mkdir /opt/myproject

2. Add the myproject group to the system:

groupadd myproject

3. Associate the contents of the /opt/myproject/ directory with the myproject group:

chown root:myproject /opt/myproject

4. Allow users to create files within the directory, and set the setgid bit (in the mode 2775 below, the leading 2 sets the setgid bit, while 775 grants full access to the owner and the group and read and execute access to others):

chmod 2775 /opt/myproject

At this point, all members of the myproject group can create and edit files in the /opt/myproject/
directory without the administrator having to change file permissions every time users write new files. To
verify that the permissions have been set correctly, run the following command:

~]# ls -l /opt
total 4
drwxrwsr-x. 3 root myproject 4096 Mar 3 18:31 myproject
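As an additional, hypothetical check (the username and timestamp below are example values), a file created in the directory by any member of the myproject group inherits the group ownership from the directory because of the setgid bit:

~]$ touch /opt/myproject/testfile
~]$ ls -l /opt/myproject/testfile
-rw-rw-r--. 1 juan myproject 0 Mar  3 18:35 /opt/myproject/testfile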

3.4. Additional Resources


For more information on how to manage users and groups on Red Hat Enterprise Linux, refer to the
resources listed below.

Installed Documentation
For information about various utilities for managing users and groups, refer to the following manual
pages:

useradd(8) — The manual page for the useradd command documents how to use it to create new
users.
userdel(8) — The manual page for the userdel command documents how to use it to delete
users.
usermod(8) — The manual page for the usermod command documents how to use it to modify
users.
groupadd(8) — The manual page for the groupadd command documents how to use it to create
new groups.
groupdel(8) — The manual page for the groupdel command documents how to use it to delete
groups.
groupmod(8) — The manual page for the groupmod command documents how to use it to modify
group membership.
gpasswd(1) — The manual page for the gpasswd command documents how to manage the
/etc/group file.
grpck(8) — The manual page for the grpck command documents how to use it to verify the
integrity of the /etc/group file.


pwck(8) — The manual page for the pwck command documents how to use it to verify the integrity of
the /etc/passwd and /etc/shadow files.
pwconv(8) — The manual page for the pwconv command documents how to use it to convert
standard passwords to shadow passwords.
pwunconv(8) — The manual page for the pwunconv command documents how to use it to convert
shadow passwords to standard passwords.

For information about related configuration files, see:

group(5) — The manual page for the /etc/group file documents how to use this file to define
system groups.
passwd(5) — The manual page for the /etc/passwd file documents how to use this file to define
user information.
shadow(5) — The manual page for the /etc/shadow file documents how to use this file to set
passwords and account expiration information for the system.

Online Documentation

Red Hat Enterprise Linux 7 Security Guide — The Security Guide for Red Hat Enterprise Linux 7
provides additional information on how to ensure password security and secure the workstation by
enabling password aging and user account locking.
Red Hat Enterprise Linux 7 Storage Administration Guide — The Storage Administration Guide for
Red Hat Enterprise Linux 7 provides instructions on how to manage storage devices and file systems
on this system.

See Also

Chapter 4, Gaining Privileges documents how to gain administrative privileges by using the su and
sudo commands.


Chapter 4. Gaining Privileges


System administrators (and in some cases users) will need to perform certain tasks with administrative
access. Accessing the system as root is potentially dangerous and can lead to widespread damage to
the system and data. This chapter covers ways to gain administrative privileges using setuid programs
such as su and sudo. These programs allow specific users to perform tasks which would normally be
available only to the root user while maintaining a higher level of control and system security.

Refer to the Red Hat Enterprise Linux 7 Security Guide for more information on administrative controls,
potential dangers and ways to prevent data loss resulting from improper use of privileged access.

4.1. The su Command


When a user executes the su command, they are prompted for the root password and, after
authentication, are given a root shell prompt.

Once logged in via the su command, the user is the root user and has absolute administrative access to
the system [1] . In addition, once a user has become root, it is possible for them to use the su command
to change to any other user on the system without being prompted for a password.

Because this program is so powerful, administrators within an organization may wish to limit who has
access to the command.

One of the simplest ways to do this is to add users to the special administrative group called wheel. To
do this, type the following command as root:

usermod -G wheel <username>

In the previous command, replace <username> with the username you want to add to the wheel group.
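Note that the -G option replaces the user's existing list of supplementary groups. To add the wheel group while keeping the user's current supplementary groups, you can append it instead:

usermod -a -G wheel <username>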

You can also use the User Manager to modify group memberships, as follows. Note: you need
Administrator privileges to perform this procedure.

1. Click the System menu on the Panel, point to Administration and then click Users and Groups
to display the User Manager. Alternatively, type the command system-config-users at a shell
prompt.
2. Click the Users tab, and select the required user in the list of users.
3. Click Properties on the toolbar to display the User Properties dialog box (or choose Properties
on the File menu).
4. Click the Groups tab, select the check box for the wheel group, and then click OK.

Refer to Section 3.2, “Using the User Manager Tool” for more information about the User Manager.

After you add the desired users to the wheel group, it is advisable to only allow these specific users to
use the su command. To do this, you will need to edit the PAM configuration file for su:
/etc/pam.d/su. Open this file in a text editor and remove the comment (#) from the following line:

#auth required pam_wheel.so use_uid
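
After the comment character is removed, the line reads as follows:

auth required pam_wheel.so use_uid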

This change means that only members of the administrative group wheel can switch to another user
using the su command.


Note

The root user is part of the wheel group by default.

4.2. The sudo Command


The sudo command offers another approach to giving users administrative access. When trusted users
precede an administrative command with sudo, they are prompted for their own password. Then, when
they have been authenticated and assuming that the command is permitted, the administrative
command is executed as if they were the root user.

The basic format of the sudo command is as follows:

sudo <command>

In the above example, <command> would be replaced by a command normally reserved for the root user,
such as mount.

The sudo command allows for a high degree of flexibility. For instance, only users listed in the
/etc/sudoers configuration file are allowed to use the sudo command and the command is executed
in the user's shell, not a root shell. This means the root shell can be completely disabled as shown in the
Red Hat Enterprise Linux 7 Security Guide.

Each successful authentication using the sudo command is logged to the file /var/log/messages and the
command issued along with the issuer's username is logged to the file /var/log/secure. Should you
require additional logging, use the pam_tty_audit module to enable TTY auditing for specified users
by adding the following line to your /etc/pam.d/system-auth file:

session required pam_tty_audit.so disable=<pattern> enable=<pattern>

where pattern represents a comma-separated listing of users with an optional use of globs. For
example, the following configuration will enable TTY auditing for the root user and disable it for all other
users:

session required pam_tty_audit.so disable=* enable=root

Another advantage of the sudo command is that an administrator can allow different users access to
specific commands based on their needs.

Administrators wanting to edit the sudo configuration file, /etc/sudoers, should use the visudo
command.

To give someone full administrative privileges, type visudo and add a line similar to the following in the
user privilege specification section:

juan ALL=(ALL) ALL

This example states that the user, juan, can use sudo from any host and execute any command.

The example below illustrates the granularity possible when configuring sudo:


%users localhost=/sbin/shutdown -h now

This example states that any user can issue the command /sbin/shutdown -h now as long as it is
issued from the console.
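As a further hedged illustration (the username and command below are example values only), a single user can be limited to one specific administrative command:

carol ALL=/usr/bin/systemctl restart httpd.service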

The man page for sudoers has a detailed listing of options for this file.

Important

There are several potential risks to keep in mind when using the sudo command. You can avoid
them by editing the /etc/sudoers configuration file using visudo as described above. Leaving
the /etc/sudoers file in its default state gives every user in the wheel group unlimited root
access.

By default, sudo stores the sudoer's password for a five minute timeout period. Any
subsequent uses of the command during this period will not prompt the user for a password.
This could be exploited by an attacker if the user leaves his workstation unattended and
unlocked while still being logged in. This behavior can be changed by adding the following line
to the /etc/sudoers file:

Defaults timestamp_timeout=<value>

where <value> is the desired timeout length in minutes. Setting the <value> to 0 causes
sudo to require a password every time.
If a sudoer's account is compromised, an attacker can use sudo to open a new shell with
administrative privileges:

sudo /bin/bash

Opening a new shell as root in this or similar fashion gives the attacker administrative access
for a theoretically unlimited amount of time, bypassing the timeout period specified in the
/etc/sudoers file and never requiring the attacker to input a password for sudo again until
the newly opened session is closed.

4.3. Additional Resources


While programs allowing users to gain administrative privileges are a potential security risk, security itself
is beyond the scope of this particular book. You should therefore refer to the resources listed below for
more information regarding security and privileged access.

Installed Documentation

su(1) — The manual page for su provides information regarding the options available with this
command.
sudo(8) — The manual page for sudo includes a detailed description of this command and lists the
options available for customizing its behavior.
pam(8) — The manual page describing the use of Pluggable Authentication Modules (PAM) for Linux.

Online Documentation


Red Hat Enterprise Linux 7 Security Guide — The Security Guide for Red Hat Enterprise Linux 7
provides a more in-depth look at potential security issues pertaining to setuid programs as well as
techniques used to alleviate these risks.

See Also

Chapter 3, Managing Users and Groups documents how to manage system users and groups in the
graphical user interface and on the command line.

[1] This access is still subject to the restrictions imposed by SELinux, if it is enabled.


Part II. Package Management


All software on a Red Hat Enterprise Linux system is divided into RPM packages, which can be installed,
upgraded, or removed. This part describes how to manage packages on Red Hat Enterprise Linux using
both Yum and the PackageKit suite of graphical package management tools.


Chapter 5. Yum
Yum is the Red Hat package manager that is able to query for information about available packages,
fetch packages from repositories, install and uninstall them, and update an entire system to the latest
available version. Yum performs automatic dependency resolution on packages you are updating,
installing, or removing, and thus is able to automatically determine, fetch, and install all available
dependent packages.

Yum can be configured with new, additional repositories, or package sources, and also provides many
plug-ins which enhance and extend its capabilities. Yum is able to perform many of the same tasks that
RPM can; additionally, many of the command line options are similar. Yum enables easy and simple
package management on a single machine or on groups of them.

Secure package management with GPG-signed packages

Yum provides secure package management by enabling GPG (Gnu Privacy Guard; also known as
GnuPG) signature verification on GPG-signed packages to be turned on for all package
repositories (i.e. package sources), or for individual repositories. When signature verification is
enabled, Yum will refuse to install any packages not GPG-signed with the correct key for that
repository. This means that you can trust that the RPM packages you download and install on
your system are from a trusted source, such as Red Hat, and were not modified during transfer.
Refer to Section 5.5, “Configuring Yum and Yum Repositories” for details on enabling signature-
checking with Yum, or Section A.3, “Checking a Package's Signature” for information on working
with and verifying GPG-signed RPM packages in general.

Yum also enables you to easily set up your own repositories of RPM packages for download and
installation on other machines. When possible, Yum uses parallel download of multiple packages and
metadata to speed up downloading.

Learning Yum is a worthwhile investment because it is often the fastest way to perform system
administration tasks, and it provides capabilities beyond those provided by the PackageKit graphical
package management tools. Refer to Chapter 6, PackageKit for details on using PackageKit.

Yum and superuser privileges

You must have superuser privileges in order to use yum to install, update or remove packages on
your system. All examples in this chapter assume that you have already obtained superuser
privileges by using either the su or sudo command.

5.1. Checking For and Updating Packages


Yum allows you to check whether your system has any updates waiting to be applied. You can list packages
that need to be updated and update them as a whole, or you can update a selected individual package.

5.1.1. Checking For Updates


To see which installed packages on your system have updates available, use the following command:

yum check-update


Example 5.1. Example output of the yum check-update command

The output of yum check-update can look as follows:

~]# yum check-update


Loaded plugins: product-id, refresh-packagekit, subscription-manager
Updating Red Hat repositories.
INFO:rhsm-app.repolib:repos updated: 0
PackageKit.x86_64 0.5.8-2.el6 rhel
PackageKit-glib.x86_64 0.5.8-2.el6 rhel
PackageKit-yum.x86_64 0.5.8-2.el6 rhel
PackageKit-yum-plugin.x86_64 0.5.8-2.el6 rhel
glibc.x86_64 2.11.90-20.el6 rhel
glibc-common.x86_64 2.10.90-22 rhel
kernel.x86_64 2.6.31-14.el6 rhel
kernel-firmware.noarch 2.6.31-14.el6 rhel
rpm.x86_64 4.7.1-5.el6 rhel
rpm-libs.x86_64 4.7.1-5.el6 rhel
rpm-python.x86_64 4.7.1-5.el6 rhel
udev.x86_64 147-2.15.el6 rhel
yum.noarch 3.2.24-4.el6 rhel

The packages in the above output are listed as having updates available. The first package in the list
is PackageKit, the graphical package manager. The line in the example output tells us:

PackageKit — the name of the package


x86_64 — the CPU architecture the package was built for
0.5.8 — the version of the updated package to be installed
rhel — the repository in which the updated package is located

The output also shows us that we can update the kernel (the kernel package), Yum and RPM
themselves (the yum and rpm packages), as well as their dependencies (such as the kernel-firmware,
rpm-libs, and rpm-python packages), all using yum.

5.1.2. Updating Packages


You can choose to update a single package, multiple packages, or all packages at once. If any
dependencies of the package (or packages) you update have updates available themselves, then they
are updated too.

Updating a Single Package


To update a single package, run the following command as root:

yum update package_name


Example 5.2. Updating the udev package

To update the udev package, type:

~]# yum update udev


Loaded plugins: product-id, refresh-packagekit, subscription-manager
Updating Red Hat repositories.
INFO:rhsm-app.repolib:repos updated: 0
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package udev.x86_64 0:147-2.15.el6 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

===========================================================================
Package Arch Version Repository Size
===========================================================================
Updating:
udev x86_64 147-2.15.el6 rhel 337 k

Transaction Summary
===========================================================================
Install 0 Package(s)
Upgrade 1 Package(s)

Total download size: 337 k


Is this ok [y/d/N]:

This output contains several items of interest:

1. Loaded plugins: product-id, refresh-packagekit, subscription-manager
— yum always informs you which Yum plug-ins are installed and enabled. Refer to Section 5.6,
“Yum Plug-ins” for general information on Yum plug-ins, or to Section 5.6.3, “Working with
Plug-ins” for descriptions of specific plug-ins.
2. udev.x86_64 — you can download and install a new udev package.
3. yum presents the update information and then prompts you as to whether you want it to perform
the update; yum runs interactively by default. If you already know which transactions the yum
command plans to perform, you can use the -y option to automatically answer yes to any
questions that yum asks (in which case it runs non-interactively). However, you should always
examine which changes yum plans to make to the system so that you can easily troubleshoot
any problems that might arise. You can also choose to download the package without installing
it. To do so, select the d option at the download prompt. This launches a background download
of the selected package.
If a transaction does go awry, you can view Yum transaction history by using the yum history
command as described in Section 5.4, “Working with Transaction History”.
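For instance, if you have already reviewed the planned transaction and want to apply the udev update shown above without being prompted, you could run the command non-interactively as root:

yum -y update udev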


Updating and installing kernels with Yum

yum always installs a new kernel in the same sense that RPM installs a new kernel when you use
the command rpm -i kernel. Therefore, you do not need to worry about the distinction
between installing and upgrading a kernel package when you use yum: it will do the right thing,
regardless of whether you are using the yum update or yum install command.
When using RPM, on the other hand, it is important to use the rpm -i kernel command
(which installs a new kernel) instead of rpm -U kernel (which replaces the current kernel).
Refer to Section A.2.2, “Installing and Upgrading” for more information on installing/upgrading
kernels with RPM.

Similarly, it is possible to update a package group. Type as root:

yum group update group_name

Here, replace group_name with a name of the package group you wish to update. For more information
on package groups, see Section 5.3, “Working with Package Groups”.

Yum also offers the upgrade command, which is equivalent to update with the obsoletes configuration
option enabled (see Section 5.5.1, “Setting [main] Options”). By default, obsoletes is turned on in
/etc/yum.conf, which makes these two commands equivalent.

Updating All Packages and Their Dependencies


To update all packages and their dependencies, simply enter yum update (without any arguments):

yum update

Updating Security-Related Packages


Discovering which packages have security updates available and then updating those packages quickly
and easily is important. Yum provides a security plug-in for this purpose. The security plug-in extends the yum
command with a set of highly-useful security-centric commands, subcommands and options. Refer to
Section 5.6.3, “Working with Plug-ins” for specific information.
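For example, with the security plug-in installed, commands similar to the following are typically available to list and apply only security-related updates (treat these as illustrative; the exact options available depend on the plug-in version, as documented in Section 5.6.3):

yum check-update --security
yum update --security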

5.1.3. Preserving Configuration File Changes


You will inevitably make changes to the configuration files installed by packages as you use your Red Hat
Enterprise Linux system. RPM, which Yum uses to perform changes to the system, provides a
mechanism for ensuring their integrity. Refer to Section A.2.2, “Installing and Upgrading” for details on
how to manage changes to configuration files across package upgrades.

5.2. Working with Packages


Yum allows you to perform a complete set of operations with software packages, including searching for
packages, viewing information about them, and installing and removing them.

5.2.1. Searching Packages


You can search all RPM package names, descriptions and summaries by using the following command:

yum search term…


This command displays the list of matches for each term.

Example 5.3. Searching for packages matching a specific string

To list all packages that match “meld” or “kompare”, type:

~]$ yum search meld kompare


Loaded plugins: product-id, refresh-packagekit, subscription-manager
Updating Red Hat repositories.
INFO:rhsm-app.repolib:repos updated: 0
============================ Matched: kompare =============================
kdesdk.x86_64 : The KDE Software Development Kit (SDK)
Warning: No matches found for: meld

The yum search command is useful for searching for packages you do not know the name of, but for
which you know a related term.

Filtering the Results


All of Yum's list commands allow you to filter the results by appending one or more glob expressions as
arguments. Glob expressions are normal strings of characters which contain one or more of the wildcard
characters * (which expands to match any character subset) and ? (which expands to match any single
character).

Be careful to escape the glob expressions when passing them as arguments to a yum command,
otherwise the Bash shell will interpret these expressions as pathname expansions, and potentially pass
all files in the current directory that match the glob expressions to yum. To make sure the glob
expressions are passed to yum as intended, either:

escape the wildcard characters by preceding them with a backslash character


double-quote or single-quote the entire glob expression.

Examples in the following section demonstrate usage of both these methods.

5.2.2. Listing Packages


To list information on all installed and available packages type the following at a shell prompt:

yum list all

To list installed and available packages that match inserted glob expressions use the following
command:

yum list glob_expression…


Example 5.4. Listing ABRT-related packages

Packages with various ABRT add-ons and plug-ins either begin with “abrt-addon-”, or “abrt-plugin-”.
To list these packages, type the following command at a shell prompt. Note how the wildcard
characters are escaped with a backslash character:

~]$ yum list abrt-addon\* abrt-plugin\*


Loaded plugins: product-id, refresh-packagekit, subscription-manager
Updating Red Hat repositories.
INFO:rhsm-app.repolib:repos updated: 0
Installed Packages
abrt-addon-ccpp.x86_64 1.0.7-5.el6 @rhel
abrt-addon-kerneloops.x86_64 1.0.7-5.el6 @rhel
abrt-addon-python.x86_64 1.0.7-5.el6 @rhel
abrt-plugin-bugzilla.x86_64 1.0.7-5.el6 @rhel
abrt-plugin-logger.x86_64 1.0.7-5.el6 @rhel
abrt-plugin-sosreport.x86_64 1.0.7-5.el6 @rhel
abrt-plugin-ticketuploader.x86_64 1.0.7-5.el6 @rhel

To list all packages installed on your system use the installed keyword. The rightmost column in the
output lists the repository from which the package was retrieved.

yum list installed glob_expression…

Example 5.5. Listing all installed versions of the krb package

The following example shows how to list all installed packages that begin with “krb” followed by exactly
one character and a hyphen. This is useful when you want to list all versions of a certain component as
these are distinguished by numbers. The entire glob expression is quoted to ensure proper
processing.

~]$ yum list installed "krb?-*"


Loaded plugins: product-id, refresh-packagekit, subscription-manager
Updating Red Hat repositories.
INFO:rhsm-app.repolib:repos updated: 0
Installed Packages
krb5-libs.x86_64 1.8.1-3.el6 @rhel
krb5-workstation.x86_64 1.8.1-3.el6 @rhel

To list all packages in all enabled repositories that are available to install, use the command in the
following form:

yum list available glob_expression…


Example 5.6. Listing available gstreamer plug-ins

For instance, to list all available packages with names that contain “gstreamer” and then “plugin”, run
the following command:

~]$ yum list available gstreamer\*plugin\*


Loaded plugins: product-id, refresh-packagekit, subscription-manager
Updating Red Hat repositories.
INFO:rhsm-app.repolib:repos updated: 0
Available Packages
gstreamer-plugins-bad-free.i686 0.10.17-4.el6 rhel
gstreamer-plugins-base.i686 0.10.26-1.el6 rhel
gstreamer-plugins-base-devel.i686 0.10.26-1.el6 rhel
gstreamer-plugins-base-devel.x86_64 0.10.26-1.el6 rhel
gstreamer-plugins-good.i686 0.10.18-1.el6 rhel

Listing Repositories
To list the repository ID, name, and number of packages for each enabled repository on your system,
use the following command:

yum repolist

To list more information about these repositories, add the -v option. With this option enabled,
information including the file name, overall size, date of the last update, and base URL are displayed for
each listed repository. As an alternative, you can use the repoinfo command that produces the same
output.

yum repolist -v

yum repoinfo

To list both enabled and disabled repositories use the following command. A status column is added to
the output list to show which of the repositories are enabled.

yum repolist all

By passing disabled as a first argument, you can reduce the command output to disabled repositories.
For further specification you can pass the ID or name of repositories or related glob_expressions as
arguments. Note that if there is an exact match between the repository ID or name and the inserted
argument, this repository is listed even if it does not pass the enabled or disabled filter.
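For example, to list only the disabled repositories, type:

yum repolist disabled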

5.2.3. Displaying Package Information


To display information about one or more packages, use the following command (glob expressions are
valid here as well):

yum info package_name…

Replace package_name with the name of the package.


Example 5.7. Displaying information on the abrt package

To display information about the abrt package, type:

~]$ yum info abrt


Loaded plugins: product-id, refresh-packagekit, subscription-manager
Updating Red Hat repositories.
INFO:rhsm-app.repolib:repos updated: 0
Installed Packages
Name : abrt
Arch : x86_64
Version : 1.0.7
Release : 5.el6
Size : 578 k
Repo : installed
From repo : rhel
Summary : Automatic bug detection and reporting tool
URL : https://fedorahosted.org/abrt/
License : GPLv2+
Description: abrt is a tool to help users to detect defects in applications
: and to create a bug report with all informations needed by
: maintainer to fix it. It uses plugin system to extend its
: functionality.

The yum info package_name command is similar to the rpm -q --info package_name command,
but provides as additional information the ID of the Yum repository the RPM package is found in (look for
the From repo: line in the output).

Using yumdb
You can also query the Yum database for alternative and useful information about a package by using
the following command:

yumdb info package_name

This command provides additional information about a package, including the checksum of the package
(and algorithm used to produce it, such as SHA-256), the command given on the command line that was
invoked to install the package (if any), and the reason that the package is installed on the system (where
user indicates it was installed by the user, and dep means it was brought in as a dependency).


Example 5.8. Querying yumdb for information on the yum package

To display additional information about the yum package, type:

~]$ yumdb info yum


Loaded plugins: product-id, refresh-packagekit, subscription-manager
yum-3.2.27-4.el6.noarch
checksum_data =
23d337ed51a9757bbfbdceb82c4eaca9808ff1009b51e9626d540f44fe95f771
checksum_type = sha256
from_repo = rhel
from_repo_revision = 1298613159
from_repo_timestamp = 1298614288
installed_by = 4294967295
reason = user
releasever = 6.1

For more information on the yumdb command, refer to the yumdb(8) manual page.

5.2.4. Installing Packages


To install a single package and all of its non-installed dependencies, enter a command in the following
form (as root):

yum install package_name

You can also install multiple packages simultaneously by appending their names as arguments. To do
so, type as root:

yum install package_name package_name…

If you are installing packages on a multilib system, such as an AMD64 or Intel64 machine, you can
specify the architecture of the package (as long as it is available in an enabled repository) by appending
.arch to the package name:

yum install package_name.arch

Example 5.9. Installing packages on multilib system

To install the sqlite2 package for the i586 architecture, type:

~]# yum install sqlite2.i586

You can use glob expressions to quickly install multiple similarly-named packages. As root:

yum install glob_expression…


Example 5.10. Installing all audacious plugins

Glob expressions are useful when you want to install several packages with similar names. To install
all audacious plug-ins, use the command in the following form:

~]# yum install audacious-plugins-\*

In addition to package names and glob expressions, you can also provide file names to yum install.
If you know the name of the binary you want to install, but not its package name, you can give yum
install the path name. As root, type:

yum install /usr/sbin/named

yum then searches through its package lists, finds the package which provides /usr/sbin/named, if
any, and prompts you as to whether you want to install it.

As you can see in the above examples, the yum install command does not require strictly defined
arguments. It can process various formats of package names and glob expressions, which makes
installation easier for users. On the other hand, it takes some time for yum to parse the input correctly,
especially if you specify a large number of packages. To optimize the package search, you can use the
following commands to explicitly define how to parse the arguments:

yum install-n name

yum install-na name.architecture

yum install-nevra name-epoch:version-release.architecture

With install-n, yum interprets name as the exact name of the package. The install-na command tells
yum that the subsequent argument contains the package name and architecture divided by the dot
character. With install-nevra, yum expects an argument in the form name-epoch:version-
release.architecture. Similarly, you can use yum remove-n, yum remove-na, and yum
remove-nevra when searching for packages to be removed.
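For instance, reusing the sqlite2 package from Example 5.9, you could specify both the name and the architecture explicitly:

yum install-na sqlite2.i586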


Finding which package owns a file

If you know you want to install the package that contains the named binary, but you do not know
in which bin or sbin directory the file is installed, use the yum provides command with a glob
expression:

~]# yum provides "*bin/named"


Loaded plugins: product-id, refresh-packagekit, subscription-manager
Updating Red Hat repositories.
INFO:rhsm-app.repolib:repos updated: 0
32:bind-9.7.0-4.P1.el6.x86_64 : The Berkeley Internet Name Domain (BIND)
: DNS (Domain Name System) server
Repo : rhel
Matched from:
Filename : /usr/sbin/named

yum provides "*/file_name" is a common and useful trick to find the package(s) that
contain file_name.


Example 5.11. Installation Process

The following example provides an overview of installation with use of yum. Imagine you want to
download and install the latest version of the httpd package. To do so, execute as root:

~]# yum install httpd


Loaded plugins: product-id, refresh-packagekit, rhnplugin, subscription-manager
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-12.el7 will be updated
---> Package httpd.x86_64 0:2.4.6-13.el7 will be an update
--> Processing Dependency: 2.4.6-13.el7 for package: httpd-2.4.6-13.el7.x86_64
--> Running transaction check
---> Package httpd-tools.x86_64 0:2.4.6-12.el7 will be updated
---> Package httpd-tools.x86_64 0:2.4.6-13.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

After executing the above command, yum loads the necessary plug-ins and runs the transaction
check. In this case, httpd is already installed. Since the installed package is older than the latest
currently available version, it will be updated. The same applies to the httpd-tools package that httpd
depends on. Then, a transaction summary is displayed:

================================================================================
Package Arch Version Repository Size
================================================================================
Updating:
httpd x86_64 2.4.6-13.el7 rhel-x86_64-server-7 1.2 M
Updating for dependencies:
httpd-tools x86_64 2.4.6-13.el7 rhel-x86_64-server-7 77 k

Transaction Summary
================================================================================
Upgrade 1 Package (+1 Dependent package)

Total size: 1.2 M

Is this ok [y/d/N]:

In this step yum prompts you to confirm the installation. Apart from y (yes) and N (no) options, you
can choose d (download only) to download the packages but not to install them directly. If you choose
y, the installation proceeds with the following messages until it is finished successfully.


Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Updating : httpd-tools-2.4.6-13.el7.x86_64 1/4
Updating : httpd-2.4.6-13.el7.x86_64 2/4
Cleanup : httpd-2.4.6-12.el7.x86_64 3/4
Cleanup : httpd-tools-2.4.6-12.el7.x86_64 4/4
Verifying : httpd-2.4.6-13.el7.x86_64 1/4
Verifying : httpd-tools-2.4.6-13.el7.x86_64 2/4
Verifying : httpd-tools-2.4.6-12.el7.x86_64 3/4
Verifying : httpd-2.4.6-12.el7.x86_64 4/4

Updated:
httpd.x86_64 0:2.4.6-13.el7

Dependency Updated:
httpd-tools.x86_64 0:2.4.6-13.el7

Complete!

To install a previously-downloaded package from the local directory on your system, use the following
command:

yum localinstall path

Replace path with a path to the package you wish to install.

5.2.5. Downloading Packages


As shown in Example 5.11, “Installation Process”, at a certain point of the installation process you are
prompted to confirm the installation with the following message:

...
Total size: 1.2 M
Is this ok [y/d/N]:
...

By choosing the d option, you tell yum to download the packages without installing them immediately.
You can install these packages later in off-line mode with the yum localinstall command or you
can share them with a different device. Downloaded packages are saved in one of the subdirectories of
the cache directory, which is /var/cache/yum/$basearch/$releasever/packages/ by default.
The downloading proceeds in background mode so that you can use yum for other operations in
parallel.
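As a hedged illustration only (the exact path under the cache directory depends on your architecture, release version, and repository, so the path below is hypothetical), installing a previously downloaded package could look similar to:

yum localinstall /var/cache/yum/x86_64/7Server/rhel-x86_64-server-7/packages/httpd-2.4.6-13.el7.x86_64.rpm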

5.2.6. Removing Packages


Similarly to package installation, Yum allows you to uninstall (remove, in RPM and Yum terminology)
packages. To uninstall a particular package, as well as any packages that depend on it, run the following
command as root:

yum remove package_name…

As when you install multiple packages, you can remove several at once by adding more package names
to the command.

Example 5.12. Removing several packages

To remove totem, rhythmbox, and sound-juicer, type the following at a shell prompt:

~]# yum remove totem rhythmbox sound-juicer

Similar to install, remove can take these arguments:

package names
glob expressions
file lists
package provides

Removing a package when other packages depend on it

Yum is not able to remove a package without also removing packages which depend on it. This
type of operation can only be performed by RPM, is not advised, and can potentially leave your
system in a non-functioning state or cause applications to misbehave and/or crash. For further
information, refer to Section A.2.4, “Uninstalling” in the RPM chapter.

5.3. Working with Package Groups


A package group is a collection of packages that serve a common purpose, for instance System Tools or
Sound and Video. Installing a package group pulls a set of dependent packages, saving time
considerably. The yum groups command is a top-level command that covers all the operations that act
on package groups in Yum.

5.3.1. Listing Package Groups


The summary option is used to view the number of installed groups, available groups, available
environment groups, and both installed and available language groups:

yum groups summary

Example 5.13. Example output of yum groups summary

~]$ yum groups summary


Loaded plugins: product-id, refresh-packagekit, subscription-manager
Available Environment Groups: 12
Installed Groups: 10
Available Groups: 12

To list all package groups from yum repositories add the list option. You can filter the command output
by group names.

yum group list glob_expression…


Several optional arguments can be passed to this command, including hidden to list also groups not
marked as user visible, and ids to list group IDs. You can add language, environment,
installed, or available options to reduce the command output to a specific group type.

To list mandatory and optional packages contained in a particular group, use the following command:

yum group info glob_expression…

Example 5.14. Viewing information on the LibreOffice package group

~]$ yum group info LibreOffice


Loaded plugins: product-id, refresh-packagekit, subscription-manager

Group: LibreOffice
Group-Id: libreoffice
Description: LibreOffice Productivity Suite
Mandatory Packages:
=libreoffice-calc
libreoffice-draw
-libreoffice-emailmerge
libreoffice-graphicfilter
=libreoffice-impress
=libreoffice-math
=libreoffice-writer
+libreoffice-xsltfilter
Optional Packages:
libreoffice-base
libreoffice-pyuno

As you can see in the above example, the packages included in the package group can have different
states that are marked with the following symbols:

" - " — Package is not installed and it will not be installed as a part of the package group.
" + " — Package is not installed but it will be installed on next yum upgrade or yum group
upgrade.
" = " — Package is installed and it was installed as a part of the package group.
no symbol — Package is installed but it was installed outside of the package group. This means that
the yum group remove command will not remove these packages.

These distinctions take place only when the group_command configuration parameter is set to
objects, which is the default setting. Set this parameter to a different value if you do not want yum to
track whether a package was installed as a part of the group or separately, which will make "no symbol"
packages equivalent to "=" packages. Please note that the yum-cron package uses
group_command=simple as a default setting.

You can alter the above package states with use of the yum group mark command. For example, yum
group mark packages marks any given installed packages as members of a specified group. To
avoid installation of new packages on group update, use yum group mark blacklist. Refer to the
yum man page for more information on capabilities of yum group mark.


Note

You can identify an environmental group with use of the @^ prefix and a package group can be
marked with @. When using yum group list, info, install, or remove, pass
@group_name to specify a package group, @^group_name to specify an environmental group, or
group_name to include both.

5.3.2. Installing a Package Group


Each package group has a name and a groupid. To list the names of all package groups, and their
groupids, which are displayed in parentheses, type:

yum group list ids

Example 5.15. Finding name and groupid of a package group

Imagine you want to install a package group related to the KDE desktop environment, but you cannot
remember the exact name or id of the package group. To find the information, type:

~]$ yum group list ids kde\*


Loaded plugins: product-id, refresh-packagekit, subscription-manager
Available Groups:
KDE Desktop (kde-desktop)
Done

You can install a package group by passing its full group name (without the groupid part) to the group
install command. As root, type:

yum group install group_name

You can also install by groupid. As root, execute the following command:

yum group install groupid

You can pass the groupid or quoted name to the install command if you prepend it with an @-
symbol, which tells yum that you want to perform group install. As root, type:

yum install @group

Replace group with the groupid or quoted group name. Similar logic applies to environmental groups:

yum install @^group


Example 5.16. Four equivalent ways of installing the KDE Desktop group

As mentioned before, you can use four alternative, but equivalent ways to install a package group. For
KDE Desktop, the commands look as follows:

~]# yum group install "KDE Desktop"


~]# yum group install kde-desktop
~]# yum install @"KDE Desktop"
~]# yum install @kde-desktop

5.3.3. Removing a Package Group


You can remove a package group using syntax congruent with the install syntax, with use of either the
name of the package group or its id. As root, type:

yum group remove group_name

yum group remove groupid

Also, you can pass the groupid or quoted name to the remove command if you prepend it with an @-
symbol, which tells yum that you want to perform group remove. As root, type:

yum remove @group

Replace group with the groupid or quoted group name. Similarly, you can replace an environmental
group:

yum remove @^group

Example 5.17. Four equivalent ways of removing the KDE Desktop group

Similarly to install, you can use four alternative, but equivalent ways to remove a package group. For
KDE Desktop, the commands look as follows:

~]# yum group remove "KDE Desktop"


~]# yum group remove kde-desktop
~]# yum remove @"KDE Desktop"
~]# yum remove @kde-desktop

5.4. Working with Transaction History


The yum history command allows users to review information about a timeline of Yum transactions,
the dates and times they occurred, the number of packages affected, whether transactions succeeded or
were aborted, and if the RPM database was changed between transactions. Additionally, this command
can be used to undo or redo certain transactions. All history data are stored in the history DB in the
/var/lib/yum/history/ directory.

5.4.1. Listing Transactions


To display a list of the twenty most recent transactions, as root, either run yum history with no additional
arguments, or type the following at a shell prompt:

yum history list

To display all transactions, add the all keyword:

yum history list all

To display only transactions in a given range, use the command in the following form:

yum history list start_id..end_id

You can also list only transactions regarding a particular package or packages. To do so, use the
command with a package name or a glob expression:

yum history list glob_expression…

Example 5.18. Listing the five oldest transactions

In the output of yum history list, the most recent transaction is displayed at the top of the list. To
display information about the five oldest transactions stored in the history database, type:

~]# yum history list 1..5


Loaded plugins: product-id, refresh-packagekit, subscription-manager
ID | Login user | Date and time | Action(s) | Altered
-----------------------------------------------------------------------------
--
5 | Jaromir ... <jhradilek> | 2013-07-29 15:33 | Install | 1
4 | Jaromir ... <jhradilek> | 2013-07-21 15:10 | Install | 1
3 | Jaromir ... <jhradilek> | 2013-07-16 15:27 | I, U | 73
2 | System <unset> | 2013-07-16 15:19 | Update | 1
1 | System <unset> | 2013-07-16 14:38 | Install | 1106
history list

All forms of the yum history list command produce tabular output with each row consisting of the
following columns:

ID — an integer value that identifies a particular transaction.


Login user — the name of the user whose login session was used to initiate a transaction. This
information is typically presented in the Full Name <username> form. For transactions that were not
issued by a user (such as an automatic system update), System <unset> is used instead.
Date and time — the date and time when a transaction was issued.
Action(s) — a list of actions that were performed during a transaction as described in Table 5.1,
“Possible values of the Action(s) field”.
Altered — the number of packages that were affected by a transaction, possibly followed by
additional information as described in Table 5.2, “Possible values of the Altered field”.


Table 5.1. Possible values of the Action(s) field

Action Abbreviation Description
Downgrade D At least one package has been downgraded to an older version.
Erase E At least one package has been removed.
Install I At least one new package has been installed.
Obsoleting O At least one package has been marked as obsolete.
Reinstall R At least one package has been reinstalled.
Update U At least one package has been updated to a newer version.

Table 5.2. Possible values of the Altered field

Symbol Description
< Before the transaction finished, the rpmdb database was changed outside Yum.
> After the transaction finished, the rpmdb database was changed outside Yum.
* The transaction failed to finish.
# The transaction finished successfully, but yum returned a non-zero exit code.
E The transaction finished successfully, but an error or a warning was displayed.
P The transaction finished successfully, but problems already existed in the rpmdb database.
s The transaction finished successfully, but the --skip-broken command line option was used and certain packages were skipped.

To synchronize the rpmdb or yumdb database contents for any installed package with the currently
used rpmdb or yumdb database, type the following:

yum history sync

To display some overall statistics about the currently used history DB, use the following command:

yum history stats

70
Chapter 5. Yum

Example 5.19. Example output of yum history stats

~]# yum history stats


Loaded plugins: langpacks, presto, refresh-packagekit
File : //var/lib/yum/history/history-2012-08-15.sqlite
Size : 2,766,848
Transactions: 41
Begin time : Wed Aug 15 16:18:25 2012
End time : Wed Feb 27 14:52:30 2013
Counts :
NEVRAC : 2,204
NEVRA : 2,204
NA : 1,759
NEVR : 2,204
rpm DB : 2,204
yum DB : 2,204
history stats

Yum also allows you to display a summary of all past transactions. To do so, run the command in the
following form as root:

yum history summary

To display only transactions in a given range, type:

yum history summary start_id..end_id

Similarly to the yum history list command, you can also display a summary of transactions
regarding a certain package or packages by supplying a package name or a glob expression:

yum history summary glob_expression…

Example 5.20. Summary of the five latest transactions

~]# yum history summary 1..5


Loaded plugins: product-id, refresh-packagekit, subscription-manager
Login user | Time | Action(s) | Altered
-------------------------------------------------------------------------------
Jaromir ... <jhradilek> | Last day | Install | 1
Jaromir ... <jhradilek> | Last week | Install | 1
Jaromir ... <jhradilek> | Last 2 weeks | I, U | 73
System <unset> | Last 2 weeks | I, U | 1107
history summary

All forms of the yum history summary command produce simplified tabular output similar to the
output of yum history list.

As shown above, both yum history list and yum history summary are oriented towards
transactions, and although they allow you to display only transactions related to a given package or
packages, they lack important details, such as package versions. To list transactions from the
perspective of a package, run the following command as root:


yum history package-list glob_expression…

Example 5.21. Tracing the history of a package

For example, to trace the history of subscription-manager and related packages, type the following at
a shell prompt:

~]# yum history package-list subscription-manager\*


Loaded plugins: product-id, refresh-packagekit, subscription-manager
ID | Action(s) | Package
-------------------------------------------------------------------------------
 3 | Updated | subscription-manager-0.95.11-1.el6.x86_64
 3 | Update  |                      0.95.17-1.el6_1.x86_64
 3 | Updated | subscription-manager-firstboot-0.95.11-1.el6.x86_64
 3 | Update  |                      0.95.17-1.el6_1.x86_64
3 | Updated | subscription-manager-gnome-0.95.11-1.el6.x86_64
3 | Update | 0.95.17-1.el6_1.x86_64
1 | Install | subscription-manager-0.95.11-1.el6.x86_64
1 | Install | subscription-manager-firstboot-0.95.11-1.el6.x86_64
1 | Install | subscription-manager-gnome-0.95.11-1.el6.x86_64
history package-list

In this example, three packages were installed during the initial system installation: subscription-
manager, subscription-manager-firstboot, and subscription-manager-gnome. In the third transaction,
all these packages were updated from version 0.95.11 to version 0.95.17.

5.4.2. Examining Transactions


To display the summary of a single transaction, as root, use the yum history summary command in
the following form:

yum history summary id

To examine a particular transaction or transactions in more detail, run the following command as root:

yum history info id…

The id argument is optional and when you omit it, yum automatically uses the last transaction. Note that
when specifying more than one transaction, you can also use a range:

yum history info start_id..end_id


Example 5.22. Example output of yum history info

The following is sample output for two transactions, each installing one new package:

~]# yum history info 4..5


Loaded plugins: product-id, refresh-packagekit, subscription-manager
Transaction ID : 4..5
Begin time : Thu Jul 21 15:10:46 2011
Begin rpmdb : 1107:0c67c32219c199f92ed8da7572b4c6df64eacd3a
End time : 15:33:15 2011 (22 minutes)
End rpmdb : 1109:1171025bd9b6b5f8db30d063598f590f1c1f3242
User : Jaromir Hradilek <jhradilek>
Return-Code : Success
Command Line : install screen
Command Line : install yum-plugin-fs-snapshot
Transaction performed with:
Installed rpm-4.8.0-16.el6.x86_64
Installed yum-3.2.29-17.el6.noarch
Installed yum-metadata-parser-1.1.2-16.el6.x86_64
Packages Altered:
Install screen-4.0.3-16.el6.x86_64
Install yum-plugin-fs-snapshot-1.1.30-6.el6.noarch
history info

You can also view additional information, such as what configuration options were used at the time of the
transaction, or from what repository and why certain packages were installed. To determine what
additional information is available for a certain transaction, type the following at a shell prompt as root:

yum history addon-info id

Similarly to yum history info, when no id is provided, yum automatically uses the latest transaction.
Another way to refer to the latest transaction is to use the last keyword:

yum history addon-info last

Example 5.23. Example output of yum history addon-info

For the fourth transaction in the history, the yum history addon-info command provides the
following output:

~]# yum history addon-info 4


Loaded plugins: product-id, refresh-packagekit, subscription-manager
Transaction ID: 4
Available additional history information:
config-main
config-repos
saved_tx

history addon-info

In the output of the yum history addon-info command, three types of information are available:

config-main — global Yum options that were in use during the transaction. Refer to Section 5.5.1,
“Setting [main] Options” for information on how to change global options.
config-repos — options for individual Yum repositories. Refer to Section 5.5.2, “Setting
[repository] Options” for information on how to change options for individual repositories.
saved_tx — the data that can be used by the yum load-transaction command in order to
repeat the transaction on another machine (see below).

To display selected type of additional information, run the following command as root:

yum history addon-info id information
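
For instance, to display only the saved transaction data for the fourth transaction shown in the previous example, you might type:

~]# yum history addon-info 4 saved_tx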

5.4.3. Reverting and Repeating Transactions


Apart from reviewing the transaction history, the yum history command provides means to revert or
repeat a selected transaction. To revert a transaction, type the following at a shell prompt as root:

yum history undo id

To repeat a particular transaction, as root, run the following command:

yum history redo id

Both commands also accept the last keyword to undo or repeat the latest transaction.
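
For example, to revert the most recent transaction without looking up its ID first, you could run:

~]# yum history undo last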

Note that both the yum history undo and yum history redo commands only revert or repeat the
steps that were performed during a transaction. If the transaction installed a new package, the yum
history undo command will uninstall it, and if the transaction uninstalled a package, the command will
install it again. The command also attempts to downgrade all updated packages to their previous
versions, if these older packages are still available. If you need to restore the system to the state before
an update, consider using the fs-snapshot plug-in described in Section 5.6.3, “Working with Plug-ins”.

When managing several identical systems, Yum also allows you to perform a transaction on one of them,
store the transaction details in a file, and after a period of testing, repeat the same transaction on the
remaining systems as well. To store the transaction details to a file, type the following at a shell prompt
as root:

yum -q history addon-info id saved_tx > file_name

Once you copy this file to the target system, you can repeat the transaction by using the following
command as root:

yum load-transaction file_name

You can configure load-transaction to ignore missing packages or rpmdb version. For more
information on these configuration options see the yum.conf man page.
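
Putting the two steps together, a minimal sketch of the workflow might look like this (the transaction ID 31 and the file name tx.yumtx are hypothetical):

~]# yum -q history addon-info 31 saved_tx > tx.yumtx

Then, after copying tx.yumtx to the target system:

~]# yum load-transaction tx.yumtx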

5.4.4. Starting New Transaction History


Yum stores the transaction history in a single SQLite database file. To start a new transaction history, run
the following command as root:

yum history new

This will create a new, empty database file in the /var/lib/yum/history/ directory. The old


transaction history will be kept, but will not be accessible as long as a newer database file is present in
the directory.

5.5. Configuring Yum and Yum Repositories


The configuration file for yum and related utilities is located at /etc/yum.conf. This file contains one
mandatory [main] section, which allows you to set Yum options that have global effect, and can also
contain one or more [repository] sections, which allow you to set repository-specific options.
However, it is recommended to define individual repositories in new or existing .repo files in the
/etc/yum.repos.d/ directory. The values you define in individual [repository] sections override
the values set in the [main] section of the /etc/yum.conf file.

This section shows you how to:

set global Yum options by editing the [main] section of the /etc/yum.conf configuration file;
set options for individual repositories by editing the [repository] sections in /etc/yum.conf and
.repo files in the /etc/yum.repos.d/ directory;
use Yum variables in /etc/yum.conf and files in the /etc/yum.repos.d/ directory so that
dynamic version and architecture values are handled correctly;
add, enable, and disable Yum repositories on the command line; and,
set up your own custom Yum repository.

5.5.1. Setting [main] Options


The /etc/yum.conf configuration file contains exactly one [main] section, and while some of the
key-value pairs in this section affect how yum operates, others affect how Yum treats repositories. You
can add many additional options under the [main] section heading in /etc/yum.conf.

A sample /etc/yum.conf configuration file can look like this:

[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=3

[comments abridged]

# PUT YOUR REPOS HERE OR IN separate files named file.repo


# in /etc/yum.repos.d

The following are the most commonly-used options in the [main] section:

assumeyes=value
The assumeyes option determines whether or not yum prompts for confirmation of critical
actions. Replace value with one of:

0 — (default). yum should prompt for confirmation of critical actions it performs.


1 — Do not prompt for confirmation of critical yum actions. If assumeyes=1 is set, yum behaves
in the same way as the command line options -y and --assumeyes.

cachedir=directory
Use this option to set the directory where Yum should store its cache and database files.
Replace directory with an absolute path to the directory. By default, Yum's cache directory is
/var/cache/yum/$basearch/$releasever.

Refer to Section 5.5.3, “Using Yum Variables” for descriptions of the $basearch and
$releasever Yum variables.

debuglevel=value
This option specifies the detail of debugging output produced by yum . Here, value is an integer
between 1 and 10. Setting a higher debuglevel value causes yum to display more detailed
debugging output. debuglevel=0 disables debugging output, while debuglevel=2 is the
default.

exactarch=value
With this option, you can set yum to consider exact architecture when updating already installed
packages. Replace value with:

0 — Do not take into account the exact architecture when updating packages.

1 — (default). Consider the exact architecture when updating packages. With this setting, yum
does not install an i686 package to update an i386 package already installed on the system.

exclude=package_name [more_package_names]
The exclude option allows you to exclude packages by keyword during installation or updating.
Listing multiple packages for exclusion can be accomplished by quoting a space-delimited list of
packages. Shell glob expressions using wildcards (for example, * and ?) are allowed.
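
For example, to keep yum from installing or updating any kernel or gnome packages (hypothetical choices shown only as a sketch), you could add a line such as the following to the [main] section:

exclude=kernel* gnome*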

gpgcheck=value
Use the gpgcheck option to specify if yum should perform a GPG signature check on
packages. Replace value with:

0 — Disable GPG signature-checking on packages in all repositories, including local package
installation.

1 — (default). Enable GPG signature-checking on all packages in all repositories, including local
package installation. With gpgcheck enabled, all packages' signatures are checked.

If this option is set in the [main] section of the /etc/yum.conf file, it sets the GPG-checking
rule for all repositories. However, you can also set gpgcheck=value for individual repositories
instead; that is, you can enable GPG-checking on one repository while disabling it on another.
Setting gpgcheck=value for an individual repository in its corresponding .repo file overrides
the default if it is present in /etc/yum.conf.

For more information on GPG signature-checking, refer to Section A.3, “Checking a Package's
Signature”.


group_command=value
Use the group_command option to specify how the yum group install, yum group
upgrade, and yum group remove commands handle a package group. Replace value with:

simple — Install all members of a package group. Upgrade only previously installed packages,
but do not install packages that were added to the group in the meantime.

compat — Similar to simple, but yum upgrade also installs packages that were added to the
group since the previous upgrade.

objects — (default). With this option, yum keeps track of the previously-installed groups and
distinguishes between packages installed as a part of the group and packages installed
separately. See Example 5.14, “Viewing information on the LibreOffice package group”.

group_package_types=package_type [more_package_types]
Here you can specify which type of packages (optional, default or mandatory) is installed when
the yum group install command is called. The default and mandatory package types are
chosen by default.

history_record=value
With this option, you can set yum to record transaction history. Replace value with one of:

0 — yum should not record history entries for transactions.

1 — (default). yum should record history entries for transactions. This operation takes a certain
amount of disk space, and some extra time in the transactions, but it provides a lot of
information about past operations, which can be displayed with the yum history command.
history_record=1 is the default.

For more information on the yum history command, refer to Section 5.4, “Working with
Transaction History”.

history_record and rpmdb

yum uses history records to detect modifications to the rpmdb that have been done
outside of yum. In such a case, yum displays a warning and automatically searches for
possible problems caused by altering the rpmdb. With history_record turned off, yum is
not able to detect these changes and no automatic checks are performed.

installonlypkgs=space-separated list of packages

Here you can provide a space-separated list of packages which yum can install, but will never
update. Refer to the yum.conf(5) manual page for the list of packages which are install-only by
default.

If you add the installonlypkgs directive to /etc/yum.conf, you should ensure that you
list all of the packages that should be install-only, including any of those listed under the
installonlypkgs section of yum.conf(5). In particular, kernel packages should always be
listed in installonlypkgs (as they are by default), and installonly_limit should
always be set to a value greater than 2 so that a backup kernel is always available in case the
default one fails to boot.

installonly_limit=value
This option sets how many packages listed in the installonlypkgs directive can be installed
at the same time. Replace value with an integer representing the maximum number of versions
that can be installed simultaneously for any single package listed in installonlypkgs.

The defaults for the installonlypkgs directive include several different kernel packages, so
be aware that changing the value of installonly_limit also affects the maximum number
of installed versions of any single kernel package. The default value listed in /etc/yum.conf
is installonly_limit=3, and it is not recommended to decrease this value, particularly
below 2.

keepcache=value
The keepcache option determines whether Yum keeps the cache of headers and packages
after successful installation. Here, value is one of:

0 — (default). Do not retain the cache of headers and packages after a successful installation.

1 — Retain the cache after a successful installation.

logfile=file_name
To specify the location for logging output, replace file_name with an absolute path to the file in
which yum should write its logging output. By default, yum logs to /var/log/yum.log.

max_connections=number
Here, number stands for the maximum number of simultaneous connections; the default is 5.

multilib_policy=value
The multilib_policy option sets the installation behavior if several architecture versions
are available for package install. Here, value stands for:

best — install the best-choice architecture for this system. For example, setting
multilib_policy=best on an AMD64 system causes yum to install 64-bit versions of all
packages.

all — always install every possible architecture for every package. For example, with
multilib_policy set to all on an AMD64 system, yum would install both the i586 and
AMD64 versions of a package, if both were available.

obsoletes=value
The obsoletes option enables the obsoletes processing logic during updates. When one package
declares in its spec file that it obsoletes another package, the latter package is replaced by the
former package when the former package is installed. Obsoletes are declared, for example,
when a package is renamed. Replace value with one of:

0 — Disable yum's obsoletes processing logic when performing updates.

1 — (default). Enable yum's obsoletes processing logic when performing updates.


plugins=value
This is a global switch to enable or disable yum plug-ins. Here, value is one of:

0 — Disable all Yum plug-ins globally.

Disabling all plug-ins is not advised

Disabling all plug-ins is not advised because certain plug-ins provide important Yum
services. In particular, rhnplugin provides support for RHN Classic, and product-id
and subscription-manager plug-ins provide support for the certificate-based Content
Delivery Network (CDN). Disabling plug-ins globally is provided as a convenience
option, and is generally only recommended when diagnosing a potential problem with
Yum.

1 — (default). Enable all Yum plug-ins globally. With plugins=1, you can still disable a specific
Yum plug-in by setting enabled=0 in that plug-in's configuration file.

For more information about various Yum plug-ins, refer to Section 5.6, “Yum Plug-ins”. For
further information on controlling plug-ins, see Section 5.6.1, “Enabling, Configuring, and
Disabling Yum Plug-ins”.

reposdir=directory
Here, directory is an absolute path to the directory where .repo files are located. All .repo
files contain repository information (similar to the [repository] sections of /etc/yum.conf).
yum collects all repository information from .repo files and the [repository] section of the
/etc/yum.conf file to create a master list of repositories to use for transactions. If reposdir
is not set, yum uses the default directory /etc/yum.repos.d/.

retries=value
This option sets the number of times yum should attempt to retrieve a file before returning an
error. value is an integer 0 or greater. Setting value to 0 makes yum retry forever. The default
value is 10.

For a complete list of available [main] options, refer to the [main] OPTIONS section of the
yum.conf(5) manual page.

5.5.2. Setting [repository] Options


The [repository] sections, where repository is a unique repository ID such as
my_personal_repo (spaces are not permitted), allow you to define individual Yum repositories.

The following is a bare-minimum example of the form a [repository] section takes:

[repository]
name=repository_name
baseurl=repository_url
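
For instance, a complete (hypothetical) repository definition following this form might look like:

[my_personal_repo]
name=My Personal Repository
baseurl=file:///srv/my_repo/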


Every [repository] section must contain the following directives:

name=repository_name
Here, repository_name is a human-readable string describing the repository.

baseurl=repository_url
Replace repository_url with a URL to the directory where the repodata directory of a
repository is located:

If the repository is available over HTTP, use: http://path/to/repo


If the repository is available over FTP, use: ftp://path/to/repo
If the repository is local to the machine, use: file:///path/to/local/repo
If a specific online repository requires basic HTTP authentication, you can specify your
username and password by prepending it to the URL as username:password@link. For
example, if a repository on http://www.example.com/repo/ requires a username of “user” and
a password of “password”, then the baseurl link could be specified as
http://user:password@www.example.com/repo/.

Usually this URL is an HTTP link, such as:

baseurl=http://path/to/repo/releases/$releasever/server/$basearch/os/

Note that Yum always expands the $releasever, $arch, and $basearch variables in
URLs. For more information about Yum variables, refer to Section 5.5.3, “Using Yum Variables”.

Other useful [repository] directives are:

enabled=value
This is a simple way to tell yum to use or ignore a particular repository. Here, value is one of:

0 — Do not include this repository as a package source when performing updates and installs.
This is an easy way of quickly turning repositories on and off, which is useful when you desire a
single package from a repository that you do not want to enable for updates or installs.

1 — Include this repository as a package source.

Turning repositories on and off can also be performed by passing either the --
enablerepo=repo_name or --disablerepo=repo_name option to yum, or through the
Add/Remove Software window of the PackageKit utility.

async=value
Controls parallel downloading of repository packages. Here, value is one of:

auto — (default) parallel downloading is used if possible, which means that yum automatically
disables it for repositories created by plug-ins to avoid failures.

on — parallel downloading is enabled for the repository.

off — parallel downloading is disabled for the repository.


Many more [repository] options exist, many of which have the same form and function as certain
[main] options. For a complete list, refer to the [repository] OPTIONS section of the yum.conf(5)
manual page.

Example 5.24. A sample /etc/yum.repos.d/redhat.repo file

The following is a sample /etc/yum.repos.d/redhat.repo file:

#
# Red Hat Repositories
# Managed by (rhsm) subscription-manager
#

[red-hat-enterprise-linux-scalable-file-system-for-rhel-6-entitlement-rpms]
name = Red Hat Enterprise Linux Scalable File System (for RHEL 6 Entitlement)
(RPMs)
baseurl = https://cdn.redhat.com/content/dist/rhel/entitlement-
6/releases/$releasever/$basearch/scalablefilesystem/os
enabled = 1
gpgcheck = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sslverify = 1
sslcacert = /etc/rhsm/ca/redhat-uep.pem
sslclientkey = /etc/pki/entitlement/key.pem
sslclientcert = /etc/pki/entitlement/11300387955690106.pem

[red-hat-enterprise-linux-scalable-file-system-for-rhel-6-entitlement-source-
rpms]
name = Red Hat Enterprise Linux Scalable File System (for RHEL 6 Entitlement)
(Source RPMs)
baseurl = https://cdn.redhat.com/content/dist/rhel/entitlement-
6/releases/$releasever/$basearch/scalablefilesystem/source/SRPMS
enabled = 0
gpgcheck = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sslverify = 1
sslcacert = /etc/rhsm/ca/redhat-uep.pem
sslclientkey = /etc/pki/entitlement/key.pem
sslclientcert = /etc/pki/entitlement/11300387955690106.pem

[red-hat-enterprise-linux-scalable-file-system-for-rhel-6-entitlement-debug-
rpms]
name = Red Hat Enterprise Linux Scalable File System (for RHEL 6 Entitlement)
(Debug RPMs)
baseurl = https://cdn.redhat.com/content/dist/rhel/entitlement-
6/releases/$releasever/$basearch/scalablefilesystem/debug
enabled = 0
gpgcheck = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sslverify = 1
sslcacert = /etc/rhsm/ca/redhat-uep.pem
sslclientkey = /etc/pki/entitlement/key.pem
sslclientcert = /etc/pki/entitlement/11300387955690106.pem

5.5.3. Using Yum Variables


You can use and reference the following built-in variables in yum commands and in all Yum configuration
files (that is, /etc/yum.conf and all .repo files in the /etc/yum.repos.d/ directory):

$releasever
You can use this variable to reference the release version of Red Hat Enterprise Linux. Yum
obtains the value of $releasever from the distroverpkg=value line in the
/etc/yum.conf configuration file. If there is no such line in /etc/yum.conf, then yum infers
the correct value by deriving the version number from the redhat-release package.

$arch
You can use this variable to refer to the system's CPU architecture as returned when calling
Python's os.uname() function. Valid values for $arch include: i586, i686 and x86_64.

$basearch
You can use $basearch to reference the base architecture of the system. For example, i686
and i586 machines both have a base architecture of i386, and AMD64 and Intel64 machines
have a base architecture of x86_64.

$YUM0-9
These ten variables are each replaced with the value of any shell environment variables with the
same name. If one of these variables is referenced (in /etc/yum.conf for example) and a
shell environment variable with the same name does not exist, then the configuration file
variable is not replaced.

To define a custom variable or to override the value of an existing one, create a file with the same name
as the variable (without the “$” sign) in the /etc/yum/vars/ directory, and add the desired value on its
first line.

For example, repository descriptions often include the operating system name. To define a new variable
called $osname, create a new file with “Red Hat Enterprise Linux 7” on the first line and save it as
/etc/yum/vars/osname:

~]# echo "Red Hat Enterprise Linux 7" > /etc/yum/vars/osname

Instead of “Red Hat Enterprise Linux 7”, you can now use the following in the .repo files:

name=$osname $releasever

5.5.4. Viewing the Current Configuration


To display the current values of global Yum options (that is, the options specified in the [main] section
of the /etc/yum.conf file), run the yum-config-manager command with no command line options:

yum-config-manager

To list the content of a different configuration section or sections, use the command in the following form:

yum-config-manager section…


You can also use a glob expression to display the configuration of all matching sections:

yum-config-manager glob_expression…

Example 5.25. Viewing configuration of the main section

To list all configuration options and their corresponding values for the main section, type the following
at a shell prompt:

~]$ yum-config-manager main \*


Loaded plugins: product-id, refresh-packagekit, subscription-manager
================================== main ===================================
[main]
alwaysprompt = True
assumeyes = False
bandwidth = 0
bugtracker_url = https://bugzilla.redhat.com/enter_bug.cgi?
product=Red%20Hat%20Enterprise%20Linux%206&component=yum
cache = 0
[output truncated]

5.5.5. Adding, Enabling, and Disabling a Yum Repository


Section 5.5.2, “Setting [repository] Options” described various options you can use to define a Yum
repository. This section explains how to add, enable, and disable a repository by using the
yum-config-manager command.

The /etc/yum.repos.d/redhat.repo file

When the system is registered with the certificate-based Red Hat Network, the Red Hat
Subscription Manager tools are used to manage repositories in the
/etc/yum.repos.d/redhat.repo file. For more information on how to register a system with
Red Hat Network and use the Red Hat Subscription Manager tools to manage subscriptions,
refer to the Red Hat Subscription Management Guide.

Adding a Yum Repository


To define a new repository, you can either add a [repository] section to the /etc/yum.conf file, or
to a .repo file in the /etc/yum.repos.d/ directory. All files with the .repo file extension in this
directory are read by yum, and it is recommended to define your repositories here instead of in
/etc/yum.conf.

Be careful when using untrusted software sources

Obtaining and installing software packages from unverified or untrusted software sources other
than Red Hat Network constitutes a potential security risk, and could lead to security, stability,
compatibility, and maintainability issues.

Yum repositories commonly provide their own .repo file. To add such a repository to your system and


enable it, run the following command as root:

yum-config-manager --add-repo repository_url

…where repository_url is a link to the .repo file.

Example 5.26. Adding example.repo

To add a repository located at http://www.example.com/example.repo, type the following at a shell


prompt:

~]# yum-config-manager --add-repo http://www.example.com/example.repo


Loaded plugins: product-id, refresh-packagekit, subscription-manager
adding repo from: http://www.example.com/example.repo
grabbing file http://www.example.com/example.repo to
/etc/yum.repos.d/example.repo
example.repo | 413 B 00:00
repo saved to /etc/yum.repos.d/example.repo

Enabling a Yum Repository


To enable a particular repository or repositories, type the following at a shell prompt as root:

yum-config-manager --enable repository…

…where repository is the unique repository ID (use yum repolist all to list available repository
IDs). Alternatively, you can use a glob expression to enable all matching repositories:

yum-config-manager --enable glob_expression…

Example 5.27. Enabling repositories defined in custom sections of /etc/yum.conf.

To enable repositories defined in the [example], [example-debuginfo], and [example-source]
sections, type:

~]# yum-config-manager --enable example\*


Loaded plugins: product-id, refresh-packagekit, subscription-manager
============================== repo: example ==============================
[example]
bandwidth = 0
base_persistdir = /var/lib/yum/repos/x86_64/6Server
baseurl = http://www.example.com/repo/6Server/x86_64/
cache = 0
cachedir = /var/cache/yum/x86_64/6Server/example
[output truncated]

When successful, the yum-config-manager --enable command displays the current repository
configuration.

Disabling a Yum Repository


To disable a Yum repository, run the following command as root:


yum-config-manager --disable repository…

…where repository is the unique repository ID (use yum repolist all to list available repository
IDs). Similarly to yum-config-manager --enable, you can use a glob expression to disable all
matching repositories at the same time:

yum-config-manager --disable glob_expression…

When successful, the yum-config-manager --disable command displays the current
configuration.

5.5.6. Creating a Yum Repository


To set up a Yum repository, follow these steps:

1. Install the createrepo package. To do so, type the following at a shell prompt as root:

yum install createrepo

2. Copy all packages that you want to have in your repository into one directory, such as
/mnt/local_repo/.
3. Change to this directory and run the following command:

createrepo --database /mnt/local_repo

This creates the necessary metadata for your Yum repository, as well as the sqlite database for
speeding up yum operations.

Using the createrepo command on Red Hat Enterprise Linux 5

Compared to Red Hat Enterprise Linux 5, RPM packages for Red Hat Enterprise Linux 7
are compressed with the XZ lossless data compression format and can be signed with
newer hash algorithms like SHA-256. Consequently, it is not recommended to use the
createrepo command on Red Hat Enterprise Linux 5 to create the package metadata for
Red Hat Enterprise Linux 7. If you want to use createrepo on this system anyway, install
the python-hashlib package from EPEL (Extra Packages for Enterprise Linux) so that the
repodata can also use the SHA-256 hash algorithm.
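
Once the metadata has been created, client systems can reference the new repository from a .repo file. A minimal sketch, assuming the /mnt/local_repo directory used above is accessible on the client:

[local_repo]
name=Local Repository
baseurl=file:///mnt/local_repo/
enabled=1
gpgcheck=0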

5.6. Yum Plug-ins


Yum provides plug-ins that extend and enhance its operations. Certain plug-ins are installed by default.
Yum always informs you which plug-ins, if any, are loaded and active whenever you call any yum
command. For example:

~]# yum info yum


Loaded plugins: product-id, refresh-packagekit, subscription-manager
[output truncated]

Note that the plug-in names which follow Loaded plugins are the names you can provide to the
--disableplugin=plugin_name option.


5.6.1. Enabling, Configuring, and Disabling Yum Plug-ins


To enable Yum plug-ins, ensure that a line beginning with plugins= is present in the [main] section
of /etc/yum.conf, and that its value is 1:

plugins=1

You can disable all plug-ins by changing this line to plugins=0.

Disabling all plug-ins is not advised

Disabling all plug-ins is not advised because certain plug-ins provide important Yum services. In
particular, rhnplugin provides support for RHN Classic, and product-id and subscription-
manager plug-ins provide support for the certificate-based Content Delivery Network
(CDN). Disabling plug-ins globally is provided as a convenience option, and is generally only
recommended when diagnosing a potential problem with Yum.

Every installed plug-in has its own configuration file in the /etc/yum/pluginconf.d/ directory. You
can set plug-in specific options in these files. For example, here is the refresh-packagekit plug-in's
refresh-packagekit.conf configuration file:

[main]
enabled=1

Plug-in configuration files always contain a [main] section (similar to Yum's /etc/yum.conf file) in
which there is (or you can place if it is missing) an enabled= option that controls whether the plug-in is
enabled when you run yum commands.

If you disable all plug-ins by setting plugins=0 in /etc/yum.conf, then all plug-ins are disabled
regardless of whether they are enabled in their individual configuration files.

If you merely want to disable all Yum plug-ins for a single yum command, use the --noplugins option.

If you want to disable one or more Yum plug-ins for a single yum command, add the --
disableplugin=plugin_name option to the command. For example, to disable the presto plug-in
while updating a system, type:

~]# yum update --disableplugin=presto

The plug-in names you provide to the --disableplugin= option are the same names listed after the
Loaded plugins line in the output of any yum command. You can disable multiple plug-ins by
separating their names with commas. In addition, you can match multiple plug-in names or shorten long
ones by using glob expressions:

~]# yum update --disableplugin=presto,refresh-pack*

5.6.2. Installing Additional Yum Plug-ins


Yum plug-ins usually adhere to the yum-plugin-plugin_name package-naming convention, but not
always: the package which provides the presto plug-in is named yum-presto, for example. You can
install a Yum plug-in in the same way you install other packages. For instance, to install the security
plug-in, type the following at a shell prompt:


~]# yum install yum-plugin-security

5.6.3. Working with Plug-ins


The following list provides descriptions and usage instructions for several useful Yum plug-ins. Plug-ins
are listed by name, with the name of the package that provides the plug-in given in parentheses.

Backup
fs-snapshot (yum-plugin-fs-snapshot)
The fs-snapshot plug-in extends Yum to create a snapshot of a file system before proceeding
with a transaction such as a system update or package removal. When you decide that the
changes made by the transaction are unwanted, this mechanism allows you to revert to the
state that is stored in a snapshot.

In order for the plug-in to work, the root file system (that is, /) must be on an LVM (Logical
Volume Manager) or Btrfs volume. To use the fs-snapshot plug-in on an LVM volume, take
the following steps:

1. Make sure that the volume group with the root file system has enough free extents. The
required size depends on the amount of changes to the original logical volume that is
expected during the life of the snapshot. A reasonable default is 50–80 % of the
original logical volume size.
To display detailed information about a particular volume group, run the vgdisplay
command in the following form as root:

vgdisplay volume_group

The number of free extents is listed on the Free PE / Size line of the output table.
2. If the volume group with the root file system does not have enough free extents, add a
new physical volume. To do this, log in as the root user and run the pvcreate command
in the following form to initialize a physical volume for use with the Logical Volume
Manager:

pvcreate device

3. Then, use the vgextend command in the following form as root to add the physical
volume to the volume group:

vgextend volume_group physical_volume

4. Edit the configuration file located in /etc/yum/pluginconf.d/fs-snapshot.conf,
and make the following changes to the [lvm] section:
Change the value of the enabled option to 1:

enabled = 1

5. Remove the hash sign (#) from the beginning of the lvcreate_size_args line, and
adjust the number of logical extents which are allocated for a snapshot. For example, to
allocate 80 % of the size of the original logical volume, use:

lvcreate_size_args = -l 80%ORIGIN


Refer to Table 5.3, “Supported fs-snapshot.conf directives” for a complete list of
available configuration options.
6. Before you confirm the changes and proceed with the transaction, run the desired yum
command, and make sure fs-snapshot is included in the list of loaded plug-ins (the
Loaded plugins line). The fs-snapshot plug-in displays a line in the following form for
each affected logical volume:

fs-snapshot: snapshotting file_system (/dev/volume_group/logical_volume): logical_volume_yum_timestamp

7. Verify that the system is working as expected:


If you decide to keep the changes, remove the snapshot by running the lvremove
command as root:

lvremove /dev/volume_group/logical_volume_yum_timestamp

8. If you decide to revert the changes and restore the file system to a state that is saved in a
snapshot, take the following steps:
As root, run the command in the following form to merge a snapshot into its original
logical volume:

lvconvert --merge /dev/volume_group/logical_volume_yum_timestamp

The lvconvert command will inform you that a restart is required in order for the
changes to take effect.
9. Restart the system as instructed. You can do so by typing the following at a shell prompt
as root:

reboot

To use the fs-snapshot plug-in on a Btrfs file system, take the following steps:

1. Run the desired yum command, and make sure fs-snapshot is included in the list of
loaded plug-ins (the Loaded plugins line) before you confirm the changes and
proceed with the transaction. The fs-snapshot plug-in displays a line in the following
form for each affected file system:

fs-snapshot: snapshotting file_system: file_system/yum_timestamp

2. Verify that the system is working as expected:


If you decide to keep the changes, you can optionally remove unwanted snapshots. To
remove a Btrfs snapshot, use the command in the following form as root:

btrfs subvolume delete file_system/yum_timestamp

3. If you decide to revert the changes and restore a file system to a state that is saved in a
snapshot, take the following steps:
Determine the identifier of a particular snapshot by using the following command as
root:

btrfs subvolume list file_system


4. As root, configure the system to mount this snapshot by default:

btrfs subvolume set-default id file_system

5. Restart the system. You can do so by typing the following at a shell prompt as root:

reboot

For more information on logical volume management, Btrfs, and file system snapshots, see the
Red Hat Enterprise Linux 7 Storage Administration Guide. For additional information about the
plug-in and its configuration, refer to the yum-fs-snapshot(1) and yum-fs-snapshot.conf(5)
manual pages.

Table 5.3. Supported fs-snapshot.conf directives

Section   Directive                  Description
[main]    enabled=value              Allows you to enable or disable the plug-in. The value must be either 1 (enabled), or 0 (disabled). When installed, the plug-in is enabled by default.
[main]    exclude=list               Allows you to exclude certain file systems. The value must be a space-separated list of mount points you do not want to snapshot (for example, /srv /mnt/backup). This option is not included in the configuration file by default.
[lvm]     enabled=value              Allows you to enable or disable the use of the plug-in on LVM volumes. The value must be either 1 (enabled), or 0 (disabled). This option is disabled by default.
[lvm]     lvcreate_size_args=value   Allows you to specify the size of a logical volume snapshot. The value must be the -l or -L command line option for the lvcreate utility followed by a valid argument (for example, -l 80%ORIGIN).

Installation and Download


kabi (kabi-yum-plugins)
The kabi plug-in checks whether a driver update package conforms with official Red Hat kernel
Application Binary Interface (kABI). With this plug-in enabled, when a user attempts to install a
package that uses kernel symbols which are not on a whitelist, a warning message is written to
the system log. Additionally, configuring the plug-in to run in enforcing mode prevents such
packages from being installed at all.

To configure the kabi plug-in, edit the configuration file located in
/etc/yum/pluginconf.d/kabi.conf. A list of directives that can be used in the [main]
section is shown in the following table.


Table 5.4. Supported kabi.conf directives

Directive Description
enabled=value Allows you to enable or disable the plug-in. The value must be
either 1 (enabled), or 0 (disabled). When installed, the plug-in is
enabled by default.
whitelists=directory Allows you to specify the directory in which the files with
supported kernel symbols are located. By default, the kabi plug-in
uses files provided by the kabi-whitelists package (that is, the
/lib/modules/kabi/ directory).
enforce=value Allows you to enable or disable enforcing mode. The value
must be either 1 (enabled), or 0 (disabled). By default, this
option is commented out and the kabi plug-in only displays a
warning message.
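
For instance, a kabi.conf that keeps the default whitelist location but turns on enforcing mode might look like this (a sketch based on the directives above):

[main]
enabled=1
whitelists=/lib/modules/kabi
enforce=1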

presto (yum-presto)
The presto plug-in adds support to Yum for downloading delta RPM packages during updates,
from repositories which have presto metadata enabled. Delta RPMs contain only the
differences between the version of the package installed on the client requesting the RPM
package and the updated version in the repository.

Downloading a delta RPM is much quicker than downloading the entire updated package, and
can speed up updates considerably. Once the delta RPMs are downloaded, they must be rebuilt
to apply the difference to the currently-installed package and thus create the full, updated
package. This process takes CPU time on the installing machine. Using delta RPMs is therefore
a tradeoff between time-to-download, which depends on the network connection, and time-to-
rebuild, which is CPU-bound. Using the presto plug-in is recommended for fast machines and
systems with slower network connections, while slower machines on very fast connections
benefit more from downloading normal RPM packages, that is, by disabling presto.

refresh-packagekit (PackageKit-yum-plugin)
The refresh-packagekit plug-in updates metadata for PackageKit whenever yum is run. The
refresh-packagekit plug-in is installed by default.

yum-fastestmirror (yum-plugin-fastestmirror)
The yum-fastestmirror plug-in is designed to list mirrors by response speed before a download
begins. It chooses the closest mirror instead of using a random mirror from the mirrorlist, which
is the default behavior. Your fastest mirror is recalculated once every 10 days by default; to force it
to update immediately, delete the /var/cache/yum/timedhosts.txt cache file.
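
For example, to force the mirror speeds to be recalculated on the next yum run, you could remove the cache file:

~]# rm /var/cache/yum/timedhosts.txt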

yum-cron (yum-cron)
The yum-cron plug-in performs a yum update as a cron job. It provides methods to keep
repository metadata up to date, and to check for, download, and apply updates in scheduled
intervals. These operations are performed as background processes.

Security and Package Protection


security (yum-plugin-security)


Discovering information about and applying security updates easily and often is important to all
system administrators. For this reason Yum provides the security plug-in, which extends yum
with a set of highly-useful security-related commands, subcommands and options.

You can check for security-related updates as follows:

~]# yum check-update --security


Loaded plugins: product-id, refresh-packagekit, security, subscription-
manager
Updating Red Hat repositories.
INFO:rhsm-app.repolib:repos updated: 0
Limiting package lists to security relevant ones
Needed 3 of 7 packages, for security
elinks.x86_64 0.12-0.13.el6 rhel
kernel.x86_64 2.6.30.8-64.el6 rhel
kernel-headers.x86_64 2.6.30.8-64.el6 rhel

You can then use either yum update --security or yum update-minimal --security
to update those packages which are affected by security advisories. Both of these commands
update all packages on the system for which a security advisory has been issued. yum
update-minimal --security updates them to the latest packages which were released as
part of a security advisory, while yum update --security will update all packages affected
by a security advisory to the latest version of that package available.

In other words, if:

the kernel-2.6.30.8-16 package is installed on your system;
the kernel-2.6.30.8-32 package was released as a security update;
then kernel-2.6.30.8-64 was released as a bug fix update,

...then yum update-minimal --security will update you to kernel-2.6.30.8-32, and yum
update --security will update you to kernel-2.6.30.8-64. Conservative system
administrators probably want to use update-minimal to reduce the risk incurred by updating
packages as much as possible.

Refer to the yum-security(8) manual page for usage details and further explanation of the
enhancements the security plug-in adds to yum.

protect-packages (yum-plugin-protect-packages)
The protect-packages plug-in prevents the yum package and all packages it depends on from
being deliberately or accidentally removed. This prevents many of the most important packages
necessary for your system to run from being removed. In addition, you can list more packages,
one per line, in the /etc/sysconfig/protected-packages file [2] (which you should create
if it does not exist), and protect-packages will extend protection-from-removal to those
packages as well.
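
For example, to protect an additional package from removal (openssh-server is only a hypothetical choice), you could append its name to the file:

~]# echo "openssh-server" >> /etc/sysconfig/protected-packages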

To temporarily override package protection, use the --override-protection option with an
applicable yum command.

System Registration
subscription-manager (subscription-manager)
The subscription-manager plug-in provides support for connecting to Red Hat Network.


This allows systems registered with Red Hat Network to update and install packages from
the certificate-based Content Delivery Network. The subscription-manager plug-in is installed
by default.

For more information on how to manage product subscriptions and entitlements, refer to the Red
Hat Subscription Management Guide.

product-id (subscription-manager)
The product-id plug-in manages product identity certificates for products installed from the
Content Delivery Network. The product-id plug-in is installed by default.

5.7. Additional Resources


For more information on how to manage software packages on Red Hat Enterprise Linux, refer to the
resources listed below.

Installed Documentation

yum(8) — The manual page for the yum command line utility provides a complete list of supported
options and commands.
yumdb(8) — The manual page for the yumdb command line utility documents how to use this tool to
query and, if necessary, alter the Yum database.
yum.conf(5) — The manual page named yum.conf documents available Yum configuration
options.
yum-utils(1) — The manual page named yum-utils lists and briefly describes additional utilities
for managing Yum configuration, manipulating repositories, and working with the Yum database.

Online Documentation

Red Hat Enterprise Linux 7 Storage Administration Guide — The Storage Administration Guide for
Red Hat Enterprise Linux 7 provides instructions on how to manage storage devices and file systems
on this system.
Yum Guides — The Yum Guides page on the project home page provides links to further
documentation.

See Also

Chapter 4, Gaining Privileges documents how to gain administrative privileges by using the su and
sudo commands.
Chapter 6, PackageKit describes PackageKit, a suite of package management tools for the graphical
user interface.
Appendix A, RPM describes the RPM Package Manager (RPM), the packaging system used by Red
Hat Enterprise Linux.

[2] You can also place files with the extension .list in the /etc/sysconfig/protected-packages.d/ directory (which you should
create if it does not exist), and list packages—one per line—in these files. protect-packages will protect these too.


Chapter 6. PackageKit
Red Hat provides PackageKit for viewing, managing, updating, installing and uninstalling packages
compatible with your system. PackageKit consists of several graphical interfaces that can be opened
from the GNOME panel menu, or from the Notification Area when PackageKit alerts you that updates are
available. For more information on PackageKit's architecture and available front ends, refer to
Section 6.3, “PackageKit Architecture”.

6.1. Updating Packages with Software Update


PackageKit displays a starburst icon in the Notification Area whenever updates are available to be
installed on your system.

Figure 6.1. PackageKit's icon in the Notification Area

Clicking on the notification icon opens the Software Update window. Alternatively, you can open
Software Updates by clicking System → Administration → Software Update from the GNOME
panel, or running the gpk-update-viewer command at the shell prompt. In the Software Updates
window, all available updates are listed along with the names of the packages being updated (minus the
.rpm suffix, but including the CPU architecture), a short summary of the package, and, usually, short
descriptions of the changes the update provides. Any updates you do not wish to install can be de-
selected here by unchecking the checkbox corresponding to the update.

Figure 6.2. Installing updates with Software Update


The updates presented in the Software Updates window only represent the currently-installed
packages on your system for which updates are available; dependencies of those packages, whether
they are existing packages on your system or new ones, are not shown until you click Install
Updates.

PackageKit utilizes the fine-grained user authentication capabilities provided by the PolicyKit toolkit
whenever you request it to make changes to the system. Whenever you instruct PackageKit to update,
install or remove packages, you will be prompted to enter the superuser password before changes are
made to the system.

If you instruct PackageKit to update the kernel package, then it will prompt you after installation, asking
you whether you want to reboot the system and thereby boot into the newly-installed kernel.

Setting the Update-Checking Interval


Right-clicking on PackageKit's Notification Area icon and clicking Preferences opens the Software
Update Preferences window, where you can define the interval at which PackageKit checks for
package updates, as well as whether or not to automatically install all updates or only security updates.
Leaving the Check for updates when using mobile broadband box unchecked is handy for
avoiding extraneous bandwidth usage when using a wireless connection on which you are charged for
the amount of data you download.

Figure 6.3. Setting PackageKit's update-checking interval

6.2. Using Add/Remove Software


To find and install a new package, on the GNOME panel click on System → Administration →
Add/Remove Software, or run the gpk-application command at the shell prompt.


Figure 6.4. PackageKit's Add/Remove Software window

6.2.1. Refreshing Software Sources (Yum Repositories)


PackageKit refers to Yum repositories as software sources. It obtains all packages from enabled
software sources. You can view the list of all configured and unfiltered (see below) Yum repositories by
opening Add/Remove Software and clicking System → Software sources. The Software
Sources dialog shows the repository name, as written on the name=<My Repository Name> field of
all [repository] sections in the /etc/yum.conf configuration file, and in all repository.repo files in
the /etc/yum.repos.d/ directory.

Entries which are checked in the Enabled column indicate that the corresponding repository will be
used to locate packages to satisfy all update and installation requests (including dependency resolution).
You can enable or disable any of the listed Yum repositories by selecting or clearing the checkbox. Note
that doing so causes PolicyKit to prompt you for superuser authentication.

The Enabled column corresponds to the enabled=<1 or 0> field in [repository] sections. When
you click the checkbox, PackageKit inserts the enabled=<1 or 0> line into the correct [repository]
section if it does not exist, or changes the value if it does. This means that enabling or disabling a
repository through the Software Sources window causes that change to persist after closing the
window or rebooting the system.

Note that it is not possible to add or remove Yum repositories through PackageKit.

Showing source RPM, test and debuginfo repositories

Checking the box at the bottom of the Software Sources window causes PackageKit to display
source RPM, testing and debuginfo repositories as well. This box is unchecked by default.

After making a change to the available Yum repositories, click on System → Refresh package lists to
make sure your package list is up-to-date.

6.2.2. Finding Packages with Filters


Once the software sources have been updated, it is often beneficial to apply some filters so that
PackageKit retrieves the results of our Find queries faster. This is especially helpful when performing


many package searches. Four of the filters in the Filters drop-down menu are used to split results by
matching or not matching a single criterion. By default when PackageKit starts, these filters are all
unapplied (No filter), but once you do filter by one of them, that filter remains set until you either
change it or close PackageKit.

Because you are usually searching for available packages that are not installed on the system, click
Filters → Installed and select the Only available radio button.

Figure 6.5. Filtering out already-installed packages

Also, unless you require development files such as C header files, click Filters → Development and
select the Only end user files radio button. This filters out all of the <package_name>-devel packages
we are not interested in.

Figure 6.6. Filtering out development packages from the list of Find results

The two remaining filters with submenus are:

Graphical
Narrows the search to either applications which provide a GUI interface (Only graphical) or
those that do not. This filter is useful when browsing for GUI applications that perform a specific
function.


Free
Search for packages which are considered to be free software. Refer to the Fedora Licensing
List for details on approved licenses.

The remaining filters can be enabled by selecting the checkboxes next to them:

Hide subpackages
Checking the Hide subpackages checkbox filters out generally-uninteresting packages that are
typically only dependencies of other packages that we want. For example, checking Hide
subpackages and searching for <package> would cause the following related packages to be
filtered out of the Find results (if they exist):

<package>-devel
<package>-libs
<package>-libs-devel
<package>-debuginfo

Only newest packages


Checking Only newest packages filters out all older versions of the same package from the list
of results, which is generally what we want. Note that this filter is often combined with the Only
available filter to search for the latest available versions of new (not installed) packages.

Only native packages


Checking the Only native packages box on a multilib system causes PackageKit to omit listing
results for packages compiled for the architecture that runs in compatibility mode. For example,
enabling this filter on a 64-bit system with an AMD64 CPU would cause all packages built for the
32-bit x86 CPU architecture not to be shown in the list of results, even though those packages
are able to run on an AMD64 machine. Packages which are architecture-agnostic (i.e. noarch
packages such as crontabs-1.10-32.1.el6.noarch.rpm) are never filtered out by
checking Only native packages. This filter has no effect on non-multilib systems, such as x86
machines.

6.2.3. Installing and Removing Packages (and Dependencies)


With the two filters selected, Only available and Only end user files, search for the screen
window manager for the command line and highlight the package. You now have access to some very
useful information about it, including: a clickable link to the project homepage; the Yum package group it
is found in, if any; the license of the package; a pointer to the GNOME menu location from where the
application can be opened, if applicable; and the size of the package, which is relevant when we
download and install it.


Figure 6.7. Viewing and installing a package with PackageKit's Add/Remove Software window

When the checkbox next to a package or group is checked, then that item is already installed on the
system. Checking an unchecked box causes it to be marked for installation, which only occurs when the
Apply button is clicked. In this way, you can search for and select multiple packages or package groups
before performing the actual installation transactions. Additionally, you can remove installed packages by
unchecking the checked box, and the removal will occur along with any pending installations when
Apply is pressed. Dependency resolution, which may add additional packages to be installed or
removed, is performed after pressing Apply. PackageKit will then display a window listing those
additional packages to install or remove, and ask for confirmation to proceed.

Select screen and click the Apply button. You will then be prompted for the superuser password; enter
it, and PackageKit will install screen. After finishing the installation, PackageKit sometimes presents you
with a list of your newly-installed applications and offers you the choice of running them immediately.
Alternatively, you will remember that finding a package and selecting it in the Add/Remove Software
window shows you the Location of where in the GNOME menus its application shortcut is located,
which is helpful when you want to run it.

Once it is installed, you can run screen, a screen manager that allows you to have multiple logins on
one terminal, by typing screen at a shell prompt.
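
As a brief aside (not part of the PackageKit procedure), a minimal screen session looks like the
following: start it, press Ctrl+a followed by d to detach while leaving the session running, and
reattach to it later:

~]$ screen
(press Ctrl+a, then d, to detach from the running session)
~]$ screen -r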

screen is a very useful utility, but we decide that we do not need it and we want to uninstall it.
Remembering that we need to change the Only available filter we recently used to install it to Only
installed in Filters → Installed, we search for screen again and uncheck it. The program did not
install any dependencies of its own; if it had, those would be automatically removed as well, as long as
they were not also dependencies of any other packages still installed on our system.

Removing a package when other packages depend on it

Although PackageKit automatically resolves dependencies during package installation and
removal, it is unable to remove a package without also removing packages which depend on it.
This type of operation can only be performed by RPM, is not advised, and can potentially leave
your system in a non-functioning state or cause applications to behave erratically and/or crash.


Figure 6.8. Removing a package with PackageKit's Add/Remove Software window

6.2.4. Installing and Removing Package Groups


PackageKit also has the ability to install Yum package groups, which it calls Package collections.
Clicking on Package collections in the top-left list of categories in the Software Updates
window allows us to scroll through and find the package group we want to install. In this case, we want to
install Czech language support (the Czech Support group). Checking the box and clicking apply
informs us how many additional packages must be installed in order to fulfill the dependencies of the
package group.

Figure 6.9. Installing the Czech Support package group


Similarly, installed package groups can be uninstalled by selecting Package collections,
unchecking the appropriate checkbox, and applying.

6.2.5. Viewing the Transaction Log


PackageKit maintains a log of the transactions that it performs. To view the log, from the Add/Remove
Software window, click System → Software log, or run the gpk-log command at the shell prompt.

The Software Log Viewer shows the following information:

Date — the date on which the transaction was performed.
Action — the action that was performed during the transaction, for example Updated packages
or Installed packages.
Details — the transaction type such as Updated, Installed, or Removed, followed by a list of
affected packages.
Username — the name of the user who performed the action.
Application — the front end application that was used to perform the action, for example Update
System.

Typing the name of a package in the top text entry field filters the list of transactions to those which
affected that package.

Figure 6.10. Viewing the log of package management transactions with the Software Log Viewer

6.3. PackageKit Architecture


Red Hat provides the PackageKit suite of applications for viewing, updating, installing and uninstalling
packages and package groups compatible with your system. Architecturally, PackageKit consists of
several graphical front ends that communicate with the packagekitd daemon back end, which
communicates with a package manager-specific back end that utilizes Yum to perform the actual
transactions, such as installing and removing packages, etc.

Table 6.1, “PackageKit GUI windows, menu locations, and shell prompt commands” shows the name of
the GUI window, how to start the window from the GNOME desktop or from the Add/Rem ove
Software window, and the name of the command line application that opens that window.


Table 6.1. PackageKit GUI windows, menu locations, and shell prompt commands

Add/Remove Software — Install, remove or view package info.
  How to open: from the GNOME panel: System → Administration → Add/Remove Software.
  Shell command: gpk-application
Software Update — Perform package updates.
  How to open: from the GNOME panel: System → Administration → Software Update.
  Shell command: gpk-update-viewer
Software Sources — Enable and disable Yum repositories.
  How to open: from Add/Remove Software: System → Software Sources.
  Shell command: gpk-repo
Software Log Viewer — View the transaction log.
  How to open: from Add/Remove Software: System → Software Log.
  Shell command: gpk-log
Software Update Preferences — Set PackageKit preferences.
  Shell command: gpk-prefs
(Notification Area Alert) — Alerts you when updates are available.
  How to open: from the GNOME panel: System → Preferences → Startup Applications, the Startup Programs tab.
  Shell command: gpk-update-icon

The packagekitd daemon runs outside the user session and communicates with the various graphical
front ends. The packagekitd daemon [3] communicates via the DBus system message bus with
another back end, which utilizes Yum's Python API to perform queries and make changes to the system.
On Linux systems other than Red Hat and Fedora, packagekitd can communicate with other back
ends that are able to utilize the native package manager for that system. This modular architecture
provides the abstraction necessary for the graphical interfaces to work with many different package
managers to perform essentially the same types of package management tasks. Learning how to use
the PackageKit front ends means that you can use the same familiar graphical interface across many
different Linux distributions, even when they utilize a native package manager other than Yum.

In addition, PackageKit's separation of concerns provides reliability in that a crash of one of the GUI
windows—or even the user's X Window session—will not affect any package management tasks being
supervised by the packagekitd daemon, which runs outside of the user session.

All of the front end graphical applications discussed in this chapter are provided by the gnome-packagekit
package instead of by PackageKit and its dependencies.

Finally, PackageKit also comes with a console-based front end called pkcon.
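
For example, the following pkcon commands perform roughly the same tasks as the graphical front
ends described above; the package name screen is used purely as an illustration:

~]# pkcon refresh
~]# pkcon search name screen
~]# pkcon install screen
~]# pkcon remove screen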

6.4. Additional Resources


For more information on how to manage software packages on Red Hat Enterprise Linux, refer to the
resources listed below.

Online Documentation


PackageKit Home Page — The project home page provides more information about PackageKit.
PackageKit Frequently Asked Questions — The Frequently Asked Questions page for the PackageKit
software suite provides answers to common questions about the suite.

See Also

Chapter 4, Gaining Privileges documents how to gain administrative privileges by using the su and
sudo commands.
Chapter 5, Yum describes how to use the Yum package manager to search, install, update, and
uninstall packages on the command line.
Appendix A, RPM describes the RPM Package Manager (RPM), the packaging system used by Red
Hat Enterprise Linux.

[3] System daemons are typically long-running processes that provide services to the user or to other programs, and which are started, often
at boot time, by special initialization scripts (often shortened to init scripts). Daemons respond to the service command and can be turned
on or off permanently by using the chkconfig on or chkconfig off commands. They can typically be recognized by a “d” appended to
their name, such as the packagekitd daemon. Refer to Chapter 7, Managing Services with systemd for information about system services.


Part III. Infrastructure Services


This part provides information on how to configure services and daemons and enable remote access to
a Red Hat Enterprise Linux machine.


Chapter 7. Managing Services with systemd

7.1. Introduction to systemd


Systemd is a system and service manager for Linux operating systems. It is designed to be backwards
compatible with SysV init scripts, and provides a number of features such as parallel startup of system
services at boot time, on-demand activation of daemons, support for system state snapshots, or
dependency-based service control logic. In Red Hat Enterprise Linux 7, systemd replaces Upstart as the
default init system.

Systemd introduces the concept of systemd units. These units are represented by unit configuration files
located in one of the directories listed in Table 7.2, “Systemd Unit Locations”, and encapsulate
information about system services, listening sockets, saved system state snapshots, and other objects
that are relevant to the init system. For a complete list of available systemd unit types, see Table 7.1,
“Available systemd Unit Types”.

Table 7.1. Available systemd Unit Types

Unit Type File Extension Description

Service unit .service A system service.
Target unit .target A group of systemd units.
Automount unit .automount A file system automount point.
Device unit .device A device file recognized by the kernel.
Mount unit .mount A file system mount point.
Path unit .path A file or directory in a file system.
Scope unit .scope An externally created process.
Slice unit .slice A group of hierarchically organized units that manage system processes.
Snapshot unit .snapshot A saved state of the systemd manager.
Socket unit .socket An inter-process communication socket.
Swap unit .swap A swap device or a swap file.
Timer unit .timer A systemd timer.

Table 7.2. Systemd Unit Locations

Directory Description
/usr/lib/systemd/system/ Systemd units distributed with installed RPM packages.
/run/systemd/system/ Systemd units created at run time. This directory takes precedence over the directory with installed service units.
/etc/systemd/system/ Systemd units created and managed by the system administrator. This directory takes precedence over the directory with runtime units.
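
To illustrate the general format of these unit configuration files, the following is a minimal, hypothetical
service unit; the unit name and the path to the executable are placeholders, not taken from this guide:

[Unit]
Description=Example daemon
After=network.target

[Service]
Type=simple
ExecStart=/usr/sbin/example-daemon

[Install]
WantedBy=multi-user.target

The [Install] section is the part that the systemctl enable command reads when it creates the
symbolic links described in Section 7.2.6, “Enabling a Service”.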

7.1.1. Main Features


In Red Hat Enterprise Linux 7, the systemd system and service manager provides the following main
features:


Socket-based activation — At boot time, systemd creates listening sockets for all system services that
support this type of activation, and passes the sockets to these services as soon as they are started.
This not only allows systemd to start services in parallel, but also makes it possible to restart a
service without losing any message sent to it while it is unavailable: the corresponding socket remains
accessible and all messages are queued.
Systemd uses socket units for socket-based activation.
Bus-based activation — System services that use D-Bus for inter-process communication can be
started on-demand the first time a client application attempts to communicate with them. Systemd
uses D-Bus service files for bus-based activation.
Device-based activation — System services that support device-based activation can be started on-
demand when a particular type of hardware is plugged in or becomes available. Systemd uses device
units for device-based activation.
Path-based activation — System services that support path-based activation can be started on-
demand when a particular file or directory changes its state. Systemd uses path units for path-based
activation.
System state snapshots — Systemd can temporarily save the current state of all units or restore a
previous state of the system from a dynamically created snapshot. To store the current state of the
system, systemd uses dynamically created snapshot units.
Mount and automount point management — Systemd monitors and manages mount and automount
points. Systemd uses mount units for mount points and automount units for automount points.
Aggressive parallelization — Because of the use of socket-based activation, systemd can start
system services in parallel as soon as all listening sockets are in place. In combination with system
services that support on-demand activation, parallel activation significantly reduces the time required
to boot the system.
Transactional unit activation logic — Before activating or deactivating a unit, systemd calculates its
dependencies, creates a temporary transaction, and verifies that this transaction is consistent. If a
transaction is inconsistent, systemd automatically attempts to correct it and remove non-essential
jobs from it before reporting an error.
Backwards compatibility with SysV init — Systemd fully supports SysV init scripts as described in the
Linux Standard Base Core Specification, which eases the upgrade path to systemd service units.

7.1.2. Compatibility Changes


The systemd system and service manager is designed to be mostly compatible with SysV init and
Upstart. The following are the most notable compatibility changes with regards to the previous major
release of the Red Hat Enterprise Linux system:

Systemd has only limited support for runlevels. It provides a number of target units that can be
directly mapped to these runlevels and for compatibility reasons, it is also distributed with the earlier
runlevel command. Not all systemd targets can be directly mapped to runlevels, however, and as
a consequence, this command may return N to indicate an unknown runlevel. It is recommended that
you avoid using the runlevel command if possible.
For more information about systemd targets and their comparison with runlevels, see Section 7.3,
“Working with systemd Targets”.
The systemctl utility does not support custom commands. In addition to standard commands such
as start, stop, and status, authors of SysV init scripts could implement support for any number of
arbitrary commands in order to provide additional functionality. For example, the init script for
iptables in Red Hat Enterprise Linux 6 could be executed with the panic command, which
immediately enabled panic mode and reconfigured the system to start dropping all incoming and
outgoing packets. This is not supported in systemd and the systemctl utility only accepts documented
commands.


For more information about the systemctl utility and its comparison with the earlier service utility,
see Section 7.2, “Managing System Services”.
The systemctl utility does not communicate with services that have not been started by systemd.
When systemd starts a system service, it stores the ID of its main process in order to keep track of it.
The systemctl utility then uses this PID to query and manage the service. Consequently, if a user
starts a particular daemon directly on the command line, systemctl is unable to determine its
current status or stop it.
Systemd stops only running services. Previously, when the shutdown sequence was initiated, Red
Hat Enterprise Linux 6 and earlier releases of the system used symbolic links located in the
/etc/rc0.d/ directory to stop all available system services regardless of their status. With systemd,
only running services are stopped on shutdown.
System services are unable to read from the standard input stream. When systemd starts a service, it
connects its standard input to /dev/null to prevent any interaction with the user.
System services do not inherit any context (such as the HOME and PATH environment variables) from
the invoking user and their session. Each service runs in a clean execution context.
Systemd reads dependency information encoded in the Linux Standard Base (LSB) header and
interprets it at run time.
All operations on service units are subject to a timeout of 5 minutes to prevent a malfunctioning
service from freezing the system.

For a detailed list of compatibility changes introduced with systemd, see the Migration Planning Guide for
Red Hat Enterprise Linux 7. For a comparison of systemd with SysV init and Upstart, see the upstream
documentation.

7.2. Managing System Services


Previous versions of Red Hat Enterprise Linux, which were distributed with SysV init or Upstart, used init
scripts located in the /etc/rc.d/init.d/ directory. These init scripts were typically written in Bash,
and allowed the system administrator to control the state of services and daemons in their system. In
Red Hat Enterprise Linux 7, these init scripts have been replaced with service units.

Service units end with the .service file extension and serve a similar purpose as init scripts. To view,
start, stop, restart, enable, or disable system services, use the systemctl command as described in
Table 7.3, “Comparison of the service Utility with systemctl”, Table 7.4, “Comparison of the chkconfig
Utility with systemctl”, and in the section below. The service and chkconfig commands are still
available in the system and work as expected, but are only included for compatibility reasons and should
be avoided.


Note

For clarity, all examples in the rest of this section use full unit names with the .service file
extension, for example:

~]# systemctl stop bluetooth.service

When working with system services, it is possible to omit this file extension to reduce typing: when
the systemctl utility encounters a unit name without a file extension, it automatically assumes it
is a service unit. The following command is equivalent to the one above:

~]# systemctl stop bluetooth

Table 7.3. Comparison of the service Utility with systemctl

service systemctl Description

service name start — systemctl start name.service — Starts a service.
service name stop — systemctl stop name.service — Stops a service.
service name restart — systemctl restart name.service — Restarts a service.
service name condrestart — systemctl try-restart name.service — Restarts a service only if it is running.
service name reload — systemctl reload name.service — Reloads configuration.
service name status — systemctl status name.service or systemctl is-active name.service — Checks if a service is running.
service --status-all — systemctl list-units --type service --all — Displays the status of all services.

Table 7.4. Comparison of the chkconfig Utility with systemctl

chkconfig systemctl Description

chkconfig name on — systemctl enable name.service — Enables a service.
chkconfig name off — systemctl disable name.service — Disables a service.
chkconfig --list name — systemctl status name.service or systemctl is-enabled name.service — Checks if a service is enabled.
chkconfig --list — systemctl list-unit-files --type service — Lists all services and checks if they are enabled.

7.2.1. Listing Services


To list all currently loaded service units, type the following at a shell prompt:

systemctl list-units --type service


For each service unit, this command displays its full name (UNIT) followed by a note whether the unit
has been loaded (LOAD), its high-level (ACTIVE) and low-level (SUB) unit activation state, and a short
description (DESCRIPTION).

By default, the systemctl list-units command displays only active units. If you want to list all
loaded units regardless of their state, run this command with the --all or -a command line option:

systemctl list-units --type service --all

You can also list all available service units to see if they are enabled. To do so, type:

systemctl list-unit-files --type service

For each service unit, this command displays its full name (UNIT FILE) followed by information whether
the service unit is enabled or not (STATE). For information on how to determine the status of individual
service units, see Section 7.2.2, “Displaying Service Status”.

Example 7.1. Listing Services

To list all currently loaded service units, run the following command:

~]$ systemctl list-units --type service


UNIT LOAD ACTIVE SUB DESCRIPTION
abrt-ccpp.service loaded active exited Install ABRT coredump hook
abrt-oops.service loaded active running ABRT kernel log watcher
abrt-vmcore.service loaded active exited Harvest vmcores for ABRT
abrt-xorg.service loaded active running ABRT Xorg log watcher
abrtd.service loaded active running ABRT Automated Bug
Reporting Tool
...
systemd-vconsole-setup.service loaded active exited Setup Virtual Console
tog-pegasus.service loaded active running OpenPegasus CIM Server

LOAD = Reflects whether the unit definition was properly loaded.


ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.

46 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'

To list all installed service units to determine if they are enabled, type:

~]$ systemctl list-unit-files --type service


UNIT FILE STATE
abrt-ccpp.service enabled
abrt-oops.service enabled
abrt-vmcore.service enabled
abrt-xorg.service enabled
abrtd.service enabled
...
wpa_supplicant.service disabled
ypbind.service disabled

208 unit files listed.


7.2.2. Displaying Service Status


To display detailed information about a service unit that corresponds to a system service, type the
following at a shell prompt:

systemctl status name.service

Replace name with the name of the service unit you want to inspect (for example, gdm). This command
displays the name of the selected service unit followed by its short description, one or more fields
described in Table 7.5, “Available Service Unit Information”, and if it is executed by the root user, also
the most recent log entries.

Table 7.5. Available Service Unit Information

Field Description
Loaded — Information whether the service unit has been loaded, the absolute path to the unit file, and a note whether the unit is enabled.
Active — Information whether the service unit is running followed by a time stamp.
Main PID — The PID of the corresponding system service followed by its name.
Status — Additional information about the corresponding system service.
Process — Additional information about related processes.
CGroup — Additional information about related Control Groups.

To only verify that a particular service unit is running, run the following command:

systemctl is-active name.service

Similarly, to determine whether a particular service unit is enabled, type:

systemctl is-enabled name.service

Note that both systemctl is-active and systemctl is-enabled return an exit status of 0 if at
least one of the specified service units is running or enabled. For information on how to list all currently
loaded service units, see Section 7.2.1, “Listing Services”.
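
Because of this exit status, both commands are convenient in shell scripts. For example, assuming the
httpd service unit is installed on the system:

~]$ systemctl is-active httpd.service && echo "httpd is running"
~]$ systemctl is-enabled httpd.service || echo "httpd is not enabled"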


Example 7.2. Displaying Service Status

The service unit for the GNOME Display Manager is named gdm.service. To determine the current
status of this service unit, type the following at a shell prompt:

~]# systemctl status gdm.service


gdm.service - GNOME Display Manager
Loaded: loaded (/usr/lib/systemd/system/gdm.service; enabled)
Active: active (running) since Thu 2013-10-17 17:31:23 CEST; 5min ago
Main PID: 1029 (gdm)
CGroup: /system.slice/gdm.service
├─1029 /usr/sbin/gdm
├─1037 /usr/libexec/gdm-simple-slave --display-id /org/gno...
└─1047 /usr/bin/Xorg :0 -background none -verbose -auth /r...

Oct 17 17:31:23 localhost systemd[1]: Started GNOME Display Manager.

7.2.3. Starting a Service


To start a service unit that corresponds to a system service, type the following at a shell prompt as root:

systemctl start name.service

Replace name with the name of the service unit you want to start (for example, gdm). This command
starts the selected service unit in the current session. For information on how to enable a service unit to
be started at boot time, see Section 7.2.6, “Enabling a Service”. For information on how to determine the
status of a certain service unit, see Section 7.2.2, “Displaying Service Status”.

Example 7.3. Starting a Service

The service unit for the Apache HTTP Server is named httpd.service. To activate this service unit
and start the httpd daemon in the current session, run the following command as root:

~]# systemctl start httpd.service

7.2.4. Stopping a Service


To stop a service unit that corresponds to a system service, type the following at a shell prompt as root:

systemctl stop name.service

Replace name with the name of the service unit you want to stop (for example, bluetooth). This
command stops the selected service unit in the current session. For information on how to disable a
service unit and prevent it from being started at boot time, see Section 7.2.7, “Disabling a Service”. For
information on how to determine the status of a certain service unit, see Section 7.2.2, “Displaying
Service Status”.


Example 7.4. Stopping a Service

The service unit for the bluetoothd daemon is named bluetooth.service. To deactivate this
service unit and stop the bluetoothd daemon in the current session, run the following command as
root:

~]# systemctl stop bluetooth.service

7.2.5. Restarting a Service


To restart a service unit that corresponds to a system service, type the following at a shell prompt as
root:

systemctl restart name.service

Replace name with the name of the service unit you want to restart (for example, httpd). This command
stops the selected service unit in the current session and immediately starts it again. Importantly, if the
selected service unit is not running, this command starts it too. To tell systemd to restart a service unit
only if the corresponding service is already running, run the following command as root:

systemctl try-restart name.service

Certain system services also allow you to reload their configuration without interrupting their execution.
To do so, type as root:

systemctl reload name.service

Note that system services that do not support this feature ignore this command altogether. For
convenience, the systemctl command also supports the reload-or-restart and
reload-or-try-restart commands that restart such services instead. For information on how to
determine the status of a certain service unit, see Section 7.2.2, “Displaying Service Status”.

Example 7.5. Restarting a Service

In order to prevent users from encountering unnecessary error messages or partially rendered web
pages, the Apache HTTP Server allows you to edit and reload its configuration without the need to
restart it and interrupt actively processed requests. To do so, type the following at a shell prompt as
root:

~]# systemctl reload httpd.service

7.2.6. Enabling a Service


To configure a service unit that corresponds to a system service to be automatically started at boot time,
type the following at a shell prompt as root:

systemctl enable name.service

Replace name with the name of the service unit you want to enable (for example, httpd). This command

reads the [Install] section of the selected service unit and creates appropriate symbolic links to the
/usr/lib/systemd/system/name.service file in the /etc/systemd/system/ directory. This
command does not, however, rewrite links that already exist. If you want to ensure that the symbolic links
are re-created, use the following command as root:

systemctl reenable name.service

This command disables the selected service unit and immediately enables it again. For information on
how to determine whether a certain service unit is enabled to start at boot time, see Section 7.2.2,
“Displaying Service Status”. For information on how to start a service in the current session, see
Section 7.2.3, “Starting a Service”.

Example 7.6. Enabling a Service

To configure the Apache HTTP Server to start automatically at boot time, run the following command
as root:

~]# systemctl enable httpd.service


ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'

7.2.7. Disabling a Service


To prevent a service unit that corresponds to a system service from being automatically started at boot
time, type the following at a shell prompt as root:

systemctl disable name.service

Replace name with the name of the service unit you want to disable (for example, bluetooth). This
command reads the [Install] section of the selected service unit and removes appropriate symbolic
links to the /usr/lib/systemd/system/name.service file from the /etc/systemd/system/
directory. In addition, you can mask any service unit to prevent it from being started manually or by
another service. To do so, run the following command as root:

systemctl mask name.service

This command replaces the /etc/systemd/system/name.service file with a symbolic link to
/dev/null, rendering the actual unit file inaccessible to systemd. To revert this action and unmask a
service unit, type as root:

systemctl unmask name.service
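
For example, to mask the bluetooth.service unit so that it cannot be started even manually, and to
unmask it again later, you could run the following commands as root (the symbolic-link messages shown
here are indicative of what systemd prints and may differ between versions):

~]# systemctl mask bluetooth.service
ln -s '/dev/null' '/etc/systemd/system/bluetooth.service'
~]# systemctl unmask bluetooth.service
rm '/etc/systemd/system/bluetooth.service'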

For information on how to determine whether a certain service unit is enabled to start at boot time, see
Section 7.2.2, “Displaying Service Status”. For information on how to stop a service in the current
session, see Section 7.2.4, “Stopping a Service”.


Example 7.7. Disabling a Service

Example 7.4, “Stopping a Service” illustrates how to stop the bluetooth.service unit in the current
session. To prevent this service unit from starting at boot time, type the following at a shell prompt as
root:

~]# systemctl disable bluetooth.service


rm '/etc/systemd/system/dbus-org.bluez.service'
rm '/etc/systemd/system/bluetooth.target.wants/bluetooth.service'

7.3. Working with systemd Targets


Previous versions of Red Hat Enterprise Linux, which were distributed with SysV init or Upstart,
implemented a predefined set of runlevels that represented specific modes of operation. These runlevels
were numbered from 0 to 6 and were defined by a selection of system services to be run when a
particular runlevel was enabled by the system administrator. In Red Hat Enterprise Linux 7, the concept
of runlevels has been replaced with systemd targets.

Systemd targets are represented by target units. Target units end with the .target file extension and
their only purpose is to group together other systemd units through a chain of dependencies. For
example, the graphical.target unit, which is used to start a graphical session, starts system
services such as the GNOME Display Manager (gdm.service) or Accounts Service
(accounts-daemon.service) and also activates the multi-user.target unit. Similarly, the
multi-user.target unit starts other essential system services such as NetworkManager
(NetworkManager.service) or D-Bus (dbus.service) and activates another target unit named
basic.target.

Red Hat Enterprise Linux 7 is distributed with a number of predefined targets that are more or less
similar to the standard set of runlevels from the previous releases of this system. For compatibility
reasons, it also provides aliases for these targets that directly map them to SysV runlevels. Table 7.6,
“Comparison of SysV Runlevels with systemd Targets” provides a complete list of SysV runlevels and
their corresponding systemd targets.


Table 7.6. Comparison of SysV Runlevels with systemd Targets

Runlevel Target Units Description

0 — runlevel0.target, poweroff.target — Shut down and power off the system.
1 — runlevel1.target, rescue.target — Set up a rescue shell.
2 — runlevel2.target, multi-user.target — Set up a non-graphical multi-user system.
3 — runlevel3.target, multi-user.target — Set up a non-graphical multi-user system.
4 — runlevel4.target, multi-user.target — Set up a non-graphical multi-user system.
5 — runlevel5.target, graphical.target — Set up a graphical multi-user system.
6 — runlevel6.target, reboot.target — Shut down and reboot the system.
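
These runlevel aliases are implemented as symbolic links to the corresponding target units. For example,
the mapping of runlevel 3 can be verified as follows (output abbreviated):

~]$ ls -l /usr/lib/systemd/system/runlevel3.target
... /usr/lib/systemd/system/runlevel3.target -> multi-user.target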

To view, change, or configure systemd targets, use the systemctl utility as described in Table 7.7,
“Comparison of SysV init Commands with systemctl” and in the sections below. The runlevel and
telinit commands are still available in the system and work as expected, but are only included for
compatibility reasons and should be avoided.

Table 7.7. Comparison of SysV init Commands with systemctl

Old Command New Command Description


runlevel — systemctl list-units --type target — Displays the current target.
telinit runlevel — systemctl isolate name.target — Changes the current target.

7.3.1. Viewing the Default Target


To determine which target unit is used by default, run the following command:

systemctl get-default

This command resolves the symbolic link located at /etc/systemd/system/default.target and
displays the result. For information on how to change the default target, see Section 7.3.3, “Changing the
Default Target”. For information on how to list all currently loaded target units, see Section 7.3.2,
“Viewing the Current Target”.

Example 7.8. Viewing the Default Target

To display the default target unit, type:

~]$ systemctl get-default


graphical.target


7.3.2. Viewing the Current Target


To list all currently loaded target units, type the following command at a shell prompt:

systemctl list-units --type target

For each target unit, this command displays its full name (UNIT) followed by a note whether the unit
has been loaded (LOAD), its high-level (ACTIVE) and low-level (SUB) unit activation state, and a short
description (DESCRIPTION).

By default, the systemctl list-units command displays only active units. If you want to list all
loaded units regardless of their state, run this command with the --all or -a command line option:

systemctl list-units --type target --all

See Section 7.3.1, “Viewing the Default Target” for information on how to display the default target. For
information on how to change the current target, see Section 7.3.4, “Changing the Current Target”.

Example 7.9. Viewing the Current Target

To list all currently loaded target units, run the following command:

~]$ systemctl list-units --type target


UNIT LOAD ACTIVE SUB DESCRIPTION
basic.target loaded active active Basic System
cryptsetup.target loaded active active Encrypted Volumes
getty.target loaded active active Login Prompts
graphical.target loaded active active Graphical Interface
local-fs-pre.target loaded active active Local File Systems (Pre)
local-fs.target loaded active active Local File Systems
multi-user.target loaded active active Multi-User System
network.target loaded active active Network
paths.target loaded active active Paths
remote-fs.target loaded active active Remote File Systems
sockets.target loaded active active Sockets
sound.target loaded active active Sound Card
spice-vdagentd.target loaded active active Agent daemon for Spice guests
swap.target loaded active active Swap
sysinit.target loaded active active System Initialization
time-sync.target loaded active active System Time Synchronized
timers.target loaded active active Timers

LOAD = Reflects whether the unit definition was properly loaded.


ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.

17 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

7.3.3. Changing the Default Target


To configure the system to use a different target unit by default, type the following at a shell prompt as
root:


systemctl set-default name.target

Replace name with the name of the target unit you want to use by default (for example, multi-user).
This command replaces the /etc/systemd/system/default.target file with a symbolic link to
/usr/lib/systemd/system/name.target, where name is the name of the target unit you want to
use. For information on how to change the current target, see Section 7.3.4, “Changing the Current
Target”.

Example 7.10. Changing the Default Target

To configure the system to use the multi-user.target unit by default, run the following command
as root:

~]# systemctl set-default multi-user.target


rm '/etc/systemd/system/default.target'
ln -s '/usr/lib/systemd/system/multi-user.target' '/etc/systemd/system/default.target'

7.3.4. Changing the Current Target


To change to a different target unit in the current session, type the following at a shell prompt as root:

systemctl isolate name.target

Replace name with the name of the target unit you want to use (for example, multi-user). This
command starts the target unit named name and all dependent units, and immediately stops all others.
For information on how to change the default target, see Section 7.3.3, “Changing the Default Target”.
For information on how to list all currently loaded target units, see Section 7.3.2, “Viewing the Current
Target”.

Example 7.11. Changing the Current Target

To turn off the graphical user interface and change to the multi-user.target unit in the current
session, run the following command as root:

~]# systemctl isolate multi-user.target

7.3.5. Changing to Rescue Mode


Rescue mode provides a convenient single-user environment and allows you to repair your system in
situations when it is unable to complete a regular booting process. In rescue mode, the system attempts
to mount all local file systems and start some important system services, but it does not activate network
interfaces or allow more users to be logged into the system at the same time. In Red Hat
Enterprise Linux 7, rescue mode is equivalent to single user mode and requires the root password.

To change the current target and enter rescue mode in the current session, type the following at a shell
prompt as root:

systemctl rescue


This command is similar to systemctl isolate rescue.target, but it also sends an informative
message to all users that are currently logged into the system. To prevent systemd from sending this
message, run this command with the --no-wall command line option:

systemctl --no-wall rescue

For information on how to enter emergency mode, see Section 7.3.6, “Changing to Emergency Mode”.

Example 7.12. Changing to Rescue Mode

To enter rescue mode in the current session, run the following command as root:

~]# systemctl rescue

Broadcast message from root@localhost on pts/0 (Fri 2013-10-25 18:23:15 CEST):

The system is going down to rescue mode NOW!

7.3.6. Changing to Emergency Mode


Emergency mode provides the most minimal environment possible and allows you to repair your system
even in situations when the system is unable to enter rescue mode. In emergency mode, the system
mounts the root file system only for reading, does not attempt to mount any other local file systems, does
not activate network interfaces, and only starts a few essential services. In Red Hat Enterprise Linux 7,
emergency mode requires the root password.

To change the current target and enter emergency mode, type the following at a shell prompt as root:

systemctl emergency

This command is similar to systemctl isolate emergency.target, but it also sends an
informative message to all users that are currently logged into the system. To prevent systemd from
sending this message, run this command with the --no-wall command line option:

systemctl --no-wall emergency

For information on how to enter rescue mode, see Section 7.3.5, “Changing to Rescue Mode”.

Example 7.13. Changing to Emergency Mode

To enter emergency mode without sending a message to all users that are currently logged into the
system, run the following command as root:

~]# systemctl --no-wall emergency

7.4. Shutting Down, Suspending, and Hibernating the System


In Red Hat Enterprise Linux 7, the systemctl utility replaces a number of power management
commands used in previous versions of the Red Hat Enterprise Linux system. The commands listed in
Table 7.8, “Comparison of Power Management Commands with systemctl” are still available in the
system for compatibility reasons, but it is advised that you use systemctl when possible.

Table 7.8. Comparison of Power Management Commands with systemctl

Old Command New Command Description


halt — systemctl halt — Halts the system.
poweroff — systemctl poweroff — Powers off the system.
reboot — systemctl reboot — Restarts the system.
pm-suspend — systemctl suspend — Suspends the system.
pm-hibernate — systemctl hibernate — Hibernates the system.
pm-suspend-hybrid — systemctl hybrid-sleep — Hibernates and suspends the system.

7.4.1. Shutting Down the System


To shut down the system and power off the machine, type the following at a shell prompt as root:

systemctl poweroff

To shut down and halt the system without powering off the machine, run the following command as
root:

systemctl halt

By default, running either of these commands causes systemd to send an informative message to all
users that are currently logged into the system. To prevent systemd from sending this message, run the
selected command with the --no-wall command line option, for example:

systemctl --no-wall poweroff

7.4.2. Restarting the System


To restart the system, run the following command as root:

systemctl reboot

By default, this command causes systemd to send an informative message to all users that are currently
logged into the system. To prevent systemd from sending this message, run this command with the --
no-wall command line option:

systemctl --no-wall reboot

7.4.3. Suspending the System


To suspend the system, type the following at a shell prompt as root:

systemctl suspend

This command saves the system state in RAM and with the exception of the RAM module, powers off
most of the devices in the machine. When you turn the machine back on, the system then restores its
state from RAM without having to boot again. Because the system state is saved in RAM and not on the
hard disk, restoring the system from suspend mode is significantly faster than restoring it from
hibernation, but as a consequence, a suspended system state is also vulnerable to power outages.

For information on how to hibernate the system, see Section 7.4.4, “Hibernating the System”.

7.4.4. Hibernating the System


To hibernate the system, type the following at a shell prompt as root:

systemctl hibernate

This command saves the system state on the hard disk drive and powers off the machine. When you
turn the machine back on, the system then restores its state from the saved data without having to boot
again. Because the system state is saved on the hard disk and not in RAM, the machine does not have
to maintain electrical power to the RAM module, but as a consequence, restoring the system from
hibernation is significantly slower than restoring it from suspend mode.

To hibernate and suspend the system, run the following command as root:

systemctl hybrid-sleep

For information on how to suspend the system, see Section 7.4.3, “Suspending the System”.

7.5. Controlling systemd on a Remote Machine


In addition to controlling the systemd system and service manager locally, the systemctl utility also
allows you to interact with systemd running on a remote machine over the SSH protocol. Provided that
the sshd service on the remote machine is running, you can connect to this machine by running the
systemctl command with the --host or -H command line option:

systemctl --host user_name@host_name command

Replace user_name with the name of the remote user, host_name with the machine's host name, and
command with any of the systemctl commands described above. Note that the remote machine must
be configured to allow the selected user remote access over the SSH protocol. For more information on
how to configure an SSH server, see Chapter 8, OpenSSH.

Example 7.14. Remote Management

To log in to a remote machine named server-01.example.com as the root user and determine
the current status of the httpd.service unit, type the following at a shell prompt:

~]$ systemctl -H root@server-01.example.com status httpd.service


root@server-01.example.com's password:
httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled)
Active: active (running) since Fri 2013-11-01 13:58:56 CET; 2h 48min ago
Main PID: 649
Status: "Total requests: 0; Current requests/sec: 0; Current traffic: 0
B/sec"
CGroup: /system.slice/httpd.service


7.6. Additional Resources


For more information on systemd and its usage on Red Hat Enterprise Linux, refer to the resources
listed below.

Installed Documentation

systemctl(1) — The manual page for the systemctl command line utility provides a complete list
of supported options and commands.
systemd(1) — The manual page for the systemd system and service manager provides more
information about its concepts and documents available command line options and environment
variables, supported configuration files and directories, recognized signals, and available kernel
options.
systemd.unit(5) — The manual page named systemd.unit provides in-depth information about
systemd unit files and documents all available configuration options.
systemd.service(5) — The manual page named systemd.service documents the format of
service unit files.
systemd.target(5) — The manual page named systemd.target documents the format of
target unit files.

Online Documentation

Red Hat Enterprise Linux 7 Networking Guide — The Networking Guide for Red Hat Enterprise Linux
7 documents relevant information regarding the configuration and administration of network
interfaces, networks, and network services in this system. It provides an introduction to the
hostnamectl utility and explains how to use it to view and set host names on the command line,
both locally and remotely.
Red Hat Enterprise Linux 7 Desktop Migration and Administration Guide — The Desktop Migration
and Administration Guide for Red Hat Enterprise Linux 7 documents the migration planning,
deployment, configuration, and administration of the GNOME 3 desktop on this system. It introduces
the logind service, enumerates its most significant features, and explains how to use the
loginctl utility to list active sessions and enable multi-seat support.
Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide — The SELinux User's and
Administrator's Guide for Red Hat Enterprise Linux 7 describes the basic principles of SELinux and
documents in detail how to configure and use SELinux with various services such as the Apache
HTTP Server, Postfix, PostgreSQL, or OpenShift. It explains how to configure SELinux access
permissions for system services managed by systemd.
Red Hat Enterprise Linux 7 Installation Guide — The Installation Guide for Red Hat Enterprise Linux 7
documents how to install the system on AMD64 and Intel 64 systems, 64-bit IBM Power Systems
servers, and IBM System z. It also covers advanced installation methods such as Kickstart
installations, PXE installations, and installations over the VNC protocol. In addition, it describes
common post-installation tasks and explains how to troubleshoot installation problems, including
detailed instructions on how to boot into rescue mode or recover the root password.
Red Hat Enterprise Linux 7 Security Guide — The Security Guide for Red Hat Enterprise Linux 7
assists users and administrators in learning the processes and practices of securing their
workstations and servers against local and remote intrusion, exploitation, and malicious activity. It
also explains how to secure critical system services.
systemd Home Page — The project home page provides more information about systemd.


See Also

Chapter 1, System Locale and Keyboard Configuration documents how to manage the system locale
and keyboard layouts. It explains how to use the localectl utility to view the current locale, list
available locales, and set the system locale on the command line, as well as to view the current
keyboard layout, list available keymaps, and enable a particular keyboard layout on the command
line.
Chapter 2, Configuring the Date and Time documents how to manage the system date and time. It
explains the difference between a real-time clock and system clock and describes how to use the
timedatectl utility to display the current settings of the system clock, configure the date and time,
change the time zone, and synchronize the system clock with a remote server.
Chapter 4, Gaining Privileges documents how to gain administrative privileges by using the su and
sudo commands.
Chapter 8, OpenSSH describes how to configure an SSH server and how to use the ssh, scp, and
sftp client utilities to access it.
Chapter 19, Viewing and Managing Log Files provides an introduction to journald. It describes the
journal, introduces the journald service, and documents how to use the journalctl utility to
view log entries, enter live view mode, and filter log entries. In addition, this chapter describes how to
give non-root users access to system logs and enable persistent storage for log files.


Chapter 8. OpenSSH
SSH (Secure Shell) is a protocol which facilitates secure communications between two systems using a
client/server architecture and allows users to log in to server host systems remotely. Unlike other remote
communication protocols, such as FTP or Telnet, SSH encrypts the login session, making it difficult
for intruders to collect unencrypted passwords.

The ssh program is designed to replace older, less secure terminal applications used to log in to remote
hosts, such as telnet or rsh. A related program called scp replaces older programs designed to copy
files between hosts, such as rcp. Because these older applications do not encrypt passwords
transmitted between the client and the server, avoid them whenever possible. Using secure methods to
log in to remote systems decreases the risks for both the client system and the remote host.

Red Hat Enterprise Linux includes the general OpenSSH package (openssh) as well as the OpenSSH
server (openssh-server) and client (openssh-clients) packages. Note, the OpenSSH packages
require the OpenSSL package (openssl) which installs several important cryptographic libraries,
enabling OpenSSH to provide encrypted communications.
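
If any of these packages are missing from your system, they can be installed with Yum as described in
Chapter 5, Yum; for example, as root:

~]# yum install openssh openssh-server openssh-clients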

8.1. The SSH Protocol

8.1.1. Why Use SSH?


Potential intruders have a variety of tools at their disposal enabling them to disrupt, intercept, and re-
route network traffic in an effort to gain access to a system. In general terms, these threats can be
categorized as follows:

Interception of communication between two systems


The attacker can be somewhere on the network between the communicating parties, copying
any information passed between them. He may intercept and keep the information, or alter the
information and send it on to the intended recipient.

This attack is usually performed using a packet sniffer, a rather common network utility that
captures each packet flowing through the network, and analyzes its content.

Impersonation of a particular host


The attacker's system is configured to pose as the intended recipient of a transmission. If this
strategy works, the user's system remains unaware that it is communicating with the wrong
host.

This attack can be performed using a technique known as DNS poisoning, or via so-called IP
spoofing. In the first case, the intruder uses a cracked DNS server to point client systems to a
maliciously duplicated host. In the second case, the intruder sends falsified network packets that
appear to be from a trusted host.

Both techniques intercept potentially sensitive information and, if the interception is made for hostile
reasons, the results can be disastrous. If SSH is used for remote shell login and file copying, these
security threats can be greatly diminished. This is because the SSH client and server use digital
signatures to verify their identity. Additionally, all communication between the client and server systems is
encrypted. Attempts to spoof the identity of either side of a communication do not work, since each
packet is encrypted using a key known only by the local and remote systems.


8.1.2. Main Features


The SSH protocol provides the following safeguards:

No one can pose as the intended server


After an initial connection, the client can verify that it is connecting to the same server it had
connected to previously.

No one can capture the authentication information


The client transmits its authentication information to the server using strong, 128-bit encryption.

No one can intercept the communication


All data sent and received during a session is transferred using 128-bit encryption, making
intercepted transmissions extremely difficult to decrypt and read.

Additionally, it also offers the following options:

It provides secure means to use graphical applications over a network


Using a technique called X11 forwarding, the client can forward X11 (X Window System)
applications from the server.

It provides a way to secure otherwise insecure protocols


The SSH protocol encrypts everything it sends and receives. Using a technique called port
forwarding, an SSH server can become a conduit to securing otherwise insecure protocols, like
POP, and increasing overall system and data security.

It can be used to create a secure channel


The OpenSSH server and client can be configured to create a tunnel similar to a virtual private
network for traffic between server and client machines.

It supports Kerberos authentication


OpenSSH servers and clients can be configured to authenticate using the GSSAPI (Generic
Security Services Application Program Interface) implementation of the Kerberos network
authentication protocol.

8.1.3. Protocol Versions


Two varieties of SSH currently exist: version 1, and newer version 2. The OpenSSH suite under Red Hat
Enterprise Linux uses SSH version 2, which has an enhanced key exchange algorithm not vulnerable to
the known exploit in version 1. However, for compatibility reasons, the OpenSSH suite does support
version 1 connections as well.

Avoid using SSH version 1

To ensure maximum security for your connection, it is recommended that only SSH version 2-
compatible servers and clients are used whenever possible.


8.1.4. Event Sequence of an SSH Connection


The following series of events helps protect the integrity of SSH communication between two hosts.

1. A cryptographic handshake is made so that the client can verify that it is communicating with the
correct server.
2. The transport layer of the connection between the client and remote host is encrypted using a
symmetric cipher.
3. The client authenticates itself to the server.
4. The client interacts with the remote host over the encrypted connection.

8.1.4.1. Transport Layer


The primary role of the transport layer is to facilitate safe and secure communication between the two
hosts at the time of authentication and during subsequent communication. The transport layer
accomplishes this by handling the encryption and decryption of data, and by providing integrity protection
of data packets as they are sent and received. The transport layer also provides compression, speeding
the transfer of information.

Once an SSH client contacts a server, key information is exchanged so that the two systems can
correctly construct the transport layer. The following steps occur during this exchange:

Keys are exchanged


The public key encryption algorithm is determined
The symmetric encryption algorithm is determined
The message authentication algorithm is determined
The hash algorithm is determined

During the key exchange, the server identifies itself to the client with a unique host key. If the client has
never communicated with this particular server before, the server's host key is unknown to the client and
it does not connect. OpenSSH gets around this problem by accepting the server's host key. This is done
after the user is notified and has both accepted and verified the new host key. In subsequent
connections, the server's host key is checked against the saved version on the client, providing
confidence that the client is indeed communicating with the intended server. If, in the future, the host key
no longer matches, the user must remove the client's saved version before a connection can occur.

Always verify the integrity of a new SSH server

It is possible for an attacker to masquerade as an SSH server during the initial contact since the
local system does not know the difference between the intended server and a false one set up by
an attacker. To help prevent this, verify the integrity of a new SSH server by contacting the server
administrator before connecting for the first time or in the event of a host key mismatch.

SSH is designed to work with almost any kind of public key algorithm or encoding format. After an initial
key exchange creates a hash value used for exchanges and a shared secret value, the two systems
immediately begin calculating new keys and algorithms to protect authentication and future data sent
over the connection.

After a certain amount of data has been transmitted using a given key and algorithm (the exact amount
depends on the SSH implementation), another key exchange occurs, generating another set of hash
values and a new shared secret value. Even if an attacker is able to determine the hash and shared
secret value, this information is only useful for a limited period of time.


8.1.4.2. Authentication
Once the transport layer has constructed a secure tunnel to pass information between the two systems,
the server tells the client the different authentication methods supported, such as using a private key-
encoded signature or typing a password. The client then tries to authenticate itself to the server using
one of these supported methods.

SSH servers and clients can be configured to allow different types of authentication, which gives each
side the optimal amount of control. The server can decide which encryption methods it supports based
on its security model, and the client can choose the order of authentication methods to attempt from the
available options.

8.1.4.3. Channels
After a successful authentication over the SSH transport layer, multiple channels are opened via a
technique called multiplexing [4] . Each of these channels handles communication for different terminal
sessions and for forwarded X11 sessions.

Both clients and servers can create a new channel. Each channel is then assigned a different number on
each end of the connection. When the client attempts to open a new channel, the client sends the
channel number along with the request. This information is stored by the server and is used to direct
communication to that channel. This is done so that different types of sessions do not affect one another
and so that when a given session ends, its channel can be closed without disrupting the primary SSH
connection.

Channels also support flow-control, which allows them to send and receive data in an orderly fashion. In
this way, data is not sent over the channel until the client receives a message that the channel is open.

The client and server negotiate the characteristics of each channel automatically, depending on the type
of service the client requests and the way the user is connected to the network. This allows great
flexibility in handling different types of remote connections without having to change the basic
infrastructure of the protocol.

8.2. Configuring OpenSSH

8.2.1. Configuration Files


There are two different sets of configuration files: those for client programs (that is, ssh, scp, and
sftp), and those for the server (the sshd daemon).

System-wide SSH configuration information is stored in the /etc/ssh/ directory as described in
Table 8.1, “System-wide configuration files”. User-specific SSH configuration information is stored in
~/.ssh/ within the user's home directory as described in Table 8.2, “User-specific configuration files”.


Table 8.1. System-wide configuration files

File Description
/etc/ssh/moduli Contains Diffie-Hellman groups used for the Diffie-Hellman key
exchange which is critical for constructing a secure transport
layer. When keys are exchanged at the beginning of an SSH
session, a shared, secret value is created which cannot be
determined by either party alone. This value is then used to
provide host authentication.
/etc/ssh/ssh_config The default SSH client configuration file. Note that it is
overridden by ~/.ssh/config if it exists.
/etc/ssh/sshd_config The configuration file for the sshd daemon.
/etc/ssh/ssh_host_dsa_key The DSA private key used by the sshd daemon.
/etc/ssh/ssh_host_dsa_key.pub The DSA public key used by the sshd daemon.
/etc/ssh/ssh_host_key The RSA private key used by the sshd daemon for version 1
of the SSH protocol.
/etc/ssh/ssh_host_key.pub The RSA public key used by the sshd daemon for version 1 of
the SSH protocol.
/etc/ssh/ssh_host_rsa_key The RSA private key used by the sshd daemon for version 2
of the SSH protocol.
/etc/ssh/ssh_host_rsa_key.pub The RSA public key used by the sshd daemon for version 2 of
the SSH protocol.

Table 8.2. User-specific configuration files

File Description
~/.ssh/authorized_keys Holds a list of authorized public keys for servers. When the
client connects to a server, the server authenticates the client
by checking its signed public key stored within this file.
~/.ssh/id_dsa Contains the DSA private key of the user.
~/.ssh/id_dsa.pub The DSA public key of the user.
~/.ssh/id_rsa The RSA private key used by ssh for version 2 of the SSH
protocol.
~/.ssh/id_rsa.pub The RSA public key used by ssh for version 2 of the SSH
protocol.
~/.ssh/identity The RSA private key used by ssh for version 1 of the SSH
protocol.
~/.ssh/identity.pub The RSA public key used by ssh for version 1 of the SSH
protocol.
~/.ssh/known_hosts Contains DSA host keys of SSH servers accessed by the user.
This file is very important for ensuring that the SSH client is
connecting to the correct SSH server.

For information concerning various directives that can be used in the SSH configuration files, refer to the
ssh_config(5) and sshd_config(5) manual pages.


8.2.2. Starting an OpenSSH Server


In order to run an OpenSSH server, you must have the openssh-server and openssh packages installed
(refer to Section 5.2.4, “Installing Packages” for more information on how to install new packages in
Red Hat Enterprise Linux 7).

To start the sshd daemon in the current session, type the following at a shell prompt as root:

~]# systemctl start sshd.service

To stop the running sshd daemon in the current session, use the following command as root:

~]# systemctl stop sshd.service

If you want the daemon to start automatically at boot time, type as root:

~]# systemctl enable sshd.service


ln -s '/usr/lib/systemd/system/sshd.service' '/etc/systemd/system/multi-
user.target.wants/sshd.service'

This enables the service for all runlevels. For more information on how to manage system services in
Red Hat Enterprise Linux, see Chapter 7, Managing Services with systemd.

Note that if you reinstall the system, a new set of identification keys will be created. As a result, clients
who had connected to the system with any of the OpenSSH tools before the reinstall will see the
following message:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.

To prevent this, you can back up the relevant files from the /etc/ssh/ directory (see Table 8.1,
“System-wide configuration files” for a complete list), and restore them whenever you reinstall the
system.
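
One possible approach, backing up only the host key files, is sketched below; the archive location is
arbitrary, and the files are restored by extracting the archive relative to the root directory:

~]# tar -czf /root/ssh-host-keys.tar.gz /etc/ssh/ssh_host_*
~]# tar -xzf /root/ssh-host-keys.tar.gz -C /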

8.2.3. Requiring SSH for Remote Connections


For SSH to be truly effective, using insecure connection protocols should be prohibited. Otherwise, a
user's password may be protected using SSH for one session, only to be captured later while logging in
using Telnet. Some services to disable include telnet, rsh, rlogin, and vsftpd.

For information on how to configure the vsftpd service, see Section 13.2, “FTP”. To learn how to
manage system services in Red Hat Enterprise Linux 7, read Chapter 7, Managing Services with
systemd.

8.2.4. Using Key-based Authentication


To improve the system security even further, you can enforce key-based authentication by disabling
the standard password authentication. To do so, open the /etc/ssh/sshd_config configuration file in
a text editor such as vi or nano, and change the PasswordAuthentication option as follows:

PasswordAuthentication no
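
For this change to take effect, the sshd daemon must be restarted, for example as root:

~]# systemctl restart sshd.service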


To be able to use ssh, scp, or sftp to connect to the server from a client machine, generate an
authorization key pair by following the steps below. Note that keys must be generated for each user
separately.

Red Hat Enterprise Linux 7 uses SSH Protocol 2 and RSA keys by default (see Section 8.1.3, “Protocol
Versions” for more information).

Do not generate key pairs as root

If you complete the steps as root, only root will be able to use the keys.

Backup your ~/.ssh/ directory

If you reinstall your system and want to keep a previously generated key pair, back up the ~/.ssh/
directory. After reinstalling, copy it back to your home directory. This process can be done for all
users on your system, including root.

8.2.4.1. Generating Key Pairs


To generate an RSA key pair for version 2 of the SSH protocol, follow these steps:

1. Generate an RSA key pair by typing the following at a shell prompt:

~]$ ssh-keygen -t rsa


Generating public/private rsa key pair.
Enter file in which to save the key (/home/john/.ssh/id_rsa):

2. Press Enter to confirm the default location (that is, ~/.ssh/id_rsa) for the newly created key.
3. Enter a passphrase, and confirm it by entering it again when prompted to do so. For security
reasons, avoid using the same password as you use to log in to your account.
After this, you will be presented with a message similar to this:

Your identification has been saved in /home/john/.ssh/id_rsa.


Your public key has been saved in /home/john/.ssh/id_rsa.pub.
The key fingerprint is:
e7:97:c7:e2:0e:f9:0e:fc:c4:d7:cb:e5:31:11:92:14 john@penguin.example.com
The key's randomart image is:
+--[ RSA 2048]----+
| E. |
| . . |
| o . |
| . .|
| S . . |
| + o o ..|
| * * +oo|
| O +..=|
| o* o.|
+-----------------+

4. Change the permissions of the ~/.ssh/ directory:

~]$ chmod 700 ~/.ssh


5. Copy the content of ~/.ssh/id_rsa.pub into the ~/.ssh/authorized_keys on the machine
to which you want to connect, appending it to its end if the file already exists.
6. Change the permissions of the ~/.ssh/authorized_keys file using the following command:

~]$ chmod 600 ~/.ssh/authorized_keys

To generate a DSA key pair for version 2 of the SSH protocol, follow these steps:

1. Generate a DSA key pair by typing the following at a shell prompt:

~]$ ssh-keygen -t dsa


Generating public/private dsa key pair.
Enter file in which to save the key (/home/john/.ssh/id_dsa):

2. Press Enter to confirm the default location (that is, ~/.ssh/id_dsa) for the newly created key.
3. Enter a passphrase, and confirm it by entering it again when prompted to do so. For security
reasons, avoid using the same password as you use to log in to your account.
After this, you will be presented with a message similar to this:

Your identification has been saved in /home/john/.ssh/id_dsa.


Your public key has been saved in /home/john/.ssh/id_dsa.pub.
The key fingerprint is:
81:a1:91:a8:9f:e8:c5:66:0d:54:f5:90:cc:bc:cc:27 john@penguin.example.com
The key's randomart image is:
+--[ DSA 1024]----+
| .oo*o. |
| ...o Bo |
| .. . + o. |
|. . E o |
| o..o S |
|. o= . |
|. + |
| . |
| |
+-----------------+

4. Change the permissions of the ~/.ssh/ directory:

~]$ chmod 700 ~/.ssh

5. Copy the content of ~/.ssh/id_dsa.pub into the ~/.ssh/authorized_keys on the machine
to which you want to connect, appending it to its end if the file already exists.
6. Change the permissions of the ~/.ssh/authorized_keys file using the following command:

~]$ chmod 600 ~/.ssh/authorized_keys

To generate an RSA key pair for version 1 of the SSH protocol, follow these steps:

1. Generate an RSA key pair by typing the following at a shell prompt:

~]$ ssh-keygen -t rsa1


Generating public/private rsa1 key pair.
Enter file in which to save the key (/home/john/.ssh/identity):


2. Press Enter to confirm the default location (that is, ~/.ssh/identity) for the newly created
key.
3. Enter a passphrase, and confirm it by entering it again when prompted to do so. For security
reasons, avoid using the same password as you use to log in to your account.
After this, you will be presented with a message similar to this:

Your identification has been saved in /home/john/.ssh/identity.


Your public key has been saved in /home/john/.ssh/identity.pub.
The key fingerprint is:
cb:f6:d5:cb:6e:5f:2b:28:ac:17:0c:e4:62:e4:6f:59 john@penguin.example.com
The key's randomart image is:
+--[RSA1 2048]----+
| |
| . . |
| o o |
| + o E |
| . o S |
| = + . |
| . = . o . .|
| . = o o..o|
| .o o o=o.|
+-----------------+

4. Change the permissions of the ~/.ssh/ directory:

~]$ chmod 700 ~/.ssh

5. Copy the content of ~/.ssh/identity.pub into the ~/.ssh/authorized_keys on the
machine to which you want to connect, appending it to its end if the file already exists.
6. Change the permissions of the ~/.ssh/authorized_keys file using the following command:

~]$ chmod 600 ~/.ssh/authorized_keys
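
As an alternative to copying the public key and adjusting permissions by hand (steps 5 and 6 above),
the ssh-copy-id utility from the openssh-clients package can append the key to the remote
authorized_keys file for you; a minimal example, reusing the user john and the host
penguin.example.com from this chapter:

~]$ ssh-copy-id john@penguin.example.com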

Refer to Section 8.2.4.2, “Configuring ssh-agent” for information on how to set up your system to
remember the passphrase.

Never share your private key

The private key is for your personal use only, and it is important that you never give it to anyone.

8.2.4.2. Configuring ssh-agent


To store your passphrase so that you do not have to enter it each time you initiate a connection with a
remote machine, you can use the ssh-agent authentication agent. If you are running GNOME, you can
configure it to prompt you for your passphrase whenever you log in and remember it during the whole
session. Otherwise you can store the passphrase for a certain shell prompt.

To save your passphrase during your GNOME session, follow these steps:

1. Make sure you have the openssh-askpass package installed. If not, refer to Section 5.2.4,
“Installing Packages” for more information on how to install new packages in Red Hat
Enterprise Linux.
2. Select System → Preferences → Startup Applications from the panel. The Startup
Applications Preferences will be started, and the tab containing a list of available startup
programs will be shown by default.

Figure 8.1. Startup Applications Preferences

3. Click the Add button on the right, and enter /usr/bin/ssh-add in the Command field.

Figure 8.2. Adding new application

4. Click Add and make sure the checkbox next to the newly added item is selected.


Figure 8.3. Enabling the application

5. Log out and then log back in. A dialog box will appear prompting you for your passphrase. From
this point on, you should not be prompted for a password by ssh, scp, or sftp.

Figure 8.4. Entering a passphrase

To save your passphrase for a certain shell prompt, use the following command:

~]$ ssh-add
Enter passphrase for /home/john/.ssh/id_rsa:

Note that when you log out, your passphrase will be forgotten. You must execute the command each
time you log in to a virtual console or a terminal window.
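
To verify which keys the agent is currently holding, the -l option of ssh-add lists their fingerprints;
a minimal example:

~]$ ssh-add -l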

8.3. OpenSSH Clients


To connect to an OpenSSH server from a client machine, you must have the openssh-clients and
openssh packages installed (refer to Section 5.2.4, “Installing Packages” for more information on how to
install new packages in Red Hat Enterprise Linux).

8.3.1. Using the ssh Utility


The ssh utility allows you to log in to a remote machine and execute commands there. It is a secure
replacement for the rlogin, rsh, and telnet programs.

Similarly to telnet, to log in to a remote machine, use a command in the following form:

ssh hostname

For example, to log in to a remote machine named penguin.example.com, type the following at a
shell prompt:

~]$ ssh penguin.example.com

This will log you in with the same username you are using on the local machine. If you want to specify a
different one, use a command in the following form:

ssh username@hostname

For example, to log in to penguin.example.com as john, type:

~]$ ssh john@penguin.example.com

The first time you initiate a connection, you will be presented with a message similar to this:

The authenticity of host 'penguin.example.com' can't be established.


RSA key fingerprint is 94:68:3a:3a:bc:f3:9a:9b:01:5d:b3:07:38:e2:11:0c.
Are you sure you want to continue connecting (yes/no)?

Type yes to confirm. You will see a notice that the server has been added to the list of known hosts, and
a prompt asking for your password:

Warning: Permanently added 'penguin.example.com' (RSA) to the list of known hosts.


john@penguin.example.com's password:

Updating the host key of an SSH server

If the SSH server's host key changes, the client notifies the user that the connection cannot
proceed until the server's host key is deleted from the ~/.ssh/known_hosts file. To do so, open
the file in a text editor, and remove a line containing the remote machine name at the beginning.
Before doing this, however, contact the system administrator of the SSH server to verify the
server is not compromised.
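
As an alternative to editing the known_hosts file by hand, the ssh-keygen utility can remove all keys
belonging to a given host name; a minimal example, assuming the host penguin.example.com used
elsewhere in this chapter:

~]$ ssh-keygen -R penguin.example.com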

After entering the password, you will be provided with a shell prompt for the remote machine.

Alternatively, the ssh program can be used to execute a command on the remote machine without
logging in to a shell prompt:

ssh [username@]hostname command


For example, the /etc/redhat-release file provides information about the Red Hat Enterprise Linux
version. To view the contents of this file on penguin.example.com, type:

~]$ ssh john@penguin.example.com cat /etc/redhat-release


john@penguin.example.com's password:
Red Hat Enterprise Linux Server release 6.2 (Santiago)

After you enter the correct password, the contents of the remote file will be displayed, and you will
return to your local shell prompt.

8.3.2. Using the scp Utility


scp can be used to transfer files between machines over a secure, encrypted connection. In its design, it
is very similar to rcp.

To transfer a local file to a remote system, use a command in the following form:

scp localfile username@hostname:remotefile

For example, if you want to transfer taglist.vim to a remote machine named
penguin.example.com, type the following at a shell prompt:

~]$ scp taglist.vim john@penguin.example.com:.vim/plugin/taglist.vim


john@penguin.example.com's password:
taglist.vim 100% 144KB 144.5KB/s 00:00

Multiple files can be specified at once. To transfer the contents of .vim/plugin/ to the same directory
on the remote machine penguin.example.com, type the following command:

~]$ scp .vim/plugin/* john@penguin.example.com:.vim/plugin/


john@penguin.example.com's password:
closetag.vim 100% 13KB 12.6KB/s 00:00
snippetsEmu.vim 100% 33KB 33.1KB/s 00:00
taglist.vim 100% 144KB 144.5KB/s 00:00

To transfer a remote file to the local system, use the following syntax:

scp username@hostname:remotefile localfile

For instance, to download the .vimrc configuration file from the remote machine, type:

~]$ scp john@penguin.example.com:.vimrc .vimrc


john@penguin.example.com's password:
.vimrc 100% 2233 2.2KB/s 00:00
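
To copy an entire directory tree rather than individual files, scp also accepts the -r option; a sketch
that recursively downloads the remote .vim/plugin/ directory used in the examples above:

~]$ scp -r john@penguin.example.com:.vim/plugin/ ~/.vim/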

8.3.3. Using the sftp Utility


The sftp utility can be used to open a secure, interactive FTP session. In its design, it is similar to ftp
except that it uses a secure, encrypted connection.

To connect to a remote system, use a command in the following form:

sftp username@hostname


For example, to log in to a remote machine named penguin.example.com with john as a
username, type:

~]$ sftp john@penguin.example.com


john@penguin.example.com's password:
Connected to penguin.example.com.
sftp>

After you enter the correct password, you will be presented with a prompt. The sftp utility accepts a set
of commands similar to those used by ftp (see Table 8.3, “A selection of available sftp commands”).

Table 8.3. A selection of available sftp commands

Command Description
ls [directory] List the content of a remote directory. If none is supplied, a
current working directory is used by default.
cd directory Change the remote working directory to directory.
mkdir directory Create a remote directory.
rmdir path Remove a remote directory.
put localfile [remotefile] Transfer localfile to a remote machine.
get remotefile [localfile] Transfer remotefile from a remote machine.

For a complete list of available commands, refer to the sftp(1) manual page.
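
A short interactive session, reusing the file names from the scp examples above, might look as follows
(the transfer progress output is omitted):

sftp> cd .vim/plugin
sftp> put taglist.vim
sftp> get closetag.vim
sftp> quit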

8.4. More Than a Secure Shell


A secure command line interface is just the beginning of the many ways SSH can be used. Given the
proper amount of bandwidth, X11 sessions can be directed over an SSH channel. Or, by using TCP/IP
forwarding, previously insecure port connections between systems can be mapped to specific SSH
channels.

8.4.1. X11 Forwarding


To open an X11 session over an SSH connection, use a command in the following form:

ssh -Y username@hostname

For example, to log in to a remote machine named penguin.example.com with john as a
username, type:

~]$ ssh -Y john@penguin.example.com


john@penguin.example.com's password:

When an X program is run from the secure shell prompt, the SSH client and server create a new secure
channel, and the X program data is sent over that channel to the client machine transparently.

X11 forwarding can be very useful. For example, X11 forwarding can be used to create a secure,
interactive session of the Printer Configuration utility. To do this, connect to the server using ssh and
type:

~]$ system-config-printer &


The Printer Configuration Tool will appear, allowing the remote user to safely configure printing on the
remote system.

8.4.2. Port Forwarding


SSH can secure otherwise insecure TCP/IP protocols via port forwarding. When using this technique,
the SSH server becomes an encrypted conduit to the SSH client.

Port forwarding works by mapping a local port on the client to a remote port on the server. SSH can map
any port from the server to any port on the client. Port numbers do not need to match for this technique
to work.

Using reserved port numbers

Setting up port forwarding to listen on ports below 1024 requires root level access.

To create a TCP/IP port forwarding channel which listens for connections on the localhost, use a
command in the following form:

ssh -L local-port:remote-hostname:remote-port username@hostname

For example, to check email on a server called mail.example.com using POP3 through an encrypted
connection, use the following command:

~]$ ssh -L 1100:mail.example.com:110 mail.example.com

Once the port forwarding channel is in place between the client machine and the mail server, direct a
POP3 mail client to use port 1100 on the localhost to check for new email. Any requests sent to port
1100 on the client system will be directed securely to the mail.example.com server.

If mail.example.com is not running an SSH server, but another machine on the same network is,
SSH can still be used to secure part of the connection. However, a slightly different command is
necessary:

~]$ ssh -L 1100:mail.example.com:110 other.example.com

In this example, POP3 requests from port 1100 on the client machine are forwarded through the SSH
connection on port 22 to the SSH server, other.example.com. Then, other.example.com
connects to port 110 on mail.example.com to check for new email. Note that when using this
technique, only the connection between the client system and the other.example.com SSH server is
secure.

Port forwarding can also be used to get information securely through network firewalls. If the firewall is
configured to allow SSH traffic via its standard port (that is, port 22) but blocks access to other ports, a
connection between two hosts using the blocked ports is still possible by redirecting their communication
over an established SSH connection.
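
For example, to reach a web server behind such a firewall, a local port could be forwarded over SSH to
the server's port 80; a minimal sketch, where the host name and the local port 8080 are arbitrary
choices for illustration:

~]$ ssh -L 8080:localhost:80 john@penguin.example.com

A browser on the client can then be pointed at http://localhost:8080/ and the request travels through the
encrypted SSH connection.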


A connection is only as secure as a client system

Using port forwarding to forward connections in this manner allows any user on the client system
to connect to that service. If the client system becomes compromised, the attacker also has
access to forwarded services.
System administrators concerned about port forwarding can disable this functionality on the
server by specifying a No parameter for the AllowTcpForwarding line in
/etc/ssh/sshd_config and restarting the sshd service.
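
A minimal sketch of the corresponding change: the directive is set in /etc/ssh/sshd_config and the
daemon is then restarted as root:

AllowTcpForwarding no

~]# systemctl restart sshd.service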

8.5. Additional Resources


For more information on how to configure or connect to an OpenSSH server on Red Hat Enterprise
Linux, refer to the resources listed below.

Installed Documentation

sshd(8) — The manual page for the sshd daemon documents available command line options and
provides a complete list of supported configuration files and directories.
ssh(1) — The manual page for the ssh client application provides a complete list of available
command line options and supported configuration files and directories.
scp(1) — The manual page for the scp utility provides a more detailed description of this utility and
its usage.
sftp(1) — The manual page for the sftp utility.
ssh-keygen(1) — The manual page for the ssh-keygen utility documents in detail how to use it to
generate, manage, and convert authentication keys used by ssh.
ssh_config(5) — The manual page named ssh_config documents available SSH client
configuration options.
sshd_config(5) — The manual page named sshd_config provides a full description of available
SSH daemon configuration options.

Online Documentation

OpenSSH Home Page — The OpenSSH home page containing further documentation, frequently
asked questions, links to the mailing lists, bug reports, and other useful resources.
OpenSSL Home Page — The OpenSSL home page containing further documentation, frequently
asked questions, links to the mailing lists, and other useful resources.

See Also

Chapter 4, Gaining Privileges documents how to gain administrative privileges by using the su and
sudo commands.
Chapter 7, Managing Services with systemd provides more information on systemd and documents
how to use the systemctl command to manage system services.

[4] A multiplexed connection consists of several signals being sent over a shared, common medium. With SSH, different channels are sent
over a common secure connection.


Chapter 9. TigerVNC
TigerVNC (Tiger Virtual Network Computing) is a system of graphical desktop sharing which allows you
to remotely control other computers.

TigerVNC works on the client-server principle: a server shares its output (vncserver) and a client
(vncviewer) connects to the server.

Note

Unlike in previous Red Hat Enterprise Linux distributions, current TigerVNC uses the systemd
system management daemon for its configuration. The /etc/sysconfig/vncserver
configuration file has been replaced by /lib/systemd/system/vncserver@.service.

9.1. VNC Server


vncserver is a utility which starts a VNC (Virtual Network Computing) desktop. It runs Xvnc with
appropriate options and starts a window manager on the VNC desktop. vncserver allows users to run
totally parallel sessions on a machine which can be accessed by any number of clients from anywhere.

9.1.1. Installing VNC Server


To install the TigerVNC server, run the following command as root:

# yum install tigervnc-server

If you want to use TigerVNC as a client as well, run the following command, which installs the client
component along with the server:

# yum install vnc

9.1.2. Configuring VNC Server


Procedure 9.1. Configuring the first VNC connection

1. Create a new configuration file named
/lib/systemd/system/vncserver@:display_number.service for each of the display
numbers you want to enable. Follow the example below: display number 3 is set, which is included
in the configuration file name. You need not create a completely new file, just copy-paste the
content of /lib/systemd/system/vncserver@.service:
Example 9.1. Creating a configuration file

# cp /lib/systemd/system/vncserver@.service
/lib/systemd/system/vncserver@:3.service

2. Edit /lib/systemd/system/vncserver@:display_number.service, setting the User and
ExecStart arguments as in the example below. Leave the remaining lines of the file unmodified.
The -geometry argument specifies the size of the VNC desktop to be created; by default, it is set to
1024x768.


Example 9.2. Setting the arguments in the configuration file

User=joe
ExecStart=/sbin/runuser -l joe -c "/usr/bin/vncserver %i -geometry
1280x1024"

3. Save the changes.


4. Update systemctl to ensure the changes are taken into account immediately.

# systemctl daemon-reload

5. Set the password for the user or users defined in the configuration file.

# vncpasswd user
Password:
Verify:

Repeat the procedure to set the password for the other user or users:

# su - user2

Important

The stored password is not encrypted securely; anyone who has access to the password
file can find the plain-text password.

9.1.3. Starting VNC Server


To start the service with a concrete display number, execute the following command:

# systemctl start vncserver@:display_number.service

You can also enable the service to start automatically at system start. Every time you log in, vncserver
is automatically started. As root, run

# systemctl enable vncserver@:display_number.service

At this point, other users are able to use the vncviewer program to connect to your server using the
display number and password defined.

9.1.3.1. Troubleshooting
If the vncserver does not start, you should verify whether the respective port is open.

Procedure 9.2. Opening ports

1. To check if iptables is configured to listen to a certain port, run:

# sudo iptables --list | grep port_number


Example 9.3. Checking iptables for port 5903


The VNC display number maps to the 5900 series of ports. If we continue in Example 9.1, “Creating
a configuration file” in which display number 3 has been chosen (step 1), vncserver will listen
on port 5903.

# sudo iptables --list | grep 5903

2. If not, update iptables by adding the following line into the /etc/sysconfig/iptables file:

-A INPUT -p tcp -m state --state NEW -m tcp --dport port_number -j ACCEPT

3. Save the file, then restart iptables and verify that the port is active by running:

# sudo systemctl restart iptables.service


# sudo iptables --list | grep port_number

9.1.4. Terminating VNC session


Similarly to starting the vncserver service, you can disable the automatic start of the service at boot
time of your operating system:

# systemctl disable vncserver@:display_number.service

Or, while your operating system is running, you can stop the service by running the following command:

# systemctl stop vncserver@:display_number.service

9.2. VNC Viewer


vncviewer is the program which shows the shared graphical user interfaces and controls the server.

For operating the vncviewer, there is a pop-up menu containing entries which perform various actions
such as switching in and out of full-screen mode or quitting the viewer. Alternatively, you can operate
vncviewer through the terminal; a list of parameters vncviewer can be used with can be
obtained by typing vncviewer -h on the command line.

9.2.1. Connecting to VNC Server


Once your VNC server is configured, you can connect to it from any VNC viewer. If you have not done it
yet, install the package containing vncviewer:

# yum install tigervnc-server

In order to connect to a VNC server, run vncviewer in the format of # vncviewer
machine_name.local_domain:port_number.


Example 9.4. One client connecting to vncserver

For example, with the IP address 192.168.0.4, display number 3, and machine name joe, the
command looks as follows:

# vncviewer joe 192.168.0.4:3

9.2.1.1. Switching off firewall to enable VNC connection


When you use a non-encrypted connection, the firewall may block your connection attempts. Thus,
either consider setting up an encrypted connection (see more in Section 9.2.2,
“Connecting to VNC Server using SSH”) or disable the firewall temporarily for the duration of the connection.

To disable Linux Firewall, run the following two commands as root:

# /etc/init.d/ebtables save
# /etc/init.d/ebtables stop

9.2.2. Connecting to VNC Server using SSH


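A common approach, sketched here as an illustration rather than as this guide's own procedure, is to
tunnel the VNC port over an SSH connection as described in Section 8.4.2, “Port Forwarding”.
Assuming the server configured in Example 9.1 with display number 3 (and thus port 5903), the user
joe, and the IP address 192.168.0.4 from Example 9.4:

~]$ ssh -L 5903:localhost:5903 joe@192.168.0.4
~]$ vncviewer localhost:3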

9.3. Additional Resources


vncserver(1)
The VNC server manual pages.

vncviewer(1)
The VNC viewer manual pages.

vncpasswd(1)
The VNC password manual pages.


Part IV. Servers


This part discusses various topics related to servers such as how to set up a Web server or share files
and directories over the network.


Chapter 10. Web Servers


An HTTP (Hypertext Transfer Protocol) server, or a web server, is a network service that serves content to a
client over the web. This typically means web pages, but any other documents can be served as well.

10.1. The Apache HTTP Server


The web server available in Red Hat Enterprise Linux 7 is the Apache HTTP server daemon, httpd, an
open source web server developed by the Apache Software Foundation. In Red Hat Enterprise Linux 7
the Apache server has been updated to Apache HTTP Server 2.4. This section describes the basic
configuration of the httpd service, and covers some advanced topics such as adding server modules,
setting up virtual hosts, or configuring the secure HTTP server. Reference material can be found in the
Red Hat Enterprise Linux 7 System Administrator's Reference Guide.

There are important differences between the Apache HTTP Server 2.4 and version 2.2, and if you are
upgrading from a previous release of Red Hat Enterprise Linux, you will need to update the httpd
service configuration accordingly. This section reviews some of the newly added features, outlines
important changes, and guides you through the update of older configuration files.

10.1.1. Notable Changes


The Apache HTTP Server version 2.4 has the following changes:

httpd Service Control


With the migration away from SysV init scripts, server administrators should switch to using the
apachectl and systemctl commands to control the service, in place of the service
command. The following examples are specific to the httpd service. The command:

service httpd graceful

is replaced by

apachectl graceful

The command:

service httpd configtest

is replaced by

apachectl configtest

The systemd unit file for httpd has different behavior from the init script as follows:

A graceful restart is used by default when the service is reloaded.


A graceful stop is used by default when the service is stopped.

Private /tmp
To enhance system security, the systemd unit file runs the httpd daemon using a private
/tmp directory, separate from the system /tmp directory.

Configuration Layout


Configuration files which load modules are now placed in the /etc/httpd/conf.modules.d
directory. Packages, such as php, which provide additional loadable modules for httpd will
place a file in this directory. Any configuration files in the conf.modules.d directory are processed
before the main body of httpd.conf. Configuration files in the /etc/httpd/conf.d
directory are now processed after the main body of httpd.conf.
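
To see which files will actually be read on a particular system, the two directories can simply be listed;
a minimal example:

~]$ ls /etc/httpd/conf.modules.d/ /etc/httpd/conf.d/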

Some additional configuration files are provided by the httpd package itself:

/etc/httpd/conf.d/autoindex.conf

This configures mod_autoindex directory indexing.

/etc/httpd/conf.d/userdir.conf

This configures access to user directories, for example,
http://example.com/~username/; such access is disabled by default for security
reasons.

/etc/httpd/conf.d/welcome.conf

As in previous releases, this configures the welcome page displayed for
http://localhost/ when no content is present.

Default Configuration
A minimal default httpd.conf is now provided by default. Many common configuration
settings, such as Timeout or KeepAlive, are no longer explicitly configured in the default
configuration; hard-coded settings will be used instead, by default. The hard-coded default
settings for all configuration directives are specified in the manual.

Configuration Changes
A number of backwards-incompatible changes to the httpd configuration syntax were made
which will require changes if migrating an existing configuration from httpd 2.2 to httpd 2.4.

Processing Model
In previous releases of Red Hat Enterprise Linux, different multi-processing models (MPM) were
made available as different httpd binaries: the forked model, “prefork”, as /usr/sbin/httpd,
and the thread-based model “worker” as /usr/sbin/httpd.worker.

In Red Hat Enterprise Linux 7, only a single httpd binary is used, and three MPMs are
available as loadable modules: worker, prefork (default), and event. The configuration file
/etc/httpd/conf.modules.d/00-mpm.conf can be changed to select which of the three
MPM modules is loaded.
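
As a sketch of what that file contains, switching from the default prefork MPM to the event MPM
amounts to changing which LoadModule line is left uncommented; the module file names shown here
follow the standard Apache 2.4 naming and may differ slightly on a given build:

# /etc/httpd/conf.modules.d/00-mpm.conf
#LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
#LoadModule mpm_worker_module modules/mod_mpm_worker.so
LoadModule mpm_event_module modules/mod_mpm_event.so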

Packaging Changes
The LDAP authentication and authorization modules are now provided in a separate sub-
package mod_ldap. The new module mod_session and associated helper modules are
provided in a new sub-package, mod_session. The new modules mod_proxy_html and
mod_xml2enc are provided in a new sub-package, mod_proxy_html.


Packaging Filesystem Layout


The /var/cache/mod_proxy directory is no longer provided; instead, the
/var/cache/httpd/ directory is packaged with a proxy and ssl subdirectory.

Packaged content provided with httpd has been moved from /var/www/ to
/usr/share/httpd/:

/usr/share/httpd/icons/

The /var/www/icons/ has moved to /usr/share/httpd/icons. This directory
contains a set of icons used with directory indices. Available at
http://localhost/icons/ in the default configuration, via
/etc/httpd/conf.d/autoindex.conf.

/usr/share/httpd/manual/

The /var/www/manual/ has moved to /usr/share/httpd/manual/. This directory,
contained in the httpd-manual package, contains the HTML version of the manual for
httpd. Available at http://localhost/manual/ if the package is installed, via
/etc/httpd/conf.d/manual.conf.

/usr/share/httpd/error/

The /var/www/error/ has moved to /usr/share/httpd/error/. Custom multi-
language HTTP error pages. Not configured by default, the example configuration file is
provided at /usr/share/doc/httpd-VERSION/httpd-multilang-errordoc.conf.

Authentication, Authorization and Access Control


The configuration directives used to control authentication, authorization and access control
have changed significantly. Existing configuration files using the Order, Deny and Allow
directives should be adapted to use the new Require syntax.
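
As an illustration of this change, a rule that granted access to all clients under httpd 2.2 and its rough
httpd 2.4 equivalent might look as follows:

# httpd 2.2 style
Order allow,deny
Allow from all

# httpd 2.4 style
Require all granted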

suexec
To improve system security, the suexec binary is no longer installed setuid root; instead, it
has file system capability bits set which allow a more restrictive set of permissions. In
conjunction with this change, the suexec binary no longer uses the
/var/log/httpd/suexec.log logfile. Instead, log messages are sent to syslog; by default
these will appear in the /var/log/secure log file.

Module Interface
Due to changes to the httpd module interface, httpd 2.4 is not compatible with third-party
binary modules built against httpd 2.2. Such modules will need to be adjusted as necessary for
the httpd 2.4 module interface, and then rebuilt. A detailed list of the API changes in version
2.4 is available here: http://httpd.apache.org/docs/2.4/developer/new_api_2_4.html.

The apxs binary used to build modules from source has moved from /usr/sbin/apxs to
/usr/bin/apxs.

Removed modules


List of httpd modules removed in Red Hat Enterprise Linux 7:


mod_auth_mysql, mod_auth_pgsql
httpd 2.4 provides SQL database authentication support internally in the
mod_authn_dbd module.

mod_perl
mod_perl is not officially supported with httpd 2.4 by upstream.

mod_authz_ldap
httpd 2.4 provides LDAP support internally using mod_authnz_ldap.

10.1.2. Updating the Configuration


To update the configuration files from the Apache HTTP Server version 2.0, take the following steps:

1. Make sure all module names are correct, since they may have changed. Adjust the LoadModule
directive for each module that has been renamed.
2. Recompile all third party modules before attempting to load them. This typically means
authentication and authorization modules.
3. If you use the mod_userdir module, make sure the UserDir directive indicating a directory
name (typically public_html) is provided.
4. If you use the Apache HTTP Secure Server, edit the /etc/httpd/conf.d/ssl.conf to enable
the Secure Sockets Layer (SSL) protocol.

Note that you can check the configuration for possible errors by using the following command:

~]# service httpd configtest


Syntax OK

For more information on upgrading the Apache HTTP Server configuration from version 2.0 to 2.2, refer
to http://httpd.apache.org/docs/2.2/upgrading.html.

10.1.3. Running the httpd Service


This section describes how to start, stop, restart, and check the current status of the Apache HTTP
Server. To be able to use the httpd service, make sure you have the httpd package installed. You can do so by
using the following command:

~]# yum install httpd

For more information on the concept of runlevels and how to manage system services in Red Hat
Enterprise Linux in general, refer to Chapter 7, Managing Services with systemd.

10.1.3.1. Starting the Service


To run the httpd service, type the following at a shell prompt:

~]# systemctl start httpd.service


If you want the service to start automatically at boot time, use the following command:

~]# systemctl enable httpd.service


ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-
user.target.wants/httpd.service'

Using the secure server

If running the Apache HTTP Server as a secure server, a password may be required after the
machine boots if using an encrypted private SSL key.

10.1.3.2. Stopping the Service


To stop the running httpd service, type the following at a shell prompt:

~]# systemctl stop httpd.service

To prevent the service from starting automatically at boot time, type:

~]# systemctl disable httpd.service


rm '/etc/systemd/system/multi-user.target.wants/httpd.service'

10.1.3.3. Restarting the Service


There are three different ways to restart a running httpd service:

1. To restart the service completely, type:

~]# systemctl restart httpd.service

This stops the running httpd service and immediately starts it again. Use this command after
installing or removing a dynamically loaded module such as PHP.
2. To only reload the configuration, type:

~]# systemctl reload httpd.service

This causes the running httpd service to reload its configuration file. Any requests being currently
processed will be interrupted, which may cause a client browser to display an error message or
render a partial page.
3. To reload the configuration without affecting active requests, type:

~]# service httpd graceful

This causes the running httpd service to reload its configuration file. Any requests being currently
processed will use the old configuration.

For more information on how to manage system services in Red Hat Enterprise Linux 7, see Chapter 7,
Managing Services with systemd.

10.1.3.4. Verifying the Service Status


To verify that the httpd service is running, type the following at a shell prompt:


~]# systemctl is-active httpd.service


active

10.1.4. Editing the Configuration Files


When the httpd service is started, by default, it reads the configuration from locations that are listed in
Table 10.1, “The httpd service configuration files”.

Table 10.1. The httpd service configuration files

Path Description
/etc/httpd/conf/httpd.conf The main configuration file.
/etc/httpd/conf.d/ An auxiliary directory for configuration files that are included in the
main configuration file.

Although the default configuration should be suitable for most situations, it is a good idea to become at
least familiar with some of the more important configuration options. Note that for any changes to take
effect, the web server has to be restarted first. Refer to Section 10.1.3.3, “Restarting the Service” for
more information on how to restart the httpd service.

To check the configuration for possible errors, type the following at a shell prompt:

~]# service httpd configtest


Syntax OK

To make the recovery from mistakes easier, it is recommended that you make a copy of the original file
before editing it.

10.1.5. Working with Modules


Being a modular application, the httpd service is distributed along with a number of Dynamic Shared
Objects (DSOs), which can be dynamically loaded or unloaded at runtime as necessary. By default,
these modules are located in /usr/lib/httpd/modules/ on 32-bit and in
/usr/lib64/httpd/modules/ on 64-bit systems.

10.1.5.1. Loading a Module


To load a particular DSO module, use the LoadModule directive as described in System Administrator's
Reference Guide. Note that modules provided by a separate package often have their own configuration
file in the /etc/httpd/conf.d/ directory.

Example 10.1. Loading the mod_ssl DSO

LoadModule ssl_module modules/mod_ssl.so

Once you are finished, restart the web server to reload the configuration. Refer to Section 10.1.3.3,
“Restarting the Service” for more information on how to restart the httpd service.

10.1.5.2. Writing a Module


If you intend to create a new DSO module, make sure you have the httpd-devel package installed. To do
so, type the following at a shell prompt:


~]# yum install httpd-devel

This package contains the include files, the header files, and the APache eXtenSion (apxs) utility
required to compile a module.

Once written, you can build the module with the following command:

~]# apxs -i -a -c module_name.c

If the build was successful, you should be able to load the module the same way as any other module
that is distributed with the Apache HTTP Server.

10.1.6. Setting Up Virtual Hosts


The Apache HTTP Server's built-in virtual hosting allows the server to provide different information based
on which IP address, hostname, or port is being requested.

To create a name-based virtual host, find the virtual host container provided in
/etc/httpd/conf/httpd.conf as an example, remove the hash sign (that is, #) from the beginning
of each line, and customize the options according to your requirements as shown in Example 10.2,
“Sample virtual host configuration”.

Example 10.2. Sample virtual host configuration

NameVirtualHost penguin.example.com:80

<VirtualHost penguin.example.com:80>
ServerAdmin webmaster@penguin.example.com
DocumentRoot /www/docs/penguin.example.com
ServerName penguin.example.com:80
ErrorLog logs/penguin.example.com-error_log
CustomLog logs/penguin.example.com-access_log common
</VirtualHost>

Note that ServerName must be a valid DNS name assigned to the machine. The <VirtualHost>
container is highly customizable, and accepts most of the directives available within the main server
configuration. Directives that are not supported within this container include User and Group, which
were replaced by SuexecUserGroup.

Changing the port number

If you configure a virtual host to listen on a non-default port, make sure you update the Listen
directive in the global settings section of the /etc/httpd/conf/httpd.conf file accordingly.
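
For instance, to serve the virtual host from Example 10.2 on port 8080 instead (an arbitrary non-default
port chosen for illustration), both the Listen directive and the container would reference that port; a
minimal sketch:

Listen 8080

<VirtualHost penguin.example.com:8080>
  ServerName penguin.example.com:8080
  DocumentRoot /www/docs/penguin.example.com
</VirtualHost>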

To activate a newly created virtual host, the web server has to be restarted first. Refer to
Section 10.1.3.3, “Restarting the Service” for more information on how to restart the httpd service.

10.1.7. Setting Up an SSL Server


Secure Sockets Layer (SSL) is a cryptographic protocol that allows a server and a client to communicate
securely. Along with its extended and improved version called Transport Layer Security (TLS), it ensures
both privacy and data integrity. The Apache HTTP Server in combination with mod_ssl, a module that
uses the OpenSSL toolkit to provide the SSL/TLS support, is commonly referred to as the SSL server.

Unlike a regular HTTP connection that can be read and possibly modified by anybody who is able to
intercept it, the use of mod_ssl prevents any inspection or modification of the transmitted content. This
section provides basic information on how to enable this module in the Apache HTTP Server
configuration, and guides you through the process of generating private keys and self-signed certificates.

10.1.7.1. An Overview of Certificates and Security


Secure communication is based on the use of keys. In conventional or symmetric cryptography, both
ends of the transaction have the same key they can use to decode each other's transmissions. On the
other hand, in public or asymmetric cryptography, two keys co-exist: a private key that is kept a secret,
and a public key that is usually shared with the public. While the data encoded with the public key can
only be decoded with the private key, data encoded with the private key can in turn only be decoded with
the public key.

To provide secure communications using SSL, an SSL server must use a digital certificate signed by a
Certificate Authority (CA). The certificate lists various attributes of the server (that is, the server
hostname, the name of the company, its location, etc.), and the signature produced using the CA's
private key. This signature ensures that a particular certificate authority has issued the certificate, and
that the certificate has not been modified in any way.

When a web browser establishes a new SSL connection, it checks the certificate provided by the web
server. If the certificate does not have a signature from a trusted CA, or if the hostname listed in the
certificate does not match the hostname used to establish the connection, it refuses to communicate with
the server and usually presents a user with an appropriate error message.

By default, most web browsers are configured to trust a set of widely used certificate authorities.
Because of this, an appropriate CA should be chosen when setting up a secure server, so that target
users can trust the connection, otherwise they will be presented with an error message, and will have to
accept the certificate manually. Since encouraging users to override certificate errors can allow an
attacker to intercept the connection, you should use a trusted CA whenever possible. For more
information on this, see Table 10.2, “CA lists for most common web browsers”.

Table 10.2. CA lists for most common web browsers

Web Browser Link


Mozilla Firefox Mozilla root CA list.
Opera Root certificates used by Opera.
Internet Explorer Windows root certificate program members.

When setting up an SSL server, you need to generate a certificate request and a private key, and then
send the certificate request, proof of the company's identity, and payment to a certificate authority. Once
the CA verifies the certificate request and your identity, it will send you a signed certificate you can use
with your server. Alternatively, you can create a self-signed certificate that does not contain a CA
signature, and thus should be used for testing purposes only.
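
Besides the genkey utility described in Section 10.1.7.4, “Generating a New Key and Certificate”, a
self-signed certificate can also be produced directly with the openssl command; a minimal sketch,
where hostname is a placeholder for the actual host name and the one-year validity period is arbitrary:

~]# openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -keyout /etc/pki/tls/private/hostname.key \
    -out /etc/pki/tls/certs/hostname.crt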

10.1.7.2. Enabling the mod_ssl Module


If you intend to set up an SSL server, make sure you have the mod_ssl (the mod_ssl module) and
openssl (the OpenSSL toolkit) packages installed. To do so, type the following at a shell prompt:

~]# yum install mod_ssl openssl


This will create the mod_ssl configuration file at /etc/httpd/conf.d/ssl.conf, which is included in
the main Apache HTTP Server configuration file by default. For the module to be loaded, restart the
httpd service as described in Section 10.1.3.3, “Restarting the Service”.

10.1.7.3. Using an Existing Key and Certificate


If you have a previously created key and certificate, you can configure the SSL server to use these files
instead of generating new ones. There are only two situations where this is not possible:

1. You are changing the IP address or domain name.


Certificates are issued for a particular IP address and domain name pair. If one of these values
changes, the certificate becomes invalid.
2. You have a certificate from VeriSign, and you are changing the server software.
VeriSign, a widely used certificate authority, issues certificates for a particular software product, IP
address, and domain name. Changing the software product renders the certificate invalid.

In either of the above cases, you will need to obtain a new certificate. For more information on this topic,
refer to Section 10.1.7.4, “Generating a New Key and Certificate”.

If you wish to use an existing key and certificate, move the relevant files to the
/etc/pki/tls/private/ and /etc/pki/tls/certs/ directories respectively. You can do so by
typing the following commands:

~]# mv key_file.key /etc/pki/tls/private/hostname.key


~]# mv certificate.crt /etc/pki/tls/certs/hostname.crt

Then add the following lines to the /etc/httpd/conf.d/ssl.conf configuration file:

SSLCertificateFile /etc/pki/tls/certs/hostname.crt
SSLCertificateKeyFile /etc/pki/tls/private/hostname.key
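
Before restarting the service, it can be worth confirming that the certificate and key actually belong together. One common check (an illustrative suggestion, not a step from this guide) compares their moduli with the OpenSSL command-line tool, using the same placeholder file names as above:

~]# openssl x509 -noout -modulus -in /etc/pki/tls/certs/hostname.crt | openssl md5
~]# openssl rsa -noout -modulus -in /etc/pki/tls/private/hostname.key | openssl md5

If the two digests match, the certificate corresponds to the private key.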

To load the updated configuration, restart the httpd service as described in Section 10.1.3.3,
“Restarting the Service”.

Example 10.3. Using a key and certificate from the Red Hat Secure Web Server

~]# mv /etc/httpd/conf/httpsd.key /etc/pki/tls/private/penguin.example.com.key
~]# mv /etc/httpd/conf/httpsd.crt /etc/pki/tls/certs/penguin.example.com.crt

10.1.7.4. Generating a New Key and Certificate


In order to generate a new key and certificate pair, you must have the crypto-utils package installed on
your system. You can install it by typing the following at a shell prompt:

~]# yum install crypto-utils

This package provides a set of tools to generate and manage SSL certificates and private keys, and
includes genkey, the Red Hat Keypair Generation utility that will guide you through the key generation
process.


Replacing an existing certificate

If the server already has a valid certificate and you are replacing it with a new one, specify a
different serial number. This ensures that client browsers are notified of this change, update to
this new certificate as expected, and do not fail to access the page. To create a new certificate
with a custom serial number, use the following command instead of genkey:

~]# openssl req -x509 -new -set_serial number -key hostname.key -out hostname.crt

Remove a previously created key

If there already is a key file for a particular hostname in your system, genkey will refuse to start.
In this case, remove the existing file using the following command:

~]# rm /etc/pki/tls/private/hostname.key

To run the utility, use the genkey command followed by the appropriate hostname (for example,
penguin.example.com):

~]# genkey hostname

To complete the key and certificate creation, take the following steps:

1. Review the target locations in which the key and certificate will be stored.

Figure 10.1. Running the genkey utility

Use the Tab key to select the Next button, and press Enter to proceed to the next screen.
2. Using the up and down arrow keys, select a suitable key size. Note that while a larger key
increases security, it also increases the response time of your server. The NIST recommends
using 2048 bits. See NIST Special Publication 800-131A.

Figure 10.2. Selecting the key size

Once finished, use the Tab key to select the Next button, and press Enter to initiate the random
bits generation process. Depending on the selected key size, this may take some time.
3. Decide whether you wish to send a certificate request to a certificate authority.

Figure 10.3. Generating a certificate request

Use the Tab key to select Yes to compose a certificate request, or No to generate a self-signed
certificate. Then press Enter to confirm your choice.
4. Using the Spacebar key, enable ([*]) or disable ([ ]) the encryption of the private key.


Figure 10.4. Encrypting the private key

Use the Tab key to select the Next button, and press Enter to proceed to the next screen.
5. If you have enabled the private key encryption, enter an adequate passphrase. Note that for
security reasons, it is not displayed as you type, and it must be at least five characters long.

Figure 10.5. Entering a passphrase

Use the Tab key to select the Next button, and press Enter to proceed to the next screen.

Do not forget the passphrase

Entering the correct passphrase is required in order for the server to start. If you lose it, you
will need to generate a new key and certificate.

6. Customize the certificate details.


Figure 10.6. Specifying certificate information

Use the Tab key to select the Next button, and press Enter to finish the key generation.
7. If you have previously enabled the certificate request generation, you will be prompted to send it to
a certificate authority.

Figure 10.7. Instructions on how to send a certificate request

Press Enter to return to a shell prompt.

Once generated, add the key and certificate locations to the /etc/httpd/conf.d/ssl.conf
configuration file:

SSLCertificateFile /etc/pki/tls/certs/hostname.crt
SSLCertificateKeyFile /etc/pki/tls/private/hostname.key

Finally, restart the httpd service as described in Section 10.1.3.3, “Restarting the Service”, so that the
updated configuration is loaded.


10.1.8. Additional Resources


To learn more about the Apache HTTP Server, refer to the following resources.

10.1.8.1. Installed Documentation


http://localhost/manual/
The official documentation for the Apache HTTP Server with the full description of its directives
and available modules. Note that in order to access this documentation, you must have the
httpd-manual package installed, and the web server must be running.

man httpd
The manual page for the httpd service containing the complete list of its command line
options.

man genkey
The manual page for genkey containing the full documentation on its usage.

10.1.8.2. Useful Websites


http://httpd.apache.org/
The official website for the Apache HTTP Server with documentation on all the directives and
default modules.

http://www.modssl.org/
The official website for the mod_ssl module.

http://www.openssl.org/
The OpenSSL home page containing further documentation, frequently asked questions, links
to the mailing lists, and other useful resources.


Chapter 11. Mail Servers


Email was born in the 1960s. The mailbox was a file in a user's home directory that was readable only by
that user. Primitive mail applications appended new text messages to the bottom of the file, making the
user wade through the constantly growing file to find any particular message. This system was only
capable of sending messages to users on the same system.

The first network transfer of an electronic mail message file took place in 1971 when a computer
engineer named Ray Tomlinson sent a test message between two machines via ARPANET—the
precursor to the Internet. Communication via email soon became very popular, comprising 75 percent of
ARPANET's traffic in less than two years.

Today, email systems based on standardized network protocols have evolved into some of the most
widely used services on the Internet. Red Hat Enterprise Linux offers many advanced applications to
serve and access email.

This chapter reviews modern email protocols in use today, and some of the programs designed to send
and receive email.

11.1. Email Protocols


Today, email is delivered using a client/server architecture. An email message is created using a mail
client program. This program then sends the message to a server. The server then forwards the
message to the recipient's email server, where the message is then supplied to the recipient's email
client.

To enable this process, a variety of standard network protocols allow different machines, often running
different operating systems and using different email programs, to send and receive email.

The protocols discussed below are the most commonly used in the transfer of email.

11.1.1. Mail Transport Protocols


Mail delivery from a client application to the server, and from an originating server to the destination
server, is handled by the Simple Mail Transfer Protocol (SMTP).

11.1.1.1. SMTP
The primary purpose of SMTP is to transfer email between mail servers. However, it is critical for email
clients as well. To send email, the client sends the message to an outgoing mail server, which in turn
contacts the destination mail server for delivery. For this reason, it is necessary to specify an SMTP
server when configuring an email client.
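
For illustration only (this exchange is not part of the guide, and the addresses and hostnames are placeholders; exact reply texts vary between servers), a minimal SMTP conversation with a server contacted by hand might look like the following, where lines beginning with a numeric code are the server's replies:

~]$ telnet mail.example.com 25
220 mail.example.com ESMTP
HELO client.example.com
250 mail.example.com
MAIL FROM:<sender@example.com>
250 2.1.0 Ok
RCPT TO:<recipient@example.com>
250 2.1.5 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: Test message

This is a test.
.
250 2.0.0 Ok: queued
QUIT
221 2.0.0 Bye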

Under Red Hat Enterprise Linux, a user can configure an SMTP server on the local machine to handle
mail delivery. However, it is also possible to configure remote SMTP servers for outgoing mail.

One important point to make about the SMTP protocol is that it does not require authentication. This
allows anyone on the Internet to send email to anyone else or even to large groups of people. It is this
characteristic of SMTP that makes junk email or spam possible. Imposing relay restrictions prevents random
users on the Internet from sending email through your SMTP server to other servers on the Internet.
Servers that do not impose such restrictions are called open relay servers.

Red Hat Enterprise Linux provides the Postfix and Sendmail SMTP programs.

11.1.2. Mail Access Protocols


There are two primary protocols used by email client applications to retrieve email from mail servers: the
Post Office Protocol (POP) and the Internet Message Access Protocol (IMAP).

11.1.2.1. POP
The default POP server under Red Hat Enterprise Linux is Dovecot and is provided by the dovecot
package.

Installing the dovecot package

In order to use Dovecot, first ensure the dovecot package is installed on your system by
running, as root:

~]# yum install dovecot

For more information on installing packages with Yum, refer to Section 5.2.4, “Installing
Packages”.

When using a POP server, email messages are downloaded by email client applications. By default, most
POP email clients are automatically configured to delete the message on the email server after it has
been successfully transferred, however this setting usually can be changed.

POP is fully compatible with important Internet messaging standards, such as Multipurpose Internet Mail
Extensions (MIME), which allow for email attachments.

POP works best for users who have one system on which to read email. It also works well for users who
do not have a persistent connection to the Internet or the network containing the mail server.
Unfortunately for those with slow network connections, POP requires client programs upon authentication
to download the entire content of each message. This can take a long time if any messages have large
attachments.

The most current version of the standard POP protocol is POP3.

There are, however, a variety of lesser-used POP protocol variants:

APOP — POP3 with MD5 authentication. An encoded hash of the user's
password is sent from the email client to the server rather than sending an unencrypted password.
KPOP — POP3 with Kerberos authentication.
RPOP — POP3 with RPOP authentication. This uses a per-user ID, similar to a password, to
authenticate POP requests. However, this ID is not encrypted, so RPOP is no more secure than
standard POP.

For added security, it is possible to use Secure Sockets Layer (SSL) encryption for client authentication
and data transfer sessions. This can be enabled by using the pop3s service, or by using the stunnel
application. For more information on securing email communication, refer to Section 11.5.1, “Securing
Communication”.

11.1.2.2. IMAP
The default IMAP server under Red Hat Enterprise Linux is Dovecot and is provided by the dovecot
package. Refer to Section 11.1.2.1, “POP” for information on how to install Dovecot.

When using an IMAP mail server, email messages remain on the server where users can read or delete
them. IMAP also allows client applications to create, rename, or delete mail directories on the server to
organize and store email.

IMAP is particularly useful for users who access their email using multiple machines. The protocol is also
convenient for users connecting to the mail server via a slow connection, because only the email header
information is downloaded for messages until opened, saving bandwidth. The user also has the ability to
delete messages without viewing or downloading them.

For convenience, IMAP client applications are capable of caching copies of messages locally, so the user
can browse previously read messages when not directly connected to the IMAP server.

IMAP, like POP, is fully compatible with important Internet messaging standards, such as MIME, which
allow for email attachments.

For added security, it is possible to use SSL encryption for client authentication and data transfer
sessions. This can be enabled by using the imaps service, or by using the stunnel program. For more
information on securing email communication, refer to Section 11.5.1, “Securing Communication”.

Other free, as well as commercial, IMAP clients and servers are available, many of which extend the
IMAP protocol and provide additional functionality.

11.1.2.3. Dovecot
The imap-login and pop3-login processes which implement the IMAP and POP3 protocols are
spawned by the master dovecot daemon included in the dovecot package. The use of IMAP and POP is
configured through the /etc/dovecot/dovecot.conf configuration file; by default dovecot runs
IMAP and POP3 together with their secure versions using SSL. To configure dovecot to use POP,
complete the following steps:

1. Edit the /etc/dovecot/dovecot.conf configuration file to make sure the protocols variable
is uncommented (remove the hash sign (#) at the beginning of the line) and contains the pop3
argument. For example:

protocols = imap imaps pop3 pop3s

When the protocols variable is left commented out, dovecot will use the default values
specified for this variable.
2. Make that change operational for the current session by running the following command:

~]# systemctl restart dovecot

3. Make that change operational after the next reboot by running the command:

~]# systemctl enable dovecot


ln -s '/usr/lib/systemd/system/dovecot' '/etc/systemd/system/multi-user.target.wants/dovecot'

The dovecot service starts the POP3 server

Please note that dovecot only reports that it started the IMAP server, but it also starts the
POP3 server.

Unlike SMTP, both IMAP and POP3 require connecting clients to authenticate using a username and
password. By default, passwords for both protocols are passed over the network unencrypted.


To configure SSL on dovecot:

Edit the /etc/pki/dovecot/dovecot-openssl.cnf configuration file as you prefer. However, in
a typical installation, this file does not require modification.
Rename, move or delete the files /etc/pki/dovecot/certs/dovecot.pem and
/etc/pki/dovecot/private/dovecot.pem.
Execute the /usr/libexec/dovecot/mkcert.sh script which creates the dovecot self-signed
certificates. These certificates are copied in the /etc/pki/dovecot/certs and
/etc/pki/dovecot/private directories. To implement the changes, restart dovecot:

~]# systemctl restart dovecot
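
As a quick sanity check (an illustrative suggestion rather than a step from this guide), the SSL-enabled services can then be exercised with the OpenSSL client; 993 is the standard imaps port and 995 the standard pop3s port:

~]# openssl s_client -connect localhost:993
~]# openssl s_client -connect localhost:995

Each command should print the newly generated self-signed certificate and leave an open connection, which can be closed with Ctrl+C.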

More details on dovecot can be found online at http://www.dovecot.org.

11.2. Email Program Classifications


In general, all email applications fall into at least one of three classifications. Each classification plays a
specific role in the process of moving and managing email messages. While most users are only aware
of the specific email program they use to receive and send messages, each one is important for ensuring
that email arrives at the correct destination.

11.2.1. Mail Transport Agent


A Mail Transport Agent (MTA) transports email messages between hosts using SMTP. A message may
involve several MTAs as it moves to its intended destination.

While the delivery of messages between machines may seem rather straightforward, the entire process
of deciding if a particular MTA can or should accept a message for delivery is quite complicated. In
addition, due to problems from spam, use of a particular MTA is usually restricted by the MTA's
configuration or the access configuration for the network on which the MTA resides.

Many modern email client programs can act as an MTA when sending email. However, this action should
not be confused with the role of a true MTA. The sole reason email client programs are capable of
sending email like an MTA is because the host running the application does not have its own MTA. This
is particularly true for email client programs on non-UNIX-based operating systems. However, these
client programs only send outbound messages to an MTA they are authorized to use and do not directly
deliver the message to the intended recipient's email server.

Since Red Hat Enterprise Linux offers two MTAs—Postfix and Sendmail—email client programs are often
not required to act as an MTA. Red Hat Enterprise Linux also includes a special purpose MTA called
Fetchmail.

For more information on Postfix, Sendmail, and Fetchmail, refer to Section 11.3, “Mail Transport Agents”.

11.2.2. Mail Delivery Agent


A Mail Delivery Agent (MDA) is invoked by the MTA to file incoming email in the proper user's mailbox. In
many cases, the MDA is actually a Local Delivery Agent (LDA), such as mail or Procmail.

Any program that actually handles a message for delivery to the point where it can be read by an email
client application can be considered an MDA. For this reason, some MTAs (such as Sendmail and
Postfix) can fill the role of an MDA when they append new email messages to a local user's mail spool
file. In general, MDAs do not transport messages between systems nor do they provide a user interface;
MDAs distribute and sort messages on the local machine for an email client application to access.


11.2.3. Mail User Agent


A Mail User Agent (MUA) is synonymous with an email client application. An MUA is a program that, at
the very least, allows a user to read and compose email messages. Many MUAs are capable of
retrieving messages via the POP or IMAP protocols, setting up mailboxes to store messages, and
sending outbound messages to an MTA.

MUAs may be graphical, such as Evolution, or have simple text-based interfaces, such as pine.

11.3. Mail Transport Agents


Red Hat Enterprise Linux offers two primary MTAs: Postfix and Sendmail. Postfix is configured as the
default MTA, although it is easy to switch. To switch the default MTA to Sendmail, you can either
uninstall Postfix or use the following command:

~]# alternatives --config mta

You can also use the following command to enable the desired service:

~]# systemctl enable <service>

Similarly, to disable the service, type the following at a shell prompt:

~]# systemctl disable <service>
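
For example, assuming you are switching from Postfix to Sendmail (postfix and sendmail are the standard service names shipped with the respective packages), the complete switch might look like this:

~]# alternatives --config mta
~]# systemctl disable postfix
~]# systemctl enable sendmail

The same commands with the service names swapped apply when moving back to Postfix.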

For more information on how to manage system services in Red Hat Enterprise Linux 7, see Chapter 7,
Managing Services with systemd.

11.3.1. Postfix
Originally developed at IBM by security expert and programmer Wietse Venema, Postfix is a Sendmail-
compatible MTA that is designed to be secure, fast, and easy to configure.

To improve security, Postfix uses a modular design, where small processes with limited privileges are
launched by a master daemon. The smaller, less privileged processes perform very specific tasks related
to the various stages of mail delivery and run in a changed root (chroot) environment to limit the effects of
attacks.

Configuring Postfix to accept network connections from hosts other than the local computer takes only a
few minor changes in its configuration file. Yet for those with more complex needs, Postfix provides a
variety of configuration options, as well as third party add-ons that make it a very versatile and full-
featured MTA.

The configuration files for Postfix are human readable and support upward of 250 directives. Unlike
Sendmail, no macro processing is required for changes to take effect and the majority of the most
commonly used options are described in the heavily commented files.

11.3.1.1. The Default Postfix Installation


The Postfix executable is postfix. This daemon launches all related processes needed to handle mail
delivery.

Postfix stores its configuration files in the /etc/postfix/ directory. The following is a list of the more
commonly used files:


access — Used for access control, this file specifies which hosts are allowed to connect to Postfix.
main.cf — The global Postfix configuration file. The majority of configuration options are specified
in this file.
master.cf — Specifies how Postfix interacts with various processes to accomplish mail delivery.
transport — Maps email addresses to relay hosts.

The aliases file can be found in the /etc/ directory. This file is shared between Postfix and Sendmail.
It is a configurable list required by the mail protocol that describes user ID aliases.
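
As a brief illustration (the alias shown is hypothetical), an entry in /etc/aliases redirects mail addressed to one local name to another recipient, and the newaliases command rebuilds the alias database afterwards:

webmaster:      root

~]# newaliases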

Configuring Postfix as a server for other clients

The default /etc/postfix/main.cf file does not allow Postfix to accept network connections
from a host other than the local computer. For instructions on configuring Postfix as a server for
other clients, refer to Section 11.3.1.2, “Basic Postfix Configuration”.

Restart the postfix service after changing any options in the configuration files under the
/etc/postfix directory in order for those changes to take effect:

~]# systemctl restart postfix

11.3.1.2. Basic Postfix Configuration


By default, Postfix does not accept network connections from any host other than the local host. Perform
the following steps as root to enable mail delivery for other hosts on the network:

Edit the /etc/postfix/main.cf file with a text editor, such as vi.
Uncomment the mydomain line by removing the hash sign (#), and replace domain.tld with the
domain the mail server is servicing, such as example.com.
Uncomment the myorigin = $mydomain line.
Uncomment the myhostname line, and replace host.domain.tld with the hostname for the
machine.
Uncomment the mydestination = $myhostname, localhost.$mydomain line.
Uncomment the mynetworks line, and replace 168.100.189.0/28 with a valid network setting for
hosts that can connect to the server.
Uncomment the inet_interfaces = all line.
Comment the inet_interfaces = localhost line.
Restart the postfix service.

Once these steps are complete, the host accepts outside emails for delivery.
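
For reference, the following is a minimal sketch of what the edited lines in /etc/postfix/main.cf might look like after these steps; example.com, mail.example.com, and the network range are placeholder values to be replaced with your own:

mydomain = example.com
myorigin = $mydomain
myhostname = mail.example.com
mydestination = $myhostname, localhost.$mydomain
mynetworks = 192.168.1.0/24, 127.0.0.0/8
inet_interfaces = all
#inet_interfaces = localhost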

Postfix has a large assortment of configuration options. One of the best ways to learn how to configure
Postfix is to read the comments within the /etc/postfix/main.cf configuration file. Additional
resources including information about Postfix configuration, SpamAssassin integration, or detailed
descriptions of the /etc/postfix/main.cf parameters are available online at http://www.postfix.org/.

11.3.1.3. Using Postfix with LDAP


Postfix can use an LDAP directory as a source for various lookup tables (e.g.: aliases, virtual,
canonical, etc.). This allows LDAP to store hierarchical user information and Postfix to only be given
the result of LDAP queries when needed. By not storing this information locally, administrators can easily
maintain it.

11.3.1.3.1. The /etc/aliases lookup example


The following is a basic example for using LDAP to look up the /etc/aliases file. Make sure your
/etc/postfix/main.cf contains the following:

alias_maps = hash:/etc/aliases, ldap:/etc/postfix/ldap-aliases.cf

Create a /etc/postfix/ldap-aliases.cf file if you do not have one created already and make
sure it contains the following:

server_host = ldap.example.com
search_base = dc=example, dc=com

where ldap.example.com, example, and com are parameters that need to be replaced with
specification of an existing available LDAP server.

The /etc/postfix/ldap-aliases.cf file

The /etc/postfix/ldap-aliases.cf file can specify various parameters, including
parameters that enable LDAP SSL and STARTTLS. For more information, refer to the
ldap_table(5) man page.

For more information on LDAP, refer to Section 12.1, “OpenLDAP”.

11.3.2. Sendmail
Sendmail's core purpose, like other MTAs, is to safely transfer email among hosts, usually using the
SMTP protocol. However, Sendmail is highly configurable, allowing control over almost every aspect of
how email is handled, including the protocol used. Many system administrators elect to use Sendmail as
their MTA due to its power and scalability.

11.3.2.1. Purpose and Limitations


It is important to be aware of what Sendmail is and what it can do, as opposed to what it is not. In these
days of monolithic applications that fulfill multiple roles, Sendmail may seem like the only application
needed to run an email server within an organization. Technically, this is true, as Sendmail can spool
mail to each users' directory and deliver outbound mail for users. However, most users actually require
much more than simple email delivery. Users usually want to interact with their email using an MUA, that
uses POP or IMAP, to download their messages to their local machine. Or, they may prefer a Web
interface to gain access to their mailbox. These other applications can work in conjunction with Sendmail,
but they actually exist for different reasons and can operate separately from one another.

It is beyond the scope of this section to go into all that Sendmail should or could be configured to do.
With literally hundreds of different options and rule sets, entire volumes have been dedicated to helping
explain everything that can be done and how to fix things that go wrong. Refer to the Section 11.6,
“Additional Resources” for a list of Sendmail resources.

This section reviews the files installed with Sendmail by default and reviews basic configuration changes,
including how to stop unwanted email (spam) and how to extend Sendmail with the Lightweight Directory
Access Protocol (LDAP).


11.3.2.2. The Default Sendmail Installation


In order to use Sendmail, first ensure the sendmail package is installed on your system by running, as
root:

~]# yum install sendmail

In order to configure Sendmail, ensure the sendmail-cf package is installed on your system by running,
as root:

~]# yum install sendmail-cf

For more information on installing packages with Yum, refer to Section 5.2.4, “Installing Packages”.

Before using Sendmail, the default MTA has to be switched from Postfix. For more information on how to
switch the default MTA, refer to Section 11.3, “Mail Transport Agents”.

The Sendmail executable is sendmail.

Sendmail's lengthy and detailed configuration file is /etc/mail/sendmail.cf. Avoid editing the
sendmail.cf file directly. To make configuration changes to Sendmail, edit the
/etc/mail/sendmail.mc file, back up the original /etc/mail/sendmail.cf, and use the
following alternatives to generate a new configuration file:

Use the included makefile in /etc/mail/ to create a new /etc/mail/sendmail.cf
configuration file:

~]# make all -C /etc/mail/

All other generated files in /etc/mail (db files) will be regenerated if needed. The old makemap
commands are still usable. The make command is automatically used whenever you start or restart
the sendmail service.

More information on configuring Sendmail can be found in Section 11.3.2.3, “Common Sendmail
Configuration Changes”.

Various Sendmail configuration files are installed in the /etc/mail/ directory including:

access — Specifies which systems can use Sendmail for outbound email.
domaintable — Specifies domain name mapping.
local-host-names — Specifies aliases for the host.
mailertable — Specifies instructions that override routing for particular domains.
virtusertable — Specifies a domain-specific form of aliasing, allowing multiple virtual domains to
be hosted on one machine.

Several of the configuration files in /etc/mail/, such as access, domaintable, mailertable and
virtusertable, must actually store their information in database files before Sendmail can use any
configuration changes. To include any changes made to these configurations in their database files, run
the following command, as root:

~]# makemap hash /etc/mail/<name> < /etc/mail/<name>

where <name> represents the name of the configuration file to be updated. You may also restart the
sendmail service for the changes to take effect by running:


~]# systemctl restart sendmail

For example, to have all emails addressed to the example.com domain delivered to
bob@other-example.com, add the following line to the virtusertable file:

@example.com bob@other-example.com

To finalize the change, the virtusertable.db file must be updated:

~]# makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable

Sendmail will create an updated virtusertable.db file containing the new configuration.

11.3.2.3. Common Sendmail Configuration Changes


When altering the Sendmail configuration file, it is best not to edit an existing file, but to generate an
entirely new /etc/mail/sendmail.cf file.

Backup the sendmail.cf file before changing its content

Before changing the sendmail.cf file, it is a good idea to create a backup copy.

To add the desired functionality to Sendmail, edit the /etc/mail/sendmail.mc file as root. Once you
are finished, restart the sendmail service and, if the m4 package is installed, the m4 macro processor
will automatically generate a new sendmail.cf configuration file:

~]# systemctl restart sendmail

Configuring Sendmail as a server for other clients

The default sendmail.cf file does not allow Sendmail to accept network connections from any
host other than the local computer. To configure Sendmail as a server for other clients, edit the
/etc/mail/sendmail.mc file, and either change the address specified in the Addr= option of
the DAEMON_OPTIONS directive from 127.0.0.1 to the IP address of an active network device
or comment out the DAEMON_OPTIONS directive altogether by placing dnl at the beginning of
the line. When finished, regenerate /etc/mail/sendmail.cf by restarting the service:

~]# systemctl restart sendmail

The default configuration which ships with Red Hat Enterprise Linux works for most SMTP-only sites.
However, it does not work for UUCP (UNIX-to-UNIX Copy Protocol) sites. If using UUCP mail transfers,
the /etc/mail/sendmail.mc file must be reconfigured and a new /etc/mail/sendmail.cf file
must be generated.

Consult the /usr/share/sendmail-cf/README file before editing any files in the directories under
the /usr/share/sendmail-cf directory, as they can affect the future configuration of the
/etc/mail/sendmail.cf file.


11.3.2.4. Masquerading
One common Sendmail configuration is to have a single machine act as a mail gateway for all machines
on the network. For instance, a company may want to have a machine called mail.example.com that
handles all of their email and assigns a consistent return address to all outgoing mail.

In this situation, the Sendmail server must masquerade the machine names on the company network so
that their return address is user@example.com instead of user@host.example.com.

To do this, add the following lines to /etc/mail/sendmail.mc:

FEATURE(always_add_domain)dnl
FEATURE(`masquerade_entire_domain')dnl
FEATURE(`masquerade_envelope')dnl
FEATURE(`allmasquerade')dnl
MASQUERADE_AS(`bigcorp.com.')dnl
MASQUERADE_DOMAIN(`bigcorp.com.')dnl
MASQUERADE_AS(bigcorp.com)dnl

After generating a new sendmail.cf using the m4 macro processor, this configuration makes all mail
from inside the network appear as if it were sent from bigcorp.com.

11.3.2.5. Stopping Spam


Email spam can be defined as unnecessary and unwanted email received by a user who never
requested the communication. It is a disruptive, costly, and widespread abuse of Internet communication
standards.

Sendmail makes it relatively easy to block new spamming techniques being employed to send junk email.
It even blocks many of the more usual spamming methods by default. Main anti-spam features available
in sendmail are header checks, relaying denial (default from version 8.9), access database and sender
information checks.

For example, forwarding of SMTP messages, also called relaying, has been disabled by default since
Sendmail version 8.9. Before this change occurred, Sendmail directed the mail host (x.edu) to accept
messages from one party (y.com) and sent them to a different party (z.net). Now, however, Sendmail
must be configured to permit any domain to relay mail through the server. To configure relay domains,
edit the /etc/mail/relay-domains file and restart Sendmail:

~]# systemctl restart sendmail

However, many times users are bombarded with spam from other servers throughout the Internet. In
these instances, Sendmail's access control features available through the /etc/mail/access file can
be used to prevent connections from unwanted hosts. The following example illustrates how this file can
be used to both block and specifically allow access to the Sendmail server:

badspammer.com ERROR:550 "Go away and do not spam us anymore"
tux.badspammer.com OK
10.0 RELAY

This example shows that any email sent from badspammer.com is blocked with a 550 RFC-821
compliant error code, with a message sent back to the spammer. Email sent from the
tux.badspammer.com sub-domain is accepted. The last line shows that any email sent from the
10.0.*.* network can be relayed through the mail server.

Because the /etc/mail/access.db file is a database, use the makemap command to update any
changes. Do this using the following command as root:


~]# makemap hash /etc/mail/access < /etc/mail/access

Message header analysis allows you to reject mail based on header contents. SMTP servers store
information about an email's journey in the message header. As the message travels from one MTA to
another, each puts in a Received header above all the other Received headers. It is important to
note that this information may be altered by spammers.

The above examples only represent a small part of what Sendmail can do in terms of allowing or
blocking access. Refer to the /usr/share/sendmail-cf/README for more information and
examples.

Since Sendmail calls the Procmail MDA when delivering mail, it is also possible to use a spam filtering
program, such as SpamAssassin, to identify and file spam for users. Refer to Section 11.4.2.6, “Spam
Filters” for more information about using SpamAssassin.

11.3.2.6. Using Sendmail with LDAP


Using LDAP is a very quick and powerful way to find specific information about a particular user from a
much larger group. For example, an LDAP server can be used to look up a particular email address from
a common corporate directory by the user's last name. In this kind of implementation, LDAP is largely
separate from Sendmail, with LDAP storing the hierarchical user information and Sendmail only being
given the result of LDAP queries in pre-addressed email messages.

However, Sendmail supports a much greater integration with LDAP, where it uses LDAP to replace
separately maintained files, such as /etc/aliases and /etc/mail/virtusertables, on different
mail servers that work together to support a medium- to enterprise-level organization. In short, LDAP
abstracts the mail routing level from Sendmail and its separate configuration files to a powerful LDAP
cluster that can be leveraged by many different applications.

The current version of Sendmail contains support for LDAP. To extend the Sendmail server using LDAP,
first get an LDAP server, such as OpenLDAP, running and properly configured. Then edit the
/etc/mail/sendmail.mc to include the following:

LDAPROUTE_DOMAIN(`yourdomain.com')dnl
FEATURE(`ldap_routing')dnl

Advanced configuration

This is only for a very basic configuration of Sendmail with LDAP. The configuration can differ
greatly from this depending on the implementation of LDAP, especially when configuring several
Sendmail machines to use a common LDAP server.
Consult /usr/share/sendmail-cf/README for detailed LDAP routing configuration
instructions and examples.

Next, recreate the /etc/mail/sendmail.cf file by running the m4 macro processor and again
restarting Sendmail. Refer to Section 11.3.2.3, “Common Sendmail Configuration Changes” for
instructions.

For more information on LDAP, refer to Section 12.1, “OpenLDAP”.

11.3.3. Fetchmail


Fetchmail is an MTA which retrieves email from remote servers and delivers it to the local MTA. Many
users appreciate the ability to separate the process of downloading their messages located on a remote
server from the process of reading and organizing their email in an MUA. Designed with the needs of
dial-up users in mind, Fetchmail connects and quickly downloads all of the email messages to the mail
spool file using any number of protocols, including POP3 and IMAP. It can even forward email messages
to an SMTP server, if necessary.

Installing the fetchmail package

In order to use Fetchmail, first ensure the fetchmail package is installed on your system by
running, as root:

~]# yum install fetchmail

For more information on installing packages with Yum, refer to Section 5.2.4, “Installing
Packages”.

Fetchmail is configured for each user through the use of a .fetchmailrc file in the user's home
directory. If it does not already exist, create the .fetchmailrc file in your home directory.

Using preferences in the .fetchmailrc file, Fetchmail checks for email on a remote server and
downloads it. It then delivers it to port 25 on the local machine, using the local MTA to place the email in
the correct user's spool file. If Procmail is available, it is launched to filter the email and place it in a
mailbox so that it can be read by an MUA.

11.3.3.1. Fetchmail Configuration Options


Although it is possible to pass all necessary options on the command line to check for email on a remote
server when executing Fetchmail, using a .fetchmailrc file is much easier. Place any desired
configuration options in the .fetchmailrc file for those options to be used each time the fetchmail
command is issued. It is possible to override these at the time Fetchmail is run by specifying that option
on the command line.

A user's .fetchmailrc file contains three classes of configuration options:

global options — Gives Fetchmail instructions that control the operation of the program or provide
settings for every connection that checks for email.
server options — Specifies necessary information about the server being polled, such as the
hostname, as well as preferences for specific email servers, such as the port to check or number of
seconds to wait before timing out. These options affect every user using that server.
user options — Contains information, such as username and password, necessary to authenticate
and check for email using a specified email server.

Global options appear at the top of the .fetchmailrc file, followed by one or more server options,
each of which designate a different email server that Fetchmail should check. User options follow server
options for each user account checking that email server. Like server options, multiple user options may
be specified for use with a particular server as well as to check multiple email accounts on the same
server.

Server options are called into service in the .fetchmailrc file by the use of a special option verb,
poll or skip, that precedes any of the server information. The poll action tells Fetchmail to use this
server option when it is run, which checks for email using the specified user options. Any server options
after a skip action, however, are not checked unless this server's hostname is specified when Fetchmail
is invoked. The skip option is useful when testing configurations in the .fetchmailrc file because it
only checks skipped servers when specifically invoked, and does not affect any currently working
configurations.

The following is an example of a .fetchmailrc file:

set postmaster "user1"
set bouncemail

poll pop.domain.com proto pop3
    user 'user1' there with password 'secret' is user1 here

poll mail.domain2.com
    user 'user5' there with password 'secret2' is user1 here
    user 'user7' there with password 'secret3' is user1 here

In this example, the global options specify that the user is sent email as a last resort (postmaster
option) and all email errors are sent to the postmaster instead of the sender (bouncemail option). The
set action tells Fetchmail that this line contains a global option. Then, two email servers are specified,
one set to check using POP3, the other for trying various protocols to find one that works. Two users are
checked using the second server option, but all email found for any user is sent to user1's mail spool.
This allows multiple mailboxes to be checked on multiple servers, while appearing in a single MUA inbox.
Each user's specific information begins with the user action.

Omitting the password from the configuration

Users are not required to place their password in the .fetchmailrc file. Omitting the with
password '<password>' section causes Fetchmail to ask for a password when it is launched.

Fetchmail has numerous global, server, and local options. Many of these options are rarely used or only
apply to very specific situations. The fetchmail man page explains each option in detail, but the most
common ones are listed in the following three sections.

11.3.3.2. Global Options


Each global option should be placed on a single line after a set action.

daemon <seconds> — Specifies daemon-mode, where Fetchmail stays in the background.
Replace <seconds> with the number of seconds Fetchmail is to wait before polling the server.
postmaster — Specifies a local user to send mail to in case of delivery problems.
syslog — Specifies the log file for errors and status messages. By default, this is
/var/log/maillog.

11.3.3.3. Server Options


Server options must be placed on their own line in .fetchmailrc after a poll or skip action.

auth <auth-type> — Replace <auth-type> with the type of authentication to be used. By


default, password authentication is used, but some protocols support other types of authentication,
including kerberos_v5, kerberos_v4, and ssh. If the any authentication type is used, Fetchmail
first tries methods that do not require a password, then methods that mask the password, and finally
attempts to send the password unencrypted to authenticate to the server.
interval <number> — Polls the specified server every <number> of times that it checks for
email on all configured servers. This option is generally used for email servers where the user rarely
receives messages.
port <port-number> — Replace <port-number> with the port number. This value overrides the
default port number for the specified protocol.
proto <protocol> — Replace <protocol> with the protocol, such as pop3 or imap, to use
when checking for messages on the server.
timeout <seconds> — Replace <seconds> with the number of seconds of server inactivity after
which Fetchmail gives up on a connection attempt. If this value is not set, a default of 300 seconds is
assumed.

11.3.3.4. User Options


User options may be placed on their own lines beneath a server option or on the same line as the server
option. In either case, the defined options must follow the user option (defined below).

fetchall — Orders Fetchmail to download all messages in the queue, including messages that
have already been viewed. By default, Fetchmail only pulls down new messages.
fetchlimit <number> — Replace <number> with the number of messages to be retrieved
before stopping.
flush — Deletes all previously viewed messages in the queue before retrieving new messages.
limit <max-number-bytes> — Replace <max-number-bytes> with the maximum size in bytes
that messages are allowed to be when retrieved by Fetchmail. This option is useful with slow network
links, when a large message takes too long to download.
password '<password>' — Replace <password> with the user's password.
preconnect "<command>" — Replace <command> with a command to be executed before
retrieving messages for the user.
postconnect "<command>" — Replace <command> with a command to be executed after
retrieving messages for the user.
ssl — Activates SSL encryption.
user "<username>" — Replace <username> with the username used by Fetchmail to retrieve
messages. This option must precede all other user options.
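
As an illustrative sketch only (the server name, account names, and option values below are hypothetical), a poll entry combining several server and user options might look like:

poll pop.mail.example.net proto pop3 timeout 120
    user 'jsmith' there with password 'secret' is jsmith here ssl fetchall

Here the server is checked with POP3 and a 120-second timeout, the session is encrypted with SSL, and all messages, including previously viewed ones, are downloaded for the local user jsmith.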

11.3.3.5. Fetchmail Command Options


Most Fetchmail options used on the command line when executing the fetchmail command mirror the
.fetchmailrc configuration options. In this way, Fetchmail may be used with or without a
configuration file. These options are not used on the command line by most users because it is easier to
leave them in the .fetchmailrc file.

There may be times when it is desirable to run the fetchmail command with other options for a
particular purpose. It is possible to issue command options to temporarily override a .fetchmailrc
setting that is causing an error, as any options specified at the command line override configuration file
options.

11.3.3.6. Informational or Debugging Options


Certain options used after the fetchmail command can supply important information.

--configdump — Displays every possible option based on information from .fetchmailrc and
Fetchmail defaults. No email is retrieved for any users when using this option.
-s — Executes Fetchmail in silent mode, preventing any messages, other than errors, from
appearing after the fetchmail command.


-v — Executes Fetchmail in verbose mode, displaying every communication between Fetchmail and
remote email servers.
-V — Displays detailed version information, lists its global options, and shows settings to be used with
each user, including the email protocol and authentication method. No email is retrieved for any users
when using this option.

11.3.3.7. Special Options


These options are occasionally useful for overriding defaults often found in the .fetchmailrc file.

-a — Fetchmail downloads all messages from the remote email server, whether new or previously
viewed. By default, Fetchmail only downloads new messages.
-k — Fetchmail leaves the messages on the remote email server after downloading them. This
option overrides the default behavior of deleting messages after downloading them.
-l <max-number-bytes> — Fetchmail does not download any messages over a particular size
and leaves them on the remote email server.
--quit — Quits the Fetchmail daemon process.

More commands and .fetchmailrc options can be found in the fetchmail man page.

11.3.4. Mail Transport Agent (MTA) Configuration


A Mail Transport Agent (MTA) is essential for sending email. A Mail User Agent (MUA) such as
Evolution, Thunderbird, and Mutt, is used to read and compose email. When a user sends an email
from an MUA, the message is handed off to the MTA, which sends the message through a series of
MTAs until it reaches its destination.

Even if a user does not plan to send email from the system, some automated tasks or system programs
might use the mail command to send email containing log messages to the root user of the local
system.
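
For instance (an illustrative command, assuming the package that provides the mail command, such as mailx, is installed), a quick local test message can be sent from a shell prompt like this:

~]$ echo "This is a test." | mail -s "Test message" root

The message is handed to the local MTA, which delivers it to the root user's mail spool.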

Red Hat Enterprise Linux 7 provides two MTAs: Postfix and Sendmail. If both are installed, Postfix is the
default MTA.

11.4. Mail Delivery Agents


Red Hat Enterprise Linux includes two primary MDAs, Procmail and mail. Both of the applications are
considered LDAs and both move email from the MTA's spool file into the user's mailbox. However,
Procmail provides a robust filtering system.

This section details only Procmail. For information on the mail command, consult its man page (man
mail).

Procmail delivers and filters email as it is placed in the mail spool file of the localhost. It is powerful,
gentle on system resources, and widely used. Procmail can play a critical role in delivering email to be
read by email client applications.

Procmail can be invoked in several different ways. Whenever an MTA places an email into the mail spool
file, Procmail is launched. Procmail then filters and files the email for the MUA and quits. Alternatively, the
MUA can be configured to execute Procmail any time a message is received so that messages are
moved into their correct mailboxes. By default, the presence of /etc/procmailrc or of a
~/.procmailrc file (also called an rc file) in the user's home directory invokes Procmail whenever an
MTA receives a new message.


By default, no system-wide rc files exist in the /etc/ directory and no .procmailrc files exist in any
user's home directory. Therefore, to use Procmail, each user must construct a .procmailrc file with
specific environment variables and rules.

Whether Procmail acts upon an email message depends upon whether the message matches a
specified set of conditions or recipes in the rc file. If a message matches a recipe, then the email is
placed in a specified file, is deleted, or is otherwise processed.

When Procmail starts, it reads the email message and separates the body from the header information.
Next, Procmail looks for a /etc/procmailrc file and rc files in the /etc/procmailrcs directory for
default, system-wide, Procmail environmental variables and recipes. Procmail then searches for a
.procmailrc file in the user's home directory. Many users also create additional rc files for Procmail
that are referred to within the .procmailrc file in their home directory.

11.4.1. Procmail Configuration


The Procmail configuration file contains important environmental variables. These variables specify
things such as which messages to sort and what to do with the messages that do not match any recipes.

These environmental variables usually appear at the beginning of the ~/.procmailrc file in the
following format:

<env-variable>="<value>"

In this example, <env-variable> is the name of the variable and <value> defines the variable.

There are many environment variables not used by most Procmail users and many of the more
important environment variables are already defined by a default value. Most of the time, the following
variables are used:

DEFAULT — Sets the default mailbox where messages that do not match any recipes are placed.
The default DEFAULT value is the same as $ORGMAIL.
INCLUDERC — Specifies additional rc files containing more recipes for messages to be checked
against. This breaks up the Procmail recipe lists into individual files that fulfill different roles, such as
blocking spam and managing email lists, that can then be turned off or on by using comment
characters in the user's ~/.procmailrc file.
For example, lines in a user's .procm ailrc file may look like this:

MAILDIR=$HOME/Msgs
INCLUDERC=$MAILDIR/lists.rc
INCLUDERC=$MAILDIR/spam.rc

To turn off Procmail filtering of email lists but leave spam control in place, comment out the first
INCLUDERC line with a hash sign (#).
LOCKSLEEP — Sets the amount of time, in seconds, between attempts by Procmail to use a
particular lockfile. The default is 8 seconds.
LOCKTIMEOUT — Sets the amount of time, in seconds, that must pass after a lockfile was last
modified before Procmail assumes that the lockfile is old and can be deleted. The default is 1024
seconds.
LOGFILE — The file to which any Procmail information or error messages are written.
MAILDIR — Sets the current working directory for Procmail. If set, all other Procmail paths are
relative to this directory.
ORGMAIL — Specifies the original mailbox, or another place to put the messages if they cannot be
placed in the default or recipe-required location.
By default, a value of /var/spool/mail/$LOGNAME is used.


SUSPEND — Sets the amount of time, in seconds, that Procmail pauses if a necessary resource,
such as swap space, is not available.
SWITCHRC — Allows a user to specify an external file containing additional Procmail recipes, much
like the INCLUDERC option, except that recipe checking is actually stopped on the referring
configuration file and only the recipes on the SWITCHRC-specified file are used.
VERBOSE — Causes Procmail to log more information. This option is useful for debugging.

Other important environmental variables are pulled from the shell, such as LOGNAME, which is the login
name; HOME, which is the location of the home directory; and SHELL, which is the default shell.

A comprehensive explanation of all environment variables, as well as their default values, is available in
the procmailrc man page.
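
For illustration only (the directory and file names here are hypothetical choices, not defaults), the beginning of a ~/.procmailrc file setting a few of these variables might look like:

MAILDIR=$HOME/Mail
DEFAULT=$MAILDIR/inbox
LOGFILE=$MAILDIR/procmail.log
VERBOSE=off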

11.4.2. Procmail Recipes


New users often find the construction of recipes the most difficult part of learning to use Procmail. To
some extent, this is understandable, as recipes do their message matching using regular expressions,
which is a particular format used to specify qualifications for a matching string. However, regular
expressions are not very difficult to construct and even less difficult to understand when read.
Additionally, the consistency of the way Procmail recipes are written, regardless of regular expressions,
makes it easy to learn by example. To see example Procmail recipes, refer to Section 11.4.2.5, “Recipe
Examples”.

Procmail recipes take the following form:

:0<flags>: <lockfile-name>
* <special-condition-character><condition-1>
* <special-condition-character><condition-2>
* <special-condition-character><condition-N>
<special-action-character><action-to-perform>

The first two characters in a Procmail recipe are a colon and a zero. Various flags can be placed after
the zero to control how Procmail processes the recipe. A colon after the <flags> section specifies that
a lockfile is created for this message. If a lockfile is created, the name can be specified by replacing
<lockfile-name>.

A recipe can contain several conditions to match against the message. If it has no conditions, every
message matches the recipe. Regular expressions are placed in some conditions to facilitate message
matching. If multiple conditions are used, they must all match for the action to be performed. Conditions
are checked based on the flags set in the recipe's first line. Optional special characters placed after the
asterisk character (*) can further control the condition.

The <action-to-perform> argument specifies the action taken when the message matches one of the
conditions. There can only be one action per recipe. In many cases, the name of a mailbox is used here
to direct matching messages into that file, effectively sorting the email. Special action characters may
also be used before the action is specified. Refer to Section 11.4.2.4, “Special Conditions and Actions”
for more information.

11.4.2.1. Delivering vs. Non-Delivering Recipes


The action used if the recipe matches a particular message determines whether it is considered a
delivering or non-delivering recipe. A delivering recipe contains an action that writes the message to a
file, sends the message to another program, or forwards the message to another email address. A non-
delivering recipe covers any other actions, such as a nesting block. A nesting block is a set of actions,
contained in braces { }, that are performed on messages which match the recipe's conditions. Nesting
blocks can be nested inside one another, providing greater control for identifying and performing actions
on messages.

When messages match a delivering recipe, Procmail performs the specified action and stops comparing
the message against any other recipes. Messages that match non-delivering recipes continue to be
compared against other recipes.

11.4.2.2. Flags
Flags are essential to determine how or if a recipe's conditions are compared to a message. The
following flags are commonly used:

A — Specifies that this recipe is only used if the previous recipe without an A or a flag also matched
this message.
a — Specifies that this recipe is only used if the previous recipe with an A or a flag also matched this
message and was successfully completed.
B — Parses the body of the message and looks for matching conditions.
b — Uses the body in any resulting action, such as writing the message to a file or forwarding it. This
is the default behavior.
c — Generates a carbon copy of the email. This is useful with delivering recipes, since the required
action can be performed on the message and a copy of the message can continue being processed
in the rc files.
D — Makes the egrep comparison case-sensitive. By default, the comparison process is not case-
sensitive.
E — While similar to the A flag, the conditions in the recipe are only compared to the message if the
immediately preceding recipe without an E flag did not match. This is comparable to an else action.
e — The recipe is compared to the message only if the action specified in the immediately preceding
recipe fails.
f — Uses the pipe as a filter.
H — Parses the header of the message and looks for matching conditions. This is the default
behavior.
h — Uses the header in a resulting action. This is the default behavior.
w — Tells Procmail to wait for the specified filter or program to finish, and reports whether or not it
was successful before considering the message filtered.
W — Is identical to w except that "Program failure" messages are suppressed.

For a detailed list of additional flags, refer to the procmailrc man page.
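As an illustration of how these flags combine, the following hypothetical recipe checks the message body (B) with a case-sensitive match (D), keeps a carbon copy (c) so the original message continues through later recipes, and creates a local lockfile; the phrase being matched and the mailbox name are placeholders, not values taken from this guide:

# Save a copy of body text containing "Quarterly Results" (case-sensitive)
:0 BDc:
* Quarterly Results
reports-copies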

11.4.2.3. Specifying a Local Lockfile


Lockfiles are very useful with Procmail to ensure that more than one process does not try to alter a
message simultaneously. Specify a local lockfile by placing a colon (:) after any flags on a recipe's first
line. This creates a local lockfile based on the destination file name plus whatever has been set in the
LOCKEXT global environment variable.

Alternatively, specify the name of the local lockfile to be used with this recipe after the colon.

11.4.2.4. Special Conditions and Actions


Special characters used before Procmail recipe conditions and actions change the way they are
interpreted.


The following characters may be used after the asterisk character (*) at the beginning of a recipe's
condition line:

! — In the condition line, this character inverts the condition, causing a match to occur only if the
condition does not match the message.
< — Checks if the message is under a specified number of bytes.
> — Checks if the message is over a specified number of bytes.

The following characters are used to perform special actions:

! — In the action line, this character tells Procmail to forward the message to the specified email
addresses.
$ — Refers to a variable set earlier in the rc file. This is often used to set a common mailbox that is
referred to by various recipes.
| — Starts a specified program to process the message.
{ and } — Constructs a nesting block, used to contain additional recipes to apply to matching
messages.

If no special character is used at the beginning of the action line, Procmail assumes that the action line is
specifying the mailbox in which to write the message.

11.4.2.5. Recipe Examples


Procmail is an extremely flexible program, but as a result of this flexibility, composing Procmail recipes
from scratch can be difficult for new users.

The best way to develop the skills to build Procmail recipe conditions stems from a strong understanding
of regular expressions combined with looking at many examples built by others. A thorough explanation
of regular expressions is beyond the scope of this section. The structure of Procmail recipes and useful
sample Procmail recipes can be found at various places on the Internet (such as
http://www.iki.fi/era/procmail/links.html). The proper use and adaptation of regular expressions can be
derived by viewing these recipe examples. In addition, introductory information about basic regular
expression rules can be found in the grep man page.

The following simple examples demonstrate the basic structure of Procmail recipes and can provide the
foundation for more intricate constructions.

A basic recipe may not even contain conditions, as is illustrated in the following example:

:0:
new-mail.spool

The first line specifies that a local lockfile is to be created but does not specify a name, so Procmail uses
the destination file name and appends the value specified in the LOCKEXT environment variable. No
condition is specified, so every message matches this recipe and is placed in the single spool file called
new-mail.spool, located within the directory specified by the MAILDIR environment variable. An MUA
can then view messages in this file.

A basic recipe, such as this, can be placed at the end of all rc files to direct messages to a default
location.

The following example matches messages from a specific email address and throws them away.

:0
* ^From: spammer@domain.com
/dev/null


With this example, any messages sent by spammer@domain.com are sent to the /dev/null device,
deleting them.

Sending messages to /dev/null

Be certain that rules are working as intended before sending messages to /dev/null for
permanent deletion. If a recipe inadvertently catches unintended messages, and those messages
disappear, it becomes difficult to troubleshoot the rule.
A better solution is to point the recipe's action to a special mailbox, which can be checked from
time to time to look for false positives. Once satisfied that no messages are accidentally being
matched, delete the mailbox and direct the action to send the messages to /dev/null.

The following recipe grabs email sent from a particular mailing list and places it in a specified folder.

:0:
* ^(From|Cc|To).*tux-lug
tuxlug

Any messages sent from the tux-lug@domain.com mailing list are placed in the tuxlug mailbox
automatically for the MUA. Note that the condition in this example matches the message if it has the
mailing list's email address on the From, Cc, or To lines.

Consult the many Procmail online resources available in Section 11.6, “Additional Resources” for more
detailed and powerful recipes.

11.4.2.6. Spam Filters


Because it is called by Sendmail, Postfix, and Fetchmail upon receiving new emails, Procmail can be
used as a powerful tool for combating spam.

This is particularly true when Procmail is used in conjunction with SpamAssassin. When used together,
these two applications can quickly identify spam emails, and sort or destroy them.

SpamAssassin uses header analysis, text analysis, blacklists, a spam-tracking database, and self-
learning Bayesian spam analysis to quickly and accurately identify and tag spam.

Installing the spamassassin package

In order to use SpamAssassin, first ensure the spamassassin package is installed on your
system by running, as root:

~]# yum install spamassassin

For more information on installing packages with Yum, refer to Section 5.2.4, “Installing
Packages”.

The easiest way for a local user to use SpamAssassin is to place the following line near the top of the
~/.procmailrc file:

INCLUDERC=/etc/mail/spamassassin/spamassassin-default.rc

The /etc/mail/spamassassin/spamassassin-default.rc file contains a simple Procmail rule that
activates SpamAssassin for all incoming email. If an email is determined to be spam, it is tagged in the


header as such and the title is prepended with the following pattern:

*****SPAM*****

The message body of the email is also prepended with a running tally of what elements caused it to be
diagnosed as spam.

To file email tagged as spam, a rule similar to the following can be used:

:0 Hw
* ^X-Spam-Status: Yes
spam

This rule files all email tagged in the header as spam into a mailbox called spam.

Since SpamAssassin is a Perl script, it may be necessary on busy servers to use the binary
SpamAssassin daemon (spamd) and the client application (spamc). Configuring SpamAssassin this
way, however, requires root access to the host.

To start the spamd daemon, type the following command:

~]# systemctl start spamassassin

To start the SpamAssassin daemon when the system is booted, use an initscript utility, such as the
Services Configuration Tool (system-config-services), to turn on the spamassassin service.
Refer to Chapter 7, Managing Services with systemd for more information about starting and stopping
services.

To configure Procmail to use the SpamAssassin client application instead of the Perl script, place the
following line near the top of the ~/.procmailrc file. For a system-wide configuration, place it in
/etc/procmailrc:

INCLUDERC=/etc/mail/spamassassin/spamassassin-spamc.rc
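Putting these pieces together, a minimal per-user ~/.procmailrc on such a system might look like the following sketch; the spam mailbox name is only an example:

# Pass every incoming message through the SpamAssassin client
INCLUDERC=/etc/mail/spamassassin/spamassassin-spamc.rc

# File anything SpamAssassin has tagged as spam into a separate mailbox
:0 Hw
* ^X-Spam-Status: Yes
spam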

11.5. Mail User Agents


Red Hat Enterprise Linux offers a variety of email programs, both graphical email client programs, such
as Evolution, and text-based email programs, such as mutt.

The remainder of this section focuses on securing communication between a client and a server.

11.5.1. Securing Communication


Popular MUAs included with Red Hat Enterprise Linux, such as Evolution and mutt, offer SSL-encrypted
email sessions.

Like any other service that flows over a network unencrypted, important email information, such as
usernames, passwords, and entire messages, may be intercepted and viewed by users on the network.
Additionally, since the standard POP and IMAP protocols pass authentication information unencrypted, it
is possible for an attacker to gain access to user accounts by collecting usernames and passwords as
they are passed over the network.

11.5.1.1. Secure Email Clients


Most Linux MUAs designed to check email on remote servers support SSL encryption. To use SSL when
retrieving email, it must be enabled on both the email client and the server.


SSL is easy to enable on the client-side, often done with the click of a button in the MUA's configuration
window or via an option in the MUA's configuration file. Secure IMAP and POP have known port numbers
(993 and 995, respectively) that the MUA uses to authenticate and download messages.

11.5.1.2. Securing Email Client Communications


Offering SSL encryption to IMAP and POP users on the email server is a simple matter.

First, create an SSL certificate. This can be done in two ways: by applying to a Certificate Authority (CA)
for an SSL certificate or by creating a self-signed certificate.

Avoid using self-signed certificates

Self-signed certificates should be used for testing purposes only. Any server used in a production
environment should use an SSL certificate granted by a CA.

To create a self-signed SSL certificate for IMAP or POP, change to the /etc/pki/dovecot/ directory,
edit the certificate parameters in the /etc/pki/dovecot/dovecot-openssl.cnf configuration file
as you prefer, and type the following commands, as root:

dovecot]# rm -f certs/dovecot.pem private/dovecot.pem


dovecot]# /usr/libexec/dovecot/mkcert.sh

Once finished, make sure you have the following configurations in your /etc/dovecot/conf.d/10-
ssl.conf file:

ssl_cert = </etc/pki/dovecot/certs/dovecot.pem
ssl_key = </etc/pki/dovecot/private/dovecot.pem

Execute the following command to restart the dovecot daemon:

~]# systemctl restart dovecot
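To confirm that the restarted daemon is presenting the new certificate, the connection can be tested with the openssl s_client tool; the host name below is a placeholder for your own mail server, and port 995 can be substituted to check POP3S instead of IMAPS:

~]$ openssl s_client -connect mail.example.com:993

The beginning of the output shows the certificate chain along with the issuer and subject of the certificate in use.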

Alternatively, the stunnel command can be used as an SSL encryption wrapper around the standard,
non-secure connections to IMAP or POP services.

The stunnel utility uses external OpenSSL libraries included with Red Hat Enterprise Linux to provide
strong cryptography and to protect the network connections. It is recommended to apply to a CA to
obtain an SSL certificate, but it is also possible to create a self-signed certificate.

Installing the stunnel package

In order to use stunnel, first ensure the stunnel package is installed on your system by running,
as root:

~]# yum install stunnel

For more information on installing packages with Yum, refer to Section 5.2.4, “Installing
Packages”.


To create a self-signed SSL certificate, change to the /etc/pki/tls/certs/ directory, and type the
following command:

certs]# make stunnel.pem

Answer all of the questions to complete the process.

Once the certificate is generated, create an stunnel configuration file, for example
/etc/stunnel/mail.conf, with the following content:

cert = /etc/pki/tls/certs/stunnel.pem

[pop3s]
accept = 995
connect = 110

[imaps]
accept = 993
connect = 143

Once you start stunnel with the created configuration file using the stunnel
/etc/stunnel/mail.conf command, it will be possible to use an IMAP or a POP email client and
connect to the email server using SSL encryption.

For more information on stunnel, refer to the stunnel man page or the documents in the
/usr/share/doc/stunnel-<version-number>/ directory, where <version-number> is the version
number of stunnel.

11.6. Additional Resources


The following is a list of additional documentation about email applications.

11.6.1. Installed Documentation


Information on configuring Sendmail is included with the sendmail and sendmail-cf packages.
/usr/share/sendmail-cf/README — Contains information on the m4 macro processor, file
locations for Sendmail, supported mailers, how to access enhanced features, and more.
In addition, the sendmail and aliases man pages contain helpful information covering various
Sendmail options and the proper configuration of the Sendmail /etc/mail/aliases file.
/usr/share/doc/postfix-<version-number> — Contains a large amount of information about
ways to configure Postfix. Replace <version-number> with the version number of Postfix.
/usr/share/doc/fetchmail-<version-number> — Contains a full list of Fetchmail features in
the FEATURES file and an introductory FAQ document. Replace <version-number> with the version
number of Fetchmail.
/usr/share/doc/procmail-<version-number> — Contains a README file that provides an
overview of Procmail, a FEATURES file that explores every program feature, and an FAQ file with
answers to many common configuration questions. Replace <version-number> with the version
number of Procmail.
When learning how Procmail works and creating new recipes, the following Procmail man pages are
invaluable:
procmail — Provides an overview of how Procmail works and the steps involved with filtering


email.
procmailrc — Explains the rc file format used to construct recipes.
procmailex — Gives a number of useful, real-world examples of Procmail recipes.
procmailsc — Explains the weighted scoring technique used by Procmail to match a particular
recipe to a message.
/usr/share/doc/spamassassin-<version-number>/ — Contains a large amount of
information pertaining to SpamAssassin. Replace <version-number> with the version number of
the spamassassin package.

11.6.2. Useful Websites


http://www.sendmail.org/ — Offers a thorough technical breakdown of Sendmail features,
documentation and configuration examples.
http://www.sendmail.com/ — Contains news, interviews and articles concerning Sendmail, including
an expanded view of the many options available.
http://www.postfix.org/ — The Postfix project home page contains a wealth of information about
Postfix. The mailing list is a particularly good place to look for information.
http://fetchmail.berlios.de/ — The home page for Fetchmail, featuring an online manual, and a
thorough FAQ.
http://www.procmail.org/ — The home page for Procmail with links to assorted mailing lists dedicated
to Procmail as well as various FAQ documents.
http://partmaps.org/era/procmail/mini-faq.html — An excellent Procmail FAQ, offers troubleshooting
tips, details about file locking, and the use of wildcard characters.
http://www.uwasa.fi/~ts/info/proctips.html — Contains dozens of tips that make using Procmail much
easier. Includes instructions on how to test .procmailrc files and use Procmail scoring to decide if
a particular action should be taken.
http://www.spamassassin.org/ — The official site of the SpamAssassin project.

11.6.3. Related Books


Sendmail Milters: A Guide for Fighting Spam by Bryan Costales and Marcia Flynt; Addison-Wesley —
A good Sendmail guide that can help you customize your mail filters.
Sendmail by Bryan Costales with Eric Allman et al.; O'Reilly & Associates — A good Sendmail
reference written with the assistance of the original creator of Delivermail and Sendmail.
Removing the Spam: Email Processing and Filtering by Geoff Mulligan; Addison-Wesley Publishing
Company — A volume that looks at various methods used by email administrators using established
tools, such as Sendmail and Procmail, to manage spam problems.
Internet Email Protocols: A Developer's Guide by Kevin Johnson; Addison-Wesley Publishing
Company — Provides a very thorough review of major email protocols and the security they provide.
Managing IMAP by Dianna Mullet and Kevin Mullet; O'Reilly & Associates — Details the steps
required to configure an IMAP server.


Chapter 12. Directory Servers

12.1. OpenLDAP
LDAP (Lightweight Directory Access Protocol) is a set of open protocols used to access centrally stored
information over a network. It is based on the X.500 standard for directory sharing, but is less complex
and resource-intensive. For this reason, LDAP is sometimes referred to as “X.500 Lite”.

Like X.500, LDAP organizes information in a hierarchical manner using directories. These directories can
store a variety of information such as names, addresses, or phone numbers, and can even be used in a
manner similar to the Network Information Service (NIS), enabling anyone to access their account from
any machine on the LDAP enabled network.

LDAP is commonly used for centrally managed users and groups, user authentication, or system
configuration. It can also serve as a virtual phone directory, allowing users to easily access contact
information for other users. Additionally, it can refer a user to other LDAP servers throughout the world,
and thus provide an ad-hoc global repository of information. However, it is most frequently used within
individual organizations such as universities, government departments, and private companies.

This section covers the installation and configuration of OpenLDAP 2.4, an open source implementation
of the LDAPv2 and LDAPv3 protocols.

12.1.1. Introduction to LDAP


Using a client/server architecture, LDAP provides reliable means to create a central information directory
accessible from the network. When a client attempts to modify information within this directory, the server
verifies the user has permission to make the change, and then adds or updates the entry as requested.
To ensure the communication is secure, the Secure Sockets Layer (SSL) or Transport Layer Security
(TLS) cryptographic protocols can be used to prevent an attacker from intercepting the transmission.

Using Mozilla NSS

The OpenLDAP suite in Red Hat Enterprise Linux 7 no longer uses OpenSSL. Instead, it uses the
Mozilla implementation of Network Security Services (NSS). OpenLDAP continues to work with
existing certificates, keys, and other TLS configuration. For more information on how to configure
it to use Mozilla certificate and key database, refer to How do I use TLS/SSL with Mozilla NSS.

The LDAP server supports several database systems, which gives administrators the flexibility to choose
the best suited solution for the type of information they are planning to serve. Because of a well-defined
client Application Programming Interface (API), the number of applications able to communicate with an
LDAP server is numerous, and increasing in both quantity and quality.

12.1.1.1. LDAP Terminology


The following is a list of LDAP-specific terms that are used within this chapter:

entry
A single unit within an LDAP directory. Each entry is identified by its unique Distinguished Name
(DN).

attribute
Information directly associated with an entry. For example, if an organization is represented as


an LDAP entry, attributes associated with this organization might include an address, a fax
number, etc. Similarly, people can be represented as entries with common attributes such as
personal telephone number or email address.

An attribute can either have a single value, or an unordered space-separated list of values.
While certain attributes are optional, others are required. Required attributes are specified using
the objectClass definition, and can be found in schema files located in the
/etc/openldap/slapd.d/cn=config/cn=schema/ directory.

The assertion of an attribute and its corresponding value is also referred to as a Relative
Distinguished Name (RDN). Unlike distinguished names that are unique globally, a relative
distinguished name is only unique per entry.

LDIF
The LDAP Data Interchange Format (LDIF) is a plain text representation of an LDAP entry. It
takes the following form:

[id] dn: distinguished_name


attribute_type: attribute_value…
attribute_type: attribute_value…

The optional id is a number determined by the application that is used to edit the entry. Each
entry can contain as many attribute_type and attribute_value pairs as needed, as long
as they are all defined in a corresponding schema file. A blank line indicates the end of an entry.
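For example, a single entry describing a person in a hypothetical dc=example,dc=com directory could be written in LDIF as follows; all names and values are illustrative only:

dn: uid=jsmith,ou=People,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: organizationalPerson
objectClass: person
cn: John Smith
sn: Smith
uid: jsmith
mail: jsmith@example.com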

12.1.1.2. OpenLDAP Features


OpenLDAP suite provides a number of important features:

LDAPv3 Support — Many of the changes in the protocol since LDAP version 2 are designed to make
LDAP more secure. Among other improvements, this includes the support for Simple Authentication
and Security Layer (SASL), Transport Layer Security (TLS), and Secure Sockets Layer (SSL)
protocols.
LDAP Over IPC — The use of inter-process communication (IPC) enhances security by eliminating
the need to communicate over a network.
IPv6 Support — OpenLDAP is compliant with Internet Protocol version 6 (IPv6), the next generation
of the Internet Protocol.
LDIFv1 Support — OpenLDAP is fully compliant with LDIF version 1.
Updated C API — The current C API improves the way programmers can connect to and use LDAP
directory servers.
Enhanced Standalone LDAP Server — This includes an updated access control system, thread
pooling, better tools, and much more.

12.1.1.3. OpenLDAP Server Setup


The typical steps to set up an LDAP server on Red Hat Enterprise Linux are as follows:

1. Install the OpenLDAP suite. Refer to Section 12.1.2, “Installing the OpenLDAP Suite” for more
information on required packages.
2. Customize the configuration as described in Section 12.1.3, “Configuring an OpenLDAP Server”.
3. Start the slapd service as described in Section 12.1.4, “Running an OpenLDAP Server”.


4. Use the ldapadd utility to add entries to the LDAP directory.


5. Use the ldapsearch utility to verify that the slapd service is accessing the information correctly.

12.1.2. Installing the OpenLDAP Suite


The suite of OpenLDAP libraries and tools is provided by the following packages:

Table 12.1. List of OpenLDAP packages

Package Description
openldap A package containing the libraries necessary to run the OpenLDAP
server and client applications.
openldap-clients A package containing the command line utilities for viewing and
modifying directories on an LDAP server.
openldap-servers A package containing both the services and utilities to configure and
run an LDAP server. This includes the Standalone LDAP Daemon,
slapd.
openldap-servers-sql A package containing the SQL support module.
compat-openldap A package containing the OpenLDAP compatibility libraries.

Additionally, the following packages are commonly used along with the LDAP server:

Table 12.2. List of commonly installed additional LDAP packages

Package Description
nss-pam-ldapd A package containing nslcd, a local LDAP name service that
allows a user to perform local LDAP queries.
mod_ldap A package containing the mod_authnz_ldap and mod_ldap
modules. The mod_authnz_ldap module is the LDAP
authorization module for the Apache HTTP Server. This module can
authenticate users' credentials against an LDAP directory, and can
enforce access control based on the user name, full DN, group
membership, an arbitrary attribute, or a complete filter string. The
mod_ldap module contained in the same package provides a
configurable shared memory cache, to avoid repeated directory
access across many HTTP requests, and also support for SSL/TLS.

To install these packages, use the yum command in the following form:

yum install package…

For example, to perform the basic LDAP server installation, type the following at a shell prompt:

~]# yum install openldap openldap-clients openldap-servers

Note that you must have superuser privileges (that is, you must be logged in as root) to run this
command. For more information on how to install new packages in Red Hat Enterprise Linux, refer to
Section 5.2.4, “Installing Packages”.

12.1.2.1. Overview of OpenLDAP Server Utilities


To perform administrative tasks, the openldap-servers package installs the following utilities along with
the slapd service:

Table 12.3. List of OpenLDAP server utilities

Command Description
slapacl Allows you to check the access to a list of attributes.
slapadd Allows you to add entries from an LDIF file to an LDAP directory.
slapauth Allows you to check a list of IDs for authentication and authorization
permissions.
slapcat Allows you to pull entries from an LDAP directory in the default
format and save them in an LDIF file.
slapdn Allows you to check a list of Distinguished Names (DNs) based on
available schema syntax.
slapindex Allows you to re-index the slapd directory based on the current
content. Run this utility whenever you change indexing options in
the configuration file.
slappasswd Allows you to create an encrypted user password to be used with
the ldapmodify utility, or in the slapd configuration file.
slapschema Allows you to check the compliance of a database with the
corresponding schema.
slaptest Allows you to check the LDAP server configuration.

For a detailed description of these utilities and their usage, refer to the corresponding manual pages as
referred to in Section 12.1.6.1, “Installed Documentation”.

Make sure the files have correct owner

Although only root can run slapadd, the slapd service runs as the ldap user. Because of
this, the directory server is unable to modify any files created by slapadd. To correct this issue,
after running the slapadd utility, type the following at a shell prompt:

~]# chown -R ldap:ldap /var/lib/ldap

Stop slapd before using these utilities

To preserve the data integrity, stop the slapd service before using slapadd, slapcat, or
slapindex. You can do so by typing the following at a shell prompt:

~]# systemctl stop slapd.service

For more information on how to start, stop, restart, and check the current status of the slapd
service, see Section 12.1.4, “Running an OpenLDAP Server”.
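As a short example of these utilities in practice, the following sketch dumps the directory to an LDIF file and restores it while observing both warnings above; the backup file name is arbitrary:

~]# systemctl stop slapd.service
~]# slapcat -l /root/directory-backup.ldif
~]# slapadd -l /root/directory-backup.ldif
~]# chown -R ldap:ldap /var/lib/ldap
~]# systemctl start slapd.service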

12.1.2.2. Overview of OpenLDAP Client Utilities


The openldap-clients package installs the following utilities which can be used to add, modify, and delete


entries in an LDAP directory:

Table 12.4. List of OpenLDAP client utilities

Command Description
ldapadd Allows you to add entries to an LDAP directory, either from a file, or
from standard input. It is a symbolic link to ldapmodify -a.
ldapcompare Allows you to compare a given attribute with an LDAP directory entry.
ldapdelete Allows you to delete entries from an LDAP directory.
ldapexop Allows you to perform extended LDAP operations.
ldapmodify Allows you to modify entries in an LDAP directory, either from a file,
or from standard input.
ldapmodrdn Allows you to modify the RDN value of an LDAP directory entry.
ldappasswd Allows you to set or change the password for an LDAP user.
ldapsearch Allows you to search LDAP directory entries.
ldapurl Allows you to compose or decompose LDAP URLs.
ldapwhoami Allows you to perform a whoami operation on an LDAP server.

With the exception of ldapsearch, each of these utilities is more easily used by referencing a file
containing the changes to be made rather than typing a command for each entry to be changed within
an LDAP directory. The format of such a file is outlined in the man page for each utility.
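For instance, assuming an entry such as the LDIF example shown earlier has been saved in a file called entry.ldif and that the directory manager DN below matches your server's configuration, the entry could be added and then looked up as follows:

~]$ ldapadd -x -D "cn=Manager,dc=example,dc=com" -W -f entry.ldif
~]$ ldapsearch -x -b "dc=example,dc=com" "(uid=jsmith)"

The -x option selects simple authentication, -D and -W supply the bind DN and prompt for its password, and -b sets the search base.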

12.1.2.3. Overview of Common LDAP Client Applications


Although there are various graphical LDAP clients capable of creating and modifying directories on the
server, none of them is included in Red Hat Enterprise Linux. Popular applications that can access
directories in a read-only mode include Mozilla Thunderbird, Evolution, or Ekiga.

12.1.3. Configuring an OpenLDAP Server


By default, the OpenLDAP configuration is stored in the /etc/openldap/ directory. The following table
highlights the most important directories and files within this directory:

Table 12.5. List of OpenLDAP configuration files and directories

Path Description
/etc/openldap/ldap.conf The configuration file for client applications that use the OpenLDAP
libraries. This includes ldapadd, ldapsearch, Evolution, etc.
/etc/openldap/slapd.d/ The directory containing the slapd configuration.

Note that OpenLDAP no longer reads its configuration from the /etc/openldap/slapd.conf file.
Instead, it uses a configuration database located in the /etc/openldap/slapd.d/ directory. If you
have an existing slapd.conf file from a previous installation, you can convert it to the new format by
running the following command:

~]# slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d/

The slapd configuration consists of LDIF entries organized in a hierarchical directory structure, and the
recommended way to edit these entries is to use the server utilities described in Section 12.1.2.1,
“Overview of OpenLDAP Server Utilities”.


Do not edit LDIF files directly

An error in an LDIF file can render the slapd service unable to start. Because of this, it is
strongly advised that you avoid editing the LDIF files within the /etc/openldap/slapd.d/
directory directly.

12.1.3.1. Changing the Global Configuration


Global configuration options for the LDAP server are stored in the
/etc/openldap/slapd.d/cn=config.ldif file. The following directives are commonly used:

olcAllows
The olcAllows directive allows you to specify which features to enable. It takes the following
form:

olcAllows: feature…

It accepts a space-separated list of features as described in Table 12.6, “Available olcAllows
options”. The default option is bind_v2.

Table 12.6. Available olcAllows options

Option Description
bind_v2 Enables the acceptance of LDAP version 2 bind requests.
bind_anon_cred Enables an anonymous bind when the Distinguished Name (DN) is
empty.
bind_anon_dn Enables an anonymous bind when the Distinguished Name (DN) is
not empty.
update_anon Enables processing of anonymous update operations.
proxy_authz_anon Enables processing of anonymous proxy authorization control.

Example 12.1. Using the olcAllows directive

olcAllows: bind_v2 update_anon

olcConnMaxPending
The olcConnMaxPending directive allows you to specify the maximum number of pending
requests for an anonymous session. It takes the following form:

olcConnMaxPending: number

The default option is 100.

Example 12.2. Using the olcConnMaxPending directive

olcConnMaxPending: 100


olcConnMaxPendingAuth
The olcConnMaxPendingAuth directive allows you to specify the maximum number of
pending requests for an authenticated session. It takes the following form:

olcConnMaxPendingAuth: number

The default option is 1000.

Example 12.3. Using the olcConnMaxPendingAuth directive

olcConnMaxPendingAuth: 1000

olcDisallows
The olcDisallows directive allows you to specify which features to disable. It takes the
following form:

olcDisallows: feature…

It accepts a space-separated list of features as described in Table 12.7, “Available olcDisallows
options”. No features are disabled by default.

Table 12.7. Available olcDisallows options

Option Description
bind_anon Disables the acceptance of anonymous bind requests.
bind_simple Disables the simple bind authentication mechanism.
tls_2_anon Disables the enforcing of an anonymous session when the
STARTTLS command is received.
tls_authc Disallows the STARTTLS command when authenticated.

Example 12.4. Using the olcDisallows directive

olcDisallows: bind_anon

olcIdleTimeout
The olcIdleTimeout directive allows you to specify how many seconds to wait before
closing an idle connection. It takes the following form:

olcIdleTimeout: number

This option is disabled by default (that is, set to 0).


Example 12.5. Using the olcIdleTimeout directive

olcIdleTimeout: 180

olcLogFile
The olcLogFile directive allows you to specify a file in which to write log messages. It takes
the following form:

olcLogFile: file_name

The log messages are written to standard error by default.

Example 12.6. Using the olcLogFile directive

olcLogFile: /var/log/slapd.log

olcReferral
The olcReferral option allows you to specify a URL of a server to process the request in
case the server is not able to handle it. It takes the following form:

olcReferral: URL

This option is disabled by default.

Example 12.7. Using the olcReferral directive

olcReferral: ldap://root.openldap.org

olcWriteTimeout
The olcWriteTimeout option allows you to specify how many seconds to wait before closing
a connection with an outstanding write request. It takes the following form:

olcWriteTimeout: number

This option is disabled by default (that is, set to 0).

Example 12.8. Using the olcWriteTimeout directive

olcWriteTimeout: 180

12.1.3.2. Changing the Database-Specific Configuration


By default, the OpenLDAP server uses Berkeley DB (BDB) as a database back end. The configuration for


this database is stored in the


/etc/openldap/slapd.d/cn=config/olcDatabase={1}bdb.ldif file. The following directives
are commonly used in a database-specific configuration:

olcReadOnly
The olcReadOnly directive allows you to use the database in a read-only mode. It takes the
following form:

olcReadOnly: boolean

It accepts either TRUE (enable the read-only mode), or FALSE (enable modifications of the
database). The default option is FALSE.

Example 12.9. Using the olcReadOnly directive

olcReadOnly: TRUE

olcRootDN
The olcRootDN directive allows you to specify the user that is unrestricted by access controls
or administrative limit parameters set for operations on the LDAP directory. It takes the following
form:

olcRootDN: distinguished_name

It accepts a Distinguished Name (DN). The default option is
cn=Manager,dc=my-domain,dc=com.

Example 12.10. Using the olcRootDN directive

olcRootDN: cn=root,dc=example,dc=com

olcRootPW
The olcRootPW directive allows you to set a password for the user that is specified using the
olcRootDN directive. It takes the following form:

olcRootPW: password

It accepts either a plain text string, or a hash. To generate a hash, type the following at a shell
prompt:

~]$ slappasswd
New password:
Re-enter new password:
{SSHA}WczWsyPEnMchFf1GRTweq2q7XJcvmSxD


Example 12.11. Using the olcRootPW directive

olcRootPW: {SSHA}WczWsyPEnMchFf1GRTweq2q7XJcvmSxD

olcSuffix
The olcSuffix directive allows you to specify the domain for which to provide information. It
takes the following form:

olcSuffix: domain_name

It accepts a fully qualified domain name (FQDN). The default option is
dc=my-domain,dc=com.

Example 12.12. Using the olcSuffix directive

olcSuffix: dc=example,dc=com
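Rather than editing the files in /etc/openldap/slapd.d/ by hand, database-specific directives such as these are normally changed with ldapmodify over the local ldapi:/// socket using SASL EXTERNAL authentication. The following is a minimal sketch that assumes the default access rules for cn=config; the suffix and root DN values are examples only:

~]# ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
dn: olcDatabase={1}bdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=example,dc=com
-
replace: olcRootDN
olcRootDN: cn=root,dc=example,dc=com
EOF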

12.1.3.3. Extending Schema


Since OpenLDAP 2.3, the /etc/openldap/slapd.d/ directory also contains LDAP definitions that
were previously located in /etc/openldap/schema/. It is possible to extend the schema used by
OpenLDAP to support additional attribute types and object classes using the default schema files as a
guide. However, this task is beyond the scope of this chapter. For more information on this topic, refer to
http://www.openldap.org/doc/admin/schema.html.

12.1.4. Running an OpenLDAP Server


This section describes how to start, stop, restart, and check the current status of the Standalone LDAP
Daemon. For more information on how to manage system services in general, refer to Chapter 7,
Managing Services with systemd.

12.1.4.1. Starting the Service


To start the slapd service in the current session, type the following at a shell prompt as root:

~]# systemctl start slapd.service

To configure the service to start automatically at the boot time, use the following command as root:

~]# systemctl enable slapd.service


ln -s '/usr/lib/systemd/system/slapd.service' '/etc/systemd/system/multi-
user.target.wants/slapd.service'

12.1.4.2. Stopping the Service


To stop the running slapd service in the current session, type the following at a shell prompt as root:

~]# systemctl stop slapd.service


To prevent the service from starting automatically at the boot time, type as root:

~]# systemctl disable slapd.service


rm '/etc/systemd/system/multi-user.target.wants/slapd.service'

12.1.4.3. Restarting the Service


To restart the running slapd service, type the following at a shell prompt:

~]# systemctl restart slapd.service

This stops the service and immediately starts it again. Use this command to reload the configuration.

12.1.4.4. Verifying the Service Status


To verify that the slapd service is running, type the following at a shell prompt:

~]$ systemctl is-active slapd.service


active

12.1.5. Configuring a System to Authenticate Using OpenLDAP


In order to configure a system to authenticate using OpenLDAP, make sure that the appropriate
packages are installed on both LDAP server and client machines. For information on how to set up the
server, follow the instructions in Section 12.1.2, “Installing the OpenLDAP Suite” and Section 12.1.3,
“Configuring an OpenLDAP Server”. On a client, type the following at a shell prompt:

~]# yum install openldap openldap-clients nss-pam-ldapd

For detailed instructions on how to configure applications to use LDAP for authentication, see the Red
Hat Enterprise Linux 7 Authentication Guide.
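On many systems the client side can be configured with the authconfig utility; the following is a sketch only, with placeholder server and base DN values, and the options available may differ on your installation:

~]# authconfig --enableldap --enableldapauth \
    --ldapserver=ldap://ldap.example.com \
    --ldapbasedn="dc=example,dc=com" \
    --enablemkhomedir --update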

12.1.5.1. Migrating Old Authentication Information to LDAP Format


The migrationtools package provides a set of shell and Perl scripts to help you migrate authentication
information into an LDAP format. To install this package, type the following at a shell prompt:

~]# yum install migrationtools

This will install the scripts to the /usr/share/migrationtools/ directory. Once installed, edit the
/usr/share/migrationtools/migrate_common.ph file and change the following lines to reflect
the correct domain, for example:

# Default DNS domain


$DEFAULT_MAIL_DOMAIN = "example.com";

# Default base
$DEFAULT_BASE = "dc=example,dc=com";

Alternatively, you can specify the environment variables directly on the command line. For example, to
run the migrate_all_online.sh script with the default base set to dc=example,dc=com, type:

~]# export DEFAULT_BASE="dc=example,dc=com" \


/usr/share/migrationtools/migrate_all_online.sh


To decide which script to run in order to migrate the user database, refer to Table 12.8, “Commonly used
LDAP migration scripts”.

Table 12.8. Commonly used LDAP migration scripts

Existing Name Service    Is LDAP Running?    Script to Use
/etc flat files yes migrate_all_online.sh
/etc flat files no migrate_all_offline.sh
NetInfo yes migrate_all_netinfo_online.sh
NetInfo no migrate_all_netinfo_offline.sh
NIS (YP) yes migrate_all_nis_online.sh
NIS (YP) no migrate_all_nis_offline.sh

For more information on how to use these scripts, refer to the README and the migration-
tools.txt files in the /usr/share/doc/migrationtools-version/ directory.

12.1.6. Additional Resources


The following resources offer additional information on the Lightweight Directory Access Protocol. Before
configuring LDAP on your system, it is highly recommended that you review these resources, especially
the OpenLDAP Software Administrator's Guide.

12.1.6.1. Installed Documentation


The following documentation is installed with the openldap-servers package:

/usr/share/doc/openldap-servers-version/guide.html
A copy of the OpenLDAP Software Administrator's Guide.

/usr/share/doc/openldap-servers-version/README.schema
A README file containing the description of installed schema files.

Additionally, there is also a number of manual pages that are installed with the openldap, openldap-
servers, and openldap-clients packages:

Client Applications

man ldapadd — Describes how to add entries to an LDAP directory.
man ldapdelete — Describes how to delete entries within an LDAP directory.
man ldapmodify — Describes how to modify entries within an LDAP directory.
man ldapsearch — Describes how to search for entries within an LDAP directory.
man ldappasswd — Describes how to set or change the password of an LDAP user.
man ldapcompare — Describes how to use the ldapcompare tool.
man ldapwhoami — Describes how to use the ldapwhoami tool.
man ldapmodrdn — Describes how to modify the RDNs of entries.

Server Applications


man slapd — Describes command line options for the LDAP server.

Administrative Applications

man slapadd — Describes command line options used to add entries to a slapd
database.
man slapcat — Describes command line options used to generate an LDIF file from a
slapd database.
man slapindex — Describes command line options used to regenerate an index based
upon the contents of a slapd database.
man slappasswd — Describes command line options used to generate user passwords
for LDAP directories.

Configuration Files

man ldap.conf — Describes the format and options available within the configuration file
for LDAP clients.
man slapd-config — Describes the format and options available within the configuration
directory.

12.1.6.2. Useful Websites


http://www.openldap.org/doc/admin24/
The current version of the OpenLDAP Software Administrator's Guide.

http://www.kingsmountain.com/ldapRoadmap.shtml
Jeff Hodges' LDAP Roadmap & FAQ containing links to several useful resources and emerging
news concerning the LDAP protocol.

http://www.ldapman.org/articles/
A collection of articles that offer a good introduction to LDAP, including methods to design a
directory tree and customizing directory structures.

http://www.padl.com/
A website of developers of several useful LDAP tools.

12.1.6.3. Related Books


OpenLDAP by Example by John Terpstra and Benjamin Coles; Prentice Hall.
A collection of practical exercises in the OpenLDAP deployment.

Implementing LDAP by Mark Wilcox; Wrox Press, Inc.


A book covering LDAP from both the system administrator's and software developer's
perspective.

Understanding and Deploying LDAP Directory Services by Tim Howes et al.; Macmillan


Technical Publishing.
A book covering LDAP design principles, as well as its deployment in a production environment.


Chapter 13. File and Print Servers

13.1. Samba
Samba is an open source implementation of the Server Message Block (SMB) protocol. It allows the
networking of Microsoft Windows®, Linux, UNIX, and other operating systems together, enabling access
to Windows-based file and printer shares. Samba's use of SMB allows it to appear as a Windows server
to Windows clients.

Installing the samba package

In order to use Samba, first ensure the samba package is installed on your system by running, as
root:

~]# yum install samba

For more information on installing packages with Yum, refer to Section 5.2.4, “Installing
Packages”.

13.1.1. Introduction to Samba


The third major release of Samba, version 3.0.0, introduced numerous improvements from prior
versions, including:

The ability to join an Active Directory domain by means of the Lightweight Directory Access Protocol
(LDAP) and Kerberos
Built in Unicode support for internationalization
Support for all recent Microsoft Windows server and client versions to connect to Samba servers
without needing local registry hacking
Two new documents developed by the Samba.org team, which include a 400+ page reference
manual, and a 300+ page implementation and integration manual. For more information about these
published titles, refer to Section 13.1.8.2, “Related Books”.

13.1.1.1. Samba Features


Samba is a powerful and versatile server application. Even seasoned system administrators must know
its abilities and limitations before attempting installation and configuration.

What Samba can do:

Serve directory trees and printers to Linux, UNIX, and Windows clients
Assist in network browsing (with or without NetBIOS)
Authenticate Windows domain logins
Provide Windows Internet Name Service (WINS) name server resolution
Act as a Windows NT®-style Primary Domain Controller (PDC)
Act as a Backup Domain Controller (BDC) for a Samba-based PDC
Act as an Active Directory domain member server
Join a Windows NT/2000/2003/2008 PDC

What Samba cannot do:


Act as a BDC for a Windows PDC (and vice versa)


Act as an Active Directory domain controller

13.1.2. Samba Daemons and Related Services


The following is a brief introduction to the individual Samba daemons and services.

13.1.2.1. Samba Daemons


Samba is composed of three daemons (smbd, nmbd, and winbindd). Three services (smb, nmb, and
winbind) control how the daemons are started, stopped, and other service-related features. These
services act as different init scripts. Each daemon is listed in detail below, as well as which specific
service has control over it.

smbd

The smbd server daemon provides file sharing and printing services to Windows clients. In addition, it is
responsible for user authentication, resource locking, and data sharing through the SMB protocol. The
default ports on which the server listens for SMB traffic are TCP ports 139 and 445.

The smbd daemon is controlled by the smb service.

nmbd

The nmbd server daemon understands and replies to NetBIOS name service requests such as those
produced by SMB/Common Internet File System (CIFS) in Windows-based systems. These systems
include Windows 95/98/ME, Windows NT, Windows 2000, Windows XP, and LanManager clients. It also
participates in the browsing protocols that make up the Windows Network Neighborhood view. The
default port that the server listens to for NMB traffic is UDP port 137.

The nmbd daemon is controlled by the nmb service.

winbindd

The winbind service resolves user and group information on a server running Windows NT, 2000,
2003 or Windows Server 2008. This makes Windows user / group information understandable by UNIX
platforms. This is achieved by using Microsoft RPC calls, Pluggable Authentication Modules (PAM), and
the Name Service Switch (NSS). This allows Windows NT domain users to appear and operate as UNIX
users on a UNIX machine. Though bundled with the Samba distribution, the winbind service is
controlled separately from the smb service.

The winbindd daemon is controlled by the winbind service and does not require the smb service to
be started in order to operate. winbindd is also used when Samba is an Active Directory member, and
may also be used on a Samba domain controller (to implement nested groups and/or interdomain trust).
Because winbind is a client-side service used to connect to Windows NT-based servers, further
discussion of winbind is beyond the scope of this chapter.

For information on how to configure winbind for authentication, see the Red Hat Enterprise Linux 7
Authentication Guide.


Obtaining a list of utilities that are shipped with Samba

You may refer to Section 13.1.7, “Samba Distribution Programs” for a list of utilities included in the
Samba distribution.

13.1.3. Connecting to a Samba Share


You can use Nautilus to view available Samba shares on your network. To view a list of Samba
workgroups and domains on your network, select Places → Network from the GNOME panel, and select
your desired network. You can also type smb: in the File → Open Location bar of Nautilus to view the
workgroups/domains.

As shown in Figure 13.1, “SMB Workgroups in Nautilus”, an icon appears for each available SMB
workgroup or domain on the network.

Figure 13.1. SMB Workgroups in Nautilus

Double-click one of the workgroup/domain icons to view a list of computers within the workgroup/domain.


Figure 13.2. SMB Machines in Nautilus

As you can see from Figure 13.2, “SMB Machines in Nautilus”, an icon exists for each machine within the
workgroup. Double-click on an icon to view the Samba shares on the machine. If a username and
password combination is required, you are prompted for them.

Alternately, you can also specify the Samba server and sharename in the Location: bar for Nautilus
using the following syntax (replace <servername> and <sharename> with the appropriate values):

smb://<servername>/<sharename>

13.1.3.1. Command Line


To query the network for Samba servers, use the findsmb command. For each server found, it displays
its IP address, NetBIOS name, workgroup name, operating system, and SMB server version.

To connect to a Samba share from a shell prompt, type the following command:

~]$ smbclient //<hostname>/<sharename> -U <username>

Replace <hostname> with the hostname or IP address of the Samba server you want to connect to,
<sharename> with the name of the shared directory you want to browse, and <username> with the
Samba username for the system. Enter the correct password or press Enter if no password is required
for the user.

If you see the smb:\> prompt, you have successfully logged in. Once you are logged in, type help for
a list of commands. If you wish to browse the contents of your home directory, replace sharename with
your username. If the -U switch is not used, the username of the current user is passed to the Samba
server.

To exit smbclient, type exit at the smb:\> prompt.

13.1.3.2. Mounting the Share


Sometimes it is useful to mount a Samba share to a directory so that the files in the directory can be
treated as if they are part of the local file system.

To mount a Samba share to a directory, create a directory to mount it to (if it does not already exist), and
execute the following command as root:

~]# mount -t cifs //<servername>/<sharename> /mnt/point/ -o username=<username>,password=<password>

This command mounts <sharename> from <servername> in the local directory /mnt/point/.

Installing cifs-utils package

The mount.cifs utility is a separate RPM (independent from Samba). In order to use mount.cifs,
first ensure the cifs-utils package is installed on your system by running, as root:

~]# yum install cifs-utils

For more information on installing packages with Yum, refer to Section 5.2.4, “Installing
Packages”.
Note that the cifs-utils package also contains the cifs.upcall binary called by the kernel in order to
perform kerberized CIFS mounts. For more information on cifs.upcall, refer to man
cifs.upcall.

For more information about mounting a Samba share, refer to man mount.cifs.
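To have such a share mounted automatically at boot time, an entry can be added to /etc/fstab; the server, share, mount point, and credentials file below are placeholders:

//servername/sharename  /mnt/point  cifs  credentials=/etc/samba/cifs-credentials,_netdev  0 0

The referenced credentials file should be readable only by root and contain lines of the form username=<username> and password=<password>, which keeps the password out of the world-readable /etc/fstab.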

CIFS servers that require plain text passwords

Some CIFS servers require plain text passwords for authentication. Support for plain text
password authentication can be enabled using the following command:

~]# echo 0x37 > /proc/fs/cifs/SecurityFlags

WARNING: This operation can expose passwords by removing password encryption.

13.1.4. Configuring a Samba Server


The default configuration file (/etc/samba/smb.conf) allows users to view their home directories as a
Samba share. It also shares all printers configured for the system as Samba shared printers. In other
words, you can attach a printer to the system and print to it from the Windows machines on your
network.

13.1.4.1. Graphical Configuration


To configure Samba using a graphical interface, use one of the available Samba graphical user
interfaces. A list of available GUIs can be found at http://www.samba.org/samba/GUI/.

13.1.4.2. Command Line Configuration


Samba uses /etc/samba/smb.conf as its configuration file. If you change this configuration file, the
changes do not take effect until you restart the Samba daemon with the following command, as root:


~]# systemctl restart smb.service

To specify the Windows workgroup and a brief description of the Samba server, edit the following lines in
your /etc/samba/smb.conf file:

workgroup = WORKGROUPNAME
server string = BRIEF COMMENT ABOUT SERVER

Replace WORKGROUPNAME with the name of the Windows workgroup to which this machine should belong.
The BRIEF COMMENT ABOUT SERVER is optional and is used as the Windows comment about the Samba
system.

To create a Samba share directory on your Linux system, add the following section to your
/etc/samba/smb.conf file (after modifying it to reflect your needs and your system):

[sharename]
comment = Insert a comment here
path = /home/share/
valid users = tfox carole
public = no
writable = yes
printable = no
create mask = 0765

The above example allows the users tfox and carole to read and write to the directory
/home/share, on the Samba server, from a Samba client.
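After editing /etc/samba/smb.conf, the testparm utility that ships with Samba can be run to check the file for syntax errors before the service is restarted:

~]$ testparm

If testparm reports no problems, restart the smb service as described in Section 13.1.5, “Starting and Stopping Samba” to make the new share available.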

13.1.4.3. Encrypted Passwords


Encrypted passwords are enabled by default because it is more secure to do so. To create a user with
an encrypted password, use the command smbpasswd -a <username>.

13.1.5. Starting and Stopping Samba


To start a Samba server, type the following command in a shell prompt, as root:

~]# systemctl start smb.service

Setting up a domain member server

To set up a domain member server, you must first join the domain or Active Directory using the
net join command before starting the smb service.

To stop the server, type the following command in a shell prompt, as root:

~]# systemctl stop smb.service

The restart option is a quick way of stopping and then starting Samba. This is the most reliable way to
make configuration changes take effect after editing the configuration file for Samba. Note that the
restart option starts the daemon even if it was not running originally.

To restart the server, type the following command in a shell prompt, as root:


~]# systemctl restart smb.service

The condrestart (conditional restart) option only starts smb on the condition that it is currently
running. This option is useful for scripts, because it does not start the daemon if it is not running.

Applying the changes to the configuration

When the /etc/samba/smb.conf file is changed, Samba automatically reloads it after a few
minutes. Issuing a manual restart or reload is just as effective.

To conditionally restart the server, type the following command, as root:

~]# systemctl try-restart smb.service

A manual reload of the /etc/samba/smb.conf file can be useful in case of a failed automatic reload
by the smb service. To ensure that the Samba server configuration file is reloaded without restarting the
service, type the following command, as root:

~]# systemctl reload smb.service

By default, the smb service does not start automatically at boot time. To configure Samba to start at boot
time, type the following at a shell prompt as root:

~]# systemctl enable smb.service

See Chapter 7, Managing Services with systemd for more information regarding these tools.

13.1.6. Samba Network Browsing


Network browsing enables Windows and Samba servers to appear in the Windows Network
Neighborhood. Inside the Network Neighborhood, each server is represented by an icon; when an icon is
opened, the server's available shares and printers are displayed.

Network browsing capabilities require NetBIOS over TCP/IP. NetBIOS-based networking uses broadcast
(UDP) messaging to accomplish browse list management. Without NetBIOS and WINS as the primary
method for TCP/IP hostname resolution, other methods such as static files (/etc/hosts) or DNS, must
be used.

A domain master browser collates the browse lists from local master browsers on all subnets so that
browsing can occur between workgroups and subnets. Also, the domain master browser should
preferably be the local master browser for its own subnet.

13.1.6.1. Domain Browsing


By default, a Windows server PDC for a domain is also the domain master browser for that domain. A
Samba server must not be set up as a domain master server in this type of situation.

For subnets that do not include the Windows server PDC, a Samba server can be implemented as a
local master browser. Configuring the /etc/samba/smb.conf file for a local master browser (or no
browsing at all) in a domain controller environment is the same as workgroup configuration (see
Section 13.1.4, “Configuring a Samba Server”).
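
As an illustration only, a Samba server that should act as the local master browser for its subnet, while
never competing for the domain master role, might carry settings along the following lines in the
[global] section of /etc/samba/smb.conf (the os level value is arbitrary; it only needs to be high
enough to win browser elections against the other hosts on the subnet):

[global]
domain master = no
local master = yes
preferred master = yes
os level = 35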


13.1.6.2. WINS (Windows Internet Name Server)


Either a Samba server or a Windows NT server can function as a WINS server. When a WINS server is
used with NetBIOS enabled, UDP unicasts can be routed which allows name resolution across networks.
Without a WINS server, the UDP broadcast is limited to the local subnet and therefore cannot be routed
to other subnets, workgroups, or domains. If WINS replication is necessary, do not use Samba as your
primary WINS server, as Samba does not currently support WINS replication.

In a mixed NT/2000/2003/2008 server and Samba environment, it is recommended that you use the
Microsoft WINS capabilities. In a Samba-only environment, it is recommended that you use only one
Samba server for WINS.

The following is an example of the /etc/samba/smb.conf file in which the Samba server is serving as
a WINS server:

[global]
wins support = Yes

Using WINS

All servers (including Samba) should connect to a WINS server to resolve NetBIOS names.
Without WINS, browsing only occurs on the local subnet. Furthermore, even if a domain-wide list
is somehow obtained, hosts cannot be resolved for the client without WINS.
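
Conversely, Samba hosts that are not themselves the WINS server should point at it instead of enabling
wins support. A minimal sketch, assuming the WINS server is reachable at the illustrative address
192.0.2.1 (the two options must not be combined on the same host):

[global]
wins server = 192.0.2.1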

13.1.7. Samba Distribution Programs


findsmb

findsmb <subnet_broadcast_address>

The findsmb program is a Perl script which reports information about SMB-aware systems on a specific
subnet. If no subnet is specified, the local subnet is used. Items displayed include IP address, NetBIOS
name, workgroup or domain name, operating system, and version.

The following example shows the output of executing findsmb as any valid user on a system:

~]$ findsmb
IP ADDR NETBIOS NAME WORKGROUP/OS/VERSION
------------------------------------------------------------------
10.1.59.25 VERVE [MYGROUP] [Unix] [Samba 3.0.0-15]
10.1.59.26 STATION22 [MYGROUP] [Unix] [Samba 3.0.2-7.FC1]
10.1.56.45 TREK +[WORKGROUP] [Windows 5.0] [Windows 2000 LAN Manager]
10.1.57.94 PIXEL [MYGROUP] [Unix] [Samba 3.0.0-15]
10.1.57.137 MOBILE001 [WORKGROUP] [Windows 5.0] [Windows 2000 LAN Manager]
10.1.57.141 JAWS +[KWIKIMART] [Unix] [Samba 2.2.7a-security-rollup-fix]
10.1.56.159 FRED +[MYGROUP] [Unix] [Samba 3.0.0-14.3E]
10.1.59.192 LEGION *[MYGROUP] [Unix] [Samba 2.2.7-security-rollup-fix]
10.1.56.205 NANCYN +[MYGROUP] [Unix] [Samba 2.2.7a-security-rollup-fix]

net

net <protocol> <function> <misc_options> <target_options>

The net utility is similar to the net utility used for Windows and MS-DOS. The first argument is used to


specify the protocol to use when executing a command. The <protocol> option can be ads, rap, or
rpc for specifying the type of server connection. Active Directory uses ads, Win9x/NT3 uses rap, and
Windows NT4/2000/2003/2008 uses rpc. If the protocol is omitted, net automatically tries to determine
it.

The following example displays a list of the available shares for a host named wakko:

~]$ net -l share -S wakko


Password:
Enumerating shared resources (exports) on remote server:
Share name Type Description
---------- ---- -----------
data Disk Wakko data share
tmp Disk Wakko tmp share
IPC$ IPC IPC Service (Samba Server)
ADMIN$ IPC IPC Service (Samba Server)

The following example displays a list of Samba users for a host named wakko:

~]$ net -l user -S wakko


root password:
User name Comment
-----------------------------
andriusb Documentation
joe Marketing
lisa Sales

nmblookup

nmblookup <options> <netbios_name>

The nmblookup program resolves NetBIOS names into IP addresses. The program broadcasts its
query on the local subnet until the target machine replies.

The following example displays the IP address of the NetBIOS name trek:

~]$ nmblookup trek


querying trek on 10.1.59.255
10.1.56.45 trek<00>

pdbedit

pdbedit <options>

The pdbedit program manages accounts located in the SAM database. All back ends are supported
including smbpasswd, LDAP, and the tdb database library.

The following are examples of adding, deleting, and listing users:


~]$ pdbedit -a kristin
new password:
retype new password:
Unix username: kristin
NT username:
Account Flags: [U ]
User SID: S-1-5-21-1210235352-3804200048-1474496110-2012
Primary Group SID: S-1-5-21-1210235352-3804200048-1474496110-2077
Full Name:
Home Directory: \\wakko\kristin
HomeDir Drive:
Logon Script:
Profile Path: \\wakko\kristin\profile
Domain: WAKKO
Account desc:
Workstations:
Munged dial:
Logon time: 0
Logoff time: Mon, 18 Jan 2038 22:14:07 GMT
Kickoff time: Mon, 18 Jan 2038 22:14:07 GMT
Password last set: Thu, 29 Jan 2004 08:29:28 GMT
Password can change: Thu, 29 Jan 2004 08:29:28 GMT
Password must change: Mon, 18 Jan 2038 22:14:07 GMT
~]$ pdbedit -v -L kristin
Unix username: kristin
NT username:
Account Flags: [U ]
User SID: S-1-5-21-1210235352-3804200048-1474496110-2012
Primary Group SID: S-1-5-21-1210235352-3804200048-1474496110-2077
Full Name:
Home Directory: \\wakko\kristin
HomeDir Drive:
Logon Script:
Profile Path: \\wakko\kristin\profile
Domain: WAKKO
Account desc:
Workstations:
Munged dial:
Logon time: 0
Logoff time: Mon, 18 Jan 2038 22:14:07 GMT
Kickoff time: Mon, 18 Jan 2038 22:14:07 GMT
Password last set: Thu, 29 Jan 2004 08:29:28 GMT
Password can change: Thu, 29 Jan 2004 08:29:28 GMT
Password must change: Mon, 18 Jan 2038 22:14:07 GMT
~]$ pdbedit -L
andriusb:505:
joe:503:
lisa:504:
kristin:506:
~]$ pdbedit -x joe
~]$ pdbedit -L
andriusb:505:
lisa:504:
kristin:506:

rpcclient

rpcclient <server> <options>

The rpcclient program issues administrative commands using Microsoft RPCs, which provide access
to the Windows administration graphical user interfaces (GUIs) for systems management. This is most
often used by advanced users that understand the full complexity of Microsoft RPCs.


smbcacls

smbcacls <//server/share> <filename> <options>

The smbcacls program modifies Windows ACLs on files and directories shared by a Samba server or a
Windows server.

smbclient

smbclient <//server/share> <password> <options>

The smbclient program is a versatile UNIX client which provides functionality similar to ftp.

smbcontrol

smbcontrol -i <options>

smbcontrol <options> <destination> <messagetype> <parameters>

The smbcontrol program sends control messages to running smbd, nmbd, or winbindd daemons.
Executing smbcontrol -i runs commands interactively until a blank line or a 'q' is entered.

smbpasswd

smbpasswd <options> <username> <password>

The smbpasswd program manages encrypted passwords. This program can be run by a superuser to
change any user's password as well as by an ordinary user to change their own Samba password.

smbspool

smbspool <job> <user> <title> <copies> <options> <filename>

The smbspool program is a CUPS-compatible printing interface to Samba. Although designed for use
with CUPS printers, smbspool can work with non-CUPS printers as well.

smbstatus

smbstatus <options>

The smbstatus program displays the status of current connections to a Samba server.

smbtar

smbtar <options>

The smbtar program performs backup and restores of Windows-based share files and directories to a
local tape archive. Though similar to the tar command, the two are not compatible.

testparm

testparm <options> <filename> <hostname IP_address>

The testparm program checks the syntax of the /etc/samba/smb.conf file. If your
/etc/samba/smb.conf file is in the default location (/etc/samba/smb.conf), you do not need to
specify the location. Specifying the hostname and IP address to the testparm program verifies that the
hosts.allow and hosts.deny files are configured correctly. The testparm program also displays a
summary of your /etc/samba/smb.conf file and the server's role (stand-alone, domain, etc.) after
testing. This is convenient when debugging as it excludes comments and concisely presents information
for experienced administrators to read.

For example:

~]$ testparm
Load smb config files from /etc/samba/smb.conf
Processing section "[homes]"
Processing section "[printers]"
Processing section "[tmp]"
Processing section "[html]"
Loaded services file OK.
Server role: ROLE_STANDALONE
Press enter to see a dump of your service definitions
<enter>
# Global parameters
[global]
workgroup = MYGROUP
server string = Samba Server
security = SHARE
log file = /var/log/samba/%m.log
max log size = 50
socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
dns proxy = No
[homes]
comment = Home Directories
read only = No
browseable = No
[printers]
comment = All Printers
path = /var/spool/samba
printable = Yes
browseable = No
[tmp]
comment = Wakko tmp
path = /tmp
guest only = Yes
[html]
comment = Wakko www
path = /var/www/html
force user = andriusb
force group = users
read only = No
guest only = Yes

wbinfo

wbinfo <options>

The wbinfo program displays information from the winbindd daemon. The winbindd daemon must
be running for wbinfo to work.

13.1.8. Additional Resources


The following sections give you the means to explore Samba in greater detail.


13.1.8.1. Installed Documentation

Installing the samba-doc package

In order to use the Samba documentation, first ensure the samba-doc package is installed on
your system by running, as root:

~]# yum install samba-doc

For more information on installing packages with Yum, refer to Section 5.2.4, “Installing
Packages”.

/usr/share/doc/samba-<version-number>/ — All additional files included with the Samba
distribution. This includes all helper scripts, sample configuration files, and documentation. This
directory also contains online versions of The Official Samba-3 HOWTO-Collection and Samba-3 by
Example, both of which are cited below.
Refer to the following man pages for detailed information on specific Samba features:
smb.conf
samba
smbd
nmbd
winbind

13.1.8.2. Related Books

The Official Samba-3 HOWTO-Collection by John H. Terpstra and Jelmer R. Vernooij; Prentice Hall
— The official Samba-3 documentation as issued by the Samba development team. This is more of a
reference guide than a step-by-step guide.
Samba-3 by Example by John H. Terpstra; Prentice Hall — This is another official release issued by
the Samba development team which discusses detailed examples of OpenLDAP, DNS, DHCP, and
printing configuration files. This has step-by-step related information that helps in real-world
implementations.
Using Samba, 2nd Edition by Jay Ts, Robert Eckstein, and David Collier-Brown; O'Reilly — A good
resource for novice to advanced users, which includes comprehensive reference material.

13.1.8.3. Useful Websites

http://www.samba.org/ — Homepage for the Samba distribution and all official documentation created
by the Samba development team. Many resources are available in HTML and PDF formats, while
others are only available for purchase. Although many of these links are not Red Hat Enterprise Linux
specific, some concepts may apply.
http://samba.org/samba/archives.html — Active email lists for the Samba community. Enabling digest
mode is recommended due to high levels of list activity.
Samba newsgroups — Samba threaded newsgroups, such as gmane.org, that use the NNTP
protocol are also available. This is an alternative to receiving mailing list emails.

13.2. FTP
The File Transfer Protocol (FTP) is one of the oldest and most commonly used protocols found on the


Internet today. Its purpose is to reliably transfer files between computer hosts on a network without
requiring the user to log directly in to the remote host or to have knowledge of how to use the remote
system. It allows users to access files on remote systems using a standard set of simple commands.

This section outlines the basics of the FTP protocol and introduces vsftpd, the primary FTP server
shipped with Red Hat Enterprise Linux.

13.2.1. The File Transfer Protocol


FTP uses a client-server architecture to transfer files using the TCP network protocol. Because FTP is a
rather old protocol, it uses unencrypted username and password authentication. For this reason, it is
considered an insecure protocol and should not be used unless absolutely necessary. However, because
FTP is so prevalent on the Internet, it is often required for sharing files with the public. System
administrators, therefore, should be aware of FTP's unique characteristics.

The following chapters include information on how to configure vsftpd, the primary FTP server shipped
with Red Hat Enterprise Linux, to establish connections secured by SSL and how to secure an FTP
server with the help of SELinux. A good substitute for FTP is sftp from the OpenSSH suite of tools. For
information about configuring OpenSSH and about the SSH protocol in general, refer to Chapter 8,
OpenSSH.

Unlike most protocols used on the Internet, FTP requires multiple network ports to work properly. When
an FTP client application initiates a connection to an FTP server, it opens port 21 on the server —
known as the command port. This port is used to issue all commands to the server. Any data requested
from the server is returned to the client via a data port. The port number for data connections, and the
way in which data connections are initialized, vary depending upon whether the client requests the data
in active or passive mode.

The following defines these modes:

active mode
Active mode is the original method used by the FTP protocol for transferring data to the client
application. When an active-mode data transfer is initiated by the FTP client, the server opens a
connection from port 20 on the server to the IP address and a random, unprivileged port
(greater than 1024) specified by the client. This arrangement means that the client machine
must be allowed to accept connections over any port above 1024. With the growth of insecure
networks, such as the Internet, the use of firewalls for protecting client machines is now
prevalent. Because these client-side firewalls often deny incoming connections from active-
mode FTP servers, passive mode was devised.

passive mode
Passive mode, like active mode, is initiated by the FTP client application. When requesting data
from the server, the FTP client indicates it wants to access the data in passive mode and the
server provides the IP address and a random, unprivileged port (greater than 1024) on the
server. The client then connects to that port on the server to download the requested
information.

While passive mode does resolve issues for client-side firewall interference with data
connections, it can complicate administration of the server-side firewall. You can reduce the
number of open ports on a server by limiting the range of unprivileged ports on the FTP server.
This also simplifies the process of configuring firewall rules for the server. Refer to the vsftpd
Configuration Files and Options chapter of the Red Hat Enterprise Linux 7 Administrator's
Reference Guide for more information about limiting passive ports.
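
For instance, the passive data ports can be pinned to a fixed range with the following vsftpd.conf
directives (the port numbers are illustrative), after which only that range needs to be opened in the
server-side firewall:

pasv_enable=YES
pasv_min_port=50000
pasv_max_port=50100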


13.2.2. The vsftpd Server


The Very Secure FTP Daemon (vsftpd) is designed from the ground up to be fast, stable, and, most
importantly, secure. vsftpd is the only stand-alone FTP server distributed with Red Hat
Enterprise Linux, due to its ability to handle large numbers of connections efficiently and securely.

The security model used by vsftpd has three primary aspects:

Strong separation of privileged and non-privileged processes — Separate processes handle different
tasks, and each of these processes runs with the minimal privileges required for the task.
Tasks requiring elevated privileges are handled by processes with the minimal privilege necessary —
By taking advantage of capabilities found in the libcap library, tasks that usually require full root
privileges can be executed more safely from a less privileged process.
Most processes run in a chroot jail — Whenever possible, processes are change-rooted to the
directory being shared; this directory is then considered a chroot jail. For example, if the
/var/ftp/ directory is the primary shared directory, vsftpd reassigns /var/ftp/ to the new root
directory, known as /. This disallows any potential malicious hacker activities for any directories not
contained in the new root directory.

Use of these security practices has the following effect on how vsftpd deals with requests:

The parent process runs with the least privileges required — The parent process dynamically
calculates the level of privileges it requires to minimize the level of risk. Child processes handle direct
interaction with the FTP clients and run with as close to no privileges as possible.
All operations requiring elevated privileges are handled by a small parent process — Much like the
Apache HTTP Server, vsftpd launches unprivileged child processes to handle incoming
connections. This allows the privileged, parent process to be as small as possible and handle
relatively few tasks.
All requests from unprivileged child processes are distrusted by the parent process —
Communication with child processes is received over a socket, and the validity of any information
from child processes is checked before being acted on.
Most interactions with FTP clients are handled by unprivileged child processes in a chroot jail —
Because these child processes are unprivileged and only have access to the directory being shared,
any crashed processes only allow the attacker access to the shared files.

13.2.2.1. Starting and Stopping vsftpd


To start the vsftpd service in the current session, type the following at a shell prompt as root:

~]# systemctl start vsftpd.service

To stop the service in the current session, type as root:

~]# systemctl stop vsftpd.service

To restart the vsftpd service, run the following command as root:

~]# systemctl restart vsftpd.service

This command stops and immediately starts the vsftpd service, which is the most efficient way to make
configuration changes take effect after editing the configuration file for this FTP server. Alternatively, you
can use the following command to restart the vsftpd service only if it is already running:

~]# systemctl try-restart vsftpd.service

By default, the vsftpd service does not start automatically at boot time. To configure the vsftpd
service to start at boot time, type the following at a shell prompt as root:

~]# systemctl enable vsftpd.service


ln -s '/usr/lib/systemd/system/vsftpd.service' '/etc/systemd/system/multi-
user.target.wants/vsftpd.service'

For more information on how to manage system services in Red Hat Enterprise Linux 7, see Chapter 7,
Managing Services with systemd.

13.2.2.2. Starting Multiple Copies of vsftpd


Sometimes, one computer is used to serve multiple FTP domains. This is a technique called
multihoming. One way to multihome using vsftpd is by running multiple copies of the daemon, each
with its own configuration file.

To do this, first assign all relevant IP addresses to network devices or alias network devices on the
system. For more information about configuring network devices, device aliases, and additional
information about network configuration scripts, refer to the Red Hat Enterprise Linux 7 Networking
Guide.

Next, the DNS server for the FTP domains must be configured to reference the correct machine. For
information about BIND, the DNS protocol implementation used in Red Hat Enterprise Linux, and its
configuration files, refer to the Red Hat Enterprise Linux 7 Networking Guide.

For vsftpd to answer requests on different IP addresses, multiple copies of the daemon must be
running. To facilitate launching multiple instances of the vsftpd daemon, a special systemd service unit
(vsftpd@.service) for launching vsftpd as an instantiated service is supplied in the vsftpd package.

In order to make use of this service unit, a separate vsftpd configuration file for each required instance
of the FTP server must be created and placed in the /etc/vsftpd/ directory. Note that each of these
configuration files must have a unique name (such as /etc/vsftpd/vsftpd-site-2.conf) and must
be readable and writable only by the root user.

Within each configuration file for each FTP server listening on an IPv4 network, the following directive
must be unique:

listen_address=N.N.N.N

Replace N.N.N.N with a unique IP address for the FTP site being served. If the site is using IPv6, use
the listen_address6 directive instead.

Once there are multiple configuration files present in the /etc/vsftpd/ directory, individual instances
of the vsftpd daemon can be started by executing the following command as root:

~]# systemctl start vsftpd@configuration-file-name.service

In the above command, replace configuration-file-name with the unique name of the requested
server's configuration file, such as vsftpd-site-2. Note that the configuration file's .conf extension
should not be included in the command.


If you wish to start several instances of the vsftpd daemon at once, you can make use of a systemd
target unit file (vsftpd.target), which is supplied in the vsftpd package. This systemd target causes
an independent vsftpd daemon to be launched for each available vsftpd configuration file in the
/etc/vsftpd/ directory. Execute the following command as root to enable the target:

~]# systemctl enable vsftpd.target


ln -s '/usr/lib/systemd/system/vsftpd.target' '/etc/systemd/system/multi-
user.target.wants/vsftpd.target'

The above command configures the systemd service manager to launch the vsftpd service (along with
the configured vsftpd server instances) at boot time. To start the service immediately, without
rebooting the system, execute the following command as root:

~]# systemctl start vsftpd.target

Refer to Section 7.3, “Working with systemd Targets” for more information on how to use systemd
targets to manage services.

Other directives to consider altering on a per-server basis are:

anon_root
local_root
vsftpd_log_file
xferlog_file
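
Putting this together, a per-instance configuration file might look roughly like the following sketch; the
file name, address, and paths are illustrative:

# /etc/vsftpd/vsftpd-site-2.conf
listen=YES
listen_address=192.0.2.2
anon_root=/srv/ftp/site-2
local_root=/srv/ftp/site-2
vsftpd_log_file=/var/log/vsftpd-site-2.log
xferlog_file=/var/log/xferlog-site-2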

For a detailed list of directives that can be used in the configuration file of the vsftpd daemon, refer to
the vsftpd Configuration Options chapter of the Red Hat Enterprise Linux 7 Administrator's Reference
Guide.

13.2.2.3. Encrypting vsftpd Connections Using SSL


In order to counter the inherently insecure nature of FTP, which transmits usernames, passwords, and
data without encryption by default, the vsftpd daemon can be configured to utilize the SSL or TLS
protocols to authenticate connections and encrypt all transfers. Note that an FTP client that supports
SSL is needed to communicate with vsftpd with SSL enabled.

Set the ssl_enable configuration directive in the vsftpd.conf file to YES to turn on SSL support.
The default settings of other SSL-related directives that become automatically active when the
ssl_enable option is enabled provide for a reasonably well-configured SSL setup. This includes,
among other things, the requirement to use the TLS v1 protocol for all connections or forcing all non-
anonymous logins to use SSL for sending passwords and data transfers.
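
As a sketch only, a minimal SSL-enabled configuration could therefore add directives along these lines
to vsftpd.conf; the certificate and key paths are illustrative and must point to files you have actually
generated or obtained:

ssl_enable=YES
rsa_cert_file=/etc/pki/tls/certs/vsftpd.pem
rsa_private_key_file=/etc/pki/tls/private/vsftpd.key
force_local_logins_ssl=YES
force_local_data_ssl=YES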

Refer to vsftpd.conf(5) for other SSL-related configuration directives for fine-tuning the use of SSL
by vsftpd. Also, see the vsftpd Configuration Options chapter of the Red Hat Enterprise Linux 7
Administrator's Reference Guide for a description of other commonly used vsftpd.conf configuration
directives.

13.2.2.4. SELinux Policy for vsftpd


The SELinux policy governing the vsftpd daemon (as well as other ftpd processes) defines a
mandatory access control, which, by default, is based on least access required. In order to allow the FTP
daemon to access specific files or directories, appropriate labels need to be assigned to them.


For example, in order to be able to share files anonymously, the public_content_t label must be
assigned to the files and directories to be shared. You can do this using the chcon command as root:

~]# chcon -R -t public_content_t /path/to/directory

In the above command, replace /path/to/directory with the path to the directory to which you wish to
assign the label. Similarly, if you want to set up a directory for uploading files, you need to assign that
particular directory the public_content_rw_t label. In addition to that, the
allow_ftpd_anon_write SELinux Boolean option must be set to 1. Use the setsebool command
as root to do that:

~]# setsebool -P allow_ftpd_anon_write=1

If you want local users to be able to access their home directories through FTP, which is the default
setting on Red Hat Enterprise Linux 7, the ftp_home_dir Boolean option needs to be set to 1. If
vsftpd is to be allowed to run in standalone mode, which is also enabled by default on Red Hat
Enterprise Linux 7, the ftpd_is_daemon option needs to be set to 1 as well.
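
For example, to keep FTP access to home directories enabled persistently, the Boolean could be set as
follows, as root:

~]# setsebool -P ftp_home_dir=1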

Refer to ftpd_selinux(8) for more information, including examples of other useful labels and
Boolean options, on how to configure the SELinux policy pertaining to FTP. Also, see the Red Hat
Enterprise Linux 7 SELinux User's and Administrator's Guide for more detailed information about
SELinux in general.

13.2.3. Additional Resources


For more information about vsftpd, refer to the following resources.

13.2.3.1. Installed Documentation

The /usr/share/doc/vsftpd-version-number/ directory — Replace version-number with the
installed version of the vsftpd package. This directory contains a README with basic information
about the software. The TUNING file contains basic performance-tuning tips and the SECURITY/
directory contains information about the security model employed by vsftpd.
vsftpd-related man pages — There are a number of man pages for the daemon and the
configuration files. The following lists some of the more important man pages.
Server Applications
man vsftpd — Describes available command-line options for vsftpd.

Configuration Files
man vsftpd.conf — Contains a detailed list of options available within the
configuration file for vsftpd.
man 5 hosts_access — Describes the format and options available within the TCP
wrappers configuration files: hosts.allow and hosts.deny.

Interaction with SELinux


man ftpd_selinux — Contains a description of the SELinux policy governing ftpd
processes as well as an explanation of the way SELinux labels need to be assigned and
Booleans set.

13.2.3.2. Online Documentation


About vsftpd and FTP in General


http://vsftpd.beasts.org/ — The vsftpd project page is a great place to locate the latest
documentation and to contact the author of the software.
http://slacksite.com/other/ftp.html — This website provides a concise explanation of the
differences between active and passive-mode FTP.

Red Hat Enterprise Linux Documentation

Red Hat Enterprise Linux 7 Networking Guide — The Networking Guide for Red Hat
Enterprise Linux 7 documents relevant information regarding the configuration and
administration of network interfaces, networks, and network services in this system. It
provides an introduction to the hostnamectl utility and explains how to use it to view and
set host names on the command line, both locally and remotely.
Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide — The SELinux User's
and Administrator's Guide for Red Hat Enterprise Linux 7 describes the basic principles of
SELinux and documents in detail how to configure and use SELinux with various services
such as the Apache HTTP Server, Postfix, PostgreSQL, or OpenShift. It explains how to
configure SELinux access permissions for system services managed by systemd.
Red Hat Enterprise Linux 7 Security Guide — The Security Guide for Red Hat Enterprise
Linux 7 assists users and administrators in learning the processes and practices of securing
their workstations and servers against local and remote intrusion, exploitation, and malicious
activity. It also explains how to secure critical system services.

Relevant RFC Documents

RFC 0959 — The original Request for Comments (RFC) of the FTP protocol from the IETF.
RFC 1123 — The small FTP-related section extends and clarifies RFC 0959.
RFC 2228 — FTP security extensions. vsftpd implements the small subset needed to
support TLS and SSL connections.
RFC 2389 — Proposes FEAT and OPTS commands.
RFC 2428 — IPv6 support.

13.3. Printer Configuration


The Printer Configuration tool is used for configuring printers, maintaining printer configuration files,
print spool directories, and print filters, and for managing printer classes.

The tool is based on the Common Unix Printing System (CUPS). If you upgraded the system from a
previous Red Hat Enterprise Linux version that used CUPS, the upgrade process preserved the
configured printers.

Using the CUPS web application or command line tools

You can perform the same and additional operations on printers directly from the CUPS web
application or command line. To access the application, in a web browser, go to
http://localhost:631/. For CUPS manuals, refer to the links on the Home tab of the web site.


13.3.1. Starting the Printer Configuration Tool


With the Printer Configuration tool you can perform various operations on existing printers and set up
new printers. However, you can also use CUPS directly (go to http://localhost:631/ to access CUPS).

On the panel, click System → Administration → Printing, or run the system-config-printer
command from the command line to start the tool.

The Printer Configuration window depicted in Figure 13.3, “Printer Configuration window”
appears.

Figure 13.3. Printer Configuration window

13.3.2. Starting Printer Setup


The printer setup process varies depending on the printer queue type.

If you are setting up a local printer connected with USB, the printer is discovered and added
automatically. You will be prompted to confirm the packages to be installed and provide the root
password. Local printers connected with other port types and network printers need to be set up
manually.

Follow this procedure to start a manual printer setup:

1. Start the Printer Configuration tool (refer to Section 13.3.1, “Starting the Printer Configuration
Tool”).
2. Go to Server → New → Printer.
3. In the Authenticate dialog box, type the root user password and confirm.
4. Select the printer connection type and provide its details in the area on the right.

13.3.3. Adding a Local Printer


Follow this procedure to add a local printer connected with other than a serial port:

1. Open the New Printer dialog (refer to Section 13.3.2, “Starting Printer Setup”).
2. If the device does not appear automatically, select the port to which the printer is connected in the
list on the left (such as Serial Port #1 or LPT #1).
3. On the right, enter the connection properties:
for Other


URI (for example file:/dev/lp0)

for Serial Port


Baud Rate
Parity
Data Bits
Flow Control

Figure 13.4. Adding a local printer

4. Click Forward.
5. Select the printer model. Refer to Section 13.3.8, “Selecting the Printer Model and Finishing” for
details.

13.3.4. Adding an AppSocket/HP JetDirect printer


Follow this procedure to add an AppSocket/HP JetDirect printer:

1. Open the New Printer dialog (refer to Section 13.3.1, “Starting the Printer Configuration Tool”).
2. In the list on the left, select Network Printer → AppSocket/HP JetDirect.
3. On the right, enter the connection settings:
Hostname


Printer hostname or IP address.

Port Number


Printer port listening for print jobs (9100 by default).

Figure 13.5. Adding a JetDirect printer

4. Click Forward.
5. Select the printer model. Refer to Section 13.3.8, “Selecting the Printer Model and Finishing” for
details.

13.3.5. Adding an IPP Printer


An IPP printer is a printer attached to a different system on the same TCP/IP network. The system this
printer is attached to may either be running CUPS or simply configured to use IPP.

If a firewall is enabled on the printer server, then the firewall must be configured to allow incoming TCP
connections on port 631. Note that the CUPS browsing protocol allows client machines to discover
shared CUPS queues automatically. To enable this, the firewall on the client machine must be configured
to allow incoming UDP packets on port 631.

Follow this procedure to add an IPP printer:

1. Open the New Printer dialog (refer to Section 13.3.2, “Starting Printer Setup”).


2. In the list of devices on the left, select Network Printer and Internet Printing Protocol (ipp) or
Internet Printing Protocol (https).
3. On the right, enter the connection settings:
Host
The hostname of the IPP printer.

Queue
The queue name to be given to the new queue (if the box is left empty, a name based on
the device node will be used).

Figure 13.6. Adding an IPP printer

4. Click Forward to continue.


5. Select the printer model. Refer to Section 13.3.8, “Selecting the Printer Model and Finishing” for
details.

13.3.6. Adding an LPD/LPR Host or Printer


Follow this procedure to add an LPD/LPR host or printer:

1. Open the New Printer dialog (refer to Section 13.3.2, “Starting Printer Setup”).
2. In the list of devices on the left, select Network Printer → LPD/LPR Host or Printer.
3. On the right, enter the connection settings:


Host
The hostname of the LPD/LPR printer or host.
Optionally, click Probe to find queues on the LPD host.

Queue
The queue name to be given to the new queue (if the box is left empty, a name based on
the device node will be used).

Figure 13.7. Adding an LPD/LPR printer

4. Click Forward to continue.


5. Select the printer model. Refer to Section 13.3.8, “Selecting the Printer Model and Finishing” for
details.

13.3.7. Adding a Samba (SMB) printer


Follow this procedure to add a Samba printer:

1. Open the New Printer dialog (refer to Section 13.3.2, “Starting Printer Setup”).
2. In the list on the left, select Network Printer → Windows Printer via SAMBA.
3. Enter the SMB address in the smb:// field. Use the format computer name/printer share. In
Figure 13.8, “Adding a SMB printer”, the computer name is dellbox and the printer share is
r2.


Figure 13.8. Adding a SMB printer

4. Click Browse to see the available workgroups/domains. To display only queues of a particular
host, type in the host name (NetBIOS name) and click Browse.
5. Select either of the options:
A. Prompt user if authentication is required: username and password are
collected from the user when printing a document.
B. Set authentication details now: provide authentication information now so it is not
required later. In the Username field, enter the username to access the printer. This user
must exist on the SMB system, and the user must have permission to access the printer. The
default user name is typically guest for Windows servers, or nobody for Samba servers.
6. Enter the Password (if required) for the user specified in the Username field.

Be careful when choosing a password

Samba printer usernames and passwords are stored in the printer server as unencrypted
files readable by root and lpd. Thus, other users that have root access to the printer server
can view the username and password you use to access the Samba printer.
As such, when you choose a username and password to access a Samba printer, it is
advisable that you choose a password that is different from what you use to access your
local Red Hat Enterprise Linux system.
If there are files shared on the Samba print server, it is recommended that they also use a
password different from what is used by the print queue.


7. Click Verify to test the connection. Upon successful verification, a dialog box appears confirming
printer share accessibility.
8. Click Forward.
9. Select the printer model. Refer to Section 13.3.8, “Selecting the Printer Model and Finishing” for
details.

13.3.8. Selecting the Printer Model and Finishing


Once you have properly selected a printer connection type, the system attempts to acquire a driver. If
the process fails, you can locate or search for the driver resources manually.

Follow this procedure to provide the printer driver and finish the installation:

1. In the window displayed after the automatic driver detection has failed, select one of the following
options:
A. Select a Printer from database — the system chooses a driver based on the
selected make of your printer from the list of Makes. If your printer model is not listed, choose
Generic.
B. Provide PPD file — the system uses the provided PostScript Printer Description (PPD) file
for installation. A PPD file may also be delivered with your printer, as it is normally provided by
the manufacturer. If the PPD file is available, you can choose this option and use the browser
bar below the option description to select the PPD file.
C. Search for a printer driver to download — enter the make and model of your
printer into the Make and model field to search on OpenPrinting.org for the appropriate
packages.


Figure 13.9. Selecting a printer brand

2. Depending on your previous choice provide details in the area displayed below:
Printer brand for the Select printer from database option.
PPD file location for the Provide PPD file option.
Printer make and model for the Search for a printer driver to download option.
3. Click Forward to continue.
4. If applicable for your option, the window shown in Figure 13.10, “Selecting a printer model” appears.
Choose the corresponding model in the Models column on the left.

Selecting a printer driver

On the right, the recommended printer driver is automatically selected; however, you can
select another available driver. The print driver processes the data that you want to print
into a format the printer can understand. Since a local printer is attached directly to your
computer, you need a printer driver to process the data that is sent to the printer.

Figure 13.10. Selecting a printer model

5. Click Forward.
6. Under Describe Printer, enter a unique name for the printer in the Printer Name field.


The printer name can contain letters, numbers, dashes (-), and underscores (_); it must not
contain any spaces. You can also use the Description and Location fields to add further
printer information. Both fields are optional, and may contain spaces.

Figure 13.11. Printer setup

7. Click Apply to confirm your printer configuration and add the print queue if the settings are
correct. Click Back to modify the printer configuration.
8. After the changes are applied, a dialog box appears allowing you to print a test page. Click Yes to
print a test page now. Alternatively, you can print a test page later as described in Section 13.3.9,
“Printing a Test Page”.

13.3.9. Printing a Test Page


After you have set up a printer or changed a printer configuration, print a test page to make sure the
printer is functioning properly:

1. Right-click the printer in the Printing window and click Properties.


2. In the Properties window, click Settings on the left.
3. On the displayed Settings tab, click the Print Test Page button.

13.3.10. Modifying Existing Printers


To delete an existing printer, in the Printer Configuration window, select the printer and go to
Printer → Delete. Confirm the printer deletion. Alternatively, press the Delete key.


To set the default printer, right-click the printer in the printer list and click the Set as Default button in the
context menu.

13.3.10.1. The Settings Page


To change printer driver configuration, double-click the corresponding name in the Printer list and
click the Settings label on the left to display the Settings page.

You can modify printer settings such as make and model, print a test page, change the device location
(URI), and more.

Figure 13.12. Settings page

13.3.10.2. The Policies Page


Click the Policies button on the left to change settings in printer state and print output.

You can select the printer states, configure the Error Policy of the printer (you can decide to abort
the print job, retry, or stop it if an error occurs).

You can also create a banner page (a page that describes aspects of the print job such as the originating
printer, the username from which the job originated, and the security status of the document being
printed): click the Starting Banner or Ending Banner drop-down menu and choose the option that best
describes the nature of the print jobs (such as topsecret, classified, or confidential).

13.3.10.2.1. Sharing Printers


On the Policies page, you can mark a printer as shared: if a printer is shared, other users on the
network can use it. To allow the sharing function for printers, go to Server → Settings and select
Publish shared printers connected to this system .

Finally, make sure that the firewall allows incoming TCP connections to port 631 (that is Network Printing
Server (IPP) in system-config-firewall).


Figure 13.13. Policies page

13.3.10.2.2. The Access Control Page


You can change user-level access to the configured printer on the Access Control page. Click the
Access Control label on the left to display the page. Select either Allow printing for
everyone except these users or Deny printing for everyone except these users
and define the user set below: enter the user name in the text box and click the Add button to add the
user to the user set.

Figure 13.14. Access Control page


13.3.10.2.3. The Printer Options Page


The Printer Options page contains various configuration options for the printer media and output,
and its content may vary from printer to printer. It contains general printing, paper, quality, and printing
size settings.

Figure 13.15. Printer Options page

13.3.10.2.4. Job Options Page


On the Job Options page, you can detail the printer job options. Click the Job Options label on the
left to display the page. Edit the default settings to apply custom job options, such as number of copies,
orientation, pages per side, scaling (increase or decrease the size of the printable area, which can be
used to fit an oversize print area onto a smaller physical sheet of print medium), detailed text options,
and custom job options.


Figure 13.16. Job Options page

13.3.10.2.5. Ink/Toner Levels Page


The Ink/Toner Levels page contains details on toner status if available and printer status messages.
Click the Ink/Toner Levels label on the left to display the page.


Figure 13.17. Ink/Toner Levels page

13.3.10.3. Managing Print Jobs


When you send a print job to the printer daemon, such as printing a text file from Emacs or printing an
image from GIMP, the print job is added to the print spool queue. The print spool queue is a list of print
jobs that have been sent to the printer and information about each print request, such as the status of
the request, the job number, and more.

During the printing process, the Printer Status icon appears in the Notification Area on the panel.
To check the status of a print job, click the Printer Status, which displays a window similar to
Figure 13.18, “GNOME Print Status”.

Figure 13.18. GNOME Print Status

To cancel, hold, release, reprint or authenticate a print job, select the job in the GNOME Print Status
and on the Job menu, click the respective command.

To view the list of print jobs in the print spool from a shell prompt, type the command lpstat -o. The
last few lines look similar to the following:


Example 13.1. Example of lpstat -o output

$ lpstat -o
Charlie-60 twaugh 1024 Tue 08 Feb 2011 16:42:11 GMT
Aaron-61 twaugh 1024 Tue 08 Feb 2011 16:42:44 GMT
Ben-62 root 1024 Tue 08 Feb 2011 16:45:42 GMT

If you want to cancel a print job, find the job number of the request with the command lpstat -o and
then use the command cancel job_number. For example, cancel 60 would cancel the print job in
Example 13.1, “Example of lpstat -o output”. You cannot cancel print jobs that were started by other
users with the cancel command. However, you can enforce deletion of such a job by issuing the cancel
-U root job_number command. To prevent such canceling, change the printer operation policy to
Authenticated to force root authentication.
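
For example, to remove job number 62 from Example 13.1, “Example of lpstat -o output” regardless of
who submitted it, root could run:

~]# cancel -U root 62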

You can also print a file directly from a shell prompt. For example, the command lp sample.txt prints
the text file sample.txt. The print filter determines what type of file it is and converts it into a format the
printer can understand.
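
As a further illustration, lp also accepts options such as a destination queue and a copy count; the
queue name below is hypothetical:

~]$ lp -d office-laser -n 2 sample.txt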

13.3.11. Additional Resources


To learn more about printing on Red Hat Enterprise Linux, refer to the following resources.

13.3.11.1. Installed Documentation


man lp
The manual page for the lp command that allows you to print files from the command line.

man cancel
The manual page for the command line utility to remove print jobs from the print queue.

man mpage
The manual page for the command line utility to print multiple pages on one sheet of paper.

man cupsd
The manual page for the CUPS printer daemon.

man cupsd.conf
The manual page for the CUPS printer daemon configuration file.

man classes.conf
The manual page for the class configuration file for CUPS.

man lpstat
The manual page for the lpstat command, which displays status information about classes,
jobs, and printers.


13.3.11.2. Useful Websites


http://www.linuxprinting.org/
GNU/Linux Printing contains a large amount of information about printing in Linux.

http://www.cups.org/
Documentation, FAQs, and newsgroups about CUPS.


Chapter 14. Configuring NTP Using the chrony Suite


Accurate time keeping is important for a number of reasons in IT. In networking for example, accurate
time stamps in packets and logs are required. In Linux systems, the NTP protocol is implemented by a
daemon running in user space.

The user space daemon updates the system clock running in the kernel. The system clock can keep time
by using various clock sources. Usually, the Time Stamp Counter (TSC) is used. The TSC is a CPU
register which counts the number of cycles since it was last reset. It is very fast, has a high resolution,
and there are no interrupts.

There is a choice between the daemons ntpd and chronyd, which are available from the repositories
in the ntp and chrony packages respectively. This section describes the use of the chrony suite of
utilities to update the system clock on systems that do not fit into the conventional permanently
networked, always on, dedicated server category.

14.1. Introduction to the chrony Suite


Chrony consists of chronyd, a daemon that runs in user space, and chronyc, a command line
program for making adjustments to chronyd. Systems which are not permanently connected, or not
permanently powered up, take a relatively long time to adjust their system clocks with ntpd. This is
because many small corrections are made based on observations of the clocks drift and offset.
Temperature changes, which may be significant when powering up a system, affect the stability of
hardware clocks. Although adjustments begin within a few milliseconds of booting a system, acceptable
accuracy may take anything from ten seconds from a warm restart to a number of hours depending on
your requirements, operating environment and hardware. chrony is a different implementation of the
NTP protocol than ntpd, and it can adjust the system clock more rapidly.

14.1.1. Differences Between ntpd and chronyd


One of the main differences between ntpd and chronyd is in the algorithms used to control the
computer's clock. Things chronyd can do better than ntpd are:

chronyd can work well when external time references are only intermittently accessible whereas
ntpd needs regular polling of time references to work well.
chronyd can perform well even when the network is congested for longer periods of time.
chronyd can usually synchronize the clock faster and with better time accuracy.
chronyd quickly adapts to sudden changes in the rate of the clock, for example, due to changes in
the temperature of the crystal oscillator, whereas ntpd may need a long time to settle down again.
chronyd in the default configuration never steps the time after the clock has been synchronized at
system start, in order not to upset other running programs. ntpd can be configured to never step the
time too, but it has to use a different means of adjusting the clock, which has some disadvantages.
chronyd can adjust the rate of the clock on a Linux system in a larger range, which allows it to
operate even on machines with a broken or unstable clock. For example, on some virtual machines.

Things chronyd can do that ntpd cannot do:

chronyd provides support for isolated networks where the only method of time correction is manual
entry. For example, by the administrator looking at a clock. chronyd can look at the errors corrected
at different updates to estimate the rate at which the computer gains or loses time, and use this
estimate to trim the computer clock subsequently.
chronyd provides support to work out the rate of gain or loss of the real-time clock, the hardware


clock, that maintains the time when the computer is turned off. It can use this data when the system
boots to set the system time using an adjusted value of the time taken from the real-time clock. This
is, at time of writing, only available in Linux.

Things ntpd can do that chronyd cannot do:

ntpd fully supports NTP version 4 (RFC 5905), including broadcast, multicast, manycast clients and
servers, and the orphan mode. It also supports extra authentication schemes based on public-key
cryptography (RFC 5906). chronyd uses NTP version 3 (RFC 1305), which is compatible with
version 4.
ntpd includes drivers for many reference clocks whereas chronyd relies on other programs, for
example gpsd, to access the data from the reference clocks.

14.1.2. Choosing Between NTP Daemons


Chrony should be considered for all systems which are frequently suspended or otherwise
intermittently disconnected and reconnected to a network. Mobile and virtual systems for example.
The NTP daemon (ntpd) should be considered for systems which are normally kept permanently on.
Systems which are required to use broadcast or multicast IP, or to perform authentication of packets
with the Autokey protocol, should consider using ntpd. Chrony only supports symmetric key
authentication using a message authentication code (MAC) with MD5, SHA1 or stronger hash
functions, whereas ntpd also supports the Autokey authentication protocol which can make use of
the PKI system. Autokey is described in RFC 5906.

14.2. Understanding chrony and Its Configuration

14.2.1. Understanding chronyd


The chrony daemon, chronyd, running in user space, makes adjustments to the system clock which is
running in the kernel. It does this by consulting external time sources, using the NTP protocol, whenever
network access allows it to do so. When external references are not available, chronyd will use the last
calculated drift stored in the drift file. It can also be commanded manually to make corrections by
chronyc.
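
For instance, the current synchronization state and the list of configured time sources can be inspected
with the chronyc reporting commands; a brief sketch (output omitted):

~]$ chronyc tracking
~]$ chronyc sources -v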

14.2.2. Understanding chronyc


The chrony daemon, chronyd, can be controlled by the command line utility chronyc. This utility
provides a command prompt which allows entering of a number of commands to make changes to
chronyd. The default configuration is for chronyd to only accept commands from a local instance of
chronyc, but chronyc can be used to alter the configuration so that chronyd will allow external control.
That is to say, chronyc can be run remotely after first configuring chronyd to accept remote
connections. The IP addresses allowed to connect to chronyd should be tightly controlled.

14.2.3. Understanding the chrony Configuration Commands


The default configuration file for chronyd is /etc/chrony.conf. The -f option can be used to
specify an alternate configuration file path. Refer to the chronyd man page for further options. For a
complete list of the directives that can be used, see
http://chrony.tuxfamily.org/manual.html#Configuration-file. We present here a selection of configuration
options:
Comments
Comments should be preceded by #, %, ; or !


allow
Optionally specify a host, subnet, or network from which to allow NTP connections to a machine
acting as NTP server. The default is not to allow connections.

Examples:

1. allow server1.example.com

Use this form to specify a particular host, by its host name, to be allowed access.

2. allow 192.0.2.0/24

Use this form to specify a particular network to be allowed access.

3. allow 2001:db8::/32

Use this form to specify an IPv6 address to be allowed access.

cmdallow
This is similar to the allow directive (see section allow), except that it allows control access
(rather than NTP client access) to a particular subnet or host. (By “control access” is meant that
chronyc can be run on those hosts and successfully connect to chronyd on this computer.)
The syntax is identical. There is also a cmddeny all directive with similar behavior to the
cmdallow all directive.
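
As a sketch, to let chronyc connect from a management subnet, lines such as the following could be
added to /etc/chrony.conf; the subnet is the illustrative documentation range, and the
bindcmdaddress directive may also need adjusting so that chronyd listens for command packets on
a routable address:

bindcmdaddress 0.0.0.0
cmdallow 192.0.2.0/24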

dumpdir
Path to the directory to save the measurement history across restarts of chronyd (assuming
no changes are made to the system clock behavior whilst it is not running). If this capability is to
be used (via the dumponexit command in the configuration file, or the dump command in
chronyc), the dumpdir command should be used to define the directory where the
measurement histories are saved.

dumponexit
If this command is present, it indicates that chronyd should save the measurement history for
each of its time sources recorded whenever the program exits. (See the dumpdir command
above).

local
The local keyword is used to allow chronyd to appear synchronized to real time (from the
viewpoint of clients polling it), even if it has no current synchronization source. This option is
normally used on computers in an isolated network, where several computers are required to
synchronize to one another, this being the “master” which is kept vaguely in line with real time by
manual input.

An example of the command is:

local stratum 10

A large value of 10 indicates that the clock is so many hops away from a reference clock that its


time is fairly unreliable. Put another way, if the computer ever has access to another computer
which is ultimately synchronized to a reference clock, it will almost certainly be at a stratum less
than 10. Therefore, the choice of a high value like 10 for the local command prevents the
machine’s own time from ever being confused with real time, were it ever to leak out to clients
that have visibility of real servers.

log
The log command indicates that certain information is to be logged. It accepts the following
options:
measurements
This option logs the raw NTP measurements and related information to a file called
measurements.log.

statistics
This option logs information about the regression processing to a file called
statistics.log.

tracking
This option logs changes to the estimate of the system’s gain or loss rate, and any
slews made, to a file called tracking.log.

rtc
This option logs information about the system’s real-time clock.

refclocks
This option logs the raw and filtered reference clock measurements to a file called
refclocks.log.

tempcomp
This option logs the temperature measurements and system rate compensations to a
file called tempcomp.log.

The log files are written to the directory specified by the logdir command. An example of the
command is:

log measurements statistics tracking

logdir
This directive allows the directory where log files are written to be specified. An example of the
use of this directive is:

logdir /var/log/chrony

makestep


Normally chronyd will cause the system to gradually correct any time offset, by slowing down
or speeding up the clock as required. In certain situations, the system clock may be so far adrift
that this slewing process would take a very long time to correct the system clock. This directive
forces chronyd to step the system clock if the adjustment is larger than a threshold value, but only
if there have been fewer clock updates since chronyd was started than a specified limit (a
negative value can be used to disable the limit). This is particularly useful when using reference
clocks, because the initstepslew directive works only with NTP sources.

An example of the use of this directive is:

makestep 1000 10

This would step the system clock if the adjustment is larger than 1000 seconds, but only in the
first ten clock updates.

maxchange
This directive sets the maximum allowed offset corrected on a clock update. The check is
performed only after the specified number of updates to allow a large initial adjustment of the
system clock. When an offset larger than the specified maximum occurs, it will be ignored for
the specified number of times and then chronyd will give up and exit (a negative value can be
used to never exit). In both cases a message is sent to syslog.

An example of the use of this directive is:

maxchange 1000 1 2

After the first clock update, chronyd will check the offset on every clock update. It will ignore
two adjustments larger than 1000 seconds and exit on the next one.

maxupdateskew
One of chronyd's tasks is to work out how fast or slow the computer’s clock runs relative to its
reference sources. In addition, it computes an estimate of the error bounds around the
estimated value. If the range of error is too large, it probably indicates that the measurements
have not settled down yet, and that the estimated gain or loss rate is not very reliable. The
maxupdateskew parameter sets the threshold for determining whether an estimate may be
so unreliable that it should not be used. By default, the threshold is 1000 ppm. The format of the
syntax is:

maxupdateskew skew-in-ppm

Typical values for skew-in-ppm might be 100 for a dial-up connection to servers over a
telephone line, and 5 or 10 for a computer on a LAN. It should be noted that this is not the only
means of protection against using unreliable estimates. At all times, chronyd keeps track of
both the estimated gain or loss rate, and the error bound on the estimate. When a new estimate
is generated following another measurement from one of the sources, a weighted combination
algorithm is used to update the master estimate. So if chronyd has an existing highly-reliable
master estimate and a new estimate is generated which has large error bounds, the existing
master estimate will dominate in the new master estimate.
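An example of the command, using one of the typical values mentioned above for a computer on a LAN,
is:

maxupdateskew 10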

noclientlog
This directive, which takes no arguments, specifies that client accesses are not to be logged.


Normally they are logged, allowing statistics to be reported using the clients command in
chronyc.

reselectdist
When chronyd selects a synchronization source from the available sources, it will prefer the one
with minimum synchronization distance. However, to avoid frequent reselecting when there are
sources with similar distance, a fixed distance is added to the distance for sources that are
currently not selected. This can be set with the reselectdist option. By default, the distance
is 100 microseconds.

The format of the syntax is:

reselectdist dist-in-seconds

stratumweight
The stratumweight directive sets how much distance should be added per stratum to the
synchronization distance when chronyd selects the synchronization source from available
sources.

The format of the syntax is:

stratumweight dist-in-seconds

By default, dist-in-seconds is 1 second. This usually means that sources with lower stratum
will be preferred to sources with higher stratum even when their distance is significantly worse.
Setting stratumweight to 0 makes chronyd ignore stratum when selecting the source.
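An example of the command, using an illustrative value of half a second per stratum, is:

stratumweight 0.5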

rtcfile
The rtcfile directive defines the name of the file in which chronyd can save parameters
associated with tracking the accuracy of the system’s real-time clock (RTC). The format of the
syntax is:

rtcfile /var/lib/chrony/rtc

chronyd saves information in this file when it exits and when the writertc command is
issued in chronyc. The information saved is the RTC’s error at some epoch, that epoch (in
seconds since January 1 1970), and the rate at which the RTC gains or loses time. Not all real-
time clocks are supported, as their code is system-specific. Note that if this directive is used, the
real-time clock should not be manually adjusted, as this would interfere with chrony's ability to
measure the rate at which the real-time clock drifts, which it cannot do if the clock is adjusted at
random intervals.

rtcsync
The rtcsync directive is present in the /etc/chrony.conf file by default. It informs the
kernel that the system clock is kept synchronized, and the kernel will then update the real-time
clock every 11 minutes.

14.2.4. Security with chronyc


As access to chronyc allows changing chronyd just as editing the configuration files would, access to
chronyc should be limited. Passwords can be specified in the key file, written in ASCII or HEX, to restrict
the use of chronyc. One of the entries is used to restrict the use of operational commands and is
referred to as the command key. In the default configuration, a random command key is generated
automatically on start. It should not be necessary to specify or alter it manually.

Other entries in the key file can be used as NTP keys to authenticate packets received from remote NTP
servers or peers. The two sides need to share a key with identical ID, hash type and password in their
key file. This requires manually creating the keys and copying them over a secure medium, such as SSH.
If the key ID was, for example, 10 then the systems that act as clients must have a line in their
configuration files in the following format:

server w.x.y.z key 10


peer w.x.y.z key 10

The location of the key file is specified in the /etc/chrony.conf file. The default entry in the
configuration file is:

keyfile /etc/chrony.keys

The command key number is specified in /etc/chrony.conf using the commandkey directive; it is
the key chronyd will use for authentication of user commands. The directive in the configuration file
takes the following form:

commandkey 1

An example of the format of the default entry in the key file, /etc/chrony.keys, for the command key
is:

1 SHA1 HEX:A6CFC50C9C93AB6E5A19754C246242FC5471BCDF

Where 1 is the key ID, SHA1 is the hash function to use, HEX is the format of the key, and
A6CFC50C9C93AB6E5A19754C246242FC5471BCDF is the key randomly generated when chronyd
was started for the first time. The key can be given in hexadecimal or ASCII format (the default).

A manual entry in the key file, used to authenticate packets from certain NT P servers or peers, can be as
simple as the following:

20 foobar

Where 20 is the key ID and foobar is the secret authentication key. The default hash is MD5, and ASCII
is the default format for the key.

By default, chronyd is configured to listen for commands only from localhost (127.0.0.1 and ::1)
on port 323. To access chronyd remotely with chronyc, any bindcmdaddress directives in the
/etc/chrony.conf file should be removed to enable listening on all interfaces, and the cmdallow
directive should be used to allow commands from the remote IP address, network, or subnet. In addition,
port 323 has to be opened in the firewall in order to connect from a remote system. Note that the allow
directive is for NTP access whereas the cmdallow directive is to enable the receiving of remote
commands. It is possible to make these changes temporarily using chronyc running locally. Edit the
configuration file to make persistent changes.
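As an illustration, assuming the administration host is 192.0.2.10 and firewalld is in use, the persistent
configuration could contain the following directive, with the control port opened as shown:

cmdallow 192.0.2.10

~]# firewall-cmd --permanent --add-port=323/udp
~]# firewall-cmd --reload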

The communication between chronyc and chronyd is done over UDP, so it needs to be authorized
before issuing operational commands. To authorize, use the authhash and password commands as
follows:

chronyc> authhash SHA1


chronyc> password HEX:A6CFC50C9C93AB6E5A19754C246242FC5471BCDF
200 OK

If chronyc is used to configure the local chronyd, the -a option will run the authhash and password
commands automatically.

Only the following commands can be used without providing a password: activity, authhash, dns,
exit, help, password, quit, rtcdata, sources, sourcestats, tracking, waitsync.

14.3. Using chrony

14.3.1. Checking if chrony is Installed


To check if chrony is installed, run the following command as root:

~]# yum install chrony

The default location for the chrony daemon is /usr/sbin/chronyd. The command line utility will be
installed to /usr/bin/chronyc.
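If you prefer a check that does not attempt any installation, the package database can be queried
instead, for example:

~]$ rpm -q chrony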

14.3.2. Installing chrony


To install chrony, run the following command as root:

~]# yum install chrony -y

The default location for the chrony daemon is /usr/sbin/chronyd. The command line utility will be
installed to /usr/bin/chronyc.

14.3.3. Checking the Status of chronyd


To check the status of chronyd, issue the following command:

~]$ systemctl status chronyd


chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled)
Active: active (running) since Wed 2013-06-12 22:23:16 CEST; 11h ago

14.3.4. Starting chronyd


To start chronyd, issue the following command as root:

~]# systemctl start chronyd

To ensure chronyd starts automatically at system start, issue the following command as root:

~]# systemctl enable chronyd

14.3.5. Stopping chronyd


To stop chronyd, issue the following command as root:


~]# systemctl stop chronyd

To prevent chronyd from starting automatically at system start, issue the following command as root:

~]# systemctl disable chronyd

14.3.6. Checking if chrony is Synchronized


To check if chrony is synchronized, make use of the tracking, sources, and sourcestats
commands.

14.3.6.1. Checking chrony Tracking


To check chrony tracking, issue the following command:

~]$ chronyc tracking


Reference ID : 1.2.3.4 (a.b.c)
Stratum : 3
Ref time (UTC) : Fri Feb 3 15:00:29 2012
System time : 0.000001501 seconds slow of NTP time
Last offset : -0.000001632 seconds
RMS offset : 0.000002360 seconds
Frequency : 331.898 ppm fast
Residual freq : 0.004 ppm
Skew : 0.154 ppm
Root delay : 0.373169 seconds
Root dispersion : 0.024780 seconds
Update interval : 64.2 seconds
Leap status : Normal

The fields are as follows:

Reference ID
This is the reference ID and name (or IP address) if available, of the server to which the
computer is currently synchronized. If this is 127.127.1.1 it means the computer is not
synchronized to any external source and that you have the “local” mode operating (via the local
command in chronyc, or the local directive in the /etc/chrony.conf file (see section
local)).

Stratum
The stratum indicates how many hops away from a computer with an attached reference clock
we are. Such a computer is a stratum-1 computer, so the computer in the example is two hops
away (that is to say, a.b.c is a stratum-2 and is synchronized from a stratum-1).

Ref time
This is the time (UTC) at which the last measurement from the reference source was
processed.

System time
In normal operation, chronyd never steps the system clock, because any jump in the timescale
can have adverse consequences for certain application programs. Instead, any error in the
system clock is corrected by slightly speeding up or slowing down the system clock until the
error has been removed, and then returning to the system clock’s normal speed. A
consequence of this is that there will be a period when the system clock (as read by other
programs using the gettimeofday() system call, or by the date command in the shell) will be
different from chronyd's estimate of the current true time (which it reports to NTP clients when
it is operating in server mode). The value reported on this line is the difference due to this effect.

Last offset
This is the estimated local offset on the last clock update.

RMS offset
This is a long-term average of the offset value.

Frequency
The “frequency” is the rate by which the system’s clock would be wrong if chronyd
was not correcting it. It is expressed in ppm (parts per million). For example, a value of 1ppm
would mean that when the system’s clock thinks it has advanced 1 second, it has actually
advanced by 1.000001 seconds relative to true time.

Residual freq
This shows the “residual frequency” for the currently selected reference source. This reflects
any difference between what the measurements from the reference source indicate the
frequency should be and the frequency currently being used. The reason this is not always zero
is that a smoothing procedure is applied to the frequency. Each time a measurement from the
reference source is obtained and a new residual frequency computed, the estimated accuracy
of this residual is compared with the estimated accuracy (see skew next) of the existing
frequency value. A weighted average is computed for the new frequency, with weights
depending on these accuracies. If the measurements from the reference source follow a
consistent trend, the residual will be driven to zero over time.

Skew
This is the estimated error bound on the frequency.

Root delay
This is the total of the network path delays to the stratum-1 computer from which the computer
is ultimately synchronized. In certain extreme situations, this value can be negative. (This can
arise in a symmetric peer arrangement where the computers’ frequencies are not tracking each
other and the network delay is very short relative to the turn-around time at each computer.)

Root dispersion
This is the total dispersion accumulated through all the computers back to the stratum-1
computer from which the computer is ultimately synchronized. Dispersion is due to system clock
resolution, statistical measurement variations etc.

Leap status
This is the leap status, which can be Normal, Insert second, Delete second or Not synchronized.


14.3.6.2. Checking chrony Sources


The sources command displays information about the current time sources that chronyd is accessing.
The optional argument -v can be specified, meaning verbose. In this case, extra caption lines are shown
as a reminder of the meanings of the columns.

~]$ chronyc sources


210 Number of sources = 3
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
#* GPS0 0 4 377 11 -479ns[ -621ns] +/- 134ns
^? a.b.c 2 6 377 23 -923us[ -924us] +/- 43ms
^+ d.e.f 1 6 377 21 -2629us[-2619us] +/- 86ms

The columns are as follows:


M
This indicates the mode of the source. ^ means a server, = means a peer and # indicates a
locally connected reference clock.

S
This column indicates the state of the sources. “*” indicates the source to which chronyd is
currently synchronized. “+” indicates acceptable sources which are combined with the selected
source. “-” indicates acceptable sources which are excluded by the combining algorithm. “?”
indicates sources to which connectivity has been lost or whose packets do not pass all tests. “x”
indicates a clock which chronyd thinks is is a falseticker (that is to say, its time is inconsistent
with a majority of other sources). “~” indicates a source whose time appears to have too much
variability. The “?” condition is also shown at start-up, until at least 3 samples have been
gathered from it.

Name/IP address
This shows the name or the IP address of the source, or reference ID for reference clocks.

Stratum
This shows the stratum of the source, as reported in its most recently received sample. Stratum
1 indicates a computer with a locally attached reference clock. A computer that is synchronized
to a stratum 1 computer is at stratum 2. A computer that is synchronized to a stratum 2
computer is at stratum 3, and so on.

Poll
This shows the rate at which the source is being polled, as a base-2 logarithm of the interval in
seconds. Thus, a value of 6 would indicate that a measurement is being made every 64
seconds. chronyd automatically varies the polling rate in response to prevailing conditions.

Reach
This shows the source’s reachability register printed as octal number. The register has 8 bits
and is updated on every received or missed packet from the source. A value of 377 indicates
that a valid reply was received for all of the last eight transmissions.

LastRx


This column shows how long ago the last sample was received from the source. This is normally
in seconds. The letters m, h, d or y indicate minutes, hours, days or years. A value of 10 years
indicates there were no samples received from this source yet.

Last sample
This column shows the offset between the local clock and the source at the last measurement.
The number in the square brackets shows the actual measured offset. This may be suffixed by
ns (indicating nanoseconds), us (indicating microseconds), ms (indicating milliseconds), or s
(indicating seconds). The number to the left of the square brackets shows the original
measurement, adjusted to allow for any slews applied to the local clock since. The number
following the +/- indicator shows the margin of error in the measurement. Positive offsets
indicate that the local clock is fast of the source.

14.3.6.3. Checking chrony Source Statistics


The sourcestats command displays information about the drift rate and offset estimation process for
each of the sources currently being examined by chronyd. The optional argument -v can be specified,
meaning verbose. In this case, extra caption lines are shown as a reminder of the meanings of the
columns.

~]$ chronyc sourcestats

210 Number of sources = 1


Name/IP Address NP NR Span Frequency Freq Skew Offset Std Dev
===============================================================================
abc.def.ghi                11   5   46m     -0.001      0.045      1us    25us

The columns are as follows:


Name/IP address
This is the name or IP address of the NT P server (or peer) or reference ID of the reference
clock to which the rest of the line relates.

NP
This is the number of sample points currently being retained for the server. The drift rate and
current offset are estimated by performing a linear regression through these points.

NR
This is the number of runs of residuals having the same sign following the last regression. If this
number starts to become too small relative to the number of samples, it indicates that a straight
line is no longer a good fit to the data. If the number of runs is too low, chronyd discards older
samples and re-runs the regression until the number of runs becomes acceptable.

Span
This is the interval between the oldest and newest samples. If no unit is shown the value is in
seconds. In the example, the interval is 46 minutes.

Frequency
This is the estimated residual frequency for the server, in parts per million. In this case, the
computer’s clock is estimated to be running 1 part in 10⁹ slow relative to the server.

Freq Skew
This is the estimated error bounds on Freq (again in parts per million).

Offset
This is the estimated offset of the source.

Std Dev
This is the estimated sample standard deviation.

14.3.7. Manually Adjusting the System Clock


To update, or step, the system clock immediately, bypassing any adjustments in progress by slewing the
clock, issue the following commands as root:

~]# chronyc
chrony> password commandkey-password
200 OK
chrony> makestep
200 OK

Where commandkey-password is the command key or password stored in the key file.

The real-time clock should not be manually adjusted if the rtcfile directive is used, as this would
interfere with chrony's ability to measure the rate at which the real-time clock drifts, which it cannot do
if the clock is adjusted at random intervals.

If chronyc is used to configure the local chronyd, the -a option will run the authhash and password
commands automatically. This means that the interactive session illustrated above can be replaced by:

chronyc -a makestep

14.4. Setting Up chrony for Different Environments

14.4.1. Setting Up chrony for a System Which is Infrequently Connected


This example is intended for systems which use dial-on-demand connections. The normal configuration
should be sufficient for mobile and virtual devices which connect intermittently. First, review and confirm
that the default settings in the /etc/chrony.conf are similar to the following:

driftfile /var/lib/chrony/drift
commandkey 1
keyfile /etc/chrony.keys

The command key ID is generated at install time and should correspond with the commandkey value in
the key file, /etc/chrony.keys.

1. Using your editor running as root, add the addresses of four NTP servers as follows:


server 0.pool.ntp.org offline


server 1.pool.ntp.org offline
server 2.pool.ntp.org offline
server 3.pool.ntp.org offline

The offline option can be useful in preventing systems from trying to activate connections. The
chrony daemon will wait for chronyc to inform it that the system is connected to the network or
Internet.
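When the connection is later brought up or taken down, for example from a network dispatcher script,
the servers can be switched between states with chronyc. The following is a sketch, assuming the -a
option can be used for local authentication:

~]# chronyc -a online
~]# chronyc -a offline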

14.4.2. Setting Up chrony for a System in an Isolated Network


For a network that is never connected to the Internet, one computer is selected to be the master
timeserver. The other computers are either direct clients of the master, or clients of clients. On the
master, the drift file must be manually set with the average rate of drift of the system clock. If the master
is rebooted it will obtain the time from surrounding systems and take an average to set its system clock.
Thereafter it resumes applying adjustments based on the drift file. The drift file will be updated
automatically when the settime command is used.

On the system selected to be the master, using a text editor running as root, edit the
/etc/chrony.conf as follows:

driftfile /var/lib/chrony/drift
commandkey 1
keyfile /etc/chrony.keys
initstepslew 10 client1 client3 client6
local stratum 8
manual
allow 192.0.2.0

Where 192.0.2.0 is the network or subnet address from which the clients are allowed to connect.

On the systems selected to be direct clients of the master, using a text editor running as root, edit the
/etc/chrony.conf as follows:

server master
driftfile /var/lib/chrony/drift
logdir /var/log/chrony
log measurements statistics tracking
keyfile /etc/chrony.keys
commandkey 24
local stratum 10
initstepslew 20 master
allow 192.0.2.123

Where 192.0.2.123 is the address of the master, and master is the host name of the master. These
clients will resynchronize with the master if it restarts.

On the client systems which are not to be direct clients of the master, the /etc/chrony.conf file
should be the same except that the local and allow directives should be omitted.

14.5. Using chronyc

14.5.1. Using chronyc to Control chronyd


To make changes using the command line utility chronyc in interactive mode, enter the following
command as root:

~]# chronyc

chronyc must run as root if some of the restricted commands are to be used.

The chronyc command prompt will be displayed as follows:

chronyc>

You can type help to list all of the commands.

The utility can also be invoked in non-interactive command mode if called together with a command as
follows:

~]# chronyc command

14.5.2. Using chronyc for Remote Administration


To configure chrony to connect to a remote instance of chronyd, issue a command as root in the
following format:

~]# chronyc -h hostname

Where hostname is the host name of a system running chronyd to connect to in order to allow remote
administration from that host. The default is to connect to the daemon on the localhost.

To configure chrony to connect to a remote instance of chronyd on a non-default port, issue a
command as root in the following format:

~]# chronyc -h hostname -p port

Where port is the port in use for controlling and monitoring by the instance of chronyd to be connected
to.

Note that commands issued at the chrony command prompt are not persistent. Only commands in the
configuration file are persistent.

From the remote systems, the system administrator can issue commands after first using the password
command, preceded by the authhash command if the key used a hash different from MD5, at the
chronyc command prompt as follows:

chronyc> password secretpasswordwithnospaces


200 OK

The password or hash associated with the command key for a remote system is best obtained by SSH.
That is to say, an SSH connection should be established to the remote machine and the ID of the
command key from /etc/chrony.conf and the command key in /etc/chrony.keys memorized or
stored securely for the duration of the session.

14.6. Additional Resources


The following sources of information provide additional resources regarding chrony.


14.6.1. Installed Documentation


chrony(1) man page — Introduces the chrony daemon and the command-line interface tool.

chronyc(1) man page — Describes the chronyc command-line interface tool including commands
and command options.

chronyd(1) man page — Describes the chronyd daemon including commands and command
options.

chrony.conf(5) man page — Describes the chrony configuration file.


/usr/share/doc/chrony*/chrony.txt — User guide for the chrony suite.

14.6.2. Useful Websites


http://chrony.tuxfamily.org/manual.html
The on-line user guide for chrony.


Chapter 15. Configuring NTP Using ntpd

15.1. Introduction to NTP


The Network Time Protocol (NTP) enables the accurate dissemination of time and date information in
order to keep the time clocks on networked computer systems synchronized to a common reference
over the network or the Internet. Many standards bodies around the world have atomic clocks which may
be made available as a reference. The satellites that make up the Global Positioning System contain more
than one atomic clock, making their time signals potentially very accurate. Their signals can be
deliberately degraded for military reasons. An ideal situation would be where each site has a server, with
its own reference clock attached, to act as a site-wide time server. Many devices which obtain the time
and date via low frequency radio transmissions or the Global Positioning System (GPS) exist. However, for
most situations, a range of publicly accessible time servers connected to the Internet at geographically
dispersed locations can be used. These NTP servers provide “Coordinated Universal Time” (UTC).
Information about these time servers can be found at www.pool.ntp.org.

Accurate time keeping is important for a number of reasons in IT. In networking for example, accurate
time stamps in packets and logs are required. Logs are used to investigate service and security issues
and so timestamps made on different systems must be made by synchronized clocks to be of real value.
As systems and networks become increasingly faster, there is a corresponding need for clocks with
greater accuracy and resolution. In some countries there are legal obligations to keep accurately
synchronized clocks. Please see www.ntp.org for more information. In Linux systems, NTP is
implemented by a daemon running in user space. The default NTP user space daemon in Red Hat
Enterprise Linux 7 is chronyd. It must be disabled if you want to use the ntpd daemon. See
Chapter 14, Configuring NTP Using the chrony Suite for information on chrony.

The user space daemon updates the system clock, which is a software clock running in the kernel. Linux
uses a software clock as its system clock for better resolution than the typical embedded hardware clock
referred to as the “Real Time Clock” (RTC). See the rtc(4) and hwclock(8) man pages for
information on hardware clocks. The system clock can keep time by using various clock sources. Usually,
the Time Stamp Counter (TSC) is used. The TSC is a CPU register which counts the number of cycles
since it was last reset. It is very fast, has a high resolution, and there are no interrupts. On system start,
the system clock reads the time and date from the RTC. The time kept by the RTC will drift away from
actual time by up to 5 minutes per month due to temperature variations. Hence the need for the system
clock to be constantly synchronized with external time references. When the system clock is being
synchronized by ntpd, the kernel will in turn update the RTC every 11 minutes automatically.

15.2. NTP Strata


NTP servers are classified according to their synchronization distance from the atomic clocks which are
the source of the time signals. The servers are thought of as being arranged in layers, or strata, from 1
at the top down to 15. Hence the word stratum is used when referring to a specific layer. Atomic clocks
are referred to as Stratum 0 as this is the source, but no Stratum 0 packet is sent on the Internet; all
stratum 0 atomic clocks are attached to a server which is referred to as stratum 1. These servers send
out packets marked as Stratum 1. A server which is synchronized by means of packets marked stratum
n belongs to the next, lower, stratum and will mark its packets as stratum n+1. Servers of the same
stratum can exchange packets with each other but are still designated as belonging to just the one
stratum, the stratum one below the best reference they are synchronized to. The designation Stratum 16
is used to indicate that the server is not currently synchronized to a reliable time source.

Note that by default NTP clients act as servers for those systems in the stratum below them.


Here is a summary of the NTP Strata:

Stratum 0:
Atomic Clocks and their signals broadcast over Radio and GPS

GPS (Global Positioning System)


Mobile Phone Systems
Low Frequency Radio Broadcasts WWVB (Colorado, USA.), JJY-40 and JJY-60 (Japan),
DCF77 (Germany), and MSF (United Kingdom)

These signals can be received by dedicated devices and are usually connected by RS-232 to a
system used as an organizational or site-wide time server.

Stratum 1:
Computer with radio clock, GPS clock, or atomic clock attached

Stratum 2:
Reads from stratum 1; Serves to lower strata

Stratum 3:
Reads from stratum 2; Serves to lower strata

Stratum n+1:
Reads from stratum n; Serves to lower strata

Stratum 15:
Reads from stratum 14; This is the lowest stratum.

This process continues down to Stratum 15 which is the lowest valid stratum. The label Stratum 16 is
used to indicate an unsynchronized state.

15.3. Understanding NTP


The version of NTP used by Red Hat Enterprise Linux is as described in RFC 1305 Network Time
Protocol (Version 3) Specification, Implementation and Analysis and RFC 5905 Network Time Protocol
Version 4: Protocol and Algorithms Specification.

This implementation of NTP enables sub-second accuracy to be achieved. Over the Internet, accuracy to
tens of milliseconds is normal. On a Local Area Network (LAN), 1 ms accuracy is possible under ideal
conditions. This is because clock drift is now accounted for and corrected, which was not done in earlier,
simpler, time protocol systems. A resolution of 233 picoseconds is provided by using 64-bit timestamps:
32 bits for seconds and 32 bits for fractional seconds.

NTP represents the time as a count of the number of seconds since 00:00 (midnight) 1 January, 1900
GMT. As 32 bits are used to count the seconds, this means the time will “roll over” in 2036. However, NTP
works on the difference between timestamps so this does not present the same level of problem as other
implementations of time protocols have done. If a hardware clock accurate to better than 68 years is
available at boot time then NTP will correctly interpret the current date. The NTP4 specification provides
for an “Era Number” and an “Era Offset” which can be used to make software more robust when dealing
with time lengths of more than 68 years. Note, please do not confuse this with the Unix Year 2038
problem.

The NTP protocol provides additional information to improve accuracy. Four timestamps are used to
allow the calculation of round-trip time and server response time. In order for a system in its role as NTP
client to synchronize with a reference time server, a packet is sent with an “originate timestamp”. When
the packet arrives, the time server adds a “receive timestamp”. After processing the request for time and
date information and just before returning the packet, it adds a “transmit timestamp”. When the returning
packet arrives at the NTP client, a “receive timestamp” is generated. The client can now calculate the
total round trip time and by subtracting the processing time derive the actual traveling time. By assuming
the outgoing and return trips take equal time, the single-trip delay in receiving the NTP data is calculated.
The full NTP algorithm is much more complex than presented here.
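As an illustration of the calculation described above, writing t1 for the originate timestamp, t2 and t3 for
the server's receive and transmit timestamps, and t4 for the client's receive timestamp, the standard
formulas for the round-trip delay and the clock offset are:

delay = (t4 - t1) - (t3 - t2)
offset = ((t2 - t1) + (t3 - t4)) / 2

For example, with purely illustrative values t1 = 0.000 s, t2 = 10.050 s, t3 = 10.060 s, and t4 = 0.120 s,
the delay is 0.110 s and the estimated offset is 9.995 s.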

Each packet containing time information received is not immediately acted upon, but is subject to
validation checks and then used together with several other samples to arrive at a reasonably good
estimate of the time. This is then compared to the system clock to determine the time offset, that is to
say, the difference between the system clock's time and what ntpd has determined the time should be.
The system clock is adjusted slowly, at most at a rate of 0.5ms per second, to reduce this offset by
changing the frequency of the counter being used. It will take at least 2000 seconds to adjust the clock
by 1 second using this method. This slow change is referred to as slewing and cannot go backwards. If
the time offset of the clock is more than 128ms (the default setting), ntpd can “step” the clock forwards
or backwards. If the time offset at system start is greater than 1000 seconds then the user, or an
installation script, should make a manual adjustment. See Chapter 2, Configuring the Date and Time.
With the -g option to the ntpd command (used by default), any offset at system start will be corrected,
but during normal operation only offsets of up to 1000 seconds will be corrected.

Some software may fail or produce an error if the time is changed backwards. For systems that are
sensitive to step changes in the time, the threshold can be changed to 600s instead of 128ms using the
-x option (unrelated to the -g option). Using the -x option to increase the stepping limit from 0.128s to
600s has a drawback because a different method of controlling the clock has to be used. It disables the
kernel clock discipline and may have a negative impact on the clock accuracy. The -x option can be
added to the /etc/sysconfig/ntpd configuration file.

15.4. Understanding the Drift File


The drift file is used to store the frequency offset between the system clock running at its nominal
frequency and the frequency required to remain in synchronization with UTC. If present, the value
contained in the drift file is read at system start and used to correct the clock source. Use of the drift file
reduces the time required to achieve a stable and accurate time. The value is calculated, and the drift file
replaced, once per hour by ntpd. The drift file is replaced, rather than just updated, and for this reason
the drift file must be in a directory for which the ntpd has write permissions.

15.5. UTC, Timezones, and DST


As NTP is entirely in UTC (Universal Time, Coordinated), Timezones and DST (Daylight Saving Time)
are applied locally by the system. The file /etc/localtime is a copy of, or symlink to, a zone
information file from /usr/share/zoneinfo. The RTC may be in localtime or in UTC, as specified by
the third line of /etc/adjtime, which will be one of LOCAL or UTC to indicate how the RTC clock has
been set. Users can easily change this setting using the checkbox System Clock Uses UTC in the
Date and Time graphical configuration tool. See Chapter 2, Configuring the Date and Time for
information on how to use that tool. Running the RTC in UTC is recommended to avoid various problems
when daylight saving time is changed.

The operation of ntpd is explained in more detail in the man page ntpd(8). The resources section lists
useful sources of information. See Section 15.20, “Additional Resources”.

15.6. Authentication Options for NTP


NTPv4 added support for the Autokey Security Architecture, which is based on public asymmetric
cryptography while retaining support for symmetric key cryptography. The Autokey Security Architecture
is described in RFC 5906 Network Time Protocol Version 4: Autokey Specification. The man page
ntp_auth(5) describes the authentication options and commands for ntpd.

An attacker on the network can attempt to disrupt a service by sending NTP packets with incorrect time
information. On systems using the public pool of NTP servers, this risk is mitigated by having more than
three NTP servers in the list of public NTP servers in /etc/ntp.conf. If only one time source is
compromised or spoofed, ntpd will ignore that source. You should conduct a risk assessment and
consider the impact of incorrect time on your applications and organization. If you have internal time
sources you should consider steps to protect the network over which the NT P packets are distributed. If
you conduct a risk assessment and conclude that the risk is acceptable, and the impact to your
applications minimal, then you can choose not to use authentication.

The broadcast and multicast modes require authentication by default. If you have decided to trust the
network then you can disable authentication by using the disable auth directive in the ntp.conf file.
Alternatively, authentication needs to be configured by using SHA1 or MD5 symmetric keys, or by public
(asymmetric) key cryptography using the Autokey scheme. The Autokey scheme for asymmetric
cryptography is explained in the ntp_auth(8) man page and the generation of keys is explained in
ntp-keygen(8). To implement symmetric key cryptography, see Section 15.17.12, “Configuring
Symmetric Authentication Using a Key” for an explanation of the key option.

15.7. Managing the Time on Virtual Machines


Virtual machines cannot access a real hardware clock, and a virtual clock is not stable enough as its
stability is dependent on the host system's workload. For this reason, para-virtualized clocks should be
provided by the virtualization application in use. On Red Hat Enterprise Linux with KVM the default clock
source is kvm-clock. See the KVM guest timing management chapter of the Virtualization Host
Configuration and Guest Installation Guide.

15.8. Understanding Leap Seconds


Greenwich Mean Time (GMT) was derived by measuring the solar day, which is dependent on the
Earth's rotation. When atomic clocks were first made, the potential for more accurate definitions of time
became possible. In 1958, International Atomic Time (TAI) was introduced based on the more accurate
and very stable atomic clocks. A more accurate astronomical time, Universal Time 1 (UT1), was also
introduced to replace GMT. The atomic clocks are in fact far more stable than the rotation of the Earth
and so the two times began to drift apart. For this reason UTC was introduced as a practical measure. It
is kept within one second of UT1 but to avoid making many small trivial adjustments it was decided to
introduce the concept of a leap second in order to reconcile the difference in a manageable way. The
difference between UT1 and UTC is monitored until they drift apart by more than half a second. Only
then is it deemed necessary to introduce a one second adjustment, forward or backward. Due to the
erratic nature of the Earth's rotational speed, the need for an adjustment cannot be predicted far into the
future. The decision as to when to make an adjustment is made by the International Earth Rotation and
Reference Systems Service (IERS). However, these announcements are important only to administrators
of Stratum 1 servers because NTP transmits information about pending leap seconds and applies them
automatically.

15.9. Understanding the ntpd Configuration File


The daemon, ntpd, reads the configuration file at system start or when the service is restarted. The
default location for the file is /etc/ntp.conf and you can view the file by entering the following
command:

~]$ less /etc/ntp.conf

The configuration commands are explained briefly later in this chapter, see Section 15.17, “Configure
NTP”, and more verbosely in the ntp.conf(5) man page.

Here follows a brief explanation of the contents of the default configuration file:
The driftfile entry
A path to the drift file is specified, the default entry on Red Hat Enterprise Linux is:

driftfile /var/lib/ntp/drift

If you change this, be certain that the directory is writable by ntpd. The file contains one value
used to adjust the system clock frequency after every system or service start. See
Understanding the Drift File for more information.

The access control entries


The following line sets the default access control restriction:

restrict default nomodify notrap nopeer noquery

The nomodify option prevents any changes to the configuration. The notrap option
prevents ntpdc control message protocol traps. The nopeer option prevents a peer
association being formed. The noquery option prevents ntpq and ntpdc queries, but not time
queries, from being answered. The ntpq and ntpdc queries can be used in amplification
attacks (see CVE-2013-5211 for more details); do not remove the noquery option from the
restrict default command on publicly accessible systems.

Addresses within the 127.0.0.0/8 range are sometimes required by various processes
or applications. As the "restrict default" line above prevents access to everything not explicitly
allowed, access to the standard loopback address for IPv4 and IPv6 is permitted by means of
the following lines:

# the administrative functions.


restrict 127.0.0.1
restrict ::1

Addresses can be added underneath if specifically required by another application.

Hosts on the local network are not permitted because of the "restrict default" line above. To
change this, for example to allow hosts from the 192.0.2.0/24 network to query the time and
statistics but nothing more, a line in the following format is required:

restrict 192.0.2.0 mask 255.255.255.0 nomodify notrap nopeer


To allow unrestricted access from a specific host, for example 192.0.2.250/24, a line in the
following format is required:

restrict 192.0.2.250

A mask of 255.255.255.255 is applied if none is specified.

The restrict commands are explained in the ntp_acc(5) man page.

The public servers entry


By default, the ntp.conf file contains four public server entries:

server 0.rhel.pool.ntp.org iburst


server 1.rhel.pool.ntp.org iburst
server 2.rhel.pool.ntp.org iburst
server 3.rhel.pool.ntp.org iburst

The broadcast multicast servers entry


By default, the ntp.conf file contains some commented out examples. These are largely
self-explanatory. Refer to the explanation of the specific commands in Section 15.17, “Configure NTP”.
If required, add your commands just below the examples.

Note

When the DHCP client program, dhclient, receives a list of NTP servers from the DHCP server, it
adds them to ntp.conf and restarts the service. To disable that feature, add PEERNTP=no to
/etc/sysconfig/network.

15.10. Understanding the ntpd Sysconfig File


The file will be read by the ntpd init script on service start. The default contents are as follows:

# Command line options for ntpd


OPTIONS="-g"

The -g option enables ntpd to ignore the offset limit of 1000s and attempt to synchronize the time even
if the offset is larger than 1000s, but only on system start. Without that option ntpd will exit if the time
offset is greater than 1000s. It will also exit after system start if the service is restarted and the offset is
greater than 1000s even with the -g option.

15.11. Disabling chrony


In order to use ntpd, the default user space daemon, chronyd, must be stopped and disabled. Issue
the following command as root:

~]# systemctl stop chronyd


To prevent it restarting at system start, issue the following command as root:

~]# systemctl disable chronyd

To check the status of chronyd, issue the following command:

~]$ systemctl status chronyd

15.12. Checking if the NTP Daemon is Installed


To check if ntpd is installed, enter the following command as root:

~]# yum install ntp

NTP is implemented by means of the daemon or service ntpd, which is contained within the ntp
package.

15.13. Installing the NTP Daemon (ntpd)


To install ntpd, enter the following command as root:

~]# yum install ntp

To enable ntpd at system start, enter the following command as root:

~]# systemctl enable ntpd

15.14. Checking the Status of NTP


To check if ntpd is running and configured to run at system start, issue the following command:

~]$ systemctl status ntpd

To obtain a brief status report from ntpd, issue the following command:

~]$ ntpstat
unsynchronised
time server re-starting
polling server every 64 s

~]$ ntpstat
synchronised to NTP server (10.5.26.10) at stratum 2
time correct to within 52 ms
polling server every 1024 s

15.15. Configure the Firewall to Allow Incoming NTP Packets


NTP traffic consists of UDP packets on port 123 and needs to be permitted through network and
host-based firewalls in order for NTP to function.

Check if the firewall is configured to allow incoming NTP traffic for clients using the graphical Firewall
Configuration tool.

To start the graphical firewall-config tool, press the Super key to enter the Activities Overview, type
firewall and then press Enter. The firewall-config tool appears. You will be prompted for your user
password.

To start the graphical firewall configuration tool using the command line, enter the following command as
root user:

~]# firewall-config

The Firewall Configuration window opens. Note, this command can be run as a normal user but
you will then be prompted for the root password from time to time.

Look for the word “Connected” in the lower left corner. This indicates that the firewall-config tool is
connected to the user space daemon, firewalld.

15.15.1. Change the Firewall Settings


To immediately change the current firewall settings, ensure the current view is set to Runtime
Configuration. Alternatively, to edit the settings to be applied at the next system start, or firewall
reload, select Permanent Configuration from the drop-down list.

Note

When making changes to the firewall settings in Runtime Configuration mode, your
selection takes immediate effect when you set or clear the check box associated with the service.
You should keep this in mind when working on a system that may be in use by other users.
When making changes to the firewall settings in Permanent Configuration mode, your
selection will only take effect when you reload the firewall or the system restarts. You can use the
reload icon below the File menu, or click the Options menu and select Reload Firewall.

15.15.2. Open Ports in the Firewall for NTP Packets


To permit traffic through the firewall to a certain port, start the firewall-config tool and select the
network zone whose settings you want to change. Select the Ports tab and then click the Add button on
the right hand side. The Port and Protocol window opens.

Enter the port number 123 and select udp from the drop-down list.
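Alternatively, the same port can be opened from the command line. The following is a sketch using
firewall-cmd run as root, with the second command making the change persistent across firewall
reloads:

~]# firewall-cmd --add-port=123/udp
~]# firewall-cmd --permanent --add-port=123/udp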

15.16. Configure ntpdate Servers


The purpose of the ntpdate service is to set the clock during system boot. This was used previously to
ensure that the services started after ntpdate would have the correct time and not observe a jump in
the clock. The use of ntpdate and the list of step-tickers is considered deprecated and so Red Hat
Enterprise Linux 7 uses the -g option to the ntpd command by default and not ntpdate.

The ntpdate service in Red Hat Enterprise Linux 7 is mostly useful only when used alone without
ntpd. With systemd, which starts services in parallel, enabling the ntpdate service will not ensure that
other services started after it will have correct time unless they specify an ordering dependency on
time-sync.target, which is provided by the ntpdate service. The ntp-wait service (in the ntp-perl
subpackage) provides the time-sync.target for the ntpd service. In order to ensure a service starts
with correct time, add After=time-sync.target to the service and enable one of the services which
provide the target (ntpdate, sntp, or ntp-wait if ntpd is enabled). Some services on Red Hat
Enterprise Linux 7 have the dependency included by default (for example, dhcpd, dhcpd6, and
crond).

To check if the ntpdate service is enabled to run at system start, issue the following command:

~]$ systemctl status ntpdate

To enable the service to run at system start, issue the following command as root:

~]# systemctl enable ntpdate

In Red Hat Enterprise Linux 7 the default /etc/ntp/step-tickers file contains
0.rhel.pool.ntp.org. To configure additional ntpdate servers, using a text editor running as
root, edit /etc/ntp/step-tickers. The number of servers listed is not very important as ntpdate
will only use this to obtain the date information once when the system is starting. If you have an internal
time server then use that host name for the first line. An additional host on the second line as a backup is
sensible. The selection of backup servers and whether the second host is internal or external depends
on your risk assessment. For example, what is the chance of any problem affecting the first server also
affecting the second server? Would connectivity to an external server be more likely to be available than
connectivity to internal servers in the event of a network failure disrupting access to the first server?
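For example, a minimal /etc/ntp/step-tickers using hypothetical host names might contain:

clock1.example.com
clock2.example.com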

15.17. Configure NTP


To change the default configuration of the NT P service, use a text editor running as root user to edit
the /etc/ntp.conf file. This file is installed together with ntpd and is configured to use time servers
from the Red Hat pool by default. The man page ntp.conf(5) describes the command options that
can be used in the configuration file apart from the access and rate limiting commands which are
explained in the ntp_acc(5) man page.

15.17.1. Configure Access Control to an NTP Service


To restrict or control access to the NTP service running on a system, make use of the restrict
command in the ntp.conf file. See the commented out example:

# Hosts on local network are less restricted.


#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

The restrict command takes the following form:

restrict option

where option is one or more of:

ignore — All packets will be ignored, including ntpq and ntpdc queries.
kod — a “Kiss-o'-death” packet is to be sent to reduce unwanted queries.
limited — do not respond to time service requests if the packet violates the rate limit default
values or those specified by the discard command. ntpq and ntpdc queries are not affected. For
more information on the discard command and the default values, see Section 15.17.2, “Configure
Rate Limiting Access to an NTP Service”.
lowpriotrap — traps set by matching hosts to be low priority.


nomodify — prevents any changes to the configuration.


noquery — prevents ntpq and ntpdc queries, but not time queries, from being answered.
nopeer — prevents a peer association being formed.
noserve — deny all packets except ntpq and ntpdc queries.
notrap — prevents ntpdc control message protocol traps.
notrust — deny packets that are not cryptographically authenticated.
ntpport — modify the match algorithm to only apply the restriction if the source port is the standard
NTP UDP port 123.
version — deny packets that do not match the current NTP version.

To configure rate limiting access so that it does not respond at all to a query, the respective restrict
command has to have the limited option. If ntpd should reply with a KoD packet, the restrict
command needs to have both the limited and kod options.
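
For example, to rate limit queries and reply with KoD packets while keeping the other default restrictions,
a line such as the following (a sketch, not part of the default file) could be used:

restrict default limited kod nomodify notrap nopeer noquery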

The ntpq and ntpdc queries can be used in amplification attacks (see CVE-2013-5211 for more
details); do not remove the noquery option from the restrict default command on publicly
accessible systems.

15.17.2. Configure Rate Limiting Access to an NTP Service


To enable rate limiting access to the NTP service running on a system, add the limited option to the
restrict command as explained in Section 15.17.1, “Configure Access Control to an NTP Service”. If
you do not want to use the default discard parameters, then also use the discard command as
explained here.

The discard command takes the following form:

discard [average value] [minimum value] [monitor value]

average — specifies the minimum average packet spacing to be permitted; it accepts an argument
in log2 seconds. The default value is 3 (2³ equates to 8 seconds).
minimum — specifies the minimum packet spacing to be permitted; it accepts an argument in log2
seconds. The default value is 1 (2¹ equates to 2 seconds).
monitor — specifies the discard probability for packets once the permitted rate limits have been
exceeded. The default value is 3000 seconds. This option is intended for servers that receive 1000 or
more requests per second.

Examples of the discard command are as follows:

discard average 4

discard average 4 minimum 2

15.17.3. Adding a Peer Address


To add the address of a peer, that is to say, the address of a server running an NTP service of the same
stratum, make use of the peer command in the ntp.conf file.

The peer command takes the following form:


peer address

where address is an IP unicast address or a DNS resolvable name. The address must only be that of a
system known to be a member of the same stratum. Peers should have at least one time source that is
different to each other. Peers are normally systems under the same administrative control.

15.17.4. Adding a Server Address


To add the address of a server, that is to say, the address of a server running an NTP service of a higher
stratum, make use of the server command in the ntp.conf file.

The server command takes the following form:

server address

where address is an IP unicast address or a DNS resolvable name, that is, the address of a remote
reference server or local reference clock from which packets are to be received.
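An example of the command, using a hypothetical host name together with the iburst option seen in
the default configuration, is:

server ntp1.example.com iburst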

15.17.5. Adding a Broadcast or Multicast Server Address


To add a broadcast or multicast address for sending, that is to say, the address to broadcast or multicast
NTP packets to, make use of the broadcast command in the ntp.conf file.

The broadcast and multicast modes require authentication by default. See Section 15.6, “Authentication
Options for NTP”.

The broadcast command takes the following form:

broadcast address

where address is an IP broadcast or multicast address to which packets are sent.

This command configures a system to act as an NTP broadcast server. The address used must be a
broadcast or a multicast address. Broadcast address implies the IPv4 address 255.255.255.255. By
default, routers do not pass broadcast messages. The multicast address can be an IPv4 Class D
address, or an IPv6 address. The IANA has assigned the IPv4 multicast address 224.0.1.1 and the IPv6
address FF05::101 (site local) to NTP. Administratively scoped IPv4 multicast addresses can also be
used, as described in RFC 2365 Administratively Scoped IP Multicast.
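
For example, using a subnet broadcast address or the IANA-assigned NTP multicast address mentioned above (both illustrative only; substitute addresses appropriate to your network), entries could look as follows:

broadcast 192.168.1.255
broadcast 224.0.1.1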

15.17.6. Adding a Manycast Client Address


To add a manycast client address, that is to say, to configure a multicast address to be used for NTP
server discovery, make use of the manycastclient command in the ntp.conf file.

The manycastclient command takes the following form:

manycastclient address

where address is an IP multicast address from which packets are to be received. The client sends a
request to the address, selects the best servers from the responses, and ignores other servers. NTP
communication then uses unicast associations, as if the discovered NTP servers were listed in
ntp.conf.

This command configures a system to act as an NTP client. Systems can be both client and server at the
same time.
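
For example, using the administratively scoped multicast address that also appears in Section 15.17.12, “Configuring Symmetric Authentication Using a Key” (the address is illustrative only), an entry could look as follows:

manycastclient 239.255.254.254 key 30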

15.17.7. Adding a Broadcast Client Address


To add a broadcast client address, that is to say, to configure a broadcast address to be monitored for
broadcast NTP packets, make use of the broadcastclient command in the ntp.conf file.

The broadcastclient command takes the following form:

broadcastclient

This enables the receiving of broadcast messages and requires authentication by default. See Section 15.6,
“Authentication Options for NTP”.

This command configures a system to act as an NTP client. Systems can be both client and server at the
same time.

15.17.8. Adding a Manycast Server Address


To add a manycast server address, that is to say, to configure an address to allow the clients to discover
the server by multicasting NTP packets, make use of the manycastserver command in the ntp.conf
file.

The manycastserver command takes the following form:

manycastserver address

where address is the address to multicast to. This enables the sending of multicast messages and should
be used together with authentication to prevent service disruption.

This command configures a system to act as an NTP server. Systems can be both client and server at
the same time.
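
For example, using the same illustrative multicast address as above, and with authentication configured as described in Section 15.6, “Authentication Options for NTP”, the entry could look as follows:

manycastserver 239.255.254.254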

15.17.9. Adding a Multicast Client Address


To add a multicast client address, that is to say, to configure a multicast address to be monitored for
multicast NTP packets, make use of the multicastclient command in the ntp.conf file.

The multicastclient command takes the following form:

multicastclient address

where address is the address to subscribe to. This enables the receiving of multicast messages and should
be used together with authentication to prevent service disruption.

This command configures a system to act as an NTP client. Systems can be both client and server at the
same time.
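
For example, to listen on the IANA-assigned NTP multicast address mentioned in Section 15.17.5 (illustrative only; use the address your multicast server actually sends to), the entry could look as follows:

multicastclient 224.0.1.1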

15.17.10. Configuring the Burst Option


Using the burst option against a public server is considered abuse. Do not use this option with public
NTP servers. Use it only for applications within your own organization.

To increase the average quality of time offset statistics, add the following option to the end of a server
command:


burst

When the server is responding, send a burst of eight packets instead of one at each poll interval. This option
is for use with the server command to improve the average quality of the time offset calculations.
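
For example, within your own organization, the option could be appended to a server entry as follows (the server name is a placeholder):

server ntp1.example.com burst   # ntp1.example.com is a placeholder for an internal server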

15.17.11. Configuring the iburst Option


To improve the time taken for initial synchronization, add the following option to the end of a server
command:

iburst

At every poll interval, send a burst of eight packets instead of one. When the server is not responding,
packets are sent 16s apart. When the server responds, packets are sent every 2s. For use with the
server command to improve the time taken for initial synchronization. This is now a default option in the
configuration file.
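
For example (the server name is a placeholder):

server ntp1.example.com iburst   # ntp1.example.com is a placeholder server name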

15.17.12. Configuring Symmetric Authentication Using a Key


To configure symmetric authentication using a key, add the following option to the end of a server or
peer command:

key number

where number is in the range 1 to 65534 inclusive. This option enables the use of a message
authentication code (MAC) in packets. This option is for use with the peer, server, broadcast, and
manycastclient commands.

The option can be used in the /etc/ntp.conf file as follows:

server 192.168.1.1 key 10


broadcast 192.168.1.255 key 20
manycastclient 239.255.254.254 key 30

See also Section 15.6, “Authentication Options for NTP”.

15.17.13. Configuring the Poll Interval


To change the default poll interval, add the following options to the end of a server or peer command:

minpoll value and maxpoll value

Options to change the default poll interval, where the interval in seconds will be calculated by raising 2 to
the power of value, in other words, the interval is expressed in log2 seconds. The default minpoll
value is 6 (2⁶ equates to 64s). The default value for maxpoll is 10, which equates to 1024s. Allowed
values are in the range 3 to 17 inclusive, which equates to 8s to 36.4h respectively. These options are
for use with the peer or server commands. Setting a shorter maxpoll may improve clock accuracy.
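
For example, to poll a server between every 16 seconds (2⁴) and every 256 seconds (2⁸), the entry could look as follows (the server name is a placeholder):

server ntp1.example.com minpoll 4 maxpoll 8   # ntp1.example.com is a placeholder server name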

15.17.14. Configuring Server Preference


To specify that a particular server should be preferred above others of similar statistical quality, add the
following option to the end of a server or peer command:

prefer


Use this server for synchronization in preference to other servers of similar statistical quality. This option
is for use with the peer or server commands.
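
For example (the server name is a placeholder):

server ntp1.example.com iburst prefer   # ntp1.example.com is a placeholder server name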

15.17.15. Configuring the Time-to-Live for NTP Packets


To specify that a particular time-to-live (TTL) value should be used in place of the default, add the
following option to the end of a server or peer command:

ttl value

Specify the time-to-live value to be used in packets sent by broadcast servers and multicast NTP servers.
Specify the maximum time-to-live value to use for the “expanding ring search” by a manycast client. The
default value is 127.
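
For example, to limit multicast NTP packets to a time-to-live of 4, using the NTP multicast address from Section 15.17.5 (illustrative only):

broadcast 224.0.1.1 ttl 4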

15.17.16. Configuring the NTP Version to Use


To specify that a particular version of NTP should be used in place of the default, add the following
option to the end of a server or peer command:

version value

Specify the version of NTP set in created NTP packets. The value can be in the range 1 to 4. The default
is 4.
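
For example, to force NTP version 3 packets for one association (the server name is a placeholder):

server ntp1.example.com version 3   # ntp1.example.com is a placeholder server name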

15.18. Configuring the Hardware Clock Update


To configure the system clock to update the hardware clock, also known as the real-time clock (RTC),
once after executing ntpdate, add the following line to /etc/sysconfig/ntpdate:

SYNC_HWCLOCK=yes

To update the hardware clock from the system clock, issue the following command as root:

~]# hwclock --systohc

When the system clock is being synchronized by ntpd, the kernel will in turn update the RTC every 11
minutes automatically.

15.19. Configuring Clock Sources


To list the available clock sources on your system, issue the following commands:

~]$ cd /sys/devices/system/clocksource/clocksource0/
clocksource0]$ cat available_clocksource
kvm-clock tsc hpet acpi_pm
clocksource0]$ cat current_clocksource
kvm-clock

In the above example, the kernel is using kvm-clock. This was selected at boot time as this is a virtual
machine.

To override the default clock source, add a line similar to the following in grub.conf:


clocksource=tsc

The available clock sources are architecture dependent.

15.20. Additional Resources


The following sources of information provide additional resources regarding NTP and ntpd.

15.20.1. Installed Documentation


ntpd(8) man page — Describes ntpd in detail, including the command line options.
ntp.conf(5) man page — Contains information on how to configure associations with servers and
peers.
ntpq(8) man page — Describes the NTP query utility for monitoring and querying an NTP server.
ntpdc(8) man page — Describes the ntpdc utility for querying and changing the state of ntpd.
ntp_auth(5) man page — Describes authentication options, commands, and key management for
ntpd.
ntp_keygen(8) man page — Describes generating public and private keys for ntpd.
ntp_acc(5) man page — Describes access control options using the restrict command.
ntp_mon(5) man page — Describes monitoring options for the gathering of statistics.
ntp_clock(5) man page — Describes commands for configuring reference clocks.
ntp_misc(5) man page — Describes miscellaneous options.

15.20.2. Useful Websites


http://doc.ntp.org/
The NTP Documentation Archive

http://www.eecis.udel.edu/~mills/ntp.html
Network Time Synchronization Research Project.

http://www.eecis.udel.edu/~mills/ntp/html/manyopt.html
Information on Automatic Server Discovery in NTPv4.

Chapter 16. Configuring PTP Using ptp4l

16.1. Introduction to PTP


The Precision Time Protocol (PTP) is a protocol used to synchronize clocks in a network. When used in
conjunction with hardware support, PTP is capable of sub-microsecond accuracy, which is far better than
is normally obtainable with NTP. PTP support is divided between the kernel and user space. The kernel
in Red Hat Enterprise Linux includes support for PTP clocks, which are provided by network drivers.
The actual implementation of the protocol is known as linuxptp, a PTPv2 implementation according to
the IEEE standard 1588 for Linux.

The linuxptp package includes the ptp4l and phc2sys programs for clock synchronization. The ptp4l
program implements the PTP boundary clock and ordinary clock. With hardware time stamping, it is used
to synchronize the PTP hardware clock to the master clock, and with software time stamping it
synchronizes the system clock to the master clock. The phc2sys program is needed only with hardware
time stamping, for synchronizing the system clock to the PTP hardware clock on the network interface
card (NIC).

16.1.1. Understanding PTP


The clocks synchronized by PTP are organized in a master-slave hierarchy. The slaves are
synchronized to their masters, which may themselves be slaves to their own masters. The hierarchy is created and
updated automatically by the best master clock (BMC) algorithm, which runs on every clock. When a
clock has only one port, it can be master or slave; such a clock is called an ordinary clock (OC). A clock
with multiple ports can be master on one port and slave on another; such a clock is called a boundary
clock (BC). The top-level master is called the grandmaster clock, which can be synchronized by using a
Global Positioning System (GPS) time source. By using a GPS-based time source, disparate networks
can be synchronized with a high degree of accuracy.


Figure 16.1. PTP grandmaster, boundary, and slave Clocks

16.1.2. Advantages of PTP


One of the main advantages that PTP has over the Network Time Protocol (NTP) is the hardware support
present in various network interface controllers (NICs) and network switches. This specialized hardware
allows PTP to account for delays in message transfer, and greatly improves the accuracy of time
synchronization. While it is possible to use non-PTP enabled hardware components within the network,
this will often cause an increase in jitter or introduce an asymmetry in the delay, resulting in
synchronization inaccuracies, which add up when multiple non-PTP aware components are used in the
communication path. To achieve the best possible accuracy, it is recommended that all networking
components between PTP clocks are PTP hardware enabled. Time synchronization in larger networks
where not all of the networking hardware supports PTP might be better suited for NTP.

With hardware PTP support, the NIC has its own on-board clock, which is used to time stamp the
received and transmitted PTP messages. It is this on-board clock that is synchronized to the PTP
master, and the computer's system clock is synchronized to the PTP hardware clock on the NIC. With
software PTP support, the system clock is used to time stamp the PTP messages and it is synchronized
to the PTP master directly. Hardware PTP support provides better accuracy since the NIC can time
stamp the PTP packets at the exact moment they are sent and received, while software PTP support
requires additional processing of the PTP packets by the operating system.

16.2. Using PTP


In order to use PTP, the kernel network driver for the intended interface has to support either software or
hardware time stamping capabilities.

16.2.1. Checking for Driver and Hardware Support


In addition to hardware time stamping support being present in the driver, the NIC must also be capable
of supporting this functionality in the physical hardware. The best way to verify the time stamping
capabilities of a particular driver and NIC is to use the ethtool utility to query the interface as follows:

~]# ethtool -T eth3


Time stamping parameters for eth3:
Capabilities:
hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE)
software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE)
hardware-receive (SOF_TIMESTAMPING_RX_HARDWARE)
software-receive (SOF_TIMESTAMPING_RX_SOFTWARE)
software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
hardware-raw-clock (SOF_TIMESTAMPING_RAW_HARDWARE)
PTP Hardware Clock: 0
Hardware Transmit Timestamp Modes:
off (HWTSTAMP_TX_OFF)
on (HWTSTAMP_TX_ON)
Hardware Receive Filter Modes:
none (HWTSTAMP_FILTER_NONE)
all (HWTSTAMP_FILTER_ALL)

Where eth3 is the interface you wish to check.

For software time stamping support, the parameters list should include:

SOF_TIMESTAMPING_SOFTWARE

SOF_TIMESTAMPING_TX_SOFTWARE

SOF_TIMESTAMPING_RX_SOFTWARE

For hardware time stamping support, the parameters list should include:

SOF_TIMESTAMPING_RAW_HARDWARE

SOF_TIMESTAMPING_TX_HARDWARE

SOF_TIMESTAMPING_RX_HARDWARE

16.2.2. Installing PTP


The kernel in Red Hat Enterprise Linux includes support for PTP. User space support is provided by
the tools in the linuxptp package. To install linuxptp, issue the following command as root:


~]# yum install linuxptp

This will install ptp4l and phc2sys.

Do not run more than one service to set the system clock's time at the same time. If you intend to serve
PTP time using NTP, see Section 16.7, “Serving PTP Time with NTP”.

16.2.3. Starting ptp4l


The ptp4l program tries to use hardware time stamping by default. To use ptp4l with hardware time
stamping capable drivers and NICs, you must provide the network interface to use with the -i option.
Enter the following command as root:

~]# ptp4l -i eth3 -m

Where eth3 is the interface you wish to configure. Below is example output from ptp4l when the PTP
clock on the NIC is synchronized to a master:

~]# ptp4l -i eth3 -m


selected eth3 as PTP clock
port 1: INITIALIZING to LISTENING on INITIALIZE
port 0: INITIALIZING to LISTENING on INITIALIZE
port 1: new foreign master 00a069.fffe.0b552d-1
selected best master clock 00a069.fffe.0b552d
port 1: LISTENING to UNCALIBRATED on RS_SLAVE
master offset -23947 s0 freq +0 path delay 11350
master offset -28867 s0 freq +0 path delay 11236
master offset -32801 s0 freq +0 path delay 10841
master offset -37203 s1 freq +0 path delay 10583
master offset -7275 s2 freq -30575 path delay 10583
port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
master offset -4552 s2 freq -30035 path delay 10385

The master offset value is the measured offset from the master in nanoseconds. The s0, s1, s2 strings
indicate the different clock servo states: s0 is unlocked, s1 is clock step, and s2 is locked. Once the
servo is in the locked state (s2), the clock will not be stepped (only slowly adjusted) unless the
pi_offset_const option is set to a positive value in the configuration file (described in the ptp4l(8)
man page). The freq value is the frequency adjustment of the clock in parts per billion (ppb). The path
delay value is the estimated delay of the synchronization messages sent from the master in
nanoseconds. Port 0 is a Unix domain socket used for local PTP management. Port 1 is the eth3
interface (based on the example above). INITIALIZING, LISTENING, UNCALIBRATED and SLAVE are
some of the possible port states, which change on the INITIALIZE, RS_SLAVE, and
MASTER_CLOCK_SELECTED events. In the last state change message, the port state changed from
UNCALIBRATED to SLAVE, indicating successful synchronization with a PTP master clock.

The ptp4l program can also be started as a service by running:

~]# systemctl start ptp4l

When running as a service, options are specified in the /etc/sysconfig/ptp4l file. More information
on the different ptp4l options and the configuration file settings can be found in the ptp4l(8) man
page.

By default, messages are sent to /var/log/messages. However, specifying the -m option enables
logging to standard output, which can be useful for debugging purposes.


To enable software time stamping, the -S option needs to be used as follows:

~]# ptp4l -i eth3 -m -S

16.2.3.1. Selecting a Delay Measurement Mechanism


There are two different delay measurement mechanisms and they can be selected by means of an
option added to the ptp4l command as follows:
-P
The -P selects the peer-to-peer (P2P) delay measurement mechanism.

The P2P mechanism is preferred as it reacts to changes in the network topology faster, and
may be more accurate in measuring the delay, than other mechanisms. The P2P mechanism
can only be used in topologies where each port exchanges PTP messages with at most one
other P2P port. It must be supported and used by all hardware, including transparent clocks, on
the communication path.

-E
The -E selects the end-to-end (E2E) delay measurement mechanism. This is the default.

The E2E mechanism is also referred to as the delay “request-response” mechanism.

-A
The -A enables automatic selection of the delay measurement mechanism.

The automatic option starts ptp4l in E2E mode. It will change to P2P mode if a peer delay
request is received.

Note

All clocks on a single PTP communication path must use the same mechanism to measure the
delay. A warning will be printed when a peer delay request is received on a port using the E2E
mechanism. A warning will be printed when an E2E delay request is received on a port using the
P2P mechanism.

16.3. Specifying a Configuration File


The command line options and other options, which cannot be set on the command line, can be set in an
optional configuration file.

No configuration file is read by default, so it needs to be specified at runtime with the -f option. For
example:

~]# ptp4l -f /etc/ptp4l.conf

A configuration file equivalent to the -i eth3 -m -S options shown above would look as follows:


~]# cat /etc/ptp4l.conf


[global]
verbose 1
time_stamping software
[eth3]

16.4. Using the PTP Management Client


The PTP management client, pmc, can be used to obtain additional information from ptp4l as follows:

~]# pmc -u -b 0 'GET CURRENT_DATA_SET'


sending: GET CURRENT_DATA_SET
90e2ba.fffe.20c7f8-0 seq 0 RESPONSE MANAGMENT CURRENT_DATA_SET
stepsRemoved 1
offsetFromMaster -142.0
meanPathDelay 9310.0

~]# pmc -u -b 0 'GET TIME_STATUS_NP'


sending: GET TIME_STATUS_NP
90e2ba.fffe.20c7f8-0 seq 0 RESPONSE MANAGMENT TIME_STATUS_NP
master_offset 310
ingress_time 1361545089345029441
cumulativeScaledRateOffset +1.000000000
scaledLastGmPhaseChange 0
gmTimeBaseIndicator 0
lastGmPhaseChange 0x0000'0000000000000000.0000
gmPresent true
gmIdentity 00a069.fffe.0b552d

Setting the -b option to zero limits the boundary to the locally running ptp4l instance. A larger boundary
value will retrieve the information also from PTP nodes further from the local clock. The retrievable
information includes:

stepsRemoved is the number of communication paths to the grandmaster clock.

offsetFromMaster and master_offset are the last measured offset of the clock from the master in
nanoseconds.
meanPathDelay is the estimated delay of the synchronization messages sent from the master in
nanoseconds.
If gmPresent is true, the PTP clock is synchronized to a master; the local clock is not the
grandmaster clock.
gmIdentity is the grandmaster's identity.

For a full list of pmc commands, type the following as root:

~]# pmc help

Additional information is available in the pmc(8) man page.

16.5. Synchronizing the Clocks


The phc2sys program is used to synchronize the system clock to the PTP hardware clock (PHC) on the
NIC. To start phc2sys, where eth3 is the interface with the PTP hardware clock, enter the following
command as root:

~]# phc2sys -s eth3 -w

The -w option waits for the running ptp4l application to synchronize the PTP clock and then retrieves the
TAI to UTC offset from ptp4l.

Normally, PTP operates in the International Atomic Time (TAI) timescale, while the system clock is kept
in Coordinated Universal Time (UTC). The current offset between the TAI and UTC timescales is 35
seconds. The offset changes when leap seconds are inserted or deleted, which typically happens every
few years. The -O option needs to be used to set this offset manually when the -w option is not used, as
follows:

~]# phc2sys -s eth3 -O -35

Once the phc2sys servo is in a locked state, the clock will not be stepped, unless the -S option is used.
This means that the phc2sys program should be started after the ptp4l program has synchronized the
PTP hardware clock. However, with -w, it is not necessary to start phc2sys after ptp4l as it will wait for it
to synchronize the clock.

The phc2sys program can also be started as a service by running:

~]# systemctl start phc2sys

When running as a service, options are specified in the /etc/sysconfig/phc2sys file. More
information on the different phc2sys options can be found in the phc2sys(8) man page.

Note that the examples in this section assume the command is run on a slave system or slave port.

16.6. Verifying Time Synchronization


When PTP time synchronization is working properly, new messages with offsets and frequency
adjustments will be printed periodically to the ptp4l and phc2sys (if hardware time stamping is used)
outputs. These values will eventually converge after a short period of time. These messages can be
seen in the /var/log/messages file. An example of the output follows:


ptp4l[352.359]: selected /dev/ptp0 as PTP clock


ptp4l[352.361]: port 1: INITIALIZING to LISTENING on INITIALIZE
ptp4l[352.361]: port 0: INITIALIZING to LISTENING on INITIALIZE
ptp4l[353.210]: port 1: new foreign master 00a069.fffe.0b552d-1
ptp4l[357.214]: selected best master clock 00a069.fffe.0b552d
ptp4l[357.214]: port 1: LISTENING to UNCALIBRATED on RS_SLAVE
ptp4l[359.224]: master offset 3304 s0 freq +0 path delay 9202
ptp4l[360.224]: master offset 3708 s1 freq -29492 path delay 9202
ptp4l[361.224]: master offset -3145 s2 freq -32637 path delay 9202
ptp4l[361.224]: port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
ptp4l[362.223]: master offset -145 s2 freq -30580 path delay 9202
ptp4l[363.223]: master offset 1043 s2 freq -29436 path delay 8972
ptp4l[364.223]: master offset 266 s2 freq -29900 path delay 9153
ptp4l[365.223]: master offset 430 s2 freq -29656 path delay 9153
ptp4l[366.223]: master offset 615 s2 freq -29342 path delay 9169
ptp4l[367.222]: master offset -191 s2 freq -29964 path delay 9169
ptp4l[368.223]: master offset 466 s2 freq -29364 path delay 9170
ptp4l[369.235]: master offset 24 s2 freq -29666 path delay 9196
ptp4l[370.235]: master offset -375 s2 freq -30058 path delay 9238
ptp4l[371.235]: master offset 285 s2 freq -29511 path delay 9199
ptp4l[372.235]: master offset -78 s2 freq -29788 path delay 9204

An example of the phc2sys output follows:

phc2sys[526.527]: Waiting for ptp4l...


phc2sys[527.528]: Waiting for ptp4l...
phc2sys[528.528]: phc offset 55341 s0 freq +0 delay 2729
phc2sys[529.528]: phc offset 54658 s1 freq -37690 delay 2725
phc2sys[530.528]: phc offset 888 s2 freq -36802 delay 2756
phc2sys[531.528]: phc offset 1156 s2 freq -36268 delay 2766
phc2sys[532.528]: phc offset 411 s2 freq -36666 delay 2738
phc2sys[533.528]: phc offset -73 s2 freq -37026 delay 2764
phc2sys[534.528]: phc offset 39 s2 freq -36936 delay 2746
phc2sys[535.529]: phc offset 95 s2 freq -36869 delay 2733
phc2sys[536.529]: phc offset -359 s2 freq -37294 delay 2738
phc2sys[537.529]: phc offset -257 s2 freq -37300 delay 2753
phc2sys[538.529]: phc offset 119 s2 freq -37001 delay 2745
phc2sys[539.529]: phc offset 288 s2 freq -36796 delay 2766
phc2sys[540.529]: phc offset -149 s2 freq -37147 delay 2760
phc2sys[541.529]: phc offset -352 s2 freq -37395 delay 2771
phc2sys[542.529]: phc offset 166 s2 freq -36982 delay 2748
phc2sys[543.529]: phc offset 50 s2 freq -37048 delay 2756
phc2sys[544.530]: phc offset -31 s2 freq -37114 delay 2748
phc2sys[545.530]: phc offset -333 s2 freq -37426 delay 2747
phc2sys[546.530]: phc offset 194 s2 freq -36999 delay 2749

For ptp4l there is also a directive, summary_interval, to reduce the output and print only statistics,
as normally it will print a message every second or so. For example, to reduce the output to every 1024
(2¹⁰) seconds, add the following line to the /etc/ptp4l.conf file:

summary_interval 10

An example of the ptp4l output, with summary_interval 6, follows:


ptp4l: [615.253] selected /dev/ptp0 as PTP clock


ptp4l: [615.255] port 1: INITIALIZING to LISTENING on INITIALIZE
ptp4l: [615.255] port 0: INITIALIZING to LISTENING on INITIALIZE
ptp4l: [615.564] port 1: new foreign master 00a069.fffe.0b552d-1
ptp4l: [619.574] selected best master clock 00a069.fffe.0b552d
ptp4l: [619.574] port 1: LISTENING to UNCALIBRATED on RS_SLAVE
ptp4l: [623.573] port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
ptp4l: [684.649] rms 669 max 3691 freq -29383 ± 3735 delay 9232 ± 122
ptp4l: [748.724] rms 253 max 588 freq -29787 ± 221 delay 9219 ± 158
ptp4l: [812.793] rms 287 max 673 freq -29802 ± 248 delay 9211 ± 183
ptp4l: [876.853] rms 226 max 534 freq -29795 ± 197 delay 9221 ± 138
ptp4l: [940.925] rms 250 max 562 freq -29801 ± 218 delay 9199 ± 148
ptp4l: [1004.988] rms 226 max 525 freq -29802 ± 196 delay 9228 ± 143
ptp4l: [1069.065] rms 300 max 646 freq -29802 ± 259 delay 9214 ± 176
ptp4l: [1133.125] rms 226 max 505 freq -29792 ± 197 delay 9225 ± 159
ptp4l: [1197.185] rms 244 max 688 freq -29790 ± 211 delay 9201 ± 162

To reduce the output from phc2sys, it can be called with the -u option as follows:

~]# phc2sys -u summary-updates

Where summary-updates is the number of clock updates to include in summary statistics. An example
follows:

~]# phc2sys -s eth3 -w -m -u 60


phc2sys[700.948]: rms 1837 max 10123 freq -36474 ± 4752 delay 2752 ± 16
phc2sys[760.954]: rms 194 max 457 freq -37084 ± 174 delay 2753 ± 12
phc2sys[820.963]: rms 211 max 487 freq -37085 ± 185 delay 2750 ± 19
phc2sys[880.968]: rms 183 max 440 freq -37102 ± 164 delay 2734 ± 91
phc2sys[940.973]: rms 244 max 584 freq -37095 ± 216 delay 2748 ± 16
phc2sys[1000.979]: rms 220 max 573 freq -36666 ± 182 delay 2747 ± 43
phc2sys[1060.984]: rms 266 max 675 freq -36759 ± 234 delay 2753 ± 17

16.7. Serving PTP Time with NTP


The ntpd daemon can be configured to distribute the time from the system clock synchronized by ptp4l
or phc2sys by using the LOCAL reference clock driver. To prevent ntpd from adjusting the system
clock, the ntp.conf file must not specify any NTP servers. The following is a minimal example of
ntp.conf:

~]# cat /etc/ntp.conf


server 127.127.1.0
fudge 127.127.1.0 stratum 0

Note

When the DHCP client program, dhclient, receives a list of NTP servers from the DHCP server, it
adds them to ntp.conf and restarts the service. To disable that feature, add PEERNTP=no to
/etc/sysconfig/network.

16.8. Serving NTP Time with PTP


NTP to PTP synchronization in the opposite direction is also possible. When ntpd is used to synchronize
the system clock, ptp4l can be configured with the priority1 option (or other clock options included in
the best master clock algorithm) to be the grandmaster clock and distribute the time from the system
clock via PTP:

~]# cat /etc/ptp4l.conf


[global]
priority1 127
[eth3]
# ptp4l -f /etc/ptp4l.conf

With hardware time stamping, phc2sys needs to be used to synchronize the PTP hardware clock to the
system clock:

~]# phc2sys -c eth3 -s CLOCK_REALTIME -w

To prevent quick changes in the PTP clock's frequency, the synchronization to the system clock can be
loosened by using smaller P (proportional) and I (integral) constants of the PI servo:

~]# phc2sys -c eth3 -s CLOCK_REALTIME -w -P 0.01 -I 0.0001

16.9. Improving Accuracy


Test results indicate that disabling the tickless kernel capability can significantly improve the stability of
the system clock, and thus improve the PTP synchronization accuracy (at the cost of increased power
consumption). The kernel tickless mode can be disabled by adding nohz=off to the kernel boot option
parameters.
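
For example, assuming the GRUB 2 boot loader is in use, nohz=off could be appended to the GRUB_CMDLINE_LINUX line in /etc/default/grub and the boot configuration regenerated; the other kernel parameters shown are placeholders and will differ on your system:

GRUB_CMDLINE_LINUX="rhgb quiet nohz=off"

~]# grub2-mkconfig -o /boot/grub2/grub.cfg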

16.10. Additional Resources


The following sources of information provide additional resources regarding PTP and the ptp4l tools.

16.10.1. Installed Documentation


ptp4l(8) man page — Describes ptp4l options including the format of the configuration file.
pmc(8) man page — Describes the PTP management client and its command options.
phc2sys(8) man page — Describes a tool for synchronizing the system clock to a PTP hardware
clock (PHC).

16.10.2. Useful Websites


http://www.nist.gov/el/isd/ieee/ieee1588.cfm
The IEEE 1588 Standard.

Part V. Monitoring and Automation


This part describes various tools that allow system administrators to monitor system performance,
automate system tasks, and report bugs.


Chapter 17. System Monitoring Tools


In order to configure the system, system administrators often need to determine the amount of free
memory, how much free disk space is available, how the hard drive is partitioned, or what processes are
running.

17.1. Viewing System Processes

17.1.1. Using the ps Command


The ps command allows you to display information about running processes. It produces a static list, that
is, a snapshot of what is running when you execute the command. If you want a constantly updated list of
running processes, use the top command or the System Monitor application instead.

To list all processes that are currently running on the system including processes owned by other users,
type the following at a shell prompt:

ps ax

For each listed process, the ps ax command displays the process ID (PID), the terminal that is
associated with it (TTY), the current status (STAT), the cumulated CPU time (TIME), and the name of
the executable file (COMMAND). For example:

~]$ ps ax
PID TTY STAT TIME COMMAND
1 ? Ss 0:01 /sbin/init
2 ? S 0:00 [kthreadd]
3 ? S 0:00 [migration/0]
4 ? S 0:00 [ksoftirqd/0]
5 ? S 0:00 [migration/0]
6 ? S 0:00 [watchdog/0]
[output truncated]

To display the owner alongside each process, use the following command:

ps aux

Apart from the information provided by the ps ax command, ps aux displays the effective username of
the process owner (USER), the percentage of the CPU (%CPU) and memory (%MEM) usage, the virtual
memory size in kilobytes (VSZ), the non-swapped physical memory size in kilobytes (RSS), and the time
or date the process was started. For instance:

~]$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.1 19404 832 ? Ss Mar02 0:01 /sbin/init
root 2 0.0 0.0 0 0 ? S Mar02 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? S Mar02 0:00 [migration/0]
root 4 0.0 0.0 0 0 ? S Mar02 0:00 [ksoftirqd/0]
root 5 0.0 0.0 0 0 ? S Mar02 0:00 [migration/0]
root 6 0.0 0.0 0 0 ? R Mar02 0:00 [watchdog/0]
[output truncated]

You can also use the ps command in a combination with grep to see if a particular process is running.
For example, to determine if Emacs is running, type:


~]$ ps ax | grep emacs


12056 pts/3 S+ 0:00 emacs
12060 pts/2 S+ 0:00 grep --color=auto emacs

For a complete list of available command line options, refer to the ps(1) manual page.

17.1.2. Using the top Command


The top command displays a real-time list of processes that are running on the system. It also displays
additional information about the system uptime, current CPU and memory usage, or total number of
running processes, and allows you to perform actions such as sorting the list or killing a process.

To run the top command, type the following at a shell prompt:

top

For each listed process, the top command displays the process ID (PID), the effective username of the
process owner (USER), the priority (PR), the nice value (NI), the amount of virtual memory the process
uses (VIRT), the amount of non-swapped physical memory the process uses (RES), the amount of
shared memory the process uses (SHR), the percentage of the CPU (%CPU) and memory (%MEM) usage,
the cumulated CPU time (TIME+), and the name of the executable file (COMMAND). For example:

~]$ top
top - 02:19:11 up 4 days, 10:37, 5 users, load average: 0.07, 0.13, 0.09
Tasks: 160 total, 1 running, 159 sleeping, 0 stopped, 0 zombie
Cpu(s): 10.7%us, 1.0%sy, 0.0%ni, 88.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 760752k total, 644360k used, 116392k free, 3988k buffers
Swap: 1540088k total, 76648k used, 1463440k free, 196832k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND


14401 jhradile 20 0 313m 10m 5732 S 5.6 1.4 6:27.29 gnome-system-mo
1764 root 20 0 133m 23m 4756 S 5.3 3.2 6:32.66 Xorg
13865 jhradile 20 0 1625m 177m 6628 S 0.7 23.8 0:57.26 java
20 root 20 0 0 0 0 S 0.3 0.0 4:44.39 ata/0
2085 root 20 0 40396 348 276 S 0.3 0.0 1:57.13 udisks-daemon
1 root 20 0 19404 832 604 S 0.0 0.1 0:01.21 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthreadd
3 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/0
4 root 20 0 0 0 0 S 0.0 0.0 0:00.02 ksoftirqd/0
5 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/0
6 root RT 0 0 0 0 S 0.0 0.0 0:00.00 watchdog/0
7 root 20 0 0 0 0 S 0.0 0.0 0:01.00 events/0
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuset
9 root 20 0 0 0 0 S 0.0 0.0 0:00.00 khelper
10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 netns
11 root 20 0 0 0 0 S 0.0 0.0 0:00.00 async/mgr
12 root 20 0 0 0 0 S 0.0 0.0 0:00.00 pm
[output truncated]

Table 17.1, “Interactive top commands” contains useful interactive commands that you can use with top.
For more information, refer to the top(1) manual page.


Table 17.1. Interactive top commands

Command Description
Enter, Space Immediately refreshes the display.
h, ? Displays a help screen.
k Kills a process. You are prompted for the process ID and the signal to send
to it.
n Changes the number of displayed processes. You are prompted to enter the
number.
u Sorts the list by user.
M Sorts the list by memory usage.
P Sorts the list by CPU usage.
q Terminates the utility and returns to the shell prompt.

17.1.3. Using the System Monitor Tool


The Processes tab of the System Monitor tool allows you to view, search for, change the priority of,
and kill processes from the graphical user interface.

To start the System Monitor tool, either select Applications → System Tools → System Monitor from
the panel, or type gnome-system-monitor at a shell prompt. Then click the Processes tab to view
the list of running processes.


Figure 17.1. System Monitor — Processes

For each listed process, the System Monitor tool displays its name (Process Name), current status
(Status), percentage of the CPU usage (% CPU), nice value (Nice), process ID (ID), memory usage
(Memory), the channel the process is waiting in (Waiting Channel), and additional details about the
session (Session). To sort the information by a specific column in ascending order, click the name of
that column. Click the name of the column again to toggle the sort between ascending and descending
order.

By default, the System Monitor tool displays a list of processes that are owned by the current user.
Selecting various options from the View menu allows you to:

view only active processes,


view all processes,
view your processes,
view process dependencies,
view a memory map of a selected process,
view the files opened by a selected process, and
refresh the list of processes.

Additionally, various options in the Edit menu allow you to:

stop a process,
continue running a stopped process,
end a process,
kill a process,
change the priority of a selected process, and
edit the System Monitor preferences, such as the refresh interval for the list of processes, or what
information to show.

You can also end a process by selecting it from the list and clicking the End Process button.

17.2. Viewing Memory Usage

17.2.1. Using the free Command


The free command allows you to display the amount of free and used memory on the system. To do
so, type the following at a shell prompt:

free

The free command provides information about both the physical memory (Mem) and swap space
(Swap). It displays the total amount of memory (total), as well as the amount of memory that is in use
(used), free (free), shared (shared), in kernel buffers (buffers), and cached (cached). For
example:


~]$ free
total used free shared buffers cached
Mem: 760752 661332 99420 0 6476 317200
-/+ buffers/cache: 337656 423096
Swap: 1540088 283652 1256436

By default, free displays the values in kilobytes. To display the values in megabytes, supply the -m
command line option:

free -m

For instance:

~]$ free -m
total used free shared buffers cached
Mem: 742 646 96 0 6 309
-/+ buffers/cache: 330 412
Swap: 1503 276 1227

For a complete list of available command line options, refer to the free(1) manual page.

17.2.2. Using the System Monitor Tool


The Resources tab of the System Monitor tool allows you to view the amount of free and used
memory on the system.

To start the System Monitor tool, either select Applications → System Tools → System Monitor from
the panel, or type gnome-system-monitor at a shell prompt. Then click the Resources tab to view
the system's memory usage.


Figure 17.2. System Monitor — Resources

In the Memory and Swap History section, the System Monitor tool displays a graphical
representation of the memory and swap usage history, as well as the total amount of the physical
memory (Memory) and swap space (Swap) and how much of it is in use.

17.3. Viewing CPU Usage

17.3.1. Using the System Monitor Tool


The Resources tab of the System Monitor tool allows you to view the current CPU usage on the
system.

To start the System Monitor tool, either select Applications → System Tools → System Monitor from
the panel, or type gnome-system-monitor at a shell prompt. Then click the Resources tab to view
the system's CPU usage.


Figure 17.3. System Monitor — Resources

In the CPU History section, the System Monitor tool displays a graphical representation of the CPU
usage history and shows the percentage of how much CPU is currently in use.

17.4. Viewing Block Devices and File Systems

17.4.1. Using the lsblk Command


The lsblk command allows you to display a list of available block devices. To do so, type the following
at a shell prompt:

lsblk

For each listed block device, the lsblk command displays the device name (NAME), major and minor
device number (MAJ:MIN), whether the device is removable (RM), its size (SIZE), whether the device is
read-only (RO), its type (TYPE), and where the device is mounted (MOUNTPOINT). For example:


~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 rom
|-vda1 252:1 0 500M 0 part /boot
`-vda2 252:2 0 19.5G 0 part
|-vg_kvm-lv_root (dm-0) 253:0 0 18G 0 lvm /
`-vg_kvm-lv_swap (dm-1) 253:1 0 1.5G 0 lvm [SWAP]

By default, lsblk lists block devices in a tree-like format. To display the information as an ordinary list,
add the -l command line option:

lsblk -l

For instance:

~]$ lsblk -l
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 rom
vda1 252:1 0 500M 0 part /boot
vda2 252:2 0 19.5G 0 part
vg_kvm-lv_root (dm-0) 253:0 0 18G 0 lvm /
vg_kvm-lv_swap (dm-1) 253:1 0 1.5G 0 lvm [SWAP]

For a complete list of available command line options, refer to the lsblk(8) manual page.

17.4.2. Using the blkid Command


The blkid command allows you to display information about available block devices. To do so, type the
following at a shell prompt as root:

blkid

For each listed block device, the blkid command displays available attributes such as its universally
unique identifier (UUID), file system type (TYPE), or volume label (LABEL). For example:

~]# blkid
/dev/vda1: UUID="7fa9c421-0054-4555-b0ca-b470a97a3d84" TYPE="ext4"
/dev/vda2: UUID="7IvYzk-TnnK-oPjf-ipdD-cofz-DXaJ-gPdgBW" TYPE="LVM2_member"
/dev/mapper/vg_kvm-lv_root: UUID="a07b967c-71a0-4925-ab02-aebcad2ae824"
TYPE="ext4"
/dev/mapper/vg_kvm-lv_swap: UUID="d7ef54ca-9c41-4de4-ac1b-4193b0c1ddb6"
TYPE="swap"

By default, the blkid command lists all available block devices. To display information about a particular
device only, specify the device name on the command line:

blkid device_name

For instance, to display information about /dev/vda1, type:

~]# blkid /dev/vda1


/dev/vda1: UUID="7fa9c421-0054-4555-b0ca-b470a97a3d84" TYPE="ext4"


You can also use the above command with the -p and -o udev command line options to obtain more
detailed information. Note that root privileges are required to run this command:

blkid -po udev device_name

For example:

~]# blkid -po udev /dev/vda1


ID_FS_UUID=7fa9c421-0054-4555-b0ca-b470a97a3d84
ID_FS_UUID_ENC=7fa9c421-0054-4555-b0ca-b470a97a3d84
ID_FS_VERSION=1.0
ID_FS_TYPE=ext4
ID_FS_USAGE=filesystem

For a complete list of available command line options, refer to the blkid(8) manual page.

17.4.3. Using the findmnt Command


The findmnt command allows you to display a list of currently mounted file systems. To do so, type the
following at a shell prompt:

findmnt

For each listed file system, the findmnt command displays the target mount point (TARGET), source
device (SOURCE), file system type (FSTYPE), and relevant mount options (OPTIONS). For example:

~]$ findmnt
TARGET SOURCE FSTYPE OPTIONS
/ /dev/mapper/vg_kvm-lv_root ext4 rw,relatime,sec
|-/proc /proc proc rw,relatime
| |-/proc/bus/usb /proc/bus/usb usbfs rw,relatime
| `-/proc/sys/fs/binfmt_misc binfmt_m rw,relatime
|-/sys /sys sysfs rw,relatime,sec
|-/selinux selinuxf rw,relatime
|-/dev udev devtmpfs rw,relatime,sec
| `-/dev udev devtmpfs rw,relatime,sec
| |-/dev/pts devpts devpts rw,relatime,sec
| `-/dev/shm tmpfs tmpfs rw,relatime,sec
|-/boot /dev/vda1 ext4 rw,relatime,sec
|-/var/lib/nfs/rpc_pipefs sunrpc rpc_pipe rw,relatime
|-/misc /etc/auto.misc autofs rw,relatime,fd=
`-/net -hosts autofs rw,relatime,fd=
[output truncated]

By default, findmnt lists file systems in a tree-like format. To display the information as an ordinary list,
add the -l command line option:

findmnt -l

For instance:


~]$ findmnt -l
TARGET SOURCE FSTYPE OPTIONS
/proc /proc proc rw,relatime
/sys /sys sysfs rw,relatime,seclabe
/dev udev devtmpfs rw,relatime,seclabe
/dev/pts devpts devpts rw,relatime,seclabe
/dev/shm tmpfs tmpfs rw,relatime,seclabe
/ /dev/mapper/vg_kvm-lv_root ext4 rw,relatime,seclabe
/selinux selinuxf rw,relatime
/dev udev devtmpfs rw,relatime,seclabe
/proc/bus/usb /proc/bus/usb usbfs rw,relatime
/boot /dev/vda1 ext4 rw,relatime,seclabe
/proc/sys/fs/binfmt_misc binfmt_m rw,relatime
/var/lib/nfs/rpc_pipefs sunrpc rpc_pipe rw,relatime
/misc /etc/auto.misc autofs rw,relatime,fd=7,pg
/net -hosts autofs rw,relatime,fd=13,p
[output truncated]

You can also choose to list only file systems of a particular type. To do so, add the -t command line
option followed by a file system type:

findmnt -t type

For example, to list all ext4 file systems, type:

~]$ findmnt -t ext4


TARGET SOURCE FSTYPE OPTIONS
/ /dev/mapper/vg_kvm-lv_root ext4 rw,relatime,seclabel,barrier=1,data=ord
/boot /dev/vda1 ext4 rw,relatime,seclabel,barrier=1,data=ord

For a complete list of available command line options, refer to the findmnt(8) manual page.

17.4.4. Using the df Command


The df command allows you to display a detailed report on the system's disk space usage. To do so,
type the following at a shell prompt:

df

For each listed file system, the df command displays its name (Filesystem), size (1K-blocks or
Size), how much space is used (Used), how much space is still available (Available), the percentage
of space usage (Use%), and where the file system is mounted (Mounted on). For example:

~]$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_kvm-lv_root 18618236 4357360 13315112 25% /
tmpfs 380376 288 380088 1% /dev/shm
/dev/vda1 495844 77029 393215 17% /boot

By default, the df command shows the partition size in 1 kilobyte blocks and the amount of used and
available disk space in kilobytes. To view the information in megabytes and gigabytes, supply the -h
command line option, which causes df to display the values in a human-readable format:

df -h


For instance:

~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_kvm-lv_root 18G 4.2G 13G 25% /
tmpfs 372M 288K 372M 1% /dev/shm
/dev/vda1 485M 76M 384M 17% /boot

For a complete list of available command line options, refer to the df(1) manual page.

17.4.5. Using the du Command


The du command allows you to display the amount of space that is being used by files in a directory. To
display the disk usage for each of the subdirectories in the current working directory, run the command
with no additional command line options:

du

For example:

~]$ du
14972 ./Downloads
4 ./.gnome2
4 ./.mozilla/extensions
4 ./.mozilla/plugins
12 ./.mozilla
15004 .

By default, the du command displays the disk usage in kilobytes. To view the information in megabytes
and gigabytes, supply the -h command line option, which causes the utility to display the values in a
human-readable format:

du -h

For instance:

~]$ du -h
15M ./Downloads
4.0K ./.gnome2
4.0K ./.mozilla/extensions
4.0K ./.mozilla/plugins
12K ./.mozilla
15M .

At the end of the list, the du command always shows the grand total for the current directory. To display
only this information, supply the -s command line option:

du -sh

For example:

~]$ du -sh
15M .

For a complete list of available command line options, refer to the du(1) manual page.


17.4.6. Using the System Monitor Tool


The File Systems tab of the System Monitor tool allows you to view file systems and disk space
usage in the graphical user interface.

To start the System Monitor tool, either select Applications → System Tools → System Monitor from
the panel, or type gnome-system-monitor at a shell prompt. Then click the File Systems tab to
view a list of file systems.

Figure 17.4. System Monitor — File Systems

For each listed file system, the System Monitor tool displays the source device (Device), target mount
point (Directory), and file system type (Type), as well as its size (Total) and how much space is
free (Free), available (Available), and used (Used).

17.5. Viewing Hardware Information

17.5.1. Using the lspci Command


The lspci command allows you to display information about PCI buses and devices that are attached
to them. To list all PCI devices that are in the system, type the following at a shell prompt:

lspci

This displays a simple list of devices, for example:

~]$ lspci
00:00.0 Host bridge: Intel Corporation 82X38/X48 Express DRAM Controller
00:01.0 PCI bridge: Intel Corporation 82X38/X48 Express Host-Primary PCI Express
Bridge
00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller
#4 (rev 02)
00:1a.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller
#5 (rev 02)
00:1a.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller
#6 (rev 02)
[output truncated]


You can also use the -v command line option to display more verbose output, or -vv for very verbose
output:

lspci -v|-vv

For instance, to determine the manufacturer, model, and memory size of a system's video card, type:

~]$ lspci -v
[output truncated]

01:00.0 VGA compatible controller: nVidia Corporation G84 [Quadro FX 370] (rev
a1) (prog-if 00 [VGA controller])
Subsystem: nVidia Corporation Device 0491
Physical Slot: 2
Flags: bus master, fast devsel, latency 0, IRQ 16
Memory at f2000000 (32-bit, non-prefetchable) [size=16M]
Memory at e0000000 (64-bit, prefetchable) [size=256M]
Memory at f0000000 (64-bit, non-prefetchable) [size=32M]
I/O ports at 1100 [size=128]
Expansion ROM at <unassigned> [disabled]
Capabilities: <access denied>
Kernel driver in use: nouveau
Kernel modules: nouveau, nvidiafb

[output truncated]

For a complete list of available command line options, refer to the lspci(8) manual page.

17.5.2. Using the lsusb Command


The lsusb command allows you to display information about USB buses and devices that are attached
to them. To list all USB devices that are in the system, type the following at a shell prompt:

lsusb

This displays a simple list of devices, for example:

~]$ lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
[output truncated]
Bus 001 Device 002: ID 0bda:0151 Realtek Semiconductor Corp. Mass Storage Device
(Multicard Reader)
Bus 008 Device 002: ID 03f0:2c24 Hewlett-Packard Logitech M-UAL-96 Mouse
Bus 008 Device 003: ID 04b3:3025 IBM Corp.

You can also use the -v command line option to display more verbose output:

lsusb -v

For instance:


~]$ lsusb -v
[output truncated]

Bus 008 Device 002: ID 03f0:2c24 Hewlett-Packard Logitech M-UAL-96 Mouse


Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 2.00
bDeviceClass 0 (Defined at Interface level)
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 8
idVendor 0x03f0 Hewlett-Packard
idProduct 0x2c24 Logitech M-UAL-96 Mouse
bcdDevice 31.00
iManufacturer 1
iProduct 2
iSerial 0
bNumConfigurations 1
Configuration Descriptor:
bLength 9
bDescriptorType 2
[output truncated]

For a complete list of available command line options, refer to the lsusb(8) manual page.

17.5.3. Using the lspcmcia Command


The lspcmcia command allows you to list all PCMCIA devices that are present in the system. To do so,
type the following at a shell prompt:

lspcmcia

For example:

~]$ lspcmcia
Socket 0 Bridge: [yenta_cardbus] (bus ID: 0000:15:00.0)

You can also use the -v command line option to display more verbose information, or -vv to increase
the verbosity level even further:

lspcmcia -v|-vv

For instance:

~]$ lspcmcia -v
Socket 0 Bridge: [yenta_cardbus] (bus ID: 0000:15:00.0)
Configuration: state: on ready: unknown

For a complete list of available command line options, refer to the pccardctl(8) manual page.

17.5.4. Using the lscpu Command


The lscpu command allows you to list information about CPUs that are present in the system, including
the number of CPUs, their architecture, vendor, family, model, CPU caches, etc. To do so, type the
following at a shell prompt:


lscpu

For example:

~]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 23
Stepping: 7
CPU MHz: 1998.000
BogoMIPS: 4999.98
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 3072K
NUMA node0 CPU(s): 0-3

For a complete list of available command line options, refer to the lscpu(1) manual page.

17.6. Monitoring Performance with Net-SNMP


Red Hat Enterprise Linux 7 includes the Net-SNMP software suite, which includes a flexible and
extensible Simple Network Management Protocol (SNMP) agent. This agent and its associated utilities
can be used to provide performance data from a large number of systems to a variety of tools which
support polling over the SNMP protocol.

This section provides information on configuring the Net-SNMP agent to securely provide performance
data over the network, retrieving the data using the SNMP protocol, and extending the SNMP agent to
provide custom performance metrics.

17.6.1. Installing Net-SNMP


The Net-SNMP software suite is available as a set of RPM packages in the Red Hat Enterprise Linux
software distribution. Table 17.2, “Available Net-SNMP packages” summarizes each of the packages and
their contents.


Table 17.2. Available Net-SNMP packages

Package Provides
net-snmp The SNMP Agent Daemon and documentation. This package is required for
exporting performance data.
net-snmp-libs The netsnmp library and the bundled management information bases
(MIBs). This package is required for exporting performance data.
net-snmp-utils SNMP clients such as snmpget and snmpwalk. This package is required
in order to query a system's performance data over SNMP.
net-snmp-perl The mib2c utility and the NetSNMP Perl module.
net-snmp-python An SNMP client library for Python.

To install any of these packages, use the yum command in the following form:

yum install package…

For example, to install the SNMP Agent Daemon and SNMP clients used in the rest of this section, type
the following at a shell prompt:

~]# yum install net-snmp net-snmp-libs net-snmp-utils

Note that you must have superuser privileges (that is, you must be logged in as root) to run this
command. For more information on how to install new packages in Red Hat Enterprise Linux, refer to
Section 5.2.4, “Installing Packages”.

17.6.2. Running the Net-SNMP Daemon


The net-snmp package contains snmpd, the SNMP Agent Daemon. This section provides information on
how to start, stop, and restart the snmpd service, and shows how to enable it in a particular runlevel. For
more information on the concept of runlevels and how to manage system services in Red Hat Enterprise
Linux in general, refer to Chapter 7, Managing Services with systemd.

17.6.2.1. Starting the Service


To run the snmpd service in the current session, type the following at a shell prompt as root:

systemctl start snmpd.service

To configure the service to be automatically started at boot time, use the following command:

systemctl enable snmpd.service

17.6.2.2. Stopping the Service


To stop the running snmpd service, type the following at a shell prompt as root:

systemctl stop snmpd.service

To disable starting the service at boot time, use the following command:

systemctl disable snmpd.service


17.6.2.3. Restarting the Service


To restart the running snmpd service, type the following at a shell prompt:

systemctl restart snmpd.service

This command stops the service and starts it again in quick succession. To only reload the configuration
without stopping the service, run the following command instead:

systemctl reload snmpd.service

This causes the running snmpd service to reload its configuration.

17.6.3. Configuring Net-SNMP


To change the Net-SNMP Agent Daemon configuration, edit the /etc/snmp/snmpd.conf
configuration file. The default snmpd.conf file shipped with Red Hat Enterprise Linux 7 is heavily
commented and serves as a good starting point for agent configuration.

This section focuses on two common tasks: setting system information and configuring authentication.
For more information about available configuration directives, refer to the snmpd.conf(5) manual page.
Additionally, there is a utility in the net-snmp package named snmpconf which can be used to
interactively generate a valid agent configuration.
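
For example, the following snmpconf invocation, one of the usages described in the Net-SNMP
documentation, walks through the most common agent configuration settings interactively:

snmpconf -g basic_setup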

Note that the net-snmp-utils package must be installed in order to use the snmpwalk utility described in
this section.

Applying the changes

For any changes to the configuration file to take effect, force the snmpd service to re-read the
configuration by running the following command as root:

systemctl reload snmpd.service

17.6.3.1. Setting System Information


Net-SNMP provides some rudimentary system information via the system tree. For example, the
following snmpwalk command shows the system tree with a default agent configuration.

~]# snmpwalk -v2c -c public localhost system


SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 2.6.32-122.el6.x86_64
#1 SMP Wed Mar 9 23:54:34 EST 2011 x86_64
SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (99554) 0:16:35.54
SNMPv2-MIB::sysContact.0 = STRING: Root <root@localhost> (configure
/etc/snmp/snmp.local.conf)
SNMPv2-MIB::sysName.0 = STRING: localhost.localdomain
SNMPv2-MIB::sysLocation.0 = STRING: Unknown (edit /etc/snmp/snmpd.conf)

By default, the sysName object is set to the hostname. The sysLocation and sysContact objects
can be configured in the /etc/snmp/snmpd.conf file by changing the value of the syslocation and
syscontact directives, for example:

syslocation Datacenter, Row 3, Rack 2


syscontact UNIX Admin <admin@example.com>

After making changes to the configuration file, reload the configuration and test it by running the
snmpwalk command again:

~]# systemctl reload snmpd.service


~]# snmpwalk -v2c -c public localhost system
SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 2.6.32-122.el6.x86_64
#1 SMP Wed Mar 9 23:54:34 EST 2011 x86_64
SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (158357) 0:26:23.57
SNMPv2-MIB::sysContact.0 = STRING: UNIX Admin <admin@example.com>
SNMPv2-MIB::sysName.0 = STRING: localhost.localdomain
SNMPv2-MIB::sysLocation.0 = STRING: Datacenter, Row 3, Rack 2

17.6.3.2. Configuring Authentication


The Net-SNMP Agent Daemon supports all three versions of the SNMP protocol. The first two versions
(1 and 2c) provide for simple authentication using a community string. This string is a shared secret
between the agent and any client utilities. However, the string is passed in clear text over the network
and is not considered secure. Version 3 of the SNMP protocol supports user authentication and message
encryption using a variety of protocols. The Net-SNMP agent also supports tunneling over SSH, TLS
authentication with X.509 certificates, and Kerberos authentication.

Configuring SNMP Version 2c Community


To configure an SNMP version 2c community, use either the rocommunity or rwcommunity
directive in the /etc/snmp/snmpd.conf configuration file. The format of the directives is the following:

directive community [source [OID]]

… where community is the community string to use, source is an IP address or subnet, and OID is the
SNMP tree to provide access to. For example, the following directive provides read-only access to the
system tree to a client using the community string “redhat” on the local machine:

rocommunity redhat 127.0.0.1 .1.3.6.1.2.1.1

To test the configuration, use the snmpwalk command with the -v and -c options.

~]# snmpwalk -v2c -c redhat localhost system


SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 2.6.32-122.el6.x86_64
#1 SMP Wed Mar 9 23:54:34 EST 2011 x86_64
SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (158357) 0:26:23.57
SNMPv2-MIB::sysContact.0 = STRING: UNIX Admin <admin@example.com>
SNMPv2-MIB::sysName.0 = STRING: localhost.localdomain
SNMPv2-MIB::sysLocation.0 = STRING: Datacenter, Row 3, Rack 2
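
Similarly, the rwcommunity directive grants read-write access and uses the same format. As an
illustration only (the subnet shown here is hypothetical), the following line would allow clients on the
192.168.1.0/24 network to modify objects under the system tree using the community string “redhat”:

rwcommunity redhat 192.168.1.0/24 .1.3.6.1.2.1.1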

Configuring SNMP Version 3 User


To configure an SNMP version 3 user, use the net-snmp-create-v3-user command. This
command adds entries to the /var/lib/net-snmp/snmpd.conf and /etc/snmp/snmpd.conf
files which create the user and grant access to the user. Note that the net-snmp-create-v3-user
command may only be run when the agent is not running. The following example creates the “admin”
user with the password “redhatsnmp”:

~]# systemctl stop snmpd.service


~]# net-snmp-create-v3-user
Enter a SNMPv3 user name to create:
admin
Enter authentication pass-phrase:
redhatsnmp
Enter encryption pass-phrase:
[press return to reuse the authentication pass-phrase]

adding the following line to /var/lib/net-snmp/snmpd.conf:


createUser admin MD5 "redhatsnmp" DES
adding the following line to /etc/snmp/snmpd.conf:
rwuser admin
~]# systemctl start snmpd.service

The rwuser directive (or rouser when the -ro command line option is supplied) that
net-snmp-create-v3-user adds to /etc/snmp/snmpd.conf has a similar format to the rwcommunity and
rocommunity directives:

directive user [noauth|auth|priv] [OID]

… where user is a username and OID is the SNMP tree to provide access to. By default, the Net-SNMP
Agent Daemon allows only authenticated requests (the auth option). The noauth option allows you to
permit unauthenticated requests, and the priv option enforces the use of encryption. The authpriv
option specifies that requests must be authenticated and replies should be encrypted.

For example, the following line grants the user “admin” read-write access to the entire tree:

rwuser admin authpriv .1

To test the configuration, create a .snmp directory in your user's home directory and a configuration file
named snmp.conf in that directory (~/.snmp/snmp.conf) with the following lines:

defVersion 3
defSecurityLevel authPriv
defSecurityName admin
defPassphrase redhatsnmp

The snmpwalk command will now use these authentication settings when querying the agent:

~]$ snmpwalk -v3 localhost system


SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 2.6.32-122.el6.x86_64
#1 SMP Wed Mar 9 23:54:34 EST 2011 x86_64
[output truncated]

17.6.4. Retrieving Performance Data over SNMP


The Net-SNMP Agent in Red Hat Enterprise Linux provides a wide variety of performance information
over the SNMP protocol. In addition, the agent can be queried for a listing of the installed RPM packages
on the system, a listing of currently running processes on the system, or the network configuration of the
system.

This section provides an overview of OIDs related to performance tuning available over SNMP. It
assumes that the net-snmp-utils package is installed and that the user is granted access to the SNMP
tree as described in Section 17.6.3.2, “Configuring Authentication”.

17.6.4.1. Hardware Configuration


The Host Resources MIB included with Net-SNMP presents information about the current hardware
and software configuration of a host to a client utility. Table 17.3, “Available OIDs” summarizes the
different OIDs available under that MIB.

Table 17.3. Available OIDs

OID                                     Description
HOST-RESOURCES-MIB::hrSystem            Contains general system information such as
                                        uptime, number of users, and number of running
                                        processes.
HOST-RESOURCES-MIB::hrStorage           Contains data on memory and file system usage.
HOST-RESOURCES-MIB::hrDevices           Contains a listing of all processors, network
                                        devices, and file systems.
HOST-RESOURCES-MIB::hrSWRun             Contains a listing of all running processes.
HOST-RESOURCES-MIB::hrSWRunPerf         Contains memory and CPU statistics on the
                                        process table from HOST-RESOURCES-MIB::hrSWRun.
HOST-RESOURCES-MIB::hrSWInstalled       Contains a listing of the RPM database.

There are also a number of SNMP tables available in the Host Resources MIB which can be used to
retrieve a summary of the available information. The following example displays
HOST-RESOURCES-MIB::hrFSTable:

~]$ snmptable -Cb localhost HOST-RESOURCES-MIB::hrFSTable


SNMP table: HOST-RESOURCES-MIB::hrFSTable

Index MountPoint RemoteMountPoint Type


Access Bootable StorageIndex LastFullBackupDate LastPartialBackupDate
1 "/" "" HOST-RESOURCES-TYPES::hrFSLinuxExt2
readWrite true 31 0-1-1,0:0:0.0 0-1-1,0:0:0.0
5 "/dev/shm" "" HOST-RESOURCES-TYPES::hrFSOther
readWrite false 35 0-1-1,0:0:0.0 0-1-1,0:0:0.0
6 "/boot" "" HOST-RESOURCES-TYPES::hrFSLinuxExt2
readWrite false 36 0-1-1,0:0:0.0 0-1-1,0:0:0.0

For more information about HOST-RESOURCES-MIB, see the
/usr/share/snmp/mibs/HOST-RESOURCES-MIB.txt file.

17.6.4.2. CPU and Memory Information


Most system performance data is available in the UCD SNMP MIB. The systemStats OID provides a
number of counters around processor usage:

~]$ snmpwalk localhost UCD-SNMP-MIB::systemStats


UCD-SNMP-MIB::ssIndex.0 = INTEGER: 1
UCD-SNMP-MIB::ssErrorName.0 = STRING: systemStats
UCD-SNMP-MIB::ssSwapIn.0 = INTEGER: 0 kB
UCD-SNMP-MIB::ssSwapOut.0 = INTEGER: 0 kB
UCD-SNMP-MIB::ssIOSent.0 = INTEGER: 0 blocks/s
UCD-SNMP-MIB::ssIOReceive.0 = INTEGER: 0 blocks/s
UCD-SNMP-MIB::ssSysInterrupts.0 = INTEGER: 29 interrupts/s
UCD-SNMP-MIB::ssSysContext.0 = INTEGER: 18 switches/s
UCD-SNMP-MIB::ssCpuUser.0 = INTEGER: 0
UCD-SNMP-MIB::ssCpuSystem.0 = INTEGER: 0
UCD-SNMP-MIB::ssCpuIdle.0 = INTEGER: 99
UCD-SNMP-MIB::ssCpuRawUser.0 = Counter32: 2278
UCD-SNMP-MIB::ssCpuRawNice.0 = Counter32: 1395
UCD-SNMP-MIB::ssCpuRawSystem.0 = Counter32: 6826
UCD-SNMP-MIB::ssCpuRawIdle.0 = Counter32: 3383736
UCD-SNMP-MIB::ssCpuRawWait.0 = Counter32: 7629
UCD-SNMP-MIB::ssCpuRawKernel.0 = Counter32: 0
UCD-SNMP-MIB::ssCpuRawInterrupt.0 = Counter32: 434
UCD-SNMP-MIB::ssIORawSent.0 = Counter32: 266770
UCD-SNMP-MIB::ssIORawReceived.0 = Counter32: 427302
UCD-SNMP-MIB::ssRawInterrupts.0 = Counter32: 743442
UCD-SNMP-MIB::ssRawContexts.0 = Counter32: 718557
UCD-SNMP-MIB::ssCpuRawSoftIRQ.0 = Counter32: 128
UCD-SNMP-MIB::ssRawSwapIn.0 = Counter32: 0
UCD-SNMP-MIB::ssRawSwapOut.0 = Counter32: 0

In particular, the ssCpuRawUser, ssCpuRawSystem, ssCpuRawWait, and ssCpuRawIdle OIDs
provide counters which are helpful when determining whether a system is spending most of its processor
time in kernel space, user space, or I/O. ssRawSwapIn and ssRawSwapOut can be helpful when
determining whether a system is suffering from memory exhaustion.
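
For example, a targeted snmpget can retrieve just these counters; the following command and output
reuse the OIDs and values shown in the snmpwalk above:

~]$ snmpget localhost UCD-SNMP-MIB::ssCpuRawUser.0 UCD-SNMP-MIB::ssCpuRawSystem.0 UCD-SNMP-MIB::ssCpuRawWait.0 UCD-SNMP-MIB::ssCpuRawIdle.0
UCD-SNMP-MIB::ssCpuRawUser.0 = Counter32: 2278
UCD-SNMP-MIB::ssCpuRawSystem.0 = Counter32: 6826
UCD-SNMP-MIB::ssCpuRawWait.0 = Counter32: 7629
UCD-SNMP-MIB::ssCpuRawIdle.0 = Counter32: 3383736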

More memory information is available under the UCD-SNMP-MIB::memory OID, which provides similar
data to the free command:

~]$ snmpwalk localhost UCD-SNMP-MIB::memory


UCD-SNMP-MIB::memIndex.0 = INTEGER: 0
UCD-SNMP-MIB::memErrorName.0 = STRING: swap
UCD-SNMP-MIB::memTotalSwap.0 = INTEGER: 1023992 kB
UCD-SNMP-MIB::memAvailSwap.0 = INTEGER: 1023992 kB
UCD-SNMP-MIB::memTotalReal.0 = INTEGER: 1021588 kB
UCD-SNMP-MIB::memAvailReal.0 = INTEGER: 634260 kB
UCD-SNMP-MIB::memTotalFree.0 = INTEGER: 1658252 kB
UCD-SNMP-MIB::memMinimumSwap.0 = INTEGER: 16000 kB
UCD-SNMP-MIB::memBuffer.0 = INTEGER: 30760 kB
UCD-SNMP-MIB::memCached.0 = INTEGER: 216200 kB
UCD-SNMP-MIB::memSwapError.0 = INTEGER: noError(0)
UCD-SNMP-MIB::memSwapErrorMsg.0 = STRING:

Load averages are also available in the UCD SNMP MIB. The SNMP table UCD-SNMP-MIB::laTable
has a listing of the 1, 5, and 15 minute load averages:

~]$ snmptable localhost UCD-SNMP-MIB::laTable


SNMP table: UCD-SNMP-MIB::laTable

laIndex laNames laLoad laConfig laLoadInt laLoadFloat laErrorFlag laErrMessage


1 Load-1 0.00 12.00 0 0.000000 noError
2 Load-5 0.00 12.00 0 0.000000 noError
3 Load-15 0.00 12.00 0 0.000000 noError

17.6.4.3. File System and Disk Information


The Host Resources MIB provides information on file system size and usage. Each file system (and
also each memory pool) has an entry in the HOST-RESOURCES-MIB::hrStorageTable table:

~]$ snmptable -Cb localhost HOST-RESOURCES-MIB::hrStorageTable


SNMP table: HOST-RESOURCES-MIB::hrStorageTable

Index Type Descr


AllocationUnits Size Used AllocationFailures
1 HOST-RESOURCES-TYPES::hrStorageRam Physical memory
1024 Bytes 1021588 388064 ?
3 HOST-RESOURCES-TYPES::hrStorageVirtualMemory Virtual memory
1024 Bytes 2045580 388064 ?
6 HOST-RESOURCES-TYPES::hrStorageOther Memory buffers
1024 Bytes 1021588 31048 ?
7 HOST-RESOURCES-TYPES::hrStorageOther Cached memory
1024 Bytes 216604 216604 ?
10 HOST-RESOURCES-TYPES::hrStorageVirtualMemory Swap space
1024 Bytes 1023992 0 ?
31 HOST-RESOURCES-TYPES::hrStorageFixedDisk /
4096 Bytes 2277614 250391 ?
35 HOST-RESOURCES-TYPES::hrStorageFixedDisk /dev/shm
4096 Bytes 127698 0 ?
36 HOST-RESOURCES-TYPES::hrStorageFixedDisk /boot
1024 Bytes 198337 26694 ?

The OIDs under HOST-RESOURCES-MIB::hrStorageSize and HOST-RESOURCES-MIB::hrStorageUsed
can be used to calculate the remaining capacity of each mounted file system.
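
For example, assuming the storage index 31 shown for the / file system in the table above, both values
can be retrieved with snmpget; the remaining capacity is then the difference between hrStorageSize and
hrStorageUsed multiplied by the allocation unit size (4096 bytes for this file system):

~]$ snmpget localhost HOST-RESOURCES-MIB::hrStorageSize.31 HOST-RESOURCES-MIB::hrStorageUsed.31
HOST-RESOURCES-MIB::hrStorageSize.31 = INTEGER: 2277614
HOST-RESOURCES-MIB::hrStorageUsed.31 = INTEGER: 250391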

I/O data is available both in UCD-SNMP-MIB::systemStats (ssIORawSent.0 and
ssIORawReceived.0) and in UCD-DISKIO-MIB::diskIOTable. The latter provides much more
granular data. Under this table are OIDs for diskIONReadX and diskIONWrittenX, which provide
counters for the number of bytes read from and written to the block device in question since the system
boot:

~]$ snmptable -Cb localhost UCD-DISKIO-MIB::diskIOTable


SNMP table: UCD-DISKIO-MIB::diskIOTable

Index Device NRead NWritten Reads Writes LA1 LA5 LA15 NReadX NWrittenX
...
25 sda 216886272 139109376 16409 4894 ? ? ? 216886272 139109376
26 sda1 2455552 5120 613 2 ? ? ? 2455552 5120
27 sda2 1486848 0 332 0 ? ? ? 1486848 0
28 sda3 212321280 139104256 15312 4871 ? ? ? 212321280 139104256

17.6.4.4. Network Information


The Interfaces MIB provides information on network devices. IF-MIB::ifTable provides an
SNMP table with an entry for each interface on the system, the configuration of the interface, and various
packet counters for the interface. The following example shows the first few columns of ifTable on a
system with two physical network interfaces:

~]$ snmptable -Cb localhost IF-MIB::ifTable


SNMP table: IF-MIB::ifTable

Index Descr Type Mtu Speed PhysAddress AdminStatus


1 lo softwareLoopback 16436 10000000 up
2 eth0 ethernetCsmacd 1500 0 52:54:0:c7:69:58 up
3 eth1 ethernetCsmacd 1500 0 52:54:0:a7:a3:24 down

Network traffic is available under the OIDs IF-MIB::ifOutOctets and IF-MIB::ifInOctets. The
following SNMP queries will retrieve network traffic for each of the interfaces on this system:

~]$ snmpwalk localhost IF-MIB::ifDescr


IF-MIB::ifDescr.1 = STRING: lo
IF-MIB::ifDescr.2 = STRING: eth0
IF-MIB::ifDescr.3 = STRING: eth1
~]$ snmpwalk localhost IF-MIB::ifOutOctets
IF-MIB::ifOutOctets.1 = Counter32: 10060699
IF-MIB::ifOutOctets.2 = Counter32: 650
IF-MIB::ifOutOctets.3 = Counter32: 0
~]$ snmpwalk localhost IF-MIB::ifInOctets
IF-MIB::ifInOctets.1 = Counter32: 10060699
IF-MIB::ifInOctets.2 = Counter32: 78650
IF-MIB::ifInOctets.3 = Counter32: 0

17.6.5. Extending Net-SNMP


The Net-SNMP Agent can be extended to provide application metrics in addition to raw system metrics.
This allows for capacity planning as well as performance issue troubleshooting. For example, it may be
helpful to know that an email system had a 5-minute load average of 15 while being tested, but it is more
helpful to know that the email system has a load average of 15 while processing 80,000 messages a
second. When application metrics are available via the same interface as the system metrics, this also
allows for the visualization of the impact of different load scenarios on system performance (for example,
each additional 10,000 messages increases the load average linearly until 100,000).

A number of the applications that ship with Red Hat Enterprise Linux extend the Net-SNMP Agent to
provide application metrics over SNMP. There are several ways to extend the agent for custom
applications as well. This section describes extending the agent with shell scripts and Perl plug-ins. It
assumes that the net-snmp-utils and net-snmp-perl packages are installed, and that the user is granted
access to the SNMP tree as described in Section 17.6.3.2, “Configuring Authentication”.

17.6.5.1. Extending Net-SNMP with Shell Scripts


The Net-SNMP Agent provides an extension MIB (NET-SNMP-EXTEND-MIB) that can be used to query
arbitrary shell scripts. To specify the shell script to run, use the extend directive in the
/etc/snmp/snmpd.conf file. Once defined, the Agent will provide the exit code and any output of the
command over SNMP. The example below demonstrates this mechanism with a script which determines
the number of httpd processes in the process table.

Using the proc directive

The Net-SNMP Agent also provides a built-in mechanism for checking the process table via the
proc directive. Refer to the snmpd.conf(5) manual page for more information.
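
As a minimal illustration, assuming the directive syntax documented in snmpd.conf(5), an entry such
as the following would make the agent watch processes named httpd; the results are typically reported
under the UCD-SNMP-MIB::prTable tree:

proc httpd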

The exit code of the following shell script is the number of httpd processes running on the system at a
given point in time:

#!/bin/sh

NUMPIDS=`pgrep httpd | wc -l`

exit $NUMPIDS

To make this script available over SNMP, copy the script to a location on the system path, set the
executable bit, and add an extend directive to the /etc/snmp/snmpd.conf file. The format of the
extend directive is the following:

extend name prog args

… where name is an identifying string for the extension, prog is the program to run, and args are the
arguments to give the program. For instance, if the above shell script is copied to
/usr/local/bin/check_apache.sh, the following directive will add the script to the SNMP tree:

extend httpd_pids /bin/sh /usr/local/bin/check_apache.sh

The script can then be queried at NET-SNMP-EXTEND-MIB::nsExtendObjects:

~]$ snmpwalk localhost NET-SNMP-EXTEND-MIB::nsExtendObjects


NET-SNMP-EXTEND-MIB::nsExtendNumEntries.0 = INTEGER: 1
NET-SNMP-EXTEND-MIB::nsExtendCommand."httpd_pids" = STRING: /bin/sh
NET-SNMP-EXTEND-MIB::nsExtendArgs."httpd_pids" = STRING:
/usr/local/bin/check_apache.sh
NET-SNMP-EXTEND-MIB::nsExtendInput."httpd_pids" = STRING:
NET-SNMP-EXTEND-MIB::nsExtendCacheTime."httpd_pids" = INTEGER: 5
NET-SNMP-EXTEND-MIB::nsExtendExecType."httpd_pids" = INTEGER: exec(1)
NET-SNMP-EXTEND-MIB::nsExtendRunType."httpd_pids" = INTEGER: run-on-read(1)
NET-SNMP-EXTEND-MIB::nsExtendStorage."httpd_pids" = INTEGER: permanent(4)
NET-SNMP-EXTEND-MIB::nsExtendStatus."httpd_pids" = INTEGER: active(1)
NET-SNMP-EXTEND-MIB::nsExtendOutput1Line."httpd_pids" = STRING:
NET-SNMP-EXTEND-MIB::nsExtendOutputFull."httpd_pids" = STRING:
NET-SNMP-EXTEND-MIB::nsExtendOutNumLines."httpd_pids" = INTEGER: 1
NET-SNMP-EXTEND-MIB::nsExtendResult."httpd_pids" = INTEGER: 8
NET-SNMP-EXTEND-MIB::nsExtendOutLine."httpd_pids".1 = STRING:

Note that the exit code (“8” in this example) is provided as an INTEGER type and any output is provided
as a STRING type. To expose multiple metrics as integers, supply different arguments to the script using
the extend directive. For example, the following shell script can be used to determine the number of
processes matching an arbitrary string, and will also output a text string giving the number of processes:

#!/bin/sh

PATTERN=$1
NUMPIDS=`pgrep $PATTERN | wc -l`

echo "There are $NUMPIDS $PATTERN processes."


exit $NUMPIDS

The following /etc/snmp/snmpd.conf directives will give both the number of httpd PIDs as well as
the number of snmpd PIDs when the above script is copied to /usr/local/bin/check_proc.sh:

extend httpd_pids /bin/sh /usr/local/bin/check_proc.sh httpd


extend snmpd_pids /bin/sh /usr/local/bin/check_proc.sh snmpd

The following example shows the output of an snmpwalk of the nsExtendObjects OID:

~]$ snmpwalk localhost NET-SNMP-EXTEND-MIB::nsExtendObjects


NET-SNMP-EXTEND-MIB::nsExtendNumEntries.0 = INTEGER: 2
NET-SNMP-EXTEND-MIB::nsExtendCommand."httpd_pids" = STRING: /bin/sh
NET-SNMP-EXTEND-MIB::nsExtendCommand."snmpd_pids" = STRING: /bin/sh
NET-SNMP-EXTEND-MIB::nsExtendArgs."httpd_pids" = STRING:
/usr/local/bin/check_proc.sh httpd
NET-SNMP-EXTEND-MIB::nsExtendArgs."snmpd_pids" = STRING:
/usr/local/bin/check_proc.sh snmpd
NET-SNMP-EXTEND-MIB::nsExtendInput."httpd_pids" = STRING:
NET-SNMP-EXTEND-MIB::nsExtendInput."snmpd_pids" = STRING:
...
NET-SNMP-EXTEND-MIB::nsExtendResult."httpd_pids" = INTEGER: 8
NET-SNMP-EXTEND-MIB::nsExtendResult."snmpd_pids" = INTEGER: 1
NET-SNMP-EXTEND-MIB::nsExtendOutLine."httpd_pids".1 = STRING: There are 8 httpd
processes.
NET-SNMP-EXTEND-MIB::nsExtendOutLine."snmpd_pids".1 = STRING: There are 1 snmpd
processes.

Integer exit codes are limited

Integer exit codes are limited to a range of 0–255. For values that are likely to exceed 255, either
use the standard output of the script (which will be typed as a string) or a different method of
extending the agent.
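
For example, rather than reading the exit code, the text output of the check_proc.sh script shown above
can be retrieved as a string:

~]$ snmpget localhost 'NET-SNMP-EXTEND-MIB::nsExtendOutLine."httpd_pids".1'
NET-SNMP-EXTEND-MIB::nsExtendOutLine."httpd_pids".1 = STRING: There are 8 httpd
processes.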

This last example shows a query for the free memory of the system and the number of httpd
processes. This query could be used during a performance test to determine the impact of the number of
processes on memory pressure:

~]$ snmpget localhost \


'NET-SNMP-EXTEND-MIB::nsExtendResult."httpd_pids"' \
UCD-SNMP-MIB::memAvailReal.0
NET-SNMP-EXTEND-MIB::nsExtendResult."httpd_pids" = INTEGER: 8
UCD-SNMP-MIB::memAvailReal.0 = INTEGER: 799664 kB

17.6.5.2. Extending Net-SNMP with Perl


Executing shell scripts using the extend directive is a fairly limited method for exposing custom
application metrics over SNMP. The Net-SNMP Agent also provides an embedded Perl interface for
exposing custom objects. The net-snmp-perl package provides the NetSNMP::agent Perl module that
is used to write embedded Perl plug-ins on Red Hat Enterprise Linux.

The NetSNMP::agent Perl module provides an agent object which is used to handle requests for a
part of the agent's OID tree. The agent object's constructor has options for running the agent as a sub-
agent of snmpd or a standalone agent. No arguments are necessary to create an embedded agent:

use NetSNMP::agent (':all');

my $agent = new NetSNMP::agent();

The agent object has a register method which is used to register a callback function with a particular
OID. The register function takes a name, OID, and pointer to the callback function. The following
example will register a callback function named hello_handler with the SNMP Agent which will
handle requests under the OID .1.3.6.1.4.1.8072.9999.9999:

$agent->register("hello_world", ".1.3.6.1.4.1.8072.9999.9999",
\&hello_handler);

Obtaining a root OID

The OID .1.3.6.1.4.1.8072.9999.9999 (NET-SNMP-MIB::netSnmpPlaypen) is
typically used for demonstration purposes only. If your organization does not already have a root
OID, you can obtain one by contacting an ISO Name Registration Authority (ANSI in the United
States).

The handler function will be called with four parameters, HANDLER, REGISTRATION_INFO,
REQUEST_INFO, and REQUESTS. The REQUESTS parameter contains a list of requests in the current call
and should be iterated over and populated with data. The request objects in the list have get and set
methods which allow for manipulating the OID and value of the request. For example, the following call
will set the value of a request object to the string “hello world”:

$request->setValue(ASN_OCTET_STR, "hello world");

The handler function should respond to two types of SNMP requests: the GET request and the
GETNEXT request. The type of request is determined by calling the getMode method on the
request_info object passed as the third parameter to the handler function. If the request is a GET
request, the caller will expect the handler to set the value of the request object, depending on the OID
of the request. If the request is a GETNEXT request, the caller will also expect the handler to set the OID
of the request to the next available OID in the tree. This is illustrated in the following code example:

my $request;
my $string_value = "hello world";
my $integer_value = "8675309";

for($request = $requests; $request; $request = $request->next()) {


my $oid = $request->getOID();
if ($request_info->getMode() == MODE_GET) {
if ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) {
$request->setValue(ASN_OCTET_STR, $string_value);
}
elsif ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.1")) {
$request->setValue(ASN_INTEGER, $integer_value);
}
} elsif ($request_info->getMode() == MODE_GETNEXT) {
if ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) {
$request->setOID(".1.3.6.1.4.1.8072.9999.9999.1.1");
$request->setValue(ASN_INTEGER, $integer_value);
}
elsif ($oid < new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) {
$request->setOID(".1.3.6.1.4.1.8072.9999.9999.1.0");
$request->setValue(ASN_OCTET_STR, $string_value);
}
}
}

When getMode returns MODE_GET, the handler analyzes the value of the getOID call on the request
object. The value of the request is set to either string_value if the OID ends in “.1.0”, or set to
integer_value if the OID ends in “.1.1”. If getMode returns MODE_GETNEXT, the handler
determines whether the OID of the request is “.1.0”, and then sets the OID and value for “.1.1”. If the
request is higher on the tree than “.1.0”, the OID and value for “.1.0” is set. This in effect returns the
“next” value in the tree so that a program like snmpwalk can traverse the tree without prior knowledge
of the structure.

The type of the variable is set using constants from NetSNMP::ASN. See the perldoc for
NetSNMP::ASN for a full list of available constants.

The entire code listing for this example Perl plug-in is as follows:

#!/usr/bin/perl

use NetSNMP::agent (':all');


use NetSNMP::ASN qw(ASN_OCTET_STR ASN_INTEGER);

sub hello_handler {
my ($handler, $registration_info, $request_info, $requests) = @_;
my $request;
my $string_value = "hello world";
my $integer_value = "8675309";

for($request = $requests; $request; $request = $request->next()) {


my $oid = $request->getOID();
if ($request_info->getMode() == MODE_GET) {
if ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) {
$request->setValue(ASN_OCTET_STR, $string_value);
}
elsif ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.1")) {
$request->setValue(ASN_INTEGER, $integer_value);
}
} elsif ($request_info->getMode() == MODE_GETNEXT) {
if ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) {
$request->setOID(".1.3.6.1.4.1.8072.9999.9999.1.1");
$request->setValue(ASN_INTEGER, $integer_value);
}
elsif ($oid < new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) {
$request->setOID(".1.3.6.1.4.1.8072.9999.9999.1.0");
$request->setValue(ASN_OCTET_STR, $string_value);
}
}
}
}

my $agent = new NetSNMP::agent();


$agent->register("hello_world", ".1.3.6.1.4.1.8072.9999.9999",
\&hello_handler);

To test the plug-in, copy the above program to /usr/share/snmp/hello_world.pl and add the
following line to the /etc/snmp/snmpd.conf configuration file:

perl do "/usr/share/snmp/hello_world.pl"

The SNMP Agent Daemon will need to be restarted to load the new Perl plug-in, for example by running
the following command as root:
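
systemctl restart snmpd.service

Once the service has been restarted, an snmpwalk should return the new data: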

~]$ snmpwalk localhost NET-SNMP-MIB::netSnmpPlaypen


NET-SNMP-MIB::netSnmpPlaypen.1.0 = STRING: "hello world"
NET-SNMP-MIB::netSnmpPlaypen.1.1 = INTEGER: 8675309

The snmpget command should also be used to exercise the other mode of the handler:

~]$ snmpget localhost \


NET-SNMP-MIB::netSnmpPlaypen.1.0 \
NET-SNMP-MIB::netSnmpPlaypen.1.1
NET-SNMP-MIB::netSnmpPlaypen.1.0 = STRING: "hello world"
NET-SNMP-MIB::netSnmpPlaypen.1.1 = INTEGER: 8675309

17.7. Additional Resources


To learn more about gathering system information, refer to the following resources.

17.7.1. Installed Documentation


ps(1) — The manual page for the ps command.
top(1) — The manual page for the top command.
free(1) — The manual page for the free command.
df(1) — The manual page for the df command.
du(1) — The manual page for the du command.
lspci(8) — The manual page for the lspci command.
snmpd(8) — The manual page for the snmpd service.
snmpd.conf(5) — The manual page for the /etc/snmp/snmpd.conf file containing full
documentation of available configuration directives.

Chapter 18. OpenLMI


The Open Linux Management Infrastructure, commonly abbreviated as OpenLMI, is a common
infrastructure for the management of Linux systems. It builds on top of existing tools and serves as an
abstraction layer in order to hide much of the complexity of the underlying system from system
administrators. OpenLMI is distributed with a set of services that can be accessed locally or remotely and
provides multiple language bindings, standard APIs, and standard scripting interfaces that can be used
to manage and monitor hardware, operating systems, and system services.

18.1. About OpenLMI


OpenLMI is designed to provide a common management interface to production servers running the
Red Hat Enterprise Linux system on both physical and virtual machines. It consists of the following three
components:

1. System management agents — these agents are installed on a managed system and implement
an object model that is presented to a standard object broker. The initial agents implemented in
OpenLMI include storage configuration and network configuration, but later work will address
additional elements of system management. The system management agents are commonly
referred to as Common Information Model providers or CIM providers.
2. A standard object broker — the object broker manages system management agents and provides
an interface to them. The standard object broker is also known as a CIM Object Manager or
CIMOM.
3. Client applications and scripts — the client applications and scripts call the system management
agents through the standard object broker.

The OpenLMI project complements existing management initiatives by providing a low-level interface
that can be used by scripts or system management consoles. Interfaces distributed with OpenLMI
include C, C++, Python, Java, and an interactive command line client, and all of them offer the same full
access to the capabilities implemented in each agent. This ensures that you always have access to
exactly the same capabilities no matter which programming interface you decide to use.

18.1.1. Main Features


The following are key benefits of installing and using OpenLMI on your system:

OpenLMI provides a standard interface for configuration, management, and monitoring of your local
and remote systems.
It allows you to configure, manage, and monitor production servers running on both physical and
virtual machines.
It is distributed with a collection of CIM providers that allow you to configure, manage, and monitor
storage devices and complex networks.
It allows you to call system management functions from C, C++, Python, and Java programs, and is
distributed with a command line interface.
It is free software based on open industry standards.

18.1.2. Management Capabilities


Key capabilities of OpenLMI include the management of storage devices, networks, system services,
user accounts, hardware and software configuration, power management, and interaction with Active
Directory. For a complete list of CIM providers that are distributed with Red Hat Enterprise Linux 7, see
Table 18.1, “Available CIM Providers”.

Table 18.1. Available CIM Providers

Package Name Description


openlmi-account A CIM provider for managing user accounts.
openlmi-logicalfile A CIM provider for reading files and directories.
openlmi-networking A CIM provider for network management.
openlmi-powermanagement A CIM provider for power management.
openlmi-service A CIM provider for managing system services.
openlmi-storage A CIM provider for storage management.
openlmi-fan A CIM provider for controlling computer fans.
openlmi-hardware A CIM provider for retrieving hardware information.
openlmi-realmd A CIM provider for configuring realmd.
openlmi-software [a] A CIM provider for software management.
[a] In Red Hat Enterprise Linux 7, the OpenLMI Software provider is included as a Technology Preview. This provider is fully functional,
but has a known performance scaling issue where listing large numbers of software packages may consume an excessive amount of memory
and time. To work around this issue, adjust package searches to return as few packages as possible.

18.2. Installing OpenLMI


OpenLMI is distributed as a collection of RPM packages that include the CIMOM, individual CIM
providers, and client applications. This allows you to distinguish between a managed and a client system and
install only those components you need.

18.2.1. Installing OpenLMI on a Managed System


A managed system is the system you intend to monitor and manage by using the OpenLMI client tools.
To install OpenLMI on a managed system, complete the following steps:

1. Install the tog-pegasus package by typing the following at a shell prompt as root:

yum install tog-pegasus

This command installs the OpenPegasus CIMOM and all its dependencies to the system and
creates a user account for the pegasus user.
2. Install required CIM providers by running the following command as root:

yum install openlmi-{storage,networking,service,account,powermanagement}

This command installs the CIM providers for storage, network, service, account, and power
management. For a complete list of CIM providers distributed with Red Hat Enterprise Linux 7, see
Table 18.1, “Available CIM Providers”.
3. Edit the /etc/Pegasus/access.conf configuration file to customize the list of users that are
allowed to connect to the OpenPegasus CIMOM. By default, only the pegasus user is allowed to
access the CIMOM both remotely and locally. To activate this user account, run the following
command as root to set the user's password:

passwd pegasus

4. Start the OpenPegasus CIMOM by activating the tog-pegasus.service unit. To activate the
tog-pegasus.service unit in the current session, type the following at a shell prompt as root:

systemctl start tog-pegasus.service

To configure the tog-pegasus.service unit to start automatically at boot time, type as root:

systemctl enable tog-pegasus.service

5. If you intend to interact with the managed system from a remote machine, enable TCP
communication on port 5989 (wbem-https). To open this port in the current session, run the
following command as root:

firewall-cmd --add-port 5989/tcp

To open port 5989 for TCP communication permanently, type as root:

firewall-cmd --permanent --add-port 5989/tcp
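
Note that rules added with the --permanent option do not affect the running firewall until its
configuration is reloaded. To reload the firewall configuration, type as root:

firewall-cmd --reload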

You can now connect to the managed system and interact with it by using the OpenLMI client tools as
described in Section 18.4, “Using LMIShell”. If you intend to perform OpenLMI operations directly on the
managed system, also complete the steps described in Section 18.2.2, “Installing OpenLMI on a Client
System”.
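
To verify that the CIMOM is running, you can check the status of the tog-pegasus.service unit, for
example:

systemctl status tog-pegasus.service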

18.2.2. Installing OpenLMI on a Client System


A client system is the system from which you intend to interact with the managed system. In a typical
scenario, the client system and the managed system are installed on two separate machines, but you
can also install the client tools on the managed system and interact with it directly.

To install OpenLMI on a client system, complete the following steps:

1. Install the openlmi-tools package by typing the following at a shell prompt as root:

yum install openlmi-tools

This command installs LMIShell, an interactive client and interpreter for accessing CIM objects
provided by OpenPegasus, and all its dependencies to the system.
2. Configure SSL certificates for OpenPegasus as described in Section 18.3, “Configuring SSL
Certificates for OpenPegasus”.

You can now use the LMIShell client to interact with the managed system as described in Section 18.4,
“Using LMIShell”.

18.3. Configuring SSL Certificates for OpenPegasus


OpenLMI uses the Web-Based Enterprise Management (WBEM) protocol and it functions over an HTTP
transport layer. Standard HTTP Basic authentication is performed in this protocol, which means that the
user name and password are transmitted alongside the requests.

Configuring the OpenPegasus CIMOM to use HTTPS for communication is necessary to ensure secure
authentication. A Secure Sockets Layer (SSL) or Transport Layer Security (TLS) certificate is required on
the managed system to establish an encrypted channel.

There are two ways of managing SSL/TLS certificates on a system:

Self-signed certificates require less infrastructure to use, but are more difficult to both deploy to
clients and manage securely.
Authority-signed certificates are easier to deploy to clients once they are set up, but may require a
greater initial investment.

When using an authority-signed certificate, it is necessary to configure a trusted certificate authority on
the client systems. The authority can then be used for signing all of the managed systems' CIMOM
certificates. Certificates can also be part of a certificate chain, so the certificate used for signing the
managed systems' certificates may in turn be signed by another, higher authority (such as Verisign,
CAcert, RSA and many others).

The default certificate and trust store locations on the file system are listed in Table 18.2, “Certificate and
Trust Store Locations”.

Table 18.2. Certificate and Trust Store Locations

Configuration Option    Location                  Description
sslCertificateFilePath  /etc/Pegasus/server.pem   Public certificate of the CIMOM.
sslKeyFilePath          /etc/Pegasus/file.pem     Private key known only to the CIMOM.
sslTrustStore           /etc/Pegasus/client.pem   The file or directory providing the list of
                                                  trusted certificate authorities.

Important

If you modify any of the files mentioned in Table 18.2, “Certificate and Trust Store Locations”,
restart the tog-pegasus service to make sure it recognizes the new certificates. To restart the
service, type the following at a shell prompt as root:

systemctl restart tog-pegasus.service

For more information on how to manage system services in Red Hat Enterprise Linux 7, see
Chapter 7, Managing Services with systemd.

18.3.1. Managing Self-signed Certificates


A self-signed certificate uses its own private key to sign itself and it is not connected to any chain of trust.
On a managed system, if certificates have not been provided by the administrator prior to the first time
that the tog-pegasus service is started, a set of self-signed certificates will be automatically generated
using the system's primary hostname as the certificate subject.

Important

The automatically generated self-signed certificates are valid by default for 10 years, but they
have no automatic-renewal capability. Any modification to these certificates will require manually
creating new certificates following guidelines provided by the OpenSSL or Mozilla NSS
documentation on the subject.

To configure client systems to trust the self-signed certificate, complete the following steps:

1. Copy the /etc/Pegasus/client.pem certificate from the managed system to the
/etc/pki/ca-trust/source/anchors/ directory on the client system. To do so, type the
following at a shell prompt as root:

scp root@hostname:/etc/Pegasus/client.pem /etc/pki/ca-trust/source/anchors/pegasus-hostname.pem

Replace hostname with the host name of the managed system. Note that this command only
works if the sshd service is running on the managed system and is configured to allow the root
user to log in to the system over the SSH protocol. For more information on how to install and
configure the sshd service and use the scp command to transfer files over the SSH protocol, see
Chapter 8, OpenSSH.
2. Verify the integrity of the certificate on the client system by comparing its checksum with the
checksum of the original file. To calculate the checksum of the /etc/Pegasus/client.pem file
on the managed system, run the following command as root on that system:

sha1sum /etc/Pegasus/client.pem

To calculate the checksum of the /etc/pki/ca-trust/source/anchors/pegasus-hostname.pem
file on the client system, run the following command on this system:

sha1sum /etc/pki/ca-trust/source/anchors/pegasus-hostname.pem

Replace hostname with the host name of the managed system.


3. Update the trust store by running the following command as root:

update-ca-trust extract

18.3.2. Managing Authority-signed Certificates with Identity Management (Recommended)


The Identity Management feature of Red Hat Enterprise Linux provides a domain controller which simplifies
the management of SSL certificates within systems joined to the domain. Among others, the Identity
Management server provides an embedded Certificate Authority. Refer to the Red Hat Enterprise Linux 7
Linux Domain Identity Management Guide or the FreeIPA documentation for information on how to join
the client and managed systems to the domain.

It is necessary to register the managed system to Identity Management; for client systems the
registration is optional.

The following steps are required on the managed system:

1. Install the ipa-client package and register the system to Identity management as described in the
Red Hat Enterprise Linux 7 Linux Domain Identity Management Guide.
2. Copy the Identity Management signing certificate to the trusted store by typing the following
command as root:

cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt

3. Update the trust store by running the following command as root:

update-ca-trust extract

4. Register Pegasus as a service in the Identity Management domain by running the following
command as a privileged domain user:

ipa service-add CIMOM/hostname

Replace hostname with the host name of the managed system.


This command can be run from any system in the Identity Management domain that has the
ipa-admintools package installed. It creates a service entry in Identity Management that can be used
to generate signed SSL certificates.
5. Back up the PEM files located in the /etc/Pegasus/ directory (recommended).
6. Retrieve the signed certificate by running the following command as root:

ipa-getcert request -f /etc/Pegasus/server.pem -k /etc/Pegasus/file.pem -N CN=hostname -K CIMOM/hostname

Replace hostname with the host name of the managed system.


The certificate and key files are now kept in proper locations. The certmonger daemon installed
on the managed system by the ipa-client-install script ensures that the certificate is kept
up-to-date and renewed as necessary.
For more information, refer to the Red Hat Enterprise Linux 7 Linux Domain Identity Management
Guide.

To register the client system and update the trust store, follow the steps below.

1. Install the ipa-client package and register the system to Identity management as described in the
Red Hat Enterprise Linux 7 Linux Domain Identity Management Guide.
2. Copy the Identity Management signing certificate to the trusted store by typing the following
command as root:

cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt

3. Update the trust store by running the following command as root:

update-ca-trust extract

If the client system is not meant to be registered in Identity Management, complete the following steps to
update the trust store.

1. Copy the /etc/ipa/ca.crt file securely from any other system joined to the same Identity
Management domain to the /etc/pki/ca-trust/source/anchors/ trusted store directory as
root.
2. Update the trust store by running the following command as root:

update-ca-trust extract

18.3.3. Managing Authority-signed Certificates Manually


Managing authority-signed certificates with mechanisms other than Identity Management requires more
manual configuration.

It is necessary to ensure that all of the clients trust the certificate of the authority that will be signing the
managed system certificates:

In case of a certificate authority trusted by default, it is not necessary to perform any particular steps
to accomplish this.
If the certificate authority is not trusted by default, the certificate has to be imported both on the client
and managed systems.
1. Copy the certificate to the trusted store by typing the following command as root:

cp /path/to/ca.crt /etc/pki/ca-trust/source/anchors/ca.crt

2. Update the trust store by running the following command as root:

update-ca-trust extract

On the managed system, complete the following steps:

1. Create an SSL configuration file, which will store information about the certificate. The
/etc/Pegasus/ssl.cnf file should be modified similarly to the following example:

[ req ]
distinguished_name = req_distinguished_name
prompt = no
[ req_distinguished_name ]
C = US
ST = Massachusetts
L = Westford
O = Fedora
OU = Fedora OpenLMI
CN = hostname

Replace hostname with the fully qualified domain name of the managed system.
2. Generate a private key on the managed system by using the following command as root:

openssl genrsa -out /etc/Pegasus/file.pem 1024

3. Generate a certificate signing request (CSR) by running this command as root:

openssl req -config /etc/Pegasus/ssl.cnf -new -key /etc/Pegasus/file.pem -out /etc/Pegasus/server.csr

4. Send the /etc/Pegasus/server.csr file to the certificate authority for signing. The detailed
procedure of submitting the file depends on the particular certificate authority.
5. When the signed certificate is received from the certificate authority, save it as
/etc/Pegasus/server.pem.
6. Copy the certificate of the trusted authority to the Pegasus trust store to make sure that Pegasus is
capable of trusting its own certificate by running as root:

cp /path/to/ca.crt /etc/Pegasus/client.pem

After accomplishing all the described steps, the clients that trust the signing authority are able to
successfully communicate with the managed server's CIMOM.
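
As an optional sanity check, assuming the certificate authority's certificate was copied to
/etc/Pegasus/client.pem as described in the last step, the signed server certificate can be verified
against it with OpenSSL:

openssl verify -CAfile /etc/Pegasus/client.pem /etc/Pegasus/server.pem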

Important

Unlike the Identity Management solution, if the certificate expires and needs to be renewed, all of
the described manual steps have to be carried out again. It is recommended to renew the
certificates before they expire.

18.4. Using LMIShell


LMIShell is an interactive client and non-interactive interpreter that can be used to access CIM objects
provided by the OpenPegasus CIMOM. It is based on the Python interpreter, but also implements
additional functions and classes for interacting with CIM objects.

18.4.1. Starting, Using, and Exiting LMIShell


Similarly to the Python interpreter, you can use LMIShell either as an interactive client, or as a non-
interactive interpreter for LMIShell scripts.

Starting LMIShell in Interactive Mode


To start the LMIShell interpreter in interactive mode, run the lmishell command with no additional
arguments:

lmishell

By default, when LMIShell attempts to establish a connection with a CIMOM, it validates the server-side
certificate against the Certification Authorities trust store. To disable this validation, run the lmishell
command with the --noverify or -n command line option:

lmishell --noverify

Using Tab Completion


When running in interactive mode, the LMIShell interpreter allows you to press the Tab key to complete
basic programming structures and CIM objects, including namespaces, classes, methods, and object
properties.

Browsing History
By default, LMIShell stores all commands you type at the interactive prompt in the
~/.lmishell_history file. This allows you to browse the command history and re-use already
entered lines in interactive mode without the need to type them at the prompt again. To move backward
in the command history, press the Up Arrow key or the Ctrl+p key combination. To move forward in
the command history, press the Down Arrow key or the Ctrl+n key combination.

LMIShell also supports an incremental reverse search. To look for a particular line in the command
history, press Ctrl+r and start typing any part of the command. For example:

> (reverse-i-search)`connect': c = connect("server.example.com", "pegasus")

To clear the command history, use the clear_history() function as follows:

clear_history()

You can configure the number of lines that are stored in the command history by changing the value of
the history_length option in the ~/.lmishellrc configuration file. In addition, you can change the
location of the history file by changing the value of the history_file option in this configuration file.
For example, to set the location of the history file to ~/.lmishell_history and configure LMIShell to
store the maximum of 1000 lines in it, add the following lines to the ~/.lmishellrc file:

history_file = "~/.lmishell_history"
history_length = 1000

Handling Exceptions
By default, the LMIShell interpreter handles all exceptions and uses return values. To disable this
behavior in order to handle all exceptions in the code, use the use_exceptions() function as follows:

use_exceptions()

To re-enable the automatic exception handling, use:

use_exceptions(False)

You can permanently disable the exception handling by changing the value of the use_exceptions
option in the ~/.lmishellrc configuration file to True:

use_exceptions = True

Configuring a Temporary Cache


With the default configuration, LMIShell connection objects use a temporary cache for storing CIM class
names and CIM classes in order to reduce network communication. To clear this temporary cache, use
the clear_cache() method as follows:

object_name.clear_cache()

Replace object_name with the name of a connection object.

To disable the temporary cache for a particular connection object, use the use_cache() method as
follows:

object_name.use_cache(False)

To enable it again, use:

object_name.use_cache(True)

You can permanently disable the temporary cache for connection objects by changing the value of the
use_cache option in the ~/.lmishellrc configuration file to False:

use_cache = False

Exiting LMIShell
To terminate the LMIShell interpreter and return to the shell prompt, press the Ctrl+d key combination
or issue the quit() function as follows:

> quit()
~]$

Running an LMIShell Script


To run an LMIShell script, run the lm ishell command as follows:

lmishell file_name

Replace file_name with the name of the script. To inspect an LMIShell script after its execution, also
specify the --interact or -i command line option:

lmishell --interact file_name

The preferred file extension of LMIShell scripts is .lmi.

18.4.2. Connecting to a CIMOM


LMIShell allows you to connect to a CIMOM that is running either locally on the same system, or on a
remote machine accessible over the network.

Connecting to a Remote CIMOM


To access CIM objects provided by a remote CIMOM, create a connection object by using the
connect() function as follows:

connect(host_name, user_name[, password])

Replace host_name with the host name of the managed system, user_name with the name of a user
that is allowed to connect to the OpenPegasus CIMOM running on that system, and password with the
user's password. If the password is omitted, LMIShell prompts the user to enter it. The function returns
an LMIConnection object.

Example 18.1. Connecting to a Remote CIMOM

To connect to the OpenPegasus CIMOM running on server.example.com as user pegasus, type
the following at the interactive prompt:

> c = connect("server.example.com", "pegasus")


password:
>

Connecting to a Local CIMOM


LMIShell allows you to connect to a local CIMOM by using a Unix socket. For this type of connection, you
must run the LMIShell interpreter as the root user and the /var/run/tog-pegasus/cimxml.socket
socket must exist.

To access CIM objects provided by a local CIMOM, create a connection object by using the connect()
function as follows:

connect(host_name)

Replace host_name with localhost, 127.0.0.1, or ::1. The function returns an LMIConnection
object or None.

Example 18.2. Connecting to a Local CIMOM

To connect to the OpenPegasus CIMOM running on localhost as the root user, type the following
at the interactive prompt:

> c = connect("localhost")
>

Verifying a Connection to a CIMOM


The connect() function returns either an LMIConnection object, or None if the connection could not
be established. In addition, when the connect() function fails to establish a connection, it prints an
error message to standard error output.

To verify that a connection to a CIMOM has been established successfully, use the isinstance()
function as follows:

isinstance(object_name, LMIConnection)

Replace object_name with the name of the connection object. This function returns True if
object_name is an LMIConnection object, or False otherwise.

Example 18.3. Verifying a Connection to a CIMOM

To verify that the c variable created in Example 18.1, “Connecting to a Remote CIMOM” contains an
LMIConnection object, type the following at the interactive prompt:

> isinstance(c, LMIConnection)


True
>

Alternatively, you can verify that c is not None:

> c is None
False
>

18.4.3. Working with Namespaces


LMIShell namespaces provide a natural means of organizing available classes and serve as a hierarchic
access point to other namespaces and classes. The root namespace is the first entry point of a
connection object.

Listing Available Namespaces


To list all available namespaces, use the print_namespaces() method as follows:

object_name.print_namespaces()

Replace object_name with the name of the object to inspect. This method prints available namespaces
to standard output.

To get a list of available namespaces, access the object attribute namespaces:

object_name.namespaces

This returns a list of strings.

Example 18.4. Listing Available Namespaces

To inspect the root namespace object of the c connection object created in Example 18.1,
“Connecting to a Remote CIMOM” and list all available namespaces, type the following at the
interactive prompt:

> c.root.print_namespaces()
cimv2
interop
PG_InterOp
PG_Internal
>

To assign a list of these namespaces to a variable named root_namespaces, type:

> root_namespaces = c.root.namespaces


>

Accessing Namespace Objects


To access a particular namespace object, use the following syntax:

object_name.namespace_name

Replace object_name with the name of the object to inspect and namespace_name with the name of the
namespace to access. This returns an LMINamespace object.

Example 18.5. Accessing Namespace Objects

To access the cimv2 namespace of the c connection object created in Example 18.1, “Connecting to
a Remote CIMOM” and assign it to a variable named ns, type the following at the interactive prompt:

> ns = c.root.cimv2
>

18.4.4. Working with Classes


LMIShell classes represent classes provided by a CIMOM. You can access and list their properties,
methods, instances, instance names, and ValueMap properties, print their documentation strings, and
create new instances and instance names.

Listing Available Classes


To list all available classes in a particular namespace, use the print_classes() method as follows:

namespace_object.print_classes()

Replace namespace_object with the namespace object to inspect. This method prints available classes
to standard output.

To get a list of available classes, use the classes() method:

namespace_object.classes()

This method returns a list of strings.

Example 18.6. Listing Available Classes

To inspect the ns namespace object created in Example 18.5, “Accessing Namespace Objects” and
list all available classes, type the following at the interactive prompt:

> ns.print_classes()
CIM_CollectionInSystem
CIM_ConcreteIdentity
CIM_ControlledBy
CIM_DeviceSAPImplementation
CIM_MemberOfStatusCollection
...
>

To assign a list of these classes to a variable named cimv2_classes, type:

> cimv2_classes = ns.classes()


>

Accessing Class Objects


To access a particular class object that is provided by the CIMOM, use the following syntax:

namespace_object.class_name

Replace namespace_object with the name of the namespace object to inspect and class_name with
the name of the class to access.

Example 18.7. Accessing Class Objects

To access the LMI_IPNetworkConnection class of the ns namespace object created in
Example 18.5, “Accessing Namespace Objects” and assign it to a variable named cls, type the
following at the interactive prompt:

> cls = ns.LMI_IPNetworkConnection


>

Examining Class Objects


All class objects store information about their name and the namespace they belong to, as well as
detailed class documentation. To get the name of a particular class object, use the following syntax:

class_object.classname

Replace class_object with the name of the class object to inspect. This returns a string representation
of the object name.

To get information about the namespace a class object belongs to, use:

class_object.namespace

This returns a string representation of the namespace.

To display detailed class documentation, use the doc() method as follows:

class_object.doc()

Example 18.8. Examining Class Objects

To inspect the cls class object created in Example 18.7, “Accessing Class Objects” and display its
name and corresponding namespace, type the following at the interactive prompt:

> cls.classname
'LMI_IPNetworkConnection'
> cls.namespace
'root/cimv2'
>

To access class documentation, type:

> cls.doc()
Class: LMI_IPNetworkConnection
SuperClass: CIM_IPNetworkConnection
[qualifier] string UMLPackagePath: 'CIM::Network::IP'

[qualifier] string Version: '0.1.0'


...


Listing Available Methods


To list all available methods of a particular class object, use the print_methods() method as follows:

class_object.print_methods()

Replace class_object with the name of the class object to inspect. This method prints available
methods to standard output.

To get a list of available methods, use the methods() method:

class_object.methods()

This method returns a list of strings.

Example 18.9. Listing Available Methods

To inspect the cls class object created in Example 18.7, “Accessing Class Objects” and list all
available methods, type the following at the interactive prompt:

> cls.print_methods()
RequestStateChange
>

To assign a list of these methods to a variable named service_methods, type:

> service_methods = cls.methods()


>

Listing Available Properties


To list all available properties of a particular class object, use the print_properties() method as
follows:

class_object.print_properties()

Replace class_object with the name of the class object to inspect. This method prints available
properties to standard output.

To get a list of available properties, use the properties() method:

class_object.properties()

This method returns a list of strings.


Example 18.10. Listing Available Properties

To inspect the cls class object created in Example 18.7, “Accessing Class Objects” and list all
available properties, type the following at the interactive prompt:

> cls.print_properties()
RequestedState
HealthState
StatusDescriptions
TransitioningToState
Generation
...
>

To assign a list of these properties to a variable named service_properties, type:

> service_properties = cls.properties()


>

Listing and Viewing ValueMap Properties


CIM classes may contain ValueMap properties in their Managed Object Format (MOF) definition.
ValueMap properties contain constant values, which may be useful when calling methods or checking
returned values.

To list all available ValueMap properties of a particular class object, use the
print_valuemap_properties() method as follows:

class_object.print_valuemap_properties()

Replace class_object with the name of the class object to inspect. This method prints available
ValueMap properties to standard output.

To get a list of available ValueMap properties, use the valuemap_properties() method:

class_object.valuemap_properties()

This method returns a list of strings.


Example 18.11. Listing ValueMap Properties

To inspect the cls class object created in Example 18.7, “Accessing Class Objects” and list all
available ValueMap properties, type the following at the interactive prompt:

> cls.print_valuemap_properties()
RequestedState
HealthState
TransitioningToState
DetailedStatus
OperationalStatus
...
>

To assign a list of these ValueMap properties to a variable named
service_valuemap_properties, type:

> service_valuemap_properties = cls.valuemap_properties()


>

To access a particular ValueMap property, use the following syntax:

class_object.valuemap_propertyValues

Replace valuemap_property with the name of the ValueMap property to access.

To list all available constant values, use the print_values() method as follows:

class_object.valuemap_propertyValues.print_values()

This method prints available named constant values to standard output. You can also get a list of
available constant values by using the values() method:

class_object.valuemap_propertyValues.values()

This method returns a list of strings.


Example 18.12. Accessing ValueMap Properties

Example 18.11, “Listing ValueMap Properties” mentions a ValueMap property named
RequestedState. To inspect this property and list available constant values, type the following at
the interactive prompt:

> cls.RequestedStateValues.print_values()
Reset
NoChange
NotApplicable
Quiesce
Unknown
...
>

To assign a list of these constant values to a variable named requested_state_values, type:

> requested_state_values = cls.RequestedStateValues.values()


>

To access a particular constant value, use the following syntax:

class_object.valuemap_propertyValues.constant_value_name

Replace constant_value_name with the name of the constant value. Alternatively, you can use the
value() method as follows:

class_object.valuemap_propertyValues.value("constant_value_name")

To determine the name of a particular constant value, use the value_name() method:

class_object.valuemap_propertyValues.value_name("constant_value")

This method returns a string.


Example 18.13. Accessing Constant Values

Example 18.12, “Accessing ValueMap Properties” shows that the RequestedState property
provides a constant value named Reset. To access this named constant value, type the following at
the interactive prompt:

> cls.RequestedStateValues.Reset
11
> cls.RequestedStateValues.value("Reset")
11
>

To determine the name of this constant value, type:

> cls.RequestedStateValues.value_name(11)
u'Reset'
>

Fetching a CIMClass Object


Many class methods do not require access to a CIMClass object, which is why LMIShell only fetches
this object from the CIMOM when a called method actually needs it. To fetch the CIMClass object
manually, use the fetch() method as follows:

class_object.fetch()

Replace class_object with the name of the class object. Note that methods that require access to a
CIMClass object fetch it automatically.
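
For example, to fetch the CIMClass object for the cls class object created in Example 18.7, “Accessing
Class Objects” manually, you can type the following at the interactive prompt:

> cls.fetch()
>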

18.4.5. Working with Instances


LMIShell instances represent instances provided by a CIMOM. You can get and set their properties, list
and call their methods, print their documentation strings, get a list of associated or association objects,
push modified objects to the CIMOM, and delete individual instances from the CIMOM.

Accessing Instances
To get a list of all available instances of a particular class object, use the instances() method as
follows:

class_object.instances()

Replace class_object with the name of the class object to inspect. This method returns a list of
LMIInstance objects.

To access the first instance of a class object, use the first_instance() method:

class_object.first_instance()

This method returns an LMIInstance object.

In addition to listing all instances or returning the first one, both instances() and first_instance()
support an optional argument to allow you to filter the results:


class_object.instances(criteria)

class_object.first_instance(criteria)

Replace criteria with a dictionary consisting of key-value pairs, where keys represent instance
properties and values represent required values of these properties.

Example 18.14. Accessing Instances

To find the first instance of the cls class object created in Example 18.7, “Accessing Class Objects”
that has the ElementName property equal to eth0 and assign it to a variable named device, type
the following at the interactive prompt:

> device = cls.first_instance({"ElementName": "eth0"})


>

Examining Instances
All instance objects store information about their class name and the namespace they belong to, as well
as detailed documentation about their properties and values. In addition, instance objects allow you to
retrieve a unique identification object.

To get the class name of a particular instance object, use the following syntax:

instance_object.classname

Replace instance_object with the name of the instance object to inspect. This returns a string
representation of the class name.

To get information about the namespace an instance object belongs to, use:

instance_object.namespace

This returns a string representation of the namespace.

To retrieve a unique identification object for an instance object, use:

instance_object.path

This returns an LMIInstanceName object.

Finally, to display detailed documentation, use the doc() method as follows:

instance_object.doc()


Example 18.15. Examining Instances

To inspect the device instance object created in Example 18.14, “Accessing Instances” and display
its class name and the corresponding namespace, type the following at the interactive prompt:

> device.classname
u'LMI_IPNetworkConnection'
> device.namespace
'root/cimv2'
>

To access instance object documentation, type:

> device.doc()
Instance of LMI_IPNetworkConnection
[property] uint16 RequestedState = '12'

[property] uint16 HealthState

[property array] string [] StatusDescriptions


...

Creating New Instances


Certain CIM providers allow you to create new instances of specific class objects. To create a new
instance of a class object, use the create_instance() method as follows:

class_object.create_instance(properties)

Replace class_object with the name of the class object and properties with a dictionary that consists
of key-value pairs, where keys represent instance properties and values represent property values. This
method returns an LMIInstance object.


Example 18.16. Creating New Instances

The LMI_Group class represents system groups and the LMI_Account class represents user
accounts on the managed system. To use the ns namespace object created in Example 18.5,
“Accessing Namespace Objects”, create instances of these two classes for the system group named
pegasus and the user named lmishell-user, and assign them to variables named group and
user, type the following at the interactive prompt:

> group = ns.LMI_Group.first_instance({"Name" : "pegasus"})


> user = ns.LMI_Account.first_instance({"Name" : "lmishell-user"})
>

To get an instance of the LMI_Identity class for the lmishell-user user, type:

> identity = user.first_associator(ResultClass="LMI_Identity")


>

The LMI_MemberOfGroup class represents system group membership. To use the
LMI_MemberOfGroup class to add the lmishell-user to the pegasus group, create a new
instance of this class as follows:

> ns.LMI_MemberOfGroup.create_instance({
... "Member" : identity.path,
... "Collection" : group.path})
LMIInstance(classname="LMI_MemberOfGroup", ...)
>

Deleting Individual Instances


To delete a particular instance from the CIMOM, use the delete() method as follows:

instance_object.delete()

Replace instance_object with the name of the instance object to delete. This method returns a
boolean. Note that after deleting an instance, its properties and methods become inaccessible.

Example 18.17. Deleting Individual Instances

The LMI_Account class represents user accounts on the managed system. To use the ns
namespace object created in Example 18.5, “Accessing Namespace Objects”, create an instance of
the LMI_Account class for the user named lmishell-user, and assign it to a variable named
user, type the following at the interactive prompt:

> user = ns.LMI_Account.first_instance({"Name" : "lmishell-user"})


>

To delete this instance and remove the lmishell-user from the system, type:

> user.delete()
True
>


Listing and Accessing Available Properties


To list all available properties of a particular instance object, use the print_properties() method as
follows:

instance_object.print_properties()

Replace instance_object with the name of the instance object to inspect. This method prints available
properties to standard output.

To get a list of available properties, use the properties() method:

instance_object.properties()

This method returns a list of strings.

Example 18.18. Listing Available Properties

To inspect the device instance object created in Example 18.14, “Accessing Instances” and list all
available properties, type the following at the interactive prompt:

> device.print_properties()
RequestedState
HealthState
StatusDescriptions
TransitioningToState
Generation
...
>

To assign a list of these properties to a variable named device_properties, type:

> device_properties = device.properties()


>

To get the current value of a particular property, use the following syntax:

instance_object.property_name

Replace property_name with the name of the property to access.

To modify the value of a particular property, assign a value to it as follows:

instance_object.property_name = value

Replace value with the new value of the property. Note that in order to propagate the change to the
CIMOM, you must also execute the push() method:

instance_object.push()

This method returns a three-item tuple consisting of a return value, return value parameters, and an
error string.
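
As a minimal sketch, assuming the CIM provider allows the property to be modified, you could change a
property of the device instance object created in Example 18.14, “Accessing Instances” and propagate
the change to the CIMOM as follows (the property name and value are illustrative only):

> device.ElementName = "ethernet port 0"
> device.push()
>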

Example 18.19. Accessing Individual Properties

To inspect the device instance object created in Example 18.14, “Accessing Instances” and display
the value of the property named SystemName, type the following at the interactive prompt:

> device.SystemName
u'server.example.com'
>

Listing and Using Available Methods


To list all available methods of a particular instance object, use the print_methods() method as
follows:

instance_object.print_methods()

Replace instance_object with the name of the instance object to inspect. This method prints available
methods to standard output.

To get a list of available methods, use the methods() method:

instance_object.methods()

This method returns a list of strings.

Example 18.20. Listing Available Methods

To inspect the device instance object created in Example 18.14, “Accessing Instances” and list all
available methods, type the following at the interactive prompt:

> device.print_methods()
RequestStateChange
>

To assign a list of these methods to a variable named network_device_methods, type:

> network_device_methods = device.methods()


>

To call a particular method, use the following syntax:

instance_object.method_name(
parameter=value,
...)

Replace instance_object with the name of the instance object to use, method_name with the name of
the method to call, parameter with the name of the parameter to set, and value with the value of this
parameter. Methods return a three-item tuple consisting of a return value, return value parameters, and
an error string.


Important

LMIInstance objects do not automatically refresh their contents (properties, methods,
qualifiers, and so on). To do so, use the refresh() method as described below.

Example 18.21. Using Methods

The PG_ComputerSystem class represents the system. To create an instance of this class by using
the ns namespace object created in Example 18.5, “Accessing Namespace Objects” and assign it to a
variable named sys, type the following at the interactive prompt:

> sys = ns.PG_ComputerSystem.first_instance()


>

The LMI_AccountManagementService class implements methods that allow you to manage
users and groups in the system. To create an instance of this class and assign it to a variable named
acc, type:

> acc = ns.LMI_AccountManagementService.first_instance()


>

To create a new user named lmishell-user in the system, use the CreateAccount() method
as follows:

> acc.CreateAccount(Name="lmishell-user", System=sys)


LMIReturnValue(rval=0, rparams=NocaseDict({u'Account':
LMIInstanceName(classname="LMI_Account"...), u'Identities':
[LMIInstanceName(classname="LMI_Identity"...),
LMIInstanceName(classname="LMI_Identity"...)]}), errorstr='')

LMIShell supports synchronous method calls: when you use a synchronous method, LMIShell waits for
the corresponding Job object to change its state to “finished” and then returns the return parameters of
this job. LMIShell is able to perform a synchronous method call if the given method returns an object of
one of the following classes:

LMI_StorageJob
LMI_SoftwareInstallationJob
LMI_NetworkJob

LMIShell first tries to use indications as the waiting method. If it fails, it uses a polling method instead.

To perform a synchronous method call, use the following syntax:

instance_object.Syncmethod_name(
parameter=value,
...)

Replace instance_object with the name of the instance object to use, method_name with the name of
the method to call, parameter with the name of the parameter to set, and value with the value of this
parameter. All synchronous methods have the Sync prefix in their name and return a three-item tuple
consisting of the job's return value, job's return value parameters, and job's error string.

You can also force LMIShell to use only the polling method. To do so, specify the PreferPolling
parameter as follows:

instance_object.Syncmethod_name(
PreferPolling=True,
parameter=value,
...)
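
As a minimal sketch of this pattern, assuming the openlmi-storage provider is installed and the referenced
device exists (the SyncCreateOrModifyVG() method also appears later in Example 18.47, “Creating a
Volume Group”), a synchronous call that forces the polling mechanism could look like this:

> storage_service = ns.LMI_StorageConfigurationService.first_instance()
> sda1 = ns.CIM_StorageExtent.first_instance({"Name": "/dev/sda1"})
> (ret, outparams, err) = storage_service.SyncCreateOrModifyVG(
...     PreferPolling=True,
...     ElementName="myGroup",
...     InExtents=[sda1])
>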

Listing and Viewing ValueMap Parameters


CIM methods may contain ValueMap parameters in their Managed Object Format (MOF) definition.
ValueMap parameters contain constant values.

To list all available ValueMap parameters of a particular method, use the
print_valuemap_parameters() method as follows:

instance_object.method_name.print_valuemap_parameters()

Replace instance_object with the name of the instance object and method_name with the name of the
method to inspect. This method prints available ValueMap parameters to standard output.

To get a list of available ValueMap parameters, use the valuemap_parameters() method:

instance_object.method_name.valuemap_parameters()

This method returns a list of strings.

Example 18.22. Listing ValueMap Parameters

To inspect the acc instance object created in Example 18.21, “Using Methods” and list all available
ValueMap parameters of the CreateAccount() method, type the following at the interactive prompt:

> acc.CreateAccount.print_valuemap_parameters()
CreateAccount
>

To assign a list of these ValueMap parameters to a variable named
create_account_parameters, type:
> create_account_parameters = acc.CreateAccount.valuemap_parameters()


>

To access a particular ValueMap parameter, use the following syntax:

instance_object.method_name.valuemap_parameterValues

Replace valuemap_parameter with the name of the ValueMap parameter to access.

To list all available constant values, use the print_values() method as follows:

328
Chapter 18. OpenLMI

This method prints available named constant values to standard output. You can also get a list of
available constant values by using the values() method:

instance_object.method_name.valuemap_parameterValues.values()

This method returns a list of strings.

Example 18.23. Accessing ValueMap Parameters

Example 18.22, “Listing ValueMap Parameters” mentions a ValueMap parameter named
CreateAccount. To inspect this parameter and list available constant values, type the following at
the interactive prompt:

> acc.CreateAccount.CreateAccountValues.print_values()
Operationunsupported
Failed
Unabletosetpasswordusercreated
Unabletocreatehomedirectoryusercreatedandpasswordset
Operationcompletedsuccessfully
>

To assign a list of these constant values to a variable named create_account_values, type:

> create_account_values = acc.CreateAccount.CreateAccountValues.values()


>

To access a particular constant value, use the following syntax:

instance_object.method_name.valuemap_parameterValues.constant_value_name

Replace constant_value_name with the name of the constant value. Alternatively, you can use the
value() method as follows:

instance_object.method_name.valuemap_parameterValues.value("constant_value_name")

To determine the name of a particular constant value, use the value_name() method:

instance_object.method_name.valuemap_parameterValues.value_name("constant_value")

This method returns a string.


Example 18.24. Accessing Constant Values

Example 18.23, “Accessing ValueMap Parameters” shows that the CreateAccount ValueMap
parameter provides a constant value named Failed. To access this named constant value, type the
following at the interactive prompt:

> acc.CreateAccount.CreateAccountValues.Failed
2
> acc.CreateAccount.CreateAccountValues.value("Failed")
2
>

To determine the name of this constant value, type:

> acc.CreateAccount.CreateAccountValues.value_name(2)
u'Failed'
>

Refreshing Instance Objects


Local objects used by LMIShell, which represent CIM objects on the CIMOM side, can become outdated
if those objects change while you are working with LMIShell's local copies.

To update the properties and methods of a particular instance object, use the refresh() method as
follows:

instance_object.refresh()

Replace instance_object with the name of the object to refresh. This method returns a three-item
tuple consisting of a return value, return value parameter, and an error string.

Example 18.25. Refreshing Instance Objects

To update the properties and methods of the device instance object created in Example 18.14,
“Accessing Instances”, type the following at the interactive prompt:

> device.refresh()
LMIReturnValue(rval=True, rparams=NocaseDict({}), errorstr='')
>

Displaying MOF Representation


To display the Managed Object Format (MOF) representation of an instance object, use the tomof()
method as follows:

instance_object.tomof()

Replace instance_object with the name of the instance object to inspect. This method prints the MOF
representation of the object to standard output.


Example 18.26. Displaying MOF Representation

To display the MOF representation of the device instance object created in Example 18.14,
“Accessing Instances”, type the following at the interactive prompt:

> device.tomof()
instance of LMI_IPNetworkConnection {
RequestedState = 12;
HealthState = NULL;
StatusDescriptions = NULL;
TransitioningToState = 12;
...

18.4.6. Working with Instance Names


LMIShell instance names are objects that hold a set of primary keys and their values. This type of
object uniquely identifies an instance.

Accessing Instance Names


CIMInstance objects are identified by CIMInstanceName objects. To get a list of all available
instance name objects, use the instance_names() method as follows:

class_object.instance_names()

Replace class_object with the name of the class object to inspect. This method returns a list of
LMIInstanceName objects.

To access the first instance name object of a class object, use the first_instance_name() method:

class_object.first_instance_name()

This method returns an LMIInstanceName object.

In addition to listing all instance name objects or returning the first one, both instance_names() and
first_instance_name() support an optional argument to allow you to filter the results:

class_object.instance_names(criteria)

class_object.first_instance_name(criteria)

Replace criteria with a dictionary consisting of key-value pairs, where keys represent key properties
and values represent required values of these key properties.


Example 18.27. Accessing Instance Names

To find the first instance name of the cls class object created in Example 18.7, “Accessing Class
Objects” that has the Name key property equal to eth0 and assign it to a variable named
device_name, type the following at the interactive prompt:

> device_name = cls.first_instance_name({"Name": "eth0"})


>

Examining Instance Names


All instance name objects store information about their class name and the namespace they belong to.

To get the class name of a particular instance name object, use the following syntax:

instance_name_object.classname

Replace instance_name_object with the name of the instance name object to inspect. This returns a
string representation of the class name.

To get information about the namespace an instance name object belongs to, use:

instance_name_object.namespace

This returns a string representation of the namespace.

Example 18.28. Examining Instance Names

To inspect the device_name instance name object created in Example 18.27, “Accessing Instance
Names” and display its class name and the corresponding namespace, type the following at the
interactive prompt:

> device_name.classname
u'LMI_IPNetworkConnection'
> device_name.namespace
'root/cimv2'
>

Creating New Instance Names


LMIShell allows you to create a new wrapped CIMInstanceName object if you know all primary keys of
a remote object. This instance name object can then be used to retrieve the whole instance object.

To create a new instance name of a class object, use the new_instance_name() method as follows:

class_object.new_instance_name(key_properties)

Replace class_object with the name of the class object and key_properties with a dictionary that
consists of key-value pairs, where keys represent key properties and values represent key property
values. This method returns an LMIInstanceName object.


Example 18.29. Creating New Instance Names

The LMI_Account class represents user accounts on the managed system. To use the ns
namespace object created in Example 18.5, “Accessing Namespace Objects” and create a new
instance name of the LMI_Account class representing the lmishell-user user on the managed
system, type the following at the interactive prompt:

> instance_name = ns.LMI_Account.new_instance_name({
... "CreationClassName" : "LMI_Account",
... "Name" : "lmishell-user",
... "SystemCreationClassName" : "PG_ComputerSystem",
... "SystemName" : "server"})
>

Listing and Accessing Key Properties


To list all available key properties of a particular instance name object, use the
print_key_properties() method as follows:

instance_name_object.print_key_properties()

Replace instance_name_object with the name of the instance name object to inspect. This method
prints available key properties to standard output.

To get a list of available key properties, use the key_properties() method:

instance_name_object.key_properties()

This method returns a list of strings.

Example 18.30. Listing Available Key Properties

To inspect the device_name instance name object created in Example 18.27, “Accessing Instance
Names” and list all available key properties, type the following at the interactive prompt:

> device_name.print_key_properties()
CreationClassName
SystemName
Name
SystemCreationClassName
>

To assign a list of these key properties to a variable named device_name_properties, type:

> device_name_properties = device_name.key_properties()


>

To get the current value of a particular key property, use the following syntax:

instance_name_object.key_property_name


Replace key_property_name with the name of the key property to access.

Example 18.31. Accessing Individual Key Properties

To inspect the device_name instance name object created in Example 18.27, “Accessing Instance
Names” and display the value of the key property named SystemName, type the following at the
interactive prompt:

> device_name.SystemName
u'server.example.com'
>

Converting Instance Names to Instances


Each instance name can be converted to an instance. To do so, use the to_instance() method as
follows:

instance_name_object.to_instance()

Replace instance_name_object with the name of the instance name object to convert. This method
returns an LMIInstance object.

Example 18.32. Converting Instance Names to Instances

To convert the device_name instance name object created in Example 18.27, “Accessing Instance
Names” to an instance object and assign it to a variable named device, type the following at the
interactive prompt:

> device = device_name.to_instance()


>

18.4.7. Working with Associated Objects


The Common Information Model defines an association relationship between managed objects.

Accessing Associated Instances


To get a list of all objects associated with a particular instance object, use the associators() method
as follows:

instance_object.associators(
AssocClass=class_name,
ResultClass=class_name,
Role=role,
ResultRole=role,
IncludeQualifiers=include_qualifiers,
IncludeClassOrigin=include_class_origin,
PropertyList=property_list)

To access the first object associated with a particular instance object, use the first_associator()
method:


instance_object.first_associator(
AssocClass=class_name,
ResultClass=class_name,
Role=role,
ResultRole=role,
IncludeQualifiers=include_qualifiers,
IncludeClassOrigin=include_class_origin,
PropertyList=property_list)

Replace instance_object with the name of the instance object to inspect. You can filter the results by
specifying the following parameters:

AssocClass — Each returned object must be associated with the source object through an instance
of this class or one of its subclasses. The default value is None.
ResultClass — Each returned object must be either an instance of this class or one of its
subclasses, or it must be this class or one of its subclasses. The default value is None.
Role — Each returned object must be associated with the source object through an association in
which the source object plays the specified role. The name of the property in the association class
that refers to the source object must match the value of this parameter. The default value is None.
ResultRole — Each returned object must be associated with the source object through an
association in which the returned object plays the specified role. The name of the property in the
association class that refers to the returned object must match the value of this parameter. The
default value is None.

The remaining parameters refer to:

IncludeQualifiers — A boolean indicating whether all qualifiers of each object (including
qualifiers on the object and on any returned properties) should be included as QUALIFIER
elements in the response. The default value is False.
IncludeClassOrigin — A boolean indicating whether the CLASSORIGIN attribute should be
present on all appropriate elements in each returned object. The default value is False.
PropertyList — The members of this list define one or more property names. Returned objects
will not include elements for any properties missing from this list. If PropertyList is an empty list,
no properties are included in returned objects. If it is None, no additional filtering is defined. The
default value is None.

Example 18.33. Accessing Associated Instances

The LMI_StorageExtent class represents block devices available in the system. To use the ns
namespace object created in Example 18.5, “Accessing Namespace Objects”, create an instance of
the LMI_StorageExtent class for the block device named /dev/vda, and assign it to a variable
named vda, type the following at the interactive prompt:

> vda = ns.LMI_StorageExtent.first_instance({
... "DeviceID" : "/dev/vda"})
>

To get a list of all disk partitions on this block device and assign it to a variable named
vda_partitions, use the associators() method as follows:

> vda_partitions = vda.associators(ResultClass="LMI_DiskPartition")


>


Accessing Associated Instance Names


To get a list of all associated instance names of a particular instance object, use the
associator_names() method as follows:

instance_object.associator_names(
AssocClass=class_name,
ResultClass=class_name,
Role=role,
ResultRole=role)

To access the first associated instance name of a particular instance object, use the
first_associator_name() method:

instance_object.first_associator_name(
AssocClass=class_object,
ResultClass=class_object,
Role=role,
ResultRole=role)

Replace instance_object with the name of the instance object to inspect. You can filter the results by
specifying the following parameters:

AssocClass — Each returned name identifies an object that must be associated with the source
object through an instance of this class or one of its subclasses. The default value is None.
ResultClass — Each returned name identifies an object that must be either an instance of this
class or one of its subclasses, or it must be this class or one of its subclasses. The default value is
None.
Role — Each returned name identifies an object that must be associated with the source object
through an association in which the source object plays the specified role. The name of the property
in the association class that refers to the source object must match the value of this parameter. The
default value is None.
ResultRole — Each returned name identifies an object that must be associated with the source
object through an association in which the returned named object plays the specified role. The name
of the property in the association class that refers to the returned object must match the value of this
parameter. The default value is None.

Example 18.34. Accessing Associated Instance Names

To use the vda instance object created in Example 18.33, “Accessing Associated Instances”, get a list
of its associated instance names, and assign it to a variable named vda_partitions, type:

> vda_partitions = vda.associator_names(ResultClass="LMI_DiskPartition")


>

18.4.8. Working with Association Objects


The Common Information Model defines an association relationship between managed objects.
Association objects define the relationship between two other objects.


Accessing Association Instances


To get a list of association objects that refer to a particular target object, use the references()
method as follows:

instance_object.references(
ResultClass=class_name,
Role=role,
IncludeQualifiers=include_qualifiers,
IncludeClassOrigin=include_class_origin,
PropertyList=property_list)

To access the first association object that refers to a particular target object, use the
first_reference() method:

instance_object.first_reference(
ResultClass=class_name,
Role=role,
IncludeQualifiers=include_qualifiers,
IncludeClassOrigin=include_class_origin,
PropertyList=property_list)

Replace instance_object with the name of the instance object to inspect. You can filter the results by
specifying the following parameters:

ResultClass — Each returned object must be either an instance of this class or one of its
subclasses, or it must be this class or one of its subclasses. The default value is None.
Role — Each returned object must refer to the target object through a property with a name that
matches the value of this parameter. The default value is None.

The remaining parameters refer to:

IncludeQualifiers — A boolean indicating whether all qualifiers of each object (including qualifiers on the
object and on any returned properties) should be included as QUALIFIER elements in the response.
The default value is False.
IncludeClassOrigin — A boolean indicating whether the CLASSORIGIN attribute should be
present on all appropriate elements in each returned object. The default value is False.
PropertyList — The members of this list define one or more property names. Returned objects
will not include elements for any properties missing from this list. If PropertyList is an empty list,
no properties are included in returned objects. If it is None, no additional filtering is defined. The
default value is None.


Example 18.35. Accessing Association Instances

The LMI_LANEndpoint class represents a communication endpoint associated with a certain
network interface device. To use the ns namespace object created in Example 18.5, “Accessing
Namespace Objects”, create an instance of the LMI_LANEndpoint class for the network interface
device named eth0, and assign it to a variable named lan_endpoint, type the following at the
interactive prompt:

> lan_endpoint = ns.LMI_LANEndpoint.first_instance({
... "Name" : "eth0"})
>

To access the first association object that refers to an LMI_BindsToLANEndpoint object and
assign it to a variable named bind, type:

> bind = lan_endpoint.first_reference(
... ResultClass="LMI_BindsToLANEndpoint")
>

You can now use the Dependent property to access the dependent LMI_IPProtocolEndpoint
class that represents the IP address of the corresponding network interface device:

> ip = bind.Dependent.to_instance()
> print ip.IPv4Address
192.168.122.1
>

Accessing Association Instance Names


To get a list of association instance names of a particular instance object, use the
reference_names() method as follows:

instance_object.reference_names(
ResultClass=class_name,
Role=role)

To access the first association instance name of a particular instance object, use the
first_reference_name() method:

instance_object.first_reference_name(
ResultClass=class_name,
Role=role)

Replace instance_object with the name of the instance object to inspect. You can filter the results by
specifying the following parameters:

ResultClass — Each returned object name identifies either an instance of this class or one of its
subclasses, or this class or one of its subclasses. The default value is None.
Role — Each returned object identifies an object that refers to the target instance through a property
with a name that matches the value of this parameter. The default value is None.


Example 18.36. Accessing Association Instance Names

To use the lan_endpoint instance object created in Example 18.35, “Accessing Association
Instances”, access the first association instance name that refers to an LMI_BindsToLANEndpoint
object, and assign it to a variable named bind, type:

> bind = lan_endpoint.first_reference_name(
... ResultClass="LMI_BindsToLANEndpoint")

You can now use the Dependent property to access the dependent LMI_IPProtocolEndpoint
class that represents the IP address of the corresponding network interface device:

> ip = bind.Dependent.to_instance()
> print ip.IPv4Address
192.168.122.1
>

18.4.9. Working with Indications


An indication is a reaction to a specific event that occurs in response to a particular change in data.
LMIShell can subscribe to an indication in order to receive such event responses.

Subscribing to Indications
To subscribe to an indication, use the subscribe_indication() method as follows:

connection_object.subscribe_indication(
QueryLanguage="WQL",
Query='SELECT * FROM CIM_InstModification',
Name="cpu",
CreationNamespace="root/interop",
SubscriptionCreationClassName="CIM_IndicationSubscription",
FilterCreationClassName="CIM_IndicationFilter",
FilterSystemCreationClassName="CIM_ComputerSystem",
FilterSourceNamespace="root/cimv2",
HandlerCreationClassName="CIM_IndicationHandlerCIMXML",
HandlerSystemCreationClassName="CIM_ComputerSystem",
Destination="http://host_name:5988")

Alternatively, you can use a shorter version of the method call as follows:

connection_object.subscribe_indication(
Query='SELECT * FROM CIM_InstModification',
Name="cpu",
Destination="http://host_name:5988")

Replace connection_object with a connection object and host_name with the host name of the
system you want to deliver the indications to.

By default, all subscriptions created by the LMIShell interpreter are automatically deleted when the
interpreter terminates. To change this behavior, pass the Permanent=True keyword parameter to the
subscribe_indication() method call. This will prevent LMIShell from deleting the subscription.
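
For example, a minimal sketch of a persistent subscription, reusing the shorter call form shown above
(the indication name and destination are placeholders), might look like this:

connection_object.subscribe_indication(
    Query='SELECT * FROM CIM_InstModification',
    Name="cpu",
    Destination="http://host_name:5988",
    Permanent=True)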


Example 18.37. Subscribing to Indications

To use the c connection object created in Example 18.1, “Connecting to a Remote CIMOM” and
subscribe to an indication named cpu, type the following at the interactive prompt:

> c.subscribe_indication(
... QueryLanguage="WQL",
... Query='SELECT * FROM CIM_InstModification',
... Name="cpu",
... CreationNamespace="root/interop",
... SubscriptionCreationClassName="CIM_IndicationSubscription",
... FilterCreationClassName="CIM_IndicationFilter",
... FilterSystemCreationClassName="CIM_ComputerSystem",
... FilterSourceNamespace="root/cimv2",
... HandlerCreationClassName="CIM_IndicationHandlerCIMXML",
... HandlerSystemCreationClassName="CIM_ComputerSystem",
... Destination="http://server.example.com:5988")
LMIReturnValue(rval=True, rparams=NocaseDict({}), errorstr='')
>

Listing Subscribed Indications


To list all the subscribed indications, use the print_subscribed_indications() method as
follows:

connection_object.print_subscribed_indications()

Replace connection_object with the name of the connection object to inspect. This method prints
subscribed indications to standard output.

To get a list of subscribed indications, use the subscribed_indications() method:

connection_object.subscribed_indications()

This method returns a list of strings.

Example 18.38. Listing Subscribed Indications

To inspect the c connection object created in Example 18.1, “Connecting to a Remote CIMOM” and
list all subscribed indications, type the following at the interactive prompt:

> c.print_subscribed_indications()
>

To assign a list of these indications to a variable named indications, type:

> indications = c.subscribed_indications()


>

Unsubscribing from Indications


By default, all subscriptions created by the LMIShell interpreter are automatically deleted when the
interpreter terminates. To delete an individual subscription sooner, use the
unsubscribe_indication() method as follows:

connection_object.unsubscribe_indication(indication_name)

Replace connection_object with the name of the connection object and indication_name with the
name of the indication to delete.

To delete all subscriptions, use the unsubscribe_all_indications() method:

connection_object.unsubscribe_all_indications()

Example 18.39. Unsubscribing from Indications

To use the c connection object created in Example 18.1, “Connecting to a Remote CIMOM” and
unsubscribe from the indication created in Example 18.37, “Subscribing to Indications”, type the
following at the interactive prompt:

> c.unsubscribe_indication('cpu')
LMIReturnValue(rval=True, rparams=NocaseDict({}), errorstr='')
>

Implementing an Indication Handler


The subscribe_indication() method allows you to specify the host name of the system you want
to deliver the indications to. The following example illustrates how to implement an indication handler:

> def handler(ind, arg1, arg2, **kwargs):
...     exported_objects = ind.exported_objects()
...     do_something_with(exported_objects)
> listener = LMIIndicationListener("0.0.0.0", listening_port)
> listener.add_handler("indication-name-XXXXXXXX", handler, arg1, arg2, **kwargs)
> listener.start()
>

The first argument of the handler is an LMIIndication object, which contains a list of methods and
objects exported by the indication. Other parameters are user specific: those arguments need to be
specified when adding a handler to the listener.

In the example above, the add_handler() method call uses a special string with eight “X” characters.
These characters are replaced with a random string that is generated by listeners in order to avoid a
possible handler name collision. To use the random string, start the indication listener first and then
subscribe to an indication so that the Destination property of the handler object contains the following
value: schema://host_name/random_string.


Example 18.40. Implementing an Indication Handler

The following script illustrates how to write a handler that monitors a managed system located at
192.168.122.1 and calls the indication_callback() function whenever a new user account
is created:

#!/usr/bin/lmishell

import sys
from time import sleep
from lmi.shell.LMIUtil import LMIPassByRef
from lmi.shell.LMIIndicationListener import LMIIndicationListener

# These are passed by reference to indication_callback
var1 = LMIPassByRef("some_value")
var2 = LMIPassByRef("some_other_value")

def indication_callback(ind, var1, var2):
    # Do something with ind, var1 and var2
    print ind.exported_objects()
    print var1.value
    print var2.value

c = connect("hostname", "username", "password")

listener = LMIIndicationListener("0.0.0.0", 65500)

unique_name = listener.add_handler(
    "demo-XXXXXXXX",      # Creates a unique name for me
    indication_callback,  # Callback to be called
    var1,                 # Variable passed by ref
    var2                  # Variable passed by ref
)

listener.start()

print c.subscribe_indication(
    Name=unique_name,
    Query="SELECT * FROM LMI_AccountInstanceCreationIndication WHERE SOURCEINSTANCE ISA LMI_Account",
    Destination="192.168.122.1:65500"
)

try:
    while True:
        sleep(60)
except KeyboardInterrupt:
    sys.exit(0)

18.4.10. Example Usage


This section provides a number of examples for various CIM providers distributed with the OpenLMI
packages. All examples in this section use the following two variable definitions:

c = connect("host_name", "user_name", "password")


ns = c.root.cimv2


Replace host_name with the host name of the managed system, user_name with the name of a user that
is allowed to connect to the OpenPegasus CIMOM running on that system, and password with the user's
password.

Using the OpenLMI Service Provider


The openlmi-service package installs a CIM provider for managing system services. The examples below
illustrate how to use this CIM provider to list available system services and how to start, stop, enable, and
disable them.

Example 18.41. Listing Available Services

To list all available services on the managed machine along with information whether the service has
been started (TRUE) or stopped (FALSE) and the status string, use the following code snippet:

for service in ns.LMI_Service.instances():
    print "%s:\t%s" % (service.Name, service.Status)

To list only the services that are enabled by default, use this code snippet:

cls = ns.LMI_Service
for service in cls.instances():
    if service.EnabledDefault == cls.EnabledDefaultValues.Enabled:
        print service.Name

Note that the value of the EnabledDefault property is equal to 2 for enabled services and 3 for
disabled services.

To display information about the cups service, use the following:

cups = ns.LMI_Service.first_instance({"Name": "cups.service"})


cups.doc()

Example 18.42. Starting and Stopping Services

To start and stop the cups service and to see its current status, use the following code snippet:

cups = ns.LMI_Service.first_instance({"Name": "cups.service"})


cups.StartService()
print cups.Status
cups.StopService()
print cups.Status


Example 18.43. Enabling and Disabling Services

To enable and disable the cups service and to display its EnabledDefault property, use the
following code snippet:

cups = ns.LMI_Service.first_instance({"Name": "cups.service"})


cups.TurnServiceOff()
print cups.EnabledDefault
cups.TurnServiceOn()
print cups.EnabledDefault

Using the OpenLMI Networking Provider


The openlmi-networking package installs a CIM provider for networking. The examples below illustrate
how to use this CIM provider to list IP addresses associated with a certain port number, create a new
connection, configure a static IP address, and activate a connection.


Example 18.44. Listing IP Addresses Associated with a Given Port Number

To list all IP addresses associated with the eth0 network interface, use the following code snippet:

device = ns.LMI_IPNetworkConnection.first_instance({'ElementName': 'eth0'})

for endpoint in device.associators(AssocClass="LMI_NetworkSAPSAPDependency",
        ResultClass="LMI_IPProtocolEndpoint"):
    if endpoint.ProtocolIFType == ns.LMI_IPProtocolEndpoint.ProtocolIFTypeValues.IPv4:
        print "IPv4: %s/%s" % (endpoint.IPv4Address, endpoint.SubnetMask)
    elif endpoint.ProtocolIFType == ns.LMI_IPProtocolEndpoint.ProtocolIFTypeValues.IPv6:
        print "IPv6: %s/%d" % (endpoint.IPv6Address, endpoint.IPv6SubnetPrefixLength)

This code snippet uses the LMI_IPProtocolEndpoint class associated with a given
LMI_IPNetworkConnection class.

To display the default gateway, use this code snippet:

for rsap in device.associators(AssocClass="LMI_NetworkRemoteAccessAvailableToElement",
        ResultClass="LMI_NetworkRemoteServiceAccessPoint"):
    if rsap.AccessContext == ns.LMI_NetworkRemoteServiceAccessPoint.AccessContextValues.DefaultGateway:
        print "Default Gateway: %s" % rsap.AccessInfo

The default gateway is represented by an LMI_NetworkRemoteServiceAccessPoint instance
with the AccessContext property equal to DefaultGateway.

To get a list of DNS servers, the object model needs to be traversed as follows:

1. Get the LMI_IPProtocolEndpoint instances associated with a given
   LMI_IPNetworkConnection using LMI_NetworkSAPSAPDependency.
2. Use the same association for the LMI_DNSProtocolEndpoint instances.

The LMI_NetworkRemoteServiceAccessPoint instances with the AccessContext property
equal to DNS Server, associated through LMI_NetworkRemoteAccessAvailableToElement,
have the DNS server address in the AccessInfo property.

There can be multiple possible paths to get to the RemoteServiceAccessPath, and entries can be
duplicated. The following code snippet uses the set() function to remove duplicate entries from the
list of DNS servers:


dnsservers = set()
for ipendpoint in device.associators(AssocClass="LMI_NetworkSAPSAPDependency",
        ResultClass="LMI_IPProtocolEndpoint"):
    for dnsedpoint in ipendpoint.associators(AssocClass="LMI_NetworkSAPSAPDependency",
            ResultClass="LMI_DNSProtocolEndpoint"):
        for rsap in dnsedpoint.associators(AssocClass="LMI_NetworkRemoteAccessAvailableToElement",
                ResultClass="LMI_NetworkRemoteServiceAccessPoint"):
            if rsap.AccessContext == ns.LMI_NetworkRemoteServiceAccessPoint.AccessContextValues.DNSServer:
                dnsservers.add(rsap.AccessInfo)
print "DNS:", ", ".join(dnsservers)

Example 18.45. Creating a New Connection and Configuring a Static IP Address

To create a new setting with a static IPv4 and stateless IPv6 configuration for network interface eth0,
use the following code snippet:

capability = ns.LMI_IPNetworkConnectionCapabilities.first_instance({
        'ElementName': 'eth0'})
result = capability.LMI_CreateIPSetting(Caption='eth0 Static',
        IPv4Type=capability.LMI_CreateIPSetting.IPv4TypeValues.Static,
        IPv6Type=capability.LMI_CreateIPSetting.IPv6TypeValues.Stateless)
setting = result.rparams["SettingData"].to_instance()
for settingData in setting.associators(AssocClass="LMI_OrderedIPAssignmentComponent"):
    if setting.ProtocolIFType == ns.LMI_IPAssignmentSettingData.ProtocolIFTypeValues.IPv4:
        # Set static IPv4 address
        settingData.IPAddresses = ["192.168.1.100"]
        settingData.SubnetMasks = ["255.255.0.0"]
        settingData.GatewayAddresses = ["192.168.1.1"]
        settingData.push()

This code snippet creates a new setting by calling the LMI_CreateIPSetting() method on the
instance of LMI_IPNetworkConnectionCapabilities, which is associated with
LMI_IPNetworkConnection through LMI_IPNetworkConnectionElementCapabilities. It
also uses the push() method to modify the setting.


Example 18.46. Activating a Connection

To apply a setting to the network interface, call the ApplySettingToIPNetworkConnection()
method of the LMI_IPConfigurationService class. This method is asynchronous and returns a
job. The following code snippet illustrates how to call this method synchronously:

setting = ns.LMI_IPAssignmentSettingData.first_instance({"Caption": "eth0 Static"})
port = ns.LMI_IPNetworkConnection.first_instance({'ElementName': 'ens8'})
service = ns.LMI_IPConfigurationService.first_instance()
service.SyncApplySettingToIPNetworkConnection(SettingData=setting,
        IPNetworkConnection=port, Mode=32768)

The Mode parameter affects how the setting is applied. The most commonly used values of this
parameter are as follows:

1 — apply the setting now and make it auto-activated.
2 — make the setting auto-activated and do not apply it now.
4 — disconnect and disable auto-activation.
5 — do not change the setting state, only disable auto-activation.
32768 — apply the setting.
32769 — disconnect.
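
For example, to make the same setting auto-activated without applying it immediately, the call from the
snippet above could be repeated with Mode=2 (a minimal sketch reusing the setting, port, and
service variables):

service.SyncApplySettingToIPNetworkConnection(SettingData=setting,
        IPNetworkConnection=port, Mode=2)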

Using the OpenLMI Storage Provider


The openlmi-storage package installs a CIM provider for storage management. The examples below
illustrate how to use this CIM provider to create a volume group, create a logical volume, build a file
system, mount a file system, and list block devices known to the system.

In addition to the c and ns variables, these examples use the following variable definitions:

MEGABYTE = 1024*1024
storage_service = ns.LMI_StorageConfigurationService.first_instance()
filesystem_service = ns.LMI_FileSystemConfigurationService.first_instance()


Example 18.47. Creating a Volume Group

To create a new volume group located in /dev/myGroup/ that has three members and the default
extent size of 4 MB, use the following code snippet:

# Find the devices to add to the volume group


# (filtering the CIM_StorageExtent.instances()
# call would be faster, but this is easier to read):
sda1 = ns.CIM_StorageExtent.first_instance({"Name": "/dev/sda1"})
sdb1 = ns.CIM_StorageExtent.first_instance({"Name": "/dev/sdb1"})
sdc1 = ns.CIM_StorageExtent.first_instance({"Name": "/dev/sdc1"})

# Create a new volume group:


(ret, outparams, err) = storage_service.SyncCreateOrModifyVG(
ElementName="myGroup",
InExtents=[sda1, sdb1, sdc1])
vg = outparams['Pool'].to_instance()
print "VG", vg.PoolID, \
"with extent size", vg.ExtentSize, \
"and", vg.RemainingExtents, "free extents created."

Example 18.48. Creating a Logical Volume

To create two logical volumes with the size of 100 MB, use this code snippet:

# Find the volume group:


vg = ns.LMI_VGStoragePool.first_instance({"Name": "/dev/mapper/myGroup"})

# Create the first logical volume:


(ret, outparams, err) = storage_service.SyncCreateOrModifyLV(
ElementName="Vol1",
InPool=vg,
Size=100 * MEGABYTE)
lv = outparams['TheElement'].to_instance()
print "LV", lv.DeviceID, \
"with", lv.BlockSize * lv.NumberOfBlocks,\
"bytes created."

# Create the second logical volume:


(ret, outparams, err) = storage_service.SyncCreateOrModifyLV(
ElementName="Vol2",
InPool=vg,
Size=100 * MEGABYTE)
lv = outparams['TheElement'].to_instance()
print "LV", lv.DeviceID, \
"with", lv.BlockSize * lv.NumberOfBlocks, \
"bytes created."


Example 18.49. Creating a File System

To create an ext3 file system on logical volume lv from Example 18.48, “Creating a Logical Volume”,
use the following code snippet:

(ret, outparams, err) = filesystem_service.SyncLMI_CreateFileSystem(
        FileSystemType=filesystem_service.LMI_CreateFileSystem.FileSystemTypeValues.EXT3,
        InExtents=[lv])

Example 18.50. Mounting a File System

To mount the file system created in Example 18.49, “Creating a File System”, use the following code
snippet:

# Find the file system on the logical volume:


fs = lv.first_associator(ResultClass="LMI_LocalFileSystem")

mount_service = ns.LMI_MountConfigurationService.first_instance()
(rc, out, err) = mount_service.SyncCreateMount(
FileSystemType='ext3',
Mode=32768, # just mount
FileSystem=fs,
MountPoint='/mnt/test',
FileSystemSpec=lv.Name)

Example 18.51. Listing Block Devices

To list all block devices known to the system, use the following code snippet:

devices = ns.CIM_StorageExtent.instances()
for device in devices:
    if lmi_isinstance(device, ns.CIM_Memory):
        # Memory and CPU caches are StorageExtents too, do not print them
        continue
    print device.classname,
    print device.DeviceID,
    print device.Name,
    print device.BlockSize*device.NumberOfBlocks

Using the OpenLMI Hardware Provider


The openlmi-hardware package installs a CIM provider for monitoring hardware. The examples below
illustrate how to use this CIM provider to retrieve information about CPU, memory modules, PCI devices,
and the manufacturer and model of the machine.


Example 18.52. Viewing CPU Information

To display basic CPU information such as the CPU name, the number of processor cores, and the
number of hardware threads, use the following code snippet:

cpu = ns.LMI_Processor.first_instance()
cpu_cap = cpu.associators(ResultClass="LMI_ProcessorCapabilities")[0]
print cpu.Name
print cpu_cap.NumberOfProcessorCores
print cpu_cap.NumberOfHardwareThreads

Example 18.53. Viewing Memory Information

To display basic information about memory modules such as their individual sizes, use the following
code snippet:

mem = ns.LMI_Memory.first_instance()
for i in mem.associators(ResultClass="LMI_PhysicalMemory"):
    print i.Name

Example 18.54. Viewing Chassis Information

To display basic information about the machine such as its manufacturer or its model, use the
following code snippet:

chassis = ns.LMI_Chassis.first_instance()
print chassis.Manufacturer
print chassis.Model

Example 18.55. Listing PCI Devices

To list all PCI devices known to the system, use the following code snippet:

for pci in ns.LMI_PCIDevice.instances():
    print pci.Name

18.5. Using OpenLMI Scripts


The LMIShell interpreter is built on top of Python modules that can be used to develop custom
management tools. The OpenLMI Scripts project provides a number of Python libraries for interfacing
with OpenLMI providers. In addition, it is distributed with lmi, an extensible utility that can be used to
interact with these libraries from the command line.

To install OpenLMI Scripts on your system, type the following at a shell prompt:

easy_install --user openlmi-scripts

This command installs the Python modules and the lmi utility in the ~/.local/ directory. To extend
the functionality of the lmi utility, install additional OpenLMI modules by using the following command:

easy_install --user package_name

For a complete list of available modules, see the Python website. For more information about OpenLMI
Scripts, see the official OpenLMI Scripts documentation.
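For instance, to add storage-related commands to the lmi utility, you could install the corresponding script module. The package name below is an assumption based on the project's naming scheme and may differ; check the list of available modules first:

easy_install --user openlmi-scripts-storage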

18.6. Additional Resources


For more information about OpenLMI and system management in general, refer to the resources listed
below.

Installed Documentation

lmishell(1) — The manual page for the lmishell client and interpreter provides detailed information
about its execution and usage.

Online Documentation

Red Hat Enterprise Linux 7 Networking Guide — The Networking Guide for Red Hat Enterprise Linux
7 documents relevant information regarding the configuration and administration of network
interfaces and network services on this system.
Red Hat Enterprise Linux 7 Storage Administration Guide — The Storage Administration Guide for
Red Hat Enterprise Linux 7 provides instructions on how to manage storage devices and file systems
on this system.
Red Hat Enterprise Linux 7 Power Management Guide — The Power Management Guide for Red
Hat Enterprise Linux 7 explains how to manage power consumption of this system effectively. It
discusses different techniques that lower power consumption for both servers and laptops, and
explains how each technique affects the overall performance of the system.
Red Hat Enterprise Linux 7 Linux Domain Identity Management Guide — The Linux Domain Identity
Management Guide for Red Hat Enterprise Linux 7 covers all aspects of installing, configuring, and
managing IPA domains, including both servers and clients. The guide is intended for IT and systems
administrators.
FreeIPA Documentation — The FreeIPA Documentation serves as a main user documentation for
using the FreeIPA Identity Management project.
OpenSSL Home Page — The OpenSSL home page provides an overview of the OpenSSL project.
Mozilla NSS Documentation — The Mozilla NSS Documentation serves as a main user
documentation for using the Mozilla NSS project.

See Also

Chapter 3, Managing Users and Groups documents how to manage system users and groups in the
graphical user interface and on the command line.
Chapter 5, Yum describes how to use the Yum package manager to search, install, update, and
uninstall packages on the command line.
Chapter 6, PackageKit describes PackageKit, a suite of package management tools for the graphical
user interface.
Chapter 7, Managing Services with systemd provides an introduction to systemd and documents
how to use the systemctl command to manage system services, configure systemd targets, and
execute power management commands.
Chapter 8, OpenSSH describes how to configure an SSH server and how to use the ssh, scp, and
sftp client utilities to access it.


Chapter 19. Viewing and Managing Log Files


Log files are files that contain messages about the system, including the kernel, services, and
applications running on it. There are several types of log files for storing various information. For
example, there is a default system log file, a log file just for security messages, and a log file for cron
tasks. Log files can be very useful in many situations, for instance to troubleshoot a problem with the
system, when trying to load a kernel driver, or when looking for unauthorized login attempts to the
system.

Some log files are controlled by a daemon called rsyslogd. The rsyslogd daemon is an enhanced
replacement for the previous syslogd, and provides extended filtering, various configuration options, input
and output modules, and support for transport over the TCP or UDP protocols. A list of log files maintained
by rsyslogd can be found in the /etc/rsyslog.conf configuration file. Most log files are located in
the /var/log/ directory.

Log files can also be managed by the journald daemon – a component of systemd. The journald
daemon captures Syslog messages, kernel log messages, initial RAM disk and early boot messages, as
well as messages written to the standard output and standard error output of all services, indexes them, and
makes them available to the user. The native journal file format, which is a structured and indexed binary
file, improves searching and provides faster operation, and it also stores metadata such as
timestamps or user IDs. Log files produced by journald are not persistent by default; they are
stored only in memory or in a small ring buffer in the /run/log/journal/ directory. The amount of
logged data depends on free memory; when the capacity limit is reached, the oldest entries are deleted.
However, this setting can be altered – see Section 19.8.5, “Enabling Persistent Storage”. For more
information on Journal see Section 19.8, “Using the Journal”.

By default, these two logging tools coexist on your system. The journald daemon is the primary tool for
troubleshooting, and it also adds additional structured data to the messages it collects. Data acquired
by journald is forwarded into the /run/systemd/journal/syslog socket, which may be used by
rsyslogd to process the data further. However, by default rsyslog does the actual integration via the
imjournal input module, thus avoiding the aforementioned socket. You can also transfer data in the
opposite direction, from rsyslogd to journald, with the use of the omjournal module. See Section 19.5,
“Interaction of Rsyslog and Journal” for further information. The integration allows you to maintain text-based
logs in a consistent format to ensure compatibility with applications or configurations that depend
on rsyslogd. Also, you can maintain rsyslog messages in a structured format (see Section 19.6,
“Structured Logging with Rsyslog”).

19.1. Locating Log Files


Most log files are located in the /var/log/ directory. Some applications such as httpd and samba
have a directory within /var/log/ for their log files.

You may notice multiple files in the /var/log/ directory with numbers after them (for example, cron-
20100906). These numbers represent a timestamp that has been added to a rotated log file. Log files
are rotated so their file sizes do not become too large. The logrotate package contains a cron task
that automatically rotates log files according to the /etc/logrotate.conf configuration file and the
configuration files in the /etc/logrotate.d/ directory.

19.2. Basic Configuration of Rsyslog


The main configuration file for rsyslog is /etc/rsyslog.conf. Here, you can specify global directives,
modules, and rules that consist of filter and action parts. Also, you can add comments in the form of text
following a hash sign (#).

19.2.1. Filters
A rule is specified by a filter part, which selects a subset of syslog messages, and an action part, which
specifies what to do with the selected messages. To define a rule in your /etc/rsyslog.conf
configuration file, define both a filter and an action on one line and separate them with one or more
spaces or tabs.

rsyslog offers various ways to filter syslog messages according to selected properties. The available
filtering methods can be divided into Facility/Priority-based, Property-based, and Expression-based
filters.

Facility/Priority-based filters
The most used and well-known way to filter syslog messages is to use the facility/priority-based
filters which filter syslog messages based on two conditions: facility and priority separated by a
comma. To create a selector, use the following syntax:

FACILITY.PRIORITY

where:

FACILITY specifies the subsystem that produces a specific syslog message. For example,
the mail subsystem handles all mail-related syslog messages. FACILITY can be
represented by one of the following keywords: auth, authpriv, cron, daemon, kern,
lpr, mail, news, syslog, user, ftp, uucp, and local0 through local7.
PRIORITY specifies a priority of a syslog message. PRIORITY can be represented by one of
the following keywords (or by a number): debug (7), info (6), notice (5), warning (4),
err (3), crit (2), alert (1), and emerg (0).
The aforementioned syntax selects syslog messages with the defined or higher priority. By
preceding any priority keyword with an equal sign (=), you specify that only syslog messages
with the specified priority will be selected. All other priorities will be ignored. Conversely,
preceding a priority keyword with an exclamation mark (!) selects all syslog messages
except those with the defined priority.

In addition to the keywords specified above, you may also use an asterisk (*) to define all
facilities or priorities (depending on where you place the asterisk, before or after the comma).
Specifying the priority keyword none serves for facilities with no given priorities. Both facility and
priority conditions are case-insensitive.

To define multiple facilities and priorities, separate them with a comma (,). To define multiple
selectors on one line, separate them with a semi-colon (;). Note that each selector in the
selector field is capable of overwriting the preceding ones, which can exclude some priorities
from the pattern.

354
Chapter 19. Viewing and Managing Log Files

Example 19.1. Facility/Priority-based Filters

The following are a few examples of simple facility/priority-based filters that can be specified
in /etc/rsyslog.conf. To select all kernel syslog messages with any priority, add the
following text into the configuration file:

kern.*

To select all mail syslog messages with priority crit and higher, use this form:

mail.crit

To select all cron syslog messages except those with the info or debug priority, set the
configuration in the following form:

cron.!info,!debug

Property-based filters
Property-based filters let you filter syslog messages by any property, such as timegenerated
or syslogtag. For more information on properties, refer to Section 19.2.3, “Properties”. You
can compare each of the specified properties to a particular value using one of the compare-
operations listed in Table 19.1, “Property-based compare-operations”. Both property names and
compare operations are case-sensitive.

A property-based filter must start with a colon (:). To define the filter, use the following syntax:

:PROPERTY, [!]COMPARE_OPERATION, "STRING"

where:

The PROPERTY attribute specifies the desired property.


The optional exclamation point (!) negates the output of the compare-operation. Other
Boolean operators are currently not supported in property-based filters.
The COMPARE_OPERATION attribute specifies one of the compare-operations listed in
Table 19.1, “Property-based compare-operations”.
The STRING attribute specifies the value that the text provided by the property is compared
to. This value must be enclosed in quotation marks. To escape a certain character inside the
string (for example, a quotation mark (")), use the backslash character (\).

355
Red Hat Enterprise Linux 7.0 Beta System Administrators Guide

Table 19.1. Property-based compare-operations

contains — Checks whether the provided string matches any part of the text provided by the
property. To perform case-insensitive comparisons, use contains_i.
isequal — Compares the provided string against all of the text provided by the property. These
two values must be exactly equal to match.
startswith — Checks whether the provided string is found exactly at the beginning of the text
provided by the property. To perform case-insensitive comparisons, use startswith_i.
regex — Compares the provided POSIX BRE (Basic Regular Expression) regular expression
against the text provided by the property.
ereregex — Compares the provided POSIX ERE (Extended Regular Expression) regular
expression against the text provided by the property.
isempty — Checks if the property is empty. The value is discarded. This is especially useful
when working with normalized data, where some fields may be populated based on the
normalization result.

Example 19.2. Property-based Filters

The following are a few examples of property-based filters that can be specified in
/etc/rsyslog.conf. To select syslog messages which contain the string error in their
message text, use:

:msg, contains, "error"

The following filter selects syslog messages received from the hostname host1:

:hostname, isequal, "host1"

To select syslog messages which do not contain any mention of the words fatal and
error with any or no text between them (for example, fatal lib error), type:

:msg, !regex, "fatal .* error"

Expression-based filters
Expression-based filters select syslog messages according to defined arithmetic, Boolean or
string operations. Expression-based filters use rsyslog's own scripting language called
RainerScript to build complex filters. See Section 19.10, “Online Documentation” for the syntax
definition of this script along with examples of various expression-based filters. RainerScript is
also the basis for rsyslog's new configuration format; see Section 19.2.6, “Using the New
Configuration Format”.


The basic syntax of an expression-based filter looks as follows:

if EXPRESSION then ACTION else ACTION

where:

The EXPRESSION attribute represents an expression to be evaluated, for example: $msg
startswith 'DEVNAME' or $syslogfacility-text == 'local0'. You can specify
more than one expression in a single filter by using and/or operators.
The ACTION attribute represents an action to be performed if the expression returns the
value true. This can be a single action, or an arbitrarily complex script enclosed in curly
braces.
Expression-based filters are indicated by the keyword if at the start of a new line. The then
keyword separates the EXPRESSION from the ACTION. Optionally, you can employ the else
keyword to specify what action is to be performed in case the condition is not met.

With expression-based filters, you can nest the conditions by using a script enclosed in curly
braces as in Example 19.3, “Expression-based Filters”. The script allows you to use
facility/priority-based filters inside the expression. On the other hand, property-based filters are
not recommended here. RainerScript supports regular expressions with the specialized functions
re_match() and re_extract().

Example 19.3. Expression-based Filters

The following expression contains two nested conditions. The log files created by a program
called prog1 are split into two files based on the presence of the "test" string in the message.

if $programname == 'prog1' then {
    action(type="omfile" file="/var/log/prog1.log")
    if $msg contains 'test' then
        action(type="omfile" file="/var/log/prog1test.log")
    else
        action(type="omfile" file="/var/log/prog1notest.log")
}

19.2.2. Actions
Actions specify what is to be done with the messages filtered out by an already-defined selector. The
following are some of the actions you can define in your rule:

Saving syslog messages to log files


The majority of actions specify to which log file a syslog message is saved. This is done by
specifying a file path after your already-defined selector:

FILTER PATH

where FILTER stands for user-specified selector and PATH is a path of a target file.

For instance, the following rule is comprised of a selector that selects all cron syslog messages
and an action that saves them into the /var/log/cron.log log file:


cron.* /var/log/cron.log

By default, the log file is synchronized every time a syslog message is generated. Use a dash
mark (-) as a prefix of the file path you specified to omit syncing:

FILTER -PATH

Note that you might lose information if the system terminates right after a write attempt.
However, this setting can save some performance, especially if you run programs that produce
very verbose log messages.

Your specified file path can be either static or dynamic. Static files are represented by a fixed file
path as was shown in the example above. Dynamic file paths can differ according to the
received message. Dynamic file paths are represented by a template and a question mark (?)
prefix:

FILTER ?DynamicFile

where DynamicFile is a name of a predefined template that modifies output paths. You can
use the dash prefix (-) to disable syncing, and you can also use multiple templates separated by
a semicolon (;). For more information on templates, refer to Section 19.2.3, “Generating Dynamic File
Names”.

If the file you specified is an existing terminal or /dev/console device, syslog messages are
sent to standard output (using special terminal-handling) or your console (using special
/dev/console-handling) when using the X Window System, respectively.

Sending syslog messages over the network


rsyslog allows you to send and receive syslog messages over the network. This feature allows
you to administer syslog messages of multiple hosts on one machine. To forward syslog
messages to a remote machine, use the following syntax:

@[(zNUMBER)]HOST:[PORT]

where:

The at sign (@) indicates that the syslog messages are forwarded to a host using the UDP
protocol. To use the TCP protocol, use two at signs with no space between them (@@).
The optional zNUMBER setting enables zlib compression for syslog messages. The NUMBER
attribute specifies the level of compression (from 1 – lowest to 9 – maximum). Compression
gain is automatically checked by rsyslogd; messages are compressed only if there is any
compression gain, and messages below 60 bytes are never compressed.
The HOST attribute specifies the host which receives the selected syslog messages.
The PORT attribute specifies the host machine's port.

When specifying an IPv6 address as the host, enclose the address in square brackets ([, ]).


Example 19.4. Sending syslog Messages over the Network

The following are some examples of actions that forward syslog messages over the network
(note that all actions are preceded with a selector that selects all messages with any priority).
To forward messages to 192.168.0.1 via the UDP protocol, type:

*.* @192.168.0.1

To forward messages to "example.com" using port 18 and the TCP protocol, use:

*.* @@example.com:18

The following example compresses messages with zlib (level 9 compression) and forwards them to
2001::1 using the UDP protocol:

*.* @(z9)[2001::1]

Output channels
Output channels are primarily used to specify the maximum size a log file can grow to. This is
very useful for log file rotation (for more information, see Section 19.2.5, “Log Rotation”). An
output channel is basically a collection of information about the output action. Output channels
are defined by the $outchannel directive. To define an output channel in
/etc/rsyslog.conf, use the following syntax:

$outchannel NAME, FILE_NAME, MAX_SIZE, ACTION

where:

The NAME attribute specifies the name of the output channel.


The FILE_NAME attribute specifies the name of the output file. Output channels can write
only into files, not pipes, terminals, or other kinds of output.
The MAX_SIZE attribute represents the maximum size the specified file (in FILE_NAME) can
grow to. This value is specified in bytes.
The ACTION attribute specifies the action that is taken when the maximum size, defined in
MAX_SIZE, is hit.

To use the defined output channel as an action inside a rule, type:

FILTER :omfile:$NAME


Example 19.5. Output channel log rotation

The following output shows a simple log rotation through the use of an output channel. First,
the output channel is defined via the $outchannel directive:

$outchannel log_rotation, /var/log/test_log.log, 104857600, /home/joe/log_rotation_script

and then it is used in a rule that selects every syslog message with any priority and executes
the previously-defined output channel on the acquired syslog messages:

*.* :omfile:$log_rotation

Once the limit (in the example 100 MB) is hit, the /home/joe/log_rotation_script is
executed. This script can contain anything from moving the file into a different folder, editing
specific content out of it, or simply removing it.

Sending syslog messages to specific users


rsyslog can send syslog messages to specific users by specifying the username of the user you
wish to send the messages to (as in Example 19.7, “Specifying Multiple Actions”). To specify
more than one user, separate each username with a comma (,). To send messages to every
user that is currently logged on, use an asterisk (*).

Executing a program
rsyslog lets you execute a program for selected syslog messages and uses the system() call
to execute the program in the shell. To specify a program to be executed, prefix it with a caret
character (^). Consequently, specify a template that formats the received message and passes
it to the specified executable as a one-line parameter (for more information on templates, refer
to Section 19.2.3, “Templates”).

FILTER ^EXECUTABLE; TEMPLATE

Here, the output of the FILTER condition is processed by a program represented by
EXECUTABLE. This program can be any valid executable. Replace TEMPLATE with the name of
the formatting template.

Example 19.6. Executing a Program

In the following example, any syslog message with any priority is selected, formatted with the
template template and passed as a parameter to the test-program program, which is then
executed with the provided parameter:

*.* ^test-program;template


Be careful when using the shell execute action

When accepting messages from any host, and using the shell execute action, you may
be vulnerable to command injection. An attacker may try to inject and execute commands
in the program you specified to be executed in your action. To avoid any possible security
threats, thoroughly consider the use of the shell execute action.

Inputting syslog messages in a database


Selected syslog messages can be directly written into a database table using the database
writer action. The database writer uses the following syntax:

:PLUGIN:DB_HOST,DB_NAME,DB_USER,DB_PASSWORD;[TEMPLATE]

where:

The PLUGIN calls the specified plug-in that handles the database writing (for example, the
ommysql plug-in).
The DB_HOST attribute specifies the database hostname.
The DB_NAME attribute specifies the name of the database.
The DB_USER attribute specifies the database user.
The DB_PASSWORD attribute specifies the password used with the aforementioned database
user.
The TEMPLATE attribute specifies an optional use of a template that modifies the syslog
message. For more information on templates, refer to Section 19.2.3, “Templates”.

Using MySQL and PostgreSQL

Currently, rsyslog provides support for MySQL and PostgreSQL databases only. In
order to use the MySQL and PostgreSQL database writer functionality, install the
rsyslog-mysql and rsyslog-pgsql packages, respectively. Also, make sure you load the
appropriate modules in your /etc/rsyslog.conf configuration file:

$ModLoad ommysql # Output module for MySQL support
$ModLoad ompgsql # Output module for PostgreSQL support

For more information on rsyslog modules, refer to Section 19.4, “Using Rsyslog
Modules”.
Alternatively, you may use a generic database interface provided by the omlibdbi
module (supports Firebird/Interbase, MS SQL, Sybase, SQLite, Ingres, Oracle, and mSQL).
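Putting the pieces together, a rule that writes every syslog message into a MySQL database could, for example, look as follows. This is only a sketch: the host name and credentials are placeholders, and it assumes the default Syslog database created by the script shipped with the rsyslog-mysql package:

$ModLoad ommysql
*.* :ommysql:database-server.example.com,Syslog,rsyslog_user,rsyslog_password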

Discarding syslog messages


To discard your selected messages, use the tilde character (~).

FILTER ~

The discard action is mostly used to filter out messages before carrying on any further
processing. It can be effective if you want to omit some repeating messages that would
otherwise fill the log files. The results of the discard action depend on where in the configuration file
it is specified; for the best results, place these actions at the top of the action list. Please note that
once a message has been discarded, there is no way to retrieve it in later configuration file lines.

For instance, the following rule discards any cron syslog messages:

cron.* ~

Specifying Multiple Actions


For each selector, you are allowed to specify multiple actions. To specify multiple actions for one
selector, write each action on a separate line and precede it with an ampersand character (&):

FILTER ACTION
& ACTION
& ACTION

Specifying multiple actions improves the overall performance of the desired outcome since the specified
selector has to be evaluated only once.

Example 19.7. Specifying Multiple Actions

In the following example, all kernel syslog messages with the critical priority (crit) are sent to the user
joe, processed by the template temp and passed on to the test-program executable, and
forwarded to 192.168.0.1 via the UDP protocol.

kern.=crit joe
& ^test-program;temp
& @192.168.0.1

Any action can be followed by a template that formats the message. To specify a template, suffix an
action with a semicolon (;) and specify the name of the template. For more information on templates,
refer to Section 19.2.3, “Templates”.

Using templates

A template must be defined before it is used in an action, otherwise it is ignored. In other words,
template definitions should always precede rule definitions in /etc/rsyslog.conf.

19.2.3. Templates
Any output that is generated by rsyslog can be modified and formatted according to your needs with the
use of templates. To create a template use the following syntax in /etc/rsyslog.conf:

$template TEMPLATE_NAME,"text %PROPERTY% more text", [OPTION]

where:

$template is the template directive that indicates that the text following it, defines a template.
TEMPLATE_NAME is the name of the template. Use this name to refer to the template.


Anything between the two quotation marks ("…") is the actual template text. Within this text, you can
escape special characters, such as \n for new line or \r for carriage return. Other characters, such
as % or ", have to be escaped in case you want to use those characters literally.
The text specified within two percent signs (%) specifies a property that allows you to access specific
contents of a syslog message. For more information on properties, refer to Section 19.2.3,
“Properties”.
The OPTION attribute specifies any options that modify the template functionality. Do not confuse
them with property options, which are defined inside the template text (between "…"). The currently
supported template options are sql and stdsql, which are used for formatting the text as an SQL
query.

The sql and stdsql options

Note that the database writer (for more information, refer to section Inputting syslog messages
in a database in Section 19.2.2, “Actions”) checks whether the sql and stdsql options are
specified in the template. If they are not, the database writer does not perform any action. This
is to prevent any possible security threats, such as SQL injection.

Generating Dynamic File Names


Templates can be used to generate dynamic file names. By specifying a property as a part of the file
path, a new file will be created for each unique property, which is a convenient way to classify syslog
messages.

For example, use the timegenerated property, which extracts a timestamp from the message, to
generate a unique file name for each syslog message:

$template DynamicFile,"/var/log/test_logs/%timegenerated%-test.log"

Keep in mind that the $template directive only specifies the template. You must use it inside a rule for it
to take effect. In /etc/rsyslog.conf, use the question mark (?) in action definition to mark the
dynamic filename template:

*.* ?DynamicFile

Properties
Properties defined inside a template (within two percent signs (%)) allow you to access various contents
of a syslog message through the use of a property replacer. To define a property inside a template
(between the two quotation marks ("…")), use the following syntax:

%PROPERTY_NAME[:FROM_CHAR:TO_CHAR:OPTION]%

where:

The PROPERTY_NAME attribute specifies the name of a property. A comprehensive list of all available
properties and their detailed description can be found in the rsyslog.conf manual page under the
section Available Properties.
FROM_CHAR and TO_CHAR attributes denote a range of characters that the specified property will act
upon. Alternatively, regular expressions can be used to specify a range of characters. To do so, set
the letter R as the FROM_CHAR attribute and specify your desired regular expression as the TO_CHAR
attribute.


The OPTION attribute specifies any property options, such as the lowercase option to convert the
input to lowercase. A comprehensive list of all available property options and their detailed
description can be found in the rsyslog.conf manual page under the section Property Options.

The following are some examples of simple properties:

The following property obtains the whole message text of a syslog message:

%msg%

The following property obtains the first two characters of the message text of a syslog message:

%msg:1:2%

The following property obtains the whole message text of a syslog message and drops its last line
feed character:

%msg:::drop-last-lf%

The following property obtains the first 10 characters of the timestamp that is generated when the
syslog message is received and formats it according to the RFC 3339 date standard:

%timegenerated:1:10:date-rfc3339%

Template Examples
This section presents a few examples of rsyslog templates.

Example 19.8, “A verbose syslog message template” shows a template that formats a syslog message
so that it outputs the message's severity, facility, the timestamp of when the message was received, the
hostname, the message tag, the message text, and ends with a new line.

Example 19.8. A verbose syslog message template

$template verbose, "%syslogseverity%, %syslogfacility%, %timegenerated%, %HOSTNAME%, %syslogtag%, %msg%\n"

Example 19.9, “A wall message template” shows a template that resembles a traditional wall message (a
message that is sent to every user that is logged in and has their mesg(1) permission set to yes). This
template outputs the message text, along with a hostname, message tag and a timestamp, on a new line
(using \r and \n) and rings the bell (using \7).

Example 19.9. A wall message template

$template wallmsg,"\r\n\7Message from syslogd@%HOSTNAME% at %timegenerated% ...\r\n %syslogtag% %msg%\n\r"

Example 19.10, “A database formatted message template” shows a template that formats a syslog
message so that it can be used as a database query. Notice the use of the sql option at the end of the
template specified as the template option. It tells the database writer to format the message as a
MySQL SQL query.


Example 19.10. A database formatted message template

$template dbFormat,"insert into SystemEvents (Message, Facility, FromHost,
Priority, DeviceReportedTime, ReceivedAt, InfoUnitID, SysLogTag) values
('%msg%', %syslogfacility%, '%HOSTNAME%', %syslogpriority%,
'%timereported:::date-mysql%', '%timegenerated:::date-mysql%', %iut%,
'%syslogtag%')",sql

rsyslog also contains a set of predefined templates identified by the RSYSLOG_ prefix. These are
reserved for syslog use, and it is advisable not to create a template using this prefix to avoid conflicts.
The following list shows these predefined templates along with their definitions.

RSYSLOG_DebugFormat
A special format used for troubleshooting property problems.

"Debug line with all properties:\nFROMHOST: '%FROMHOST%', fromhost-ip:


'%fromhost-ip%', HOSTNAME: '%HOSTNAME%', PRI: %PRI%,\nsyslogtag
'%syslogtag%', programname: '%programname%', APP-NAME: '%APP-NAME%',
PROCID: '%PROCID%', MSGID: '%MSGID%',\nTIMESTAMP: '%TIMESTAMP%',
STRUCTURED-DATA: '%STRUCTURED-DATA%',\nmsg: '%msg%'\nescaped msg:
'%msg:::drop-cc%'\nrawmsg: '%rawmsg%'\n\n\"

RSYSLOG_SyslogProtocol23Format
The format specified in IETF's internet-draft ietf-syslog-protocol-23, which is assumed to
become the new syslog standard RFC.

"%PRI%1 %TIMESTAMP:::date-rfc3339% %HOSTNAME% %APP-NAME% %PROCID% %MSGID%


%STRUCTURED-DATA% %msg%\n\"

RSYSLOG_FileFormat
A modern-style logfile format similar to TraditionalFileFormat, but with high-precision
timestamps and timezone information.

"%TIMESTAMP:::date-rfc3339% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-


sp%%msg:::drop-last-lf%\n\"

RSYSLOG_TraditionalFileFormat
The older default log file format with low-precision timestamps.

"%TIMESTAMP% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg:::drop-last-


lf%\n\"

RSYSLOG_ForwardFormat
A forwarding format with high-precision timestamps and timezone information.

"%PRI%%TIMESTAMP:::date-rfc3339% %HOSTNAME% %syslogtag:1:32%%msg:::sp-if-


no-1st-sp%%msg%\"


RSYSLOG_TraditionalForwardFormat
The traditional forwarding format with low-precision timestamps.

"%PRI%%TIMESTAMP% %HOSTNAME% %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg%\"

19.2.4. Global Directives


Global directives are configuration options that apply to the rsyslogd daemon. They usually specify a
value for a specific pre-defined variable that affects the behavior of the rsyslogd daemon or a rule that
follows. All of the global directives must start with a dollar sign ($). Only one directive can be specified
per line. The following is an example of a global directive that specifies the maximum size of the syslog
message queue:

$MainMsgQueueSize 50000

The default size defined for this directive (10,000 messages) can be overridden by specifying a
different value (as shown in the example above).

You may define multiple directives in your /etc/rsyslog.conf configuration file. A directive affects
the behavior of all configuration options until another occurrence of that same directive is detected.
Global directives can be used to configure actions, queues, and debugging. A comprehensive list of all
available configuration directives can be found in Section 19.10, “Online Documentation”. Currently, a
new configuration format has been developed that replaces the $-based syntax (see Section 19.2.6,
“Using the New Configuration Format”). However, classic global directives remain supported as a legacy
format.

19.2.5. Log Rotation


The following is a sample /etc/logrotate.conf configuration file:

# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# uncomment this if you want your log files compressed
compress

All of the lines in the sample configuration file define global options that apply to every log file. In our
example, log files are rotated weekly, rotated log files are kept for the duration of 4 weeks, and all rotated
log files are compressed by gzip into the .gz format. Any lines that begin with a hash sign (#) are
comments and are not processed.

You may define configuration options for a specific log file and place them under the global options.
However, it is advisable to create a separate configuration file for any specific log file in the
/etc/logrotate.d/ directory and define any configuration options there.

The following is an example of a configuration file placed in the /etc/logrotate.d/ directory:


/var/log/messages {
    rotate 5
    weekly
    postrotate
        /usr/bin/killall -HUP syslogd
    endscript
}

The configuration options in this file are specific for the /var/log/messages log file only. The settings
specified here override the global settings where possible. Thus the rotated /var/log/messages log
file will be kept for five weeks instead of four weeks as was defined in the global options.

The following is a list of some of the directives you can specify in your logrotate configuration file:

weekly — Specifies the rotation of log files on a weekly basis. Similar directives include:
daily
monthly
yearly
compress — Enables compression of rotated log files. Similar directives include:
nocompress
compresscmd — Specifies the command to be used for compressing.
uncompresscmd
compressext — Specifies what extension is to be used for compressing.
compressoptions — Lets you specify any options that may be passed to the used compression
program.
delaycompress — Postpones the compression of log files to the next rotation of log files.
rotate INTEGER — Specifies the number of rotations a log file undergoes before it is removed or
mailed to a specific address. If the value 0 is specified, old log files are removed instead of rotated.
mail ADDRESS — This option enables mailing of log files that have been rotated as many times as
is defined by the rotate directive to the specified address. Similar directives include:
nomail
mailfirst — Specifies that the just-rotated log files are to be mailed, instead of the about-to-
expire log files.
maillast — Specifies that the about-to-expire log files are to be mailed, instead of the just-
rotated log files. This is the default option when mail is enabled.
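
For example, a per-log-file configuration in the /etc/logrotate.d/ directory that combines several of the directives above could look like the following sketch (the log file path and mail address are placeholders):

/var/log/example.log {
    monthly
    rotate 6
    compress
    delaycompress
    mail admin@example.com
    maillast
}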

For the full list of directives and various configuration options, refer to the logrotate man page
(man logrotate).

19.2.6. Using the New Configuration Format


In rsyslog version 6, a new configuration syntax has been introduced. This new configuration format
aims to be more powerful, more intuitive, and to prevent some typical mistakes by not permitting certain
invalid constructs. The syntax enhancement is enabled by the new configuration processor that relies on
RainerScript. The legacy format is still fully supported and it is used by the default
/etc/rsyslog.conf.

RainerScript is a scripting language designed for processing network events and configuring event
processors such as rsyslog. RainerScript was primarily used to define expression-based filters, see
Example 19.3, “Expression-based Filters”. The newest version of RainerScript implements the input()
and ruleset() statements, which permit the /etc/rsyslog.conf configuration file to be written in


new style only.

In the following examples you can compare the configuration written with legacy-style parameters:

$InputFileName /tmp/inputfile
$InputFileTag tag1:
$InputFileStateFile inputfile-state
$InputRunFileMonitor

and the same configuration with use of the new format statement:

input(type="imfile" file="/tmp/inputfile" tag="tag1:" statefile="inputfile-state")

This significantly reduces the number of parameters used in configuration, improves readability, and also
provides higher execution speed. For more information on RainerScript statements and parameters see
Section 19.10, “Online Documentation”.

19.2.7. Rulesets
Leaving special directives aside, rsyslog handles messages as defined by rules that consist of a filter
condition and an action to be performed if the condition is true. With traditionally written
/etc/rsyslog.conf, all rules are evaluated in order of appearance for every input message. This
process starts with the first rule and continues until all rules have been processed or until the message is
discarded by one of the rules.

However, rules can be grouped into sequences called rulesets. With rulesets, you can limit the effect of
certain rules only to selected inputs or enhance the performance of rsyslog by defining a distinct set of
actions bound to a specific input. In other words, filter conditions that will be inevitably evaluated as false
for certain types of messages can be skipped. With the new configuration format, the input() and
ruleset() statements are reserved for this operation. The ruleset definition in /etc/rsyslog.conf
can look as follows:

ruleset(name="rulesetname") {
rule
rule2
call rulesetname2

}

Replace rulesetname with an identifier for your ruleset. The ruleset name cannot start with
RSYSLOG_ since this namespace is reserved for use by rsyslog. RSYSLOG_DefaultRuleset then
defines the default set of rules to be performed if a message has no other ruleset assigned. With rule and
rule2 you can define rules in the filter-action format mentioned above. With the call parameter, you can
nest rulesets by calling them from inside another ruleset block.

After creating a ruleset, you need to specify which inputs it will apply to:

input(type="input_type" port="port_num" ruleset="rulesetname");

Here you can identify an input message by input_type, which is an input module that gathered the
message, or by port_num – the port number. Other parameters such as file or tag can be specified for
input(). Replace rulesetname with a name of the ruleset to be evaluated against the message. In
case an input message is not explicitly bound to a ruleset, the default ruleset is triggered.

You can also use the legacy format to define rulesets; for more information, see Section 19.10, “Online
Documentation”.

Example 19.11. Using rulesets

The following rulesets ensure different handling of remote messages coming from different ports.
Type the following into /etc/rsyslog.conf:

ruleset(name="remote-10514") {
action(type="omfile" file="/var/log/remote-10514")
}

ruleset(name="remote-10515") {
cron.* action(type="omfile" file="/var/log/remote-10515-cron")
mail.* action(type="omfile" file="/var/log/remote-10515-mail")
}

input(type="imtcp" port="10514" ruleset="remote-10514");


input(type="imtcp" port="10515" ruleset="remote-10515");

The rulesets shown in the above example define log destinations for the remote input from two ports; in
the case of port 10515, messages are sorted according to the facility. Then, the TCP input is enabled and
bound to the rulesets. Note that you must load the required modules (imtcp) for this configuration to work.

19.2.8. Compatibility with syslogd


From rsyslog version 6, compatibility mode specified via the -c option has been removed. Also, the
syslogd-style command line options are deprecated and configuring rsyslog through these command
line options should be avoided. However, you can use several templates and directives to configure
rsyslogd to emulate syslogd-like behavior.

For more information on various rsyslogd options, refer to man rsyslogd.

19.3. Working with Queues in Rsyslog


Queues are used to pass content, mostly syslog messages, between components of rsyslog. With
queues, rsyslog is capable of processing multiple messages simultaneously and of applying several actions
to a single message at once. The data flow inside rsyslog can be illustrated as follows:


Figure 19.1. Message Flow in Rsyslog

Whenever rsyslog receives a message, it passes this message to the preprocessor and then places it
into the main message queue. Messages wait there to be dequeued and passed to the rule processor.

The rule processor is a parsing and filtering engine. Here, the rules defined in /etc/rsyslog.conf
are applied. Based on these rules, the rule processor evaluates which actions are to be performed. Each
action has its own action queue. Messages are passed through this queue to the respective action
processor which creates the final output. Note that at this point, several actions can run simultaneously
on one message. For this purpose, a message is duplicated and passed to multiple action processors.

Only one queue per action is possible. Depending on configuration, the messages can be sent right to
the action processor without action queuing. This is the behavior of direct queues (see below). In case
the output action fails, the action processor notifies the action queue, which then takes an unprocessed
element back, and after some time interval the action is attempted again.

To sum up, we recognize two positions where queues stand in rsyslog: either in front of the rule
processor as a single main message queue or in front of various types of output actions as action queues.
Queues provide two main advantages that both lead to increased performance of message processing:

they serve as buffers that decouple producers and consumers in the structure of rsyslog
they allow for parallelization of actions performed on messages

Apart from this, queues can be configured with several directives to provide optimal performance for your
system. These configuration options are covered in the following chapters. For more information, see
Section 19.10, “Online Documentation”.

19.3.1. Defining Queues


Based on where the messages are stored, we recognize several types of queues: direct, in-memory,
disk, and disk-assisted in-memory queues, which are most widely used. You can choose one of these types
for the main message queue and also for action queues. Type the following into /etc/rsyslog.conf:

$objectQueueType queue_type

Here, you can apply the setting for the main message queue (replace object with MainMsg) or for an
action queue (replace object with Action). Replace queue_type with one of direct, linkedlist
or fixedarray (which are in-memory queues), or disk.

The default setting for the main message queue is the FixedArray queue with a limit of 10,000 messages.
Action queues are by default set as Direct queues.

Direct Queues
For many simple operations, such as when writing output to a local file, building a queue in front of an
action is not needed. To avoid queuing, use:

$objectQueueType Direct


Replace object with MainMsg or with Action to use this option for the main message queue or for an
action queue, respectively. With a direct queue, messages are passed directly and immediately from the
producer to the consumer.

Disk Queues
Disk queues store messages strictly on hard drive, which makes them highly reliable but also the slowest
of all possible queuing modes. This mode can be used to prevent the loss of highly important log data.
However, disk queues are not recommended in regular use cases. To set a disk queue, type the
following into /etc/rsyslog.conf:

$objectQueueType Disk

Replace object with MainMsg or with Action to use this option to the main message queue or for an
action queue respectively. Disk queues are written in parts of default size 10 Mb. This default size can be
modified with the following configuration directive:

$objectQueueMaxFileSize size

where size represents the specified size of a disk queue part. The defined size limit is not restrictive;
rsyslog always writes one complete queue entry, even if it violates the size limit. Each part of a disk
queue matches an individual file. The naming directive for these files looks as follows:

$objectQueueFilename name

This sets a name prefix for the file followed by a 7-digit number starting at one and incremented for each
file.
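
For example, a minimal sketch of a disk queue for the main message queue (the file name prefix and part size are arbitrary example values) could look as follows:

$MainMsgQueueType Disk
$MainMsgQueueFilename main_disk_queue
$MainMsgQueueMaxFileSize 50m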

In-memory Queues
With in-memory queues, the enqueued messages are held in memory, which makes the process very fast.
The queued data is lost in case of a hard reset or after a regular shutdown. However, you can use the
$ActionQueueSaveOnShutdown setting to save the data before shutdown. There are two types of in-
memory queues:

FixedArray queue — the default mode for the main message queue, with a limit of 10,000 elements.
This type of queue uses a fixed, pre-allocated array that holds pointers to queue elements. Due to
these pointers, even if the queue is empty a certain amount of memory is consumed. However,
FixedArray offers the best run time performance and is optimal when you expect a relatively low
number of queued messages and high performance.
LinkedList queue — here, all structures are dynamically allocated in a linked list, thus the memory is
allocated only when needed. LinkedList queues handle occasional message bursts very well.

In general, use LinkedList queues when in doubt. Compared to FixedArray, it consumes less memory
and lowers the processing overhead.

Use the following syntax to configure in-memory queues:

$objectQueueType LinkedList

$objectQueueType FixedArray

Replace object with MainMsg or with Action to use this option to the main message queue or for an
action queue respectively.


Disk-Assisted In-memory Queues


Both disk and in-memory queues have their advantages, and rsyslog lets you combine them in disk-
assisted in-memory queues. To do so, configure a regular in-memory queue and add the
$objectQueueFileName directive to define a file name for disk assistance. This queue then becomes
disk-assisted, which means it couples an in-memory queue with a disk queue to work in tandem.

The disk queue is activated if the in-memory queue is full or needs to be persisted on shutdown. With a
disk-assisted queue, you can set both disk-specific and in-memory-specific configuration parameters.
This type of queue is probably the most commonly used; it is especially useful for potentially long-running
and unreliable actions.

To specify the functioning of a disk-assisted in-memory queue, use the so-called watermarks:

$objectQueueHighWatermark number

$objectQueueLowWatermark number

Replace object with MainMsg or with Action to use this option for the main message queue or for an
action queue, respectively. Replace number with a number of enqueued messages. When an in-memory
queue reaches the number defined by the high watermark, it starts writing messages to disk and
continues until the in-memory queue size drops to the number defined with the low watermark. Correctly
set watermarks minimize unnecessary disk writes, but also leave memory space for message bursts,
since writing to disk files is rather lengthy. Therefore, the high watermark must be lower than the whole
queue capacity set with $objectQueueSize. The difference between the high watermark and the overall
queue size is a spare memory buffer reserved for message bursts. On the other hand, setting the high
watermark too low will turn on disk assistance unnecessarily often.
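
A sketch of an action queue configured as a disk-assisted in-memory queue might therefore combine the directives above as follows (the file name prefix and watermark values are illustrative only):

$ActionQueueType LinkedList
# Defining a file name enables disk assistance:
$ActionQueueFileName example_queue
$ActionQueueHighWatermark 8000
$ActionQueueLowWatermark 2000
$ActionQueueSaveOnShutdown on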

19.3.2. Managing Queues


All types of queues can be further configured to match your requirements. You can use several directives
to modify both action queues and the main message queue. Currently, there are more than 20 queue
parameters available; see Section 19.10, “Online Documentation”. Some of these settings are commonly
used; others, such as worker thread management, provide closer control over the queue behavior
and are reserved for advanced users. With advanced settings, you can optimize rsyslog's performance,
schedule queuing, or modify the behavior of queues on system shutdown.

Limiting Queue Size


You can limit the number of messages that a queue can contain with the following setting:

$objectQueueHighWatermark number

Replace object with MainMsg or with Action to use this option for the main message queue or for an
action queue, respectively. Replace number with a number of enqueued messages. You can set the
queue size only as the number of messages, not as their actual memory size. The default queue size is
10,000 messages for the main message queue and ruleset queues, and 1,000 for action queues.

Disk-assisted queues are unlimited by default and cannot be restricted with this directive, but you can
reserve physical disk space for them in bytes with the following setting:

$objectQueueMaxDiscSpace number

Replace object with MainMsg or with Action. When the size limit specified by number is hit,
messages are discarded until a sufficient amount of space is freed by dequeued messages.

Discarding Messages
When a queue reaches a certain number of messages, you can discard less important messages in
order to save space in the queue for entries of higher priority. The threshold that launches the discarding
process can be set with the so-called discard mark:

$objectQueueDiscardMark number

Replace object with MainMsg or with Action to use this option for the main message queue or for an
action queue, respectively. Here, number stands for the number of messages that have to be in the queue
to start the discarding process. To define which messages to discard, use:

$objectQueueDiscardSeverity priority

Replace priority with one of the following keywords (or with a number): debug (7), info (6), notice
(5), warning (4), err (3), crit (2), alert (1), and emerg (0). With this setting, both newly incoming
and already queued messages with a priority lower than the defined one are erased from the queue
immediately after the discard mark is reached.
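
For instance, a sketch for an action queue that starts discarding its least important messages once 7,500 messages are enqueued (the values are illustrative; 6 is the numeric code for the info priority) could be:

$ActionQueueDiscardMark 7500
$ActionQueueDiscardSeverity 6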

Using Timeframes
You can configure rsyslog to process queues during a specific time period. With this option, you can for
example transfer some processing into off-peak hours. To define a timeframe, use the following syntax:

$objectQueueDequeueTimeBegin hour

$objectQueueDequeueTimeEnd hour

With hour you can specify hours that bound your timeframe. Use the 24-hour format without minutes.

Configuring Worker Threads


A worker thread performs a specified action on the enqueued message. For example, in the main
message queue, a worker task is to apply filter logic to each incoming message and enqueue it to
the relevant action queues. When a message arrives, a worker thread is started automatically. When the
number of messages reaches a certain number, another worker thread is turned on. To specify this
number, use:

$objectQueueWorkerThreadMinimumMessages number

Replace number with a number of messages that will trigger a supplemental worker thread. For example,
with number set to 100, a new worker thread is started when more than 100 messages arrive. When
more than 200 messages arrive, the third worker thread starts, and so on. However, too many worker
threads running in parallel become inefficient, so you can limit their maximum number by using:

$objectQueueWorkerThreads number

where number stands for the maximum number of worker threads that can run in parallel. For the main
message queue, the default limit is 5 worker threads. Once a worker thread has been started, it keeps
running until an inactivity timeout expires. To set the length of this timeout, type:

$objectQueueWorkerTimeoutThreadShutdown time


Replace time with the duration set in milliseconds. Without this setting, a zero timeout is applied and a
worker thread is terminated immediately when it runs out of messages. If you specify time as -1, no
thread will be closed.
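
For example, the following illustrative settings for the main message queue start an additional worker thread for every 40,000 queued messages, allow at most 3 worker threads in parallel, and shut an inactive worker down after 60,000 milliseconds (all three values are examples only):

$MainMsgQueueWorkerThreadMinimumMessages 40000
$MainMsgQueueWorkerThreads 3
$MainMsgQueueWorkerTimeoutThreadShutdown 60000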

Batch Dequeuing
To increase performance, you can configure rsyslog to dequeue multiple messages at once. To set the
upper limit for such dequeueing, use:

$objectQueueDequeueBatchSize number

Replace number with the maximum number of messages that can be dequeued at once. Note that a higher
setting combined with a higher number of permitted worker threads results in greater memory
consumption.
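
For example, to let the main message queue dequeue up to 256 messages in a single batch (the value is illustrative only), use:

$MainMsgQueueDequeueBatchSize 256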

Terminating Queues
When terminating a queue that still contains messages, you can try to minimize the data loss by
specifying a time interval for worker threads to finish the queue processing:

$objectQueueTimeoutShutdown time

Specify time in milliseconds. If after that period there are still some enqueued messages, workers finish
the current data element and then terminate. Unprocessed messages are therefore lost. Another time
interval can be set for workers to finish the final element:

$objectQueueTimeoutActionCompletion time

In case this timeout expires, any remaining workers are shut down. To save data at shutdown, use:

$objectQueueTimeoutSaveOnShutdown time

If set, all queue elements are saved to disk before rsyslog terminates.
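
For example, the following illustrative settings give the workers of an action queue 1,500 milliseconds to empty the queue at shutdown and a further 1,000 milliseconds to complete the element being processed (both values are examples only):

$ActionQueueTimeoutShutdown 1500
$ActionQueueTimeoutActionCompletion 1000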

19.4. Using Rsyslog Modules


Due to its modular design, rsyslog offers a variety of modules which provide additional functionality.
Note that modules can be written by third parties. Most modules provide additional inputs (see Input
Modules below) or outputs (see Output Modules below). Other modules provide special functionality
specific to each module. The modules may provide additional configuration directives that become
available after a module is loaded. To load a module, use the following syntax:

$ModLoad MODULE

where $ModLoad is the global directive that loads the specified module and MODULE represents your
desired module. For example, if you want to load the Text File Input Module (imfile) that enables
rsyslog to convert any standard text file into syslog messages, specify the following line in your
/etc/rsyslog.conf configuration file:

$ModLoad imfile

rsyslog offers a number of modules which are split into the following main categories:

Input Modules — Input modules gather messages from various sources. The name of an input
module always starts with the im prefix, such as imfile, imjournal, etc.
Output Modules — Output modules provide a facility to forward messages to various targets, for
example sending them across the network, storing them in a database, or encrypting them. The name of an
output module always starts with the om prefix, such as omsnmp, omrelp, etc.
Parser Modules — The name of a parser module always starts with the pm prefix, such as
pmrfc5424, pmrfc3164, etc. These modules are useful when you want to create your custom
parsing or to parse malformed messages. With moderate knowledge of the C programming
language, you can create your own message parser.
Message Modification Modules — Message modification modules change the content of syslog
messages. Names of these modules start with the mm prefix. Message modification modules such as
mmanon, mmnormalize, or mmjsonparse are used for anonymization or normalization of
messages.
String Generator Modules — String generator modules generate strings based on the message
content and strongly cooperate with the template feature provided by rsyslog. For more information
on templates, refer to Section 19.2.3, “Templates”. The name of a string generator module always
starts with the sm prefix, such as smfile or smtradfile.
Library Modules — Library modules provide functionality for other loadable modules. These modules
are loaded automatically by rsyslog when needed and cannot be configured by the user.

A comprehensive list of all available modules and their detailed description can be found at
http://www.rsyslog.com/doc/rsyslog_conf_modules.html

Make sure you use trustworthy modules only

Note that when rsyslog loads any modules, it provides them with access to some of its functions
and data. This poses a possible security threat. To minimize security risks, use trustworthy
modules only.

19.4.1. Importing Text Files


The Text File Input Module, abbreviated as imfile, enables rsyslog to convert any text file into a
stream of syslog messages. You can use imfile to import log messages from applications that create
their own text file logs. To load imfile, type the following into /etc/rsyslog.conf:

$ModLoad imfile
$InputFilePollInterval int

It is sufficient to load imfile once, even when you import multiple files. The $InputFilePollInterval
global directive specifies how often rsyslog checks for changes in connected text files. The default
interval is 10 seconds; to change it, replace int with a time interval specified in seconds.

To identify the text files you want to import, use the following syntax in /etc/rsyslog.conf:

# File 1
$InputFileName path_to_file
$InputFileTag tag:
$InputFileStateFile state_file_name
$InputFileSeverity severity
$InputFileFacility facility
$InputRunFileMonitor

# File 2
$InputFileName path_to_file2
...

Four settings are required to specify an input text file:

replace path_to_file with a path to the text file


replace tag: with a tag name for this message
replace state_file_name with a unique name for the state file. State files, which are stored in the
rsyslog working directory, keep cursors for the monitored files, marking what part of the file has already
been processed. If you delete them, whole files will be read in again. Make sure that you specify a
name that does not already exist.
add the $InputRunFileMonitor directive that enables the file monitoring. Without this setting, the text
file will be ignored.

Apart from the required directives, there are several other settings that you can apply to the text input.
You can set the severity of imported messages by replacing severity, or replace facility to define the
subsystem that produced the message.

Example 19.12. Importing Text Files

The Apache HTTP server creates log files in text format. To apply the processing capabilities of
rsyslog to Apache error messages, you need to first import them with the use of the imfile module.
Type the following into /etc/rsyslog.conf:

$ModLoad imfile

$InputFileName /var/log/httpd/error_log
$InputFileTag apache-error:
$InputFileStateFile state-apache-error
$InputRunFileMonitor

19.4.2. Exporting Messages to a Database


Processing of log data can be faster and more convenient when performed in a database rather than
with text files. Based on the type of DBMS you want to use, you can choose from various output modules
such as ommysql, ompgsql, omoracle, or ommongodb. Alternatively, you can use the generic
omlibdbi output module that relies on the libdbi library. The omlibdbi module supports the database
systems Firebird/Interbase, MS SQL, Sybase, SQLite, Ingres, Oracle, mSQL, MySQL, and PostgreSQL.

Example 19.13. Exporting Rsyslog Messages to a Database

To store the rsyslog messages in a MySQL database, type the following into /etc/rsyslog.conf:

$ModLoad ommysql

$ActionOmmysqlServerPort 1234
*.* :ommysql:database-server,database-name,database-userid,database-password

First, the output module is loaded, then the communication port is specified. Additional information,
such as name of the server and the database, and authentication data, is specified on the last line of
the above example.

19.4.3. Enabling Encrypted Transport


Confidentiality and integrity in network transmission can be provided by either the TLS or GSSAPI
encryption protocol.

Transport Layer Security (TLS) is a cryptographic protocol designed to provide communication security
over the network. When using TLS, rsyslog messages are encrypted before sending, and mutual
authentication exists between the sender and receiver.

Generic Security Service API (GSSAPI) is an application programming interface for programs to access
security services. To use it in connection with rsyslog you must have a functioning Kerberos
environment.
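
As a minimal sketch of TLS-encrypted forwarding (the certificate path, remote host name, and port are assumptions used only for illustration, and the receiving side needs a matching TLS configuration):

# use the gtls network stream driver and a CA certificate for verification
$DefaultNetstreamDriver gtls
$DefaultNetstreamDriverCAFile /etc/pki/rsyslog/ca.pem
# require TLS for the following forwarding action, without certificate-based peer authentication
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode anon
*.* @@remote-host.example.com:6514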

19.4.4. Using RELP


Reliable Event Logging Protocol (RELP) is a networking protocol for data logging in computer networks.
It is designed to provide a reliable delivery of event messages, which makes it useful in environments
where message loss is not acceptable.
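
As an illustrative sketch (the host name and port are assumptions), a sending host can forward all messages with the omrelp output module, while the receiving host listens with the imrelp input module:

# sender: /etc/rsyslog.conf
$ModLoad omrelp
*.* :omrelp:remote-host.example.com:2514

# receiver: /etc/rsyslog.conf
$ModLoad imrelp
$InputRELPServerRun 2514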

19.5. Interaction of Rsyslog and Journal


As mentioned above, Rsyslog and Journal, the two logging applications present on your system, have
several distinctive features that make them suitable for specific use cases. In many situations it is useful
to combine their capabilities, for example to create structured messages and store them in a file
database (see Section 19.6, “Structured Logging with Rsyslog”). A communication interface needed for
this cooperation is provided by input and output modules on the side of Rsyslog and by the Journal's
communication socket.

By default, rsyslogd uses the imjournal module as a default input mode for journal files. With this
module, you import not only the messages but also the structured data provided by journald. Also, you
can import older data from journald (unless it is forbidden with the
$ImjournalIgnorePreviousMessages directive). See Section 19.6.1, “Importing Data from Journal”
for basic configuration of imjournal.

As an alternative, you can configure rsyslogd to read from the socket provided by journal as an
output for syslog-based applications. The path to the socket is /run/systemd/journal/syslog. Use
this option when you wish to maintain plain rsyslog messages. Compared to imjournal, the socket
input currently offers more features, such as ruleset binding or filtering. To import Journal data through
the socket, use the following configuration in /etc/rsyslog.conf:

$ModLoad imuxsock

The path to the system socket is specified in /etc/rsyslog.d/listen.conf as follows:

$SystemLogSocketName /run/systemd/journal/syslog

You can also output messages from Rsyslog to Journal with the omjournal module.

$ModLoad omjournal

*.* :omjournal:

For instance, the following configuration forwards all messages received on TCP port 10514 to the Journal:

$ModLoad imtcp
$ModLoad omjournal

$RuleSet remote
*.* :omjournal:

$InputTCPServerBindRuleset remote
$InputTCPServerRun 10514

19.6. Structured Logging with Rsyslog


On systems that produce large amounts of log data, it can be convenient to maintain log messages in a
structured format. With structured messages, it is easier to search for particular information, to produce
statistics and to cope with changes and inconsistencies in message structure. Rsyslog uses the JSON
(JavaScript Object Notation) format to provide structure for log messages.

Compare the following unstructured log message:

Oct 25 10:20:37 localhost anacron[1395]: Jobs will be executed sequentially

with a structured one:

{"timestamp":"2013-10-25T10:20:37", "host":"localhost", "program":"anacron",


"pid":"1395", "msg":"Jobs will be executed sequentially"}

Searching structured data with the use of key-value pairs is faster and more precise than searching text
files with regular expressions. The structure also lets you search for the same entry in messages produced
by various applications. Also, you can store JSON files in a document database such as MongoDB, which
provides additional performance and analysis capabilities. On the other hand, a structured message
requires more disk space than the unstructured one.

In rsyslog, log messages with meta data are pulled from Journal with the use of the imjournal module.
With the mmjsonparse module, you can parse data imported from Journal and from other sources
and process them further, for example as a database output. For parsing to be successful,
mmjsonparse requires input messages to be structured in a way that is defined by the Lumberjack
project.

The Lumberjack project aims to add structured logging to rsyslog in a backward-compatible way. To
identify a structured message, Lumberjack specifies the @cee: string that prepends the actual JSON
structure. Also, Lumberjack defines the list of standard field names that should be used for entities
in the JSON string. For more information on Lumberjack, see Section 19.10, “Online Documentation”.

The following is an example of a lumberjack-formatted message:

@cee: {"pid":17055, "uid":1000, "gid":1000, "appname":"logger", "msg":"Message


text."}

To build this structure inside Rsyslog, a template is used; see Section 19.6.2, “Filtering Structured
Messages”. Applications and servers can employ the libumberlog library to generate messages in
the lumberjack-compliant form. For more information on libumberlog, see Section 19.10, “Online
Documentation”.

19.6.1. Importing Data from Journal


The imjournal module is Rsyslog's input module to natively read the journal files (see Section 19.5,
“Interaction of Rsyslog and Journal”). Journal messages are then logged in text format as other rsyslog
messages. However, with further processing, it is possible to translate meta data provided by journal
into a structured message.

To import data from Journal to Rsyslog, use the following configuration in /etc/rsyslog.conf:

$ModLoad imjournal

$imjournalPersistStateInterval number_of_messages
$imjournalStateFile path
$imjournalRatelimitInterval seconds
$imjournalRatelimitBurst burst_number
$ImjournalIgnorePreviousMessages off/on

*.* :imjournal:

With number_of_messages, you can specify how often the Journal data will be saved. This will
happen each time the specified number of messages is reached.
Replace path with a path to the state file. This file tracks the journal entry that was the last one
processed.
With seconds, you set the length of the rate limit interval. The number of messages processed during
this interval cannot exceed the value specified in burst_number. The default setting is 20,000
messages per 600 seconds. Rsyslog discards messages that come after the maximum burst within
the rate limit interval has been reached.
With $ImjournalIgnorePreviousMessages you can ignore messages that are currently in
Journal and import only new messages, which is used when there is no state file specified. The
default setting is off. Please note that if this setting is off and there is no state file, all messages in
the Journal are processed, even if they were already processed in a previous rsyslog session.

Note

You can use imjournal simultaneously with the imuxsock module, which is the traditional system log
input. However, to avoid message duplication, you must prevent imuxsock from reading the
Journal's system socket. To do so, use the $OmitLocalLogging directive:

$ModLoad imuxsock
$ModLoad imjournal

$OmitLocalLogging on
$AddUnixListenSocket /run/systemd/journal/syslog

You can translate all data and meta data stored by Journal into structured messages. Some of these
meta data entries are listed in Example 19.15, “Verbose journalctl Output”; for a complete list of journal
fields, see the systemd.journal-fields(7) man page. For example, it is possible to focus on kernel journal
fields, which are used by messages originating in the kernel.

19.6.2. Filtering Structured Messages


To create a lumberjack-formatted message that is required by rsyslog's parsing module, you can use
the following template:

template(name="CEETemplate" type="string" string="%TIMESTAMP% %HOSTNAME%


%syslogtag% @cee: %$!all-json%\n")

This template prepends the @cee: string to the JSON string and can be applied, for example, when
creating an output file with the omfile module. To access JSON field names, use the $! prefix. For
example, the following filter condition searches for messages with a specific hostname and UID:

($!hostname == "hostname" && $!UID == "UID")
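
For example, the template can be referenced from an omfile action to write lumberjack-formatted messages to a file; the output path below is an assumption used only for illustration:

*.* action(type="omfile" file="/var/log/ceelog" template="CEETemplate")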

19.6.3. Parsing JSON


The mmjsonparse module is used for parsing structured messages. These messages can come from
Journal or from other input sources, and must be formatted in a way defined by the Lumberjack
project. These messages are identified by the presence of the @cee: string. Then, mmjsonparse
checks whether the JSON structure is valid, and the message is parsed.

To parse lumberjack-formatted JSON messages with mmjsonparse, use the following configuration in
/etc/rsyslog.conf:

$ModLoad mmjsonparse

*.* :mmjsonparse:

In this example, the mmjsonparse module is loaded on the first line, then all messages are forwarded
to it. Currently, there are no configuration parameters available for mmjsonparse.

19.6.4. Storing Messages in the MongoDB


Rsyslog supports storing JSON logs in the MongoDB document database through the ommongodb
output module.

To forward log messages into MongoDB, use the following syntax in /etc/rsyslog.conf
(configuration parameters for ommongodb are available only in the new configuration format; see
Section 19.2.6, “Using the New Configuration Format”):

$ModLoad ommongodb

*.* action(type="ommongodb" server="DB_server" serverport="port" db="DB_name"


collection="collection_name" uid="UID" pwd="password")

Replace DB_server with the name or address of the MongoDB server. Specify port if you want to
use a non-standard port on the MongoDB server. The default port value is 0 and usually there
is no need to change this parameter.
With DB_name, you can identify to which database on the MongoDB server you want to direct your
output. Replace collection_name with the name of a collection in this database. In MongoDB, a
collection is a group of documents, the equivalent of an RDBMS table.
You can set your login details by replacing UID and password.

You can shape the form of the final database output with the use of templates. By default, rsyslog uses a
template based on standard lumberjack field names.

19.7. Debugging Rsyslog


To run rsyslogd in the debugging mode, use the following command:

rsyslogd -dn

With this command, rsyslogd produces debugging information and prints it to the standard output. The
-n stands for "no fork". You can modify debugging with environment variables; for example, you can
store the debug output in a log file. Before starting rsyslogd, type the following on the command line:

export RSYSLOG_DEBUGLOG="path"
export RSYSLOG_DEBUG="Debug"

Replace path with a desired location for the file where the debugging information will be logged. For a
complete list of options available for the RSYSLOG_DEBUG variable, see the related section in the
rsyslogd man page.

To check whether the syntax used in your /etc/rsyslog.conf file is valid, use:

rsyslogd -N 1

Where 1 represents the level of verbosity of the output messages. This is a forward-compatibility option
because currently only one level is provided. However, you must add this argument to run the validation.

19.8. Using the Journal


The Journal is a component of systemd that is responsible for viewing and managing log files. It
can be used in parallel with, or in place of, a traditional syslog daemon, such as rsyslogd. The Journal was
developed to address problems connected with traditional logging. It is closely integrated with the rest of
the system, supports various logging technologies and access management for the log files.

Logging data are collected, stored and processed by the Journal's journald service. It creates and
maintains binary files called journals based on logging information that is received from the kernel, from
user processes, from standard output and standard error output of system services or via its native API.
These journals are structured and indexed, which provides relatively fast seek times. Journal entries can
carry a unique identifier. The journald service collects numerous meta data fields for each log
message and the actual journal files are secured.

19.8.1. Viewing Log Files


To access the journal logs, use the journalctl tool. For a basic view of the logs, type as root:

journalctl

The output of this command is a list of all log entries generated on the system, including messages
generated by system components and by users. The structure of this output is similar to the one used in
/var/log/messages but with certain improvements:

the priority of entries is marked visually. Lines of error priority and higher are highlighted with red
color and a bold font is used for lines with notice and warning priority
the timestamps are converted into the local time zone of your system
all logged data are shown, including rotated logs
the beginning of a boot is tagged with a special line

Example 19.14. Example Output of journalctl

The following is an example output provided by the journalctl tool. When called without parameters,
the listed entries begin with a timestamp, then the hostname and application that performed the
operation is mentioned, followed by the actual message. This example shows the first three entries in
the journal log.

# journalctl
-- Logs begin at Thu 2013-08-01 15:42:12 CEST, end at Thu 2013-08-01 15:48:48
CEST. --
Aug 01 15:42:12 localhost systemd-journal[54]: Allowing runtime journal files to
grow to 49.7M.
Aug 01 15:42:12 localhost kernel: Initializing cgroup subsys cpuset
Aug 01 15:42:12 localhost kernel: Initializing cgroup subsys cpu

[...]

In many cases, only the latest entries in the journal log are relevant. The simplest way to reduce
journalctl output is to use the -n option that lists only the specified number of most recent log
entries:

journalctl -n Number

Replace Number with a number of lines you want to show. When no number is specified, journalctl
displays the ten most recent entries.

The journalctl command allows you to control the form of the output with the following syntax:

journalctl -o form

Replace form with a keyword specifying a desired form of output. There are several options, such as
verbose, which returns full-structured entry items with all fields, export, which creates a binary stream
suitable for backups and network transfer, and json, which formats entries as JSON data structures.
For the full list of keywords, see the journalctl(1) man page.

Example 19.15. Verbose journalctl Output

To view full meta data about all entries, type:

# journalctl -o verbose
[...]

Fri 2013-08-02 14:41:22 CEST
[s=e1021ca1b81e4fc688fad6a3ea21d35b;i=55c;b=78c81449c920439da57da7bd5c56a770;m=27cc
_BOOT_ID=78c81449c920439da57da7bd5c56a770
PRIORITY=5
SYSLOG_FACILITY=3
_TRANSPORT=syslog
_MACHINE_ID=69d27b356a94476da859461d3a3bc6fd
_HOSTNAME=localhost.localdomain
_PID=562
_COMM=dbus-daemon
_EXE=/usr/bin/dbus-daemon
_CMDLINE=/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
_SYSTEMD_CGROUP=/system/dbus.service
_SYSTEMD_UNIT=dbus.service
SYSLOG_IDENTIFIER=dbus
SYSLOG_PID=562
_UID=81
_GID=81
_SELINUX_CONTEXT=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023
MESSAGE=[system] Successfully activated service 'net.reactivated.Fprint'
_SOURCE_REALTIME_TIMESTAMP=1375447282839181

[...]

This example lists fields that identify a single log entry. These meta data can be used for message
filtering as shown in Section 19.8.4, “Advanced Filtering”. For a complete description of all possible
fields, see the systemd.journal-fields(7) man page.

19.8.2. Access Control


By default, Journal users without root privileges can only see log files generated by them. The system
administrator can add selected users to the adm group, which grants them access to complete log files.
To do so, type as root:

usermod -a -G adm username

Here, replace username with a name of the user to be added to the adm group. This user then receives
the same output of the journalctl command as the root user.

19.8.3. Using The Live View

When called without parameters, journalctl shows the full list of entries, starting with the oldest entry
collected. With the live view, you can supervise the log messages in real time as new entries are
continuously printed as they appear. To start journalctl in live view mode, type:

journalctl -f

This command returns a list of the ten most recent log lines. The journalctl utility then stays running
and waits for new changes in order to display them immediately.

19.8.4. Filtering Messages


The output of the journalctl command executed without parameters is often extensive; therefore, you
can use various filtering methods to extract information to meet your needs.

Filtering by Priority
Log messages are often used to track erroneous behavior on the system. To view only entries with a
selected or higher priority, use the following syntax:

journalctl -p priority

Here, replace priority with one of the following keywords (or with a number): emerg (0), alert (1),
crit (2), err (3), warning (4), notice (5), info (6), and debug (7).

Example 19.16. Filtering by Priority

To view only entries with error or higher priority, use:

journalctl -p err

Filtering by Time
To view log entries only from the current boot, type:

journalctl -b

If you reboot your system just occasionally, the -b will not significantly reduce the output of
journalctl. In such cases, time-based filtering is more helpful:

journalctl --since=value --until=value

With --since and --until, you can view only log messages created within a specified time range. You
can pass values to these options in the form of a date, a time, or both, as shown in the following example.

Example 19.17. Filtering by Time and Priority

Filtering options can be combined to narrow the set of results according to your requests. For
example, to view the warning or higher priority messages from a certain point in time, type:

journalctl -p warning --since="2013-3-16 23:59:59"

Advanced Filtering
Example 19.15, “Verbose journalctl Output” lists a set of fields that specify a log entry and can all be
used for filtering. For a complete description of meta data that systemd can store, see the
systemd.journal-fields(7) man page. This meta data is collected for each log message, without user
intervention. Values are usually text-based, but can take binary and large values; fields can have multiple
values assigned, though it is not very common.

To view a list of unique values that occur in a specified field, use the following syntax:

journalctl -F fieldname

Replace fieldname with a name of a field you are interested in.

To show only log entries that fit a specific condition, use the following syntax:

journalctl fieldname=value

Replace fieldname with a name of a field and value with a specific value contained in that field. As a
result, only lines that match this condition are returned.

Tab Completion on Field Names

As the number of meta data fields stored by systemd is quite large, it is easy to forget the exact
name of your field of interest. When unsure, type:

journalctl

and press the Tab key two times. This shows a list of available field names. Tab completion
based on context works on field names, so you can type a distinctive set of letters from a field
name and then press Tab to complete the name automatically. Similarly, you can list unique
values from a field. Type:

journalctl fieldname=

and press Tab two times. This serves as an alternative to journalctl -F fieldname.

You can specify multiple values for one field:

journalctl fieldname=value1 fieldname=value2 ...

Specifying two matches for the same field results in a logical OR combination of the matches. Entries
matching value1 or value2 are displayed.

Also, you can specify multiple field-value pairs to further reduce the output set:

journalctl fieldname1=value fieldname2=value ...

If you specify two matches for different field names, they will be combined with a logical AND. Entries
have to match both conditions to be shown.

With use of the + symbol, you can set a logical OR combination of matches for multiple fields:

journalctl fieldname1=value + fieldname2=value ...

This command returns entries that match at least one of the conditions, not only those that match
both of them.

Example 19.18. Advanced filtering

To display entries created by avahi-daemon.service or crond.service under the user with UID
70, use the following command:

journalctl _UID=70 _SYSTEMD_UNIT=avahi-daemon.service _SYSTEMD_UNIT=crond.service

Since there are two values set for the _SYSTEMD_UNIT field, both results will be displayed, but only
when matching the _UID=70 condition. This can be expressed simply as: (UID=70 and (avahi or
cron)).

You can apply the aforementioned filtering also in the live view mode to keep track of the latest changes
in your selected group of log entries:

journalctl -f fieldname=value ...

19.8.5. Enabling Persistent Storage


By default, Journal stores log files only in memory or a small ring-buffer in the /run/log/journal/
directory. This is sufficient to show recent log history with journalctl. This directory is volatile; log
data are not saved permanently. With the default configuration, syslog reads the journal logs and stores
them in the /var/log/ directory. With persistent logging enabled, journal files are stored in
/var/log/journal/, which means they persist after reboot. Journal can then replace rsyslog for some
users (but see the chapter introduction).

Enabled persistent storage has the following advantages:

Richer data is recorded for troubleshooting over a longer period of time

For immediate troubleshooting, richer data is available after a reboot
The server console currently reads data from the journal, not from log files

Persistent storage also has certain disadvantages:

Even with persistent storage, the amount of data stored depends on free memory; there is no
guarantee of covering a specific time span
More disk space is needed for logs

To enable persistent storage for Journal, create the journal directory manually as shown in the following
example. As root type:

mkdir -p /var/log/journal
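
The journald service has to start using the new directory before persistent logging takes effect; as a minimal sketch (restarting the service is an assumption based on general systemd behavior, and a reboot achieves the same), type as root:

systemctl restart systemd-journald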

19.9. Managing Log Files in Graphical Environment

19.9.1. Viewing Log Files


Most log files are in plain text format. You can view them with any text editor such as Vi or Emacs.
Some log files are readable by all users on the system; however, root privileges are required to read
most log files.

To view system log files in an interactive, real-time application, use the Log File Viewer.

Installing the gnome-system-log package

In order to use the Log File Viewer, first ensure the gnome-system-log package is installed on
your system by running, as root:

~]# yum install gnome-system-log

For more information on installing packages with Yum, refer to Section 5.2.4, “Installing
Packages”.

After you have installed the gnome-system-log package, you can open the Log File Viewer by clicking
on Applications → System Tools → Log File Viewer, or type the following command at a shell
prompt:

~]$ gnome-system-log

The application only displays log files that exist; thus, the list might differ from the one shown in
Figure 19.2, “Log File Viewer”.

Figure 19.2. Log File Viewer

The Log File Viewer application lets you filter any existing log file. Click on Filters from the menu and
select Manage Filters to define or edit your desired filter.

Figure 19.3. Log File Viewer - Filters

Adding or editing a filter lets you define its parameters as is shown in Figure 19.4, “Log File Viewer -
defining a filter”.

Figure 19.4. Log File Viewer - defining a filter

When defining a filter, you can edit the following parameters:

Name — Specifies the name of the filter.


Regular Expression — Specifies the regular expression that will be applied to the log file and will
attempt to match any possible strings of text in it.
Effect
Highlight — If checked, the found results will be highlighted with the selected color. You may
select whether to highlight the background or the foreground of the text.
Hide — If checked, the found results will be hidden from the log file you are viewing.

When you have at least one filter defined, you may select it from the Filters menu and it will
automatically search for the strings you have defined in the filter and highlight/hide every successful
match in the log file you are currently viewing.

Figure 19.5. Log File Viewer - enabling a filter

When you check the Show matches only option, only the matched strings will be shown in the log file
you are currently viewing.

19.9.2. Adding a Log File


To add a log file you wish to view in the list, select File → Open. This will display the Open Log window
where you can select the directory and file name of the log file you wish to view. Figure 19.6, “Log File
Viewer - adding a log file” illustrates the Open Log window.

Figure 19.6. Log File Viewer - adding a log file

Click on the Open button to open the file. The file is immediately added to the viewing list where you can
select it and view its contents.

Reading zipped log files

The Log File Viewer also allows you to open log files zipped in the .gz format.

19.9.3. Monitoring Log Files


Log File Viewer monitors all opened logs by default. If a new line is added to a monitored log file, the log
name appears in bold in the log list. If the log file is selected or displayed, the new lines appear in bold at
the bottom of the log file. Figure 19.7, “Log File Viewer - new log alert” illustrates a new alert in the cron
log file and in the messages log file. Clicking on the cron log file displays the logs in the file with the new
lines in bold.

Figure 19.7. Log File Viewer - new log alert

19.10. Additional Resources


For more information on how to configure the rsyslog daemon and how to locate, view, and monitor
log files, refer to the resources listed below.

Installed Documentation

rsyslogd(8) — The manual page for the rsyslogd daemon documents its usage.
rsyslog.conf(5) — The manual page named rsyslog.conf documents available configuration
options.
logrotate(8) — The manual page for the logrotate utility explains in greater detail how to
configure and use it.
journalctl(1) — The manual page for the journalctl utility documents its usage.
journald.conf(5) — This manual page documents available configuration options.
systemd.journal-fields(7) — This manual page lists special Journal fields.

Online Documentation

rsyslog Home Page — The rsyslog home page offers a thorough technical breakdown of its
features, documentation, configuration examples, and video tutorials.
RainerScript documentation on the rsyslog Home Page — Commented summary of data types,
expressions, and functions available in RainerScript.
Description of queues on the rsyslog Home Page — General information on various types of
message queues and their usage.
rsyslog Wiki — The rsyslog Wiki contains useful configuration examples.
Lumberjack Home Page — The Lumberjack Home Page provides an overview of the Lumberjack
project.
libumberlog Home Page — The libumberlog Home Page provides an overview of the libumberlog
library.

See Also

Chapter 4, Gaining Privileges documents how to gain administrative privileges by using the su and
sudo commands.
Chapter 7, Managing Services with systemd provides more information on systemd and documents
how to use the systemctl command to manage system services.

Chapter 20. Automating System Tasks


Tasks, also known as jobs, can be configured to run automatically within a specified period of time, on a
specified date, or when the system load average decreases below 0.8.

Red Hat Enterprise Linux is pre-configured to run important system tasks to keep the system updated.
For example, the slocate database used by the locate command is updated daily. A system
administrator can use automated tasks to perform periodic backups, monitor the system, run custom
scripts, and so on.

Red Hat Enterprise Linux comes with the following automated task utilities: cron, anacron, at, and
batch.

Every utility is intended for scheduling a different job type: while Cron and Anacron schedule recurring
jobs, At and Batch schedule one-time jobs (refer to Section 20.1, “Cron and Anacron” and Section 20.2,
“At and Batch” respectively).

20.1. Cron and Anacron


Both Cron and Anacron are daemons that can schedule execution of recurring tasks to a certain point in
time defined by the exact time, day of the month, month, day of the week, and week.

Cron jobs can run as often as every minute. However, the utility assumes that the system is running
continuously and if the system is not on at the time when a job is scheduled, the job is not executed.

On the other hand, Anacron remembers the scheduled jobs if the system is not running at the time when
the job is scheduled. The job is then executed as soon as the system is up. However, Anacron can only
run a job once a day.

20.1.1. Installing Cron and Anacron


To install Cron and Anacron, you need to install the cronie package with Cron and the cronie-anacron
package with Anacron (cronie-anacron is a sub-package of cronie).

To determine if the packages are already installed on your system, issue the following command:

rpm -q cronie cronie-anacron

The command returns full names of the cronie and cronie-anacron packages if already installed, or
notifies you that the packages are not available.

To install these packages, use the yum command in the following form as root:

yum install package

For example, to install both Cron and Anacron, type the following at a shell prompt:

~]# yum install cronie cronie-anacron

For more information on how to install new packages in Red Hat Enterprise Linux, see Section 5.2.4,
“Installing Packages”.

20.1.2. Running the Crond Service


The cron and anacron jobs are both picked up by the crond service. This section provides information on
how to start, stop, and restart the crond service, and shows how to configure it to start automatically at
boot time. For more information on how to manage system services in Red Hat Enterprise Linux 7 in
general, see Chapter 7, Managing Services with systemd.

20.1.2.1. Starting and Stopping the Cron Service


To determine if the service is running, use the following command:

systemctl status crond.service

To run the crond service in the current session, type the following at a shell prompt as root:

systemctl start crond.service

To configure the service to start automatically at boot time, use the following command as root:

systemctl enable crond.service

20.1.2.2. Stopping the Cron Service


To stop the crond service in the current session, type the following at a shell prompt as root:

systemctl stop crond.service

To prevent the service from starting automatically at boot time, use the following command as root:

systemctl disable crond.service

20.1.2.3. Restarting the Cron Service


To restart the crond service, type the following at a shell prompt as root:

systemctl restart crond.service

This command stops the service and starts it again in quick succession.

20.1.3. Configuring Anacron Jobs


The main configuration file to schedule jobs is the /etc/anacrontab file, which can be only accessed
by the root user. The file contains the following:

SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# the maximal random delay added to the base delay of the jobs
RANDOM_DELAY=45
# the jobs will be started during the following hours only
START_HOURS_RANGE=3-22

#period in days delay in minutes job-identifier command


1 5 cron.daily nice run-parts /etc/cron.daily
7 25 cron.weekly nice run-parts /etc/cron.weekly
@monthly 45 cron.monthly nice run-parts /etc/cron.monthly

The first three lines define the variables that configure the environment in which the anacron tasks run:

SHELL — shell environment used for running jobs (in the example, the Bash shell)
PATH — paths to executable programs
MAILTO — username of the user who receives the output of the anacron jobs by email
If the MAILTO variable is not defined (MAILTO=), the email is not sent.

The next two variables modify the scheduled time for the defined jobs:

RANDOM_DELAY — maximum number of minutes that will be added to the delay in minutes
variable which is specified for each job
The minimum delay value is set, by default, to 6 minutes.
If RANDOM_DELAY is, for example, set to 12, then between 6 and 12 minutes are added to the delay
in minutes for each job in that particular anacrontab. RANDOM_DELAY can also be set to a value
below 6, including 0. When set to 0, no random delay is added. This proves to be useful when, for
example, several computers that share one network connection need to download the same data every
day.
START_HOURS_RANGE — interval (in hours) during which scheduled jobs can be run
In case the time interval is missed, for example due to a power failure, the scheduled jobs are not
executed that day.

The remaining lines in the /etc/anacrontab file represent scheduled jobs and follow this format:

period in days delay in minutes job-identifier command

period in days — frequency of job execution in days


The property value can be defined as an integer or a macro (@daily, @weekly, @monthly), where
@daily denotes the same value as integer 1, @weekly the same as 7, and @monthly specifies
that the job is run once a month regardless of the length of the month.
delay in minutes — number of minutes anacron waits before executing the job
The property value is defined as an integer. If the value is set to 0, no delay applies.
job-identifier — unique name referring to a particular job used in the log files
command — command to be executed
The command can be either a command such as ls /proc >> /tmp/proc or a command which
executes a custom script.

Any lines that begin with a hash sign (#) are comments and are not processed.

20.1.3.1. Examples of Anacron Jobs


The following example shows a simple /etc/anacrontab file:

SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root

# the maximal random delay added to the base delay of the jobs
RANDOM_DELAY=30
# the jobs will be started during the following hours only
START_HOURS_RANGE=16-20

#period in days delay in minutes job-identifier command


1 20 dailyjob nice run-parts /etc/cron.daily
7 25 weeklyjob /etc/weeklyjob.bash
@monthly 45 monthlyjob ls /proc >> /tmp/proc

All jobs defined in this anacrontab file are randomly delayed by 6-30 minutes and can be executed
between 16:00 and 20:00.

The first defined job is triggered daily between 16:26 and 16:50 (RANDOM_DELAY is between 6 and 30
minutes; the delay in minutes property adds 20 minutes). The command specified for this job executes
all present programs in the /etc/cron.daily directory using the run-parts script (the run-parts
script accepts a directory as a command-line argument and sequentially executes every program in the
directory).

The second job executes the weeklyjob.bash script in the /etc directory once a week.

The third job runs a command, which writes the contents of /proc to the /tmp/proc file (ls /proc
>> /tmp/proc) once a month.

20.1.4. Configuring Cron Jobs


The configuration file for cron jobs is /etc/crontab, which can be only modified by the root user. The
file contains the following:

SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
# For details see man 4 crontabs
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * username command to be executed

The first three lines contain the same variable definitions as an anacrontab file: SHELL, PATH, and
MAILTO. For more information about these variables, refer to Section 20.1.3, “Configuring Anacron
Jobs”.

In addition, the file can define the HOME variable. The HOME variable defines the directory that will be
used as the home directory when executing commands or scripts run by the job.

The remaining lines in the /etc/crontab file represent scheduled jobs and have the following format:

minute hour day month day of week username command

The following define the time when the job is to be run:

minute — any integer from 0 to 59


hour — any integer from 0 to 23
day — any integer from 1 to 31 (must be a valid day if a month is specified)
month — any integer from 1 to 12 (or the short name of the month such as jan or feb)
day of week — any integer from 0 to 7, where 0 or 7 represents Sunday (or the short name of the
day such as sun or mon)

The following define other job properties:

username — specifies the user under which the jobs are run.
command — the command to be executed.
The command can be either a command such as ls /proc >> /tmp/proc or a command which
executes a custom script.

For any of the above values, an asterisk (*) can be used to specify all valid values. If you, for example,
define the month value as an asterisk, the job will be executed every month within the constraints of the
other values.

A hyphen (-) between integers specifies a range of integers. For example, 1-4 means the integers 1, 2,
3, and 4.

A list of values separated by commas (,) specifies a list. For example, 3,4,6,8 indicates exactly these
four integers.

The forward slash (/) can be used to specify step values. Within a range, integers can be skipped by
following the range with /integer. For example, the minute value defined as 0-59/2 denotes
every other minute in the minute field. Step values can also be used with an asterisk. For instance, if the
month value is defined as */3, the task will run every third month.

Any lines that begin with a hash sign (#) are comments and are not processed.
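
For example, the following illustrative /etc/crontab entry (the script path is an assumption) runs a backup script at 02:30 on every weekday as the root user:

# minute hour day month day-of-week username command
30 2 * * 1-5 root /usr/local/bin/nightly-backup.sh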

Users other than root can configure cron tasks with the crontab utility. The user-defined crontabs are
stored in the /var/spool/cron/ directory and executed as if run by the users that created them.

To create a crontab as a specific user, log in as that user and type the command crontab -e to edit the
user's crontab with the editor specified in the VISUAL or EDITOR environment variable. The file uses the
same format as /etc/crontab. When the changes to the crontab are saved, the crontab is stored
according to the user name and written to the file /var/spool/cron/username. To list the contents of
the current user's crontab file, use the crontab -l command.

The /etc/cron.d/ directory contains files that have the same syntax as the /etc/crontab file. Only
root is allowed to create and modify files in this directory.

Do not restart the daemon to apply the changes

The cron daemon checks the /etc/anacrontab file, the /etc/crontab file, the
/etc/cron.d/ directory, and the /var/spool/cron/ directory every minute for changes and
the detected changes are loaded into memory. It is therefore not necessary to restart the daemon
after an anacrontab or a crontab file have been changed.

20.1.5. Controlling Access to Cron


To restrict the access to Cron, you can use the /etc/cron.allow and /etc/cron.deny files. These
access control files use the same format with one user name on each line. Mind that no whitespace
characters are permitted in either file.

If the cron.allow file exists, only users listed in the file are allowed to use cron, and the cron.deny
file is ignored.

If the cron.allow file does not exist, users listed in the cron.deny file are not allowed to use Cron.

The Cron daemon (crond) does not have to be restarted if the access control files are modified. The
access control files are checked each time a user tries to add or delete a cron job.

The root user can always use cron, regardless of the user names listed in the access control files.

You can control the access also through Pluggable Authentication Modules (PAM). The settings are
stored in the /etc/security/access.conf file. For example, after adding the following line to the
file, no other user but the root user can create crontabs:

-:ALL EXCEPT root :cron

The forbidden jobs are logged in an appropriate log file or, when using crontab -e, returned to the
standard output. For more information, refer to access.conf(5) (that is, man 5 access.conf).

20.1.6. Black and White Listing of Cron Jobs


Black and white listing of jobs is used to define parts of a job that do not need to be executed. This is
useful when calling the run-parts script on a Cron directory, such as /etc/cron.daily: if the user
adds programs located in the directory to the job black list, the run-parts script will not execute these
programs.

To define a black list, create a jobs.deny file in the directory that the run-parts script will be executing
from. For example, if you need to omit a particular program from /etc/cron.daily, create the
/etc/cron.daily/jobs.deny file. In this file, specify the names of the programs to be omitted from
execution (only programs located in the same directory can be listed). If a job runs a command which
runs the programs from the cron.daily directory, such as run-parts /etc/cron.daily, the
programs defined in the jobs.deny file will not be executed.
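
For example, to keep run-parts from running a hypothetical script named 0yum-update.cron located in /etc/cron.daily, the /etc/cron.daily/jobs.deny file would contain only the following line:

0yum-update.cron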

To define a white list, create a jobs.allow file.

The principles of jobs.deny and jobs.allow are the same as those of cron.deny and
cron.allow described in section Section 20.1.5, “Controlling Access to Cron”.

20.2. At and Batch

While Cron is used to schedule recurring tasks, the At utility is used to schedule a one-time task at a
specific time and the Batch utility is used to schedule a one-time task to be executed when the system
load average drops below 0.8.

20.2.1. Installing At and Batch


To determine if the at package is already installed on your system, issue the following command:

rpm -q at

The command returns the full name of the at package if already installed or notifies you that the package
is not available.

To install the packages, use the yum command in the following form as root:

yum install package

For example, to install both At and Batch, type the following at a shell prompt:

~]# yum install at

For more information on how to install new packages in Red Hat Enterprise Linux, see Section 5.2.4,
“Installing Packages”.

20.2.2. Running the At Service


The At and Batch jobs are both picked up by the atd service. This section provides information on how to
start, stop, and restart the atd service, and shows how to configure it to start automatically at boot time.
For more information on how to manage system services in Red Hat Enterprise Linux 7 in general, see
Chapter 7, Managing Services with systemd.

20.2.2.1. Starting and Stopping the At Service


To determine if the service is running, use the following command:

systemctl status atd.service

To run the atd service in the current session, type the following at a shell prompt as root:

systemctl start atd.service

To configure the service to start automatically at boot time, use the following command as root:

systemctl enable atd.service

Note

It is recommended that you configure your system to start the atd service automatically at boot
time.

20.2.2.2. Stopping the At Service


To stop the atd service, type the following at a shell prompt as root:

systemctl stop atd.service

To prevent the service from starting automatically at boot time, use the following command as root:

systemctl disable atd.service

20.2.2.3. Restarting the At Service


To restart the atd service, type the following at a shell prompt as root:

systemctl restart atd.service

This command stops the service and starts it again in quick succession.

20.2.3. Configuring an At Job


To schedule a one-time job for a specific time with the At utility, do the following:

1. On the command line, type the command at TIME, where TIME is the time when the command is
to be executed.
The TIME argument can be defined in any of the following formats:
HH:MM specifies the exact hour and minute; for example, 04:00 specifies 4:00 a.m.
midnight specifies 12:00 a.m.
noon specifies 12:00 p.m.
teatime specifies 4:00 p.m.
MONTHDAYYEAR format; for example, January 15 2012 specifies the 15th day of January in
the year 2012. The year value is optional.
MMDDYY, MM/DD/YY, or MM.DD.YY formats; for example, 011512 for the 15th day of January in
the year 2012.
now + TIME where TIME is defined as an integer and the value type: minutes, hours, days, or
weeks. For example, now + 5 days specifies that the command will be executed at the same
time five days from now.
The time must be specified first, followed by the optional date. For more information about the
time format, refer to the /usr/share/doc/at-<version>/timespec text file.
If the specified time has passed, the job is executed at the same time the next day.
2. In the displayed at> prompt, define the job commands:
A. Type the command the job should execute and press Enter. Optionally, repeat the step to
provide multiple commands.
B. Enter a shell script at the prompt and press Enter after each line in the script.
The job will use the shell set in the user's SHELL environment, the user's login shell, or
/bin/sh (whichever is found first).
3. Once finished, press Ctrl+D on an empty line to exit the prompt.
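
For instance, an illustrative session that schedules the removal of files from a hypothetical scratch directory at 4:00 a.m. could look as follows (the job number and date in the confirmation line are examples only):

~]$ at 04:00
at> rm -f /tmp/scratch/*
at> <EOT>
job 17 at Fri Aug  2 04:00:00 2013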

If the set of commands or the script tries to display information to standard output, the output is emailed
to the user.

To view the list of pending jobs, use the atq command. Refer to Section 20.2.5, “Viewing Pending Jobs”
for more information.

You can also restrict the usage of the at command. For more information, refer to Section 20.2.7,
“Controlling Access to At and Batch”.

20.2.4. Configuring a Batch Job


The Batch application executes the defined one-time tasks when the system load average decreases
below 0.8.

To define a Batch job, do the following:

1. On the command line, type the command batch.


2. In the displayed at> prompt, define the job commands:
A. Type the command the job should execute and press Enter. Optionally, repeat the step to
provide multiple commands.
B. Enter a shell script at the prompt and press Enter after each line in the script.
If a script is entered, the job uses the shell set in the user's SHELL environment, the user's
login shell, or /bin/sh (whichever is found first).
3. Once finished, press Ctrl+D on an empty line to exit the prompt.

If the set of commands or the script tries to display information to standard output, the output is emailed
to the user.

To view the list of pending jobs, use the atq command. Refer to Section 20.2.5, “Viewing Pending Jobs”
for more information.

You can also restrict the usage of the batch command. For more information, refer to Section 20.2.7,
“Controlling Access to At and Batch” for details.

20.2.5. Viewing Pending Jobs


To view the pending At and Batch jobs, run the atq command. The atq command displays a list of
pending jobs, with each job on a separate line. Each line follows the job number, date, hour, job class,
and user name format. Users can only view their own jobs. If the root user executes the atq command,
all jobs for all users are displayed.

20.2.6. Additional Command Line Options


Additional command line options for at and batch include the following:

Table 20.1. at and batch Command Line Options

Option Description
-f Read the commands or shell script from a file instead of specifying them
at the prompt.
-m Send email to the user when the job has been completed.
-v Display the time that the job is executed.

20.2.7. Controlling Access to At and Batch


You can restrict the access to the at and batch commands using the /etc/at.allow and
/etc/at.deny files. These access control files use the same format, defining one user name on each
line. Note that no whitespace characters are permitted in either file.

If the file at.allow exists, only users listed in the file are allowed to use at or batch, and the at.deny
file is ignored.

If at.allow does not exist, users listed in at.deny are not allowed to use at or batch.

The at daemon (atd) does not have to be restarted if the access control files are modified. The access
control files are read each time a user tries to execute the at or batch commands.

The root user can always execute at and batch commands, regardless of the content of the access
control files.
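For example, to allow only the user alice to use at and batch (the user name is illustrative), create
/etc/at.allow containing a single line as root:

~]# echo "alice" > /etc/at.allow

Because at.allow now exists, at.deny is ignored and all other non-root users are denied access.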

20.3. Additional Resources


To learn more about configuring automated tasks, refer to the following installed documentation:

cron man page contains an overview of cron.


crontab man pages in sections 1 and 5:
The manual page in section 1 contains an overview of the crontab file.
The man page in section 5 contains the format for the file and some example entries.
anacron manual page contains an overview of anacron.
anacrontab manual page contains an overview of the anacrontab file.
/usr/share/doc/at-<version>/tim espec contains detailed information about the time values
that can be used in cron job definitions.
at manual page contains descriptions of at and batch and their command line options.


Chapter 21. OProfile


OProfile is a low overhead, system-wide performance monitoring tool. It uses the performance
monitoring hardware on the processor to retrieve information about the kernel and executables on the
system, such as when memory is referenced, the number of L2 cache requests, and the number of
hardware interrupts received. On a Red Hat Enterprise Linux system, the oprofile package must be
installed to use this tool.

Many processors include dedicated performance monitoring hardware. This hardware makes it possible
to detect when certain events happen (such as the requested data not being in cache). The hardware
normally takes the form of one or more counters that are incremented each time an event takes place.
When the counter value increments, an interrupt is generated, making it possible to control the amount
of detail (and therefore, overhead) produced by performance monitoring.

OProfile uses this hardware (or a timer-based substitute in cases where performance monitoring
hardware is not present) to collect samples of performance-related data each time a counter generates
an interrupt. These samples are periodically written out to disk; later, the data contained in these
samples can then be used to generate reports on system-level and application-level performance.

OProfile is a useful tool, but be aware of some limitations when using it:

Use of shared libraries — Samples for code in shared libraries are not attributed to the particular
application unless the --separate=library option is used.
Performance monitoring samples are inexact — When a performance monitoring register triggers a
sample, the interrupt handling is not precise like a divide by zero exception. Due to the out-of-order
execution of instructions by the processor, the sample may be recorded on a nearby instruction.
opreport does not associate samples for inline functions properly — opreport uses a simple
address range mechanism to determine which function an address is in. Inline function samples are
not attributed to the inline function but rather to the function the inline function was inserted into.
OProfile accumulates data from multiple runs — OProfile is a system-wide profiler and expects
processes to start up and shut down multiple times. Thus, samples from multiple runs accumulate.
Use the command opcontrol --reset to clear out the samples from previous runs.
Hardware performance counters do not work on guest virtual machines — Because the hardware
performance counters are not available on virtual systems, you need to use the timer mode. Run
the command opcontrol --deinit, and then execute modprobe oprofile timer=1 to
enable the timer mode.
Non-CPU-limited performance problems — OProfile is oriented to finding problems with CPU-limited
processes. OProfile does not identify processes that are asleep because they are waiting on locks or
for some other event to occur (for example an I/O device to finish an operation).

21.1. Overview of Tools


Table 21.1, “OProfile Commands” provides a brief overview of the most often used tools provided with
the oprofile package.


Table 21.1. OProfile Commands

Command Description
ophelp Displays available events for the system's processor along with a brief
description of each.
opimport Converts sample database files from a foreign binary format to the
native format for the system. Only use this option when analyzing a
sample database from a different architecture.
opannotate Creates annotated source for an executable if the application was
compiled with debugging symbols. Refer to Section 21.6.4, “Using
opannotate” for details.
opcontrol Configures what data is collected. Refer to Section 21.3, “Configuring
OProfile Using Legacy Mode” for details.
operf Recommended tool to be used in place of opcontrol for profiling.
Refer to Section 21.2, “Using operf” for details. For
differences between operf and opcontrol see Section 21.1.1,
“operf vs. opcontrol”.
opreport Retrieves profile data. Refer to Section 21.6.1, “Using opreport” for
details.
oprofiled Runs as a daemon to periodically write sample data to disk.

21.1.1. operf vs. opcontrol


There are two mutually exclusive methods for collecting profiling data with OProfile: you can use either
the newer and preferred operf tool or the legacy opcontrol tool.

operf
This is the recommended mode for profiling. The operf tool uses the Linux Performance Events
Subsystem, and therefore does not require the oprofile kernel driver. The operf tool allows you to
target your profiling more precisely, as a single process or system-wide, and also allows OProfile to co-
exist better with other tools using the performance monitoring hardware on your system. Unlike
opcontrol, it can be used without root privileges. However, operf is also capable of system-
wide operations with the --system-wide option, where root authority is required.

With operf, there is no initial setup needed. You can invoke operf with command-line options to
specify your profiling settings. After that, you can run the OProfile post-processing tools described in
Section 21.6, “Analyzing the Data”. Refer to Section 21.2, “Using operf” for further information.

Legacy Mode
This mode consists of the opcontrol shell script, the oprofiled daemon, and several post-
processing tools. The opcontrol command is used for configuring, starting, and stopping a profiling
session. An OProfile kernel driver, usually built as a kernel module, is used for collecting samples, which
are then recorded into sample files by oprofiled. You can use legacy mode only if you have root
privileges. In certain cases, such as when you need to sample areas with disabled interrupt request
(IRQ), this is a better alternative.

Before OProfile can be run in legacy mode, it must be configured as shown in Section 21.3, “Configuring
OProfile Using Legacy Mode”. These settings are then applied when starting OProfile (Section 21.4,
“Starting and Stopping OProfile Using Legacy Mode”).


21.2. Using operf


As mentioned before, operf is the recommended profiling mode that does not require an initial setup
before starting. All settings are specified as command-line options and there is no separate command to
start the profiling process. To stop operf, press Ctrl-c. The typical operf command syntax looks as
follows:

operf options range command args

Replace options with the desired command-line options to specify your profiling settings. The full set of
options is described in the operf man page. Replace range with one of the following:

--system-wide - this setting allows for global profiling; see Using operf in System-wide Mode

--pid=PID - profiles a running application, where PID is the process ID of the process you wish to
profile.

With command and args, you can define a specific command or application to be profiled, and also the
input arguments that this command or application requires. Either command, --pid, or --system-wide
is required, but they cannot be used simultaneously.

When you invoke operf on a command line without setting the range option, data will be collected for
the child processes.
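For example, the following invocations illustrate the three ways of choosing the profiling range (the binary
name, its arguments, and the PID are illustrative):

~]$ operf ./my_app input.dat          # profile a command and its child processes
~]$ operf --pid=1234                  # profile an already running process
~]# operf --system-wide               # profile the entire system (requires root)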

Using operf in System-wide Mode

To run operf --system-wide, you need root authority. When finished profiling, you can stop
operf with:

Ctrl-C

If you run operf --system-wide as a background process (with &), stop it in a controlled
manner in order to process the collected profile data. For this purpose, use:

kill -SIGINT operf-PID

When running operf --system-wide, it is recommended that your current working directory is
/root or a subdirectory of /root so that sample data files are not stored in locations accessible
by regular users.

21.2.1. Specifying the Kernel


To monitor the kernel, execute the following command:

operf --vmlinux=vmlinux_path

With this option, you can specify a path to a vmlinux file that matches the running kernel. Kernel samples
will be attributed to this binary, allowing post-processing tools to attribute samples to the appropriate
kernel symbols. If this option is not specified, all kernel samples will be attributed to a pseudo binary
named "no-vmlinux".

21.2.2. Setting Events to Monitor

Most processors contain counters, which are used by OProfile to monitor specific events. As shown in
Table 21.3, “OProfile Processors and Counters”, the number of counters available depends on the
processor.

The events for each counter can be configured via the command line or with a graphical interface. For
more information on the graphical interface, refer to Section 21.10, “Graphical Interface”. If the counter
cannot be set to a specific event, an error message is displayed.

Older Processors and operf

Some older processor models are not supported by the underlying kernel Performance Events
Subsystem and therefore are not supported by operf. If you receive this message:

Your kernel's Performance Events Subsystem does not support your processor
type

when attempting to use operf, try profiling with opcontrol to see if your processor type may
be supported by OProfile's legacy mode.

Using operf on Virtual Systems

Since hardware performance counters are not available on guest virtual machines, you have to
enable timer mode to use operf on virtual systems. To do so, type as root:

opcontrol --deinit

modprobe oprofile timer=1

To set the event for each configurable counter via the command line, use:

operf --events=event1,event2…

Here, pass a comma-separated list of event specifications for profiling. Each event specification is a
colon-separated list of attributes in the following form:

event-name:sample-rate:unit-mask:kernel:user

Table 21.2, “Event Specifications” summarizes these options. The last three values are optional; if you
omit them, they will be set to their default values. Note that certain events do require a unit mask.
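For example, an invocation that counts unhalted clock cycles, sampling every 500,000 events in both
kernel and user space, might look as follows (the event name, count, and binary are illustrative; the event
must appear in the ophelp output for your processor):

~]$ operf --events=CPU_CLK_UNHALTED:500000:0x00:1:1 ./my_app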


Table 21.2. Event Specifications

Specification Description
event-name The exact symbolic event name taken from
ophelp
sample-rate The number of events to wait before sampling
again. The smaller the count, the more frequent
the samples. For events that do not happen
frequently, a lower count may be needed to
capture a statistically significant number of event
instances. On the other hand, sampling too
frequently can overload the system. By default,
OProfile uses a time-based event set, which
creates a sample every 100,000 clock cycles per
processor.
unit-mask Unit masks, which further define the event, are
listed in ophelp. You can insert either a
hexadecimal value, beginning with "0x", or a
string that matches the first word of the unit mask
description in ophelp. The second option is valid
only for unit masks having "extra:" parameters, as
shown by the output of ophelp. This type of unit
mask cannot be defined with a hexadecimal
value.
kernel Specifies whether to profile kernel code (insert 0
or 1 (default))
user Specifies whether to profile user-space code
(insert 0 or 1 (default))

The events available vary depending on the processor type. When no event specification is given, the
default event for the running processor type will be used for profiling. Refer to Table 21.4, “Default
Events” for a list of these default events. To determine the events available for profiling, use the ophelp
command.

ophelp

21.2.3. Categorization of Samples


The --separate-thread option categorizes samples by thread group ID (tgid) and thread ID (tid).
This is useful for seeing per-thread samples in multi-threaded applications. When used in conjunction
with the --system-wide option, --separate-thread is also useful for seeing per-process (i.e., per-
thread group) samples for the case where multiple processes are executing the same program during a
profiling run.

The --separate-cpu option categorizes samples by CPU.
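For example, the two categorization options might be used as follows (the binary name is illustrative):

~]# operf --system-wide --separate-thread
~]$ operf --separate-cpu ./my_app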

21.3. Configuring OProfile Using Legacy Mode


Before OProfile can be run in legacy mode, it must be configured. At a minimum, selecting to monitor the
kernel (or selecting not to monitor the kernel) is required. The following sections describe how to use the
opcontrol utility to configure OProfile. As the opcontrol commands are executed, the setup options
are saved to the /root/.oprofile/daemonrc file.

21.3.1. Specifying the Kernel


First, configure whether OProfile should monitor the kernel. This is the only configuration option that is
required before starting OProfile. All others are optional.

To monitor the kernel, execute the following command as root:

opcontrol --setup --vmlinux=/usr/lib/debug/lib/modules/`uname -r`/vmlinux

Install the debuginfo package

The debuginfo package for the kernel, which contains the uncompressed kernel image, must be
installed in order to monitor the kernel.

To configure OProfile not to monitor the kernel, execute the following command as root:

opcontrol --setup --no-vmlinux

This command also loads the oprofile kernel module, if it is not already loaded, and creates the
/dev/oprofile/ directory, if it does not already exist. Refer to Section 21.7, “Understanding
/dev/oprofile/” for details about this directory.

Setting whether samples should be collected within the kernel only changes what data is collected, not
how or where the collected data is stored. To generate different sample files for the kernel and
application libraries, refer to Section 21.3.3, “Separating Kernel and User-space Profiles”.

21.3.2. Setting Events to Monitor


Most processors contain counters, which are used by OProfile to monitor specific events. As shown in
Table 21.3, “OProfile Processors and Counters”, the number of counters available depends on the
processor.


Table 21.3. OProfile Processors and Counters

Processor cpu_type Number of Counters


AMD64 x86-64/hammer 4
AMD Family 10h x86-64/family10 4
AMD Family 11h x86-64/family11 4
AMD Family 12h x86-64/family12 4
AMD Family 14h x86-64/family14 4
AMD Family 15h x86-64/family15 6
IBM eServer System i and IBM eServer timer 1
System p
IBM POWER4 ppc64/power4 8
IBM POWER5 ppc64/power5 6
IBM PowerPC 970 ppc64/970 8
IBM PowerPC 970MP ppc64/970MP 8
IBM POWER5+ ppc64/power5+ 6
IBM POWER5++ ppc64/power5++ 6
IBM POWER6 ppc64/power6 6
IBM POWER7 ppc64/power7 6
IBM S/390 and IBM System z timer 1
Intel Core i7 i386/core_i7 4
Intel Nehalem microarchitecture i386/nehalem 4
Intel Westmere microarchitecture i386/westmere 4
Intel Haswell microarchitecture (non- i386/haswell 8
hyper-threaded)
Intel Haswell microarchitecture (hyper- i386/haswell-ht 4
threaded)
Intel Ivy Bridge microarchitecture (non- i386/ivybridge 8
hyper-threaded)
Intel Ivy Bridge microarchitecture (hyper- i386/ivybridge-ht 4
threaded)
Intel Sandy Bridge microarchitecture (non- i386/sandybridge 8
hyper-threaded)
Intel Sandy Bridge microarchitecture (hyper- i386/sandybridge-ht 4
threaded)
TIMER_INT timer 1

Use Table 21.3, “OProfile Processors and Counters” to verify that the correct processor type was
detected and to determine the number of events that can be monitored simultaneously. timer is used
as the processor type if the processor does not have supported performance monitoring hardware.

If timer is used, events cannot be set for any processor because the hardware does not have support
for hardware performance counters. Instead, the timer interrupt is used for profiling.

If timer is not used as the processor type, the events monitored can be changed, and counter 0 for the
processor is set to a time-based event by default. If more than one counter exists on the processor, the
counters other than counter 0 are not set to an event by default. The default events monitored are
shown in Table 21.4, “Default Events”.


Table 21.4. Default Events

Processor Default Event for Counter Description


AMD Athlon and CPU_CLK_UNHALTED The processor's clock is not halted
AMD64
AMD Family 10h, AMD CPU_CLK_UNHALTED The processor's clock is not halted
Family 11h, AMD
Family 12h
AMD Family 14h, AMD CPU_CLK_UNHALTED The processor's clock is not halted
Family 15h
IBM POWER4 CYCLES Processor Cycles
IBM POWER5 CYCLES Processor Cycles
IBM PowerPC 970 CYCLES Processor Cycles
Intel Core i7 CPU_CLK_UNHALTED The processor's clock is not halted
Intel Nehalem CPU_CLK_UNHALTED The processor's clock is not halted
microarchitecture
Intel Pentium 4 (hyper- GLOBAL_POWER_EVENTS The time during which the processor is
threaded and non- not stopped
hyper-threaded)
Intel Westmere CPU_CLK_UNHALTED The processor's clock is not halted
microarchitecture
TIMER_INT (none) Sample for each timer interrupt

The number of events that can be monitored at one time is determined by the number of counters for the
processor. However, it is not a one-to-one correlation; on some processors, certain events must be
mapped to specific counters. To determine the number of counters available, execute the following
command:

ls -d /dev/oprofile/[0-9]*

The events available vary depending on the processor type. To determine the events available for
profiling, execute the following command as root (the list is specific to the system's processor type):

ophelp

Make sure that OProfile is configured

Unless OProfile is properly configured, ophelp fails with the following error message:

Unable to open cpu_type file for reading


Make sure you have done opcontrol --init
cpu_type 'unset' is not valid
you should upgrade oprofile or force the use of timer mode

To configure OProfile, follow the instructions in Section 21.3, “Configuring OProfile Using Legacy
Mode”.

The events for each counter can be configured via the command line or with a graphical interface. For
more information on the graphical interface, refer to Section 21.10, “Graphical Interface”. If the counter
cannot be set to a specific event, an error message is displayed.

To set the event for each configurable counter via the command line, use opcontrol:

opcontrol --event=event-name:sample-rate

Replace event-name with the exact name of the event from ophelp, and replace sample-rate with
the number of events between samples.
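For example, to count unhalted clock cycles and take a sample every 500,000 events (the values are
illustrative), run as root:

~]# opcontrol --event=CPU_CLK_UNHALTED:500000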

21.3.2.1. Sampling Rate


By default, a time-based event set is selected. It creates a sample every 100,000 clock cycles per
processor. If the timer interrupt is used, the timer is set to the respective rate and is not user-settable. If
the cpu_type is not timer, each event can have a sampling rate set for it. The sampling rate is the
number of events between each sample snapshot.

When setting the event for the counter, a sample rate can also be specified:

opcontrol --event=event-name:sample-rate

Replace sample-rate with the number of events to wait before sampling again. The smaller the count,
the more frequent the samples. For events that do not happen frequently, a lower count may be needed
to capture the event instances.

Sampling too frequently can overload the system

Be extremely careful when setting sampling rates. Sampling too frequently can overload the
system, causing the system to appear as if it is frozen or causing the system to actually freeze.

21.3.2.2. Unit Masks


Some user performance monitoring events may also require unit masks to further define the event.

Unit masks for each event are listed with the ophelp command. The values for each unit mask are
listed in hexadecimal format. To specify more than one unit mask, the hexadecimal values must be
combined using a bitwise or operation.

opcontrol --event=event-name:sample-rate:unit-mask

21.3.3. Separating Kernel and User-space Profiles


By default, kernel mode and user mode information is gathered for each event. To configure OProfile to
ignore events in kernel mode for a specific counter, execute the following command:

opcontrol --event=event-name:sample-rate:unit-mask:0

Execute the following command to start profiling kernel mode for the counter again:

opcontrol --event=event-name:sample-rate:unit-mask:1

To configure OProfile to ignore events in user mode for a specific counter, execute the following
command:


opcontrol --event=event-name:sample-rate:unit-mask:1:0

Execute the following command to start profiling user mode for the counter again:

opcontrol --event=event-name:sample-rate:unit-mask:1:1

When the OProfile daemon writes the profile data to sample files, it can separate the kernel and library
profile data into separate sample files. To configure how the daemon writes to sample files, execute the
following command as root:

opcontrol --separate=choice

choice can be one of the following:

none — Do not separate the profiles (default).


library — Generate per-application profiles for libraries.
kernel — Generate per-application profiles for the kernel and kernel modules.
all — Generate per-application profiles for libraries and per-application profiles for the kernel and
kernel modules.

If --separate=library is used, the sample file name includes the name of the executable as well as
the name of the library.

Restart the OProfile profiler

These configuration changes will take effect when the OProfile profiler is restarted.
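Putting these steps together, a typical legacy-mode configuration sequence might look as follows (the
event, sample rate, and separation choice are illustrative; all commands are run as root):

~]# opcontrol --setup --vmlinux=/usr/lib/debug/lib/modules/`uname -r`/vmlinux
~]# opcontrol --event=CPU_CLK_UNHALTED:500000
~]# opcontrol --separate=library
~]# opcontrol --start

The last command starts the profiler as described in Section 21.4, “Starting and Stopping OProfile Using
Legacy Mode”.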

21.4. Starting and Stopping OProfile Using Legacy Mode


To start monitoring the system with OProfile, execute the following command as root:

opcontrol --start

Output similar to the following is displayed:

Using log file /var/lib/oprofile/oprofiled.log
Daemon started.
Profiler running.

The settings in /root/.oprofile/daemonrc are used.

The OProfile daemon, oprofiled, is started; it periodically writes the sample data to the
/var/lib/oprofile/samples/ directory. The log file for the daemon is located at
/var/lib/oprofile/oprofiled.log.


Disable the nmi_watchdog registers

On a Red Hat Enterprise Linux 7 system, the nmi_watchdog registers with the perf
subsystem. Due to this, the perf subsystem grabs control of the performance counter registers
at boot time, blocking OProfile from working.
To resolve this, either boot with the nmi_watchdog=0 kernel parameter set, or run the following
command as root to disable nmi_watchdog at run time:

echo 0 > /proc/sys/kernel/nmi_watchdog

To re-enable nmi_watchdog, use the following command as root:

echo 1 > /proc/sys/kernel/nmi_watchdog

To stop the profiler, execute the following command as root:

opcontrol --shutdown

21.5. Saving Data in Legacy Mode


Sometimes it is useful to save samples at a specific time. For example, when profiling an executable, it
may be useful to gather different samples based on different input data sets. If the number of events to
be monitored exceeds the number of counters available for the processor, multiple runs of OProfile can
be used to collect data, saving the sample data to different files each time.

To save the current set of sample files, execute the following command, replacing name with a unique
descriptive name for the current session.

opcontrol --save=name

The directory /var/lib/oprofile/samples/name/ is created and the current sample files are
copied to it.

To specify the session directory to hold the sample data, use the --session-dir option. If not specified, the
data is saved in the oprofile_data/ directory on the current path.
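For example, with operf the session directory can be set on the command line (the directory path and the
binary name are illustrative):

~]$ operf --session-dir=/home/user/profiling/run1 ./my_app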

21.6. Analyzing the Data


The same OProfile post-processing tools are used whether you collect your profile with operf or
opcontrol in legacy mode.

By default, operf stores the profiling data in the current_dir/oprofile_data directory. You can
change to a different location with the --session-dir option. The usual post-profiling analysis tools
such as opreport and opannotate can be used to generate profile reports. These tools search for
samples in current_dir/oprofile_data first. If that directory does not exist, the analysis tools use
the standard session directory of /var/lib/oprofile/. Statistics, such as total samples received and
lost samples, are written to the session_dir/samples/operf.log file.

When using legacy mode, the OProfile daemon, oprofiled, periodically collects the samples and
writes them to the /var/lib/oprofile/samples/ directory. Before reading the data, make sure all
data has been written to this directory by executing the following command as root:

opcontrol --dump

Each sample file name is based on the name of the executable. For example, the samples for the default
event on a Pentium III processor for /bin/bash become:

\{root\}/bin/bash/\{dep\}/\{root\}/bin/bash/CPU_CLK_UNHALTED.100000

The following tools are available to profile the sample data once it has been collected:

opreport
opannotate

Use these tools, along with the binaries profiled, to generate reports that can be further analyzed.

Back up the executable and the sample files

The executable being profiled must be used with these tools to analyze the data. If the executable must
change after the data is collected, back up the executable used to create the samples as well as the
sample files. Note that the sample file and the binary have to match; a backup is of no use if they do
not. The oparchive tool can be used to address this problem.

Samples for each executable are written to a single sample file. Samples from each dynamically linked
library are also written to a single sample file. While OProfile is running, if the executable being monitored
changes and a sample file for the executable exists, the existing sample file is automatically deleted.
Thus, if the existing sample file is needed, it must be backed up, along with the executable used to
create it, before replacing the executable with a new version. The OProfile analysis tools use the
executable file that created the samples during analysis. If the executable changes, the analysis tools will
be unable to analyze the associated samples. Refer to Section 21.5, “Saving Data in Legacy Mode” for
details on how to back up the sample file.

21.6.1. Using opreport


The opreport tool provides an overview of all the executables being profiled. To view this information,
type:

opreport

The following is part of a sample output:


Profiling through timer interrupt


TIMER:0|
samples| %|
------------------
25926 97.5212 no-vmlinux
359 1.3504 pi
65 0.2445 Xorg
62 0.2332 libvte.so.4.4.0
56 0.2106 libc-2.3.4.so
34 0.1279 libglib-2.0.so.0.400.7
19 0.0715 libXft.so.2.1.2
17 0.0639 bash
8 0.0301 ld-2.3.4.so
8 0.0301 libgdk-x11-2.0.so.0.400.13
6 0.0226 libgobject-2.0.so.0.400.7
5 0.0188 oprofiled
4 0.0150 libpthread-2.3.4.so
4 0.0150 libgtk-x11-2.0.so.0.400.13
3 0.0113 libXrender.so.1.2.2
3 0.0113 du
1 0.0038 libcrypto.so.0.9.7a
1 0.0038 libpam.so.0.77
1 0.0038 libtermcap.so.2.0.8
1 0.0038 libX11.so.6.2
1 0.0038 libgthread-2.0.so.0.400.7
1 0.0038 libwnck-1.so.4.9.0

Each executable is listed on its own line. The first column is the number of samples recorded for the
executable. The second column is the percentage of samples relative to the total number of samples.
The third column is the name of the executable.

Refer to the opreport man page for a list of available command line options, such as the -r option
used to sort the output from the executable with the smallest number of samples to the one with the
largest number of samples. You can also use the -t or --threshold option to trim the output of
opreport.
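For example, the following invocations reverse the sort order and hide entries that contribute less than a
chosen percentage of the total samples (the threshold value is illustrative):

~]# opreport -r
~]# opreport -t 2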

21.6.2. Using opreport on a Single Executable


To retrieve more detailed profiled information about a specific executable, use:

opreport mode executable

Replace executable with the full path to the executable to be analyzed. mode stands for one of the
following options:

-l
This option is used to list sample data by symbols. For example, running this command:

~]# opreport -l /lib/tls/libc-version.so

produces the following output:


samples % symbol name


12 21.4286 __gconv_transform_utf8_internal
5 8.9286 _int_malloc
4 7.1429 malloc
3 5.3571 __i686.get_pc_thunk.bx
3 5.3571 _dl_mcount_wrapper_check
3 5.3571 mbrtowc
3 5.3571 memcpy
2 3.5714 _int_realloc
2 3.5714 _nl_intern_locale_data
2 3.5714 free
2 3.5714 strcmp
1 1.7857 __ctype_get_mb_cur_max
1 1.7857 __unregister_atfork
1 1.7857 __write_nocancel
1 1.7857 _dl_addr
1 1.7857 _int_free
1 1.7857 _itoa_word
1 1.7857 calc_eclosure_iter
1 1.7857 fopen@@GLIBC_2.1
1 1.7857 getpid
1 1.7857 memmove
1 1.7857 msort_with_tmp
1 1.7857 strcpy
1 1.7857 strlen
1 1.7857 vfprintf
1 1.7857 write

The first column is the number of samples for the symbol, the second column is the percentage
of samples for this symbol relative to the overall samples for the executable, and the third
column is the symbol name.

To sort the output from the largest number of samples to the smallest (reverse order), use -r in
conjunction with the -l option.

-i symbol-name
List sample data specific to a symbol name. For example, running:

~]# opreport -l -i __gconv_transform_utf8_internal /lib/tls/libc-version.so

returns the following output:

samples % symbol name


12 100.000 __gconv_transform_utf8_internal

The first line is a summary for the symbol/executable combination.

The first column is the number of samples for the symbol. The second column is the
percentage of samples for the memory address relative to the total number of samples for the
symbol. The third column is the symbol name.

-d
This lists sample data by symbols with more detail than the -l option. For example, with the
following command:


~]# opreport -d -i __gconv_transform_utf8_internal /lib/tls/libc-version.so

this output is returned:

vma samples % symbol name


00a98640 12 100.000 __gconv_transform_utf8_internal
00a98640 1 8.3333
00a9868c 2 16.6667
00a9869a 1 8.3333
00a986c1 1 8.3333
00a98720 1 8.3333
00a98749 1 8.3333
00a98753 1 8.3333
00a98789 1 8.3333
00a98864 1 8.3333
00a98869 1 8.3333
00a98b08 1 8.3333

The data is the same as the -l option except that for each symbol, each virtual memory
address used is shown. For each virtual memory address, the number of samples and
percentage of samples relative to the number of samples for the symbol is displayed.

-e symbol-name…
With this option, you can exclude some symbols from the output. Replace symbol-name with
the comma-separated list of symbols you want to exclude.

session:name
Here, you can specify the full path to the session, a directory relative to the
/var/lib/oprofile/samples/ directory, or if you are using operf, a directory relative to
./oprofile_data/samples/.

21.6.3. Getting more detailed output on the modules


OProfile collects data on a system-wide basis for kernel- and user-space code running on the machine.
However, once a module is loaded into the kernel, the information about the origin of the kernel module
is lost. The module could have come from the initrd file on boot up, the directory with the various
kernel modules, or a locally created kernel module. As a result, when OProfile records samples for a
module, it just lists the samples for the modules for an executable in the root directory, but this is unlikely
to be the place with the actual code for the module. You will need to take some steps to make sure that
analysis tools get the proper executable.

To get a more detailed view of the actions of the module, you will need to either have the module
"unstripped" (that is installed from a custom build) or have the debuginfo package installed for the kernel.

Find out which kernel is running with the uname -a command, obtain the appropriate debuginfo
package and install it on the machine.

Then proceed with clearing out the samples from previous runs with the following command:

opcontrol --reset


To start the monitoring process, for example, on a machine with Westmere processor, run the following
command:

~]# opcontrol --setup --vmlinux=/usr/lib/debug/lib/modules/`uname -r`/vmlinux \
--event=CPU_CLK_UNHALTED:500000

Then the detailed information, for instance, for the ext4 module can be obtained with:

~]# opreport /ext4 -l --image-path /lib/modules/`uname -r`/kernel


CPU: Intel Westmere microarchitecture, speed 2.667e+06 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of
0x00 (No unit mask) count 500000
warning: could not check that the binary file /lib/modules/2.6.32-
191.el6.x86_64/kernel/fs/ext4/ext4.ko has not been modified since the profile was
taken. Results may be inaccurate.
samples % symbol name
1622 9.8381 ext4_iget
1591 9.6500 ext4_find_entry
1231 7.4665 __ext4_get_inode_loc
783 4.7492 ext4_ext_get_blocks
752 4.5612 ext4_check_dir_entry
644 3.9061 ext4_mark_iloc_dirty
583 3.5361 ext4_get_blocks
583 3.5361 ext4_xattr_get
479 2.9053 ext4_htree_store_dirent
469 2.8447 ext4_get_group_desc
414 2.5111 ext4_dx_find_entry

21.6.4. Using opannotate


The opannotate tool tries to match the samples for particular instructions to the corresponding lines in
the source code. The generated files show the samples for the lines at the left. It also
puts a comment at the beginning of each function listing the total samples for the function.

For this utility to work, the appropriate debuginfo package for the executable must be installed on the
system. On Red Hat Enterprise Linux, the debuginfo packages are not automatically installed with the
corresponding packages that contain the executable. You have to obtain and install them separately.

The general syntax for opannotate is as follows:

opannotate --search-dirs src-dir --source executable

These command line options are mandatory. Replace src-dir with a path to the directory containing
the source code and specify the executable to be analyzed. Refer to the opannotate man page for a
list of additional command line options.
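For example, assuming the executable's debuginfo package is installed and its sources are available
under /usr/src/debug/ (both the source directory and the executable path are illustrative), an invocation
might look as follows:

~]# opannotate --search-dirs /usr/src/debug/my_app-1.0 --source /usr/bin/my_app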

21.7. Understanding /dev/oprofile/


When using OProfile in legacy mode, the /dev/oprofile/ directory is used to store the file system for
OProfile. On the other hand, operf does not require /dev/oprofile/. Use the cat command to
display the values of the virtual files in this file system. For example, the following command displays the
type of processor OProfile detected:

cat /dev/oprofile/cpu_type


A directory exists in /dev/oprofile/ for each counter. For example, if there are 2 counters, the
directories /dev/oprofile/0/ and /dev/oprofile/1/ exist.

Each directory for a counter contains the following files:

count — The interval between samples.


enabled — If 0, the counter is off and no samples are collected for it; if 1, the counter is on and
samples are being collected for it.
event — The event to monitor.
extra — Used on machines with Nehalem processors to further specify the event to monitor.
kernel — If 0, samples are not collected for this counter event when the processor is in kernel-
space; if 1, samples are collected even if the processor is in kernel-space.
unit_mask — Defines which unit masks are enabled for the counter.
user — If 0, samples are not collected for the counter event when the processor is in user-space; if
1, samples are collected even if the processor is in user-space.

The values of these files can be retrieved with the cat command. For example:

cat /dev/oprofile/0/count

21.8. Example Usage


While OProfile can be used by developers to analyze application performance, it can also be used by
system administrators to perform system analysis. For example:

Determine which applications and services are used the most on a system — opreport can be
used to determine how much processor time an application or service uses. If the system is used for
multiple services but is underperforming, the services consuming the most processor time can be
moved to dedicated systems.
Determine processor usage — The CPU_CLK_UNHALTED event can be monitored to determine the
processor load over a given period of time. This data can then be used to determine if additional
processors or a faster processor might improve system performance.

21.9. OProfile Support for Java


OProfile allows you to profile dynamically compiled code (also known as "just-in-time" or JIT code) of the
Java Virtual Machine (JVM). OProfile in Red Hat Enterprise Linux 7 includes built-in support for the JVM
Tools Interface (JVMTI) agent library, which supports Java 1.5 and higher.

21.9.1. Profiling Java Code


To profile JIT code from the Java Virtual Machine with the JVMTI agent, add the following to the JVM
startup parameters:

-agentlib:jvmti_oprofile

Where jvmti_oprofile is a path to the oprofile agent. For a 64-bit JVM, the path looks as follows:

-agentlib:/usr/lib64/oprofile/libjvmti_oprofile.so

Currently, you can add one command-line option: --debug, which enables debugging mode.


Install the oprofile-jit package

The oprofile-jit package must be installed on the system in order to profile JIT code with OProfile.
With this package installed, you gain the capability to show method-level information.

Depending on the JVM that you are using, you may have to install the debuginfo package for the JVM.
For OpenJDK, this package is required; there is no debuginfo package for Oracle JDK. To keep the
debug information packages synchronized with their respective non-debug packages, you also need to
install the yum-plugin-auto-update-debug-info plug-in. This plug-in searches the debug information
repository for corresponding updates.

After successful setup, you can apply the standard profiling and analyzing tools described in previous
sections.

To learn more about Java support in OProfile, refer to the OProfile Manual, which is linked from
Section 21.12, “Additional Resources”.

21.10. Graphical Interface


Some OProfile preferences can be set with a graphical interface. To start it, execute the oprof_start
command as root at a shell prompt. To use the graphical interface, you will need to have the
oprofile-gui package installed.

After changing any of the options, save them by clicking the Save and quit button. The preferences
are written to /root/.oprofile/daemonrc, and the application exits.

Clicking the Save and quit button

Exiting the application does not stop OProfile from sampling.

On the Setup tab, to set events for the processor counters as discussed in Section 21.3.2, “Setting
Events to Monitor”, select the counter from the pulldown menu and select the event from the list. A brief
description of the event appears in the text box below the list. Only events available for the specific
counter and the specific architecture are displayed. The interface also displays whether the profiler is
running and some brief statistics about it.


Figure 21.1. OProfile Setup

On the right side of the tab, select the Profile kernel option to count events in kernel mode for the
currently selected event, as discussed in Section 21.3.3, “Separating Kernel and User-space Profiles”. If
this option is not selected, no samples are collected for the kernel.

Select the Profile user binaries option to count events in user mode for the currently selected
event, as discussed in Section 21.3.3, “Separating Kernel and User-space Profiles”. If this option is not
selected, no samples are collected for user applications.

Use the Count text field to set the sampling rate for the currently selected event as discussed in
Section 21.3.2.1, “Sampling Rate”.

If any unit masks are available for the currently selected event, as discussed in Section 21.3.2.2, “Unit
Masks”, they are displayed in the Unit Masks area on the right side of the Setup tab. Select the
checkbox beside the unit mask to enable it for the event.


On the Configuration tab, to profile the kernel, enter the name and location of the vmlinux file for
the kernel to monitor in the Kernel image file text field. To configure OProfile not to monitor the
kernel, select No kernel image.

Figure 21.2. OProfile Configuration

If the Verbose option is selected, the oprofiled daemon log includes more information.

If Per-application profiles is selected, OProfile generates per-application profiles for libraries.


This is equivalent to the opcontrol --separate=library command. If Per-application
profiles, including kernel is selected, OProfile generates per-application profiles for the kernel
and kernel modules as discussed in Section 21.3.3, “Separating Kernel and User-space Profiles”. This is
equivalent to the opcontrol --separate=kernel command.

To force data to be written to samples files as discussed in Section 21.6, “Analyzing the Data”, click the
Flush button. This is equivalent to the opcontrol --dump command.

To start OProfile from the graphical interface, click Start. To stop the profiler, click Stop. Exiting the
application does not stop OProfile from sampling.

21.11. OProfile and SystemTap


SystemTap is a tracing and probing tool that allows users to study and monitor the activities of the
operating system in fine detail. It provides information similar to the output of tools like netstat, ps,
top, and iostat; however, SystemTap is designed to provide more filtering and analysis options for
collected information.

While using OProfile is suggested for collecting data on where and why the processor spends
time in a particular area of code, it is less suitable for finding out why the processor stays idle.

You might want to use SystemTap when instrumenting specific places in code. Because SystemTap
allows you to run the code instrumentation without having to stop and restart the instrumented code, it is
particularly useful for instrumenting the kernel and daemons.

For more information on SystemTap, refer to Section 21.12.2, “Useful Websites” for the relevant
SystemTap documentation.

21.12. Additional Resources


This chapter only highlights OProfile and how to configure and use it. To learn more, refer to the
following resources.

21.12.1. Installed Docs


/usr/share/doc/oprofile-version/oprofile.html — OProfile Manual
oprofile man page — Discusses opcontrol, opreport, opannotate, and ophelp
operf man page

21.12.2. Useful Websites


http://oprofile.sourceforge.net/ — Contains the latest documentation, mailing lists, IRC channels, and
more.
SystemTap Beginners Guide — Provides basic instructions on how to use SystemTap to monitor
different subsystems of Red Hat Enterprise Linux in finer detail.


Part VI. Kernel, Module and Driver Configuration


This part covers various tools that assist administrators with kernel customization.


Chapter 22. Working with the GRUB 2 Boot Loader


Red Hat Enterprise Linux 7 is distributed with the GNU GRand Unified Boot loader (GRUB) version 2
boot loader, which allows the user to select an operating system or kernel to be loaded at system boot
time. GRUB 2 also allows the user to pass arguments to the kernel.

22.1. Configuring the GRUB 2 Boot Loader


GRUB 2 reads its configuration from the /boot/grub2/grub.cfg file, or
/boot/efi/EFI/redhat/grub.cfg on UEFI machines. This file contains menu information;
however, it should not be edited directly, as it is generated by the /usr/sbin/grub2-mkconfig utility based
on Linux kernels located in the /boot/ directory, template files located in /etc/grub.d/, and custom
settings in the /etc/default/grub file. Any manual edits could therefore cause the changes to be lost
during updates. The grub.cfg file is automatically updated each time a new kernel is installed. To
update this configuration file manually, type the following at a shell prompt as root:

~]# grub2-mkconfig -o /boot/grub2/grub.cfg

Alternatively, on UEFI systems, run the following:

~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

Among various code snippets and directives, the grub.cfg configuration file contains one or more
menuentry blocks, each representing a single GRUB 2 boot menu entry. These blocks always start with
the menuentry keyword followed by a title, list of options, and an opening curly bracket, and end with a
closing curly bracket. Anything between the opening and closing bracket should be indented. For
example, the following is a sample menuentry block for Red Hat Enterprise Linux 7 with Linux kernel
3.8.0-0.40.el7.x86_64:

menuentry 'Red Hat Enterprise Linux Client' --class red --class gnu-linux --class
gnu --class os $menuentry_id_option 'gnulinux-simple-c60731dc-9046-4000-9182-
64bdcce08616' {
load_video
set gfxpayload=keep
insmod gzio
insmod part_msdos
insmod xfs
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-
efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1' 19d9e294-65f8-
4e37-8e73-d41d6daa6e58
else
search --no-floppy --fs-uuid --set=root 19d9e294-65f8-4e37-8e73-
d41d6daa6e58
fi
echo 'Loading Linux 3.8.0-0.40.el7.x86_64 ...'
linux /vmlinuz-3.8.0-0.40.el7.x86_64 root=/dev/mapper/rhel-root ro
rd.md=0 rd.dm=0 rd.lvm.lv=rhel/swap crashkernel=auto rd.luks=0 vconsole.keymap=us
rd.lvm.lv=rhel/root rhgb quiet
echo 'Loading initial ramdisk ...'
initrd /initramfs-3.8.0-0.40.el7.x86_64.img
}

Each menuentry block that represents an installed Linux kernel contains linux (linuxefi on UEFI
systems) and initrd directives followed by the path to the kernel and the initramfs image
respectively. If a separate /boot partition was created, the paths to the kernel and the initramfs
image are relative to /boot. In the example above, the initrd /initramfs-3.8.0-
0.40.el7.x86_64.img line means that the initramfs image is actually located at
/boot/initramfs-3.8.0-0.40.el7.x86_64.img when the root file system is mounted, and
likewise for the kernel path.

The kernel version number as given on the linux /vmlinuz-kernel_version line must match the
version number of the initramfs image given on the initrd /initramfs-
kernel_version.img line of each menuentry block. For more information on how to verify the initial
RAM disk image, see Section 23.5, “Verifying the Initial RAM Disk Image”.

The initrd directive in grub.cfg refers to an initramfs image

In menuentry blocks, the initrd directive must point to the location (relative to the /boot/
directory if it is on a separate partition) of the initramfs file corresponding to the same kernel
version. This directive is called initrd because the previous tool which created initial RAM disk
images, mkinitrd, created what were known as initrd files. The grub.cfg directive remains
initrd to maintain compatibility with other tools. The file-naming convention of systems using
the dracut utility to create the initial RAM disk image is initramfs-kernel_version.img.
For information on using Dracut, refer to Section 23.5, “Verifying the Initial RAM Disk Image”.

22.2. Customizing GRUB 2 Menu


GRUB 2 scripts search the user's computer and build a boot menu based on what operating systems the
scripts find. To reflect the latest system boot options, the boot menu is rebuilt automatically when the
kernel is updated or a new kernel is added.

However, users may want to build a menu containing specific entries or to have the entries in a specific
order. GRUB 2 allows basic customization of the boot menu to give users control of what actually
appears on the screen.

GRUB 2 uses a series of scripts to build the menu; these are located in the /etc/grub.d/ directory
and include:

00_header, which loads GRUB 2 settings from the /etc/default/grub file.


10_linux, which locates kernels in the default partition of Red Hat Enterprise Linux.
30_os-prober, which builds entries for operating systems found on other partitions.
40_custom, a template, which can be used to create additional menu entries.

Scripts from the /etc/grub.d/ directory are read in alphabetical order and can therefore be renamed
to change the boot order of specific menu entries.
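For example, to have the entries for other operating systems generated before the Red Hat
Enterprise Linux entries, the os-prober script could be renamed so that it sorts earlier, and the
configuration regenerated afterwards (the new name is illustrative):

~]# mv /etc/grub.d/30_os-prober /etc/grub.d/09_os-prober
~]# grub2-mkconfig -o /boot/grub2/grub.cfg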

Causing the GRUB 2 boot menu to display

If you set the GRUB_TIMEOUT key in the /etc/default/grub file to 0, GRUB 2 will not display
its list of bootable kernels when the system starts up. In order to display this list when booting,
press and hold any alphanumeric key while and immediately after BIOS information is displayed,
and GRUB 2 will present you with the GRUB menu.


22.2.1. Changing the Default Boot Entry


By default, the saved value is used for the GRUB_DEFAULT key in the /etc/default/grub file. This
instructs GRUB 2 to load the last successfully loaded operating system. However, GRUB 2 does not support
using numeric values for this key to change the default order in which the operating systems are loaded.
To specify which operating system should be loaded first, use the grub2-set-default command,
for example:

~]# grub2-set-default 2

Note that the position of a menu entry in the list is denoted by a number, starting with zero; therefore, in
the example above, the third entry will be loaded.

Before rebooting the machine, update the configuration file by running grub2-mkconfig -o
/boot/grub2/grub.cfg or, on UEFI systems, grub2-mkconfig -o
/boot/efi/EFI/redhat/grub.cfg.

22.2.2. Editing an Entry

Kernel Parameters
To use a kernel parameter only during a single boot process, when the GRUB 2 boot menu appears,
move the cursor to the kernel you want to start, press the e key to edit the line with the kernel and add
the kernel parameter. For example, to run the system in emergency mode, add the emergency
parameter at the end of the linux line:

linux /vmlinuz-3.10.0-0.rc4.59.el7.x86_64 root=/dev/mapper/rhel-root ro


rd.md=0 rd.dm=0 rd.lvm.lv=rhel/swap crashkernel=auto rd.luks=0 vconsole.keymap=us
rd.lvm.lv=rhel/root rhgb quiet emergency

These settings are, however, not persistent and apply only for a single boot. To make the settings
persistent, edit the value of the GRUB_CMDLINE_LINUX_DEFAULT key in the /etc/default/grub file.
For example, if you want to enable emergency mode for each boot, edit the entry as follows:

GRUB_CMDLINE_LINUX_DEFAULT="emergency"

Note that you can specify multiple parameters for the GRUB_CMDLINE_LINUX_DEFAULT key, similarly
to adding the parameters in the GRUB 2 boot menu.

Then, run the grub2-mkconfig -o /boot/grub2/grub.cfg command, or grub2-mkconfig -o
/boot/efi/EFI/redhat/grub.cfg on UEFI systems, to update the configuration file.

22.2.3. Adding a new Entry


When executing the grub2-mkconfig command, GRUB 2 searches for Linux kernels and other
operating systems based on the files located in the /etc/grub.d/ directory. The 10_linux script
searches for installed Linux kernels on the same partition. The 30_os-prober script searches for other
operating systems. Menu entries are also automatically added to the boot menu when updating the
kernel.

The 40_custom file located in the /etc/grub.d/ directory is a template for custom entries and looks
as follows:


#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.

This file can be edited or copied. Note that as a minimum, a valid menu entry must include at least the
following:

menuentry "<Title>"{
<Data>
}

22.2.4. Using only a Custom Menu


If you do not wish menu entries to be updated automatically, you can create a custom menu.

Backup of /etc/grub.d/

Before proceeding, back up the contents of the /etc/grub.d/ directory in case you need to
revert the changes later.

1. Copy and paste the contents of the /boot/grub2/grub.cfg or


/boot/efi/EFI/redhat/grub.cfg file into the /etc/grub.d/40_custom file below the
existing header lines; the executable part of the 40_custom script has to be preserved.
2. Remove lines above the first menu entry except the existing header lines above.
This is an example of a custom 40_custom script:


#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.

menuentry 'First custom entry' --class red --class gnu-linux --class gnu --
class os $menuentry_id_option 'gnulinux-3.10.0-67.el7.x86_64-advanced-
32782dd0-4b47-4d56-a740-2076ab5e5976' {
load_video
set gfxpayload=keep
insmod gzio
insmod part_msdos
insmod xfs
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1'
7885bba1-8aa7-4e5d-a7ad-821f4f52170a
else
search --no-floppy --fs-uuid --set=root 7885bba1-8aa7-4e5d-a7ad-
821f4f52170a
fi
linux16 /vmlinuz-3.10.0-67.el7.x86_64 root=/dev/mapper/rhel-root ro
rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16 rd.lvm.lv=rhel/swap
vconsole.keymap=us crashkernel=auto rhgb quiet LANG=en_US.UTF-8
initrd16 /initramfs-3.10.0-67.el7.x86_64.img
}
menuentry 'Second custom entry' --class red --class gnu-linux --class gnu --
class os $menuentry_id_option 'gnulinux-0-rescue-
07f43f20a54c4ce8ada8b70d33fd001c-advanced-32782dd0-4b47-4d56-a740-
2076ab5e5976' {
load_video
insmod gzio
insmod part_msdos
insmod xfs
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1'
7885bba1-8aa7-4e5d-a7ad-821f4f52170a
else
search --no-floppy --fs-uuid --set=root 7885bba1-8aa7-4e5d-a7ad-
821f4f52170a
fi
linux16 /vmlinuz-0-rescue-07f43f20a54c4ce8ada8b70d33fd001c
root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-
sun16 rd.lvm.lv=rhel/swap vconsole.keymap=us crashkernel=auto rhgb quiet
initrd16 /initramfs-0-rescue-07f43f20a54c4ce8ada8b70d33fd001c.img
}

3. Remove all files from the /etc/grub.d/ directory except the following:
00_header,
40_custom,
and README.
4. Edit, add, or remove menu entries in the 40_custom file as desired.
Alternatively, if you wish to keep the files in the /etc/grub.d/ directory, make them
unexecutable by running the chmod a-x <file_name> command.


5. Update the grub.cfg file by running the grub2-mkconfig -o /boot/grub2/grub.cfg
command or grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg on UEFI systems.

22.3. GRUB 2 Password Protection


GRUB 2 supports basic password protection that uses unencrypted passwords in the GRUB 2 template
files. To enable this functionality, specify a superuser, who can reach the protected entries, and the
password. Other users can be specified to access these entries as well. To specify which menu entries
should be password-protected, edit the /etc/grub.d/00_header file. Alternatively, if you wish to
preserve the settings after GRUB 2 upgrades, modify the /etc/grub.d/40_custom file.

22.3.1. Setting Up Users and Password Protection, Identifying Menu Entries


1. To specify a superuser, add the following lines in the /etc/grub.d/00_header file, where
john is the name of the user designated as the superuser, and johnspassword is the
superuser's password:

cat <<EOF
set superusers="john"
password john johnspassword
EOF

2. To allow other users to access the menu entries, add additional lines per user at the end of the
/etc/grub.d/00_header file.

cat <<EOF
set superusers="john"
password john johnspassword
password jane janespassword
EOF

3. When the users and passwords are set up, specify the menu entries that should be password-
protected. If you do not specify the menu entries, all menu entries will be password-protected by
default.
4. Run the grub2-mkconfig -o /boot/grub2/grub.cfg command to update the
configuration file, or, on UEFI systems, the grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg command.

22.3.2. Preserving the Setup after GRUB 2 Updates


Alternatively, if you wish to preserve the changes after future GRUB 2 updates, add lines in the
/etc/grub.d/40_custom file in a similar fashion:


set superusers="john"
password john johnspassword
password jane janespassword

menuentry 'Red Hat Enterprise Linux Client' {


set root=(hd0,msdos1)
linux /vmlinuz
}

menuentry 'Fedora' --user jane {


set root=(hd0,msdos2)
linux /vmlinuz
}

In the above example, john is the superuser and can therefore boot any menu entry, use the GRUB 2
command line, and edit items of the GRUB 2 menu during boot. In this case, john can access both Red
Hat Enterprise Linux Client and Fedora. Anyone can boot Red Hat Enterprise Linux Client. User jane
can boot Fedora since she was granted the permission in the configuration. If you do not specify a menu
entry, the password protection function will not work.

After you have made the changes in the template file, run the following command to update the GRUB 2
configuration file:

~]# grub2-mkconfig -o /boot/grub2/grub.cfg

Alternatively, on UEFI systems, run the following:

~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

Even if you do not specify a user and password for a menu entry, the superuser's password will be asked
for when accessing such a system.

22.3.3. Password Encryption


By default, passwords are saved in plain text in GRUB 2 scripts. Although the files cannot be accessed
on boot without the correct password, security can be improved by encrypting the password using the
grub-mkpasswd-pbkdf2 command. This command converts a desired password into a long hash,
which is placed in the GRUB 2 scripts instead of the plain-text password.

1. To generate an encrypted password, run the grub-mkpasswd-pbkdf2 command on the command line as root.
2. Enter the desired password when prompted and repeat it. The command then outputs your
password in an encrypted form; an example transcript is shown after the format examples below.
3. Copy the hash, and paste it in the template file where you configured the users, that is, either in
/etc/grub.d/00_header or /etc/grub.d/40_custom. The following format applies for the
00_header file:

cat <<EOF
set superusers="john"
password_pbkdf2 john
grub.pbkdf2.sha512.10000.19074739ED80F115963D984BDCB35AA671C24325755377C3E9B
014D862DA6ACC77BC110EED41822800A87FD3700C037320E51E9326188D53247EC0722DDF15F
C.C56EC0738911AD86CEA55546139FEBC366A393DF9785A8F44D3E51BF09DB980BAFEF85281
CBBC56778D8B19DC94833EA8342F7D73E3A1AA30B205091F1015A85
EOF


The following format applies for the 40_custom file:

set superusers="john"
password_pbkdf2 john
grub.pbkdf2.sha512.10000.19074739ED80F115963D984BDCB35AA671C24325755377C3E9B
014D862DA6ACC77BC110EED41822800A87FD3700C037320E51E9326188D53247EC0722DDF15F
C.C56EC0738911AD86CEA55546139FEBC366A393DF9785A8F44D3E51BF09DB980BAFEF85281
CBBC56778D8B19DC94833EA8342F7D73E3A1AA30B205091F1015A85
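
As an illustration of step 2 above, an interactive session might look similar to the following. The prompt wording is approximate and the hash is shortened here; on some systems the command is installed as grub2-mkpasswd-pbkdf2:

~]# grub-mkpasswd-pbkdf2
Enter password:
Reenter password:
PBKDF2 hash of your password is grub.pbkdf2.sha512.10000.19074739ED80F1[output truncated]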

Warning

If you do not use the correct format, or modify the configuration in an incorrect way, you might be
unable to boot your system.

22.4. Re-Installing GRUB 2


Reinstalling GRUB 2 might be a useful and easier way to fix certain problems usually caused by an
incorrect installation of GRUB 2, missing files, or a broken system. Other reasons to reinstall GRUB 2
include the following:

Upgrading from the previous version of GRUB.


The user requires the GRUB 2 boot loader to control installed operating systems. However, some
operating systems are installed with their own boot loaders. Reinstalling GRUB 2 returns control to
the desired operating system.
Adding the boot information to another drive.

22.4.1. Using the grub2-install Command


When using the grub2-install command, the boot information is updated and missing files are
restored. Note that the files are restored only if they are not corrupted. If the /boot/grub2/ directory is
missing, it will be recreated.

Use the grub2-install <device> command to reinstall GRUB 2 if the system is operating normally.
For example:

~]# grub2-install /dev/sda

Invoke the following command to re-generate the configuration file:

~]# grub2-mkconfig -o /boot/grub2/grub.cfg

On UEFI systems, use the following command:

~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

22.4.2. Removing and Re-Installing GRUB 2


This method completely removes all GRUB 2 configuration files and system settings, and is therefore
used when the user wishes to reset all configuration settings to the default values. Removing and
subsequent reinstalling of GRUB 2 fixes failures caused by corrupted files and incorrect configuration. To
do so, as root, follow these steps:


1. Run the yum remove grub2-tools command to remove the grub2 and grub2-tools packages,
or, on UEFI systems, to remove the grub2-efi and grub2-tools packages.
2. Run the yum install grub2 command, or yum install grub2-efi on UEFI systems, if you
wish to have the grub2-tools package installed as a dependency with support for all platforms.
3. Run the grub2-mkconfig -o /boot/grub2/grub.cfg or grub2-mkconfig -o
/boot/efi/EFI/redhat/grub.cfg command to update the configuration file.
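
For example, on a BIOS-based system the three steps above could be run as the following command sequence (illustrative only; use the UEFI package names and configuration path where appropriate):

~]# yum remove grub2-tools
~]# yum install grub2
~]# grub2-mkconfig -o /boot/grub2/grub.cfg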

22.5. GRUB 2 over Serial Console


If you use computers with no display or keyboard, it can be very useful to control the machines through
serial communications.

22.5.1. Configuring GRUB 2


In order to use GRUB 2 over a serial line, add the following two lines in the /etc/default/grub file:

GRUB_TERMINAL="serial"
GRUB_SERIAL_COMMAND="serial --speed=9600 --unit=0 --word=8 --parity=no --stop=1"

The first line disables the graphical terminal. Note that specifying the GRUB_TERMINAL key overrides
values of GRUB_TERMINAL_INPUT and GRUB_TERMINAL_OUTPUT. On the second line, adjust the
baud rate, parity, and other values to fit your environment and hardware.

Once you have completed the changes in the /etc/default/grub file, it is necessary to execute the
grub2-mkconfig -o /boot/grub2/grub.cfg command, or grub2-mkconfig -o
/boot/efi/EFI/redhat/grub.cfg on UEFI systems, to update the GRUB 2 configuration file.

22.5.2. Using screen to Connect to the Serial Console


The screen tool serves as a capable serial terminal. To install it, run as root:

~]# yum install screen

To connect to your machine using the serial console, run the following command:

~]$ screen /dev/<console_port>

By default, if no option is specified, screen uses the standard 9600 baud rate. To set a different baud
rate, run:

~]$ screen /dev/<console_port> 115200

To end the session in screen, press Ctrl+a, type :quit and press Enter.

Consult the screen manual pages for additional options and detailed information.

22.6. Terminal Menu Editing During Boot


Menu entries can be modified and arguments passed to the kernel on boot. This is done using the menu
entry editor interface, which is triggered when pressing the e key on a selected menu entry in the boot
loader menu. The Esc key discards any changes and reloads the standard menu interface. The c key
loads the command line interface.


The command line interface is the most basic GRUB interface, but it is also the one that grants the most
control. The command line makes it possible to type any relevant GRUB commands followed by the
Enter key to execute them. This interface offers some advanced, shell-like features, including
Tab key completion based on context, and Ctrl+a to move to the beginning of a line and Ctrl+e to
move to the end of a line. In addition, the arrow, Home, End, and Delete keys work as they do in the
bash shell.

22.6.1. Booting to Rescue Mode


Rescue mode provides a convenient single-user environment and allows you to repair your system in
situations when it is unable to complete a regular booting process. In rescue mode, the system attempts
to mount all local file systems and start some important system services, but it does not activate network
interfaces or allow more users to be logged into the system at the same time. In Red Hat Enterprise
Linux 7, rescue mode is equivalent to single user mode and requires the root password.

1. To enter rescue mode during boot, on the GRUB 2 boot screen, press the e key for edit.
2. Add the following parameter at the end of the linux line, or linuxefi on UEFI systems:

systemd.unit=rescue.target

Note that the equivalent parameters s and single can be passed to the kernel as well; a fully edited example line is shown after this procedure.
3. Press Ctrl+x to boot the system with the parameter.
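
For illustration, appending the parameter to the linux16 line from the custom menu entry example earlier in this chapter would produce a line similar to the following (the kernel version and the other options will differ on your system):

linux16 /vmlinuz-3.10.0-67.el7.x86_64 root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16 rd.lvm.lv=rhel/swap vconsole.keymap=us crashkernel=auto rhgb quiet LANG=en_US.UTF-8 systemd.unit=rescue.target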

22.6.2. Booting to Emergency Mode


Emergency mode provides the most minimal environment possible and allows you to repair your system
even in situations when the system is unable to enter rescue mode. In emergency mode, the system
mounts the root file system only for reading, does not attempt to mount any other local file systems, does
not activate network interfaces, and starts only a few essential services. In Red Hat Enterprise Linux 7,
emergency mode requires the root password.

1. To enter emergency mode, on the GRUB 2 boot screen, press the e key for edit.
2. Add the following parameter at the end of the linux line, or linuxefi on UEFI systems:

systemd.unit=emergency.target

Note that equivalent parameters, emergency and -b, can be passed to the kernel as well.
3. Press Ctrl+x to boot the system with the parameter.

22.6.3. Recovering Root Password


Setting up the root password is a mandatory part of the Red Hat Enterprise Linux 7 installation. If you
forget or lose your password, it is possible to reset it.

Note that in GRUB 2, resetting the password is no longer performed in single-user mode as it was in
GRUB included in Red Hat Enterprise Linux 6. The root password is now required to operate in
single-user mode as well as in emergency mode.

Procedure 22.1. Resetting the Root Password

1. Start the system and, on the GRUB 2 boot screen, press the e key for edit.
2. Add the following parameter at the end of the linux line, or linuxefi on UEFI systems:


init=/bin/sh

The Linux kernel will run the /bin/sh shell rather than the system init daemon. Therefore, some
functions may be limited or missing.
3. Press Ctrl+x to boot the system with the parameter.
The shell prompt appears.
4. Note that the file system is mounted read-only. You will not be allowed to change the password if
the file system is not writable.
To remount the file system as writable, run the mount -o remount,rw / command.
5. Run the passwd command and follow the instructions displayed on the command line to change
the root password.
Note that if the system is not writable, the passwd tool fails with the following error:

Authentication token manipulation error

6. Run the exec /sbin/init command to resume the initialization and finish the system boot.
Running the exec command with another command specified replaces the shell and creates a
new process; init in this case.
Alternatively, if you wish to reboot the system, run the exec /sbin/reboot command instead.
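
The following condensed transcript illustrates steps 4 through 6. The shell prompt and the passwd messages are approximations and may differ slightly on your system:

sh-4.2# mount -o remount,rw /
sh-4.2# passwd
Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
sh-4.2# exec /sbin/init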

22.6.4. Lost Root Password


Setting up the root password is a mandatory part of the Red Hat Enterprise Linux 7 installation. If you
forget or lose your password, it is possible to reset it.

Note that in GRUB 2, resetting the password is no longer performed via single-user mode as it was in
GRUB shipped in Red Hat Enterprise Linux 6. The root password is now required to operate in single-
user mode as well as in emergency mode.

Procedure 22.2. Resetting the Root Password

1. Start the system and, on the GRUB 2 boot screen, press the E key for edit.
2. Add the following parameter at the end of the linux line, or linuxefi on UEFI systems:

init=/bin/bash

The Linux kernel will run the /bin/bash shell rather than the system init. Therefore, some functions
may be limited or missing.
3. Press Ctrl+x to boot the system with the parameter.
The shell prompt appears.
4. Note that the file system is mounted read-only. You will not be allowed to change the password if
the file system is not writable.
To remount the file system as writable, run the mount -o remount,rw / command.
5. Run the passwd command and follow the instructions displayed on the command line to change
the root password.
Note that if the system is not writable, the passwd tool fails with the following error:

Authentication token manipulation error

6. Run the exec /sbin/init command to resume the initialization and finish the system boot.


Running the exec command with another command specified replaces the shell and creates a
new process; init in this case.
Alternatively, if you wish to reboot the system, run the exec /sbin/reboot command instead.

22.7. Additional Resources


Please see the following resources for more information on the GRUB 2 boot loader:

/usr/share/doc/grub2-tools-<version-number> — This directory contains information
about using and configuring GRUB 2. <version-number> corresponds to the version of the GRUB
2 package installed.
info grub2 — The GRUB 2 info page contains a tutorial, a user reference manual, a programmer
reference manual, and a FAQ document about GRUB 2 and its usage.
Red Hat Installation Guide — The Installation Guide provides basic information on GRUB 2, for
example, installation, terminology, interfaces, and commands.


Chapter 23. Manually Upgrading the Kernel


The Red Hat Enterprise Linux kernel is custom-built by the Red Hat Enterprise Linux kernel team to
ensure its integrity and compatibility with supported hardware. Before Red Hat releases a kernel, it must
first pass a rigorous set of quality assurance tests.

Red Hat Enterprise Linux kernels are packaged in the RPM format so that they are easy to upgrade and
verify using the Yum or PackageKit package managers. PackageKit automatically queries the Red Hat
Network servers and informs you of packages with available updates, including kernel packages.

This chapter is therefore only useful for users who need to manually update a kernel package using the
rpm command instead of yum .

Use Yum to install kernels whenever possible

Whenever possible, use either the Yum or PackageKit package manager to install a new kernel
because they always install a new kernel instead of replacing the current one, which could
potentially leave your system unable to boot.

Building a custom kernel is not supported

Building a custom kernel is not supported by the Red Hat Global Services Support team, and
therefore is not explored in this manual.

For more information on installing kernel packages with Yum, refer to Section 5.1.2, “Updating
Packages”. For information on Red Hat Network, see the related documents on the Customer Portal.

23.1. Overview of Kernel Packages


Red Hat Enterprise Linux contains the following kernel packages:

kernel — Contains the kernel for single, multicore and multiprocessor systems.
kernel-debug — Contains a kernel with numerous debugging options enabled for kernel diagnosis, at
the expense of reduced performance.
kernel-devel — Contains the kernel headers and makefiles sufficient to build modules against the
kernel package.
kernel-debug-devel — Contains files required for building kernel modules to match the kernel-debug
package.
kernel-headers — Includes the C header files that specify the interface between the Linux kernel and
user-space libraries and programs. The header files define structures and constants that are needed
for building most standard programs.
linux-firmware — Contains all of the firmware files that are required by various devices to operate.
perf — This package contains supporting scripts and documentation for the perf tool shipped in each
kernel image subpackage.
kernel-abi-whitelists — Contains information pertaining to the Red Hat Enterprise Linux kernel ABI,
including lists of kernel symbols that are needed by external Linux kernel modules and a yum plug-
in to aid enforcement.
kernel-tools — Contains tools for manipulating the Linux kernel and supporting documentation.


23.2. Preparing to Upgrade


Before upgrading the kernel, it is recommended that you take some precautionary steps.

First, ensure that working boot media exists for the system. If the boot loader is not configured properly
to boot the new kernel, you can use this media to boot into Red Hat Enterprise Linux.

USB media often comes in the form of flash devices sometimes called pen drives, thumb disks, or keys,
or as an externally-connected hard disk device. Almost all media of this type is formatted as a VFAT file
system. You can create bootable USB media on media formatted as ext2, ext3, or VFAT.

You can transfer a distribution image file or a minimal boot media image file to USB media. Make sure
that sufficient free space is available on the device. Around 4 GB is required for a distribution DVD
image, around 700 MB for a distribution CD image, or around 10 MB for a minimal boot media image.

You must have a copy of the boot.iso file from a Red Hat Enterprise Linux installation DVD, or
installation CD-ROM #1, and you need a USB storage device formatted with the VFAT file system and
around 16 MB of free space. The following procedure will not affect existing files on the USB storage
device unless they have the same path names as the files that you copy onto it. To create USB boot
media, perform the following commands as the root user:

1. Install the syslinux package if it is not installed on your system. To do so, as root, run the yum
install syslinux command.
2. Install the SYSLINUX bootloader on the USB storage device:

~]# syslinux /dev/sdX1

...where sdX is the device name.


3. Create mount points for boot.iso and the USB storage device:

~]# mkdir /mnt/isoboot /mnt/diskboot

4. Mount boot.iso:

~]# mount -o loop boot.iso /mnt/isoboot

5. Mount the USB storage device:

~]# mount /dev/<sdX1> /mnt/diskboot

6. Copy the ISOLINUX files from the boot.iso to the USB storage device:

~]# cp /mnt/isoboot/isolinux/* /mnt/diskboot

7. Use the isolinux.cfg file from boot.iso as the syslinux.cfg file for the USB device:

~]# grep -v local /mnt/isoboot/isolinux/isolinux.cfg > /mnt/diskboot/syslinux.cfg

8. Unmount boot.iso and the USB storage device:

~]# umount /mnt/isoboot /mnt/diskboot


9. You should reboot the machine with the boot media and verify that you are able to boot with it
before continuing.

Alternatively, on systems with a floppy drive, you can create a boot diskette by installing the mkbootdisk
package and running the mkbootdisk command as root. Refer to the mkbootdisk man page (man mkbootdisk)
after installing the package for usage information.

To determine which kernel packages are installed, execute the command yum list installed
"kernel-*" at a shell prompt. The output will comprise some or all of the following packages,
depending on the system's architecture, and the version numbers may differ:

~]# yum list installed "kernel-*"


kernel.x86_64 3.10.0-54.0.1.el7 @rhel7/7.0
kernel-devel.x86_64 3.10.0-54.0.1.el7 @rhel7
kernel-headers.x86_64 3.10.0-54.0.1.el7 @rhel7/7.0

From the output, determine which packages need to be downloaded for the kernel upgrade. For a single
processor system, the only required package is the kernel package. Refer to Section 23.1, “Overview of
Kernel Packages” for descriptions of the different packages.

23.3. Downloading the Upgraded Kernel


There are several ways to determine if an updated kernel is available for the system.

Security Errata — Refer to https://access.redhat.com/site/security/updates/active/ for information on security errata, including kernel upgrades that fix security issues.
Via Red Hat Network — Download and install the kernel RPM packages. Red Hat Network can
download the latest kernel, upgrade the kernel on the system, create an initial RAM disk image if
needed, and configure the boot loader to boot the new kernel. For more information, see the related
documents located at https://access.redhat.com/site/documentation/en-US/.

If Red Hat Network was used to download and install the updated kernel, follow the instructions in
Section 23.5, “Verifying the Initial RAM Disk Image” and Section 23.6, “Verifying the Boot Loader”, only
do not change the kernel to boot by default. Red Hat Network automatically changes the default kernel to
the latest version. To install the kernel manually, continue to Section 23.4, “Performing the Upgrade”.

23.4. Performing the Upgrade


After retrieving all of the necessary packages, it is time to upgrade the existing kernel.

Keep the old kernel when performing the upgrade

It is strongly recommended that you keep the old kernel in case there are problems with the new
kernel.

At a shell prompt, change to the directory that contains the kernel RPM packages. Use the -i argument with
the rpm command to keep the old kernel. Do not use the -U option, since it overwrites the currently
installed kernel, which creates boot loader problems. For example:

~]# rpm -ivh kernel-<kernel_version>.<arch>.rpm

The next step is to verify that the initial RAM disk image has been created. Refer to Section 23.5,


“Verifying the Initial RAM Disk Image” for details.

23.5. Verifying the Initial RAM Disk Image


The job of the initial RAM disk image is to preload the block device modules, such as for IDE, SCSI or
RAID, so that the root file system, on which those modules normally reside, can then be accessed and
mounted. On Red Hat Enterprise Linux 7 systems, whenever a new kernel is installed using either the
Yum, PackageKit, or RPM package manager, the Dracut utility is always called by the installation scripts
to create an initramfs (initial RAM disk image).

On all architectures other than IBM eServer System i (see Section 23.5, “Verifying the Initial RAM Disk
Image and Kernel on IBM eServer System i”), you can create an initramfs by running the dracut
command. However, you usually do not need to create an initramfs manually: this step is automatically
performed if the kernel and its associated packages are installed or upgraded from RPM packages
distributed by Red Hat.

You can verify that an initramfs corresponding to your current kernel version exists and is specified
correctly in the grub.cfg configuration file by following this procedure:

Procedure 23.1. Verifying the Initial RAM Disk Image

1. As root, list the contents in the /boot/ directory and find the kernel
(vmlinuz-<kernel_version>) and initramfs-<kernel_version> with the latest (most
recent) version number:
Example 23.1. Ensuring that the kernel and initramfs versions match

~]# ls /boot/
config-3.10.0-67.el7.x86_64
config-3.10.0-78.el7.x86_64
efi
grub
grub2
initramfs-0-rescue-07f43f20a54c4ce8ada8b70d33fd001c.img
initramfs-3.10.0-67.el7.x86_64.img
initramfs-3.10.0-67.el7.x86_64kdump.img
initramfs-3.10.0-78.el7.x86_64.img
initramfs-3.10.0-78.el7.x86_64kdump.img
initrd-plymouth.img
symvers-3.10.0-67.el7.x86_64.gz
symvers-3.10.0-78.el7.x86_64.gz
System.map-3.10.0-67.el7.x86_64
System.map-3.10.0-78.el7.x86_64
vmlinuz-0-rescue-07f43f20a54c4ce8ada8b70d33fd001c
vmlinuz-3.10.0-67.el7.x86_64
vmlinuz-3.10.0-78.el7.x86_64

Example 23.1, “Ensuring that the kernel and initramfs versions match” shows that:
we have three kernels installed (or, more correctly, three kernel files are present in /boot/),
the latest kernel is vmlinuz-3.10.0-78.el7.x86_64, and
an initramfs file matching our kernel version, initramfs-3.10.0-78.el7.x86_64.img, also exists.


initrd files in the /boot directory are not the same as initramfs files

In the /boot/ directory you may find several initramfs-<version>kdump.img files. These
are special files created by the Kdump mechanism for kernel debugging purposes, are not
used to boot the system, and can safely be ignored.

2. (Optional) If your initramfs-<kernel_version> file does not match the version of the latest
kernel in /boot/, or, in certain other situations, you may need to generate an initramfs file
with the Dracut utility. Simply invoking dracut as root without options causes it to generate an
initramfs file in the /boot/ directory for the latest kernel present in that directory:

~]# dracut

You must use the --force option if you want dracut to overwrite an existing initramfs (for
example, if your initramfs has become corrupt). Otherwise dracut will refuse to overwrite the
existing initramfs file:

~]# dracut
Will not override existing initramfs (/boot/initramfs-3.10.0-
78.el7.x86_64.img) without --force

You can create an initramfs in the current directory by calling dracut <initramfs_name> <kernel_version>:

~]# dracut "initramfs-$(uname -r).img" $(uname -r)

If you need to specify specific kernel modules to be preloaded, add the names of those modules
(minus any file name suffixes such as .ko) inside the parentheses of the
add_dracutmodules="<module> [<more_modules>]" directive of the /etc/dracut.conf
configuration file. You can list the file contents of an initramfs image file created by dracut by
using the lsinitrd <initramfs_file> command:

~]# lsinitrd /boot/initramfs-3.10.0-78.el7.x86_64.img


Image: /boot/initramfs-3.10.0-78.el7.x86_64.img: 11M
========================================================================
dracut-033-68.el7
========================================================================

drwxr-xr-x 12 root root 0 Feb 5 06:35 .


drwxr-xr-x 2 root root 0 Feb 5 06:35 proc
lrwxrwxrwx 1 root root 24 Feb 5 06:35 init ->
/usr/lib/systemd/systemd
drwxr-xr-x 10 root root 0 Feb 5 06:35 etc
drwxr-xr-x 2 root root 0 Feb 5 06:35 usr/lib/modprobe.d
[output truncated]

Refer to man dracut and man dracut.conf for more information on options and usage.
3. Examine the grub.cfg configuration file in the /boot/grub2/ directory to ensure that an
initrd line referencing initramfs-<kernel_version>.img exists for the kernel version you are booting.
Refer to Section 23.6, “Verifying the Boot Loader” for more information.

Verifying the Initial RAM Disk Image and Kernel on IBM eServer System i


On IBM eServer System i machines, the initial RAM disk and kernel files are combined into a single file,
which is created with the addRamDisk command. This step is performed automatically if the kernel and
its associated packages are installed or upgraded from the RPM packages distributed by Red Hat; thus,
it does not need to be executed manually. To verify that it was created, use the command ls -l
/boot/ to make sure the /boot/vmlinitrd-<kernel_version> file already exists (the
<kernel_version> should match the version of the kernel just installed).

23.6. Verifying the Boot Loader


When you install a kernel using rpm, the kernel package creates an entry in the boot loader
configuration file for that new kernel. However, rpm does not configure the new kernel to boot as the
default kernel. You must do this manually when installing a new kernel with rpm.

It is always recommended to double-check the boot loader configuration file after installing a new kernel
with rpm to ensure that the configuration is correct. Otherwise, the system may not be able to boot into
Red Hat Enterprise Linux properly. If this happens, boot the system with the boot media created earlier
and re-configure the boot loader.
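
For GRUB 2 based systems (the x86 and AMD64/Intel 64 rows in the table below), one common way to inspect and change the default entry is with the grub2-editenv and grub2-set-default utilities. This is shown here only as a hedged illustration; the entry title in the output is a placeholder:

~]# grub2-editenv list
saved_entry=Red Hat Enterprise Linux Server, with Linux 3.10.0-78.el7.x86_64
~]# grub2-set-default 0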

In the following table, find your system's architecture to determine the boot loader it uses.

Table 23.1. Boot loaders by architecture

Architecture                 Boot Loader
x86                          GRUB
AMD AMD64 or Intel 64        GRUB
IBM eServer System i         OS/400
IBM eServer System p         YABOOT
IBM System z                 z/IPL


Chapter 24. Working with Kernel Modules


The Linux kernel is modular, which means it can extend its capabilities through the use of dynamically-
loaded kernel modules. A kernel module can provide:

a device driver which adds support for new hardware; or,


support for a file system such as btrfs or NFS.

Like the kernel itself, modules can take parameters that customize their behavior, though the default
parameters work well in most cases. User-space tools can list the modules currently loaded into a
running kernel; query all available modules for available parameters and module-specific information;
and load or unload (remove) modules dynamically into or from a running kernel. Many of these utilities,
which are provided by the kmod package, take module dependencies into account when performing
operations so that manual dependency-tracking is rarely necessary.

On modern systems, kernel modules are automatically loaded by various mechanisms when the
conditions call for it. However, there are occasions when it is necessary to load or unload modules
manually, such as when one module is preferred over another although either could provide basic
functionality, or when a module is misbehaving.

This chapter explains how to:

use the user-space kmod utilities to display, query, load and unload kernel modules and their
dependencies;
set module parameters both dynamically on the command line and permanently so that you can
customize the behavior of your kernel modules; and,
load modules at boot time.

Installing the kmod package

In order to use the kernel module utilities described in this chapter, first ensure the kmod package
is installed on your system by running, as root:

~]# yum install kmod

For more information on installing packages with Yum, refer to Section 5.2.4, “Installing
Packages”.

24.1. Listing Currently-Loaded Modules


You can list all kernel modules that are currently loaded into the kernel by running the lsmod command:


~]$ lsmod
Module Size Used by
xfs 803635 1
exportfs 3424 1 xfs
vfat 8216 1
fat 43410 1 vfat
tun 13014 2
fuse 54749 2
ip6table_filter 2743 0
ip6_tables 16558 1 ip6table_filter
ebtable_nat 1895 0
ebtables 15186 1 ebtable_nat
ipt_MASQUERADE 2208 6
iptable_nat 5420 1
nf_nat 19059 2 ipt_MASQUERADE,iptable_nat
rfcomm 65122 4
ipv6 267017 33
sco 16204 2
bridge 45753 0
stp 1887 1 bridge
llc 4557 2 bridge,stp
bnep 15121 2
l2cap 45185 16 rfcomm,bnep
cpufreq_ondemand 8420 2
acpi_cpufreq 7493 1
freq_table 3851 2 cpufreq_ondemand,acpi_cpufreq
usb_storage 44536 1
sha256_generic 10023 2
aes_x86_64 7654 5
aes_generic 27012 1 aes_x86_64
cbc 2793 1
dm_crypt 10930 1
kvm_intel 40311 0
kvm 253162 1 kvm_intel
[output truncated]

Each row of lsmod output specifies:

the name of a kernel module currently loaded in memory;


the amount of memory it uses; and,
the sum total of processes that are using the module and other modules which depend on it, followed
by a list of the names of those modules, if there are any. Using this list, you can first unload all the
modules depending on the module you want to unload. For more information, refer to Section 24.4,
“Unloading a Module”.

Finally, note that lsmod output is less verbose and considerably easier to read than the content of the
/proc/modules pseudo-file.
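
For comparison, the first few lines of /proc/modules on the same hypothetical system might look like the following. The module names and sizes mirror the lsmod output above, while the state and load address columns are placeholders:

~]$ cat /proc/modules
xfs 803635 1 - Live 0xffffffffa0473000
exportfs 3424 1 xfs, Live 0xffffffffa046c000
vfat 8216 1 - Live 0xffffffffa0465000
[output truncated]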

24.2. Displaying Information About a Module


You can display detailed information about a kernel module by running the modinfo <module_name>
command.


Module names do not end in .ko

When entering the name of a kernel module as an argument to one of the kmod utilities, do not
append a .ko extension to the end of the name. Kernel module names do not have extensions;
their corresponding files do.

For example, to display information about the e1000e module, which is the Intel PRO/1000 network
driver, run:

Example 24.1. Listing information about a kernel module with modinfo

~]# modinfo e1000e


filename: /lib/modules/2.6.32-
71.el6.x86_64/kernel/drivers/net/e1000e/e1000e.ko
version: 1.2.7-k2
license: GPL
description: Intel(R) PRO/1000 Network Driver
author: Intel Corporation, <linux.nics@intel.com>
srcversion: 93CB73D3995B501872B2982
alias: pci:v00008086d00001503sv*sd*bc*sc*i*
alias: pci:v00008086d00001502sv*sd*bc*sc*i*
[some alias lines omitted]
alias: pci:v00008086d0000105Esv*sd*bc*sc*i*
depends:
vermagic: 2.6.32-71.el6.x86_64 SMP mod_unload modversions
parm: copybreak:Maximum size of packet that is copied to a new buffer
on receive (uint)
parm: TxIntDelay:Transmit Interrupt Delay (array of int)
parm: TxAbsIntDelay:Transmit Absolute Interrupt Delay (array of int)
parm: RxIntDelay:Receive Interrupt Delay (array of int)
parm: RxAbsIntDelay:Receive Absolute Interrupt Delay (array of int)
parm: InterruptThrottleRate:Interrupt Throttling Rate (array of int)
parm: IntMode:Interrupt Mode (array of int)
parm: SmartPowerDownEnable:Enable PHY smart power down (array of int)
parm: KumeranLockLoss:Enable Kumeran lock loss workaround (array of
int)
parm: WriteProtectNVM:Write-protect NVM [WARNING: disabling this can
lead to corrupted NVM] (array of int)
parm: CrcStripping:Enable CRC Stripping, disable if your BMC needs
the CRC (array of int)
parm: EEE:Enable/disable on parts that support the feature (array of
int)

Here are descriptions of a few of the fields in modinfo output:

filename
The absolute path to the .ko kernel object file. You can use modinfo -n as a shortcut
command for printing only the filename field.

description
A short description of the module. You can use modinfo -d as a shortcut command for
printing only the description field.


alias
The alias field appears as many times as there are aliases for a module, or is omitted entirely
if there are none.

depends
This field contains a comma-separated list of all the modules this module depends on.

Omitting the depends field

If a module has no dependencies, the depends field may be omitted from the output.

parm
Each parm field presents one module parameter in the form parameter_name:description,
where:

parameter_name is the exact syntax you should use when using it as a module parameter
on the command line, or in an option line in a .conf file in the /etc/modprobe.d/
directory; and,
description is a brief explanation of what the parameter does, along with an expectation
for the type of value the parameter accepts (such as int, uint or array of int) in parentheses.

You can list all parameters that the module supports by using the -p option. However, because
useful value type information is omitted from modinfo -p output, it is more useful to run:

Example 24.2. Listing module parameters

~]# modinfo e1000e | grep "^parm" | sort


parm: copybreak:Maximum size of packet that is copied to a new
buffer on receive (uint)
parm: CrcStripping:Enable CRC Stripping, disable if your BMC
needs the CRC (array of int)
parm: EEE:Enable/disable on parts that support the feature
(array of int)
parm: InterruptThrottleRate:Interrupt Throttling Rate (array of
int)
parm: IntMode:Interrupt Mode (array of int)
parm: KumeranLockLoss:Enable Kumeran lock loss workaround
(array of int)
parm: RxAbsIntDelay:Receive Absolute Interrupt Delay (array of
int)
parm: RxIntDelay:Receive Interrupt Delay (array of int)
parm: SmartPowerDownEnable:Enable PHY smart power down (array
of int)
parm: TxAbsIntDelay:Transmit Absolute Interrupt Delay (array of
int)
parm: TxIntDelay:Transmit Interrupt Delay (array of int)
parm: WriteProtectNVM:Write-protect NVM [WARNING: disabling
this can lead to corrupted NVM] (array of int)


24.3. Loading a Module


To load a kernel module, run modprobe <module_name> as root. For example, to load the wacom
module, run:

~]# modprobe wacom

By default, modprobe attempts to load the module from
/lib/modules/<kernel_version>/kernel/drivers/. In this directory, each type of module has
its own subdirectory, such as net/ and scsi/, for network and SCSI interface drivers respectively.

Some modules have dependencies, which are other kernel modules that must be loaded before the
module in question can be loaded. The modprobe command always takes dependencies into account
when performing operations. When you ask modprobe to load a specific kernel module, it first examines
the dependencies of that module, if there are any, and loads them if they are not already loaded into the
kernel. modprobe resolves dependencies recursively: it will load all dependencies of dependencies, and
so on, if necessary, thus ensuring that all dependencies are always met.

You can use the -v (or --verbose) option to cause modprobe to display detailed information about
what it is doing, which may include loading module dependencies. The following is an example of loading
the Fibre Channel over Ethernet module verbosely:

Example 24.3. modprobe -v shows module dependencies as they are loaded

~]# modprobe -v fcoe


insmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/scsi/scsi_tgt.ko
insmod /lib/modules/2.6.32-
71.el6.x86_64/kernel/drivers/scsi/scsi_transport_fc.ko
insmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/scsi/libfc/libfc.ko
insmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/scsi/fcoe/libfcoe.ko
insmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/scsi/fcoe/fcoe.ko

Example 24.3, “modprobe -v shows module dependencies as they are loaded” shows that modprobe
loaded the scsi_tgt, scsi_transport_fc, libfc and libfcoe modules as dependencies before
finally loading fcoe. Also note that modprobe used the more “primitive” insmod command to insert
the modules into the running kernel.

Always use modprobe instead of insmod!

Although the insmod command can also be used to load kernel modules, it does not resolve
dependencies. Because of this, you should always load modules using modprobe instead.

24.4. Unloading a Module


You can unload a kernel module by running modprobe -r <module_name> as root. For example,
assuming that the wacom module is already loaded into the kernel, you can unload it by running:

~]# modprobe -r wacom


However, this command will fail if a process is using:

the wacom module,


a module that wacom directly depends on, or,
any module that wacom —through the dependency tree—depends on indirectly.

Refer to Section 24.1, “Listing Currently-Loaded Modules” for more information about using lsmod to
obtain the names of the modules which are preventing you from unloading a certain module.

For example, if you want to unload the firewire_ohci module (because you believe there is a bug in
it that is affecting system stability, for example), your terminal session might look similar to this:

~]# modinfo -F depends firewire_ohci


depends: firewire-core
~]# modinfo -F depends firewire_core
depends: crc-itu-t
~]# modinfo -F depends crc-itu-t
depends:

You have figured out the dependency tree (which does not branch in this example) for the loaded
Firewire modules: firewire_ohci depends on firewire_core, which itself depends on crc-itu-
t.

You can unload firewire_ohci using the modprobe -v -r <module_name> command, where -r
is short for --remove and -v for --verbose:

~]# modprobe -r -v firewire_ohci


rmmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/firewire/firewire-ohci.ko
rmmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/firewire/firewire-core.ko
rmmod /lib/modules/2.6.32-71.el6.x86_64/kernel/lib/crc-itu-t.ko

The output shows that modules are unloaded in the reverse order that they are loaded, given that no
processes depend on any of the modules being unloaded.

Do not use rmmod directly!

Although the rmmod command can be used to unload kernel modules, it is recommended to use
modprobe -r instead.

24.5. Setting Module Parameters


Like the kernel itself, modules can also take parameters that change their behavior. Most of the time, the
default ones work well, but occasionally it is necessary or desirable to set custom parameters for a
module. Because parameters cannot be dynamically set for a module that is already loaded into a
running kernel, there are two different methods for setting them.

1. You can unload all dependencies of the module you want to set parameters for, unload the
module using modprobe -r, and then load it with modprobe along with a list of customized
parameters. This method is often used when the module does not have many dependencies, or to
test different combinations of parameters without making them persistent, and is the method
covered in this section.
2. Alternatively, you can list the new parameters in an existing or newly-created file in the


/etc/modprobe.d/ directory. This method makes the module parameters persistent by
ensuring that they are set each time the module is loaded, such as after every reboot or
modprobe command. This method is covered in Section 24.6, “Persistent Module Loading”,
though the following information is a prerequisite.

You can use modprobe to load a kernel module with custom parameters using the following command
line format:

Example 24.4. Supplying optional parameters when loading a kernel module

~]# modprobe <module_name> [parameter=value]

When loading a module with custom parameters on the command line, be aware of the following:

You can enter multiple parameters and values by separating them with spaces.
Some module parameters expect a list of comma-separated values as their argument. When entering
the list of values, do not insert a space after each comma, or modprobe will incorrectly interpret the
values following spaces as additional parameters.
The modprobe command silently succeeds with an exit status of 0 if:
it successfully loads the module, or
the module is already loaded into the kernel.
Thus, you must ensure that the module is not already loaded before attempting to load it with custom
parameters. The modprobe command does not automatically reload the module, or alert you that it
is already loaded.

Here are the recommended steps for setting custom parameters and then loading a kernel module. This
procedure illustrates the steps using the e1000e module, which is the network driver for Intel PRO/1000
network adapters, as an example:

Procedure 24.1. Loading a Kernel Module with Custom Parameters

1. First, ensure the module is not already loaded into the kernel:

~]# lsmod |grep e1000e


~]#

Any output would indicate that the module is already loaded into the kernel, in which case you must first
unload it before proceeding. Refer to Section 24.4, “Unloading a Module” for instructions on safely
unloading it.
2. Load the module and list all custom parameters after the module name. For example, if you
wanted to load the Intel PRO/1000 network driver with the interrupt throttle rate set to 3000
interrupts per second for the first, second and third instances of the driver, and Energy Efficient
Ethernet (EEE) turned on [5] , you would run, as root:

~]# modprobe e1000e InterruptThrottleRate=3000,3000,3000 EEE=1

This example illustrates passing multiple values to a single parameter by separating them with
commas and omitting any spaces between them.

24.6. Persistent Module Loading


As shown in Section 24.1, “Listing Currently-Loaded Modules”, many kernel modules
are loaded automatically at boot time. You can specify additional modules to be loaded by creating a new
<file_name>.modules file in the /etc/sysconfig/modules/ directory, where <file_name> is any
descriptive name of your choice. Your <file_name>.modules files are treated by the system startup
scripts as shell scripts, and as such should begin with an interpreter directive (also called a “bang line”)
as their first line:

Example 24.5. First line of a file_name.modules file

#!/bin/sh

Additionally, the <file_name>.modules file should be executable. You can make it executable by
running:

modules]# chmod +x <file_name>.modules

For example, the following bluez-uinput.modules script loads the uinput module:

Example 24.6. /etc/sysconfig/modules/bluez-uinput.modules

#!/bin/sh

if [ ! -c /dev/input/uinput ] ; then
exec /sbin/modprobe uinput >/dev/null 2>&1
fi

The if-conditional statement on the third line ensures that the /dev/input/uinput file does not
already exist (the ! symbol negates the condition), and, if that is the case, loads the uinput module
by calling exec /sbin/modprobe uinput. Note that the uinput module creates the
/dev/input/uinput file, so testing to see if that file exists serves as verification of whether the
uinput module is loaded into the kernel.

The >/dev/null 2>&1 clause at the end of that line redirects any output to /dev/null
so that the modprobe command remains quiet.

24.7. Specific Kernel Module Capabilities


This section explains how to enable specific kernel capabilities using various kernel modules.

24.7.1. Using Multiple Ethernet Cards


It is possible to use multiple Ethernet cards on a single machine. For each card, there must be an alias
line and, possibly, an options line in a user-created <module_name>.conf file in the
/etc/modprobe.d/ directory, as illustrated below.
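
For example, a hypothetical /etc/modprobe.d/ethernet-cards.conf for two cards driven by the e1000e module might contain the following; the interface names, driver, and option values are illustrative only:

alias eth0 e1000e
alias eth1 e1000e
options e1000e InterruptThrottleRate=3000,3000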

For additional information about using multiple Ethernet cards, refer to the Linux Ethernet-HOWTO
online at http://www.redhat.com/mirrors/LDP/HOWTO/Ethernet-HOWTO.html.

24.7.2. Using Channel Bonding


Red Hat Enterprise Linux allows administrators to bind NICs together into a single channel using the


bonding kernel module and a special network interface, called a channel bonding interface. Channel
bonding enables two or more network interfaces to act as one, simultaneously increasing the bandwidth
and providing redundancy.

To channel bond multiple network interfaces, the administrator must perform the following steps:

1. As root, create a new file named <bonding>.conf in the /etc/modprobe.d/ directory. Note
that you can name this file anything you like as long as it ends with a .conf extension. Insert the
following line in this new file:

alias bond<N> bonding

Replace <N> with the interface number, such as 0. For each configured channel bonding interface,
there must be a corresponding entry in your new /etc/modprobe.d/<bonding>.conf file.
2. Configure a channel bonding interface as outlined in the Red Hat Enterprise Linux 7 Networking
Guide.
3. To enhance performance, adjust available module options to ascertain what combination works
best. Pay particular attention to the miimon or arp_interval and the arp_ip_target
parameters. Refer to Section 24.7.2.1, “Bonding Module Directives” for a list of available options
and how to quickly determine the best ones for your bonded interface.

24.7.2.1. Bonding Module Directives


It is a good idea to test which channel bonding module parameters work best for your bonded interfaces
before adding them to the BONDING_OPTS="<bonding parameters>" directive in your bonding
interface configuration file (ifcfg-bond0 for example). Parameters to bonded interfaces can be
configured without unloading (and reloading) the bonding module by manipulating files in the sysfs file
system.

sysfs is a virtual file system that represents kernel objects as directories, files and symbolic links. sysfs
can be used to query for information about kernel objects, and can also manipulate those objects
through the use of normal file system commands. The sysfs virtual file system has a line in
/etc/fstab, and is mounted under the /sys/ directory. All bonding interfaces can be configured
dynamically by interacting with and manipulating files under the /sys/class/net/ directory.

In order to determine the best parameters for your bonding interface, create a channel bonding interface
file such as ifcfg-bond0 by following the instructions in the Red Hat Enterprise Linux 7 Networking
Guide. Insert the SLAVE=yes and MASTER=bond0 directives in the configuration files for each interface
bonded to bond0. Once this is completed, you can proceed to testing the parameters.

First, bring up the bond you created by running ifconfig bond<N> up as root:

~]# ifconfig bond0 up

If you have correctly created the ifcfg-bond0 bonding interface file, you will be able to see bond0
listed in the output of running ifconfig (without any options):


~]# ifconfig
bond0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
eth0 Link encap:Ethernet HWaddr 52:54:00:26:9E:F1
inet addr:192.168.122.251 Bcast:192.168.122.255 Mask:255.255.255.0
inet6 addr: fe80::5054:ff:fe26:9ef1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:207 errors:0 dropped:0 overruns:0 frame:0
TX packets:205 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:70374 (68.7 KiB) TX bytes:25298 (24.7 KiB)
[output truncated]

To view all existing bonds, even if they are not up, run:

~]# cat /sys/class/net/bonding_masters


bond0

You can configure each bond individually by manipulating the files located in the
/sys/class/net/bond<N>/bonding/ directory. First, the bond you are configuring must be taken
down:

~]# ifconfig bond0 down

As an example, to enable MII monitoring on bond0 with a 1 second interval, you could run (as root):

~]# echo 1000 > /sys/class/net/bond0/bonding/miimon

To configure bond0 for balance-alb mode, you could run either:

~]# echo 6 > /sys/class/net/bond0/bonding/mode

...or, using the name of the mode:

~]# echo balance-alb > /sys/class/net/bond0/bonding/mode

After configuring options for the bond in question, you can bring it up and test it by running ifconfig
bond<N> up. If you decide to change the options, take the interface down, modify its parameters using
sysfs, bring it back up, and re-test.

Once you have determined the best set of parameters for your bond, add those parameters as a space-
separated list to the BONDING_OPTS= directive of the /etc/sysconfig/network-scripts/ifcfg-
bond<N> file for the bonding interface you are configuring. Whenever that bond is brought up (for
example, by the system during the boot sequence if the ONBOOT=yes directive is set), the bonding
options specified in the BONDING_OPTS will take effect for that bond. For more information on configuring
bonding interfaces (and BONDING_OPTS), refer to the Red Hat Enterprise Linux 7 Networking Guide.
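
As an illustration only, an /etc/sysconfig/network-scripts/ifcfg-bond0 file recording a chosen set of parameters in BONDING_OPTS might look like the following; the address and option values are placeholders and should be replaced with values appropriate for your network:

DEVICE=bond0
IPADDR=192.168.1.100
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=balance-alb miimon=100"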

The following list provides the names of many of the more common channel bonding parameters, along
with descriptions of what they do. For more information, refer to the brief descriptions for each parm in
modinfo bonding output, or the exhaustive descriptions in the bonding.txt file in the kernel-doc


package (see Section 24.8, “Additional Resources”).

Bonding Interface Parameters

arp_interval=<time_in_milliseconds>
Specifies (in milliseconds) how often ARP monitoring occurs.

Make sure you specify all required parameters

It is essential that both arp_interval and arp_ip_target parameters are specified,


or, alternatively, the miimon parameter is specified. Failure to do so can cause
degradation of network performance in the event that a link fails.

If using this setting while in mode=0 or mode=1 (the two load-balancing modes), the network
switch must be configured to distribute packets evenly across the NICs. For more information on
how to accomplish this, refer to
/usr/share/doc/kernel-doc-<kernel_version>/Documentation/networking/bonding.txt.

The value is set to 0 by default, which disables it.

arp_ip_target=<ip_address>[,<ip_address_2>,… <ip_address_16>]
Specifies the target IP address of ARP requests when the arp_interval parameter is
enabled. Up to 16 IP addresses can be specified in a comma separated list.

arp_validate=<value>
Validate source/distribution of ARP probes; default is none. Other valid values are active,
backup, and all.

debug=<number>
Enables debug messages. Possible values are:

0 — Debug messages are disabled. This is the default.


1 — Debug messages are enabled.

downdelay=<time_in_milliseconds>
Specifies (in milliseconds) how long to wait after link failure before disabling the link. The value
must be a multiple of the value specified in the m iim on parameter. The value is set to 0 by
default, which disables it.

lacp_rate=<value>
Specifies the rate at which link partners should transmit LACPDU packets in 802.3ad mode.
Possible values are:

slow or 0 — Default setting. This specifies that partners should transmit LACPDUs every 30
seconds.
fast or 1 — Specifies that partners should transmit LACPDUs every 1 second.


miimon=<time_in_milliseconds>
Specifies (in milliseconds) how often MII link monitoring occurs. This is useful if high availability
is required because MII is used to verify that the NIC is active. To verify that the driver for a
particular NIC supports the MII tool, type the following command as root:

~]# ethtool <interface_name> | grep "Link detected:"

In this command, replace <interface_name> with the name of the device interface, such as
eth0, not the bond interface. If MII is supported, the command returns:

Link detected: yes

If using a bonded interface for high availability, the module for each NIC must support MII.
Setting the value to 0 (the default), turns this feature off. When configuring this setting, a good
starting point for this parameter is 100.

Make sure you specify all required parameters

It is essential that both arp_interval and arp_ip_target parameters are specified,


or, alternatively, the miimon parameter is specified. Failure to do so can cause
degradation of network performance in the event that a link fails.

mode=<value>
Allows you to specify the bonding policy. The <value> can be one of:

balance-rr or 0 — Sets a round-robin policy for fault tolerance and load balancing.
Transmissions are received and sent out sequentially on each bonded slave interface
beginning with the first one available.
active-backup or 1 — Sets an active-backup policy for fault tolerance. Transmissions are
received and sent out via the first available bonded slave interface. Another bonded slave
interface is only used if the active bonded slave interface fails.
balance-xor or 2 — Sets an XOR (exclusive-or) policy for fault tolerance and load
balancing. Using this method, the interface matches up the incoming request's MAC address
with the MAC address for one of the slave NICs. Once this link is established, transmissions
are sent out sequentially beginning with the first available interface.
broadcast or 3 — Sets a broadcast policy for fault tolerance. All transmissions are sent on
all slave interfaces.
802.3ad or 4 — Sets an IEEE 802.3ad dynamic link aggregation policy. Creates
aggregation groups that share the same speed and duplex settings. Transmits and receives
on all slaves in the active aggregator. Requires a switch that is 802.3ad compliant.
balance-tlb or 5 — Sets a Transmit Load Balancing (TLB) policy for fault tolerance and
load balancing. The outgoing traffic is distributed according to the current load on each slave
interface. Incoming traffic is received by the current slave. If the receiving slave fails, another
slave takes over the MAC address of the failed slave.
balance-alb or 6 — Sets an Active Load Balancing (ALB) policy for fault tolerance and
load balancing. Includes transmit and receive load balancing for IPV4 traffic. Receive load
balancing is achieved through ARP negotiation.


num_unsol_na=<number>
Specifies the number of unsolicited IPv6 Neighbor Advertisements to be issued after a failover
event. One unsolicited NA is issued immediately after the failover.

The valid range is 0 - 255; the default value is 1. This parameter affects only the active-
backup mode.

primary=<interface_name>
Specifies the interface name, such as eth0, of the primary device. The primary device is the
first of the bonding interfaces to be used and is not abandoned unless it fails. This setting is
particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle
a bigger load.

This setting is only valid when the bonding interface is in active-backup mode. Refer to
/usr/share/doc/kernel-doc-<kernel-
version>/Docum entation/networking/bonding.txt for more information.

primary_reselect=<value>
Specifies the reselection policy for the primary slave. This affects how the primary slave is
chosen to become the active slave when failure of the active slave or recovery of the primary
slave occurs. This parameter is designed to prevent flip-flopping between the primary slave and
other slaves. Possible values are:

always or 0 (default) — The primary slave becomes the active slave whenever it comes
back up.
better or 1 — The primary slave becomes the active slave when it comes back up, if the
speed and duplex of the primary slave is better than the speed and duplex of the current
active slave.
failure or 2 — The primary slave becomes the active slave only if the current active slave
fails and the primary slave is up.

The primary_reselect setting is ignored in two cases:

If no slaves are active, the first slave to recover is made the active slave.
When initially enslaved, the primary slave is always made the active slave.

Changing the primary_reselect policy via sysfs will cause an immediate selection of the best active slave according to the new policy. This may or may not result in a change of the active slave, depending upon the circumstances.

updelay=<time_in_milliseconds>
Specifies (in milliseconds) how long to wait before enabling a link. The value must be a multiple of the value specified in the miimon parameter (for example, with miimon set to 100, valid updelay values are 100, 200, 300, and so on). The value is set to 0 by default, which disables it.

use_carrier=<number>
Specifies whether or not miimon should use MII/ETHTOOL ioctls or netif_carrier_ok() to determine the link state. The netif_carrier_ok() function relies on the device driver to maintain its state with netif_carrier_on/off; most device drivers support this function.


The MII/ETHTOOL ioctls utilize a deprecated calling sequence within the kernel. However, this option is still configurable in case your device driver does not support netif_carrier_on/off.

Valid values are:

1 — Default setting. Enables the use of netif_carrier_ok().
0 — Enables the use of MII/ETHTOOL ioctls.

Note

If the bonding interface insists that the link is up when it should not be, it is possible that
your network device driver does not support netif_carrier_on/off.

xmit_hash_policy=<value>
Selects the transmit hash policy used for slave selection in balance-xor and 802.3ad
modes. Possible values are:

0 or layer2 — Default setting. This parameter uses the XOR of hardware MAC addresses
to generate the hash. The formula used is:

(<source_MAC_address> XOR <destination_MAC>) MODULO <slave_count>

This algorithm will place all traffic to a particular network peer on the same slave, and is
802.3ad compliant.
1 or layer3+4 — Uses upper layer protocol information (when available) to generate the
hash. This allows for traffic to a particular network peer to span multiple slaves, although a
single connection will not span multiple slaves.
The formula used for unfragmented TCP and UDP packets is:

((<source_port> XOR <dest_port>) XOR ((<source_IP> XOR <dest_IP>) AND 0xffff)) MODULO <slave_count>

For fragmented TCP or UDP packets and all other IP protocol traffic, the source and
destination port information is omitted. For non-IP traffic, the formula is the same as the
layer2 transmit hash policy.
This policy is intended to mimic the behavior of certain switches, particularly Cisco switches with
PFC2, as well as some Foundry and IBM products.
The algorithm used by this policy is not 802.3ad compliant.
2 or layer2+3 — Uses a combination of layer2 and layer3 protocol information to generate
the hash.
Uses XOR of hardware MAC addresses and IP addresses to generate the hash. The formula is:

(((<source_IP> XOR <dest_IP>) AND 0xffff) XOR (<source_MAC> XOR <destination_MAC>)) MODULO <slave_count>

This algorithm will place all traffic to a particular network peer on the same slave. For non-IP
traffic, the formula is the same as for the layer2 transmit hash policy.


This policy is intended to provide a more balanced distribution of traffic than layer2 alone,
especially in environments where a layer3 gateway device is required to reach most
destinations.
This algorithm is 802.3ad compliant.
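
As an illustrative sketch only (the interface name bond0 and the chosen values are assumptions, not recommendations for your hardware), the parameters described above are normally combined on a single line. For example, an active-backup bond monitored with MII might carry a BONDING_OPTS directive such as the following in its interface configuration file (for example, /etc/sysconfig/network-scripts/ifcfg-bond0):

BONDING_OPTS="mode=active-backup miimon=100 primary=eth0 updelay=200"

Adjust the parameters to suit your environment, and refer to the bonding.txt documentation mentioned above for authoritative guidance.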

24.8. Additional Resources


For more information on kernel modules and their utilities, refer to the following resources.

Manual Page Documentation

man lsmod — The manual page for the lsmod command.
man modinfo — The manual page for the modinfo command.
man modprobe — The manual page for the modprobe command.
man rmmod — The manual page for the rmmod command.
man ethtool — The manual page for the ethtool command.
man mii-tool — The manual page for the mii-tool command.

Installable and External Documentation

/usr/share/doc/kernel-doc-<kernel_version>/Documentation/ — This directory, which is provided by the kernel-doc package, contains information on the kernel, kernel modules, and their respective parameters. Before accessing the kernel documentation, you must run the following command as root:

~]# yum install kernel-doc

Linux Loadable Kernel Module HOWTO — The Linux Loadable Kernel Module HOWTO from the
Linux Documentation Project contains further information on working with kernel modules.

[5] Despite what the example might imply, Energy Efficient Ethernet is turned on by default in the e1000e driver.


RPM
The RPM Package Manager (RPM) is an open packaging system, which runs on Red Hat
Enterprise Linux as well as other Linux and UNIX systems. Red Hat, Inc. and the Fedora Project
encourage other vendors to use RPM for their own products. RPM is distributed under the terms of the
GPL (GNU General Public License).

The RPM Package Manager only works with packages built to work with the RPM format. RPM is itself
provided as a pre-installed rpm package. For the end user, RPM makes system updates easy. Installing,
uninstalling and upgrading RPM packages can be accomplished with short commands. RPM maintains a
database of installed packages and their files, so you can invoke powerful queries and verifications on
your system.

The RPM package format has been improved for Red Hat Enterprise Linux 7. RPM packages are now compressed using the XZ lossless data compression format, which offers greater compression and lower CPU usage during decompression. RPM packages now also support multiple strong hash algorithms, such as SHA-256, for package signing and verification.

Use Yum Instead of RPM Whenever Possible

For most package management tasks, the Yum package manager offers equal and often greater
capabilities and utility than RPM. Yum also performs and tracks complicated system dependency
resolution, and will complain and force system integrity checks if you use RPM as well to install
and remove packages. For these reasons, it is highly recommended that you use Yum instead of
RPM whenever possible to perform package management tasks. Refer to Chapter 5, Yum.
If you prefer a graphical interface, you can use the PackageKit GUI application, which uses Yum
as its back end, to manage your system's packages. Refer to Chapter 6, PackageKit for details.

Install RPM packages with the correct architecture!

When installing a package, ensure it is compatible with your operating system and processor
architecture. This can usually be determined by checking the package name. For example, the file
name of an RPM package compiled for the AMD64/Intel 64 computer architectures ends with x86_64.rpm.

During upgrades, RPM handles configuration files carefully, so that you never lose your customizations—
something that you cannot accomplish with regular .tar.gz files.

For the developer, RPM allows you to take software source code and package it into source and binary
packages for end users. This process is quite simple and is driven from a single file and optional patches
that you create. This clear delineation between pristine sources and your patches along with build
instructions eases the maintenance of the package as new versions of the software are released.


Note

Because RPM can make changes to the system itself, performing operations like installing,
upgrading, downgrading, and uninstalling binary packages system wide will require root
privileges in most cases. In a default configuration, exceptions include the user's home directory,
temporary directories, and directories created by the administrator (with the appropriate
permissions).

A.1. RPM Design Goals


To understand how to use RPM, it can be helpful to understand the design goals of RPM:

Upgradability
With RPM, you can upgrade individual components of your system without completely reinstalling. When you get a new release of an operating system based on RPM, such as Red Hat Enterprise Linux, you do not need to reinstall a fresh copy of the operating system on your machine (as you might need to with operating systems based on other packaging systems).
RPM allows intelligent, fully-automated, in-place upgrades of your system. In addition,
configuration files in packages are preserved across upgrades, so you do not lose your
customizations. There are no special upgrade files needed to upgrade a package because the
same RPM file is used to both install and upgrade the package on your system.

Powerful Querying
RPM is designed to provide powerful querying options. You can perform searches on your
entire database for packages or even just certain files. You can also easily find out what
package a file belongs to and from where the package came. The files an RPM package
contains are in a compressed archive, with a custom binary header containing useful
information about the package and its contents, allowing you to query individual packages
quickly and easily.

System Verification
Another powerful RPM feature is the ability to verify packages. If you are worried that you
deleted an important file for some package, you can verify the package. You are then notified of
anomalies, if any—at which point you can reinstall the package, if necessary. Any configuration
files that you modified are preserved during reinstallation.

Pristine Sources
A crucial design goal was to allow the use of pristine software sources, as distributed by the
original authors of the software. With RPM, you have the pristine sources along with any
patches that were used, plus complete build instructions. This is an important advantage for
several reasons. For instance, if a new version of a program is released, you do not necessarily
have to start from scratch to get it to compile. You can look at the patch to see what you might
need to do. All the compiled-in defaults, and all of the changes that were made to get the
software to build properly, are easily visible using this technique.

The goal of keeping sources pristine may seem important only for developers, but it results in
higher quality software for end users, too.


A.2. Using RPM


RPM has five basic modes of operation (not counting package building): installing, uninstalling,
upgrading, querying, and verifying. This section contains an overview of each mode. For complete
details and options, try rpm --help or man rpm. You can also refer to Section A.5, “Additional
Resources” for more information on RPM.

A.2.1. Finding RPM Packages


Before using any RPM packages, you must know where to find them. An Internet search returns many
RPM repositories, but if you are looking for Red Hat RPM packages, they can be found at the following
locations:

The Red Hat Enterprise Linux installation media contain many installable RPMs.
The initial RPM repositories provided with the YUM package manager. Refer to Chapter 5, Yum for
details on how to use the official Red Hat Enterprise Linux package repositories.
The Extra Packages for Enterprise Linux (EPEL) is a community effort to provide high-quality add-on
packages for Red Hat Enterprise Linux. Refer to http://fedoraproject.org/wiki/EPEL for details on
EPEL RPM packages.
Unofficial, third-party repositories not affiliated with Red Hat also provide RPM packages.

Third-party repositories and package compatibility

When considering third-party repositories for use with your Red Hat Enterprise Linux system,
pay close attention to the repository's web site with regard to package compatibility before
adding the repository as a package source. Alternate package repositories may offer different,
incompatible versions of the same software, including packages already included in the
Red Hat Enterprise Linux repositories.

The Red Hat Errata Page, available at http://www.redhat.com/apps/support/errata/.

A.2.2. Installing and Upgrading


RPM packages typically have file names like tree-1.5.3-2.el7.x86_64.rpm. The file name includes the package name (tree), version (1.5.3), release (2), operating system major version (el7) and CPU architecture (x86_64).

You can use rpm's -U option to:

upgrade an existing but older package on the system to a newer version, or
install the package even if an older version is not already installed.

That is, rpm -U <rpm_file> is able to perform the function of either upgrading or installing as is
appropriate for the package.

Assuming the tree-1.5.3-2.el7.x86_64.rpm package is in the current directory, log in as root and
type the following command at a shell prompt to either upgrade or install the tree package as determined
by rpm :

rpm -Uvh tree-1.5.3-2.el7.x86_64.rpm


Use -Uvh for nicely-formatted RPM installs

The -v and -h options (which are combined with -U) cause rpm to print more verbose output and
display a progress meter using hash signs.

If the upgrade/installation is successful, the following output is displayed:

Preparing... ########################################### [100%]
   1:tree    ########################################### [100%]

Always use the -i (install) option to install new kernel packages!

rpm provides two different options for installing packages: the aforementioned -U option (which
historically stands for upgrade), and the -i option, historically standing for install. Because the -U
option subsumes both install and upgrade functions, we recommend using rpm -Uvh with all
packages except kernel packages.
You should always use the -i option to simply install a new kernel package instead of upgrading
it. This is because using the -U option to upgrade a kernel package removes the previous (older)
kernel package, which could render the system unable to boot if there is a problem with the new
kernel. Therefore, use the rpm -i <kernel_package> command to install a new kernel without
replacing any older kernel packages. For more information on installing kernel packages, refer to
Chapter 23, Manually Upgrading the Kernel.

The signature of a package is checked automatically when installing or upgrading a package. The
signature confirms that the package was signed by an authorized party. For example, if the verification of
the signature fails, an error message such as the following is displayed:

error: tree-1.5.3-2.el7.x86_64.rpm: Header V3 RSA/SHA256 signature: BAD, key ID d22e77f2

If it is a new, header-only, signature, an error message such as the following is displayed:

error: tree-1.5.3-2.el7.x86_64.rpm: Header V3 RSA/SHA256 signature: BAD, key ID d22e77f2

If you do not have the appropriate key installed to verify the signature, the message contains the word
NOKEY:

warning: tree-1.5.3-2.el7.x86_64.rpm: Header V3 RSA/SHA1 signature: NOKEY, key ID 57bbccba

Refer to Section A.3, “Checking a Package's Signature” for more information on checking a package's
signature.

A.2.2.1. Package Already Installed


If a package of the same name and version is already installed, the following output is displayed:

Preparing... ########################################### [100%]
package tree-1.5.3-2.el7.x86_64 is already installed


However, if you want to install the package anyway, you can use the --replacepkgs option, which tells
RPM to ignore the error:

rpm -Uvh --replacepkgs tree-1.5.3-2.el7.x86_64.rpm

This option is helpful if files installed from the RPM were deleted or if you want the original configuration
files from the RPM to be installed.

A.2.2.2. Conflicting Files


If you attempt to install a package that contains a file which has already been installed by another
package, the following is displayed:

Preparing... ##################################################
file /usr/bin/foobar from install of foo-1.0-1.el7.x86_64 conflicts
with file from package bar-3.1.1.el7.x86_64

To make RPM ignore this error, use the --replacefiles option:

rpm -Uvh --replacefiles foo-1.0-1.el7.x86_64.rpm

A.2.2.3. Unresolved Dependency


RPM packages may sometimes depend on other packages, which means that they require other
packages to be installed to run properly. If you try to install a package which has an unresolved
dependency, output similar to the following is displayed:

error: Failed dependencies:
        bar.so.3()(64bit) is needed by foo-1.0-1.el7.x86_64

If you are installing a package from the Red Hat Enterprise Linux installation media, such as from a CD-
ROM or DVD, the dependencies may be available. Find the suggested package(s) on the Red Hat
Enterprise Linux installation media or on one of the active Red Hat Enterprise Linux mirrors and add it to
the command:

rpm -Uvh foo-1.0-1.el7.x86_64.rpm bar-3.1.1.el7.x86_64.rpm

If installation of both packages is successful, output similar to the following is displayed:

Preparing... ########################################### [100%]
   1:foo     ########################################### [ 50%]
   2:bar     ########################################### [100%]

You can try the --whatprovides option to determine which package contains the required file.

rpm -q --whatprovides "bar.so.3"

If the package that contains bar.so.3 is in the RPM database, the name of the package is displayed:

bar-3.1.1.el7.i586.rpm


Warning: Forcing Package Installation

Although we can force rpm to install a package that gives us a Failed dependencies error
(using the --nodeps option), this is not recommended, and will usually result in the installed
package failing to run. Installing or removing packages with rpm --nodeps can cause
applications to misbehave and/or crash, and can cause serious package management problems
or, possibly, system failure. For these reasons, it is best to heed such warnings; the package
manager—whether RPM, Yum or PackageKit—shows us these warnings and suggests possible
fixes because accounting for dependencies is critical. The Yum package manager can perform
dependency resolution and fetch dependencies from online repositories, making it safer, easier
and smarter than forcing rpm to carry out actions without regard to resolving dependencies.

A.2.3. Configuration File Changes


Because RPM performs intelligent upgrading of packages with configuration files, you may see one or
the other of the following messages:

saving /etc/foo.conf as /etc/foo.conf.rpmsave

This message means that changes you made to the configuration file may not be forward-compatible
with the new configuration file in the package, so RPM saved your original file and installed a new one.
You should investigate the differences between the two configuration files and resolve them as soon as
possible, to ensure that your system continues to function properly.

Alternatively, RPM may save the package's new configuration file as, for example, foo.conf.rpmnew,
and leave the configuration file you modified untouched. You should still resolve any conflicts between
your modified configuration file and the new one, usually by merging changes from the old one to the
new one with a diff program.
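
For example, assuming the hypothetical configuration file /etc/foo.conf used above, you could review the differences with a command such as:

diff -u /etc/foo.conf /etc/foo.conf.rpmnew

and then merge any changes you want to keep into the configuration file that remains in use.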

If you attempt to upgrade to a package with an older version number (that is, if a higher version of the
package is already installed), the output is similar to the following:

package foo-2.0-1.el7.x86_64.rpm (which is newer than foo-1.0-1) is already installed

To force RPM to upgrade anyway, use the --oldpackage option:

rpm -Uvh --oldpackage foo-1.0-1.el7.x86_64.rpm

A.2.4. Uninstalling
Uninstalling a package is just as simple as installing one. Type the following command at a shell prompt:

rpm -e foo

rpm -e and package name errors

Notice that we used the package name foo, not the name of the original package file, foo-1.0-1.el7.x86_64. If you attempt to uninstall a package using the rpm -e command and the original full file name, you will receive a package name error.


You can encounter dependency errors when uninstalling a package if another installed package depends
on the one you are trying to remove. For example:

rpm -e ghostscript
error: Failed dependencies:
libgs.so.8()(64bit) is needed by (installed) libspectre-0.2.2-3.el7.x86_64
libgs.so.8()(64bit) is needed by (installed) foomatic-4.0.3-1.el7.x86_64
libijs-0.35.so()(64bit) is needed by (installed) gutenprint-5.2.4-
5.el7.x86_64
ghostscript is needed by (installed) printer-filters-1.1-4.el7.noarch

Similar to how we searched for a shared object library (i.e. a <library_name>.so.<number> file) in
Section A.2.2.3, “Unresolved Dependency”, we can search for a 64-bit shared object library using this
exact syntax (and making sure to quote the file name):

~]# rpm -q --whatprovides "libgs.so.8()(64bit)"
ghostscript-8.70-1.el7.x86_64

Warning: Forcing Package Removal

Although we can force rpm to remove a package that gives us a Failed dependencies error
(using the --nodeps option), this is not recommended, and may cause harm to other installed
applications. Installing or removing packages with rpm --nodeps can cause applications to
misbehave and/or crash, and can cause serious package management problems or, possibly,
system failure. For these reasons, it is best to heed such warnings; the package manager—
whether RPM, Yum or PackageKit—shows us these warnings and suggests possible fixes
because accounting for dependencies is critical. The Yum package manager can perform
dependency resolution and fetch dependencies from online repositories, making it safer, easier
and smarter than forcing rpm to carry out actions without regard to resolving dependencies.

A.2.5. Freshening
Freshening is similar to upgrading, except that only packages that are already installed are upgraded. Type the following
command at a shell prompt:

rpm -Fvh foo-2.0-1.el7.x86_64.rpm

RPM's freshen option checks the versions of the packages specified on the command line against the
versions of packages that have already been installed on your system. When a newer version of an
already-installed package is processed by RPM's freshen option, it is upgraded to the newer version.
However, RPM's freshen option does not install a package if no previously-installed package of the same
name exists. This differs from RPM's upgrade option, as an upgrade does install packages whether or
not an older version of the package was already installed.

Freshening works for single packages or package groups. If you have just downloaded a large number
of different packages, and you only want to upgrade those packages that are already installed on your
system, freshening does the job. Thus, you do not have to delete any unwanted packages from the
group that you downloaded before using RPM.

In this case, issue the following with the *.rpm glob:


rpm -Fvh *.rpm

RPM then automatically upgrades only those packages that are already installed.

A.2.6. Querying
The RPM database stores information about all RPM packages installed on your system. It is stored in the directory /var/lib/rpm/, and is used to query what packages are installed, what version each package is, and to calculate any changes to any files in the package since installation, among other use cases.

To query this database, use the -q option. The rpm -q <package_name> command displays the package name, version, and release number of the installed package <package_name>. For example, using rpm -q tree to query the installed package tree might generate the following output:

tree-1.5.2.2-4.el7.x86_64

You can also use the following Package Selection Options (which is a subheading in the RPM man page: see man rpm for details) to further refine or qualify your query:

-a — queries all currently installed packages.
-f <file_name> — queries the RPM database for which package owns <file_name>. Specify the absolute path of the file (for example, rpm -qf /bin/ls instead of rpm -qf ls).
-p <package_file> — queries the uninstalled package <package_file>.

There are a number of ways to specify what information to display about queried packages. The
following options are used to select the type of information for which you are searching. These are called
the Package Query Options.

-i displays package information including name, description, release, size, build date, install date,
vendor, and other miscellaneous information.
-l displays the list of files that the package contains.
-s displays the state of all the files in the package.
-d displays a list of files marked as documentation (man pages, info pages, READMEs, etc.) in the
package.
-c displays a list of files marked as configuration files. These are the files you edit after installation to
adapt and customize the package to your system (for example, sendmail.cf, passwd, inittab,
etc.).

For options that display lists of files, add -v to the command to display the lists in a familiar ls -l
format.
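
For example, to list the files of the installed tree package in that long format, you might combine the query options as follows (the exact output depends on the installed package):

rpm -qlv tree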

A.2.7. Verifying
Verifying a package compares information about files installed from a package with the same information
from the original package. Among other things, verifying compares the file size, MD5 sum, permissions,
type, owner, and group of each file.

The command rpm -V verifies a package. You can use any of the Verify Options listed for querying to
specify the packages you wish to verify. A simple use of verifying is rpm -V tree, which verifies that all
the files in the tree package are as they were when they were originally installed. For example:

To verify a package containing a particular file:


rpm -Vf /usr/bin/tree

In this example, /usr/bin/tree is the absolute path to the file used to query a package.
To verify ALL installed packages throughout the system (which will take some time):

rpm -Va

To verify an installed package against an RPM package file:

rpm -Vp tree-1.5.3-2.el7.x86_64.rpm

This command can be useful if you suspect that your RPM database is corrupt.

If everything verified properly, there is no output. If there are any discrepancies, they are displayed. The
format of the output is a string of eight characters (a "c" denotes a configuration file) and then the file
name. Each of the eight characters denotes the result of a comparison of one attribute of the file to the
value of that attribute recorded in the RPM database. A single period (.) means the test passed. The
following characters denote specific discrepancies:

5 — MD5 checksum
S — file size
L — symbolic link
T — file modification time
D — device
U — user
G — group
M — mode (includes permissions and file type)
? — unreadable file (file permission errors, for example)
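
As an illustration only (the file name is hypothetical), output such as the following would indicate that the size, MD5 checksum, and modification time of a configuration file no longer match what is recorded in the RPM database:

S.5....T c /etc/foo.conf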

If you see any output, use your best judgment to determine if you should remove the package, reinstall it,
or fix the problem in another way.

A.3. Checking a Package's Signature


If you wish to verify that a package has not been corrupted or tampered with, examine only the md5sum
by typing the following command at a shell prompt (where <rpm_file> is the file name of the RPM
package):

rpm -K --nosignature <rpm_file>

The message <rpm_file>: rsa sha1 (md5) pgp md5 OK (specifically the OK part of it) is displayed. This brief message means that the file was not corrupted during download. To see a more verbose message, replace -K with -Kvv in the command.

On the other hand, how trustworthy is the developer who created the package? If the package is signed
with the developer's GnuPG key, you know that the developer really is who they say they are.

An RPM package can be signed using GNU Privacy Guard (or GnuPG), to help you make certain your
downloaded package is trustworthy.

GnuPG is a tool for secure communication; it is a complete and free replacement for the encryption


technology of PGP, an electronic privacy program. With GnuPG, you can authenticate the validity of
documents and encrypt/decrypt data to and from other recipients. GnuPG is capable of decrypting and
verifying PGP 5.x files as well.

During installation, GnuPG is installed by default. That way you can immediately start using GnuPG to
verify any packages that you receive from Red Hat. Before doing so, you must first import Red Hat's
public key.

A.3.1. Importing Keys


To verify Red Hat packages, you must import the Red Hat GnuPG key. To do so, execute the following
command at a shell prompt:

rpm --import /usr/share/rhn/RPM-GPG-KEY

To display a list of all keys installed for RPM verification, execute the command:

rpm -qa gpg-pubkey*

For the Red Hat key, the output includes:

gpg-pubkey-db42a60e-37ea5438

To display details about a specific key, use rpm -qi followed by the output from the previous
command:

rpm -qi gpg-pubkey-db42a60e-37ea5438

A.3.2. Verifying Signature of Packages


To check the GnuPG signature of an RPM file after importing the builder's GnuPG key, use the following
command (replace <rpm-file> with the file name of the RPM package):

rpm -K <rpm-file>

If all goes well, the following message is displayed: md5 gpg OK. This means that the signature of the package has been verified, that it is not corrupt, and therefore is safe to install and use.

A.4. Practical and Common Examples of RPM Usage


RPM is a useful tool for both managing your system and diagnosing and fixing problems. The best way
to make sense of all its options is to look at some examples.

Perhaps you have deleted some files by accident, but you are not sure what you deleted. To verify
your entire system and see what might be missing, you could try the following command:

rpm -Va

If some files are missing or appear to have been corrupted, you should probably either re-install the
package or uninstall and then re-install the package.
At some point, you might see a file that you do not recognize. To find out which package owns it,
enter:


rpm -qf /usr/bin/ghostscript

The output would look like the following:

ghostscript-8.70-1.el7.x86_64

We can combine the above two examples in the following scenario. Say you are having problems with
/usr/bin/paste. You would like to verify the package that owns that program, but you do not know
which package owns paste. Enter the following command,

rpm -Vf /usr/bin/paste

and the appropriate package is verified.


Do you want to find out more information about a particular program? You can try the following
command to locate the documentation which came with the package that owns that program:

rpm -qdf /usr/bin/free

The output would be similar to the following:

/usr/share/doc/procps-3.2.8/BUGS
/usr/share/doc/procps-3.2.8/FAQ
/usr/share/doc/procps-3.2.8/NEWS
/usr/share/doc/procps-3.2.8/TODO
/usr/share/man/man1/free.1.gz
/usr/share/man/man1/pgrep.1.gz
/usr/share/man/man1/pkill.1.gz
/usr/share/man/man1/pmap.1.gz
/usr/share/man/man1/ps.1.gz
/usr/share/man/man1/pwdx.1.gz
/usr/share/man/man1/skill.1.gz
/usr/share/man/man1/slabtop.1.gz
/usr/share/man/man1/snice.1.gz
/usr/share/man/man1/tload.1.gz
/usr/share/man/man1/top.1.gz
/usr/share/man/man1/uptime.1.gz
/usr/share/man/man1/w.1.gz
/usr/share/man/man1/watch.1.gz
/usr/share/man/man5/sysctl.conf.5.gz
/usr/share/man/man8/sysctl.8.gz
/usr/share/man/man8/vmstat.8.gz

You may find a new RPM, but you do not know what it does. To find information about it, use the
following command:

rpm -qip crontabs-1.10-32.1.el7.noarch.rpm

The output would be similar to the following:


Name        : crontabs                     Relocations: (not relocatable)
Version     : 1.10                         Vendor: Red Hat, Inc.
Release     : 32.1.el7                     Build Date: Thu 03 Dec 2009 02:17:44 AM CET
Install Date: (not installed)              Build Host: js20-bc1-11.build.redhat.com
Group       : System Environment/Base      Source RPM: crontabs-1.10-32.1.el7.src.rpm
Size        : 2486                         License: Public Domain and GPLv2
Signature   : RSA/8, Wed 24 Feb 2010 08:46:13 PM CET, Key ID 938a80caf21541eb
Packager    : Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>
Summary     : Root crontab files used to schedule the execution of programs
Description :
The crontabs package contains root crontab files and directories.
You will need to install cron daemon to run the jobs from the crontabs.
The cron daemon such as cronie or fcron checks the crontab files to
see when particular commands are scheduled to be executed. If commands
are scheduled, it executes them.
Crontabs handles a basic system function, so it should be installed on
your system.

Perhaps you now want to see what files the crontabs RPM package installs. You would enter the
following:

rpm -qlp crontabs-1.10-32.1.el7.noarch.rpm

The output is similar to the following:

/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly
/etc/crontab
/usr/bin/run-parts
/usr/share/man/man4/crontabs.4.gz

These are just a few examples. As you use RPM, you may find more uses for it.

A.5. Additional Resources


RPM is an extremely complex utility with many options and methods for querying, installing, upgrading,
and removing packages. Refer to the following resources to learn more about RPM.

A.5.1. Installed Documentation


rpm --help — This command displays a quick reference of RPM parameters.
man rpm — The RPM man page gives more detail about RPM parameters than the rpm --help
command.

A.5.2. Useful Websites


The RPM website — http://www.rpm.org/
The RPM mailing list can be subscribed to, and its archives read from, here —
https://lists.rpm.org/mailman/listinfo/rpm-list


A.5.3. Related Books


Maximum RPM — http://www.rpm.org/max-rpm/
The Maximum RPM book, which you can read online, covers everything from general RPM
usage to building your own RPMs to programming with rpmlib.


The X Window System


While the heart of Red Hat Enterprise Linux is the kernel, for many users, the face of the operating
system is the graphical environment provided by the X Window System, also called X.

Other windowing environments have existed in the UNIX world, including some that predate the release
of the X Window System in June 1984. Nonetheless, X has been the default graphical environment for
most UNIX-like operating systems, including Red Hat Enterprise Linux, for many years.

The graphical environment for Red Hat Enterprise Linux is supplied by the X.Org Foundation, an open
source organization created to manage development and strategy for the X Window System and related
technologies. X.Org is a large-scale, rapid-developing project with hundreds of developers around the
world. It features a wide degree of support for a variety of hardware devices and architectures, and runs
on myriad operating systems and platforms.

The X Window System uses a client-server architecture. Its main purpose is to provide a network-transparent window system, which runs on a wide range of computing and graphics machines. The X
server (the Xorg binary) listens for connections from X client applications via a network or local loopback
interface. The server communicates with the hardware, such as the video card, monitor, keyboard, and
mouse. X client applications exist in the user space, creating a graphical user interface (GUI) for the user
and passing user requests to the X server.

B.1. The X Server


Red Hat Enterprise Linux 7 uses a version of the X server that includes several video drivers, EXA, and platform support enhancements over the previous release, among other improvements. In addition, this release
includes several automatic configuration features for the X server, as well as the generic input driver,
evdev, that supports all input devices that the kernel knows about, including most mice and keyboards.

X11R7.1 was the first release to take specific advantage of making the X Window System modular. This
release split X into logically distinct modules, which make it easier for open source developers to
contribute code to the system.

In the current release, all libraries, headers, and binaries live under the /usr/ directory. The
/etc/X11/ directory contains configuration files for X client and server applications. This includes
configuration files for the X server itself, the X display managers, and many other base components.

The configuration file for the newer Fontconfig-based font architecture is still
/etc/fonts/fonts.conf. For more information on configuring and adding fonts, refer to Section B.4,
“Fonts”.

Because the X server performs advanced tasks on a wide array of hardware, it requires detailed
information about the hardware it works on. The X server is able to automatically detect most of the
hardware that it runs on and configure itself accordingly. Alternatively, hardware can be manually
specified in configuration files.

The Red Hat Enterprise Linux system installer, Anaconda, installs and configures X automatically, unless
the X packages are not selected for installation. If there are any changes to the monitor, video card or
other devices managed by the X server, most of the time, X detects and reconfigures these changes
automatically. In rare cases, X must be reconfigured manually.

B.2. Desktop Environments and Window Managers


Once an X server is running, X client applications can connect to it and create a GUI for the user. A


range of GUIs are available with Red Hat Enterprise Linux, from the rudimentary Tab Window Manager (twm) to the highly developed and interactive desktop environments (such as GNOME or KDE) that most Red Hat Enterprise Linux users are familiar with.

To create the latter, more comprehensive GUI, two main classes of X client application must connect to
the X server: a window manager and a desktop environment.

B.2.1. Desktop Environments


A desktop environment integrates various X clients to create a common graphical user environment and
a development platform.

Desktop environments have advanced features allowing X clients and other running processes to
communicate with one another, while also allowing all applications written to work in that environment to
perform advanced tasks, such as drag-and-drop operations.

Red Hat Enterprise Linux provides two desktop environments:

GNOME — The default desktop environment for Red Hat Enterprise Linux based on the GTK+ 2
graphical toolkit.
KDE — An alternative desktop environment based on the Qt 4 graphical toolkit.

Both GNOME and KDE have advanced-productivity applications, such as word processors,
spreadsheets, and Web browsers; both also provide tools to customize the look and feel of the GUI.
Additionally, if both the GTK+ 2 and the Qt libraries are present, KDE applications can run in GNOME
and vice versa.

B.2.2. Window Managers


Window managers are X client programs which are either part of a desktop environment or, in some
cases, stand-alone. Their primary purpose is to control the way graphical windows are positioned,
resized, or moved. Window managers also control title bars, window focus behavior, and user-specified
key and mouse button bindings.

The Red Hat Enterprise Linux repositories provide five different window managers.

metacity
The Metacity window manager is the default window manager for GNOME. It is a simple and
efficient window manager which supports custom themes. This window manager is
automatically pulled in as a dependency when the GNOME desktop is installed.

kwin
The KWin window manager is the default window manager for KDE. It is an efficient window
manager which supports custom themes. This window manager is automatically pulled in as a
dependency when the KDE desktop is installed.

compiz
The Compiz compositing window manager is based on OpenGL and can use 3D graphics hardware to create fast compositing desktop effects for window management. Advanced features, such as a cube workspace, are implemented as loadable plug-ins. To run this window manager, you need to install the compiz package.

mwm


The Motif Window Manager (mwm) is a basic, stand-alone window manager. Since it is designed to be stand-alone, it should not be used in conjunction with GNOME or KDE. To run this window manager, you need to install the openmotif package.

twm
The minimalist Tab Window Manager (twm), which provides the most basic tool set among the
available window managers, can be used either as a stand-alone or with a desktop
environment. To run this window manager, you need to install the xorg-x11-twm package.

B.3. X Server Configuration Files


The X server is a single binary executable /usr/bin/Xorg; a symbolic link X pointing to this file is also
provided. Associated configuration files are stored in the /etc/X11/ and /usr/share/X11/
directories.

The X Window System supports two different configuration schemes. Configuration files in the xorg.conf.d directory contain preconfigured settings from vendors and from the distribution, and these files should not be edited by hand. Configuration in the xorg.conf file, on the other hand, is done completely by hand but is not necessary in most scenarios.

When do you need the xorg.conf file?

All necessary parameters for a display and peripherals are auto-detected and configured during
installation. The configuration file for the X server, /etc/X11/xorg.conf, that was necessary in
previous releases, is not supplied with the current release of the X Window System. It can still be
useful to create the file manually to configure new hardware, to set up an environment with
multiple video cards, or for debugging purposes.

The /usr/lib/xorg/modules/ (or /usr/lib64/xorg/modules/) directory contains X server modules that can be loaded dynamically at runtime. By default, only some modules in /usr/lib/xorg/modules/ are automatically loaded by the X server.

When Red Hat Enterprise Linux 7 is installed, the configuration files for X are created using information
gathered about the system hardware during the installation process by the HAL (Hardware Abstraction
Layer) configuration back end. Whenever the X server is started, it asks HAL for the list of input devices
and adds each of them with their respective driver. Whenever a new input device is plugged in, or an
existing input device is removed, HAL notifies the X server about the change. Because of this notification
system, devices using the mouse, kbd, or vmmouse driver configured in the xorg.conf file are, by
default, ignored by the X server. Refer to Section B.3.3.3, “The ServerFlags section” for further
details. Additional configuration is provided in the /etc/X11/xorg.conf.d/ directory and it can
override or augment any configuration that has been obtained through HAL.

B.3.1. The Structure of the Configuration


The format of the X configuration files is composed of many different sections which address specific
aspects of the system hardware. Each section begins with a Section "section-name" line, where
"section-name" is the title for the section, and ends with an EndSection line. Each section contains
lines that include option names and one or more option values. Some of these are sometimes enclosed
in double quotes (").


Some options within the /etc/X11/xorg.conf file accept a Boolean switch which turns the feature on
or off. The acceptable values are:

1, on, true, or yes — Turns the option on.


0, off, false, or no — Turns the option off.

The following shows a typical configuration file for the keyboard. Lines beginning with a hash sign (#) are
not read by the X server and are used for human-readable comments.

# This file is autogenerated by system-setup-keyboard. Any
# modifications will be lost.

Section "InputClass"
Identifier "system-setup-keyboard"
MatchIsKeyboard "on"
Option "XkbModel" "pc105"
Option "XkbLayout" "cz,us"
# Option "XkbVariant" "(null)"
Option "XkbOptions"
"terminate:ctrl_alt_bksp,grp:shifts_toggle,grp_led:scroll"
EndSection

B.3.2. The xorg.conf.d Directory


The X server supports two configuration directories. The /usr/share/X11/xorg.conf.d/ directory provides separate configuration files from vendors or third-party packages; changes to files in this directory may be overwritten by settings specified in the /etc/X11/xorg.conf file. The /etc/X11/xorg.conf.d/ directory stores user-specific configuration.

Files with the suffix .conf in configuration directories are parsed by the X server upon startup and are
treated like part of the traditional xorg.conf configuration file. These files may contain one or more
sections; for a description of the options in a section and the general layout of the configuration file, refer
to Section B.3.3, “The xorg.conf File” or to the xorg.conf(5) man page. The X server essentially
treats the collection of configuration files as one big file with entries from xorg.conf at the end. Users are encouraged to put custom configuration into /etc/X11/xorg.conf and leave the /usr/share/X11/xorg.conf.d/ directory for configuration snippets provided by the distribution.

B.3.3. The xorg.conf File


In previous releases of the X Window System, the /etc/X11/xorg.conf file was used to store initial setup for X. When a change occurred with the monitor, video card or other device managed by the X server, the file needed to be edited manually. In Red Hat Enterprise Linux, there is rarely a need to manually create and edit the /etc/X11/xorg.conf file. Nevertheless, it is still useful to understand the various sections and optional parameters available, especially when troubleshooting or setting up an unusual hardware configuration.

In the following, some important sections are described in the order in which they appear in a typical
/etc/X11/xorg.conf file. More detailed information about the X server configuration file can be found
in the xorg.conf(5) man page. This section is mostly intended for advanced users as most
configuration options described below are not needed in typical configuration scenarios.

B.3.3.1. The InputClass section


InputClass is a new type of configuration section that does not apply to a single device but rather to a
class of devices, including hot-plugged devices. An InputClass section's scope is limited by the


matches specified; in order to apply to an input device, all matches must apply to the device as seen in
the example below:

Section "InputClass"
Identifier "touchpad catchall"
MatchIsTouchpad "on"
Driver "synaptics"
EndSection

If this snippet is present in an xorg.conf file or an xorg.conf.d directory, any touchpad present in
the system is assigned the synaptics driver.

Alphanumeric sorting in xorg.conf.d

Note that due to alphanumeric sorting of configuration files in the xorg.conf.d directory, the
Driver setting in the example above overwrites previously set driver options. The more generic
the class, the earlier it should be listed.

The match options specify which devices a section may apply to. To match a device, all match options
must correspond. The following options are commonly used in the InputClass section:

MatchIsPointer, MatchIsKeyboard, MatchIsTouchpad, MatchIsTouchscreen, MatchIsJoystick — Boolean options to specify a type of a device.
MatchProduct "product_name" — this option matches if the product_name substring occurs in the product name of the device.
MatchVendor "vendor_name" — this option matches if the vendor_name substring occurs in the vendor name of the device.
MatchDevicePath "/path/to/device" — this option matches any device if its device path corresponds to the patterns given in the "/path/to/device" template, for example /dev/input/event*. Refer to the fnmatch(3) man page for further details.
MatchTag "tag_pattern" — this option matches if at least one tag assigned by the HAL configuration back end matches the tag_pattern pattern.

A configuration file may have multiple InputClass sections. These sections are optional and are used
to configure a class of input devices as they are automatically added. An input device can match more
than one InputClass section. When arranging these sections, it is recommended to put generic
matches above specific ones because each input class can override settings from a previous one if an
overlap occurs.
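
As an illustrative sketch (the vendor string ExampleVendor and the option shown are hypothetical and not taken from a real device), a generic catchall section can be followed by a more specific section that adds an option for one particular touchpad:

Section "InputClass"
Identifier "touchpad catchall"
MatchIsTouchpad "on"
Driver "synaptics"
EndSection

Section "InputClass"
Identifier "ExampleVendor touchpad"
MatchIsTouchpad "on"
MatchProduct "ExampleVendor"
Option "TapButton1" "1"
EndSection

Because both sections match such a touchpad, the more specific section is applied after the generic one and can override or extend its settings.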

B.3.3.2. The InputDevice section


Each InputDevice section configures one input device for the X server. Previously, systems typically
had at least one InputDevice section for the keyboard, and most mouse settings were automatically
detected.

With Red Hat Enterprise Linux 7, no InputDevice configuration is needed for most setups, and the
xorg-x11-drv-* input driver packages provide the automatic configuration through HAL. The default driver
for both keyboards and mice is evdev.

The following example shows a typical InputDevice section for a keyboard:


Section "InputDevice"
Identifier "Keyboard0"
Driver "kbd"
Option "XkbModel" "pc105"
Option "XkbLayout" "us"
EndSection

The following entries are commonly used in the InputDevice section:

Identifier — Specifies a unique name for this InputDevice section. This is a required entry.
Driver — Specifies the name of the device driver X must load for the device. If the
AutoAddDevices option is enabled (which is the default setting), any input device section with
Driver "m ouse" or Driver "kbd" will be ignored. This is necessary due to conflicts between
the legacy mouse and keyboard drivers and the new evdev generic driver. Instead, the server will
use the information from the back end for any input devices. Any custom input device configuration in
the xorg.conf should be moved to the back end. In most cases, the back end will be HAL and the
configuration location will be the /etc/X11/xorg.conf.d directory.
Option — Specifies necessary options pertaining to the device.
A mouse may also be specified to override any auto-detected values for the device. The following
options are typically included when adding a mouse in the xorg.conf file (see the sketch after this list):
Protocol — Specifies the protocol used by the mouse, such as IMPS/2.
Device — Specifies the location of the physical device.
Emulate3Buttons — Specifies whether to allow a two-button mouse to act like a three-button
mouse when both mouse buttons are pressed simultaneously.
Consult the xorg.conf(5) man page for a complete list of valid options for this section.
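
The following is an illustrative sketch only; the protocol and device path are assumptions, and such values are normally auto-detected. An InputDevice section that overrides the detected values for a mouse might look like this:

Section "InputDevice"
Identifier "Mouse0"
Driver "mouse"
Option "Protocol" "IMPS/2"
Option "Device" "/dev/input/mice"
Option "Emulate3Buttons" "no"
EndSection

Remember that, with the AutoAddDevices option enabled (the default), a section using the legacy mouse driver is ignored in favor of the configuration provided through the HAL back end.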

B.3.3.3. The ServerFlags section


The optional ServerFlags section contains miscellaneous global X server settings. Any settings in this
section may be overridden by options placed in the ServerLayout section (refer to Section B.3.3.4,
“The ServerLayout Section” for details).

Each entry within the ServerFlags section occupies a single line and begins with the term Option
followed by an option enclosed in double quotation marks (").

The following is a sample ServerFlags section:

Section "ServerFlags"
Option "DontZap" "true"
EndSection

The following lists some of the most useful options:

"DontZap" "boolean" — When the value of <boolean> is set to true, this setting prevents the
use of the Ctrl+Alt+Backspace key combination to immediately terminate the X server.


X keyboard extension

Even if this option is enabled, the key combination still must be configured in the X Keyboard Extension (XKB) map before it can be used. One way to add the key combination to the map is to run the following command:

setxkbmap -option "terminate:ctrl_alt_bksp"

"DontZoom " "boolean" — When the value of <boolean> is set to true, this setting prevents
cycling through configured video resolutions using the Ctrl+Alt+Keypad-Plus and
Ctrl+Alt+Keypad-Minus key combinations.
"AutoAddDevices" "boolean" — When the value of <boolean> is set to false, the server will
not hot plug input devices and instead rely solely on devices configured in the xorg.conf file. Refer
to Section B.3.3.2, “The InputDevice section” for more information concerning input devices. This
option is enabled by default and HAL (hardware abstraction layer) is used as a back end for device
discovery.
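
As a brief illustrative sketch (the values are chosen arbitrarily), several of these options can be combined in a single ServerFlags section:

Section "ServerFlags"
Option "DontZap" "true"
Option "DontZoom" "true"
Option "AutoAddDevices" "false"
EndSection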

B.3.3.4. The ServerLayout Section


The ServerLayout section binds together the input and output devices controlled by the X server. At a
minimum, this section must specify one input device and one output device. By default, a monitor (output
device) and a keyboard (input device) are specified.

The following example shows a typical ServerLayout section:

Section "ServerLayout"
Identifier "Default Layout"
Screen 0 "Screen0" 0 0
InputDevice "Mouse0" "CorePointer"
InputDevice "Keyboard0" "CoreKeyboard"
EndSection

The following entries are commonly used in the ServerLayout section:

Identifier — Specifies a unique name for this ServerLayout section.


Screen — Specifies the name of a Screen section to be used with the X server. More than one
Screen option may be present.
The following is an example of a typical Screen entry:

Screen 0 "Screen0" 0 0

The first number in this example Screen entry (0) indicates that the first monitor connector, or head
on the video card, uses the configuration specified in the Screen section with the identifier
"Screen0".
An example of a Screen section with the identifier "Screen0" can be found in Section B.3.3.8,
“The Screen section”.
If the video card has more than one head, another Screen entry with a different number and a
different Screen section identifier is necessary.
The numbers to the right of "Screen0" give the absolute X and Y coordinates for the upper left
corner of the screen (0 0 by default).


InputDevice — Specifies the name of an InputDevice section to be used with the X server.
It is advisable that there be at least two InputDevice entries: one for the default mouse and one for
the default keyboard. The options CorePointer and CoreKeyboard indicate that these are the
primary mouse and keyboard. If the AutoAddDevices option is enabled, this entry does not need to be specified in the ServerLayout section. If the AutoAddDevices option is disabled, both mouse and keyboard are auto-detected with the default values.
Option "option-name" — An optional entry which specifies extra parameters for the section. Any
options listed here override those listed in the ServerFlags section.
Replace <option-name> with a valid option listed for this section in the xorg.conf(5) man page.

It is possible to put more than one ServerLayout section in the /etc/X11/xorg.conf file. By default, the server reads only the first one it encounters, however. If there is an alternative ServerLayout section, it can be specified as a command line argument when starting an X session, as in the Xorg -layout <layoutname> command.

B.3.3.5. The Files section


The Files section sets paths for services vital to the X server, such as the font path. This is an optional
section, as these paths are normally detected automatically. This section can be used to override
automatically detected values.

The following example shows a typical Files section:

Section "Files"
RgbPath "/usr/share/X11/rgb.txt"
FontPath "unix/:7100"
EndSection

The following entries are commonly used in the Files section:

ModulePath — An optional parameter which specifies alternate directories which store X server
modules.

B.3.3.6. The Monitor section


Each Monitor section configures one type of monitor used by the system. This is an optional entry as
most monitors are now detected automatically.

This example shows a typical Monitor section for a monitor:

Section "Monitor"
Identifier "Monitor0"
VendorName "Monitor Vendor"
ModelName "DDC Probed Monitor - ViewSonic G773-2"
DisplaySize 320 240
HorizSync 30.0 - 70.0
VertRefresh 50.0 - 180.0
EndSection

The following entries are commonly used in the Monitor section:

Identifier — Specifies a unique name for this Monitor section. This is a required entry.
VendorName — An optional parameter which specifies the vendor of the monitor.
ModelName — An optional parameter which specifies the monitor's model name.
DisplaySize — An optional parameter which specifies, in millimeters, the physical size of the
monitor's picture area.
HorizSync — Specifies the range of horizontal sync frequencies compatible with the monitor, in
kHz. These values help the X server determine the validity of built-in or specified Modeline entries
for the monitor.
VertRefresh — Specifies the range of vertical refresh frequencies supported by the monitor, in
Hz. These values help the X server determine the validity of built-in or specified Modeline entries
for the monitor.
Modeline — An optional parameter which specifies additional video modes for the monitor at
particular resolutions, with certain horizontal sync and vertical refresh frequencies; an example appears
after this list. Refer to the xorg.conf(5) man page for a more detailed explanation of Modeline entries.
Option "option-name" — An optional entry which specifies extra parameters for the section.
Replace <option-name> with a valid option listed for this section in the xorg.conf(5) man page.
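For instance, a Modeline for 1024x768 at 60 Hz, as produced by a modeline calculator such as the cvt
utility, might read as follows (the values are an example, not a recommendation for any particular
monitor):

Modeline "1024x768_60.00"  63.50  1024 1072 1176 1328  768 771 775 798 -hsync +vsync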

B.3.3.7. The Device section


Each Device section configures one video card on the system. While one Device section is the
minimum, additional instances may occur for each video card installed on the machine.

The following example shows a typical Device section for a video card:

Section "Device"
Identifier "Videocard0"
Driver "mga"
VendorName "Videocard vendor"
BoardName "Matrox Millennium G200"
VideoRam 8192
Option "dpms"
EndSection

The following entries are commonly used in the Device section:

Identifier — Specifies a unique name for this Device section. This is a required entry.
Driver — Specifies which driver the X server must load to utilize the video card. A list of drivers can
be found in /usr/share/hwdata/videodrivers, which is installed with the hwdata package.
VendorName — An optional parameter which specifies the vendor of the video card.
BoardName — An optional parameter which specifies the name of the video card.
VideoRam — An optional parameter which specifies the amount of RAM available on the video card,
in kilobytes. This setting is only necessary for video cards the X server cannot probe to detect the
amount of video RAM.
BusID — An entry which specifies the bus location of the video card. On systems with only one video
card a BusID entry is optional and may not even be present in the default /etc/X11/xorg.conf
file. On systems with more than one video card, however, a BusID entry is required.
Screen — An optional entry which specifies which monitor connector or head on the video card the
Device section configures. This option is only useful for video cards with multiple heads.
If multiple monitors are connected to different heads on the same video card, separate Device
sections must exist and each of these sections must have a different Screen value.
The value of the Screen entry must be an integer. The first head on the video card has a value of 0.
The value for each additional head increments this value by one.
Option "option-name" — An optional entry which specifies extra parameters for the section.
Replace <option-name> with a valid option listed for this section in the xorg.conf(5) man page.


One of the more common options is "dpms" (for Display Power Management Signaling, a VESA
standard), which activates the Energy Star energy compliance setting for the monitor.
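For example, splitting a hypothetical dual-head card into two Device sections could look like this (the
identifiers, driver, and BusID are illustrative):

Section "Device"
Identifier "Videocard0"
Driver "mga"
BusID "PCI:1:0:0"
Screen 0
EndSection

Section "Device"
Identifier "Videocard0-head1"
Driver "mga"
BusID "PCI:1:0:0"
Screen 1
EndSection

Each Device section would then be referenced by its own Screen section, and both Screen entries
would appear in the ServerLayout section.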

B.3.3.8. The Screen section


Each Screen section binds one video card (or video card head) to one monitor by referencing the
Device section and the Monitor section for each. While one Screen section is the minimum,
additional instances may occur for each video card and monitor combination present on the machine.

The following example shows a typical Screen section:

Section "Screen"
Identifier "Screen0"
Device "Videocard0"
Monitor "Monitor0"
DefaultDepth 16

SubSection "Display"
Depth 24
Modes "1280x1024" "1280x960" "1152x864" "1024x768" "800x600" "640x480"
EndSubSection

SubSection "Display"
Depth 16
Modes "1152x864" "1024x768" "800x600" "640x480"
EndSubSection
EndSection

The following entries are commonly used in the Screen section:

Identifier — Specifies a unique name for this Screen section. This is a required entry.
Device — Specifies the unique name of a Device section. This is a required entry.
Monitor — Specifies the unique name of a Monitor section. This is only required if a specific
Monitor section is defined in the xorg.conf file. Normally, monitors are detected automatically.
DefaultDepth — Specifies the default color depth in bits. In the previous example, 16 (which
provides thousands of colors) is the default. Only one DefaultDepth entry is permitted, although
this can be overridden with the Xorg command line option -depth <n>, where <n> is any additional
depth specified.
SubSection "Display" — Specifies the screen modes available at a particular color depth. The
Screen section can have multiple Display subsections, which are entirely optional since screen
modes are detected automatically.
This subsection is normally used to override auto-detected modes.
Option "option-name" — An optional entry which specifies extra parameters for the section.
Replace <option-name> with a valid option listed for this section in the xorg.conf(5) man page.
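For example, to test the 24-bit depth defined in the first Display subsection above without changing
DefaultDepth, the server could be started from runlevel 3 with the following command (a sketch; the
depth must be one listed in the Screen section):

startx -- -depth 24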

B.3.3.9. The DRI section


The optional DRI section specifies parameters for the Direct Rendering Infrastructure (DRI). DRI is an
interface which allows 3D software applications to take advantage of 3D hardware acceleration
capabilities built into most modern video hardware. In addition, DRI can improve 2D performance via
hardware acceleration, if supported by the video card driver.

This section is rarely used, as the DRI Group and Mode are automatically initialized to default values. If a
different Group or Mode is needed, then adding this section to the xorg.conf file will override the
default values.

The following example shows a typical DRI section:

Section "DRI"
Group 0
Mode 0666
EndSection

Since different video cards use DRI in different ways, do not add to this section without first referring to
http://dri.freedesktop.org/wiki/.

B.4. Fonts
Red Hat Enterprise Linux uses the Fontconfig subsystem to manage and display fonts under the X Window
System. It simplifies font management and provides advanced display features, such as anti-aliasing.
This system is used automatically for applications programmed using the Qt 3 or GTK+ 2 graphical
toolkits, or their newer versions.

The Fontconfig font subsystem allows applications to directly access fonts on the system and use the X
FreeType interface library (Xft) or other rendering mechanisms to render Fontconfig fonts with advanced
features such as anti-aliasing. Graphical applications can use the Xft library with Fontconfig to draw text
to the screen.

Font configuration

Fontconfig uses the /etc/fonts/fonts.conf configuration file, which should not be edited by
hand.

Fonts group

Any system where the user expects to run remote X applications needs to have the fonts group
installed. This can be done either by selecting the group in the installer or by running the yum
groupinstall fonts command after installation.
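For example, run as root:

yum groupinstall fonts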

B.4.1. Adding Fonts to Fontconfig


Adding new fonts to the Fontconfig subsystem is a straightforward process:

1. To add fonts for an individual user, copy the new fonts into the .fonts/ directory in the user's
home directory.
To add fonts system-wide, copy the new fonts into the /usr/share/fonts/ directory. It is a
good idea to create a new subdirectory, such as local/ or similar, to help distinguish between
user-installed and default fonts.
2. Run the fc-cache command as root to update the font information cache:

fc-cache <path-to-font-directory>

In this command, replace <path-to-font-directory> with the directory containing the new
fonts (either /usr/share/fonts/local/ or /home/<user>/.fonts/).
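A complete system-wide example might therefore look like this (run as root; the font file name is
hypothetical):

mkdir -p /usr/share/fonts/local
cp DejaVuSansCustom.ttf /usr/share/fonts/local/
fc-cache /usr/share/fonts/local/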


Interactive font installation

Individual users may also install fonts interactively, by typing fonts:/// into the Nautilus
address bar, and dragging the new font files there.

B.5. Runlevels and X


In most cases, the Red Hat Enterprise Linux installer configures a machine to boot into a graphical login
environment, known as runlevel 5. It is possible, however, to boot into a text-only multi-user mode called
runlevel 3 and begin an X session from there.

The following subsections review how X starts up in both runlevel 3 and runlevel 5. For more information
about runlevels, refer to Chapter 7, Managing Services with systemd.

B.5.1. Runlevel 3
When in runlevel 3, the best way to start an X session is to log in and type startx. The startx
command is a front-end to the xinit command, which launches the X server (Xorg) and connects X
client applications to it. Because the user is already logged into the system at runlevel 3, startx does
not launch a display manager or authenticate users. Refer to Section B.5.2, “Runlevel 5” for more
information about display managers.

1. When the startx command is executed, it searches for the .xinitrc file in the user's home
directory to define the desktop environment and possibly other X client applications to run. If no
.xinitrc file is present, it uses the system default /etc/X11/xinit/xinitrc file instead.
2. The default xinitrc script then searches for user-defined files and default system files, including
.Xresources, .Xmodmap, and .Xkbmap in the user's home directory, and Xresources,
Xmodmap, and Xkbmap in the /etc/X11/ directory. The Xmodmap and Xkbmap files, if they
exist, are used by the xmodmap utility to configure the keyboard. The Xresources file is read to
assign specific preference values to applications.
3. After setting the above options, the xinitrc script executes all scripts located in the
/etc/X11/xinit/xinitrc.d/ directory. One important script in this directory is xinput.sh,
which configures settings such as the default language.
4. The xinitrc script attempts to execute .Xclients in the user's home directory and turns to
/etc/X11/xinit/Xclients if it cannot be found. The purpose of the Xclients file is to start
the desktop environment or, possibly, just a basic window manager. The .Xclients script in the
user's home directory starts the user-specified desktop environment in the .Xclients-default
file. If .Xclients does not exist in the user's home directory, the standard
/etc/X11/xinit/Xclients script attempts to start another desktop environment, trying
GNOME first, then KDE, followed by twm.

When in runlevel 3, the user is returned to a text mode user session after ending an X session.
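As an illustration of step 1 above, a minimal ~/.xinitrc that starts a single desktop environment might
contain the following (a sketch; it assumes GNOME is installed, and any other session command could be
used instead):

#!/bin/sh
# Load user X resources if the file exists
[ -f "$HOME/.Xresources" ] && xrdb -merge "$HOME/.Xresources"
# Replace this shell with the desktop session
exec gnome-session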

B.5.2. Runlevel 5
When the system boots into runlevel 5, a special X client application called a display manager is
launched. A user must authenticate using the display manager before any desktop environment or
window managers are launched.

Depending on the desktop environments installed on the system, three different display managers are
available to handle user authentication.


GDM (GNOME Display Manager) — The default display manager for Red Hat Enterprise Linux.
GDM allows the user to configure language settings, and to shut down, restart, or log in to the system.
KDM — KDE's display manager, which allows the user to shut down, restart, or log in to the system.
xdm (X Window Display Manager) — A very basic display manager which only lets the user log in to
the system.

When booting into runlevel 5, the /etc/X11/prefdm script determines the preferred display manager
by referencing the /etc/sysconfig/desktop file. A list of options for this file is available in:

/usr/share/doc/initscripts-<version-number>/sysconfig.txt

where <version-number> is the version number of the initscripts package.
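For example, a sketch of an /etc/sysconfig/desktop file that selects GDM and a GNOME session (the
variable names are documented in the sysconfig.txt file mentioned above; the values shown are one
possible choice):

DESKTOP="GNOME"
DISPLAYMANAGER="GNOME"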

Each of the display managers references the /etc/X11/xdm/Xsetup_0 file to set up the login screen.
Once the user logs into the system, the /etc/X11/xdm/GiveConsole script runs to assign ownership
of the console to the user. Then, the /etc/X11/xdm/Xsession script runs to accomplish many of the
tasks normally performed by the xinitrc script when starting X from runlevel 3, including setting
system and user resources, as well as running the scripts in the /etc/X11/xinit/xinitrc.d/
directory.

Users can specify which desktop environment they want to use when they authenticate using the GNOME
or KDE display managers by selecting it from the Sessions menu item accessed by selecting System →
Preferences → More Preferences → Sessions. If the desktop environment is not specified in the
display manager, the /etc/X11/xdm/Xsession script checks the .xsession and .Xclients files in
the user's home directory to decide which desktop environment to load. As a last resort, the
/etc/X11/xinit/Xclients file is used to select a desktop environment or window manager to use in
the same way as runlevel 3.

When the user finishes an X session on the default display (:0) and logs out, the
/etc/X11/xdm/TakeConsole script runs and reassigns ownership of the console to the root user.
The original display manager, which continued running while the user was logged in, takes control by
spawning a new display manager. This restarts the X server, displays a new login window, and starts the
entire process over again.

The user is returned to the display manager after logging out of X from runlevel 5.

For more information on how display managers control user authentication, refer to the
/usr/share/doc/gdm-<version-number>/README, where <version-number> is the version
number for the gdm package installed, or the xdm man page.

B.6. Additional Resources


There is a large amount of detailed information available about the X server, the clients that connect to it,
and the assorted desktop environments and window managers.

B.6.1. Installed Documentation


/usr/share/X11/doc/ — contains detailed documentation on the X Window System architecture,
as well as how to get additional information about the Xorg project as a new user.
/usr/share/doc/gdm-<version-number>/README — contains information on how display
managers control user authentication.


man xorg.conf — Contains information about the xorg.conf configuration files, including the
meaning and syntax for the different sections within the files.
man Xorg — Describes the Xorg display server.

B.6.2. Useful Websites


http://www.X.org/ — Home page of the X.Org Foundation, which produces major releases of the X
Window System bundled with Red Hat Enterprise Linux to control the necessary hardware and
provide a GUI environment.
http://dri.sourceforge.net/ — Home page of the DRI (Direct Rendering Infrastructure) project. The DRI
is the core hardware 3D acceleration component of X.
http://www.gnome.org/ — Home of the GNOME project.
http://www.kde.org/ — Home of the KDE desktop environment.


Revision History
Revision 0.0-0.6 Thu, Apr 10 2014 Stephen Wadeley
Added TigerVNC chapter. Updates mainly to OpenLMI, and Yum chapters, as well as the GRUB 2
section.

Revision 0.0-0.5 Wed, Feb 05 2014 Stephen Wadeley


Corrected a number of errors in the PTP and GRUB2 chapters.

Revision 0.0-0.3 Wed 11 Dec 2013 Jaromír Hradílek


Red Hat Enterprise Linux 7.0 Beta release of the System Administrator's Guide.

Index
Symbols
.fetchmailrc, Fetchmail Configuration Options
- server options, Server Options
- user options, User Options

.procmailrc, Procmail Configuration


/dev/oprofile/, Understanding /dev/oprofile/
/var/spool/anacron , Configuring Anacron Jobs
/var/spool/cron , Configuring Cron Jobs

(see OProfile)

A
adding
- group, Adding a New Group
- user, Adding a New User

anacron, Cron and Anacron


- anacron configuration file, Configuring Anacron Jobs
- user-defined tasks, Configuring Anacron Jobs

anacrontab , Configuring Anacron Jobs


Apache HTTP Server
- additional resources
- installed documentation, Installed Documentation
- useful websites, Useful Websites

- checking configuration, Editing the Configuration Files


- checking status, Verifying the Service Status
- directories
- /etc/httpd/conf.d/ , Editing the Configuration Files


- /usr/lib/httpd/modules/ , Working with Modules


- /usr/lib64/httpd/modules/ , Working with Modules

- files
- /etc/httpd/conf.d/ssl.conf , Enabling the mod_ssl Module
- /etc/httpd/conf/httpd.conf , Editing the Configuration Files

- modules
- developing, Writing a Module
- loading, Loading a Module
- mod_ssl , Setting Up an SSL Server
- mod_userdir, Updating the Configuration

- restarting, Restarting the Service


- SSL server
- certificate, An Overview of Certificates and Security, Using an Existing Key and
Certificate, Generating a New Key and Certificate
- certificate authority, An Overview of Certificates and Security
- private key, An Overview of Certificates and Security, Using an Existing Key and
Certificate, Generating a New Key and Certificate
- public key, An Overview of Certificates and Security

- starting, Starting the Service


- stopping, Stopping the Service
- version 2.2
- updating from version 2.0, Updating the Configuration

- version 2.4
- changes, Notable Changes

- virtual host, Setting Up Virtual Hosts

at , At and Batch
- additional resources, Additional Resources

Automated Tasks, Automating System Tasks

B
batch , At and Batch
- additional resources, Additional Resources

blkid, Using the blkid Command


bonding (see channel bonding)
boot loader
- GRUB 2 boot loader, Working with the GRUB 2 Boot Loader
- verifying, Verifying the Boot Loader


boot media, Preparing to Upgrade

C
ch-email .fetchmailrc
- global options, Global Options

channel bonding
- configuration, Using Channel Bonding
- description, Using Channel Bonding
- parameters to bonded interfaces, Bonding Module Directives

channel bonding interface (see kernel module)


Configuration File Changes, Preserving Configuration File Changes
CPU usage, Viewing CPU Usage
createrepo, Creating a Yum Repository
cron, Cron and Anacron
- additional resources, Additional Resources
- cron configuration file, Configuring Cron Jobs
- user-defined tasks, Configuring Cron Jobs

crontab , Configuring Cron Jobs


CUPS (see Printer Configuration)

D
desktop environments (see X)
df, Using the df Command
directory server (see OpenLDAP)
display managers (see X)
documentation
- finding installed, Practical and Common Examples of RPM Usage

drivers (see kernel module)


DSA keys
- generating, Generating Key Pairs

du, Using the du Command

E
email
- additional resources, Additional Resources
- installed documentation, Installed Documentation


- related books, Related Books


- useful websites, Useful Websites

- Fetchmail, Fetchmail
- history of, Mail Servers
- mail server
- Dovecot, Dovecot

- Postfix, Postfix
- Procmail, Mail Delivery Agents
- program classifications, Email Program Classifications
- protocols, Email Protocols
- IMAP, IMAP
- POP, POP
- SMTP, SMTP

- security, Securing Communication


- clients, Secure Email Clients
- servers, Securing Email Client Communications

- Sendmail, Sendmail
- spam
- filtering out, Spam Filters

- types
- Mail Delivery Agent, Mail Delivery Agent
- Mail Transport Agent, Mail Transport Agent
- Mail User Agent, Mail User Agent

extra packages for Enterprise Linux (EPEL)


- installable packages, Finding RPM Packages

F
feedback
- contact information for this manual, Feedback

Fetchmail, Fetchmail
- additional resources, Additional Resources
- command options, Fetchmail Command Options
- informational, Informational or Debugging Options
- special, Special Options

- configuration options, Fetchmail Configuration Options


- global options, Global Options
- server options, Server Options



- user options, User Options

file systems, Viewing Block Devices and File Systems


findmnt, Using the findmnt Command
findsmb, Command Line
findsmb program, Samba Distribution Programs
free, Using the free Command
FTP, FTP
- (see also vsftpd)
- active mode, The File Transfer Protocol
- command port, The File Transfer Protocol
- data port, The File Transfer Protocol
- definition of, FTP
- introducing, The File Transfer Protocol
- passive mode, The File Transfer Protocol

G
GNOME, Desktop Environments
- (see also X)

gnome-system-log (see Log File Viewer)


gnome-system-monitor, Using the System Monitor Tool, Using the System Monitor Tool,
Using the System Monitor Tool, Using the System Monitor Tool
GnuPG
- checking RPM package signatures, Checking a Package's Signature

group configuration
- adding groups, Adding a New Group
- filtering list of groups, Viewing Users and Groups
- groupadd, Adding a New Group
- modify users in groups, Modifying Group Properties
- modifying group properties, Modifying Group Properties
- viewing list of groups, Using the User Manager Tool

groups (see group configuration)


- GID, Managing Users and Groups
- introducing, Managing Users and Groups
- shared directories, Creating Group Directories
- tools for management of
- groupadd, User Private Groups, Using Command Line Tools
- system-config-users, User Private Groups
- User Manager, Using Command Line Tools


- user private, User Private Groups

GRUB 2
- configuring GRUB 2, Working with the GRUB 2 Boot Loader
- customizing GRUB 2, Working with the GRUB 2 Boot Loader
- re-installing GRUB 2, Working with the GRUB 2 Boot Loader

H
hardware
- viewing, Viewing Hardware Information

HTTP server (see Apache HTTP Server)


httpd (see Apache HTTP Server )

I
information
- about your system, System Monitoring Tools

initial RAM disk image


- verifying, Verifying the Initial RAM Disk Image
- IBM eServer System i, Verifying the Initial RAM Disk Image

initial RPM repositories


- installable packages, Finding RPM Packages

insmod, Loading a Module


- (see also kernel module)

installing package groups


- installing package groups with PackageKit, Installing and Removing Package Groups

installing the kernel, Manually Upgrading the Kernel

K
KDE, Desktop Environments
- (see also X)

kernel
- downloading, Downloading the Upgraded Kernel
- installing kernel packages, Manually Upgrading the Kernel
- kernel packages, Overview of Kernel Packages


- package, Manually Upgrading the Kernel


- performing kernel upgrade, Performing the Upgrade
- RPM package, Manually Upgrading the Kernel
- upgrade kernel available, Downloading the Upgraded Kernel
- Security Errata, Downloading the Upgraded Kernel
- via Red Hat network, Downloading the Upgraded Kernel

- upgrading
- preparing, Preparing to Upgrade
- working boot media, Preparing to Upgrade

- upgrading the kernel, Manually Upgrading the Kernel

kernel module
- bonding module, Using Channel Bonding
- description, Using Channel Bonding
- parameters to bonded interfaces, Bonding Module Directives

- definition, Working with Kernel Modules


- directories
- /etc/sysconfig/modules/, Persistent Module Loading
- /lib/modules/<kernel_version>/kernel/drivers/, Loading a Module

- Ethernet module
- supporting multiple cards, Using Multiple Ethernet Cards

- files
- /proc/modules, Listing Currently-Loaded Modules

- listing
- currently loaded modules, Listing Currently-Loaded Modules
- module information, Displaying Information About a Module

- loading
- at the boot time, Persistent Module Loading
- for the current session, Loading a Module

- module parameters
- bonding module parameters, Bonding Module Directives
- supplying, Setting Module Parameters

- unloading, Unloading a Module


- utilities
- insmod, Loading a Module
- lsmod, Listing Currently-Loaded Modules
- modinfo, Displaying Information About a Module
- modprobe, Loading a Module, Unloading a Module


- rmmod, Unloading a Module

kernel package
- kernel
- for single,multicore and multiprocessor systems, Overview of Kernel Packages

- kernel-devel
- kernel headers and makefiles, Overview of Kernel Packages

- kernel-doc
- documentation files, Overview of Kernel Packages

- kernel-headers
- C header files files, Overview of Kernel Packages

- linux-firmware
- firmware files, Overview of Kernel Packages

- perf
- firmware files, Overview of Kernel Packages

kernel upgrading
- preparing, Preparing to Upgrade

keyboard configuration, System Locale and Keyboard Configuration


- layout, Changing the Keyboard Layout

kwin, Window Managers


- (see also X)

L
LDAP (see OpenLDAP)
localectl (see keyboard configuration)
Log File Viewer
- filtering, Viewing Log Files
- monitoring, Monitoring Log Files
- refresh rate, Viewing Log Files
- searching, Viewing Log Files

log files, Viewing and Managing Log Files


- (see also Log File Viewer)


- description, Viewing and Managing Log Files
- locating, Locating Log Files
- monitoring, Monitoring Log Files
- rotating, Locating Log Files
- rsyslogd daemon, Viewing and Managing Log Files
- viewing, Viewing Log Files

logrotate, Locating Log Files


lsblk, Using the lsblk Command
lscpu, Using the lscpu Command
lsmod, Listing Currently-Loaded Modules
- (see also kernel module)

lspci, Using the lspci Command


lspcmcia, Using the lspcmcia Command
lsusb, Using the lsusb Command

M
Mail Delivery Agent (see email)
Mail Transport Agent (see email) (see MTA)
Mail Transport Agent Switcher, Mail Transport Agent (MTA) Configuration
Mail User Agent, Mail Transport Agent (MTA) Configuration (see email)
MDA (see Mail Delivery Agent)
memory usage, Viewing Memory Usage
metacity, Window Managers
- (see also X)

modinfo, Displaying Information About a Module


- (see also kernel module)

modprobe, Loading a Module, Unloading a Module


- (see also kernel module)

module (see kernel module)


module parameters (see kernel module)
MTA (see Mail Transport Agent)
- setting default, Mail Transport Agent (MTA) Configuration
- switching with Mail Transport Agent Switcher, Mail Transport Agent (MTA) Configuration

MUA, Mail Transport Agent (MTA) Configuration (see Mail User Agent)
mwm, Window Managers


- (see also X)

N
net program, Samba Distribution Programs
NIC
- binding into single channel, Using Channel Bonding

nmblookup program, Samba Distribution Programs

O
opannotate (see OProfile)
opcontrol (see OProfile)
OpenLDAP
- checking status, Verifying the Service Status
- client applications, Overview of Common LDAP Client Applications
- configuration
- database, Changing the Database-Specific Configuration
- global, Changing the Global Configuration
- overview, OpenLDAP Server Setup

- directives
- olcAllows, Changing the Global Configuration
- olcConnMaxPending, Changing the Global Configuration
- olcConnMaxPendingAuth, Changing the Global Configuration
- olcDisallows, Changing the Global Configuration
- olcIdleTimeout, Changing the Global Configuration
- olcLogFile, Changing the Global Configuration
- olcReadOnly, Changing the Database-Specific Configuration
- olcReferral, Changing the Global Configuration
- olcRootDN, Changing the Database-Specific Configuration
- olcRootPW, Changing the Database-Specific Configuration
- olcSuffix, Changing the Database-Specific Configuration
- olcWriteTimeout, Changing the Global Configuration

- directories
- /etc/openldap/slapd.d/, Configuring an OpenLDAP Server
- /etc/openldap/slapd.d/cn=config/cn=schema/, Extending Schema

- features, OpenLDAP Features


- files
- /etc/openldap/ldap.conf, Configuring an OpenLDAP Server
- /etc/openldap/slapd.d/cn=config.ldif, Changing the Global Configuration
- /etc/openldap/slapd.d/cn=config/olcDatabase={1}bdb.ldif, Changing the
Database-Specific Configuration

- installation, Installing the OpenLDAP Suite


- migrating authentication information, Migrating Old Authentication Information to LDAP Format
- packages, Installing the OpenLDAP Suite
- restarting, Restarting the Service
- running, Starting the Service
- schema, Extending Schema
- stopping, Stopping the Service
- terminology
- attribute, LDAP Terminology
- entry, LDAP Terminology
- LDIF, LDAP Terminology

- utilities, Overview of OpenLDAP Server Utilities, Overview of OpenLDAP Client Utilities

OpenSSH, OpenSSH, Main Features


- (see also SSH)
- client, OpenSSH Clients
- scp, Using the scp Utility
- sftp, Using the sftp Utility
- ssh, Using the ssh Utility

- DSA keys
- generating, Generating Key Pairs

- RSA keys
- generating, Generating Key Pairs

- RSA Version 1 keys


- generating, Generating Key Pairs

- server, Starting an OpenSSH Server


- starting, Starting an OpenSSH Server
- stopping, Starting an OpenSSH Server

- ssh-add, Configuring ssh-agent


- ssh-agent, Configuring ssh-agent
- ssh-keygen
- DSA, Generating Key Pairs
- RSA, Generating Key Pairs
- RSA Version 1, Generating Key Pairs

- using key-based authentication, Using Key-based Authentication

OpenSSL
- SSL (see SSL )
- TLS (see TLS )


ophelp, Setting Events to Monitor


opreport (see OProfile)
OProfile, OProfile
- /dev/oprofile/, Understanding /dev/oprofile/
- additional resources, Additional Resources
- configuring, Configuring OProfile Using Legacy Mode
- separating profiles, Separating Kernel and User-space Profiles

- events
- sampling rate, Sampling Rate
- setting, Setting Events to Monitor

- Java, OProfile Support for Java


- monitoring the kernel, Specifying the Kernel
- opannotate, Using opannotate
- opcontrol, Configuring OProfile Using Legacy Mode
- --no-vmlinux, Specifying the Kernel
- --start, Starting and Stopping OProfile Using Legacy Mode
- --vmlinux=, Specifying the Kernel

- ophelp, Setting Events to Monitor


- opreport, Using opreport, Getting more detailed output on the modules
- on a single executable, Using opreport on a Single Executable

- oprofiled, Starting and Stopping OProfile Using Legacy Mode


- log file, Starting and Stopping OProfile Using Legacy Mode

- overview of tools, Overview of Tools


- reading data, Analyzing the Data
- saving data, Saving Data in Legacy Mode
- starting, Starting and Stopping OProfile Using Legacy Mode
- SystemTap, OProfile and SystemTap
- unit mask, Unit Masks

oprofiled (see OProfile)


oprof_start, Graphical Interface

P
package
- kernel RPM, Manually Upgrading the Kernel

package groups
- listing package groups with Yum
- yum groups, Listing Package Groups


PackageKit, PackageKit
- adding and removing, Using Add/Remove Software
- architecture, PackageKit Architecture
- installing and removing package groups, Installing and Removing Package Groups
- installing packages, PackageKit
- managing packages, PackageKit
- PolicyKit
- authentication, Updating Packages with Software Update

- uninstalling packages, PackageKit


- updating packages, PackageKit
- viewing packages, PackageKit
- viewing transaction log, Viewing the Transaction Log

packages, Working with Packages


- adding and removing with PackageKit, Using Add/Remove Software
- dependencies, Unresolved Dependency
- determining file ownership with, Practical and Common Examples of RPM Usage
- displaying packages
- yum info, Displaying Package Information

- displaying packages with Yum


- yum info, Displaying Package Information

- downloading packages with Yum, Downloading Packages


- extra packages for Enterprise Linux (EPEL), Finding RPM Packages
- filtering with PackageKit, Finding Packages with Filters
- Development, Finding Packages with Filters
- Free, Finding Packages with Filters
- Hide subpackages, Finding Packages with Filters
- Installed, Finding Packages with Filters
- No filter, Finding Packages with Filters
- Only available, Finding Packages with Filters
- Only development, Finding Packages with Filters
- Only end user files, Finding Packages with Filters
- Only graphical, Finding Packages with Filters
- Only installed, Finding Packages with Filters
- Only native packages, Finding Packages with Filters
- Only newest packages, Finding Packages with Filters

- filtering with PackageKit for packages, Finding Packages with Filters


- finding deleted files from, Practical and Common Examples of RPM Usage
- finding RPM packages, Finding RPM Packages
- initial RPM repositories, Finding RPM Packages
- installing a package group with Yum, Installing a Package Group
- installing and removing package groups, Installing and Removing Package Groups
- installing packages with PackageKit, PackageKit, Installing and Removing Packages (and
Dependencies)
- dependencies, Installing and Removing Packages (and Dependencies)


- installing RPM, Installing and Upgrading


- installing with Yum, Installing Packages
- Red Hat Enterprise Linux installation media, Finding RPM Packages
- kernel
- for single,multicore and multiprocessor systems, Overview of Kernel Packages

- kernel-devel
- kernel headers and makefiles, Overview of Kernel Packages

- kernel-doc
- documentation files, Overview of Kernel Packages

- kernel-headers
- C header files files, Overview of Kernel Packages

- linux-firmware
- firmware files, Overview of Kernel Packages

- listing packages with Yum


- Glob expressions, Searching Packages
- yum list available, Listing Packages
- yum list installed, Listing Packages
- yum repolist, Listing Packages
- yum search, Listing Packages

- locating documentation for, Practical and Common Examples of RPM Usage


- managing packages with PackageKit, PackageKit
- obtaining list of files, Practical and Common Examples of RPM Usage
- perf
- firmware files, Overview of Kernel Packages

- querying uninstalled, Practical and Common Examples of RPM Usage


- removing, Uninstalling
- removing packages with PackageKit, Installing and Removing Packages (and
Dependencies)
- RPM, RPM
- already installed, Package Already Installed
- configuration file changes, Configuration File Changes
- conflict, Conflicting Files
- failed dependencies, Unresolved Dependency
- freshening, Freshening
- pristine sources, RPM Design Goals
- querying, Querying
- removing, Uninstalling
- source and binary packages, RPM
- tips, Practical and Common Examples of RPM Usage
- uninstalling, Uninstalling
- verifying, Verifying


- searching packages with Yum


- yum search, Searching Packages

- setting packages with PackageKit


- checking interval, Updating Packages with Software Update

- uninstalling packages with PackageKit, PackageKit


- uninstalling packages with Yum, Removing Packages
- updating currently installed packages
- available updates, Updating Packages with Software Update

- updating packages with PackageKit, PackageKit


- PolicyKit, Updating Packages with Software Update
- Software Update, Updating Packages with Software Update

- upgrading RPM, Installing and Upgrading


- viewing packages with PackageKit, PackageKit
- viewing transaction log, Viewing the Transaction Log
- viewing Yum repositories with PackageKit, Refreshing Software Sources (Yum
Repositories)
- Yum instead of RPM, RPM

passwords
- shadow, Shadow Passwords

pdbedit program, Samba Distribution Programs


PolicyKit, Updating Packages with Software Update
Postfix, Postfix
- default installation, The Default Postfix Installation

postfix, Mail Transport Agent (MTA) Configuration


prefdm (see X)
Printer Configuration
- CUPS, Printer Configuration
- IPP Printers, Adding an IPP Printer
- LDP/LPR Printers, Adding an LPD/LPR Host or Printer
- Local Printers, Adding a Local Printer
- New Printer, Starting Printer Setup
- Print Jobs, Managing Print Jobs
- Samba Printers, Adding a Samba (SMB) printer
- Settings, The Settings Page
- Sharing Printers, Sharing Printers

printers (see Printer Configuration)


processes, Viewing System Processes


Procmail, Mail Delivery Agents


- additional resources, Additional Resources
- configuration, Procmail Configuration
- recipes, Procmail Recipes
- delivering, Delivering vs. Non-Delivering Recipes
- examples, Recipe Examples
- flags, Flags
- local lockfiles, Specifying a Local Lockfile
- non-delivering, Delivering vs. Non-Delivering Recipes
- SpamAssassin, Spam Filters
- special actions, Special Conditions and Actions
- special conditions, Special Conditions and Actions

ps, Using the ps Command

R
RAM, Viewing Memory Usage
rcp, Using the scp Utility
Red Hat Enterprise Linux installation media
- installable packages, Finding RPM Packages

removing package groups


- removing package groups with PackageKit, Installing and Removing Package Groups

rmmod, Unloading a Module


- (see also kernel module)

rpcclient program, Samba Distribution Programs


RPM, RPM
- additional resources, Additional Resources
- already installed, Package Already Installed
- basic modes, Using RPM
- book about, Related Books
- checking package signatures, Checking a Package's Signature
- configuration file changes, Configuration File Changes
- conf.rpmsave, Configuration File Changes

- conflicts, Conflicting Files


- dependencies, Unresolved Dependency
- design goals, RPM Design Goals
- powerful querying, RPM Design Goals
- system verification, RPM Design Goals
- upgradability, RPM Design Goals

- determining file ownership with, Practical and Common Examples of RPM Usage


- documentation with, Practical and Common Examples of RPM Usage


- failed dependencies, Unresolved Dependency
- file conflicts
- resolving, Conflicting Files

- file name, Installing and Upgrading


- finding deleted files with, Practical and Common Examples of RPM Usage
- finding RPM packages, Finding RPM Packages
- freshening, Freshening
- GnuPG, Checking a Package's Signature
- installing, Installing and Upgrading
- md5sum, Checking a Package's Signature
- querying, Querying
- querying for file list, Practical and Common Examples of RPM Usage
- querying uninstalled packages, Practical and Common Examples of RPM Usage
- tips, Practical and Common Examples of RPM Usage
- uninstalling, Uninstalling
- upgrading, Installing and Upgrading
- verifying, Verifying
- website, Useful Websites

RPM Package Manager (see RPM)


RSA keys
- generating, Generating Key Pairs

RSA Version 1 keys


- generating, Generating Key Pairs

rsyslog, Viewing and Managing Log Files

S
Samba (see Samba)
- Abilities, Samba Features
- Additional Resources, Additional Resources
- installed documentation, Installed Documentation
- related books, Related Books
- useful websites, Useful Websites

- Browsing, Samba Network Browsing


- configuration, Configuring a Samba Server, Command Line Configuration
- default, Configuring a Samba Server

- daemon, Samba Daemons and Related Services


- nmbd, Samba Daemons
- overview, Samba Daemons
- smbd, Samba Daemons
- winbindd, Samba Daemons


- encrypted passwords, Encrypted Passwords


- findsmb, Command Line
- graphical configuration, Graphical Configuration
- Introduction, Introduction to Samba
- Network Browsing, Samba Network Browsing
- Domain Browsing, Domain Browsing
- WINS, WINS (Windows Internet Name Server)

- Programs, Samba Distribution Programs


- findsmb, Samba Distribution Programs
- net, Samba Distribution Programs
- nmblookup, Samba Distribution Programs
- pdbedit, Samba Distribution Programs
- rpcclient, Samba Distribution Programs
- smbcacls, Samba Distribution Programs
- smbclient, Samba Distribution Programs
- smbcontrol, Samba Distribution Programs
- smbpasswd, Samba Distribution Programs
- smbspool, Samba Distribution Programs
- smbstatus, Samba Distribution Programs
- smbtar, Samba Distribution Programs
- testparm, Samba Distribution Programs
- wbinfo, Samba Distribution Programs

- Reference, Samba
- Samba Printers, Adding a Samba (SMB) printer
- service
- conditional restarting, Starting and Stopping Samba
- reloading, Starting and Stopping Samba
- restarting, Starting and Stopping Samba
- starting, Starting and Stopping Samba
- stopping, Starting and Stopping Samba

- share
- connecting to via the command line, Command Line
- connecting to with Nautilus, Connecting to a Samba Share
- mounting, Mounting the Share

- smbclient, Command Line


- WINS, WINS (Windows Internet Name Server)
- with Windows NT 4.0, 2000, ME, and XP, Encrypted Passwords

scp (see OpenSSH)


security plug-in (see Security)
Security-Related Packages
- updating security-related packages, Updating Packages

Sendmail, Sendmail


- additional resources, Additional Resources


- aliases, Masquerading
- common configuration changes, Common Sendmail Configuration Changes
- default installation, The Default Sendmail Installation
- LDAP and, Using Sendmail with LDAP
- limitations, Purpose and Limitations
- masquerading, Masquerading
- purpose, Purpose and Limitations
- spam, Stopping Spam
- with UUCP, Common Sendmail Configuration Changes

sendmail, Mail Transport Agent (MTA) Configuration


sftp (see OpenSSH)
shadow passwords
- overview of, Shadow Passwords

slapd (see OpenLDAP)


smbcacls program, Samba Distribution Programs
smbclient, Command Line
smbclient program, Samba Distribution Programs
smbcontrol program, Samba Distribution Programs
smbpasswd program, Samba Distribution Programs
smbspool program, Samba Distribution Programs
smbstatus program, Samba Distribution Programs
smbtar program, Samba Distribution Programs
SpamAssassin
- using with Procmail, Spam Filters

ssh (see OpenSSH)


SSH protocol
- authentication, Authentication
- configuration files, Configuration Files
- system-wide configuration files, Configuration Files
- user-specific configuration files, Configuration Files

- connection sequence, Event Sequence of an SSH Connection


- features, Main Features
- insecure protocols, Requiring SSH for Remote Connections
- layers
- channels, Channels
- transport layer, Transport Layer

- port forwarding, Port Forwarding


- requiring for remote login, Requiring SSH for Remote Connections
- security risks, Why Use SSH?


- version 1, Protocol Versions


- version 2, Protocol Versions
- X11 forwarding, X11 Forwarding

ssh-add, Configuring ssh-agent


ssh-agent, Configuring ssh-agent
SSL , Setting Up an SSL Server
- (see also Apache HTTP Server )

SSL server (see Apache HTTP Server )


startx, Runlevel 3 (see X)
- (see also X)

stunnel, Securing Email Client Communications


system analysis
- OProfile (see OProfile)

system information
- cpu usage, Viewing CPU Usage
- file systems, Viewing Block Devices and File Systems
- gathering, System Monitoring Tools
- hardware, Viewing Hardware Information
- memory usage, Viewing Memory Usage
- processes, Viewing System Processes
- currently running, Using the top Command

System Monitor, Using the System Monitor Tool, Using the System Monitor Tool, Using the
System Monitor Tool, Using the System Monitor Tool
system-config-users (see user configuration and group configuration)

T
testparm program, Samba Distribution Programs
TLS , Setting Up an SSL Server
- (see also Apache HTTP Server )

top, Using the top Command


twm, Window Managers
- (see also X)

U
updating currently installed packages


- available updates, Updating Packages with Software Update

updating packages with PackageKit


- PolicyKit, Updating Packages with Software Update

user configuration
- adding users, Adding a New User
- changing full name, Modifying User Properties
- changing home directory, Modifying User Properties
- changing login shell, Modifying User Properties
- changing password, Modifying User Properties
- command line configuration
- passwd, Adding a New User
- useradd, Adding a New User

- filtering list of users, Viewing Users and Groups


- modify groups for a user, Modifying User Properties
- modifying users, Modifying User Properties
- viewing list of users, Using the User Manager Tool

User Manager (see user configuration)


user private groups (see groups)
- and shared directories, Creating Group Directories

useradd command
- user account creation using, Adding a New User

users (see user configuration)


- introducing, Managing Users and Groups
- tools for management of
- User Manager, Using Command Line Tools
- useradd, Using Command Line Tools

- UID, Managing Users and Groups

V
virtual host (see Apache HTTP Server )
vsftpd
- additional resources, Additional Resources
- installed documentation, Installed Documentation
- online documentation, Online Documentation

- encrypting, Encrypting vsftpd Connections Using SSL


- multihome configuration, Starting Multiple Copies of vsftpd


- restarting, Starting and Stopping vsftpd


- securing, Encrypting vsftpd Connections Using SSL, SELinux Policy for vsftpd
- SELinux, SELinux Policy for vsftpd
- SSL, Encrypting vsftpd Connections Using SSL
- starting, Starting and Stopping vsftpd
- starting multiple copies of, Starting Multiple Copies of vsftpd
- status, Starting and Stopping vsftpd
- stopping, Starting and Stopping vsftpd

W
wbinfo program, Samba Distribution Programs
web server (see Apache HTTP Server)
window managers (see X)
Windows 2000
- connecting to shares using Samba, Encrypted Passwords

Windows 98
- connecting to shares using Samba, Encrypted Passwords

Windows ME
- connecting to shares using Samba, Encrypted Passwords

Windows NT 4.0
- connecting to shares using Samba, Encrypted Passwords

Windows XP
- connecting to shares using Samba, Encrypted Passwords

X
X
- /etc/X11/xorg.conf
- Boolean values for, The Structure of the Configuration
- Device, The Device section
- DRI, The DRI section
- Files section, The Files section
- InputDevice section, The InputDevice section
- introducing, The xorg.conf.d Directory, The xorg.conf File
- Monitor, The Monitor section
- Screen, The Screen section
- Section tag, The Structure of the Configuration
- ServerFlags section, The ServerFlags section
- ServerLayout section, The ServerLayout Section
- structure of, The Structure of the Configuration


- additional resources, Additional Resources


- installed documentation, Installed Documentation
- useful websites, Useful Websites

- configuration directory
- /etc/X11/xorg.conf.d, The xorg.conf.d Directory

- configuration files
- /etc/X11/ directory, X Server Configuration Files
- /etc/X11/xorg.conf, The xorg.conf File
- options within, X Server Configuration Files
- server options, The xorg.conf.d Directory, The xorg.conf File

- desktop environments
- GNOME, Desktop Environments
- KDE, Desktop Environments

- display managers
- configuration of preferred, Runlevel 5
- definition of, Runlevel 5
- GNOME, Runlevel 5
- KDE, Runlevel 5
- prefdm script, Runlevel 5
- xdm, Runlevel 5

- fonts
- Fontconfig, Fonts
- Fontconfig, adding fonts to, Adding Fonts to Fontconfig
- FreeType, Fonts
- introducing, Fonts
- Xft, Fonts

- introducing, The X Window System


- runlevels
- 3, Runlevel 3
- 5, Runlevel 5

- runlevels and, Runlevels and X


- window managers
- kwin, Window Managers
- metacity, Window Managers
- mwm, Window Managers
- twm, Window Managers

- X clients, The X Window System, Desktop Environments and Window Managers


- desktop environments, Desktop Environments
- startx command, Runlevel 3
- window managers, Window Managers
- xinit command, Runlevel 3


- X server, The X Window System


- features of, The X Server

X Window System (see X)


X.500 (see OpenLDAP)
X.500 Lite (see OpenLDAP)
xinit (see X)
Xorg (see Xorg)

Y
Yum
- configuring plug-ins, Enabling, Configuring, and Disabling Yum Plug-ins
- configuring Yum and Yum repositories, Configuring Yum and Yum Repositories
- disabling plug-ins, Enabling, Configuring, and Disabling Yum Plug-ins
- displaying packages
- yum info, Displaying Package Information

- displaying packages with Yum


- yum info, Displaying Package Information

- downloading packages with Yum, Downloading Packages


- enabling plug-ins, Enabling, Configuring, and Disabling Yum Plug-ins
- installing a package group with Yum, Installing a Package Group
- installing with Yum, Installing Packages
- listing package groups with Yum
- yum groups list, Listing Package Groups

- listing packages with Yum


- Glob expressions, Searching Packages
- yum list, Listing Packages
- yum list available, Listing Packages
- yum list installed, Listing Packages
- yum repolist, Listing Packages

- packages, Working with Packages


- plug-ins
- fs-snapshot, Working with Plug-ins
- kabi, Working with Plug-ins
- presto, Working with Plug-ins
- product-id, Working with Plug-ins
- protect-packages, Working with Plug-ins
- refresh-packagekit, Working with Plug-ins
- security, Working with Plug-ins
- subscription-manager, Working with Plug-ins
- yum-cron, Working with Plug-ins


- yum-fastestmirror, Working with Plug-ins

- repository, Adding, Enabling, and Disabling a Yum Repository, Creating a Yum Repository
- searching packages with Yum
- yum search, Searching Packages

- setting [main] options, Setting [main] Options


- setting [repository] options, Setting [repository] Options
- uninstalling packages with Yum, Removing Packages
- variables, Using Yum Variables
- Yum plug-ins, Yum Plug-ins
- Yum repositories
- configuring Yum and Yum repositories, Configuring Yum and Yum Repositories

Yum repositories
- viewing Yum repositories with PackageKit, Refreshing Software Sources (Yum
Repositories)

Yum Updates
- checking for updates, Checking For Updates
- updating a single package, Updating Packages
- updating all packages and dependencies, Updating Packages
- updating packages, Updating Packages
- updating security-related packages, Updating Packages

RED HAT ENTERPRISE LINUX 5, 6, AND 7
Common administrative commands

Commands and files listed without a release note apply to RHEL 5, 6, and 7 alike; where the releases differ, the applicable releases are noted in parentheses.

SYSTEM BASICS
View subscription information: /etc/sysconfig/rhn/systemid (RHEL 5); /etc/sysconfig/rhn/systemid, subscription-manager identity (RHEL 6); subscription-manager identity (RHEL 7)
Configure subscription: rhn_register (RHEL 5); rhn_register, rhnreg_ks, subscription-manager [1] (RHEL 6); subscription-manager [1], rhn_register [2] (RHEL 7)
Report on system: sosreport
View system profile: dmidecode, hwbrowser (RHEL 5/6); dmidecode, lshw (RHEL 7)
View RHEL version information: /etc/redhat-release

[1] subscription-manager is used for Satellite 6, Satellite 5.6 with SAM and newer, and Red Hat's CDN.
[2] RHN tools are deprecated on Red Hat Enterprise Linux 7. rhn_register should be used for Satellite server 5.6 and newer only. For details, see: Satellite 5.6 unable to register RHEL 7 client system due to rhn-setup package not included in Minimal installation.

BASIC CONFIGURATION
Graphical configuration tools: system-config-* (RHEL 5/6); gnome-control-center (RHEL 7)
Text-based configuration tools: system-config-*-tui
Configure network: system-config-network (RHEL 5/6); nmcli, nmtui, nm-connection-editor (RHEL 7)
Configure system language: system-config-language (RHEL 5/6); localectl (RHEL 7)
Configure time and date: system-config-date, date (RHEL 5/6); timedatectl, date (RHEL 7)
Synchronize time and date: ntpdate, /etc/ntp.conf (RHEL 5/6); timedatectl, /etc/chrony.conf (RHEL 7)
Configure keyboard: system-config-keyboard (RHEL 5/6); localectl (RHEL 7)
Configure printer: system-config-printer
Configure samba: /etc/samba/smb.conf, smbclient, smbpasswd
Configure SSH: /etc/ssh/ssh_config, /etc/ssh/sshd_config, ~/.ssh/config, ssh-keygen
Configure logging: /etc/syslog.conf (RHEL 5); /etc/rsyslog.conf (RHEL 6); /etc/rsyslog.conf, /etc/rsyslog.d/*.conf, /var/log/journal, systemd-journald.service (RHEL 7)

USER MANAGEMENT
Graphical user management: system-config-users
Create user account: useradd
Delete user account: userdel
Change user account details: usermod
View user account details: /etc/passwd
Change user password: passwd username
Create user group: groupadd
Delete user group: groupdel
Change group details: groupmod
Network users: getent
View/end user sessions: w

SECURITY AND IDENTITY
Configure system security: /etc/selinux/config, chcon, restorecon, semanage, setsebool, system-config-selinux
Report on system security: sealert
Change user permissions: usermod, /etc/sudoers
Change group permissions: groupmod, /etc/sudoers
Change password policy: chage
Encrypted password location: /etc/shadow
LDAP, SSSD, Kerberos: authconfig, authconfig-tui, authconfig-gtk

JOBS AND SERVICES
List all services: chkconfig --list, ls /etc/init.d/ (RHEL 5/6); systemctl -at service, ls /etc/systemd/system/*.service, ls /usr/lib/systemd/system/*.service (RHEL 7)
List running services: service --status-all (RHEL 5/6); systemctl -t service --state=active (RHEL 7)
Start/stop service: service name start, service name stop (RHEL 5/6); systemctl start name.service, systemctl stop name.service (RHEL 7)
Enable/disable service at boot: chkconfig name on, chkconfig name off (RHEL 5/6); systemctl enable name.service, systemctl disable name.service (RHEL 7)
View service status: service name status (RHEL 5/6); systemctl status name.service (RHEL 7)
Check if service is enabled: chkconfig name (RHEL 5/6); systemctl is-enabled name (RHEL 7)
Create new service file or modify configuration: chkconfig --add (RHEL 5/6); systemctl daemon-reload (RHEL 7)
View run level/target: runlevel, who -r (RHEL 5/6); systemctl get-default, who -r (RHEL 7)
Change run level/target: /etc/inittab, init run_level (RHEL 5/6); systemctl isolate name.target, systemctl set-default (RHEL 7)
Schedule tasks: cron, at
Configure batch tasks: batch
View logs: /var/log (RHEL 5/6); /var/log, journalctl (RHEL 7)
Configure system audit: add audit=1 to kernel cmdline, auditctl, /etc/audit/auditd.conf, /etc/audit/audit.rules, authconfig, /etc/pam.d/system-auth, pam_tty_audit kernel module
View audit output: aureport, /var/log/faillog
Find file by name: locate
Find file by characteristic: find
Create archive: tar, cpio, zip, gzip, bzip2

FILE SYSTEMS, VOLUMES, AND DISKS
Default file system: ext3 (RHEL 5); ext4 (RHEL 6); xfs (RHEL 7)
Create/modify disk partitions: fdisk, parted (RHEL 5/6); fdisk, gdisk, parted, ssm create (RHEL 7)
Format disk partition: mkfs.filesystem_type (ext4, xfs), mkswap (RHEL 5/6); mkfs.filesystem_type (ext4, xfs), mkswap, ssm create (RHEL 7)
Defragment disk space: copy data to new file system, fsck (look for 'non-contiguous inodes') (RHEL 5/6); xfs_fsr (RHEL 7)
Mount storage: mount, /etc/fstab (RHEL 5/6); mount, /etc/fstab, ssm mount (RHEL 7)
Mount and activate swap: /etc/fstab, swapon -a
Automatically mount: /etc/fstab
Automatically mount after boot: /etc/fstab, /etc/auto.* (RHEL 5/6); /etc/fstab, /etc/auto.master.d/*.autofs (RHEL 7)
Create physical volume: pvcreate (RHEL 5/6); pvcreate, ssm create (if backend is lvm) (RHEL 7)
Create volume group: vgcreate (RHEL 5/6); vgcreate, ssm create (if backend is lvm) (RHEL 7)
Create logical volume: lvcreate (RHEL 5/6); lvcreate, ssm create (if backend is lvm) (RHEL 7)
Enlarge volumes formatted with the default file system: vgextend, lvextend, resize2fs (RHEL 5/6); vgextend, lvextend, xfs_growfs, ssm resize (RHEL 7)
Shrink volumes formatted with the default file system: resize2fs, lvreduce, vgreduce (RHEL 5/6); XFS cannot currently be shrunk; copy the desired data to a smaller file system (RHEL 7)
Check/repair file system: fsck (RHEL 5/6); fsck, ssm check (RHEL 7)
View free disk space: df
View logical volume info: lvdisplay, lvs, vgdisplay, vgs, pvdisplay, pvs
Configure NFS share: /etc/exports, service nfs reload (RHEL 5/6); /etc/exports, systemctl reload nfs.service (RHEL 7)
View NFS share: showmount -e, mount
Change file permissions: chmod, chown, chgrp, umask
Change access control list: setfacl

NETWORKING
Configure firewall: iptables and ip6tables, /etc/sysconfig/ip*tables, system-config-firewall (RHEL 5/6); firewall-cmd, firewall-config (RHEL 7)
Configure DHCP client: /etc/dhcpd.conf, /etc/dhcp6c.conf (RHEL 5); /etc/dhcp/dhcpd.conf (RHEL 6); dhcpd, /etc/dhcp/dhcpd.conf, /etc/sysconfig/dhcpd (RHEL 7)
Configure name resolution: /etc/hosts, /etc/resolv.conf (RHEL 5/6); /etc/hosts, /etc/resolv.conf, nmcli con mod (RHEL 7)
Configure hostname: /etc/sysconfig/network (RHEL 5/6); hostnamectl, /etc/hostname, nmtui (RHEL 7)
View network interface info: ifconfig, ip addr, brctl (RHEL 5/6); ip addr, nmcli dev show, teamdctl, brctl, bridge (RHEL 7)
Configure network interface: /etc/sysconfig/network-scripts/ifcfg-* (RHEL 5/6); nmcli con [add|mod|edit], nmtui, nm-connection-editor, /etc/sysconfig/network-scripts/ifcfg-* (RHEL 7)
View ports/sockets: lsof, netstat (RHEL 5/6); ss, lsof (RHEL 7)
View routes: ip route
Configure routes: /etc/sysconfig/network, system-config-network

KERNEL, BOOT, AND HARDWARE
Single user/rescue mode: append 1 or s or init=/bin/bash to kernel cmdline (RHEL 5/6); append rd.break or init=/bin/bash to kernel cmdline (RHEL 7)
Shut down system: shutdown (RHEL 5/6); systemctl shutdown (RHEL 7)
Power off system: poweroff (RHEL 5/6); systemctl poweroff (RHEL 7)
Halt system: halt (RHEL 5/6); systemctl halt (RHEL 7)
Reboot system: reboot (RHEL 5/6); systemctl reboot (RHEL 7)
Configure default run level/target: /etc/inittab (RHEL 5/6); systemctl set-default (RHEL 7)
Configure GRUB bootloader: /boot/grub/grub.conf (RHEL 5/6); /etc/default/grub, grub2-mkconfig, grub-set-default (RHEL 7)
View hardware configured: hwbrowser (RHEL 5/6); lshw (RHEL 7)
Configure kernel module: modprobe
Configure hardware device: udev
View kernel parameters: sysctl -a, cat /proc/cmdline
Load kernel module: modprobe
Remove kernel module: modprobe -r
View kernel version: rpm -q kernel, uname -r

RESOURCE MANAGEMENT
View system usage: top, ps, sar, iostat, vmstat, mpstat, numastat, tuna; netstat (RHEL 5/6) or ss (RHEL 7)
View disk usage: df, iostat
Trace system calls: strace
Trace library calls: ltrace
Change process priority: nice, renice
Change process run location: taskset
Kill a process: kill, pkill, killall

SOFTWARE MANAGEMENT
Install software: yum install, yum groupinstall (RHEL 5/6); yum install, yum group install (RHEL 7)
View software info: yum info, yum groupinfo (RHEL 5/6); yum info, yum group info (RHEL 7)
Update software: yum update
Upgrade software: yum upgrade
Configure software repository: /etc/yum.repos.d/*.repo
Find file in package: rpm -qf filename
View software version: rpm -q packagename
View installed software: rpm -qa

Copyright © 2014 Red Hat, Inc. Red Hat, Red Hat Enterprise Linux, the Shadowman logo, and JBoss are trademarks of Red Hat, Inc., registered in the U.S. and other countries. Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries. (10/14)
Red Hat Enterprise Linux 8

Considerations in adopting RHEL 8

Key differences between Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux 8

Last Updated: 2020-04-03


Legal Notice
Copyright © 2020 Red Hat, Inc.

The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.

Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.

Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.

Java ® is a registered trademark of Oracle and/or its affiliates.

XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.

MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.

Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.

The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.

Abstract
This document provides an overview of changes in Red Hat Enterprise Linux 8 since Red Hat
Enterprise Linux 7 to help you evaluate migration to Red Hat Enterprise Linux 8.
Table of Contents

PROVIDING FEEDBACK ON RED HAT DOCUMENTATION

CHAPTER 1. PREFACE

CHAPTER 2. ARCHITECTURES

CHAPTER 3. REPOSITORIES

CHAPTER 4. APPLICATION STREAMS

CHAPTER 5. INSTALLER AND IMAGE CREATION
  5.1. ADD-ONS
    5.1.1. OSCAP
    5.1.2. Kdump
  5.2. INSTALLER NETWORKING
    5.2.1. Device naming scheme
  5.3. INSTALLATION IMAGES AND PACKAGES
    5.3.1. Unified ISO
    5.3.2. Stage2 image
    5.3.3. inst.addrepo parameter
    5.3.4. Installation from an expanded ISO
  5.4. INSTALLER GRAPHICAL USER INTERFACE
    5.4.1. The Installation Summary window
  5.5. SYSTEM PURPOSE NEW IN RHEL
    5.5.1. System Purpose support in the graphical installation
    5.5.2. System Purpose support in Pykickstart
  5.6. INSTALLER MODULE SUPPORT
    5.6.1. Installing modules using Kickstart
  5.7. KICKSTART CHANGES
    5.7.1. auth or authconfig is deprecated in RHEL 8
    5.7.2. Kickstart no longer supports Btrfs
    5.7.3. Using Kickstart files from previous RHEL releases
    5.7.4. Deprecated Kickstart commands and options
    5.7.5. Removed Kickstart commands and options
    5.7.6. New Kickstart commands and options
  5.8. IMAGE CREATION
    5.8.1. Custom system image creation with Image Builder

CHAPTER 6. SOFTWARE MANAGEMENT
  6.1. NOTABLE CHANGES TO THE YUM STACK
    6.1.1. Advantages of YUM v4 over YUM v3
    6.1.2. How to use YUM v4
      Installing software
      Availability of plug-ins
      Availability of APIs
    6.1.3. Availability of YUM configuration file options
    6.1.4. YUM v4 features behaving differently
      6.1.4.1. yum list presents duplicate entries
    6.1.5. Changes in the transaction history log files
  6.2. NOTABLE RPM FEATURES AND CHANGES

CHAPTER 7. INFRASTRUCTURE SERVICES
  7.1. TIME SYNCHRONIZATION
    7.1.1. Implementation of NTP
    7.1.2. Introduction to chrony suite
      7.1.2.1. Differences between chrony and ntp
        7.1.2.1.1. Chrony applies leap second correction by default
    7.1.3. Additional information
  7.2. BIND - IMPLEMENTATION OF DNS
  7.3. DNS RESOLUTION
  7.4. PRINTING
    7.4.1. Print settings tools
    7.4.2. Location of CUPS logs
    7.4.3. Additional information
  7.5. PERFORMANCE AND POWER MANAGEMENT OPTIONS
    7.5.1. Notable changes in the recommended Tuned profile
  7.6. OTHER CHANGES TO INFRASTRUCTURE SERVICES COMPONENTS

CHAPTER 8. SECURITY
  8.1. CHANGES IN CORE CRYPTOGRAPHIC COMPONENTS
    8.1.1. System-wide cryptographic policies are applied by default
    8.1.2. Strong crypto defaults by removing insecure cipher suites and protocols
    8.1.3. Cipher suites and protocols disabled in all policy levels
    8.1.4. Switching the system to FIPS mode
    8.1.5. TLS 1.0 and TLS 1.1 are deprecated
    8.1.6. TLS 1.3 support in cryptographic libraries
    8.1.7. DSA is deprecated in RHEL 8
    8.1.8. SSL2 Client Hello has been deprecated in NSS
    8.1.9. NSS now use SQL by default
  8.2. SSH
    8.2.1. OpenSSH rebased to version 7.8p1
    8.2.2. libssh implements SSH as a core cryptographic component
    8.2.3. libssh2 is not available in RHEL 8
  8.3. RSYSLOG
    8.3.1. The default rsyslog configuration file format is now non-legacy
    8.3.2. The imjournal option and configuring system logging with minimized journald usage
    8.3.3. Negative effects of the default logging setup on performance
  8.4. OPENSCAP
    8.4.1. OpenSCAP API consolidated
    8.4.2. A utility for security and compliance scanning of containers is not available
  8.5. AUDIT
    8.5.1. Audit 3.0 replaces audispd with auditd
  8.6. SELINUX
    8.6.1. New SELinux booleans
    8.6.2. SELinux packages migrated to Python 3
  8.7. REMOVED SECURITY FUNCTIONALITY
    8.7.1. shadow-utils no longer allow all-numeric user and group names
    8.7.2. securetty is now disabled by default
    8.7.3. The Clevis HTTP pin has been removed
      8.7.3.1. Coolkey has been removed
      8.7.3.2. crypto-utils have been removed
      8.7.3.3. KLIPS has been removed from Libreswan

CHAPTER 9. NETWORKING
  9.1. NETWORKMANAGER
    9.1.1. Legacy network scripts support
    9.1.2. NetworkManager supports SR-IOV virtual functions
    9.1.3. NetworkManager supports a wildcard interface name match for connections
    9.1.4. NetworkManager supports configuring ethtool offload features
    9.1.5. NetworkManager now uses the internal DHCP plug-in by default
    9.1.6. The NetworkManager-config-server package is not installed by default in RHEL 8
  9.2. PACKET FILTERING
    9.2.1. nftables replaces iptables as the default network packet filtering framework
    9.2.2. Arptables FORWARD is removed from filter tables in RHEL 8
    9.2.3. Output of iptables-ebtables is not 100% compatible with ebtables
    9.2.4. New tools to convert iptables to nftables
  9.3. CHANGES IN WPA_SUPPLICANT
    9.3.1. journalctl can now read the wpa_supplicant log
    9.3.2. The compile-time support for wireless extensions in wpa_supplicant is disabled
  9.4. A NEW DATA CHUNK TYPE, I-DATA, ADDED TO SCTP
  9.5. NOTABLE TCP FEATURES IN RHEL 8
    9.5.1. TCP BBR support in RHEL 8
  9.6. VLAN-RELATED CHANGES
    9.6.1. IPVLAN virtual network drivers are now supported
    9.6.2. Certain network adapters require a firmware update to fully support 802.1ad
  9.7. NETWORK INTERFACE NAME CHANGES
  9.8. THE -OK OPTION OF THE TC COMMAND REMOVED

CHAPTER 10. KERNEL
  10.1. RESOURCE CONTROL
    10.1.1. Control group v2 available as a Technology Preview in RHEL 8
  10.2. MEMORY MANAGEMENT
    10.2.1. 52-bit PA for 64-bit ARM available
    10.2.2. 5-level page tables x86_64
  10.3. PERFORMANCE ANALYSIS AND OBSERVABILITY TOOLS
    10.3.1. bpftool added to kernel
    10.3.2. eBPF available as a Technology Preview
    10.3.3. BCC is available as a Technology Preview
  10.4. BOOTING PROCESS
    10.4.1. How to install and boot custom kernels in RHEL 8
    10.4.2. Early kdump support in RHEL 8

CHAPTER 11. HARDWARE ENABLEMENT
  11.1. REMOVED HARDWARE SUPPORT
    11.1.1. Removed device drivers
    11.1.2. Removed adapters
    11.1.3. Other removed hardware support
      11.1.3.1. AGP graphics cards are no longer supported
      11.1.3.2. FCoE software removal
      11.1.3.3. The e1000 network driver is not supported in RHEL 8
      11.1.3.4. RHEL 8 does not support the tulip driver
      11.1.3.5. The qla2xxx driver no longer supports target mode

CHAPTER 12. FILE SYSTEMS AND STORAGE
  12.1. FILE SYSTEMS
    12.1.1. Btrfs has been removed
    12.1.2. XFS now supports shared copy-on-write data extents
    12.1.3. The ext4 file system now supports metadata checksums
    12.1.4. The /etc/sysconfig/nfs file and legacy NFS service names are no longer available
  12.2. STORAGE
    12.2.1. The BOOM boot manager simplifies the process of creating boot entries
    12.2.2. Stratis is now available
    12.2.3. LUKS2 is now the default format for encrypting volumes
    12.2.4. Multiqueue scheduling on block devices
    12.2.5. VDO now supports all architectures
    12.2.6. VDO no longer supports read cache
    12.2.7. The dmraid package has been removed
    12.2.8. Software FCoE and Fibre Channel no longer support the target mode
    12.2.9. The detection of marginal paths in DM Multipath has been improved
    12.2.10. New overrides section of the DM Multipath configuration file
    12.2.11. NVMe/FC is fully supported on Broadcom Emulex and Marvell Qlogic Fibre Channel adapters
    12.2.12. Support for Data Integrity Field/Data Integrity Extension (DIF/DIX)
    12.2.13. libstoragemgmt-netapp-plugin has been removed
  12.3. LVM
    12.3.1. Removal of clvmd for managing shared storage devices
    12.3.2. Removal of lvmetad daemon
    12.3.3. LVM can no longer manage devices formatted with the GFS pool volume manager or the lvm1 metadata format
    12.3.4. LVM libraries and LVM Python bindings have been removed
    12.3.5. The ability to mirror the log for LVM mirrors has been removed

CHAPTER 13. HIGH AVAILABILITY AND CLUSTERS
  13.1. NEW FORMATS FOR PCS CLUSTER SETUP, PCS CLUSTER NODE ADD AND PCS CLUSTER NODE REMOVE COMMANDS
  13.2. MASTER RESOURCES RENAMED TO PROMOTABLE CLONE RESOURCES
  13.3. NEW COMMANDS FOR AUTHENTICATING NODES IN A CLUSTER
  13.4. LVM VOLUMES IN A RED HAT HIGH AVAILABILITY ACTIVE/PASSIVE CLUSTER
  13.5. SHARED LVM VOLUMES IN A RED HAT HIGH AVAILABILITY ACTIVE/ACTIVE CLUSTER
  13.6. GFS2 FILE SYSTEMS IN A RHEL 8 PACEMAKER CLUSTER

CHAPTER 14. SHELLS AND COMMAND-LINE TOOLS
  14.1. LOCALIZATION IS DISTRIBUTED IN MULTIPLE PACKAGES
  14.2. REMOVED SUPPORT FOR ALL-NUMERIC USER AND GROUP NAMES
  14.3. THE NOBODY USER REPLACES NFSNOBODY
  14.4. VERSION CONTROL SYSTEMS
    14.4.1. Notable changes in Subversion 1.10

CHAPTER 15. DYNAMIC PROGRAMMING LANGUAGES, WEB SERVERS, DATABASE SERVERS
  15.1. DYNAMIC PROGRAMMING LANGUAGES
    15.1.1. Notable changes in Python
      15.1.1.1. Python 3 is the default Python implementation in RHEL 8
      15.1.1.2. Migrating from Python 2 to Python 3
      15.1.1.3. Configuring the unversioned Python
      15.1.1.4. Python scripts must specify major version in hashbangs at RPM build time
      15.1.1.5. Python binding of the net-snmp package is unavailable
      15.1.1.6. Additional resources
    15.1.2. Notable changes in PHP
    15.1.3. Notable changes in Perl
    15.1.4. Notable changes in Ruby
    15.1.5. Notable changes in SWIG
    15.1.6. Node.js new in RHEL
    15.1.7. Tcl
      15.1.7.1. Notable changes in Tcl/Tk 8.6
  15.2. WEB SERVERS
    15.2.1. Notable changes in the Apache HTTP Server
    15.2.2. The nginx web server new in RHEL
    15.2.3. Apache Tomcat has been removed
  15.3. PROXY CACHING SERVERS
    15.3.1. Varnish Cache new in RHEL
    15.3.2. Notable changes in Squid
  15.4. DATABASE SERVERS
    Database servers are not installable in parallel
    15.4.1. Notable changes in MariaDB 10.3
    15.4.2. Notable changes in MySQL 8.0
    15.4.3. Notable changes in PostgreSQL

CHAPTER 16. COMPILERS AND DEVELOPMENT TOOLS
  16.1. CHANGES IN TOOLCHAIN SINCE RHEL 7
    16.1.1. Changes in GCC in RHEL 8
    16.1.2. Security enhancements in GCC in RHEL 8
    16.1.3. Compatibility-breaking changes in GCC in RHEL 8
      C++ ABI change in std::string and std::list
      GCC no longer builds Ada, Go, and Objective C/C++ code
  16.2. COMPILER TOOLSETS
  16.3. JAVA IMPLEMENTATIONS AND JAVA TOOLS IN RHEL 8
  16.4. COMPATIBILITY-BREAKING CHANGES IN GDB
    GDBserver now starts inferiors with shell
    gcj support removed
    New syntax for symbol dumping maintenance commands
    Thread numbers are no longer global
    Memory for value contents can be limited
    Sun version of stabs format no longer supported
    Sysroot handling changes
    HISTSIZE no longer controls GDB command history size
    Completion limiting added
    HP-UX XDB compatibility mode removed
    Handling signals for threads
    Breakpoint modes always-inserted off and auto merged
    remotebaud commands no longer supported
  16.5. COMPATIBILITY-BREAKING CHANGES IN COMPILERS AND DEVELOPMENT TOOLS
    librtkaio removed
    Sun RPC and NIS interfaces removed from glibc
    The nosegneg libraries for 32-bit Xen have been removed
    make new operator != causes a different interpretation of certain existing makefile syntax
    Valgrind library for MPI debugging support removed
    Development headers and static libraries removed from valgrind-devel

CHAPTER 17. IDENTITY MANAGEMENT
  17.1. IDENTITY MANAGEMENT PACKAGES ARE INSTALLED AS A MODULE
  17.2. ACTIVE DIRECTORY USERS CAN NOW ADMINISTER IDENTITY MANAGEMENT
  17.3. SESSION RECORDING SOLUTION FOR RHEL 8 ADDED
  17.4. REMOVED IDENTITY MANAGEMENT FUNCTIONALITY
    17.4.1. NSS databases not supported in OpenLDAP
    17.4.2. Selected Python Kerberos packages have been replaced
  17.5. SSSD
    17.5.1. authselect replaces authconfig
    17.5.2. KCM replaces KEYRING as the default credential cache storage
    17.5.3. sssctl prints an HBAC rules report for an IdM domain
    17.5.4. Local users are cached by SSSD and served through the nss_sss module
    17.5.5. SSSD now allows you to select one of the multiple smart-card authentication devices
  17.6. REMOVED SSSD FUNCTIONALITY
    17.6.1. sssd-secrets has been removed

CHAPTER 18. THE WEB CONSOLE
  18.1. THE WEB CONSOLE IS NOW AVAILABLE BY DEFAULT
  18.2. NEW FIREWALL INTERFACE
  18.3. SUBSCRIPTION MANAGEMENT
  18.4. BETTER IDM INTEGRATION FOR THE WEB CONSOLE
  18.5. THE WEB CONSOLE IS NOW COMPATIBLE WITH MOBILE BROWSERS
  18.6. THE WEB CONSOLE FRONT PAGE NOW DISPLAYS MISSING UPDATES AND SUBSCRIPTIONS
  18.7. THE WEB CONSOLE NOW SUPPORTS PBD ENROLLMENT
  18.8. SUPPORT LUKS V2
  18.9. VIRTUAL MACHINES CAN NOW BE MANAGED USING THE WEB CONSOLE
  18.10. INTERNET EXPLORER UNSUPPORTED BY THE WEB CONSOLE

CHAPTER 19. VIRTUALIZATION
  19.1. VIRTUAL MACHINES CAN NOW BE MANAGED USING THE WEB CONSOLE
  19.2. THE Q35 MACHINE TYPE IS NOW SUPPORTED BY VIRTUALIZATION
  19.3. REMOVED VIRTUALIZATION FUNCTIONALITY
    The cpu64-rhel6 CPU model has been deprecated and removed
    IVSHMEM has been disabled
    virt-install can no longer use NFS locations
    RHEL 8 does not support the tulip driver
    LSI Logic SAS and Parallel SCSI drivers are not supported
    Installing virtio-win no longer creates a floppy disk image with the Windows drivers

CHAPTER 20. CONTAINERS
  20.1. RHEL 8 INTERNATIONAL LANGUAGES
  20.2. NOTABLE CHANGES TO INTERNATIONALIZATION IN RHEL 8

CHAPTER 21. RELATED INFORMATION

APPENDIX A. CHANGES TO PACKAGES
  A.1. NEW PACKAGES
    A.1.1. Packages added in RHEL 8 minor releases
    A.1.2. Packages new in RHEL 8.0
  A.2. PACKAGE REPLACEMENTS
  A.3. MOVED PACKAGES
  A.4. REMOVED PACKAGES
  A.5. PACKAGES WITH REMOVED SUPPORT

PROVIDING FEEDBACK ON RED HAT DOCUMENTATION


We appreciate your input on our documentation. Please let us know how we could make it better. To do
so:

For simple comments on specific passages:

1. Make sure you are viewing the documentation in the Multi-page HTML format. In addition,
ensure you see the Feedback button in the upper right corner of the document.

2. Use your mouse cursor to highlight the part of text that you want to comment on.

3. Click the Add Feedback pop-up that appears below the highlighted text.

4. Follow the displayed instructions.

For submitting more complex feedback, create a Bugzilla ticket:

1. Go to the Bugzilla website.

2. As the Component, use Documentation.

3. Fill in the Description field with your suggestion for improvement. Include a link to the
relevant part(s) of documentation.

4. Click Submit Bug.


CHAPTER 1. PREFACE
This document provides an overview of differences between two major versions of Red Hat
Enterprise Linux: RHEL 7 and RHEL 8. It provides a list of changes relevant for evaluating migration to
RHEL 8 rather than an exhaustive list of all alterations.

Capabilities and limits of RHEL 8 as compared to other versions of the system are available in the
Knowledgebase article Red Hat Enterprise Linux technology capabilities and limits .

Information regarding the RHEL life cycle is provided in the Red Hat Enterprise Linux Life Cycle
document.

The Package manifest document provides a package listing for RHEL 8.

For details regarding RHEL 8 usage, see the RHEL 8 product documentation .

For guidance regarding an in-place upgrade from RHEL 7 to RHEL 8, see Upgrading to RHEL 8 .

For information about major differences between RHEL 6 and RHEL 7, see the RHEL 7 Migration
Planning Guide.


CHAPTER 2. ARCHITECTURES
Red Hat Enterprise Linux 8 is distributed with the kernel version 4.18, which provides support for the
following architectures:

AMD and Intel 64-bit architectures

The 64-bit ARM architecture

IBM Power Systems, little endian

IBM Z

Make sure you purchase the appropriate subscription for each architecture. For more information, see
Get Started with Red Hat Enterprise Linux - additional architectures . For a list of available subscriptions,
see Subscription Utilization on the Customer Portal.

Note that all architectures are supported by the standard kernel packages in RHEL 8; no kernel-alt
package is needed.


CHAPTER 3. REPOSITORIES
Red Hat Enterprise Linux 8 is distributed through two main repositories:

BaseOS

AppStream

Both repositories are required for a basic RHEL installation, and are available with all RHEL
subscriptions.

Content in the BaseOS repository is intended to provide the core set of the underlying OS functionality
that provides the foundation for all installations. This content is available in the RPM format and is
subject to support terms similar to those in previous releases of RHEL. For a list of packages distributed
through BaseOS, see the Package manifest.

Content in the Application Stream repository includes additional user space applications, runtime
languages, and databases in support of the varied workloads and use cases. Content in AppStream is
available in one of two formats - the familiar RPM format and an extension to the RPM format called
modules. For a list of packages available in AppStream, see the Package manifest.

In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides
additional packages for use by developers. Packages included in the CodeReady Linux Builder
repository are unsupported.
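
As an illustration, on a subscribed system the repository can usually be listed and enabled with subscription-manager; the exact repository ID below is an assumption and varies by architecture and release:

subscription-manager repos --list | grep -i codeready
subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms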

For more information about RHEL 8 repositories, see the Package manifest.


CHAPTER 4. APPLICATION STREAMS


Red Hat Enterprise Linux 8 introduces the concept of Application Streams. Multiple versions of user
space components are now delivered and updated more frequently than the core operating system
packages. This provides greater flexibility to customize Red Hat Enterprise Linux without impacting the
underlying stability of the platform or specific deployments.

Components made available as Application Streams can be packaged as modules or RPM packages and
are delivered through the AppStream repository in RHEL 8. Each Application Stream component has a
given life cycle, either the same as RHEL 8 or shorter. For details, see Red Hat Enterprise Linux Life
Cycle.

Modules are collections of packages representing a logical unit: an application, a language stack, a
database, or a set of tools. These packages are built, tested, and released together.

Module streams represent versions of the Application Stream components. For example, two streams
(versions) of the PostgreSQL database server are available in the postgresql module: PostgreSQL 10
(the default stream) and PostgreSQL 9.6. Only one module stream can be installed on the system.
Different versions can be used in separate containers.
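
A minimal sketch of working with the postgresql module streams mentioned above (the available streams and the default depend on the installed release):

yum module list postgresql          # show available streams; the default stream is marked
yum module install postgresql:9.6   # install the 9.6 stream instead of the default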

Detailed module commands are described in the Installing, managing, and removing user-space
components document. For a list of modules available in AppStream, see the Package manifest.


CHAPTER 5. INSTALLER AND IMAGE CREATION

5.1. ADD-ONS

5.1.1. OSCAP
The OSCAP add-on is enabled by default in Red Hat Enterprise Linux 8.

5.1.2. Kdump
The Kdump add-on adds support for configuring kernel crash dumping during installation. This add-on
has full support in Kickstart (using the %addon com_redhat_kdump command and its options), and is
fully integrated as an additional window in the graphical and text-based user interfaces.
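
For example, a Kickstart snippet using the %addon com_redhat_kdump command might look like the following; the reserved memory value is illustrative:

%addon com_redhat_kdump --enable --reserve-mb=auto
%end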

5.2. INSTALLER NETWORKING

5.2.1. Device naming scheme


A new network device naming scheme that generates network interface names based on a user-defined
prefix is available in Red Hat Enterprise Linux 8. The net.ifnames.prefix boot option allows the device
naming scheme to be used by the installation program and the installed system. See the
dracut.cmdline(7) man page for information.
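
For example, booting the installer with a user-defined prefix (the prefix value here is only an assumption) names the interfaces lan0, lan1, and so on, both in the installation program and on the installed system:

net.ifnames.prefix=lan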

5.3. INSTALLATION IMAGES AND PACKAGES

5.3.1. Unified ISO


In Red Hat Enterprise Linux 8, a unified ISO automatically loads the BaseOS and AppStream
installation source repositories. This feature works for the first base repository that is loaded during
installation. This applies, for example, if you boot the installation with no repository configured and have
the unified ISO as the base repository in the graphical user interface (GUI), or if you boot the installation
using the inst.repo= option that points to the unified ISO.

As a result, the AppStream repository is enabled under the Additional Repositories section of the
Installation Source GUI window. You cannot remove the AppStream repository or change its settings
but you can disable it in Installation Source. This feature does not work if you boot the installation using
a different base repository and then change it to the unified ISO. If you do that, the base repository is
replaced. However, the AppStream repository is not replaced and points to the original file.

5.3.2. Stage2 image


In Red Hat Enterprise Linux 8, multiple network locations of stage2 or Kickstart files can be specified to
prevent installation failure. This update enables the specification of multiple inst.stage2 and inst.ks
boot options with network locations of stage2 and a Kickstart file. This avoids the situation in which the
requested files cannot be reached and the installation fails because the contacted server with the
stage2 or the Kickstart file is inaccessible.

With this new update, the installation failure can be avoided if multiple locations are specified. If all the
defined locations are URLs, namely HTTP, HTTPS, or FTP, they will be tried sequentially until the
requested file is fetched successfully. If there is a location that is not a URL, only the last specified
location is tried. The remaining locations are ignored.
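
For example, the boot command line can list several locations; the host names below are hypothetical, and the installer tries the URLs in order until one succeeds:

inst.ks=http://server1.example.com/ks.cfg inst.ks=http://server2.example.com/ks.cfg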


5.3.3. inst.addrepo parameter


Previously, you could only specify a base repository from the kernel boot parameters. In Red Hat
Enterprise Linux 8, a new kernel parameter, inst.addrepo=<name>,<url>, allows you to specify an
additional repository during installation. This parameter has two mandatory values: the name of the
repository and the URL that points to the repository. For more information, see the inst-addrepo usage.
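
For example (the repository name and URL are illustrative):

inst.addrepo=supplementary,http://mirror.example.com/rhel8/supplementary/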

5.3.4. Installation from an expanded ISO


Red Hat Enterprise Linux 8 supports installing from a repository on a local hard drive. Previously, the
only installation method from a hard drive was using an ISO image as the installation source. However,
the Red Hat Enterprise Linux 8 ISO image might be too big for some file systems; for example, the
FAT32 file system cannot store files larger than 4 GiB. In Red Hat Enterprise Linux 8, you can enable
installation from a repository on a local hard drive; you only need to specify the directory instead of the
ISO image. For example: inst.repo=hd:<device>:<path to the repository>.
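
For instance, assuming the repository has been copied to a directory on a local partition (device and path below are illustrative):

inst.repo=hd:/dev/sdb1:/rhel8-repo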

For more information about the Red Hat Enterprise Linux 8 BaseOS and AppStream repositories, see
the Repositories section of this document.

5.4. INSTALLER GRAPHICAL USER INTERFACE

5.4.1. The Installation Summary window


The Installation Summary window of the Red Hat Enterprise Linux 8 graphical installation has been
updated to a new three-column layout that provides improved organization of graphical installation
settings.

5.5. SYSTEM PURPOSE NEW IN RHEL

5.5.1. System Purpose support in the graphical installation


Previously, the Red Hat Enterprise Linux installation program did not provide system purpose
information to Subscription Manager. In Red Hat Enterprise Linux 8, you can set the intended purpose
of the system during a graphical installation by using the System Purpose window, or in a Kickstart
configuration file by using the syspurpose command. When you set a system’s purpose, the
entitlement server receives information that helps auto-attach a subscription that satisfies the intended
use of the system.
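
A minimal Kickstart sketch of the syspurpose command, with illustrative values:

syspurpose --role="Red Hat Enterprise Linux Server" --sla="Premium" --usage="Production"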

5.5.2. System Purpose support in Pykickstart


Previously, it was not possible for the pykickstart library to provide system purpose information to
Subscription Manager. In Red Hat Enterprise Linux 8, pykickstart parses the new syspurpose
command and records the intended purpose of the system during automated and partially-automated
installation. The information is then passed to the installation program, saved on the newly-installed
system, and available for Subscription Manager when subscribing the system.

5.6. INSTALLER MODULE SUPPORT

5.6.1. Installing modules using Kickstart

In Red Hat Enterprise Linux 8, the installation program has been extended to handle all modular
features. Kickstart scripts can now enable module and stream combinations, install module profiles, and
install modular packages.
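
For example, a Kickstart file can enable a specific stream before package selection; the stream value below is illustrative:

# enable the 9.6 stream of the postgresql module
module --name=postgresql --stream=9.6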

5.7. KICKSTART CHANGES


The following sections describe the changes in Kickstart commands and options in Red Hat
Enterprise Linux 8.

5.7.1. auth or authconfig is deprecated in RHEL 8


The auth or authconfig Kickstart command is deprecated in Red Hat Enterprise Linux 8 because the
authconfig tool and package have been removed.

Similarly to authconfig commands issued on command line, authconfig commands in Kickstart scripts
now use the authselect-compat tool to run the new authselect tool. For a description of this
compatibility layer and its known issues, see the manual page authselect-migration(7). The installation
program will automatically detect use of the deprecated commands and install on the system the
authselect-compat package to provide the compatibility layer.
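
On an installed system, the equivalent manual step is to select an authselect profile rather than running authconfig; a typical invocation, assuming the standard profile and feature names shipped with authselect, looks like this:

authselect select sssd with-mkhomedir --force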

5.7.2. Kickstart no longer supports Btrfs


The Btrfs file system is not supported in Red Hat Enterprise Linux 8. As a result, the Graphical User
Interface (GUI) and the Kickstart commands no longer support Btrfs.

5.7.3. Using Kickstart files from previous RHEL releases


If you are using Kickstart files from previous RHEL releases, see the Repositories section of the
Considerations in adopting RHEL 8 document for more information about the Red Hat Enterprise Linux
8 BaseOS and AppStream repositories.

5.7.4. Deprecated Kickstart commands and options


The following Kickstart commands and options have been deprecated in Red Hat Enterprise Linux 8.

Where only specific options are listed, the base command and its other options are still available and not
deprecated.

auth or authconfig - use authselect instead

device

deviceprobe

dmraid

install - use the subcommands or methods directly as commands

multipath

bootloader --upgrade

ignoredisk --interactive

partition --active


reboot --kexec

Except for the auth or authconfig command, using these deprecated commands in Kickstart files prints a
warning in the logs.

You can turn the deprecated command warnings into errors with the inst.ksstrict boot option, except
for the auth or authconfig command.

5.7.5. Removed Kickstart commands and options


The following Kickstart commands and options have been completely removed in Red Hat
Enterprise Linux 8. Using them in Kickstart files will cause an error.

upgrade (This command had already previously been deprecated.)

btrfs

part/partition btrfs

part --fstype btrfs or partition --fstype btrfs

logvol --fstype btrfs

raid --fstype btrfs

unsupported_hardware

Where only specific options and values are listed, the base command and its other options are still
available and not removed.

5.7.6. New Kickstart commands and options


The following commands and options were added in Red Hat Enterprise Linux 8.2 Beta.

RHEL 8.2 Beta

rhsm

zipl

The following commands and options were added in Red Hat Enterprise Linux 8.

RHEL 8.0

authselect

module

5.8. IMAGE CREATION

5.8.1. Custom system image creation with Image Builder


The Image Builder tool enables users to create customized RHEL images. Image Builder is available in
AppStream in the lorax-composer package.

With Image Builder, users can create custom system images which include additional packages. Image
Builder functionality can be accessed through:

a graphical user interface in the web console

a command line interface in the composer-cli tool.

Image Builder output formats include, among others:

live ISO disk image

qcow2 file for direct use with a virtual machine or OpenStack

file system image file

cloud images for Azure, VMWare and AWS

To learn more about Image Builder, see the documentation title Composing a customized RHEL system
image.


CHAPTER 6. SOFTWARE MANAGEMENT

6.1. NOTABLE CHANGES TO THE YUM STACK


On Red Hat Enterprise Linux (RHEL) 8, software installation is handled by the new version of the YUM
tool, which is based on the DNF technology (YUM v4).

6.1.1. Advantages of YUM v4 over YUM v3


YUM v4 has the following advantages over the previous YUM v3 used on RHEL 7:

Increased performance

Support for modular content

Well-designed stable API for integration with tooling

For detailed information about differences between the new YUM v4 tool and the previous version YUM
v3 from RHEL 7, see Changes in DNF CLI compared to YUM .

6.1.2. How to use YUM v4


Installing software
YUM v4 is compatible with YUM v3 when used from the command line and when editing or creating
configuration files.

For installing software, you can use the yum command and its particular options in the same way as on
RHEL 7.

See more detailed information on Installing software with yum .
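
For instance, everyday commands work unchanged (package names below are illustrative):

yum install httpd
yum search postgresql
yum remove httpd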

Availability of plug-ins
Legacy YUM v3 plug-ins are incompatible with the new version of YUM v4. Selected yum plug-ins and
utilities have been ported to the new DNF back end, and can be installed under the same names as in
RHEL 7. They also provide compatibility symlinks, so the binaries, configuration files and directories can
be found in usual locations.

In the event that a plug-in is no longer included, or a replacement does not meet a usability need, please
reach out to Red Hat Support to request a Feature Enhancement as described in How do I open and
manage a support case on the Customer Portal?

For more information, see Plugin Interface .

Availability of APIs
Note that the legacy Python API provided by YUM v3 is no longer available. Users are advised to
migrate their plug-ins and scripts to the new API provided by YUM v4 (DNF Python API), which is stable
and fully supported. The upstream project documents the new DNF Python API - see the DNF API
Reference.

The Libdnf and Hawkey APIs (both C and Python) are to be considered unstable, and will likely change
during RHEL 8 life cycle.

6.1.3. Availability of YUM configuration file options

This section summarizes changes in configuration file options between RHEL 7 and RHEL 8 for the
/etc/yum.conf and /etc/yum.repos.d/*.repo files.

Table 6.1. Changes in configuration file options for the /etc/yum.conf file

RHEL 7 option RHEL 8 status

alwaysprompt removed

assumeno available

assumeyes available

autocheck_running_kernel available

autosavets removed

bandwidth available

bugtracker_url available

cachedir available

check_config_file_age available

clean_requirements_on_remove available

color available

color_list_available_downgrade available

color_list_available_install available

color_list_available_reinstall available

color_list_available_running_kernel removed

color_list_available_upgrade available

color_list_installed_extra available

color_list_installed_newer available

color_list_installed_older available

color_list_installed_reinstall available

color_list_installed_running_kernel removed


color_search_match available

color_update_installed available

color_update_local available

color_update_remote available

commands removed

config_file_path available

debuglevel available

deltarpm available

deltarpm_metadata_percentage removed

deltarpm_percentage available

depsolve_loop_limit removed

disable_excludes available

diskspacecheck available

distroverpkg removed

enable_group_conditionals removed

errorlevel available

exactarchlist removed

exclude available

exit_on_lock available

fssnap_abort_on_errors removed

fssnap_automatic_keep removed

fssnap_automatic_post removed


fssnap_automatic_pre removed

fssnap_devices removed

fssnap_percentage removed

ftp_disable_epsv removed

gpgcheck available

group_command removed

group_package_types available

groupremove_leaf_only removed

history_list_view available

history_record available

history_record_packages available

http_caching removed

include removed

installonly_limit available

installonlypkgs available

installrootkeep removed

ip_resolve available

keepalive removed

keepcache available

kernelpkgnames removed

loadts_ignoremissing removed

loadts_ignorenewrpm removed


loadts_ignorerpm removed

localpkg_gpgcheck available

logfile removed

max_connections removed

mddownloadpolicy removed

mdpolicy removed

metadata_expire available

metadata_expire_filter removed

minrate available

mirrorlist_expire removed

multilib_policy available

obsoletes available

override_install_langs removed

overwrite_groups removed

password available

payload_gpgcheck removed

persistdir available

pluginconfpath available

pluginpath available

plugins available

protected_multilib removed

protected_packages available


proxy available

proxy_password available

proxy_username available

query_install_excludes removed

recent available

recheck_installed_requires removed

remove_leaf_only removed

repo_gpgcheck available

repopkgsremove_leaf_only removed

reposdir available

reset_nice available

retries available

rpmverbosity available

shell_exit_status removed

showdupesfromrepos available

skip_broken available

skip_missing_names_on_install removed

skip_missing_names_on_update removed

ssl_check_cert_permissions removed

sslcacert available

sslclientcert available

sslclientkey available


sslverify available

syslog_device removed

syslog_facility removed

syslog_ident removed

throttle available

timeout available

tolerant removed

tsflags available

ui_repoid_vars removed

upgrade_group_objects_upgrade available

upgrade_requirements_on_install removed

usercache removed

username available

usr_w_check removed

Table 6.2. Changes in configuration file options for the /etc/yum.repos.d/*.repo file

RHEL 7 option RHEL 8 status

async removed

bandwidth available

baseurl available

compare_providers_priority removed

cost available

deltarpm_metadata_percentage removed


deltarpm_percentage available

enabled available

enablegroups available

exclude available

failovermethod removed

ftp_disable_epsv removed

gpgcakey removed

gpgcheck available

gpgkey available

http_caching removed

includepkgs available

ip_resolve available

keepalive removed

metadata_expire available

metadata_expire_filter removed

metalink available

mirrorlist available

mirrorlist_expire removed

name available

password available

proxy available

proxy_password available


proxy_username available

repo_gpgcheck available

repositoryid removed

retries available

skip_if_unavailable available

ssl_check_cert_permissions removed

sslcacert available

sslclientcert available

sslclientkey available

sslverify available

throttle available

timeout available

ui_repoid_vars removed

username available

6.1.4. YUM v4 features behaving differently


Some of the YUM v3 features may behave differently in YUM v4. If any such change negatively impacts
your workflows, please open a case with Red Hat Support, as described in How do I open and manage a
support case on the Customer Portal?

6.1.4.1. yum list presents duplicate entries

When listing packages using the yum list command, duplicate entries may be presented, one for each
repository where a package of the same name and version resides.

This is intentional, and it allows the users to distinguish such packages when necessary.

For example, if package-1.2 is available in both repo1 and repo2, YUM v4 will print both instances:

[…​]
package-1.2 repo1
package-1.2 repo2

[…​]

By contrast, the legacy YUM v3 command filtered out such duplicates so that only one instance was
shown:

[…​]
package-1.2 repo1
[…​]

6.1.5. Changes in the transaction history log files


This section summarizes changes in the transaction history log files between RHEL 7 and RHEL 8.

In RHEL 7, the /var/log/yum.log file stores:

Registry of installations, updates, and removals of the software packages

Transactions from yum and PackageKit

In RHEL 8, there is no direct equivalent to the /var/log/yum.log file. To display information about
transactions, including those from PackageKit and microdnf, use the yum history command.

Alternatively, you can search the /var/log/dnf.rpm.log file, but this log file does not include the
transactions from PackageKit and microdnf, and it is subject to log rotation, which periodically removes
the stored information.
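
A short illustration (the transaction ID is hypothetical):

yum history                # list recent transactions
yum history info 12        # show the details of one transaction
grep -i install /var/log/dnf.rpm.log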

6.2. NOTABLE RPM FEATURES AND CHANGES


Red Hat Enterprise Linux (RHEL) 8 is distributed with RPM 4.14. This version introduces many
enhancements over RPM 4.11, which is available in RHEL 7.

Notable features include:

The debuginfo packages can be installed in parallel

Support for weak dependencies

Support for rich or boolean dependencies

Support for packaging files above 4 GB in size

Support for file triggers

New --nopretrans and --noposttrans switches to disable the execution of the %pretrans and
%posttrans scriptlets respectively.

New --noplugins switch to disable loading and execution of all RPM plug-ins.

New syslog plug-in for logging any RPM activity by the System Logging protocol (syslog).

The rpmbuild command can now do all build steps from a source package directly.
This is possible if rpmbuild is used with any of the -r[abpcils] options.

Support for the reinstall mode.

This is ensured by the new --reinstall option. To reinstall a previously installed package, use the
syntax below:

rpm {--reinstall} [install-options] PACKAGE_FILE

This option ensures a proper installation of the new package and removal of the old package.

Support for SSD conservation mode.


This is ensured by the new %_minimize_writes macro, which is available in the
/usr/lib/rpm/macros file. The macro is set to 0 by default. To minimize writing to SSD disks, set
%_minimize_writes to 1 (see the example after this list).

New rpm2archive utility for converting rpm payload to tar archives

See more information about New RPM features in RHEL 8 .
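
As an example of two of the features listed above, the SSD conservation macro can be overridden in a local macro file, and rpm2archive converts a package payload to a tar archive; the file names used here are illustrative:

echo '%_minimize_writes 1' >> /etc/rpm/macros   # override the default value of 0
rpm2archive httpd-2.4.37-30.el8.x86_64.rpm       # convert the package payload to a tar archive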

Notable changes include:

Stricter spec-parser

Simplified signature checking output in non-verbose mode

Improved support for reproducible builds (builds that create an identical package):

Setting build time

Setting file mtime (file modification time)

Setting buildhost

Using the -p option to query an uninstalled PACKAGE_FILE is now optional. For this use case,
the rpm command now returns the same result with or without the -p option. The only use case
where the -p option is necessary is to verify that the file name does not match any Provides in
the rpmdb database.

Additions and deprecations in macros

The %makeinstall macro has been deprecated. To install a program, use the
%make_install macro instead.

The rpmbuild --sign command has been deprecated.


Note that using the --sign option with the rpmbuild command has been deprecated. To add a
signature to an already existing package, use rpm --addsign instead.


CHAPTER 7. INFRASTRUCTURE SERVICES

7.1. TIME SYNCHRONIZATION


Accurate timekeeping is important for a number of reasons. In Linux systems, the Network Time
Protocol (NTP) protocol is implemented by a daemon running in user space.

7.1.1. Implementation of NTP


RHEL 7 supported two implementations of the NTP protocol: ntp and chrony.

In RHEL 8, the NTP protocol is implemented only by the chronyd daemon, provided by the chrony
package.

The ntp daemon is no longer available. If you used ntp on your RHEL 7 system, you might need to
migrate to chrony.

Possible replacements for previous ntp features that are not supported by chrony are documented in
Achieving some settings previously supported by ntp in chrony .
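
A minimal migration sketch, assuming a standard setup in which the time sources in /etc/chrony.conf come from your environment:

yum install chrony
systemctl enable --now chronyd
chronyc sources -v      # verify that time sources are reachable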

7.1.2. Introduction to chrony suite


chrony is an implementation of NTP, which performs well in a wide range of conditions, including
intermittent network connections, heavily congested networks, changing temperatures (ordinary
computer clocks are sensitive to temperature), and systems that do not run continuously, or run on a
virtual machine.

You can use chrony:

To synchronize the system clock with NTP servers

To synchronize the system clock with a reference clock, for example a GPS receiver

To synchronize the system clock with a manual time input

As an NTPv4(RFC 5905) server or peer to provide a time service to other computers in the
network

For more information about chrony, see Configuring basic system settings.

7.1.2.1. Differences between chrony and ntp

See the following resources for information about differences between chrony and ntp:

Configuring basic system settings

Comparison of NTP implementations

7.1.2.1.1. Chrony applies leap second correction by default

In RHEL 8, the default chrony configuration file, /etc/chrony.conf, includes the leapsectz directive.

The leapsectz directive enables chronyd to:

Get information about leap seconds from the system tz database (tzdata)
Set the TAI-UTC offset of the system clock in order that the system provides an accurate
International Atomic Time (TAI) clock (CLOCK_TAI)

The directive is not compatible with servers that hide leap seconds from their clients using a leap smear,
such as chronyd servers configured with the leapsecmode and smoothtime directives. If a client
chronyd is configured to synchronize to such servers, remove leapsectz from the configuration file.
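
For illustration, the relevant part of a default /etc/chrony.conf looks like the following; removing or commenting out the leapsectz line disables the behavior when synchronizing to leap-smearing servers (the pool name is illustrative):

pool 2.rhel.pool.ntp.org iburst
leapsectz right/UTC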

7.1.3. Additional information


For more information on how to configure NTP using the chrony suite, see Configuring basic system
settings.

7.2. BIND - IMPLEMENTATION OF DNS


RHEL 8 includes BIND (Berkeley Internet Name Domain) in version 9.11. This version of the DNS server
introduces multiple new features and feature changes compared to version 9.10.

New features:

A new method of provisioning secondary servers called Catalog Zones has been added.

Domain Name System Cookies are now sent by the named service and the dig utility.

The Response Rate Limiting feature can now help with mitigation of DNS amplification
attacks.

Performance of response-policy zone (RPZ) has been improved.

A new zone file format called map has been added. Zone data stored in this format can be
mapped directly into memory, which enables zones to load significantly faster.

A new tool called delv (domain entity lookup and validation) has been added, with dig-like
semantics for looking up DNS data and performing internal DNS Security Extensions (DNSSEC)
validation.

A new mdig command is now available. This command is a version of the dig command that
sends multiple pipelined queries and then waits for responses, instead of sending one query and
waiting for the response before sending the next query.

A new prefetch option, which improves the recursive resolver performance, has been added.

A new in-view zone option, which allows zone data to be shared between views, has been added.
When this option is used, multiple views can serve the same zones authoritatively without
storing multiple copies in memory.

A new max-zone-ttl option, which enforces maximum TTLs for zones, has been added. When a
zone containing a higher TTL is loaded, the load fails. Dynamic DNS (DDNS) updates with
higher TTLs are accepted but the TTL is truncated.

New quotas have been added to limit queries that are sent by recursive resolvers to
authoritative servers experiencing denial-of-service attacks.

The nslookup utility now looks up both IPv6 and IPv4 addresses by default.

The named service now checks whether other name server processes are running before
starting up.


When loading a signed zone, named now checks whether a Resource Record Signature's (RRSIG)
inception time is in the future, and if so, it regenerates the RRSIG immediately.

Zone transfers now use smaller message sizes to improve message compression, which reduces
network usage.

Feature changes:

The version 3 XML schema for the statistics channel, including new statistics and a flattened
XML tree for faster parsing, is provided by the HTTP interface. The legacy version 2 XML
schema is no longer supported.

The named service now listens on both IPv6 and IPv4 interfaces by default.

The named service no longer supports GeoIP. Access control lists (ACLs) defined by presumed
location of query sender are unavailable.

7.3. DNS RESOLUTION


In RHEL 7, the nslookup and host utilities were able to accept any reply without the recursion
available flag from any name server listed. In RHEL 8, nslookup and host ignore replies from name
servers with recursion not available unless it is the last configured name server. For the last configured
name server, the answer is accepted even without the recursion available flag.

However, if the last configured name server is not responding or unreachable, name resolution fails. To
prevent such a failure, you can use one of the following approaches:

Ensure that configured name servers always reply with the recursion available flag set.

Allow recursion for all internal clients.

Optionally, you can also use the dig utility to detect whether recursion is available or not.
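
For example, querying a name server directly with dig and checking the flags line shows whether recursion is available; the server address below is illustrative:

dig @192.0.2.53 example.com | grep 'flags:'   # 'ra' in the flags means recursion is available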

7.4. PRINTING

7.4.1. Print settings tools


The Print Settings configuration tool, which was used in RHEL 7, is no longer available.

To achieve various tasks related to printing, you can choose one of the following tools:

CUPS web user interface (UI)

GNOME Control center

For more information on print setting tools in RHEL 8, see Deploying different types of servers.

7.4.2. Location of CUPS logs


CUPS provides three kinds of logs:

Error log

Access log


Page log

In RHEL 8, the logs are no longer stored in specific files within the /var/log/cups directory, which was
used in RHEL 7. Instead, all three types are logged centrally in systemd-journald together with logs from
other programs.

For more information on how to use CUPS logs in RHEL 8, see Deploying different types of servers.

7.4.3. Additional information


For more information on how to configure printing in RHEL 8, see Deploying different types of servers.

7.5. PERFORMANCE AND POWER MANAGEMENT OPTIONS

7.5.1. Notable changes in the recommended Tuned profile


In RHEL 8, the recommended Tuned profile, reported by the tuned-adm recommend command, is
selected based on the following rules:

If the syspurpose role (reported by the syspurpose show command) contains atomic, and at
the same time:

if Tuned is running on bare metal, the atomic-host profile is selected

if Tuned is running in a virtual machine, the atomic-guest profile is selected

If Tuned is running in a virtual machine, the virtual-guest profile is selected

If the syspurpose role contains desktop or workstation and the chassis type (reported by
dmidecode) is Notebook, Laptop, or Portable, then the balanced profile is selected

If none of the above rules matches, the throughput-performance profile is selected

Note that the first rule that matches takes effect.
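
For example, you can inspect what these rules would select on a given system and then apply the reported profile (output varies by system; the profile name shown is only an example):

# syspurpose show
# tuned-adm recommend
# tuned-adm profile throughput-performance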

7.6. OTHER CHANGES TO INFRASTRUCTURE SERVICES COMPONENTS

This section summarizes other notable changes to particular infrastructure services components.

Table 7.1. Notable changes to infrastructure services components

Name Type of change Additional information

acpid Option change -d (debug) no longer implies -f (foreground)

bind Configuration option removal dnssec-lookaside auto removed; use no instead

brltty Configuration option change --message-delay renamed to --message-timeout


brltty Configuration option removal -U [--update-interval=] removed

brltty Configuration option change A Bluetooth device address may now contain dashes (-) instead of colons (:). The bth: and bluez: device qualifier aliases are no longer supported.

cups Functionality removal Upstream removed support for interface scripts for security reasons. Use PPDs and drivers provided by the OS, or proprietary ones.

cups Directive options removal Removed the Digest and BasicDigest authentication types for the AuthType and DefaultAuthType directives in /etc/cups/cupsd.conf. Migrate to Basic.

cups Directive options removal Removed Include from cupsd.conf

cups Directive options removal Removed ServerCertificate and ServerKey from cups-files.conf; use ServerKeychain instead

cups Directives moved between conf files SetEnv and PassEnv moved from cupsd.conf to cups-files.conf

cups Directives moved between conf files PrintcapFormat moved from cupsd.conf to cups-files.conf

cups-filters Default configuration change Names of remote print queues discovered by cups-browsed are now created based on the device ID of the printer, not on the name of the remote print queue.

cups-filters Default configuration change CreateIPPPrinterQueues must be set to All for automatic creation of queues for IPP printers.

cyrus-imapd Data format change Cyrus-imapd 3.0.7 uses a different data format.

dhcp Behavior change dhclient sends the hardware address as a client identifier by
default. The client-id option is configurable. For more
information, see the /etc/dhcp/dhclient.conf file.

dhcp Options The -I option is now used for standard-ddns-updates. For the
incompatibility previous functionality (dhcp-client-identifier), use the new -C
option.


dosfstools Behavior change Data structures are now automatically aligned to cluster size. To
disable the alignment, use the -a option. fsck.fat now defaults
to interactive repair mode which previously had to be selected
with the -r option.

finger Functionality removal

GeoIP Functionality removal

grep Behavior change grep now treats files containing data improperly encoded for the current locale as binary.

grep Behavior change grep -P no longer reports an error and exits when given invalid
UTF-8 data

grep Behavior change grep now warns if the GREP_OPTIONS environment variable is used. Use an alias or script instead.

grep Behavior change grep -P reports an error and exits in locales with multibyte character encodings other than UTF-8.

grep Behavior change When searching binary data, grep may treat non-text bytes as
line terminators, which impacts performance significantly.

grep Behavior change grep -z no longer automatically treats the byte '\200' as binary
data.

grep Behavior change Context no longer excludes selected lines omitted because of -m.

irssi Behavior change SSLv2 and SSLv3 no longer supported

lftp Change of options xfer:log and xfer:log-file deprecated; now available under the log:enabled and log:file commands

ntp Functionality removal ntp has been removed; use chrony instead

postfix Configuration change The 3.x versions have a compatibility safety net that runs Postfix programs with backwards-compatible default settings after an upgrade.

postfix Configuration change In the Postfix MySQL database client, the default option_group value has changed to client. Set it to an empty value for backward-compatible behavior.


postfix Configuration The postqueue command no longer forces all message arrival
change times to be reported in UTC. To get the old behavior, set
TZ=UTC in main.cf.

postfix Configuration change ECDHE: smtpd_tls_eecdh_grade defaults to auto; new parameter tls_eecdh_auto_curves with the names of curves that may be negotiated.

postfix Configuration change Changed defaults for append_dot_mydomain (new: no, old: yes), master.cf chroot (new: n, old: y), smtputf8 (new: yes, old: no).

postfix Configuration change Changed defaults for relay_domains (new: empty, old: $mydestination).

postfix Configuration change The mynetworks_style default value has changed from subnet to host.

powertop Option removal -d removed

powertop Option change -h is no longer an alias for --html. It is now an alias for --help.

powertop Option removal -u removed

quagga Functionality removal

sendmail Configuration change sendmail uses uncompressed IPv6 addresses by default, which permits a zero subnet to have a more specific match. Configuration data must use the same format, so make sure patterns such as IPv6:[0-9a-fA-F:]*:: and IPv6:: are updated before using 8.15.

spamassassin Command line option removal Removed --ssl-version in spamd.

spamassassin Command line option change In spamc, the command line option -S/--ssl can no longer be used to specify the SSL/TLS version. The option can now only be used without an argument to enable TLS.

spamassassin Change in supported SSL versions In spamc and spamd, SSLv3 is no longer supported.


spamassassin Functionality removal sa-update no longer supports SHA1 validation of filtering rules, and uses SHA256/SHA512 validation instead.

vim Default settings change Vim runs the default.vim script if no ~/.vimrc file is available.

vim Default settings change Vim now supports bracketed paste from terminal. Include 'set t_BE=' in vimrc for the previous behavior.

vsftpd Default configuration change anonymous_enable disabled

vsftpd Default configuration change strict_ssl_read_eof now defaults to YES

vsftpd Functionality removal tcp_wrappers no longer supported

vsftpd Default configuration change TLSv1 and TLSv1.1 are disabled by default

wireshark Python bindings removal Dissectors can no longer be written in Python, use C instead.

wireshark Option removal -C suboption for -N option for asynchronous DNS name
resolution removed

wireshark Output change With the -H option, the output no longer shows SHA1, RIPEMD160 and MD5 hashes. It now shows SHA256, RIPEMD160 and SHA1 hashes.

wvdial Functionality removal


CHAPTER 8. SECURITY

8.1. CHANGES IN CORE CRYPTOGRAPHIC COMPONENTS

8.1.1. System-wide cryptographic policies are applied by default


Crypto-policies is a component in Red Hat Enterprise Linux 8, which configures the core cryptographic
subsystems, covering the TLS, IPsec, DNSSEC, Kerberos protocols, and the OpenSSH suite. It provides
a small set of policies, which the administrator can select using the update-crypto-policies command.

The DEFAULT system-wide cryptographic policy offers secure settings for current threat models. It
allows the TLS 1.2 and 1.3 protocols, as well as the IKEv2 and SSH2 protocols. The RSA keys and Diffie-
Hellman parameters are accepted if larger than 2047 bits.
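
For example, the active policy can be displayed, and a different predefined level selected, with the same command:

# update-crypto-policies --show
# update-crypto-policies --set FUTURE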

See the Consistent security by crypto policies in Red Hat Enterprise Linux 8 article on the Red Hat Blog
and the update-crypto-policies(8) man page for more information.

8.1.2. Strong crypto defaults by removing insecure cipher suites and protocols
The following list contains cipher suites and protocols removed from the core cryptographic libraries in
RHEL 8. They are not present in the sources, or their support is disabled during the build, so applications
cannot use them.

DES (since RHEL 7)

All export grade cipher suites (since RHEL 7)

MD5 in signatures (since RHEL 7)

SSLv2 (since RHEL 7)

SSLv3 (since RHEL 8)

All ECC curves < 224 bits (since RHEL 6)

All binary field ECC curves (since RHEL 6)

8.1.3. Cipher suites and protocols disabled in all policy levels


The following cipher suites and protocols are disabled in all crypto policy levels. They can be enabled
only by an explicit configuration of individual applications.

DH with parameters < 1024 bits

RSA with key size < 1024 bits

Camellia

ARIA

SEED

IDEA

Integrity-only cipher suites


TLS CBC mode cipher suites using SHA-384 HMAC

AES-CCM8

All ECC curves incompatible with TLS 1.3, including secp256k1

IKEv1 (since RHEL 8)

8.1.4. Switching the system to FIPS mode


The system-wide cryptographic policies contain a policy level that enables cryptographic modules self-
checks in accordance with the requirements by Federal Information Processing Standard (FIPS)
Publication 140-2. The fips-mode-setup tool that enables or disables FIPS mode internally uses the
FIPS system-wide cryptographic policy level.

To switch the system to FIPS mode in RHEL 8, enter the following command and restart your system:

# fips-mode-setup --enable

See the fips-mode-setup(8) man page for more information.
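
You can verify the resulting state afterwards, for example:

# fips-mode-setup --check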

8.1.5. TLS 1.0 and TLS 1.1 are deprecated


The TLS 1.0 and TLS 1.1 protocols are disabled in the DEFAULT system-wide cryptographic policy level.
If your scenario, for example, a video conferencing application in the Firefox web browser, requires using
the deprecated protocols, switch the system-wide cryptographic policy to the LEGACY level:

# update-crypto-policies --set LEGACY

For more information, see the Strong crypto defaults in RHEL 8 and deprecation of weak crypto
algorithms Knowledgebase article on the Red Hat Customer Portal and the update-crypto-policies(8)
man page.

8.1.6. TLS 1.3 support in cryptographic libraries


This update enables Transport Layer Security (TLS) 1.3 by default in all major back-end crypto libraries.
This enables low latency across the operating system communications layer and enhances privacy and
security for applications by taking advantage of new algorithms, such as RSA-PSS or X25519.

8.1.7. DSA is deprecated in RHEL 8


The Digital Signature Algorithm (DSA) is considered deprecated in Red Hat Enterprise Linux 8.
Authentication mechanisms that depend on DSA keys do not work in the default configuration. Note
that OpenSSH clients do not accept DSA host keys even in the LEGACY system-wide cryptographic
policy level.

8.1.8. SSL2 Client Hello has been deprecated in NSS

The Transport Layer Security (TLS) protocol version 1.2 and earlier allow a negotiation to start with a Client Hello message formatted in a way that is backward compatible with the Secure Sockets Layer (SSL) protocol version 2. Support for this feature in the Network Security Services (NSS) library has been deprecated and it is disabled by default.


Applications that require support for this feature need to use the new
SSL_ENABLE_V2_COMPATIBLE_HELLO API to enable it. Support for this feature may be removed
completely in future releases of Red Hat Enterprise Linux 8.

8.1.9. NSS now uses SQL by default


The Network Security Services (NSS) libraries now use the SQL file format for the trust database by
default. The DBM file format, which was used as a default database format in previous releases, does not
support concurrent access to the same database by multiple processes and it has been deprecated in
upstream. As a result, applications that use the NSS trust database to store keys, certificates, and
revocation information now create databases in the SQL format by default. Attempts to create
databases in the legacy DBM format fail. The existing DBM databases are opened in read-only mode,
and they are automatically converted to the SQL format. Note that NSS has supported the SQL file format since Red Hat Enterprise Linux 6.
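
For example, the certutil tool from the nss-tools package can address a database explicitly in the SQL format (the directory path is illustrative):

# certutil -d sql:/etc/pki/nssdb -L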

8.2. SSH

8.2.1. OpenSSH rebased to version 7.8p1

The openssh packages have been upgraded to upstream version 7.8p1. Notable changes include:

Removed support for the SSH version 1 protocol.

Removed support for the hmac-ripemd160 message authentication code.

Removed support for RC4 (arcfour) ciphers.

Removed support for Blowfish ciphers.

Removed support for CAST ciphers.

Changed the default value of the UseDNS option to no.

Disabled DSA public key algorithms by default.

Changed the minimal modulus size for Diffie-Hellman parameters to 2048 bits.

Changed semantics of the ExposeAuthInfo configuration option.

The UsePrivilegeSeparation=sandbox option is now mandatory and cannot be disabled.

Set the minimal accepted RSA key size to 1024 bits.

8.2.2. libssh implements SSH as a core cryptographic component

This change introduces libssh as a core cryptographic component in Red Hat Enterprise Linux 8. The
libssh library implements the Secure SHell (SSH) protocol.

Note that libssh does not comply with the system-wide crypto policy.

8.2.3. libssh2 is not available in RHEL 8

The deprecated libssh2 library lacks features, such as support for elliptic curves or the Generic Security Service Application Program Interface (GSSAPI), and it has been removed from RHEL 8 in favor of libssh.


8.3. RSYSLOG

8.3.1. The default rsyslog configuration file format is now non-legacy

The configuration files in the rsyslog packages now use the non-legacy format by default. The legacy format can still be used, although mixing current and legacy configuration statements has several constraints. Configurations carried over from previous RHEL releases should be revised. See the rsyslog.conf(5) man page for more information.
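
As a rough illustration, the same file action written in the legacy syntax and in the non-legacy (RainerScript) syntax (facility and path are examples only):

# legacy syntax
mail.*    /var/log/maillog

# non-legacy syntax
mail.*    action(type="omfile" file="/var/log/maillog")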

8.3.2. The imjournal option and configuring system logging with minimized journald usage

To avoid duplicate records that might appear when journald rotates its files, the imjournal option has been added. Note that use of this option can affect performance.

Note that the system with rsyslog can be configured to provide better performance as described in the
Configuring system logging without journald or with minimized journald usage Knowledgebase article.

8.3.3. Negative effects of the default logging setup on performance


The default logging environment setup might consume 4 GB of memory or even more, and adjusting rate-limit values is complex when systemd-journald is running with rsyslog.

See the Negative effects of the RHEL default logging setup on performance and their mitigations
Knowledgebase article for more information.

8.4. OPENSCAP

8.4.1. OpenSCAP API consolidated


This update provides a consolidated OpenSCAP shared library API: 63 symbols have been removed, 14 added, and 4 have an updated signature. The removed symbols in OpenSCAP 1.3.0 include:

symbols that were marked as deprecated in version 1.2.0

SEAP protocol symbols

internal helper functions

unused library symbols

unimplemented symbols

8.4.2. A utility for security and compliance scanning of containers is not available
In Red Hat Enterprise Linux 7, the oscap-docker utility can be used for scanning of Docker containers
based on Atomic technologies. In Red Hat Enterprise Linux 8, the Docker- and Atomic-related
OpenSCAP commands are not available. As a result, oscap-docker or an equivalent utility for security
and compliance scanning of containers is not available in RHEL 8 at the moment.

8.5. AUDIT


8.5.1. Audit 3.0 replaces audispd with auditd

With this update, functionality of audispd has been moved to auditd. As a result, audispd configuration
options are now part of auditd.conf. In addition, the plugins.d directory has been moved under
/etc/audit. The current status of auditd and its plug-ins can now be checked by running the service
auditd state command.

8.6. SELINUX

8.6.1. New SELinux booleans


This update of the SELinux system policy introduces the following booleans:

colord_use_nfs

mysql_connect_http

pdns_can_network_connect_db

ssh_use_tcpd

sslh_can_bind_any_port

sslh_can_connect_any_port

virt_use_pcscd

To get a list of booleans including their meaning, and to find out if they are enabled or disabled, install
the selinux-policy-devel package and use:

# semanage boolean -l
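
For example, one of the new booleans can be queried and enabled persistently with the standard tools (the boolean is taken from the list above):

# getsebool virt_use_pcscd
# setsebool -P virt_use_pcscd on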

8.6.2. SELinux packages migrated to Python 3


The functionality of the libselinux-python package is now provided by the python3-libselinux
package, and the policycoreutils-python has been replaced by the policycoreutils-python-utils and
python3-policycoreutils packages.

8.7. REMOVED SECURITY FUNCTIONALITY

8.7.1. shadow-utils no longer allow all-numeric user and group names

The useradd and groupadd commands disallow user and group names consisting purely of numeric characters. Such names are not allowed because they can confuse many tools that work with user and group names and user and group IDs (which are numbers). Note that all-numeric user and group names were deprecated in Red Hat Enterprise Linux 7, and their support has been completely removed in Red Hat Enterprise Linux 8.

8.7.2. securetty is now disabled by default

Because of the dynamic nature of tty device files on modern Linux systems, the securetty PAM module
has been disabled by default and the /etc/securetty configuration file is no longer included in RHEL.
Because /etc/securetty listed many possible devices, the practical effect in most cases was to allow access by default, so this change has only a minor impact. However, if you use a more restrictive configuration, you need to add a line enabling the pam_securetty.so module to the appropriate files in the /etc/pam.d directory, and create a new /etc/securetty file.
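
A minimal sketch, assuming root logins should be limited to the first virtual console:

/etc/pam.d/login (near the top of the auth stack):
auth       required     pam_securetty.so

/etc/securetty:
tty1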

8.7.3. The Clevis HTTP pin has been removed

The Clevis HTTP pin has been removed from RHEL 8, and the clevis encrypt http sub-command is no
longer available.

8.7.4. Coolkey has been removed

The Coolkey driver for smart cards has been removed from RHEL 8, and OpenSC now provides its
functionality.

8.7.5. crypto-utils have been removed

The crypto-utils packages have been removed from RHEL 8. You can use tools provided by the
openssl, gnutls-utils, and nss-tools packages instead.

8.7.6. KLIPS has been removed from Libreswan

In Red Hat Enterprise Linux 8, support for Kernel IP Security (KLIPS) IPsec stack has been removed
from Libreswan.


CHAPTER 9. NETWORKING

9.1. NETWORKMANAGER

9.1.1. Legacy network scripts support


Network scripts are deprecated in Red Hat Enterprise Linux 8 and are no longer provided by default.
The basic installation provides a new version of the ifup and ifdown scripts which call NetworkManager
through the nmcli tool. In Red Hat Enterprise Linux 8, to run the ifup and the ifdown scripts,
NetworkManager must be running.

NOTE

Custom commands in /sbin/ifup-local, ifdown-pre-local and ifdown-local scripts are not executed.

If any of these scripts are required, the installation of the deprecated network scripts in
the system is still possible with the following command:

~]# yum install network-scripts

The ifup and the ifdown scripts link to the installed legacy network scripts.

Calling the legacy network scripts shows a warning about their deprecation.

9.1.2. NetworkManager supports SR-IOV virtual functions


In Red Hat Enterprise Linux 8, NetworkManager allows configuring the number of virtual functions (VF)
for interfaces that support single-root I/O virtualization (SR-IOV). Additionally, NetworkManager
allows configuring some attributes of the VFs, such as the MAC address, VLAN, the spoof checking
setting and allowed bitrates. Note that all properties related to SR-IOV are available in the sriov
connection setting. For more details, see the nm-settings(5) man page.
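
For example, assuming a connection profile named enp1s0 on an SR-IOV capable card, the number of VFs can be set with nmcli:

# nmcli connection modify enp1s0 sriov.total-vfs 4
# nmcli connection up enp1s0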

9.1.3. NetworkManager supports a wildcard interface name match for connections


Previously, it was possible to restrict a connection to a given interface using only an exact match on the
interface name. With this update, connections have a new match.interface-name property which
supports wildcards. This update enables users to choose the interface for a connection in a more flexible
way using a wildcard pattern.
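
For example (the profile name and pattern are illustrative):

# nmcli connection modify my-profile match.interface-name "enp1s*"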

9.1.4. NetworkManager supports configuring ethtool offload features


With this enhancement, NetworkManager supports configuring ethtool offload features, and users no
longer need to use init scripts or a NetworkManager dispatcher script. As a result, users can now
configure the offload feature as a part of the connection profile using one of the following methods:

By using the nmcli utility

By editing key files in the /etc/NetworkManager/system-connections/ directory

By editing the /etc/sysconfig/network-scripts/ifcfg-* files

Note that this feature is currently not supported in graphical interfaces and in the nmtui utility.
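
For example, TCP segmentation offload can be disabled in a profile with nmcli (the profile name is illustrative; ethtool.feature-tso is one of the offload properties):

# nmcli connection modify enp1s0 ethtool.feature-tso off
# nmcli connection up enp1s0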


9.1.5. NetworkManager now uses the internal DHCP plug-in by default


NetworkManager supports the internal and dhclient DHCP plug-ins. By default, NetworkManager in
Red Hat Enterprise Linux (RHEL) 7 uses the dhclient and RHEL 8 the internal plug-in. In certain
situations, the plug-ins behave differently. For example, dhclient can use additional settings specified in
the /etc/dhcp/ directory.

If you upgrade from RHEL 7 to RHEL 8 and NetworkManager behaves differently, add the following setting to the [main] section in the /etc/NetworkManager/NetworkManager.conf file to use the dhclient plug-in:

[main]
dhcp=dhclient

9.1.6. The NetworkManager-config-server package is not installed by default in RHEL 8

The NetworkManager-config-server package is only installed by default if you select either the Server
or Server with GUI base environment during the setup. If you selected a different environment, use the
yum install NetworkManager-config-server command to install the package.

9.2. PACKET FILTERING

9.2.1. nftables replaces iptables as the default network packet filtering framework

The nftables framework provides packet classification facilities and it is the designated successor to the
iptables, ip6tables, arptables, and ebtables tools. It offers numerous improvements in convenience,
features, and performance over previous packet-filtering tools, most notably:

lookup tables instead of linear processing

a single framework for both the IPv4 and IPv6 protocols

rules all applied atomically instead of fetching, updating, and storing a complete rule set

support for debugging and tracing in the rule set (nftrace) and monitoring trace events (in the
nft tool)

more consistent and compact syntax, no protocol-specific extensions

a Netlink API for third-party applications

Similarly to iptables, nftables use tables for storing chains. The chains contain individual rules for
performing actions. The nft tool replaces all tools from the previous packet-filtering frameworks. The
libnftables library can be used for low-level interaction with nftables Netlink API over the libmnl library.
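
A minimal illustration of building a rule set directly with the nft tool (table, chain, and rule are examples only):

# nft add table inet filter
# nft add chain inet filter input '{ type filter hook input priority 0; policy accept; }'
# nft add rule inet filter input tcp dport 22 accept
# nft list ruleset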

The iptables, ip6tables, ebtables and arptables tools are replaced by nftables-based drop-in
replacements with the same name. While external behavior is identical to their legacy counterparts,
internally they use nftables with legacy netfilter kernel modules through a compatibility interface where
required.

Effect of the modules on the nftables rule set can be observed using the nft list ruleset command.
Since these tools add tables, chains, and rules to the nftables rule set, be aware that nftables rule-set
operations, such as the nft flush ruleset command, might affect rule sets installed using the formerly separate legacy commands.

To quickly identify which variant of the tool is present, version information has been updated to include
the back-end name. In RHEL 8, the nftables-based iptables tool prints the following version string:

$ iptables --version
iptables v1.8.0 (nf_tables)

For comparison, the following version information is printed if legacy iptables tool is present:

$ iptables --version
iptables v1.8.0 (legacy)

9.2.2. Arptables FORWARD is removed from filter tables in RHEL 8

The arptables FORWARD chain functionality has been removed in Red Hat Enterprise Linux (RHEL) 8. You can now use the FORWARD chain of the ebtables tool and add the rules into it.

9.2.3. Output of iptables-ebtables is not 100% compatible with ebtables

In RHEL 8, the ebtables command is provided by the iptables-ebtables package, which contains an
nftables-based reimplementation of the tool. This tool has a different code base, and its output
deviates in aspects, which are either negligible or deliberate design choices.

Consequently, when migrating your scripts parsing some ebtables output, adjust the scripts to reflect
the following:

MAC address formatting has been changed to be fixed in length. Where necessary, individual
byte values contain a leading zero to maintain the format of two characters per octet.

Formatting of IPv6 prefixes has been changed to conform with RFC 4291. The trailing part after
the slash character no longer contains a netmask in the IPv6 address format but a prefix length.
This change applies to valid (left-contiguous) masks only, while others are still printed in the old
formatting.

9.2.4. New tools to convert iptables to nftables

This update adds the iptables-translate and ip6tables-translate tools to convert the existing iptables
or ip6tables rules into the equivalent ones for nftables. Note that some extensions lack translation
support. If such an extension exists, the tool prints the untranslated rule prefixed with the # sign. For
example:

% iptables-translate -A INPUT -j CHECKSUM --checksum-fill
nft # -A INPUT -j CHECKSUM --checksum-fill

Additionally, users can use the iptables-restore-translate and ip6tables-restore-translate tools to translate a dump of rules. Note that before that, users can use the iptables-save or ip6tables-save commands to print a dump of current rules. For example:

% sudo iptables-save >/tmp/iptables.dump

% iptables-restore-translate -f /tmp/iptables.dump
# Translated by iptables-restore-translate v1.8.0 on Wed Oct 17 17:00:13 2018


add table ip nat
...

9.3. CHANGES IN WPA_SUPPLICANT

9.3.1. journalctl can now read the wpa_supplicant log

In Red Hat Enterprise Linux (RHEL) 8, the wpa_supplicant package is built with
CONFIG_DEBUG_SYSLOG enabled. This allows reading the wpa_supplicant log using the journalctl
utility instead of checking the contents of the /var/log/wpa_supplicant.log file.

9.3.2. The compile-time support for wireless extensions in wpa_supplicant is disabled

The wpa_supplicant package does not support wireless extensions. Users who try to use wext as a command-line argument, or who try to use it on old adapters that support only wireless extensions, will not be able to run the wpa_supplicant daemon.

9.4. A NEW DATA CHUNK TYPE, I-DATA, ADDED TO SCTP


This update adds a new data chunk type, I-DATA, and stream schedulers to the Stream Control
Transmission Protocol (SCTP). Previously, SCTP sent user messages in the same order as they were
sent by a user. Consequently, a large SCTP user message blocked all other messages in any stream until
completely sent. When using I-DATA chunks, the Transmission Sequence Number (TSN) field is not
overloaded. As a result, SCTP now can schedule the streams in different ways, and I-DATA allows user
messages interleaving (RFC 8260). Note that both peers must support the I-DATA chunk type.

9.5. NOTABLE TCP FEATURES IN RHEL 8


Red Hat Enterprise Linux 8 is distributed with TCP networking stack version 4.18, which provides higher performance, better scalability, and more stability. Performance is boosted especially for busy TCP servers with a high ingress connection rate.

Additionally, two new TCP congestion algorithms, BBR and NV, are available, offering lower latency, and
better throughput than cubic in most scenarios.

9.5.1. TCP BBR support in RHEL 8


A new TCP congestion control algorithm, Bottleneck Bandwidth and Round-trip time (BBR) is now
supported in Red Hat Enterprise Linux (RHEL) 8. BBR attempts to determine the bandwidth of the
bottleneck link and the Round-trip time (RTT). Most congestion control algorithms are based on packet loss (including CUBIC, the default Linux TCP congestion control algorithm), which causes problems on high-throughput links. BBR does not react to loss events directly; it adjusts the TCP pacing rate to match the available bandwidth. Users of TCP BBR should switch to the fq queueing setting on all the involved interfaces.

Note that users should explicitly use fq and not fq_codel.

For more details, see the tc-fq man page.
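
A minimal sketch of enabling BBR together with the fq qdisc (the interface name is a placeholder; persist the sysctl via /etc/sysctl.d/ if needed):

# tc qdisc replace dev enp1s0 root fq
# sysctl -w net.ipv4.tcp_congestion_control=bbr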

9.6. VLAN-RELATED CHANGES


9.6.1. IPVLAN virtual network drivers are now supported


In Red Hat Enterprise Linux 8.0, the kernel includes support for IPVLAN virtual network drivers. With this
update, IPVLAN virtual Network Interface Cards (NICs) enable network connectivity for multiple containers while exposing a single MAC address to the local network. This allows a single host to have many containers, overcoming the possible limitation on the number of MAC addresses supported by the peer networking equipment.

9.6.2. Certain network adapters require a firmware update to fully support 802.1ad
The firmware of certain network adapters does not fully support the 802.1ad standard, which is also called Q-in-Q or stacked virtual local area networks (VLANs). Contact your hardware vendor for details on how to verify that your network adapter uses firmware that supports the 802.1ad standard and how to update the firmware. As a result, with the correct firmware, configuring stacked VLANs on RHEL 8.0 works as expected.

9.7. NETWORK INTERFACE NAME CHANGES


In Red Hat Enterprise Linux 8, the same consistent network device naming scheme is used by default as in RHEL 7. However, certain kernel drivers, such as e1000e, nfp, qede, sfc, tg3 and bnxt_en, changed their consistent names on a fresh installation of RHEL 8. The names are preserved on upgrade from RHEL 7.

9.8. THE -OK OPTION OF THE TC COMMAND REMOVED


The -ok option of the tc command has been removed in Red Hat Enterprise Linux 8. As a workaround, users can implement code to communicate directly with the kernel via netlink; the response messages received indicate the completion and status of sent requests. An alternative for less time-critical applications is to call tc for each command separately, for example with a custom script that simulates the tc -batch behavior by printing OK for each successful tc invocation.


CHAPTER 10. KERNEL

10.1. RESOURCE CONTROL

10.1.1. Control group v2 available as a Technology Preview in RHEL 8


Control group v2 mechanism is a unified hierarchy control group. Control group v2 organizes
processes hierarchically and distributes system resources along the hierarchy in a controlled and
configurable manner.

Unlike the previous version, control group v2 has only a single hierarchy. This single hierarchy enables
the Linux kernel to:

Categorize processes based on the role of their owner.

Eliminate issues with conflicting policies of multiple hierarchies.

Control group v2 supports numerous controllers:

CPU controller regulates the distribution of CPU cycles. This controller implements:

Weight and absolute bandwidth limit models for normal scheduling policy.

Absolute bandwidth allocation model for real time scheduling policy.

Memory controller regulates the memory distribution. Currently, the following types of memory
usages are tracked:

Userland memory - page cache and anonymous memory.

Kernel data structures such as dentries and inodes.

TCP socket buffers.

I/O controller regulates the distribution of I/O resources.

Remote Direct Memory Access (RDMA) controller limits RDMA/IB specific resources that
certain processes can use. These processes are grouped through the RDMA controller.

Process number controller enables the control group to stop any new tasks from being fork()’d
or clone()’d after a certain limit.

Writeback controller acts as a mechanism, which balances conflicts between I/O and the
memory controllers.

The information above was based on cgroups-v2 online documentation. You can refer to the same link to
obtain more information about particular control group v2 controllers.

10.2. MEMORY MANAGEMENT

10.2.1. 52-bit PA for 64-bit ARM available


With this update, support for 52-bit physical addressing (PA) for the 64-bit ARM architecture is available. This provides a larger physical address space than the previous 48-bit PA.


10.2.2. 5-level page tables x86_64


In Red Hat Enterprise Linux 7, the memory bus had 48/46-bit virtual/physical memory addressing capacity, and the Linux kernel implemented 4 levels of page tables to manage the mapping of these virtual addresses to physical addresses. The physical bus addressing line put the physical memory upper limit at 64 TB.

These limits have been extended to 57/52-bit virtual/physical memory addressing, with 128 PiB of virtual address space (64 PB user / 64 PB kernel) and 4 PB of physical memory capacity.

With the extended address range, the memory management in Red Hat Enterprise Linux 8 adds support for a 5-level page table implementation to handle the expanded address range. By default, RHEL 8 disables 5-level page table support even on systems that support this feature, because of a potential performance degradation when using 5 levels of page tables if the extended virtual or physical address space is not needed. A boot argument enables systems with hardware that supports this feature to use it.

10.3. PERFORMANCE ANALYSIS AND OBSERVABILITY TOOLS

10.3.1. bpftool added to kernel


The bpftool utility that serves for inspection and simple manipulation of programs and maps based on
extended Berkeley Packet Filtering (eBPF) has been added into the Linux kernel. bpftool is a part of the
kernel source tree, and is provided by the bpftool package, which is included as a sub-package of the
kernel package.
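
For example, the eBPF programs and maps currently loaded in the kernel can be listed with:

# bpftool prog show
# bpftool map show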

10.3.2. eBPF available as a Technology Preview


The extended Berkeley Packet Filtering (eBPF) feature is available as a Technology Preview for both networking and tracing. eBPF enables the user space to attach custom programs onto a variety of points (sockets, trace points, packet reception) to receive and process data. The feature includes a new system call bpf(), which supports creating various types of maps, and also inserting various types of programs into the kernel. Note that the bpf() syscall can be successfully used only by a user with the CAP_SYS_ADMIN capability, such as a root user. See the bpf(2) man page for more information.

10.3.3. BCC is available as a Technology Preview


BPF Compiler Collection (BCC) is a user space tool kit for creating efficient kernel tracing and
manipulation programs that is available as a Technology Preview in Red Hat Enterprise Linux 8. BCC
provides tools for I/O analysis, networking, and monitoring of Linux operating systems using the
extended Berkeley Packet Filtering (eBPF).

10.4. BOOTING PROCESS

10.4.1. How to install and boot custom kernels in RHEL 8


The Boot Loader Specification (BLS) defines a scheme and file format to manage bootloader
configurations for each boot option in a drop-in directory. There is no need to manipulate the individual
drop-in configuration files. This premise is particularly relevant in Red Hat Enterprise Linux 8 because
not all architectures use the same bootloader:

x86_64, aarch64 and ppc64le with open firmware use GRUB2


ppc64le with Open Power Abstraction Layer (OPAL) uses Petitboot

s390x uses zipl

Each bootloader has a different configuration file and format that has to be modified when a new kernel
is installed or removed. In the previous versions of Red Hat Enterprise Linux the component that
permitted this work was the grubby utility. However, for Red Hat Enterprise Linux 8 the bootloader
configuration was standardized by implementing the BLS file format, where grubby works as a thin
wrapper around the BLS operations.

10.4.2. Early kdump support in RHEL 8


Previously, the kdump service started too late to register the kernel crashes that occurred in early
stages of the booting process. As a result, the crash information together with a chance for
troubleshooting was lost.

To address this problem, RHEL 8 introduced an early kdump support. To learn more about this
mechanism, see the /usr/share/doc/kexec-tools/early-kdump-howto.txt file. See also What is early
kdump support and how do I configure it?.


CHAPTER 11. HARDWARE ENABLEMENT

11.1. REMOVED HARDWARE SUPPORT


This section lists device drivers and adapters that were supported in RHEL 7 but are no longer available
in RHEL 8.0.

11.1.1. Removed device drivers


Support for the following device drivers has been removed in RHEL 8:

3w-9xxx

3w-sas

aic79xx

aoe

arcmsr

ata drivers:

acard-ahci

sata_mv

sata_nv

sata_promise

sata_qstor

sata_sil

sata_sil24

sata_sis

sata_svw

sata_sx4

sata_uli

sata_via

sata_vsc

bfa

cxgb3

cxgb3i

e1000


floppy

hptiop

initio

isci

iw_cxgb3

mptbase - This driver is left in place for virtualization use case and easy developer transition.
However it is not supported.

mptctl

mptsas - This driver is left in place for virtualization use case and easy developer transition.
However it is not supported.

mptscsih - This driver is left in place for virtualization use case and easy developer transition.
However it is not supported.

mptspi - This driver is left in place for virtualization use case and easy developer transition.
However it is not supported.

mtip32xx

mvsas

mvumi

OSD drivers:

osd

libosd

osst

pata drivers:

pata_acpi

pata_ali

pata_amd

pata_arasan_cf

pata_artop

pata_atiixp

pata_atp867x

pata_cmd64x

pata_cs5536


pata_hpt366

pata_hpt37x

pata_hpt3x2n

pata_hpt3x3

pata_it8213

pata_it821x

pata_jmicron

pata_marvell

pata_netcell

pata_ninja32

pata_oldpiix

pata_pdc2027x

pata_pdc202xx_old

pata_piccolo

pata_rdc

pata_sch

pata_serverworks

pata_sil680

pata_sis

pata_via

pdc_adma

pm80xx(pm8001)

pmcraid

qla3xxx - This driver is left in place for virtualization use case and easy developer transition.
However it is not supported.

stex

sx8

tulip

ufshcd

wireless drivers:


carl9170

iwl4965

iwl3945

mwl8k

rt73usb

rt61pci

rtl8187

wil6210

11.1.2. Removed adapters


Support for the adapters listed below has been removed in RHEL 8. Support for adapters from the mentioned drivers other than those listed remains unchanged.

PCI IDs are in the format of vendor:device:subvendor:subdevice. If the subdevice or subvendor:subdevice entry is not listed, devices with any values of such missing entries have been removed.

To check the PCI IDs of the hardware on your system, run the lspci -nn command.

The following adapters from the aacraid driver have been removed:

PERC 2/Si (Iguana/PERC2Si), PCI ID 0x1028:0x0001:0x1028:0x0001

PERC 3/Di (Opal/PERC3Di), PCI ID 0x1028:0x0002:0x1028:0x0002

PERC 3/Si (SlimFast/PERC3Si), PCI ID 0x1028:0x0003:0x1028:0x0003

PERC 3/Di (Iguana FlipChip/PERC3DiF), PCI ID 0x1028:0x0004:0x1028:0x00d0

PERC 3/Di (Viper/PERC3DiV), PCI ID 0x1028:0x0002:0x1028:0x00d1

PERC 3/Di (Lexus/PERC3DiL), PCI ID 0x1028:0x0002:0x1028:0x00d9

PERC 3/Di (Jaguar/PERC3DiJ), PCI ID 0x1028:0x000a:0x1028:0x0106

PERC 3/Di (Dagger/PERC3DiD), PCI ID 0x1028:0x000a:0x1028:0x011b

PERC 3/Di (Boxster/PERC3DiB), PCI ID 0x1028:0x000a:0x1028:0x0121

catapult, PCI ID 0x9005:0x0283:0x9005:0x0283

tomcat, PCI ID 0x9005:0x0284:0x9005:0x0284

Adaptec 2120S (Crusader), PCI ID 0x9005:0x0285:0x9005:0x0286

Adaptec 2200S (Vulcan), PCI ID 0x9005:0x0285:0x9005:0x0285

Adaptec 2200S (Vulcan-2m), PCI ID 0x9005:0x0285:0x9005:0x0287

Legend S220 (Legend Crusader), PCI ID 0x9005:0x0285:0x17aa:0x0286


Legend S230 (Legend Vulcan), PCI ID 0x9005:0x0285:0x17aa:0x0287

Adaptec 3230S (Harrier), PCI ID 0x9005:0x0285:0x9005:0x0288

Adaptec 3240S (Tornado), PCI ID 0x9005:0x0285:0x9005:0x0289

ASR-2020ZCR SCSI PCI-X ZCR (Skyhawk), PCI ID 0x9005:0x0285:0x9005:0x028a

ASR-2025ZCR SCSI SO-DIMM PCI-X ZCR (Terminator), PCI ID 0x9005:0x0285:0x9005:0x028b

ASR-2230S + ASR-2230SLP PCI-X (Lancer), PCI ID 0x9005:0x0286:0x9005:0x028c

ASR-2130S (Lancer), PCI ID 0x9005:0x0286:0x9005:0x028d

AAR-2820SA (Intruder), PCI ID 0x9005:0x0286:0x9005:0x029b

AAR-2620SA (Intruder), PCI ID 0x9005:0x0286:0x9005:0x029c

AAR-2420SA (Intruder), PCI ID 0x9005:0x0286:0x9005:0x029d

ICP9024RO (Lancer), PCI ID 0x9005:0x0286:0x9005:0x029e

ICP9014RO (Lancer), PCI ID 0x9005:0x0286:0x9005:0x029f

ICP9047MA (Lancer), PCI ID 0x9005:0x0286:0x9005:0x02a0

ICP9087MA (Lancer), PCI ID 0x9005:0x0286:0x9005:0x02a1

ICP5445AU (Hurricane44), PCI ID 0x9005:0x0286:0x9005:0x02a3

ICP9085LI (Marauder-X), PCI ID 0x9005:0x0285:0x9005:0x02a4

ICP5085BR (Marauder-E), PCI ID 0x9005:0x0285:0x9005:0x02a5

ICP9067MA (Intruder-6), PCI ID 0x9005:0x0286:0x9005:0x02a6

Themisto Jupiter Platform, PCI ID 0x9005:0x0287:0x9005:0x0800

Themisto Jupiter Platform, PCI ID 0x9005:0x0200:0x9005:0x0200

Callisto Jupiter Platform, PCI ID 0x9005:0x0286:0x9005:0x0800

ASR-2020SA SATA PCI-X ZCR (Skyhawk), PCI ID 0x9005:0x0285:0x9005:0x028e

ASR-2025SA SATA SO-DIMM PCI-X ZCR (Terminator), PCI ID 0x9005:0x0285:0x9005:0x028f

AAR-2410SA PCI SATA 4ch (Jaguar II), PCI ID 0x9005:0x0285:0x9005:0x0290

CERC SATA RAID 2 PCI SATA 6ch (DellCorsair), PCI ID 0x9005:0x0285:0x9005:0x0291

AAR-2810SA PCI SATA 8ch (Corsair-8), PCI ID 0x9005:0x0285:0x9005:0x0292

AAR-21610SA PCI SATA 16ch (Corsair-16), PCI ID 0x9005:0x0285:0x9005:0x0293

ESD SO-DIMM PCI-X SATA ZCR (Prowler), PCI ID 0x9005:0x0285:0x9005:0x0294


AAR-2610SA PCI SATA 6ch, PCI ID 0x9005:0x0285:0x103C:0x3227

ASR-2240S (SabreExpress), PCI ID 0x9005:0x0285:0x9005:0x0296

ASR-4005, PCI ID 0x9005:0x0285:0x9005:0x0297

IBM 8i (AvonPark), PCI ID 0x9005:0x0285:0x1014:0x02F2

IBM 8i (AvonPark Lite), PCI ID 0x9005:0x0285:0x1014:0x0312

IBM 8k/8k-l8 (Aurora), PCI ID 0x9005:0x0286:0x1014:0x9580

IBM 8k/8k-l4 (Aurora Lite), PCI ID 0x9005:0x0286:0x1014:0x9540

ASR-4000 (BlackBird), PCI ID 0x9005:0x0285:0x9005:0x0298

ASR-4800SAS (Marauder-X), PCI ID 0x9005:0x0285:0x9005:0x0299

ASR-4805SAS (Marauder-E), PCI ID 0x9005:0x0285:0x9005:0x029a

ASR-3800 (Hurricane44), PCI ID 0x9005:0x0286:0x9005:0x02a2

Perc 320/DC, PCI ID 0x9005:0x0285:0x1028:0x0287

Adaptec 5400S (Mustang), PCI ID 0x1011:0x0046:0x9005:0x0365

Adaptec 5400S (Mustang), PCI ID 0x1011:0x0046:0x9005:0x0364

Dell PERC2/QC, PCI ID 0x1011:0x0046:0x9005:0x1364

HP NetRAID-4M, PCI ID 0x1011:0x0046:0x103c:0x10c2

Dell Catchall, PCI ID 0x9005:0x0285:0x1028

Legend Catchall, PCI ID 0x9005:0x0285:0x17aa

Adaptec Catch All, PCI ID 0x9005:0x0285

Adaptec Rocket Catch All, PCI ID 0x9005:0x0286

Adaptec NEMER/ARK Catch All, PCI ID 0x9005:0x0288

The following adapters from the mpt2sas driver have been removed:

SAS2004, PCI ID 0x1000:0x0070

SAS2008, PCI ID 0x1000:0x0072

SAS2108_1, PCI ID 0x1000:0x0074

SAS2108_2, PCI ID 0x1000:0x0076

SAS2108_3, PCI ID 0x1000:0x0077

SAS2116_1, PCI ID 0x1000:0x0064

SAS2116_2, PCI ID 0x1000:0x0065


SSS6200, PCI ID 0x1000:0x007E

The following adapters from the megaraid_sas driver have been removed:

Dell PERC5, PCI ID 0x1028:0x0015

SAS1078R, PCI ID 0x1000:0x0060

SAS1078DE, PCI ID 0x1000:0x007C

SAS1064R, PCI ID 0x1000:0x0411

VERDE_ZCR, PCI ID 0x1000:0x0413

SAS1078GEN2, PCI ID 0x1000:0x0078

SAS0079GEN2, PCI ID 0x1000:0x0079

SAS0073SKINNY, PCI ID 0x1000:0x0073

SAS0071SKINNY, PCI ID 0x1000:0x0071

The following adapters from the qla2xxx driver have been removed:

ISP24xx, PCI ID 0x1077:0x2422

ISP24xx, PCI ID 0x1077:0x2432

ISP2422, PCI ID 0x1077:0x5422

QLE220, PCI ID 0x1077:0x5432

QLE81xx, PCI ID 0x1077:0x8001

QLE10000, PCI ID 0x1077:0xF000

QLE84xx, PCI ID 0x1077:0x8044

QLE8000, PCI ID 0x1077:0x8432

QLE82xx, PCI ID 0x1077:0x8021

The following adapters from the qla4xxx driver have been removed:

QLOGIC_ISP8022, PCI ID 0x1077:0x8022

QLOGIC_ISP8324, PCI ID 0x1077:0x8032

QLOGIC_ISP8042, PCI ID 0x1077:0x8042

The following adapters from the be2iscsi driver have been removed:

BladeEngine 2 (BE2) devices

BladeEngine2 10Gb iSCSI Initiator (generic), PCI ID 0x19a2:0x212

OneConnect OCe10101, OCm10101, OCe10102, OCm10102 BE2 adapter family, PCI ID 0x19a2:0x702


OCe10100 BE2 adapter family, PCI ID 0x19a2:0x703

BladeEngine 3 (BE3) devices

OneConnect TOMCAT iSCSI, PCI ID 0x19a2:0x0712

BladeEngine3 iSCSI, PCI ID 0x19a2:0x0222

The following Ethernet adapters controlled by the be2net driver have been removed:

BladeEngine 2 (BE2) devices

OneConnect TIGERSHARK NIC, PCI ID 0x19a2:0x0700

BladeEngine2 Network Adapter, PCI ID 0x19a2:0x0211

BladeEngine 3 (BE3) devices

OneConnect TOMCAT NIC, PCI ID 0x19a2:0x0710

BladeEngine3 Network Adapter, PCI ID 0x19a2:0x0221

The following adapters from the lpfc driver have been removed:

BladeEngine 2 (BE2) devices

OneConnect TIGERSHARK FCoE, PCI ID 0x19a2:0x0704

BladeEngine 3 (BE3) devices

OneConnect TOMCAT FCoE, PCI ID 0x19a2:0x0714

Fibre Channel (FC) devices

FIREFLY, PCI ID 0x10df:0x1ae5

PROTEUS_VF, PCI ID 0x10df:0xe100

BALIUS, PCI ID 0x10df:0xe131

PROTEUS_PF, PCI ID 0x10df:0xe180

RFLY, PCI ID 0x10df:0xf095

PFLY, PCI ID 0x10df:0xf098

LP101, PCI ID 0x10df:0xf0a1

TFLY, PCI ID 0x10df:0xf0a5

BSMB, PCI ID 0x10df:0xf0d1

BMID, PCI ID 0x10df:0xf0d5

ZSMB, PCI ID 0x10df:0xf0e1

ZMID, PCI ID 0x10df:0xf0e5

NEPTUNE, PCI ID 0x10df:0xf0f5


NEPTUNE_SCSP, PCI ID 0x10df:0xf0f6

NEPTUNE_DCSP, PCI ID 0x10df:0xf0f7

FALCON, PCI ID 0x10df:0xf180

SUPERFLY, PCI ID 0x10df:0xf700

DRAGONFLY, PCI ID 0x10df:0xf800

CENTAUR, PCI ID 0x10df:0xf900

PEGASUS, PCI ID 0x10df:0xf980

THOR, PCI ID 0x10df:0xfa00

VIPER, PCI ID 0x10df:0xfb00

LP10000S, PCI ID 0x10df:0xfc00

LP11000S, PCI ID 0x10df:0xfc10

LPE11000S, PCI ID 0x10df:0xfc20

PROTEUS_S, PCI ID 0x10df:0xfc50

HELIOS, PCI ID 0x10df:0xfd00

HELIOS_SCSP, PCI ID 0x10df:0xfd11

HELIOS_DCSP, PCI ID 0x10df:0xfd12

ZEPHYR, PCI ID 0x10df:0xfe00

HORNET, PCI ID 0x10df:0xfe05

ZEPHYR_SCSP, PCI ID 0x10df:0xfe11

ZEPHYR_DCSP, PCI ID 0x10df:0xfe12

Lancer FCoE CNA devices

OCe15104-FM, PCI ID 0x10df:0xe260

OCe15102-FM, PCI ID 0x10df:0xe260

OCm15108-F-P, PCI ID 0x10df:0xe260

11.1.3. Other removed hardware support

11.1.3.1. AGP graphics cards are no longer supported

Graphics cards using the Accelerated Graphics Port (AGP) bus are not supported in Red Hat Enterprise Linux 8. Graphics cards using the PCI Express bus are the recommended replacement.

11.1.3.2. FCoE software removal


Fibre Channel over Ethernet (FCoE) software has been removed from Red Hat Enterprise Linux 8.
Specifically, the fcoe.ko kernel module is no longer available for creating software FCoE interfaces over
Ethernet adapters and drivers. This change is due to a lack of industry adoption for software-managed
FCoE.

Specific changes to Red Hat Enterprise 8 include:

The fcoe.ko kernel module is no longer available. This removes support for software FCoE with
Data Center Bridging enabled Ethernet adapters and drivers.

Link-level software configuration via Data Center Bridging eXchange (DCBX) using lldpad is no
longer supported for FCoE.

The fcoe-utils tools (specifically fcoemon) are configured by default not to validate DCB configuration or communicate with lldpad.

The lldpad integration in fcoemon might be permanently disabled.

The libhbaapi and libhbalinux libraries are no longer used by fcoe-utils, and will not undergo
any direct testing from Red Hat.

Support for the following remains unchanged:

Currently supported offloading FCoE adapters that appear as Fibre Channel adapters to the
operating system and do not use the fcoe-utils management tools, unless stated in a separate
note. This applies to select adapters supported by the lpfc FC driver. Note that the bfa driver is
not included in Red Hat Enterprise Linux 8.

Currently supported offloading FCoE adapters that do use the fcoe-utils management tools
but have their own kernel drivers instead of fcoe.ko and manage DCBX configuration in their
drivers and/or firmware, unless stated in a separate note. The fnic, bnx2fc, and qedf drivers will
continue to be fully supported in Red Hat Enterprise Linux 8.

The libfc.ko and libfcoe.ko kernel modules that are required for some of the supported drivers
covered by the previous statement.

For more information, see Section 12.2.8, “Software FCoE and Fibre Channel no longer support the
target mode”.

11.1.3.3. The e1000 network driver is not supported in RHEL 8

In Red Hat Enterprise Linux 8, the e1000 network driver is not supported. This affects both bare metal
and virtual environments. However, the newer e1000e network driver continues to be fully supported in
RHEL 8.

11.1.3.4. RHEL 8 does not support the tulip driver

With this update, the tulip network driver is no longer supported. As a consequence, when using RHEL 8
on a Generation 1 virtual machine (VM) on the Microsoft Hyper-V hypervisor, the "Legacy Network
Adapter" device does not work, which causes PXE installation of such VMs to fail.

For the PXE installation to work, install RHEL 8 on a Generation 2 Hyper-V VM. If you require a RHEL 8
Generation 1 VM, use ISO installation.

11.1.3.5. The qla2xxx driver no longer supports target mode


Support for target mode with the qla2xxx QLogic Fibre Channel driver has been disabled. The effects
of this change are:

The kernel no longer provides the tcm_qla2xxx module.

The rtslib library and the targetcli utility no longer support qla2xxx.

Initiator mode with qla2xxx is still supported.


CHAPTER 12. FILE SYSTEMS AND STORAGE

12.1. FILE SYSTEMS

12.1.1. Btrfs has been removed


The Btrfs file system has been removed in Red Hat Enterprise Linux 8. This includes the following
components:

The btrfs.ko kernel module

The btrfs-progs package

The snapper package

You can no longer create, mount, or install on Btrfs file systems in Red Hat Enterprise Linux 8. The
Anaconda installer and the Kickstart commands no longer support Btrfs.

12.1.2. XFS now supports shared copy-on-write data extents


The XFS file system supports shared copy-on-write data extent functionality. This feature enables two
or more files to share a common set of data blocks. When either of the files sharing common blocks
changes, XFS breaks the link to common blocks and creates a new file. This is similar to the copy-on-
write (COW) functionality found in other file systems.

Shared copy-on-write data extents are:

Fast
Creating shared copies does not utilize disk I/O.
Space-efficient
Shared blocks do not consume additional disk space.
Transparent
Files sharing common blocks act like regular files.

Userspace utilities can use shared copy-on-write data extents for:

Efficient file cloning, such as with the cp --reflink command

Per-file snapshots

This functionality is also used by kernel subsystems such as Overlayfs and NFS for more efficient
operation.

Shared copy-on-write data extents are now enabled by default when creating an XFS file system,
starting with the xfsprogs package version 4.17.0-2.el8.

Note that Direct Access (DAX) devices currently do not support XFS with shared copy-on-write data
extents. To create an XFS file system without this feature, use the following command:

# mkfs.xfs -m reflink=0 block-device

Red Hat Enterprise Linux 7 can mount XFS file systems with shared copy-on-write data extents only in
the read-only mode.


12.1.3. The ext4 file system now supports metadata checksums


With this update, ext4 metadata is protected by checksums. This enables the file system to recognize
the corrupt metadata, which avoids damage and increases the file system resilience.

12.1.4. The /etc/sysconfig/nfs file and legacy NFS service names are no longer available

In Red Hat Enterprise Linux 8.0, the NFS configuration has moved from the /etc/sysconfig/nfs
configuration file, which was used in Red Hat Enterprise Linux 7, to /etc/nfs.conf.

The /etc/nfs.conf file uses a different syntax. Red Hat Enterprise Linux 8 attempts to automatically
convert all options from /etc/sysconfig/nfs to /etc/nfs.conf when upgrading from Red Hat Enterprise
Linux 7.

Both configuration files are supported in Red Hat Enterprise Linux 7. Red Hat recommends that you use
the new /etc/nfs.conf file to make NFS configuration in all versions of Red Hat Enterprise Linux
compatible with automated configuration systems.
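
A small illustrative /etc/nfs.conf excerpt (section and option names follow the nfs.conf(5) man page; values are examples only):

[nfsd]
threads=8
vers4.2=y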

Additionally, the following NFS service aliases have been removed and replaced by their upstream
names:

nfs.service, replaced by nfs-server.service

nfs-secure.service, replaced by rpc-gssd.service

rpcgssd.service, replaced by rpc-gssd.service

nfs-idmap.service, replaced by nfs-idmapd.service

rpcidmapd.service, replaced by nfs-idmapd.service

nfs-lock.service, replaced by rpc-statd.service

nfslock.service, replaced by rpc-statd.service

12.2. STORAGE

12.2.1. The BOOM boot manager simplifies the process of creating boot entries
BOOM is a boot manager for Linux systems that use boot loaders supporting the BootLoader
Specification for boot entry configuration. It enables flexible boot configuration and simplifies the
creation of new or modified boot entries: for example, to boot snapshot images of the system created
using LVM.

BOOM does not modify the existing boot loader configuration, and only inserts additional entries. The
existing configuration is maintained, and any distribution integration, such as kernel installation and
update scripts, continue to function as before.

BOOM has a simplified command-line interface (CLI) and API that ease the task of creating boot
entries.

12.2.2. Stratis is now available


Stratis is a new local storage manager. It provides managed file systems on top of pools of storage with
additional features to the user.


Stratis enables you to more easily perform storage tasks such as:

Manage snapshots and thin provisioning

Automatically grow file system sizes as needed

Maintain file systems

To administer Stratis storage, use the stratis utility, which communicates with the stratisd background
service.

Stratis is provided as a Technology Preview.

For more information, see the Stratis documentation: Managing layered local storage with Stratis .
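
A minimal sketch of the stratis workflow, assuming a spare block device /dev/vdb and placeholder pool and file system names:

# stratis pool create mypool /dev/vdb
# stratis filesystem create mypool myfs
# mount /dev/stratis/mypool/myfs /mnt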

12.2.3. LUKS2 is now the default format for encrypting volumes


In RHEL 8, the LUKS version 2 (LUKS2) format replaces the legacy LUKS (LUKS1) format. The dm-crypt subsystem and the cryptsetup tool now use LUKS2 as the default format for encrypted volumes. LUKS2 provides encrypted volumes with metadata redundancy and auto-recovery in case of a partial metadata corruption event.

Due to its internal flexible layout, LUKS2 is also an enabler of future features. It supports auto-unlocking through the generic kernel-keyring token built into libcryptsetup, which allows users to unlock LUKS2 volumes using a passphrase stored in the kernel-keyring retention service.

Other notable enhancements include:

The protected key setup using the wrapped key cipher scheme.

Easier integration with Policy-Based Decryption (Clevis).

Up to 32 key slots - LUKS1 provides only 8 key slots.

For more details, see the cryptsetup(8) and cryptsetup-reencrypt(8) man pages.
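
As a hedged example (the device name is a placeholder), formatting a volume with cryptsetup now produces LUKS2 by default, and the resulting format can be verified with luksDump:

# cryptsetup luksFormat /dev/vdb1
# cryptsetup luksDump /dev/vdb1 | grep Version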

12.2.4. Multiqueue scheduling on block devices


Block devices now use multiqueue scheduling in Red Hat Enterprise Linux 8. This enables the block layer
performance to scale well with fast solid-state drives (SSDs) and multi-core systems.

The SCSI Multiqueue (scsi-mq) driver is now enabled by default, and the kernel boots with the
scsi_mod.use_blk_mq=Y option. This change is consistent with the upstream Linux kernel.

Device Mapper Multipath (DM Multipath) requires the scsi-mq driver to be active.
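
One way to confirm the setting at runtime (a sketch, not a required step) is to read the module parameter from sysfs; a value of Y indicates that the multiqueue path is in use:

# cat /sys/module/scsi_mod/parameters/use_blk_mq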

12.2.5. VDO now supports all architectures


Virtual Data Optimizer (VDO) is now available on all of the architectures supported by RHEL 8.

12.2.6. VDO no longer supports read cache


The read cache functionality has been removed from Virtual Data Optimizer (VDO). The read cache is
always disabled on VDO volumes, and you can no longer enable it using the --readCache option of the
vdo utility.


Red Hat might reintroduce the VDO read cache in a later Red Hat Enterprise Linux release, using a
different implementation.

12.2.7. The dmraid package has been removed

The dmraid package has been removed from Red Hat Enterprise Linux 8. Users requiring support for
combined hardware and software RAID host bus adapters (HBA) should use the mdadm utility, which
supports native MD software RAID, the SNIA RAID Common Disk Data Format (DDF), and the Intel®
Matrix Storage Manager (IMSM) formats.

12.2.8. Software FCoE and Fibre Channel no longer support the target mode
Software FCoE: NIC Software FCoE target functionality is removed in Red Hat Enterprise Linux
8.0.

Fibre Channel no longer supports the target mode. Target mode is disabled for the qla2xxx
QLogic Fibre Channel driver in Red Hat Enterprise Linux 8.0.

For more information, see Section 11.1.3.2, “FCoE software removal” .

12.2.9. The detection of marginal paths in DM Multipath has been improved


The multipathd service now supports improved detection of marginal paths. This helps multipath
devices avoid paths that are likely to fail repeatedly, and improves performance. Marginal paths are
paths with persistent but intermittent I/O errors.

The following options in the /etc/multipath.conf file control marginal paths behavior:

marginal_path_double_failed_time

marginal_path_err_sample_time

marginal_path_err_rate_threshold

marginal_path_err_recheck_gap_time

DM Multipath disables a path and tests it with repeated I/O for the configured sample time if:

the listed multipath.conf options are set,

a path fails twice in the configured time, and

other paths are available.

If the path has more than the configured err rate during this testing, DM Multipath ignores it for the
configured gap time, and then retests it to see if it is working well enough to be reinstated.

For more information, see the multipath.conf man page.
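
A hypothetical /etc/multipath.conf fragment showing the syntax of these options in the defaults section; the numeric values are illustrative only:

defaults {
    marginal_path_double_failed_time 60
    marginal_path_err_sample_time 120
    marginal_path_err_rate_threshold 450
    marginal_path_err_recheck_gap_time 300
}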

12.2.10. New overrides section of the DM Multipath configuration file

The /etc/multipath.conf file now includes an overrides section that allows you to set a configuration
value for all of your devices. These attributes are used by DM Multipath for all devices unless they are
overwritten by the attributes specified in the multipaths section of the /etc/multipath.conf file for
paths that contain the device. This functionality replaces the all_devs parameter of the devices section
of the configuration file, which is no longer supported.
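
A hypothetical fragment illustrating the new section; the attributes chosen here are only examples of settings that previously might have been applied with all_devs:

overrides {
    no_path_retry fail
    dev_loss_tmo 60
}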

12.2.11. NVMe/FC is fully supported on Broadcom Emulex and Marvell Qlogic Fibre
Channel adapters
The NVMe over Fibre Channel (NVMe/FC) transport type is now fully supported in Initiator mode when
used with Broadcom Emulex and Marvell Qlogic Fibre Channel 32Gbit adapters that feature NVMe
support.

NVMe over Fibre Channel is an additional fabric transport type for the Nonvolatile Memory Express
(NVMe) protocol, in addition to the Remote Direct Memory Access (RDMA) protocol that was
previously introduced in Red Hat Enterprise Linux.

Enabling NVMe/FC:

To enable NVMe/FC in the lpfc driver, edit the /etc/modprobe.d/lpfc.conf file and add the
following option:

lpfc_enable_fc4_type=3

To enable NVMe/FC in the qla2xxx driver, edit the /etc/modprobe.d/qla2xxx.conf file and add
the following option:

qla2xxx.ql2xnvmeenable=1

Additional restrictions:

Multipath is not supported with NVMe/FC.

NVMe clustering is not supported with NVMe/FC.

With Marvell Qlogic adapters, Red Hat Enterprise Linux does not support using NVMe/FC and
SCSI/FC on an initiator port at the same time.

kdump is not supported with NVMe/FC.

Booting from Storage Area Network (SAN) NVMe/FC is not supported.

12.2.12. Support for Data Integrity Field/Data Integrity Extension (DIF/DIX)


DIF/DIX is an addition to the SCSI Standard. It remains in Technology Preview for all HBAs and storage
arrays, except for those specifically listed as supported.

DIF/DIX increases the size of the commonly used 512 byte disk block from 512 to 520 bytes, adding the
Data Integrity Field (DIF). The DIF stores a checksum value for the data block that is calculated by the
Host Bus Adapter (HBA) when a write occurs. The storage device then confirms the checksum on
receipt, and stores both the data and the checksum. Conversely, when a read occurs, the checksum can
be verified by the storage device, and by the receiving HBA.

12.2.13. libstoragemgmt-netapp-plugin has been removed


The libstoragemgmt-netapp-plugin package used by the libStorageMgmt library has been removed. It
is no longer supported because:


The package requires the NetApp 7-mode API, which is being phased out by NetApp.

RHEL 8 has removed default support for the TLSv1.0 protocol with the
TLS_RSA_WITH_3DES_EDE_CBC_SHA cipher; as a result, using this plug-in with TLS does not work.

12.3. LVM

12.3.1. Removal of clvmd for managing shared storage devices

LVM no longer uses clvmd (cluster lvm daemon) for managing shared storage devices. Instead, LVM
now uses lvmlockd (lvm lock daemon).

For details about using lvmlockd, see the lvmlockd(8) man page. For details about using
shared storage in general, see the lvmsystemid(7) man page.

For information on using LVM in a Pacemaker cluster, see the help screen for the LVM-activate
resource agent.

For an example of a procedure to configure a shared logical volume in a Red Hat High
Availability cluster, see Configuring a GFS2 file system in a cluster .

12.3.2. Removal of lvmetad daemon

LVM no longer uses the lvmetad daemon for caching metadata, and will always read metadata from
disk. LVM disk reading has been reduced, which reduces the benefits of caching.

Previously, autoactivation of logical volumes was indirectly tied to the use_lvmetad setting in the
lvm.conf configuration file. The correct way to disable autoactivation continues to be setting
auto_activation_volume_list in the lvm.conf file.
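
As a sketch of the lvm.conf syntax (volume group and logical volume names are placeholders), autoactivation can be limited to an explicit list in the activation section:

activation {
    auto_activation_volume_list = [ "vg_system", "vg_data/lv_home" ]
}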

12.3.3. LVM can no longer manage devices formatted with the GFS pool volume
manager or the lvm1 metadata format.

LVM can no longer manage devices formatted with the GFS pool volume manager or the lvm1
metadata format. If you created your logical volume before Red Hat Enterprise Linux 4 was introduced,
this may affect you. Volume groups using the lvm1 format should be converted to the lvm2 format
using the vgconvert command.
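
For example, a conversion run on the RHEL 7 system before upgrading might look like the following, assuming a volume group named my_vg (a sketch, not a complete procedure; back up the metadata first):

# vgconvert -M2 my_vg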

12.3.4. LVM libraries and LVM Python bindings have been removed
The lvm2app library and LVM Python bindings, which were provided by the lvm2-python-libs package,
have been removed. Red Hat recommends the following solutions instead:

The LVM D-Bus API in combination with the lvm2-dbusd service. This requires using Python
version 3.

The LVM command-line utilities with JSON formatting; this formatting has been available since
the lvm2 package version 2.02.158.

The libblockdev library, included in AppStream, for C/C++

You must port any applications using the removed libraries and bindings to the D-Bus API before
upgrading to Red Hat Enterprise Linux 8.
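
A brief sketch of the JSON reporting option mentioned above; the volume group name is a placeholder:

# lvs --reportformat json my_vg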


12.3.5. The ability to mirror the log for LVM mirrors has been removed
The mirrored log feature of mirrored LVM volumes has been removed. Red Hat Enterprise Linux (RHEL)
8 no longer supports creating or activating LVM volumes with a mirrored mirror log.

The recommended replacements are:

RAID1 LVM volumes. The main advantage of RAID1 volumes is their ability to work even in
degraded mode and to recover after a transient failure.

Disk mirror log. To convert a mirrored mirror log to disk mirror log, use the following command:
lvconvert --mirrorlog disk my_vg/my_lv.


CHAPTER 13. HIGH AVAILABILITY AND CLUSTERS


In Red Hat Enterprise Linux 8, pcs fully supports the Corosync 3 cluster engine and the Kronosnet
(knet) network abstraction layer for cluster communication. When planning an upgrade to a RHEL 8
cluster from an existing RHEL 7 cluster, some of the considerations you must take into account are as
follows:

Application versions: What version of the highly-available application will the RHEL 8 cluster
require?

Application process order: What may need to change in the start and stop processes of the
application?

Cluster infrastructure: Since pcs supports multiple network connections in RHEL 8, does the
number of NICs known to the cluster change?

Needed packages: Do you need to install all of the same packages on the new cluster?

Because of these and other considerations for running a Pacemaker cluster in RHEL 8, it is not possible
to perform in-place upgrades from RHEL 7 to RHEL 8 clusters and you must configure a new cluster in
RHEL 8. You cannot run a cluster that includes nodes running both RHEL 7 and RHEL 8.

Additionally, you should plan for the following before performing an upgrade:

Final cutover: What is the process to stop the application running on the old cluster and start it
on the new cluster to reduce application downtime?

Testing: Is it possible to test your migration strategy ahead of time in a development/test
environment?

The major differences in cluster creation and administration between RHEL 7 and RHEL 8 are listed in
the following sections.

13.1. NEW FORMATS FOR PCS CLUSTER SETUP, PCS CLUSTER NODE ADD AND PCS
CLUSTER NODE REMOVE COMMANDS

In Red Hat Enterprise Linux 8, pcs fully supports the use of node names, which are now required and
replace node addresses in the role of node identifier. Node addresses are now optional.

In the pcs host auth command, node addresses default to node names.

In the pcs cluster setup and pcs cluster node add commands, node addresses default to the
node addresses specified in the pcs host auth command.

With these changes, the formats for the commands to set up a cluster, add a node to a cluster, and
remove a node from a cluster have changed. For information on these new command formats, see the
help display for the pcs cluster setup, pcs cluster node add and pcs cluster node remove
commands.
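
A hypothetical two-node example of the new workflow (host names and the cluster name are placeholders, and the commands are a sketch rather than a full procedure):

# pcs host auth node1.example.com node2.example.com -u hacluster
# pcs cluster setup my_cluster node1.example.com node2.example.com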

13.2. MASTER RESOURCES RENAMED TO PROMOTABLE CLONE RESOURCES

Red Hat Enterprise Linux (RHEL) 8 supports Pacemaker 2.0, in which a master/slave resource is no
longer a separate type of resource but a standard clone resource with a promotable meta-attribute set
to true. The following changes have been implemented in support of this update:

It is no longer possible to create master resources with the pcs command. Instead, it is possible
to create promotable clone resources. Related keywords and commands have been changed
from master to promotable.

All existing master resources are displayed as promotable clone resources.

When managing a RHEL7 cluster in the Web UI, master resources are still called master, as
RHEL7 clusters do not support promotable clones.

13.3. NEW COMMANDS FOR AUTHENTICATING NODES IN A CLUSTER


Red Hat Enterprise Linux (RHEL) 8 incorporates the following changes to the commands used to
authenticate nodes in a cluster.

The new command for authentication is pcs host auth. This command allows users to specify
host names, addresses and pcsd ports.

The pcs cluster auth command authenticates only the nodes in a local cluster and does not
accept a node list.

It is now possible to specify an address for each node. pcs/pcsd will then communicate with
each node using the specified address. These addresses can be different than the ones
corosync uses internally.

The pcs pcsd clear-auth command has been replaced by the pcs pcsd deauth and pcs host
deauth commands. The new commands allow users to deauthenticate a single host as well as all
hosts.

Previously, node authentication was bidirectional, and running the pcs cluster auth command
caused all specified nodes to be authenticated against each other. The pcs host auth
command, however, causes only the local host to be authenticated against the specified nodes.
This allows better control of what node is authenticated against what other nodes when running
this command. On cluster setup itself, and also when adding a node, pcs automatically
synchronizes tokens on the cluster, so all nodes in the cluster are still automatically
authenticated as before and the cluster nodes can communicate with each other.

Note that these changes are not backward compatible. Nodes that were authenticated on a RHEL 7
system will need to be authenticated again.

13.4. LVM VOLUMES IN A RED HAT HIGH AVAILABILITY ACTIVE/PASSIVE CLUSTER

When configuring LVM volumes as resources in a Red Hat HA active/passive cluster in RHEL 8, you
configure the volumes as an LVM-activate resource. In RHEL 7, you configured the volumes as an LVM
resource. For an example of a cluster configuration procedure that includes configuring an LVM volume
as a resource in an active/passive cluster in RHEL 8, see Configuring an active/passive Apache HTTP
server in a Red Hat High Availability cluster.
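
As an illustrative sketch only (the resource, group, and volume group names are placeholders, and the parameters are assumptions based on the LVM-activate agent), such a resource might be created as follows:

# pcs resource create my_lvm ocf:heartbeat:LVM-activate vgname=my_vg vg_access_mode=system_id --group apachegroup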

13.5. SHARED LVM VOLUMES IN A RED HAT HIGH AVAILABILITY ACTIVE/ACTIVE CLUSTER


In RHEL 8, LVM uses the LVM lock daemon lvmlockd instead of clvmd for managing shared storage
devices in an active/active cluster. This requires that you configure the logical volumes on which you
mount a GFS2 file system as shared logical volumes.

Additionally, this requires that you use the LVM-activate resource agent to manage an LVM volume and
that you use the lvmlockd resource agent to manage the lvmlockd daemon.

For a full procedure for configuring a RHEL 8 Pacemaker cluster that includes GFS2 file systems using
shared logical volumes, see Configuring a GFS2 file system in a cluster .

13.6. GFS2 FILE SYSTEMS IN A RHEL 8 PACEMAKER CLUSTER


In RHEL 8, LVM uses the LVM lock daemon lvmlockd instead of clvmd for managing shared storage
devices in an active/active cluster as described in Section 12.3.1, “Removal of clvmd for managing
shared storage devices”.

To use GFS2 file systems that were created on a RHEL 7 system in a RHEL 8 cluster, you must
configure the logical volumes on which they are mounted as shared logical volumes in a RHEL 8 system,
and you must start locking for the volume group. For an example of the procedure that configures
existing RHEL 7 logical volumes as shared logical volumes for use in a RHEL 8 Pacemaker cluster, see
Migrating a GFS2 file system from RHEL7 to RHEL8 .
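
A rough sketch of the shared-volume steps referred to above, with placeholder names and assuming lvmlockd is already running in the cluster (see the referenced procedure for the authoritative steps):

# vgchange --lock-type dlm shared_vg
# vgchange --lockstart shared_vg
# lvchange --activate sy shared_vg/shared_lv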


CHAPTER 14. SHELLS AND COMMAND-LINE TOOLS

14.1. LOCALIZATION IS DISTRIBUTED IN MULTIPLE PACKAGES


In RHEL 8, locales and translations are no longer provided by the single glibc-common package.
Instead, every locale and language is available in a glibc-langpack-CODE package. Additionally, not all
locales are installed by default; only those selected in the installer are installed. Users must install any
further locale packages that they need separately.

The meta-packages which install extra add-on packages containing translations, dictionaries and locales
for every package installed on the system are called langpacks.

For more information, see Installing and using langpacks.
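
For example, to add German support (a hypothetical choice of language), either the langpack meta-package or the bare locale package can be installed:

# yum install langpacks-de
# yum install glibc-langpack-de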

14.2. REMOVED SUPPORT FOR ALL-NUMERIC USER AND GROUP NAMES

In Red Hat Enterprise Linux (RHEL) 8, the useradd and groupadd commands do not allow you to use
user and group names consisting purely of numeric characters. The reason for not allowing such names
is that they can be confused with the numeric user and group IDs by tools that work with both names
and IDs.

For more information, see Managing users using command-line tools.

14.3. THE NOBODY USER REPLACES NFSNOBODY


Red Hat Enterprise Linux (RHEL) 7 used the nobody user and group pair with the ID of 99 and the
nfsnobody user and group pair with the ID of 65534, which is also the default kernel overflow ID.

In RHEL 8, both of these pairs have been merged into the nobody user and group pair, which uses the ID
of 65534. The nfsnobody pair is not created in RHEL 8.

This change reduces the confusion about files that are owned by nobody but are not related to NFS.

14.4. VERSION CONTROL SYSTEMS


RHEL 8 provides the following version control systems:

Git 2.18, a distributed revision control system with a decentralized architecture.

Mercurial 4.8, a lightweight distributed version control system, designed for efficient handling
of large projects.

Subversion 1.10, a centralized version control system.

Note that the Concurrent Versions System (CVS) and Revision Control System (RCS), available in
RHEL 7, are not distributed with RHEL 8.

14.4.1. Notable changes in Subversion 1.10

Subversion 1.10 introduces a number of new features since the version 1.7 distributed in RHEL 7, as well
as the following compatibility changes:

Due to incompatibilities in the Subversion libraries used for supporting language bindings,
Python 3 bindings for Subversion 1.10 are unavailable. As a consequence, applications that
require Python bindings for Subversion are unsupported.

Repositories based on Berkeley DB are no longer supported. Before migrating, back up
repositories created with Subversion 1.7 by using the svnadmin dump command. After
installing RHEL 8, restore the repositories using the svnadmin load command (see the sketch
after this list).

Existing working copies checked out by the Subversion 1.7 client in RHEL 7 must be upgraded
to the new format before they can be used from Subversion 1.10. After installing RHEL 8, run
the svn upgrade command in each working copy.

Smartcard authentication for accessing repositories using https:// is no longer supported.
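
A minimal sketch of the dump-and-restore sequence referenced in the Berkeley DB item above; repository paths are placeholders.

On the RHEL 7 system:

$ svnadmin dump /var/svn/old-repo > repo.dump

On the RHEL 8 system, after creating the new repository:

$ svnadmin create /var/svn/new-repo
$ svnadmin load /var/svn/new-repo < repo.dump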


CHAPTER 15. DYNAMIC PROGRAMMING LANGUAGES, WEB SERVERS, DATABASE SERVERS

15.1. DYNAMIC PROGRAMMING LANGUAGES

15.1.1. Notable changes in Python

15.1.1.1. Python 3 is the default Python implementation in RHEL 8

Red Hat Enterprise Linux 8 is distributed with Python 3.6. The package might not be installed by default.
To install Python 3.6, use the yum install python3 command.

Python 2.7 is available in the python2 package. However, Python 2 will have a shorter life cycle and its
aim is to facilitate a smoother transition to Python 3 for customers.

Neither the default python package nor the unversioned /usr/bin/python executable is distributed with
RHEL 8. Customers are advised to use python3 or python2 directly. Alternatively, administrators can
configure the unversioned python command using the alternatives command.

For details, see Using Python in Red Hat Enterprise Linux 8 .

15.1.1.2. Migrating from Python 2 to Python 3

As a developer, you may want to migrate your former code that is written in Python 2 to Python 3. For
more information on how to migrate large code bases to Python 3, see The Conservative Python 3
Porting Guide.

Note that after this migration, the original Python 2 code becomes interpretable by the Python 3
interpreter and stays interpretable for the Python 2 interpreter as well.

15.1.1.3. Configuring the unversioned Python

System administrators can configure the unversioned python command on the system using the
alternatives command. Note that the required package, either python3 or python2, needs to be
installed before configuring the unversioned command to the respective version.

To configure the unversioned python command to Python 3 directly, run:

alternatives --set python /usr/bin/python3

Use an analogous command if you choose Python 2.

Alternatively, you can configure the unversioned python command interactively:

1. Run the following command:

alternatives --config python

2. Select the required version from the provided list.

To reset this configuration and remove the unversioned python command, run:


alternatives --auto python


WARNING

Additional Python-related commands, such as pip3, do not have configurable unversioned variants.

15.1.1.4. Python scripts must specify major version in hashbangs at RPM build time

In RHEL 8, executable Python scripts are expected to use hashbangs (shebangs) specifying explicitly at
least the major Python version.

The /usr/lib/rpm/redhat/brp-mangle-shebangs buildroot policy (BRP) script is run automatically when
building any RPM package. This script attempts to correct hashbangs in all executable files. When the
script encounters ambiguous Python hashbangs that do not specify the major version of Python, it
generates errors and the RPM build fails. Examples of such ambiguous hashbangs include:

#! /usr/bin/python

#! /usr/bin/env python

To modify hashbangs in the Python scripts causing these build errors at RPM build time, use the
pathfix.py script from the platform-python-devel package:

pathfix.py -pn -i %{__python3} PATH ...

Multiple PATHs can be specified. If a PATH is a directory, pathfix.py recursively scans for any Python
scripts matching the pattern ^[a-zA-Z0-9_]+\.py$, not only those with an ambiguous hashbang. Add the
command for running pathfix.py to the %prep section or at the end of the %install section.

For more information, see Handling hashbangs in Python scripts .

15.1.1.5. Python binding of the net-snmp package is unavailable

The Net-SNMP suite of tools does not provide binding for Python 3, which is the default Python
implementation in RHEL 8. Consequently, python-net-snmp, python2-net-snmp, or python3-net-snmp
packages are unavailable in RHEL 8.

15.1.1.6. Additional resources

Packaging of Python 3 RPMs

15.1.2. Notable changes in PHP

Red Hat Enterprise Linux 8 is distributed with PHP 7.2. This version introduces the following major
changes over PHP 5.4, which is available in RHEL 7:

PHP uses FastCGI Process Manager (FPM) by default (safe for use with a threaded httpd)


The php_value and php-flag variables should no longer be used in the httpd configuration files;
they should be set in pool configuration instead: /etc/php-fpm.d/*.conf

PHP script errors and warnings are logged to the /var/log/php-fpm/www-error.log file instead
of /var/log/httpd/error.log

When changing the PHP max_execution_time configuration variable, the httpd ProxyTimeout
setting should be increased to match

The user running PHP scripts is now configured in the FPM pool configuration (the /etc/php-
fpm.d/www.conf file; the apache user is the default); see the sketch after this list

The php-fpm service needs to be restarted after a configuration change or after a new
extension is installed

The zip extension has been moved from the php-common package to a separate package,
php-pecl-zip

The following extensions have been removed:

aspell

mysql (note that the mysqli and pdo_mysql extensions are still available, provided by php-
mysqlnd package)

memcache
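
A hypothetical fragment of /etc/php-fpm.d/www.conf illustrating the pool-level settings mentioned above; the values are examples only:

[www]
user = apache
group = apache
php_value[max_execution_time] = 60

After editing the pool configuration, restart the php-fpm service for the change to take effect.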

15.1.3. Notable changes in Perl

Perl 5.26, distributed with RHEL 8, introduces the following changes over the version available in RHEL
7:

Unicode 9.0 is now supported.

New op-entry, loading-file, and loaded-file SystemTap probes are provided.

Copy-on-write mechanism is used when assigning scalars for improved performance.

The IO::Socket::IP module for handling IPv4 and IPv6 sockets transparently has been added.

The Config::Perl::V module to access perl -V data in a structured way has been added.

A new perl-App-cpanminus package has been added, which contains the cpanm utility for
getting, extracting, building, and installing modules from the Comprehensive Perl Archive
Network (CPAN) repository.

The current directory . has been removed from the @INC module search path for security
reasons.

The do statement now returns a deprecation warning when it fails to load a file because of the
behavioral change described above.

The do subroutine(LIST) call is no longer supported and results in a syntax error.

Hashes are randomized by default now. The order in which keys and values are returned from a
hash changes on each perl run. To disable the randomization, set the PERL_PERTURB_KEYS
environment variable to 0.


Unescaped literal { characters in regular expression patterns are no longer permissible.

Lexical scope support for the $_ variable has been removed.

Using the defined operator on an array or a hash results in a fatal error.

Importing functions from the UNIVERSAL module results in a fatal error.

The find2perl, s2p, a2p, c2ph, and pstruct tools have been removed.

The ${^ENCODING} facility has been removed. The encoding pragma’s default mode is no
longer supported. To write source code in an encoding other than UTF-8, use the encoding
pragma’s Filter option.

The perl packaging is now aligned with upstream. The perl package installs also core modules,
while the /usr/bin/perl interpreter is provided by the perl-interpreter package. In previous
releases, the perl package included just a minimal interpreter, whereas the perl-core package
included both the interpreter and the core modules.

The IO::Socket::SSL Perl module no longer loads a certificate authority certificate from the
./certs/my-ca.pem file or the ./ca directory, a server private key from the ./certs/server-
key.pem file, a server certificate from the ./certs/server-cert.pem file, a client private key from
the ./certs/client-key.pem file, and a client certificate from the ./certs/client-cert.pem file.
Specify the paths to the files explicitly instead.

15.1.4. Notable changes in Ruby

RHEL 8 provides Ruby 2.5, which introduces numerous new features and enhancements over Ruby
2.0.0 available in RHEL 7. Notable changes include:

Incremental garbage collector has been added.

The Refinements syntax has been added.

Symbols are now garbage collected.

The $SAFE=2 and $SAFE=3 safe levels are now obsolete.

The Fixnum and Bignum classes have been unified into the Integer class.

Performance has been improved by optimizing the Hash class, improved access to instance
variables, and the Mutex class being smaller and faster.

Certain old APIs have been deprecated.

Bundled libraries, such as RubyGems, Rake, RDoc, Psych, Minitest, and test-unit, have been
updated.

Other libraries, such as mathn, DL, ext/tk, and XMLRPC, which were previously distributed with
Ruby, are deprecated or no longer included.

The SemVer versioning scheme is now used for Ruby versioning.

15.1.5. Notable changes in SWIG

RHEL 8 includes the Simplified Wrapper and Interface Generator (SWIG) version 3.0, which provides
numerous new features, enhancements, and bug fixes over the version 2.0 distributed in RHEL 7. Most
notably, support for the C++11 standard has been implemented. SWIG now also supports Go 1.6, PHP 7,
Octave 4.2, and Python 3.5.

15.1.6. Node.js new in RHEL

Node.js, a software development platform for building fast and scalable network applications in the
JavaScript programming language, is provided for the first time in RHEL. It was previously available only
as a Software Collection. RHEL 8 provides Node.js 10.

15.1.7. Tcl
Tool command language (Tcl) is a dynamic programming language. The interpreter for this language,
together with the C library, is provided by the tcl package.

Using Tcl paired with Tk (Tcl/Tk) enables creating cross-platform GUI applications. Tk is provided by
the tk package.

Note that Tk can refer to any of the following:

A programming toolkit for multiple languages

Tk C library bindings, available for multiple languages such as C, Ruby, Perl, and Python

A wish interpreter that instantiates a Tk console

A Tk extension that adds a number of new commands to a particular Tcl interpreter

15.1.7.1. Notable changes in Tcl/Tk 8.6

RHEL 8 is distributed with Tcl/Tk version 8.6, which provides multiple notable changes over Tcl/Tk
version 8.5:

Object-oriented programming support

Stackless evaluation implementation

Enhanced exceptions handling

Collection of third-party packages built and installed with Tcl

Multi-thread operations enabled

SQL database-powered scripts support

IPv6 networking support

Built-in Zlib compression

List processing
Two new commands, lmap and dict map are available, which allow the expression of
transformations over Tcl containers.

Stacked channels by script


Two new commands, chan push and chan pop, are available, which allow adding or removing
transformations to or from I/O channels.
For more detailed information about Tcl/Tk version 8.6 changes and new features, see the following
resources:

Configuring basic system settings

Changes in Tcl/Tk 8.6

If you need to migrate to Tcl/Tk 8.6, see Migrating to Tcl/Tk 8.6 .

15.2. WEB SERVERS

15.2.1. Notable changes in the Apache HTTP Server


The Apache HTTP Server has been updated from version 2.4.6 to version 2.4.37 between RHEL 7 and
RHEL 8. This updated version includes several new features, but maintains backwards compatibility with
the RHEL 7 version at the level of configuration and Application Binary Interface (ABI) of external
modules.

New features include:

HTTP/2 support is now provided by the mod_http2 package, which is a part of the httpd
module.

systemd socket activation is supported. See httpd.socket(8) man page for more details.

Multiple new modules have been added:

mod_proxy_hcheck - a proxy health-check module

mod_proxy_uwsgi - a Web Server Gateway Interface (WSGI) proxy

mod_proxy_fdpass - provides support for passing the socket of the client to another process

mod_cache_socache - an HTTP cache using, for example, memcache backend

mod_md - an ACME protocol SSL/TLS certificate service

The following modules now load by default:

mod_request

mod_macro

mod_watchdog

A new subpackage, httpd-filesystem, has been added, which contains the basic directory layout
for the Apache HTTP Server including the correct permissions for the directories.

Instantiated service support, httpd@.service has been introduced. See the httpd.service man
page for more information.

A new httpd-init.service replaces the %post script to create a self-signed mod_ssl key pair.

Automated TLS certificate provisioning and renewal using the Automatic Certificate
Management Environment (ACME) protocol is now supported with the mod_md package (for
use with certificate providers such as Let’s Encrypt).


The Apache HTTP Server now supports loading TLS certificates and private keys from
hardware security tokens directly from PKCS#11 modules. As a result, a mod_ssl configuration
can now use PKCS#11 URLs to identify the TLS private key, and, optionally, the TLS certificate
in the SSLCertificateKeyFile and SSLCertificateFile directives.

A new ListenFree directive in the /etc/httpd/conf/httpd.conf file is now supported.


Similarly to the Listen directive, ListenFree provides information about IP addresses, ports, or
IP address-and-port combinations that the server listens to. However, with ListenFree, the
IP_FREEBIND socket option is enabled by default. Hence, httpd is allowed to bind to a nonlocal
IP address or to an IP address that does not exist yet. This allows httpd to listen on a socket
without requiring the underlying network interface or the specified dynamic IP address to be up
at the time when httpd is trying to bind to it.

Note that the ListenFree directive is currently available only in RHEL 8.

For more details on ListenFree, see the following table:

Table 15.1. ListenFree directive’s syntax, status, and modules

Syntax: ListenFree [IP-address:]portnumber [protocol]
Status: MPM
Modules: event, worker, prefork, mpm_winnt, mpm_netware, mpmt_os2
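
As an illustrative configuration line only (the address and port are placeholders), the directive can be added to /etc/httpd/conf/httpd.conf:

ListenFree 192.0.2.10:80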

Other notable changes include:

The following modules have been removed:

mod_file_cache

mod_nss

mod_perl

The default type of the DBM authentication database used by the Apache HTTP Server in
RHEL 8 has been changed from SDBM to db5.

The mod_wsgi module for the Apache HTTP Server has been updated to Python 3. WSGI
applications are now supported only with Python 3, and must be migrated from Python 2.

The multi-processing module (MPM) configured by default with the Apache HTTP Server has
changed from a multi-process, forked model (known as prefork) to a high-performance multi-
threaded model, event.
Any third-party modules that are not thread-safe need to be replaced or removed. To change
the configured MPM, edit the /etc/httpd/conf.modules.d/00-mpm.conf file. See the
httpd.service(8) man page for more information.

The minimum UID and GID allowed for users by suEXEC are now 1000 and 500, respectively
(previously 100 and 100).

The /etc/sysconfig/httpd file is no longer a supported interface for setting environment
variables for the httpd service. The httpd.service(8) man page has been added for the systemd
service.

Stopping the httpd service now uses a “graceful stop” by default.


The mod_auth_kerb module has been replaced by the mod_auth_gssapi module.

For instructions on deploying, see Setting up the Apache HTTP web server.

15.2.2. The nginx web server new in RHEL

RHEL 8 introduces nginx 1.14, a web and proxy server supporting HTTP and other protocols, with a
focus on high concurrency, performance, and low memory usage. nginx was previously available only as
a Software Collection.

The nginx web server now supports loading TLS private keys from hardware security tokens directly
from PKCS#11 modules. As a result, an nginx configuration can use PKCS#11 URLs to identify the TLS
private key in the ssl_certificate_key directive.

15.2.3. Apache Tomcat has been removed


The Apache Tomcat server has been removed from Red Hat Enterprise Linux. Apache Tomcat is a
servlet container for the Java Servlet and JavaServer Pages (JSP) technologies. Red Hat recommends
that users requiring a servlet container use the JBoss Web Server.

15.3. PROXY CACHING SERVERS

15.3.1. Varnish Cache new in RHEL

Varnish Cache, a high-performance HTTP reverse proxy, is provided for the first time in RHEL. It was
previously available only as a Software Collection. Varnish Cache stores files or fragments of files in
memory that are used to reduce the response time and network bandwidth consumption on future
equivalent requests. RHEL 8.0 is distributed with Varnish Cache 6.0.

15.3.2. Notable changes in Squid

RHEL 8.0 is distributed with Squid 4.4, a high-performance proxy caching server for web clients,
supporting FTP, Gopher, and HTTP data objects. This release provides numerous new features,
enhancements, and bug fixes over the version 3.5 available in RHEL 7.

Notable changes include:

Configurable helper queue size

Changes to helper concurrency channels

Changes to the helper binary

Secure Internet Content Adaptation Protocol (ICAP)

Improved support for Symmetric Multi Processing (SMP)

Improved process management

Removed support for SSL

Removed Edge Side Includes (ESI) custom parser

Multiple configuration changes


15.4. DATABASE SERVERS


RHEL 8 provides the following database servers:

MySQL 8.0, a multi-user, multi-threaded SQL database server. It consists of the MySQL server
daemon, mysqld, and many client programs.

MariaDB 10.3, a multi-user, multi-threaded SQL database server. For all practical purposes,
MariaDB is binary-compatible with MySQL.

PostgreSQL 10 and PostgreSQL 9.6, an advanced object-relational database management
system (DBMS).

Redis 5, an advanced key-value store. It is often referred to as a data structure server because
keys can contain strings, hashes, lists, sets, and sorted sets. Redis is provided for the first time
in RHEL.

Note that the NoSQL MongoDB database server is not included in RHEL 8.0 because it uses the Server
Side Public License (SSPL).

Database servers are not installable in parallel


The mariadb and mysql modules cannot be installed in parallel in RHEL 8.0 due to conflicting RPM
packages.

By design, it is impossible to install more than one version (stream) of the same module in parallel. For
example, you need to choose only one of the available streams from the postgresql module, either 10
(default) or 9.6. Parallel installation of components is possible in Red Hat Software Collections for RHEL
6 and RHEL 7. In RHEL 8, different versions of database servers can be used in containers.
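
A sketch of selecting a non-default stream with the module commands; the stream shown is the PostgreSQL 9.6 example from the paragraph above:

# yum module list postgresql
# yum module install postgresql:9.6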

15.4.1. Notable changes in MariaDB 10.3

MariaDB 10.3 provides numerous new features over the version 5.5 distributed in RHEL 7, such as:

Common table expressions

System-versioned tables

FOR loops

Invisible columns

Sequences

Instant ADD COLUMN for InnoDB

Storage-engine independent column compression

Parallel replication

Multi-source replication

In addition, the new mariadb-connector-c packages provide a common client library for MySQL and
MariaDB. This library is usable with any version of the MySQL and MariaDB database servers. As a
result, the user is able to connect one build of an application to any of the MySQL and MariaDB servers
distributed with RHEL 8.

Other notable changes include:


MariaDB Galera Cluster, a synchronous multi-master cluster, is now a standard part of
MariaDB.

InnoDB is used as the default storage engine instead of XtraDB.

The mariadb-bench subpackage has been removed.

The default allowed level of the plug-in maturity has been changed to one level less than the
server maturity. As a result, plug-ins with a lower maturity level that were previously working, will
no longer load.

See also Using MariaDB on Red Hat Enterprise Linux 8 .

15.4.2. Notable changes in MySQL 8.0

RHEL 8 is distributed with MySQL 8.0, which provides, for example, the following enhancements:

MySQL now incorporates a transactional data dictionary, which stores information about
database objects.

MySQL now supports roles, which are collections of privileges.

The default character set has been changed from latin1 to utf8mb4.

Support for common table expressions, both nonrecursive and recursive, has been added.

MySQL now supports window functions, which perform a calculation for each row from a query,
using related rows.

InnoDB now supports the NOWAIT and SKIP LOCKED options with locking read statements.

GIS-related functions have been improved.

JSON functionality has been enhanced.

The new mariadb-connector-c packages provide a common client library for MySQL and
MariaDB. This library is usable with any version of the MySQL and MariaDB database servers.
As a result, the user is able to connect one build of an application to any of the MySQL and
MariaDB servers distributed with RHEL 8.

In addition, the MySQL 8.0 server distributed with RHEL 8 is configured to use
mysql_native_password as the default authentication plug-in because client tools and libraries in
RHEL 8 are incompatible with the caching_sha2_password method, which is used by default in the
upstream MySQL 8.0 version.

To change the default authentication plug-in to caching_sha2_password, edit the
/etc/my.cnf.d/mysql-default-authentication-plugin.cnf file as follows:

[mysqld]
default_authentication_plugin=caching_sha2_password

15.4.3. Notable changes in PostgreSQL

RHEL 8.0 provides two versions of the PostgreSQL database server, distributed in two streams of the
postgresql module: PostgreSQL 10 (the default stream) and PostgreSQL 9.6. RHEL 7 includes
PostgreSQL version 9.2.


Notable changes in PostgreSQL 9.6 are, for example:

Parallel execution of the sequential operations: scan, join, and aggregate

Enhancements to synchronous replication

Improved full-text search enabling users to search for phrases

The postgres_fdw data federation driver now supports remote join, sort, UPDATE, and
DELETE operations

Substantial performance improvements, especially regarding scalability on multi-CPU-socket
servers

Major enhancements in PostgreSQL 10 include:

Logical replication using the publish and subscribe keywords

Stronger password authentication based on the SCRAM-SHA-256 mechanism

Declarative table partitioning

Improved query parallelism

Significant general performance improvements

Improved monitoring and control

See also Using PostgreSQL on Red Hat Enterprise Linux 8 .


CHAPTER 16. COMPILERS AND DEVELOPMENT TOOLS

16.1. CHANGES IN TOOLCHAIN SINCE RHEL 7


The following sections list changes in toolchain since the release of the described components in
Red Hat Enterprise Linux 7. See also Release notes for Red Hat Enterprise Linux 8.0 .

16.1.1. Changes in GCC in RHEL 8


In Red Hat Enterprise Linux 8, the GCC toolchain is based on the GCC 8.2 release series. Notable
changes since Red Hat Enterprise Linux 7 include:

Numerous general optimizations have been added, such as alias analysis, vectorizer
improvements, identical code folding, inter-procedural analysis, store merging optimization
pass, and others.

The Address Sanitizer has been improved.

The Leak Sanitizer for detection of memory leaks has been added.

The Undefined Behavior Sanitizer for detection of undefined behavior has been added.

Debug information can now be produced in the DWARF5 format. This capability is experimental.

The source code coverage analysis tool GCOV has been extended with various improvements.

Support for the OpenMP 4.5 specification has been added. Additionally, the offloading features
of the OpenMP 4.0 specification are now supported by the C, C++, and Fortran compilers.

New warnings and improved diagnostics have been added for static detection of certain likely
programming errors.

Source locations are now tracked as ranges rather than points, which allows much richer
diagnostics. The compiler now offers “fix-it” hints, suggesting possible code modifications. A
spell checker has been added to offer alternative names and ease detecting typos.

Security
GCC has been extended to provide tools to ensure additional hardening of the generated code.
Improvements related to security include:

The __builtin_add_overflow, __builtin_sub_overflow, and __builtin_mul_overflow built-in
functions for arithmetic with overflow checking have been added.

The -fstack-clash-protection option has been added to generate additional code guarding
against stack clash.

The -fcf-protection option was introduced to check target addresses of control-flow
instructions for increased program security.

The new -Wstringop-truncation warning option lists calls to bounded string manipulation
functions such as strncat, strncpy, or stpncpy that might truncate the copied string or leave
the destination unchanged.

The -Warray-bounds warning option has been improved to detect out-of-bounds array indices
and pointer offsets better.


The -Wclass-memaccess warning option has been added to warn about potentially unsafe
manipulation of objects of non-trivial class types by raw memory access functions such as
memcpy or realloc.

Architecture and processor support


Improvements to architecture and processor support include:

Multiple new architecture-specific options for the Intel AVX-512 architecture, a number of its
microarchitectures, and Intel Software Guard Extensions (SGX) have been added.

Code generation can now target the 64-bit ARM architecture LSE extensions, ARMv8.2-A 16-
bit Floating-Point Extensions (FPE), and ARMv8.2-A, ARMv8.3-A, and ARMv8.4-A architecture
versions.

Handling of the -march=native option on the ARM and 64-bit ARM architectures has been
fixed.

Support for the z13 and z14 processors of the IBM Z architecture has been added.

Languages and standards


Notable changes related to languages and standards include:

The default standard used when compiling code in the C language has changed to C17 with
GNU extensions.

The default standard used when compiling code in the C++ language has changed to C++14 with
GNU extensions.

The C++ runtime library now supports the C++11 and C++14 standards.

The C++ compiler now implements the C++14 standard with many new features such as variable
templates, aggregates with non-static data member initializers, the extended constexpr
specifier, sized deallocation functions, generic lambdas, variable-length arrays, digit separators,
and others.

Support for the C language standard C11 has been improved: ISO C11 atomics, generic
selections, and thread-local storage are now available.

The new __auto_type GNU C extension provides a subset of the functionality of C++11 auto
keyword in the C language.

The _FloatN and _FloatNx type names specified by the ISO/IEC TS 18661-3:2015 standard are
now recognized by the C front end.

The default standard used when compiling code in the C language has changed to C17 with
GNU extensions. This has the same effect as using the --std=gnu17 option. Previously, the
default was C89 with GNU extensions.

GCC can now experimentally compile code using the C++17 language standard and certain
features from the C++20 standard.

Passing an empty class as an argument now takes up no space on the Intel 64 and AMD64
architectures, as required by the platform ABI. Passing or returning a class with only deleted
copy and move constructors now uses the same calling convention as a class with a non-trivial
copy or move constructor.


The value returned by the C++11 alignof operator has been corrected to match the C _Alignof
operator and return minimum alignment. To find the preferred alignment, use the GNU
extension __alignof__.

The main version of the libgfortran library for Fortran language code has been changed to 5.

Support for the Ada (GNAT), GCC Go, and Objective C/C++ languages has been removed. Use
the Go Toolset for Go code development.

Additional resources

See also the Red Hat Enterprise Linux 8 Release Notes .

Using Go Toolset

16.1.2. Security enhancements in GCC in RHEL 8


This section describes in detail the changes in GCC related to security that were added since the release of
Red Hat Enterprise Linux 7.0.

New warnings
These warning options have been added:

-Wstringop-truncation: Calls to bounded string manipulation functions such as strncat, strncpy, and
stpncpy that might either truncate the copied string or leave the destination unchanged.

-Wclass-memaccess: Objects of non-trivial class types manipulated in potentially unsafe ways by raw
memory functions such as memcpy or realloc. The warning helps detect calls that bypass user-defined
constructors or copy-assignment operators, corrupt virtual table pointers, data members of
const-qualified types or references, or member pointers. The warning also detects calls that would
bypass access controls to data members.

-Wmisleading-indentation: Places where the indentation of the code gives a misleading idea of the block
structure of the code to a human reader.

-Walloc-size-larger-than=size: Calls to memory allocation functions where the amount of memory to
allocate exceeds size. Works also with functions where the allocation is specified by multiplying two
parameters and with any functions decorated with attribute alloc_size.

-Walloc-zero: Calls to memory allocation functions that attempt to allocate zero amount of memory.
Works also with functions where the allocation is specified by multiplying two parameters and with any
functions decorated with attribute alloc_size.

-Walloca: All calls to the alloca function.

-Walloca-larger-than=size: Calls to the alloca function where the requested memory is more than size.

-Wvla-larger-than=size: Definitions of Variable Length Arrays (VLA) that can either exceed the specified
size or whose bound is not known to be sufficiently constrained.

-Wformat-overflow=level: Both certain and likely buffer overflow in calls to the sprintf family of
formatted output functions. For more details and explanation of the level value, see the gcc(1) manual
page.

-Wformat-truncation=level: Both certain and likely output truncation in calls to the snprintf family of
formatted output functions. For more details and explanation of the level value, see the gcc(1) manual
page.

-Wstringop-overflow=type: Buffer overflow in calls to string handling functions such as memcpy and
strcpy. For more details and explanation of the type value, see the gcc(1) manual page.

Warning improvements
These GCC warnings have been improved:

The -Warray-bounds option has been improved to detect more instances of out-of-bounds
array indices and pointer offsets. For example, negative or excessive indices into flexible array
members and string literals are detected.

The -Wrestrict option introduced in GCC 7 has been enhanced to detect many more instances
of overlapping accesses to objects via restrict-qualified arguments to standard memory and
string manipulation functions such as memcpy and strcpy.

The -Wnonnull option has been enhanced to detect a broader set of cases of passing null
pointers to functions that expect a non-null argument (decorated with attribute nonnull).

New UndefinedBehaviorSanitizer
A new run-time sanitizer for detecting undefined behavior called UndefinedBehaviorSanitizer has been
added. The following options are noteworthy:

-fsanitize=float-divide-by-zero: Detect floating-point division by zero.

-fsanitize=float-cast-overflow: Check that the result of floating-point type to integer conversions do not
overflow.

-fsanitize=bounds: Enable instrumentation of array bounds and detect out-of-bounds accesses.

-fsanitize=alignment: Enable alignment checking and detect various misaligned objects.

-fsanitize=object-size: Enable object size checking and detect various out-of-bounds accesses.

-fsanitize=vptr: Enable checking of C++ member function calls, member accesses, and some conversions
between pointers to base and derived classes. Additionally, detect when referenced objects do not have
correct dynamic type.

-fsanitize=bounds-strict: Enable strict checking of array bounds. This enables -fsanitize=bounds and
instrumentation of flexible array member-like arrays.

-fsanitize=signed-integer-overflow: Diagnose arithmetic overflows even on arithmetic operations with
generic vectors.

-fsanitize=builtin: Diagnose at run time invalid arguments to __builtin_clz or __builtin_ctz prefixed
builtins. Includes checks from -fsanitize=undefined.

-fsanitize=pointer-overflow: Perform cheap run-time tests for pointer wrapping. Includes checks from
-fsanitize=undefined.

New options for AddressSanitizer


These options have been added to AddressSanitizer:

-fsanitize=pointer-compare: Warn about comparison of pointers that point to a different memory object.

-fsanitize=pointer-subtract: Warn about subtraction of pointers that point to a different memory object.

-fsanitize-address-use-after-scope: Sanitize variables whose address is taken and used after a scope
where the variable is defined.

Other sanitizers and instrumentation

The option -fstack-clash-protection has been added to insert probes when stack space is
allocated statically or dynamically to reliably detect stack overflows and thus mitigate the attack
vector that relies on jumping over a stack guard page provided by the operating system.

A new option -fcf-protection=[full|branch|return|none] has been added to perform code
instrumentation and increase program security by checking that target addresses of control-flow
transfer instructions (such as indirect function call, function return, indirect jump) are valid.
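
A hypothetical compiler invocation combining the two hardening options described above (the source and program names are placeholders, and -fcf-protection assumes an Intel 64/AMD64 target):

$ gcc -O2 -fstack-clash-protection -fcf-protection=full -o app app.c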

Additional resources

For more details and explanation of the values supplied to some of the options above, see the
gcc(1) manual page:

$ man gcc

16.1.3. Compatibility-breaking changes in GCC in RHEL 8


C++ ABI change in std::string and std::list
The Application Binary Interface (ABI) of the std::string and std::list classes from the libstdc++ library
changed between RHEL 7 (GCC 4.8) and RHEL 8 (GCC 8) to conform to the C++11 standard. The
libstdc++ library supports both the old and new ABI, but some other C++ system libraries do not. As a
consequence, applications that dynamically link against these libraries will need to be rebuilt. This affects
all C++ standard modes, including C++98. It also affects applications built with Red Hat Developer
Toolset compilers for RHEL 7, which kept the old ABI to maintain compatibility with the system libraries.

GCC no longer builds Ada, Go, and Objective C/C++ code


Capability for building code in the Ada (GNAT), GCC Go, and Objective C/C++ languages has been
removed from the GCC compiler.

To build Go code, use the Go Toolset instead.

16.2. COMPILER TOOLSETS


RHEL 8.0 provides the following compiler toolsets as Application Streams:

Clang and LLVM Toolset 7.0.1, which provides the LLVM compiler infrastructure framework, the
Clang compiler for the C and C++ languages, the LLDB debugger, and related tools for code
analysis. See the Using Clang and LLVM Toolset document.

Rust Toolset 1.31, which provides the Rust programming language compiler rustc, the cargo
build tool and dependency manager, the cargo-vendor plugin, and required libraries. See the
Using Rust Toolset document.

Go Toolset 1.11.5, which provides the Go programming language tools and libraries. Go is
alternatively known as golang. See the Using Go Toolset document.

16.3. JAVA IMPLEMENTATIONS AND JAVA TOOLS IN RHEL 8


The RHEL 8 AppStream repository includes:

The java-11-openjdk packages, which provide the OpenJDK 11 Java Runtime Environment and
the OpenJDK 11 Java Software Development Kit.

The java-1.8.0-openjdk packages, which provide the OpenJDK 8 Java Runtime Environment
and the OpenJDK 8 Java Software Development Kit.

The icedtea-web packages, which provide an implementation of Java Web Start.

The ant module, providing a Java library and command-line tool for compiling, assembling,
testing, and running Java applications. Ant has been updated to version 1.10.

The maven module, providing a software project management and comprehension tool. Maven
was previously available only as a Software Collection or in the unsupported Optional channel.

The scala module, providing a general purpose programming language for the Java platform.
Scala was previously available only as a Software Collection.

In addition, the java-1.8.0-ibm packages are distributed through the Supplementary repository. Note
that packages in this repository are unsupported by Red Hat.

16.4. COMPATIBILITY-BREAKING CHANGES IN GDB


The version of GDB provided in Red Hat Enterprise Linux 8 contains a number of changes that break
compatibility, especially for cases where the GDB output is read directly from the terminal. The
following sections provide more details about these changes.

Parsing output of GDB is not recommended. Prefer scripts using the Python GDB API or the GDB
Machine Interface (MI).

GDBserver now starts inferiors with shell


To enable expansion and variable substitution in inferior command-line arguments, GDBserver now
starts the inferior in a shell, the same as GDB does.

To disable using the shell:

When using the target extended-remote GDB command, disable shell with the set startup-
with-shell off command.

When using the target remote GDB command, disable shell with the --no-startup-with-shell
option of GDBserver.

Example 16.1. Example of shell expansion in remote GDB inferiors

This example shows how running the /bin/echo /* command through GDBserver differs on Red Hat
Enterprise Linux versions 7 and 8:

On RHEL 7:

$ gdbserver --multi :1234


$ gdb -batch -ex 'target extended-remote :1234' -ex 'set remote exec-file /bin/echo' -ex
'file /bin/echo' -ex 'run /*'
/*

On RHEL 8:

$ gdbserver --multi :1234


$ gdb -batch -ex 'target extended-remote :1234' -ex 'set remote exec-file /bin/echo' -ex
'file /bin/echo' -ex 'run /*'
/bin /boot (...) /tmp /usr /var

gcj support removed

Support for debugging Java programs compiled with the GNU Compiler for Java (gcj) has been
removed.

New syntax for symbol dumping maintenance commands


The symbol dumping maintenance commands syntax now includes options before file names. As a
result, commands that worked with GDB in RHEL 7 do not work in RHEL 8.

As an example, the following command no longer stores symbols in a file, but produces an error message:

(gdb) maintenance print symbols /tmp/out main.c

The new syntax for the symbol dumping maintenance commands is:

maint print symbols [-pc address] [--] [filename]


maint print symbols [-objfile objfile] [-source source] [--] [filename]
maint print psymbols [-objfile objfile] [-pc address] [--] [filename]
maint print psymbols [-objfile objfile] [-source source] [--] [filename]
maint print msymbols [-objfile objfile] [--] [filename]
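
As an illustration, a rough equivalent of the RHEL 7 command shown above, dumping the symbols of main.c into /tmp/out under the new syntax, would look similar to the following (verify the exact option names in your GDB version):

(gdb) maint print symbols -source main.c -- /tmp/out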

Thread numbers are no longer global


Previously, GDB used only global thread numbering. The numbering has been extended to be displayed
per inferior in the form inferior_num.thread_num, such as 2.1. As a consequence, thread numbers in the
$_thread convenience variable and in the InferiorThread.num Python attribute are no longer unique
between inferiors.

GDB now stores a second thread ID per thread, called the global thread ID, which is the new equivalent
of thread numbers in previous releases. To access the global thread number, use the $_gthread
convenience variable and InferiorThread.global_num Python attribute.

For backwards compatibility, the Machine Interface (MI) thread IDs always contain the global IDs.

Example 16.2. Example of GDB thread number changes

On Red Hat Enterprise Linux 7:

# debuginfo-install coreutils
$ gdb -batch -ex 'file echo' -ex start -ex 'add-inferior' -ex 'inferior 2' -ex 'file echo' -ex start -ex 'info
threads' -ex 'print $_thread' -ex 'inferior 1' -ex 'print $_thread'
(...)
Id Target Id Frame
* 2 process 203923 "echo" main (argc=1, argv=0x7fffffffdb88) at src/echo.c:109
1 process 203914 "echo" main (argc=1, argv=0x7fffffffdb88) at src/echo.c:109
$1 = 2
(...)
$2 = 1

On Red Hat Enterprise Linux 8:

# dnf debuginfo-install coreutils


$ gdb -batch -ex 'file echo' -ex start -ex 'add-inferior' -ex 'inferior 2' -ex 'file echo' -ex start -ex 'info
threads' -ex 'print $_thread' -ex 'inferior 1' -ex 'print $_thread'
(...)
Id Target Id Frame
1.1 process 4106488 "echo" main (argc=1, argv=0x7fffffffce58) at ../src/echo.c:109
* 2.1 process 4106494 "echo" main (argc=1, argv=0x7fffffffce58) at ../src/echo.c:109


$1 = 1
(...)
$2 = 1

Memory for value contents can be limited


Previously, GDB did not limit the amount of memory allocated for value contents. As a consequence,
debugging incorrect programs could cause GDB to allocate too much memory. The max-value-size
setting has been added to enable limiting the amount of allocated memory. The default value of this
limit is 64 KiB. As a result, GDB in Red Hat Enterprise Linux 8 does not display values that are too large,
but instead reports that the value is too large.

As an example, printing a value defined as char s[128*1024]; produces different results:

On Red Hat Enterprise Linux 7, $1 = 'A' <repeats 131072 times>

On Red Hat Enterprise Linux 8, value requires 131072 bytes, which is more than max-value-
size
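
The limit can be adjusted for the current debugging session; for example (the value shown is arbitrary):

(gdb) show max-value-size
(gdb) set max-value-size 262144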

Sun version of stabs format no longer supported


Support for the Sun version of the stabs debug file format has been removed. The stabs format
produced by GCC in RHEL with the gcc -gstabs option is still supported by GDB.

Sysroot handling changes


The set sysroot path command specifies system root when searching for files needed for debugging.
Directory names supplied to this command may now be prefixed with the string target: to make GDB
read the shared libraries from the target system (both local and remote). The formerly available remote:
prefix is now treated as target:. Additionally, the default system root value has changed from an empty
string to target: for backward compatibility.

The specified system root is prepended to the file name of the main executable, when GDB starts
processes remotely, or when it attaches to already running processes (both local and remote). This
means that for remote processes, the default value target: makes GDB always try to load the
debugging information from the remote system. To prevent this, run the set sysroot command before
the target remote command so that local symbol files are found before the remote ones.
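
For example, to make GDB use a local copy of the target libraries when debugging remotely (the path and address below are hypothetical):

(gdb) set sysroot /opt/target-sysroot
(gdb) target remote 192.168.122.10:2345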

HISTSIZE no longer controls GDB command history size


Previously, GDB used the HISTSIZE environment variable to determine how long command history
should be kept. GDB has been changed to use the GDBHISTSIZE environment variable instead. This
variable is specific only to GDB. The possible values and their effects are:

a positive number - use command history of this size,

-1 or an empty string - keep history of all commands,

non-numeric values - ignored.
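
For example, to keep the last 2000 commands, export the variable in the shell before starting GDB (the value is illustrative):

$ export GDBHISTSIZE=2000
$ gdb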

Completion limiting added


The maximum number of candidates considered during completion can now be limited using the set
max-completions command. To show the current limit, run the show max-completions command.
The default value is 200. This limit prevents GDB from generating excessively large completion lists and
becoming unresponsive.

As an example, the output after the input p <tab><tab> is:

on RHEL 7: Display all 29863 possibilities? (y or n)


on RHEL 8: Display all 200 possibilities? (y or n)
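
The limit can be adjusted for the current session; for example (the value is arbitrary):

(gdb) show max-completions
(gdb) set max-completions 50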

HP-UX XDB compatibility mode removed


The -xdb option for the HP-UX XDB compatibility mode has been removed from GDB.

Handling signals for threads


Previously, GDB could deliver a signal to the current thread instead of the thread for which the signal
was actually sent. This bug has been fixed, and GDB now always passes the signal to the correct thread
when resuming execution.

Additionally, the signal command now always correctly delivers the requested signal to the current
thread. If the program is stopped for a signal and the user switched threads, GDB asks for confirmation.

Breakpoint modes always-inserted off and auto merged


The breakpoint always-inserted setting has been changed. The auto value and the corresponding
behavior have been removed. The default value is now off. Additionally, the off value now causes GDB
not to remove breakpoints from the target until all threads stop.

remotebaud commands no longer supported


The set remotebaud and show remotebaud commands are no longer supported. Use the set serial
baud and show serial baud commands instead.
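
For example (the baud rate shown is illustrative):

(gdb) set serial baud 115200
(gdb) show serial baud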

16.5. COMPATIBILITY-BREAKING CHANGES IN COMPILERS AND DEVELOPMENT TOOLS

librtkaio removed
With this update, the librtkaio library has been removed. This library provided high-performance real-
time asynchronous I/O access for some files, which was based on Linux kernel Asynchronous I/O
support (KAIO).

As a result of the removal:

Applications using the LD_PRELOAD method to load librtkaio display a warning about a
missing library, load the librt library instead and run correctly.

Applications using the LD_LIBRARY_PATH method to load librtkaio load the librt library
instead and run correctly, without any warning.

Applications using the dlopen() function to access librtkaio directly load the librt library
instead.

Users of librtkaio have the following options:

Use the fallback mechanism described above, without any changes to their applications.

Change code of their applications to use the librt library, which offers a compatible POSIX-
compliant API.

Change code of their applications to use the libaio library, which offers a compatible API.

Both librt and libaio can provide comparable features and performance under specific conditions.

Note that the libaio package has a Red Hat compatibility level of 2, while librt and the removed
librtkaio have level 1.

For more details, see https://fedoraproject.org/wiki/Changes/GLIBC223_librtkaio_removal


Sun RPC and NIS interfaces removed from glibc


The glibc library no longer provides Sun RPC and NIS interfaces for new applications. These interfaces
are now available only for running legacy applications. Developers must change their applications to use
the libtirpc library instead of Sun RPC and libnsl2 instead of NIS. Applications can benefit from IPv6
support in the replacement libraries.
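
As a hedged illustration, a program that previously used the Sun RPC interfaces in glibc can typically be rebuilt against libtirpc along the following lines, assuming the libtirpc development package and its pkg-config file are installed (the source file name is hypothetical):

$ gcc rpc_client.c $(pkg-config --cflags --libs libtirpc) -o rpc_client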

The nosegneg libraries for 32-bit Xen have been removed


Previously, the glibc i686 packages contained an alternative glibc build, which avoided the use of the
thread descriptor segment register with negative offsets (nosegneg). This alternative build was only
used in the 32-bit version of the Xen Project hypervisor without hardware virtualization support, as an
optimization to reduce the cost of full paravirtualization. These alternative builds are no longer used and
they have been removed.

New make operator != causes a different interpretation of certain existing makefile syntax
The != shell assignment operator has been added to GNU make as an alternative to the $(shell …)
function to increase compatibility with BSD makefiles. As a consequence, variables with a name ending in
an exclamation mark and immediately followed by an assignment, such as variable!=value, are now
interpreted as a shell assignment. To restore the previous behavior, add a space after the exclamation
mark, such as variable! =value.
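
A minimal illustration of the two interpretations in a GNU make makefile (the variable name is arbitrary):

# RHEL 8 make: runs the shell command "date" and assigns its output to NOW
NOW != date

# With the added space, the literal text "date" is assigned to a variable named "NOW!" (the previous behavior)
NOW! = date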

For more details and differences between the operator and the function, see the GNU make manual.

Valgrind library for MPI debugging support removed


The libmpiwrap.so wrapper library for Valgrind provided by the valgrind-openmpi package has been
removed. This library enabled Valgrind to debug programs using the Message Passing Interface (MPI).
This library was specific to the Open MPI implementation version in previous versions of Red Hat
Enterprise Linux.

Users of libmpiwrap.so are encouraged to build their own version from upstream sources specific to
their MPI implementation and version. Supply these custom-built libraries to Valgrind using the
LD_PRELOAD technique.
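
As a hedged illustration, a custom-built wrapper can be preloaded when running an MPI program under Valgrind along these lines (the library path, process count, and program name are hypothetical):

$ LD_PRELOAD=/usr/local/lib/valgrind/libmpiwrap-amd64-linux.so mpirun -np 2 valgrind ./mpi_app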

Development headers and static libraries removed from valgrind-devel


Previously, the valgrind-devel sub-package used to include development files for developing custom
valgrind tools. This update removes these files because they do not have a guaranteed API, have to be
linked statically, and are unsupported. The valgrind-devel package still contains the development
files for valgrind-aware programs and header files such as valgrind.h, callgrind.h, drd.h, helgrind.h,
and memcheck.h, which are stable and well-supported.


CHAPTER 17. IDENTITY MANAGEMENT

17.1. IDENTITY MANAGEMENT PACKAGES ARE INSTALLED AS A MODULE

In RHEL 8, the packages necessary for installing an Identity Management (IdM) server and client are
distributed as a module. The client stream is the default stream of the idm module, and you can
download the packages necessary for installing the client without enabling the stream.

The IdM server module stream is called DL1 and contains multiple profiles that correspond to the
different types of IdM servers:

server: an IdM server without integrated DNS

dns: an IdM server with integrated DNS

adtrust: an IdM server that has a trust agreement with Active Directory

client: an IdM client

To download the packages in a specific profile of the DL1 stream:

1. Enable the stream:

# yum module enable idm:DL1

2. Switch to the RPMs delivered through the stream:

# yum distro-sync

3. Install the selected profile:

# yum module install idm:DL1/profile

Replace profile with one of the specific profiles defined above.
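
For example, to install the packages for an IdM server with integrated DNS:

# yum module install idm:DL1/dns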

For details, see Installing packages required for an Identity Management server and Packages required
to install an Identity Management client.

17.2. ACTIVE DIRECTORY USERS CAN NOW ADMINISTER IDENTITY MANAGEMENT

In Red Hat Enterprise Linux (RHEL) 7, external group membership allows AD users and groups to access
IdM resources in a POSIX environment with the help of the System Security Services Daemon (SSSD).

The IdM LDAP server has its own mechanisms to grant access control. RHEL 8 introduces an update
that allows adding an ID user override for an AD user as a member of an IdM group. An ID override is a
record describing what the properties of a specific Active Directory user or group should look like within
a specific ID view, in this case the Default Trust View. As a consequence of the update, the IdM LDAP
server is able to apply access control rules for the IdM group to the AD user.

AD users are now able to use the self service features of IdM UI, for example to upload their SSH keys,
or change their personal data. An AD administrator is able to fully administer IdM without having two
different accounts and passwords.

NOTE

Currently, selected features in IdM may still be unavailable to AD users. For example,
setting passwords for IdM users as an AD user from the IdM admins group might fail.

17.3. SESSION RECORDING SOLUTION FOR RHEL 8 ADDED


A session recording solution has been added to Red Hat Enterprise Linux 8 (RHEL 8). A new tlog
package and its associated web console session player enable you to record and play back user terminal
sessions. The recording can be configured per user or user group via the System Security Services
Daemon (SSSD) service. All terminal input and output is captured and stored in a text-based format in
the system journal. Recording of input is disabled by default for security reasons, so that raw passwords
and other sensitive information are not intercepted.

The solution can be used for auditing of user sessions on security-sensitive systems. In the event of a
security breach, the recorded sessions can be reviewed as a part of a forensic analysis. System
administrators are now able to configure session recording locally and view the results from the RHEL 8
web console interface or from the command line using the tlog-play utility.

17.4. REMOVED IDENTITY MANAGEMENT FUNCTIONALITY

17.4.1. NSS databases not supported in OpenLDAP


The OpenLDAP suite in previous versions of Red Hat Enterprise Linux (RHEL) used the Mozilla Network
Security Services (NSS) for cryptographic purposes. With RHEL 8, OpenSSL, which is supported by the
OpenLDAP community, replaces NSS. OpenSSL does not support NSS databases for storing
certificates and keys. However, it still supports privacy enhanced mail (PEM) files that serve the same
purpose.

17.4.2. Selected Python Kerberos packages have been replaced


In Red Hat Enterprise Linux (RHEL) 8, the python-gssapi package, python-requests-gssapi module,
and urllib-gssapi library have replaced Python Kerberos packages such as python-krbV, python-
kerberos, python-requests-kerberos, and python-urllib2_kerberos. Notable benefits include:

python-gssapi is easier to use than python-kerberos and python-krbV

python-gssapi supports both python 2 and python 3 whereas python-krbV does not

the GSSAPI-based packages allow the use of other Generic Security Services API (GSSAPI)
mechanisms in addition to Kerberos, such as NT LAN Manager (NTLM), for backward
compatibility reasons

This update improves the maintainability and debuggability of GSSAPI in RHEL 8.

17.5. SSSD

17.5.1. authselect replaces authconfig

In RHEL 8, the authselect utility replaces the authconfig utility. authselect comes with a safer
approach to PAM stack management that makes the PAM configuration changes simpler for system
administrators. authselect can be used to configure authentication methods such as passwords,
certificates, smart cards, and fingerprints. authselect does not configure services required to join remote
domains. This task is performed by specialized tools, such as realmd or ipa-client-install.

17.5.2. KCM replaces KEYRING as the default credential cache storage


In RHEL 8, the default credential cache storage is the Kerberos Credential Manager (KCM), which is
backed by the sssd-kcm daemon. KCM overcomes the limitations of the previously used KEYRING,
which is difficult to use in containerized environments because it is not namespaced, and whose quotas
are difficult to view and manage.

With this update, RHEL 8 contains a credential cache that is better suited for containerized
environments and that provides a basis for building more features in future releases.

17.5.3. sssctl prints an HBAC rules report for an IdM domain

With this update, the sssctl utility of the System Security Services Daemon (SSSD) can print an access
control report for an Identity Management (IdM) domain. This feature meets the need of certain
environments to see, for regulatory reasons, a list of users and groups that can access a specific client
machine. Running sssctl access-report domain_name on an IdM client prints the parsed subset of
host-based access control (HBAC) rules in the IdM domain that apply to the client machine.

Note that no providers other than IdM support this feature.
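
For example, on an IdM client enrolled in a hypothetical idm.example.com domain:

# sssctl access-report idm.example.com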

17.5.4. Local users are cached by SSSD and served through the nss_sss module

In RHEL 8, the System Security Services Daemon (SSSD) serves users and groups from the
/etc/passwd and /etc/group files by default. The sss nsswitch module precedes files in the
/etc/nsswitch.conf file.
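
For illustration, the relevant entries in /etc/nsswitch.conf then look similar to the following, with sss listed before files (the exact default lines on your system may list additional services):

passwd: sss files systemd
group: sss files systemd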

The advantage of serving local users through SSSD is that the nss_sss module has a fast memory-
mapped cache that speeds up Name Service Switch (NSS) lookups compared to accessing the disk and
opening the files on each NSS request. Previously, the Name service cache daemon (nscd) helped
accelerate the process of accessing the disk. However, using nscd in parallel with SSSD is cumbersome,
as both SSSD and nscd use their own independent caching. Consequently, using nscd in setups where
SSSD is also serving users from a remote domain, for example LDAP or Active Directory, can cause
unpredictable behavior.

With this update, the resolution of local users and groups is faster in RHEL 8. Note that the root user is
never handled by SSSD, therefore root resolution cannot be impacted by a potential bug in SSSD. Note
also that if SSSD is not running, the nss_sss module handles the situation gracefully by falling back to
nss_files to avoid problems. You do not have to configure SSSD in any way; the files domain is added
automatically.

17.5.5. SSSD now allows you to select one of the multiple smart-card authentication
devices
By default, the System Security Services Daemon (SSSD) tries to detect a device for smart-card
authentication automatically. If there are multiple devices connected, SSSD selects the first one it
detects. Consequently, you cannot select a particular device, which sometimes leads to failures.

With this update, you can configure a new p11_uri option for the [pam] section of the sssd.conf
configuration file. This option enables you to define which device is used for smart-card authentication.


For example, to select a reader with the slot id 2 detected by the OpenSC PKCS#11 module, add:

p11_uri = library-description=OpenSC%20smartcard%20framework;slot-id=2

to the [pam] section of sssd.conf.

For details, see the man sssd.conf page.

17.6. REMOVED SSSD FUNCTIONALITY

17.6.1. sssd-secrets has been removed

The sssd-secrets component of the System Security Services Daemon (SSSD) has been removed in
Red Hat Enterprise Linux 8. This is because Custodia, a secrets service provider, is no longer actively
developed. Use other Identity Management tools to store secrets, for example the Identity Management
Vault.


CHAPTER 18. THE WEB CONSOLE

18.1. THE WEB CONSOLE IS NOW AVAILABLE BY DEFAULT


Packages for the RHEL 8 web console, also known as Cockpit, are now part of Red Hat Enterprise Linux
default repositories, and can therefore be immediately installed on a registered RHEL 8 system.

In addition, on a non-minimal installation of RHEL 8, the web console is automatically installed and
the firewall ports required by the console are automatically opened.

A system message has also been added prior to login that provides information about how to enable or
access the web console.

18.2. NEW FIREWALL INTERFACE


The Networking tab in the RHEL 8 web console now includes the Firewall settings. In this section, users
can:

Enable/disable firewall

Add/remove services

For details, see Using the web console for managing firewall.

18.3. SUBSCRIPTION MANAGEMENT


The RHEL 8 web console provides an interface for using Red Hat Subscription Manager installed on your
local system. The Subscription Manager connects to the Red Hat Customer Portal and verifies all
available:

Active subscriptions

Expired subscriptions

Renewed subscriptions

If you want to renew the subscription or get a different one in Red Hat Customer Portal, you do not have
to update the Subscription Manager data manually. The Subscription Manager synchronizes data with
Red Hat Customer Portal automatically.


NOTE

The web console’s Subscriptions page is now provided by the new subscription-manager-
cockpit package.

For details, see Managing subscriptions in the web console.

18.4. BETTER IDM INTEGRATION FOR THE WEB CONSOLE

If your system is enrolled in an Identity Management (IdM) domain, the RHEL 8 web console now uses
the domain’s centrally managed IdM resources by default. This includes the following benefits:

The IdM domain’s administrators can use the web console to manage the local machine.

The console’s web server automatically switches to a certificate issued by the IdM certificate
authority (CA) and accepted by browsers.

Users with a Kerberos ticket in the IdM domain do not need to provide login credentials to
access the web console.

SSH hosts known to the IdM domain are accessible to the web console without manually adding
an SSH connection.

Note that for IdM integration with the web console to work properly, the user first needs to run the ipa-
advise utility with the enable-admins-sudo option on the IdM master system.

18.5. THE WEB CONSOLE IS NOW COMPATIBLE WITH MOBILE BROWSERS

With this update, the web console menus and pages can be navigated on mobile browser variants. This
makes it possible to manage systems using the RHEL 8 web console from a mobile device.

18.6. THE WEB CONSOLE FRONT PAGE NOW DISPLAYS MISSING UPDATES AND SUBSCRIPTIONS

If a system managed by the RHEL 8 web console has outdated packages or a lapsed subscription, a
warning is now displayed on the web console front page of the system.

18.7. THE WEB CONSOLE NOW SUPPORTS PBD ENROLLMENT


With this update, you can use the RHEL 8 web console interface to apply Policy-Based Decryption
(PBD) rules to disks on managed systems. This uses the Clevis decryption client to facilitate a variety of
security management functions in the web console, such as automatic unlocking of LUKS-encrypted
disk partitions.

18.8. SUPPORT LUKS V2


In the web console’s Storage tab, you can now create, lock, unlock, resize, and otherwise configure
encrypted devices using the LUKS (Linux Unified Key Setup) version 2 format.

This new version of LUKS offers:

More flexible unlocking policies

Stronger cryptography

Better compatibility with future changes

18.9. VIRTUAL MACHINES CAN NOW BE MANAGED USING THE WEB CONSOLE

The Virtual Machines page can now be added to the RHEL 8 web console interface, which enables the
user to create and manage libvirt-based virtual machines.

For information about the differences in virtual management features between the web console and the
Virtual Machine Manager, see Differences in virtualization features in Virtual Machine Manager and the
web console.

18.10. INTERNET EXPLORER UNSUPPORTED BY THE WEB CONSOLE


Support for the Internet Explorer browser has been removed from the RHEL 8 web console. Attempting
to open the web console in Internet Explorer now displays an error screen with a list of recommended
browsers that can be used instead.


CHAPTER 19. VIRTUALIZATION

19.1. VIRTUAL MACHINES CAN NOW BE MANAGED USING THE WEB CONSOLE

The Virtual Machines page can now be added to the RHEL 8 web console interface, which enables the
user to create and manage libvirt-based virtual machines (VMs).

In addition, the Virtual Machine Manager (virt-manager) application has been deprecated, and may
become unsupported in a future major release of RHEL.

Note, however, that the web console currently does not provide all of the virtual management features
that virt-manager does. For details about the differences in available features between the RHEL 8 web
console and the Virtual Machine Manager, see the Configuring and managing virtualization document.

19.2. THE Q35 MACHINE TYPE IS NOW SUPPORTED BY VIRTUALIZATION

Red Hat Enterprise Linux 8 introduces support for Q35, a more modern PCI Express-based machine
type. This provides a variety of improvements in features and performance of virtual devices, and
ensures that a wider range of modern devices are compatible with virtualization. In addition, virtual
machines created in Red Hat Enterprise Linux 8 are set to use Q35 by default.

Note that the previously default PC machine type has become deprecated and may become
unsupported in a future major release of RHEL. However, changing the machine type of existing VMs
from PC to Q35 is not recommended.

Notable differences between PC and Q35 include:

Older operating systems, such as Windows XP, do not support Q35 and will not boot if used on a
Q35 VM.

Currently, when using RHEL 6 as the operating system on a Q35 VM, hot-plugging a PCI device
to that VM in some cases does not work. In addition, certain legacy virtio devices do not work
properly on RHEL 6 Q35 VMs.
Therefore, using the PC machine type is recommended for RHEL 6 VMs.

Q35 emulates PCI Express (PCI-e) buses instead of PCI. As a result, a different device
topology and addressing scheme is presented to the guest OS.

Q35 has a built-in SATA/AHCI controller, instead of an IDE controller.

The SecureBoot feature only works on Q35 VMs.

19.3. REMOVED VIRTUALIZATION FUNCTIONALITY


The cpu64-rhel6 CPU model has been deprecated and removed
The cpu64-rhel6 QEMU virtual CPU model has been deprecated in RHEL 8.1, and has been removed
from RHEL 8.2. It is recommended that you use the other CPU models provided by QEMU and libvirt,
according to the CPU present on the host machine.

IVSHMEM has been disabled

The inter-VM shared memory device (IVSHMEM) feature, which provides shared memory between
multiple virtual machines, is now disabled in Red Hat Enterprise Linux 8. A virtual machine configured
with this device will fail to boot. Similarly, attempting to hot-plug such a device will fail as well.

virt-install can no longer use NFS locations


With this update, the virt-install utility cannot mount NFS locations. As a consequence, attempting to
install a virtual machine using virt-install with an NFS address as the value of the --location option fails. To
work around this change, mount your NFS share prior to using virt-install, or use an HTTP location.

RHEL 8 does not support the tulip driver


With this update, the tulip network driver is no longer supported. As a consequence, when using RHEL 8
on a Generation 1 virtual machine (VM) on the Microsoft Hyper-V hypervisor, the "Legacy Network
Adapter" device does not work, which causes PXE installation of such VMs to fail.

For the PXE installation to work, install RHEL 8 on a Generation 2 Hyper-V VM. If you require a RHEL 8
Generation 1 VM, use ISO installation.

LSI Logic SAS and Parallel SCSI drivers are not supported
The LSI Logic SAS driver (mptsas) and LSI Logic Parallel driver (mptspi) for SCSI are no longer
supported. As a consequence, the drivers can be used for installing RHEL 8 as a guest operating system
on a VMWare hypervisor to a SCSI disk, but the created VM will not be supported by Red Hat.

Installing virtio-win no longer creates a floppy disk image with the Windows drivers
Due to the limitation of floppy drives, virtio-win drivers are no longer provided as floppy images. Users
should use the ISO image instead.


CHAPTER 20. CONTAINERS


A set of container images is available for Red Hat Enterprise Linux 8. Notable changes include:

Docker is not included in RHEL 8.0. For working with containers, use the podman, buildah,
skopeo, and runc tools.
For information on these tools and on using containers in RHEL 8, see Building, running, and
managing containers.

The podman tool has been released as a fully supported feature.


The podman tool manages pods, container images, and containers on a single node. It is built on
the libpod library, which enables management of containers and groups of containers, called
pods.

To learn how to use podman, see Building, running, and managing containers. A brief example of
pulling and running a container image follows this list.

In RHEL 8 GA, Red Hat Universal Base Images (UBI) are newly available. UBIs replace some of
the images Red Hat previously provided, such as the standard and the minimal RHEL base
images.
Unlike older Red Hat images, UBIs are freely redistributable. This means they can be used in any
environment and shared anywhere. You can use them even if you are not a Red Hat customer.

For UBI documentation, see Building, running, and managing containers.

In RHEL 8 GA, additional container images are available that provide AppStream components,
for which container images are distributed with Red Hat Software Collections in RHEL 7. All of
these RHEL 8 images are based on the ubi8 base image.

Container images for the 64-bit ARM architecture are fully supported in RHEL 8.

The rhel-tools container has been removed in RHEL 8. The sos and redhat-support-tool tools
are provided in the support-tools container. System administrators can also use this image as a
base for building their own system tools container images.

The support for rootless containers is available as a technology preview in RHEL 8.


Rootless containers are containers that are created and managed by regular system users
without administrative permissions.
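
As a brief, hedged example of the container tools and UBI mentioned above, the following pulls the UBI 8 base image from the registry path commonly used for UBI and runs a command in a disposable container (verify the registry path for your environment):

# podman pull registry.access.redhat.com/ubi8/ubi
# podman run --rm registry.access.redhat.com/ubi8/ubi cat /etc/os-release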

Internationalization

20.1. RHEL 8 INTERNATIONAL LANGUAGES


Red Hat Enterprise Linux 8 supports the installation of multiple languages and the changing of
languages based on your requirements.

East Asian Languages - Japanese, Korean, Simplified Chinese, and Traditional Chinese.

European Languages - English, German, Spanish, French, Italian, Portuguese, and Russian.

The following table lists the fonts and input methods provided for various major languages.

Language              Default Font (Font Package)                                       Input Methods

English               dejavu-sans-fonts

French                dejavu-sans-fonts

German                dejavu-sans-fonts

Italian               dejavu-sans-fonts

Russian               dejavu-sans-fonts

Spanish               dejavu-sans-fonts

Portuguese            dejavu-sans-fonts

Simplified Chinese    google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts   ibus-libpinyin, libpinyin

Traditional Chinese   google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts   ibus-libzhuyin, libzhuyin

Japanese              google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts   ibus-kkc, libkkc

Korean                google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts   ibus-hangul, libhangul

20.2. NOTABLE CHANGES TO INTERNATIONALIZATION IN RHEL 8


RHEL 8 introduces the following changes to internationalization compared to RHEL 7:

Support for the Unicode 11 computing industry standard has been added.

Internationalization is distributed in multiple packages, which allows for smaller footprint


installations. For more information, see Using langpacks.

The glibc package updates for multiple locales are now synchronized with the Common Locale
Data Repository (CLDR).


CHAPTER 21. RELATED INFORMATION


Red Hat Enterprise Linux technology capabilities and limits

Red Hat Enterprise Linux Life Cycle document

RHEL 8 product documentation

RHEL 8.0 Release Notes

RHEL 8 Package manifest

Upgrading to RHEL 8

Application Compatibility Guide

RHEL 7 Migration Planning Guide

Customer Portal Labs

Red Hat Insights

Getting the most out of your support experience


APPENDIX A. CHANGES TO PACKAGES


This chapter lists changes to packages between RHEL 7 and RHEL 8, as well as changes between minor
releases of RHEL 8.

A.1. NEW PACKAGES

A.1.1. Packages added in RHEL 8 minor releases


The following packages were added in RHEL 8 minor releases starting from RHEL 8.1:

Package Repository New in

ansible-freeipa rhel8-AppStream RHEL 8.1

asio-devel rhel8-CRB RHEL 8.1

fapolicyd rhel8-AppStream RHEL 8.1

idn2 rhel8-AppStream RHEL 8.1

ipa-client-samba rhel8-AppStream RHEL 8.1

ipa-healthcheck rhel8-AppStream RHEL 8.1

Judy-devel rhel8-BaseOS RHEL 8.1

libssh-config rhel8-BaseOS RHEL 8.1

lmdb-libs rhel8-AppStream RHEL 8.1

mod_auth_mellon-diagnostics rhel8-AppStream RHEL 8.1

python2-pip-wheel rhel8-Modules RHEL 8.1

python2-setuptools-wheel rhel8-Modules RHEL 8.1

python2-wheel-wheel rhel8-Modules RHEL 8.1

python3-distro rhel8-Modules RHEL 8.1

python3-pip-wheel rhel8-BaseOS RHEL 8.1

python3-setuptools-wheel rhel8-BaseOS RHEL 8.1

python3-wheel-wheel rhel8-AppStream RHEL 8.1


sblim-cmpi-base rhel8-AppStream RHEL 8.1

sblim-indication_helper rhel8-AppStream RHEL 8.1

sblim-wbemcli rhel8-AppStream RHEL 8.1

sssd-polkit-rules rhel8-BaseOS RHEL 8.1

udica rhel8-AppStream RHEL 8.1

For a complete list of packages available in the current minor RHEL 8 release, see the Package manifest.

A.1.2. Packages new in RHEL 8.0


The following packages are new in RHEL 8.0:

# | 389-ds-base-legacy-tools

A | aajohan-comfortaa-fonts, abrt-addon-coredump-helper, abrt-cli-ng, abrt-plugin-machine-id, abrt-


plugin-sosreport, adcli-doc, alsa-ucm, alsa-utils-alsabat, anaconda-install-env-deps, annobin, ant-lib,
ant-xz, apcu-panel, apr-util-bdb, aspell-en, assertj-core, assertj-core-javadoc, atlas-corei2, atlas-
corei2-devel, audispd-plugins-zos, authselect, authselect-compat, authselect-libs

B | bacula-logwatch, beignet, blivet-data, bluez-obexd, bnd-maven-plugin, boom-boot, boom-boot-


conf, boom-boot-grub2, boost-container, boost-coroutine, boost-fiber, boost-log, boost-mpich-
python3, boost-numpy3, boost-openmpi-python3, boost-python3, boost-python3-devel, boost-
stacktrace, boost-type_erasure, brltty-dracut, brltty-espeak-ng, brotli, brotli-devel, bubblewrap, buildah

C | c2esp, cargo, cargo-doc, cargo-vendor, cjose, cjose-devel, clang, clang-analyzer, clang-devel, clang-
libs, clang-tools-extra, cldr-emoji-annotation, clippy, cmake-data, cmake-doc, cmake-filesystem,
cmake-rpm-macros, cockpit-composer, cockpit-dashboard, cockpit-machines, cockpit-packagekit,
cockpit-pcp, cockpit-session-recording, cockpit-storaged, compat-guile18, compat-guile18-devel,
compat-libgfortran-48, compat-libpthread-nonshared, compat-openssl10, compiler-rt, composer-cli,
container-exception-logger, container-selinux, containernetworking-plugins, containers-common,
coreutils-common, coreutils-single, cppcheck, createrepo_c, createrepo_c-devel, createrepo_c-libs,
crypto-policies, CUnit, CUnit-devel, cyrus-imapd-vzic

D | dbus-c, dbus-c-devel, dbus-c++-glib, dbus-common, dbus-daemon, dbus-tools, dhcp-client, dhcp-


relay, dhcp-server, dleyna-renderer, dnf, dnf-automatic, dnf-data, dnf-plugin-spacewalk, dnf-plugin-
subscription-manager, dnf-plugins-core, dnf-utils, dnssec-trigger-panel, docbook2X, dotnet, dotnet-
host, dotnet-host-fxr-2.1, dotnet-runtime-2.1, dotnet-sdk-2.1, dotnet-sdk-2.1.5xx, dpdk, dpdk-devel,
dpdk-doc, dpdk-tools, dracut-live, dracut-squash, driverctl, drpm, drpm-devel, dtc

E | edk2-aarch64, edk2-ovmf, efi-filesystem, efi-srpm-macros, egl-wayland, eglexternalplatform-devel,


eigen3-devel, emacs-lucid, enca, enca-devel, enchant2, enchant2-devel, espeak-ng, evemu, evemu-libs,
execstack

F | fence-agents-lpar, fence-agents-zvm, fftw-libs-quad, freeradius-rest, fuse-common, fuse-overlayfs,


fuse-sshfs, fuse3, fuse3-devel, fuse3-libs

G | galera, gcc-gdb-plugin, gcc-offload-nvptx, gdb-headless, gdbm-libs, gdk-pixbuf2-modules, gdk-


pixbuf2-xlib, gdk-pixbuf2-xlib-devel, gegl04, gegl04-devel, genwqe-tools, genwqe-vpd, genwqe-zlib,


genwqe-zlib-devel, geronimo-jpa, geronimo-jpa-javadoc, gfbgraph, gflags, gflags-devel, ghc-srpm-
macros, ghostscript-tools-dvipdf, ghostscript-tools-fonts, ghostscript-tools-printing, ghostscript-x11,
git-clang-format, git-core, git-core-doc, git-subtree, glassfish-annotation-api, glassfish-annotation-
api-javadoc, glassfish-jax-rs-api, glassfish-jax-rs-api-javadoc, glassfish-jaxb-bom, glassfish-jaxb-bom-
ext, glassfish-jaxb-codemodel, glassfish-jaxb-codemodel-annotation-compiler, glassfish-jaxb-
codemodel-parent, glassfish-jaxb-core, glassfish-jaxb-external-parent, glassfish-jaxb-parent, glassfish-
jaxb-rngom, glassfish-jaxb-runtime, glassfish-jaxb-runtime-parent, glassfish-jaxb-txw-parent, glassfish-
jaxb-txw2, glassfish-legal, glassfish-master-pom, glassfish-servlet-api, glassfish-servlet-api-javadoc,
glibc-all-langpacks, glibc-langpack-aa, glibc-langpack-af, glibc-langpack-agr, glibc-langpack-ak, glibc-
langpack-am, glibc-langpack-an, glibc-langpack-anp, glibc-langpack-ar, glibc-langpack-as, glibc-
langpack-ast, glibc-langpack-ayc, glibc-langpack-az, glibc-langpack-be, glibc-langpack-bem, glibc-
langpack-ber, glibc-langpack-bg, glibc-langpack-bhb, glibc-langpack-bho, glibc-langpack-bi, glibc-
langpack-bn, glibc-langpack-bo, glibc-langpack-br, glibc-langpack-brx, glibc-langpack-bs, glibc-
langpack-byn, glibc-langpack-ca, glibc-langpack-ce, glibc-langpack-chr, glibc-langpack-cmn, glibc-
langpack-crh, glibc-langpack-cs, glibc-langpack-csb, glibc-langpack-cv, glibc-langpack-cy, glibc-
langpack-da, glibc-langpack-de, glibc-langpack-doi, glibc-langpack-dsb, glibc-langpack-dv, glibc-
langpack-dz, glibc-langpack-el, glibc-langpack-en, glibc-langpack-eo, glibc-langpack-es, glibc-
langpack-et, glibc-langpack-eu, glibc-langpack-fa, glibc-langpack-ff, glibc-langpack-fi, glibc-langpack-
fil, glibc-langpack-fo, glibc-langpack-fr, glibc-langpack-fur, glibc-langpack-fy, glibc-langpack-ga, glibc-
langpack-gd, glibc-langpack-gez, glibc-langpack-gl, glibc-langpack-gu, glibc-langpack-gv, glibc-
langpack-ha, glibc-langpack-hak, glibc-langpack-he, glibc-langpack-hi, glibc-langpack-hif, glibc-
langpack-hne, glibc-langpack-hr, glibc-langpack-hsb, glibc-langpack-ht, glibc-langpack-hu, glibc-
langpack-hy, glibc-langpack-ia, glibc-langpack-id, glibc-langpack-ig, glibc-langpack-ik, glibc-langpack-
is, glibc-langpack-it, glibc-langpack-iu, glibc-langpack-ja, glibc-langpack-ka, glibc-langpack-kab, glibc-
langpack-kk, glibc-langpack-kl, glibc-langpack-km, glibc-langpack-kn, glibc-langpack-ko, glibc-
langpack-kok, glibc-langpack-ks, glibc-langpack-ku, glibc-langpack-kw, glibc-langpack-ky, glibc-
langpack-lb, glibc-langpack-lg, glibc-langpack-li, glibc-langpack-lij, glibc-langpack-ln, glibc-langpack-
lo, glibc-langpack-lt, glibc-langpack-lv, glibc-langpack-lzh, glibc-langpack-mag, glibc-langpack-mai,
glibc-langpack-mfe, glibc-langpack-mg, glibc-langpack-mhr, glibc-langpack-mi, glibc-langpack-miq,
glibc-langpack-mjw, glibc-langpack-mk, glibc-langpack-ml, glibc-langpack-mn, glibc-langpack-mni,
glibc-langpack-mr, glibc-langpack-ms, glibc-langpack-mt, glibc-langpack-my, glibc-langpack-nan,
glibc-langpack-nb, glibc-langpack-nds, glibc-langpack-ne, glibc-langpack-nhn, glibc-langpack-niu,
glibc-langpack-nl, glibc-langpack-nn, glibc-langpack-nr, glibc-langpack-nso, glibc-langpack-oc, glibc-
langpack-om, glibc-langpack-or, glibc-langpack-os, glibc-langpack-pa, glibc-langpack-pap, glibc-
langpack-pl, glibc-langpack-ps, glibc-langpack-pt, glibc-langpack-quz, glibc-langpack-raj, glibc-
langpack-ro, glibc-langpack-ru, glibc-langpack-rw, glibc-langpack-sa, glibc-langpack-sah, glibc-
langpack-sat, glibc-langpack-sc, glibc-langpack-sd, glibc-langpack-se, glibc-langpack-sgs, glibc-
langpack-shn, glibc-langpack-shs, glibc-langpack-si, glibc-langpack-sid, glibc-langpack-sk, glibc-
langpack-sl, glibc-langpack-sm, glibc-langpack-so, glibc-langpack-sq, glibc-langpack-sr, glibc-
langpack-ss, glibc-langpack-st, glibc-langpack-sv, glibc-langpack-sw, glibc-langpack-szl, glibc-
langpack-ta, glibc-langpack-tcy, glibc-langpack-te, glibc-langpack-tg, glibc-langpack-th, glibc-
langpack-the, glibc-langpack-ti, glibc-langpack-tig, glibc-langpack-tk, glibc-langpack-tl, glibc-
langpack-tn, glibc-langpack-to, glibc-langpack-tpi, glibc-langpack-tr, glibc-langpack-ts, glibc-
langpack-tt, glibc-langpack-ug, glibc-langpack-uk, glibc-langpack-unm, glibc-langpack-ur, glibc-
langpack-uz, glibc-langpack-ve, glibc-langpack-vi, glibc-langpack-wa, glibc-langpack-wae, glibc-
langpack-wal, glibc-langpack-wo, glibc-langpack-xh, glibc-langpack-yi, glibc-langpack-yo, glibc-
langpack-yue, glibc-langpack-yuw, glibc-langpack-zh, glibc-langpack-zu, glibc-locale-source, glibc-
minimal-langpack, glog, glog-devel, gmock, gmock-devel, gmp-c++, gnome-autoar, gnome-
backgrounds-extras, gnome-characters, gnome-control-center, gnome-control-center-filesystem,
gnome-logs, gnome-photos, gnome-photos-tests, gnome-remote-desktop, gnome-shell-extension-
desktop-icons, gnome-tweaks, go-compilers-golang-compiler, go-srpm-macros, go-toolset, golang,
golang-bin, golang-docs, golang-misc, golang-race, golang-src, golang-tests, google-droid-kufi-fonts,
google-droid-sans-fonts, google-droid-sans-mono-fonts, google-droid-serif-fonts, google-noto-cjk-
fonts-common, google-noto-mono-fonts, google-noto-nastaliq-urdu-fonts, google-noto-sans-cjk-jp-
fonts, google-noto-sans-cjk-ttc-fonts, google-noto-sans-oriya-fonts, google-noto-sans-oriya-ui-


fonts, google-noto-sans-tibetan-fonts, google-noto-serif-bengali-fonts, google-noto-serif-cjk-ttc-


fonts, google-noto-serif-devanagari-fonts, google-noto-serif-gujarati-fonts, google-noto-serif-
kannada-fonts, google-noto-serif-malayalam-fonts, google-noto-serif-tamil-fonts, google-noto-serif-
telugu-fonts, google-roboto-slab-fonts, gpgmepp, gpgmepp-devel, grub2-tools-efi, gssntlmssp,
gstreamer1-plugins-good-gtk, gtest, gtest-devel, guava20, guava20-javadoc, guava20-testlib, guice-
assistedinject, guice-bom, guice-extensions, guice-grapher, guice-jmx, guice-jndi, guice-multibindings,
guice-servlet, guice-testlib, guice-throwingproviders, gutenprint-libs, gutenprint-libs-ui

H | hamcrest-core, hawtjni-runtime, hexchat, hexchat-devel, httpcomponents-client-cache, httpd-


filesystem, hunspell-es-AR, hunspell-es-BO, hunspell-es-CL, hunspell-es-CO, hunspell-es-CR,
hunspell-es-CU, hunspell-es-DO, hunspell-es-EC, hunspell-es-ES, hunspell-es-GT, hunspell-es-HN,
hunspell-es-MX, hunspell-es-NI, hunspell-es-PA, hunspell-es-PE, hunspell-es-PR, hunspell-es-PY,
hunspell-es-SV, hunspell-es-US, hunspell-es-UY, hunspell-es-VE

I | i2c-tools-perl, ibus-libzhuyin, ibus-wayland, iio-sensor-proxy, infiniband-diags-compat, integritysetup,


ipa-idoverride-memberof-plugin, ipcalc, ipmievd, iproute-tc, iptables-arptables, iptables-ebtables,
iptables-libs, isl, isl-devel, isns-utils-devel, isns-utils-libs, istack-commons-runtime, istack-commons-
tools, ivy-local

J | jackson-annotations, jackson-annotations-javadoc, jackson-core, jackson-core-javadoc, jackson-


databind, jackson-databind-javadoc, jackson-jaxrs-json-provider, jackson-jaxrs-providers, jackson-jaxrs-
providers-datatypes, jackson-jaxrs-providers-javadoc, jackson-module-jaxb-annotations, jackson-
module-jaxb-annotations-javadoc, javapackages-filesystem, javapackages-local, jbig2dec-libs, jboss-
annotations-1.2-api, jboss-interceptors-1.2-api, jboss-interceptors-1.2-api-javadoc, jboss-jaxrs-2.0-api,
jboss-logging, jboss-logging-tools, jcl-over-slf4j, jdeparser, jdom2, jdom2-javadoc, jimtcl, jimtcl-devel,
jq, js-uglify, Judy, jul-to-slf4j, julietaula-montserrat-fonts

K | kabi-dw, kdump-anaconda-addon, kernel-core, kernel-cross-headers, kernel-debug-core, kernel-


debug-modules, kernel-debug-modules-extra, kernel-modules, kernel-modules-extra, kernel-rpm-
macros, kernel-rt-core, kernel-rt-debug-core, kernel-rt-debug-modules, kernel-rt-debug-modules-
extra, kernel-rt-modules, kernel-rt-modules-extra, kernelshark, koan, kyotocabinet-libs

L | lame-devel, lame-libs, langpacks-af, langpacks-am, langpacks-ar, langpacks-as, langpacks-ast,


langpacks-be, langpacks-bg, langpacks-bn, langpacks-br, langpacks-bs, langpacks-ca, langpacks-cs,
langpacks-cy, langpacks-da, langpacks-de, langpacks-el, langpacks-en, langpacks-en_GB, langpacks-es,
langpacks-et, langpacks-eu, langpacks-fa, langpacks-fi, langpacks-fr, langpacks-ga, langpacks-gl,
langpacks-gu, langpacks-he, langpacks-hi, langpacks-hr, langpacks-hu, langpacks-ia, langpacks-id,
langpacks-is, langpacks-it, langpacks-ja, langpacks-kk, langpacks-kn, langpacks-ko, langpacks-lt,
langpacks-lv, langpacks-mai, langpacks-mk, langpacks-ml, langpacks-mr, langpacks-ms, langpacks-nb,
langpacks-ne, langpacks-nl, langpacks-nn, langpacks-nr, langpacks-nso, langpacks-or, langpacks-pa,
langpacks-pl, langpacks-pt, langpacks-pt_BR, langpacks-ro, langpacks-ru, langpacks-si, langpacks-sk,
langpacks-sl, langpacks-sq, langpacks-sr, langpacks-ss, langpacks-sv, langpacks-ta, langpacks-te,
langpacks-th, langpacks-tn, langpacks-tr, langpacks-ts, langpacks-uk, langpacks-ur, langpacks-ve,
langpacks-vi, langpacks-xh, langpacks-zh_CN, langpacks-zh_TW, langpacks-zu, lato-fonts, lensfun,
lensfun-devel, leptonica, leptonica-devel, liba52, libaec, libaec-devel, libatomic_ops, libbabeltrace,
libblockdev-lvm-dbus, libcephfs-devel, libcephfs2, libcmocka, libcmocka-devel, libcomps, libcomps-
devel, libcurl-minimal, libdap, libdap-devel, libdatrie, libdatrie-devel, libdazzle, libdc1394, libdnf, libEMF,
libEMF-devel, libeot, libepubgen, libertas-sd8686-firmware, libertas-sd8787-firmware, libertas-
usb8388-firmware, libertas-usb8388-olpc-firmware, libev, libev-devel, libev-libevent-devel, libev-
source, libfdisk, libfdisk-devel, libfdt, libfdt-devel, libgit2, libgit2-devel, libgit2-glib, libgit2-glib-devel,
libgomp-offload-nvptx, libgudev, libgudev-devel, libi2c, libidn2, libidn2-devel, libijs, libinput-utils, libipt,
libisoburn, libisoburn-devel, libkcapi, libkcapi-hmaccalc, libkeepalive, libknet1, libknet1-compress-bzip2-
plugin, libknet1-compress-lz4-plugin, libknet1-compress-lzma-plugin, libknet1-compress-lzo2-plugin,
libknet1-compress-plugins-all, libknet1-compress-zlib-plugin, libknet1-crypto-nss-plugin, libknet1-
crypto-openssl-plugin, libknet1-crypto-plugins-all, libknet1-devel, libknet1-plugins-all, liblangtag-data,
libmad, libmad-devel, libmcpp, libmemcached-libs, libmetalink, libmodulemd, libmodulemd-devel,


libmodulemd1, libnghttp2, libnghttp2-devel, libnice-gstreamer1, libnsl, libnsl2, libnsl2-devel, liboggz,


libomp, libomp-devel, libomp-test, libpeas-loader-python3, libpkgconf, libpq, libpq-devel, libproxy-
webkitgtk4, libpsl, libqhull, libqhull_p, libqhull_r, libqxp, librados-devel, libradosstriper-devel,
libradosstriper1, librbd-devel, libreoffice-help-en, libreoffice-langpack-af, libreoffice-langpack-ar,
libreoffice-langpack-as, libreoffice-langpack-bg, libreoffice-langpack-bn, libreoffice-langpack-br,
libreoffice-langpack-ca, libreoffice-langpack-cs, libreoffice-langpack-cy, libreoffice-langpack-da,
libreoffice-langpack-de, libreoffice-langpack-dz, libreoffice-langpack-el, libreoffice-langpack-es,
libreoffice-langpack-et, libreoffice-langpack-eu, libreoffice-langpack-fa, libreoffice-langpack-fi,
libreoffice-langpack-fr, libreoffice-langpack-ga, libreoffice-langpack-gl, libreoffice-langpack-gu,
libreoffice-langpack-he, libreoffice-langpack-hi, libreoffice-langpack-hr, libreoffice-langpack-hu,
libreoffice-langpack-id, libreoffice-langpack-it, libreoffice-langpack-ja, libreoffice-langpack-kk,
libreoffice-langpack-kn, libreoffice-langpack-ko, libreoffice-langpack-lt, libreoffice-langpack-lv,
libreoffice-langpack-mai, libreoffice-langpack-ml, libreoffice-langpack-mr, libreoffice-langpack-nb,
libreoffice-langpack-nl, libreoffice-langpack-nn, libreoffice-langpack-nr, libreoffice-langpack-nso,
libreoffice-langpack-or, libreoffice-langpack-pa, libreoffice-langpack-pl, libreoffice-langpack-pt-BR,
libreoffice-langpack-pt-PT, libreoffice-langpack-ro, libreoffice-langpack-ru, libreoffice-langpack-si,
libreoffice-langpack-sk, libreoffice-langpack-sl, libreoffice-langpack-sr, libreoffice-langpack-ss,
libreoffice-langpack-st, libreoffice-langpack-sv, libreoffice-langpack-ta, libreoffice-langpack-te,
libreoffice-langpack-th, libreoffice-langpack-tn, libreoffice-langpack-tr, libreoffice-langpack-ts,
libreoffice-langpack-uk, libreoffice-langpack-ve, libreoffice-langpack-xh, libreoffice-langpack-zh-
Hans, libreoffice-langpack-zh-Hant, libreoffice-langpack-zu, librhsm, librx, librx-devel, libsass, libsass-
devel, libserf, libsigsegv, libsigsegv-devel, libssh, libssh-devel, libstemmer, libstemmer-devel, libubsan,
libucil, libucil-devel, libunicap, libunicap-devel, libuv, libvarlink, libvarlink-devel, libvarlink-util, libvirt-dbus,
libX11-xcb, libxcam, libxcrypt, libxcrypt-devel, libxcrypt-static, libXNVCtrl, libXNVCtrl-devel, libzhuyin,
libzip-tools, lld, lld-devel, lld-libs, lldb, lldb-devel, lldpd, lldpd-devel, llvm, llvm-devel, llvm-doc, llvm-
googletest, llvm-libs, llvm-static, llvm-test, llvm-toolset, log4j-over-slf4j, log4j12, log4j12-javadoc, lohit-
gurmukhi-fonts, lohit-odia-fonts, lorax-composer, lorax-lmc-novirt, lorax-lmc-virt, lorax-templates-
generic, lorax-templates-rhel, lttng-ust, lttng-ust-devel, lua-expat, lua-filesystem, lua-json, lua-libs, lua-
lpeg, lua-lunit, lua-posix, lua-socket, lvm2-dbusd, lz4-libs

M | make-devel, man-db-cron, mariadb-backup, mariadb-common, mariadb-connector-c, mariadb-


connector-c-config, mariadb-connector-c-devel, mariadb-connector-odbc, mariadb-errmsg, mariadb-
gssapi-server, mariadb-java-client, mariadb-oqgraph-engine, mariadb-server-galera, mariadb-server-
utils, maven-artifact-transfer, maven-artifact-transfer-javadoc, maven-lib, maven-resolver, maven-
resolver-api, maven-resolver-connector-basic, maven-resolver-impl, maven-resolver-javadoc, maven-
resolver-spi, maven-resolver-test-util, maven-resolver-transport-classpath, maven-resolver-transport-
file, maven-resolver-transport-http, maven-resolver-transport-wagon, maven-resolver-util, maven-
wagon-file, maven-wagon-ftp, maven-wagon-http, maven-wagon-http-lightweight, maven-wagon-
http-shared, maven-wagon-provider-api, maven-wagon-providers, mcpp, mecab, mecab-ipadic,
mecab-ipadic-EUCJP, mesa-vulkan-devel, meson, metis, metis-devel, microdnf, mingw-binutils-
generic, mingw-filesystem-base, mingw32-binutils, mingw32-bzip2, mingw32-bzip2-static, mingw32-
cairo, mingw32-cpp, mingw32-crt, mingw32-expat, mingw32-filesystem, mingw32-fontconfig,
mingw32-freetype, mingw32-freetype-static, mingw32-gcc, mingw32-gcc-c, mingw32-gettext,
mingw32-gettext-static, mingw32-glib2, mingw32-glib2-static, mingw32-gstreamer1, mingw32-
harfbuzz, mingw32-harfbuzz-static, mingw32-headers, mingw32-icu, mingw32-libffi, mingw32-libjpeg-
turbo, mingw32-libjpeg-turbo-static, mingw32-libpng, mingw32-libpng-static, mingw32-libtiff,
mingw32-libtiff-static, mingw32-openssl, mingw32-pcre, mingw32-pcre-static, mingw32-pixman,
mingw32-pkg-config, mingw32-readline, mingw32-sqlite, mingw32-sqlite-static, mingw32-termcap,
mingw32-win-iconv, mingw32-win-iconv-static, mingw32-winpthreads, mingw32-winpthreads-static,
mingw32-zlib, mingw32-zlib-static, mingw64-binutils, mingw64-bzip2, mingw64-bzip2-static,
mingw64-cairo, mingw64-cpp, mingw64-crt, mingw64-expat, mingw64-filesystem, mingw64-
fontconfig, mingw64-freetype, mingw64-freetype-static, mingw64-gcc, mingw64-gcc-c, mingw64-
gettext, mingw64-gettext-static, mingw64-glib2, mingw64-glib2-static, mingw64-gstreamer1,
mingw64-harfbuzz, mingw64-harfbuzz-static, mingw64-headers, mingw64-icu, mingw64-libffi,
mingw64-libjpeg-turbo, mingw64-libjpeg-turbo-static, mingw64-libpng, mingw64-libpng-static,
mingw64-libtiff, mingw64-libtiff-static, mingw64-openssl, mingw64-pcre, mingw64-pcre-static,


mingw64-pixman, mingw64-pkg-config, mingw64-readline, mingw64-sqlite, mingw64-sqlite-static,


mingw64-termcap, mingw64-win-iconv, mingw64-win-iconv-static, mingw64-winpthreads, mingw64-
winpthreads-static, mingw64-zlib, mingw64-zlib-static, mockito, mockito-javadoc, mod_http2,
mod_md, mozvoikko, mpich, mpich-devel, mpitests-mvapich2-psm2, multilib-rpm-config, munge,
munge-devel, munge-libs, mvapich2, mvapich2-psm2, mysql, mysql-common, mysql-devel, mysql-
errmsg, mysql-libs, mysql-server, mysql-test

N | nbdkit-bash-completion, nbdkit-plugin-gzip, nbdkit-plugin-python3, nbdkit-plugin-xz, ncurses-c++-


libs, ncurses-compat-libs, netconsole-service, network-scripts, network-scripts-team,
NetworkManager-config-connectivity-redhat, nghttp2, nginx, nginx-all-modules, nginx-filesystem,
nginx-mod-http-image-filter, nginx-mod-http-perl, nginx-mod-http-xslt-filter, nginx-mod-mail, nginx-
mod-stream, ninja-build, nkf, nodejs, nodejs-devel, nodejs-docs, nodejs-nodemon, nodejs-packaging,
npm, npth, nss_db, nss_nis, nss_wrapper, nss-altfiles, ntpstat

O | objectweb-pom, objenesis, objenesis-javadoc, ocaml-cppo, ocaml-labltk, ocaml-labltk-devel, oci-


systemd-hook, oci-umount, ocl-icd, ocl-icd-devel, ongres-scram, ongres-scram-client, oniguruma,
oniguruma-devel, openal-soft, openal-soft-devel, openblas, openblas-devel, openblas-openmp,
openblas-openmp64, openblas-openmp64_, openblas-Rblas, openblas-serial64, openblas-serial64_,
openblas-srpm-macros, openblas-static, openblas-threads, openblas-threads64, openblas-threads64_,
opencl-filesystem, opencl-headers, opencv-contrib, OpenIPMI-lanserv, openscap-python3, openssl-
ibmpkcs11, openssl-pkcs11, openwsman-python3, os-maven-plugin, os-maven-plugin-javadoc, osad,
osgi-annotation, osgi-annotation-javadoc, osgi-compendium, osgi-compendium-javadoc, osgi-core,
osgi-core-javadoc, ostree, ostree-devel, ostree-grub2, ostree-libs, overpass-mono-fonts

P | p11-kit-server, pacemaker-schemas, pam_cifscreds, pandoc, pandoc-common, papi-libs, pcaudiolib,


pcp-pmda-podman, pcre-cpp, pcre-utf16, pcre-utf32, peripety, perl-AnyEvent, perl-Attribute-
Handlers, perl-B-Debug, perl-B-Hooks-EndOfScope, perl-bignum, perl-Canary-Stability, perl-Class-
Accessor, perl-Class-Factory-Util, perl-Class-Method-Modifiers, perl-Class-Tiny, perl-Class-
XSAccessor, perl-common-sense, perl-Compress-Bzip2, perl-Config-AutoConf, perl-Config-Perl-V,
perl-CPAN-DistnameInfo, perl-CPAN-Meta-Check, perl-Data-Dump, perl-Data-Section, perl-Data-
UUID, perl-Date-ISO8601, perl-DateTime-Format-Builder, perl-DateTime-Format-HTTP, perl-
DateTime-Format-ISO8601, perl-DateTime-Format-Mail, perl-DateTime-Format-Strptime, perl-
DateTime-TimeZone-SystemV, perl-DateTime-TimeZone-Tzfile, perl-Devel-CallChecker, perl-Devel-
Caller, perl-Devel-GlobalDestruction, perl-Devel-LexAlias, perl-Devel-Peek, perl-Devel-PPPort, perl-
Devel-SelfStubber, perl-Devel-Size, perl-Digest-CRC, perl-DynaLoader-Functions, perl-encoding, perl-
Errno, perl-Eval-Closure, perl-experimental, perl-Exporter-Tiny, perl-ExtUtils-Command, perl-ExtUtils-
Miniperl, perl-ExtUtils-MM-Utils, perl-Fedora-VSP, perl-File-BaseDir, perl-File-chdir, perl-File-
DesktopEntry, perl-File-Find-Object, perl-File-MimeInfo, perl-File-ReadBackwards, perl-Filter-Simple,
perl-generators, perl-Import-Into, perl-Importer, perl-inc-latest, perl-interpreter, perl-IO, perl-IO-All,
perl-IO-Multiplex, perl-IPC-System-Simple, perl-IPC-SysV, perl-JSON-XS, perl-libintl-perl, perl-libnet,
perl-libnetcfg, perl-List-MoreUtils-XS, perl-Locale-gettext, perl-Math-BigInt, perl-Math-BigInt-
FastCalc, perl-Math-BigRat, perl-Math-Complex, perl-Memoize, perl-MIME-Base64, perl-MIME-
Charset, perl-MIME-Types, perl-Module-CoreList-tools, perl-Module-CPANfile, perl-Module-Install-
AuthorTests, perl-Module-Install-ReadmeFromPod, perl-MRO-Compat, perl-namespace-autoclean,
perl-namespace-clean, perl-Net-Ping, perl-Net-Server, perl-NKF, perl-NTLM, perl-open, perl-Params-
Classify, perl-Params-ValidationCompiler, perl-Parse-PMFile, perl-Path-Tiny, perl-Perl-Destruct-Level,
perl-perlfaq, perl-PerlIO-utf8_strict, perl-PerlIO-via-QuotedPrint, perl-Pod-Html, perl-Pod-Markdown,
perl-Ref-Util, perl-Ref-Util-XS, perl-Role-Tiny, perl-Scope-Guard, perl-SelfLoader, perl-Software-
License, perl-Specio, perl-Sub-Exporter-Progressive, perl-Sub-Identify, perl-Sub-Info, perl-Sub-Name,
perl-SUPER, perl-Term-ANSIColor, perl-Term-Cap, perl-Term-Size-Any, perl-Term-Size-Perl, perl-
Term-Table, perl-Test, perl-Test-LongString, perl-Test-Warnings, perl-Test2-Suite, perl-Text-
Balanced, perl-Text-Tabs+Wrap, perl-Text-Template, perl-Types-Serialiser, perl-Unicode-Collate, perl-
Unicode-EastAsianWidth, perl-Unicode-LineBreak, perl-Unicode-Normalize, perl-Unicode-UTF8, perl-
Unix-Syslog, perl-utils, perl-Variable-Magic, perl-YAML-LibYAML, php-dbg, php-gmp, php-json, php-
opcache, php-pecl-apcu, php-pecl-apcu-devel, php-pecl-zip, pigz, pinentry-emacs, pinentry-gnome3,
pipewire, pipewire-devel, pipewire-doc, pipewire-libs, pipewire-utils, pkgconf, pkgconf-m4, pkgconf-


pkg-config, pki-servlet-4.0-api, pki-servlet-container, platform-python, platform-python-coverage,


platform-python-debug, platform-python-devel, platform-python-pip, platform-python-setuptools,
plexus-interactivity-api, plexus-interactivity-jline, plexus-languages, plexus-languages-javadoc, plotutils,
plotutils-devel, pmix, pmreorder, podman, podman-docker, policycoreutils-dbus, policycoreutils-python-
utils, polkit-libs, poppler-qt5, poppler-qt5-devel, postfix-mysql, postfix-pgsql, postgresql-odbc-tests,
postgresql-plpython3, postgresql-server-devel, postgresql-test-rpm-macros, postgresql-upgrade-
devel, potrace, powermock-api-easymock, powermock-api-mockito, powermock-api-support,
powermock-common, powermock-core, powermock-javadoc, powermock-junit4, powermock-reflect,
powermock-testng, prefixdevname, pstoedit, ptscotch-mpich, ptscotch-mpich-devel, ptscotch-mpich-
devel-parmetis, ptscotch-openmpi, ptscotch-openmpi-devel, publicsuffix-list, publicsuffix-list-dafsa,
python-pymongo-doc, python-qt5-rpm-macros, python-sphinx-locale, python-sqlalchemy-doc,
python-virtualenv-doc, python2, python2-attrs, python2-babel, python2-backports, python2-
backports-ssl_match_hostname, python2-bson, python2-cairo, python2-cairo-devel, python2-chardet,
python2-coverage, python2-Cython, python2-debug, python2-devel, python2-dns, python2-docs,
python2-docs-info, python2-docutils, python2-funcsigs, python2-idna, python2-ipaddress, python2-
iso8601, python2-jinja2, python2-libs, python2-lxml, python2-markupsafe, python2-mock, python2-
nose, python2-numpy, python2-numpy-doc, python2-numpy-f2py, python2-pip, python2-pluggy,
python2-psycopg2, python2-psycopg2-debug, python2-psycopg2-tests, python2-py, python2-
pygments, python2-pymongo, python2-pymongo-gridfs, python2-PyMySQL, python2-pysocks,
python2-pytest, python2-pytest-mock, python2-pytz, python2-pyyaml, python2-requests, python2-
scipy, python2-scour, python2-setuptools, python2-setuptools_scm, python2-six, python2-sqlalchemy,
python2-talloc, python2-test, python2-tkinter, python2-tools, python2-urllib3, python2-virtualenv,
python2-wheel, python3-abrt, python3-abrt-addon, python3-abrt-container-addon, python3-abrt-doc,
python3-argcomplete, python3-argh, python3-asn1crypto, python3-attrs, python3-audit, python3-
augeas, python3-avahi, python3-azure-sdk, python3-babel, python3-bcc, python3-bind, python3-
blivet, python3-blockdev, python3-boom, python3-boto3, python3-botocore, python3-brlapi,
python3-bson, python3-bytesize, python3-cairo, python3-cffi, python3-chardet, python3-click,
python3-clufter, python3-configobj, python3-configshell, python3-cpio, python3-createrepo_c,
python3-cryptography, python3-cups, python3-custodia, python3-Cython, python3-dateutil, python3-
dbus, python3-dbus-client-gen, python3-dbus-python-client-gen, python3-dbus-signature-pyparsing,
python3-decorator, python3-dmidecode, python3-dnf, python3-dnf-plugin-spacewalk, python3-dnf-
plugin-versionlock, python3-dnf-plugins-core, python3-dns, python3-docs, python3-docutils,
python3-enchant, python3-ethtool, python3-evdev, python3-fasteners, python3-firewall, python3-
flask, python3-gevent, python3-gflags, python3-gobject, python3-gobject-base, python3-google-api-
client, python3-gpg, python3-greenlet, python3-greenlet-devel, python3-gssapi, python3-hawkey,
python3-hivex, python3-html5lib, python3-httplib2, python3-humanize, python3-hwdata, python3-
hypothesis, python3-idna, python3-imagesize, python3-iniparse, python3-inotify, python3-into-dbus-
python, python3-ipaclient, python3-ipalib, python3-ipaserver, python3-iscsi-initiator-utils, python3-
iso8601, python3-itsdangerous, python3-jabberpy, python3-javapackages, python3-jinja2, python3-
jmespath, python3-jsonpatch, python3-jsonpointer, python3-jsonschema, python3-justbases, python3-
justbytes, python3-jwcrypto, python3-jwt, python3-kdcproxy, python3-keycloak-httpd-client-install,
python3-kickstart, python3-kmod, python3-koan, python3-langtable, python3-ldap, python3-ldb,
python3-lesscpy, python3-lib389, python3-libcomps, python3-libdnf, python3-libguestfs, python3-
libipa_hbac, python3-libnl3, python3-libpfm, python3-libproxy, python3-librepo, python3-libreport,
python3-libselinux, python3-libsemanage, python3-libsss_nss_idmap, python3-libstoragemgmt,
python3-libstoragemgmt-clibs, python3-libuser, python3-libvirt, python3-libvoikko, python3-libxml2,
python3-linux-procfs, python3-lit, python3-lldb, python3-louis, python3-lxml, python3-magic,
python3-mako, python3-markdown, python3-markupsafe, python3-meh, python3-meh-gui, python3-
mock, python3-mod_wsgi, python3-mpich, python3-netaddr, python3-netifaces, python3-newt,
python3-nose, python3-nss, python3-ntplib, python3-numpy, python3-numpy-f2py, python3-
oauth2client, python3-oauthlib, python3-openipmi, python3-openmpi, python3-ordered-set, python3-
osa-common, python3-osad, python3-packaging, python3-pcp, python3-perf, python3-pexpect,
python3-pid, python3-pillow, python3-pki, python3-pluggy, python3-ply, python3-policycoreutils,
python3-prettytable, python3-productmd, python3-psycopg2, python3-ptyprocess, python3-
pwquality, python3-py, python3-pyasn1, python3-pyasn1-modules, python3-pyatspi, python3-
pycparser, python3-pycurl, python3-pydbus, python3-pygments, python3-pymongo, python3-


pymongo-gridfs, python3-PyMySQL, python3-pyOpenSSL, python3-pyparsing, python3-pyparted,


python3-pyqt5-sip, python3-pyserial, python3-pysocks, python3-pytest, python3-pytoml, python3-
pytz, python3-pyudev, python3-pyusb, python3-pywbem, python3-pyxattr, python3-pyxdg, python3-
pyyaml, python3-qrcode, python3-qrcode-core, python3-qt5, python3-qt5-base, python3-qt5-devel,
python3-reportlab, python3-requests, python3-requests-file, python3-requests-ftp, python3-
requests-oauthlib, python3-rhn-check, python3-rhn-client-tools, python3-rhn-setup, python3-rhn-
setup-gnome, python3-rhn-virtualization-common, python3-rhn-virtualization-host, python3-rhncfg,
python3-rhncfg-actions, python3-rhncfg-client, python3-rhncfg-management, python3-rhnlib,
python3-rhnpush, python3-rpm, python3-rrdtool, python3-rtslib, python3-s3transfer, python3-samba,
python3-samba-test, python3-schedutils, python3-scipy, python3-scons, python3-semantic_version,
python3-setools, python3-setuptools_scm, python3-simpleline, python3-sip, python3-sip-devel,
python3-six, python3-slip, python3-slip-dbus, python3-snowballstemmer, python3-spacewalk-abrt,
python3-spacewalk-backend-libs, python3-spacewalk-koan, python3-spacewalk-oscap, python3-
spacewalk-usix, python3-speechd, python3-sphinx, python3-sphinx_rtd_theme, python3-sphinx-
theme-alabaster, python3-sphinxcontrib-websupport, python3-sqlalchemy, python3-sss, python3-sss-
murmur, python3-sssdconfig, python3-subscription-manager-rhsm, python3-suds, python3-sure,
python3-sushy, python3-syspurpose, python3-systemd, python3-talloc, python3-tbb, python3-tdb,
python3-tevent, python3-unbound, python3-unittest2, python3-uritemplate, python3-urllib3,
python3-urwid, python3-varlink, python3-virtualenv, python3-webencodings, python3-werkzeug,
python3-whoosh, python3-yubico, python36, python36-debug, python36-devel, python36-rpm-
macros

Q | qemu-kvm-block-curl, qemu-kvm-block-gluster, qemu-kvm-block-iscsi, qemu-kvm-block-rbd,


qemu-kvm-block-ssh, qemu-kvm-core, qemu-kvm-tests, qgpgme, qhull-devel, qt5-devel, qt5-srpm-
macros, quota-rpc

R | re2c, readonly-root, redhat-backgrounds, redhat-logos-httpd, redhat-logos-ipa, redhat-release,


redis, redis-devel, redis-doc, resteasy, resteasy-javadoc, rhel-system-roles, rhn-custom-info, rhn-
virtualization-host, rhncfg, rhncfg-actions, rhncfg-client, rhncfg-management, rhnpush, rls, rpcgen,
rpcsvc-proto-devel, rpm-mpi-hooks, rpm-ostree, rpm-ostree-libs, rpm-plugin-ima, rpm-plugin-
prioreset, rpm-plugin-selinux, rpm-plugin-syslog, rsync-daemon, rubygem-bson, rubygem-bson-doc,
rubygem-did_you_mean, rubygem-diff-lcs, rubygem-mongo, rubygem-mongo-doc, rubygem-mysql2,
rubygem-mysql2-doc, rubygem-net-telnet, rubygem-openssl, rubygem-pg, rubygem-pg-doc,
rubygem-power_assert, rubygem-rspec, rubygem-rspec-core, rubygem-rspec-expectations, rubygem-
rspec-mocks, rubygem-rspec-support, rubygem-test-unit, rubygem-xmlrpc, runc, rust, rust-analysis,
rust-debugger-common, rust-doc, rust-gdb, rust-lldb, rust-src, rust-srpm-macros, rust-std-static,
rust-toolset, rustfmt

S | samyak-odia-fonts, sane-backends-daemon, sblim-sfcCommon, scala, scala-apidoc, scala-swing,


scotch, scotch-devel, SDL2, SDL2-devel, SDL2-static, sendmail-milter-devel, sil-scheherazade-fonts,
sisu-mojos, sisu-mojos-javadoc, skopeo, slf4j-ext, slf4j-jcl, slf4j-jdk14, slf4j-log4j12, slf4j-sources,
slirp4netns, smc-tools, socket_wrapper, sombok, sombok-devel, sos-audit, spacewalk-abrt, spacewalk-
client-cert, spacewalk-koan, spacewalk-oscap, spacewalk-remote-utils, spacewalk-usix, sparsehash-
devel, spec-version-maven-plugin, spec-version-maven-plugin-javadoc, speech-dispatcher-espeak-ng,
speexdsp, speexdsp-devel, spice-gtk, spirv-tools-libs, splix, sqlite-libs, sscg, sssd-nfs-idmap, stratis-cli,
stratisd, SuperLU, SuperLU-devel, supermin-devel, swig-gdb, switcheroo-control, syslinux-extlinux-
nonlinux, syslinux-nonlinux, systemd-container, systemd-journal-remote, systemd-pam, systemd-tests,
systemd-udev, systemtap-exporter, systemtap-runtime-python3

T | target-restore, tcl-doc, texlive-anyfontsize, texlive-awesomebox, texlive-babel-english, texlive-


breqn, texlive-capt-of, texlive-classpack, texlive-ctablestack, texlive-dvisvgm, texlive-environ, texlive-
eqparbox, texlive-finstrut, texlive-fontawesome, texlive-fonts-tlwg, texlive-graphics-cfg, texlive-
graphics-def, texlive-import, texlive-knuth-lib, texlive-knuth-local, texlive-latex2man, texlive-lib,
texlive-lib-devel, texlive-linegoal, texlive-lineno, texlive-ltabptch, texlive-lualibs, texlive-luatex85,
texlive-manfnt-font, texlive-mathtools, texlive-mflogo-font, texlive-needspace, texlive-tabu, texlive-
tabulary, texlive-tex-ini-files, texlive-texlive-common-doc, texlive-texlive-docindex, texlive-texlive-en,


texlive-texlive-msg-translations, texlive-texlive-scripts, texlive-trimspaces, texlive-unicode-data,


texlive-updmap-map, texlive-upquote, texlive-wasy2-ps, texlive-xmltexconfig, thai-scalable-laksaman-
fonts, timedatex, tinycdb, tinycdb-devel, tinyxml2, tinyxml2-devel, tlog, torque, torque-devel, torque-
libs, tpm2-abrmd-selinux, tracker-miners, trousers-lib, tuned-profiles-nfv-host-bin, twolame-libs

U | uglify-js, uid_wrapper, usbguard-dbus, userspace-rcu, userspace-rcu-devel, utf8proc, uthash-devel,


util-linux-user

V | varnish, varnish-devel, varnish-docs, varnish-modules, vulkan-headers, vulkan-loader, vulkan-loader-


devel

W | WALinuxAgent, web-assets-devel, web-assets-filesystem, webkit2gtk3, webkit2gtk3-devel,


webkit2gtk3-jsc, webkit2gtk3-jsc-devel, webkit2gtk3-plugin-process-gtk2, wireshark-cli, woff2

X | Xaw3d, Xaw3d-devel, xmlstreambuffer, xmlstreambuffer-javadoc, xmvn-api, xmvn-bisect, xmvn-


connector-aether, xmvn-connector-ivy, xmvn-core, xmvn-install, xmvn-minimal, xmvn-mojo, xmvn-
parent-pom, xmvn-resolve, xmvn-subst, xmvn-tools-pom, xorg-x11-drv-wacom-serial-support, xterm-
resize

Y | yasm

A.2. PACKAGE REPLACEMENTS


The following table lists packages that were replaced, renamed, merged, or split:

Original package(s) | New package(s) | Changed since | Note

389-ds-base | 389-ds-base, 389-ds-base-legacy-tools | RHEL 8.0 | The 389-ds-base package in RHEL 7 contains Perl Tools for manipulating the Directory Server. In RHEL 8, a new set of tools written in Python is distributed within the 389-ds-base package. The legacy Perl Tools have been extracted into a separate package, 389-ds-base-legacy-tools, but are deprecated and not recommended for use.

AAVMF edk2-aarch64 RHEL 8.0

abrt-addon-python python3-abrt- RHEL 8.0


addon

abrt-python python3-abrt RHEL 8.0

abrt-python-doc python3-abrt-doc RHEL 8.0

adcli adcli, adcli-doc RHEL 8.0

adwaita-qt5 adwaita-qt RHEL 8.0


alsa-utils alsa-utils, alsa-utils- RHEL 8.0


alsabat

anaconda-core anaconda-core, RHEL 8.0


anaconda-install-
env-deps

apache-commons- apache-commons- RHEL 8.0


collections- collections-javadoc
testframework-
javadoc

apr-util | apr-util, apr-util-bdb, apr-util-openssl | RHEL 8.0 | The apr-util-bdb and apr-util-openssl packages have been split from apr-util. These packages provide the loadable modules supporting Berkeley DB through the apr_dbm.h interface and OpenSSL through the apr_crypto.h interface, respectively. Both apr-util-bdb and apr-util-openssl are weak dependencies of apr-util, so packages using these APIs should continue to work without changes.

aqute-bndlib- aqute-bnd-javadoc RHEL 8.0


javadoc

arptables iptables-arptables RHEL 8.0

authconfig authselect-compat RHEL 8.0 The authselect utility improves the


configuration of user authentication on RHEL
8 hosts and it is the only supported way of
configuring the operating system’s PAM
stack. To simplify the migration from
authconfig, the authselect-compat
package is provided with the respective
compatibility command.
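
The commands below are a minimal illustrative sketch of that migration, not a prescribed procedure; the profile name and optional features must match the host's actual setup:

    authselect list                                 # show the available profiles
    authselect select sssd with-mkhomedir --force   # switch the PAM/nsswitch configuration to the sssd profile
    authselect current                              # confirm which profile is now in use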

bacula-director bacula-director, RHEL 8.0


bacula-logwatch

bind-libs-lite | bind-export-libs, bind-libs-lite | RHEL 8.0 | The bind-libs-lite libraries have been moved to the bind-export-libs package, used by the dhcp-client and dhcp-server packages. The bind-libs-lite libraries now contain a subset of bind-libs, which depends on the bind-libs-lite package. The dhcp-server and dhcp-client now depend on the bind-export-libs package.


bind-lite-devel | bind-export-devel, bind-lite-devel | RHEL 8.0 | The bind-export-devel package provides a replacement for the bind-lite-devel package. Cflags and libraries used for linking to the export libraries should be obtained from the isc-export-config.sh output. Linking against the bind-export-libs libraries should be done using the isc-export-config.sh parameters.

bluez bluez, bluez-obexd RHEL 8.0

boost-devel boost-devel, boost- RHEL 8.0


python3-devel

boost-mpich- boost-mpich- RHEL 8.0


python python3

boost-openmpi- boost-openmpi- RHEL 8.0


python python3

boost-python boost-python3 RHEL 8.0

brltty-at-spi brltty-at-spi2 RHEL 8.0

cjkuni-uming-fonts google-noto-serif- RHEL 8.0


cjk-ttc-fonts

compat-libgfortran- compat-libgfortran- RHEL 8.0


41 48

compat-locales-sap compat-locales- RHEL 8.1


sap, compat-
locales-sap-
common

compat-locales- compat-locales-sap RHEL 8.0


sap, compat-
locales-sap-
common

control-center gnome-control- RHEL 8.0


center

control-center- gnome-control- RHEL 8.0


filesystem center-filesystem


coolkey opensc RHEL 8.0

coreutils coreutils, coreutils- RHEL 8.0


common

createrepo createrepo_c, RHEL 8.0


python3-
createrepo_c

Cython python2-Cython, RHEL 8.0


python3-Cython

dbus dbus, dbus- RHEL 8.0


common, dbus-
daemon, dbus-tools

dbus-python python3-dbus RHEL 8.0

deltarpm drpm RHEL 8.0

dhclient dhcp-client RHEL 8.0

dhcp dhcp-relay, dhcp- RHEL 8.0


server

dnf-utils yum-utils RHEL 8.1

dnssec-trigger dnssec-trigger, RHEL 8.0


dnssec-trigger-
panel

dracut dracut, dracut-live, RHEL 8.0


dracut-squash

dstat pcp-system-tools RHEL 8.0

easymock2 easymock RHEL 8.0

easymock2-javadoc easymock-javadoc RHEL 8.0

ebtables iptables-ebtables RHEL 8.0

edac-utils rasdaemon RHEL 8.0


emacs-common, emacs-common RHEL 8.0


emacs-el

emacs-libidn, libidn libidn RHEL 8.0

emacs-mercurial, mercurial RHEL 8.0


emacs-mercurial-el,
mercurial

espeak espeak-ng RHEL 8.0 The espeak package, providing backends for
speech synthesis, has been replaced by an
actively developed espeak-ng package.
espeak-ng is mostly compatible with
espeak.

firstboot gnome-initial-setup RHEL 8.0

foomatic-filters cups-filters RHEL 8.0

freerdp freerdp, libwinpr RHEL 8.0

freerdp-devel freerdp-devel, RHEL 8.0


libwinpr-devel

freerdp-libs, freerdp-libs RHEL 8.0


freerdp-plugins

fuse fuse, fuse-common RHEL 8.0

gdb gdb, gdb-headless RHEL 8.0

gdbm gdbm, gdbm-libs RHEL 8.0

gdk-pixbuf2 gdk-pixbuf2, gdk- RHEL 8.0


pixbuf2-modules,
gdk-pixbuf2-xlib

gdk-pixbuf2-devel gdk-pixbuf2-devel, RHEL 8.0


gdk-pixbuf2-xlib-
devel

gdm, pulseaudio- gdm RHEL 8.0


gdm-hooks


ghostscript ghostscript, libgs, RHEL 8.0


libijs

ghostscript-devel libgs-devel RHEL 8.0

ghostscript-fonts urw-base35-fonts RHEL 8.0

git git, git-core, git- RHEL 8.0


core-doc, git-
subtree

glassfish-el-api- glassfish-el-javadoc RHEL 8.0


javadoc

glassfish-fastinfoset glassfish- RHEL 8.0


fastinfoset,
glassfish-
fastinfoset-javadoc

glassfish-jaxb glassfish-jaxb-bom, RHEL 8.0


glassfish-jaxb-bom-
ext, glassfish-jaxb-
codemodel,
glassfish-jaxb-
codemodel-
annotation-
compiler, glassfish-
jaxb-codemodel-
parent, glassfish-
jaxb-core, glassfish-
jaxb-external-
parent, glassfish-
jaxb-parent,
glassfish-jaxb-
rngom, glassfish-
jaxb-runtime,
glassfish-jaxb-
runtime-parent,
glassfish-jaxb-txw-
parent, glassfish-
jaxb-txw2

glassfish-jaxb-api glassfish-jaxb-api, RHEL 8.0


glassfish-jaxb-api-
javadoc


glibc | glibc, glibc-all-langpacks, glibc-locale-source, glibc-minimal-langpack, libnsl, libxcrypt, nss_db | RHEL 8.0 | The non-core NSS modules for NIS and other data sources have been split into separate packages (nss_db, libnsl). Language support has been split into language packs (glibc-all-langpacks, glibc-minimal-langpack, glibc-locale-source, and the glibc-langpack-* packages). The libxcrypt package is distinct.
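
Since locale data is no longer installed wholesale, a host that needs a particular locale typically pulls in its language pack explicitly; a minimal sketch (the language code here is only an example):

    yum install glibc-langpack-en      # install just the English locale data
    yum install glibc-all-langpacks    # or install every available locale at once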

glibc-common glibc-common, RHEL 8.0


rpcgen

glibc-devel compat-libpthread- RHEL 8.0


nonshared, glibc-
devel, libnsl2-devel,
libxcrypt-devel

glibc-headers glibc-headers, RHEL 8.0


rpcsvc-proto-devel

glibc-static glibc-static, RHEL 8.0


libxcrypt-static

gmp gmp, gmp-c++ RHEL 8.0

gnome- gnome- RHEL 8.0


backgrounds backgrounds,
gnome-
backgrounds-extras

gnome-session, gnome-session RHEL 8.0


gnome-session-
custom-session

gnome-system-log gnome-logs RHEL 8.0

gnome-tweak-tool gnome-tweaks RHEL 8.0

golang go-srpm-macros, RHEL 8.0


golang

google-noto-sans- google-noto-sans- RHEL 8.0


cjk-fonts cjk-ttc-fonts


google-noto-sans- google-noto-sans- RHEL 8.0


japanese-fonts cjk-jp-fonts

grub2-common efi-filesystem, RHEL 8.0


grub2-common

grub2-tools grub2-tools, grub2- RHEL 8.0


tools-efi

gstreamer1-plugins- gstreamer1-plugins- RHEL 8.0


bad-free-gtk good-gtk

guava guava20 RHEL 8.0

guava-javadoc guava20-javadoc RHEL 8.0

gutenprint gutenprint, RHEL 8.0


gutenprint-libs,
gutenprint-libs-ui

hawkey, libhif libdnf RHEL 8.0

hmaccalc libkcapi-hmaccalc RHEL 8.0

hpijs hplip RHEL 8.0

i2c-tools i2c-tools, i2c-tools- RHEL 8.0


perl

ibus-chewing ibus-libzhuyin RHEL 8.0

infiniband-diags, infiniband-diags RHEL 8.0


libibmad

infiniband-diags- infiniband-diags- RHEL 8.0


devel, libibmad- devel
devel

infiniband-diags- infiniband-diags- RHEL 8.0


devel-static, devel-static
libibmad-static

initscripts initscripts, RHEL 8.0


netconsole-service,
network-scripts,
readonly-root


ipmitool ipmievd, ipmitool RHEL 8.0

iproute iproute, iproute-tc RHEL 8.0

iptables iptables, iptables- RHEL 8.0


libs

iscsi-initiator-utils iscsi-initiator-utils, RHEL 8.0


python3-iscsi-
initiator-utils

istack-commons istack-commons, RHEL 8.0


istack-commons-
runtime, istack-
commons-tools

ivtv-firmware, linux- linux-firmware RHEL 8.0


firmware

iwl7260-firmware, iwl7260-firmware RHEL 8.0


iwl7265-firmware

jabberpy python3-jabberpy RHEL 8.0

jackson jackson- RHEL 8.0


annotations,
jackson-core,
jackson-databind,
jackson-jaxrs-json-
provider, jackson-
jaxrs-providers,
jackson-jaxrs-
providers-
datatypes, jackson-
module-jaxb-
annotations

jackson-javadoc jackson- RHEL 8.0


annotations-
javadoc, jackson-
core-javadoc,
jackson-databind-
javadoc, jackson-
jaxrs-providers-
javadoc, jackson-
module-jaxb-
annotations-javadoc


javapackages-tools ivy-local, RHEL 8.0


javapackages-
filesystem,
javapackages-tools

jboss-annotations- jboss-annotations- RHEL 8.0


1.1-api 1.2-api

jboss-annotations- jboss-annotations- RHEL 8.0


1.1-api-javadoc 1.2-api-javadoc

jboss-interceptors- jboss-interceptors- RHEL 8.0


1.1-api 1.2-api

jboss-interceptors- jboss-interceptors- RHEL 8.0


1.1-api-javadoc 1.2-api-javadoc

joda-time java-1.8.0-openjdk- RHEL 8.0


headless

joda-time-javadoc java-1.8.0-openjdk- RHEL 8.0


javadoc

kernel kernel, kernel-core, RHEL 8.0


kernel-modules,
kernel-modules-
extra

kernel-debug kernel-debug, RHEL 8.0


kernel-debug-core,
kernel-debug-
modules, kernel-
debug-modules-
extra

kernel-rt kernel-rt, kernel-rt- RHEL 8.0


core, kernel-rt-
modules, kernel-rt-
modules-extra

kernel-rt-debug kernel-rt-debug, RHEL 8.0


kernel-rt-debug-
core, kernel-rt-
debug-modules,
kernel-rt-debug-
modules-extra


kernel-tools, qemu- kernel-tools RHEL 8.0


kvm-tools

kexec-tools, kexec- kexec-tools RHEL 8.0


tools-eppic

kexec-tools- kdump-anaconda- RHEL 8.0


anaconda-addon addon

koan koan, python3-koan RHEL 8.0

langtable-python python3-langtable RHEL 8.0

lasso-python python3-lasso RHEL 8.0

ldns ldns, ldns-utils RHEL 8.0

libgnome-keyring libsecret RHEL 8.0

libgudev1 libgudev RHEL 8.0

libgudev1-devel libgudev-devel RHEL 8.0

libinput libinput, libinput- RHEL 8.0


utils

liblouis-python python3-louis RHEL 8.0

libmemcached libmemcached, RHEL 8.0


libmemcached-libs

libmodulemd libmodulemd, RHEL 8.0


libmodulemd1

libmusicbrainz libmusicbrainz5 RHEL 8.0

libmusicbrainz-devel libmusicbrainz5- RHEL 8.0


devel

libnice libnice, libnice- RHEL 8.0


gstreamer1

libpeas-loader- libpeas-loader- RHEL 8.0


python python3


libpfm-python python3-libpfm RHEL 8.0

libproxy-mozjs libproxy-webkitgtk4 RHEL 8.0

libproxy-python python3-libproxy RHEL 8.0

libproxy-webkitgtk3 libproxy-webkitgtk4 RHEL 8.0

librabbitmq- librabbitmq-tools RHEL 8.0


examples

librados2-devel librados-devel RHEL 8.0

librbd1-devel librbd-devel RHEL 8.0

libreoffice-base libreoffice-base, RHEL 8.0


libreoffice-help-en

libreoffice-calc libreoffice-calc, RHEL 8.0


libreoffice-help-en

libreoffice-core libreoffice-core, RHEL 8.0


libreoffice-help-en

libreoffice-draw libreoffice-draw, RHEL 8.0


libreoffice-help-en

libreoffice-impress libreoffice-help-en, RHEL 8.0


libreoffice-impress

libreoffice-math libreoffice-help-en, RHEL 8.0


libreoffice-math

libreoffice-writer libreoffice-help-en, RHEL 8.0


libreoffice-writer

libreport-python python3-libreport RHEL 8.0

libselinux-python python3-libselinux RHEL 8.0

libselinux-python libselinux-python, RHEL 7.8


libselinux-python3

libsemanage- python3- RHEL 8.0


python libsemanage


libssh2 libssh, libssh2 RHEL 8.0 The libssh2 package was temporarily
available in RHEL 8.0 due to a qemu-kvm
dependency. Starting with RHEL 8.1, the
QEMU emulator uses the libssh library
instead, and libssh2 has been removed.

libstoragemgmt- python3- RHEL 8.0


python libstoragemgmt

libstoragemgmt- python3- RHEL 8.0


python-clibs libstoragemgmt-
clibs

libuser-python python3-libuser RHEL 8.0

libvirt-python python3-libvirt RHEL 8.0

libX11 libX11, libX11-xcb RHEL 8.0

libxml2-python python3-libxml2 RHEL 8.0

llvm-private llvm RHEL 8.0

llvm-private-devel llvm-devel RHEL 8.0

log4j log4j12 RHEL 8.0

log4j-javadoc log4j12-javadoc RHEL 8.0

lohit-oriya-fonts lohit-odia-fonts RHEL 8.0

lohit-punjabi-fonts lohit-gurmukhi- RHEL 8.0


fonts

lua lua, lua-libs RHEL 8.0

lvm2-python-boom boom-boot, boom- RHEL 8.0


boot-conf, boom-
boot-grub2,
python3-boom

lz4 lz4, lz4-libs RHEL 8.0

make make, make-devel RHEL 8.0


mariadb-devel mariadb-connector- RHEL 8.0


c-devel, mariadb-
devel

mariadb-libs mariadb-connector- RHEL 8.0


c

mariadb-server mariadb-server, RHEL 8.0


mariadb-server-utils

maven maven, maven-lib RHEL 8.0

maven-downloader maven-artifact- RHEL 8.0


transfer

maven-downloader- maven-artifact- RHEL 8.0


javadoc transfer-javadoc

maven-doxia-tools maven-doxia- RHEL 8.0


sitetools

maven-doxia-tools- maven-doxia- RHEL 8.0


javadoc sitetools-javadoc

maven-local javapackages-local, RHEL 8.0


maven-local

maven-wagon maven-wagon, RHEL 8.0


maven-wagon-file,
maven-wagon-ftp,
maven-wagon-http,
maven-wagon-http-
lightweight, maven-
wagon-http-shared,
maven-wagon-
provider-api,
maven-wagon-
providers

mesa-libEGL-devel mesa-khr-devel, RHEL 8.0


mesa-libEGL-devel

mesa-libwayland- libwayland-egl RHEL 8.0


egl


mesa-libwayland- wayland-devel RHEL 8.0


egl-devel, wayland-
devel

mod_auth_kerb mod_auth_gssapi RHEL 8.0

mod_nss mod_ssl RHEL 8.0

mod_wsgi python3-mod_wsgi RHEL 8.0 The mod_wsgi module for the Apache
HTTP Server has been updated to Python 3.
WSGI applications are now supported only
with Python 3, and must be migrated from
Python 2.

mpich-3.0, mpich- mpich RHEL 8.0


3.2

mpich-3.0-devel, mpich-devel RHEL 8.0


mpich-3.2-devel

mpitests-mpich, mpitests-mpich RHEL 8.0


mpitests-mpich32

mpitests-mvapich2, mpitests-mvapich2 RHEL 8.0


mpitests-
mvapich222,
mpitests-
mvapich23

mpitests-mvapich2- mpitests-mvapich2- RHEL 8.0


psm, mpitests- psm2
mvapich222-psm,
mpitests-
mvapich222-psm2,
mpitests-
mvapich23-psm,
mpitests-
mvapich23-psm2

mpitests-openmpi, mpitests-openmpi RHEL 8.0


mpitests-openmpi3

mvapich2-2.0, mvapich2 RHEL 8.0


mvapich2-2.2,
mvapich23


mvapich2-2.0-psm, mvapich2-psm2 RHEL 8.0


mvapich2-2.2-psm,
mvapich2-2.2-psm2,
mvapich23-psm,
mvapich23-psm2

mysql-connector- mariadb-java-client RHEL 8.0


java

mysql-connector- mariadb-connector- RHEL 8.0


odbc odbc

MySQL-python python2-PyMySQL, RHEL 8.0


python3-PyMySQL

nbdkit-plugin- nbdkit-plugin- RHEL 8.0


python2 python3

ncurses-libs ncurses-c++-libs, RHEL 8.0


ncurses-compat-
libs, ncurses-libs

newt-python python3-newt RHEL 8.0

nextgen-yum4 yum RHEL 8.0

nhn-nanum-gothic- google-noto-sans- RHEL 8.0


fonts cjk-ttc-fonts

ntp chrony, ntpstat RHEL 8.0 For details, see Using the Chrony suite to
configure NTP.
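
For a host moving off ntpd, a minimal chrony sketch (the server name is a placeholder) looks like this:

    echo 'server ntp.example.com iburst' >> /etc/chrony.conf   # point chronyd at a time source
    systemctl enable --now chronyd                              # enable and start the replacement daemon
    chronyc tracking                                            # verify that the clock is synchronizing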

ntpdate chrony RHEL 8.0

numpy python2-numpy, RHEL 8.0


python3-numpy

numpy-f2py python2-numpy- RHEL 8.0


f2py, python3-
numpy-f2py

objectweb-asm4 objectweb-asm RHEL 8.0

objectweb-asm4- objectweb-asm- RHEL 8.0


javadoc javadoc


opencv opencv, opencv- RHEL 8.0


contrib, opencv-
core

OpenIPMI OpenIPMI, RHEL 8.0


OpenIPMI-lanserv

OpenIPMI-python python3-openipmi RHEL 8.0

openjpeg openjpeg2 RHEL 8.0

openjpeg-devel openjpeg2-devel RHEL 8.0

openmpi, openmpi3 openmpi RHEL 8.0

openmpi-devel, openmpi-devel RHEL 8.0


openmpi3-devel

openscap, openscap RHEL 8.0


openscap-extra-
probes

openscap-python openscap-python3 RHEL 8.0

openwsman-python openwsman- RHEL 8.0


python3

oprofile perf RHEL 8.0

osa-common python3-osa- RHEL 8.0


common

osad osad, python3-osad RHEL 8.0

ostree ostree, ostree-libs RHEL 8.0

ostree-fuse ostree RHEL 8.0

OVMF edk2-ovmf RHEL 8.0

p11-kit-doc p11-kit-devel RHEL 8.0

pacemaker-cli pacemaker-cli, RHEL 8.0


pacemaker-
schemas


PackageKit, PackageKit RHEL 8.0


PackageKit-yum

pam_krb5 sssd RHEL 8.0 For details on migrating from pam_krb5 to


sssd, see Migrating from pam_krb5 to sssd in
the upstream SSSD documentation.

pam_pkcs11 sssd RHEL 8.0

papi papi, papi-libs RHEL 8.0

parfait parfait, parfait- RHEL 8.0


examples, parfait-
javadoc, pcp-
parfait-agent

pcp-pmda-kvm pcp RHEL 8.0

pcp-webapi pcp RHEL 8.2

pcp-webapp- grafana-pcp RHEL 8.2


blinkenlights

pcp-webapp- grafana-pcp RHEL 8.2


grafana

pcp-webapp- grafana-pcp RHEL 8.2


graphite

pcp-webapp-vector grafana-pcp RHEL 8.2

pcp-webjs grafana-pcp RHEL 8.2

pcre | pcre, pcre-cpp, pcre-utf16, pcre-utf32 | RHEL 8.0 | The PCRE libpcrecpp.so.0 library with the C++ API has been moved from the pcre package to the pcre-cpp package. The libpcre16.so.0 library with UTF-16 support has been moved from the pcre package to the pcre-utf16 package, and the libpcre32.so.0 library with UTF-32 support has been moved to the pcre-utf32 package.
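
Source code using these interfaces does not change, but the runtime libraries now come from the split-out packages; a minimal build sketch (the source file name is hypothetical, and the exact devel package layout may differ):

    yum install pcre-devel pcre-cpp       # headers plus the relocated C++ runtime library
    g++ -o app app.cpp -lpcrecpp -lpcre   # link against the C++ wrapper and the core PCRE library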


perl | perl, perl-Attribute-Handlers, perl-B-Debug, perl-bignum, perl-Devel-Peek, perl-Devel-PPPort, perl-Devel-SelfStubber, perl-Errno, perl-ExtUtils-Command, perl-ExtUtils-Miniperl, perl-Filter-Simple, perl-interpreter, perl-IO, perl-IPC-SysV, perl-libs, perl-Math-BigInt, perl-Math-BigInt-FastCalc, perl-Math-BigRat, perl-Math-Complex, perl-Memoize, perl-MIME-Base64, perl-Net-Ping, perl-open, perl-perlfaq, perl-PerlIO-via-QuotedPrint, perl-Pod-Html, perl-SelfLoader, perl-Term-ANSIColor, perl-Term-Cap, perl-Test, perl-Text-Balanced, perl-Unicode-Collate, perl-Unicode-Normalize | RHEL 8.0 | In RHEL 8, the package providing the Perl interpreter has been renamed from perl to perl-interpreter, while the perl package is now just a meta-package. Basic language support modules have been moved to perl-libs, and a number of other modules previously bundled in perl are now distributed as separate packages.

perl-core perl RHEL 8.0

perl-gettext perl-Locale-gettext RHEL 8.0

perl-libintl perl-libintl-perl RHEL 8.0

pexpect python3-pexpect RHEL 8.0

php-common php-common, php- RHEL 8.0


gmp, php-json, php-
pecl-zip, php-xml


php-mysql php-mysqlnd RHEL 8.0 The php-mysql package, which uses the
libmysqlclient library, has been replaced by
the php-mysqlnd package, which uses the
MySQL Native Driver.

pkgconfig pkgconf-pkg-config RHEL 8.0

pki-base pki-base, python3- RHEL 8.0


pki

pki-servlet- pki-servlet-engine RHEL 8.1


container

plexus-cdc plexus-containers- RHEL 8.0


component-
metadata

plexus-cdc-javadoc plexus-containers- RHEL 8.0


javadoc

plexus-interactivity plexus-interactivity, RHEL 8.0


plexus-interactivity-
api, plexus-
interactivity-jline

policycoreutils-gui policycoreutils- RHEL 8.0


dbus,
policycoreutils-gui

policycoreutils- policycoreutils- RHEL 8.0


python python-utils,
python3-
policycoreutils

polkit polkit, polkit-libs RHEL 8.0

postfix postfix, postfix-cdb, RHEL 8.0


postfix-ldap,
postfix-mysql,
postfix-pcre,
postfix-pgsql,
postfix-sqlite

postgresql-devel libpq-devel RHEL 8.0

postgresql-libs libpq RHEL 8.0


postgresql-plpython postgresql- RHEL 8.0


plpython3

prelink execstack RHEL 8.0

pth npth RHEL 8.0

pycairo python2-cairo, RHEL 8.0


python3-cairo

pycairo-devel python2-cairo-devel RHEL 8.0

PyGreSQL python3-psycopg2 RHEL 8.0

pykickstart pykickstart, RHEL 8.0


python3-kickstart

pyldb python3-ldb RHEL 8.0

pyOpenSSL python3- RHEL 8.0


pyOpenSSL

pyparsing python3-pyparsing RHEL 8.0

pyparted python3-pyparted RHEL 8.0

pyserial python3-pyserial RHEL 8.0

pytalloc python3-talloc RHEL 8.0

pytest python2-pytest, RHEL 8.0


python3-pytest

python platform-python RHEL 8.0

python-augeas python3-augeas RHEL 8.0

python-azure-sdk python3-azure-sdk RHEL 8.0

python-babel python2-babel, RHEL 8.0


python3-babel

python-backports python2-backports RHEL 8.0


python-backports-ssl_match_hostname | python2-backports-ssl_match_hostname | RHEL 8.0

python-bcc python3-bcc RHEL 8.0

python-blivet python3-blivet RHEL 8.0

python-boto3 python3-boto3 RHEL 8.0

python-brlapi python3-brlapi RHEL 8.0

python-cffi python3-cffi RHEL 8.0

python-chardet python2-chardet, RHEL 8.0


python3-chardet

python-clufter python3-clufter RHEL 8.0

python-configobj python3-configobj RHEL 8.0

python-configshell python3-configshell RHEL 8.0

python-coverage platform-python- RHEL 8.0


coverage, python2-
coverage

python-cpio python3-cpio RHEL 8.0

python-cups python3-cups RHEL 8.0

python-custodia python3-custodia RHEL 8.0

python-custodia- python3-custodia RHEL 8.0


ipa

python-dateutil python3-dateutil RHEL 8.0

python-decorator python3-decorator RHEL 8.0

python-devel python2-devel, RHEL 8.0


python36-devel

python-dmidecode python3- RHEL 8.0


dmidecode


python-dns python2-dns, RHEL 8.0


python3-dns

python-docs python2-docs, RHEL 8.0


python3-docs

python-docutils python2-docutils, RHEL 8.0


python3-docutils

python-enum34 python3-libs RHEL 8.0

python-ethtool python3-ethtool RHEL 8.0

python-firewall python3-firewall RHEL 8.0

python-flask python3-flask RHEL 8.0

python-gevent python3-gevent RHEL 8.0

python-gobject python3-gobject RHEL 8.0

python-gobject- python3-gobject- RHEL 8.0


base base

python-greenlet python3-greenlet RHEL 8.0

python-greenlet- python3-greenlet- RHEL 8.0


devel devel

python-gssapi python3-gssapi RHEL 8.0

python-hivex python3-hivex RHEL 8.0

python-httplib2 python3-httplib2 RHEL 8.0

python-hwdata python3-hwdata RHEL 8.0

python-idna python2-idna, RHEL 8.0


python3-idna

python-iniparse python3-iniparse RHEL 8.0

python-inotify python3-inotify RHEL 8.0


python-ipaddress python2-ipaddress, RHEL 8.0


python3-libs

python- python3- RHEL 8.0


itsdangerous itsdangerous

python- python3- RHEL 8.0


javapackages javapackages

python-jinja2 python2-jinja2, RHEL 8.0


python3-jinja2

python-jsonpatch python3-jsonpatch RHEL 8.0

python-jsonpointer python3- RHEL 8.0


jsonpointer

python-jwcrypto python3-jwcrypto RHEL 8.0

python-jwt python3-jwt RHEL 8.0

python-kdcproxy python3-kdcproxy RHEL 8.0

python-kerberos python3-gssapi RHEL 8.0

python-kmod python3-kmod RHEL 8.0

python-krbV python3-gssapi RHEL 8.0

python-ldap python3-ldap RHEL 8.0

python-libguestfs python3-libguestfs RHEL 8.0

python-libipa_hbac python3- RHEL 8.0


libipa_hbac

python-librepo python3-librepo RHEL 8.0

python-libs python2-libs, RHEL 8.0


python3-libs

python- python3- RHEL 8.0


libsss_nss_idmap libsss_nss_idmap


python-linux-procfs python3-linux- RHEL 8.0


procfs

python-lxml python2-lxml, RHEL 8.0


python3-lxml

python-magic python3-magic RHEL 8.0

python-mako python3-mako RHEL 8.0

python-markupsafe python2- RHEL 8.0


markupsafe,
python3-
markupsafe

python-meh python3-meh RHEL 8.0

python-meh-gui python3-meh-gui RHEL 8.0

python-netaddr python3-netaddr RHEL 8.0

python-netifaces python3-netifaces RHEL 8.0

python-nose python2-nose, RHEL 8.0


python3-nose

python-nss python3-nss RHEL 8.0

python-ntplib python3-ntplib RHEL 8.0

python-pcp python3-pcp RHEL 8.0

python-perf python3-perf RHEL 8.0

python-pillow python3-pillow RHEL 8.0

python-ply python3-ply RHEL 8.0

python-prettytable python3- RHEL 8.0


prettytable

python-psycopg2 python2-psycopg2, RHEL 8.0


python3-psycopg2


python-psycopg2- python2-psycopg2- RHEL 8.0


debug debug

python-pwquality python3-pwquality RHEL 8.0

python-py python2-py, RHEL 8.0


python3-py

python-pycparser python3-pycparser RHEL 8.0

python-pycurl python3-pycurl RHEL 8.0

python-pygments python2-pygments, RHEL 8.0


python3-pygments

python-pytoml python3-pytoml RHEL 8.0

python-pyudev python3-pyudev RHEL 8.0

python-qrcode python3-qrcode RHEL 8.0

python-qrcode- python3-qrcode- RHEL 8.0


core core

python-reportlab python3-reportlab RHEL 8.0

python-requests python2-requests, RHEL 8.0


python3-requests

python-rhsm python3- RHEL 8.0


subscription-
manager-rhsm

python-rhsm- subscription- RHEL 8.0


certificates manager-rhsm-
certificates

python-rtslib python3-rtslib, RHEL 8.0


target-restore

python-s3transfer python3-botocore, RHEL 8.0


python3-jmespath,
python3-s3transfer

python-schedutils python3-schedutils RHEL 8.0


python-setuptools platform-python- RHEL 8.0


setuptools,
python2-setuptools,
python3-setuptools

python-six python2-six, RHEL 8.0


python3-six

python-slip python3-slip RHEL 8.0

python-slip-dbus python3-slip-dbus RHEL 8.0

python-sphinx python-sphinx- RHEL 8.0


locale, python3-
sphinx

python-sqlalchemy python2- RHEL 8.0


sqlalchemy,
python3-
sqlalchemy

python-sss python3-sss RHEL 8.0

python-sss-murmur python3-sss- RHEL 8.0


murmur

python-sssdconfig python3-sssdconfig RHEL 8.0

python-suds python3-suds RHEL 8.0

python-syspurpose python3- RHEL 8.0


syspurpose

python-tdb python3-tdb RHEL 8.0

python-test python2-test, RHEL 8.0


python3-test

python-tevent python3-tevent RHEL 8.0

python-tools python2-tools RHEL 8.0

python-urllib3 python2-urllib3, RHEL 8.0


python3-urllib3

python-urwid python3-urwid RHEL 8.0


python-virtualenv python2-virtualenv, RHEL 8.0


python3-virtualenv

python-werkzeug python3-werkzeug RHEL 8.0

python-yubico python3-yubico RHEL 8.0

python2-blockdev python3-blockdev RHEL 8.0

python2-bytesize python3-bytesize RHEL 8.0

python2- python3- RHEL 8.0


createrepo_c createrepo_c

python2- python3- RHEL 8.0


cryptography cryptography

python2-dnf python3-dnf RHEL 8.0

python2-dnf- python3-dnf- RHEL 8.0


plugin-versionlock plugin-versionlock

python2-dnf- python3-dnf- RHEL 8.0


plugins-core plugins-core

python2-hawkey python3-hawkey RHEL 8.0

python2-ipaclient python3-ipaclient RHEL 8.0

python2-ipalib python3-ipalib RHEL 8.0

python2-ipaserver python3-ipaserver RHEL 8.0

python2-jmespath python3-jmespath RHEL 8.0

python2-keycloak- python3-keycloak- RHEL 8.0


httpd-client-install httpd-client-install

python2-libcomps python3-libcomps RHEL 8.0

python2-libdnf python3-libdnf RHEL 8.0

python2-oauthlib python3-oauthlib RHEL 8.0


python2-pyasn1 python3-pyasn1 RHEL 8.0

python2-pyasn1- python3-pyasn1- RHEL 8.0


modules modules

python2-pyatspi python3-pyatspi RHEL 8.0

python2-requests- python3-requests- RHEL 8.0


oauthlib oauthlib

pytz python2-pytz, RHEL 8.0


python3-pytz

pyusb python3-pyusb RHEL 8.0

pywbem python3-pywbem RHEL 8.0

pyxattr python3-pyxattr RHEL 8.0

PyYAML python2-pyyaml, RHEL 8.0


python3-pyyaml

qemu-img-ma qemu-img RHEL 8.0

qemu-img-rhev qemu-img RHEL 8.0

qemu-kvm qemu-kvm, qemu- RHEL 8.0


kvm-block-curl,
qemu-kvm-block-
gluster, qemu-kvm-
block-iscsi, qemu-
kvm-block-rbd,
qemu-kvm-block-
ssh, qemu-kvm-
core

qemu-kvm- qemu-kvm- RHEL 8.0


common-ma common

qemu-kvm- qemu-kvm- RHEL 8.0


common-rhev common


qemu-kvm-ma | qemu-kvm, qemu-kvm-block-curl, qemu-kvm-block-gluster, qemu-kvm-block-iscsi, qemu-kvm-block-rbd, qemu-kvm-block-ssh, qemu-kvm-core | RHEL 8.0 | The qemu-kvm-ma packages, introduced in RHEL 7 for virtualization support on the ARM, IBM POWER, and IBM Z architectures, have been replaced by the qemu-kvm packages providing support for all architectures.

qemu-kvm-rhev qemu-kvm, qemu- RHEL 8.0


kvm-block-curl,
qemu-kvm-block-
gluster, qemu-kvm-
block-iscsi, qemu-
kvm-block-rbd,
qemu-kvm-block-
ssh, qemu-kvm-
core

qemu-kvm-tools- qemu-kvm- RHEL 8.0


ma common, tuned-
profiles-nfv-host-
bin

qemu-kvm-tools- qemu-kvm- RHEL 8.0


rhev common, tuned-
profiles-nfv-host-
bin

quagga frr RHEL 8.1

quagga-contrib frr-contrib RHEL 8.1

quota quota, quota-rpc RHEL 8.0 The rpc.rquotad daemon has been moved
from the quota RPM package to quota-rpc .
To use disk quota limits on your NFS server
and to have the limits readable or settable
from other machines, install the quota-rpc
package, and enable and start the rpc-
rquotad.service systemd service.
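
Following that note, restoring remote quota reporting on an NFS server is a two-step operation (a minimal sketch):

    yum install quota-rpc                         # the rpc.rquotad daemon now lives in this package
    systemctl enable --now rpc-rquotad.service    # export quota information to other machines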

redhat-logos redhat- RHEL 8.0


backgrounds,
redhat-logos,
redhat-logos-httpd


redhat-release- redhat-release, RHEL 8.0


client redhat-release-eula

redhat-release- redhat-release, RHEL 8.0


computenode redhat-release-eula

redhat-release- redhat-release, RHEL 8.0


server redhat-release-eula

redhat-release- redhat-release, RHEL 8.0


workstation redhat-release-eula

redhat-rpm-config kernel-rpm-macros, RHEL 8.0


redhat-rpm-config

resteasy-base resteasy RHEL 8.0

resteasy-base- resteasy RHEL 8.0


atom-provider

resteasy-base- resteasy RHEL 8.0


client

resteasy-base- resteasy RHEL 8.0


jackson-provider

resteasy-base- resteasy-javadoc RHEL 8.0


javadoc

resteasy-base-jaxb- resteasy RHEL 8.0


provider

resteasy-base-jaxrs resteasy RHEL 8.0

resteasy-base- resteasy RHEL 8.0


jaxrs-all

resteasy-base- resteasy RHEL 8.0


jaxrs-api

resteasy-base- resteasy RHEL 8.0


providers-pom

resteasy-base- resteasy RHEL 8.0


resteasy-pom


rh-dotnet21-dotnet dotnet RHEL 8.0

rhn-virtualization- python3-rhn- RHEL 8.0


common virtualization-
common

rhn-virtualization- python3-rhn- RHEL 8.0


host virtualization-host,
rhn-virtualization-
host

rhncfg python3-rhncfg, RHEL 8.0


rhncfg

rhncfg-actions python3-rhncfg- RHEL 8.0


actions, rhncfg-
actions

rhncfg-client python3-rhncfg- RHEL 8.0


client, rhncfg-client

rhncfg- python3-rhncfg- RHEL 8.0


management management,
rhncfg-
management

rhnpush python3-rhnpush, RHEL 8.0


rhnpush

rpm-python python3-rpm RHEL 8.0

rrdtool-python python3-rrdtool RHEL 8.0

rsync rsync, rsync- RHEL 8.0


daemon

samba-python python3-samba RHEL 8.0

samba-python-test python3-samba- RHEL 8.0


test

samyak-oriya-fonts samyak-odia-fonts RHEL 8.0

sane-backends sane-backends, RHEL 8.0


sane-backends-
daemon


scipy python2-scipy, RHEL 8.0


python3-scipy

scons python3-scons RHEL 8.0

selinux-policy-devel selinux-policy- RHEL 8.0


devel, selinux-
policy-doc

sendmail-devel sendmail-milter- RHEL 8.0


devel

setools-libs python3-setools RHEL 8.0

shotwell gnome-photos RHEL 8.0

si-units si-units, si-units- RHEL 8.0


javadoc

sip python3-pyqt5-sip, RHEL 8.0


python3-sip

sip-devel python3-sip-devel, RHEL 8.0


sip

sip-macros sip RHEL 8.0

sisu-bean, sisu- sisu-inject RHEL 8.0


bean-binders, sisu-
bean-containers,
sisu-bean-
converters, sisu-
bean-inject, sisu-
bean-locators, sisu-
bean-reflect, sisu-
bean-scanners,
sisu-containers,
sisu-inject-bean,
sisu-osgi-registry,
sisu-registries, sisu-
spi-registry


sisu-inject-plexus, sisu-plexus RHEL 8.0


sisu-plexus-binders,
sisu-plexus-
converters, sisu-
plexus-lifecycles,
sisu-plexus-
locators, sisu-
plexus-metadata,
sisu-plexus-
scanners, sisu-
plexus-shim

sisu-maven-plugin sisu-mojos RHEL 8.0

sisu-maven-plugin- sisu-mojos-javadoc RHEL 8.0


javadoc

slf4j jcl-over-slf4j, jul-to- RHEL 8.0


slf4j, log4j-over-
slf4j, slf4j, slf4j-ext,
slf4j-jcl, slf4j-jdk14,
slf4j-log4j12

spacewalk-abrt python3- RHEL 8.0


spacewalk-abrt,
spacewalk-abrt

spacewalk- python3- RHEL 8.0


backend-libs spacewalk-
backend-libs

spacewalk-koan python3- RHEL 8.0


spacewalk-koan,
spacewalk-koan

spacewalk-oscap python3- RHEL 8.0


spacewalk-oscap,
spacewalk-oscap

spacewalk-usix python3- RHEL 8.0


spacewalk-usix,
spacewalk-usix

speech-dispatcher speech-dispatcher, RHEL 8.0


speech-dispatcher-
espeak-ng


speech-dispatcher- python3-speechd RHEL 8.0


python

speex speex, speexdsp RHEL 8.0

speex-devel speex-devel, RHEL 8.0


speexdsp-devel

spice-gtk3 spice-gtk, spice- RHEL 8.0


gtk3

sssd-common sssd-common, sssd- RHEL 8.0


nfs-idmap

stax-ex stax-ex, stax-ex- RHEL 8.0


javadoc

strace, strace32 strace RHEL 8.0

subscription- subscription- RHEL 8.0


manager-gui manager-cockpit

subscription- python3- RHEL 8.0


manager-rhsm subscription-
manager-rhsm

supermin supermin RHEL 8.0

supermin5 supermin RHEL 8.0

supermin5-devel supermin-devel RHEL 8.0

syslinux syslinux, syslinux- RHEL 8.0


nonlinux

syslinux-extlinux syslinux-extlinux, RHEL 8.0


syslinux-extlinux-
nonlinux

system-config- cockpit-system RHEL 8.0


kdump

system-config- cockpit RHEL 8.0


users


systemd systemd, systemd- RHEL 8.0


container, systemd-
udev, timedatex

systemd-journal- systemd-journal- RHEL 8.0


gateway remote

systemd-libs systemd-libs, RHEL 8.0


systemd-pam

systemd-networkd, systemd RHEL 8.0


systemd-resolved

systemd-python python3-systemd RHEL 8.0

systemtap-runtime- systemtap-runtime- RHEL 8.0


python2 python3

sysvinit-tools procps-ng, util-linux RHEL 8.0

tcl tcl, tcl-doc RHEL 8.0

teamd network-scripts- RHEL 8.0


team, teamd

texlive-adjustbox, texlive-adjustbox RHEL 8.0


texlive-adjustbox-
doc

texlive-ae, texlive- texlive-ae RHEL 8.0


ae-doc

texlive-algorithms, texlive-algorithms RHEL 8.0


texlive-algorithms-
doc

texlive-amscls, texlive-amscls RHEL 8.0


texlive-amscls-doc

texlive-amsfonts, texlive-amsfonts RHEL 8.0


texlive-amsfonts-
doc

texlive-amsmath, texlive-amsmath RHEL 8.0


texlive-amsmath-
doc


texlive-anysize, texlive-anysize RHEL 8.0


texlive-anysize-doc

texlive-appendix, texlive-appendix RHEL 8.0


texlive-appendix-
doc

texlive-arabxetex, texlive-arabxetex RHEL 8.0


texlive-arabxetex-
doc

texlive-arphic, texlive-arphic RHEL 8.0


texlive-arphic-doc

texlive-attachfile, texlive-attachfile RHEL 8.0


texlive-attachfile-
doc

texlive-babel, texlive-babel RHEL 8.0


texlive-babel-doc

texlive-babelbib, texlive-babelbib RHEL 8.0


texlive-babelbib-
doc

texlive-beamer, texlive-beamer RHEL 8.0


texlive-beamer-doc

texlive-bera, texlive-bera RHEL 8.0


texlive-bera-doc

texlive-beton, texlive-beton RHEL 8.0


texlive-beton-doc

texlive-bibtex-bin, texlive-bibtex RHEL 8.0


texlive-bibtex-doc

texlive-bibtopic, texlive-bibtopic RHEL 8.0


texlive-bibtopic-
doc

texlive-bidi, texlive- texlive-bidi RHEL 8.0


bidi-doc

texlive-bigfoot, texlive-bigfoot RHEL 8.0


texlive-bigfoot-doc


texlive-booktabs, texlive-booktabs RHEL 8.0


texlive-booktabs-
doc

texlive-breakurl, texlive-breakurl RHEL 8.0


texlive-breakurl-
doc

texlive-caption, texlive-caption RHEL 8.0


texlive-caption-doc

texlive-carlisle, texlive-carlisle RHEL 8.0


texlive-carlisle-doc

texlive-changebar, texlive-changebar RHEL 8.0


texlive-changebar-
doc

texlive-changepage, texlive-changepage RHEL 8.0


texlive-
changepage-doc

texlive-charter, texlive-charter RHEL 8.0


texlive-charter-doc

texlive-chngcntr, texlive-chngcntr RHEL 8.0


texlive-chngcntr-
doc

texlive-cite, texlive- texlive-cite RHEL 8.0


cite-doc

texlive-cjk, texlive- texlive-cjk RHEL 8.0


cjk-doc

texlive-cm, texlive- texlive-cm RHEL 8.0


cm-doc

texlive-cm-lgc, texlive-cm-lgc RHEL 8.0


texlive-cm-lgc-doc

texlive-cm-super, texlive-cm-super RHEL 8.0


texlive-cm-super-
doc

texlive-cmap, texlive-cmap RHEL 8.0


texlive-cmap-doc


texlive-cns, texlive- texlive-cns RHEL 8.0


cns-doc

texlive-collectbox, texlive-collectbox RHEL 8.0


texlive-collectbox-
doc

texlive-colortbl, texlive-colortbl RHEL 8.0


texlive-colortbl-doc

texlive-crop, texlive-crop RHEL 8.0


texlive-crop-doc

texlive-csquotes, texlive-csquotes RHEL 8.0


texlive-csquotes-
doc

texlive-ctable, texlive-ctable RHEL 8.0


texlive-ctable-doc

texlive-currfile, texlive-currfile RHEL 8.0


texlive-currfile-doc

texlive-datetime, texlive-datetime RHEL 8.0


texlive-datetime-
doc

texlive-dvipdfm, texlive-dvipdfmx RHEL 8.0


texlive-dvipdfm-bin,
texlive-dvipdfm-
doc, texlive-
dvipdfmx, texlive-
dvipdfmx-bin,
texlive-dvipdfmx-
doc

texlive-dvipdfmx- texlive-graphics-def RHEL 8.0


def

texlive-dvipng, texlive-dvipng RHEL 8.0


texlive-dvipng-bin,
texlive-dvipng-doc

texlive-dvips, texlive-dvips RHEL 8.0


texlive-dvips-bin,
texlive-dvips-doc


texlive-ec, texlive- texlive-ec RHEL 8.0


ec-doc

texlive-eepic, texlive-eepic RHEL 8.0


texlive-eepic-doc

texlive-enctex, texlive-enctex RHEL 8.0


texlive-enctex-doc

texlive-enumitem, texlive-enumitem RHEL 8.0


texlive-enumitem-
doc

texlive-epsf, texlive-epsf RHEL 8.0


texlive-epsf-doc

texlive-epstopdf, texlive-epstopdf RHEL 8.0


texlive-epstopdf-
bin, texlive-
epstopdf-doc

texlive-eso-pic, texlive-eso-pic RHEL 8.0


texlive-eso-pic-doc

texlive-eso-pic, texlive-eso-pic RHEL 8.0


texlive-eso-pic-doc

texlive-etex, texlive-etex RHEL 8.0


texlive-etex-doc

texlive-etex-pkg, texlive-etex-pkg RHEL 8.0


texlive-etex-pkg-
doc

texlive-etoolbox, texlive-etoolbox RHEL 8.0


texlive-etoolbox-
doc

texlive-euenc, texlive-euenc RHEL 8.0


texlive-euenc-doc

texlive-euler, texlive-euler RHEL 8.0


texlive-euler-doc


texlive-euro, texlive-euro RHEL 8.0


texlive-euro-doc

texlive-eurosym, texlive-eurosym RHEL 8.0


texlive-eurosym-
doc

texlive-extsizes, texlive-extsizes RHEL 8.0


texlive-extsizes-doc

texlive-fancybox, texlive-fancybox RHEL 8.0


texlive-fancybox-
doc

texlive-fancyhdr, texlive-fancyhdr RHEL 8.0


texlive-fancyhdr-
doc

texlive-fancyref, texlive-fancyref RHEL 8.0


texlive-fancyref-
doc

texlive-fancyvrb, texlive-fancyvrb RHEL 8.0


texlive-fancyvrb-
doc

texlive-filecontents, texlive-filecontents RHEL 8.0


texlive-
filecontents-doc

texlive-filehook, texlive-filehook RHEL 8.0


texlive-filehook-doc

texlive-fix2col, texlive-fix2col RHEL 8.0


texlive-fix2col-doc

texlive-fixlatvian, texlive-fixlatvian RHEL 8.0


texlive-fixlatvian-
doc

texlive-float, texlive-float RHEL 8.0


texlive-float-doc

texlive-fmtcount, texlive-fmtcount RHEL 8.0


texlive-fmtcount-
doc


texlive-fncychap, texlive-fncychap RHEL 8.0


texlive-fncychap-
doc

texlive-fontbook, texlive-fontbook RHEL 8.0


texlive-fontbook-
doc

texlive-fontspec, texlive-fontspec RHEL 8.0


texlive-fontspec-
doc

texlive-fontware, texlive-fontware RHEL 8.0


texlive-fontware-
bin

texlive-fontwrap, texlive-fontwrap RHEL 8.0


texlive-fontwrap-
doc

texlive-footmisc, texlive-footmisc RHEL 8.0


texlive-footmisc-
doc

texlive-fp, texlive- texlive-fp RHEL 8.0


fp-doc

texlive-fpl, texlive- texlive-fpl RHEL 8.0


fpl-doc

texlive-framed, texlive-framed RHEL 8.0


texlive-framed-doc

texlive-geometry, texlive-geometry RHEL 8.0


texlive-geometry-
doc

texlive-graphics, texlive-graphics RHEL 8.0


texlive-graphics-
doc, texlive-
rotating, texlive-
rotating-doc

texlive-gsftopk, texlive-gsftopk RHEL 8.0


texlive-gsftopk-bin

157
Red Hat Enterprise Linux 8 Considerations in adopting RHEL 8

Original New package(s) Changed Note


package(s) since

texlive-hyperref, texlive-hyperref RHEL 8.0


texlive-hyperref-
doc

texlive-hyph-utf8, texlive-hyph-utf8 RHEL 8.0


texlive-hyph-utf8-
doc

texlive-hyph-utf8, texlive-hyph-utf8 RHEL 8.0


texlive-hyph-utf8-
doc

texlive-hyphenat, texlive-hyphenat RHEL 8.0


texlive-hyphenat-
doc

texlive-ifetex, texlive-ifetex RHEL 8.0


texlive-ifetex-doc

texlive-ifluatex, texlive-ifluatex RHEL 8.0


texlive-ifluatex-doc

texlive-ifmtarg, texlive-ifmtarg RHEL 8.0


texlive-ifmtarg-doc

texlive-ifoddpage, texlive-ifoddpage RHEL 8.0


texlive-ifoddpage-
doc

texlive-iftex, texlive-iftex RHEL 8.0


texlive-iftex-doc

texlive-ifxetex, texlive-ifxetex RHEL 8.0


texlive-ifxetex-doc

texlive-index, texlive-index RHEL 8.0


texlive-index-doc

texlive-jadetex, texlive-jadetex RHEL 8.0


texlive-jadetex-bin,
texlive-jadetex-doc

texlive-jknapltx, texlive-jknapltx RHEL 8.0


texlive-jknapltx-doc

158
APPENDIX A. CHANGES TO PACKAGES

Original New package(s) Changed Note


package(s) since

texlive-kastrup, texlive-kastrup RHEL 8.0


texlive-kastrup-doc

texlive-kerkis, texlive-kerkis RHEL 8.0


texlive-kerkis-doc

texlive-kpathsea, texlive-kpathsea RHEL 8.0


texlive-kpathsea-
bin, texlive-
kpathsea-doc

texlive-kpathsea-lib texlive-lib RHEL 8.0

texlive-kpathsea- texlive-lib-devel RHEL 8.0


lib-devel

texlive- texlive- RHEL 8.0


l3experimental, l3experimental
texlive-
l3experimental-doc

texlive-l3kernel, texlive-l3kernel RHEL 8.0


texlive-l3kernel-doc

texlive-l3packages, texlive-l3packages RHEL 8.0


texlive-l3packages-
doc

texlive-lastpage, texlive-lastpage RHEL 8.0


texlive-lastpage-
doc

texlive-latex, texlive-latex RHEL 8.0


texlive-latex-bin,
texlive-latex-bin-
bin, texlive-latex-
doc

texlive-latex-fonts, texlive-latex-fonts RHEL 8.0


texlive-latex-fonts-
doc

texlive-lettrine, texlive-lettrine RHEL 8.0


texlive-lettrine-doc

159
Red Hat Enterprise Linux 8 Considerations in adopting RHEL 8

Original New package(s) Changed Note


package(s) since

texlive-listings, texlive-listings RHEL 8.0


texlive-listings-doc

texlive-lm, texlive- texlive-lm RHEL 8.0


lm-doc

texlive-lm-math, texlive-lm-math RHEL 8.0


texlive-lm-math-
doc

texlive-lua-alt- texlive-lua-alt- RHEL 8.0


getopt, texlive-lua- getopt
alt-getopt-doc

texlive-lua-alt- texlive-lua-alt- RHEL 8.0


getopt, texlive-lua- getopt
alt-getopt-doc

texlive-lualatex- texlive-lualatex- RHEL 8.0


math, texlive- math
lualatex-math-doc

texlive-lualatex- texlive-lualatex- RHEL 8.0


math, texlive- math
lualatex-math-doc

texlive-luaotfload, texlive-luaotfload RHEL 8.0


texlive-luaotfload-
bin, texlive-
luaotfload-doc

texlive-luatex, texlive-luatex RHEL 8.0


texlive-luatex-bin,
texlive-luatex-doc

texlive-luatexbase, texlive-luatexbase RHEL 8.0


texlive-luatexbase-
doc

texlive-makecmds, texlive-makecmds RHEL 8.0


texlive-makecmds-
doc

texlive-makeindex, texlive-makeindex RHEL 8.0


texlive-makeindex-
bin, texlive-
makeindex-doc

160
APPENDIX A. CHANGES TO PACKAGES

Original New package(s) Changed Note


package(s) since

texlive-marginnote, texlive-marginnote RHEL 8.0


texlive-marginnote-
doc

texlive-marvosym, texlive-marvosym RHEL 8.0


texlive-marvosym-
doc

texlive-mathpazo, texlive-mathpazo RHEL 8.0


texlive-mathpazo-
doc

texlive-mathspec, texlive-mathspec RHEL 8.0


texlive-mathspec-
doc

texlive-mdwtools, texlive-mdwtools RHEL 8.0


texlive-mdwtools-
doc

texlive-memoir, texlive-memoir RHEL 8.0


texlive-memoir-doc

texlive-metafont, texlive-metafont RHEL 8.0


texlive-metafont-
bin

texlive-metalogo, texlive-metalogo RHEL 8.0


texlive-metalogo-
doc

texlive-metapost, texlive-metapost RHEL 8.0


texlive-metapost-
bin, texlive-
metapost-doc,
texlive-metapost-
examples-doc

texlive-mflogo, texlive-mflogo RHEL 8.0


texlive-mflogo-doc

texlive-mfnfss, texlive-mfnfss RHEL 8.0


texlive-mfnfss-doc

texlive-mfware, texlive-mfware RHEL 8.0


texlive-mfware-bin

161
Red Hat Enterprise Linux 8 Considerations in adopting RHEL 8

Original New package(s) Changed Note


package(s) since

texlive-microtype, texlive-microtype RHEL 8.0


texlive-microtype-
doc

texlive-mnsymbol, texlive-mnsymbol RHEL 8.0


texlive-mnsymbol-
doc

texlive-mparhack, texlive-mparhack RHEL 8.0


texlive-mparhack-
doc

texlive-mptopdf, texlive-mptopdf RHEL 8.0


texlive-mptopdf-
bin

texlive-ms, texlive- texlive-ms RHEL 8.0


ms-doc

texlive-multido, texlive-multido RHEL 8.0


texlive-multido-doc

texlive-multirow, texlive-multirow RHEL 8.0


texlive-multirow-
doc

texlive-natbib, texlive-natbib RHEL 8.0


texlive-natbib-doc

texlive-ncctools, texlive-ncctools RHEL 8.0


texlive-ncctools-
doc

texlive-ntgclass, texlive-ntgclass RHEL 8.0


texlive-ntgclass-
doc

texlive-oberdiek, texlive-oberdiek RHEL 8.0


texlive-oberdiek-
doc

texlive-overpic, texlive-overpic RHEL 8.0


texlive-overpic-doc

texlive-paralist, texlive-paralist RHEL 8.0


texlive-paralist-doc

162
APPENDIX A. CHANGES TO PACKAGES

Original New package(s) Changed Note


package(s) since

texlive-parallel, texlive-parallel RHEL 8.0


texlive-parallel-doc

texlive-parskip, texlive-parskip RHEL 8.0


texlive-parskip-doc

texlive-pdfpages, texlive-pdfpages RHEL 8.0


texlive-pdfpages-
doc

texlive-pdftex, texlive-pdftex RHEL 8.0


texlive-pdftex-bin,
texlive-pdftex-doc

texlive-pdftex-def texlive-graphics-def RHEL 8.0

texlive-pgf, texlive- texlive-pgf RHEL 8.0


pgf-doc

texlive-philokalia, texlive-philokalia RHEL 8.0


texlive-philokalia-
doc

texlive-placeins, texlive-placeins RHEL 8.0


texlive-placeins-
doc

texlive-polyglossia, texlive-polyglossia RHEL 8.0


texlive-polyglossia-
doc

texlive-powerdot, texlive-powerdot RHEL 8.0


texlive-powerdot-
doc

texlive-preprint, texlive-preprint RHEL 8.0


texlive-preprint-doc

texlive-psfrag, texlive-psfrag RHEL 8.0


texlive-psfrag-doc

texlive-psnfss, texlive-psnfss RHEL 8.0


texlive-psnfss-doc

163
Red Hat Enterprise Linux 8 Considerations in adopting RHEL 8

Original New package(s) Changed Note


package(s) since

texlive-pspicture, texlive-pspicture RHEL 8.0


texlive-pspicture-
doc

texlive-pst-3d, texlive-pst-3d RHEL 8.0


texlive-pst-3d-doc

texlive-pst-3d, texlive-pst-3d RHEL 8.0


texlive-pst-3d-doc

texlive-pst-blur, texlive-pst-blur RHEL 8.0


texlive-pst-blur-
doc

texlive-pst-coil, texlive-pst-coil RHEL 8.0


texlive-pst-coil-doc

texlive-pst-eps, texlive-pst-eps RHEL 8.0


texlive-pst-eps-doc

texlive-pst-fill, texlive-pst-fill RHEL 8.0


texlive-pst-fill-doc

texlive-pst-grad, texlive-pst-grad RHEL 8.0


texlive-pst-grad-
doc

texlive-pst-math, texlive-pst-math RHEL 8.0


texlive-pst-math-
doc

texlive-pst-node, texlive-pst-node RHEL 8.0


texlive-pst-node-
doc

texlive-pst-plot, texlive-pst-plot RHEL 8.0


texlive-pst-plot-
doc

texlive-pst-slpe, texlive-pst-slpe RHEL 8.0


texlive-pst-slpe-
doc

164
APPENDIX A. CHANGES TO PACKAGES

Original New package(s) Changed Note


package(s) since

texlive-pst-text, texlive-pst-text RHEL 8.0


texlive-pst-text-
doc

texlive-pst-tree, texlive-pst-tree RHEL 8.0


texlive-pst-tree-
doc

texlive-pstricks, texlive-pstricks RHEL 8.0


texlive-pstricks-doc

texlive-pstricks- texlive-pstricks-add RHEL 8.0


add, texlive-
pstricks-add-doc

texlive-ptext, texlive-ptext RHEL 8.0


texlive-ptext-doc

texlive-pxfonts, texlive-pxfonts RHEL 8.0


texlive-pxfonts-doc

texlive-qstest, texlive-qstest RHEL 8.0


texlive-qstest-doc

texlive-rcs, texlive- texlive-rcs RHEL 8.0


rcs-doc

texlive-realscripts, texlive-realscripts RHEL 8.0


texlive-realscripts-
doc

texlive-rsfs, texlive- texlive-rsfs RHEL 8.0


rsfs-doc

texlive-sansmath, texlive-sansmath RHEL 8.0


texlive-sansmath-
doc

texlive-sauerj, texlive-sauerj RHEL 8.0


texlive-sauerj-doc

texlive-section, texlive-section RHEL 8.0


texlive-section-doc

texlive-sectsty, texlive-sectsty RHEL 8.0


texlive-sectsty-doc

165
Red Hat Enterprise Linux 8 Considerations in adopting RHEL 8

Original New package(s) Changed Note


package(s) since

texlive-seminar, texlive-seminar RHEL 8.0


texlive-seminar-doc

texlive-sepnum, texlive-sepnum RHEL 8.0


texlive-sepnum-doc

texlive-setspace, texlive-setspace RHEL 8.0


texlive-setspace-
doc

texlive-showexpl, texlive-showexpl RHEL 8.0


texlive-showexpl-
doc

texlive-soul, texlive- texlive-soul RHEL 8.0


soul-doc

texlive-stmaryrd, texlive-stmaryrd RHEL 8.0


texlive-stmaryrd-
doc

texlive-subfig, texlive-subfig RHEL 8.0


texlive-subfig-doc

texlive-subfigure, texlive-subfigure RHEL 8.0


texlive-subfigure-
doc

texlive-svn-prov, texlive-svn-prov RHEL 8.0


texlive-svn-prov-
doc

texlive-svn-prov, texlive-svn-prov RHEL 8.0


texlive-svn-prov-
doc

texlive-t2, texlive- texlive-t2 RHEL 8.0


t2-doc

texlive-tetex, texlive-tetex RHEL 8.0


texlive-tetex-bin,
texlive-tetex-doc

texlive-tex, texlive- texlive-tex RHEL 8.0


tex-bin

166
APPENDIX A. CHANGES TO PACKAGES

Original New package(s) Changed Note


package(s) since

texlive-tex-gyre, texlive-tex-gyre RHEL 8.0


texlive-tex-gyre-
doc

texlive-tex-gyre- texlive-tex-gyre- RHEL 8.0


math, texlive-tex- math
gyre-math-doc

texlive-tex4ht, texlive-tex4ht RHEL 8.0


texlive-tex4ht-bin,
texlive-tex4ht-doc

texlive-texconfig, texlive-texconfig RHEL 8.0


texlive-texconfig-
bin

texlive-texlive.infra, texlive-texlive.infra RHEL 8.0


texlive-texlive.infra-
bin, texlive-
texlive.infra-doc

texlive-textcase, texlive-textcase RHEL 8.0


texlive-textcase-
doc

texlive-textpos, texlive-textpos RHEL 8.0


texlive-textpos-doc

texlive- texlive- RHEL 8.0


threeparttable, threeparttable
texlive-
threeparttable-doc

texlive-thumbpdf, texlive-thumbpdf RHEL 8.0


texlive-thumbpdf-
bin, texlive-
thumbpdf-doc

texlive-tipa, texlive- texlive-tipa RHEL 8.0


tipa-doc

texlive-titlesec, texlive-titlesec RHEL 8.0


texlive-titlesec-doc

texlive-titling, texlive-titling RHEL 8.0


texlive-titling-doc

167
Red Hat Enterprise Linux 8 Considerations in adopting RHEL 8

Original New package(s) Changed Note


package(s) since

texlive-tocloft, texlive-tocloft RHEL 8.0


texlive-tocloft-doc

texlive-tools, texlive-tools RHEL 8.0


texlive-tools-doc

texlive-txfonts, texlive-txfonts RHEL 8.0


texlive-txfonts-doc

texlive-type1cm, texlive-type1cm RHEL 8.0


texlive-type1cm-
doc

texlive-typehtml, texlive-typehtml RHEL 8.0


texlive-typehtml-
doc

texlive- texlive-ucharclasses RHEL 8.0


ucharclasses,
texlive-
ucharclasses-doc

texlive-ucs, texlive- texlive-ucs RHEL 8.0


ucs-doc

texlive-uhc, texlive- texlive-uhc RHEL 8.0


uhc-doc

texlive-ulem, texlive-ulem RHEL 8.0


texlive-ulem-doc

texlive-underscore, texlive-underscore RHEL 8.0


texlive-underscore-
doc

texlive-unicode- texlive-unicode- RHEL 8.0


math, texlive- math
unicode-math-doc

texlive-unicode- texlive-unicode- RHEL 8.0


math, texlive- math
unicode-math-doc

texlive-unisugar, texlive-unisugar RHEL 8.0


texlive-unisugar-
doc

168
APPENDIX A. CHANGES TO PACKAGES

Original New package(s) Changed Note


package(s) since

texlive-url, texlive- texlive-url RHEL 8.0


url-doc

texlive-utopia, texlive-utopia RHEL 8.0


texlive-utopia-doc

texlive-varwidth, texlive-varwidth RHEL 8.0


texlive-varwidth-
doc

texlive-wadalab, texlive-wadalab RHEL 8.0


texlive-wadalab-
doc

texlive-was, texlive- texlive-was RHEL 8.0


was-doc

texlive-wasy, texlive-wasy RHEL 8.0


texlive-wasy-doc

texlive-wasysym, texlive-wasysym RHEL 8.0


texlive-wasysym-
doc

texlive-wrapfig, texlive-wrapfig RHEL 8.0


texlive-wrapfig-doc

texlive-xcolor, texlive-xcolor RHEL 8.0


texlive-xcolor-doc

texlive-xdvi, texlive- texlive-xdvi RHEL 8.0


xdvi-bin

texlive-xecjk, texlive-xecjk RHEL 8.0


texlive-xecjk-doc

texlive-xecolor, texlive-xecolor RHEL 8.0


texlive-xecolor-doc

texlive-xecyr, texlive-xecyr RHEL 8.0


texlive-xecyr-doc

texlive-xeindex, texlive-xeindex RHEL 8.0


texlive-xeindex-doc

169
Red Hat Enterprise Linux 8 Considerations in adopting RHEL 8

Original New package(s) Changed Note


package(s) since

texlive-xepersian, texlive-xepersian RHEL 8.0


texlive-xepersian-
doc

texlive-xesearch, texlive-xesearch RHEL 8.0


texlive-xesearch-
doc

texlive-xetex, texlive-xetex RHEL 8.0


texlive-xetex-bin,
texlive-xetex-doc

texlive-xetex-def texlive-graphics-def RHEL 8.0

texlive-xetex-itrans, texlive-xetex-itrans RHEL 8.0


texlive-xetex-
itrans-doc

texlive-xetex- texlive-xetex- RHEL 8.0


pstricks, texlive- pstricks
xetex-pstricks-doc

texlive-xetex- texlive-xetex- RHEL 8.0


tibetan, texlive- tibetan
xetex-tibetan-doc

texlive- texlive- RHEL 8.0


xetexfontinfo, xetexfontinfo
texlive-
xetexfontinfo-doc

texlive-xifthen, texlive-xifthen RHEL 8.0


texlive-xifthen-doc

texlive-xkeyval, texlive-xkeyval RHEL 8.0


texlive-xkeyval-doc

texlive-xltxtra, texlive-xltxtra RHEL 8.0


texlive-xltxtra-doc

texlive-xmltex, texlive-xmltex RHEL 8.0


texlive-xmltex-bin,
texlive-xmltex-doc

texlive-xstring, texlive-xstring RHEL 8.0


texlive-xstring-doc

170
APPENDIX A. CHANGES TO PACKAGES

Original New package(s) Changed Note


package(s) since

texlive-xtab, texlive-xtab RHEL 8.0


texlive-xtab-doc

texlive-xunicode, texlive-xunicode RHEL 8.0


texlive-xunicode-
doc

tkinter python2-tkinter, RHEL 8.0


python3-tkinter

trace-cmd kernelshark, trace- RHEL 8.0


cmd

tracker tracker, tracker- RHEL 8.0


miners

trousers trousers, trousers- RHEL 8.0


lib

unbound-python python3-unbound RHEL 8.0

unit-api unit-api, unit-api- RHEL 8.0


javadoc

uom-lib uom-lib, uom-lib- RHEL 8.0


javadoc

uom-se uom-se, uom-se- RHEL 8.0


javadoc

uom-systems uom-systems, uom- RHEL 8.0


systems-javadoc

urw-fonts urw-base35-fonts RHEL 8.0

util-linux util-linux, util-linux- RHEL 8.0


user

vlgothic-fonts google-noto-sans- RHEL 8.0


cjk-ttc-fonts

vulkan vulkan-loader, RHEL 8.0


vulkan-tools,
vulkan-validation-
layers

171
Red Hat Enterprise Linux 8 Considerations in adopting RHEL 8

Original New package(s) Changed Note


package(s) since

vulkan-devel mesa-vulkan-devel, RHEL 8.0


vulkan-headers,
vulkan-loader-devel

vulkan-filesystem vulkan-loader RHEL 8.0

webkitgtk4 webkit2gtk3 RHEL 8.0

webkitgtk4-devel webkit2gtk3-devel RHEL 8.0

webkitgtk4-jsc webkit2gtk3-jsc RHEL 8.0

webkitgtk4-jsc- webkit2gtk3-jsc- RHEL 8.0


devel devel

webkitgtk4-plugin- webkit2gtk3- RHEL 8.0


process-gtk2 plugin-process-gtk2

wireshark wireshark-cli RHEL 8.0

wireshark-gnome wireshark RHEL 8.0

wqy-zenhei-fonts google-noto-sans- RHEL 8.0


cjk-ttc-fonts

xchat hexchat RHEL 8.0

xmvn xmvn, xmvn-api, RHEL 8.0


xmvn-bisect, xmvn-
connector-aether,
xmvn-connector-
ivy, xmvn-core,
xmvn-install, xmvn-
minimal, xmvn-mojo,
xmvn-parent-pom,
xmvn-resolve,
xmvn-subst, xmvn-
tools-pom

xorg-x11-drv- xorg-x11-drv- RHEL 8.0


wacom wacom, xorg-x11-
drv-wacom-serial-
support

xsom xsom, xsom-javadoc RHEL 8.0

xterm xterm, xterm-resize RHEL 8.0

172
APPENDIX A. CHANGES TO PACKAGES

Original New package(s) Changed Note


package(s) since

yum-cron dnf-automatic RHEL 8.0 The dnf-automatic package provides


similar functionality, but is not compatible
with the yum-cron configuration files.

yum-metadata- python3-dnf RHEL 8.0 Users should now use the DNF API (queries,
parser package objects, and others) to work with the
repodata content.

yum-plugin-aliases, dnf RHEL 8.0 The mentioned functionalities are now


yum-plugin- provided by DNF. The functionality of yum-
fastestmirror, yum- plugin-tmprepo is provided by the --
plugin-priorities, repofrompath option. Setting the tsflags
yum-plugin- option is now an integral part of dnf: use --
remove-with-leaves, setopt=tsflags=<flags> .
yum-plugin-
tmprepo, yum-
plugin-tsflags

yum-plugin-auto- dnf-plugins-core RHEL 8.0 All these plug-ins are now part of the dnf-
update-debug-info, plugins-core package but are still
yum-plugin- installable under the original names.
changelog, yum-
plugin-copr

yum-plugin- python3-dnf- RHEL 8.0 Still installable under the original name.
versionlock plugin-versionlock

yum-rhn-plugin dnf-plugin- RHEL 8.0


spacewalk

yum-utils dnf-utils RHEL 8.0

For a complete list of packages available in the current minor RHEL 8 release, see the Package manifest.
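
The yum-to-dnf rows above name the replacement mechanisms only briefly. The commands below are a minimal sketch of how they are typically used; the repository path, the repository ID "tmp", and the package name "somepackage" are placeholders, and /etc/dnf/automatic.conf is the configuration file shipped with dnf-automatic:

    # yum-cron replacement: install dnf-automatic, adjust /etc/dnf/automatic.conf,
    # then start its systemd timer
    sudo dnf install dnf-automatic
    sudo systemctl enable --now dnf-automatic.timer

    # yum-plugin-tmprepo replacement: point dnf at a temporary repository for one transaction
    sudo dnf --repofrompath=tmp,/path/to/local/repo --repo=tmp install somepackage

    # yum-plugin-tsflags replacement: pass transaction flags directly
    sudo dnf install --setopt=tsflags=nodocs somepackage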

A.3. MOVED PACKAGES


The following packages were moved between repositories within RHEL 8:

Package: original repository -> current repository. Every move listed below happened in RHEL 8.1; rhel8-CRB is the CodeReady Linux Builder repository (see the example of enabling it after this table).

apache-commons-collections-javadoc: rhel8-AppStream -> rhel8-CRB
apache-commons-collections-testframework: rhel8-AppStream -> rhel8-CRB
apache-commons-lang-javadoc: rhel8-AppStream -> rhel8-CRB
compat-locales-sap: rhel8-AppStream -> rhel8-SAP-NetWeaver
iso-codes-devel: rhel8-CRB -> rhel8-AppStream
jakarta-commons-httpclient-demo: rhel8-AppStream -> rhel8-CRB
jakarta-commons-httpclient-javadoc: rhel8-AppStream -> rhel8-CRB
jakarta-commons-httpclient-manual: rhel8-AppStream -> rhel8-CRB
jna: rhel8-CRB -> rhel8-AppStream
libseccomp-devel: rhel8-CRB -> rhel8-AppStream
samba-test: rhel8-BaseOS -> rhel8-AppStream
spirv-tools-libs: rhel8-CRB -> rhel8-AppStream
velocity-demo: rhel8-AppStream -> rhel8-CRB
velocity-javadoc: rhel8-AppStream -> rhel8-CRB
velocity-manual: rhel8-AppStream -> rhel8-CRB
virtio-win: rhel8-Supplementary -> rhel8-AppStream
xerces-j2-demo: rhel8-AppStream -> rhel8-CRB
xerces-j2-javadoc: rhel8-AppStream -> rhel8-CRB
xkeyboard-config-devel: rhel8-CRB -> rhel8-AppStream
xml-commons-apis-javadoc: rhel8-AppStream -> rhel8-CRB
xml-commons-apis-manual: rhel8-AppStream -> rhel8-CRB
xml-commons-resolver-javadoc: rhel8-AppStream -> rhel8-CRB

For a complete list of packages available in the current minor RHEL 8 release, see the Package manifest.
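
Several of the packages listed above now live in the rhel8-CRB (CodeReady Linux Builder) repository, which is not enabled by default. A minimal sketch of enabling it on a subscribed RHEL 8 x86_64 machine follows; the exact repository ID depends on the architecture, so verify it first:

    # list the repositories available to this system and find the CRB entry
    subscription-manager repos --list | grep -i codeready
    # enable it (repository ID shown for x86_64; adjust for your architecture)
    sudo subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms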

A.4. REMOVED PACKAGES


The following packages are part of RHEL 7 but are not distributed with RHEL 8:

Package Note

a2ps    The a2ps package has been removed. The enscript package can cover some of its functionality; users can configure enscript in the /etc/enscript.cfg file.
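
A small illustration of the enscript replacement (file names are placeholders; system-wide defaults come from the /etc/enscript.cfg file mentioned above):

    enscript -p notes.ps notes.txt    # convert a text file to PostScript instead of printing it
    enscript notes.txt                # send the file straight to the default printer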

abrt-addon-upload-watch

abrt-devel

abrt-gui-devel

abrt-retrace-client

acpid-sysvinit

advancecomp

adwaita-icon-theme-devel

adwaita-qt-common

adwaita-qt4

agg

agg-devel

aic94xx-firmware

akonadi

akonadi-devel

akonadi-mysql

alacarte

alsa-tools

anaconda-widgets-devel

ant-antunit

ant-antunit-javadoc

antlr-C++-doc

antlr-python

apache-commons-
configuration

apache-commons-
configuration-javadoc

apache-commons-daemon

apache-commons-daemon-
javadoc

apache-commons-daemon-
jsvc

apache-commons-dbcp

apache-commons-dbcp-
javadoc

apache-commons-digester

apache-commons-digester-
javadoc

apache-commons-jexl

apache-commons-jexl-
javadoc

apache-commons-pool

apache-commons-pool-
javadoc

apache-commons-validator

apache-commons-validator-
javadoc

apache-commons-vfs

apache-commons-vfs-ant

apache-commons-vfs-
examples

apache-commons-vfs-
javadoc

apache-rat

apache-rat-core

apache-rat-javadoc

apache-rat-plugin

apache-rat-tasks

apr-util-nss    The apr-util-nss package provided a backend for the apr_crypto.h interface, using the NSS cryptography library. Any applications using the NSS backend for this interface should migrate to the OpenSSL backend, which is provided in the apr-util-openssl package.
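
Installing the OpenSSL backend named in the note is a single step (a minimal sketch):

    sudo dnf install apr-util-openssl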

args4j

args4j-javadoc

ark

ark-libs

asciidoc-latex

at-spi

at-spi-devel

at-spi-python

at-sysvinit

atlas-static

attica

attica-devel

audiocd-kio

audiocd-kio-devel

audiocd-kio-libs

audiofile

audiofile-devel

audit-libs-python

audit-libs-static

authconfig-gtk

authd

autogen-libopts-devel

automoc

autotrace-devel

avahi-dnsconfd

avahi-glib-devel

avahi-gobject-devel

avahi-qt3

avahi-qt3-devel

avahi-qt4

avahi-qt4-devel

avahi-tools

avahi-ui

avahi-ui-devel

avahi-ui-tools

avalon-framework

avalon-framework-javadoc

avalon-logkit

avalon-logkit-javadoc

bacula-console-bat

bacula-devel

bacula-traymonitor

baekmuk-ttf-batang-fonts

baekmuk-ttf-dotum-fonts

baekmuk-ttf-fonts-common

baekmuk-ttf-fonts-
ghostscript

baekmuk-ttf-gulim-fonts

baekmuk-ttf-hline-fonts

base64coder

base64coder-javadoc

batik

batik-demo

batik-javadoc

batik-rasterizer

batik-slideshow

batik-squiggle

batik-svgpp

batik-ttf2svg

bcc-devel

bison-devel

blas-static

blas64-devel

blas64-static

bltk

bluedevil

bluedevil-autostart

bmc-snmp-proxy

bogofilter-bogoupgrade

bridge-utils

bsdcpio

bsh-demo

bsh-utils

btrfs-progs

btrfs-progs-devel

buildnumber-maven-plugin

buildnumber-maven-plugin-
javadoc

bwidget

bzr

bzr-doc

cairo-tools

caribou

caribou-antler

caribou-devel

caribou-gtk2-module

caribou-gtk3-module

cdparanoia-static

cdrskin

ceph-common

check-static

cheese-libs-devel

cifs-utils-devel

cim-schema-docs

cjkuni-ukai-fonts

clutter-gst2-devel

clutter-tests

cmpi-bindings-pywbem

cobertura

cobertura-javadoc

cockpit-machines-ovirt

codehaus-parent

codemodel-javadoc

cogl-tests

colord-extra-profiles

colord-kde

compat-cheese314

compat-dapl

compat-dapl-devel

compat-dapl-static

compat-dapl-utils

compat-db

compat-db-headers

compat-db47

compat-exiv2-023

compat-gcc-44

compat-gcc-44-c++

compat-gcc-44-gfortran

compat-glade315

compat-glew

compat-glibc

compat-glibc-headers

compat-gnome-desktop314

compat-grilo02

compat-libcap1

compat-libcogl-pango12

compat-libcogl12

compat-libcolord1

compat-libf2c-34

compat-libgdata13

compat-libgfortran-41

compat-libgnome-bluetooth11

compat-libgnome-desktop3-
7

compat-libgweather3

compat-libical1

compat-libmediaart0

compat-libmpc

compat-libpackagekit-glib2-
16

compat-libstdc++-33

compat-libtiff3

compat-libupower-glib1

compat-libxcb

compat-openldap

compat-openmpi16

compat-openmpi16-devel

compat-opensm-libs

compat-poppler022

compat-poppler022-cpp

compat-poppler022-glib

compat-poppler022-qt

compat-sap-c++-5

compat-sap-c++-6

compat-sap-c++-7

comps-extras

conman

console-setup

coolkey-devel

cpptest

cpptest-devel

cppunit

cppunit-devel

cppunit-doc

cpuid

cracklib-python

crda-devel

crit

criu-devel

crypto-utils

cryptsetup-python

cvs    Version control systems supported in RHEL 8 are Git, Mercurial, and Subversion.

cvs-contrib    Version control systems supported in RHEL 8 are Git, Mercurial, and Subversion.

cvs-doc    Version control systems supported in RHEL 8 are Git, Mercurial, and Subversion.

cvs-inetd    Version control systems supported in RHEL 8 are Git, Mercurial, and Subversion.

cvsps

cyrus-imapd-devel

dapl

dapl-devel

dapl-static

dapl-utils

dbus-doc

dbus-python-devel

dbus-tests

dbusmenu-qt

dbusmenu-qt-devel

dbusmenu-qt-devel-docs

debugmode

dejavu-lgc-sans-fonts

dejavu-lgc-sans-mono-fonts

dejavu-lgc-serif-fonts

deltaiso

device-mapper-multipath-
sysvinit

dhcp-devel

dialog-devel

dleyna-connector-dbus-devel

dleyna-core-devel

dlm-devel

dmraid    Users requiring support for combined hardware and software RAID host bus adapters (HBAs) should use the mdadm utility.
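
A short mdadm sketch for the use case described above; device names and array layout are system specific:

    sudo mdadm --examine --scan     # report RAID metadata found on the block devices
    sudo mdadm --assemble --scan    # assemble every array mdadm can identify from its config or metadata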

dmraid-devel

dmraid-events

dmraid-events-logwatch

docbook-simple

docbook-slides

docbook-utils-pdf

docbook5-style-xsl

docbook5-style-xsl-
extensions

docker-rhel-push-plugin

dom4j

dom4j-demo

dom4j-javadoc

dom4j-manual

dovecot-pigeonhole

dracut-fips    The functionality of the dracut-fips package is provided by the crypto-policies package and the fips-mode-setup tool in RHEL 8.
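
The fips-mode-setup tool named above is used as follows (enabling FIPS mode takes effect after a reboot):

    fips-mode-setup --check          # report whether FIPS mode is currently enabled
    sudo fips-mode-setup --enable    # switch the system-wide crypto policy to FIPS
    sudo reboot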

dracut-fips-aesni

dragon

drm-utils

drpmsync

dtdinst

dumpet

dvgrab

e2fsprogs-static

ecj

edac-utils-devel

efax

efivar-devel

egl-utils

ekiga

ElectricFence

emacs-a2ps

emacs-a2ps-el

emacs-auctex

emacs-auctex-doc

emacs-git

emacs-git-el

emacs-gnuplot

emacs-gnuplot-el

emacs-php-mode

empathy Instant messaging clients supported in RHEL 8 are hexchat and pidgin.

enchant-aspell

enchant-voikko

eog-devel

epydoc

espeak-devel

evince-devel

evince-dvi

evolution-data-server-doc

evolution-data-server-perl

evolution-data-server-tests

evolution-devel

evolution-devel-docs

evolution-tests

expat-static    The expat-static package, which provided a static build of the expat XML library, is no longer distributed. Use dynamic linking instead.
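
A minimal sketch of linking dynamically against expat instead of the removed static library (the source and binary names are placeholders):

    gcc -o parser parser.c $(pkg-config --cflags --libs expat)
    # or, without pkg-config:
    gcc -o parser parser.c -lexpat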

expect-devel

expectk

farstream

farstream-devel

farstream-python

farstream02-devel

fedfs-utils-admin

fedfs-utils-client

fedfs-utils-common

fedfs-utils-devel

fedfs-utils-lib

fedfs-utils-nsdbparams

fedfs-utils-python

fedfs-utils-server

felix-bundlerepository

felix-bundlerepository-
javadoc

felix-framework

felix-framework-javadoc

felix-osgi-obr

felix-osgi-obr-javadoc

felix-shell

felix-shell-javadoc

fence-sanlock

festival

festival-devel

festival-docs

festival-freebsoft-utils

festival-lib

festival-speechtools-devel

festival-speechtools-libs

festival-speechtools-utils

festvox-awb-arctic-hts

festvox-bdl-arctic-hts

festvox-clb-arctic-hts

festvox-jmk-arctic-hts

festvox-kal-diphone

festvox-ked-diphone

festvox-rms-arctic-hts

festvox-slt-arctic-hts

file-static

filebench

filesystem-content

finch

finch-devel

finger    Users of the finger client/server can use the who, pinky, and last commands. For remote machines, run these commands over SSH.
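
Examples of the suggested replacements; the remote user and host are placeholders:

    who                              # users currently logged in on this machine
    pinky                            # lightweight, finger-like listing
    last -n 5                        # the five most recent logins
    ssh user@host.example.com who    # the same information from a remote machine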

finger-server

flatpak-devel

fltk-fluid

fltk-static

flute-javadoc

folks

folks-devel

folks-tools

fontforge-devel

fontpackages-tools

fonttools

fop

fop-javadoc

fprintd-devel

freeradius-python

freetype-demos

fros

fros-gnome

fros-recordmydesktop

fuseiso

fwupd-devel

fwupdate-devel

gamin-python

gavl-devel

gcab

gcc-gnat

gcc-go

gcc-objc

gcc-objc++

gcc-plugin-devel

gconf-editor

gd-progs

gdk-pixbuf2-tests

gdm-devel

gdm-pam-extensions-devel

gedit-devel

gedit-plugin-bookmarks

gedit-plugin-
bracketcompletion

gedit-plugin-charmap

gedit-plugin-codecomment

gedit-plugin-colorpicker

gedit-plugin-colorschemer

gedit-plugin-commander

gedit-plugin-drawspaces

gedit-plugin-findinfiles

gedit-plugin-joinlines

gedit-plugin-multiedit

gedit-plugin-smartspaces

gedit-plugin-synctex

gedit-plugin-terminal

gedit-plugin-textsize

gedit-plugin-translate

gedit-plugin-wordcompletion

gedit-plugins

gedit-plugins-data

gegl-devel

geoclue

geoclue-devel

geoclue-doc

geoclue-gsmloc

geoclue-gui

GeoIP    The GeoIP package works only with legacy databases. The replacement provided in RHEL 8 is the new libmaxminddb package together with the geoipupdate package. This is a new API created by the upstream GeoIP project, and it supports the new mmdb database format.
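
A short sketch of the mmdb-based replacement. The mmdblookup tool ships with libmaxminddb, but the database path below is an assumption that depends on how geoipupdate is configured, and 203.0.113.7 is just a documentation-range address:

    sudo dnf install libmaxminddb geoipupdate
    sudo geoipupdate    # fetch or refresh the .mmdb databases
    mmdblookup --file /usr/share/GeoIP/GeoLite2-Country.mmdb --ip 203.0.113.7 country names en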

GeoIP-data

GeoIP-devel

GeoIP-update

geronimo-jaspic-spec

geronimo-jaspic-spec-
javadoc

geronimo-jaxrpc

geronimo-jaxrpc-javadoc

geronimo-jta

geronimo-jta-javadoc

geronimo-osgi-support

geronimo-osgi-support-
javadoc

geronimo-saaj

geronimo-saaj-javadoc

ghostscript-chinese

ghostscript-chinese-zh_CN

ghostscript-chinese-zh_TW

ghostscript-cups

ghostscript-gtk

giflib-utils

gimp-data-extras

gimp-help

gimp-help-ca

gimp-help-da

gimp-help-de

gimp-help-el

gimp-help-en_GB

gimp-help-es

gimp-help-fr

gimp-help-it

gimp-help-ja

gimp-help-ko

gimp-help-nl

gimp-help-nn

gimp-help-pt_BR

gimp-help-ru

gimp-help-sl

gimp-help-sv

gimp-help-zh_CN

git-bzr

git-cvs

git-gnome-keyring

git-hg

git-p4

gjs-tests

glade

glade3

glade3-libgladeui

glade3-libgladeui-devel

glassfish-dtd-parser

glassfish-dtd-parser-javadoc

glassfish-jaxb-javadoc

glassfish-jsp

glassfish-jsp-javadoc

glew

glib-networking-tests

gmp-static

gnome-clocks

gnome-contacts

gnome-desktop3-tests

gnome-devel-docs

gnome-dictionary

gnome-doc-utils

gnome-doc-utils-stylesheets

gnome-documents

gnome-documents-libs

gnome-icon-theme

gnome-icon-theme-devel

gnome-icon-theme-extras

gnome-icon-theme-legacy

gnome-icon-theme-symbolic

gnome-packagekit

gnome-packagekit-common

gnome-packagekit-installer

gnome-packagekit-updater

gnome-python2

gnome-python2-bonobo

gnome-python2-canvas

gnome-python2-devel

gnome-python2-gconf

gnome-python2-gnome

gnome-python2-gnomevfs

gnome-settings-daemon-
devel

gnome-software-devel

gnome-vfs2

gnome-vfs2-devel

gnome-vfs2-smb

gnome-weather

gnome-weather-tests

gnote

gnu-efi-utils

gnu-getopt

gnu-getopt-javadoc

gnuplot-latex

gnuplot-minimal

gob2

gom-devel

google-noto-sans-korean-
fonts

google-noto-sans-simplified-
chinese-fonts

google-noto-sans-traditional-
chinese-fonts

gperftools

gperftools-devel

gperftools-libs

gpm-static

grantlee

grantlee-apidocs

grantlee-devel

graphviz-graphs

graphviz-guile

graphviz-java

graphviz-lua

graphviz-ocaml

graphviz-perl

graphviz-php

graphviz-python

graphviz-ruby

graphviz-tcl

groff-doc

groff-perl

groff-x11

groovy

groovy-javadoc

grub2

grub2-ppc-modules

grub2-ppc64-modules

gsm-tools

gsound-devel

gssdp-utils

gstreamer

gstreamer-devel

gstreamer-devel-docs

gstreamer-plugins-bad-free

gstreamer-plugins-bad-free-
devel

gstreamer-plugins-bad-free-
devel-docs

gstreamer-plugins-base

gstreamer-plugins-base-devel

gstreamer-plugins-base-
devel-docs

gstreamer-plugins-base-tools

gstreamer-plugins-good

gstreamer-plugins-good-
devel-docs

gstreamer-python

gstreamer-python-devel

gstreamer-tools

gstreamer1-devel-docs

gstreamer1-plugins-base-
devel-docs

gstreamer1-plugins-base-
tools

gstreamer1-plugins-ugly-free-
devel

gtk-vnc

gtk-vnc-devel

gtk-vnc-python

gtk-vnc2-devel

gtk3-devel-docs

gtk3-immodules

gtk3-tests

gtkhtml3

gtkhtml3-devel

gtksourceview3-tests

gucharmap

gucharmap-devel

gucharmap-libs

gupnp-av-devel

gupnp-av-docs

gupnp-dlna-devel

gupnp-dlna-docs

gupnp-docs

gupnp-igd-python

gutenprint-devel

gutenprint-extras

gutenprint-foomatic

gvfs-tests

gvnc-devel

gvnc-tools

gvncpulse

gvncpulse-devel

gwenview

gwenview-libs

hawkey-devel

highcontrast-qt

highcontrast-qt4

highcontrast-qt5

highlight-gui

hispavoces-pal-diphone

hispavoces-sfl-diphone

hsakmt

hsakmt-devel

hspell-devel

hsqldb

hsqldb-demo

hsqldb-javadoc

hsqldb-manual

htdig

html2ps

http-parser-devel

httpunit

httpunit-doc

httpunit-javadoc

i2c-tools-eepromer

i2c-tools-python

ibus-pygtk2

ibus-qt

ibus-qt-devel

ibus-qt-docs

ibus-rawcode

ibus-table-devel

ibutils

ibutils-devel

ibutils-libs

icc-profiles-openicc

icon-naming-utils

im-chooser

im-chooser-common

ImageMagick

ImageMagick-c++

ImageMagick-c++-devel

ImageMagick-devel

ImageMagick-doc

ImageMagick-perl

imsettings

imsettings-devel

imsettings-gsettings

imsettings-libs

imsettings-qt

imsettings-xim

indent

infinipath-psm

infinipath-psm-devel

iniparser

iniparser-devel

iok

ipa-gothic-fonts

ipa-mincho-fonts

ipa-pgothic-fonts

ipa-pmincho-fonts

iperf3-devel

iproute-doc

ipset-devel

ipsilon

ipsilon-authform

ipsilon-authgssapi

ipsilon-authldap

ipsilon-base

ipsilon-client

ipsilon-filesystem

ipsilon-infosssd

ipsilon-persona

ipsilon-saml2

ipsilon-saml2-base

ipsilon-tools-ipa

iputils-sysvinit

iscsi-initiator-utils-devel

isdn4k-utils

isdn4k-utils-devel

isdn4k-utils-doc

isdn4k-utils-static

isdn4k-utils-vboxgetty

isomd5sum-devel

istack-commons-javadoc

ixpdimm-cli

ixpdimm-monitor

ixpdimm_sw

ixpdimm_sw-devel

jai-imageio-core

jai-imageio-core-javadoc

jakarta-taglibs-standard

jakarta-taglibs-standard-
javadoc

jandex

jandex-javadoc

jansson-devel-doc

jarjar

jarjar-javadoc

jarjar-maven-plugin

jasper

jasper-utils

java-1.6.0-openjdk

java-1.6.0-openjdk-demo

java-1.6.0-openjdk-devel

java-1.6.0-openjdk-javadoc

java-1.6.0-openjdk-src

java-1.7.0-openjdk

java-1.7.0-openjdk-
accessibility

java-1.7.0-openjdk-demo

java-1.7.0-openjdk-devel

java-1.7.0-openjdk-headless

java-1.7.0-openjdk-javadoc

java-1.7.0-openjdk-src

java-1.8.0-openjdk-
accessibility-debug

java-1.8.0-openjdk-debug

java-1.8.0-openjdk-demo-
debug

java-1.8.0-openjdk-devel-
debug

java-1.8.0-openjdk-headless-
debug

java-1.8.0-openjdk-javadoc-
debug

java-1.8.0-openjdk-javadoc-
zip-debug

java-1.8.0-openjdk-src-debug

java-11-openjdk-debug

java-11-openjdk-demo-debug

java-11-openjdk-devel-debug

java-11-openjdk-headless-
debug

java-11-openjdk-javadoc-
debug

java-11-openjdk-javadoc-zip-
debug

java-11-openjdk-jmods-debug

java-11-openjdk-src-debug

jboss-ejb-3.1-api

jboss-ejb-3.1-api-javadoc

jboss-el-2.2-api

jboss-el-2.2-api-javadoc

jboss-jaxrpc-1.1-api

jboss-jaxrpc-1.1-api-javadoc

jboss-servlet-2.5-api

jboss-servlet-2.5-api-javadoc

jboss-servlet-3.0-api

jboss-servlet-3.0-api-javadoc

jboss-specs-parent

jboss-transaction-1.1-api

jboss-transaction-1.1-api-
javadoc

jettison

jettison-javadoc

jetty-annotations

jetty-ant

jetty-artifact-remote-
resources

jetty-assembly-descriptors

jetty-build-support

jetty-build-support-javadoc

jetty-client

jetty-continuation

jetty-deploy

jetty-distribution-remote-
resources

jetty-http

jetty-io

jetty-jaas

jetty-jaspi

jetty-javadoc

jetty-jmx

jetty-jndi

jetty-jsp

jetty-jspc-maven-plugin

jetty-maven-plugin

jetty-monitor

jetty-parent

jetty-plus

jetty-project

jetty-proxy

jetty-rewrite

jetty-runner

jetty-security

jetty-server

jetty-servlet

jetty-servlets

jetty-start

jetty-test-policy

jetty-test-policy-javadoc

jetty-toolchain

jetty-util

jetty-util-ajax

jetty-version-maven-plugin

jetty-version-maven-plugin-
javadoc

jetty-webapp

jetty-websocket-api

jetty-websocket-client

jetty-websocket-common

jetty-websocket-parent

jetty-websocket-server

jetty-websocket-servlet

jetty-xml

jing

jing-javadoc

jline-demo

jna-contrib

jna-javadoc

joda-convert

joda-convert-javadoc

js

js-devel

jsch-demo

json-glib-tests

jsr-311

jsr-311-javadoc

juk

junit-demo

k3b

k3b-common

k3b-devel

k3b-libs

kaccessible

kaccessible-libs

kactivities

kactivities-devel

kamera

kate

kate-devel

kate-libs

kate-part

kcalc

kcharselect

kcm-gtk

kcm_colors

kcm_touchpad

kcolorchooser

kcoloredit

kde-base-artwork

kde-baseapps

kde-baseapps-devel

kde-baseapps-libs

kde-filesystem

kde-l10n

kde-l10n-Arabic

kde-l10n-Basque

kde-l10n-Bosnian

kde-l10n-British

kde-l10n-Bulgarian

kde-l10n-Catalan

kde-l10n-Catalan-Valencian

kde-l10n-Croatian

kde-l10n-Czech

kde-l10n-Danish

kde-l10n-Dutch

kde-l10n-Estonian

kde-l10n-Farsi

kde-l10n-Finnish

kde-l10n-Galician

kde-l10n-Greek

kde-l10n-Hebrew

kde-l10n-Hungarian

kde-l10n-Icelandic

kde-l10n-Interlingua

kde-l10n-Irish

kde-l10n-Kazakh

kde-l10n-Khmer

kde-l10n-Latvian

kde-l10n-Lithuanian

kde-l10n-LowSaxon

kde-l10n-Norwegian

kde-l10n-Norwegian-Nynorsk

kde-l10n-Polish

kde-l10n-Portuguese

kde-l10n-Romanian

kde-l10n-Serbian

kde-l10n-Slovak

kde-l10n-Slovenian

kde-l10n-Swedish

kde-l10n-Tajik

kde-l10n-Thai

kde-l10n-Turkish

kde-l10n-Ukrainian

kde-l10n-Uyghur

kde-l10n-Vietnamese

kde-l10n-Walloon

kde-plasma-
networkmanagement

kde-plasma-
networkmanagement-
libreswan

kde-plasma-
networkmanagement-libs

kde-plasma-
networkmanagement-mobile

kde-print-manager

kde-runtime

kde-runtime-devel

kde-runtime-drkonqi

kde-runtime-libs

kde-settings

kde-settings-ksplash

kde-settings-minimal

kde-settings-plasma

kde-settings-pulseaudio

kde-style-oxygen

kde-style-phase

kde-wallpapers

kde-workspace

kde-workspace-devel

kde-workspace-ksplash-
themes

kde-workspace-libs

kdeaccessibility

kdeadmin

kdeartwork

kdeartwork-screensavers

kdeartwork-sounds

kdeartwork-wallpapers

kdeclassic-cursor-theme

kdegraphics

kdegraphics-devel

kdegraphics-libs

kdegraphics-strigi-analyzer

kdegraphics-thumbnailers

kdelibs

kdelibs-apidocs

kdelibs-common

kdelibs-devel

kdelibs-ktexteditor

kdemultimedia

kdemultimedia-common

kdemultimedia-devel

kdemultimedia-libs

kdenetwork

kdenetwork-common

kdenetwork-devel

kdenetwork-fileshare-samba

kdenetwork-kdnssd

kdenetwork-kget

kdenetwork-kget-libs

kdenetwork-kopete

kdenetwork-kopete-devel

kdenetwork-kopete-libs

kdenetwork-krdc

kdenetwork-krdc-devel

kdenetwork-krdc-libs

kdenetwork-krfb

kdenetwork-krfb-libs

kdepim

kdepim-devel

kdepim-libs

kdepim-runtime

kdepim-runtime-libs

kdepimlibs

kdepimlibs-akonadi

kdepimlibs-apidocs

kdepimlibs-devel

kdepimlibs-kxmlrpcclient

kdeplasma-addons

kdeplasma-addons-devel

kdeplasma-addons-libs

kdesdk

kdesdk-cervisia

kdesdk-common

kdesdk-devel

kdesdk-dolphin-plugins

kdesdk-kapptemplate

kdesdk-kapptemplate-
template

kdesdk-kcachegrind

kdesdk-kioslave

kdesdk-kmtrace

kdesdk-kmtrace-devel

kdesdk-kmtrace-libs

kdesdk-kompare

kdesdk-kompare-devel

kdesdk-kompare-libs

kdesdk-kpartloader

kdesdk-kstartperf

kdesdk-kuiviewer

kdesdk-lokalize

kdesdk-okteta

kdesdk-okteta-devel

kdesdk-okteta-libs

kdesdk-poxml

kdesdk-scripts

kdesdk-strigi-analyzer

kdesdk-thumbnailers

kdesdk-umbrello

kdeutils

kdeutils-common

kdeutils-minimal

kdf

kernel-rt-doc

kernel-rt-trace

kernel-rt-trace-devel

kernel-rt-trace-kvm

keytool-maven-plugin

keytool-maven-plugin-
javadoc

kgamma

kgpg

kgreeter-plugins

khotkeys

khotkeys-libs

kiconedit

kinfocenter

kio_sysinfo

kmag

kmenuedit

kmix

kmod-oracleasm

kolourpaint

kolourpaint-libs

konkretcmpi

konkretcmpi-devel

konkretcmpi-python

konsole

konsole-part

kross-interpreters

kross-python

kross-ruby

kruler

ksaneplugin

kscreen

ksnapshot

ksshaskpass

ksysguard

ksysguard-libs

ksysguardd

ktimer

kwallet

kwin

kwin-gles

kwin-gles-libs

kwin-libs

kwrite

kxml

kxml-javadoc

lapack64-devel

lapack64-static

lasso-devel

latencytop

latencytop-common

latencytop-tui

latrace

lcms2-utils

ldns-doc

ldns-python

libabw-devel

libabw-doc

libabw-tools

libappindicator

libappindicator-devel

libappindicator-docs

libappstream-glib-builder

libappstream-glib-builder-
devel

libart_lgpl

libart_lgpl-devel

libasan-static

libavc1394-devel

libbase-javadoc

libblockdev-btrfs

libblockdev-btrfs-devel

libblockdev-crypto-devel

libblockdev-devel

libblockdev-dm-devel

libblockdev-fs-devel

libblockdev-kbd-devel

libblockdev-loop-devel

libblockdev-lvm-devel

libblockdev-mdraid-devel

libblockdev-mpath-devel

libblockdev-nvdimm-devel

libblockdev-part-devel

libblockdev-swap-devel

libblockdev-utils-devel

libblockdev-vdo-devel

libbluedevil

libbluedevil-devel

libbluray-devel

libbonobo

libbonobo-devel

libbonoboui

libbonoboui-devel

libbytesize-devel

libcacard-tools

libcap-ng-python

libcdr-devel

libcdr-doc

libcdr-tools

libcgroup-devel

libchamplain-demos

libchewing

libchewing-devel

libchewing-python

libcmis-devel

libcmis-tools

libcmpiutil

libcmpiutil-devel

libcryptui

libcryptui-devel

libdb-devel-static

libdb-java

libdb-java-devel

libdb-tcl

libdb-tcl-devel

libdbi

libdbi-dbd-mysql

libdbi-dbd-pgsql

libdbi-dbd-sqlite

libdbi-devel

libdbi-drivers

libdbusmenu-gtk2

libdbusmenu-gtk2-devel

libdhash-devel

libdmapsharing-devel

libdmmp-devel

libdmx-devel

libdnet-progs

libdnet-python

libdnf-devel

libdv-tools

libdvdnav-devel

libeasyfc-devel

libeasyfc-gobject-devel

libee

libee-devel

libee-utils

libesmtp

libesmtp-devel

libestr-devel

libetonyek-doc

libetonyek-tools

libevdev-utils

libexif-doc

libexttextcat-devel

libexttextcat-tools

libfastjson-devel

libfonts-javadoc

libformula-javadoc

libfprint-devel

libfreehand-devel

libfreehand-doc

libfreehand-tools

libgcab1-devel

libgccjit

libgdither-devel

libgee06

libgee06-devel

libgepub

libgepub-devel

libgfortran-static

libgfortran4

libgfortran5

libglade2

libglade2-devel

libGLEWmx

libgnat

libgnat-devel

libgnat-static

libgnome

libgnome-devel

libgnome-keyring-devel

libgnomecanvas

libgnomecanvas-devel

libgnomeui

libgnomeui-devel

libgo

libgo-devel

libgo-static

libgovirt-devel

libgxim

libgxim-devel

libgxps-tools

libhangul-devel

libhbaapi-devel

libhif-devel

libibcommon

libibcommon-devel

libibcommon-static

libical-glib

libical-glib-devel

libical-glib-doc

libid3tag

libid3tag-devel

libiec61883-utils

libieee1284-python

libimobiledevice-python

libimobiledevice-utils

libindicator

libindicator-devel

libindicator-tools

libinvm-cim

libinvm-cim-devel

libinvm-cli

libinvm-cli-devel

libinvm-i18n

libinvm-i18n-devel

libiodbc

libiodbc-devel

libipa_hbac-devel

libiptcdata-devel

libiptcdata-python

libitm-static

libixpdimm-cim

libixpdimm-core

libjpeg-turbo-static

libkcddb

libkcddb-devel

libkcompactdisc

libkcompactdisc-devel

libkdcraw

libkdcraw-devel

libkexiv2

libkexiv2-devel

libkipi

libkipi-devel

libkkc-devel

libkkc-tools

libksane

libksane-devel

libkscreen

libkscreen-devel

libkworkspace

liblayout-javadoc

libloader-javadoc

liblognorm-devel

liblouis-devel

liblouis-doc

liblouis-utils

libmatchbox-devel

libmbim-devel

libmediaart-devel

libmediaart-tests

libmnl-static

libmodman-devel

libmpc-devel

libmsn

libmsn-devel

libmspub-devel

libmspub-doc

libmspub-tools

libmtp-examples

libmudflap

libmudflap-devel

libmudflap-static

libmwaw-devel

libmwaw-doc

libmwaw-tools

libmx

libmx-devel

libmx-docs

libndp-devel

libnetfilter_cthelper-devel

libnetfilter_cttimeout-devel

libnftnl-devel

libnl

libnl-devel

libnm-gtk

libnm-gtk-devel

libntlm

libntlm-devel

libobjc

libodfgen-doc

libofa

libofa-devel

liboil

liboil-devel

libopenraw-pixbuf-loader

liborcus-devel

liborcus-doc

liborcus-tools

libosinfo-devel

libosinfo-vala

libotf-devel

libpagemaker-devel

libpagemaker-doc

libpagemaker-tools

libpinyin-devel

libpinyin-tools

libpipeline-devel

libplist-python

libpmemcto

libpmemcto-debug

libpmemcto-devel

libpmemobj++-devel

libpng-static

libpng12-devel

libproxy-kde

libpst

libpst-devel

libpst-devel-doc

libpst-doc

libpst-python

libpurple-perl

libpurple-tcl

libqmi-devel

libquadmath-static

LibRaw-static

librelp-devel

libreoffice

libreoffice-bsh

libreoffice-gdb-debug-
support

libreoffice-glade

libreoffice-librelogo

libreoffice-nlpsolver

libreoffice-officebean

libreoffice-officebean-
common

libreoffice-postgresql

libreoffice-rhino

libreofficekit-devel

librepo-devel

libreport-compat

libreport-devel

libreport-gtk-devel

libreport-web-devel

librepository-javadoc

librevenge-doc

librhsm-devel

librsvg2-tools

libselinux-static

libsemanage-devel

libsemanage-static

libserializer-javadoc

libsexy

libsexy-devel

libsmbios-devel

libsmi-devel

libsndfile-utils

libsolv-demo

libsolv-devel

libsolv-tools

libspiro-devel

libss-devel

libssh2 The libssh2 package was temporarily available in RHEL 8.0 due to a
qemu-kvm dependency. Starting with RHEL 8.1, the QEMU emulator uses
the libssh library instead, and libssh2 has been removed.

libssh2-devel

libsss_certmap-devel

libsss_idmap-devel

libsss_simpleifp-devel

libstaroffice-devel

libstaroffice-doc

libstaroffice-tools

libstdc++-static

libstoragemgmt-devel

libstoragemgmt-targetd-
plugin

libtar-devel

libteam-devel

libtheora-devel-docs

libtiff-static

libtimezonemap-devel

libtnc

libtnc-devel

libtranslit

libtranslit-devel

libtranslit-icu

libtranslit-m17n

libtsan-static

libudisks2-devel

libuninameslist-devel

libunwind

libunwind-devel

libusal-devel

libusb-static

libusbmuxd-utils

libuser-devel

libusnic_verbs

libvdpau-docs

libverto-glib

libverto-glib-devel

libverto-libevent-devel

libverto-tevent

libverto-tevent-devel

libvirt-cim

libvirt-daemon-driver-lxc

libvirt-daemon-lxc

libvirt-gconfig-devel

libvirt-glib-devel

libvirt-gobject-devel

libvirt-java

libvirt-java-devel

libvirt-java-javadoc

libvirt-login-shell

libvirt-snmp

libvisio-doc

libvisio-tools

libvma-devel

libvma-utils

libvoikko-devel

libvpx-utils

libwebp-java

libwebp-tools

libwpd-tools

libwpg-tools

libwps-tools

libwsman-devel

libwvstreams

libwvstreams-devel

libwvstreams-static

libxcb-doc

libXevie

libXevie-devel

libXfont

libXfont-devel

libxml2-static

libxslt-python

libXvMC-devel

libzapojit

libzapojit-devel

libzmf-devel

libzmf-doc

libzmf-tools

lldpad-devel

log4cxx

log4cxx-devel

log4j-manual

lpsolve-devel

lua-static

lvm2-cluster

lvm2-python-libs

lvm2-sysvinit

lz4-static

m17n-contrib

m17n-contrib-extras

m17n-db-devel

m17n-db-extras

m17n-lib-devel

m17n-lib-tools

m2crypto

malaga-devel

man-pages-cs

man-pages-es

man-pages-es-extra

man-pages-fr

man-pages-it

man-pages-ja

man-pages-ko

man-pages-pl

man-pages-ru

man-pages-zh-CN

mariadb-bench

marisa-devel

marisa-perl

marisa-python

marisa-ruby

marisa-tools

maven-changes-plugin

maven-changes-plugin-
javadoc

maven-deploy-plugin

maven-deploy-plugin-javadoc

maven-doxia-module-fo

maven-ear-plugin

maven-ear-plugin-javadoc

maven-ejb-plugin

maven-ejb-plugin-javadoc

maven-error-diagnostics

maven-gpg-plugin

maven-gpg-plugin-javadoc

maven-istack-commons-
plugin

maven-jarsigner-plugin

maven-jarsigner-plugin-
javadoc

maven-javadoc-plugin

maven-javadoc-plugin-
javadoc

maven-jxr

maven-jxr-javadoc

maven-osgi

maven-osgi-javadoc

maven-plugin-jxr

maven-project-info-reports-
plugin

maven-project-info-reports-
plugin-javadoc

maven-release

maven-release-javadoc

maven-release-manager

maven-release-plugin

maven-reporting-exec

maven-repository-builder

maven-repository-builder-
javadoc

maven-scm

maven-scm-javadoc

maven-scm-test

maven-shared-jar

maven-shared-jar-javadoc

maven-site-plugin

maven-site-plugin-javadoc

maven-verifier-plugin

maven-verifier-plugin-javadoc

maven-wagon-provider-test

maven-wagon-scm

maven-war-plugin

maven-war-plugin-javadoc

mdds-devel

meanwhile-devel

meanwhile-doc

memcached-devel

memstomp

mesa-demos

mesa-libxatracker-devel

mesa-private-llvm

mesa-private-llvm-devel

metacity-devel

mgetty    Logins through a serial line can be done using agetty. Customers can use other means for faxing (web faxing, multi-function printers, and others).
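
For the serial-line login case, agetty is normally started through the systemd serial-getty template; ttyS0 below is an assumption, substitute the serial device you actually use:

    sudo systemctl enable --now serial-getty@ttyS0.service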

mgetty-sendfax

mgetty-viewfax

mgetty-voice

migrationtools

minizip

minizip-devel

mipv6-daemon

mkbootdisk

mobile-broadband-provider-
info-devel

mod_auth_mellon-diagnostics

mod_revocator

ModemManager-vala

mono-icon-theme

mozjs17

mozjs17-devel

mozjs24

mozjs24-devel

mpage

mpich-3.0-autoload

mpich-3.0-doc

mpich-3.2-autoload

mpich-3.2-doc

mpitests-compat-openmpi16

msv-demo

msv-msv

msv-rngconv

msv-xmlgen

mvapich2-2.0-devel

mvapich2-2.0-doc

mvapich2-2.0-psm-devel

mvapich2-2.2-devel

mvapich2-2.2-doc

mvapich2-2.2-psm-devel

mvapich2-2.2-psm2-devel

mvapich23-devel

mvapich23-doc

mvapich23-psm-devel

mvapich23-psm2-devel

nagios-plugins-bacula

nasm-doc

nasm-rdoff

ncurses-static

nekohtml

nekohtml-demo

nekohtml-javadoc

nepomuk-core

nepomuk-core-devel

nepomuk-core-libs

nepomuk-widgets

nepomuk-widgets-devel

net-snmp-gui

net-snmp-perl

net-snmp-python

net-snmp-sysvinit

netsniff-ng

NetworkManager-glib

NetworkManager-glib-devel

newt-static

nfsometer

nfstest

nhn-nanum-brush-fonts

nhn-nanum-fonts-common

nhn-nanum-myeongjo-fonts

nhn-nanum-pen-fonts

nmap-frontend

nss-pem

nss-pkcs11-devel

nss_compat_ossl

nss_compat_ossl-devel

ntp-doc

ntp-perl

nuvola-icon-theme

nuxwdog

nuxwdog-client-java

nuxwdog-client-perl

nuxwdog-devel

obex-data-server

obexd

objectweb-anttask

objectweb-anttask-javadoc

ocaml-brlapi

ocaml-calendar

ocaml-calendar-devel

ocaml-csv

ocaml-csv-devel

ocaml-curses

ocaml-curses-devel

ocaml-docs

ocaml-emacs

ocaml-fileutils

ocaml-fileutils-devel

ocaml-gettext

ocaml-gettext-devel

ocaml-libvirt

ocaml-libvirt-devel

ocaml-ocamlbuild-doc

ocaml-source

ocaml-x11

ocaml-xml-light

ocaml-xml-light-devel

oci-register-machine

okular

okular-devel

okular-libs

okular-part

opa-libopamgt-devel

opal

opal-devel

open-vm-tools-devel

open-vm-tools-test

opencc

opencc-devel

opencc-tools

openchange-client

openchange-devel

openchange-devel-docs

opencv-devel-docs

opencv-python

OpenEXR

openhpi-devel

OpenIPMI-modalias

openjpeg-libs

openldap-servers

openldap-servers-sql

openlmi

openlmi-account

openlmi-account-doc

openlmi-fan

openlmi-fan-doc

openlmi-hardware

openlmi-hardware-doc

openlmi-indicationmanager-
libs

openlmi-indicationmanager-
libs-devel

openlmi-journald

openlmi-journald-doc

openlmi-logicalfile

openlmi-logicalfile-doc

openlmi-networking

openlmi-networking-doc

openlmi-pcp

openlmi-powermanagement

openlmi-powermanagement-
doc

openlmi-providers

openlmi-providers-devel

openlmi-python-base

openlmi-python-providers

openlmi-python-test

openlmi-realmd

openlmi-realmd-doc

openlmi-service

openlmi-service-doc

openlmi-software

openlmi-software-doc

openlmi-storage

openlmi-storage-doc

openlmi-tools

openlmi-tools-doc

openobex    Customers can use gnome-bluetooth for transferring files between a PC and mobile devices over Bluetooth, or gvfs-afc for reading files on mobile devices. Applications using the OBEX protocol need to be rewritten.

openobex-apps

openobex-devel

openscap-containers

openscap-engine-sce-devel

openslp-devel

openslp-server

opensm-static

openssh-server-sysvinit

openssl-static

openssl098e

openvswitch

openvswitch-controller

openvswitch-test

openwsman-perl

openwsman-ruby

oprofile-devel

oprofile-gui

oprofile-jit

optipng

ORBit2

ORBit2-devel

orc-doc

ortp

ortp-devel

oscilloscope

oxygen-cursor-themes

oxygen-gtk

oxygen-gtk2

oxygen-gtk3

oxygen-icon-theme

PackageKit-yum-plugin

pakchois-devel

pam_snapper

pango-tests

paps-devel

passivetex

pax

pciutils-devel-static

pcp-collector

pcp-monitor

pcre-tools

pcre2-static

pcre2-tools

pentaho-libxml-javadoc

pentaho-reporting-flow-
engine-javadoc

perl-AppConfig

perl-Archive-Extract

perl-B-Keywords

perl-Browser-Open

perl-Business-ISBN

perl-Business-ISBN-Data

perl-CGI-Session

perl-Class-Load

perl-Class-Load-XS

perl-Config-Simple

perl-Config-Tiny

perl-Convert-ASN1

perl-CPAN-Changes

perl-CPANPLUS

perl-CPANPLUS-Dist-Build

perl-Crypt-CBC

perl-Crypt-DES

perl-Crypt-PasswdMD5

perl-Crypt-SSLeay

perl-CSS-Tiny

perl-Data-Peek

perl-DateTime-Format-
DateParse

perl-DBD-Pg-tests

perl-DBIx-Simple

perl-Devel-Cover

perl-Devel-Cycle

perl-Devel-
EnforceEncapsulation

perl-Devel-Leak

perl-Email-Address

perl-FCGI

perl-File-Find-Rule-Perl

perl-File-Inplace

perl-Font-AFM

perl-Font-TTF

perl-FreezeThaw

perl-GD

perl-GD-Barcode

perl-Hook-LexWrap

perl-HTML-Format

perl-HTML-FormatText-
WithLinks

perl-HTML-FormatText-
WithLinks-AndTables

perl-Image-Base

perl-Image-Info

perl-Image-Xbm

perl-Image-Xpm

perl-Inline

perl-Inline-Files

perl-IO-CaptureOutput

perl-JSON-tests

perl-LDAP

perl-libxml-perl

perl-Locale-Maketext-
Gettext

perl-Locale-PO

perl-Log-Message

perl-Log-Message-Simple

perl-Mixin-Linewise

perl-Module-Manifest

perl-Module-Signature

perl-Net-Daemon

perl-Net-DNS-Nameserver

perl-Net-DNS-Resolver-
Programmable

perl-Net-LibIDN

perl-Net-Telnet

perl-Newt

perl-Object-Accessor

perl-Object-Deadly

perl-Package-Constants

perl-PAR-Dist

perl-Parallel-Iterator

perl-Parse-CPAN-Meta

perl-Parse-RecDescent

perl-Perl-Critic

perl-Perl-Critic-More

perl-Perl-MinimumVersion

perl-Perl4-CoreLibs

perl-PlRPC

perl-Pod-Coverage-TrustPod

perl-Pod-Eventual

perl-Pod-POM

perl-Pod-Spell

perl-PPI

perl-PPI-HTML

perl-PPIx-Regexp

perl-PPIx-Utilities

perl-Probe-Perl

perl-Readonly-XS

perl-Sort-Versions

perl-String-Format

perl-String-Similarity

perl-Syntax-Highlight-
Engine-Kate

perl-Task-Weaken

perl-Template-Toolkit

perl-Term-UI

perl-Test-ClassAPI

perl-Test-CPAN-Meta

perl-Test-DistManifest

perl-Test-EOL

perl-Test-HasVersion

perl-Test-Inter

perl-Test-Manifest

perl-Test-Memory-Cycle

perl-Test-MinimumVersion

perl-Test-MockObject

perl-Test-NoTabs

perl-Test-Object

perl-Test-Output

perl-Test-Perl-Critic

perl-Test-Perl-Critic-Policy

perl-Test-Portability-Files

perl-Test-Script

perl-Test-Spelling

perl-Test-SubCalls

perl-Test-Synopsis

perl-Test-Tester

perl-Test-Vars

perl-Test-Without-Module

perl-Text-CSV_XS

perl-Text-Iconv

perl-Tree-DAG_Node

perl-Unicode-Map8

perl-Unicode-String

perl-UNIVERSAL-can

perl-UNIVERSAL-isa

perl-Version-Requirements

perl-WWW-Curl


perl-XML-Dumper

perl-XML-Filter-BufferText

perl-XML-Grove

perl-XML-Handler-YAWriter

perl-XML-LibXSLT

perl-XML-SAX-Writer

perl-XML-TreeBuilder

perl-XML-Writer

perl-XML-XPathEngine

phonon

phonon-backend-gstreamer

phonon-devel

php-pecl-memcache

php-pspell

pidgin-perl

pinentry-qt

pinentry-qt4

pki-javadoc

plasma-scriptengine-python

plasma-scriptengine-ruby

plexus-digest

plexus-digest-javadoc

plexus-mail-sender


plexus-mail-sender-javadoc

plexus-tools-pom

plymouth-devel

pm-utils

pm-utils-devel

pngcrush

pngnq

polkit-kde

polkit-qt

polkit-qt-devel

polkit-qt-doc

poppler-demos

poppler-qt

poppler-qt-devel

popt-static

postfix-sysvinit

pothana2000-fonts

powerpc-utils-python

pprof

pps-tools

pptp-setup

procps-ng-devel

protobuf-emacs


protobuf-emacs-el

protobuf-java

protobuf-javadoc

protobuf-lite-devel

protobuf-lite-static

protobuf-python

protobuf-static

protobuf-vim

psutils

psutils-perl

pth-devel

ptlib

ptlib-devel

publican

publican-common-db5-web

publican-common-web

publican-doc

publican-redhat

pulseaudio-esound-compat

pulseaudio-module-gconf

pulseaudio-module-zeroconf

pulseaudio-qpaeq

pygpgme


pygtk2-libglade

pykde4

pykde4-akonadi

pykde4-devel

pyldb-devel

pyliblzma

PyOpenGL

PyOpenGL-Tk

pyOpenSSL-doc

pyorbit

pyorbit-devel

PyPAM

pyparsing-doc

PyQt4

PyQt4-devel

pytalloc-devel

python-adal

python-appindicator

python-beaker

python-cffi-doc

python-cherrypy

python-criu

python-debug


python-deltarpm

python-di

python-dtopt

python-fpconst

python-gpod

python-gudev

python-inotify-examples

python-ipaddr

python-IPy

python-isodate

python-isomd5sum

python-kitchen

python-kitchen-doc

python-libteam

python-lxml-docs

python-matplotlib

python-matplotlib-doc

python-matplotlib-qt4

python-matplotlib-tk

python-memcached

python-msrest

python-msrestazure

python-mutagen


python-openvswitch

python-paramiko

python-paramiko-doc

python-paste

python-pillow-devel

python-pillow-doc

python-pillow-qt

python-pillow-sane

python-pillow-tk

python-pyblock

python-rados

python-rbd

python-reportlab-docs

python-rtslib-doc

python-setproctitle

python-slip-gtk

python-smbc

python-smbc-doc

python-smbios

python-sphinx-doc

python-sphinx-theme-
openlmi

python-tempita


python-tornado

python-tornado-doc

python-twisted-core

python-twisted-core-doc

python-twisted-web

python-twisted-words

python-urlgrabber

python-volume_key

python-webob

python-webtest

python-which

python-zope-interface

python2-caribou

python2-futures

python2-gexiv2

python2-smartcols

python2-solv

python2-subprocess32

qca-ossl

qca2

qca2-devel

qimageblitz

qimageblitz-devel


qimageblitz-examples

qjson

qjson-devel

qpdf-devel

qt

qt-assistant

qt-config

qt-demos

qt-devel

qt-devel-private

qt-doc

qt-examples

qt-mysql

qt-odbc

qt-postgresql

qt-qdbusviewer

qt-qvfb

qt-settings

qt-x11

qt3

qt3-config

qt3-designer

qt3-devel


qt3-devel-docs

qt3-MySQL

qt3-ODBC

qt3-PostgreSQL

qt5-qt3d-doc

qt5-qtbase-doc

qt5-qtcanvas3d-doc

qt5-qtconnectivity-doc

qt5-qtdeclarative-doc

qt5-qtenginio

qt5-qtenginio-devel

qt5-qtenginio-doc

qt5-qtenginio-examples

qt5-qtgraphicaleffects-doc

qt5-qtimageformats-doc

qt5-qtlocation-doc

qt5-qtmultimedia-doc

qt5-qtquickcontrols-doc

qt5-qtquickcontrols2-doc

qt5-qtscript-doc

qt5-qtsensors-doc

qt5-qtserialbus-devel

qt5-qtserialbus-doc


qt5-qtserialport-doc

qt5-qtsvg-doc

qt5-qttools-doc

qt5-qtwayland-doc

qt5-qtwebchannel-doc

qt5-qtwebsockets-doc

qt5-qtx11extras-doc

qt5-qtxmlpatterns-doc

quagga Since RHEL 8.1, Quagga has been replaced by Free Range Routing
(FRRouting, or FRR), a new routing protocol stack, provided by the frr
package in the AppStream repository. FRR provides TCP/IP-based routing
services with support for multiple IPv4 and IPv6 routing protocols, such as
BGP, IS-IS, OSPF, PIM, and RIP. For more information, see Setting the
routing protocols for your system.

quagga-contrib

quota-devel

qv4l2

rarian-devel

ras-utils

rcs Version control systems supported in RHEL 8 are Git, Mercurial, and
Subversion.

rdate

rdist

readline-static

realmd-devel-docs

Red_Hat_Enterprise_Linux-
Release_Notes-7-as-IN


Red_Hat_Enterprise_Linux-
Release_Notes-7-bn-IN

Red_Hat_Enterprise_Linux-
Release_Notes-7-de-DE

Red_Hat_Enterprise_Linux-
Release_Notes-7-en-US

Red_Hat_Enterprise_Linux-
Release_Notes-7-es-ES

Red_Hat_Enterprise_Linux-
Release_Notes-7-fr-FR

Red_Hat_Enterprise_Linux-
Release_Notes-7-gu-IN

Red_Hat_Enterprise_Linux-
Release_Notes-7-hi-IN

Red_Hat_Enterprise_Linux-
Release_Notes-7-it-IT

Red_Hat_Enterprise_Linux-
Release_Notes-7-ja-JP

Red_Hat_Enterprise_Linux-
Release_Notes-7-kn-IN

Red_Hat_Enterprise_Linux-
Release_Notes-7-ko-KR

Red_Hat_Enterprise_Linux-
Release_Notes-7-ml-IN

Red_Hat_Enterprise_Linux-
Release_Notes-7-mr-IN

Red_Hat_Enterprise_Linux-
Release_Notes-7-or-IN

Red_Hat_Enterprise_Linux-
Release_Notes-7-pa-IN


Red_Hat_Enterprise_Linux-
Release_Notes-7-pt-BR

Red_Hat_Enterprise_Linux-
Release_Notes-7-ru-RU

Red_Hat_Enterprise_Linux-
Release_Notes-7-ta-IN

Red_Hat_Enterprise_Linux-
Release_Notes-7-te-IN

Red_Hat_Enterprise_Linux-
Release_Notes-7-zh-CN

Red_Hat_Enterprise_Linux-
Release_Notes-7-zh-TW

redhat-access-gui

redhat-access-plugin-ipa

redhat-bookmarks

redhat-lsb-supplemental

redhat-lsb-trialuse

redhat-upgrade-dracut

redhat-upgrade-dracut-
plymouth

redhat-upgrade-tool

redland-mysql

redland-pgsql

redland-virtuoso

relaxngcc

rest-devel


resteasy-base-jettison-
provider

resteasy-base-tjws

rfkill

rhdb-utils

rhino

rhino-demo

rhino-javadoc

rhino-manual

rhythmbox-devel

rngom

rngom-javadoc

rp-pppoe

rrdtool-php

rrdtool-python

rsh To log in to remote systems, use SSH instead.

rsh-server

rsyslog-libdbi

rsyslog-udpspoof

rtcheck

rtctl

ruby-tcltk

rubygem-net-http-persistent


rubygem-net-http-persistent-
doc

rubygem-thor

rubygem-thor-doc

rusers

rusers-server

rwho

sac-javadoc

samba-dc

samba-dc-libs

samba-devel

sanlock-python

satyr-devel

satyr-python

saxon

saxon-demo

saxon-javadoc

saxon-manual

saxon-scripts

sbc-devel

sblim-cim-client2

sblim-cim-client2-javadoc

sblim-cim-client2-manual


sblim-cmpi-base

sblim-cmpi-base-devel

sblim-cmpi-base-test

sblim-cmpi-fsvol

sblim-cmpi-fsvol-devel

sblim-cmpi-fsvol-test

sblim-cmpi-network

sblim-cmpi-network-devel

sblim-cmpi-network-test

sblim-cmpi-nfsv3

sblim-cmpi-nfsv3-test

sblim-cmpi-nfsv4

sblim-cmpi-nfsv4-test

sblim-cmpi-params

sblim-cmpi-params-test

sblim-cmpi-sysfs

sblim-cmpi-sysfs-test

sblim-cmpi-syslog

sblim-cmpi-syslog-test

sblim-gather

sblim-gather-devel

sblim-gather-provider

sblim-gather-test


sblim-indication_helper

sblim-indication_helper-devel

sblim-smis-hba

sblim-testsuite

sblim-wbemcli

scannotation

scannotation-javadoc

scpio

screen

SDL-static

sdparm

seahorse-nautilus

seahorse-sharing

sendmail-sysvinit

setools-devel

setools-gui

setools-libs-tcl

setuptool

shared-desktop-ontologies

shared-desktop-ontologies-
devel

shim-unsigned-ia32

shim-unsigned-x64


sisu

sisu-parent

slang-slsh

slang-static

smbios-utils

smbios-utils-bin

smbios-utils-python

snakeyaml

snakeyaml-javadoc

snapper

snapper-devel

snapper-libs

sntp

SOAPpy

soprano

soprano-apidocs

soprano-devel

source-highlight-devel

sox

sox-devel

speex-tools

spice-streaming-agent

spice-streaming-agent-devel


spice-xpi

sqlite-tcl

squid-migration-script

squid-sysvinit

sssd-libwbclient-devel

sssd-polkit-rules The sssd-polkit-rules package is available in RHEL 8 since RHEL 8.1.

stax2-api

stax2-api-javadoc

strigi

strigi-devel

strigi-libs

strongimcv

subversion-kde

subversion-python

subversion-ruby

sudo-devel

suitesparse-doc

suitesparse-static

supermin-helper

svgpart

svrcore

svrcore-devel

sweeper


syslinux-devel

syslinux-perl

system-config-date

system-config-date-docs

system-config-firewall

system-config-firewall-base

system-config-firewall-tui

system-config-keyboard

system-config-keyboard-base

system-config-kickstart

system-config-language

system-config-printer The system-config-printer package has been removed. Its workstation
functionality has been included in gnome-control-center, and its server
use cases are covered by the CUPS Web UI.

system-config-users-docs

system-switch-java

systemd-sysv

t1lib

t1lib-apps

t1lib-devel

t1lib-static

t1utils

taglib-doc

talk


talk-server

tang-nagios

targetd

tcl-pgtcl

tclx

tclx-devel

tcp_wrappers See Replacing TCP Wrappers in RHEL 8.

tcp_wrappers-devel

tcp_wrappers-libs

teamd-devel

teckit-devel

telepathy-farstream

telepathy-farstream-devel

telepathy-filesystem

telepathy-gabble

telepathy-glib

telepathy-glib-devel

telepathy-glib-vala

telepathy-haze

telepathy-logger

telepathy-logger-devel

telepathy-mission-control


telepathy-mission-control-
devel

telepathy-salut

tex-preview

texlive-collection-
documentation-base

texlive-mh

texlive-mh-doc

texlive-misc

texlive-thailatex

texlive-thailatex-doc

tix-doc

tn5250

tn5250-devel

tncfhh

tncfhh-devel

tncfhh-examples

tncfhh-libs

tncfhh-utils

tog-pegasus-test

tokyocabinet-devel-doc

tomcat The Apache Tomcat server has been removed from RHEL. Apache Tomcat
is a servlet container for the Java Servlet and JavaServer Pages (JSP)
technologies. Red Hat recommends that users requiring a servlet container
use the JBoss Web Server.


tomcat-admin-webapps

tomcat-docs-webapp

tomcat-el-2.2-api

tomcat-javadoc

tomcat-jsp-2.2-api

tomcat-jsvc

tomcat-lib

tomcat-servlet-3.0-api

tomcat-webapps

totem-devel

totem-pl-parser-devel

tracker-devel

tracker-docs

tracker-needle

tracker-preferences

trang

trousers-static

txw2

txw2-javadoc

udftools

unique3

unique3-devel

unique3-docs


unoconv

uriparser

uriparser-devel

usbguard-devel

usbredir-server

usnic-tools

ustr-debug

ustr-debug-static

ustr-devel

ustr-static

uuid-c++

uuid-c++-devel

uuid-dce

uuid-dce-devel

uuid-perl

uuid-php

v4l-utils

v4l-utils-devel-tools

vala-doc

valadoc

valadoc-devel

valgrind-openmpi

vemana2000-fonts


vigra

vigra-devel

virtuoso-opensource

virtuoso-opensource-utils

vlgothic-p-fonts

vsftpd-sysvinit

vte3

vte3-devel

wayland-doc

webkitgtk3

webkitgtk3-devel

webkitgtk3-doc

webkitgtk4-doc

webrtc-audio-processing-
devel

whois

woodstox-core

woodstox-core-javadoc

wordnet

wordnet-browser

wordnet-devel

wordnet-doc

ws-commons-util


ws-commons-util-javadoc

ws-jaxme

ws-jaxme-javadoc

ws-jaxme-manual

wsdl4j

wsdl4j-javadoc

wvdial

x86info

xchat-tcl

xdg-desktop-portal-devel

xerces-c

xerces-c-devel

xerces-c-doc

xferstats

xguest

xhtml2fo-style-xsl

xhtml2ps

xisdnload

xml-commons-apis12

xml-commons-apis12-javadoc

xml-commons-apis12-manual

xmlgraphics-commons


xmlgraphics-commons-
javadoc

xmlrpc-c-apps

xmlrpc-client

xmlrpc-common

xmlrpc-javadoc

xmlrpc-server

xmlsec1-gcrypt-devel

xmlsec1-nss-devel

xmlto-tex

xmlto-xhtml

xorg-x11-drv-intel-devel

xorg-x11-drv-keyboard

xorg-x11-drv-mouse

xorg-x11-drv-mouse-devel

xorg-x11-drv-openchrome

xorg-x11-drv-openchrome-
devel

xorg-x11-drv-synaptics

xorg-x11-drv-synaptics-devel

xorg-x11-drv-vmmouse

xorg-x11-drv-void

xorg-x11-server-source


xorg-x11-xkb-extras

xpp3

xpp3-javadoc

xpp3-minimal

xsettings-kde

xstream

xstream-javadoc

xulrunner

xulrunner-devel

xvattr

xz-compat-libs

yelp-xsl-devel

yum-langpacks Localization is now an integral part of DNF.

yum-NetworkManager-
dispatcher

yum-plugin-filter-data

yum-plugin-fs-snapshot

yum-plugin-keys

yum-plugin-list-data

yum-plugin-local

yum-plugin-merge-conf

yum-plugin-ovl

yum-plugin-post-transaction-
actions


yum-plugin-pre-transaction-
actions

yum-plugin-protectbase

yum-plugin-ps

yum-plugin-rpm-warm-cache

yum-plugin-show-leaves

yum-plugin-upgrade-helper

yum-plugin-verify

yum-updateonboot

A.5. PACKAGES WITH REMOVED SUPPORT


Certain packages in RHEL 8 are distributed through the CodeReady Linux Builder repository, which
contains unsupported packages for use by developers. For a complete list of packages in this repository,
see the Package manifest. This section covers only packages that are supported in RHEL 7 but not in
RHEL 8.
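
For reference, the CodeReady Linux Builder repository can be enabled with subscription-manager. The repository ID shown below is the commonly used one for RHEL 8 on x86_64; verify the exact ID for your architecture with the repos listing first:

# subscription-manager repos --list | grep codeready
# subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms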

The following packages are distributed in a supported channel in RHEL 7, while in RHEL 8 they are part
of the CodeReady Linux Builder repository:

Package RHEL 7 channel

antlr-tool rhel7-base

bcel rhel7-base

cal10n rhel7-base

cdi-api-javadoc rhel7-base

codemodel rhel7-base

dejagnu rhel7-base

docbook-style-dsssl rhel7-base

docbook-utils rhel7-base


docbook5-schemas rhel7-base

flex-devel rhel7-base

geronimo-jms rhel7-base

gnome-common rhel7-base

hamcrest rhel7-base

imake rhel7-base

isorelax rhel7-base

jakarta-oro rhel7-base

javamail rhel7-base

jaxen rhel7-base

jdom rhel7-base

jna rhel7-base

junit rhel7-base

jvnet-parent rhel7-base

libdbusmenu-doc rhel7-base

libdbusmenu-gtk3-devel rhel7-base

libfdt rhel7-base

libgit2-devel rhel7-extras

libindicator-gtk3-devel rhel7-base

libmodulemd-devel rhel7-extras

libseccomp-devel rhel7-base

nasm rhel7-base

objectweb-asm rhel7-base


openjade rhel7-base

openldap-servers rhel7-base

opensp rhel7-base

perl-Class-Singleton rhel7-base

perl-DateTime rhel7-base

perl-DateTime-Locale rhel7-base

perl-DateTime-TimeZone rhel7-base

perl-Devel-Symdump rhel7-base

perl-Digest-SHA1 rhel7-base

perl-HTML-Tree rhel7-base

perl-HTTP-Daemon rhel7-base

perl-IO-stringy rhel7-base

perl-List-MoreUtils rhel7-base

perl-Module-Implementation rhel7-base

perl-Package-DeprecationManager rhel7-base

perl-Package-Stash rhel7-base

perl-Package-Stash-XS rhel7-base

perl-Params-Validate rhel7-base

perl-Pod-Coverage rhel7-base

perl-SGMLSpm rhel7-base

perl-Test-Pod rhel7-base

perl-Test-Pod-Coverage rhel7-base

perl-XML-Twig rhel7-base


perl-YAML-Tiny rhel7-base

perltidy rhel7-base

qdox rhel7-base

regexp rhel7-base

texinfo rhel7-base

ustr rhel7-base

weld-parent rhel7-base

xmltoman rhel7-base

xorg-x11-apps rhel7-base

The following packages have been moved to the CodeReady Linux Builder repository within RHEL 8:

Package Original RHEL 8 repository Changed since

apache-commons-collections-javadoc rhel8-AppStream RHEL 8.1

apache-commons-collections-testframework rhel8-AppStream RHEL 8.1

apache-commons-lang-javadoc rhel8-AppStream RHEL 8.1

jakarta-commons-httpclient-demo rhel8-AppStream RHEL 8.1

jakarta-commons-httpclient-javadoc rhel8-AppStream RHEL 8.1

jakarta-commons-httpclient-manual rhel8-AppStream RHEL 8.1

velocity-demo rhel8-AppStream RHEL 8.1

velocity-javadoc rhel8-AppStream RHEL 8.1

velocity-manual rhel8-AppStream RHEL 8.1

xerces-j2-demo rhel8-AppStream RHEL 8.1

xerces-j2-javadoc rhel8-AppStream RHEL 8.1


xml-commons-apis-javadoc rhel8-AppStream RHEL 8.1

xml-commons-apis-manual rhel8-AppStream RHEL 8.1

xml-commons-resolver-javadoc rhel8-AppStream RHEL 8.1

Samba download, install and configuration

• Samba is a Linux tool or utility that allows sharing of Linux resources such as files
and printers with other operating systems
• It works much like NFS, but the difference is that NFS shares within Linux or UNIX-like
systems, whereas Samba shares with other operating systems (e.g. Windows, macOS, etc.)

For example, if computer “A” shares its filesystem with computer “B” using Samba,
then computer “B” will see that shared filesystem as if it were mounted as a local
filesystem

• Samba shares its filesystem through a protocol called SMB (Server Message Block),
which was invented by IBM
• Another protocol used by Samba is CIFS (Common Internet File
System), invented by Microsoft, along with NMB (NetBIOS Name Server) for name resolution

• CIFS became an extension of SMB, and Microsoft has since introduced newer versions,
SMB v2 and v3, which are now the most widely used in the industry

• Most people, when they use either SMB or CIFS, are talking about the same exact
thing. The two are interchangeable not only in discussion, but also in application –
i.e., a client speaking CIFS can talk to a server speaking SMB and vice versa. Why?
Because CIFS is a form of SMB

Step-by-step installation instructions


First please make sure to take a snapshot of your VM

• Install samba packages


# Become root user
# yum install samba samba-client samba-common

• Allow Samba through the firewall (only if you have a firewall running)
# firewall-cmd --permanent --zone=public --add-service=samba
# firewall-cmd --reload

• To stop and disable firewall or iptables


# systemctl stop firewalld
# systemctl stop iptables
# systemctl disable firewalld
# systemctl disable iptables
• Create Samba share directory and assign permissions
# mkdir -p /samba/morepretzels
# chmod a+rwx /samba/morepretzels
# chown -R nobody:nobody /samba

• Also, you need to change the SELinux security context for the samba shared
directory as follows: (Only if you have SELinux enabled)
# chcon -t samba_share_t /samba/morepretzels

• If you want to disable SELinux, follow these instructions


# sestatus (to check the SELinux status)
# vi /etc/selinux/config
Change
SELINUX=enforcing
To
SELINUX=disabled
# reboot
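
If you only need SELinux out of the way temporarily, a lighter alternative is to switch it to permissive mode without a reboot; this is a general SELinux command, not specific to Samba:
# setenforce 0
# getenforce (should now report Permissive)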

• Modify the /etc/samba/smb.conf file to add the new shared filesystem (make sure to
create a backup copy of the smb.conf file first)
Delete everything from the smb.conf file and add the following parameters

[global]
workgroup = WORKGROUP
netbios name = centos
security = user
map to guest = bad user
dns proxy = no

[Anonymous]
path = /samba/morepretzels
browsable = yes
writable = yes
guest ok = yes
guest only = yes
read only = no

• Verify the setting


# testparm

• Once the packages are installed, enable and start Samba services
# systemctl enable smb
# systemctl enable nmb
# systemctl start smb
# systemctl start nmb
• Mount on Windows client
o Go to start
o Go to search bar
o Type \\192.168.1.95 (This is my server IP, you can check your Linux
CentOS IP by running the command ifconfig)

• Mount on Linux client


Become root
# yum -y install cifs-utils samba-client
Create a mount point directory
# mkdir /mnt/sambashare
Mount the samba share
# mount -t cifs //192.168.1.95/Anonymous /mnt/sambashare/
# Press Enter when prompted for a password (the Anonymous share allows guest access)
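
To confirm the share is visible and mounted (optional check; this assumes the same server IP used above):
# smbclient -L //192.168.1.95 -N (list the shares anonymously)
# df -h /mnt/sambashare (confirm the CIFS mount)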

Secure Samba Server

• Create a group smbgrp & user larry to access the samba server with proper
authentication

# useradd larry
# groupadd smbgrp
# usermod -a -G smbgrp larry
# smbpasswd -a larry
New SMB password: YOUR SAMBA PASS
Retype new SMB password: REPEAT YOUR SAMBA PASS
Added user larry

• Create a new share, set the permission on the share:

# mkdir /samba/securepretzels
# chown -R larry:smbgrp /samba/securepretzels
# chmod -R 0770 /samba/securepretzels
# chcon -t samba_share_t /samba/securepretzels

• Edit the configuration file /etc/samba/smb.conf (Create a backup copy first)

# vi /etc/samba/smb.conf
Add the following lines
[Secure]
path = /samba/securepretzels
valid users = @smbgrp
guest ok = no
writable = yes
browsable = yes

• Restart the services


# systemctl restart smb
# systemctl restart nmb
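
To mount the secured share from a Linux client, authenticate as the Samba user created above (the mount point name below is only an example):
# mkdir /mnt/securepretzels
# mount -t cifs -o username=larry //192.168.1.95/Secure /mnt/securepretzels
Password for larry: YOUR SAMBA PASS
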
Red Hat Enterprise Linux 8

Configuring basic system settings

A guide to configuring basic system settings in Red Hat Enterprise Linux 8

Last Updated: 2020-08-11


Legal Notice
Copyright © 2020 Red Hat, Inc.

The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.

Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.

Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.

Java ® is a registered trademark of Oracle and/or its affiliates.

XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.

MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.

Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.

The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.

Abstract
This document describes basics of system administration on Red Hat Enterprise Linux 8. The title
focuses on: basic tasks that a system administrator needs to do just after the operating system has
been successfully installed, installing software with yum, using systemd for service management,
managing users, groups and file permissions, using chrony to configure NTP, working with Python 3
and others.
Table of Contents

CHAPTER 1. GETTING STARTED WITH SYSTEM ADMINISTRATION 9
1.1. PERFORMING BASIC SYSTEM ADMINISTRATION TASKS IN THE WEB CONSOLE 9
1.1.1. What the RHEL 8 web console is and which tasks it can be used for 9
1.1.2. Restarting the system using the web console 10
1.1.3. Shutting down the system using the web console 11
1.1.4. Configuring the host name in the web console 12
1.1.4.1. Host name 12
1.1.4.2. Pretty host name in the web console 12
1.1.4.3. Setting the host name using the web console 12
1.1.5. Joining a RHEL 8 system to an IdM domain using the web console 14
1.1.6. Configuring time settings using the web console 16
1.1.7. Optimizing the system performance using the web console 17
1.1.7.1. Performance tuning options in the web console 18
1.1.7.2. Setting a performance profile in the web console 18
1.1.8. Disabling SMT to prevent CPU security issues using the web console 19
1.2. GETTING STARTED WITH RHEL SYSTEM ROLES 21
1.2.1. Introduction to RHEL System Roles 21
1.2.2. RHEL System Roles terminology 21
1.2.3. Applying a role 22
1.2.4. Additional resources 24
1.3. CHANGING BASIC ENVIRONMENT SETTINGS 24
1.3.1. Configuring the date and time 24
1.3.1.1. Displaying the current date and time 24
1.3.1.2. Additional resources 25
1.3.2. Configuring the system locale 25
1.3.3. Configuring the keyboard layout 25
1.3.4. Changing the language using desktop GUI 26
1.3.5. Additional resources 29
1.4. CONFIGURING AND MANAGING NETWORK ACCESS 29
1.4.1. Configuring the network and host name in the graphical installation mode 29
1.4.2. Configuring a static Ethernet connection using nmcli 30
1.4.3. Adding a connection profile using nmtui 33
1.4.4. Managing networking in the RHEL 8 web console 35
1.4.5. Managing networking using RHEL System Roles 36
1.4.6. Additional resources 37
1.5. REGISTERING THE SYSTEM AND MANAGING SUBSCRIPTIONS 37
1.5.1. Registering the system after the installation 37
1.5.2. Registering subscriptions with credentials in the web console 38
1.5.3. Registering a system using Red Hat account on GNOME 41
1.5.4. Registering a system using an activation key on GNOME 42
1.6. MAKING SYSTEMD SERVICES START AT BOOT TIME 42
1.6.1. Enabling or disabling the services using the CLI 42
1.6.2. Managing services in the RHEL 8 web console 43
1.7. CONFIGURING SYSTEM SECURITY 45
1.7.1. Enhancing system security with a firewall 45
1.7.1.1. Enabling the firewalld service 45
1.7.1.2. Managing firewall in the RHEL 8 web console 46
1.7.1.3. Additional resources 46
1.7.2. Managing basic SELinux settings 46
1.7.2.1. SELinux states and modes 46
1.7.2.2. Ensuring the required state of SELinux 47


1.7.2.3. Switching SELinux modes in the RHEL 8 web console 48


1.7.2.4. Next steps 48
1.7.3. Next steps 49
1.8. GETTING STARTED WITH MANAGING USER ACCOUNTS 49
1.8.1. Overview of user accounts and groups 49
1.8.2. Managing accounts and groups using command-line tools 50
1.8.3. System user accounts managed in the web console 50
1.8.4. Adding new accounts using the web console 50
1.8.5. Next steps 52
1.9. DUMPING A CRASHED KERNEL FOR LATER ANALYSIS 52
1.9.1. What is kdump 52
1.9.2. Configuring kdump memory usage and target location in web console 52
1.9.3. Configuring kdump using RHEL System Roles 54
1.9.4. Additional resources 55
1.10. RECOVERING AND RESTORING A SYSTEM 55
1.10.1. Setting up ReaR 55
1.11. TROUBLESHOOTING PROBLEMS USING LOG FILES 56
1.11.1. Services handling syslog messages 56
1.11.2. Subdirectories storing syslog messages 57
1.11.3. Inspecting log files using the web console 57
1.11.4. Viewing logs using the command line 58
1.11.5. Additional resources 59
1.12. ACCESSING THE RED HAT SUPPORT 59
1.12.1. Obtaining Red Hat Support through Red Hat Customer Portal 59
1.12.2. Troubleshooting problems using sosreport 60

CHAPTER 2. MANAGING SOFTWARE PACKAGES 62
2.1. SOFTWARE MANAGEMENT TOOLS IN RED HAT ENTERPRISE LINUX 8 62
2.2. APPLICATION STREAMS 62
2.3. SEARCHING FOR SOFTWARE PACKAGES 62
2.3.1. Searching packages with yum 63
2.3.2. Listing packages with yum 63
2.3.3. Listing repositories with yum 63
2.3.4. Displaying package information with yum 64
2.3.5. Listing package groups with yum 64
2.3.6. Specifying global expressions in yum input 64
2.4. INSTALLING SOFTWARE PACKAGES 65
2.4.1. Installing packages with yum 65
2.4.2. Installing a package group with yum 66
2.4.3. Specifying a package name in yum input 66
2.5. UPDATING SOFTWARE PACKAGES 67
2.5.1. Checking for updates with yum 67
2.5.2. Updating a single package with yum 67
2.5.3. Updating a package group with yum 67
2.5.4. Updating all packages and their dependencies with yum 67
2.5.5. Updating security-related packages with yum 68
2.5.6. Automating software updates 68
2.5.6.1. Installing DNF Automatic 68
2.5.6.2. DNF Automatic configuration file 68
2.5.6.3. Enabling DNF Automatic 69
2.5.6.4. Overview of the systemd timer units included in the dnf-automatic package 71
2.6. UNINSTALLING SOFTWARE PACKAGES 72
2.6.1. Removing packages with yum 72


2.6.2. Removing a package group with yum 73


2.6.3. Specifying a package name in yum input 73
2.7. MANAGING SOFTWARE PACKAGE GROUPS 73
2.7.1. Listing package groups with yum 74
2.7.2. Installing a package group with yum 74
2.7.3. Removing a package group with yum 74
2.7.4. Specifying global expressions in yum input 75
2.8. HANDLING PACKAGE MANAGEMENT HISTORY 75
2.8.1. Listing transactions with yum 75
2.8.2. Reverting transactions with yum 76
2.8.3. Repeating transactions with yum 76
2.8.4. Specifying global expressions in yum input 76
2.9. MANAGING SOFTWARE REPOSITORIES 77
2.9.1. Setting yum repository options 77
2.9.2. Adding a yum repository 77
2.9.3. Enabling a yum repository 78
2.9.4. Disabling a yum repository 78
2.10. CONFIGURING YUM 78
2.10.1. Viewing the current yum configurations 79
2.10.2. Setting yum main options 79
2.10.3. Using yum plug-ins 79
2.10.3.1. Managing yum plug-ins 79
2.10.3.2. Enabling yum plug-ins 79
2.10.3.3. Disabling yum plug-ins 79

CHAPTER 3. MANAGING SERVICES WITH SYSTEMD 81
3.1. INTRODUCTION TO SYSTEMD 81
Overriding the default systemd configuration using system.conf 82
3.1.1. Main features 82
3.1.2. Compatibility changes 83
3.2. MANAGING SYSTEM SERVICES 84
Specifying service units 85
Behavior of systemctl in a chroot environment 86
3.2.1. Listing services 86
3.2.2. Displaying service status 87
3.2.3. Starting a service 89
3.2.4. Stopping a service 89
3.2.5. Restarting a service 90
3.2.6. Enabling a service 90
3.2.7. Disabling a service 91
3.2.8. Starting a conflicting service 92
3.3. WORKING WITH SYSTEMD TARGETS 92
3.3.1. Viewing the default target 93
3.3.2. Viewing the current target 93
3.3.3. Changing the default target 94
3.3.4. Changing the current target 95
3.3.5. Changing to rescue mode 95
3.3.6. Changing to emergency mode 96
3.4. SHUTTING DOWN, SUSPENDING, AND HIBERNATING THE SYSTEM 96
3.4.1. Shutting down the system 97
Using systemctl commands 97
Using the shutdown command 97
3.4.2. Restarting the system 98


3.4.3. Suspending the system 98


3.4.4. Hibernating the system 98
3.5. WORKING WITH SYSTEMD UNIT FILES 99
3.5.1. Understanding the unit file structure 99
3.5.2. Creating custom unit files 103
3.5.3. Converting SysV init scripts to unit files 107
Finding the service description 107
Finding service dependencies 108
Finding default targets of the service 108
Finding files used by the service 109
3.5.4. Modifying existing unit files 110
Extending the default unit configuration 111
Overriding the default unit configuration 112
Monitoring overriden units 113
3.5.5. Working with instantiated units 115
3.6. OPTIMIZING SYSTEMD TO SHORTEN THE BOOT TIME 116
3.6.1. Examining system boot performance 116
Analyzing overall boot time 117
Analyzing unit initialization time 117
Identifying critical units 117
3.6.2. A guide to selecting services that can be safely disabled 117
3.7. ADDITIONAL RESOURCES 121
3.7.1. Installed Documentation 121
3.7.2. Online Documentation 121

CHAPTER 4. MANAGING USER AND GROUP ACCOUNTS 122
4.1. INTRODUCTION TO USERS AND GROUPS 122
4.2. CONFIGURING RESERVED USER AND GROUP IDS 122
4.3. USER PRIVATE GROUPS 123
4.4. MANAGING USER ACCOUNTS WITH THE WEB CONSOLE 123
4.4.1. Getting started using the RHEL web console 123
4.4.1.1. What is the RHEL web console 124
4.4.1.2. Installing the web console 125
4.4.1.3. Logging in to the web console 125
4.4.1.4. Connecting to the web console from a remote machine 126
4.4.1.5. Logging in to the web console using a one-time password 127
4.4.2. Managing user accounts in the web console 128
4.4.2.1. System user accounts managed in the web console 128
4.4.2.2. Adding new accounts using the web console 129
4.4.2.3. Enforcing password expiration in the web console 130
4.4.2.4. Terminating user sessions in the web console 131
4.5. MANAGING USERS FROM THE COMMAND LINE 132
4.5.1. Adding a new user from the command line 133
4.5.2. Adding a new group from the command line 133
4.5.3. Adding a user to a groups from the command line 134
4.5.4. Removing a user from a group from the command line 134
4.5.4.1. Overriding the primary group of the user 135
4.5.4.2. Overriding the supplementary groups of the user 135
4.5.5. Creating a group directory 136
4.6. MANAGING SUDO ACCESS 137
4.6.1. Granting sudo access to a user 137
4.7. CHANGING AND RESETTING THE ROOT PASSWORD 138
4.7.1. Changing the root password as the root user 138


4.7.2. Changing or resetting the forgotten root password as a non-root user 139
4.7.3. Resetting the forgotten root password on boot 139

CHAPTER 5. MANAGING FILE PERMISSIONS 141
5.1. INTRODUCTION TO FILE PERMISSIONS 141
5.1.1. Base permissions 141
5.1.2. User file-creation mode mask 143
5.1.3. Default permissions 144
5.2. DISPLAYING FILE PERMISSIONS 146
5.3. CHANGING FILE PERMISSIONS 146
5.3.1. Changing file permissions using symbolic values 146
5.3.2. Changing file permissions using octal values 148
5.4. DISPLAYING THE UMASK 148
5.4.1. Displaying the current octal value of the umask 148
5.4.2. Displaying the current symbolic value of the umask 148
5.4.3. Displaying the default bash umask 149
5.5. SETTING THE UMASK FOR THE CURRENT SHELL SESSION 150
5.5.1. Setting the umask using symbolic values 150
5.5.2. Setting the umask using octal values 151
5.6. CHANGING THE DEFAULT UMASK 151
5.6.1. Changing the default umask for the non-login shell 151
5.6.2. Changing the default umask for the login shell 152
5.6.3. Changing the default umask for a specific user 152
5.6.4. Setting default UMASK for newly created home directories 152
5.7. ACCESS CONTROL LIST 153
5.7.1. Displaying the current ACL 153
5.7.2. Setting the ACL 153

CHAPTER 6. USING THE CHRONY SUITE TO CONFIGURE NTP 155
6.1. INTRODUCTION TO CONFIGURING NTP WITH CHRONY 155
6.2. INTRODUCTION TO CHRONY SUITE 155
6.2.1. Using chronyc to control chronyd 155
6.3. DIFFERENCES BETWEEN CHRONY AND NTP 156
6.4. MIGRATING TO CHRONY 156
6.4.1. Migration script 157
6.4.2. Timesync role 158
6.5. CONFIGURING CHRONY 158
6.5.1. Configuring chrony for security 162
6.6. USING CHRONY 163
6.6.1. Installing chrony 163
6.6.2. Checking the status of chronyd 163
6.6.3. Starting chronyd 164
6.6.4. Stopping chronyd 164
6.6.5. Checking if chrony is synchronized 164
6.6.5.1. Checking chrony tracking 164
6.6.5.2. Checking chrony sources 166
6.6.5.3. Checking chrony source statistics 167
6.6.6. Manually Adjusting the System Clock 168
6.7. SETTING UP CHRONY FOR DIFFERENT ENVIRONMENTS 168
6.7.1. Setting up chrony for a system in an isolated network 168
6.8. CHRONY WITH HW TIMESTAMPING 169
6.8.1. Understanding hardware timestamping 169
6.8.2. Verifying support for hardware timestamping 169


6.8.3. Enabling hardware timestamping 170


6.8.4. Configuring client polling interval 170
6.8.5. Enabling interleaved mode 170
6.8.6. Configuring server for large number of clients 171
6.8.7. Verifying hardware timestamping 171
6.8.8. Configuring PTP-NTP bridge 172
6.9. ACHIEVING SOME SETTINGS PREVIOUSLY SUPPORTED BY NTP IN CHRONY 172
6.9.1. Monitoring by ntpq and ntpdc 172
6.9.2. Using authentication mechanism based on public key cryptography 173
6.9.3. Using ephemeral symmetric associations 173
6.9.4. multicast/broadcast client 174
6.10. ADDITIONAL RESOURCES 174
6.10.1. Installed Documentation 174
6.10.2. Online Documentation 175
6.11. MANAGING TIME SYNCHRONIZATION USING RHEL SYSTEM ROLES 175

CHAPTER 7. USING SECURE COMMUNICATIONS BETWEEN TWO SYSTEMS WITH OPENSSH 177
7.1. SSH AND OPENSSH 177
7.2. CONFIGURING AND STARTING AN OPENSSH SERVER 178
7.3. USING KEY PAIRS INSTEAD OF PASSWORDS FOR SSH AUTHENTICATION 179
7.3.1. Setting an OpenSSH server for key-based authentication 179
7.3.2. Generating SSH key pairs 180
7.4. USING SSH KEYS STORED ON A SMART CARD 181
7.5. MAKING OPENSSH MORE SECURE 183
7.6. CONNECTING TO A REMOTE SERVER USING AN SSH JUMP HOST 185
7.7. ADDITIONAL RESOURCES 186

CHAPTER 8. CONFIGURING A REMOTE LOGGING SOLUTION 188
8.1. THE RSYSLOG LOGGING SERVICE 188
8.2. INSTALLING RSYSLOG DOCUMENTATION 188
8.3. CONFIGURING REMOTE LOGGING OVER TCP 189
8.3.1. Configuring a server for remote logging over TCP 189
8.3.2. Configuring remote logging to a server over TCP 191
8.4. CONFIGURING REMOTE LOGGING OVER UDP 192
8.4.1. Configuring a server for receiving remote logging information over UDP 192
8.4.2. Configuring remote logging to a server over UDP 194
8.5. CONFIGURING RELIABLE REMOTE LOGGING 195
8.6. SUPPORTED RSYSLOG MODULES 197
8.7. ADDITIONAL RESOURCES 197

CHAPTER 9. USING PYTHON 198
9.1. INTRODUCTION TO PYTHON 198
9.1.1. Python versions 198
9.1.2. The internal platform-python package 199
9.2. INSTALLING AND USING PYTHON 199
9.2.1. Installing Python 3 199
9.2.1.1. Installing additional Python 3 packages for developers 200
9.2.2. Installing Python 2 201
9.2.3. Using Python 3 201
9.2.4. Using Python 2 202
9.2.5. Configuring the unversioned Python 202
9.2.5.1. Configuring the unversioned python command directly 202
9.2.5.2. Configuring the unversioned python command to the required Python version interactively 203
9.3. MIGRATION FROM PYTHON 2 TO PYTHON 3 203


9.4. PACKAGING OF PYTHON 3 RPMS 203


9.4.1. SPEC file description for a Python package 203
9.4.2. Common macros for Python 3 RPMs 205
9.4.3. Automatic provides for Python RPMs 206
9.4.4. Handling hashbangs in Python scripts 206
9.4.4.1. Modifying hashbangs in Python scripts 207
9.4.4.2. Changing /usr/bin/python3 hashbangs in their custom packages 207
9.4.5. Additional resources 207

CHAPTER 10. USING THE PHP SCRIPTING LANGUAGE 208
10.1. INSTALLING THE PHP SCRIPTING LANGUAGE 208
10.2. USING THE PHP SCRIPTING LANGUAGE WITH A WEB SERVER 209
10.2.1. Using PHP with the Apache HTTP Server 209
10.2.2. Using PHP with the nginx web server 210
10.3. RUNNING A PHP SCRIPT USING THE COMMAND-LINE INTERFACE 212
10.4. ADDITIONAL RESOURCES 213

CHAPTER 11. USING LANGPACKS 214
11.1. CHECKING LANGUAGES THAT PROVIDE LANGPACKS 214
11.2. WORKING WITH RPM WEAK DEPENDENCY-BASED LANGPACKS 214
11.2.1. Listing already installed language support 214
11.2.2. Checking the availability of language support 214
11.2.3. Listing packages installed for a language 215
11.2.4. Installing language support 215
11.2.5. Removing language support 215
11.3. SAVING DISK SPACE BY USING GLIBC-LANGPACK-<LOCALE_CODE> 215

CHAPTER 12. GETTING STARTED WITH TCL/TK 217
12.1. INTRODUCTION TO TCL/TK 217
12.2. NOTABLE CHANGES IN TCL/TK 8.6 217
12.3. MIGRATING TO TCL/TK 8.6 218
12.3.1. Migration path for developers of Tcl extensions 218
12.3.2. Migration path for users scripting their tasks with Tcl/Tk 218

CHAPTER 13. USING PREFIXDEVNAME FOR NAMING OF ETHERNET NETWORK INTERFACES 220
13.1. INTRODUCTION TO PREFIXDEVNAME 220
13.2. SETTING PREFIXDEVNAME 220
13.3. LIMITATIONS OF PREFIXDEVNAME 220


CHAPTER 1. GETTING STARTED WITH SYSTEM ADMINISTRATION

The following sections provide an overview of basic administration tasks on the installed system.

NOTE

The following basic administration tasks may include items that are usually already done
during the installation process, but they are not strictly required at that point, such as the
registration of the system. The sections dealing with such tasks provide a summary of
how you can achieve the same goals during the installation.

For information on Red Hat Enterprise Linux installation, see Performing a standard
RHEL installation.

Although you can perform all post-installation tasks through the command line, you can also use the
RHEL 8 web console to perform some of them.

1.1. PERFORMING BASIC SYSTEM ADMINISTRATION TASKS IN THE WEB CONSOLE

In this chapter, you will learn how to perform basic system administration tasks, such as restart,
shutdown, or basic configuration, using the web console.

1.1.1. What the RHEL 8 web console is and which tasks it can be used for
The RHEL 8 web console is an interactive server administration interface. It interacts directly with the
operating system from a real Linux session in a browser.

The web console enables you to perform these tasks:

Monitoring basic system features, such as hardware information, time configuration, performance profiles, connection to the realm domain

Inspecting system log files

Managing network interfaces and configuring firewall

Handling docker images

Managing virtual machines

Managing user accounts

Monitoring and configuring system services

Creating diagnostic reports

Setting kernel dump configuration

Managing packages

Configuring SELinux


Updating software

Managing system subscriptions

Accessing the terminal

For more information on installing and using the RHEL 8 web console, see Managing systems using the
RHEL 8 web console.
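
If the web console is not yet present, it is provided by the cockpit package. A minimal sketch of installing and enabling it from the command line (the service listens on port 9090 by default):

# yum install cockpit
# systemctl enable --now cockpit.socket

The console is then reachable in a browser at https://<host>:9090.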

1.1.2. Restarting the system using the web console


This procedure uses the web console to restart a RHEL system that the web console is attached to.

Prerequisites

The web console is installed and accessible.


For details, see Installing the web console .

Procedure

1. Log into the RHEL 8 web console.


For details, see Logging in to the web console .

2. Click Overview.

3. Click the Restart button.

4. If any users are logged into the system, write a reason for the restart in the Restart dialog box.

5. Optional: In the Delay drop down list, select a time interval.


6. Click Restart.

1.1.3. Shutting down the system using the web console


This procedure uses the web console to shut down a RHEL system that the web console is attached to.

Prerequisites

The web console is installed and accessible.


For details, see Installing the web console .

Procedure

1. Log into the RHEL 8 web console.


For details, see Logging in to the web console .

2. Click Overview.

3. In the Restart drop down list, select Shut Down.

4. If any users are logged in to the system, write a reason for the shutdown in the Shut Down
dialog box.

5. Optional: In the Delay drop down list, select a time interval.


6. Click Shut Down.

1.1.4. Configuring the host name in the web console


You can use the web console to configure different forms of the host name on the system that the web
console is attached to.

1.1.4.1. Host name

The host name identifies the system. By default, the host name is set to localhost, but you can change
it.

A host name consists of two parts:

Host name
It is a unique name which identifies a system.
Domain
Add the domain as a suffix behind the host name when using a system in a network and when using
names instead of just IP addresses.

A host name with an attached domain name is called a fully qualified domain name (FQDN). For
example: mymachine.example.com.

Host names are stored in the /etc/hostname file.
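
Although this section uses the web console, the same values can also be inspected and changed from the command line with hostnamectl; a brief sketch using the example host name from below:

# hostnamectl status
# hostnamectl set-hostname mymachine.example.com
# hostnamectl set-hostname "My Machine" --pretty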

1.1.4.2. Pretty host name in the web console

You can configure a pretty host name in the RHEL web console. The pretty host name is a host name
with capital letters, spaces, and so on.

The pretty host name displays in the web console, but it does not have to correspond with the host
name.

Example 1.1. Host name formats in the web console

Pretty host name
My Machine
Host name
mymachine
Real host name - fully qualified domain name (FQDN)
mymachine.idm.company.com

1.1.4.3. Setting the host name using the web console

This procedure sets the real host name or the pretty host name in the web console.

Prerequisites

The web console is installed and accessible.


For details, see Installing the web console .

Procedure

1. Log into the RHEL 8 web console.


For details, see Logging in to the web console .

2. Click Overview.

3. Click edit next to the current host name.

4. In the Change Host Name dialog box, enter the host name in the Pretty Host Name field.

5. The Real Host Name field attaches a domain name to the pretty name.
You can change the real host name manually if it does not correspond with the pretty host
name.

6. Click Change.


Verification steps

1. Log out from the web console.

2. Reopen the web console by entering an address with the new host name in the address bar of
your browser.

1.1.5. Joining a RHEL 8 system to an IdM domain using the web console
This procedure uses the web console to join the Red Hat Enterprise Linux 8 system to the Identity
Management (IdM) domain.

Prerequisites

The IdM domain is running and reachable from the client you want to join.

You have the IdM domain administrator credentials.

Procedure

1. Log into the RHEL web console.


For details, see Logging in to the web console .

2. Open the System tab.

3. Click Join Domain.


4. In the Join a Domain dialog box, enter the host name of the IdM server in the Domain Address
field.

5. In the Authentication drop down list, select if you want to use a password or a one-time
password for authentication.

6. In the Domain Administrator Name field, enter the user name of the IdM administration
account.

7. In the password field, add the password or one-time password according to what you selected in
the Authentication drop down list earlier.

8. Click Join.


Verification steps

1. If the RHEL 8 web console did not display an error, the system has been joined to the IdM
domain and you can see the domain name in the System screen.

2. To verify that the user is a member of the domain, click the Terminal page and type the id
command:

$ id
euid=548800004(example_user) gid=548800004(example_user)
groups=548800004(example_user) context=unconfined_u:unconfined_r:unconfined_t:s0-
s0:c0.c1023

Additional resources

Planning Identity Management

Installing Identity Management

Configuring and managing Identity Management

1.1.6. Configuring time settings using the web console


This procedure sets a time zone and synchronizes the system time with a Network Time Protocol (NTP)
server.

Prerequisites

The web console is installed and accessible.


For details, see Installing the web console .

Procedure

1. Log in to the RHEL 8 web console.


For details, see Logging in to the web console .


2. Click the current system time in Overview.

3. In the Change System Time dialog box, change the time zone if necessary.

4. In the Set Time drop down menu, select one of the following:

Manually
Use this option if you need to set the time manually, without an NTP server.
Automatically using NTP server
This is a default option, which synchronizes time automatically with the preset NTP servers.
Automatically using specific NTP servers
Use this option only if you need to synchronize the system with a specific NTP server.
Specify the DNS name or the IP address of the server.

5. Click Change.

Verification steps

Check the system time displayed in the System tab.
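
You can also cross-check the result from the command line; timedatectl reports the local time, the configured time zone, and whether the NTP service is active:

$ timedatectl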

Additional resources

Chapter 6, Using the Chrony suite to configure NTP .

1.1.7. Optimizing the system performance using the web console


In the web console, you can set a performance profile to optimize the performance of the system for a
selected task.

1.1.7.1. Performance tuning options in the web console

Red Hat Enterprise Linux 8 provides several performance profiles that optimize the system for the
following tasks:

Systems using the desktop

Throughput performance

Latency performance

Network performance

Low power consumption

Virtual machines

The tuned service optimizes system options to match the selected profile.

In the web console, you can set which performance profile your system uses.

Additional resources

For details about the tuned service, see Monitoring and managing system status and
performance.

1.1.7.2. Setting a performance profile in the web console

This procedure uses the web console to optimize the system performance for a selected task.

Prerequisites

The web console is installed and accessible.


For details, see Installing the web console .

Procedure

1. Log into the RHEL 8 web console.


For details, see Logging in to the web console .

2. Click Overview.

3. In the Performance Profile field, click the current performance profile.


4. In the Change Performance Profile dialog box, change the profile if necessary.

5. Click Change Profile.

Verification steps

The Overview tab now shows the selected performance profile.

1.1.8. Disabling SMT to prevent CPU security issues using the web console
This section describes how to disable Simultaneous Multi Threading (SMT) in case of attacks that
misuse CPU SMT. Disabling SMT can mitigate security vulnerabilities, such as L1TF or MDS.

IMPORTANT

Disabling SMT might lower the system performance.

Prerequisites


The web console must be installed and accessible.


For details, see Installing the web console .

Procedure

1. Log in to the RHEL 8 web console.


For details, see Logging in to the web console .

2. Click System.

3. In the Hardware item, click the hardware information.

4. In the CPU Security item, click Mitigations.


If this link is not present, it means that your system does not support SMT, and therefore is not
vulnerable.

5. In the CPU Security Toggles, switch on the Disable simultaneous multithreading (nosmt)
option.

6. Click on the Save and reboot button.

After the system restart, the CPU no longer uses SMT.


Additional resources
For more details on security attacks that you can prevent by disabling SMT, see:

L1TF - L1 Terminal Fault Attack - CVE-2018-3620 & CVE-2018-3646

MDS - Microarchitectural Data Sampling - CVE-2018-12130, CVE-2018-12126, CVE-2018-12127,


and CVE-2019-11091

1.2. GETTING STARTED WITH RHEL SYSTEM ROLES


This section explains what RHEL System Roles are. Additionally, it describes how to apply a particular
role through an Ansible playbook to perform various system administration tasks.

1.2.1. Introduction to RHEL System Roles


RHEL System Roles is a collection of Ansible roles and modules. RHEL System Roles provide a
configuration interface to remotely manage multiple RHEL systems. The interface enables managing
system configurations across multiple versions of RHEL, as well as adopting new major releases.

On Red Hat Enterprise Linux 8, the interface currently consists of the following roles:

kdump

network

selinux

storage

timesync

All these roles are provided by the rhel-system-roles package available in the AppStream repository.

Additional resources

For RHEL System Roles overview, see the Red Hat Enterprise Linux (RHEL) System Roles
Red Hat Knowledgebase article.

For information on a particular role, see the documentation under the /usr/share/doc/rhel-
system-roles directory. This documentation is installed automatically with the rhel-system-
roles package.

Introduction to the SELinux system role

Introduction to the storage role

1.2.2. RHEL System Roles terminology


You can find the following terms across this documentation:

System Roles terminology

Ansible playbook

Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a
policy you want your remote systems to enforce, or a set of steps in a general IT process.
Control node
Any machine with Ansible installed. You can run commands and playbooks, invoking /usr/bin/ansible
or /usr/bin/ansible-playbook, from any control node. You can use any computer that has Python
installed on it as a control node - laptops, shared desktops, and servers can all run Ansible. However,
you cannot use a Windows machine as a control node. You can have multiple control nodes.
Inventory
A list of managed nodes. An inventory file is also sometimes called a “hostfile”. Your inventory can
specify information like IP address for each managed node. An inventory can also organize managed
nodes, creating and nesting groups for easier scaling. To learn more about inventory, see the
Working with Inventory section.
Managed nodes
The network devices (and/or servers) you manage with Ansible. Managed nodes are also sometimes
called “hosts”. Ansible is not installed on managed nodes.

1.2.3. Applying a role


The following procedure describes how to apply a particular role.

Prerequisites

The rhel-system-roles package is installed on the system that you want to use as a control
node:

# yum install rhel-system-roles

The Ansible Engine repository is enabled, and the ansible package is installed on the system
that you want to use as a control node. You need the ansible package to run playbooks that use
RHEL System Roles.

If you do not have a Red Hat Ansible Engine Subscription, you can use a limited supported
version of Red Hat Ansible Engine provided with your Red Hat Enterprise Linux subscription.
In this case, follow these steps:

1. Enable the RHEL Ansible Engine repository:

# subscription-manager refresh
# subscription-manager repos --enable ansible-2-for-rhel-8-x86_64-rpms

2. Install Ansible Engine:

# yum install ansible

If you have a Red Hat Ansible Engine Subscription, follow the procedure described in How
do I Download and Install Red Hat Ansible Engine?.

You are able to create an Ansible playbook.


Playbooks represent Ansible’s configuration, deployment, and orchestration language. By using
playbooks, you can declare and manage configurations of remote machines, deploy multiple
remote machines or orchestrate steps of any manual ordered process.

A playbook is a list of one or more plays. Every play can include Ansible variables, tasks, or roles.


Playbooks are human-readable, and they are expressed in the YAML format.
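
As a minimal sketch, assuming a hypothetical webservers host group, a playbook with one play containing a variable and a task could look like this:

---
- hosts: webservers
  become: yes
  vars:
    package_to_install: chrony
  tasks:
    # Install the package named by the variable above
    - name: Ensure the required package is present
      yum:
        name: "{{ package_to_install }}"
        state: present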

For more information about playbooks, see Ansible documentation.

Procedure

1. Create an Ansible playbook including the required role.


The following example shows how to use roles through the roles: option for a given play:

---
- hosts: webservers
  roles:
    - rhel-system-roles.network
    - rhel-system-roles.timesync

For more information on using roles in playbooks, see Ansible documentation.

See Ansible examples for example playbooks.

NOTE

Every role includes a README file, which documents how to use the role and
supported parameter values. You can also find an example playbook for a
particular role under the documentation directory of the role. This documentation directory is
provided by default with the rhel-system-roles package and can be found in the following location:

/usr/share/doc/rhel-system-roles/SUBSYSTEM/

Replace SUBSYSTEM with the name of the required role, such as selinux,
kdump, network, timesync, or storage.

2. Execute the playbook on targeted hosts by running the ansible-playbook command:

# ansible-playbook -i name.of.the.inventory name.of.the.playbook

An inventory is a list of systems against which Ansible works. For more information on how to
create an inventory, and how to work with it, see Ansible documentation.

If you do not have an inventory, you can create it at the time of running ansible-playbook:

If you have only one targeted host against which you want to run the playbook, use:

# ansible-playbook -i host1, name.of.the.playbook

If you have multiple targeted hosts against which you want to run the playbook, use:

# ansible-playbook -i host1,host2,....,hostn name.of.the.playbook

Additional resources

For more detailed information on using the ansible-playbook command, see the ansible-
playbook man page.


1.2.4. Additional resources


For RHEL System Roles overview, see the Red Hat Enterprise Linux (RHEL) System Roles
Red Hat Knowledgebase article.

Managing local storage using RHEL System Roles

Deploying the same SELinux configuration on multiple systems using RHEL System Roles

1.3. CHANGING BASIC ENVIRONMENT SETTINGS


Configuration of basic environment settings is a part of the installation process. The following sections
guide you when you change them later. The basic configuration of the environment includes:

Date and time

System locales

Keyboard layout

Language

1.3.1. Configuring the date and time


Accurate timekeeping is important for a number of reasons. In Red Hat Enterprise Linux, timekeeping is
ensured by the NTP protocol, which is implemented by a daemon running in user space. The user-space
daemon updates the system clock running in the kernel. The system clock can keep time by using
various clock sources.

Red Hat Enterprise Linux 8 uses the chronyd daemon to implement NTP. chronyd is available from the
chrony package. For more information, see Using the chrony suite to configure NTP .
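
For example, assuming the chrony package is installed and the chronyd service is running, you can check the synchronization status from the command line:

$ chronyc tracking

$ systemctl status chronyd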

1.3.1.1. Displaying the current date and time

To display the current date and time, use either of these steps.

Procedure

1. Enter the date command:

$ date
Mon Mar 30 16:02:59 CEST 2020

2. To see more details, use the timedatectl command:

$ timedatectl
Local time: Mon 2020-03-30 16:04:42 CEST
Universal time: Mon 2020-03-30 14:04:42 UTC
RTC time: Mon 2020-03-30 14:04:41
Time zone: Europe/Prague (CEST, +0200)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no


Additional resources

For more information, see the date(1) and timedatectl(1) man pages.

1.3.1.2. Additional resources

For more information on time settings in the web console, see Using the web console for
configuring time settings.

1.3.2. Configuring the system locale


System-wide locale settings are stored in the /etc/locale.conf file, which is read at early boot by the
systemd daemon. Every service or user inherits the locale settings configured in /etc/locale.conf, unless
individual programs or individual users override them.

This section describes how to manage system locale.

Procedure

1. To list available system locale settings:

$ localectl list-locales
C.utf8
aa_DJ
aa_DJ.iso88591
aa_DJ.utf8
...

2. To display the current status of the system locales settings:

$ localectl status

3. To set or change the default system locale settings, use a localectl set-locale sub-command as
the root user. For example:

# localectl set-locale LANG=en_US.UTF-8

Additional resources

For more information, see the localectl(1), locale(7), and locale.conf(5) man pages.

1.3.3. Configuring the keyboard layout


The keyboard layout settings control the layout used on the text console and graphical user interfaces.

Procedure

1. To list available keymaps:

$ localectl list-keymaps
ANSI-dvorak
al
al-plisi


amiga-de
amiga-us
...

2. To display the current status of keymaps settings:

$ localectl status
...
VC Keymap: us
...

3. To set or change the default system keymap, use a localectl set-keymap sub-command as the
root user. For example:

# localectl set-keymap us

Additional resources

For more information, see the localectl(1), locale(7), and locale.conf(5) man pages.

1.3.4. Changing the language using desktop GUI


This section describes how to change the system language using the desktop GUI.

Prerequisites

Required language packages are installed on your system

Procedure

1. Open the GNOME Control Center from the System menu by clicking on its icon.


2. In the GNOME Control Center, choose Region & Language from the left vertical bar.

3. Click the Language menu.

4. Select the required region and language from the menu.


If your region and language are not listed, scroll down, and click More to select from available
regions and languages.

5. Click Done.

6. Click Restart for changes to take effect.

NOTE

Some applications do not support certain languages. The text of an application that
cannot be translated into the selected language remains in US English.

Additional resources

For more information on how to launch the GNOME Control Center, see approaches described
in Launching applications

1.3.5. Additional resources


For more information about configuring basic environment settings, see Performing a standard
RHEL installation.

1.4. CONFIGURING AND MANAGING NETWORK ACCESS


This section describes different options on how to add Ethernet connections in Red Hat
Enterprise Linux.

1.4.1. Configuring the network and host name in the graphical installation mode
Follow the steps in this procedure to configure your network and host name.

Procedure

1. From the Installation Summary window, click Network and Host Name.

2. From the list in the left-hand pane, select an interface. The details are displayed in the right-
hand pane.

3. Toggle the ON/OFF switch to enable or disable the selected interface.

NOTE

The installation program automatically detects locally accessible interfaces, and you cannot add
or remove them manually.

4. Click + to add a virtual network interface, which can be one of the following types: Team, Bond, Bridge, or VLAN.

5. Click - to remove a virtual interface.

6. Click Configure to change settings such as IP addresses, DNS servers, or routing configuration
for an existing interface (both virtual and physical).

7. Type a host name for your system in the Host Name field.

NOTE

There are several types of network device naming standards used to identify
network devices with persistent names, for example, em1 and wl3sp0. For
information about these standards, see the Configuring and managing
networking document.

The host name can be either a fully-qualified domain name (FQDN) in the
format hostname.domainname, or a short host name with no domain name.
Many networks have a Dynamic Host Configuration Protocol (DHCP) service
that automatically supplies connected systems with a domain name. To allow
the DHCP service to assign the domain name to this machine, specify only
the short host name. The value localhost.localdomain means that no
specific static host name for the target system is configured, and the actual
host name of the installed system is configured during the processing of the
network configuration, for example, by NetworkManager using DHCP or
DNS.

8. Click Apply to apply the host name to the environment.

Additional resources and information

For details about configuring network settings and the host name when using a Kickstart file, see
the corresponding appendix in Performing an advanced RHEL installation .

If you install Red Hat Enterprise Linux using the text mode of the Anaconda installation
program, use the Network settings option to configure the network.

1.4.2. Configuring a static Ethernet connection using nmcli


This procedure describes adding an Ethernet connection with the following settings using the nmcli
utility:

A static IPv4 address - 192.0.2.1 with a /24 subnet mask

A static IPv6 address - 2001:db8:1::1 with a /64 subnet mask

An IPv4 default gateway - 192.0.2.254

An IPv6 default gateway - 2001:db8:1::fffe

An IPv4 DNS server - 192.0.2.200

An IPv6 DNS server - 2001:db8:1::ffbb

A DNS search domain - example.com

Procedure

1. Add a new NetworkManager connection profile for the Ethernet connection:

# nmcli connection add con-name Example-Connection ifname enp7s0 type ethernet

The further steps modify the Example-Connection connection profile you created.


2. Set the IPv4 address:

# nmcli connection modify Example-Connection ipv4.addresses 192.0.2.1/24

3. Set the IPv6 address:

# nmcli connection modify Example-Connection ipv6.addresses 2001:db8:1::1/64

4. Set the IPv4 and IPv6 connection method to manual:

# nmcli connection modify Example-Connection ipv4.method manual


# nmcli connection modify Example-Connection ipv6.method manual

5. Set the IPv4 and IPv6 default gateways:

# nmcli connection modify Example-Connection ipv4.gateway 192.0.2.254


# nmcli connection modify Example-Connection ipv6.gateway 2001:db8:1::fffe

6. Set the IPv4 and IPv6 DNS server addresses:

# nmcli connection modify Example-Connection ipv4.dns "192.0.2.200"


# nmcli connection modify Example-Connection ipv6.dns "2001:db8:1::ffbb"

To set multiple DNS servers, specify them space-separated and enclosed in quotes.
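
For example, to set two IPv4 DNS servers (the second address, 192.0.2.201, is used here only for illustration):

# nmcli connection modify Example-Connection ipv4.dns "192.0.2.200 192.0.2.201"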

7. Set the DNS search domain for the IPv4 and IPv6 connection:

# nmcli connection modify Example-Connection ipv4.dns-search example.com


# nmcli connection modify Example-Connection ipv6.dns-search example.com

8. Activate the connection profile:

# nmcli connection up Example-Connection


Connection successfully activated (D-Bus active path:
/org/freedesktop/NetworkManager/ActiveConnection/13)

Verification steps

1. Display the status of the devices and connections:

# nmcli device status


DEVICE TYPE STATE CONNECTION
enp7s0 ethernet connected Example-Connection

2. To display all settings of the connection profile:

# nmcli connection show Example-Connection


connection.id: Example-Connection
connection.uuid: b6cdfa1c-e4ad-46e5-af8b-a75f06b79f76
connection.stable-id: --


connection.type: 802-3-ethernet
connection.interface-name: enp7s0
...

3. Use the ping utility to verify that this host can send packets to other hosts.

Ping an IP address in the same subnet.


For IPv4:

# ping 192.0.2.3

For IPv6:

# ping 2001:db8:2::1

If the command fails, verify the IP and subnet settings.

Ping an IP address in a remote subnet.


For IPv4:

# ping 198.162.3.1

For IPv6:

# ping 2001:db8:2::1

If the command fails, ping the default gateway to verify settings.


For IPv4:

# ping 192.0.2.254

For IPv6:

# ping 2001:db8:1::fffe

4. Use the host utility to verify that name resolution works. For example:

# host client.example.com

If the command returns any error, such as connection timed out or no servers could be
reached, verify your DNS settings.

Troubleshooting steps

1. If the connection fails or if the network interface switches between an up and down status:

Make sure that the network cable is plugged in to the host and to a switch.

Check whether the link failure exists only on this host or also on other hosts connected to
the same switch the server is connected to.

Verify that the network cable and the network interface are working as expected. Perform
hardware diagnosis steps and replace defective cables and network interface cards.


Additional resources

See the nm-settings(5) man page for more information on connection profile properties and
their settings.

For further details about the nmcli utility, see the nmcli(1) man page.

If the configuration on the disk does not match the configuration on the device, starting or
restarting NetworkManager creates an in-memory connection that reflects the configuration of
the device. For further details and how to avoid this problem, see NetworkManager duplicates a
connection after restart of NetworkManager service.

1.4.3. Adding a connection profile using nmtui


The nmtui application provides a text user interface to NetworkManager. This procedure describes how
to add a new connection profile.

Prerequisites

The NetworkManager-tui package is installed.

Procedure

1. Start the NetworkManager text user interface utility:

# nmtui

2. Select the Edit a connection menu entry, and press Enter.

3. Select the Add button, and press Enter.

4. Select Ethernet, and press Enter.

5. Fill the fields with the connection details.


6. Select OK to save the changes.

7. Select Back to return to the main menu.

8. Select Activate a connection, and press Enter.

9. Select the new connection entry, and press Enter to activate the connection.

10. Select Back to return to the main menu.

11. Select Quit.

Verification steps

1. Display the status of the devices and connections:


# nmcli device status


DEVICE TYPE STATE CONNECTION
enp7s0 ethernet connected Example-Connection

2. To display all settings of the connection profile:

# nmcli connection show Example-Connection


connection.id: Example-Connection
connection.uuid: b6cdfa1c-e4ad-46e5-af8b-a75f06b79f76
connection.stable-id: --
connection.type: 802-3-ethernet
connection.interface-name: enp7s0
...

Additional resources

For more information on testing connections, see Testing basic network settings in Configuring
and managing networking.

For further details about the nmtui application, see the nmtui(1) man page.

If the configuration on the disk does not match the configuration on the device, starting or
restarting NetworkManager creates an in-memory connection that reflects the configuration of
the device. For further details and how to avoid this problem, see NetworkManager duplicates a
connection after restart of NetworkManager service.

1.4.4. Managing networking in the RHEL 8 web console


In the web console, the Networking menu enables you to:

Display currently received and sent packets

Display the most important characteristics of available network interfaces

Display the content of the networking logs

Add various types of network interfaces (bond, team, bridge, VLAN)

Figure 1.1. Managing Networking in the RHEL 8 web console



1.4.5. Managing networking using RHEL System Roles


You can configure the networking connections on multiple target machines using the network role.

The network role enables you to configure the following types of interfaces:

Ethernet

Bridge

Bonded

VLAN

MacVLAN

Infiniband

The required networking connections for each host are provided as a list within the
network_connections variable.


WARNING

The network role updates or creates all connection profiles on the target system
exactly as specified in the network_connections variable. Therefore, the network
role removes options from the specified profiles if the options are only present on
the system but not in the network_connections variable.

The following example shows how to apply the network role to ensure that an Ethernet connection with
the required parameters exists:


Example 1.2. An example playbook applying the network role to set up an Ethernet connection
with the required parameters

# SPDX-License-Identifier: BSD-3-Clause
---
- hosts: network-test
  vars:
    network_connections:
      # Create one ethernet profile and activate it.
      # The profile uses automatic IP addressing
      # and is tied to the interface by MAC address.
      - name: prod1
        state: up
        type: ethernet
        autoconnect: yes
        mac: "00:00:5e:00:53:00"
        mtu: 1450

  roles:
    - rhel-system-roles.network

For more information on applying a system role, see Introduction to RHEL System Roles .

1.4.6. Additional resources


For further details about network configuration, such as configuring network bonding and
teaming, see the Configuring and managing networking title.

1.5. REGISTERING THE SYSTEM AND MANAGING SUBSCRIPTIONS


Subscriptions cover products installed on Red Hat Enterprise Linux, including the operating system
itself.

You can use a subscription to Red Hat Content Delivery Network to track:

Registered systems

Products installed on your systems

Subscriptions attached to the installed products

1.5.1. Registering the system after the installation


Use the following procedure to register your system if you have not registered it during the installation
process already.

Prerequisites

A valid user account in the Red Hat Customer Portal.

See the Create a Red Hat Login page.


An active subscription for the RHEL system.

For more information about the installation process, see Performing a standard RHEL
installation.

Procedure

1. Register and automatically subscribe your system in one step:

# subscription-manager register --username <username> --password <password> --auto-attach
Registering to: subscription.rhsm.redhat.com:443/subscription
The system has been registered with ID: 37to907c-ece6-49ea-9174-20b87ajk9ee7
The registered system name is: client1.idm.example.com
Installed Product Current Status:
Product Name: Red Hat Enterprise Linux for x86_64
Status: Subscribed

The command prompts you to enter your Red Hat Customer Portal user name and password.

If the registration process fails, you can register your system with a specific pool. For guidance
on how to do it, proceed with the following steps:

a. Determine the pool ID of a subscription that you require:

# subscription-manager list --available

This command displays all available subscriptions for your Red Hat account. For every
subscription, various characteristics are displayed, including the pool ID.

b. Attach the appropriate subscription to your system by replacing pool_id with the pool ID
determined in the previous step:

# subscription-manager attach --pool=pool_id

Additional resources

For more details about registering RHEL systems using the --auto-attach option, see
Understanding autoattaching subscriptions on the Customer Portal section.

For more details about manually registering RHEL systems, see Understanding the manual
registration and subscription on the Customer Portal section.

1.5.2. Registering subscriptions with credentials in the web console


Use the following steps to register a newly installed Red Hat Enterprise Linux using the RHEL 8 web
console.

Prerequisites

A valid user account on the Red Hat Customer Portal.


See the Create a Red Hat Login page.

Active subscription for your RHEL system.


Procedure

1. Type subscription in the search field and press the Enter key.

Alternatively, you can log in to the RHEL 8 web console. For details, see Logging in to the web
console.

2. In the polkit authentication dialog for privileged tasks, add the password belonging to the user
name displayed in the dialog.

3. Click Authenticate.

4. In the Subscriptions dialog box, click Register.


5. Enter your Customer Portal credentials.

6. Enter the name of your organization.


If you have more than one account on the Red Hat Customer Portal, you have to add the
organization name or organization ID. To get the organization ID, contact your Red Hat point of contact.

7. Click the Register button.

At this point, your Red Hat Enterprise Linux 8 system has been successfully registered.


1.5.3. Registering a system using Red Hat account on GNOME


Follow the steps in this procedure to enroll your system with your Red Hat account.

Prerequisites

A valid account on Red Hat customer portal.


See the Create a Red Hat Login page for new user registration.

Procedure

1. Go to the system menu, which is accessible from the top-right screen corner and click the
Settings icon.

2. In the Details → About section, click Register.

3. Select Registration Server.

4. If you are not using the Red Hat server, enter the server address in the URL field.

5. In the Registration Type menu, select Red Hat Account.


6. Under Registration Details:

Enter your Red Hat account user name in the Login field.

Enter your Red Hat account password in the Password field.

Enter the name of your organization in the Organization field.

7. Click Register.

1.5.4. Registering a system using an activation key on GNOME


Follow the steps in this procedure to register your system with an activation key. You can get the
activation key from your organization administrator.

Prerequisites

Activation key or keys.


See the Activation Keys page for creating new activation keys.

Procedure

1. Go to the system menu, which is accessible from the top-right screen corner and click the
Settings icon.

2. In the Details → About section, click Register.

3. Select Registration Server.

4. If you are not using the Red Hat server, enter the URL of your server in the URL field.

5. In the Registration Type menu, select Activation Keys.

6. Under Registration Details:

Enter Activation Keys.


Separate multiple keys by a comma (,).

Enter the name or ID of your organization in the Organization field.

7. Click Register.

1.6. MAKING SYSTEMD SERVICES START AT BOOT TIME


Systemd is a system and service manager for Linux operating systems that introduces the concept of
systemd units.

This section provides information on how to ensure that a service is enabled or disabled at boot time. It
also explains how to manage the services through the web console.

1.6.1. Enabling or disabling the services using the CLI


You can determine which services are enabled or disabled at boot time already during the installation
process. You can also enable or disable a service on an installed operating system.

This section describes the steps for enabling or disabling those services on an already installed operating
system:

Prerequisites

You must have root access to the system.

Procedure

1. To enable a service, use the enable option:

# systemctl enable service_name

Replace service_name with the service you want to enable.

You can also enable and start a service in a single command:

# systemctl enable --now service_name

2. To disable a service, use the disable option:

# systemctl disable service_name

Replace service_name with the service you want to disable.
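
To verify whether a service is configured to start at boot, you can query its state; the httpd service is used here only as an example:

$ systemctl is-enabled httpd
enabled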


WARNING

You cannot enable a service that has been previously masked. You have to unmask
it first:

# systemctl unmask service_name

1.6.2. Managing services in the RHEL 8 web console


This section describes how you can also enable or disable a service using the web console. You can
manage systemd targets, services, sockets, timers, and paths. You can also check the service status,
start or stop services, enable or disable them.

Prerequisites

You must have root access to the system.

Procedure

1. Open https://localhost:9090/ in a web browser of your preference.

2. Log in to the web console with your root credentials on the system.

3. To display the web console panel, click the Host icon, which is in the upper-left corner of the
window.

4. On the menu, click Services.


You can manage systemd targets, services, sockets, timers, and paths.

5. For example, to manage the NFS client services:

a. Click Targets.

b. Select the service NFS client services.

c. To enable or disable the service, click the Toggle button.

d. To stop the service, click the button and choose the option 'Stop'.


1.7. CONFIGURING SYSTEM SECURITY


Computer security is the protection of computer systems and their hardware, software, information, and
services from theft, damage, disruption, and misdirection. Ensuring computer security is an essential
task, in particular in enterprises that process sensitive data and handle business transactions.

This section covers only the basic security features that you can configure after installation of the
operating system. For detailed information on securing Red Hat Enterprise Linux, see the Security
section in Product Documentation for Red Hat Enterprise Linux 8 .

1.7.1. Enhancing system security with a firewall


A firewall is a network security system that monitors and controls incoming and outgoing network traffic
according to configured security rules. A firewall typically establishes a barrier between a trusted secure
internal network and another outside network.

The firewalld service, which provides a firewall in Red Hat Enterprise Linux, is automatically enabled
during installation.

1.7.1.1. Enabling the firewalld service

To enable the firewalld service, follow this procedure.

Procedure

1. Display the current status of firewalld:

$ systemctl status firewalld


● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset:
enabled)
Active: inactive (dead)
...

2. If firewalld is not enabled and running, switch to the root user, start the firewalld service,
and enable it to start automatically after the system restarts:

# systemctl enable --now firewalld

Verification steps

1. Check that firewalld is running and enabled:

$ systemctl status firewalld


● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset:
enabled)
Active: active (running)
...

Additional resources

For more information, see the firewalld(1) man page.


1.7.1.2. Managing firewall in the RHEL 8 web console

To configure the firewalld service in the web console, navigate to Networking → Firewall.

By default, the firewalld service is enabled.

Procedure

1. To enable or disable firewalld in the web console, switch the Firewall toggle button.

NOTE

Additionally, you can define more fine-grained access through the firewall to a service
using the Add services… button.
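
On the command line, a comparable change can be made with the firewall-cmd utility; for example, to permanently allow the HTTPS service (shown here only as an illustration):

# firewall-cmd --permanent --add-service=https
# firewall-cmd --reload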

1.7.1.3. Additional resources

For detailed information on configuring and using a firewall, see Using and configuring firewalls .

1.7.2. Managing basic SELinux settings


Security-Enhanced Linux (SELinux) is an additional layer of system security that determines which
processes can access which files, directories, and ports. These permissions are defined in SELinux
policies. A policy is a set of rules that guide the SELinux security engine.

1.7.2.1. SELinux states and modes

SELinux has two possible states:

Disabled

Enabled

When SELinux is enabled, it runs in one of the following modes:

Enforcing

Permissive

In enforcing mode, SELinux enforces the loaded policies. SELinux denies access based on SELinux
policy rules and enables only the interactions that are explicitly allowed. Enforcing mode is the safest
SELinux mode and is the default mode after installation.

In permissive mode, SELinux does not enforce the loaded policies. SELinux does not deny access, but
reports actions that break the rules to the /var/log/audit/audit.log log. Permissive mode is the default


mode during installation. Permissive mode is also useful in some specific cases, for example when
troubleshooting problems.

Additional resources

For more information on SELinux, see Using SELinux.

1.7.2.2. Ensuring the required state of SELinux

By default, SELinux operates in enforcing mode. However, in specific scenarios, you can set SELinux to
permissive mode or even disable it.

IMPORTANT

Red Hat recommends keeping your system in enforcing mode. For debugging purposes,
you can set SELinux to permissive mode.

Follow this procedure to change the state and mode of SELinux on your system.

Procedure

1. Display the current SELinux mode:

$ getenforce

2. To temporarily set SELinux:

a. To Enforcing mode:

# setenforce Enforcing

b. To Permissive mode:

# setenforce Permissive

NOTE

After reboot, SELinux mode is set to the value specified in the
/etc/selinux/config configuration file.

3. To set SELinux mode to persist across reboots, modify the SELINUX variable in the
/etc/selinux/config configuration file.
For example, to switch SELinux to enforcing mode:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
...



WARNING

Disabling SELinux reduces your system security. Avoid disabling SELinux
using the SELINUX=disabled option in the /etc/selinux/config file
because this can result in memory leaks and race conditions causing kernel
panics. Instead, disable SELinux by adding the selinux=0 parameter to the
kernel command line as described in Changing SELinux modes at boot time.

Additional resources

For more information on permanent changes of SELinux modes, see Changing SELinux states
and modes.

1.7.2.3. Switching SELinux modes in the RHEL 8 web console

You can set SELinux mode through the RHEL 8 web console in the SELinux menu item.

By default, SELinux enforcing policy in the web console is on, and SELinux operates in enforcing mode.
By turning it off, you switch SELinux to permissive mode. Note that this selection is automatically
reverted on the next boot to the configuration defined in the /etc/sysconfig/selinux file.

Procedure

1. In the web console, use the Enforce policy toggle button in the SELinux menu item to turn
SELinux enforcing policy on or off.

1.7.2.4. Next steps

You can manage various SELinux local customizations on multiple target systems using the
selinux system role. For more information, see the Deploying the same SELinux configuration
on multiple systems section.


1.7.3. Next steps


Using key pairs instead of passwords for SSH authentication

Security hardening

Using SELinux

Securing networks

1.8. GETTING STARTED WITH MANAGING USER ACCOUNTS


Red Hat Enterprise Linux is a multi-user operating system, which enables multiple users on different
computers to access a single system installed on one machine.

Every user operates under their own account, and managing user accounts thus represents a core element
of Red Hat Enterprise Linux system administration.

1.8.1. Overview of user accounts and groups


This section provides an overview of user accounts and groups. The following are the different types of
user accounts:

Normal user accounts:


Normal accounts are created for users of a particular system. Such accounts can be added,
removed, and modified during normal system administration.

System user accounts


System user accounts represent a particular application's identifier on a system. Such accounts
are generally added or manipulated only at software installation time, and they are not modified
later.


WARNING

System accounts are presumed to be available locally on a system. If these
accounts are configured and provided remotely, such as in the instance of
an LDAP configuration, system breakage and service start failures can
occur.

For system accounts, user IDs below 1000 are reserved. For normal accounts, you can use IDs
starting at 1000. However, the recommended practice is to assign IDs starting at 5000.

Group
A group is an entity that ties together multiple user accounts for a common purpose, such as
granting access to particular files.

Additional resources

For more information, see Configuring reserved user and group IDs .


For assigning IDs, see the /etc/login.defs file.

1.8.2. Managing accounts and groups using command-line tools


This section describes basic command-line tools to manage user accounts and groups.

To display user and group IDs:

$ id
uid=1000(example.user) gid=1000(example.user) groups=1000(example.user),10(wheel)
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

To create a new user account:

# useradd example.user

To assign a new password to a user account belonging to example.user:

# passwd example.user

To add a user to a group:

# usermod -a -G example.group example.user
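
If the example.group group does not exist yet, you can create it first with the groupadd command:

# groupadd example.group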

Additional resources

The useradd(8), passwd(1), and usermod(8) man pages.

1.8.3. System user accounts managed in the web console


With user accounts displayed in the RHEL web console you can:

Authenticate users when accessing the system.

Set their access rights to the system.

The RHEL web console displays all user accounts located in the system. Therefore, you can see at least
one user account just after the first login to the web console.

After logging into the RHEL web console, you can perform the following operations:

Create new user accounts.

Change their parameters.

Lock accounts.

Terminate user sessions.

1.8.4. Adding new accounts using the web console


Use the following steps for adding user accounts to the system and setting administration rights to the
accounts through the RHEL web console.


Prerequisites

The RHEL web console must be installed and accessible. For details, see Installing the web
console.

Procedure

1. Log in to the RHEL web console.

2. Click Accounts.

3. Click Create New Account.

4. In the Full Name field, enter the full name of the user.
The RHEL web console automatically suggests a user name from the full name and fills it in the
User Name field. If you do not want to use the original naming convention consisting of the first
letter of the first name and the whole surname, update the suggestion.

5. In the Password/Confirm fields, enter the password and retype it for verification that your
password is correct. The color bar below the fields shows the security level of the entered
password; the web console does not allow you to create a user with a weak password.

6. Click Create to save the settings and close the dialog box.

7. Select the newly created account.

8. Select Server Administrator in the Roles item.


Now you can see the new account in the Accounts settings and you can use the credentials to connect
to the system.

1.8.5. Next steps


For more information, see Managing user and group accounts

1.9. DUMPING A CRASHED KERNEL FOR LATER ANALYSIS


To analyze why a system crashed, you can use the kdump service to save the contents of the system’s
memory for later analysis.

This section provides a brief introduction to kdump, and information about configuring kdump using the
RHEL web console or using the corresponding RHEL system role.

1.9.1. What is kdump


kdump is a service providing a crash dumping mechanism. The service enables you to save the contents
of the system’s memory for later analysis. kdump uses the kexec system call to boot into the second
kernel (a capture kernel) without rebooting; and then captures the contents of the crashed kernel’s
memory (a crash dump or a vmcore) and saves it. The second kernel resides in a reserved part of the
system memory.
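
To check whether the kdump service is installed and active on a particular system, you can typically inspect its state (assuming the kexec-tools package is installed):

$ systemctl status kdump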

IMPORTANT

A kernel crash dump can be the only information available in the event of a system failure
(a critical bug). Therefore, ensuring that kdump is operational is important in mission-
critical environments. Red Hat advise that system administrators regularly update and
test kexec-tools in your normal kernel update cycle. This is especially important when new
kernel features are implemented.

1.9.2. Configuring kdump memory usage and target location in web console
The procedure below shows you how to use the Kernel Dump tab in the Red Hat Enterprise Linux web
console interface to configure the amount of memory that is reserved for the kdump kernel. The


procedure also describes how to specify the target location of the vmcore dump file and how to test
your configuration.

Prerequisites

Introduction to operating the web console

Procedure

1. Open the Kernel Dump tab and start the kdump service.

2. Configure the kdump memory usage through the command line.

3. Click the link next to the Crash dump location option.

4. Select the Local Filesystem option from the drop-down and specify the directory you want to
save the dump in.

Alternatively, select the Remote over SSH option from the drop-down to send the vmcore
to a remote machine using the SSH protocol.
Fill the Server, ssh key, and Directory fields with the remote machine address, ssh key
location, and a target directory.

Another choice is to select the Remote over NFS option from the drop-down and fill the
Mount field to send the vmcore to a remote machine using the NFS protocol.

NOTE

Tick the Compression check box to reduce the size of the vmcore file.

5. Test your configuration by crashing the kernel.


WARNING

This step disrupts execution of the kernel and results in a system crash and
loss of data.
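
A common way to trigger such a test crash from the command line, assuming the SysRq facility is enabled, is:

# echo c > /proc/sysrq-trigger

Run this only on a test system, because it immediately crashes the kernel.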

Additional resources

For a complete list of currently supported targets for kdump, see Supported kdump targets .

For information on how to configure an SSH server and set up a key-based authentication, see
Using secure communications between two systems with OpenSSH .

1.9.3. Configuring kdump using RHEL System Roles


RHEL System Roles is a collection of Ansible roles and modules that provide a consistent configuration
interface to remotely manage multiple RHEL systems. The kdump role enables you to set basic kernel
dump parameters on multiple systems.


WARNING

The kdump role replaces the kdump configuration of the managed hosts entirely
by replacing the /etc/kdump.conf file. Additionally, if the kdump role is applied, all
previous kdump settings are also replaced, even if they are not specified by the role
variables, by replacing the /etc/sysconfig/kdump file.

The following example playbook shows how to apply the kdump system role to set the location of the
crash dump files:

---
- hosts: kdump-test
  vars:
    kdump_path: /var/crash
  roles:
    - rhel-system-roles.kdump

Additional resources

For a detailed reference on kdump role variables, install the rhel-system-roles package, and
see the README.md or README.html files in the /usr/share/doc/rhel-system-roles/kdump
directory.

For more information on RHEL System Roles, see Introduction to RHEL System Roles

1.9.4. Additional resources


For more detailed information about kdump, see Installing and configuring kdump .

1.10. RECOVERING AND RESTORING A SYSTEM


To recover and restore a system using an existing backup, Red Hat Enterprise Linux provides the Relax-
and-Recover (ReaR) utility.

You can use the utility as a disaster recovery solution and also for system migration.

The utility enables you to perform the following tasks:

Produce a bootable image and restore the system from an existing backup, using the image.

Replicate the original storage layout.

Restore user and system files.

Restore the system to a different hardware.

Additionally, for disaster recovery, you can also integrate certain backup software with ReaR.

Setting up ReaR involves the following high-level steps:

1. Install ReaR.

2. Create rescue system.

3. Modify ReaR configuration file, to add backup method details.

4. Generate backup files.

1.10.1. Setting up ReaR


Use the following steps to install the packages for using the Relax-and-Recover (ReaR) utility, create a
rescue system, configure and generate a backup.


Prerequisites

Necessary configurations as per the backup restore plan are ready.


Note that you can use the NETFS backup method, a fully-integrated and built-in method with
ReaR.

Procedure

1. Install ReaR, the genisoimage pre-mastering program, and the syslinux package providing a
suite of boot loaders:

# yum install rear genisoimage syslinux

2. Create a rescue system:

# rear mkrescue

3. Modify the ReaR configuration file in an editor of your choice, for example:

# vi /etc/rear/local.conf

4. Add the backup setting details to /etc/rear/local.conf. For example, in the case of the NETFS
backup method, add the following lines:

BACKUP=NETFS
BACKUP_URL=backup.location

Replace backup.location with the URL of your backup location.

5. To configure ReaR to keep the previous backup archives when the new ones are created, also
add the following line to the configuration file:

NETFS_KEEP_OLD_BACKUP_COPY=y

6. To make the backups incremental, meaning that only the changed files are backed up on each
run, add the following line:

BACKUP_TYPE=incremental

7. Take a backup as per the restore plan.
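
For example, with the NETFS method configured as above, the rescue image and the backup archive can typically be created in a single step:

# rear mkbackup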

1.11. TROUBLESHOOTING PROBLEMS USING LOG FILES


Log files contain messages about the system, including the kernel, services, and applications running on
it. These contain information that helps troubleshoot issues or monitor system functions. The logging
system in Red Hat Enterprise Linux is based on the built-in syslog protocol. Particular programs use
this system to record events and organize them into log files, which are useful when auditing the
operating system and troubleshooting various problems.

1.11.1. Services handling syslog messages


The following two services handle syslog messages:


The systemd-journald daemon

The Rsyslog service

The systemd-journald daemon collects messages from various sources and forwards them to Rsyslog
for further processing. The systemd-journald daemon collects messages from the following sources:

Kernel

Early stages of the boot process

Standard and error output of daemons as they start up and run

Syslog

The Rsyslog service sorts the syslog messages by type and priority and writes them to the files in the
/var/log directory. The /var/log directory persistently stores the log messages.

1.11.2. Subdirectories storing syslog messages


The following subdirectories under the /var/log directory store syslog messages.

/var/log/messages - all syslog messages except the following

/var/log/secure - security and authentication-related messages and errors

/var/log/maillog - mail server-related messages and errors

/var/log/cron - log files related to periodically executed tasks

/var/log/boot.log - log files related to system startup

1.11.3. Inspecting log files using the web console


Follow the steps in this procedure to inspect the log files using the web console.

Procedure

1. Log into the Red Hat Enterprise Linux 8 web console.


For details, see Logging in to the web console .

2. Click Logs.

Figure 1.2. Inspecting the log files in the RHEL 8 web console

1.11.4. Viewing logs using the command line


The Journal is a component of systemd that helps you view and manage log files. It addresses problems
connected with traditional logging, is closely integrated with the rest of the system, and supports various
logging technologies and access management for the log files.

You can use the journalctl command to view messages in the system journal using the command line,
for example:

$ journalctl -b | grep kvm


May 15 11:31:41 localhost.localdomain kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 15 11:31:41 localhost.localdomain kernel: kvm-clock: cpu 0, msr 76401001, primary cpu clock
...

Table 1.1. Viewing system information

Command Description

journalctl Shows all collected journal entries.

journalctl FILEPATH Shows logs related to a specific file. For example, the
journalctl /dev/sda command displays logs related
to the /dev/sda file system.

journalctl -b Shows logs for the current boot.

journalctl -k -b -1 Shows kernel logs for the current boot.

Table 1.2. Viewing information on specific services

Command Description

journalctl -b _SYSTEMD_UNIT=foo Filters the log to show entries matching the "foo" systemd
service.

journalctl -b _SYSTEMD_UNIT=foo _PID=number Combines matches. For example, this command
shows logs for systemd units that match foo and the specified PID.

journalctl -b _SYSTEMD_UNIT=foo _PID=number + _SYSTEMD_UNIT=foo1 The separator "+" combines
two expressions in a logical OR. For example, this command shows all messages from the foo service
process with the PID, plus all messages from the foo1 service (from any of its processes).

journalctl -b _SYSTEMD_UNIT=foo _SYSTEMD_UNIT=foo1 Shows all entries matching either
expression, referring to the same field. Here, this command shows logs matching the systemd unit foo
or the systemd unit foo1.

Table 1.3. Viewing logs related to specific boots

Command Description

journalctl --list-boots Shows a tabular list of boot numbers, their IDs, and
the timestamps of the first and last message
pertaining to the boot. You can use the ID in the next
command to view detailed information.

journalctl --boot=ID _SYSTEMD_UNIT=foo Shows information about the specified boot ID.
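
As a practical illustration, you can combine a unit filter with a time range; the sshd.service unit and the timestamp below are used only as examples:

$ journalctl -u sshd.service --since "2020-03-30 10:00:00"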

1.11.5. Additional resources


For details on configuring Rsyslog to record logs, see Chapter 8, Configuring a remote logging
solution.

The journalctl(1) man page.

For more information on systemd, see Managing services with systemd in Configuring basic
system settings.

1.12. ACCESSING THE RED HAT SUPPORT


This section describes how to effectively troubleshoot your problems using Red Hat support and
sosreport.

To obtain support from Red Hat, use the Red Hat Customer Portal, which provides access to everything
available with your subscription.

1.12.1. Obtaining Red Hat Support through Red Hat Customer Portal
The following section describes how to use the Red Hat Customer Portal to get help.

Prerequisites

A valid user account on the Red Hat Customer Portal. See Create a Red Hat Login .

An active subscription for the RHEL system.

Procedure

1. Access Red Hat support:

a. Open a new support case.

b. Initiate a live chat with a Red Hat expert.

c. Contact a Red Hat expert by making a call or sending an email.

1.12.2. Troubleshooting problems using sosreport


The sosreport command collects configuration details, system information and diagnostic information
from a Red Hat Enterprise Linux system.

The following section describes how to use the sosreport command to produce reports for your support
cases.

Prerequisites

A valid user account on the Red Hat Customer Portal. See Create a Red Hat Login .

An active subscription for the RHEL system.

A support-case number.

Procedure

1. Install the sos package:

# yum install sos

NOTE

The default minimal installation of Red Hat Enterprise Linux does not include the
sos package, which provides the sosreport command.

2. Generate a report:

# sosreport
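
Depending on the sos version, you can also associate the report with a support case at generation time; the case number below is only a placeholder:

# sosreport --case-id 01234567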

3. Attach the report to your support case.


See the How can I attach a file to a Red Hat support case? Red Hat Knowledgebase article for
more information.

Note that when attaching the report, you are prompted to enter the number of the relevant
support case.


Additional resources

For more information on sosreport, see the What is a sosreport and how to create one in
Red Hat Enterprise Linux 4.6 and later? Red Hat Knowledgebase article.


CHAPTER 2. MANAGING SOFTWARE PACKAGES

2.1. SOFTWARE MANAGEMENT TOOLS IN RED HAT ENTERPRISE LINUX 8

In RHEL 8, software installation is enabled by the new version of the YUM tool (YUM v4), which is based
on the DNF technology.

NOTE

Upstream documentation identifies the technology as DNF and the tool is referred to as
DNF in the upstream. As a result, some output returned by the new YUM tool in RHEL 8
mentions DNF.

Although YUM v4 used in RHEL 8 is based on DNF, it is compatible with YUM v3 used in RHEL 7. For
software installation, the yum command and most of its options work the same way in RHEL 8 as they
did in RHEL 7.

Selected yum plug-ins and utilities have been ported to the new DNF back end, and can be installed
under the same names as in RHEL 7. Packages also provide compatibility symlinks, so the binaries,
configuration files, and directories can be found in usual locations.

Note that the legacy Python API provided by YUM v3 is no longer available. You can migrate your plug-
ins and scripts to the new API provided by YUM v4 (DNF Python API), which is stable and fully
supported. See DNF API Reference for more information.

2.2. APPLICATION STREAMS


Red Hat Enterprise Linux 8 introduces the concept of Application Streams. Multiple versions of user
space components are now delivered and updated more frequently than the core operating system
packages. This provides greater flexibility to customize Red Hat Enterprise Linux without impacting the
underlying stability of the platform or specific deployments.

Components made available as Application Streams can be packaged as modules or RPM packages, and
are delivered through the AppStream repository in Red Hat Enterprise Linux 8. Each Application Stream
has a given life cycle, either the same as RHEL 8 or shorter, more suitable to the particular application.
Application Streams with a shorter life cycle are listed in the Red Hat Enterprise Linux 8 Application
Streams Life Cycle page.

Modules are collections of packages representing a logical unit: an application, a language stack, a
database, or a set of tools. These packages are built, tested, and released together.

Module streams represent versions of the Application Stream components. For example, two streams
(versions) of the PostgreSQL database server are available in the postgresql module: PostgreSQL 10
(the default stream) and PostgreSQL 9.6. Only one stream of a given module can be installed on the system at a time.
Different versions can be used in separate containers.
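
For example, you can list the available streams of the postgresql module and install a specific stream; the commands below are shown only as an illustration:

# yum module list postgresql
# yum module install postgresql:10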

Detailed module commands are described in the Installing, managing, and removing user-space
components document. For a list of modules available in AppStream, see the Package manifest.

2.3. SEARCHING FOR SOFTWARE PACKAGES


yum allows you to perform a complete set of operations with software packages.


The following section describes how to use yum to:

Search for packages.

List packages.

List repositories.

Display information about the packages.

List package groups.

Specify glob expressions in yum input.

2.3.1. Searching packages with yum


To search for a package, use:

# yum search term

Replace term with a term related to the package.

Note that the yum search command returns term matches within the name and summary of the
packages. This makes the search faster and enables you to search for packages you do not know
the name of, but for which you know a related term.

To include term matches within package descriptions, use:

# yum search --all term

Replace term with a term you want to search for in a package name, summary, or description.

Note that yum search --all enables a more exhaustive but slower search.

2.3.2. Listing packages with yum


To list information on all installed and available packages, use:

# yum list --all

To list all packages installed on your system, use:

# yum list --installed

To list all packages in all enabled repositories that are available to install, use:

# yum list --available

Note that you can filter the results by appending glob expressions as arguments. See Section 2.3.6,
"Specifying glob expressions in yum input" for more details.

2.3.3. Listing repositories with yum


To list all enabled repositories on your system, use:


# yum repolist

To list all disabled repositories on your system, use:

# yum repolist --disabled

To list both enabled and disabled repositories, use:

# yum repolist --all

To list additional information about the repositories, use:

# yum repoinfo

Note that you can filter the results by passing the ID or name of repositories as arguments or by
appending glob expressions. See Section 2.3.6, "Specifying glob expressions in yum input" for more
details.

2.3.4. Displaying package information with yum


To display information about one or more packages, use:

# yum info package-name

Replace package-name with the name of the package.

Note that you can filter the results by appending glob expressions as arguments. See Section 2.3.6,
"Specifying glob expressions in yum input" for more details.

2.3.5. Listing package groups with yum


To view the number of installed and available groups, use:

# yum group summary

To list all installed and available groups, use:

# yum group list

Note that you can filter the results by appending command line options for the yum group list
command (--hidden, --available). For more available options see the man pages.

To list mandatory and optional packages contained in a particular group, use:

# yum group info group-name

Replace group-name with the name of the group.

Note that you can filter the results by appending glob expressions as arguments. See Section 2.3.6,
"Specifying glob expressions in yum input" for more details.

2.3.6. Specifying glob expressions in yum input


yum commands allow you to filter the results by appending one or more glob expressions as arguments.
Glob expressions must be escaped when passed as arguments to the yum command. To ensure glob
expressions are passed to yum as intended, use one of the following methods:

Double-quote or single-quote the entire global expression.

# yum provides "*/file-name"

Replace file-name with the name of the file.

Escape the wildcard characters by preceding them with a backslash (\) character.

# yum provides \*/file-name

Replace file-name with the name of the file.

2.4. INSTALLING SOFTWARE PACKAGES


The following section describes how to use yum to:

Install packages.

Install a package group.

Specify a package name in yum input.

2.4.1. Installing packages with yum


To install a package and all the package dependencies, use:

# yum install package-name

Replace package-name with the name of the package.

To install multiple packages and their dependencies simultaneously, use:

# yum install package-name-1 package-name-2

Replace package-name-1 and package-name-2 with the names of the packages.

When installing packages on a multilib system (AMD64, Intel 64 machine), you can specify the
architecture of the package by appending it to the package name:

# yum install package-name.arch

Replace package-name.arch with the name and architecture of the package.
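
For example, on an AMD64 or Intel 64 system you could request the 32-bit variant of a library in this
way (glibc is only an illustration; the i686 package must be available in your repositories):

# yum install glibc.i686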

If you know the name of the binary you want to install, but not the package name, you can use
the path to the binary as an argument:

# yum install /usr/sbin/binary-file

Replace /usr/sbin/binary-file with a path to the binary file.

yum searches through the package lists, finds the package which provides /usr/sbin/binary-file,
and prompts you as to whether you want to install it.

To install a previously-downloaded package from a local directory, use:

# yum install /path/

Replace /path/ with the path to the package.

Note that you can optimize the package search by explicitly defining how to parse the argument. See
Section 2.4.3, “Specifying a package name in yum input” for more details.

2.4.2. Installing a package group with yum


To install a package group by a group name, use:

# yum group install group-name

Or

# yum install @group-name

Replace group-name with the full name of the group or environment group.

To install a package group by the groupID, use:

# yum group install groupID

Replace groupID with the ID of the group.

2.4.3. Specifying a package name in yum input


To optimize the installation and removal process, you can append the -n, -na, or -nevra suffix to the yum
install and yum remove commands to explicitly define how to parse an argument:

To install a package using its exact name, use:

# yum install-n name

Replace name with the exact name of the package.

To install a package using its exact name and architecture, use:

# yum install-na name.architecture

Replace name and architecture with the exact name and architecture of the package.

To install a package using its exact name, epoch, version, release, and architecture, use:

# yum install-nevra name-epoch:version-release.architecture

Replace name, epoch, version, release, and architecture with the exact name, epoch, version,
release, and architecture of the package.
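
For example, an install-nevra invocation might look like the following (the version, release, and
architecture shown are only illustrative):

# yum install-nevra bash-0:4.4.19-10.el8.x86_64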


2.5. UPDATING SOFTWARE PACKAGES


yum allows you to check if your system has any pending updates. You can list packages that need
updating and choose to update a single package, multiple packages, or all packages at once. If any of the
packages you choose to update have dependencies, they are updated as well.

The following section describes how to use yum to:

Check for updates.

Update a single package.

Update a package group.

Update all packages and their dependencies.

Apply security updates.

Automate software updates.

2.5.1. Checking for updates with yum


To see which packages installed on your system have available updates, use:

# yum check-update

The output returns the list of packages and their dependencies that have an update available.

2.5.2. Updating a single package with yum


To update a package, use:

# yum update package-name

Replace package-name with the name of the package.

IMPORTANT

When applying updates to the kernel, yum always installs a new kernel regardless of whether
you use the yum update or yum install command.

2.5.3. Updating a package group with yum


To update a package group, use:

# yum group update group-name

Replace group-name with the name of the package group.

2.5.4. Updating all packages and their dependencies with yum


To update all packages and their dependencies, use:

# yum update


2.5.5. Updating security-related packages with yum


To upgrade to the latest available packages that have security errata, use:

# yum update --security

To update packages only to the minimal versions that address security errata, use:

# yum update-minimal --security

2.5.6. Automating software updates


To check and download package updates automatically and regularly, you can use the DNF Automatic
tool that is provided by the dnf-automatic package.

DNF Automatic is an alternative command-line interface to yum that is suited for automatic and regular
execution using systemd timers, cron jobs and other such tools.

DNF Automatic synchronizes package metadata as needed and then checks for available updates.
Afterwards, the tool can perform one of the following actions, depending on how you configure it:

Exit

Download updated packages

Download and apply the updates

The outcome of the operation is then reported by a selected mechanism, such as the standard output or
email.

2.5.6.1. Installing DNF Automatic

The following procedure describes how to install the DNF Automatic tool.

Procedure

To install the dnf-automatic package, use:

# yum install dnf-automatic

Verification steps

To verify the successful installation, confirm the presence of the dnf-automatic package by
running the following command:

# rpm -qi dnf-automatic

2.5.6.2. DNF Automatic configuration file

By default, DNF Automatic uses /etc/dnf/automatic.conf as its configuration file to define its behavior.

The configuration file is separated into the following topical sections:


[commands] section
Sets the mode of operation of DNF Automatic.

[emitters] section
Defines how the results of DNF Automatic are reported.

[command_email] section
Provides the email emitter configuration for an external command used to send email.

[email] section
Provides the email emitter configuration.

[base] section
Overrides settings from the main configuration file of yum.

With the default settings of the /etc/dnf/automatic.conf file, DNF Automatic checks for available
updates, downloads them, and reports the results as standard output.
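
A minimal /etc/dnf/automatic.conf sketch that keeps this default behavior might look as follows (the
values shown are only illustrative; check the file shipped on your system for the authoritative defaults):

[commands]
upgrade_type = default
download_updates = yes
apply_updates = no

[emitters]
emit_via = stdio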


WARNING

Settings of the operation mode from the [commands] section are overridden by
settings used by a systemd timer unit for all timer units except dnf-automatic.timer.

Additional resources

For more details on particular sections, see DNF Automatic documentation.

For more details on systemd timer units, see the dnf-automatic(8) manual page.

For the overview of the systemd timer units included in the dnf-automatic package, see
Section 2.5.6.4 Overview of the systemd timer units included in the dnf-automatic package

2.5.6.3. Enabling DNF Automatic

To run DNF Automatic, you always need to enable and start a specific systemd timer unit. You can use
one of the timer units provided in the dnf-automatic package, or you can write your own timer unit
depending on your needs.

The following section describes how to enable DNF Automatic.

Prerequisites

You specified the behavior of DNF Automatic by modifying the /etc/dnf/automatic.conf
configuration file.

For more information on the DNF Automatic configuration file, see Section 2.5.6.2, “DNF Automatic
configuration file”.

Procedure

Select, enable and start a systemd timer unit that fits your needs:


# systemctl enable --now <unit>

where <unit> is one of the following timers:

dnf-automatic-download.timer

dnf-automatic-install.timer

dnf-automatic-notifyonly.timer

dnf-automatic.timer

For downloading available updates, use:

# systemctl enable dnf-automatic-download.timer

# systemctl start dnf-automatic-download.timer

For downloading and installing available updates, use:

# systemctl enable dnf-automatic-install.timer

# systemctl start dnf-automatic-install.timer

For reporting about available updates, use:

# systemctl enable dnf-automatic-notifyonly.timer

# systemctl start dnf-automatic-notifyonly.timer

Optionally, you can use:

# systemctl enable dnf-automatic.timer

# systemctl start dnf-automatic.timer

In terms of downloading and applying updates, this timer unit behaves according to settings in the
/etc/dnf/automatic.conf configuration file. The default behavior is similar to
dnf-automatic-download.timer: it downloads the updated packages, but it does not install them.

NOTE

Alternatively, you can also run DNF Automatic by executing the /usr/bin/dnf-automatic
file directly from the command line or from a custom script.

Verification steps

To verify that the timer is enabled, run the following command:

# systemctl status <systemd timer unit>


Additional resources

For more information on the dnf-automatic timers, see the dnf-automatic(8) manual page.

For an overview of the systemd timer units included in the dnf-automatic package, see
Section 2.5.6.4, “Overview of the systemd timer units included in the dnf-automatic package”.

2.5.6.4. Overview of the systemd timer units included in the dnf-automatic package

The systemd timer units take precedence and override the settings in the /etc/dnf/automatic.conf
configuration file concerning downloading and applying updates.

For example, if you set:

download_updates = yes

in the /etc/dnf/automatic.conf configuration file, but you have activated the
dnf-automatic-notifyonly.timer unit, the packages will not be downloaded.

The dnf-automatic package includes the following systemd timer units:

Timer unit: dnf-automatic-download.timer
Function: Downloads packages to cache and makes them available for updating.
Note: This timer unit does not install the updated packages. To perform the installation, you have to
execute the dnf update command.
Overrides settings in the /etc/dnf/automatic.conf file: Yes

Timer unit: dnf-automatic-install.timer
Function: Downloads and installs updated packages.
Overrides settings in the /etc/dnf/automatic.conf file: Yes

Timer unit: dnf-automatic-notifyonly.timer
Function: Downloads only repository data to keep the repository cache up-to-date and notifies you
about available updates.
Note: This timer unit does not download or install the updated packages.
Overrides settings in the /etc/dnf/automatic.conf file: Yes

Timer unit: dnf-automatic.timer
Function: The behavior of this timer concerning downloading and applying updates is specified by the
settings in the /etc/dnf/automatic.conf configuration file. The default behavior is the same as for the
dnf-automatic-download.timer unit: it only downloads packages, but does not install them.
Overrides settings in the /etc/dnf/automatic.conf file: No

Additional resources

For more information on the dnf-automatic timers, see the dnf-automatic(8) manual page.

For more information on the /etc/dnf/automatic.conf configuration file, see Section 2.5.6.2,
“DNF Automatic configuration file”.

2.6. UNINSTALLING SOFTWARE PACKAGES


The following section describes how to use yum to:

Remove packages.

Remove a package group.

Specify a package name in yum input.

2.6.1. Removing packages with yum


To remove a particular package and all dependent packages, use:

# yum remove package-name

Replace package-name with the name of the package.

To remove multiple packages and their dependencies simultaneously, use:

# yum remove package-name-1 package-name-2

Replace package-name-1 and package-name-2 with the names of the packages.

NOTE

yum is not able to remove a package without also removing packages that depend on it.


Note that you can optimize the package search by explicitly defining how to parse the argument. See
Section 2.6.3, “Specifying a package name in yum input” for more details.

2.6.2. Removing a package group with yum


To remove a package group by the group name, use:

# yum group remove group-name

Or

# yum remove @group-name

Replace group-name with the full name of the group.

To remove a package group by the groupID, use:

# yum group remove groupID

Replace groupID with the ID of the group.

2.6.3. Specifying a package name in yum input


To optimize the installation and removal process, you can append the -n, -na, or -nevra suffix to the yum
install and yum remove commands to explicitly define how to parse an argument:

To install a package using its exact name, use:

# yum install-n name

Replace name with the exact name of the package.

To install a package using its exact name and architecture, use:

# yum install-na name.architecture

Replace name and architecture with the exact name and architecture of the package.

To install a package using its exact name, epoch, version, release, and architecture, use:

# yum install-nevra name-epoch:version-release.architecture

Replace name, epoch, version, release, and architecture with the exact name, epoch, version,
release, and architecture of the package.

2.7. MANAGING SOFTWARE PACKAGE GROUPS


A package group is a collection of packages that serve a common purpose (System Tools, Sound and
Video). Installing a package group pulls a set of dependent packages, which saves time considerably.

The following section describes how to use yum to:

List package groups.


Install a package group.

Remove a package group.

Specify global expressions in yum input.

2.7.1. Listing package groups with yum


To view the number of installed and available groups, use:

# yum group summary

To list all installed and available groups, use:

# yum group list

Note that you can filter the results by appending command line options for the yum group list
command (--hidden, --available). For more available options see the man pages.

To list mandatory and optional packages contained in a particular group, use:

# yum group info group-name

Replace group-name with the name of the group.

Note that you can filter the results by appending global expressions as arguments. See Section 2.7.4,
“Specifying global expressions in yum input” for more details.

2.7.2. Installing a package group with yum


To install a package group by a group name, use:

# yum group install group-name

Or

# yum install @group-name

Replace group-name with the full name of the group or environment group.

To install a package group by the groupID, use:

# yum group install groupID

Replace groupID with the ID of the group.

2.7.3. Removing a package group with yum


To remove a package group by the group name, use:

# yum group remove group-name

Or


# yum remove @group-name

Replace group-name with the full name of the group.

To remove a package group by the groupID, use:

# yum group remove groupID

Replace groupID with the ID of the group.

2.7.4. Specifying global expressions in yum input


yum commands allow you to filter the results by appending one or more glob expressions as arguments.
Glob expressions must be escaped when passed as arguments to the yum command. To ensure glob
expressions are passed to yum as intended, use one of the following methods:

Double-quote or single-quote the entire global expression.

# yum provides "*/file-name"

Replace file-name with the name of the file.

Escape the wildcard characters by preceding them with a backslash (\) character.

# yum provides \*/file-name

Replace file-name with the name of the file.

2.8. HANDLING PACKAGE MANAGEMENT HISTORY


The yum history command allows you to review information about the timeline of yum transactions,
dates and times they occurred, the number of packages affected, whether these transactions
succeeded or were aborted, and if the RPM database was changed between transactions. yum history
command can also be used to undo or redo the transactions.

The following section describes how to use yum to:

List transactions.

Revert transactions.

Repeat transactions.

Specify global expressions in yum input.

2.8.1. Listing transactions with yum


To display a list of all the latest yum transactions, use:

# yum history

To display a list of all the latest operations for a selected package, use:

# yum history list package-name


Replace package-name with the name of the package. You can filter the command output by
appending global expressions. See Section 2.8.4, “Specifying global expressions in yum input”
for more details.

To examine a particular transaction, use:

# yum history info transactionID

Replace transactionID with the ID of the transaction.

2.8.2. Reverting transactions with yum


To revert a particular transaction, use:

# yum history undo transactionID

Replace transactionID with the ID of the transaction.

To revert the last transaction, use:

# yum history undo last

Note that the yum history undo command only reverts the steps that were performed during the
transaction. If the transaction installed a new package, the yum history undo command uninstalls it. If
the transaction uninstalled a package, the yum history undo command reinstalls it. yum history undo
also attempts to downgrade all updated packages to their previous versions, if the older packages are
still available.
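
For example, assuming transaction 12 installed a package you no longer want, a typical sequence could
be the following (the transaction ID is only an illustration):

# yum history
# yum history info 12
# yum history undo 12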

2.8.3. Repeating transactions with yum


To repeat a particular transaction, use:

# yum history redo transactionID

Replace transactionID with the ID of the transaction.

To repeat the last transaction, use:

# yum history redo last

Note that the yum history redo command only repeats the steps that were performed during the
transaction.

2.8.4. Specifying global expressions in yum input


yum commands allow you to filter the results by appending one or more glob expressions as arguments.
Glob expressions must be escaped when passed as arguments to the yum command. To ensure glob
expressions are passed to yum as intended, use one of the following methods:

Double-quote or single-quote the entire global expression.

# yum provides "*/file-name"


Replace file-name with the name of the file.

Escape the wildcard characters by preceding them with a backslash (\) character.

# yum provides \*/file-name

Replace file-name with the name of the file.

2.9. MANAGING SOFTWARE REPOSITORIES


The configuration information for yum and related utilities is stored in the /etc/yum.conf file. This file
contains one or more [repository] sections, which allow you to set repository-specific options.

It is recommended to define individual repositories in new or existing .repo files in the
/etc/yum.repos.d/ directory.

Note that the values you define in individual [repository] sections of the /etc/yum.conf file override
values set in the [main] section.

The following section describes how to:

Set [repository] options.

Add a yum repository.

Enable a yum repository.

Disable a yum repository.

2.9.1. Setting yum repository options


The /etc/yum.conf configuration file contains the [repository] sections, where repository is a unique
repository ID. The [repository] sections allow you to define individual yum repositories.

NOTE

To avoid conflicts, do not give custom repositories the same names as those used by the Red Hat
repositories.

For a complete list of available [repository] options, see the [repository] OPTIONS section of the
yum.conf(5) manual page.
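
For illustration, a [repository] section in a .repo file might look like the following sketch (the repository
ID, name, and URLs are placeholders, not real repositories):

[example-repo]
name=Example Repository
baseurl=https://www.example.com/repo/
enabled=1
gpgcheck=1
gpgkey=https://www.example.com/keys/RPM-GPG-KEY-example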

2.9.2. Adding a yum repository


To define a new repository, you can:

Add a [repository] section to the /etc/yum.conf file.

Add a [repository] section to a .repo file in the /etc/yum.repos.d/ directory.


yum repositories commonly provide their own .repo file.

NOTE

It is recommended to define your repositories in a .repo file instead of /etc/yum.conf, as all files
with the .repo file extension in the /etc/yum.repos.d/ directory are read by yum.

To add a repository to your system and enable it, use:

# yum-config-manager --add-repo repository_URL

Replace repository_URL with the URL pointing to the repository.
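
For example, assuming the repository definition is published at a hypothetical URL, the command
could look like this:

# yum-config-manager --add-repo https://www.example.com/example.repo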


WARNING

Obtaining and installing software packages from unverified or untrusted sources other than the
Red Hat certificate-based Content Delivery Network (CDN) constitutes a potential security risk,
and could lead to security, stability, compatibility, and maintainability issues.

2.9.3. Enabling a yum repository


To enable a repository, use:

# yum-config-manager --enable repositoryID

Replace repositoryID with the unique repository ID.

To list available repository IDs, see Section 2.3.2, “Listing packages with yum”.

2.9.4. Disabling a yum repository


To disable a yum repository, use:

# yum-config-manager --disable repositoryID

Replace repositoryID with the unique repository ID.

To list available repository IDs, see Section 2.3.2, “Listing packages with yum”.

2.10. CONFIGURING YUM


The configuration information for yum and related utilities is stored in the /etc/yum.conf file. This file
contains one mandatory [main] section, which enables you to set yum options that have global effect.

The following section describes how to:

View the current yum configurations.

Set yum [main] options.


Use yum plug-ins.

2.10.1. Viewing the current yum configurations


To display the current values of global yum options specified in the [main] section of the
/etc/yum.conf file, use:

# yum config-manager --dump

2.10.2. Setting yum main options


The /etc/yum.conf configuration file contains one [main] section. The key-value pairs in this section
affect how yum operates and treats repositories.

You can add additional options under the [main] section heading in /etc/yum.conf.

For a complete list of available [main] options, see the [main] OPTIONS section of the yum.conf(5)
manual page.
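
For illustration, a [main] section might contain entries such as the following (the values shown are
common defaults and are only illustrative):

[main]
gpgcheck=1
installonly_limit=3
clean_requirements_on_remove=True
best=True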

2.10.3. Using yum plug-ins


yum provides plug-ins that extend and enhance its operations. Certain plug-ins are installed by default.

The following section describes how to enable, configure, and disable yum plug-ins.

2.10.3.1. Managing yum plug-ins

The plug-in configuration files always contain a [main] section where the enabled= option controls
whether the plug-in is enabled when you run yum commands. If this option is missing, you can add it
manually to the file.

Every installed plug-in has its own configuration file in the /etc/dnf/plugins/ directory. You can enable or
disable plug-in specific options in these files.
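
For example, a plug-in configuration file such as /etc/dnf/plugins/product-id.conf (the exact file name
depends on the plug-in) typically contains at least:

[main]
enabled=1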

2.10.3.2. Enabling yum plug-ins

To enable all yum plug-ins:

1. Ensure a line beginning with plugins= is present in the [main] section of the /etc/yum.conf
file.

2. Set the value of plugins= to 1.

plugins=1

2.10.3.3. Disabling yum plug-ins

To disable all yum plug-ins:

1. Ensure a line beginning with plugins= is present in the [main] section of the /etc/yum.conf
file.

2. Set the value of plugins= to 0.


plugins=0

IMPORTANT

Disabling all plug-ins is not advised. Certain plug-ins provide important yum
services. In particular, the product-id and subscription-manager plug-ins
provide support for the certificate-based Content Delivery Network (CDN).
Disabling plug-ins globally is provided as a convenience option, and is
advisable only when diagnosing a potential problem with yum.

To disable all yum plug-ins for a particular command, append the --noplugins option to the
command.

# yum --noplugins update

To disable certain yum plug-ins for a single command, append the --disableplugin=plugin-name
option to the command.

# yum update --disableplugin=plugin-name

Replace plugin-name with the name of the plug-in.


CHAPTER 3. MANAGING SERVICES WITH SYSTEMD

3.1. INTRODUCTION TO SYSTEMD


Systemd is a system and service manager for Linux operating systems. It is designed to be backwards
compatible with SysV init scripts, and provides a number of features such as parallel startup of system
services at boot time, on-demand activation of daemons, or dependency-based service control logic.
Starting with Red Hat Enterprise Linux 7, systemd replaced Upstart as the default init system.

Systemd introduces the concept of systemd units. These units are represented by unit configuration
files located in one of the directories listed in the following table.

Table 3.1. Systemd unit files locations

Directory                   Description

/usr/lib/systemd/system/    Systemd unit files distributed with installed RPM packages.

/run/systemd/system/        Systemd unit files created at run time. This directory takes precedence
                            over the directory with installed service unit files.

/etc/systemd/system/        Systemd unit files created by systemctl enable as well as unit files added
                            for extending a service. This directory takes precedence over the directory
                            with runtime unit files.

The units encapsulate information about:

System services

Listening sockets

Other objects that are relevant to the init system

For a complete list of available systemd unit types, see the following table.

Table 3.2. Available systemd unit types

Unit Type         File Extension    Description

Service unit      .service          A system service.

Target unit       .target           A group of systemd units.

Automount unit    .automount        A file system automount point.

Device unit       .device           A device file recognized by the kernel.

Mount unit        .mount            A file system mount point.

Path unit         .path             A file or directory in a file system.

Scope unit        .scope            An externally created process.

Slice unit        .slice            A group of hierarchically organized units that manage system processes.

Socket unit       .socket           An inter-process communication socket.

Swap unit         .swap             A swap device or a swap file.

Timer unit        .timer            A systemd timer.

Overriding the default systemd configuration using system.conf


The default configuration of systemd is defined during compilation and can be found in the
systemd configuration file at /etc/systemd/system.conf. Use this file if you want to deviate from those
defaults and override selected default values for systemd units globally.

For example, to override the default value of the timeout limit, which is set to 90 seconds, use the
DefaultTimeoutStartSec parameter to specify the required value in seconds.

DefaultTimeoutStartSec=required value

For further information, see Example 3.20, “Changing the timeout limit” .

3.1.1. Main features


The systemd system and service manager provides the following main features:

Socket-based activation — At boot time, systemd creates listening sockets for all system
services that support this type of activation, and passes the sockets to these services as soon as
they are started. This not only allows systemd to start services in parallel, but also makes it
possible to restart a service without losing any message sent to it while it is unavailable: the
corresponding socket remains accessible and all messages are queued.
Systemd uses socket units for socket-based activation.

Bus-based activation — System services that use D-Bus for inter-process communication can
be started on-demand the first time a client application attempts to communicate with them.
Systemd uses D-Bus service files for bus-based activation.

Device-based activation — System services that support device-based activation can be started
on-demand when a particular type of hardware is plugged in or becomes available. Systemd uses
device units for device-based activation.

Path-based activation — System services that support path-based activation can be started
on-demand when a particular file or directory changes its state. Systemd uses path units for
path-based activation.

Mount and automount point management — Systemd monitors and manages mount and
automount points. Systemd uses mount units for mount points and automount units for
automount points.

Aggressive parallelization — Because of the use of socket-based activation, systemd can start
system services in parallel as soon as all listening sockets are in place. In combination with
system services that support on-demand activation, parallel activation significantly reduces the
time required to boot the system.

Transactional unit activation logic — Before activating or deactivating a unit, systemd calculates its
dependencies, creates a temporary transaction, and verifies that this transaction is consistent. If a
transaction is inconsistent, systemd automatically attempts to correct it and remove non-essential
jobs from it before reporting an error.

Backwards compatibility with SysV init — Systemd supports SysV init scripts as described in
the Linux Standard Base Core Specification , which eases the upgrade path to systemd service
units.

3.1.2. Compatibility changes


The systemd system and service manager is designed to be mostly compatible with SysV init and
Upstart. The following are the most notable compatibility changes with regard to Red Hat
Enterprise Linux 6 systems that used SysV init:

Systemd has only limited support for runlevels. It provides a number of target units that can be
directly mapped to these runlevels and for compatibility reasons, it is also distributed with the
earlier runlevel command. Not all systemd targets can be directly mapped to runlevels,
however, and as a consequence, this command might return N to indicate an unknown runlevel.
It is recommended that you avoid using the runlevel command if possible.
For more information about systemd targets and their comparison with runlevels, see
Section 3.3, “Working with systemd targets” .

The systemctl utility does not support custom commands. In addition to standard commands
such as start, stop, and status, authors of SysV init scripts could implement support for any
number of arbitrary commands in order to provide additional functionality. For example, the init
script for iptables could be executed with the panic command, which immediately enabled
panic mode and reconfigured the system to start dropping all incoming and outgoing packets.
This is not supported in systemd and the systemctl only accepts documented commands.
For more information about the systemctl utility and its comparison with the earlier service
utility, see Table 3.3, “Comparison of the service utility with systemctl” .

The systemctl utility does not communicate with services that have not been started by
systemd. When systemd starts a system service, it stores the ID of its main process in order to
keep track of it. The systemctl utility then uses this PID to query and manage the service.
Consequently, if a user starts a particular daemon directly on the command line, systemctl is
unable to determine its current status or stop it.

Systemd stops only running services. Previously, when the shutdown sequence was initiated,
Red Hat Enterprise Linux 6 and earlier releases of the system used symbolic links located in the
/etc/rc0.d/ directory to stop all available system services regardless of their status. With
systemd , only running services are stopped on shutdown.

System services are unable to read from the standard input stream. When systemd starts a
service, it connects its standard input to /dev/null to prevent any interaction with the user.

System services do not inherit any context (such as the HOME and PATH environment
variables) from the invoking user and their session. Each service runs in a clean execution
context.

When loading a SysV init script, systemd reads dependency information encoded in the Linux
Standard Base (LSB) header and interprets it at run time.

All operations on service units are subject to a default timeout of 5 minutes to prevent a
malfunctioning service from freezing the system. This value is hardcoded for services that are
generated from initscripts and cannot be changed. However, individual configuration files can
be used to specify a longer timeout value per service, see Example 3.20, “Changing the timeout
limit”.

For a detailed list of compatibility changes introduced with systemd, see the Migration Planning Guide
for Red Hat Enterprise Linux 7.

3.2. MANAGING SYSTEM SERVICES


Previous versions of Red Hat Enterprise Linux, which were distributed with SysV init or Upstart, used init
scripts located in the /etc/rc.d/init.d/ directory. These init scripts were typically written in Bash, and
allowed the system administrator to control the state of services and daemons in their system. Starting
with Red Hat Enterprise Linux 7, these init scripts have been replaced with service units.

Service units end with the .service file extension and serve a similar purpose as init scripts. To view,
start, stop, restart, enable, or disable system services, use the systemctl command as described in
Comparison of the service utility with systemctl , Comparison of the chkconfig utility with systemctl , and
further in this section. The service and chkconfig commands are still available in the system and work
as expected, but are only included for compatibility reasons and should be avoided.

Table 3.3. Comparison of the service utility with systemctl

service                     systemctl                                    Description

service name start          systemctl start name.service                 Starts a service.

service name stop           systemctl stop name.service                  Stops a service.

service name restart        systemctl restart name.service               Restarts a service.

service name condrestart    systemctl try-restart name.service           Restarts a service only if it is running.

service name reload         systemctl reload name.service                Reloads configuration.

service name status         systemctl status name.service                Checks if a service is running.
                            systemctl is-active name.service

service --status-all        systemctl list-units --type service --all    Displays the status of all services.

Table 3.4. Comparison of the chkconfig utility with systemctl

chkconfig                systemctl                                    Description

chkconfig name on        systemctl enable name.service                Enables a service.

chkconfig name off       systemctl disable name.service               Disables a service.

chkconfig --list name    systemctl status name.service                Checks if a service is enabled.
                         systemctl is-enabled name.service

chkconfig --list         systemctl list-unit-files --type service     Lists all services and checks if they are enabled.

chkconfig --list         systemctl list-dependencies --after          Lists services that are ordered to start before the specified unit.

chkconfig --list         systemctl list-dependencies --before         Lists services that are ordered to start after the specified unit.

Specifying service units


For clarity, all command examples in the rest of this section use full unit names with the .service file
extension, for example:

# systemctl stop nfs-server.service

However, the file extension can be omitted, in which case the systemctl utility assumes the argument is
a service unit. The following command is equivalent to the one above:

# systemctl stop nfs-server

Additionally, some units have alias names. Aliases can be shorter than the actual unit names and can be
used instead of them. To find all aliases that can be used for a particular unit, use:


# systemctl show nfs-server.service -p Names

Behavior of systemctl in a chroot environment


If you change the root directory using the chroot command, most systemctl commands refuse to
perform any action. The reason for this is that the systemd process and the user that used the chroot
command do not have the same view of the filesystem. This happens, for example, when systemctl is
invoked from a kickstart file.

The exception to this are unit file commands such as the systemctl enable and systemctl disable
commands. These commands do not need a running system and do not affect running processes, but
they do affect unit files. Therefore, you can run these commands even in chroot environment. For
example, to enable the httpd service on a system under the /srv/website1/ directory:

# chroot /srv/website1
# systemctl enable httpd.service
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service, pointing to
/usr/lib/systemd/system/httpd.service.

3.2.1. Listing services


To list all currently loaded service units, type the following at a shell prompt:

systemctl list-units --type service

For each service unit, this command displays its full name (UNIT) followed by a note whether the unit
has been loaded (LOAD), its high-level (ACTIVE) and low-level (SUB) unit activation state, and
a short description (DESCRIPTION).

By default, the systemctl list-units command displays only active units. If you want to list all loaded
units regardless of their state, run this command with the --all or -a command line option:

systemctl list-units --type service --all

You can also list all available service units to see if they are enabled. To do so, type:

systemctl list-unit-files --type service

For each service unit, this command displays its full name (UNIT FILE) followed by information on
whether the service unit is enabled (STATE). For information on how to determine the status of individual
service units, see Displaying service status.

Example 3.1. Listing services

To list all currently loaded service units, run the following command:

$ systemctl list-units --type service


UNIT LOAD ACTIVE SUB DESCRIPTION
abrt-ccpp.service loaded active exited Install ABRT coredump hook
abrt-oops.service loaded active running ABRT kernel log watcher
abrt-vmcore.service loaded active exited Harvest vmcores for ABRT
abrt-xorg.service loaded active running ABRT Xorg log watcher
abrtd.service loaded active running ABRT Automated Bug Reporting Tool
…​


systemd-vconsole-setup.service loaded active exited Setup Virtual Console


tog-pegasus.service loaded active running OpenPegasus CIM Server

LOAD = Reflects whether the unit definition was properly loaded.


ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.

46 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'

To list all installed service unit files to determine if they are enabled, type:

$ systemctl list-unit-files --type service


UNIT FILE STATE
abrt-ccpp.service enabled
abrt-oops.service enabled
abrt-vmcore.service enabled
abrt-xorg.service enabled
abrtd.service enabled
…​
wpa_supplicant.service disabled
ypbind.service disabled

208 unit files listed.

3.2.2. Displaying service status


To display detailed information about a service unit that corresponds to a system service, type the
following at a shell prompt:

systemctl status name.service

Replace name with the name of the service unit you want to inspect (for example, gdm). This command
displays the name of the selected service unit followed by its short description, one or more fields
described in Table 3.5, “Available service unit information” , and if it is executed by the root user, also the
most recent log entries.

Table 3.5. Available service unit information

Field       Description

Loaded      Information whether the service unit has been loaded, the absolute path to the unit file,
            and a note whether the unit is enabled.

Active      Information whether the service unit is running followed by a time stamp.

Main PID    The PID of the corresponding system service followed by its name.

Status      Additional information about the corresponding system service.

Process     Additional information about related processes.

CGroup      Additional information about related Control Groups (cgroups).

To only verify that a particular service unit is running, run the following command:

systemctl is-active name.service

Similarly, to determine whether a particular service unit is enabled, type:

systemctl is-enabled name.service

Note that both systemctl is-active and systemctl is-enabled return an exit status of 0 if the specified
service unit is running or enabled. For information on how to list all currently loaded service units, see
Listing services.
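
Because both commands return an exit status, they can also be used in shell scripts. A minimal sketch
(the httpd.service name is only an illustration) could look like this:

$ systemctl is-active --quiet httpd.service && echo "httpd is running"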

Example 3.2. Displaying service status

The service unit for the GNOME Display Manager is named gdm.service. To determine the current
status of this service unit, type the following at a shell prompt:

# systemctl status gdm.service


gdm.service - GNOME Display Manager
Loaded: loaded (/usr/lib/systemd/system/gdm.service; enabled)
Active: active (running) since Thu 2013-10-17 17:31:23 CEST; 5min ago
Main PID: 1029 (gdm)
CGroup: /system.slice/gdm.service
├─1029 /usr/sbin/gdm
├─1037 /usr/libexec/gdm-simple-slave --display-id /org/gno…​
└─1047 /usr/bin/Xorg :0 -background none -verbose -auth /r…​

Oct 17 17:31:23 localhost systemd[1]: Started GNOME Display Manager.

Example 3.3. Displaying services ordered to start before a service

To determine what services are ordered to start before the specified service, type the following at a
shell prompt:

# systemctl list-dependencies --after gdm.service


gdm.service
├─dbus.socket
├─getty@tty1.service
├─livesys.service
├─plymouth-quit.service

├─system.slice
├─systemd-journald.socket
├─systemd-user-sessions.service
└─basic.target
[output truncated]

Example 3.4. Displaying services ordered to start after a service

To determine what services are ordered to start after the specified service, type the following at a
shell prompt:

# systemctl list-dependencies --before gdm.service


gdm.service
├─dracut-shutdown.service
├─graphical.target
│ ├─systemd-readahead-done.service
│ ├─systemd-readahead-done.timer
│ └─systemd-update-utmp-runlevel.service
└─shutdown.target
├─systemd-reboot.service
└─final.target
└─systemd-reboot.service

3.2.3. Starting a service


To start a service unit that corresponds to a system service, type the following at a shell prompt as root:

systemctl start name.service

Replace name with the name of the service unit you want to start (for example, gdm). This command
starts the selected service unit in the current session. For information on how to enable a service unit to
be started at boot time, see Enabling a service . For information on how to determine the status of a
certain service unit, see Displaying service status.

Example 3.5. Starting a service

The service unit for the Apache HTTP Server is named httpd.service. To activate this service unit
and start the httpd daemon in the current session, run the following command as root:

# systemctl start httpd.service

3.2.4. Stopping a service


To stop a service unit that corresponds to a system service, type the following at a shell prompt as root:

systemctl stop name.service

Replace name with the name of the service unit you want to stop (for example, bluetooth). This
command stops the selected service unit in the current session. For information on how to disable a
service unit and prevent it from being started at boot time, see Disabling a service. For information on
how to determine the status of a certain service unit, see Displaying service status.

Example 3.6. Stopping a service

The service unit for the bluetoothd daemon is named bluetooth.service. To deactivate this service
unit and stop the bluetoothd daemon in the current session, run the following command as root:

# systemctl stop bluetooth.service

3.2.5. Restarting a service


To restart a service unit that corresponds to a system service, type the following at a shell prompt as
root:

systemctl restart name.service

Replace name with the name of the service unit you want to restart (for example, httpd). This command
stops the selected service unit in the current session and immediately starts it again. Importantly, if the
selected service unit is not running, this command starts it too. To tell systemd to restart a service unit
only if the corresponding service is already running, run the following command as root:

systemctl try-restart name.service

Certain system services also allow you to reload their configuration without interrupting their execution.
To do so, type as root:

systemctl reload name.service

Note that system services that do not support this feature ignore this command altogether. For
convenience, the systemctl command also supports the reload-or-restart and reload-or-try-restart
commands that restart such services instead. For information on how to determine the status of a
certain service unit, see Displaying service status.

Example 3.7. Restarting a service

In order to prevent users from encountering unnecessary error messages or partially rendered web
pages, the Apache HTTP Server allows you to edit and reload its configuration without the need to
restart it and interrupt actively processed requests. To do so, type the following at a shell prompt as
root:

# systemctl reload httpd.service

3.2.6. Enabling a service


To configure a service unit that corresponds to a system service to be automatically started at boot
time, type the following at a shell prompt as root:

systemctl enable name.service


Replace name with the name of the service unit you want to enable (for example, httpd). This command
reads the [Install] section of the selected service unit and creates appropriate symbolic links to the
/usr/lib/systemd/system/name.service file in the /etc/systemd/system/ directory and its
subdirectories. This command does not, however, rewrite links that already exist. If you want to ensure
that the symbolic links are re-created, use the following command as root:

systemctl reenable name.service

This command disables the selected service unit and immediately enables it again. For information on
how to determine whether a certain service unit is enabled to start at boot time, see Displaying service
status. For information on how to start a service in the current session, see Starting a service .

Example 3.8. Enabling a service

To configure the Apache HTTP Server to start automatically at boot time, run the following
command as root:

# systemctl enable httpd.service


Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to
/usr/lib/systemd/system/httpd.service.

3.2.7. Disabling a service


To prevent a service unit that corresponds to a system service from being automatically started at boot
time, type the following at a shell prompt as root:

systemctl disable name.service

Replace name with the name of the service unit you want to disable (for example, bluetooth). This
command reads the [Install] section of the selected service unit and removes appropriate symbolic links
to the /usr/lib/systemd/system/name.service file from the /etc/systemd/system/ directory and its
subdirectories. In addition, you can mask any service unit to prevent it from being started manually or by
another service. To do so, run the following command as root:

systemctl mask name.service

This command replaces the /etc/systemd/system/name.service file with a symbolic link to /dev/null,
rendering the actual unit file inaccessible to systemd. To revert this action and unmask a service unit,
type as root:

systemctl unmask name.service

For information on how to determine whether a certain service unit is enabled to start at boot time, see
Displaying service status. For information on how to stop a service in the current session, see Stopping a
service.

Example 3.9. Disabling a service

Example 3.6, “Stopping a service” illustrates how to stop the bluetooth.service unit in the current
session. To prevent this service unit from starting at boot time, type the following at a shell prompt
as root:


# systemctl disable bluetooth.service


Removed symlink /etc/systemd/system/bluetooth.target.wants/bluetooth.service.
Removed symlink /etc/systemd/system/dbus-org.bluez.service.

3.2.8. Starting a conflicting service


In systemd, positive and negative dependencies between services exist. Starting a particular service may
require starting one or more other services (positive dependency) or stopping one or more services
(negative dependency).

When you attempt to start a new service, systemd resolves all dependencies automatically. Note that
this is done without explicit notification to the user. If you are already running a service, and you attempt
to start another service with a negative dependency, the first service is automatically stopped.

For example, if you are running the postfix service, and you try to start the sendmail service, systemd
first automatically stops postfix, because these two services are conflicting and cannot run on the same
port.

3.3. WORKING WITH SYSTEMD TARGETS


Previous versions of Red Hat Enterprise Linux, which were distributed with SysV init or Upstart,
implemented a predefined set of runlevels that represented specific modes of operation. These
runlevels were numbered from 0 to 6 and were defined by a selection of system services to be run when
a particular runlevel was enabled by the system administrator. Starting with Red Hat Enterprise Linux 7,
the concept of runlevels has been replaced with systemd targets .

Systemd targets are represented by target units. Target units end with the .target file extension and
their only purpose is to group together other systemd units through a chain of dependencies. For
example, the graphical.target unit, which is used to start a graphical session, starts system services such
as the GNOME Display Manager (gdm.service) or Accounts Service (accounts-daemon.service) and
also activates the multi-user.target unit. Similarly, the multi-user.target unit starts other essential
system services such as NetworkManager (NetworkManager.service) or D-Bus (dbus.service) and
activates another target unit named basic.target.

Red Hat Enterprise Linux 7 was distributed with a number of predefined targets that are more or less
similar to the standard set of runlevels from the previous releases of this system. For compatibility
reasons, it also provides aliases for these targets that directly map them to SysV runlevels. Table 3.6,
“Comparison of SysV runlevels with systemd targets” provides a complete list of SysV runlevels and
their corresponding systemd targets.

Table 3.6. Comparison of SysV runlevels with systemd targets

Runlevel    Target Units                           Description

0           runlevel0.target, poweroff.target      Shut down and power off the system.

1           runlevel1.target, rescue.target        Set up a rescue shell.

2           runlevel2.target, multi-user.target    Set up a non-graphical multi-user system.

3           runlevel3.target, multi-user.target    Set up a non-graphical multi-user system.

4           runlevel4.target, multi-user.target    Set up a non-graphical multi-user system.

5           runlevel5.target, graphical.target     Set up a graphical multi-user system.

6           runlevel6.target, reboot.target        Shut down and reboot the system.

To view, change, or configure systemd targets, use the systemctl utility as described in Table 3.7,
“Comparison of SysV init commands with systemctl” and in the sections below. The runlevel and telinit
commands are still available in the system and work as expected, but are only included for compatibility
reasons and should be avoided.

Table 3.7. Comparison of SysV init commands with systemctl

Old Command         New Command                           Description

runlevel            systemctl list-units --type target    Lists currently loaded target units.

telinit runlevel    systemctl isolate name.target         Changes the current target.

3.3.1. Viewing the default target


To determine which target unit is used by default, run the following command:

systemctl get-default

This command resolves the symbolic link located at /etc/systemd/system/default.target and displays
the result.

Example 3.10. Viewing the default target

To display the default target unit, type:

$ systemctl get-default
graphical.target

3.3.2. Viewing the current target


To list all currently loaded target units, type the following command at a shell prompt:


systemctl list-units --type target

For each target unit, this command displays its full name (UNIT) followed by a note whether the unit has
been loaded (LOAD), its high-level (ACTIVE) and low-level (SUB) unit activation state, and a short
description (DESCRIPTION).

By default, the systemctl list-units command displays only active units. If you want to list all loaded
units regardless of their state, run this command with the --all or -a command line option:

systemctl list-units --type target --all

Example 3.11. Viewing the current target

To list all currently loaded target units, run:

$ systemctl list-units --type target


UNIT LOAD ACTIVE SUB DESCRIPTION
basic.target loaded active active Basic System
cryptsetup.target loaded active active Encrypted Volumes
getty.target loaded active active Login Prompts
graphical.target loaded active active Graphical Interface
local-fs-pre.target loaded active active Local File Systems (Pre)
local-fs.target loaded active active Local File Systems
multi-user.target loaded active active Multi-User System
network.target loaded active active Network
paths.target loaded active active Paths
remote-fs.target loaded active active Remote File Systems
sockets.target loaded active active Sockets
sound.target loaded active active Sound Card
spice-vdagentd.target loaded active active Agent daemon for Spice guests
swap.target loaded active active Swap
sysinit.target loaded active active System Initialization
time-sync.target loaded active active System Time Synchronized
timers.target loaded active active Timers

LOAD = Reflects whether the unit definition was properly loaded.


ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.

17 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

3.3.3. Changing the default target


To configure the system to use a different target unit by default, type the following at a shell prompt as
root:

systemctl set-default name.target

Replace name with the name of the target unit you want to use by default (for example, multi-user).
This command replaces the /etc/systemd/system/default.target file with a symbolic link to
/usr/lib/systemd/system/name.target, where name is the name of the target unit you want to use.


Example 3.12. Changing the default target

To configure the system to use the multi-user.target unit by default, run the following command as
root:

# systemctl set-default multi-user.target


rm '/etc/systemd/system/default.target'
ln -s '/usr/lib/systemd/system/multi-user.target' '/etc/systemd/system/default.target'

3.3.4. Changing the current target


To change to a different target unit in the current session, type the following at a shell prompt as root:

systemctl isolate name.target

Replace name with the name of the target unit you want to use (for example, multi-user). This
command starts the target unit named name and all dependent units, and immediately stops all others.

Example 3.13. Changing the current target

To turn off the graphical user interface and change to the multi-user.target unit in the current
session, run the following command as root:

# systemctl isolate multi-user.target

3.3.5. Changing to rescue mode


Rescue mode provides a convenient single-user environment and allows you to repair your system in
situations when it is unable to complete a regular booting process. In rescue mode, the system attempts
to mount all local file systems and start some important system services, but it does not activate
network interfaces or allow more users to be logged into the system at the same time. Rescue mode
requires the root password.

To change the current target and enter rescue mode in the current session, type the following at a shell
prompt as root:

systemctl rescue

This command is similar to systemctl isolate rescue.target, but it also sends an informative message to
all users that are currently logged into the system. To prevent systemd from sending this message, run
this command with the --no-wall command line option:

systemctl --no-wall rescue

For information on how to enter emergency mode, see Section 3.3.6, “Changing to emergency mode” .

Example 3.14. Changing to rescue mode

To enter rescue mode in the current session, run the following command as root:


# systemctl rescue

Broadcast message from root@localhost on pts/0 (Fri 2013-10-25 18:23:15 CEST):

The system is going down to rescue mode NOW!

3.3.6. Changing to emergency mode


Emergency mode provides the most minimal environment possible and allows you to repair your system
even in situations when the system is unable to enter rescue mode. In emergency mode, the system
mounts the root file system only for reading, does not attempt to mount any other local file systems,
does not activate network interfaces, and only starts a few essential services. Emergency mode requires
the root password.

To change the current target and enter emergency mode, type the following at a shell prompt as root:

systemctl emergency

This command is similar to systemctl isolate emergency.target, but it also sends an informative
message to all users that are currently logged into the system. To prevent systemd from sending this
message, run this command with the --no-wall command line option:

systemctl --no-wall emergency

For information on how to enter rescue mode, see Section 3.3.5, “Changing to rescue mode” .

Example 3.15. Changing to emergency mode

To enter emergency mode without sending a message to all users that are currently logged into the
system, run the following command as root:

# systemctl --no-wall emergency

3.4. SHUTTING DOWN, SUSPENDING, AND HIBERNATING THE SYSTEM

In Red Hat Enterprise Linux 7, the systemctl utility replaced a number of power management
commands used in previous versions of Red Hat Enterprise Linux. The commands listed in Table 3.8,
“Comparison of power management commands with systemctl” are still available in the system for
compatibility reasons, but it is advised that you use systemctl when possible.

Table 3.8. Comparison of power management commands with systemctl

Old Command New Command Description

halt systemctl halt Halts the system.

poweroff systemctl poweroff Powers off the system.


reboot systemctl reboot Restarts the system.

pm-suspend systemctl suspend Suspends the system.

pm-hibernate systemctl hibernate Hibernates the system.

pm-suspend-hybrid systemctl hybrid-sleep Hibernates and suspends the system.

3.4.1. Shutting down the system


The systemctl utility provides commands for shutting down the system; however, the traditional
shutdown command is also supported. Although the shutdown command will call the systemctl utility
to perform the shutdown, it has an advantage in that it also supports a time argument. This is
particularly useful for scheduled maintenance and to allow more time for users to react to the warning
that a system shutdown has been scheduled. The option to cancel the shutdown can also be an
advantage.

Using systemctl commands


To shut down the system and power off the machine, type the following at a shell prompt as root:

systemctl poweroff

To shut down and halt the system without powering off the machine, run the following command as
root:

systemctl halt

By default, running either of these commands causes systemd to send an informative message to all
users that are currently logged into the system. To prevent systemd from sending this message, run the
selected command with the --no-wall command line option, for example:

systemctl --no-wall poweroff

Using the shutdown command


To shut down the system and power off the machine at a certain time, use a command in the following
format as root:

shutdown --poweroff hh:mm

Where hh:mm is the time in 24 hour clock format. The /run/nologin file is created 5 minutes before
system shutdown to prevent new logins. When a time argument is used, an optional message, the wall
message, can be appended to the command.

To shut down and halt the system after a delay, without powering off the machine, use a command in
the following format as root:

shutdown --halt +m


Where +m is the delay time in minutes. The now keyword is an alias for +0.

A pending shutdown can be canceled by the root user as follows:

shutdown -c

See the shutdown(8) manual page for further command options.
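
For example, the following commands (the time, delay, and message are illustrative) schedule a
power-off shutdown at 01:30 with a wall message, halt the system after a ten-minute delay, and cancel
a pending shutdown, respectively:

shutdown --poweroff 01:30 "The system will power off at 01:30 for maintenance"
shutdown --halt +10
shutdown -c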

3.4.2. Restarting the system


To restart the system, run the following command as root:

systemctl reboot

By default, this command causes systemd to send an informative message to all users that are currently
logged into the system. To prevent systemd from sending this message, run this command with the --
no-wall command line option:

systemctl --no-wall reboot

3.4.3. Suspending the system


To suspend the system, type the following at a shell prompt as root:

systemctl suspend

This command saves the system state in RAM and with the exception of the RAM module, powers off
most of the devices in the machine. When you turn the machine back on, the system then restores its
state from RAM without having to boot again. Because the system state is saved in RAM and not on the
hard disk, restoring the system from suspend mode is significantly faster than restoring it from
hibernation, but as a consequence, a suspended system state is also vulnerable to power outages.

For information on how to hibernate the system, see Section 3.4.4, “Hibernating the system”.

3.4.4. Hibernating the system


To hibernate the system, type the following at a shell prompt as root:

systemctl hibernate

This command saves the system state on the hard disk drive and powers off the machine. When you turn
the machine back on, the system then restores its state from the saved data without having to boot
again. Because the system state is saved on the hard disk and not in RAM, the machine does not have to
maintain electrical power to the RAM module, but as a consequence, restoring the system from
hibernation is significantly slower than restoring it from suspend mode.

To hibernate and suspend the system, run the following command as root:

systemctl hybrid-sleep

For information on how to suspend the system, see Section 3.4.3, “Suspending the system”.


3.5. WORKING WITH SYSTEMD UNIT FILES


A unit file contains configuration directives that describe the unit and define its behavior. Several
systemctl commands work with unit files in the background. To make finer adjustments, a system
administrator must edit or create unit files manually. Table 3.1, “Systemd unit files locations” lists the
three main directories where unit files are stored on the system; the /etc/systemd/system/ directory is
reserved for unit files created or customized by the system administrator.

Unit file names take the following form:

unit_name.type_extension

Here, unit_name stands for the name of the unit and type_extension identifies the unit type; see
Table 3.2, “Available systemd unit types” for a complete list of unit types. For example, there are usually
both an sshd.service and an sshd.socket unit present on your system.

Unit files can be supplemented with a directory for additional configuration files. For example, to add
custom configuration options to sshd.service, create the sshd.service.d/custom.conf file and insert
additional directives there. For more information on configuration directories, see Section 3.5.4,
“Modifying existing unit files”.

Also, the sshd.service.wants/ and sshd.service.requires/ directories can be created. These


directories contain symbolic links to unit files that are dependencies of the sshd service. The symbolic
links are automatically created either during installation according to [Install] unit file options or at
runtime based on [Unit] options. It is also possible to create these directories and symbolic links
manually. For more details on [Install] and [Unit] options, see the tables below.
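
For example, a minimal sketch of the sshd.service.d/custom.conf drop-in mentioned above could look
as follows (the Nice setting is an illustrative choice, not a value prescribed by the sshd package):

[Service]
Nice=10

After creating or changing such a drop-in, run systemctl daemon-reload so that systemd picks up the
new configuration.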

Many unit file options can be set using the so-called unit specifiers – wildcard strings that are
dynamically replaced with unit parameters when the unit file is loaded. This enables creation of generic
unit files that serve as templates for generating instantiated units. See Section 3.5.5, “Working with
instantiated units” for details.

3.5.1. Understanding the unit file structure


Unit files typically consist of three sections:

The [Unit] section — contains generic options that are not dependent on the type of the unit.
These options provide unit description, specify the unit’s behavior, and set dependencies to
other units. For a list of most frequently used [Unit] options, see Table 3.9, “Important [Unit]
section options”.

The [Unit type] section — if a unit has type-specific directives, these are grouped under a
section named after the unit type. For example, service unit files contain the [Service] section.

The [Install] section — contains information about unit installation used by systemctl enable
and disable commands. For a list of options for the [Install] section, see Table 3.11, “Important
[Install] section options”.

Table 3.9. Important [Unit] section options

Description
    A meaningful description of the unit. This text is displayed, for example, in the output of the
    systemctl status command.

Documentation
    Provides a list of URIs referencing documentation for the unit.

After [b]
    Defines the order in which units are started. The unit starts only after the units specified in After
    are active. Unlike Requires, After does not explicitly activate the specified units. The Before
    option has the opposite functionality to After.

Requires
    Configures dependencies on other units. The units listed in Requires are activated together with
    the unit. If any of the required units fail to start, the unit is not activated.

Wants
    Configures weaker dependencies than Requires. If any of the listed units does not start
    successfully, it has no impact on the unit activation. This is the recommended way to establish
    custom unit dependencies.

Conflicts
    Configures negative dependencies, an opposite to Requires.

[a] For a complete list of options configurable in the [Unit] section, see the systemd.unit(5) manual page.
[b] In most cases, it is sufficient to set only the ordering dependencies with After and Before unit file
options. If you also set a requirement dependency with Wants (recommended) or Requires, the ordering
dependency still needs to be specified. That is because ordering and requirement dependencies work
independently from each other.
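
For instance, footnote [b] above means that a weak requirement and its ordering are usually declared
together; a sketch of such a pairing in the [Unit] section (the target name is only a common example):

[Unit]
Wants=network-online.target
After=network-online.target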

Table 3.10. Important [Service] section options

Type
    Configures the unit process startup type that affects the functionality of ExecStart and related
    options. One of:

    * simple – The default value. The process started with ExecStart is the main process of the
      service.

    * forking – The process started with ExecStart spawns a child process that becomes the main
      process of the service. The parent process exits when the startup is complete.

    * oneshot – This type is similar to simple, but the process exits before starting consequent units.

    * dbus – This type is similar to simple, but consequent units are started only after the main
      process gains a D-Bus name.

    * notify – This type is similar to simple, but consequent units are started only after a notification
      message is sent via the sd_notify() function.

    * idle – similar to simple, the actual execution of the service binary is delayed until all jobs are
      finished, which avoids mixing the status output with shell output of services.

ExecStart
    Specifies commands or scripts to be executed when the unit is started. ExecStartPre and
    ExecStartPost specify custom commands to be executed before and after ExecStart.
    Type=oneshot enables specifying multiple custom commands that are then executed sequentially.

ExecStop
    Specifies commands or scripts to be executed when the unit is stopped.

ExecReload
    Specifies commands or scripts to be executed when the unit is reloaded.

Restart
    With this option enabled, the service is restarted after its process exits, with the exception of a
    clean stop by the systemctl command.

RemainAfterExit
    If set to True, the service is considered active even when all its processes exited. Default value is
    False. This option is especially useful if Type=oneshot is configured.

[a] For a complete list of options configurable in the [Service] section, see the systemd.service(5)
manual page.
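
As an illustration of how Type and RemainAfterExit work together (the script path is hypothetical), a
one-off setup task could be declared as:

[Service]
Type=oneshot
ExecStart=/usr/local/bin/prepare-cache.sh
RemainAfterExit=yes

With this configuration, the unit is considered active after the script exits, so starting it again does not
rerun the script until the unit is explicitly stopped.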

Table 3.11. Important [Install] section options

Alias
    Provides a space-separated list of additional names for the unit. Most systemctl commands,
    excluding systemctl enable, can use aliases instead of the actual unit name.

RequiredBy
    A list of units that depend on the unit. When this unit is enabled, the units listed in RequiredBy gain
    a Require dependency on the unit.

WantedBy
    A list of units that weakly depend on the unit. When this unit is enabled, the units listed in
    WantedBy gain a Want dependency on the unit.

Also
    Specifies a list of units to be installed or uninstalled along with the unit.

DefaultInstance
    Limited to instantiated units, this option specifies the default instance for which the unit is enabled.
    See Section 3.5.5, “Working with instantiated units”.

[a] For a complete list of options configurable in the [Install] section, see the systemd.unit(5) manual
page.
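
For example, an [Install] section combining several of these options might look as follows (the alias
and the additional socket unit are hypothetical names):

[Install]
WantedBy=multi-user.target
Alias=myapp.service
Also=myapp.socket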

A whole range of options can be used to fine-tune the unit configuration. The example below shows
a service unit installed on the system. Moreover, unit file options can be defined in a way that enables
dynamic creation of units, as described in Working with instantiated units.

Example 3.16. postfix.service unit file

What follows is the content of the /usr/lib/systemd/system/postfix.service unit file as currently


provided by the postfix package:

[Unit]
Description=Postfix Mail Transport Agent
After=syslog.target network.target
Conflicts=sendmail.service exim.service

[Service]
Type=forking
PIDFile=/var/spool/postfix/pid/master.pid
EnvironmentFile=-/etc/sysconfig/network
ExecStartPre=-/usr/libexec/postfix/aliasesdb
ExecStartPre=-/usr/libexec/postfix/chroot-update
ExecStart=/usr/sbin/postfix start
ExecReload=/usr/sbin/postfix reload
ExecStop=/usr/sbin/postfix stop

[Install]
WantedBy=multi-user.target

The [Unit] section describes the service and specifies the ordering dependencies as well as conflicting
units. In [Service], a sequence of custom scripts is specified to be executed during unit activation, on
stop, and on reload. EnvironmentFile points to the location where environment variables for the
service are defined, and PIDFile specifies a stable PID for the main process of the service. Finally, the
[Install] section lists units that depend on the service.

3.5.2. Creating custom unit files


There are several use cases for creating unit files from scratch: you could run a custom daemon, create a
second instance of some existing service (as in Creating a second instance of the sshd service ), or
import a SysV init script (more in Converting SysV init scripts to unit files ). On the other hand, if you
intend just to modify or extend the behavior of an existing unit, use the instructions from Modifying
existing unit files. The following procedure describes the general process of creating a custom service:

1. Prepare the executable file with the custom service. This can be a custom-created script, or an
executable delivered by a software provider. If required, prepare a PID file to hold a constant
PID for the main process of the custom service. It is also possible to include environment files to
store shell variables for the service. Make sure the source script is executable (for example, by running
chmod a+x on it) and is not interactive.

2. Create a unit file in the /etc/systemd/system/ directory and make sure it has correct file
permissions. Execute as root:

touch /etc/systemd/system/name.service

chmod 664 /etc/systemd/system/name.service

Replace name with the name of the service to be created. Note that the file does not need to be
executable.

3. Open the name.service file created in the previous step, and add the service configuration
options. There is a variety of options that can be used depending on the type of service you wish
to create, see Section 3.5.1, “Understanding the unit file structure” . The following is an example
unit configuration for a network-related service:

[Unit]
Description=service_description
After=network.target

[Service]
ExecStart=path_to_executable
Type=forking
PIDFile=path_to_pidfile

[Install]
WantedBy=default.target

Where:

service_description is an informative description that is displayed in journal log files and in


the output of the systemctl status command.

the After setting ensures that the service is started only after the network is running. Add a
space-separated list of other relevant services or targets.

path_to_executable stands for the path to the actual service executable.

Type=forking is used for daemons that make the fork system call. The main process of the
service is created with the PID specified in path_to_pidfile. Find other startup types in
Table 3.10, “Important [Service] section options” .

WantedBy states the target or targets that the service should be started under. Think of
these targets as a replacement for the older concept of runlevels; see Section 3.3,
“Working with systemd targets” for details.

4. Notify systemd that a new name.service file exists by executing the following command as
root:

systemctl daemon-reload

systemctl start name.service


WARNING

Always run the systemctl daemon-reload command after creating new


unit files or modifying existing unit files. Otherwise, the systemctl start or
systemctl enable commands could fail due to a mismatch between states
of systemd and actual service unit files on disk. Note that on systems with
a large number of units this can take a long time, as the state of each unit
has to be serialized and subsequently deserialized during the reload.

Example 3.17. Creating the emacs.service file

When using the Emacs text editor, it is often faster and more convenient to have it running in the
background instead of starting a new instance of the program whenever editing a file. The following
steps show how to create a unit file for Emacs, so that it can be handled like a service.

1. Create a unit file in the /etc/systemd/system/ directory and make sure it has the correct file
permissions. Execute as root:

# touch /etc/systemd/system/emacs.service

# chmod 664 /etc/systemd/system/emacs.service


2. Add the following content to the file:

[Unit]
Description=Emacs: the extensible, self-documenting text editor

[Service]
Type=forking
ExecStart=/usr/bin/emacs --daemon
ExecStop=/usr/bin/emacsclient --eval "(kill-emacs)"
Environment=SSH_AUTH_SOCK=%t/keyring/ssh
Restart=always

[Install]
WantedBy=default.target

With the above configuration, the /usr/bin/emacs executable is started in daemon mode on
service start. The SSH_AUTH_SOCK environment variable is set using the "%t" unit specifier
that stands for the runtime directory. The service also restarts the emacs process if it exits
unexpectedly.

3. Execute the following commands to reload the configuration and start the custom service:

# systemctl daemon-reload

# systemctl start emacs.service

As the editor is now registered as a systemd service, you can use all standard systemctl commands.
For example, run systemctl status emacs to display the editor’s status or systemctl enable emacs
to make the editor start automatically on system boot.

Example 3.18. Creating a second instance of the sshd service

System Administrators often need to configure and run multiple instances of a service. This is done
by creating copies of the original service configuration files and modifying certain parameters to
avoid conflicts with the primary instance of the service. The following procedure shows how to
create a second instance of the sshd service:

1. Create a copy of the sshd_config file that will be used by the second daemon:

# cp /etc/ssh/sshd{,-second}_config

2. Edit the sshd-second_config file created in the previous step to assign a different port
number and PID file to the second daemon:

Port 22220
PidFile /var/run/sshd-second.pid

See the sshd_config(5) manual page for more information on Port and PidFile options.
Make sure the port you choose is not in use by any other service. The PID file does not have
to exist before running the service, it is generated automatically on service start.

3. Create a copy of the systemd unit file for the sshd service:


# cp /usr/lib/systemd/system/sshd.service /etc/systemd/system/sshd-second.service

4. Alter the sshd-second.service created in the previous step as follows:

a. Modify the Description option:

Description=OpenSSH server second instance daemon

b. Add sshd.service to services specified in the After option, so that the second instance
starts only after the first one has already started:

After=syslog.target network.target auditd.service sshd.service

c. The first instance of sshd includes key generation, therefore remove the
ExecStartPre=/usr/sbin/sshd-keygen line.

d. Add the -f /etc/ssh/sshd-second_config parameter to the sshd command, so that the


alternative configuration file is used:

ExecStart=/usr/sbin/sshd -D -f /etc/ssh/sshd-second_config $OPTIONS

e. After the above modifications, the sshd-second.service should look as follows:

[Unit]
Description=OpenSSH server second instance daemon
After=syslog.target network.target auditd.service sshd.service

[Service]
EnvironmentFile=/etc/sysconfig/sshd
ExecStart=/usr/sbin/sshd -D -f /etc/ssh/sshd-second_config $OPTIONS
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartSec=42s

[Install]
WantedBy=multi-user.target

5. If using SELinux, add the port for the second instance of sshd to SSH ports, otherwise the
second instance of sshd will not be allowed to bind to the port:

# semanage port -a -t ssh_port_t -p tcp 22220

6. Enable sshd-second.service, so that it starts automatically upon boot:

# systemctl enable sshd-second.service

Verify whether sshd-second.service is running by using the systemctl status command. Also,
verify that the port is enabled correctly by connecting to the service:

$ ssh -p 22220 user@server

If the firewall is in use, make sure that it is configured appropriately in order to allow
connections to the second instance of sshd.

To learn how to properly choose a target for ordering and dependencies of your custom unit files,
see the following articles:

How to write a service unit file which enforces that particular services have to be started

How to decide what dependencies a systemd service unit definition should have

Additional information with some real-world examples of cases triggered by the ordering and
dependencies in a unit file is available in Red Hat Knowledgebase article Is there any useful information
about writing unit files?

If you want to set limits for services started by systemd, see the Red Hat Knowledgebase article How to
set limits for services in RHEL 7 and systemd. These limits need to be set in the service’s unit file. Note
that systemd ignores limits set in the /etc/security/limits.conf and /etc/security/limits.d/*.conf
configuration files. The limits defined in these files are set by PAM when starting a login session, but
daemons started by systemd do not use PAM login sessions.
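
For example, such limits could be set in a drop-in file similar to the following sketch (the file name and
the values are illustrative):

# /etc/systemd/system/name.service.d/limits.conf
[Service]
LimitNOFILE=16384
LimitNPROC=4096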

3.5.3. Converting SysV init scripts to unit files


Before taking time to convert a SysV init script to a unit file, make sure that the conversion was not
already done elsewhere. All core services installed on Red Hat Enterprise Linux come with default unit
files, and the same applies for many third-party software packages.

Converting an init script to a unit file requires analyzing the script and extracting the necessary
information from it. Based on this data you can create a unit file. As init scripts can vary greatly
depending on the type of the service, you might need to employ more configuration options for
translation than outlined in this chapter. Note that some levels of customization that were available with
init scripts are no longer supported by systemd units.

The majority of information needed for conversion is provided in the script’s header. The following
example shows the opening section of the init script used to start the postfix service on Red Hat
Enterprise Linux 6:

#!/bin/bash
# postfix      Postfix Mail Transfer Agent
# chkconfig: 2345 80 30
# description: Postfix is a Mail Transport Agent, which is the program that moves mail from one machine to another.
# processname: master
# pidfile: /var/spool/postfix/pid/master.pid
# config: /etc/postfix/main.cf
# config: /etc/postfix/master.cf
### BEGIN INIT INFO
# Provides: postfix MTA
# Required-Start: $local_fs $network $remote_fs
# Required-Stop: $local_fs $network $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: start and stop postfix
# Description: Postfix is a Mail Transport Agent, which is the program that moves mail from one machine to another.
### END INIT INFO

In the above example, only lines starting with # chkconfig and # description are mandatory, so you
might not find the rest in different init files. The text enclosed between the BEGIN INIT INFO and END
INIT INFO lines is called Linux Standard Base (LSB) header. If specified, LSB headers contain
directives defining the service description, dependencies, and default runlevels. What follows is an
overview of analytic tasks aiming to collect the data needed for a new unit file. The postfix init script is
used as an example, see the resulting postfix unit file in Example 3.16, “postfix.service unit file” .

Finding the service description

Find descriptive information about the script on the line starting with #description. Use this description
together with the service name in the Description option in the [Unit] section of the unit file. The LSB
header might contain similar data on the #Short-Description and #Description lines.

Finding service dependencies


The LSB header might contain several directives that form dependencies between services. Most of
them are translatable to systemd unit options; see Table 3.12, “Dependency options from the LSB
header”.

Table 3.12. Dependency options from the LSB header

Provides (no unit file equivalent)
    Specifies the boot facility name of the service, that can be referenced in other init scripts (with the
    "$" prefix). This is no longer needed as unit files refer to other units by their file names.

Required-Start (unit file equivalent: After, Before)
    Contains boot facility names of required services. This is translated as an ordering dependency,
    boot facility names are replaced with unit file names of corresponding services or targets they
    belong to. For example, in case of postfix, the Required-Start dependency on $network was
    translated to the After dependency on network.target.

Should-Start (unit file equivalent: After, Before)
    Constitutes weaker dependencies than Required-Start. Failed Should-Start dependencies do not
    affect the service startup.

Required-Stop, Should-Stop (unit file equivalent: Conflicts)
    Constitute negative dependencies.

Finding default targets of the service


The line starting with #chkconfig contains three numerical values. The most important is the first
number that represents the default runlevels in which the service is started. Map these runlevels to
equivalent systemd targets. Then list these targets in the WantedBy option in the [Install] section of the
unit file. For example, postfix was previously started in runlevels 2, 3, 4, and 5, which translates to
multi-user.target and graphical.target. Note that graphical.target depends on multi-user.target, therefore
it is not necessary to specify both, as in Example 3.16, “postfix.service unit file” . You might find information
on default and forbidden runlevels also at #Default-Start and #Default-Stop lines in the LSB header.

The other two values specified on the #chkconfig line represent startup and shutdown priorities of the
init script. These values are interpreted by systemd if it loads the init script, but there is no unit file
equivalent.


Finding files used by the service


Init scripts require loading a function library from a dedicated directory and allow importing
configuration, environment, and PID files. Environment variables are specified on the line starting with
#config in the init script header, which translates to the EnvironmentFile unit file option. The PID file
specified on the #pidfile init script line is imported to the unit file with the PIDFile option.

The key information that is not included in the init script header is the path to the service executable,
and potentially some other files required by the service. In previous versions of Red Hat Enterprise Linux,
init scripts used a Bash case statement to define the behavior of the service on default actions, such as
start, stop, or restart, as well as custom-defined actions. The following excerpt from the postfix init
script shows the block of code to be executed at service start.

conf_check() {
[ -x /usr/sbin/postfix ] || exit 5
[ -d /etc/postfix ] || exit 6
[ -d /var/spool/postfix ] || exit 5
}

make_aliasesdb() {
if [ "$(/usr/sbin/postconf -h alias_database)" == "hash:/etc/aliases" ]
then
# /etc/aliases.db might be used by other MTA, make sure nothing
# has touched it since our last newaliases call
[ /etc/aliases -nt /etc/aliases.db ] ||
[ "$ALIASESDB_STAMP" -nt /etc/aliases.db ] ||
[ "$ALIASESDB_STAMP" -ot /etc/aliases.db ] || return
/usr/bin/newaliases
touch -r /etc/aliases.db "$ALIASESDB_STAMP"
else
/usr/bin/newaliases
fi
}

start() {
[ "$EUID" != "0" ] && exit 4
# Check that networking is up.
[ ${NETWORKING} = "no" ] && exit 1
conf_check
# Start daemons.
echo -n $"Starting postfix: "
make_aliasesdb >/dev/null 2>&1
[ -x $CHROOT_UPDATE ] && $CHROOT_UPDATE
/usr/sbin/postfix start 2>/dev/null 1>&2 && success || failure $"$prog start"
RETVAL=$?
[ $RETVAL -eq 0 ] && touch $lockfile
echo
return $RETVAL
}

The extensibility of the init script allowed specifying two custom functions, conf_check() and
make_aliasesdb(), that are called from the start() function block. On closer look, several external files
and directories are mentioned in the above code: the main service executable /usr/sbin/postfix, the
/etc/postfix/ and /var/spool/postfix/ configuration directories, as well as the /usr/sbin/postconf
utility.

Systemd supports only the predefined actions, but enables executing custom executables with the
ExecStart, ExecStartPre, ExecStartPost, ExecStop, and ExecReload options. The /usr/sbin/postfix
executable together with supporting scripts is executed on service start. Consult the postfix unit file at
Example 3.16, “postfix.service unit file”.

Converting complex init scripts requires understanding the purpose of every statement in the script.
Some of the statements are specific to the operating system version, therefore you do not need to
translate them. On the other hand, some adjustments might be needed in the new environment, both in
the unit file and in the service executable and supporting files.

3.5.4. Modifying existing unit files


Services installed on the system come with default unit files that are stored in the
/usr/lib/systemd/system/ directory. System Administrators should not modify these files directly,
therefore any customization must be confined to configuration files in the /etc/systemd/system/
directory. Depending on the extent of the required changes, pick one of the following approaches:

Create a directory for supplementary configuration files at /etc/systemd/system/unit.d/. This


method is recommended for most use cases. It enables extending the default configuration with
additional functionality, while still referring to the original unit file. Changes to the default unit
introduced with a package upgrade are therefore applied automatically. See Extending the
default unit configuration for more information.

Create a copy of the original unit file /usr/lib/systemd/system/ in /etc/systemd/system/ and


make changes there. The copy overrides the original file, therefore changes introduced with the
package update are not applied. This method is useful for making significant unit changes that
should persist regardless of package updates. See Overriding the default unit configuration for
details.

In order to return to the default configuration of the unit, just delete custom-created configuration files
in /etc/systemd/system/. To apply changes to unit files without rebooting the system, execute:

systemctl daemon-reload

The daemon-reload option reloads all unit files and recreates the entire dependency tree, which is
needed to immediately apply any change to a unit file. As an alternative, you can achieve the same result
with the following command, which must be executed under the root user:

init q

Also, if the modified unit file belongs to a running service, this service must be restarted to accept new
settings:

systemctl restart name.service

IMPORTANT

To modify properties, such as dependencies or timeouts, of a service that is handled by a


SysV initscript, do not modify the initscript itself. Instead, create a systemd drop-in
configuration file for the service as described in Extending the default unit configuration
and Overriding the default unit configuration . Then manage this service in the same way
as a normal systemd service.

For example, to extend the configuration of the network service, do not modify the
/etc/rc.d/init.d/network initscript file. Instead, create new directory
/etc/systemd/system/network.service.d/ and a systemd drop-in file
/etc/systemd/system/network.service.d/my_config.conf. Then, put the modified values
into the drop-in file. Note: systemd knows the network service as network.service,
which is why the created directory must be called network.service.d.

Extending the default unit configuration


To extend the default unit file with additional configuration options, first create a configuration directory
in /etc/systemd/system/. If extending a service unit, execute the following command as root:

mkdir /etc/systemd/system/name.service.d/

Replace name with the name of the service you want to extend. The above syntax applies to all unit
types.

Create a configuration file in the directory made in the previous step. Note that the file name must end
with the .conf suffix. Type:

touch /etc/systemd/system/name.service.d/config_name.conf

Replace config_name with the name of the configuration file. This file adheres to the normal unit file
structure, therefore all directives must be specified under appropriate sections, see Section 3.5.1,
“Understanding the unit file structure”.

For example, to add a custom dependency, create a configuration file with the following content:

[Unit]
Requires=new_dependency
After=new_dependency

Where new_dependency stands for the unit to be marked as a dependency. Another example is a
configuration file that restarts the service after its main process exited, with a delay of 30 seconds:

[Service]
Restart=always
RestartSec=30

It is recommended to create small configuration files focused only on one task. Such files can be easily
moved or linked to configuration directories of other services.

To apply changes made to the unit, execute as root:

systemctl daemon-reload
systemctl restart name.service


Example 3.19. Extending the httpd.service configuration

To modify the httpd.service unit so that a custom shell script is automatically executed when starting
the Apache service, perform the following steps. First, create a directory and a custom configuration
file:

# mkdir /etc/systemd/system/httpd.service.d/

# touch /etc/systemd/system/httpd.service.d/custom_script.conf

Provided that the script you want to start automatically with Apache is located at
/usr/local/bin/custom.sh, insert the following text to the custom_script.conf file:

[Service]
ExecStartPost=/usr/local/bin/custom.sh

To apply the unit changes, execute:

# systemctl daemon-reload

# systemctl restart httpd.service

NOTE

The configuration files from configuration directories in /etc/systemd/system/ take


precedence over unit files in /usr/lib/systemd/system/. Therefore, if the configuration
files contain an option that can be specified only once, such as Description or ExecStart,
the default value of this option is overridden. Note that in the output of the systemd-
delta command, described in Monitoring overridden units, such units are always marked as
[EXTENDED], even though in sum, certain options are actually overridden.

Overriding the default unit configuration


To make changes that will persist after updating the package that provides the unit file, first copy the
file to the /etc/systemd/system/ directory. To do so, execute the following command as root:

cp /usr/lib/systemd/system/name.service /etc/systemd/system/name.service

Where name stands for the name of the service unit you wish to modify. The above syntax applies to all
unit types.

Open the copied file with a text editor, and make the desired changes. To apply the unit changes,
execute as root:

systemctl daemon-reload
systemctl restart name.service

Example 3.20. Changing the timeout limit

You can specify a timeout value per service to prevent a malfunctioning service from freezing the
system. Otherwise, timeout is set by default to 90 seconds for normal services and to 300 seconds
for SysV-compatible services.

For example, to extend timeout limit for the httpd service:

1. Copy the httpd unit file to the /etc/systemd/system/ directory:

cp /usr/lib/systemd/system/httpd.service /etc/systemd/system/httpd.service

2. Open the /etc/systemd/system/httpd.service file and specify the TimeoutStartSec value in
the [Service] section:

…​
[Service]
…​
PrivateTmp=true
TimeoutStartSec=10

[Install]
WantedBy=multi-user.target
…​

3. Reload the systemd daemon:

systemctl daemon-reload

4. Optional. Verify the new timeout value:

systemctl show httpd -p TimeoutStartUSec

NOTE

To change the timeout limit globally, set the DefaultTimeoutStartSec parameter in the
/etc/systemd/system.conf file.
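
For example, assuming you want a 60-second global default, the relevant part of
/etc/systemd/system.conf would look as follows:

[Manager]
DefaultTimeoutStartSec=60s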

Monitoring overridden units


To display an overview of overridden or modified unit files, use the following command:

systemd-delta

For example, the output of the above command can look as follows:

[EQUIVALENT] /etc/systemd/system/default.target → /usr/lib/systemd/system/default.target


[OVERRIDDEN] /etc/systemd/system/autofs.service → /usr/lib/systemd/system/autofs.service

--- /usr/lib/systemd/system/autofs.service 2014-10-16 21:30:39.000000000 -0400


+ /etc/systemd/system/autofs.service 2014-11-21 10:00:58.513568275 -0500
@@ -8,7 +8,8 @@
EnvironmentFile=-/etc/sysconfig/autofs
ExecStart=/usr/sbin/automount $OPTIONS --pid-file /run/autofs.pid
ExecReload=/usr/bin/kill -HUP $MAINPID
-TimeoutSec=180
+TimeoutSec=240
+Restart=Always

[Install]
WantedBy=multi-user.target

[MASKED] /etc/systemd/system/cups.service → /usr/lib/systemd/system/cups.service


[EXTENDED] /usr/lib/systemd/system/sssd.service →
/etc/systemd/system/sssd.service.d/journal.conf

4 overridden configuration files found.

Table 3.13, “systemd-delta difference types” lists override types that can appear in the output of
systemd-delta. Note that if a file is overridden, systemd-delta by default displays a summary of
changes similar to the output of the diff command.

Table 3.13. systemd-delta difference types

[MASKED]
    Masked unit files, see Section 3.2.7, “Disabling a service” for description of unit masking.

[EQUIVALENT]
    Unmodified copies that override the original files but do not differ in content, typically symbolic
    links.

[REDIRECTED]
    Files that are redirected to another file.

[OVERRIDDEN]
    Overridden and changed files.

[EXTENDED]
    Files that are extended with .conf files in the /etc/systemd/system/unit.d/ directory.

[UNCHANGED]
    Unmodified files are displayed only when the --type=unchanged option is used.

It is good practice to run systemd-delta after a system update to check if there are any updates to the
default units that are currently overridden by custom configuration. It is also possible to limit the output
only to a certain difference type. For example, to view just the overridden units, execute:

systemd-delta --type=overridden

If you want to edit a unit file and automatically create a drop-in file with the submitted changes, use the
following command:

# systemctl edit unit_name.type_extension

To dump the unit configuration applying all overrides and drop-ins, use this command:

# systemctl cat unit_name.type_extension


Replace the unit_name.type_extension by the name of the required unit and its type, for example
tuned.service.

3.5.5. Working with instantiated units


It is possible to instantiate multiple units from a single template configuration file at runtime. The "@"
character is used to mark the template and to associate units with it. Instantiated units can be started
from another unit file (using Requires or Wants options), or with the systemctl start command.
Instantiated service units are named the following way:

template_name@instance_name.service

Where template_name stands for the name of the template configuration file. Replace instance_name
with the name for the unit instance. Several instances can point to the same template file with
configuration options common for all instances of the unit. Template unit name has the form of:

unit_name@.service

For example, the following Wants setting in a unit file:

Wants=getty@ttyA.service getty@ttyB.service

first makes systemd search for given service units. If no such units are found, the part between "@" and
the type suffix is ignored and systemd searches for the getty@.service file, reads the configuration
from it, and starts the services.

Wildcard characters, called unit specifiers, can be used in any unit configuration file. Unit specifiers
substitute certain unit parameters and are interpreted at runtime. Table 3.14, “Important unit specifiers”
lists unit specifiers that are particularly useful for template units.

Table 3.14. Important unit specifiers

%n (full unit name)
    Stands for the full unit name including the type suffix. %N has the same meaning but also replaces
    the forbidden characters with ASCII codes.

%p (prefix name)
    Stands for a unit name with type suffix removed. For instantiated units %p stands for the part of
    the unit name before the "@" character.

%i (instance name)
    Is the part of the instantiated unit name between the "@" character and the type suffix. %I has the
    same meaning but also replaces the forbidden characters for ASCII codes.

%H (host name)
    Stands for the hostname of the running system at the point in time the unit configuration is
    loaded.

%t (runtime directory)
    Represents the runtime directory, which is either /run for the root user, or the value of the
    XDG_RUNTIME_DIR variable for unprivileged users.

For a complete list of unit specifiers, see the systemd.unit(5) manual page.

For example, the getty@.service template contains the following directives:

[Unit]
Description=Getty on %I
…​
[Service]
ExecStart=-/sbin/agetty --noclear %I $TERM
…​

When the getty@ttyA.service and getty@ttyB.service are instantiated from the above template,
Description= is resolved as Getty on ttyA and Getty on ttyB.
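
A minimal sketch of a custom template following the same pattern (the unit name, the queue argument,
and the executable path are hypothetical):

[Unit]
Description=Worker for queue %i

[Service]
ExecStart=/usr/local/bin/worker --queue %i

[Install]
WantedBy=multi-user.target

Saved as /etc/systemd/system/worker@.service, this template allows systemctl start
worker@images.service to start an instance whose %i resolves to images.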

3.6. OPTIMIZING SYSTEMD TO SHORTEN THE BOOT TIME


There is a list of systemd unit files that are enabled by default. System services that are defined by these
unit files are automatically run at boot, which influences the boot time.

This section describes:

The tools to examine system boot performance.

The purpose of systemd units enabled by default, and circumstances under which you can safely
disable such systemd units in order to shorten the boot time.

3.6.1. Examining system boot performance


To examine system boot performance, you can use the systemd-analyze command. This command has
many options available. However, this section covers only the selected ones that may be important for
systemd tuning in order to shorten the boot time.

For a complete list and detailed description of all options, see the systemd-analyze man page.

Prerequisites
Before starting to examine systemd in order to tune the boot time, you may want to list all enabled
services:

$ systemctl list-unit-files --state=enabled


Analyzing overall boot time

Procedure

For the overall information about the time that the last successful boot took, use:

$ systemd-analyze

Analyzing unit initialization time

Procedure

For the information about the initialization time of each systemd unit, use:

$ systemd-analyze blame

The output lists the units in descending order according to the time they took to initialize during the last
successful boot.

Identifying critical units

Procedure

To identify the units that took most time to initialize at the last successful boot, use:

$ systemd-analyze critical-chain

The output highlights in red the units that critically slow down the boot.

Figure 3.1. The output of the systemd-analyze critical-chain command

3.6.2. A guide to selecting services that can be safely disabled


If you find the boot time of your system long, you can shorten it by disabling some of the services
enabled on boot by default.

To list such services, run:


$ systemctl list-unit-files --state=enabled

To disable a service, run:

# systemctl disable service_name

However, certain services must stay enabled so that your operating system is safe and functions in
the way you need.

You can use the table below as a guide to selecting the services that you can safely disable. The table
lists all services enabled by default on a minimal installation of Red Hat Enterprise Linux 8, and for each
service it states whether this service can be safely disabled.

The table also provides more information about the circumstances under which the service can be
disabled, or the reason why you should not disable the service.

Table 3.15. Services enabled by default on a minimal installation of RHEL 8

auditd.service (can be disabled: yes)
    Disable auditd.service only if you do not need audit messages from the kernel. Be aware that if you
    disable auditd.service, the /var/log/audit/audit.log file is not produced. Consequently, you are not
    able to retroactively review some commonly-reviewed actions or events, such as user logins,
    service starts or password changes. Also note that auditd has two parts: a kernel part, and a
    service itself. By using the systemctl disable auditd command, you only disable the service, but
    not the kernel part. To disable system auditing in its entirety, set audit=0 on the kernel command
    line.

autovt@.service (can be disabled: no)
    This service runs only when it is really needed, so it does not need to be disabled.

crond.service (can be disabled: yes)
    Be aware that no items from crontab will run if you disable crond.service.

dbus-org.fedoraproject.FirewallD1.service (can be disabled: yes)
    A symlink to firewalld.service.

dbus-org.freedesktop.NetworkManager.service (can be disabled: yes)
    A symlink to NetworkManager.service.

dbus-org.freedesktop.nm-dispatcher.service (can be disabled: yes)
    A symlink to NetworkManager-dispatcher.service.

firewalld.service (can be disabled: yes)
    Disable firewalld.service only if you do not need a firewall.

getty@.service (can be disabled: no)
    This service runs only when it is really needed, so it does not need to be disabled.

import-state.service (can be disabled: yes)
    Disable import-state.service only if you do not need to boot from network storage.

irqbalance.service (can be disabled: yes)
    Disable irqbalance.service only if you have just one CPU. Do not disable irqbalance.service on
    systems with multiple CPUs.

kdump.service (can be disabled: yes)
    Disable kdump.service only if you do not need reports from kernel crashes.

loadmodules.service (can be disabled: yes)
    This service is not started unless the /etc/rc.modules or /etc/sysconfig/modules directory exists,
    which means that it is not started on a minimal RHEL 8 installation.

lvm2-monitor.service (can be disabled: yes)
    Disable lvm2-monitor.service only if you do not use Logical Volume Manager (LVM).

microcode.service (can be disabled: no)
    Do not disable the service because it provides updates of the microcode software in the CPU.

NetworkManager-dispatcher.service (can be disabled: yes)
    Disable NetworkManager-dispatcher.service only if you do not need notifications on network
    configuration changes (for example in static networks).

NetworkManager-wait-online.service (can be disabled: yes)
    Disable NetworkManager-wait-online.service only if you do not need a working network
    connection available right after the boot. If the service is enabled, the system does not finish the
    boot before the network connection is working. This may prolong the boot time significantly.

NetworkManager.service (can be disabled: yes)
    Disable NetworkManager.service only if you do not need connection to a network.

nis-domainname.service (can be disabled: yes)
    Disable nis-domainname.service only if you do not use Network Information Service (NIS).

rhsmcertd.service (can be disabled: no)

rngd.service (can be disabled: yes)
    Disable rngd.service only if you do not need a lot of entropy on your system, or you do not have
    any sort of hardware generator. Note that the service is necessary in environments that require a
    lot of good entropy, such as systems used for generation of X.509 certificates (for example the
    FreeIPA server).

rsyslog.service (can be disabled: yes)
    Disable rsyslog.service only if you do not need persistent logs, or you set systemd-journald to
    persistent mode.

selinux-autorelabel-mark.service (can be disabled: yes)
    Disable selinux-autorelabel-mark.service only if you do not use SELinux.

sshd.service (can be disabled: yes)
    Disable sshd.service only if you do not need remote logins by the OpenSSH server.

sssd.service (can be disabled: yes)
    Disable sssd.service only if there are no users who log in to the system over the network (for
    example by using LDAP or Kerberos). Red Hat recommends disabling all sssd-* units if you disable
    sssd.service.

syslog.service (can be disabled: yes)
    An alias for rsyslog.service.

tuned.service (can be disabled: yes)
    Disable tuned.service only if you do not need to use performance tuning.

lvm2-lvmpolld.socket (can be disabled: yes)
    Disable lvm2-lvmpolld.socket only if you do not use Logical Volume Manager (LVM).

dnf-makecache.timer (can be disabled: yes)
    Disable dnf-makecache.timer only if you do not need your package metadata to be updated
    automatically.

unbound-anchor.timer (can be disabled: yes)
    Disable unbound-anchor.timer only if you do not need a daily update of the root trust anchor for
    DNS Security Extensions (DNSSEC). This root trust anchor is used by the Unbound resolver and
    resolver library for DNSSEC validation.


To find more information about a service, you can run one of the following commands:

$ systemctl cat <service_name>

$ systemctl help <service_name>

The systemctl cat command provides the content of the service file located under
/usr/lib/systemd/system/<service>, as well as all applicable overrides. The applicable overrides include
unit file overrides from the /etc/systemd/system/<service> file or drop-in files from a corresponding
unit.type.d directory.

For more information on drop-in files, see the systemd.unit man page.

The systemctl help command shows the man page of the particular service.

3.7. ADDITIONAL RESOURCES


For more information on systemd and its usage on Red Hat Enterprise Linux, see the resources listed
below.

3.7.1. Installed Documentation


systemctl(1) — The manual page for the systemctl command line utility provides a complete
list of supported options and commands.

systemd(1) — The manual page for the systemd system and service manager provides more
information about its concepts and documents available command line options and
environment variables, supported configuration files and directories, recognized signals, and
available kernel options.

systemd-delta(1) — The manual page for the systemd-delta utility that allows you to find extended
and overridden configuration files.

systemd.directives(7) — The manual page named systemd.directives provides detailed


information about systemd directives.

systemd.unit(5) — The manual page named systemd.unit provides detailed information about
systemd unit files and documents all available configuration options.

systemd.service(5) — The manual page named systemd.service documents the format of


service unit files.

systemd.target(5) — The manual page named systemd.target documents the format of target
unit files.

systemd.kill(5) — The manual page named systemd.kill documents the configuration of the
process killing procedure.

3.7.2. Online Documentation


systemd Home Page — The project home page provides more information about systemd.


CHAPTER 4. MANAGING USER AND GROUP ACCOUNTS


The control of users and groups is a core element of Red Hat Enterprise Linux (RHEL) system
administration. The following sections describe how to:

Manage user accounts with the web console .

Manage users and groups from the command-line .

Manage sudo access.

Change and reset the root password.

4.1. INTRODUCTION TO USERS AND GROUPS


Each RHEL user has distinct login credentials and can be assigned to various groups to customize their
system privileges.

A user who creates a file is the owner of that file and the group owner of that file. The file is assigned
separate read, write, and execute permissions for the owner, the group, and those outside that group.
The file owner can be changed only by the root user. Access permissions to the file can be changed by
both the root user and the file owner. A regular user can change group ownership of a file they own to a
group of which they are a member.

Each user is associated with a unique numerical identification number called user ID (UID). Each group is
associated with a group ID (GID). Users within a group share the same permissions to read, write, and
execute files owned by that group.

4.2. CONFIGURING RESERVED USER AND GROUP IDS


RHEL reserves user and group IDs below 1000 for system users and groups. You can find the reserved
user and group IDs in the setup package. To view reserved user and group IDs, use:

cat /usr/share/doc/setup*/uidgid

It is recommended to assign IDs to the new users and groups starting at 5000, as the reserved range
can increase in the future.

To make the IDs assigned to new users start at 5000 by default, modify the UID_MIN and GID_MIN
parameters in the /etc/login.defs file.

Procedure
To make the IDs assigned to new users start at 5000 by default:

1. Open the /etc/login.defs file in an editor of your choice.

2. Find the lines that define the minimum value for automatic UID selection.

# Min/max values for automatic uid selection in useradd
#
UID_MIN 1000

3. Modify the UID_MIN value to start at 5000.


# Min/max values for automatic uid selection in useradd
#
UID_MIN 5000

4. Find the lines that define the minimum value for automatic GID selection.

# Min/max values for automatic gid selection in groupadd
#
GID_MIN 1000

5. Modify the GID_MIN value to start at 5000.

# Min/max values for automatic gid selection in groupadd
#
GID_MIN 5000

Note that for users and groups created before you changed the UID_MIN and GID_MIN values, UIDs
and GIDs still start at the default 1000.
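As a quick check after raising UID_MIN and GID_MIN, you can create a throwaway account and confirm
that its ID falls in the new range. The user name below is only an example, and the exact ID assigned
depends on the accounts that already exist on the system:

# useradd testuser

# id -u testuser
5000

# userdel -r testuser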


WARNING

Do not raise the IDs reserved by the system above 1000 by changing SYS_UID_MAX, to
avoid conflicts with systems that retain the 1000 limit.

4.3. USER PRIVATE GROUPS


RHEL uses the user private group (UPG) system configuration, which makes UNIX groups easier to
manage. A user private group is created whenever a new user is added to the system. The user private
group has the same name as the user for which it was created and that user is the only member of the
user private group.

UPGs simplify the collaboration on a project between multiple users. In addition, UPG system
configuration makes it safe to set default permissions for a newly created file or directory, as it allows
both the user, and the group this user is a part of, to make modifications to the file or directory.

A list of all groups is stored in the /etc/group configuration file.
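For example, when you add a new user with useradd, a private group with the same name is created at
the same time. The user name and GID below are illustrative only:

# useradd alice

# grep '^alice:' /etc/group
alice:x:5001: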

4.4. MANAGING USER ACCOUNTS WITH THE WEB CONSOLE


RHEL web console enables you to execute a wide range of administrative tasks without accessing your
terminal directly. The RHEL 8 web console offers a graphical interface for adding, editing, and removing
system user accounts. The following section describes how to:

Get started with the RHEL web console.

Manage user accounts in the web console.

4.4.1. Getting started using the RHEL web console


The following sections aim to help you install the web console in Red Hat Enterprise Linux 8 and open
the web console in your browser. You will also learn how to add remote hosts and monitor them in the
RHEL 8 web console.

Prerequisites


Installed Red Hat Enterprise Linux 8.

Enabled networking.

Registered system with appropriate subscription attached.


To obtain a subscription, see Managing subscriptions in the web console .

4.4.1.1. What is the RHEL web console

The RHEL web console is a Red Hat Enterprise Linux 8 web-based interface designed for managing and
monitoring your local system, as well as Linux servers located in your network environment.

The RHEL web console enables you to perform a wide range of administrative tasks, including:

Managing services

Managing user accounts

Managing and monitoring system services

Configuring network interfaces and firewall

Reviewing system logs

Managing virtual machines

Creating diagnostic reports

Setting kernel dump configuration

Configuring SELinux


Updating software

Managing system subscriptions

The RHEL web console uses the same system APIs as you would in a terminal, and actions performed in
a terminal are immediately reflected in the RHEL web console.

You can monitor the logs of systems in the network environment, as well as their performance, displayed
as graphs. In addition, you can change the settings directly in the web console or through the terminal.

4.4.1.2. Installing the web console

Red Hat Enterprise Linux 8 includes the RHEL 8 web console installed by default in many installation
variants.

If this is not the case on your system, install the cockpit package and set up the cockpit.socket service
to enable the RHEL 8 web console.

Procedure

1. Install the cockpit package:

# yum install cockpit

2. Enable and start the cockpit.socket service, which runs a web server:

# systemctl enable --now cockpit.socket

3. If you are using a custom firewall profile, add the cockpit service to firewalld to open port 9090
in the firewall:

# firewall-cmd --add-service=cockpit --permanent


# firewall-cmd --reload

Verification steps

1. To verify the installation and configuration, open the web console in your browser.
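Optionally, before opening the browser, you can confirm from the command line that the web console
socket is enabled and active:

# systemctl is-enabled cockpit.socket
enabled

# systemctl is-active cockpit.socket
active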

4.4.1.3. Logging in to the web console

Use the steps in this procedure for the first login to the RHEL web console using a system user name
and password.

Prerequisites

Use one of the following browsers for opening the web console:

Mozilla Firefox 52 and later

Google Chrome 57 and later

Microsoft Edge 16 and later

System user account credentials


The RHEL web console uses a specific PAM stack located at /etc/pam.d/cockpit. Authentication
with PAM allows you to log in with the user name and password of any local account on the
system.

Procedure

1. Open the web console in your web browser:

Locally: https://localhost:9090

Remotely with the server’s hostname: https://example.com:9090

Remotely with the server’s IP address: https://192.0.2.2:9090


If you use a self-signed certificate, the browser issues a warning. Check the certificate and
accept the security exception to proceed with the login.

The console loads a certificate from the /etc/cockpit/ws-certs.d directory and uses the last
file with a .cert extension in alphabetical order. To avoid having to grant security exceptions,
install a certificate signed by a certificate authority (CA).

2. In the login screen, enter your system user name and password.

3. Optionally, click the Reuse my password for privileged tasks option.


If the user account you are using to log in has sudo privileges, this makes it possible to perform
privileged tasks in the web console, such as installing software or configuring SELinux.

4. Click Log In.

After successful authentication, the RHEL web console interface opens.

4.4.1.4. Connecting to the web console from a remote machine

It is possible to connect to your web console interface from any client operating system and also from
mobile phones or tablets. The following procedure shows how to do it.

Prerequisites

Device with a supported internet browser, such as:

Mozilla Firefox 52 and later

Google Chrome 57 and later

Microsoft Edge 16 and later

RHEL 8 server you want to access with an installed and accessible web console. For more
information about the installation of the web console see Installing the web console .

Procedure

1. Open your web browser.

2. Type the remote server’s address in one of the following formats:

a. With the server’s host name: server.hostname.example.com:port_number

b. With the server’s IP address: server.IP_address:port_number

3. After the login interface opens, log in with your RHEL machine credentials.

4.4.1.5. Logging in to the web console using a one-time password

Complete this procedure to log in to the RHEL web console using a one-time password (OTP).

IMPORTANT

It is possible to log in using a one-time password only if your system is part of an Identity
Management (IdM) domain with enabled OTP configuration. For more information about
OTP in IdM, see One-time password in Identity Management .

Prerequisites

The RHEL web console has been installed.


For details, see Installing the web console .

An Identity Management server with enabled OTP configuration.

A configured hardware or software device generating OTP tokens.

Procedure

1. Open the RHEL web console in your browser:

Locally: https://localhost:PORT_NUMBER

Remotely with the server hostname: https://example.com:PORT_NUMBER

Remotely with the server IP address:



https://EXAMPLE.SERVER.IP.ADDR:PORT_NUMBER
If you use a self-signed certificate, the browser issues a warning. Check the certificate and
accept the security exception to proceed with the login.

The console loads a certificate from the /etc/cockpit/ws-certs.d directory and uses the last
file with a .cert extension in alphabetical order. To avoid having to grant security exceptions,
install a certificate signed by a certificate authority (CA).

2. The Login window opens. In the Login window, enter your system user name and password.

3. Generate a one-time password on your device.

4. Enter the one-time password into a new field that appears in the web console interface after
you confirm your password.

5. Click Log in.

6. Successful login takes you to the Overview page of the web console interface.

4.4.2. Managing user accounts in the web console


The RHEL web console offers an interface for adding, editing, and removing system user accounts.
After reading this section, you will know:

Where the existing accounts come from.

How to add new accounts.

How to set password expiration.

How and when to terminate user sessions.

Prerequisites

Being logged into the RHEL web console with an account that has administrator permissions
assigned. For details, see Logging in to the RHEL web console .

4.4.2.1. System user accounts managed in the web console

With user accounts displayed in the RHEL web console you can:

Authenticate users when accessing the system.

Set their access rights to the system.

The RHEL web console displays all user accounts located in the system. Therefore, you can see at least
one user account just after the first login to the web console.

After logging into the RHEL web console, you can perform the following operations:

Create new user accounts.

Change their parameters.

Lock accounts.


Terminate user sessions.

4.4.2.2. Adding new accounts using the web console

Use the following steps for adding user accounts to the system and setting administration rights to the
accounts through the RHEL web console.

Prerequisites

The RHEL web console must be installed and accessible. For details, see Installing the web
console.

Procedure

1. Log in to the RHEL web console.

2. Click Accounts.

3. Click Create New Account.

4. In the Full Name field, enter the full name of the user.
The RHEL web console automatically suggests a user name from the full name and fills it in the
User Name field. If you do not want to use the original naming convention consisting of the first
letter of the first name and the whole surname, update the suggestion.

5. In the Password/Confirm fields, enter the password and retype it to verify that it is correct. The
color bar below the fields shows the security level of the entered password; the web console does
not allow you to create a user with a weak password.


6. Click Create to save the settings and close the dialog box.

7. Select the newly created account.

8. Select Server Administrator in the Roles item.

Now you can see the new account in the Accounts settings and you can use the credentials to connect
to the system.

4.4.2.3. Enforcing password expiration in the web console

By default, user account passwords are set to never expire. To enforce password expiration, as
administrator, set system passwords to expire after a defined number of days.

When the password expires, the next login attempt will prompt for a password change.


Procedure

1. Log in to the RHEL 8 web console interface.

2. Click Accounts.

3. Select the user account for which to enforce password expiration.

4. In the user account settings, click Never expire password.

5. In the Password Expiration dialog box, select Require password change every …​ days and
enter a positive whole number representing the number of days when the password expires.

6. Click Change.

To verify the settings, open the account settings. The RHEL 8 web console displays a link with the date
of expiration.
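The same policy can also be applied from the command line with the chage utility. For example, to
require the user sarah (the example account used earlier in this chapter) to change her password every
30 days, and then to review the resulting settings, use:

# chage -M 30 sarah

# chage -l sarah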

4.4.2.4. Terminating user sessions in the web console

A user session is created when a user logs in to the system. Terminating a user session means logging
that user out of the system.


It can be helpful if you need to perform administrative tasks sensitive to configuration changes, for
example, system upgrades.

In each user account in the RHEL 8 web console, you can terminate all sessions for the account except
for the web console session you are currently using. This prevents you from cutting yourself off from the
system.

Procedure

1. Log in to the RHEL 8 web console.

2. Click Accounts.

3. Click the user account for which you want to terminate the session.

4. Click the Terminate Session button.

If the Terminate Session button is inactive, the user is not logged in to the system.

The RHEL web console terminates the sessions.
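On systems using systemd-logind, you can achieve the same result from a terminal with the loginctl
utility. For example, to list active sessions and then terminate all sessions that belong to a particular user
(the user name is illustrative):

# loginctl list-sessions

# loginctl terminate-user sarah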

4.5. MANAGING USERS FROM THE COMMAND LINE


You can manage users and groups using the command-line interface (CLI).

The following sections describe how to use the CLI to:

Add new users.

Add new groups.

Add a user to a group .

Remove a user from a group .

Create group directories.

Prerequisites

Root access.


4.5.1. Adding a new user from the command line


This section describes how to use the useradd command to add a new user.

Procedure

To add a new user, use:

# useradd options username

Replace options with the command-line options for the useradd command, and replace
username with the name of the user.

Example

To add the user sarah with user ID 5000, use:

# useradd -u 5000 sarah

Verification steps

To verify the new user is added, use the id utility.

# id sarah

The output returns:

uid=5000(sarah) gid=5000(sarah) groups=5000(sarah)
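The useradd command accepts additional options that are often combined in practice. For example, the
following illustrative invocation creates the hypothetical user eve with a custom UID, a comment, a login
shell, supplementary membership in the wheel group, and a home directory:

# useradd -u 5001 -c "Example User" -s /bin/bash -G wheel -m eve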

Additional resources

For more information about useradd, see the useradd man page.

4.5.2. Adding a new group from the command line


This section describes how to use the groupadd command to add a new group.

Procedure

To add a new group, use:

# groupadd options group-name

Replace options with the command-line options for the groupadd command, and replace
group-name with the name of the group.

Example

To add the group sysadmins with group ID 5000, use:

# groupadd -g 5000 sysadmins

Verification steps


To verify the new group is added, use the tail utility.

# tail /etc/group

The output returns:

sysadmins:x:5000:

Additional resources

For more information about groupadd, see the groupadd man page.

4.5.3. Adding a user to a group from the command line


This section describes how to use the usermod command to add a group to the supplementary groups
of the user.

Procedure

To add a group to the supplementary groups of the user, use:

# usermod --append -G group-name username

Replace group-name with the name of the group, and replace username with the name of the
user.

Example

To add the user sysadmin to the group system-administrators, use:

# usermod --append -G system-administrators sysadmin

Verification steps

To verify that the new group is added to the supplementary groups of the user sysadmin, use:

# groups sysadmin

The output returns:

sysadmin: sysadmin system-administrators

4.5.4. Removing a user from a group from the command line


You can remove a user from a primary or supplementary group by overriding the groups the user
belongs to with a new set of groups that does not contain the group you want to remove the user from.
The following section describes how to:

Override the primary group of the user.


Override the supplementary groups of the user.

4.5.4.1. Overriding the primary group of the user

This section describes how to use the usermod command to override the primary group of the user.

Procedure

To override the primary group of the user, use:

# usermod -g group-name username

Replace group-name with the name of the group, and replace username with the name of the
user.

Example

If the user sarah belongs to the primary group sarah1, and you want to change the primary
group of the user to sarah2, use:

# usermod -g sarah2 sarah

Verification steps

To verify that the primary group of the user is overridden, use:

# groups sarah

The output returns:

sarah : sarah2

4.5.4.2. Overriding the supplementary groups of the user

This section describes how to use the usermod command to override the supplementary groups of the
user.

Procedure

To override the supplementary groups of the user, use:

# usermod -G group-name username

Replace group-name with the name of the group, and replace username with the name of the
user.

Example

If the user sarah belongs to the system-administrator group and to the developer group and
you want to remove the user sarah from the system-administrator group, you can do that by
replacing the old list of groups with a new one. To do that, use:


# usermod -G developer sarah

Verification steps

To verify that the supplementary groups of the user are overridden, use:

# groups sarah

The output returns:

sarah : sarah developer

4.5.5. Creating a group directory


Under the UPG system configuration, you can apply the set-group identification permission (setgid bit)
to a directory. The setgid bit makes managing group projects that share a directory simpler. When you
apply the setgid bit to a directory, files created within that directory are automatically assigned to the
group that owns the directory. Any user who has write and execute permissions on the directory can then
create, modify, and delete files in it.

The following section describes how to create group directories.

Procedure

1. Create a directory:

# mkdir directory-name

Replace directory-name with the name of the directory.

2. Create a group:

# groupadd group-name

Replace group-name with the name of the group.

3. Add users to the group:

# usermod --append -G group-name username

Replace group-name with the name of the group, and replace username with the name of the
user.

4. Associate the user and group ownership of the directory with the group-name group:

# chown :group-name directory-name

Replace group-name with the name of the group, and replace directory-name with the name of
the directory.

5. Set the write permissions to allow the users to create and modify files and directories and set
the setgid bit to make this permission be applied within the directory-name directory:


# chmod g+rwxs directory-name

Replace directory-name with the name of the directory.

Now all members of the group-name group can create and edit files in the directory-name
directory. Newly created files retain the group ownership of group-name group.

Verification steps

To verify the correctness of set permissions, use:

# ls -ld directory-name

Replace directory-name with the name of the directory.

The output returns:

drwxrwsr-x. 2 root group-name 6 Nov 25 08:45 directory-name
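Putting the whole procedure together, a minimal worked example might look as follows. The directory
path, group name, and user name are illustrative only:

# mkdir /opt/projects

# groupadd web-team

# usermod --append -G web-team sarah

# chown :web-team /opt/projects

# chmod g+rwxs /opt/projects

# ls -ld /opt/projects

Files that members of the web-team group create in /opt/projects now automatically receive web-team
group ownership because of the setgid bit.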

4.6. MANAGING SUDO ACCESS


System administrators can grant sudo access to allow non-root users to execute administrative
commands. The sudo command provides users with administrative access without using the password
of the root user.

When users need to perform an administrative command, they can precede that command with sudo.
The command is then executed as if they were the root user.

Be aware of the following limitations:

Only users listed in the /etc/sudoers configuration file can use the sudo command.

The command is executed in the shell of the user, not in the root shell.

The following section describes how to grant sudo access to a user.

4.6.1. Granting sudo access to a user


A non-root user requires sudo access to perform administrative commands. The following section
describes how to grant sudo access to a user.

Prerequisites

Root access.

Procedure

1. Open the /etc/sudoers file.

# visudo

The /etc/sudoers file defines the policies applied by the sudo command.

2. In the /etc/sudoers file find the lines that grant sudo access to users in the administrative
wheel group.

## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL

3. Make sure the line that starts with %wheel does not have the # comment character before it.

4. Save any changes, and exit the editor.

5. Add users you want to grant sudo access to into the administrative wheel group .

# usermod --append -G wheel username

Replace username with the name of the user.

Example

To add the user sarah to the administrative wheel group, use:

# usermod --append -G wheel sarah

Verification steps

To verify the user is added to the administrative wheel group, use the id utility.

# id sarah

The output returns:

uid=5000(sarah) gid=5000(sarah) groups=5000(sarah),10(wheel)
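After the user logs out and back in so that the new group membership takes effect, they can confirm
that sudo works, for example:

$ sudo whoami
root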

4.7. CHANGING AND RESETTING THE ROOT PASSWORD


If the existing root password is no longer satisfactory or is forgotten, you can change or reset it both as
the root user and a non-root user.

This following sections describe how to:

Change the root password as the root user.

Change or reset the forgotten root password as a non-root user.

Reset the forgotten root password on boot.

4.7.1. Changing the root password as the root user


This section describes how to use the passwd command to change the root password as the root user.

Prerequisites

Root access.


Procedure

To change the root password, use:

# passwd

You are prompted to enter your current password before you can change it.

4.7.2. Changing or resetting the forgotten root password as a non-root user


This section describes how to use the passwd command to change or reset the forgotten root
password as a non-root user.

Prerequisites

You are able to log in as a non-root user.

You are a member of the administrative wheel group.

Procedure

To change or reset the root password as a non-root user that belongs to the wheel group, use:

$ sudo passwd root

You are prompted to enter your current non-root password before you can change the root
password.

4.7.3. Resetting the forgotten root password on boot


If you are unable to log in as a non-root user or do not belong to the administrative wheel group, you can
reset the root password on boot by switching into a specialized chroot jail environment.

Procedure

1. Reboot the system and, on the GRUB 2 boot screen, press the e key to interrupt the boot
process.
The kernel boot parameters appear.

load_video
set gfx_payload=keep
insmod gzio
linux ($root)/vmlinuz-4.18.0-80.el8.x86_64 root=/dev/mapper/rhel-root ro crash\
kernel=auto resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/swap rhgb quiet
initrd ($root)/initramfs-4.18.0-80.el8.x86_64.img $tuned_initrd

2. Go to the end of the line that starts with linux.

linux ($root)/vmlinuz-4.18.0-80.el8.x86_64 root=/dev/mapper/rhel-root ro crash\
kernel=auto resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/swap rhgb quiet

Press Ctrl+e to jump to the end of the line.

3. Add rd.break to the end of the line that starts with linux.

linux ($root)/vmlinuz-4.18.0-80.el8.x86_64 root=/dev/mapper/rhel-root ro crash\
kernel=auto resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/swap rhgb quiet rd.break

4. Press Ctrl+x to start the system with the changed parameters.


The switch_root prompt appears.

5. Remount the file system as writable:

mount -o remount,rw /sysroot

The file system is mounted as read-only in the /sysroot directory. Remounting the file system
as writable allows you to change the password.

6. Enter the chroot environment:

chroot /sysroot

The sh-4.4# prompt appears.

7. Reset the root password:

passwd

Follow the instructions displayed by the command line to finalize the change of the root
password.

8. Enable the SELinux relabeling process on the next system boot:

touch /.autorelabel

9. Exit the chroot environment:

exit

10. Exit the switch_root prompt:

exit

11. Wait until the SELinux relabeling process is finished. Note that relabeling a large disk might take
a long time. The system reboots automatically when the process is complete.


CHAPTER 5. MANAGING FILE PERMISSIONS

5.1. INTRODUCTION TO FILE PERMISSIONS


Every file or directory has three levels of ownership:

User owner (u).

Group owner (g).

Others (o).

Each level of ownership can be assigned the following permissions:

Read (r).

Write (w).

Execute (x).

Note that the execute permission for a file allows you to execute that file. The execute permission for a
directory allows you to access the contents of the directory, but not execute it.

When a new file or directory is created, a default set of permissions is automatically assigned to it. The
default permissions for a file or directory are based on two factors:

Base permission.

The user file-creation mode mask (umask).

5.1.1. Base permissions


Whenever a new file or directory is created, a base permission is automatically assigned to it.

Base permissions for a file or directory can be expressed in symbolic or octal values.

Permission Symbolic value Octal value

No permission --- 0

Execute --x 1

Write -w- 2

Write and execute -wx 3

Read r-- 4

Read and execute r-x 5

Read and write rw- 6


Read, write, execute rwx 7

The base permission for a directory is 777 (drwxrwxrwx), which grants everyone the permissions to
read, write, and execute. This means that the directory owner, the group, and others can list the
contents of the directory, create, delete, and edit items within the directory, and descend into it.

Note that individual files within a directory can have their own permission that might prevent you from
editing them, despite having unrestricted access to the directory.

The base permission for a file is 666 (-rw-rw-rw-), which grants everyone the permissions to read and
write. This means that the file owner, the group, and others can read and edit the file.

Example 1
If a file has the following permissions:

$ ls -l
-rwxrw----. 1 sysadmins sysadmins 2 Mar 2 08:43 file

- indicates it is a file.

rwx indicates that the file owner has permissions to read, write, and execute the file.

rw- indicates that the group has permissions to read and write, but not execute the file.

--- indicates that other users have no permission to read, write, or execute the file.

. indicates that the SELinux security context is set for the file.

Example 2
If a directory has the following permissions:

$ ls -dl
drwxr-----. 1 sysadmins sysadmins 2 Mar 2 08:43 directory

d indicates it is a directory.

rwx indicates that the directory owner has the permissions to read, write, and access the
contents of the directory.
As a directory owner, you can list the items (files, subdirectories) within the directory, access the
content of those items, and modify them.

r-- indicates that the group has permissions to read, but not write or access the contents of the
directory.
As a member of the group that owns the directory, you can list the items within the directory.
You cannot access information about the items within the directory or modify them.

--- indicates that other users have no permission to read, write, or access the contents of the
directory.
As a user who is neither the user owner nor the group owner of the directory, you cannot list the
items within the directory, access information about those items, or modify them.

. indicates that the SELinux security context is set for the directory.

NOTE

The base permission that is automatically assigned to a file or directory is not the default
permission the file or directory ends up with. When you create a file or directory, the base
permission is altered by the umask. The combination of the base permission and the
umask creates the default permission for files and directories.

5.1.2. User file-creation mode mask


The umask is a variable that automatically removes permissions from the base permission value whenever
a file or directory is created, to increase the overall security of a Linux system.

The umask can be expressed in symbolic or octal values.

Permission Symbolic value Octal value

Read, write, and execute rwx 0

Read and write rw- 1

Read and execute r-x 2

Read r-- 3

Write and execute -wx 4

Write -w- 5

Execute --x 6

No permissions --- 7

The default umask for a standard user is 0002. The default umask for a root user is 0022.

The first digit of the umask represents special permissions (sticky bit, SGID bit, or SUID bit). The last
three digits of the umask represent the permissions that are removed from the user owner (u), group
owner (g), and others (o) respectively.

Example
The following example illustrates how the umask with an octal value of 0137 is applied to the file with the
base permission of 777, to create the file with the default permission of 640.
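You can reproduce this calculation in a shell session. The following illustrative commands set the umask
for the current session only and create a scratch file and directory; the resulting modes are -rw-r-----
(640) for the file and drw-r----- (640) for the directory:

$ umask 0137

$ touch demo-file

$ mkdir demo-dir

$ ls -ld demo-file demo-dir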

Figure 5.1. Applying the umask when creating a file

5.1.3. Default permissions


The default permission for a new file or directory is determined by applying the umask to the base
permission.

Example 1
When a standard user creates a new directory, the umask is set to 002 (rwxrwxr-x), and the base
permission for a directory is set to 777 (rwxrwxrwx). This brings the default permission to 775
(drwxrwxr-x).

Symbolic value Octal value

Base permission rwxrwxrwx 777

Umask rwxrwxr-x 002

Default permission rwxrwxr-x 775

This means that the directory owner and the group can list the contents of the directory, create, delete,
and edit items within the directory, and descend into it. Other users can only list the contents of the
directory and descend into it.

Example 2
When a standard user creates a new file, the umask is set to 002 (rwxrwxr-x), and the base permission
for a file is set to 666 (rw-rw-rw-). This brings the default permission to 664 (-rw-rw-r--).

Symbolic value Octal value


Base permission rw-rw-rw- 666

Umask rwxrwxr-x 002

Default permission rw-rw-r-- 664

This means that the file owner and the group can read and edit the file, while other users can only read
the file.

Example 3
When a root user creates a new directory, the umask is set to 022 (rwxr-xr-x), and the base permission
for a directory is set to 777 (rwxrwxrwx). This brings the default permission to 755 (rwxr-xr-x).

Symbolic value Octal value

Base permission rwxrwxrwx 777

Umask rwxr-xr-x 022

Default permission rwxr-xr-x 755

This means that the directory owner can list the contents of the directory, create, delete, and edit items
within the directory, and descend into it. The group and others can only list the contents of the directory
and descend into it.

Example 4
When a root user creates a new file, the umask is set to 022 (rwxr-xr-x), and the base permission for a
file is set to 666 (rw-rw-rw-). This brings the default permission to 644 (-rw-r--r--).

Symbolic value Octal value

Base permission rw-rw-rw- 666

Umask rwxr-xr-x 022

Default permission rw-r--r-- 644

This means that the file owner can read and edit the file, while the group and others can only read the
file.

NOTE

For security reasons, regular files cannot have execute permissions by default, even if the
umask is set to 000 (rwxrwxrwx). However, directories can be created with execute
permissions.


5.2. DISPLAYING FILE PERMISSIONS


The following section describes how to use the ls command to display the permissions for directories,
files, and files within directories.

Procedure

To see the permissions for a particular directory, use:

$ ls -dl directory-name

Replace directory-name with the name of the directory.

To see the permissions for a particular directory and all files within that directory, use:

$ ls -l directory-name

Replace directory-name with the name of the directory.

To see the permissions for a particular file, use:

$ ls -l file-name

Replace file-name with the name of the file.

Additional information

See the ls man page for more details.

5.3. CHANGING FILE PERMISSIONS


The following section describes how to:

Change file permissions using symbolic values.

Change file permissions using octal values.

5.3.1. Changing file permissions using symbolic values


You can assign the following permissions:

Read (r).

Write (w).

Execute (x).

Permissions can be assigned to:

User owner (u).

Group owner (g).

Other (o).


All (a).

To add or take away the permissions you can use the following signs:

+ to add the permissions on top of the existing permissions.

- to take away the permissions from the existing permission.

= to omit the existing permissions and explicitly define the new ones.

The following section describes how to set and remove file permissions using the symbolic values.

Procedure

To change the file permissions for an existing file or directory, use:

$ chmod u=symbolic_value,g+symbolic_value,o-symbolic_value file-name

Replace file-name with the name of the file or directory, and replace symbolic_value for user,
groups, and others with corresponding symbolic values. See Section 5.1.1, “Base permissions” for
more details.

Example
To change the file permissions for my-file.txt from 664 (-rw-rw-r--) to 740 (-rwxr-----), use:

$ chmod u+x,g-w,o= my-file.txt

Note that any permission that is not specified after the equals sign (=) is automatically
prohibited.

To set the same permissions for user, group, and others, use:

$ chmod a=symbolic_value file-name

Replace file-name with the name of the file or directory, and replace symbolic_value with a
symbolic value. See Section 5.1.1, “Base permissions” for more details.

Example
To set the permission for my-file.txt to 777 (-rwxrwxrwx or drwxrwxrwx), use:

$ chmod a=rwx my-file.txt

To change the permissions for a directory and all its sub-directories, add the -R option:

$ chmod -R symbolic_value directory-name

Replace directory-name with the name of the directory, and replace symbolic_value with a
symbolic value. See Section 5.1.1, “Base permissions” for more details.

Example
To change the permissions for /my-directory/ and all its sub-directories from 775 (drwxrwxr-x)
to 740 (drwxr-----), use:


$ chmod -R g-wx,o= /my-directory

5.3.2. Changing file permissions using octal values


The following section describes how to use the chmod command to change the permissions for a file or
directory.

Procedure

To change the file permissions for an existing file or directory, use:

$ chmod octal_value file-name

Replace file-name with the name of the file or directory, and replace octal_value with an octal
value. See Section 5.1.1, “Base permissions” for more details.

5.4. DISPLAYING THE UMASK


The following section describes how to:

Display the current octal value of the umask.

Display the current symbolic value of the umask.

Display the default bash umask.

5.4.1. Displaying the current octal value of the umask


The following section describes how to use the umask command to display the current umask.

Procedure:

To display the current octal value of the umask for a standard user, use:

$ umask

To display the current octal value of the umask for a root user, use:

$ sudo umask

Or:

# umask

NOTE

When displaying the umask, you may notice it displayed as a four digit number ( 0002 or
0022). The first digit of the umask represents a special bit (sticky bit, SGID bit, or SUID
bit). If the first digit is set to 0, the special bit is not set.

5.4.2. Displaying the current symbolic value of the umask


The following section describes how to use the umask command to display the current umask.

Procedure

To display the current symbolic value of the umask, use:

$ umask -S

To display the current symbolic value of the umask for a root user, use:

$ sudo umask -S

Or:

# umask -S

5.4.3. Displaying the default bash umask


There are a number of shells you can use, such as bash, ksh, zsh and tcsh.

Those shells can behave as login or non-login shells. The login shell is typically invoked by opening a
native or a GUI terminal.

To determine whether you are executing a command in a login or a non-login shell, use the echo $0
command.

In bash shell, if the output returns bash, you are executing a command in a non-login shell.

$ echo $0
bash

The default umask for the non-login shell is set in /etc/bashrc configuration file.

If the output returns -bash, you are executing a command in a login shell.

# echo $0
-bash

The default umask for the login shell is set in /etc/profile configuration file.

Procedure

To display the default bash umask for the non-login shell, use:

$ grep umask /etc/bashrc

The output returns:

# By default, we want umask to get set. This sets it for non-login shell.
umask 002
umask 022

To display the default bash umask for the login shell, use:


$ grep umask /etc/profile

The output returns:

# By default, we want umask to get set. This sets it for login shell
umask 002
umask 022

5.5. SETTING THE UMASK FOR THE CURRENT SHELL SESSION


The following section describes how to set the umask for the current shell session:

Using symbolic values.

Using octal values.

Note that the umask is valid only during the current shell session and reverts to the default umask after
the session is complete.

5.5.1. Setting the umask using symbolic values


The following section describes how to set the umask with symbolic values.

Procedure

To set or remove permissions for the current shell session, you can use minus (-), plus (+), and
equals (=) signs in combination with symbolic values.

$ umask -S u=symbolic_value,g+symbolic_value,o-symbolic_value

Replace symbolic_value for user, group, and others with symbolic values. See Section 5.1.2,
“User file-creation mode mask” for more details.

Example
If your current umask is set to 113 (u=rw-,g=rw-,o=r--) and you want to set it to 037
(u=rwx,g=r--,o=---), use:

$ umask -S u+x,g-w,o=

Note that any permission that is not specified after the equals sign (=) is automatically
prohibited.

To set the same permissions for user, group, and others, use:

$ umask a=symbolic_value

Replace symbolic_value with a symbolic value. See Section 5.1.2, “User file-creation mode mask”
for more details.

Example
To set the umask to 000 (u=rwx,g=rwx,o=rwx), use:


$ umask a=rwx

Note that the umask is only valid for the current shell session.

5.5.2. Setting the umask using octal values


The following section describes how to set the umask with octal values.

Procedure

To set the umask for the current shell session using octal values, use:

$ umask octal_value

Replace octal_value with an octal value. See Section 5.1.2, “User file-creation mode mask” for
more details.

Note that the umask is only valid for the current shell session.

5.6. CHANGING THE DEFAULT UMASK


The following section describes how to:

Change the default bash umask for the non-login shell.

Change the default bash umask for the login shell.

Change the default bash umask for a specific user.

Set default permissions for newly created home directories.

Prerequisites

Root access.

5.6.1. Changing the default umask for the non-login shell


The following section describes how to change the default bash umask for standard users.

Procedure

1. As root, open the /etc/bashrc file in an editor of your choice.

2. Modify the following sections to set a new default bash umask:

if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
    umask 002
else
    umask 022
fi

Replace the default octal value of the umask (002) with another octal value. See Section 5.1.2,
“User file-creation mode mask” for more details.


3. Save the changes.

5.6.2. Changing the default umask for the login shell


The following section describes how to change the default bash umask for the root user.

Procedure

1. As root, open the /etc/profile file in an editor of your choice.

2. Modify the following sections to set a new default bash umask:

if [ $UID -gt 199 ] && [ "`/usr/bin/id -gn`" = "`/usr/bin/id -un`" ]; then
    umask 002
else
    umask 022
fi

Replace the default octal value of the umask (022) with another octal value. See Section 5.1.2,
“User file-creation mode mask” for more details.

3. Save the changes.

5.6.3. Changing the default umask for a specific user


The following section describes how to change the default umask for a specific user.

Procedure

Put the line that specifies the octal value of the umask into the .bashrc file for the particular
user.

$ echo 'umask octal_value' >> /home/username/.bashrc

Replace octal_value with an octal value and replace username with the name of the user. See
Section 5.1.2, “User file-creation mode mask” for more details.
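For example, to give the user sarah (an illustrative account name) a more restrictive default umask of
027, run as root:

# echo 'umask 027' >> /home/sarah/.bashrc

In subsequent sessions, new files created by sarah default to 640 and new directories to 750.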

5.6.4. Setting default UMASK for newly created home directories


The following section describes how to change the permissions that specify the UMASK for newly
created user home directories.

Procedure

1. As root, open the /etc/login.defs file in an editor of your choice.

2. Modify the following section to set a new default UMASK:

# The permission mask is initialized to this value. If not specified,
# the permission mask will be initialized to 022.
UMASK 077

Replace the default octal value (077) with another octal value. See Section 5.1.2, “User file-
creation mode mask” for more details.


3. Save the changes.

5.7. ACCESS CONTROL LIST


Traditionally, each file and directory can have only one user owner and one group owner at a time. If you
want to apply a more specific set of permissions to a file or directory, for example, to allow certain users
outside the owning group to access a specific file within a directory but not other files, without changing
the ownership and permissions of the file or directory, you can use access control lists (ACLs).

The following section describes how to:

Display the current ACL.

Set the ACL.

5.7.1. Displaying the current ACL


The following section describes how to display the current ACL.

Procedure

To display the current ACL for a particular file or directory, use:

$ getfacl file-name

Replace file-name with the name of the file or directory.

5.7.2. Setting the ACL


The following section describes how to set the ACL.

Prerequisites

Root access

Procedure

To set the ACL for a file or directory, use:

# setfacl -m u:username:symbolic_value file-name

Replace username with the name of the user, symbolic_value with a symbolic value, and file-name with
the name of the file or directory. For more information see the setfacl man page.

Example
The following example describes how to modify permissions for the group-project file owned by the
root user that belongs to the root group so that this file is:

Not executable by anyone.

The user andrew has the rw- permission.

The user susan has the --- permission.


Other users have the r-- permission.

Procedure

# setfacl -m u:andrew:rw- group-project


# setfacl -m u:susan:--- group-project

Verification steps

To verify that the user andrew has the rw- permission, the user susan has the --- permission,
and other users have the r-- permission, use:

$ getfacl group-project

The output returns:

# file: group-project
# owner: root
# group: root
user:andrew:rw-
user:susan:---
group::r--
mask::rw-
other::r--
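To later remove individual ACL entries, or all of them, setfacl provides the -x and -b options. Continuing
with the same illustrative file:

# setfacl -x u:susan group-project

# setfacl -b group-project

The -x option removes the entry for the named user, while -b removes all extended ACL entries and
leaves only the standard permission bits.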


CHAPTER 6. USING THE CHRONY SUITE TO CONFIGURE NTP

6.1. INTRODUCTION TO CONFIGURING NTP WITH CHRONY


Accurate timekeeping is important for a number of reasons in IT. In networking for example, accurate
time stamps in packets and logs are required. In Linux systems, the NTP protocol is implemented by a
daemon running in user space.

The user space daemon updates the system clock running in the kernel. The system clock can keep time
by using various clock sources. Usually, the Time Stamp Counter (TSC) is used. The TSC is a CPU
register which counts the number of cycles since it was last reset. It is very fast, has a high resolution,
and there are no interruptions.

In Red Hat Enterprise Linux 8, the NTP protocol is implemented by the chronyd daemon, available from
the repositories in the chrony package.

These sections describe the use of the chrony suite.

6.2. INTRODUCTION TO CHRONY SUITE


chrony is an implementation of the Network Time Protocol (NTP). You can use chrony:

To synchronize the system clock with NTP servers

To synchronize the system clock with a reference clock, for example a GPS receiver

To synchronize the system clock with a manual time input

As an NTPv4(RFC 5905) server or peer to provide a time service to other computers in the
network

chrony performs well in a wide range of conditions, including intermittent network connections, heavily
congested networks, changing temperatures (ordinary computer clocks are sensitive to temperature),
and systems that do not run continuously, or run on a virtual machine.

Typical accuracy between two machines synchronized over the Internet is within a few milliseconds, and
for machines on a LAN within tens of microseconds. Hardware timestamping or a hardware reference
clock may improve accuracy between two machines synchronized to a sub-microsecond level.

chrony consists of chronyd, a daemon that runs in user space, and chronyc, a command line program
which can be used to monitor the performance of chronyd and to change various operating parameters
when it is running.

The chrony daemon, chronyd, can be monitored and controlled by the command line utility chronyc.
This utility provides a command prompt which allows entering a number of commands to query the
current state of chronyd and make changes to its configuration. By default, chronyd accepts only
commands from a local instance of chronyc, but it can be configured to accept monitoring commands
also from remote hosts. The remote access should be restricted.

6.2.1. Using chronyc to control chronyd


To make changes to the local instance of chronyd using the command line utility chronyc in interactive
mode, enter the following command as root:

# chronyc


chronyc must run as root if some of the restricted commands are to be used.

The chronyc command prompt will be displayed as follows:

chronyc>

You can type help to list all of the commands.

The utility can also be invoked in non-interactive command mode if called together with a command as
follows:

chronyc command

NOTE

Changes made using chronyc are not permanent, they will be lost after a chronyd
restart. For permanent changes, modify /etc/chrony.conf.
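Commonly used monitoring commands, in either mode, include tracking and sources. For example:

$ chronyc tracking

$ chronyc sources -v

The tracking command reports how far the system clock is from the selected time source, and sources
lists the NTP servers that chronyd is currently using.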

6.3. DIFFERENCES BETWEEN CHRONY AND NTP


Network Time Protocol (NTP) has two different implementations with similar basic functionality - ntp
and chrony.

Both ntp and chrony can operate as an NTP client in order to synchronize the system clock with NTP
servers and they can operate as an NTP server for other computers in the network. Each implementation
has some unique features. For comparison of ntp and chrony, see Comparison of NTP
implementations.

Configuration specific to an NTP client is identical in most cases. NTP servers are specified with the
server directive. A pool of servers can be specified with the pool directive.

Configuration specific to an NTP server differs in how the client access is controlled. By default, ntpd
responds to client requests from any address. The access can be restricted with the restrict directive,
but it is not possible to disable the access completely if ntpd uses any servers as a client. chronyd
allows no access by default and operates as an NTP client only. To make chrony operate as an NTP
server, you need to specify some addresses within the allow directive.

ntpd and chronyd differ also in the default behavior with respect to corrections of the system clock.
ntpd corrects the clock by step when the offset is larger than 128 milliseconds. If the offset is larger than
1000 seconds, ntpd exits unless it is the first correction of the clock and ntpd is started with the -g
option. chronyd does not step the clock by default, but the default chrony.conf file provided in the
chrony package allows steps in the first three updates of the clock. After that, all corrections are made
slowly by speeding up or slowing down the clock. The chronyc makestep command can be issued to
force chronyd to step the clock at any time.

6.4. MIGRATING TO CHRONY


In Red Hat Enterprise Linux 7, users could choose between ntp and chrony to ensure accurate
timekeeping. For differences between ntp and chrony, ntpd and chronyd, see Differences between
ntpd and chronyd.

In Red Hat Enterprise Linux 8, ntp is no longer supported. chrony is enabled by default. For this reason,
you might need to migrate from ntp to chrony.


Migrating from ntp to chrony is straightforward in most cases. The corresponding names of the
programs, configuration files and services are:

Table 6.1. Corresponding names of the programs, configuration files and services when migrating
from ntp to chrony

ntp name chrony name

/etc/ntp.conf /etc/chrony.conf

/etc/ntp/keys /etc/chrony.keys

ntpd chronyd

ntpq chronyc

ntpd.service chronyd.service

ntp-wait.service chrony-wait.service

The ntpdate and sntp utilities, which are included in the ntp distribution, can be replaced with chronyd
using the -q option or the -t option. The configuration can be specified on the command line to avoid
reading /etc/chrony.conf. For example, instead of running ntpdate ntp.example.com, chronyd could
be started as:

# chronyd -q 'server ntp.example.com iburst'


2018-05-18T12:37:43Z chronyd version 3.3 starting (+CMDMON +NTP +REFCLOCK +RTC
+PRIVDROP +SCFILTER +SIGND +ASYNCDNS +SECHASH +IPV6 +DEBUG)
2018-05-18T12:37:43Z Initial frequency -2.630 ppm
2018-05-18T12:37:48Z System clock wrong by 0.003159 seconds (step)
2018-05-18T12:37:48Z chronyd exiting

The ntpstat utility, which was previously included in the ntp package and supported only ntpd, now
supports both ntpd and chronyd. It is available in the ntpstat package.

6.4.1. Migration script


A Python script called ntp2chrony.py is included in the documentation of the chrony package
(/usr/share/doc/chrony). The script automatically converts an existing ntp configuration to chrony. It
supports the most common directives and options in the ntp.conf file. Any lines that are ignored in the
conversion are included as comments in the generated chrony.conf file for review. Keys that are
specified in the ntp key file, but are not marked as trusted keys in ntp.conf are included in the generated
chrony.keys file as comments.

By default, the script does not overwrite any files. If /etc/chrony.conf or /etc/chrony.keys already exist,
the -b option can be used to rename the file as a backup. The script supports other options. The --help
option prints all supported options.

An example of an invocation of the script with the default ntp.conf provided in the ntp package is:

# python3 /usr/share/doc/chrony/ntp2chrony.py -b -v
Reading /etc/ntp.conf


Reading /etc/ntp/crypto/pw
Reading /etc/ntp/keys
Writing /etc/chrony.conf
Writing /etc/chrony.keys

The only directive ignored in this case is disable monitor, which has a chrony equivalent in the
noclientlog directive, but it was included in the default ntp.conf only to mitigate an amplification attack.

The generated chrony.conf file typically includes a number of allow directives corresponding to the
restrict lines in ntp.conf. If you do not want to run chronyd as an NTP server, remove all allow directives
from chrony.conf.

6.4.2. Timesync role


Note that using the timesync role on your Red Hat Enterprise Linux 7 system facilitates the migration to
chrony, because you can use the same playbook on all versions of RHEL starting with RHEL 6 regardless
of whether the system uses ntp or chrony to implement the NTP protocol.

Additional resources

For a detailed reference on timesync role variables, install the rhel-system-roles package, and
see the README.md or README.html files in the /usr/share/doc/rhel-system-
roles/timesync directory.

For more information on RHEL System Roles, see Introduction to RHEL System Roles .

6.5. CONFIGURING CHRONY


The default configuration file for chronyd is /etc/chrony.conf. The -f option can be used to specify an
alternate configuration file path. See the chrony.conf(5) man page for further options. For a complete
list of the directives that can be used see The chronyd configuration file .

Below is a selection of chronyd configuration options:

Comments
Comments should be preceded by #, %, ; or !
allow
Optionally specify a host, subnet, or network from which to allow NTP connections to a machine
acting as NTP server. The default is not to allow connections.
Examples:

allow 192.0.2.0/24

Use this command to grant access to a specific network.

allow 2001:0db8:85a3::8a2e:0370:7334

Use this command to grant access to a specific IPv6 address.

The UDP port number 123 needs to be open in the firewall in order to allow the client access:

# firewall-cmd --zone=public --add-port=123/udp


If you want to open port 123 permanently, use the --permanent option:

# firewall-cmd --permanent --zone=public --add-port=123/udp

cmdallow
This is similar to the allow directive (see section allow), except that it allows control access (rather
than NTP client access) to a particular subnet or host. (By "control access" is meant that chronyc
can be run on those hosts and successfully connect to chronyd on this computer.) The syntax is
identical. There is also a cmddeny all directive with similar behavior to the cmdallow all directive.
dumpdir
Path to the directory to save the measurement history across restarts of chronyd (assuming no
changes are made to the system clock behavior whilst it is not running). If this capability is to be used
(via the dumponexit command in the configuration file, or the dump command in chronyc), the
dumpdir command should be used to define the directory where the measurement histories are
saved.
dumponexit
If this command is present, it indicates that chronyd should save the measurement history for each
of its time sources recorded whenever the program exits. (See the dumpdir command above).
hwtimestamp
The hwtimestamp directive enables hardware timestamping for extremely accurate synchronization.
For more details, see the chrony.conf(5) manual page.
local
The local keyword is used to allow chronyd to appear synchronized to real time from the viewpoint
of clients polling it, even if it has no current synchronization source. This option is normally used on
the "master" computer in an isolated network, where several computers are required to synchronize
to one another, and the "master" is kept in line with real time by manual input.
An example of the command is:

local stratum 10

A large value of 10 indicates that the clock is so many hops away from a reference clock that its time
is unreliable. If the computer ever has access to another computer which is ultimately synchronized to
a reference clock, it will almost certainly be at a stratum less than 10. Therefore, the choice of a high
value like 10 for the local command prevents the machine’s own time from ever being confused with
real time, were it ever to leak out to clients that have visibility of real servers.

log
The log command indicates that certain information is to be logged. It accepts the following options:
measurements
This option logs the raw NTP measurements and related information to a file called
measurements.log.
statistics
This option logs information about the regression processing to a file called statistics.log.
tracking
This option logs changes to the estimate of the system’s gain or loss rate, and any slews made, to
a file called tracking.log.
rtc
This option logs information about the system’s real-time clock.


refclocks
This option logs the raw and filtered reference clock measurements to a file called refclocks.log.
tempcomp
This option logs the temperature measurements and system rate compensations to a file called
tempcomp.log.
The log files are written to the directory specified by the logdir command.

An example of the command is:

log measurements statistics tracking

logdir
This directive allows the directory where log files are written to be specified.
An example of the use of this directive is:

logdir /var/log/chrony

makestep
Normally chronyd will cause the system to gradually correct any time offset, by slowing down or
speeding up the clock as required. In certain situations, the system clock may be so far adrift that this
slewing process would take a very long time to correct the system clock. This directive forces
chronyd to step the system clock if the adjustment is larger than a threshold value, but only if there were
no more clock updates since chronyd was started than a specified limit (a negative value can be used
to disable the limit). This is particularly useful when using a reference clock, because the initstepslew
directive only works with NTP sources.
An example of the use of this directive is:

makestep 1000 10

This would step the system clock if the adjustment is larger than 1000 seconds, but only in the first
ten clock updates.

maxchange
This directive sets the maximum allowed offset corrected on a clock update. The check is performed
only after the specified number of updates to allow a large initial adjustment of the system clock.
When an offset larger than the specified maximum occurs, it will be ignored for the specified number
of times and then chronyd will give up and exit (a negative value can be used to never exit). In both
cases a message is sent to syslog.
An example of the use of this directive is:

maxchange 1000 1 2

After the first clock update, chronyd checks the offset on every clock update; it will ignore two
adjustments larger than 1000 seconds and exit on the next one.

maxupdateskew
One of chronyd's tasks is to work out how fast or slow the computer’s clock runs relative to its
reference sources. In addition, it computes an estimate of the error bounds around the estimated
value.


If the range of error is too large, it indicates that the measurements have not settled down yet, and
that the estimated gain or loss rate is not very reliable.

The maxupdateskew parameter is the threshold for determining whether an estimate is too
unreliable to be used. By default, the threshold is 1000 ppm.

The format of the syntax is:

maxupdateskew skew-in-ppm

Typical values for skew-in-ppm might be 100 for a dial-up connection to servers over a telephone
line, and 5 or 10 for a computer on a LAN.
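
For example, a configuration line using the LAN value mentioned above (an illustrative choice, not a recommendation) is:

maxupdateskew 10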

It should be noted that this is not the only means of protection against using unreliable estimates. At
all times, chronyd keeps track of both the estimated gain or loss rate, and the error bound on the
estimate. When a new estimate is generated following another measurement from one of the
sources, a weighted combination algorithm is used to update the master estimate. So if chronyd has
an existing highly-reliable master estimate and a new estimate is generated which has large error
bounds, the existing master estimate will dominate in the new master estimate.

minsources
The minsources directive sets the minimum number of sources that need to be considered as
selectable in the source selection algorithm before the local clock is updated.
The format of the syntax is:

minsources number-of-sources

By default, number-of-sources is 1. Setting minsources to a larger number can be used to improve the
reliability, because multiple sources will need to correspond with each other.
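
For example, to require at least three selectable sources before the clock is updated (an illustrative value), use:

minsources 3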

noclientlog
This directive, which takes no arguments, specifies that client accesses are not to be logged.
Normally they are logged, allowing statistics to be reported using the clients command in chronyc
and enabling the clients to use interleaved mode with the xleave option in the server directive.
reselectdist
When chronyd selects a synchronization source from the available sources, it prefers the one with
minimum synchronization distance. However, to avoid frequent reselecting when there are sources
with similar distance, a fixed distance is added to the distance for sources that are currently not
selected. This can be set with the reselectdist option. By default, the distance is 100 microseconds.
The format of the syntax is:

reselectdist dist-in-seconds
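
For example, to double the default threshold to 200 microseconds (an illustrative value), use:

reselectdist 0.0002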

stratumweight
The stratumweight directive sets how much distance should be added per stratum to the
synchronization distance when chronyd selects the synchronization source from available sources.
The format of the syntax is:

stratumweight dist-in-seconds

By default, dist-in-seconds is 1 millisecond. This means that sources with lower stratum are usually
preferred to sources with higher stratum even when their distance is significantly worse. Setting
stratumweight to 0 makes chronyd ignore stratum when selecting the source.
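
For example, to ignore stratum entirely when selecting the source, as described above, use:

stratumweight 0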

rtcfile
The rtcfile directive defines the name of the file in which chronyd can save parameters associated
with tracking the accuracy of the system’s real-time clock (RTC).
The format of the syntax is:

rtcfile /var/lib/chrony/rtc

chronyd saves information in this file when it exits and when the writertc command is issued in
chronyc. The information saved is the RTC’s error at some epoch, that epoch (in seconds since
January 1 1970), and the rate at which the RTC gains or loses time. Not all real-time clocks are
supported as their code is system-specific. Note that if this directive is used, the real-time clock
should not be manually adjusted, because random adjustments would interfere with chrony's need to
measure the rate at which the real-time clock drifts.

rtcsync
The rtcsync directive is present in the /etc/chrony.conf file by default. It informs the kernel that the
system clock is kept synchronized, and the kernel then updates the real-time clock every 11 minutes.

6.5.1. Configuring chrony for security


chronyc can access chronyd in two ways:

Internet Protocol, IPv4 or IPv6.

Unix domain socket, which is accessible locally by the root or chrony user.

By default, chronyc connects to the Unix domain socket. The default path is
/var/run/chrony/chronyd.sock. If this connection fails, which can happen for example when chronyc is
running under a non-root user, chronyc tries to connect to 127.0.0.1 and then ::1.

Only the following monitoring commands, which do not affect the behavior of chronyd, are allowed from
the network:

activity

manual list

rtcdata

smoothing

sources

sourcestats

tracking

waitsync

The set of hosts from which chronyd accepts these commands can be configured with the cmdallow
directive in the configuration file of chronyd, or the cmdallow command in chronyc. By default, the
commands are accepted only from localhost (127.0.0.1 or ::1).

All other commands are allowed only through the Unix domain socket. When sent over the network,
chronyd responds with a Not authorised error, even if it is from localhost.

Accessing chronyd remotely with chronyc

1. Allow access from both IPv4 and IPv6 addresses by adding the following to the
/etc/chrony.conf file:

bindcmdaddress 0.0.0.0

or

bindcmdaddress ::

2. Allow commands from the remote IP address, network, or subnet by using the cmdallow
directive.
Add the following content to the /etc/chrony.conf file:

cmdallow 192.168.1.0/24

3. Open port 323 in the firewall to connect from a remote system.

# firewall-cmd --zone=public --add-port=323/udp

If you want to open port 323 permanently, use the --permanent option:

# firewall-cmd --permanent --zone=public --add-port=323/udp

Note that the allow directive is for NTP access whereas the cmdallow directive is to enable receiving of
remote commands. It is possible to make these changes temporarily using chronyc running locally. Edit
the configuration file to make permanent changes.

6.6. USING CHRONY

6.6.1. Installing chrony


The chrony suite is installed by default on Red Hat Enterprise Linux. To ensure that it is, run the
following command as root:

# yum install chrony

The default location for the chrony daemon is /usr/sbin/chronyd. The command line utility will be
installed to /usr/bin/chronyc.

6.6.2. Checking the status of chronyd


To check the status of chronyd, issue the following command:


$ systemctl status chronyd


chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled)
Active: active (running) since Wed 2013-06-12 22:23:16 CEST; 11h ago

6.6.3. Starting chronyd


To start chronyd, issue the following command as root:

# systemctl start chronyd

To ensure chronyd starts automatically at system start, issue the following command as root:

# systemctl enable chronyd

6.6.4. Stopping chronyd


To stop chronyd, issue the following command as root:

# systemctl stop chronyd

To prevent chronyd from starting automatically at system start, issue the following command as root:

# systemctl disable chronyd

6.6.5. Checking if chrony is synchronized


To check if chrony is synchronized, make use of the tracking, sources, and sourcestats commands.

6.6.5.1. Checking chrony tracking

To check chrony tracking, issue the following command:

$ chronyc tracking
Reference ID : CB00710F (foo.example.net)
Stratum : 3
Ref time (UTC) : Fri Jan 27 09:49:17 2017
System time : 0.000006523 seconds slow of NTP time
Last offset : -0.000006747 seconds
RMS offset : 0.000035822 seconds
Frequency : 3.225 ppm slow
Residual freq : 0.000 ppm
Skew : 0.129 ppm
Root delay : 0.013639022 seconds
Root dispersion : 0.001100737 seconds
Update interval : 64.2 seconds
Leap status : Normal

The fields are as follows:

Reference ID

This is the reference ID and name (or IP address) if available, of the server to which the computer is
currently synchronized. Reference ID is a hexadecimal number to avoid confusion with IPv4
addresses.
Stratum
The stratum indicates how many hops away from a computer with an attached reference clock we
are. Such a computer is a stratum-1 computer, so the computer in the example is two hops away (that
is to say, foo.example.net is a stratum-2 server and is synchronized from a stratum-1 server).
Ref time
This is the time (UTC) at which the last measurement from the reference source was processed.
System time
In normal operation, chronyd never steps the system clock, because any jump in the timescale can
have adverse consequences for certain application programs. Instead, any error in the system clock is
corrected by slightly speeding up or slowing down the system clock until the error has been removed,
and then returning to the system clock’s normal speed. A consequence of this is that there will be a
period when the system clock (as read by other programs using the gettimeofday() system call, or by
the date command in the shell) will be different from chronyd's estimate of the current true time
(which it reports to NTP clients when it is operating in server mode). The value reported on this line is
the difference due to this effect.
Last offset
This is the estimated local offset on the last clock update.
RMS offset
This is a long-term average of the offset value.
Frequency
The "frequency" is the rate by which the system’s clock would be wrong if chronyd was not
correcting it. It is expressed in ppm (parts per million). For example, a value of 1 ppm would mean that
when the system’s clock thinks it has advanced 1 second, it has actually advanced by 1.000001
seconds relative to true time.
Residual freq
This shows the "residual frequency" for the currently selected reference source. This reflects any
difference between what the measurements from the reference source indicate the frequency
should be and the frequency currently being used.
The reason this is not always zero is that a smoothing procedure is applied to the frequency. Each
time a measurement from the reference source is obtained and a new residual frequency computed,
the estimated accuracy of this residual is compared with the estimated accuracy (see skew) of the
existing frequency value. A weighted average is computed for the new frequency, with weights
depending on these accuracies. If the measurements from the reference source follow a consistent
trend, the residual will be driven to zero over time.

Skew
This is the estimated error bound on the frequency.
Root delay
This is the total of the network path delays to the stratum-1 computer from which the computer is
ultimately synchronized. Root delay values are printed in nanosecond resolution. In certain extreme
situations, this value can be negative. (This can arise in a symmetric peer arrangement where the
computers’ frequencies are not tracking each other and the network delay is very short relative to the
turn-around time at each computer.)
Root dispersion

This is the total dispersion accumulated through all the computers back to the stratum-1 computer
from which the computer is ultimately synchronized. Dispersion is due to system clock resolution,
statistical measurement variations etc. Root dispersion values are printed in nanosecond resolution.
Leap status
This is the leap status, which can be Normal, Insert second, Delete second or Not synchronized.

6.6.5.2. Checking chrony sources

The sources command displays information about the current time sources that chronyd is accessing.

The optional argument -v can be specified, meaning verbose. In this case, extra caption lines are shown
as a reminder of the meanings of the columns.

$ chronyc sources
210 Number of sources = 3
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
#* GPS0                          0   4   377    11   -479ns[ -621ns] +/- 134ns
^? a.b.c                         2   6   377    23   -923us[ -924us] +/-  43ms
^+ d.e.f                         1   6   377    21  -2629us[-2619us] +/-  86ms

The columns are as follows:

M
This indicates the mode of the source. ^ means a server, = means a peer and # indicates a locally
connected reference clock.
S
This column indicates the state of the sources. "*" indicates the source to which chronyd is currently
synchronized. "+" indicates acceptable sources which are combined with the selected source. "-"
indicates acceptable sources which are excluded by the combining algorithm. "?" indicates sources to
which connectivity has been lost or whose packets do not pass all tests. "x" indicates a clock which
chronyd thinks is a falseticker (its time is inconsistent with a majority of other sources). "~" indicates
a source whose time appears to have too much variability. The "?" condition is also shown at start-
up, until at least 3 samples have been gathered from it.
Name/IP address
This shows the name or the IP address of the source, or reference ID for reference clock.
Stratum
This shows the stratum of the source, as reported in its most recently received sample. Stratum 1
indicates a computer with a locally attached reference clock. A computer that is synchronized to a
stratum 1 computer is at stratum 2. A computer that is synchronized to a stratum 2 computer is at
stratum 3, and so on.
Poll
This shows the rate at which the source is being polled, as a base-2 logarithm of the interval in
seconds. Thus, a value of 6 would indicate that a measurement is being made every 64 seconds.
chronyd automatically varies the polling rate in response to prevailing conditions.

Reach
This shows the source’s reach register printed as an octal number. The register has 8 bits and is
updated on every received or missed packet from the source. A value of 377 indicates that a valid
reply was received for all of the last eight transmissions.


LastRx
This column shows how long ago the last sample was received from the source. This is normally in
seconds. The letters m, h, d or y indicate minutes, hours, days or years. A value of 10 years indicates
there were no samples received from this source yet.
Last sample
This column shows the offset between the local clock and the source at the last measurement. The
number in the square brackets shows the actual measured offset. This may be suffixed by ns
(indicating nanoseconds), us (indicating microseconds), ms (indicating milliseconds), or s
(indicating seconds). The number to the left of the square brackets shows the original measurement,
adjusted to allow for any slews applied to the local clock since. The number following the +/- indicator
shows the margin of error in the measurement. Positive offsets indicate that the local clock is ahead
of the source.

6.6.5.3. Checking chrony source statistics

The sourcestats command displays information about the drift rate and offset estimation process for
each of the sources currently being examined by chronyd.

The optional argument -v can be specified, meaning verbose. In this case, extra caption lines are shown
as a reminder of the meanings of the columns.

$ chronyc sourcestats
210 Number of sources = 1
Name/IP Address NP NR Span Frequency Freq Skew Offset Std Dev
===============================================================================

abc.def.ghi 11 5 46m -0.001 0.045 1us 25us

The columns are as follows:

Name/IP address
This is the name or IP address of the NTP server (or peer) or reference ID of the reference clock to
which the rest of the line relates.
NP
This is the number of sample points currently being retained for the server. The drift rate and current
offset are estimated by performing a linear regression through these points.
NR
This is the number of runs of residuals having the same sign following the last regression. If this
number starts to become too small relative to the number of samples, it indicates that a straight line
is no longer a good fit to the data. If the number of runs is too low, chronyd discards older samples
and re-runs the regression until the number of runs becomes acceptable.
Span
This is the interval between the oldest and newest samples. If no unit is shown the value is in seconds.
In the example, the interval is 46 minutes.
Frequency
This is the estimated residual frequency for the server, in parts per million. In this case, the
computer’s clock is estimated to be running 1 part in 10⁹ slow relative to the server.
Freq Skew
This is the estimated error bounds on Freq (again in parts per million).
Offset


This is the estimated offset of the source.


Std Dev
This is the estimated sample standard deviation.

6.6.6. Manually Adjusting the System Clock


To step the system clock immediately, bypassing any adjustments in progress by slewing, issue the
following command as root:

# chronyc makestep

If the rtcfile directive is used, the real-time clock should not be manually adjusted. Random adjustments
would interfere with chrony's need to measure the rate at which the real-time clock drifts.

6.7. SETTING UP CHRONY FOR DIFFERENT ENVIRONMENTS

6.7.1. Setting up chrony for a system in an isolated network


For a network that is never connected to the Internet, one computer is selected to be the master
timeserver. The other computers are either direct clients of the master, or clients of clients. On the
master, the drift file must be manually set with the average rate of drift of the system clock. If the
master is rebooted, it will obtain the time from surrounding systems and calculate an average to set its
system clock. Thereafter it resumes applying adjustments based on the drift file. The drift file will be
updated automatically when the settime command is used.

On the system selected to be the master, using a text editor running as root, edit /etc/chrony.conf as
follows:

driftfile /var/lib/chrony/drift
commandkey 1
keyfile /etc/chrony.keys
initstepslew 10 client1 client3 client6
local stratum 8
manual
allow 192.0.2.0

Where 192.0.2.0 is the network or subnet address from which the clients are allowed to connect.

On the systems selected to be direct clients of the master, using a text editor running as root, edit the
/etc/chrony.conf as follows:

server master
driftfile /var/lib/chrony/drift
logdir /var/log/chrony
log measurements statistics tracking
keyfile /etc/chrony.keys
commandkey 24
local stratum 10
initstepslew 20 master
allow 192.0.2.123

Where 192.0.2.123 is the address of the master, and master is the host name of the master. Clients with
this configuration will resynchronize the master if it restarts.


On the client systems which are not to be direct clients of the master, the /etc/chrony.conf file should
be the same except that the local and allow directives should be omitted.

In an isolated network, you can also use the local directive that enables a local reference mode, which
allows chronyd operating as an NTP server to appear synchronized to real time, even when it was never
synchronized or the last update of the clock happened a long time ago.

To allow multiple servers in the network to use the same local configuration and to be synchronized to
one another, without confusing clients that poll more than one server, use the orphan option of the
local directive which enables the orphan mode. Each server needs to be configured to poll all other
servers with local. This ensures that only the server with the smallest reference ID has the local
reference active and other servers are synchronized to it. When the server fails, another one will take
over.
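
A minimal sketch of such a configuration on each of the servers (the stratum value is illustrative):

local stratum 8 orphan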

6.8. CHRONY WITH HW TIMESTAMPING

6.8.1. Understanding hardware timestamping


Hardware timestamping is a feature supported by some Network Interface Controllers (NICs) which
provides accurate timestamping of incoming and outgoing packets. NTP timestamps are usually created
by the kernel and chronyd with the use of the system clock. However, when HW timestamping is
enabled, the NIC uses its own clock to generate the timestamps when packets are entering or leaving
the link layer or the physical layer. When used with NTP, hardware timestamping can significantly
improve the accuracy of synchronization. For best accuracy, both NTP servers and NTP clients need to
use hardware timestamping. Under ideal conditions, a sub-microsecond accuracy may be possible.

Another protocol for time synchronization that uses hardware timestamping is PTP.

Unlike NTP, PTP relies on assistance in network switches and routers. If you want to reach the best
accuracy of synchronization, use PTP on networks that have switches and routers with PTP support, and
prefer NTP on networks that do not have such switches and routers.

6.8.2. Verifying support for hardware timestamping


To verify that hardware timestamping with NTP is supported by an interface, use the ethtool -T
command. An interface can be used for hardware timestamping with NTP if ethtool lists the
SOF_TIMESTAMPING_TX_HARDWARE and SOF_TIMESTAMPING_TX_SOFTWARE capabilities
and also the HWTSTAMP_FILTER_ALL filter mode.

Example 6.1. Verifying support for hardware timestamping on a specific interface

# ethtool -T eth0

Output:

Timestamping parameters for eth0:


Capabilities:
hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE)
software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE)
hardware-receive (SOF_TIMESTAMPING_RX_HARDWARE)
software-receive (SOF_TIMESTAMPING_RX_SOFTWARE)
software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
hardware-raw-clock (SOF_TIMESTAMPING_RAW_HARDWARE)
PTP Hardware Clock: 0


Hardware Transmit Timestamp Modes:


off (HWTSTAMP_TX_OFF)
on (HWTSTAMP_TX_ON)
Hardware Receive Filter Modes:
none (HWTSTAMP_FILTER_NONE)
all (HWTSTAMP_FILTER_ALL)
ptpv1-l4-sync (HWTSTAMP_FILTER_PTP_V1_L4_SYNC)
ptpv1-l4-delay-req (HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ)
ptpv2-l4-sync (HWTSTAMP_FILTER_PTP_V2_L4_SYNC)
ptpv2-l4-delay-req (HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ)
ptpv2-l2-sync (HWTSTAMP_FILTER_PTP_V2_L2_SYNC)
ptpv2-l2-delay-req (HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ)
ptpv2-event (HWTSTAMP_FILTER_PTP_V2_EVENT)
ptpv2-sync (HWTSTAMP_FILTER_PTP_V2_SYNC)
ptpv2-delay-req (HWTSTAMP_FILTER_PTP_V2_DELAY_REQ)

6.8.3. Enabling hardware timestamping


To enable hardware timestamping, use the hwtimestamp directive in the /etc/chrony.conf file. The
directive can either specify a single interface, or a wildcard character can be used to enable hardware
timestamping on all interfaces that support it. Use the wildcard specification only if no other
application, such as ptp4l from the linuxptp package, is using hardware timestamping on an interface.
Multiple hwtimestamp directives are allowed in the chrony configuration file.

Example 6.2. Enabling hardware timestamping by using the hwtimestamp directive

hwtimestamp eth0
hwtimestamp eth1
hwtimestamp *

6.8.4. Configuring client polling interval


The default range of a polling interval (64-1024 seconds) is recommended for servers on the Internet.
For local servers and hardware timestamping, a shorter polling interval needs to be configured in order
to minimize offset of the system clock.

The following directive in /etc/chrony.conf specifies a local NTP server using one second polling
interval:

server ntp.local minpoll 0 maxpoll 0

6.8.5. Enabling interleaved mode


NTP servers that are not hardware NTP appliances, but rather general purpose computers running a
software NTP implementation, like chrony, will get a hardware transmit timestamp only after sending a
packet. This behavior prevents the server from saving the timestamp in the packet to which it
corresponds. In order to enable NTP clients receiving transmit timestamps that were generated after
the transmission, configure the clients to use the NTP interleaved mode by adding the xleave option to
the server directive in /etc/chrony.conf:

server ntp.local minpoll 0 maxpoll 0 xleave


6.8.6. Configuring server for large number of clients


The default server configuration allows at most a few thousand clients to use the interleaved mode
concurrently. To configure the server for a larger number of clients, increase the clientloglimit directive
in /etc/chrony.conf. This directive specifies the maximum size of memory allocated for logging of
clients' access on the server:

clientloglimit 100000000

6.8.7. Verifying hardware timestamping


To verify that the interface has successfully enabled hardware timestamping, check the system log. The
log should contain a message from chronyd for each interface with successfully enabled hardware
timestamping.

Example 6.3. Log messages for interfaces with enabled hardware timestamping

chronyd[4081]: Enabled HW timestamping on eth0


chronyd[4081]: Enabled HW timestamping on eth1

When chronyd is configured as an NTP client or peer, you can have the transmit and receive
timestamping modes and the interleaved mode reported for each NTP source by the chronyc ntpdata
command:

Example 6.4. Reporting the transmit, receive timestamping and interleaved mode for each NTP
source

# chronyc ntpdata

Output:

Remote address : 203.0.113.15 (CB00710F)


Remote port : 123
Local address : 203.0.113.74 (CB00714A)
Leap status : Normal
Version : 4
Mode : Server
Stratum : 1
Poll interval : 0 (1 seconds)
Precision : -24 (0.000000060 seconds)
Root delay : 0.000015 seconds
Root dispersion : 0.000015 seconds
Reference ID : 47505300 (GPS)
Reference time : Wed May 03 13:47:45 2017
Offset : -0.000000134 seconds
Peer delay : 0.000005396 seconds
Peer dispersion : 0.000002329 seconds
Response time : 0.000152073 seconds
Jitter asymmetry: +0.00
NTP tests : 111 111 1111
Interleaved : Yes
Authenticated : No
TX timestamping : Hardware
RX timestamping : Hardware
Total TX : 27
Total RX : 27
Total valid RX : 27

Example 6.5. Reporting the stability of NTP measurements

# chronyc sourcestats

With hardware timestamping enabled, stability of NTP measurements should be in tens or hundreds
of nanoseconds, under normal load. This stability is reported in the Std Dev column of the output of
the chronyc sourcestats command:

Output:

210 Number of sources = 1


Name/IP Address NP NR Span Frequency Freq Skew Offset Std Dev
ntp.local 12 7 11 +0.000 0.019 +0ns 49ns

6.8.8. Configuring PTP-NTP bridge


If a highly accurate Precision Time Protocol (PTP) grandmaster is available in a network that does not
have switches or routers with PTP support, a computer may be dedicated to operate as a PTP slave and
a stratum-1 NTP server. Such a computer needs to have two or more network interfaces, and be close to
the grandmaster or have a direct connection to it. This will ensure highly accurate synchronization in the
network.

Configure the ptp4l and phc2sys programs from the linuxptp packages to use one interface to
synchronize the system clock using PTP.

Configure chronyd to provide the system time using the other interface:

Example 6.6. Configuring chronyd to provide the system time using the other interface

bindaddress 203.0.113.74
hwtimestamp eth1
local stratum 1

6.9. ACHIEVING SOME SETTINGS PREVIOUSLY SUPPORTED BY NTP IN CHRONY

Some settings that were supported by ntp in the previous major version of Red Hat Enterprise Linux are
not supported by chrony. This section lists such settings and describes ways to achieve them on a
system with chrony.

6.9.1. Monitoring by ntpq and ntpdc


chronyd cannot be monitored by the ntpq and ntpdc utilities from the ntp distribution, because
chrony does not support the NTP modes 6 and 7. It supports a different protocol and chronyc is the
client implementation. For more information, see the chronyc(1) man page.

To monitor the status of the system clock synchronized by chronyd, you can:

Use the tracking command

Use the ntpstat utility, which supports chrony and provides a similar output as it used to with
ntpd

Example 6.7. Using the tracking command

$ chronyc -n tracking
Reference ID : 0A051B0A (10.5.27.10)
Stratum : 2
Ref time (UTC) : Thu Mar 08 15:46:20 2018
System time : 0.000000338 seconds slow of NTP time
Last offset : +0.000339408 seconds
RMS offset : 0.000339408 seconds
Frequency : 2.968 ppm slow
Residual freq : +0.001 ppm
Skew : 3.336 ppm
Root delay : 0.157559142 seconds
Root dispersion : 0.001339232 seconds
Update interval : 64.5 seconds
Leap status : Normal

Example 6.8. Using the ntpstat utility

$ ntpstat
synchronised to NTP server (10.5.27.10) at stratum 2
time correct to within 80 ms
polling server every 64 s

6.9.2. Using authentication mechanism based on public key cryptography


In Red Hat Enterprise Linux 7, ntp supported Autokey, which is an authentication mechanism based on
public key cryptography. Autokey is not supported in chronyd.

On a Red Hat Enterprise Linux 8 system, it is recommended to use symmetric keys instead. Generate
the keys with the chronyc keygen command. A client and server need to share a key specified in
/etc/chrony.keys. The client can enable authentication using the key option in the server, pool, or peer
directive.
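
A minimal sketch of the workflow (the key ID, hash algorithm, key length, and server name are illustrative):

# chronyc keygen 1 SHA256 256

Add the printed key line to /etc/chrony.keys on both the client and the server, and then reference the key ID on the client, for example:

server ntp.example.com iburst key 1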

6.9.3. Using ephemeral symmetric associations


In Red Hat Enterprise Linux 7, ntpd supported ephemeral symmetric associations, which can be
mobilized by packets from peers which are not specified in the ntp.conf configuration file. In Red Hat
Enterprise Linux 8, chronyd needs all peers to be specified in chrony.conf. Ephemeral symmetric
associations are not supported.


Note that using the client/server mode enabled by the server or pool directive is more secure
compared to the symmetric mode enabled by the peer directive.
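
For example, a peer explicitly specified in chrony.conf might look as follows (the host name is illustrative):

peer ntp2.example.com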

6.9.4. Multicast/broadcast client


Red Hat Enterprise Linux 7 supported the broadcast/multicast NTP mode, which simplifies
configuration of clients. With this mode, clients can be configured to just listen for packets sent to a
multicast/broadcast address instead of listening for specific names or addresses of individual servers,
which may change over time.

In Red Hat Enterprise Linux 8, chronyd does not support the broadcast/multicast mode. The main
reason is that it is less accurate and less secure than the ordinary client/server and symmetric modes.

There are several options for migrating from an NTP broadcast/multicast setup:

Configure DNS to translate a single name, such as ntp.example.com, to multiple addresses of
different servers

Clients can have a static configuration using only a single pool directive to synchronize with
multiple servers. If a server from the pool becomes unreachable, or otherwise unsuitable for
synchronization, the clients automatically replace it with another server from the pool.

Distribute the list of NTP servers over DHCP


When NetworkManager gets a list of NTP servers from the DHCP server, chronyd is
automatically configured to use them. This feature can be disabled by adding PEERNTP=no to
the /etc/sysconfig/network file.

Use the Precision Time Protocol (PTP)


This option is suitable mainly for environments where servers change frequently, or if a larger
group of clients needs to be able to synchronize to each other without having a designated
server.

PTP was designed for multicast messaging and works similarly to the NTP broadcast mode. A
PTP implementation is available in the linuxptp package.

PTP normally requires hardware timestamping and support in network switches to perform well.
However, PTP is expected to work better than NTP in the broadcast mode even with software
timestamping and no support in network switches.

In networks with a very large number of PTP slaves in one communication path, it is recommended
to configure the PTP slaves with the hybrid_e2e option in order to reduce the amount of
network traffic generated by the slaves. You can configure a computer running chronyd as an
NTP client, and possibly NTP server, to operate also as a PTP grandmaster to distribute
synchronized time to a large number of computers using multicast messaging.

6.10. ADDITIONAL RESOURCES


The following sources of information provide additional resources regarding chrony.

6.10.1. Installed Documentation


chronyc(1) man page — Describes the chronyc command-line interface tool including
commands and command options.

chronyd(8) man page — Describes the chronyd daemon including commands and command
options.


chrony.conf(5) man page — Describes the chrony configuration file.

6.10.2. Online Documentation


https://chrony.tuxfamily.org/doc/3.3/chronyc.html

https://chrony.tuxfamily.org/doc/3.3/chronyd.html

https://chrony.tuxfamily.org/doc/3.3/chrony.conf.html

For answers to FAQs, see https://chrony.tuxfamily.org/faq.html

6.11. MANAGING TIME SYNCHRONIZATION USING RHEL SYSTEM ROLES

You can manage time synchronization on multiple target machines using the timesync role.

The timesync role installs and configures an NTP or PTP implementation to operate as an NTP client or
PTP slave in order to synchronize the system clock with NTP servers or grandmasters in PTP domains.

Note that using the timesync role also facilitates migration to chrony, because you can use the same
playbook on all versions of Red Hat Enterprise Linux starting with RHEL 6 regardless of whether the
system uses ntp or chrony to implement the NTP protocol.


WARNING

The timesync role replaces the configuration of the given or detected provider
service on the managed host. Previous settings are lost, even if they are not
specified in the role variables. The only preserved setting is the choice of provider if
the timesync_ntp_provider variable is not defined.

The following example shows how to apply the timesync role in a situation with just one pool of servers.

Example 6.9. An example playbook applying the timesync role for a single pool of servers

---
- hosts: timesync-test
  vars:
    timesync_ntp_servers:
      - hostname: 2.rhel.pool.ntp.org
        pool: yes
        iburst: yes
  roles:
    - rhel-system-roles.timesync

Additional resources

For a detailed reference on timesync role variables, install the rhel-system-roles package, and
see the README.md or README.html files in the /usr/share/doc/rhel-system-
roles/timesync directory.

For more information on RHEL System Roles, see Introduction to RHEL System Roles.


CHAPTER 7. USING SECURE COMMUNICATIONS BETWEEN TWO SYSTEMS WITH OPENSSH

SSH (Secure Shell) is a protocol which provides secure communications between two systems using a
client-server architecture and allows users to log in to server host systems remotely. Unlike other
remote communication protocols, such as FTP or Telnet, SSH encrypts the login session, which prevents
intruders from collecting unencrypted passwords from the connection.

Red Hat Enterprise Linux includes the basic OpenSSH packages: the general openssh package, the
openssh-server package and the openssh-clients package. Note that the OpenSSH packages require
the OpenSSL package openssl-libs, which installs several important cryptographic libraries that enable
OpenSSH to provide encrypted communications.

7.1. SSH AND OPENSSH


SSH (Secure Shell) is a program for logging into a remote machine and executing commands on that
machine. The SSH protocol provides secure encrypted communications between two untrusted hosts
over an insecure network. You can also forward X11 connections and arbitrary TCP/IP ports over the
secure channel.

The SSH protocol mitigates security threats, such as interception of communication between two
systems and impersonation of a particular host, when you use it for remote shell login or file copying.
This is because the SSH client and server use digital signatures to verify their identities. Additionally, all
communication between the client and server systems is encrypted.

OpenSSH is an implementation of the SSH protocol supported by a number of Linux, UNIX, and similar
operating systems. It includes the core files necessary for both the OpenSSH client and server. The
OpenSSH suite consists of the following user-space tools:

ssh is a remote login program (SSH client)

sshd is an OpenSSH SSH daemon

scp is a secure remote file copy program

sftp is a secure file transfer program

ssh-agent is an authentication agent for caching private keys

ssh-add adds private key identities to ssh-agent

ssh-keygen generates, manages, and converts authentication keys for ssh

ssh-copy-id is a script that adds local public keys to the authorized_keys file on a remote SSH
server

ssh-keyscan gathers SSH public host keys

Two versions of SSH currently exist: version 1, and the newer version 2. The OpenSSH suite in Red Hat
Enterprise Linux 8 supports only SSH version 2, which has an enhanced key-exchange algorithm not
vulnerable to known exploits in version 1.

OpenSSH, as one of the RHEL core cryptographic subsystems, uses system-wide crypto policies. This
ensures that weak cipher suites and cryptographic algorithms are disabled in the default configuration.
To adjust the policy, the administrator must either use the update-crypto-policies command to make
settings stricter or looser, or manually opt out of the system-wide crypto policies.


The OpenSSH suite uses two different sets of configuration files: those for client programs (that is, ssh,
scp, and sftp), and those for the server (the sshd daemon). System-wide SSH configuration
information is stored in the /etc/ssh/ directory. User-specific SSH configuration information is stored in
~/.ssh/ in the user’s home directory. For a detailed list of OpenSSH configuration files, see the FILES
section in the sshd(8) man page.

Additional resources

Man pages for the ssh topic listed by the man -k ssh command.

Using system-wide cryptographic policies.

7.2. CONFIGURING AND STARTING AN OPENSSH SERVER


Use the following procedure for a basic configuration that might be required for your environment and
for starting an OpenSSH server. Note that after the default RHEL installation, the sshd daemon is
already started and server host keys are automatically created.

Prerequisites

The openssh-server package is installed.

Procedure

1. Start the sshd daemon in the current session and set it to start automatically at boot time:

# systemctl start sshd


# systemctl enable sshd

2. To specify different addresses than the default 0.0.0.0 (IPv4) or :: (IPv6) for the
ListenAddress directive in the /etc/ssh/sshd_config configuration file and to use a slower
dynamic network configuration, add the dependency on the network-online.target target unit
to the sshd.service unit file. To achieve this, create the
/etc/systemd/system/sshd.service.d/local.conf file with the following content:

[Unit]
Wants=network-online.target
After=network-online.target

3. Review if OpenSSH server settings in the /etc/ssh/sshd_config configuration file meet the
requirements of your scenario.

4. Optionally, change the welcome message that your OpenSSH server displays before a client
authenticates by editing the /etc/issue file, for example:

Welcome to ssh-server.example.com
Warning: By accessing this server, you agree to the referenced terms and conditions.

Note that to change the message displayed after a successful login you have to edit the
/etc/motd file on the server. See the pam_motd man page for more information.

5. Reload the systemd configuration to apply the changes:

# systemctl daemon-reload


Verification steps

1. Check that the sshd daemon is running:

# systemctl status sshd


● sshd.service - OpenSSH server daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2019-11-18 14:59:58 CET; 6min ago
Docs: man:sshd(8)
man:sshd_config(5)
Main PID: 1149 (sshd)
Tasks: 1 (limit: 11491)
Memory: 1.9M
CGroup: /system.slice/sshd.service
└─1149 /usr/sbin/sshd -D -oCiphers=aes128-ctr,aes256-ctr,aes128-cbc,aes256-cbc -
oMACs=hmac-sha2-256,>

Nov 18 14:59:58 ssh-server-example.com systemd[1]: Starting OpenSSH server daemon...


Nov 18 14:59:58 ssh-server-example.com sshd[1149]: Server listening on 0.0.0.0 port 22.
Nov 18 14:59:58 ssh-server-example.com sshd[1149]: Server listening on :: port 22.
Nov 18 14:59:58 ssh-server-example.com systemd[1]: Started OpenSSH server daemon.

2. Connect to the SSH server with an SSH client.

# ssh user@ssh-server-example.com
ECDSA key fingerprint is SHA256:dXbaS0RG/UzlTTku8GtXSz0S1++lPegSy31v3L/FAEc.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'ssh-server-example.com' (ECDSA) to the list of known hosts.

user@ssh-server-example.com's password:

Additional resources

sshd(8) and sshd_config(5) man pages

7.3. USING KEY PAIRS INSTEAD OF PASSWORDS FOR SSH


AUTHENTICATION
To improve system security even further, generate SSH key pairs and then enforce key-based
authentication by disabling password authentication.

7.3.1. Setting an OpenSSH server for key-based authentication


Follow these steps to configure your OpenSSH server for enforcing key-based authentication.

Prerequisites

The openssh-server package is installed.

The sshd daemon is running on the server.

Procedure

1. Open the /etc/ssh/sshd_config configuration in a text editor, for example:


# vi /etc/ssh/sshd_config

2. Change the PasswordAuthentication option to no:

PasswordAuthentication no

On a system other than a new default installation, check that PubkeyAuthentication no has not
been set and the ChallengeResponseAuthentication directive is set to no. If you are
connected remotely, not using console or out-of-band access, test the key-based login process
before disabling password authentication.
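
The relevant lines in /etc/ssh/sshd_config would then read, for example:

PubkeyAuthentication yes
ChallengeResponseAuthentication no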

3. To use key-based authentication with NFS-mounted home directories, enable the
use_nfs_home_dirs SELinux boolean:

# setsebool -P use_nfs_home_dirs 1

4. Reload the sshd daemon to apply the changes:

# systemctl reload sshd

Additional resources

sshd(8), sshd_config(5), and setsebool(8) man pages

7.3.2. Generating SSH key pairs


Use this procedure to generate an SSH key pair on a local system and to copy the generated public key
to an OpenSSH server. If the server is configured accordingly, you can log in to the OpenSSH server
without providing any password.

IMPORTANT

If you complete the following steps as root, only root is able to use the keys.

Procedure

1. To generate an ECDSA key pair for version 2 of the SSH protocol:

$ ssh-keygen -t ecdsa
Generating public/private ecdsa key pair.
Enter file in which to save the key (/home/joesec/.ssh/id_ecdsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/joesec/.ssh/id_ecdsa.
Your public key has been saved in /home/joesec/.ssh/id_ecdsa.pub.
The key fingerprint is:
SHA256:Q/x+qms4j7PCQ0qFd09iZEFHA+SqwBKRNaU72oZfaCI
joesec@localhost.example.com
The key's randomart image is:
+---[ECDSA 256]---+
|.oo..o=++ |
|.. o .oo . |
|. .. o. o |
|....o.+... |
|o.oo.o +S . |
|.=.+. .o |
|E.*+. . . . |
|.=..+ +.. o |
| . oo*+o. |
+----[SHA256]-----+

You can also generate an RSA key pair by using the -t rsa option with the ssh-keygen
command or an Ed25519 key pair by entering the ssh-keygen -t ed25519 command.

2. To copy the public key to a remote machine:

$ ssh-copy-id joesec@ssh-server-example.com
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are
already installed
...
Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'joesec@ssh-server-example.com'" and check to
make sure that only the key(s) you wanted were added.

If you do not use the ssh-agent program in your session, the previous command copies the
most recently modified ~/.ssh/id*.pub public key if it is not yet installed. To specify another
public-key file or to prioritize keys in files over keys cached in memory by ssh-agent, use the
ssh-copy-id command with the -i option.

NOTE

If you reinstall your system and want to keep previously generated key pairs, back up the
~/.ssh/ directory. After reinstalling, copy it back to your home directory. You can do this
for all users on your system, including root.

Verification steps

1. Log in to the OpenSSH server without providing any password:

$ ssh joesec@ssh-server-example.com
Welcome message.
...
Last login: Mon Nov 18 18:28:42 2019 from ::1

Additional resources

ssh-keygen(1) and ssh-copy-id(1) man pages

7.4. USING SSH KEYS STORED ON A SMART CARD


Red Hat Enterprise Linux 8 enables you to use RSA and ECDSA keys stored on a smart card on
OpenSSH clients. Use this procedure to enable authentication using a smart card instead of using a
password.

Prerequisites


On the client side, the opensc package is installed and the pcscd service is running.

Procedure

1. List all keys provided by the OpenSC PKCS #11 module including their PKCS #11 URIs and save
the output to the keys.pub file:

$ ssh-keygen -D pkcs11: > keys.pub


$ ssh-keygen -D pkcs11:
ssh-rsa AAAAB3NzaC1yc2E...KKZMzcQZzx
pkcs11:id=%02;object=SIGN%20pubkey;token=SSH%20key;manufacturer=piv_II?module-
path=/usr/lib64/pkcs11/opensc-pkcs11.so
ecdsa-sha2-nistp256 AAA...J0hkYnnsM=
pkcs11:id=%01;object=PIV%20AUTH%20pubkey;token=SSH%20key;manufacturer=piv_II?
module-path=/usr/lib64/pkcs11/opensc-pkcs11.so

2. To enable authentication using a smart card on a remote server (example.com), transfer the
public key to the remote server. Use the ssh-copy-id command with keys.pub created in the
previous step:

$ ssh-copy-id -f -i keys.pub username@example.com

3. To connect to example.com using the ECDSA key from the output of the ssh-keygen -D
command in step 1, you can use just a subset of the URI, which uniquely references your key, for
example:

$ ssh -i "pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so" example.com


Enter PIN for 'SSH key':
[example.com] $

4. You can use the same URI string in the ~/.ssh/config file to make the configuration permanent:

$ cat ~/.ssh/config
IdentityFile "pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so"
$ ssh example.com
Enter PIN for 'SSH key':
[example.com] $

Because OpenSSH uses the p11-kit-proxy wrapper and the OpenSC PKCS #11 module is
registered to PKCS#11 Kit, you can simplify the previous commands:

$ ssh -i "pkcs11:id=%01" example.com


Enter PIN for 'SSH key':
[example.com] $

If you skip the id= part of a PKCS #11 URI, OpenSSH loads all keys that are available in the proxy module.
This can reduce the amount of typing required:

$ ssh -i pkcs11: example.com


Enter PIN for 'SSH key':
[example.com] $

Additional resources


Fedora 28: Better smart card support in OpenSSH

p11-kit(8) man page

ssh(1) man page

ssh-keygen(1) man page

opensc.conf(5) man page

pcscd(8) man page

7.5. MAKING OPENSSH MORE SECURE


The following tips help you to increase security when using OpenSSH. Note that changes in the
/etc/ssh/sshd_config OpenSSH configuration file require reloading the sshd daemon to take effect:

# systemctl reload sshd

IMPORTANT

The majority of security hardening configuration changes reduce compatibility with
clients that do not support up-to-date algorithms or cipher suites.

Disabling insecure connection protocols

To make SSH truly effective, prevent the use of insecure connection protocols that are replaced
by the OpenSSH suite. Otherwise, a user’s password might be protected using SSH for one
session only to be captured later when logging in using Telnet. For this reason, consider
disabling insecure protocols, such as telnet, rsh, rlogin, and ftp.
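
One possible way to do this, assuming the legacy services were installed from the usual packages (the package names are illustrative and may not all be present on a given system):

# yum remove telnet-server rsh-server vsftpd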

Enabling key-based authentication and disabling password-based authentication

Disabling passwords for authentication and allowing only key pairs reduces the attack surface
and it also might save users’ time. On clients, generate key pairs using the ssh-keygen tool and
use the ssh-copy-id utility to copy public keys from clients on the OpenSSH server. To disable
password-based authentication on your OpenSSH server, edit /etc/ssh/sshd_config and
change the PasswordAuthentication option to no:

PasswordAuthentication no

Key types

Although the ssh-keygen command generates a pair of RSA keys by default, you can instruct it
to generate ECDSA or Ed25519 keys by using the -t option. The ECDSA (Elliptic Curve Digital
Signature Algorithm) offers better performance than RSA at the equivalent symmetric key
strength. It also generates shorter keys. The Ed25519 public-key algorithm is an implementation
of twisted Edwards curves that is more secure and also faster than RSA, DSA, and ECDSA.
OpenSSH creates RSA, ECDSA, and Ed25519 server host keys automatically if they are missing.
To configure the host key creation in RHEL 8, use the sshd-keygen@.service instantiated
service. For example, to disable the automatic creation of the RSA key type:

# systemctl mask sshd-keygen@rsa.service


To exclude particular key types for SSH connections, comment out the relevant lines in
/etc/ssh/sshd_config, and reload the sshd service. For example, to allow only Ed25519 host
keys:

# HostKey /etc/ssh/ssh_host_rsa_key
# HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key

Non-default port

By default, the sshd daemon listens on TCP port 22. Changing the port reduces the exposure
of the system to attacks based on automated network scanning and thus increases security
through obscurity. You can specify the port using the Port directive in the
/etc/ssh/sshd_config configuration file.
You also have to update the default SELinux policy to allow the use of a non-default port. To do
so, use the semanage tool from the policycoreutils-python-utils package:

# semanage port -a -t ssh_port_t -p tcp port_number

Furthermore, update firewalld configuration:

# firewall-cmd --add-port port_number/tcp


# firewall-cmd --runtime-to-permanent

In the previous commands, replace port_number with the new port number specified using the
Port directive.

No root login

If your particular use case does not require the possibility of logging in as the root user, you
should consider setting the PermitRootLogin configuration directive to no in the
/etc/ssh/sshd_config file. By disabling the possibility of logging in as the root user, the
administrator can audit which users run what privileged commands after they log in as regular
users and then gain root rights.
Alternatively, set PermitRootLogin to prohibit-password:

PermitRootLogin prohibit-password

This enforces the use of key-based authentication instead of the use of passwords for logging
in as root and reduces risks by preventing brute-force attacks.

Using the X Security extension

The X server in Red Hat Enterprise Linux clients does not provide the X Security extension.
Therefore, clients cannot request another security layer when connecting to untrusted SSH
servers with X11 forwarding. Most applications are not able to run with this extension enabled
anyway.
By default, the ForwardX11Trusted option in the /etc/ssh/ssh_config.d/05-redhat.conf file is
set to yes, and there is no difference between the ssh -X remote_machine (untrusted host)
and ssh -Y remote_machine (trusted host) command.

If your scenario does not require the X11 forwarding feature at all, set the X11Forwarding
directive in the /etc/ssh/sshd_config configuration file to no.
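
For example, in /etc/ssh/sshd_config:

X11Forwarding no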


Restricting access to specific users, groups, or domains

The AllowUsers and AllowGroups directives in the /etc/ssh/sshd_config configuration file
enable you to permit only certain users, domains, or groups to connect to your OpenSSH
server. You can combine AllowUsers and AllowGroups to restrict access more precisely, for
example:

AllowUsers *@192.168.1.*,*@10.0.0.*,!*@192.168.1.2
AllowGroups example-group

The previous configuration lines accept connections from all users from systems in 192.168.1.*
and 10.0.0.* subnets except from the system with the 192.168.1.2 address. All users must be in
the example-group group. The OpenSSH server denies all other connections.

Note that using whitelists (directives starting with Allow) is more secure than using blacklists
(options starting with Deny) because whitelists also block new unauthorized users or groups.

Changing system-wide cryptographic policies

OpenSSH uses RHEL system-wide cryptographic policies, and the default system-wide
cryptographic policy level offers secure settings for current threat models. To make your
cryptographic settings more strict, change the current policy level:

# update-crypto-policies --set FUTURE


Setting system policy to FUTURE

To opt out of the system-wide crypto policies for your OpenSSH server, uncomment the line
with the CRYPTO_POLICY= variable in the /etc/sysconfig/sshd file. After this change, values
that you specify in the Ciphers, MACs, KexAlgorithms, and GSSAPIKexAlgorithms sections in
the /etc/ssh/sshd_config file are not overridden. Note that this task requires deep expertise in
configuring cryptographic options.
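
A minimal sketch of such an opt-out, with illustrative algorithm choices that you must adapt to your
own requirements: uncomment the following line in /etc/sysconfig/sshd:

CRYPTO_POLICY=

and then set explicit values in /etc/ssh/sshd_config, for example:

# illustrative values - adapt to your own security requirements
Ciphers aes256-gcm@openssh.com,chacha20-poly1305@openssh.com
MACs hmac-sha2-512-etm@openssh.com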

See Using system-wide cryptographic policies in the RHEL 8 Security hardening title for more
information.

Additional resources

sshd_config(5), ssh-keygen(1), crypto-policies(7), and update-crypto-policies(8) man pages

7.6. CONNECTING TO A REMOTE SERVER USING AN SSH JUMP HOST


Use this procedure for connecting to a remote server through an intermediary server, also called jump
host.

Prerequisites

A jump host accepts SSH connections from your system.

A remote server accepts SSH connections only from the jump host.

Procedure

1. Define the jump host by editing the ~/.ssh/config file, for example:


Host jump-server1
HostName jump1.example.com

2. Add the remote server jump configuration with the ProxyJump directive to ~/.ssh/config, for
example:

Host remote-server
HostName remote1.example.com
ProxyJump jump-server1

3. Connect to the remote server through the jump server:

$ ssh remote-server

The previous command is equivalent to the ssh -J jump-server1 remote-server command if
you omit the configuration steps 1 and 2.

NOTE

You can specify more jump servers and you can also skip adding host definitions to the
configuration file when you provide their complete host names, for example:

$ ssh -J jump1.example.com,jump2.example.com,jump3.example.com
remote1.example.com

Change the host name-only notation in the previous command if the user names or SSH
ports on the jump servers differ from the names and ports on the remote server, for
example:

$ ssh -J johndoe@jump1.example.com:75,johndoe@jump2.example.com:75,johndoe@jump3.example.com:75 joesec@remote1.example.com:220

Additional resources

ssh_config(5) and ssh(1) man pages

7.7. ADDITIONAL RESOURCES


For more information on configuring and connecting to OpenSSH servers and clients on Red Hat
Enterprise Linux, see the resources listed below.

Installed documentation

sshd(8) man page documents available command-line options and provides a complete list of
supported configuration files and directories.

ssh(1) man page provides a complete list of available command-line options and supported
configuration files and directories.

scp(1) man page provides a more detailed description of the scp utility and its usage.

sftp(1) man page provides a more detailed description of the sftp utility and its usage.


ssh-keygen(1) man page documents in detail the use of the ssh-keygen utility to generate,
manage, and convert authentication keys used by ssh.

ssh-copy-id(1) man page describes the use of the ssh-copy-id script.

ssh_config(5) man page documents available SSH client configuration options.

sshd_config(5) man page provides a full description of available SSH daemon configuration
options.

update-crypto-policies(8) man page provides guidance on managing system-wide
cryptographic policies.

crypto-policies(7) man page provides an overview of system-wide cryptographic policy levels

Online documentation

OpenSSH Home Page - contains further documentation, frequently asked questions, links to
the mailing lists, bug reports, and other useful resources.

Configuring SELinux for applications and services with non-standard configurations - you can
apply analogous procedures for OpenSSH in a non-standard configuration with SELinux in
enforcing mode.

Controlling network traffic using firewalld - provides guidance on updating firewalld settings
after changing an SSH port


CHAPTER 8. CONFIGURING A REMOTE LOGGING SOLUTION


To ensure that logs from various machines in your environment are recorded centrally on a logging
server, you can configure the Rsyslog application to record logs that fit specific criteria from the client
system to the server.

8.1. THE RSYSLOG LOGGING SERVICE


The Rsyslog application, in combination with the systemd-journald service, provides local and remote
logging support in Red Hat Enterprise Linux. The rsyslogd daemon continuously reads syslog
messages received by the systemd-journald service from the journal. rsyslogd then filters and
processes these syslog events and records them to rsyslog log files or forwards them to other services
according to its configuration.

The rsyslogd daemon also provides extended filtering, encryption protected relaying of messages,
input and output modules, and support for transportation using the TCP and UDP protocols.

In /etc/rsyslog.conf, which is the main configuration file for rsyslog, you can specify the rules according
to which rsyslogd handles the messages. Generally, you can classify messages by their source and topic
(facility) and urgency (priority), and then assign an action that should be performed when a message fits
these criteria.
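
For example, a rule in the traditional selector syntax that writes all messages from the cron facility, at
any priority, to a dedicated file (the destination path is illustrative) looks like this:

cron.*    /var/log/cron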

In /etc/rsyslog.conf, you can also see a list of log files maintained by rsyslogd. Most log files are
located in the /var/log/ directory. Some applications, such as httpd and samba, store their log files in a
subdirectory within /var/log/.

Additional resources

The rsyslogd(8) and rsyslog.conf(5) man pages

Documentation installed with the rsyslog-doc package at


file:///usr/share/doc/rsyslog/html/index.html

8.2. INSTALLING RSYSLOG DOCUMENTATION


The Rsyslog application has extensive documentation that is available at https://www.rsyslog.com/doc/,
but you can also install the rsyslog-doc documentation package locally by following this procedure.

Prerequisites

You have activated the AppStream repository on your system

You are authorized to install new packages using sudo

Procedure

Install the rsyslog-doc package:

$ sudo yum install rsyslog-doc

Verification

Open the file:///usr/share/doc/rsyslog/html/index.html file in a browser of your choice, for


example:


$ firefox file:///usr/share/doc/rsyslog/html/index.html

8.3. CONFIGURING REMOTE LOGGING OVER TCP


The Rsyslog application enables you to both run a logging server and configure individual systems to
send their log files to the logging server. To use remote logging through TCP, configure both the server
and the client. The server collects and analyzes the logs sent by one or more client systems.

With the Rsyslog application, you can maintain a centralized logging system where log messages are
forwarded to a server over the network. To avoid message loss when the server is not available, you can
configure an action queue for the forwarding action. This way, messages that failed to be sent are stored
locally until the server is reachable again. Note that such queues cannot be configured for connections
using the UDP protocol.

The omfwd plug-in provides forwarding over UDP or TCP. The default protocol is UDP. Because the
plug-in is built in, it does not have to be loaded.

8.3.1. Configuring a server for remote logging over TCP


Follow this procedure to configure a server for collecting and analyzing logs sent by one or more client
systems.

By default, rsyslog uses TCP on port 514.

Prerequisites

rsyslog is installed on the server system

You are logged in as root on the server

Procedure

1. Optional: To use a different port for rsyslog traffic, add the port to the syslogd_port_t SELinux
type. For example, enable port 30514:

# semanage port -a -t syslogd_port_t -p tcp 30514

2. Optional: To use a different port for rsyslog traffic, configure firewalld to allow incoming
rsyslog traffic on that port. For example, allow TCP traffic on port 30514 in zone zone:

# firewall-cmd --zone=zone --permanent --add-port=30514/tcp


success

3. Create a new file in the /etc/rsyslog.d/ directory named, for example, remotelog.conf, and
insert the following content:

# Define templates before the rules that use them


### Per-Host Templates for Remote Systems ###
template(name="TmplAuthpriv" type="list") {
constant(value="/var/log/remote/auth/")
property(name="hostname")
constant(value="/")
property(name="programname" SecurePath="replace")
constant(value=".log")
}

template(name="TmplMsg" type="list") {
constant(value="/var/log/remote/msg/")
property(name="hostname")
constant(value="/")
property(name="programname" SecurePath="replace")
constant(value=".log")
}

# Provides TCP syslog reception


module(load="imtcp")
# Adding this ruleset to process remote messages
ruleset(name="remote1"){
authpriv.* action(type="omfile" DynaFile="TmplAuthpriv")
*.info;mail.none;authpriv.none;cron.none action(type="omfile" DynaFile="TmplMsg")
}

input(type="imtcp" port="30514" ruleset="remote1")

4. Save the changes to the /etc/rsyslog.d/remotelog.conf file.

5. Make sure the rsyslog service is running and enabled on the logging server:

# systemctl status rsyslog

6. Restart the rsyslog service.

# systemctl restart rsyslog

7. Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot:

# systemctl enable rsyslog

Your log server is now configured to receive and store log files from the other systems in your
environment.

Verification

Test the syntax of the /etc/rsyslog.conf file:

# rsyslogd -N 1
rsyslogd: version 8.1911.0-2.el8, config validation run (level 1), master config
/etc/rsyslog.conf
rsyslogd: End of config validation run. Bye.
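
Optionally, confirm that rsyslog is listening on the configured port, for example:

# ss -tnlp | grep 30514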

Additional resources

The rsyslogd(8), rsyslog.conf(5), semanage(8), and firewall-cmd(1) man pages

Documentation installed with the rsyslog-doc package at


file:///usr/share/doc/rsyslog/html/index.html


8.3.2. Configuring remote logging to a server over TCP


Follow this procedure to configure a system for forwarding log messages to a server over the TCP
protocol. The omfwd plug-in provides forwarding over UDP or TCP. The default protocol is UDP.
Because the plug-in is built in, you do not have to load it.

Prerequisites

The rsyslog package is installed on the client systems that should report to the server.

You have configured the server for remote logging.

The specified port is permitted in SELinux and open in firewall.

Procedure

1. Create a new file in the /etc/rsyslog.d/ directory named, for example, remotelog.conf, and
insert the following content:

*.* action(type="omfwd"
queue.type="linkedlist"
queue.filename="example_fwd"
action.resumeRetryCount="-1"
queue.saveOnShutdown="on"
target="example.com" port="30514" protocol="tcp"
)

Where:

queue.type="linkedlist" enables a LinkedList in-memory queue,

queue.filename defines a disk storage. The backup files are created with the example_fwd
prefix in the working directory specified by the preceding global workDirectory directive,

the action.resumeRetryCount -1 setting prevents rsyslog from dropping messages when
retrying to connect if the server is not responding,

enabled queue.saveOnShutdown="on" saves in-memory data if rsyslog shuts down,

the last line forwards all received messages to the logging server; the port specification is
optional.

With this configuration, rsyslog sends messages to the server but keeps messages in memory if
the remote server is not reachable. A file on disk is created only if rsyslog runs out of the
configured memory queue space or needs to shut down, which benefits the system
performance.

2. Restart the rsyslog service.

# systemctl restart rsyslog

Verification
To verify that the client system sends messages to the server, follow these steps:

1. On the client system, send a test message:


# logger test

2. On the server system, view the /var/log/messages log, for example:

# cat /var/log/remote/msg/hostname/root.log
Feb 25 03:53:17 hostname root[6064]: test

Where hostname is the host name of the client system. Note that the log contains the user
name of the user that entered the logger command, in this case root.

Additional resources

The rsyslogd(8) and rsyslog.conf(5) man pages

Documentation installed with the rsyslog-doc package at


file:///usr/share/doc/rsyslog/html/index.html

8.4. CONFIGURING REMOTE LOGGING OVER UDP


The Rsyslog application enables you to configure a system to receive logging information from remote
systems. To use remote logging through UDP, configure both the server and the client. The receiving
server collects and analyzes the logs sent by one or more client systems. By default, rsyslog uses UDP
on port 514 to receive log information from remote systems.

8.4.1. Configuring a server for receiving remote logging information over UDP

Follow this procedure to configure a server for collecting and analyzing logs sent by one or more client
systems over the UDP protocol.

Prerequisites

The rsyslog utility is installed.

Procedure

1. Optional: To use a different port for rsyslog traffic than the default port 514:

a. Add the syslogd_port_t SELinux type to the SELinux policy configuration, replacing
portno with the port number you want rsyslog to use:

# semanage port -a -t syslogd_port_t -p udp portno

b. Configure firewalld to allow incoming rsyslog traffic, replacing portno with the port
number and zone with the zone you want rsyslog to use:

# firewall-cmd --zone=zone --permanent --add-port=portno/udp


success

c. Reload the firewall rules:

# firewall-cmd --reload

2. Create a new .conf file in the /etc/rsyslog.d/ directory, for example, remotelogserv.conf, and
insert the following content:

# Define templates before the rules that use them


### Per-Host Templates for Remote Systems ###
template(name="TmplAuthpriv" type="list") {
constant(value="/var/log/remote/auth/")
property(name="hostname")
constant(value="/")
property(name="programname" SecurePath="replace")
constant(value=".log")
}

template(name="TmplMsg" type="list") {
constant(value="/var/log/remote/msg/")
property(name="hostname")
constant(value="/")
property(name="programname" SecurePath="replace")
constant(value=".log")
}

# Provides UDP syslog reception


module(load="imudp")

# This ruleset processes remote messages


ruleset(name="remote1"){
authpriv.* action(type="omfile" DynaFile="TmplAuthpriv")
*.info;mail.none;authpriv.none;cron.none action(type="omfile" DynaFile="TmplMsg")
}

input(type="imudp" port="514" ruleset="remote1")

Where 514 is the port number rsyslog uses by default. You can specify a different port instead.

3. Restart the rsyslog service.

# systemctl restart rsyslog

4. Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot:

# systemctl enable rsyslog

Verification

1. Verify the syntax of the /etc/rsyslog.conf file and all .conf files in the /etc/rsyslog.d/ directory:

# rsyslogd -N 1
rsyslogd: version 8.1911.0-2.el8, config validation run (level 1), master config
/etc/rsyslog.conf
rsyslogd: End of config validation run. Bye.

Additional resources


The rsyslogd(8), rsyslog.conf(5), semanage(8), and firewall-cmd(1) man pages

Browser-based documentation, which you can install from the rsyslog-doc package, at
file:///usr/share/doc/rsyslog/html/index.html

8.4.2. Configuring remote logging to a server over UDP


Follow this procedure to configure a system for forwarding log messages to a server over the UDP
protocol. The omfwd plug-in provides forwarding over UDP or TCP. The default protocol is UDP.
Because the plug-in is built in, you do not have to load it.

Prerequisites

The rsyslog package is installed on the client systems that should report to the server.

You have configured the server for remote logging as described in Configuring a server for
receiving remote logging information over UDP.

Procedure

1. Create a new .conf file in the /etc/rsyslog.d/ directory, for example, remotelogcli.conf, and
insert the following content:

*.* action(type="omfwd"
queue.type="linkedlist"
queue.filename="example_fwd"
action.resumeRetryCount="-1"
queue.saveOnShutdown="on"
target="example.com" port="portno" protocol="udp"
)

Where:

queue.type="linkedlist" enables a LinkedList in-memory queue.

queue.filename defines a disk storage. The backup files are created with the example_fwd
prefix in the working directory specified by the preceding global workDirectory directive.

The action.resumeRetryCount -1 setting prevents rsyslog from dropping messages when
retrying to connect if the server is not responding.

enabled queue.saveOnShutdown="on" saves in-memory data if rsyslog shuts down.

portno is the port number you want rsyslog to use. The default value is 514.

The last line forwards all received messages to the logging server; the port specification is
optional.
With this configuration, rsyslog sends messages to the server but keeps messages in
memory if the remote server is not reachable. A file on disk is created only if rsyslog runs
out of the configured memory queue space or needs to shut down, which benefits the
system performance.

2. Restart the rsyslog service.

# systemctl restart rsyslog


3. Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot:

# systemctl enable rsyslog

Verification
To verify that the client system sends messages to the server, follow these steps:

1. On the client system, send a test message:

# logger test

2. On the server system, view the /var/log/remote/msg/hostname/root.log log, for example:

# cat /var/log/remote/msg/hostname/root.log
Feb 25 03:53:17 hostname root[6064]: test

Where hostname is the host name of the client system. Note that the log contains the user
name of the user that entered the logger command, in this case root.

Additional resources

The rsyslogd(8) and rsyslog.conf(5) man pages

Browser-based documentation, which you can install from the rsyslog-doc package, at
file:///usr/share/doc/rsyslog/html/index.html

8.5. CONFIGURING RELIABLE REMOTE LOGGING


With the Reliable Event Logging Protocol (RELP), you can send and receive syslog messages over TCP
with a much reduced risk of message loss. RELP provides reliable delivery of event messages, which
makes it useful in environments where message loss is not acceptable. To use RELP, configure the
imrelp input module, which runs on the server and receives the logs, and the omrelp output module,
which runs on the client and sends logs to the logging server.

Prerequisites

You have installed the rsyslog, librelp, and rsyslog-relp packages on the server and the client
systems.

The specified port is permitted in SELinux and open in the firewall.

Procedure

1. Configure the client system for reliable remote logging:

a. On the client system, create a new .conf file in the /etc/rsyslog.d/ directory named, for
example, relpcli.conf, and insert the following content:

module(load="omrelp")
*.* action(type="omrelp" target="target_IP" port="target_port")

Where:


target_IP is the IP address of the logging server.

target_port is the port of the logging server.

b. Save the changes to the /etc/rsyslog.d/relpcli.conf file.

c. Restart the rsyslog service.

# systemctl restart rsyslog

d. Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after
reboot:

# systemctl enable rsyslog

2. Configure the server system for reliable remote logging:

a. On the server system, create a new .conf file in the /etc/rsyslog.d/ directory named, for
example, relpserv.conf, and insert the following content:

ruleset(name="relp"){
*.* action(type="omfile" file="log_path")
}

module(load="imrelp")
input(type="imrelp" port="target_port" ruleset="relp")

Where:

log_path specifies the path for storing messages.

target_port is the port of the logging server. Use the same value as in the client
configuration file.

b. Save the changes to the /etc/rsyslog.d/relpserv.conf file.

c. Restart the rsyslog service.

# systemctl restart rsyslog

d. Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after
reboot:

# systemctl enable rsyslog

Verification
To verify that the client system sends messages to the server, follow these steps:

1. On the client system, send a test message:

# logger test

2. On the server system, view the log at the specified log_path, for example:


# cat /var/log/remote/msg/hostname/root.log
Feb 25 03:53:17 hostname root[6064]: test

Where hostname is the host name of the client system. Note that the log contains the user
name of the user that entered the logger command, in this case root.

Additional resources

The rsyslogd(8) and rsyslog.conf(5) man pages

Browser-based documentation, which you can install from the rsyslog-doc package, at
file:///usr/share/doc/rsyslog/html/index.html

8.6. SUPPORTED RSYSLOG MODULES


To expand the functionality of the Rsyslog utility, you can use specific additional modules. Modules
provide additional inputs (Input Modules), outputs (Output Modules), and other specific functionalities.
A module may also provide additional configuration directives that become available after you load that
module.

List the input and output modules installed on your system with the following command:

# ls /usr/lib64/rsyslog/{i,o}m*

To view the list of all available rsyslog modules, open the following page from documentation installed
from the rsyslog-doc package.

$ firefox file:///usr/share/doc/rsyslog/html/configuration/modules/idx_output.html

8.7. ADDITIONAL RESOURCES


Documentation installed with the rsyslog-doc package at
file:///usr/share/doc/rsyslog/html/index.html

The rsyslog.conf(5) and rsyslogd(8) man pages

The Configuring system logging without journald or with minimized journald usage
Knowledgebase article

The Negative effects of the RHEL default logging setup on performance and their mitigations
article


CHAPTER 9. USING PYTHON

9.1. INTRODUCTION TO PYTHON


Python is a high-level programming language that supports multiple programming paradigms, such as
object-oriented, imperative, functional, and procedural. Python has dynamic semantics and can be used
for general-purpose programming.

With Red Hat Enterprise Linux, many packages that are installed on the system, such as packages
providing system tools, tools for data analysis, or web applications, are written in Python. To be able to
use these packages, you need to have the python packages installed.

9.1.1. Python versions


Two incompatible versions of Python are widely used, Python 2.x and Python 3.x.

RHEL 8 provides the following versions of Python.

Version      Package to install   Command examples     Available since   Life cycle

Python 3.6   python3              python3, pip3        RHEL 8.0          full RHEL 8

Python 2.7   python2              python2, pip2        RHEL 8.0          shorter

Python 3.8   python38             python3.8, pip3.8    RHEL 8.2          shorter

See Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux 8 Application Streams Life Cycle
for details about the length of support.

Each of the Python versions is distributed in a separate module, and by design, you can install multiple
modules in parallel on the same system.

The python38 module does not include the same bindings to system tools (RPM, DNF, SELinux, and
others) that are provided for the python36 module.

IMPORTANT

Always specify the version of Python when installing it, invoking it, or otherwise
interacting with it. For example, use python3 instead of python in package and
command names. All Python-related commands should also include the version, for
example, pip3, pip2, or pip3.8.

The unversioned python command (/usr/bin/python) is not available by default in RHEL 8.
You can configure it using the alternatives command; for instructions, see Configuring
the unversioned Python. Any manual changes to /usr/bin/python, except changes made
using the alternatives command, may be overwritten upon an update.

As a system administrator, it is recommended that you use Python 3 for the following reasons:

Python 3 represents the main development direction of the Python project.


Support for Python 2 in the upstream community ends in 2020.

Popular Python libraries are dropping Python 2 support in upstream.

Python 2 in Red Hat Enterprise Linux 8 will have a shorter life cycle and its aim is to facilitate a
smoother transition to Python 3 for customers.

For developers, Python 3 has the following advantages over Python 2:

Python 3 allows writing expressive, maintainable, and correct code more easily.

Code written in Python 3 will have greater longevity.

Python 3 has new features, including asyncio, f-strings, advanced unpacking, keyword only
arguments, chained exceptions and more.

However, existing software tends to require /usr/bin/python to be Python 2. For this reason, no default
python package is distributed with Red Hat Enterprise Linux 8, and you can choose between using
Python 2 and 3 as /usr/bin/python, as described in Section 9.2.5, “Configuring the unversioned Python” .

9.1.2. The internal platform-python package


System tools in Red Hat Enterprise Linux 8 use Python version 3.6, provided by the internal platform-
python package. Red Hat advises customers to use the python36 package instead.

9.2. INSTALLING AND USING PYTHON


WARNING

Using the unversioned python command to install or run Python does not work by
default due to ambiguity. Always specify the version of Python, or configure the
system default version by using the alternatives command.

9.2.1. Installing Python 3


In Red Hat Enterprise Linux 8, Python 3 is distributed in versions 3.6 and 3.8, provided by the python36
and python38 modules in the AppStream repository.

Procedure

To install Python 3.6 from the python36 module, execute the following command:

# yum install python3

The python36:3.6 module stream is enabled automatically.

To install Python 3.8 from the python38 module, use:

# yum install python38


The python38:3.8 module stream is enabled automatically.

For details regarding modules in RHEL 8, see Installing, managing, and removing user-space
components.

NOTE

By design, RHEL 8 modules can be installed in parallel, including the python27, python36,
and python38 modules. Note that parallel installation is not supported for multiple
streams within a single module.

Python 3.8 and packages built for it can be installed in parallel with Python 3.6 on the
same system, with the exception of the mod_wsgi module. Due to a limitation of the
Apache HTTP Server, only one of the python3-mod_wsgi and python38-mod_wsgi
packages can be installed on a system.

Packages with add-on modules for Python 3.6 generally use the python3- prefix; packages for Python
3.8 include the python38- prefix. Always include the prefix when installing additional Python packages,
as shown in the examples below.

Procedure

To install the Requests module for Python 3.6, execute this command:

# yum install python3-requests

To install the Cython extension to Python 3.8, use:

# yum install python38-Cython

9.2.1.1. Installing additional Python 3 packages for developers

Additional Python 3.8 packages for developers are distributed through the CodeReady Linux Builder
repository in the python38-devel module. This module contains the python38-pytest package and its
dependencies: the pyparsing, atomicwrites, attrs, packaging, py, more-itertools, pluggy, and
wcwidth packages.

IMPORTANT

The CodeReady Linux Builder repository and its content is unsupported by Red Hat.

To install packages from the python38-devel module, follow the procedure below.

Procedure

Enable the unsupported CodeReady Linux Builder repository:

# subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms

Enable the python38-devel module:

# yum module enable python38-devel


Install the python38-pytest package:

# yum install python38-pytest

For more information about the CodeReady Linux Builder repository, see How to enable and make use
of content within CodeReady Linux Builder.

9.2.2. Installing Python 2


Some software has not yet been fully ported to Python 3, and needs Python 2 to operate. Red Hat
Enterprise Linux 8 allows parallel installation of Python 3 and Python 2. If you need the Python 2
functionality, install the python27 module, which is available in the AppStream repository.


WARNING

Note that Python 3 is the main development direction of the Python project. The
support for Python 2 is being phased out. The python27 module has a shorter
support period than Red Hat Enterprise Linux 8.

Procedure

To install Python 2.7 from the python27 module, execute this command:

# yum install python2

The python27:2.7 module stream is enabled automatically.

NOTE

By design, RHEL 8 modules can be installed in parallel, including the python27, python36,
and python38 modules.

For details regarding modules, see Installing, managing, and removing user-space components .

Packages with add-on modules for Python 2 generally use the python2- prefix. Always include the prefix
when installing additional Python packages, as shown in the examples below.

Procedure

To install the Requests module for Python 2, execute this command:

# yum install python2-requests

To install the Cython extension to Python 2, use:

# yum install python2-Cython

9.2.3. Using Python 3


When running the Python interpreter or Python-related commands, always specify the version.

Procedure

To run the Python 3.6 interpreter or related commands, use, for example:

$ python3
$ python3 -m cython --help
$ pip3 install <package>

To run the Python 3.8 interpreter or related commands, use, for example:

$ python3.8
$ python3.8 -m cython --help
$ pip3.8 install <package>

9.2.4. Using Python 2


When running the Python 2 interpreter or Python2-related commands, always specify the version.

Procedure

To run the Python 2 interpreter or related commands, use, for example:

$ python2
$ python2 -m cython --help
$ pip2 install <package>

9.2.5. Configuring the unversioned Python


System administrators can configure the unversioned python command, located at /usr/bin/python,
using the alternatives command. Note that the required package, python3, python38, or python2,
needs to be installed before configuring the unversioned command to the respective version.

IMPORTANT

The /usr/bin/python executable is controlled by the alternatives system. Any manual
changes may be overwritten upon an update.

Additional Python-related commands, such as pip3, do not have configurable
unversioned variants.

9.2.5.1. Configuring the unversioned python command directly

To configure the unversioned python command directly to a selected version of Python, use this
procedure.

Procedure

To configure the unversioned python command to Python 3.6, execute this command:

# alternatives --set python /usr/bin/python3


To configure the unversioned python command to Python 3.8, use the following command:

# alternatives --set python /usr/bin/python3.8

To configure the unversioned python command to Python 2, use:

# alternatives --set python /usr/bin/python2
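
After setting the alternative, you can verify which interpreter the unversioned command now invokes,
for example:

$ python --version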

9.2.5.2. Configuring the unversioned python command to the required Python version
interactively

You can also configure the unversioned python command to the required Python version interactively.

To configure the unversioned python command interactively, use this procedure.

Procedure

1. Execute the following command:

# alternatives --config python

2. Select the required version from the provided list.

3. To reset this configuration and remove the unversioned python command, run:

# alternatives --auto python

9.3. MIGRATION FROM PYTHON 2 TO PYTHON 3


As a developer, you may want to migrate your existing code written in Python 2 to Python 3. For
more information on how to migrate large code bases to Python 3, see The Conservative Python 3
Porting Guide.

Note that after this migration, the original Python 2 code becomes interpretable by the Python 3
interpreter and remains interpretable by the Python 2 interpreter as well.

9.4. PACKAGING OF PYTHON 3 RPMS


Most Python projects use Setuptools for packaging, and define package information in the setup.py
file. For more information on Setuptools packaging, see Setuptools documentation.

You can also package your Python project into an RPM package, which provides the following
advantages compared to Setuptools packaging:

Specification of dependencies of a package on other RPMs (even non-Python)

Cryptographic signing
With cryptographic signing, the content of RPM packages can be verified, integrated, and tested
with the rest of the operating system.

9.4.1. SPEC file description for a Python package

A SPEC file contains instructions that the rpmbuild utility uses to build an RPM. The instructions are
included in a series of sections. A SPEC file has two main parts in which the sections are defined:

Preamble (contains a series of metadata items that are used in the Body)

Body (contains the main part of the instructions)

For further information about SPEC files, see Packaging and distributing software .

An RPM SPEC file for Python projects has some specifics compared to non-Python RPM SPEC files.
Most notably, the name of any RPM package of a Python library must always include the prefix
determining the version, for example, python3 for Python 3.6 or python38 for Python 3.8.

Other specifics are shown in the following SPEC file example for the python3-detox package. For
description of such specifics, see the notes below the example.

%global modname detox 1

Name: python3-detox 2
Version: 0.12
Release: 4%{?dist}
Summary: Distributing activities of the tox tool
License: MIT
URL: https://pypi.io/project/detox
Source0: https://pypi.io/packages/source/d/%{modname}/%{modname}-%{version}.tar.gz

BuildArch: noarch

BuildRequires: python36-devel 3
BuildRequires: python3-setuptools
BuildRequires: python36-rpm-macros
BuildRequires: python3-six
BuildRequires: python3-tox
BuildRequires: python3-py
BuildRequires: python3-eventlet

%?python_enable_dependency_generator 4

%description

Detox is the distributed version of the tox python testing tool. It makes efficient use of multiple CPUs
by running all possible activities in parallel.
Detox has the same options and configuration that tox has, so after installation you can run it in the
same way and with the same options that you use for tox.

$ detox

%prep
%autosetup -n %{modname}-%{version}

%build
%py3_build 5

%install
%py3_install

%check
%{__python3} setup.py test 6

%files -n python3-%{modname}
%doc CHANGELOG
%license LICENSE
%{_bindir}/detox
%{python3_sitelib}/%{modname}/
%{python3_sitelib}/%{modname}-%{version}*

%changelog
...

1 The modname macro contains the name of the Python project. In this example it is detox.

2 When packaging a Python project into RPM, the python3 prefix always needs to be added to the
original name of the project. The original name here is detox and the name of the RPM is
python3-detox.

3 BuildRequires specifies what packages are required to build and test this package. In
BuildRequires, always include items providing tools necessary for building Python packages:
python36-devel and python3-setuptools. The python36-rpm-macros package is required so
that files with /usr/bin/python3 shebangs are automatically changed to /usr/bin/python3.6. For
more information, see Section 9.4.4, “Handling hashbangs in Python scripts” .

4 Every Python package requires some other packages to work correctly. Such packages need to be
specified in the SPEC file as well. To specify the dependencies, you can use the
%python_enable_dependency_generator macro to automatically use dependencies defined in
the setup.py file. If a package has dependencies that are not specified using Setuptools, specify
them within additional Requires directives.

5 The %py3_build and %py3_install macros run the setup.py build and setup.py install commands,
respectively, with additional arguments to specify installation locations, the interpreter to use, and
other details.

6 The check section provides a macro that runs the correct version of Python. The %{__python3}
macro contains a path for the Python 3 interpreter, for example /usr/bin/python3. We recommend
always using the macro rather than a literal path.

9.4.2. Common macros for Python 3 RPMs


In a SPEC file, always use the macros below rather than hardcoding their values.

In macro names, always use python3 or python2 instead of unversioned python. Configure the
particular Python 3 version in the BuildRequires of the SPEC file to either python36-rpm-macros or
python38-rpm-macros.

Macro                  Normal definition                     Description

%{__python3}           /usr/bin/python3                      Python 3 interpreter

%{python3_version}     3.6                                   The full version of the Python 3 interpreter.

%{python3_sitelib}     /usr/lib/python3.6/site-packages      Where pure-Python modules are installed.

%{python3_sitearch}    /usr/lib64/python3.6/site-packages    Where modules containing architecture-specific extensions are installed.

%py3_build             -                                     Runs the setup.py build command with arguments suitable for a system package.

%py3_install           -                                     Runs the setup.py install command with arguments suitable for a system package.

9.4.3. Automatic provides for Python RPMs


When packaging a Python project, make sure that, if present, the following directories are included in
the resulting RPM:

.dist-info

.egg-info

.egg-link

From these directories, the RPM build process automatically generates virtual pythonX.Ydist provides,
for example, python3.6dist(detox). These virtual provides are used by packages that are specified by
the %python_enable_dependency_generator macro.
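
As an illustration, assuming the python3-detox package built from the earlier SPEC file example is
installed, you can list the generated virtual provides with rpm:

# rpm -q --provides python3-detox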

9.4.4. Handling hashbangs in Python scripts


In Red Hat Enterprise Linux 8, executable Python scripts are expected to use hashbangs (shebangs)
specifying explicitly at least the major Python version.

The /usr/lib/rpm/redhat/brp-mangle-shebangs buildroot policy (BRP) script is run automatically when
building any RPM package, and attempts to correct hashbangs in all executable files.

NOTE

The BRP script generates errors when encountering a Python script with an ambiguous
hashbang, such as:

#! /usr/bin/python

or

#! /usr/bin/env python


9.4.4.1. Modifying hashbangs in Python scripts

To modify hashbangs in the Python scripts that cause the build errors at RPM build time, use this
procedure.

Procedure

Apply the pathfix.py script from the platform-python-devel package:

# pathfix.py -pn -i %{__python3} PATH ...

Note that multiple PATHs can be specified. If a PATH is a directory, pathfix.py recursively
scans for any Python scripts matching the pattern ^[a-zA-Z0-9_]+\.py$, not only those with an
ambiguous hashbang. Add this command to the %prep section or at the end of the %install
section.

Alternatively, modify the packaged Python scripts so that they conform to the expected format. For this
purpose, pathfix.py can be used outside the RPM build process, too. When running pathfix.py outside an
RPM build, replace __python3 from the example above with a path for the hashbang, such as
/usr/bin/python3.

If the packaged Python scripts require a version other than Python 3.6, adjust the commands above to
include the respective version.
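
For example, a sketch of such an invocation outside an RPM build, where the script path is hypothetical:

# pathfix.py -pn -i /usr/bin/python3 /usr/local/bin/myscript.py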

9.4.4.2. Changing /usr/bin/python3 hashbangs in custom packages

Additionally, hashbangs in the form /usr/bin/python3 are by default replaced with hashbangs pointing to
Python from the platform-python package used for system tools with Red Hat Enterprise Linux.

To change the /usr/bin/python3 hashbangs in their custom packages to point to a version of Python
installed from Application Stream, in the form /usr/bin/python3.6, use the following procedure.

Procedure

Add the python36-rpm-macros package into the BuildRequires section of the SPEC file by
including the following line:

BuildRequires: python36-rpm-macros

NOTE

To prevent hashbang check and modification by the BRP script, use the following RPM
directive:

%undefine __brp_mangle_shebangs

If you are using a version other than Python 3.6, adjust the commands above to include the respective
version.

9.4.5. Additional resources


For more information on RPM packaging, see Packaging and distributing software .


CHAPTER 10. USING THE PHP SCRIPTING LANGUAGE


Hypertext Preprocessor (PHP) is a general-purpose scripting language mainly used for server-side
scripting, which enables you to run the PHP code using a web server.

In RHEL 8, the PHP scripting language is provided by the php module, which is available in multiple
streams (versions).

Depending on your use case, you can install a specific profile of the selected module stream:

common - The default profile for server-side scripting using a web server. It includes several
widely used extensions.

minimal - This profile installs only the command-line interface for scripting with PHP without
using a web server.

devel - This profile includes packages from the common profile and additional packages for
development purposes.

10.1. INSTALLING THE PHP SCRIPTING LANGUAGE


This section describes how to install a selected version of the php module.

Procedure

To install a php module stream with the default profile, use:

# yum module install php:stream

Replace stream with the version of PHP you wish to install.

For example, to install PHP 7.3:

# yum module install php:7.3

The default common profile also installs the php-fpm package, and preconfigures PHP for use
with the Apache HTTP Server or nginx.

To install a specific profile of a php module stream, use:

# yum module install php:stream/profile

Replace stream with the desired version and profile with the name of the profile you wish to
install.

For example, to install PHP 7.3 for use without a web server:

# yum module install php:7.3/minimal

Additional resources

If you want to upgrade from an earlier version of PHP available in RHEL 8, see Switching to a
later stream.

For more information on managing RHEL 8 modules and streams, see Installing, managing, and
removing user-space components.

10.2. USING THE PHP SCRIPTING LANGUAGE WITH A WEB SERVER

10.2.1. Using PHP with the Apache HTTP Server


In RHEL 8, the Apache HTTP Server enables you to run PHP as a FastCGI process server. FastCGI
Process Manager (FPM) is an alternative PHP FastCGI daemon that allows a website to manage high
loads. PHP uses FastCGI Process Manager by default in RHEL 8.

This section describes how to run the PHP code using the FastCGI process server.

Prerequisites

The PHP scripting language is installed on your system.


See Section 10.1, “Installing the PHP scripting language” .

Procedure

1. Install the httpd module:

# yum module install httpd:2.4

2. Start the Apache HTTP Server:

# systemctl start httpd

Or, if the Apache HTTP Server is already running on your system, restart the httpd service
after installing PHP:

# systemctl restart httpd

3. Start the php-fpm service:

# systemctl start php-fpm

4. Optional: Enable both services to start at boot time:

# systemctl enable php-fpm httpd

5. To obtain information about your PHP settings, create the index.php file with the following
content in the /var/www/html/ directory:

echo '<?php phpinfo(); ?>' > /var/www/html/index.php

6. To run the index.php file, point the browser to:

http://<hostname>/

7. Optional: Adjust configuration if you have specific requirements:


/etc/httpd/conf/httpd.conf - generic httpd configuration

/etc/httpd/conf.d/php.conf - PHP-specific configuration for httpd

/usr/lib/systemd/system/httpd.service.d/php-fpm.conf - by default, the php-fpm service


is started with httpd

/etc/php-fpm.conf - FPM main configuration

/etc/php-fpm.d/www.conf - default www pool configuration

Example 10.1. Running a "Hello, World!" PHP script using the Apache HTTP Server

1. Create a hello directory for your project in the /var/www/html/ directory:

# mkdir hello

2. Create a hello.php file in the /var/www/html/hello/ directory with the following content:

<!DOCTYPE html>
<html>
<head>
<title>Hello, World! Page</title>
</head>
<body>
<?php
echo 'Hello, World!';
?>
</body>
</html>

3. Start the Apache HTTP Server:

# systemctl start httpd

4. To run the hello.php file, point the browser to:

http://<hostname>/hello/hello.php

As a result, a web page with the “Hello, World!” text is displayed.

Additional resources

Setting up the Apache HTTP web server

10.2.2. Using PHP with the nginx web server


This section describes how to run PHP code through the nginx web server.

Prerequisites

The PHP scripting language is installed on your system.


See Section 10.1, “Installing the PHP scripting language” .


Procedure

1. Install an nginx module stream:

# yum module install nginx:stream

Replace stream with the version of nginx you wish to install.

For example, to install nginx version 1.16:

# yum module install nginx:1.16

2. Start the nginx server:

# systemctl start nginx

Or, if the nginx server is already running on your system, restart the nginx service after
installing PHP:

# systemctl restart nginx

3. Start the php-fpm service:

# systemctl start php-fpm

4. Optional: Enable both services to start at boot time:

# systemctl enable php-fpm nginx

5. To obtain information about your PHP settings, create the index.php file with the following
content in the /usr/share/nginx/html/ directory:

echo '<?php phpinfo(); ?>' > /usr/share/nginx/html/index.php

6. To run the index.php file, point the browser to:

http://<hostname>/

7. Optional: Adjust configuration if you have specific requirements:

/etc/nginx/nginx.conf - nginx main configuration

/etc/nginx/conf.d/php-fpm.conf - FPM configuration for nginx

/etc/php-fpm.conf - FPM main configuration

/etc/php-fpm.d/www.conf - default www pool configuration

Example 10.2. Running a "Hello, World!" PHP script using the nginx server

1. Create a hello directory for your project in the /usr/share/nginx/html/ directory:

# mkdir hello


2. Create a hello.php file in the /usr/share/nginx/html/hello/ directory with the following
content:

<!DOCTYPE html>
<html>
<head>
<title>Hello, World! Page</title>
</head>
<body>
<?php
echo 'Hello, World!';
?>
</body>
</html>

3. Start the nginx server:

# systemctl start nginx

4. To run the hello.php file, point the browser to:

http://<hostname>/hello/hello.php

As a result, a web page with the “Hello, World!” text is displayed.

10.3. RUNNING A PHP SCRIPT USING THE COMMAND-LINE INTERFACE

A PHP script is usually run using a web server, but can also be run using the command-line interface.

If you want to run PHP scripts using only the command-line interface, install the minimal profile of a php module
stream.

See Section 10.1, “Installing the PHP scripting language” for details.

Prerequisites

The PHP scripting language is installed on your system.


See Section 10.1, “Installing the PHP scripting language” .

Procedure

1. In a text editor, create a filename.php file.

Replace filename with the name of your file.

2. Execute the created filename.php file from the command line:

# php filename.php

Example 10.3. Running a "Hello, World!" PHP script using the command-line interface

1. Create a hello.php file with the following content using a text editor:


<?php
echo 'Hello, World!';
?>

2. Execute the hello.php file from the command line:

# php hello.php

As a result, “Hello, World!” is printed.

10.4. ADDITIONAL RESOURCES


httpd(8) — The manual page for the httpd service containing the complete list of its command-
line options.

httpd.conf(5) — The manual page for httpd configuration, describing the structure and location
of the httpd configuration files.

nginx(8) — The manual page for the nginx web server containing the complete list of its
command-line options and list of signals.

php-fpm(8) — The manual page for PHP FPM describing the complete list of its command-line
options and configuration files.


CHAPTER 11. USING LANGPACKS


Langpacks are meta-packages which install extra add-on packages containing translations, dictionaries
and locales for every package installed on the system.

On a Red Hat Enterprise Linux 8 system, langpacks installation is based on the langpacks-<langcode>
language meta-packages and RPM weak dependencies (Supplements tag).

There are two prerequisites to be able to use langpacks for a selected language. If these prerequisites
are fulfilled, the language meta-packages pull their langpack for the selected language automatically in
the transaction set.

Prerequisites

The langpacks-<langcode> language meta-package for the selected language has been
installed on the system.
On Red Hat Enterprise Linux 8, the langpacks meta-packages are installed automatically with
the initial installation of the operating system using the Anaconda installer, because these
packages are available in the Application Stream repository.

For more information, see Section 11.1, “Checking languages that provide langpacks”

The base package for which you want to install the langpack has already been installed on
the system.

11.1. CHECKING LANGUAGES THAT PROVIDE LANGPACKS


Follow this procedure to check which languages provide langpacks.

Procedure

Execute the following command:

# yum list langpacks-*

11.2. WORKING WITH RPM WEAK DEPENDENCY-BASED LANGPACKS


This section describes multiple actions that you may want to perform when querying RPM weak
dependency-based langpacks, installing or removing language support.

11.2.1. Listing already installed language support


To list the already installed language support, use this procedure.

Procedure

Execute the following command:

# yum list installed langpacks*

11.2.2. Checking the availability of language support


To check if language support is available for any language, use the following procedure.

Procedure

Execute the following command:

# yum list available langpacks*

11.2.3. Listing packages installed for a language


To list what packages get installed for any language, use the following procedure:

Procedure

Execute the following command:

# yum repoquery --whatsupplements langpacks-<locale_code>

11.2.4. Installing language support


To add new language support, use the following procedure.

Procedure

Execute the following command:

# yum install langpacks-<locale_code>
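
For example, to add French language support (fr is used here as an illustrative locale code):

# yum install langpacks-fr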

11.2.5. Removing language support


To remove any installed language support, use the following procedure.

Procedure

Execute the following command:

# yum remove langpacks-<locale_code>

11.3. SAVING DISK SPACE BY USING GLIBC-LANGPACK-<LOCALE_CODE>

Currently, all locales are stored in the /usr/lib/locale/locale-archive file, which requires a lot of disk
space.

On systems where disk space is a critical issue, such as containers and cloud images, or where only a few
locales are needed, you can use the glibc locale langpack packages (glibc-langpack-<locale_code>).

To install locales individually, and thus gain a smaller package installation footprint, use the following
procedure.

Procedure


Execute the following command:

# yum install glibc-langpack-<locale_code>

When installing the operating system with Anaconda, glibc-langpack-<locale_code> is installed for the
language you used during the installation and also for the languages you selected as additional
languages. Note that glibc-all-langpacks, which contains all locales, is installed by default, so some
locales are duplicated. If you installed glibc-langpack-<locale_code> for one or more selected
languages, you can delete glibc-all-langpacks after the installation to save the disk space.

Note that installing only selected glibc-langpack-<locale_code> packages instead of glibc-all-langpacks
has an impact on runtime performance.

NOTE

If disk space is not an issue, keep all locales installed by using the glibc-all-langpacks
package.


CHAPTER 12. GETTING STARTED WITH TCL/TK

12.1. INTRODUCTION TO TCL/TK


Tool command language (Tcl) is a dynamic programming language. The interpreter for this language,
together with the C library, is provided by the tcl package.

Using Tcl paired with Tk (Tcl/Tk) enables creating cross-platform GUI applications. Tk is provided by
the tk package.

Note that Tk can refer to any of the following:

A programming toolkit for multiple languages

Tk C library bindings available for multiple languages, such as C, Ruby, Perl, and Python

A wish interpreter that instantiates a Tk console

A Tk extension that adds a number of new commands to a particular Tcl interpreter

For more information about Tcl/Tk, see the Tcl/Tk manual or Tcl/Tk documentation web page .

12.2. NOTABLE CHANGES IN TCL/TK 8.6


Red Hat Enterprise Linux 7 used Tcl/Tk 8.5. With Red Hat Enterprise Linux 8, Tcl/Tk version 8.6 is
provided in the Base OS repository.

Major changes in Tcl/Tk 8.6 compared to Tcl/Tk 8.5 are:

Object-oriented programming support

Stackless evaluation implementation

Enhanced exceptions handling

Collection of third-party packages built and installed with Tcl

Multi-thread operations enabled

SQL database-powered scripts support

IPv6 networking support

Built-in Zlib compression

List processing
Two new commands, lmap and dict map, are available, which allow the expression of
transformations over Tcl containers (see the short example after this list).

Stacked channels by script
Two new commands, chan push and chan pop, are available, which allow you to add or remove
transformations to or from I/O channels.
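
As a brief illustration of the new lmap command mentioned above, the following interactive tclsh
session (the input values are arbitrary) doubles each element of a list:

$ tclsh
% lmap x {1 2 3} {expr {$x * 2}}
2 4 6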

Major changes in Tk include:

Built-in PNG image support


Busy windows
A new command, tk busy is available, which disables user interaction for a window or a widget
and shows the busy cursor.

New font selection dialog interface

Angled text support

Moving things on a canvas support

For the detailed list of changes between Tcl 8.5 and Tcl 8.6, see Changes in Tcl/Tk 8.6 .

12.3. MIGRATING TO TCL/TK 8.6


Red Hat Enterprise Linux 7 used Tcl/Tk 8.5. With Red Hat Enterprise Linux 8, Tcl/Tk version 8.6 is
provided in the Base OS repository.

This section describes migration path to Tcl/Tk 8.6 for:

Developers writing Tcl extensions or embedding Tcl interpreter into their applications

Users scripting tasks with Tcl/Tk

12.3.1. Migration path for developers of Tcl extensions


To make your code compatible with Tcl 8.6, use the following procedure.

Procedure

1. Rewrite the code to use the interp structure. For example, if your code reads
interp→errorLine, rewrite it to use the following function:

Tcl_GetErrorLine(interp)

This is necessary because Tcl 8.6 limits direct access to members of the interp structure.

2. To make your code compatible with both Tcl 8.5 and Tcl 8.6, use the following code snippet in
a header file of your C or C++ application or extension that includes the Tcl library:

# include <tcl.h>
# if !defined(Tcl_GetErrorLine)
# define Tcl_GetErrorLine(interp) (interp→errorLine)
# endif

12.3.2. Migration path for users scripting their tasks with Tcl/Tk

In Tcl 8.6, most scripts work the same way as with the previous version of Tcl.

To migrate your code to Tcl 8.6, use this procedure.

Procedure

When writing portable code, make sure not to use the commands that are no longer supported
in Tk 8.6:

tkIconList_Arrange
tkIconList_AutoScan
tkIconList_Btn1
tkIconList_Config
tkIconList_Create
tkIconList_CtrlBtn1
tkIconList_Curselection
tkIconList_DeleteAll
tkIconList_Double1
tkIconList_DrawSelection
tkIconList_FocusIn
tkIconList_FocusOut
tkIconList_Get
tkIconList_Goto
tkIconList_Index
tkIconList_Invoke
tkIconList_KeyPress
tkIconList_Leave1
tkIconList_LeftRight
tkIconList_Motion1
tkIconList_Reset
tkIconList_ReturnKey
tkIconList_See
tkIconList_Select
tkIconList_Selection
tkIconList_ShiftBtn1
tkIconList_UpDown

Note that you can also check the list of unsupported commands in the
/usr/share/tk8.6/unsupported.tcl file.
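
To find out whether your existing scripts still call any of the removed commands, a simple search is
usually enough (a sketch; replace ~/tcl-scripts with the directory that holds your code):

# report every use of the removed tkIconList_* commands in your scripts
grep -rnE 'tkIconList_[A-Za-z0-9]+' ~/tcl-scripts

# the removed commands are also listed in this file shipped with Tk 8.6
less /usr/share/tk8.6/unsupported.tcl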


CHAPTER 13. USING PREFIXDEVNAME FOR NAMING OF ETHERNET NETWORK INTERFACES

This documentation describes how to set the prefixes for consistent naming of Ethernet network
interfaces if you do not want to use the default naming scheme for such interfaces.

However, Red Hat recommends using the default naming scheme, which is the same as in Red Hat
Enterprise Linux 7.

For more details about this scheme, see Consistent network interface device naming.

13.1. INTRODUCTION TO PREFIXDEVNAME


The prefixdevname tool is a udev helper utility that enables you to define your own prefix for
naming Ethernet network interfaces.

13.2. SETTING PREFIXDEVNAME


You set the prefix with prefixdevname during system installation.

To set and activate the required prefix for your Ethernet network interfaces, use the following
procedure.

Procedure

Add the following string to the kernel command line:

net.ifnames.prefix=<required prefix>


WARNING

Red Hat does not support the use of prefixdevname on already deployed systems.

Once the prefix is set and the operating system is rebooted, the prefix takes effect every time a new
network interface appears. The new device is assigned a name in the form <PREFIX><INDEX>. For
example, if your selected prefix is net, and interfaces named net0 and net1 already exist on the
system, the new interface is named net2. The prefixdevname utility then generates a new .link file in
the /etc/systemd/network directory that applies the name to the interface with the MAC address that
just appeared. The configuration is persistent across reboots.
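
After the reboot you can verify the result; the prefix net and the interface names below are only
examples:

# with net.ifnames.prefix=net you should see interfaces such as net0, net1, ...
ip link show

# .link files generated by prefixdevname, one per interface MAC address
ls /etc/systemd/network/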

13.3. LIMITATIONS OF PREFIXDEVNAME


There are certain limitations for prefixes of Ethernet network interfaces.

The prefix that you choose must meet the following requirements:

Be an ASCII string


Be an alphanumeric string

Be shorter than 16 characters


WARNING

The prefix cannot conflict with any other well-known prefix used for network
interface naming on Linux. Specifically, you cannot use these prefixes: eth, eno,
ens, em.

Linux Boot Sequence

1. BIOS
 BIOS stands for Basic Input/Output System
 Performs some system integrity checks
 Searches, loads, and executes the boot loader program.
 It looks for the boot loader on floppy, CD-ROM, or hard drive. You can press a key (typically F12
or F2, but it depends on your system) during the BIOS startup to change the boot sequence.
 Once the boot loader program is detected and loaded into memory, BIOS gives control to it.
 So, in simple terms BIOS loads and executes the MBR boot loader.

2. MBR
 MBR stands for Master Boot Record.
 It is located in the 1st sector of the bootable disk, typically /dev/hda or /dev/sda.
 The MBR is 512 bytes in size (one disk sector). It has three components: 1) primary boot loader
code in the first 446 bytes, 2) the partition table in the next 64 bytes, and 3) the MBR validation
check (boot signature) in the last 2 bytes.
 It contains information about GRUB (or LILO in old systems).
 So, in simple terms MBR loads and executes the GRUB boot loader.
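
You can inspect the MBR yourself. The sketch below copies the first 512-byte sector of /dev/sda
(adjust the device name for your system) into a file, identifies its contents, and shows the 2-byte
boot signature (0x55 0xAA):

# copy the first sector (the MBR) of the disk into a temporary file
sudo dd if=/dev/sda of=/tmp/mbr.bin bs=512 count=1

# 'file' recognizes the boot loader code and the partition table inside it
file /tmp/mbr.bin

# print the last 2 bytes of the sector: the boot signature 55 aa
od -An -tx1 -j 510 -N 2 /tmp/mbr.bin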

3. GRUB
 GRUB stands for Grand Unified Bootloader.
 If you have multiple kernel images installed on your system, you can choose which one is
executed.
 GRUB displays a splash screen and waits for a few seconds; if you do not enter anything, it loads
the default kernel image as specified in the grub configuration file.
 GRUB has knowledge of the filesystem (the older Linux loader LILO did not understand
filesystems).
 The GRUB configuration file is /boot/grub/grub.conf (/etc/grub.conf is a link to it). The following
is a sample grub.conf from CentOS.
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-194.el5PAE)
root (hd0,0)
kernel /boot/vmlinuz-2.6.18-194.el5PAE ro root=LABEL=/
initrd /boot/initrd-2.6.18-194.el5PAE.img
 As you can see from the above, it specifies the kernel and initrd images.
 So, in simple terms, GRUB just loads and executes the kernel and initrd images.

4. Kernel
 Mounts the root file system as specified by the “root=” parameter in grub.conf
 The kernel executes the /sbin/init program
 Since init is the first program executed by the Linux kernel, it has the process ID (PID) of 1. Run
‘ps -ef | grep init’ and check the PID.
 initrd stands for Initial RAM Disk.
 initrd is used by the kernel as a temporary root file system until the real root file system is
mounted. It also contains the necessary drivers built in, which let the kernel access the hard drive
partitions and other hardware.
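
If you are curious about what the initrd contains, you can list it. On older systems the initrd image
is usually a gzip-compressed cpio archive; newer dracut-based systems ship lsinitrd for the same
purpose (the file names depend on your kernel version):

# older style: list the contents of a gzipped cpio initrd
zcat /boot/initrd-2.6.18-194.el5PAE.img | cpio -t | head

# newer dracut-based systems: list the initramfs of the running kernel
lsinitrd /boot/initramfs-$(uname -r).img | head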

5. Init
 Looks at the /etc/inittab file to decide the Linux run level.
 Following are the available run levels
o 0 – halt
o 1 – Single user mode
o 2 – Multiuser, without NFS
o 3 – Full multiuser mode
o 4 – unused
o 5 – X11
o 6 – reboot
 Init identifies the default run level from /etc/inittab and uses it to load the appropriate programs.
 Execute ‘grep initdefault /etc/inittab’ on your system to identify the default run level
 If you want to get into trouble, you can set the default run level to 0 or 6. Since you know what 0
and 6 mean, you probably will not do that.
 Typically you would set the default run level to either 3 or 5.
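
To check which run level the system is currently in, either of these commands works:

# previous and current run level, for example "N 3"
runlevel

# the same information with a timestamp
who -r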

6. Runlevel programs
 When the Linux system is booting up, you might see various services getting started. For
example, it might say “starting sendmail …. OK”. Those are the runlevel programs, executed
from the run level directory as defined by your run level.
 Depending on your default init level setting, the system will execute the programs from one of
the following directories.
o Run level 0 – /etc/rc.d/rc0.d/
o Run level 1 – /etc/rc.d/rc1.d/
o Run level 2 – /etc/rc.d/rc2.d/
o Run level 3 – /etc/rc.d/rc3.d/
o Run level 4 – /etc/rc.d/rc4.d/
o Run level 5 – /etc/rc.d/rc5.d/
o Run level 6 – /etc/rc.d/rc6.d/
 Please note that there are also symbolic links for these directories directly under /etc. For example,
/etc/rc0.d is linked to /etc/rc.d/rc0.d.
 Under the /etc/rc.d/rc*.d/ directories, you would see programs that start with S and K.
 Programs starting with S are used during startup (S for startup).
 Programs starting with K are used during shutdown (K for kill).
 The numbers right next to S and K in the program names are the sequence numbers in which the
programs are started or killed.
 For example, S12syslog starts the syslog daemon, which has the sequence number 12.
S80sendmail starts the sendmail daemon, which has the sequence number 80. So, the syslog
daemon will be started before sendmail.
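
For example, listing the run level 3 directory shows the S and K links with their sequence numbers
(the exact names vary from system to system; these are illustrative):

ls /etc/rc.d/rc3.d/
# K15httpd  K74ntpd  S12syslog  S55sshd  S80sendmail  ...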
