Cheat Sheet
4. Launch instance
6. Add volume
7. Attach volume
8. Volume snapshot
Answer
=====================
CASE 1 Using Guestfish
=====================
guestfish -i --network -a production-rhel-web.qcow2
><fs> command "yum -y install httpd"
><fs> touch /var/www/html/index.html
><fs> edit /var/www/html/index.html
"Tulis Nama"
><fs> command "systemctl enable httpd"
><fs> command "adduser user1"
><fs> command "passwd -d user1"
><fs> selinux-relabel /etc/selinux/targeted/contexts/files/file_contexts /
><fs> exit
source operator1-production-rc
openstack image create \
--disk-format qcow2 \
--min-ram 1024 \
--file production-rhel-web.qcow2 \
production-rhel-web
.vimrc
autocmd FileType yaml setlocal ai ts=2 sw=2 et
[student@workstation ~]$ mkdir ~/heat-templates
web_server:
  type: OS::Nova::Server
  properties:
    name: { get_param: instance_name }
    image: { get_param: image_name }
    flavor: { get_param: instance_flavor }
    key_name: { get_param: key_name }
    networks:
      - port: { get_resource: web_net_port }
    user_data_format: RAW
    user_data:
      str_replace:
        template: |
          #!/bin/bash
          yum -y install httpd
          systemctl restart httpd.service
          systemctl enable httpd.service
          sudo touch /var/www/html/index.html
          sudo cat << EOF > /var/www/html/index.html
          <h1>You are connected to $public_ip</h1>
          <h2>The private IP address is:$private_ip</h2>
          Red Hat Training
          EOF
        params:
          $private_ip: { get_attr: [web_net_port, fixed_ips, 0, ip_address] }
          $public_ip: { get_attr: [web_floating_ip, floating_ip_address] }
parameters:
  image_name: finance-rhel7
  instance_name: finance-web1
  instance_flavor: m1.small
  key_name: developer1-keypair1
  public_net: provider-172.25.250
  private_net: finance-network1
  private_subnet: finance-subnet1
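These notes do not show the command that launches the stack. A hedged example, assuming the resource section above is saved as ~/heat-templates/finance-web1.yaml and the parameters block as ~/heat-templates/environment.yaml (both file names and the stack name are assumptions), run as the finance project user:
openstack stack create --template ~/heat-templates/finance-web1.yaml --environment ~/heat-templates/environment.yaml --wait finance-web1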
15. ??
In this lab, you will validate that the overcloud is functional by deploying a server instance using a new
user and project, creating the resources required. The lab is designed to be accomplished using the
OpenStack CLI, but you can also perform tasks using the dashboard
(http://dashboard.overcloud.example.com). You can find the admin password in the
/home/stack/overcloudrc file on director.
Outcomes
You should be able to:
• Create the resources required to deploy a server instance.
• Deploy and verify an external instance.
Steps
1. On workstation, load the admin user environment file. To prepare for deploying a server instance,
create the production project in which to work, and an operator1 user with the password redhat. Create
an authentication environment file for this new user.
1.1. On workstation, source the admin-rc authentication environment file in the student home directory.
View the admin password in the OS_PASSWORD variable.
[student@workstation ~]$ source admin-rc
[student@workstation ~(admin-admin)]$ env | grep "^OS_"
OS_REGION_NAME=regionOne
OS_PASSWORD=mbhZABea3qjUTZGNqVMWerqz8
OS_AUTH_URL=http://172.25.250.50:5000/v2.0
OS_USERNAME=admin
OS_TENANT_NAME=admin
1.2. As admin, create the production project and the operator1 user.
[student@workstation ~(admin-admin)]$ openstack project create --description Production production
[student@workstation ~(admin-admin)]$ openstack user create --project production --password redhat
--email student@example.com operator1
1.3. Create a new authentication environment file by copying the existing admin-rc file.
[student@workstation ~(admin-admin)]$ cp admin-rc operator1-production-rc
1.4. Edit the file with the new user's settings. Match the settings shown here.
unset OS_SERVICE_TOKEN
export OS_AUTH_URL=http://172.25.250.50:5000/v2.0
export OS_PASSWORD=redhat
export OS_REGION_NAME=regionOne
export OS_TENANT_NAME=production
export OS_USERNAME=operator1
export PS1='[\u@\h \W(operator1-production)]\$ '
2. The lab setup script preconfigured an external provider network and subnet, an image, and multiple
flavors. Working as the operator1 user, create the security resources required to deploy this server
instance, including a key pair named operator1-keypair1 (saved as operator1-keypair1.pem in student's home directory), and a
production-ssh security group with rules for SSH and ICMP.
2.1. Source the new environment file. Remaining lab tasks must be performed as this production project
member.
[student@workstation ~(admin-admin)]$ source operator1-production-rc
2.2. Create a keypair. Redirect the command output into the operator1-keypair1.pem file. Set the
required permissions on the key pair file.
[student@workstation ~(operator1-production)]$ openstack keypair create operator1-keypair1 >
/home/student/operator1-keypair1.pem
[student@workstation ~(operator1-production)]$ chmod 600 operator1-keypair1.pem
2.3. Create a security group with rules for SSH and ICMP access.
[student@workstation ~(operator1-production)]$ openstack security group create production-ssh
[student@workstation ~(operator1-production)]$ openstack security group rule create --protocol tcp --dst-port 22 production-ssh
[student@workstation ~(operator1-production)]$ openstack security group rule create --protocol icmp
production-ssh
3. Create the network resources required to deploy an external instance, including a production-
network1 network, a production-subnet1 subnet using the range 192.168.0.0/24, a DNS server at
172.25.250.254, and a production-router1 router. Use the external provider-172.25.250 network to
provide a floating IP address.
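Step 3.1 (creating the network and subnet) is not reproduced in these notes; a hedged sketch following the names given above:
[student@workstation ~(operator1-production)]$ openstack network create production-network1
[student@workstation ~(operator1-production)]$ openstack subnet create \
--network production-network1 \
--subnet-range 192.168.0.0/24 \
--dns-nameserver 172.25.250.254 \
production-subnet1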
3.2. Create a router. Set the gateway address. Add the internal network interface.
[student@workstation ~(operator1-production)]$ openstack router create production-router1
[student@workstation ~(operator1-production)]$ neutron router-gateway-set production-router1
provider-172.25.250
[student@workstation ~(operator1-production)]$ openstack router add subnet production-router1
production-subnet1
3.3. Create a floating IP, taken from the external network. You will use this address to deploy the server
instance.
[student@workstation ~(operator1-production)]$ openstack floating ip create provider-172.25.250
4. Deploy the production-web1 server instance using the rhel7 image and the m1.web flavor.
4.1. Deploy the server instance, and verify the instance has an ACTIVE status.
[student@workstation ~(operator1-production)]$ openstack server create --nic net-id=production-
network1 --security-group production-ssh --image rhel7 --flavor m1.web --key-name operator1-keypair1
--wait production-web1
[student@workstation ~(operator1-production)]$ openstack server show production-web1 -c status -f
value
5. When deployed, use ssh to log in to the instance console. From the instance, verify network
connectivity by using ping to reach the external gateway at 172.25.250.254. Exit the production-web1
instance when finished.
5.1. Use the ssh command with the key pair to log in to the instance as the cloud-user user at the
floating IP address.
[student@workstation ~(operator1-production)]$ ssh -i operator1-keypair1.pem cloud-
user@172.25.250.N
5.2. Test for external network access. Ping the network gateway from production-web1.
[cloud-user@production-web1 ~]$ ping -c3 172.25.250.254
Evaluation
On workstation, run the lab deployment-review grade command to confirm the success of this exercise.
[student@workstation ~(operator1-production)]$ lab deployment-review grade
Cleanup
On workstation, run the lab deployment-review cleanup script to clean up this exercise.
[student@workstation ~(operator1-production)]$ lab deployment-review cleanup
In this exercise, you will view the Keystone endpoints and catalog, issue a token, and manage token
expiration.
Outcomes
You should be able to:
• View the Keystone service catalog.
• View the Keystone service endpoints.
• Issue a Keystone token.
• Clear expired tokens from the database.
Steps
1. On workstation, source the Keystone admin-rc file and list the Keystone endpoints registry. Take note
of the available service names and types.
[student@workstation ~]$ source admin-rc
[student@workstation ~(admin-admin)]$ openstack endpoint list
2. View the Keystone service catalog and notice the endpoint URLs (especially the IP addresses), the
version number, and the port number.
[student@workstation ~(admin-admin)]$ openstack catalog list -f value
3. Issue an admin token to manually (using curl) find information about OpenStack.
[student@workstation ~(admin-admin)]$ openstack token issue
4. Verify the token retrieved in the previous command. Use the curl command with the token ID to
retrieve the projects (tenants) for the admin user.
[student@workstation ~(admin-admin)]$ curl -H "X-Auth-Token:1cdacca5070b44ada325f861007461c1"
http://172.25.250.50:5000/v2.0/tenants
{"tenants_links": [], "tenants": [{"description": "admin tenant", "enabled": true,"id":
"0b73c3d8b10e430faeb972fec5afa5e6", "name": "admin"}]}
5. Use SSH to connect to director as the user root. The database, MariaDB, resides on director and
provides storage for expired tokens. Accessing MariaDB enables you to determine the amount of space
used for expired tokens.
[student@workstation ~(admin-admin)]$ ssh root@director
6. Log in to MariaDB.
[root@director ~]# mysql -u root
7. Use an SQL statement to list the tables and pay special attention to the size of the token table.
MariaDB [(none)]> use keystone
MariaDB [keystone]> SELECT table_name, (data_length+index_length) tablesize FROM
information_schema.tables;
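The query above scans every schema; a narrower variant (an assumption, not shown in the lab output) restricts it to the keystone database and sorts by size:
MariaDB [keystone]> SELECT table_name, (data_length+index_length) AS tablesize FROM information_schema.tables WHERE table_schema = 'keystone' ORDER BY tablesize DESC;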
8. Use an SQL statement to view the amount of space used for expired Keystone tokens.
MariaDB [keystone]> SELECT COUNT(*) FROM token WHERE token.expires < CONVERT_TZ(NOW(),
@@session.time_zone, '+00:00');
9. Truncate the token table then ensure the amount of space used for expired tokens is zero.
MariaDB [keystone]> TRUNCATE TABLE token;
Query OK, 0 rows affected (0.04 sec)
MariaDB [keystone]> SELECT COUNT(*) FROM token WHERE token.expires < CONVERT_TZ(NOW(),
@@session.time_zone, '+00:00');
11. Ensure that the Keystone user has a cron job to flush tokens from the database.
[root@director ~]# crontab -u keystone -l
...output omitted...
PATH=/bin:/usr/bin:/usr/sbin SHELL=/bin/sh
1 0 * * * keystone-manage token_flush >>/dev/null 2>&1
Cleanup
From workstation, run the lab communication-svc-catalog cleanup script to clean up
the resources created in this exercise.
[student@workstation ~(admin-admin)]$ lab communication-svc-catalog cleanup
In this exercise, you will enable the RabbitMQ Management Plugin to create an exchange and queue,
publish a message, and retrieve it.
Resources
Files: http://material.example.com/cl210_producer, http://material.example.com/cl210_consumer
Outcomes
You should be able to:
• Authorize a RabbitMQ user.
• Enable the RabbitMQ Management Plugin.
• Create a message exchange.
• Create a message queue.
• Publish a message to a queue.
• Retrieve a published message.
1. From workstation, use SSH to connect to director as the stack user. Use sudo to become the root
user.
[student@workstation ~]$ ssh stack@director
[stack@director ~]$ sudo -i
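Steps 2 through 4 (creating and authorizing the rabbitmqauth user and enabling the management plugin) are not reproduced here; a hedged sketch of the usual RabbitMQ commands, using the credentials shown in the configuration file in step 5:
[root@director ~]# rabbitmqctl add_user rabbitmqauth redhat
[root@director ~]# rabbitmqctl set_user_tags rabbitmqauth administrator
[root@director ~]# rabbitmqctl set_permissions rabbitmqauth ".*" ".*" ".*"
[root@director ~]# rabbitmq-plugins enable rabbitmq_management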
5. Verify that a RabbitMQ Management configuration file exists in root's home directory. The contents
should match as shown here.
[root@director ~]# cat ~/.rabbitmqadmin.conf
[default]
hostname = 172.25.249.200
port = 15672
username = rabbitmqauth
password = redhat
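The steps that declare the example exchange used by the producer and consumer scripts are also omitted; a hedged sketch with a hypothetical exchange name:
[root@director ~]# rabbitmqadmin -c ~/.rabbitmqadmin.conf declare exchange name=cl210.exchange type=topic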
10. On workstation, open a second terminal. Using SSH, log in as the stack user to director. Switch to the
root user. Launch the cl210_consumer script using anonymous.info as the routing key.
[student@workstation ~]$ ssh stack@director
[stack@director ~]$ sudo -i
[root@director ~]# python /root/cl210_consumer anonymous.info
11. In the first terminal, launch the cl210_producer script to send messages using the routing key
anonymous.info.
[root@director ~]# python /root/cl210_producer
[x] Sent 'anonymous.info':'Hello World!'
12. In the second terminal, the sent message(s) are received and displayed. Running the cl210_producer
script multiple times sends multiple messages.
[x] 'anonymous.info':'Hello World!'
Exit this cl210_consumer terminal after observing the message(s) being received. You are finished with
the example publisher-consumer exchange scripts.
13. The next practice is to observe a message queue. Create a queue named redhat.queue.
[root@director ~]# rabbitmqadmin -c ~/.rabbitmqadmin.conf declare queue name=redhat.queue
queue declared
14. Verify that the queue is created. The message count is zero.
[root@director ~]# rabbitmqctl list_queues | grep redhat
redhat.queue 0
15. Publish messages to the redhat.queue queue. These first two examples include the message payload
on the command line.
[root@director ~]# rabbitmqadmin -c ~/.rabbitmqadmin.conf publish routing_key=redhat.queue
payload="a message"
Message published
[root@director ~]# rabbitmqadmin -c ~/.rabbitmqadmin.conf publish routing_key=redhat.queue
payload="another message"
Message published
16. Publish a third message to the redhat.queue queue, but without using the payload parameter. When
executing the command without specifying a payload, rabbitmqadmin
waits for multi-line input. Press Ctrl+D at the start of an empty new line to end
message entry and publish the message.
[root@director ~]# rabbitmqadmin -c ~/.rabbitmqadmin.conf publish routing_key=redhat.queue
message line 1
message line 2
message line 3
Ctrl+D
Message published
17. Verify that the redhat.queue queue has an increased message count.
[root@director ~]# rabbitmqctl list_queues | grep redhat
redhat.queue 3
18. Display the first message in the queue. The message_count field indicates how many more messages
exist after this one.
[root@director ~]# rabbitmqadmin -c ~/.rabbitmqadmin.conf get queue=redhat.queue -f pretty_json
[
{
"exchange": "",
"message_count": 2,
"payload": "a message",
"payload_bytes": 9,
"payload_encoding": "string",
"properties": [],
"redelivered": false,
"routing_key": "redhat.queue"
}
]
19. Display multiple messages using the count option. Each displayed message indicates how
many more messages follow. The redelivered field indicates whether you have previously
viewed this specific message.
[root@director ~]# rabbitmqadmin -c ~/.rabbitmqadmin.conf get queue=redhat.queue count=2 -f
pretty_json
[
{
"exchange": "",
"message_count": 2,
"payload": "a message",
"payload_bytes": 9,
"payload_encoding": "string",
"properties": [],
"redelivered": true,
"routing_key": "redhat.queue"
},
{
"exchange": "",
"message_count": 1,
"payload": "another message",
"payload_bytes": 15,
"payload_encoding": "string",
"properties": [],
"redelivered": false,
"routing_key": "redhat.queue"
}
]
20. When finished, delete the queue named redhat.queue. Return to workstation.
[root@director ~]# rabbitmqadmin -c ~/.rabbitmqadmin.conf delete queue name=redhat.queue
queue deleted
[root@director ~]# exit
[stack@director ~]$ exit
[student@workstation ~]$
Cleanup
From workstation, run lab communication-msg-brokering cleanup to clean up resources created for this
exercise.
[student@workstation ~]$ lab communication-msg-brokering cleanup
In this lab, you will troubleshoot and fix issues with the Keystone identity service and the RabbitMQ
message broker.
Outcomes
You should be able to:
• Troubleshoot the Keystone identity service.
• Troubleshoot the RabbitMQ message broker.
Scenario
During a recent deployment of the overcloud, cloud administrators are reporting issues with the
Compute and Image services. Cloud administrators are not able to access the Image service nor the
Compute service APIs. You have been tasked with troubleshooting and fixing these issues.
Steps
1. From workstation, verify the issue by attempting to list instances as the OpenStack admin user. The
command is expected to hang.
1.1. From workstation, source the admin-rc credential file. Attempt to list any running instances. The
command is expected to hang, and does not return to the command prompt. Use Ctrl+C to escape the
command.
[student@workstation ~]$ source admin-rc
[student@workstation ~(admin-admin)]$ openstack server list
2.1. From workstation, use SSH to connect to controller0 as the heat-admin user.
[student@workstation ~(admin-admin)]$ ssh heat-admin@controller0
3.1. Check /var/log/nova/nova-conductor.log on controller0 for a recent error from the AMQP server.
[heat-admin@controller0 ~]$ sudo tail /var/log/nova/nova-conductor.log
2017-05-30 02:54:28.223 6693 ERROR oslo.messaging._drivers.impl_rabbit [-]
[3a3a6e2f-00bf-4a4a-8ba5-91bc32c381dc] AMQP server on 172.24.1.1:5672 is
unreachable:
[Errno 111] ECONNREFUSED. Trying again in 32 seconds. Client port: None
4. Investigate and fix the issue based on the error discovered in the log. Modify the incorrect rabbitmq
port value in /etc/rabbitmq/rabbitmq-env.conf and send a HUP signal to respawn the beam.smp process.
Log out of the controller0 node when finished.
4.1. Modify the incorrect rabbitmq port value in /etc/rabbitmq/rabbitmq-env.conf by setting the
variable NODE_PORT to 5672. Check that the variable is correct by displaying the value again with the
--get option. Because this file does not have a section header, crudini requires specifying the section as "".
[heat-admin@controller0 ~]$ sudo crudini --set /etc/rabbitmq/rabbitmq-env.conf "" NODE_PORT 5672
[heat-admin@controller0 ~]$ sudo crudini --get /etc/rabbitmq/rabbitmq-env.conf "" NODE_PORT
5672
4.2. List the process ID for the beam.smp process. The beam.smp process is the application virtual
machine that interprets the Erlang language bytecode in which RabbitMQ works. By locating and
restarting this process, RabbitMQ reloads the fixed configuration.
[heat-admin@controller0 ~]$ sudo ps -ef | grep beam.smp
rabbitmq 837197 836998 10 03:42 ? 00:00:01 /usr/lib64/erlang/erts-7.3.1.2/bin/
beam.smp
-rabbit tcp_listeners [{"172.24.1.1",56721
4.3. Restart beam.smp by sending a hangup signal to the retrieved process ID.
[heat-admin@controller0 ~]$ sudo kill -HUP 837197
4.4. List the beam.smp process ID to verify the tcp_listeners port is now 5672.
[heat-admin@controller0 ~]$ sudo ps -ef |grep beam.smp
rabbitmq 837197 836998 10 03:42 ? 00:00:01 /usr/lib64/erlang/erts-7.3.1.2/bin/
beam.smp
-rabbit tcp_listeners [{"172.24.1.1",5672
5. From workstation, again attempt to list instances to verify that the issue is fixed. This command is
expected to display instances or return to a command prompt without hanging.
6. Next, attempt to list images as well. The command is expected to fail, returning an internal
server error.
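The listing commands themselves are not shown for steps 5 and 6; they are simply:
[student@workstation ~(admin-admin)]$ openstack server list
[student@workstation ~(admin-admin)]$ openstack image list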
7.1. From workstation, use SSH to connect to controller0 as the heat-admin user.
[student@workstation ~(admin-admin)]$ ssh heat-admin@controller0
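Step 8 (inspecting the Image service API log for the traceback referenced below) is not reproduced; a plausible command:
[heat-admin@controller0 ~]$ sudo tail -n 50 /var/log/glance/api.log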
9. The error in the Image service log indicates a communication issue with the Image service API and the
Identity service. In a previous step, you verified that the Identity service could communicate with the
Compute service API, so the next logical step is to focus on the Image service configuration. Investigate
and fix the issue based on the traceback found in the Image service log.
9.1. First, view the endpoint URL for the Identity service.
[student@workstation ~(admin-admin)]$ openstack catalog show identity
9.2. The traceback in /var/log/glance/api.log indicated an issue determining the authentication URL.
Inspect /etc/glance/glance-api.conf to verify the auth_url setting, noting the incorrect port.
[heat-admin@controller0 ~]$ sudo grep 'auth_url' /etc/glance/glance-api.conf
#auth_url = None
auth_url=http://172.25.249.60:3535
9.3. Modify the auth_url setting in /etc/glance/glance-api.conf to use port 35357. Check that the
variable is correct by displaying the value again with the --get option.
[heat-admin@controller0 ~]$ sudo crudini --set /etc/glance/glance-api.conf keystone_authtoken
auth_url http://172.25.249.50:35357
[heat-admin@controller0 ~]$ sudo crudini --get /etc/glance/glance-api.conf keystone_authtoken
auth_url
http://172.25.249.50:35357
9.4. Restart the openstack-glance-api service. When finished, exit from controller0.
[heat-admin@controller0 ~]$ sudo systemctl restart openstack-glance-api
[heat-admin@controller0 ~]$ exit
[student@workstation ~(admin-admin)]$
10. From workstation, again attempt to list images to verify the fix. This command should succeed and
return a command prompt without error.
10.1. From workstation, attempt to list images. This command should succeed and return a command
prompt without error.
[student@workstation ~(admin-admin)]$ openstack image list
[student@workstation ~(admin-admin)]$
Cleanup
From workstation, run the lab communication-review cleanup script to clean up the resources created in
this exercise.
[student@workstation ~]$ lab communication-review cleanup
In this exercise you will build and customize a disk image using diskimage-builder.
Resources
Base Image http://materials.example.com/osp-small.qcow2
Working Copy of diskimage-builder Elements /home/student/elements
Outcomes
You should be able to:
• Build and customize an image using diskimage-builder.
• Upload the image into the OpenStack image service.
• Spawn an instance using the customized image.
2. Create a copy of the diskimage-builder elements directory to work with under /home/student/.
[student@workstation ~]$ cp -a /usr/share/diskimage-builder/elements /home/student/
3. Create a post-install.d directory under the working copy of the rhel7 element.
[student@workstation ~]$ mkdir -p /home/student/elements/rhel7/post-install.d
4. Add three scripts under the rhel7 element post-install.d directory to enable the vsftpd service, add
vsftpd:ALL to /etc/hosts.allow, and disable anonymous ftp in /etc/vsftpd/vsftpd.conf.
[student@workstation ~]$ cd /home/student/elements/rhel7/post-install.d/
[student@workstation post-install.d]$ cat <<EOF > 01-enable-services
#!/bin/bash
systemctl enable vsftpd
EOF
[student@workstation post-install.d]$ cat <<EOF > 02-vsftpd-hosts-allow
#!/bin/bash
if ! grep -q vsftpd /etc/hosts.allow
then
echo "vsftpd:ALL" >> /etc/hosts.allow
fi
EOF
[student@workstation post-install.d]$ cat <<EOF > 03-vsftpd-disable-anonymous
#!/bin/bash
sed -i 's|^anonymous_enable=.*|anonymous_enable=NO|' /etc/vsftpd/vsftpd.conf
EOF
5. Return to the student home directory. Set the executable permission on the scripts.
[student@workstation post-install.d]$ cd
[student@workstation ~]$ chmod +x /home/student/elements/rhel7/post-install.d/*
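Step 6 (pointing diskimage-builder at the working elements copy and at the downloaded base image) is omitted in these notes; a hedged sketch using standard diskimage-builder environment variables:
[student@workstation ~]$ wget http://materials.example.com/osp-small.qcow2
[student@workstation ~]$ export ELEMENTS_PATH=/home/student/elements
[student@workstation ~]$ export DIB_LOCAL_IMAGE=/home/student/osp-small.qcow2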
7. Build the finance-rhel-ftp.qcow2 image and include the vsftpd package. The scripts created earlier are
automatically integrated.
[student@workstation ~]$ disk-image-create vm rhel7 -t qcow2 -p vsftpd -o finance-rhel-ftp.qcow2
8. As the developer1 OpenStack user, upload the finance-rhel-ftp.qcow2 image to the image service as
finance-rhel-ftp, with a minimum disk requirement of 10 GiB, and a minimum RAM requirement of 2
GiB.
8.2. Using the openstack command, upload the image finance-rhel-ftp.qcow2 into the OpenStack image
service.
[student@workstation ~(developer1-finance)]$ openstack image create --disk-format qcow2 --min-disk
10 --min-ram 2048 --file finance-rhel-ftp.qcow2 finance-rhel-ftp
10. List the available floating IP addresses, then allocate one to finance-ftp1.
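The commands for step 10 are not shown; a hedged sketch mirroring the pattern used elsewhere in these notes, where 172.25.250.P stands for whichever floating IP is free:
[student@workstation ~(developer1-finance)]$ openstack floating ip list -c "Floating IP Address" -c Port
[student@workstation ~(developer1-finance)]$ openstack server add floating ip finance-ftp1 172.25.250.P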
11. If the image build was successful, the resulting FTP server displays its banner and requests login
credentials. If the following ftp command does not prompt for login credentials, troubleshoot the image
build or deployment. Attempt to log in to the finance-ftp1 instance as student using the ftp command.
Look for the 220 (vsFTPd 3.0.2) message indicating server response. After login, exit at the ftp prompt.
[student@workstation ~(developer1-finance)]$ ftp 172.25.250.P
Connected to 172.25.250.P (172.25.250.P).
220 (vsFTPd 3.0.2)
Name (172.25.250.P:student): student
331 Please specify the password.
Password: student
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> exit
Cleanup
From workstation, run the lab customization-img-building cleanup command to clean up this exercise.
[student@workstation ~]$ lab customization-img-building cleanup
In this exercise you will customize disk images using guestfish and virt-customize.
Resources
Base Image http://materials.example.com/osp-small.qcow2
Outcomes
You should be able to:
• Customize an image using guestfish.
• Customize an image using virt-customize.
• Upload an image into Glance.
• Spawn an instance using a customized image.
2. Using the guestfish command, open the image for editing and include network access.
[student@workstation ~]$ guestfish -i --network -a ~/finance-rhel-db.qcow2
Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.
Type: 'help' for help on commands
'man' to read the manual
'quit' to quit the shell
Operating system: Red Hat Enterprise Linux Server 7.3 (Maipo)
/dev/sda1 mounted on /
><fs>
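Steps 3 and 4 (installing and enabling MariaDB inside the image) are not reproduced; a hedged sketch following the same guestfish pattern used in CASE 1 above:
><fs> command "yum -y install mariadb-server"
><fs> command "systemctl enable mariadb"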
5. Because the enable command produced no output, confirm that the mariadb service is enabled.
><fs> command "systemctl is-enabled mariadb"
enabled
6. Ensure the SELinux contexts for all affected files are correct.
><fs> selinux-relabel /etc/selinux/targeted/contexts/files/file_contexts /
8. As the developer1 OpenStack user, upload the finance-rhel-db.qcow2 image to the image service as
finance-rhel-db, with a minimum disk requirement of 10 GiB, and a minimum RAM requirement of 2
GiB.
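The upload command is not shown; a hedged sketch mirroring the equivalent commands in the neighboring exercises:
[student@workstation ~(developer1-finance)]$ openstack image create --disk-format qcow2 --min-disk 10 --min-ram 2048 --file ~/finance-rhel-db.qcow2 finance-rhel-db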
10. List the available floating IP addresses, and then allocate one to finance-db1.
10.1. List the floating IPs; unallocated IPs have None listed as their Port value.
[student@workstation ~(developer1-finance)]$ openstack floating ip list -c "Floating IP Address" -c Port
11. Use ssh to connect to the finance-db1 instance. Ensure the mariadb-server package is installed, and
that the mariadb service is enabled and running.
11.3. Confirm that the mariadb service is enabled and running, and then log out.
[cloud-user@finance-db1 ~]$ systemctl status mariadb
...output omitted...
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor
preset: disabled)
Active: active (running) since Mon 2017-05-29 20:49:37 EDT; 9min ago
Process: 1033 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID
(code=exited, status=0/SUCCESS)
Process: 815 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited,
status=0/SUCCESS)
Main PID: 1031 (mysqld_safe)
...output omitted...
[cloud-user@finance-db1 ~]$ exit
logout
Connection to 172.25.250.P closed.
[student@workstation ~(developer1-finance)]$
13. Use the virt-customize command to customize the ~/finance-rhel-mail.qcow2 image. Enable the
postfix service, configure postfix to listen on all interfaces, and relay all mail to
workstation.lab.example.com. Install the mailx package to enable sending a test email. Ensure the
SELinux contexts are restored.
[student@workstation ~(developer1-finance)]$ virt-customize \
-a ~/finance-rhel-mail.qcow2 \
--run-command 'systemctl enable postfix' \
--run-command 'postconf -e "relayhost = [workstation.lab.example.com]"' \
--run-command 'postconf -e "inet_interfaces = all"' \
--run-command 'yum -y install mailx' \
--selinux-relabel
[ 0.0] Examining the guest ...
[ 84.7] Setting a random seed
[ 84.7] Running: systemctl enable postfix
[ 86.5] Running: postconf -e "relayhost = [workstation.lab.example.com]"
[ 88.4] Running: postconf -e "inet_interfaces = all"
[ 89.8] Running: yum -y install mailx
[ 174.0] SELinux relabelling
[ 532.7] Finishing off
14. As the developer1 OpenStack user, upload the finance-rhel-mail.qcow2 image to the image service
as finance-rhel-mail, with a minimum disk requirement of 10 GiB, and a minimum RAM requirement of 2
GiB.
14.1. Use the openstack command to upload the finance-rhel-mail.qcow2 image to the image service.
[student@workstation ~(developer1-finance)]$ openstack image create --disk-format qcow2 --min-disk
10 --min-ram 2048 --file ~/finance-rhel-mail.qcow2 finance-rhel-mail
...output omitted...
16. List the available floating IP addresses, and allocate one to finance-mail1.
16.1. List the available floating IPs.
[student@workstation ~(developer1-finance)]$ openstack floating ip list -c "Floating IP Address" -c Port
17. Use ssh to connect to the finance-mail1 instance. Ensure the postfix service is running, that postfix is
listening on all interfaces, and that the relayhost setting is correct.
17.1. Log in to the finance-mail1 instance using ~/developer1-keypair1.pem with ssh.
[student@workstation ~(developer1-finance)]$ ssh -i ~/developer1-keypair1.pem cloud-
user@172.25.250.R
Warning: Permanently added '172.25.250.R' (ECDSA) to the list of known hosts.
[cloud-user@finance-mail1 ~]$
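The verification commands for step 17 are not shown; a hedged sketch (postconf can print individual parameters by name):
[cloud-user@finance-mail1 ~]$ systemctl status postfix
[cloud-user@finance-mail1 ~]$ postconf inet_interfaces relayhost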
Cleanup
From workstation, run the lab customization-img-customizing cleanup command to clean up this
exercise.
[student@workstation ~]$ lab customization-img-customizing cleanup
In this lab, you will build a disk image using diskimage-builder, and then modify it using guestfish.
Resources
Base Image URL http://materials.example.com/osp-small.qcow2
Diskimage-builder elements directory -> /usr/share/diskimage-builder/elements
Outcomes
You will be able to:
• Build an image using diskimage-builder.
• Customize the image using the guestfish command.
• Upload the image to the OpenStack image service.
• Spawn an instance using the customized image.
2. Create a copy of the diskimage-builder elements directory to work with in the /home/student/
directory.
[student@workstation ~]$ cp -a /usr/share/diskimage-builder/elements /home/student/
3. Create a post-install.d directory under the working copy of the rhel7 element.
[student@workstation ~]$ mkdir -p /home/student/elements/rhel7/post-install.d
4. Add a script under the rhel7/post-install.d directory to enable the httpd service.
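The script itself is not reproduced; a hedged sketch following the same pattern as the vsftpd scripts earlier in these notes:
[student@workstation ~]$ cd /home/student/elements/rhel7/post-install.d/
[student@workstation post-install.d]$ cat <<EOF > 01-enable-services
#!/bin/bash
systemctl enable httpd
EOF
[student@workstation post-install.d]$ chmod +x 01-enable-services
[student@workstation post-install.d]$ cd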
7. Add a custom web index page to the production-rhel-web.qcow2 image using guestfish. Include the
text production-rhel-web in the index.html file. Ensure the SELinux context of
/var/www/html/index.html is correct.
7.3. Edit the /var/www/html/index.html file and include the required key words.
><fs> edit /var/www/html/index.html
This instance uses the production-rhel-web image.
7.4. To ensure the new index page works with SELinux in enforcing mode, restore the /var/www/
directory context (including the index.html file).
><fs> selinux-relabel /etc/selinux/targeted/contexts/files/file_contexts /var/www/
8. As the operator1 user, create a new OpenStack image named production-rhel-web using the
production-rhel-web.qcow2 image, with a minimum disk requirement of 10 GiB, and a minimum RAM
requirement of 2 GiB.
10. List the available floating IP addresses, and then allocate one to production-web1.
10.1. List the floating IPs. Available IP addresses have the Port attribute set to None.
[student@workstation ~(operator1-production)]$ openstack floating ip list -c "Floating IP Address" -c
Port
11. Log in to the production-web1 instance using operator1-keypair1.pem with ssh. Ensure the httpd
package is installed, and that the httpd service is enabled and running.
11.1. Use SSH to log in to the production-web1 instance using operator1-keypair1.pem.
[student@workstation ~(operator1-production)]$ ssh -i operator1-keypair1.pem cloud-
user@172.25.250.P
Warning: Permanently added '172.25.250.P' (ECDSA) to the list of known hosts.
[cloud-user@production-web1 ~]$
12. From workstation, confirm that the custom web page, displayed from production-web1, contains the
text production-rhel-web.
[student@workstation ~(operator1-production)]$ curl http://172.25.250.P/index.html
This instance uses the production-rhel-web image.
Evaluation
From workstation, run the lab customization-review grade command to confirm the success of this
exercise. Correct any reported failures and rerun the command until successful.
[student@workstation ~]$ lab customization-review grade
Cleanup
From workstation, run the lab customization-review cleanup command to clean up this exercise.
[student@workstation ~]$ lab customization-review cleanup
1.2. Verify Ceph cluster status using the sudo ceph health command.
[heat-admin@overcloud-controller-0 ~]$ sudo ceph health
HEALTH_OK
2. Verify the status of the Ceph daemons and the cluster's latest events.
2.1. Using the sudo ceph -s command, you will see a MON daemon and three OSD daemons. The three
OSD daemons' states will be up and in.
[heat-admin@overcloud-controller-0 ~]$ sudo ceph -s
cluster 2ff74e60-3cb9-11e7-96f3-52540001fac8
health HEALTH_OK
monmap e1: 1 mons at {overcloud-controller-0=172.24.3.1:6789/0}
election epoch 4, quorum 0 overcloud-controller-0
osdmap e50: 3 osds: 3 up, 3 in
flags sortbitwise
pgmap v556: 224 pgs, 6 pools, 1358 kB data, 76 objects
121 MB used, 58213 MB / 58334 MB avail
224 active+clean
2.2. Display the Ceph cluster's latest events using the sudo ceph -w command. Press Ctrl+C to break the
event listing.
[heat-admin@overcloud-controller-0 ~]$ sudo ceph -w
cluster 2ff74e60-3cb9-11e7-96f3-52540001fac8
health HEALTH_OK
monmap e1: 1 mons at {overcloud-controller-0=172.24.3.1:6789/0}
election epoch 4, quorum 0 overcloud-controller-0
osdmap e50: 3 osds: 3 up, 3 in
flags sortbitwise
pgmap v556: 224 pgs, 6 pools, 1358 kB data, 76 objects
121 MB used, 58213 MB / 58334 MB avail
224 active+clean
2017-05-22 10:48:03.427574 mon.0 [INF] pgmap v574: 224 pgs: 224 active+clean;
1359 kB data, 122 MB used, 58212 MB / 58334 MB avail
...output omitted...
Ctrl+C
3. Verify that the pools and the openstack user, required for configuring Ceph as the backend for Red
Hat OpenStack Platform services, are available.
3.1. Verify that the images and volumes pools are available using the sudo ceph osd lspools command.
[heat-admin@overcloud-controller-0 ~]$ sudo ceph osd lspools
0 rbd,1 metrics,2 images,3 backups,4 volumes,5 vms,
3.2. Verify that the openstack user is available using the sudo ceph auth list command. This user will
have rwx permissions for both the images and volumes pools.
[heat-admin@overcloud-controller-0 ~]$ sudo ceph auth list
...output omitted...
client.openstack
key: AQBELB9ZAAAAABAAmS+6yVgIuc7aZA/CL8rZoA==
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children,
allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms,
allow rwx pool=images, allow rwx pool=metrics
...output omitted...
4. Stop the OSD daemon with ID 0. Verify the Ceph cluster's status.
4.1. Verify that the Ceph cluster's status is HEALTH_OK, and the three OSD daemons are up and in.
[heat-admin@overcloud-controller-0 ~]$ sudo ceph -s
cluster 2ff74e60-3cb9-11e7-96f3-52540001fac8
health HEALTH_OK
monmap e1: 1 mons at {overcloud-controller-0=172.24.3.1:6789/0}
election epoch 4, quorum 0 overcloud-controller-0
osdmap e50: 3 osds: 3 up, 3 in
flags sortbitwise
pgmap v556: 224 pgs, 6 pools, 1358 kB data, 76 objects
121 MB used, 58213 MB / 58334 MB avail
224 active+clean
4.3. Use the systemd unit file for ceph-osd to stop the OSD daemon with ID 0.
[heat-admin@overcloud-cephstorage-0 ~]$ sudo systemctl stop ceph-osd@0
4.5. Verify that the Ceph cluster's status is HEALTH_WARN. Only two of the three OSD daemons are
reported as up.
[heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph -w
cluster 2ff74e60-3cb9-11e7-96f3-52540001fac8
health HEALTH_WARN
224 pgs degraded
224 pgs undersized
recovery 72/216 objects degraded (33.333%)
1/3 in osds are down
monmap e1: 1 mons at {overcloud-controller-0=172.24.3.1:6789/0}
election epoch 4, quorum 0 overcloud-controller-0
osdmap e43: 3 osds: 2 up, 3 in; 224 remapped pgs
flags sortbitwise
pgmap v153: 224 pgs, 6 pools, 1720 kB data, 72 objects
114 MB used, 58220 MB / 58334 MB avail
72/216 objects degraded (33.333%)
224 active+undersized+degraded
mon.0 [INF] pgmap v153: 224 pgs: 224 active+undersized+degraded;
1720 kB data, 114 MB used, 58220 MB / 58334 MB avail;
72/216 objects degraded (33.333%)
mon.0 [INF] osd.0 out (down for 304.628763)
mon.0 [INF] osdmap e44: 3 osds: 2 up, 2 in
...output omitted...
Ctrl+C
5. Start the OSD daemon with ID 0 to fix the issue. Verify that the Ceph cluster's status is HEALTH_OK.
5.1. Use the systemd unit file for ceph-osd to start the OSD daemon with ID 0.
[heat-admin@overcloud-cephstorage-0 ~]$ sudo systemctl start ceph-osd@0
5.2. Verify the Ceph cluster's status is HEALTH_OK. The three OSD daemons are up and in. It may take
some time until the cluster status changes to HEALTH_OK.
[heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph -s
cluster 2ff74e60-3cb9-11e7-96f3-52540001fac8
health HEALTH_OK
monmap e1: 1 mons at {overcloud-controller-0=172.24.3.1:6789/0}
election epoch 4, quorum 0 overcloud-controller-0
osdmap e50: 3 osds: 3 up, 3 in
flags sortbitwise
pgmap v556: 224 pgs, 6 pools, 1358 kB data, 76 objects
121 MB used, 58213 MB / 58334 MB avail
224 active+clean
Cleanup
From workstation, run the lab storage-config-ceph cleanup script to clean up this exercise.
[student@workstation ~]$ lab storage-config-ceph cleanup
1. Create a 10MB file named dataset.dat. As the developer1 user, create a container called container1 in
the OpenStack object storage service. Upload the dataset.dat file to this container.
1.2. Load the credentials for the developer1 user. This user has been configured by the lab script with
the role swiftoperator.
[student@workstation ~]$ source developer1-finance-rc
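The remaining step-1 commands (creating the 10 MB file, the container, and uploading the object) are not shown; a hedged sketch:
[student@workstation ~(developer1-finance)]$ dd if=/dev/zero of=~/dataset.dat bs=1M count=10
[student@workstation ~(developer1-finance)]$ openstack container create container1
[student@workstation ~(developer1-finance)]$ openstack object create container1 dataset.dat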
2. Download the dataset.dat object to the finance-web1 instance created by the lab script.
2.1. Verify that the finance-web1 instance's status is ACTIVE. Verify the floating IP address associated
with the instance.
[student@workstation ~(developer1-finance)]$ openstack server show finance-web1
2.2. Copy the credentials file for the developer1 user to the finance-web1 instance. Use the cloud-user
user and the /home/student/developer1-keypair1.pem key file.
[student@workstation ~(developer1-finance)]$ scp -i developer1-keypair1.pem developer1-finance-rc
cloud-user@172.25.250.P:~
2.3. Log in to the finance-web1 instance using cloud-user as the user and the
/home/student/developer1-keypair1.pem key file.
[student@workstation ~(developer1-finance)]$ ssh -i ~/developer1-keypair1.pem cloud-
user@172.25.250.P
2.5. Download the dataset.dat object from the object storage service.
[cloud-user@finance-web1 ~(developer1-finance)]$ openstack object save container1 dataset.dat
2.6. Verify that the dataset.dat object has been downloaded. When done, log out from the instance.
[cloud-user@finance-web1 ~(developer1-finance)]$ ls -lh dataset.dat
-rw-rw-r--. 1 cloud-user cloud-user 10M May 26 06:58 dataset.dat
[cloud-user@finance-web1 ~(developer1-finance)]$ exit
Cleanup
From workstation, run the lab storage-obj-storage cleanup script to clean up this exercise.
[student@workstation ~]$ lab storage-obj-storage cleanup
In this lab, you will fix an issue in the Ceph environment. You will also upload a MOTD file to the
OpenStack object storage service. Finally, you will retrieve that MOTD file inside an instance.
Resources
Files: http://materials.example.com/motd.custom
Outcomes
You should be able to:
• Fix an issue in a Ceph environment.
• Upload a file to the Object storage service.
• Download and implement an object in the Object storage service inside an instance.
1. The Ceph cluster has a status issue. Fix the issue to return the status to HEALTH_OK.
2. As the operator1 user, create a new container called container4 in the Object storage service. Upload
the custom MOTD file available at http://materials.example.com/motd.custom to this container.
3. Log in to the production-web1 instance, and download the motd.custom object from Swift to
/etc/motd. Use the operator1 user credentials.
4. Verify that the MOTD file includes the message Updated MOTD message.
Evaluation
On workstation, run the lab storage-review grade command to confirm success of this exercise.
[student@workstation ~]$ lab storage-review grade
Cleanup
From workstation, run the lab storage-review cleanup script to clean up this exercise.
[student@workstation ~]$ lab storage-review cleanup
Solution
In this lab, you will fix an issue in the Ceph environment. You will also upload a MOTD file to the
OpenStack object storage service. Finally, you will retrieve that MOTD file inside an instance.
Resources
Files: http://materials.example.com/motd.custom
Outcomes
You should be able to:
• Fix an issue in a Ceph environment.
• Upload a file to the Object storage service.
• Download and implement an object in the Object storage service inside an instance.
Before you begin
Log in to workstation as student using student as the password. From workstation, run lab storage-
review setup, which verifies OpenStack services and previously created resources. This script also
misconfigures Ceph and launches a production-web1 instance with OpenStack CLI tools.
[student@workstation ~]$ lab storage-review setup
Steps
1. The Ceph cluster has a status issue. Fix the issue to return the status to HEALTH_OK.
1.2. Determine the Ceph cluster status. This status will be HEALTH_WARN.
[heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph health
HEALTH_WARN 224 pgs degraded; 224 pgs stuck unclean; 224 pgs undersized;
recovery 501/870 objects degraded (57.586%)
1.3. Determine what the issue is by verifying the status of the Ceph daemons. Only two OSD daemons
will be reported as up and in, instead of the expected three up and three in.
[heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph -s
health HEALTH_WARN
...output omitted...
osdmap e50: 3 osds: 2 up, 2 in; 224 remapped pgs
flags sortbitwise
...output omitted...
1.4. Determine which OSD daemon is down. The status of the OSD daemon with ID 0 on ceph0 is down.
[heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.05499 root default
-2 0.05499 host overcloud-cephstorage-0
0 0.01799 osd.0 down 0 1.00000
1 0.01799 osd.1 up 1.00000 1.00000
2 0.01799 osd.2 up 1.00000 1.00000
1.5. Start the OSD daemon with ID 0 using the systemd unit file.
[heat-admin@overcloud-cephstorage-0 ~]$ sudo systemctl start ceph-osd@0
1.6. Verify that the Ceph cluster status is HEALTH_OK. Initial displays may show the Ceph cluster in
recovery mode, with the percentage still degraded shown in parentheses.
[heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph health
HEALTH_WARN 8 pgs degraded; recovery 26/27975 objects degraded (0.093%)
[heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph health
HEALTH_OK
2. As the operator1 user, create a new container called container4 in the Object storage service. Upload
the custom MOTD file available at http://materials.example.com/motd.custom to this container.
2.1. Download the motd.custom file from http://materials.example.com/motd.custom.
[student@workstation ~]$ wget http://materials.example.com/motd.custom
2.2. View the contents of the motd.custom file. This file contains a new MOTD message.
[student@workstation ~]$ cat ~/motd.custom
Updated MOTD message
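Steps 2.3 and 2.4 (switching to the operator1 credentials and creating the container) are not reproduced; a hedged sketch:
[student@workstation ~]$ source operator1-production-rc
[student@workstation ~(operator1-production)]$ openstack container create container4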
2.5. Create a new object in the container4 container using the motd.custom file.
[student@workstation ~(operator1-production)]$ openstack object create container4 motd.custom
3. Log in to the production-web1 instance, and download the motd.custom object from Swift to
/etc/motd. Use the operator1 user credentials.
3.2. Copy the operator1 user credentials to the production-web1 instance. Use cloud-user as the user
and the /home/student/operator1-keypair1.pem key file.
[student@workstation ~(operator1-production)]$ scp -i ~/operator1-keypair1.pem operator1-
production-rc cloud-user@172.25.250.P:~
3.3. Log in to the production-web1 instance as the cloud-user user. Use the /home/student/operator1-
keypair1.pem key file.
[student@workstation ~(operator1-production)]$ ssh -i ~/operator1-keypair1.pem cloud-
user@172.25.250.P
3.5. Download the motd.custom object from the Object service using the operator1-production-rc user
credentials. Use the --file option to save the object as /etc/motd.
[cloud-user@production-web1 ~(operator1-production)]$ sudo -E openstack object save --file /etc/motd
container4 motd.custom
4. Verify that the MOTD file includes the message Updated MOTD message.
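A quick check for step 4 (not shown in these notes), run inside the production-web1 instance:
[cloud-user@production-web1 ~(operator1-production)]$ cat /etc/motd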
Evaluation
On workstation, run the lab storage-review grade command to confirm success of this exercise.
[student@workstation ~]$ lab storage-review grade
Cleanup
From workstation, run the lab storage-review cleanup script to clean up this exercise.
[student@workstation ~]$ lab storage-review cleanup
1. From workstation, source the developer1-research-rc credentials file. As the developer1 user, create a
network for the project. Name the network research-network1.
[student@workstation ~]$ source developer1-research-rc
[student@workstation ~(developer1-research)]$ openstack network create research-network1
2. Create the subnet research-subnet1 for the network in the 192.168.1.0/24 range. Use 172.25.250.254
as the DNS server.
[student@workstation ~(developer1-research)]$ openstack subnet create \
--network research-network1 \
--subnet-range=192.168.1.0/24 \
--dns-nameserver=172.25.250.254 \
--dhcp research-subnet1
3. Open another terminal and log in to the controller node, controller0, to review the ML2 configuration.
Ensure that there are driver entries for VLAN networks.
3.1. Log in to the controller node as the heat-admin user and become root.
[student@workstation ~]$ ssh heat-admin@controller0
[heat-admin@overcloud-controller-0 ~]$ sudo -i
[root@overcloud-controller-0 ~]#
3.2. Go to the /etc/neutron/ directory. Use the crudini command to retrieve the values for the
type_drivers key in the ml2 group. Ensure that the vlan driver is included.
[root@overcloud-controller-0 heat-admin]# cd /etc/neutron
[root@overcloud-controller-0 neutron]# crudini --get plugin.ini ml2 type_drivers
vxlan,vlan,flat,gre
3.3. Retrieve the name of the physical network used by VLAN networks. ML2 groups are named after the
driver, for example, ml2_type_vlan.
[root@overcloud-controller-0 neutron]# crudini --get plugin.ini ml2_type_vlan network_vlan_ranges
datacentre:1:1000
[root@overcloud-controller-0 neutron]# exit
[heat-admin@overcloud-controller-0 ~]$ exit
[student@workstation ~]$ exit
4. On workstation, as the architect1 user, create the provider network provider-172.25.250. The
network will be used to provide external connectivity. Use vlan as the provider network type with a
segment ID of 500. Use datacentre as the physical network name, as defined in the ML2 configuration
file.
[student@workstation ~(developer1-research)]$ source architect1-research-rc
[student@workstation ~(architect1-research)]$ openstack network create \
--external \
--provider-network-type vlan \
--provider-physical-network datacentre \
--provider-segment 500 \
provider-172.25.250
5. Create the subnet for the provider network provider-172.25.250 with an allocation pool of
172.25.250.101 - 172.25.250.189. Name the subnet provider-subnet-172.25.250. Use 172.25.250.254 for
both the DNS server and the gateway. Disable DHCP for this network.
[student@workstation ~(architect1-research)]$ openstack subnet create \
--no-dhcp \
--subnet-range 172.25.250.0/24 \
--gateway 172.25.250.254 \
--dns-nameserver 172.25.250.254 \
--allocation-pool start=172.25.250.101,end=172.25.250.189 \
--network provider-172.25.250 \
provider-subnet-172.25.250
6. As the developer1 user, create the router research-router1. Add an interface to research-router1 in
the research-subnet1 subnet. Define the router as a gateway for the provider-172.25.250 network.
6.1. Source the developer1-research-rc credentials file and create the research-router1 router.
[student@workstation ~(architect1-research)]$ source developer1-research-rc
[student@workstation ~(developer1-research)]$ openstack router create research-router1
6.2. Add an interface to research-router1 in the research-subnet1 subnet. The command does not
produce any output.
[student@workstation ~(developer1-research)]$ openstack router add subnet research-router1
research-subnet1
6.3. Use the neutron command to define the router as a gateway for the provider-172.25.250 network.
[student@workstation ~(developer1-research)]$ neutron router-gateway-set research-router1 provider-
172.25.250
Set gateway for router research-router1
8. Launch the research-web1 instance in the environment. Use the m1.small flavor and the rhel7 image.
Connect the instance to the research-network1 network.
[student@workstation ~(developer1-research)]$ openstack server create \
--image rhel7 \
--flavor m1.small \
--nic net-id=research-network1 \
--wait research-web1
9.3. List the network ports. Locate the UUID of the port corresponding to the instance in the research-
network1 network.
In the output, f952b9e9-bf30-4889-bb89-4303b4e849ae is the ID of the subnet for the research-
network1 network.
[student@workstation ~(developer1-research)]$ openstack subnet list -c ID -c Name
[student@workstation ~(developer1-research)]$ openstack port list -f json
[
{
"Fixed IP Addresses": "ip_address='192.168.1.N', subnet_id='f952b9e9-
bf30-4889-bb89-4303b4e849ae'",
"ID": "1f5285b0-76b5-41db-9cc7-578289ddc83c",
"MAC Address": "fa:16:3e:f0:04:a9",
"Name": ""
},
...output omitted...
10. Open another terminal. Use the ssh command to log in to the compute0 virtual machine as the heat-
admin user.
[student@workstation ~]$ ssh heat-admin@compute0
[heat-admin@overcloud-compute-0 ~]$
11. List the Linux bridges in the environment. Ensure that there is a qbr bridge that uses the first ten
characters of the Neutron port in its name. The bridge has two ports in it: the TAP device that the
instance uses and the qvb vEth pair, which connects the Linux bridge to the integration bridge.
[heat-admin@overcloud-compute-0 ~]$ brctl show
qbr1f5285b0-76 8000.ce25a52e5a32 no qvb1f5285b0-76
tap1f5285b0-76
12. Exit from the compute node and connect to the controller node.
[heat-admin@overcloud-compute-0 ~]$ exit
[student@workstation ~]$ ssh heat-admin@controller0
[heat-admin@overcloud-controller-0 ~]$
13. To determine the port ID of the phy-br-ex patch port, use the ovs-ofctl command. The output lists the
ports in the br-ex bridge.
[heat-admin@overcloud-controller-0 ~]$ sudo ovs-ofctl show br-ex
OFPT_FEATURES_REPLY (xid=0x2): dpid:000052540002fa01
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst
mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(eth2): addr:52:54:00:02:fa:01
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
2(phy-br-ex): addr:1a:5d:d6:bb:01:a1
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
...output omitted...
14. Dump the flows for the external bridge, br-ex. Review the entries to locate the flow for the packets
passing through the tenant network. Locate the rule that handles packets in the
phy-br-ex port. The following output shows how the internal VLAN ID, 2, is replaced with the VLAN ID
500 as defined by the --provider-segment 500 option.
[heat-admin@overcloud-controller-0 ~]$ sudo ovs-ofctl dump-flows br-ex
NXST_FLOW reply (xid=0x4):
cookie=0xbcb9ae293ed51406, duration=2332.961s, table=0, n_packets=297,
n_bytes=12530, idle_age=872, priority=4,in_port=2,dl_vlan=2
actions=mod_vlan_vid:500,NORMAL
...output omitted...
Cleanup
From workstation, run the lab network-managing-sdn cleanup script to clean up the resources created in
this exercise.
[student@workstation ~]$ lab network-managing-sdn cleanup
1. As the architect1 administrative user, review the instances for each of the two projects.
1.1. From workstation, source the credential file for the architect1 user in the finance project, available
at /home/student/architect1-finance-rc. List the instances in the finance project.
[student@workstation ~]$ source architect1-finance-rc
[student@workstation ~(architect1-finance)]$ openstack server list -f json
[
{
"Status": "ACTIVE",
"Networks": "finance-network1=192.168.2.F",
"ID": "fcdd9115-5e05-4ec6-bd1c-991ab36881ee",
"Image Name": "rhel7",
"Name": "finance-app1"
}
]
1.2. Source the credential file of the architect1 user for the research project, available at
/home/student/architect1-research-rc. List the instances in the project.
[student@workstation ~(architect1-finance)]$ source architect1-research-rc
[student@workstation ~(architect1-research)]$ openstack server list -f json
[
{
"Status": "ACTIVE",
"Networks": "research-network1=192.168.1.R",
"ID": "d9c2010e-93c0-4dc7-91c2-94bce5133f9b",
"Image Name": "rhel7",
"Name": "research-app1"
}
]
2. As the architect1 administrative user in the research project, create a shared external network to
provide external connectivity for the two projects. Use provider-172.25.250 as the name of the network.
The environment uses flat networks with datacentre as the physical network name.
[student@workstation ~(architect1-research)]$ openstack network create \
--external --share \
--provider-network-type flat \
--provider-physical-network datacentre \
provider-172.25.250
3. Create the subnet for the provider network in the 172.25.250.0/24 range. Name the subnet provider-
subnet-172.25.250. Disable the DHCP service for the network and use an allocation pool of
172.25.250.101 - 172.25.250.189. Use 172.25.250.254 as the DNS server and the gateway for the
network.
[student@workstation ~(architect1-research)]$ openstack subnet create \
--network provider-172.25.250 \
--no-dhcp --subnet-range 172.25.250.0/24 \
--gateway 172.25.250.254 \
--dns-nameserver 172.25.250.254 \
--allocation-pool start=172.25.250.101,end=172.25.250.189 \
provider-subnet-172.25.250
4. List the subnets present in the environment. Ensure that there are three subnets: one subnet for each
project and one subnet for the external network.
[student@workstation ~(architect1-research)]$ openstack subnet list -f json
[
{
"Network": "14f8182a-4c0f-442e-8900-daf3055e758d",
"Subnet": "192.168.2.0/24",
"ID": "79d5d45f-e9fd-47a2-912e-e1acb83c6978",
"Name": "finance-subnet1"
},
{
"Network": "f51735e7-4992-4ec3-b960-54bd8081c07f",
"Subnet": "192.168.1.0/24",
"ID": "d1dd16ee-a489-4884-a93b-95028b953d16",
"Name": "research-subnet1"
},
{
"Network": "56b18acd-4f5a-4da3-a83a-fdf7fefb59dc",
"Subnet": "172.25.250.0/24",
"ID": "e5d37f20-c976-4719-aadf-1b075b17c861",
"Name": "provider-subnet-172.25.250"
}
]
5. Create the research-router1 router and connect it to the two subnets, finance and research.
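The commands for steps 5 and 6 are not shown; a hedged sketch following the pattern used earlier in these notes (the external gateway is set with the neutron client, as before):
[student@workstation ~(architect1-research)]$ openstack router create research-router1
[student@workstation ~(architect1-research)]$ openstack router add subnet research-router1 research-subnet1
[student@workstation ~(architect1-research)]$ openstack router add subnet research-router1 finance-subnet1
[student@workstation ~(architect1-research)]$ neutron router-gateway-set research-router1 provider-172.25.250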
7. Ensure that the router is connected to the three networks by listing the router ports.
[student@workstation ~(architect1-research)]$ neutron router-port-list research-router1 -f json
[
{
"mac_address": "fa:16:3e:65:71:68",
"fixed_ips": "{\"subnet_id\": \"0e6db9a7-40b6-4b10-b975-9ac32c458879\",
\"ip_address\": \"192.168.2.1\"}",
"id": "ac11ea59-e50e-47fa-b11c-1e93d975b534",
"name": ""
},
{
"mac_address": "fa:16:3e:5a:74:28",
"fixed_ips": "{\"subnet_id\": \"e5d37f20-c976-4719-aadf-1b075b17c861\",
\"ip_address\": \"172.25.250.S\"}",
"id": "dba2aba8-9060-4cef-be9f-6579baa016fb",
"name": ""
},
{
"mac_address": "fa:16:3e:a1:77:5f",
"fixed_ips": "{\"subnet_id\": \"d1dd16ee-a489-4884-a93b-95028b953d16\",
\"ip_address\": \"192.168.1.1\"}",
"id": "fa7dab05-e5fa-4c2d-a611-d78670006ddf",
"name": ""
}
]
8. As the developer1 user, create a floating IP and attach it to the research-app1 virtual machine.
8.1. Source the credentials for the developer1 user and create a floating IP.
[student@workstation ~(architect1-finance)]$ source developer1-research-rc
[student@workstation ~(developer1-research)]$ openstack floating ip create provider-172.25.250
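Step 8.2 (attaching the floating IP) is not shown; a hedged sketch, where 172.25.250.N is the address returned above (the finance-app1 case in step 9 is analogous):
[student@workstation ~(developer1-research)]$ openstack server add floating ip research-app1 172.25.250.N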
9. As the developer2 user, create a floating IP and attach it to the finance-app1 virtual machine.
9.1. Source the credentials for the developer2 user and create a floating IP.
[student@workstation ~(developer1-research)]$ source developer2-finance-rc
[student@workstation ~(developer2-finance)]$ openstack floating ip create provider-172.25.250
10. Source the credentials for the developer1 user and retrieve the floating IP attached to the research-
app1 virtual machine.
[student@workstation ~(developer2-finance)]$ source developer1-research-rc
[student@workstation ~(developer1-research)]$ openstack server list -f json
[
{
"Status": "ACTIVE",
"Networks": "research-network1=192.168.1.R, 172.25.250.N",
"ID": "d9c2010e-93c0-4dc7-91c2-94bce5133f9b",
"Image Name": "rhel7",
"Name": "research-app1"
}
]
11. Test connectivity to the research-app1 instance, running in the research project, by using the ping
command.
[student@workstation ~(developer1-research)]$ ping -c 3 172.25.250.N
PING 172.25.250.N (172.25.250.N) 56(84) bytes of data.
64 bytes from 172.25.250.N: icmp_seq=1 ttl=63 time=1.77 ms
64 bytes from 172.25.250.N: icmp_seq=2 ttl=63 time=0.841 ms
64 bytes from 172.25.250.N: icmp_seq=3 ttl=63 time=0.861 ms
--- 172.25.250.N ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.841/1.159/1.776/0.437 ms
12. As the developer2 user, retrieve the floating IP attached to the finance-app1 virtual machine so you
can test connectivity.
[student@workstation ~(developer1-research)]$ source developer2-finance-rc
[student@workstation ~(developer2-finance)]$ openstack server list -f json
[
{
"Status": "ACTIVE",
"Networks": "finance-network1=192.168.2.F, 172.25.250.P",
"ID": "797854e4-1253-4059-a6d1-3cb5a99a98ec",
"Image Name": "rhel7",
"Name": "finance-app1"
}
]
13. Use the ping command to reach the 172.25.250.P IP. Leave the command running, as you will
connect to the overcloud nodes to review how the packets are routed.
[student@workstation ~(developer2-finance)]$ ping 172.25.250.P
PING 172.25.250.P (172.25.250.P) 56(84) bytes of data.
64 bytes from 172.25.250.P: icmp_seq=1 ttl=63 time=1.84 ms
64 bytes from 172.25.250.P: icmp_seq=2 ttl=63 time=0.639 ms
64 bytes from 172.25.250.P: icmp_seq=3 ttl=63 time=0.708 ms
...output omitted...
14. Open another terminal. Use the ssh command to log in to controller0 as the heat-admin user.
[student@workstation ~]$ ssh heat-admin@controller0
[heat-admin@overcloud-controller-0 ~]$
15. Run the tcpdump command against all interfaces. Notice the two IP addresses to which the ICMP
packets are routed: 192.168.2.F, which is the private IP of the finance-app1 virtual machine, and
172.25.250.254, which is the gateway for the provider network.
[heat-admin@overcloud-controller-0 ~]$ sudo tcpdump \
-i any -n -v \
'icmp[icmptype] = icmp-echoreply' \
or 'icmp[icmptype] = icmp-echo'
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535
bytes
16:15:09.301102 IP (tos 0x0, ttl 64, id 31032, offset 0, flags [DF], proto ICMP (1),
length 84)
172.25.250.254 > 172.25.250.P: ICMP echo request, id 24572, seq 10, length 64
16:15:09.301152 IP (tos 0x0, ttl 63, id 31032, offset 0, flags [DF], proto ICMP (1),
length 84)
172.25.250.254 > 192.168.2.F: ICMP echo request, id 24572, seq 10, length 64
16:15:09.301634 IP (tos 0x0, ttl 64, id 12980, offset 0, flags [none], proto ICMP
(1), length 84)
192.168.2.F > 172.25.250.254: ICMP echo reply, id 24572, seq 10, length 64
16:15:09.301677 IP (tos 0x0, ttl 63, id 12980, offset 0, flags [none], proto ICMP
(1), length 84)
172.25.250.P > 172.25.250.254: ICMP echo reply, id 24572, seq 10, length 64
16:15:10.301102 IP (tos 0x0, ttl 64, id 31282, offset 0, flags [DF], proto ICMP (1),
length 84)
172.25.250.254 > 172.25.250.P: ICMP echo request, id 24572, seq 11, length 64
16:15:10.301183 IP (tos 0x0, ttl 63, id 31282, offset 0, flags [DF], proto ICMP (1),
length 84)
172.25.250.254 > 192.168.2.F: ICMP echo request, id 24572, seq 11, length 64
16:15:10.301693 IP (tos 0x0, ttl 64, id 13293, offset 0, flags [none], proto ICMP
(1), length 84)
192.168.2.F > 172.25.250.254: ICMP echo reply, id 24572, seq 11, length 64
16:15:10.301722 IP (tos 0x0, ttl 63, id 13293, offset 0, flags [none], proto ICMP
(1), length 84)
172.25.250.P > 172.25.250.254: ICMP echo reply, id 24572, seq 11, length 64
...output omitted...
16. Cancel the tcpdump command by pressing Ctrl+C and list the network namespaces. Retrieve the
routes in the qrouter namespace to determine the network device that handles the routing for the
192.168.2.0/24 network. The following output indicates that packets destined to the 192.168.2.0/24
network are routed through the qr-ac11ea59-e5 device (the IDs and names will be different in your
output).
[heat-admin@overcloud-controller-0 ~]$ ip netns list
qrouter-3fed0799-5da7-48ac-851d-c2b3dee01b24
...output omitted...
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec \
qrouter-3fed0799-5da7-48ac-851d-c2b3dee01b24 \
ip route
172.25.250.0/24 dev qg-dba2aba8-90 proto kernel scope link src 172.25.250.107
192.168.1.0/24 dev qr-fa7dab05-e5 proto kernel scope link src 192.168.1.1
192.168.2.0/24 dev qr-ac11ea59-e5 proto kernel scope link src 192.168.2.1
17. Within the qrouter namespace, run the ping command to confirm that the private IP of the finance-
app1 virtual machine, 192.168.2.F, is reachable.
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec \
qrouter-3fed0799-5da7-48ac-851d-c2b3dee01b24 \
ping -c 3 -I qr-ac11ea59-e5 192.168.2.F
PING 192.168.2.F (192.168.2.F) from 192.168.2.1 qr-ac11ea59-e5: 56(84) bytes of
data.
64 bytes from 192.168.2.F: icmp_seq=1 ttl=64 time=0.555 ms
64 bytes from 192.168.2.F: icmp_seq=2 ttl=64 time=0.507 ms
64 bytes from 192.168.2.F: icmp_seq=3 ttl=64 time=0.601 ms
--- 192.168.2.F ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.507/0.554/0.601/0.042 ms
18. From the first terminal, cancel the ping command by pressing Ctrl+C. Rerun the ping command
against the floating IP of the research-app1 virtual machine, 172.25.250.N. Leave the command running,
as you will be inspecting the packets from the controller0 node.
[student@workstation ~(developer2-finance)]$ ping 172.25.250.N
PING 172.25.250.N (172.25.250.N) 56(84) bytes of data.
64 bytes from 172.25.250.N: icmp_seq=1 ttl=63 time=1.84 ms
64 bytes from 172.25.250.N: icmp_seq=2 ttl=63 time=0.639 ms
64 bytes from 172.25.250.N: icmp_seq=3 ttl=63 time=0.708 ms
...output omitted...
19. From the terminal connected to controller0, run the tcpdump command. Notice the two IP
addresses to which the ICMP packets are routed: 192.168.1.R, which is the private IP of the research-app1
virtual machine, and 172.25.250.254, which is the IP address of the gateway for the provider network.
[heat-admin@overcloud-controller-0 ~]$ sudo tcpdump \
-i any -n -v \
'icmp[icmptype] = icmp-echoreply' or \
'icmp[icmptype] = icmp-echo'
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535
bytes
16:58:40.340643 IP (tos 0x0, ttl 64, id 65405, offset 0, flags [DF], proto ICMP (1),
length 84)
172.25.250.254 > 172.25.250.N: ICMP echo request, id 24665, seq 47, length 64
16:58:40.340690 IP (tos 0x0, ttl 63, id 65405, offset 0, flags [DF], proto ICMP (1),
length 84)
172.25.250.254 > 192.168.1.R: ICMP echo request, id 24665, seq 47, length 64
16:58:40.341130 IP (tos 0x0, ttl 64, id 41896, offset 0, flags [none], proto ICMP
(1), length 84)
192.168.1.R > 172.25.250.254: ICMP echo reply, id 24665, seq 47, length 64
16:58:40.341141 IP (tos 0x0, ttl 63, id 41896, offset 0, flags [none], proto ICMP
(1), length 84)
172.25.250.N > 172.25.250.254: ICMP echo reply, id 24665, seq 47, length 64
16:58:41.341051 IP (tos 0x0, ttl 64, id 747, offset 0, flags [DF], proto ICMP (1),
length 84)
172.25.250.254 > 172.25.250.N: ICMP echo request, id 24665, seq 48, length 64
16:58:41.341102 IP (tos 0x0, ttl 63, id 747, offset 0, flags [DF], proto ICMP (1),
length 84)
172.25.250.254 > 192.168.1.R: ICMP echo request, id 24665, seq 48, length 64
16:58:41.341562 IP (tos 0x0, ttl 64, id 42598, offset 0, flags [none], proto ICMP
(1), length 84)
192.168.1.R > 172.25.250.254: ICMP echo reply, id 24665, seq 48, length 64
16:58:41.341585 IP (tos 0x0, ttl 63, id 42598, offset 0, flags [none], proto ICMP
(1), length 84)
172.25.250.N > 172.25.250.254: ICMP echo reply, id 24665, seq 48, length 64
...output omitted...
20. Cancel the tcpdump command by pressing Ctrl+C and list the network namespaces. Retrieve the
routes in the qrouter namespace to determine the network device that handles routing for the
192.168.1.0/24 network. The following output indicates that packets destined to the 192.168.1.0/24
network are routed through the qr-fa7dab05-e5 device (the IDs and names will be different in your
output).
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns list
qrouter-3fed0799-5da7-48ac-851d-c2b3dee01b24
...output omitted...
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec \
qrouter-3fed0799-5da7-48ac-851d-c2b3dee01b24 \
ip route
172.25.250.0/24 dev qg-dba2aba8-90 proto kernel scope link src 172.25.250.107
192.168.1.0/24 dev qr-fa7dab05-e5 proto kernel scope link src 192.168.1.1
192.168.2.0/24 dev qr-ac11ea59-e5 proto kernel scope link src 192.168.2.1
21. Within the qrouter namespace, run the ping command to confirm that the private IP of the research-
app1 virtual machine, 192.168.1.R, is reachable.
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec \
qrouter-3fed0799-5da7-48ac-851d-c2b3dee01b24 \
ping -c 3 -I qr-fa7dab05-e5 192.168.1.R
PING 192.168.1.R (192.168.1.R) from 192.168.1.1 qr-fa7dab05-e5: 56(84) bytes of data.
64 bytes from 192.168.1.R: icmp_seq=1 ttl=64 time=0.500 ms
64 bytes from 192.168.1.R: icmp_seq=2 ttl=64 time=0.551 ms
64 bytes from 192.168.1.R: icmp_seq=3 ttl=64 time=0.519 ms
--- 192.168.1.R ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.500/0.523/0.551/0.028 ms
23. From a terminal connected to compute0 as the heat-admin user, list the Linux bridges. The following
output indicates two bridges with two ports each. Each bridge corresponds to an instance. The TAP device
in each bridge corresponds to the virtual NIC of the instance; the qvb device corresponds to the vEth pair
that connects the Linux bridge to the integration bridge, br-int.
[heat-admin@overcloud-compute-0 ~]$ brctl show
bridge name bridge id STP enabled interfaces
qbr03565cda-b1 8000.a2117e24b27b no qvb03565cda-b1
tap03565cda-b1
qbr92387a93-92 8000.9a21945ec452 no qvb92387a93-92
tap92387a93-92
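To see the other end of each vEth pair, you can also list the ports attached to the integration bridge (not part of the original step; a hedged sketch). The qvo ports shown there pair with the qvb ports reported by brctl.
[heat-admin@overcloud-compute-0 ~]$ sudo ovs-vsctl list-ports br-int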
24. Run the tcpdump command against either of the two qvb interfaces while the ping command is still
running against the 172.25.250.N floating IP. If the output does not show any packets being captured,
press CTRL+C and rerun the command against the other qvb interface.
[heat-admin@overcloud-compute-0 ~]$ sudo tcpdump -i qvb03565cda-b1 \
-n -vv 'icmp[icmptype] = icmp-echoreply' or \
'icmp[icmptype] = icmp-echo'
tcpdump: WARNING: qvb03565cda-b1: no IPv4 address assigned
tcpdump: listening on qvb03565cda-b1, link-type EN10MB (Ethernet), capture size
65535 bytes
CTRL+C
[heat-admin@overcloud-compute-0 ~]$ sudo tcpdump -i qvb92387a93-92 \
-n -vv 'icmp[icmptype] = icmp-echoreply' or \
'icmp[icmptype] = icmp-echo'
tcpdump: WARNING: qvb92387a93-92: no IPv4 address assigned
tcpdump: listening on qvb92387a93-92, link-type EN10MB (Ethernet), capture size
65535 bytes
17:32:43.781928 IP (tos 0x0, ttl 63, id 48653, offset 0, flags [DF], proto ICMP (1),
length 84)
172.25.250.254 > 192.168.1.12: ICMP echo request, id 24721, seq 1018, length 64
17:32:43.782197 IP (tos 0x0, ttl 64, id 37307, offset 0, flags [none], proto ICMP
(1), length 84)
192.168.1.R > 172.25.250.254: ICMP echo reply, id 24721, seq 1018, length 64
17:32:44.782026 IP (tos 0x0, ttl 63, id 49219, offset 0, flags [DF], proto ICMP (1),
length 84)
172.25.250.254 > 192.168.1.12: ICMP echo request, id 24721, seq 1019, length 64
17:32:44.782315 IP (tos 0x0, ttl 64, id 38256, offset 0, flags [none], proto ICMP
(1), length 84)
192.168.1.R > 172.25.250.254: ICMP echo reply, id 24721, seq 1019, length 64
...output omitted...
25. From the first terminal, cancel the ping command. Rerun the command against the 172.25.250.P IP,
which is the IP of the finance-app1 instance.
[student@workstation ~(developer2-finance)]$ ping 172.25.250.P
PING 172.25.250.P (172.25.250.P) 56(84) bytes of data.
64 bytes from 172.25.250.P: icmp_seq=1 ttl=63 time=0.883 ms
64 bytes from 172.25.250.P: icmp_seq=2 ttl=63 time=0.779 ms
64 bytes from 172.25.250.P: icmp_seq=3 ttl=63 time=0.812 ms
64 bytes from 172.25.250.P: icmp_seq=4 ttl=63 time=0.787 ms
...output omitted...
26. From the terminal connected to the compute0 node, enter CTRL+C to cancel the tcpdump command.
Rerun the command against the second qvb interface, qvb03565cda-b1. Confirm that the output
indicates some activity.
[heat-admin@overcloud-compute-0 ~]$ sudo tcpdump -i qvb03565cda-b1 \
-n -vv 'icmp[icmptype] = icmp-echoreply' or \
'icmp[icmptype] = icmp-echo'
tcpdump: WARNING: qvb03565cda-b1: no IPv4 address assigned
17:40:20.596012 IP (tos 0x0, ttl 63, id 58383, offset 0, flags [DF], proto ICMP (1),
length 84)
172.25.250.254 > 192.168.2.F: ICMP echo request, id 24763, seq 172, length 64
17:40:20.596240 IP (tos 0x0, ttl 64, id 17005, offset 0, flags [none], proto ICMP
(1), length 84)
192.168.2.F > 172.25.250.254: ICMP echo reply, id 24763, seq 172, length 64
17:40:21.595997 IP (tos 0x0, ttl 63, id 58573, offset 0, flags [DF], proto ICMP (1),
length 84)
172.25.250.254 > 192.168.2.F: ICMP echo request, id 24763, seq 173, length 64
17:40:21.596294 IP (tos 0x0, ttl 64, id 17064, offset 0, flags [none], proto ICMP
(1), length 84)
192.168.2.F > 172.25.250.254: ICMP echo reply, id 24763, seq 173, length 64
17:40:22.595953 IP (tos 0x0, ttl 63, id 59221, offset 0, flags [DF], proto ICMP (1),
length 84)
172.25.250.254 > 192.168.2.F: ICMP echo request, id 24763, seq 174, length 64
17:40:22.596249 IP (tos 0x0, ttl 64, id 17403, offset 0, flags [none], proto ICMP
(1), length 84)
192.168.2.F > 172.25.250.254: ICMP echo reply, id 24763, seq 174, length 64
...output omitted...
27. From the first terminal, cancel the ping and confirm that the IP address 192.168.2.F is the private IP
of the finance-app1 instance.
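One way to confirm the private IP, assuming the developer2-finance credentials are still sourced (the original step does not show the command; a minimal sketch that limits the output to the addresses field):
[student@workstation ~(developer2-finance)]$ openstack server show finance-app1 -c addresses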
28. Log in to the finance-app1 instance as the cloud-user user. Run the ping command against the
floating IP assigned to the research-app1 virtual machine, 172.25.250.N.
28.1. Use the ssh command as the cloud-user user to log in to finance-app1, with an IP address of
172.25.250.P. Use the developer2-keypair1.pem key located in the home directory of the student user.
[student@workstation ~(developer2-finance)]$ ssh -i developer2-keypair1.pem cloud-
user@172.25.250.P
[cloud-user@finance-app1 ~]$
28.2. Run the ping command against the floating IP of the research-app1 instance, 172.25.250.N.
[cloud-user@finance-app1 ~]$ ping 172.25.250.N
29. From the terminal connected to compute-0, enter CTRL+C to cancel the tcpdump command. Rerun
the command without specifying any interface. Confirm that the output indicates some activity.
[heat-admin@overcloud-compute-0 ~]$ sudo tcpdump -i any \
-n -v 'icmp[icmptype] = icmp-echoreply' or \
'icmp[icmptype] = icmp-echo'
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535
bytes
18:06:05.030442 IP (tos 0x0, ttl 64, id 39160, offset 0, flags [DF], proto ICMP (1),
length 84)
192.168.2.F > 172.25.250.N: ICMP echo request, id 12256, seq 309, length 64
18:06:05.030489 IP (tos 0x0, ttl 63, id 39160, offset 0, flags [DF], proto ICMP (1),
length 84)
172.25.250.P > 192.168.1.R: ICMP echo request, id 12256, seq 309, length 64
18:06:05.030774 IP (tos 0x0, ttl 64, id 32646, offset 0, flags [none], proto ICMP
(1), length 84)
192.168.1.R > 172.25.250.P: ICMP echo reply, id 12256, seq 309, length 64
18:06:05.030786 IP (tos 0x0, ttl 63, id 32646, offset 0, flags [none], proto ICMP
(1), length 84)
172.25.250.N > 192.168.2.F: ICMP echo reply, id 12256, seq 309, length 64
18:06:06.030527 IP (tos 0x0, ttl 64, id 40089, offset 0, flags [DF], proto ICMP (1),
length 84)
192.168.2.F > 172.25.250.N: ICMP echo request, id 12256, seq 310, length 64
18:06:06.030550 IP (tos 0x0, ttl 63, id 40089, offset 0, flags [DF], proto ICMP (1),
length 84)
172.25.250.P > 192.168.1.R: ICMP echo request, id 12256, seq 310, length 64
18:06:06.030880 IP (tos 0x0, ttl 64, id 33260, offset 0, flags [none], proto ICMP
(1), length 84)
192.168.1.R > 172.25.250.P: ICMP echo reply, id 12256, seq 310, length 64
18:06:06.030892 IP (tos 0x0, ttl 63, id 33260, offset 0, flags [none], proto ICMP
(1), length 84)
172.25.250.N > 192.168.2.F: ICMP echo reply, id 12256, seq 310, length 64
...output omitted...
The output indicates the following flow for ICMP sequence 309 (seq 309):
• The private IP of the finance-app1 instance, 192.168.2.F, sends an echo request to the floating IP of the research-app1 instance, 172.25.250.N.
• The floating IP of the finance-app1 instance, 172.25.250.P, sends an echo request to the private IP of the research-app1 instance, 192.168.1.R.
• The private IP of the research-app1 instance, 192.168.1.R, sends an echo reply to the floating IP of the finance-app1 instance, 172.25.250.P.
• The floating IP of the research-app1 instance, 172.25.250.N, sends an echo reply to the private IP of the finance-app1 instance, 192.168.2.F.
30. Close the terminal connected to compute-0. Cancel the ping command, and log out of finance-app1.
Cleanup
From workstation, run the lab network-tracing-net-flows cleanup script to clean up the resources
created in this exercise.
[student@workstation ~]$ lab network-tracing-net-flows cleanup
1. From workstation, source the credentials for the developer1 user and review the environment.
1.1. Source the credentials for the developer1 user located at /home/student/developer1-research-rc.
List the instances in the environment.
[student@workstation ~]$ source developer1-research-rc
[student@workstation ~(developer1-research)]$ openstack server list -f json
[
{
"Status": "ACTIVE",
"Networks": "research-network1=192.168.1.N, 172.25.250.P",
"ID": "2cfdef0a-a664-4d36-b27d-da80b4b8626d",
"Image Name": "rhel7",
"Name": "research-app1"
}
]
1.2. Retrieve the name of the security group that the instance uses.
[student@workstation ~(developer1-research)]$ openstack server show research-app1 -f json
{
...output omitted...
"security_groups": [
{
"name": "default"
}
],
...output omitted...
}
1.3. List the rules for the default security group. Ensure that there is one rule that allows traffic for SSH
connections and one rule for ICMP traffic.
[student@workstation ~(developer1-research)]$ openstack security group rule list default -f json
[
...output omitted...
{
"IP Range": "0.0.0.0/0",
"Port Range": "22:22",
"Remote Security Group": null,
"ID": "3488a2cd-bd85-4b6e-b85c-e3cd7552fea6",
"IP Protocol": "tcp"
},
...output omitted...
{
"IP Range": "0.0.0.0/0",
"Port Range": "",
"Remote Security Group": null,
"ID": "f7588545-2d96-44a0-8ab7-46aa7cfbdb44",
"IP Protocol": "icmp"
}
]
1.4. List the networks in the environment.
[student@workstation ~(developer1-research)]$ openstack network list -f json
[
{
"Subnets": "8647161a-ada4-468f-ad64-8b7bb6f97bda",
"ID": "93e91b71-402e-45f6-a006-53a388e053f6",
"Name": "provider-172.25.250"
},
{
"Subnets": "ebdd4578-617c-4301-a748-30b7ca479e88",
"ID": "eed90913-f5f4-4e5e-8096-b59aef66c8d0",
"Name": "research-network1"
}
]
1.6. Ensure that the router research-router1 has an IP address defined as a gateway for the
172.25.250.0/24 network and an interface in the research-network1 network.
[student@workstation ~(developer1-research)]$ neutron router-port-list research-router1 -f json
[
{
"mac_address": "fa:16:3e:28:e8:85",
"fixed_ips": "{\"subnet_id\": \"ebdd4578-617c-4301-a748-30b7ca479e88\",
\"ip_address\": \"192.168.1.S\"}",
"id": "096c6e18-3630-4993-bafa-206e2f71acb6",
"name": ""
},
{
"mac_address": "fa:16:3e:d2:71:19",
"fixed_ips": "{\"subnet_id\": \"8647161a-ada4-468f-ad64-8b7bb6f97bda\",
\"ip_address\": \"172.25.250.R\"}",
"id": "c684682c-8acc-450d-9935-33234e2838a4",
"name": ""
}
]
2. Retrieve the floating IP assigned to the research-app1 instance and run the ping command against the
floating IP assigned to the instance, 172.25.250.P. The command should fail.
[student@workstation ~(developer1-research)]$ openstack server list -f json
[
{
"Status": "ACTIVE",
"Networks": "research-network1=192.168.1.N, 172.25.250.P",
"ID": "2cfdef0a-a664-4d36-b27d-da80b4b8626d",
"Image Name": "rhel7",
"Name": "research-app1"
}
]
[student@workstation ~(developer1-research)]$ ping -c 3 172.25.250.P
PING 172.25.250.P (172.25.250.P) 56(84) bytes of data.
From 172.25.250.254 icmp_seq=1 Destination Host Unreachable
From 172.25.250.254 icmp_seq=2 Destination Host Unreachable
From 172.25.250.254 icmp_seq=3 Destination Host Unreachable
--- 172.25.250.P ping statistics ---
3. Attempt to connect to the instance as the root user at its floating IP. The command should fail.
[student@workstation ~(developer1-research)]$ ssh root@172.25.250.P
ssh: connect to host 172.25.250.P port 22: No route to host
4. Reach the IP address assigned to the router in the provider network, 172.25.250.R.
[student@workstation ~(developer1-research)]$ openstack router show research-router1 -f json
{
"external_gateway_info":
"{\"network_id\":
...output omitted...
\"ip_address\": \"172.25.250.R\"}]}",
...output omitted...
}
[student@workstation ~(developer1-research)]$ ping -c 3 172.25.250.R
PING 172.25.250.R (172.25.250.R) 56(84) bytes of data.
64 bytes from 172.25.250.R: icmp_seq=1 ttl=64 time=0.642 ms
64 bytes from 172.25.250.R: icmp_seq=2 ttl=64 time=0.238 ms
64 bytes from 172.25.250.R: icmp_seq=3 ttl=64 time=0.184 ms
--- 172.25.250.R ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.184/0.354/0.642/0.205 ms
5. Review the namespace implementation on controller0. Use the ping command within the qrouter
namespace to reach the router's private IP.
5.1. Retrieve the UUID of the router research-router1. You will compare this UUID with that of the
qrouter namespace.
[student@workstation ~(developer1-research)]$ openstack router show research-router1 -f json
{
...output omitted...
"id": "8ef58601-1b60-4def-9e43-1935bb708938",
"name": "research-router1"
}
5.2. Open another terminal and use the ssh command to log in to controller0 as the heat-admin user.
Review the namespace implementation. Ensure that the qrouter namespace uses the ID returned by the
previous command.
[student@workstation ~]$ ssh heat-admin@controller0
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns list
qrouter-8ef58601-1b60-4def-9e43-1935bb708938
5.4. Within the qrouter namespace, run the ping command against the private IP of the router,
192.168.1.S.
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qrouter-8ef58601-1b60-4def-9e43-
1935bb708938 ping -c 3 192.168.1.S
PING 192.168.1.S (192.168.1.S) 56(84) bytes of data.
64 bytes from 192.168.1.S: icmp_seq=1 ttl=64 time=0.070 ms
64 bytes from 192.168.1.S: icmp_seq=2 ttl=64 time=0.041 ms
64 bytes from 192.168.1.S: icmp_seq=3 ttl=64 time=0.030 ms
--- 192.168.1.S ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.030/0.047/0.070/0.016 ms
6. From the first terminal, retrieve the private IP of the research-app1 instance. From the second
terminal, run the ping command against the instance's private IP within the qrouter namespace.
6.1. From the first terminal, retrieve the private IP of the research-app1 instance.
[student@workstation ~(developer1-research)]$ openstack server list -f json
[
{
"Status": "ACTIVE",
"Networks": "research-network1=192.168.1.N, 172.25.250.P",
"ID": "2cfdef0a-a664-4d36-b27d-da80b4b8626d",
"Image Name": "rhel7",
"Name": "research-app1"
}
]
6.2. From the second terminal, run the ping command in the qrouter namespace against 192.168.1.N.
The output indicates that the command fails.
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qrouter-8ef58601-1b60-4def-9e43-
1935bb708938 ping -c 3 192.168.1.N
PING 192.168.1.N (192.168.1.N) 56(84) bytes of data.
From 192.168.1.S icmp_seq=1 Destination Host Unreachable
From 192.168.1.S icmp_seq=2 Destination Host Unreachable
From 192.168.1.S icmp_seq=3 Destination Host Unreachable
--- 192.168.1.N ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2000ms
7. The previous output that listed the namespaces indicated that the qdhcp namespace is missing.
Review the namespaces on controller0 to confirm that the namespace is missing.
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns list
qrouter-8ef58601-1b60-4def-9e43-1935bb708938
8. The qdhcp namespace is created for the DHCP agents. List the running processes on controller0. Use
the grep command to filter dnsmasq processes. The output indicates that no dnsmasq process is running
on the server.
[heat-admin@overcloud-controller-0 ~]$ ps axl | grep dnsmasq
0 1000 579973 534047 20 0 112648 960 pipe_w S+ pts/1 0:00 grep --
color=auto dnsmasq
9. From the first terminal, source the credentials of the administrative user, architect1, located at
/home/student/architect1-research-rc. List the Neutron agents to ensure that there is one DHCP agent.
[student@workstation ~(developer1-research)]$ source architect1-research-rc
[student@workstation ~(architect1-research)]$ neutron agent-list -f json
...output omitted...
{
"binary": "neutron-dhcp-agent",
"admin_state_up": true,
"availability_zone": "nova",
"alive": ":-)",
"host": "overcloud-controller-0.localdomain",
"agent_type": "DHCP agent",
"id": "98fe6c9b-3f66-4d14-a88a-bfd7d819ddb7"
},
...output omitted...
10. List the Neutron ports to check whether an IP is assigned to the DHCP agent in the
192.168.1.0/24 network.
[student@workstation ~(architect1-research)]$ openstack port list -f json | grep 192.168.1
"Fixed IP Addresses": "ip_address='192.168.1.S', subnet_id='ebdd4578-617c-4301-
a748-30b7ca479e88'",
"Fixed IP Addresses": "ip_address='192.168.1.N', subnet_id='ebdd4578-617c-4301-
a748-30b7ca479e88'",
The output shows only two ports in the subnet: the router interface and the instance port. Because there
is no port for a DHCP agent, research-subnet1 does not run a DHCP server.
11. Update the subnet to run a DHCP server and confirm the updates in the environment.
11.1. Review the subnet properties. Locate the enable_dhcp property and confirm that it reads False.
[student@workstation ~(architect1-research)]$ openstack subnet show research-subnet1
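If you prefer to check just that one property, the output can be limited to the enable_dhcp column (a sketch; the original step displays the full property list):
[student@workstation ~(architect1-research)]$ openstack subnet show research-subnet1 -c enable_dhcp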
11.2. Run the openstack subnet set command to enable DHCP on the subnet. The command does not
produce any output.
[student@workstation ~(architect1-research)]$ openstack subnet set --dhcp research-subnet1
11.3. Review the updated subnet properties. Locate the enable_dhcp property and confirm that it reads
True.
[student@workstation ~(architect1-research)]$ openstack subnet show research-subnet1
11.4. From the terminal connected to controller0, rerun the ps command. Ensure that a dnsmasq process
is now running.
[heat-admin@overcloud-controller-0 ~]$ ps axl | grep dnsmasq
5 99 649028 1 20 0 15548 892 poll_s S ? 0:00 dnsmasq
--no-hosts \
--no-resolv \
--strict-order \
--except-interface=lo \
...output omitted...
--dhcp-match=set:ipxe,175 \
--bind-interfaces \
--interface=tapdc429585-22 \
--dhcp-range=set:tag0,192.168.1.0,static,86400s \
--dhcp-option-force=option:mtu,1446 \
--dhcp-lease-max=256 \
--conf-file= \
--domain=openstacklocal
0 1000 650642 534047 20 0 112648 960 pipe_w S+ pts/1 0:00 grep --
color=auto dnsmasq
11.5. From the first terminal, rerun the openstack port list command. Ensure that there is a third IP in
the research-subnet1 network.
[student@workstation ~(architect1-research)]$ openstack port list -f json | grep 192.168.1
"Fixed IP Addresses": "ip_address='192.168.1.S',
subnet_id='ebdd4578-617c-4301-a748-30b7ca479e88'",
"Fixed IP Addresses": "ip_address='192.168.1.N',
subnet_id='ebdd4578-617c-4301-a748-30b7ca479e88'"
"Fixed IP Addresses": "ip_address='192.168.1.2',
subnet_id='ebdd4578-617c-4301-a748-30b7ca479e88'",
11.6. From the terminal connected to controller0, list the network namespaces. Ensure that there is a
new namespace called qdhcp.
[heat-admin@overcloud-controller-0 ~]$ ip netns list
qdhcp-eed90913-f5f4-4e5e-8096-b59aef66c8d0
qrouter-8ef58601-1b60-4def-9e43-1935bb708938
11.7. List the interfaces in the qdhcp namespace. Confirm that there is an interface with an IP address of
192.168.1.2.
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qdhcp-eed90913-f5f4-4e5e-8096-
b59aef66c8d0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
61: tap7e9c0f8b-a7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1446 qdisc noqueue
state UNKNOWN qlen 1000
link/ether fa:16:3e:7c:45:e1 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.2/24 brd 192.168.1.255 scope global tap7e9c0f8b-a7
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe7c:45e1/64 scope link
valid_lft forever preferred_lft forever
12. From the first terminal, stop then start the research-app1 instance to reinitialize IP assignment and
cloud-init configuration.
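The stop and start commands for this step are not shown in this excerpt. A minimal sketch, waiting for the instance to reach the SHUTOFF state before starting it again:
[student@workstation ~(architect1-research)]$ openstack server stop research-app1
[student@workstation ~(architect1-research)]$ openstack server show research-app1 -c status
[student@workstation ~(architect1-research)]$ openstack server start research-app1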
13.2. Run the ping command against the floating IP 172.25.250.P until it responds.
[student@workstation ~(architect1-research)]$ ping 172.25.250.P
PING 172.25.250.P (172.25.250.P) 56(84) bytes of data.
...output omitted...
From 172.25.250.P icmp_seq=22 Destination Host Unreachable
From 172.25.250.P icmp_seq=23 Destination Host Unreachable
From 172.25.250.P icmp_seq=24 Destination Host Unreachable
From 172.25.250.P icmp_seq=25 Destination Host Unreachable
64 bytes from 172.25.250.P: icmp_seq=26 ttl=63 time=1.02 ms
64 bytes from 172.25.250.P: icmp_seq=27 ttl=63 time=0.819 ms
64 bytes from 172.25.250.P: icmp_seq=28 ttl=63 time=0.697 ms
...output omitted...
^C
--- 172.25.250.P ping statistics ---
35 packets transmitted, 10 received, +16 errors, 71% packet loss, time 34019ms
rtt min/avg/max/mdev = 4.704/313.475/2025.262/646.005 ms, pipe 4
13.3. Use ssh to connect to the instance. When finished, exit from the instance.
[student@workstation ~(architect1-research)]$ ssh -i developer1-keypair1.pem cloud-
user@172.25.250.P
[cloud-user@research-app1 ~]$ exit
Cleanup
From workstation, run the lab network-troubleshooting cleanup script to clean up the resources created
in this exercise.
[student@workstation ~]$ lab network-troubleshooting cleanup
Lab: Managing and Troubleshooting Virtual Network Infrastructure
In this lab, you will troubleshoot the network connectivity of OpenStack instances.
Outcomes
You should be able to:
• Use Linux tools to review the network configuration of instances.
• Review the network namespaces for a project.
• Restore the network connectivity of OpenStack instances.
Scenario
Cloud users reported issues reaching their instances via their floating IPs. Both ping and ssh connections
time out. You have been tasked with troubleshooting and fixing these issues.
Steps
1. As the operator1 user, list the instances present in the environment. The credentials file for the user is
available at /home/student/operator1-production-rc. Ensure that the instance production-app1 is
running and has an IP in the 192.168.1.0/24 network
2. Attempt to reach the instance via its floating IP by using the ping and ssh commands. Confirm that the
commands time out. The private key for the SSH connection is available at /home/student/operator1-
keypair1.pem.
3. Review the security rules for the security group assigned to the instance. Ensure that there is a rule
that authorizes packets sent by the ping command to pass.
4. As the administrative user, architect1, ensure that the external network provider-172.25.250 is
present. The credentials file for the user is available at /home/student/architect1-production-rc. Review
the network type and the physical network defined for the network. Ensure that the network is a flat
network that uses the datacentre provider network.
5. As the operator1 user, list the routers in the environment. Ensure that production-router1 is present,
has a private network port, and is the gateway for the external network.
6. From the compute node, review the network implementation by listing the Linux bridges and ensure
that the ports are properly defined. Ensure that there is one bridge with two ports in it. The bridge and
the ports should be named after the first 10 characters of the port UUID in the private network for
the instance production-app1.
7. From workstation, use the ssh command to log in to controller0 as the heat-admin user. List the
network namespaces to ensure that there is a namespace for the router and for the internal network
production-network1. Review the UUID of the router and the UUID of the internal network to make sure
they match the UUIDs of the namespaces. List the interfaces in the network namespace for the internal
network. Within the private network namespace, use the ping command to reach the private IP address
of the router. Run the ping command within the qrouter namespace against the IP assigned as a
gateway to the router. From the tenant network namespace, use the ping command to reach the private
IP of the instance.
8. From controller0, review the bridge mappings configuration. Ensure that the provider network named
datacentre is mapped to the br-ex bridge. Review the configuration of the Open vSwitch bridge br-int.
Ensure that there is a patch port for the connection between the integration bridge and the external
bridge. Retrieve the name of the peer port for the patch from the integration bridge to the external
bridge. Make any necessary changes.
9. From workstation use the ping command to reach the IP defined as a gateway for the router and the
floating IP associated to the instance. Use the ssh command to log in to the instance production-app1 as
the cloud-user user. The private key is available at /home/student/operator1-keypair1.pem.
Evaluation
From workstation, run the lab network-review grade command to confirm the success of this exercise.
Correct any reported failures and rerun the command until successful.
[student@workstation ~]$ lab network-review grade
Cleanup
From workstation, run the lab network-review cleanup command to clean up this exercise.
[student@workstation ~]$ lab network-review cleanup
Solution
In this lab, you will troubleshoot the network connectivity of OpenStack instances.
Outcomes
You should be able to:
• Use Linux tools to review the network configuration of instances.
• Review the network namespaces for a project.
• Restore the network connectivity of OpenStack instances.
Scenario
Cloud users reported issues reaching their instances via their floating IPs. Both ping and ssh connections
time out. You have been tasked with troubleshooting and fixing these issues.
Before you begin
Log in to workstation as student using student as the password.
On workstation, run the lab network-review setup command. This script creates the production project
for the operator1 user and creates the /home/student/operator1-production-rc credentials file. The SSH
private key is available at /home/student/operator1-keypair1.pem. The script deploys the instance
production-app1 in the production project with a floating IP in the provider-172.25.250 network.
[student@workstation ~]$ lab network-review setup
Steps
1. As the operator1 user, list the instances present in the environment. The credentials file for the user is
available at /home/student/operator1-production-rc. Ensure that the
instance production-app1 is running and has an IP in the 192.168.1.0/24 network
1.1. From workstation, source the operator1-production-rc file and list the running instances.
[student@workstation ~]$ source ~/operator1-production-rc
[student@workstation ~(operator1-production)]$ openstack server list -f json
[
{
"Status": "ACTIVE",
"Networks": "production-network1=192.168.1.N, 172.25.250.P",
"ID": "ab497ff3-0335-4b17-bd3d-5aa2a4497bf0",
"Image Name": "rhel7",
"Name": "production-app1"
}
]
2. Attempt to reach the instance via its floating IP by using the ping and ssh commands. Confirm that the
commands time out. The private key for the SSH connection is available at
/home/student/operator1-keypair1.pem.
2.1. Run the ping command against the floating IP 172.25.250.P. The command should fail.
[student@workstation ~(operator1-production)]$ ping -c 3 172.25.250.P
PING 172.25.250.P (172.25.250.P) 56(84) bytes of data.
From 172.25.250.254 icmp_seq=1 Destination Host Unreachable
From 172.25.250.254 icmp_seq=2 Destination Host Unreachable
From 172.25.250.254 icmp_seq=3 Destination Host Unreachable
--- 172.25.250.P ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 1999ms
2.2. Attempt to connect to the instance using the ssh command. The command should fail.
[student@workstation ~(operator1-production)]$ ssh -i ~/operator1-keypair1.pem cloud-
user@172.25.250.P
ssh: connect to host 172.25.250.P port 22: No route to host
3. Review the security rules for the security group assigned to the instance. Ensure that there is a rule
that authorizes packets sent by the ping command to pass.
3.1. Retrieve the name of the security group that the instance production-app1 uses.
[student@workstation ~(operator1-production)]$ openstack server show production-app1
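To limit the output to the security group field, the command can also be run with a column filter (a sketch; the original step shows the full listing with the output elided):
[student@workstation ~(operator1-production)]$ openstack server show production-app1 -c security_groups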
3.2. List the rules in the default security group. Ensure that there is a rule for ICMP traffic.
[student@workstation ~(operator1-production)]$ openstack security group rule list default -f json
[
{
"IP Range": "0.0.0.0/0",
"Port Range": "",
"Remote Security Group": null,
"ID": "68baac6e-7981-4326-a054-e8014565be6e",
"IP Protocol": "icmp"
},
...output omitted...
The output indicates that there is a rule for the ICMP traffic. This indicates that the environment
requires further troubleshooting.
4. As the administrative user, architect1, ensure that the external network provider-172.25.250 is
present. The credentials file for the user is available at /home/student/architect1-production-rc. Review
the network type and the physical network defined for the network. Ensure that the network is a flat
network that uses the datacentre provider network.
4.1. Source the architect1 credentials. List the networks. Confirm that the provider-172.25.250 network
is present.
[student@workstation ~(operator1-production)]$ source ~/architect1-production-rc
[student@workstation ~(architect1-production)]$ openstack network list -f json
[
{
"Subnets": "2b5110fd-213f-45e6-8761-2e4a2bcb1457",
"ID": "905b4d65-c20f-4cac-88af-2b8e0d2cf47e",
"Name": "provider-172.25.250"
},
{
"Subnets": "a4c40acb-f532-4b99-b8e5-d1df14aa50cf",
"ID": "712a28a3-0278-4b4e-94f6-388405c42595",
"Name": "production-network1"
}
]
4.2. Review the provider-172.25.250 network details, including the network type and the physical
network defined.
[student@workstation ~(architect1-production)]$ openstack network show provider-172.25.250
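The fields to inspect are provider:network_type and provider:physical_network. A quick way to pick them out of the table output (a sketch):
[student@workstation ~(architect1-production)]$ openstack network show provider-172.25.250 | grep provider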
5. As the operator1 user, list the routers in the environment. Ensure that production-router1 is present,
has a private network port, and is the gateway for the external network.
5.1. Source the operator1-production-rc credentials file and list the routers in the environment.
[student@workstation ~(architect1-production)]$ source ~/operator1-production-rc
[student@workstation ~(operator1-production)]$ openstack router list -f json
[
{
"Status": "ACTIVE",
"Name": "production-router1",
"Distributed": "",
"Project": "91f3ed0e78ad476495a6ad94fbd7d2c1",
"State": "UP",
"HA": "",
"ID": "e64e7ed3-8c63-49ab-8700-0206d1b0f954"
}
]
5.2. Display the router details. Confirm that the router is the gateway for the external network provider-
172.25.250.
[student@workstation ~(operator1-production)]$ openstack router show production-router1
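To check just the gateway information, the output can be limited to the external_gateway_info field (a sketch):
[student@workstation ~(operator1-production)]$ openstack router show production-router1 -c external_gateway_info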
5.3. Use ping to test the IP defined as the router gateway interface. Observe that the command times out.
[student@workstation ~(operator1-production)]$ ping -W 5 -c 3 172.25.250.S
PING 172.25.250.S (172.25.250.S) 56(84) bytes of data.
--- 172.25.250.S ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 1999ms
The ping test was unable to reach the external gateway interface of the router from an external host,
but the root cause is still unknown, so continue troubleshooting.
6. From the compute node, review the network implementation by listing the Linux bridges and ensure
that the ports are properly defined. Ensure that there is one bridge with two ports in it. The bridge and
the ports should be named after the first 10 characters of the port UUID in the private network for
the instance production-app1. From workstation, use ssh to connect to compute0 as the heat-admin
user. Review the configuration of the Open vSwitch integration bridge. Ensure that the vEth pair, which
has a port associated with the Linux bridge, has another port in the integration bridge. Exit from the
virtual machine.
6.1. From the first terminal, list the network ports. Identify the port whose fixed IP address matches the
private IP of the instance. In this example, the port UUID is 04b3f285-7183-4673-836b-317d80c27904;
its first 10 characters appear in the bridge and port names shown in the next step.
[student@workstation ~(operator1-production)]$ openstack port list -f json
[
{
"Fixed IP Addresses": "ip_address='192.168.1.N',
subnet_id='a4c40acb-f532-4b99-b8e5-d1df14aa50cf'",
"ID": "04b3f285-7183-4673-836b-317d80c27904",
"MAC Address": "fa:16:3e:c8:cb:3d",
"Name": ""
},
...output omitted...
6.2. Use the ssh command to log in to compute0 as the heat-admin user. Use the brctl command to list
the Linux bridges. Ensure that there is a qbr bridge with two ports in it. The bridge and the ports should
be named after the first 10 characters of the port of the instance in the private network.
[student@workstation ~] $ ssh heat-admin@compute0
[heat-admin@overcloud-compute-0 ~]$ brctl show
bridge name bridge id STP enabled interfaces
qbr04b3f285-71 8000.9edbfc39d5a5 no qvb04b3f285-71
tap04b3f285-71
6.3. As the root user from the compute0 virtual machine, list the network ports in the integration bridge,
br-int. Ensure that the port of the vEth pair qvo is present in the
integration bridge.
[heat-admin@overcloud-compute-0 ~]$ sudo ovs-vsctl list-ifaces br-int
int-br-ex
patch-tun
qvo04b3f285-71
The qvo port exists as expected, so continue troubleshooting.
7. From workstation, use the ssh command to log in to controller0 as the heat-admin user. List the
network namespaces to ensure that there is a namespace for the router and
for the internal network production-network1. Review the UUID of the router and the UUID of the
internal network to make sure they match the UUIDs of the namespaces. List the interfaces in the
network namespace for the internal network. Within the private network namespace, use the ping
command to reach the private IP address of the router.
Run the ping command within the qrouter namespace against the IP assigned as a gateway to the
router. From the tenant network namespace, use the ping command to reach the private IP of the
instance.
7.1. Use the ssh command to log in to controller0 as the heat-admin user. List the network namespaces.
[student@workstation ~] $ ssh heat-admin@controller0
[heat-admin@overcloud-controller-0 ~]$ ip netns list
qrouter-e64e7ed3-8c63-49ab-8700-0206d1b0f954
qdhcp-712a28a3-0278-4b4e-94f6-388405c42595
7.2. From the previous terminal, retrieve the UUID of the router production-router1. Ensure that the
output matches the qrouter namespace.
[student@workstation ~(operator1-production)]$ openstack router show production-router1
7.3. Retrieve the UUID of the private network, production-network1. Ensure that the output matches
the qdhcp namespace.
[student@workstation ~(operator1-production)]$ openstack network show production-network1
7.4. Use the neutron command to retrieve the interfaces of the router production-router1.
[student@workstation ~(operator1-production)]$ neutron router-port-list production-router1
{"subnet_id": "a4c40acb-f532-4b99-b8e5-d1df14aa50cf",
"ip_address": "192.168.1.R"} |
{"subnet_id": "2b5110fd-213f-45e6-8761-2e4a2bcb1457",
"ip_address": "172.25.250.S"} |
7.5. From the terminal connected to the controller, use the ping command within the qdhcp namespace
to reach the private IP of the router.
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qdhcp-712a28a3-0278-4b4e-94f6-
388405c42595 ping -c 3 192.168.1.R
PING 192.168.1.R (192.168.1.R) 56(84) bytes of data.
64 bytes from 192.168.1.R: icmp_seq=1 ttl=64 time=0.107 ms
64 bytes from 192.168.1.R: icmp_seq=2 ttl=64 time=0.041 ms
64 bytes from 192.168.1.R: icmp_seq=3 ttl=64 time=0.639 ms
--- 192.168.1.R ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.041/0.262/0.639/0.268 ms
7.6. Within the router namespace, qrouter, run the ping command against the IP defined as a gateway in
the 172.25.250.0/24 network.
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qrouter-e64e7ed3-8c63-49ab-8700-
0206d1b0f954 ping -c 3 172.25.250.S
PING 172.25.250.S (172.25.250.S) 56(84) bytes of data.
64 bytes from 172.25.250.S: icmp_seq=1 ttl=64 time=0.091 ms
64 bytes from 172.25.250.S: icmp_seq=2 ttl=64 time=0.037 ms
64 bytes from 172.25.250.S: icmp_seq=3 ttl=64 time=0.597 ms
--- 172.25.250.S ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.037/0.241/0.597/0.252 ms
7.8. Use the ping command in the qdhcp namespace to reach the private IP of the instance production-
app1.
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qdhcp-712a28a3-0278-4b4e-94f6-
388405c42595 ping -c 3 192.168.1.N
PING 192.168.1.N (192.168.1.N) 56(84) bytes of data.
64 bytes from 192.168.1.N: icmp_seq=1 ttl=64 time=0.107 ms
64 bytes from 192.168.1.N: icmp_seq=2 ttl=64 time=0.041 ms
64 bytes from 192.168.1.N: icmp_seq=3 ttl=64 time=0.639 ms
--- 192.168.1.N ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.041/0.262/0.639/0.268 ms
8. From controller0, review the bridge mappings configuration. Ensure that the provider network named
datacentre is mapped to the br-ex bridge. Review the configuration of the Open vSwitch bridge br-int.
Ensure that there is a patch port for the connection between the integration bridge and the external
bridge. Retrieve the name of the peer port for the patch from the integration bridge to the external
bridge. Make any necessary changes.
8.1. From controller0, as the root user, review the bridge mappings configuration. Bridge mappings for
Open vSwitch are defined in the /etc/neutron/plugins/ml2/openvswitch_agent.ini configuration file.
Ensure that the provider network name, datacentre, is mapped to the br-ex bridge.
[heat-admin@overcloud-controller-0 ~]$ cd /etc/neutron/plugins/ml2/
[heat-admin@overcloud-controller-0 ml2]$ sudo grep bridge_mappings openvswitch_agent.ini
#bridge_mappings =
bridge_mappings =datacentre:br-ex
8.2. Review the ports in the integration bridge br-int. Ensure that there is a patch port in the integration
bridge. The output lists phy-br-ex as the peer for the patch port.
[heat-admin@overcloud-controller-0 ml2]$ sudo ovs-vsctl show
...output omitted...
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "tapfabe9e7e-0b"
tag: 1
Interface "tapfabe9e7e-0b"
type: internal
Port "qg-bda4e07f-64"
tag: 3
Interface "qg-bda4e07f-64"
type: internal
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port "qr-30fc535c-85"
tag: 1
Interface "qr-30fc535c-85"
type: internal
Port br-int
Interface br-int
type: internal
8.3. List the ports in the external bridge, br-ex. The output indicates that the port phy-br-ex is absent
from the bridge.
[heat-admin@overcloud-controller-0 ml2]$ sudo ovs-vsctl show
...output omitted...
Bridge br-ex
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "eth2"
Interface "eth2"
Port br-ex
Interface br-ex
type: internal
8.4. Patch ports are managed by the neutron-openvswitch-agent, which uses the bridge mappings for
Open vSwitch bridges. Restart the neutron-openvswitch-agent service.
[heat-admin@overcloud-controller-0 ml2]$ sudo systemctl restart neutron-openvswitch-agent.service
8.5. Wait a minute then list the ports in the external bridge, br-ex. Ensure that the patch port phy-br-ex
is present in the external bridge.
[heat-admin@overcloud-controller-0 ml2]$ sudo ovs-vsctl show
...output omitted...
Bridge br-ex
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "eth2"
Interface "eth2"
Port br-ex
Interface br-ex
type: internal
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
9. From workstation use the ping command to reach the IP defined as a gateway for the router and the
floating IP associated to the instance. Use the ssh command to log in to the
instance production-app1 as the cloud-user user. The private key is available at
/home/student/operator1-keypair1.pem.
9.1. Use the ping command to reach the IP of the router defined as a gateway.
[student@workstation ~(operator1-production)]$ ping -W 5 -c 3 172.25.250.S
PING 172.25.250.S (172.25.250.S) 56(84) bytes of data.
64 bytes from 172.25.250.S: icmp_seq=1 ttl=64 time=0.658 ms
64 bytes from 172.25.250.S: icmp_seq=2 ttl=64 time=0.273 ms
64 bytes from 172.25.250.S: icmp_seq=3 ttl=64 time=0.297 ms
--- 172.25.250.S ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.273/0.409/0.658/0.176 ms
9.3. Use the ping command to reach the floating IP allocated to the instance.
[student@workstation ~(operator1-production)]$ ping -W 5 -c 3 172.25.250.P
PING 172.25.250.P (172.25.250.P) 56(84) bytes of data.
64 bytes from 172.25.250.P: icmp_seq=1 ttl=63 time=0.658 ms
64 bytes from 172.25.250.P: icmp_seq=2 ttl=63 time=0.616 ms
64 bytes from 172.25.250.P: icmp_seq=3 ttl=63 time=0.690 ms
--- 172.25.250.P ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.616/0.654/0.690/0.042 ms
9.4. Use the ssh command to log in to the instance as the cloud-user user. The private key is available at
/home/student/operator1-keypair1.pem. Exit from the instance.
[student@workstation ~(operator1-production)]$ ssh -i ~/operator1-keypair1.pem cloud-user@172.25.250.P
[cloud-user@production-app1 ~]$ exit
[student@workstation ~(operator1-production)]$
Evaluation
From workstation, run the lab network-review grade command to confirm the success of this exercise.
Correct any reported failures and rerun the command until successful.
[student@workstation ~]$ lab network-review grade
Cleanup
From workstation, run the lab network-review cleanup command to clean up this exercise.
[student@workstation ~]$ lab network-review cleanup
Steps
1. Log in to director and review the environment.
1.1. Use the ssh command to connect to director. Review the environment variables for the stack user.
OpenStack environment variables all begin with OS_.
[student@workstation ~]$ ssh stack@director
[stack@director ~]$ env | grep "OS_"
OS_IMAGE_API_VERSION=1
OS_PASSWORD=9ee0904a8dae300a37c4857222b10fb10a2b6db5
OS_AUTH_URL=https://172.25.249.201:13000/v2.0
OS_USERNAME=admin
OS_TENANT_NAME=admin
OS_NO_CACHE=True
OS_CLOUDNAME=undercloud
1.2. View the environment file for the stack user. This file is automatically sourced when the stack user
logs in. The OS_AUTH_URL variable in this file defines the Identity Service endpoint of the undercloud.
[stack@director ~]$ grep "^OS_" stackrc
OS_PASSWORD=$(sudo hiera admin_password)
OS_AUTH_URL=https://172.25.249.201:13000/v2.0
OS_USERNAME=admin
OS_TENANT_NAME=admin
OS_BAREMETAL_API_VERSION=1.15
OS_NO_CACHE=True
OS_CLOUDNAME=undercloud
2.1. Inspect the /home/stack/undercloud.conf configuration file. Locate the variables that define the
provisioning network, such as undercloud_admin_vip.
[DEFAULT]
local_ip = 172.25.249.200/24
undercloud_public_vip = 172.25.249.201
undercloud_admin_vip = 172.25.249.202
local_interface = eth0
masquerade_network = 172.25.249.0/24
dhcp_start = 172.25.249.51
dhcp_end = 172.25.249.59
network_cidr = 172.25.249.0/24
network_gateway = 172.25.249.200
inspection_iprange = 172.25.249.150,172.25.249.180
...output omitted...
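One way to view only the active settings in the file (a sketch; any pager or editor works equally well):
[stack@director ~]$ grep -v '^#' ~/undercloud.conf | grep -v '^$'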
2.2. Compare the IP addresses in the configuration file with the IP addresses assigned to the virtual
machine. Use the ip command to list the devices.
[stack@director ~]$ ip addr | grep -E 'eth1|br-ctlplane'
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
inet 172.25.250.200/24 brd 172.25.250.255 scope global eth1
7: br-ctlplane: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
inet 172.25.249.200/24 brd 172.25.249.255 scope global br-ctlplane
inet 172.25.249.202/32 scope global br-ctlplane
inet 172.25.249.201/32 scope global br-ctlplane
2.3. List the networks configured in the undercloud. If an overcloud is currently deployed, then
approximately six networks are displayed. If the overcloud has been deleted or
has not been deployed, only one network is displayed. Look for the provisioning network named
ctlplane. This display includes the subnets configured within the networks listed. You will use the ID for
the provisioning network's subnet in the next step.
[stack@director ~]$ openstack network list --long -c Name -c Subnets -c "Network Type"
2.4. Display the subnet for the ctlplane provisioning network using the subnet ID obtained in the
previous step. The allocation_pools field is the DHCP scope, and the dns_nameservers and gateway_ip
fields are DHCP options, for an overcloud node's provisioning network interface.
[stack@director ~]$ openstack subnet show 64f6a0a6-dc27-4c92-a81a-b294d1bb22a4
3. List the services installed for the undercloud, and their passwords.
3.1. List the undercloud services running on director.
[stack@director ~]$ openstack service list -c Name -c Type
3.2. Review the admin and other component service passwords located in the /home/stack/undercloud-
passwords.conf configuration file. You will use various service passwords in later exercises.
[auth]
undercloud_db_password=eb35dd789280eb196dcbdd1e8e75c1d9f40390f0
undercloud_admin_token=529d7b664276f35d6c51a680e44fd59dfa310327
undercloud_admin_password=96c087815748c87090a92472c61e93f3b0dcd737
undercloud_glance_password=6abcec10454bfeec6948518dd3de6885977f6b65
undercloud_heat_encryption_key=45152043171b30610cb490bb40bff303
undercloud_heat_password=a0b7070cd8d83e59633092f76a6e0507f85916ed
undercloud_neutron_password=3a19afd3302615263c43ca22704625db3aa71e3f
undercloud_nova_password=d59c86b9f2359d6e4e19d59bd5c60a0cdf429834
undercloud_ironic_password=260f5ab5bd24adc54597ea2b6ea94fa6c5aae326
...output omitted...
4. View the configuration used to prepare for deploying the overcloud and the resulting overcloud
nodes.
4.1. View the /home/stack/instackenv-initial.json configuration file. The file was used to define each
overcloud node, including power management access settings.
{
"nodes": [
{
"name": "controller0",
"pm_user": "admin",
"arch": "x86_64",
"mac": [ "52:54:00:00:f9:01" ],
"cpu": "2",
"memory": "8192",
"disk": "40",
"pm_addr": "172.25.249.101",
"pm_type": "pxe_ipmitool",
"pm_password": "password"
...output omitted...
4.2. List the provisioned nodes in the current overcloud environment. This command lists the nodes that
were created using the configuration file shown in the previous step.
[stack@director ~]$ openstack baremetal node list -c Name -c 'Power State' -c 'Provisioning State' -c
'Maintenance'
4.3. List the servers in the environment. Review the status and the IP address of the nodes. This
command lists the overcloud servers built on the bare-metal nodes defined in the previous step. The IP
addresses assigned to the nodes are reachable from the director virtual machine.
[stack@director ~]$ openstack server list -c Name -c Status -c Networks
5. Using the controller0 node and the control role as an example, review the settings that define how a
node is selected to be built for a server role.
5.1. List the flavors created for each server role in the environment. These flavors were created to define
the sizing for each deployment server role. It is recommended practice that flavors are named for the
roles for which they are used. However, properties set on a flavor, not the flavor's name, determine its
use.
[stack@director ~]$ openstack flavor list -c Name -c RAM -c Disk -c Ephemeral -c VCPUs
5.2. Review the control flavor's properties by running the openstack flavor show command. The
capabilities settings include the profile='control' tag. When this flavor is specified, it will only work with
nodes that match these requested capabilities, including the profile='control' tag.
[stack@director ~]$ openstack flavor show control
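To display only the capabilities tags, the output can be limited to the properties column (a sketch; the original step shows the full flavor listing with the output elided):
[stack@director ~]$ openstack flavor show control -c properties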
5.3. Review the controller0 node's properties field. The capabilities settings include the same
profile:control tag as defined on the control flavor. When a flavor is specified during deployment, only a
node that matches a flavor's requested capabilities is eligible to be selected for deployment.
[stack@director ~]$ openstack baremetal node show controller0
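Similarly, the node's profile tag can be checked without scrolling through the full listing (a sketch; grep simply filters the line that contains the profile capability):
[stack@director ~]$ openstack baremetal node show controller0 -f json | grep profile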
6. Review the template and environment files that were used to define the deployment configuration.
6.1. Locate the environment files used for the overcloud deployment.
[stack@director ~]$ ls ~/templates/cl210-environment/
00-node-info.yaml 32-network-environment.yaml 50-pre-config.yaml
20-storage-environment.yaml 34-ips-from-pool-all.yaml 60-post-config.yaml
30-network-isolation.yaml 40-compute-extraconfig.yaml
6.2. Locate the configuration files used for the overcloud deployment.
[stack@director ~]$ ls ~/templates/cl210-configuration/single-nic-vlans/
ceph-storage.yaml compute.yaml controller.yaml
6.3. Review the /home/stack/templates/cl210-environment/30-network-isolation.yaml environment file
that defines the networks and VLANs for each
server. For example, this partial output lists the networks (port attachments) to be
configured on a node assigned the controller role.
...output omitted...
# Port assignments for the controller role
OS::TripleO::Controller::Ports::ExternalPort: ../network/ports/external.yaml
OS::TripleO::Controller::Ports::InternalApiPort: ../network/ports/internal...
OS::TripleO::Controller::Ports::StoragePort: ../network/ports/storage.yaml
OS::TripleO::Controller::Ports::StorageMgmtPort: ../network/ports/storage_...
OS::TripleO::Controller::Ports::TenantPort: ../network/ports/tenant.yaml
...output omitted...
6.4. Review the /home/stack/templates/cl210-environment/32-network-environment.yaml environment
file that defines the overall network configuration for the overcloud. For example, this partial output
lists the IP addressing used for the Internal API VLAN.
...output omitted...
# Internal API - used for private OpenStack services traffic
InternalApiNetCidr: '172.24.1.0/24'
InternalApiAllocationPools: [{'start': '172.24.1.60','end': '172.24.1.99'}]
InternalApiNetworkVlanID: 10
InternalApiVirtualFixedIPs: [{'ip_address':'172.24.1.50'}]
RedisVirtualFixedIPs: [{'ip_address':'172.24.1.51'}]
...output omitted...
6.5. View the /home/stack/templates/cl210-configuration/single-nic-vlans/
controller.yaml configuration file that defines the network interfaces for the
controller0 node. For example, this partial output shows the Internal API interface,
using variables seen previously in the 32-network-environment.yaml file.
...output omitted...
type: vlan
# mtu: 9000
vlan_id: {get_param: InternalApiNetworkVlanID}
addresses:
-
ip_netmask: {get_param: InternalApiIpSubnet}
...output omitted...
7.2. Source the overcloudrc authentication environment file. The OS_AUTH_URL variable in this file
defines the Identity Service endpoint of the overcloud.
[stack@director ~]$ source overcloudrc
[stack@director ~]$ env | grep "OS_"
OS_PASSWORD=y27kCBdDrqkkRHuzm72DTn3dC
OS_AUTH_URL=http://172.25.250.50:5000/v2.0
OS_USERNAME=admin
OS_TENANT_NAME=admin
OS_NO_CACHE=True
OS_CLOUDNAME=overcloud
7.4. Review the general overcloud configuration. This listing contains default settings, formats, and core
component version numbers. The network field is currently empty because no networks have been
created yet in this new overcloud.
[stack@director ~]$ openstack configuration show
1. Use SSH to connect to director as the user stack and source the stackrc credentials file.
1.1. From workstation, use SSH to connect to director as the user stack and source the stackrc
credentials file.
[student@workstation ~]$ ssh stack@director
[stack@director ~]$ source stackrc
2.3. Import instackenv-onenode.json into the Bare Metal service using the command openstack
baremetal import, and ensure that the node has been properly imported.
[stack@director ~]$ openstack baremetal import --json /home/stack/instackenv-onenode.json
Started Mistral Workflow. Execution ID: 8976a32a-6125-4c65-95f1-2b97928f6777
Successfully registered node UUID b32d3987-9128-44b7-82a5-5798f4c2a96c
Started Mistral Workflow. Execution ID: 63780fb7-bff7-43e6-bb2a-5c0149bc9acc
Successfully set all nodes to available
[stack@director ~]$ openstack baremetal node list -c Name -c 'Power State' -c 'Provisioning State' -c
Maintenance
2.4. Prior to starting introspection, set the provisioning state for compute1 to manageable.
[stack@director ~]$ openstack baremetal node manage compute1
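Step 3, initiating introspection of compute1, is not shown in this excerpt. The following is a minimal sketch, assuming the standard Bare Metal introspection client commands available on director in this release (verify the exact subcommands in your environment).
[stack@director ~]$ openstack baremetal introspection start compute1
[stack@director ~]$ openstack baremetal introspection status compute1
[stack@director ~]$ openstack baremetal node provide compute1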
4.1. Update the node profile for compute1 to assign it the compute profile.
[stack@director ~]$ openstack baremetal node set compute1 --property
"capabilities=profile:compute,boot_option:local"
6.1. Deploy the overcloud to scale out the compute role by adding one more node. The deployment may
take 40 minutes or more to complete.
[stack@director ~]$ openstack overcloud deploy \
--templates ~/templates \
--environment-directory ~/templates/cl210-environment
Removing the current plan files
Uploading new plan files
Started Mistral Workflow. Execution ID: 6de24270-c3ed-4c52-8aac-820f3e1795fe
Plan updated
Deploying templates in the directory /tmp/tripleoclient-WnZ2aA/tripleo-heat-templates
Started Mistral Workflow. Execution ID: 50f42c4c-d310-409d-8d58-e11f993699cb
...output omitted...
Cleanup
From workstation, run the lab resilience-scaling-nodes cleanup command to clean up this exercise.
[student@workstation ~]$ lab resilience-scaling-nodes cleanup
After the add-compute task has completed successfully, continue with the setup task in the following
paragraph. Start with the setup task if you have two functioning compute nodes, either from having
completed the previous overcloud scaling guided exercise, or by completing the extra add-compute task
described above. On workstation, run the lab resilience-block-storage setup command. This command
verifies the OpenStack environment and creates the project resources used in this exercise.
[student@workstation ~]$ lab resilience-block-storage setup
Steps
1. Configure compute0 to use block-based migration. Later in this exercise, you will repeat these steps
on compute1.
1.1. Log into compute0 as heat-admin and switch to the root user.
[student@workstation ~]$ ssh heat-admin@compute0
[heat-admin@overcloud-compute-0 ~]$ sudo -i
[root@overcloud-compute-0 ~]#
1.2. Configure iptables for live migration.
[root@overcloud-compute-0 ~]# iptables -v -I INPUT 1 -p tcp --dport 16509 -j ACCEPT
ACCEPT tcp opt -- in * out * 0.0.0.0/0 -> 0.0.0.0/0 tcp dpt:16509
[root@overcloud-compute-0 ~]# iptables -v -I INPUT -p tcp --dport 49152:49261 -j ACCEPT
ACCEPT tcp opt -- in * out * 0.0.0.0/0 -> 0.0.0.0/0 tcp dpts:49152:49261
[root@overcloud-compute-0 ~]# service iptables save
1.3. Configure user, group, and vnc_listen in /etc/libvirt/qemu.conf. Include the following lines at the
bottom of the file.
user="root"
group="root"
vnc_listen="0.0.0.0"
1.4. The classroom overcloud deployment uses Ceph as shared storage by default. Demonstrating block-
based migration requires disabling shared storage for the Compute service. Enable the compute0 node
to store virtual disk images, associated with running instances, locally under /var/lib/nova/instances.
Edit the /etc/nova/nova.conf file to set the images_type variable to default.
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf libvirt images_type default
2.1. Log into compute1 as heat-admin and switch to the root user.
[student@workstation ~]$ ssh heat-admin@compute1
[heat-admin@overcloud-compute-1 ~]$ sudo -i
[root@overcloud-compute-1 ~]#
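Step 2.2, the firewall rules for live migration on compute1, is not shown in this excerpt. Presumably it applies the same rules configured on compute0 in step 1.2; a sketch:
[root@overcloud-compute-1 ~]# iptables -v -I INPUT 1 -p tcp --dport 16509 -j ACCEPT
[root@overcloud-compute-1 ~]# iptables -v -I INPUT -p tcp --dport 49152:49261 -j ACCEPT
[root@overcloud-compute-1 ~]# service iptables save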
2.3. Configure user, group, and vnc_listen in /etc/libvirt/qemu.conf. Include the following lines at the
bottom of the file.
user="root"
group="root"
vnc_listen="0.0.0.0"
2.4. The classroom overcloud deployment uses Ceph as shared storage by default. Demonstrating block-
based migration requires disabling shared storage for the Compute service. Enable the compute1 node
to store virtual disk images, associated with running instances, locally under /var/lib/nova/instances.
Edit the /etc/nova/nova.conf file to set the images_type variable to default.
[root@overcloud-compute-1 ~]# openstack-config --set /etc/nova/nova.conf \
libvirt images_type default
3.1. Log into controller0 as heat-admin and switch to the root user.
[student@workstation ~]$ ssh heat-admin@controller0
[heat-admin@overcloud-controller-0 ~]$ sudo -i
[root@overcloud-controller-0 ~]#
5. List the available floating IP addresses, then allocate one to the finance-web1 instance.
5.1. List the floating IPs. An available one has the Port attribute set to None.
[student@workstation ~(developer1-finance)]$ openstack floating ip list -c "Floating IP Address" -c Port
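The allocation step itself (5.2) is not shown here. A minimal sketch, with 172.25.250.P standing in for whichever address the previous command listed as available:
[student@workstation ~(developer1-finance)]$ openstack server add floating ip finance-web1 172.25.250.P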
6.1. To perform live migration, the user developer1 must have the admin role assigned for
the project finance. Assign the admin role to developer1 for the project finance.
The developer1 user may already have been assigned the admin role.
[student@workstation ~(developer1-finance)]$ source ~/admin-rc
[student@workstation ~(admin-admin)]$ openstack role add --user developer1 --project finance admin
[student@workstation ~(admin-admin)]$ source ~/developer1-finance-rc
6.3. Prior to migration, ensure the destination compute node has sufficient resources to host the
instance. In this example, the current server instance location node is overcloud-compute-1.localdomain,
and the destination to check is overcloud-compute-0. Modify the command to reflect your actual source
and destination compute nodes. Estimate whether the total minus the amount used now is sufficient.
[student@workstation ~(developer1-finance)]$ openstack host show overcloud-compute-0.localdomain
-f json
[
{
"Project": "(total)",
"Disk GB": 39,
"Host": "overcloud-compute-0.localdomain",
"CPU": 2,
"Memory MB": 6143
},
{
"Project": "(used_now)",
"Disk GB": 0,
"Host": "overcloud-compute-0.localdomain",
"CPU": 0,
"Memory MB": 2048
},
{
"Project": "(used_max)",
"Disk GB": 0,
"Host": "overcloud-compute-0.localdomain",
"CPU": 0,
"Memory MB": 0
}
6.4. Migrate the instance finance-web1 to a new compute node. In this example, the instance is
migrated from overcloud-compute-1 to overcloud-compute-0. Your scenario may require migrating in
the reverse direction.
[student@workstation ~(developer1-finance)]$ openstack server migrate \
--block-migration \
--live overcloud-compute-0.localdomain \
--wait finance-web1
Complete
7. Use the command openstack server show to verify that the migration of finance-web1 using block
storage migration was successful. The compute node displayed should be the destination node.
[student@workstation ~(developer1-finance)]$ openstack server show finance-web1 -f json | grep
compute
"OS-EXT-SRV-ATTR:host": "overcloud-compute-0.localdomain",
"OS-EXT-SRV-ATTR:hypervisor_hostname": "overcloud-compute-0.localdomain",
Cleanup
From workstation, run the lab resilience-block-storage cleanup command to clean up this exercise.
[student@workstation ~]$ lab resilience-block-storage cleanup
Important
As described above, only run this command if you still need to install a second compute node. If you
already have two functioning compute nodes, skip this task and continue with the setup task.
[student@workstation ~]$ lab resilience-shared-storage add-compute
After the add-compute task has completed successfully, continue with the setup task in the following
paragraph.
Start with the setup task if you have two functioning compute nodes, either from having completed the
previous overcloud scaling guided exercise, or by completing the extra add-compute task described
above. On workstation, run the lab resilience-shared-storage setup command. This command verifies
the OpenStack environment and creates the project resources used in this exercise.
[student@workstation ~]$ lab resilience-shared-storage setup
Steps
1. Configure controller0 for shared storage.
1.1. Log into controller0 as heat-admin and switch to the root user.
[student@workstation ~]$ ssh heat-admin@controller0
[heat-admin@overcloud-controller-0 ~]$ sudo -i
[root@overcloud-controller-0 ~]#
1.4. Configure /etc/exports to export /var/lib/nova/instances via NFS to compute0 and compute1. Add
the following lines to the bottom of the file.
/var/lib/nova/instances 172.25.250.2(rw,sync,fsid=0,no_root_squash)
/var/lib/nova/instances 172.25.250.12(rw,sync,fsid=0,no_root_squash)
1.5. Enable and start the NFS service.
[root@overcloud-controller-0 ~]# systemctl enable nfs --now
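To confirm that the export is active before moving on, a quick check (a sketch, assuming the standard NFS utilities are installed on the controller):
[root@overcloud-controller-0 ~]# exportfs -v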
2. Configure compute0 to use shared storage. Later in this exercise, you will repeat these steps on
compute1.
2.1. Log into compute0 as heat-admin and switch to the root user.
[student@workstation ~]$ ssh heat-admin@compute0
[heat-admin@overcloud-compute-0 ~]$ sudo -i
[root@overcloud-compute-0 ~]#
2.2. Configure /etc/fstab to mount the directory /var/lib/nova/instances, exported from controller0. Add
the following line to the bottom of the file. Confirm that the entry is on a single line in the file; the
two-line display here in the book is due to insufficient width.
172.25.250.1:/ /var/lib/nova/instances nfs4
context="system_u:object_r:nova_var_lib_t:s0" 0 0
2.5. Configure user, group, and vnc_listen in /etc/libvirt/qemu.conf. Add the following lines to the
bottom of the file.
user="root"
group="root"
vnc_listen="0.0.0.0"
2.6. Configure /etc/nova/nova.conf virtual disk storage and other properties for live migration. Use the
NFS-mounted /var/lib/nova/instances directory to store instance virtual disks.
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf libvirt images_type default
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT instances_path
/var/lib/nova/instances
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT
novncproxy_base_url http://172.25.250.1:6080/vnc_auto.html
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen
0.0.0.0
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT
live_migration_flag VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
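These configuration changes only take effect once the relevant services are restarted; the restart step is not shown here. A sketch, assuming the standard service names on a non-containerized OSP compute node:
[root@overcloud-compute-0 ~]# systemctl restart libvirtd openstack-nova-compute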
3.1. Log into compute1 as heat-admin and switch to the root user.
[student@workstation ~]$ ssh heat-admin@compute1
[heat-admin@overcloud-compute-1 ~]$ sudo -i
[root@overcloud-compute-1 ~]#
3.2. Configure /etc/fstab to mount the directory /var/lib/nova/instances, exported from controller0. Add
the following line to the bottom of the file. Confirm that the entry is on a single line in the file; the
two-line display here in the book is due to insufficient width.
172.25.250.1:/ /var/lib/nova/instances nfs4
context="system_u:object_r:nova_var_lib_t:s0" 0 0
3.5. Configure user, group, and vnc_listen in /etc/libvirt/qemu.conf. Add the following lines to the
bottom of the file.
user="root"
group="root"
vnc_listen="0.0.0.0"
3.6. Configure /etc/nova/nova.conf virtual disk storage and other properties for live migration. Use the
NFS-mounted /var/lib/nova/instances directory to store instance virtual disks.
[root@overcloud-compute-1 ~]# openstack-config --set /etc/nova/nova.conf libvirt images_type default
[root@overcloud-compute-1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT instances_path
/var/lib/nova/instances
[root@overcloud-compute-1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT
novncproxy_base_url http://172.25.250.1:6080/vnc_auto.html
[root@overcloud-compute-1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen
0.0.0.0
[root@overcloud-compute-1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT
live_migration_flag VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
5. List the available floating IP addresses, then allocate one to the finance-web2 instance.
5.1. List the floating IPs. An available one has the Port attribute set to None.
[student@workstation ~(developer1-finance)]$ openstack floating ip list -c "Floating IP Address" -c Port
6.3. Prior to migration, ensure the destination compute node has sufficient resources to host the
instance. In this example, the current server instance location node is overcloud-compute-1.localdomain,
and the destination to check is overcloud-compute-0. Modify the command to reflect your actual source
and destination compute nodes. Estimate whether the total, minus the amount used now, is sufficient.
[student@workstation ~(developer1-finance)]$ openstack host show overcloud-compute-0.localdomain
-f json
[
{
"Project": "(total)",
"Disk GB": 56,
"Host": "overcloud-compute-0.localdomain",
"CPU": 2,
"Memory MB": 6143
},
{
"Project": "(used_now)",
"Disk GB": 0,
"Host": "overcloud-compute-0.localdomain",
"CPU": 0,
"Memory MB": 2048
},
{
"Project": "(used_max)",
"Disk GB": 0,
"Host": "overcloud-compute-0.localdomain",
"CPU": 0,
"Memory MB": 0
}
6.4. Migrate the instance finance-web1 to a new compute node. In this example, the instance is
migrated from overcloud-compute-1 to overcloud-compute-0. Your
scenario may require migrating in the opposite direction.
[student@workstation ~(developer1-finance)]$ openstack server migrate --shared-migration --live
overcloud-compute-0.localdomain --wait finance-web2
Complete
7. Use the command openstack server show to verify that finance-web2 is now running on the other
compute node.
[student@workstation ~(developer1-finance)]$ openstack server show finance-web2 -f json | grep
compute
"OS-EXT-SRV-ATTR:host": "overcloud-compute-0.localdomain",
"OS-EXT-SRV-ATTR:hypervisor_hostname": "overcloud-compute-0.localdomain",
Cleanup
From workstation, run the lab resilience-shared-storage cleanup command to clean up this exercise.
[student@workstation ~]$ lab resilience-shared-storage cleanup
1. Use SSH to connect to director as the user stack and source the stackrc credentials file.
2. Prepare compute1 for introspection. Use the details available in
http://materials.example.com/instackenv-onenode.json file.
3. Initiate introspection of compute1. Introspection may take a few minutes.
4. Update the node profile for compute1 to use the compute profile.
5. Configure 00-node-info.yaml to scale two compute nodes.
6. Deploy the overcloud, to scale compute by adding one node.
7. Prepare compute1 for the next part of the lab.
[student@workstation ~]$ lab resilience-review prep-compute1
8. Configure controller0 for shared storage.
9. Configure shared storage for compute0.
10. Configure shared storage for compute1.
11. Launch an instance named production1 as the user operator1 using the following
12. List the available floating IP addresses, then allocate one to the production1 instance.
13. Ensure that the production1 instance is accessible by logging in to the instance as the user cloud-
user, then log out of the instance.
14. Migrate the instance production1 using shared storage.
15. Verify that the migration of production1 using shared storage was successful.
Evaluation
From workstation, run the lab resilience-review grade command to confirm the success of this exercise.
Correct any reported failures and rerun the command until successful.
[student@workstation ~]$ lab resilience-review grade
Cleanup
Save any data that you would like to keep from the virtual machines. After the data is saved, reset all of
the overcloud virtual machines and the director virtual machine. In the physical classroom environment,
reset all of the overcloud virtual machines and the director virtual machine using the rht-vmctl
command. In the online environment, reset and start the director and overcloud nodes.
Solution
In this lab, you will add compute nodes, manage shared storage, and perform instance live migration.
Resources Files: http://materials.example.com/instackenv-onenode.json
Outcomes
You should be able to:
• Add a compute node.
• Configure shared storage.
• Live migrate an instance using shared storage.
Before you begin
If you have not done so already, save any data that you would like to keep from the virtual machines.
After the data is saved, reset all of the virtual machines. In the physical classroom environment, reset all
of the virtual machines using the command rht-vmctl. In the online environment, delete and provision a
new classroom lab environment. Log in to workstation as student with a password of student. On
workstation, run the lab resilience-review setup command. The script ensures that OpenStack services
are running and the environment has been properly configured for the lab.
[student@workstation ~]$ lab resilience-review setup
Steps
1. Use SSH to connect to director as the user stack and source the stackrc credentials file.
[student@workstation ~]$ ssh stack@director
[stack@director ~]$
2.3. Import instackenv-onenode.json into the Bare Metal service using openstack baremetal import, and
ensure that the node has been properly imported.
[stack@director ~]$ openstack baremetal import --json /home/stack/instackenv-onenode.json
Started Mistral Workflow. Execution ID: 8976a32a-6125-4c65-95f1-2b97928f6777
Successfully registered node UUID b32d3987-9128-44b7-82a5-5798f4c2a96c
Started Mistral Workflow. Execution ID: 63780fb7-bff7-43e6-bb2a-5c0149bc9acc
Successfully set all nodes to available
[stack@director ~]$ openstack baremetal node list -c Name -c 'Power State' -c 'Provisioning State' -c
Maintenance
2.4. Prior to starting introspection, set the provisioning state for compute1 to manageable.
[stack@director ~]$ openstack baremetal node manage compute1
4. Update the node profile for compute1 to use the compute profile.
[stack@director ~]$ openstack baremetal node set compute1 --property
"capabilities=profile:compute,boot_option:local"
8.1. Log into controller0 as heat-admin and switch to the root user.
[student@workstation ~]$ ssh heat-admin@controller0
[heat-admin@overcloud-controller-0 ~]$ sudo -i
[root@overcloud-controller-0 ~]#
9.1. Log into compute0 as heat-admin and switch to the root user.
[student@workstation ~]$ ssh heat-admin@compute0
[heat-admin@overcloud-compute-0 ~]$ sudo -i
[root@overcloud-compute-0 ~]#
9.2. Configure /etc/fstab to mount the directory /var/lib/nova/instances, exported from controller0. Add
the following line to the bottom of the file. Confirm that the entry is on a single line in the file; the
two-line display here in the book is due to insufficient width.
172.25.250.1:/ /var/lib/nova/instances nfs4
context="system_u:object_r:nova_var_lib_t:s0" 0 0
9.5. Configure user, group, and vnc_listen in /etc/libvirt/qemu.conf. Add the following lines to the
bottom of the file.
user="root"
group="root"
vnc_listen="0.0.0.0"
9.6. Configure /etc/nova/nova.conf virtual disk storage and other properties for live migration. Use the
NFS-mounted /var/lib/nova/instances directory to store instance virtual disks.
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf libvirt images_type default
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT instances_path
/var/lib/nova/instances
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT
novncproxy_base_url http://172.25.250.1:6080/vnc_auto.html
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen
0.0.0.0
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT
live_migration_flag VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
10.1. Log into compute1 as heat-admin and switch to the root user.
[student@workstation ~]$ ssh heat-admin@compute1
[heat-admin@overcloud-compute-1 ~]$ sudo -i
[root@overcloud-compute-1 ~]#
10.5. Configure user, group, and vnc_listen in /etc/libvirt/qemu.conf. Add the following lines to the
bottom of the file.
user="root"
group="root"
vnc_listen="0.0.0.0"
10.6. Configure /etc/nova/nova.conf virtual disk storage and other properties for live migration. Use the
NFS-mounted /var/lib/nova/instances directory to store instance virtual disks.
[root@overcloud-compute-1 ~]# openstack-config --set /etc/nova/nova.conf libvirt images_type default
[root@overcloud-compute-1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT instances_path
/var/lib/nova/instances
[root@overcloud-compute-1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT
novncproxy_base_url http://172.25.250.1:6080/vnc_auto.html
[root@overcloud-compute-1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen
0.0.0.0
[root@overcloud-compute-1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT
live_migration_flag VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
12. List the available floating IP addresses, then allocate one to the production1 instance.
12.1. List the floating IPs. An available one has the Port attribute set to None.
[student@workstation ~(operator1-production)]$ openstack floating ip list -c "Floating IP Address" -c
Port
13. Ensure that the production1 instance is accessible by logging in to the instance as the user cloud-
user, then log out of the instance.
[student@workstation ~(operator1-production)]$ ssh -i ~/operator1-keypair1.pem cloud-
user@172.25.250.P
Warning: Permanently added '172.25.250.P' (ECDSA) to the list of known hosts.
[cloud-user@production1 ~]$ exit
[student@workstation ~(operator1-production)]$
14.1. To perform live migration, the user operator1 must have the admin role assigned for the project
production. Assign the admin role to operator1 for the project production.
Source the /home/student/admin-rc file to export the admin user credentials.
[student@workstation ~(operator1-production)]$ source ~/admin-rc
[student@workstation ~(admin-admin)]$ openstack role add --user operator1 --project production
admin
14.2. Determine whether the instance is currently running on compute0 or compute1. In the example
below, the instance is running on compute0, but your instance may be running on compute1. Source the
/home/student/operator1-production-rc file to export the operator1 user credentials.
[student@workstation ~(admin-admin)]$ source ~/operator1-production-rc
[student@workstation ~(operator1-production)]$ openstack server show production1 -f json | grep
compute
"OS-EXT-SRV-ATTR:host": "overcloud-compute-0.localdomain",
"OS-EXT-SRV-ATTR:hypervisor_hostname": "overcloud-compute-0.localdomain",
14.3. Prior to migration, ensure that compute1 has sufficient resources to host the instance. The
example below uses compute1; however, you may need to use compute0. The compute node should
have 2 VCPUs, a 56 GB disk, and 2048 MB of available RAM.
[student@workstation ~(operator1-production)]$ openstack host show overcloud-compute-
1.localdomain -f json
[
{
"Project": "(total)",
"Disk GB": 56,
"Host": "overcloud-compute-1.localdomain",
"CPU": 2,
"Memory MB": 6143
},
{
"Project": "(used_now)",
"Disk GB": 0,
"Host": "overcloud-compute-1.localdomain",
"CPU": 0,
"Memory MB": 2048
},
{
"Project": "(used_max)",
"Disk GB": 0,
"Host": "overcloud-compute-1.localdomain",
"CPU": 0,
"Memory MB": 0
}
14.4. Migrate the instance production1 using shared storage. In the example below, the instance is
migrated from compute0 to compute1, but you may need to migrate the instance from compute1 to
compute0.
[student@workstation ~(operator1-production)]$ openstack server migrate --shared-migration --live
overcloud-compute-1.localdomain production1
15. Verify that the migration of production1 using shared storage was successful.
15.1. Verify that the migration of production1 using shared storage was successful. The example below
displays compute1, but your output may display compute0.
[student@workstation ~(operator1-production)]$ openstack server show production1 -f json | grep
compute
"OS-EXT-SRV-ATTR:host": "overcloud-compute-1.localdomain",
"OS-EXT-SRV-ATTR:hypervisor_hostname": "overcloud-compute-1.localdomain",
From workstation, run the lab resilience-review grade command to confirm the success of this exercise.
Correct any reported failures and rerun the command until successful.
[student@workstation ~]$ lab resilience-review grade
Steps
1. Launch an instance named finance-web1 using the rhel7 image, the m1.web flavor, the finance-
network1 network, the finance-web security group, and the developer1-keypair1 key pair. These
resources were all created by the setup script. The instance deployment will return an error.
1.6. Verify that the developer1-keypair1 key pair and its associated file located at
/home/student/developer1-keypair1.pem are available.
[student@workstation ~(developer1-finance)]$ openstack keypair list
+---------------------+-----------------+
| Name | Fingerprint |
+---------------------+-----------------+
| developer1-keypair1 | cc:59(...)0f:f9 |
+---------------------+-----------------+
[student@workstation ~(developer1-finance)]$ file ~/developer1-keypair1.pem
/home/student/developer1-keypair1.pem: PEM RSA private key
1.7. Launch an instance named finance-web1 using the rhel7 image, the m1.web flavor, the finance-
network1 network, the finance-web security group, and the developer1-keypair1 key pair. The instance
deployment will return an error.
[student@workstation ~(developer1-finance)]$ openstack server create \
--image rhel7 \
--flavor m1.web \
--security-group finance-web \
--key-name developer1-keypair1 \
--nic net-id=finance-network1 \
finance-web1
...output omitted...
1.8. Verify the status of the finance-web1 instance. The instance status will be ERROR.
[student@workstation ~(developer1-finance)]$ openstack server show finance-web1 -c name -c status
+--------+--------------+
| Field | Value |
+--------+--------------+
| name | finance-web1 |
| status | ERROR |
+--------+--------------+
2. Verify on which host the Nova scheduler and Nova conductor services are running. You will need to
load the admin credentials located at the /home/student/admin-rc file.
2.1. Load the admin credentials located at the /home/student/admin-rc file.
[student@workstation ~(developer1-finance)]$ source ~/admin-rc
2.2. Verify on which host the Nova scheduler and Nova conductor services are running. Both services are
running on controller0.
[student@workstation ~(admin-admin)]$ openstack host list
+------------------------------------+-------------+----------+
| Host Name | Service | Zone |
+------------------------------------+-------------+----------+
| overcloud-controller-0.localdomain | scheduler | internal |
| overcloud-controller-0.localdomain | conductor | internal |
...output omitted...
3. Review the logs for the Compute scheduler and conductor services on controller0. Find the issue
related to no valid host being found for the finance-web1 instance in the Compute conductor log file
located at /var/log/nova/nova-conductor.log. Also find the issue related to no hosts being found by the
compute filter in the Compute scheduler log file located at /var/log/nova/nova-scheduler.log.
3.3. Locate the log message in the Compute conductor log file that sets the finance-web1 instance's
status to error, since no valid host is available to deploy the instance. The log file shows the instance ID.
[root@overcloud-controller-0 heat-admin]# cat /var/log/nova/nova-conductor.log
...output omitted...
NoValidHost: No valid host was found. There are not enough hosts available.
(...) WARNING (...) [instance: 168548c9-a7bb-41e1-a7ca-aa77dca302f8] Setting
instance to ERROR state.
...output omitted...
3.4. Locate the log message, in the Nova scheduler file, which returns zero hosts for the compute filter.
When done, log out of the root account.
[root@overcloud-controller-0 heat-admin]# cat /var/log/nova/nova-scheduler.log
...output omitted...
(...) Filter ComputeFilter returned 0 hosts
(...) Filtering removed all hosts for the request with instance ID '168548c9-a7bb-41e1-a7ca-
aa77dca302f8'. (...)
[root@overcloud-controller-0 heat-admin]# exit
5.2. Verify that the Nova compute service has been correctly enabled on compute0. When done, log out
from the controller node.
[heat-admin@overcloud-controller-0 ~]$ openstack compute service list -c Binary -c Host -c Status
+------------------+------------------------------------+---------+
| Binary | Host | Status |
+------------------+------------------------------------+---------+
| nova-compute | overcloud-compute-0.localdomain | enabled |
...output omitted...
[heat-admin@overcloud-controller-0 ~]$ exit
6. Launch the finance-web1 instance again from workstation using the developer1 user credentials. Use
the rhel7 image, the m1.web flavor, the finance-network1 network, the finance-web security group, and
the developer1-keypair1 key pair. The instance will be deployed without errors. You will need to delete
the previous instance deployment with an error status before deploying the new instance.
6.2. Delete the previous finance-web1 instance, whose deployment resulted in an error.
[student@workstation ~(developer1-finance)]$ openstack server delete finance-web1
6.3. Verify that the instance has been correctly deleted. The command should not return any instances
named finance-web1.
[student@workstation ~(developer1-finance)]$ openstack server list
6.4. Launch the finance-web1 instance again, using the rhel7 image, the m1.web flavor, the finance-
network1 network, the finance-web security group, and the developer1-keypair1 key pair.
[student@workstation ~(developer1-finance)]$ openstack server create \
--image rhel7 \
--flavor m1.web \
--security-group finance-web \
--key-name developer1-keypair1 \
--nic net-id=finance-network1 \
--wait finance-web1
... output omitted ...
6.5. Verify the status of the finance-web1 instance. The instance status will be ACTIVE. It may take some
time for the instance's status to become ACTIVE.
[student@workstation ~(developer1-finance)]$ openstack server show finance-web1 -c name -c status
+--------+--------------+
| Field | Value |
+--------+--------------+
| name | finance-web1 |
| status | ACTIVE |
+--------+--------------+
Cleanup
From workstation, run the lab troubleshooting-compute-nodes cleanup script to clean up this exercise.
[student@workstation ~]$ lab troubleshooting-compute-nodes cleanup
Steps
1. Create a 1 GB volume named finance-volume1 using developer1 user credentials. The command will
raise an issue.
1.2. Create a 1 GB volume named finance-volume1. This command raises a service unavailable issue.
[student@workstation ~(developer1-finance)]$ openstack volume create --size 1 finance-volume1
Discovering versions from the identity service failed when creating the password plugin. Attempting to
determine version from URL. Service Unavailable (HTTP 503)
2. Verify that the IP address used in the authentication URL of the developer1 user credentials file is the
same one configured as a virtual IP in the HAProxy service for the keystone_public service. The HAProxy
service runs in controller0.
2.1. Find the authentication URL in the developer1 user credentials file.
[student@workstation ~(developer1-finance)]$ cat ~/developer1-finance-rc
unset OS_SERVICE_TOKEN
export OS_USERNAME=developer1
export OS_PASSWORD=redhat
export OS_AUTH_URL=http://172.25.250.50:5000/v2.0
export PS1='[\u@\h \W(developer1-finance)]\$ '
export OS_TENANT_NAME=finance
export OS_REGION_NAME=regionOne
2.3. Find the virtual IP address configured in the HAProxy service for the keystone_public service.
[heat-admin@overcloud-controller-0 ~]$ sudo less /etc/haproxy/haproxy.cfg
...output omitted...
listen keystone_public
bind 172.24.1.50:5000 transparent
bind 172.25.250.50:5000 transparent
mode http
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
server overcloud-controller-0.internalapi.localdomain 172.24.1.1:5000 check
fall 5 inter 2000 rise 2
2.5. Verify the status for the httpd service. The httpd service is inactive.
[heat-admin@overcloud-controller-0 ~]$ systemctl status httpd
httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor
preset: disabled)
Drop-In: /usr/lib/systemd/system/httpd.service.d
└─openstack-dashboard.conf
Active: inactive (dead) since Thu 2017-06-15 09:37:15 UTC; 21min ago
...output omitted...
3. Start the httpd service. It may take some time for the httpd service to be started.
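Step 3.1 is not shown here. Assuming the service is managed by systemd on the controller, as the status output indicates, it can be started with:
[heat-admin@overcloud-controller-0 ~]$ sudo systemctl start httpd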
3.2. Verify that the httpd service is active. When done, log out from the controller node.
[heat-admin@overcloud-controller-0 ~]$ systemctl status httpd
httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor
preset: disabled)
Drop-In: /usr/lib/systemd/system/httpd.service.d
└─openstack-dashboard.conf
Active: active (running) since Thu 2017-06-15 10:13:15 UTC; 1min 8s ago
[heat-admin@overcloud-controller-0 ~]$ logout
4. On workstation, try to create a 1 GB volume named finance-volume1 again. The command will hang
because the Keystone identity service is not able to respond. Press Ctrl+C to return to the prompt.
[student@workstation ~(developer1-finance)]$ openstack volume create --size 1 finance-volume1
Ctrl+C
5.2. Verify that the log file for the Keystone identity service reports that the RabbitMQ service is
unreachable.
[heat-admin@overcloud-controller-0 ~]$ sudo less /var/log/keystone/keystone.log
...output omitted...
(...) AMQP server on 172.24.1.1:5672 is unreachable: [Errno 111] Connection
refused. (...)
6. Verify that the root cause for the RabbitMQ cluster unavailability is that the rabbitmq
Pacemaker resource is disabled. When done, enable the rabbitmq Pacemaker resource.
6.1. Verify that the root cause for the RabbitMQ cluster unavailability is that the rabbitmq
Pacemaker resource is disabled.
[heat-admin@overcloud-controller-0 ~]$ sudo pcs status
Cluster name: tripleo_cluster
Stack: corosync
...output omitted...
Clone Set: rabbitmq-clone [rabbitmq]
Stopped (disabled): [ overcloud-controller-0 ]
...output omitted...
6.2. Enable the rabbitmq resource in Pacemaker. When done, log out from the controller node.
[heat-admin@overcloud-controller-0 ~]$ sudo pcs resource enable rabbitmq --wait
Resource 'rabbitmq' is running on node overcloud-controller-0.
[heat-admin@overcloud-controller-0 ~]$ logout
7. On workstation, try again to create a 1 GB volume named finance-volume1. The volume will be
created successfully.
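The command is not repeated in the text; it is the same volume creation used earlier in this exercise:
[student@workstation ~(developer1-finance)]$ openstack volume create --size 1 finance-volume1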
Cleanup
From workstation, run the lab troubleshooting-authentication cleanup script to clean up this exercise.
[student@workstation ~]$ lab troubleshooting-authentication cleanup
Steps
1. Launch an instance named finance-web1. Use the rhel7 image, the finance-web security group, the
developer1-keypair1 key pair, the m1.lite flavor, and the finance-network1 network. The instance's
deployment will fail because the flavor does not meet the image's minimal requirements.
1.1. Load the credentials for the developer1 user.
[student@workstation ~]$ source ~/developer1-finance-rc
1.4. Verify that the developer1-keypair1 key pair and its associated key file located at
/home/student/developer1-keypair1.pem are available.
[student@workstation ~(developer1-finance)]$ openstack keypair list
+---------------------+-----------------+
| Name | Fingerprint |
+---------------------+-----------------+
| developer1-keypair1 | 04:9c(...)cb:1d |
+---------------------+-----------------+
[student@workstation ~(developer1-finance)]$ file ~/developer1-keypair1.pem
/home/student/developer1-keypair1.pem: PEM RSA private key
2. Verify the rhel7 image requirements for memory and disk, and the m1.lite flavor specifications.
2.1. Verify the rhel7 image requirements for both memory and disk. The minimum disk required is 10
GB. The minimum memory required is 2048 MB.
[student@workstation ~(developer1-finance)]$ openstack image show rhel7
+------------------+----------+
| Field | Value |
+------------------+----------+
...output omitted...
| min_disk | 10 |
| min_ram | 2048 |
| name | rhel7 |
...output omitted...
2.2. Verify the m1.lite flavor specifications. The disk and memory specifications for the m1.lite flavor do
not meet the rhel7 image requirements.
[student@workstation ~(developer1-finance)]$ openstack flavor show m1.lite
+-------------+--------------------------------------+
| Field | Value |
+-------------+--------------------------------------+
...output omitted...
| disk | 5 |
| name | m1.lite |
| ram | 1024 |
...output omitted...
3. Verify that the m1.web flavor meets the rhel7 image requirements. Launch an instance named
finance-web1. Use the rhel7 image, the finance-web security group, the developer1-keypair1 key pair,
the m1.web flavor, and the finance-network1 network. The instance's deployment will be successful.
3.1. Verify that the m1.web flavor meets the rhel7 image requirements.
[student@workstation ~(developer1-finance)]$ openstack flavor show m1.web
+-------------+--------------------------------------+
| Field | Value |
+-------------+--------------------------------------+
...output omitted...
| disk | 10 |
| name | m1.web |
| ram | 2048 |
...output omitted...
3.2. Launch an instance named finance-web1. Use the rhel7 image, the finance-web security group, the
developer1-keypair1 key pair, the m1.web flavor, and the finance-network1 network.
[student@workstation ~(developer1-finance)]$ openstack server create \
--image rhel7 \
--security-group finance-web \
--key-name developer1-keypair1 \
--flavor m1.web \
--nic net-id=finance-network1 \
--wait \
finance-web1
...output omitted...
4. Attach an available floating IP to the finance-web1 instance. The floating IP will not be attached
because the external network is not reachable from the internal network.
4.2. Attach the previous floating IP to the finance-web1 instance. The floating IP will not be attached
because the external network is not reachable from the internal network.
[student@workstation ~(developer1-finance)]$ openstack server add floating ip finance-web1
172.25.250.P
Unable to associate floating IP 172.25.250.P to fixed IP 192.168.0.N (...)
Error: External network cb3a(...)6a35 is not reachable from subnet
ec0d(...)480b.(...)
5. Fix the previous issue by adding the finance-subnet1 subnetwork to the finance-router1 router.
5.2. Verify the current subnetworks added to the finance-router1 router. No output will display because
the subnetwork has not been added.
[student@workstation ~(developer1-finance)]$ neutron router-port-list finance-router1
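The command that actually attaches the subnet (step 5.3) is not shown here. A minimal sketch using the same neutron client as the surrounding steps:
[student@workstation ~(developer1-finance)]$ neutron router-interface-add finance-router1 finance-subnet1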
5.4. Verify that the finance-subnet1 subnetwork has been correctly added to the finance-router1 router.
[student@workstation ~(developer1-finance)]$ neutron router-port-list finance-router1 -c fixed_ips
+-------------------------------------------------------------+
| fixed_ips |
+-------------------------------------------------------------+
| {"subnet_id": "dbac(...)673d", "ip_address": "192.168.0.1"} |
+-------------------------------------------------------------+
6. Attach the available floating IP to the finance-web1 instance. When done, log in to the finance-web1
instance as the cloud-user user, using the /home/student/developer1-keypair1.pem key file. Even
though the floating IP address is attached to the finance-web1 instance, logging in to the instance will
fail. This issue will be resolved in an upcoming step in this exercise.
6.2. Log in to the finance-web1 instance as the cloud-user user, using the /home/student/developer1-
keypair1.pem key file.
[student@workstation ~(developer1-finance)]$ ssh -i ~/developer1-keypair1.pem cloud-
user@172.25.250.P
Warning: Permanently added '172.25.250.P' (ECDSA) to the list of known hosts.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
7. Verify that the instance was not able to contact the metadata service at boot time. The metadata
service was not reachable because finance-subnet1 was not connected to the finance-router1 router
when the finance-web1 instance was created. This is the root cause of the previous issue: the key was
never added to the authorized_keys file for the cloud-user user.
7.2. Open Firefox, and navigate to the finance-web1 instance's console URL.
7.3. Log in to the finance-web1 instance's console as the root user, using redhat as a password.
7.4. Verify that the authorized_keys file for the cloud-user is empty. No key has been injected by cloud-
init during the instance's boot process.
[root@host-192-168-0-N ~]# cat /home/cloud-user/.ssh/authorized_keys
7.5. Verify in the cloud-init log file, located at /var/log/cloud-init.log, that the finance-web1 instance
cannot reach the metadata service during its boot process.
[root@host-192-168-0-N ~]# less /var/log/cloud-init.log
...output omitted...
[ 134.170335] cloud-init[475]: 2014-07-01 07:33:22,857 -
url_helper.py[WARNING]: Calling 'http://192.168.0.1//latest/meta-data/instanceid'
failed [0/120s]:
request error [HTTPConnectionPool(host='192.168.0.1', port=80): Max retries
exceeded with url: //latest/meta-data/instance-id (...)
[Errno 113] No route to host)]
...output omitted...
8. On workstation, stop and then start the finance-web1 instance to allow cloud-init to recover. The
metadata service is now reachable because the finance-subnet1 subnetwork is connected to the
finance-router1 router.
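The stop and start commands themselves (steps 8.1 through 8.3) are not shown here; a minimal sketch:
[student@workstation ~(developer1-finance)]$ openstack server stop finance-web1
[student@workstation ~(developer1-finance)]$ openstack server start finance-web1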
8.4. Log in to the finance-web1 instance as the cloud-user user, using the /home/student/developer1-
keypair1.pem key file.
[student@workstation ~(developer1-finance)]$ ssh -i ~/developer1-keypair1.pem cloud-
user@172.25.250.P
Warning: Permanently added '172.25.250.P' (ECDSA) to the list of known hosts.
8.5. Verify that the authorized_keys file for the cloud-user user has had a key injected into it. When
done, log out from the instance.
[cloud-user@finance-web1 ~]$ cat .ssh/authorized_keys
ssh-rsa AAAA(...)JDGZ Generated-by-Nova
[cloud-user@finance-web1 ~]$ exit
9. On workstation, create a 1 GB volume named finance-volume1. The volume creation will fail.
9.2. Verify the status of the volume finance-volume1. The volume's status will be error.
[student@workstation ~(developer1-finance)]$ openstack volume list
+---------------+-----------------+--------+------+-------------+
| ID | Display Name | Status | Size | Attached to |
+---------------+-----------------+--------+------+-------------+
| b375(...)0008 | finance-volume1 | error | 1 | |
+---------------+-----------------+--------+------+-------------+
10. Confirm the reason that the finance-volume1 volume was not correctly created: no valid host was
found by the Block Storage scheduler service.
10.2. Verify that the Block Storage scheduler log file, located at /var/log/cinder/scheduler.log, reports a
no valid host issue.
[heat-admin@overcloud-controller-0 ~]$ sudo less /var/log/cinder/scheduler.log
...output omitted...
(...) in rbd.RBD.create (rbd.c:3227)\n', u'PermissionError: error creating image
\n']
(...) No valid host was found. (...)
11. Verify that the Block Storage volume service's status is up, to rule out any issue related to RabbitMQ.
11.2. Verify that the Block Storage volume service's status is up.
[heat-admin@overcloud-controller-0 ~]$ openstack volume service list -c Binary -c Status -c State
+------------------+---------+-------+
| Binary | Status | State |
+------------------+---------+-------+
| cinder-volume | enabled | up |
...output omitted...
+------------------+---------+-------+
12. Verify that the Block Storage service is configured to use the openstack user and the volumes pool.
When done, verify that the volume creation error is related to the permissions of the openstack user in
Ceph. This user needs read, write, and execute capabilities on the volumes pool.
12.1. Verify that the block storage service is configured to use the openstack user, and the volumes pool.
[heat-admin@overcloud-controller-0 ~]$ sudo grep "rbd_" /etc/cinder/cinder.conf
...output omitted...
rbd_pool=volumes
rbd_user=openstack
...output omitted...
12.4. Verify that the openstack user has no capabilities on the volumes pool.
[heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph auth get client.openstack
exported keyring for client.openstack
[client.openstack]
key = AQCg7T5ZAAAAABAAI6ZtsCQEuvVNqoyRKzeNcw==
caps mon = "allow r"
caps osd = "allow class-read object_prefix rbd_children, allow rwx
pool=backups,
allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics"
13. Fix the issue by adding read, write, and execute capabilities to the openstack user on the volumes
pool.
13.1. Add the read, write, and execute capabilities to the openstack user on the volumes pool.
Unfortunately, you cannot simply append to the existing list; you must retype it entirely.
Important
Please note that the line starting with osd must be entered as a single line.
[heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph auth caps \
client.openstack \
mon 'allow r' \
osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes,
allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx
pool=metrics'
updated caps for client.openstack
13.2. Verify that the openstack user's capabilities have been correctly updated. When done, log out from
the Ceph node.
[heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph auth get client.openstack
exported keyring for client.openstack
[client.openstack]
key = AQCg7T5ZAAAAABAAI6ZtsCQEuvVNqoyRKzeNcw==
caps mon = "allow r"
caps osd = "allow class-read object_prefix rbd_children, allow rwx
pool=volumes,
allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx
pool=metrics"
[heat-admin@overcloud-cephstorage-0 ~]$ logout
14. On workstation, try again to create a 1 GB volume named finance-volume1. The volume creation
will succeed. You need to delete the failed finance-volume1 volume first.
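The delete and recreate commands (steps 14.1 and 14.2) are not shown here; a minimal sketch:
[student@workstation ~(developer1-finance)]$ openstack volume delete finance-volume1
[student@workstation ~(developer1-finance)]$ openstack volume create --size 1 finance-volume1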
14.3. Verify that the finance-volume1 volume has been correctly created. The volume status should show
available; if the status is error, ensure that the permissions were set correctly in the previous step.
[student@workstation ~(developer1-finance)]$ openstack volume list
+---------------+-----------------+-----------+------+-------------+
| ID | Display Name | Status | Size | Attached to |
+---------------+-----------------+-----------+------+-------------+
| e454(...)ddc8 | finance-volume1 | available | 1 | |
+---------------+-----------------+-----------+------+-------------+
Cleanup
From workstation, run the lab troubleshooting-services cleanup script to clean up this exercise.
[student@workstation ~]$ lab troubleshooting-services cleanup
Steps
1. As the operator1 user, remove the existing image called production-rhel7. The operator1-production-
rc file can be found in student's home directory on workstation. Troubleshoot any problems.
2. Source the admin-rc credential file, then run lab troubleshooting-review break to set up the next part
of the lab exercise.
[student@workstation ~]$ source ~/admin-rc
[student@workstation ~(admin-admin)]$ lab troubleshooting-review break
3. Re-source the /home/student/operator1-production-rc and attempt to list the images. It should fail.
Troubleshoot any issues and fix the problem.
4. Create a new server instance named production-web1. Use the m1.web flavor, the operator1-
keypair1 key pair, the production-network1 network, the production-web security group, and the rhel7
image. This action will fail. Troubleshoot any issues and fix the problem.
5. Create a floating IP address and assign it to the instance. Troubleshoot any issues and fix the problem.
6. Access the instance using SSH. An error will occur. Troubleshoot any issues and fix the problem.
7. Create a volume named production-volume1, size 1 GB. Verify the volume status. Use the admin
user's Identity service rc file on controller0 at /home/heat-admin/overcloudrc. Troubleshoot any issues
and fix the problem.
Evaluation
On workstation, run the lab troubleshooting-review grade command to confirm success of this exercise.
[student@workstation ~]$ lab troubleshooting-review grade
Cleanup
From workstation, run the lab troubleshooting-review cleanup script to clean up this exercise.
[student@workstation ~]$ lab troubleshooting-review cleanup
Solution
In this lab, you will find and fix issues in the OpenStack environment. You will solve problems in the
areas of authentication, networking, compute nodes, and security. Finally, you will launch an instance
and ensure that everything is working as it should.
Outcomes
You should be able to:
• Troubleshoot authentication issues within OpenStack
• Search log files to help describe the nature of the problem
• Troubleshoot messaging issues within OpenStack
• Troubleshoot networking issues within OpenStack
Steps
1. As the operator1 user, remove the existing image called production-rhel7. The operator1-production-
rc file can be found in student's home directory on workstation. Troubleshoot any problems.
1.1. Source the /home/student/operator1-production-rc file.
[student@workstation ~]$ source ~/operator1-production-rc
1.3. The error you see is because the image is currently protected. You need to unprotect the image
before it can be deleted.
[student@workstation ~(operator1-production)]$ openstack image set --unprotected production-rhel7
[student@workstation ~(operator1-production)]$ openstack image delete production-rhel7
2. Source the admin-rc credential file, then run lab troubleshooting-review break to set up the next part
of the lab exercise.
[student@workstation ~]$ source ~/admin-rc
[student@workstation ~(admin-admin)]$ lab troubleshooting-review break
3. Re-source the /home/student/operator1-production-rc and attempt to list the images. It should fail.
Troubleshoot any issues and fix the problem.
3.1.
[student@workstation ~(admin-admin)]$ source ~/operator1-production-rc
[student@workstation ~(operator1-production)]$ openstack image list
Discovering versions from the identity service failed when creating
the password plugin. Attempting to determine version from URL. Unable
to establish connection to http://172.25.251.50:5000/v2.0/tokens:
HTTPConnectionPool(host='172.25.251.50', port=5000): Max retries exceeded
with url: /v2.0/tokens.................: Failed to establish a new connection:
[Errno 110] Connection timed out',)
3.2. The error occurs because OpenStack cannot authenticate the operator1 user. This can happen when
the rc file for the user has a bad IP address. Check the rc file and note the OS_AUTH_URL address.
Compare this IP address to the one that can be found in /etc/haproxy/haproxy.cfg on controller0.
Search for the line: listen keystone_public. The second IP address is the one that must be used in the
user's rc file. When done, log out from the controller node.
[student@workstation ~(operator1-production)]$ ssh heat-admin@controller0 cat
/etc/haproxy/haproxy.cfg
...output omitted...
listen keystone_public
bind 172.24.1.50:5000 transparent
bind 172.25.250.50:5000 transparent
...output omitted...
3.3. Compare the IP address from HAProxy with the one in the rc file. You need to change the rc file to
the correct IP address (172.25.250.50, the second bind address shown above) to continue.
...output omitted...
export OS_AUTH_URL=http://172.25.251.50:5000/v2.0
...output omitted...
3.5. Source the operator1-production-rc again. Use the openstack image list command to ensure that
the OS_AUTH_URL option is correct.
[student@workstation ~(operator1-production)]$ source ~/operator1-production-rc
[student@workstation ~(operator1-production)]$ openstack image list
+--------------------------------------+-------+--------+
| ID | Name | Status |
+--------------------------------------+-------+--------+
| 21b3b8ba-e28e-4b41-9150-ac5b44f9d8ef | rhel7 | active |
+--------------------------------------+-------+--------+
4. Create a new server instance named production-web1. Use the m1.web flavor, the operator1-
keypair1 key pair, the production-network1 network, the production-web security group, and the rhel7
image. This action will fail. Troubleshoot any issues and fix the problem.
4.2. This error is due to a problem with the nova compute service. List the Nova services. You need to
source the /home/student/admin-rc first, as operator1 does not have permission to interact directly
with nova services.
[student@workstation ~(operator1-production)]$ source ~/admin-rc
[student@workstation ~(admin-admin)]$ nova service-list
+----+-----------------+-----------------------------------+----------+------+
| ID | Binary | Host | Status | State|
+----+-----------------+-----------------------------------+----------+------+
| 3 | nova-consoleauth| overcloud-controller-0.localdomain| enabled | up |
| 4 | nova-scheduler | overcloud-controller-0.localdomain| enabled | up |
| 5 | nova-conductor | overcloud-controller-0.localdomain| enabled | up |
| 7 | nova-compute | overcloud-compute-0.localdomain | disabled | down |
+----+-----------------+-----------------------------------+----------+------+
4.3. Enable the nova-compute service on overcloud-compute-0.
[student@workstation ~(admin-admin)]$ nova service-enable \
overcloud-compute-0.localdomain \
nova-compute
+---------------------------------+--------------+---------+
| Host | Binary | Status |
+---------------------------------+--------------+---------+
| overcloud-compute-0.localdomain | nova-compute | enabled |
+---------------------------------+--------------+---------+
4.4. Source the operator1 rc file and try to create the instance again. First, delete the instance that is
currently showing an error status. The instance deployment will finish correctly.
[student@workstation ~(admin-admin)]$ source ~/operator1-production-rc
[student@workstation ~(operator1-production)]$ openstack server delete production-web1
[student@workstation ~(operator1-production)]$ openstack server list
[student@workstation ~(operator1-production)]$ openstack server create \
--flavor m1.web \
--nic net-id=production-network1 \
--key-name operator1-keypair1 \
--security-group production-web \
--image rhel7 --wait production-web1
5. Create a floating IP address and assign it to the instance. Troubleshoot any issues and fix the problem.
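No commands are given for this step. A minimal sketch, assuming the external provider network is named provider-172.25.250 in this environment (check with openstack network list), and with 172.25.250.P standing in for the allocated address:
[student@workstation ~(operator1-production)]$ openstack floating ip create provider-172.25.250
[student@workstation ~(operator1-production)]$ openstack server add floating ip production-web1 172.25.250.P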
6. Access the instance using SSH. An error will occur. Troubleshoot any issues and fix the problem.
6.2. Find out which security group the instance is using, then list the rules in that security group.
[student@workstation ~(operator1-production)]$ openstack server show production-web1 -f json
...output omitted...
"security_groups": [
{
"name": "production-web"
}
],
...output omitted...
[student@workstation ~(operator1-production)]$ openstack security group rule list production-web
+---------------+-------------+----------+------------+-----------------------+
| ID | IP Protocol | IP Range | Port Range | Remote Security Group |
+---------------+-------------+----------+------------+-----------------------+
| cc92(...)95b1 | None | None | | None |
| eb84(...)c6e7 | None | None | | None |
+---------------+-------------+----------+------------+-----------------------+
6.3. We can see that there is no rule allowing SSH to the instance. Create the security group rule.
[student@workstation ~(operator1-production)]$ openstack security group rule create \
--protocol tcp --dst-port 22:22 production-web
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2017-06-12T07:24:34Z |
| description | |
| direction | ingress |
| ethertype | IPv4 |
| headers | |
| id | 06070264-1427-4679-bd8e-e3a8f2e189e9 |
| port_range_max | 22 |
| port_range_min | 22 |
| project_id | 9913a8abd192443c96587a8dc1c0a364 |
| project_id | 9913a8abd192443c96587a8dc1c0a364 |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | ac9ae6e6-0056-4501-afea-f83087b8297f |
| updated_at | 2017-06-12T07:24:34Z |
+-------------------+--------------------------------------+
7. Create a volume named production-volume1, size 1 GB. Verify the volume status. Use the admin
user's Identity service rc file on controller0 at /home/heat-admin/overcloudrc. Troubleshoot any issues
and fix the problem.
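Sub-steps 7.1 and 7.2 are not shown. A minimal sketch of the initial attempt, which is expected to leave the volume in an error status:
[student@workstation ~(operator1-production)]$ openstack volume create --size 1 production-volume1
[student@workstation ~(operator1-production)]$ openstack volume list -c "Display Name" -c Status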
7.3. The volume displays an error status. The Block Storage scheduler service is unable to find a valid
host on which to create the volume. The Block Storage volume service is currently down. Log into
controller0 as heat-admin.
[student@workstation ~(operator1-production)]$ ssh heat-admin@controller0
7.4. Verify that no valid host was found to create the production-volume1 in the Block Storage
scheduler's log file.
[heat-admin@overcloud-controller-0 ~]$ sudo less /var/log/cinder/scheduler.log
...output omitted...
201 (...) Failed to run task
cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create:
No valid host was found. No weighed hosts available
7.5. Load the admin credentials and verify that the Cinder volume service is down. The admin credential
can be found in /home/heat-admin/overcloudrc.
[heat-admin@overcloud-controller-0 ~]$ source ~/overcloudrc
[heat-admin@overcloud-controller-0 ~]$ openstack volume service list -c Binary -c Host -c Status -c State
+------------------+------------------------+---------+-------+
| Binary | Host | Status | State |
+------------------+------------------------+---------+-------+
| cinder-scheduler | hostgroup | enabled | up |
| cinder-volume | hostgroup@tripleo_ceph | enabled | down |
+------------------+------------------------+---------+-------+
7.6. Confirm that the IP address and port for the RabbitMQ cluster and the rabbitmq-clone Pacemaker resource are correct.
[heat-admin@overcloud-controller-0 ~]$ sudo rabbitmqctl status
Status of node 'rabbit@overcloud-controller-0' ...
...output omitted...
{listeners,[{clustering,25672,"::"},{amqp,5672,"172.24.1.1"}]},
...output omitted...
7.7. Check the Cinder configuration file. The rabbit_userid setting is wrong: in the following output, the default is guest, but the value is currently set to change_me.
[heat-admin@overcloud-controller-0 ~]$ sudo cat /etc/cinder/cinder.conf | grep rabbit_userid
#rabbit_userid = guest
rabbit_userid = change_me
7.8. Using crudini, change the RabbitMQ username in the Cinder configuration file. Then restart the openstack-cinder-volume Pacemaker resource to apply the change, and log out.
[heat-admin@overcloud-controller-0 ~]$ sudo crudini --set /etc/cinder/cinder.conf
oslo_messaging_rabbit rabbit_userid guest
[heat-admin@overcloud-controller-0 ~]$ sudo pcs resource restart openstack-cinder-volume
[heat-admin@overcloud-controller-0 ~]$ exit
7.9. On workstation, delete the incorrect volume and recreate it. Verify it has been properly created.
[student@workstation ~(operator1-production)]$ openstack volume delete production-volume1
[student@workstation ~(operator1-production)]$ openstack volume create --size 1 production-volume1
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2017-06-14T08:08:01.726844 |
| description | None |
| encrypted | False |
| id | 128a9514-f8bd-4162-9f7e-72036f684cba |
| multiattach | False |
| name | production-volume1 |
| properties | |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | None |
| updated_at | None |
| user_id | 0ac575bb96e24950a9551ac4cda082a4 |
+---------------------+--------------------------------------+
[student@workstation ~(operator1-production)]$ openstack volume list
+--------------------------------------+--------------------+-----------+------+
| ID | Display Name | Status | Size |
+--------------------------------------+--------------------+-----------+------+
| 128a9514-f8bd-4162-9f7e-72036f684cba | production-volume1 | available | 1 |
+--------------------------------------+--------------------+-----------+------+
Evaluation
On workstation, run the lab troubleshooting-review grade command to confirm success of this exercise.
[student@workstation ~]$ lab troubleshooting-review grade
Cleanup
From workstation, run the lab troubleshooting-review cleanup script to clean up this exercise.
[student@workstation ~]$ lab troubleshooting-review cleanup
Steps
1. From workstation, connect to the controller0 node. Open the /etc/ceilometer/ceilometer.conf file and determine which meter dispatcher is configured for the Telemetry service. On workstation, run the ceilometer command (which should produce an error) to verify that the Gnocchi telemetry service is in use instead of Ceilometer.
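Sub-step 1.1 is not shown in this answer; based on the rest of the lab, it is assumed to be a login to controller0 as the heat-admin user:
[student@workstation ~]$ ssh heat-admin@controller0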
1.2. Open the /etc/ceilometer/ceilometer.conf and search for the meter_dispatchers variable. The
meter dispatcher is set to gnocchi, which is storing the metering data.
[heat-admin@overcloud-controller-0 ~]$ sudo grep meter_dispatchers /etc/ceilometer/ceilometer.conf
#meter_dispatchers = database
meter_dispatchers=gnocchi
1.4. From workstation, source the /home/student/developer1-finance-rc file. Verify that the ceilometer
command returns an error because Gnocchi is set as the meter dispatcher.
[student@workstation ~]$ source ~/developer1-finance-rc
[student@workstation ~(developer1-finance)]$ ceilometer --debug meter-list
...output omitted...
DEBUG (client) RESP BODY: {"error_message": "410 Gone\n\nThis resource is no
longer available.
No forwarding address is given.
\n\n This telemetry installation is configured to use Gnocchi.
Please use the Gnocchi API available on the metric endpoint to retrieve data.
"}
...output omitted...
2. List the resource types available in the Telemetry metering service. Use the resource ID of the
instance resource type to list all the meters available.
2.2. List the resources accessible by the developer1 user. Note the resource ID of the instance resource
type.
[student@workstation ~(developer1-finance)]$ openstack metric resource list -c id -c type
+--------------------------------------+----------------------------+
| id | type |
+--------------------------------------+----------------------------+
| 6bd6e073-4e97-4a48-92e4-d37cb365cddb | image |
| 05a6a936-4a4c-5d1b-b355-2fd6e2e47647 | instance_disk |
| cef757c0-6137-5905-9edc-ce9c4d2b9003 | instance_network_interface |
| 6776f92f-0706-54d8-94a1-2dd8d2397825 | instance_disk |
| dbf53681-540f-5ee1-9b00-c06bb53cbd62 | instance_disk |
| cebc8e2f-3c8f-45a1-8f71-6f03f017c623 | swift_account |
| a2b3bda7-1d9e-4ad0-99fe-b4f7774deda0 | instance |
+--------------------------------------+----------------------------+
2.3. Verify that the instance ID of the finance-web1 instance is the same as the resource ID.
[student@workstation ~(developer1-finance)]$ openstack server list -c ID -c Name
+--------------------------------------+--------------+
| ID | Name |
+--------------------------------------+--------------+
| a2b3bda7-1d9e-4ad0-99fe-b4f7774deda0 | finance-web1 |
+--------------------------------------+--------------+
2.4. Using the resource ID, list all the meters associated with the finance-web1 instance.
[student@workstation ~(developer1-finance)]$ openstack metric resource show a2b3bda7-1d9e-4ad0-
99fe-b4f7774deda0
+-----------------------+------------------------------------------------------+
| Field | Value |
+-----------------------+------------------------------------------------------+
| created_by_project_id | d42393f674a9488abe11bd0ef6d18a18 |
| created_by_user_id | 7521059a98cc4d579eea897027027575 |
| ended_at | None |
| id | a2b3bda7-1d9e-4ad0-99fe-b4f7774deda0 |
| metrics | cpu.delta: 75369002-85ca-47b9-8276-88f5314aa9ad |
| | cpu: 71d9c293-f9ba-4b76-aaf8-0b1806a3b280 |
| | cpu_util: 37274980-f825-4aef-b1b9-fe46d266d1d8 |
| | disk.allocation: 93597246-5a02-4f65-b51c- |
| | 2b4946f411cd |
| | disk.capacity: cff713de-cdcc-4162-8a5b-16f76f86cf10 |
| | disk.ephemeral.size: |
| | b7cccbb5-bc27-40fe-9296-d62dfc22dfce |
| | disk.iops: ccf52c4d-9f59-4f78-8b81-0cb10c02b8e3 |
| | disk.latency: c6ae52ee-458f-4b8f-800a-cb00e5b1c1a6 |
| | disk.read.bytes.rate: 6a1299f2-a467-4eab- |
| | 8d75-f8ea68cad213 |
| | disk.read.bytes: |
| | 311bb209-f466-4713-9ac9-aa5d8fcfbc4d |
| | disk.read.requests.rate: 0a49942b-bbc9-4b2b- |
| | aee1-b6acdeeaf3ff |
| | disk.read.requests: 2581c3bd-f894-4798-bd5b- |
| | 53410de25ca8 |
| | disk.root.size: b8fe97f1-4d5e-4e2c-ac11-cdd92672c3c9 |
| | disk.usage: 0e12b7e5-3d0b-4c0f-b20e-1da75b2bff01 |
| | disk.write.bytes.rate: a2d063ed- |
| | 28c0-4b82-b867-84c0c6831751 |
| | disk.write.bytes: |
| | 8fc5a997-7fc0-43b5-88a0-ca28914e47cd |
| | disk.write.requests.rate: |
| | db5428d6-c6d7-4d31-888e-d72815076229 |
| | disk.write.requests: a39417a5-1dca- |
| | 4a94-9934-1deaef04066b |
| | instance: 4ee71d49-38f4-4368-b86f-a72d73861c7b |
| | memory.resident: 8795fdc3-0e69-4990-bd4c- |
| | 61c6e1a12c1d |
| | memory.usage: 902c7a71-4768-4d28-9460-259bf968aac5 |
| | memory: 277778df-c551-4573-a82e-fa7d3349f06f |
| | vcpus: 11ac9f36-1d1f-4e72-a1e7-9fd5b7725a14 |
| original_resource_id | a2b3bda7-1d9e-4ad0-99fe-b4f7774deda0 |
| project_id | 861b7d43e59c4edc97d1083e411caea0 |
| revision_end | None |
| revision_start | 2017-05-26T04:10:16.250620+00:00 |
| started_at | 2017-05-26T03:32:08.440478+00:00 |
| type | instance |
| user_id | cbcc0ad8d6ab460ca0e36ba96528dc03 |
+-----------------------+------------------------------------------------------+
3.1. Retrieve the resource ID associated with the image resource type.
[student@workstation ~(developer1-finance)]$ openstack metric resource list --type image -c id -c type
+--------------------------------------+-------+
| id | type |
+--------------------------------------+-------+
| 6bd6e073-4e97-4a48-92e4-d37cb365cddb | image |
+--------------------------------------+-------+
3.2. List the meters associated with the image resource ID.
[student@workstation ~(developer1-finance)]$ openstack metric resource show 6bd6e073-4e97-4a48-
92e4-d37cb365cddb
+-----------------------+------------------------------------------------------+
| Field | Value |
+-----------------------+------------------------------------------------------+
| created_by_project_id | df179bcea2e540e398f20400bc654cec |
| created_by_user_id | b74410917d314f22b0301c55c0edd39e |
| ended_at | None |
| id | 6bd6e073-4e97-4a48-92e4-d37cb365cddb |
| metrics | image.download: fc82d8eb-2f04-4a84-8bc7-fe35130d28eb |
| | image.serve: 2a8329f3-8378-49f6-aa58-d1c5d37d9b62 |
| | image.size: 9b065b52-acf0-4906-bcc6-b9604efdb5e5 |
| | image: 09883163-6783-4106-96ba-de15201e72f9 |
| original_resource_id | 6bd6e073-4e97-4a48-92e4-d37cb365cddb |
| project_id | cebc8e2f3c8f45a18f716f03f017c623 |
| revision_end | None |
| revision_start | 2017-05-23T04:06:12.958634+00:00 |
| started_at | 2017-05-23T04:06:12.958618+00:00 |
| type | image |
| user_id | None |
+-----------------------+------------------------------------------------------+
4. Using the resource ID, list the details for the disk.read.requests.rate metric associated with the
finance-web1 instance.
[student@workstation ~(developer1-finance)]$ openstack metric metric show --resource-id a2b3bda7-
1d9e-4ad0-99fe-b4f7774deda0 disk.read.requests.rate
+------------------------------------+-----------------------------------------+
| Field | Value |
+------------------------------------+-----------------------------------------+
| archive_policy/aggregation_methods | std, count, 95pct, min, max, sum, |
| | median, mean |
| archive_policy/back_window | 0 |
| archive_policy/definition | - points: 12, granularity: 0:05:00, |
| | timespan: 1:00:00 |
| | - points: 24, granularity: 1:00:00, |
| | timespan: 1 day, 0:00:00 |
| | - points: 30, granularity: 1 day, |
| | 0:00:00, timespan: 30 days, 0:00:00 |
| archive_policy/name | low |
| created_by_project_id | d42393f674a9488abe11bd0ef6d18a18 |
| created_by_user_id | 7521059a98cc4d579eea897027027575 |
| id | 0a49942b-bbc9-4b2b-aee1-b6acdeeaf3ff |
| name | disk.read.requests.rate |
| resource/created_by_project_id | d42393f674a9488abe11bd0ef6d18a18 |
| resource/created_by_user_id | 7521059a98cc4d579eea897027027575 |
| resource/ended_at | None |
| resource/id | a2b3bda7-1d9e-4ad0-99fe-b4f7774deda0 |
| resource/original_resource_id | a2b3bda7-1d9e-4ad0-99fe-b4f7774deda0 |
| resource/project_id | 861b7d43e59c4edc97d1083e411caea0 |
| resource/revision_end | None |
| resource/revision_start | 2017-05-26T04:10:16.250620+00:00 |
| resource/started_at | 2017-05-26T03:32:08.440478+00:00 |
| resource/type | instance |
| resource/user_id | cbcc0ad8d6ab460ca0e36ba96528dc03 |
| unit | None |
+------------------------------------+-----------------------------------------+
The disk.read.requests.rate metric uses the low archive policy. The low archive policy aggregates at a granularity as fine as 5 minutes, and the aggregated data has a maximum lifespan of 30 days. For example, at the 5-minute (0:05:00) granularity, the policy keeps 12 points, covering one hour of data.
5. Display the measures gathered and aggregated by the disk.read.requests.rate metric associated with
the finance-web1 instance. The number of records returned in the output may vary.
[student@workstation ~(developer1-finance)]$ openstack metric measures show --resource-id
a2b3bda7-1d9e-4ad0-99fe-b4f7774deda0 disk.read.requests.rate
+---------------------------+-------------+----------------+
| timestamp | granularity | value |
+---------------------------+-------------+----------------+
| 2017-05-23T00:00:00+00:00 | 86400.0 | 0.277122710561 |
| 2017-05-23T04:00:00+00:00 | 3600.0 | 0.0 |
| 2017-05-23T05:00:00+00:00 | 3600.0 | 0.831368131683 |
| 2017-05-23T06:00:00+00:00 | 3600.0 | 0.0 |
| 2017-05-23T07:00:00+00:00 | 3600.0 | 0.0 |
| 2017-05-23T05:25:00+00:00 | 300.0 | 0.0 |
| 2017-05-23T05:35:00+00:00 | 300.0 | 4.92324971194 |
| 2017-05-23T05:45:00+00:00 | 300.0 | 0.0 |
| 2017-05-23T05:55:00+00:00 | 300.0 | 0.0 |
| 2017-05-23T06:05:00+00:00 | 300.0 | 0.0 |
| 2017-05-23T06:15:00+00:00 | 300.0 | 0.0 |
| 2017-05-23T06:25:00+00:00 | 300.0 | 0.0 |
| 2017-05-23T06:35:00+00:00 | 300.0 | 0.0 |
| 2017-05-23T06:45:00+00:00 | 300.0 | 0.0 |
| 2017-05-23T06:55:00+00:00 | 300.0 | 0.0 |
| 2017-05-23T07:05:00+00:00 | 300.0 | 0.0 |
| 2017-05-23T07:15:00+00:00 | 300.0 | 0.0 |
+---------------------------+-------------+----------------+
Observe the value column, which displays the aggregated values based on the archive policy associated with the metric. The granularity values 86400, 3600, and 300 are expressed in seconds and correspond to aggregation periods of 1 day, 1 hour, and 5 minutes, respectively.
6. Using the resource ID, list the maximum measures associated with the cpu_util metric with 300
seconds granularity. The number of records returned in the output may vary.
[student@workstation ~(developer1-finance)]$ openstack metric measures show \
--resource-id a2b3bda7-1d9e-4ad0-99fe-b4f7774deda0 \
--aggregation max \
--granularity 300 \
cpu_util
+---------------------------+-------------+-----------------+
| timestamp | granularity | value |
+---------------------------+-------------+-----------------+
| 2017-05-23T05:45:00+00:00 | 300.0 | 0.0708371692841 |
| 2017-05-23T05:55:00+00:00 | 300.0 | 0.0891683788482 |
| 2017-05-23T06:05:00+00:00 | 300.0 | 0.0907790288644 |
| 2017-05-23T06:15:00+00:00 | 300.0 | 0.0850440360854 |
| 2017-05-23T06:25:00+00:00 | 300.0 | 0.0691660923575 |
| 2017-05-23T06:35:00+00:00 | 300.0 | 0.0858326136269 |
| 2017-05-23T06:45:00+00:00 | 300.0 | 0.0666668728895 |
| 2017-05-23T06:55:00+00:00 | 300.0 | 0.0658094259754 |
| 2017-05-23T07:05:00+00:00 | 300.0 | 0.108326315232 |
| 2017-05-23T07:15:00+00:00 | 300.0 | 0.066695508806 |
| 2017-05-23T07:25:00+00:00 | 300.0 | 0.0666670677802 |
| 2017-05-23T07:35:00+00:00 | 300.0 | 0.0666727313294 |
+---------------------------+-------------+-----------------+
7. List the average CPU utilization for all instances provisioned using the rhel7 image. Query for all
instances containing the word finance in the instance name.
7.1. List the attributes supported by the instance resource type. The command returns the attributes
that may be used to query this resource type.
[student@workstation ~(developer1-finance)]$ openstack metric resource-type show instance
+-------------------------+----------------------------------------------------+
| Field | Value |
+-------------------------+----------------------------------------------------+
| attributes/display_name | max_length=255, min_length=0, required=True, |
| | type=string |
| attributes/flavor_id | max_length=255, min_length=0, required=True, |
| | type=string |
| attributes/host | max_length=255, min_length=0, required=True, |
| | type=string |
| attributes/image_ref | max_length=255, min_length=0, required=False, |
| | type=string |
| attributes/server_group | max_length=255, min_length=0, required=False, |
| | type=string |
| name | instance |
| state | active |
+-------------------------+----------------------------------------------------+
7.2. Only users with the admin role can query measures using resource attributes. Use the architect1
user's Identity credentials to execute the command. The architect1 credentials are stored in the
/home/student/architect1-finance-rc file.
[student@workstation ~(developer1-finance)]$ source ~/architect1-finance-rc
[student@workstation ~(architect1-finance)]$
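The aggregation query itself (sub-step 7.3) is not reproduced here. A sketch using the Gnocchi aggregation command, where <rhel7-image-id> is a placeholder for the ID of the rhel7 image; the query string syntax may need adjusting to your gnocchi client version:
[student@workstation ~(architect1-finance)]$ openstack metric measures aggregation \
-m cpu_util \
--aggregation mean \
--resource-type instance \
--query "image_ref='<rhel7-image-id>' and display_name like '%finance%'"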
Cleanup
From workstation, run the lab monitoring-analyzing-metrics cleanup command to clean up this exercise.
[student@workstation ~]$ lab monitoring-analyzing-metrics cleanup
Steps
1. List all of the instance type telemetry resources accessible by the user operator1. Ensure the
production-rhel7 instance is available. Observe the resource ID of the instance.
Credentials for user operator1 are in /home/student/operator1-production-rc on workstation.
2. List all metrics associated with the production-rhel7 instance.
3. List the available archive policies. Verify that the cpu_util metric of the production-rhel7 instance uses the archive policy named low.
4. Add new measures to the cpu_util metric. Observe that the newly added measures are available using
min and max aggregation methods. Use the values from the following table. The measures must be
added using the architect1 user's credentials, because manipulating data points requires an account
with the admin role. Credentials of user architect1 are stored in /home/student/architect1-production-
rc file.
Measures Parameter
Timestamp Current time in ISO 8601 formatted timestamp
Measure values 30, 42
The measure values 30 and 42 are manual data values added to the cpu_util metric.
5. Create a threshold alarm named cputhreshold-alarm based on aggregation by resources. Set the alarm to trigger when the maximum CPU utilization for the production-rhel7 instance exceeds 50% for two consecutive 5-minute periods.
6. Simulate a high CPU utilization scenario by manually adding new measures to the cpu_util metric of the instance. Observe that the alarm triggers when the aggregated CPU utilization exceeds the 50% threshold through two evaluation periods of 5 minutes each. To simulate high CPU utilization, manually add a measure with a value of 80 once every minute until the alarm triggers. It is expected to take between 5 and 10 minutes to trigger.
Evaluation
On workstation, run the lab monitoring-review grade command to confirm success of this exercise.
Correct any reported failures and rerun the command until successful.
[student@workstation ~]$ lab monitoring-review grade
Cleanup
From workstation, run the lab monitoring-review cleanup command to clean up this exercise.
[student@workstation ~]$ lab monitoring-review cleanup
Solution
In this lab, you will analyze the Telemetry metric data and create an Aodh alarm. You will also set the
alarm to trigger when the maximum CPU utilization of an instance exceeds a threshold value.
Outcomes
You should be able to:
• Search and list the metrics available with the Telemetry service for a particular user.
• View the usage data collected for a metric.
• Check which archive policy is in use for a particular metric.
• Add new measures to a metric.
• Create an alarm based on aggregated usage data of a metric, and trigger it.
• View and analyze an alarm history.
Steps
1. List all of the instance type telemetry resources accessible by the user operator1. Ensure the
production-rhel7 instance is available. Observe the resource ID of the instance. Credentials for user
operator1 are in /home/student/operator1-production-rc on workstation.
1.1. From workstation, source the /home/student/operator1-production-rc file to use operator1 user
credentials. Find the ID associated with the user.
[student@workstation ~]$ source ~/operator1-production-rc
[student@workstation ~(operator1-production)]$ openstack user show operator1
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| enabled | True |
| id | 4301d0dfcbfb4c50a085d4e8ce7330f6 |
| name | operator1 |
| project_id | a8129485db844db898b8c8f45ddeb258 |
+------------+----------------------------------+
1.2. Use the retrieved user ID to search the resources accessible by the operator1 user. Filter the output
based on the instance resource type.
[student@workstation ~(operator1-production)]$ openstack metric resource \
search user_id=4301d0dfcbfb4c50a085d4e8ce7330f6 \
-c id -c type -c user_id --type instance -f json
[
{
"user_id": "4301d0dfcbfb4c50a085d4e8ce7330f6",
"type": "instance",
"id": "969b5215-61d0-47c4-aa3d-b9fc89fcd46c"
}
]
1.3. Observe that the ID of the resource in the previous output matches the instance ID of the
production-rhel7 instance. The production-rhel7 instance is available.
[student@workstation ~(operator1-production)]$ openstack server show production-rhel7 -c id -c name
-c status
+--------+--------------------------------------+
| Field | Value |
+--------+--------------------------------------+
| id | 969b5215-61d0-47c4-aa3d-b9fc89fcd46c |
| name | production-rhel7 |
| status | ACTIVE |
+--------+--------------------------------------+
The production-rhel7 instance resource ID matches the production-rhel7 instance ID. Note this resource
ID, as it will be used in upcoming lab steps.
2.1. Use the production-rhel7 instance resource ID to list the available metrics. Verify that the cpu_util
metric is listed.
[student@workstation ~(operator1-production)]$ openstack metric resource show 969b5215-61d0-
47c4-aa3d-b9fc89fcd46c --type instance
+--------------+---------------------------------------------------------------+
|Field | Value |
+--------------+---------------------------------------------------------------+
|id | 969b5215-61d0-47c4-aa3d-b9fc89fcd46c |
|image_ref | 280887fa-8ca4-43ab-b9b0-eea9bfc6174c |
|metrics | cpu.delta: a22f5165-0803-4578-9337-68c79e005c0f |
| | cpu: e410ce36-0dac-4503-8a94-323cf78e7b96 |
| | cpu_util: 6804b83c-aec0-46de-bed5-9cdfd72e9145 |
| | disk.allocation: 0610892e-9741-4ad5-ae97-ac153bb53aa8 |
| | disk.capacity: 0e0a5313-f603-4d75-a204-9c892806c404 |
| | disk.ephemeral.size: 2e3ed19b-fb02-44be-93b7-8f6c63041ac3 |
| | disk.iops: 83122c52-5687-4134-a831-93d80dba4b4f |
| | disk.latency: 11e2b022-b602-4c5a-b710-2acc1a82ea91 |
| | disk.read.bytes.rate: 3259c60d-0cb8-47d0-94f0-cded9f30beb2 |
| | disk.read.bytes: eefa65e9-0cbd-4194-bbcb-fdaf596a3337 |
| | disk.read.requests.rate: 36e0b15c-4f6c-4bda-bd03-64fcea8a4c70 |
| | disk.read.requests: 6f14131e-f15c-401c-9599-a5dbcc6d5f2e |
| | disk.root.size: 36f6f5c1-4900-48c1-a064-482d453a4ee7 |
| | disk.usage: 2e510e08-7820-4214-81ee-5647bdaf0db0 |
| | disk.write.bytes.rate: 059a529e-dcad-4439-afd1-7d199254bec9 |
| | disk.write.bytes: 68be5427-df81-4dac-8179-49ffbbad219e |
| | disk.write.requests.rate: 4f86c785-35ef-4d92-923f-b2a80e9dd14f|
| | disk.write.requests: 717ce076-c07b-4982-95ed-ba94a6993ce2 |
| | instance: a91c09e3-c9b9-4f9a-848b-785f9028b78a |
| | memory.resident: af7cd10e-6784-4970-98ff-49bf1e153992 |
| | memory.usage: 2b9c9c3f-05ce-4370-a101-736ca2683607 |
| | memory: dc4f5d14-1b55-4f44-a15c-48aac461e2bf |
| | vcpus: c1cc42a0-4674-44c2-ae6d-48df463a6586 |
|resource_id | 969b5215-61d0-47c4-aa3d-b9fc89fcd46c |
| ... output omitted... |
+--------------+---------------------------------------------------------------+
3. List the available archive policies. Verify that the cpu_util metric of the production-rhel7 instance uses the archive policy named low.
3.1. List the available archive policies and their supported aggregation methods.
[student@workstation ~(operator1-production)]$ openstack metric archive-policy list -c name -c
aggregation_methods
+--------+------------------------------------------------+
| name | aggregation_methods |
+--------+------------------------------------------------+
| high | std, count, 95pct, min, max, sum, median, mean |
| low | std, count, 95pct, min, max, sum, median, mean |
| medium | std, count, 95pct, min, max, sum, median, mean |
+--------+------------------------------------------------+
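Sub-step 3.2 is not shown. To view the definition of the low archive policy (its granularities and timespans), a command such as the following can be used:
[student@workstation ~(operator1-production)]$ openstack metric archive-policy show low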
3.3. Use the resource ID of the production-rhel7 instance to check which archive policy is in use for the
cpu_util metric.
[student@workstation ~(operator1-production)]$ openstack metric metric show --resource-id
969b5215-61d0-47c4-aa3d-b9fc89fcd46c -c archive_policy/name cpu_util
+---------------------+-------+
| Field | Value |
+---------------------+-------+
| archive_policy/name | low |
+---------------------+-------+
3.4. View the measures collected for the cpu_util metric associated with the production-rhel7 instance
to ensure that it uses granularities according to the definition of the low archive policy.
[student@workstation ~(operator1-production)]$ openstack metric measures show --resource-id
969b5215-61d0-47c4-aa3d-b9fc89fcd46c cpu_util
+---------------------------+-------------+----------------+
| timestamp | granularity | value |
+---------------------------+-------------+----------------+
| 2017-05-28T00:00:00+00:00 | 86400.0 | 0.838532808055 |
| 2017-05-28T15:00:00+00:00 | 3600.0 | 0.838532808055 |
| 2017-05-28T18:45:00+00:00 | 300.0 | 0.838532808055 |
+---------------------------+-------------+----------------+
4. Add new measures to the cpu_util metric. Observe that the newly added measures are available using
min and max aggregation methods. Use the values from the following table. The measures must be
added using the architect1 user's credentials, because manipulating data points requires an account
with the admin role. Credentials of user architect1 are stored in /home/student/architect1-production-
rc file.
Measures Parameter
Timestamp Current time in ISO 8601 formatted timestamp
Measure values 30, 42
The measure values 30 and 42 are manual data values added to the cpu_util metric.
4.1. Source architect1 user's credential file. Add 30 and 42 as new measure values.
[student@workstation ~(operator1-production)]$ source ~/architect1-production-rc
[student@workstation ~(architect1-production)]$ openstack metric measures add \
--resource-id 969b5215-61d0-47c4-aa3d-b9fc89fcd46c \
--measure $(date -u --iso=seconds)@30 cpu_util
[student@workstation ~(architect1-production)]$ openstack metric measures add \
--resource-id 969b5215-61d0-47c4-aa3d-b9fc89fcd46c \
--measure $(date -u --iso=seconds)@42 cpu_util
4.2. Verify that the new measures have been successfully added for the cpu_util metric. Force the
aggregation of all known measures. The default aggregation method is mean,
so you will see a value of 36 (the mean of 30 and 42). The number of records and their values returned
in the output may vary.
[student@workstation ~(architect1-production)]$ openstack metric measures show --resource-id
969b5215-61d0-47c4-aa3d-b9fc89fcd46c cpu_util --refresh
+---------------------------+-------------+----------------+
| timestamp | granularity | value |
+---------------------------+-------------+----------------+
| 2017-05-28T00:00:00+00:00 | 86400.0 | 15.419266404 |
| 2017-05-28T15:00:00+00:00 | 3600.0 | 15.419266404 |
| 2017-05-28T19:55:00+00:00 | 300.0 | 0.838532808055 |
| 2017-05-28T20:30:00+00:00 | 300.0 | 36.0 |
+---------------------------+-------------+----------------+
4.3. Display the maximum and minimum values for the cpu_util metric measure.
[student@workstation ~(architect1-production)]$ openstack metric measures show --resource-id
969b5215-61d0-47c4-aa3d-b9fc89fcd46c cpu_util --refresh --aggregation max
+---------------------------+-------------+----------------+
| timestamp | granularity | value |
+---------------------------+-------------+----------------+
| 2017-05-28T00:00:00+00:00 | 86400.0 | 42.0 |
| 2017-05-28T15:00:00+00:00 | 3600.0 | 42.0 |
| 2017-05-28T19:55:00+00:00 | 300.0 | 0.838532808055 |
| 2017-05-28T20:30:00+00:00 | 300.0 | 42.0 |
+---------------------------+-------------+----------------+
[student@workstation ~(architect1-production)]$ openstack metric measures show --resource-id
969b5215-61d0-47c4-aa3d-b9fc89fcd46c cpu_util --refresh --aggregation min
+---------------------------+-------------+----------------+
| timestamp | granularity | value |
+---------------------------+-------------+----------------+
| 2017-05-28T00:00:00+00:00 | 86400.0 | 0.838532808055 |
| 2017-05-28T15:00:00+00:00 | 3600.0 | 0.838532808055 |
| 2017-05-28T20:30:00+00:00 | 300.0 | 30.0 |
+---------------------------+-------------+----------------+
5. Create a threshold alarm named cputhreshold-alarm based on aggregation by resources. Set the alarm to trigger when the maximum CPU utilization for the production-rhel7 instance exceeds 50% for two consecutive 5-minute periods.
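Sub-step 5.1, the alarm creation itself, is not reproduced in this answer. A sketch with option values inferred from the alarm history shown later in this lab; verify the exact flags against your python-aodhclient version:
[student@workstation ~(architect1-production)]$ openstack alarm create \
--name cputhreshold-alarm \
--description "Alarm to monitor CPU utilization" \
--type gnocchi_aggregation_by_resources_threshold \
--metric cpu_util \
--resource-type instance \
--aggregation-method max \
--granularity 300 \
--evaluation-periods 2 \
--comparison-operator ge \
--threshold 50 \
--query '{"=": {"id": "969b5215-61d0-47c4-aa3d-b9fc89fcd46c"}}' \
--alarm-action 'log:/tmp/alarm.log'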
5.2. View the newly created alarm. Verify that the state of the alarm is either ok or insufficient data.
According to the alarm definition, data is insufficient until two evaluation periods have been recorded.
Continue with the next step if the state is ok or insufficient data.
[student@workstation ~(architect1-production)]$ openstack alarm list -c name -c state -c enabled
+--------------------+-------+---------+
| name | state | enabled |
+--------------------+-------+---------+
| cputhreshold-alarm | ok | True |
+--------------------+-------+---------+
6. Simulate a high CPU utilization scenario by manually adding new measures to the cpu_util metric of the instance. Observe that the alarm triggers when the aggregated CPU utilization exceeds the 50% threshold through two evaluation periods of 5 minutes each. To simulate high CPU utilization, manually add a measure with a value of 80 once every minute until the alarm triggers. It is expected to take between 5 and 10 minutes to trigger.
6.1. Open two terminal windows, either stacked vertically or side-by-side. The second terminal will be used in subsequent steps to add data points until the alarm triggers. In the first window, use the watch command to repeatedly display the alarm state.
[student@workstation ~(architect1-production)]$ watch openstack alarm list -c alarm_id -c name -c state
Every 2.0s: openstack alarm list -c alarm_id -c name -c state
+--------------------------------------+--------------------+-------+
| alarm_id | name | state |
+--------------------------------------+--------------------+-------+
| 82f0b4b6-5955-4acd-9d2e-2ae4811b8479 | cputhreshold-alarm | ok |
+--------------------------------------+--------------------+-------+
6.2. In the second terminal window, add new measures to the cpu_util metric of the production-rhel7 instance once every minute. A value of 80 simulates high CPU utilization, because the alarm is set to trigger at 50%.
[student@workstation ~(architect1-production)]$ openstack metric measures add --resource-id
969b5215-61d0-47c4-aa3d-b9fc89fcd46c --measure $(date -u --iso=seconds)@80 cpu_util
Repeat this command about once per minute. Be patient: the alarm only triggers after the evaluator detects a maximum value greater than 50 in two consecutive 5-minute evaluation periods, which is expected to take between 6 and 10 minutes. As long as you keep adding one measure per minute, the alarm will eventually trigger.
[student@workstation ~(architect1-production)]$ openstack metric measures add --resource-id
969b5215-61d0-47c4-aa3d-b9fc89fcd46c --measure $(date -u --iso=seconds)@80 cpu_util
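As an alternative to rerunning the command by hand, the measure can be added once per minute with watch; this wrapper is a suggestion, not part of the original lab. Because the command string is single-quoted, the $(date ...) substitution is re-evaluated by the shell on every iteration:
[student@workstation ~(architect1-production)]$ watch -n 60 'openstack metric measures add --resource-id 969b5215-61d0-47c4-aa3d-b9fc89fcd46c --measure $(date -u --iso=seconds)@80 cpu_util'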
Note
In a real-world environment, measures are collected automatically using various polling and notification
agents. Manually adding data point measures for a metric is only for alarm configuration testing
purposes.
6.3. The alarm-evaluator service detects the newly added measures. Within the expected 6 to 10 minutes, the alarm changes state to alarm in the first terminal window. Stop adding new measures as soon as the alarm state appears, and observe the new alarm state. The alarm state will transition back to ok after one more evaluation period, because high CPU utilization values are no longer being received. Press Ctrl+C to stop the watch.
Every 2.0s: openstack alarm list -c alarm_id -c name -c state
+--------------------------------------+--------------------+-------+
| alarm_id | name | state |
+--------------------------------------+--------------------+-------+
| 82f0b4b6-5955-4acd-9d2e-2ae4811b8479 | cputhreshold-alarm | alarm |
+--------------------------------------+--------------------+-------+
6.4. After stopping the watch and closing the second terminal, view the alarm history to analyze when
the alarm transitioned from the ok state to the alarm state. The output
may look similar to the lines displayed below.
[student@workstation ~(architect1-production)]$ openstack alarm-history show 82f0b4b6-5955-4acd-
9d2e-2ae4811b8479 -c timestamp -c type -c detail -f json
[
{
"timestamp": "2017-06-08T14:05:53.477088",
"type": "state transition",
"detail": "{\"transition_reason\": \"Transition to alarm due to 2 samples
outside threshold, most recent: 70.0\", \"state\": \"alarm\"}"
},
{
"timestamp": "2017-06-08T13:18:53.356979",
"type": "state transition",
"detail": "{\"transition_reason\": \"Transition to ok due to 2 samples
inside threshold, most recent: 0.579456043152\", \"state\": \"ok\"}"
},
{
"timestamp": "2017-06-08T13:15:53.338924",
"type": "state transition",
"detail": "{\"transition_reason\": \"2 datapoints are unknown\", \"state\":
\"insufficient data\"}"
},
{
"timestamp": "2017-06-08T13:11:51.328482",
"type": "creation",
"detail": "{\"alarm_actions\": [\"log:/tmp/alarm.log\"], \"user_id\":
\"b5494d9c68eb4938b024c911d75f7fa7\", \"name\": \"cputhreshold-alarm\",
\"state\": \"insufficient data\", \"timestamp\": \"2017-06-08T13:11:51.328482\",
\"description\": \"Alarm to monitor CPU utilization\", \"enabled\":
true, \"state_timestamp\": \"2017-06-08T13:11:51.328482\", \"rule\":
{\"evaluation_periods\": 2, \"metric\": \"cpu_util\", \"aggregation_method\":
\"max\", \"granularity\": 300,
\"threshold\": 50.0, \"query\": \"{\\\"=\\\": {\\\"id\\\": \\
\"969b5215-61d0-47c4-aa3d-b9fc89fcd46c\\\"}}\", \"comparison_operator
\": \"ge\", \"resource_type\": \"instance\"},\"alarm_id\":
\"82f0b4b6-5955-4acd-9d2e-2ae4811b8479\", \"time_constraints\": [], \
"insufficient_data_actions\": [], \"repeat_actions\": false, \"ok_actions
\": [], \"project_id\": \"4edf4dd1e80c4e3b99c0ba797b3f3ed8\", \"type\":
\"gnocchi_aggregation_by_resources_threshold\", \"severity\": \"low\"}"
Evaluation
On workstation, run the lab monitoring-review grade command to confirm success of this exercise.
Correct any reported failures and rerun the command until successful.
[student@workstation ~]$ lab monitoring-review grade
Cleanup
From workstation, run the lab monitoring-review cleanup command to clean up this exercise.
[student@workstation ~]$ lab monitoring-review cleanup
Steps
1. On workstation, create a directory named /home/student/heat-templates. The /home/student/heat-
templates directory will store downloaded template files and environment files used for orchestration.
[student@workstation ~]$ mkdir ~/heat-templates
2. When you edit YAML files, you must use spaces, not tab characters, for indentation. If you use vi for text editing, add a setting to the .vimrc file that enables auto-indentation and sets the tab stop and shift width to two spaces for YAML files. Create the /home/student/.vimrc file with the following content:
autocmd FileType yaml setlocal ai ts=2 sw=2 et
The $public_ip variable is the floating IP address of the instance, and the $private_ip variable is the private IP address of the instance. You will define these variables in the template.
• The orchestration stack must retry the user data script once if needed. The script must signal success when it executes successfully, and must signal failure if it does not complete within the 600 second timeout.
3.1. Change to the /home/student/heat-templates directory. Download the orchestration template file
from http://materials.example.com/heat/finance-app1.yaml in the /home/student/heat-templates
directory.
[student@workstation ~]$ cd ~/heat-templates
[student@workstation heat-templates]$ wget http://materials.example.com/heat/finance-app1.yaml
3.2. Use the user_data property to define the user data script to install the httpd package. The httpd
service must be started and enabled to start at boot time. The
user_data_format property for the OS::Nova::Server resource type must be set to RAW.
Edit the /home/student/heat-templates/finance-app1.yaml file, as shown:
web_server:
type: OS::Nova::Server
properties:
name: { get_param: instance_name }
image: { get_param: image_name }
flavor: { get_param: instance_flavor }
key_name: { get_param: key_name }
networks:
- port: { get_resource: web_net_port }
user_data_format: RAW
user_data:
str_replace:
template: |
#!/bin/bash
yum -y install httpd
systemctl restart httpd.service
systemctl enable httpd.service
3.3. In the user_data property, create a web page with the following content:
<h1>You are connected to $public_ip</h1>
<h2>The private IP address is: $private_ip</h2>
Red Hat Training
The web page uses the $public_ip and the $private_ip variables passed as parameters. These
parameters are defined using the params property of the str_replace intrinsic function.
The $private_ip variable uses the web_net_port resource attribute fixed_ips to retrieve the first IP
address associated with the network interface. The $public_ip variable uses the web_floating_ip
resource attribute floating_ip_address to set the public IP address associated with the instance.
Edit the /home/student/heat-templates/finance-app1.yaml file, as shown:
web_server:
type: OS::Nova::Server
properties:
name: { get_param: instance_name }
image: { get_param: image_name }
flavor: { get_param: instance_flavor }
key_name: { get_param: key_name }
networks:
- port: { get_resource: web_net_port }
user_data_format: RAW
user_data:
str_replace:
template: |
#!/bin/bash
yum -y install httpd
systemctl restart httpd.service
systemctl enable httpd.service
sudo touch /var/www/html/index.html
sudo cat << EOF > /var/www/html/index.html
<h1>You are connected to $public_ip</h1>
<h2>The private IP address is:$private_ip</h2>
Red Hat Training
EOF
params:
$private_ip: {get_attr: [web_net_port,fixed_ips,0,ip_address]}
$public_ip: {get_attr: [web_floating_ip,floating_ip_address]}
3.4. Use the wait condition resources to send a signal about the status of the user data script. The $wc_notify variable is set to the wait handle URL using the curl_cli attribute of the wait_handle resource. The script uses $wc_notify to return a status of SUCCESS if the web page it deploys is accessible and returns an HTTP 200 status code; when the WaitConditionHandle resource receives the SUCCESS signal, the resource state is marked CREATE_COMPLETE. The script returns FAILURE if the web page is not accessible or if the wait condition times out after 600 seconds; in that case the resource state is marked CREATE_FAILED.
Edit the /home/student/heat-templates/finance-app1.yaml file, as shown:
web_server:
type: OS::Nova::Server
properties:
name: { get_param: instance_name }
image: { get_param: image_name }
flavor: { get_param: instance_flavor }
key_name: { get_param: key_name }
networks:
- port: { get_resource: web_net_port }
user_data_format: RAW
user_data:
str_replace:
template: |
#!/bin/bash
yum -y install httpd
systemctl restart httpd.service
systemctl enable httpd.service
sudo touch /var/www/html/index.html
sudo cat << EOF > /var/www/html/index.html
<h1>You are connected to $public_ip</h1>
<h2>The private IP address is:$private_ip</h2>
Red Hat Training
EOF
export response=$(curl -s -k \
--output /dev/null \
--write-out %{http_code} http://$public_ip/)
[[ ${response} -eq 200 ]] && $wc_notify \
--data-binary '{"status": "SUCCESS"}' \
|| $wc_notify --data-binary '{"status": "FAILURE"}'
params:
$private_ip: {get_attr: [web_net_port,fixed_ips,0,ip_address]}
$public_ip: {get_attr: [web_floating_ip,floating_ip_address]}
$wc_notify: {get_attr: [wait_handle,curl_cli]}
Save and exit the file.
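The environment.yaml file referenced in the following steps is created in a part of the lab not shown here. A minimal sketch of what it might contain; the values are placeholders, and the parameter names must match the parameters section of finance-app1.yaml, including any network-related parameters the template defines:
[student@workstation heat-templates]$ cat > environment.yaml << 'EOF'
parameters:
  image_name: <image>
  instance_name: <instance-name>
  instance_flavor: <flavor>
  key_name: <key-pair>
EOF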
5.1. Use the developer1 user credentials to dry-run the stack and check the resources that will be created when the stack is launched. Rectify all errors before proceeding to the next step to launch the stack. Use the finance-app1.yaml template file and the environment.yaml environment file. Name the stack finance-app1.
Note
Before running the dry run of the stack, download the http://materials.example.com/heat/finance-app1.yaml-final template file to the /home/student/heat-templates directory. Use the diff command to compare your edited finance-app1.yaml template file with the known good template file, finance-app1.yaml-final. Fix any differences you find, then proceed to launch the stack.
[student@workstation heat-templates]$ wget http://materials.example.com/heat/finance-app1.yaml-
final
[student@workstation heat-templates]$ diff finance-app1.yaml finance-app1.yaml-final
[student@workstation heat-templates]$ source ~/developer1-finance-rc
[student@workstation heat-templates(developer1-finance)]$ openstack stack \
create \
--environment environment.yaml \
--template finance-app1.yaml \
--dry-run -c description \
finance-app1
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| description | spawning a custom web server |
+---------------------+--------------------------------------+
5.2. Launch the stack using the finance-app1.yaml template file and the environment.yaml environment file. Name the stack finance-app1.
If the dry run is successful, run the openstack stack create command with the --enable-rollback option. Do not use the --dry-run option while launching the stack.
[student@workstation heat-templates(developer1-finance)]$ openstack stack \
create \
--environment environment.yaml \
--template finance-app1.yaml \
--enable-rollback \
--wait \
finance-app1
[finance-app1]: CREATE_IN_PROGRESS Stack CREATE started
[finance-app1.wait_handle]: CREATE_IN_PROGRESS state changed
[finance-app1.web_security_group]: CREATE_IN_PROGRESS state changed
[finance-app1.wait_handle]: CREATE_COMPLETE state changed
[finance-app1.web_security_group]: CREATE_COMPLETE state changed
[finance-app1.wait_condition]: CREATE_IN_PROGRESS state changed
[finance-app1.web_net_port]: CREATE_IN_PROGRESS state changed
[finance-app1.web_net_port]: CREATE_COMPLETE state changed
[finance-app1.web_floating_ip]: CREATE_IN_PROGRESS state changed
[finance-app1.web_floating_ip]: CREATE_COMPLETE state changed
[finance-app1.web_server]: CREATE_IN_PROGRESS state changed
[finance-app1.web_server]: CREATE_COMPLETE state changed
[finance-app1.wait_handle]: SIGNAL_COMPLETE Signal: status:SUCCESS
reason:Signal 1 received
[finance-app1.wait_condition]: CREATE_COMPLETE state changed
[finance-app1]: CREATE_COMPLETE Stack CREATE completed successfully
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| id | 23883f81-19b0-4446-a2b8-7f261958a0f1 |
| stack_name | finance-app1 |
| description | spawning a custom web server |
| creation_time | 2017-06-01T08:04:29Z |
| updated_time | None |
| stack_status | CREATE_COMPLETE |
| stack_status_reason | Stack CREATE completed successfully |
+---------------------+--------------------------------------+
5.3. List the output returned by the finance-app1 stack. Check the website_url output value.
[student@workstation heat-templates(developer1-finance)]$ openstack stack output list finance-app1
+----------------+------------------------------------------------------+
| output_key | description |
+----------------+------------------------------------------------------+
| web_private_ip | IP address of first web server in private network |
| web_public_ip | Floating IP address of the web server |
| website_url | This URL is the "external" URL that |
| | can be used to access the web server. |
|||
+----------------+------------------------------------------------------+
[student@workstation heat-templates(developer1-finance)]$ openstack stack output show finance-
app1 website_url
+--------------+--------------------------------------------+
| Field | Value |
+--------------+--------------------------------------------+
| description | This URL is the "external" URL that can be |
| | used to access the web server. |
|||
| output_key | website_url |
| output_value | http://172.25.250.N/ |
+--------------+--------------------------------------------+
5.4. Verify that the instance was provisioned and the user data was executed successfully on the
instance. Use the curl command to access the URL returned as the value for the website_url output.
[student@workstation heat-templates(developer1-finance)]$ curl http://172.25.250.N/
<h1>You are connected to 172.25.250.N</h1>
<h2>The private IP address is:192.168.1.P</h2>
Red Hat Training
In the previous output, the N represents the last octet of the floating IP address associated with the
instance. The P represents the last octet of the private IP address associated with the instance.
7. Use the OS::Heat::ResourceGroup resource type to provision identical resources. The stack must orchestrate a maximum of two such resources. The main stack must call /home/student/heat-templates/finance-app1.yaml to provision the resource defined in that file.
Download the orchestration template from http://materials.example.com/heat/nested-stack.yaml to the /home/student/heat-templates directory, and then edit it.
Note
Before running the dry run of the stack, download the http://materials.example.com/heat/nested-stack.yaml-final template file to the /home/student/heat-templates directory. Use the diff command to compare your edited nested-stack.yaml template file with the known good template file, nested-stack.yaml-final. Fix any differences you find, then proceed to launch the stack.
[student@workstation heat-templates]$ wget http://materials.example.com/heat/nested-stack.yaml-
final
[student@workstation heat-templates]$ diff nested-stack.yaml nested-stack.yaml-final
[student@workstation heat-templates(developer1-finance)]$ openstack stack \
create \
--environment environment.yaml \
--template nested-stack.yaml \
--dry-run \
finance-app2
7.8. Launch the stack using the nested-stack.yaml template file and the environment.yaml environment file. Name the stack finance-app2. If the dry run succeeds, run the openstack stack create command with the --enable-rollback option. Do not use the --dry-run option while launching the stack.
[student@workstation heat-templates(developer1-finance)]$ openstack stack \
create \
--environment environment.yaml \
--template nested-stack.yaml \
--enable-rollback \
--wait \
finance-app2
2017-06-01 08:48:03Z [finance-app2]: CREATE_IN_PROGRESS Stack CREATE started
2017-06-01 08:48:03Z [finance-app2.my_resource]: CREATE_IN_PROGRESS state changed
2017-06-01 08:51:10Z [finance-app2.my_resource]: CREATE_COMPLETE state changed
2017-06-01 08:51:10Z [finance-app2]: CREATE_COMPLETE Stack CREATE completed
successfully
+---------------------+------------------------------------------------------+
| Field | Value |
+---------------------+------------------------------------------------------+
| id | dbb32889-c565-495c-971e-8f27b4e35588 |
| stack_name | finance-app2 |
| description | Using ResourceGroup to scale out the custom instance |
| creation_time | 2017-06-01T08:48:02Z |
| updated_time | None |
| stack_status | CREATE_COMPLETE |
| stack_status_reason | Stack CREATE completed successfully |
+---------------------+------------------------------------------------------+
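To confirm that the ResourceGroup created the expected nested resources, you can list the stack resources; the --nested-depth option is assumed to be available in your heat client version:
[student@workstation heat-templates(developer1-finance)]$ openstack stack resource list --nested-depth 2 finance-app2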
8.1. Download the template and the environment files to the /home/student/heat-templates directory.
[student@workstation heat-templates(developer1-finance)]$ wget
http://materials.example.com/heat/ts-stack.yaml
[student@workstation heat-templates(developer1-finance)]$ wget
http://materials.example.com/heat/ts-environment.yaml
8.2. Verify that the Heat template does not contain any errors. Use the developer1 user credentials to
dry run the stack and check for any errors.
Name the stack finance-app3. Use the ts-stack.yaml template and the ts-environment.yaml environment file.
The finance-app3 stack dry run returns the following error:
[student@workstation heat-templates(developer1-finance)]$ openstack stack \
create \
--environment ts-environment.yaml \
--template ts-stack.yaml \
--dry-run \
finance-app3
Error parsing template file:///home/student/heat-templates/ts-stack.yaml while
parsing a block mapping
in "<unicode string>", line 58, column 5:
type: OS::Nova::Server
^
expected <block end>, but found '<block mapping start>'
in "<unicode string>", line 61, column 7:
image: { get_param: image_name }
8.3. Fix the indentation error for the name property of the OS::Nova::Server resource type.
web_server:
type: OS::Nova::Server
properties:
name: { get_param: instance_name }
8.4. Verify the indentation fix by running the dry run of the finance-app3 stack again.
The finance-app3 stack dry run returns the following error:
[student@workstation heat-templates(developer1-finance)]$ openstack stack \
create \
--environment ts-environment.yaml \
--template ts-stack.yaml \
--dry-run \
finance-app3
ERROR: Parameter 'key_name' is invalid:
Error validating value 'finance-keypair1':
The Key (finance-keypair1) could not be found.
8.5. Resolve the error: the key pair specified in the ts-environment.yaml file does not exist. Check which key pair does exist.
[student@workstation heat-templates(developer1-finance)]$ openstack keypair list
+---------------------+-------------------------------------------------+
| Name | Fingerprint |
+---------------------+-------------------------------------------------+
| developer1-keypair1 | e3:f0:de:43:36:7e:e9:a4:ee:04:59:80:8b:71:48:dc |
+---------------------+-------------------------------------------------+
Edit the /home/student/heat-templates/ts-environment.yaml file. Enter the correct key pair name,
developer1-keypair1.
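One way to make this edit non-interactively (a suggestion, not required by the lab):
[student@workstation heat-templates(developer1-finance)]$ sed -i 's/finance-keypair1/developer1-keypair1/' ts-environment.yaml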
8.6. Verify the key pair name fix in the /home/student/heat-templates/ts-environment.yaml file. The finance-app3 stack dry run must not return any errors.
[student@workstation heat-templates(developer1-finance)]$ openstack stack \
create \
--environment ts-environment.yaml \
--template ts-stack.yaml \
--dry-run \
finance-app3
8.7. Launch the stack using the ts-stack.yaml template file and the ts-environment.yaml environment file. Name the stack finance-app3. If the dry run succeeds, run the openstack stack create command with the --enable-rollback option. Do not use the --dry-run option while launching the stack.
[student@workstation heat-templates(developer1-finance)]$ openstack stack \
create \
--environment ts-environment.yaml \
--template ts-stack.yaml \
--enable-rollback \
--wait \
finance-app3
[finance-app3]: CREATE_IN_PROGRESS Stack CREATE started
[finance-app3.wait_handle]: CREATE_IN_PROGRESS state changed
[finance-app3.web_security_group]: CREATE_IN_PROGRESS state changed
[finance-app3.web_security_group]: CREATE_COMPLETE state changed
[finance-app3.wait_handle]: CREATE_COMPLETE state changed
[finance-app3.wait_condition]: CREATE_IN_PROGRESS state changed
[finance-app3.web_net_port]: CREATE_IN_PROGRESS state changed
[finance-app3.web_net_port]: CREATE_COMPLETE state changed
[finance-app3.web_floating_ip]: CREATE_IN_PROGRESS state changed
[finance-app3.web_server]: CREATE_IN_PROGRESS state changed
[finance-app3.web_floating_ip]: CREATE_COMPLETE state changed
[finance-app3.web_server]: CREATE_COMPLETE state changed
[finance-app3.wait_handle]: SIGNAL_COMPLETE Signal: status:SUCCESS
reason:Signal 1 received
[finance-app3.wait_condition]: CREATE_COMPLETE state changed
[finance-app3]: CREATE_COMPLETE Stack CREATE completed successfully
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| id | 839ab589-1ded-46b2-8987-3fe18e5e823b |
| stack_name | finance-app3 |
| description | spawning a custom server |
| creation_time | 2017-06-01T12:08:22Z |
| updated_time | None |
| stack_status | CREATE_COMPLETE |
| stack_status_reason | Stack CREATE completed successfully |
+---------------------+--------------------------------------+
Cleanup
From workstation, run the lab orchestration-heat-templates cleanup command to clean up this exercise.
[student@workstation ~]$ lab orchestration-heat-templates cleanup