Killer Shell - Exam Simulators
https://killer.sh
• terminal (default)
• web-srv1
• app-srv1
• data-001
• data-002
2. The server on which to solve a question is mentioned in the question text. If no server is mentioned, you'll need to create your solution
on the default terminal
3. If you're asked to create solution files at /opt/course/* , then do so on your main terminal
4. You can connect to each server using ssh, like ssh web-srv1
5. All server addresses are configured in /etc/hosts on each server
6. Nested ssh is not possible: you can only connect to each server from your main terminal
7. It's not possible to restart single servers. If deeper issues or misconfigurations occur then the only solution might be to restart the
complete simulator. This is possible using the top menu by selecting "Restart Session"
8. This simulator might not contain all LFCS exam topics. Attendees are still required to learn and study the complete curriculum
Please let us know on Support or Slack if you find any issues, mistakes, spelling errors, unclear explanations or improvement
suggestions etc. We appreciate any feedback that can help improve this simulator!
Task weight: 2%
If no server is mentioned in the question text, you'll need to create your solution on the default terminal
# /opt/course/1/kernel
5.15.0-69-generic
# /opt/course/1/ip_forward
1
# /opt/course/1/timezone
UTC
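One possible way to collect these values (a sketch; the ip_forward value could also be read directly from /proc/sys/net/ipv4/ip_forward and the timezone from the timedatectl output):
➜ uname -r > /opt/course/1/kernel
➜ sysctl -n net.ipv4.ip_forward > /opt/course/1/ip_forward
➜ timedatectl show --property=Timezone --value > /opt/course/1/timezone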
Task weight: 3%
On server data-001 , user asset-manager is responsible for timed operations on existing data. Some changes and additions are necessary.
Currently there is one system-wide cronjob configured that runs every day at 8:30pm. Convert it from being a system-wide cronjob to one
owned and executed by user asset-manager . This means that the user should see it when running crontab -l .
Create a new cronjob owned and executed by user asset-manager that runs bash /home/asset-manager/clean.sh every week on Monday
and Thursday at 11:15am.
Here we should move a cronjob from system-wide to user asset-manager . First we check out that cronjob:
➜ ssh data-001
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
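The entry in question likely looks similar to this (the user column, shown here as root , is an assumption since system-wide cronjobs specify the executing user):
30 20 * * * root bash /home/asset-manager/generate.sh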
We go ahead and cut this line from that file, or copy it and remove it later! Next we're going to add it to the user's cronjobs:
➜ root@data-001:~$ su asset-manager
➜ asset-manager@data-001:/root$ cd
➜ asset-manager@data-001:~$ pwd
/home/asset-manager
The system-wide cronjobs in /etc/crontab always specify the user that executes the command. In the user's own crontab that column is no longer necessary.
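The entry (without the user column) can then be added, for example like this:
➜ asset-manager@data-001:~$ crontab -e
# add the line: 30 20 * * * bash /home/asset-manager/generate.sh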
➜ asset-manager@data-001:~$ crontab -l
# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
30 20 * * * bash /home/asset-manager/generate.sh
For the next step we should add a new cronjob. We can just copy and then change the existing one to our needs:
➜ asset-manager@data-001:~$ crontab -e
➜ asset-manager@data-001:~$ crontab -l
...
30 20 * * * bash /home/asset-manager/generate.sh
15 11 * * 1,4 bash /home/asset-manager/clean.sh
For guidance check the comments in /etc/crontab , they're really useful. Instead of numbers for the days we can also use the actual names
of days:
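For example the new entry could equivalently be written like this (a sketch using day names):
15 11 * * mon,thu bash /home/asset-manager/clean.sh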
The files for user crontabs are stored at location /var/spool/cron/crontabs and user root can access those.
Task weight: 3%
➜ timedatectl
Local time: Sun 2023-06-11 16:29:05 UTC
Universal time: Sun 2023-06-11 16:29:05 UTC
RTC time: Sun 2023-06-11 16:29:05
Time zone: UTC (UTC, +0000)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
Here we see for example the current local time and timezone. Let's open the configuration file /etc/systemd/timesyncd.conf:
[Time]
NTP=0.de.pool.ntp.org 1.de.pool.ntp.org 2.de.pool.ntp.org
We see three German NTP servers currently configured via the setting NTP .
Above we see one successful request and one to www.google.de that failed. This is correct because the Google web domain doesn't provide an
NTP service.
: Main servers
[Time]
NTP=0.pool.ntp.org 1.pool.ntp.org
: Fallback servers
Often various settings are already included in the timesyncd.conf but commented out. Here it seems that we have to work with a pretty
clean file. Hence we can use man timesyncd.conf for help:
[Time]
NTP=0.pool.ntp.org 1.pool.ntp.org
FallbackNTP=ntp.ubuntu.com 0.debian.pool.ntp.org
: Remaining settings
[Time]
NTP=0.pool.ntp.org 1.pool.ntp.org
FallbackNTP=ntp.ubuntu.com 0.debian.pool.ntp.org
PollIntervalMaxSec=1000
ConnectionRetrySec=20
: Restart service
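After changing the configuration the service needs to be restarted, likely something like:
➜ sudo systemctl restart systemd-timesyncd
➜ systemctl status systemd-timesyncd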
Status output looking good. In the logs above we can see which NTP server was used for synchronisation. We could also check the logs with:
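➜ sudo journalctl -u systemd-timesyncd # one possible way to view the service logs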
Server 0.pool.ntp.org was used here which means our configuration change worked.
Task weight: 3%
There is an existing env variable for user student@terminal : VARIABLE1=random-string , defined in file .bashrc . Create a new script under
/opt/course/4/script.sh which:
1. Defines a new env variable VARIABLE2 with content v2 , only available in the script itself
2. Outputs the content of the env variable VARIABLE2
3. Defines a new env variable VARIABLE3 with content ${VARIABLE1}-extended , available in the script itself and all child processes of the
shell as well
4. Outputs the content of the env variable VARIABLE3
Do not alter the .bashrc file, everything needs to be done in the script itself
Well, let's check the existing variable and its content as mentioned:
➜ echo $VARIABLE1
random-string
How is the variable's value defined? Let's check the .bashrc file:
➜ cat .bashrc | grep VARIABLE1
export VARIABLE1=random-string
Now let's create a script which will define a new env variable called VARIABLE2 with content v2 :
➜ vim /opt/course/4/script.sh
VARIABLE2="v2"
echo $VARIABLE2
We give it a try, it should output the variable content, but shouldn't make it available (export) afterwards:
➜ bash /opt/course/4/script.sh
v2
Finally, we define the third environment variable called VARIABLE3 within the same script
VARIABLE2="v2"
echo $VARIABLE2
export VARIABLE3="${VARIABLE1}-extended" # add
echo $VARIABLE3 # add
➜ sh /opt/course/4/script.sh
v2
random-string-extended
What's the difference between export and not using export? Let's demonstrate it:
➜ TEST1=test1
➜ export TEST2=test2
➜ echo $TEST1
test1
➜ echo $TEST2
test2
➜ bash
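In the new child shell the expected behaviour is (a sketch):
➜ echo $TEST1 # prints nothing, TEST1 was not exported to the child shell
➜ echo $TEST2
test2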
Task weight: 6%
There is archive /imports/import001.tar.bz2 on server data-001 . You're asked to create a new gzip compressed archive with its raw
contents. Make sure the original archive will remain untouched.
Store the new archive under /imports/import001.tar.gz . Compression should be the best possible, using gzip.
To make sure both archives contain the same files, write a list of their sorted contents into /imports/import001.tar.bz2_list and
/imports/import001.tar.gz_list .
➜ ssh data-001
➜ root@data-001:~$ cd /imports
➜ root@data-001:/imports$ ls -lh
total 1.5K
-rw-r--r-- 1 root root 560 Jul 16 13:49 import001.tar.bz2
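One way to obtain the raw tar layer is to decompress the bzip2 archive while keeping the original, for example (a sketch):
➜ root@data-001:/imports$ bunzip2 -k import001.tar.bz2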
➜ root@data-001:/imports$ ls -lh
total 3.0K
-rw-r--r-- 1 root root 20K Jul 16 14:20 import001.tar # combination of all files without compression
-rw-r--r-- 1 root root 550 Jul 16 14:20 import001.tar.bz2
Every tar archive contains a "tar" data layer. This can then be further compressed with various compression algorithms. Here we can now go
ahead and create a new gzip compression from the tar layer:
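For example (a sketch; -9 requests the best possible compression and gzip removes the input .tar by default):
➜ root@data-001:/imports$ gzip -9 import001.tar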
➜ root@data-001:/imports$ ls -lh
total 2.0K
-rw-r--r-- 1 root root 550 Jul 16 14:20 import001.tar.bz2
-rw-r--r-- 1 root root 544 Jul 16 14:20 import001.tar.gz
We should see:
➜ root@data-001:/imports$ ls -lh
total 4.5K
drwxr-xr-x 7 ubuntu ubuntu 7 Jul 16 13:32 import001
-rw-r--r-- 1 root root 550 Jul 16 13:57 import001.tar.bz2
-rw-r--r-- 1 root root 531 Jul 16 14:00 import001.tar.gz
We ensure that both archives contain the same files and structure:
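A possible way to write the sorted content lists (a sketch; tar -tf lists the archive contents without extracting):
➜ root@data-001:/imports$ tar -tf import001.tar.bz2 | sort > import001.tar.bz2_list
➜ root@data-001:/imports$ tar -tf import001.tar.gz | sort > import001.tar.gz_list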
To compare further we could use cat import001.tar.bz2_list | sha512sum and compare the hashes.
➜ root@data-001:/imports$ ls -lha
total 6.0K
drwxr-xr-x 2 root root 6 Jul 16 14:07 .
drwxr-xr-x 3 root root 3 Jul 16 13:28 ..
-rw-r--r-- 1 root root 550 Jul 16 13:57 import001.tar.bz2
-rw-r--r-- 1 root root 1.2K Jul 16 14:07 import001.tar.bz2_list
-rw-r--r-- 1 root root 531 Jul 16 14:00 import001.tar.gz
-rw-r--r-- 1 root root 1.2K Jul 16 14:07 import001.tar.gz_list
Task weight: 5%
On server app-srv1 :
1. Change the primary group of user user1 to dev and the home directory to /home/accounts/user1
2. Add a new user user2 with groups dev and op , home directory /home/accounts/user2 , terminal /bin/bash
3. User user2 should be able to execute sudo bash /root/dangerous.sh without having to enter the root password
➜ ssh app-srv1
root:x:0:0:root:/root:/bin/bash
...
lxd:x:999:100::/var/snap/lxd/common/lxd:/bin/false
ntp:x:113:121::/nonexistent:/usr/sbin/nologin
user1:x:1001:1001::/home/accounts/user1:/bin/bash # change path
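Instead of editing /etc/passwd directly, the same changes could also be made with usermod, for example (a sketch; -g sets the primary group, -m moves the existing home content):
➜ root@app-srv1:~$ usermod -g dev -d /home/accounts/user1 -m user1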
➜ root@app-srv1:~$ su user1
➜ user1@app-srv1:/root$ cd
➜ user1@app-srv1:~$ pwd
/home/accounts/user1
➜ root@app-srv1:~$ useradd -h
Usage: useradd [options] LOGIN
useradd -D
useradd -D [options]
Options:
--badnames do not check for bad names
-b, --base-dir BASE_DIR base directory for the home directory of the
new account
--btrfs-subvolume-home use BTRFS subvolume for home directory
-c, --comment COMMENT GECOS field of the new account
-d, --home-dir HOME_DIR home directory of the new account # useful
-D, --defaults print or change default useradd configuration
-e, --expiredate EXPIRE_DATE expiration date of the new account
-f, --inactive INACTIVE password inactivity period of the new account
-g, --gid GROUP name or ID of the primary group of the new
account
-G, --groups GROUPS list of supplementary groups of the new # useful
account
-h, --help display this help message and exit
-k, --skel SKEL_DIR use this alternative skeleton directory
-K, --key KEY=VALUE override /etc/login.defs defaults
-l, --no-log-init do not add the user to the lastlog and
faillog databases
-m, --create-home create the users home directory # useful
-M, --no-create-home do not create the user's home directory
-N, --no-user-group do not create a group with the same name as
the user
-o, --non-unique allow to create users with duplicate
(non-unique) UID
-p, --password PASSWORD encrypted password of the new account
-r, --system create a system account
-R, --root CHROOT_DIR directory to chroot into
-P, --prefix PREFIX_DIR prefix directory where are located the /etc/* files
-s, --shell SHELL login shell of the new account # useful
-u, --uid UID user ID of the new account
-U, --user-group create a group with the same name as the user
-Z, --selinux-user SEUSER use a specific SEUSER for the SELinux user mapping
--extrausers Use the extra users database
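Based on that help output, a possible invocation for the new user is (a sketch; the task's groups dev and op are added as supplementary groups here):
➜ root@app-srv1:~$ useradd -m -d /home/accounts/user2 -G dev,op -s /bin/bash user2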
Now it's getting interesting. We can try to execute the script with current configuration:
➜ root@app-srv1:~$ su user2
➜ user2@app-srv1:/root$ cd
We need to configure sudoers to allow this specific script call. We should always edit the /etc/sudoers file using the command visudo ,
because it performs proper syntax validation before saving the file. Any misconfiguration of that file could lock us out of the system for good.
So we do as root:
➜ root@app-srv1:~$ visudo
...
@includedir /etc/sudoers.d
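A possible line to add for the required permission (a sketch; the bash path may be /bin/bash or /usr/bin/bash depending on the system):
user2 ALL=(ALL) NOPASSWD: /usr/bin/bash /root/dangerous.sh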
You can exit via Ctrl + X , then Y and then Enter to save.
And to verify:
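For example (a sketch; the script's actual output will differ):
➜ root@app-srv1:~$ su user2
➜ user2@app-srv1:/root$ sudo bash /root/dangerous.sh # should run without a password prompt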
Server data-002 is used for big data and provides internally used apis for various data operations. You're asked to implement network packet
filters on interface eth0 on data-002 :
In case of misconfiguration you can still access the instance using sudo lxc exec data-002 bash
➜ curl data-002:5000
app on port 5000
➜ curl data-002:6000
curl: (7) Failed to connect to data-002 port 6000 after 4 ms: Connection refused
➜ curl data-002:6001
app on port 6001
➜ curl data-002:6002
app on port 6002
Further we can check for existing iptables rules and interfaces, because we're asked to implement the filters for eth0 :
➜ ssh data-002
➜ root@data-002:~$ iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
➜ root@data-002:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
26: eth0@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:6c:54:78 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.10.90/24 metric 100 brd 192.168.10.255 scope global dynamic eth0
valid_lft 2752sec preferred_lft 2752sec
inet6 fd42:a4f:8f61:21e3:216:3eff:fe6c:5478/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 3305sec preferred_lft 3305sec
inet6 fe80::216:3eff:fe6c:5478/64 scope link
valid_lft forever preferred_lft forever
Above we can see the eth0 interface and that there are no existing iptables rules implemented.
We're asked to close port 5000 and are going to use iptables for it:
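A possible rule for this (a sketch matching the listing below):
➜ root@data-002:~$ iptables -A INPUT -i eth0 -p tcp --dport 5000 -j DROP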
➜ root@data-002:~$ iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
DROP tcp -- anywhere anywhere tcp dpt:5000 # new rule
➜ root@data-002:~$ exit
Now we're going to perform some NAT for connections on port 6000 :
➜ root@data-002:~$ iptables -A PREROUTING -i eth0 -t nat -p tcp --dport 6000 -j REDIRECT --to-port 6001
If we would like to clear these we could run iptables -F -t nat . Let's test the result:
➜ root@data-002:~$ exit
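The redirect can then be checked from the main terminal, for example (a sketch of the expected result):
➜ curl data-002:6000
app on port 6001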
Now we're asked to open port 5000 only from a specific source IP:
The idea there is that we first allow that IP and then deny all other traffic on that port. Let's verify it:
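The rules for this could look similar to the following (a sketch; 192.168.10.1 is a purely hypothetical source IP, and the ACCEPT gets inserted above the existing DROP for port 5000):
➜ root@data-002:~$ iptables -I INPUT 1 -i eth0 -p tcp --dport 5000 -s 192.168.10.1 -j ACCEPT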
➜ curl data-002:6002
^C # timeout
➜ ssh data-002
➜ root@data-002:~$ nc app-srv1 22
SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.1
^C # success
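A rule producing the behaviour below could look like this (a guess, the exact rule used isn't shown):
➜ root@data-002:~$ iptables -A OUTPUT -o eth0 -d app-srv1 -j DROP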
➜ root@data-002:~$ nc app-srv1 22
^C # timeout
➜ root@data-002:~$ nc data-001 22
SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.1
^C # success
Above we can see that outgoing connections to app-srv1 no longer work and they time out.
Task weight: 4%
Your team selected you for this task because of your deep filesystem and disk/devices expertise. Solve the following steps to not let your team
down:
1. Format /dev/vdb with ext4 , mount it to /mnt/backup-black and create empty file /mnt/backup-black/completed
2. Find the fullest of the disks /dev/vdc and /dev/vdd . Then empty the .trash folder on it
3. There are two processes running: dark-matter-v1 and dark-matter-v2 . Find the one that consumes more memory. Then unmount the
disk on which the process executable is located
➜ sudo fdisk -l
...
Disk /dev/vdb: 100 MiB, 104857600 bytes, 204800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
➜ lsblk -f
NAME FSTYPE FSVER LABEL ... FSAVAIL FSUSE% MOUNTPOINTS
...
vdb
vdc ext4 1.0 ... 0 98% /mnt/backup001
vdd ext4 1.0 ... 77.7M 6% /mnt/backup002
vde ext4 1.0 ... 82.6M 0% /mnt/app-8e127b55
vdf ext4 1.0 ... 82.6M 0% /mnt/app-4e9d7e1e
...
➜ df -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 198M 1.1M 197M 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 62G 5.6G 54G 10% /
tmpfs 988M 0 988M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/vda2 2.0G 130M 1.7G 8% /boot
tmpfs 198M 4.0K 198M 1% /run/user/1000
/dev/vdc 90M 51M 33M 61% /mnt/backup001
/dev/vdd 90M 5.1M 78M 7% /mnt/backup002
/dev/vde 90M 44K 83M 1% /mnt/app-8e127b55
/dev/vdf 90M 44K 83M 1% /mnt/app-4e9d7e1e
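The steps for /dev/vdb could look like this (a sketch):
➜ sudo mkfs.ext4 /dev/vdb
➜ sudo mkdir -p /mnt/backup-black
➜ sudo mount /dev/vdb /mnt/backup-black
➜ sudo touch /mnt/backup-black/completed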
➜ df -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 198M 1.4M 197M 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 62G 7.5G 52G 13% /
tmpfs 988M 0 988M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/vda2 2.0G 130M 1.7G 8% /boot
tmpfs 198M 4.0K 198M 1% /run/user/1000
tmpfs 1.0M 0 1.0M 0% /var/snap/lxd/common/ns
/dev/vdc 90M 51M 33M 61% /mnt/backup001
/dev/vdd 90M 5.1M 78M 7% /mnt/backup002
/dev/vde 90M 44K 83M 1% /mnt/app-8e127b55
/dev/vdf 90M 44K 83M 1% /mnt/app-4e9d7e1e
/dev/vdb 90M 24K 83M 1% /mnt/backup-black # this is us
tmpfs 198M 4.0K 198M 1% /run/user/0
➜ df -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 198M 1.4M 197M 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 62G 7.5G 52G 13% /
tmpfs 988M 0 988M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/vda2 2.0G 130M 1.7G 8% /boot
tmpfs 198M 4.0K 198M 1% /run/user/1000
tmpfs 1.0M 0 1.0M 0% /var/snap/lxd/common/ns
/dev/vdc 90M 51M 33M 61% /mnt/backup001 # compare
/dev/vdd 90M 5.1M 78M 7% /mnt/backup002 # compare
/dev/vde 90M 44K 83M 1% /mnt/app-8e127b55
/dev/vdf 90M 44K 83M 1% /mnt/app-4e9d7e1e
/dev/vdb 90M 24K 83M 1% /mnt/backup-black
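Disk /dev/vdc mounted at /mnt/backup001 is the fullest one, so its .trash folder gets emptied, for example (a sketch; hidden files inside .trash would need an extra glob or find -delete):
➜ sudo rm -rf /mnt/backup001/.trash/*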
➜ df -h
...
/dev/vdc 90M 28K 83M 1% /mnt/backup001 # much space gained
/dev/vdd 90M 5.1M 78M 7% /mnt/backup002
For the final step we need to check the memory of two processes and then find out on which disk the largest consumer runs:
We see both processes with their memory consumption in column 9. We can also use ps which can show the full path of the executables,
which we also need:
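For example (a sketch):
➜ ps aux | grep dark-matter # %MEM is column 4, the full executable path is at the end of each line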
Above we see the memory usage in column 4 and the full paths at the very end. This means process dark-matter-v2 is the offender and the
executable is to be found at /mnt/app-4e9d7e1e/dark-matter-v2 . So we need to do:
➜ df -h | grep /mnt/app-4e9d7e1e
/dev/vdf 90M 44K 83M 1% /mnt/app-4e9d7e1e # to see the disk and mount point
The disk is busy, probably because of that one process! But we can check for any other processes blocking this:
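For example (a sketch):
➜ sudo lsof /mnt/app-4e9d7e1e # or: fuser -vm /mnt/app-4e9d7e1e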
Well, only that one bad process! Hence we can finish this question with:
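A possible way to finish (a sketch; 4711 stands in for the actual PID of dark-matter-v2 from the process listing):
➜ sudo kill 4711
➜ sudo umount /mnt/app-4e9d7e1e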
Task weight: 6%
First:
• Find all files smaller than 3KiB and move these to /var/backup/backup-015/small/
• Find all files larger than 10KiB and move these to /var/backup/backup-015/large/
• Find all files with permission 777 and move these to /var/backup/backup-015/compromised/
➜ root@data-001:~$ cd /var/backup/backup-015
Seems to contain a good amount of files! Now we need to clean it up. A good way for this is to use find with arguments and a command to
execute, like:
find -exec echo {} \; # will find all files and runs "echo FILE" for each
find -exec echo {} + # will find all files and runs "echo FILE1 FILE2 FILE3 ..."
man find # search for "-newerXY" to find files "newer than date"
Using this we can delete all files created before 2020. Always "debug" a command first by just listing without executing a command:
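A sketch of these steps, run from /var/backup/backup-015 and using modification time as an approximation for creation time:
➜ root@data-001:/var/backup/backup-015$ find . -maxdepth 1 -type f -not -newermt 2020-01-01 # list first
➜ root@data-001:/var/backup/backup-015$ find . -maxdepth 1 -type f -not -newermt 2020-01-01 -delete
The size based moves could then look similar (also a sketch; byte-exact alternatives would be -size -3072c and -size +10240c):
➜ root@data-001:/var/backup/backup-015$ mkdir -p small large compromised
➜ root@data-001:/var/backup/backup-015$ find . -maxdepth 1 -type f -size -3k -exec mv {} small/ \;
➜ root@data-001:/var/backup/backup-015$ find . -maxdepth 1 -type f -size +10k -exec mv {} large/ \;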
And finally we move all files with too open permissions into the subfolder:
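For example (a sketch):
➜ root@data-001:/var/backup/backup-015$ find . -maxdepth 1 -type f -perm 777 -exec mv {} compromised/ \;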
➜ root@data-001:/var/backup/backup-015$ ls small/ | wc -l
28
➜ root@data-001:/var/backup/backup-015$ ls large/ | wc -l
11
➜ root@data-001:/var/backup/backup-015$ ls compromised/ | wc -l
12
Task weight: 7%
On your main server terminal use SSHFS to mount directory /data-export from server app-srv1 to /app-srv1/data-export . The mount
should be read-write and option allow_other should be enabled.
The NFS service has been installed on your main server terminal . Directory /nfs/share should be read-only accessible from
192.168.10.0/24 . On app-srv1 , mount the NFS share /nfs/share to /nfs/terminal/share .
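The SSHFS mount itself could be done similar to this (a sketch; allow_other additionally requires user_allow_other in /etc/fuse.conf when not mounting as root):
➜ sudo mkdir -p /app-srv1/data-export
➜ sudo sshfs -o allow_other root@app-srv1:/data-export /app-srv1/data-export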
➜ find /app-srv1/data-export
/app-srv1/data-export
/app-srv1/data-export/datRTG_1.gz
/app-srv1/data-export/datRTG_2.gz
➜ touch /app-srv1/data-export/new
➜ find /app-srv1/data-export
/app-srv1/data-export
/app-srv1/data-export/new
/app-srv1/data-export/datRTG_1.gz
/app-srv1/data-export/datRTG_2.gz
Above we can see that creating a new file works and that both directories are synced. Note that all users can now access that mount in read-
write mode because of option allow_other, which might not be something we want in production environments!
NFS server seems to be running. We can expose certain directories via /etc/exports :
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
#
/nfs/share 192.168.10.0/24(ro,sync,no_subtree_check,no_root_squash)
The file provides some comments with examples which can be very useful. After adding the exports we need to run:
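➜ sudo exportfs -ra # a likely command: reexport (-r) all (-a) entries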
➜ man exportfs # in case we forget the arguments: "-r" for reexport and "-a" for all
➜ showmount -e
Export list for terminal:
/nfs/share 192.168.10.0/24
Now we check if our NFS server-side settings can actually be accessed on client-side by NFS mounting:
➜ ssh app-srv1
➜ root@app-srv1:~$ nc -v terminal 2049 # for debugging we can check if NFS on port 2049 is open
Connection to terminal (192.168.100.2) 2049 port [tcp/nfs] succeeded!
^C
➜ root@app-srv1:~$ mkdir -p /nfs/terminal/share # we need to create the local mount destination directory
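➜ root@app-srv1:~$ mount -t nfs terminal:/nfs/share /nfs/terminal/share # a likely mount command for the NFS share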
➜ root@app-srv1:~$ touch /nfs/terminal/share/new # not possible to create new file (because of read-only)
touch: cannot touch '/nfs/terminal/share/new': Read-only file system
Above we see that we were able to mount the required NFS export in read-only mode on app-srv1 .
Task weight: 6%
Someone overheard that you're a Containerisation Specialist, so the following should be easy for you! Please:
◦ Name: frontend_v3
◦ Image: nginx:alpine
◦ Memory limit: 30m (30 Megabytes)
◦ TCP Port map: 1234/host => 80/container
➜ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9b334cfaae0 nginx:alpine "/docker-entrypoint.…" 11 minutes ago Up 11 minutes 80/tcp frontend_v1
e68fa28f231d nginx:alpine "/docker-entrypoint.…" 7 minutes ago Up 7 minutes 80/tcp frontend_v2
Docker inspect provides the container configuration in JSON format which contains all information asked for in this task:
"Mounts": [
{
"Type": "bind",
"Source": "/var/www",
"Destination": "/srv", # WE NEED THIS
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
...
"NetworkSettings": {
"Bridge": "",
"SandboxID": "a550576248b3f6c15211c2ad10efc421c7b48285cd69d57b64d8dc7b1b59e0ef",
"HairpinMode": false,
...
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "5889aae0b056e4f944d6fb9afd8edd26ecd589c5e45b7533b59c878138fb5625",
"EndpointID": "954fcc88c2b64ef9ba3809eb130060ceb064301c4906b4a7ab0902be6153eedb",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.3", # WE NEED THIS
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:03",
"DriverOpts": null
}
}
}
It's probably a good idea to search in the inspect output for specific values. For this we could open the output directly in vim:
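For example (a sketch; which container to inspect depends on the task text, frontend_v1 here is just an example, and "vim -" reads from stdin):
➜ sudo docker inspect frontend_v1 | vim -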
# /opt/course/11/ip-address
172.17.0.3
# /opt/course/11/mount-destination
/srv
Finally we can start our own container! Unfortunately with very strict conditions to follow... so let's obey!
The help output for docker run usually provides all that's needed:
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...] # the order of arguments is important
Options:
--add-host list Add a custom host-to-IP mapping (host:ip)
-a, --attach list Attach to STDIN, STDOUT or STDERR
...
--cpus decimal Number of CPUs
--cpuset-cpus string CPUs in which to allow execution (0-3, 0,1)
--cpuset-mems string MEMs in which to allow execution (0-3, 0,1)
-d, --detach Run container in background and print container ID # need this
--detach-keys string Override the key sequence for detaching a container
--device list Add a host device to the container
--device-cgroup-rule list Add a rule to the cgroup allowed devices list
...
--mount mount Attach a filesystem mount to the container
--name string Assign a name to the container # need this
--network network Connect a container to a network
...
--pids-limit int Tune container pids limit (set -1 for unlimited)
--platform string Set platform if server is multi-platform capable
--privileged Give extended privileges to this container
-p, --publish list Publish a containers port(s) to the host # need this
-P, --publish-all Publish all exposed ports to random ports
...
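Putting the requirements together, the run command could look like this (a sketch):
➜ sudo docker run -d --name frontend_v3 --memory 30m -p 1234:80 nginx:alpine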
In case the above command throws iptables errors we can restart the docker service:
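➜ sudo systemctl restart docker # one possible way to restart the service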
➜ curl localhost:1234
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
You're asked to perform changes in the Git repository of the Auto-Verifier app:
2. Find the one of the branches dev4 , dev5 and dev6 in which file config.yaml contains user_registration_level: open . Merge only
that branch into branch main
3. In branch main create a new directory logs on top repository level. To ensure the directory will be committed create hidden empty file
.keep in it
: Clone Repository
Git is most often used to clone from and work with remote repositories like GitHub or GitLab. But most of Git's functionality can also be
used locally. We go ahead and clone the local directory:
➜ cd ~/repositories/auto-verifier
We can simply move through all branches and check the file content:
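For example (a sketch, repeated for dev5 and dev6 as well):
➜ ~/repositories/auto-verifier$ git checkout dev4
➜ ~/repositories/auto-verifier$ grep user_registration_level config.yaml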
# Match! We could've stopped here, but let's also check the final branch
updated config.yaml
commit 9b73a28f2c87c6b34b9a779f5e82b4ebbf8bc78c
Author: manager <manager@auto-verifier>
Date: Thu Jan 5 20:19:58 2023 +1000
Another way could also be to compare from branch main without actually switching into another:
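For example (a sketch):
➜ ~/repositories/auto-verifier$ git grep user_registration_level dev4 dev5 dev6 -- config.yaml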
We're asked to create a new directory, let's see what happens if we try to commit it just like that:
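A sketch of these steps:
➜ ~/repositories/auto-verifier$ git checkout main
➜ ~/repositories/auto-verifier$ mkdir logs
➜ ~/repositories/auto-verifier$ git status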
➜ ~/repositories/auto-verifier$ ls -l
total 20
-rwxrwxr-x 1 student student 2015 Jul 27 14:20 config.yaml
drwxrwxr-x 2 student student 4096 Jul 27 14:16 lib
drwxrwxr-x 2 student student 4096 Jul 27 14:29 logs
-rwxrwxr-x 1 student student 205 Jul 27 14:16 main.go
-rwxrwxr-x 1 student student 133 Jul 27 14:16 README.md
Above we see that git status doesn't list the directory at all because it's empty. This means that it wouldn't be included in commits either.
Now we add the requested file and see if things change:
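For example:
➜ ~/repositories/auto-verifier$ touch logs/.keep
➜ ~/repositories/auto-verifier$ git status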
Untracked files:
(use "git add <file>..." to include in what will be committed)
logs/
nothing added to commit but untracked files present (use "git add" to track)
: Commit
Untracked files:
(use "git add <file>..." to include in what will be committed)
logs/
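The file can then be staged, for example:
➜ ~/repositories/auto-verifier$ git add logs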
Changes to be committed:
(use "git restore --staged <file>..." to unstage)
new file: logs/.keep
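Finally the change gets committed, for example (a sketch; any commit message works):
➜ ~/repositories/auto-verifier$ git commit -m "add logs directory"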
Task weight: 5%
There was a security alert which you need to follow up on. On server web-srv1 there are three processes: collector1 , collector2 , and
collector3 . It was alerted that any of these might periodically run the syscall kill, which is forbidden per custom policy.
End the process and remove the executable for those where this is true.
➜ ssh web-srv1
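One way to watch a process for specific syscalls is strace, for example (a sketch; 4242 stands in for the actual PID of collector1):
➜ root@web-srv1:~$ ps aux | grep collector1
➜ root@web-srv1:~$ strace -p 4242 -e trace=kill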
After watching for a while, there don't seem to be any kill syscalls. Next one is collector2 :
Gotcha! Seems like collector2 is one bad process. Still we need to check the last one, collector3 :
Seems like only collector2 should be terminated. First we run ps again to see the binary path:
➜ root@web-srv1:~$ ps aux | grep collector2
root 3612 0.0 0.0 101916 612 ? Sl 13:23 0:00 /bin/collector2
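➜ root@web-srv1:~$ kill 3612 # end the process, PID taken from the ps output above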
➜ root@web-srv1:~$ rm /bin/collector2
Task weight: 3%
On server app-srv1 there is a program /bin/output-generator which, who would've guessed, generates some output. It'll always generate
the very same output for every run:
➜ ssh app-srv1
➜ root@app-srv1:~$ output-generator
...
2021/07/20 08:58:20 32668 - a99947a1-2f68-475e-9326-32dbd4876f60
2021/07/20 08:58:20 32669 - a99947a1-2f68-475e-9326-32dbd4876f60
2021/07/20 08:58:20 32670 - a99947a1-2f68-475e-9326-32dbd4876f60
2021/07/20 08:58:20 32671 - a99947a1-2f68-475e-9326-32dbd4876f60
2021/07/20 08:58:20 32672 - a99947a1-2f68-475e-9326-32dbd4876f60
2021/07/20 08:58:20 32673 - a99947a1-2f68-475e-9326-32dbd4876f60
2021/07/20 08:58:20 32674 - a99947a1-2f68-475e-9326-32dbd4876f60
2021/07/20 08:58:20 32675 - a99947a1-2f68-475e-9326-32dbd4876f60
2021/07/20 08:58:20 32676 - a99947a1-2f68-475e-9326-32dbd4876f60
2021/07/20 08:58:20 32677 - a99947a1-2f68-475e-9326-32dbd4876f60
➜ root@app-srv1:~$ output-generator | wc -l
...
2021/07/20 09:00:44 22670 - a99947a1-2f68-475e-9326-32dbd4876f60
2021/07/20 09:00:44 22671 - a99947a1-2f68-475e-9326-32dbd4876f60
2021/07/20 09:00:44 22672 - a99947a1-2f68-475e-9326-32dbd4876f60
2021/07/20 09:00:44 22673 - a99947a1-2f68-475e-9326-32dbd4876f60
2021/07/20 09:00:44 22674 - a99947a1-2f68-475e-9326-32dbd4876f60
2021/07/20 09:00:44 22675 - a99947a1-2f68-475e-9326-32dbd4876f60
2021/07/20 09:00:44 22676 - a99947a1-2f68-475e-9326-32dbd4876f60
20342
It looks like wc -l only counts lines that are written to stdout.
We can investigate a bit further using stdout (1) and stderr (2) redirection
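For example (a sketch):
➜ root@app-srv1:~$ output-generator 1>/dev/null # lines still appear, so they're written to stderr
➜ root@app-srv1:~$ output-generator 2>/dev/null | wc -l # counts only the stdout lines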
➜ root@app-srv1:~$ output-generator
➜ root@app-srv1:~$ echo $? > /var/output-generator/4.out # get the exit code and redirect to file
Task weight: 4%
Install the text based terminal browser links2 from source on server app-srv1 . The source is provided at /tools/links-2.14.tar.bz2 on
that server.
Let's first check out the provided archive and extract its content:
➜ ssh app-srv1
➜ root@app-srv1:~$ cd /tools
➜ root@app-srv1:/tools$ ls
links-2.14.tar.bz2
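➜ root@app-srv1:/tools$ tar xf links-2.14.tar.bz2 # a likely extraction command, tar detects the bzip2 compression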
➜ root@app-srv1:/tools$ cd links-2.14
➜ root@app-srv1:/tools/links-2.14$ ls -lh
total 6.9M
-rw-r--r-- 1 ubuntu ubuntu 9.2K Aug 12 2015 AUTHORS
-rw-r--r-- 1 ubuntu ubuntu 1.2K Jan 2 2005 BRAILLE_HOWTO
-rw-r--r-- 1 ubuntu ubuntu 19K Sep 21 2013 COPYING
-rw-r--r-- 1 ubuntu ubuntu 114K Nov 26 2016 ChangeLog
-rw-r--r-- 1 ubuntu ubuntu 4.6K Nov 26 2016 INSTALL
-rw-r--r-- 1 ubuntu ubuntu 1.9K Apr 9 2012 KEYS
-rw-r--r-- 1 ubuntu ubuntu 650 Jan 10 2005 Links_logo.png
-rw-r--r-- 1 ubuntu ubuntu 2.6K Jun 16 2016 Makefile.am
-rw-r--r-- 1 ubuntu ubuntu 22K Nov 3 2016 Makefile.in
-rw-r--r-- 1 ubuntu ubuntu 214 Feb 11 2012 NEWS
...
The usual steps to install from source are:
1. ./configure (args...)
2. make
3. make install
Here the question requires specific configuration parameters. We can list all possible options:
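➜ root@app-srv1:/tools/links-2.14$ ./configure --help # lists all available options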
It's also possible to open the ./configure file in a text editor to investigate possible options if the help output won't suffice.
Event handler: NO
IPv6: NO
Supported compression: NO
SSL support: NO
UTF-8 terminal: YES
GPM support: NO
Graphics enabled: NO
---------------------------------------------------------
➜ root@app-srv1:/tools/links-2.14$ make
gcc -DHAVE_CONFIG_H -I. -I. -I. -g -O2 -c af_unix.c
gcc -DHAVE_CONFIG_H -I. -I. -I. -g -O2 -c auth.c
gcc -DHAVE_CONFIG_H -I. -I. -I. -g -O2 -c beos.c
gcc -DHAVE_CONFIG_H -I. -I. -I. -g -O2 -c bfu.c
gcc -DHAVE_CONFIG_H -I. -I. -I. -g -O2 -c block.c
gcc -DHAVE_CONFIG_H -I. -I. -I. -g -O2 -c bookmark.c
gcc -DHAVE_CONFIG_H -I. -I. -I. -g -O2 -c cache.c
...
gcc -DHAVE_CONFIG_H -I. -I. -I. -g -O2 -c vms.c
gcc -DHAVE_CONFIG_H -I. -I. -I. -g -O2 -c x.c
gcc -DHAVE_CONFIG_H -I. -I. -I. -g -O2 -c xbm.c
gcc -g -O2 -o links af_unix.o auth.o beos.o bfu.o block.o bookmark.o cache.o charsets.o compress.o connect.o
cookies.o data.o default.o dip.o directfb.o dither.o dns.o dos.o drivers.o error.o file.o finger.o fn_impl.o font_inc.o
framebuf.o ftp.o gif.o grx.o hpux.o html.o html_gr.o html_r.o html_tbl.o http.o https.o img.o imgcache.o jpeg.o jsint.o
kbd.o language.o listedit.o lru.o mailto.o main.o memory.o menu.o objreq.o os_dep.o pmshell.o png.o sched.o select.o
session.o smb.o string.o suffix.o svg.o svgalib.o terminal.o tiff.o types.o url.o view.o view_gr.o vms.o x.o xbm.o
-lpthread
/usr/bin/ld: session.o: in function `get_temp_name':
/tools/links-2.14/session.c:1056: warning: the use of `tempnam' is dangerous, better use `mkstemp'
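After a successful make the remaining step would be installing, for example (a sketch):
➜ root@app-srv1:/tools/links-2.14$ make install
➜ root@app-srv1:/tools/links-2.14$ which links # verify the binary is now available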
Task weight: 8%
Server web-srv1 is hosting two applications, one accessible on port 1111 and one on 2222 . These are served using Nginx and it's
allowed to change their config. The ip of web-srv1 is 192.168.10.60 .
Nginx is already preinstalled and is recommended to be used for the implementation. Though it's also possible to use any other technologies
(like Apache or HAProxy) because only the end result will be verified.
➜ curl web-srv1:1111
app1
➜ curl web-srv1:2222
app2
➜ curl web-srv1:2222/special
app2 special
Cool, we can work with that! Here we're now going to create an Nginx LoadBalancer, but it would also be possible to use any other technology
if you're more familiar with it.
➜ root@web-srv1:~$ cd /etc/nginx/sites-available
➜ root@web-srv1:/etc/nginx/sites-available$ ls -lh
total 2.0K
-rw-r--r-- 1 root root 329 Jun 14 17:17 app1
-rw-r--r-- 1 root root 413 Jun 14 17:17 app2
server_name _;
location / {
return 200 'app1\n';
}
}
There we see the two existing applications app1 and app2 which we aren't allowed to change. But we sure can use one as template for
our new LoadBalancer:
➜ root@web-srv1:/etc/nginx/sites-available$ cp app1 lb
➜ root@web-srv1:/etc/nginx/sites-available$ vim lb
server {
listen 8001 default_server; # change port
listen [::]:8001 default_server; # change port
server_name _;
location / {
proxy_pass http://192.168.10.60:2222/special; # reverse proxy to the requested url
}
}
We start slowly with the easier one on port 8001 where we simply use a proxy_pass option. Save the file and:
➜ root@web-srv1:/etc/nginx/sites-available$ cd ../sites-enabled
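The new site then needs to be enabled and Nginx reloaded, for example (a sketch):
➜ root@web-srv1:/etc/nginx/sites-enabled$ ln -s /etc/nginx/sites-available/lb lb
➜ root@web-srv1:/etc/nginx/sites-enabled$ nginx -t && systemctl reload nginx
➜ root@web-srv1:/etc/nginx/sites-enabled$ curl localhost:8001 # should return "app2 special"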
Working! Now we go ahead and copy the first part, make some additions and use it as the second part:
➜ root@web-srv1:/etc/nginx/sites-enabled$ vim lb
server_name _;
location / {
proxy_pass http://192.168.10.60:2222/special; # reverse proxy to the requested url
}
}
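# The proxy_pass http://backend below needs an upstream definition; it likely
# looks similar to this (a sketch based on the two provided app ports):
upstream backend {
    server 192.168.10.60:1111;
    server 192.168.10.60:2222;
}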
server {
listen 8000 default_server;
listen [::]:8000 default_server;
server_name _;
location / {
proxy_pass http://backend; # reverse proxy to the requested url
}
}
The second part is mostly the same as before. It's possible to create multiple servers which listen on different ports within the same file. Here
we simply created an upstream backend that contains the provided urls and we use it in the proxy_pass directive. Pretty nice, right? But does
it work?
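The change again needs a reload to take effect, likely something like:
➜ root@web-srv1:/etc/nginx/sites-enabled$ nginx -t && systemctl reload nginx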
➜ root@web-srv1:/etc/nginx/sites-enabled$ exit
➜ curl web-srv1:8000
app1
➜ curl web-srv1:8000
app2
➜ curl web-srv1:8000
app1
➜ curl web-srv1:8000
app2
➜ curl web-srv1:8000
app1
➜ curl web-srv1:8001
app2 special
➜ curl web-srv1:8001
app2 special
➜ curl web-srv1:8001/anything/even/not/special
app2 special
Above we see that requests to web-srv1:8000 are sent to both app1 and app2 . And requests on web-srv1:8001 are only sent to app2 and
path /special .
Task weight: 6%
You need to perform OpenSSH server configuration changes on data-002 . Users marta and cilla exist on that server and can be used for
testing. Passwords are their username and shouldn't be changed. Please go ahead and:
1. Disable X11Forwarding
3. Enable Banner with file /etc/ssh/sshd-banner for users marta and cilla
In case of misconfiguration you can still access the instance using sudo lxc exec data-002 bash
We are required to perform ssh server config changes, always fun because nothing can ever go wrong!
➜ ssh data-002
...
X11Forwarding no # find and update to "no"
...
We now need to first disable PasswordAuthentication globally and then enable it for user marta . Then we also add the Banner settings for
users marta and cilla :
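In /etc/ssh/sshd_config this could look similar to the following (a sketch; the surrounding existing settings aren't shown):
PasswordAuthentication no # global default
Match User marta
    PasswordAuthentication yes
Match User marta,cilla
    Banner /etc/ssh/sshd-banner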
It's important to add any Match lines at the very bottom of the config file, otherwise it might not get accepted and errors will be thrown
during sshd service restart.
Using Match User or Match Group we can override global settings for specific users and groups. Let's test if it works:
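➜ root@data-002:~$ systemctl restart sshd # apply the changes; on Ubuntu the unit may also be named ssh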
➜ root@data-002:~$ exit
➜ ssh marta@data-002
Hello our favorite user! # user "marta" sees banner message
marta@data-002's password:
Last login: Sat Jun 17 16:13:35 2023 from 192.168.10.1 # user "marta" can log in using password
➜ marta@data-002:~$ exit
➜ ssh cilla@data-002
Hello our favorite user! # user "cilla" sees banner message
cilla@data-002: Permission denied (publickey). # user "cilla" can't log in using password
➜ ssh root@data-002
Last login: Sat Jun 17 16:21:30 2023 from 192.168.10.1 # no banner message for user "root"
Both users marta and cilla see the banner message, but only marta can still log in using password.
Task weight: 8%
Some helpful abbreviations when working with LVM, because command names usually start with those:
PV = Physical Volume
VG = Volume Group
LV = Logical Volume
➜ sudo pvs
PV VG Fmt Attr PSize PFree
/dev/vda3 ubuntu-vg lvm2 a-- <126.00g 63.00g
/dev/vdg vol1 lvm2 a-- 96.00m 84.00m
/dev/vdh vol1 lvm2 a-- 96.00m 96.00m
In the output above we can see that the VG vol1 uses two disks /dev/vdg and /dev/vdh . We can also get an overview over all system disks
and their LVM usage:
➜ sudo lvmdiskscan
/dev/loop0 [ <49.84 MiB]
/dev/loop1 [ 63.28 MiB]
/dev/loop2 [ <111.95 MiB]
/dev/vda2 [ 2.00 GiB]
/dev/loop3 [ <53.26 MiB]
/dev/vda3 [ <126.00 GiB] LVM physical volume
/dev/loop4 [ 63.45 MiB]
/dev/vdb [ 100.00 MiB]
/dev/vdc [ 100.00 MiB]
/dev/vdd [ 100.00 MiB]
/dev/vde [ 100.00 MiB]
/dev/vdf [ 100.00 MiB]
/dev/vdg [ 100.00 MiB] LVM physical volume # disk 1
/dev/vdh [ 100.00 MiB] LVM physical volume # disk 2
5 disks
6 partitions
2 LVM physical volume whole disks
1 LVM physical volume
The existing PV /dev/vda3 with VG ubuntu-vg is created by the main operating system and shouldn't be touched.
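The disk /dev/vdh apparently gets removed from VG vol1 first, likely with something like:
➜ sudo vgreduce vol1 /dev/vdh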
➜ sudo vgs
VG PV LV SN Attr VSize VFree
ubuntu-vg 1 1 0 wz--n- <126.00g 63.00g
vol1 1 1 0 wz--n- 96.00m 84.00m # one PV
That should do it. We can also verify this by listing all PVs:
➜ sudo pvs
PV VG Fmt Attr PSize PFree
/dev/vda3 ubuntu-vg lvm2 a-- <126.00g 63.00g
/dev/vdg vol1 lvm2 a-- 96.00m 84.00m
/dev/vdh lvm2 --- 100.00m 100.00m # not assigned to a VG any longer
Now we're going to create a new VG using that now free disk:
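➜ sudo vgcreate vol2 /dev/vdh # a likely command for this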
➜ sudo pvs
PV VG Fmt Attr PSize PFree
/dev/vda3 ubuntu-vg lvm2 a-- <126.00g 63.00g
/dev/vdg vol1 lvm2 a-- 96.00m 84.00m
/dev/vdh vol2 lvm2 a-- 96.00m 96.00m # assigned to the VG
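Next a new LV gets created in vol2, probably similar to this (a sketch; the 52M size matches the lvs output below):
➜ sudo lvcreate -n p1 -L 52M vol2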
➜ sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
ubuntu-lv ubuntu-vg -wi-ao---- <63.00g
p1 vol1 -wi-a----- 12.00m
p1 vol2 -wi-a----- 52.00m # there we go
We can access LVM partitions or LVs in the usual way once we know the path:
➜ sudo mkfs -t ext4 /dev/vol2/p1
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 13312 4k blocks and 13312 inodes
We could now go ahead and mount and use /dev/vol2/p1 as we're used to.
Also interesting, and possibly part of the exam, is extending a mounted LV:
➜ sudo fdisk -l
...
Disk /dev/mapper/vol2-p1: 52 MiB, 54525952 bytes, 106496 sectors # ~50M
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
➜ sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
ubuntu-lv ubuntu-vg -wi-ao---- <63.00g
p1 vol1 -wi-a----- 12.00m
p1 vol2 -wi-ao---- 52.00m
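The extension could be done like this (a sketch; +20M matches the size change visible below and resize2fs grows the ext4 filesystem while mounted):
➜ sudo lvextend -L +20M vol2/p1
➜ sudo resize2fs /dev/vol2/p1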
➜ sudo fdisk -l
...
Disk /dev/mapper/vol2-p1: 72 MiB, 75497472 bytes, 147456 sectors # ~70M
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Task weight: 4%
On server web-srv1 there are two log files that need to be worked with:
1. File /var/log-collector/003/nginx.log : extract all log lines where URLs start with /app/user and that were accessed by browser
identity hacker-bot/1.2 . Write only those lines into /var/log-collector/003/nginx.log.extracted
2. File /var/log-collector/003/server.log : replace all lines starting with container.web , ending with 24h and that have the word
Running anywhere in-between with: SENSITIVE LINE REMOVED
➜ ssh web-srv1
➜ root@web-srv1:~$ cd /var/log-collector/003
➜ root@web-srv1:/var/log-collector/003$ ls -lha
total 23K
drwxr-xr-x 2 root root 4 Jul 19 09:43 .
drwxr-xr-x 8 root root 8 Jul 19 09:43 ..
-rw-r--r-- 1 root root 207K Jul 19 09:40 nginx.log
-rw-r--r-- 1 root root 29K Jul 19 09:39 server.log
To extract all log lines as required we could try some simple grep like:
➜ root@web-srv1:/var/log-collector/003$ cat nginx.log | grep "/app/user" | grep "hacker-bot/1.2"
But this would also catch lines like these which are not asked for:
127.0.0.1 - - [18/Jul/2075:08:35:34 +0000] "GET /hacker-bot/1.2 HTTP/1.1" 200 8 "-" "/app/user" "-"
127.0.0.1 - - [18/Jul/2075:08:35:15 +0000] "GET /hacker-bot/1.2 HTTP/1.1" 200 8 "-" "/app/user" "-"
The lines above shouldn't match because there the URL is /hacker-bot/1.2 and the browser identity is /app/user, the opposite of what's asked.
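A more precise approach anchors on the request and user-agent fields, for example (a sketch assuming GET requests):
➜ root@web-srv1:/var/log-collector/003$ grep '"GET /app/user' nginx.log | grep '"hacker-bot/1.2"' > nginx.log.extracted
➜ root@web-srv1:/var/log-collector/003$ wc -l < nginx.log.extracted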
It should be 27 lines:
27
Next we shall remove some sensitive logs in server.log . Anything in the pattern of:
^container.web.*Running.*24h$
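We can first count the matching lines, for example:
➜ root@web-srv1:/var/log-collector/003$ grep -c '^container.web.*Running.*24h$' server.log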
It should be 44 lines:
44
This will simply output everything to stdout for us to verify. We can even further check by counting the lines:
44
Looks fine, 44 lines again. Now we can use sed to replace the actual file. Still, always make a backup!
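For example (a sketch):
➜ root@web-srv1:/var/log-collector/003$ cp server.log server.log.bak # backup first
➜ root@web-srv1:/var/log-collector/003$ sed -i 's/^container.web.*Running.*24h$/SENSITIVE LINE REMOVED/' server.log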
Task weight: 6%
User Jackie caused an issue with her account on server web-srv1 . She did run a program which created too many subprocesses for the server
to handle. A coworker of yours already solved it temporarily and limited the number of processes user jackie can run.
For this the coworker added a command into .bashrc in the home directory of jackie . But the command just sets the soft limit and not the
hard limit. Jackie's password is brown in case needed.
Configure the number-of-processes limitation as a hard limit for user jackie . Use the same number currently set as a soft limit for that user.
Do it in the proper way, not via .bashrc .
While at it you remember another ticket. On the same server you should enforce that group operators can only ever log in once at the same
time.
Well, that's a lot. We check out that user and its limits first:
➜ ssh web-srv1
➜ root@web-srv1:~$ su jackie
➜ jackie@web-srv1:/root$ cd
➜ jackie@web-srv1:~$
➜ jackie@web-srv1:~$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 7815
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1024 # that's the one!
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Using ulimit we can see all configured limits, and there is max user processes set to 1024. We should check how it has been set in
.bashrc :
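For example:
➜ jackie@web-srv1:~$ grep ulimit .bashrc # presumably shows something like "ulimit -S -u 1024" (soft limit only)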
This works, but user jackie could change it herself like this:
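➜ jackie@web-srv1:~$ ulimit -u 1100 # raising it works because only the soft limit was set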
➜ jackie@web-srv1:~$ ulimit -u
1100
Next we go ahead and configure it via /etc/security/limits.conf (we need to be root for this). In that file there are already some useful
examples at the bottom. We add a new line:
...
jackie hard nproc 1024 # add a new line
➜ jackie@web-srv1:~$ ulimit -u
1024
The other ticket was about implementing a limitation for group operators , so we add it to the file as well:
...
jackie hard nproc 1024
@operators hard maxlogins 1 # add this line
We can test it using user jackie because she actually is in group operators . For this we can ssh into web-srv1 with jackie:brown :
➜ ssh jackie@web-srv1
jackie@web-srv1's password: brown
➜ jackie@web-srv1:~$ su jackie
Password: brown
Too many logins for 'jackie'.
su: cannot open session: Permission denied