Linux Labs

The document outlines a series of labs focused on Linux skills, covering topics such as networking, file management, directory navigation, file viewing, shared libraries, user and group management, and basic security. Each lab includes objectives, purposes, tools, and detailed tasks to enhance practical understanding of Linux commands and functionalities. The labs are designed for hands-on experience, using Kali Linux or other distributions in a virtual or single-machine environment.

Simple Labs

Lab 1. Your Computer on the Network

Lab 2. Managing Files and Directories in Linux

Lab 3. Navigating Directory Hierarchies with .. and ../..

Lab 4. Viewing Files with more, less, head, and tail

Lab 5. Manage Shared Libraries

Lab 6. Creating Users and Groups

Lab 7. Basic Security and Identifying User Types

Lab 8. File Permissions

Lab 9. Shell Scripting

Lab 10. Apt

Lab 11. Dpkg

Lab 12 – Major Open Source Applications: Desktop

Lab 13 – ICT Skills and Working in Linux

Lab 14 – Command Line Basics: Syntax and Quoting

Lab 15 – Using the Command Line to Get Help

Lab 16 – Using Directories and Listing Files: Directories

Lab 17 – Searching and Extracting Data from Files: Pipes and Redirection

Lab 18 – Searching and Extracting Data from Files: Regular Expressions

Lab 19 – Turning Commands into a Script: Scripting Practice

Lab 20 – Managing Scheduled Tasks with cron

Lab 1: Your Computer on the Network


Lab Objective:

Learn how to manage networking on Linux computers.

Lab Purpose:

In this lab, you will learn how to query important network configuration and gather information for connecting a Kali Linux computer to a Local Area Network.

Lab Tool:

Kali Linux (latest version)

Lab Topology:

A single Kali Linux machine, or virtual machine.

Lab Walkthrough:

Task 1:

Open your Terminal and run the following commands to gather information on your current network configuration:

● ip addr show

Machine's local IPv4 address: 192.168.241.3
Supports IPv4 and IPv6; local network and global communication are available.

●​ ip route show

The default gateway is 192.168.241.1, which allows internet access. The machine is directly connected to 192.168.241.0/24.

●​ cat /etc/hosts

Local hostname (kali) points to 127.0.1.1.
Localhost resolves to 127.0.0.1.
This file helps resolve hostnames locally without DNS.

●​ cat /etc/resolv.conf

DNS Servers:

192.168.241.1 (your network gateway, likely your router)

IPv6 address for DNS resolution

Search domain: www.google.com

Question: Does your local network support IPv4, IPv6, or both?

Task 2:

Now turn off your Wi-Fi, disconnect your Ethernet cable, or disable the network connection to your VM. Then run the first three of those commands again.

The state DOWN for eth0 indicates that the network interface isn't currently active.

Question: What changes occurred, and why?

Task 3:

Reconnect your network and gather some information on 101labs using:

●​ host -v 101labs.net
Question: Can you ping 101labs.net? If not, how might you diagnose
the problem?

Task 4:

Run the following command and copy the output:

●​ ip route show | grep default


default via 192.168.241.1 dev eth0

You’ll use it later!

Now, make sure to stop anything important you might be doing... then
run:

●​ sudo ip route del default

Congratulations, you’ve just broken your internet connection by deleting the default route. Confirm this by attempting to browse the internet with Firefox. In a real-world scenario, you could diagnose this problem by running ip route show and noting the lack of a default route.

To fix your connection, run:

● sudo ip route add <extra>, where <extra> is the output you copied earlier.

Notes:

The net-tools package, containing commands like netstat, ifconfig, and route, is considered deprecated. Nevertheless, you may encounter systems that only have these tools, so having a basic familiarity with them is beneficial.

Lab 2: Managing Files and Directories in Linux

Lab Objective:

Learn how to create, copy, rename, and delete files and directories using
Linux commands.

Lab Purpose:
Get hands-on experience with basic file system management commands
needed for navigating and modifying Linux filesystems.

Lab Tool:

Any Linux distribution (e.g., Kali Linux, Ubuntu, etc.)

Lab Topology:

Single Linux machine or virtual environment.

Task 1: Create Files and Directories

Step 1: Open your terminal and create a new file called hackingskills using
cat:
cat > hackingskills

Type:
Hacking is the most valuable skill set of the 21st century!

Press Ctrl-D to save and exit.


Step 2: Append new content to the same file:

cat >> hackingskills

Type:
Everyone should learn hacking

Press Ctrl-D.
Step 3: View the contents of the file:
cat hackingskills

Step 4: Create a new empty file called newfile using touch:


touch newfile

Step 5: Create a directory named newdirectory:

mkdir newdirectory

Task 2: Copy, Rename, and Delete Files

Step 1: Copy oldfile (create it first if it doesn't exist):


touch oldfile

cp oldfile newdirectory/

Step 2: Rename the copied file inside newdirectory to newfile:


mv newdirectory/oldfile newdirectory/newfile
Step 3: Copy newfile from newdirectory to your home directory and rename
it to secretfile:

cp newdirectory/newfile ~/secretfile
The tilde (~) means your home folder.

Task 3: Remove Files and Directories

Step 1: Remove the newfile in the directory:

rm newdirectory/newfile

Step 2: Remove the newdirectory (ensure it's empty):


Normally, rmdir only works on empty directories. If the directory has
contents, rmdir will give an error.

rmdir newdirectory

Step 3: To remove a non-empty directory and all its contents, use:

rm -r newdirectory
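The rmdir versus rm -r behaviour is easy to verify with a throwaway directory (demo_dir is an arbitrary name used only for this sketch):

```shell
# Create a directory containing one file
mkdir demo_dir
touch demo_dir/somefile

# rmdir refuses non-empty directories and exits with a non-zero status
if ! rmdir demo_dir 2>/dev/null; then
    echo "rmdir failed: demo_dir is not empty"
fi

# rm -r removes the directory together with its contents
rm -r demo_dir
```

After the last line, demo_dir and everything inside it are gone.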

Task 4: Writing Into Files

Step 1: Write a specific text string into a new file:


echo "This is a test file." > myfile.txt
The > symbol is used for output redirection: it takes the output of a command and writes it to a file, creating the file if needed and overwriting it if it already exists. Here, it writes "This is a test file." into myfile.txt.
Step 2: Confirm that the text has been written into the file by viewing its
contents:
cat myfile.txt
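To see the difference between overwriting (>) and appending (>>) in one sitting, try this short sequence (redirect_demo.txt is an arbitrary scratch file):

```shell
echo "first line" > redirect_demo.txt    # creates the file (or overwrites it)
echo "second line" >> redirect_demo.txt  # appends; the file now has two lines
echo "replaced" > redirect_demo.txt      # overwrites; only one line remains
cat redirect_demo.txt
rm redirect_demo.txt                     # clean up
```

The final cat shows only "replaced", because the last > discarded the earlier contents.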

Notes:

●​ Always verify your actions with ls before and after to confirm


changes.
●​ Practice navigating with pwd, and exploring your directories with ls.

Lab 3: Navigating Directory Hierarchies with .. and ../..

Lab Objective:

Learn how to navigate to parent directories and traverse the directory tree
using relative paths .. and ../...

Lab Purpose:

Master directory navigation techniques essential for efficient command-line workflow in Linux.

Lab Tool:

Any Linux distribution (e.g., Kali Linux, Ubuntu, etc.)

Lab Topology:
Single Linux machine or virtual environment with multiple nested
directories.

Task 1: Explore Your Current Directory

Step 1: Open your terminal and verify your current location:


pwd

Step 2: List the contents of the directory:

ls -l

Task 2: Create a Nested Directory Structure

Step 1: Create nested directories for practice:

mkdir -p level1/level2/level3

The -p option in the mkdir command stands for "parents". It allows you to
create parent directories as needed, meaning:
●​ If any of the specified directories don't exist, mkdir -p will create all
necessary parent directories without error.
●​ If the directories already exist, it won't throw an error—they're simply
ignored.

Step 2: Change directory into level3:


cd level1/level2/level3

Step 3: Confirm your current position:


pwd

Task 3: Navigate Up the Directory Tree

Step 1: Move to the immediate parent directory (level2) using ..:

cd ..

Repeat the command to go up another level to level1:


cd ..
Step 2: Verify your current directory again:
pwd

Task 4: Traverse Multiple Levels Using Relative Paths

Step 1: Use ../.. to move up two levels in one step (from level1 to the directory above your home directory):

cd ../..

Check your location:


pwd

Step 2: From the current directory, go directly to level1/level2/level3 using relative navigation:

cd ruth/level1/level2/level3

(insert your own username in place of ruth)

Task 5: Practice with Absolute and Relative Paths


Step 1: Use an absolute path to go back to your home directory:

cd ~

Step 2: Use a relative path to move into level1:

cd level1

Step 3: Move into level2:

cd level2

Notes:

●​ The .. symbol always refers to the parent directory.


●​ The ../.. traverses up two levels, and so on.
●​ Using relative paths helps navigate efficiently within a directory tree
without needing full absolute paths.
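The notes above can be confirmed with a disposable tree (nav_demo is an arbitrary name):

```shell
mkdir -p nav_demo/a/b/c   # build a three-level tree
cd nav_demo/a/b/c
pwd                       # ends in nav_demo/a/b/c
cd ..                     # one level up
pwd                       # ends in nav_demo/a/b
cd ../..                  # two levels up in one step
pwd                       # ends in nav_demo
cd ..                     # back to the starting directory
rm -r nav_demo            # clean up
```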

Lab 4: Viewing Files with more, less, head, and tail

Lab Objective:
Learn how to effectively view and navigate large text files using more, less,
head, and tail commands.

Lab Purpose:

Master file viewing techniques essential for analyzing logs, configuration files, and large datasets in Kali Linux.

Lab Tool:

Kali Linux (or any Linux distribution)

Lab Topology:

Single Kali Linux machine or virtual environment with sample text files.

Task 1: Prepare a Sample File

Step 1: Create a sample text file containing multiple lines, for example:
seq 1 50 > sample.txt

This generates numbers 1 to 50 in the file sample.txt.


Step 2: View the file with cat to verify contents:
cat sample.txt

Task 2: Using more to View Files

Step 1: Open the file with more:


more sample.txt
Step 2: Use Enter to scroll line by line, or press Space to scroll page
by page.
Step 3: To exit, press q.
Task 3: Using less for Advanced Viewing

Step 1: Open the same file with less:


less sample.txt
Step 2: Use arrow keys or Page Up/Page Down to navigate.
Step 3: Search for a number, e.g., 25, by typing:
/25 then press Enter.

Step 4: Exit less by pressing q.

Task 4: Viewing the Beginning of Files with head

Step 1: Display the first 10 lines:


head sample.txt
Step 2: Show the first 20 lines:
head -n 20 sample.txt

Task 5: Viewing the End of Files with tail

Step 1: Display the last 10 lines:


tail sample.txt

Step 2: Show the last 20 lines:


tail -n 20 sample.txt

Class work: display last 30 lines

Notes:

●​ more is suitable for simple page-by-page viewing.


●​ less offers more navigation and search features.
●​ head and tail are useful for quickly viewing the beginning or end
of large files.
●​ Use these commands to efficiently analyze logs and data files in
Kali Linux.
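head and tail also combine well through a pipe when you need a slice from the middle of a file; for instance, lines 21 through 30 of sample.txt (take the first 30 lines, then the last 10 of those):

```shell
seq 1 50 > sample.txt                # recreate the sample file if needed
head -n 30 sample.txt | tail -n 10   # prints the numbers 21 through 30
```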

Lab 5: Manage Shared Libraries


Lab Objective:

Learn how to identify and install shared libraries.


Lab Purpose:

Shared libraries provide common functionality to one or more (often many) Linux applications. In this lab, you will learn how to manage these essential libraries. Think of shared libraries as toolkits or sets of tools used by many applications (programs). Instead of each program having its own copy of a common tool, they share one, which saves space and makes updates easier.

Lab Tool:

Kali Linux (latest version)

Lab Topology:

A single Kali Linux machine, or virtual machine.

Lab Walkthrough:

Task 1:

Open the Terminal and run:

cat /etc/ld.so.conf{,.d/*}

/etc/ld.so.conf and the included files specify the directories where shared
libraries are stored.
This syntax is a shortcut for /etc/ld.so.conf /etc/ld.so.conf.d/*.​
cat: Concatenates and displays the content of files.

● /etc/ld.so.conf: Main configuration file listing directories where shared libraries are stored.
●​ {,.d/*}: Brace expansion that includes:
○​ /etc/ld.so.conf
○​ All files in /etc/ld.so.conf.d/ directory.

This command shows all directories where the system looks for shared
libraries.
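Brace expansion works on any strings, not just these paths; the shell expands the braces before the command ever runs, as a quick echo shows:

```shell
echo file{,.bak}              # the shell expands this to: file file.bak
echo /etc/ld.so.conf{,.d/*}   # same idea: the main file plus the contents of the .d directory
```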

Task 2:

Now run:

ldd $(which passwd)

●​ ldd $(which passwd) shows the specific shared libraries that passwd
uses when executed.
●​ ldd: "List Dynamic Dependencies" — shows all shared libraries linked
to a binary.
●​ which passwd: Finds the full path of the passwd command.
○​ which: searches for a command in your PATH.
○​ passwd: the command used to change user passwords.
●​ $(): Command substitution — runs the command inside and replaces
it with its output.

So, which passwd gives you the path to passwd, and ldd shows all the
shared libraries needed to run it.
This command shows which shared libraries are linked by the passwd tool.

Lab 6: Creating Users and Groups


Lab Objective:

Learn how to create and customize users and groups.

Lab Purpose:

In this lab, you will learn how to create users and groups, as well as set
their passwords.

Lab Tool:

Kali Linux (latest version)

Lab Topology:

A single Kali Linux machine, or virtual machine.

Lab Walkthrough:

Task 1:

Open the Terminal and run:

sudo sh -c 'echo "Hello World" > /etc/skel/hello.txt'

Here, you’re adding a file to the /etc/skel directory, which determines the
default files in a new user’s home directory. Now:

sudo useradd -m foo


sudo cat ~foo/hello.txt

You should see the output “Hello World.” You created a new user called
foo, and every file in /etc/skel was copied to the new user’s home directory.
Finally, set an initial password for this user with:

sudo passwd foo

Task 2:

Now, let’s add a system user. Run:

sudo useradd -r bar

When adding a system user, useradd does not add account or password
aging information and sets a user ID appropriate for a system user
(typically below 1000). It also does not create a home directory unless you
specify -m.

Task 3:

Now, create a group and add our new users to it:

sudo groupadd baz


sudo usermod -a -G baz foo

sudo usermod -a -G baz bar

Verify that the users were added with:

grep ^baz /etc/group

Task 4:

Finally, clean up:

sudo rm /etc/skel/hello.txt

sudo userdel foo


sudo userdel bar

sudo groupdel baz

sudo rm -r /home/foo

Notes:

The usermod command is essential for managing users. Besides adding and removing users from groups, it allows you to manage account and password expirations and other security features.

Lab 7: Basic Security and Identifying User Types


Lab Objective:

Learn how to identify the differences between the root user, standard users,
and system users.

Lab Purpose:

In this lab, you will learn about the different types of Linux users and tools
to identify and audit them.

Lab Tool:

Kali Linux (latest version)


Lab Topology:

A single Kali Linux machine, or virtual machine.

Lab Walkthrough:

Task 1:

Open the Terminal and run:

cat /etc/passwd

● Many users have /usr/sbin/nologin or /bin/false as their shell:
○ This means these accounts are not intended for login access; they're service accounts.
●​ The accounts like sshd, dnsmasq, avahi, polkitd, etc., are system or
service accounts used by background services or daemons.
●​ The account ruth:
○​ User ruth with UID and GID 1000, meaning it's a regular user.
○​ Its home directory is /home/ruth, and the login shell is
/usr/bin/zsh, a shell alternative to bash.
This file is an inventory of all users on your system. The passwd file is a
database with seven fields, separated by colons:

1.​ Username
2.​ Encrypted password (rarely used)
3.​ User ID
4.​ Group ID
5.​ Comment
6.​ Home directory
7.​ Default shell

Standard user IDs typically begin at 1000 or 500, while most others are
system users used to run specific services.

To see the passwd information for your own user, run:

grep ^$USER /etc/passwd
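Because /etc/passwd is a plain colon-separated database, you can query it with standard text tools; this sketch uses awk to list the username, UID, and shell of regular users (UID 1000 and above, excluding the nobody account):

```shell
# -F: sets the field separator to a colon; $1=username, $3=UID, $7=shell
awk -F: '$3 >= 1000 && $3 != 65534 { print $1, $3, $7 }' /etc/passwd
```

On the lab machine this would print a line such as the ruth account with UID 1000 and its zsh shell.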

Task 2:

Now run:

sudo cat /etc/shadow


The shadow file is sensitive and contains hashed user passwords. It has
nine fields, with the first three being significant:

1.​ Username
2.​ Encrypted password
3.​ Date of last password change

You can view this information for your own user with:

sudo grep ^$USER /etc/shadow

Task 3:

Run:
cat /etc/group

This file contains a database of all groups on the system, with four fields
separated by colons:

1.​ Group name


2.​ Encrypted group password
3.​ Group ID
4.​ List of users who are members

Run groups to see which groups your user is a member of.

Get more info, including group IDs, with:

id
Task 4:

Finally, use these commands to audit user logins:

who

● Interpretation:
User ruth is logged in on tty7, which typically corresponds to the graphical display (the desktop environment). The login time is May 7, 2025, at 03:52.

w

● Interpretation:
○ User ruth has been logged in on tty7 for 2 days.
○ The load average indicates system activity over the last 1, 5, and 15 minutes.

last

● Interpretation:
○ ruth logged in on tty7 at 03:52 on May 7, but the session ended without proper logout (gone - no logout).
○ The system was booted twice, on May 2 and May 7.

last $USER # For your own user

● Shows login history specifically for the current user (ruth).
● Output matches previous login times, indicating the current session started on May 2 and continued until now.

Lab 8: File Permissions


Lab Objective:

Learn how to manipulate permissions and ownership settings of files.

Lab Purpose:

In this lab, you will learn to use chmod, chown, chgrp, and umask.

Lab Tool:
Kali Linux (latest version)

Lab Topology:

A single Kali Linux machine, or virtual machine.

Lab Walkthrough:

Task 1:

Open the Terminal and run:

echo "Hello World" > foo

ls -l foo

Check the first field; you should see -rw-r--r--. This indicates read/write
permissions for the user who owns the file (rw-), read-only permissions for
the group (r--), and read-only permissions for others (r--). The first
character signifies the file type.

Identify the user and group owners of this file in the third and fourth fields of
the ls -l output.

File Permissions in Linux

Basic structure
●​ The first character indicates the file type:
○​ - : Regular file
○​ d : Directory
○​ l : Symbolic link
○​ c : Character special file
○​ b : Block special file
○​ p : Pipe (named pipe)
○​ s : Socket
●​ Next nine characters: permissions for user, group, others in sets of
three

Permission characters

●​ r: read
●​ w: write
●​ x: execute
●​ -: no permission

Permission sets

Set     Description                                    Example combinations
User    Permissions of the file owner                  e.g., rwx, rw-, r--
Group   Permissions for members of the file's group    e.g., r-x, r--
Others  Permissions for everyone else                  e.g., r--, --x

Examples

●​ -rw-r--r--: user can read/write, group can read, others can read
●​ -rwxr-xr-x: user has full control, group and others can read and
execute
●​ -rwx------: user has full control, no one else has access

Permission codes (octal)

Code   Permissions   Explanation
0      ---           no permissions
1      --x           execute only
2      -w-           write only
3      -wx           write and execute
4      r--           read only
5      r-x           read and execute
6      rw-           read and write
7      rwx           read, write, execute
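You can watch the octal and symbolic forms line up by setting permissions explicitly (permtest is an arbitrary filename):

```shell
touch permtest
chmod 754 permtest      # 7=rwx (user), 5=r-x (group), 4=r-- (others)
ls -l permtest          # the first field reads -rwxr-xr--
stat -c '%a' permtest   # prints the octal form back: 754
rm permtest
```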
Task 2:

Now run:

sudo chown root foo

ls -l foo

cat foo

You’ve changed the user ownership to root, and the group ownership
remains the same. Since the file has group- and world-read permissions,
you can still see its contents.

Task 3:

Now run:

sudo chmod o-r foo


The o-r part in the command sudo chmod o-r foo means:
●​ o: others — users who are neither the owner nor in the group
●​ -r: remove read permission

In essence, this command removes read permission for everyone who is not the owner or in the group on the file or directory named foo.

chmod stands for "change mode". It is a command used in Linux and Unix systems to change the permissions (mode) of files and directories, controlling who can read, write, or execute them.

ls -l foo

cat foo

This chmod command removes read permissions for others. However, since you still have group ownership, you can still see the file’s contents.

Task 4:

Now run:
sudo chgrp root foo

sudo chmod 600 foo

ls -l foo

cat foo

This changes the permissions to read/write for the owning user only. As the
owner is root, you can no longer read the file.

Task 5:

The default permissions for newly created files can be adjusted with
umask:

u=$(umask)

umask 0777
● This masks out all permissions for new files and directories, meaning no permissions will be granted to anyone when creating new files or directories.

touch bar

ls -l bar

umask $u

● Restores your previous umask (stored in $u) so that future files/directories have default permissions again.

The bar file should have no permissions set, as you masked them all out.

What is umask?

● Think of it as a mask or filter that strips away certain permission bits when new files or directories are created.
● It defines what permissions are not granted by default.
Default permissions:

● Files: default to 666 (read/write for owner, group, others)
● Directories: default to 777 (read/write/execute for owner, group, others)

How umask works:

●​ When creating a new file, the umask value is subtracted from the
default permissions.
●​ The result is the actual permissions assigned.

Basic example:

Suppose:

●​ Default for new files = 666


●​ Default for new directories = 777

And:

umask 022

Calculation for a new file:

●​ 666 minus 022 → 644


●​ Permissions: owner can read/write, others can read

Calculation for a new directory:

●​ 777 minus 022 → 755


●​ Permissions: owner can read/write/execute, others can read/execute
What does umask 0777 do?

● 0777 in octal consists of:
○ a leading 0 for the special (setuid/setgid/sticky) bits
○ 7 (rwx) for the owner
○ 7 (rwx) for the group
○ 7 (rwx) for others
● When you set umask 0777:
○ All permission bits are masked/made unavailable for the owner, group, and others.
○ Resulting permissions for files and directories:

For files:

●​ Default: 666
●​ 666 minus 777 → 000
●​ Result: no permissions for owner, group, or others.

For directories:

●​ Default: 777
●​ 777 minus 777 → 000
●​ Result: no permissions at all— owner, group, others all have no
access.

In summary:

umask value   Effect on permissions for files       Effect on permissions for directories
0000          not masked; default 666 stays 666     default 777 stays 777
077           666 - 077 = 600                       777 - 077 = 700
022           666 - 022 = 644                       777 - 022 = 755
0777          666 - 777 = 000 (none)                777 - 777 = 000 (none)

Caution:

● Setting umask 0777 is rare and very restrictive: it essentially prevents all access to any new files/directories for everyone, including the owner.
● Usually, more moderate umask values like 002 or 022 are used.
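The umask arithmetic described above can be verified directly; this sketch applies a umask of 022, creates a file and a directory, and reads back the resulting octal permissions (the names are arbitrary):

```shell
old=$(umask)                   # remember the current umask
umask 022
touch mask_demo_file
mkdir mask_demo_dir
stat -c '%a' mask_demo_file    # 666 masked by 022 gives 644
stat -c '%a' mask_demo_dir     # 777 masked by 022 gives 755
umask "$old"                   # restore the original umask
rm -r mask_demo_file mask_demo_dir
```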

Task 6:

Finally, clean up:

sudo rm foo bar

● Deletes files foo and bar as superuser (sudo) to bypass permission restrictions, just in case.

Lab 9: Shell Scripting

Lab Objective:

Learn how to write a simple shell script containing multiple commands.

Lab Purpose:

Scripting with Bash is essential for many professional Linux administrators. When tasks are repetitive, it’s inefficient to type the same commands repeatedly; instead, you create a script and potentially schedule it to run automatically.

Lab Tool:

Kali Linux (latest version)

Lab Topology:

A single Kali Linux machine, or virtual machine.

Lab Walkthrough:

Task 1:

Open the Terminal application and run your favorite text editor, such as vi
or nano. (If you have never used vi or vim, then nano is recommended.)

Type:

nano script.txt


Create a file with the following contents:

#!/bin/bash

for (( n=0; n<$1; n++ )); do
    echo $n
done

echo "Output from $0 complete."

Save (Ctrl + O), press enter to write to file, then exit (Ctrl + X).

Task 2:

Make the file executable:

chmod +x script.txt

Then run it like this:

./script.txt 10

Task 3:
Run your script without an argument:

./script.txt

Then check the exit status:

echo $?

$? is a special variable that references the exit status. A successful program returns 0. Try it for yourself.

Notes:

The first line, starting with #! (called a shebang), denotes which program
will interpret your script. This is typically /bin/bash, but it could also be
another shell (like /bin/zsh) or even another programming language (like
/usr/bin/python). Always include it to ensure compatibility across different
environments.
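Building on the shebang and $? ideas above, a script can validate its own arguments and return a meaningful exit status; this is a sketch, with count.sh as an arbitrary name not used elsewhere in the lab:

```shell
cat > count.sh << 'EOF'
#!/bin/bash
# Exit with status 1 and a usage message if no argument was given
if [ $# -lt 1 ]; then
    echo "Usage: $0 <count>" >&2
    exit 1
fi
for (( n=0; n<$1; n++ )); do
    echo $n
done
EOF
chmod +x count.sh

./count.sh 3                               # prints 0, 1, 2
./count.sh || echo "exit status was $?"    # usage message, then: exit status was 1
rm count.sh
```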

Lab 10: Apt

Lab Objective:

Learn how to use the Apt package manager to install, upgrade, search for,
and uninstall packages.
Lab Purpose:

The Apt package manager is a collection of tools designed for Debian-based distributions to install and manage DEB-based packages.

Lab Tool:

Kali Linux (latest version)

Lab Topology:

A single Kali Linux machine, or virtual machine.

Lab Walkthrough:

Task 1:

Open the Terminal and run the following commands:

grep -v -e ^# -e ^$ /etc/apt/sources.list

cat /etc/apt/sources.list

These commands display the repositories used by Apt. Reputable third-party repositories can be added to these files, but we’ll use the defaults for now.

Apt uses a cache that should be updated periodically. Do that now with:

sudo apt update

Then run all system updates:

sudo apt -y full-upgrade

You might not have any updates even if it has been a while since checking; the unattended-upgrades package may be installing them automatically. Check whether it is installed with:

apt list --installed | grep unattended-upgrades

Task 2:

Now install a useful package:

sudo apt install apt-file

Then update the apt-file database:


sudo apt-file update

Let’s say you downloaded the source code for an application and it
complains that “libvorbis.so” is missing. You can use apt-file to check if a
package contains it:

apt-file search libvorbis.so

You should find several packages, the most relevant being libvorbis-dev.
Confirm by running:

apt show libvorbis-dev

Installing that package will allow you to compile your application.

Notes:

You might be familiar with classic tools like apt-get and apt-cache. While
Apt doesn’t completely replace them, it’s more suitable for interactive usage
than scripting.

Lab 11: Dpkg

Lab Objective:

Learn how to use the dpkg package manager to install, uninstall, and
obtain information on packages.

Lab Purpose:

Dpkg is a tool for installing Debian-based packages. It does not use repositories nor have automatic dependency resolution like Apt does, but it is useful for managing packages you may have downloaded outside of official repos.

Lab Tool:

Kali Linux (latest version)

Lab Topology:

A single Kali Linux machine, or virtual machine.

Lab Walkthrough:

Task 1:

Open the Terminal and run:

dpkg -l

This lengthy list is your list of installed packages. To get more information
on a given package, try:

dpkg -s sudo

dpkg -L sudo

dpkg -p sudo

Some packages have configuration scripts which can be re-run with:

sudo dpkg-reconfigure [package]

To install a package manually, you would download the .deb file and run:

sudo dpkg -i [file]

However, you would need to install the necessary dependencies manually first. To remove the package when you’re done with it, run:

sudo dpkg -P [package]

Notes:

Though dpkg is great for information gathering, in practice it is generally better to manage your installed packages through a single central database. This may involve adding a third-party repository or downloading a package and then manually adding it to Apt.

Lab 12 – Major Open Source Applications: Desktop

Lab Objective:​
Learn how to access and use major open-source desktop applications on
Kali Linux.

Lab Purpose:​
In this lab, you will become familiar with common open-source desktop
tools available in Kali Linux. These tools are widely used for productivity,
media editing, and web browsing in Linux environments.

Lab Tool:​
Kali Linux (latest version)

Lab Topology:​
A single Kali Linux machine, or virtual machine.

Lab Walkthrough:

Task 1: Explore Installed Desktop Applications​


Open your Terminal and list all installed GUI applications using:

ls /usr/share/applications/
Scroll through the list and take note of common applications such as:

●​ LibreOffice
●​ GIMP
●​ Firefox ESR
●​ Terminal
●​ File Manager

Open at least three of these applications from the terminal or the applications menu.

Question:​
Which desktop applications did you open, and what is their primary
function?

Task 2: Use LibreOffice for Document Editing​


Open LibreOffice Writer.​
Type a short paragraph about open-source software.​
Save the document in two formats:

●​ As an .odt file (LibreOffice document format)


●​ As a .pdf file

Close LibreOffice and verify that the files were saved successfully.

Question:​
Which format is better for sharing with users on different platforms, and
why?

Task 3: Use GIMP for Image Editing​


Launch GIMP.​
Create a new image with dimensions 800x600.​
Add some text and draw a shape (e.g., rectangle or circle).​
Export the image as a .png file.

Question:​
What are some advantages of using GIMP over proprietary tools?
Task 4: Browse the Web with Firefox ESR​
Open Firefox ESR.​
Visit the following websites:

●​ https://www.kali.org
●​ https://libreoffice.org
●​ https://gimp.org

Bookmark each site.​


Change the default homepage to one of the above sites.

Question:​
How can browser bookmarks and a customized homepage improve your
workflow?

Task 5: Install a New Desktop Application​


Open your Terminal.​
Update your system packages:

sudo apt update

Install VLC Media Player:

sudo apt install vlc -y

Launch VLC either from the applications menu or by typing:

vlc

Question:​
What command would you use to remove VLC if you no longer need it?

Notes:​
Open-source desktop applications are powerful alternatives to commercial
software. Kali Linux comes with several productivity and multimedia tools
pre-installed, and you can install more applications using the apt package
manager.

Lab 13 – ICT Skills and Working in Linux

Lab Objective:​
Learn essential ICT skills and foundational Linux operations using the Kali
Linux environment.

Lab Purpose:​
In this lab, you will practice basic Linux command-line skills, file
management, user permissions, and system navigation—critical for
working effectively in any Linux-based environment.

Lab Tool:​
Kali Linux (latest version)

Lab Topology:​
A single Kali Linux machine, or virtual machine.

Lab Walkthrough:

Task 1: Basic Command Line Navigation​


Open your Terminal and run the following commands:

pwd​
ls -l​
cd /etc​
ls -a​
cd ~

Question:​
What does the pwd command show? How is ~ used in Linux file paths?

Task 2: File and Directory Management​


Create a directory:
mkdir ~/lab5

Navigate into it:

cd ~/lab5

Create a file and write some text into it:

echo "Linux skills are essential!" > notes.txt

Copy the file:

cp notes.txt copy_notes.txt

Rename the file:

mv copy_notes.txt final_notes.txt

Delete the original:

rm notes.txt

Question:​
What command would you use to move a file to a different directory?

Task 3: Working with Users and Permissions​


View the current user:

whoami

Create a new user (requires root privileges):

sudo adduser testuser

Set a password when prompted.​


Change ownership of a file:

sudo chown testuser:testuser final_notes.txt


Check permissions:

ls -l final_notes.txt

Question:​
What does the chown command do, and why is it important?

Task 4: Working with Processes and Resources​


View running processes:

ps aux

Find a process using grep (e.g., Terminal):

ps aux | grep gnome-terminal

Check system resource usage:

top

Exit top by pressing q.

Question:​
How would you find and kill a misbehaving process?

Task 5: Getting Help in Linux​


Use man to read about a command:

man ls

Use --help:

mkdir --help

Question:​
What is the difference between using man and --help?
Notes:​
Mastering basic Linux commands is critical for all IT roles.​
Kali Linux is a full-featured Linux OS and can be used for general ICT skill
development, not just penetration testing.​
Practice using the terminal regularly to improve your confidence and
efficiency.

Lab 14 – Command Line Basics: Syntax and Quoting

Lab Objective:​
Learn the basic syntax of Linux commands and understand how quoting
works in the command line to manage strings, files, and commands
properly.

Lab Purpose:​
In this lab, you will explore the syntax of Linux commands, including how
to use and differentiate between different types of quotes (single, double,
and backticks) in the command line.

Lab Tool:​
Kali Linux (latest version)

Lab Topology:​
A single Kali Linux machine, or virtual machine.

Lab Walkthrough:

Task 1: Basic Command Line Syntax​


Open your Terminal.​
Run the following commands:

ls /etc​
cat /etc/hostname​
ls -l /home
Question:​
What is the purpose of the -l flag with the ls command?

Task 2: Understanding and Using Quotation Marks​


Single Quotes (')​
Create a file containing a string with spaces using single quotes:

echo 'This is a test' > single_quotes.txt

Check the contents:

cat single_quotes.txt

Double Quotes (")​


Create a file with variables inside a string using double quotes:

my_var="Kali Linux"​
echo "Welcome to $my_var" > double_quotes.txt

Check the contents:

cat double_quotes.txt

Backticks (`) (used for command substitution)


Display the current date by substituting the output of the date command:

echo "Today's date is: `date`"

The modern equivalent uses $( ):

echo "Today's date is: $(date)"

Question:​
How does the use of single quotes differ from double quotes when
handling variables in the command line?
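The difference can be seen directly (my_var here is just an illustrative variable):

```shell
my_var="Kali Linux"

# Single quotes suppress variable expansion: the literal text is printed.
echo 'Welcome to $my_var'

# Double quotes allow expansion: $my_var is replaced with its value.
echo "Welcome to $my_var"
```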

Task 3: Mixing Quotation Marks​


Try combining both types of quotes:

echo 'The current directory is "$PWD"'


Run the same command with double quotes:

echo "The current directory is '$PWD'"

Question:​
What happens when you combine single and double quotes? How does
the command output differ?

Task 4: Escaping Characters​


Use the escape character \ to include special characters literally:

echo "This is a backslash: \\"

echo "This is a dollar sign: \$"

Try escaping spaces in filenames with backslashes instead of quotes:

touch file\ with\ spaces.txt

ls -l file\ with\ spaces.txt

Question:​
What is the purpose of escaping characters like spaces or dollar signs?
Why is this necessary in some scenarios?

Task 5: Combining Multiple Commands​


Use && to run two commands sequentially, but only if the first succeeds:

mkdir test_directory && cd test_directory

Use ; to run two commands independently:

mkdir test_directory ; echo "Test directory created"

Question:​
How do the && and ; operators differ when chaining commands?
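A quick way to see the difference (the directory name is generated with mktemp -u just so it does not already exist; || is included as the "only on failure" counterpart to &&):

```shell
# mktemp -u prints a unique, not-yet-existing path without creating it.
demo=$(mktemp -u)

# && : the second command runs only if the first succeeds.
mkdir "$demo" && echo "created"

# ;  : the second command runs regardless; this mkdir fails, echo still runs.
mkdir "$demo" 2>/dev/null ; echo "ran anyway"

# || : the second command runs only if the first FAILS.
mkdir "$demo" 2>/dev/null || echo "already existed"

rmdir "$demo"
```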
Notes:

●​ Understanding command line syntax and quoting is crucial for


efficient Linux work.
●​ The type of quotes (single, double, backticks) affects how variables
and special characters are handled.
●​ Escaping characters with \ ensures your commands work as intended
when dealing with spaces or special symbols.

Lab 15 – Using the Command Line to Get Help

Lab Objective:
Learn how to use various command-line tools and options to access help
documentation for Linux commands and applications.

Lab Purpose:
In this lab, you will practice using the built-in help utilities in Kali Linux, such
as man, info, --help, and whatis, to get assistance and learn about
commands and their options.

Lab Tool:
Kali Linux (latest version)

Lab Topology:
A single Kali Linux machine, or virtual machine.

Lab Walkthrough:

Task 1: Using the man Command (Manual Pages)


1.​ Open your Terminal.
2.​ Type the following command to get the manual page for the ls
command:
man ls
3.​ Navigate through the manual using the arrow keys or j and k.
4.​ Search within the manual by pressing / followed by the search term
(e.g., /option).
Exit the manual by pressing q.
Question: What information can you typically find in a man page? How is it
structured?

Task 2: Using the --help Option


1.​ Open your Terminal and type the following command to view the help
information for the cp (copy) command:
cp --help
2.​ Look for common options (e.g., -r, -i) in the output.
3.​ Try another command, such as:
rm --help
Question: How does the information provided by --help differ from that in
the man page?

Task 3: Using the info Command


In your Terminal, type the following to view the information page for ls:
info ls
Use the arrow keys or j and k to scroll through the information.
To exit the info page, press q.
Question: How does the info page compare to the man page? Is there any
difference in formatting or content?

Task 4: Using the whatis Command


1.​ Open your Terminal and run the following command to get a short
description of the ls command:
whatis ls
2.​ Run the whatis command for another common command, such as:
whatis cp
Question: What type of information does whatis provide, and how is it
different from man or info?

Task 5: Using the apropos Command


1.​ If you are unsure about the name of a command, use apropos to
search for
commands related to a specific keyword. For example, to find commands
related to "copy":
apropos copy
2. Browse through the list of results and note any relevant commands.
Question: How can apropos be helpful when you don't know the exact
name of the command you're looking for?
Task 6: Using the which Command
1.​ Use which to find the location of a command's executable file. For
example:
which ls
2.​ Try this with another command:
which python3
Question: What does which show you, and why might this be useful?

Notes:
man pages are the primary source of detailed documentation for most
Linux commands.
--help is a quick reference for basic usage and options of a command.
info pages can sometimes provide more detailed information than man
pages, but they are less commonly used.
whatis and apropos help you find basic descriptions of commands or
search for commands related to a specific topic.
which shows the full path to the executable file for a command, which
can help diagnose issues related to system paths or command
availability.

Lab 16 – Using Directories and Listing Files: Directories

Lab Objective:​
Learn how to manage directories, navigate the file system, and list files
using various command-line tools in Kali Linux.

Lab Purpose:​
In this lab, you will explore basic operations for creating, removing, and
navigating directories, as well as listing files and directories within the Linux
file system.

Lab Tool:​
Kali Linux (latest version)

Lab Topology:​
A single Kali Linux machine, or virtual machine.

Lab Walkthrough:
Task 1: Navigating the File System​
Open your Terminal.​
Print the current directory:

pwd

List the files in the current directory:

ls

List files with detailed information:

ls -l

List all files including hidden ones:

ls -a

Question:​
What is the difference between ls and ls -a? How do hidden files behave in
Linux?

Task 2: Creating Directories​


Create a new directory:

mkdir test_directory

Verify the creation:

ls

Create a directory with spaces in its name:

mkdir "Test Directory With Spaces"

Verify its creation:


ls

Question:​
Why is it important to know how to handle directories with spaces in their
names?

Task 3: Navigating Directories​


Change into the new directory:

cd test_directory

Confirm your location:

pwd

Return to the home directory:

cd ~

Navigate to the root directory:

cd /

Question:​
How does the cd command work with absolute and relative paths? What is
the significance of ~ and /?

Task 4: Removing Directories​


Remove the test_directory:

rmdir test_directory

Remove the directory with spaces:

rmdir "Test Directory With Spaces"

Important:
rmdir only removes empty directories.​
To delete directories that contain files or subdirectories, use:​
rm -r directory_name

Question:​
What is the difference between rmdir and rm -r when removing directories?

Task 5: Listing Files with More Details​


List all files and directories with details:

ls -l

Display sizes in a human-readable format:

ls -lh

Sort files by modification time:

ls -lt

Question:​
How do the options -l, -h, and -t modify the output of the ls command?

Task 6: Working with Directory Paths​


Use absolute paths to list a specific directory:

ls /etc

Use relative paths to navigate and list files in a subdirectory:

cd /etc​
ls ../

Question:​
What is the difference between an absolute path and a relative path in
Linux?
Task 7: Using Tab Completion for Directories​
Type part of a directory name, e.g.:

cd Doc

Notice how pressing Tab completes the directory name if it’s unique.​
This speeds up navigation and reduces typos.

Question:​
How does tab completion improve efficiency when navigating directories?

Notes:

●​ Linux directories are key to organizing your files.


●​ Hidden files (those starting with a dot) often store configuration data.
●​ Be cautious when removing directories: use rmdir for empty
directories, and rm -r for directories with contents.

Lab 17 – Searching and Extracting Data from Files: Pipes and


Redirection

Lab Objective:​
Learn how to use pipes (|) and redirection (>, >>, <) in the command
line to search, filter, and manipulate data from files and commands.

Lab Purpose:​
In this lab, you will practice combining commands with pipes,
redirecting output to files, and input redirection in Kali Linux. You will
also explore the power of searching and extracting specific data from
files using command-line tools.

Lab Tool:​
Kali Linux (latest version)
Lab Topology:​
A single Kali Linux machine or virtual machine.

Task 1: Using Redirection to Write and Append to Files

Open your Terminal.​


Create a new file with some text using output redirection (>):

echo "This is the first line" > myfile.txt

Append additional text to the same file with append redirection (>>):

echo "This is the second line" >> myfile.txt

View the contents of the file:

cat myfile.txt

Question:​
What is the difference between > and >> when redirecting output to a
file?

Task 2: Using Redirection to Read from Files

Display data from a file using input redirection (<):

cat < myfile.txt

Question:​
How does input redirection work, and how is it different from output
redirection?

Task 3: Using Pipes to Combine Commands


Use a pipe (|) to pass the output from one command to another.​
Count the number of files in the current directory:

ls | wc -l

List running processes and search for a specific process like ssh:

ps aux | grep ssh

Question:​
How does piping (|) allow you to chain commands together? Why is
this useful?

Task 4: Searching for Data in Files Using grep

Create a file with multiple lines of text:

echo -e "apple\nbanana\ncherry\napple\norange" > fruits.txt

Search for the word "apple" in the file:

grep "apple" fruits.txt

Search case-insensitively:

grep -i "apple" fruits.txt

Question:​
What is the difference between case-sensitive and case-insensitive
searches with grep?

Task 5: Using grep with Pipes to Filter Command Output

List processes related to apache2:

ps aux | grep apache2


Exclude processes related to ssh:

ps aux | grep -v ssh

Question:​
How does grep with the -v option change the search behavior?

Task 6: Redirecting Output to Multiple Files with tee

Capture command output both to a file and the terminal:

ls | tee directory_list.txt

Verify the saved data:

cat directory_list.txt

Question:​
What is the benefit of using the tee command, and how does it differ
from simple redirection?

Task 7: Combining Multiple Commands with Pipes

Search system messages for errors:

sudo dmesg | grep error

List files with .log extension in /var/log, then sort them:

ls /var/log | grep "\.log" | sort

Question:​
How does chaining multiple commands with pipes help refine your
search or filter output more effectively?
Notes:

●​ Redirection (>, >>, <) manages input and output in commands,


useful for logging or reading data.
●​ Pipes (|) connect commands, allowing for complex data
processing streams.
●​ grep is a powerful tool for searching text data and can be
combined with others to filter or extract info efficiently.
●​ tee lets you view output on the screen and save it
simultaneously, useful for monitoring and record-keeping.

Lab 18 – Searching and Extracting Data from Files: Regular


Expressions

Lab Objective:
Learn how to use regular expressions (regex) with the grep command and
other tools in Kali Linux to search for complex patterns in files.

Lab Purpose:
In this lab, you will practice using regular expressions to search and extract
data from files. You will understand how to create powerful search
patterns to match text based on specific criteria.

Lab Tool:
Kali Linux (latest version)

Lab Topology:
A single Kali Linux machine, or virtual machine.

Lab Walkthrough:

Task 1: Understanding Regular Expressions Basics


1.​ Open your Terminal.
2.​ Basic Pattern Matching: Use grep to search for a simple string
pattern in a file. For example:
echo -e "apple\nbanana\ncherry" > fruits.txt
grep "banana" fruits.txt
Question: What happens when you search for a string that does not exist
in a file? How does grep behave?

Task 2: Using Special Characters in Regular Expressions


1.​ Dot (.): The dot matches any single character. For example:
echo -e "cat\nbat\nrat\ndog" > animals.txt
grep "c.t" animals.txt
This will match any three-character word starting with "c" and ending
with "t".
2.​ Asterisk (*): The asterisk matches zero or more of the preceding
element. For example:
grep "a*" animals.txt
Question: How does the dot (.) and asterisk (*) affect the search behavior?
What does grep "a*" match in the file?
Task 3: Anchors in Regular Expressions
1.​ Caret (^): The caret anchors the pattern to the start of the line. For
example:
echo -e "apple\nbanana\ncherry" > fruits.txt
grep "^b" fruits.txt
This will match lines that start with "b".
2.​ Dollar Sign ($): The dollar sign anchors the pattern to the end of the
line. For example:
grep "y$" fruits.txt
Question: What does the caret (^) and dollar sign ($) do when placed in
front or behind a pattern? How would you use them to match the start or
end of a string?
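Combining both anchors pins the pattern to the whole line, as this sketch shows (anchor_demo.txt is a scratch file):

```shell
printf 'apple\npineapple\napples\n' > anchor_demo.txt

# ^apple$ matches only lines that are exactly "apple":
# "pineapple" fails the ^ anchor, "apples" fails the $ anchor.
grep "^apple$" anchor_demo.txt

rm anchor_demo.txt
```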

Task 4: Using Square Brackets for Character Classes


1.​ Character Set ([]): Square brackets allow you to specify a set of
characters. For example:
echo -e "cat\nbat\nrat\ndog" > animals.txt
grep "[bc]at" animals.txt
This matches "cat" or "bat".
2.​ Character Ranges ([a-z]): You can specify ranges of characters. For
example:
grep "[a-z]at" animals.txt
3.​ Negation ([^]): Use [^] to match anything except the characters in the
brackets. For example:
grep "[^a]at" animals.txt
Question: What does the negation ([^]) inside square brackets do? How
would you use it in a search?

Task 5: Using grep with Extended Regular Expressions (ERE)


1. Extended Regular Expressions (-E): Use the -E option with grep to
enable extended regular expressions. For example:
echo -e "ab\nac\nad" > letters.txt
grep -E "a(b|c)" letters.txt
This matches the lines containing "ab" or "ac", but not "ad".
2.​ Plus Sign (+): The plus sign matches one or more of the preceding
element (similar to *, but requires at least one match). For example:
echo -e "cat\ncatt\ncattt\n" > animals.txt
grep -E "cat+" animals.txt
Question: How does using the -E flag change the behavior of grep? What
does the + symbol do in regex?

Task 6: Using grep with Multiple Patterns


1.​ Use grep to search for multiple patterns. For example, search for both
"apple" and "banana" in the fruits.txt file:
grep -E "apple|banana" fruits.txt
Alternatively, pass multiple patterns with repeated -e options:
grep -e "apple" -e "banana" fruits.txt
(Chaining greps with pipes, e.g. grep "apple" | grep "banana", would
instead keep only lines matching both patterns: an AND, not an OR.)
Question: How does using the | (OR) operator inside grep -E allow for
searching multiple patterns?

Task 7: Matching Multiple Lines Using grep and -P (Perl-Compatible


Regex)
1.​ Use Perl-compatible regular expressions (-P) to match patterns
across multiple lines. For example:
echo -e "apple\nbanana\ncherry" > fruits.txt
grep -Pzo "(apple\nbanana)" fruits.txt
The -z option makes grep treat the file as a single line, and -P enables
Perl
compatible regex.
Question: How does the -P option in grep change the way regex operates
compared to standard regular expressions? What does the -z flag do?

Task 8: Using sed with Regular Expressions


1.​ Substituting Text: Use sed to substitute text matching a regular
expression. For example:
sed 's/cat/dog/' animals.txt
2.​ Using Regular Expressions for Substitution:
sed 's/[a-z]/X/g' animals.txt
Question: How does sed utilize regular expressions to perform
substitution? How can you modify the output of sed with different regex
patterns?
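sed also supports extended regex via -E, including capture groups. A contrived sketch (sed_demo.txt is a scratch file; \1 is a backreference to the first parenthesized group):

```shell
printf 'cat\nbat\nrat\n' > sed_demo.txt

# \1 substitutes whatever the first (...) group captured,
# turning "cat" into "cird" and "bat" into "bird"; "rat" is untouched.
sed -E 's/^(c|b)at/\1ird/' sed_demo.txt

rm sed_demo.txt
```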

Notes:
✔​ Regular expressions are powerful tools for pattern matching, making it
easier to search for complex strings in files.
✔​ grep is one of the most common utilities for searching using regular
expressions, but there are other tools like sed and awk that provide
extended functionality with regex.
✔​ Mastery of regular expressions can greatly improve your ability to
search, manipulate, and extract data from files and command outputs in
Linux.

Lab 19 – Turning Commands into a Script: Scripting Practice

Lab Objective:
Learn how to convert a series of commands into a reusable script to
automate tasks
in Kali Linux.

Lab Purpose:
In this lab, you will practice creating, editing, and running a simple shell
script. You will learn how to automate a series of commands to make your
workflow more efficient and how to make your scripts executable.

Lab Tool:
Kali Linux (latest version)

Lab Topology:
A single Kali Linux machine, or virtual machine.

Lab Walkthrough:

Task 1: Creating Your First Script


1.​ Open your Terminal.
2.​ Use a text editor like nano to create a new script file:
nano myscript.sh
3.​ In the text editor, start by adding the shebang line at the top of the
file:
#!/bin/bash
The shebang (#!/bin/bash) tells the system that this script should be
executed using the Bash shell.
4.​ Below the shebang, add a simple command, such as echo:
echo "Hello, this is my first script!"
5.​ Save the file and exit the text editor (in nano, press CTRL + X, then Y,
and Enter to save).
Question: Why is the shebang (#!/bin/bash) important in a script, and what
would happen if you left it out?

Task 2: Making the Script Executable


1.​ Before running the script, you need to make it executable using the
chmod command:
chmod +x myscript.sh
2.​ Now, run the script:
./myscript.sh
Question: What does the chmod +x command do? How does this differ
from simply trying to run the script without setting the execute permission?

Task 3: Adding More Commands to Your Script


1.​ Edit your script file to add more commands. For example, add
commands to display the current date, list files in the current directory, and
print a custom message:
#!/bin/bash
echo "Hello, this is my first script!"
echo "The current date and time is: $(date)"
echo "Here are the files in the current directory:"
ls -l
2.​ Save and close the file.
3.​ Run the script again:
./myscript.sh
Question: How does using $(date) in the script work? What does ls -l do in
the context of this script?

Task 4: Using Variables in Scripts


1.​ Modify your script to use variables. For example, create a variable for
your name and use it in the script:
#!/bin/bash
name="Alice"
echo "Hello, $name! Welcome to the scripting lab."
echo "The current date and time is: $(date)"
echo "Here are the files in the current directory:"
ls -l
2.​ Save the script and run it again.
Question: How do you reference a variable in a script, and what is the
purpose of $name in the script?

Task 5: Adding User Input to Your Script


1.​ Modify your script to accept user input. Use the read command to
prompt the user for their name:
#!/bin/bash
echo "What is your name?"
read name
echo "Hello, $name! Welcome to the scripting lab."
echo "The current date and time is: $(date)"
echo "Here are the files in the current directory:"
ls -l
2.​ Save and run the script. It will now prompt you for your name before
executing the commands.
Question: How does the read command work in a script, and why is it
useful for gathering user input?

Task 6: Using Conditional Statements in Scripts


1.​ Modify your script to add a simple conditional statement. For
example, check if a specific file exists and print a message accordingly:
#!/bin/bash
echo "What is your name?"
read name
echo "Hello, $name! Welcome to the scripting lab."

# Check if a specific file exists


if [ -f /etc/passwd ]; then
echo "/etc/passwd exists on the system."
else
echo "/etc/passwd does not exist."
fi
2.​ Save the script and run it again.
Question: How does the if [ -f /etc/passwd ] statement work, and what
would happen if the file /etc/passwd doesn't exist on your system?
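Besides -f, the test command ([ ]) supports many other checks. A few common ones, sketched against paths that exist on virtually every Linux system:

```shell
# -d: true if the path exists and is a directory.
if [ -d /etc ]; then
    echo "/etc is a directory"
fi

# -x: true if the file exists and is executable.
if [ -x /bin/ls ]; then
    echo "/bin/ls is executable"
fi

# -n: true if the string is non-empty.
if [ -n "$HOME" ]; then
    echo "HOME is set to $HOME"
fi
```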

Task 7: Using Loops in Scripts


1.​ Add a loop to your script that lists the first 5 files in the current
directory:
#!/bin/bash
echo "Listing the first 5 files in the current directory:"
count=1
for file in $(ls -1); do
echo "$count. $file"
((count++))
if [ $count -gt 5 ]; then
break
fi
done
2.​ Save and run the script.
Question: How does the for loop work in the script, and why do you use
break to stop after the first 5 files?
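The same result can be had without the manual break by limiting the list before the loop (a sketch assuming Bash; note that $(ls -1) splits on whitespace, so this only behaves well for filenames without spaces):

```shell
#!/bin/bash
echo "Listing the first 5 files in the current directory:"
count=1
# head -n 5 trims the listing to at most five entries before the loop starts.
for file in $(ls -1 | head -n 5); do
    echo "$count. $file"
    ((count++))
done
```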

Task 8: Debugging Your Script


1.​ Modify your script to intentionally introduce an error (e.g., by
referencing an undefined variable).
2.​ Run the script and observe the error message.
3.​ Use the bash -x option to run your script in debug mode:
bash -x myscript.sh
Question: How does the bash -x option help with debugging scripts, and
how can it be useful when troubleshooting?

Task 9: Writing Scripts with Comments


1.​ Modify your script to add comments explaining each section.
Comments start with a # symbol:
#!/bin/bash
# This script asks for the user's name and shows system information

# Ask for the user's name


echo "What is your name?"
read name

# Greet the user


echo "Hello, $name! Welcome to the scripting lab."

# Show the current date and time


echo "The current date and time is: $(date)"

# List files in the current directory


echo "Here are the files in the current directory:"
ls -l
2.​ Save and run the script again.
Question: Why is it important to add comments to your script, and how do
they help others (or yourself) understand the script’s purpose?

Notes:
✔​ Scripting allows you to automate tasks and make repetitive tasks more
efficient.
✔​ Always ensure that your scripts are executable by using chmod +x.
✔​ Using variables, conditional statements, loops, and user input can make
your scripts dynamic and interactive.
✔​ Debugging with bash -x is a useful way to track down errors in your
scripts.
Lab 20: Managing Scheduled Tasks with cron

Lab Objective:

●​ Learn how to schedule automated, recurring system tasks using


cron utilities.

Lab Purpose:

●​ Understand how to manage periodic tasks with cron, allowing


your system to perform actions automatically (like backups,
updates, or custom scripts).

Tools:

●​ Kali Linux (or similar Linux distro).

Topology:

●​ Single Linux system or VM.

Walkthrough & Explanation

Task 1: Check the Configuration - cat /etc/crontab

cat /etc/crontab

●​ Displays the main configuration file for system-wide scheduled


tasks.
●​ Format:
○​ Minutes, hours, day of month, month, day of week: specify
when the task runs.
○​ User: which user runs the command.
○​ Command: the script or command to execute.

On Debian-based systems such as Kali, /etc/crontab contains default jobs,
often to run scripts in /etc/cron.hourly, /etc/cron.daily, etc.
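For illustration, some hypothetical /etc/crontab entries and what the fields mean (the scripts named here are made up; the field order is minute, hour, day of month, month, day of week, user, command):

```shell
# min hour dom mon dow  user  command
30  2   * * *   root  /usr/local/bin/backup.sh      # 02:30 every day
0   */4 * * *   root  /usr/local/bin/check-disk.sh  # every 4 hours, on the hour
15  9   * * 1   root  /usr/local/bin/report.sh      # 09:15 every Monday
```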

Inspecting Cron directories:

ls /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly

●​ These directories determine how often their scripts are run:


○​ Hourly
○​ Daily
○​ Weekly
○​ Monthly
●​ You can peek in these directories and see what scripts are
scheduled.

User-specific crontabs:

●​ To view your personal cron jobs:

crontab -l

●​ To edit your cron jobs:

crontab -e

●​ To view another user's cron jobs (requires root):


sudo crontab -l -u username

Task 2: Create a New Cron Job

●​ Run:

crontab -e

●​ Add a line like this:

* * * * * date > ~/foo

●​ Explanation:
○​ * * * * * → Run every minute.
○​ date > ~/foo → Save the current date/time into the file foo in
your home directory.
●​ Wait a minute, then check the file:

cat ~/foo

●​ To clean up, remove the cron job and delete the file:

crontab -e # then delete your line

rm ~/foo

Additional notes:
●​ Access control files (/etc/cron.allow, /etc/cron.deny) restrict or permit
user access to crontab.
●​ On Kali Linux, by default, most users can edit their cron jobs
unless restricted.

Intermediate Labs

Lab 21. Filesystems and Storage

Lab 22. SysVInit and Systemd

Lab 23. Special Directories and Files

Lab 24. Managing File Permissions and Ownership

Lab 25. Pipes and Redirection

Lab 26. Redirection and File Descriptors

Lab 27. vi Navigation

Lab 28. Locales

Lab 29. Sending Signals

Lab 30. Immutability

Lab 31. Understanding Computer Hardware

Lab 32. Journald

Lab 33. Monitoring Disk Performance

Lab 34. Memory Monitoring

Lab 35. OpenVPN

Lab 36. Package Repositories


Lab 37. Swap Space

Lab 38. Memory Testing

Lab 39. YUM

Lab 40. Automatic Mounting

Lab 41 – Command Line Basics: Environment Variables, Types,


and History
Lab 42 – Turning Commands into a Script: Shell Scripting
Lab 43 – Choosing an Operating System
Lab 44 – Basic Security and Identifying User Types
Lab 45 – Special Directories and Files
Lab 46 – Determine and Configure Hardware Settings: PCI
Devices and Peripherals
Lab 47 – Change Run-levels / Boot Targets and Shutdown or
Reboot System: Runlevels and Boot Targets
Lab 48 – Use Debian Package Management: Apt
Lab 49 – Linux as a Virtualization Guest
Lab 50 – Work on the Command Line: Environment Variables,
Types, and History

Lab 21: Filesystems and Storage

Lab Objective:​
Learn about gathering information on and configuring storage devices and
special filesystems.

Lab Purpose:​
In this lab, you will learn how to differentiate between various types of
mass storage devices, and about sysfs, udev, and dbus.

Lab Tool:​
Kali Linux (latest version)

Lab Topology:​
A single Kali Linux machine, or virtual machine.
Lab Walkthrough:

Task 1:​
Open the Terminal and run:

ls /sys/block

This will give you a list of block devices on your system. At a minimum, you
should have a device such as sda or xvda. These are your standard disk
devices, like hard disks and USB drives. You may also have an optical
drive, such as sr0, and possibly one or more loopback devices.​
To get more information on these devices, run:

lsblk

This will list all of those devices as well as their sizes, types, and mount
points. Your hard drive(s) should be labeled as “disk,” partitions as “part,”
loopback devices as “loop,” and optical or other read-only devices as “rom.”

Task 2:​
Choose a device from above; for example, /dev/sda. Run:

udevadm info /dev/sda

Udev is a Linux subsystem that deals with hardware events, like plugging
and unplugging a USB keyboard. Here, we are using yet another tool to
gather information on a hard disk device (or another device of your choice).​
Notice the first line from the output, beginning with “P:”. Tying back to
sysfs, this is the path for said device under the /sys filesystem. You can
confirm this with:

ls /sys/[path]

where [path] is the output from the first line above.​


Udev is also responsible for generating the device nodes in /dev on every
boot. If you run ls /dev, you will see all of the devices you discovered
earlier—plus many others.

Task 3:​
D-Bus is a mechanism for inter-process communication (IPC). Run:
dbus-monitor

Then open an application like Thunderbird. You will see a large number of
messages being sent back and forth between various processes. Press
CTRL + C to quit.

Notes:​
Challenge: Can you figure out how to suspend the system just by echoing
a value to a file under sysfs?

Lab 22: SysVInit and Systemd

Lab Objective:​
Learn how to manage SysVInit and Systemd, and understand the
differences between them.

Lab Purpose:​
SysVInit and Systemd are two different ways of managing Linux startup
processes. SysVInit is considered the “classic” method, while Systemd has
supplanted it in many distros today.

Lab Tool:​
Kali Linux (latest version)

Lab Topology:​
A single Kali Linux machine, or virtual machine.

Lab Walkthrough:

Task 1:​
Kali Linux is one of the distros that has switched to Systemd. However,
many SysVInit-based scripts are still present and active:

ls /etc/init.d

Every file in this directory is a script that starts, stops, checks status, and
possibly other features for a given service. You can see which scripts are
enabled for which runlevels with:
ls -lR /etc/rc*

Take a look at the symlinks listed under /etc/rc5.d. Runlevel 5 traditionally
corresponds to booting normally into a desktop with a display manager
(under Systemd it maps to graphical.target). Runlevel 0 (under /etc/rc0.d)
contains links to the scripts run when your computer is halted.

Task 2:​
Now run:

systemctl list-unit-files --state=enabled

These are the services that are enabled at boot within Systemd, and should
be mostly distinct from the ones listed under /etc/rc*. Use CTRL + C or q to
quit.​
While SysVInit relies on standard shell scripts, Systemd uses unit files with
its own custom format. As just one example, take a look at:

cat /lib/systemd/system/rsyslog.service

Systemd unit files can have many options. See man systemd.unit for more
information.
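As a sketch of the format, here is a minimal unit file for a hypothetical one-shot service (the name hello.service, its description, and the command are all made up; real units live under /etc/systemd/system or /lib/systemd/system):

```ini
# /etc/systemd/system/hello.service (hypothetical example)
[Unit]
Description=Print a greeting at boot
After=network.target

[Service]
Type=oneshot
ExecStart=/bin/echo "hello from systemd"

[Install]
WantedBy=multi-user.target
```

After creating such a file, you would run systemctl daemon-reload, then systemctl enable hello.service and systemctl start hello.service.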

Task 3:​
Finally, take a look at those system logs with:
journalctl -xe
Journalctl is Systemd’s replacement for the classic syslog. Again, however,
Kali Linux still has quite a few logs in classic text format—see /var/log.
Notes:​
Kali Linux has embraced Systemd, similar to other distributions. Unlike
older versions of Ubuntu which once used Upstart as a replacement for
SysVInit, Systemd has now become the standard for managing services.

Lab 23: Special Directories and Files

Lab Objective:​
Learn how to use temporary files and directories, symbolic links, and
special permissions.
Lab Purpose:​
In this lab, you will work with symbolic links, special file/directory
permissions, and temporary files and directories.
Lab Tool:​
Kali Linux (latest version)
Lab Topology:​
A single Kali Linux machine, or virtual machine.
Lab Walkthrough:
Task 1:​
Open the Terminal and run:
ls -ld /tmp
Notice the permissions: drwxrwxrwt. The "t" at the end indicates what is
called the sticky bit. On a directory, it means that only a file's owner (or
the directory's owner, or root) may delete or rename that file. This is
important on world-writable directories, such as those holding temporary
files, to prevent users from deleting or renaming other users' files.
Task 2:​
Now run:
1.​ bash
2.​ ln -sv $(mktemp) mytmp
3.​ ls -l mytmp
4.​ rm mytmp

What just happened? mktemp created a randomly-named temporary file.


Then you created a symbolic link to that file called mytmp. The ls output
shows this link. A symbolic link is simply a reference to an existing file. This
way, you may have multiple references to a single file—edit the original file,
and all of the references instantly update. This is useful for some system
configurations.
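You can see this behavior directly (target_file and mylink are scratch names):

```shell
echo "original" > target_file
ln -s target_file mylink

readlink mylink      # prints the path the link points to: target_file
cat mylink           # reading through the link shows "original"

echo "updated" > target_file
cat mylink           # the link now shows "updated", with no re-linking needed

rm mylink target_file
```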
Notes:​
In addition to /tmp, there is also /var/tmp, which is typically used for larger
and/or longer-lasting temporary files. /tmp is usually cleaned on every
reboot; the same is not necessarily true for /var/tmp.
Lab 24: Managing File Permissions and Ownership

Lab Objective:​
Learn how to manipulate file permissions and ownership settings.

Lab Purpose:​
In this lab, you will learn to use chmod and chown, as well as view
permissions and ownership settings with ls.

Lab Tool:​
Kali Linux (latest version)

Lab Topology:​
A single Linux machine, or virtual machine.

Lab Walkthrough:

Task 1:​
Open the Terminal and run:

echo "Hello World" > foo

ls -l foo

Look at the first field; you should see -rw-r--r--. This indicates the user,
group, and other permissions. The last nine characters, in groups of three,
denote these permissions. In this instance:

●​ rw- indicates read/write (but not execute) permissions for the user
who owns the file.
●​ r-- indicates read-only permissions for the group that owns the file.
●​ r-- indicates read-only permissions for all non-owners, a.k.a. “world”.

The first character indicates the type of file. In this case, it is a regular file;
directories begin with d.​
Who are the user and group owners of this file? The third and fourth fields
of ls -l tell us that. By default, it should be your own user and primary group.
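If you prefer those fields individually, GNU stat (assumed available on Kali, since it ships with coreutils) reports the same information about the foo file you just created:

```shell
stat -c '%A' foo      # symbolic mode, e.g. -rw-r--r--
stat -c '%a' foo      # the same mode in octal, e.g. 644
stat -c '%U:%G' foo   # owning user and group
```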
Task 2:​
Now run:

sudo chown root foo

ls -l foo

cat foo

You’ve just changed the user ownership to root, while keeping the group
ownership. As the file has group- and world-read permissions, you can still
see its contents.

Task 3:​
Now run:

sudo chmod o-r foo

ls -l foo

cat foo

That chmod command removes read permissions from others. However, as your
group still owns the file, you can still see its contents.

Task 4:​
Now run:

sudo chmod 600 foo

ls -l foo

cat foo
This chmod command sets the permissions explicitly to read-write for the
owning user only. As that is root, we can no longer read the file.
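The octal notation works by summing r=4, w=2, and x=1 for each of the three permission triplets. A brief sketch, using a throwaway file:

```shell
# 640 = user rw- (4+2), group r-- (4), other --- (0)
touch demo
chmod 640 demo
stat -c '%A' demo   # prints -rw-r-----
rm demo
```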

Task 5:​
Finally, clean up with:

sudo rm foo

Notes:​
Execute permissions come into play on executable files, as well as
directories. Execute permission on a directory controls whether a user can
enter it and access the files and subdirectories inside; without it, read
permission on the directory only allows listing the names of its contents.

Lab 25: Pipes and Redirection

Lab Objective:​
Learn how to use pipes and redirect input and output in Bash.

Lab Purpose:​
Bash has two commonly used features, known as pipes and I/O
redirection, that make life easier when using Linux. In this lab, you will learn
how to make use of these powerful features.

Lab Tool:​
Kali Linux (latest version)

Lab Topology:​
A single Linux machine, or virtual machine.

Lab Walkthrough:

Task 1:​
Open the Terminal application and run:

mkdir lab13

cd lab13
echo "hello world" > hello

cat hello

You just redirected the output from your echo command into the hello file.
The > is a shortcut; the proper version of this would be 1>, where 1 is
called a file descriptor, which references the standard output.
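You can confirm that > and 1> behave identically:

```shell
# Both forms redirect file descriptor 1 (standard output).
echo hi 1> out1   # explicit descriptor
echo hi > out2    # the common shorthand
cmp out1 out2 && echo "identical"
rm out1 out2
```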

Task 2:​
Now run:

ls nonexistent 2>> hello

cat hello

Based on the new contents of the hello file ("ls: cannot access
‘nonexistent’: No such file or directory"), what can you surmise about the
function of 2>>? Here, 2 is a file descriptor for standard error.

Task 3:​
Sometimes, you want to redirect both output and error to the same place.
In ye olden days, you would have to do something like this:

ls foo > bar 2>&1

However, modern versions of Bash have an easy-to-use shortcut:

ls foo &> bar

Task 4:​
We’ve talked about redirecting output, but what about input? To learn
about that, run the following commands:

cat < hello

read foo <<< "foo"

echo $foo

cat << END-OF-FILE > goodbye
hello
goodbye
END-OF-FILE

cat goodbye

In short: < redirects input from a file name, << starts a here document, and
<<< redirects input from another command.

Task 5:

Pipes (|) let you use the output of a command as the input to another
command. In many situations (but not all), | and <<< are kind of a reverse
of one another. For example, the output of...

echo "hello world" | grep e

... is identical to the output of:

grep e <<< "hello world"

Try to predict the output of the following command string before you run it
(see Answer 1 below):

grep non hello | cut -d: -f3

Task 6:

Finally, clean up:

cd ~

rm -r lab13

Answer 1:

No such file or directory (with a leading space, since cut keeps the space
that follows each colon)

Notes:

grep searches for matching patterns in a string, while cut splits the input
into pieces. Both are very common tools that you will use more in
subsequent labs.
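A quick taste of cut’s field selection, which the answer above relies on:

```shell
# -d sets the delimiter; -f picks fields by position (1-based).
echo "a:b:c" | cut -d: -f2     # prints b
echo "a:b:c" | cut -d: -f2,3   # prints b:c
```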
Lab 26: Redirection and File Descriptors

Lab Objective:​
Learn how to redirect input and output in Bash.

Lab Purpose:​
Bash has a commonly-used feature, known as I/O redirection, that makes
life easier when using Linux. In this lab, you will learn how to make use of
this powerful feature.

Lab Tool:​
Kali Linux (latest version)

Lab Topology:​
A single Kali Linux machine or virtual machine.

Lab Walkthrough:

Task 1:​
Open the Terminal application and run:

1.​ mkdir lab46


2.​ cd lab46
3.​ echo "hello world" > hello
4.​ cat hello

You just redirected the output from your echo command into the hello file.
The > is a shortcut; the proper version of this would be 1>, where 1 is a file
descriptor that references the standard output.

Task 2:​
Now run:

1.​ ls nonexistent 2>> hello


2.​ cat hello
Based on the new contents of the hello file ("ls: cannot access
'nonexistent': No such file or directory"), what can you surmise about the
function of 2>>? Here, 2 is a file descriptor for standard error.

Task 3:​
Sometimes, you want to redirect both output and error to the same place.
In the past, you would have to do something like this:

1.​ ls foo > bar 2>&1

However, modern versions of Bash have an easy-to-use shortcut:

1.​ ls foo &> bar

Task 4:​
We’ve talked about redirecting output, but what about input? To learn
about that, run the following commands:

1.​ cat < hello


2.​ read foo <<< "foo"
3.​ echo $foo
4.​ cat << END-OF-FILE > goodbye​
hello​
goodbye​
END-OF-FILE
5.​ cat goodbye

In short: < redirects input from a file name, << starts a here document, and
<<< redirects input from another command.

Task 5:​
You can also create your own file descriptors. Observe:

1.​ exec 3<> foo


2.​ echo "Hello World" >&3
3.​ cat foo
4.​ exec 3>&-
5.​ echo "fail" >&3

What this is doing is opening file descriptor 3 (for both reading and writing)
with the file foo and sending output to that. The fourth line then closes the
file descriptor, causing the last echo command to fail.
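A similar sketch for a read-only descriptor, using Bash’s read -u to consume from it (the file name and descriptor number are arbitrary):

```shell
# Open fd 4 for reading, pull one line from it, then close it.
printf 'line1\nline2\n' > fdfile
exec 4< fdfile
read -u 4 first   # read a single line from descriptor 4
echo "$first"     # prints line1
exec 4<&-         # close descriptor 4
rm fdfile
```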

Task 6:​
Finally, clean up:

1.​ cd ~
2.​ rm -r lab46

Notes:​
To open a file descriptor for reading only, you could use exec 3< file, or for
writing only, exec 3> file.

Lab 27: vi Navigation

Lab Objective:​
Learn how to open, close, and navigate the vi editor.

Lab Purpose:​
In this lab, you will learn about one of the classic (but often opaque to new
users) Linux text editors, vi.

Lab Tool:​
Kali Linux (latest version)

Lab Topology:​
A single Kali Linux machine or virtual machine.

Lab Walkthrough:
Task 1:​
Open the Terminal and run:

1.​ vi /etc/passwd

Assuming you’ve not run this as root, vi should tell you at the bottom that
the file is read-only. That’s fine, since you won’t be making any changes
right now. Instead, explore just navigating vi:

●​ Use the h, j, k, and l keys, or the arrow keys, to navigate the cursor
between characters and lines.
●​ Enter /username, or ?username, to search forward and backward,
respectively, for your username. Use n or N to hop to the next or
previous match, respectively.
●​ Enter 10G to hop to line 10. Then just enter G to hop to the bottom of
the file.
●​ Finally, type :q to quit. Or if you’ve somehow accidentally made
changes, type :q! to force quit without saving.

Notes:
The EDITOR environment variable defines which editor is used by
programs that ask for one, such as sudoedit. By default this may be set to
nano. If you’re comfortable enough with vi, you may want to change this
variable.
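For example, to switch the default editor for your current session (add the line to ~/.bashrc to make it permanent):

```shell
export EDITOR=vi
echo "$EDITOR"   # prints vi
```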

Lab 28: Locales

Lab Objective:​
Learn how to configure locale settings.
Lab Purpose:​
In this lab, you will learn how locales work in Linux and how to configure
their settings and environment variables.
Lab Tool:​
Kali Linux (latest version)
Lab Topology:​
A single Kali Linux machine or virtual machine.
Lab Walkthrough:

Task 1:​
Open the Terminal and run:
1.​ locale

You should see a number of variables, the first of which is LANG — as you
might guess, this is the locale setting for your default language. Several
other variables are listed below it, naming the settings for dates/times,
telephone formats, currency, and other categories.
To see the locale settings currently supported by your system:
1.​ locale -a

And to see supported character maps:


1.​ locale -m

Your default locale settings are probably some variation of UTF-8, which is
a standard character map on Linux.

Task 2:​
If you’re feeling daring (this is not recommended), you can use
update-locale to change your settings to a radically different language
and/or character map. For example:
1.​ sudo update-locale LANG=<setting>

LANG=C is notable for being useful in scripting—it disables localization
and uses the default language so that output is consistent.
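For instance, you can prefix a single command with LANG=C without touching any system-wide settings:

```shell
# Forces untranslated, locale-independent output for just this command.
# (If LC_ALL is set in your environment, it takes precedence over LANG;
# in that case use LC_ALL=C instead.)
LANG=C date +%A          # weekday name in the default (English) locale
LANG=C ls nonexistent    # error message guaranteed to be in English
```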

Notes:​
You can use iconv to convert text between character encodings.

Lab 29: Sending Signals

Lab Objective:

Learn how to send signals to processes in Linux.


Lab Purpose:

In this lab, you will learn about signals, including but not limited to kill, and
how to send them to processes.

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:

Sometimes, in Linux, things are not quite as they seem. This sequence of
commands will demonstrate that:

sleep 10000 &

kill -19 $(pgrep sleep)

jobs

kill -18 $(pgrep sleep)

jobs

What is happening here? Contrary to its name, the kill command doesn’t
necessarily kill processes. Instead, it sends them a signal, which may or
may not be SIGKILL, the signal that immediately terminates a process. In
fact, by default, kill doesn’t even send SIGKILL—it sends SIGTERM
(terminate gracefully), which is basically a computer’s way of saying
“please wrap up everything that you’re doing… or else I’ll have to send
SIGKILL.”

In this sequence, you first sent signal 19, which is SIGSTOP (stop),
followed by signal 18, which is SIGCONT (continue). Given the output of
jobs after each signal (first “Stopped”, then “Running”), this should now
make more sense.

You can see a list of all signals with:

kill -l

Task 2:

Now create a few sleep processes if you haven’t already, and use either
killall or pkill to terminate them all with a single command. Both commands
have the same general function, with slightly different arguments, so it’s
worth experimenting with and reading the man pages for both.

For example, to start some sleep processes:

sleep 60 &

sleep 60 &

sleep 60 &

Then, to terminate all sleep processes:

pkill sleep

or:

killall sleep

Notes:

pkill and pgrep have exactly the same searching/matching semantics. This
means you can use pgrep as a “dry run” to find the processes you want to
signal before actually doing so.
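For instance, a dry run with pgrep before signaling (the sleep duration here is arbitrary):

```shell
sleep 300 &
pgrep -a sleep   # -a lists matching PIDs with their command lines
pkill sleep      # sends SIGTERM to the same set of processes
```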

Lab 30: Immutability

Lab Objective:

Learn about immutable files and directories, and their uses.


Lab Purpose:

In this lab you will use chattr to set and unset the immutability property, and
understand what it means.

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:

Create an example file:

echo hello > lab82.txt

Add the immutable flag:

sudo chattr +i lab82.txt

Now try to modify it:

echo goodbye >> lab82.txt

Hmm… did that not work? That’s funny… can you delete it?

rm lab82.txt

No. What about as root?

sudo rm lab82.txt

Obviously, if you noted what the word “immutable” actually means, you
understand what’s going on here. You can use lsattr lab82.txt to check
whether a file has that flag—if so, the fifth character of the flags field
will be an i.
Task 2:

Now do a similar procedure for a directory:

mkdir lab82

touch lab82/foo

sudo chattr +i lab82

touch lab82/bar

rm lab82/foo

rm -rf lab82

lsattr -d lab82

Immutability on directories works exactly as you would expect it to, with one
interesting exception. It doesn’t prevent the directory from acting as a
mount point! This means you can mark the mount point for an external
drive as immutable, which will prevent files from being written to it when
that drive is not mounted.

Task 3:

Finally, clean up:

sudo chattr -i lab82*

rm -rf lab82*

Notes:

Some operating systems mark important binaries as immutable for security


purposes, to make it more difficult for attackers to overwrite them with
malicious versions.
Lab 31: Understanding Computer Hardware

Lab Objective:

Learn what hardware goes into building a computer and how to interface
with some hardware from Linux.

Lab Purpose:

The following are the primary hardware components in desktop and server
computers:

● Motherboard

● Processor(s)

● Memory

● Power supply (or supplies)

● Hard/solid-state disks

● Optical drives

● Peripherals

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:​
Open the Terminal and run the following commands to review some of the
hardware (real or virtualized) which is powering your Linux computer:

lscpu
lspci

lsusb -t

sudo lshw

sudo hdparm -i /dev/sd*

Task 2:​
Now investigate some related low-level resources, like partitions, drivers
(kernel modules), and resource usage:

sudo fdisk -l

lsmod

df -h

free -m

Notes:​
If you want hardware information that you don’t know how to find, a good
way to start is by exploring the /proc filesystem. Most of the tools listed
above simply gather and format information from /proc — that is where the
kernel presents it.
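As a small sketch, here are a few of the /proc files behind the tools above:

```shell
grep 'model name' /proc/cpuinfo | head -1   # CPU model (what lscpu parses)
grep MemTotal /proc/meminfo                 # total RAM (what free reads)
head -5 /proc/partitions                    # block devices (cf. fdisk -l)
```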

Lab 32: Journald

Lab Objective:​
Learn how to manage and read data from journald, the Systemd logger.

Lab Purpose:​
In this lab, you will get to know journald, the Systemd logging system that
is gradually replacing classic syslog variants in most Linux distros.
Lab Tool:​
Kali Linux (latest version)

Lab Topology:​
A single Kali Linux machine or virtual machine.

Lab Walkthrough:

Task 1:​
Open two terminal windows/tabs.​
In the first one, run:

journalctl -f

In the second one, run:

systemd-cat echo "Hello World"

Now go back to the first tab. Did your friendly message show up in the
system logs? Hit Ctrl+C to exit.​
systemd-cat is a convenient tool, like logger, that can send output directly
to the journald logs.

Task 2:​
Now look at the configuration:

cat /etc/systemd/journald.conf

On a default Kali Linux system, you will probably see a lot of


commented-out lines. This is just the default configuration, which can still
tell you a lot. You can learn the meaning of these options from man
journald.conf, which is conveniently laid out in the same order as the config
file.

Use the two tabs from Task 1, or open a second tab if necessary. In the first
tab, run:
tty

Save the value it prints out for later use.


In the second tab, open /etc/systemd/journald.conf for editing as root (use
sudo or sudoedit). Append the following two lines to the bottom of the file:

ForwardToConsole=yes

TTYPath=[tty]

Replace [tty] with the value printed by the tty command in the other tab.
Save and quit the editor, then run:

sudo systemctl restart systemd-journald

systemd-cat echo "Hello World"

Now look back at the first tab again. You should see logs being printed
directly to your console! This can be useful for monitoring system logs
generally, without having to keep a process running.

Task 3:​
Clean up the changes made during this lab:
1.​ Edit /etc/systemd/journald.conf again and delete the two lines you
added.
2.​ Run the following command to restart the journald service:


sudo systemctl restart systemd-journald


Notes:​
When stored persistently on the disk (which is typically desired, as opposed
to only storing logs in memory), journald logs are stored in /var/log/journal.
If you try to read these files, however, you will quickly notice that they are
not the text logs you may be used to! Journald stores its logs in a binary
format which is only readable by systemd-related tools.


Lab 33: Monitoring Disk Performance

Lab Objective:​
Learn how to monitor the I/O performance of your storage devices.

Lab Purpose:​
In this lab, you will use ioping and iostat to monitor disk I/O performance
on your Kali Linux machine.

Lab Tool:​
Kali Linux (latest version)

Lab Topology:​
A single Kali Linux machine or virtual machine.

Lab Walkthrough:

Task 1:​
First, install a new performance monitoring tool:

sudo apt install ioping

Then create a directory to prepare for the next step:


mkdir lab71

Compared with its cousin ping, ioping measures the latency of disk I/O
rather than the latency of a network connection. Run a few commands to
compare the output. Each of these commands will run for ten seconds:

ioping -w 10 -A ./lab71

ioping -w 10 -C ./lab71

ioping -w 10 -L ./lab71

ioping -w 10 -W ./lab71

ioping -w 10 -R ./lab71 # No output until the end

If you read the man page or run ioping -h, you will discover the meaning of
each of those flags:

●​ -A: Uses asynchronous I/O.


●​ -C: Takes advantage of caching.
●​ -L: Uses sequential operations (as opposed to random).
●​ -W: Uses write I/O (which can destroy a device if you aren’t careful).
●​ -R: Runs a seek test.

Carefully compare the results of each test. Given what you know about disk
operations under the hood, consider why the results might differ
significantly.

Finally, you can now remove the temporary directory:

rmdir lab71

Task 2:​
Now run the following command to get an overview of your disk
performance:

iostat -dh

Depending on your machine, the output may show a long list of loopback
devices (the lines beginning with “loop”). However, right now you are only
interested in your main disk devices, which may include “sda,” “sdb,” or
similar. The output will show five columns indicating historical statistics of
the indicated device: transfers per second, data written/read per second,
and total data written/read.

For more detailed information, add the -x flag to the command:

iostat -dh -x

Make sure to read the man page to learn what all the abbreviations stand
for!

Notes:

Both ioping and iostat have modes for continually printing reports, allowing
you to monitor their output while performing another task.

●​ For ioping, simply omit the -w flag.


●​ For iostat, add an interval at the end, like:

iostat -dh 5

Lab 34: Memory Monitoring

Lab Objective:​
Learn how to monitor memory usage on a Kali Linux system.

Lab Purpose:​
In this lab, you will explore monitoring memory usage and the OOM killer,
and cause a bit of mayhem yourself.

Lab Tool:​
Kali Linux (latest version)

Lab Topology:​
A single Kali Linux machine or virtual machine.
Lab Walkthrough:

Task 1:​
Open the Terminal application with two tabs or windows.​
In the first tab, run the following command to monitor memory usage
continuously:

free -hs 0.5

Keep that running for now. In the second tab, execute:

a="a"; while true; do a+="$a"; done

Switch back to the first tab while that little one-liner causes all sorts of
problems (as you will soon see). Observe your system’s memory usage
and, perhaps, swap usage climbing. Now go back to the second tab, review
what you typed, and see if you can understand why your system is
consuming memory at such a rapid rate.

After several seconds, the second tab should crash due to a failure to find
enough memory for its very hungry process.
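What makes that one-liner so hungry is that a+="$a" doubles the string on every pass, so memory use grows exponentially. A bounded sketch of the same idea:

```shell
# Five doublings turn a 1-character string into a 32-character one (2^5).
a="a"
for i in 1 2 3 4 5; do a+="$a"; done
echo "${#a}"   # prints 32
```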

Task 2:​
Now, use Ctrl+C to stop the free command. Instead, run the following
command to search for OOM (Out of Memory) killer messages in the kernel
logs:

zgrep -i oom /var/log/kern*

Ideally, this command should print nothing (though if it did, that’s fine, too).
What you’re looking for here are messages from the kernel’s OOM killer.
You probably won’t find any from the process you just ran, because that
one was short-lived and essentially self-limiting—once the shell could no
longer allocate memory, it terminated that process on its own.
The OOM killer operates when the kernel itself can’t find enough memory
to perform its tasks. If and when it does run, you will see logs in
/var/log/kern*, among other locations.

Notes:

You can use vmstat to serve a similar purpose to free (though it presents
the information in a different format). Finally, you can view the
/proc/meminfo file for extensive details about your system’s memory usage:

cat /proc/meminfo


Lab 35: OpenVPN

Lab Objective:​
Learn how to configure a popular VPN client called OpenVPN.

Lab Purpose:​
In this lab, you will configure a Linux OpenVPN client connection.
However, before you can do that, you need something for the client to
connect to.

Lab Tool:​
Kali Linux (latest version), and resources for a second VM to be set up
during this lab.

Lab Topology:​
Two Linux machines accessible via the same network.
Lab Walkthrough:

Task 1:​
Before you can set up an OpenVPN client, you will first need a server for
that client to connect to. Complex VPN server configuration is beyond the
scope of this lab, but fortunately, there’s an easier way:

1.​ Download a pre-configured OpenVPN VM from TurnkeyLinux. You


want the OVA file, which will work with VirtualBox: Turnkey OpenVPN
Download.​

2.​ Follow these instructions to create the VM in VirtualBox. You will be


deploying a VM-optimized image: Installation Guide.​

3.​ Launch the new VM and run through the setup process, following the
on-screen prompts. Most prompts are self-explanatory, but be sure to
select the “server” profile and correctly type the server’s IP address.
The subnets you choose don’t really matter, as long as they don’t
overlap with your local network. You may skip the TurnkeyLinux API
and the email notifications.​

4.​ Now you have an OpenVPN server running! You can quit the setup
menu and log in via terminal, which you will use in the next task.​

Task 2:​
At the terminal prompt (as root) on your OpenVPN VM, run the following
command to create client information on the server:

openvpn-addclient lab56 101labs@example.com

This command creates a client configuration file containing all the


information the client needs to connect. Now, you just need to get that file
to the client.

On the client VM (running Kali Linux), run the following commands to install
OpenVPN and copy the configuration file:

sudo apt update


sudo apt install openvpn
sudo scp root@[serverIP]:/etc/openvpn/easy-rsa/keys/lab56.ovpn /etc/openvpn/lab56.conf

Make sure to replace [serverIP] with the IP address of your OpenVPN


server. You will be prompted for the password of the OpenVPN VM, and
then the file will be copied to its correct location.

IMPORTANT: If this step does not work, ensure that your virtual network is
configured correctly or that there are no connectivity issues on your local
network. By default, neither your lab VM nor the TurnkeyLinux VM should
have firewalls interfering with connectivity.

Task 3:​
Now open and view the /etc/openvpn/lab56.conf file in your favorite text
editor. Make sure the first line contains the correct server IP address and
change it if necessary. The first few lines of this file should look something
like this:

remote 192.168.10.1 1194


proto udp
remote-cert-tls server
dev tun
resolv-retry infinite
keepalive 10 120
nobind
comp-lzo
verb 3

After this are possibly a few other options plus your client’s private key,
public certificate, and server certificate(s). These are required for the client
and server to authenticate each other.

The key option here is that very first line, where you specify the remote
server and port. In various situations, you might also need to change the
proto or dev options, but for the most part, this client configuration is pretty
standard.

Task 4:​
To start OpenVPN, run the following command:

sudo openvpn /etc/openvpn/lab56.conf

You will see a lot of output, but if all goes well, the final line should include
"Initialization Sequence Completed."

Open another tab or terminal window to test your connection. First, run:

ip addr show

In this list, you should see a new tunnel device. The name will vary but
typically starts with "tun". On Kali Linux, it should be called "tun0". This tun0
device should have an IP address assigned to it by the OpenVPN server
via DHCP. Take this IP address and replace the last number with a
1—typically, this is the OpenVPN server’s private network IP. (But not
always. To confirm this, run ip addr show on the server itself.)

In any case, this is not the same IP that is in your client configuration. The
server and client are using their own new addresses as part of this Virtual
Private Network.

Once you’ve determined the server’s private IP, try connecting to it:

ping [serverIP] # Replace [serverIP] with the determined IP

ssh root@[serverIP] # Replace [serverIP] with the determined IP

If you can connect to the OpenVPN server via its VPN address using these
commands, then you know your VPN is functioning correctly.

Task 5:
To close out, hit Ctrl+C on your client’s OpenVPN process to terminate it.
After that, you can remove the /etc/openvpn/lab56.conf file, uninstall
OpenVPN, and shut down the server if you no longer need it. Here are the
commands to assist with that cleanup:

sudo rm /etc/openvpn/lab56.conf

sudo apt remove --purge openvpn

If you want to shut down the OpenVPN server as well, log in to the
OpenVPN VM and use the following command:

sudo poweroff

Notes:

Another common VPN protocol is IPsec. While IPsec can be more complex
to configure and understand, it is also worth learning. It is left as an
exercise for the reader to find (or create!) a “turnkey” IPsec server to test.

Lab 36: Package Repositories

Lab Objective:​
Learn how to configure package repositories (repos) for your system’s
package manager.

Lab Purpose:​
In this lab, you will examine repo configurations and add a third-party repo
to expand the selection of packages available to you.

Lab Tool:​
Kali Linux (latest version)

Lab Topology:​
A single Kali Linux machine or virtual machine.

Lab Walkthrough:
Task 1:​
Run the following command to open the /etc/apt/sources.list file:

sudo apt edit-sources

This command will open the list of repositories currently configured for your
Kali system. By default, many repos might be commented out. Uncomment
the following line to enable the Kali "contrib" repository, which provides
access to additional packages:

deb http://http.kali.org/kali kali-rolling contrib

Save and quit the editor, and then run the following command to update
your package list:

sudo apt update

Task 2:​
The Kali contrib repository contains a variety of tools that are not provided
directly by the main distribution but may be useful for your tasks. For
example, to check a package from this repository, run:

apt show <package-name>

Replace <package-name> with an actual package name you want to


check, such as burpsuite or wireshark. In the output, pay attention to the
APT-Sources line, which indicates that the package comes from the
repository you just enabled.

Task 3:​
Now you will add a non-existing repository to illustrate how Apt can fail.
First, open your sources list again:

sudo apt edit-sources

At the bottom of the file, add the following line:


deb http://example.com/badrepo kali-rolling badrepo

Next, run:

sudo apt update

The first error message should inform you that the fake repository doesn’t
have a Release file. This is not a mistake that any actual maintainer would
make, so just work around it for now.

Next, edit the sources list again:

sudo apt edit-sources

In the last line (the one you just added), immediately after deb, add:

[allow-insecure=yes]

So it reads:

deb [allow-insecure=yes] http://example.com/badrepo kali-rolling badrepo

Now, run:

sudo apt update

The Release file error will be converted to a warning, and Apt might
complain that it cannot find the necessary index files, indicating that the
repository isn’t functional.

However, actions related to other working repositories will still function,


although you may see warnings cluttering the output.

Task 4:​
For cleanup, open the sources list for editing again:
sudo apt edit-sources

Delete the last line you added for the fake repository. You may also want to
comment out the Kali contrib line (unless you want to keep it) by adding a #
at the beginning of the line. Then, save and quit. Finally, run:

sudo apt update

Notes:

Adding third-party repositories can be a security risk, so always make sure


to only use trustworthy ones. For Kali Linux, popular third-party repositories
include Kali's official payloads and tools, found on their specific
documentation pages.
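For reference, each deb line follows the pattern type [options] URI suite [components...]. A small sketch that splits the example line from Task 3 into those parts:

```shell
line='deb [allow-insecure=yes] http://example.com/badrepo kali-rolling badrepo'
set -f         # disable globbing so [..] isn't treated as a pattern
set -- $line   # word-split the line into positional parameters
echo "type=$1 options=$2 uri=$3 suite=$4 component=$5"
set +f
```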

Lab 37: Swap Space

Lab Objective:​
Learn how to configure swap space.

Lab Purpose:​
Swap space is disk space used by the operating system when RAM is
running low. In this lab, you will configure a swap file.

Lab Tool:​
Kali Linux (latest version)

Lab Topology:​
A single Linux machine or virtual machine.

Lab Walkthrough:

First, open the Terminal and run the command free -h. Pay attention to the
bottom line, labeled “Swap.” Check if you currently have a swap partition,
and make note of the available swap space as you’ll check it again later.
Next, create and activate a 1GB swap file by running the following
commands sequentially:

dd if=/dev/zero of=./lab75.img bs=1M count=1000

mkswap ./lab75.img

sudo swapon ./lab75.img

You may see some permission warnings; these can be safely ignored in
this lab environment. After activating the swap file, check again with free -h,
and you should see that your available swap space has increased by 1GB.
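You can also inspect active swap areas directly:

```shell
cat /proc/swaps   # the kernel's list of swap areas, sizes, and usage
swapon --show     # the same data, formatted by util-linux
```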

Finally, to remove the insecure swap file you created, undo your previous
steps by running:

sudo swapoff ./lab75.img

rm lab75.img

Notes:

Ideally, a healthy system rarely or never uses its swap space. Some Linux
administrators today prefer not to set it up at all, but it is always nice to
have some amount of swap space as a failsafe.

Lab 38: Memory Testing

Lab Objective:

Learn how to test your system RAM hardware for problems.

Lab Purpose:

In this lab, you will learn about two tools that can be used to test system
RAM.
Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:​
Install memtester:

sudo apt install memtester

This is a tool which can test memory while your operating system is
running. First, use free -m to figure out how much memory your system
has. Unless you suspect actual problems, you should be conservative here
and test no more than 50% of your RAM. Pass this to memtester as a
number of MB, followed by a number of iterations. For example, if you want
to run a single test of 1024MB, run:

sudo memtester 1024 1

This will take some time, but in the end you will be presented with a
(hopefully clean) report.

Task 2:​
Now reboot your system to the GRUB menu and select memtest86+. This
is a Linux package, installed with Kali by default, but run through GRUB. It
works similarly to memtester, running a variety of tests over a number of
iterations. Because it cannot have the OS interfering with its operations, it
is arguably more accurate.

Notes:​
Genuine memory problems are hard to diagnose except by trial and error.
This is because problems with other hardware, such as CPU, PSU, or
motherboard, can manifest as memory errors in some tests.
Lab 39: Environment Variables, Types, and History
Lab Objective:
Learn about Bash environment variables, types, and how to use the
shell history.
Lab Purpose:
The Bash command shell is pre-installed on millions of Linux
computers around the world. Understanding how this shell works is
a critical skill for working with Linux.
Lab Tool:
Kali Linux (or another distro of your choice)
Lab Topology:
A single Linux machine, or virtual machine
Lab Walkthrough:
Task 1:​
Open the Terminal application, then enter:
echo $PATH
The PATH environment variable lists all locations that Bash will
search when you enter a command with a relative path. A relative
path example is bash, compared to /bin/bash, which is an absolute
path. There are other environment variables as well. You can view all
of them, and their values, using the env command:
env
Task 2:​
You can add or modify an environment variable with export. For
example
export PATH=$PATH:./
In most cases, this change will disappear when you exit Bash, but the
advantage of export is that child processes will now be able to see
the new or modified variable.
For example
foo=123
export bar=456
bash
echo $foo $bar
This prints 456, because only bar is seen by the child shell.
Task 3:​
Another useful Bash command is called type. Use type to figure out
information on other commands or functions
type -a date
type -a if
type -a echo
type -a ls
type will tell you if a command or shell function is a binary file (like
/bin/date), a shell built-in (like help), an alias (which you can view
with alias), or a shell keyword (like if or while).
Task 4:​
Enter history to see a history of commands you have typed into
Bash.
To repeat a command, simply enter ! and then the appropriate
number. For example
!12
will execute command number 12 in the history list.
If you accidentally saved a password or other sensitive information to
the history, you may use, for example
history -d 47
to delete command number 47 from the list, or even
history -c
to clear the history altogether.
If you want to save your current shell’s history to another file, use
history -w filename
The default filename Bash saves its history to is .bash_history.
However, by default it only saves at the end of a session. If your
session crashes, or you open multiple sessions, etc., then your shell
history could become incomplete or out of order.
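If incomplete history across sessions is a concern, one common mitigation is to append each command to the history file as it is run; a sketch of lines you could add to ~/.bashrc (histappend and PROMPT_COMMAND are standard Bash features):

```shell
# Append to .bash_history instead of overwriting it at session exit
shopt -s histappend
# Write the in-memory history out after every command
PROMPT_COMMAND='history -a'
```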
Notes:
If you wish to permanently modify an environment variable or other
shell settings, look into the .bashrc and .bash_profile files.

Lab 40: Automatic Mounting

Lab Objective:​
Learn how to automatically mount Linux filesystems.

Lab Purpose:​
In this lab, you will learn how to automatically mount a filesystem on a Kali
Linux system at boot by using the /etc/fstab file.
Lab Tool:​
Kali Linux (latest version).

Lab Topology:​
A single Linux machine or virtual machine.

Lab Walkthrough:

First, you will need another filesystem to mount. This can be an external
disk or a loopback block device that you create for this purpose (refer to
Lab 11 for details). Identify the label or UUID of this device by running sudo
lsblk -f. Once you have the UUID, open /etc/fstab with your preferred text
editor. You will add a line for the new device in /etc/fstab. The details will
vary based on the device, but here is a template for how the line should
look:

UUID=01234567-89ab-cdef-0123-456789abcdef /mnt/mydevice vfat defaults 0 2

(Replace the UUID and mount point with the proper values for your device.)
For example, let's say you find that the UUID for your external drive is
2018-08-14-11-58-42-18. Your line in /etc/fstab might look like this:

UUID=2018-08-14-11-58-42-18 /media iso9660 defaults 0 2

After editing the file, save it and close the editor. To ensure the changes
take effect, reboot your machine and then check the mounted filesystems
by running the mount command. If you configured everything correctly, you
should see that your device is mounted at the designated mount point.
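Rather than rebooting, you can also exercise a new fstab entry directly; a hedged sketch (assumes the mount point directory already exists, and uses the /mnt/mydevice path from the template above):

```shell
# Syntax-check all /etc/fstab entries (findmnt is part of util-linux)
sudo findmnt --verify
# Mount everything listed in fstab that is not already mounted
sudo mount -a
# Confirm the device appears at its mount point
mount | grep /mnt/mydevice
```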

Notes:

Users with experience in networked filesystems, such as NFS, may want to
explore autofs, a tool that automatically mounts filesystems as they are
needed.
Lab 41 – Command Line Basics: Environment Variables, Types, and
History

Lab Objective:
Learn how to manage environment variables in Linux, understand the types
of environment variables, and utilize the command line history features to
improve efficiency in Kali Linux.

Lab Purpose:
In this lab, you will explore environment variables in Linux, including how to
view, set, and manage them. You will also learn how to use the history
command to improve your workflow in the terminal.

Lab Tool:
Kali Linux (latest version)

Lab Topology:
A single Kali Linux machine, or virtual machine.

Lab Walkthrough:

Task 1: Understanding Environment Variables


1.​ Viewing Environment Variables: Open your terminal and use the
printenv command to list all environment variables:
printenv
Alternatively, you can use the env command to display the
environment variables:
env
2.​ Viewing a Specific Environment Variable: Use the echo command
to display the value of a specific environment variable, such as HOME or
PATH:
echo $HOME
echo $PATH
Question: What do you notice about the HOME and PATH variables? Why
is PATH especially important for command execution?

Task 2: Setting Environment Variables


1.​ Setting a Temporary Environment Variable: Set a temporary
environment variable that only lasts for the current terminal session. For
example:
export MYVAR="Hello, Kali!"
echo $MYVAR
You can test that it is temporary by opening a new terminal session
and running echo $MYVAR—it should not be defined in the new
session.
2.​ Setting a Persistent Environment Variable: To make an
environment variable persistent across sessions, you need to add it to a
configuration file like ~/.bashrc:
echo 'export MYVAR="Hello, Kali!"' >> ~/.bashrc
source ~/.bashrc
echo $MYVAR
Question: What happens if you add a variable to .bashrc but do not source
the file in your current terminal? What is the purpose of sourcing .bashrc?

Task 3: Types of Environment Variables


1.​ User-Defined Variables: These are variables you set for your
session. For example, the MYVAR set earlier.
2.​ System Environment Variables: These are predefined by the
system, such as PATH, HOME, SHELL, etc. You can list all system
environment variables with printenv or env.
3.​ Shell Variables: Shell variables store data within the shell, such as
$0 (the name of the current script), $# (the number of arguments passed
to a script), or $? (the exit status of the last command).
Question: Can you modify system environment variables like PATH
temporarily? What effect does this have?
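The shell variables listed above are easiest to see in action inside a small script; a sketch (save it under a name of your choice, e.g. vars.sh, and run ./vars.sh one two three):

```shell
#!/bin/bash
# Demonstrate $0, $#, and $? inside a script
echo "Script name:    $0"
echo "Argument count: $#"
true
echo "Exit status of the last command (true): $?"
```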

Task 4: Using history to View and Manage Command History


1.​ Viewing Command History: Use the history command to view the
list of commands you have previously executed:
history
2.​ Executing a Command from History: You can execute a command
from history by referencing its number. For example, if history shows a
command numbered 100, you can execute it like so:
!100
3.​ Searching the Command History: Use history with grep to search
for a specific command you ran in the past. For example:
history | grep "ssh"
Question: How does the history command improve your workflow? Why
would you want to search through your command history?
Task 5: Configuring Command History
1.​ Configuring History Size: The number of commands stored in
history is controlled by the HISTSIZE variable. Check its value:
echo $HISTSIZE
You can change the number of commands stored in history by setting
HISTSIZE:
export HISTSIZE=1000
2.​ Setting the Location of History File: The location where your
history is stored is controlled by the HISTFILE variable. The default is
usually ~/.bash_history. Check its value:
echo $HISTFILE
You can change this by setting HISTFILE to a new path:
export HISTFILE=~/my_custom_history
3.​ Preventing History from Saving Specific Commands: You can
prevent sensitive commands from being saved in history by prefixing them
with a space:
export HISTCONTROL=ignorespace
Then, try running a command with a leading space and verify it isn't saved
in history.
Question: How does modifying HISTSIZE and HISTFILE impact your
command history management? Why might you want to prevent certain
commands from being saved?

Task 6: Using History Expansion


1.​ Re-executing Previous Commands: Use !! to execute the last
command you ran:
!!
2.​ Repeating the Last Command with sudo: You can repeat the last
command with sudo by using sudo !!:
sudo !!
3.​ Using !n to Execute Command by Number: You can run a
command by its number from the history list. For example, to run
command 42 from the history:
!42
Question: How do history expansions like !! and !n help speed up
repetitive tasks?

Task 7: Clearing the History


1.​ Clearing the Command History: You can clear the history for the
current session by running:
history -c
2.​ Clearing the History File: You can also remove the history file by
running:
rm ~/.bash_history
Question: Why might you want to clear your command history, and what
are the security implications of leaving sensitive commands in history?

Notes:
✔​ Environment variables are crucial for configuring the shell
environment and managing system behavior.
✔​ Command history can save time by allowing you to re-run previous
commands without retyping them. It's a powerful tool for productivity.
✔​ Remember that security concerns can arise when sensitive
commands or information are saved in history, so it's important to know
how to manage and clear your command history.

Lab 42 – Turning Commands into a Script: Shell Scripting

Lab Objective:
Learn how to create shell scripts by combining multiple commands, using
control structures like conditionals and loops, and implementing basic
shell scripting principles to automate tasks in Kali Linux.

Lab Purpose:
In this lab, you will learn the fundamentals of shell scripting. You will
convert a series of commands into an executable shell script, apply basic
scripting concepts like conditionals and loops, and create scripts that can
automate tasks in Kali Linux.

Lab Tool:
Kali Linux (latest version)

Lab Topology:
A single Kali Linux machine, or virtual machine.

Lab Walkthrough:
Task 1: Writing a Basic Shell Script
1.​ Create a New Script File: Open your terminal and create a new
script file using nano (or your preferred text editor):
nano myscript.sh
2.​ Add the Shebang: The first line of any shell script should be the
shebang (#!/bin/bash), which tells the system to use the Bash shell to
execute the script. Add this to your script:
#!/bin/bash
3.​ Add Some Simple Commands: Below the shebang, add a couple of
simple commands to the script. For example, use echo to print messages
to the terminal:
#!/bin/bash
echo "Welcome to my shell script!"
echo "Today is: $(date)"
4.​ Save and Exit: After adding the commands, save the file and exit the
editor (in nano, press CTRL + X, then Y, and press Enter).
Question: What does the $(date) command do in the script? Why is it
useful to include the current date in a script?

Task 2: Making the Script Executable


1.​ Change File Permissions: To run the script, you need to make it
executable. Use the chmod command to give execute permissions:
chmod +x myscript.sh
2.​ Run the Script: Now, execute the script using ./:
./myscript.sh
Question: What does chmod +x do? Why is it necessary to make the script
executable before running it?

Task 3: Using Variables in Shell Scripts


1.​ Add Variables: Modify the script to use variables. For example, ask
for the user's name and greet them using that name:
#!/bin/bash
echo "Enter your name:"
read name
echo "Hello, $name! Welcome to my shell script!"
echo "Today is: $(date)"
2.​ Save and Run: After saving the script, run it again and enter your
name when prompted.
Question: How does the read command work in a script, and how does
$name reference the variable you entered?

Task 4: Using Conditionals in Shell Scripts


1.​ Adding an If Statement: Modify the script to include an if statement. For
example, check whether the user is root (UID 0) and display a message:
#!/bin/bash
echo "Enter your name:"
read name
echo "Hello, $name! Welcome to my shell script!"

# Check if the user is root
if [ "$(id -u)" -eq 0 ]; then
    echo "You are running this script as root."
else
    echo "You are not running this script as root."
fi
2.​ Save and Run: Save and run the script again, then test it by running it
as a normal user and as root (use sudo ./myscript.sh to run as root).
Question: What does $(id -u) return, and how is it used in the if statement
to check if the script is run by the root user?

Task 5: Using Loops in Shell Scripts


1.​ Adding a Loop: Modify the script to add a loop. For example, create
a loop that lists the first 5 files in the current directory:
#!/bin/bash
echo "Listing the first 5 files in the current directory:"

count=1
for file in $(ls -1); do
    echo "$count. $file"
    ((count++))
    if [ $count -gt 5 ]; then
        break
    fi
done
2.​ Save and Run: Save the script and run it to see the first 5 files listed.
Question: How does the for loop work in this script, and what does the
break statement do?
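As a side note, word splitting makes $(ls -1) fragile for filenames that contain spaces; a glob-based variant of the same loop avoids this while producing the same listing:

```shell
#!/bin/bash
echo "Listing the first 5 files in the current directory:"

count=1
for file in *; do    # the glob expands to one word per file name
    echo "$count. $file"
    ((count++))
    if [ "$count" -gt 5 ]; then
        break
    fi
done
```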
Task 6: Using Functions in Shell Scripts
1.​ Adding a Function: Modify the script to include a simple function.
For example, create a function that displays system information:
#!/bin/bash

# Function to display system info
system_info() {
    echo "System Information:"
    uname -a
}

# Call the function
system_info
2.​ Save and Run: Save the script and run it to display the system
information.
Question: How does the function system_info work, and why is it useful to
organize tasks into functions in a script?

Task 7: Redirecting Output in Shell Scripts


1.​ Redirecting Output to a File: Modify the script to redirect the output
of a command to a file. For example, redirect the output of ls -l to a file
named directory_listing.txt:
#!/bin/bash

echo "Listing files in the current directory:"
ls -l > directory_listing.txt
echo "The file directory_listing.txt has been created with the listing."
2.​ Save and Run: Save the script and run it. Check the contents of the
directory_listing.txt file:
cat directory_listing.txt
Question: How does output redirection (>) work in a shell script, and how
can it be used to log command outputs to a file?
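Beyond >, two related operators are worth knowing; a small sketch (written to a throwaway file under /tmp):

```shell
# > truncates, >> appends, 2> redirects standard error
echo "first run"  >  /tmp/run_log.txt    # create or overwrite the file
echo "second run" >> /tmp/run_log.txt    # append a second line
ls /no/such/path  2>> /tmp/run_log.txt   # capture the error message too
cat /tmp/run_log.txt
```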

Task 8: Scheduling Scripts with Cron Jobs


1.​ Scheduling a Script to Run Automatically: Use cron to schedule
your script to run automatically at a specific time. Open the cron editor:
crontab -e
2.​ Add a New Cron Job: Add a line to run your script at a specific time.
For example, to run the script every day at 5 PM:
0 17 * * * /path/to/myscript.sh
3.​ Save the Cron Job: Save and exit the cron editor.
Question: What does the cron job schedule 0 17 * * * represent? How can
cron jobs be useful for automating regular tasks?
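For reference, the five time fields of a crontab line read left to right as follows (a comment sketch of the example above):

```
# ┌───────── minute       (0-59)
# │  ┌────── hour         (0-23)
# │  │ ┌──── day of month (1-31)
# │  │ │ ┌── month        (1-12)
# │  │ │ │ ┌ day of week  (0-6, Sunday = 0)
# │  │ │ │ │
  0 17 * * * /path/to/myscript.sh
```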

Notes:
✔​ Shell scripts are powerful tools for automating tasks and managing
system configurations. By combining commands, variables,
conditionals, loops, and functions, you can create flexible and reusable
scripts.
✔​ Cron jobs can be used to run scripts at scheduled times, which is
useful for automating system maintenance or periodic tasks.

Lab 43 – Choosing an Operating System

Lab Objective:
Understand the key factors that influence the choice of an operating system
(OS) for a specific task or environment, and learn how to evaluate different
operating systems based on needs, requirements, and compatibility.

Lab Purpose:
In this lab, you will explore the key aspects of choosing an appropriate
operating system for your needs. You will learn how to compare various
operating systems, taking into account factors like security, hardware
compatibility, software availability, and user requirements.

Lab Tool:
Kali Linux (latest version), Virtual Machines (if necessary), and other OSes
(Windows, macOS, Ubuntu, etc.) for comparison.

Lab Topology:
This lab can be done on a single Kali Linux machine or in a virtual machine
environment, where different operating systems can be installed and
compared.

Lab Walkthrough:
Task 1: Identify Your Use Case
1.​ Define Your Purpose: Before selecting an operating system, it’s
essential to define the purpose of the system. Ask yourself:
✔​ Will the system be used for general-purpose computing, security
testing, gaming, or software development?
✔​ Are there specific hardware requirements, such as GPU or
multi-core CPU usage?
✔​ What level of security do you need (e.g., secure OS for penetration
testing, high-privacy system for sensitive tasks)?
Write down your system's intended use case. For example:
✔​ Penetration testing: Kali Linux, Parrot OS, or Ubuntu with security
tools.
✔​ Software development: Ubuntu, Fedora, or macOS for ease of
development.
✔​ General use or gaming: Windows 10 or 11, macOS for Apple
hardware.
Question: What is the main function of the machine you are selecting an
OS for? Does your choice of OS align with this purpose?

Task 2: Evaluate Hardware Compatibility


1.​ Check Hardware Requirements: Different operating systems have
different hardware requirements. Research the following:
✔​ Windows: Requires a minimum of 4GB RAM (8GB recommended)
and a compatible 64-bit processor.
✔​ macOS: Only runs on Apple hardware or through virtualization.
Requires an Apple machine for native use.
✔​ Linux: Highly configurable and runs on many different types of
hardware, from low-resource systems (Raspberry Pi) to
high-performance servers.
2.​ Install and Test Multiple OSes:
✔​ Use virtual machines (e.g., VirtualBox, VMware) to install different
operating systems on the same hardware and compare their
performance and compatibility.
✔​ For example, try running Kali Linux, Windows 10, and Ubuntu on
your machine and observe any compatibility issues.
Question: How do different operating systems perform on the same
hardware? Are there compatibility issues, such as missing drivers or poor
performance?
Task 3: Software Compatibility and Ecosystem
1.​ Evaluate Software Availability: Software availability is a crucial
factor when choosing an operating system. Consider:
✔​ Windows: Supports a wide range of consumer and professional
software, especially for gaming, media editing, and productivity tools
like Microsoft Office.
✔​ macOS: Strong support for creative software (e.g., Adobe Creative
Suite) and development tools for Apple devices, but limited to Apple
hardware.
✔​ Linux: Has extensive support for open-source software and is highly
customizable. For developers and security professionals, Linux has
a huge repository of tools (e.g., Kali Linux for penetration testing,
Ubuntu for development).
2.​ Test Software on Different OSes:
✔​ Install the most important software you need (e.g., Office tools, code
editors, or security tools) on each OS to see how well they work.
✔​ Try to use the same software on multiple platforms (e.g., Google
Chrome, LibreOffice) and observe differences in performance or
features.
Question: How does your preferred software run on each operating
system? Are there any limitations or compatibility issues with certain
software?

Task 4: Security Features of Different OSes


1.​ Evaluate Built-in Security Features: Each operating system comes
with its own set of security features:
✔​ Windows: Has built-in security features like Windows Defender,
BitLocker, and User Account Control (UAC). However, Windows is
more commonly targeted by malware and viruses, which means
additional security software might be necessary.
✔​ macOS: Known for its strong security, it includes features like
Gatekeeper, FileVault, and a Unix-based permission system. macOS
is less targeted by malware but is still vulnerable to certain attacks.
✔​ Linux: Typically considered very secure due to its open-source
nature. Many distributions like Kali Linux and Ubuntu come with
built-in firewalls and encryption tools, and Linux has strong
community support for security practices. However, Linux users may
need to be more diligent in securing their systems manually.
2.​ Testing Security Tools:
✔​ In Kali Linux, you have access to a wide range of penetration testing
and security auditing tools, such as nmap, wireshark, metasploit,
and more.
✔​ For Windows, you can test built-in tools like Windows Defender, and
third-party tools like Malwarebytes.
✔​ For macOS, try testing features like FileVault (full disk encryption)
and check the built-in firewall and privacy settings.
Question: Which operating system offers the security features you need
for your use case? Which OS has the best built-in security for your specific
needs?

Task 5: User Interface and Experience


1.​ Compare User Interfaces: The user interface (UI) can make a big
difference in how comfortable and efficient you are while using an
operating system:
✔​ Windows: The familiar Start Menu and taskbar make it easy for
users to find applications and perform basic tasks.
✔​ macOS: Features a polished, minimalistic UI with a unique dock for
easy access to apps. However, it can be restrictive for users who
want extensive customization.
✔​ Linux: Highly customizable depending on the desktop environment
(GNOME, KDE, Xfce, etc.). Users can configure it for a completely
tailored experience, but it may have a steeper learning curve for
beginners.
2.​ Personal Test:
✔​ Try using different desktop environments in Linux (e.g., GNOME,
KDE Plasma, XFCE) and see which one feels most comfortable to
you.
✔​ Use Windows and macOS for everyday tasks and compare how
intuitive their interfaces are compared to Linux.
Question: Which user interface feels the most intuitive and productive for
your needs? Would you prefer more customization options, or are you
happy with a simple and clean interface?

Task 6: Cost and Licensing


1.​ Assess Costs and Licensing Models: The cost and licensing model
of the operating system might also play a role in your decision:
✔​ Windows: Requires a licensed copy (can be purchased from
Microsoft or bundled with hardware). Microsoft also offers additional
paid versions with more features (e.g., Windows Pro or Enterprise).
✔​ macOS: Free with Apple hardware, but only available on Apple
devices, which can be more expensive.
✔​ Linux: Completely free and open-source. Some Linux distributions,
like Red Hat, offer paid support, but many others (like Kali Linux,
Ubuntu, Debian) are free.
2.​ Consider Total Cost of Ownership: Think about the total cost
involved, including any potential additional software or support
subscriptions, and compare it with your budget.
Question: What is your budget for the operating system and any required
software? Does the cost of a particular operating system align with your
budget and needs?

Task 7: Final Decision and Justification


1.​ Choose Your Operating System: Based on the previous tasks,
make a final decision on which operating system is best for your needs.
Consider the following:
✔​ Purpose of the system (e.g., development, gaming, security, etc.)
✔​ Hardware compatibility and performance
✔​ Software support and availability
✔​ Security needs and features
✔​ User experience and customization options
✔​ Cost and licensing
2.​ Justify Your Choice: Write a brief justification for your decision. Why
did you choose this OS over others, and how does it best meet your
requirements?
Question: After evaluating all aspects, which OS did you choose, and what
factors were most important in your decision-making process?

Notes:
✔​ The choice of an operating system should be based on a balance of
factors such as security, hardware compatibility, software needs, user
interface preferences, and cost.
✔​ Testing and comparing different operating systems is a great way to
determine which one works best for you and your specific tasks.
✔​ Consider using virtualization tools (like VirtualBox or VMware) to test
various operating systems before committing to one for production.
Lab 44 – Basic Security and Identifying User Types

Lab Objective:
Learn the basics of security on a Linux system, including understanding
different user types, their privileges, and how to manage users and groups
to ensure a secure environment.

Lab Purpose:
In this lab, you will explore user management in Linux, focusing on
identifying user types (such as root, regular users, and system users),
understanding user privileges, and learning how to apply security
principles to user management.

Lab Tool:
Kali Linux (latest version) or any Linux distribution

Lab Topology:
A single Kali Linux machine or virtual machine, with administrative access
(root or sudo privileges).

Lab Walkthrough:

Task 1: Understanding Different User Types


1.​ Root User: The root user (also known as the superuser) has full
privileges and can execute any command on the system. This user has
access to all files and can perform administrative tasks such as
installing software, changing system configurations, and managing
users.
2.​ Regular Users: Regular users are granted limited privileges to prevent
them from making system-wide changes. They can only modify their
own files and settings. Regular users cannot install software or modify
system configurations unless they have been granted elevated
privileges (using sudo).
3.​ System Users: These users are typically created by the system for
specific system services and processes. They are not meant to log in
interactively and usually do not have a home directory. These users
often have limited or no privileges to interact with the system, and their
primary role is to run background processes (e.g., www-data for the
web server or nobody for a non-privileged user).
Question: What are the key differences between the root user, regular
users, and system users in terms of privileges and system access?

Task 2: Viewing User Information


1.​ List Users on the System: Open a terminal and run the following
command to see all users on the system:
cat /etc/passwd
This will display a list of all user accounts on the system, including system
users. Each line represents a user and contains the following information:
✔​ Username: The user's name.
✔​ Password: (Usually an x or *, which means the password is stored
in /etc/shadow).
✔​ User ID (UID): The user’s ID number.
✔​ Group ID (GID): The primary group’s ID.
✔​ Full Name: The user's full name or description (optional).
✔​ Home Directory: The path to the user’s home directory.
✔​ Shell: The program that starts when the user logs in.
2.​ Identify the Root User: The root user will have the UID of 0. You can
filter for root user specifically:
grep 'root' /etc/passwd
Question: What is the difference in the UID of the root user compared to
regular users? Why is the root user assigned UID 0?
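You can answer this programmatically; a sketch using awk on the colon-separated fields of /etc/passwd (field 3 is the UID):

```shell
# Print every account whose UID is 0 (normally just root)
awk -F: '$3 == 0 {print $1}' /etc/passwd

# Compare with the lowest-numbered accounts on the system
awk -F: '{print $3, $1}' /etc/passwd | sort -n | head -5
```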

Task 3: Viewing User Groups


1.​ List User Groups: In addition to being part of a user, each user is
also a member of one or more groups. To see all groups on your system,
run:
cat /etc/group
This will list all groups and their members. Each line contains:
✔​ Group name: The name of the group.
✔​ Password: The password for the group, if applicable (typically
empty).
✔​ Group ID (GID): The group’s unique ID.
✔​ Group members: The list of users who are members of that group.
2.​ Identify Group Membership: You can check which groups a specific
user is a member of by using the groups command. For example, check
the groups for the user username:
groups username
Question: How do groups help in managing user permissions? What role
do they play in system security?
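A quick way to see the same membership information, with the numeric IDs included, is the id command:

```shell
# uid, primary gid, and all supplementary groups for the current user
id
# The same, for a named account (root always shows uid=0)
id root
```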

Task 4: User Privileges and Sudo


1.​ Understanding Sudo: The sudo command allows a regular user to
execute commands with elevated privileges (as root) without logging in
as the root user. Users must be granted permission to use sudo by
being added to the sudoers file.
2.​ Check if Your User Has Sudo Privileges: To see if your current user
has sudo privileges, run:
sudo -l
This will show you a list of commands you are allowed to run with
sudo.
3.​ Granting Sudo Privileges: To grant a user sudo privileges, you can
add them to the sudo group. Run the following command as root:
usermod -aG sudo username
Replace username with the name of the user you want to grant sudo
privileges to. After this, the user can execute commands with sudo.
Question: Why is it recommended to use sudo instead of logging in as root
directly? How does this improve system security?

Task 5: Changing User Information


1.​ Modify User Information: You can modify a user’s information using
the usermod command. For example, to change the username of a user:
sudo usermod -l newusername oldusername
2.​ Changing the User’s Home Directory: If you want to change a
user’s home directory:
sudo usermod -d /new/home/directory -m username
This will change the home directory and move the contents from the old
home directory to the new one.
Question: Why might you need to change a user's home directory? What
are the potential security implications of modifying user information?

Task 6: Deleting and Locking User Accounts


1.​ Delete a User Account: To delete a user account, run:
sudo deluser username
This will delete the user but leave the home directory intact. If you
want to delete the user along with their home directory, use:
sudo deluser --remove-home username
2.​ Lock a User Account: If you want to lock a user account (disabling
login but keeping the account on the system), run:
sudo passwd -l username
This locks the user account by disabling their password.
Question: When would you choose to lock a user account instead of
deleting it? How does this impact the system's security?

Task 7: Applying Basic Security Principles


1.​ Enforcing Strong Passwords: To improve security, ensure users
have strong passwords. You can configure password policies in
/etc/login.defs to enforce minimum length, complexity, and expiration time
for passwords.
2.​ Limiting Root Access: It is best practice to disable direct root
login over SSH and rely on sudo for administrative tasks. To do this,
modify /etc/ssh/sshd_config and ensure the following line exists:
PermitRootLogin no
3.​ Setting Up User Account Expiration: You can set an expiration
date for user accounts to ensure they are disabled after a certain period:
sudo chage -E 2025-01-01 username
This will disable the user account on January 1, 2025.
Question: How does enforcing strong password policies and limiting root
access improve the security of a Linux system? What are the risks of
leaving user accounts with weak passwords or unlimited access?

Notes:
✔​ User management is an essential aspect of Linux system security.
Properly managing user accounts and groups can limit access to
sensitive files and commands, reducing the risk of unauthorized system
modifications.
✔​ Sudo is an important security tool, as it helps minimize the risks
associated with using the root account directly.
✔​ Regularly review user privileges and account status to ensure that only
authorized users have access to critical system functions.

Lab 45 – Special Directories and Files

Lab Objective:
Learn about the special directories and files in a Linux system, understand
their purpose, and explore how they are used in system management and
troubleshooting.

Lab Purpose:
In this lab, you will explore the special directories and files in a Linux
system, understand their unique functions, and learn how to interact with
them. Special directories such as /etc, /proc, /dev, and /sys hold
configuration files, system information, and device files critical for system
management and troubleshooting.

Lab Tool:
Kali Linux (latest version) or any Linux distribution

Lab Topology:
A single Kali Linux machine or virtual machine with root (administrative)
access to view and interact with special directories and files.

Lab Walkthrough:

Task 1: Understanding the /etc Directory


1.​ Overview of /etc: The /etc directory contains system-wide
configuration files and settings for the operating system and installed
applications. It is one of the most important directories in the system and
is crucial for system administration.
2.​ View Contents of /etc: Use the following command to list the
contents of /etc:
ls /etc
This directory contains essential configuration files, such as:
✔​ /etc/passwd: Contains information about users.
✔​ /etc/fstab: Describes disk partitions and mount points.
✔​ /etc/network/interfaces: Network interface configuration (on
Debian-based systems).
✔​ /etc/hostname: Contains the system’s hostname.
3.​ Task: Explore Common Files in /etc: Open the /etc/passwd file,
which stores user account information:
cat /etc/passwd
You will see a list of all system users, along with their user IDs (UID),
group IDs (GID), home directories, and default shells.
Question: What is the format of the entries in /etc/passwd, and how does it
relate to user management? What would happen if you accidentally
deleted this file?
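The colon-separated fields of /etc/passwd are easy to pull apart with awk. A minimal sketch, assuming the standard seven-field layout (name, password placeholder, UID, GID, comment, home, shell):

```shell
# Print username, UID, and login shell for every account.
# Field layout: name:passwd:UID:GID:GECOS:home:shell
awk -F: '{ printf "%-15s uid=%-6s shell=%s\n", $1, $3, $7 }' /etc/passwd

# Extract a single field for one account, e.g. root's login shell:
root_shell=$(awk -F: '$1 == "root" { print $7 }' /etc/passwd)
echo "root logs in with: $root_shell"
```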

Task 2: Understanding the /proc Directory


1.​ Overview of /proc: The /proc directory is a virtual filesystem that
provides an interface to kernel data structures. It contains runtime system
information, including information about processes, kernel parameters,
and hardware configuration.
2.​ View Contents of /proc: List the contents of /proc:
ls /proc
Some important files and directories in /proc include:
✔​ /proc/cpuinfo: Information about the system’s CPU architecture.
✔​ /proc/meminfo: Information about system memory.
✔​ /proc/uptime: The system’s uptime.
✔​ /proc/[pid]: Directory containing information about each running
process (where [pid] is the process ID).

3.​ Task: Check System Memory and CPU Info: View the contents of
/proc/meminfo to see memory statistics:
cat /proc/meminfo
View the contents of /proc/cpuinfo to see CPU details:
cat /proc/cpuinfo
Question: How does /proc/meminfo help in monitoring system
performance? What kind of data is provided in /proc/cpuinfo?
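Because /proc/meminfo and /proc/cpuinfo are plain text, their values script easily. A small sketch pulling two figures out of them:

```shell
# Total physical memory in kB (the MemTotal line of /proc/meminfo).
mem_kb=$(awk '/^MemTotal:/ { print $2 }' /proc/meminfo)

# Number of logical processors (one "processor" line per CPU in /proc/cpuinfo).
cpus=$(grep -c '^processor' /proc/cpuinfo)

echo "Memory: ${mem_kb} kB across ${cpus} CPU(s)"
```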

Task 3: Exploring the /dev Directory


1.​ Overview of /dev: The /dev directory contains device files, which
represent hardware devices or software interfaces on the system. These
files are used to interact with hardware components like hard drives,
printers, and terminals.
2.​ View Contents of /dev: List the contents of /dev:
ls /dev
Some key device files include:
✔​ /dev/sda: Represents the first hard disk.
✔​ /dev/null: A special file that discards all data written to it.
✔​ /dev/tty: Represents the controlling terminal for the session.
✔​ /dev/tty1 to /dev/tty6: Represent virtual terminals.
3.​ Task: Experiment with Special Device Files:
Test /dev/null: Try writing something to /dev/null:
echo "Hello, World!" > /dev/null
Nothing will appear because /dev/null discards any data written
to it.
Test /dev/zero: This device file produces a continuous stream of
zero bytes. You can use it to create files of a specific size:
dd if=/dev/zero of=zero_file.txt bs=1M count=10
This will create a file called zero_file.txt with 10 MB of zeroes.
Question: What is the purpose of /dev/null and /dev/zero? How can these
special files be useful in system management and debugging?
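Both behaviours can be verified from the shell: writes to /dev/null vanish, while reads from /dev/zero produce exactly the bytes you ask for (a 1 MiB file here, to keep the example quick):

```shell
# Writing to /dev/null succeeds but stores nothing.
echo "Hello, World!" > /dev/null

# Reading from /dev/zero: create a 1 MiB file of zero bytes.
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=1M count=1 2>/dev/null

size=$(wc -c < "$tmpfile")
echo "created $tmpfile with $size bytes"   # 1048576 bytes
rm -f "$tmpfile"
```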

Task 4: Exploring the /sys Directory


1.​ Overview of /sys: The /sys directory is another virtual filesystem that
provides a view of the kernel’s runtime parameters. It allows you to
interact with kernel objects, and it is commonly used for managing
devices, kernel modules, and other system-level configurations.
2.​ View Contents of /sys: List the contents of /sys:
ls /sys
Some key subdirectories include:
✔​ /sys/class: Contains information about devices and drivers.
✔​ /sys/block: Contains information about block devices, such as hard
drives and partitions.
✔​ /sys/devices: Contains information about all devices attached to the
system.
3.​ Task: Check CPU and Memory Information:
To see the CPU information:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
To see memory-related information:
cat /sys/devices/system/memory/memory0/block_size
Question: How does /sys differ from /proc? What kind of information can
you retrieve from the /sys directory that you can’t find in /proc?

Task 5: Special Files and Links


1.​ Symbolic Links: A symbolic link (or symlink) is a file that points to
another file or directory. You can create symlinks using the ln -s command.
Example: Create a symlink called link_to_etc that points to /etc:
ln -s /etc link_to_etc
Now, link_to_etc is a symlink to /etc, and you can access it like any
other directory.
2.​ Hard Links: A hard link is a second name for an existing file. Unlike
symlinks, hard links reference the same inode and share the same data
blocks. To create a hard link, use the ln command (without the -s option).
Example: Create a hard link for a file:
ln /path/to/file /path/to/hardlink
Question: What is the difference between a symbolic link and a hard link?
When would you use one over the other?
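Inode numbers make the difference visible: a hard link shares its target's inode, while a symlink gets its own. A quick demonstration in a temporary directory (GNU stat assumed for the -c option):

```shell
workdir=$(mktemp -d)
cd "$workdir"

echo "data" > original.txt
ln original.txt hardlink.txt        # hard link: same inode, same data blocks
ln -s original.txt symlink.txt      # symlink: its own inode, stores only the path

ino_orig=$(stat -c %i original.txt)
ino_hard=$(stat -c %i hardlink.txt)
ino_sym=$(stat -c %i symlink.txt)
echo "original=$ino_orig hardlink=$ino_hard symlink=$ino_sym"

# Deleting the original breaks the symlink but not the hard link.
rm original.txt
cat hardlink.txt                    # still prints "data"
```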

Task 6: Understanding Log Files in /var


1.​ Overview of /var: The /var directory contains variable data, such as
logs, databases, and application data files. The subdirectory /var/log is
where most system log files are stored.
2.​ View Contents of /var/log: List the contents of /var/log:
ls /var/log
Some important log files include:
✔​ /var/log/syslog: General system logs.
✔​ /var/log/auth.log: Authentication logs, such as login attempts.
✔​ /var/log/dmesg: Kernel ring buffer logs, which contain boot
messages.
3.​ Task: Examine System Logs: To view recent system logs, use the
cat or less command:
cat /var/log/syslog
Or, to view the authentication logs:
cat /var/log/auth.log
Question: What kind of information is stored in /var/log/syslog? How can
these logs help in troubleshooting system issues or security incidents?
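Lines in /var/log/auth.log follow the classic syslog layout (timestamp, host, process, message), so grep and awk work well on them. The sketch below runs over a made-up sample, since real contents differ per machine; the sample lines are illustrative, not from an actual system:

```shell
# Hypothetical auth.log excerpt embedded as a here-document.
sample_log=$(cat <<'EOF'
Mar  1 10:00:01 kali sshd[1001]: Failed password for root from 10.0.0.5 port 4022 ssh2
Mar  1 10:00:04 kali sshd[1001]: Failed password for root from 10.0.0.5 port 4022 ssh2
Mar  1 10:01:12 kali sshd[1002]: Accepted password for alice from 10.0.0.9 port 5100 ssh2
EOF
)

# Count failed login attempts; on a real system you would run:
#   grep -c "Failed password" /var/log/auth.log
failed=$(printf '%s\n' "$sample_log" | grep -c "Failed password")
echo "failed attempts: $failed"
```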

Notes:
✔​ Special directories like /etc, /proc, /dev, and /sys are critical to the
operation of a Linux system. They store important configuration files,
system information, and device interfaces, which are essential for
system management, troubleshooting, and performance monitoring.
✔​ Familiarity with these directories and files will make you more efficient in
managing Linux systems and diagnosing problems.

Lab 46 – Determine and Configure Hardware Settings: PCI Devices and Peripherals

Lab Objective:
Learn how to identify and configure hardware devices in a Linux system,
specifically PCI devices and peripherals, to understand how Linux
interacts with physical hardware components.

Lab Purpose:
In this lab, you will learn how to determine which PCI devices and
peripherals are installed on a Linux system, how to view their settings, and
how to configure them to ensure proper functioning of the system. You will
also explore tools and commands that help you gather detailed
information about hardware components.

Lab Tool:
Kali Linux (latest version) or any Linux distribution with root access.

Lab Topology:
A single Kali Linux machine or virtual machine with physical or virtual
hardware components that you can examine and configure.

Lab Walkthrough:

Task 1: Identifying PCI Devices on Your System


1.​ Overview of PCI Devices: PCI (Peripheral Component Interconnect)
is a bus standard used for adding expansion cards (like graphics cards,
network adapters, and sound cards) to your system. Linux provides
several utilities to identify PCI devices and their associated configurations.
2.​ List All PCI Devices: The first step in identifying PCI devices on your
system is to use the lspci command, which lists all PCI devices. Run the
following command in the terminal:
lspci
This will display a list of all PCI devices, including graphics cards, network
adapters, and storage controllers. The output will include the device's bus
number, vendor ID, device ID, and a brief description of the device.
Example output:
00:1f.2 IDE interface: Intel Corporation 82801M (ICH4) Ultra ATA
Storage Controllers (rev 01)
01:00.0 VGA compatible controller: NVIDIA Corporation GF108M
[GeForce GT 620M] (rev a1)
3.​ Get Detailed Information About a PCI Device: To get more detailed
information about a specific device, use the following command:
lspci -v
The -v option adds verbose output, including the device’s driver, IRQ line,
memory addresses, and more.
Question: What information can you learn from the output of lspci? How
does it help in managing hardware devices?
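If lspci is unavailable, the same PCI devices also appear as directories under /sys/bus/pci/devices, named by their bus address. A guarded sketch (the directory may be empty or absent inside containers or minimal virtual machines):

```shell
pci_dir=/sys/bus/pci/devices
if [ -d "$pci_dir" ]; then
    count=$(ls "$pci_dir" | wc -l)   # one entry per PCI device
else
    count=0
fi
echo "PCI devices visible via sysfs: $count"
```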

Task 2: Investigating PCI Devices with lspci -vv


1.​ Using Extended Output: The lspci -vv command provides even
more detailed information about each PCI device, including the exact
driver in use, power management features, and capabilities of the device.
Example output:
01:00.0 VGA compatible controller: NVIDIA Corporation GF108M
[GeForce GT 620M] (rev a1)
Subsystem: Hewlett-Packard Company Device 1899
Flags: bus master, fast devsel, latency 0, IRQ 30
Memory at e0000000 (32-bit, non-prefetchable) [size=16M]
I/O ports at 3000 [size=128]
Expansion ROM at f0000000 [disabled] [size=1M]
2.​ Check for Driver Information: The lspci -vv command will show you
which driver is being used for each device. For example, in the output
above, the VGA controller (NVIDIA) has its driver listed in the “Kernel
driver in use” line.
3.​ Task: Review the lspci -vv output for your system’s PCI devices. Take
note of the drivers associated with each device.
Question: Why is it important to know which driver is in use for each PCI
device? What might happen if a device doesn’t have the correct driver
installed?

Task 3: Checking the Kernel Log for Hardware Events


1.​ Overview of Kernel Logs: The Linux kernel maintains logs of
hardware events, such as the detection of new devices and loading of
drivers. These logs can help diagnose issues with hardware or verify the
proper functioning of devices.
2.​ View Kernel Messages with dmesg: Run the dmesg command to
view recent kernel messages related to hardware devices:
dmesg | less
You can search through the kernel log for any relevant messages
related to PCI devices or peripherals by using the grep command:
dmesg | grep -i pci
This will filter the log for any mentions of PCI devices, such as the
detection of new hardware or the loading of drivers.
Question: How can the dmesg command help troubleshoot
hardware-related issues? What kind of information would you look for in
the log?

Task 4: Configuring PCI Devices


1.​ Manually Loading and Unloading Drivers: Sometimes, you may
need to manually load or unload drivers for PCI devices. To load a driver,
use the modprobe command:
sudo modprobe <driver_name>
Example to load the nvidia driver for an NVIDIA graphics card:
sudo modprobe nvidia
To unload a driver, use:
sudo modprobe -r <driver_name>
2.​ Check Loaded Modules: To check which modules are currently
loaded, you can use the lsmod command:
lsmod
This will list all loaded kernel modules, including device drivers.
3.​ Task: Try loading and unloading a module for a PCI device on your
system. You can use lsmod to verify whether the module was successfully
loaded.
Question: What are the potential consequences of manually loading or
unloading a device driver? How can it impact system stability?

Task 5: Configuring Peripheral Devices (USB, Audio, etc.)

1.​ Listing USB Devices: Although PCI devices are the main focus of
this lab, peripherals like USB devices are also critical hardware
components. Use the following command to list all USB devices
connected to your system:
lsusb
Example output:
Bus 002 Device 003: ID 046d:c52b Logitech, Inc. Unifying Receiver
Bus 002 Device 002: ID 8087:0a2b Intel Corp. Integrated Rate
Matching Hub
2.​ Checking Audio Devices: Use the aplay -l command to list audio
devices connected to your system:
aplay -l
This will show the available audio playback devices.
3.​ Checking Network Interfaces: To list network interfaces, including
network cards and USB network adapters, use the ip link or ifconfig
command:
ip link
or:
ifconfig
Question: How does the lsusb command help identify peripherals? What
other tools can be used to configure audio and network devices?

Task 6: Using lshw for Comprehensive Hardware Information


1.​ Overview of lshw: The lshw command is a comprehensive hardware
information tool that provides detailed information about all system
hardware, including PCI devices, memory, processors, and storage
devices.
2.​ Running lshw: Run the following command to get detailed hardware
information:
sudo lshw
You can use the -short option to get a concise list:
sudo lshw -short
3.​ Task: Examine the Hardware Report: Review the output of lshw to
identify your PCI devices, memory, CPU, and other peripherals.
Question: What kind of hardware information does lshw provide that other
commands like lspci might not show? How can this information be useful
in troubleshooting hardware issues?

Notes:
✔​ Understanding PCI devices and peripherals is essential for diagnosing
hardware issues and optimizing system performance.
✔​ Linux provides powerful tools like lspci, dmesg, lsusb, and lshw to
interact with hardware, view device status, and configure settings.
✔​ Be cautious when manually loading or unloading drivers, as it can affect
the system’s stability if done improperly.

Lab 47 – Change Runlevels / Boot Targets and Shutdown or Reboot
System: Runlevels and Boot Targets

Lab Objective:
Learn how to manage system runlevels and boot targets in a Linux system,
understand the differences between SysV init runlevels and systemd boot
targets, and practice shutting down or rebooting the system via the
command line.

Lab Purpose:
In this lab, you will explore the concept of runlevels and boot targets in
Linux. You will learn to change system runlevels or boot targets to manage
system startup, shutdown, and various operating modes (multi-user,
graphical, etc.). You will also practice safely shutting down or rebooting
the system.

Lab Tool:
Kali Linux (latest version) or any Linux distribution that uses systemd for
managing services and boot targets.

Lab Topology:
A single Kali Linux machine or virtual machine with root (administrative)
access. Ensure you have a terminal or access to the system via SSH to
execute the commands.

Lab Walkthrough:

Task 1: Understanding Runlevels and Boot Targets


1.​ Overview of Runlevels (SysV init):
✔​ Linux systems historically used runlevels to define the system's
state at boot time, ranging from multi-user mode to shutdown.
✔​ SysV runlevels (old-style) range from 0 to 6, with each number
corresponding to a specific state of the system:
✔​ Runlevel 0: Halt (Shutdown)
✔​ Runlevel 1: Single-user mode (for administrative tasks)
✔​ Runlevel 2: Multi-user mode (without networking)
✔​ Runlevel 3: Multi-user mode (with networking)
✔​ Runlevel 4: User-definable (unused by default)
✔​ Runlevel 5: Multi-user mode with GUI (Graphical login)
✔​ Runlevel 6: Reboot
2.​ Overview of Boot Targets (systemd):
✔​ systemd replaced the traditional SysV init system with boot targets
in newer Linux distributions.
✔​ Boot targets provide more flexibility and control over system states
and processes. Common systemd targets include:
✔​ default.target: A symlink to whichever target the system boots into
by default (often graphical.target, similar to runlevel 5).
✔​ multi-user.target: Similar to runlevel 3, it provides a multi-user,
non-graphical environment.
✔​ graphical.target: Similar to runlevel 5, provides multi-user mode
with a graphical user interface.
✔​ rescue.target: Single-user mode for system rescue.
✔​ poweroff.target: Shuts down the system.
✔​ reboot.target: Reboots the system.
3.​ Difference Between SysV Runlevels and systemd Boot Targets:
✔​ SysV init uses runlevels to define the state of the system, while
systemd uses boot targets for similar purposes.
✔​ Systemd provides more advanced capabilities, like parallel service
startup and improved dependency handling.
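The correspondence between the old runlevels and systemd targets can be written down as a simple lookup; systemd itself ships runlevelN.target aliases that follow the same mapping. A sketch of that table as a shell function:

```shell
# Map a SysV runlevel number to its rough systemd equivalent
# (the same mapping used by systemd's runlevelN.target aliases).
target_for_runlevel() {
    case "$1" in
        0)     echo poweroff.target ;;
        1)     echo rescue.target ;;
        2|3|4) echo multi-user.target ;;
        5)     echo graphical.target ;;
        6)     echo reboot.target ;;
        *)     echo "unknown runlevel: $1" >&2; return 1 ;;
    esac
}

t3=$(target_for_runlevel 3)
t5=$(target_for_runlevel 5)
echo "runlevel 3 -> $t3, runlevel 5 -> $t5"
```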

Task 2: Checking Current Runlevel or Boot Target


1.​ Check the Current Runlevel (SysV init):
✔​ In older systems or for compatibility, you can check the current
runlevel using the runlevel command:
​ runlevel
✔​ Alternatively, the who -r command also displays the current runlevel.
2.​ Check the Current Boot Target (systemd):
✔​ For modern systems using systemd, the following command shows
the active boot target:
​ systemctl get-default
✔​ The output will indicate the current target, which may be something
like:
​ graphical.target
Question: Why is it important to know the current runlevel or boot target?
What are the differences in system behavior between a multi-user target
and a graphical target?

Task 3: Changing the Runlevel or Boot Target


1.​ Change Runlevel (SysV init):
✔​ If you are working on an older system that uses SysV init, you can
change the runlevel using the init or telinit command:
​ sudo init 3
This would change the system to runlevel 3 (multi-user mode with
networking).
✔​ Similarly, you can use telinit:
​ sudo telinit 3
2.​ Change Boot Target (systemd):
✔​ To change the boot target with systemd, use the systemctl
command:
​ sudo systemctl isolate multi-user.target
This will switch the system to multi-user mode (non-graphical).
✔​ To switch to a graphical environment (like the default target):
​ sudo systemctl isolate graphical.target
✔​ To change the default boot target (e.g., make graphical mode the
default target):
​ sudo systemctl set-default graphical.target
✔​ To switch to single-user mode (useful for troubleshooting):
​ sudo systemctl isolate rescue.target
Question: How does changing the boot target to multi-user.target or
graphical.target affect the system’s environment and user experience?

Task 4: Shutting Down and Rebooting the System


1.​ Shutdown Commands:
✔​ To shut down the system immediately, you can use the following
commands:
​ sudo shutdown now
or
​ sudo poweroff
✔​ These commands stop all running services and shut down the
system cleanly.
✔​ Scheduled Shutdown: You can schedule a shutdown at a specific
time. For example, to shut down the system in 10 minutes:
sudo shutdown +10
✔​ You can cancel a scheduled shutdown with:
sudo shutdown -c
2.​ Reboot Commands:
✔​ To reboot the system immediately, use:
sudo reboot
or
sudo systemctl reboot
✔​ This will restart the system, terminating all running processes and
reloading the operating system.
Task: Shut down and reboot the system using both shutdown and
systemctl commands. Observe the differences in behavior.
Question: How does using shutdown differ from reboot in terms of system
behavior? When would you prefer to use one over the other?

Task 5: Understanding the Systemd Boot Target Lifecycle


1.​ List Available Targets:
✔​ You can list all available systemd targets using the following
command:
systemctl list-units --type=target
✔​ This will display a list of all boot targets, their status, and whether
they are currently active.
2.​ View Services in a Target:
✔​ To view all services that are part of a specific target, use:
systemctl list-dependencies graphical.target
This shows all services and units associated with the graphical
target.
3.​ Stopping and Starting Services:
✔​ You can stop or start specific services within a target using the
systemctl command. For example, to stop the display manager
(which controls graphical login), use:
​ sudo systemctl stop display-manager.service
✔​ Similarly, to restart it:
sudo systemctl restart display-manager.service
Question: How does managing services in different targets (e.g.,
multi-user vs. graphical) impact system performance and security?

Task 6: Exploring the Impact of Changing Targets and Runlevels


1.​ Impact on System Services:
✔​ Changing to a lower runlevel or boot target (e.g., from
graphical.target to multi-user.target) will stop unnecessary services,
freeing up system resources.
✔​ Conversely, switching to a higher target (e.g., from multi-user.target
to graphical.target) will start additional services, like the display
manager, which is required for a graphical login.
2.​ Safety Considerations:
✔​ Changing targets or runlevels can affect the system’s operation.
Always ensure critical services are running before making these
changes in production environments.
✔​ In emergency scenarios (e.g., system recovery), using
single-user.target or rescue.target can allow you to troubleshoot
without interference from other system services.
Question: In what situations would you switch to a runlevel or boot target
like single-user.target or rescue.target? How can this be useful for system
administration?

Notes:
✔​ SysV init is deprecated in many modern Linux distributions in favor of
systemd, which offers more flexibility and control over the boot process
and system services.
✔​ Runlevels are still used in legacy systems, but most current
distributions (including Kali Linux) use boot targets to define the
system’s state.
✔​ Changing runlevels or boot targets alters which services run, so always
verify that critical services are unaffected before making such changes
on a production system.

Lab 48 – Use Debian Package Management: Apt

Lab Objective:
Learn how to manage software packages on a Debian-based system using
the apt package management tool. This includes installing, updating,
upgrading, and removing packages.

Lab Purpose:
In this lab, you will practice using APT (Advanced Package Tool), the
package management system for Debian and its derivatives. You will gain
familiarity with basic package management commands, how to search for
packages, and how to manage package dependencies.

Lab Tool:
Kali Linux (latest version) or any Debian-based Linux distribution (Ubuntu,
Linux Mint, etc.).

Lab Topology:
A single Kali Linux machine or virtual machine with internet access.

Lab Walkthrough:

Task 1: Updating Package Lists


1.​ Overview of apt: apt is the front-end tool for handling packages on
Debian-based systems. It allows users to install, update, upgrade, and
remove packages. The apt tool works with repositories to retrieve and
manage software.
2.​ Update Package Lists: Before installing or upgrading any packages,
it’s important to update the local package index. This ensures that your
system has the latest information about available packages from the
configured repositories.
Run the following command:
sudo apt update
Explanation:
✔​ The update command refreshes the local package index by
downloading the latest package lists from the repositories defined in
/etc/apt/sources.list.
Question: Why is it important to run apt update before installing or
upgrading packages? What might happen if you skip this step?

Task 2: Installing Packages


1.​ Search for Available Packages: Before installing a package, it’s
useful to search for it by name. Use the apt search command to look for
software packages available in the repositories.
Example:
apt search nmap
This will show all available packages related to nmap.
2.​ Install a Package: Once you’ve identified the package you want to
install, use the apt install command to install it. For example, to install
nmap, run:
sudo apt install nmap
Explanation:
✔​ The install command fetches the package from the repository and
installs it along with any necessary dependencies.
Question: What happens when a package has dependencies? How
does apt handle these dependencies?

Task 3: Upgrading Installed Packages


1.​ Upgrade Packages: To ensure that your system is up-to-date with
the latest software versions, you can upgrade the installed packages.
First, update the package list if you haven't done so already:
sudo apt update
Then, upgrade the installed packages with the following command:
sudo apt upgrade
Explanation:
✔​ The upgrade command will upgrade all upgradable packages to their
latest versions. If any package has a new version, it will be installed.
✔​ During this process, you may be prompted to confirm the upgrade,
especially if there are significant changes or new packages to install.
Question: What happens if you have multiple packages with updates?
How does apt handle package upgrades?
2.​ Full Upgrade: Sometimes, a package upgrade may require the
removal of obsolete packages or the installation of new packages. In such
cases, use full-upgrade (previously known as dist-upgrade):
sudo apt full-upgrade
Explanation:
✔​ The full-upgrade command upgrades all packages but also handles the
removal of old or obsolete packages and the installation of new
dependencies.
Question: Why would you use full-upgrade instead of just upgrade? What
potential risks could this pose?

Task 4: Removing Packages


1.​ Remove a Package: To remove an installed package, use the apt
remove command. For example, to remove nmap, run:
sudo apt remove nmap
Explanation:
✔​ The remove command will uninstall the specified package but leave
its configuration files intact. This is useful if you want to reinstall the
package later without losing your settings.
2. Completely Remove a Package: To remove the package along with its
configuration files, use the purge option:
sudo apt purge nmap
Explanation:
✔​ The purge command removes the package and its configuration files.
This is useful if you want to completely remove all traces of a package.
Question: What are the differences between remove and purge? In which
scenario would you use purge over remove?

Task 5: Cleaning Up Unnecessary Packages


1.​ Autoremove Unused Packages: Over time, when you install and
remove packages, some dependencies may no longer be needed. Use
the autoremove command to remove unnecessary packages that were
installed as dependencies but are no longer required:
sudo apt autoremove
Explanation:
✔​ The autoremove command removes packages that were
automatically installed as dependencies but are no longer required
by any installed package. This helps keep the system clean.
2.​ Clean the Local Cache: After installing or upgrading many
packages, cached .deb files are left on the system, taking up space. To
clean the package cache and free up disk space, use:
sudo apt clean
Explanation:
✔​ The clean command removes all cached package files from
/var/cache/apt/archives/, which can help free up disk space.
Question: Why is it important to clean the package cache periodically?
What are the potential risks of leaving old cached packages on the
system?
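The routine maintenance steps from Tasks 1, 3, and 5 are often chained into one script. The sketch below defaults to a dry run (it only echoes the commands); set APT_CMD to "sudo apt" to execute the sequence for real:

```shell
# Dry-run by default: the commands are echoed, not executed.
# Set APT_CMD="sudo apt" to run the maintenance sequence for real.
apt_cmd="${APT_CMD:-echo apt}"

log=$(
    $apt_cmd update           # refresh package lists
    $apt_cmd -y full-upgrade  # upgrade, handling new/obsolete packages
    $apt_cmd -y autoremove    # drop dependencies nothing needs anymore
    $apt_cmd clean            # empty /var/cache/apt/archives
)
printf '%s\n' "$log"
```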

Task 6: Checking Installed Packages


1.​ List Installed Packages: To view all installed packages on your
system, run:
apt list --installed
2.​ Check Package Details: To view detailed information about a
specific installed package, use:
apt show nmap
This will provide information like the package description, version,
dependencies, and more.
Question: How can listing installed packages help in system administration
and troubleshooting?

Task 7: Searching for Packages


1.​ Search for Packages: You can use the apt-cache search command
to search for packages by keywords. For example, to search for packages
related to networking:
apt-cache search network
This will list packages related to networking, including tools and libraries.
Question: How can searching for packages help you find software that
might not be installed on your system but could be useful?

Notes:
✔​ APT (Advanced Package Tool) is one of the most commonly used
package management tools on Debian-based distributions. It simplifies
the installation, removal, and management of software packages.
✔​ While apt handles both package installation and removal, it's crucial to
use the update command regularly to ensure your system is using the
latest package lists.
✔​ Always review package upgrades and removals carefully, especially on
production systems, to avoid inadvertently breaking critical software or
services.
✔​ The apt tool handles dependencies automatically, which simplifies
package management significantly, but you should still be cautious
when upgrading critical components.

Lab 49 – Linux as a Virtualization Guest

Lab Objective:
Learn how to set up a Linux operating system as a guest in a virtualization
environment. This lab will cover the installation of Linux in a virtual
machine, configuring virtual hardware, and optimizing performance for
virtualized environments.

Lab Purpose:
In this lab, you will gain practical experience in setting up and managing a
Linux guest operating system (OS) within a virtualized environment, such
as VMware, VirtualBox, or KVM. You will learn how to install a Linux
distribution as a guest OS and optimize its settings for virtualized
environments, improving performance and resource management.

Lab Tool:
Linux Distribution: Any modern Linux distribution (e.g., Kali Linux,
Ubuntu).
Virtualization Software: VMware Workstation, Oracle VirtualBox, or
KVM (Kernel-based Virtual Machine).
Virtual Machine (VM) Host: A physical machine running virtualization
software with sufficient resources (CPU, RAM, disk space).
Guest Additions/Tools: Install any additional tools provided by the
virtualization software for better performance (e.g., VirtualBox Guest
Additions or VMware Tools).
Lab Topology:
Host Machine: A physical machine running virtualization software
(e.g., VMware Workstation or VirtualBox).
Guest Machine: A Linux distribution (e.g., Kali Linux) running as a
virtual machine within the host system.

Lab Walkthrough:

Task 1: Setting Up the Virtualization Environment


1.​ Install Virtualization Software: If you haven't already installed a
virtualization tool, do so now. Here are the installation steps for common
virtualization platforms:
VirtualBox:
Download and install VirtualBox from Oracle's website.
Once installed, launch the VirtualBox application.
VMware Workstation:
Download and install VMware Workstation from VMware's
website.
Once installed, launch VMware Workstation.
KVM:
For KVM, use your distribution's package manager (on
Ubuntu, for example, use apt install qemu-kvm
libvirt-daemon-system libvirt-clients bridge-utils).
2.​ Create a New Virtual Machine:
Open your virtualization software (e.g., VirtualBox, VMware
Workstation).
Create a new virtual machine by selecting Create New VM (or
similar).
Assign the following resources:
✔​ Memory (RAM): At least 2 GB (or more, based on the system
requirements of your Linux guest OS).
✔​ CPU: At least 1 or 2 virtual CPU cores (depending on the available
CPU resources of the host).
✔​ Disk Size: 20 GB (or more, depending on your needs).
Question: Why is it important to allocate enough resources (CPU, RAM,
disk space) when creating a virtual machine? What happens if these
resources are insufficient?

Task 2: Installing Linux as a Virtual Machine


1.​ Download a Linux ISO:
Download the ISO image of the Linux distribution you wish to
install, such as Kali Linux, Ubuntu, or Debian. You can
download these from the official website of the distribution (e.g.,
Kali Linux ISO).
2.​ Mount the ISO in the VM:
Attach the downloaded ISO image to the virtual optical drive of
the virtual machine.
In VirtualBox, go to the VM settings and click Storage.
Under Controller: IDE, click the empty disk and then
select the ISO file.
In VMware Workstation, go to the VM settings, select
CD/DVD (SATA), and choose the ISO file.
3.​ Start the Virtual Machine:
Start the virtual machine. The system should boot from the
attached ISO image, and you’ll begin the installation of the
Linux distribution.
4.​ Install Linux:
✔​ Follow the on-screen instructions to install the Linux distribution on
the virtual machine. This will be similar to installing the OS on a
physical machine:
✔​ Choose the language and keyboard layout.
✔​ Set up disk partitions (you can typically choose the default settings
or use the entire disk option).
✔​ Set up a username and password.
Question: How does installing Linux on a virtual machine differ from
installing it on a physical machine? What challenges or
considerations should be taken into account?

Task 3: Installing Virtual Machine Tools/Guest Additions


After installing the Linux guest OS, you should install Virtual Machine
Tools or Guest Additions to enhance the virtual machine's performance
and integration with the host system.
1.​ Install Guest Additions in VirtualBox:
✔​ With the Linux VM running, click Devices in the VirtualBox menu
and choose Insert Guest Additions CD image.
✔​ Inside the VM, mount the CD and install the Guest Additions:
​sudo mount /dev/cdrom /mnt
​sudo sh /mnt/VBoxLinuxAdditions.run
2.​ Install VMware Tools:
✔​ In VMware Workstation, click VM > Install VMware Tools.
✔​ Inside the VM, mount the VMware Tools disk and install the
software:
sudo mount /dev/cdrom /mnt
sudo tar zxvf /mnt/VMwareTools-*.tar.gz -C /tmp
cd /tmp/vmware-tools-distrib
sudo ./vmware-install.pl
3.​ Reboot the VM:
✔​ After installation, reboot the VM to apply the changes.
Question: What are the key benefits of installing VirtualBox Guest
Additions or VMware Tools? How do they improve the performance
and usability of the virtual machine?

Task 4: Optimizing the Linux Guest for Virtualization


1.​ Optimize Virtual Machine Settings:
✔​ In VirtualBox or VMware, you can fine-tune the VM's settings to
optimize performance. For instance:
✔​ Enable 3D Acceleration (if supported by the guest OS) for better
graphical performance.
✔​ Adjust CPU Allocation: Ensure the VM has at least 1 or 2 CPU
cores for adequate performance.
✔​ Increase Video Memory: Allocate more video memory if you're
running graphical environments.
2.​ Enable Shared Folders (Optional):
✔​ You can set up shared folders between the host and guest to easily
transfer files:
✔​ In VirtualBox: Go to VM Settings > Shared Folders, and add a
shared folder.
✔​ In VMware: Go to VM Settings > Options > Shared Folders, and
enable the feature.
3.​ Network Configuration:
✔​ Set up the network adapter for the virtual machine, usually in NAT
mode (Network Address Translation) for Internet access.
✔​ For local network testing, you can switch to Bridged Mode or
Host-Only Networking.
Question: Why is it important to optimize virtual machine settings? How
does this improve the performance of the guest operating system?
Task 5: Managing and Monitoring the Virtual Machine
1.​ Monitoring System Resources:
Once your Linux guest OS is running, you can monitor system
resources (such as CPU, RAM, disk, and network usage) using
commands like:
top
free -h
df -h
2.​ Managing Virtual Machine:
✔​ From your virtualization software, you can perform several tasks,
including:
✔​ Pause the VM: Temporarily suspends the guest OS.
✔​ Take a Snapshot: Save the state of the VM to revert to later if
needed.
✔​ Shut Down/Restart: Properly shut down or restart the guest OS
using the command:
sudo shutdown -h now
Question: What are the benefits of using snapshots in a virtualized
environment? How does it assist in system management and
troubleshooting?

Task 6: Shutting Down and Reverting the Virtual Machine


1.​ Shut Down the Virtual Machine:
✔​ To shut down the virtual machine properly, use the command:
sudo shutdown -h now
✔​ Alternatively, you can shut down the VM from the virtualization
software's interface (e.g., using Power Off or Shut Down options).
2.​ Revert to a Snapshot (Optional):
If you took a snapshot earlier, you can revert to it by selecting
the snapshot from the virtualization software and restoring it.
Question: Why is it important to shut down a virtual machine properly?
What risks are involved with forcefully powering off a virtual machine?

Notes:
✔​ Running Linux as a guest in a virtualized environment allows for
easy testing, development, and experimentation without impacting
the host system.
✔​ Always allocate enough resources to the guest OS to ensure it
operates smoothly and efficiently.
✔​ Installing Guest Additions or VMware Tools can significantly
improve the integration between the host and guest system,
providing better performance and additional features (e.g., shared
clipboard, seamless mouse integration).
✔​ Virtualization tools also allow you to take snapshots, making it easier
to return to a previous state in case something goes wrong during
your experimentation.

Lab 50 – Work on the Command Line: Environment Variables, Types, and History

Lab Objective:
Learn how to work with environment variables, understand their types, and
utilize command history on Linux. This lab will help you manage user
environment settings, control the shell environment, and retrieve and
reuse commands from the command history.

Lab Purpose:
In this lab, you will practice working with environment variables, which
define key settings that affect the behavior of processes in your system.
You will also explore shell history, which allows you to retrieve previously
run commands for convenience and efficiency.

Lab Tool:
Kali Linux (or any Linux distribution)
Terminal (Bash or another shell)

Lab Topology:
Linux System: A single system or virtual machine running a Linux
distribution (e.g., Kali Linux, Ubuntu).
Shell: The Bash shell or any other command-line shell available.

Lab Walkthrough:

Task 1: Understanding Environment Variables


1.​ What Are Environment Variables? Environment variables are
dynamic values that affect the way processes run on a system. They store
system configuration information that can be accessed by the shell and
applications.
Common environment variables include:
✔​ PATH: Directories where executable files are located.
✔​ HOME: The current user's home directory.
✔​ USER: The current logged-in user.
✔​ SHELL: The path to the current shell.
2.​ List Environment Variables: To list all environment variables, use
the printenv or env command:
printenv
or
env
This will show a list of environment variables along with their values.
3.​ Check Specific Environment Variables: To display the value of a
specific environment variable, use the echo command:
echo $PATH
echo $HOME
echo $USER
Question: What does the PATH variable represent? Why is it
important?

Task 2: Modifying Environment Variables


1.​ Set Environment Variables Temporarily: To set an environment
variable temporarily for the current session, use the export command:
export MY_VAR="Hello World"
To verify the variable was set:
echo $MY_VAR
Question: How long will the environment variable MY_VAR exist after the
terminal session ends?
2.​ Set Environment Variables Permanently: To make an environment
variable permanent, you need to add it to a configuration file. For example:
Add the export line to your ~/.bashrc (for Bash) or
~/.bash_profile file:
echo 'export MY_VAR="Hello World"' >> ~/.bashrc
After modifying the file, reload the configuration:
source ~/.bashrc
Question: Why might you want to make an environment variable
permanent? What are some examples of permanent environment
variables you might set?

Task 3: Types of Environment Variables


1.​ User vs System Environment Variables:
✔​ User Environment Variables are set per user and stored in
user-specific files such as ~/.bashrc or ~/.bash_profile.
✔​ System Environment Variables are global and can affect all users.
These are typically set in system files like /etc/environment or
/etc/profile.
2.​ Commonly Used Environment Variables: Some important
environment variables to be aware of are:
✔​ HOME: The home directory of the user.
✔​ PATH: Directories that the shell searches for commands.
✔​ USER: The username of the current logged-in user.
✔​ SHELL: The path of the shell currently being used.
You can check the values of these variables like so:
echo $HOME
echo $USER
echo $SHELL
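A related distinction is between plain shell variables and exported (environment) variables: only exported variables are inherited by child processes. A minimal demonstration (the variable names here are arbitrary):

```shell
MY_LOCAL="shell only"            # plain shell variable: not inherited
export MY_EXPORTED="inherited"   # exported: part of the environment

# A child shell sees only the exported variable:
bash -c 'echo "local: [$MY_LOCAL] exported: [$MY_EXPORTED]"'
# prints: local: [] exported: [inherited]
```

This is why `export` is needed in Task 2: without it, a variable never leaves the current shell.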

Task 4: Using Command History


1.​ Viewing Command History: The shell keeps a record of commands
you've run. To view your command history, use the history command:
history
This will list your most recent commands. You can also specify the
number of commands to show:
history 10
2.​ Reusing Commands from History: You can reuse a previous
command from your history by using the ! followed by the command
number. For example:
!5
This will execute the 5th command in the history list.
3.​ Search Command History: To search for a specific command in the
history, use Ctrl + r and start typing the command. This will search through
your history in reverse.
Once you find the desired command, press Enter to execute it or Ctrl
+ g to exit the search.
Question: How does using the history search improve efficiency on the
command line?

Task 5: Clearing and Managing Command History


1.​ Clear Command History: If you want to clear your command history,
use the following command:
history -c
This will clear the history of the current shell session.
2.​ Delete a Specific Command from History: To delete a specific
command from history, use:
history -d <line_number>
For example, to delete the 10th command in the history:
history -d 10
3.​ Persist History Across Sessions: To make sure your history is
saved across sessions, you can check that the HISTFILE variable is set to
a valid file. This is typically ~/.bash_history for Bash:
echo $HISTFILE
Question: Why is it important to manage your command history carefully,
especially in shared or production environments?

Task 6: Practical Use of Environment Variables and History


1.​ Exporting Variables for Scripts: Environment variables are
commonly used in scripts to configure the environment for commands that
will run. For example:
export DATABASE_URL="mysql://user:password@localhost:3306/dbname"
Then you can run a script that uses this variable to connect to the
database.
2.​ Reusing Commands for Efficiency: Reuse commands from your
history to save time. For instance, if you regularly check disk usage, you
can find the previous command using history and use ! to quickly run it
again.
3.​ Setting Up Aliases Using Environment Variables: You can use
environment variables to create custom aliases. For example:
alias ll='ls -alF'
alias gs='git status'
Add these to your ~/.bashrc file for them to be available in every
session.
Question: How do aliases and environment variables improve your
workflow on the command line? What types of tasks can be automated
using aliases?

Notes:
✔​ Environment Variables are critical for configuring your system and
application environments. They control various aspects of the shell and
system behavior.
✔​ Command History is a powerful tool for efficiency, allowing you to
quickly reuse and search past commands.
✔​ Remember to manage your history and environment variables securely,
especially if your system is shared with others or used in production
environments.
✔​ Always set environment variables in appropriate files to ensure they
persist and are available to your processes.

Advanced Labs

Lab 51. Logical Volume Manager


Lab 52. Best Practices (Auditing)

Lab 53. Monitoring Disk Usage

Lab 54. CPU Configuration

Lab 55. RAID

Lab 56. Patching

Lab 57. Best Practices (Authentication)

Lab 58. Archiving and Unarchiving

Lab 59. Scripting Practice 1

Lab 60. Scripting Practice 2

Lab 61. Creating a Git Repo

Lab 62. Working with Git

Lab 63. Git Branches

Lab 64. Ansible

Lab 65. Puppet

Lab 66. Special Directories and Files

Lab 67. Process Monitoring

Lab 68. Manage Printers and Printing

Lab 69. Modify Process Execution Priorities

Lab 70. Network Troubleshooting

Lab 71. CPU Monitoring

Lab 72. System Messaging and Logging


Lab 73. UFW

Lab 74. Processes and Configuration

Lab 75. Ports and Services

Lab 76. Standard Syntax

Lab 77. Quoting and Escaping

Lab 78. Basic Regex

Lab 79. Extended Regex

Lab 80. Resetting the Root Password

Lab 81. – Use Streams, Pipes, and Redirects: Redirection and File Descriptors

Lab 82. – Create Partitions and Filesystems

Lab 83 – Control Mounting and Unmounting of Filesystems: Manual Mounting

Lab 84. – Manage User and Group Accounts and Related System
Files: Special Purpose Accounts

Lab 85. – System Logging: Rsyslog

Lab 51: Managing Disk Space with LVM on Kali Linux

Lab Objective:
Learn how to set up and manage disk space using LVM—a flexible way to
manage storage.

Lab Purpose:

Understand how to create, extend, and remove logical volumes, using


virtual loopback devices as a safe way to experiment.

Tools:

●​ Kali Linux
●​ lvm2 package (pre-installed or can be installed via sudo apt install
lvm2)

Topology:

●​ Single Kali Linux VM or system

Task 1: Create Loopback Devices for Storage

Why: Loopback files emulate disks, making experiments safe and


reversible.

dd if=/dev/zero of=block.img bs=100M count=10
dd if=/dev/zero of=block2.img bs=100M count=10
sudo losetup -fP block.img
sudo losetup -fP block2.img
losetup -a | grep block | cut -d: -f1

Explanations:

●​ dd if=/dev/zero of=block.img bs=100M count=10: creates a 1GB file filled with zeros (10 * 100MB).
●​ sudo losetup -fP block.img: automatically finds the first free loop
device (-f) and attaches the image (-P to scan for partitions). Repeat
for the second image.
●​ losetup -a: lists all loop devices associated.
●​ grep block: filters those related to your images.
●​ cut -d: -f1: extracts device names like /dev/loop0, /dev/loop1, etc.

Substitute the actual /dev/loopX device names in subsequent commands.

Task 2: Create Physical Volumes

sudo pvcreate /dev/loopX /dev/loopY


sudo pvdisplay

Details:

●​ pvcreate converts loop devices into physical volumes (the base units
for LVM).
●​ pvdisplay shows details such as size and device name of created
PVs.
Task 3: Create a Volume Group

sudo vgcreate lab15 /dev/loopX /dev/loopY

sudo vgdisplay

Details:

●​ vgcreate combines your PVs into a volume group called lab15.


●​ vgdisplay provides info about the volume group, including size and
PVs.

Task 4: Create Logical Volumes

sudo lvcreate -L 1500 lab15

sudo lvcreate -L 492 lab15

sudo lvdisplay

Details:

●​ lvcreate creates logical volumes (lvol0, lvol1) of specified sizes.


●​ lvdisplay shows info about the logical volumes, including their device
paths such as /dev/lab15/lvol0 and /dev/lab15/lvol1.
(You would format them with mkfs.ext4 /dev/lab15/lvol0 and mount as
needed.)
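The parenthetical above can be expanded into a short sequence. This is a sketch only: the mount-point name lvol0-mnt is arbitrary, and these commands touch only this lab's loopback-backed volumes.

```shell
sudo mkfs.ext4 /dev/lab15/lvol0   # put an ext4 filesystem on the first LV
mkdir lvol0-mnt                   # arbitrary mount-point name
sudo mount /dev/lab15/lvol0 lvol0-mnt
df -h lvol0-mnt                   # reported size should match the 1500MB LV
sudo umount lvol0-mnt             # unmount again before the cleanup task
```

Unmounting before Task 5 matters: lvremove will refuse to remove a logical volume that is still mounted.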

Task 5: Cleanup / Remove the LVM setup

sudo lvremove /dev/lab15/*

sudo vgremove lab15

sudo pvremove /dev/loopX /dev/loopY

sudo losetup -d /dev/loopX /dev/loopY

rm block.img block2.img

Explanation:

●​ lvremove: deletes logical volumes.


●​ vgremove: deletes the volume group.
●​ pvremove: removes the physical volume metadata.
●​ losetup -d: detaches the loop devices.
●​ rm: deletes the loopback image files.

Notes for Kali Linux:

●​ Kali Linux typically has lvm2 installed; if not, install with:​


sudo apt update && sudo apt install lvm2
●​ Make sure to run all commands with appropriate permissions (sudo
when necessary).
●​ Always clean up loop devices and images after your experiments.

Lab 52: Best Practices (Auditing)

Lab Objective:

Learn about best practices for system auditing and how to implement them.

Lab Purpose:

You will use a tool called Lynis to scan your system for potential issues
related to auditing configurations and apply simple fixes.

Tools:

●​ Kali Linux (or a compatible Linux distribution)

Topology:

●​ Single Linux machine or virtual machine

Task 1: Install and Run Lynis Audit

First, install Lynis:

sudo apt install lynis

Now, use it to scan for potential system accounting problems:


sudo lynis audit system --tests-from-category security --tests-from-group
accounting

Output will be extensive. Focus on the “Warnings” and “Suggestions”


sections. Common recommendations include:

●​ Enable process accounting


●​ Enable sysstat for accounting
●​ Enable auditd to collect audit information

Task 2: Install Packages and Enable Monitoring

Install required packages:

sudo apt install acct sysstat auditd

To enable sysstat, modify its configuration file:

Open /etc/default/sysstat in your preferred editor and change:

ENABLED="false"

to:

ENABLED="true"

Save your changes, then restart sysstat or run:

sudo systemctl restart sysstat
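Hand-editing works fine, but the same change can be scripted, which is handy when hardening many machines. A sed one-liner is shown below against the real file; the pattern assumes the default ENABLED="false" line is present.

```shell
sudo sed -i 's/^ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
grep '^ENABLED' /etc/default/sysstat   # confirm the change took effect
```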


Run the scan again:

sudo lynis audit system --tests-from-category security --tests-from-group


accounting

You may find Lynis now raises warnings about the audit daemon.

Dealing with Auditd Warnings

Lynis might warn that your audit daemon is enabled but with an empty
ruleset. To address this:

Edit /etc/audit/rules.d/audit.rules and add a line at the end:

-w /var/log/audit -k auditlog

This tells auditd to monitor the /var/log/audit directory and to label this rule
with “auditlog”.

After editing, restart auditd:

sudo systemctl restart auditd

Final Scan

Run the Lynis audit one last time:

sudo lynis audit system --tests-from-category security --tests-from-group


accounting
You should see fewer warnings related to accounting or auditing.

Notes:

●​ Results from Lynis may vary; some warnings are informational, others
important.
●​ Feel free to implement recommendations to improve system security
and auditing.

Lab 53: Monitoring Disk Usage

Lab Objective:

Learn how to monitor disk space and inode usage to manage storage
effectively.

Lab Purpose:

Use common Linux tools to check available space and inodes on your
disks and directories.

Tools:

●​ Linux (Kali Linux or other distro)

Topology:

●​ Single Linux machine or virtual machine

Task 1: Check Disk Space Usage

Use df to evaluate free space on your disk partitions:


df -h /dev/sd*

Explanation:

●​ df reports disk space usage.


●​ -h makes sizes human-readable.
●​ /dev/sd* targets your disk partitions (like /dev/sda1, /dev/sda2).

You will see used and available space on your main partition(s).

To check inode usage (inodes are filesystem metadata entries):

df -hi /dev/sd*

Explanation:

●​ -i shows inode information, including usage percentages.

Typically, inode usage is lower than disk space usage, but monitoring both
helps prevent filesystem issues.

Task 2: Check Space and Usage in Directories

du is used to measure disk space used by files/directories:

du -h a

Explanation:

●​ du reports disk space in human-readable units (-h).


●​ a is your target directory (replace with your directory name).

To sort the output by size:


du -h a | sort -h

Explanation:

●​ It lists all items inside a and sorts them from smallest to largest.

To understand inode usage in a directory, add the --inodes flag:

du -h --inodes a | sort -h

Note:

●​ This shows inode counts instead of disk space.

Summarized by size or inodes in a directory:

du -hs a

●​ -s: summaries only total size for the directory.

du -hs --inodes a

●​ Similar, but shows inode count instead of size.
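These flags combine nicely for a common troubleshooting pattern: summarize each immediate subdirectory of a location and sort so the largest consumer of space ends up at the bottom of the list.

```shell
# One entry per immediate subdirectory, largest last:
du -h --max-depth=1 . | sort -h
```

--max-depth=1 limits the report to the top level instead of printing a line for every nested directory.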

Notes:

●​ Use these tools regularly to monitor storage and prevent issues.


●​ df is good for disk-level info; du for directory and file-level insights.
Lab 54: CPU Configuration

Lab Objective:

Learn how to gather information about your CPU(s) and explore some
kernel parameters related to CPU and system performance.

Lab Purpose:

Use various tools to inspect CPU hardware details and view or modify
kernel settings affecting CPU behavior.

Task 1: View CPU Information

Open the terminal and run the following commands:

lscpu

Explanation:​
Displays detailed CPU architecture information in a user-friendly format,
including number of cores, model name, CPU MHz, flags, etc.

cat /proc/cpuinfo

Explanation:​
Shows detailed CPU info, including vendor ID, model, flags, bugs
(Spectre, Meltdown), and more. It’s more verbose and raw but useful for
detailed hardware insights.
Task 2: View Kernel CPU Parameters

Run:

sysctl -a | grep -i cpu

Explanation:​
Shows all kernel parameters (sysctl -a) related to CPU (case-insensitive).
The output includes many settings; focus on those relevant for tuning.

Optionally, for a more comprehensive view, run:

sudo sysctl -a

But be cautious: This outputs many parameters (often more than 900), some of which
require root privileges or may contain sensitive info.

You can also inspect sysctl configuration files:

cat /etc/sysctl.conf

and:

ls /etc/sysctl.d/

To see the configuration snippets applied at startup.

To modify a setting temporarily, run:

sudo sysctl -w some.setting=value


To make persistent changes, add a line to a file in /etc/sysctl.d/ and run:

sudo sysctl -p
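As a concrete, read-only example, vm.swappiness (how aggressively the kernel swaps memory to disk) is one commonly inspected parameter; both commands below read the same value:

```shell
sysctl vm.swappiness          # e.g. "vm.swappiness = 60"
cat /proc/sys/vm/swappiness   # the same value via the /proc interface
```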

Notes:

●​ These commands retrieve hardware and kernel-level CPU info.


●​ Using /proc filesystem (not covered here in commands) allows for
real-time, ephemeral modifications, e.g., changing CPU governor
settings.

Lab 55: RAID

Lab Objective:

Learn how to configure and manage software RAID arrays using mdadm.

Lab Purpose:

Set up a software RAID array (specifically RAID 1) to provide redundancy,


similar to hardware RAID, using Linux's mdadm tool.

Tools:

●​ Kali Linux

Topology:

●​ Single Linux machine or virtual machine


Task 1: Install MDADM Utility

First, install the RAID management tools:

sudo apt install mdadm

Next, prepare two virtual disks (loopback files) for RAID setup:

dd if=/dev/zero of=disk1.img bs=1M count=1000

dd if=/dev/zero of=disk2.img bs=1M count=1000

Attach these images as loop devices:

sudo losetup -fP disk1.img

sudo losetup -fP disk2.img

Identify the loop devices assigned (they might be /dev/loopX). Run:

losetup -a | grep disk

Note:​
The output will identify devices like /dev/loop18 and /dev/loop19.
Remember these device names, as they will be used in subsequent
commands as /dev/loopX and /dev/loopY.

Task 2: Create the RAID Array

Create a mirrored RAID-1 array from the two loopback devices:


sudo mdadm -C /dev/md0 -l 1 -n 2 /dev/loopX /dev/loopY

Replace /dev/loopX and /dev/loopY with your actual device names.

●​ -C: create array.


●​ /dev/md0: the RAID device.
●​ -l 1: RAID level 1 (mirroring).
●​ -n 2: number of devices.

Verify creation with:

cat /proc/mdstat

This file shows arrays and their status, along with the disks involved.

Task 3: Use the RAID Array

Format the array and mount it:

sudo mkfs.ext4 /dev/md0

mkdir lab84

sudo mount /dev/md0 lab84

sudo touch lab84/foo

This creates a filesystem on /dev/md0, creates a directory, mounts the


array, and touches a test file.

Task 4: Simulate Disk Failure and Rebuild


Simulate a disk failure on one device:

sudo mdadm -If loopY

Replace loopY with the kernel name of your second loop device; in this
incremental mode, mdadm expects just the name (e.g. loop19) without the
/dev/ prefix.

Check the status:

cat /proc/mdstat

Or get detailed info:

sudo mdadm --detail /dev/md0

This shows the array as degraded with one device removed.

Rebuild the array by adding the disk back and rebuilding:

sudo dd if=/dev/urandom of=lab84/bar

sudo mdadm -a /dev/md0 /dev/loopY

sudo mdadm --detail /dev/md0

The first command fills a file with random data to simulate activity. The -a
command re-adds the disk, triggering a rebuild. The --detail shows the
rebuild status.

Task 5: Clean Up
Unmount and stop the array, detach loop devices, and delete files:

sudo umount lab84

sudo mdadm --stop /dev/md0

sudo losetup -d /dev/loopX

sudo losetup -d /dev/loopY

rm disk1.img disk2.img

Notes:

●​ RAID 0 offers performance but no redundancy.


●​ RAID 1 provides mirroring; if one disk fails, data is still available.
●​ mdadm supports many other RAID levels and configurations.

Lab 56: Patching

Lab Objective:

Learn how to apply software patches and keep your system up-to-date.

Lab Purpose:

Practice applying system updates and other patches to ensure your system
remains secure and current.

Tools:

●​ Kali linux

Topology:
●​ Single Linux machine or virtual machine

Task 1: Apply System Updates

Update your system packages to the latest versions:

sudo apt full-upgrade

Explanation:​
This command performs a full upgrade, installing new packages, removing
obsolete ones, and updating existing packages to their latest versions. It
may take some time depending on how many updates are available.

To automate the process of applying updates, ensure unattended-upgrades


is installed and configured:

sudo apt install unattended-upgrades

The configuration file is usually located at:

/etc/apt/apt.conf.d/50unattended-upgrades

Note:​
By default, this configuration typically only applies security updates, which
is usually what you want for security reasons.

To update a single package, run:

sudo apt upgrade <package>

Replace <package> with the actual package name, for example:
sudo apt upgrade nginx

Notes:

●​ You can also manually apply source patches, which involves


downloading source code, editing files, and applying patches with the
patch command.
●​ This method is mostly used by developers maintaining custom
versions of software but is beyond the scope of typical system
patching.

Lab 57: Best Practices (Authentication)

Lab Objective:

Learn about best practices for user authentication and how to improve
security in this area.

Lab Purpose:

Use the Lynis security auditing tool to identify and fix weaknesses in user
authentication configurations.

Tools:

●​ Kali Linux

Topology:

●​ Single Linux machine or virtual machine


Task 1: Install Lynis and Run Initial Audit

First, install the Lynis security scanning tool:

sudo apt install lynis

Now, scan the system for authentication-related security issues:

sudo lynis audit system --tests-from-category security --tests-from-group


authentication

Note:​
The output will include various warnings and suggestions. Focus on
security weak points under those sections. The review will steer you toward
the following suggested improvements.

Task 2: Implement Basic Security Enhancements

Start by installing a PAM module for password strength testing:

sudo apt install libpam-cracklib

Next, open /etc/login.defs in your favorite editor (with root privileges), for
example:

sudo nano /etc/login.defs

Update the following lines:


●​ Change PASS_MAX_DAYS to 90 to set maximum password age:

PASS_MAX_DAYS 90

●​ Change PASS_MIN_DAYS to 1 to set minimum password age:

PASS_MIN_DAYS 1

●​ Change UMASK to 027 to make the default permissions more restrictive:

UMASK 027

Save your changes and exit the editor.
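Before rerunning the scan, you can confirm all three values with a single read-only check (the grep pattern simply matches the three directive names at the start of a line):

```shell
grep -E '^(PASS_MAX_DAYS|PASS_MIN_DAYS|UMASK)' /etc/login.defs
```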

Task 3: Run the Security Scan Again

Rerun Lynis to verify the improvements:

sudo lynis audit system --tests-from-category security --tests-from-group


authentication

Question:​
Did you get a perfect score? Great! But remember, in a real-world
scenario, such changes affect user policies. For example:

●​ Forcing all users to change passwords every 90 days may require


informing your colleagues or users beforehand.

Notes:
●​ Your scan results may show variations in warnings and suggestions.
●​ Feel free to implement or revert recommendations based on your
environment.
●​ Always test configuration changes in a lab environment before
applying them to production servers.


Lab 58: Archiving and Unarchiving

Lab Objective:

Learn how to create, extract, and examine various archive files and
compression formats.

Lab Purpose:

Understand gzip, bzip2, and xz compression formats, as well as tools like


tar and cpio for managing archives.

Tools:

●​ Ubuntu 18.04 (or similar Linux distro)

Topology:

●​ Single Linux machine or virtual machine

Task 1: Set Up Files for Archiving

Open the terminal and run:

mkdir lab58
cd lab58

touch f1 f2 f3 f4 f5 f6 f7 f8 f9

Task 2: Compress Files with gzip

Run:

gzip f*

ls

Explanation:​
This compresses each of the nine files individually into f1.gz, f2.gz, ...,
f9.gz.​
Note:​
gzip is a compression format, not an archiving tool; original files are
replaced by compressed versions.

Task 3: Create a Tar Archive and Compress

First, remove the compressed files:

rm f*

Create the files again:

touch f1 f2 f3 f4 f5 f6 f7 f8 f9
Create a tar archive containing these files:

tar -cf files.tar.gz f*

Note:​
This creates an archive named files.tar.gz. Despite the .gz suffix, -cf alone
does not compress; the result is a plain, uncompressed tar archive.

Now, to verify the contents of the archive:

tar --list -f files.tar.gz

Question:​
What do you see? You should see the list of files inside.
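The .gz in the name above is misleading, since -cf alone never compresses. One way to see the difference is with gzip -t, which tests whether a file really is gzip data (a scratch-directory demo; the archive names plain.tar.gz and real.tar.gz are arbitrary):

```shell
mkdir -p /tmp/tardemo && cd /tmp/tardemo
touch f1 f2 f3
tar -cf  plain.tar.gz f1 f2 f3    # -c alone: plain tar archive, despite the name
tar -czf real.tar.gz  f1 f2 f3    # -z applies actual gzip compression
gzip -t real.tar.gz  && echo "real.tar.gz is gzip data"
gzip -t plain.tar.gz 2>/dev/null || echo "plain.tar.gz is NOT gzip data"
```

The file extension is only a naming convention; tools like gzip and file inspect the actual bytes.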

Task 4: Create Compressed tarballs with Different Formats

Create tarballs with bzip2 and xz compression:

tar -cjf archive.tar.bz2 f*

tar -cJf archive.tar.xz f*

Explanation:

●​ -j: use bzip2 compression.


●​ -J: use xz compression.

To extract these archives later, simply run:

tar -xf archive.tar.bz2

tar -xf archive.tar.xz


Task 5: Use cpio for Archiving

Create an archive with find and cpio:

find . ! -name 'lab58.cpio' | cpio -o > lab58.cpio

Here find lists every file in the directory (excluding the archive itself, so
the archive does not try to contain itself), and cpio -o writes that list into
lab58.cpio.

List contents of the archive:

cpio -t < lab58.cpio

Extract the f1 file:

rm f1

cpio -i -d < lab58.cpio

List all files extracted:

rm f*

cpio -i -v < lab58.cpio


Summary:

●​ Creating archive with find | cpio -o.


●​ Viewing contents with cpio -t.
●​ Extracting files with cpio -i.

Task 6: Using dd for Backups and Copies

Create a backup copy of f1:

dd if=f1 of=f1.bak

Note:​
dd works at the block level, similar to copying entire disks or partitions.

Example (do not run):

dd if=/dev/sda1 of=/dev/sdb1

This copies the entire /dev/sda1 partition to /dev/sdb1.

Task 7: Clean Up

Remove created files and directories:

cd ..

rm -r lab58
Notes:

●​ To compress files without archiving, use gzip, bzip2, or xz:

gzip filename

gunzip filename.gz

bzip2 filename

bunzip2 filename.bz2

xz filename

unxz filename.xz

●​ These commands handle compression/decompression directly.

Lab 59. Scripting Practice 1

Lab Objective:

Use what you already know to write a simple shell script, from scratch.

Lab Purpose:

Scripting with Bash is a daily task by many professional Linux


administrators. When a task is repetitive, you don’t want to be typing the
same list of commands over and over — you want to create a script, and
perhaps also schedule that script to run automatically.
Lab Tool:

Kali linux

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:

Open the Terminal application, and run your favorite text editor, such as vi
or nano. (If you have never used vi or vim, then nano is strongly
recommended.)

You may wish to reference Lab 15 if you have not already done so. Your
goal is to write a script which accepts one argument—a number—and
prints out two numbers at a time such that:

●​ The first number counts down from the argument to 0


●​ The second number counts up from 0 to the argument

Example:

./foo.sh 100

Output: 100 0​
99 1​
98 2​
... and so on until 0 100

If the user doesn’t pass an argument, or passes an argument that doesn’t


make sense, like a word, you should print a friendly message like “Please
give a number.”

Remember to make the file executable (with chmod +x filename) before


testing it.

Answer 1:
●​ Create your script file, e.g., foo.sh​

●​ Use a text editor to write the following script:​



#!/bin/bash​

if ! [ "$1" -ge 0 ] 2> /dev/null; then​
echo "Please give a number."​
exit 1​
fi​

for (( n=0; n <= $1; n++ ))​
do​
echo $(($1 - $n)) $n​
done​

Notes:

In Lab 15, you read about the exit status. For error-checking purposes, can
you figure out how to use the exit status before your script has exited?

Lab 60. Scripting Practice 2

Lab Objective:

Use what you already know to write a simple shell script, from scratch.

Lab Purpose:

Scripting with Bash is a daily task by many professional Linux


administrators and security professionals. When a task is repetitive, you
don’t want to be typing the same list of commands over and over — you
want to create a script, and perhaps also schedule that script to run
automatically.
Lab Tool:

Kali Linux (or similar Debian-based distro)

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:

Open the Terminal application and run your favorite text editor.

You may wish to reference Labs 66 and 67 if you have not already done so.
Your goal is to write a script which accepts three arguments — a directory
path, and two flags to be passed to test. This script should check every file
in the directory (non-recursively) and, if it passes both tests, prints that file’s
name.

Example:​
./foo.sh a/mydir -f -w
myfile

In this example, the script is looking for files within a/mydir that are
regular files with write permissions granted. myfile passes those tests
and is printed.

All error conditions should print a friendly message and exit with status 1.
For example, if three arguments are not passed, or the passed directory
doesn’t exist, or one of the test arguments is not valid.​
See Answer 1 below for one possible solution.

For an extra challenge, you might print all files, but sort them according to
how many tests they passed.

Answer 1:

●​ Create your script file, e.g., foo.sh​


●​ Use a text editor to write the following script:​

#!/bin/bash

test $# -eq 3

if [ $? -ne 0 ]; then​
echo "Exactly three arguments are required."​
exit 1​
fi

if ! [ -d "$1" ]; then​
echo "Not a directory."​
exit 1​
fi

files=$(ls "$1" 2>/dev/null)

if [ -z "$files" ]; then​
echo "I can't read that directory."​
exit 1​
fi

for f in $files; do

test $2 "$1/$f" 2>/dev/null

rc1=$?

test $3 "$1/$f" 2>/dev/null

rc2=$?

if [ $rc1 -gt 1 ]; then

echo "First test argument is invalid."

exit 1

fi
if [ $rc2 -gt 1 ]; then

echo "Second test argument is invalid."

exit 1

fi

if [ $rc1 -eq 0 ] && [ $rc2 -eq 0 ]; then

echo "$f"

fi

done

Notes:

Remember that using /bin/sh as an interpreter can have different results
from /bin/bash! It is good to understand what makes a Bourne-compatible
shell, and what non-Bourne-compatible additions are available in Bash.
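As a quick illustration of that difference, this sketch contrasts Bash's arithmetic syntax with the portable POSIX form. Each snippet is run explicitly under its shell, so the result does not depend on what /bin/sh points to (both bash and sh are assumed to be installed, as on virtually all Linux systems):

```shell
# Bash-only ((...)) arithmetic, run explicitly under bash:
bash_result=$(bash -c 'i=0; ((i+=1)); echo "$i"')
echo "$bash_result"    # 1

# Portable POSIX arithmetic expansion, valid in any Bourne-compatible shell:
posix_result=$(sh -c 'i=0; i=$((i+1)); echo "$i"')
echo "$posix_result"   # 1
```

On distros where /bin/sh is dash, feeding the first inner command to sh instead of bash fails with a syntax error—exactly the portability trap the note above describes.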

Lab 61. Creating a Git Repo

Lab Objective:

Learn how to create a repository in Git, a popular version control tool.

Lab Purpose:

In this lab, you will create a Git repo to serve as a basis from which to learn
other Git operations in Labs 62 and 63.

Lab Tool:

Kali Linux (or another distro of your choice)


Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:

Install Git, if you haven’t already:

sudo apt install git

Create a new directory with some simple files to start with:

mkdir myrepo

cd myrepo

echo 'Hello World!' > hello.txt

echo 'This is a very important file. Do not delete.' > README

Now, to initialize the repo, run:

git init

Task 2:

If you’ve never used Git on this system before, then you probably haven’t
added your name and e-mail address yet. This isn’t a website signup form,
but a configuration file which tells other collaborators who committed a
given piece of code. Run:

git config --global -e

Edit the file with your name and e-mail, then save and quit.

You can also use git config -e to edit the local (repo-specific) configuration
file, which is located at .git/config, but you don’t need to worry about that
right now.
What you may want to do now is edit the .git/description file, which, as you
might guess, is a description of the repo.
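If you prefer to set your identity non-interactively, the same configuration can be written directly from the command line. A short sketch (the name and e-mail below are placeholders—substitute your own):

```shell
# Record your identity in the global (per-user) configuration:
git config --global user.name "Ada Example"
git config --global user.email "ada@example.com"

# Read back what Git will attach to your commits:
git config --global --get user.name
git config --global --get user.email
```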

Lab 62. Working with Git

Lab Objective:

Learn how to use basic day-to-day Git operations for version control.

Lab Purpose:

In this lab, you will practice working with commit and other operations for
managing code collaboration in Git.

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:

First, you will need a Git repo. If you don’t have one, you may reference
Lab 61, or find an interesting repo on the internet and use git clone to
retrieve it. This lab will assume you are using the repo from Lab 61.

Task 2:

Make sure your working directory is the repo’s directory, e.g., ~/myrepo. At
this point, you have a repo, but should not have made any commits yet.
Run the following commands to fix that:

git add hello.txt README

git commit -am "Initial commit"


You can see your commit with:

git log

To make further changes, follow the same workflow: edit any files that need
editing, and run git commit again.

It’s important to note that, in a typical development setting, you would
usually do one more step after you were done making commits:

git push

This pushes your most recent commits to a remote repo, which is often, but
not always, a central server from which the rest of your team can pick up
the changes.

To fetch changes made by your teammates, you would use:

git pull

Notes:

You can use:

git show

to show exactly what changes were made by a given commit. The syntax
is:

git show [hash]

where [hash] is the hash shown in git log. Or, alternatively, run git show to
show the most recent commit.

Lab 63. Git Branches

Lab Objective:
Learn how to work with branches in Git.

Lab Purpose:

In this lab, you will practice using branch and merge operations for
advanced code collaboration in Git.

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:

First, you will need a Git repo. If you don’t have one, you may reference
Lab 61, or find an interesting repo on the internet and use git clone to
retrieve it. This lab will assume you are using the repo from Lab 61.

Task 2:

The default code branch in Git is called master. Sometimes you want to
“branch off” and work on a large block of code without interfering with
master, maybe for the purposes of experimenting. Make sure your working
directory is the repo’s directory, then do that now with:

git checkout -b lab63

This is a shortcut for creating a branch and then checking it out at the same
time. You can confirm that you are now using the lab63 branch with:

git branch

Task 3:

Now, safely on the lab63 branch, make a few modifications:


echo 'I love git branches!' > lab63.txt

git add lab63.txt

git commit -am "Lab 63"

Unfortunately, there’s been a problem. While you were playing around with
this lab, a critical security patch was applied to the master branch:

git checkout master

echo 'This file is obsolete and should be removed.' > README

git commit -am "Bug fix #42"

Task 4:

Now, you may notice, the two branches are out of sync. Each branch has a
commit that the other one doesn’t. That’s where git merge comes into play:

git checkout lab63

git merge master

Using:

cat README

you can confirm that the patch from master was applied to lab63.

Finally, you can merge lab63 back into master and remove the branch:

echo 'And merging, too!' >> lab63.txt

git commit -am "I just merged"

git checkout master

git merge lab63

git branch -d lab63


Notes:

One important case not covered here was merge conflicts, i.e., what
happens when two changes conflict within a merge. Typically, Git will warn
you and indicate the conflict within one or more files, which you can then
examine with:

git diff

and an editor.
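For reference, a conflicted file contains markers like the following sketch (shown here with the README contents from this lab; the branch name after >>>>>>> depends on what you merged):

```
<<<<<<< HEAD
This file is obsolete and should be removed.
=======
This is a very important file. Do not delete.
>>>>>>> lab63
```

Everything between <<<<<<< and ======= is your current branch’s version; everything after ======= is the incoming version. Edit the file to keep the content you want, delete the markers, then git add and git commit to complete the merge.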

Lab 64. Ansible

Lab Objective:

Learn how to use the Ansible configuration management tool.

Lab Purpose:

In this lab, you will practice using Ansible, which is an infrastructure
automation tool that makes life easier for professional system administrators.

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:

First, install Ansible:

sudo apt-add-repository --yes --update ppa:ansible/ansible


sudo apt install ansible

Add your localhost to the Ansible hosts file, telling it that your local machine
is one of the devices (in this case, the only device) it should manage:

sudo sh -c 'echo localhost >> /etc/ansible/hosts'

Ansible connects over SSH, so normally you would need to do a bit of
setup to ensure that SSH connectivity is working. On localhost, you should
be able to skip this step. Use:

ansible -m ping all

to confirm that Ansible can connect. The result should look something like:

localhost | SUCCESS => {"changed": false, "ping": "pong"}

Task 2:

To run a one-off shell command on all Ansible hosts, try something like:

ansible -m shell -a 'free -m' all

This is a useful feature, but not nearly the most useful. Open your favorite
editor and create an Ansible playbook file called lab100.yml, with the
following content:

- hosts: all
  tasks:
    - name: hello copy
      copy:
        src: hello.txt
        dest: ~/lab100.txt

And of course, you must create the hello.txt file to copy:

echo 'Hello World!' > hello.txt

Now run:

ansible-playbook lab100.yml
And the lab100.txt file will now exist in your specified location. Running the
same command again will show no changes; removing lab100.txt and
running it again will cause the file to be recreated. Modifying hello.txt and
running the playbook again will cause your changes to be applied.
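If you want to see what a playbook would do before letting it change anything, Ansible also has a dry-run mode. A quick sketch (both flags are standard ansible-playbook options; this assumes the lab100.yml playbook from this task):

```shell
# --check reports what would change without applying anything;
# --diff additionally shows the file-level changes it would make.
ansible-playbook --check --diff lab100.yml
```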

Can you understand how Ansible is useful for managing hundreds or
thousands of devices?

Notes:

This is only one very small example usage of a complex and powerful tool.
Ansible is well worth understanding if you aspire to manage Linux
machines professionally.

Lab 65. Puppet

Lab Objective:

Learn how to use the Puppet configuration management tool.

Lab Purpose:

In this lab, you will practice using Puppet, which is an infrastructure
automation tool that makes life easier for professional system administrators.

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:
First, install Puppet:

sudo apt install puppet puppetmaster

Unlike Ansible’s default setup (see Lab 64), Puppet uses a client-server
model with a “puppetmaster” server from which nodes pull their manifests.
Thus, in a production environment, you would set up this puppetmaster
server and install the Puppet client on each node you want to manage.

Some configuration:

sudo puppet config set server localhost

sudo puppet config set certname localhost

sudo puppet agent -t

sudo puppet cert sign localhost

sudo systemctl restart puppetmaster

sudo systemctl start puppet

This is a somewhat non-trivial process to get both the Puppet client and
server talking to each other on the same machine. In a typical installation,
you would follow mostly the same steps, by requesting a certificate from
the client and then signing it from the server.

Task 2:

Now the fun begins. Open an editor, as root, and create
/etc/puppet/code/site.pp with the following contents:

node default {
    file { '/tmp/lab101.txt':
        content => 'Hello World!',
    }
}

In a typical environment, you’d have a more complex directory structure
with multiple modules and environments, and the Puppet agents would
automatically pick up changes every 30 minutes. Since this is a lab
environment and the directory structures aren’t fully set up (and you don’t
want to wait 30 minutes anyway), you must apply this manifest as a one-off:

sudo puppet apply site.pp

The lab101.txt file will now exist in /tmp. Running the same command again
will show no changes; removing lab101.txt and running it again will cause
the file to be recreated. Modifying the file and running it again will cause
your changes to be reverted.

Can you understand how Puppet is useful for managing hundreds or
thousands of devices?

Notes:

This is only one very small example usage of a complex and powerful tool.
Puppet is well worth understanding if you aspire to manage Linux machines
professionally.

Lab 66. Special Directories and Files

Lab Objective:

Learn how to use temporary files and directories, symbolic links, and
special permissions.

Lab Purpose:

In this lab, you will work with symbolic links, special file/directory
permissions, and temporary files and directories.

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:
A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:

Open the Terminal and run:

ls -ld /tmp

Notice the permissions:

drwxrwxrwt

The t at the end indicates what is called the sticky bit. This means that only
users/groups who own a given file may modify it. This is important on
world-writable directories, such as those holding temporary files, to prevent
users from messing with other users’ files.

Task 2:

Now run:

ln -sv $(mktemp) mytmp

ls -l mytmp

rm mytmp

What should have happened:

● mktemp created a randomly-named temporary file.
● Then you created a symbolic link to that file called mytmp.
● The ls output shows this link.

A symbolic link is simply a reference to an existing file. This way, you
may have multiple references to a single file—edit the original file,
and all of the references instantly update. This is useful for some
system configurations.
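The same ideas can be combined in a small, self-contained sketch (every path below is created by the snippet itself, so nothing on your system is touched):

```shell
# Create a private temporary directory and a file inside it:
tmpdir=$(mktemp -d)
echo 'scratch data' > "$tmpdir/work.txt"

# Create a symbolic link and inspect where it points:
ln -s "$tmpdir/work.txt" "$tmpdir/mylink"
readlink "$tmpdir/mylink"          # prints the target path

# Reading through the link reads the target file:
content=$(cat "$tmpdir/mylink")
echo "$content"                    # scratch data

# Remove the directory, the file, and the link together:
rm -r "$tmpdir"
```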
Notes:

In addition to /tmp, there is also /var/tmp, which is typically used for larger
and/or longer-lasting temporary files. /tmp is usually cleaned on every
reboot—the same is not necessarily true for /var/tmp.

Lab 67. Process Monitoring

Lab Objective:

Learn how to monitor processes and their resource usage.

Lab Purpose:

In this lab, you will learn how to monitor a number of metrics for both
foreground and background processes.

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:

If you weren’t careful to terminate all of your sleep processes from Lab 48,
you might have a stray or two running. Let’s find out:

ps -ef

Well, that’s a lot of output! You could maybe filter it through a grep sleep or
use ps without arguments, but there’s a better way:

pgrep -a sleep
pgrep supports regex and has a number of matching options to make sure
you find the right process(es).
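A few of those matching options are worth trying. This sketch starts its own sleep process so there is something to find, then cleans it up:

```shell
# Start a background process to search for:
sleep 300 &
spid=$!

pgrep -a sleep                # -a: list PIDs with their full command lines
count=$(pgrep -c sleep)       # -c: just count the matches
echo "$count"

# -x requires an exact process-name match ("sleep", not "sleepd"):
pgrep -x sleep >/dev/null && echo "found"

kill "$spid"                  # clean up
```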

Task 2:

A sleep process isn’t going to use many system resources, but plenty of
other processes might. It’s important to know how to monitor those.

Take a look at:

free -h

This gives you an output of system memory usage. The most generally
useful columns here are “total”, “used”, and “available”. The “available”
column offers an estimate of memory which is free for starting new
processes without swapping. That isn’t the same as “free” memory, which is
often low on Linux systems. This is because the OS can take advantage of
this memory for performance reasons, while still holding it as “available” for
user processes.

free has a -s N switch to automatically print the output every N seconds.
That’s useful in its own right, but wouldn’t you like to have that ability for
every command? Then watch might be for you!

watch -n 10 free -h

Finally, take a look at:

top

Running top opens a rather confusing output screen that’s continually
changing, but one of its most useful capabilities may be simply sorting
processes by CPU or memory usage. When in top, use the '<' and '>' keys
to shift the sorting column over to the metric you’re interested in. In the
upper panel, you can also see the system load, a count of running
processes, and some CPU and memory metrics. Finally, press 'q' to quit.

It is strongly recommended to spend some time reading the top man page,
as this is one of the most featureful system monitoring tools that is installed
by default.
Notes:​
uptime, despite the name, is another quick way to get system load.
However, top also shows this, as well as lots of other useful information.

Lab 68. Manage Printers and Printing

Lab Objective:

Learn how to manage print queues using CUPS.

Lab Purpose:

In this lab, you will learn about managing print queues and jobs using
CUPS and the LPD compatibility interface.

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:

To eliminate variability around printer hardware and all the problems it
brings (and in case you don’t have a printer handy), this lab will assume
printing to a PDF file instead. If you would like to print to a real printer,
ignore this step. Otherwise, run:

sudo apt install printer-driver-cups-pdf

(You may have to run apt-get update --fix-missing in order to install
cups-pdf correctly.)

Verify with:
lpstat -t

—you should see “device for PDF: cups-pdf /” as part of the output. Or, if
you plug in a real printer, it should be auto-detected. (If not, then it would be
easier to print to PDF for the purposes of this lab.)

Finally, check where your files should be printed:

grep ^Out /etc/cups/cups-pdf.conf

You may see something like $HOME/PDF as the directory where your
“printed” files will end up. You can change this if you like. If you do, finalize
it with:

sudo systemctl restart cups

Task 2:

You can probably guess what happens next:

echo "Hello World" | lpr

Locate the file in the directory you identified above (it may have an odd
name, like stdinBBBtuBPDF-MobB1.pdf), and open it in a PDF viewer or
web browser.

You should find a basic PDF file with your text printed in the top left corner.
If you didn’t change anything, this should be under PDF/ in your home
directory.

Task 3:

Now some other helpful commands:

lpr -T gibberish

Type some gibberish, followed by Ctrl+D to submit the job. (The -T flag
also influences the filename after printing.)

lpq

The previous command should tell you the job number (third column) of
the queued job. Then run:

lprm [number]

You canceled the job (if you do it quickly enough—there is a timeout), but it
will still show up as “completed” via:

lpstat -W completed

Notes:

lpadmin can be used to add and configure printers which are not
auto-detected. For example, you would add a cups-pdf printer manually
with:

sudo lpadmin -p cups-pdf -v cups-pdf:/ -E -P /usr/share/ppd/cups-pdf/CUPS-PDF_opt.ppd

Lab 69. Modify Process Execution Priorities

Lab Objective:

Learn how to manage the priority of started and running processes.

Lab Purpose:

In this lab, you will learn how to get a running task’s priority, change its
priority, and set the priority of a task to be run.

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine


Lab Walkthrough:

Task 1:

Open the Terminal and run the following commands:

nice -n 19 sleep 10000 &

ps -l

sudo renice -n -20 $(pgrep sleep)

ps -l

pkill sleep

What you’ve done here is start a sleep process with a “niceness” value of
19. This is the most “nice” a process can be—in other words, the least
demanding in terms of priority. Then you changed its priority, with renice, to
the “meanest” possible value of -20, or the most demanding in terms of
priority. The default priority for all user processes is 0, and you must be root
to run a process with higher priority. ps -l will list your running processes
with their priorities.
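You can confirm the effect with ps as well. This sketch starts a low-priority background job and reads its niceness back from the NI column:

```shell
# Start a background job with niceness 10 (no root needed to *lower* priority):
nice -n 10 sleep 300 &
npid=$!

# Print just the NI value for that PID; the '=' suppresses the column header:
niceval=$(ps -o ni= -p "$npid" | tr -d ' ')
echo "$niceval"    # 10

kill "$npid"       # clean up
```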

Notes:

You can also use top to list and sort processes by their priority values.

Lab 70. Network Troubleshooting

Lab Objective:

Learn how to troubleshoot networking on a Linux host.

Lab Purpose:

In this lab, you will learn how to troubleshoot a Linux host as a network
client, using various tools.
Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:

You’ve worked with ip in previous labs to get network information, but it’s
worth reiterating. It is a powerful tool with a lot of uses. Run the following
commands now to get a feel for what kinds of information you can gather
from it:

ip address

ip maddress

ip route

ip netconf

Task 2:

Install a couple of new tools:

sudo apt install iftop iperf

iftop is a tool to monitor bandwidth usage, while iperf is a tool to test how
much bandwidth you have (and, in the process, generates enough traffic
for iftop to measure). Start:

sudo iftop

in its own terminal tab/window and leave that running for now.
iperf requires a separate client and server to get any meaningful results.
Fortunately, a number of public servers exist — visit https://iperf.cc to
choose one near you, then run:

iperf -c [server] -p [port]

While you’re waiting, switch to the other tab to see what kind of bandwidth
patterns are shown by iftop. When it’s done, iperf will produce a report of
your machine’s bandwidth capabilities. Kill both processes before moving
on.

Task 3:

mtr is a handy diagnostic tool that combines many of the uses of ping and
traceroute. Its output is pretty self-explanatory:

mtr 101labs.net

As this is an interactive program, type ‘q’ or Ctrl+C to quit.

Task 4:

Finally, install the venerable whois:

sudo apt install whois

whois provides information about domain ownership and contact
information. For example:

whois 101labs.net

Often, but not always, the domain owner’s contact information is hidden
behind a registrar’s privacy feature. What may be more useful, however, is
the ability to discover an abuse contact in case of spam, phishing, or other
nefarious activity originating from a domain.

This works on IP addresses as well:

whois 184.168.221.43
Occasionally you may have a need to discover the owners and abuse
contacts behind an IP or block of IPs — this command will do that.

Lab 71. CPU Monitoring

Lab Objective:

Learn how to monitor CPU usage and metrics.

Lab Purpose:

In this lab you will monitor CPU load and other metrics over time using a
tool called sar.

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:

If you don’t already have sysstat, install it now:

sudo apt install sysstat

NOTE: If sysstat was not previously installed, you should now give it at
least 20-30 minutes to gather some data. This tool is most useful when it
can collect data over a longer period of time.

Task 2:

Open a terminal and run sar.


This is a basic output of some CPU metrics, listed by time period. “%user”
here indicates the amount of CPU time spent on userspace tasks.
“%system” indicates kernel space. “%iowait” indicates time waiting on disk
(or perhaps network) I/O. “%steal” should be zero, except on busy virtual
systems where the CPU may have to wait on another virtual CPU.

Not enough information for you? Well, you’re in luck, because sar can do a
lot more. Run:

sar -q -w -m CPU -u ALL -P ALL

The man pages explain a lot more, but this command will print information
about queues, load averages, context switches, power management…
basically everything you’d ever want to know about your CPU without
getting into other hardware—and yes, sar can monitor that too!
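A few more invocations worth knowing (these flags are standard in the sysstat package). Besides reading the collected history, sar can also sample live:

```shell
sar 2 5        # five CPU samples, two seconds apart, taken live
sar -r 2 5     # the same live sampling for memory utilization
sar -n DEV     # per-interface network statistics from today's collected data
```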

Notes:

Many other tools, such as top or uptime, can show you a real-time view of
CPU usage.

Lab 72. System Messaging and Logging

Lab Objective:

Learn how to view system messages and logs.

Lab Purpose:

In this lab, you will learn how to debug your system (or get a glimpse at
what’s going on) by viewing log messages.

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine


Lab Walkthrough:

Task 1:

Open your Terminal and run:

dmesg

The man (manual) page for dmesg states that it is used to “examine or
control the kernel ring buffer.” To put it simply, you can use dmesg to view
boot logs and other logs your distro considers important.

Task 2:

Run:

ls -aR /var/log

This is the main storage directory for all system logs (although, see the
Notes). If you peruse some of the log files, you may find some of the same
logs that were printed by dmesg. In particular, look at:

/var/log/syslog

Configuration of rsyslog is outside the scope of this lab, but through rsyslog
you can record logs of all severities, write them to different files, e-mail
them, or even print emergency alert messages to the console for all users
to see.

Task 3:

You can use tail to monitor logs in real-time. Run:

sudo tail -f /var/log/auth.log

Then, in a separate terminal window or tab, run:

sudo -i

three times. When prompted for your password, do each of the following
once:
●​ Type Ctrl+C to cancel the command.
●​ Intentionally mistype your password.
●​ Type your password correctly.

Check the tab where tail is running to see what is logged as a result of your
actions. This is a simple, but effective, method of keeping tabs on sudo
users on a system.

Notes:

In recent years, traditional logging tools have been supplanted in some
distros by systemd, which uses journalctl to manage logs. In practice, you
should familiarize yourself with both log management techniques.
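A few journalctl invocations to get you started (assuming a systemd-based distro; the unit name is an example and varies by distro):

```shell
journalctl -b        # all messages from the current boot
journalctl -u ssh    # messages from one systemd unit
journalctl -p err    # only messages of priority err or worse
journalctl -f        # follow new messages as they arrive, like tail -f
```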

Lab 73. UFW

Lab Objective:

Learn how to manage UFW, a more user-friendly way to configure iptables.

Lab Purpose:

In this lab, you will practice using UFW (Uncomplicated Firewall) as an
alternative way of managing Linux firewall rules.

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:

First, make sure UFW is enabled (by default, it is not):


sudo ufw enable

You can confirm the status with:

sudo ufw status verbose

Now take a peek at the actual firewall rules UFW has set up:

sudo iptables -nL

If you did Lab 65, you have a pretty good idea of the (sparse) rulesets that
your OS had set up by default. Enabling UFW alone made a lot of changes!
The rules themselves aren’t terribly complex, but the way they are
presented can make them difficult for a human to analyze directly. You can
see this output, slightly modified, with:

sudo ufw show raw

Task 2:

The syntax for adding rules via UFW is rather simple:

sudo ufw deny 12345

sudo ufw status

sudo iptables -nL | grep 12345

It’s not obvious from the last command, but UFW has added two DROP
rules (one for TCP, one for UDP) to the ufw-user-input chain.

You can also add rules for applications based on profiles, even if those
applications change their listening ports or if you don’t remember what
those ports are:

sudo ufw app info OpenSSH

sudo ufw allow OpenSSH

sudo ufw status


sudo iptables -nL | grep OpenSSH

Task 3:

Finally, clean up the extra rules you added with:

sudo ufw reset

If you don’t wish to keep UFW, disable it with:

sudo ufw disable

This will clear UFW’s rules, but not its chains. To do that, run:

sudo iptables -F

sudo iptables -X

Note that UFW only ever operates on the filter table, so you don’t need to
worry about any other rules or chains polluting the other tables.

Notes:

You can change some of UFW’s options, including default policies and
whether it manages the built-in chains (INPUT, OUTPUT, FORWARD) in
/etc/default/ufw. The application profiles, should you need to change their
ports, are stored in /etc/ufw/applications.d/.

Lab 74. Processes and Configuration

Lab Objective:

Learn how to manage running processes and program configurations.

Lab Purpose:

In this lab, you will learn how to configure various system processes and
how to manage and monitor those processes while they are running.
Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:

Open the Terminal and run:

ls /boot

The contents will vary, but generally what you will see is a Linux kernel file
(name starting with vmlinuz), initrd file, System.map, a GRUB configuration
directory, and sometimes a copy of the kernel configuration. Everything in
this directory, including the grub config file (/boot/grub/grub.cfg), is
related—as you might expect—to your system boot processes. This is one
directory you don’t want to mess with unless you know what you’re doing!

Task 2:

Next, run:

ls /etc

In Linux parlance, ‘etc’ is often glossed as ‘editable text configuration’
(historically, the name simply meant ‘et cetera’). /etc is the primary
directory tree for all system configuration — with few exceptions, nearly all
configurations are stored in, or linked from, /etc.

The few (non-user-specific) exceptions might be in /dev, /sys or /proc.
These are special, dynamically-generated filesystems. You may
occasionally read information from these filesystems, but most of the time
you shouldn’t be writing directly to them.

Task 3:

Run the following commands in your Terminal:


free -m

ps -ef

top

When using top, press q to quit.

These are three common tools to monitor processes and their resource
usage. (Got a runaway process that won’t quit via normal means? Try kill or
pkill.)

Notes:

/dev contains special files called device nodes, which are linked to device
drivers. /sys and /proc contain files through which the kernel communicates
with user space. You can read from them to get system information, but
rarely should you write to them directly, especially without understanding
what you are doing.

Lab 75. Ports and Services

Lab Objective:

Learn how to gather information on ports and services on a Kali Linux
system.

Lab Purpose:

In this lab, you will examine some common networking principles, namely
ports and services, using Linux command-line tools.

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine


Lab Walkthrough:

Task 1:

Open the terminal and run:

sudo ss -Saputn

This is a list of currently open connections and listening ports on your
machine. There’s a lot of output here, but you’re mostly interested in the
first column and the last three columns.

The first column lists what protocol each connection follows. Most likely this
will be either UDP or TCP, which you may recall are faster/connectionless
and more-reliable/connection-oriented, respectively. Columns 5 and 6 list
your IP:port and the remote host’s IP:port. TCP and UDP each have 65,535
ports available, some of which are assigned (either officially, by IANA, or
unofficially by common use) to various services.

Look at the port numbers and try to identify which services are using each
“server” port. The server port is important, because the client port (the port
from which a client first makes a connection) is usually dynamic. It may
help to look at column 7 to see what process on your machine is using the
connection.

Task 2:

Programs commonly use the /etc/services file to map service names to the
port(s) they should listen or connect on. The file is human-readable, so you
should take a look for yourself.

By looking through:

cat /etc/services

Can you identify what services your machine was listening for, or
connecting to, in Task 1?​
Can you identify which ports are used by common services, such as HTTP,
SSH, LDAP, SNMP, and SMTP?
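You can answer these questions programmatically as well. This sketch pulls the SSH entry out of /etc/services (the file is assumed to be present, as it is on virtually every Linux system):

```shell
# -w matches whole words only, so "ssh" will not also match service
# names that merely contain it:
grep -w ssh /etc/services

# Extract just the port/protocol field with awk:
port=$(awk '$1 == "ssh" && $2 ~ /\/tcp$/ {print $2; exit}' /etc/services)
echo "$port"    # 22/tcp on a standard system
```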

Task 3:
Another very common protocol is one that you won’t find any listening ports
for… because it doesn’t have ports! This is ICMP, a.k.a. “ping” (though in
reality, it is used for much more than pinging).

Open two terminal windows/tabs. In the first one, run:

sudo tcpdump -n -i any icmp

In the second one, run:

ping -c 5 localhost

Go back to the first tab, and you should see the requests and replies from
your pings—and maybe some other ICMP traffic too, if you’re lucky.

Notes:

If you couldn’t see any traffic or get any ICMP replies in the last task, a
firewall on your machine may be to blame. Blocking ICMP traffic
indiscriminately like that is a poor practice, so it’s left as an exercise for the
student to figure out how to open that up a little. (Hint: iptables)

Lab 76. Standard Syntax

Lab Objective:

Learn about standard shell scripting syntax specific to scripting setup.

Lab Purpose:

In this lab, you will learn about standard shell scripting syntax, such as
tests and control flow.

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:
A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:

Open your favorite editor via the terminal and create a file called lab76.sh:

nano lab76.sh

In the editor, add the following script:

#!/bin/sh

i=0

while test $i -lt 5

do

echo $i

i=$(($i+1))

done

for file in $(ls /etc)

do

test -r /etc/$file && echo "$file is readable by $USER."

test -s /etc/$file || echo "$file has size zero!"


if test -d /etc/$file

then

echo "$file is a directory."

fi

done

This highly contrived and not very useful script contains a number of
important features. The first is test, which can test a wide variety of
conditions depending on its flags and return a 0 or 1 (true or false) exit
status. That exit status can, in turn, be used by control flow syntax—if
statements, or for or while loops. A simpler kind of if statement is the && or
|| syntax, a boolean ‘and’ or ‘or’, respectively.

If you wish, you can test the script by executing it with:

chmod +x lab76.sh

./lab76.sh

You should see the numbers 0-4 printed out, followed by a large amount of
text output, depending on what’s in your /etc directory.

Notes:

This is a somewhat-rare situation where using /bin/sh and /bin/bash as the
interpreter may yield different results. Bash has a ((++var)) syntax for
variable incrementation, which is non-standard in the normal Bourne shell.
Similar for [[ ]] test syntax and the C-style for ((i=0; i<5; i++)) loops.
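Two of those Bash-only constructs in action, each run explicitly under bash so the result does not depend on what /bin/sh points to:

```shell
# [[ ]] supports glob pattern matching, which POSIX [ ] does not:
check=$(bash -c '[[ "lab76.sh" == *.sh ]] && echo match')
echo "$check"    # match

# C-style for loop:
seq=$(bash -c 'for ((i=0; i<3; i++)); do printf "%s" "$i"; done')
echo "$seq"      # 012
```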

Lab 77. Quoting and Escaping

Lab Objective:
Learn basic Linux commands and shell syntax.

Lab Purpose:

The Bash command shell is pre-installed on millions of Linux computers
around the world. Understanding how this shell works is a critical skill for
working with Linux.

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:

Open the Terminal application, then enter:

echo Hello World

The echo command simply prints out all of the “words,” or arguments, that
you give to it. This might not seem very useful, until...

Task 2:

Now enter the following four commands, in order:

quote='cat /etc/issue'

backquote=`cat /etc/issue`

echo $quote

echo $backquote

Here, quote and backquote are variables. In the first two lines, we set them
to the values 'cat /etc/issue' and `cat /etc/issue`, respectively. (We’ll get to
the difference between quotes and backquotes in a minute.) In the last two
lines, we reference the variables by putting $ before them, and print out
their contents.

Task 3:

To understand single quotes, double quotes, and backquotes, enter the


following:

hello="echo hello"

echo "$hello"

echo '$hello'

echo `$hello`

(Note: the quotes can look similar in print. The first command uses double

quotes and the last uses backquotes; type them exactly.)

Double quotes allow for variables to have their values referenced. Single
quotes are always literal, while backquotes execute the command
(including variable references) within those quotes.

Task 4:

Sometimes you will need to distinguish certain characters on the command


line, using what is called escaping. Try the following command sequence:

touch 'foo bar'

echo hello! foo bar

cat foo bar

Your output should be something like:

hello! foo bar

cat: foo: No such file or directory
cat: bar: No such file or directory


What’s going on here? Both echo and cat are interpreting the space
between foo and bar to indicate separate arguments rather than a single
filename, as intended. You could put 'foo bar' into quotes, or you could
escape the space by using a backslash, as follows:

echo hello!\ foo\ bar

cat foo\ bar

It is safe to assume that any character which may be interpreted directly by


the shell, when your intention is for it to be part of an argument, should be
escaped. Examples include but are not limited to whitespace, !, $, *, &, and
quotes.
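To see that all three quoting styles deliver the space as part of a single argument, printf '%s\n' (which prints each of its arguments on its own line) is handy:

```shell
# Each of these passes exactly one argument containing a space:
printf '%s\n' foo\ bar     # backslash escape
printf '%s\n' 'foo bar'    # single quotes
printf '%s\n' "foo bar"    # double quotes

# Without quoting, printf sees two arguments and prints two lines:
printf '%s\n' foo bar
```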

Notes:

In some cases, you will want to reference variables with the ${variable}
syntax. This is equivalent to $variable, but the reasons for choosing one
over the other are beyond the scope of this lab.

Backquotes also have an alternative syntax:

$(echo hello)

This syntax is generally preferred over backquotes. We only mention


backquotes to distinguish them from single quotes and ensure the
differences are clear.
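One concrete reason the $( ) form is preferred: it nests without extra escaping. A small sketch:

```shell
# Command substitution with $( ) nests cleanly:
outer=$(echo "inner says $(echo hi)")
echo "$outer"    # prints: inner says hi

# The backquote form of the same thing needs backslashes:
# outer=`echo "inner says \`echo hi\`"`
```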

Lab 78. Basic Regex

Lab Objective:

Learn how to use basic regular expressions for complex string pattern
matching.

Lab Purpose:

Regular expressions are a tool used commonly in Linux to match complex


patterns in strings and text files. Many tools support regular expressions.
With basic regex, some metacharacters require backslashes in front of
them.

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:

Open the Terminal application and create a text file first (copy and paste
won’t work for the below):

echo "abc123
456def
\fhtml!
Hello World!
password
01234567890
A1B2C3D4E5
6F7G8H9I0J
Quoth the raven 'Nevermore.'" > lab93.txt

Task 2:

Now, try to predict what the output of each command will be before running
it (see Answers 1-7 below):

grep [0-3] lab93.txt

grep [e68] lab93.txt

grep "^[0-9a-zA-Z]\+$" lab93.txt

grep "[0-9][A-Z]\?[0-9]" lab93.txt

grep "|\|" lab93.txt

grep -v ".*" lab93.txt

fgrep -v ".*" lab93.txt


Note:​
The last command is a bit of a curveball. fgrep (short for grep -F) treats an
expression as a literal string and does not use regex matching. So it is
excluding all matches containing a literal ‘.*’, which, of course, is none of
them.
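You can see the literal-versus-regex difference with a two-line file (the /tmp path here is arbitrary):

```shell
# Build a tiny test file: one line matches 'a.c' literally,
# the other only matches it as a regex (. = any character).
printf 'a.c\nabc\n' > /tmp/fgrep_demo.txt

grep    'a.c' /tmp/fgrep_demo.txt   # regex: prints both lines
grep -F 'a.c' /tmp/fgrep_demo.txt   # literal: prints only a.c
```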

Answer 1:

abc123 01234567890 A1B2C3D4E5 6F7G8H9I0J

Answer 2:

456def Hello World! 01234567890 6F7G8H9I0J Quoth the raven


'Nevermore.'

Answer 3:

abc123 456def password 01234567890 A1B2C3D4E5 6F7G8H9I0J

Answer 4:

abc123 456def 01234567890 A1B2C3D4E5 6F7G8H9I0J

Answer 5:

\x1fhtml!

Quoth the raven 'Nevermore.'

Answer 6:

(No output)

Answer 7:

abc123 456def \x1fhtml! Hello World!

password 01234567890 A1B2C3D4E5 6F7G8H9I0J Quoth the raven


'Nevermore.'

Notes:
Regexr is a great resource for practicing your regular expressions. See
man 7 regex for more information on regex as supported by your distro.

Lab 79. Extended Regex

Lab Objective:

Learn how to use extended regular expressions for complex string pattern
matching.

Lab Purpose:

Regular expressions are a tool used commonly in Linux to match complex


patterns in strings and text files. Many tools support regular expressions.
With extended regex, metacharacters do not require backslashes.

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

Task 1:

Open the Terminal application and create a text file first (copy and paste
won’t work for the below):

echo "abc123
456def
\fhtml!
Hello World!
password
01234567890
A1B2C3D4E5
6F7G8H9I0J
Quoth the raven 'Nevermore.'" > lab94.txt

Task 2:

Now, try to predict what the output of each command will be before running
it (see Answers 1-7 below):
egrep [0-3] lab94.txt

egrep [e68] lab94.txt

egrep "^[0-9a-zA-Z]+$" lab94.txt

egrep "[0-9][A-Z]?[0-9]" lab94.txt

egrep "|\|" lab94.txt

egrep -v ".*" lab94.txt

egrep "[*ls]{2}" lab94.txt

egrep (short for grep -E) uses extended regex, instead of basic regex,
which is the default.
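The practical difference shows up with interval quantifiers like {2}: extended regex uses them bare, while basic regex needs backslashes. A small sketch (the /tmp path is arbitrary):

```shell
echo password > /tmp/ere_demo.txt

grep -E 's{2}'   /tmp/ere_demo.txt   # ERE quantifier: matches the "ss"
grep    's\{2\}' /tmp/ere_demo.txt   # the same match in basic regex
```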

Task 3:

Finally, apply some of what you’ve learned about regex to these sed
substitution commands. Again, what will the output be (see Answers 8-10
below)"

sed -E 's/[A-Z]/B/g' lab94.txt

sed -E 's/^[a-z]+$/ [omitted]/' lab94.txt

sed -E 's/([a-z])\1/**/g' lab94.txt

Answer 1:

abc123 01234567890 A1B2C3D4E5 6F7G8H9I0J

Answer 2:

456def Hello World!

01234567890 6F7G8H9I0J Quoth the raven 'Nevermore.'

Answer 3:
abc123 456def password 01234567890 A1B2C3D4E5 6F7G8H9I0J

Answer 4:

abc123 456def 01234567890 A1B2C3D4E5 6F7G8H9I0J

Answer 5:

\x1fhtml!

Quoth the raven 'Nevermore.'

Answer 6: (No output)

Answer 7:

Hello World! password

Answer 8:

abc123 456def \x1fhtml!

Bello Borld!

password 01234567890 B1B2B3B4B5 6B7B8B9B0B Buoth the raven


'Bevermore.'

Answer 9:

abc123 456def \x1fhtml!

Hello World!

[omitted] 01234567890 A1B2C3D4E5 6F7G8H9I0J Quoth the


raven 'Nevermore.'

Answer 10:

abc123 456def \x1fhtml!

He**o World!
pa**word 01234567890 A1B2C3D4E5 6F7G8H9I0J Quoth the raven
'Nevermore.'

Notes:

Regexr is a great resource for practicing your regular expressions. See


man 7 regex for more information on regex as supported by your distro.

Lab 80. Resetting the Root Password

Lab Objective:

Learn how to reset a forgotten root password on Linux.

Lab Purpose:

In this lab, you will practice changing boot parameters for the purpose of
resetting the root password.

Lab Tool:

Kali Linux (or another distro of your choice)

Lab Topology:

A single Linux machine, or virtual machine

Lab Walkthrough:

IMPORTANT: This is one lab where the exact steps may vary greatly
between distros, your particular bootloader configuration, and even different
versions of Kali. If you run into problems, Google may be enlightening.

Task 1:

Reboot until you reach the GRUB boot menu. If you cannot see the GRUB
boot menu, follow your distro-specific instructions to modify your boot
configuration so that the menu is visible.
Task 2:​
Hit ‘e’ to edit the Kali Linux boot commands (this should be the first menu
item). Scroll down to the line beginning with linux. On that line, replace the
ro option with rw init=/bin/bash, then hit Ctrl+X to boot.

Task 3:​
You should boot into a root prompt with your root file system mounted.
Now simply run:

passwd

to change the root password, then run:

reboot -f

Notes:

It is not possible to recover a lost disk encryption password (unless that


password happens to be very weak). It is possible to recover a lost root
password stored on an encrypted disk drive, but the steps are outside the
scope of this lab.

Lab 81 – Use Streams, Pipes, and Redirects: Redirection and File


Descriptors

Lab Objective:
Learn how to manage input and output in the Linux shell by using
redirection and file descriptors. This lab will introduce you to the concepts
of stream redirection and how to use file descriptors to control how data is
read from and written to files.

Lab Purpose:
In this lab, you will explore how to manipulate the standard input, output,
and error streams using redirection and pipes. You will also learn how to
handle file descriptors to redirect or suppress output and control where
input comes from in your shell commands.

Lab Tool:
Kali Linux (or any Linux distribution)
Terminal (Bash or any other shell)

Lab Topology:
Linux System: A single system or virtual machine running a Linux
distribution (e.g., Kali Linux, Ubuntu).
Shell: The Bash shell or any other command-line shell available.

Lab Walkthrough:

Task 1: Understanding File Descriptors


In Unix-like systems, processes communicate with each other using
streams. Every process has three standard file descriptors:
●​ Standard Input (stdin): The input stream (file descriptor 0).
●​ Standard Output (stdout): The output stream (file descriptor 1).
●​ Standard Error (stderr): The error stream (file descriptor 2).
These file descriptors are used to handle input and output for commands in
the shell.
Example:
# The default for commands is to read from stdin (keyboard) and write to
stdout (terminal).
echo "Hello, World!" # Sends output to stdout (the terminal)
Question: What would happen if a command did not have access to the
stdin, stdout, or stderr file descriptors?
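To make the separate streams visible, the sketch below (file names under /tmp are arbitrary) sends fd 1 and fd 2 of one command to different files:

```shell
# /etc/passwd exists and /no/such/file does not, so ls produces
# both normal output (fd 1) and an error message (fd 2).
ls /etc/passwd /no/such/file 1> /tmp/out.txt 2> /tmp/err.txt

cat /tmp/out.txt   # contains: /etc/passwd
cat /tmp/err.txt   # contains the "No such file or directory" error
```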

Task 2: Redirecting Output


●​ Redirecting Standard Output to a File: You can use the > symbol to
redirect the standard output of a command to a file:
echo "Hello, World!" > output.txt
This will create a file called output.txt (or overwrite it if it already exists)
and write the string "Hello, World!" to it.
●​ Appending Output to a File: If you want to append output to an
existing file rather than overwriting it, use >>:
echo "Hello, again!" >> output.txt
●​ Redirecting Standard Error to a File: Similarly, you can redirect the
standard error using 2>:
ls non_existing_file 2> error_log.txt
This will redirect the error message (if the file does not exist) to the
error_log.txt file.
●​ Redirecting Both Standard Output and Standard Error: To redirect
both stdout and stderr to the same file, use &> (a Bash shorthand) or the
portable form > file 2>&1:
ls existing_file non_existing_file &> output_and_error.txt
This command writes both the output of the ls command and any
errors (if any) to the output_and_error.txt file.
Question: What’s the difference between > and >> when redirecting output
to a file?
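The difference is easy to verify (the file name is arbitrary):

```shell
echo first  >  /tmp/append_demo.txt   # > truncates, then writes
echo second >> /tmp/append_demo.txt   # >> appends
cat /tmp/append_demo.txt              # two lines: first, second

echo third  >  /tmp/append_demo.txt   # > truncates again
cat /tmp/append_demo.txt              # one line: third
```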

Task 3: Redirecting Input


●​ Redirecting Input from a File: You can redirect the input of a
command using the < symbol. This is particularly useful for commands
that normally require input from the terminal:
sort < input.txt
This will take the contents of input.txt as the input for the sort
command.
●​ Using Here Documents (Heredocs): A here document allows you
to provide multiline input directly within a script or command. It’s useful for
feeding a block of text into a command:
cat << EOF
This is a
multiline
input
EOF
In this case, the contents between << EOF and EOF are passed as
input to the cat command.
Question: How does redirecting input with < differ from a typical interactive
session where you type commands?
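A heredoc behaves exactly like file input: in the sketch below, sort never touches the keyboard; it reads the lines between the delimiters.

```shell
# The three lines between << EOF and EOF become sort's stdin.
sort << EOF
banana
apple
cherry
EOF
# Output: apple, banana, cherry (one per line)
```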

Task 4: Piping Output Between Commands


●​ Using Pipes (|): A pipe connects the output of one command directly
to the input of another command. This allows you to chain commands
together:
ls -l | grep "txt"
In this example, the output of ls -l is passed to grep, which filters for
lines containing "txt".
●​ Using Pipes with Multiple Commands: You can chain multiple
commands using pipes:
cat file.txt | sort | uniq
This command will:
●​ Read the content of file.txt.
●​ Sort the lines.
●​ Remove any duplicate lines.
Question: How does piping improve the efficiency of command execution,
especially in scenarios where you need to manipulate large datasets?
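As a more realistic pipeline, counting the login shells declared in /etc/passwd chains three commands with no temporary files:

```shell
# Field 7 of /etc/passwd is the login shell; sort groups identical
# shells together so that uniq -c can count each one.
cut -d: -f7 /etc/passwd | sort | uniq -c
```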

Task 5: Redirecting Standard Output and Error Using File Descriptors


●​ Redirecting Both Output and Errors to Different Files: You can
redirect both stdout and stderr to different files:
command > output.txt 2> error.txt
●​ Combining Standard Output and Standard Error: You can redirect
both stdout and stderr to the same file using &> or the portable form
> file 2>&1:
command &> combined_output.txt
●​ Redirecting to /dev/null (Suppress Output): If you want to discard
the output of a command (i.e., not write it to a file or the terminal), you can
redirect it to /dev/null:
command > /dev/null 2>&1
This will send both the standard output and error to /dev/null,
essentially ignoring both.
Question: Why might you want to redirect output to /dev/null? Can you
think of situations where this could be useful?
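A common pattern: you only care whether a command succeeded, not what it printed. A sketch:

```shell
# All output is discarded, but the exit status survives,
# so the command still works as a condition.
if ls /no/such/path > /dev/null 2>&1
then
    echo "path exists"
else
    echo "path is missing"
fi
```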

Task 6: Working with File Descriptors in Scripts


●​ Using File Descriptors in Scripts: You can explicitly specify file
descriptors when redirecting input and output in scripts. For example, in a
script, you could redirect both stdout and stderr to separate files:
#!/bin/bash
ls -l /some_directory > output.log 2> error.log
●​ Using exec to Redirect All Output in a Script: The exec command
can be used to redirect all output (including stderr and stdout) within a
script:
exec > output.log 2>&1
This command will redirect both standard output and standard error for
all subsequent commands in the script to output.log.
Question: How can redirecting output to a file be useful in automated
scripts or logging processes?
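The exec form is handy for logging whole scripts. In the sketch below it runs inside a subshell so the redirection does not affect your interactive session (the log path is arbitrary):

```shell
# Everything after exec, including the ls error, lands in the log.
(
    exec > /tmp/script_run.log 2>&1
    echo "starting"
    ls /no/such/path
    echo "done"
)
cat /tmp/script_run.log   # the echoes and the error, in order
```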
Task 7: Combining Redirects and Pipes
●​ Using Redirects and Pipes Together: You can combine redirection
and pipes to both process data and save it to a file. For example:
cat input.txt | sort > sorted_output.txt
This command reads from input.txt, sorts the lines, and writes the
sorted lines to sorted_output.txt.
●​ Using Pipes and Redirecting Errors: A pipe carries only stdout, so
redirect stderr on the command that produces it:
cat non_existing_file 2> error_log.txt | sort
This command attempts to read from non_existing_file; since the file
doesn’t exist, cat’s error message is captured in error_log.txt while
sort receives empty input.
Question: How do combining redirection and pipes provide flexibility when
processing data from multiple sources?

Notes:
●​ Redirection is a powerful tool in the shell, enabling you to control
where input and output come from and where they go. This is crucial for
automating tasks and managing files.
●​ File Descriptors allow processes to interact with the system in a
structured way. Understanding them will enable you to manage
input/output more efficiently.
●​ Pipes allow you to chain commands together, making it easier to
perform complex tasks in a single line of code. This is a hallmark of
powerful shell scripting.
●​ Standard Input, Output, and Error: Understanding these concepts is
key to managing shell processes effectively. It helps in debugging,
logging, and automating tasks.

Lab 82 – Create Partitions and Filesystems

Lab Objective:
Learn how to create partitions, format them with filesystems, and mount
them on a Linux system. In this lab, you will work with the fdisk, mkfs, and
mount commands to create partitions, set up filesystems, and manage
storage devices.

Lab Purpose:
In this lab, you will gain hands-on experience in partitioning a disk, creating
filesystems, and mounting partitions on Linux. Understanding how to
manage disk partitions is critical for configuring storage, setting up new
systems, and managing data.

Lab Tool:
Kali Linux (or any Linux distribution)
Terminal (Bash or any other shell)
A disk or virtual disk (for testing)

Lab Topology:
Linux System: A single system or virtual machine running a Linux
distribution (e.g., Kali Linux, Ubuntu).
Disk: You can either use a virtual machine with an unpartitioned disk
or a physical disk.

Lab Walkthrough:

Task 1: Check Available Disks


Before creating partitions, you should first check the available disks and
their current partitions.
●​ List Available Disks: Use the lsblk command to list all available
block devices:
lsblk
This will show a list of devices such as sda, sdb, and sr0, along with
their partitions (sda1, sda2, etc.). If you're using a virtual disk, you
might see a device like /dev/vda.
●​ Verify Disk Space: You can also use fdisk -l to list the details of all
available disks and partitions:
sudo fdisk -l
This command will show detailed information about each disk and its
partitions.
Question: What does the lsblk output tell you about the structure of your
disks and partitions?

Task 2: Create a Partition Using fdisk


To create partitions, we use the fdisk utility, which is a command-line tool
for partitioning disks.
●​ Start fdisk: Select the disk you want to partition. For example, if you
want to partition /dev/sdb, use:
sudo fdisk /dev/sdb
●​ Create a New Partition:
●​ Type n to create a new partition.
●​ Select the partition type (primary or extended) and partition number.
Press Enter to accept the default.
●​ Specify the first and last sector for the partition (or press Enter to
accept the default, which will use the remaining available space).
●​ Write Changes: After creating the partition, write the changes to the
disk by typing w.
●​ Verify the Partition: To verify that the partition was created, use lsblk
or sudo fdisk -l again to check the new partition.
Question: What do the partition numbers (e.g., sdb1, sdb2) represent in
the context of a disk? Why is it important to properly assign partition
numbers?

Task 3: Format the Partition with a Filesystem


Once the partition is created, you need to format it with a filesystem.
●​ Format with ext4: The ext4 filesystem is commonly used on Linux
systems. You can format the new partition with mkfs.ext4:
sudo mkfs.ext4 /dev/sdb1
This will format the partition /dev/sdb1 with the ext4 filesystem.
●​ Check the Filesystem: After formatting, you can use the lsblk -f or
blkid command to verify that the partition now has a filesystem:
sudo lsblk -f
The output should show that /dev/sdb1 is formatted with ext4.
Question: What is the significance of choosing the correct filesystem type
(like ext4, xfs, or btrfs) for different use cases?

Task 4: Mount the Partition


To use the new partition, you need to mount it to a directory.
●​ Create a Mount Point: First, create a directory where you can mount
the new partition. This is known as a mount point:
sudo mkdir /mnt/mydata
●​ Mount the Partition: Now, mount the newly formatted partition to the
directory:
sudo mount /dev/sdb1 /mnt/mydata
●​ Verify the Mount: You can verify that the partition is mounted by
using the df command:
df -h
This will show a list of mounted filesystems and their disk usage. Look
for /dev/sdb1 mounted on /mnt/mydata.
Question: What happens if you try to mount a partition to a directory that
already contains files? What are some strategies to avoid data loss?

Task 5: Make the Mount Permanent


By default, the partition will only be mounted until the system is rebooted.
To make the mount permanent, you need to modify the /etc/fstab file.
●​ Edit /etc/fstab: Open the /etc/fstab file in a text editor like nano:
sudo nano /etc/fstab
●​ Add the Mount Entry: Add the following line to the file, adjusting it to
match your partition:
/dev/sdb1 /mnt/mydata ext4 defaults 0 2
This line tells the system to mount /dev/sdb1 at /mnt/mydata using the
ext4 filesystem with the default options every time the system starts.
●​ Test the Changes: After saving the file, unmount the partition (sudo
umount /mnt/mydata), then ask the system to mount everything listed in
fstab:
sudo mount -a
Verify the mount again with df -h.
Question: Why is the /etc/fstab file important in managing disk mounts on
a Linux system? What could happen if this file is misconfigured?
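One way to make fstab entries more robust is to identify the partition by UUID rather than device name, since names like /dev/sdb1 can change between boots. The UUID below is a placeholder; find the real one with sudo blkid /dev/sdb1.

```
# /etc/fstab entry keyed by UUID instead of device name
# (replace the placeholder with the UUID reported by blkid):
UUID=<uuid-from-blkid>  /mnt/mydata  ext4  defaults  0  2
```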

Task 6: Resize Partitions (Optional)


If you need to resize a partition (e.g., expand or shrink it), you can use tools
like resize2fs (for ext4) and fdisk. Note: Resizing partitions may result in
data loss if not done correctly, so make sure to back up your data first.
●​ Resize the Filesystem: To resize an ext4 filesystem, use resize2fs:
sudo resize2fs /dev/sdb1
When growing, enlarge the partition first (using fdisk or another
partitioning tool) and then run resize2fs; when shrinking, shrink the
filesystem first and the partition second.
Question: What are the risks associated with resizing partitions, and how
can you mitigate them?

Task 7: Unmount the Partition


When you're done working with the partition, you can unmount it using the
umount command:
●​ Unmount the Partition:
sudo umount /mnt/mydata
●​ Verify the Unmount: Use df -h or lsblk to ensure the partition is no
longer mounted.
Question: Why is it important to unmount a filesystem before
disconnecting a disk or shutting down a system?

Notes:
●​ Partitioning a disk divides it into logical sections that can be used
independently.
●​ Filesystems define how data is stored and retrieved on a disk.
Common filesystems include ext4, xfs, and btrfs.
●​ Mounting a partition allows the system to use it. It connects the
partition to a directory in the file system hierarchy.
●​ fstab ensures partitions are automatically mounted at boot time.

Lab 83 – Control Mounting and Unmounting of Filesystems: Manual


Mounting

Lab Objective:
Learn how to manually mount and unmount filesystems in Linux. You will
understand how to use the mount and umount commands to manage
filesystems, and how to work with various mount options.

Lab Purpose:
In this lab, you will gain hands-on experience with manually mounting and
unmounting filesystems. Understanding how to control mounting is
essential for managing storage devices and configuring Linux systems for
various use cases.

Lab Tool:
Kali Linux (or any Linux distribution)
Terminal (Bash or any other shell)
A disk or virtual disk (for testing purposes)

Lab Topology:
Linux System: A single system or virtual machine running a Linux
distribution (e.g., Kali Linux, Ubuntu).
Disk/Partition: You can either use a physical disk or a virtual disk (for
example, /dev/sdb1) for testing purposes.

Lab Walkthrough:
Task 1: List Available Disks and Partitions
Before mounting a partition, it's important to identify available disks and
partitions.
●​ List Disks and Partitions Using lsblk: The lsblk command shows all
available block devices on your system:
lsblk
This will list devices such as /dev/sda, /dev/sdb, etc., and their
partitions (e.g., /dev/sdb1, /dev/sdb2).
●​ List Detailed Information Using fdisk: Use fdisk -l to list detailed
partition information:
sudo fdisk -l
This will display detailed information about the disk partitions, including
their filesystem type, partition size, and partition table type.
Question: What information can you gather from the fdisk -l command, and
how does it help in identifying partitions for mounting?

Task 2: Manually Mount a Filesystem


In Linux, you can manually mount a filesystem using the mount command.
This allows you to mount a specific partition to a directory (mount point) of
your choice.
●​ Create a Mount Point: A mount point is a directory in the file system
where you can access the mounted partition. First, create a mount point:
sudo mkdir /mnt/mydisk
●​ Mount the Partition: To mount the partition (e.g., /dev/sdb1), use the
mount command:
sudo mount /dev/sdb1 /mnt/mydisk
This command will mount the partition /dev/sdb1 to the /mnt/mydisk
directory. You can now access the contents of /dev/sdb1 through
/mnt/mydisk.
●​ Verify the Mount: After mounting the partition, use the df -h
command to verify that the partition is mounted:
df -h
This will display all mounted filesystems along with their disk usage.
Look for /dev/sdb1 in the list and check the mount point.
Question: What does the df -h command show about mounted filesystems,
and how can it be helpful in system management?

Task 3: Use Mount Options


The mount command allows you to specify various options when mounting
a filesystem. These options control the behavior of the filesystem after it’s
mounted.
●​ Mount with Read-Only Option: To mount a partition in read-only
mode (so that no changes can be made to the filesystem), use the -o
option:
sudo mount -o ro /dev/sdb1 /mnt/mydisk
This mounts /dev/sdb1 as read-only, which can be useful if you just
need to access files without modifying them.
●​ Mount with Specific Filesystem Type: If you want to specify a
filesystem type (e.g., ext4, ntfs, vfat), you can use the -t option:
sudo mount -t ext4 /dev/sdb1 /mnt/mydisk
This ensures that the partition is mounted with the ext4 filesystem,
even if it's not automatically detected.
●​ Mount with User Permissions: For filesystems that do not store Unix
ownership (such as vfat or ntfs), you can set the owner and group of all
files at mount time using the uid and gid options:
sudo mount -o uid=1000,gid=1000 /dev/sdb1 /mnt/mydisk
This mounts the filesystem with every file owned by the user with UID
1000 and the group with GID 1000. (Filesystems like ext4, which store
ownership on disk, reject these options.)
Question: Why would you use options like -o ro or -t ext4 when mounting a
filesystem? What are some situations where these options are helpful?

Task 4: Verify the Filesystem


Once the partition is mounted, you can verify the filesystem using the lsblk
or mount command.
●​ Using lsblk: You can use lsblk to see the mount points of all
partitions:
lsblk
This will show the mount points of each partition, allowing you to
confirm that /dev/sdb1 is mounted at /mnt/mydisk.
●​ Using mount: The mount command without arguments will display all
currently mounted filesystems:
mount
This shows the device names, mount points, filesystem types, and
mount options in use.
Question: How do the lsblk and mount commands help in verifying
mounted filesystems, and what additional information do they provide?
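Both commands ultimately reflect the kernel's mount table, which you can also read directly from /proc/self/mounts. A sketch that extracts one entry:

```shell
# Print device, filesystem type, and mount options for the root
# filesystem. Fields in /proc/self/mounts: device, mountpoint,
# fstype, options, dump, pass.
awk '$2 == "/" {print $1, $3, $4}' /proc/self/mounts
```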

Task 5: Unmount a Filesystem


To unmount a filesystem (i.e., remove it from the filesystem tree), you can
use the umount command.
●​ Unmount the Partition: To unmount the partition that you mounted
earlier, use the umount command followed by the mount point or device
name:
sudo umount /mnt/mydisk
●​ Verify the Unmount: After unmounting, use df -h or lsblk to ensure
that the partition is no longer mounted:
df -h
You should no longer see /dev/sdb1 listed in the output.
Question: Why is it important to unmount a filesystem before physically
disconnecting the disk or shutting down the system?

Notes:
●​ Mounting a filesystem connects it to the Linux file hierarchy, allowing
access to its files.
●​ The **mount** command is versatile and supports many options for
customizing how filesystems are mounted.
●​ Unmounting a filesystem is essential to avoid data corruption,
especially when disconnecting drives or shutting down.
●​ Force unmounting should be used with caution, as it can result in loss
of data if the filesystem is in use.
●​ NTFS filesystems are often used with Windows systems, and special
handling may be required to mount them on Linux.

Lab 84 – Manage User and Group Accounts and Related System Files:
Special Purpose Accounts

Lab Objective:
Learn how to manage special-purpose user and group accounts in Linux.
These accounts typically include system accounts, service accounts, and
accounts used by various system processes and applications.

Lab Purpose:
In this lab, you will explore special-purpose accounts such as those used
by system services, processes, and applications. Understanding how to
manage these accounts is important for system administration, security,
and maintaining proper user permissions in a Linux environment.
Lab Tool:
Kali Linux (or any Linux distribution)
Terminal (Bash or any other shell)

Lab Topology:
Linux System: A single system or virtual machine running a Linux
distribution (e.g., Kali Linux, Ubuntu).
Terminal: Used for running commands and managing user and group
accounts.

Lab Walkthrough:

Task 1: Review System Accounts


Special-purpose accounts are typically used for system services and
processes and are created by the system during installation or when
certain packages are installed.
●​ View the /etc/passwd File: The /etc/passwd file contains information
about all user accounts on the system, including special-purpose
accounts. Use cat to view the file:
cat /etc/passwd
You’ll see a list of users, including system users. System accounts usually
have lower user IDs (UIDs), which are typically below 1000. Here’s an
example of what you might see:
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
●​ System Accounts: Accounts such as daemon, bin, and mail are
examples of system accounts created for system processes.
●​ Service Accounts: Some applications create their own user
accounts, for example, mysql for MySQL or www-data for the
Apache web server.
Question: What is the purpose of special-purpose accounts in the
/etc/passwd file, and why are their UIDs typically lower than those of
regular users?

Task 2: Identify Special Purpose Accounts


●​ Check for System Accounts: Special-purpose accounts usually
have UIDs below 1000. To quickly identify them, use the awk command to
filter the /etc/passwd file for users with UIDs less than 1000:
awk -F: '$3 < 1000 {print $1}' /etc/passwd
This will return a list of system accounts such as daemon, bin, and
www-data.
●​ View the Purpose of Each Account: To understand the purpose of
each account, you can look up their details in the relevant manual pages
or documentation. For example:
www-data: This is typically the user account used by the Apache
HTTP server.
mysql: This is the user account used by the MySQL database
server.
Additionally, special-purpose accounts may have their home
directories set to a system path (e.g., /usr/sbin or /bin) and may have
/usr/sbin/nologin as their login shell, preventing login to the system.
Question: What is the difference between a user account with a UID below
1000 and a regular user account with a UID above 1000?
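The UID boundary can be checked in one pass; 1000 is the usual threshold on Debian-family systems (see UID_MIN in /etc/login.defs for your distro's actual value):

```shell
# Label each account as system or regular based on its UID (field 3).
# (The nobody account, UID 65534, is a special case.)
awk -F: '$3 < 1000  {print "system:  " $1}
         $3 >= 1000 {print "regular: " $1}' /etc/passwd
```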

Task 3: Create Special Purpose Accounts


In some cases, you may need to create special-purpose user accounts,
especially when setting up new system services or applications.
●​ Create a New Special-Purpose Account: Use the useradd
command to create a special-purpose user account. For example, to
create a user for a custom service:
sudo useradd -r -s /usr/sbin/nologin customservice
The -r option creates a system account.
The -s /usr/sbin/nologin option prevents the user from logging in
interactively.
This account will typically not have a home directory, but you can
create one using the -m option if needed.
●​ Verify the New Account: To confirm the new account has been
created, check the /etc/passwd file:
cat /etc/passwd | grep customservice
The output should look similar to the following (the exact UID varies;
the -r option allocates one in the system range, below 1000):
customservice:x:998:998::/nonexistent:/usr/sbin/nologin
Question: Why do you use the -r option when creating special-purpose
accounts, and why is the -s /usr/sbin/nologin option commonly used?

Task 4: Manage Special Purpose Group Accounts


System services may also have associated group accounts that control
access to system resources. These groups allow users in those groups to
have permissions on certain files or resources.
●​ List All Groups: View the list of groups on your system by reading
the /etc/group file:
cat /etc/group
Special-purpose groups often have names corresponding to system
services, such as mysql, www-data, or syslog.
●​ Check Group Members: You can check the members of a specific
group by using the getent command:
getent group mysql
This will display the group information for mysql and any users that
belong to it.
●​ Create a Special Purpose Group: To create a new group for a
system service, use the groupadd command:
sudo groupadd customservicegroup
●​ Add a User to a Group: You can add a special-purpose user to a
specific group using the usermod command:
sudo usermod -aG customservicegroup customservice
Question: Why would you create separate groups for system services, and
how do groups help in controlling access to system resources?
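Two quick ways to inspect group membership from the command line (shown here for the root user, which exists on every system):

```shell
# All groups the root user belongs to, by name:
id -nG root

# The raw /etc/group entry for the root group
# (format: name:password:GID:member_list):
getent group root
```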

Task 5: Modify or Delete Special Purpose Accounts


It’s important to manage and modify special-purpose accounts when
services are no longer needed, or when their configuration changes.
●​ Modify a Special-Purpose Account: You can modify a user’s
properties using the usermod command. For example, to change the
home directory of the customservice account:
sudo usermod -d /var/customservice customservice
●​ Delete a Special-Purpose Account: If you need to delete a
special-purpose user account, use the userdel command:
sudo userdel customservice
To also remove the user’s home directory (if one exists), use the
-r option:
sudo userdel -r customservice
●​ Delete a Group: To delete a group, use the groupdel command:
sudo groupdel customservicegroup
Question: What are the potential consequences of deleting a
special-purpose account, and what precautions should you take before
doing so?

Task 6: Check Permissions for Special-Purpose Accounts


After creating special-purpose accounts, it is crucial to verify that they have
the correct permissions to access the necessary files and resources, while
ensuring they do not have unnecessary access.
●​ Check File Permissions: Use ls -l to check the ownership and
permissions of files used by special-purpose accounts. For example:
ls -l /var/www
●​ Change Ownership: You can change the ownership of files and
directories to the appropriate service account using chown. For example,
to set the ownership of web files to the www-data user and group:
sudo chown -R www-data:www-data /var/www
Question: Why is it important to properly set file ownership and
permissions for special-purpose accounts, and how does it impact system
security?
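Beyond ls -l, `stat` and `find` give a quick read-only audit; a sketch using this lab's example path (adjust /var/www and www-data for your system):

```shell
# Print owner:group, octal mode, and name for each entry
stat -c '%U:%G %a %n' /var/www/* 2>/dev/null || echo "/var/www is empty or absent"
# Flag anything under the tree NOT owned by the service account
find /var/www ! -user www-data -print 2>/dev/null | head
```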

Notes:
●​ System Accounts are typically used for background services and
system processes, and they usually have UIDs below 1000.
●​ Service Accounts are associated with specific software or services
and often have restricted login capabilities (e.g., /usr/sbin/nologin as the
shell).
●​ Groups control access to resources and permissions, and managing
groups ensures proper isolation of system services.
●​ Managing special-purpose accounts involves not only creation and
deletion but also ensuring appropriate permissions for system security
and stability.

Lab 85 – System Logging: Rsyslog

Lab Objective:
Learn how to configure and manage system logs using Rsyslog in Linux.
You will understand how Rsyslog works, configure logging for various
services, and troubleshoot by analyzing logs.

Lab Purpose:
This lab focuses on Rsyslog, a system logging daemon that provides a
centralized logging mechanism for various system processes. By the end
of this lab, you will be able to configure log sources, set up log forwarding,
and perform basic troubleshooting using log files.

Lab Tool:
Kali Linux (or any Linux distribution)
Terminal (Bash or any other shell)

Lab Topology:
Linux System: A single system or virtual machine running a Linux
distribution (e.g., Kali Linux, Ubuntu).
Rsyslog Daemon: The Rsyslog service should be installed and
running by default in most Linux distributions.

Lab Walkthrough:

Task 1: Verify Rsyslog Installation


Rsyslog is typically pre-installed on most Linux systems, but it is important
to verify that it is running and active.
●​ Check if Rsyslog is Installed: To check if Rsyslog is installed, run
the following command:
dpkg -l | grep rsyslog
or if you are using a RedHat-based distribution:
rpm -qa | grep rsyslog
●​ Check if Rsyslog is Running: You can check if the Rsyslog service
is active and running by using the following systemd command:
sudo systemctl status rsyslog
If Rsyslog is running, you will see output indicating that the service is
active (running). If it’s not running, start it with:
sudo systemctl start rsyslog
Question: What steps would you take if you discovered that Rsyslog was
not installed or running on your system?

Task 2: Understand the Rsyslog Configuration Files


Rsyslog uses configuration files to define the rules for logging. The main
configuration file is usually located at /etc/rsyslog.conf.
●​ Open the Rsyslog Configuration File: You can view the
configuration by opening the /etc/rsyslog.conf file in a text editor:
sudo nano /etc/rsyslog.conf
This file contains global configurations as well as specific rules for log
handling.
●​ Configuration File Structure:
●​ Modules: Rsyslog modules are loaded at the beginning of the file.
For example:
module(load="imuxsock")   # loads the Unix socket module for local log collection
module(load="imklog")     # loads the kernel log module
●​ Rules: After loading the necessary modules, you will see rule-based
entries specifying what logs should be collected, where to store
them, and whether to forward them to remote servers.
Question: What is the purpose of modules in the Rsyslog configuration file,
and how do they impact log collection?

Task 3: Configure Local Log Storage


By default, Rsyslog stores log messages in the /var/log/ directory. You can
modify the configuration file to change this location or create custom
logging rules.
●​ Configure Rsyslog to Store Logs in a Specific Directory: Open
the /etc/rsyslog.conf file and modify or add a rule to log messages to a
specific directory. For example:
*.* /var/log/mycustomlog.log
This will log all messages (from any facility and priority) to a new file
called mycustomlog.log.
●​ Save and Apply the Configuration: After modifying the
configuration file, save your changes and reload the Rsyslog service to
apply the new configuration:
sudo systemctl restart rsyslog
●​ Verify the Log File: To verify that logs are being written to the new
location, use tail to view the contents of the log file:
sudo tail -f /var/log/mycustomlog.log
Question: How does Rsyslog decide which log messages go to which log
file? Explain the syntax used to configure the log file location in the
Rsyslog configuration file.
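Each rule line pairs a selector (`facility.priority`) with an action, usually a file path; the selector matches the named priority and anything more severe, and a leading `-` on the path makes writes asynchronous. A hedged sketch in the style of the Debian defaults:

```
auth,authpriv.*      /var/log/auth.log     # all authentication messages, any priority
mail.*               -/var/log/mail.log    # "-" = asynchronous (buffered) writes
*.=debug;auth.none   /var/log/debug        # "=" matches only that priority; auth excluded
```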

Task 4: Configure Rsyslog to Forward Logs to a Remote Server


Rsyslog supports forwarding log messages to a remote server for
centralized log management.
●​ Set Up Remote Log Forwarding: To forward logs to a remote
server, open the /etc/rsyslog.conf file and add a line at the end specifying
the remote server’s IP address and the protocol (TCP or UDP):
*.* @remote-server-ip:514
Or for TCP (which is more reliable than UDP):
*.* @@remote-server-ip:514
@ indicates UDP, while @@ indicates TCP.
●​ Save and Restart Rsyslog: After configuring the remote server, save
the file and restart the Rsyslog service:
sudo systemctl restart rsyslog
●​ Verify the Forwarding: To verify that the logs are being forwarded,
you can monitor the log output on the remote server or check the log file
on the local system to ensure there are no errors.
Question: What are the advantages of using remote log forwarding, and
what would happen if the remote server is unreachable?
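One way to exercise the whole pipeline is to inject a test message with `logger` and then search for it; a sketch (on RedHat-based systems the local file is typically /var/log/messages instead):

```shell
# Inject a test message through the local syslog socket
logger -p user.info "rsyslog forwarding test" || echo "no local syslog daemon"
# Look for it locally (and on the remote server, if forwarding is active)
grep "rsyslog forwarding test" /var/log/syslog 2>/dev/null | tail -n 1
```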

Task 5: Troubleshoot Rsyslog and Analyze Logs


Once Rsyslog is properly configured, you can begin troubleshooting issues
using logs and system messages.
●​ View System Logs: To view system logs, use the journalctl
command or examine specific log files under /var/log/, such as:
sudo journalctl
sudo tail -f /var/log/syslog
sudo tail -f /var/log/messages
●​ Filter Logs: You can filter logs based on time, process, or specific
keywords. For example, to view only logs related to the sshd service:
sudo journalctl -u sshd
●​ Analyze Log Files for Errors: Use tools like grep to search for
specific entries, such as error messages or warnings. For example:
sudo grep "error" /var/log/syslog
Question: How can filtering logs with tools like journalctl and grep help with
troubleshooting? Give an example of a log entry you might search for.

Task 6: Log Rotation and Retention


Managing log file sizes is crucial to prevent logs from filling up the disk.
Rsyslog relies on logrotate to manage log rotation and retention.
●​ Configure Log Rotation: Log rotation is managed by the logrotate
utility. To check the log rotation configuration for Rsyslog logs, open the
/etc/logrotate.d/rsyslog file:
cat /etc/logrotate.d/rsyslog
●​ Modify Log Rotation Settings: You can modify settings like the
frequency of rotation and the number of old log files to keep. For example,
to rotate logs weekly and keep four weeks of logs, you can adjust the
settings in the logrotate configuration file:
/var/log/syslog {
    weekly
    rotate 4
    compress
    delaycompress
}
●​ Test Log Rotation: You can manually force log rotation by running:
sudo logrotate -f /etc/logrotate.conf
Question: Why is log rotation important, and what might happen if log files
are not rotated regularly?

Notes:
●​ Rsyslog is a powerful and flexible tool for logging system messages. It
allows you to control how logs are stored and forwarded, making it an
essential tool for system administrators.
●​ Log rotation ensures that log files do not consume excessive disk
space by regularly rotating them and keeping a limited number of old
logs.
●​ Logs are crucial for system monitoring and troubleshooting, and
Rsyslog provides the central mechanism for managing these logs in
Linux systems.

GITHUB LABS

Lab 1: Introduction to Git and GitHub

Lab 2: Remote Repositories

Lab 3: Branching and GitHub Flow

Lab 4: Git vs. GitHub and Advanced Concepts

Lab 5: README and Markdown Formatting

Lab 6: GitHub Desktop and Collaboration

Lab 7: Repository Settings and Branch Protections

Lab 8: Issues and Discussions

Lab 9: Pull Requests and Code Review


Lab 10: Code Review and Feedback

Lab 11: Repository Templates and Forking

Lab 12: Branch Management

Lab 13: CODEOWNERS and Reviews

Lab 14: Advanced GitHub Features

Lab 15: Using Draft Pull Requests, Gists and Commenting

Lab 1 – Introduction to Git and GitHub

Lab Objective:
Learn the fundamentals of Git and GitHub, including how to initialize a
repository, stage changes, commit files, and understand version control
basics.

Lab Purpose:
In this lab, you will create a local Git repository, add files to it, make
commits, and explore the basic workflow of version control using Git. This
foundational knowledge is essential for collaborative development and
code management.

Lab Tool:
Git (latest stable version), GitHub account, and a terminal or command-line
interface.
Lab Topology:
A local machine with Git installed. Optionally, access to GitHub via a web
browser or GitHub Desktop.

Lab Walkthrough:

Task 1:
Initialize a local Git repository in a new directory. Use the following
commands:​
mkdir my-first-repo​
cd my-first-repo​
git init

Question: What does the `git init` command do? What changes occur in
your directory after running it?

Task 2:
Create a new file in your repository and add some content to it. For
example:​
echo "Hello Git!" > hello.txt

Then, use the following commands to add and commit the file:​
git add hello.txt​
git commit -m "Initial commit with hello.txt"

Question: What is the difference between `git add` and `git commit`? Why
is it important to stage files before committing?
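The staging area can be watched directly with `git status --short` in a throwaway repository; a minimal sketch (the user name and email are placeholders):

```shell
# Throwaway repo to watch the staging area (safe to delete afterwards)
set -e
cd "$(mktemp -d)" && git init -q
git config user.email you@example.com && git config user.name "You"
echo "Hello Git!" > hello.txt
git status --short    # "?? hello.txt" - untracked
git add hello.txt
git status --short    # "A  hello.txt" - staged, not yet committed
git commit -qm "Initial commit with hello.txt"
git status --short    # no output - working tree clean
```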

Notes:
- Initializing a repository with `git init` allows you to begin versioning files
locally.
- Commits record snapshots of your file changes and are used for tracking
project history.
Lab 2 – Remote Repositories

Lab Objective:
Understand how to connect a local repository to GitHub and explore the
GitHub user interface to navigate repositories, branches, and commits.

Lab Purpose:
In this lab, you will learn how to push a local Git repository to GitHub,
create and link a remote, and use the GitHub UI to view project data such
as branches and commit history.

Lab Tool:
Git (CLI), GitHub.com (web interface).

Lab Topology:
A local Git repository and a GitHub account with a new or existing
repository.

Lab Walkthrough:

Task 1:
Initialize a local Git repository and push it to GitHub.

Steps:​
1. Create a directory and initialize Git:​
mkdir my-project && cd my-project​
git init​
2. Create a file, add and commit:​
echo "# My Project" > README.md​
git add .​
git commit -m "Initial commit"​
3. Create a repository on GitHub.​
4. Add the remote and push:​
git remote add origin https://github.com/your-username/my-project.git​
git push -u origin main

Question: What command links your local repository to a GitHub remote?
What does the `-u` option do in `git push`?

Task 2:
Explore the GitHub UI.

Go to your GitHub repository in a browser and explore the following:

- Repository overview (files and README)
- Branch dropdown to view and switch between branches
- Commit history under the "Commits" tab or "Insights" → "Network"

Question: What information can you view about each commit? How does
the GitHub UI help you track branch activity and changes over time?

Notes:
Connecting a local Git repository to a remote GitHub repository enables
online collaboration and version tracking. The GitHub UI is a powerful tool
to visualize and manage repositories, branches, and development
progress.

Task 3:
Check the status and log of your repository to view tracked changes and
commit history:​
git status​
git log

Question: What information does `git status` provide? What details can you
observe in the `git log` output?

Task 4:
Create a second file and repeat the process of adding and committing.
Then, make a change to the first file and commit again.​
echo "Another file" > second.txt​
git add second.txt​
git commit -m "Add second file"​
echo "Update to hello.txt" >> hello.txt​
git add hello.txt​
git commit -m "Update hello.txt with more text"

Question: How does Git track changes between commits? How can you
view what was changed using Git?
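The commands below, run in a disposable repo (user name/email are placeholders), show three common ways Git exposes what changed between commits:

```shell
set -e
cd "$(mktemp -d)" && git init -q
git config user.email you@example.com && git config user.name "You"
echo "first" > hello.txt
git add hello.txt && git commit -qm "add hello"
echo "second" >> hello.txt
git add hello.txt && git commit -qm "update hello"
git log --oneline        # one line per commit, newest first
git diff HEAD~1 HEAD     # exact lines changed by the latest commit
git show --stat HEAD     # commit metadata plus the files it touched
```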

Task 5:
Explore the .git directory and discuss its contents. Use:​
ls -a​
cd .git​
ls

Question: What kind of data is stored in the `.git` folder? Why should this
folder not be modified manually?

Notes:
Git is a powerful version control system that enables efficient collaboration
and code tracking. Understanding the basic Git workflow—including
initializing repositories, committing changes, and viewing history—is
essential for any developer or IT professional. This lab provides hands-on
experience with core Git commands to build a solid foundation for more
advanced version control techniques.

Lab 3 – Branching and GitHub Flow

Lab Objective:
Learn how to create and switch branches using Git CLI and demonstrate
the GitHub Flow model including branching, pull requests (PR), and
merging.
Lab Purpose:
In this lab, you will use Git on the command line to create and manage
branches, simulate collaborative development using GitHub Flow, and
understand how pull requests facilitate code review and merging in a team
environment.

Lab Tool:
Git (latest stable version), GitHub account, web browser, and a terminal or
command-line interface.

Lab Topology:
A local Git repository linked to a remote GitHub repository.

Lab Walkthrough:

Task 1:
Ensure your local repository is connected to GitHub. Check the current
branch:

git branch​
Question: What is your current branch? What does the asterisk (*)
indicate?

Task 2:
Create and switch to a new branch for a feature:

git checkout -b feature-xyz

Make changes and commit:

echo "New feature line" > feature.txt
git add feature.txt
git commit -m "Add new feature line"

Question: Why do we use a separate branch for new features or bug fixes?

Task 3:
Push your branch to GitHub:
git push -u origin feature-xyz​
Navigate to GitHub and open a Pull Request (PR) for your `feature-xyz`
branch.

Question: What is a Pull Request? How does it help in team collaboration?

Task 4:
Review and merge the PR on GitHub. Once merged, pull the updated main
branch:

git checkout main
git pull origin main

Question: What happens when you merge a PR on GitHub? How does
your local repository get updated with the merge?

Task 5:
Clean up the feature branch locally and remotely:

git branch -d feature-xyz
git push origin --delete feature-xyz

Question: Why is it important to delete old branches after merging?

Notes:
The GitHub Flow is a lightweight, branch-based workflow that supports
collaboration and code review. It encourages the use of short-lived
branches and pull requests for structured development. Mastering this flow
is essential for modern software teams and continuous integration
practices.

Lab 4 – Git vs. GitHub and Advanced Concepts

Lab Objective:
Understand the difference between Git and GitHub, and explore advanced
Git concepts such as stashing, rebasing, tagging, and gitignore.
Lab Purpose:
This lab will help you distinguish between Git as a local version control
system and GitHub as a remote repository hosting platform. You will also
explore several advanced Git features that improve your workflow and
project management.

Lab Tool:
Git (latest stable version), GitHub account, and a terminal or command-line
interface.

Lab Topology:
A local Git repository connected to a GitHub remote repository.

Lab Walkthrough:

Task 1:
Compare Git and GitHub. Research and write down the differences
between the two.

Question: How does Git manage your code locally? What does GitHub add
to the development process that Git alone cannot provide?

Task 2:
Use git stash to temporarily save changes:​
echo "Temporary change" >> temp.txt​
git add temp.txt​
git stash

List and apply your stashed changes:

git stash list
git stash apply

Question: When would using `git stash` be helpful in your workflow?


Task 3:
Rebase your feature branch onto the main branch:​
git checkout feature-branch​
git rebase main

Question: How does rebasing differ from merging? What are the benefits
and risks of using rebase?
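A disposable repo makes the contrast concrete: `git merge` would add a merge commit joining two histories, while `git rebase` replays the feature commits on top of the trunk, leaving a linear history. A sketch (branch names and user config are placeholders; the default branch may be main or master depending on your Git configuration):

```shell
set -e
cd "$(mktemp -d)" && git init -q
git config user.email you@example.com && git config user.name "You"
echo base > a.txt && git add a.txt && git commit -qm "base"
trunk=$(git symbolic-ref --short HEAD)   # "main" or "master"
git checkout -qb feature
echo feature > b.txt && git add b.txt && git commit -qm "feature work"
git checkout -q "$trunk"
echo more >> a.txt && git add a.txt && git commit -qm "trunk moved on"
git checkout -q feature
git rebase -q "$trunk"                   # replay "feature work" on top of the trunk tip
git log --oneline --graph                # linear history: no merge commit
```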

Task 4:
Create a tag for a release version:​
git tag -a v1.0 -m "Release version 1.0"​
git push origin v1.0

Question: What is the purpose of tagging in Git? How can it be useful in
software versioning?

Task 5:
Create and use a .gitignore file to exclude specific files or directories:​
echo "*.log" > .gitignore​
echo "node_modules/" >> .gitignore​
git add .gitignore​
git commit -m "Add .gitignore file"

Question: Why is it important to use a .gitignore file in your projects?
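The effect is easy to verify in a throwaway repo: ignored files never appear as untracked, and `git check-ignore` reports which rule matched. A sketch:

```shell
set -e
cd "$(mktemp -d)" && git init -q
printf '%s\n' '*.log' 'node_modules/' > .gitignore
touch debug.log app.js
git status --short              # .gitignore and app.js appear; debug.log does not
git check-ignore -v debug.log   # shows the .gitignore line that matched
```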

Notes:
Git is a powerful local version control tool, while GitHub offers remote
repository management and collaboration features. Understanding the
distinction and mastering advanced Git functionalities will significantly
improve your productivity and code management. These skills are essential
for real-world software development workflows.
Lab 5 – README and Markdown Formatting

Lab Objective:
Learn how to create and format a README file using Markdown to
effectively document your GitHub repository.

Lab Purpose:
In this lab, you will add a README file to your repository and use
Markdown syntax to format text, insert links, create lists, and embed
images. This helps communicate your project purpose, usage, and setup
instructions clearly to others.

Lab Tool:
Text editor (VS Code, nano, or any Markdown editor), Git, GitHub.

Lab Topology:
A local Git repository with a GitHub remote.

Lab Walkthrough:

Task 1:
Create a new file called README.md in your repository root:​
touch README.md

Question: What is the purpose of a README file in a GitHub repository?

Task 2:
Add a project title and description using Markdown headers and text
formatting. Example content:​
# Project Title​
This is a sample project to demonstrate how to use Markdown in a
README file.

Question: How do Markdown headers work (e.g., #, ##, ###)? What are the
differences in size and hierarchy?
Task 3:
Create a usage section with bullet points and code blocks. Example:


# Usage​
- Clone the repository​
- Navigate to the directory​
- Run the application​

```​
git clone https://github.com/your-username/your-repo.git​
cd your-repo​
npm start​
```

Question: Why are code blocks useful in a README?

Task 4:
Add links and images using Markdown. Example:​
# Resources​
[GitHub Docs](https://docs.github.com)​
![Sample Image](https://via.placeholder.com/150)

Question: How do you format a link vs. an image in Markdown?

Task 5:
Commit and push your README.md file:​
git add README.md​
git commit -m "Add README with Markdown formatting"​
git push origin main

Question: How does having a well-structured README improve
collaboration and project adoption?

Notes:
README files are often the first thing users see when they visit your
repository. Markdown makes it easy to structure content, add visuals, and
improve readability. A professional README reflects the quality and
usability of your project.

Lab 6 – GitHub Desktop and Collaboration

Lab Objective:
Learn how to use GitHub Desktop to clone repositories, commit changes,
push updates, and manage collaborators with appropriate permissions.

Lab Purpose:
This lab introduces GitHub Desktop as a visual tool for working with GitHub
repositories. You will learn how to perform essential Git operations and
manage collaborators on a GitHub repository to facilitate teamwork and
secure project access.

Lab Tool:
GitHub Desktop, GitHub account, web browser.

Lab Topology:
A remote GitHub repository cloned, edited, and managed using GitHub
Desktop.

Lab Walkthrough:

Task 1:
Install GitHub Desktop from https://desktop.github.com and log in using
your GitHub credentials.

Question: What are the benefits of using GitHub Desktop over the Git
command line for beginners?
Task 2:
Clone an existing GitHub repository using GitHub Desktop:

- Open GitHub Desktop
- Go to File > Clone repository
- Select the desired repository and location
- Click "Clone"

Question: Where is the local copy stored, and how can you confirm it’s
cloned correctly?

Task 3:
Make a small change in the local repo (e.g., add text to a README file).
Save the file and commit the changes using GitHub Desktop.

- View the file change in GitHub Desktop
- Enter a commit message
- Click "Commit to main"
- Click "Push origin" to upload changes

Question: What does committing and pushing achieve? Where can the
changes be seen online?

Task 4:
Add a collaborator to your repository:

- Visit your repository on GitHub
- Go to Settings > Collaborators
- Add a GitHub username
- Assign the appropriate permissions (Read, Write, Admin)

Question: Why is it important to manage collaborator permissions
correctly? What are the risks of giving admin access unnecessarily?

Task 5:
Have your collaborator clone the repo via GitHub Desktop, make a change,
and push it. Observe the collaboration process.
Question: What challenges might arise in collaboration, and how does
GitHub help resolve them (e.g., file history, PRs, merge conflict resolution)?

Notes:
GitHub Desktop is a beginner-friendly tool for managing Git repositories. It
simplifies the process of cloning, committing, and pushing code. Adding
collaborators and controlling permissions is essential for teamwork,
ensuring security and effective collaboration.

Lab 7 – Repository Settings and Branch Protections

Lab Objective:
Learn how to customize GitHub repository settings and apply basic branch
protection rules to improve project organization and maintain code quality.

Lab Purpose:
This lab guides you through updating repository metadata (like description
and topics) and configuring basic branch protection rules such as requiring
pull request reviews. These practices help teams collaborate more
effectively and protect critical branches from unintended changes.

Lab Tool:
Web browser, GitHub account.

Lab Topology:
A GitHub-hosted repository with branch protection settings enabled via the
web interface.
Lab Walkthrough:

Task 1:
Navigate to your repository on GitHub. In the top section of your repo page,
click the "Settings" tab.

Update the following metadata:

- Description (e.g., "This project demonstrates basic GitHub configuration")
- Topics (e.g., "git", "github", "collaboration")

Question: How do topics help with repository discoverability? What kind of
description is most useful for potential contributors?

Task 2:
Scroll down in Settings to "Features" and ensure features like Issues,
Discussions, and Wiki are enabled or disabled according to your team’s
needs.

Question: Why might you disable certain features like the Wiki or
Discussions?

Task 3:
Click on "Branches" in the left sidebar. Under "Branch protection rules",
click "Add rule". Configure the following:

- Branch name pattern: `main`
- Require a pull request before merging
- Require approvals (at least 1 reviewer)
- Require status checks to pass before merging (if CI is set up)
- Optional: Restrict who can push to matching branches

Question: How does enabling branch protection help maintain code
quality? What are the trade-offs of making rules too strict?

Task 4:
Create a test branch and attempt to push changes directly to `main`.
Observe what happens if branch protections are enabled.
Question: What message or restriction do you encounter? How does this
reinforce collaborative workflows like pull requests?

Notes:
Repository settings are crucial for discoverability, security, and team
workflows. Branch protections reduce errors and enforce quality standards
by ensuring changes go through peer review or testing. These features
promote consistency and collaboration across teams.

Lab 8 – Issues and Discussions

Lab Objective:
Learn how to create, manage, and convert GitHub issues and discussions
to facilitate communication, task tracking, and team collaboration.

Lab Purpose:
This lab introduces GitHub Issues and Discussions as tools for project
planning and community engagement. You will learn how to use labels,
assignees, and pinning to organize issues, and how to convert between
issues and discussions to support different types of collaboration.

Lab Tool:
Web browser, GitHub account.

Lab Topology:
A GitHub-hosted repository with Issues and Discussions features enabled.

Lab Walkthrough:

Task 1:
Navigate to your GitHub repository and go to the "Issues" tab.
Click "New Issue" and create an issue with a meaningful title and
description. Assign the issue to yourself and apply relevant labels (e.g.,
bug, enhancement, question).

Question: How do labels help organize issues in a busy repository? What
types of issues are best suited for label use?

Task 2:
After creating your issue, click the three dots (•••) at the top-right and select
"Pin issue".

Question: When might you want to pin an issue? How does pinning impact
visibility for team members or contributors?

Task 3:
Click the "Convert to Discussion" option on your issue. Choose the
appropriate category (e.g., Q&A, ideas).

Question: What is the main difference between an issue and a discussion?
Why might you prefer one over the other?

Task 4:
Go to the Discussions tab, find your newly converted discussion, and
interact with it (e.g., reply, react, or close it). Then click "Convert back to
Issue".

Question: What happens to the content and structure when converting back
to an issue? Are any changes lost or preserved?

Notes:
Issues and Discussions support collaborative project management in
different ways. Issues are best for task tracking and bugs, while
Discussions are better for brainstorming and community input. GitHub’s
conversion features let you switch formats as needs evolve.
Lab 9 – Pull Requests and Code Review

Lab Objective:
Learn how to link pull requests to issues, conduct code reviews, and
understand draft pull requests and pull request statuses.

Lab Purpose:
This lab guides you through the GitHub workflow of creating a pull request
(PR) linked to an issue, requesting a review, and navigating draft PRs and
status indicators. This process is essential for collaborative development
and ensuring code quality.

Lab Tool:
Web browser, GitHub account.

Lab Topology:
A GitHub-hosted repository with at least one open issue and a newly
created branch to submit pull requests from.

Lab Walkthrough:

Task 1:
Ensure there is an open issue in your repository. Create a new branch
locally or on GitHub, make a small change (e.g., update a file), and push
the branch.

Open a pull request and link it to the issue by including "Closes
#<issue-number>" in the description.

Question: What happens to the issue when the PR is merged? Why is it
useful to link PRs to issues?

Task 2:
Request a review from a collaborator. The reviewer should leave
comments, approve the PR, or request changes.
Respond to the comments and, if necessary, make further commits to the
branch.

Question: How does the code review process contribute to code quality and
team learning?

Task 3:
Create a new pull request but select "Create draft pull request".

Observe the "Draft" label and how it limits merging until marked as "Ready
for review".

Question: When is it appropriate to use a draft PR? What are the benefits
of starting with a draft rather than a regular PR?

Task 4:
Explore PR status checks (e.g., "All checks have passed", "Waiting for
review"). Enable or simulate a CI check if applicable.

Question: How do status checks affect the ability to merge a PR? Why are
automated checks important in collaborative projects?

Notes:
Pull requests are central to GitHub collaboration, allowing developers to
propose changes, request feedback, and merge contributions safely.
Linking PRs to issues ensures traceability, and using draft PRs and review
workflows maintains quality across teams.

Lab 10 – Code Review and Feedback

Lab Objective:
Learn how to conduct detailed code reviews by commenting on specific
lines of code within a pull request on GitHub.
Lab Purpose:
This lab teaches how to provide constructive feedback by reviewing
individual lines of code in a pull request. Reviewing code at this granular
level improves quality and facilitates team communication during
development.

Lab Tool:
Web browser, GitHub account.

Lab Topology:
A GitHub-hosted repository with an open pull request containing code
changes to review.

Lab Walkthrough:

Task 1:
Open an existing pull request in your repository. Navigate to the "Files
changed" tab to view the modified files.

Click the "+" icon next to a line of code to add an in-line comment. Provide
feedback or ask a question.

Question: How does commenting on specific lines differ from general PR
comments? Why might this be more helpful to developers?

Task 2:
Add multiple comments across different lines in the pull request, then click
"Submit review" and select either "Comment", "Approve", or "Request
changes".

Question: When would you choose each review option? How does each
one affect the PR and the contributor?

Task 3:
Respond to a review comment from another collaborator. Discuss how this
feedback cycle leads to better code and understanding.
Question: How should developers handle feedback they disagree with?
What’s the best approach to maintaining positive team communication
during reviews?

Notes:
In-line code comments provide targeted feedback that helps developers
understand exactly what needs improvement. GitHub’s code review tools
support professional collaboration, encourage discussion, and ensure
higher code quality through collective input.

Lab 11 – Repository Templates and Forking

Lab Objective:
Learn how to use repository templates to start new projects and explore
GitHub Gist and forking for sharing and reusing code.

Lab Purpose:
This lab introduces repository templates and gist as tools to promote
re-usability and quick project setup. You’ll create a new project from a
template and explore how forking gist enables collaboration and
modification of shared code snippets.

Lab Tool:
Web browser, GitHub account.

Lab Topology:
A GitHub-hosted repository marked as a template, and the GitHub Gist
platform.
Lab Walkthrough:

Task 1:
Navigate to a repository marked as a template on GitHub (or mark one of
your own repos as a template).

Click the "Use this template" button to create a new repository based on it.
Customize the repo name and settings during creation.

Question: What are the benefits of using a repository template over starting
from scratch? How might this be useful for on-boarding team members?

Task 2:
Visit https://gist.github.com/ and create a new public or secret gist with at
least one file. Add a brief description and meaningful content.

Question: What types of code or content are best suited for gists versus
full repositories?

Task 3:
Search for another user’s public gist and click "Fork" to create your own
copy. Modify the content in your forked version and save it.

Question: How does forking a gist support collaboration or
experimentation? How is it different from forking a full repository?

Notes:
Repository templates provide consistent project structures and are ideal for
reusable codebases or starter kits. Gist offer lightweight sharing for code
snippets or single files, with forking enabling easy modification. Both tools
enhance development efficiency and sharing.
Lab 12 – Branch Management

Lab Objective:
Learn how to manage branches by creating a new branch, switching
contexts, and adding files in GitHub or via the Git CLI.

Lab Purpose:
This lab focuses on branch management as a key part of working with Git.
By creating a new branch and adding files to it, developers can safely test
and build features in isolation from the main codebase.

Lab Tool:
GitHub.com, GitHub Desktop, or Git CLI.

Lab Topology:
A GitHub repository with write access for performing branch operations.

Lab Walkthrough:

Task 1:
Navigate to your GitHub repository or open it in GitHub Desktop/CLI.
Create a new branch called "feature/new-content".

Using the CLI:

git checkout -b feature/new-content

Using GitHub.com: Navigate to the branch dropdown and type a new
branch name.

Question: Why is it important to use branches for new features or updates
instead of working directly on the main branch?
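The branch-creation step can be tried end to end in a scratch repository. This is a self-contained sketch (the temporary directory and the "demo" commit identity are placeholders for your own setup); note that on Git 2.23+, git switch -c feature/new-content is the newer equivalent of git checkout -b:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name demo
git config user.email demo@example.com
git commit -q --allow-empty -m "initial commit"

# Create the new branch and switch to it in one step
git checkout -b feature/new-content

# Confirm which branch you are now on
git branch --show-current    # prints: feature/new-content
```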

Task 2:
Add a new file (e.g., README-update.md or test.txt) to the new branch.
Commit and push the changes to GitHub.
Using the CLI:

echo "Test content" > test.txt
git add test.txt
git commit -m "Added test file to new branch"
git push origin feature/new-content

Question: How does Git track changes per branch, and what happens
when you switch between branches with different files?
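You can observe the answer directly: Git stores a snapshot per branch, and checking out a branch rewrites the working tree to match that branch's snapshot. A minimal local demonstration (temporary directory and "demo" identity are illustrative; no push is performed):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name demo
git config user.email demo@example.com
base=$(git symbolic-ref --short HEAD)   # main or master, depending on Git version
git commit -q --allow-empty -m "initial commit"

# Commit a file that exists only on the feature branch
git checkout -q -b feature/new-content
echo "Test content" > test.txt
git add test.txt
git commit -qm "Added test file to new branch"
ls    # test.txt is present

# Switching back removes test.txt from the working tree, because the
# base branch's snapshot does not contain it (the file is safe in the branch)
git checkout -q "$base"
ls    # test.txt is gone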

Notes:
Branching allows multiple streams of development to occur without
interfering with each other. This enables safer experimentation and easier
code reviews before merging into production branches.

Lab 13 – CODEOWNERS and Reviews

Lab Objective:
Understand how to enforce code reviews by configuring a CODEOWNERS
file in a GitHub repository.

Lab Purpose:
This lab introduces the CODEOWNERS file, which designates users or
teams responsible for reviewing code in specific parts of a repository. It
automates review requests and enforces collaborative workflows.

Lab Tool:
GitHub.com or GitHub CLI, text editor.

Lab Topology:
A GitHub repository with at least two collaborators or teams, and branch
protection rules enabled.
Lab Walkthrough:

Task 1:
Create a `.github` directory in the root of your repository if it doesn’t exist
already.

Inside this directory, create a file named `CODEOWNERS`.

Add the following content as an example:

# This assigns ownership of all files to the user or team
* @your-username

# Specific directory or file owners
/docs/ @docs-team
*.js @frontend-team

Commit and push the file to the default branch.

Question: How does GitHub use the CODEOWNERS file to automatically
request reviews? What happens if a PR modifies files owned by different
users or teams?
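Task 1 can also be scripted. The sketch below builds the file in a throwaway repository; the @-handles are placeholders you would replace with real users or teams, and the final push is shown as a comment because it requires a remote:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name demo
git config user.email demo@example.com

# Create .github/CODEOWNERS with example ownership rules
mkdir -p .github
cat > .github/CODEOWNERS <<'EOF'
# This assigns ownership of all files to the user or team
* @your-username

# Specific directory or file owners
/docs/ @docs-team
*.js @frontend-team
EOF

git add .github/CODEOWNERS
git commit -qm "Add CODEOWNERS"
# In a real repo, you would now push to the default branch:
# git push origin main
```

When rules overlap, the last matching pattern in the file takes precedence, so order your broad rules first and specific ones after.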

Task 2:
Enable branch protection on the default branch and require code reviews.

Go to Settings → Branches → Add/Edit a branch protection rule.

Check "Require pull request reviews before merging" and optionally enable
"Require review from Code Owners".

Question: How does requiring CODEOWNERS reviews enforce quality
control and prevent unauthorized changes?

Notes:
The CODEOWNERS file is a powerful GitHub feature for automating
reviews and managing contributions across large or multi-team projects.
When paired with branch protection rules, it ensures accountability and
helps maintain high-quality code standards.
Lab 14 – Advanced GitHub Features

Lab Objective:
Explore advanced GitHub features that help manage and track project
progress effectively.

Lab Purpose:
This lab focuses on using GitHub’s advanced project management tools
such as Projects, Milestones, Labels, and Insights to plan, track, and
measure development progress.

Lab Tool:
GitHub.com (web interface).

Lab Topology:
An active GitHub repository with Issues and Pull Requests.

Lab Walkthrough:

Task 1:
Create a new GitHub Project board.

Navigate to the "Projects" tab in your repository and click "New Project".
Choose either a Kanban or Table layout.

Add at least three columns (e.g., To Do, In Progress, Done) and move
existing issues or PRs to appropriate columns.

Question: How does organizing work in a project board help improve
development visibility and team collaboration?

Task 2:
Create a Milestone and assign it to issues or pull requests.

Navigate to the "Milestones" section under "Issues", create a new
milestone, and add a due date and description.
Assign the milestone to existing issues or PRs.

Question: How do milestones help in tracking progress toward specific
project goals or release versions?

Task 3:
Use Labels to categorize issues and pull requests.

Apply built-in labels (e.g., bug, enhancement) or create custom labels.

Filter issues/PRs using labels to understand the nature and status of your
project’s tasks.

Question: How can label systems be customized to suit different teams and
workflows?

Task 4:
Explore GitHub Insights.

Go to the "Insights" tab and review metrics such as contributors, commit
history, code frequency, and issue tracking.

Question: What insights were most helpful in understanding project activity
and team contributions?
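The data behind Insights comes from the commit history itself, so you can compute rough local equivalents with plain Git. A self-contained sketch (the "alice" and "bob" identities and empty commits are illustrative stand-ins for real contributor history):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q

# Simulate two hypothetical contributors
git -c user.name=alice -c user.email=alice@example.com commit -q --allow-empty -m "first"
git -c user.name=alice -c user.email=alice@example.com commit -q --allow-empty -m "second"
git -c user.name=bob   -c user.email=bob@example.com   commit -q --allow-empty -m "third"

# Commit counts per author, similar to the Insights "Contributors" view
git shortlog -sn HEAD

# Raw commit history, the basis of the commit-activity graphs
git log --oneline
```

Here git shortlog -sn lists alice with 2 commits and bob with 1, a miniature version of what the Contributors graph visualizes over time.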

Notes:
Advanced GitHub features support agile planning, progress tracking, and
productivity measurement. Teams that use Projects, Milestones, and
Insights efficiently can coordinate development better and meet project
deadlines more reliably.

Lab 15 – Using Draft Pull Requests, Gists, and Commenting

Lab Objective:
Understand the use of draft pull requests and their statuses, create and
fork gists, and provide code-specific comments during code reviews.
Lab Purpose:
This lab combines key GitHub collaboration tools to enhance the
development workflow. You’ll learn how to initiate work with draft pull
requests, share and collaborate using gists, and conduct detailed code
reviews by commenting on specific lines.

Lab Tool:
GitHub.com (web interface), GitHub CLI (optional).

Lab Topology:
An active GitHub repository and access to GitHub Gists.

Lab Walkthrough:

Task 1:
Create a draft pull request.

On GitHub, go to your repository, push a new branch, and click "Compare
& pull request". Select "Create draft pull request".

Review the pull request page to observe the "Draft" label and limited merge
actions.

Question: What is the benefit of using a draft PR instead of a regular one at
the start of development?

Task 2:
Understand and track pull request statuses.

Observe the change from "Draft" to "Ready for review" when you mark a
draft PR as ready.

Note CI checks, review statuses, and merge conflicts under the PR status
box.

Question: How do pull request statuses improve visibility and
communication in a development team?

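Before a PR (draft or otherwise) is opened, a PR is simply a comparison of your branch against the base branch, and you can preview exactly what it would contain locally. A self-contained sketch (the branch name feature/draft-work, the file, and the "demo" identity are illustrative):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name demo
git config user.email demo@example.com
base=$(git symbolic-ref --short HEAD)   # the default branch name
git commit -q --allow-empty -m "initial commit"

# Do some draft work on a feature branch
git checkout -q -b feature/draft-work
echo "work in progress" > notes.txt
git add notes.txt
git commit -qm "WIP: draft notes"

# Preview what a PR from this branch would include:
git log --oneline "$base"..feature/draft-work   # the commits the PR would carry
git diff --stat "$base"...feature/draft-work    # the files the PR would change
```

The triple-dot form in git diff compares against the merge base, which matches how GitHub computes the "Files changed" tab for a pull request.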
Task 3:
Create and fork a gist.

Visit https://gist.github.com/ and create a new gist with some code content.

Find another user’s gist and click "Fork" to create your copy.

Question: In what scenarios would gists be preferred over full repositories?

Task 4:
Comment on specific lines of code in a pull request.

Open an active pull request and click on the "Files changed" tab. Hover
over a line and click the blue "+" icon to comment.

Leave one or more comments with constructive feedback.

Question: How do inline comments in PRs improve the code review
process compared to general comments?

Notes:
Using draft PRs allows early collaboration while clearly indicating
incomplete work. Gists simplify sharing small code snippets, and inline PR
comments make code reviews precise and actionable, fostering better
communication and code quality.