
DEGREE PROJECT IN TECHNOLOGY,
FIRST CYCLE, 15 CREDITS
STOCKHOLM, SWEDEN 2022

Web Penetration testing
Finding and evaluating vulnerabilities in a web page based on C#, .NET and Episerver

Ivan Khudur and Ameena Lundquist

KTH ROYAL INSTITUTE OF TECHNOLOGY
ELECTRICAL ENGINEERING AND COMPUTER SCIENCE
Abstract

Today’s society is highly dependent on functional and secure digital resources,
to protect users and to deliver different kinds of services. To achieve this, it is
important to evaluate the security of such resources, to find vulnerabilities and
handle them before they are exploited. This study aimed to see if web applications
based on C#, .NET and Episerver had vulnerabilities, by performing different
penetration tests and a security audit. The penetration tests utilized were SQL
injection, Cross Site Scripting, HTTP request tampering and Directory Traversal
attacks. These attacks were performed using Kali Linux and the Burp Suite
tool on a specific web application. The results showed that the web application
could withstand the penetration tests without disclosing any personal or sensitive
information. However, the web application returned many different types of
HTTP error status codes, which could potentially reveal areas of interest to a
hacker. Furthermore, the security audit showed that it was possible to access
the admin page of the web application with nothing more than a username and
password. It was also found that having access to the URL of a user’s invoice file
was all that was needed to access it.

Keywords

Ethical hacking, Penetration testing, Cybersecurity, DREAD, HTTP, HTTPS,
Episerver, Kali Linux, Burp Suite, SQL injection, XSS, HTTP Method Tampering,
Directory Traversal, HSTS, IDOR, Authentication, MFA

Sammanfattning
Dagens samhälle är starkt beroende av funktionella och säkra digitala resurser,
för att skydda användare och för att leverera olika typer av tjänster. För
att uppnå detta är det viktigt att utvärdera säkerheten för sådana resurser
för att hitta sårbarheter och hantera dem innan de utnyttjas. Denna studie
syftar till att se om webbapplikationer baserade på C#, .NET och Episerver har
sårbarheter, genom att utföra olika penetrationstester och genom att göra en
säkerhetsgranskning. Penetrationstesterna som användes var SQL-injektion,
Cross Site Scripting, HTTP-förfrågningsmanipulering och Directory Traversal-
attacker. Dessa attacker utfördes med Kali Linux och Burp Suite-verktygen på
en specifik webbapplikation. Resultaten visade att webbapplikationen klarade
penetrationstesterna utan att avslöja någon personlig eller känslig information.
Webbapplikationen returnerade dock många olika typer av HTTP-felstatuskoder,
som potentiellt kan avslöja områden av intresse för en hackare. Vidare visade
säkerhetsgranskningen att det var möjligt att komma åt webbapplikationens
adminsida med inget annat än ett användarnamn och lösenord. Det visade sig
också att allt som behövdes för att komma åt en användares fakturafiler var
webbadressen.

Acknowledgements
We would like to express our deepest thanks to Markus for providing us with
invaluable advice and guidance throughout this project. His help contributed
tremendously to the completion of the project.
A big thanks to our supervisor Alexander Kozlov as well as our examiner Pawel
Herman, for the opportunity to do this project, which has given us valuable
information about cyber security.
Also, we would like to pay our special regards to Nevin Gürsoy for the insightful
discussions and support, which helped us overcome obstacles in difficult periods.
Last but not least, we would like to pay our special regards to our families and
friends for their support and love, and everyone else who made this project a
possibility. We could not have done it without you.

Authors
Ivan Khudur <ikhudur@kth.se> and Ameena Lundquist <ameena@kth.se>
School of Electrical Engineering and Computer Science
KTH Royal Institute of Technology

Place for Project
Stockholm, Sweden

Examiner
Pawel Herman
KTH Royal Institute of Technology

Supervisor
Alexander Kozlov
KTH Royal Institute of Technology
Contents

1 Introduction 1
1.1 Research Question . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

2 Background 3
2.1 Cybersecurity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.2 Ethical Hacking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.3 Penetration Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.4 Risk Assessment Models . . . . . . . . . . . . . . . . . . . . . . . . 5
2.4.1 DREAD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.5 Network Protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.5.1 TCP/IP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.6 HTTP/HTTPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.6.1 HTTP versions . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.6.2 HTTP Structure . . . . . . . . . . . . . . . . . . . . . . . . . 10

3 Method 12
3.1 Cross Site Scripting (XSS) . . . . . . . . . . . . . . . . . . . . . . . 12
3.2 SQL Injection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.3 HTTP Method Tampering . . . . . . . . . . . . . . . . . . . . . . . . 14
3.4 Directory Traversal . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.5 Authentication and Insecure Direct Object References (IDOR) . . . 16
3.6 HSTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

4 Results 17
4.1 Cross Site Scripting (XSS) Results . . . . . . . . . . . . . . . . . . . 17
4.2 SQL Injection Results . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.3 HTTP Method Tampering Results . . . . . . . . . . . . . . . . . . . 19
4.4 Directory Traversal Results . . . . . . . . . . . . . . . . . . . . . . . 20
4.5 Authentication and Insecure Direct Object References (IDOR)
Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.6 HSTS Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.7 DREAD Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

5 Discussion 23

6 Conclusions 27
6.1 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

References 29

Appendix A 32

1 Introduction
The advances in computational and network technologies during the last decades
have seen a rapid increase in digitalization. Huge amounts of information can
be accessed and transported to and from different locations around the globe
almost at the speed of light. The convenience of this has however come with
an increased safety risk. Just as the end-users can access or send information
almost wherever they are, so can a person with malicious intent. Criminals
and criminal organizations are able to perform attacks, using computers and
an internet connection, without physically being near their target. These cyber
attacks can lead to personal and sensitive information being stolen, such as one’s
social security number and bank account details [8]. Cyber attacks may also have
a wider effect on society. For example, in December 2015, hackers were able to
get access to the power grid in Ukraine through an SSH backdoor, causing a blackout
in the country and interrupting communication services [21] [4]. In July 2021 the
software company Kaseya, which sells IT services, was targeted by a hacker group.
This affected several companies, among them the grocery store chain Coop in
Sweden, which was forced to close down its stores for some days due to the
incident [2].

Cyber attacks are happening more frequently, with the risk of putting vital
infrastructure out of commission. It is therefore necessary to prevent these
attacks, preferably before they occur. One way to do this is to step into the
shoes of hackers and perform so-called penetration tests, to find any potential
vulnerabilities that could be exploited and patch them.

Hacking and cyber security are thus highly relevant problems today. This
study therefore aims to explore penetration testing and ethical hacking to
evaluate the cyber security of a specific company’s website.

1.1 Research Question

The purpose of this project is to evaluate the security of websites when performing
simulated cyber attacks, using penetration testing techniques. The project aims
to answer the following research questions:

• Are web applications based on C#, .NET and Episerver vulnerable to
common penetration attacks such as SQL Injection, Cross Site Scripting,
HTTP request tampering and Directory Traversal?

• Have measures been implemented to secure the communication between


the website and the user, access to files and login?

1.2 Scope

There is a wide range of ethical hacking methods and several different
strategies for attacking different applications, so limitations are set on the
scope of the study due to time constraints. This exploration therefore only looks
at penetration testing of one web application and only utilizes SQL injection,
Cross Site Scripting, Directory Traversal and HTTP request tampering, as well
as checking for broken authentication and encrypted communication. There are
also legal concerns when hacking websites, so one must have permission from the
owners of the web application to perform the tests. The web application belongs
to a company that has decided to stay anonymous. As there were concerns
regarding the company’s privacy and security, the hacking was done in a test
version of the website, to ensure that the privacy of the company’s customers is
respected.

2 Background

2.1 Cybersecurity

When the first computers emerged, it was soon realized that they had to be
guarded as they could contain sensitive data which unauthorized users could
get access to. In the 1970s the realization that computers can be attacked and
have data stolen from them led to an increase in software security, such as the
implementation of hashed passwords and administrator accounts. When global
network communication started to become more common in the 1980s, security
concerns were raised as it was possible for people to bypass security measures on a
computer from a different location. One example is a group of high school
students in Milwaukee, USA, who were able to gain access to military networks
in 1983 by hacking their security systems [31].

Storing information digitally is convenient and is widely used in today’s society.
By using networks, individuals, organizations, companies and governmental
agencies are able to communicate with each other and access information
in an instant. The convenience of this, however, carries a risk. Network
communications make it possible for individuals/crime organizations to perform
attacks and steal information from a distance. Such cyber attacks can be
countered if cybersecurity is well implemented [8]. Cybersecurity is not only
about preventing attacks on digital resources from happening, but also to detect
them before they cause severe damage, as well as responding to the attacks and
implementing ways of recovering from an attack [4].

2.2 Ethical Hacking

The term hacking was first coined in the 1960s, when it referred to an elegant
way of performing any type of programming. Since then, the word hacking has
evolved into the activity of breaking into a computer system [10]. The popular
meaning of hacking is often associated with a negative action, i.e. to exploit
vulnerabilities to steal information or cause damage [15]. Hacking can also be
performed in order to identify the vulnerabilities of a computer system, and come
up with ways to improve the system to prevent future attacks. This type of

hacking, with the intention to access a system, without causing damage or stealing
information, is called ethical hacking. Ethical hacking is usually performed the
same way as a malicious hacker would do, but with the permission from the target
to perform the hacking [10].

Hackers can usually be categorized into three different categories: black hats, gray
hats and white hats. Black hats are hackers that act illegally and unethically for
personal gain and making others miserable [10]. Gray hats are hackers that work
both as black hats and white hats. The term gray hat is however questionable
and is sometimes considered to be the same as a black hat, considering that these
hackers perform illegal and unethical actions [15]. White hats are hackers that
perform ethical hacking with the goal to better secure systems [10].

2.3 Penetration Testing

Penetration testing is a method used to evaluate network security. It can also
be considered a subclass of ethical hacking [3]. Penetration testing involves
understanding the set of methods and procedures which aim at testing and
protecting an organization’s security. These tests aim to find weaknesses of the
organization and see if an attacker would be able to take advantage of these
weaknesses to gain access to sensitive information. Vulnerability is defined as
a flaw or weakness inside the system which can be taken advantage of to gain
unauthorized access [3].

There are many different types of penetration tests. Some of the most common
ones are network, web application, mobile application, social engineering and
physical penetration testing [31]. The method which will be utilized in this study
is web application penetration testing. A web application is oftentimes the only
Internet-exposed interface that an organization has. The potential vulnerabilities
of web applications vary, depending on the web application’s design choices, for
example: OS, web server, programming language etc.

The categories of vulnerabilities and security risks are tracked by the Open Web
Application Security Project (OWASP) and at the time of writing the top 3 are
Broken Access Control, Cryptographic Failures and Injection. Attacks that could
be used against Broken Access Control are: file inclusion, automated brute force

password attack and authentication bypass. Cryptographic Failures are when
sensitive data is not protected and possible attacks could be Man in The Middle
(MiTM) and file inclusion. In the Injection category, possible attacks can be cross-
site scripting (XSS), buffer overflows and SQL Injection [17].

2.4 Risk Assessment Models

An important part of penetration testing is to assess the impact of any vulnerability
that is found. This can be done with risk assessment models, where threats
are rated according to different categories and the impact any exploit of the
vulnerability may have, in order to help prioritize security efforts. The most
common rating systems are the DREAD model and CVSS [16].

2.4.1 DREAD

DREAD is a mnemonic/acronym which stands for Damage potential,
Reproducibility, Exploitability, Affected users and Discoverability. It
is a rating system that is used in web testing, where threats are rated with a value
between 1-3 in each of the previously mentioned categories [27]. The categories,
their meaning and a description of the three scores in each category are presented
in Table 1.

To determine the risk level of a threat, the values in each category are added
together to get a score result, which corresponds to one of three risk levels: low,
medium and high [27]. The risk levels and their corresponding score results are
presented in Table 2.
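As an illustration, the scoring scheme above can be sketched in a few lines of code. The score-to-level thresholds used below are assumptions for illustration only; the actual mapping is defined in Table 2.

```python
# Illustrative DREAD scoring sketch. Each category is rated 1-3, so the
# total lies between 5 and 15. The risk-level thresholds below are
# assumptions; the report's Table 2 defines the real mapping.

CATEGORIES = ("damage", "reproducibility", "exploitability",
              "affected_users", "discoverability")

def dread_score(ratings: dict) -> int:
    """Sum the five category ratings (each must be 1, 2 or 3)."""
    if set(ratings) != set(CATEGORIES):
        raise ValueError("exactly the five DREAD categories are required")
    if any(r not in (1, 2, 3) for r in ratings.values()):
        raise ValueError("each rating must be between 1 and 3")
    return sum(ratings.values())

def risk_level(score: int) -> str:
    """Map a total score to a risk level (assumed thresholds)."""
    if score <= 7:
        return "low"
    if score <= 11:
        return "medium"
    return "high"

threat = {"damage": 2, "reproducibility": 3, "exploitability": 2,
          "affected_users": 1, "discoverability": 3}
print(dread_score(threat), risk_level(dread_score(threat)))  # 11 medium
```

A threat that is easy to reproduce and discover but affects few users, as in the example above, lands in the medium band under these assumed thresholds.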

2.5 Network Protocols

Computers of all types that are connected together form a computer network [12].
In order for the nodes in the network to communicate with each other they must

adhere to certain rules. Such rules in a network are called network protocols. Each
node in the network has to understand the network protocols that are used in order
for information to be transmitted and received correctly within the network [11].
Among other things, the purposes of the protocols are to [11]:

• Create and terminate connections between nodes

• Identify nodes to make sure information is sent to the correct destination

• Control the flow of data to minimize delays in the communication and make
sure data is sent and received in correct order

• Encode and decode data, as well as detect and possibly correct
corrupted data

Some network protocols are grouped together into so-called protocol suites, where
protocols perform different tasks. An example of such a group of protocols is the
Internet Protocol Suite (IPS), also called the TCP/IP protocol suite. The TCP/IP
protocol suite forms the foundation of the Internet [12] [9] [11].

2.5.1 TCP/IP

Transmission Control Protocol (TCP) and Internet Protocol (IP) are two protocols
that together make it possible to transfer information from one point to another
through the Internet. Furthermore, they make up a protocol stack of four layers:
the Application layer, Transport layer, Internet layer and Link layer [11]. TCP
operates in the Transport layer, while IP operates in the Internet layer. The Link
layer can be further divided into 2 parts: the data
link layer and the physical layer. Each of these layers in the TCP/IP protocol suite
communicate with the ones adjacent to them, transforming input data into output
data that is sent to the next layer [12]. The responsibilities of each layer are as
follows:

• Link layer: handles the data transmission between physical nodes [11].

• Internet layer: utilizes the protocols IPv4 and IPv6 to address where to send
the data and from whom it is sent [11].

• Transport layer: establishes a connection between the sender and receiver

and initiates communication between them. Divides the data that should
be sent into TCP- or User Datagram Protocol (UDP) packets. Controls the
flow of packets and performs multiplexing (i.e. send packets to different
ports/services on a node) [12] [11].

• Application layer: assembles the data that is sent or should be sent over the
network, such as emails, the contents of a web page or files. Protocols used
in this layer include, among others, the HyperText Transfer Protocol (HTTP),
HyperText Transfer Protocol Secure (HTTPS), Simple Mail Transfer
Protocol (SMTP), the Domain Name System (DNS) and the File Transfer
Protocol (FTP) [12] [11].

Figure 2.1: The layers in the TCP/IP protocol suite [12] [11]

2.6 HTTP/HTTPS

HTTP was first introduced by Tim Berners-Lee in 1991 and has since then become
one of the most used application protocols in client-server communication over
the Internet [14]. The workings of HTTP are based on simplicity: the client sends a

request to a server, which in turn responds to the client. There are several different
requests that can be sent: CONNECT, DELETE, GET, HEAD, OPTIONS, PATCH,
POST, PUT, TRACE. Of these requests, the most common one is the GET method,
which asks the server to retrieve e.g. one or several HyperText Markup Language
(HTML) files to display a website, or to retrieve an image or any other type of file
from a webpage [12] [14].

2.6.1 HTTP versions

Since the introduction of HTTP there have been 5 different versions of it:
HTTP/0.9, HTTP/1.0, HTTP/1.1, HTTP/2 and most recently in 2019, HTTP/3
[14] [13]. There is also the HTTPS version, which combines HTTP with either
the Secure Sockets Layer (SSL) protocol, or SSL’s successor, the Transport Layer
Security (TLS) protocol. The purpose of this combination is to encrypt the
requests and messages that are made by HTTP, as HTTP alone sends messages
unencrypted. By encrypting the messages using SSL/TLS protocol, the client
and server are able to authenticate the connection and make sure no other party
than the server and client are able to read or alter the data passed between them
[25].

HTTP/1.1 was the first of these HTTP protocols that was turned into an Internet
standard and was released by the HTTP Working Group as RFC 2068 in 1997
[14]. The protocol has been revised twice, in 1999 through RFC 2616 and in
2014 in RFC 7230 [13]. The main difference between the older versions and
HTTP/1.1, besides HTTP/1.1 being an Internet standard and used by billions of
devices, is that HTTP/1.1 had better performance. This was done, amongst other
improvements, by maintaining the connection between the client and server after
completing a request as well as trying to add pipelining and concurrent TCP
connections. Pipelining allows the client to send multiple requests at the same
time over its TCP connection with the server, thus reducing the time to e.g. load
a webpage [14].

2.6.2 HTTP Structure

HTTP messages are central to the exchange of data between the client and the
server. These messages can be divided into two types: HTTP requests and HTTP
responses. HTTP requests are sent by the user to initiate an action on the server.
They consist of three elements:

• An HTTP method

• The request target

• The HTTP version

An HTTP method describes which action is to be performed, for example GET,
PUT or POST. A request target is normally a URL, and varies depending on which
HTTP method is used [19]. HTTP response codes relay information on whether or
not the HTTP request has succeeded. These response codes can be grouped into
five categories: informational responses, successful responses, redirection
responses, client error responses and finally server error responses. Table 3 shows
the different responses and their meanings [20].
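Since the category of a response code is encoded in its first digit, the grouping above can be sketched with a small helper (an illustration, not part of the tested application):

```python
# Group an HTTP status code into one of the five response classes
# described above, based on the code's first digit.

def response_class(status: int) -> str:
    classes = {
        1: "informational response",
        2: "successful response",
        3: "redirection response",
        4: "client error response",
        5: "server error response",
    }
    if status < 100 or status > 599:
        raise ValueError("HTTP status codes lie between 100 and 599")
    return classes[status // 100]

print(response_class(200))  # successful response
print(response_class(403))  # client error response
print(response_class(500))  # server error response
```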

Two significant components of HTTP request and response messages are headers
and cookies. HTTP headers are a central part of the messages, as they contain
context information and metadata. The headers allow the client and server
to exchange additional information. An HTTP header consists of a name
(compared case-insensitively) followed by a colon and the header’s value [18].
Cookies are pieces of identifiable data which a server sends to the user’s browser.
They are used to tell if requests are being sent from the same browser/client.
They have three main purposes: session management, personalization and
tracking [29].
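As a sketch, a header line of the form described above can be split into its name and value like this (header names in HTTP/1.1 are compared case-insensitively, so they are normalized here; the example header values are made up):

```python
# Parse a raw HTTP header line into a (name, value) pair. HTTP/1.1
# header names are compared case-insensitively, so the name is
# normalized to lower case for lookups. Example values are fabricated.

def parse_header(line: str) -> tuple:
    name, sep, value = line.partition(":")
    if not sep:
        raise ValueError("not a header line: %r" % line)
    return name.strip().lower(), value.strip()

print(parse_header("Content-Type: text/html; charset=utf-8"))
# ('content-type', 'text/html; charset=utf-8')
print(parse_header("Set-Cookie: sessionid=abc123; HttpOnly"))
# ('set-cookie', 'sessionid=abc123; HttpOnly')
```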

Figure 2.2: A HTTP Request

Figure 2.3: A HTTP Response

3 Method
In this project, a virtualization of Kali Linux inside Oracle’s VirtualBox was
used to perform penetration tests. Kali Linux is a distribution of the Linux
operating system specialized in penetration testing. It ships with several
software packages specific to security work, making its target users people
who engage in such work. The system is focused on testing rather than on
keeping the distribution itself safe from attack. Since Kali Linux runs inside a
virtual machine, it does not need its own hardware. Linux distributions differ
from one another: Linux is the core, while every distribution adds its own
software on top [23]. The reason for virtualization was the ease of installation
and the reduced risk of damaging the host system [30].

To minimize possible damage or leakage of information, the penetration tests
were done in a test version of the web application. The penetration tests were
performed using the tool Burp Suite in Kali Linux, as well as manually entering
inputs of different kinds in the target. Burp Suite is an intercepting proxy that
allows the user to capture, read and alter the traffic that flows between the user’s
web browser and the web server of the web application [32]. The web application
was started in Firefox and Burp Suite was configured to be able to intercept HTTP
traffic in Firefox. This was done by setting Burp Suite’s HTTP proxy to its default
address 127.0.0.1, listening on port 80, and installing Burp’s CA certificate in
Firefox according to the instructions [26].

The following sections describe the different penetration testing methods that
were used in this project to test the target. The server’s responses were
recorded in all of the penetration tests and analyzed for potential
vulnerabilities. Each vulnerability found in this study was evaluated with regard
to the 5 categories of DREAD, to determine its potential risk level.

3.1 Cross Site Scripting (XSS)

Cross Site Scripting (XSS) is an attack that uses flaws in the application to
make it return malicious responses. XSS attacks use scripting languages, such

as JavaScript or Action Script, to run scripts on the client’s web browser by
manipulating the HTTP requests. An easy way to test if a XSS vulnerability exists,
is to use the JavaScript Alert method to try to get a popup box to appear, when
searching for something. There are 2 types of XSS attacks that are commonly
performed: Reflected XSS and Stored XSS [22] [24]. In this project, both methods
were tested, as well as ways to get around XSS filters that might be in place.

Reflected XSS was tested in 2 different ways: by altering the User-Agent header
in an HTTP request to load the main page of the web application, and by
manually entering JavaScript into a search field. The Intercept and Repeater
tools of Burp Suite were used to alter the User-Agent header’s value to
<script>alert(0)</script>, before sending the HTTP request to update the main
page. The script <script>alert('xss')</script> was entered into the search field to
test how search queries handle embedded scripts.

The implementation of XSS filters was tested by writing different variations of
scripts into the search field:

• The JavaScript method alert(0) was written to see if the web application
would add the <script> tag by itself, and execute the alert-method.

• A nested tag was created to see if any implemented XSS filter worked
recursively or not. If an XSS filter is not working recursively, it would filter
away the first occurrence of the tag, and accept the rest of the input [24]. This
test was done by searching for <scr<script>ipt>alert(0)</scr</script>ipt>.

• An incomplete script, <script, was searched for in the search bar to see the
web application’s response.
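The nested-tag test above can be illustrated with a toy filter. This is a deliberately simplified sketch of the bypass idea, not the web application's actual filter:

```python
import re

# Toy XSS filter sketch: strips <script> / </script> tag markers. A filter
# that removes tags only once can be bypassed with nested tags, because
# removing the inner tag re-assembles the outer one.

TAG = re.compile(r"</?script>", re.IGNORECASE)

def filter_once(text: str) -> str:
    """Remove script tag markers a single time (non-recursive filter)."""
    return TAG.sub("", text)

def filter_recursive(text: str) -> str:
    """Re-apply the filter until the input stops changing."""
    previous = None
    while text != previous:
        previous, text = text, TAG.sub("", text)
    return text

payload = "<scr<script>ipt>alert(0)</scr</script>ipt>"
print(filter_once(payload))       # <script>alert(0)</script>  <- bypassed
print(filter_recursive(payload))  # alert(0)
```

The single-pass filter removes the inner tags and thereby stitches the outer ones back together, which is exactly why the recursive behaviour of any filter was probed.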

Stored XSS was tested by updating a user’s email address, after signing in to the
web application. The POST request to update the email address was intercepted
using Burp Suite, and altered in the Repeater tool. The user’s email address
was replaced with "><script>alert(document.cookie)</script> before the HTTP
request was sent.

3.2 SQL Injection

If the server retrieves data from a database, it may be possible to alter the request
to fetch other information instead. SQL injection tests, attempting to gain access
or retrieve hidden information, were done in 3 different places of the web
application which we assumed could fetch information from a database. These
were:

• The search bar

• The password field in the login page

• The URL to access a file

The following strings were used to see if it was
possible to perform an SQL injection attack:

• '1 OR '1'='1'

• 1' OR '1'='1'

• ' OR '1'='1'

• " OR "1"="1"

• OR 1=1

• '

• '--

• --
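The effect of a string such as ' OR '1'='1 can be sketched on a naively concatenated query. The query shape, table and column names below are made up for illustration; they are not the application's real ones:

```python
# Sketch of why these strings are dangerous when a query is built by
# string concatenation. Table and column names are fabricated.

def naive_query(password: str) -> str:
    # Vulnerable pattern: user input is pasted straight into the SQL text.
    return "SELECT * FROM users WHERE password = '" + password + "'"

print(naive_query("secret"))
# SELECT * FROM users WHERE password = 'secret'

print(naive_query("' OR '1'='1"))
# SELECT * FROM users WHERE password = '' OR '1'='1'
# The WHERE clause is now always true, so every row matches.
```

The standard remedy is a parameterized query, where the database driver keeps the input data separate from the SQL text instead of splicing it in.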

3.3 HTTP Method Tampering

HTTP methods can be used to perform actions on a web server. These methods
are designed to help developers test applications; however, they can also be used
for malicious purposes [24]. Different HTTP methods trigger different responses
from a webpage. All HTTP methods were tested with and without cookies to see
if there were any differences in response.

The OPTIONS method requests information on which communication options
are available for the user on the webpage. The TRACE method is used to return
the request back to the client. One can also test whether arbitrary methods are
accepted, by replacing “GET” with something random such as “JEFF”.

The OPTIONS, TRACE and JEFF methods were tested on the main page, the
navigation to the login page and the login page itself. Each test first sent a “GET”
request to the webpage, whose response formed the web page. Thereafter, the
request was copied and sent to a tool in Burp Suite which allowed one to change
the method from “GET” to “OPTIONS”, “TRACE” or “JEFF”, depending on what
method was being tested, to then see what response the server returned.
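The alteration described above can be sketched as a pure string operation on a captured request, in the spirit of Burp's Repeater: only the method token on the request line is swapped out. The host and path below are placeholders:

```python
# Sketch of HTTP method tampering: swap the method token on the first
# line of a captured request before resending it. Host/path are made up.

def tamper_method(request: str, new_method: str) -> str:
    first_line, _, rest = request.partition("\r\n")
    parts = first_line.split(" ")
    parts[0] = new_method          # replace e.g. "GET" with "JEFF"
    return " ".join(parts) + "\r\n" + rest

original = "GET /login HTTP/1.1\r\nHost: example.test\r\n\r\n"
for method in ("OPTIONS", "TRACE", "JEFF"):
    print(tamper_method(original, method).splitlines()[0])
# OPTIONS /login HTTP/1.1
# TRACE /login HTTP/1.1
# JEFF /login HTTP/1.1
```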

The HEAD method is nearly identical to GET; however, the server should not
send a message body in the response. This method is used to obtain metadata
about the selected representation without transferring it. Testing was again done
on the navigation to the login page and the login page.

To request that the web server accept data enclosed in the body of a request
message, one uses the POST method. This method was tested on “mina sidor”
(the user account page) to see if one is able to change the username.

3.4 Directory Traversal

Directory Traversal is an attack that tries to access files and sites on a web
application that should not be accessible to everyone, i.e. files and directories
outside of the root directory [26, p. 78]. To test if it was possible to access such
files and directories, the Intruder section of Burp Suite was used to alter the path
of an HTTP request to the main page of the website. Common directory names,
such as /admin and /etc/passwd, were inserted into the request path, as well as
../ sequences concatenated into long strings, in order to try to access some
directory outside the root directory. 175 different alterations, taken from a
directory traversal cheat sheet [1], were tested using the Sniper attack in Burp
Suite. The Sniper attack uses so-called payloads to alter a specific part of the
HTTP request. One payload at a time was sent to the web application and the
response was recorded. A full list of the tested paths is available in Appendix A.
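Payload lists of this kind can be generated programmatically. The sketch below produces a small set of the same flavour as an Intruder payload list (the target paths here are common examples chosen for illustration; the actual 175 payloads came from the cheat sheet [1]):

```python
# Generate a small set of directory traversal payloads: common sensitive
# paths prefixed with an increasing number of ../ sequences. The target
# paths are illustrative; the study used 175 payloads from a cheat sheet.

TARGETS = ["etc/passwd", "admin", "windows/win.ini"]

def traversal_payloads(max_depth: int):
    payloads = []
    for depth in range(1, max_depth + 1):
        prefix = "../" * depth
        for target in TARGETS:
            payloads.append(prefix + target)
    return payloads

for p in traversal_payloads(2):
    print(p)
# prints ../etc/passwd, ../admin, ../windows/win.ini,
# then the same targets prefixed with ../../
```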

3.5 Authentication and Insecure Direct Object References
(IDOR)

In this part of the tests, the web application was tested with respect to how it
verifies who has sent a request, how a user is authenticated when signing in to
the test environment, and whether the web application only communicates over
HTTPS. Authentication was tested by checking if Multi-Factor Authentication
(MFA) is used. The tests to see if the web application handles IDOR and verifies
a user were done by signing in to a user account and trying to access a file on its
page. The HTTP request to access the file was intercepted using Burp Suite and
sent to the Repeater tool for modification. All cookies and headers were removed,
except for the GET request line, the file ID and the session cookie, before sending
the request. An attempt was also made to copy the file URL and paste it into the
browser after signing out the user.
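The stripped-down request used in this test can be sketched as below. The hostname, path layout and cookie name are hypothetical, chosen only to illustrate what remains after everything else is removed.

```python
# Sketch of the minimal request left after stripping headers in Repeater:
# only the GET line, the Host header and (optionally) the session cookie.
# Host, path and cookie name are made up for illustration.
def build_minimal_request(host, file_id, session_cookie=None):
    lines = [
        f"GET /documents/{file_id} HTTP/1.1",  # hypothetical invoice path
        f"Host: {host}",
    ]
    if session_cookie:
        lines.append(f"Cookie: ASP.NET_SessionId={session_cookie}")
    lines.append("")  # blank line terminates the header section
    return "\r\n".join(lines) + "\r\n"

print(build_minimal_request("example.com", "12345", "abc"))
```

Sending the request with and without the cookie line shows whether the server ties file access to a session at all.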

3.6 HSTS

To see if the web application was set to only allow communication over HTTPS,
an HTTP GET request was sent to display the main page, the login page and the
user account page respectively. The server responses were then checked to see if
they contained the HSTS header [32].
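The check itself reduces to a case-insensitive lookup in the response headers. A minimal sketch, where the header dictionary is a made-up example (in the study the responses came from Burp Suite):

```python
# Sketch of the HSTS audit: report whether a response carries the
# Strict-Transport-Security header. Header names are case-insensitive.
def has_hsts(headers):
    return any(name.lower() == "strict-transport-security" for name in headers)

# Hypothetical response headers, similar to what the tested pages returned.
response_headers = {"Content-Type": "text/html", "Server": "Kestrel"}
print(has_hsts(response_headers))  # False
```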

4 Results
This study aimed to see if web applications based on C#, .NET and Episerver are
vulnerable to common penetration attacks, such as XSS, SQL injection and HTTP
request tampering. Furthermore, security aspects related to login, file access and
encrypted communication were audited to see if there were any vulnerabilities
that could be exploited. This was done by checking if communication between
client and server was always secure, i.e. made over HTTPS, if Multi-Factor
Authentication (MFA) was used to sign in to the test environment of the web
application, and if measures had been taken to verify who is accessing a user’s
files. This section describes the results retrieved from the penetration tests and
the audit regarding HTTPS communication, MFA and access to user files.

4.1 Cross Site Scripting (XSS) Results

A total of 6 different tests were performed to try to find XSS vulnerabilities, each
using a different script. 5 of these attempted to make the server respond with an
alert popup and 1 tested how the server handles a broken script. Of the 5 scripts
attempting to produce an alert popup, 4 resulted in the HTTP code “403
Forbidden”, while the 1 that did not use the <script> tag was treated as a normal
search string and was processed by the server the same way as a regular search
word. The broken script test resulted in a “500 Internal Server Error”, but did
not reveal any information about the backend server. The results indicate that
the web application prevents malicious scripts from being executed. Table 4
shows the different tests that were conducted and the corresponding HTTP
responses. The dash symbol (-) in the table means that the cross site script was
not performed as it was not applicable.

4.2 SQL Injection Results

Trying to log in to a user account by entering query fragments into the password
field was handled by the web application as if a user had written the wrong
password, i.e. the login attempts failed. When trying to search for the query in
the search bar, the HTTP response was “403 Forbidden”. Adding a query to the
URL of a file resulted in “500 Internal Server Error”, but no other information
was shown in the HTTP response. The SQL injection attacks were unsuccessful
in gaining access to information that a user should not be able to access, meaning
that the web application has implemented ways to restrict malicious input.
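The observed behaviour, where a query fragment in the password field is simply treated as a wrong password, is what parameterized queries produce. A minimal sketch of this defensive pattern, using Python’s sqlite3 with made-up table and column names (the study did not inspect the application’s actual backend code):

```python
import sqlite3

# Toy user store; schema and data are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

def login(conn, user, password):
    # Placeholders bind the input as data, never as SQL text, so an
    # injected fragment is just a literal (and wrong) password.
    row = conn.execute(
        "SELECT 1 FROM users WHERE name = ? AND password = ?",
        (user, password),
    ).fetchone()
    return row is not None

print(login(conn, "alice", "secret"))       # True
print(login(conn, "alice", "' OR '1'='1"))  # False: the injection fails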

4.3 HTTP Method Tampering Results

In order to see if it is possible to override HTTP methods, the HTTP requests were
tampered with on different pages and the results were recorded. The different
methods were tested with and without cookies to see if there were any differences
in the responses. They were tested on both HTTP/2 and HTTP/1.1 and the results
were recorded as shown in Table 6. Most of the tests resulted in either “406 Not
Acceptable” or “405 Method Not Allowed”. The error response “405 Method Not
Allowed” indicates that the server recognizes the request method, but the target
resource does not support it. “406 Not Acceptable” shows that the server could
not produce a response matching the criteria given in the request. No sensitive
information was therefore accessed when tampering with the HTTP methods;
the website was able to block the unacceptable requests. When testing POST on
the user’s personal page, to change the username, a server error response “500
Internal Server Error” was received in the HTTP response.

4.4 Directory Traversal Results

A total of 175 different requests were sent to the web application. Table 7 presents
the different status codes that were received from the server, and how many times
each were received.

None of the 6 requests that got the “500 Internal Server Error” response displayed
any information about the background system. The 1 path that resulted in “200
OK” did not display anything out of the ordinary either; the main page was
shown.

4.5 Authentication and Insecure Direct Object References
(IDOR) Results

To access the test environment, the default path to the Episerver login page, i.e.
/episerver, and a password were needed. It was also possible to access the admin
page without the need for MFA. This means it was possible to access the test
environment and the admin page solely by knowing a valid username and a
password, which is a vulnerability as no additional login authentication is done.
Attempting to access an invoice file connected to a particular user, with no more
information than the URL, proved to be successful. By copying the URL to the
invoice file and entering it into another web browser, it was possible to open the
invoice file, which contains the user’s name, address and the amount they have
to pay. This was possible without a session cookie or any other type of
verification that the user accessing the file was the one who owns it.

4.6 HSTS Results

The HSTS header was missing in the HTTP response when retrieving the main
page of the website and the user account page. This indicates that the user and
the web server could potentially have unencrypted communication, i.e. there is
no requirement from the web application’s side to send information only over an
encrypted channel.

4.7 DREAD Analysis

The 4 possible threats that were found in this study were assessed using the
DREAD model. Table 8 shows the score for each of the threats in each category
of DREAD. All of them got a total score in the medium risk level, except for the
vulnerability regarding authentication, which is placed in the high risk level.
Considering there is no need for additional authentication when signing in to the
test environment or admin page of the web application, the priority should be to
fix this issue, as it has the potential to do the most damage, affects every user and
could be an easy target to attack.
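A DREAD total is simply the sum of the five category scores. As a small sketch of the scoring step, where the 1-10 scale and the risk thresholds are assumptions (DREAD variants differ on both) and the inputs are not the study’s actual Table 8 values:

```python
# Sketch of DREAD scoring. Scale (1-10 per category) and risk thresholds
# are assumptions; DREAD variants differ and the study's own cut-offs
# are not given here.
def dread_risk(damage, reproducibility, exploitability, affected, discoverability):
    total = damage + reproducibility + exploitability + affected + discoverability
    if total >= 40:
        level = "high"
    elif total >= 25:
        level = "medium"
    else:
        level = "low"
    return total, level

# Hypothetical scores for a missing-MFA style threat.
print(dread_risk(8, 9, 8, 9, 7))  # (41, 'high')
```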

5 Discussion
The penetration methods that were used in this study were not able to access
any pages or files, or tamper with the responses, which could mean that the
application has been built in a secure way. For example, tampering with the
HTTP requests did not result in any sensitive information being disclosed, nor
was it possible to perform SQL injection or XSS attacks. It is worth noting that
the penetration tests were done using basic scripts and query snippets, as well as
common directories, which does not cover the full spectrum of attacks that could
be performed. There are several other scripts, not included in the scope of this
study, that could produce different results.

Testing the web application for IDOR vulnerabilities proved successful. When
trying to access the invoice of a user, without being signed in as that user, we
managed to open the file in a different web browser using the URL to the file.
This shows that IDOR is present to some extent in the web application. It is,
however, important to note that in order to access the invoice, the user ID and
the invoice ID have to be known. It is possible to find an invoice by iterating over
several thousand user IDs and invoice IDs, but it would be a time-consuming
process. Furthermore, the invoice does not contain any sensitive information
about the user, meaning that it would probably not be of much use to someone
with malicious intent. Nonetheless, a recommendation is to require a session
cookie, so that requests for a user’s data, including access to the user’s files, are
only accepted from that specific user.
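The recommended ownership check can be sketched as follows. The data structures and IDs are made up for illustration; in the real application the mapping would live in the backend’s session and invoice storage.

```python
# Sketch of the recommended fix: tie each invoice to an owner and reject
# requests whose session does not belong to that owner. IDs are invented.
invoice_owner = {"inv-1001": "user-42"}  # invoice id -> owning user
sessions = {"sess-abc": "user-42"}       # session cookie -> signed-in user

def can_access_invoice(session_cookie, invoice_id):
    user = sessions.get(session_cookie)
    return user is not None and invoice_owner.get(invoice_id) == user

print(can_access_invoice("sess-abc", "inv-1001"))  # True: owner, valid session
print(can_access_invoice(None, "inv-1001"))        # False: bare URL access
```

With such a check in place, knowing the invoice URL alone would no longer be enough to open the file.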

When accessing the test environment, no extra authentication was required,
which could be a possible security risk. When the only requirement for a login
is a username-password combination, the password could potentially be guessed
easily [5]. Another important finding was that it was possible to access the
production environment with the same password, and even in this case no extra
authentication was required. Considering the production environment directly
affects the website that any user can see, a hacker could potentially cause
damage and affect several users before the issue is noticed. Adding Multi-Factor
Authentication (MFA) to access the test and production environments would
help reduce the risk of unauthorized access to the website, and should be
prioritized in the near future. With MFA, the user has to e.g. enter a PIN code or
use an authentication app, in addition to entering the username and password,
to prove that the one signing in really is the user [5]. Another way to increase
security is to add network restrictions so that logins are only allowed from the
company’s IP addresses.

Furthermore, the test environment was accessed using the default Episerver
path, i.e. /episerver. Since the default path is used, web crawlers that scan the
internet for default paths could find the login page and initiate login attempts.
One recommendation is to change the default path of the Episerver login page
to something completely different. This would reduce the risk of scanners and
web crawlers finding the login page.

Some of the penetration tests resulted in a “500 Internal Server Error” response,
which could potentially be problematic. Since HTTP status codes starting with 5
are server errors, the response from the web application could contain
information about how the backend is constructed [20]. As the HTTP response
can be seen by any user, this is a vulnerability: it might allow any user to see
which databases, tables or systems are used and how they interact with each
other. Since more information about the backend system might give insight into
potential vulnerabilities that can be targeted for exploitation, this type of HTTP
response code could be used to prepare an attack.

The HTTP status codes the web application responded with across the different
tests were 400 Bad Request, 403 Forbidden, 404 Not Found, 405 Method Not
Allowed and 406 Not Acceptable. A possible risk with these client error
responses is that some of them might expose more information about the
backend system than necessary. For example, 403 Forbidden tells a user that
they are not authorized to access whatever was requested, which could be
interpreted as there being more information to be found. Generally, it would be
better to reduce the number of status codes that are shown in the HTTP
response, e.g. to only use 404 Not Found, which is commonly used to hide
content from unauthorized users [20]. This way it would be harder for attackers
to find potential areas of interest in the web application.
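The recommendation of collapsing revealing client errors into a single 404 can be sketched as a small response filter. Which codes to collapse is a design choice for the application’s developers, not something prescribed by the study:

```python
# Sketch of normalizing revealing client errors to 404 Not Found, so error
# responses no longer hint at protected or existing resources. The chosen
# set of codes to collapse is an assumption.
def normalize_status(code):
    if code in (400, 403, 405, 406):
        return 404
    return code

print(normalize_status(403))  # 404: hides that the resource exists
print(normalize_status(200))  # 200: successful responses are untouched
```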

This study was done using black box testing, i.e. without any knowledge of the
backend code structure. Black box testing is somewhat inefficient, as there are
many different methods that could be tested, but it is not known in advance
whether they would work. Time could therefore be spent on testing methods
even though the web application has already implemented security measures to
prevent them from succeeding. Having access to the code structure would have
made it possible to perform specialized penetration testing, by first auditing the
code and then choosing relevant methods to see how the web application would
react. Furthermore, potential vulnerabilities could be easier to find when
auditing the code, since it is possible to see directly how different situations are
handled.

It was also found that the HSTS header was not used in the web application. The
issue with this is that communication could potentially be unencrypted, as the
header informs the browser that the connection between client and server should
always be over HTTPS [28]. By not including it, it could be possible to connect
to the website over HTTP, which in turn would allow someone reading the traffic
to see the HTTP requests and responses that are sent, as well as their content. It
is worth noting that using the HSTS header does not automatically mean that
the connection will always be over HTTPS. The HSTS header is valid for a
certain number of seconds, which is specified by the developers. Unless the
browser updates the HSTS validity time, the header might expire and thus allow
connection to the website over HTTP [20]. Furthermore, to make sure the
connection is always secure, the preload directive of HSTS should be used, which
makes sure the browser will never connect to the website over HTTP [20].
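Checking a header value for the properties discussed above, a max-age directive and the preload directive, can be sketched as follows. The example value is a commonly recommended one-year policy, not a value taken from the tested application:

```python
# Sketch of parsing a Strict-Transport-Security header value and checking
# for max-age and the preload directive. The sample value is illustrative.
def parse_hsts(value):
    directives = [d.strip().lower() for d in value.split(";")]
    max_age = next((d for d in directives if d.startswith("max-age=")), None)
    return {
        "max_age": int(max_age.split("=")[1]) if max_age else None,
        "preload": "preload" in directives,
    }

print(parse_hsts("max-age=31536000; includeSubDomains; preload"))
```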

The DREAD analysis showed that the vulnerabilities found in the web
application could all be placed in the medium or high risk level. This is because
the threats could have a severe impact on the users if discovered and exploited.
This is especially true for the lack of MFA to make sure no unauthorized person
can access the admin pages of the website. It is worth mentioning that the
DREAD analysis has its drawbacks, as the scoring system is often subjective and
not very detailed [7]. For example, the category reproducibility asks if it is
possible to repeat the attack, which it most likely is unless the web application
has a one-time feature. Furthermore, discoverability deals with how likely it is
that a hacker will find the specified vulnerability. This is rather hard to
determine, as it depends on the hacker and on the perception of the person
setting the score in this category.

In this project a single website, based on C#, .NET and Episerver, was tested for
vulnerabilities. Episerver is used by more than 7000 different websites [6].
Considering the large number of web applications using this web content
management system, the same vulnerabilities could exist in some of these as
well. Also, as websites are built by developers, the same or similar mistakes could
occur in other websites due to human error. The tests that were done in this
project could therefore be applicable in finding similar vulnerabilities in other
websites.

6 Conclusions
This study aimed to investigate the security of websites that are built upon
Episerver and C# .NET, by conducting penetration tests as well as auditing
if certain security measures have been implemented. All in all, the target
web application did not have any major security issues and none of the
penetration tests (XSS, SQL injection, HTTP Method Tampering and Directory
Traversal) succeeded in finding any severe vulnerabilities. However, the following
observations were made in the web application that could be potential risks:

• Many different HTTP status codes were returned, which could reveal more
information about the website than necessary to a user.

• No extra authentication was needed to access the admin pages of the test
environment and the production environment, besides a username and
password.

• Invoice files for a customer could be accessed by anyone who has the URL
to the file.

• The browser could potentially connect to the web application over an
unencrypted channel, as the HSTS header was not set.

A proposal to handle these weaknesses in the web application is to implement
Multi-Factor Authentication when signing in to the admin pages of the test and
production environments, and to review the HTTP status codes, preferably using
only a standard status code such as 404 Not Found. Furthermore, the HSTS
header with the preload directive could be added to make sure the connection to
the website is always encrypted, and verification cookies could be required to
prevent unauthorized access to a user’s invoice files.

6.1 Future Work

Further investigation could be done by conducting other types of penetration
tests, including different variations of XSS, SQL injection, HTTP method
tampering and Directory Traversal, on several other websites with the same
structure. File Inclusion should also be tested, as the chosen web application
allows a user to upload files, which could potentially contain harmful scripts that
could seriously damage the website or let viruses into the company’s computers.
It is also suggested that an audit of the backend code be done before performing
penetration tests, as it will provide insight into potential vulnerabilities that
might exist and where to find them.

References
[1] Administrator. Directory Traversal Cheat Sheet. URL: https://pentestlab.blog/2012/06/29/directory-traversal-cheat-sheet/.

[2] Andersson, M. “Hackade Coop – Kräver nu 598 miljoner kronor”. SVT Nyheter, 05-Jul-2021. URL: https://www.svt.se/nyheter/nyhetstecken/hackade-coop-kraver-nu-598-miljoner-kronor.

[3] Baloch, R. Ethical Hacking and Penetration Testing Guide. London: CRC Press, 2017.

[4] Bayuk, J. L. et al. Cyber Security Policy Guidebook. Hoboken, New Jersey: Wiley, 2012.

[5] Bromiley, Matt. Bye Bye Passwords: New Ways to Authenticate. Microsoft, 2019. URL: https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE3y9UJ.

[6] BuiltWith. EPiServer Usage Statistics. 2022. URL: https://trends.builtwith.com/cms/EPiServer.

[7] Conklin, Larry and Drake, Victoria. Threat Modeling. OWASP, 2021. URL: https://owasp.org/www-community/Threat_Modeling_Process#subjective-model-dread.

[8] Dutta, N. et al. Cyber Security: Issues and Current Trends. Vol. 995. Singapore: Springer, 2022.

[9] Fall, K. R. and Stevens, W. R. TCP/IP Illustrated, Volume 1: The Protocols. Addison-Wesley, 2011.

[10] Farsole, A. A., Kashikar, A. G., and Zunzunwala, A. “Ethical Hacking”. In: International Journal of Computer Applications 1.10 (2010), pp. 12–20.

[11] Forshaw, J. Attacking Network Protocols: A Hacker’s Guide to Capture, Analysis, and Exploitation. San Francisco, California: No Starch Press, 2018.

[12] Fox, R. and Hao, W. Internet Infrastructure: Networking, Web Services, and Cloud Computing. 1st edition. Boca Raton: CRC Press, 2017. URL: https://doi-org.focus.lib.kth.se/10.1201/9781315175577.

[13] Ghedini, A. and Lalkaka, R. HTTP/3: the past, the present, and the future. Sept. 2019. URL: https://blog.cloudflare.com/http3-the-past-present-and-future/.

[14] Grigorik, Ilya. HTTP Protocols. 1st edition. 2017.

[15] Grimes, R. A. Hacking the Hacker. 1st edition. Indianapolis: John Wiley and Sons, Inc., 2017.

[16] Guzman, A. and Gupta, A. IoT Penetration Testing Cookbook: Identify Vulnerabilities and Secure Your Smart Devices. Birmingham, UK: Packt Publishing, 2017.

[17] Halton, W. et al. Penetration Testing: A Survival Guide. Packt Publishing, 2017.

[18] HTTP header - MDN Web Docs Glossary: Definitions of web-related terms. URL: https://developer.mozilla.org/en-US/docs/Glossary/HTTP_header.

[19] HTTP messages - HTTP: MDN. URL: https://developer.mozilla.org/en-US/docs/Web/HTTP/Messages.

[20] HTTP response status codes - HTTP: MDN. URL: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status.

[21] Jarmakiewicz, J., Parobczak, K., and Maślanka, K. “Cybersecurity Protection for Power Grid Control Infrastructures”. In: International Journal of Critical Infrastructure Protection (Sept. 2017), pp. 20–33.

[22] Khawaja, G. Kali Linux Penetration Testing Bible. Hoboken, New Jersey: Wiley, 2021.

[23] Messier, R. Learning Kali Linux. O’Reilly Media, 2018.

[24] OWASP. Testing Guide 4.0. 2014. URL: https://owasp.org/www-project-web-security-testing-guide/assets/archive/OWASP_Testing_Guide_v4.pdf.

[25] Pollard, B. HTTP/2 in Action. Shelter Island, New York: Manning Publications, 2019.

[26] PortSwigger. Configuring Firefox to work with Burp. 2022. URL: https://portswigger.net/burp/documentation/desktop/external-browser-config/browser-config-firefox.

[27] Smith, C. The Car Hacker’s Handbook. San Francisco: No Starch Press, 2016.

[28] Strict-Transport-Security - HTTP: MDN. URL: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security.

[29] Using HTTP cookies - HTTP: MDN. URL: https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies.

[30] VirtualBox, Oracle VM. “Why is Virtualization Useful?”. Chapter 1: First Steps. URL: https://www.virtualbox.org/manual/ch01.html#virt-why-useful.

[31] Warner, M. “Cybersecurity: A Pre-history”. In: Intelligence and National Security 27.5 (Oct. 2012), pp. 781–799.

[32] Wear, S. Burp Suite Cookbook: Practical Recipes to Help You Master Web Penetration Testing with Burp Suite. Packt Publishing, 2018.
Appendix A

Table A: Directory paths tested (left column) and the web application’s response
(right column). The directory paths were taken from the directory traversal cheat
sheet at [1].

Tested directory paths	HTTP Status Code
/etc/master.passwd 404
/master.passwd 404
etc/passwd 403
etc/shadow%00 400
/etc/passwd 403
/etc/passwd%00 400
../etc/passwd 400
../etc/passwd%00 400
../../etc/passwd 400
../../etc/passwd%00 400
../../../etc/passwd 400
../../../etc/passwd%00 400
../../../../etc/passwd 400
../../../../etc/passwd%00 400
../../../../../etc/passwd 400
../../../../../etc/passwd%00 400
../../../../../../etc/passwd 400
../../../../../../etc/passwd%00 400
../../../../../../../etc/passwd 400
../../../../../../../etc/passwd%00 400
../../../../../../../../etc/passwd 400
../../../../../../../../etc/passwd%00 400
../../../../../../../../../etc/passwd 400
../../../../../../../../../etc/passwd%00 400
../../../../../../../../../../etc/passwd 400

../../../../../../../../../../etc/passwd%00 400
../../../../../../../../../../../etc/passwd 400
../../../../../../../../../../../etc/passwd%00 400
../../../../../../../../../../../../etc/passwd 400
../../../../../../../../../../../../etc/passwd%00 400
../../../../../../../../../../../../../etc/passwd 400
../../../../../../../../../../../../../etc/passwd%00 400
../../../../../../../../../../../../../../etc/passwd 400
../../../../../../../../../../../../../../etc/passwd%00 400
../../../../../../../../../../../../../../../etc/passwd 400
../../../../../../../../../../../../../../../etc/passwd%00 400
../../../../../../../../../../../../../../../../etc/passwd 400
../../../../../../../../../../../../../../../../etc/passwd%00 400
../../../../../../../../../../../../../../../../../etc/passwd 400
../../../../../../../../../../../../../../../../../etc/passwd%00 400
../../../../../../../../../../../../../../../../../../etc/passwd 400
../../../../../../../../../../../../../../../../../../etc/passwd%00 400
../../../../../../../../../../../../../../../../../../../etc/passwd 400
../../../../../../../../../../../../../../../../../../../etc/passwd%00 400
../../../../../../../../../../../../../../../../../../../../etc/passwd 400
../../../../../../../../../../../../../../../../../../../../etc/passwd%00 400
../../../../../../../../../../../../../../../../../../../../../etc/passwd 400
../../../../../../../../../../../../../../../../../../../../../etc/passwd%00 400
../../../../../../../../../../../../../../../../../../../../../../etc/passwd 400
../../../../../../../../../../../../../../../../../../../../../../etc/passwd%00 400
../../../../../../../../../../../../../../../../../../../../../../etc/shadow%00 400
../../../../../../etc/passwd&=%3C%3C%3C%3C 400
../../../administrator/inbox 400
../../../../../../../dev 400
.htpasswd 403
passwd 404

passwd.dat 404
pass.dat 404
.htpasswd 403
/.htpasswd 403
../.htpasswd 400
.passwd 403
/.passwd 403
../.passwd 400
.pass 404
../.pass 400
members/.htpasswd 403
member/.htpasswd 403
user/.htpasswd 403
users/.htpasswd 403
root/.htpasswd 403
db.php 404
data.php 404
database.asp 404
database.js 404
database.php 404
dbase.phpa 404
admin/access_log 403
../users.db.php 400
users.db.php 404
/core/config.php 404
config.php 404
config.js 404
../config.js 400
config.asp 404
../config.asp 400
_config.php 404

../_config.php 400
../_config.php%00 400
../config.php 400
config.inc.php 404
../config.inc.php 400
/config.asp 404
../config.asp 400
/../../../../pswd 400
/admin/install.php 404
../install.php 400
install.php 404
..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2Fetc%2Fpasswd 400
..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2Fetc%2Fshadow 400
..%2F..%2F..%2F%2F..%2F..%2Fetc/passwd 400
..%2F..%2F..%2F%2F..%2F..%2Fetc/shadow 400
..%2F..%2F..%2F%2F..%2F..%2F%2Fvar%2Fnamed 400
..%5c..%5c..%5c..%5c..%5c..%5c..%5c..%5c..%5c..%5c/boot.ini 403
/%c0%ae%c0%ae/%c0%ae%c0%ae/%c0%ae%c0%ae/etc/passwd 403
/..\..\..\..\..\..\winnt\win.ini 403
../../windows/win.ini 400
..//..//..//..//..//boot.ini 400
..\../..\../boot.ini 403
..\../..\../..\../..\../boot.ini 403
\…..\\\…..\\\…..\\\ 200
[HTML]FFFFFF=3D“/..”.“%2f.. 404
d:\AppServ\MySQL 500
c:\AppServ\MySQL 500
c:WINDOWS/system32/ 500
/C:\ProgramFiles\ 500
/D:\ProgramFiles\ 500

/C:/inetpub/ftproot/ 500
/boot/grub/grub.conf 404
/proc/interrupts 404
/proc/cpuinfo 404
/proc/meminfo 404
../apache/logs/error.log 400
../apache/logs/access.log 400
../../apache/logs/error.log 400
../../apache/logs/access.log 400
../../../apache/logs/error.log 400
../../../apache/logs/access.log 400
../../../../../../../etc/httpd/logs/acces_log 400
../../../../../../../etc/httpd/logs/acces.log 400
../../../../../../../etc/httpd/logs/error_log 400
../../../../../../../etc/httpd/logs/error.log 400
../../../../../../../var/www/logs/access_log 400
../../../../../../../var/www/logs/access.log 400
../../../../../../../usr/local/apache/logs/access_log 400
../../../../../../../usr/local/apache/logs/access.log 400
../../../../../../../var/log/apache/access_log 400
../../../../../../../var/log/apache2/access_log 400
../../../../../../../var/log/apache/access.log 400
../../../../../../../var/log/apache2/access.log 400
../../../../../../../var/log/access_log 400
../../../../../../../var/log/access.log 400
../../../../../../../var/www/logs/error_log 400
../../../../../../../var/www/logs/error.log 400
../../../../../../../usr/local/apache/logs/error_log 400
../../../../../../../usr/local/apache/logs/error.log 400
../../../../../../../var/log/apache/error_log 400
../../../../../../../var/log/apache2/error_log 400

../../../../../../../var/log/apache/error.log 400
../../../../../../../var/log/apache2/error.log 400
../../../../../../../var/log/error_log 400
../../../../../../../var/log/error.log 400
/etc/init.d/apache 404
/etc/init.d/apache2 404
/etc/httpd/httpd.conf 404
/etc/apache/apache.conf 404
/etc/apache/httpd.conf 404
/etc/apache2/apache2.conf 404
/etc/apache2/httpd.conf 404
/usr/local/apache2/conf/httpd.conf 404
/usr/local/apache/conf/httpd.conf 404
/opt/apache/conf/httpd.conf 404
/home/apache/httpd.conf 404
/home/apache/conf/httpd.conf 404
/etc/apache2/sites-available/default 404
/etc/apache2/vhosts.d/default_vhost.include 404
/etc/passwd 403
/etc/shadow 403
/etc/group 403
/etc/security/group 404
/etc/security/passwd 404
/etc/security/user 404
/etc/security/environ 404
/etc/security/limits 404
/usr/lib/security/mkuser.default 404

TRITA-EECS-EX-2022:443

www.kth.se
