They are
stored to help system administrators track what happened and
troubleshoot problems.
The steps involved in Event log analysis are:
1. Collect the relevant event logs: Determine which event logs are
relevant to your analysis and make sure you have access to them.
You may need to gather logs from multiple sources, such as
servers, network devices, and application logs.
2. Filter and organize the event logs: Use filters to narrow down the
event logs and make them easier to work with. You can filter by
event log type, severity level, source, and time range, among
other things.
3. Analyse the event logs: Examine the event logs to identify
patterns, trends, and anomalies. Look for events that are related
to the issue you are trying to resolve or the metric you are trying
to track.
4. Document and report your findings: Document your analysis in a
clear and concise manner, including any actions taken as a result
of the analysis. If necessary, generate reports or charts to visualize
your findings.
A tool that is commonly used for Windows event log analysis is “Event Viewer”.
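Event Viewer is the usual choice on Windows; on a Linux host, the filtering step (step 2 above) can be done from the command line with journalctl. The following is only a minimal sketch, and the time ranges and service name are purely illustrative:
# Kernel messages of priority "err" or worse since yesterday
journalctl -k -p err --since yesterday
# Events for a specific service within a time window
journalctl -u sshd --since "2023-01-01" --until "2023-01-02"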
Ques1:
RAID
RAID, or “Redundant Array of Independent Disks”, is a technique that uses a combination of multiple disks instead of a single disk for increased performance, data redundancy, etc.
Key Features that RAID offers:
1) Mirroring:
The RAID feature that stores the same block of information on different disks. This helps reduce downtime, as the data can be accessed from the second disk if the first disk does not respond.
2) Striping:
This technique splits a file into blocks and stores those blocks across multiple disks. Read and write operations are fast on a striped array, since all the drives can be accessed at once because the data exists in different places in the array.
3) Parity Bit:
A parity bit is a check bit added to a block of data for error detection; it validates the integrity of the data. When a single parity bit is used, it is called single parity, whereas when two bits are used, it is called double parity.
There are different types of RAID (a minimal mdadm example follows this list):
a) RAID 0 (Striping)
b) RAID 1 (Mirroring)
c) RAID 2, 3, 4, 5 (Striping with single parity)
d) RAID 6 (Striping with double parity)
e) RAID 10 (Striping and Mirroring)
f) JBOD (Just a Bunch Of Disks)
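As an illustration of how these levels are set up in practice, below is a minimal sketch of creating a RAID 1 (mirrored) array on Linux with mdadm. The device names /dev/sdb1 and /dev/sdc1 are assumptions, and mdadm may need to be installed separately:
# Create a two-disk RAID 1 (mirror) array exposed as /dev/md0
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# Watch the array's status and any rebuild progress
cat /proc/mdstat
# Show detailed information about the array
sudo mdadm --detail /dev/md0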
Ques2:
JOURNALING:
Journaling is a technique used to protect against data loss in the event
of a power failure or system crash. When a file system is journaled,
changes to the file system are first written to a journal, or log, before
being applied to the actual file system. If a power failure or system
crash occurs before the changes are applied to the file system, the
journal can be used to restore the file system to a consistent state.
There are three journaling modes (an example of selecting a mode at mount time follows this list):
• Writeback mode:
Only file system metadata is journaled; data blocks are written directly to their fixed locations. So even if the system fails before the actual data block is written to disk, the metadata log will still be present in the journal.
• Ordered journaling mode:
In this mode, too, only metadata is journaled, but the journal entries are made only after the data block has been written to disk. So both the data and the metadata are guaranteed to be consistent after recovery.
• Full data journaling mode:
This mode writes both file data and metadata to the journal and then copies the information to the actual file system area on the disk. Full data journaling is certainly the safest method, but it is the costliest of the three in terms of performance.
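On ext3/ext4 the journaling mode is selected with the data= mount option. The lines below are a minimal sketch, with one mount command per mode shown as alternatives; the device /dev/sdb1 and mount point /mnt/data are assumptions:
# Full data journaling (safest, slowest)
sudo mount -o data=journal /dev/sdb1 /mnt/data
# Ordered mode (the usual default)
sudo mount -o data=ordered /dev/sdb1 /mnt/data
# Writeback mode (metadata only)
sudo mount -o data=writeback /dev/sdb1 /mnt/data
# Confirm which mode is in effect
mount | grep /mnt/data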
Ques4:
There are several different types of file systems in the EXT family,
including:
1. ext2 (Second Extended File System): This is an early version of the EXT file system family, developed for Linux. It is a stable and reliable file system that is still in use today, although it is not as commonly used as the newer versions.
2. ext3 (Third Extended File System): This is an updated version of
the EXT file system that was introduced in 2001. It adds journaling
functionality to the file system to improve data integrity and
reliability.
3. ext4 (Fourth Extended File System): This is the latest version of the EXT file system, introduced in 2008. It includes a number of improvements, such as support for larger files and volumes and better performance. It is currently the most widely used file system for Linux.
The table below gives the basic differences between ext2, ext3 and ext4:
2. Using the ‘ls’ command: the ls command lists files. ‘-a’ lists all files including hidden files, ‘-l’ enables the long listing format, and ‘--time-style=+%D’ shows the date in mm/dd/yy format.
ls -al --time-style=+%D
3. Using the ‘find’ command: the find command searches for files. ‘.’ specifies the current directory as the starting point, ‘-maxdepth 1’ restricts the search to that directory level, and ‘-newerat’ matches files whose access time is newer than the time specified.
find . -maxdepth 1 -newerat "date"
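For example, assuming we want files in the current directory whose access time is newer than 1 January 2023 (the date is purely illustrative):
# Files in this directory accessed after the given timestamp
find . -maxdepth 1 -newerat "2023-01-01 00:00:00"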
Ques8:
LINUX LOGS:
Log files are a set of records that Linux maintains for the administrators
to keep track of important events. They contain messages about the
server, including the kernel, services and applications running on it.
Logs to monitor for forensic purposes are:
• /var/log/messages - Shows general messages and info regarding the system; essentially a general log of activity across the entire system. This is the first log file that Linux administrators should check if something goes wrong.
• /var/log/auth.log - Keeps authentication logs for both successful and failed logins, as well as other authentication processes. If you’re looking for anything involving the user authorization mechanism, you can find it in this log file (a few grep sketches follow this list).
• /var/log/boot.log – Contains information about start-up messages
and boot info. The system initialization script,
/etc/init.d/bootmisc.sh, sends all bootup messages to this log file.
This is the repository of booting related information and messages
logged during system startup process.
• /var/log/kern.log – Keeps kernel logs and warning info; kernel logs are particularly helpful when troubleshooting a custom-built kernel.
• /var/log/faillog - records info on failed logins. Hence, handy for
examining potential security breaches like login credential hacks
and brute-force attacks.
• /var/log/maillog or /var/log/mail.log - All mail server related logs are stored here. Find information about postfix, smtpd, MailScanner, SpamAssassin or any other email related services running on the mail server.
• /var/log/yum.log - It contains the information that is logged when
a new package is installed using the yum command. Check the
messages logged here to see whether a package was correctly
installed or not.
• /var/log/mysqld.log or /var/log/mysql.log - As the name suggests, this is the MySQL log file. All debug, failure and success messages related to the [mysqld] and [mysqld_safe] daemons are logged to this file.
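As a small illustration of how these logs might be queried during an investigation (the paths follow Debian/Ubuntu conventions and may differ on other distributions):
# Last 20 failed logins recorded in the auth log
grep "Failed password" /var/log/auth.log | tail -n 20
# Kernel warnings and errors
grep -iE "warn|error" /var/log/kern.log
# Recently installed packages
tail -n 20 /var/log/yum.log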
Ques10:
TIMELINE ANALYSIS:
Timeline analysis is the process of collecting and analysing event data
to determine when and what has occurred on a filesystem for forensic
purposes. Timeline analysis is an important part of digital forensics
because it allows investigators to piece together the events that
occurred on a computer or other digital device. It helps to establish a
chronology of activity and can provide valuable insights into how a
device was used and what actions were taken on it.
We can do timeline analysis using Autopsy. The following steps should be followed (a command-line alternative using The Sleuth Kit is sketched after these steps):
1. Create a new case: To create a new case, click on the "File" menu
and select "New Case."
2. Add a data source: To add a data source, click on the "Add Data
Source" button and select the device or media that you want to
include in the case.
3. Extract metadata: To extract metadata, click on the "Extract"
button and select the "Extract Meta Data" module.
4. Create a timeline: Once the metadata has been extracted, you can
use the "Timeline" module to create a timeline of activity on the
device. Click on the "Analyse" button and select the "Timeline"
module. Autopsy will generate a timeline that shows the creation
and modification dates of the files on the device, as well as other
relevant information.
5. Analyse the timeline: You can use the timeline to identify patterns
and anomalies in the data and to understand the sequence of
events that occurred on the device.
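Autopsy is a graphical front end for The Sleuth Kit, so an equivalent timeline can also be produced on the command line. A minimal sketch, assuming a disk image named image.dd (the file names are illustrative):
# Collect file metadata (MACB times) from the image into a body file
fls -r -m / image.dd > bodyfile.txt
# Convert the body file into a sorted, comma-delimited timeline
mactime -b bodyfile.txt -d > timeline.csv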
Ques11:
In Linux, everything is a file. Device files are normally located in the /dev/ directory and are created dynamically by the udev daemon (systemd-udevd). Device files can be of two types: block devices (accessed in fixed-size chunks) and character devices (accessed sequentially, one byte at a time). The /dev/ directory is a pseudo filesystem that the running kernel provides in memory.
Here are some commands that can be used to gather information about peripheral devices for investigatory purposes (a combined usage sketch follows the list):
1. lsusb: This command lists all USB devices that are currently
connected to the computer. It provides information about the
device's vendor and product IDs, as well as its device class and
serial number.
2. lspci: This command lists all PCI devices that are currently
connected to the computer. It provides information about the
device's vendor and product IDs, as well as its device class and bus
location.
3. lshw: This command lists all hardware devices that are currently
connected to the computer. It provides detailed information
about each device, including its vendor and product IDs, device
class, and bus location.
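A combined usage sketch of the three commands (lshw usually requires root privileges and may need to be installed separately):
# Verbose details for every connected USB device
lsusb -v
# PCI devices with verbose driver and capability information
lspci -v
# Compact one-line-per-device hardware summary
sudo lshw -short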
Ques14:
Following are the filesystem artifacts you can examine (a few inspection commands are sketched after this list):
• Filesystem metadata:
▪ Label or volume name specified by the system owner
▪ Unique ID (UUID/GUID)
▪ Timestamps
▪ Size and number of blocks
▪ Number of mounts and last mount point
▪ FS features and configurations
• Metadata of individual files:
▪ POSIX filetype
▪ Permissions and ownerships
▪ Multiple timestamps (MACB)
▪ Sizes and blocks
▪ Flags and attributes
• The storage content forensic artifacts:
▪ Sector
▪ Block
▪ Extent: A group of consecutive filesystem blocks
▪ Allocated blocks
▪ Unallocated blocks: possibly containing data from deleted
files
• Additional slack where the data could exist:
▪ Volume slack (Area between end of filesystem and end of
partition)
▪ File slack (Area between end of file and end of block)
▪ RAM and memory slack (Area between end of file and end of
sector)
▪ Inter-partition gaps (possibly the remains of a deleted partition)
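A few commands that expose these artifacts on an ext filesystem, as a hedged sketch (the device /dev/sdb1 and the file path are assumptions):
# Filesystem metadata: label, UUID, block counts, mount count, features
sudo tune2fs -l /dev/sdb1
# Per-file metadata: type, permissions, ownership, timestamps, blocks
stat /mnt/evidence/report.doc
# UUID and label of every block device
blkid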
Ques15:
The ps command has a lot of useful options and is a good tool for seeing what is running on a Mac at the time evidence is collected.
The command ‘ps -ao
user,pid,ppid,%cpu,%mem,start,time,command’ can give a lot of
information about all running processes. The -a flag gives us
information about processes running under all users. -o allows us to
specify keywords with a comma separated list. user is the login name
of user, pid is process ID, ppid is the process ID of the parent process.
%cpu is cpu usage, %mem is memory usage, start is the start time of
the process, time is the amount of CPU time used, command gives the
command called along with its arguments.
If a process's PPID is something other than 1 (launchd on macOS), it was spawned by another process rather than directly by launchd; a user process creating its own child processes might be something to investigate further.
The $ lsappinfo list command also gives useful information about all
running processes, and can be used along with ps.
The $ launchctl print user/<UID> command can be used to print
information about a specific user. Simply replace <UID> with the User
ID of the process of interest. launchctl is used to load or unload
daemons in MacOS. This command can give the current status of the
process, last exit status of the process, origin on disk, etc.
The investigator must be aware that these programs may have been
infected by rootkits, and therefore their outputs may not be
completely reliable.
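Putting the commands together, a minimal collection sketch for a macOS host (the UID 501 is only an example; replace it with the UID of interest):
# Snapshot of all running processes with ownership, resource usage and start time
ps -ao user,pid,ppid,%cpu,%mem,start,time,command
# Application-level view of running processes
lsappinfo list
# launchd's view of the user domain for UID 501
launchctl print user/501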
Ques16:
There are several inbuilt tools in Linux that can be used to create an
image of a device disk: