Qb Delhi Campus


Logs are records of events that happened in the computer.

They are
stored to help system administrators track what happened and
troubleshoot problems.
The steps involved in Event log analysis are:
1. Collect the relevant event logs: Determine which event logs are
relevant to your analysis and make sure you have access to them.
You may need to gather logs from multiple sources, such as
servers, network devices, and application logs.
2. Filter and organize the event logs: Use filters to narrow down the
event logs and make them easier to work with. You can filter by
event log type, severity level, source, and time range, among
other things.
3. Analyse the event logs: Examine the event logs to identify
patterns, trends, and anomalies. Look for events that are related
to the issue you are trying to resolve or the metric you are trying
to track.
4. Document and report your findings: Document your analysis in a
clear and concise manner, including any actions taken as a result
of the analysis. If necessary, generate reports or charts to visualize
your findings.
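The filtering and analysis steps above can be sketched with standard shell tools (the log file and its entries are fabricated for illustration):

```shell
# A small sample log (fabricated entries).
cat > sample.log <<'EOF'
2024-01-10 09:12:01 INFO  sshd: session opened for user alice
2024-01-10 09:15:44 ERROR sshd: authentication failure for user bob
2024-01-10 09:16:02 ERROR sshd: authentication failure for user bob
2024-01-10 10:03:17 INFO  cron: job completed
EOF

# Step 2: filter by severity level and source.
grep 'ERROR' sample.log | grep 'sshd'

# Step 3: look for patterns, e.g. count error events per user.
grep 'ERROR' sample.log | awk '{print $NF}' | sort | uniq -c
```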
A tool commonly used for log analysis on Windows is “Event Viewer”.
Ques1:
RAID
RAID, or “Redundant Array of Independent Disks”, is a technique that
uses a combination of multiple disks instead of a single disk to provide
increased performance, data redundancy, or both.
Key Features that RAID offers:
1) Mirroring:
The feature of RAID that stores the same block of information on
different disks. This helps reduce downtime, as the data can
be accessed from the second disk if the first disk does not respond.
2) Striping:
This technique splits a file into blocks and stores those blocks across
multiple disks. Read and write operations are fast on a striped array,
since all the drives can be accessed at once as the data exists in
different places in the array.
3) Parity Bit:
A parity bit is a check bit added to a block of data for error
detection; it validates the integrity of the data. When a single parity
bit is used it is called single parity, and when 2 bits are used it is
called double parity.
There are different RAID levels:
a) RAID 0 (Striping)
b) RAID 1 (Mirroring)
c) RAID 2, 3, 4, 5 (Striping with single parity)
d) RAID 6 (Striping with double parity)
e) RAID 10 (Striping and Mirroring)
f) JBOD (Just a Bunch Of Disks)
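The single-parity idea behind RAID 5 can be illustrated with a minimal shell sketch (the byte values are hypothetical): the parity is the XOR of the data blocks, so any one lost block can be rebuilt by XOR-ing the surviving block with the parity.

```shell
# Two "data blocks", shown as single bytes (hypothetical values).
d1=$((0xA5))
d2=$((0x3C))

# Single parity is the XOR of the data blocks.
p=$(( d1 ^ d2 ))

# If d2 is lost, it can be rebuilt from d1 and the parity.
rebuilt=$(( d1 ^ p ))
printf 'parity=0x%02X rebuilt=0x%02X\n' "$p" "$rebuilt"
```

Here `rebuilt` equals the original `d2`, which is exactly how a degraded RAID 5 array serves reads for a failed disk.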
Ques2:
JOURNALING:
Journaling is a technique used to protect against data loss in the event
of a power failure or system crash. When a file system is journaled,
changes to the file system are first written to a journal, or log, before
being applied to the actual file system. If a power failure or system
crash occurs before the changes are applied to the file system, the
journal can be used to restore the file system to a consistent state.
There are 3 types of Journals:
• Writeback mode:
Only file system metadata is journaled; data blocks are written directly
to their fixed location, with no ordering guarantee between the two. After
a crash the metadata can be replayed from the journal, but the file data
itself may be stale or inconsistent.
• Ordered journaling mode:
In this mode, too, only metadata is journaled, but the journal entry is
written only after the data block has been written to the
disk. So, in this mode both the data and the metadata are guaranteed to be
consistent after recovery.
• Full data journaling mode:
It writes both file data and metadata to the journal and then copies
the information to the actual file system area on the disk. Full data
journaling is certainly the safest method, but it is the costliest of
the three in terms of performance.
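The write-ahead ordering that journaling relies on can be sketched as a toy shell example (file names are hypothetical; real journals record filesystem blocks, not text files):

```shell
# Toy write-ahead journal: record the intended change before applying it.
echo "old contents" > data.txt

# 1. Write the pending change to the journal first.
echo "SET data.txt=new contents" > journal.log

# 2. Apply the change to the "real" data.
echo "new contents" > data.txt

# 3. Mark the entry committed; on crash recovery, uncommitted entries
#    would be replayed (or discarded) to restore a consistent state.
echo "COMMIT" >> journal.log
```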
Ques4:
There are several different types of file systems in the EXT family,
including:
1. ext2 (Second Extended File System): This is an early member of
the EXT family, developed for Linux as a replacement for the
original ext file system. It is a stable and reliable file system that
is still in use today, although it is not as commonly used as the
newer versions.
2. ext3 (Third Extended File System): This is an updated version of
the EXT file system that was introduced in 2001. It adds journaling
functionality to the file system to improve data integrity and
reliability.
3. ext4 (Fourth Extended File System): This is the latest version of
the EXT file system, which was introduced in 2008. It includes a
number of improvements, such as support for larger file sizes and
faster file transfer speeds. It is currently the most widely used file
system for Linux.
The table below gives the basic differences between Ext2, Ext3 and Ext4:

Feature                   ext2    ext3    ext4
Journaling                No      Yes     Yes
Maximum file size         2 TB    2 TB    16 TB
Maximum filesystem size   32 TB   32 TB   1 EB
Ques5:
EXT4:
EXT4 (Fourth Extended File System) is a popular file system for Linux
systems. It is an evolution of the EXT3 file system, which was itself an
improvement over the EXT2 file system. EXT4 was introduced in 2008
and is currently the most widely used file system for Linux. EXT4 is a
stable and reliable file system that is well-suited for use on large Linux
systems.
Some of the main features of EXT4 are journaling and support for very
large file and volume sizes (files up to 16 TB in size and volumes up to
1 EB in size).

Main components defining the EXT4 filesystem are:


• Super Block: This is a special block at the beginning of the file
system that contains important information about the file system,
such as the size, layout, and status of the file system.
• Group Descriptors: These blocks contain information about the
layout of the file system, including the location of the inode tables
and block bitmaps.
• Inode Tables: These are tables that contain information about
each file and directory in the file system, including its name,
permissions, and location on the disk.
• Data Blocks: These are blocks that contain the actual data for the
files and directories stored in the file system.
• Block Bitmaps: These are bitmaps that track which blocks are in
use and which are available for allocation.
• Journal: The ext4 file system uses a journal to keep a log of
changes made to the file system. This can help to recover from file
system corruption and improve data integrity.
Ques7:
Commands to see recently accessed files are:
1. Using the ‘stat’ command: stat shows statistics of a
particular file, including the access time.
stat [File_Name]

2. Using the ‘ls’ command: ls lists files. ‘-a’ lists all files
including hidden files, ‘-l’ enables the long listing format, ‘-u’
makes ‘-l’ show the access time instead of the modification time, and
‘--time-style=+%D’ shows the date in mm/dd/yy format.
ls -alu --time-style=+%D

3. Using the ‘find’ command: find locates files. ‘.’ specifies the
current directory, ‘-maxdepth 1’ limits the search to that level, and
‘-newerat’ matches files whose access time is newer than the time
specified.
find . -maxdepth 1 -newerat "date"
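A minimal sketch tying the three commands together (GNU coreutils/findutils assumed; the file name and timestamps are hypothetical):

```shell
# Create a file and set a known access time.
touch demo.txt
touch -a -t 202401011200 demo.txt   # atime = 2024-01-01 12:00

# stat: %x prints the last access time.
stat -c '%x' demo.txt

# ls -u makes -l show the access time instead of the modification time.
ls -alu --time-style=+%D demo.txt

# find: files in this directory accessed after the given date.
find . -maxdepth 1 -newerat "2023-12-31" -name demo.txt
```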
Ques8:
LINUX LOGS:
Log files are a set of records that Linux maintains for the administrators
to keep track of important events. They contain messages about the
server, including the kernel, services and applications running on it.
Logs to monitor for forensic purposes are:
• /var/log/messages - Shows general messages and info regarding
the system. Basically, a data log of all activity throughout the
global system. This is the first log file that the Linux administrators
should check if something goes wrong.
• /var/log/auth.log - Keeps authentication logs for both successful and
failed logins, and other authentication processes. If you’re looking for
anything involving the user authorization mechanism, you can find
it in this log file.
• /var/log/boot.log – Contains information about start-up messages
and boot info. The system initialization script,
/etc/init.d/bootmisc.sh, sends all bootup messages to this log file.
This is the repository of booting related information and messages
logged during system startup process.
• /var/log/kern.log – Keeps kernel logs and warning info; kernel logs
are also helpful for troubleshooting problems with a custom-built
kernel.
• /var/log/faillog - records info on failed logins. Hence, handy for
examining potential security breaches like login credential hacks
and brute-force attacks.
• /var/log/maillog or /var/log/mail.log - All mail server related logs
are stored here. Find information about postfix, smtpd,
MailScanner, SpamAssassain or any other email related services
running on the mail server.
• /var/log/yum.log - It contains the information that is logged when
a new package is installed using the yum command. Check the
messages logged here to see whether a package was correctly
installed or not.
• /var/log/mysqld.log or /var/log/mysql.log - As the name suggests,
this is the MySQL log file. All debug, failure and success messages
related to the [mysqld] and [mysqld_safe] daemons are logged to
this file.
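As a sketch of how such logs are examined in practice, the fabricated auth-log lines below are scanned for failed logins and grouped by source IP, a quick brute-force indicator:

```shell
# Sample auth-log lines (fabricated for illustration).
cat > auth.log <<'EOF'
Jan 10 09:15:44 host sshd[912]: Failed password for invalid user admin from 203.0.113.7 port 52314 ssh2
Jan 10 09:15:50 host sshd[912]: Failed password for invalid user admin from 203.0.113.7 port 52316 ssh2
Jan 10 09:20:01 host sshd[915]: Accepted password for alice from 198.51.100.4 port 40022 ssh2
EOF

# Count failed attempts per source IP (IP is the 4th field from the end).
grep 'Failed password' auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn
```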
Ques10:
TIMELINE ANALYSIS:
Timeline analysis is the process of collecting and analysing event data
to determine when and what has occurred on a filesystem for forensic
purposes. Timeline analysis is an important part of digital forensics
because it allows investigators to piece together the events that
occurred on a computer or other digital device. It helps to establish a
chronology of activity and can provide valuable insights into how a
device was used and what actions were taken on it.
We can do timeline analysis using Autopsy. Following steps should be
followed to do the same:
1. Create a new case: To create a new case, click on the "File" menu
and select "New Case."
2. Add a data source: To add a data source, click on the "Add Data
Source" button and select the device or media that you want to
include in the case.
3. Extract metadata: To extract metadata, click on the "Extract"
button and select the "Extract Meta Data" module.
4. Create a timeline: Once the metadata has been extracted, you can
use the "Timeline" module to create a timeline of activity on the
device. Click on the "Analyse" button and select the "Timeline"
module. Autopsy will generate a timeline that shows the creation
and modification dates of the files on the device, as well as other
relevant information.
5. Analyse the timeline: You can use the timeline to identify patterns
and anomalies in the data and to understand the sequence of
events that occurred on the device.
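Autopsy builds its timeline graphically, but the underlying idea can be sketched on the command line (GNU findutils assumed; the files and timestamps are fabricated):

```shell
# Build a crude timeline: modification time + path, sorted oldest first.
mkdir -p evidence
touch -m -t 202401011000 evidence/a.txt
touch -m -t 202401021000 evidence/b.txt

find evidence -type f -printf '%TY-%Tm-%Td %TH:%TM %p\n' | sort
```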
Ques11:
In Linux, everything is a file. Device files are normally located in the
/dev/ directory and are created dynamically by the udev daemon
(systemd-udevd). Device files can be of 2 types: block devices
(accessed in fixed-size chunks) and character devices
(accessed sequentially, one byte at a time). The /dev/ directory is a
pseudo filesystem that a running kernel provides in memory.
Here are some commands that can be used to gather information
about peripheral devices for investigatory purposes:
1. lsusb: This command lists all USB devices that are currently
connected to the computer. It provides information about the
device's vendor and product IDs, as well as its device class and
serial number.
2. lspci: This command lists all PCI devices that are currently
connected to the computer. It provides information about the
device's vendor and product IDs, as well as its device class and bus
location.
3. lshw: This command lists all hardware devices that are currently
connected to the computer. It provides detailed information
about each device, including its vendor and product IDs, device
class, and bus location.
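Beyond these listing tools, the device files themselves reveal their type; a minimal check, using /dev/null (a character device present on any Linux system):

```shell
# The first character of `ls -l` output shows the device type:
# 'b' = block device, 'c' = character device.
ls -l /dev/null

# stat can report the file type directly.
stat -c '%F' /dev/null

# Shell test operators: -c (character device), -b (block device).
if [ -c /dev/null ]; then echo "/dev/null is a character device"; fi
```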
Ques14:
Following are the filesystem artifacts you can examine:
• Filesystem metadata:
▪ Label or volume name specified by the system owner
▪ Unique ID (UUID/GUID)
▪ Timestamps
▪ Size and number of blocks
▪ Number of mounts and last mount point
▪ FS features and configurations
• Particular files metadata:
▪ POSIX filetype
▪ Permissions and ownerships
▪ Multiple timestamps (MACB)
▪ sizes and blocks
▪ flags and attributes.
• The storage content forensic artifacts:
▪ Sector
▪ Block
▪ Extent: A group of consecutive filesystem blocks
▪ Allocated blocks
▪ Unallocated blocks: possibly containing data from deleted
files
• Additional slack where the data could exist:
▪ Volume slack (Area between end of filesystem and end of
partition)
▪ File slack (Area between end of file and end of block)
▪ RAM and memory slack (Area between end of file and end of
sector)
▪ Inter-partition gaps (Probably a deleted partition)
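File slack can be computed directly from the file size and the block size; a minimal sketch (the 4096-byte block size and the 10,000-byte file are assumptions for illustration):

```shell
# File slack = space between the end of a file and the end of its last block.
block_size=4096    # typical ext4 block size (assumption)
file_size=10000    # hypothetical file of 10,000 bytes

slack=$(( (block_size - file_size % block_size) % block_size ))
echo "file slack: $slack bytes"   # 4096 - (10000 % 4096) = 2288
```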
Ques15:
The ps command has a lot of useful options and is a tool to see what's
running on a Mac at the time of collection of evidence.
The command ‘ps -ao
user,pid,ppid,%cpu,%mem,start,time,command’ can give a lot of
information about all running processes. The -a flag gives us
information about processes running under all users. -o allows us to
specify keywords with a comma separated list. user is the login name
of user, pid is process ID, ppid is the process ID of the parent process.
%cpu is cpu usage, %mem is memory usage, start is the start time of
the process, time is the amount of CPU time used, command gives the
command called along with its arguments.
If the PPID is something other than 1, it indicates that a user process is
creating its own child processes. This might be something to
investigate further.
The $ lsappinfo list command also gives useful information about all
running processes, and can be used along with ps.
The $ launchctl print user/<UID> command can be used to print
information about a specific user. Simply replace <UID> with the User
ID of the process of interest. launchctl is used to load or unload
daemons in MacOS. This command can give the current status of the
process, last exit status of the process, origin on disk, etc.
The investigator must be aware that these programs may have been
infected by rootkits, and therefore their outputs may not be
completely reliable.
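A minimal sketch of the process listing described above (this keyword list works with both BSD/macOS ps and GNU ps on Linux; the output naturally differs per system):

```shell
# List processes with the fields discussed above.
ps -eo user,pid,ppid,%cpu,%mem,time,comm | head -n 5

# Print the PPID of the current shell; a PPID other than 1 means
# another user process spawned it.
ps -o ppid= -p $$
```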
Ques16:
There are several inbuilt tools in Linux that can be used to create an
image of a device disk:

1. dd: It can be used to create a raw image of the disk, which
includes all the data on the disk, including deleted files and
unallocated space. To create an image with dd, you can use the
following command:
dd if=[drive_to_image] of=[output_file.format]

2. dc3dd: This is a modified version of the dd utility that includes
additional features for forensic imaging. It can be used to create
an image of a device disk and includes options for verifying the
image, calculating hashes, and generating a log file. To create an
image with dc3dd, you can use the following command:
dc3dd if=[drive_to_image] of=[output_file.format] hash=md5
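A minimal, safe sketch of the dd workflow (imaging a small scratch file instead of a real drive, and verifying that source and copy hash identically):

```shell
# Create a small "disk" to image: a 1 MiB file of zeros (illustrative).
dd if=/dev/zero of=disk.img bs=1024 count=1024 status=none

# Image it with dd, then verify the copy against the source.
dd if=disk.img of=disk_copy.img bs=4096 status=none
md5sum disk.img disk_copy.img
```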
Ques20:
Linux maintains a Filesystem Hierarchy Standard (FHS). This FHS
defines the directory structure and the content/purpose of the
directories in Linux distributions.
The top of this hierarchical tree is called the root directory, or /.

Some of the directories are:


• /bin: This directory contains essential command binaries that are
required to boot the system and run in single-user mode.
• /sbin: This directory contains system binaries, such as commands
that are used to manage the system, configure hardware, and
perform system maintenance tasks.
• /etc: This directory contains configuration files for the system and
installed applications.
• /usr: This directory contains shareable, read-only data, such as
libraries, documentation, and executables for user-installed
applications.
• /var: This directory contains variable data, such as logs, spool files,
and temporary files.
• /lib: This directory contains libraries that are required by
executables in the /bin and /sbin directories.
• /tmp: This directory is used for temporary storage of files; its
contents are typically cleared at reboot (temporary files that must
survive reboots belong in /var/tmp).
• /home: This directory contains the home directories for users of
the system.
• /root: This is the home directory for the root user.
• /dev: This directory contains device files, which represent devices
such as disks, printers, and terminal devices.
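These directories can be verified quickly on a live system; a minimal sketch:

```shell
# Check which standard FHS directories exist on this system.
for d in /bin /sbin /etc /usr /var /lib /tmp /home /root /dev; do
    if [ -d "$d" ]; then echo "$d exists"; fi
done
```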
Ques21:
Swap space is a portion of a computer's storage that is used as an
extension of its physical memory. When a computer's physical memory
is full, the operating system can move some of the data from memory
to swap space, freeing up memory for other use. A swap partition (or
file) can be identified by a 10-character signature string: SWAPSPACE2
OR SWAP-SPACE.
To analyse the swap space in Linux, we need to follow the steps
mentioned below:
1. Locating the swap: a swap file is typically located at /swapfile;
active swap areas can also be listed with swapon --show or by
reading /proc/swaps.
2. Imaging the swap partition: You can use a tool such as dd or
dc3dd to create an image of the swap partition or file.
3. Analyse the image: Volatility is a memory forensics tool that can
be used to extract and analyse data from memory dumps,
including swap space.
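The signature makes a swap area easy to identify; a minimal sketch that builds a fake one-page “swap file” and reads the signature back (a 4096-byte page size is assumed, so the 10-byte signature sits at offset 4086):

```shell
# A swap area stores its 10-byte signature at the end of the first page.
dd if=/dev/zero of=fake.swap bs=4096 count=1 status=none
printf 'SWAPSPACE2' | dd of=fake.swap bs=1 seek=4086 conv=notrunc status=none

# Read it back: the last 10 bytes of the first page.
tail -c 10 fake.swap
```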
