Assignment of Operating Systems - APS College


RUBRICS

SL NO | DIMENSION | BEGINNER (2) | INTERMEDIATE (4) | GOOD (5) | ADVANCED (8) | EXPERT (10) | STUDENT'S SCORE
10    |           |              |                  |          |              |             |
11    |           |              |                  |          |              |             |
12    |           |              |                  |          |              |             |
13    |           |              |                  |          |              |             |

AVERAGE MARKS

STAFF SIGNATURE
ACTIVITY NAMES
1. OVERVIEW, NEED, STRUCTURE AND TYPES OF
OPERATING SYSTEM
2. WORKING, TYPES AND CHALLENGES OF
VIRTUALIZATION TECHNOLOGY
3. INTRODUCTION TO FILE SYSTEM AND FILE TYPES
4. SCHEDULING - LONG TERM, SHORT TERM AND MEDIUM
TERM
5. DEADLOCK - SYSTEM MODEL AND METHODS FOR
HANDLING DEADLOCKS -
PREVENTION, AVOIDANCE, RECOVERY FROM
DEADLOCKS
6. INTRODUCTION TO MEMORY MANAGEMENT AND
DIFFERENCE BETWEEN STATIC AND DYNAMIC LINKING AND
LOADING
7. BASICS OF SHELL PROGRAMMING AND TYPES OF SHELL
IN LINUX
8. ABOUT THE CRON COMMAND AND ADDITIONAL
RESOURCES
9. NETWORK COMPONENTS - IP ADDRESS, SUBNET MASK
AND GATEWAY
10. USER AND GROUP ACCOUNT MANAGEMENT
11. INTRODUCTION TO SYSTEM MONITORING
12. ABOUT DNS AND FTP
13. INTRODUCTION TO STORAGE MANAGEMENT
WEEK-01
OVERVIEW OF OPERATING SYSTEMS

1. OVERVIEW:-
 An Operating System (OS) is an interface between a computer
user and the computer hardware. It is software that performs
all the basic tasks such as file management, memory
management, process management, handling input and
output, and controlling peripheral devices such as disk drives
and printers.
 An operating system is software that enables applications
to interact with a computer's hardware. The software that
contains the core components of the operating system is
called the kernel.
 The primary purposes of an Operating System are to enable
applications (software) to interact with a computer's
hardware and to manage a system's hardware and software
resources.
 Some popular operating systems include Linux, Windows,
VMS, OS/400 and AIX. Today, operating systems are found in
almost every device: mobile phones, personal computers,
mainframe computers, automobiles, TVs, toys, etc.
2. Need of os:-
 You know that an OS is like a mediator between the system
hardware and its user. Hence its requirement lies in
providing an interface between the users and systems like
computer systems, mobile phones, music players, tablets,
etc.
 Resource Allocation -
Since more than one program runs simultaneously on the
system and uses the CPU and memory, we need an operating
system to manage the distribution of resources among the
various processes.
 Multitasking -
There is a need for system software to facilitate the execution
of more than one process/application simultaneously.
 Graphical user interface -
It eases the user's understanding of the system's processes
and enables smooth interaction between the two.
 File management -
The OS organizes data into files and directories and handles
their creation, deletion, access and protection on storage
devices.
 Platform -
The OS is the platform, or link, without which communication
between the user and the system is next to impossible.
3. STRUCTURE OF OS:-
Simple Structure: It is the most straightforward operating system
structure, but it lacks definition and is only appropriate for use with small
and restricted systems. Since the interfaces and levels of functionality in
this structure are not well separated, application programs are able to
access basic I/O routines, which may result in unauthorized access to I/O
procedures.

This organizational structure is used by the MS-DOS operating system:
o There are four layers that make up the MS-DOS operating system,
and each has its own set of features.
o These layers include ROM BIOS device drivers, MS-DOS device
drivers, application programs, and system programs.
o The MS-DOS operating system benefits from layering because each
level can be defined independently and, when necessary, can
interact with one another.
o If the system is built in layers, it will be simpler to design, manage,
and update. Because of this, simple structures can be used to build
constrained systems that are less complex.
o When a user program fails, the operating system as a whole crashes.
o Because MS-DOS systems have a low level of abstraction, programs
and I/O procedures are visible to end users, giving them the
potential for unwanted access.

Advantages of Simple Structure:

o Because there are only a few interfaces and levels, it is
simple to develop.
o Because there are fewer layers between the hardware and
the applications, it offers superior performance.
Disadvantages of Simple Structure:

o The entire operating system breaks if just one user program
malfunctions.
o Since the layers are interconnected, and in communication
with one another, there is no abstraction or data hiding.
o The operating system's operations are accessible to layers,
which can result in data tampering and system failure.
4. Types of os:-
WEEK-02
Virtualization technology, Linux and
the Linux boot process
1. Working:-
Virtualization technology allows multiple operating systems (OS) or
instances to run on a single physical machine. This is achieved by
using a hypervisor or virtual machine monitor (VMM), which acts as
an abstraction layer between the hardware and the virtualized
environments. Here's a basic overview of how virtualization works:

1. Hypervisor:
- The hypervisor is a software layer that sits between the
hardware and the operating systems or virtual machines (VMs).
- There are two types of hypervisors: Type 1 (bare-metal) and
Type 2 (hosted). Type 1 runs directly on the hardware, while Type
2 runs on top of an existing operating system.

2. Creation of Virtual Machines:
- Virtual machines are created by the hypervisor. Each VM
functions as an independent, isolated environment that emulates a
physical computer.
- The hypervisor allocates resources (CPU, memory, storage, and
network) to each VM from the underlying physical hardware.
3. Resource Allocation:
- The hypervisor manages and allocates resources dynamically
based on the needs of the virtual machines.

4. Guest Operating Systems:
- Each virtual machine runs its own guest operating system, which
can be different from the host OS or other guest OS instances.
- The guest OS interacts with the virtualized hardware provided
by the hypervisor, unaware that it is running in a virtual
environment.

5. Isolation:
- Virtualization provides strong isolation between virtual
machines. Even if one VM crashes or experiences issues, it doesn't
affect the others.
- Security features ensure that one VM cannot access the memory
or data of another VM.

6. Snapshot and Migration:
- Virtualization allows for the creation of snapshots, which are
point-in-time images of a VM's state. Snapshots can be used for
backup and recovery purposes.
- Virtual machines can be migrated or moved between different
physical servers without downtime, enhancing flexibility and
resource utilization.

7. Resource Pooling:
- Resources from the physical hardware are pooled together and
distributed among the virtual machines as needed. This efficient
utilization of resources is one of the key advantages of
virtualization.
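On a Linux host, the shell can show whether the CPU exposes the hardware extensions that hypervisors rely on. A minimal sketch, assuming Linux's `/proc/cpuinfo` is available (the `vmx` flag corresponds to Intel VT-x and `svm` to AMD-V):

```shell
#!/bin/sh
# Report whether the CPU advertises hardware virtualization extensions.
check_virt() {
    # Reads cpuinfo-style text on stdin, so it can also be tested offline.
    if grep -Eq '(vmx|svm)'; then
        echo "hardware virtualization: supported"
    else
        echo "hardware virtualization: not detected"
    fi
}

# On a real Linux host:
if [ -r /proc/cpuinfo ]; then
    check_virt < /proc/cpuinfo
fi
```

If neither flag appears, a Type 1 or Type 2 hypervisor may still run, but only with much slower software emulation.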
2. Types:-

1. Server Virtualization:
- Description: In server virtualization, a single
physical server is partitioned into multiple virtual
servers, each running its own operating system and
applications.
- Use Cases: Server consolidation, resource
optimization, and efficient utilization of hardware
resources.

2. Desktop Virtualization:
- Description: Desktop virtualization involves
running desktop environments on a server, with end-
users accessing these virtual desktops remotely.
- Use Cases: Centralized management, security, and
flexibility in delivering desktop environments to end-
users.

3. Application Virtualization:
- Description: Application virtualization isolates
applications from the underlying operating system,
allowing them to run independently.
- Use Cases: Simplified application deployment,
compatibility across different OS versions, and
isolation of applications for security purposes.
4. Storage Virtualization:
- Description: Storage virtualization abstracts
physical storage resources and provides a logical layer
for managing and presenting storage to the systems.
- Use Cases: Simplified storage management,
improved utilization, and flexibility in storage
allocation.

5. Hardware Virtualization:
- Description: Hardware virtualization involves
creating virtual machines that run on a hypervisor,
allowing multiple operating systems to run on a single
physical machine.
- Use Cases: Efficient resource utilization, server
consolidation, and the ability to run multiple OS
instances on a single server.

6. Memory Virtualization:
- Description: Memory virtualization pools together
physical memory resources, allowing dynamic
allocation and reallocation based on the needs of
virtual machines.
- Use Cases: Improved memory utilization, efficient
handling of varying workloads, and prevention of
resource contention.
7. GPU Virtualization:
- Description: GPU virtualization involves sharing the
processing power of a Graphics Processing Unit (GPU)
among multiple virtual machines.
- Use Cases: Enhanced graphics performance for
virtual desktops, improved scalability for GPU-
intensive workloads.

8. Data Virtualization:
- Description: Data virtualization abstracts data from
its physical location and provides a unified view,
allowing users to access and query data seamlessly.
- Use Cases: Integration of diverse data sources,
simplified data access, and improved data agility.
3. CHALLENGES:-

1. Performance Overhead:
- Challenge: Virtualization introduces some level of
performance overhead due to the additional layer
(hypervisor) between the virtual machines and the physical
hardware.
- Mitigation: Advances in virtualization technology and
hardware support have significantly reduced performance
overhead. Hardware-assisted virtualization features can
help mitigate this challenge.

2. Security Concerns:
- Challenge: Security vulnerabilities within the hypervisor
or misconfigurations can pose risks to the entire
virtualization environment. Additionally, VMs on the same
physical host might be vulnerable to attacks.
- Mitigation: Regular security updates, proper
configuration, and adherence to security best practices can
help mitigate these concerns. Strong isolation between
virtual machines is crucial.

3. Resource Contention:
- Challenge: Multiple virtual machines sharing the same
physical resources may lead to resource contention,
affecting performance.
- Mitigation: Proper resource planning, monitoring, and
management can help prevent resource contention.
4. Complexity of Management:
- Challenge: Managing a virtualized environment can be
complex, especially as the number of virtual machines
increases.
- Mitigation: Implementing management tools,
automation, and orchestration systems can simplify the
administration of virtualized resources. Cloud management
platforms can also assist in managing virtual infrastructure.

5. Backup and Recovery:
- Challenge: Traditional backup and recovery methods
may not be directly applicable to virtualized environments.
- Mitigation: Implementing specialized backup solutions
designed for virtualization, taking advantage of snapshot
features, and ensuring regular testing of recovery processes
can address this challenge.
WEEK-03
File system
1. INTRODUCTION :-
 A file system is a crucial component of modern computing that
facilitates the organization, storage, retrieval, and management
of digital data.
 In the realm of information technology, a file system acts as
the framework through which files and directories are
structured on storage devices such as hard drives, solid-state
drives, and external storage media.
 It provides the necessary mechanisms for users and applications
to interact with their stored data, enabling efficient data access
and retrieval.
 These files are organized in a hierarchical structure, allowing
users to create, save, and organize their work seamlessly.
 This document aims to delve into the fundamentals of file
systems, exploring their significance in the digital landscape and
shedding light on how users can navigate and leverage file
systems, from creating and saving documents to managing
versions and collaborating with others.
2. File types:-

1. Text Files:
- Extension: .txt
- Description: Simple files containing plain text
without formatting. They can be opened with basic
text editors.

2. Document Files:
- Extensions: .doc, .docx (Microsoft Word), .pdf
(Portable Document Format), .odt (OpenDocument
Text), .rtf (Rich Text Format)
- Description: Files used for creating and storing
documents with formatted text, images, and other
elements.

3. Spreadsheet Files:
- Extensions: .xls, .xlsx (Microsoft Excel), .ods
(OpenDocument Spreadsheet), .csv (Comma-
Separated Values)
- Description: Files used for organizing data in rows
and columns, typically for numerical and tabular
data.
4. Presentation Files:
- Extensions: .ppt, .pptx (Microsoft PowerPoint),
.odp (OpenDocument Presentation)
- Description: Files used for creating slideshows
and presentations, often including text, images, and
multimedia elements.

5. Image Files:
- Extensions: .jpg, .png, .gif, .bmp, .tiff
- Description: Files containing visual information,
such as photographs, illustrations, or graphics.

6. Audio Files:
- Extensions: .mp3, .wav, .aac, .ogg
- Description: Files containing audio data, often
used for music, sound effects, or voice recordings.

7. Video Files:
- Extensions: .mp4, .avi, .mkv, .mov
- Description: Files containing video data, used for
movies, clips, or presentations.
8. Executable Files:
- Extensions: .exe (Windows), .app (macOS), .deb,
.rpm.
- Description: Files containing compiled code that
can be executed by a computer, launching specific
software applications.

9. Archive Files:
- Extensions: .zip, .tar, .rar
- Description: Files that bundle and compress one
or more files and folders into a single file, making it
easier for distribution or storage.

10. Database Files:
- Extensions: .db, .accdb (Microsoft Access), .sql
- Description: Files used for storing structured data
in a database format, often associated with database
management systems.
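The extension-to-type mapping above can be sketched as a small shell function using `case` pattern matching. This is illustrative only; real tools such as the `file` command inspect file contents, not just names:

```shell
#!/bin/sh
# Classify a filename by its extension (a rough, non-exhaustive sketch).
filetype() {
    case "$1" in
        *.txt)                          echo "text" ;;
        *.doc|*.docx|*.pdf|*.odt|*.rtf) echo "document" ;;
        *.xls|*.xlsx|*.ods|*.csv)       echo "spreadsheet" ;;
        *.jpg|*.png|*.gif|*.bmp|*.tiff) echo "image" ;;
        *.mp3|*.wav|*.aac|*.ogg)        echo "audio" ;;
        *.mp4|*.avi|*.mkv|*.mov)        echo "video" ;;
        *.zip|*.tar|*.rar)              echo "archive" ;;
        *)                              echo "unknown" ;;
    esac
}

filetype report.pdf    # prints "document"
filetype song.mp3      # prints "audio"
```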
WEEK-04
Process management
1. Scheduling:-
Schedulers are special system software which handle process scheduling
in various ways. Their main task is to select the jobs to be submitted into
the system and to decide which process to run. Schedulers are of three
types –
 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler
Long Term Scheduler:-
 It is also called a job scheduler. A long-term scheduler determines
which programs are admitted to the system for processing. It
selects processes from the queue and loads them into memory for
execution.
 Process loads into the memory for CPU scheduling. The primary
objective of the job scheduler is to provide a balanced mix of jobs,
such as I/O bound and processor bound.
 It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process
creation must be equal to the average departure rate of processes
leaving the system. On some systems, the long-term scheduler may
not be available, or may be minimal.
 Time-sharing operating systems have no long-term scheduler.
When a process changes state from new to ready, the long-term
scheduler is used.
Short Term Scheduler
 It is also called the CPU scheduler. Its main objective is to increase
system performance in accordance with the chosen set of criteria.
It carries out the change of a process from the ready state to the
running state.
 CPU scheduler selects a process among the processes that are
ready to execute and allocates CPU to one of them.
 Short-term schedulers, also known as dispatchers, make the
decision of which process to execute next. Short-term schedulers
are faster than long-term schedulers.
Medium Term Scheduler
 Medium-term scheduling is a part of swapping. It removes the
processes from the memory. It reduces the degree of
multiprogramming.
 The medium-term scheduler is in-charge of handling the swapped
out-processes. A running process may become suspended if it
makes an I/O request.
 A suspended process cannot make any progress towards
completion. In this condition, to remove the process from memory
and make space for other processes, the suspended process is
moved to secondary storage.
 This process is called swapping, and the process is said to be
swapped out or rolled out. Swapping may be necessary to improve
the process mix.
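The short-term scheduler's pick-next-process job can be caricatured in a few lines of shell. The sketch below rotates a ready queue of made-up process names in round-robin fashion, one quantum per pass (purely illustrative; real CPU scheduling happens inside the kernel):

```shell
#!/bin/sh
# Round-robin caricature of a short-term scheduler: each pass, the
# process at the head of the ready queue "runs" for one quantum
# and is then moved to the tail.
rr_demo() {
    queue="P1 P2 P3"
    for quantum in 1 2 3 4; do
        set -- $queue          # split the queue into positional parameters
        running=$1; shift      # the head of the queue gets the CPU
        echo "quantum $quantum: running $running"
        queue="$* $running"    # rotate: rest of the queue, then the old head
    done
}
rr_demo
```

After three quanta every process has run once, and the fourth quantum wraps back to P1.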

2. Advantages:-

 Optimized CPU Utilization:
- Schedulers, especially short-term schedulers, aim to keep the CPU
busy by efficiently selecting processes for execution. This leads to higher
CPU utilization and overall system throughput.

 Improved System Responsiveness:
- Prioritization and efficient scheduling algorithms contribute to better
system responsiveness. Users experience reduced waiting times, quicker
application launches, and a more interactive computing environment.
 Fair Allocation of Resources:
- Schedulers help maintain fairness by allocating CPU time equitably
among competing processes. This prevents any single process from
monopolizing system resources, ensuring a balanced and fair
distribution.

 Enhanced Throughput:
- Effective scheduling strategies contribute to increased throughput,
allowing the system to handle more tasks in a given time period. This is
particularly important in environments with high workloads or time-
sensitive applications.

 Adaptability to Different Workloads:
- Schedulers are designed to adapt to varying workloads and priorities.
They can dynamically adjust their strategies based on the characteristics
of the processes and the system's current state.

3. Disadvantages:-

 Overhead:
- Schedulers introduce overhead in terms of computational resources and
time. The process of decision-making, context switching, and managing
various queues can consume system resources and affect overall
performance.

 Complexity:
- The implementation and management of sophisticated scheduling
algorithms can add complexity to the operating system. This complexity may
lead to challenges in understanding, maintaining, and debugging the
scheduler.
 Starvation:
- Starvation occurs when a process is unable to acquire the necessary
resources for an extended period due to continuously lower priority. Certain
scheduling algorithms may struggle to provide fairness, leading to some
processes being starved of resources.

 Unpredictability:
- The dynamic nature of some scheduling algorithms can result in
unpredictable behavior. It may be challenging for users and administrators
to anticipate how the system will prioritize processes under certain
conditions.

 Inefficient with Certain Workloads:
- Some scheduling algorithms that work well in specific scenarios may
perform poorly in others. For example, an algorithm optimized for CPU-
bound tasks may not be efficient when dealing with I/O-bound workloads.
WEEK-05
Process synchronization
1. Deadlock:-

 In the domain of operating systems and concurrent
programming, a deadlock represents a state where a set of
processes are unable to proceed because each is waiting
for a resource held by another process within the same set.
Deadlocks can bring a system to a standstill, leading to
unresponsive applications and a failure to make progress.

Key Characteristics of Deadlocks:

 Resource Contention:
- Deadlocks occur when processes contend for resources, such
as CPU time, memory, or devices. Each process holds a resource
while waiting for another resource held by a different process.

 Mutual Exclusion:
- At least one resource involved in the deadlock must be non-
shareable, meaning that only one process can use it at any given
time.

 Hold and Wait:
- Processes must hold at least one resource and be waiting for
another resource to be released by a different process.
 No Pre-emption:
- Resources cannot be forcibly taken away from a process. If a
process holds a resource and needs another, it must wait for that
resource to be released by another process voluntarily.

 Circular Wait:
- There must be a circular chain of two or more processes, each
waiting for a resource held by the next process in the chain.

2. System model:-
3. Methods of handling:-

 PREVENTION:
- Resource Allocation Graph (RAG): Use a resource allocation
graph to analyse and prevent deadlocks. The graph models the
relationships between processes and resources, helping to
identify and avoid circular wait conditions.
- Lock Ordering: Establish a global order for resource acquisition
and require processes to acquire resources in this order. This
helps prevent circular waits.
- Timeouts and Resource Reclamation: Implement timeouts for
resource requests, ensuring that a process releases acquired
resources within a specified time. If a process does not release
resources in time, they are forcefully reclaimed.

 AVOIDANCE:
- Banker's Algorithm: Use the Banker's algorithm to ensure that
resource allocations do not result in an unsafe state. It allows the
system to determine if a resource allocation will potentially lead
to a deadlock before granting the request.
- Dynamic Resource Allocation: Dynamically allocate resources
to processes based on their maximum resource needs. The system
checks for safety before allocating resources to prevent
deadlocks.

 DETECTION AND RECOVERY:

a) DETECTION STRATEGIES:
- Wait-Die and Wound-Wait schemes: In a transactional
environment, these schemes are used to handle deadlocks.
Wait-Die allows younger transactions to wait for older ones,
and Wound-Wait allows older transactions to abort younger
ones.
- Resource Allocation Graph (RAG) Algorithm: Periodically check
the resource allocation graph for cycles, indicating potential
deadlocks. If a cycle is found, employ recovery strategies.
b) Recovery Strategies:
- Process Termination: Terminate one or more processes to
break the deadlock. The selection of processes to terminate
should be based on factors such as priority or the amount of work
done.
- Resource Preemption: Pre-empt resources from one or more
processes to resolve the deadlock. The pre-empted resources are
then allocated to the waiting processes.

 AVOIDING CIRCULAR WAIT:
- One Resource at a Time: Design the system to allow processes
to request only one resource at a time. This eliminates the
possibility of circular waits.
- Lock Hierarchy: Establish a hierarchy for resource locking and
require processes to acquire resources in a specified order. This
reduces the likelihood of circular waits.

 DYNAMIC RESOURCE MANAGEMENT:
- Dynamic Adjustment of Resource Limits: Allow the operating
system to dynamically adjust resource limits for processes based
on their behavior. This prevents a process from holding onto
resources indefinitely.
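The lock-ordering idea above can be illustrated with shell-level locks. The sketch below uses `mkdir` as an atomic try-lock (a common shell idiom) and always takes lock A before lock B; since every script following the rule acquires in the same global order, no circular wait can form. Lock names and paths are made up for illustration:

```shell
#!/bin/sh
# Illustrative lock ordering with mkdir-based locks. mkdir is atomic:
# it fails if the directory already exists, so it doubles as a try-lock.
LOCKDIR="${TMPDIR:-/tmp}/demo-locks-$$"
mkdir -p "$LOCKDIR"

acquire() {   # spin until the named lock is ours
    until mkdir "$LOCKDIR/$1" 2>/dev/null; do sleep 1; done
}
release() { rmdir "$LOCKDIR/$1"; }

# Global order: A before B, everywhere. Acquiring in the opposite
# order is forbidden, which rules out the circular-wait condition.
acquire A
acquire B
echo "in critical section holding A and B"
release B
release A
```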
WEEK-06
Memory management
1. Introduction:-
 Memory management is a fundamental aspect that
involves the organization and control of a
computer's primary memory (RAM). Efficient
memory management is critical for ensuring that the
operating system and applications can store,
retrieve, and manipulate data in a way that
optimizes system performance and resource
utilization.

KEY OBJECTIVES OF MEMORY MANAGEMENT:-

1. Allocation and De-allocation:
- Allocation: Assign portions of memory to different
programs and processes based on their requirements.
- De-allocation: Reclaim and release memory that is no
longer needed or in use by a program or process.

2. Organization and Mapping:
- Organization: Arrange memory in a structured manner
to facilitate efficient storage and retrieval of data.
- Mapping: Translate the logical (virtual) addresses
generated by programs into physical addresses in RAM.
3. Protection:
- Implement mechanisms to protect memory from
unauthorized access. This includes defining access rights
and privileges for different processes to prevent
unintended interference.

4. Sharing:
- Facilitate the sharing of memory among multiple
processes when required. Shared memory enables
efficient communication and collaboration between
processes.

5. Optimization of Resource Utilization:
- Optimize the use of available memory resources to
ensure that the system operates efficiently. This involves
minimizing fragmentation and maximizing the utilization of
RAM.

Components of Memory Management:-

1. Logical vs. Physical Address Space:
- Logical Address Space: The addresses generated by a
program, also known as virtual addresses.
- Physical Address Space: The addresses seen by the
memory unit, i.e. the actual locations in RAM.
2. Address Binding:
- Compile Time Binding: Addresses are determined at
compile time and are fixed before the program runs.
- Load Time Binding: Addresses are assigned at the time
of program loading into memory.
- Run Time Binding: Addresses are determined
dynamically during program execution.

3. Memory Partitioning:
- Fixed Partitioning: Divide memory into fixed-sized
partitions, assigning each partition to a single job or
process.
- Variable Partitioning: Memory is divided into variable-
sized partitions based on the size of the jobs or processes.

4. Fragmentation:
- Internal Fragmentation: Wastage of memory within a
partition due to the allocation of more space than
required.
- External Fragmentation: Unallocated memory exists in
the system, but it is not contiguous, making it challenging
to allocate to processes.
5. Memory Protection:
- Prevent unauthorized access to specific memory
regions. Access permissions are assigned to different
segments of memory to protect critical data.

6. Swapping:
- Transfer portions of a program between main memory
and secondary storage (like the hard disk) to free up space
for other processes. This is especially useful in scenarios
where the total memory requirements exceed the
available RAM.

7. Paging and Segmentation:
- Paging: Divide both logical and physical memory into
fixed-sized pages, allowing for efficient use of memory
resources.
- Segmentation: Divide the logical address space into
variable-sized segments, providing flexibility in memory
allocation.
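Paging's address translation is simple arithmetic: the logical address splits into a page number (address divided by page size) and an offset (address modulo page size). A quick sketch using shell arithmetic, assuming an illustrative 4 KiB page size:

```shell
#!/bin/sh
# Split a logical address into page number and offset, assuming
# 4 KiB (4096-byte) pages. The address value is made up.
addr=20000
page_size=4096

page=$(( addr / page_size ))      # which page the address falls in
offset=$(( addr % page_size ))    # position within that page

echo "logical address $addr -> page $page, offset $offset"
# prints: logical address 20000 -> page 4, offset 3616
```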
2. Difference between static and dynamic
linking and loading:-

 STATIC LINKING:

1. Definition:
- Static linking is a process in which the linker combines
all the needed modules and libraries into a single
executable file during the compilation phase.

2. Timing:
- Linking Time: It occurs during the compilation process.

3. Linker's Role:
- Linker's Responsibility: The linker is responsible for
resolving addresses and generating a single executable file
that contains all the necessary code and data.

4. Flexibility:
- Flexibility: Once the program is linked statically, it is
fixed and cannot be changed without recompiling the
entire code.
5. Memory Usage:
- Memory Usage: The resulting executable file includes all
the necessary code and data, potentially leading to larger
file sizes and increased memory usage.

6. Efficiency:
- Efficiency: Executables generated through static linking
tend to be more efficient in terms of runtime performance,
as the addresses are fixed.

7. Example:
- Example: Creating a standalone executable file for a C
or C++ program.

 DYNAMIC LINKING:

1. Definition:
- Dynamic linking is a process in which the linking is
postponed until runtime. The executable file contains
references to functions or modules that are resolved
during program execution.

2. Timing:
- Linking Time: It occurs during runtime when the
program is loaded into memory.
3. Linker's Role:
- Linker's Responsibility: The linker includes references to
dynamic libraries in the executable, and the actual linking
is done by the operating system's loader when the
program is loaded into memory.

4. Flexibility:
- Flexibility: Dynamic linking allows for flexibility, as
changes to the dynamic libraries do not require
recompiling the entire program.

5. Memory Usage:
- Memory Usage: Dynamic linking reduces memory usage
since multiple programs can share the same copy of a
dynamically linked library.

6. Efficiency:
- Efficiency: There might be a slight runtime overhead
associated with dynamic linking, as the addresses need to
be resolved during program execution.

7. Example:
- Example: Using shared libraries or DLLs (Dynamic Link
Libraries) in Windows.
 STATIC LOADING:

1. Definition:
- Static loading involves loading all necessary program
modules into memory at program startup.

2. Timing:
- Loading Time: It occurs at the beginning of the program
execution.

 DYNAMIC LOADING:

1. Definition:
- Dynamic loading involves loading a module into
memory only when it is explicitly called by the program
during runtime.

2. Timing:
- Loading Time: It occurs during program execution, as
needed.
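The static/dynamic distinction is easy to see with a C toolchain from the shell. A hedged sketch (it assumes `cc` and `ldd` are installed; the commented-out `-static` build additionally needs the static C library, which not every distribution ships):

```shell
#!/bin/sh
# Build a tiny program with dynamic (default) linking, then inspect it.
command -v cc >/dev/null 2>&1 || { echo "no C compiler found"; exit 0; }

cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello"); return 0; }
EOF

cc hello.c -o hello_dyn            # dynamic linking: libc resolved at load time
# cc -static hello.c -o hello_st   # static linking: libc copied into the binary

./hello_dyn                        # prints "hello"
ldd ./hello_dyn 2>/dev/null || true  # lists the shared libraries it needs
```

On a typical Linux system `ldd` shows `libc.so` among the dependencies of the dynamic binary, whereas a `-static` build reports "not a dynamic executable" and is noticeably larger.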
WEEK-07
Shell programming
1. Basics of shell programming:-

 Shebang Line:
- Definition: The shebang line (`#!`) is used at the
beginning of a script to specify the interpreter that
should be used to execute the script. It helps identify
the scripting language and the version to be used.

 Comments:
- Definition: Comments in shell scripts are lines
that begin with the `#` character. They are used for
adding explanatory notes and are ignored by the
shell during execution.

 Variables:
- Definition: Variables in shell programming are
containers for storing data. They hold values that
can be referenced or manipulated within the
script.
 Echo:
- Definition: The `echo` command is used to print
messages or display the values of variables in the
terminal. It outputs text to the standard output.

 Read Input:
- Definition: The `read` command is used to take
user input during the execution of a script. It
assigns the entered value to a variable for further
use.

 Conditional Statements:
- Definition: Conditional statements in shell
scripts, such as `if`, `else`, and `fi`, are used to
control the flow of execution based on specified
conditions.

 Loops:
- Definition: Loops, such as `for` and `while`,
enable repetitive execution of code. They iterate
over a sequence of values or until a specific
condition is met.
 Functions:
- Definition: Functions in shell scripts are blocks
of reusable code. They encapsulate a set of
instructions and can accept parameters.

 Command Substitution:
- Definition: Command substitution involves
capturing the output of a command and using it as
part of another command or storing it in a variable.

 File Operations:
- Definition: File operations in shell scripts
involve manipulating files and directories using
commands like `cp` (copy), `mv` (move), `rm`
(remove), etc.

 Input/output Redirection:
- Definition: Input/output redirection allows
changing the flow of data between commands and
files. It uses symbols like `<`, `>`, `>>`, and pipes
(`|`).
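The building blocks above (shebang, comments, variables, command substitution, conditionals, loops, functions) fit together in a short script. A minimal sketch, with made-up names and values:

```shell
#!/bin/sh
# Demonstrates the shell-programming basics described above.

greeting="Hello"                   # variable assignment (no spaces around =)

greet() {                          # a function taking one parameter
    echo "$greeting, $1!"
}

today=$(date +%A)                  # command substitution

if [ "$today" = "Sunday" ]; then   # conditional statement
    echo "It is the weekend."
else
    echo "Today is $today."
fi

for name in Alice Bob; do          # loop over a list
    greet "$name"                  # prints "Hello, Alice!" then "Hello, Bob!"
done
```

Save it as a file, make it executable with `chmod +x`, and run it; the shebang line tells the kernel to interpret it with `/bin/sh`.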
2. Types of shell in linux:-

 Bash (Bourne Again Shell):
- Description: Bash is the default shell for most Linux distributions.
It is a successor to the Bourne Shell and incorporates features from
the Korn Shell and the C Shell. Bash is known for its powerful
scripting capabilities, command-line editing, and extensive support
in the Linux ecosystem.

 Zsh (Z Shell):
- Description: Zsh is an extended and interactive shell with
additional features compared to Bash. It includes advanced tab
completion, themes, and customization options. Zsh is popular
among power users for its flexibility and rich set of plugins.

 Fish (Friendly Interactive Shell):
- Description: Fish is designed to be user-friendly and interactive.
It features syntax highlighting, auto-suggestions, and a
straightforward scripting syntax. Fish is known for its emphasis on
providing a pleasant user experience for both beginners and
advanced users.

 Dash:
- Description: Dash is a lightweight and fast shell that aims to be
POSIX-compliant. It is often used as the default `/bin/sh` on some
systems due to its efficiency, especially in scenarios where quick
shell startup is essential.
 Ash (Almquist Shell):
- Description: Ash is a minimalistic shell that adheres to the POSIX
standard. It is commonly used in embedded systems and
environments with resource constraints. BusyBox, a software suite
for embedded systems, often includes a variant of the Ash shell.

 Ksh (Korn Shell):
- Description: The Korn Shell, developed by David Korn, is known
for its powerful scripting capabilities and features found in both the
Bourne Shell and the C Shell. It is less commonly used as the default
interactive shell but is popular for scripting purposes.

 Csh (C Shell) and Tcsh (Enhanced C Shell):


- Description: The C Shell and its enhanced version, Tcsh, have C-
like syntax. Tcsh incorporates additional interactive features such
as command-line editing and history manipulation. While Tcsh has
more features, Bash is more widely used.

 Sh (Bourne Shell):
- Description: The Bourne Shell, often referred to as `sh`, is one of
the earliest UNIX shells. It is relatively basic compared to modern
shells but remains important as it serves as the foundation for many
scripting conventions.
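To see which of these shells are installed on a given system, and which one you are currently running, the following read-only commands can be used (output varies by distribution, and /etc/shells may not exist on minimal systems):

```shell
echo "$0"                      # name of the current shell process (e.g. bash, sh)
cat /etc/shells 2>/dev/null    # login shells registered on this system, if present
command -v bash || true        # path to bash, if installed
command -v dash || true        # path to dash, if installed
command -v zsh  || true        # path to zsh, if installed
```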
WEEK-08
Automation of system tasks
1. cron command:-
 Cron is a software utility provided by Linux-like
operating systems to run scheduled jobs at
predetermined times. It runs as a daemon in the
background and executes the configured tasks at
their scheduled times without user intervention.

 Dealing with repetitive operations is often a
tedious chore for a system administrator. The
administrator can therefore schedule such processes
to run automatically in the background at regular
intervals by listing the commands in a crontab via
the cron command.

 It enables every user to run scheduled operations on
a regular basis, such as taking a backup every day,
synchronizing files at set intervals, and scheduling
updates weekly.

 Cron periodically inspects the scheduled tasks, and
when the scheduled time fields match the current
time, the corresponding commands are executed. The
cron daemon is started automatically when the
system enters multi-user run levels.
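A crontab entry has five time fields followed by the command to run. The entries below are illustrative only (the script paths are placeholders):

```shell
# Manage the current user's job list:
#   crontab -e    # edit the crontab
#   crontab -l    # list installed jobs
#
# Field order: minute  hour  day-of-month  month  day-of-week  command
#
# 0 2 * * *     /usr/local/bin/backup.sh    # every day at 02:00
# */15 * * * *  /usr/local/bin/sync.sh      # every 15 minutes
# 30 6 * * 1    /usr/local/bin/weekly.sh    # Mondays at 06:30
```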
2. Additional resources on cron:-

 Official Documentation:
- Linux man pages: The official manual pages provide in-
depth information on the `cron` command and its usage.
You can access them by typing `man cron` in the terminal.

 Online Tutorials and Guides:


- Digital Ocean’s Cron Tutorial: Digital Ocean provides a
comprehensive tutorial on using cron, covering various
aspects, from basic scheduling to more advanced
configurations.
- GeeksforGeeks Cron Guide: GeeksforGeeks offers a
beginner-friendly guide to understanding and using cron
for scheduled tasks.

 Video Tutorial:
- YouTube Video: Introduction to Cron Jobs: A video
tutorial providing an introduction to cron jobs, explaining
the syntax and demonstrating common use cases.

- YouTube Video: Advanced Cron Jobs in Linux: A video
tutorial covering more advanced topics related to cron
jobs, including environment setup and troubleshooting.
WEEK-09
Network management
1. IP address:-
DEFINITION:
An IP address, or Internet Protocol address, is a numerical
label assigned to each device participating in a computer
network that uses the Internet Protocol for communication.
It serves two main purposes: host or network interface
identification and location addressing.

TYPES OF IP ADDRESSES:

 IPv4 (Internet Protocol version 4):


- Format: Consists of four sets of numbers separated by
dots (e.g., 192.168.0.1).
- Range: Approximately 4.3 billion unique addresses.
- Commonly Used Today: Despite its limitations, IPv4
remains widely used, and most devices on the internet are
assigned IPv4 addresses.
 IPv6 (Internet Protocol version 6):
- Format: Utilizes hexadecimal notation and colons,
allowing for a significantly larger address space (e.g.,
2001:0db8:85a3:0000:0000:8a2e:0370:7334).
- Range: Vastly expanded address space to accommodate
the growing number of devices connected to the internet.
- Transition: IPv6 is gradually being adopted to address the
limitations of IPv4 and support the increasing number of
internet-connected devices.

PURPOSE OF IP ADDRESSES:

 Identification:
- IP addresses uniquely identify devices on a network,
allowing them to send and receive data. Every device,
including computers, servers, routers, and smartphones, is
assigned a unique IP address.

 Routing:
- IP addresses are used for routing data between devices
and networks. Routers use IP addresses to determine the
most efficient path for data to travel from the source to the
destination.
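Under the hood, an IPv4 address is a single 32-bit number; the dotted-quad form is just a readable encoding, and routers compare addresses in the numeric form. A small POSIX-shell sketch of the conversion:

```shell
# Convert a dotted-quad IPv4 address to its 32-bit integer form.
IP="192.168.0.1"

# Split the address into its four octets (word splitting on '.').
set -- $(echo "$IP" | tr '.' ' ')

# Shift each octet into place and OR them together.
echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
# 192.168.0.1 -> 3232235521
```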
2. Subnet mask:-
DEFINITION:

A subnet mask is a 32-bit number used in Internet Protocol
(IP) addressing to divide an IP address into network and host
portions. It plays a crucial role in defining the boundaries of
a network and allows for the creation of subnetworks, or
subnets, within a larger network.

COMPONENTS:

- The subnet mask is made up of contiguous blocks of 1s
followed by contiguous blocks of 0s.

- The 1s represent the network portion, and the 0s
represent the host portion of the IP address.

PURPOSE:

- The primary purpose of a subnet mask is to define the
network and host portions of an IP address, enabling
efficient routing and management of IP addresses within a
network.
KEY CONCEPTS:

1. Network Portion:

- The network portion of an IP address is determined by
the 1s in the subnet mask. It identifies the specific network
to which a device belongs.

2. Host Portion:

- The host portion, determined by the 0s in the subnet
mask, specifies the unique identifier for a device within the
network.

3. Subnetting:

- Subnetting involves creating smaller subnetworks within
a larger network. It helps improve network performance,
security, and efficient use of IP addresses.

Subnetting Example:

- If an organization is assigned the IP address range
192.168.1.0 with a subnet mask of 255.255.255.0 (/24), it
can create multiple subnets, each with its own unique range
of host addresses.
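The network/host split can be computed with a bitwise AND, octet by octet. A minimal POSIX-shell sketch, using a host address (192.168.1.130, chosen for illustration) inside the example subnet:

```shell
# AND each octet of the address with the matching mask octet
# to recover the network address.
IP="192.168.1.130"
MASK="255.255.255.0"

set -- $(echo "$IP" | tr '.' ' ')
i1=$1 i2=$2 i3=$3 i4=$4
set -- $(echo "$MASK" | tr '.' ' ')
m1=$1 m2=$2 m3=$3 m4=$4

echo "Network: $((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
# Network: 192.168.1.0
```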
3. Gateway:-
DEFINITION:

A gateway is a network device that serves as an entry or exit
point between two different networks, enabling data to flow
between them. It acts as a translator, facilitating
communication between devices on distinct networks that
may use different communication protocols.

KEY CHARACTERISTICS:

1. Protocol Translation:

- Gateways are capable of translating data between
networks that may operate using different communication
protocols. This is essential for ensuring seamless
communication between diverse systems.

2. Connectivity Hub:

- As a connectivity hub, a gateway allows devices on one
network to connect and communicate with devices on
another network. It establishes a bridge between otherwise
isolated networks.
3. Security Features:

- Gateways often include security features, such as firewalls
and packet filtering, to control and monitor the flow of data
between networks. This helps in securing sensitive
information and preventing unauthorized access.

4. Internet Gateway:

- In the context of home or office networks, the gateway is
commonly referred to as an internet gateway. It connects the
local network to the internet, enabling devices within the local
network to access online resources.

5. IPv4 to IPv6 Translation:

- With the transition from IPv4 to IPv6, gateways may
perform translation services to allow communication between
devices using different versions of the Internet Protocol.

TYPES OF GATEWAYS:

1. Protocol Gateways:

- Translate data between networks that use different
communication protocols (e.g., TCP/IP to IPX/SPX).
2. Application Gateways:

- Focus on specific applications and provide translation
services (e.g., for web servers or email).

3. Residential Gateways:

- Commonly used in homes to connect local networks to the
internet. Often integrated with features like DHCP, NAT, and
firewall capabilities.

4. Cloud Gateways:

- Facilitate communication between local networks and
cloud-based services. Often used in cloud computing
environments.
WEEK-10
User authentication
1. User and group account
management
USER ACCOUNT MANAGEMENT:
1. Creating a User:
- Definition: The process of adding a new user to a Unix-
like operating system, typically done with the `useradd`
command.
2. Setting Passwords:
- Definition: The act of defining or modifying the password
associated with a user account using the `passwd`
command.
3. User Information:
- Definition: Adding or updating details like full name and
contact information associated with a user account using
the `usermod` command.
4. User Deactivation:
- Definition: Temporarily disabling a user account without
deleting it, achieved using the `usermod` command with the
`-L` option.
5. User Deletion:
- Definition: The process of permanently removing a
user account from the system using the `userdel` command.

6. Viewing User Information:


- Definition: Retrieving details about a user account, such
as user ID and groups, using commands like `id` or `finger`.
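The first two commands below inspect an existing account and are safe to run; the administrative ones (shown as comments) require root privileges, and 'alice' is a hypothetical username:

```shell
# Inspect the root account (read-only).
id root               # uid, gid, and group memberships
getent passwd root    # the account's /etc/passwd entry

# Administrative commands (root required; 'alice' is a placeholder):
# useradd -m -s /bin/bash alice   # create user with a home directory
# passwd alice                    # set the password interactively
# usermod -L alice                # lock (deactivate) the account
# userdel -r alice                # delete the account and its home directory
```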

GROUP ACCOUNT MANAGEMENT:

1. Creating a Group:
- Definition: Establishing a new group in a Unix-like
system, accomplished using the `groupadd` command.

2. Adding Users to a Group:


- Definition: Associating a user with a specific group,
achieved through the `usermod` command with the `-aG`
option.

3. Viewing Group Information:


- Definition: Retrieving information about a group,
including its members, using the `id` command.
4. Modifying Group Properties:
- Definition: Adjusting group settings, adding or removing
users, or setting a group password using the `gpasswd`
command.

5. Deleting a Group:
- Definition: Permanently removing a group from the
system with the `groupdel` command.
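Similarly for groups — the first command is read-only, while the rest require root and use a hypothetical group name:

```shell
# Show a group's entry and members (read-only).
getent group root

# Administrative commands (root required; 'developers' is a placeholder):
# groupadd developers               # create the group
# usermod -aG developers alice      # add a user without dropping other groups
# gpasswd -d alice developers       # remove the user from the group
# groupdel developers               # delete the group
```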

PASSWORD POLICIES:

1. Password Expiry:
- Definition: Configuring the expiration period for a user's
password, typically done using the `chage` command.

2. Account Locking:
- Definition: Temporarily preventing or allowing a user to
log in by locking or unlocking the account with the `passwd`
command.

3. Account Expiry:
- Definition: Setting or removing the expiration date for a
user account using the `usermod` command.
WEEK-11
System monitoring
1. Introduction:
DEFINITION:
System monitoring is the process of continuously observing
and evaluating the performance, health, and behavior of a
computer system. It involves the collection, analysis, and
presentation of data to ensure the efficient operation of
hardware, software, and network components. The primary
goal is to identify potential issues, prevent system failures,
and optimize overall system performance.

KEY COMPONENTS OF SYSTEM MONITORING:

1. Performance Metrics:
- Monitoring involves tracking various performance
metrics, including CPU usage, memory utilization, disk I/O,
network activity, and system uptime.

2. Resource Utilization:
- Examining the allocation and consumption of system
resources such as CPU, memory, disk space, and network
bandwidth to identify bottlenecks or inefficiencies.
3. Event Logging:
- Recording system events, errors, warnings, and other
critical information in logs for later analysis. Event logs
provide insights into system behavior and potential issues.

4. Alerting and Notifications:


- Setting up alerts or notifications for predefined
thresholds or abnormal conditions to promptly inform
administrators about potential problems.

5. Security Monitoring:
- Monitoring for security-related events, unauthorized
access attempts, and potential vulnerabilities to ensure the
integrity and confidentiality of the system.

Tools for System Monitoring:

1. Resource Monitors:
- Tools like `top` (Unix/Linux), Task Manager (Windows),
or third-party applications provide real-time information on
CPU, memory, and process utilization.
2. Log Analyzers:
- Utilities like `syslog` (Unix/Linux), Event Viewer
(Windows), or specialized log analyzers help review system
logs and detect issues.

3. Network Monitoring Tools:


- Applications such as Wireshark, Nagios, or Zabbix help
monitor network traffic, identify bottlenecks, and ensure
network stability.

4. Performance Monitoring Software:


- Comprehensive solutions like Prometheus, Grafana, or
SolarWinds offer advanced monitoring capabilities,
customizable dashboards, and historical data analysis.
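A quick snapshot of the metrics discussed above can be taken with standard command-line tools (read-only; exact output format varies by system, and `free` is Linux-specific):

```shell
uptime                       # system uptime and load averages
df -h /                      # disk usage of the root filesystem
ps aux | head -5             # a few running processes with CPU/memory columns
free -h 2>/dev/null || true  # memory usage (Linux procps; skipped elsewhere)
```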

BENEFITS OF SYSTEM MONITORING:

1. Early Issue Detection:


- Identifying and addressing potential problems before
they escalate, reducing system downtime and enhancing
reliability.
2. Performance Optimization:
- Analyzing resource utilization helps optimize system
performance, ensuring efficient use of hardware and
preventing bottlenecks.

3. Capacity Planning:
- Understanding resource trends over time facilitates
effective capacity planning, ensuring that the system can
handle anticipated workloads.

4. Security Enhancement:
- Monitoring security-related events helps detect and
respond to potential security threats, safeguarding the
integrity and confidentiality of the system.

5. Improved Decision-Making:
- Access to real-time and historical data allows
administrators to make informed decisions regarding
system upgrades, maintenance, or changes.
WEEK-12
Server setup
1. DNS [Domain Name System]:-
DEFINITION:

The Domain Name System (DNS) is a hierarchical and
distributed naming system that translates human-readable
domain names into IP addresses and vice versa. It serves as
a crucial component of the internet infrastructure, enabling
users to access websites and other resources using easy-to-
remember domain names rather than numerical IP
addresses.

KEY COMPONENTS OF DNS:

1. DNS Server:
- DNS servers are responsible for storing and providing
access to the database of domain names and their
corresponding IP addresses. They handle queries from
clients, translating domain names into IP addresses.

2. Domain Name:
- A domain name is a human-readable label assigned to a
specific IP address. It consists of two main parts: the top-
level domain (TLD) and the second-level domain (SLD).
3. Top-Level Domain (TLD):
- The TLD is the highest level in the DNS hierarchy and is
typically found at the end of a domain name. Examples
include .com, .org, .net, and country-code TLDs like .uk or
.jp.

4. Second-Level Domain (SLD):


- The SLD is the part of the domain name that precedes
the TLD. It is the unique identifier for a specific website or
resource.

5. DNS Resolver:
- DNS resolvers, also known as DNS clients, are
applications or components within networking devices that
send DNS queries to DNS servers, translating domain names
into IP addresses.

DNS Resolution Process:

1. User Input:
- A user enters a domain name (e.g., www.example.com)
into a web browser.

2. Local DNS Resolver:


- The local DNS resolver, often provided by the Internet
Service Provider (ISP) or configured on the user's device,
sends a query to the DNS server to resolve the domain.
3. Root DNS Server:
- If the local resolver doesn't have the IP address for the
requested domain, it queries the root DNS server to get
information about the Top-Level Domain (TLD) server
responsible for the specific TLD in the domain.

4. TLD DNS Server:


- The root DNS server responds with the IP address of the
TLD DNS server. The local resolver then queries the TLD DNS
server for information about the authoritative DNS server
for the Second-Level Domain (SLD).

5. Authoritative DNS Server:


- The TLD DNS server responds with the IP address of the
authoritative DNS server for the specified domain.

6. IP Address Retrieval:
- The local resolver queries the authoritative DNS server,
which responds with the IP address associated with the
requested domain.

7. Result Returned:
- The IP address is returned to the user's device, allowing
the web browser to connect to the requested website.
DNS RECORDS:

1. A (Address) Record:
- Associates a domain with an IPv4 address.

2. AAAA (IPv6 Address) Record:


- Associates a domain with an IPv6 address.

3. CNAME (Canonical Name) Record:


- Creates an alias for a domain, redirecting it to another
domain.

4. MX (Mail Exchange) Record:


- Specifies mail servers responsible for receiving emails on
behalf of the domain.

5. NS (Name Server) Record:


- Identifies authoritative DNS servers for the domain.
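These records can be queried from the command line. The `dig` tool (from the bind-utils/dnsutils package) needs network access, so those queries are shown as reference; the final command uses only the local resolver:

```shell
# Reference queries (require network access and dig installed):
# dig +short example.com A       # IPv4 (A) record
# dig +short example.com AAAA    # IPv6 record
# dig +short example.com MX      # mail exchangers
# dig +short example.com NS      # authoritative name servers

# Local resolution through the system resolver (checks /etc/hosts first):
getent hosts localhost
```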
2. FTP [File Transfer Protocol]:-
DEFINITION:
File Transfer Protocol (FTP) is a standard network protocol
used for the transfer of files between a client and a server
on a computer network. It operates on the application layer
of the Internet Protocol (IP) suite and is widely employed
for sharing files, uploading, and downloading content
between computers.

KEY COMPONENTS OF FTP:

1. FTP Client:
- An FTP client is a software application that initiates a
connection to an FTP server for the purpose of transferring
files. Common FTP clients include FileZilla, WinSCP, and
built-in command-line tools like `ftp` in Unix/Linux systems.

2. FTP Server:
- An FTP server is a software application that accepts and
manages incoming FTP connections. It stores and provides
access to files for FTP clients. Examples of FTP servers
include vsftpd (Very Secure FTP Daemon) and ProFTPD.
3. Control Connection:
- The control connection is established between the FTP
client and server to exchange commands, authentication
details, and control information. It typically operates on
port 21.

4. Data Connection:
- The data connection is used for the actual transfer of
files. Depending on the FTP mode (active or passive), data
connection can operate on separate ports. Passive mode is
often used to accommodate firewalls and NAT
configurations.

FTP Modes:

1. Active Mode:
- In active mode, the FTP client initiates a command
connection to the server (on port 21) and a data connection
for file transfers. The client specifies an IP address and port
for the server to connect to for the data transfer.

2. Passive Mode:
- In passive mode, the FTP server provides an IP address
and port for the client to connect to for the data transfer.
WEEK-13
Storage management
1. Introduction:-
DEFINITION:
Storage management refers to the systematic
administration, organization, and optimization of data
storage resources within a computing environment. It
involves the planning, provisioning, maintenance, and
monitoring of storage infrastructure to ensure efficient
data storage, retrieval, and protection.

KEY COMPONENTS OF STORAGE MANAGEMENT:

1. Storage Devices:
- Storage management encompasses various storage
devices such as hard disk drives (HDDs), solid-state
drives (SSDs), network-attached storage (NAS), storage
area networks (SANs), and cloud storage.

2. File Systems:
- File systems manage the organization and retrieval of
data on storage devices. They define how data is stored,
named, accessed, and secured. Examples include NTFS,
ext4, and FAT32.
3. Data Organization:
- Efficient storage management involves organizing
data in a way that enables quick access, retrieval, and
modification. Hierarchical storage structures, directory
hierarchies, and indexing mechanisms contribute to
effective data organization.

4. Storage Virtualization:
- Storage virtualization abstracts physical storage
resources, allowing multiple storage devices to be
managed as a single logical unit. This enhances
flexibility, scalability, and ease of management.

5. Data Backup and Recovery:


- Storage management includes implementing backup
and recovery strategies to protect data from loss or
corruption. This involves regular backups, offsite
storage, and the ability to restore data quickly.

6. Storage Tiering:
- Storage tiering involves categorizing data based on
usage patterns and moving it to different storage
classes. Frequently accessed data may reside on high-
performance storage, while less accessed data is stored
on more cost-effective, slower storage.
7. Capacity Planning:
- Predicting future storage needs and planning for
sufficient capacity to accommodate data growth.
Capacity planning involves monitoring current usage,
projecting future requirements, and adjusting storage
infrastructure accordingly.

8. Data Deduplication:
- Eliminating redundant copies of data to optimize
storage space. Deduplication reduces storage costs
and enhances efficiency by storing only unique data.
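Basic capacity and usage figures — the starting point for capacity planning — can be read with standard tools (the directory below is a throwaway example created and removed by the script):

```shell
df -h /    # capacity and free space of the root filesystem

# Measure a directory tree's size with a disposable example.
mkdir -p /tmp/stdemo
echo "sample data" > /tmp/stdemo/file.txt
du -sh /tmp/stdemo    # total size of the tree
rm -r /tmp/stdemo
```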

STORAGE MANAGEMENT PROCESSES:

1. Allocation and Provisioning:


- Allocating storage resources to users or applications
based on their requirements and provisioning additional
storage as needed.

2. Monitoring and Performance Tuning:


- Continuously monitoring storage performance,
identifying bottlenecks, and implementing
optimizations to ensure optimal data access and
retrieval times.
3. Security and Access Control:
- Implementing access controls, encryption, and other
security measures to protect stored data from
unauthorized access, ensuring compliance with data
protection regulations.

4. Data Lifecycle Management:


- Managing data throughout its lifecycle, from creation
to archival or deletion. This involves determining when
data becomes inactive and can be moved to lower-cost
storage.

5. Disaster Recovery Planning:


- Developing and implementing strategies to recover
data in the event of a disaster or data loss. This includes
offsite backups and replication to secondary storage
locations.
