UNIT-3 Cloud Computing
KAKINADA INSTITUTE OF ENGINEERING & TECHNOLOGY
TOPICS:-
1. Cloud Resource Virtualization: virtualization, layering and virtualization, virtual machine monitors, virtual machines, full and para virtualization, performance and security isolation, hardware support for virtualization.
2. Case Study: Xen, vBlades. Cloud Resource Management and Scheduling: policies and mechanisms, applications of control theory to task scheduling, stability of a two-level resource allocation architecture, feedback control based on dynamic thresholds, coordination, resource bundling, scheduling algorithms, fair queuing, start time fair queuing, cloud scheduling subject to deadlines, scheduling MapReduce applications.
Virtualization allows a single physical machine to be shared by multiple applications and users, enabling more effective use of hardware and improving overall resource management.
Server Virtualization: Multiple virtual machines (VMs) run on a single physical host. Each VM can run its own operating system and applications.
Storage Virtualization: Physical storage devices are pooled and abstracted, presenting them as logical storage units. Users and applications interact with these logical units rather than with the underlying devices.
Network Virtualization: Physical network resources are abstracted into virtual networks that can be created and managed independently of the underlying hardware.
Resource Pooling: Physical resources, such as compute, storage, and networking, are combined into a shared pool. These pooled resources can then be allocated dynamically to users on demand.
Resource Abstraction: The details of the physical hardware are hidden behind simplified virtual interfaces.
Layering and virtualization are two closely related concepts, often used together in computer systems and cloud computing.
Layering: A system is organized as a stack of layers, where each layer offers services to the layer above and relies on the services of the layers below.
Virtualization, on the other hand, abstracts physical resources into virtual instances. Together, they allow complex systems to be built from simpler, independent components.
In cloud computing, compute, storage, and network resources are abstracted into virtual layers. These layers can then be combined and managed independently when delivering services.
Each layer exposes a well-defined set of functions, and this layered structure allows for flexibility and interoperability: functions can be abstracted and virtualized, allowing for more flexible and efficient use of resources. The separation also provides flexibility, as changes in one layer may not affect the others.
Cloud Computing: Cloud platforms combine layering and virtualization to share and manage hardware resources.
Virtual Machine Monitor (Hypervisor):
Definition: A virtual machine monitor (VMM), also called a hypervisor, is a software layer between the hardware and the operating systems (OS) running on it. It provides a virtual hardware interface so that multiple guest operating systems can operate independently on the same physical machine.
Types of Hypervisors:
Type 1 (Bare-Metal): Runs directly on the host's hardware to control the hardware and to manage guest operating systems. It does not require a host operating system and is often considered more efficient and secure. Examples include VMware ESXi, Xen, and KVM.
Type 2 (Hosted): Runs on top of a conventional host operating system. Examples include VirtualBox and VMware Workstation (installed on Windows).
Key Functions:
Resource Allocation: The hypervisor partitions CPU, memory, and I/O among virtual machines. Each virtual machine (VM) operates as if it has its own dedicated hardware.
Isolation: The hypervisor ensures that each virtual machine is isolated from the others, which improves security and stability.
Instruction Handling: The hypervisor intercepts privileged instructions that would normally interact with the physical hardware. This allows guest operating systems to run safely on shared hardware.
Performance Considerations:
Department of Cyber Security
Cloud Computing
Overhead: While virtualization provides many benefits, there is some overhead associated with the hypervisor. This can affect the performance of the guest operating systems.
Para-virtualization: In para-virtualization, guest operating systems are aware that they are running in a virtualized environment and communicate directly with the hypervisor, reducing this overhead.
Use Cases: Server consolidation, in which multiple virtual servers run on a single physical server.
Security Considerations: The hypervisor itself is a critical component; a compromise of the hypervisor can affect all of its virtual machines.
Virtual Machines: A virtual machine is a software emulation of a physical computer, with its own virtual CPU, memory, storage, and network interfaces. Here are key points about virtual machines:
Hypervisor: Each VM runs on top of a hypervisor, which allocates physical resources to it.
Types of Virtualization: VMs can use full virtualization, which runs an unmodified guest OS, or para-virtualization, which requires a modified guest OS.
Resource Efficiency: Multiple VMs can share the same physical hardware, leading to better utilization in consolidated environments.
Flexibility and Scalability: VMs can be easily moved between physical hosts, and new VMs can be created or removed based on demand.
Use Cases: Server consolidation, development and test environments, and desktop virtualization.
Containers are a related technology in which applications and their dependencies are packaged together. While VMs virtualize the entire hardware stack, containers share the host operating system kernel.
Virtualization comes in different forms, and two common types are full virtualization and para-virtualization, both implemented by a virtual machine monitor (VMM). Let's explore the characteristics of both full virtualization and para-virtualization:
Full Virtualization:
Definition: The hypervisor provides a complete software emulation of the underlying physical hardware.
Guest OS Awareness: The guest operating system is not modified and does not need to be aware that it is running in a virtualized environment.
Performance Overhead: Some overhead arises because the hypervisor must trap and emulate privileged instructions.
Examples: VMware products and VirtualBox.
Para-virtualization:
Definition: The guest operating system communicates with the hypervisor through an explicit interface (hypercalls), enabling more efficient communication.
Guest OS Modifications: The guest OS kernel must be modified to replace privileged operations with hypercalls.
Performance Benefits: Avoiding trap-and-emulate overhead typically yields better performance, particularly for I/O-intensive workloads.
Examples: Xen running para-virtualized guests.
Comparison:
Performance: Para-virtualization generally has lower overhead compared to full virtualization, making it more suitable for certain use cases that demand performance.
Compatibility: Full virtualization runs unmodified guest operating systems, whereas para-virtualization is possible only when the guest OS can be modified.
Performance Isolation:
Definition: Performance isolation ensures that the workload of one VM does not significantly impact the performance of other VMs sharing the same physical resources.
Resource Allocation: The hypervisor enforces limits and guarantees on CPU time, memory, storage I/O, and network bandwidth.
Overhead: Enforcing these limits adds some scheduling and accounting overhead.
Performance Monitoring: Continuous monitoring detects VMs that exceed or fall short of their allocations.
Live Migration: Technologies like live migration allow VMs to be moved from one physical host to another, rebalancing load without downtime.
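As a concrete illustration of resource allocation, here is a minimal sketch of weighted proportional-share allocation, one common way a hypervisor divides a resource among VMs; the VM names, weights, and 16-unit capacity are purely illustrative:

```python
# Sketch of weighted proportional-share allocation: each VM receives
# capacity in proportion to its weight. The VM names, weights, and the
# 16-unit total capacity below are illustrative assumptions.

def proportional_share(total, weights):
    """weights: VM name -> weight. Returns VM name -> allocated capacity."""
    total_weight = sum(weights.values())
    return {vm: total * w / total_weight for vm, w in weights.items()}

alloc = proportional_share(total=16, weights={"vm1": 2, "vm2": 1, "vm3": 1})
# vm1 receives 8 units; vm2 and vm3 receive 4 each
```

Real hypervisors combine such shares with hard caps and reservations, but the proportional rule is the core idea.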
Security Isolation:
Definition: Security isolation means confining each VM's activity and ensuring that activities within one VM do not compromise the security of other VMs or the host.
VM Isolation: VMs must be isolated from each other to prevent one VM from accessing another VM's memory or data; the hypervisor relies on hardware memory protection to achieve this.
Network Isolation: Virtual networks, VLANs, and firewalls keep the traffic of different VMs separated.
Security Policies: Each VM can be governed by its own policies, helping it meet compliance and security standards.
Secure Boot: Technologies like Secure Boot ensure that only authenticated and signed software is loaded, protecting VMs from malicious code.
Monitoring and Auditing: Security information and event management (SIEM) tools can be employed for this purpose.
In summary, both performance and security isolation are crucial for the safe and predictable operation of multi-tenant virtualized systems.
Hardware Support for Virtualization:
Nested Virtualization: Hardware extensions allow a virtual machine to itself act as a host and manage other virtual machines. This is useful for scenarios such as testing hypervisors and building development environments.
Memory Virtualization: Extensions such as Intel EPT and AMD NPT map guest virtual memory directly to the host's physical memory, reducing address-translation overhead.
I/O Virtualization: Intel VT-d and AMD-Vi provide hardware support for direct assignment of I/O devices to VMs, improving performance for I/O-intensive workloads and virtual desktop infrastructure (VDI).
Multi-Core Processors: Modern multi-core processors also include features supporting functions such as secure boot and secure storage within virtual machines.
What follows is a general understanding of Xen and vBlades, and how they can be combined in a hypothetical deployment.
Xen:
Overview: Xen is an open-source Type 1 (bare-metal) hypervisor that runs directly on the hardware.
Key Features:
Para-virtualization: Xen supports para-virtualized guests, whose modified kernels communicate with the hypervisor through hypercalls for better performance.
Dom0 and DomU: Xen architecture includes a privileged domain (Dom0) that
manages other unprivileged domains (DomU), each running its own operating
system.
vBlades:
Overview: vBlades is a research virtual machine monitor developed at HP Laboratories for the Itanium processor family, allowing the resources of blade servers to be used in a virtualized manner within a shared environment.
Scenario:
Implementation:
environment.
Benefits:
resources as needed.
Challenges:
Please note that this is a hypothetical scenario; the details of an actual deployment would differ.
Cloud Resource Management and Scheduling: Policies and Mechanisms
Resource management policies aim to optimize resource use, enhance performance, and meet service level agreements. Below are key policies and mechanisms for cloud resource management and scheduling:
Policies:
Load Balancing:
Priority-based Scheduling:
Policy: Allocate resources to higher-priority tasks or users first.
Fair-Share Scheduling:
Policy: Ensure that each user or group receives a fair share of resources.
Deadline-based Scheduling:
Cost Optimization:
Energy Efficiency:
Policy: Consolidate workloads and power down idle servers to reduce energy consumption and operational costs.
Fault Tolerance:
Policy: Replicate critical services across servers to ensure redundancy and availability for applications.
Mechanisms:
Dynamic Scaling:
Predictive Scheduling:
Task Migration:
Preemption: Suspend or pause lower-priority tasks when higher-priority tasks arrive.
Elastic Provisioning:
Control theory is a field of engineering and mathematics that deals with the behavior of dynamical systems and how to steer them using feedback. Several of its tools apply to task scheduling:
PID Control:
Application: PID (proportional-integral-derivative) controllers are widely used in control theory and can be applied to keep a scheduling metric, such as response time, near a target value for stable performance.
Use Case: A PID controller can monitor the system's response to task load and adjust resource allocations or scheduling decisions.
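The PID idea can be sketched as follows; the gains, the 0.2-second response-time target, and the rule that rounds the control signal into a VM-count change are illustrative assumptions, not a production autoscaler:

```python
# A minimal PID controller sketch for task scheduling. The gains, the
# 0.2 s response-time target, and the "round the signal into a VM count
# change" rule are illustrative assumptions.

class PIDController:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint        # target response time (seconds)
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured, dt=1.0):
        """Return a control signal; positive means 'add capacity'."""
        error = measured - self.setpoint          # positive when too slow
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PIDController(kp=2.0, ki=0.1, kd=0.5, setpoint=0.2)
vms = 4
for latency in [0.5, 0.4, 0.3, 0.25, 0.2]:   # measured each interval
    signal = pid.update(latency)
    vms = max(1, vms + round(signal))         # never drop below one VM
```

The proportional term reacts to the current error, the integral term removes steady-state error, and the derivative term damps overshoot.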
Model Predictive Control:
Use Case: By predicting the future state of the system and optimizing task placement over a time horizon, the scheduler can improve its decisions.
Adaptive Control:
Optimal Control:
Application: Optimal control theory aims to find the best scheduling strategy with respect to a defined cost function.
Use Case: In task scheduling, optimal control theory can be applied to find schedules that minimize cost or energy while meeting performance goals.
State Estimation:
Application: By estimating quantities that cannot be measured directly, such as the true system performance, the scheduler can make more informed decisions about task placement.
Queueing Theory:
Application: Queueing theory can model the flow of tasks through a system of shared resources.
Use Case: Understanding the queuing behavior of tasks can guide the design of scheduling policies and capacity planning.
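One standard queueing model is the M/M/1 queue. The sketch below computes its textbook metrics for an illustrative arrival rate of 8 tasks/s and service rate of 10 tasks/s:

```python
# Sketch of the standard M/M/1 queue metrics: Poisson arrivals at rate
# lam, exponential service at rate mu, requiring lam < mu for stability.
# The rates below (8 and 10 tasks per second) are illustrative.

def mm1_metrics(lam, mu):
    rho = lam / mu        # server utilization
    L = rho / (1 - rho)   # mean number of tasks in the system
    W = 1 / (mu - lam)    # mean time in system (Little's law: L = lam * W)
    Wq = rho / (mu - lam) # mean waiting time in the queue
    return {"utilization": rho, "tasks_in_system": L,
            "time_in_system": W, "wait_in_queue": Wq}

m = mm1_metrics(lam=8.0, mu=10.0)
# utilization 0.8, about 4 tasks in the system, 0.5 s mean time in system
```

Note how sharply delay grows as utilization approaches 1, which is why schedulers keep headroom.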
Applying control theory to task scheduling allows for dynamic and adaptive resource management and efficient operation of the system. Let's explore the concept of stability in a two-level resource allocation architecture:
Stability Considerations:
Feedback Mechanisms: Each level monitors the outcome of its decisions and adjusts its allocations as needed.
Adaptive Control: The upper-level manager might adapt its allocation policies to ensure that resources are distributed efficiently.
Performance Metrics:
Load Balancing: Distributing work across servers avoids hotspots and evens out utilization.
Predictive Modeling:
Example: The upper-level manager might use historical data and predictive models to forecast demand and adjust allocations accordingly.
Fault Tolerance:
Concurrency Control:
Challenges:
Dynamic Environments: Rapidly changing workloads make maintaining stability challenging.
Scalability: Coordinating allocation decisions across many servers and many users becomes complex in large environments.
In summary, stable feedback mechanisms and the ability to adapt to varying workloads and conditions are key components of a two-level resource allocation architecture.
Feedback Control Based on Dynamic Thresholds:
Feedback control adjusts system behavior based on the current state of the system. This approach allows the control system to correct deviations as they occur. The main components are:
System: The entity being controlled, such as a pool of servers.
Sensor: Measures the system's output, for example response time or utilization.
Controller: Compares the measurement with the desired target and computes a corrective action.
Actuator: Takes the output from the controller and implements changes in the system, such as adding or removing capacity.
Feedback Loop:
The control loop includes the sensor, controller, and actuator. Information
about the system's current state is continuously fed back to the controller to guide the next corrective action.
Dynamic Thresholds:
Definition: Dynamic thresholds are limits that change over time in response to workload patterns and environmental conditions.
Adaptability: Thresholds adjust automatically rather than being fixed by hand.
Response Time Thresholds: The acceptable response time for a service may vary with the time of day or with workload characteristics.
Implementation Steps:
Monitoring: Continuously collect metrics that reflect system health.
Threshold Calculation: Compute thresholds from recent data using statistical methods or machine learning algorithms.
Control Decision: Compare the current state against the dynamic threshold.
Adjustment: If the current state exceeds or deviates from the dynamic threshold, the controller takes corrective action, and the feedback loop informs the controller for further refinements. This iterative process continues as conditions change.
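The monitoring, threshold-calculation, and control-decision steps can be sketched as a simple statistical dynamic threshold; the window size and the two-standard-deviation margin are assumed tuning parameters:

```python
# Sketch of a dynamic threshold: the limit tracks a moving average of
# recent samples plus k standard deviations. The window size and k are
# assumed tuning parameters, not values from the text.

from collections import deque
import statistics

class DynamicThreshold:
    def __init__(self, window=20, k=2.0):
        self.samples = deque(maxlen=window)   # sliding window of observations
        self.k = k

    def observe(self, value):
        self.samples.append(value)

    def threshold(self):
        if len(self.samples) < 2:
            return float("inf")               # not enough data yet
        mean = statistics.fmean(self.samples)
        return mean + self.k * statistics.stdev(self.samples)

    def exceeded(self, value):
        """Control decision: does this value breach the current threshold?"""
        return value > self.threshold()

dt = DynamicThreshold(window=5, k=2.0)
for cpu in [40, 42, 41, 43, 40]:              # normal CPU readings
    dt.observe(cpu)
spike = dt.exceeded(70)                        # a clear outlier
```

Because the threshold follows the recent baseline, the same alerting rule works whether the normal load is 40% or 80%.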
Benefits:
Adaptability:
The system can respond to changes in real time without the need for manual reconfiguration.
Efficiency:
Resilience:
Automation:
Use Cases:
Network Management:
1. Coordination:
Transaction Management:
Event Ordering:
Lamport timestamps and vector clocks are commonly used for event
ordering.
Communication Protocols:
Reliable messaging and agreement protocols underpin distributed coordination.
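A minimal sketch of the Lamport-timestamp rule mentioned above (the class and method names are illustrative, not from any particular library):

```python
# Minimal Lamport logical clock for event ordering: tick on every local
# event or send, and on receipt jump past the sender's timestamp.

class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        """Local event or message send: advance the clock."""
        self.time += 1
        return self.time

    def receive(self, msg_time):
        """On message receipt, jump past the sender's timestamp."""
        self.time = max(self.time, msg_time) + 1
        return self.time

p1, p2 = LamportClock(), LamportClock()
t = p1.tick()        # p1 sends a message stamped 1
p2.receive(t)        # p2's clock becomes max(0, 1) + 1 = 2
```

This guarantees that if event a causally precedes event b, then a's timestamp is smaller; vector clocks extend the idea to detect concurrency as well.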
2. Resource Bundling:
Batch Processing:
Parallel Processing:
Bundled resources can execute many tasks concurrently. Techniques like task parallelism and data parallelism involve dividing tasks and data across the available resources.
Containerization:
Related services and their dependencies can be bundled and deployed together in containerized environments.
Job Scheduling:
Schedulers assign bundled jobs to appropriate sets of computing resources.
3. Scheduling Algorithms:
Scheduling algorithms determine how tasks are assigned to computing resources over time. These algorithms are critical for optimizing utilization and performance.
First-Come-First-Serve (FCFS):
Executing tasks in the order in which they arrive.
Shortest Job First (SJF):
Prioritizing tasks based on their execution time, with the shortest job scheduled first.
Round Robin:
Assigning tasks to resources in a circular order, with each task given a fixed
time slice.
Priority Scheduling:
Assigning priorities to tasks, and the task with the highest priority is
scheduled first.
Fair-Share Scheduling:
Ensuring that each user or group receives a fair share of resources over
time.
Deadline-Based Scheduling:
Scheduling tasks so that they complete before their deadlines.
Load Balancing:
Distributing tasks across resources to balance utilization and cost.
The right algorithm depends on the workload and the objectives being considered.
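To make the difference between two of these policies concrete, the sketch below compares average waiting time under FCFS and Shortest Job First on an illustrative set of burst times:

```python
# Comparing two of the policies above, FCFS and Shortest Job First, on
# one illustrative workload. Each task's burst time is its run length.

def avg_waiting_time(bursts):
    """Average wait when tasks run back-to-back in the given order."""
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed      # this task waits for everything before it
        elapsed += b
    return wait / len(bursts)

bursts = [8, 4, 2, 6]                    # arrival order (FCFS order)
fcfs = avg_waiting_time(bursts)          # (0 + 8 + 12 + 14) / 4 = 8.5
sjf = avg_waiting_time(sorted(bursts))   # (0 + 2 + 6 + 12) / 4 = 5.0
```

SJF minimizes average waiting time, but it can starve long jobs, which is one reason fair-share and priority schemes exist.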
Fair queuing, start time fair queuing, and cloud scheduling subject to deadlines are three related approaches to sharing cloud resources:
1. Fair Queuing:
Definition:
Fair queuing is a scheduling technique that allocates resources so that no single user or flow can monopolize the system.
Key Features:
Queues: Requests or tasks are placed in queues, and resources are shared among the queues in turn.
Applications:
2. Start Time Fair Queuing:
Definition:
Start time fair queuing is an extension of fair queuing that considers the start time of tasks or jobs. It aims to provide fairness not only in resource allocation but also in terms of the time each user or application has been waiting for service.
Key Features:
start time fair queuing considers the cumulative time each entity has been
using resources.
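A minimal sketch of the start time fair queuing tag computation: each task receives a start tag (the later of the system's virtual time and its flow's previous finish tag) and a finish tag (start plus cost divided by weight), and tasks are dispatched in increasing start-tag order. The flow names, costs, and weights below are illustrative:

```python
# Sketch of start time fair queuing (SFQ) tags: a task from a flow gets
# start tag S = max(virtual time, flow's previous finish tag) and finish
# tag F = S + cost / weight; tasks run in increasing start-tag order.

import heapq

class SFQScheduler:
    def __init__(self):
        self.vtime = 0.0    # virtual time = start tag of the task in service
        self.finish = {}    # last finish tag per flow
        self.queue = []     # min-heap of (start_tag, seq, flow)
        self.seq = 0        # tie-breaker for equal start tags

    def enqueue(self, flow, cost, weight):
        start = max(self.vtime, self.finish.get(flow, 0.0))
        self.finish[flow] = start + cost / weight
        heapq.heappush(self.queue, (start, self.seq, flow))
        self.seq += 1

    def dispatch(self):
        start, _, flow = heapq.heappop(self.queue)
        self.vtime = start   # advance virtual time to the served task's tag
        return flow

s = SFQScheduler()
s.enqueue("A", cost=1.0, weight=2.0)   # finish tag 0.5 (twice the weight)
s.enqueue("B", cost=1.0, weight=1.0)   # finish tag 1.0
s.enqueue("A", cost=1.0, weight=2.0)   # start tag 0.5
order = [s.dispatch() for _ in range(3)]
```

Because flow A's higher weight shrinks its finish tags, its second task starts sooner than a second task from B would, which is exactly the weighted fairness SFQ provides.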
Applications:
3. Cloud Scheduling Subject to Deadlines:
Definition:
Tasks are scheduled so that they complete within specified deadlines.
Key Features:
Applications:
Deadline-based scheduling is important for real-time applications where tasks must be completed within specific time limits.
Challenges:
Example Scenario:
A cloud provider schedules a mix of batch jobs so as to keep utilization high while ensuring that tasks are completed within their specified deadlines.
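One common way to schedule subject to deadlines is Earliest Deadline First; the sketch below orders an illustrative task set by deadline on a single resource and checks whether each task completes in time:

```python
# Earliest Deadline First sketch: dispatch tasks in deadline order on a
# single resource and flag any that finish late. The task list is
# illustrative: (name, duration, deadline).

def edf_schedule(tasks):
    order, missed, clock = [], [], 0
    for name, duration, deadline in sorted(tasks, key=lambda t: t[2]):
        clock += duration
        order.append(name)
        if clock > deadline:
            missed.append(name)       # completed after its deadline
    return order, missed

tasks = [("t1", 3, 10), ("t2", 2, 4), ("t3", 4, 9)]
order, missed = edf_schedule(tasks)
# order: t2 (done at 2), t3 (done at 6), t1 (done at 9); nothing missed
```

If any feasible schedule exists on a single resource, EDF will find one, which is why it is a standard baseline for deadline-driven workloads.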
In summary, fair queuing, start time fair queuing, and cloud scheduling subject to deadlines each address a different aspect of fairness and timeliness in resource allocation.
MapReduce Overview: MapReduce processes large data sets by splitting work into parallel map tasks whose intermediate results are aggregated by reduce tasks.
Scheduling Considerations:
Data Locality: Assign tasks to nodes where the data is already present (data locality) to avoid network transfers.
Parallelism: Independent map tasks can run concurrently.
Schedulers: Frameworks such as Hadoop YARN offer pluggable schedulers (FIFO, Capacity, Fair) that govern resource allocation.
2. Resource Management: The resource manager tracks cluster capacity and handles allocation, balancing utilization, costs, and data locality.
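A simplified sketch of data-locality-aware task assignment; the node names and block placements are illustrative, and real schedulers such as Hadoop's are considerably more elaborate:

```python
# Simplified data-locality-aware assignment: when a node asks for work,
# prefer a pending map task whose input block is stored on that node,
# falling back to any pending task. Names below are illustrative.

def pick_task(node, pending, block_locations):
    """pending: list of task ids; block_locations: task id -> set of nodes."""
    for task in pending:
        if node in block_locations.get(task, set()):
            pending.remove(task)
            return task, True                 # data-local assignment
    return (pending.pop(0), False) if pending else (None, False)

locations = {"m1": {"nodeA"}, "m2": {"nodeB"}, "m3": {"nodeA", "nodeC"}}
pending = ["m1", "m2", "m3"]
task, local = pick_task("nodeB", pending, locations)
```

Here nodeB receives m2, the one task whose data it already holds, so no block needs to cross the network.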
3. Elastic Scaling:
Definition: Resources are added or removed automatically to match changing workloads.
Automatic Scaling:
MapReduce-Specific Scaling:
Cost Optimization: Scale resources up during peak demand and down during idle periods to reduce costs.
Challenges: There is a trade-off between the overhead of provisioning and de-provisioning resources and the system's responsiveness.
Holistic Approach:
Optimization Considerations:
Apache Spark applications can leverage Kubernetes for dynamic scaling and resource management.