
Resource Management and Load Balancing


Agenda
01 Distributed Management of Virtual Infrastructures
02 Server Consolidation
03 Dynamic Provisioning and Resource Management
04 Resource Optimization
05 Resource Dynamic Reconfiguration
06 Scheduling Techniques for Advance Reservation
07 Capacity Management to Meet SLA Requirements
08 Load Balancing and its Techniques
Resource Management (Resource
Allocation, Provisioning, Scheduling
and Load Balancing)
A Few Questions…
• What is the role of such load balancing / resource provisioning / scheduling algorithms in cloud infrastructure management?
• How do you develop scheduling / resource provisioning / load balancing algorithms when asked to?
• How would you claim your algorithm is good?
• Where would you test your algorithm?
• How would you give statistics of performance improvement?
• Do you have any standard for performance estimation?
• How would you benchmark the outcomes?
• Why do we need to develop more load balancing / scheduling algorithms?
• How long do we require to develop such load balancing / scheduling algorithms?
Let us have a look at the CloudAnalyst tool
Cloud as a Marketplace of “Computing Utilities”

[Figure: users and enterprise IT consumers (via proxy servers) reach compute and storage clouds through Cloud Brokers, which negotiate/bid on the Cloud Exchange (CEx) – a marketplace with a directory, auctioneer, and bank – while Cloud Coordinators publish offers for VM pools delivering SaaS, PaaS, and IaaS.]
Introduction to Resource Allocation
Resource Allocation Process
Parameters for resource allocation

• Response time: Minimum time to respond to a service request to perform the task.

• Reliability: The ability to successfully complete the task at runtime.

• Performance: The number of tasks performed on the request of cloud users.

• Execution time: Also called completion time; the time taken to satisfy the demands of cloud users.

• Workload: The amount of processing to be carried out in a particular amount of time; the ability to process cloud computing jobs.

• Utilization: The overall amount of resources currently used in data centers. Cloud computing involves maximizing the use of resources to optimize the revenue and income of cloud providers to the satisfaction of cloud users.
Parameters for resource allocation

• Throughput: The total number of tasks fully completed within a given period.

• SLA: An agreement that describes the QoS offered by cloud providers to cloud users. The cloud provider is committed to delivering the best service it can to serve the needs of a cloud customer and avoid violating the SLA.

• Power: The VM placement and migration strategies used in the cloud data center must reduce power consumption.

• Fault tolerance: The system should continue to provide service in spite of the failure of resources.

• Cost: The amount billed for the use of cloud computing facilities. This is an expense to cloud customers and a benefit and income to cloud providers.

• Bandwidth/speed: Maximum data transmission rate of the network links.

• Availability: A collection of services that allow accessibility, maintenance, reliability, durability, and serviceability of the resources, on request of cloud consumers, to perform the specified or necessary activity.
Parameters for resource allocation
VM Provisioning

● The VM Provisioner manages VMs (creation, ...)
● Defined at the Data Center level
  • Different Data Centers in the same simulation may use different policies
● Which host will receive the VM? (see the placement sketch below)
  • Scheduling or load balancing
  • Consolidation (green IT)
● Migration
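As a rough illustration of the "which host will receive the VM?" decision, here is a minimal, self-contained Java sketch of a load-balancing placement rule that picks the host with the most free PEs. SimpleHost and SimpleVm are stand-ins invented for this example; in CloudSim this decision would typically live in a custom VmAllocationPolicy, whose API is not reproduced here.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Stand-in entities for this sketch only (not the CloudSim Host/Vm classes).
class SimpleHost {
    final int id;
    final int totalPes;
    int usedPes = 0;
    SimpleHost(int id, int totalPes) { this.id = id; this.totalPes = totalPes; }
    int freePes() { return totalPes - usedPes; }
}

class SimpleVm {
    final int id;
    final int requiredPes;
    SimpleVm(int id, int requiredPes) { this.id = id; this.requiredPes = requiredPes; }
}

public class VmProvisionerSketch {
    // Load-balancing placement: choose the host with the most free PEs that still fits the VM.
    static Optional<SimpleHost> selectHost(List<SimpleHost> hosts, SimpleVm vm) {
        return hosts.stream()
                .filter(h -> h.freePes() >= vm.requiredPes)
                .max(Comparator.comparingInt(SimpleHost::freePes));
    }

    public static void main(String[] args) {
        List<SimpleHost> hosts = new ArrayList<>();
        hosts.add(new SimpleHost(0, 8));
        hosts.add(new SimpleHost(1, 4));

        SimpleVm vm = new SimpleVm(0, 2);
        selectHost(hosts, vm).ifPresentOrElse(
                h -> { h.usedPes += vm.requiredPes;
                       System.out.println("VM " + vm.id + " placed on host " + h.id); },
                () -> System.out.println("No host can accept VM " + vm.id));
    }
}
```

A consolidation-oriented (green IT) variant would use min(...) instead, packing VMs onto as few hosts as possible so that idle hosts can be powered down.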
VM Scheduling

● The VM Scheduler is defined at the Host level
  • Different hosts in the same Data Center may have different policies
● How to share PEs among the VMs in the host? (see the sketch below)
  • Xen: RR, Credit
  • Time-shared, space-shared, proportional
● Provisioning policies for other resources (memory and bandwidth) are also customizable
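To make the time-shared vs. space-shared choice concrete, the following self-contained sketch uses assumed numbers (a host with 2 PEs of 1000 MIPS each, and 4 single-PE VMs each asking for 1000 MIPS) to show how much processing capacity each VM effectively gets under the two policies. It only mirrors the idea behind CloudSim's VmSchedulerSpaceShared and VmSchedulerTimeShared; it does not call their API.

```java
// Assumed host: 2 PEs of 1000 MIPS each; 4 single-PE VMs each requesting 1000 MIPS.
public class VmSchedulerSketch {
    public static void main(String[] args) {
        int hostPes = 2;
        double mipsPerPe = 1000.0;
        int vmCount = 4;

        // Space-shared: each running VM gets a whole PE; the rest wait until a PE is free.
        int runningNow = Math.min(vmCount, hostPes);
        System.out.printf("Space-shared: %d of %d VMs run at %.0f MIPS, %d must wait%n",
                runningNow, vmCount, mipsPerPe, vmCount - runningNow);

        // Time-shared: all VMs run concurrently, each getting a proportional slice of the host.
        double sharePerVm = (hostPes * mipsPerPe) / vmCount;
        System.out.printf("Time-shared: all %d VMs run at %.0f MIPS each%n", vmCount, sharePerVm);
    }
}
```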
Task scheduling

• The Cloudlet Scheduler is defined at the VM level
  • Different VMs in the same host may have different policies
• How to share the processing power allocated to a VM among Cloudlets? (a completion-time sketch follows)
  • OS scheduling
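The quantity every cloudlet (task) scheduler ultimately reasons about is the estimated completion time. The sketch below, with assumed cloudlet lengths and a hypothetical 1000-MIPS VM, estimates finish times under a time-shared split of the VM's capacity; a real scheduler would recompute the shares whenever a cloudlet finishes.

```java
// Rough finish-time estimate for cloudlets sharing one VM (time-shared split).
public class CloudletScheduleSketch {
    public static void main(String[] args) {
        double vmMips = 1000.0;                            // assumed VM capacity
        long[] cloudletLengthMi = {20000, 40000, 60000};   // assumed lengths in million instructions

        double mipsPerCloudlet = vmMips / cloudletLengthMi.length;
        for (int i = 0; i < cloudletLengthMi.length; i++) {
            // Pessimistic estimate: assumes all three cloudlets stay active the whole time.
            double estSeconds = cloudletLengthMi[i] / mipsPerCloudlet;
            System.out.printf("Cloudlet %d: ~%.0f s%n", i, estSeconds);
        }
        // Under a space-shared cloudlet scheduler with one PE they would instead run one
        // after another: 20 s, then 40 s, then 60 s (cumulative finishes at 20, 60, 120 s).
    }
}
```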
Broker

Datacenter Broker: "Cloud scheduling"
• Selects the Data Center to host the VMs
• Selects the VMs to run Cloudlets
• Application model: PS? BoT? Workflow?
• Economic decisions

[Figure: the user/broker submits VMs and Cloudlets over the network to the Data Center (hosts, VMM, PEs).]
Resource management in CC

Resource management strategies can be grouped into two paradigms – resource provisioning and resource scheduling. Each of those paradigms can be split into various categories.
Resource Provisioning

Resource provisioning is defined as the act of allocating virtualized resources to users.

Resource Provisioning

• Create VMs (once a user makes a request) and allocate them to the user on demand.
• Responsible for meeting user needs based on QoS and the SLA, and for matching resources to upcoming workloads.
• NEED: to detect and select the best/optimal resources based on the requirements, with minimal maintenance.
• Map upcoming requests to the running VMs considering QoS and the SLA (a toy mapping sketch follows).
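The sketch below illustrates the last point: mapping an incoming request to a running VM under capacity and SLA constraints. The VmOffer fields, thresholds, and tie-breaking rule are assumptions made for this example, not a provisioning API.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// A running VM described by the capacity it still has and the response time it can promise.
class VmOffer {
    final int vmId; final int freeCpu; final int freeRamMb; final double promisedRespMs;
    VmOffer(int vmId, int freeCpu, int freeRamMb, double promisedRespMs) {
        this.vmId = vmId; this.freeCpu = freeCpu; this.freeRamMb = freeRamMb;
        this.promisedRespMs = promisedRespMs;
    }
}

public class RequestMappingSketch {
    // Map a request to a running VM that satisfies its demand and SLA response-time target,
    // preferring the VM with the most spare CPU (a simple load-balancing tie-breaker).
    static Optional<VmOffer> map(List<VmOffer> running, int cpu, int ramMb, double slaRespMs) {
        return running.stream()
                .filter(v -> v.freeCpu >= cpu && v.freeRamMb >= ramMb && v.promisedRespMs <= slaRespMs)
                .max(Comparator.comparingInt(v -> v.freeCpu));
    }

    public static void main(String[] args) {
        List<VmOffer> running = List.of(
                new VmOffer(1, 2, 2048, 120.0),
                new VmOffer(2, 4, 4096, 80.0));
        map(running, 2, 1024, 100.0)
                .ifPresentOrElse(v -> System.out.println("Request mapped to VM " + v.vmId),
                                 () -> System.out.println("No running VM meets QoS; provision a new one"));
    }
}
```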
Types of Resource Provisioning

• On-demand provisioning: an intermediate-level, pay-per-hour plan; if demand exceeds the reserved value, it arranges resources at a higher cost than advance-reservation resources.

• Advance reservation: a long-term plan to reserve resources in advance for a specific time period; useful in federated clouds as well as EC2. It requires prediction of future demand and prices, and over-provisioning and under-provisioning are key issues.

• Spot instances: a short-term plan to bid on unused resources – Amazon's third plan, offering unused resources at lower cost than on-demand and advance reservation (AWS, Google, and Azure). Prices varying frequently with supply and demand is the major issue.

A rough cost comparison of the three plans is sketched below.
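As a back-of-the-envelope comparison of the three plans, the sketch below uses purely hypothetical hourly rates; real prices differ by provider, region, and instance type, and spot prices fluctuate with supply and demand.

```java
// Hypothetical hourly rates chosen only to illustrate the trade-off between the plans.
public class ProvisioningCostSketch {
    public static void main(String[] args) {
        double onDemandPerHour = 0.10;   // assumed pay-per-hour rate
        double reservedPerHour = 0.06;   // assumed effective rate of an advance reservation
        double spotPerHour     = 0.03;   // assumed average accepted bid for spot capacity

        int hoursPerMonth = 730;
        System.out.printf("On-demand : $%.2f per month%n", onDemandPerHour * hoursPerMonth);
        System.out.printf("Reserved  : $%.2f per month (pays off only if utilization stays high)%n",
                reservedPerHour * hoursPerMonth);
        System.out.printf("Spot      : $%.2f per month (instances can be reclaimed at any time)%n",
                spotPerHour * hoursPerMonth);
    }
}
```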
Advantages of cloud resource provisioning

• Efficient resource provisioning techniques reduce the makespan and response time of submitted workloads.

• The issues of over-provisioning and under-provisioning can be reduced through optimized utilization of resources.

• Better resource provisioning can be brought to cloud environments by reducing VM startup delay.

• Both robustness and fault-tolerance capabilities can be improved using effective cloud resource provisioning algorithms.

• Power consumption can also be reduced using a resource provisioning algorithm without violating the SLA.
Resource/Task Scheduling

Scheduling is the art of analyzing the required QoS parameters with the aim of determining which activity should be performed. In clouds, scheduling is responsible for:

i. selecting the optimal VM to execute a task using a heuristic/meta-heuristic algorithm, and
ii. ensuring the fulfillment of QoS constraints.
Task Scheduling

• To generate an order of task assignments to the allocated VMs, considering quality-of-service (QoS) parameters and the SLA.
• To select optimal virtual machines for the execution of tasks using either a heuristic or a meta-heuristic algorithm.
• On demand: the service provider may assign tasks to a random VM, giving an unequal distribution; over-provisioning may occur.
• Long-term reservation: under-provisioning may occur.
• Both over-provisioning and under-provisioning increase the cost of services due to unnecessary wastage of resources and time.
Provisioning and Scheduling of cloud resources

Resource Scheduling
Different Strategies of scheduling in clouds
Categorization of cloud task scheduling schemes

Traditional Scheduling -

Heuristic Scheduling - Heuristic algorithms depend on the nature of the problem; they perform very well on certain problems while showing low performance on others.

Meta-Heuristic Scheduling - Meta-heuristic algorithms have gained wide adoption because of their effectiveness in solving complex and large computational problems.
Meta-heuristic = Heuristic + Randomization.
Categorization of cloud task
scheduling schemes
Mainstream Algorithms under Heuristic algorithms

• Min-Min - The shortest task is picked first and mapped to the VM that completes it earliest (a sketch follows this list).
• Max-Min - The longest task is assigned to the VM for execution.
• FCFS – Assigns incoming tasks based on arrival time: first come, first served.
• HEFT - Heterogeneous Earliest Finish Time is a list-based scheduling heuristic in which a task priority list is first built, so that locally optimal allocation decisions are then made for each task based on the task's estimated completion time.
• SJF – Assigns the task with the shortest CPU burst time first. If two tasks have the same burst time, FCFS is applied.
• RR – Round Robin assigns tasks immediately, allocating the available resources to incoming tasks in circular order.
• MCT – Minimum Completion Time schedules tasks based on their expected minimum completion time.
• Sufferage – A heuristic in which a resource is mapped immediately to the task that would likely "suffer" the most, according to a sufferage value derived from its expected completion times.
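Below is a minimal, self-contained Java sketch of the Min-Min heuristic from the list above, with assumed task lengths (in MI) and VM speeds (in MIPS): at every step it maps the task/VM pair with the smallest completion time and updates that VM's ready time. Max-Min differs only in that it picks the task whose best completion time is the largest.

```java
import java.util.ArrayList;
import java.util.List;

// Min-Min sketch: completion time of task t on VM v = readyTime[v] + length[t] / mips[v].
public class MinMinSketch {
    public static void main(String[] args) {
        double[] taskLengthMi = {8000, 3000, 12000, 5000};  // assumed workload
        double[] vmMips = {1000, 500};                      // assumed VM speeds
        double[] vmReady = new double[vmMips.length];

        List<Integer> unscheduled = new ArrayList<>();
        for (int t = 0; t < taskLengthMi.length; t++) unscheduled.add(t);

        while (!unscheduled.isEmpty()) {
            int bestTask = -1, bestVm = -1;
            double bestCt = Double.MAX_VALUE;
            // Pick the (task, VM) pair with the minimum completion time.
            for (int t : unscheduled) {
                for (int v = 0; v < vmMips.length; v++) {
                    double ct = vmReady[v] + taskLengthMi[t] / vmMips[v];
                    if (ct < bestCt) { bestCt = ct; bestTask = t; bestVm = v; }
                }
            }
            vmReady[bestVm] = bestCt;                        // the chosen VM is busy until then
            unscheduled.remove(Integer.valueOf(bestTask));
            System.out.printf("Task %d -> VM %d (finishes at %.1f s)%n", bestTask, bestVm, bestCt);
        }
        System.out.printf("Makespan: %.1f s%n", Math.max(vmReady[0], vmReady[1]));
    }
}
```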
Meta-heuristics based approaches
in cloud task scheduling
Cloud Task Scheduling
Cloud Task Scheduling
Open Issues and Challenges

• Resource Scheduling
• Quality of Service (QoS)
• Service level agreements (SLAs)
• Self-management service
• Energy management
• Dynamic Scalability
• Reliability
• Scheduling based on emerging meta-heuristic approaches
Future Trends

• Priority of users: Cloud is ultimately a business model; thus, during the execution of submitted applications, the prioritization of cloud consumers should be taken into consideration.
• Green computing: Energy-aware task scheduling needs extensive research so that computing resources can be used in a more user- and environment-friendly way, reducing the use of contaminating materials.
• Resource controlling: Key mechanisms, such as monitoring task migration, VM migration, and memory or CPU utilization, should be handled in a more controlled manner.
• Workload prediction: There is a need for more effective workload estimation techniques to predict the scale of upcoming workloads, thereby increasing both throughput and resource utilization.
• Network bandwidth: Network bandwidth has generally not received enough attention in the majority of current techniques, although disregarding it might cause communication delay, data loss, general network failure, etc.
Future Trends

• Fog computing: The traditional elastic cloud suffers from issues regarding security and delays, which can be addressed by the new trend of fog computing, which provides a higher level of heterogeneity and decentralization.
• Failure prediction: Resource failures including missing resources, storage failure, network failure, hardware failure, software failure, computing failure, database failure, overflow, underflow, and timeout can be predicted using diverse ML techniques.
• Failure management: The management of task migration and failure has been tackled by only a few scheduling algorithms; future research should address these features to maintain the availability and constancy of the system.
• IoT: Managing IoT devices and multimedia content is a critical recent trend in cloud task scheduling.
• Next-generation computing: Nano-computing-based/quantum, non-traditional architectures are attractive environments that should be involved in the next-generation cloud.
Load Balancing

• Cloud computing resources can be scaled up on demand to meet the performance requirements of applications.
• Cloud load balancing reduces the costs associated with cloud management systems and maximizes the availability of resources.
• Load balancing distributes workloads across multiple servers to meet the application requirements.
• The goals of load balancing techniques include:
  • Achieving maximum utilization of resources
  • Minimizing response times
  • Maximizing throughput
Load Balancing metrics

• Throughput: The number of processes completed per unit time.
• Response time: The total time the system takes to serve a submitted task.
• Makespan: The maximum completion time, or the time when the resources are allocated to a user.
• Scalability: The ability of an algorithm to perform uniform load balancing as the number of nodes increases; the preferred algorithm is highly scalable.
• Fault tolerance: The capability of the algorithm to perform load balancing in the event of failures in some nodes or links.
• Migration time: The amount of time required to transfer a task from an overloaded node to an under-loaded one.
Load Balancing metrics

• Degree of imbalance: Measures the imbalance among VMs (see the sketch after this list).
• Performance: Measures the system efficiency after applying a load-balancing algorithm.
• Energy consumption: The amount of energy consumed by all nodes. Load balancing helps to avoid overheating, and therefore reduces energy usage, by spreading the load across all the nodes.
• Carbon emission: The amount of carbon produced by all resources. Load balancing plays a key role in minimizing this metric by moving loads off under-loaded nodes and shutting them down.
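A quick way to see two of these metrics is to compute them from per-VM completion times, as in the sketch below. The degree-of-imbalance formula used here, (Tmax − Tmin) / Tavg, is one common definition in the load-balancing literature; other papers normalize differently.

```java
import java.util.Arrays;

// Computing makespan and degree of imbalance from assumed per-VM completion times.
public class LoadBalancingMetricsSketch {
    public static void main(String[] args) {
        double[] vmFinishTimes = {42.0, 55.0, 61.0, 47.0};  // assumed, in seconds

        double tMax = Arrays.stream(vmFinishTimes).max().getAsDouble();   // makespan
        double tMin = Arrays.stream(vmFinishTimes).min().getAsDouble();
        double tAvg = Arrays.stream(vmFinishTimes).average().getAsDouble();
        double degreeOfImbalance = (tMax - tMin) / tAvg;

        System.out.printf("Makespan: %.1f s%n", tMax);
        System.out.printf("Degree of imbalance: %.2f (lower means better balanced)%n",
                degreeOfImbalance);
    }
}
```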
Policies in dynamic load-balancing algorithms

• Dynamic load-balancing algorithms use the current state of the system; they apply the following policies:
• Transfer policy: determines the conditions under which a task should be transferred from one node to another.
  • This rule relies on the workload of each of the nodes. It includes task re-scheduling and task migration.
• Selection policy: determines which task should be transferred.
  • Factors for task selection include the amount of overhead required for migration, the number of non-local system calls, and the execution time of the task.
Policies in dynamic load-balancing algorithms

• Location policy: determines which nodes are underloaded and transfers tasks to them; it checks the availability of the services necessary for task migration or task rescheduling in the targeted node.
• Information policy: collects all information regarding the nodes in the system; the other policies use it to make their decisions.
• Flow: incoming tasks -> Transfer policy (decide whether to process locally or transfer to a remote node) -> Selection policy (select which task, based on some parameters) -> Location policy (choose the remote node, if a transfer was decided), with the Information policy feeding them all. A sketch of this pipeline follows.
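The pipeline above can be sketched in a few lines of self-contained Java. The LbNode and LbTask classes, the 0.8/0.4 load thresholds, and the migration-cost field are all assumptions made for this illustration, not parts of any specific load balancer.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Illustrative stand-ins: a node with a load figure and a task with a migration cost.
class LbNode {
    final String name; final double load;                  // e.g. CPU utilization in [0, 1]
    LbNode(String name, double load) { this.name = name; this.load = load; }
}
class LbTask {
    final String id; final double migrationCost;
    LbTask(String id, double migrationCost) { this.id = id; this.migrationCost = migrationCost; }
}

public class DynamicLbPoliciesSketch {
    static final double OVERLOADED = 0.8, UNDERLOADED = 0.4;   // assumed thresholds

    // Transfer policy: only consider migration when the local node is overloaded.
    static boolean shouldTransfer(LbNode local) { return local.load > OVERLOADED; }

    // Selection policy: pick the task that is cheapest to move.
    static Optional<LbTask> selectTask(List<LbTask> tasks) {
        return tasks.stream().min(Comparator.comparingDouble(t -> t.migrationCost));
    }

    // Location policy: pick the least-loaded underloaded node
    // (the information policy is what keeps these load figures up to date).
    static Optional<LbNode> selectTarget(List<LbNode> nodes) {
        return nodes.stream().filter(n -> n.load < UNDERLOADED)
                .min(Comparator.comparingDouble(n -> n.load));
    }

    public static void main(String[] args) {
        LbNode local = new LbNode("node-1", 0.92);
        List<LbNode> others = List.of(new LbNode("node-2", 0.35), new LbNode("node-3", 0.55));
        List<LbTask> tasks = List.of(new LbTask("t1", 5.0), new LbTask("t2", 1.5));

        if (shouldTransfer(local)) {
            selectTask(tasks).ifPresent(task ->
                selectTarget(others).ifPresent(target ->
                    System.out.println("Migrate " + task.id + " from " + local.name
                            + " to " + target.name)));
        }
    }
}
```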
Challenges in cloud-based load balancing

Virtual machine migration (time and security)

• The service-on-demand nature of cloud computing implies that whenever there is a service request, the resources should be provided.
• Sometimes VMs must be migrated from one physical server to another, possibly at a distant location.
• Designers of load-balancing algorithms have to consider two issues in such cases: the time of migration, which affects performance, and the probability of attacks (a security issue).
Challenges in cloud-based
load balancing

Spatially distributed nodes in a cloud


• Nodes in cloud computing are distributed geographically.
• Load balancing algorithms should be designed considering parameters such as the
network bandwidth, communication speeds, the distances among nodes, and the
distance between the client and resources.
Challenges in cloud-based
load balancing

Single point of failure


• Most of the load-balancing algorithms are centralized.
• In such cases, if the node executing the algorithm (controller) fails, the whole
system will crash because of that single point of failure.
• The challenge here is to design distributed or decentralized algorithms.
Algorithm complexity
• Load-balancing algorithms should be simple in terms of implementation and operation.
• Complex algorithms have negative effects on overall performance.
Challenges in cloud-based load balancing

Emergence of small data centers in cloud computing

• Small data centers are cheaper and consume less energy than large data centers, and their computing resources are distributed all around the world.
• The challenge here is to design load-balancing algorithms for an adequate
response time.
Energy management
• Load-balancing algorithms should be designed to minimize the amount of energy
consumption: Energy usage reduction and carbon emission reduction.
• Load-balancing mechanisms are necessary for achieving green computing in a
cloud.
Adding a Load Balancer in CloudAnalyst

1. Download CloudAnalyst ( http://www.cloudbus.org/cloudsim/CloudAnalyst.zi) and import it into Eclipse.
2. Create and add your load balancer (a minimal sketch follows these steps):
   a. Create your own algorithm under cloudsim.ext.datacenter and name it MyLoadBalancer.java
   b. Create a string in constant.java under cloudsim.ext and name it LOAD_BALANCE_MYLB
   c. Add the policy in ConfigureSimulationPanel.java under cloudsim.ext.gui.screen <Line-435>
   d. Add the policy using if-else conditions in DatacenterController.java under cloudsim.ext.datacenter <Line-102>
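A minimal MyLoadBalancer sketch is shown below, modeled on the round-robin balancer that ships with CloudAnalyst. It assumes (verify against your copy of the source) that the cloudsim.ext.datacenter.VmLoadBalancer base class declares an abstract int getNextAvailableVm() that the datacenter controller calls for each incoming request, and that the controller supplies the map of VM states; swap the round-robin choice for your own selection logic.

```java
package cloudsim.ext.datacenter;

import java.util.Map;

// Sketch only: assumes VmLoadBalancer and VirtualMachineState exist in this package
// as in the stock CloudAnalyst sources; adjust to the actual base-class API if it differs.
public class MyLoadBalancer extends VmLoadBalancer {

    private final Map<Integer, VirtualMachineState> vmStates;  // supplied by the controller
    private int currVm = -1;

    public MyLoadBalancer(Map<Integer, VirtualMachineState> vmStates) {
        this.vmStates = vmStates;
    }

    @Override
    public int getNextAvailableVm() {
        // Round-robin placeholder: replace this with your own policy
        // (e.g. least-loaded VM, throttled allocation, ...).
        currVm = (currVm + 1) % vmStates.size();
        allocatedVm(currVm);   // book-keeping hook on the base class (assumed)
        return currVm;
    }
}
```

After registering the LOAD_BALANCE_MYLB constant and the GUI/controller branches from steps b–d, the new policy should appear in the simulation configuration panel.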
