Cloud operations management

The document discusses changes in cloud operations management, highlighting the shift in roles from traditional infrastructure management to cloud-based operations, which require new technologies for efficiency. It also covers memory management techniques in cloud computing, emphasizing the complexity of scheduling tasks and resource allocation in heterogeneous systems. Additionally, it explores the significance of threads and objects in cloud programming, detailing their roles in enhancing concurrency and resource sharing.

1- Changes in roles of operations

Organizations are said to spend seventy percent of their information and communications
technology (ICT) budgets on management and operations, leaving only thirty percent for new
development. This is mostly due to the individualized nature of on-site operational expertise,
which makes automation, and operations requiring little skill, difficult. Historically, businesses
have managed their ICT systems by assigning an infrastructure operation manager to build and
maintain the infrastructure in accordance with the planners' instructions. In a cloud
environment, an operations administrator builds and manages the infrastructure directly. The
responsibilities once held by a single infrastructure operation manager are now split between a
center operation manager and an operations administrator. The center operation manager is in
charge of overseeing daily operations, managing the connections between various infrastructure
components, and maintaining large-scale infrastructure. The operations administrator oversees
billing and operational status and monitors operational quality. Cloud users expect to build and
run their systems at a lower cost than with traditional systems, which creates demand for novel
and innovative technologies in operations management.
The sections that follow cover new technologies for operations management in the cloud era. In
particular, the paper's thesis is that virtualization technology, or something similar, has made
universal infrastructure control conceivable. Next, from the standpoint of Platform as a Service
(PaaS), it addresses technologies for managing a system's whole life cycle, including its design,
development, and operation, with an emphasis on their features and unique selling points.

Memory Management Techniques in Cloud Computing

Traditionally, scheduling has been viewed as an NP-complete optimization problem in which
execution time is the only relevant objective. The complexity of current heterogeneous cloud
systems introduced additional goals, including energy consumption, operating expenses,
throughput, makespan, fault tolerance, dependability, security, predictability, and elasticity.
The problem is that these goals frequently conflict, necessitating new, intricate algorithms to
find the best solution that accounts for all of the competing objectives.
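One common simplification of such conflicting goals is to fold them into a single weighted score. The sketch below is purely illustrative (the metric names, values, and weights are invented, and real schedulers use far more sophisticated multi-objective methods); it only shows how shifting the weights flips which schedule "wins":

```python
# Illustrative sketch: scoring candidate schedules against several
# conflicting objectives by combining them with tunable weights.
# All metric names, values, and weights are hypothetical.

def schedule_score(metrics, weights):
    """Return a weighted score; lower is better.

    metrics -- dict of objective name -> value, where lower is better
               (e.g. makespan in seconds, energy in joules, cost in dollars).
    weights -- dict of objective name -> non-negative weight.
    """
    total_weight = sum(weights.values())
    return sum(weights[k] * metrics[k] for k in weights) / total_weight

# Two candidate schedules: one fast but expensive, one slow but cheap.
fast = {"makespan": 10.0, "energy": 8.0, "cost": 9.0}
cheap = {"makespan": 20.0, "energy": 4.0, "cost": 3.0}

# Emphasising cost versus speed flips the preferred schedule.
cost_weights = {"makespan": 0.2, "energy": 0.3, "cost": 0.5}
speed_weights = {"makespan": 0.7, "energy": 0.2, "cost": 0.1}

best_for_cost = min([fast, cheap], key=lambda m: schedule_score(m, cost_weights))
best_for_speed = min([fast, cheap], key=lambda m: schedule_score(m, speed_weights))
```

The point of the sketch is the conflict itself: no single assignment minimizes every objective, so the weight vector encodes a policy decision.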

The goal of scheduling intermediate data and interdependent tasks in a system with four-level
cache/memory hierarchies is to maximize energy efficiency and performance. Workflow profiling
data and the resources utilized during execution, such as CPU, RAM, the number of compute
nodes, and input file size, are used to construct profiles for the machine learning models
employed in the scheduling strategy. Virtual resource scheduling strategies employ an energy
consumption model to migrate virtual machines (VMs) from overcrowded hosts to idle hosts or
other hosts with lower resource utilization. The scheduler is responsible for orchestrating this
VM migration.
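A minimal sketch of the migration idea, assuming a simple threshold-based policy (the threshold value, host names, and VM loads below are invented, and real strategies use the energy model rather than raw utilization):

```python
# Minimal sketch of threshold-based VM migration: drain hosts whose
# utilisation exceeds a cutoff by moving their smallest VMs to the
# least-utilised other host. All numbers are hypothetical.

OVERLOAD = 0.8  # hypothetical upper utilisation threshold

def plan_migrations(hosts):
    """hosts: dict name -> {"util": float, "vms": {vm_name: load}}.
    Returns a list of (vm, src, dst) moves that drain overloaded hosts."""
    moves = []
    for src, info in hosts.items():
        while info["util"] > OVERLOAD and info["vms"]:
            # Move the smallest VM first to minimise disruption.
            vm, load = min(info["vms"].items(), key=lambda kv: kv[1])
            # Destination: the currently least-utilised other host.
            dst = min((h for h in hosts if h != src),
                      key=lambda h: hosts[h]["util"])
            del info["vms"][vm]
            info["util"] -= load
            hosts[dst]["vms"][vm] = load
            hosts[dst]["util"] += load
            moves.append((vm, src, dst))
    return moves

hosts = {
    "h1": {"util": 0.95, "vms": {"vm1": 0.30, "vm2": 0.65}},
    "h2": {"util": 0.20, "vms": {"vm3": 0.20}},
}
moves = plan_migrations(hosts)
```

Here a single move of the smallest VM is enough to bring the overloaded host back under the cutoff, which is why smallest-first ordering is a common heuristic.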

An updated ant colony algorithm is employed to determine the destination, and prediction-model
instructions are used to determine whether migration is required. A scheduling technique for
multi-objective optimization (PBACO) includes a resource cost model covering two components for
CPU and memory: the base cost and the transmission cost. The suggested method achieves
multi-objective optimization for the best possible duration, deadline, resource usage, and cost
to users. Although the technique enables scaling of the amount of resources, this work
concentrates on two types of resources, CPU and disk I/O bandwidth, and the ant colony
optimization method is used in conjunction with VM dynamic forecast scheduling.
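A resource cost model of the base-plus-transmission kind mentioned above could be sketched as a simple linear model. The rates, units, and function below are all hypothetical illustrations, not the actual PBACO formulation:

```python
# Hypothetical linear resource cost model in the spirit of the one
# described above: a task's cost combines a base cost for the CPU and
# memory it reserves with a transmission cost for moving its data.
# All rates are illustrative, not taken from the cited work.

BASE_RATE = {"cpu": 0.05, "mem": 0.02}   # cost per unit reserved per hour
TRANSMISSION_RATE = 0.01                  # cost per GB transferred

def task_cost(cpu_units, mem_units, hours, data_gb):
    base = (BASE_RATE["cpu"] * cpu_units + BASE_RATE["mem"] * mem_units) * hours
    transmission = TRANSMISSION_RATE * data_gb
    return base + transmission

# (0.05*4 + 0.02*8) * 2 + 0.01*50 = 0.72 + 0.50 = 1.22
cost = task_cost(cpu_units=4, mem_units=8, hours=2.0, data_gb=50)
```

Separating base and transmission terms lets a multi-objective scheduler trade reserving more resources for a shorter run against moving data to cheaper machines.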

The suggested method makes use of stored data, provided by the resource manager, regarding CPU
and memory utilization. Slave ant diversification and reinforcement techniques make it possible
to avoid lengthy paths where leading ants mistakenly amass pheromones. Solutions produced by
the first fit decreasing (FFD) method are evaluated as a function of the specific solution's
memory loss in order to assess the pheromone level. An overall plan for ant colony optimization
in cloud scheduling is shown in the figure.
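The overall ant-colony loop can be illustrated with a toy version: ants assign tasks to machines with probability proportional to pheromone, pheromone evaporates each iteration, and the best assignment found so far is reinforced. This is a generic ACO sketch with invented parameters, evaluated by makespan rather than the memory-loss measure used above:

```python
import random

# Toy sketch of ant-colony-style cloud scheduling: each ant assigns
# tasks to machines with probability proportional to pheromone, then
# pheromone evaporates and is reinforced along the best-so-far
# assignment. All parameters and task lengths are invented.

random.seed(0)

def makespan(assignment, task_len, n_machines):
    load = [0.0] * n_machines
    for task, machine in enumerate(assignment):
        load[machine] += task_len[task]
    return max(load)

def aco_schedule(task_len, n_machines, ants=20, iters=40, rho=0.1):
    n_tasks = len(task_len)
    pher = [[1.0] * n_machines for _ in range(n_tasks)]   # pheromone matrix
    best, best_ms = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            assignment = [random.choices(range(n_machines), weights=pher[t])[0]
                          for t in range(n_tasks)]
            ms = makespan(assignment, task_len, n_machines)
            if ms < best_ms:
                best, best_ms = assignment, ms
        # Evaporation, then reinforcement along the best-so-far solution.
        for t in range(n_tasks):
            for m in range(n_machines):
                pher[t][m] *= (1 - rho)
            pher[t][best[t]] += 1.0 / best_ms
    return best, best_ms

tasks = [4.0, 2.0, 7.0, 3.0, 5.0, 1.0]   # task lengths; total work = 22
best, best_ms = aco_schedule(tasks, n_machines=2)
```

Evaporation is what prevents the "mistakenly amassed pheromone" problem the text describes: trails that are not regularly reinforced by good solutions decay away.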

The two interrelated components of scheduling for Distributed Stream Processing Systems (DSPS)
are thread and resource allocation and thread-to-resource mapping. The allocation component
determines the number of resource slots allotted and the number of threads per task. The
mapping component then attempts to assign these threads to the appropriate resource slots so
that the requirements are fulfilled. A dynamic resource scheduling strategy based on the
Max-Min Fuzzy algorithm processes various cloud user requests.
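The two DSPS steps can be sketched separately: a load-proportional allocation of threads per task, then a round-robin packing of threads onto slots. Task names, loads, and slot counts below are invented, and real systems use far richer policies:

```python
# Sketch of the two DSPS scheduling steps described above, with
# invented numbers: allocation decides how many threads each task
# gets (proportional to its load), and mapping packs those threads
# onto resource slots round-robin.

def allocate(task_loads, total_threads):
    """Give each task a thread count roughly proportional to its load (>= 1)."""
    total = sum(task_loads.values())
    return {t: max(1, round(total_threads * load / total))
            for t, load in task_loads.items()}

def map_to_slots(threads_per_task, n_slots):
    """Round-robin threads onto slots; returns slot -> list of thread ids."""
    slots = {s: [] for s in range(n_slots)}
    i = 0
    for task, count in threads_per_task.items():
        for k in range(count):
            slots[i % n_slots].append(f"{task}-{k}")
            i += 1
    return slots

loads = {"parse": 2.0, "join": 6.0, "sink": 2.0}
threads = allocate(loads, total_threads=10)
placement = map_to_slots(threads, n_slots=4)
```

Keeping the two steps separate mirrors the text: allocation fixes *how much* parallelism each task gets, while mapping decides *where* that parallelism runs.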

The task scheduling heuristic with memory consumption balancing positively impacts the number
of machines needed to finish a multi-frontal solver in a given time. The authors suggest task
agglomeration techniques that estimate the memory needed to execute a certain subtree of the
element partition tree: if the estimate falls below a predetermined cutoff, the subtree is
combined into a single job. To avoid performance deterioration, a task scheduling mechanism
based on the distribution of memory accesses in the given period is designed. This method
employs per-task memory access monitoring to classify memory accesses and prioritize
latency-sensitive jobs above bandwidth-sensitive tasks.
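The agglomeration rule can be shown as a short recursion over a tree: whenever a subtree's estimated memory fits under the cutoff, it collapses into one job. The tree shape, memory estimates, and cutoff value here are all hypothetical:

```python
# Illustrative recursion for the agglomeration idea above: walk an
# element-partition tree and merge any subtree whose estimated memory
# footprint falls below a cutoff into a single job. Tree shape,
# memory estimates (MB), and the cutoff are all hypothetical.

CUTOFF_MB = 100

def subtree_memory(node):
    """Estimated memory of a subtree = its own plus its children's."""
    return node["mem"] + sum(subtree_memory(c) for c in node.get("children", []))

def agglomerate(node):
    """Return a flat list of jobs; small subtrees collapse into one job."""
    mem = subtree_memory(node)
    if mem <= CUTOFF_MB:
        return [{"job": node["name"], "mem": mem}]        # whole subtree = one job
    jobs = [{"job": node["name"], "mem": node["mem"]}]    # node stays its own job
    for child in node.get("children", []):
        jobs += agglomerate(child)
    return jobs

tree = {"name": "root", "mem": 40, "children": [
    {"name": "left", "mem": 30, "children": [{"name": "ll", "mem": 20}]},
    {"name": "right", "mem": 90, "children": [{"name": "rl", "mem": 50}]},
]}
jobs = agglomerate(tree)
```

In this toy tree the left subtree (50 MB total) fits under the cutoff and becomes a single job, while the right subtree (140 MB) stays split, so the scheduler handles fewer, better-sized jobs.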

A genetic-based optimization approach combines phase-change memory (PCM) setup with a
genetic-based algorithm for task-core scheduling. With an eye towards environmentally friendly
cloud computing, this method offers a PCM configuration that strikes a compromise between
efficiency and PCM memory performance.
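A generic genetic algorithm for task-core assignment can be sketched as follows. This is not the cited method: the chromosome encoding, fitness (plain makespan rather than any PCM-aware measure), and all rates and sizes are invented for illustration:

```python
import random

# Toy genetic-algorithm sketch for task-core scheduling: a chromosome
# assigns each task to a core, fitness is the makespan, and tournament
# selection, one-point crossover, and point mutation evolve the
# population. All sizes, rates, and task lengths are invented.

random.seed(1)

def fitness(chrom, task_len, n_cores):
    load = [0.0] * n_cores
    for task, core in enumerate(chrom):
        load[core] += task_len[task]
    return max(load)   # makespan; lower is better

def evolve(task_len, n_cores, pop_size=30, gens=60, mut=0.1):
    n = len(task_len)
    pop = [[random.randrange(n_cores) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            # Tournament selection of two parents.
            p1 = min(random.sample(pop, 3), key=lambda c: fitness(c, task_len, n_cores))
            p2 = min(random.sample(pop, 3), key=lambda c: fitness(c, task_len, n_cores))
            cut = random.randrange(1, n)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < mut:               # point mutation
                child[random.randrange(n)] = random.randrange(n_cores)
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda c: fitness(c, task_len, n_cores))

tasks = [3.0, 1.0, 4.0, 1.0, 5.0, 2.0]   # total work 16; ideal makespan 8 on 2 cores
best = evolve(tasks, n_cores=2)
```

A PCM-aware variant would simply swap the fitness function for one that also penalizes configurations that hurt PCM endurance or energy, which is the compromise the text describes.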

Threads in Cloud Computing

In CLOUDS, objects and threads are essential programming components. Objects are persistent
virtual address spaces, consisting of code, data, and entry points. Threads are active
abstractions of a CPU: each executes within an initial object and traverses between objects
through invocations. The mapping between threads and objects is defined at runtime by the
invocation mechanism. Persistent objects provide storage and sharing, making them more
structured than files. The shared memory provided by objects makes message passing unnecessary
except when it is used for other purposes, such as synchronization. The separation of
computation from the address space allows pervasive concurrency and lets threads reach
environments shared with other threads and applications. Threads execute on compute servers,
while the permanent storage repository for object content resides on data servers. This
separation allows any thread executing on any compute server to invoke any object regardless
of its storage site.
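A loose analogy in ordinary Python (not the actual CLOUDS API) may help picture the model: objects expose named entry points over private state, and a thread "moves" between objects by invocation while the objects' state persists across calls:

```python
# Loose Python analogy (not the real CLOUDS system) for the
# object/thread model described above: persistent objects expose
# entry points over private state, and a thread traverses objects by
# invocation, carrying no data of its own.

class CloudsObject:
    """A persistent object: named entry points over private state."""
    def __init__(self, name):
        self.name = name
        self.state = {}
        self._entries = {}

    def entry(self, name, fn):
        self._entries[name] = fn

    def invoke(self, entry, thread, *args):
        thread.trace.append((self.name, entry))   # the thread "enters" this object
        return self._entries[entry](self.state, *args)

class Thread:
    """An active abstraction of a CPU: it only records the objects it
    has traversed; all data lives in the objects themselves."""
    def __init__(self):
        self.trace = []

counter = CloudsObject("counter")
counter.entry("incr", lambda state: state.__setitem__("n", state.get("n", 0) + 1))
counter.entry("read", lambda state: state.get("n", 0))

t = Thread()
counter.invoke("incr", t)
counter.invoke("incr", t)
value = counter.invoke("read", t)   # object state persisted across invocations
```

The analogy captures the key separation: the thread is pure control flow, while all storage and sharing belong to the objects it invokes.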

Survey of Memory Management Techniques for HPC and Cloud Computing (2019). IEEE Xplore.
https://ieeexplore.ieee.org/abstract/document/8906102

Distributed Programming with Objects and Threads in the Clouds System. Retrieved May 10, 2024,
from https://www.usenix.org/legacy/publications/compsystems/1991/sum_dasgupta.pdf
