
Performance, Security, and Energy Efficiency

Neelam Singh
Performance Metrics and Scalability Analysis
Performance metrics are needed to measure and compare the capabilities of different parallel and distributed systems.

Performance Metrics

We discussed CPU speed in MIPS and network bandwidth in Mbps in Section 1.3.1 to estimate processor
and network performance. In a distributed system, performance is attributed to a large number of
factors. System throughput is often measured in MIPS, Tflops (tera floating-point operations per second),
or TPS (transactions per second).

Other measures include job response time and network latency. An interconnection network that has low
latency and high bandwidth is preferred. System overhead is often attributed to OS boot time, compile time,
I/O data rate, and the runtime support system used. Other performance-related metrics include the QoS for
Internet and web services; system availability and dependability; and security resilience for system defense
against network attacks.
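As a concrete illustration of throughput (TPS) and latency, the minimal sketch below times a batch of transactions and reports both metrics. The workload is hypothetical and the measurement is sequential, so this is only an illustration of the two metrics, not a benchmarking method prescribed by the text.

```python
import time

def measure_throughput_and_latency(transactions):
    """Run a list of zero-argument transaction callables and report
    throughput (transactions per second) and mean latency (seconds).

    Illustrative sketch: real systems measure these metrics under
    concurrent load, not sequentially as done here."""
    latencies = []
    start = time.perf_counter()
    for txn in transactions:
        t0 = time.perf_counter()
        txn()                                   # execute one transaction
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start

    throughput_tps = len(transactions) / elapsed
    mean_latency = sum(latencies) / len(latencies)
    return throughput_tps, mean_latency

if __name__ == "__main__":
    # Hypothetical workload: 1,000 trivial "transactions".
    dummy = [lambda: sum(range(1000)) for _ in range(1000)]
    tps, lat = measure_throughput_and_latency(dummy)
    print(f"throughput = {tps:,.0f} TPS, mean latency = {lat * 1e6:.1f} us")
```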
Dimensions of Scalability
Users want to have a distributed system that can achieve scalable performance. Any resource upgrade in a system should be
backward compatible with existing hardware and software resources. Overdesign may not be cost-effective. System scaling can
increase or decrease resources depending on many practical factors. The following dimensions of scalability are characterized in
parallel and distributed systems:

• Size scalability This refers to achieving higher performance or more functionality by increasing the machine size. The
word “size” refers to adding processors, cache, memory, storage, or I/O channels. The most obvious way to determine size
scalability is to simply count the number of processors installed (a small measurement sketch follows this list).

• Software scalability This refers to upgrades in the OS or compilers, adding mathematical and engineering libraries, porting new
application software, and installing more user-friendly programming environments.

• Application scalability This refers to matching problem size scalability with machine size scalability. Problem size determines the size of
the data set or the workload to be processed.

• Technology scalability This refers to a system that can adapt to changes in building technologies, such as the component and
networking technologies.
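The sketch below illustrates one conventional way to quantify size scalability: compare measured throughput before and after processors are added, then compute speedup and scaling efficiency. The processor counts and throughput numbers are hypothetical, and the metric is a common convention rather than one defined in this excerpt.

```python
def scaling_efficiency(baseline_throughput, scaled_throughput,
                       baseline_procs, scaled_procs):
    """Compute speedup and parallel (scaling) efficiency when a machine
    is grown from `baseline_procs` to `scaled_procs` processors.

    Illustrative only: real size-scalability studies measure throughput
    on the actual workload of interest."""
    speedup = scaled_throughput / baseline_throughput
    efficiency = speedup / (scaled_procs / baseline_procs)
    return speedup, efficiency

# Hypothetical measurements: growing from 8 to 64 processors, throughput in TPS.
speedup, eff = scaling_efficiency(1_000, 6_500, 8, 64)
print(f"speedup = {speedup:.1f}x for an 8x increase in size, "
      f"efficiency = {eff:.0%}")
```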
Energy Efficiency
Primary performance goals in conventional parallel and distributed computing systems are high performance and
high throughput, considering some form of performance reliability (e.g., fault tolerance and security). However,
these systems recently encountered new challenging issues including energy efficiency, and workload and resource
outsourcing. These emerging issues are crucial not only on their own, but also for the sustainability of large-scale
computing systems in general.

Protection of data centers demands integrated solutions. Energy consumption in parallel and distributed
computing systems raises various monetary, environmental, and system performance issues.

Energy Consumption of Unused Servers

To run a server farm (data center), a company has to spend a huge amount of money on hardware, software,
operational support, and energy every year. Therefore, companies should thoroughly identify whether their
installed server farm (more specifically, the volume of provisioned resources) is at an appropriate level, particularly
in terms of utilization.
It was estimated in the past that, on average, one-sixth (15 percent) of the full-time servers in a company are left
powered on without being actively used (i.e., they are idling) on a daily basis. This indicates that with 44 million
servers in the world, around 4.7 million servers are not doing any useful work.

The potential savings in turning off these servers are large—$3.8 billion globally in energy costs alone, and $24.7
billion in the total cost of running nonproductive servers, according to a study by 1E Company in partnership with
the Alliance to Save Energy (ASE).

This amount of wasted energy is equal to 11.8 million tons of carbon dioxide per year, which is equivalent to the CO2
pollution of 2.1 million cars. In the United States, this equals 3.17 million tons of carbon dioxide, or 580,678 cars.
Therefore, the first step in IT departments is to analyze their servers to find unused and/or underutilized servers.
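To make this reasoning concrete, the sketch below estimates the energy and cost wasted by idle servers from a server count, an idle fraction, and a per-server power draw. All parameter values are made-up placeholders for illustration; the global figures quoted above come from the 1E/ASE study, not from this formula.

```python
def idle_server_waste(total_servers, idle_fraction,
                      avg_power_kw, hours_per_year, price_per_kwh):
    """Estimate annual energy (kWh) and cost wasted by idle servers.

    All parameters are hypothetical placeholders used only to show the
    shape of the calculation."""
    idle_servers = total_servers * idle_fraction
    wasted_kwh = idle_servers * avg_power_kw * hours_per_year
    wasted_cost = wasted_kwh * price_per_kwh
    return idle_servers, wasted_kwh, wasted_cost

# Example with made-up values: 10,000 servers, 15% idle,
# 0.3 kW average draw, running year-round at $0.10/kWh.
idle, kwh, cost = idle_server_waste(10_000, 0.15, 0.3, 8_760, 0.10)
print(f"{idle:,.0f} idle servers waste {kwh:,.0f} kWh "
      f"(~${cost:,.0f}) per year")
```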
Reducing Energy in Active Servers

In addition to identifying unused/underutilized servers for energy savings, it is also necessary to apply appropriate
techniques to decrease energy consumption in active distributed systems with negligible influence on their
performance.

Application Layer

Until now, most user applications in science, business, engineering, and financial areas have tended to increase a system’s speed
or quality. With energy-aware applications, the challenge is to design sophisticated multilevel and multi-domain
energy management applications without hurting performance. The first step toward this end is to explore the
relationship between performance and energy consumption. Indeed, an application’s energy consumption depends
strongly on the number of instructions needed to execute the application and the number of transactions with the
storage unit (or memory). These two factors (compute and storage) are correlated and they affect completion time.
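A rough first-order model of this dependence is sketched below: energy grows linearly with instruction count and with memory/storage transactions. The per-operation energy costs are hypothetical placeholders, not values given in the text, and real costs depend on the hardware.

```python
def estimated_app_energy(instruction_count, memory_transactions,
                         energy_per_instruction_nj=1.0,
                         energy_per_mem_txn_nj=10.0):
    """Very rough first-order model: application energy grows with the
    number of instructions executed and the number of transactions with
    the storage/memory unit. Per-operation costs are assumptions."""
    total_nj = (instruction_count * energy_per_instruction_nj
                + memory_transactions * energy_per_mem_txn_nj)
    return total_nj / 1e9  # joules

# Hypothetical application: 10 billion instructions, 500 million memory transactions.
print(f"estimated energy ~ {estimated_app_energy(10e9, 500e6):.1f} J")
```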
Middleware Layer

The middleware layer acts as a bridge between the application layer and the resource layer. This layer provides resource
broker, communication service, task analyzer, task scheduler, security access, reliability control, and information service
capabilities. It is also responsible for applying energy-efficient techniques, particularly in task scheduling. Until recently,
scheduling was aimed at minimizing makespan, that is, the execution time of a set of tasks. Distributed computing systems
necessitate a new cost function covering both makespan and energy consumption.
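One common way to express such a cost function is a weighted sum of normalized makespan and energy, as sketched below. The weighted-sum form, the weight, and the normalization constants are assumptions made for illustration, not a formula given in the document.

```python
def schedule_cost(makespan_s, energy_j, alpha=0.5,
                  makespan_ref_s=1.0, energy_ref_j=1.0):
    """Combined scheduling objective covering both makespan and energy.

    Hedged sketch: the weighted-sum form, alpha, and the normalization
    references are assumptions, not taken from the text."""
    return (alpha * makespan_s / makespan_ref_s
            + (1 - alpha) * energy_j / energy_ref_j)

# Compare two hypothetical schedules for the same task set.
fast_but_hot = schedule_cost(makespan_s=100, energy_j=9_000,
                             makespan_ref_s=100, energy_ref_j=9_000)
slower_cooler = schedule_cost(makespan_s=130, energy_j=6_000,
                              makespan_ref_s=100, energy_ref_j=9_000)
print("prefer the slower, cooler schedule:", slower_cooler < fast_but_hot)
```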

Resource Layer

The resource layer consists of a wide range of resources including computing nodes and storage units. This layer generally
interacts with hardware devices and the operating system; therefore, it is responsible for controlling all distributed resources
in distributed computing systems. In the recent past, several mechanisms have been developed for more efficient power
management of hardware and operating systems. The majority of them are hardware approaches particularly for processors.
Network Layer

Routing and transferring packets and enabling network services to the resource layer are the main responsibilities of the
network layer in distributed computing systems. The major challenge in building energy-efficient networks is, again,
determining how to measure, predict, and create a balance between energy consumption and performance. Two major
challenges in designing energy-efficient networks are:

• The models should represent the networks comprehensively as they should give a full understanding of interactions
among time, space, and energy.

• New, energy-efficient routing algorithms need to be developed (a minimal routing-cost sketch follows this list). New,
energy-efficient protocols should be developed against network attacks.
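As a hypothetical illustration of energy-aware routing, the sketch below chooses among candidate routes by a weighted energy/latency cost. The weighting scheme and the route data are assumptions for illustration, not an algorithm given in the text.

```python
def pick_route(routes, weight_energy=0.5):
    """Choose the route with the lowest weighted energy/latency cost.

    Hypothetical illustration of balancing energy consumption against
    performance in routing; the cost model is an assumption."""
    def cost(route):
        return (weight_energy * route["energy_j"]
                + (1 - weight_energy) * route["latency_ms"])
    return min(routes, key=cost)

candidate_routes = [
    {"name": "shortest", "latency_ms": 5.0, "energy_j": 9.0},
    {"name": "greener",  "latency_ms": 7.0, "energy_j": 4.0},
]
print("chosen route:", pick_route(candidate_routes)["name"])
```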
As information resources drive economic and social development, data centers become increasingly
important in terms of where the information items are stored and processed, and where services are
provided. Data centers become another core infrastructure, just like the power grid and transportation
systems. Traditional data centers suffer from high construction and operational costs, complex resource
management, poor usability, low security and reliability, and huge energy consumption. It is necessary to
adopt new technologies in next-generation data-center designs.
