
Cloud Computing Unit-I JNTUH R16

This document provides an overview of various computing paradigms including high performance computing, parallel computing, distributed computing, cluster computing, grid computing, cloud computing, biocomputing, mobile computing, and quantum computing. It discusses the key characteristics of each paradigm such as using multiple connected processors, solving problems concurrently using discrete parts, leveraging unused computing resources, provisioning resources dynamically based on user needs, and harnessing the power of quantum information processing.


Cloud Computing

Department of Computer Science and Engineering
Academic Year 2020-21
Subject: Cloud Computing
Faculty: K. Prem Kumar
Class and Section: IV B.Tech CSE A&B
Unit-I
Computing Paradigms
High Performance Computing:-
• In high performance computing systems, a
pool of processors (processor machines or
central processing units, CPUs) is
connected (networked) with other resources
such as memory, storage, and input/output
devices, and the deployed software is enabled
to run across the entire system of connected
components.
• The processor machines can be of
homogeneous or heterogeneous type.
• Historically, high performance computing
(HPC) meant supercomputers; that is no
longer true in present-day computing
scenarios.
• Examples of high performance computing range
from a small cluster of desktop or personal
computers (PCs) to the fastest
supercomputers.
• HPC systems are normally found in
applications that solve large scientific
problems.
• Most of the time, the challenge in working with
these kinds of problems is to perform suitable
simulation studies, and this can be accomplished
by HPC without difficulty.
• Scientific examples such as protein folding in
molecular biology and studies on developing
models and applications based on nuclear
fusion are worth noting as potential
applications for HPC.
Parallel Computing:-
• Parallel computing is also one of the facets of HPC.
• Here, a set of processors work cooperatively to solve a
computational problem.
• These processor machines or CPUs are mostly of
homogeneous type.
• Therefore, this definition is the same as that of HPC and
is broad enough to include supercomputers that have
hundreds or thousands of processors interconnected
with other resources.
• One can distinguish between conventional (also known
as serial or sequential or Von Neumann) computers and
parallel computers in the way the applications are
executed.
In serial or sequential computers, the following
apply:
• It runs on a single computer/processor
machine having a single CPU.
• A problem is broken down into a discrete series
of instructions.
• Instructions are executed one after another.
In parallel computing, since there is simultaneous
use of multiple processor machines, the following
apply:
• It is run using multiple processors (multiple CPUs).
• A problem is broken down into discrete parts that
can be solved concurrently.
• Each part is further broken down into a series of
instructions.
• Instructions from each part are executed
simultaneously on different processors.
• An overall control/coordination mechanism is
employed.
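The parallel steps above can be sketched with Python's standard multiprocessing module; the problem (a large sum), the chunking scheme, and the helper name part_sum are all illustrative, not from the text.

```python
from multiprocessing import Pool

def part_sum(chunk):
    # each discrete part is itself a series of add instructions
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # the problem is broken down into discrete parts
    n = 4
    chunks = [data[i::n] for i in range(n)]
    # the Pool is the overall control/coordination mechanism;
    # parts are executed simultaneously on different processors
    with Pool(n) as pool:
        partials = pool.map(part_sum, chunks)
    print(sum(partials))  # same answer as the serial sum(data)
```

The coordination step at the end (summing the partial results) is exactly the "overall control/coordination mechanism" the last bullet refers to.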
Distributed Computing:-
• A distributed computing system consists of
multiple computers or processor machines
connected through a network, which can be
homogeneous or heterogeneous, but which run
as a single system.
• The connectivity can be such that the CPUs in a
distributed system can be physically close
together and connected by a local network, or
they can be geographically distant and
connected by a wide area network.
• The heterogeneity in a distributed system
supports any number of possible configurations
in the processor machines, such as mainframes,
PCs, workstations, and minicomputers.
• The goal of distributed computing is to make
such a network function as a single computer.
• Distributed computing systems are
advantageous over centralized systems, because
there is a support for the following
characteristic features:
1. Scalability: the ability of the system to
be easily expanded by adding more machines
as needed (or shrunk by removing them)
without affecting the existing setup.
2. Redundancy or replication: Here, several
machines can provide the same services, so
that even if one is unavailable (or failed), work
does not stop because other similar computing
supports will be available.
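The redundancy/replication feature can be sketched as a failover loop: several replicas provide the same service, and a client simply tries the next one when a replica is unavailable. The names Replica and call_with_failover are illustrative assumptions, not part of any real library.

```python
class Replica:
    """A machine that provides a service and may be down."""
    def __init__(self, name, up=True):
        self.name, self.up = name, up

    def serve(self, request):
        if not self.up:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} handled {request}"

def call_with_failover(replicas, request):
    # several machines provide the same service; if one has
    # failed, work does not stop -- try the next replica
    for r in replicas:
        try:
            return r.serve(request)
        except ConnectionError:
            continue
    raise RuntimeError("all replicas unavailable")

replicas = [Replica("node-1", up=False), Replica("node-2")]
print(call_with_failover(replicas, "job-42"))  # node-2 handled job-42
```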
Cluster Computing:-
• A cluster computing system consists of a set of the
same or similar type of processor machines
connected using a dedicated network infrastructure.
• All processor machines share resources such as a
common home directory and have software such
as a message passing interface (MPI)
implementation installed to allow programs to be
run across all nodes simultaneously.
• This is also a kind of HPC category.
• The individual computers in a cluster can be referred
to as nodes.
• A cluster is regarded as HPC because its
individual nodes can work together to solve a
problem larger than any single computer can
easily solve.
• The nodes need to communicate with one
another in order to work cooperatively and
meaningfully together to solve the problem at
hand.
• If the processor machines in a cluster are of
heterogeneous types, the cluster becomes a
subtype that is still mostly in the
experimental or research stage.
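A real cluster runs MPI programs launched across nodes (e.g. with mpirun); as a single-machine stand-in, the send/receive pattern between cooperating nodes can be mimicked with multiprocessing pipes. The scatter-then-reduce shape and the worker function below are an illustrative sketch, not an MPI API.

```python
from multiprocessing import Process, Pipe

def worker(conn, rank):
    chunk = conn.recv()            # receive this node's part of the data
    conn.send((rank, sum(chunk)))  # send the partial result back

if __name__ == "__main__":
    data = list(range(100))
    procs, parents = [], []
    for rank in range(2):
        parent, child = Pipe()
        p = Process(target=worker, args=(child, rank))
        p.start()
        parent.send(data[rank::2])  # scatter data to the "nodes"
        procs.append(p)
        parents.append(parent)
    # reduce: combine the partial results from all nodes
    total = sum(parents[i].recv()[1] for i in range(2))
    for p in procs:
        p.join()
    print(total)  # 4950, the same as the serial sum
```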
Grid Computing:-
• The computing resources in most
organizations are underutilized but are
necessary for certain operations.
• The idea of grid computing is to make such
unutilized computing power available to
organizations that need it, thereby increasing
the return on investment (ROI) on computing
investments.
• Thus, grid computing is a network of computing
or processor machines managed with a kind of
software such as middleware, in order to access
and use the resources remotely.
• The managing activity of grid resources through
the middleware is called grid services.
• Grid services provide access control, security,
access to data including digital libraries and
databases, and access to large-scale interactive
and long-term storage facilities.
• Grid computing is more popular due to the
following reasons:
• Its ability to make use of unused computing
power, and thus, it is a cost-effective solution
(reducing investments, only recurring costs)
• As a way to solve problems in line with any HPC-
based application
• Enables heterogeneous resources of computers
to work cooperatively and collaboratively to
solve a scientific problem.
• Researchers associate the term grid with the way
electricity is distributed in municipal areas for
the common man.
Cloud Computing:-
• The computing trend moved toward the cloud from
the concept of grid computing, particularly when
large computing resources are required to solve a
single problem, using the idea of computing
power as a utility and other allied concepts.
• However, the key difference between grid
and cloud is that grid computing supports
leveraging several computers in parallel to solve a
particular application, while cloud computing
supports leveraging multiple resources, including
computing resources, to deliver a unified service
to the end user.
• In cloud computing, the IT and business
resources, such as servers, storage, networks,
applications, and processes, can be
dynamically provisioned according to user
needs and workload.
• In addition, while a cloud can provision and
support a grid, a cloud can also support
nongrid environments, such as a three-tier web
architecture running on traditional or Web 2.0
applications.
Biocomputing:-
• Biocomputing systems use
the concepts of biologically derived or
simulated molecules (or models) that perform
computational processes in order to solve a
problem.
• The biologically derived models aid in
structuring the computer programs that
become part of the application.
• Biocomputing provides the theoretical
background and practical tools for scientists to
explore proteins and DNA.
• DNA and proteins are nature’s building blocks,
but these building blocks are not exactly used
as bricks; the function of the final molecule
rather strongly depends on the order of these
blocks.
• Thus, the biocomputing scientist works on
inventing the order suitable for various
applications mimicking biology.
• Biocomputing should, therefore, lead to a
better understanding of life and of the
molecular causes of certain diseases.
Mobile Computing:-
• In mobile computing, the processing (or
computing) elements are small (i.e., handheld
devices) and communication between the
various resources takes place over wireless
media.
• Mobile communication for voice applications
(e.g., cellular phones) is widely established
throughout the world and is witnessing very
rapid growth in all its dimensions, including the
increase in the number of subscribers of the
various cellular networks.
• An extension of this technology is the ability
to send and receive data across various
cellular networks using small devices such as
smartphones.
• There can be numerous applications based on
this technology; for example, video call or
conferencing is one of the important
applications that people prefer to use in place
of existing voice (only) communications on
mobile phones.
• Mobile computing–based applications are
becoming very important and rapidly evolving
with various technological advancements as it
allows users to transmit data from remote
locations to other remote or fixed locations.
Quantum Computing:-
• Manufacturers of computing systems say that
there is a limit to cramming more and more
transistors into smaller and smaller spaces of
integrated circuits (ICs) in order to keep
doubling processing power about every 18
months.
• This problem may be overcome by a new
quantum computing–based solution, which
depends on quantum information and the rules
that govern the subatomic world.
• Quantum computers are projected to be millions
of times faster than even our most powerful
supercomputers today.
• Quantum computing works differently at the
most fundamental level from current
technology; although there are working
prototypes, these systems have not so far proved
to be alternatives to today’s silicon-based
machines.
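The idea of quantum information can be illustrated with a toy simulation: a single qubit is a pair of complex amplitudes, and a gate is a small unitary applied to them. The Hadamard gate used here is a standard textbook example, not something taken from this text.

```python
import math

def hadamard(state):
    # the Hadamard gate maps a definite bit into an equal
    # superposition of 0 and 1
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

qubit = (1.0, 0.0)        # the classical bit 0 as a quantum state
qubit = hadamard(qubit)   # now a superposition of 0 and 1
# squared amplitudes give the measurement probabilities
probs = [abs(amp) ** 2 for amp in qubit]
print(probs)  # ~[0.5, 0.5]: measuring yields 0 or 1 with equal chance
```

A real quantum computer exploits such superpositions across many qubits at once, which is what classical simulation cannot do efficiently.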
Optical Computing:-
• An optical computing system uses photons in
visible light or infrared beams, rather than
electric current, to perform digital computations.
• An electric current flows at only about 10% of the
speed of light.
• This limits the rate at which data can be exchanged over
long distances and is one of the factors that led to the
evolution of optical fiber.
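A back-of-the-envelope check of the propagation claim above: the link length and the rounded speed of light are illustrative figures chosen for the arithmetic, not values from the text.

```python
c = 3.0e8             # speed of light, m/s (approximate)
distance = 100_000    # a 100 km link (illustrative figure)

optical_delay = distance / c              # photons travel at ~c
electrical_delay = distance / (0.1 * c)   # current at ~10% of c

print(round(optical_delay * 1e3, 3))      # ~0.333 ms
print(round(electrical_delay * 1e3, 3))   # ~3.333 ms, ten times slower
```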
• By applying some of the advantages of visible and/or IR
networks at the device and component scale, a
computer can be developed that can perform
operations 10 or more times faster than a conventional
electronic computer.
Nanocomputing:-
• Nanocomputing refers to computing systems
that are constructed from nanoscale
components.
• The silicon transistors in traditional computers
may be replaced by transistors based on carbon
nanotubes.
• The issues of scale relate to the dimensions of
the components; they are, at most, a few
nanometers in at least two dimensions.
The issues of integration of the components are
twofold:
• first, the manufacture of complex arbitrary
patterns may be economically infeasible, and
• second, nanocomputers may include massive
quantities of devices.
• Researchers are working on all these issues to
make nanocomputing a reality.
