Cloud Computing Assignment Dec 2022
Ans 1
A Workload Distribution Architecture (WDA) is a cloud computing architecture that
enables the efficient distribution of workloads across a network of computers. A WDA can
be used to improve the performance of a cloud computing system by ensuring that the
workload is evenly distributed across the available resources. This can be accomplished by
using a variety of techniques, including load balancing, job scheduling, and resource
allocation. A WDA can also improve the efficiency of a cloud computing system by allowing
the system to dynamically adjust the distribution of workloads in response to changes in
demand. This can help ensure that the system meets the needs of its users while
minimizing wasted resources and maximizing performance and efficiency.
A virtual server is not a separate physical machine in a data center; it is created
in software by partitioning a physical server into multiple virtual servers. Each
virtual server has its own
operating system and can be independently rebooted. Virtual servers are often used in
cloud computing environments, where multiple virtual servers can be created on a single
physical server. This allows for greater flexibility and scalability, as more virtual servers can
be created as needed to handle increased demand. Virtual servers can also be used to
provide a more isolated environment for certain applications or services.
Redundancy is a term used in many different fields, but in cloud computing, it refers
to the duplicate copies of data that are stored in different locations. This is done to protect
the data from being lost if one location experiences a problem, such as a power outage or a
natural disaster. The data is typically stored on multiple servers in different data centers,
and each copy is synced so that it is up to date.
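The replication idea above can be sketched in a few lines of Python. The class and data names are purely illustrative, with plain dicts standing in for data-center locations rather than any real storage API:

```python
# Minimal sketch of redundancy: every write is copied to several
# "locations" (here, plain dicts standing in for data centers), so a
# read still succeeds if one location is lost.

class ReplicatedStore:
    def __init__(self, n_replicas=3):
        # One dict per simulated data-center location.
        self.locations = [dict() for _ in range(n_replicas)]

    def put(self, key, value):
        # Synchronously copy the write to every location so all
        # replicas stay in sync.
        for loc in self.locations:
            loc[key] = value

    def fail_location(self, index):
        # Simulate a data center going offline (e.g. a power outage).
        self.locations[index] = None

    def get(self, key):
        # Read from the first location that is still available.
        for loc in self.locations:
            if loc is not None and key in loc:
                return loc[key]
        raise KeyError(key)

store = ReplicatedStore()
store.put("invoice-42", "paid")
store.fail_location(0)          # lose one data center
print(store.get("invoice-42"))  # the data survives: "paid"
```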
Cloud scaling is the ability to increase or decrease the capacity of a cloud service or
application. This can be done manually or automatically and is a key feature of cloud
computing. Scaling is important because it allows businesses to respond quickly to changes
in demand. For example, if a company experiences a sudden increase in traffic to its
website, it can scale up its cloud resources to meet the increased demand. Similarly, if traffic
decreases, the company can scale down its resources to save money. There are two main
types of scaling: vertical and horizontal. Vertical scaling involves increasing or decreasing the
capacity of a single server. Horizontal scaling involves adding or removing servers. Vertical
scaling is the simpler approach and can be applied quickly to meet sudden increases in
demand. For example, if a website receives a sudden influx of traffic, the server hosting the
website can be upgraded to a larger instance to handle the increased traffic.
Horizontal scaling is the more common pattern in cloud environments and can be used to meet
more gradual increases in demand. For example, if a website expects traffic to grow over time, new
servers can be added to the existing infrastructure to accommodate the expected growth.
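The contrast between the two scaling styles can be made concrete with a short sketch; the capacity numbers below are invented for illustration:

```python
import math

# Hedged sketch contrasting vertical and horizontal scaling. The
# capacity figures are made up for illustration.

def vertical_scale(cpu_cores, factor):
    # Vertical scaling: make one server bigger.
    return cpu_cores * factor

def servers_needed(requests_per_sec, capacity_per_server):
    # Horizontal scaling: add servers until total capacity covers demand.
    return math.ceil(requests_per_sec / capacity_per_server)

print(vertical_scale(4, 2))      # upgrade a 4-core server to 8 cores
print(servers_needed(950, 200))  # 950 req/s at 200 req/s each -> 5 servers
```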
A load balancer is a device that acts as a reverse proxy and distributes network or
application traffic across a number of servers. Load balancers are used to improve
application availability and performance by distributing traffic across multiple servers. A
load balancer sits between the client and the servers and routes traffic to the servers, often in a
round-robin fashion, meaning that it sends traffic to each server in turn. When one server is
unavailable, the load balancer redirects traffic to the next available server. Load balancers
can also be used to distribute traffic across different regions, known as geo-load balancing.
This can be used to improve performance by sending traffic to the region that is closest to
the user. Load balancers can be hardware devices, software applications, or a combination
of both. Hardware load balancers are dedicated devices that are purpose-built for load
balancing and are typically more expensive than software load balancers. Software load
balancers are usually included as part of an application delivery controller (ADC) and can be
run on commodity hardware.
There are several factors to consider when choosing a load balancer, including
performance, availability, scalability, and cost. The type of traffic that will be load balanced
is also an important consideration.
The following are some of the most common load balancing algorithms:
• Round Robin – The round robin algorithm is the most common load balancing
algorithm. It is a simple algorithm that distributes traffic evenly across the servers.
• Least Connections – The least connections algorithm routes traffic to the server with
the fewest active connections. It is useful when connection durations vary, since it
steers new requests away from the busiest servers.
• Weighted Round Robin – The weighted round robin algorithm is like the round robin
algorithm, but each server is assigned a weight. The weight is used to determine how
much traffic the server should receive.
• Least Response Time – The least response time algorithm routes traffic to the server
with the lowest response time. This algorithm is used when there is a small number
of servers and many clients.
• Weighted Least Connections – The weighted least connections algorithm is like the
least connections algorithm, but each server is assigned a weight. The weight is used
to determine how much traffic the server should receive.
• Static Round Robin – The static round robin algorithm is similar to the round robin
algorithm, but the order in which the servers are visited is static. This means that the
first server in the list will always be visited first, and the second server in the list will
always be visited second, and so on.
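Three of the algorithms above can be sketched in a few lines of Python. The server names, weights, and connection counts are illustrative, not a real load-balancer API:

```python
import itertools

# Hedged sketches of round robin, weighted round robin, and least
# connections, using invented server names.

def round_robin(servers):
    # Cycle through the servers, sending one request to each in turn.
    return itertools.cycle(servers)

def weighted_round_robin(weighted_servers):
    # weighted_servers: list of (server, weight) pairs. Each server is
    # repeated in proportion to its weight before cycling.
    expanded = [s for s, w in weighted_servers for _ in range(w)]
    return itertools.cycle(expanded)

def least_connections(active_connections):
    # active_connections: dict mapping server -> current connection count.
    return min(active_connections, key=active_connections.get)

rr = round_robin(["a", "b", "c"])
print([next(rr) for _ in range(4)])                 # ['a', 'b', 'c', 'a']

wrr = weighted_round_robin([("big", 2), ("small", 1)])
print([next(wrr) for _ in range(3)])                # ['big', 'big', 'small']

print(least_connections({"a": 5, "b": 2, "c": 7}))  # 'b'
```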
The following are some of the most common load balancing techniques:
• Layer 4 Load Balancing – Layer 4 load balancing is the most common type of load
balancing. It operates at the transport layer and balances traffic based on network-level
information such as source and destination IP addresses and TCP/UDP ports.
• Layer 7 Load Balancing – Layer 7 load balancing is the most complex type of load
balancing. It operates at the application layer and balances traffic based on the
content of the traffic, such as the URL or the cookies.
• Content-Based Routing – Content-based routing is a type of load balancing that uses
content-based routing algorithms to determine where to route traffic. Content-
based routing algorithms consider the content of the traffic, such as the URL or the
cookies, to determine where to route traffic.
• Context-Aware Routing – Context-aware routing is a type of load balancing that uses
context-aware routing algorithms to determine where to route traffic. Context-
aware routing algorithms consider the context of the traffic, such as the time of day
or the location of the user, to determine where to route traffic.
Load balancing is a critical component of any high-availability system. A load
balancer ensures that traffic is distributed evenly across several servers, which
improves performance and availability.
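The layer-7, content-based routing described above can be sketched as a prefix lookup on the request path. The path prefixes and pool names below are invented for illustration:

```python
# Sketch of content-based (layer-7) routing: choose a backend pool
# from the request URL path. Prefixes and pool names are illustrative.

ROUTES = [
    ("/api/", "api-pool"),     # API calls go to application servers
    ("/static/", "cdn-pool"),  # static assets go to a CDN-backed pool
]
DEFAULT_POOL = "web-pool"

def route(url_path):
    # First matching prefix wins; otherwise fall back to the default pool.
    for prefix, pool in ROUTES:
        if url_path.startswith(prefix):
            return pool
    return DEFAULT_POOL

print(route("/api/users"))   # api-pool
print(route("/index.html"))  # web-pool
```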
Ans 2
Cloud scalability is the ability of a cloud computing system to increase or decrease
its capacity in response to changes in demand. A scalable cloud system can dynamically
allocate resources as needed to maintain acceptable levels of performance and service
quality, while minimizing costs. Cloud scalability is a key characteristic of cloud computing
that enables organizations to respond quickly and efficiently to fluctuations in demand. By
scaling up or down as needed, businesses can avoid over-provisioning or under-utilizing
resources, and can optimize their use of cloud resources to match changing business needs.
Cloud scalability also allows businesses to respond quickly to unexpected spikes in demand,
such as those that occur during promotional events or natural disasters. By quickly scaling
up capacity, businesses can ensure that their customers continue to receive the level of
service they expect, even during times of peak demand.
There are two main types of cloud scalability: vertical scalability and horizontal
scalability. Vertical scalability refers to the ability to increase or decrease the capacity of a
single server by adding or removing resources such as CPU, memory, or storage. Horizontal
scalability refers to the ability to add or remove servers from a system to increase or
decrease capacity. Cloud scalability is a critical factor to consider when choosing a cloud
provider. Not all cloud providers offer the same level of scalability, so it is important to
choose a provider that can meet your specific scalability needs.
When evaluating cloud providers, consider factors such as which scaling options they
support (vertical, horizontal, and auto-scaling), how quickly capacity can be added or
removed, and how scaling affects cost.
Elasticity is a term used in cloud computing that refers to the ability to scale
resources up or down quickly and easily in response to changes in demand. Cloud elasticity
allows organizations to pay only for the resources they need when they need them, and to
scale those resources up or down as needed. It is a key advantage of cloud computing that
can help organizations save money and improve efficiency. When demand for a particular
resource is high, organizations can quickly scale up their resources to meet that demand.
When demand decreases, they can scale down their resources, which can help reduce costs.
Organizations can use cloud elasticity to quickly respond to changes in demand,
which can be beneficial in a number of different situations. For example, if a company is
experiencing a surge in traffic to its website, it can quickly scale up its resources to
accommodate the increased traffic. Similarly, if a company is launching a new product or
service, it can quickly scale up its resources to meet the demand for the new product or
service. Cloud elasticity can also help organizations cope with unexpected events. For
example, if a company's website goes down due to a sudden spike in traffic, it can quickly
scale up its resources to restore the website. Cloud elasticity is a key advantage of cloud
computing that can help organizations save money, improve efficiency, and quickly respond
to changes in demand.
Underprovisioning in cloud computing is the practice of allocating fewer resources
than are needed to run a service or application. This can happen for a variety of reasons,
including to save money, to improve performance, or to reduce complexity.
Underprovisioning can have several negative consequences, including degraded
performance, reduced reliability, and increased complexity. In some cases, it can even lead
to service outages. As a result, it is important to carefully consider the trade-offs before
underprovisioning resources in the cloud. One common reason for underprovisioning
resources is to save money. Cloud providers typically charge for resources on a pay-as-you-
go basis, so it can be tempting to allocate fewer resources than are actually needed in order
to keep costs down. However, this can lead to degraded performance or even service
outages if the resources that are allocated are not sufficient to meet demand. In addition, it
can be difficult to predict future demand, so it is possible to end up underprovisioning
resources even if that was not the intention.
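The underprovisioning risk described above can be made concrete with a toy capacity check: compare provisioned capacity against an hourly demand forecast and list the hours in which requests would be dropped. All numbers are invented:

```python
# Toy illustration of underprovisioning: find the forecast hours in
# which demand exceeds the capacity actually paid for.

provisioned_rps = 300  # requests/second of capacity provisioned
hourly_demand_rps = [120, 180, 250, 410, 520, 280]  # forecast per hour

shortfall_hours = [
    hour for hour, demand in enumerate(hourly_demand_rps)
    if demand > provisioned_rps
]
print(shortfall_hours)  # [3, 4]: two hours where capacity is exceeded
```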
Overprovisioning is the opposite practice: allocating more resources than are needed.
By keeping spare capacity available, the system can continue to function even if demand
unexpectedly increases. The trade-off is cost: idle resources are still paid for, so the
price of spare capacity should be weighed against the cost of degraded performance or an
outage. This is an important consideration for organizations looking to control their
overall expenditure on IT.
By using a scalable and elastic system, organizations can reduce their costs by only
providing the resources that are needed, when they are needed. Together, these two
concepts can help prevent cloud resources from being under- or over-utilized. Elasticity
allows organizations to dynamically add or remove resources from their cloud environment
in response to changes in demand, so that resources are consumed only when they are
needed, which helps prevent underprovisioning. Many cloud providers offer auto-scaling
features that automatically scale cloud resources up or down in response to changes in
demand, so organizations always have the right number of resources to meet their needs
without overprovisioning or underprovisioning.
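An auto-scaling rule of the kind described above can be sketched as a simple threshold check. The thresholds and fleet bounds below are illustrative defaults, not any provider's real policy:

```python
# Minimal sketch of a threshold-based auto-scaling rule: add a server
# when average utilization is high, remove one when it is low.

def autoscale(current_servers, avg_cpu, low=0.30, high=0.70,
              min_servers=1, max_servers=10):
    if avg_cpu > high:
        return min(current_servers + 1, max_servers)  # scale out
    if avg_cpu < low:
        return max(current_servers - 1, min_servers)  # scale in
    return current_servers                            # within band: no change

print(autoscale(3, 0.85))  # demand spike  -> 4 servers
print(autoscale(3, 0.10))  # idle fleet    -> 2 servers
print(autoscale(3, 0.50))  # steady state  -> 3 servers
```

A real auto-scaler would run this decision in a loop against live utilization metrics and add a cooldown period so the fleet does not oscillate.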
Ans 3a
In computing, virtualization is the creation of a virtual (rather than actual) version of
something, including virtual computer hardware platforms, storage devices, and computer
networks. The main reason for virtualization is to save money on hardware and energy costs
by consolidating multiple physical devices into a smaller number of virtual machines that
can be run on fewer physical devices. Virtualization can also improve performance and
availability by allowing multiple virtual machines to share physical resources, such as
processors and memory. For example, if one virtual machine is running a processor-
intensive application, the other virtual machines can still access the processor resources
they need, even though the first virtual machine is using a large portion of those resources.
Virtualization can also be used to create test and development environments that
are isolated from the rest of the network, which helps to ensure that changes made in the
test environment do not affect the production environment. Virtualization is typically
implemented using software called a hypervisor, which creates and manages the virtual
machines on a physical server. A bare-metal (Type 1) hypervisor is installed directly on
the physical server hardware, while a hosted (Type 2) hypervisor runs on top of an
operating system. Paravirtualization is a related technique in which the guest operating
system is modified to cooperate directly with the hypervisor.
Multitenancy is a term used in computer science to describe the concept of multiple
users sharing a single instance of a software application or resource. In a multitenant
environment, each user has their own isolated workspace within the overall application, and
is unable to see or access the data or resources of other users. Multitenancy is often used in
cloud computing environments, where multiple customers share the same physical
infrastructure and resources. This can be a cost-effective way to deliver a service, as the
provider can spread the cost of the infrastructure across multiple customers. However, it
can also lead to some challenges, as each customer's data is stored in the same place and
there is the potential for one customer's data to be accessed by another.
To address these issues, providers of multitenant services often use security
measures such as segregation of data and user accounts, and access control mechanisms to
ensure that each customer's data is kept isolated from other customers. In addition,
customers may be offered the option to have their own dedicated instance of the service,
which offers more control and isolation but at a higher cost. Providers should also
consider a virtualization platform with built-in security and resource-management
features, which makes multitenant applications and services easier to deploy and manage.
Administrators can additionally use dedicated hardware for each tenant, adopt a
single-tenant virtualization solution, configure each virtual machine so that only one
tenant can use it at a time, and set limits on the number of virtual machines each
tenant can use.
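The data-segregation idea above can be sketched by namespacing every record with a tenant id, so a lookup scoped to one tenant can never return another tenant's data. The class and tenant names are illustrative; a real service would enforce this in the database and access-control layer:

```python
# Sketch of multitenant data segregation: every record is keyed by
# (tenant_id, key), so reads are always scoped to one tenant.

class TenantStore:
    def __init__(self):
        self._data = {}  # (tenant_id, key) -> value

    def put(self, tenant_id, key, value):
        self._data[(tenant_id, key)] = value

    def get(self, tenant_id, key):
        # Lookups are scoped to the caller's own tenant id, so one
        # tenant cannot read another tenant's rows.
        return self._data.get((tenant_id, key))

ts = TenantStore()
ts.put("acme", "plan", "gold")
ts.put("globex", "plan", "free")
print(ts.get("acme", "plan"))    # gold
print(ts.get("globex", "plan"))  # free; acme's record is invisible here
```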
Ans 3b
Serverless computing is a cloud computing model in which the cloud provider runs
the servers and the customer pays per use. There is no need for the customer to provision or
manage any servers. The term "serverless" can be misleading, as there are still servers
involved in the backend. However, these servers are managed by the cloud provider, and
the customer only pays for the resources they use, rather than for a fixed amount of server
time.
This model can be contrasted with traditional cloud models, such as Infrastructure as
a Service (IaaS) or Platform as a Service (PaaS), in which the customer is responsible for
provisioning and managing servers.
Common serverless offerings include AWS Lambda, Azure Functions, and Google Cloud
Functions.
Serverless computing is a relatively new model, and is often used for applications
that are event-driven or have variable workloads. For example, a serverless application
might be triggered by a user upload to a cloud storage service, or an event from an Internet
of Things (IoT) device.
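The event-driven pattern above can be sketched as a Lambda-style function. The provider, not the customer, decides when to invoke the handler; the simplified event shape below is invented for illustration and does not match any provider's real event format:

```python
# Sketch of an event-driven, serverless-style function: the platform
# calls handler(event, context) when a trigger fires, e.g. a file
# landing in object storage. The event fields are illustrative.

def handler(event, context=None):
    # Pull the uploaded object's name out of the event and "process" it.
    key = event["object_key"]
    return {"status": "processed", "object": key}

# Locally simulating the provider invoking the function after an upload:
print(handler({"object_key": "photos/cat.jpg"}))
```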
The advantages of serverless computing include:
1. Increased Efficiency: Serverless computing allows you to pay only for the resources you
use. There is no need to pay for idle capacity, which can lead to considerable cost
savings.
2. Reduced Management Overhead: The cloud provider manages the servers and operating
system for you, so you do not need to worry about patching, updating, or managing the
underlying infrastructure.
3. Increased Scalability: Serverless workloads can be scaled up or down as needed without
having to provision or de-provision servers, which saves time and money.
4. High Availability: The cloud provider runs the underlying infrastructure on redundant
hardware, which can lead to increased uptime and reliability.
5. Reduced Latency: Providers can run serverless functions in regions close to end users,
which can improve performance for latency-sensitive applications.
There are some drawbacks to serverless computing, however, such as the potential
for increased latency from "cold starts", when the cloud provider must provision the
necessary resources before a function can run. Additionally, serverless applications can
be more difficult to debug and monitor, as they are distributed across infrastructure
the customer does not control. The main
disadvantage of serverless computing is that it can be more expensive than traditional
server-based models. This is because you are paying for the resources used to run your
code, rather than paying for a fixed amount of server capacity. In addition, serverless
computing can be less flexible than traditional server-based models, as you are dependent
on the cloud provider to offer the services you need.
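The cost trade-off above can be illustrated with back-of-envelope arithmetic. All prices below are invented for illustration; real pricing also includes per-GB-second charges and free tiers:

```python
# Toy comparison of pay-per-use vs a fixed server, with made-up prices.

def serverless_cost(invocations, price_per_million=0.20):
    # Pay-per-use: cost scales with the number of invocations.
    return invocations / 1_000_000 * price_per_million

def fixed_server_cost(hours, price_per_hour=0.05):
    # Fixed capacity: the server is billed whether it is busy or idle.
    return hours * price_per_hour

monthly_invocations = 5_000_000
monthly_hours = 730  # roughly one month

print(serverless_cost(monthly_invocations))  # low traffic favors pay-per-use
print(fixed_server_cost(monthly_hours))      # billed regardless of usage
```

At low traffic the pay-per-use model wins; at sustained high traffic the per-invocation charges can exceed the fixed server's price, which is the cost disadvantage described above.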