What Is Cloud Computing
Disadvantages of Cloud Computing:
Internet Connectivity: Cloud services rely on internet access; poor connectivity can make
them slow or unreachable.
Vendor Lock-in: Migrating services from one cloud provider to another can be difficult
because of differences between platforms.
Limited Control: Users have limited control over the underlying infrastructure, since it is
managed by the service provider.
Security: Although cloud providers implement robust security measures, data security
concerns persist.
Q.2 Deployment models, Types & Diagram of deployment models.
Cloud Deployment Models
Cloud deployment models describe how cloud computing resources are deployed, accessed,
and managed; they define the ownership of the underlying infrastructure and who may use
it. Each model caters to different needs around security, customization, compliance, and
scalability, so organizations choose the deployment model that best aligns with their
specific requirements and business objectives.
Public Cloud
A Public Cloud is a cloud computing model in which the infrastructure and services are
owned and operated by a third-party provider and made available to the public over the
internet. Users access shared resources such as servers, storage, and applications, and pay
only for what they use. Examples of public cloud providers are Amazon Web Services
(AWS), Microsoft Azure, and Google Cloud Platform (GCP).
Advantages:
● Cost-efficient due to resource sharing.
● Rapid scalability to handle changing workloads.
● No upfront hardware investment needed.
● Global accessibility and ease of use.
Disadvantages:
● Potential security and privacy concerns.
● Limited customization and control over the infrastructure.
● Dependence on the cloud provider's services and policies.
Private Cloud
A Private Cloud is a cloud computing environment in which the infrastructure and services
are owned and operated by a single organization, such as a company or government agency,
and accessed only by authorized users within that organization. Private cloud organizations
typically run their own data centers, and a private cloud provides a higher level of security.
Examples: HPE, Dell, VMware, etc.
Advantages:
● Enhanced security and data privacy.
● Full customization and control over infrastructure.
● Suitable for industries with strict compliance requirements.
● Can be tailored to specific organizational needs.
Disadvantages:
● Higher upfront costs due to dedicated infrastructure.
● Requires specialized IT expertise for management.
● Limited scalability compared to public clouds.
Hybrid Cloud
A Hybrid Cloud combines public and private cloud environments, allowing organizations to
take advantage of the benefits of both. It can absorb traffic spikes during peak usage
periods and can provide greater flexibility, scalability, and cost-effectiveness than a single
cloud environment. Examples: IBM, DataCore Software, Rackspace, Threat Stack, Infinidat,
etc.
Advantages:
● Balances cost efficiency and security.
● Enables workload flexibility based on demand.
● Provides disaster recovery and backup capabilities.
● Facilitates seamless data migration between environments.
Disadvantages:
● Complexity in managing and integrating multiple cloud environments.
● Potential data transfer costs between public and private components.
● Challenges in maintaining consistent security policies.
Q.2 Software as a Service & Deployment Tools.
Ans: SaaS is also known as "On-Demand Software." It is a software distribution model in
which services are hosted by a cloud service provider and made available to end users over
the internet, so end users do not need to install any software on their devices to access
these services.
Characteristics of SaaS:
1. Web-based Delivery: SaaS applications are delivered over the internet, typically through
a web browser, so they can be accessed from anywhere with an internet connection. Users
no longer need to install and maintain software on their local machines.
2. Multi-tenancy: Multiple users, or "tenants," access a SaaS application from a single
instance of the program. This lets the provider serve many clients with the same application
without administering a separate program instance for each client (a sketch follows this
list).
3. Automatic Updates: SaaS providers are responsible for keeping the software up to date,
so every user has the newest features and security patches without manually installing
updates or fixes.
4. Scalable: SaaS systems can readily grow or shrink in response to user demand.
Businesses can add or remove users as needed without worrying about infrastructure or
licensing fees.
5. Subscription-based Pricing: SaaS products are typically sold on a subscription basis, with
customers paying a monthly or yearly fee for access. Companies therefore avoid large
upfront investments in software licenses.
6. Data Security: SaaS providers are responsible for data security, including data
encryption, access controls, and backups, relieving users of much of the burden of securing
their own data.
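
To make multi-tenancy concrete, here is a minimal sketch of how one application instance can scope every data access to the requesting tenant. All names (fetch_invoices, DATABASE, the tenant keys) are hypothetical and not tied to any particular SaaS product:

# Minimal multi-tenancy sketch: one application instance, many tenants.
# All names below are hypothetical illustrations.

DATABASE = {
    "acme":   [{"id": 1, "amount": 120.0}],
    "globex": [{"id": 2, "amount": 75.5}],
}

def fetch_invoices(tenant_id: str) -> list[dict]:
    # In a real SaaS application the tenant_id would come from the
    # authenticated session, and the filter would be a WHERE clause
    # (or a separate schema) in a shared database.
    if tenant_id not in DATABASE:
        raise KeyError(f"unknown tenant: {tenant_id}")
    return DATABASE[tenant_id]

print(fetch_invoices("acme"))  # [{'id': 1, 'amount': 120.0}]

Every tenant shares the same running code, but each query is filtered by tenant, which is what lets one instance serve many clients safely.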
Amazon
1. Amazon, one of the largest retailers on the Internet, is also one of the primary providers
of cloud development services.
2. Amazon has spent a lot of time and money setting up a multitude of servers to service its
popular website, and is making those vast hardware resources available for all developers to
use.
3. The service in question is the Elastic Compute Cloud, better known as EC2. This is a
commercial web service that lets developers and companies rent capacity on Amazon's
proprietary cloud of servers, one of the biggest server farms in the world.
4. EC2 enables scalable deployment of applications by letting customers request a set
number of virtual machines, onto which they can load any application of their choice.
5. Thus, customers can create, launch, and terminate server instances on demand, creating
a truly "elastic" operation. Amazon's service lets customers choose from three sizes of
virtual servers:
a. Small, which offers the equivalent of a system with 1.7 GB of memory, 160 GB of
storage, and one virtual 32-bit core processor.
b. Large, which offers the equivalent of a system with 7.5 GB of memory, 850 GB of
storage, and two 64-bit virtual core processors.
c. Extra-large, which offers the equivalent of a system with 15 GB of memory, 1.7 TB of
storage, and four virtual 64-bit core processors.
(In other words, you pick the size and power you want for your virtual server, and Amazon
does the rest)
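
As a rough illustration of this on-demand model, the sketch below uses Amazon's boto3 SDK for Python to launch and later terminate a single instance. The AMI ID is a placeholder, and the t3.micro instance type is a modern example; the small/large/extra-large sizes above are the historical EC2 offerings:

# Sketch: launching and terminating an EC2 instance on demand with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request one virtual machine ("instance") of a chosen size.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI; use a real one
    InstanceType="t3.micro",          # size/power of the virtual server
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("launched", instance_id)

# ... deploy and run your application on the instance ...

# Terminate it when done; you pay only while it runs.
ec2.terminate_instances(InstanceIds=[instance_id])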
6. EC2 is just part of Amazon’s Web Services (AWS) set of offerings, which provides
developers with direct access to Amazon’s software and machines.
7. By tapping into the computing power that Amazon has already constructed, developers
can build reliable, powerful, and low-cost web-based applications.
8. Amazon provides the cloud (and access to it), and developers provide the rest. They pay
only for the computing power that they use.
9. AWS is perhaps the most popular cloud computing service to date. Amazon claims more
than 330,000 customers: a combination of developers, start-ups, and established
companies.
Q.5. Three Layered Architectural Requirement & Providers.
Ans: Layered Architecture of Cloud:
All the concrete realizations of cloud computing can be organized into a layered view
covering the entire stack, from hardware appliances to software systems. Cloud resources
supply the "computing horsepower" needed to deliver services; this bottom layer is
frequently implemented as a data center with hundreds or even thousands of stacked
nodes. Cloud infrastructure can be heterogeneous in character, since it can be constructed
from a range of resources, including clusters and even networked PCs. The infrastructure
can also include database systems and other storage services.
The physical infrastructure is managed by the core middleware, whose goals are to provide
an optimal runtime environment for applications and to make the best use of resources. At
the bottom of the stack, virtualization technologies ensure runtime environment
customization, application isolation, sandboxing, and quality of service. Hardware
virtualization is most frequently used at this level: hypervisors control the pool of available
resources and expose the distributed infrastructure as a collection of virtual machines. With
virtual machine technology it is feasible to precisely divide up hardware resources such as
CPU and memory, and to virtualize particular devices to accommodate user and application
needs.
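
As a purely conceptual illustration of that partitioning (this is not a real hypervisor API, just the bookkeeping idea), a toy allocator might carve a fixed pool of CPU and memory into virtual-machine-sized slices:

# Toy sketch of partitioning a physical resource pool into VMs.
# Hypothetical code, not a real hypervisor interface.

class ResourcePool:
    def __init__(self, cpus: int, memory_gb: int):
        self.free_cpus = cpus
        self.free_memory_gb = memory_gb

    def allocate_vm(self, cpus: int, memory_gb: int) -> dict:
        # Carve a VM-sized slice out of the pool, or fail if exhausted.
        if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
            raise RuntimeError("insufficient physical resources")
        self.free_cpus -= cpus
        self.free_memory_gb -= memory_gb
        return {"cpus": cpus, "memory_gb": memory_gb}

pool = ResourcePool(cpus=64, memory_gb=256)
vm1 = pool.allocate_vm(cpus=4, memory_gb=16)
vm2 = pool.allocate_vm(cpus=8, memory_gb=32)
print(pool.free_cpus, pool.free_memory_gb)  # 52 208

A real hypervisor adds isolation, scheduling, and device virtualization on top of this accounting, but dividing a shared physical pool is the core idea.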
Application Layer
1. The application layer, at the top of the stack, is where the actual cloud applications
live. Unlike traditional applications, cloud applications can take advantage of
automatic scaling to achieve better performance, higher availability, and lower
operational costs.
2. This layer consists of different Cloud Services which are used by cloud users. Users
can access these applications according to their needs. Applications are divided
into Execution layers and Application layers.
3. Before an application transfers data, the application layer determines whether the
communication partners are available and whether enough cloud resources are
accessible for the required communication. Applications must cooperate in order to
communicate, and the application layer is in charge of this coordination.
4. The application layer also handles IP-traffic protocols such as Telnet and FTP. Other
examples of application layer systems include web browsers and protocols such as
SNMP, HTTP, and HTTPS, the secure variant of HTTP.
Platform Layer
1. The operating system and application software make up this layer.
2. Users should be able to rely on the platform for scalability, dependability, and
security protection; it gives them a space to create their apps, test operational
processes, and track execution outcomes and performance. It also serves as the
application-layer foundation on which SaaS offerings are implemented.
3. The objective of this layer is to deploy applications directly on virtual machines.
4. Operating systems and application frameworks make up the platform layer, which is
built on top of the infrastructure layer. The platform layer's goal is to lessen the
difficulty of deploying programs directly into VM containers.
5. By way of illustration, Google App Engine functions at the platform layer to provide
API support for implementing storage, databases, and business logic of ordinary web
apps.
Infrastructure Layer
1. It is a layer of virtualization where physical resources are divided into a collection of
virtual resources using virtualization technologies like Xen, KVM, and VMware.
2. This layer serves as the Central Hub of the Cloud Environment, where resources are
constantly added utilizing a variety of virtualization techniques.
3. It provides the base upon which the platform layer is built, constructed from
virtualized network, storage, and computing resources, and gives users the flexibility
they need.
4. Automated resource provisioning is made possible by virtualization, which also
improves infrastructure management.
5. The infrastructure layer sometimes referred to as the virtualization layer, partitions
the physical resources using virtualization technologies like Xen, KVM, Hyper-V, and
VMware to create a pool of compute and storage resources.
6. The infrastructure layer is crucial to cloud computing since virtualization technologies
are the only ones that can provide many vital capabilities, like dynamic resource
assignment.
Datacenter Layer
In a cloud environment, this layer is responsible for managing physical resources such
as servers, switches, routers, power supplies, and cooling systems. Providing end
users with services requires all of these resources to be available and managed in
data centers. Physical servers connect to the data center network through high-speed
devices such as routers and switches.
In software application design, separating business logic from the persistent data it
manipulates is a well-established practice, because the same data can be used in
numerous ways to support numerous use cases and therefore cannot be locked into a
single application. With the introduction of microservices, the need has arisen for this
data itself to become a service.
A single database shared by many microservices creates very tight coupling: new or
updated services cannot be deployed independently if they require database changes
that may affect other services. Breaking these complex service interdependencies
requires a data layer containing many databases, each serving a single microservice
or perhaps a few closely related microservices (see the sketch below).
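
A minimal sketch of that database-per-service idea, using SQLite in-memory databases for illustration (the service and table names are hypothetical):

# Sketch: each microservice owns its own database, so schema changes in
# one service cannot break another. Service names are hypothetical.
import sqlite3

# Orders service: owns the orders database exclusively.
orders_db = sqlite3.connect(":memory:")
orders_db.execute("CREATE TABLE orders (id INTEGER, item TEXT)")

# Billing service: owns a separate billing database.
billing_db = sqlite3.connect(":memory:")
billing_db.execute("CREATE TABLE invoices (id INTEGER, total REAL)")

# Each service evolves its own schema independently; the other service
# is reached through its API, never through its tables.
orders_db.execute("INSERT INTO orders VALUES (1, 'widget')")
billing_db.execute("INSERT INTO invoices VALUES (1, 9.99)")
print(orders_db.execute("SELECT * FROM orders").fetchall())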
Quality of Service (QoS) in Cloud Computing
Overview: Quality of Service (QoS) refers to the performance characteristics and
guarantees provided by a cloud service or network. In cloud computing, QoS encompasses
the parameters that ensure services meet required standards and deliver a satisfactory user
experience. QoS is crucial for maintaining the reliability, efficiency, and performance of
cloud-based applications and services.
Key QoS Parameters:
1. Latency: Definition: The time it takes for a data packet to travel from the source to the
destination. Importance: Low latency is essential for real-time applications, such as online
gaming or video conferencing, where delays can significantly impact user experience.
2. Throughput: Definition: The amount of data transmitted successfully from one point to
another within a given time frame. Importance: Higher throughput ensures that large
volumes of data can be transferred efficiently, which is crucial for data-intensive
applications like big data analytics and streaming services.
3. Bandwidth: Definition: The maximum rate at which data can be transferred over a
network connection. Importance: Adequate bandwidth is necessary to support high-speed
data transfers and ensure that applications perform optimally under varying loads.
4. Jitter: Definition: The variation in latency over time. Importance: Low jitter is crucial for
applications requiring stable and consistent timing, such as VoIP and video calls; high jitter
can lead to poor audio and video quality.
5. Packet Loss: Definition: The percentage of data packets that are lost during
transmission. Importance: Minimizing packet loss is essential for maintaining the integrity
and reliability of data transmission, especially for applications that rely on real-time data
delivery.
6. Availability: Definition: The proportion of time a service is operational and accessible.
Importance: High availability ensures that services are reliable and accessible when
needed, minimizing downtime and service interruptions.
7. Reliability: Definition: The ability of a service to consistently perform as expected
without failures. Importance: Reliable services maintain consistent performance and
provide users with dependable access to applications and data.
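
To tie several of these parameters together, the sketch below computes average latency, jitter, packet loss, and availability from made-up measurement samples. Treating jitter as the standard deviation of latency samples is one common convention, and all numbers here are purely illustrative:

# Computing basic QoS metrics from hypothetical measurement samples.
from statistics import mean, pstdev

# Round-trip latencies in milliseconds; None marks a lost packet.
samples = [21.3, 19.8, None, 22.1, 20.5, None, 21.0]

received = [s for s in samples if s is not None]
latency_ms = mean(received)                      # average latency
jitter_ms = pstdev(received)                     # variation in latency
packet_loss = (len(samples) - len(received)) / len(samples)

# Availability: fraction of time the service was up in the period.
uptime_hours, total_hours = 719.2, 720.0
availability = uptime_hours / total_hours

print(f"latency {latency_ms:.1f} ms, jitter {jitter_ms:.2f} ms")
print(f"packet loss {packet_loss:.1%}, availability {availability:.3%}")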