
Printed Pages: 02 Sub Code: KCS-713

Paper Id: 231552 Roll No.

SOLUTION
B.TECH. (SEM VII) THEORY EXAMINATION 2022-23
CLOUD COMPUTING
Time: 3 Hours                                Total Marks: 100
Note: Attempt all Sections. If any data is missing, then choose it suitably.

SECTION A

1. Attempt all questions in brief. 2 x 10 = 20


(a) Compare parallel computing and distributed computing.
Ans : Parallel computing involves breaking a complex task into smaller sub-tasks that
can be executed simultaneously on multiple processors within a single computer
system. It uses shared memory architectures and synchronization mechanisms. In
contrast, distributed computing involves breaking a complex task into smaller sub-tasks
that are distributed across multiple computer systems connected through a network. It
uses message passing architectures and allows for scalability, fault tolerance, and load
balancing. Parallel computing is suitable for high-performance computing tasks, while
distributed computing is suitable for large-scale computing tasks that require multiple
computer systems.
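As a rough illustration (not part of the question paper), the sketch below runs an arbitrary squaring task in parallel across the cores of a single machine; a distributed system would instead ship the same sub-tasks to workers on other machines over a network.

```python
# A rough sketch: parallel execution of an arbitrary squaring task across
# the cores of a single machine.
from multiprocessing import Pool

def square(n):
    # CPU-bound sub-task; each worker process runs on its own core
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:          # one machine, multiple processors
        results = pool.map(square, range(10))
    print(results)
    # A distributed system would instead send these sub-tasks to workers on
    # other machines over a network, using message passing rather than shared memory.
```
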
(b) What are the service models available in cloud computing?
Ans : The three service models available in cloud computing are Infrastructure as a
Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
(c) Express the levels of virtualization.
Ans : Virtualization can be implemented at five levels: the instruction set architecture (ISA)
level, the hardware abstraction level, the operating system level, the library support level,
and the user-application level.

(d) Illustrate Web services.


Ans : A web service is a set of open protocols and standards that allow data exchange
between different applications or systems. Web services can be used by software
programs written in different programming languages and on different platforms to
exchange data through computer networks such as the Internet. In the same way, web
services can be used for inter-process communication within a single computer.
Any software, application, or cloud technology that uses a standardized web protocol
(HTTP or HTTPS) to connect, interoperate, and exchange data messages over the
Internet, usually in XML (Extensible Markup Language), is considered a web service.
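For illustration, the hypothetical snippet below consumes a web service over HTTP with the Python requests library; the endpoint URL and query parameter are invented for the example.

```python
# A hypothetical example of consuming a web service over HTTP; the endpoint
# URL and query parameter are invented for illustration.
import requests

response = requests.get(
    "https://example.com/api/weather",      # assumed service endpoint
    params={"city": "Lucknow"},
    timeout=10,
)
response.raise_for_status()
print(response.headers.get("Content-Type"))  # e.g. text/xml or application/json
print(response.text)                         # the data message exchanged between systems
```
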
(e) Differentiate Public cloud and Private cloud.
Ans : A public cloud is a type of cloud computing where services are provided over the
internet to multiple users or organizations, managed by a third-party provider. A private
cloud, on the other hand, is a cloud computing environment dedicated to a single
organization, which can be managed in-house or by a third-party provider.
(f) List out the characteristics of SaaS.
Ans : The characteristics of Software as a Service (SaaS) include:
i. Accessibility: Users can access the software application from any device with an internet
connection.
ii. Scalability: SaaS applications can be easily scaled up or down based on the user's needs.
iii. Multi-tenancy: SaaS applications are designed to serve multiple users or organizations
from a single instance of the software.
iv. Automatic updates: SaaS providers automatically update the software, ensuring that
users always have access to the latest features and functionalities.
v. Subscription-based pricing: SaaS applications are typically offered on a subscription
basis, with users paying a monthly or annual fee.
vi. Centralized management: SaaS providers are responsible for managing and maintaining
the infrastructure, ensuring that users can focus on their core business operations.
(g) List the security issues in cloud.
Ans : Security issues in cloud computing include:
i. Data breaches: unauthorized access to sensitive data stored in the cloud.
ii. Data loss: accidental or intentional deletion of data from the cloud.
iii. Malware attacks: malicious software that can infect cloud infrastructure, applications or
data.
iv. Denial of service (DoS) attacks: malicious attempts to prevent access to cloud services.
v. Insider threats: breaches caused by employees or contractors with access to cloud
resources.
vi. Shared technology vulnerabilities: security risks posed by vulnerabilities shared across
cloud tenants.
vii. Lack of transparency: difficulty in assessing the security measures in place in the cloud.
viii. Compliance and regulatory issues: challenges related to complying with legal and
regulatory requirements across multiple jurisdictions.
ix. Physical security: security threats related to the physical security of cloud data centers.
(h) Define security governance.
Ans : Security governance in cloud computing refers to the process of establishing policies,
procedures, and controls to manage and mitigate security risks associated with cloud
computing. It involves defining the roles and responsibilities of all stakeholders, including
cloud providers and cloud customers, and ensuring that security requirements are met
through effective security management practices. Security governance includes activities
such as risk assessment, threat analysis, vulnerability management, incident response, and
compliance management. It aims to ensure that cloud services are delivered in a secure and
reliable manner, and that security risks are managed throughout the entire lifecycle of cloud
computing services.
(i) List the functional models of GAE.
Ans : The functional models of Google App Engine (GAE) include:
i. Datastore: A fully managed NoSQL database service for storing non-relational data (a short
usage sketch follows this list).
ii. App Engine Standard Environment: A platform for developing and running web
applications using a wide range of programming languages, including Python, Java,
Node.js, Go, and PHP.
iii. App Engine Flexible Environment: A platform for deploying containerized applications
using Docker and Kubernetes.
iv. Cloud Storage: A scalable and durable object storage service for storing and accessing
unstructured data.
v. Cloud SQL: A fully managed relational database service for running MySQL and
PostgreSQL databases.
vi. Identity and Access Management (IAM): A service for managing access to cloud
resources by defining roles and permissions.
vii. Compute Engine: A service for running virtual machines in the cloud.
viii. Kubernetes Engine: A fully managed Kubernetes service for deploying, managing, and
scaling containerized applications.
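As an illustration of item i above, the sketch below stores and reads an entity with the google-cloud-datastore client library; the kind, key name, and properties are hypothetical, and a GCP project with Datastore enabled plus configured credentials is assumed.

```python
# A minimal Datastore sketch using the google-cloud-datastore client library;
# the kind "Book", the key name, and the properties are hypothetical.
from google.cloud import datastore

client = datastore.Client()

key = client.key("Book", "cloud-computing-101")        # kind + name
entity = datastore.Entity(key=key)
entity.update({"title": "Cloud Computing", "pages": 350})
client.put(entity)                                      # store the entity

fetched = client.get(key)                               # read it back by key
print(fetched["title"], fetched["pages"])
```
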
(j) What is the use of CloudWatch in Amazon EC2?
Ans : Amazon CloudWatch is a monitoring and management service provided by
Amazon Web Services (AWS) that is used to monitor and collect metrics, logs, and
events from Amazon EC2 instances and other AWS resources. CloudWatch can be
used to monitor the performance and availability of Amazon EC2 instances and
applications running on them. It can collect and track metrics such as CPU utilization,
network traffic, and disk usage, and can generate alarms based on predefined
thresholds. CloudWatch can also be used to monitor logs and events from EC2
instances, enabling real-time troubleshooting and analysis. Overall, CloudWatch helps
in maintaining the operational health of Amazon EC2 instances and applications, and
provides insights into resource utilization and application performance.
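A hedged sketch of this usage with boto3 is given below; the region, instance ID, alarm name, and threshold are assumptions chosen for illustration, and AWS credentials are presumed to be configured.

```python
# A hedged boto3 sketch: read an EC2 CPU metric and create an alarm.
# Region, instance ID, alarm name, and threshold are assumptions.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
instance = [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}]   # hypothetical

# Average CPU utilisation over the last hour, in 5-minute datapoints
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=instance,
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
print(stats["Datapoints"])

# Raise an alarm when average CPU stays above 80% for two consecutive periods
cloudwatch.put_metric_alarm(
    AlarmName="demo-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=instance,
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
)
```
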
SECTION B

2. Attempt any three of the following: 10x3 = 30


(a) Describe in detail about major Deployment Models and services for
cloud computing.
(b) Illustrate the three major components of virtualized environment.
(c) Give the diagram of Cloud Computing Reference Architecture. Illustrate
in detail about the Conceptual Reference Model of cloud.
(d) Illustrate the following in detail
i. Demand-Driven Resource Provisioning
ii. Event-Driven Resource Provisioning
(e) Elaborate HDFS concepts with suitable illustrations.

SECTION C

3. Attempt any one part of the following: 10x1=10

(a) Describe in detail about cloud computing reference model with diagram.
Ans : The Cloud Computing Reference Model (CCRM) is a high-level architectural framework
that provides a common vocabulary and structure for understanding cloud computing. The
CCRM was developed by the National Institute of Standards and Technology (NIST) to provide
a standard approach to describing cloud computing architectures and to help organizations
develop and deploy cloud services.

The CCRM consists of five layers that represent the major functional areas of cloud computing:

i. The Cloud Service User Layer: This layer represents the end-users who access cloud
services over the internet. The users can be individuals, organizations, or machines that
consume cloud services.

ii. The Cloud Service Broker Layer: This layer represents the intermediaries or middlemen
who provide services to cloud service users. They help in selecting and integrating
multiple cloud services from different providers to meet the user's requirements.

iii. The Cloud Service Provider Layer: This layer represents the cloud service providers who
offer cloud services such as Infrastructure as a Service (IaaS), Platform as a Service
(PaaS), and Software as a Service (SaaS). They provide computing resources and
infrastructure to the cloud service users.

iv. The Cloud Service Infrastructure Layer: This layer represents the physical infrastructure
that supports cloud computing, including servers, storage devices, and networks. It
includes both the physical and virtual components of the cloud infrastructure.

v. The Cloud Service Management Layer: This layer represents the management and
control systems that operate and monitor the cloud infrastructure and services. It includes
cloud orchestration, monitoring, security, and governance.

The CCRM provides a standard framework for describing cloud computing architectures and
helps organizations develop and deploy cloud services. It enables users to compare and evaluate
different cloud service offerings, and it helps cloud service providers to design and implement
their services in a consistent and interoperable way.

(b) List out and discuss the innovative characteristic of cloud computing.
Ans : Cloud computing is a rapidly evolving technology that offers several innovative
characteristics, including:

i. On-demand self-service: Cloud computing provides users with the ability to provision and
deploy computing resources on demand, without the need for human intervention from
the service provider. This allows users to scale up or down their computing resources
according to their business needs.

ii. Rapid elasticity: Cloud computing allows for the rapid scaling up or down of computing
resources as required by the user. This means that users can quickly increase their
computing resources during peak usage periods and reduce them during periods of low
demand, allowing for optimal resource utilization.

iii. Resource pooling: Cloud computing provides users with access to a shared pool of
computing resources, including servers, storage, and networks. This allows users to take
advantage of economies of scale, reducing costs and improving efficiency.

iv. Ubiquitous network access: Cloud computing enables users to access their computing
resources from anywhere, using any device with an internet connection. This provides
users with greater flexibility and mobility, and allows for remote collaboration and access
to resources.

v. Metered service: Cloud computing provides users with the ability to pay for computing
resources on a metered basis, allowing users to only pay for the resources they use. This
provides cost savings and helps to optimize resource utilization.

vi. Multi-tenancy: Cloud computing allows multiple users to share the same physical
resources while maintaining data security and privacy. This provides greater resource
utilization and cost savings.

vii. Automation: Cloud computing provides users with the ability to automate tasks such as
resource provisioning, configuration, and management. This reduces the need for manual
intervention and improves efficiency.

These innovative characteristics of cloud computing provide significant advantages over
traditional on-premises computing, including cost savings, flexibility, scalability, and increased
efficiency.

4. Attempt any one part of the following: 10x1=10


(a) Describe in detail about the REST a software architecture style for
distributed systems.
Ans : Representational State Transfer (REST) is a software architecture style for
building distributed systems that is based on the principles of the World Wide
Web. REST is a popular choice for designing APIs for web applications, and it is
widely used for building web services.
The main principles of REST include:
i. Client-server architecture: In REST, the client and server are decoupled and
communicate through a uniform interface. This allows the server to be
scaled independently of the client and allows multiple clients to interact
with the same server.

ii. Stateless: REST is stateless, meaning that each request from the client
contains all the necessary information to be processed by the server. The
server does not maintain any state between requests, which allows for
better scalability and fault tolerance.

iii. Cacheability: REST supports caching, which can improve performance and reduce
network traffic. Responses from the server can be marked as cacheable or
non-cacheable, and the client can use cached responses when appropriate.

iv. Uniform interface: REST uses a uniform interface for communication between clients
and servers. The interface consists of four components: resource identification,
resource manipulation, self-descriptive messages, and hypermedia as the engine of
application state (HATEOAS).

v. Layered system: REST allows for a layered system architecture, where intermediaries
such as proxies or gateways can be used to improve scalability, security, and
performance.

In a REST-based system, resources are identified by URIs (Uniform Resource Identifiers)
and can be manipulated using a limited set of HTTP methods, including GET, POST,
PUT, DELETE, and PATCH. The resources can be represented in different formats
such as JSON, XML, or HTML, and can be accessed using a wide range of client
applications.
Overall, REST provides a flexible and scalable approach to building distributed
systems, and it has become a widely adopted architecture style for building web
services and APIs.
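A minimal, hypothetical client-side sketch of these ideas follows; the base URL and the "books" resource are invented to illustrate URI-identified resources manipulated through the standard HTTP methods.

```python
# A hypothetical REST client; the base URL and resource fields are invented.
import requests

BASE = "https://example.com/api"

# Create a resource (POST), then read, update, and delete it
created = requests.post(f"{BASE}/books", json={"title": "Cloud Computing"}, timeout=10)
book = created.json()                        # self-descriptive JSON representation
book_uri = f"{BASE}/books/{book['id']}"      # resource identified by a URI

print(requests.get(book_uri, timeout=10).json())                                  # GET
requests.put(book_uri, json={"title": "Cloud Computing, 2nd ed."}, timeout=10)    # PUT
requests.delete(book_uri, timeout=10)                                             # DELETE

# Statelessness: each request above carries everything the server needs;
# no session state is kept between calls.
```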

(b) Analyze the pros and cons of virtualization in detail.

Ans: Virtualization is a technique of creating multiple virtual instances of an operating system,
application, or server on a single piece of physical hardware. Virtualization has become a popular
technology due to its ability to reduce costs, increase efficiency, and improve flexibility. However,
it also has its drawbacks. This answer analyzes the pros and cons of virtualization in detail.
Pros of Virtualization:
i. Cost savings: Virtualization enables businesses to run multiple operating systems on a single
physical server, thus reducing the number of physical servers required. This leads to
significant cost savings in terms of hardware, maintenance, and energy costs.

ii. Improved resource utilization: Virtualization enables better resource utilization by
consolidating multiple workloads onto a single physical server. This leads to better
utilization of resources such as CPU, memory, and storage.



iii. Flexibility: Virtualization allows businesses to easily and quickly add or remove resources
as needed, providing greater flexibility in managing their infrastructure.

iv. Increased availability: Virtualization enables high availability and disaster recovery
capabilities through live migration and replication of virtual machines.

v. Scalability: Virtualization enables businesses to easily scale their infrastructure as needed,
by adding more virtual machines to their environment.

Cons of Virtualization:
i. Performance overhead: Virtualization introduces a performance overhead due to the
additional layer of software and hardware abstraction. This can lead to reduced performance
of virtual machines compared to physical machines.

ii. Security concerns: Virtualization introduces new security concerns such as the risk of virtual
machine escape, hypervisor vulnerabilities, and inter-VM attacks.

iii. Complexity: Virtualization adds complexity to the infrastructure, which can lead to increased
management and troubleshooting efforts.

iv. Licensing: Virtualization can lead to licensing challenges as some software vendors require
licenses per physical server, while others require licenses per virtual machine.

v. Single point of failure: Virtualization introduces a single point of failure in the form of the
hypervisor, which can lead to downtime for all virtual machines running on the affected
physical server.

In conclusion, virtualization provides significant benefits in terms of cost savings, improved resource
utilization, flexibility, availability, and scalability. However, it also has some drawbacks, such as
performance overhead, security concerns, complexity, licensing challenges, and a single point of
failure. Organizations need to carefully evaluate their needs and requirements to determine if
virtualization is the right solution for their infrastructure.
5. Attempt any one part of the following: 10x1=10
(a) List and discuss the principles for designing public cloud, private cloud
and hybrid cloud.
Ans : Public Cloud Principles:
i. Multi-tenancy: Public clouds are designed to serve multiple customers
or tenants simultaneously. This requires the cloud to provide a highly
scalable and distributed infrastructure that can handle the varying
demands of different customers.
ii. Self-service: Public clouds are designed to be highly self-service
oriented, meaning that customers should be able to quickly and easily
provision resources and services without any human intervention.
iii. Resource pooling: Public clouds provide a shared pool of resources
such as compute, storage, and networking, which can be dynamically
allocated to different customers based on demand.
iv. Elasticity: Public clouds are designed to be highly elastic, allowing
customers to quickly and easily scale up or down their resources as
needed to meet changing demand.
v. Pay-per-use: Public clouds are typically billed based on usage, with
customers only paying for the resources they consume. This allows for
a highly cost-effective model where customers can quickly and easily
adjust their spending based on their needs.

Private Cloud Principles:

i. Dedicated infrastructure: Private clouds are designed to be deployed within an
organization's own data center or private network, providing a highly secure
and dedicated infrastructure.
ii. Controlled access: Private clouds are typically accessed only by
authorized users within the organization, providing a high degree of
control and security.
iii. Customization: Private clouds can be customized to meet the specific
needs of an organization, providing a high degree of flexibility and
control over the infrastructure.
iv. Resource isolation: Private clouds provide dedicated resources to each
tenant, ensuring that there is no contention for resources between
different departments or users.
v. Predictable performance: Private clouds are designed to provide
predictable performance and availability, which is critical for mission-
critical applications.
Hybrid Cloud Principles:
i. Integration: Hybrid clouds are designed to seamlessly integrate private
and public cloud resources, allowing customers to leverage the benefits
of both.
ii. Portability: Hybrid clouds allow customers to move workloads between
private and public clouds, providing a high degree of flexibility and
agility.
iii. Resource optimization: Hybrid clouds allow customers to optimize
their resource utilization by leveraging the best of both private and
public clouds.
iv. Cost optimization: Hybrid clouds allow customers to optimize their
costs by leveraging the most cost-effective resources for each workload.
v. Security: Hybrid clouds must be designed with a strong security
posture, ensuring that sensitive data is protected regardless of where it
is stored or processed.

(b) Describe Cloud deployment models with neat diagrams.


Ans : Cloud deployment models refer to the different ways in which cloud computing services
can be deployed or hosted. There are mainly four cloud deployment models: public cloud,
private cloud, hybrid cloud, and multi-cloud. Here's a brief description and a neat diagram of
each of these cloud deployment models:

Public Cloud:
Public cloud deployment model is hosted and managed by third-party service providers who
offer computing resources, such as servers, storage, and applications, over the internet. This
model is suitable for businesses of all sizes that require an affordable and scalable computing
environment.

In a public cloud deployment, the infrastructure is shared among multiple customers, and each
customer's data is isolated from others. The provider is responsible for maintaining the security
and availability of the cloud environment.
Private Cloud:
Private cloud deployment model is owned and managed by a single organization or a dedicated
third-party provider, and the infrastructure is not shared with other customers. This model is
suitable for businesses that require more control and customization of their cloud environment.

In a private cloud deployment, the infrastructure can be located on-premises or hosted in a third-
party data center. The organization is responsible for maintaining the security and availability
of the cloud environment.

Hybrid Cloud:
Hybrid cloud deployment model combines the features of both public and private clouds to
provide a unified computing environment. This model is suitable for businesses that require the
flexibility to move their workloads between public and private clouds.

In a hybrid cloud deployment, some applications and data can be hosted on-premises or in a
private cloud, while others can be hosted in a public cloud. The organization is responsible for
managing the security and availability of the cloud environment.

Multi-Cloud:
Multi-cloud deployment model involves the use of multiple public clouds or a combination of
public and private clouds. This model is suitable for businesses that require a high level of
resilience, scalability, and vendor flexibility.

In a multi-cloud deployment, the organization can choose different cloud providers for different
workloads or applications. The organization is responsible for managing the security and
availability of the cloud environment across different cloud providers.

6. Attempt any one part of the following: 10x1=10


(a) Describe the Secure Software Development Life Cycle with neat
diagram.
Ans : The Secure Software Development Life Cycle (SSDLC) is a process
designed to develop software applications with security in mind, from the initial
planning stages to deployment and maintenance. The goal of the SSDLC is to
reduce the risk of security vulnerabilities in software by integrating security
practices throughout the software development process. The SSDLC typically
includes the following phases:

i. Requirements Gathering: The first phase involves gathering the requirements for the
software application. This includes identifying the intended users, understanding the
business needs, and defining the functional and non-functional requirements.

ii. Design: In this phase, the software design is created. This includes
identifying the architecture, components, and interfaces of the system.
Security requirements are also identified and incorporated into the
design.

iii. Implementation: In this phase, the software is developed and tested. Secure coding
practices are followed to prevent vulnerabilities (a brief example appears after this
answer), and security testing is performed to identify and mitigate any potential
security issues.

iv. Testing: In this phase, the software is tested to ensure that it meets the
requirements and is secure. This includes functional testing,
performance testing, and security testing.



v. Deployment: In this phase, the software is deployed to the production
environment. This includes configuring the software, setting up the
infrastructure, and ensuring that the software is deployed securely.

vi. Maintenance: In this phase, the software is monitored and maintained to ensure that it
remains secure. This includes patching vulnerabilities, updating software components,
and responding to security incidents.
Throughout the SSDLC, security is integrated into each phase, from design to
deployment and maintenance. This helps to ensure that the software is secure
and reduces the risk of security vulnerabilities.
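
As one concrete illustration of the secure coding practices mentioned in the implementation phase, the sketch below contrasts a vulnerable string-built SQL query with a parameterized one; the table, column names, and input value are invented for the example.

```python
# One secure-coding practice from the implementation phase: parameterized SQL
# queries to prevent injection. Table, columns, and input are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # attacker-controlled value

# Vulnerable pattern (string concatenation lets input change the query structure):
#   conn.execute("SELECT role FROM users WHERE name = '" + user_input + "'")

# Secure pattern: the value is bound as data, never interpreted as SQL
rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)   # [] because the injection attempt matches nothing
```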

(b) Explain in detail about security monitoring and incident response.


Ans : Security monitoring and incident response are two critical aspects of an organization's
overall security strategy. Security monitoring refers to the process of monitoring a network,
system, or application for suspicious activities, vulnerabilities, or threats. Incident response, on
the other hand, is the process of identifying, containing, and mitigating security incidents.

Security Monitoring:

Security monitoring is an ongoing process that involves the collection, analysis, and correlation
of various security-related events and alerts generated by the organization's security
infrastructure. The objective of security monitoring is to detect potential security threats before
they can cause damage to the organization's assets or data.

There are various tools and techniques used for security monitoring, such as intrusion detection
systems (IDS), intrusion prevention systems (IPS), security information and event management
(SIEM) systems, and network traffic analysis tools. These tools are used to monitor network
traffic, system logs, and other security-related events to detect potential security incidents.
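
A deliberately simplified sketch of the monitoring idea is shown below: it scans an authentication log for repeated failed logins and raises an alert. The log path, pattern, and threshold are assumptions; production environments rely on IDS/SIEM tooling rather than ad-hoc scripts.

```python
# A simplified stand-in for security monitoring: count failed SSH logins per
# source IP in an auth log and flag repeat offenders. Path, regex, and
# threshold are assumptions.
import re
from collections import Counter

FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5

failures = Counter()
with open("/var/log/auth.log") as log:            # assumed log location
    for line in log:
        match = FAILED.search(line)
        if match:
            failures[match.group(1)] += 1

for ip, count in failures.items():
    if count >= THRESHOLD:
        # In practice this alert would feed the incident response process below
        print(f"ALERT: {count} failed logins from {ip}")
```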

Incident Response:

Incident response is a set of procedures that are followed when a security incident is detected.
The incident response process involves the following steps:

i. Identification: The first step in the incident response process is to identify the security
incident. This can be done through various means, such as alerts generated by security
monitoring systems, user reports, or other indications of suspicious activity.

ii. Containment: Once the security incident has been identified, the next step is to contain
it to prevent further damage. This can involve isolating affected systems or network
segments, blocking network traffic from known malicious IP addresses, or disabling
user accounts.

iii. Investigation: After the security incident has been contained, the next step is to
investigate the incident to determine its root cause and scope. This can involve
analyzing system logs, network traffic, and other sources of information to determine
the extent of the incident.

iv. Mitigation: Once the incident has been investigated, the next step is to mitigate the
impact of the incident. This can involve applying patches to affected systems, updating
security configurations, or implementing new security measures to prevent similar
incidents in the future.

v. Recovery: The final step in the incident response process is to recover from the
incident. This can involve restoring affected systems and data from backups,
reconfiguring security controls, or implementing new security measures to prevent
future incidents.



Conclusion:

In conclusion, security monitoring and incident response are two essential components of an
organization's security strategy. By implementing robust security monitoring processes and
following established incident response procedures, organizations can detect and mitigate
potential security incidents before they can cause significant damage to their assets or data.

7. Attempt any one part of the following: 10x1=10


(a) Describe the following in detail
i. Google Cloud Infrastructure
Ans : Google Cloud Infrastructure is a set of computing resources and services
provided by Google Cloud Platform (GCP) that allow businesses and
individuals to build, deploy, and scale applications, websites, and services on
Google's infrastructure. It offers a range of services, including computing,
storage, networking, database, machine learning, and analytics, among others.

Google Cloud Infrastructure is built on Google's global network of data centers, which are
located in multiple regions around the world. The infrastructure is designed to be highly
scalable, reliable, and secure, with multiple layers of redundancy and backup systems.

Some of the key components of Google Cloud Infrastructure include Compute Engine, which
provides virtual machines for running applications and workloads; Cloud Storage, which
provides scalable storage for data and files; Kubernetes Engine, which allows users to deploy
and manage containerized applications; and BigQuery, which is a data analytics platform.

Overall, Google Cloud Infrastructure provides a comprehensive set of tools and services that
can help businesses and individuals build and run their applications and services on a secure
and scalable platform.
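
As a small, hypothetical usage sketch of one of these services, the snippet below writes and reads an object in Cloud Storage with the google-cloud-storage client library; the bucket name and object path are assumptions, and credentials plus an existing bucket are presumed.

```python
# A hypothetical Cloud Storage sketch using the google-cloud-storage library;
# bucket name and object path are assumptions.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-demo-bucket")          # assumed bucket name

blob = bucket.blob("reports/summary.txt")
blob.upload_from_string("quarterly summary")      # store an object

print(blob.download_as_text())                    # read it back
```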

ii. GAE Architecture


Ans : GAE (Google App Engine) is a Platform-as-a-Service (PaaS) solution offered by
Google for building and deploying web applications on Google's infrastructure. The
GAE architecture is designed to provide developers with a scalable, reliable, and easy-
to-use platform for developing and deploying web applications.

The GAE architecture consists of several components that work together to provide the
necessary functionality for developing and deploying web applications. These
components include:

i. Front-end servers: These servers handle incoming HTTP requests from users
and direct them to the appropriate backend servers.

ii. Backend servers: These servers process requests from the front-end servers and
execute the necessary code to generate a response.

iii. Datastore: This is a NoSQL database that is used to store and retrieve data used
by the application.

iv. Task queues: These queues allow developers to perform background processing
tasks asynchronously.

v. Memcache: This is an in-memory cache that is used to store frequently accessed data to
reduce the number of database queries.

vi. APIs: GAE provides a set of APIs for developers to interact with various Google
services, including Google Cloud Storage and Google Cloud SQL.

The GAE architecture is designed to scale automatically based on the incoming traffic
to the application. This means that as the traffic increases, GAE will automatically spin
up additional front-end and backend servers to handle the increased load. Additionally,
GAE provides built-in security features, such as SSL support and a firewall, to help
protect applications from malicious attacks.
Overall, the GAE architecture provides developers with a scalable, reliable, and secure
platform for building and deploying web applications on Google's infrastructure.
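
A minimal sketch of what an application deployed on the GAE Standard Environment (Python runtime) might look like is shown below; the route, response text, and use of Flask follow common GAE Python examples, and an accompanying app.yaml (for example a single line `runtime: python39`) is assumed.

```python
# A minimal sketch of a GAE Standard Environment service (Python runtime).
# The route and response are illustrative; deployment also needs an app.yaml.
from flask import Flask

app = Flask(__name__)   # the runtime serves a WSGI app conventionally named "app"

@app.route("/")
def index():
    # Requests reach this handler via GAE's front-end servers; backend
    # instances are added automatically as traffic grows.
    return "Hello from App Engine"

if __name__ == "__main__":
    # Local development only; on App Engine the platform runs the app itself.
    app.run(host="127.0.0.1", port=8080)
```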

(b) Illustrate any five web services of Amazon in detail.


Ans : Amazon is a global technology giant that offers a wide range of web services to help
businesses and developers build scalable, reliable, and cost-effective applications. Here are five
web services provided by Amazon and their details:

i. Amazon S3 (Simple Storage Service):


Amazon S3 is a highly scalable and secure object storage service that enables customers
to store and retrieve any amount of data from anywhere on the web. It is designed to
deliver 99.999999999% durability and 99.99% availability of objects over a given year.
It is used by businesses and developers to store and retrieve data, host static websites,
and backup and archive data.

ii. Amazon EC2 (Elastic Compute Cloud):


Amazon EC2 is a web service that provides scalable computing capacity in the cloud.
It allows customers to launch and manage virtual machines, known as instances, on
Amazon's infrastructure. Customers can choose from a range of instance types and
sizes, and pay only for the computing resources they consume. Amazon EC2 is used
for a variety of applications, including web and mobile applications, big data
processing, and batch processing.

iii. Amazon RDS (Relational Database Service):


Amazon RDS is a managed database service that makes it easy to set up, operate, and
scale a relational database in the cloud. It supports popular database engines such as
MySQL, PostgreSQL, Oracle, SQL Server, and Amazon Aurora. Amazon RDS
automates time-consuming tasks such as patching, backups, and software updates, and
provides high availability and reliability.

iv. Amazon DynamoDB:


Amazon DynamoDB is a fast, flexible, and fully managed NoSQL database service
that can handle any amount of data at any scale. It is designed for applications that
require low latency and high throughput, and provides built-in security, backup and
restore, and automatic scaling. Amazon DynamoDB is used for a variety of use cases,
including gaming, ad tech, IoT, and mobile applications.

v. Amazon SES (Simple Email Service):


Amazon SES is a cost-effective and flexible email service that enables businesses to
send and receive emails using their own domain names. It provides a reliable email
infrastructure, easy-to-use APIs, and advanced features such as email tracking, custom
email headers, and automatic feedback loops. Amazon SES is used by businesses of all
sizes, from startups to large enterprises, to send transactional emails, newsletters, and
marketing emails.
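
The hypothetical boto3 sketch below exercises two of these services, S3 and DynamoDB; the bucket name, table name, and key schema are assumptions, and both the AWS credentials and the resources themselves are presumed to already exist.

```python
# A hypothetical boto3 sketch touching two of the services above. Bucket name,
# table name, and key schema are assumptions.
import boto3

# Amazon S3: store and retrieve an object
s3 = boto3.client("s3")
s3.put_object(Bucket="my-demo-bucket", Key="notes/hello.txt", Body=b"hello s3")
obj = s3.get_object(Bucket="my-demo-bucket", Key="notes/hello.txt")
print(obj["Body"].read())

# Amazon DynamoDB: write and read an item by its primary key
table = boto3.resource("dynamodb").Table("Games")               # assumed table
table.put_item(Item={"player_id": "p1", "score": 100})
print(table.get_item(Key={"player_id": "p1"}).get("Item"))
```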
