Unit V_TCS 351


Unit V: Foundation Services of AWS: Savings, Security, Compliance and DRaaS Development Operations.

AWS Services: Amazon Lambda, Amazon Relational Database Service (Amazon RDS), Amazon S3, Amazon
CloudFront, Amazon Glacier, and Amazon SNS.
Service Management in Cloud Computing: Service Level Agreements (SLAs), Billing & Accounting.
Economics of Cloud Computing: SWOT Analysis and Value Proposition, General Cloud Computing Risks (Performance, Network Dependence, Reliability, Outages, Safety-Critical Processing, Compliance and Information Security).
Design and Deploy an Online Video Subscription Application on the Cloud.

Foundation Services of AWS

AWS provides four services that form the foundation of any cloud deployment: computing, storage, networking, and database. Each service is designed to offer high availability and scalability so that you can build robust and reliable applications in the cloud. Let’s take a closer look at each service and see how it can benefit your business.

Compute: Compute resources are the brains and processing power needed by applications and systems to carry out computational tasks. Compute is essentially the same as common server components, such as CPU and RAM, which many of you are already familiar with. Physical servers within a data center are considered compute resources, as they may contain multiple CPUs and large amounts of RAM to process instructions given by the operating system and applications. Below are the computing services provided by AWS:

AWS EC2: Amazon Elastic Compute Cloud (EC2) is a web service that provides secure, resizable computing capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EC2’s simple web interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment. You can use Amazon EC2 to launch as many applications as you want—whether they’re running on Linux or Windows—and manage all of them through the same interface. And since it’s scalable and pay-as-you-go, Amazon EC2 reduces up-front investment costs while providing flexibility and control over resource allocation.
With Amazon EC2, there are no upfront investments required – instead, you simply pay per hour of usage. So if
your application needs more computing power, you can increase capacity right away without having to wait for
an IT department to order and install new hardware.
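As a quick illustration of that pay-as-you-go model, here is a minimal sketch of launching an instance with the boto3 SDK (the same library the RDS example later in this unit uses); the AMI ID, instance type, and region below are hypothetical placeholders, not values from this text.

import boto3

# Create an EC2 client; credentials come from your AWS CLI configuration
ec2 = boto3.client('ec2', region_name='us-west-2')

# Launch a single t2.micro instance from a placeholder AMI ID
response = ec2.run_instances(
    ImageId='ami-0123456789abcdef0',  # hypothetical AMI ID
    InstanceType='t2.micro',
    MinCount=1,
    MaxCount=1,
)
print(response['Instances'][0]['InstanceId'])

Terminating the instance when you no longer need it stops the hourly charge, which is exactly the flexibility described above.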

AWS Lambda: Amazon Web Services Lambda is a serverless computing platform that runs your code in
response to events and automatically manages the underlying compute resources for you. You can use Lambda
to build applications that respond quickly to new information. Plus, Lambda is scalable so you can process
events as they happen, without having to provision or manage any servers. For example, an e-commerce
company might use Lambda functions to analyze incoming customer data for marketing purposes. Or an
enterprise IT organization might use it to keep their systems up-to-date with security patches and fixes.
With Lambda, you don’t have to worry about capacity planning because it scales seamlessly along with your
needs. It also lets developers spend more time on development and less time on managing infrastructure –
perfect for fast-moving startups!

AWS Elastic Beanstalk: Amazon Web Services Elastic Beanstalk is a platform as a service (PaaS) that
streamlines the process of deploying and scaling web applications and services developed with popular
programming languages and frameworks. Elastic Beanstalk provides pre-configured platforms for programming
languages like Java, .NET, PHP, Node.js, Python, and Ruby. You can simply upload your code and Elastic
Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto-scaling
to application health monitoring. Elastic Beanstalk’s no-configuration mode allows you to deploy an application without worrying about details such as what type of environment it needs or where it should be deployed.
It also includes many features to allow developers to focus on their code rather than administrative tasks,
including integrations with other AWS products. These features include Auto Scaling and Load Balancing
which will scale up servers when traffic increases and automatically distribute incoming requests across all
servers.

Network: Networking in cloud computing is the process of connecting computers and devices together so they
can communicate with each other. The four main types of networking are: point-to-point, client-server, peer-to-
peer, and mesh. Point-to-point networking is the most basic type of networking, and it involves two devices that
are connected directly to each other. Client-server networking is a bit more complex, and it involves a server
that provides services to clients. Peer-to-peer networking occurs when two or more devices share data with one
another without using an intermediary device like a server. Mesh networks are built for redundancy and consist
of multiple paths for messages to travel between nodes.

Amazon Route 53: Amazon Route 53 is a scalable and highly available Domain Name System (DNS) service.
It provides secure and reliable routing to your resources, such as websites and web applications, with low
latency. Amazon Route 53 is fully compliant with IPv6 as well. You can use Amazon Route 53 to perform three
main functions: Domain registration, DNS routing, and health checking.
One thing that sets Amazon Route 53 apart from other DNS services is the inclusion of various geo and routing features. An example of this would be Latency Based Routing, which routes traffic to whichever endpoint gives the requester the lowest latency. Another is IP Prefix Hijacking Protection, which protects against accidental changes in prefixes at the registrar level by monitoring and blocking requests for domain registrations that conflict with your prefixes in Route 53.
You could also set up Dynamic Record Sets which will automatically create records if they don’t exist when
queried so there’s no need to maintain static records. Or maybe Reverse DNS Lookups which will map an IP
address back to a domain name for security purposes. Other products that work well with Amazon Route 53
include AppStream 2.0, AppSync, and CloudFront CDN.
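To make the Latency Based Routing idea concrete, here is a minimal boto3 sketch that upserts a latency record; the hosted zone ID, record name, and IP address are hypothetical placeholders.

import boto3

route53 = boto3.client('route53')

# Upsert a latency-based A record: Route 53 answers with this record
# when us-west-2 offers the lowest latency for the querying client.
route53.change_resource_record_sets(
    HostedZoneId='Z0000000EXAMPLE',            # hypothetical hosted zone ID
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'www.example.com',
                'Type': 'A',
                'SetIdentifier': 'us-west-2',  # distinguishes this latency record
                'Region': 'us-west-2',
                'TTL': 60,
                'ResourceRecords': [{'Value': '203.0.113.10'}],
            },
        }]
    },
)

A second record with a different SetIdentifier and Region would let Route 53 choose between endpoints based on measured latency.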

AWS VPC: Amazon Web Services (AWS) is a cloud platform that provides customers with a wide array of
infrastructure services, such as computing power, storage options, networking, and databases. One of these
services is called Amazon Virtual Private Cloud (VPC), which is a secure and scalable cloud computing service
that isolates your resources from those of other AWS customers. AWS VPC lets you create an isolated virtual
network environment in the AWS cloud. With Amazon VPC, AWS resources can be launched into a virtual network that you define. You configure the usual networking items, such as IP address ranges, subnets, route tables, gateways, and security settings. VPC integrates with many AWS services and is a foundational service of AWS. You can use both IPv4 and IPv6 in your VPC for secure and easy access to resources and applications. VPC provides you with complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways, and resources launched into a VPC are isolated from the rest of the AWS cloud.
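As a minimal sketch of that control, the following boto3 code creates a VPC with a self-selected IPv4 range and carves out one subnet; the CIDR blocks are arbitrary examples.

import boto3

ec2 = boto3.client('ec2')

# Create a VPC with a self-selected private IPv4 address range
vpc = ec2.create_vpc(CidrBlock='10.0.0.0/16')
vpc_id = vpc['Vpc']['VpcId']

# Carve one subnet out of the VPC's address range
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock='10.0.1.0/24')
print(vpc_id, subnet['Subnet']['SubnetId'])

Route tables, gateways, and security groups are then configured against these IDs in the same way.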

Cloud Storage: Cloud storage is a service that allows users to store and access data over the Internet. It is a
popular choice for businesses because it is scalable, reliable, and secure. In the cloud storage model, digital data is stored in logical pools in the cloud. Multiple servers are used for this storage system, and they can be located throughout the country or even outside it, depending on many factors. Private companies or cloud providers like AWS, Azure, Google Cloud, and IBM Cloud own and maintain these servers. In addition to ensuring data is available and accessible at all times, cloud storage services also maintain the physical environment and safeguard the data. A provider of storage capacity sells or leases storage space to individuals and companies to store data about their users, entities, and applications.
The storage services provided by AWS Cloud are Amazon S3 and Amazon Glacier.
Amazon S3: Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance. You can store and retrieve any amount of data, at any time, from anywhere on the web, through a simple web services interface. Amazon S3 is designed to make web-scale computing easier for developers. It gives customers complete control over their data by providing robust access controls and multiple redundant storage facilities with no single point of failure, and you only pay for what you use. Amazon S3 makes it easy to serve your content quickly and reliably, even when some parts of your infrastructure don’t function properly or become unavailable.
Amazon Glacier: Amazon Glacier is a low-cost storage service that provides secure and durable storage for
data backup and archival. Amazon Glacier is easy to use, with a simple web interface that you can use to store
and retrieve any amount of data. Amazon Glacier is a great choice for storing data that you don’t need to access
frequently, but want to keep in a safe place.
When you upload your data to Amazon Glacier, it is copied onto multiple devices at different physical locations, which means your data will stay safe even if there’s an unexpected event like a fire or flood at one facility. It also helps ensure that your data stays available during regional outages, since there’s no single point of failure when accessing it from another location.
With Amazon Glacier, customers pay only for what they use. Storage prices start as low as $0.01 per gigabyte per month, and retrieval pricing starts at $0.001 per gigabyte. That’s considerably less than most tape libraries, so storing archived data in Glacier could save you money over time.

Database: Cloud databases are a new breed of database that offers all the benefits of the cloud: elasticity,
scalability, and cost-effectiveness. Just like traditional databases, they can be used to store data, but they also
come with a few key differences. For one, cloud databases are designed to be scalable and highly available, so
they can handle large workloads without going down. They’re also automatically replicated across regions for
high availability and seamless disaster recovery. Unlike most traditional databases, which require you to set up
hardware yourself in order to grow your compute power, these services are preconfigured for auto-scaling as
needed so you don’t have to worry about capacity planning. Plus, there’s no upfront cost for these powerful
services – pay only for what you use when you need it.

Amazon RDS (Relational Database Service): Amazon RDS is a managed relational database service that
makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-effective and
resizable capacity while automating time-consuming administration tasks such as hardware provisioning,
database setup, patching, and backups. Amazon RDS is available on several database instance types – optimized
for memory, performance or I/O – and provides you with six familiar database engines to choose from,
including Amazon Aurora, MySQL, MariaDB, Oracle Database, Microsoft SQL Server, and PostgreSQL. With
these features, Amazon RDS gives you maximum flexibility and control over your data.
You can also use various deployment models for Amazon RDS, which include managed services (where AWS handles everything) and shared services (where you maintain ownership).

Amazon DynamoDB (Non-Relational Database): Amazon DynamoDB is a non-relational database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second.
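A minimal sketch of working with DynamoDB through boto3 is shown below; the table name and key schema are hypothetical and assume a table with a 'CustomerId' partition key already exists.

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Customers')  # hypothetical table name

# Write an item; DynamoDB is schemaless beyond the key attributes
table.put_item(Item={'CustomerId': '42', 'Name': 'Alice', 'Plan': 'premium'})

# Read the item back by its primary key
item = table.get_item(Key={'CustomerId': '42'})['Item']
print(item)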
Compliance and DRaaS Development Operations

Cloud security at AWS is the highest priority. As organizations embrace the scalability and flexibility of the
cloud, AWS is helping them evolve security, identity, and compliance into key business enablers. AWS builds
security into the core of our cloud infrastructure, and offers foundational services to help organizations meet
their unique security requirements in the cloud.

As an AWS customer, you will benefit from a data center and network architecture built to meet the
requirements of the most security-sensitive organizations. Security in the cloud is much like security in your on-
premises data centers—only without the costs of maintaining facilities and hardware. In the cloud, you don’t
have to manage physical servers or storage devices. Instead, you use software-based security tools to monitor
and protect the flow of information into and out of your cloud resources.

An advantage of the AWS Cloud is that it allows you to scale and innovate, while maintaining a secure
environment and paying only for the services you use. This means that you can have the security you need at a
lower cost than in an on-premises environment.

As an AWS customer you inherit all the best practices of AWS policies, architecture, and operational processes
built to satisfy the requirements of our most security-sensitive customers. Get the flexibility and agility you need
in security controls.

The AWS Cloud enables a shared responsibility model. While AWS manages security of the cloud, you are
responsible for security in the cloud. This means that you retain control of the security you choose to implement
to protect your own content, platform, applications, systems, and networks no differently than you would in an
on-site data center.

AWS provides you with guidance and expertise through online resources, personnel, and partners. AWS
provides you with advisories for current issues, plus you have the opportunity to work with AWS when you
encounter security issues.

You get access to hundreds of tools and features to help you to meet your security objectives. AWS provides
security-specific tools and features across network security, configuration management, access control, and data
encryption.

Finally, AWS environments are continuously audited, with certifications from accreditation bodies across
geographies and verticals. In the AWS environment, you can take advantage of automated tools for asset
inventory and privileged access reporting.

Benefits of AWS security

• Keep your data safe — The AWS infrastructure puts strong safeguards in place to help protect your privacy. All data is stored in highly secure AWS data centers.

• Meet compliance requirements — AWS manages dozens of compliance programs in its infrastructure. This means that segments of your compliance have already been completed.

• Save money — Cut costs by using AWS data centers. Maintain the highest standard of security without having to manage your own facility.

• Scale quickly — Security scales with your AWS Cloud usage. No matter the size of your business, the AWS infrastructure is designed to keep your data safe.

Compliance
AWS Cloud Compliance helps you understand the robust controls in place at AWS for security and data
protection in the cloud. Compliance is a shared responsibility between AWS and the customer, and you can visit
the Shared Responsibility Model to learn more. Customers can feel confident in operating and building on top of
the security controls AWS uses on its infrastructure.
The IT infrastructure that AWS provides to its customers is designed and managed in alignment with best
security practices and a variety of IT security standards. The following is a partial list of assurance programs
with which AWS complies:

SOC 1/ISAE 3402, SOC 2, SOC 3

FISMA, DIACAP, and FedRAMP

PCI DSS Level 1

ISO 9001, ISO 27001, ISO 27017, ISO 27018

AWS provides customers a wide range of information on its IT control environment in whitepapers, reports,
certifications, accreditations, and other third-party attestations.

AWS Elastic Disaster Recovery (AWS DRS) minimizes downtime and data loss with fast, reliable recovery of
on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time
recovery.

Elastic Disaster Recovery is the recommended service for disaster recovery to AWS. It provides similar
capabilities as CloudEndure Disaster Recovery, and is operated from the AWS Management Console. This
facilitates seamless integration between AWS DRS and other AWS services, such as AWS CloudTrail, AWS
Identity and Access Management (IAM), and Amazon CloudWatch.

With AWS DRS, you can recover your applications on AWS from physical infrastructure, VMware vSphere,
Microsoft Hyper-V, and cloud infrastructure. You can also use AWS DRS to recover Amazon Elastic Compute
Cloud (EC2) instances in a different AWS Region.

You can use AWS DRS to recover all of your applications and databases that run on supported Windows and
Linux operating system versions.

AWS Services: Amazon Lambda,


AWS Lambda is an event-driven, serverless computing platform provided by Amazon as a part of Amazon Web Services. You don’t need to worry about which AWS resources to launch or how you will manage them; you simply put the code on Lambda, and it runs.
In AWS Lambda, code is executed in response to events in AWS services, such as adding/deleting files in an S3 bucket, an HTTP request from Amazon API Gateway, etc. Note, however, that Amazon Lambda can only be used to execute background tasks.

AWS Lambda function helps you to focus on your core product and business logic instead of managing
operating system (OS) access control, OS patching, right-sizing, provisioning, scaling, etc.
AWS Lambda Block Diagram

Step 1: First, upload your AWS Lambda code in any language supported by AWS Lambda. Java, Python, Go, and C# are some of the languages supported by AWS Lambda.

Step 2: Certain AWS services (shown in the diagram) can then trigger AWS Lambda.

Step 3: AWS Lambda stores the uploaded code together with the event details on which it should be triggered.

Step 4: AWS Lambda executes the code when it is triggered by those AWS services.

Step 5: AWS charges only when the AWS Lambda code executes, and not otherwise.

This will happen in the following scenarios:

• Files are uploaded to an S3 bucket
• An HTTP GET/POST endpoint URL is hit
• DynamoDB tables are added, modified, or deleted
• Data streams are collected
• Push notifications are sent
• A website is hosted
• Email is sent

Note: Remember that you are charged for AWS Lambda only when your code executes; otherwise, you don’t need to pay anything.
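To make the S3 scenario concrete, here is a minimal sketch of a Python Lambda handler for the file-upload trigger; it only logs the bucket and key carried in the standard S3 event structure.

def lambda_handler(event, context):
    # Lambda passes the S3 event; each record describes one uploaded object
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {'status': 'processed', 'records': len(event['Records'])}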

Events that Trigger AWS Lambda

Here are the events that can trigger AWS Lambda:

• Inserting, updating, and deleting data in a DynamoDB table
• Push notifications from Amazon SNS
• Creation of or modifications to objects in S3 buckets
• CloudTrail log history, which AWS Lambda can be used to search and process
• Scheduled events, which carry out a task on a regular time pattern
• GET/POST methods through API Gateway

AWS Lambda Concepts


Function:

A function is a program or a script which runs in AWS Lambda. Lambda passes invocation events into your
function, which processes an event and returns its response.

Runtimes:

A runtime allows functions in various languages to run on the same base execution environment. You configure your function with a runtime that matches your selected programming language.

Event source:

An event source is an AWS service, such as Amazon SNS, or a custom service, that triggers your function and helps execute its logic.

Lambda Layers:

Lambda layers are an important distribution mechanism for libraries, custom runtimes, and other important
function dependencies. This AWS component also helps you to manage your development function code
separately from the unchanging code and resources that it uses.

Log streams:

Log streams allow you to annotate your function code with custom logging statements, which help you analyse the execution flow and performance of your AWS Lambda functions.

How to use AWS Lambda


Now, we will learn how to use AWS Lambda with AWS Lambda example:

Step 1) Open the AWS Lambda URL

Go to https://aws.amazon.com/lambda/ and choose Get Started.

Step 2) Create an account

Next, create an account or sign in with your existing account.

Step 3) Edit the code and click Run


In the next Lambda page,

1. Edit the code


2. Click Run

Step 4) Check output


You will see the output.
AWS Lambda vs AWS EC2
Here are some major differences between AWS Lambda and EC2.

Definition: AWS Lambda is a Platform as a Service (PaaS); it helps you run and execute your backend code. AWS EC2 is an Infrastructure as a Service (IaaS); it provides virtualized computing resources.

Flexibility: AWS Lambda does not offer any flexibility to log in to compute instances or to choose a customized operating system or language runtime. AWS EC2 offers the flexibility to select a variety of instances, custom operating systems, security patches, networking, etc.

Installation process: With AWS Lambda, you just need to select the environment where you want to run the code and push the code into AWS Lambda. With EC2, you first have to choose the OS, install all the required software, and then push your code to EC2.

Environment restrictions: AWS Lambda is restricted to a few languages. AWS EC2 has no environment restrictions.

AWS Lambda vs AWS Elastic Beanstalk

Here are some major differences between AWS Lambda and Elastic Beanstalk.

Main task: AWS Elastic Beanstalk deploys and manages apps on the AWS Cloud without you worrying about the infrastructure that runs those applications. AWS Lambda is used for running and executing your back-end code; you can’t use it to deploy an application.

Selection of AWS resources: Elastic Beanstalk gives you the freedom to select AWS resources; for example, you can choose the EC2 instance type that is optimal for your application. With Lambda, you can’t select the AWS resources (such as the type of EC2 instance); Lambda offers resources based on your workload.

Type of system: Elastic Beanstalk is a stateful system. AWS Lambda is a stateless system.

Use Cases of AWS Lambda

AWS Lambda is used for a wide range of applications, such as:

• ETL processes
• Real-time file processing and real-time stream processing
• Creating web applications
• Amazon products like Alexa chatbots and Amazon Echo/Alexa
• Data processing (real-time streaming analytics)
• Automated backups of everyday tasks
• Scalable back ends (mobile apps, IoT devices)
• Executing server-side backend logic
• Filtering and transforming data

Best practices for Lambda functions

Here are some best practices for AWS Lambda functions:

• Use the right “timeout.”
• Utilize the function’s local storage, which is 512 MB in size, in the /tmp folder.
• Minimize start-up code that is not directly related to processing the current event.
• Use the built-in CloudWatch monitoring of your Lambda functions to view and optimize request latencies.

When not to use AWS Lambda

The following are situations where Lambda is surely not an ideal option:

• It is not appropriate to use AWS Lambda for software packages or applications that rely on calling underlying Windows RPCs.
• It should not be used for custom software applications with licensing agreements, like MS Office document processing, Oracle databases, etc.
• AWS Lambda should not be used for workloads that need custom hardware, such as GPU acceleration or hardware affinity.

Advantages of using AWS Lambda

Here are the pros/benefits of using AWS Lambda:

• AWS Lambda is a highly flexible tool to use.
• It helps you grant access to resources, including VPCs.
• You can author functions directly with the WYSIWYG editor in the console.
• You can use it as a plugin for Eclipse and Visual Studio.
• As it is a serverless architecture, you don’t need to worry about managing or provisioning servers, or setting up any virtual machine.
• It helps developers run and execute code in response to events without building any infrastructure.
• You pay only for the compute time taken, and only when your code runs.
• You can monitor your code’s performance in real time through CloudWatch.
• It executes your code only when needed, and scales automatically from a few requests per day to more than thousands of requests per second.
• AWS Lambda can be configured with external event timers, so it can be used for scheduled tasks.
• Lambda functions are stateless, so they can be scaled quickly.
• AWS Lambda is fast: it executes your code within milliseconds.

Limitations of AWS Lambda

Here are the cons/disadvantages of using AWS Lambda:

• The AWS Lambda tool is not suitable for small projects.
• AWS Lambda relies entirely on AWS for the infrastructure, so you can’t install any additional software if your code demands it.
• Concurrent execution is limited to 100.
• Its memory allocation can vary between 128 MB and 1536 MB.
• An event request should not exceed 128 KB.
• Lambda functions write their logs only to CloudWatch, which is the only tool available to monitor or troubleshoot your functions.
• Its code execution timeout is just 5 minutes.

Amazon Relational Database Service (Amazon RDS),

Amazon RDS is the Relational Database Service offered as a web service by Amazon. It makes it easy to set up and operate a relational database in the cloud, and it provides a very cost-effective way to use the industry’s leading RDBMS software as a managed service. Because of this web service from AWS, you do not have to buy any server or install any database software on it. You just have to subscribe to the AWS RDS web service and start using the RDBMS features after some initial configuration, involving memory and CPU capacity allocation, etc. In this tutorial we will learn about the different interfaces available in AWS RDS to use the industry’s leading RDBMS software.
As RDS is a managed service provided by AWS, we can expect that, like other AWS services, it will provide scalability, security and cost-effectiveness for the various RDBMS products it offers. The database products available through AWS RDS are listed below.
• MySQL – Supports MySQL versions 5.5 to 5.7. Minor upgrades happen automatically without needing any involvement from the user.
• MariaDB – Supports MariaDB versions 10.0 to 10.2.
• Oracle – Supports versions 11g and 12c. You can use the Oracle license provided by AWS or bring your own license; the costs for these two options are different.
• Microsoft SQL Server – Supports versions 2008 to 2017. AWS also supports the various editions, such as Enterprise, Standard, Web and Express.
• PostgreSQL – Supports versions 9 to 11. Can be configured as a Multi-AZ deployment with read replicas.
• Amazon Aurora – This is Amazon’s own RDBMS. We will be covering it in a separate tutorial.
Each of these database products is offered as Software as a Service (SaaS), providing the following features:
• Customization of CPU capacity, memory allocation and IOPS (input/output operations per second) for a database instance.
• Managed software patching, failure detection and recovery of the RDBMS software without any user intervention.
• Manual or automated backup of the database using snapshots, and restoring the database from these snapshots.
• High availability through a primary instance and a synchronous secondary instance; in case of a failure of the primary, AWS RDS automatically fails over to the secondary.
• Placement of the databases in a virtual private cloud (VPC), and use of the AWS IAM (Identity and Access Management) service to control access to the databases.
• Two purchase options: On-Demand Instances and Reserved Instances. For On-Demand Instances you pay for every hour of usage, while for Reserved Instances you make an upfront payment for a one-year to three-year time frame.

For using any AWS service you need to set up an AWS account. We assume you have set up the AWS account by following the guidelines mentioned on the Amazon Web Services home page. Below are the preliminary steps to access the RDS services from the console.
Step-1
After logging in to the Amazon console, we navigate to the Amazon RDS home page by searching for RDS in the search box under the Services tab, as shown in the diagram below.

Step-2
On clicking the link above we get the Amazon RDS home page. If this is the first time you are accessing RDS services, it will show you a screen prompting you to create a database, as shown below.
In case you have already created some RDS resources, a summary of them will be available by scrolling down on the same page. A screenshot is shown below.
Step-3
The next screen gives us an option to select the DB engine we need and that is the start of our configuration steps
for the database we need.
The RDS interfaces are a way to access the RDS service we create. After the creation and configuration of the RDS service, there is a need to access the data, upload data to the database, and run other programs that connect to the database. These interfaces serve the end users of the database, who need to access and manipulate data but are not necessarily the AWS account holder who created the database.
There are three main such interfaces.
GUI Console
This is the simplest of the interfaces, where the user can log in through a web browser and start using the DB services. The downside of such access is that it needs a human to interact with the RDS services; we cannot run a database program to do regular tasks like backup or analysing the DB, etc.

Command Line Interface


This is also called CLI access: you can execute DB commands through the AWS command line, which should be installed on the client computer you are using. Below are the steps to install the AWS CLI on your local system, which you can then use to access AWS services.
Step-1
Check for the version of python in your environment.
ubuntu@ubuntu:~$ python -V
ubuntu@ubuntu:~$ python3 -V

When we run the above program, we get the following output −


Python 2.7.12
Python 3.5.2
If the version is less than 2.6 or 3.3, then you need to upgrade the version of python in your system.
Step-2
Check for the availability of the python package named pip. It will be needed to install the AWS CLI.

pip -V

When we run the above program, we get the following output −


pip 10.0.1 from /home/ubuntu/.local/lib/python3.5/site-packages/pip (python 3.5)
Step-3
Issue the following command to install the AWS CLI.

pip install awscli --upgrade --user


aws --version

When we run the above program, we get the following output −


aws-cli/1.11.84 Python/3.6.2 Linux/4.4.0
Step-4
Next, we configure the AWS CLI with credentials. We issue this command and then input the required values one by one.

aws configure

When we run the above program, we get the following output −


AWS Access Key ID [None]: ****PLE
AWS Secret Access Key [None]: ********8
Default region name [None]: us-west-2
Default output format [None]: json
With the above configuration in place, you are now ready to use the CLI to communicate with AWS environments for setting up and using Amazon RDS. In the next chapters we will see how we can do that.
AWS API
Amazon Relational Database Service (Amazon RDS) also provides an application programming interface (API). APIs are used when information is exchanged between systems, rather than a human issuing the commands and receiving the results. For example, if you want to automate the addition of database instances to an RDS service when the number of transactions reaches a certain threshold, you can use an AWS SDK to write a program that monitors the number of database transactions and spins off an RDS instance when the required condition is met.
Below is an example of API code that creates a copy of a DB snapshot. It is a Python program that uses the AWS SDK named boto3. The client library in boto3 has a method named copy_db_snapshot, which is called by the Python program to create a copy of the DB snapshot with the required parameters, as shown.

import boto3

# Create a low-level RDS client
client = boto3.client('rds')

# Copy an existing manual DB snapshot to a new snapshot
response = client.copy_db_snapshot(
    SourceDBSnapshotIdentifier='mydbsnapshot',
    TargetDBSnapshotIdentifier='mydbsnapshot-copy',
)
print(response)

When the above program is run, we get a response that describes the various properties of the copy event. Here the term 'string' represents the various parameter values defined by the user for their environment. For example, VpcId represents the ID of the VPC in which the copy action is happening.
{
    'DBSnapshot': {
        'DBSnapshotIdentifier': 'string',
        'DBInstanceIdentifier': 'string',
        'SnapshotCreateTime': datetime(2015, 1, 1),
        'Engine': 'string',
        'AllocatedStorage': 123,
        'Status': 'string',
        'Port': 123,
        'AvailabilityZone': 'string',
        'VpcId': 'string',
        'InstanceCreateTime': datetime(2015, 1, 1),
        'MasterUsername': 'string',
        'EngineVersion': 'string',
        'LicenseModel': 'string',
        'SnapshotType': 'string',
        'Iops': 123,
        'OptionGroupName': 'string',
        'PercentProgress': 123,
        'SourceRegion': 'string',
        'SourceDBSnapshotIdentifier': 'string',
        'StorageType': 'string',
        'TdeCredentialArn': 'string',
        'Encrypted': True|False,
        'KmsKeyId': 'string',
        'DBSnapshotArn': 'string',
        'Timezone': 'string',
        'IAMDatabaseAuthenticationEnabled': True|False,
        'ProcessorFeatures': [
            {
                'Name': 'string',
                'Value': 'string'
            },
        ]
    }
}

A DB instance is an isolated database environment running in the cloud, which can contain multiple user-created databases. It can be accessed using the same client tools and applications used to access a standalone database instance. However, there is a restriction on how many DB instances, and of what type, you can have in a single customer account. The diagram below illustrates the different combinations based on the type of license you opt for.
Each DB instance is identified by a customer-supplied name called the DB instance identifier, which is unique for the customer in a given AWS region.
DB Instance Classes
Depending on the need of the processing power and memory requirement, there is a variety of instance classes offered by AWS for the RDS service.

Instance Class | Number of vCPUs | Memory Range (GB) | Bandwidth Range (Mbps)
Standard | 1 to 64 | 1.7 to 256 | 450 to 10000
Memory Optimized | 2 to 128 | 17.1 to 3904 | 500 to 14000
Burstable Performance | 1 to 8 | 1 to 32 | Low to Moderate

When you need more processing power than memory, you can choose the Standard instance class with a higher number of virtual CPUs. In the case of a very high memory requirement, you can choose the Memory Optimized class with an appropriate number of vCPUs. Choosing the correct class impacts not only the speed of processing but also the cost of using the service. The Burstable Performance class suits minimal processing requirements where the data size is not in petabytes.
DB Instance Status
The DB instance status indicates the health of the DB. Its value can be seen from the AWS console or by using the AWS CLI command describe-db-instances. The important status values of DB instances and their meanings are described below.

DB Instance Status | Meaning | Is the Instance Billed?
Creating | The instance is being created and is inaccessible while it is being created. | No
Deleting | The instance is being deleted. | No
Failed | The instance has failed and Amazon RDS can't recover it. | No
Available | The instance is healthy and available. | Yes
Backing-up | The instance is currently being backed up. | Yes
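Besides the console and the describe-db-instances CLI command, the same status can be read programmatically; here is a minimal boto3 sketch.

import boto3

client = boto3.client('rds')

# List every DB instance in the region along with its current status
for db in client.describe_db_instances()['DBInstances']:
    print(db['DBInstanceIdentifier'], '->', db['DBInstanceStatus'])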

Amazon S3,

Amazon S3 (Simple Storage Service) is a scalable, high-speed, low-cost web-based service designed for online backup and archiving of data and application programs. It allows you to upload, store, and download any type of file up to 5 TB in size. This service gives subscribers access to the same systems that Amazon uses to run its own websites. The subscriber has control over the accessibility of data, i.e. whether it is privately or publicly accessible.
How to Configure S3?
Following are the steps to configure an S3 account.
Step 1 − Open the Amazon S3 console using this link − https://console.aws.amazon.com/s3/home
Step 2 − Create a Bucket using the following steps.
 A prompt window will open. Click the Create Bucket button at the bottom of the page.

 Create a Bucket dialog box will open. Fill the required details and click the Create button.

 The bucket is created successfully in Amazon S3. The console displays the list of buckets and its
properties.
 Select the Static Website Hosting option. Click the radio button Enable website hosting and fill the
required details.

Step 3 − Add an Object to a bucket using the following steps.


 Open the Amazon S3 console using the following link − https://console.aws.amazon.com/s3/home
 Click the Upload button.

 Click the Add files option. Select those files which are to be uploaded from the system and then click the
Open button.
 Click the start upload button. The files will get uploaded into the bucket.
To open/download an object − In the Amazon S3 console, in the Objects & Folders list, right-click on the object to be opened/downloaded, then select the required option.
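The same upload and download operations can be scripted instead of performed in the console; here is a minimal boto3 sketch, where the bucket name and file paths are hypothetical placeholders.

import boto3

s3 = boto3.client('s3')

# Upload a local file as an object, then download it again
s3.upload_file('report.pdf', 'example-bucket', 'docs/report.pdf')
s3.download_file('example-bucket', 'docs/report.pdf', 'report-copy.pdf')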

How to Move S3 Objects?


Following are the steps to move S3 objects.
Step 1 − Open the Amazon S3 console.
Step 2 − Select the files & folders option in the panel. Right-click on the object that is to be moved and click the Cut option.
Step 3 − Open the location where we want this object. Right-click on the folder/bucket where the object is to be moved and click the Paste into option.

How to Delete an Object?


Step 1 − Open Amazon S3.
Step 2 − Select the files & folders option in the panel. Right-click on the object that is to be deleted. Select the
delete option.
Step 3 − A pop-up window will open for confirmation. Click Ok.
How to Empty a Bucket?
Step 1 − Open Amazon S3 console.
Step 2 − Right-click on the bucket that is to be emptied and click the empty bucket option.

Step 3 − A confirmation message will appear on the pop-up window. Read it carefully and click the Empty
bucket button to confirm.

Amazon S3 Features
• Low cost and easy to use − Using Amazon S3, the user can store a large amount of data at very low charges.
• Secure − Amazon S3 supports data transfer over SSL, and the data gets encrypted automatically once it is uploaded. The user has complete control over their data by configuring bucket policies using AWS IAM.
• Scalable − With Amazon S3, there need not be any worry about storage capacity. We can store as much data as we have and access it anytime.
• Higher performance − Amazon S3 is integrated with Amazon CloudFront, which distributes content to end users with low latency and provides high data transfer speeds without any minimum usage commitments.
• Integrated with AWS services − Amazon S3 is integrated with AWS services including Amazon CloudFront, Amazon CloudWatch, Amazon Kinesis, Amazon RDS, Amazon Route 53, Amazon VPC, AWS Lambda, Amazon EBS, Amazon DynamoDB, etc.

Amazon CloudFront,

The following diagram shows an overview of how this static website solution works:

1. The viewer requests the website at www.example.com.


2. If the requested object is cached, CloudFront returns the object from its cache to the viewer.
3. If the object is not in CloudFront’s cache, CloudFront requests the object from the origin (an S3 bucket).
4. S3 returns the object to CloudFront, which triggers the Lambda@Edge origin response event.
5. The object, including the security headers added by the Lambda@Edge function, is added to CloudFront’s
cache.
6. (Not shown) The object is returned to the viewer. Subsequent requests for the object that come to the same CloudFront edge location are served from the CloudFront cache.
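The text does not list the Lambda@Edge function from step 4, but a minimal sketch of an origin-response handler that adds security headers could look like the following in Python; the specific headers chosen here are illustrative assumptions.

def lambda_handler(event, context):
    # The origin-response event carries the response CloudFront received from S3
    response = event['Records'][0]['cf']['response']
    headers = response['headers']

    # Add security headers before the object is cached at the edge
    headers['strict-transport-security'] = [{
        'key': 'Strict-Transport-Security',
        'value': 'max-age=63072000; includeSubDomains; preload',
    }]
    headers['x-content-type-options'] = [{
        'key': 'X-Content-Type-Options',
        'value': 'nosniff',
    }]
    return response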

Deploying the solution

To deploy this secure static website solution, you can choose from either of the following options:

 Use the AWS CloudFormation console to deploy the solution with default content, then upload your website
content to Amazon S3.
 Clone the solution to your computer to add your website content. Then, deploy the solution with the AWS
Command Line Interface (AWS CLI).
Topics

 Prerequisites
 Using the AWS CloudFormation console
 Cloning the solution locally
 Finding access logs

Prerequisites

To use this solution, you must have the following prerequisites:

 A registered domain name, such as example.com, that’s pointed to an Amazon Route 53 hosted zone. The
hosted zone must be in the same AWS account where you deploy this solution. If you don’t have a registered
domain name, you can register one with Route 53. If you have a registered domain name but it’s not pointed to a
Route 53 hosted zone, configure Route 53 as your DNS service.
 AWS Identity and Access Management (IAM) permissions to launch CloudFormation templates that create
IAM roles, and permissions to create all the AWS resources in the solution.

You are responsible for the costs incurred while using this solution. For more information about costs, see the
pricing pages for each AWS service.

Using the AWS CloudFormation console

To deploy using the CloudFormation console


1. Choose Launch on AWS to open this solution in the AWS CloudFormation console. If necessary, sign in to
your AWS account.

2. The Create stack wizard opens in the AWS CloudFormation console, with prepopulated fields that specify this
solution’s CloudFormation template.

At the bottom of the page, choose Next.


3. On the Specify stack details page, enter values for the following fields:
 SubDomain – Enter the subdomain to use for your website. For example, if the subdomain is www, your
website is available at www.example.com. (Replace example.com with your domain name, as explained in the
following bullet.)
 DomainName – Enter your domain name, such as example.com. This domain must be pointed to a Route 53
hosted zone.

When finished, choose Next.


4. (Optional) On the Configure stack options page, add tags and other stack options.
When finished, choose Next.
5. On the Review page, scroll to the bottom of the page, then select the two boxes in the Capabilities section.
These capabilities allow AWS CloudFormation to create an IAM role that allows access to the stack’s resources,
and to name the resources dynamically.
6. Choose Create stack.
7. Wait for the stack to finish creating. The stack creates some nested stacks, and can take several minutes to
finish. When it’s finished, the Status changes to CREATE_COMPLETE.

When the status is CREATE_COMPLETE, go to https://www.example.com to view your website (replace


www.example.com with the subdomain and domain name that you specified in step 3). You should see the
website’s default content:

To replace the website’s default content with your own


1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Choose the bucket whose name begins with amazon-cloudfront-secure-static-site-s3bucketroot-.
Note

Make sure to choose the bucket with s3bucketroot in its name, not s3bucketlogs. The bucket
with s3bucketroot in its name contains the website content. The one with s3bucketlogs contains only log files.
3. Delete the website’s default content, then upload your own.
Note

If you viewed your website with this solution’s default content, then it’s likely that some of the default content
is cached in a CloudFront edge location. To make sure that viewers see your updated website
content, invalidate the files to remove the cached copies from CloudFront edge locations. For more information,
see Invalidating files.

Cloning the solution locally

Prerequisites

To add your website content before deploying this solution, you must package the solution’s artifacts locally,
which requires Node.js and npm. For more information, see https://www.npmjs.com/get-npm.

To add your website content and deploy the solution


1. Clone or download the solution from https://github.com/aws-samples/amazon-cloudfront-secure-static-site.
After you clone or download it, open a command prompt or terminal and navigate to the amazon-cloudfront-
secure-static-site folder.
2. Run the following command to install and package the solution’s artifacts:

make package-static

3. Copy your website’s content into the www folder, overwriting the default website content.
4. Run the following AWS CLI command to create an Amazon S3 bucket to store the solution’s artifacts.
Replace example-bucket-for-artifacts with your own bucket name.

aws s3 mb s3://example-bucket-for-artifacts --region us-east-1

5. Run the following AWS CLI command to package the solution’s artifacts as an AWS CloudFormation template. Replace example-bucket-for-artifacts with the name of the bucket that you created in the previous step.

aws cloudformation package \
    --region us-east-1 \
    --template-file templates/main.yaml \
    --s3-bucket example-bucket-for-artifacts \
    --output-template-file packaged.template

6. Run the following command to deploy the solution with AWS CloudFormation, replacing the following values:
• your-CloudFormation-stack-name – Replace with a name for the AWS CloudFormation stack.
• example.com – Replace with your domain name. This domain must be pointed to a Route 53 hosted zone in the same AWS account.
• www – Replace with the subdomain to use for your website. For example, if the subdomain is www, your website is available at www.example.com.

aws cloudformation deploy \
    --region us-east-1 \
    --stack-name your-CloudFormation-stack-name \
    --template-file packaged.template \
    --capabilities CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND \
    --parameter-overrides DomainName=example.com SubDomain=www

7. Wait for the AWS CloudFormation stack to finish creating. The stack creates some nested stacks, and can take several minutes to finish. When it’s finished, the Status changes to CREATE_COMPLETE.

When the status changes to CREATE_COMPLETE, go to https://www.example.com to view your website


(replace www.example.com with the subdomain and domain name that you specified in the previous step). You
should see your website’s content.
Amazon Glacier,

AWS offers a wide range of storage services that can be provisioned depending on your project requirements and use case. AWS storage services have different provisions for highly confidential data, frequently accessed data, and not-so-frequently accessed data. You can choose from various storage types, namely object storage, file storage, block storage services, backups, and data migration options, all of which fall under the AWS Storage Services list.
AWS Glacier: From the aforementioned list, AWS Glacier is the backup and archival storage provided by AWS. It is an extremely low-cost, long-term, durable, secure storage service that is ideal for backups and archival needs. In much of its operation, AWS Glacier is similar to S3, and it interacts directly with S3 using S3 lifecycle policies. However, the main difference between AWS S3 and Glacier is the cost structure: storing the same amount of data in AWS Glacier costs significantly less than in S3, with storage costs in Glacier as little as about $1 for one terabyte of data per month.
AWS Glacier Terminology
1. Vaults: Vaults are virtual containers that are used to store data. Vaults in AWS Glacier are similar to buckets in S3.
• Each vault has its own access policies (vault lock/access policies), giving you more control over who has what kind of access to your data.
• Vaults are region-specific.
2. Archives: Archives are the fundamental entity type stored in vaults. Archives in AWS Glacier are similar to objects in S3. You have virtually unlimited storage capacity on AWS Glacier and hence can store an unlimited number of archives in a vault.
3. Vault Access Policies: In addition to the basic IAM controls, AWS Glacier offers vault access policies that give managers and administrators more granular control over their data.
• Each vault has its own set of vault access policies.
• If either the vault access policy or the IAM controls do not pass for some user action, the user is declared unauthorized.
4. Vault Lock Policies: Vault lock policies are exactly like vault access policies, but once set, they cannot be changed.
• They are specific to each vault.
• This helps you with data compliance controls. For example, your business administrators might want some highly confidential data to be accessible only to the root user of the account, no matter what. A vault lock policy for such a use case can be written for the required vaults.
Features of AWS Glacier
• Given the extremely cheap storage provided by AWS Glacier, it doesn’t provide as many features as AWS S3, and access to data in AWS Glacier is an extremely slow process.
• Just like S3, AWS Glacier can essentially store all kinds of data types and objects.
• Durability: AWS Glacier, just like Amazon S3, claims 99.999999999% durability (11 9’s). This means the possibility of losing data stored in one of these services is about one in a billion. AWS Glacier replicates data across multiple Availability Zones to provide high durability.
• Data retrieval time: Data retrieval from AWS Glacier can take as little as 1-5 minutes (high-cost retrieval) or as long as 5-12 hours (cheap retrieval).
• AWS Glacier console: The AWS Glacier dashboard is not as intuitive and friendly as the AWS S3 console. The Glacier console can only be used to create vaults; data transfer to and from AWS Glacier must be done through some kind of code. This functionality is provided via the AWS Glacier API and the AWS SDKs.
• Region-specific costs: The cost of storing data in AWS Glacier varies from region to region.
• Security: AWS Glacier automatically encrypts your data using the AES-256 algorithm and manages the keys for you. Apart from normal IAM controls, AWS Glacier also has resource policies (vault access policies and vault lock policies) that can be used to manage access to your Glacier vaults.
• Infinite storage capacity: Virtually, AWS Glacier is supposed to have infinite storage capacity.
Data Transfer in Glacier
1. Data upload: Data can be uploaded to AWS Glacier by creating a vault from the Glacier console and then using one of the following methods:
• Write code that uses the AWS Glacier SDKs to upload data.
• Write code that uses the AWS Glacier API to upload data.
• S3 lifecycle policies: S3 lifecycle policies can be set to move S3 objects to AWS Glacier after some time. This can be used to back up old and infrequently accessed data stored in S3.
2. Data transfer between regions: AWS Glacier is a region-specific service. Data in one region can be transferred to another from the AWS console. The cost of such a data transfer is $0.02 per GB.
3. Data retrieval: As mentioned before, AWS Glacier is a backup and data archive service; given its low cost of storage, AWS Glacier data is not readily available for consumption. Data retrieval from Glacier can only be done via some sort of code, using the AWS Glacier SDK or the Glacier API, and it comes in three types (a retrieval sketch follows this list):
• Expedited:
  • This mode of data retrieval is suggested only for urgent requirements of data.
  • A single expedited retrieval request can retrieve at most 250 MB of data.
  • The data is provided to you within 1-5 minutes.
  • The cost of expedited retrieval is $0.03 per GB and $0.01 per request.
• Standard:
  • This data retrieval mode can be used for any size of data, full or partial archive.
  • The data is provided to you within 3-5 hours.
  • The cost of standard retrieval is $0.01 per GB and $0.05 per 1,000 requests.
• Bulk:
  • This data retrieval mode is suggested for mass retrieval of data (petabytes of data).
  • It is the cheapest data retrieval option offered by AWS Glacier.
  • The data is provided to you within 5-12 hours.
  • The cost of bulk retrieval is $0.0025 per GB and $0.025 per 1,000 requests.
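Since retrieval must happen in code, here is a minimal boto3 sketch that starts a Standard-tier archive retrieval job; the vault name and archive ID are placeholders, and the job output is fetched later with get_job_output once the job completes.

import boto3

glacier = boto3.client('glacier')

# Start an asynchronous archive-retrieval job (Standard tier: 3-5 hours)
job = glacier.initiate_job(
    vaultName='examplevault',
    jobParameters={
        'Type': 'archive-retrieval',
        'ArchiveId': 'EXAMPLE-ARCHIVE-ID',  # placeholder archive ID
        'Tier': 'Standard',                 # or 'Expedited' / 'Bulk'
    },
)
print(job['jobId'])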

Creating a vault:

1. Log in to your Management Console and go to the S3 Glacier console through the following link: https://console.aws.amazon.com/glacier/.

2. Choose a specific Region from the Region selector at the top. For this tutorial, we are going to use the US West (Oregon) Region.

3. If this is your first time using S3 Glacier, click the Get started button. (Otherwise, you will see a Create Vault button instead.)
Amazon Glacier – Getting Started Page

4. Type in “examplevault” for the vault name in the Vault Name text box, then choose Next Step.

5. Choose the Do not enable notifications option. In this tutorial, there is no need to configure notifications for the vault that you are creating.

If you need notifications sent to you or your application when specific S3 Glacier jobs finish, choose either Enable notifications and create a new SNS topic or Enable notifications and use an existing SNS topic to set up Amazon SNS notifications. The coming steps let you upload an archive and then download it through the SDK’s high-level API; working with the high-level API does not require vault notifications to be configured for retrieving data.

6. If the entered vault name and Region are correct, select the Submit button.

7. The newly created vault will be shown in the list on the S3 Glacier Vaults page.
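The console steps above can also be performed in code; a minimal boto3 sketch that creates the vault and uploads one archive is shown below (the archive body is a trivial placeholder).

import boto3

glacier = boto3.client('glacier')

# Create the vault (succeeds even if it already exists)
glacier.create_vault(vaultName='examplevault')

# Upload a small archive; Glacier returns the archive ID needed for retrieval
archive = glacier.upload_archive(
    vaultName='examplevault',
    archiveDescription='first test archive',
    body=b'hello glacier',
)
print(archive['archiveId'])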

Amazon SNS.

Amazon Web Services Simple Notification Service (AWS SNS) is a web service that automates the process of sending notifications to the subscribers attached to it. SNS provides this service for both application-to-person and application-to-application messaging. It uses the publisher/subscriber paradigm for the push delivery of messages, and data loss is prevented by storing the data across multiple availability zones. It is cost-efficient and provides low-cost infrastructure, especially to mobile users. It sends notifications through SMS or email, to an Amazon Simple Queue Service (SQS) queue, to AWS Lambda functions, or to an HTTP endpoint. For example, when the CPU utilization of an instance goes above 80%, an AWS CloudWatch alarm is triggered; this CloudWatch alarm activates the SNS topic, notifying the subscribers about the high CPU utilization of the instance. An SNS topic has a unique name; it acts as a logical access point and the communication channel between publishers and subscribers.
Benefits of using SNS
• SNS increases durability.
• SNS increases security.
• SNS ensures accuracy.
• SNS reduces and simplifies cost.
• SNS supports SMS in over 200 countries.
Clients of SNS
• Publishers: They communicate with subscribers asynchronously by producing and sending a message to a topic (a logical access point and communication channel). They do not include a specific destination (e.g., an email ID) in each message; instead, they send the message to the topic, and they can only send messages to topics they have permission to publish to.
• Subscribers: Subscribers such as web servers, email addresses, Amazon SQS queues, and AWS Lambda functions receive the notification over one of the supported protocols (Amazon SQS, HTTP/S, email, SMS, Lambda) when they are subscribed to the topic. Amazon SNS matches the topic to the list of subscribers who have subscribed to it and delivers the message to each of those subscribers.
Steps to create Simple Notification Service in AWS
Step 1: Go to the Amazon SNS dashboard. Click on the Create Topic button.

Step 2: Type in the name of the topic and a description (optional).

Step 3: Type in the key-value pair for a tag, which is completely optional. Click on Create topic.

Step 4: Congratulations!! The topic is created successfully.


Step 5: Go back to the SNS dashboard. The created topic is visible in the dashboard now. Click on the link to
the topic.

Step 6: You will be redirected to this page. Under the subscription options, click on Create subscription.

Step 7: Select Email as the Protocol of the topic and your email address as the Endpoint. Click on Create
subscription.
Step 8: Now go to the mailbox of the mentioned email id and click on Confirm subscription.

Step 9: You will be directed to this page. Your subscription is confirmed.
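The console steps above can also be scripted end to end. Below is a minimal boto3 sketch; the topic name and email address are hypothetical placeholders, and the email subscription still has to be confirmed from the mailbox exactly as in Step 8.

```python
import boto3

sns = boto3.client("sns", region_name="us-west-2")

# Steps 1-4: create the topic.
topic = sns.create_topic(Name="demo-topic")
topic_arn = topic["TopicArn"]

# Steps 6-7: subscribe an email endpoint (hypothetical address).
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="user@example.com")

# After the subscription is confirmed (Step 8), publish a test message.
sns.publish(
    TopicArn=topic_arn,
    Subject="Test notification",
    Message="Hello from Amazon SNS!",
)
```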

Service Management in Cloud Computing: Service Level Agreements (SLAs)

A Service Level Agreement (SLA) is the bond for performance negotiated between the cloud services
provider and the client. Earlier in cloud computing, all Service Level Agreements were negotiated between the
client and the service provider. Nowadays, with the rise of large utility-like cloud computing providers,
most Service Level Agreements are standardized until a client becomes a large consumer of cloud services.
Service level agreements are also defined at different levels which are mentioned below:
 Customer-based SLA
 Service-based SLA
 Multilevel SLA
A few Service Level Agreements are enforceable as contracts, but most are agreements more along the lines
of an Operating Level Agreement (OLA) that may not have the force of law. It is wise to have an attorney
review the documents before making a major commitment to a cloud service
provider. Service Level Agreements usually specify the following parameters:
1. Availability of the service (uptime)
2. Latency or response time
3. Reliability of service components
4. Accountability of each party
5. Warranties
In any case, if a cloud service provider fails to meet the stated minimum targets, the provider has to
pay a penalty to the cloud service consumer as per the agreement. So, Service Level Agreements are like
insurance policies in which the corporation has to pay as per the agreement if any casualty occurs. Microsoft
publishes the Service Level Agreements linked with the Windows Azure Platform components, which is
demonstrative of industry practice for cloud service vendors. Each individual component has its own Service
Level Agreement. Two major Service Level Agreements (SLAs) are described below:
1. Windows Azure SLA – Windows Azure has different SLAs for compute and storage. For compute, there
is a guarantee that when a client deploys two or more role instances in separate fault and upgrade
domains, the client’s internet-facing roles will have external connectivity at least 99.95% of the time.
Moreover, all of the client’s role instances are monitored, and there is a 99.9% guarantee of detecting
when a role instance’s process is not running and initiating corrective action.
2. SQL Azure SLA – SQL Azure clients will have connectivity between the database and the internet
gateway of SQL Azure. SQL Azure will maintain a “Monthly Availability” of 99.9% within a month. The
Monthly Availability Proportion for a particular tenant database is the ratio of the time the database was
available to customers to the total time in a month. Time is measured in intervals of minutes over a
30-day monthly cycle. Availability is always calculated over a complete month. A portion of time is
marked as unavailable if the customer’s attempts to connect to a database are denied by the SQL Azure
gateway.
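To make the 99.9% target concrete, the short sketch below computes the Monthly Availability Proportion for a 30-day cycle from a hypothetical number of unavailable minutes:

```python
# Monthly Availability Proportion = available time / total time in the month.
TOTAL_MINUTES = 30 * 24 * 60  # 43,200 minutes in a 30-day cycle

def monthly_availability(unavailable_minutes: int) -> float:
    return (TOTAL_MINUTES - unavailable_minutes) / TOTAL_MINUTES

# Hypothetical example: 40 minutes of denied connections during the month.
availability = monthly_availability(40)
print(f"{availability:.4%}")                       # 99.9074%
print("Meets 99.9% SLA:", availability >= 0.999)   # True
```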
Service Level Agreements are based on the usage model. Frequently, cloud providers charge their pay-per-
use resources at a premium and apply standard Service Level Agreements only for that purpose. Clients can
also subscribe at different levels that guarantee access to a particular amount of purchased resources. The
Service Level Agreements (SLAs) attached to a subscription often offer different terms and conditions.
If a client requires access to a particular level of resources, the client needs to subscribe to a service;
a usage model may not deliver that level of access under peak load conditions.

SLA Lifecycle
Steps in SLA Lifecycle

1. Discover service provider: This step involves identifying a service provider that can meet the needs of
the organization and has the capability to provide the required service. This can be done through
research, requesting proposals, or reaching out to vendors.
2. Define SLA: In this step, the service level requirements are defined and agreed upon between the service
provider and the organization. This includes defining the service level objectives, metrics, and targets
that will be used to measure the performance of the service provider.
3. Establish Agreement: After the service level requirements have been defined, an agreement is
established between the organization and the service provider outlining the terms and conditions of the
service. This agreement should include the SLA, any penalties for non-compliance, and the process for
monitoring and reporting on the service level objectives.
4. Monitor SLA violation: This step involves regularly monitoring the service level objectives to ensure
that the service provider is meeting their commitments. If any violations are identified, they should be
reported and addressed in a timely manner.
5. Terminate SLA: If the service provider is unable to meet the service level objectives, or if the
organization is not satisfied with the service provided, the SLA can be terminated. This can be done
through mutual agreement or through the enforcement of penalties for non-compliance.
6. Enforce penalties for SLA Violation: If the service provider is found to be in violation of the SLA,
penalties can be imposed as outlined in the agreement. These penalties can include financial penalties,
reduced service level objectives, or termination of the agreement.
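As a toy illustration of steps 4 and 6 (monitoring violations and enforcing penalties), the sketch below checks a measured availability figure against tiered service credits. The tiers are invented for illustration and do not come from any real provider's SLA.

```python
# Hypothetical service-credit tiers: (minimum availability, credit % of bill).
CREDIT_TIERS = [(0.999, 0), (0.99, 10), (0.95, 25), (0.0, 100)]

def service_credit(measured_availability: float) -> int:
    """Return the percentage of the monthly bill credited for an SLA violation."""
    for threshold, credit in CREDIT_TIERS:
        if measured_availability >= threshold:
            return credit
    return 100  # unreachable given the (0.0, 100) tier, kept as a guard

print(service_credit(0.9995))  # 0  -> target met, no penalty
print(service_credit(0.985))   # 25 -> violation, 25% service credit
```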

Advantages of SLA

1. Improved communication: SLAs establish a better framework for communication between the service
provider and the client by explicitly outlining the degree of service that a customer may anticipate. This
helps ensure that both parties share the same understanding of service expectations.
2. Increased accountability: SLAs give customers a way to hold service providers accountable if their
services fall short of the agreed-upon standard. They also hold service providers responsible for
delivering a specific level of service.
3. Better alignment with business goals: SLAs make sure that the service being given is in line with the
goals of the client by laying down the performance goals and service level requirements that the service
provider must satisfy.
4. Reduced downtime: SLAs can help to limit the effects of service disruptions by creating explicit
protocols for issue management and resolution.
5. Better cost management: By specifying the level of service that the customer can anticipate and
providing a way to track and evaluate performance, SLAs can help to limit costs. This makes it easier to
ensure the customer is getting the best value for their money.

Disadvantages of SLA

1. Complexity: SLAs can be complex to create and maintain, and may require significant resources to
implement and enforce.
2. Rigidity: SLAs can be rigid and may not be flexible enough to accommodate changing business needs or
service requirements.
3. Limited service options: SLAs can limit the service options available to the customer, as the service
provider may only be able to offer the specific services outlined in the agreement.
4. Misaligned incentives: SLAs may misalign incentives between the service provider and the customer, as
the provider may focus on meeting the agreed-upon service levels rather than on providing the best
service possible.
5. Limited liability: SLAs are often not legally binding contracts and frequently limit the liability of the
service provider in case of service failure.

Economics of Cloud Computing: SWOT Analysis and Value Proposition


Cloud management refers to the software and technologies made for monitoring and operating applications,
services, and data in the cloud. These tools help to confirm that cloud computing-based resources are working
properly. It is an important field, and it is not easy: there can be many problems in cloud management
processes, as well as inefficiencies in the activities of cloud computing companies.

In order to address all these concerns and reach a conclusion, companies use SWOT analysis. SWOT is a
very common tool for company managers. It helps to identify and overcome weaknesses and threats. It also
helps to recognize and utilize the strengths and opportunities of the firm. SWOT stands for Strengths,
Weaknesses, Opportunities, and Threats.

The companies have control over strengths and weaknesses. This is why the elements are known as internal
factors. Opportunities and threats are external factors as firms have little or no control over them.

A regular SWOT analysis would look like this:

Strengths

 Experienced business units
 Great distribution and sales networks
 Low barriers to market entry
 High profitability and revenue

Weaknesses

 Future profitability
 Additional costs
 Tax structure

Opportunities

 Scope in global markets
 Emergence of new markets

Threats

 Rising costs
 Increase in rates of interest
 Growing competition and less profitability

While there are both internal and external factors affecting cloud management, in this article I will focus only
on the external factors. I will explain in detail how opportunities and threats can impact cloud management
activities.

Before we venture into the details of this topic, you should know that leading companies in the field include
RightScale, Data Mines, and Scalr. Other direct or indirect competitors include enStratus, ScaleXtreme, Bitnami,
and ComputeNext. However, Amazon’s AWS Management is something that all the other firms fear.
RightScale claims to be the industry leader in Cloud Portfolio Management. It allows enterprises to accelerate
the delivery of applications. Leading enterprises like Intercontinental Hotels Group, Pearson International, and PBS
have launched millions of servers through RightScale since 2007.

Scalr was established in 2007, at the very beginning of the cloud computing revolution, and it has been growing
ever since. Scalr is a mature and profitable business that is here to stay.

Leading figures in the field have stated that Amazon is the biggest threat facing other companies. It is the
largest cloud player so far. AWS is solving certain major problems that many cloud management platforms
such as Scalr and RightScale set out to solve, including the following:

 Increasing agility (CloudFormation)
 Decreasing maintenance (Amazon Linux, RDS)
 Adjusting capacity (Auto Scaling groups)

If cloud IaaS providers find a mutual platform to federate across their infrastructures without involvement from
firms like RightScale and Scalr, it will endanger the direct business model of the cloud management companies,
as it will take over a part of their value proposition. It will also enable single cloud providers to compete with
AWS by broadening their product range horizontally and vertically. As AWS is after a huge part of RightScale’s
revenue, a true alternative to it could pose a threat to the business model.

When cloud management vendors are asked about opportunities, multi-cloud support is often mentioned.
Many state that the promise of being able to shift easily from one vendor to another is an opportunity. This
is quite similar to buying milk.

The main flaw in this argument is that it requires all the vendors to provide the same or similar functionality. A
buyer would never switch from buying fresh milk to spoiled milk. Similarly, consumers will not shift from
Amazon to a lesser cloud. Therefore, for this to become a really great opportunity, cloud management companies
need to present themselves as viable alternatives.

Another opportunity for RightScale and other companies would be to lower the entry barriers, which would
open the market to a wider range of IaaS/cloud hosters. This is not in their short-term interest at the moment, but
if they do not do it soon, some other company will, and that could become a long-term problem for the business
model.

While these are the most commonly discussed topics, let’s look at some other threats and opportunities that
cloud management companies such as Data Mines face.

Threats

 Cloud OEMs offer auto-scaling tools
 Cloud OEMs provide free tools for conversion or migration from competing clouds
 Cloud OEMs provide snapshot and backup tools
 Cloud providers are offering more advanced dashboards
 New companies are offering tools like RightScale, but at much lower prices
 More advances are being made in hypervisors, which will make conversion or migration from one
cloud to another seamless

Opportunities

 Unified dashboard experience
 Least-cost-routing equivalent: a tool that can automatically redeploy your cloud from one provider to
another, so you can take advantage of lower costs
 Pre-defined management: pre-bundled scripts and tasks
 Third-party integrations such as Pingdom, backup companies, domain registrars, and aicache
 Auto-scaling for beginners
 Mobile and iPad apps to control the cloud
 Administration scripts: for example, click one button and a certain directory is zipped, or click another
and log files are rotated
 Importing from other cloud management tools
 Better traffic insights

Always remember that identifying threats and opportunities is only the beginning. Your task is to eliminate
the weaknesses and take full advantage of the opportunities; often, the aim is to turn weaknesses into
strengths. Cloud management companies should conduct a SWOT analysis every few months.

General Cloud Computing Risks (Performance, Network Dependence, Reliability, Outages, Safety-Critical
Processing, Compliance, and Information Security)

1. Lack of Visibility
Shifting operations, assets, and workloads to the cloud means transferring the responsibility of managing certain
systems and policies to a contracted cloud service provider (CSP). As a result, organizations lose visibility into
some network operations, services, and resource usage and cost.

Organizations must obtain visibility into their cloud services to ensure security, privacy, and adherence to
organizational and regulatory requirements. This typically involves using additional tools for cloud
security configuration monitoring, logging, and network-based monitoring. Organizations should set up
protocols up front with the assistance of the CSP to alleviate these concerns and ensure transparency.

2. Cloud Misconfigurations
Threat actors can exploit system and network misconfigurations as entry points that potentially allow them to
move laterally across the network and access confidential resources. Misconfigurations can occur due to
overlooked system areas or improper security settings.
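As one concrete illustration, a common misconfiguration is an S3 bucket left open to the public. The boto3 sketch below checks a bucket's public-access-block settings; the bucket name is a hypothetical placeholder.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_blocks_public_access(bucket: str) -> bool:
    """True only if all four public-access-block settings are enabled."""
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)
        return all(cfg["PublicAccessBlockConfiguration"].values())
    except ClientError:
        # No public-access-block configuration at all is itself a red flag.
        return False

print(bucket_blocks_public_access("my-example-bucket"))
```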

3. Data Loss
Organizations leverage backups as a defensive tactic against data loss. Cloud storage is highly resilient because
vendors set up redundant servers and storage across several geographic locations. However, cloud storage and
Software as a Service (SaaS) providers are increasingly targeted by ransomware attacks that compromise
customer data.

4. Accidental Data Exposure


Organizations must protect data privacy and confidentiality to ensure compliance with various regulations,
including GDPR, HIPAA, and PCI DSS. Data protection regulations impose strict penalties for failing to secure
data. Organizations also need to protect their own data to maintain a competitive advantage.

Placing data in the cloud offers great benefits but creates major security challenges for organizations.
Unfortunately, many organizations migrate to the cloud without knowing how to use it securely, putting
sensitive data at risk of exposure.

5. Identity Theft
Phishing attacks often use cloud environments and applications to launch attacks. The widespread use of cloud-
based email, like G-Suite and Microsoft 365, and document-sharing services, like Google Drive and Dropbox,
has made email attachments and links standard practice.

Many employees are used to emails asking them to confirm account credentials before accessing a particular
website or document. This enables cybercriminals to trick employees into divulging cloud credentials, making
accidental exposure of credentials a major concern for many organizations.
6. Insecure Integration and APIs
APIs enable businesses and individuals to sync data, customize the cloud service experience, and automate data
workflows between cloud systems. However, APIs that fail to encrypt data, enforce proper access control, or
sanitize inputs appropriately can cause cross-system vulnerabilities. Organizations can minimize this risk by
using industry-standard APIs that utilize proper authentication and authorization protocols.
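As a toy illustration of those two mitigations (authentication and input sanitization), here is a minimal Python sketch of an API handler; the bearer token and handler shape are purely hypothetical:

```python
import hmac
import re

# Hypothetical shared secret; real systems should use a proper identity provider.
API_TOKEN = "s3cr3t-example-token"
VALID_NAME = re.compile(r"^[A-Za-z0-9_-]{1,64}$")  # strict whitelist

def handle_request(auth_header: str, resource_name: str) -> str:
    # Authentication: constant-time comparison avoids timing side channels.
    if not hmac.compare_digest(auth_header or "", f"Bearer {API_TOKEN}"):
        return "401 Unauthorized"
    # Input sanitization: reject anything outside the whitelist pattern.
    if not VALID_NAME.match(resource_name):
        return "400 Bad Request"
    return f"200 OK: fetched {resource_name}"

print(handle_request("Bearer s3cr3t-example-token", "video-123"))  # 200 OK
print(handle_request("Bearer wrong-token", "video-123"))           # 401
print(handle_request("Bearer s3cr3t-example-token", "../etc"))     # 400
```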

7. Data Sovereignty
Cloud providers typically utilize several geographically distributed data centers to improve the performance and
availability of cloud-based resources. It also helps CSPs ensure they can maintain service level agreements
(SLAs) during business-disrupting events like natural disasters or power outages.

Organizations that store data in the cloud do not know where this data is stored within the CSP’s array of data
centers. Since data protection regulations like the General Data Protection Regulation (GDPR) limit where EU
citizens’ data can be sent, organizations using a cloud platform with data centers outside the approved areas risk
regulatory non-compliance. Organizations should also consider jurisdictions when governing data, since each
jurisdiction has different laws regarding data.

Design and Deploy an Online Video Subscription Application on the Cloud.

Create and deploy

1. Log in to the Azure portal.


2. Click Create a resource > Compute, then scroll down to Cloud Service and click it.
3. In the new Cloud Service pane, enter a value for the DNS name.
4. Create a new Resource Group or select an existing one.
5. Select a Location.
6. Click Package. This opens the Upload a package pane. Fill in the required fields. If any of your roles
contain a single instance, ensure Deploy even if one or more roles contain a single instance is selected.
7. Make sure that Start deployment is selected.
8. Click OK, which will close the Upload a package pane.
9. If you do not have any certificates to add, click Create.
Upload a certificate

If your deployment package was configured to use certificates, you can upload the certificate now.

1. Select Certificates, and on the Add certificates pane, select the TLS/SSL certificate .pfx file, and then
provide the Password for the certificate.
2. Click Attach certificate, and then click OK on the Add certificates pane.
3. Click Create on the Cloud Service pane. When the deployment has reached the Ready status, you can
proceed to the next steps.
Verify your deployment completed successfully

1. Click the cloud service instance.

The status should show that the service is Running.

2. Under Essentials, click the Site URL to open your cloud service in a web browser.
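For completeness, the same classic Cloud Service deployment can be scripted. The sketch below uses the legacy azure-servicemanagement-legacy Python package (classic cloud services predate the current Azure SDKs); the subscription ID, certificate file, service name, and package URL are all hypothetical placeholders, and the exact call signatures should be verified against the installed SDK version.

```python
# pip install azure-servicemanagement-legacy  (legacy SDK for classic services)
from azure.servicemanagement import ServiceManagementService

# Hypothetical credentials: a subscription ID and a management certificate.
sms = ServiceManagementService(
    subscription_id="00000000-0000-0000-0000-000000000000",
    cert_file="management_cert.pem",
)

# Create the cloud service (the DNS name from step 3 of the portal flow).
sms.create_hosted_service(
    service_name="myvideoservice",
    label="online video subscription app",
    location="West US",
)

# Deploy the uploaded package (.cspkg) with its configuration (.cscfg),
# mirroring steps 6-8 of the portal flow; start_deployment=True corresponds
# to the "Start deployment" checkbox.
with open("ServiceConfiguration.cscfg") as f:
    configuration = f.read()

sms.create_deployment(
    service_name="myvideoservice",
    deployment_slot="production",
    name="v1",
    package_url="https://mystorage.blob.core.windows.net/packages/app.cspkg",
    label="video app v1",
    configuration=configuration,
    start_deployment=True,
)
```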
