Unit V_TCS 351
AWS Services: Amazon Lambda, Amazon Relational Database Service (Amazon RDS), Amazon S3, Amazon
CloudFront, Amazon Glacier, and Amazon SNS.
Service Management in Cloud Computing: Service Level Agreements (SLAs), Billing & Accounting.
Economics of Cloud Computing: SWOT Analysis and Value Proposition, General Cloud Computing Risks
(Performance, Network Dependence, Reliability, Outages, Safety-Critical Processing, Compliance, and
Information Security).
Design and Deploy an Online Video Subscription Application on the Cloud.
AWS provides four services that form the foundation of any cloud deployment: computing, storage, networking,
and database. Each service is designed to offer high availability and scalability so that you can build a robust
and reliable application in the cloud. Let’s take a closer look at each service and see how it can benefit your
business.
Compute: Compute resources are the brains and processing power needed by applications and systems to carry
out computational tasks. So Compute is essentially the same as common server components, such as CPU and
RAM, which many of you are already familiar with. Physical servers within a data centre are considered
compute resources, as they may contain multiple CPUs and large amounts of RAM to process instructions given by the
operating system and applications. Below are the computing services provided by AWS:
AWS EC2: Amazon Elastic Compute Cloud (EC2) is a web service that provides secure, resizable computing
capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EC2’s
simple web interface allows you to obtain and configure capacity with minimal friction. It provides you with
complete control of your computing resources and lets you run on Amazon’s proven computing environment.
You can use Amazon EC2 to launch as many applications as you want, whether they’re running on Linux or
Windows, and manage all of them through a simple web service API. And since it’s scalable and pay-as-you-
go, Amazon EC2 reduces up-front investment costs while providing flexibility and control over resource
allocation.
With Amazon EC2, there are no upfront investments required – instead, you simply pay per hour of usage. So if
your application needs more computing power, you can increase capacity right away without having to wait for
an IT department to order and install new hardware.
AWS Lambda: Amazon Web Services Lambda is a serverless computing platform that runs your code in
response to events and automatically manages the underlying compute resources for you. You can use Lambda
to build applications that respond quickly to new information. Plus, Lambda is scalable so you can process
events as they happen, without having to provision or manage any servers. For example, an e-commerce
company might use Lambda functions to analyze incoming customer data for marketing purposes. Or an
enterprise IT organization might use it to keep their systems up-to-date with security patches and fixes.
With Lambda, you don’t have to worry about capacity planning because it scales seamlessly along with your
needs. It also lets developers spend more time on development and less time on managing infrastructure –
perfect for fast-moving startups!
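To make this concrete, here is a minimal sketch of a Python Lambda handler; the function name, event field, and trigger are hypothetical illustrations, but the handler signature is the standard one Lambda expects:

import json

def lambda_handler(event, context):
    # 'event' carries the triggering payload (e.g. an S3 upload or API call);
    # 'context' exposes runtime metadata such as remaining execution time.
    name = event.get('customer_name', 'unknown')  # hypothetical event field
    print(f'Processing event for {name}')  # written to CloudWatch Logs
    return {
        'statusCode': 200,
        'body': json.dumps({'message': f'Hello, {name}'}),
    }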
AWS Elastic Beanstalk: Amazon Web Services Elastic Beanstalk is a platform as a service (PaaS) that
streamlines the process of deploying and scaling web applications and services developed with popular
programming languages and frameworks. Elastic Beanstalk provides pre-configured platforms for programming
languages like Java, .NET, PHP, Node.js, Python, and Ruby. You can simply upload your code and Elastic
Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto-scaling
to application health monitoring. Elastic Beanstalk can also deploy an application with virtually no manual
configuration, so you don’t have to worry about details such as what type of environment it needs or where it should be deployed.
It also includes many features to allow developers to focus on their code rather than administrative tasks,
including integrations with other AWS products. These features include Auto Scaling and Load Balancing
which will scale up servers when traffic increases and automatically distribute incoming requests across all
servers.
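As a rough sketch of driving Elastic Beanstalk programmatically, the following boto3 snippet registers an application and lists a few available platforms; the application name is a placeholder, not part of any real deployment:

import boto3

eb = boto3.client('elasticbeanstalk')

# Register a new application (the name is a placeholder)
eb.create_application(ApplicationName='demo-web-app',
                      Description='Sample Elastic Beanstalk application')

# List a few of the pre-configured platforms Elastic Beanstalk offers
stacks = eb.list_available_solution_stacks()
for stack in stacks['SolutionStackNames'][:5]:
    print(stack)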
Network: Networking in cloud computing is the process of connecting computers and devices together so they
can communicate with each other. The four main types of networking are: point-to-point, client-server, peer-to-
peer, and mesh. Point-to-point networking is the most basic type of networking, and it involves two devices that
are connected directly to each other. Client-server networking is a bit more complex, and it involves a server
that provides services to clients. Peer-to-peer networking occurs when two or more devices share data with one
another without using an intermediary device like a server. Mesh networks are built for redundancy and consist
of multiple paths for messages to travel between nodes.
Amazon Route 53: Amazon Route 53 is a scalable and highly available Domain Name System (DNS) service.
It provides secure and reliable routing to your resources, such as websites and web applications, with low
latency. Amazon Route 53 is fully compliant with IPv6 as well. You can use Amazon Route 53 to perform three
main functions: Domain registration, DNS routing, and health checking.
One thing that sets Amazon Route 53 apart from other DNS services is the inclusion of various geo and routing
features. One example is Latency Based Routing, which routes traffic based on its proximity to the desired
destination. Another is IP Prefix Hijacking Protection, which guards against accidental changes to prefixes at the
registrar level by monitoring and blocking requests for domain registrations that conflict with your prefixes in
Route 53.
You could also set up dynamic record sets, which automatically create records on first query so there is no need
to maintain static records, or reverse DNS lookups, which map an IP address back to a domain name for security
purposes. Other products that work well with Amazon Route 53 include AppStream 2.0, AppSync, and
CloudFront CDN.
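As a sketch of the DNS-routing function, the following boto3 snippet upserts an A record in a hosted zone; the zone ID, record name, and IP address are placeholders:

import boto3

route53 = boto3.client('route53')

# Create or update an A record in an existing hosted zone
route53.change_resource_record_sets(
    HostedZoneId='Z0000000EXAMPLE',  # placeholder zone ID
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'www.example.com',
                'Type': 'A',
                'TTL': 300,
                'ResourceRecords': [{'Value': '203.0.113.10'}],
            },
        }]
    },
)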
AWS VPC: Amazon Web Services (AWS) is a cloud platform that provides customers with a wide array of
infrastructure services, such as computing power, storage options, networking, and databases. One of these
services is called Amazon Virtual Private Cloud (VPC), which is a secure and scalable cloud computing service
that isolates your resources from those of other AWS customers. AWS VPC lets you create an isolated virtual
network environment in the AWS cloud. With Amazon VPC, AWS resources can be launched into a virtual
network. The usual items you define for a network, such as IP address ranges, subnets, route tables, gateways,
and security settings, remain under your control. It integrates with many AWS services and is a
foundational service of AWS. You can use both IPv4 and IPv6 in your VPC for secure and easy access to
resources and applications. VPC provides you with complete control over your virtual networking environment,
including a selection of your own IP address range, creation of subnets, and configuration of route tables and
network gateways. In addition, you can launch AWS resources into a VPC to provide isolation from the rest of
the AWS cloud.
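A minimal sketch of creating a VPC and carving out a subnet with boto3; the CIDR blocks are arbitrary example ranges:

import boto3

ec2 = boto3.client('ec2')

# Create a VPC with an example IPv4 address range
vpc = ec2.create_vpc(CidrBlock='10.0.0.0/16')
vpc_id = vpc['Vpc']['VpcId']

# Carve a subnet out of the VPC's address range
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock='10.0.1.0/24')
print(vpc_id, subnet['Subnet']['SubnetId'])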
Cloud Storage: Cloud storage is a service that allows users to store and access data over the Internet. It is a
popular choice for businesses because it is scalable, reliable, and secure. In a cloud storage model, digital data is
stored in logical pools spread across multiple servers, which may be located throughout the country or even
abroad, depending on many factors. Cloud providers like AWS, Azure, Google Cloud, and IBM Cloud own and
maintain these servers. In addition to ensuring data is available and accessible at all times, the cloud storage
services also maintain the physical environment and safeguard the data. A provider of storage capacity sells or
leases storage space to individuals and companies in order to store information about their users, entities, and
applications.
The storage services provided by AWS Cloud are Amazon S3 and Amazon Glacier.
Amazon S3: Amazon S3 is an object storage service that offers industry-leading scalability, data availability,
security, and performance. This means that you can store and retrieve any amount of data, at any time, from
anywhere on the web. Amazon S3 is designed to make web-scale computing easier for developers.
Its simple web services interface gives customers complete control over their data through robust access
controls, and data is stored in multiple redundant facilities with no single point of failure. You only pay for what you use.
Amazon S3 makes it easy to serve your content quickly and reliably, even when some parts of your
infrastructure don’t function properly or become unavailable.
Amazon Glacier: Amazon Glacier is a low-cost storage service that provides secure and durable storage for
data backup and archival. Amazon Glacier is easy to use, with a simple web interface that you can use to store
and retrieve any amount of data. Amazon Glacier is a great choice for storing data that you don’t need to access
frequently, but want to keep in a safe place.
When you upload your data to Amazon Glacier, it is copied onto multiple devices at different physical locations,
which means your data will stay safe even if there’s an unexpected event like a fire or flood at one of the
facilities. It also helps ensure that your data stays available during regional outages since there’s no single point
of failure when accessing it from another location.
With Amazon Glacier, customers pay only for what they use. Storage prices start as low as $0.01 per gigabyte
per month; retrieval pricing starts at $0.001 per gigabyte. That’s considerably less than most tape libraries, so
storing your archived data in Glacier could save you money over time.
Database: Cloud databases are a new breed of database that offers all the benefits of the cloud: elasticity,
scalability, and cost-effectiveness. Just like traditional databases, they can be used to store data, but they also
come with a few key differences. For one, cloud databases are designed to be scalable and highly available, so
they can handle large workloads without going down. They’re also automatically replicated across regions for
high availability and seamless disaster recovery. Unlike most traditional databases, which require you to set up
hardware yourself in order to grow your compute power, these services are preconfigured for auto-scaling as
needed so you don’t have to worry about capacity planning. Plus, there’s no upfront cost for these powerful
services – pay only for what you use when you need it.
Amazon RDS (Relational Database Service): Amazon RDS is a managed relational database service that
makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-effective and
resizable capacity while automating time-consuming administration tasks such as hardware provisioning,
database setup, patching, and backups. Amazon RDS is available on several database instance types – optimized
for memory, performance or I/O – and provides you with six familiar database engines to choose from,
including Amazon Aurora, MySQL, MariaDB, Oracle Database, Microsoft SQL Server, and PostgreSQL. With
these features, Amazon RDS gives you maximum flexibility and control over your data.
You can also use various deployment models for Amazon RDS, including managed services (where AWS
handles everything) and shared services (where you maintain ownership).
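As a rough illustration, launching a managed MySQL instance with boto3 might look like the following; the identifier, credentials, and sizes are placeholders, not recommended values:

import boto3

rds = boto3.client('rds')

# Provision a small managed MySQL instance (all values are examples)
rds.create_db_instance(
    DBInstanceIdentifier='demo-db',
    DBInstanceClass='db.t3.micro',
    Engine='mysql',
    MasterUsername='admin',
    MasterUserPassword='change-me-please',  # store real secrets securely
    AllocatedStorage=20,  # in GiB
)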
Cloud security at AWS is the highest priority. As organizations embrace the scalability and flexibility of the
cloud, AWS is helping them evolve security, identity, and compliance into key business enablers. AWS builds
security into the core of its cloud infrastructure and offers foundational services to help organizations meet
their unique security requirements in the cloud.
As an AWS customer, you will benefit from a data center and network architecture built to meet the
requirements of the most security-sensitive organizations. Security in the cloud is much like security in your on-
premises data centers—only without the costs of maintaining facilities and hardware. In the cloud, you don’t
have to manage physical servers or storage devices. Instead, you use software-based security tools to monitor
and protect the flow of information into and out of your cloud resources.
An advantage of the AWS Cloud is that it allows you to scale and innovate, while maintaining a secure
environment and paying only for the services you use. This means that you can have the security you need at a
lower cost than in an on-premises environment.
As an AWS customer you inherit all the best practices of AWS policies, architecture, and operational processes
built to satisfy the requirements of the most security-sensitive customers, and you gain the flexibility and agility
you need in security controls.
The AWS Cloud enables a shared responsibility model. While AWS manages security of the cloud, you are
responsible for security in the cloud. This means that you retain control of the security you choose to implement
to protect your own content, platform, applications, systems, and networks no differently than you would in an
on-site data center.
AWS provides you with guidance and expertise through online resources, personnel, and partners. AWS
provides you with advisories for current issues, plus you have the opportunity to work with AWS when you
encounter security issues.
You get access to hundreds of tools and features to help you to meet your security objectives. AWS provides
security-specific tools and features across network security, configuration management, access control, and data
encryption.
Finally, AWS environments are continuously audited, with certifications from accreditation bodies across
geographies and verticals. In the AWS environment, you can take advantage of automated tools for asset
inventory and privileged access reporting.
Meet compliance requirements — AWS manages dozens of compliance programs in its infrastructure.
This means that segments of your compliance have already been completed.
Save money — Cut costs by using AWS data centers. Maintain the highest standard of security without
having to manage your own facility.
Scale quickly — Security scales with your AWS Cloud usage. No matter the size of your business, the
AWS infrastructure is designed to keep your data safe.
Compliance
AWS Cloud Compliance helps you understand the robust controls in place at AWS for security and data
protection in the cloud. Compliance is a shared responsibility between AWS and the customer, and you can visit
the Shared Responsibility Model to learn more. Customers can feel confident in operating and building on top of
the security controls AWS uses on its infrastructure.
The IT infrastructure that AWS provides to its customers is designed and managed in alignment with best
security practices and a variety of IT security standards, and AWS complies with a wide range of assurance
programs.
AWS provides customers a wide range of information on its IT control environment in whitepapers, reports,
certifications, accreditations, and other third-party attestations.
AWS Elastic Disaster Recovery (AWS DRS) minimizes downtime and data loss with fast, reliable recovery of
on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time
recovery.
Elastic Disaster Recovery is the recommended service for disaster recovery to AWS. It provides similar
capabilities as CloudEndure Disaster Recovery, and is operated from the AWS Management Console. This
facilitates seamless integration between AWS DRS and other AWS services, such as AWS CloudTrail, AWS
Identity and Access Management (IAM), and Amazon CloudWatch.
With AWS DRS, you can recover your applications on AWS from physical infrastructure, VMware vSphere,
Microsoft Hyper-V, and cloud infrastructure. You can also use AWS DRS to recover Amazon Elastic Compute
Cloud (EC2) instances in a different AWS Region.
You can use AWS DRS to recover all of your applications and databases that run on supported Windows and
Linux operating system versions.
AWS Lambda helps you to focus on your core product and business logic instead of managing
operating system (OS) access control, OS patching, right-sizing, provisioning, scaling, etc.
AWS Lambda Block Diagram
Step 1: First, upload your AWS Lambda code in any language supported by AWS Lambda. Java, Python, Go,
and C# are some of the languages that are supported by AWS Lambda.
Step 2: Configure one of the AWS services that can trigger AWS Lambda as the event source.
Step 3: AWS Lambda stores your code along with the event details on which it should be triggered.
Step 4: AWS charges only when the AWS Lambda code executes, and not otherwise.
Note: Remember that you are charged for the AWS Lambda service only while your code executes; otherwise
you don’t need to pay anything.
A function is a program or a script which runs in AWS Lambda. Lambda passes invocation events into your
function, which processes an event and returns its response.
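A hedged sketch of invoking such a function programmatically with boto3; the function name and payload field are hypothetical:

import json
import boto3

lam = boto3.client('lambda')

# Synchronously invoke a deployed function with a JSON event payload
response = lam.invoke(
    FunctionName='demo-function',  # placeholder name
    InvocationType='RequestResponse',  # wait for the result
    Payload=json.dumps({'customer_name': 'alice'}),
)
print(json.loads(response['Payload'].read()))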
Runtimes:
A runtime allows functions written in various languages to run on the same base execution environment. You
configure a runtime for your function that matches your selected programming language.
Event source:
An event source is an AWS service, such as Amazon SNS, or a custom service, that triggers your function and
causes it to execute its logic.
Lambda Layers:
Lambda layers are an important distribution mechanism for libraries, custom runtimes, and other important
function dependencies. This AWS component also helps you to manage your function code under development
separately from the unchanging code and resources that it uses.
Log streams:
Log streams allow you to annotate your function code with custom logging statements, which helps you
analyse the execution flow and performance of your AWS Lambda functions.
Environment restrictions: AWS Lambda is restricted to a few supported languages, whereas a traditional server
environment has no such restrictions.
Cases where AWS Lambda is not appropriate:
It is not appropriate to use AWS Lambda for software packages or applications which rely on calling
underlying Windows RPCs.
It should not be used for custom software applications with licensing agreements, like MS-Office document
processing, Oracle databases, etc.
AWS Lambda should not be used for workloads requiring custom hardware, such as GPU acceleration or
hardware affinity.
Amazon RDS is the Relational Database Service offered as a web service by Amazon. It makes it easy to set up
and operate a relational database in the cloud, and provides a very cost-effective way to use the industry’s leading
RDBMS software as a managed service. Because of this web service from Amazon AWS, you do not have to
buy any server or install any database software on it. You just have to subscribe to the AWS RDS web service and
start using the RDBMS features after some initial configuration involving memory and CPU capacity allocation,
etc. In this tutorial we will learn about the different interfaces available in AWS RDS for using the industry’s
leading RDBMS software.
As RDS is a managed service provided by AWS, we can expect that like other AWS services it will provide
scalability, security and cost effectiveness to the various RDBMS it provides. The database products available
through AWS RDS are as listed below.
MySQL - Support versions for MySQL 5.5 to 5.7. Minor upgrades happen automatically without needing
any involvement from the user.
MariaDB – Support versions for MariaDB from 10.0 to 10.2.
Oracle – Supports versions 11g and 12c. You can use the Oracle license provided by AWS or bring your
own license. The costing for these two options is different.
Microsoft SQL Server – Supports versions 2008 to 2017. AWS also supports the various editions –
Enterprise, Standard, Web, and Express.
PostgreSQL – Supports versions 9 to 11. Can be configured as a Multi-AZ deployment with read replicas.
Amazon Aurora – This is Amazon’s own RDBMS. We will be covering it in a separate tutorial.
Each of these database products is offered as Software as a Service (SaaS), providing the following features.
Customization of CPU capacity, memory allocation, and IOPS (input/output operations per second) for a
database instance.
Manage software patching, failure and recovery of the RDBMS software without any user intervention.
Allow manual or automated backup of the database using snapshots. Restore the database from these
snapshots.
Provide high availability by creating primary and secondary instances which are kept in sync. In case of
a failure of the primary, AWS RDS automatically fails over to the secondary.
Put the databases in a virtual private cloud (VPC) and also use the AWS IAM (Identity and Access
Management) service to control access to the databases.
There are two purchase options for AWS RDS service. On-Demand Instances and Reserved Instances.
For an On-Demand instance you pay for every hour of usage, while for a Reserved instance you make an upfront
payment for a one- to three-year time frame.
To use any AWS service you need to set up an AWS account. We assume you have set up the AWS account by
following the guidelines mentioned on the Amazon Web Services home page. Below are the preliminary
steps to access the RDS services from the console.
Step-1
After logging in to the Amazon console, to access the RDS services we need to navigate to the Amazon RDS home
page by searching for RDS in the search box under the Services tab, as shown in the diagram below.
Step-2
On clicking the link above we get the Amazon RDS home page. If this is the first time you are accessing the RDS
services, it will show you a screen prompting you to create a database, as shown below.
In case you have already created some RDS resources, a summary of them will be available by scrolling down
the above page. A screenshot is shown below.
Step-3
The next screen gives us an option to select the DB engine we need and that is the start of our configuration steps
for the database we need.
The RDS interfaces are a way to access the RDS service we create. After the creation and configuration of the
RDS service, there is a need to access the data, upload data to the database, and run other programs that connect
to the database. These interfaces are needed because the data is accessed and manipulated by end users of the
database, who are not necessarily the AWS account holder that created it.
There are three main such interfaces.
GUI Console
This is the simplest of the interfaces, where the user can log in through a web browser and start using the DB
services. The downside of such access is that it needs a human to interact with the RDS services, and we cannot
run a database program to carry out regular tasks like backups or analysing the DB.
Command-line and SDK access first needs the AWS tooling installed and configured. Verify your Python
package manager and set up your credentials:
pip -V
aws configure
The following Python program then uses the AWS SDK for Python (boto3) to copy a DB snapshot:

import boto3

# Create an RDS client using the credentials configured above
client = boto3.client('rds')

# Copy an existing manual DB snapshot to a new identifier
response = client.copy_db_snapshot(
    SourceDBSnapshotIdentifier='mydbsnapshot',
    TargetDBSnapshotIdentifier='mydbsnapshot-copy',
)
print(response)
When the above program is run, we get a response which describes the various properties of the copy event.
Here the term 'string' represents the various parameter values defined by the user for their environment.
For example, VpcId represents the ID of the VPC in which the copy action is happening.
{
'DBSnapshot': {
'DBSnapshotIdentifier': 'string',
'DBInstanceIdentifier': 'string',
'SnapshotCreateTime': datetime(2015, 1, 1),
'Engine': 'string',
'AllocatedStorage': 123,
'Status': 'string',
'Port': 123,
'AvailabilityZone': 'string',
'VpcId': 'string',
'InstanceCreateTime': datetime(2015, 1, 1),
'MasterUsername': 'string',
'EngineVersion': 'string',
'LicenseModel': 'string',
'SnapshotType': 'string',
'Iops': 123,
'OptionGroupName': 'string',
'PercentProgress': 123,
'SourceRegion': 'string',
'SourceDBSnapshotIdentifier': 'string',
'StorageType': 'string',
'TdeCredentialArn': 'string',
'Encrypted': True|False,
'KmsKeyId': 'string',
'DBSnapshotArn': 'string',
'Timezone': 'string',
'IAMDatabaseAuthenticationEnabled': True|False,
'ProcessorFeatures': [
{
'Name': 'string',
'Value': 'string'
},
]
}
}
A DB instance is an isolated database environment running in the cloud which can contain multiple user-created
databases. It can be accessed using the same client tools and applications used to access a standalone database
instance. But there are restrictions on how many DB instances, and of what type, you can have for a single customer
account. The below diagram illustrates the different combinations based on the type of license you opt for.
Each DB instance is identified by a customer supplied name called DB instance identifier. It is unique for the
customer for a given AWS region.
DB Instance Classes
Depending on the need of the processing power and memory requirement, there is a variety of instance classes
offered by AWS for the RDS service.
When there is a need of more processing power than memory requirement you can choose the standard instance
class with a higher number of virtual CPUs. But in the case of very high memory requirement you can choose
Memory optimized class with an appropriate number of vCPUs. Choosing a correct class not only impacts the speed
of processing but also the cost of using the service. The burstable performance class is intended for minimal
processing requirements where the data size is not in petabytes.
DB Instance Status
The DB instance status indicates the health of the DB. Its value can be seen from the AWS console or using the
AWS CLI command describe-db-instances. The important status values of DB instances and their meanings are
described below.
Failed – The instance has failed and Amazon RDS can't recover it. (Instances in this state are not billed.)
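A small boto3 sketch of the same status check done programmatically (no arguments needed beyond configured credentials):

import boto3

rds = boto3.client('rds')

# Print the status of every DB instance in the current account and region
for db in rds.describe_db_instances()['DBInstances']:
    print(db['DBInstanceIdentifier'], '=>', db['DBInstanceStatus'])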
Amazon S3
Amazon S3 (Simple Storage Service) is a scalable, high-speed, low-cost web-based service designed for online
backup and archiving of data and application programs. It allows you to upload, store, and download any type of
file up to 5 TB in size. This service gives subscribers access to the same systems that Amazon uses to run its own
websites. The subscriber has control over the accessibility of the data, i.e. whether it is privately or publicly accessible.
How to Configure S3?
Following are the steps to configure a S3 account.
Step 1 − Open the Amazon S3 console using this link − https://console.aws.amazon.com/s3/home
Step 2 − Create a Bucket using the following steps.
A prompt window will open. Click the Create Bucket button at the bottom of the page.
Create a Bucket dialog box will open. Fill the required details and click the Create button.
The bucket is created successfully in Amazon S3. The console displays the list of buckets and its
properties.
Select the Static Website Hosting option. Click the radio button Enable website hosting and fill the
required details.
Click the Add files option. Select those files which are to be uploaded from the system and then click the
Open button.
Click the start upload button. The files will get uploaded into the bucket.
To open/download an object − In the Amazon S3 console, in the Objects & Folders list, right-click on the object
to be opened/downloaded. Then, select the required object.
To empty a bucket − A confirmation message will appear in the pop-up window. Read it carefully and click the
Empty bucket button to confirm.
Amazon S3 Features
Low cost and Easy to Use − Using Amazon S3, the user can store a large amount of data at very low
charges.
Secure − Amazon S3 supports data transfer over SSL and the data gets encrypted automatically once it
is uploaded. The user has complete control over their data by configuring bucket policies using AWS
IAM.
Scalable − Using Amazon S3, there need not be any worry about storage concerns. We can store as much
data as we have and access it anytime.
Higher performance − Amazon S3 is integrated with Amazon CloudFront, that distributes content to
the end users with low latency and provides high data transfer speeds without any minimum usage
commitments.
Integrated with AWS services − Amazon S3 is integrated with other AWS services, including Amazon
CloudFront, Amazon CloudWatch, Amazon Kinesis, Amazon RDS, Amazon Route 53, Amazon VPC,
AWS Lambda, Amazon EBS, Amazon DynamoDB, etc.
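As a sketch of the programmatic equivalent of the console steps above, the following boto3 snippet creates a bucket and moves a file in and out of it; the bucket and file names are placeholders (bucket names must be globally unique):

import boto3

s3 = boto3.client('s3')

# Create a bucket (placeholder name; must be globally unique)
s3.create_bucket(Bucket='example-tcs351-bucket')

# Upload a local file as an object, then download it again
s3.upload_file('report.pdf', 'example-tcs351-bucket', 'docs/report.pdf')
s3.download_file('example-tcs351-bucket', 'docs/report.pdf', 'report-copy.pdf')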
Amazon CloudFront
The following diagram shows an overview of how this static website solution works:
To deploy this secure static website solution, you can choose from either of the following options:
Use the AWS CloudFormation console to deploy the solution with default content, then upload your website
content to Amazon S3.
Clone the solution to your computer to add your website content. Then, deploy the solution with the AWS
Command Line Interface (AWS CLI).
Topics
Prerequisites
Using the AWS CloudFormation console
Cloning the solution locally
Finding access logs
Prerequisites
A registered domain name, such as example.com, that’s pointed to an Amazon Route 53 hosted zone. The
hosted zone must be in the same AWS account where you deploy this solution. If you don’t have a registered
domain name, you can register one with Route 53. If you have a registered domain name but it’s not pointed to a
Route 53 hosted zone, configure Route 53 as your DNS service.
AWS Identity and Access Management (IAM) permissions to launch CloudFormation templates that create
IAM roles, and permissions to create all the AWS resources in the solution.
You are responsible for the costs incurred while using this solution. For more information about costs, see the
pricing pages for each AWS service.
2. The Create stack wizard opens in the AWS CloudFormation console, with prepopulated fields that specify this
solution’s CloudFormation template.
Make sure to choose the bucket with s3bucketroot in its name, not s3bucketlogs. The bucket
with s3bucketroot in its name contains the website content. The one with s3bucketlogs contains only log files.
3. Delete the website’s default content, then upload your own.
Note
If you viewed your website with this solution’s default content, then it’s likely that some of the default content
is cached in a CloudFront edge location. To make sure that viewers see your updated website
content, invalidate the files to remove the cached copies from CloudFront edge locations. For more information,
see Invalidating files.
Prerequisites
To add your website content before deploying this solution, you must package the solution’s artifacts locally,
which requires Node.js and npm. For more information, see https://www.npmjs.com/get-npm.
make package-static
3. Copy your website’s content into the www folder, overwriting the default website content.
4. Run the following AWS CLI command to create an Amazon S3 bucket to store the solution’s artifacts.
Replace example-bucket-for-artifacts with your own bucket name.
5. Run the following AWS CLI command to package the solution’s artifacts as an AWS CloudFormation template.
Replace example-bucket-for-artifacts with the name of the bucket that you created in the previous step.

aws cloudformation package \
    --region us-east-1 \
    --template-file templates/main.yaml \
    --s3-bucket example-bucket-for-artifacts \
    --output-template-file packaged.template
10. Run the following command to deploy the solution with AWS CloudFormation, replacing the following values:
your-CloudFormation-stack-name – Replace with a name for the AWS CloudFormation stack.
example.com – Replace with your domain name. This domain must be pointed to a Route 53 hosted zone in the
same AWS account.
www – Replace with the subdomain to use for your website. For example, if the subdomain is www, your
website is available at www.example.com.
16. Wait for the AWS CloudFormation stack to finish creating. The stack creates some nested stacks, and can take
several minutes to finish. When it’s finished, the Status changes to CREATE_COMPLETE.
AWS offers a wide range of storage services that can be provisioned depending on your project requirements
and use case. AWS storage services have different provisions for highly confidential data, frequently accessed
data, and the not so frequently accessed data. You can choose from various storage types namely, object storage,
file storage, block storage services, backups, and data migration options. All of which fall under the AWS
Storage Services list.
AWS Glacier: From the aforementioned list, AWS Glacier, is the backup and archival storage provided by
AWS. It is an extremely low cost, long term, durable, secure storage service that is ideal for backups and
archival needs. In much of its operation AWS Glacier is similar to S3, and it interacts directly with S3 using
S3 lifecycle policies. However, the main difference between AWS S3 and Glacier is the cost structure. The
cost of storing the same amount of data in AWS Glacier is significantly less as compared to S3. Storage costs
in Glacier can be as little as $1 for one terabyte of data per month.
AWS Glacier Terminology
1. Vaults: Vaults are virtual containers that are used to store data. Vaults in AWS Glacier are similar to buckets
in S3.
Each vault has its own specific access policies (vault lock/access policies), thus providing you with more
control over who has what kind of access to your data.
Vaults are region-specific.
2. Archives: Archives are the fundamental entity type stored in Vaults. Archives in AWS Glacier are similar
to Objects in S3. Virtually you have unlimited storage capacity on AWS Glacier and hence, can store an
unlimited number of archives in a vault.
3. Vault Access Policies: In addition to the basic IAM controls AWS Glacier offers Vault access policies that
help managers and administrators have more granular control of their data.
Each vault has its own set of vault access policies.
If either the vault access policy or the IAM controls deny a user action, the user is not authorized to
perform it.
4. Vault Lock Policies: Vault lock policies are exactly like Vault access policies but once set, they cannot be
changed.
Specific to each vault.
This helps you with data compliance controls. For example- Your business administrators might want some
highly confidential data to be only accessible to the root user of the account, no matter what. Vault lock
policy for such a use case can be written for the required vaults.
Features of AWS Glacier
Given the extremely cheap storage provided by AWS Glacier, it doesn’t provide as many features as AWS
S3, and access to data in AWS Glacier is an extremely slow process.
Just like S3, AWS Glacier can essentially store all kinds of data types and objects.
Durability: AWS Glacier, just like Amazon S3, is designed for 99.999999999% durability (11 9’s).
This means the chance of losing an object stored in one of these services in a given year is roughly one in
100 billion. AWS Glacier replicates data across multiple Availability Zones to provide this high durability.
Data Retrieval Time: Data retrieval from AWS Glacier can take as little as 1-5 minutes (high-cost expedited
retrieval) or as long as 5-12 hours (cheap bulk retrieval).
AWS Glacier Console: The AWS Glacier dashboard is not as intuitive and friendly as AWS S3. The
Glacier console can only be used to create vaults. Data transfer to and from AWS Glacier can only be done
via some kind of code. This functionality is provided via:
AWS Glacier API
AWS SDKs
Region-specific costs: The cost of storing data in AWS Glacier varies from region to region.
Security:
AWS Glacier automatically encrypts your data using the AES-256 algorithm and manages its
keys for you.
Apart from normal IAM controls AWS Glacier also has resource policies (vault access
policies and vault lock policies) that can be used to manage access to your Glacier vaults.
Infinite Storage Capacity: Virtually AWS Glacier is supposed to have infinite storage capacity.
Data Transfer In Glacier
1. Data Upload:
Data can be uploaded to AWS Glacier by creating a vault from the Glacier console and using one of the
following methods:
Write code that uses AWS Glacier SDK to upload data.
Write code that uses AWS Glacier API to upload data.
S3 Lifecycle policies: S3 lifecycle policies can be set to move S3 objects to AWS Glacier
after some time. This can be used to back up old and infrequently accessed data stored in S3.
2. Data Transfer between regions:
AWS Glacier is a region-specific service. Data in one region can be transferred to another from the AWS
console. The cost of such a data transfer is $0.02 per GB.
3. Data Retrieval
As mentioned before, AWS Glacier is a backup and data-archival service; given its low cost of storage, AWS
Glacier data is not readily available for consumption.
Data retrieval from Glacier can only be done via some sort of code, using AWS Glacier SDK or the Glacier
API.
Data Retrieval in AWS Glacier is of three types:
Expedited:
This mode of data retrieval is only suggested for urgent requirements of data.
A single expedited retrieval request can only be used to retrieve 250MB of data
at max.
This data is then provided to you within 1-5 minutes.
The cost of expedited retrieval is $0.03 per GB and $0.01 per request.
Standard:
This data retrieval mode can be used for any size of data, full or partial archive.
This data is then provided to you within 3-5 hours.
The cost of standard retrieval is $0.01 per GB and $0.05 per 1000 requests.
Bulk:
This data retrieval is suggested for mass retrieval of data (petabytes of data).
It is the cheapest data retrieval option offered by AWS Glacier.
This data is then provided to you within 5-12 hours.
The cost of bulk retrieval is $0.0025 per GB and $0.025 per 1,000 requests.
Creating a Vault
1. Login to your Management Console and head straight to the S3 Glacier console through the following
link https://console.aws.amazon.com/glacier/.
2. Choose a specific Region from the top Region selector tab.
3. In case it’s your first experience with S3 Glacier, click on the button showing “Get started”. (If it’s not your
first time, you will see a different button to click on having the words “Create Vault”.)
Amazon Glacier – Getting Started Page
4. Type in “examplevault” for the vault name inside the Vault Name text box then choose the button Next
Step.
5. Choose the option of Do not enable notifications. In this tutorial, there is no need to configure notifications
for the vault that you are creating.
In case you needed to get notifications sent to you or your app at the time that specific S3 Glacier jobs get
finished, you should choose the option of Enable notifications and create a new SNS topic, or the
option of Enable notifications and use an existing SNS topic for setting up Amazon SNS notifications. The
coming steps will let you upload an archive and then download it through the high-level API of the SDK. Working
with the high-level API does not require you to configure vault notifications in order to retrieve data.
6. Once the entered vault name and Region are correct, select the Submit button.
7. A newly created vault will be shown in the list located on the page of S3 Glacier Vaults.
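The same vault can also be created programmatically. A minimal boto3 sketch, using the vault name from the walkthrough and a placeholder archive file:

import boto3

glacier = boto3.client('glacier')

# Create the vault used in the console walkthrough above
glacier.create_vault(vaultName='examplevault')

# Upload a small archive; Glacier returns an archive ID for later retrieval
with open('backup.zip', 'rb') as f:  # placeholder file
    result = glacier.upload_archive(vaultName='examplevault', body=f)
print(result['archiveId'])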
Amazon SNS
Amazon Web Services Simple Notification Service (AWS SNS) is a web service that automates the process
of sending notifications to the subscribers attached to it. SNS provides this service to both application-to-person
and application-to-application. It uses the publishers/subscribers paradigm for the push and delivery of
messages. Data loss is prevented by storing the data across multiple availability zones. It is cost-efficient
and provides low-cost infrastructure, especially to mobile users. It sends notifications through SMS or email,
to an Amazon Simple Queue Service (SQS) queue, to AWS Lambda functions, or to an HTTP endpoint. For
example, when the CPU utilization of an instance goes above 80%, an AWS CloudWatch alarm is triggered;
this alarm activates an SNS topic, thereby notifying the subscribers about the high CPU utilization of the
instance. The SNS service uses topics, each with a unique name; a topic acts as the logical access point and
communication channel between publishers and subscribers.
Benefits of using SNS
SNS increases Durability.
SNS increases Security.
SNS ensures accuracy.
SNS reduces and simplifies the cost.
SNS supports SMS in over 200 countries.
Clients of SNS
Publishers: They communicate with subscribers asynchronously by producing and sending a
message to a topic (i.e. a logical access point and communication channel). They do not include a specific
destination (e.g. an email ID) in each message; instead, they send the message to the topic. They only send
messages to topics they have permission to publish to.
Subscribers: Subscribers like web servers, email addresses, Amazon SQS queues, and AWS Lambda
functions receive the notification over one of the supported protocols (Amazon SQS, HTTP/S, email,
SMS, Lambda) when they are subscribed to the topic. Amazon SNS matches the topic to a list of
subscribers who have subscribed to that topic and delivers the message to each of those subscribers.
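A minimal publisher/subscriber sketch with boto3; the topic name and email address are placeholders:

import boto3

sns = boto3.client('sns')

# Create a topic; the returned ARN identifies it for publish/subscribe calls
topic_arn = sns.create_topic(Name='demo-alerts')['TopicArn']

# Subscribe an email endpoint (the recipient must confirm the subscription)
sns.subscribe(TopicArn=topic_arn, Protocol='email',
              Endpoint='admin@example.com')

# Publish a message; SNS fans it out to every confirmed subscriber
sns.publish(TopicArn=topic_arn, Subject='High CPU utilization',
            Message='Instance CPU crossed the 80% threshold.')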
Steps to create Simple Notification Service in AWS
Step 1: Go to the Amazon SNS dashboard. Click on the Create topic button.
Step 2: Type in the key-value of the tag, which is completely optional. Click on Create topic.
Step 3: You will be redirected to the topic page. Under subscription options, click on Create subscription.
Step 4: Select the Protocol of the subscription as Email and the endpoint as your email ID. Click on Create
subscription.
Step 5: Now go to the mailbox of the mentioned email ID and click on Confirm subscription.
A Service Level Agreement (SLA) is the bond for performance negotiated between the cloud services
provider and the client. Earlier, in cloud computing, all Service Level Agreements were negotiated between a
client and the service provider. Nowadays, with the emergence of large utility-like cloud computing providers,
most Service Level Agreements are standardized until a client becomes a large consumer of cloud services.
Service level agreements are also defined at different levels which are mentioned below:
Customer-based SLA
Service-based SLA
Multilevel SLA
Few Service Level Agreements are enforceable as contracts; most are agreements more along the lines of an
Operating Level Agreement (OLA) and may not have the force of law. It is wise to have an attorney review the
documents before making a major commitment to a cloud service provider. Service Level Agreements usually
specify some parameters, which are mentioned below:
1. Availability of the Service (uptime)
2. Latency or the response time
3. Service components reliability
4. Accountability of each party
5. Warranties
In any case, if a cloud service provider fails to meet the stated minimum targets, then the provider has to
pay the penalty to the cloud service consumer as per the agreement. So, Service Level Agreements are like
insurance policies in which the corporation has to pay as per the agreements if any casualty occurs. Microsoft
publishes the Service Level Agreements linked with the Windows Azure Platform components, which is
demonstrative of industry practice for cloud service vendors. Each individual component has its own Service
Level Agreements. Below are two major Service Level Agreements (SLA) described:
1. Windows Azure SLA – Windows Azure has different SLAs for compute and storage. For compute, there
is a guarantee that when a client deploys two or more role instances in separate fault and upgrade
domains, the client’s internet-facing roles will have external connectivity at least 99.95% of the time.
Moreover, all of the client’s role instances are monitored, and there is a guarantee of detecting, 99.9%
of the time, when a role instance’s process is not running, and of initiating corrective action.
2. SQL Azure SLA – SQL Azure clients will have connectivity between the database and internet gateway
of SQL Azure. SQL Azure will maintain a “Monthly Availability” of 99.9% within a month. The monthly
availability proportion for a particular tenant database is the ratio of the time the database was available
to customers to the total time in a month. Time is measured in intervals of some minutes within a 30-day
monthly cycle. Availability is always calculated over a complete month. A portion of time is marked as
unavailable if the customer’s attempts to connect to a database are denied by the SQL Azure gateway.
Service Level Agreements are based on the usage model. Frequently, cloud providers charge for their pay-per-
use resources at a premium and deploy standard Service Level Agreements only for that purpose. Clients can
also subscribe at different levels that guarantee access to a particular amount of purchased resources. The
Service Level Agreements (SLAs) attached to a subscription often offer different terms and conditions. If a
client requires access to a particular level of resources, then the client needs to subscribe to a service; a usage
model may not deliver that level of access under peak load conditions.
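To make the availability numbers concrete, here is a small Python sketch computing the downtime budget implied by a 99.9% monthly availability guarantee over a 30-day cycle (the 90-minute outage is an invented example):

sla = 0.999
minutes_per_month = 30 * 24 * 60  # 30-day monthly cycle

# Downtime the SLA permits before penalties apply
allowed_downtime = (1 - sla) * minutes_per_month
print(f'Allowed downtime: {allowed_downtime:.1f} minutes/month')  # 43.2

# Availability actually achieved if the database was down for 90 minutes
achieved = 1 - 90 / minutes_per_month
print(f'Achieved availability: {achieved:.4%}')  # 99.7917%, an SLA violation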
SLA Lifecycle
Steps in SLA Lifecycle
1. Discover service provider: This step involves identifying a service provider that can meet the needs of
the organization and has the capability to provide the required service. This can be done through
research, requesting proposals, or reaching out to vendors.
2. Define SLA: In this step, the service level requirements are defined and agreed upon between the service
provider and the organization. This includes defining the service level objectives, metrics, and targets
that will be used to measure the performance of the service provider.
3. Establish Agreement: After the service level requirements have been defined, an agreement is
established between the organization and the service provider outlining the terms and conditions of the
service. This agreement should include the SLA, any penalties for non-compliance, and the process for
monitoring and reporting on the service level objectives.
4. Monitor SLA violation: This step involves regularly monitoring the service level objectives to ensure
that the service provider is meeting their commitments. If any violations are identified, they should be
reported and addressed in a timely manner.
5. Terminate SLA: If the service provider is unable to meet the service level objectives, or if the
organization is not satisfied with the service provided, the SLA can be terminated. This can be done
through mutual agreement or through the enforcement of penalties for non-compliance.
6. Enforce penalties for SLA Violation: If the service provider is found to be in violation of the SLA,
penalties can be imposed as outlined in the agreement. These penalties can include financial penalties,
reduced service level objectives, or termination of the agreement.
Advantages of SLA
1. Improved communication: A better framework for communication between the service provider and
the client is established through SLAs, which explicitly outline the degree of service that a customer may
anticipate. This can make sure that everyone is talking about the same things when it comes to service
expectations.
2. Increased accountability: SLAs give customers a way to hold service providers accountable if their
services fall short of the agreed-upon standard. They also hold service providers responsible for
delivering a specific level of service.
3. Better alignment with business goals: SLAs make sure that the service being given is in line with the
goals of the client by laying down the performance goals and service level requirements that the service
provider must satisfy.
4. Reduced downtime: SLAs can help to limit the effects of service disruptions by creating explicit
protocols for issue management and resolution.
5. Better cost management: By specifying the level of service that the customer can anticipate and
providing a way to track and evaluate performance, SLAs can help to limit costs. Making sure the
consumer is getting the best value for their money can be made easier by doing this.
Disadvantages of SLA
1. Complexity: SLAs can be complex to create and maintain, and may require significant resources to
implement and enforce.
2. Rigidity: SLAs can be rigid and may not be flexible enough to accommodate changing business needs or
service requirements.
3. Limited service options: SLAs can limit the service options available to the customer, as the service
provider may only be able to offer the specific services outlined in the agreement.
4. Misaligned incentives: SLAs may misalign incentives between the service provider and the customer, as
the provider may focus on meeting the agreed-upon service levels rather than on providing the best
service possible.
5. Limited liability: SLAs are often not legally binding contracts and frequently limit the liability of the
service provider in case of service failure.
In order to address all these concerns and come to a conclusion, companies use SWOT analysis. SWOT is a
very common tool for company managers. It helps to identify and overcome weaknesses and threats, and to
recognize and utilize the strengths and opportunities of the firm. Strengths, Weaknesses, Opportunities,
and Threats make up SWOT.
The companies have control over strengths and weaknesses. This is why the elements are known as internal
factors. Opportunities and threats are external factors as firms have little or no control over them.
[SWOT grid from the source: Strengths and Weaknesses (internal factors); Opportunities such as future
profitability and the tax structure; Threats such as additional costs, rising costs, increases in interest rates,
and growing competition with less profitability.]
While there are both internal and external factors affecting cloud management, in this article I will only focus on
the external factors. I will explain in detail how the opportunities and threats can impact cloud management
activities.
Before we venture into the details of this topic, you must know that leading companies in the field include
RightScale, Data Mines, and Scalr. Other direct or indirect competitors include enStratus, ScaleXtreme, Bitnami,
and ComputeNext. However, Amazon’s AWS Management is something that all the other firms fear.
RightScale claims to be the industry leader in Cloud Portfolio Management. It allows enterprises to hasten
delivery of applications. Leading enterprises like Intercontinental Hotels Group, Pearson International, and PBS
have launched millions of servers through RightScale since 2007.
Scalr was established in 2007, at the very beginning of the cloud computing revolution. With time, it is growing.
Scalr is a mature and profitable business, which is here to stay.
Leading personalities in the field have stated that Amazon is the biggest threat facing other companies. It is the
largest cloud player to date. AWS is solving certain major problems which many cloud management platforms
such as Scalr and RightScale have set out to unravel. This includes the following issue:
If cloud IaaS providers find a mutual platform to federate across their infrastructures without involvement from
firms like RightScale and Scalr, it will endanger the direct business model of the cloud management companies,
as it will take over a part of their value proposition. It will also enable single cloud providers to compete with
AWS by broadening their product range horizontally and vertically. As AWS is after a huge part of RightScale’s
revenue, a true alternative to it could pose a threat to the business model.
When cloud management vendors are asked about opportunities, multi-cloud support is often mentioned.
Many state the promise of being able to shift easily from one vendor to another as an opportunity, rather like
switching between brands of milk.
The main flaw with this argument is that it requires all the vendors to provide the same or similar functionality.
A buyer would never switch from fresh milk to spoiled milk; similarly, consumers will not shift from Amazon to
a lesser cloud. Therefore, for this to become a really great opportunity, cloud management companies need to
present themselves as viable alternatives.
Another opportunity for RightScale and the other companies would be to lower the entry barriers, opening the
market to a wider range of IaaS/cloud hosters. This is not in their short-term interest at the moment, but if they
do not do it soon, some other company will, and that could become a long-term problem for the business model.
While these are the most commonly discussed topics, cloud management companies such as Data Mines face
other threats and opportunities as well.
Always remember that identifying the threats and opportunities is only the beginning. Your task is to eliminate
the weaknesses and take full advantage of the opportunities; often, the aim is to turn weaknesses into strengths.
Cloud management companies should conduct a SWOT analysis every few months.
General Cloud Computing Risks (Performance, Network Dependence, Reliability, Outages, Safety-Critical
Processing, Compliance, and Information Security)
1. Lack of Visibility
Shifting operations, assets, and workloads to the cloud means transferring the responsibility of managing certain
systems and policies to a contracted cloud service provider (CSP). As a result, organizations lose visibility into
some network operations, services, and resource usage and cost.
Organizations must obtain visibility into their cloud services to ensure security, privacy, and adherence to
organizational and regulatory requirements. It typically involves using additional tools for cloud
security configuration monitoring and logging and network-based monitoring. Organizations should set up
protocols up front with the assistance of the CSP to alleviate these concerns and ensure transparency.
2. Cloud Misconfigurations
Threat actors can exploit system and network misconfigurations as entry points that potentially allow them to
move laterally across the network and access confidential resources. Misconfigurations can occur due to
overlooked system areas or improper security settings.
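One common misconfiguration is an S3 bucket left open to the public. The following boto3 sketch shows a simple configuration check of the kind such monitoring tools perform; the bucket name is a placeholder:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
bucket = 'example-tcs351-bucket'  # placeholder

try:
    # Inspect the bucket's public-access block configuration
    conf = s3.get_public_access_block(Bucket=bucket)
    settings = conf['PublicAccessBlockConfiguration']
    if not all(settings.values()):
        print(f'{bucket}: public access is not fully blocked')
except ClientError:
    # A missing configuration is itself worth flagging
    print(f'{bucket}: no public access block configured')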
3. Data Loss
Organizations leverage backups as a defensive tactic against data loss. Cloud storage is highly resilient because
vendors set up redundant servers and storage across several geographic locations. However, cloud storage and
Software as a Service (SaaS) providers are increasingly targeted by ransomware attacks that compromise
customer data.
4. Accidental Data Exposure
Placing data in the cloud offers great benefits but creates major security challenges for organizations.
Unfortunately, many organizations migrate to the cloud without prior knowledge of how to use it securely,
putting sensitive data at risk of exposure.
5. Identity Theft
Phishing attacks often use cloud environments and applications to launch attacks. The widespread use of cloud-
based email, like G-Suite and Microsoft 365, and document-sharing services, like Google Drive and Dropbox,
has made email attachments and links a standard.
Many employees are used to emails asking them to confirm account credentials before accessing a particular
website or document. It enables cybercriminals to trick employees into divulging cloud credentials, making
accidental exposure of credentials a major concern for many organizations.
6. Insecure Integration and APIs
APIs enable businesses and individuals to sync data, customize the cloud service experience, and automate data
workflows between cloud systems. However, APIs that fail to encrypt data, enforce proper access control, and
sanitize inputs appropriately can cause cross-system vulnerabilities. Organizations can minimize this risk using
industry standard APIs that utilize proper authentication and authorization protocols.
7. Data Sovereignty
Cloud providers typically utilize several geographically distributed data centers to improve the performance and
availability of cloud-based resources. It also helps CSPs ensure they can maintain service level agreements
(SLAs) during business-disrupting events like natural disasters or power outages.
Organizations that store data in the cloud do not know where this data is stored within the CSP’s array of data
centers. Since data protection regulations like General Data Protection Regulation (GDPR) limit where EU
citizens’ data can be sent, organizations using a cloud platform with data centers outside the approved areas risk
regulatory non-compliance. Organizations should also consider jurisdictions when governing data. Each
jurisdiction has different laws regarding data.
If your deployment package was configured to use certificates, you can upload the certificate now.
1. Select Certificates, and on the Add certificates pane, select the TLS/SSL certificate .pfx file, and then
provide the Password for the certificate.
2. Click Attach certificate, and then click OK on the Add certificates pane.
3. Click Create on the Cloud Service pane. When the deployment has reached the Ready status, you can
proceed to the next steps.
Verify your deployment completed successfully
2. Under Essentials, click the Site URL to open your cloud service in a web browser.