Understanding AWS Core Services

The document discusses cloud computing concepts including traditional datacenter challenges, benefits of cloud computing like elasticity and reliability, and types of cloud including IaaS, PaaS, and SaaS. It also covers AWS global infrastructure including regions, availability zones, and edge locations. It summarizes AWS tools for organizing costs like Cost Explorer, TCO calculator, and resource tags. Finally, it discusses AWS support plans and services like EC2, Lambda, and content delivery.


pluralsight: first course in PATH to AWS cloud practitioner

traditional: large up front investment, forecasting is difficult, slow to deploy, maintenance, security/compliance burden
benefits: trade capex for variable expense, economies of scale (aws buys the
servers), stop guessing capacity, increase speed/agility, no maintenance on data
centers, scalability

elasticity: acquire resources when needed, release them when you don't need them


reliability: failover
agility: trying new ideas, business processes, no maintenance, reduces risk
(security/compliance), emerging technologies

types of cloud: on demand delivery with pay as you go pricing


IaaS: full access to servers, OS access, maintenance/patching
SaaS: software you can use
PaaS: given a service, add your own customizations

public cloud (or just cloud): aws, azure, gcp


on-prem (private cloud): cloud-like platform, vmware
hybrid: both public and private cloud

AWS global infrastructure: regions, AZs, edge locations


-regions: specific geographic location, clusters of data centers, 22 regions
-availability zones: 1 or more datacenters, each region has at least 2 azs, located
within geographic area of a region, redundant power/networking/connectivity, 69 AZs
globally

US: 6 regions, 4 public 2 govcloud


AZ naming: region code plus a letter, e.g. us-east-2a, us-east-2b, us-east-2c
AWS edge locations: CDN (content delivery network)
CloudFront (global CDN) and Route 53 (DNS service)
-200 locations, most prevalent, serve content closer to users
-locations where users can fetch content

visualizing aws: infrastructure.aws


edge locations are called points of presence (PoPs)

quiz: multiple geographic areas = regions


content across the globe = edge locations
high uptime = availability zones

cloud economics: capex (upfront investment) vs opex (operating expenditures)


financial implications of managing own: large upfront, have to predict, unmet
capacity/demand, growing requires MORE capex
financial of cloud infra: no upfront, costs map directly to user demand

organizing/optimizing costs of AWS: AWS Cost Explorer


-by service, cost tag, predictions (3mo), recommendations, or API!
-AWS budgets: plan/track usage
-AWS TCO calculator: cloud transitioning
-AWS Simple monthly calculator: calculates cost of running specific AWS infra
-AWS resource tags: metadata per resource (webserver + optional value)
web server + social network
-cost allocation report, can be grouped by active tag, and also used in Cost
Explorer
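the tag-based grouping above can be sketched in a few lines - this is an illustrative toy, not the Cost Explorer API, and the line items and tag names are made up:

```python
# Toy model of cost allocation by tag: sum spend per tag value, the way
# Cost Explorer groups costs by an active cost allocation tag.
# All data below is hypothetical.
from collections import defaultdict

line_items = [
    {"service": "EC2", "cost": 120.0, "tags": {"team": "web"}},
    {"service": "S3",  "cost": 30.0,  "tags": {"team": "web"}},
    {"service": "EC2", "cost": 80.0,  "tags": {"team": "analytics"}},
    {"service": "RDS", "cost": 50.0,  "tags": {}},  # untagged spend
]

def costs_by_tag(items, tag_key):
    """Sum cost per tag value; items missing the tag land under 'untagged'."""
    totals = defaultdict(float)
    for item in items:
        totals[item["tags"].get(tag_key, "untagged")] += item["cost"]
    return dict(totals)

print(costs_by_tag(line_items, "team"))
```

untagged spend showing up as its own bucket is exactly why consistent tagging matters for the cost allocation report.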

AWS Organization: multiple accounts under a master account


-Consolidated billing: one bill covering all member accounts
-centralize logging/security standards

TCO Calculator Demo (total cost of ownership):


AWS TCO assumes sizing for EC2 instances, cost breakdowns of reserved EC2 instances
-provides total cost comparison over a 3 yr period vs on-prem

AWS Simple monthly calculator: cost of AWS workload in cloud


-upfront cost shown because it's on-demand!
-but if you do 1 or 3 yrs, it lowers the effective monthly cost
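the arithmetic behind that "lower effective monthly cost" is just amortizing the upfront payment across the term (numbers below are made up, not real AWS prices):

```python
def effective_monthly_cost(upfront, monthly, term_months):
    """Spread an upfront payment over the term and add the recurring
    monthly charge - the comparison the Simple Monthly Calculator shows."""
    return upfront / term_months + monthly

# Hypothetical: $500 upfront + $20/mo on a 3-year (36-month) term.
print(round(effective_monthly_cost(500, 20, 36), 2))
```

a bigger upfront share trades cash now for a lower effective monthly number, which is why all-upfront is cheapest over the full term.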

Cost explorer: within billing dashboard


-can delve into details, group by specific services, linked accounts, tags (as
discussed), availability zones
-there are some 'canned' reports based on what you have looked at
-can also download csv for work in excel

applying these tools:


scenario 1: creating organizations, multiple charts in cost explorer
-this was wrong, all services in a single account
-the solution is TAGS due to minimal effort requirement
scenario 2: total cost of ownership
-correct!
scenario 3: simple monthly calculator
-correct

summary: traditional datacenters vs cloud, aws tools (TCO, budgets, resource tags,
simple monthly calc, cost explorer)

SUPPORTING AWS INFRASTRUCTURE: tools, support plans


AWS SUPPORT: file support requests for cloud workloads
-automated answers

personal health dashboard: alerts + remediation guidance


trusted advisor: check AWS usage against best practices, may eliminate support
requests
-different checks based on tier, ALL customers get 7 CORE checks

5 categories: cost optimization, performance, security, fault tolerance, service limits

plan tiers: communication, response time, cost, type of guidance


-BASIC: provided for all, trusted advisor (7 core), 24/7 customer service, personal
health dashboard
-DEVELOPER: all BASIC, but get business hours email, 1 primary contact, $29 per mo
tied to usage
-BUSINESS: all DEVELOPER, all trusted advisor checks, 24x7 phone/email/chat
engineers, unlimited contacts (not just root), third-party software support, $100
per mo
-ENTERPRISE: all business, includes TAM (technical account mgr), concierge team,
$15000 per mo

response times: 24h (general guidance) / 12h (system impaired) / 4h (production impaired) / 1h (production down) / 15min (business-critical, Enterprise only)

trusted advisor: found by searching on mgmt page


-can see limits within SERVICE limits
personal health dashboard: can check the 'event log' to see historical information
about overall issues
-can check specific services in specific regions
basic/developer/business/enterprise
scenario 1: most cost effective to chat with someone is BUSINESS
scenario 2: ENTERPRISE, due to mission critical 15 minute response time
scenario 3: BASIC - he doesn't need technical guidance at all

ITEMS TO REVIEW: cloud vs traditional infrastructure, elasticity


-elements of regions/azs/edge locations
-funding cloud infrastructure: capex/opex
-forecasting/managing AWS cost: cost explorer, tco calculator, simple monthly
calculator, tags
-support plans: BASIC/DEVELOPER/BUSINESS/ENTERPRISE
-additional support: personal health dashboard, trusted advisor (and categories)

AWS core services:


interacting with AWS: console, CLI, SDK

console: shows login, REGION, recently used services


CLI: every task in console can be done in CLI
SDK: use code to script interactions (java/.net/python/ruby/Go/C++ etc)

console: great for testing


cli/sdk: automation/repeating tasks
SDK: enables automation in a custom application

console: browser, root/iam, regions, lists of services


-Root user is created when the AWS account is created, special perms (delete acct/support
plans)
-if you change region, the console view also changes for MOST services
services will indicate a region or GLOBAL, global means it works across all regions
(route 53, for example)

CLI: creating access keys, root user access keys should not typically be used
version 1/version 2
-download and install the awscli package using pip or msi
aws --version (shows version)
aws configure --profile crayroot
enter in access key, secret key, default region name (us-west-2), output format
(json)

quiz:
scenario1: SDK would be used because it requires code for a custom app
scenario2: console would be used, she's just testing, considering aws
scenario3: requires automation from aws already built in, so CLI only

COMPUTE SERVICES: cloud-based VMs - ec2 (iaas), elastic beanstalk (paas), lambda
(serverless)

EC2 use cases: web application hosting, batch processing, API server, desktop in
the cloud
EC2 concepts: instance types, root device type, AMI, purchase options

instance types: processor/memory/storage, cannot be changed without downtime


-general purpose, compute, memory, storage, accelerated computing (ML)
-pricing based on type (more specialized = more expensive), some families have different
capabilities (GPUs for example)
-prices CHANGE!

compute optimized: 96 vCPUs, $4.6 per hour! 64 vCPU specialized (has GPU): $24.48
per hour
storage optimized: 64 vCPUs $4.9 per hour

Root Device Type: Instance Store (physically attached to the host server) vs EBS
(persistent storage/separate)
-data on ephemeral goes away completely vs EBS which can be snapshotted/migrated to
new EC2
AMI: template for EC2 instance (config/OS/data), can be shared, custom AMIs,
commercial available in marketplace

EC2 purchase types:


On-demand: purchase by the SECOND
reserved: discount purchases 1-3 years
spot: unused ec2 capacity, even bigger discount

all upfront: 1/3 years all paid on the frontend


partial upfront: 1/3 years part, reduced monthly cost
no upfront: no upfront payment, reduced monthly cost

SPOT instances: up to 90% discount over on-demand pricing


-market price per AZ called the spot price, BID higher than SPOT price, it will
launch
-if SPOT price grows to exceed bid, instances will TERMINATE. 2 minute timer
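the legacy bid model described above reduces to one comparison (prices here are hypothetical):

```python
def spot_instance_running(bid, spot_price):
    """Legacy spot bid model from the course: the instance launches and
    keeps running while the bid meets or exceeds the current spot price;
    if the spot price rises above the bid, AWS terminates the instance
    (after a 2-minute warning)."""
    return bid >= spot_price

print(spot_instance_running(bid=0.10, spot_price=0.07))  # running
print(spot_instance_running(bid=0.10, spot_price=0.12))  # terminated
```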

purchase options: consistent/always needed = reserved, batch processing = spot instances, inconsistent need without impacting jobs = on-demand

launching an instance:
security group: by default, allows ALL ips to try to connect. can specify 'my IP'
so only i can

ELASTIC BEANSTALK - automates deploying/scaling EC2 (PaaS): provisioning/load balancing
-java/.net/php/node.js/python/ruby/go/docker
-integrated monitoring, deployments (easier than ec2), scaling, ec2 customization
(blends into IaaS)
-use cases: minimal knowledge of other services (allows autoscaling), reduce
maintenance, few customizations reqd
demo: download a sample code type, upload it when creating
-once app is launched: health is OK
--can request logs, health (requests 2xx/3xx etc), monitoring (how its performing),
alarms based on unhealthy
-to terminate, go to Actions and are required to type the name

LAMBDA: run code without infrastructure, charged for execution only, memory from
128-3008MB
-integrates with MANY services: event-driven workflows (such as uploading a file)
-reduced maintenance, fault tolerance, scales on demand, based on USAGE
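"based on USAGE" means duration times allocated memory, measured in GB-seconds (plus a per-request charge, left out here). a sketch of the arithmetic with made-up workload numbers:

```python
def lambda_gb_seconds(invocations, duration_ms, memory_mb):
    """Compute billable GB-seconds: each invocation is charged for its
    run time multiplied by the memory allocated to the function."""
    return invocations * (duration_ms / 1000) * (memory_mb / 1024)

# Hypothetical workload: 1M invocations, 200 ms each, 512 MB allocated.
print(lambda_gb_seconds(1_000_000, 200, 512))
```

doubling the memory allocation doubles the GB-seconds even if run time stays flat, which is why right-sizing memory matters.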

QUIZ:
scenario1: reserved instance, EC2, 3 years
-ALL UPFRONT, since we know it is at LEAST 3 years, this would provide the MOST
cost effective option
scenario2: elastic beanstalk - uploading code, supports PHP, scales out of the box
scenario3: spot instances - NO COMMITMENTS, but if you can start/stop without a
problem, its always spot

CONTENT AND NETWORK DELIVERY SERVICES (CDN): route 53, VPC, direct connect, API
gateway, cloudfront, ELB
-VPC: virtual private cloud, logically isolated, enables NETWORK (ipv4/ipv6),
ranges/subnets/gateways
--supports private/public subnets (external/internal), can use NAT for private, can
connect other VPCs (peering)
--AWS Direct Connect: dedicated network connection to AWS

Route 53: DNS, GLOBAL service (not regional), HA, global resource routing (send
people to an app based on location)
-DNS: translates human-readable domain names into IP addresses
-requires propagation (couple of hours), can configure route53 to failover from us-
east-1 to eu-west-1

ELB: elasticity (grows/contracts based on usage), distributes traffic to multiple targets
-integrates with EC2/ECS/Lambda, one or more AZs in a single region
-three types: ALB (application), NLB (network), ELB/classic
-scaling on EC2: vertical (scale up/larger instance types), horizontal (scale out)

CloudFront: uses EDGE locations, CDN, enables users to get content CLOSER to them,
static/dynamic
-includes security features, AWS Shield for DDoS, AWS WAF (web app firewall)
-edge locations are global

API Gateway: managed API mgmt service, monitoring/metrics on API calls, supports
VPC if needed

quiz:
scenario 1: Direct connect, persistent connection to AWS
scenario 2: CloudFront, its a CDN that uses EDGE locations
scenario 3: horizontal, Elastic load balancing

FILE STORAGE SERVICES: S3, S3 Glacier, Elastic Block Store (EBS), Elastic File
System (EFS), AWS Snowball, Snowmobile

S3 overview: simple storage service, files as objects in BUCKETS


-buckets are the unit of organization in S3, any file in buckets has those settings
-data is stored across multiple availability zones
-enables URL access for files (send link with permissions)
-configurable rules for data lifecycle (expires, moves to another storage class)
-static website host

non-archival classes:
s3 standard: default, used for frequently accessed data
s3 intelligent-tiering: can move data based on usage
s3 standard-ia: infrequent access at a discount
s3 one zone-ia: stored in a single AZ, much lower cost, less resilience

intelligent tiering: automatically move based on access - frequent vs infrequent


-same performance as s3 standard, can give cost savings if data is moved

s3 lifecycle policies: transition objects to another storage class based on time
-can move objects based on TIME, but not usage, that requires intelligent tiering
-can delete based on AGE, can factor in VERSIONS of specific objects
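a lifecycle configuration in roughly the JSON shape the S3 API accepts - the rule ID, prefix, and day counts below are hypothetical examples, not values from the course:

```python
# Sketch of an S3 lifecycle configuration: transition by AGE through
# cheaper storage classes, then expire. Names and day counts are made up.
lifecycle = {
    "Rules": [{
        "ID": "archive-old-logs",
        "Filter": {"Prefix": "logs/"},      # only applies to this prefix
        "Status": "Enabled",
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
            {"Days": 90, "StorageClass": "GLACIER"},      # archival
        ],
        "Expiration": {"Days": 365},        # delete based on age
    }]
}
print(lifecycle["Rules"][0]["Transitions"])
```

note the triggers are all day counts - time-based, never usage-based, matching the intelligent-tiering distinction above.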

s3 transfer acceleration: upload data much faster, uses EDGE locations (CloudFront)

DEMO: hosting a website on s3, s3 bucket requires unique name


-by default, an s3 bucket is BLOCKED from public access. must de-select and choose
acknowledge
-can enable encryption at rest - s3 master key or KMS master key
-after uploading all objects, each is given a URL. access to them is denied until
permissions are granted
-can enable static website hosting in an s3 bucket (from properties of the bucket)

created bucket, uploaded files, set access to objects, enabled static hosting

S3 Glacier: archiving data within S3 as a separate storage class


-example: legal/compliance reasons for storing payment information
-retrieval times: quickly or less quickly, you pay more or less
-can run lifecycle rules
-two storage classes: glacier or glacier deep archive

glacier: 90 days minimum, retrieved in minutes or hours, retrieval fee per GB, 5
times less expensive than s3 std
glacier deep archive: 180 day minimum, hour retrieval, fee per GB, 23x less
expensive
data can be uploaded/retrieved programmatically (this is done via CLI/SDK ONLY!)

EBS: elastic block store, persistent block storage, attaches to a single ec2 instance
-redundancy within an AZ, data snapshots, can enable encryption (SSD, IOPS SSD,
throughput, cold)
--SSD: cost effective, general
--IOPS SSD: high performance/low latency
--Throughput optimized: frequently accessed data
--Cold: less accessed
EFS: elastic file system, linux-based workloads, fully managed NFS, petabyte scale,
multiple AZs
-standard vs infrequent access

EFS = network filesystem for multiple EC2 instances vs EBS which is attaching
drives to single EC2
Amazon FSx = Windows File server, SMB support, AD integration, NTFS, SSD

Snowball: physically migrate petabytes of data to AWS, physical device delivered by AWS, returned by carrier, data is loaded into S3
Snowmobile: exabyte scale migration to the cloud, ruggedized shipping container,
sets up connection to network, data is loaded into s3

quiz:
scenario 1: s3 lifecycle policies, move to cold storage
scenario 2: snowball can migrate petabytes of data, smaller device shipped
scenario 3: EFS is a shared file system for linux EC2 hosts

WRONG 1: partially correct, can use policies, but rather than cold storage, move to
S3 infrequent access (discount)
3 notes: must be in petabytes or LESS for EFS to function

database services: RDS/aurora/dynamodb/redshift/elasticache/database migration, iaas to paas to saas
-could put a database directly on ec2 (iaas), or instead, RDS (paas)
-fully managed RDS (provisioning/patching/backup/recovery), multiple AZs, read-
replicas, launches into VPC
RDS storage: general purpose SSD vs provisioned IOPS SSD
-mysql, postgresql, mariadb, oracle, sql server, aurora
AURORA: mysql/postgresql compatible, built specifically for RDS
DMS: enables move into AWS from existing database, only pay for COMPUTE in
migration process
DynamoDB: NoSQL database service, don't even manage database, key value/document
database, automated scaling, DAX (in-memory cache)
-handles 10 trillion requests per day, 20 million requests per second
-low maintenance, serverless applications, low latency, no BLOB storage

Elasticache: in-memory data store, low latency, scaling/read-replicas, db layer caching/session storage
-memcached + redis
Redshift: scalable data warehouse service (analytics/user behavior data), petabyte
scaling, high performance
-data can be fully encrypted, isolation in VPC
-can query EXABYTES of data in redshift spectrum

quiz:
scenario 1: redshift, since they possibly need encryption/data storage
scenario 2: EC2 and deploy the service
scenario 3: dynamodb with elasticache

APP INTEGRATION SERVICES: SNS (managed pubsub messaging), SQS (managed queue
service), Step Functions
-SNS: pub/sub, decoupled applications, organizes by topic, multiple AWS services,
user notifications
-connect user signup to SNS topic, required to be listening
-SQS: decoupled/fault tolerant apps, 256KB payload/14 days
standard queue: does not guarantee order
FIFO: processed in order

order goes in through SNS topic, fan out to queues (analytics/fulfillment)


-if service goes down, it stays in queue, when it is back up, can get data from the
queue
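the fan-out plus buffering behavior above can be modeled in a few lines - a toy with in-memory deques, not the SNS/SQS APIs, with made-up order data:

```python
# Toy fan-out: one "topic" pushes each order to every subscribed queue;
# a consumer that was down simply drains its queue later (FIFO order).
from collections import deque

queues = {"analytics": deque(), "fulfillment": deque()}

def publish(topic_queues, message):
    """Fan out: every subscriber queue gets its own copy of the message."""
    for q in topic_queues.values():
        q.append(message)

publish(queues, {"order_id": 1})
publish(queues, {"order_id": 2})

# Fulfillment service comes back up and drains its backlog in order:
processed = [queues["fulfillment"].popleft()["order_id"]
             for _ in range(len(queues["fulfillment"]))]
print(processed)
```

the analytics queue still holds both messages - each subscriber consumes independently, which is the decoupling/fault-tolerance point.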

Step functions: orchestration of workflows, managed service, serverless architecture
-can support complex workflows + error handling, charged per transition + AWS
services
-Amazon States Language

workflow: user signup, insert into crm + send email, schedule call with
salesperson, wait 1 week, send followup
-integrates with compute, database, messaging (sqs/sns), data processing, machine
learning

quiz:
scenario 1: SQS, user signups go into a queue rather than being dropped if service
is down (fault tolerance)
scenario 2: Step Functions (workflow orchestration)
scenario 3: SNS topics, listening for event types

MANAGEMENT AND GOVERNANCE SERVICES: cloudtrail, cloudformation, cloudwatch, config, systems manager, control tower
-these are used to manage/track services that are already launched

CloudTrail: log/monitor activity across AWS infrastructure, logs all actions


-audit trail stored in an S3 bucket, events logged in the region where they occur,
records who initiated each action
-should be turned on, but costs (s3 storage), done for compliance reasons

CloudWatch: metrics/logs/alarms for infrastructure


-set alarms on services, visualization capabilities, custom dashboards based on
metrics
-network traffic, EBS activity, memory
Config: evaluates against a set of rules, monitors/records desired state config
-configuration history: can use conformance packs, such as PCI-DSS to ensure
compliance
-can look at Regions or accounts, also tells you how to fix these items
Systems Manager: operational data/automation, view data from multiple services or
automate
-can automate common tasks, securely access servers using AWS credentials

CloudFormation: templates to create a stack of services, without using the CLI, no charge!
-can be written in YAML or JSON, infrastructure as code, manages dependencies
-provides DRIFT detection: flags when changes are made outside the template
-sort of like docker-compose or an ansible playbook
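a minimal template in the JSON flavor (YAML works too) - the resource name is hypothetical; the point is that one declarative document describes the stack:

```python
# Sketch of a minimal CloudFormation template as JSON: one S3 bucket,
# declared as infrastructure-as-code. The logical name is made up.
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "NotesBucket": {"Type": "AWS::S3::Bucket"},
    },
}
print(json.dumps(template, indent=2))
```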

Organizations + Control Tower:


Organizations: multiple accounts under master account, consolidated billing,
centralize logging/security
Control Tower: creates multi-account environment with best practices -
efficiency/governance
-centralize users, create NEW aws accounts w/ templates, GUARDRAILS, single view
dashboard

QUIZ:
scenario 1: AWS Config, monitors desired state config (CAN SET RULES)
scenario 2: CloudFormation is used to automate creation of a lot of services
scenario 3: CloudTrail - who initiates actions, deletions

------------------------------------------------------
INTRO TO SECURITY AND ARCHITECTURE ON AWS: shared responsibility, well architected,
fault tolerance/HA

Acceptable Use Policy (AUP): prohibits mass emails and viruses/malware; pentests ARE allowed (on permitted services)
Least Privilege Access: grant minimum permission to complete tasks
-do NOT use Root account on day to day, use IAM account
Shared Responsibility Model: security/compliance is SHARED between AWS + Customer
-AWS: security of the cloud, customer: security IN the cloud

AWS: responsible for access/training amazon employees, global data centers/network, hardware, config mgmt, patching
Customer: access to cloud resources/training, data security (transit and rest),
OS/net/fw in EC2, all deployed code

AWS Well-architected Framework: best practices across five pillars that drive
business value
-operational excellence: running/monitoring for business value
-security: protecting information and business assets
-reliability: recovering from disruptions
-performance efficiency: using resources to achieve business value
-cost optimization: minimal costs for desired value (s3 storage classes, instances,
etc)

HA and FT (fault tolerance): everything fails all the time


Fault Tolerance: supporting failure of components (SQS)
HA: entire solution despite issues

most AWS services provide HA ootb, must be ARCHITECTED: multiple availability zones, fault tolerance in custom apps (SQS, route 53 - detecting unhealthy endpoints)

COMPLIANCE: PCI-DSS (credit card processing) must be in compliance


HIPAA: compliance for healthcare data, certain people have access
SOC1/SOC2/SOC3
FedRAMP: US govt handling
ISO 27018 - handling PII

AWS Config: provides conformance packs


AWS Artifact: access to reports
GuardDuty: intelligent detection

DEMO of AWS Config > Conformance Packs, helps monitor compliance


AWS Artifact: various artifacts related to specific compliance standards, providing
info

QUIZ:
scenario 1: compliance REPORTS are found in AWS Artifact
scenario 2: AWS is not responsible, we are for CODE, data, encryption, etc
WRONG: review the SHARED RESPONSIBILITY MODEL to delineate what AWS is responsible
for
scenario 3: well-architected framework for best practices for developing in AWS

AWS IDENTITIES AND USER MGMT: least privilege access, IAM, IAM types, enabling MFA,
Cognito
-IAM: service that controls access to AWS services, free, authentication (login),
authorization (access)
-federation: external identity provider

user: single individual w/ access to AWS resources


group: manage permissions for group of IAM users
role: user or service assumes permissions for a task
-give role permission to write to an s3 bucket

policies in AWS IAM: defines permission for an identity (user/group/role)


-defines AWS services and what actions can be taken
-can be customer managed OR managed by AWS, policies created by AWS
-read access, full access, can use those
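a policy document in the standard JSON shape - the bucket name is a hypothetical example; this one grants read-only object access to a single bucket, i.e. least privilege:

```python
# Sketch of an IAM policy document: allow only s3:GetObject on one
# bucket's objects. The bucket name is a made-up placeholder.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}
print(json.dumps(policy, indent=2))
```

attach a document like this to a user, group, or role; AWS-managed policies have the same structure, just authored by AWS.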

multi-factor authentication: extra layer of security for users


least-privilege access: users can access only what they need

DEMO: create a user, can attach a policy, AWS managed, select a policy
-rather than adding the role manually, can add them to groups

adding MFA: done by accessing security permissions for Root user, but...
-for IAM users, click the user then security credentials

Cognito: handles authentication/authorization for web/mobile applications through AWS
-fully managed directory service for custom applications
-something like IAM for your own custom apps, UI components, security capabilities
-enables controlled access to AWS resources without having to sign up for an IAM
account
-can use Google/Amazon/Facebook/Microsoft AD/SAML 2.0 to sign in and grant access

QUIZ:
scenario 1: create a Group, attach a Role or select a POLICY, then add users to the
Group
--make sure that all members require the same access (least privilege!)
scenario 2: no, he needs to follow least privilege, only the service requires
access to S3
-takeaway here is that ROLES can be assigned to users or services, user may not
require access, just the service!
scenario 3: multi-factor authentication requires more than just a password

INTRO TO SECURITY AND ARCHITECTURE - did core concepts/user identities in homepc

DATA ARCHITECTURE: integrating, processing, data analysis, ML + AI

AWS Storage Gateway: hybrid-cloud storage service, merging datacenter + aws


-gives cloud storage in your local network, uses S3 and EBS
-3 types:
File: store files in S3, low-latency local access cache
Tape: virtual tape backups
Volume: iSCSI volumes needed by certain applications

AWS DataSync: data transfer service, automated xfers via network


-agent deployed as vm on network, S3/EFS/FSx, high transfer speeds due to custom
protocol, charged per GB of data

data processing:
AWS Glue (Extract Transform Load): data is stored somewhere, extracted, transformed
(normalizing phone numbers, group), placed in new location for analysis (LOAD)
-fully managed ETL, supports RDS/DynamoDB/Redshift/S3, supports serverless model
(no servers needed, just use it)
Amazon EMR (elastic map reduce): big-data processing using popular tools for S3 and
EC2
-clustered environment, no configuration: Apache Spark/Hive/HBase/Flink/Hudi,
Presto... can use these tools without configuring!
AWS Data Pipeline: workflow/orchestration service for AWS services - managing
processing for point A to point B, ensuring stops at specific points
-managed ETL, supports S3/EMR (elastic map reduce for big data), Redshift,
DynamoDB, RDS

analyzing data: services in place to analyze data, querying data in S3, BI tools
with dashboards, search service for custom apps
-Amazon Athena: serverless, query large scale data in S3, can write queries using
standard SQL (no database required), charged based on data scanned for query
-Amazon Quicksight: fully managed business intelligence, dynamic data dashboards
based on data in AWS, per user or session pricing model
--standard vs enterprise, different capabilities/costpoints
-Amazon Cloudsearch: fully managed search, custom app but make data available to
users, scaling of infrastructure, charged per hour and instance type
--integrate search into custom apps (search through a ton of PDF docs for example)

AI + ML: data is processed then analyzed - rekognition, translate, transcribe (speech to text with machine learning)
-Amazon Rekognition: image/video deep learning, can identify OBJECTS and ACTIONS,
facial analysis images comparison, custom labels for business objects (picture of
shopping cart)
-Amazon Translate: translating languages (54), identifying source languages (can be
done in batch OR realtime)
-Amazon Transcribe: speech recognition service converted into text in custom
applications, specific sub-service for medical use, supports batch or real time, 31
languages
QUIZ:
scenario 1: large scale data processing, ETL would be GLUE, supports serverless
(versus EMR which uses popular tools in EC2)
scenario 2: rekognition does facial imaging (AI)
scenario 3: visualizing/dynamic dashboards would be done in Quicksight

SUMMARY: integrating data from own datacenter, processing data, data analysis, AI +
ML

DISASTER RECOVERY ON AWS: prepare/recover for events that could impact the business
(power, network, physical dmg, flooding, fire, etc)
-what if there was a complete REGION outage?! four approaches recommended by AWS
-Backup/Restore > Pilot Light > Warm Standby > Multi-Site (from cheapest to most
expensive AND from slowest recovery time to fastest)

backup and restore: back up everything in S3, either standard or archival, EBS data
can be stored as snapshots in S3
-in DR, process is started to launch new environment, longest recovery time,
cheapest
pilot light: key infrastructure running in cloud, reduce recovery time, increases
cost to continually run in cloud
-AMIs are prepared for systems, core pieces are running/kept up to date
warm standby: scaled down version of full environment, infrastructure continually
running
multi-site: full environment running in the cloud at all times (multiple regions,
full instance types, near seamless, most expensive)

RTO (recovery time objective): TIME. time it takes to get systems running to ideal
business state
RPO (recovery point objective): amount of DATA loss, expressed in time (an RPO of
1hr means up to 1hr of data may be lost)

RTO/RPO = least with multi-site, most with backup/restore


pilot light: key databases up in cloud, reduces recovery POINT objective, which is
DATA

takeaway: how much TIME it takes versus how much DATA (expressed in time)
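that takeaway can be checked mechanically - a toy evaluation of a DR approach against RTO/RPO targets, with made-up hours:

```python
def meets_objectives(rto_hours, rpo_hours, recovery_hours, backup_interval_hours):
    """A DR approach is acceptable when recovery TIME fits the RTO and the
    worst-case DATA loss fits the RPO. With backups every N hours, a
    failure just before the next backup can lose up to N hours of data."""
    return recovery_hours <= rto_hours and backup_interval_hours <= rpo_hours

# Hypothetical backup-and-restore: nightly backups (24h interval) and a
# 12h rebuild, measured against a 4h RTO / 1h RPO target.
print(meets_objectives(rto_hours=4, rpo_hours=1,
                       recovery_hours=12, backup_interval_hours=24))
```

it fails both objectives, which is why tighter RTO/RPO targets push you up the ladder toward warm standby or multi-site.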

QUIZ:
scenario 1: multi-site is a seamless transition
scenario 2: backup/restore or pilot light, most likely backup since that minimizes
cost
scenario 3: few key servers up, warm standby = smaller instance types (almost like
a DEV)
-WRONG: keyword here is that a FEW KEY servers running in the cloud, even if it's
scaled down

ARCHITECTING APPS in EC2: scaling, instance access, services to protect from hacking/attacks, developer tools, predefined solutions
-scaling: vertical (bigger), horizontal (more)

auto-scaling group: EC2 instances with rules/scaling, uses LAUNCH template (OS,
instance type, security group)
-define min, max, and DESIRED number of instances (at least, at most, desired
state)
-health checks are performed (if web server, check web url, etc)
-scaling group: exists in 1 or more availability zone in single region, works with
or without SPOT instances
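the min/max/desired rule above is just a clamp - a toy sketch, not the Auto Scaling API:

```python
def scale(current, delta, minimum, maximum):
    """Apply a scaling decision but keep the instance count inside the
    auto-scaling group's min/max bounds."""
    return max(minimum, min(maximum, current + delta))

# Hypothetical group with min=1, max=4:
print(scale(current=2, delta=+3, minimum=1, maximum=4))  # capped at max
print(scale(current=2, delta=-5, minimum=1, maximum=4))  # floored at min
```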
DEMO: 1 region, 1 VPC, 2 AZs, scaling group in both AZs with desired of 2 (1 in
each)
-application load balancer: distributes traffic to best instance
-1 instance goes down, ALB is informed to stop traffic routing, then autoscaling
group brings up NEW instance
-SECRETS manager: credentials, API keys, tokens - natively with RDS, DocumentDB,
Redshift
--auto-rotates credentials, granular access to secrets (which servers have access)

elastic load balancer: distribute traffic amongst a number of instances

ACCESS TO EC2 INSTANCES: security groups (firewall-like), ACLs (in/out traffic), VPN (encrypted tunnel)

security groups: serve as firewall, control in/out traffic, INSTANCE level, multiple security groups per instance
-VPCs: default security group included, otherwise must explicitly assign
-by default, all outbound is ALLOWED, instance can send information out

network acl: works at subnet level, every instance in subnet


-VPC: default ACL allows all in/out, custom ACL will deny ALL traffic by default

aws vpn: encrypted tunnel, vpc not available to public internet, data center or
client machines
-site to site vpn: customer gateway to vpn gateway in aws, encrypted traffic
between sites
-direct connect: does not go over public internet

ATTACKS/THREATS: AWS Shield, Amazon Macie (data protection machine learning), Amazon Inspector (assessment service)
-Shield: protects against DDoS, on-going threat detection/mitigation, 2 services
levels (standard vs advanced)
-Macie: analyze data stored in S3, detects personal/intellectual info, dashboards +
alerts for unusual access
-Inspector: scanning of EC2 instances for vulns, charged per instance per
assessment, two types
--network reachability (from the internet?) vs host assessment (patching)

pre-defined solutions: AWS Service Catalog (IT Services) and AWS Marketplace (third
party)
service catalog: service catalog for the cloud, stuff that is already configured
-could be single server or multi-tier apps. can be leveraged to meet COMPLIANCE
-supports lifecycle (version1 > 1.1 > etc)

marketplace: third party vendors, curated catalog (AMIs, CloudFormation stacks, SaaS solutions)
-charges appear on AWS bill

DEV TOOLS: CodeCommit (alternative to GitHub), CodeBuild (CI service), CodeDeploy (deploy to many services), CodePipeline (works with commit/build/deploy for building/testing), CodeStar (bootstrap entire process)

CodeCommit: source control service using Git, uses IAM policies, alt to
Github/Bitbucket
CodeBuild: no need to manage infrastructure, charged per minute
CodeDeploy: managed deployment service for EC2/Fargate/Lambda and on-premises,
provides dashboard in Console
CodePipeline: fully managed continuous delivery for building/testing/deploying,
integrates with Github as well
CodeStar: workflow, automates continuous delivery toolchain, custom dashboards,
only charged for OTHER svcs (free)

quiz:
scenario 1: in this case its AWS Service Catalog because compliance is the keyword
scenario 2: autoscaling with application load balancing
scenario 3: macie, because it protects personal info

CERTIFICATION EXAM: 90 minute proctored, multiple choice + multiple answer


-areas of focus: Cloud Concepts, Technology, Security + Compliance, Billing +
Pricing
-how to study:
cloud concepts: cloud vs traditional, aws global infrastructure, capex/opex
security: shared responsibility model, best practices, options for traffic within a
vpc (security groups/ACLs), IAM, principle of least privilege
billing + pricing: AWS cost tools, cost-effective ways (s3 storage classes, reserve
instances, spot instances), how to manage/review, support plans
technology: review EACH AWS service, implement basic solutions, fault tolerance
