Parmod.md 2025-04-12

Submitted To:

Submitted By:
Parmod
233112720003
M. Sc. (CS) II
Index

| S.No. | Experiment Name | Page No. | Teacher's Signature |
|-------|-----------------|----------|---------------------|
| 1. | Amazon Simple Storage Service (S3) and Amazon Glacier Storage | | |
| 2. | Amazon Elastic Compute Cloud (EC2) and Amazon Elastic Block Store (EBS) | | |
| 3. | Amazon Virtual Private Cloud (VPC) | | |
| 4. | Elastic Load Balancing (ELB), Amazon CloudWatch, and Auto Scaling | | |
| 5. | AWS Identity and Access Management (IAM) | | |
| 6. | Amazon Relational Database Service (RDS) | | |
| 7. | AWS Simple Queue Service (SQS), Simple Workflow Service (SWF), and Simple Notification Service (SNS) | | |
| 8. | Domain Name System (DNS) and Amazon Route 53 | | |
| 9. | Amazon ElastiCache | | |
Experiment 1: Amazon Simple Storage Service (S3) and Amazon Glacier Storage
Aim: To understand and utilize Amazon S3 for scalable object storage and Amazon Glacier for long-term data
archival.
Theory:
Amazon S3 (Simple Storage Service): Provides highly durable, available, and scalable object storage.
Data is stored as objects within containers called buckets. Key concepts include:
Buckets: Globally unique containers for objects.
Objects: Files and their associated metadata.
Storage Classes: Different tiers optimized for various access patterns and costs (e.g., S3
Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 One Zone-IA, S3 Glacier Instant Retrieval, S3
Glacier Flexible Retrieval, S3 Glacier Deep Archive).
Versioning: Keeps multiple versions of an object, protecting against accidental deletion or
overwrites.
Lifecycle Policies: Automate the transition of objects between storage classes or their deletion
based on age.
Permissions: Control access via Bucket Policies, Access Control Lists (ACLs), and IAM policies.
Amazon S3 Glacier: A secure, durable, and extremely low-cost storage service for data archiving and
long-term backup. Designed for data that is infrequently accessed. Retrieval times vary depending on
the chosen option (Expedited, Standard, Bulk for Flexible Retrieval; Standard, Bulk for Deep Archive). S3
Lifecycle policies are commonly used to move data from S3 to Glacier storage classes.
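The transition behaviour a lifecycle policy automates can be sketched as a small local model. This is a toy illustration only, not the AWS API; the day thresholds below are hypothetical, not AWS defaults:

```python
# Toy model of an S3 lifecycle policy: pick a storage class by object age.
# The thresholds (30/90/365 days) are illustrative values, not AWS defaults.
def storage_class_for_age(age_days: int) -> str:
    rules = [
        (30, "STANDARD"),                     # younger than 30 days: stay in Standard
        (90, "STANDARD_IA"),                  # 30-89 days: infrequent access tier
        (365, "GLACIER_FLEXIBLE_RETRIEVAL"),  # 90-364 days: archive tier
    ]
    for threshold, storage_class in rules:
        if age_days < threshold:
            return storage_class
    return "DEEP_ARCHIVE"                     # a year or older: deepest archive

print(storage_class_for_age(10))    # STANDARD
print(storage_class_for_age(45))    # STANDARD_IA
print(storage_class_for_age(400))   # DEEP_ARCHIVE
```

In real S3, such rules are expressed declaratively in a lifecycle configuration attached to the bucket; S3 evaluates object age and performs the transitions for you.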
Procedure:
1. Navigate to S3: In the AWS Console, find and open the S3 service.
2. Create a Bucket: Click "Create bucket", enter a globally unique bucket name (e.g., mcs46-mybucket), select a region, keep the default settings, and click "Create bucket".
3. Upload an Object: Open the bucket, click "Upload", add a file from your computer, and click "Upload".
4. Create a Lifecycle Policy: In the bucket's "Management" tab, click "Create lifecycle rule". Name the rule, apply it to all objects in the bucket, choose "Transition current versions of objects between storage classes", select a Glacier storage class (e.g., Glacier Flexible Retrieval), set the number of days after object creation (e.g., 30), and create the rule.
Conclusion: Successfully created an S3 bucket, uploaded an object, and configured a lifecycle policy to
transition objects to a Glacier storage class. Understood the basic concepts of S3 object storage and Glacier
archival.
Experiment 2: Amazon Elastic Compute Cloud (EC2) and Amazon Elastic Block Store (EBS)
Aim: To launch, configure, and manage virtual servers (EC2 instances) and persistent block storage (EBS
volumes) in the AWS cloud.
Theory:
Amazon EC2 (Elastic Compute Cloud): Provides scalable computing capacity in the AWS cloud. Allows
launching virtual machines called instances. Key concepts include:
Instances: Virtual servers.
AMIs (Amazon Machine Images): Templates containing the OS and software configuration
used to launch instances.
Instance Types: Various combinations of CPU, memory, storage, and networking capacity.
Key Pairs: Used for securely connecting to Linux instances via SSH.
Security Groups: Virtual firewalls controlling inbound and outbound traffic to instances.
Instance Store: Temporary block-level storage attached to the host computer. Data is lost if the
instance is stopped or terminated.
Amazon EBS (Elastic Block Store): Provides persistent block-level storage volumes for use with EC2
instances. Key concepts include:
Volumes: Network-attached block storage devices, independent of instance lifecycle (data
persists if the instance is stopped/terminated).
Snapshots: Point-in-time backups of EBS volumes stored durably in S3. Used for backup and
creating new volumes.
Volume Types: Different performance characteristics and costs (e.g., gp3, gp2, io1, io2).
Procedure:
1. Navigate to EC2: In the AWS Console, find and open the EC2 service.
2. Launch an EC2 Instance:
Click "Launch instances".
Name and tags: Give your instance a name (e.g., mcs46-webserver).
Application and OS Images (AMI): Select an AMI (e.g., "Amazon Linux 2" or "Ubuntu Server" -
choose a Free Tier eligible one).
Instance type: Choose a Free Tier eligible type (e.g., t2.micro or t3.micro).
Key pair (login): Create a new key pair, give it a name, download the .pem file, and keep it
secure. Or select an existing key pair if you have one.
Network settings:
Choose a VPC (default is fine for now).
Security Group: Create a new security group. Name it (e.g., webserver-sg). Add rules:
Allow SSH (port 22) from your IP address (select "My IP").
Allow HTTP (port 80) from Anywhere (0.0.0.0/0).
Configure storage: Keep the default root EBS volume settings (e.g., 8 GiB gp2/gp3).
Advanced details: Explore options but defaults are usually fine for a basic launch.
Review and click "Launch instance".
3. Connect to the Instance (Linux):
Select the instance in the EC2 dashboard. Wait for "Instance state" to become "Running" and
"Status checks" to pass.
Click "Connect" -> "SSH client" and follow the instructions shown, e.g.:
ssh -i mykey.pem ec2-user@<Public-IP-Address>
4. Install a Web Server (inside SSH):
sudo yum update -y
sudo yum install -y httpd
sudo systemctl start httpd
sudo systemctl enable httpd
echo "<h1>Hello from my EC2 instance</h1>" | sudo tee /var/www/html/index.html
5. Access the Web Server: Open a web browser and navigate to http://<Public-IP-Address>. You
should see the "Hello" message.
6. Create and Attach an EBS Volume:
In the EC2 console, go to "Elastic Block Store" -> "Volumes".
Click "Create volume".
Choose volume type (e.g., gp3), size (e.g., 1 GiB), and Availability Zone (must be the same AZ as
your EC2 instance).
Click "Create volume".
Select the new volume (wait for state "Available"), click "Actions" -> "Attach volume".
Select your running EC2 instance and click "Attach volume".
7. Mount the EBS Volume (inside SSH):
List available block devices: lsblk (Note the new device name, e.g., xvdf).
Check if it has a file system: sudo file -s /dev/xvdf (If "data", it needs formatting).
Format the volume (only if new): sudo mkfs -t ext4 /dev/xvdf
Create a mount point: sudo mkdir /data
Mount the volume: sudo mount /dev/xvdf /data
Verify mount: df -h
(Optional) Add to /etc/fstab for auto-mount on reboot.
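The optional /etc/fstab entry mentioned above might look like the following (the device name xvdf matches the example; on Nitro-based instances the device may instead appear as /dev/nvme1n1, and mounting by the UUID shown by sudo blkid is more robust than a device name):

```
/dev/xvdf  /data  ext4  defaults,nofail  0  2
```

The nofail option lets the instance boot even if the volume is detached.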
8. Create an EBS Snapshot:
Go back to "Volumes" in the EC2 console.
Select the root volume of your instance (or the new data volume).
Click "Actions" -> "Create snapshot". Add a description and create.
Find the snapshot under "Snapshots".
9. Terminate the Instance:
Select the instance, click "Instance state" -> "Terminate instance", and confirm. Afterwards, delete the extra EBS volume ("Volumes" -> select the volume -> "Actions" -> "Delete volume") and the snapshot if they are no longer needed.
Conclusion: Successfully launched an EC2 instance, connected to it, installed a web server, created and
attached an EBS volume for persistent storage, and created an EBS snapshot for backup. Understood the roles
of EC2 for compute and EBS for persistent storage.
Experiment 3: Amazon Virtual Private Cloud (VPC)
Aim: To design and configure a custom network environment within AWS using VPC, including subnets, route
tables, and an Internet Gateway.
Theory:
Amazon VPC (Virtual Private Cloud): Allows provisioning a logically isolated section of the AWS
Cloud where you can launch AWS resources in a virtual network that you define. Key concepts include:
VPC: A virtual network dedicated to your AWS account, identified by a CIDR block (e.g.,
10.0.0.0/16).
Subnets: Ranges of IP addresses within your VPC, tied to a specific Availability Zone (AZ). Can be
public (direct route to internet) or private (no direct route).
Route Tables: Control where network traffic from subnets is directed. Each subnet is associated
with one route table.
Internet Gateway (IGW): A horizontally scaled, redundant, and highly available VPC component
that allows communication between instances in your VPC and the internet.
NAT Gateway/Instance (Network Address Translation): Allows instances in private subnets to
initiate outbound traffic to the internet (e.g., for updates) but prevents the internet from initiating
connections to those instances.
Security Groups: Act as instance-level firewalls (stateful).
Network ACLs (NACLs): Act as subnet-level firewalls (stateless).
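The CIDR relationships used in this experiment (every subnet must fall inside the VPC's range, and sibling subnets must not overlap) can be checked locally with Python's standard ipaddress module. The private subnet's CIDR, 10.10.2.0/24, is an assumed value following the pattern of the public subnet:

```python
import ipaddress

vpc = ipaddress.ip_network("10.10.0.0/16")
public_subnet = ipaddress.ip_network("10.10.1.0/24")
private_subnet = ipaddress.ip_network("10.10.2.0/24")  # assumed CIDR

# Both subnets must be contained in the VPC's CIDR block.
print(public_subnet.subnet_of(vpc))             # True
print(private_subnet.subnet_of(vpc))            # True

# Sibling subnets must not overlap each other.
print(public_subnet.overlaps(private_subnet))   # False

# A /24 provides 256 addresses (AWS reserves 5 per subnet for itself).
print(public_subnet.num_addresses)              # 256
```

Running such a check before clicking "Create subnet" avoids the console error you get when a subnet CIDR falls outside the VPC range.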
Procedure:
1. Navigate to VPC: In the AWS Console, find and open the VPC service.
2. Create a Custom VPC:
Go to "Your VPCs". Click "Create VPC".
Select "VPC only".
Give it a name tag (e.g., mcs46-custom-vpc).
Specify an IPv4 CIDR block (e.g., 10.10.0.0/16). Use a private address range.
Keep IPv6 CIDR block as "No IPv6".
Tenancy: Default.
Click "Create VPC".
3. Create Subnets:
Go to "Subnets". Click "Create subnet".
Select your mcs46-custom-vpc.
Subnet 1 (Public):
Name tag: mcs46-public-subnet-az1
Availability Zone: Choose one (e.g., us-east-1a).
IPv4 CIDR block: 10.10.1.0/24 (must be within the VPC's CIDR).
Click "Create subnet".
Subnet 2 (Private):
Click "Create subnet" again.
Select mcs46-custom-vpc.
Name tag: mcs46-private-subnet-az1
Availability Zone: Choose the same AZ (us-east-1a).
IPv4 CIDR block: e.g., 10.10.2.0/24 (must be within the VPC's CIDR and not overlap the public subnet).
Click "Create subnet".
8. Clean Up: Terminate the test instances. Delete the IGW (detach first), subnets, custom route table, and
finally the VPC.
Conclusion: Successfully designed and configured a custom VPC with public and private subnets, an Internet
Gateway, and appropriate route tables to control traffic flow. Demonstrated understanding of basic VPC
networking concepts.
Experiment 4: Elastic Load Balancing (ELB), Amazon CloudWatch, and Auto Scaling
Aim: To implement a highly available and scalable web application using Elastic Load Balancing (ELB), monitor
resources with CloudWatch, and automatically adjust capacity with Auto Scaling.
Theory:
Elastic Load Balancing (ELB): Automatically distributes incoming application traffic across multiple
targets, such as EC2 instances, containers, and IP addresses, in multiple Availability Zones. Increases
fault tolerance. Types include Application Load Balancer (ALB - Layer 7), Network Load Balancer (NLB -
Layer 4), Gateway Load Balancer (GWLB), and Classic Load Balancer (CLB - previous generation).
Amazon CloudWatch: A monitoring and observability service. Collects metrics, logs, and events. Allows
setting alarms based on metrics (e.g., CPU utilization).
AWS Auto Scaling: Monitors your applications and automatically adjusts capacity to maintain steady,
predictable performance at the lowest possible cost. Key components:
Launch Configuration/Template: Specifies the instance configuration (AMI, instance type, key
pair, security groups) for new instances launched by Auto Scaling. Launch Templates are newer
and recommended.
Auto Scaling Group (ASG): A collection of EC2 instances treated as a logical grouping for
scaling and management. Defines minimum, maximum, and desired capacity.
Scaling Policies: Define how the ASG should scale out (add instances) or scale in (remove
instances) based on CloudWatch alarms or schedules.
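The alarm-driven scaling behaviour can be sketched as a toy decision function. The 70%/30% thresholds and the capacity bounds are illustrative values; a real ASG evaluates CloudWatch datapoints over configured periods rather than a single reading:

```python
# Toy scale-out/scale-in decision for an Auto Scaling Group.
# Thresholds (70% / 30%) and min/max capacity are illustrative values.
def desired_capacity(current: int, cpu_percent: float,
                     minimum: int = 1, maximum: int = 4) -> int:
    if cpu_percent >= 70:              # high CPU: scale out by one instance
        return min(current + 1, maximum)
    if cpu_percent <= 30:              # low CPU: scale in by one instance
        return max(current - 1, minimum)
    return current                     # within band: no change

print(desired_capacity(2, 85))   # 3  (scale out)
print(desired_capacity(4, 90))   # 4  (capped at maximum)
print(desired_capacity(1, 10))   # 1  (floored at minimum)
```

The min/max clamping mirrors how an ASG never scales beyond its configured minimum and maximum capacity, no matter what the scaling policy requests.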
Procedure:
Click "Create target group". Wait for the initial health checks to pass (status should become
"healthy").
3. Create an Application Load Balancer (ALB):
Navigate to EC2 -> Load Balancing -> Load Balancers.
Click "Create Load Balancer".
Choose "Application Load Balancer" -> "Create".
Load balancer name: mcs46-alb.
Scheme: Internet-facing.
IP address type: IPv4.
Network mapping: Select your VPC. Select at least two Availability Zones where your instances
are located and choose a public subnet for each selected AZ.
Security groups: Create a new security group for the ALB (e.g., alb-sg). Allow inbound HTTP
(port 80) from 0.0.0.0/0. (The web-instance-sg should ideally be updated to allow HTTP only
from alb-sg).
Listeners and routing: Listener Protocol HTTP, Port 80. Default action: Forward to -> select
mcs46-tg.
Review and click "Create load balancer". Wait for the state to become "Active".
4. Test the Load Balancer:
Find the DNS name of the ALB on its details page.
Access http://<ALB-DNS-Name> in your browser. Refresh several times. You should see the
content served alternately from your different instances.
5. Create CloudWatch Alarm:
Navigate to CloudWatch -> Alarms -> All alarms.
Click "Create alarm".
Click "Select metric".
Browse metrics: EC2 -> Per-Instance Metrics.
Search for CPUUtilization and select it for one of your web server instances. Click "Select
metric".
Conditions: Static threshold, Greater/Equal, Threshold value: e.g., 70 (%).
Additional configuration: Datapoints to alarm: 1 out of 1.
Click "Next".
Configure actions: Choose "Create new topic" to create an SNS topic for notifications (e.g.,
mcs46-cpu-alarm-topic), enter your email, and create the topic. Confirm the email
subscription. Select this topic for the "In alarm" state.
Click "Next".
Alarm name: mcs46-high-cpu. Add description.
Click "Next", review, and "Create alarm".
6. Create Launch Template:
Navigate to EC2 -> Instances -> Launch Templates.
Click "Create launch template".
Launch template name: mcs46-webserver-lt. Add description.
Check "Provide guidance...".
AMI: Select the same AMI used for your initial instances.
Instance type: Select the same instance type (e.g., t2.micro).
Key pair: Select the key pair used previously.
Network settings: Select "Security groups". Choose the existing web-instance-sg.
Crucial: Expand "Advanced details". In the "User data" field, paste the script to install and start
the web server automatically on launch (modify based on your AMI):
```bash
#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
echo "<h1>Hello from AutoScaled Instance $INSTANCE_ID in $AZ</h1>" > /var/www/html/index.html
```
Clean Up: Delete the Auto Scaling Group, the Launch Template, the Load Balancer, and the Target Group, then terminate any remaining test instances. Delete the CloudWatch Alarm and SNS Topic. Delete the Security Groups if no longer needed.
Conclusion: Successfully configured an Application Load Balancer to distribute traffic, created a CloudWatch
alarm for monitoring, and set up an Auto Scaling Group with a Launch Template and scaling policy to
automatically manage application capacity based on demand.
Experiment 5: AWS Identity and Access Management (IAM)
Aim: To understand and manage secure access to AWS resources using IAM users, groups, roles, and policies.
Theory:
IAM (Identity and Access Management): Enables you to manage access to AWS services and
resources securely. You use IAM to control who is authenticated (signed in) and authorized (has
permissions) to use resources. Key concepts:
Root User: The account owner, has full access. Avoid using for everyday tasks.
IAM Users: An entity representing a person or application interacting with AWS. Has long-term
credentials (password for console, access keys for API/CLI).
IAM Groups: Collections of IAM users. Permissions applied to a group are inherited by its
members. Simplifies permission management.
IAM Roles: An IAM identity with permission policies that can be assumed by trusted entities
(users, applications, AWS services like EC2). Uses temporary credentials. Preferred for applications
or granting cross-account access.
IAM Policies: JSON documents defining permissions. Can be AWS Managed (pre-defined),
Customer Managed (you create/manage), or Inline (attached directly to a single user/group/role).
Follows the principle of least privilege (grant only necessary permissions).
MFA (Multi-Factor Authentication): Adds an extra layer of security for user sign-in.
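How a policy like AmazonS3ReadOnlyAccess gates individual actions can be illustrated with a tiny evaluator. This is a deliberate simplification of IAM's real logic (which also handles Resource ARNs, Conditions, NotAction, and cross-policy evaluation), but it captures default-deny and explicit-Deny precedence:

```python
import fnmatch

# Simplified IAM evaluation: default deny; an explicit Deny always wins.
def is_allowed(policy: dict, action: str) -> bool:
    decision = False
    for stmt in policy["Statement"]:
        actions = stmt["Action"]
        if isinstance(actions, str):
            actions = [actions]
        # IAM action patterns support '*' wildcards, e.g. "s3:Get*".
        if any(fnmatch.fnmatchcase(action, pattern) for pattern in actions):
            if stmt["Effect"] == "Deny":
                return False          # explicit deny overrides any allow
            decision = True           # explicit allow (unless denied later)
    return decision                   # no matching statement: implicit deny

# Rough shape of a read-only S3 policy (hypothetical, abbreviated actions).
read_only = {"Statement": [{"Effect": "Allow",
                            "Action": ["s3:Get*", "s3:List*"]}]}
print(is_allowed(read_only, "s3:ListBucket"))   # True
print(is_allowed(read_only, "s3:PutObject"))    # False
```

This mirrors what you observe in step 4 below: the test user can list S3 buckets but gets permission errors on anything outside the allowed action patterns.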
Procedure:
1. Navigate to IAM: In the AWS Console, find and open the IAM service.
2. Create an IAM Group:
Go to "User groups". Click "Create group".
User group name: mcs46-developers.
Attach permissions policies: Search for and select AmazonS3ReadOnlyAccess.
Click "Create group".
3. Create an IAM User:
Go to "Users". Click "Add users".
User name: mcs46-test-user.
Select AWS credential type: Choose "Password - AWS Management Console access".
Select "Custom password" and set a temporary password.
Require password reset: Check this box.
Click "Next: Permissions".
Add user to group: Select the mcs46-developers group.
Click "Next: Tags" (optional).
Click "Next: Review".
Click "Create user".
Important: Download the .csv file containing the user's credentials (including console sign-in
link) or copy the sign-in link and password.
4. Test User Login and Permissions:
Sign out of your root/admin account.
Use the specific IAM user console sign-in link (it looks like https://<account-id-or-alias>.signin.aws.amazon.com/console).
Login as mcs46-test-user with the temporary password. You will be prompted to create a new
password.
Once logged in, try accessing the S3 service. You should be able to list buckets and view objects
(Read Only access).
Try accessing another service like EC2. You should receive permission errors, demonstrating the
S3 read-only restriction.
Sign out of the test user account and sign back in with your admin/root credentials.
5. Create a Custom IAM Policy:
Navigate back to IAM -> Policies. Click "Create policy".
Go to the "JSON" tab. Paste the following policy (allows describing EC2 instances but nothing
else in EC2):
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeInstances",
            "Resource": "*"
        }
    ]
}
```
During launch (or via Actions -> Security -> Modify IAM role for a running instance), attach the
mcs46-EC2-S3-ReadOnly-Role to the instance.
SSH into the instance. Install AWS CLI (sudo yum install aws-cli -y on Amazon Linux 2).
Run AWS CLI commands to access S3 (e.g., aws s3 ls). These commands should work without
needing configured access keys, as the EC2 instance assumes the role and gets temporary
credentials.
10. Clean Up: Delete the IAM user (mcs46-test-user). Delete the IAM group (mcs46-developers).
Delete the custom IAM policy (mcs46-EC2DescribeOnlyPolicy). Delete the IAM role (mcs46-EC2-S3-
ReadOnly-Role). Terminate any EC2 instance launched for role testing.
Conclusion: Successfully created and managed IAM users, groups, custom policies, and roles. Demonstrated
understanding of permission boundaries, the principle of least privilege, and how roles grant permissions to
AWS services like EC2.
Experiment 6: Amazon Relational Database Service (RDS)
Aim: To set up, configure, and connect to a managed relational database instance using Amazon RDS.
Theory:
Amazon RDS (Relational Database Service): A managed service that makes it easy to set up, operate,
and scale a relational database in the cloud. It handles tasks like hardware provisioning, database setup,
patching, and backups. Key concepts:
DB Instance: A database environment in the cloud, running a specific engine (e.g., MySQL,
PostgreSQL, MariaDB, Oracle, SQL Server).
Database Engines: The underlying relational database software.
Instance Classes: Define the compute and memory capacity of the DB instance.
Storage: Options include General Purpose SSD (gp2/gp3), Provisioned IOPS SSD (io1/io2), and
Magnetic. Storage can often be scaled.
Multi-AZ Deployment: Provides high availability and durability by synchronously replicating
data to a standby instance in a different AZ.
Read Replicas: Asynchronously replicated copies of the primary instance to scale read-heavy
workloads.
Security Groups: Control network access to the DB instance (acts as a firewall).
Parameter Groups: Manage database engine configuration parameters.
Option Groups: Enable additional features for certain engines (e.g., Oracle Statspack).
Other AWS Databases (Brief Mention): AWS also offers NoSQL databases like DynamoDB (key-
value/document), document databases like DocumentDB (MongoDB compatible), graph databases like
Neptune, time-series databases like Timestream, etc.
Procedure:
1. Navigate to RDS: In the AWS Console, find and open the RDS service.
2. Create a Database:
Click "Create database".
Choose a creation method: "Standard Create".
Engine options: Select "MySQL" or "PostgreSQL".
Templates: Select "Free tier" (ensure eligibility criteria are met).
Settings:
DB instance identifier: mcs46-mydb.
Master username: admin (or choose another name).
Master password: Set a strong password and confirm it. Store it securely.
DB instance class: Should default to a Free Tier eligible class (e.g., db.t2.micro or
db.t3.micro).
Storage: Keep defaults for Free Tier (e.g., 20 GiB General Purpose SSD). Disable storage
autoscaling for Free Tier.
Availability & durability: Keep default "Do not create a standby instance" for Free Tier.
Connectivity:
VPC: Select your desired VPC (default is okay).
Subnet group: Usually default is fine.
Public access: Select "Yes" (for easy connection from your local machine for this lab - Not
recommended for production!). Alternatively, select "No" and ensure you connect from
an EC2 instance within the same VPC and security group.
VPC security group (firewall): Choose "Create new". Enter a name: rds-sg. (Or choose an
existing one if appropriate).
Availability Zone: No preference (or choose one).
Database port: Keep default (MySQL: 3306, PostgreSQL: 5432).
Database authentication: Password authentication.
Expand "Additional configuration":
Initial database name (optional): mydatabase.
Backup: Enable automated backups (defaults are fine for Free Tier).
Monitoring: Enable Enhanced monitoring (optional, might incur cost).
Maintenance: Enable auto minor version upgrade (recommended). Select a maintenance
window.
Deletion protection: Uncheck for easy cleanup in the lab.
Review estimated monthly costs (should be $0.00 if within Free Tier limits).
Click "Create database". (Creation can take several minutes).
3. Configure Security Group:
Wait for the DB instance status to become "Available".
Click on the DB instance identifier (mcs46-mydb) to view details.
Go to the "Connectivity & security" tab. Click on the active VPC security group (rds-sg).
Select the security group, go to the "Inbound rules" tab. Click "Edit inbound rules".
Click "Add rule".
Type: Select "MYSQL/Aurora" (port 3306) or "PostgreSQL" (port 5432) depending on the
engine chosen.
Source: Select "My IP" to allow connections only from your current public IP address. (If
connecting from EC2, use the EC2 instance's private IP or its security group ID).
Click "Save rules".
4. Connect to the Database:
Go back to the RDS instance details page ("Connectivity & security" tab). Note the Endpoint
name (e.g., mcs46-mydb.cxyzabcdef.us-east-1.rds.amazonaws.com).
Use a database client tool (e.g., MySQL Workbench, DBeaver, pgAdmin, or command-line client
like mysql or psql).
Configure a new connection:
Host/Server Address: The RDS Endpoint name.
Port: 3306 (MySQL) or 5432 (PostgreSQL).
Username: admin (or your chosen master username).
Password: The master password you set.
Database (optional, if created): mydatabase.
Test the connection.
5. Perform Basic SQL Operations:
Once connected, run some basic SQL commands:

```sql
-- Show databases/schemas
SHOW DATABASES;   -- (MySQL)
-- \l             -- (psql command)

-- Create a table (this step is reconstructed; on PostgreSQL use
-- SERIAL instead of INT AUTO_INCREMENT)
CREATE TABLE users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100),
    email VARCHAR(100)
);

-- Insert data
INSERT INTO users (name, email) VALUES ('Alice Smith', 'alice@example.com');
INSERT INTO users (name, email) VALUES ('Bob Johnson', 'bob@example.com');

-- Query data
SELECT * FROM users;
```
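The same SQL flow can be rehearsed locally with Python's built-in sqlite3 module before connecting to RDS (SQLite's dialect differs slightly from MySQL/PostgreSQL, e.g. it has no SHOW DATABASES and uses INTEGER PRIMARY KEY for auto-increment):

```python
import sqlite3

# An in-memory SQLite database stands in for the RDS instance here.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
cur.execute("INSERT INTO users (name, email) VALUES ('Alice Smith', 'alice@example.com')")
cur.execute("INSERT INTO users (name, email) VALUES ('Bob Johnson', 'bob@example.com')")
conn.commit()

rows = cur.execute("SELECT name, email FROM users ORDER BY id").fetchall()
print(rows)
# [('Alice Smith', 'alice@example.com'), ('Bob Johnson', 'bob@example.com')]
conn.close()
```

Once the statements work locally, pointing a MySQL/psql client at the RDS endpoint runs the equivalent flow against the managed instance.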
6. Clean Up:
In the RDS console, select your DB instance (mcs46-mydb).
Click "Actions" -> "Delete".
You will likely be asked if you want to create a final snapshot (choose "No" for the lab) and
acknowledge that you understand data will be lost.
Enter "delete me" to confirm. Click "Delete". (Deletion takes some time).
Delete the rds-sg security group if it's no longer needed.
Conclusion: Successfully launched, configured, and connected to a managed relational database instance
using Amazon RDS. Performed basic SQL operations, demonstrating understanding of managed database
services in AWS.
Experiment 7: AWS Simple Queue Service (SQS), Simple Workflow Service (SWF), and Simple
Notification Service (SNS)
Aim: To understand and utilize AWS messaging services (SQS, SNS) for decoupling applications and workflow
coordination (briefly touching on SWF or its modern alternative, Step Functions).
Theory:
Amazon SQS (Simple Queue Service): A fully managed message queuing service that enables
decoupling and scaling of microservices, distributed systems, and serverless applications.
Standard Queues: Offer maximum throughput, best-effort ordering, and at-least-once delivery.
FIFO Queues (First-In, First-Out): Provide message ordering and exactly-once processing.
Lower throughput than standard queues.
Use Cases: Decoupling application components, distributing tasks, buffering requests.
Amazon SNS (Simple Notification Service): A fully managed messaging service for both application-
to-application (A2A) and application-to-person (A2P) communication.
Topics: Logical access points and communication channels. Producers publish messages to
topics.
Subscriptions: Endpoints that receive messages published to a topic (e.g., SQS queues, Lambda
functions, HTTP/S endpoints, email, SMS).
Use Cases: Fanout pattern (publish once, deliver to many subscribers), event notifications, mobile
push notifications.
Amazon SWF (Simple Workflow Service): Helps developers build, run, and scale background jobs
that have parallel or sequential steps. Provides task coordination and state tracking. (Note: AWS Step
Functions is often preferred for new workflow orchestration use cases due to its visual interface and
serverless nature).
AWS Step Functions (Modern Alternative to SWF): A serverless function orchestrator that makes it
easy to sequence AWS Lambda functions and multiple AWS services into business-critical applications.
Uses state machines defined in JSON.
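The decoupling patterns above can be mimicked with a minimal in-process sketch. This is a toy illustration only; real SQS/SNS add durability, visibility timeouts, delivery retries, and network-level APIs:

```python
from collections import deque

# Minimal in-process stand-ins for an SQS queue and an SNS topic.
class Queue:
    def __init__(self):
        self._messages = deque()

    def send(self, body):
        self._messages.append(body)

    def receive(self):
        # Simplification: we pop the message outright. Real SQS only hides
        # it for the visibility timeout until the consumer deletes it.
        return self._messages.popleft() if self._messages else None

class Topic:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, queue):
        self._subscribers.append(queue)

    def publish(self, body):
        # Fanout: a single publish is delivered to every subscriber.
        for q in self._subscribers:
            q.send(body)

orders, audit = Queue(), Queue()
topic = Topic()
topic.subscribe(orders)
topic.subscribe(audit)
topic.publish({"task": "process_order", "order_id": 123})

print(orders.receive())  # {'task': 'process_order', 'order_id': 123}
print(audit.receive())   # {'task': 'process_order', 'order_id': 123}
```

The producer never needs to know who the consumers are; that indirection is exactly what lets the components in the procedure below be scaled or replaced independently.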
Procedure:
Part A: SQS
1. Navigate to SQS: In the AWS Console, find and open the SQS service.
2. Create an SQS Queue:
Click "Create queue".
Type: Select "Standard".
Name: mcs46-myqueue.
Keep default configuration settings (visibility timeout, etc.).
Click "Create queue".
3. Send and Receive Messages:
Select mcs46-myqueue from the list.
Click "Send and receive messages".
In the "Send message" section, enter a message body (e.g., {"task": "process_order",
"order_id": 123}). Click "Send message".
Send another message (e.g., Task details for job 456).
In the "Receive messages" section, click "Poll for messages".
You should see one or more messages appear. Select a message to view its body, details, and
receipt handle.
Important: Messages remain in the queue after polling until explicitly deleted. Select a retrieved
message and click "Delete" to remove it from the queue.
Poll again to retrieve the next message (if any) and delete it.
Part B: SNS
1. Navigate to SNS: In the AWS Console, find and open the SNS service.
2. Create a Topic: Click "Create topic", choose type "Standard", name it mcs46-mytopic, and click "Create topic".
3. Create a Subscription: Select the topic, click "Create subscription", choose protocol "Email", enter your email address, and create it. Confirm the subscription from the email you receive.
4. Publish a Message: Select the topic, click "Publish message", enter a subject and message body, and publish. Check your email for the notification.
Part C: SWF and Step Functions
1. (SWF - Less Common): Navigate to SWF. Explore the concepts of Domains, Workflows, and Activities.
Note that setting up a working SWF example is complex and involves coding workers and deciders.
Understand its purpose for stateful, long-running workflows.
2. (Step Functions - Recommended Exploration):
Navigate to Step Functions.
Click "Create state machine".
Choose "Design your workflow visually".
Type: "Standard".
Explore the visual editor. Drag simple states like "Pass" or "Wait" onto the canvas. Connect them.
Observe the generated Amazon States Language (ASL) JSON definition on the right.
You don't need to fully execute a complex workflow, but understand how states are defined and
transitioned.
(Optional) Try creating a simple state machine with two Pass states and run it with sample input.
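The optional two-Pass-state machine suggested above could be defined in Amazon States Language roughly like this (state names and the Result payload are arbitrary examples):

```json
{
  "Comment": "Minimal two-state machine using Pass states",
  "StartAt": "FirstPass",
  "States": {
    "FirstPass": {
      "Type": "Pass",
      "Result": { "greeting": "hello" },
      "Next": "SecondPass"
    },
    "SecondPass": {
      "Type": "Pass",
      "End": true
    }
  }
}
```

A Pass state simply forwards (or injects) data without doing work, which makes it handy for learning how Next/End transitions wire states together.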
Clean Up:
SQS: Select mcs46-myqueue, click "Delete", and confirm deletion.
SNS: Select mcs46-mytopic. Delete any subscriptions first (select subscription, click "Delete"). Then,
select the topic and click "Delete". Confirm deletion.
Step Functions: If you created a state machine, select it and click "Delete".
Conclusion: Successfully created and used an SQS queue for sending/receiving messages and an SNS topic
with an email subscription for publishing notifications. Gained a conceptual understanding of how these
services enable decoupled architectures and explored the basics of workflow orchestration with Step
Functions (as a modern approach compared to SWF).
Experiment 8: Domain Name System (DNS) and Amazon Route 53
Aim: To understand DNS concepts and use Amazon Route 53 to manage DNS records for a domain name.
Theory:
DNS (Domain Name System): The internet's phonebook. Translates human-readable domain names
(e.g., www.amazon.com) into machine-readable IP addresses (e.g., 192.0.2.44). Key concepts:
Domain Name: A unique name identifying a website or resource (e.g., example.com).
Hosted Zone: A container in DNS that holds information about how to route traffic for a specific
domain and its subdomains.
Record Sets: Entries within a hosted zone defining how traffic for specific domain/subdomain
names (like www.example.com or example.com) should be routed. Common types include:
A Record: Maps a hostname to an IPv4 address.
AAAA Record: Maps a hostname to an IPv6 address.
CNAME Record (Canonical Name): Maps a hostname to another hostname (an alias).
Cannot be used for the root domain (zone apex).
MX Record (Mail Exchanger): Specifies mail servers responsible for accepting email for
the domain.
TXT Record (Text): Used to store arbitrary text, often for verification purposes (e.g.,
domain ownership, SPF).
NS Record (Name Server): Specifies the authoritative name servers for the domain.
Amazon Route 53: A highly available and scalable cloud Domain Name System (DNS) web service.
Offers:
Domain Registration: Register new domain names.
DNS Service: Manage DNS records for your domains via Hosted Zones.
Health Checks: Monitor the health of your endpoints (servers, load balancers).
Routing Policies: Advanced traffic routing options beyond simple DNS resolution:
Simple: Route traffic to a single resource.
Weighted: Distribute traffic across multiple resources based on assigned weights.
Latency-based: Route traffic to the resource providing the best latency for the end-user.
Failover: Route traffic to a primary resource when healthy, and to a secondary/backup
resource when the primary fails health checks.
Geolocation: Route traffic based on the geographic location of the user.
Geoproximity: Route traffic based on the geographic location of users and resources,
optionally biasing traffic flow.
Multivalue Answer: Respond to DNS queries with up to eight healthy records selected at
random.
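Record types and CNAME chasing can be illustrated with a toy resolver over an in-memory zone. This is a simplification (real DNS adds TTLs, caching, recursion, and delegation via NS records), and the domain names and IP reuse the examples from this experiment:

```python
# Toy zone: name -> (record type, value). Names/IP are the lab's examples.
ZONE = {
    "my-unique-lab-domain.com":        ("A", "192.0.2.44"),
    "www.my-unique-lab-domain.com":    ("A", "192.0.2.44"),
    "legacy.my-unique-lab-domain.com": ("CNAME", "www.my-unique-lab-domain.com"),
}

def resolve(name: str, depth: int = 0) -> str:
    if depth > 8:                       # guard against CNAME loops
        raise ValueError("CNAME chain too long")
    rtype, value = ZONE[name]
    if rtype == "CNAME":
        # A CNAME is an alias: restart resolution at the target name.
        return resolve(value, depth + 1)
    return value                        # A record: final IPv4 address

print(resolve("www.my-unique-lab-domain.com"))     # 192.0.2.44
print(resolve("legacy.my-unique-lab-domain.com"))  # 192.0.2.44
```

The extra hop through the CNAME is what Route 53 Alias records avoid: an Alias answers directly with the target's IPs, even at the zone apex where CNAMEs are forbidden.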
Procedure:
Note: This experiment works best if you have a registered domain name (you can register one via Route 53 or
another registrar). If you don't have one, you can still perform most steps within Route 53 using a private
hosted zone or explore public zone creation without actually delegating a real domain. For this lab, we'll
assume you can create records in a public hosted zone, even if it's not live on the internet.
1. Navigate to Route 53: In the AWS Console, find and open the Route 53 service.
2. (Optional) Register a Domain: If needed, use Route 53 -> Registered domains -> Register domain.
(This incurs cost).
3. Create a Public Hosted Zone:
Go to "Hosted zones". Click "Create hosted zone".
Domain name: Enter your domain name (e.g., my-unique-lab-domain.com) or a subdomain if
you own the parent (e.g., lab.mydomain.com). If you don't own a domain, you can still enter a
fictitious one like mcs46lab.example for practice within the console, but it won't resolve
publicly.
Type: Select "Public hosted zone".
Add a description (optional).
Click "Create hosted zone".
4. Review NS and SOA Records:
Once created, Route 53 automatically creates two record sets:
NS (Name Server): Lists the 4 authoritative AWS name servers for this zone. If you owned
the domain and registered it elsewhere, you would update your registrar's settings to
point to these NS records.
SOA (Start of Authority): Contains administrative information about the zone.
Note down the NS record values.
5. Create an 'A' Record:
Click "Create record".
Record name: Leave blank to create a record for the root domain (e.g., my-unique-lab-domain.com), or enter a subdomain like www.
Record type: Select "A - Routes traffic to an IPv4 address...".
Value: Enter an IP address. This could be:
The Public IP address of an EC2 instance you have running (from Exp 2).
The DNS name of an Application Load Balancer (from Exp 4) - Wait! For ALBs, use an Alias
record. Let's use an IP for a simple A record first. Use a known public IP like 1.1.1.1 for
practice if you don't have an EC2 instance ready.
TTL (Time to Live): Keep default (e.g., 300 seconds).
Routing policy: Simple.
Click "Create records".
6. Create an Alias Record (Pointing to ALB):
Click "Create record".
Record name: e.g., app. (This will create app.my-unique-lab-domain.com).
Record type: Select "A".
Crucial: Toggle "Alias" ON.
Route traffic to:
Choose endpoint: "Alias to Application and Classic Load Balancer".
Choose Region: Select the region where your ALB resides.
Choose load balancer: Select your mcs46-alb (from Exp 4, if available). If not, you can
explore other alias targets like S3 websites or CloudFront distributions conceptually.
Routing policy: Simple.
Evaluate target health: Keep default (Yes, if applicable).
Click "Create records". (Note: Alias records are AWS-specific extensions to DNS, providing
CNAME-like functionality even at the zone apex and resolving directly to IPs without extra
lookups).
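The console steps above correspond to the Route 53 ChangeResourceRecordSets API. A minimal Python sketch of building the two change batches is shown below; the hosted-zone ID, ALB DNS name, and ALB hosted-zone ID are placeholders, and the actual boto3 call is left commented out since it needs real credentials:

```python
def a_record_change(name, ip, ttl=300):
    """Build a ChangeBatch entry for a simple A record (step 5)."""
    return {
        'Action': 'CREATE',
        'ResourceRecordSet': {
            'Name': name,
            'Type': 'A',
            'TTL': ttl,
            'ResourceRecords': [{'Value': ip}],
        },
    }

def alias_record_change(name, alb_dns_name, alb_hosted_zone_id):
    """Build a ChangeBatch entry for an Alias A record pointing at an ALB (step 6).

    Alias records carry no TTL of their own; Route 53 resolves the target
    directly, which is why they also work at the zone apex.
    """
    return {
        'Action': 'CREATE',
        'ResourceRecordSet': {
            'Name': name,
            'Type': 'A',
            'AliasTarget': {
                'DNSName': alb_dns_name,
                'HostedZoneId': alb_hosted_zone_id,  # the ALB's zone ID, not your own zone's
                'EvaluateTargetHealth': True,
            },
        },
    }

change_batch = {'Changes': [
    a_record_change('www.my-unique-lab-domain.com', '1.1.1.1'),
    alias_record_change('app.my-unique-lab-domain.com',
                        'mcs46-alb-123456.us-east-1.elb.amazonaws.com',  # placeholder ALB DNS name
                        'Z35SXDOTRQ7X7K'),  # placeholder ALB hosted-zone ID
]}

# With real credentials you would submit the batch like this:
# import boto3
# boto3.client('route53').change_resource_record_sets(
#     HostedZoneId='Z0EXAMPLEZONEID', ChangeBatch=change_batch)
```

Note how the Alias entry replaces TTL and ResourceRecords with an AliasTarget block, matching the console's "Alias" toggle.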
Test the records you created using nslookup or dig from a terminal:

```bash
nslookup my-unique-lab-domain.com
nslookup www.my-unique-lab-domain.com
nslookup app.my-unique-lab-domain.com
nslookup legacy.my-unique-lab-domain.com

# OR using dig
dig my-unique-lab-domain.com A
dig www.my-unique-lab-domain.com A
dig app.my-unique-lab-domain.com A
dig legacy.my-unique-lab-domain.com CNAME
```
If the domain isn't live publicly, you can query one of the specific NS servers noted earlier directly, e.g. `dig @<ns-server> my-unique-lab-domain.com A` (substitute an NS value recorded in step 4).
9. Clean Up:
Inside the hosted zone, delete all record sets except the NS and SOA records. Select each custom
record and click "Delete".
Go back to "Hosted zones". Select the zone my-unique-lab-domain.com. Click "Delete".
Confirm deletion.
If you registered a domain you no longer need, configure it not to auto-renew.
Conclusion: Successfully created a Route 53 hosted zone and managed various DNS record types (A, Alias,
CNAME). Understood the role of NS and SOA records and how Route 53 acts as an authoritative DNS service.
Learned how to use tools like nslookup or dig for testing DNS resolution.
Experiment 9: Amazon ElastiCache

Aim: To set up and interact with an in-memory caching service using Amazon ElastiCache (using Redis or Memcached).
Theory:

Amazon ElastiCache: A fully managed in-memory caching service that supports two popular open-source engines, Redis and Memcached. Caching frequently accessed data in memory reduces read latency and offloads traffic from backend databases. Key concepts include clusters and nodes, subnet groups (the VPC subnets a cluster may use), security groups controlling network access, and endpoints that clients connect to.

Procedure:
1. Prerequisite: EC2 Instance: You need an EC2 instance running in the same VPC as your planned
ElastiCache cluster to connect to it. Ensure the instance's security group allows outbound connections.
You can reuse an instance from a previous experiment if it's still running, or launch a new one (e.g.,
Amazon Linux 2, t2.micro).
2. Create an ElastiCache Subnet Group:
Navigate to ElastiCache -> Subnet groups.
Click "Create subnet group".
Name: mcs46-cache-subnet-group.
Description: Subnet group for ElastiCache lab.
VPC ID: Select the VPC where your EC2 instance resides.
Add subnets: Select at least one private subnet from the chosen VPC (ideally, select private
subnets from multiple AZs for potential high availability later, even if using a single node now).
ElastiCache nodes should generally not be in public subnets.
Click "Create".
3. Create an ElastiCache Redis Cluster:
Navigate to ElastiCache -> Redis clusters.
Click "Create Redis cluster".
Choose creation method: "Easy Create" is simpler, but "Configure and create" gives more control.
Let's use "Configure and create".
Cluster settings:
Cluster mode: Keep Disabled for a single-node cluster (simpler for this lab).
Location: AWS Cloud.
Cluster settings (again):
Name: mcs46-redis-cluster.
Description: Optional.
Engine version compatibility: Choose a recent version.
Port: Keep default 6379.
Parameter group: Keep default.
Node type: Select a Free Tier eligible node (e.g., cache.t2.micro or cache.t3.micro).
Number of replicas: 0 (for single node).
Connectivity:
Subnet group: Select mcs46-cache-subnet-group.
Availability Zone(s): Select "No preference" or choose a specific AZ where you have a
chosen subnet.
Security:
Security groups: Choose "Create new": elasticache-sg. (Or select an existing appropriate
one).
Encryption in-transit: Uncheck for simplicity in this lab (requires TLS handling on client).
Encryption at-rest: Uncheck for simplicity/Free Tier.
Skip Backup, Maintenance, Tags for this basic setup.
Review and click "Create". (Cluster creation takes several minutes).
4. Configure Security Group:
Navigate to EC2 -> Security Groups. Find the security group associated with your EC2 instance.
Edit its outbound rules to ensure it can reach port 6379 (or allow all outbound).
Find the security group created for ElastiCache (elasticache-sg). Edit its inbound rules.
Click "Add rule".
Type: "Custom TCP".
Port range: 6379.
Source: Select "Custom" and enter the Security Group ID of your EC2 instance. This allows
only instances within that EC2 security group to connect to the cache.
Click "Save rules".
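The inbound rule in step 4 maps to an EC2 `authorize_security_group_ingress` call whose IpPermissions entry references the EC2 instance's security group instead of a CIDR block. A sketch follows; the group IDs are placeholders and the boto3 call is left commented out:

```python
def redis_ingress_from_sg(source_sg_id, port=6379):
    """IpPermissions entry allowing Redis traffic only from one security group."""
    return {
        'IpProtocol': 'tcp',
        'FromPort': port,
        'ToPort': port,
        # Referencing a security group (not a CIDR) restricts access to
        # members of that group -- here, the EC2 instance's group.
        'UserIdGroupPairs': [{'GroupId': source_sg_id}],
    }

permission = redis_ingress_from_sg('sg-0123456789abcdef0')  # placeholder EC2 SG ID

# import boto3
# boto3.client('ec2').authorize_security_group_ingress(
#     GroupId='sg-elasticache-placeholder', IpPermissions=[permission])
```

Using a group reference rather than 0.0.0.0/0 is what keeps the cache reachable only from instances in that EC2 security group.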
5. Connect and Interact with Redis:
Wait for the ElastiCache cluster status to become "Available".
Select mcs46-redis-cluster. Note the Primary Endpoint (e.g., mcs46-redis-cluster.abcdef.xyz.use1.cache.amazonaws.com).
SSH into your EC2 instance.
Install the Redis CLI client:

```bash
# Amazon Linux 2
sudo amazon-linux-extras install redis6 -y
# Ubuntu
# sudo apt update
# sudo apt install redis-tools -y
```

Connect to the cluster with `redis-cli -h <primary-endpoint> -p 6379`, substituting the Primary Endpoint noted earlier. If successful, the prompt will change to the endpoint address followed by the port (e.g., mcs46-redis-cluster.abcdef....:6379>).
Execute some Redis commands:
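The session the conclusion refers to exercises SET, GET, INCR, and DEL. Those commands run interactively inside redis-cli against the live cluster; the plain-Python stand-in below (a sketch, not the real client) mimics the same command semantics so the behavior can be studied offline:

```python
class MiniRedis:
    """Tiny dict-backed stand-in mimicking a few redis-cli commands."""
    def __init__(self):
        self.store = {}

    def set(self, key, value):
        self.store[key] = str(value)
        return 'OK'

    def get(self, key):
        return self.store.get(key)  # missing keys come back as nil in redis-cli

    def incr(self, key):
        # Redis treats a missing key as 0 and stores integers as strings
        value = int(self.store.get(key, '0')) + 1
        self.store[key] = str(value)
        return value

    def delete(self, key):
        # DEL returns the number of keys actually removed
        return 1 if self.store.pop(key, None) is not None else 0

r = MiniRedis()
r.set('student', 'mcs46')   # SET student mcs46   -> OK
r.get('student')            # GET student         -> "mcs46"
r.incr('visits')            # INCR visits         -> 1
r.incr('visits')            # INCR visits         -> 2
r.delete('student')         # DEL student         -> 1
```

Against the real cluster, the equivalent redis-cli commands are typed at the `:6379>` prompt.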
6. Clean Up:
ElastiCache: Select mcs46-redis-cluster. Click "Actions" -> "Delete". Confirm deletion (may
ask about final snapshot - choose No). Wait for deletion.
Subnet Group: Select mcs46-cache-subnet-group. Click "Actions" -> "Delete".
Security Group: Delete elasticache-sg if no longer needed.
EC2 Instance: Terminate the EC2 instance used for testing if no longer needed.
Conclusion: Successfully created an ElastiCache Redis cluster, configured network access using subnet and
security groups, and connected to it from an EC2 instance using redis-cli. Performed basic cache
operations (SET, GET, INCR, DEL), demonstrating the use of ElastiCache for in-memory caching.
Aim: To explore serverless computing with AWS Lambda and Infrastructure as Code (IaC) with AWS
CloudFormation.
Theory:
AWS Lambda: A serverless, event-driven compute service that lets you run code without provisioning
or managing servers.
Functions: Your code packaged with its dependencies. Supports various runtimes (Python,
Node.js, Java, Go, Ruby, .NET etc.).
Events/Triggers: AWS services (like API Gateway, S3, SNS, DynamoDB Streams) or custom
applications can trigger Lambda functions.
Serverless: AWS handles infrastructure management, scaling, patching, and availability. You pay
only for the compute time consumed.
Use Cases: Web backends (with API Gateway), data processing, real-time file processing (S3
triggers), scheduled tasks.
AWS CloudFormation: Provides a common language to model and provision AWS infrastructure
resources in your cloud environment in a safe, repeatable way (Infrastructure as Code - IaC).
Templates: Text files (JSON or YAML) describing the AWS resources you want to create and
configure (e.g., EC2 instances, S3 buckets, VPCs, IAM roles).
Stacks: A collection of AWS resources managed as a single unit, created based on a
CloudFormation template.
Change Sets: Preview how proposed changes to a stack might impact your running resources
before implementing them.
Benefits: Automation, consistency, version control for infrastructure, standardization.
Procedure:
1. Navigate to Lambda: In the AWS Console, find and open the Lambda service.
2. Create a Lambda Function:
Click "Create function".
Select "Author from scratch".
Function name: mcs46-hello-lambda.
Runtime: Select "Python 3.9" (or another familiar runtime like Node.js).
Architecture: Keep default x86_64.
Permissions: Keep "Create a new role with basic Lambda permissions". AWS will create an
execution role allowing the function to write logs to CloudWatch Logs.
Click "Create function".
3. Review and Edit Code:
The console will show a default lambda_function.py (or index.js etc.). The basic handler
takes event and context objects.
Modify the code slightly, for example:
```python
import json
import datetime

def lambda_handler(event, context):
    # Read an optional 'name' key from the triggering event
    name = event.get('name', 'World')
    now = datetime.datetime.now()
    return {
        'statusCode': 200,
        'body': json.dumps(f'Hello {name} from Lambda! The time is {now.isoformat()}')
    }
```
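Because a Lambda handler is just an ordinary function, it can be exercised locally before deploying, much like the console's "Test" button does with a test event. The sketch below repeats a completed handler so it is self-contained; the 'name' event key is an illustrative choice, not a Lambda requirement:

```python
import json
import datetime

def lambda_handler(event, context):
    # 'name' is an illustrative event key; Lambda imposes no event schema
    name = event.get('name', 'World')
    now = datetime.datetime.now()
    return {
        'statusCode': 200,
        'body': json.dumps(f'Hello {name} from Lambda! The time is {now.isoformat()}'),
    }

# Simulate the console's Test button: pass a test event, no real context object
response = lambda_handler({'name': 'MCS46'}, None)
assert response['statusCode'] == 200
```

In the console, the same check is done by configuring a test event such as `{"name": "MCS46"}` and inspecting the returned body and the CloudWatch log output.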
The CloudFormation template used for the mcs46-s3-stack stack:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Simple CloudFormation template to create an S3 bucket for MCS46 Lab.

Resources:
  MyS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      # !Sub makes the bucket name unique per account and region
      BucketName: !Sub 'mcs46-cfn-bucket-${AWS::AccountId}-${AWS::Region}'
      Tags:
        - Key: Project
          Value: MCS46-Lab

Outputs:
  BucketName:
    Description: Name of the created S3 bucket
    Value: !Ref MyS3Bucket
  BucketArn:
    Description: ARN of the created S3 bucket
    Value: !GetAtt MyS3Bucket.Arn
```
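CloudFormation accepts the same template in JSON, where the YAML short tags !Sub, !Ref, and !GetAtt become the long forms Fn::Sub, Ref, and Fn::GetAtt. Building the template as a Python dict makes that equivalence explicit; this is a sketch only, and the create_stack call is left commented out:

```python
import json

template = {
    'AWSTemplateFormatVersion': '2010-09-09',
    'Description': 'Simple CloudFormation template to create an S3 bucket for MCS46 Lab.',
    'Resources': {
        'MyS3Bucket': {
            'Type': 'AWS::S3::Bucket',
            'Properties': {
                # JSON long form of !Sub 'mcs46-cfn-bucket-${AWS::AccountId}-${AWS::Region}'
                'BucketName': {'Fn::Sub': 'mcs46-cfn-bucket-${AWS::AccountId}-${AWS::Region}'},
                'Tags': [{'Key': 'Project', 'Value': 'MCS46-Lab'}],
            },
        },
    },
    'Outputs': {
        'BucketName': {'Description': 'Name of the created S3 bucket',
                       'Value': {'Ref': 'MyS3Bucket'}},
        'BucketArn': {'Description': 'ARN of the created S3 bucket',
                      'Value': {'Fn::GetAtt': ['MyS3Bucket', 'Arn']}},
    },
}

template_body = json.dumps(template, indent=2)

# With credentials configured, the stack could be created from the CLI/SDK:
# import boto3
# boto3.client('cloudformation').create_stack(
#     StackName='mcs46-s3-stack', TemplateBody=template_body)
```

Either form produces the same stack; YAML is usually preferred for hand-written templates, JSON for generated ones.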
Clean Up:
Lambda: Select mcs46-hello-lambda. Click "Actions" -> "Delete function". Confirm deletion. Also
delete the associated execution role from IAM -> Roles if desired (search for role name containing
mcs46-hello-lambda). Delete the CloudWatch Log Group associated with the function if desired.
CloudFormation: Ensure the stack mcs46-s3-stack is deleted.
Conclusion: Successfully created and tested a serverless AWS Lambda function, understanding its basic
execution model and logging. Used AWS CloudFormation to define infrastructure (an S3 bucket) in a template
and manage its lifecycle (create/delete) via a stack, demonstrating the principles of Infrastructure as Code.
Aim: To understand and utilize fundamental AWS security services and features, including network controls
(Security Groups, NACLs), configuration monitoring (AWS Config), and API activity logging (AWS CloudTrail).
Theory:
Security Groups (SGs): Act as a stateful firewall for EC2 instances (and other resources like RDS,
ElastiCache nodes). Control inbound and outbound traffic at the instance level. Rules are "allow" only
(no deny rules). Stateful means if outbound traffic is allowed, the return traffic is automatically allowed,
regardless of inbound rules.
Network Access Control Lists (NACLs): Act as a stateless firewall for subnets within a VPC. Control
inbound and outbound traffic at the subnet level. Rules can be "allow" or "deny" and are evaluated in
order based on rule number (lowest number first). Stateless means return traffic must be explicitly
allowed by a corresponding outbound/inbound rule. Default NACL allows all traffic. Custom NACLs
deny all traffic until rules are added.
AWS Config: A service that enables you to assess, audit, and evaluate the configurations of your AWS
resources. Continuously monitors and records resource configurations and allows you to automate
evaluation against desired configurations using Config Rules (managed or custom). Helps with
compliance auditing, security analysis, change tracking, and troubleshooting.
AWS CloudTrail: Records AWS API calls for your account and delivers log files to an S3 bucket.
Provides event history of account activity, including actions taken through the AWS Management
Console, SDKs, CLI, and other AWS services. Essential for security analysis, resource change tracking,
compliance auditing, and operational troubleshooting. Enabled by default for recent events (Event
history), but creating a "Trail" provides long-term storage and advanced features.
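The NACL evaluation order described above ("lowest number first", first match wins, implicit deny at the end) can be sketched in a few lines of Python. The rules, numbers, and CIDRs here are illustrative only:

```python
import ipaddress

def evaluate_nacl(rules, source_ip, port):
    """Return the action of the first matching rule, lowest rule number first.

    rules: list of (rule_number, cidr, (low_port, high_port), action) tuples.
    Falls through to the implicit deny (the '*' rule) if nothing matches.
    """
    for number, cidr, (low, high), action in sorted(rules):
        in_cidr = ipaddress.ip_address(source_ip) in ipaddress.ip_network(cidr)
        if in_cidr and low <= port <= high:
            return action  # first match wins; later rules are never evaluated
    return 'DENY'          # implicit deny at the end of every NACL

inbound = [
    (100, '0.0.0.0/0',   (80, 80), 'ALLOW'),  # web traffic from anywhere
    (110, '10.0.0.0/16', (22, 22), 'ALLOW'),  # SSH only from inside the VPC
    (120, '0.0.0.0/0',   (22, 22), 'DENY'),   # SSH from anywhere else: deny
]

evaluate_nacl(inbound, '203.0.113.9', 80)  # ALLOW (rule 100)
evaluate_nacl(inbound, '203.0.113.9', 22)  # DENY  (rule 120)
evaluate_nacl(inbound, '10.0.5.7', 22)     # ALLOW (rule 110 matches before 120)
```

Security groups need no such ordering: their rules are allow-only, so any matching rule admits the traffic.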
Procedure:
1. Navigate to AWS Config: Find and open the AWS Config service.
2. Initial Setup (if first time):
If prompted, click "Get started".
Settings:
Record specific resource types: You can choose specific types or "Record all resources
supported...". Choose "Record all..." for wider visibility in the lab. Check "Include global
resources".
Amazon S3 bucket: Config needs an S3 bucket to store history/snapshot files. Choose
"Create a bucket" or select an existing one.
Amazon SNS topic: Optional for notifications. Skip for now.
AWS Config role: Choose "Create AWS Config service-linked role".
Click "Next".
Rules (optional): Skip adding rules for now. Click "Next".
Review and click "Confirm". (Allow some time for initial setup and discovery).
3. Explore Config Dashboard:
Once set up, explore the dashboard. It shows resource inventory counts and compliance status (if
rules are active).
Go to "Resources" in the left pane.
Filter by resource type (e.g., AWS::EC2::Instance or AWS::S3::Bucket).
Select a specific resource (e.g., an EC2 instance or S3 bucket from previous labs).
4. View Resource Configuration and History:
On the resource details page, view the "Configuration details" (current state).
Click on the "Configuration timeline". This shows changes detected over time. Click on different
timeline points to see how the configuration looked at that time.
5. (Optional) Enable a Managed Config Rule:
Go to "Rules". Click "Add rule".
Filter by "managed". Search for a simple rule, e.g., s3-bucket-public-read-prohibited.
Select the rule and click "Next".
Configure frequency (e.g., Triggered by configuration changes).
Review and click "Add rule".
Wait for the rule to evaluate compliance status (may take time). Check the dashboard or rule
details page.
Clean Up:
NACLs: Ensure any temporary Deny rules added to NACLs are removed.
AWS Config:
If you added a rule, select it in "Rules" and click "Actions" -> "Delete rule".
Go to "Settings". Note the S3 bucket name used by Config. Click "Edit". You can stop recording
by unchecking resource types, but fully disabling Config often involves deleting the delivery
channel and configuration recorder via CLI/SDK. For the lab, stopping recording might suffice.
Alternatively, delete the S3 bucket after stopping recording (ensure no other service needs it).
Delete the Config role if desired.
CloudTrail:
If you created a trail (mcs46-management-trail), select it and click "Delete". Confirm.
Delete the associated S3 bucket and CloudWatch Log Group (if created for the trail) if no longer
needed. Delete the IAM role created for CloudTrail logging to CloudWatch if desired.
Conclusion: Reviewed and contrasted the functionality of Security Groups (stateful, instance-level) and
Network ACLs (stateless, subnet-level). Explored AWS Config for monitoring resource configurations and
viewing change history. Utilized AWS CloudTrail to view API activity logs for security analysis and auditing.
Gained understanding of these fundamental AWS security tools.
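The API activity logs the conclusion mentions are JSON records with a standard set of top-level fields. A sketch of pulling out the fields most useful for a quick audit follows; the sample record is hand-written for illustration, though the field names match the CloudTrail record format:

```python
import json

# Hand-written sample record; values are made up, field names are standard
sample_event = json.dumps({
    "eventTime": "2025-04-12T10:15:00Z",
    "eventSource": "s3.amazonaws.com",
    "eventName": "DeleteBucket",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "203.0.113.9",
    "userIdentity": {"type": "IAMUser", "userName": "parmod"},
})

def summarize(event_json):
    """One-line audit summary: who did what, on which service, from which IP."""
    e = json.loads(event_json)
    who = e['userIdentity'].get('userName', e['userIdentity']['type'])
    return (f"{e['eventTime']} {who} called {e['eventName']} "
            f"on {e['eventSource']} from {e['sourceIPAddress']}")

summarize(sample_event)
```

The same fields are what the CloudTrail "Event history" console view surfaces as columns, which is why they are the first things to check during security analysis.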