
MCS-46 IoT and Cloud Computing Lab

Submitted To:

Dr. Yashwant Sangwan

Dept. of Computer Science

Submitted By:

Parmod

233112720003

M. Sc. (CS) II

GGJ Government College, Hisar


Index

1. Amazon Simple Storage Service (S3) and Amazon Glacier Storage
2. Amazon Elastic Compute Cloud (EC2) and Amazon Elastic Block Store (EBS)
3. Amazon Virtual Private Cloud (VPC)
4. Elastic Load Balancing, Amazon CloudWatch, and Auto Scaling
5. AWS Identity and Access Management (IAM)
6. Databases and AWS (Focus on RDS)
7. AWS Simple Queue Service (SQS), Simple Workflow Service (SWF), and Simple Notification Service (SNS)
8. Domain Name System (DNS) and Amazon Route 53
9. Amazon ElastiCache
10. Additional Key Services (Focus on Lambda and CloudFormation)
11. Security on AWS (Focus on Security Groups/NACLs, Config, CloudTrail)


Experiment 1: Amazon Simple Storage Service (S3) and Amazon Glacier Storage

Aim: To understand and utilize Amazon S3 for scalable object storage and Amazon Glacier for long-term data
archival.

Theory:

Amazon S3 (Simple Storage Service): Provides highly durable, available, and scalable object storage.
Data is stored as objects within containers called buckets. Key concepts include:
Buckets: Globally unique containers for objects.
Objects: Files and their associated metadata.
Storage Classes: Different tiers optimized for various access patterns and costs (e.g., S3
Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 One Zone-IA, S3 Glacier Instant Retrieval, S3
Glacier Flexible Retrieval, S3 Glacier Deep Archive).
Versioning: Keeps multiple versions of an object, protecting against accidental deletion or
overwrites.
Lifecycle Policies: Automate the transition of objects between storage classes or their deletion
based on age.
Permissions: Control access via Bucket Policies, Access Control Lists (ACLs), and IAM policies.
Amazon S3 Glacier: A secure, durable, and extremely low-cost storage service for data archiving and
long-term backup. Designed for data that is infrequently accessed. Retrieval times vary depending on
the chosen option (Expedited, Standard, Bulk for Flexible Retrieval; Standard, Bulk for Deep Archive). S3
Lifecycle policies are commonly used to move data from S3 to Glacier storage classes.

Procedure:

1. Login to AWS Management Console: Access your AWS account.


2. Navigate to S3: Find and open the S3 service.
3. Create an S3 Bucket:
Click "Create bucket".
Provide a globally unique bucket name (e.g., mcs46-lab1-yourname-uniqueid).
Choose an AWS Region.
Keep default settings for Block Public Access (recommended for security).
Enable Bucket Versioning (optional, but good practice).
Click "Create bucket".
4. Upload an Object:
Navigate into your newly created bucket.
Click "Upload".
Add a sample file (e.g., a text file, image).
Keep default settings and click "Upload".
Verify the object is listed in the bucket. Click on the object to see its properties (URL, storage
class, etc.).
5. Configure Lifecycle Policy (S3 to Glacier Transition):
Go to the bucket's "Management" tab.
Under "Lifecycle rules", click "Create lifecycle rule".
Give the rule a name (e.g., MoveToGlacierFlex).
Choose rule scope (apply to all objects or filter by prefix/tags).


Select "Move current versions of objects between storage classes".


Add a transition: Choose "Glacier Flexible Retrieval" (or Deep Archive) as the storage class.
Set the number of days after object creation to transition (e.g., 30 days - Note: for testing, this
won't happen immediately, but configure it).
Acknowledge the settings and create the rule.
6. (Optional) Explore Object Permissions:
Select the uploaded object. Go to the "Permissions" tab. Explore ACLs (generally discouraged
now) and understand how Bucket Policies or IAM policies would grant access.
7. (Optional) Initiate Glacier Retrieval (if object transitioned):
If an object has transitioned to Glacier Flexible Retrieval or Deep Archive (this takes time as per
the lifecycle rule), select the object.
Click "Actions" -> "Initiate restore".
Choose a retrieval tier (e.g., Bulk, Standard) and duration for availability.
Monitor the restore status. Accessing the object is only possible after the restore completes.
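
For repeatability, the same setup can also be scripted with the AWS CLI. A minimal sketch (the bucket name, file, and region are illustrative):

```bash
# Create a bucket and upload a sample object (names are illustrative)
aws s3 mb s3://mcs46-lab1-example-bucket --region us-east-1
aws s3 cp ./sample.txt s3://mcs46-lab1-example-bucket/

# Lifecycle rule: transition current versions to Glacier Flexible Retrieval after 30 days
aws s3api put-bucket-lifecycle-configuration \
  --bucket mcs46-lab1-example-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "MoveToGlacierFlex",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}]
    }]
  }'
```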

Conclusion: Successfully created an S3 bucket, uploaded an object, and configured a lifecycle policy to
transition objects to a Glacier storage class. Understood the basic concepts of S3 object storage and Glacier
archival.


Experiment 2: Amazon Elastic Compute Cloud (EC2) and Amazon Elastic Block Store (EBS)

Aim: To launch, configure, and manage virtual servers (EC2 instances) and persistent block storage (EBS
volumes) in the AWS cloud.

Theory:

Amazon EC2 (Elastic Compute Cloud): Provides scalable computing capacity in the AWS cloud. Allows
launching virtual machines called instances. Key concepts include:
Instances: Virtual servers.
AMIs (Amazon Machine Images): Templates containing the OS and software configuration
used to launch instances.
Instance Types: Various combinations of CPU, memory, storage, and networking capacity.
Key Pairs: Used for securely connecting to Linux instances via SSH.
Security Groups: Virtual firewalls controlling inbound and outbound traffic to instances.
Instance Store: Temporary block-level storage attached to the host computer. Data is lost if the
instance is stopped or terminated.
Amazon EBS (Elastic Block Store): Provides persistent block-level storage volumes for use with EC2
instances. Key concepts include:
Volumes: Network-attached block storage devices, independent of instance lifecycle (data
persists if the instance is stopped/terminated).
Snapshots: Point-in-time backups of EBS volumes stored durably in S3. Used for backup and
creating new volumes.
Volume Types: Different performance characteristics and costs (e.g., gp3, gp2, io1, io2).

Procedure:

1. Navigate to EC2: In the AWS Console, find and open the EC2 service.
2. Launch an EC2 Instance:
Click "Launch instances".
Name and tags: Give your instance a name (e.g., mcs46-webserver).
Application and OS Images (AMI): Select an AMI (e.g., "Amazon Linux 2" or "Ubuntu Server" -
choose a Free Tier eligible one).
Instance type: Choose a Free Tier eligible type (e.g., t2.micro or t3.micro).
Key pair (login): Create a new key pair, give it a name, download the .pem file, and keep it
secure. Or select an existing key pair if you have one.
Network settings:
Choose a VPC (default is fine for now).
Security Group: Create a new security group. Name it (e.g., webserver-sg). Add rules:
Allow SSH (port 22) from your IP address (select "My IP").
Allow HTTP (port 80) from Anywhere (0.0.0.0/0).
Configure storage: Keep the default root EBS volume settings (e.g., 8 GiB gp2/gp3).
Advanced details: Explore options but defaults are usually fine for a basic launch.
Review and click "Launch instance".
3. Connect to the Instance (Linux):
Select the instance in the EC2 dashboard. Wait for "Instance state" to become "Running" and
"Status checks" to pass.


Note the Public IPv4 address or Public IPv4 DNS.


Use an SSH client (like Terminal on Mac/Linux, PuTTY or WSL on Windows):

```bash
# Set correct permissions for your key file first
chmod 400 /path/to/your-key-pair.pem

# Connect (replace placeholders)
ssh -i /path/to/your-key-pair.pem ec2-user@<Public-IP-Address>
# (Note: username might be 'ubuntu' for Ubuntu AMIs)
```

4. Install a Web Server (Example: Apache on Amazon Linux 2):


Inside the SSH session:

```bash
sudo yum update -y
sudo yum install -y httpd
sudo systemctl start httpd
sudo systemctl enable httpd  # Start on boot

# Create a simple test page
echo "<h1>Hello from EC2 Instance $(hostname -f)</h1>" | sudo tee /var/www/html/index.html
```

5. Access the Web Server: Open a web browser and navigate to http://<Public-IP-Address>. You
should see the "Hello" message.
6. Create and Attach an EBS Volume:
In the EC2 console, go to "Elastic Block Store" -> "Volumes".
Click "Create volume".
Choose volume type (e.g., gp3), size (e.g., 1 GiB), and Availability Zone (must be the same AZ as
your EC2 instance).
Click "Create volume".
Select the new volume (wait for state "Available"), click "Actions" -> "Attach volume".
Select your running EC2 instance and click "Attach volume".
7. Mount the EBS Volume (inside SSH):
List available block devices: lsblk (Note the new device name, e.g., xvdf).
Check if it has a file system: sudo file -s /dev/xvdf (If "data", it needs formatting).
Format the volume (only if new): sudo mkfs -t ext4 /dev/xvdf
Create a mount point: sudo mkdir /data
Mount the volume: sudo mount /dev/xvdf /data
Verify mount: df -h
(Optional) Add to /etc/fstab for auto-mount on reboot.
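
A sketch of that optional fstab entry, assuming the device appeared as /dev/xvdf (replace the UUID placeholder with the value blkid prints; nofail prevents boot failures if the volume is later detached):

```bash
# Find the volume's UUID (more stable than /dev/xvdf across reboots)
sudo blkid /dev/xvdf

# Append an fstab entry and verify it mounts cleanly
echo 'UUID=<uuid-from-blkid> /data ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
sudo mount -a
```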
8. Create an EBS Snapshot:
Go back to "Volumes" in the EC2 console.
Select the root volume of your instance (or the new data volume).
Click "Actions" -> "Create snapshot". Add a description and create.
Find the snapshot under "Snapshots".
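
The same snapshot can be taken from the AWS CLI; a sketch with an illustrative volume ID:

```bash
# Copy the real volume ID from the console or from describe-volumes
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
  --description "mcs46 data volume backup"
aws ec2 describe-snapshots --owner-ids self   # list snapshots owned by your account
```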
9. Terminate the Instance:


Go to "Instances". Select your instance.


Click "Instance state" -> "Terminate instance". Confirm.
Important: Termination deletes the instance and its root volume (by default). Attached EBS
volumes (like the /data volume) persist unless configured otherwise or manually deleted. Delete
the extra EBS volume and snapshot if no longer needed to avoid charges.

Conclusion: Successfully launched an EC2 instance, connected to it, installed a web server, created and
attached an EBS volume for persistent storage, and created an EBS snapshot for backup. Understood the roles
of EC2 for compute and EBS for persistent storage.


Experiment 3: Amazon Virtual Private Cloud (VPC)

Aim: To design and configure a custom network environment within AWS using VPC, including subnets, route
tables, and an Internet Gateway.

Theory:

Amazon VPC (Virtual Private Cloud): Allows provisioning a logically isolated section of the AWS
Cloud where you can launch AWS resources in a virtual network that you define. Key concepts include:
VPC: A virtual network dedicated to your AWS account, identified by a CIDR block (e.g.,
10.0.0.0/16).
Subnets: Ranges of IP addresses within your VPC, tied to a specific Availability Zone (AZ). Can be
public (direct route to internet) or private (no direct route).
Route Tables: Control where network traffic from subnets is directed. Each subnet is associated
with one route table.
Internet Gateway (IGW): A horizontally scaled, redundant, and highly available VPC component
that allows communication between instances in your VPC and the internet.
NAT Gateway/Instance (Network Address Translation): Allows instances in private subnets to
initiate outbound traffic to the internet (e.g., for updates) but prevents the internet from initiating
connections to those instances.
Security Groups: Act as instance-level firewalls (stateful).
Network ACLs (NACLs): Act as subnet-level firewalls (stateless).

Procedure:

1. Navigate to VPC: In the AWS Console, find and open the VPC service.
2. Create a Custom VPC:
Go to "Your VPCs". Click "Create VPC".
Select "VPC only".
Give it a name tag (e.g., mcs46-custom-vpc).
Specify an IPv4 CIDR block (e.g., 10.10.0.0/16). Use a private address range.
Keep IPv6 CIDR block as "No IPv6".
Tenancy: Default.
Click "Create VPC".
3. Create Subnets:
Go to "Subnets". Click "Create subnet".
Select your mcs46-custom-vpc.
Subnet 1 (Public):
Name tag: mcs46-public-subnet-az1
Availability Zone: Choose one (e.g., us-east-1a).
IPv4 CIDR block: 10.10.1.0/24 (must be within the VPC's CIDR).
Click "Create subnet".
Subnet 2 (Private):
Click "Create subnet" again.
Select mcs46-custom-vpc.
Name tag: mcs46-private-subnet-az1
Availability Zone: Choose the same AZ (us-east-1a).


IPv4 CIDR block: 10.10.2.0/24.


Click "Create subnet".
(Optional: Create subnets in a second AZ for high availability).
4. Create an Internet Gateway (IGW):
Go to "Internet Gateways". Click "Create internet gateway".
Name tag: mcs46-igw.
Click "Create internet gateway".
Select the new IGW, click "Actions" -> "Attach to VPC".
Select your mcs46-custom-vpc and click "Attach internet gateway".
5. Configure Route Tables:
Go to "Route Tables". You'll see a default route table associated with your VPC.
Create Public Route Table:
Click "Create route table".
Name tag: mcs46-public-rt.
VPC: Select mcs46-custom-vpc.
Click "Create route table".
Edit Public Route Table Routes:
Select mcs46-public-rt. Go to the "Routes" tab. Click "Edit routes".
Click "Add route".
Destination: 0.0.0.0/0 (all IPv4 traffic).
Target: Select "Internet Gateway" -> choose mcs46-igw.
Click "Save changes".
Associate Public Subnet:
With mcs46-public-rt selected, go to the "Subnet associations" tab. Click "Edit subnet
associations".
Select mcs46-public-subnet-az1.
Click "Save associations".
(Optional) Review Private Route Table: Select the main route table (check the "Main" column).
It should only have the local route (10.10.0.0/16). Associate mcs46-private-subnet-az1
explicitly with this main/private route table if needed (though it might be associated by default if
it's the Main one and you haven't changed its associations).
6. Enable Auto-assign Public IP for Public Subnet:
Go to "Subnets". Select mcs46-public-subnet-az1.
Click "Actions" -> "Edit subnet settings".
Check "Enable auto-assign public IPv4 address".
Click "Save".
7. (Test) Launch Instances:
Launch one EC2 instance in the public subnet (mcs46-public-subnet-az1). Ensure it gets a
public IP. Use a security group allowing SSH. Verify you can SSH into it from the internet.
Launch another EC2 instance in the private subnet (mcs46-private-subnet-az1). It should not
get a public IP. Use a security group allowing SSH from the public instance's private IP or from
within the VPC. Verify you cannot SSH directly from the internet, but you can SSH from the public
instance to the private instance using its private IP. Verify the private instance cannot reach the
internet (e.g., ping 8.8.8.8 will fail). (To enable outbound internet access for private instances,
you'd need a NAT Gateway).
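
The same topology can also be built with the AWS CLI. A minimal sketch using the lab's CIDRs (each call's returned ID feeds the next):

```bash
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.10.0.0/16 \
  --query 'Vpc.VpcId' --output text)
SUBNET_ID=$(aws ec2 create-subnet --vpc-id "$VPC_ID" \
  --cidr-block 10.10.1.0/24 --availability-zone us-east-1a \
  --query 'Subnet.SubnetId' --output text)

# Internet Gateway plus a public route table for the subnet
IGW_ID=$(aws ec2 create-internet-gateway \
  --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"
RT_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$RT_ID" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
aws ec2 associate-route-table --route-table-id "$RT_ID" --subnet-id "$SUBNET_ID"
```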


8. Clean Up: Terminate the test instances. Delete the IGW (detach first), subnets, custom route table, and
finally the VPC.

Conclusion: Successfully designed and configured a custom VPC with public and private subnets, an Internet
Gateway, and appropriate route tables to control traffic flow. Demonstrated understanding of basic VPC
networking concepts.


Experiment 4: Elastic Load Balancing, Amazon CloudWatch, and Auto Scaling

Aim: To implement a highly available and scalable web application using Elastic Load Balancing (ELB), monitor
resources with CloudWatch, and automatically adjust capacity with Auto Scaling.

Theory:

Elastic Load Balancing (ELB): Automatically distributes incoming application traffic across multiple
targets, such as EC2 instances, containers, and IP addresses, in multiple Availability Zones. Increases
fault tolerance. Types include Application Load Balancer (ALB - Layer 7), Network Load Balancer (NLB -
Layer 4), Gateway Load Balancer (GWLB), and Classic Load Balancer (CLB - previous generation).
Amazon CloudWatch: A monitoring and observability service. Collects metrics, logs, and events. Allows
setting alarms based on metrics (e.g., CPU utilization).
AWS Auto Scaling: Monitors your applications and automatically adjusts capacity to maintain steady,
predictable performance at the lowest possible cost. Key components:
Launch Configuration/Template: Specifies the instance configuration (AMI, instance type, key
pair, security groups) for new instances launched by Auto Scaling. Launch Templates are newer
and recommended.
Auto Scaling Group (ASG): A collection of EC2 instances treated as a logical grouping for
scaling and management. Defines minimum, maximum, and desired capacity.
Scaling Policies: Define how the ASG should scale out (add instances) or scale in (remove
instances) based on CloudWatch alarms or schedules.

Procedure:

1. Prepare EC2 Instances (Targets):


Launch at least two EC2 instances (e.g., t2.micro, Amazon Linux 2) in different Availability Zones
within the same VPC (e.g., the default VPC or your custom VPC's public subnets).
Ensure each instance has a web server (like Apache/Nginx) installed and running (see Exp 2
steps). Make the default page slightly different on each (e.g., include the instance ID or AZ) to
verify load balancing.
Use a security group for these instances that allows HTTP (port 80) from 0.0.0.0/0 (or, more securely, only from the Load Balancer's security group later) and SSH (port 22) from your IP. Let's call this web-instance-sg.
2. Create a Target Group:
Navigate to EC2 -> Load Balancing -> Target Groups.
Click "Create target group".
Choose target type: "Instances".
Target group name: mcs46-tg.
Protocol: HTTP, Port: 80.
VPC: Select the VPC where your instances reside.
Health check protocol: HTTP, Path: / (or /index.html).
Expand "Advanced health check settings" and adjust thresholds if needed (defaults are usually
okay).
Click "Next".
Register targets: Select the two web server instances you launched. Click "Include as pending
below".


Click "Create target group". Wait for the initial health checks to pass (status should become
"healthy").
3. Create an Application Load Balancer (ALB):
Navigate to EC2 -> Load Balancing -> Load Balancers.
Click "Create Load Balancer".
Choose "Application Load Balancer" -> "Create".
Load balancer name: mcs46-alb.
Scheme: Internet-facing.
IP address type: IPv4.
Network mapping: Select your VPC. Select at least two Availability Zones where your instances
are located and choose a public subnet for each selected AZ.
Security groups: Create a new security group for the ALB (e.g., alb-sg). Allow inbound HTTP
(port 80) from 0.0.0.0/0. (The web-instance-sg should ideally be updated to allow HTTP only
from alb-sg).
Listeners and routing: Listener Protocol HTTP, Port 80. Default action: Forward to -> select
mcs46-tg.
Review and click "Create load balancer". Wait for the state to become "Active".
4. Test the Load Balancer:
Find the DNS name of the ALB on its details page.
Access http://<ALB-DNS-Name> in your browser. Refresh several times. You should see the
content served alternately from your different instances.
5. Create CloudWatch Alarm:
Navigate to CloudWatch -> Alarms -> All alarms.
Click "Create alarm".
Click "Select metric".
Browse metrics: EC2 -> Per-Instance Metrics.
Search for CPUUtilization and select it for one of your web server instances. Click "Select
metric".
Conditions: Static threshold, Greater/Equal, Threshold value: e.g., 70 (%).
Additional configuration: Datapoints to alarm: 1 out of 1.
Click "Next".
Configure actions: Choose "Create new topic" to create an SNS topic for notifications (e.g.,
mcs46-cpu-alarm-topic), enter your email, and create the topic. Confirm the email
subscription. Select this topic for the "In alarm" state.
Click "Next".
Alarm name: mcs46-high-cpu. Add description.
Click "Next", review, and "Create alarm".
6. Create Launch Template:
Navigate to EC2 -> Instances -> Launch Templates.
Click "Create launch template".
Launch template name: mcs46-webserver-lt. Add description.
Check "Provide guidance...".
AMI: Select the same AMI used for your initial instances.
Instance type: Select the same instance type (e.g., t2.micro).
Key pair: Select the key pair used previously.
Network settings: Select "Security groups". Choose the existing web-instance-sg.


Crucial: Expand "Advanced details". In the "User data" field, paste the script to install and start
the web server automatically on launch (modify based on your AMI):

```bash
#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
echo "<h1>Hello from AutoScaled Instance $INSTANCE_ID in $AZ</h1>" > /var/www/html/index.html
```

Review and click "Create launch template".


7. Create Auto Scaling Group (ASG):
Navigate to EC2 -> Auto Scaling -> Auto Scaling Groups.
Click "Create Auto Scaling group".
ASG name: mcs46-web-asg.
Launch template: Select mcs46-webserver-lt. Click "Next".
Network: Choose the VPC. Select the same public subnets in different AZs that your manual
instances and ALB are using. Click "Next".
Load balancing: Select "Attach to an existing load balancer". Choose target group -> mcs46-tg.
Enable ELB health checks if desired. Click "Next".
Group size: Desired capacity: 2, Minimum capacity: 1, Maximum capacity: 4.
Scaling policies: Choose "Target tracking scaling policy".
Metric type: Average CPU utilization.
Target value: e.g., 50 (%).
Instances need: e.g., 300 seconds (warm-up time).
Click "Next".
Add notifications (optional): Link to the SNS topic created earlier.
Add tags (optional). Click "Next".
Review and click "Create Auto Scaling group".
8. Test Auto Scaling:
The ASG will launch instances to meet the desired capacity (2). They should automatically register
with the target group and become healthy.
Terminate one of the manually launched instances (if you kept them). The ASG should detect this
(if desired=2) and launch a new one to replace it.
(Optional, advanced): Generate load on the instances to trigger the CPU alarm and scaling policy, e.g., via SSH: sudo amazon-linux-extras install epel -y; sudo yum install stress -y; stress --cpu 1 --timeout 300s. Monitor the ASG activity history and the CloudWatch alarm state, and observe new instances being launched. Once the load stops, observe the ASG scaling back in.
9. Clean Up: Delete the ASG (set min/desired/max to 0 first, wait for termination). Delete the Launch
Template. Delete the Load Balancer. Delete the Target Group. Terminate any remaining manual


instances. Delete the CloudWatch Alarm and SNS Topic. Delete the Security Groups if no longer needed.

Conclusion: Successfully configured an Application Load Balancer to distribute traffic, created a CloudWatch
alarm for monitoring, and set up an Auto Scaling Group with a Launch Template and scaling policy to
automatically manage application capacity based on demand.


Experiment 5: AWS Identity and Access Management (IAM)

Aim: To understand and manage secure access to AWS resources using IAM users, groups, roles, and policies.

Theory:

IAM (Identity and Access Management): Enables you to manage access to AWS services and
resources securely. You use IAM to control who is authenticated (signed in) and authorized (has
permissions) to use resources. Key concepts:
Root User: The account owner, has full access. Avoid using for everyday tasks.
IAM Users: An entity representing a person or application interacting with AWS. Has long-term
credentials (password for console, access keys for API/CLI).
IAM Groups: Collections of IAM users. Permissions applied to a group are inherited by its
members. Simplifies permission management.
IAM Roles: An IAM identity with permission policies that can be assumed by trusted entities
(users, applications, AWS services like EC2). Uses temporary credentials. Preferred for applications
or granting cross-account access.
IAM Policies: JSON documents defining permissions. Can be AWS Managed (pre-defined),
Customer Managed (you create/manage), or Inline (attached directly to a single user/group/role).
Follows the principle of least privilege (grant only necessary permissions).
MFA (Multi-Factor Authentication): Adds an extra layer of security for user sign-in.

Procedure:

1. Navigate to IAM: In the AWS Console, find and open the IAM service.
2. Create an IAM Group:
Go to "User groups". Click "Create group".
User group name: mcs46-developers.
Attach permissions policies: Search for and select AmazonS3ReadOnlyAccess.
Click "Create group".
3. Create an IAM User:
Go to "Users". Click "Add users".
User name: mcs46-test-user.
Select AWS credential type: Choose "Password - AWS Management Console access".
Select "Custom password" and set a temporary password.
Require password reset: Check this box.
Click "Next: Permissions".
Add user to group: Select the mcs46-developers group.
Click "Next: Tags" (optional).
Click "Next: Review".
Click "Create user".
Important: Download the .csv file containing the user's credentials (including console sign-in
link) or copy the sign-in link and password.
4. Test User Login and Permissions:
Sign out of your root/admin account.
Use the specific IAM user console sign-in link (it looks like https://<account-id-or-alias>.signin.aws.amazon.com/console).


Login as mcs46-test-user with the temporary password. You will be prompted to create a new
password.
Once logged in, try accessing the S3 service. You should be able to list buckets and view objects
(Read Only access).
Try accessing another service like EC2. You should receive permission errors, demonstrating the
S3 read-only restriction.
Sign out of the test user account and sign back in with your admin/root credentials.
5. Create a Custom IAM Policy:
Navigate back to IAM -> Policies. Click "Create policy".
Go to the "JSON" tab. Paste the following policy (allows describing EC2 instances but nothing
else in EC2):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeInstances",
            "Resource": "*"
        }
    ]
}
```

Click "Next: Tags" (optional).


Click "Next: Review".
Policy name: mcs46-EC2DescribeOnlyPolicy. Add description.
Click "Create policy".
6. Attach Custom Policy to User:
Go to "Users". Select mcs46-test-user.
Go to the "Permissions" tab. Click "Add permissions" -> "Attach existing policies directly".
Search for mcs46-EC2DescribeOnlyPolicy and select it.
Click "Next: Review", then "Add permissions".
7. Retest User Permissions:
Log in again as mcs46-test-user.
Navigate to EC2. You should now be able to view the list of instances (DescribeInstances action).
Try to launch or terminate an instance. These actions should still fail.
Sign out and log back in as admin.
8. Create an IAM Role for EC2:
Navigate to IAM -> Roles. Click "Create role".
Trusted entity type: Select "AWS service".
Use case: Select "EC2". Click "Next".
Add permissions: Search for and select AmazonS3ReadOnlyAccess. Click "Next".
Role name: mcs46-EC2-S3-ReadOnly-Role. Add description.
Click "Create role".
9. (Usage Scenario - Conceptual or Actual):
Launch a new EC2 instance (or modify an existing one).


During launch (or via Actions -> Security -> Modify IAM role for a running instance), attach the
mcs46-EC2-S3-ReadOnly-Role to the instance.
SSH into the instance. Ensure the AWS CLI is available (it comes preinstalled on Amazon Linux 2).
Run AWS CLI commands to access S3 (e.g., aws s3 ls). These commands should work without
needing configured access keys, as the EC2 instance assumes the role and gets temporary
credentials.
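
A quick way to confirm the instance is really using role credentials rather than stored keys (a sketch; the bucket name is illustrative):

```bash
aws sts get-caller-identity                # shows an assumed-role ARN, not an IAM user
aws s3 ls                                  # allowed by AmazonS3ReadOnlyAccess
aws s3 cp ./test.txt s3://some-bucket/     # should fail: the role is read-only
```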
10. Clean Up: Delete the IAM user (mcs46-test-user). Delete the IAM group (mcs46-developers).
Delete the custom IAM policy (mcs46-EC2DescribeOnlyPolicy). Delete the IAM role (mcs46-EC2-S3-
ReadOnly-Role). Terminate any EC2 instance launched for role testing.

Conclusion: Successfully created and managed IAM users, groups, custom policies, and roles. Demonstrated
understanding of permission boundaries, the principle of least privilege, and how roles grant permissions to
AWS services like EC2.


Experiment 6: Databases and AWS (Focus on RDS)

Aim: To set up, configure, and connect to a managed relational database instance using Amazon RDS.

Theory:

Amazon RDS (Relational Database Service): A managed service that makes it easy to set up, operate,
and scale a relational database in the cloud. It handles tasks like hardware provisioning, database setup,
patching, and backups. Key concepts:
DB Instance: A database environment in the cloud, running a specific engine (e.g., MySQL,
PostgreSQL, MariaDB, Oracle, SQL Server).
Database Engines: The underlying relational database software.
Instance Classes: Define the compute and memory capacity of the DB instance.
Storage: Options include General Purpose SSD (gp2/gp3), Provisioned IOPS SSD (io1/io2), and
Magnetic. Storage can often be scaled.
Multi-AZ Deployment: Provides high availability and durability by synchronously replicating
data to a standby instance in a different AZ.
Read Replicas: Asynchronously replicated copies of the primary instance to scale read-heavy
workloads.
Security Groups: Control network access to the DB instance (acts as a firewall).
Parameter Groups: Manage database engine configuration parameters.
Option Groups: Enable additional features for certain engines (e.g., Oracle Statspack).
Other AWS Databases (Brief Mention): AWS also offers NoSQL databases like DynamoDB (key-
value/document), document databases like DocumentDB (MongoDB compatible), graph databases like
Neptune, time-series databases like Timestream, etc.

Procedure (Using MySQL or PostgreSQL Free Tier):

1. Navigate to RDS: In the AWS Console, find and open the RDS service.
2. Create a Database:
Click "Create database".
Choose a creation method: "Standard Create".
Engine options: Select "MySQL" or "PostgreSQL".
Templates: Select "Free tier" (ensure eligibility criteria are met).
Settings:
DB instance identifier: mcs46-mydb.
Master username: admin (or choose another name).
Master password: Set a strong password and confirm it. Store it securely.
DB instance class: Should default to a Free Tier eligible class (e.g., db.t2.micro or
db.t3.micro).
Storage: Keep defaults for Free Tier (e.g., 20 GiB General Purpose SSD). Disable storage
autoscaling for Free Tier.
Availability & durability: Keep default "Do not create a standby instance" for Free Tier.
Connectivity:
VPC: Select your desired VPC (default is okay).
Subnet group: Usually default is fine.


Public access: Select "Yes" (for easy connection from your local machine for this lab - Not
recommended for production!). Alternatively, select "No" and ensure you connect from
an EC2 instance within the same VPC and security group.
VPC security group (firewall): Choose "Create new". Enter a name: rds-sg. (Or choose an
existing one if appropriate).
Availability Zone: No preference (or choose one).
Database port: Keep default (MySQL: 3306, PostgreSQL: 5432).
Database authentication: Password authentication.
Expand "Additional configuration":
Initial database name (optional): mydatabase.
Backup: Enable automated backups (defaults are fine for Free Tier).
Monitoring: Enable Enhanced monitoring (optional, might incur cost).
Maintenance: Enable auto minor version upgrade (recommended). Select a maintenance
window.
Deletion protection: Uncheck for easy cleanup in the lab.
Review estimated monthly costs (should be $0.00 if within Free Tier limits).
Click "Create database". (Creation can take several minutes).
3. Configure Security Group:
Wait for the DB instance status to become "Available".
Click on the DB instance identifier (mcs46-mydb) to view details.
Go to the "Connectivity & security" tab. Click on the active VPC security group (rds-sg).
Select the security group, go to the "Inbound rules" tab. Click "Edit inbound rules".
Click "Add rule".
Type: Select "MYSQL/Aurora" (port 3306) or "PostgreSQL" (port 5432) depending on the
engine chosen.
Source: Select "My IP" to allow connections only from your current public IP address. (If
connecting from EC2, use the EC2 instance's private IP or its security group ID).
Click "Save rules".
4. Connect to the Database:
Go back to the RDS instance details page ("Connectivity & security" tab). Note the Endpoint
name (e.g., mcs46-mydb.cxyzabcdef.us-east-1.rds.amazonaws.com).
Use a database client tool (e.g., MySQL Workbench, DBeaver, pgAdmin, or command-line client
like mysql or psql).
Configure a new connection:
Host/Server Address: The RDS Endpoint name.
Port: 3306 (MySQL) or 5432 (PostgreSQL).
Username: admin (or your chosen master username).
Password: The master password you set.
Database (optional, if created): mydatabase.
Test the connection.
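
Command-line equivalents of that client configuration (the endpoint is illustrative; both commands prompt for the master password):

```bash
# MySQL
mysql -h mcs46-mydb.cxyzabcdef.us-east-1.rds.amazonaws.com -P 3306 -u admin -p mydatabase

# PostgreSQL
psql -h mcs46-mydb.cxyzabcdef.us-east-1.rds.amazonaws.com -p 5432 -U admin -d mydatabase
```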
5. Perform Basic SQL Operations:
Once connected, run some basic SQL commands:

```sql
-- Show databases/schemas
SHOW DATABASES; -- (MySQL)
-- \l           -- (psql command)

-- Use your database (if created)
USE mydatabase;  -- (MySQL)
-- \c mydatabase -- (psql command)

-- Create a simple table
CREATE TABLE users (
    id INT AUTO_INCREMENT PRIMARY KEY, -- MySQL
    -- id SERIAL PRIMARY KEY,          -- PostgreSQL
    name VARCHAR(100),
    email VARCHAR(100) UNIQUE
);

-- Insert data
INSERT INTO users (name, email) VALUES ('Alice Smith', 'alice@example.com');
INSERT INTO users (name, email) VALUES ('Bob Johnson', 'bob@example.com');

-- Query data
SELECT * FROM users;
```

6. Clean Up:
In the RDS console, select your DB instance (mcs46-mydb).
Click "Actions" -> "Delete".
You will likely be asked if you want to create a final snapshot (choose "No" for the lab) and
acknowledge that you understand data will be lost.
Enter "delete me" to confirm. Click "Delete". (Deletion takes some time).
Delete the rds-sg security group if it's no longer needed.

Conclusion: Successfully launched, configured, and connected to a managed relational database instance
using Amazon RDS. Performed basic SQL operations, demonstrating understanding of managed database
services in AWS.


Experiment 7: AWS Simple Queue Service (SQS), Simple Workflow Service (SWF), and Simple
Notification Service (SNS)

Aim: To understand and utilize AWS messaging services (SQS, SNS) for decoupling applications and workflow
coordination (briefly touching on SWF or its modern alternative, Step Functions).

Theory:

Amazon SQS (Simple Queue Service): A fully managed message queuing service that enables
decoupling and scaling of microservices, distributed systems, and serverless applications.
Standard Queues: Offer maximum throughput, best-effort ordering, and at-least-once delivery.
FIFO Queues (First-In, First-Out): Provide message ordering and exactly-once processing.
Lower throughput than standard queues.
Use Cases: Decoupling application components, distributing tasks, buffering requests.
Amazon SNS (Simple Notification Service): A fully managed messaging service for both application-
to-application (A2A) and application-to-person (A2P) communication.
Topics: Logical access points and communication channels. Producers publish messages to
topics.
Subscriptions: Endpoints that receive messages published to a topic (e.g., SQS queues, Lambda
functions, HTTP/S endpoints, email, SMS).
Use Cases: Fanout pattern (publish once, deliver to many subscribers), event notifications, mobile
push notifications.
Amazon SWF (Simple Workflow Service): Helps developers build, run, and scale background jobs
that have parallel or sequential steps. Provides task coordination and state tracking. (Note: AWS Step
Functions is often preferred for new workflow orchestration use cases due to its visual interface and
serverless nature).
AWS Step Functions (Modern Alternative to SWF): A serverless function orchestrator that makes it
easy to sequence AWS Lambda functions and multiple AWS services into business-critical applications.
Uses state machines defined in JSON.

Procedure:

Part A: SQS

1. Navigate to SQS: In the AWS Console, find and open the SQS service.
2. Create an SQS Queue:
Click "Create queue".
Type: Select "Standard".
Name: mcs46-myqueue.
Keep default configuration settings (visibility timeout, etc.).
Click "Create queue".
3. Send and Receive Messages:
Select mcs46-myqueue from the list.
Click "Send and receive messages".
In the "Send message" section, enter a message body (e.g., {"task": "process_order",
"order_id": 123}). Click "Send message".
Send another message (e.g., Task details for job 456).
In the "Receive messages" section, click "Poll for messages".

You should see one or more messages appear. Select a message to view its body, details, and
receipt handle.
Important: Messages remain in the queue after polling until explicitly deleted. Select a retrieved
message and click "Delete" to remove it from the queue.
Poll again to retrieve the next message (if any) and delete it.
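
The same send/receive/delete cycle via the AWS CLI, as a sketch (the receipt handle placeholder must come from the receive-message output):

```bash
QUEUE_URL=$(aws sqs get-queue-url --queue-name mcs46-myqueue \
  --query QueueUrl --output text)

aws sqs send-message --queue-url "$QUEUE_URL" \
  --message-body '{"task": "process_order", "order_id": 123}'

aws sqs receive-message --queue-url "$QUEUE_URL"   # note the ReceiptHandle in the output

aws sqs delete-message --queue-url "$QUEUE_URL" --receipt-handle "<ReceiptHandle>"
```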

Part B: SNS

1. Navigate to SNS: Find and open the SNS service.


2. Create an SNS Topic:
Go to "Topics". Click "Create topic".
Type: Select "Standard".
Name: mcs46-mytopic.
Keep default settings. Click "Create topic".
3. Create a Subscription:
Select mcs46-mytopic. Go to the "Subscriptions" tab. Click "Create subscription".
Protocol: Select "Email".
Endpoint: Enter your email address where you want to receive notifications.
Click "Create subscription".
Confirmation: Check your email inbox for a message from AWS Notifications with a
confirmation link. Click the link to confirm the subscription. The subscription status in the console
should change to "Confirmed".
4. Publish a Message:
Go back to the mcs46-mytopic details page. Click "Publish message".
Subject (optional): Test Notification from SNS.
Message body: This is a test message published to mcs46-mytopic.
Keep defaults for other settings. Click "Publish message".
5. Verify Notification: Check your email inbox again. You should receive the message you just published.
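
The equivalent topic/subscribe/publish flow from the CLI, as a sketch (the email address is illustrative; the subscription still has to be confirmed via the emailed link):

```bash
TOPIC_ARN=$(aws sns create-topic --name mcs46-mytopic \
  --query TopicArn --output text)   # idempotent: returns the existing ARN if present

aws sns subscribe --topic-arn "$TOPIC_ARN" \
  --protocol email --notification-endpoint you@example.com

aws sns publish --topic-arn "$TOPIC_ARN" \
  --subject "Test Notification from SNS" \
  --message "This is a test message published to mcs46-mytopic."
```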

Part C: SWF/Step Functions (Conceptual/Brief Exploration)

1. (SWF - Less Common): Navigate to SWF. Explore the concepts of Domains, Workflows, and Activities.
Note that setting up a working SWF example is complex and involves coding workers and deciders.
Understand its purpose for stateful, long-running workflows.
2. (Step Functions - Recommended Exploration):
Navigate to Step Functions.
Click "Create state machine".
Choose "Design your workflow visually".
Type: "Standard".
Explore the visual editor. Drag simple states like "Pass" or "Wait" onto the canvas. Connect them.
Observe the generated Amazon States Language (ASL) JSON definition on the right.
You don't need to fully execute a complex workflow, but understand how states are defined and
transitioned.
(Optional) Try creating a simple state machine with two Pass states and run it with sample input.

Clean Up:

SQS: Select mcs46-myqueue. Click "Actions" -> "Delete". Confirm deletion.


SNS: Select mcs46-mytopic. Delete any subscriptions first (select subscription, click "Delete"). Then,
select the topic and click "Delete". Confirm deletion.
Step Functions: If you created a state machine, select it and click "Delete".

Conclusion: Successfully created and used an SQS queue for sending/receiving messages and an SNS topic
with an email subscription for publishing notifications. Gained a conceptual understanding of how these
services enable decoupled architectures and explored the basics of workflow orchestration with Step
Functions (as a modern approach compared to SWF).


Experiment 8: Domain Name System (DNS) and Amazon Route 53

Aim: To understand DNS concepts and use Amazon Route 53 to manage DNS records for a domain name.

Theory:

DNS (Domain Name System): The internet's phonebook. Translates human-readable domain names
(e.g., www.amazon.com) into machine-readable IP addresses (e.g., 192.0.2.44). Key concepts:
Domain Name: A unique name identifying a website or resource (e.g., example.com).
Hosted Zone: A container in DNS that holds information about how to route traffic for a specific
domain and its subdomains.
Record Sets: Entries within a hosted zone defining how traffic for specific domain/subdomain
names (like www.example.com or example.com) should be routed. Common types include:
A Record: Maps a hostname to an IPv4 address.
AAAA Record: Maps a hostname to an IPv6 address.
CNAME Record (Canonical Name): Maps a hostname to another hostname (an alias).
Cannot be used for the root domain (zone apex).
MX Record (Mail Exchanger): Specifies mail servers responsible for accepting email for
the domain.
TXT Record (Text): Used to store arbitrary text, often for verification purposes (e.g.,
domain ownership, SPF).
NS Record (Name Server): Specifies the authoritative name servers for the domain.
Amazon Route 53: A highly available and scalable cloud Domain Name System (DNS) web service.
Offers:
Domain Registration: Register new domain names.
DNS Service: Manage DNS records for your domains via Hosted Zones.
Health Checks: Monitor the health of your endpoints (servers, load balancers).
Routing Policies: Advanced traffic routing options beyond simple DNS resolution:
Simple: Route traffic to a single resource.
Weighted: Distribute traffic across multiple resources based on assigned weights.
Latency-based: Route traffic to the resource providing the best latency for the end-user.
Failover: Route traffic to a primary resource when healthy, and to a secondary/backup
resource when the primary fails health checks.
Geolocation: Route traffic based on the geographic location of the user.
Geoproximity: Route traffic based on the geographic location of users and resources,
optionally biasing traffic flow.
Multivalue Answer: Respond to DNS queries with up to eight healthy records selected at
random.

Procedure:

Note: This experiment works best if you have a registered domain name (you can register one via Route 53 or
another registrar). If you don't have one, you can still perform most steps within Route 53 using a private
hosted zone or explore public zone creation without actually delegating a real domain. For this lab, we'll
assume you can create records in a public hosted zone, even if it's not live on the internet.

1. Navigate to Route 53: In the AWS Console, find and open the Route 53 service.


2. (Optional) Register a Domain: If needed, use Route 53 -> Registered domains -> Register domain.
(This incurs cost).
3. Create a Public Hosted Zone:
Go to "Hosted zones". Click "Create hosted zone".
Domain name: Enter your domain name (e.g., my-unique-lab-domain.com) or a subdomain if
you own the parent (e.g., lab.mydomain.com). If you don't own a domain, you can still enter a
fictitious one like mcs46lab.example for practice within the console, but it won't resolve
publicly.
Type: Select "Public hosted zone".
Add a description (optional).
Click "Create hosted zone".
4. Review NS and SOA Records:
Once created, Route 53 automatically creates two record sets:
NS (Name Server): Lists the 4 authoritative AWS name servers for this zone. If you owned
the domain and registered it elsewhere, you would update your registrar's settings to
point to these NS records.
SOA (Start of Authority): Contains administrative information about the zone.
Note down the NS record values.
5. Create an 'A' Record:
Click "Create record".
Record name: Leave blank to create a record for the root domain (e.g., my-unique-lab-domain.com), or enter a subdomain like www.
Record type: Select "A - Routes traffic to an IPv4 address...".
Value: Enter an IP address. This could be:
The Public IP address of an EC2 instance you have running (from Exp 2).
The DNS name of an Application Load Balancer (from Exp 4) - Wait! For ALBs, use an Alias
record. Let's use an IP for a simple A record first. Use a known public IP like 1.1.1.1 for
practice if you don't have an EC2 instance ready.
TTL (Time to Live): Keep default (e.g., 300 seconds).
Routing policy: Simple.
Click "Create records".
6. Create an Alias Record (Pointing to ALB):
Click "Create record".
Record name: e.g., app. (This will create app.my-unique-lab-domain.com).
Record type: Select "A".
Crucial: Toggle "Alias" ON.
Route traffic to:
Choose endpoint: "Alias to Application and Classic Load Balancer".
Choose Region: Select the region where your ALB resides.
Choose load balancer: Select your mcs46-alb (from Exp 4, if available). If not, you can
explore other alias targets like S3 websites or CloudFront distributions conceptually.
Routing policy: Simple.
Evaluate target health: Keep default (Yes, if applicable).
Click "Create records". (Note: Alias records are AWS-specific extensions to DNS, providing
CNAME-like functionality even at the zone apex and resolving directly to IPs without extra
lookups).


7. Create a 'CNAME' Record:


Click "Create record".
Record name: e.g., legacy.
Record type: Select "CNAME - Routes traffic to another domain name...".
Value: Enter another domain name (e.g., www.amazon.com or the DNS name of your ALB mcs46-
alb....elb.amazonaws.com).
TTL: Keep default.
Routing policy: Simple.
Click "Create records".
8. Test DNS Resolution (if domain is live or using dig):
If your domain is properly delegated to Route 53's NS servers (takes time to propagate), you can
use tools like nslookup (Windows) or dig (Linux/macOS) to query your records from your
command line:

```bash
nslookup my-unique-lab-domain.com
nslookup www.my-unique-lab-domain.com
nslookup app.my-unique-lab-domain.com
nslookup legacy.my-unique-lab-domain.com

# OR using dig
dig my-unique-lab-domain.com A
dig www.my-unique-lab-domain.com A
dig app.my-unique-lab-domain.com A
dig legacy.my-unique-lab-domain.com CNAME
```

If the domain isn't live, you can query one of the specific NS servers noted earlier:

```bash
# Replace ns-XXX.awsdns-YY.com with one of your actual NS servers
dig @ns-XXX.awsdns-YY.com my-unique-lab-domain.com A
```

9. Clean Up:
Inside the hosted zone, delete all record sets except the NS and SOA records. Select each custom
record and click "Delete".
Go back to "Hosted zones". Select the zone my-unique-lab-domain.com. Click "Delete".
Confirm deletion.
If you registered a domain you no longer need, configure it not to auto-renew.

Conclusion: Successfully created a Route 53 hosted zone and managed various DNS record types (A, Alias,
CNAME). Understood the role of NS and SOA records and how Route 53 acts as an authoritative DNS service.
Learned how to use tools like nslookup or dig for testing DNS resolution.


Experiment 9: Amazon ElastiCache

Aim: To set up and interact with an in-memory caching service using Amazon ElastiCache (using Redis or
Memcached).

Theory:

In-Memory Caching: A technique used to improve application performance by storing frequently accessed data in memory (RAM) rather than relying solely on slower disk-based databases. Reduces latency and database load.
Amazon ElastiCache: A fully managed in-memory data store and cache service by AWS. Supports two
popular open-source engines:
Redis: A versatile in-memory data structure store used as a cache, database, message broker,
and queue. Offers persistence options, data structures (strings, hashes, lists, sets, sorted sets),
replication, and high availability (Multi-AZ).
Memcached: A simpler, distributed memory object caching system. Often used for caching
database query results or web session data. Multi-threaded. No persistence.
Key Concepts:
Nodes: The building blocks of an ElastiCache deployment, providing memory and compute.
Clusters: A collection of one or more nodes running the chosen cache engine (Redis cluster
mode disabled/enabled, Memcached cluster).
Parameter Groups: Control engine-specific parameters.
Subnet Groups: Specify the VPC subnets where the cache nodes will reside.
Security Groups: Control network access to the cache nodes.
Endpoints: DNS hostnames used by applications to connect to the cache cluster/nodes.

Procedure (Using Redis Free Tier):

1. Prerequisite: EC2 Instance: You need an EC2 instance running in the same VPC as your planned
ElastiCache cluster to connect to it. Ensure the instance's security group allows outbound connections.
You can reuse an instance from a previous experiment if it's still running, or launch a new one (e.g.,
Amazon Linux 2, t2.micro).
2. Create an ElastiCache Subnet Group:
Navigate to ElastiCache -> Subnet groups.
Click "Create subnet group".
Name: mcs46-cache-subnet-group.
Description: Subnet group for ElastiCache lab.
VPC ID: Select the VPC where your EC2 instance resides.
Add subnets: Select at least one private subnet from the chosen VPC (ideally, select private
subnets from multiple AZs for potential high availability later, even if using a single node now).
ElastiCache nodes should generally not be in public subnets.
Click "Create".
3. Create an ElastiCache Redis Cluster:
Navigate to ElastiCache -> Redis clusters.
Click "Create Redis cluster".
Choose creation method: "Easy Create" is simpler, but "Configure and create" gives more control.
Let's use "Configure and create".


Cluster settings:
Cluster mode: Keep Disabled for a single-node cluster (simpler for this lab).
Location: AWS Cloud.
Cluster settings (again):
Name: mcs46-redis-cluster.
Description: Optional.
Engine version compatibility: Choose a recent version.
Port: Keep default 6379.
Parameter group: Keep default.
Node type: Select a Free Tier eligible node (e.g., cache.t2.micro or cache.t3.micro).
Number of replicas: 0 (for single node).
Connectivity:
Subnet group: Select mcs46-cache-subnet-group.
Availability Zone(s): Select "No preference" or choose a specific AZ where you have a
chosen subnet.
Security:
Security groups: Choose "Create new": elasticache-sg. (Or select an existing appropriate
one).
Encryption in-transit: Uncheck for simplicity in this lab (requires TLS handling on client).
Encryption at-rest: Uncheck for simplicity/Free Tier.
Skip Backup, Maintenance, Tags for this basic setup.
Review and click "Create". (Cluster creation takes several minutes).
4. Configure Security Group:
Navigate to EC2 -> Security Groups. Find the security group associated with your EC2 instance.
Edit its outbound rules to ensure it can reach port 6379 (or allow all outbound).
Find the security group created for ElastiCache (elasticache-sg). Edit its inbound rules.
Click "Add rule".
Type: "Custom TCP".
Port range: 6379.
Source: Select "Custom" and enter the Security Group ID of your EC2 instance. This allows
only instances within that EC2 security group to connect to the cache.
Click "Save rules".
5. Connect and Interact with Redis:
Wait for the ElastiCache cluster status to become "Available".
Select mcs46-redis-cluster. Note the Primary Endpoint (e.g., mcs46-redis-
cluster.abcdef.xyz.use1.cache.amazonaws.com).
SSH into your EC2 instance.
Install the Redis CLI client:

```bash
# Amazon Linux 2
sudo amazon-linux-extras install redis6 -y

# Ubuntu
# sudo apt update
# sudo apt install redis-tools -y
```

Connect to the Redis cluster using the endpoint:



```bash
# Replace <Your-Redis-Endpoint> with the actual Primary Endpoint
redis-cli -h <Your-Redis-Endpoint> -p 6379
```

If successful, the prompt will change to the endpoint address followed by the port (e.g., mcs46-redis-cluster.abcdef....:6379>).
Execute some Redis commands:

```bash
PING         # Should return PONG
SET mykey "Hello ElastiCache"
GET mykey    # Should return "Hello ElastiCache"
INCR counter # Increments 'counter', returns 1 (first time)
GET counter  # Should return "1"
INCR counter # Returns 2
GET counter  # Returns "2"
KEYS *       # List all keys
DEL mykey    # Delete a key
GET mykey    # Should return (nil)
EXIT         # Exit redis-cli
```
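
The primary endpoint can also be fetched from the CLI rather than copied out of the console; a sketch that assumes this single-node, cluster-mode-disabled setup:

```bash
aws elasticache describe-cache-clusters \
  --cache-cluster-id mcs46-redis-cluster \
  --show-cache-node-info \
  --query 'CacheClusters[0].CacheNodes[0].Endpoint.Address' \
  --output text
```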

6. Clean Up:
ElastiCache: Select mcs46-redis-cluster. Click "Actions" -> "Delete". Confirm deletion (may
ask about final snapshot - choose No). Wait for deletion.
Subnet Group: Select mcs46-cache-subnet-group. Click "Actions" -> "Delete".
Security Group: Delete elasticache-sg if no longer needed.
EC2 Instance: Terminate the EC2 instance used for testing if no longer needed.

Conclusion: Successfully created an ElastiCache Redis cluster, configured network access using subnet and
security groups, and connected to it from an EC2 instance using redis-cli. Performed basic cache
operations (SET, GET, INCR, DEL), demonstrating the use of ElastiCache for in-memory caching.


Experiment 10: Additional Key Services (Focus on Lambda and CloudFormation)

Aim: To explore serverless computing with AWS Lambda and Infrastructure as Code (IaC) with AWS
CloudFormation.

Theory:

AWS Lambda: A serverless, event-driven compute service that lets you run code without provisioning
or managing servers.
Functions: Your code packaged with its dependencies. Supports various runtimes (Python,
Node.js, Java, Go, Ruby, .NET etc.).
Events/Triggers: AWS services (like API Gateway, S3, SNS, DynamoDB Streams) or custom
applications can trigger Lambda functions.
Serverless: AWS handles infrastructure management, scaling, patching, and availability. You pay
only for the compute time consumed.
Use Cases: Web backends (with API Gateway), data processing, real-time file processing (S3
triggers), scheduled tasks.
AWS CloudFormation: Provides a common language to model and provision AWS infrastructure
resources in your cloud environment in a safe, repeatable way (Infrastructure as Code - IaC).
Templates: Text files (JSON or YAML) describing the AWS resources you want to create and
configure (e.g., EC2 instances, S3 buckets, VPCs, IAM roles).
Stacks: A collection of AWS resources managed as a single unit, created based on a
CloudFormation template.
Change Sets: Preview how proposed changes to a stack might impact your running resources
before implementing them.
Benefits: Automation, consistency, version control for infrastructure, standardization.

Procedure:

Part A: AWS Lambda

1. Navigate to Lambda: In the AWS Console, find and open the Lambda service.
2. Create a Lambda Function:
Click "Create function".
Select "Author from scratch".
Function name: mcs46-hello-lambda.
Runtime: Select "Python 3.9" (or another familiar runtime like Node.js).
Architecture: Keep default x86_64.
Permissions: Keep "Create a new role with basic Lambda permissions". AWS will create an
execution role allowing the function to write logs to CloudWatch Logs.
Click "Create function".
3. Review and Edit Code:
The console will show a default lambda_function.py (or index.js etc.). The basic handler
takes event and context objects.
Modify the code slightly, for example:


```python
import json
import datetime

def lambda_handler(event, context):
    now = datetime.datetime.now()
    print(f"Received event: {json.dumps(event)}")  # Logs to CloudWatch

    name = event.get('name', 'World')  # Get 'name' from input, default to 'World'

    return {
        'statusCode': 200,
        'body': json.dumps(f'Hello {name} from Lambda! The time is {now.isoformat()}')
    }
```

Click "Deploy" to save your changes.


4. Test the Function:
Go to the "Test" tab.
Create a new test event:
Event name: TestEventWithName
Template: hello-world (can leave default)
Modify the JSON payload: { "name": "MCS46 Student" }
Click "Save changes".
Select TestEventWithName from the dropdown and click the "Test" button.
Observe the "Execution results" tab:
Response: Should show the JSON returned by your function (e.g., {"statusCode": 200,
"body": "\"Hello MCS46 Student from Lambda! The time is ...\""}).
Function Logs: Shows output from print() statements and execution summary
(duration, memory used). Click the "Log group" link to view logs in CloudWatch Logs.
Try another test event without the "name" key to see the default "World".
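
The function can also be invoked from the AWS CLI; a sketch (the --cli-binary-format flag is needed on AWS CLI v2 so the payload is read as raw JSON):

```bash
aws lambda invoke \
  --function-name mcs46-hello-lambda \
  --cli-binary-format raw-in-base64-out \
  --payload '{"name": "MCS46 Student"}' \
  out.json

cat out.json   # {"statusCode": 200, "body": "\"Hello MCS46 Student from Lambda! ...\""}
```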

Part B: AWS CloudFormation

1. Create a Simple CloudFormation Template:


Create a text file named mcs46-s3-template.yaml on your local machine.
Paste the following YAML content (this template defines a single S3 bucket):

AWSTemplateFormatVersion: '2010-09-09'
Description: Simple CloudFormation template to create an S3 bucket for MCS46 Lab.

Resources:
  MyS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub 'mcs46-cfn-bucket-${AWS::AccountId}-${AWS::Region}' # Creates a unique bucket name
      Tags:
        - Key: Project
          Value: MCS46-Lab

Outputs:
  BucketName:
    Description: Name of the created S3 bucket
    Value: !Ref MyS3Bucket
  BucketArn:
    Description: ARN of the created S3 bucket
    Value: !GetAtt MyS3Bucket.Arn

2. Navigate to CloudFormation: Find and open the CloudFormation service.


3. Create a Stack:
Click "Create stack" -> "With new resources (standard)".
Prepare template: Select "Template is ready".
Template source: Select "Upload a template file". Click "Choose file" and select your mcs46-s3-
template.yaml.
Click "Next".
Stack name: mcs46-s3-stack.
Parameters: This template doesn't have any parameters. Click "Next".
Configure stack options: Keep defaults for this lab (can explore tags, permissions, rollback config
later). Click "Next".
Review the details. Acknowledge that CloudFormation might create IAM resources if the
template included them (this one doesn't, but it's good practice).
Click "Create stack".
4. Monitor Stack Creation:
The stack status will go through stages like CREATE_IN_PROGRESS.
Go to the "Events" tab to see CloudFormation interacting with AWS APIs.
Go to the "Resources" tab to see the S3 bucket being created.
Go to the "Outputs" tab to see the defined outputs (BucketName, BucketArn) once creation is
complete (CREATE_COMPLETE status).
Verify in the S3 console that the bucket was actually created with the correct name and tag.
5. Delete the Stack:
Select mcs46-s3-stack. Click "Delete".
Confirm deletion.
Monitor the deletion process (DELETE_IN_PROGRESS -> DELETE_COMPLETE).
Verify in the S3 console that the bucket created by CloudFormation has been deleted.
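
The same create/monitor/delete lifecycle can be driven from code instead of the console. Below is a minimal boto3 sketch (assuming the mcs46-s3-template.yaml file from step 1 is in the current directory and credentials are configured):

import boto3

cfn = boto3.client('cloudformation')

# Create the stack from the local template file
with open('mcs46-s3-template.yaml') as f:
    template_body = f.read()

cfn.create_stack(StackName='mcs46-s3-stack', TemplateBody=template_body)

# Block until the stack reaches CREATE_COMPLETE (or fails)
cfn.get_waiter('stack_create_complete').wait(StackName='mcs46-s3-stack')

# Read back the Outputs defined in the template
stack = cfn.describe_stacks(StackName='mcs46-s3-stack')['Stacks'][0]
for output in stack.get('Outputs', []):
    print(output['OutputKey'], '=', output['OutputValue'])

# Tear down, mirroring the console "Delete" action
cfn.delete_stack(StackName='mcs46-s3-stack')
cfn.get_waiter('stack_delete_complete').wait(StackName='mcs46-s3-stack')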

Clean Up:

Lambda: Select mcs46-hello-lambda. Click "Actions" -> "Delete function". Confirm deletion. Also
delete the associated execution role from IAM -> Roles if desired (search for role name containing
mcs46-hello-lambda). Delete the CloudWatch Log Group associated with the function if desired.
CloudFormation: Ensure the stack mcs46-s3-stack is deleted.

Conclusion: Successfully created and tested a serverless AWS Lambda function, understanding its basic execution model and logging. Used AWS CloudFormation to define infrastructure (an S3 bucket) in a template and manage its lifecycle (create/delete) via a stack, demonstrating the principles of Infrastructure as Code.


Experiment 11: Security on AWS (Focus on Security Groups/NACLs, Config, CloudTrail)

Aim: To understand and utilize fundamental AWS security services and features, including network controls
(Security Groups, NACLs), configuration monitoring (AWS Config), and API activity logging (AWS CloudTrail).

Theory:

Security Groups (SGs): Act as a stateful firewall for EC2 instances (and other resources like RDS,
ElastiCache nodes). Control inbound and outbound traffic at the instance level. Rules are "allow" only
(no deny rules). Stateful means that if traffic is allowed in one direction, the return traffic for that connection is automatically allowed, regardless of the rules in the opposite direction.
Network Access Control Lists (NACLs): Act as a stateless firewall for subnets within a VPC. Control
inbound and outbound traffic at the subnet level. Rules can be "allow" or "deny" and are evaluated in
order based on rule number (lowest number first). Stateless means return traffic must be explicitly
allowed by a corresponding outbound/inbound rule. Default NACL allows all traffic. Custom NACLs
deny all traffic until rules are added.
AWS Config: A service that enables you to assess, audit, and evaluate the configurations of your AWS
resources. Continuously monitors and records resource configurations and allows you to automate
evaluation against desired configurations using Config Rules (managed or custom). Helps with
compliance auditing, security analysis, change tracking, and troubleshooting.
AWS CloudTrail: Records AWS API calls for your account and delivers log files to an S3 bucket.
Provides event history of account activity, including actions taken through the AWS Management
Console, SDKs, CLI, and other AWS services. Essential for security analysis, resource change tracking,
compliance auditing, and operational troubleshooting. Enabled by default for recent events (Event
history), but creating a "Trail" provides long-term storage and advanced features.

Procedure:

Part A: Security Groups and NACLs Review

1. Review Security Groups:


Navigate to EC2 -> Security Groups.
Select a security group used in a previous experiment (e.g., webserver-sg or rds-sg).
Examine the "Inbound rules" and "Outbound rules".
Understand what traffic is allowed (ports, protocols, sources/destinations). Note the stateful
nature (typically outbound allows all, and inbound restricts).
Consider the principle of least privilege: Are the rules too permissive? (e.g., allowing SSH from
0.0.0.0/0 is generally bad practice).
2. Review Network ACLs:
Navigate to VPC -> Network ACLs.
Select the default NACL associated with your VPC (or a custom VPC used previously).
Examine the "Inbound Rules" and "Outbound Rules". Note that the default NACL allows ALL traffic via its allow rule (rule 100); the catch-all rule numbered * denies any traffic that no numbered rule has matched.
Examine the rule numbering and the stateless nature (separate inbound/outbound rules needed
for request/response).
3. (Optional) Modify NACL (Use Caution!):
Identify the subnet ID your EC2 instance resides in (VPC -> Subnets).
Select the NACL associated with that subnet. Go to "Inbound Rules" -> "Edit inbound rules".

Click "Add new rule".


Rule number: 90 (evaluated before the default allow rule 100; lower-numbered rules are evaluated first).
Type: SSH (22).
Protocol: TCP (6).
Port Range: 22.
Source: 0.0.0.0/0.
Allow/Deny: Deny.
Click "Save changes".
Try to SSH into your EC2 instance. It should now fail because the NACL denies the traffic before it
reaches the instance/security group.
Important: Remove the Deny rule afterwards to restore connectivity! Select the rule (90) and
click "Remove", then "Save changes".

Part B: AWS Config

1. Navigate to AWS Config: Find and open the AWS Config service.
2. Initial Setup (if first time):
If prompted, click "Get started".
Settings:
Record specific resource types: You can choose specific types or "Record all resources
supported...". Choose "Record all..." for wider visibility in the lab. Check "Include global
resources".
Amazon S3 bucket: Config needs an S3 bucket to store history/snapshot files. Choose
"Create a bucket" or select an existing one.
Amazon SNS topic: Optional for notifications. Skip for now.
AWS Config role: Choose "Create AWS Config service-linked role".
Click "Next".
Rules (optional): Skip adding rules for now. Click "Next".
Review and click "Confirm". (Allow some time for initial setup and discovery).
3. Explore Config Dashboard:
Once set up, explore the dashboard. It shows resource inventory counts and compliance status (if
rules are active).
Go to "Resources" in the left pane.
Filter by resource type (e.g., AWS::EC2::Instance or AWS::S3::Bucket).
Select a specific resource (e.g., an EC2 instance or S3 bucket from previous labs).
4. View Resource Configuration and History:
On the resource details page, view the "Configuration details" (current state).
Click on the "Configuration timeline". This shows changes detected over time. Click on different
timeline points to see how the configuration looked at that time.
5. (Optional) Enable a Managed Config Rule:
Go to "Rules". Click "Add rule".
Filter by "managed". Search for a simple rule, e.g., s3-bucket-public-read-prohibited.
Select the rule and click "Next".
Configure frequency (e.g., Triggered by configuration changes).
Review and click "Add rule".
Wait for the rule to evaluate compliance status (may take time). Check the dashboard or rule
details page.
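
Config's recorded data can also be queried programmatically. Below is a minimal boto3 sketch (the bucket name is a hypothetical placeholder; it assumes Config is already recording, as set up above):

import boto3

config = boto3.client('config')

# Placeholder resource ID -- use a bucket name actually recorded by Config
BUCKET_NAME = 'mcs46-example-bucket'

# Fetch the configuration timeline for one resource (newest first)
history = config.get_resource_config_history(
    resourceType='AWS::S3::Bucket',
    resourceId=BUCKET_NAME,
    limit=5,
)
for item in history['configurationItems']:
    print(item['configurationItemCaptureTime'], item['configurationItemStatus'])

# Summarize compliance for any active Config rules
for rule in config.describe_compliance_by_config_rule()['ComplianceByConfigRules']:
    print(rule['ConfigRuleName'], rule['Compliance']['ComplianceType'])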


Part C: AWS CloudTrail

1. Navigate to CloudTrail: Find and open the CloudTrail service.


2. Explore Event History:
Go to "Event history". This shows the last 90 days of management events recorded by CloudTrail
by default.
Browse recent events. See API calls made by your user or AWS services (e.g., RunInstances,
CreateBucket, CreateFunction, PutObject, console sign-ins).
Click on an event to view its details (timestamp, user identity, event source, resource names,
request parameters, response elements).
Use filters (e.g., filter by "Event name" like TerminateInstances or by "User name").
3. (Optional) Create a Trail:
Go to "Trails". Click "Create trail".
Trail name: mcs46-management-trail.
Check "Enable for all accounts in my organization" (if applicable) or keep as single-account trail.
Storage location: Choose "Create new S3 bucket" or specify an existing one where log files will be
stored long-term.
Log file SSE-S3 encryption: Keep enabled.
CloudWatch Logs: Optional, enables sending logs also to CloudWatch Logs for easier
searching/alarming. Enable it and choose "New" log group (or existing). Create a new IAM role
for CloudTrail to write to CloudWatch Logs.
Management events: Ensure "Read" and "Write" are checked.
Data events (optional, high volume/cost): Can log S3 object-level or Lambda function-level
activity. Skip for this lab.
Insights events (optional, cost): Detect unusual activity. Skip.
Review and click "Create trail".
(Note: Creating a trail ensures logs are kept beyond 90 days in S3).
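
The same search can be performed from code. Below is a minimal boto3 sketch that mirrors the console filter on "Event name" used in step 2:

import boto3

cloudtrail = boto3.client('cloudtrail')

# Look up recent management events by event name (same filter as the console)
events = cloudtrail.lookup_events(
    LookupAttributes=[{'AttributeKey': 'EventName', 'AttributeValue': 'TerminateInstances'}],
    MaxResults=10,
)
for event in events['Events']:
    print(event['EventTime'], event['EventName'], event.get('Username'))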

Clean Up:

NACLs: Ensure any temporary Deny rules added to NACLs are removed.
AWS Config:
If you added a rule, select it in "Rules" and click "Actions" -> "Delete rule".
Go to "Settings". Note the S3 bucket name used by Config. Click "Edit". You can stop recording
by unchecking resource types, but fully disabling Config often involves deleting the delivery
channel and configuration recorder via CLI/SDK. For the lab, stopping recording might suffice.
Alternatively, delete the S3 bucket after stopping recording (ensure no other service needs it).
Delete the Config role if desired.
CloudTrail:
If you created a trail (mcs46-management-trail), select it and click "Delete". Confirm.
Delete the associated S3 bucket and CloudWatch Log Group (if created for the trail) if no longer
needed. Delete the IAM role created for CloudTrail logging to CloudWatch if desired.

Conclusion: Reviewed and contrasted the functionality of Security Groups (stateful, instance-level) and
Network ACLs (stateless, subnet-level). Explored AWS Config for monitoring resource configurations and
viewing change history. Utilized AWS CloudTrail to view API activity logs for security analysis and auditing.
Gained understanding of these fundamental AWS security tools.
