Database
• Amazon RDS (Relational Database Service) is a web service that facilitates the setup,
operation, and scaling of relational databases in the AWS cloud.
• It can contain multiple user-created databases and can be managed via the AWS Console, CLI, or RDS API.
• Supported engines: MySQL, MariaDB, PostgreSQL, Oracle, Microsoft SQL Server, and Amazon Aurora (with MySQL and PostgreSQL compatibility).
• Each engine has parameters in a database parameter group controlling the behavior of
managed databases.
4. Storage Options:
• RDS storage types include Magnetic, General Purpose SSD, and Provisioned IOPS.
• Storage volumes are configured similarly to EC2, with dynamic scaling options available.
• Automated backups are configured during the backup window and save snapshots based on
the backup retention period.
• Backups include full storage volume snapshots of the database instance, with the first
snapshot being a full backup and subsequent ones being incremental.
• Manual snapshots can be created and shared with other AWS accounts; automated snapshots cannot be shared directly and must first be copied to a manual snapshot.
• Configuration Steps:
7. Post-Creation Monitoring:
• After creation, monitor the database instance for CPU utilization, storage usage, and free
memory.
• Check the configuration and ensure security group rules allow necessary access.
• Familiarity with RDS instance types, storage autoscaling, multi-AZ deployments, and
database authentication methods.
• Remember important details like the default MySQL port (3306) and database security group
settings.
Which of the following database engines is NOT supported by Amazon RDS?
• A) MySQL
• B) Oracle
• C) MongoDB
• D) PostgreSQL
• Answer: C) MongoDB
What is the default port number for a MySQL database instance in Amazon RDS?
• A) 1521
• B) 1433
• C) 3306
• D) 5432
• Answer: C) 3306
Which storage type is recommended for workloads requiring high and consistent I/O
performance in Amazon RDS?
• A) Magnetic
• B) General Purpose SSD
• C) Provisioned IOPS
• D) Standard SSD
• Answer: C) Provisioned IOPS
In Amazon RDS, which feature allows you to automatically scale the storage of your database
instance when it reaches capacity?
• A) Multi-AZ Deployment
• B) Enhanced Monitoring
• C) Storage Autoscaling
• D) Read Replicas
• Answer: C) Storage Autoscaling
You have an RDS instance using the MySQL engine. Which type of snapshot can you share
with another AWS account?
• A) Automated snapshot
• B) Manual snapshot
• C) Incremental snapshot
• D) Full snapshot
• Answer: B) Manual snapshot
Which Amazon RDS feature allows you to define when your database instance should be
automatically backed up?
• B) Backup Window
• Answer: B) Backup Window
What type of RDS instance class is best suited for memory-intensive workloads?
• A) T2 Burstable class
• B) Standard class
Which of the following methods is NOT a valid way to manage an Amazon RDS database
instance?
• C) RDS API
• D) AWS Lambda
• Answer: D) AWS Lambda
• Purpose: Multi-AZ deployment is designed to provide high availability (HA) for your
application by creating redundancy across different availability zones.
• Configuration:
o The client application can only connect to the primary/master database via its
database endpoint.
• Synchronous Replication:
o Updates made to the master database are synchronously replicated to the standby
database.
o This ensures that both databases are always in sync, maintaining data consistency.
• Failover Process:
o In case the master database fails or the AZ goes down, the standby instance is
automatically promoted to become the new master.
o The database endpoint switches to the new master, allowing the client to reconnect
seamlessly.
• Key Points:
o The database endpoint switches to the standby during failover to ensure the
application remains connected.
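The key points above can be sketched in a few lines of Python. This is a hedged illustration, not AWS code: the endpoint name is hypothetical, and a plain dict stands in for the DNS record that RDS actually maintains.

```python
# Minimal sketch of why clients survive a Multi-AZ failover: the application
# always connects through a single DNS endpoint; on failover, RDS repoints
# that endpoint to the promoted standby. All names here are hypothetical.

ENDPOINT = "mydb.abc123.us-east-1.rds.amazonaws.com"  # hypothetical endpoint

# DNS-like mapping maintained by RDS, never touched by the client.
dns = {ENDPOINT: "primary-az1"}

def connect(endpoint):
    """The client resolves the endpoint and connects to whatever it points at."""
    return dns[endpoint]

assert connect(ENDPOINT) == "primary-az1"

# Failover: RDS promotes the standby and repoints the endpoint.
dns[ENDPOINT] = "standby-az2"

# The client reconnects with the SAME endpoint and reaches the new primary.
assert connect(ENDPOINT) == "standby-az2"
```

The client configuration never changes, which is exactly why the reconnection is seamless.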
What is the primary purpose of an RDS Multi-AZ deployment?
• A) Data encryption
• B) Cost optimization
• C) High availability
• D) Performance improvement
• Answer: C) High availability
In an RDS Multi-AZ deployment, which of the following is true about the standby instance?
How does the replication between the primary and standby instances occur in an RDS Multi-
AZ deployment?
• A) Asynchronous replication
• B) Synchronous replication
• C) Periodic backups
• D) Manual replication
• Answer: B) Synchronous replication
Which of the following statements is true about the client connection in a Multi-AZ
deployment?
• B) The client connects only to the primary instance via its endpoint.
• Answer: B) The client connects only to the primary instance via its endpoint.
Which type of replication ensures that the standby instance is always in sync with the primary
in a Multi-AZ deployment?
• A) Asynchronous replication
• B) Synchronous replication
• C) Snapshot replication
• D) Log-based replication
• Answer: B) Synchronous replication
When configuring an RDS instance for Multi-AZ, where is the standby instance located?
• D) On-premises
What is the impact on database availability during the promotion of a standby instance in a
Multi-AZ deployment?
Which feature of RDS Multi-AZ deployment helps maintain high availability during routine
maintenance?
Overview
• Purpose: Read Replicas in RDS are used to achieve high scalability by creating read-only
copies of a database instance. These replicas are designed to serve high-volume application
read traffic, which increases the overall read throughput.
Key Points
2. Scaling Options:
o Scale Up: Change the database instance type to a larger one (e.g., from db.m5.large
to db.m5.8xlarge).
o Scale Out: Use Read Replicas to distribute the read traffic across multiple instances.
3. Architecture:
o Read Replicas are deployed in one or more Availability Zones (AZs) to handle read-
only operations and reduce the load on the primary instance.
4. Asynchronous Replication:
o Data from the master database is copied to Read Replicas using asynchronous
replication.
o This means Read Replicas may not always be in sync with the master database and
may experience some delays in receiving updates.
5. Usage Scenarios:
o Read-heavy Workloads: Direct excess read traffic to Read Replicas to scale beyond
the compute or I/O capacity of a single database instance.
o Running Complex Queries: Use Read Replicas for business reporting or data
warehousing queries to offload the primary database instance.
6. Creation Process:
o Amazon RDS takes a snapshot of the source instance and creates a read-only
instance from that snapshot.
o Asynchronous replication is used to update the Read Replica whenever the source
database instance changes.
7. Cross-Region Replication:
o This capability is advantageous for disaster recovery and for improving read
performance in different geographic locations.
o This is useful for scenarios like upgrading a database engine version or for disaster
recovery purposes.
o Replicating: The Read Replica is actively replicating data from the source instance.
o Error: An issue has occurred, and the replication is not functioning correctly.
o Replication Stop Point Set/Reached (MySQL only): Specific replication stop points
are set or reached.
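The scale-out pattern described above (writes to the primary, reads spread across replicas) can be sketched as a small router. This is an assumption-laden illustration: the endpoint names are invented, and real applications typically do this routing in a connection library or proxy.

```python
# Sketch of scale-out with Read Replicas: writes must go to the primary
# endpoint, while reads are round-robined across replica endpoints.
import itertools

PRIMARY = "mydb.example.rds.amazonaws.com"               # hypothetical names
REPLICAS = ["mydb-replica-1.example.rds.amazonaws.com",
            "mydb-replica-2.example.rds.amazonaws.com"]

_replica_cycle = itertools.cycle(REPLICAS)

def route(statement):
    """Return the endpoint a SQL statement should be sent to."""
    if statement.strip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
        return PRIMARY                       # replicas are read-only
    return next(_replica_cycle)              # distribute the read traffic

assert route("INSERT INTO orders VALUES (1)") == PRIMARY
assert route("SELECT * FROM orders") in REPLICAS
```

Because replication is asynchronous, a read routed to a replica may return slightly stale data, which is the trade-off noted in point 4 above.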
1. What is the primary purpose of using Read Replicas in Amazon RDS?
• A) To achieve high availability by replicating data across multiple AZs
• B) To offload write-heavy workloads to additional instances
• C) To improve scalability by serving high-volume read traffic from read-only copies
• D) To store backups in a different region
2. How does Amazon RDS replicate data from the master database instance to Read Replicas?
• A) Synchronously, ensuring data is always up to date
• B) Asynchronously, allowing for potential delays in data propagation
• C) Using Amazon S3 as an intermediary for data transfer
• D) By creating manual snapshots and restoring them periodically
3. Which of the following statements is true regarding Read Replicas?
• A) Read Replicas can be promoted to standalone database instances.
• B) Read Replicas support both read and write operations.
• C) Read Replicas must always be in the same region as the source database.
• D) Read Replicas automatically replicate specific databases from the source instance.
4. Which scenario would benefit the most from deploying Read Replicas?
• A) Handling failover when the primary instance goes down
• B) Distributing write-heavy workloads across multiple instances
• C) Serving read traffic while the source database instance is under maintenance
• D) Ensuring zero downtime during database upgrades
5. What type of replication method is used by Amazon RDS to update Read Replicas?
• A) Synchronous replication
• B) Full backup and restore replication
• C) Asynchronous replication
• D) Streaming replication
6. What happens when you promote a Read Replica to a standalone database instance?
• A) The replica continues to receive updates from the source instance.
• B) The replica stops replicating data from the source and becomes independent.
• C) The replica gets deleted after promotion.
• D) The replica automatically becomes the primary instance.
7. Which of the following is a valid reason to create a Read Replica in a different region?
• A) To improve write performance across multiple regions
• B) To lower costs by using cheaper instances in another region
• C) To provide disaster recovery capabilities by having a copy in another region
• D) To reduce latency for write operations in a different region
8. What is a potential drawback of using Read Replicas in Amazon RDS?
• A) Increased complexity in managing multiple write operations
• B) Synchronous replication leading to performance degradation
• C) Data on Read Replicas might be slightly stale due to asynchronous replication
• D) Automatic promotion of Read Replicas in case of source instance failure
9. Which RDS database engine does NOT support Read Replicas?
• A) MySQL
• B) PostgreSQL
• C) MariaDB
• D) Microsoft SQL Server
10. How can you reduce the load on the master database instance when using Read Replicas?
• A) Direct all read and write operations to the master instance
• B) Direct read-heavy queries to the Read Replicas
• C) Scale up the master instance to handle more requests
• D) Enable Multi-AZ deployment for the master instance
Multi-AZ Deployment
• Client Connections: Clients can only connect to the primary (active) instance. Secondary
(standby) instances are not accessible to clients.
• Availability Zones: Always spans two availability zones within a single region.
Read Replicas
• Replication Type: Asynchronous replication, providing scalability rather than high durability.
• Availability Zones: Can be within a single availability zone, multiple availability zones, or
across regions.
• Database Engine Upgrades: The upgrade process is independent from the source instance.
• Promotion: Read replicas can be manually promoted to a standalone database instance if
necessary.
• A) Improved scalability
• C) Lower latency
• D) Cost savings
• A) Primary instance
• B) Standby instance
Which of the following is true about client connections in a Multi-AZ RDS deployment?
• A) Synchronously
• B) Asynchronously
Which of the following scenarios is best suited for using Read Replicas?
• A) Multi-AZ deployment
• B) Read Replicas
Which deployment type automatically fails over to a standby instance when a problem is
detected?
• A) Multi-AZ deployment
• B) Read Replicas
• C) Cross-region replication
• D) Standalone instances
• RPO (Recovery Point Objective):
o Definition: The maximum period of data loss that is acceptable in the event of a failure or incident.
• RTO (Recovery Time Objective):
o Definition: The maximum amount of downtime allowed to recover from a backup and resume processing.
o Example: If an SLA specifies an RTO of 1 hour, the system must recover within 1 hour
after a crash.
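The two targets can be checked mechanically. A minimal sketch, assuming backups run at a fixed interval (so the worst-case data loss after a failure is one full interval):

```python
# Hedged example: validating a backup schedule against RPO/RTO targets.
# Worst-case data loss = time since the last backup, so the backup interval
# must not exceed the RPO; the restore time must not exceed the RTO.

def meets_rpo(backup_interval_min, rpo_min):
    return backup_interval_min <= rpo_min

def meets_rto(restore_time_min, rto_min):
    return restore_time_min <= rto_min

# RPO = 10 minutes: backups every 10 minutes are acceptable, hourly are not.
assert meets_rpo(10, 10) is True
assert meets_rpo(60, 10) is False

# RTO = 60 minutes: a 45-minute restore meets the target.
assert meets_rto(45, 60) is True
```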
4. A company has an RTO of 2 hours for its database. What does this mean?
6. A company has set an RPO of 10 minutes and an RTO of 1 hour for its critical database. Which
of the following is true?
• B) The company can tolerate losing 10 minutes of data and must restore the database
within 1 hour.
7. In the context of disaster recovery, what is the primary purpose of setting an RPO?
8. How does RTO affect the choice of backup and disaster recovery solutions?
• A) A shorter RTO may require faster recovery methods, such as active-passive failover.
• B) 30 minutes
• C) 1 hour
• D) 24 hours
10. Which of the following scenarios would likely require a shorter RTO?
1. Database Type:
2. Data Relationships:
3. Data Structure:
o SQL: Table-based databases, representing data in the form of rows and columns.
4. Schema:
o SQL: Predefined, fixed schema.
o NoSQL: Dynamic schema for unstructured data.
5. Scalability:
o SQL: Vertically scalable (increasing CPU, RAM, SSD, etc., on a single server).
o NoSQL: Horizontally scalable (adding more servers to distribute the load).
6. Query Language:
o SQL: Uses SQL (Structured Query Language) for defining and manipulating data.
7. Foreign Keys:
8. Complex Queries:
o NoSQL: Not ideal for complex queries but better for hierarchical data storage (e.g.,
key-value pairs similar to JSON).
9. Transaction Management:
o SQL: Emphasizes ACID properties (Atomicity, Consistency, Isolation, Durability).
10. Examples:
o SQL: MySQL, Oracle, MS-SQL Server, PostgreSQL.
o NoSQL: DynamoDB, MongoDB.
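The structural difference in point 3 is easiest to see with the same record in both models. A small illustrative sketch (the field names are invented):

```python
# SQL vs NoSQL data structure, side by side.

# SQL-style: a row in a table where every row has the same predefined columns.
sql_row = ("u1", "Alice", "alice@example.com")       # (id, name, email)

# NoSQL-style: a JSON-like item; attributes can vary and nest per item.
nosql_item = {"id": "u1", "name": "Alice",
              "preferences": {"theme": "dark"}}      # nested, no fixed schema

# A second item with different attributes is perfectly valid in NoSQL,
# but would require a schema change in a SQL table.
nosql_item2 = {"id": "u2", "name": "Bob"}

assert nosql_item["preferences"]["theme"] == "dark"
```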
• A) NoSQL
• B) SQL
• C) Document-based
• D) Key-value pairs
• A) Predefined schema
• B) Dynamic schema
• D) Vertical scalability
• A) Vertical scalability
• B) Horizontal scalability
• A) Key-value pairs
• B) SQL
• C) Wide-column stores
• D) NoSQL
Which of the following best describes the data structure of a NoSQL database?
• A) Document-based
• B) Table-based
• C) Structured data
• D) Relational data
Which of the following SQL features is not typically found in NoSQL databases?
• A) Dynamic schema
• B) Horizontal scalability
• C) Foreign keys
• D) Distributed storage
• A) NoSQL
• B) SQL
• C) Key-value store
• D) Graph database
• A) DynamoDB
• B) MySQL
• C) Oracle
• D) MS-SQL
Data Structure:
• Data Format: Items are stored in a JSON-like document format.
• Core Components: Tables, items, and attributes; an item is a collection of attributes.
Examples:
• People Table:
• Cars Table:
o Example:
▪ Music Table: artist as the partition key and song_title as the sort key.
• ACID Transactions:
• Encryption:
o Data is encrypted at rest, even when the table is not in use, enhancing security.
• API Requests:
o Unlike relational databases that use SQL, DynamoDB interacts through HTTP POST
API requests.
o Queries and actions are performed through these API requests, and responses are
received over HTTP.
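The composite-key idea from the Music table example above can be modeled with a dict keyed by the (partition key, sort key) pair. A sketch with invented item data:

```python
# Composite primary key, following the notes' Music table example:
# artist = partition key, song_title = sort key.

music = {}  # stand-in for a DynamoDB table

def put_item(item):
    music[(item["artist"], item["song_title"])] = item

put_item({"artist": "No One You Know", "song_title": "Call Me Today",
          "price": 1.99})
put_item({"artist": "No One You Know", "song_title": "Howdy", "price": 0.99})

# A query on the partition key alone returns every song by that artist.
songs = [i for (artist, _), i in music.items() if artist == "No One You Know"]
assert len(songs) == 2

# Both keys together identify exactly one item.
assert music[("No One You Know", "Howdy")]["price"] == 0.99
```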
• A) Relational Database
• B) NoSQL Database
• C) Graph Database
• D) In-Memory Database
In DynamoDB, what is an item?
• C) A collection of attributes
• D) A collection of tables
Which key structure does DynamoDB support for uniquely identifying items in a table?
In DynamoDB, what is the difference between a partition key and a composite primary key?
• A) A partition key is a single attribute; a composite primary key consists of a partition key
and a sort key.
• C) A partition key is a simple primary key; a composite primary key uses indexes.
• D) A partition key is for indexing; a composite primary key is for storing data.
• Answer: A) A partition key is a single attribute; a composite primary key consists of a partition key and a sort key.
• A) XML
• B) CSV
• C) JSON
• D) Parquet
Which of the following DynamoDB features allows you to restore data to any point in time?
• A) DynamoDB Streams
• C) Global Tables
Which of the following best describes how applications interact with DynamoDB?
Global Secondary Index (GSI) vs. Local Secondary Index (LSI) in AWS DynamoDB
• Primary Key: Must be composite, requiring both a partition key and a sort key.
• Creation: Must be created at the time of table creation; cannot be added later.
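The two LSI rules above (composite key required, declared at table creation) show up directly in the CreateTable request shape. A sketch of the parameter dict — the table, index, and attribute names are illustrative, and no request is actually sent:

```python
# Shape of a CreateTable request that declares a Local Secondary Index:
# the LSI shares the table's partition key, supplies an alternate sort key,
# and must be part of the table-creation call (it cannot be added later).

create_table_params = {
    "TableName": "Music",
    "KeySchema": [
        {"AttributeName": "artist", "KeyType": "HASH"},       # partition key
        {"AttributeName": "song_title", "KeyType": "RANGE"},  # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "artist", "AttributeType": "S"},
        {"AttributeName": "song_title", "AttributeType": "S"},
        {"AttributeName": "release_year", "AttributeType": "N"},
    ],
    "LocalSecondaryIndexes": [{
        "IndexName": "ArtistByYear",
        "KeySchema": [
            {"AttributeName": "artist", "KeyType": "HASH"},        # same partition key
            {"AttributeName": "release_year", "KeyType": "RANGE"}, # alternate sort key
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    "BillingMode": "PAY_PER_REQUEST",
}

# The LSI's partition key must match the table's partition key.
lsi = create_table_params["LocalSecondaryIndexes"][0]
assert lsi["KeySchema"][0] == create_table_params["KeySchema"][0]
```

In practice these parameters would be passed to boto3's `client("dynamodb").create_table(**create_table_params)`.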
1. Which of the following statements is true about Global Secondary Index (GSI) in DynamoDB?
• D) Only when the table has at least one Local Secondary Index (LSI)
3. Which of the following is required for a Local Secondary Index (LSI) in DynamoDB?
• B) The partition key must be unique and different from the table's partition key
4. What type of consistency does Global Secondary Index (GSI) in DynamoDB support?
5. What type of queries can be performed using a Local Secondary Index (LSI) in DynamoDB?
6. Which of the following must be true to create a Local Secondary Index (LSI) in DynamoDB?
7. Which of the following is a difference between Global Secondary Index (GSI) and Local
Secondary Index (LSI)?
• A) GSI can only be created at table creation, while LSI can be added later.
• B) LSI supports only eventual consistency, while GSI supports strong consistency.
• C) GSI allows querying the entire table, while LSI allows querying within a single
partition.
• D) LSI can be created at any time, while GSI must be created at table creation.
8. What is the primary benefit of using a Global Secondary Index (GSI) in DynamoDB?
9. Which of the following is a true statement about Local Secondary Index (LSI) in DynamoDB?
• B) The sort key can be any attribute, but the partition key must match the table's partition
key.
• C) The sort key must be the same as the table's sort key.
10. Which type of DynamoDB index supports both eventual and strong consistency for read
operations?
• When your application writes data to a DynamoDB table and receives an HTTP 200 (OK)
response, it confirms that the write operation has occurred and is durable.
• Eventually Consistent Reads:
o After a write operation, reading the data immediately may not always return the most up-to-date result.
o The data typically becomes consistent within a fraction of a second.
Throughput Capacity
• When creating a DynamoDB table or index, you must specify the read and write capacity
requirements to ensure consistent, low-latency performance.
• On-Demand Capacity Mode: Pay only for the read/write operations you perform, with no
need for prior capacity planning.
Capacity Units
o 1 RCU = One strongly consistent read per second, or two eventually consistent reads
per second, for an item up to 4 KB in size.
o 1 WCU = One write per second for an item up to 1 KB in size.
Example Scenarios
• Read Operations:
o With 1 RCU:
• Write Operations:
o With 1 WCU:
• For Reads:
• For Writes:
o Required capacity = X WCUs, where X is the item size in KB, rounded up to the nearest whole KB.
• If the read/write requests exceed the throughput settings, DynamoDB may throttle the
request to prevent overconsumption.
Capacity Planning
• Always check the units (KB, MB, etc.) and time intervals (seconds, minutes, hours) when
calculating throughput capacity to ensure accurate provisioning.
• On-Demand: Pay only for the read/write operations as they occur, without upfront capacity
planning.
• 5 RCUs:
• 5 WCUs:
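The capacity-unit arithmetic above can be captured in two small functions. A sketch following the stated rules (4 KB per read unit, 1 KB per write unit, sizes rounded up):

```python
import math

# Capacity-unit math from the notes: 1 RCU = one strongly consistent read/sec
# (or two eventually consistent reads/sec) for an item up to 4 KB; 1 WCU = one
# write/sec for an item up to 1 KB. Sizes round UP to the next full unit.

def rcus_needed(item_kb, strongly_consistent=True):
    chunks = math.ceil(item_kb / 4)               # 4 KB per read unit
    return chunks if strongly_consistent else math.ceil(chunks / 2)

def wcus_needed(item_kb):
    return math.ceil(item_kb / 1)                 # 1 KB per write unit

assert rcus_needed(16, strongly_consistent=True) == 4   # 16 KB strong read
assert rcus_needed(8, strongly_consistent=False) == 1   # 8 KB eventual read
assert wcus_needed(1.5) == 2                            # 1.5 KB write rounds up
```

These match the worked questions below: an 8 KB eventually consistent read needs 1 RCU, and a 16 KB strongly consistent read needs 4 RCUs.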
1. When your application writes data to a DynamoDB table and receives an HTTP 200 response,
what does this indicate?
3. What is the primary difference between eventual consistency and strong consistency in
DynamoDB?
• B) Strong consistency guarantees that the most recent data is returned, while eventual
consistency may not.
• D) Strong consistency requires more write capacity units than eventual consistency.
4. How many strongly consistent reads per second can 1 Read Capacity Unit (RCU) support for
an item up to 4 KB in size?
5. Which statement correctly describes 1 Write Capacity Unit (WCU)?
• A) 1 Write Capacity Unit (WCU) can handle two writes per second for an item up to 1 KB in size.
• B) 1 Write Capacity Unit (WCU) can handle one write per second for an item up to 1 KB in
size.
• C) 1 Write Capacity Unit (WCU) can handle one write per second for an item up to 4 KB in size.
• D) 1 Write Capacity Unit (WCU) can handle two writes per second for an item up to 4 KB in size.
• Answer: B) 1 Write Capacity Unit (WCU) can handle one write per second for an item up to 1 KB in size.
6. If your application performs an eventual consistent read on an item that is 8 KB in size, how
many Read Capacity Units (RCUs) are required?
• A) 1 RCU
• B) 2 RCUs
• C) 4 RCUs
• D) 8 RCUs
• Answer: A) 1 RCU
7. What happens if your application exceeds the provisioned read or write throughput on a
DynamoDB table?
• A) The request will be retried automatically.
• B) The request will be throttled and fail with a 400 HTTP status code.
• Answer: B) The request will be throttled and fail with a 400 HTTP status code.
8. In which scenario would you prefer to use on-demand capacity mode for a DynamoDB table?
9. How should you calculate the required number of read capacity units for a strongly
consistent read on an item of size 16 KB?
10. Which of the following best describes a scenario where you might encounter a
"ProvisionedThroughputExceededException" in DynamoDB?
• A) When your application exceeds the allocated read or write capacity units for the
table.
• Purpose: DynamoDB Streams is an optional feature that captures data modification events
in DynamoDB tables in real-time and in the same order of occurrence.
• Use Case: Essential for keeping track of database changes and the order in which they occur.
• How it Works:
▪ Insertion: Captures the image of the entire item, including all attributes.
▪ Update: Captures both the before and after images of modified attributes.
o Stream Record Contents: Includes the table name, event timestamp, and other
metadata. Each record is guaranteed to be delivered only once.
• Retention: Stream records are retained for 24 hours, allowing time for management,
diagnostics, and other processing tasks.
• Integration:
o AWS Lambda: Can be used with DynamoDB Streams to automatically trigger code
execution whenever an item of interest appears in a stream.
o Kinesis Client Library and DynamoDB APIs: Other methods for interacting with
stream data.
• Workflow Example:
o The stream record triggers an AWS Lambda function that reads the product details
and publishes a message to an Amazon SNS topic.
o Subscribers to the SNS topic, such as via email, are notified about the new product.
• Configuration:
o Streams can be enabled via the DynamoDB console under the "Overview" tab.
o View Types:
▪ Keys Only: Captures only the primary key attributes of the modified item.
▪ New Image: Captures the new image of the item after modification.
▪ New and Old Images: Captures both the before and after images of the item,
useful for updates.
• Enabling Streams:
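The Lambda workflow described above can be sketched as a handler for a Streams event. This is a hedged example: the event shape follows the DynamoDB Streams record format, but the product attributes are invented and the SNS publish step is left as a comment rather than a real call.

```python
# Sketch of a Lambda handler for a DynamoDB Streams event (view type
# NEW_AND_OLD_IMAGES): extract each newly inserted item and build the
# notification message that would be published to SNS.

def handler(event, context=None):
    messages = []
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue                                  # only react to new items
        new_image = record["dynamodb"]["NewImage"]    # DynamoDB-typed attributes
        name = new_image["name"]["S"]                 # {"S": ...} = string type
        messages.append(f"New product added: {name}")
        # A real function would publish each message to an SNS topic here,
        # e.g. boto3's sns.publish(TopicArn=..., Message=...).
    return messages

sample_event = {"Records": [
    {"eventName": "INSERT",
     "dynamodb": {"NewImage": {"id": {"S": "p1"}, "name": {"S": "Widget"}}}},
    {"eventName": "MODIFY",
     "dynamodb": {"NewImage": {"id": {"S": "p2"}, "name": {"S": "Gadget"}}}},
]}

assert handler(sample_event) == ["New product added: Widget"]
```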
• A) Table creation
• B) Index creation
When an item is updated in a DynamoDB table with streams enabled, what does the stream
capture?
• C) Both the new and old images of the modified attributes
• D) No data is captured
• Answer: C) Both the new and old images of the modified attributes
• A) 12 hours
• B) 24 hours
• C) 48 hours
• D) 7 days
• Answer: B) 24 hours
Which of the following AWS services can be triggered by DynamoDB Streams to execute code
in response to stream events?
• A) Amazon S3
• B) AWS Lambda
• C) Amazon EC2
• D) Amazon RDS
What type of data does the 'Keys Only' option capture in a DynamoDB Stream?
• A) All attributes of the item
Which view type in DynamoDB Streams should you choose if you want to capture both the
state of an item before and after an update?
• A) Keys Only
• B) New Image
• C) Old Image
• D) New and Old Images
• Answer: D) New and Old Images
What is a common use case for integrating DynamoDB Streams with AWS Lambda?
What information is guaranteed to be unique and delivered only once by DynamoDB Streams?
• A) Stream Record ID
• B) Table Name
• C) Event Timestamp
• D) Primary Key
Which of the following can you configure when enabling DynamoDB Streams on a table?
• A) Indexing Strategy
• C) Stream View Type
• D) Backup Frequency
• Answer: C) Stream View Type
What is DAX?
• DAX (DynamoDB Accelerator) is a fully managed, in-memory cache for DynamoDB, providing fast response times, typically in microseconds, for accessing eventually consistent data.
Benefits:
• Improved Read Performance: DAX delivers fast response times, making it highly beneficial
for read-heavy workloads.
• Cost Savings: By reducing the need to over-provision read capacity units (RCUs), DAX can
lead to potential operational cost savings.
• Reduced Load: Caching frequently accessed data reduces the load on DynamoDB tables,
particularly for read operations.
How it Works:
1. Data Writing: The application writes data to the DynamoDB table as usual.
2. Data Caching: Frequently accessed data is then written to DAX, the in-memory cache.
3. Data Reading: Before querying the DynamoDB table, the application checks DAX for the
requested data, reducing the need for direct read operations on the table.
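The read path above is the classic cache-first pattern. A minimal sketch in which plain dicts stand in for the DynamoDB table and the DAX cache (real code would use the DAX client, which applies this logic transparently):

```python
# Read path with DAX in front of DynamoDB: check the cache first, fall back
# to the table on a miss, then populate the cache for subsequent reads.

table = {"p1": {"id": "p1", "name": "Widget"}}   # stand-in for a DynamoDB table
cache = {}                                        # stand-in for DAX

def get_item(key):
    if key in cache:
        return cache[key], "cache-hit"            # microsecond path (DAX)
    item = table.get(key)                         # millisecond path (the table)
    if item is not None:
        cache[key] = item                         # populate DAX for next time
    return item, "cache-miss"

_, first = get_item("p1")
_, second = get_item("p1")
assert (first, second) == ("cache-miss", "cache-hit")
```

Every hit served from the cache is a read the table never sees, which is where the RCU savings come from.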
2. Which of the following best describes the primary benefit of using DAX?
• A) Write-heavy workloads
• B) Read-heavy workloads
5. When an application reads data, what is the first step if DAX is enabled?
6. What type of data consistency does DAX primarily support for read operations?
• A) Strongly consistent
• B) Eventually consistent
• C) Fully consistent
• D) Consistently updated
• Answer: B) Eventually consistent
7. Which of the following scenarios would benefit most from implementing DAX?
9. Which AWS service is primarily integrated with DAX to enhance its functionality?
• A) Amazon S3
• B) Amazon RDS
• C) Amazon DynamoDB
• D) AWS Lambda
10. What happens if a piece of data is not found in the DAX cache?
Overview:
• Aurora has two configurations: Serverless and Provisioned. In the Serverless model,
capacity is managed automatically, unlike the Provisioned model where you must manage it
manually.
Key Features:
• Auto-scaling: Scales compute capacity up and down based on demand, ideal for
applications with infrequent, intermittent, or unpredictable workloads.
• Pause and Resume: Aurora Serverless can automatically pause after a specified period of
inactivity, then resume when requests are received, further reducing costs.
• Separation of Storage and Compute: Storage is decoupled from compute, with storage
being fault-tolerant and self-healing, replicating data across three availability zones (AZs).
Use Cases:
• Infrequently Used Applications: Such as low-volume blogs, where traffic is not continuous.
• New Applications: Where database load is uncertain at launch and may grow over time.
• Variable Workloads: For applications like HR, budgeting, and reporting that experience
irregular usage.
• Unpredictable Workloads: Suitable for applications that see sudden spikes and drops in
activity.
• Development and Test Databases: Ideal for environments used only during work hours.
Technical Details:
• Scaling: Aurora Serverless uses a warm pool of resources to minimize scaling time. It
instantly allocates capacity as needed by switching active client connections to these pre-
warmed resources.
• Data API: Provides a secure HTTP endpoint backed by a connection pool, helping to manage
database connections more efficiently, especially for serverless and IoT applications.
• Regional Availability: Only available in specific AWS regions and supports certain versions
of Aurora MySQL and Aurora PostgreSQL.
• Unsupported Features:
o Connections are automatically closed if left open for more than 24 hours.
• S3 Integration:
o Cannot load text file data to Aurora MySQL from S3 (possible with Aurora
PostgreSQL).
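The Data API mentioned above replaces persistent database connections with HTTP requests. A sketch of the request shape for boto3's `rds-data` client — the ARNs and SQL are placeholders, and no request is actually sent here:

```python
# Shape of an Aurora Serverless Data API call: SQL travels over a secure
# HTTP endpoint instead of a pooled database connection, which suits
# serverless and IoT clients that cannot hold connections open.

request = {
    "resourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:my-cluster",
    "secretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-db",
    "database": "mydb",
    "sql": "SELECT id, name FROM products WHERE price < :max_price",
    "parameters": [{"name": "max_price",
                    "value": {"doubleValue": 10.0}}],
}

# A real call would be: boto3.client("rds-data").execute_statement(**request)
assert ":max_price" in request["sql"]
```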
Exam Focus:
• Be familiar with ACUs, how they control scaling, and the concept of a warm pool of
resources.
• Know the limitations of Aurora Serverless v1 and the types of applications it's best suited
for.
These notes should help you prepare for questions on Aurora Serverless, focusing on its unique
features, use cases, and limitations.
1. What is the primary benefit of using Amazon Aurora Serverless over a provisioned Aurora
database?
o A) Lower latency
2. Which of the following workloads is Amazon Aurora Serverless best suited for?
o C) Multi-AZ deployments
6. How does Amazon Aurora Serverless handle sudden spikes in database workload?
o A) It increases the number of database instances manually
o C) Real-time backup to S3
8. What is the maximum storage limit that Amazon Aurora Serverless can scale to?
o A) 64 TB
o B) 256 TB
o C) 512 TB
o D) 128 TB
9. Which statement is true about the Data API in Amazon Aurora Serverless?
o C) It abstracts away the need for connection pools by providing an HTTP endpoint
General Characteristics:
• Aurora Global Database: A single database that spans multiple regions, providing low-
latency global reads and fast recovery from region-specific outages.
o Asynchronous replication is used between the primary and secondary regions with
sub-second latency.
Structure:
• Configuration Flexibility: You can change the configuration of an Aurora Global Database
while it’s running to support global processing use cases.
• Primary Region: Handles writes and synchronous replication across availability zones within
the region.
• Secondary Region: Supports read capabilities with asynchronous replication from the
primary region.
Use Cases:
• Global Processing: Common in financial services, where different global regions manage
trading operations at different times (e.g., New York, London, Australia).
o "Follow the Sun" Strategy: Write capabilities are moved to different regions as the
global trading day progresses.
Benefits:
• Global Reads with Local Latency: Provides local latency for global users, ensuring fast read
operations in all configured regions.
• Scalable Secondary Clusters: Secondary clusters are read-only and can support up to 16
replica instances (compared to the 15 in a non-global Aurora cluster).
• Fast Replication: Replication from primary to secondary clusters has minimal impact on
performance.
• Disaster Recovery: In the event of a regional outage, secondary clusters can be promoted to
primary in under a minute.
Limitations:
• Compatibility Restrictions:
• Cluster Operations: You cannot start or stop individual Aurora database clusters within an
Aurora Global Database.
• Engine Version Consistency: AWS recommends that secondary clusters use the same
Aurora Database Engine version as the primary cluster. Discrepancies in versions can lead to
issues.
• Following the Sun: Understand this concept where the primary region is switched across
global regions based on operational demands (e.g., trading day shifts).
• Limitations: Be aware of the features that are not supported in Aurora Global Databases, as
these could be potential exam questions.
What is the maximum number of read-only secondary regions you can configure in an Aurora
Global Database?
• A) 3
• B) 5
• C) 7
• D) 10
Which of the following features is NOT supported by Amazon Aurora Global Database?
How quickly can you promote a secondary region to the primary region in an Aurora Global
Database?
• D) Instantly
Which replication method is used between the primary and secondary regions in an Aurora
Global Database?
• A) Synchronous replication
• B) Asynchronous replication
• C) Multi-Master replication
• D) Point-in-time replication
Which of the following is a key benefit of using Aurora Global Database?
In the context of an Aurora Global Database, what is the "Follow the Sun" strategy?
• B) Moving the primary region to different global regions based on time zone needs
Which of the following is true about Aurora Global Database when considering disaster
recovery?
Which scenario would be problematic according to AWS best practices for Aurora Global
Database?
• B) Running different Aurora engine versions between primary and secondary clusters
What is the maximum number of Aurora replicas that can be configured in a secondary region
of an Aurora Global Database?
• A) 10
• B) 12
• C) 16
• D) 20
• Answer: C) 16
Which of the following features is currently mutually exclusive with Aurora Global Database?
• A) Aurora Serverless v1
• B) Global reads
• C) Asynchronous replication
• D) Disaster recovery
• Answer: A) Aurora Serverless v1
• Amazon ElastiCache is an in-memory caching service that stores frequently accessed, infrequently changing data in memory for fast retrieval.
Engines Supported
• Memcached:
o Ideal for storing data from persistent data stores (e.g., Amazon RDS, DynamoDB) and
dynamically generated web pages.
o Suitable for transient session data that does not require persistent backing.
• Redis:
o Supports more complex data structures, such as lists and sorted sets, in addition to
simple key-value pairs.
• Performance: Reduces the load on databases like Amazon RDS by serving frequently
accessed content from the in-memory cache.
• Scalability: Eliminates the need to scale in and out the data layer during peak and slack
times, as frequently accessed content is served from memory.
• Availability: Features built-in automatic failure detection and recovery, ensuring high
availability of cached data.
Example Scenario
• Online Store: Consider an online store using AWS infrastructure with a load balancer,
multiple EC2 targets, and an RDS database with Multi-AZ standby.
o When a customer logs in, the application needs to show preferred products based on
previous purchases. Performing this calculation directly on RDS would be slow and
resource-intensive.
What feature of Redis in Amazon ElastiCache ensures high availability by storing a replica in
another availability zone?
• A) Cross-region replication
• B) Multi-AZ replication
• C) Read replicas
• D) Global tables
In which scenario would you choose Memcached over Redis in Amazon ElastiCache?
• A) When you need support for complex data structures like lists and sets.
• B) When you need a simple key-value store without complex operations.
• Answer: B) When you need a simple key-value store without complex operations.
• A) By reducing the load on the backend database by serving frequently accessed data from
memory.
Which Amazon ElastiCache engine would you choose if your application requires operations
on sorted sets and lists?
• A) Memcached
• B) Redis
• C) DynamoDB
• D) MySQL
• Answer: B) Redis
What is a key difference between using ElastiCache with Redis and Memcached?
• A) Redis supports complex data types, whereas Memcached supports only simple key-value
pairs.
• Answer: A) Redis supports complex data types, whereas Memcached supports only
simple key-value pairs.
How does Amazon ElastiCache help reduce the response time for read-heavy applications?
• B) By serving cached results from memory rather than querying the database.
• D) It reduces the need for a primary database by storing all data in Redis.
• Answer: B) By serving cached results from memory rather than querying the database.
• In-Transit Encryption:
o Encrypts data as it moves between nodes in a cluster, and between your cluster and
your application.
• At-Rest Encryption:
o Encrypts data stored on disk and in backups.
• Redis Version:
o Encryption at rest requires Redis version 3.2.6, 4.0.10, or later, and the cluster
must be in a VPC.
• VPC Configuration:
o EC2 instances and ElastiCache Redis cluster are in the same VPC.
o Create custom security group rules to allow connections from the EC2 instances’
security group to the ElastiCache security group.
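The settings above map onto the ElastiCache and EC2 CLIs roughly as follows. This is a sketch with placeholder IDs; it requires valid AWS credentials and is not runnable as-is:

```shell
# Create a Redis replication group with both encryption modes enabled
# (at-rest encryption requires Redis 3.2.6, 4.0.10, or later, inside a VPC).
aws elasticache create-replication-group \
    --replication-group-id my-redis \
    --replication-group-description "encrypted redis" \
    --engine redis \
    --engine-version 6.2 \
    --cache-node-type cache.t3.micro \
    --num-cache-clusters 2 \
    --transit-encryption-enabled \
    --at-rest-encryption-enabled

# Allow the EC2 instances' security group to reach Redis (default port 6379).
# sg-elasticache-xxxx and sg-ec2-xxxx are placeholders for your group IDs.
aws ec2 authorize-security-group-ingress \
    --group-id sg-elasticache-xxxx \
    --protocol tcp \
    --port 6379 \
    --source-group sg-ec2-xxxx
```

Using `--source-group` rather than a CIDR range ties access to the EC2 instances' security group, matching the rule described above.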
• Study and memorize access patterns and security configurations for ElastiCache Redis
clusters.
2. To use encryption at rest with Amazon ElastiCache, your Redis cluster must meet which of
the following conditions?
• C) The Redis version must be 3.2.6, 4.0.10, or later, and the cluster must be in a VPC.
• Answer: C) The Redis version must be 3.2.6, 4.0.10, or later, and the cluster must be in a VPC.
3. Which of the following AWS services is used to secure communication between two VPCs in
different regions for ElastiCache access?
• B) AWS VPN
• D) AWS CloudFront
4. What must be ensured when configuring VPC peering for ElastiCache access between two
VPCs?
5. In which scenario would you need to use a Transit VPC to access an ElastiCache Redis
cluster?
• B) When you want to connect multiple EC2 instances within the same VPC.
• D) When you want to connect EC2 instances within the same region but different availability
zones.
6. Which of the following is required to enable ElastiCache access from an on-premises data
center?
• A) A VPN or Direct Connect connection between the data center and the AWS VPC.
8. When setting up a VPC peering connection for ElastiCache, what must you configure to
ensure proper routing between VPCs?
• C) Update the route tables of both VPCs to allow traffic to the peered VPC.
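The route-table update in that answer looks roughly like this with the AWS CLI. All IDs and CIDR blocks are placeholders, and each VPC needs a route pointing at the other's CIDR through the peering connection:

```shell
# In the first VPC's route table, route the peer VPC's CIDR (10.1.0.0/16)
# through the accepted peering connection.
aws ec2 create-route \
    --route-table-id rtb-aaaa1111 \
    --destination-cidr-block 10.1.0.0/16 \
    --vpc-peering-connection-id pcx-12345678

# And the mirror route in the second VPC's route table, back to 10.0.0.0/16.
aws ec2 create-route \
    --route-table-id rtb-bbbb2222 \
    --destination-cidr-block 10.0.0.0/16 \
    --vpc-peering-connection-id pcx-12345678
```

Security group rules must still allow the Redis port; routing alone does not open access.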
• B) Accessing an ElastiCache cluster from an EC2 instance within the same VPC.
• A) VPC Peering
• D) PrivateLink
• A session store is used by web applications to keep track of user sessions, which are active
from the time a user logs in until they log out or the session times out.
• Sticky Sessions ensure that each request from a specific user is directed to the same web
server, but this approach has a drawback:
o Reduced Elasticity: If a web server goes down, its session data is lost, causing
disruptions.
• ElastiCache Redis provides a scalable, in-memory cache that can be used as a session
store.
• Benefits:
o Redis acts as a fast key-value store, with session data stored in it and retrieved as
needed.
o Web servers can scale independently without losing session data since sessions are
managed by Redis.
1. Create a Redis Cluster: Provision an ElastiCache Redis cluster reachable from the web
servers (same VPC, with security group rules allowing the Redis port).
2. Modify Application Code: Update the application to use Redis as the session store.
3. Session Workflow:
o A new session is created in Redis and a session cookie is returned to the user's
browser.
o Subsequent requests from the user, regardless of which web server handles them,
retrieve session data from Redis.
o This allows for load balancing across multiple web servers without losing session
continuity.
• Scalable and Fast: Redis can handle large volumes of session data efficiently.
• Session Persistence: Sessions persist across server failures, maintaining user experience.
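The workflow above can be sketched in a few lines. `SessionStore` mimics the two Redis commands an application would use (SETEX to write a session with a TTL, GET to read it back); it is a hypothetical in-memory stand-in so the example runs without a server, with an injectable `now` so the TTL behavior is visible:

```python
import time

# Session-store sketch. A real deployment would point a Redis client at the
# ElastiCache endpoint; this dict-backed class only mimics SETEX/GET semantics.
class SessionStore:
    def __init__(self):
        self._data = {}  # session_id -> (value, expires_at)

    def setex(self, session_id, ttl_seconds, value, now=None):
        now = time.time() if now is None else now
        self._data[session_id] = (value, now + ttl_seconds)  # SETEX: write with TTL

    def get(self, session_id, now=None):
        now = time.time() if now is None else now
        entry = self._data.get(session_id)
        if entry is None or now >= entry[1]:
            return None  # expired or unknown: the user must log in again
        return entry[0]

store = SessionStore()
# Login: whichever web server handles it writes the session; the cookie sent
# to the browser carries only the session ID.
store.setex("sess-abc", 1800, {"user": "alice", "cart": ["widget"]}, now=0)
# A later request, handled by a *different* web server, still finds the session.
print(store.get("sess-abc", now=100))    # → {'user': 'alice', 'cart': ['widget']}
# Once the TTL elapses, the session is gone.
print(store.get("sess-abc", now=2000))   # → None
```

Because no web server holds the session locally, servers can be added, removed, or replaced by Auto Scaling without logging users out.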
MCQ 1:
Q: What is the primary benefit of using ElastiCache Redis as a session store in a web application?
MCQ 2:
MCQ 3:
Q: When using ElastiCache Redis as a session store, where is the session data stored?
• B) In an S3 bucket
• D) In an RDS database
MCQ 4:
Q: What happens to a user's session if a web server handling sticky sessions fails and Auto Scaling
launches a new server?
Answer: B) The session is lost and the user must log in again
MCQ 5:
Q: How does ElastiCache Redis help in maintaining session continuity when using multiple web
servers?
• A) By ensuring each user is directed to the same server for every request
MCQ 6:
Q: Which of the following best describes the purpose of setting a Time to Live (TTL) on session keys
in ElastiCache Redis?
MCQ 7:
Q: What is a potential risk of not using a session store like ElastiCache Redis in a multi-server web
application?
MCQ 8:
Q: Which AWS service can be used to store session data outside of the application server, ensuring
session persistence across scaling events?
• A) Amazon RDS
• B) AWS Lambda
• C) Amazon ElastiCache
• D) Amazon S3
• Answer: C) Amazon ElastiCache
MCQ 9:
Q: What should be modified in your web application to use ElastiCache Redis as a session store?
Answer: B) The application code to store and retrieve sessions from Redis
MCQ 10:
Q: Which of the following best describes the relationship between ElastiCache Redis and EC2
instances in the context of session management?
Answer: C) Redis stores session data that EC2 instances retrieve as needed