Practice Test 2
Question 1: Correct
You have developed an enhancement for a photo compression application running
on the App Engine Standard service in Google Cloud Platform, and you want to
canary test this enhancement on a small percentage of live users. How can you do
this?
Use gcloud app deploy to deploy the enhancement as a new version in the existing
application and use --splits flag to split the traffic between the old version and the
new version. Assign a weight of 1 to the new version and 99 to the old version.
(Correct)
Use gcloud app deploy to deploy the enhancement as a new version in the existing
application with --migrate flag.
Deploy the enhancement as a new App Engine Application in the existing GCP
project. Make use of App Engine native routing to have the old App Engine
application proxy 1% of the requests to the new App Engine application.
Deploy the enhancement as a new App Engine Application in the existing GCP
project. Configure the network load balancer to route 99% of the requests to the
old (existing) App Engine Application and 1% to the new App Engine Application.
Explanation
Use gcloud app deploy to deploy the enhancement as a new version in the
existing application with --migrate flag. is not right.
migrate is not a valid flag for the gcloud app deploy command.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/deploy
Also, gcloud app versions migrate, which is a valid command to migrate traffic from one version to another for a set of services, is not suitable either, as we only want to send 1% of the traffic to the new version.
https://cloud.google.com/sdk/gcloud/reference/app/versions/migrate
Deploy the enhancement as a new App Engine Application in the existing GCP
project. Make use of App Engine native routing to have the old App Engine
application proxy 1% of the requests to the new App Engine application. is
not right.
While this can be done, it adds unnecessary complexity, since App Engine provides an out-of-the-box option to split traffic between versions of the same application seamlessly.
Deploy the enhancement as a new App Engine Application in the existing GCP
project. Configure the network load balancer to route 99% of the requests to
the old (existing) App Engine Application and 1% to the new App Engine
Application. is not right.
Instances that participate as backend VMs for network load balancers must be running
the appropriate Linux guest environment, Windows guest environment, or other
processes that provide equivalent functionality. The network load balancer is not
suitable for the App Engine standard environment, which is container-based and
provides us with specific runtimes without any promise on the underlying guest
environments.
Use gcloud app deploy to deploy the enhancement as a new version in the
existing application and use --splits flag to split the traffic between the
old version and the new version. Assign a weight of 1 to the new version and
99 to the old version. is the right answer.
You can use traffic splitting to specify a percentage distribution of traffic across two or
more of the versions within a service. Splitting traffic allows you to conduct A/B testing
between your versions and provides control over the pace when rolling out features. For
this scenario, we can split the traffic as shown below, sending 1% to v2 and 99% to v1 by
executing the command gcloud app services set-traffic service1 --splits v2=1,v1=99
Ref: https://cloud.google.com/sdk/gcloud/reference/app/services/set-traffic
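For illustration only (the service name "default" and version ids v1/v2 are assumptions), the canary flow could look like this:
# Deploy the enhancement as a new version without shifting any traffic to it
gcloud app deploy --version=v2 --no-promote
# Send 1% of the traffic to the new version, keep 99% on the old one
gcloud app services set-traffic default --splits=v2=1,v1=99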
Question 2: Incorrect
Your company collects and stores CCTV footage videos in raw format in Google
Cloud Storage. Within the first 30 days, footage is processed regularly for
detecting patterns such as threat/object/face detection and suspicious behavior
detection. You want to minimize the cost of storing all the data in Google Cloud.
How should you store the videos?
Use Google Cloud Regional Storage for the first 30 days, and use lifecycle rules to
transition to Coldline Storage.
(Correct)
Use Google Cloud Nearline Storage for the first 30 days, and use lifecycle rules to
transition to Coldline Storage.
(Incorrect)
Use Google Cloud Regional Storage for the first 30 days, and use lifecycle rules to transition to Nearline Storage.
Use Google Cloud Regional Storage for the first 30 days, and then move videos to
Google Persistent Disk.
Explanation
Footage is processed regularly within the first 30 days and is rarely used after that. So
we need to store the videos for the first 30 days in a storage class that supports
economic retrieval (for processing) or at no cost, and then transition the videos to a
cheaper storage after 30 days.
Use Google Cloud Regional Storage for the first 30 days, and use lifecycle
rules to transition to Nearline Storage. is not right.
Transitioning the data to Nearline Storage is a good idea as Nearline Storage costs less
than standard storage, is highly durable for storing infrequently accessed data and a
better choice than Standard Storage in scenarios where slightly lower availability is an
acceptable trade-off for lower at-rest storage costs.
Ref: https://cloud.google.com/storage/docs/storage-classes#nearline
However, we do not have a requirement to access the data after 30 days; and there are
storage classes that are cheaper than nearline storage, so it is not a suitable option.
Ref: https://cloud.google.com/storage/pricing#storage-pricing
Use Google Cloud Regional Storage for the first 30 days, and then move
videos to Google Persistent Disk. is not right.
Persistent disk pricing is almost double that of standard storage class in Google Cloud
Storage service. Plus the persistent disk can only be accessed when attached to another
service such as compute engine, GKE, etc making this option very expensive.
Ref: https://cloud.google.com/storage/pricing#storage-pricing
Ref: https://cloud.google.com/compute/disks-image-pricing#persistentdisk
Use Google Cloud Nearline Storage for the first 30 days, and use lifecycle
rules to transition to Coldline Storage. is not right.
Nearline storage class is suitable for storing infrequently accessed data and has costs
associated with retrieval. Since the footage is processed regularly within the first 30
days, data retrieval costs may far outweigh the savings made by using nearline storage
over standard storage class.
Ref: https://cloud.google.com/storage/docs/storage-classes#nearline
Ref: https://cloud.google.com/storage/pricing#archival-pricing
Use Google Cloud Regional Storage for the first 30 days, and use lifecycle
rules to transition to Coldline Storage. is the right answer.
We save the videos initially in Regional Storage (Standard) which does not have retrieval
charges so we do not pay for accessing data within the first 30 days during which the
videos are accessed frequently. We only pay for the standard storage costs. After 30
days, we transition the CCTV footage videos to Coldline storage which is a very-low-
cost, highly durable storage service for storing infrequently accessed data. Coldline
Storage is a better choice than Standard Storage or Nearline Storage in scenarios where
slightly lower availability, a 90-day minimum storage duration, and higher costs for data
access are acceptable trade-offs for lowered at-rest storage costs. Coldline storage class
is cheaper than Nearline storage class.
Ref: https://cloud.google.com/storage/docs/storage-classes#standard
Ref: https://cloud.google.com/storage/docs/storage-classes#coldline
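As a minimal sketch (the bucket name is an assumption), the transition can be automated with a lifecycle rule applied via gsutil:
# lifecycle.json: move objects to Coldline once they are 30 days old
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 30}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://cctv-footage-bucket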
Question 3: Correct
Your company wants to move all documents from a secure internal NAS drive to a
Google Cloud Storage (GCS) bucket. The data contains personally identifiable
information (PII) and sensitive customer information. Your company tax auditors
need access to some of these documents. What security strategy would you
recommend on GCS?
Grant IAM read-only access to users, and use default ACLs on the bucket.
Grant no Google Cloud Identity and Access Management (Cloud IAM) roles to
users, and use granular ACLs on the bucket.
(Correct)
Create randomized bucket and object names. Enable public access, but only
provide specific file URLs to people who do not have Google accounts and need
access.
Use signed URLs to generate time-bound access to objects.
Explanation
Use signed URLs to generate time-bound access to objects. is not right.
When dealing with sensitive customer information such as PII, using signed URLs is not
a great idea as anyone with access to the URL has access to PII data. Signed URLs
provide time-limited resource access to anyone in possession of the URL, regardless of
whether they have a Google account. With PII Data, we want to be sure who has access
and signed URLs don't guarantee that.
Ref: https://cloud.google.com/storage/docs/access-control/signed-urls
Grant IAM read-only access to users, and use default ACLs on the bucket. is
not right.
We do not need to grant all IAM users read-only access to this sensitive data. Only the users who need access to the sensitive/PII data should be granted access to it.
Create randomized bucket and object names. Enable public access, but only
provide specific file URLs to people who do not have Google accounts and
need access. is not right.
Enabling public access to the buckets and objects makes them visible to everyone. There
are a number of scanning tools out in the market with the sole purpose of identifying
buckets/objects that can be reached publicly. Should one of these tools be used by a
bad actor to find out our public bucket/objects, it would result in a security breach.
Grant no Google Cloud Identity and Access Management (Cloud IAM) roles to
users, and use granular ACLs on the bucket. is the right answer.
We start with no explicit access to any of the IAM users, and the bucket ACLs can then
control which users can access what objects. This is the most secure way of ensuring just
the people who require access to the bucket are provided with access. We block
everyone from accessing the bucket and explicitly provided access to specific users
through ACLs.
Question 4: Correct
A GKE cluster (test environment) in your test GCP project is experiencing issues
with a sidecar container connecting to Cloud SQL. This issue has resulted in a
massive amount of log entries in Cloud Logging and shot up your bill by 25%.
Your manager has asked you to disable these logs as quickly as possible and using
the least number of steps. You want to follow Google recommended practices.
What should you do?
In Cloud Logging, disable the log source for GKE Cluster Operations resource in
the Logs ingestion window.
In Cloud Logging, disable the log source for GKE container resource in the Logs
ingestion window.
(Correct)
Recreate the GKE cluster and disable Cloud Monitoring.
Explanation
Recreate the GKE cluster and disable Cloud Monitoring. is not right.
We only need to disable the logs ingested from the GKE container; we don't need to delete the existing cluster and create a new one.
In Cloud Logging, disable the log source for GKE Cluster Operations resource
in the Logs ingestion window. is not right.
We only need to disable the logs ingested from the GKE container, not the entire GKE Cluster Operations resource.
In Cloud Logging, disable the log source for GKE container resource in the
Logs ingestion window. is the right answer.
We want to disable logs from a specific GKE container, and this is the only option that
does that.
More information about logs
exclusions: https://cloud.google.com/logging/docs/exclusions.
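For reference, the exclusion filter itself targets the GKE container resource type; a filter along these lines (the cluster name is an assumption) could be used when creating the exclusion:
resource.type="k8s_container"
resource.labels.cluster_name="test-cluster"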
Question 5: Incorrect
You are migrating a mission-critical HTTPS Web application from your on-
premises data centre to Google Cloud, and you need to ensure unhealthy compute
instances within the autoscaled Managed Instances Group (MIG) are recreated
automatically. What should you do?
Configure a health check on port 443 when creating the Managed Instance Group
(MIG).
(Correct)
Add a metadata tag to the Instance Template with key: healthcheck value:
enabled.
(Incorrect)
When creating the instance template, add a startup script that sends server status to Cloud Monitoring as a custom metric.
Deploy Managed Instance Group (MIG) instances in multiple zones.
Explanation
Deploy Managed Instance Group (MIG) instances in multiple zones. is not right.
You can create two types of MIGs: A zonal MIG, which deploys instances to a single zone
and a regional MIG, which deploys instances to multiple zones across the same region.
However, this doesn't help with recreating unhealthy VMs.
Ref: https://cloud.google.com/compute/docs/instance-groups
Add a metadata tag to the Instance Template with key: healthcheck value:
enabled. is not right.
Metadata entries are key-value pairs and do not influence any other behaviour.
Ref: https://cloud.google.com/compute/docs/storing-retrieving-metadata
When creating the instance template, add a startup script that sends server
status to Cloud Monitoring as a custom metric. is not right.
The startup script is executed only when the instance boots up. In contrast, we need
something like a liveness check that monitors the status of the server periodically to
identify if the VM is unhealthy. So this is not going to work.
Ref: https://cloud.google.com/compute/docs/startupscript
Configure a health check on port 443 when creating the Managed Instance
Group (MIG). is the right answer.
To improve the availability of your application and to verify that your application is
responding, you can configure an auto-healing policy for your managed instance group
(MIG). An auto-healing policy relies on an application-based health check to verify that
an application is responding as expected. If the auto healer determines that an
application isn't responding, the managed instance group automatically recreates that
instance. Since our application is an HTTPS web application, we need to set up our
health check on port 443, which is the standard port for HTTPS.
Ref: https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-
in-migs
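A rough sketch of the setup (names, region, and thresholds are assumptions):
# Create an HTTPS health check on port 443
gcloud compute health-checks create https web-hc --port=443 --request-path=/ --check-interval=10s --unhealthy-threshold=3
# Attach it to the MIG as an auto-healing policy
gcloud compute instance-groups managed update web-mig --region=us-central1 --health-check=web-hc --initial-delay=300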
Question 6: Correct
You have annual audits and you need to provide external auditors
access to the last 10 years of audit logs. You want to minimize the cost and
operational overhead while following Google recommended practices. What
should you do? (Select Three)
Export audit logs to Cloud Storage via an audit log export sink.
(Correct)
Configure a lifecycle management policy on the logs bucket to delete objects older
than 10 years.
(Correct)
Grant external auditors Storage Object Viewer role on the logs storage bucket.
(Correct)
Export audit logs to Cloud Filestore via a Pub/Sub export sink.
Export audit logs to BigQuery via an audit log export sink.
Explanation
Export audit logs to Cloud Filestore via a Pub/Sub export sink. is not right.
Storing logs in Cloud Filestore is expensive. In Cloud Filestore, Standard Tier pricing
costs $0.2 per GB per month and Premium Tier pricing costs $0.3 per GB per month. In
comparison, Google Cloud Storage offers several storage classes that are significantly
cheaper.
Ref: https://cloud.google.com/filestore/pricing
Ref: https://cloud.google.com/storage/pricing
Export audit logs to BigQuery via an audit log export sink. is not right.
Storing logs in BigQuery is expensive. In BigQuery, Active storage costs $0.02 per GB per
month and Long-term storage costs $0.01 per GB per month. In comparison, Google
Cloud Storage offers several storage classes that are significantly cheaper.
Ref: https://cloud.google.com/bigquery/pricing
Ref: https://cloud.google.com/storage/pricing
Export audit logs to Cloud Storage via an audit log export sink. is the right
answer.
Among all the storage solutions offered by Google Cloud Platform, Cloud storage offers
the best pricing for long term storage of logs. Google Cloud Storage offers several
storage classes such as Nearline Storage ($0.01 per GB per month), Coldline Storage ($0.007 per GB per month) and Archive Storage ($0.004 per GB per month), which are significantly cheaper than the storage options covered by the other options above.
Ref: https://cloud.google.com/storage/pricing
Grant external auditors Storage Object Viewer role on the logs storage
bucket. is the right answer.
You can provide external auditors access to the logs in the bucket by granting the
Storage Object Viewer role which allows them to read any object stored in any bucket.
Ref: https://cloud.google.com/storage/docs/access-control/iam
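A minimal sketch of the setup (bucket, sink name and auditor account are assumptions):
# Route audit logs to a Cloud Storage bucket via an export sink
gcloud logging sinks create audit-logs-sink storage.googleapis.com/audit-logs-archive --log-filter='logName:"cloudaudit.googleapis.com"'
# Grant the external auditor read access to the exported log objects
gsutil iam ch user:auditor@example.com:objectViewer gs://audit-logs-archive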
Question 7: Incorrect
Your company hosts a number of applications in Google Cloud and requires that
log messages from all applications be archived for 10 years to comply with local
regulatory requirements. Which approach should you use?
(Incorrect)
(Correct)
The difference between the remaining two options is whether we store the logs in
BigQuery or Google Cloud Storage.
Question 8: Incorrect
You want to use Google Cloud Storage to host a static website on
www.example.com for your staff. You created a bucket example-static-website
and uploaded index.html and css files to it. You turned on static website hosting
on the bucket and set up a CNAME record on www.example.com to point to
c.storage.googleapis.com. You access the static website by navigating to
www.example.com in the browser but your index page is not displayed. What
should you do?
Delete the existing bucket, create a new bucket with the name www.example.com
and upload the html/css files.
(Correct)
Reload the Cloud Storage static website server to load the objects.
(Incorrect)
In example.com zone, delete the existing CNAME record and set up an A record
instead to point to c.storage.googleapis.com.
In example.com zone, modify the CNAME record to c.storage.googleapis.com/example-static-website.
Explanation
In example.com zone, modify the CNAME record to
c.storage.googleapis.com/example-static-website. is not right.
CNAME records cannot contain paths. There is nothing wrong with the current CNAME
record.
In example.com zone, delete the existing CNAME record and set up an A record
instead to point to c.storage.googleapis.com. is not right.
A records cannot use hostnames. A records use IP Addresses.
Reload the Cloud Storage static website server to load the objects. is not
right.
There is no such thing as a Cloud Storage static website server. All infrastructure that
underpins the static websites is handled by Google Cloud Platform.
Delete the existing bucket, create a new bucket with the name
www.example.com and upload the html/css files. is the right answer.
We need to create a bucket whose name matches the CNAME you created for your domain. For example, if you added a CNAME record pointing www.example.com to c.storage.googleapis.com., then create a bucket with the name "www.example.com". A CNAME record is a type of DNS record. It directs traffic that requests a URL from your domain to the resources you want to serve, in this case, objects in your Cloud Storage bucket. For www.example.com, the CNAME record contains the host name www.example.com, the record type CNAME, and the data c.storage.googleapis.com.
Ref: https://cloud.google.com/storage/docs/hosting-static-website
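A sketch of the fix (file names are assumptions; the bucket name must match the CNAME host, and creating a domain-named bucket requires the domain to be verified):
gsutil mb gs://www.example.com
gsutil cp index.html styles.css gs://www.example.com
# Serve index.html as the index page and 404.html for missing objects
gsutil web set -m index.html -e 404.html gs://www.example.com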
Question 9: Correct
You created an update for your application on App Engine. You want to deploy the
update without impacting your users. You want to be able to roll back as quickly
as possible if it fails. What should you do?
Deploy the update as the same version that is currently running. You are confident
the update works so you don't plan for a rollback strategy.
Deploy the update as the same version that is currently running. If the update
fails, redeploy your older version using the same version identifier.
Notify your users of an upcoming maintenance window and ask them not to use
your application during this window. Deploy the update in that maintenance
window.
Deploy the update as a new version. Migrate traffic from the current version to the
new version. If it fails, migrate the traffic back to your older version.
(Correct)
Explanation
Deploy the update as the same version that is currently running. You are
confident the update works so you don't plan for a rollback strategy. is not
right.
Irrespective of the level of confidence, you should always prepare a rollback strategy as
things can go wrong for reasons out of our control.
Deploy the update as the same version that is currently running. If the
update fails, redeploy your older version using the same version
identifier. is not right.
While this can be done, the rollback process is not quick. Your application is
unresponsive until you have redeployed the older version which can take quite a bit of
time depending on how it is set up.
Notify your users of an upcoming maintenance window and ask them not to use
your application during this window. Deploy the update in that maintenance
window. is not right.
Our requirement is to deploy the update without impacting our users but by asking
them to not use the application during the maintenance window, you are impacting all
users.
Deploy the update as a new version. Migrate traffic from the current version
to the new version. If it fails, migrate the traffic back to your older
version. is the right answer.
This option enables you to deploy a new version and send all traffic to the new version.
If you realize your updated application is not working, the rollback is as simple as
marking your older version as default. This can all be done in the GCP console with a
few clicks.
Ref: https://cloud.google.com/appengine/docs/admin-api/deploying-apps
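For illustration (version identifiers are assumptions), the deploy/rollback flow could be:
# Deploy the update as a new version without sending traffic to it
gcloud app deploy --version=v2 --no-promote
# Migrate all traffic to the new version
gcloud app versions migrate v2
# Roll back by migrating traffic back to the previous version
gcloud app versions migrate v1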
Cloud Datastore.
Cloud Spanner.
Cloud Bigtable.
(Correct)
Cloud SQL.
Explanation
Cloud Spanner. is not right.
Cloud Spanner is not a NoSQL database. Cloud Spanner is a fully managed, horizontally scalable relational database service.
Ref: https://cloud.google.com/spanner/docs
In the GCP Console, move all projects to the root organization in the Resource
Manager.
(Correct)
Raise a support request with Google Billing Support and request them to create a
new billing account and link all the projects to the billing account.
Ensure you have the Billing Account Creator Role. Create a new Billing account
yourself and set up a payment method with company credit card details.
Send an email to billing.support@cloud.google.com and request them to create a new billing account and link all the projects to the billing account.
Explanation
Send an email to billing.support@cloud.google.com and request them to create
a new billing account and link all the projects to the billing account. is
not right.
That is not how we set up billing for the organization.
Ref: https://cloud.google.com/billing/docs/concepts
Raise a support request with Google Billing Support and request them to
create a new billing account and link all the projects to the billing
account. is not right.
That is not how we set up billing for the organization.
Ref: https://cloud.google.com/billing/docs/concepts
Ensure you have the Billing Account Creator Role. Create a new Billing
account yourself and set up a payment method with company credit card
details. is not right.
Unless all projects are modified to use the new billing account, this doesn't work.
Ref: https://cloud.google.com/billing/docs/concepts
In the GCP Console, move all projects to the root organization in the
Resource Manager. is the right answer.
If we move all projects under the root organization hierarchy, they still need to be modified to use a billing account within the organization (same as the previous option).
Ref: https://cloud.google.com/resource-manager/docs/migrating-projects-
billing#top_of_page
Note: The link between projects and billing accounts is preserved, irrespective of the
hierarchy. When you move your existing projects into the organization, they will
continue to work and be billed as they used to before the migration, even if the
corresponding billing account has not been migrated yet.
But in this option, all projects are in the organization resource hierarchy so the
organization can uniformly apply organization policies to all its projects which is a
Google recommended practice. So this is the better of the two options.
Ref: https://cloud.google.com/billing/docs/concepts
Question 12: Correct
You want to persist logs for 10 years to comply with regulatory requirements. You
want to follow Google recommended practices. Which Google Cloud Storage class
should you use?
(Correct)
Explanation
In April 2019, Google introduced a new storage class, Archive, which is the lowest-cost, highly durable storage service for data archiving, online backup, and disaster recovery. Google previously recommended the Coldline storage class, but the guidance has since been updated to: "Coldline Storage is ideal for data you plan to read or modify at most once a quarter. Note, however, that for data being kept entirely for backup or archiving purposes, Archive Storage is more cost-effective, as it offers the lowest storage costs."
Ref: https://cloud.google.com/storage/docs/storage-classes#archive
Ref: https://cloud.google.com/storage/docs/storage-classes#coldline
Export audit logs from Cloud Logging to BigQuery via an export sink.
Export audit logs from Cloud Logging to Cloud Pub/Sub via an export sink.
Configure a Cloud Dataflow pipeline to process these messages and store them in
Cloud SQL for MySQL.
Write a script that exports audit logs from Cloud Logging to BigQuery. Use Cloud
Scheduler to trigger the script every hour.
Export audit logs from Cloud Logging to Coldline Storage bucket via an export
sink.
(Correct)
Explanation
Export audit logs from Cloud Logging to BigQuery via an export sink. is not
right.
You can export logs into BigQuery by creating one or more sinks that include a logs
query and an export destination (big query). However, this option is costly compared to
the cost of Cloud Storage.
Ref: https://cloud.google.com/logging/docs/export/configure_export_v2
Write a script that exports audit logs from Cloud Logging to BigQuery. Use
Cloud Scheduler to trigger the script every hour. is not right.
Cloud Logging (formerly Stackdriver) already offers export sinks that copy logs to BigQuery. While BigQuery is already quite expensive compared to Cloud Storage, writing and maintaining a custom script to copy the logs from Cloud Logging to BigQuery adds further cost. This option is very inefficient and expensive.
Export audit logs from Cloud Logging to Cloud Pub/Sub via an export sink.
Configure a Cloud Dataflow pipeline to process these messages and store them
in Cloud SQL for MySQL. is not right.
Cloud SQL is primarily used for storing relational data. Storing vast quantities of logs in
Cloud SQL is very expensive compared to Cloud Storage. And add to it the fact that you
also need to pay for Cloud Pub/Sub and Cloud Dataflow pipeline, and this option gets
very expensive very soon.
Export audit logs from Cloud Logging to Coldline Storage bucket via an
export sink. is the right answer.
Coldline Storage is the perfect service to store audit logs from all the projects and is
very cost-efficient as well. Coldline Storage is a very-low-cost, highly durable storage
service for storing infrequently accessed data. Coldline Storage is a better choice than
Standard Storage or Nearline Storage in scenarios where slightly lower availability, a 90-
day minimum storage duration, and higher costs for data access are acceptable trade-
offs for lowered at-rest storage costs. Coldline Storage is ideal for data you plan to read
or modify at most once a quarter.
Ref: https://cloud.google.com/storage/docs/storage-classes#coldline
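If the destination bucket does not exist yet, it can be created with Coldline as its default storage class before the sink is set up (bucket name and location are assumptions):
gsutil mb -c coldline -l us-central1 gs://audit-logs-coldline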
Use gcloud deployment-manager deployments create and point to the deployment config file.
(Correct)
Explanation
Use gcloud deployment-manager resources create and point to the deployment
config file. is not right.
gcloud deployment-manager resources command does not support the action create.
The supported actions are describe and list. So this option is not right.
Ref: https://cloud.google.com/sdk/gcloud/reference/deployment-manager/resources
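For comparison, the deployment itself would be created with the deployments command group (deployment and config file names are assumptions):
gcloud deployment-manager deployments create my-deployment --config=config.yaml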
(Incorrect)
(Correct)
Deploy the Citrix Licensing Server on a Google Compute Engine instance with an
ephemeral IP address. Once the server is responding to requests, promote the
ephemeral IP address to a static internal IP address.
Deploy the Citrix Licensing Server on a Google Compute Engine instance and set
its ephemeral IP address to 10.10.10.10.
Explanation
Use gcloud compute addresses create to reserve 10.10.10.10 as a static
external IP and assign it to the Citrix Licensing Server VM Instance. is not
right.
The private network range is defined by IETF (Ref: https://tools.ietf.org/html/rfc1918)
and includes 10.0.0.0/8. So all IP Addresses from 10.0.0.0 to 10.255.255.255 belong to
this internal IP range. As the IP of interest 10.10.10.10 falls within this range, it can not
be reserved as a public IP Address.
Deploy the Citrix Licensing Server on a Google Compute Engine instance and
set its ephemeral IP address to 10.10.10.10. is not right.
An ephemeral IP address is the public IP address assigned to a compute instance. An
ephemeral external IP address is an IP address that doesn't persist beyond the life of the
resource. When you create an instance or forwarding rule without specifying an IP
address, the resource is automatically assigned an ephemeral external IP address.
Ref: https://cloud.google.com/compute/docs/ip-addresses#ephemeraladdress
The private network range is defined by IETF (Ref: https://tools.ietf.org/html/rfc1918)
and includes 10.0.0.0/8. So all IP Addresses from 10.0.0.0 to 10.255.255.255 belong to
this internal IP range. As the IP of interest 10.10.10.10 falls within this range, it can not
be used as a public IP Address (ephemeral IP is public).
Deploy the Citrix Licensing Server on a Google Compute Engine instance with
an ephemeral IP address. Once the server is responding to requests, promote
the ephemeral IP address to a static internal IP address. is not right.
When a compute instance is started with public IP, it gets an ephemeral IP address. An
ephemeral external IP address is an IP address that doesn't persist beyond the life of the
resource.
Ref: https://cloud.google.com/compute/docs/ip-addresses#ephemeraladdress
You can promote this ephemeral address into a Static IP address, but this will be an
external IP address and not an internal one.
Ref: https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-
address#promote_ephemeral_ip
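For reference, the remaining approach, reserving 10.10.10.10 as a static internal address and assigning it to the VM, could look roughly like this (names, subnet, region and zone are assumptions):
# Reserve 10.10.10.10 as a static internal IP in the subnet
gcloud compute addresses create citrix-license-ip --region=us-central1 --subnet=default --addresses=10.10.10.10
# Create the licensing server with that internal address
gcloud compute instances create citrix-licensing-server --zone=us-central1-a --subnet=default --private-network-ip=10.10.10.10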
(Correct)
Create another identical kubernetes workload and split traffic between the two
workloads.
Explanation
Enable Horizontal Pod Autoscaling for the Kubernetes deployment. is not right.
Horizontal Pod Autoscaling cannot be enabled for DaemonSets because there is only one instance of a pod per node in the cluster. In a replica deployment, when
Horizontal Pod Autoscaling scales up, it can add pods to the same node or another
node within the cluster. Since there can only be one pod per node in the Daemon Set
workload, Horizontal Pod Autoscaling is not supported with Daemon Sets.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset
Create another identical Kubernetes cluster and split traffic between the
two workloads. is not right.
Creating another identical Kubernetes cluster is going to double your costs; at the same
time, there is no guarantee that this is enough to handle all the traffic. Finally, it doesn't
satisfy our requirement of "cluster scales up and scales down automatically"
Perform a rolling update to modify machine type from n1-standard-2 to n1-
standard-4. is not right.
While increasing the machine type from n1-standard-2 to n1-standard-4 gives the
existing nodes more resources and processing power, we don't know if that would be
enough to handle the increased volume of traffic. Also, it doesn't satisfy our
requirement of "cluster scales up and scales down automatically"
Ref: https://cloud.google.com/compute/docs/machine-types
(Correct)
Explanation
The ideal answer to this would have been Archive Storage, but that is not one of the
options. Archive Storage is the lowest-cost, highly durable storage service for data
archiving, online backup, and disaster recovery. Your data is available within
milliseconds, not hours or days. https://cloud.google.com/storage/docs/storage-
classes#archive
In the absence of Archive Storage Class, Use Coldline Storage Class is the right
answer.
Coldline Storage Class is a very-low-cost, highly durable storage service for storing
infrequently accessed data. Coldline Storage is a better choice than Standard Storage or
Nearline Storage in scenarios where slightly lower availability, a 90-day minimum
storage duration, and higher costs for data access are acceptable trade-offs for lowered
at-rest storage costs. Coldline Storage is ideal for data you plan to read or modify at
most once a quarter.
Ref: https://cloud.google.com/storage/docs/storage-classes#coldline
Although Nearline, Regional and Multi-Regional can also be used to store the backups,
they are expensive in comparison, and Google recommends we use Coldline for
backups.
More information about Nearline: https://cloud.google.com/storage/docs/storage-
classes#nearline
More information about
Standard/Regional: https://cloud.google.com/storage/docs/storage-classes#standard
More information about Standard/Multi-
Regional: https://cloud.google.com/storage/docs/storage-classes#standard
Use parallel uploads to break the file into smaller chunks then transfer it
simultaneously.
(Correct)
Explanation
Use parallel uploads to break the file into smaller chunks then transfer it
simultaneously. is the right answer.
With cloud storage, Object composition can be used for uploading an object in parallel:
you can divide your data into multiple chunks, upload each chunk to a distinct object in
parallel, compose your final object, and delete any temporary source objects. This helps
maximize your bandwidth usage and ensures the file is uploaded as fast as possible.
Ref: https://cloud.google.com/storage/docs/composite-objects#uploads
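With gsutil, parallel (composite) uploads can be enabled per command; a sketch (file and bucket names are assumptions):
# Split uploads into parallel chunks for files larger than 150 MB
gsutil -o GSUtil:parallel_composite_upload_threshold=150M cp large-backup.tar gs://my-upload-bucket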
Cloud Datastore
BigQuery
Cloud Storage
Cloud Bigtable
(Correct)
Explanation
Our requirement is to write/update a very high volume of data at a high speed.
Performance is our primary concern, not cost.
While all other options are capable of storing high volumes of the order of petabytes,
they are not as efficient as Bigtable at processing IoT time-series data.
Export all audit logs to Google Cloud Storage bucket and set up the necessary IAM
access to restrict the data shared with auditors.
Export all audit logs to BigQuery dataset. Make use of ACLs and views to restrict
the data shared with the auditors. Have the auditors query the required
information quickly.
(Correct)
Export all audit logs to Cloud Pub/Sub via an export sink. Use a Cloud Function to
read the messages and store them in Cloud SQL. Make use of ACLs and views to
restrict the data shared with the auditors.
Explanation
Configure alerts in Cloud Monitoring and trigger notifications to the
auditors. is not right.
Stackdriver Alerting gives timely awareness to problems in your cloud applications so
you can resolve the problems quickly. Sending alerts to your auditor is not of much use
during audits.
Ref: https://cloud.google.com/monitoring/alerts
Export all audit logs to Cloud Pub/Sub via an export sink. Use a Cloud
Function to read the messages and store them in Cloud SQL. Make use of ACLs
and views to restrict the data shared with the auditors. is not right.
Using Cloud Functions to transfer log entries to Google Cloud SQL is expensive in
comparison to audit logs export feature which exports logs to various destinations with
minimal configuration.
Ref: https://cloud.google.com/logging/docs/export/
Auditors spend a lot of time reviewing log messages, and you want to expedite the audit process, so you want to make it easy for the auditors to extract the required information from the logs.
Between the two remaining options, the only difference is the log export sink
destination.
Ref: https://cloud.google.com/logging/docs/export/
One option exports to a Google Cloud Storage (GCS) bucket whereas the other exports to BigQuery. Querying information out of files in a bucket is much harder than querying a BigQuery dataset, where extracting just the required information in the required format is as simple as running a query or a set of queries. By enabling the auditors to run queries in BigQuery, you streamline the log extraction process, and the auditors can review the extracted logs much more quickly. So Export all audit logs to BigQuery dataset. Make use of ACLs and views to restrict the data shared with the auditors. Have the auditors query the required information quickly. is the right answer.
You need to configure log sinks before you can receive any logs, and you can’t
retroactively export logs that were written before the sink was created.
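A minimal sketch of the BigQuery export (project, dataset and sink names are assumptions); note that the sink's writer identity must then be granted access to the dataset:
bq mk --dataset my-project:audit_logs
gcloud logging sinks create audit-bq-sink bigquery.googleapis.com/projects/my-project/datasets/audit_logs --log-filter='logName:"cloudaudit.googleapis.com"'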
Deploy the VM in a new subnet in europe-west1 region in a new VPC. Peer the two
VPCs and have the VM contact the Citrix Licensing Server on its internal IP
Address.
Deploy the VM in a new subnet in europe-west1 region in the existing VPC. Have
the VM contact the Citrix Licensing Server on its internal IP Address.
(Correct)
Deploy the VM in a new subnet in europe-west1 region in the existing VPC. Peer
the two subnets using Cloud VPN. Have the VM contact the Citrix Licensing Server
on its internal IP Address.
Explanation
Our requirements are to connect the instance in europe-west1 region with the
application running in us-central1 region following Google-recommended practices.
The two instances are in the same project.
Deploy the VM in a new subnet in europe-west1 region in a new VPC. Peer the
two VPCs and have the VM contact the Citrix Licensing Server on its internal
IP Address. is not right.
Given that the new instance wants to access the application on the existing compute
engine instance, these applications seem to be related so they should be within the
same VPC. This option does not mention how the VPC networks are created and what
the subnet range is.
You can't connect two auto mode VPC networks using VPC Network Peering because
their subnets use identical primary IP ranges. We don't know how the VPCs were
created.
There are several restrictions based on the subnet ranges.
https://cloud.google.com/vpc/docs/vpc-peering#restrictions
Even if we assume the above restrictions don't apply and peering is possible, this is still a lot of additional work. We can simplify this by deploying the VM in a new subnet in the existing VPC (which is the answer).
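A sketch of the recommended approach (VPC, subnet range and names are assumptions):
# Add a subnet in europe-west1 to the existing VPC
gcloud compute networks subnets create europe-subnet --network=existing-vpc --region=europe-west1 --range=192.168.10.0/24
# The new VM can then reach the Citrix Licensing Server on its internal IP
gcloud compute instances create licensing-client --zone=europe-west1-b --subnet=europe-subnet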
Use Managed instance groups with instances in a single zone. Enable Autoscaling
on the Managed instance group.
(Incorrect)
Use Managed instance groups across multiple zones. Enable Autoscaling on the
Managed instance group.
(Correct)
Use Managed instance groups with preemptible instances across multiple zones.
Enable Autoscaling on the Managed instance group.
Use Unmanaged instance groups across multiple zones. Enable Autoscaling on the
Unmanaged instance group.
Explanation
Use Managed instance groups with preemptible instances across multiple
zones. Enable Autoscaling on the Managed instance group. is not right.
A preemptible VM runs at a much lower price than normal instances and is cost-
effective. However, Compute Engine might terminate (preempt) these instances if it
requires access to those resources for other tasks. Preemptible instances are not suitable
for production applications that need to be available 24*7.
Ref: https://cloud.google.com/compute/docs/instances/preemptible
Use Managed instance groups across multiple zones. Enable Autoscaling on the
Managed instance group. is the right answer.
Distribute your resources across multiple zones and regions to tolerate outages. Google
designs zones to be independent of each other: a zone usually has power, cooling,
networking, and control planes that are isolated from other zones, and most single
failure events will affect only a single zone. Thus, if a zone becomes unavailable, you can
transfer traffic to another zone in the same region to keep your services running.
Ref: https://cloud.google.com/compute/docs/regions-zones
In addition, a managed instance group (MIG) offers auto-scaling capabilities
that let you automatically add or delete instances from a managed instance group
based on increases or decreases in load. Autoscaling helps your apps gracefully handle
increases in traffic and reduce costs when the need for resources is lower. Autoscaling
works by adding more instances to your instance group when there is more load
(upscaling), and deleting instances when the need for instances is lowered
(downscaling).
Ref: https://cloud.google.com/compute/docs/autoscaler/
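A rough sketch (template, names, region and thresholds are assumptions):
# Create a regional (multi-zone) MIG from an instance template
gcloud compute instance-groups managed create web-mig --region=us-central1 --template=web-template --size=3
# Enable autoscaling on the MIG
gcloud compute instance-groups managed set-autoscaling web-mig --region=us-central1 --min-num-replicas=3 --max-num-replicas=10 --target-cpu-utilization=0.6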
(Correct)
Explanation
The security team needs detailed visibility of all GCP projects in the organization so they
should be able to view all the projects in the organization as well as view all resources
within these projects.
Cloud Firestore.
Cloud SQL.
Cloud Spanner.
(Correct)
Cloud Datastore.
Explanation
Our requirements are relational data, global users, scaling
Create a Kubernetes Deployment YAML file referencing the Jenkins docker image
and deploy to a new GKE cluster.
(Correct)
Explanation
Download Jenkins binary from https://www.jenkins.io/download/ and deploy in
a new Google Compute Engine instance. is not right.
While this can be done, it involves a lot more work than using a pre-configured Jenkins solution from the GCP Marketplace, where all you need to do is spin up an instance from a suitable Marketplace build and you have a Jenkins server in a few minutes with just a few clicks.
Add the developers and finance managers to the Viewer role for the Project.
Add the finance team to the Viewer role for the Project. Add the developers to the
Security Reviewer role for each of the billing accounts.
Add the finance team to the default IAM Owner role. Add the developers to a
custom role that allows them to see their own spend only.
Add the finance team to the Billing Administrator role for each of the billing
accounts that they need to manage. Add the developers to the Viewer role for the
Project.
(Correct)
Explanation
Add the finance team to the default IAM Owner role. Add the developers to a
custom role that allows them to see their own spend only. is not right.
Granting your finance team the default IAM role provides them permissions to manage
roles and permissions for a project and subsequently use that to assign them the
permissions to view/edit resources in all projects. This is against our requirements. Also,
you can write a custom role that lets developers view their project spend but they are
missing permissions to view project resources.
Ref: https://cloud.google.com/iam/docs/understanding-roles#primitive_roles
Add the developers and finance managers to the Viewer role for the
Project. is not right.
Granting your finance team the Project viewer role lets them view resources in all
projects and doesn’t let them set budgets - both are against our requirements.
Ref: https://cloud.google.com/iam/docs/understanding-roles#primitive_roles
Add the finance team to the Viewer role on all projects. Add the developers
to the Security Reviewer role for each of the billing accounts. is not right.
Granting your finance team the Project viewer role lets them view resources in all
projects which is against our requirements. Also, the security Reviewer role enables the
developers to view custom roles but doesn’t let them view the project's costs or project
resources.
Ref: https://cloud.google.com/iam/docs/understanding-roles#primitive_roles
Add the finance team to the Billing Administrator role for each of the
billing accounts that they need to manage. Add the developers to the Viewer
role for the Project. is the right answer.
Billing Account Administrator role is an owner role for a billing account. It provides
permissions to manage payment instruments, configure billing exports, view cost
information, set budgets, link and unlink projects and manage other user roles on the
billing account.
Ref: https://cloud.google.com/billing/docs/how-to/billing-access
Project viewer role provides permissions for read-only actions that do not affect the
state, such as viewing (but not modifying) existing resources or data; including viewing
the billing charges for the project.
Ref: https://cloud.google.com/iam/docs/understanding-roles#primitive_roles
Build a container image from the Dockerfile and push it to Google Cloud Storage (GCS).
Create a Kubernetes Deployment YAML file and have it use the image from GCS. Use
kubectl apply -f {deployment.YAML} to deploy the application to the GKE cluster.
Build a container image from the Dockerfile and push it to Google Container Registry
(GCR). Create a Kubernetes Deployment YAML file and have it use the image from GCR.
Use kubectl apply -f {deployment.YAML} to deploy the application to the GKE cluster.
(Correct)
Explanation
Deploy the application using kubectl app deploy {Dockerfile}. is not right.
kubectl does not accept app as a verb, and it cannot build or deploy a Dockerfile directly. kubectl deploys Kubernetes configuration files using kubectl apply -f.
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
Deploy the application using gcloud app deploy {Dockerfile}. is not right.
gcloud app deploy - Deploys the local code and/or configuration of your app to App
Engine. gcloud app deploy accepts a flag --image-url which is the docker image, but it
can't directly use a docker file.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/deploy
Build a container image from the Dockerfile and push it to Google Cloud
Storage (GCS). Create a Kubernetes Deployment YAML file and have it use the
image from GCS. Use kubectl apply -f {deployment.YAML} to deploy the
application to the GKE cluster. is not right.
You cannot serve a Docker image to GKE from Cloud Storage. Images must be pushed to a container registry (e.g. GCR, Docker Hub, etc.) for the cluster to pull them.
Ref: https://cloud.google.com/container-registry/docs/pushing-and-pulling
Build a container image from the Dockerfile and push it to Google Container
Registry (GCR). Create a Kubernetes Deployment YAML file and have it use the
image from GCR. Use kubectl apply -f {deployment.YAML} to deploy the
application to the GKE cluster. is the right answer.
Once you have a Docker image, you can push it to the Container Registry. You can then create a deployment YAML file pointing to this image and use kubectl apply -f {deployment YAML filename} to deploy it to the Kubernetes cluster. This assumes you already have a Kubernetes cluster and your gcloud/kubectl environment is set up to talk to it, e.g. by executing gcloud container clusters get-credentials {cluster name} --zone={container_zone}
Ref: https://cloud.google.com/container-registry/docs/pushing-and-pulling
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials
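For illustration (project ID, image name, cluster and manifest names are assumptions):
# Build the image from the Dockerfile and push it to Container Registry
docker build -t gcr.io/my-project/my-app:v1 .
gcloud auth configure-docker
docker push gcr.io/my-project/my-app:v1
# Point kubectl at the cluster and deploy the manifest referencing the GCR image
gcloud container clusters get-credentials my-cluster --zone=us-central1-a
kubectl apply -f deployment.yaml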
Question 28: Incorrect
The deployment team currently spends a lot of time creating and configuring VMs
in Google Cloud Console, and feel they could be more productive and consistent if
the same can be automated using Infrastructure as Code. You want to help them
identify a suitable service. What should you recommend?
Unmanaged Instance Group.
(Incorrect)
Deployment Manager.
(Correct)
Cloud Build.
Explanation
Unmanaged Instance Group. is not right.
Unmanaged instance groups let you load balance across a fleet of VMs that you
manage yourself. But it doesn't help with dynamically provisioning VMs.
Ref: https://cloud.google.com/compute/docs/instance-
groups#unmanaged_instance_groups
Set an Object Lifecycle Management policy to delete data older than 2 years.
(Correct)
Set an Object Lifecycle Management policy to change the storage class to Archive
for data older than 2 years.
Set an Object Lifecycle Management policy to change the storage class to Coldline
for data older than 2 years.
Explanation
Set an Object Lifecycle Management policy to change the storage class to
Coldline for data older than 2 years. is not right.
Data older than 2 years is not needed so there is no point in transitioning the data to
Coldline. The data needs to be deleted.
Start with 3 instances and manually scale as needed.
(Incorrect)
Enable Automatic Scaling and set minimum idle instances to 3.
(Correct)
Explanation
Start with 3 instances and manually scale as needed. is not right.
Manual scaling uses resident instances that continuously run the specified number of
instances regardless of the load level. This scaling allows tasks such as complex
initializations and applications that rely on the state of the memory over time. Manual
scaling does not autoscale based on the request rate, so it doesn't fit our requirements.
Ref: https://cloud.google.com/appengine/docs/standard/python/how-instances-are-
managed
Enable Automatic Scaling and set minimum idle instances to 3. is the right
answer.
Automatic scaling creates dynamic instances based on request rate, response latencies,
and other application metrics. However, if you specify the number of minimum idle
instances, that specified number of instances run as resident instances while any
additional instances are dynamic.
Ref: https://cloud.google.com/appengine/docs/standard/python/how-instances-are-
managed
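A minimal sketch of the corresponding app.yaml fragment (the runtime is an assumption):
cat > app.yaml <<'EOF'
runtime: python39
automatic_scaling:
  min_idle_instances: 3
EOF
gcloud app deploy app.yaml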
The lifecycle rule archives current (live) objects older than 60 days and transitions
Multi-regional objects older than 365 days to Nearline storage class.
The lifecycle rule deletes current (live) objects older than 60 days and transitions
Multi-regional objects older than 365 days to Nearline storage class.
The lifecycle rule transitions Multi-regional objects older than 365 days to Nearline
storage class.
The lifecycle rule deletes non-current (archived) objects older than 60 days and
transitions Multi-regional objects older than 365 days to Nearline storage class.
(Correct)
Explanation
The lifecycle rule archives current (live) objects older than 60 days and
transitions Multi-regional objects older than 365 days to Nearline storage
class. is not right.
The action has "type":"Delete" which means we want to Delete, not archive.
Ref: https://cloud.google.com/storage/docs/managing-lifecycles
The lifecycle rule deletes current (live) objects older than 60 days and
transitions Multi-regional objects older than 365 days to Nearline storage
class. is not right.
We want to delete objects as indicated by the action; however, we don't want to delete
all objects older than 60 days. We only want to delete archived objects as indicated by
"isLive":false condition.
Ref: https://cloud.google.com/storage/docs/managing-lifecycles
The lifecycle rule transitions Multi-regional objects older than 365 days to
Nearline storage class. is not right.
The first rule is missing. It deletes archived objects older than 60 days.
The lifecycle rule deletes non-current (archived) objects older than 60 days
and transitions Multi-regional objects older than 365 days to Nearline
storage class. is the right answer.
The first part of the rule: The action has "type":"Delete" which means we want to Delete.
"isLive":false condition means we are looking for objects that are not Live, i.e. objects
that are archived. Together, it means we want to delete archived objects older than 60
days. Note that if an object is deleted, it cannot be undeleted. Take care in setting up
your lifecycle rules so that you do not cause more data to be deleted than you intend.
Ref: https://cloud.google.com/storage/docs/managing-lifecycles
The second part of the rule: The action indicates we want to set storage class to
Nearline. The condition is satisfied if the existing storage class is multi-regional, and the
age of the object is 365 days or over. Together it means we want to set the storage class
to Nearline if existing storage class is multi-regional and the age of the object is 365
days or over.
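The lifecycle configuration being analysed is not reproduced above; based on the explanation, it would look roughly like this:
{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"age": 60, "isLive": false}},
    {"action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
     "condition": {"age": 365, "matchesStorageClass": ["MULTI_REGIONAL"]}}
  ]
}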
(Correct)
gcloud compute instances create [INSTANCE_NAME] --preemptible. The flag --
boot-disk-auto-delete is disabled by default.
Explanation
gcloud compute instances create [INSTANCE_NAME] --preemptible --boot-disk-
auto-delete=no. is not right.
gcloud compute instances create does not accept --boot-disk-auto-delete=no. --boot-disk-auto-delete is a boolean flag that is enabled by default; it enables automatic deletion of the boot disk when the instance is deleted. Use --no-boot-disk-auto-delete to disable this behaviour.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
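For illustration (instance name and zone are assumptions), a preemptible VM whose boot disk survives instance deletion would be created with:
gcloud compute instances create my-preemptible-vm --zone=us-central1-a --preemptible --no-boot-disk-auto-delete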
Make use of gcloud iam roles copy command to copy the IAM roles from the
Development GCP organization to the Staging GCP organization.
Make use of the Create Role from Role feature in GCP console to create IAM roles
in the Staging project from the Development IAM roles.
(Incorrect)
Make use of gcloud iam roles copy command to copy the IAM roles from the
Development GCP project to the Staging GCP project.
(Correct)
Make use of Create Role feature in GCP console to create all necessary IAM roles
from new in the Staging project.
Explanation
We are required to create the same iam roles in a different (staging) project with the
fewest possible steps.
Make use of the Create Role from Role feature in GCP console to create IAM
roles in the Staging project from the Development IAM roles. is not right.
This option creates a role in the same (development) project, not in the staging project.
So this doesn't meet our requirement to create same iam roles in the staging project.
Make use of Create Role feature in GCP console to create all necessary IAM
roles from new in the Staging project. is not right.
This option works but is not as efficient as copying the roles from development project
to the staging project.
Make use of gcloud iam roles copy command to copy the IAM roles from the
Development GCP organization to the Staging GCP organization. is not right.
We can optionally specify a destination organization but since we require to copy the
roles into "staging project" (i.e. project, not organization), this option does not meet our
requirement to create same iam roles in the staging project.
Ref: https://cloud.google.com/sdk/gcloud/reference/iam/roles/copy
Make use of gcloud iam roles copy command to copy the IAM roles from the
Development GCP project to the Staging GCP project. is the right answer.
This option fits all the requirements. You copy the roles into the destination project
using gcloud iam roles copy and by specifying the staging project destination project.
Ref: https://cloud.google.com/sdk/gcloud/reference/iam/roles/copy
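A sketch of the copy (project IDs and the role ID are assumptions):
gcloud iam roles copy --source="projects/dev-project/roles/appDeployer" --destination="appDeployer" --dest-project="staging-project"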
Question 34: Incorrect
You have been asked to create a new Kubernetes Cluster on Google Kubernetes
Engine that can autoscale the number of worker nodes as well as pods. What
should you do? (Select 2)
Create a GKE cluster and enable autoscaling on Kubernetes Engine.
(Correct)
Create a GKE cluster and enable autoscaling on the instance group of the cluster.
(Incorrect)
Create Compute Engine instances for the workers and the master and install
Kubernetes. Rely on Kubernetes to create additional Compute Engine instances
when needed.
Configure a Compute Engine instance as a worker and add it to an unmanaged
instance group. Add a load balancer to the instance group and rely on the load
balancer to create additional Compute Engine instances when needed.
Enable Horizontal Pod Autoscaling for the kubernetes deployment.
(Correct)
Explanation
Create a GKE cluster and enable autoscaling on the instance group of the
cluster. is not right.
GKE's cluster auto-scaler automatically resizes the number of nodes in a given node
pool, based on the demands of your workloads. However, we should not enable
Compute Engine autoscaling for managed instance groups for the cluster nodes. GKE's
cluster auto-scaler is separate from Compute Engine autoscaling.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler
Create Compute Engine instances for the workers and the master and install
Kubernetes. Rely on Kubernetes to create additional Compute Engine instances
when needed. is not right.
When using Google Kubernetes Engine, you cannot install the master node separately. The
cluster master runs the Kubernetes control plane processes, including the Kubernetes
API server, scheduler, and core resource controllers. The master's lifecycle is managed by
GKE when you create or delete a cluster.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture
Also, you can not add manually created compute instances to the worker node pool. A
node pool is a group of nodes within a cluster that all have the same configuration.
Node pools use a NodeConfig specification.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools
Create a GKE cluster and enable autoscaling on Kubernetes Engine. is the right
answer.
GKE's cluster autoscaler automatically resizes the number of nodes in a given node pool,
based on the demands of your workloads. You don't need to manually add or remove
nodes or over-provision your node pools. Instead, you specify a minimum and
maximum size for the node pool, and the rest is automatic. When demand is high,
cluster autoscaler adds nodes to the node pool. When demand is low, cluster autoscaler
scales back down to a minimum size that you designate. This can increase the
availability of your workloads when you need it while controlling costs.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler
Enable Horizontal Pod Autoscaling for the kubernetes deployment. is the right
answer.
Horizontal Pod Autoscaler scales up and scales down your Kubernetes workload by
automatically increasing or decreasing the number of Pods in response to the
workload's CPU or memory consumption, or in response to custom metrics reported
from within Kubernetes or external metrics from sources outside of your cluster.
Horizontal Pod Autoscaling cannot be used for workloads that cannot be scaled, such as
DaemonSets.
Ref: https://cloud.google.com/kubernetes-
engine/docs/concepts/horizontalpodautoscaler
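As a quick sketch, a Horizontal Pod Autoscaler can be attached to an existing deployment with kubectl (the deployment name and thresholds are placeholders):
kubectl autoscale deployment my-web-app --cpu-percent=60 --min=2 --max=10
This keeps between 2 and 10 replicas, scaling on average CPU utilization; custom or external metrics can also be configured through a HorizontalPodAutoscaler manifest.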
1. Create an ingress firewall rule that allows traffic on port 80 from all instances with
serviceAccount_subnet1 to all instances with serviceAccount_subnet2.
2. Create an ingress firewall rule that allows traffic on port 3306 from all instances with
serviceAccount_subnet2 to all instances with serviceAccount_subnet3.
(Correct)
1. Create an ingress firewall rule that allows all traffic from Subnet 2 (range:
192.168.57.0/24) to all other instances.
2. Create another ingress firewall rule that allows all traffic from Subnet 1 (range:
192.168.56.0/24) to all other instances.
(Incorrect)
1. Create an ingress firewall rule that allows all traffic from all instances with
serviceAccount_subnet1 to all instances with serviceAccount_subnet2.
2. Create an ingress firewall rule that allows all traffic from all instances with
serviceAccount_subnet2 to all instances with serviceAccount_subnet3.
1. Create an egress firewall rule that allows traffic on port 80 from Subnet 2 (range:
192.168.57.0/24) to all other instances.
2. Create another egress firewall rule that allows traffic on port 3306 from Subnet 1
(range: 192.168.56.0/24) to all other instances.
Explanation
This architecture resembles a standard 3 tier architecture - web, application, and
database; where the web tier can talk to just the application tier; and the application tier
can talk to both the web and database tier. The database tier only accepts requests from
the application tier and not the web tier.
We want to ensure that Web Tier can communicate with App Tier, and App Tier can
communicate with Database Tier.
1. Create an egress firewall rule that allows traffic on port 80 from Subnet
2 (range: 192.168.57.0/24) to all other instances.
2. Create another egress firewall rule that allows traffic on port 3306 from
Subnet 1 (range: 192.168.56.0/24) to all other instances. is not right.
These options create egress rules, which control outbound traffic; the requirement calls
for ingress rules, which control the traffic coming into the instances of each tier.
1. Create an ingress firewall rule that allows all traffic from Subnet 2
(range: 192.168.57.0/24) to all other instances.
2. Create another ingress firewall rule that allows all traffic from Subnet
1 (range: 192.168.56.0/24) to all other instances. is not right.
If we create ingress firewall rules with these source ranges, we are allowing the Web Tier
(192.168.56.0/24) access to all instances, including the Database Tier (192.168.58.0/24),
which is not desirable.
1. Create an ingress firewall rule that allows all traffic from all
instances with serviceAccount_subnet1 to all instances with
serviceAccount_subnet2.
2. Create an ingress firewall rule that allows all traffic from all
instances with serviceAccount_subnet2 to all instances with
serviceAccount_subnet3. is not right.
The first firewall rule ensures that all instances with serviceAccount_subnet2, i.e. all
instances in Subnet Tier #2 (192.168.57.0/24) can be reached from all instances with
serviceAccount_subnet1, i.e. all instances in Subnet Tier #1 (192.168.56.0/24), on all
ports. Similarly, the second firewall rule ensures that all instances with
serviceAccount_subnet3, i.e. all instances in Subnet Tier #3 (192.168.58.0/24) can be
reached from all instances with serviceAccount_subnet2, i.e. all instances in Subnet Tier
#2 (192.168.57.0/24), on all ports. Though this matches the required traffic flows, it
opens all ports instead of only the required ports (80 and 3306). While this solution
works, it is not as secure as the other option (see below).
1. Create an ingress firewall rule that allows traffic on port 80 from all
instances with serviceAccount_subnet1 to all instances with
serviceAccount_subnet2.
2. Create an ingress firewall rule that allows traffic on port 3306 from all
instances with serviceAccount_subnet2 to all instances with
serviceAccount_subnet3. is the right answer.
The first firewall rule ensures that all instances with serviceAccount_subnet2, i.e. all
instances in Subnet Tier #2 (192.168.57.0/24) can be reached from all instances with
serviceAccount_subnet1, i.e. all instances in Subnet Tier #1 (192.168.56.0/24), on port 80.
Similarly, the second firewall rule ensures that all instances with serviceAccount_subnet3,
i.e. all instances in Subnet Tier #3 (192.168.58.0/24) can be reached from all instances
with serviceAccount_subnet2, i.e. all instances in Subnet Tier #2 (192.168.57.0/24), on
port 3306.
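As an illustrative sketch, the two rules could be created as follows (the rule names, network name, project ID, and service account emails are placeholders; the question only tells us which service account each tier uses):
gcloud compute firewall-rules create allow-web-to-app \
    --network my-vpc --direction INGRESS --allow tcp:80 \
    --source-service-accounts sa-subnet1@my-project.iam.gserviceaccount.com \
    --target-service-accounts sa-subnet2@my-project.iam.gserviceaccount.com
gcloud compute firewall-rules create allow-app-to-db \
    --network my-vpc --direction INGRESS --allow tcp:3306 \
    --source-service-accounts sa-subnet2@my-project.iam.gserviceaccount.com \
    --target-service-accounts sa-subnet3@my-project.iam.gserviceaccount.com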
Create a second Google App Engine project with the new application code, and
onboard users gradually to the new application.
Set up a second Google App Engine service, and then update a subset of clients to
hit the new service.
Deploy a new version of the application, and use traffic splitting to send a small
percentage of traffic to it.
(Correct)
Deploy the new application version temporarily, capture logs and then roll it
back to the previous version.
Explanation
Deploy the new application version temporarily, capture logs and then roll
it back to the previous version. is not right.
Deploying a new application version and promoting it would result in your new version
serving all production traffic. If the code fix doesn't work as expected, it would result in
the application becoming unreachable to all users. This is a risky approach and should
be avoided.
Create a second Google App Engine project with the new application code, and
onboard users gradually to the new application. is not right.
You want to minimize costs. This approach effectively doubles your costs as you have to
pay for two identical environments until all users are moved over to the new application.
There is an additional overhead of manually onboarding users to the new application
which could be expensive as well as time-consuming.
Set up a second Google App Engine service, and then update a subset of
clients to hit the new service. is not right.
It is not straightforward to update a set of clients to hit the new service. When users
access an App Engine service, they use an endpoint like https://SERVICE_ID-dot-
PROJECT_ID.REGION_ID.r.appspot.com. Introducing a new service introduces a new URL
and getting your users to use the new URL is possible but involves effort and
coordination. If you want to mask these differences to the end-user, then you have to
make changes in the DNS and use a weighted algorithm to split the traffic between the
two services based on the weights assigned.
Ref: https://cloud.google.com/appengine/docs/standard/python/splitting-traffic
Ref: https://cloud.google.com/appengine/docs/standard/python/an-overview-of-app-
engine
This approach also has the drawback of doubling your costs until all users are moved
over to the new service.
Deploy a new version of the application, and use traffic splitting to send a
small percentage of traffic to it. is the right answer.
This option minimizes the risk to the application while also minimizing the complexity
and cost. When you deploy a new version to App Engine, you can choose not to
promote it to serve live traffic. Instead, you could set up traffic splitting to split traffic
between the two versions - this can all be done within Google App Engine. Once you
send a small portion of traffic to the new version, you can analyze logs to identify if the
fix has worked as expected. If the fix hasn't worked, you can update your traffic splitting
configuration to send all traffic back to the old version. If you are happy your fix has
worked, you can send more traffic to the new version or move all user traffic to the new
version and delete the old version.
Ref: https://cloud.google.com/appengine/docs/standard/python/splitting-traffic
Ref: https://cloud.google.com/appengine/docs/standard/python/an-overview-of-app-
engine
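For illustration, the deployment and split might look like this (the version IDs, service name, and split weights are placeholders):
gcloud app deploy app.yaml --version v2 --no-promote
gcloud app services set-traffic default --splits v1=0.95,v2=0.05
The --no-promote flag deploys the new version without sending it any traffic, and the traffic split then directs a small share of requests to it.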
(Incorrect)
Cloud Functions
(Correct)
Explanation
Cloud Functions. is the right answer.
Cloud Functions is Google Cloud’s event-driven serverless compute platform. It
automatically scales based on the load and requires no additional configuration. You
pay only for the resources used.
Ref: https://cloud.google.com/functions
While the other options, i.e. Google Compute Engine, Google Kubernetes Engine, and
Google App Engine, support autoscaling, it has to be configured explicitly for the
expected load and is not as effortless as the automatic scaling that Cloud Functions
provides out of the box.
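As a small sketch, deploying an HTTP-triggered function requires no scaling configuration at all (the function name, runtime, and region are placeholders):
gcloud functions deploy process-upload \
    --runtime python39 --trigger-http --allow-unauthenticated --region us-central1
Cloud Functions creates and removes instances automatically as the request rate changes.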
(Correct)
(Incorrect)
Generate a signed URL to the Stackdriver export destination for auditors to access.
(Correct)
Explanation
Create an account for auditors to have view access to Stackdriver Logging. is
not right.
While it is possible to configure a custom retention period of 10 years in Stackdriver
logging, storing logs in Stackdriver is expensive compared to Cloud Storage. Stackdriver
charges $0.01 per GB per month, whereas something like Cloud Storage Coldline
Storage costs $0.007 per GB per month (30% cheaper) and Cloud Storage Archive
Storage costs $0.004 per GB per month (60% cheaper than Stackdriver).
Ref: https://cloud.google.com/logging/docs/storage#pricing
Ref: https://cloud.google.com/storage/pricing
Export audit logs to Cloud Filestore via a Pub/Sub export sink. is not right.
Storing logs in Cloud Filestore is expensive. In Cloud Filestore, Standard Tier pricing
costs $0.2 per GB per month and Premium Tier pricing costs $0.3 per GB per month. In
comparison, Google Cloud Storage offers several storage classes that are significantly
cheaper.
Ref: https://cloud.google.com/filestore/pricing
Ref: https://cloud.google.com/storage/pricing
Export audit logs to Cloud Storage via an export sink. is the right answer.
Among all the storage solutions offered by Google Cloud Platform, Cloud storage offers
the best pricing for long term storage of logs. Google Cloud Storage offers several
storage classes such as Nearline Storage ($0.01 per GB per Month) Coldline Storage
($0.007 per GB per Month) and Archive Storage ($0.004 per GB per month) which are
significantly cheaper than the alternatives discussed above.
Ref: https://cloud.google.com/storage/pricing
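For illustration, the export sink could be created like this (the sink name, bucket name, and filter are placeholders); note that after creation the sink's writer identity must be granted permission to write to the bucket:
gcloud logging sinks create audit-logs-sink \
    storage.googleapis.com/my-audit-logs-bucket \
    --log-filter='logName:"cloudaudit.googleapis.com"'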
(Correct)
Explanation
kubectl container clusters update my-gcp-ace-proj-1 --node-pool my-gcp-ace-
primary-node-pool --num-nodes 20. is not right.
kubectl does not accept container as an operation.
Ref: https://kubernetes.io/docs/reference/kubectl/overview/#operations
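For comparison, resizing a node pool is done with gcloud rather than kubectl; a sketch using the names from the option above (the zone is an assumption, not given in the question):
gcloud container clusters resize my-gcp-ace-proj-1 \
    --node-pool my-gcp-ace-primary-node-pool \
    --num-nodes 20 --zone us-central1-a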
(Incorrect)
Discuss load balancer options with the relevant teams.
(Correct)
Explanation
Google HTTP(S) Load Balancing has native support for the WebSocket protocol when
you use HTTP or HTTPS, not HTTP/2, as the protocol to the backend.
Ref: https://cloud.google.com/load-balancing/docs/https#websocket_proxy_support
The load balancer also supports session affinity.
Ref: https://cloud.google.com/load-balancing/docs/backend-service#session_affinity
So the next possible step is Discuss load balancer options with the relevant
teams. is the right answer.
We don't need to convert the WebSocket code to use HTTP streaming or redesign the
application, as WebSocket support and session affinity are offered by Google HTTP(S)
Load Balancing. Reviewing the design is a good idea, but it has nothing to do with
WebSockets.
Cloud Datastore
(Correct)
Cloud Dataproc
Cloud Bigtable
Cloud SQL
Explanation
Cloud SQL. is not right.
Cloud SQL is not suitable for non-relational data. Cloud SQL is a fully-managed
database service that makes it easy to set up, maintain, manage, and administer your
relational databases on Google Cloud Platform
Ref: https://cloud.google.com/sql/docs
2. Configure autoscaling on the managed instance group with a scaling policy based on
HTTP traffic.
3. Configure the instance group as the backend service of an External HTTP(S) load
balancer.
(Correct)
2. Create an image from the instance's disk and export it to Cloud Storage.
3. Create an External HTTP(s) load balancer and add the Cloud Storage bucket as its
backend service.
1. Create the necessary number of instances based on the instance template to handle
peak user traffic.
3. Configure the instance group as the Backend Service of an External HTTP(S) load
balancer.
1. Deploy your Python web application instance template to Google Cloud App Engine.
2. Configure autoscaling on the managed instance group with a scaling policy based on
HTTP traffic.
2. Configure autoscaling on the unmanaged instance group with a scaling policy based
on HTTP traffic.
Explanation
1. Create an instance from the instance template.
2. Create an image from the instance's disk and export it to Cloud Storage.
3. Create an External HTTP(s) load balancer and add the Cloud Storage bucket
as its backend service. is not right.
You can create a custom image from an instance's boot disk and export it to Cloud
Storage.
https://cloud.google.com/compute/docs/images/export-image
However, this image in the Cloud Storage bucket is unable to handle traffic as it is not a
running application. Cloud Storage can not serve requests of the custom image.
1. Deploy your Python web application instance template to Google Cloud App
Engine.
2. Configure autoscaling on the managed instance group with a scaling policy
based on HTTP traffic. is not right.
You cannot use Compute Engine instance templates to deploy applications to Google
App Engine. Google App Engine lets you deploy applications quickly by providing
runtime environments for many popular languages such as Java, PHP, Node.js, Python,
C#, .Net, Ruby, and Go. Custom runtimes are an option, but Compute Engine instance
templates are not.
Ref: https://cloud.google.com/appengine
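For the option based on a managed instance group, an illustrative sketch could look like the following (the group, template, and zone names are placeholders; the load-balancing utilization target is an assumption):
gcloud compute instance-groups managed create web-mig \
    --template my-python-web-template --size 2 --zone us-central1-a
gcloud compute instance-groups managed set-autoscaling web-mig \
    --zone us-central1-a --max-num-replicas 10 \
    --target-load-balancing-utilization 0.8
The managed instance group is then added as a backend of the External HTTP(S) load balancer's backend service.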
(Incorrect)
Write a script that runs gsutil ls -lr gs://myapp-gcp-ace-logs/ to find and remove
items older than 90 days. Repeat this process every morning.
Write a lifecycle management rule in JSON and push it to the bucket with gsutil
lifecycle set config-json-file.
(Correct)
Write a lifecycle management rule in XML and push it to the bucket with gsutil
lifecycle set config-xml-file.
Explanation
Write a lifecycle management rule in XML and push it to the bucket with
gsutil lifecycle set config-xml-file. is not right.
gsutil lifecycle set enables you to set the lifecycle configuration on one or more buckets
based on the configuration file provided. However, XML is not a valid supported type for
the configuration file.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/lifecycle
Write a lifecycle management rule in JSON and push it to the bucket with
gsutil lifecycle set config-json-file. is the right answer.
You can assign a lifecycle management configuration to a bucket. The configuration
contains a set of rules which apply to current and future objects in the bucket. When an
object meets the criteria of one of the rules, Cloud Storage automatically performs a
specified action on the object. One of the supported actions is to Delete objects. You
can set up a lifecycle management to delete objects older than 90 days. "gsutil lifecycle
set" enables you to set the lifecycle configuration on the bucket based on the
configuration file. JSON is the only supported type for the configuration file. The config-
json-file specified on the command line should be a path to a local file containing the
lifecycle configuration JSON document.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/lifecycle
Ref: https://cloud.google.com/storage/docs/lifecycle
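A minimal sketch of such a configuration, saved for example as lifecycle.json (the file name is arbitrary; the bucket name is taken from the option above):
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 90}
    }
  ]
}
gsutil lifecycle set lifecycle.json gs://myapp-gcp-ace-logs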
Create a Cloud Identity account for each analyst and add them all to a group.
Grant roles/bigquery.jobUser role to the group.
Create a Cloud Identity account for each analyst and add them all to a group.
Grant roles/bigquery.dataViewer role to the group.
(Correct)
Create a Cloud Identity account for each analyst and grant roles/bigquery.jobUser
role to each account.
Explanation
Create a Cloud Identity account for each analyst and grant
roles/bigquery.dataViewer role to each account. is not right.
roles/bigquery.dataViewer provides permissions to read data (i.e. query) and metadata
from the table or view, so it is the right role. However, because the data science team
changes frequently, granting and revoking the role for individual accounts creates a
lengthy provisioning and de-provisioning process. Instead, we should use groups so that
provisioning and de-provisioning are as simple as adding or removing a user from the
group. Google Groups are a convenient way to apply an access policy to a collection of
users.
Ref: https://cloud.google.com/bigquery/docs/access-control
Create a Cloud Identity account for each analyst and add them all to a
group. Grant roles/bigquery.jobUser role to the group. is not right.
Since you want users to query the datasets, you need the dataViewer role. jobUser provides
the ability to run jobs, including "query jobs". The query job lets you query an
authorized view. An authorized view lets you share query results with particular users
and groups without giving them access to the underlying tables. You can also use the
view's SQL query to restrict the columns (fields) the users can query.
Ref: https://cloud.google.com/bigquery/docs/access-control-examples
Ref: https://cloud.google.com/bigquery/docs/access-control
Create a Cloud Identity account for each analyst and add them all to a
group. Grant roles/bigquery.dataViewer role to the group. is the right answer.
dataViewer provides permissions to Read data (i.e. query) and metadata from the table
or view, so this is the right role, and this option also rightly uses groups instead of
assigning permissions at the user level.
Ref: https://cloud.google.com/bigquery/docs/access-control-examples
Ref: https://cloud.google.com/bigquery/docs/access-control
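As an illustration, the role could be granted to the group at the project level like this (the project ID and group address are placeholders; the role could equally be granted at the dataset level):
gcloud projects add-iam-policy-binding my-analytics-project \
    --member="group:data-analysts@example.com" \
    --role="roles/bigquery.dataViewer"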
Create a shared VPC to enable the intern to access Compute Engine resources.
(Incorrect)
Grant Compute Engine Instance Admin Role for the sandbox project.
(Correct)
Explanation
Create a shared VPC to enable the intern to access Compute Engine resources. is not
right.
Creating a shared VPC is not sufficient to grant the intern access to Compute Engine
resources.
Shared VPCs are primarily used by organizations to connect resources from multiple
projects to a common Virtual Private Cloud (VPC) network, so that they can
communicate with each other securely and efficiently using internal IPs from that
network.
Ref: https://cloud.google.com/vpc/docs/shared-vpc
Grant Project Editor IAM role for sandbox project. is not right.
Project editor role grants all viewer permissions, plus permissions for actions that modify
state, such as changing existing resources. While this role lets the intern explore
compute engine settings and spin up compute instances, it grants more permissions
than what is needed. Our intern can modify any resource in the project.
https://cloud.google.com/iam/docs/understanding-roles#primitive_roles
Grant Compute Engine Admin Role for sandbox project. is not right.
Compute Engine Admin Role grants full control of all Compute Engine resources,
including networks, load balancing, and service accounts. While this role lets the intern
explore compute engine settings and spin up compute instances, it grants more
permissions than what is needed.
Ref: https://cloud.google.com/compute/docs/access/iam#compute.storageAdmin
Grant Compute Engine Instance Admin Role for the sandbox project. is the right
answer.
Compute Engine Instance Admin Role grants full control of Compute Engine instances,
instance groups, disks, snapshots, and images. It also provides read access to all
Compute Engine networking resources. This provides just the required permissions to
the intern.
Ref: https://cloud.google.com/compute/docs/access/iam#compute.storageAdmin
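For illustration, the role could be granted as follows (the project ID and the intern's email are placeholders):
gcloud projects add-iam-policy-binding sandbox-project \
    --member="user:intern@example.com" \
    --role="roles/compute.instanceAdmin.v1"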
Cloud Functions
(Incorrect)
(Correct)
(Correct)
Deploy the application on a GCE Managed Instance Group (MIG) with autoscaling
enabled.
(Correct)
Deploy the application on a GCE Unmanaged Instance Group. Front the group with
a network load balancer.
Explanation
Deploy the application on a GCE Unmanaged Instance Group. Front the group
with a network load balancer. is not right.
An unmanaged instance group is a collection of virtual machines (VMs) that reside in a
single zone, VPC network, and subnet. An unmanaged instance group is useful for
grouping together VMs that require individual configuration settings or tuning.
Unmanaged instance group does not autoscale, so it does not help reduce the amount
of time it takes to test a change to the system thoroughly.
Ref: https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-
unmanaged-instances
Deploy the application on Google App Engine Standard service. is not right.
App Engine supports many popular languages like Java, PHP, Node.js, Python, C#, .Net,
Ruby, and Go. However, C++ isn’t supported by App Engine.
Ref: https://cloud.google.com/appengine
Deploy the application as Cloud Dataproc job based on Hadoop. is not right.
Cloud Dataproc is a fast, easy-to-use, fully managed cloud service for running Apache
Spark and Apache Hadoop clusters in a simpler, more cost-efficient way. While Dataproc
is very efficient at processing ETL and big data pipelines, it is not suitable for running a
general-purpose application that executes its tests each day.
Ref: https://cloud.google.com/dataproc
(Correct)
Explanation
We want to get to the end goal with the fewest possible steps.
The output shows the clusters and the configurations they use. Using this information, it
is possible to find out the cluster using the inactive configuration with just 1 step.
Execute kubectl config use-context, then kubectl config view. is not right.
kubectl config use-context [my-cluster-name] is used to set the default context to [my-
cluster-name]. But to do this, we first need a list of contexts, and if you have multiple
contexts, you'd need to execute kubectl config use-context [my-cluster-name] against
each context. So that is at least two steps. Further, kubectl config view is used to get the
full configuration. Its output can be used to verify which clusters use which
configuration, but that is one additional step. Moreover, the output of kubectl config
view barely changes from one context to another - only the current-context field differs.
So the earlier steps of determining the contexts and switching to each context are not of
much use. Though this approach can achieve the same outcome, it involves more steps
than the other option.
Step 3: Determine the inactive configuration and the cluster using that
configuration.
The config itself has details about the clusters and contexts, as shown below.
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://35.222.130.166
  name: gke_kubernetes-260922_us-central1-a_standard-cluster-1
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://35.225.14.172
  name: gke_kubernetes-260922_us-central1-a_your-first-cluster-1
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://34.69.212.109
  name: gke_kubernetes-260922_us-central1_standard-cluster-1
contexts:
- context:
    cluster: gke_kubernetes-260922_us-central1-a_standard-cluster-1
    user: gke_kubernetes-260922_us-central1-a_standard-cluster-1
  name: gke_kubernetes-260922_us-central1-a_standard-cluster-1
- context:
    cluster: gke_kubernetes-260922_us-central1-a_your-first-cluster-1
    user: gke_kubernetes-260922_us-central1-a_your-first-cluster-1
  name: gke_kubernetes-260922_us-central1-a_your-first-cluster-1
- context:
    cluster: gke_kubernetes-260922_us-central1_standard-cluster-1
    user: gke_kubernetes-260922_us-central1_standard-cluster-1
  name: gke_kubernetes-260922_us-central1_standard-cluster-1
current-context: gke_kubernetes-260922_us-central1-a_standard-cluster-1
Setup the application using App Engine Standard environment with Cloud Router
to connect to on-premise database.
Setup the application using App Engine Standard environment with Cloud VPN to
connect to on-premise database.
(Incorrect)
Setup the application using App Engine Flexible environment with Cloud VPN to
connect to on-premise database.
(Correct)
Setup the application using App Engine Flexible environment with Cloud Router to
connect to on-premise database.
Explanation
Setup the application using App Engine Standard environment with Cloud
Router to connect to on-premise database. is not right.
Cloud router by itself is not sufficient to connect VPC to an on-premise network. Cloud
Router enables you to dynamically exchange routes between your Virtual Private Cloud
(VPC) and on-premises networks by using Border Gateway Protocol (BGP).
Ref: https://cloud.google.com/router
Setup the application using App Engine Flexible environment with Cloud
Router to connect to on-premise database. is not right.
Cloud router by itself is not sufficient to connect VPC to an on-premise network. Cloud
Router enables you to dynamically exchange routes between your Virtual Private Cloud
(VPC) and on-premises networks by using Border Gateway Protocol (BGP).
Ref: https://cloud.google.com/router
Setup the application using App Engine Standard environment with Cloud VPN
to connect to on-premise database. is not right.
App Engine Standard can’t connect to the on-premise network with just Cloud VPN.
Since App Engine Standard is serverless, it can’t use Cloud VPN tunnels directly. To get
App Engine Standard to work with Cloud VPN, you need to connect it to the VPC using
Serverless VPC Access. You configure Serverless VPC Access by creating a connector:
https://cloud.google.com/vpc/docs/configure-serverless-vpc-access
and then update your app in App Engine Standard to use this connector:
https://cloud.google.com/appengine/docs/standard/python/connecting-vpc
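For reference, creating such a connector is a single command (the connector name, region, network, and IP range below are placeholders):
gcloud compute networks vpc-access connectors create my-connector \
    --region us-central1 --network default --range 10.8.0.0/28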
Setup the application using App Engine Flexible environment with Cloud VPN
to connect to on-premise database. is the right answer.
You need Cloud VPN to connect VPC to an on-premise network.
Ref: https://cloud.google.com/vpn/docs/concepts/overview
Unlike App Engine Standard which is serverless, App Engine Flex instances are already
within the VPC, so they can use Cloud VPN to connect to the on-premise network.
Grant the development team Billing Account User (roles/billing.user) role on the
billing account and Project Billing Manager (roles/billing.projectManager) on the
GCP organization.
(Incorrect)
Grant the finance team Billing Account User (roles/billing.user) role on the billing
account.
Grant the finance team Billing Account User (roles/billing.user) role on the billing
account and Project Billing Manager (roles/billing.projectManager) on the GCP
organization.
(Correct)
Grant the development team Billing Account User (roles/billing.user) role on the
billing account.
Explanation
Grant the finance team Billing Account User (roles/billing.user) role on the
billing account. is not right.
To link a project to a billing account, you need the necessary roles at the project level as
well as at the billing account level. In this scenario, we are granting just the Billing
Account User role on the billing account to the Finance team, which allows them to link
projects to the billing account on which the role is granted. But we haven't granted
them any role at the project level, so they would not be able to link projects.
Grant the finance team Billing Account User (roles/billing.user) role on the
billing account and Project Billing Manager (roles/billing.projectManager)
on the GCP organization. is the right answer.
To link a project to a billing account, you need the necessary roles at the project level as
well as at the billing account level. In this scenario, we are assigning the finance team
the Billing Account User role on the billing account, which allows them to create new
projects linked to the billing account on which the role is granted. We are also assigning
them the Project Billing Manager role on the organization (trickles down to the project
as well) which lets them attach the project to the billing account, but does not grant any
rights over resources.
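As an illustrative sketch, the two grants could look like this (the organization ID, billing account ID, and group address are placeholders; the billing accounts command may require the beta component depending on your gcloud version):
gcloud organizations add-iam-policy-binding 123456789012 \
    --member="group:finance-team@example.com" \
    --role="roles/billing.projectManager"
gcloud beta billing accounts add-iam-policy-binding 0X0X0X-0X0X0X-0X0X0X \
    --member="group:finance-team@example.com" \
    --role="roles/billing.user"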