Practice Test 2 - Results

Return to review
Attempt 1
All knowledge areas
All questions
Question 1: Correct
You have developed an enhancement for a photo compression application running
on the App Engine Standard service in Google Cloud Platform, and you want to
canary test this enhancement on a small percentage of live users. How can you do
this?

Use gcloud app deploy to deploy the enhancement as a new version in the existing
application and use --splits flag to split the traffic between the old version and the
new version. Assign a weight of 1 to the new version and 99 to the old version.

(Correct)

Use gcloud app deploy to deploy the enhancement as a new version in the existing
application with --migrate flag.

Deploy the enhancement as a new App Engine Application in the existing GCP
project. Make use of App Engine native routing to have the old App Engine
application proxy 1% of the requests to the new App Engine application.

Deploy the enhancement as a new App Engine Application in the existing GCP
project. Configure the network load balancer to route 99% of the requests to the
old (existing) App Engine Application and 1% to the new App Engine Application.

Explanation
Use gcloud app deploy to deploy the enhancement as a new version in the
existing application with --migrate flag. is not right.
migrate is not a valid flag for the gcloud app deploy command.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/deploy
Also, gcloud app versions migrate, which is a valid command to migrate traffic from one
version to another for a set of services, is not suitable either as we only want to send 1%
traffic.
https://cloud.google.com/sdk/gcloud/reference/app/versions/migrate

Deploy the enhancement as a new App Engine Application in the existing GCP
project. Make use of App Engine native routing to have the old App Engine
application proxy 1% of the requests to the new App Engine application. is
not right.
While this can be done, it adds unnecessary complexity: App Engine already provides an
out-of-the-box option to split traffic between versions seamlessly.

Deploy the enhancement as a new App Engine Application in the existing GCP
project. Configure the network load balancer to route 99% of the requests to
the old (existing) App Engine Application and 1% to the new App Engine
Application. is not right.
Instances that participate as backend VMs for network load balancers must be running
the appropriate Linux guest environment, Windows guest environment, or other
processes that provide equivalent functionality. The network load balancer is not
suitable for the App Engine standard environment, which is container-based and
provides us with specific runtimes without any promise on the underlying guest
environments.
Use gcloud app deploy to deploy the enhancement as a new version in the
existing application and use --splits flag to split the traffic between the
old version and the new version. Assign a weight of 1 to the new version and
99 to the old version. is the right answer.
You can use traffic splitting to specify a percentage distribution of traffic across two or
more of the versions within a service. Splitting traffic allows you to conduct A/B testing
between your versions and provides control over the pace when rolling out features. For
this scenario, we can split the traffic as shown below, sending 1% to v2 and 99% to v1 by
executing the command gcloud app services set-traffic service1 --splits v2=1,v1=99
Ref: https://cloud.google.com/sdk/gcloud/reference/app/services/set-traffic
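
As an illustration only (the app.yaml path, the service name default and the version IDs v1/v2 are assumptions, not given in the question), the deployment and split could look like this:

# Deploy the enhancement as a new version without shifting any traffic to it
gcloud app deploy app.yaml --version v2 --no-promote
# Send 1% of traffic to the new version and keep 99% on the old version
gcloud app services set-traffic default --splits v1=99,v2=1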

Question 2: Incorrect
Your company collects and stores CCTV footage videos in raw format in Google
Cloud Storage. Within the first 30 days, footage is processed regularly for
detecting patterns such as threat/object/face detection and suspicious behavior
detection. You want to minimize the cost of storing all the data in Google Cloud.
How should you store the videos?

Use Google Cloud Regional Storage for the first 30 days, and use lifecycle rules to
transition to Coldline Storage.

(Correct)

Use Google Cloud Nearline Storage for the first 30 days, and use lifecycle rules to
transition to Coldline Storage.

(Incorrect)

Use Google Cloud Regional Storage for the first 30 days, and use lifecycle rules to
transition to Nearline Storage.

Use Google Cloud Regional Storage for the first 30 days, and then move videos to
Google Persistent Disk.

Explanation
Footage is processed regularly within the first 30 days and is rarely used after that. So
we need to store the videos for the first 30 days in a storage class that supports
economic retrieval (for processing) or at no cost, and then transition the videos to a
cheaper storage after 30 days.

Use Google Cloud Regional Storage for the first 30 days, and use lifecycle
rules to transition to Nearline Storage. is not right.
Transitioning the data to Nearline Storage is a good idea as Nearline Storage costs less
than standard storage, is highly durable for storing infrequently accessed data and a
better choice than Standard Storage in scenarios where slightly lower availability is an
acceptable trade-off for lower at-rest storage costs.
Ref: https://cloud.google.com/storage/docs/storage-classes#nearline
However, we do not have a requirement to access the data after 30 days; and there are
storage classes that are cheaper than nearline storage, so it is not a suitable option.
Ref: https://cloud.google.com/storage/pricing#storage-pricing

Use Google Cloud Regional Storage for the first 30 days, and then move
videos to Google Persistent Disk. is not right.
Persistent disk pricing is almost double that of standard storage class in Google Cloud
Storage service. Plus the persistent disk can only be accessed when attached to another
service such as compute engine, GKE, etc making this option very expensive.
Ref: https://cloud.google.com/storage/pricing#storage-pricing
Ref: https://cloud.google.com/compute/disks-image-pricing#persistentdisk

Use Google Cloud Nearline Storage for the first 30 days, and use lifecycle
rules to transition to Coldline Storage. is not right.
Nearline storage class is suitable for storing infrequently accessed data and has costs
associated with retrieval. Since the footage is processed regularly within the first 30
days, data retrieval costs may far outweigh the savings made by using nearline storage
over standard storage class.
Ref: https://cloud.google.com/storage/docs/storage-classes#nearline
Ref: https://cloud.google.com/storage/pricing#archival-pricing

Use Google Cloud Regional Storage for the first 30 days, and use lifecycle
rules to transition to Coldline Storage. is the right answer.
We save the videos initially in Regional Storage (Standard) which does not have retrieval
charges so we do not pay for accessing data within the first 30 days during which the
videos are accessed frequently. We only pay for the standard storage costs. After 30
days, we transition the CCTV footage videos to Coldline storage which is a very-low-
cost, highly durable storage service for storing infrequently accessed data. Coldline
Storage is a better choice than Standard Storage or Nearline Storage in scenarios where
slightly lower availability, a 90-day minimum storage duration, and higher costs for data
access are acceptable trade-offs for lowered at-rest storage costs. Coldline storage class
is cheaper than Nearline storage class.
Ref: https://cloud.google.com/storage/docs/storage-classes#standard
Ref: https://cloud.google.com/storage/docs/storage-classes#coldline
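
For illustration only (the bucket name cctv-footage is an assumption), a lifecycle rule that moves objects to Coldline after 30 days could be defined in a lifecycle.json file and applied with gsutil:

# lifecycle.json - transition objects to Coldline once they are 30 days old
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 30}
    }
  ]
}

# Apply the policy to the bucket
gsutil lifecycle set lifecycle.json gs://cctv-footage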

Question 3: Correct
Your company wants to move all documents from a secure internal NAS drive to a
Google Cloud Storage (GCS) bucket. The data contains personally identifiable
information (PII) and sensitive customer information. Your company tax auditors
need access to some of these documents. What security strategy would you
recommend on GCS?

Grant IAM read-only access to users, and use default ACLs on the bucket.

Grant no Google Cloud Identity and Access Management (Cloud IAM) roles to
users, and use granular ACLs on the bucket.

(Correct)

Use signed URLs to generate time bound access to objects.

Create randomized bucket and object names. Enable public access, but only
provide specific file URLs to people who do not have Google accounts and need
access.

Explanation
Use signed URLs to generate time-bound access to objects. is not right.
When dealing with sensitive customer information such as PII, using signed URLs is not
a great idea as anyone with access to the URL has access to PII data. Signed URLs
provide time-limited resource access to anyone in possession of the URL, regardless of
whether they have a Google account. With PII Data, we want to be sure who has access
and signed URLs don't guarantee that.
Ref: https://cloud.google.com/storage/docs/access-control/signed-urls

Grant IAM read-only access to users, and use default ACLs on the bucket. is
not right.
We do not need to grant all users IAM read-only access to this sensitive data. Only the
users who need access to the sensitive/PII data should be granted access.

Create randomized bucket and object names. Enable public access, but only
provide specific file URLs to people who do not have Google accounts and
need access. is not right.
Enabling public access to the buckets and objects makes them visible to everyone. There
are a number of scanning tools out in the market with the sole purpose of identifying
buckets/objects that can be reached publicly. Should one of these tools be used by a
bad actor to find out our public bucket/objects, it would result in a security breach.

Grant no Google Cloud Identity and Access Management (Cloud IAM) roles to
users, and use granular ACLs on the bucket. is the right answer.
We start with no explicit access for any of the IAM users, and the bucket ACLs then
control which users can access which objects. This is the most secure way of ensuring
that only the people who require access to the bucket are granted it: we block everyone
from accessing the bucket and explicitly provide access to specific users through ACLs.
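
As a small sketch (the bucket, object and auditor account names are hypothetical), read access to a single object can be granted to a specific tax auditor through an object ACL:

# Grant read access on one object to one auditor, with no broad IAM roles involved
gsutil acl ch -u auditor@example.com:R gs://sensitive-docs/tax-audit-2019.pdf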

Question 4: Correct
A GKE cluster (test environment) in your test GCP project is experiencing issues
with a sidecar container connecting to Cloud SQL. This issue has resulted in a
massive amount of log entries in Cloud Logging and shot up your bill by 25%.
Your manager has asked you to disable these logs as quickly as possible and using
the least number of steps. You want to follow Google recommended practices.
What should you do?

Recreate the GKE cluster and disable Cloud Logging.

In Cloud Logging, disable the log source for GKE Cluster Operations resource in
the Logs ingestion window.

In Cloud Logging, disable the log source for GKE container resource in the Logs
ingestion window.

(Correct)

Recreate the GKE cluster and disable Cloud Monitoring.


Explanation
Recreate the GKE cluster and disable Cloud Logging. is not right.
We only need to disable the logs ingested from the GKE container; we don't need to
delete the existing cluster and create a new one.

Recreate the GKE cluster and disable Cloud Monitoring. is not right.
The issue is with the logs ingested from the GKE container, not with Cloud Monitoring;
we don't need to delete the existing cluster and create a new one.

In Cloud Logging, disable the log source for GKE Cluster Operations resource
in the Logs ingestion window. is not right.
We need to disable only the logs ingested from the GKE container resource, not the
entire GKE Cluster Operations resource.

In Cloud Logging, disable the log source for GKE container resource in the
Logs ingestion window. is the right answer.
We want to disable logs from a specific GKE container, and this is the only option that
does that.
More information about logs
exclusions: https://cloud.google.com/logging/docs/exclusions.
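
From the command line, the same effect can be achieved with an exclusion filter on the _Default sink. The sketch below is an assumption-heavy illustration (the exclusion name is made up, and the exact flag syntax should be verified against the gcloud logging sinks update reference):

# Stop ingesting GKE container logs into the _Default sink
gcloud logging sinks update _Default \
  --add-exclusion=name=exclude-gke-container,filter='resource.type="k8s_container"'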

Question 5: Incorrect
You are migrating a mission-critical HTTPS Web application from your on-
premises data centre to Google Cloud, and you need to ensure unhealthy compute
instances within the autoscaled Managed Instances Group (MIG) are recreated
automatically. What should you do?

Configure a health check on port 443 when creating the Managed Instance Group
(MIG).

(Correct)

Add a metadata tag to the Instance Template with key: healthcheck value:
enabled.

(Incorrect)

Deploy Managed Instance Group (MIG) instances in multiple zones.

When creating the instance template, add a startup script that sends server status
to Cloud Monitoring as a custom metric.

Explanation
Deploy Managed Instance Group (MIG) instances in multiple zones. is not right.
You can create two types of MIGs: A zonal MIG, which deploys instances to a single zone
and a regional MIG, which deploys instances to multiple zones across the same region.
However, this doesn't help with recreating unhealthy VMs.
Ref: https://cloud.google.com/compute/docs/instance-groups

Add a metadata tag to the Instance Template with key: healthcheck value:
enabled. is not right.
Metadata entries are key-value pairs and do not influence any other behaviour.
Ref: https://cloud.google.com/compute/docs/storing-retrieving-metadata

When creating the instance template, add a startup script that sends server
status to Cloud Monitoring as a custom metric. is not right.
The startup script is executed only when the instance boots up. In contrast, we need
something like a liveness check that monitors the status of the server periodically to
identify if the VM is unhealthy. So this is not going to work.
Ref: https://cloud.google.com/compute/docs/startupscript

Configure a health check on port 443 when creating the Managed Instance
Group (MIG). is the right answer.
To improve the availability of your application and to verify that your application is
responding, you can configure an auto-healing policy for your managed instance group
(MIG). An auto-healing policy relies on an application-based health check to verify that
an application is responding as expected. If the auto healer determines that an
application isn't responding, the managed instance group automatically recreates that
instance. Since our application is an HTTPS web application, we need to set up our
health check on port 443, which is the standard port for HTTPS.
Ref: https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-
in-migs
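
A minimal sketch of this setup (the health check name, MIG name, zone, request path and initial delay are assumptions):

# Create an HTTPS health check on port 443
gcloud compute health-checks create https web-health-check --port 443 --request-path /healthz
# Attach it to the MIG as an autohealing policy
gcloud compute instance-groups managed update web-mig --zone us-central1-a \
  --health-check web-health-check --initial-delay 300
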
Question 6: Correct
Your company has an annual audit, and you need to provide external auditors with
access to the last 10 years of audit logs. You want to minimize cost and
operational overhead while following Google recommended practices. What
should you do? (Select Three)

Export audit logs to Cloud Storage via an audit log export sink.

(Correct)

Set a custom retention of 10 years in Stackdriver logging and provide external
auditors view access to Stackdriver Logs.

Export audit logs to Cloud Filestore via a Pub/Sub export sink.

Configure a lifecycle management policy on the logs bucket to delete objects older
than 10 years.

(Correct)

Export audit logs to BigQuery via an audit log export sink.

Grant external auditors Storage Object Viewer role on the logs storage bucket.

(Correct)

Explanation
Export audit logs to Cloud Filestore via a Pub/Sub export sink. is not right.
Storing logs in Cloud Filestore is expensive. In Cloud Filestore, Standard Tier pricing
costs $0.2 per GB per month and Premium Tier pricing costs $0.3 per GB per month. In
comparison, Google Cloud Storage offers several storage classes that are significantly
cheaper.
Ref: https://cloud.google.com/filestore/pricing
Ref: https://cloud.google.com/storage/pricing

Set a custom retention of 10 years in Stackdriver logging and provide external
auditors view access to Stackdriver Logs. is not right.
While it is possible to configure a custom retention period of 10 years in Stackdriver
logging, storing logs in Stackdriver is expensive compared to Cloud Storage. Stackdriver
charges $0.01 per GB per month, whereas Cloud Storage Coldline Storage costs $0.007
per GB per month (30% cheaper) and Cloud Storage Archive Storage costs $0.004 per GB
per month (60% cheaper than Stackdriver).
Ref: https://cloud.google.com/logging/docs/storage#pricing
Ref: https://cloud.google.com/storage/pricing

Export audit logs to BigQuery via an audit log export sink. is not right.
Storing logs in BigQuery is expensive. In BigQuery, Active storage costs $0.02 per GB per
month and Long-term storage costs $0.01 per GB per month. In comparison, Google
Cloud Storage offers several storage classes that are significantly cheaper.
Ref: https://cloud.google.com/bigquery/pricing
Ref: https://cloud.google.com/storage/pricing

Export audit logs to Cloud Storage via an audit log export sink. is the right
answer.
Among all the storage solutions offered by Google Cloud Platform, Cloud Storage offers
the best pricing for long-term storage of logs. Google Cloud Storage offers several
storage classes such as Nearline Storage ($0.01 per GB per month), Coldline Storage
($0.007 per GB per month) and Archive Storage ($0.004 per GB per month), which are
significantly cheaper than the storage options covered by the other options.
Ref: https://cloud.google.com/storage/pricing

Grant external auditors Storage Object Viewer role on the logs storage
bucket. is the right answer.
You can provide external auditors access to the logs by granting them the Storage
Object Viewer role on the logs bucket, which allows them to read any object stored in
that bucket.
Ref: https://cloud.google.com/storage/docs/access-control/iam

Configure a lifecycle management policy on the logs bucket to delete objects
older than 10 years. is the right answer.
You need to archive log files for 10 years, but you don't need log files older than 10
years. And since you also want to minimize costs, it is a good idea to set up a lifecycle
management policy on the bucket to delete objects that are older than 10 years. A
lifecycle management configuration is a set of rules that apply to current and future
objects in the bucket. When an object meets the criteria of one of the rules, Cloud
Storage automatically performs a specified action (delete, in this case) on the object.
Ref: https://cloud.google.com/storage/docs/lifecycle
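
Taken together, a hedged sketch of the three steps (the bucket, sink and auditor names are assumptions; the sink's writer identity must also be granted write access on the bucket):

# 1. Export audit logs to a Cloud Storage bucket via a log sink
gcloud logging sinks create audit-logs-sink \
  storage.googleapis.com/audit-logs-archive \
  --log-filter='logName:"cloudaudit.googleapis.com"'

# 2. lifecycle.json - delete objects older than roughly 10 years (3650 days)
{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 3650}}]}
gsutil lifecycle set lifecycle.json gs://audit-logs-archive

# 3. Give the external auditors read access to the exported logs
gsutil iam ch user:auditor@example.com:objectViewer gs://audit-logs-archive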

Question 7: Incorrect
Your company hosts a number of applications in Google Cloud and requires that
log messages from all applications be archived for 10 years to comply with local
regulatory requirements. Which approach should you use?

1. Enable Stackdriver Logging API

2. Configure web applications to send logs to Stackdriver

3. Export logs to BigQuery

(Incorrect)

1. Enable Stackdriver Logging API

2. Configure web applications to send logs to Stackdriver

3. Export logs to Google Cloud Storage

(Correct)

1. Enable Stackdriver Logging API

2. Configure web applications to send logs to Stackdriver

Grant the security team access to the logs in each Project


Explanation
Grant the security team access to the logs in each Project. is not right.
Granting the security team access to the logs in each Project doesn't guarantee log
retention. If the security team has to come up with a manual process to copy all the log
files into another archival store, the ongoing operational costs can be huge.

1. Enable Stackdriver Logging API
2. Configure web applications to send logs to Stackdriver. is not right.
In Stackdriver, application logs are retained by default for just 30 days after which they
are purged.
Ref: https://cloud.google.com/logging/quotas
While it is possible to configure a custom retention period of 10 years, storing logs in
Stackdriver is very expensive compared to Cloud Storage. Stackdriver charges $.01 per
GB per month, whereas something like Cloud Storage Coldline Storage costs $0.007 per
GB per month (30% cheaper) and Cloud Storage Archive Storage costs 0.004 per GB per
month (60% cheaper than Stackdriver)
Ref: https://cloud.google.com/logging/docs/storage#pricing
Ref: https://cloud.google.com/storage/pricing

The difference between the remaining two options is whether we store the logs in
BigQuery or Google Cloud Storage.

1. Enable Stackdriver Logging API
2. Configure web applications to send logs to Stackdriver
3. Export logs to BigQuery. is not right.
While enabling Stackdriver Logging API and having the applications send logs to stack
driver is a good start, exporting and storing logs in BigQuery is fairly expensive. In
BigQuery, Active storage costs $0.02 per GB per month and Long-term storage costs
$0.01 per GB per month. In comparison, Google Cloud Storage offers several storage
classes that are significantly cheaper.
Ref: https://cloud.google.com/bigquery/pricing
Ref: https://cloud.google.com/storage/pricing

1. Enable Stackdriver Logging API
2. Configure web applications to send logs to Stackdriver
3. Export logs to Google Cloud Storage. is the right answer.
Google Cloud Storage offers several storage classes such as Nearline Storage ($0.01 per
GB per month), Coldline Storage ($0.007 per GB per month) and Archive Storage ($0.004
per GB per month), which are significantly cheaper than the storage options covered by
the other options.
Ref: https://cloud.google.com/storage/pricing

Question 8: Incorrect
You want to use Google Cloud Storage to host a static website on
www.example.com for your staff. You created a bucket example-static-website
and uploaded index.html and css files to it. You turned on static website hosting
on the bucket and set up a CNAME record on www.example.com to point to
c.storage.googleapis.com. You access the static website by navigating to
www.example.com in the browser but your index page is not displayed. What
should you do?

Delete the existing bucket, create a new bucket with the name www.example.com
and upload the html/css files.

(Correct)

Reload the Cloud Storage static website server to load the objects.

In example.com zone, modify the CNAME record to
c.storage.googleapis.com/example-static-website.

(Incorrect)

In example.com zone, delete the existing CNAME record and set up an A record
instead to point to c.storage.googleapis.com.

Explanation
In example.com zone, modify the CNAME record to
c.storage.googleapis.com/example-static-website. is not right.
CNAME records cannot contain paths. There is nothing wrong with the current CNAME
record.
In example.com zone, delete the existing CNAME record and set up an A record
instead to point to c.storage.googleapis.com. is not right.
A records cannot use hostnames. A records use IP Addresses.

Reload the Cloud Storage static website server to load the objects. is not
right.
There is no such thing as a Cloud Storage static website server. All infrastructure that
underpins the static websites is handled by Google Cloud Platform.

Delete the existing bucket, create a new bucket with the name
www.example.com and upload the html/css files. is the right answer.
We need to create a bucket whose name matches the CNAME you created for your
domain. For example, if you added a CNAME record pointing www.example.com to
c.storage.googleapis.com., then create a bucket with the name
"www.example.com".A CNAME record is a type of DNS record. It directs traffic that
requests a URL from your domain to the resources you want to serve, in this case,
objects in your Cloud Storage buckets. For www.example.com, the CNAME record might
contain the following information:

NAME TYPE DATA


www.example.com CNAME c.storage.googleapis.com.

Ref: https://cloud.google.com/storage/docs/hosting-static-website
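
A minimal sketch of the fix (assuming the www.example.com domain has already been verified so the domain-named bucket can be created):

# Create a bucket whose name matches the CNAME host
gsutil mb gs://www.example.com
# Upload the site content and set the main page
gsutil cp index.html styles.css gs://www.example.com
gsutil web set -m index.html gs://www.example.com
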
Question 9: Correct
You created an update for your application on App Engine. You want to deploy the
update without impacting your users. You want to be able to roll back as quickly
as possible if it fails. What should you do?

Deploy the update as the same version that is currently running. You are confident
the update works so you don't plan for a rollback strategy.

Deploy the update as the same version that is currently running. If the update
fails, redeploy your older version using the same version identifier.


Notify your users of an upcoming maintenance window and ask them not to use
your application during this window. Deploy the update in that maintenance
window.

Deploy the update as a new version. Migrate traffic from the current version to the
new version. If it fails, migrate the traffic back to your older version.

(Correct)

Explanation
Deploy the update as the same version that is currently running. You are
confident the update works so you don't plan for a rollback strategy. is not
right.
Irrespective of the level of confidence, you should always prepare a rollback strategy as
things can go wrong for reasons out of our control.

Deploy the update as the same version that is currently running. If the
update fails, redeploy your older version using the same version
identifier. is not right.
While this can be done, the rollback process is not quick. Your application is
unresponsive until you have redeployed the older version which can take quite a bit of
time depending on how it is set up.

Notify your users of an upcoming maintenance window and ask them not to use
your application during this window. Deploy the update in that maintenance
window. is not right.
Our requirement is to deploy the update without impacting our users but by asking
them to not use the application during the maintenance window, you are impacting all
users.

Deploy the update as a new version. Migrate traffic from the current version
to the new version. If it fails, migrate the traffic back to your older
version. is the right answer.
This option enables you to deploy a new version and send all traffic to the new version.
If you realize your updated application is not working, the rollback is as simple as
marking your older version as default. This can all be done in the GCP console with a
few clicks.
Ref: https://cloud.google.com/appengine/docs/admin-api/deploying-apps
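
As a sketch (the service name default and the version IDs v1/v2 are assumptions):

# Deploy the update as a new version without sending traffic to it
gcloud app deploy --version v2 --no-promote
# Migrate traffic from the current version to the new version
gcloud app services set-traffic default --splits v2=1 --migrate
# Roll back by pointing all traffic at the previous version again
gcloud app services set-traffic default --splits v1=1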

Question 10: Correct


Your organization is planning the infrastructure for a new large-scale application
that will need to store anything between 200 TB to a petabyte of data in NoSQL
format for Low-latency read/write and High-throughput analytics. Which storage
option should you use?

Cloud Datastore.

Cloud Spanner.

Cloud Bigtable.

(Correct)

Cloud SQL.

Explanation
Cloud Spanner. is not right.
Cloud Spanner is not a NoSQL database. It is a highly scalable, enterprise-grade,
globally-distributed, and strongly consistent relational database service.
Ref: https://cloud.google.com/spanner

Cloud SQL. is not right.

Cloud SQL is not a NoSQL database. It is a fully managed relational database service.
Ref: https://cloud.google.com/sql/docs

Cloud Datastore. is not right.


While Cloud Datastore is a highly scalable NoSQL database, it can't handle petabyte-
scale data.
https://cloud.google.com/datastore

Cloud Bigtable. is the right answer.


Cloud Bigtable is a petabyte-scale, fully managed NoSQL database service for large
analytical and operational workloads.
Ref: https://cloud.google.com/bigtable/

Question 11: Correct


Your company recently acquired a startup that lets its developers pay for their
projects using their company credit cards. You want to consolidate the billing of
all GCP projects into a new billing account. You want to follow Google
recommended practices. How should you do this?

In the GCP Console, move all projects to the root organization in the Resource
Manager.

(Correct)

Raise a support request with Google Billing Support and request them to create a
new billing account and link all the projects to the billing account.

Send an email to billing.support@cloud.google.com and request them to create a
new billing account and link all the projects to the billing account.

Ensure you have the Billing Account Creator Role. Create a new Billing account
yourself and set up a payment method with company credit card details.

Explanation
Send an email to billing.support@cloud.google.com and request them to create
a new billing account and link all the projects to the billing account. is
not right.
That is not how we set up billing for the organization.
Ref: https://cloud.google.com/billing/docs/concepts

Raise a support request with Google Billing Support and request them to
create a new billing account and link all the projects to the billing
account. is not right.
That is not how we set up billing for the organization.
Ref: https://cloud.google.com/billing/docs/concepts

Ensure you have the Billing Account Creator Role. Create a new Billing
account yourself and set up a payment method with company credit card
details. is not right.
Unless all projects are modified to use the new billing account, this doesn't work.
Ref: https://cloud.google.com/billing/docs/concepts

In the GCP Console, move all projects to the root organization in the
Resource Manager. is the right answer.
If we move all projects under the root organization hierarchy, they still need to be
modified to use a billing account within the organization (same as the previous option).
Ref: https://cloud.google.com/resource-manager/docs/migrating-projects-
billing#top_of_page
Note: The link between projects and billing accounts is preserved, irrespective of the
hierarchy. When you move your existing projects into the organization, they will
continue to work and be billed as they used to before the migration, even if the
corresponding billing account has not been migrated yet.
But with this option, all projects are in the organization resource hierarchy, so the
organization can uniformly apply organization policies to all its projects, which is a
Google recommended practice. So this is the better of the two options.
Ref: https://cloud.google.com/billing/docs/concepts
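
A hedged sketch of the consolidation steps (the project ID, organization ID and billing account ID are placeholders, and in older SDK versions the billing command sits under gcloud beta billing):

# Move an acquired project into the organization resource hierarchy
gcloud projects move startup-project-1 --organization 123456789012
# Link the project to the consolidated billing account
gcloud billing projects link startup-project-1 --billing-account 0X0X0X-0X0X0X-0X0X0X
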
Question 12: Correct
You want to persist logs for 10 years to comply with regulatory requirements. You
want to follow Google recommended practices. Which Google Cloud Storage class
should you use?

Standard storage class

Coldline storage class

Nearline storage class

Archive storage class

(Correct)

Explanation
In April 2019, Google introduced a new storage class, "Archive storage class", which is the
lowest-cost, highly durable storage service for data archiving, online backup, and
disaster recovery. Google previously recommended the Coldline storage class, but the
recommendation has since been updated to: "Coldline Storage is ideal for data you
plan to read or modify at most once a quarter. Note, however, that for data being kept
entirely for backup or archiving purposes, Archive Storage is more cost-effective, as it
offers the lowest storage costs."

Ref: https://cloud.google.com/storage/docs/storage-classes#archive
Ref: https://cloud.google.com/storage/docs/storage-classes#coldline

So Archive storage class is the right answer
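
For example (the bucket name and location are assumptions), a bucket can be created directly in the Archive class:

# Create a bucket with Archive as its default storage class
gsutil mb -c archive -l us-central1 gs://regulatory-logs-archive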

Question 13: Correct


You work for a big multinational financial company that has several hundreds of
Google Cloud Projects for various development, test and production workloads.
Financial regulations require your company to store all audit files for three years.
What should you do to implement a log retention solution while minimizing
storage cost?

Export audit logs from Cloud Logging to BigQuery via an export sink.

Export audit logs from Cloud Logging to Cloud Pub/Sub via an export sink.
Configure a Cloud Dataflow pipeline to process these messages and store them in
Cloud SQL for MySQL.

Write a script that exports audit logs from Cloud Logging to BigQuery. Use Cloud
Scheduler to trigger the script every hour.

Export audit logs from Cloud Logging to Coldline Storage bucket via an export
sink.

(Correct)

Explanation
Export audit logs from Cloud Logging to BigQuery via an export sink. is not
right.
You can export logs into BigQuery by creating one or more sinks that include a logs
query and an export destination (a BigQuery dataset). However, this option is costly
compared to Cloud Storage.
Ref: https://cloud.google.com/logging/docs/export/configure_export_v2

Write a script that exports audit logs from Cloud Logging to BigQuery. Use
Cloud Scheduler to trigger the script every hour. is not right.
Stackdriver already offers sink exports that let you copy logs from Stackdriver logs to
BigQuery. While BigQuery is already quite expensive compared to Cloud Storage,
coming up with a custom script and maintaining it to copy the logs from Stackdriver
logs to BigQuery is going to add to the cost. This option is very inefficient and
expensive.

Export audit logs from Cloud Logging to Cloud Pub/Sub via an export sink.
Configure a Cloud Dataflow pipeline to process these messages and store them
in Cloud SQL for MySQL. is not right.
Cloud SQL is primarily used for storing relational data. Storing vast quantities of logs in
Cloud SQL is very expensive compared to Cloud Storage. And add to it the fact that you
also need to pay for Cloud Pub/Sub and Cloud Dataflow pipeline, and this option gets
very expensive very soon.

Export audit logs from Cloud Logging to Coldline Storage bucket via an
export sink. is the right answer.
Coldline Storage is the perfect service to store audit logs from all the projects and is
very cost-efficient as well. Coldline Storage is a very-low-cost, highly durable storage
service for storing infrequently accessed data. Coldline Storage is a better choice than
Standard Storage or Nearline Storage in scenarios where slightly lower availability, a 90-
day minimum storage duration, and higher costs for data access are acceptable trade-
offs for lowered at-rest storage costs. Coldline Storage is ideal for data you plan to read
or modify at most once a quarter.
Ref: https://cloud.google.com/storage/docs/storage-classes#coldline

Question 14: Correct


A mission-critical application running in Google Cloud Platform requires an urgent
update to fix a security issue without any downtime. How should you do this in CLI
using deployment manager?

Use gcloud deployment-manager deployments create and point to the deployment
config file.

Use gcloud deployment-manager resources update and point to the deployment
config file.

Use gcloud deployment-manager deployments update and point to the deployment
config file.

(Correct)

Use gcloud deployment-manager resources create and point to the deployment
config file.

Explanation
Use gcloud deployment-manager resources create and point to the deployment
config file. is not right.
gcloud deployment-manager resources command does not support the action create.
The supported actions are describe and list. So this option is not right.
Ref: https://cloud.google.com/sdk/gcloud/reference/deployment-manager/resources

Use gcloud deployment-manager resources update and point to the deployment
config file. is not right.
gcloud deployment-manager resources command does not support the action update.
The supported actions are describe and list. So this option is not right.
Ref: https://cloud.google.com/sdk/gcloud/reference/deployment-manager/resources

Use gcloud deployment-manager deployments create and point to the deployment
config file. is not right.
gcloud deployment-manager deployments create - creates a deployment, but we want
to update a deployment. So this option is not right.
Ref: https://cloud.google.com/sdk/gcloud/reference/deployment-
manager/deployments/create

Use gcloud deployment-manager deployments update and point to the deployment
config file. is the right answer.
gcloud deployment-manager deployments update - updates a deployment based on a
provided config file and fits our requirement.
Ref: https://cloud.google.com/sdk/gcloud/reference/deployment-
manager/deployments/update
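
A minimal sketch (the deployment and config file names are assumptions):

# Apply the security fix by updating the existing deployment in place
gcloud deployment-manager deployments update prod-deployment --config deployment.yaml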

Question 15: Incorrect


Your company has many Citrix services deployed in the on-premises datacenter,
and they all connect to the Citrix Licensing Server on 10.10.10.10 in the same data
centre. Your company wants to migrate the Citrix Licensing Server and all Citrix
services to Google Cloud Platform. You want to minimize changes while ensuring
the services can continue to connect to the Citrix licensing server. How should you
do this in Google Cloud?

Use gcloud compute addresses create to reserve 10.10.10.10 as a static external IP
and assign it to the Citrix Licensing Server VM Instance.

(Incorrect)

Use gcloud compute addresses create to reserve 10.10.10.10 as a static internal IP
and assign it to the Citrix Licensing Server VM Instance.

(Correct)

Deploy the Citrix Licensing Server on a Google Compute Engine instance with an
ephemeral IP address. Once the server is responding to requests, promote the
ephemeral IP address to a static internal IP address.

Deploy the Citrix Licensing Server on a Google Compute Engine instance and set
its ephemeral IP address to 10.10.10.10.

Explanation
Use gcloud compute addresses create to reserve 10.10.10.10 as a static
external IP and assign it to the Citrix Licensing Server VM Instance. is not
right.
The private network range is defined by IETF (Ref: https://tools.ietf.org/html/rfc1918)
and includes 10.0.0.0/8. So all IP Addresses from 10.0.0.0 to 10.255.255.255 belong to
this internal IP range. As the IP of interest 10.10.10.10 falls within this range, it can not
be reserved as a public IP Address.

Deploy the Citrix Licensing Server on a Google Compute Engine instance and
set its ephemeral IP address to 10.10.10.10. is not right.
An ephemeral IP address is the public IP Address assigned to a compute instance. An
ephemeral external IP address is an IP address that doesn't persist beyond the life of the
resource. When you create an instance or forwarding rule without specifying an IP
address, the resource is automatically assigned an ephemeral external IP address.
Ref: https://cloud.google.com/compute/docs/ip-addresses#ephemeraladdress
The private network range is defined by IETF (Ref: https://tools.ietf.org/html/rfc1918)
and includes 10.0.0.0/8. So all IP Addresses from 10.0.0.0 to 10.255.255.255 belong to
this internal IP range. As the IP of interest 10.10.10.10 falls within this range, it can not
be used as a public IP Address (ephemeral IP is public).

Deploy the Citrix Licensing Server on a Google Compute Engine instance with
an ephemeral IP address. Once the server is responding to requests, promote
the ephemeral IP address to a static internal IP address. is not right.
When a compute instance is started with public IP, it gets an ephemeral IP address. An
ephemeral external IP address is an IP address that doesn't persist beyond the life of the
resource.
Ref: https://cloud.google.com/compute/docs/ip-addresses#ephemeraladdress
You can promote this ephemeral address into a Static IP address, but this will be an
external IP address and not an internal one.
Ref: https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-
address#promote_ephemeral_ip

Use gcloud compute addresses create to reserve 10.10.10.10 as a static
internal IP and assign it to the Citrix Licensing Server VM Instance. is the right
answer.
This option lets us reserve IP 10.10.10.10 as a static internal IP address because it falls
within the standard IP Address range as defined by IETF
(Ref: https://tools.ietf.org/html/rfc1918). 10.0.0.0/8 is one of the allowed ranges, so all IP
Addresses from 10.0.0.0 to 10.255.255.255 belong to this internal IP range. Since we can
now reserve this IP Address as a static internal IP address, it can be assigned to the
licensing server in the VPC so that the application can reach the licensing server.
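
A hedged sketch (the region, zone, subnet and instance names are assumptions):

# Reserve 10.10.10.10 as a static internal IP address in the subnet
gcloud compute addresses create citrix-license-ip \
  --region us-central1 --subnet default --addresses 10.10.10.10
# Create the licensing server VM with that internal address
gcloud compute instances create citrix-licensing-server \
  --zone us-central1-a --subnet default --private-network-ip 10.10.10.10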

Question 16: Correct


You have an application deployed in GKE Cluster as a kubernetes workload with
Daemon Sets. Your application has become very popular and is now struggling to
cope up with increased traffic. You want to add more pods to your workload and
want to ensure your cluster scales up and scales down automatically based on
volume. What should you do?

Enable Horizontal Pod Autoscaling for the kubernetes deployment.

Enable autoscaling on Kubernetes Engine.

(Correct)

Create another identical kubernetes workload and split traffic between the two
workloads.

Perform a rolling update to modify machine type from n1-standard-2 to n1-standard-4.

Explanation
Enable Horizontal Pod Autoscaling for the Kubernetes deployment. is not right.
Horizontal Pod Autoscaling cannot be enabled for DaemonSets because there is only
one instance of a pod per node in the cluster. In a replica deployment, when
Horizontal Pod Autoscaling scales up, it can add pods to the same node or another
node within the cluster. Since there can only be one pod per node in a DaemonSet
workload, Horizontal Pod Autoscaling is not supported with DaemonSets.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset

Create another identical kubernetes workload and split traffic between the two
workloads. is not right.
Creating another identical workload doubles your resource usage and costs; at the
same time, there is no guarantee that this is enough to handle all the traffic. Finally, it
doesn't satisfy our requirement of "cluster scales up and scales down automatically".

Perform a rolling update to modify machine type from n1-standard-2 to n1-
standard-4. is not right.
While increasing the machine type from n1-standard-2 to n1-standard-4 gives the
existing nodes more resources and processing power, we don't know if that would be
enough to handle the increased volume of traffic. Also, it doesn't satisfy our
requirement of "cluster scales up and scales down automatically"
Ref: https://cloud.google.com/compute/docs/machine-types

Enable autoscaling on Kubernetes Engine. is the right answer.


GKE's cluster autoscaler automatically resizes the number of nodes in a given node pool,
based on the demands of your workloads. As you add nodes to a node pool,
DaemonSets automatically add Pods to the new nodes as needed. DaemonSets attempt
to adhere to a one-Pod-per-node model.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler
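
A minimal sketch of enabling the cluster autoscaler on an existing node pool (the cluster name, node pool name, zone and node limits are assumptions):

# Enable node autoscaling on the cluster's node pool
gcloud container clusters update my-gke-cluster --zone us-central1-a \
  --enable-autoscaling --node-pool default-pool --min-nodes 1 --max-nodes 10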

Question 17: Correct


To facilitate disaster recovery, your company wants to save database backup tar
files in Cloud Storage bucket. You want to minimize the cost. Which GCP Cloud
Storage class should you use?

Use Multi-Regional Storage Class.

Use Coldline Storage Class.

(Correct)

Use Nearline Storage Class.

Use Regional Storage Class.

Explanation
The ideal answer to this would have been Archive Storage, but that is not one of the
options. Archive Storage is the lowest-cost, highly durable storage service for data
archiving, online backup, and disaster recovery. Your data is available within
milliseconds, not hours or days. https://cloud.google.com/storage/docs/storage-
classes#archive

In the absence of Archive Storage Class, Use Coldline Storage Class is the right
answer.

Coldline Storage Class is a very-low-cost, highly durable storage service for storing
infrequently accessed data. Coldline Storage is a better choice than Standard Storage or
Nearline Storage in scenarios where slightly lower availability, a 90-day minimum
storage duration, and higher costs for data access are acceptable trade-offs for lowered
at-rest storage costs. Coldline Storage is ideal for data you plan to read or modify at
most once a quarter.
Ref: https://cloud.google.com/storage/docs/storage-classes#coldline

Although Nearline, Regional and Multi-Regional can also be used to store the backups,
they are expensive in comparison, and Google recommends we use Coldline for
backups.
More information about Nearline: https://cloud.google.com/storage/docs/storage-
classes#nearline
More information about
Standard/Regional: https://cloud.google.com/storage/docs/storage-classes#standard
More information about Standard/Multi-
Regional: https://cloud.google.com/storage/docs/storage-classes#standard

Question 18: Correct


You have a collection of audio/video files over 80GB each that you need to
migrate to Google Cloud Storage. The files are in your on-premises data center.
What migration method can you use to help speed up the transfer process?

Start a recursive upload.

Use multithreaded uploads using the -m option.

Use the Cloud Transfer Service to transfer.


Use parallel uploads to break the file into smaller chunks then transfer it
simultaneously.

(Correct)

Explanation
Use parallel uploads to break the file into smaller chunks then transfer it
simultaneously. is the right answer.
With cloud storage, Object composition can be used for uploading an object in parallel:
you can divide your data into multiple chunks, upload each chunk to a distinct object in
parallel, compose your final object, and delete any temporary source objects. This helps
maximize your bandwidth usage and ensures the file is uploaded as fast as possible.
Ref: https://cloud.google.com/storage/docs/composite-objects#uploads
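
With gsutil this behaviour can be switched on per command, for example (the file and bucket names are assumptions):

# Upload a large file in parallel chunks that are composed into a single object
gsutil -o GSUtil:parallel_composite_upload_threshold=150M \
  cp cctv-recording.mp4 gs://media-archive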

Use multithreaded uploads using the -m option. is not right.


Using the -m option lets you upload multiple files at the same time, but in our case, the
individual files are over 80GB each. The best upload speed can be achieved by breaking
the file into smaller chunks and transferring it simultaneously.

Use the Cloud Transfer Service to transfer. is not right.


Cloud Transfer Service is used for transferring massive amounts of data (in the range of
petabytes) to the cloud. While nothing stops us from using Cloud Transfer Service to
upload our files, it would be overkill and very expensive.
Ref: https://cloud.google.com/products/data-transfer

Start a recursive upload. is not right.

A recursive upload (for example, gsutil cp -r) simply copies directories and their
contents; it does nothing to speed up the transfer of individual large files.

Question 19: Correct


Your organization processes a very high volume of timestamped IoT data. The
total volume can be several petabytes. The data needs to be written and changed
at a high speed. You want to use the most performant storage option for your
data. Which product should you use?

Cloud Datastore


BigQuery

Cloud Storage

Cloud Bigtable

(Correct)

Explanation
Our requirement is to write/update a very high volume of data at a high speed.
Performance is our primary concern, not cost.

Cloud Bigtable is the right answer.


Cloud Bigtable is Google's flagship product for ingesting and analyzing large volumes of
time-series data from sensors in real time, matching the high speeds of IoT data to track
normal and abnormal behavior.
Ref: https://cloud.google.com/bigtable/

While all other options are capable of storing high volumes of the order of petabytes,
they are not as efficient as Bigtable at processing IoT time-series data.

Question 20: Correct


Your company’s auditors carry out an annual audit every year and have asked you
to provide them with all the IAM policy changes in Google Cloud since the last
audit. You want to streamline and expedite the analysis for audit. How should you
share the information requested by auditors?

Export all audit logs to a Google Cloud Storage bucket and set up the necessary IAM
access to restrict the data shared with auditors.

Export all audit logs to BigQuery dataset. Make use of ACLs and views to restrict
the data shared with the auditors. Have the auditors query the required
information quickly.
(Correct)

Export all audit logs to Cloud Pub/Sub via an export sink. Use a Cloud Function to
read the messages and store them in Cloud SQL. Make use of ACLs and views to
restrict the data shared with the auditors.

Configure alerts in Cloud Monitoring and trigger notifications to the auditors.

Explanation
Configure alerts in Cloud Monitoring and trigger notifications to the
auditors. is not right.
Stackdriver Alerting gives timely awareness to problems in your cloud applications so
you can resolve the problems quickly. Sending alerts to your auditor is not of much use
during audits.
Ref: https://cloud.google.com/monitoring/alerts

Export all audit logs to Cloud Pub/Sub via an export sink. Use a Cloud
Function to read the messages and store them in Cloud SQL. Make use of ACLs
and views to restrict the data shared with the auditors. is not right.
Using Cloud Functions to transfer log entries to Google Cloud SQL is expensive in
comparison to the audit logs export feature, which exports logs to various destinations
with minimal configuration.
Ref: https://cloud.google.com/logging/docs/export/
Auditors spend a lot of time reviewing log messages, and you want to expedite the
audit process, so you want to make it as easy as possible for the auditors to extract the
required information from the logs.

Between the two remaining options, the only difference is the log export sink
destination.
Ref: https://cloud.google.com/logging/docs/export/

One option exports to a Google Cloud Storage (GCS) bucket whereas the other exports
to BigQuery. Querying information out of files in a bucket is much harder than querying
a BigQuery dataset, where extracting just the required information, in the format
required, is as simple as running a job or set of jobs. By enabling the auditors to run
queries in BigQuery, you streamline the log extraction process, and the auditors can
review the extracted logs much quicker. So Export all audit logs to BigQuery
dataset. Make use of ACLs and views to restrict the data shared with the
auditors. Have the auditors query the required information quickly. is the
right answer.

You need to configure log sinks before you can receive any logs, and you can’t
retroactively export logs that were written before the sink was created.
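
A hedged sketch of such an export sink (the project, dataset and filter are assumptions; IAM policy changes appear as SetIamPolicy entries in the Admin Activity audit log):

# Export IAM policy change audit logs into a BigQuery dataset for the auditors to query
gcloud logging sinks create iam-audit-sink \
  bigquery.googleapis.com/projects/my-project/datasets/iam_audit_logs \
  --log-filter='logName:"cloudaudit.googleapis.com%2Factivity" AND protoPayload.methodName:"SetIamPolicy"'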

Question 21: Correct


Your company runs all its applications in us-central1 region in a single GCP project
and single VPC. The company has recently expanded its operations to Europe, but
customers in the EU are complaining about slowness accessing the application.
Your manager has requested you to deploy a new instance in the same project in
europe-west1 region to reduce latency to the EU customers. The newly deployed
VM needs to reach a central Citrix Licensing Server in us-central1. How should
you design the network and firewall rules while adhering to Google
Recommended practices?

Deploy the VM in a new subnet in europe-west1 region in a new VPC. Peer the two
VPCs and have the VM contact the Citrix Licensing Server on its internal IP
Address.

Deploy the VM in a new subnet in europe-west1 region in a new VPC. Set up an
HTTP(s) Load Balancer for the Citrix Licensing Server and have the VM contact the
Citrix Licensing Server through the Load Balancer’s public address.

Deploy the VM in a new subnet in europe-west1 region in the existing VPC. Have
the VM contact the Citrix Licensing Server on its internal IP Address.

(Correct)


Deploy the VM in a new subnet in europe-west1 region in the existing VPC. Peer
the two subnets using Cloud VPN. Have the VM contact the Citrix Licensing Server
on its internal IP Address.

Explanation
Our requirements are to connect the instance in europe-west1 region with the
application running in us-central1 region following Google-recommended practices.
The two instances are in the same project.

Deploy the VM in a new subnet in europe-west1 region in a new VPC. Set up an
HTTP(s) Load Balancer for the Citrix Licensing Server and have the VM contact
the Citrix Licensing Server through the Load Balancer’s public address. is not
right.
We would have two different VPCs with no communication route between them.
Exposing the Citrix Licensing Server, an internal-only service, through a load balancer's
public address unnecessarily opens it to the internet and adds complexity, which goes
against the requirement to minimize changes. An internal load balancer would avoid the
public exposure, but it is not visible outside its VPC, so the new instance in the other
VPC still could not reach it.
Ref: https://cloud.google.com/load-balancing/docs/internal

Deploy the VM in a new subnet in europe-west1 region in the existing VPC.
Peer the two subnets using Cloud VPN. Have the VM contact the Citrix
Licensing Server on its internal IP Address. is not right.
Cloud VPN securely connects your on-premises network to your Google Cloud (GCP)
Virtual Private Cloud (VPC) network through an IPsec VPN connection. It is not meant to
connect two subnets within the same VPC. Moreover, subnets within the same VPC can
communicate with each other by setting up relevant firewall rules.

Deploy the VM in a new subnet in europe-west1 region in a new VPC. Peer the
two VPCs and have the VM contact the Citrix Licensing Server on its internal
IP Address. is not right.
Given that the new instance wants to access the application on the existing compute
engine instance, these applications seem to be related so they should be within the
same VPC. This option does not mention how the VPC networks are created and what
the subnet range is.

You can't connect two auto mode VPC networks using VPC Network Peering because
their subnets use identical primary IP ranges. We don't know how the VPCs were
created.
There are several restrictions based on the subnet ranges.

https://cloud.google.com/vpc/docs/vpc-peering#restrictions
Even if we assume the above restrictions don’t apply and peering is possible, this
is still a lot of additional work, and we can simplify things by choosing the option below
(which is the answer).

Deploy the VM in a new subnet in europe-west1 region in the existing VPC.
Have the VM contact the Citrix Licensing Server on its internal IP
Address. is the right answer.
We can create another subnet in the same VPC, and this subnet is located in europe-
west1. We can then spin up a new instance in this subnet. We also have to set up a
firewall rule to allow communication between the two subnets. All instances in the two
subnets with the same VPC can communicate through the internal IP Address.
Ref: https://cloud.google.com/vpc
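
A minimal sketch (the VPC name, CIDR range and licensing port are assumptions):

# Add a europe-west1 subnet to the existing VPC
gcloud compute networks subnets create europe-subnet \
  --network my-vpc --region europe-west1 --range 10.200.0.0/20
# Allow the new subnet to reach the Citrix Licensing Server over its internal IP
gcloud compute firewall-rules create allow-licensing-from-eu \
  --network my-vpc --allow tcp:27000 --source-ranges 10.200.0.0/20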

Question 22: Incorrect


You host a production application in Google Compute Engine in us-central1-a
zone. Your application needs to be available 24/7 all through the year. The
application suffered an outage recently due to a Compute Engine outage in the
zone hosting your application. Your application is also susceptible to slowness
during peak usage. You have been asked for a recommendation on how to modify
the infrastructure to implement a cost-effective and scalable solution that can
withstand zone failures. What would you recommend?

Use Managed instance groups with instances in a single zone. Enable Autoscaling
on the Managed instance group.

(Incorrect)

Use Managed instance groups across multiple zones. Enable Autoscaling on the
Managed instance group.

(Correct)


Use Managed instance groups with preemptible instances across multiple zones.
Enable Autoscaling on the Managed instance group.

Use Unmanaged instance groups across multiple zones. Enable Autoscaling on the
Unmanaged instance group.

Explanation
Use Managed instance groups with preemptible instances across multiple
zones. Enable Autoscaling on the Managed instance group. is not right.
A preemptible VM runs at a much lower price than normal instances and is cost-
effective. However, Compute Engine might terminate (preempt) these instances if it
requires access to those resources for other tasks. Preemptible instances are not suitable
for production applications that need to be available 24*7.
Ref: https://cloud.google.com/compute/docs/instances/preemptible

Use Unmanaged instance groups across multiple zones. Enable Autoscaling on the Unmanaged instance group. is not right.
Unmanaged instance groups do not autoscale. An unmanaged instance group is simply
a collection of virtual machines (VMs) that reside in a single zone, VPC network, and
subnet. An unmanaged instance group is useful for grouping together VMs that require
individual configuration settings or tuning.
Ref: https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-
unmanaged-instances

Use Managed instance groups with instances in a single zone. Enable Autoscaling on the Managed instance group. is not right.
While enabling auto-scaling is a good idea, autoscaling would spin up instances in the
same zone. Should there be a zone failure, all instances of the managed instance group
would be unreachable and cause the application to be unreachable. Google
recommends you distribute your resources across multiple zones to tolerate outages.
Ref: https://cloud.google.com/compute/docs/regions-zones

Use Managed instance groups across multiple zones. Enable Autoscaling on the
Managed instance group. is the right answer.
Distribute your resources across multiple zones and regions to tolerate outages. Google
designs zones to be independent of each other: a zone usually has power, cooling,
networking, and control planes that are isolated from other zones, and most single
failure events will affect only a single zone. Thus, if a zone becomes unavailable, you can
transfer traffic to another zone in the same region to keep your services running.
Ref: https://cloud.google.com/compute/docs/regions-zones
In addition, a managed instance group (MIG) offers auto-scaling capabilities that let you automatically add or delete instances from the group based on increases or decreases in load. Autoscaling helps your apps gracefully handle
increases in traffic and reduce costs when the need for resources is lower. Autoscaling
works by adding more instances to your instance group when there is more load
(upscaling), and deleting instances when the need for instances is lowered
(downscaling).
Ref: https://cloud.google.com/compute/docs/autoscaler/
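As an illustrative sketch (the instance template name, group name, region, and thresholds are assumptions), a regional MIG that spreads instances across multiple zones and autoscales could be set up as follows:

# Create a regional MIG so instances are distributed across zones in us-central1
gcloud compute instance-groups managed create web-mig \
    --template=web-template --region=us-central1 --size=3

# Autoscale between 3 and 10 instances based on average CPU utilization
gcloud compute instance-groups managed set-autoscaling web-mig \
    --region=us-central1 --min-num-replicas=3 --max-num-replicas=10 \
    --target-cpu-utilization=0.6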

Question 23: Correct


Your company has deployed a wide range of applications across several Google Cloud projects in the organization. You are a security engineer within the Cloud Security team, and an apprentice has recently joined your team. To gain a better understanding of your company’s Google Cloud estate, the apprentice has asked you to provide them with access that gives detailed visibility of all projects in the organization. Your manager has approved the request but has asked you to ensure the access does not give them edit/write access to any resources. Which IAM roles should you assign to the apprentice?

Grant roles/owner and roles/networkmanagement.admin.

Grant roles/resourcemanager.organizationViewer and roles/owner.

Grant roles/resourcemanager.organizationAdmin and roles/browser.

Grant roles/resourcemanager.organizationViewer and roles/viewer.

(Correct)

Explanation
The security team needs detailed visibility of all GCP projects in the organization so they
should be able to view all the projects in the organization as well as view all resources
within these projects.

Grant roles/resourcemanager.organizationViewer and roles/owner. is not right.


roles/resourcemanager.organizationViewer role provides permissions to see the
organization in the Cloud Console without having access to view all resources in the
organization.
roles/owner provides permissions to manage roles and permissions for a project and all
resources within the project and set up billing for a project.
Neither of the roles gives the security team visibility of the projects in the organization.
Ref: https://cloud.google.com/resource-manager/docs/access-control-org
Ref: https://cloud.google.com/iam/docs/understanding-roles#organization-policy-roles

Grant roles/resourcemanager.organizationAdmin and roles/browser. is not right.


roles/resourcemanager.organizationAdmin provides access to administer all resources
belonging to the organization and goes against the least privilege principle. Our security
team needs detailed visibility, i.e. read-only access but should not be able to administer
resources.
Ref: https://cloud.google.com/iam/docs/understanding-roles#organization-policy-roles

Grant roles/owner and roles/networkmanagement.admin. is not right.


roles/owner provides permissions to manage roles and permissions for a project and all
resources within the project and set up billing for a project.
roles/networkmanagement.admin provides full access to Cloud Network Management
resources.
Neither of the roles gives the security team visibility of the projects in the organization.
Ref: https://cloud.google.com/resource-manager/docs/access-control-org
Ref: https://cloud.google.com/iam/docs/understanding-roles#organization-policy-roles

Grant roles/resourcemanager.organizationViewer and roles/viewer. is the right answer.
roles/viewer provides permissions to view existing resources or data.
roles/resourcemanager.organizationViewer provides access to view an organization.
With the two roles, the security team can view the organization, including all the
projects and folders; as well as view all the resources within the projects.
Ref: https://cloud.google.com/resource-manager/docs/access-control-org
Ref: https://cloud.google.com/iam/docs/understanding-roles#organization-policy-roles
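A minimal sketch of granting these roles at the organization level so they apply to every project beneath it (the organization ID and the apprentice's email are placeholders):

# Grant organization-wide visibility of the resource hierarchy
gcloud organizations add-iam-policy-binding 123456789012 \
    --member="user:apprentice@example.com" \
    --role="roles/resourcemanager.organizationViewer"

# Grant read-only access to resources in all projects under the organization
gcloud organizations add-iam-policy-binding 123456789012 \
    --member="user:apprentice@example.com" \
    --role="roles/viewer"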

Question 24: Correct


You are designing a mobile game which you hope will be used by numerous users
around the world. The game backend requires a Relational DataBase Management
System (RDBMS) for persisting game state and player profiles. You want to select
a database that can scale to a global audience with minimal configuration updates.
Which database should you choose?

Cloud Firestore.

Cloud SQL.

Cloud Spanner.

(Correct)

Cloud Datastore.

Explanation
Our requirements are: relational data, a global user base, and scaling with minimal configuration updates.

Cloud Firestore. is not right.


Cloud Firestore is not a relational database. Cloud Firestore is a flexible, scalable
database for mobile, web, and server development from Firebase and Google Cloud
Platform.
Ref: https://firebase.google.com/docs/firestore

Cloud Datastore. is not right.


Cloud Datastore is not a relational database. Datastore is a NoSQL document database
built for automatic scaling, high performance, and ease of application development
Ref: https://cloud.google.com/datastore/docs/concepts/overview

Cloud SQL. is not right.


While Cloud SQL is a relational database, it does not offer infinite automated scaling
with minimum configuration changes. Cloud SQL is a fully-managed database service
that makes it easy to set up, maintain, manage, and administer your relational databases
on Google Cloud Platform
Ref: https://cloud.google.com/sql/docs

Cloud Spanner. is the right answer.


Cloud Spanner is a relational database and is highly scalable. Cloud Spanner is a highly
scalable, enterprise-grade, globally-distributed, and strongly consistent database service
built for the cloud specifically to combine the benefits of relational database structure
with a non-relational horizontal scale. This combination delivers high-performance
transactions and strong consistency across rows, regions, and continents with an
industry-leading 99.999% availability SLA, no planned downtime, and enterprise-grade
security
Ref: https://cloud.google.com/spanner
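As a hedged illustration (the instance name, database name, node count, and the multi-region configuration are assumptions), a globally distributed Cloud Spanner instance could be provisioned as follows:

# Create a multi-region Cloud Spanner instance for a global player base
gcloud spanner instances create game-backend \
    --config=nam-eur-asia1 --description="Game state and player profiles" --nodes=3

# Create the database that will hold game state and player profiles
gcloud spanner databases create game-db --instance=game-backend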

Question 25: Correct


Your company plans to migrate all applications from its on-premises data centre
to Google Cloud Platform. The DevOps team currently use Jenkins extensively to
automate configuration updates in applications. How should you provision Jenkins
in Google Cloud with the least number of steps?

Download Jenkins binary from https://www.jenkins.io/download/ and deploy in Google App Engine Standard Service.

Download Jenkins binary from https://www.jenkins.io/download/ and deploy in a new Google Compute Engine instance.

Create a Kubernetes Deployment YAML file referencing the Jenkins docker image
and deploy to a new GKE cluster.

Provision Jenkins from GCP marketplace.

(Correct)
Explanation
Download Jenkins binary from https://www.jenkins.io/download/ and deploy in a new Google Compute Engine instance. is not right.
While this can be done, it involves a lot more work (creating the instance, installing Java, then installing and configuring Jenkins) than provisioning Jenkins from the GCP Marketplace.

Create a Kubernetes Deployment YAML file referencing the Jenkins docker image and deploy to a new GKE cluster. is not right.
While this can be done, creating a GKE cluster and authoring the deployment manifest involves a lot more work than provisioning Jenkins from the GCP Marketplace.

Download Jenkins binary from https://www.jenkins.io/download/ and deploy in Google App Engine Standard Service. is not right.
While this is possible, we need to ensure App Engine is enabled, then download the Java project/WAR and run gcloud app deploy to set up a Jenkins server.
This option involves more steps than spinning up an instance from GCP Marketplace.
Ref: https://cloud.google.com/appengine/docs/standard/java/tools/uploadinganapp
Ref: https://cloud.google.com/solutions/using-jenkins-for-distributed-builds-on-
compute-engine

Provision Jenkins from GCP marketplace. is the right answer.


The simplest way to launch a Jenkins server is from the GCP Marketplace, which has several builds available for Jenkins: https://console.cloud.google.com/marketplace/browse?q=jenkins.

All you need to do is spin up an instance from a suitable Marketplace build, and you have a Jenkins server up and running in a few minutes with just a few clicks.

Question 26: Correct


Your company has a number of GCP projects that are managed by the respective
project teams. Your expenditure of all GCP projects combined has exceeded your
operational expenditure budget. At a review meeting, it has been agreed that your
finance team should be able to set budgets and view the current charges for all
projects in the organization but not view the project resources; and your
developers should be able to see the Google Cloud Platform billing charges for
only their own projects as well as view resources within the project. You want to
follow Google recommended practices to set up IAM roles and permissions. What
should you do?


Add the developers and finance managers to the Viewer role for the Project.

Add the finance team to the Viewer role for the Project. Add the developers to the
Security Reviewer role for each of the billing accounts.

Add the finance team to the default IAM Owner role. Add the developers to a
custom role that allows them to see their own spend only.

Add the finance team to the Billing Administrator role for each of the billing
accounts that they need to manage. Add the developers to the Viewer role for the
Project.

(Correct)

Explanation
Add the finance team to the default IAM Owner role. Add the developers to a
custom role that allows them to see their own spend only. is not right.
Granting your finance team the default IAM Owner role gives them permissions to manage roles and permissions for a project, which they could subsequently use to grant themselves permissions to view/edit resources in all projects. This is against our requirements. Also, while you can write a custom role that lets developers view their project spend, such a role would not give them the permissions needed to view project resources.
Ref: https://cloud.google.com/iam/docs/understanding-roles#primitive_roles

Add the developers and finance managers to the Viewer role for the
Project. is not right.
Granting your finance team the Project viewer role lets them view resources in all
projects and doesn’t let them set budgets - both are against our requirements.
Ref: https://cloud.google.com/iam/docs/understanding-roles#primitive_roles

Add the finance team to the Viewer role for the Project. Add the developers to the Security Reviewer role for each of the billing accounts. is not right.
Granting your finance team the Project viewer role lets them view resources in all
projects which is against our requirements. Also, the security Reviewer role enables the
developers to view custom roles but doesn’t let them view the project's costs or project
resources.
Ref: https://cloud.google.com/iam/docs/understanding-roles#primitive_roles

Add the finance team to the Billing Administrator role for each of the
billing accounts that they need to manage. Add the developers to the Viewer
role for the Project. is the right answer.
Billing Account Administrator role is an owner role for a billing account. It provides
permissions to manage payment instruments, configure billing exports, view cost
information, set budgets, link and unlink projects and manage other user roles on the
billing account.
Ref: https://cloud.google.com/billing/docs/how-to/billing-access
Project viewer role provides permissions for read-only actions that do not affect the
state, such as viewing (but not modifying) existing resources or data; including viewing
the billing charges for the project.
Ref: https://cloud.google.com/iam/docs/understanding-roles#primitive_roles
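A rough sketch of how these grants could look, assuming your gcloud release includes the beta billing commands (the billing account ID, project ID, and group emails are placeholders):

# Give the finance team Billing Account Administrator on the billing account
gcloud beta billing accounts add-iam-policy-binding 0X0X0X-0X0X0X-0X0X0X \
    --member="group:finance@example.com" --role="roles/billing.admin"

# Give the developers read-only access (including billing charges) on their own project
gcloud projects add-iam-policy-binding my-dev-project \
    --member="group:developers@example.com" --role="roles/viewer"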

Question 27: Correct


A mission-critical application running on a Managed Instance Group (MIG) in
Google Cloud has been having scaling issues. Although the scaling works, it is not
quick enough, and users experience slow response times. The solution architect
has recommended moving to GKE to achieve faster scaling and optimize machine
resource utilization. Your colleague containerized the application and provided
you with a Dockerfile. You now need to deploy this in a GKE cluster. How should
you do it?

Deploy the application using kubectl app deploy {Dockerfile}.

Build a container image from the Dockerfile and push it to Google Cloud Storage (GCS).
Create a Kubernetes Deployment YAML file and have it use the image from GCS. Use
kubectl apply -f {deployment.YAML} to deploy the application to the GKE cluster.

Deploy the application using gcloud app deploy {Dockerfile}.


Build a container image from the Dockerfile and push it to Google Container Registry
(GCR). Create a Kubernetes Deployment YAML file and have it use the image from GCR.
Use kubectl apply -f {deployment.YAML} to deploy the application to the GKE cluster.

(Correct)

Explanation
Deploy the application using kubectl app deploy {Dockerfile}. is not right.
kubectl does not accept app as a verb, and it cannot consume a Dockerfile directly. kubectl can deploy a configuration file (manifest) using kubectl apply -f.
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply

Deploy the application using gcloud app deploy {Dockerfile}. is not right.
gcloud app deploy - Deploys the local code and/or configuration of your app to App
Engine. gcloud app deploy accepts a flag --image-url which is the docker image, but it
can't directly use a docker file.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/deploy

Build a container image from the Dockerfile and push it to Google Cloud
Storage (GCS). Create a Kubernetes Deployment YAML file and have it use the
image from GCS. Use kubectl apply -f {deployment.YAML} to deploy the
application to the GKE cluster. is not right.
Cloud Storage cannot act as a container image registry. Container images must be pushed to a container registry (e.g. GCR, Docker Hub, etc.) for GKE to pull them.
Ref: https://cloud.google.com/container-registry/docs/pushing-and-pulling

Build a container image from the Dockerfile and push it to Google Container
Registry (GCR). Create a Kubernetes Deployment YAML file and have it use the
image from GCR. Use kubectl apply -f {deployment.YAML} to deploy the
application to the GKE cluster. is the right answer.
Once you have a docker image, you can push it to the container registry. You can then create a deployment YAML file pointing to this image and use kubectl apply -f {deployment YAML filename} to deploy it to the Kubernetes cluster. This assumes you already have a Kubernetes cluster and that your kubectl context is set up to talk to this cluster by executing gcloud container clusters get-credentials {cluster name} --zone={container_zone}.
Ref: https://cloud.google.com/container-registry/docs/pushing-and-pulling
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials
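A minimal sketch of the full workflow (the project ID, image name, cluster name, zone, and manifest filename are assumptions):

# Build the image from the provided Dockerfile and push it to GCR
docker build -t gcr.io/my-project/my-app:v1 .
docker push gcr.io/my-project/my-app:v1

# Point kubectl at the GKE cluster and apply the deployment manifest referencing the image
gcloud container clusters get-credentials my-cluster --zone=us-central1-a
kubectl apply -f deployment.yaml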
Question 28: Incorrect
The deployment team currently spends a lot of time creating and configuring VMs
in Google Cloud Console, and feel they could be more productive and consistent if
the same can be automated using Infrastructure as Code. You want to help them
identify a suitable service. What should you recommend?

Managed Instance Group (MIG).

(Incorrect)

Deployment Manager.

(Correct)

Cloud Build.

Unmanaged Instance Group.

Explanation
Unmanaged Instance Group. is not right.
Unmanaged instance groups let you load balance across a fleet of VMs that you
manage yourself. But it doesn't help with dynamically provisioning VMs.
Ref: https://cloud.google.com/compute/docs/instance-
groups#unmanaged_instance_groups

Cloud Build. is not right.


Cloud Build is Google Cloud's serverless CI/CD platform for building, testing, and deploying software. It is not an Infrastructure as Code tool and can't be used to declaratively provision VMs.
Ref: https://cloud.google.com/cloud-build

Managed Instance Group (MIG). is not right.


Managed instance groups (MIGs) let you operate apps on multiple identical VMs. You
can make your workloads scalable and highly available by taking advantage of
automated MIG services, including autoscaling, autohealing, regional (multiple zones)
deployment, and automatic updating. While a MIG dynamically provisions virtual machines based on a scaling policy, it does not satisfy our requirement of defining the infrastructure in configuration files (Infrastructure as Code).
Ref: https://cloud.google.com/compute/docs/instance-
groups#managed_instance_groups

Deployment Manager. is the right answer.


Google Cloud Deployment Manager allows you to specify all the resources needed for
your application in a declarative format using YAML. You can also use Python or Jinja2
templates to parameterize the configuration and allow reuse of common deployment
paradigms such as a load-balanced, auto-scaled instance group. You can deploy many
resources at one time, in parallel. Using Deployment Manager, you can apply a Python/Jinja2 template to create a MIG/auto-scaling policy that dynamically provisions VMs, and the requirement of defining the infrastructure in configuration files (Infrastructure as Code) is also met. Using Deployment Manager for provisioning results in a repeatable deployment process. By
creating configuration files that define the resources, the process of creating those
resources can be repeated over and over with consistent results. Google recommends
we script our infrastructure and deploy using Deployment Manager.
Ref: https://cloud.google.com/deployment-manager
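As a hedged sketch (the deployment name, file name, zone, machine type, and the my-project placeholder are assumptions), a minimal Deployment Manager configuration for a single VM and the command to deploy it could look like this:

# Write a minimal Deployment Manager config describing one VM
cat > vm-config.yaml <<'EOF'
resources:
- name: web-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/machineTypes/e2-medium
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: https://www.googleapis.com/compute/v1/projects/my-project/global/networks/default
EOF

# Create (and later repeatably re-create) the resources described in the file
gcloud deployment-manager deployments create dev-vms --config vm-config.yaml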

Question 29: Correct


You want to reduce storage costs for infrequently accessed data. The data will still
be accessed approximately once a month and data older than 2 years is no longer
needed. What should you do to reduce storage costs? (Select 2)

Set an Object Lifecycle Management policy to delete data older than 2 years.

(Correct)

Store infrequently accessed data in a Nearline bucket.

(Correct)

Store infrequently accessed data in a Multi-Regional bucket.


Set an Object Lifecycle Management policy to change the storage class to Archive
for data older than 2 years.

Set an Object Lifecycle Management policy to change the storage class to Coldline
for data older than 2 years.

Explanation
Set an Object Lifecycle Management policy to change the storage class to
Coldline for data older than 2 years. is not right.
Data older than 2 years is not needed so there is no point in transitioning the data to
Coldline. The data needs to be deleted.

Set an Object Lifecycle Management policy to change the storage class to Archive for data older than 2 years. is not right.
Data older than 2 years is not needed so there is no point in transitioning the data to
Archive. The data needs to be deleted.

Store infrequently accessed data in a Multi-Regional bucket. is not right.


While infrequently accessed data can be stored in a Multi-Regional bucket, there are
several other storage classes offered by Google Cloud Storage that are primarily aimed
at storing infrequently accessed data and cost less. Multi-Region buckets are primarily
used for achieving geo-redundancy.
Ref: https://cloud.google.com/storage/docs/locations

Set an Object Lifecycle Management policy to delete data older than 2 years. is the right answer.
Since you don't need data older than 2 years, deleting such data is the right approach.
You can set a lifecycle policy to automatically delete objects older than 2 years. The
policy is valid on current as well as future objects and doesn't need any human
intervention.
Ref: https://cloud.google.com/storage/docs/lifecycle

Store infrequently accessed data in a Nearline bucket. is the right answer.


Nearline Storage is a low-cost, highly durable storage service for storing infrequently
accessed data. Nearline Storage is ideal for data you plan to read or modify on average
once per month or less.
Ref: https://cloud.google.com/storage/docs/storage-classes#nearline
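A rough sketch of both steps (the bucket name and location are assumptions; 730 days is used to approximate 2 years):

# Create a Nearline bucket for the infrequently accessed data
gsutil mb -c nearline -l us-central1 gs://my-infrequent-data

# Apply a lifecycle policy that deletes objects older than ~2 years (730 days)
cat > lifecycle.json <<'EOF'
{
  "rule": [
    { "action": { "type": "Delete" }, "condition": { "age": 730 } }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://my-infrequent-data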

Question 30: Incorrect


You want to migrate a mission-critical application from the on-premises data
centre to Google Cloud Platform. Due to the mission-critical nature of the
application, you want to have 3 idle (unoccupied) instances all the time to ensure
the application always has enough resources to handle sudden bursts in traffic.
How should you configure the scaling to meet this requirement?

Start with 3 instances and manually scale as needed.

Enable Basic Scaling and set maximum instances to 3.

Enable Basic Scaling and set minimum instances to 3.

(Incorrect)

Enable Automatic Scaling and set minimum idle instances to 3.

(Correct)

Explanation
Start with 3 instances and manually scale as needed. is not right.
Manual scaling uses resident instances that continuously run the specified number of
instances regardless of the load level. This scaling allows tasks such as complex
initializations and applications that rely on the state of the memory over time. Manual
scaling does not autoscale based on the request rate, so it doesn't fit our requirements.
Ref: https://cloud.google.com/appengine/docs/standard/python/how-instances-are-
managed

Enable Basic Scaling and set minimum instances to 3. is not right.


Basic scaling creates dynamic instances when your application receives requests. Each
instance will be shut down when the app becomes idle. Basic scaling is ideal for work
that is intermittent or driven by user activity. In the absence of any load, the App engine
may shut down all instances, so it is not suitable for our requirement of "at least 3
instances at all times".
Ref: https://cloud.google.com/appengine/docs/standard/python/how-instances-are-
managed

Enable Basic Scaling and set maximum instances to 3. is not right.


Basic scaling creates dynamic instances when your application receives requests. Each
instance will be shut down when the app becomes idle. Basic scaling is ideal for work
that is intermittent or driven by user activity. In the absence of any load, the App engine
may shut down all instances, so it is not suitable for our requirement of "at least 3
instances at all times".
Ref: https://cloud.google.com/appengine/docs/standard/python/how-instances-are-
managed

Enable Automatic Scaling and set minimum idle instances to 3. is the right
answer.
Automatic scaling creates dynamic instances based on request rate, response latencies,
and other application metrics. However, if you specify the number of minimum idle
instances, that specified number of instances run as resident instances while any
additional instances are dynamic.
Ref: https://cloud.google.com/appengine/docs/standard/python/how-instances-are-
managed
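A minimal sketch of the relevant App Engine standard app.yaml fragment (other settings are omitted, and this assumes the application is already deployed with gcloud app deploy):

# Append an automatic_scaling section that keeps 3 idle instances warm at all times
cat >> app.yaml <<'EOF'
automatic_scaling:
  min_idle_instances: 3
EOF

# Redeploy so the new scaling settings take effect
gcloud app deploy app.yaml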

Question 31: Correct


You transitioned an application to your operations team. The lead operations engineer
has asked you to help understand what this lifecycle management rule does. What
should your response be?
{
  "rule":[
    {
      "action":{
        "type":"Delete"
      },
      "condition":{
        "age":60,
        "isLive":false
      }
    },
    {
      "action":{
        "type":"SetStorageClass",
        "storageClass":"NEARLINE"
      },
      "condition":{
        "age":365,
        "matchesStorageClass":"MULTI_REGIONAL"
      }
    }
  ]
}

The lifecycle rule archives current (live) objects older than 60 days and transitions
Multi-regional objects older than 365 days to Nearline storage class.

The lifecycle rule deletes current (live) objects older than 60 days and transitions
Multi-regional objects older than 365 days to Nearline storage class.

The lifecycle rule transitions Multi-regional objects older than 365 days to Nearline
storage class.

The lifecycle rule deletes non-current (archived) objects older than 60 days and
transitions Multi-regional objects older than 365 days to Nearline storage class.

(Correct)

Explanation
The lifecycle rule archives current (live) objects older than 60 days and
transitions Multi-regional objects older than 365 days to Nearline storage
class. is not right.
The action has "type":"Delete" which means we want to Delete, not archive.
Ref: https://cloud.google.com/storage/docs/managing-lifecycles

The lifecycle rule deletes current (live) objects older than 60 days and
transitions Multi-regional objects older than 365 days to Nearline storage
class. is not right.
We want to delete objects as indicated by the action; however, we don't want to delete
all objects older than 60 days. We only want to delete archived objects as indicated by
"isLive":false condition.
Ref: https://cloud.google.com/storage/docs/managing-lifecycles

The lifecycle rule transitions Multi-regional objects older than 365 days to
Nearline storage class. is not right.
The first rule is missing. It deletes archived objects older than 60 days.

The lifecycle rule deletes non-current (archived) objects older than 60 days
and transitions Multi-regional objects older than 365 days to Nearline
storage class. is the right answer.
The first part of the rule: The action has "type":"Delete" which means we want to Delete.
"isLive":false condition means we are looking for objects that are not Live, i.e. objects
that are archived. Together, it means we want to delete archived objects older than 60
days. Note that if an object is deleted, it cannot be undeleted. Take care in setting up
your lifecycle rules so that you do not cause more data to be deleted than you intend.
Ref: https://cloud.google.com/storage/docs/managing-lifecycles
The second part of the rule: The action indicates we want to set storage class to
Nearline. The condition is satisfied if the existing storage class is multi-regional, and the
age of the object is 365 days or over. Together it means we want to set the storage class
to Nearline if existing storage class is multi-regional and the age of the object is 365
days or over.

Question 32: Correct


You want to ensure the boot disk of a preemptible instance is persisted for re-use.
How should you provision the compute engine instance via gcloud to ensure your requirement is met?

gcloud compute instances create [INSTANCE_NAME] --no-auto-delete

gcloud compute instances create [INSTANCE_NAME] --preemptible --no-boot-disk-auto-delete

(Correct)


gcloud compute instances create [INSTANCE_NAME] --preemptible. The flag --boot-disk-auto-delete is disabled by default.

gcloud compute instances create [INSTANCE_NAME] --preemptible --boot-disk-auto-delete=no

Explanation
gcloud compute instances create [INSTANCE_NAME] --preemptible --boot-disk-auto-delete=no. is not right.
gcloud compute instances create does not accept a value such as =no for --boot-disk-auto-delete; it is a boolean flag that is enabled by default and enables automatic deletion of boot disks when the instances are deleted. Use --no-boot-disk-auto-delete to disable it.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create

gcloud compute instances create [INSTANCE_NAME] --preemptible. The flag --boot-disk-auto-delete is disabled by default. is not right.
--boot-disk-auto-delete is enabled by default. It enables automatic deletion of boot
disks when the instances are deleted. Use --no-boot-disk-auto-delete to disable.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create

gcloud compute instances create [INSTANCE_NAME] --no-auto-delete. is not right.


gcloud compute instances create doesn't provide a flag called no-auto-delete
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create

gcloud compute instances create [INSTANCE_NAME] --preemptible --no-boot-disk-auto-delete. is the right answer.
Use --no-boot-disk-auto-delete to disable automatic deletion of boot disks when the
instances are deleted. --boot-disk-auto-delete flag is enabled by default. It enables
automatic deletion of boot disks when the instances are deleted. In order to prevent
automatic deletion, we have to specify --no-boot-disk-auto-delete flag.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
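As an optional illustration (the instance name and zone are assumptions), you could also verify after creation that the boot disk will be kept:

# Create a preemptible VM whose boot disk is kept when the VM is deleted
gcloud compute instances create my-preemptible-vm \
    --zone=us-central1-a --preemptible --no-boot-disk-auto-delete

# autoDelete should report False for the boot disk
gcloud compute instances describe my-preemptible-vm \
    --zone=us-central1-a --format="value(disks[0].autoDelete)"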

Question 33: Incorrect


You want to migrate an XML parser application from the on-premises data centre
to Google Cloud Platform. You created a development project, set up the
necessary IAM roles and deployed the application in a compute engine instance.
The testing has succeeded, and you are ready to deploy the staging instance. You
want to create the same IAM roles in a new staging GCP project. How can you do
this efficiently without compromising security?

Make use of gcloud iam roles copy command to copy the IAM roles from the
Development GCP organization to the Staging GCP organization.

Make use of the Create Role from Role feature in GCP console to create IAM roles
in the Staging project from the Development IAM roles.

(Incorrect)

Make use of gcloud iam roles copy command to copy the IAM roles from the
Development GCP project to the Staging GCP project.

(Correct)

Make use of Create Role feature in GCP console to create all necessary IAM roles
from new in the Staging project.

Explanation
We are required to create the same iam roles in a different (staging) project with the
fewest possible steps.

Make use of the Create Role from Role feature in GCP console to create IAM
roles in the Staging project from the Development IAM roles. is not right.
This option creates a role in the same (development) project, not in the staging project.
So this doesn't meet our requirement to create same iam roles in the staging project.

Make use of Create Role feature in GCP console to create all necessary IAM
roles from new in the Staging project. is not right.
This option works but is not as efficient as copying the roles from development project
to the staging project.
Make use of gcloud iam roles copy command to copy the IAM roles from the
Development GCP organization to the Staging GCP organization. is not right.
We can optionally specify a destination organization but since we require to copy the
roles into "staging project" (i.e. project, not organization), this option does not meet our
requirement to create same iam roles in the staging project.
Ref: https://cloud.google.com/sdk/gcloud/reference/iam/roles/copy

Make use of gcloud iam roles copy command to copy the IAM roles from the
Development GCP project to the Staging GCP project. is the right answer.
This option fits all the requirements. You copy the roles into the destination project
using gcloud iam roles copy and by specifying the staging project destination project.

$ gcloud iam roles copy --source "<<role id to copy>>" --source-project "<<id of development project>>" --destination "<<role id of the copied role in staging project>>" --dest-project "<<id of staging project>>"

Ref: https://cloud.google.com/sdk/gcloud/reference/iam/roles/copy
Question 34: Incorrect
You have been asked to create a new Kubernetes Cluster on Google Kubernetes
Engine that can autoscale the number of worker nodes as well as pods. What
should you do? (Select 2)

Enable Horizontal Pod Autoscaling for the kubernetes deployment.

(Correct)

Create a GKE cluster and enable autoscaling on the instance group of the cluster.

(Incorrect)

Create Compute Engine instances for the workers and the master and install
Kubernetes. Rely on Kubernetes to create additional Compute Engine instances
when needed.


Configure a Compute Engine instance as a worker and add it to an unmanaged
instance group. Add a load balancer to the instance group and rely on the load
balancer to create additional Compute Engine instances when needed.

Create a GKE cluster and enable autoscaling on Kubernetes Engine.

(Correct)

Explanation
Create a GKE cluster and enable autoscaling on the instance group of the
cluster. is not right.
GKE's cluster auto-scaler automatically resizes the number of nodes in a given node
pool, based on the demands of your workloads. However, we should not enable
Compute Engine autoscaling for managed instance groups for the cluster nodes. GKE's
cluster auto-scaler is separate from Compute Engine autoscaling.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler

Configure a Compute Engine instance as a worker and add it to an unmanaged


instance group. Add a load balancer to the instance group and rely on the
load balancer to create additional Compute Engine instances when needed. is
not right.
When using GKE to manage your Kubernetes clusters, you can not add manually created
compute instances to the worker node pool. A node pool is a group of nodes within a
cluster that all have the same configuration. Node pools use a NodeConfig specification.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools
Moreover, Unmanaged instance groups do not autoscale. An unmanaged instance
group is simply a collection of virtual machines (VMs) that reside in a single zone, VPC
network, and subnet. An unmanaged instance group is useful for grouping together
VMs that require individual configuration settings or tuning.
Ref: https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-
unmanaged-instances

Create Compute Engine instances for the workers and the master and install
Kubernetes. Rely on Kubernetes to create additional Compute Engine instances
when needed. is not right.
When using Google Kubernetes Engine, you cannot install the master node separately. The
cluster master runs the Kubernetes control plane processes, including the Kubernetes
API server, scheduler, and core resource controllers. The master's lifecycle is managed by
GKE when you create or delete a cluster.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture
Also, you can not add manually created compute instances to the worker node pool. A
node pool is a group of nodes within a cluster that all have the same configuration.
Node pools use a NodeConfig specification.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools

Create a GKE cluster and enable autoscaling on Kubernetes Engine. is the right
answer.
GKE's cluster autoscaler automatically resizes the number of nodes in a given node pool,
based on the demands of your workloads. You don't need to manually add or remove
nodes or over-provision your node pools. Instead, you specify a minimum and
maximum size for the node pool, and the rest is automatic. When demand is high,
cluster autoscaler adds nodes to the node pool. When demand is low, cluster autoscaler
scales back down to a minimum size that you designate. This can increase the
availability of your workloads when you need it while controlling costs.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler

Enable Horizontal Pod Autoscaling for the kubernetes deployment. is the right
answer.
Horizontal Pod Autoscaler scales up and scales down your Kubernetes workload by
automatically increasing or decreasing the number of Pods in response to the
workload's CPU or memory consumption, or in response to custom metrics reported
from within Kubernetes or external metrics from sources outside of your cluster.
Horizontal Pod Autoscaling cannot be used for workloads that cannot be scaled, such as
DaemonSets.
Ref: https://cloud.google.com/kubernetes-
engine/docs/concepts/horizontalpodautoscaler
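A brief sketch of both pieces (the cluster name, zone, deployment name, and thresholds are assumptions):

# Create a GKE cluster whose node pool autoscales between 1 and 10 nodes
gcloud container clusters create my-cluster \
    --zone=us-central1-a --num-nodes=3 \
    --enable-autoscaling --min-nodes=1 --max-nodes=10

# Enable Horizontal Pod Autoscaling for an existing deployment
kubectl autoscale deployment my-app --cpu-percent=60 --min=2 --max=20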

Question 35: Incorrect


Your company is a leading multinational news media organization and runs its online news website in Google Cloud on a 3-tier architecture as described below:
1. Web tier in Subnet 1 with a CIDR range 192.168.56.0/24. All instances in this tier use serviceAccount_subnet1.
2. App tier in Subnet 2 with a CIDR range 192.168.57.0/24. All instances in this tier use serviceAccount_subnet2.
3. DB tier in Subnet 3 with a CIDR range 192.168.58.0/24. All instances in this tier use serviceAccount_subnet3.
Your security team has asked you to disable all but essential communication between the tiers. Your application requires instances in the Web tier to communicate with the instances in the App tier on port 80, and the instances in the App tier to communicate with the instances in the DB tier on port 3306. How should you design the firewall rules?

1. Create an ingress firewall rule that allows traffic on port 80 from all instances with
serviceAccount_subnet1 to all instances with serviceAccount_subnet2.
2. Create an ingress firewall rule that allows traffic on port 3306 from all instances with
serviceAccount_subnet2 to all instances with serviceAccount_subnet3.

(Correct)

1. Create an ingress firewall rule that allows all traffic from Subnet 2 (range:
192.168.57.0/24) to all other instances.

2. Create another ingress firewall rule that allows all traffic from Subnet 1 (range:
192.168.56.0/24) to all other instances.

(Incorrect)

1. Create an ingress firewall rule that allows all traffic from all instances with
serviceAccount_subnet1 to all instances with serviceAccount_subnet2.

2. Create an ingress firewall rule that allows all traffic from all instances with
serviceAccount_subnet2 to all instances with serviceAccount_subnet3.

1. Create an egress firewall rule that allows traffic on port 80 from Subnet 2 (range:
192.168.57.0/24) to all other instances.

2. Create another egress firewall rule that allows traffic on port 3306 from Subnet 1
(range: 192.168.56.0/24) to all other instances.

Explanation
This architecture resembles a standard 3 tier architecture - web, application, and
database; where the web tier can talk to just the application tier; and the application tier
can talk to both the web and database tier. The database tier only accepts requests from
the application tier and not the web tier.

We want to ensure that Web Tier can communicate with App Tier, and App Tier can
communicate with Database Tier.

1. Create an egress firewall rule that allows traffic on port 80 from Subnet
2 (range: 192.168.57.0/24) to all other instances.
2. Create another egress firewall rule that allows traffic on port 3306 from
Subnet 1 (range: 192.168.56.0/24) to all other instances. is not right.
We are creating egress rules here which allow outbound communication but not ingress
rules which are for inbound traffic.

1. Create an ingress firewall rule that allows all traffic from Subnet 2
(range: 192.168.57.0/24) to all other instances.
2. Create another ingress firewall rule that allows all traffic from Subnet
1 (range: 192.168.56.0/24) to all other instances. is not right.
If we create an ingress firewall rule with the settings

Targets: all instances

Source filter: IP ranges (with the range set to 192.168.56.0/24)

Protocols: allow all.

We are allowing Web Tier (192.168.56.0/24) access to all instances - including Database
Tier (192.168.58.0/24) which is not desirable.

1. Create an ingress firewall rule that allows all traffic from all
instances with serviceAccount_subnet1 to all instances with
serviceAccount_subnet2.
2. Create an ingress firewall rule that allows all traffic from all
instances with serviceAccount_subnet2 to all instances with
serviceAccount_subnet3. is not right.

The first firewall rule ensures that all instances with serviceAccount_subnet2, i.e. all
instances in Subnet Tier #2 (192.168.57.0/24) can be reached from all instances with
serviceAccount_subnet1, i.e. all instances in Subnet Tier #1 (192.168.56.0/24), on all
ports. Similarly, the second firewall rule ensures that all instances with
serviceAccount_subnet3, i.e. all instances in Subnet Tier #3 (192.168.58.0/24) can be
reached from all instances with serviceAccount_subnet2, i.e. all instances in Subnet Tier
#2 (192.168.57.0/24), on all ports. Though this matches the communication paths we need, it opens all ports instead of only the required ports (80 and 3306). While this solution works, it is not as secure as the other option (see below).

1. Create an ingress firewall rule that allows traffic on port 80 from all
instances with serviceAccount_subnet1 to all instances with
serviceAccount_subnet2.
2. Create an ingress firewall rule that allows traffic on port 3306 from all
instances with serviceAccount_subnet2 to all instances with
serviceAccount_subnet3. is the right answer.
The first firewall rule ensures that all instances with serviceAccount_subnet2, i.e. all
instances in Subnet Tier #2 (192.168.57.0/24) can be reached from all instances with
serviceAccount_subnet1, i.e. all instances in Subnet Tier #1 (192.168.56.0/24), on port 80.
Similarly, the second firewall rule ensures that all instances with serviceAccount_subnet3,
i.e. all instances in Subnet Tier #3 (192.168.58.0/24) can be reached from all instances
with serviceAccount_subnet2, i.e. all instances in Subnet Tier #2 (192.168.57.0/24), on
port 3306.
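A hedged sketch of the two rules (the network name and the full service account emails are assumptions; web-tier-sa, app-tier-sa, and db-tier-sa stand in for serviceAccount_subnet1/2/3, which are only named, not fully specified, in the question):

# Allow Web tier -> App tier on port 80, scoped by service account
gcloud compute firewall-rules create allow-web-to-app \
    --network=prod-vpc --direction=INGRESS --action=ALLOW --rules=tcp:80 \
    --source-service-accounts=web-tier-sa@my-project.iam.gserviceaccount.com \
    --target-service-accounts=app-tier-sa@my-project.iam.gserviceaccount.com

# Allow App tier -> DB tier on port 3306, scoped by service account
gcloud compute firewall-rules create allow-app-to-db \
    --network=prod-vpc --direction=INGRESS --action=ALLOW --rules=tcp:3306 \
    --source-service-accounts=app-tier-sa@my-project.iam.gserviceaccount.com \
    --target-service-accounts=db-tier-sa@my-project.iam.gserviceaccount.com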

Question 36: Correct


You recently deployed a new application in Google App Engine to serve
production traffic. After analyzing logs for various user flows, you uncovered
several issues in your application code and have developed a fix to address the
issues. Parts of your proposed fix could not be validated in the pre-production
environment by your testing team as some of the scenarios can only be validated
by an end user with access to specific data in your production environment. In the
company's weekly Change Approval Board meeting, concerns were raised that the
fix could possibly take down the application. It was unanimously agreed that while
the fix is risky, it is a necessary change to the application. You have been asked to
suggest a solution that minimizes the impact of the change going wrong. You also
want to minimize costs. What should you do?

Create a second Google App Engine project with the new application code, and
onboard users gradually to the new application.

Set up a second Google App Engine service, and then update a subset of clients to
hit the new service.

Deploy a new version of the application, and use traffic splitting to send a small
percentage of traffic to it.

(Correct)

Deploy the new application version temporarily, capture logs and then roll it back to the previous version.

Explanation
Deploy the new application version temporarily, capture logs and then roll
it back to the previous version. is not right.
Deploying a new application version and promoting it would result in your new version
serving all production traffic. If the code fix doesn't work as expected, it would result in
the application becoming unreachable to all users. This is a risky approach and should
be avoided.

Create a second Google App Engine project with the new application code, and
onboard users gradually to the new application. is not right.
You want to minimize costs. This approach effectively doubles your costs as you have to
pay for two identical environments until all users are moved over to the new application.
There is an additional overhead of manually onboarding users to the new application
which could be expensive as well as time-consuming.

Set up a second Google App Engine service, and then update a subset of
clients to hit the new service. is not right.
It is not straightforward to update a set of clients to hit the new service. When users
access an App Engine service, they use an endpoint like https://SERVICE_ID-dot-
PROJECT_ID.REGION_ID.r.appspot.com. Introducing a new service introduces a new URL
and getting your users to use the new URL is possible but involves effort and
coordination. If you want to mask these differences to the end-user, then you have to
make changes in the DNS and use a weighted algorithm to split the traffic between the
two services based on the weights assigned.
Ref: https://cloud.google.com/appengine/docs/standard/python/splitting-traffic
Ref: https://cloud.google.com/appengine/docs/standard/python/an-overview-of-app-
engine
This approach also has the drawback of doubling your costs until all users are moved
over to the new service.

Deploy a new version of the application, and use traffic splitting to send a
small percentage of traffic to it. is the right answer.
This option minimizes the risk to the application while also minimizing the complexity
and cost. When you deploy a new version to App Engine, you can choose not to
promote it to serve live traffic. Instead, you could set up traffic splitting to split traffic
between the two versions - this can all be done within Google App Engine. Once you
send a small portion of traffic to the new version, you can analyze logs to identify if the
fix has worked as expected. If the fix hasn't worked, you can update your traffic splitting
configuration to send all traffic back to the old version. If you are happy your fix has
worked, you can send more traffic to the new version or move all user traffic to the new
version and delete the old version.
Ref: https://cloud.google.com/appengine/docs/standard/python/splitting-traffic
Ref: https://cloud.google.com/appengine/docs/standard/python/an-overview-of-app-
engine
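A minimal sketch of this flow (the service name, version IDs, and the 1% split are assumptions):

# Deploy the fix as a new version without sending any traffic to it
gcloud app deploy --version=v2 --no-promote

# Route 1% of traffic to the new version and keep 99% on the current one
gcloud app services set-traffic default --splits=v1=0.99,v2=0.01 --split-by=ip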

Question 37: Incorrect


A company wants to build an application that stores images in a Cloud Storage bucket and wants to generate thumbnails as well as resize the images. They want to use a Google-managed service that can scale up and scale down to zero
automatically with minimal effort. You have been asked to recommend a service.
Which GCP service would you suggest?

Google Kubernetes Engine

Google Compute Engine

(Incorrect)

Cloud Functions

(Correct)

Google App Engine

Explanation
Cloud Functions. is the right answer.
Cloud Functions is Google Cloud’s event-driven serverless compute platform. It
automatically scales based on the load and requires no additional configuration. You
pay only for the resources used.

Ref: https://cloud.google.com/functions

While all other options i.e. Google Compute Engine, Google Kubernetes Engine, Google
App Engine support autoscaling, it needs to be configured explicitly based on the load
and is not as trivial as the scale up or scale down offered by Google's cloud functions.
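A rough sketch of wiring such a function to the bucket (the function name, bucket, runtime, and entry point are assumptions):

# Deploy a function that runs whenever a new image lands in the bucket
gcloud functions deploy generate-thumbnail \
    --runtime=python39 --entry-point=on_image_upload \
    --trigger-resource=my-images-bucket \
    --trigger-event=google.storage.object.finalize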

Question 38: Incorrect


Your compliance team has requested that all audit logs be stored for 10 years and that external auditors be able to view them. You want to follow Google recommended
practices. What should you do? (Choose two)

Export audit logs to Cloud Storage via an export sink.

(Correct)

Export audit logs to Splunk via a Pub/Sub export sink.

Create an account for auditors to have view access to Stackdriver Logging.

(Incorrect)

Export audit logs to BigQuery via an export sink.

Generate a signed URL to the Stackdriver export destination for auditors to access.

(Correct)

Explanation
Create an account for auditors to have view access to Stackdriver Logging. is
not right.
While it is possible to configure a custom retention period of 10 years in Stackdriver
logging, storing logs in Stackdriver is expensive compared to Cloud Storage. Stackdriver
charges $0.01 per GB per month, whereas something like Cloud Storage Coldline
Storage costs $0.007 per GB per month (30% cheaper) and Cloud Storage Archive
Storage costs 0.004 per GB per month (60% cheaper than Stackdriver)
Ref: https://cloud.google.com/logging/docs/storage#pricing
Ref: https://cloud.google.com/storage/pricing

Export audit logs to BigQuery via an export sink. is not right.


Storing logs in BigQuery is expensive. In BigQuery, Active storage costs $0.02 per GB per
month and Long-term storage costs $0.01 per GB per month. In comparison, Google
Cloud Storage offers several storage classes that are significantly cheaper.
Ref: https://cloud.google.com/bigquery/pricing
Ref: https://cloud.google.com/storage/pricing

Export audit logs to Splunk via a Pub/Sub export sink. is not right.
While you can route logs to Splunk through a Pub/Sub export sink, this introduces an external logging system that you have to operate and pay for, and retaining 10 years of audit logs there is significantly more expensive than archiving them in Google Cloud Storage.
Ref: https://cloud.google.com/storage/pricing

Export audit logs to Cloud Storage via an export sink. is the right answer.
Among all the storage solutions offered by Google Cloud Platform, Cloud storage offers
the best pricing for long term storage of logs. Google Cloud Storage offers several
storage classes such as Nearline Storage ($0.01 per GB per Month) Coldline Storage
($0.007 per GB per Month) and Archive Storage ($0.004 per GB per month) which are
significantly cheaper than the storage options covered by the above options above.
Ref: https://cloud.google.com/storage/pricing

Generate a signed URL to the Stackdriver export destination for auditors to


access. is the right answer.
In Google Cloud Storage, you can generate a signed URL to provide limited permission
and time to make a request. Anyone who possesses it can use the signed URL to
perform specified actions, such as reading an object, within a specified period of time.
In our scenario, we do not need to create accounts for our auditors to provide access to
logs in Cloud Storage. Instead, we can generate them signed URLs which are time-
bound and lets them access/download log files.
Ref: https://cloud.google.com/storage/docs/access-control/signed-urls
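An illustrative sketch of both steps (the bucket name, sink name, log filter, key file, object path, and expiry are assumptions):

# Create an archive bucket and a sink that exports audit logs into it
gsutil mb -c coldline -l us-central1 gs://my-audit-log-archive
gcloud logging sinks create audit-log-sink \
    storage.googleapis.com/my-audit-log-archive \
    --log-filter='logName:"cloudaudit.googleapis.com"'

# Later, generate a time-bound signed URL so an auditor can download a log file
gsutil signurl -d 7d sa-key.json gs://my-audit-log-archive/2023/01/audit.json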

Question 39: Correct


You've created a Kubernetes Engine cluster named "my-gcp-ace-proj-1", which has a node pool named my-gcp-ace-primary-node-pool. You want to increase the number of nodes within your node pool from 10 to 20 to meet capacity demands. What is the command to change the number of nodes in your pool?

kubectl container clusters update my-gcp-ace-proj-1 --node-pool my-gcp-ace-primary-node-pool --num-nodes 20

gcloud container clusters resize my-gcp-ace-proj-1 --node-pool my-gcp-ace-primary-node-pool --new-size 20

gcloud container clusters resize my-gcp-ace-proj-1 --node-pool my-gcp-ace-primary-node-pool --num-nodes 20

(Correct)

gcloud container clusters update my-gcp-ace-proj-1 --node-pool my-gcp-ace-primary-node-pool --num-nodes 20

Explanation
kubectl container clusters update my-gcp-ace-proj-1 --node-pool my-gcp-ace-
primary-node-pool --num-nodes 20. is not right.
kubectl does not accept container as an operation.
Ref: https://kubernetes.io/docs/reference/kubectl/overview/#operations

gcloud container clusters update my-gcp-ace-proj-1 --node-pool my-gcp-ace-primary-node-pool --num-nodes 20. is not right.
gcloud container clusters update can not be used to specify the number of nodes. It can
be used to specify the node locations, but not the number of nodes.
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/update

gcloud container clusters resize my-gcp-ace-proj-1 --node-pool my-gcp-ace-primary-node-pool --new-size 20. is not right.
gcloud container clusters resize command does not support the parameter new-size.
While --size can be used to resize the cluster node pool, use of --size is discouraged as
this is a deprecated parameter. "The --size flag is now deprecated. Please use --num-
nodes instead."
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/resize

gcloud container clusters resize my-gcp-ace-proj-1 --node-pool my-gcp-ace-primary-node-pool --num-nodes 20. is the right answer.
gcloud container clusters resize can be used to specify the number of nodes using the --
num-nodes parameter which is the target number of nodes in the cluster.
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/resize

Question 40: Incorrect


Your company is migrating all applications from the on-premises data centre to
Google Cloud, and one of the applications is dependent on Websockets protocol
and session affinity. You want to ensure this application can be migrated to
Google Cloud platform and continue serving requests without issues. What should
you do?

Modify application code to not depend on session affinity.

Review the design with the security team.

Modify application code to use HTTP streaming.

(Incorrect)


Discuss load balancer options with the relevant teams.

(Correct)

Explanation
Google HTTP(S) Load Balancing has native support for the WebSocket protocol when
you use HTTP or HTTPS, not HTTP/2, as the protocol to the backend.
Ref: https://cloud.google.com/load-balancing/docs/https#websocket_proxy_support
The load balancer also supports session affinity.
Ref: https://cloud.google.com/load-balancing/docs/backend-service#session_affinity

So Discuss load balancer options with the relevant teams. is the right answer.

We don't need to convert WebSocket code to use HTTP streaming or Redesign the
application, as WebSocket support and session affinity are offered by Google HTTP(S)
Load Balancing. Reviewing the design is a good idea, but it has nothing to do with
WebSockets.
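As a small illustration (the backend service name and affinity type are assumptions), session affinity can be enabled on the HTTP(S) load balancer's backend service:

# Enable cookie-based session affinity on the load balancer's backend service
gcloud compute backend-services update my-backend-service \
    --global --session-affinity=GENERATED_COOKIE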

Question 41: Correct


Your company has an App Engine application that needs to store stateful data in a
proper storage service. Your data is non-relational data. You do not expect the
database size to grow beyond 10 GB and you need to have the ability to scale
down to zero to avoid unnecessary costs. Which storage service should you use?

Cloud Datastore

(Correct)

Cloud Dataproc

Cloud Bigtable

Cloud SQL
Explanation
Cloud SQL. is not right.
Cloud SQL is not suitable for non-relational data. Cloud SQL is a fully-managed
database service that makes it easy to set up, maintain, manage, and administer your
relational databases on Google Cloud Platform
Ref: https://cloud.google.com/sql/docs

Cloud Dataproc. is not right.


Cloud Dataproc is a fast, easy-to-use, fully managed cloud service for running Apache
Spark and Apache Hadoop clusters in a simple, cost-efficient way. It is not a database.
Ref: https://cloud.google.com/dataproc

Cloud Bigtable. is not right.


Bigtable is a petabyte-scale, massively scalable, fully managed NoSQL database service
for large analytical and operational workloads. Cloud Bigtable is overkill for our
database which is just 10 GB. Also, Cloud Bigtable can't be scaled down to 0, as there is
always a cost with the node, SSD/HDD storage etc.
Ref: https://cloud.google.com/bigtable

Cloud Datastore. is the right answer.


Cloud Datastore is a highly-scalable NoSQL database. Cloud Datastore scales seamlessly
and automatically with your data, allowing applications to maintain high performance as
they receive more traffic; automatically scales back when the traffic reduces.
Ref: https://cloud.google.com/datastore/

Question 42: Correct


You defined an instance template for a Python web application. When you deploy
this application in Google Compute Engine, you want to ensure the service scales
up and scales down automatically based on the number of HTTP requests. What
should you do?

1. Create a managed instance group from the instance template.

2. Configure autoscaling on the managed instance group with a scaling policy based on
HTTP traffic.

3. Configure the instance group as the backend service of an External HTTP(S) load
balancer.
(Correct)

1. Create an instance from the instance template.

2. Create an image from the instance's disk and export it to Cloud Storage.

3. Create an External HTTP(s) load balancer and add the Cloud Storage bucket as its
backend service.

1. Create the necessary number of instances based on the instance template to handle
peak user traffic.

2. Group the instances together in an unmanaged instance group.

3. Configure the instance group as the Backend Service of an External HTTP(S) load
balancer.

1. Deploy your Python web application instance template to Google Cloud App Engine.

2. Configure autoscaling on the managed instance group with a scaling policy based on
HTTP traffic.

1. Create an unmanaged instance group from the instance template.

2. Configure autoscaling on the unmanaged instance group with a scaling policy based
on HTTP traffic.

3. Configure the unmanaged instance group as the backend service of an Internal
HTTP(S) load balancer.

Explanation
1. Create an instance from the instance template.
2. Create an image from the instance's disk and export it to Cloud Storage.
3. Create an External HTTP(s) load balancer and add the Cloud Storage bucket
as its backend service. is not right.
You can create a custom image from an instance's boot disk and export it to Cloud
Storage.
https://cloud.google.com/compute/docs/images/export-image
However, the image in the Cloud Storage bucket cannot handle traffic because it is not a
running application; Cloud Storage cannot serve requests using the custom image.

1. Create an unmanaged instance group from the instance template.


2. Configure autoscaling on the unmanaged instance group with a scaling
policy based on HTTP traffic.
3. Configure the unmanaged instance group as the backend service of an
Internal HTTP(S) load balancer. is not right.
An unmanaged instance group does not autoscale. An unmanaged instance group is a
collection of virtual machines (VMs) that reside in a single zone, VPC network, and
subnet. An unmanaged instance group is useful for grouping together VMs that require
individual configuration settings or tuning.
Ref: https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-
unmanaged-instances

1. Create the necessary number of instances based on the instance template
to handle peak user traffic.
2. Group the instances together in an unmanaged instance group.
3. Configure the instance group as the Backend Service of an External
HTTP(S) load balancer. is not right.
An unmanaged instance group does not autoscale. Although we may have enough
compute power to handle peak user traffic, it does not automatically scale down when
the traffic goes down so it doesn't meet our requirements.
Ref: https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-
unmanaged-instances

1. Deploy your Python web application instance template to Google Cloud App
Engine.
2. Configure autoscaling on the managed instance group with a scaling policy
based on HTTP traffic. is not right.
You cannot use Compute Engine instance templates to deploy applications to Google
App Engine. Google App Engine lets you deploy applications quickly by providing
runtime environments for many popular languages such as Java, PHP, Node.js,
Python, C#, .Net, Ruby, and Go. You also have the option of using custom runtimes, but
deploying from a Compute Engine instance template is not supported.
Ref: https://cloud.google.com/appengine

1. Create a managed instance group from the instance template.


2. Configure autoscaling on the managed instance group with a scaling policy
based on HTTP traffic.
3. Configure the instance group as the backend service of an External
HTTP(S) load balancer. is the right answer.
The auto-scaling capabilities of Managed instance groups let you automatically add or
delete instances from a managed instance group based on increases or decreases in
load - this can be set up by configuring scaling policies. In addition, you can configure
External HTTP(S) load balancer to send traffic to the managed instance group. The
External HTTP(S) load balancer tries to balance requests by using a round-robin
algorithm and when the load increases beyond the threshold defined in the scaling
policy, autoscaling kicks in and adds more nodes.
Ref: https://cloud.google.com/load-balancing/docs/https
Ref: https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-
managed-instances
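
As a rough sketch of how this could be wired up with gcloud (web-template, web-mig,
and the zone are hypothetical names, not taken from the question):

# Create a managed instance group from the existing instance template
gcloud compute instance-groups managed create web-mig \
    --template=web-template \
    --size=2 \
    --zone=us-central1-a

# Autoscale the group based on the serving capacity of the HTTP(S) load balancer
gcloud compute instance-groups managed set-autoscaling web-mig \
    --zone=us-central1-a \
    --min-num-replicas=2 \
    --max-num-replicas=10 \
    --target-load-balancing-utilization=0.8

The MIG would then be added as a backend of the External HTTP(S) load balancer's
backend service so that request load drives the scaling decisions.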

Question 43: Incorrect


The storage costs for your application logs have far exceeded the project budget.
The logs are currently being retained indefinitely in the Cloud Storage bucket
myapp-gcp-ace-logs. You have been asked to remove logs older than 90 days from
your Cloud Storage bucket. You want to optimize ongoing Cloud Storage spend.
What should you do?

Write a script that runs gsutil ls -l gs://myapp-gcp-ace-logs/ to find and remove
items older than 90 days. Schedule the script with cron.

(Incorrect)

Write a script that runs gsutil ls -lr gs://myapp-gcp-ace-logs/ to find and remove
items older than 90 days. Repeat this process every morning.


Write a lifecycle management rule in JSON and push it to the bucket with gsutil
lifecycle set config-json-file.

(Correct)

Write a lifecycle management rule in XML and push it to the bucket with gsutil
lifecycle set config-xml-file.

Explanation
Write a lifecycle management rule in XML and push it to the bucket with
gsutil lifecycle set config-xml-file. is not right.
gsutil lifecycle set enables you to set the lifecycle configuration on one or more buckets
based on the configuration file provided. However, XML is not a valid supported type for
the configuration file.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/lifecycle

Write a script that runs gsutil ls -lr gs://myapp-gcp-ace-logs/** to find
and remove items older than 90 days. Repeat this process every morning. is
not right.
This manual approach is error-prone, time-consuming and expensive. GCP Cloud
Storage provides lifecycle management rules that let you achieve this with minimal
effort.

Write a script that runs gsutil ls -l gs://myapp-gcp-ace-logs/** to find and
remove items older than 90 days. Schedule the script with cron. is not right.
This manual approach is error-prone, time-consuming and expensive. GCP Cloud
Storage provides lifecycle management rules that let you achieve this with minimal
effort.

Write a lifecycle management rule in JSON and push it to the bucket with
gsutil lifecycle set config-json-file. is the right answer.
You can assign a lifecycle management configuration to a bucket. The configuration
contains a set of rules which apply to current and future objects in the bucket. When an
object meets the criteria of one of the rules, Cloud Storage automatically performs a
specified action on the object. One of the supported actions is to Delete objects. You
can set up a lifecycle management to delete objects older than 90 days. "gsutil lifecycle
set" enables you to set the lifecycle configuration on the bucket based on the
configuration file. JSON is the only supported type for the configuration file. The config-
json-file specified on the command line should be a path to a local file containing the
lifecycle configuration JSON document.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/lifecycle
Ref: https://cloud.google.com/storage/docs/lifecycle
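
For example, a delete-after-90-days rule could look like the following; the local file
name lifecycle.json is arbitrary, and the bucket name is the one from the question.

# lifecycle.json
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 90}
    }
  ]
}

# Apply the lifecycle configuration to the bucket
gsutil lifecycle set lifecycle.json gs://myapp-gcp-ace-logs

Once set, Cloud Storage evaluates the rule automatically; no cron jobs or scripts are
needed.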

Question 44: Correct


Your company, which runs highly rated mobile games, has chosen to migrate its
analytics backend to BigQuery. The analytics team of 7 analysts need access to
perform queries against the data in BigQuery. The analytics team members change
frequently. How should you grant them access?

Create a Cloud Identity account for each analyst and add them all to a group.
Grant roles/bigquery.jobUser role to the group.

Create a Cloud Identity account for each analyst and add them all to a group.
Grant roles/bigquery.dataViewer role to the group.

(Correct)

Create a Cloud Identity account for each analyst and grant roles/bigquery.jobUser
role to each account.

Create a Cloud Identity account for each analyst and grant
roles/bigquery.dataViewer role to each account.

Explanation
Create a Cloud Identity account for each analyst and grant
roles/bigquery.dataViewer role to each account. is not right.
dataViewer provides permissions to read data (i.e. query) and metadata from the table
or view, so this is the right role. However, given that the analytics team changes
frequently, we do not want to go through this lengthy provisioning and de-provisioning
process for each individual account.
Instead, we should be using groups so that provisioning and de-provisioning are as
simple as adding/removing the user to/from the group. Google Groups are a convenient
way to apply an access policy to a collection of users.
Ref: https://cloud.google.com/bigquery/docs/access-control

Create a Cloud Identity account for each analyst and grant
roles/bigquery.jobUser role to each account. is not right.
Given that our analytics team changes frequently, we do not want to go through this
lengthy provisioning and de-provisioning process. Instead, we should be using groups
so that provisioning and de-provisioning are as simple as adding/removing the user
to/from the group. Google Groups are a convenient way to apply an access policy to a
collection of users.
Ref: https://cloud.google.com/bigquery/docs/access-control
Ref: https://cloud.google.com/iam/docs/overview#google_group

Create a Cloud Identity account for each analyst and add them all to a
group. Grant roles/bigquery.jobUser role to the group. is not right.
Since you want users to query the datasets, you need dataViewer role. jobUser provides
the ability to run jobs, including "query jobs". The query job lets you query an
authorized view. An authorized view lets you share query results with particular users
and groups without giving them access to the underlying tables. You can also use the
view's SQL query to restrict the columns (fields) the users can query.
Ref: https://cloud.google.com/bigquery/docs/access-control-examples
Ref: https://cloud.google.com/bigquery/docs/access-control

Create a Cloud Identity account for each analyst and add them all to a
group. Grant roles/bigquery.dataViewer role to the group. is the right answer.
dataViewer provides permissions to Read data (i.e. query) and metadata from the table
or view, so this is the right role, and this option also rightly uses groups instead of
assigning permissions at the user level.
Ref: https://cloud.google.com/bigquery/docs/access-control-examples
Ref: https://cloud.google.com/bigquery/docs/access-control
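
As an illustrative sketch, assuming the analysts are all members of a group such as
bq-analysts@example.com and the data lives in a project called my-analytics-project
(both names are hypothetical), the project-level grant could be:

# Grant the group read access to BigQuery data in the project
gcloud projects add-iam-policy-binding my-analytics-project \
    --member="group:bq-analysts@example.com" \
    --role="roles/bigquery.dataViewer"

Onboarding or offboarding an analyst then only requires adding or removing them from
the group; no IAM changes are needed. The same role could instead be granted at the
dataset level if narrower access is preferred.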

Question 45: Incorrect


An intern joined your team recently and needs access to Google Compute Engine
in your sandbox project to explore various settings and spin up compute instances
to test features. You have been asked to facilitate this. How should you give your
intern access to compute engine without giving more permissions than is
necessary?


Create a shared VPC to enable the intern access Compute resources.

Grant Compute Engine Admin Role for sandbox project.

Grant Project Editor IAM role for sandbox project.

(Incorrect)

Grant Compute Engine Instance Admin Role for the sandbox project.

(Correct)

Explanation
Create a shared VPC to enable the intern access Compute resources. is not
right.
Creating a shared VPC is not sufficient to grant the intern access to Compute Engine resources.
Shared VPCs are primarily used by organizations to connect resources from multiple
projects to a common Virtual Private Cloud (VPC) network, so that they can
communicate with each other securely and efficiently using internal IPs from that
network.
Ref: https://cloud.google.com/vpc/docs/shared-vpc

Grant Project Editor IAM role for sandbox project. is not right.
Project editor role grants all viewer permissions, plus permissions for actions that modify
state, such as changing existing resources. While this role lets the intern explore
compute engine settings and spin up compute instances, it grants more permissions
than what is needed. Our intern can modify any resource in the project.
https://cloud.google.com/iam/docs/understanding-roles#primitive_roles

Grant Compute Engine Admin Role for sandbox project. is not right.
Compute Engine Admin Role grants full control of all Compute Engine resources;
including networks, load balancing, service accounts etc. While this role lets the intern
explore compute engine settings and spin up compute instances, it grants more
permissions than what is needed.
Ref: https://cloud.google.com/compute/docs/access/iam#compute.storageAdmin

Grant Compute Engine Instance Admin Role for the sandbox project. is the right
answer.
Compute Engine Instance Admin Role grants full control of Compute Engine instances,
instance groups, disks, snapshots, and images. It also provides read access to all
Compute Engine networking resources. This provides just the required permissions to
the intern.
Ref: https://cloud.google.com/compute/docs/access/iam#compute.storageAdmin
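
A minimal sketch of granting this role, assuming the project ID is sandbox-project and
the intern's account is intern@example.com (both hypothetical):

gcloud projects add-iam-policy-binding sandbox-project \
    --member="user:intern@example.com" \
    --role="roles/compute.instanceAdmin.v1"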

Question 46: Incorrect


Your organization is planning to deploy a Python web application to Google
Cloud. The web application uses a custom Linux distribution and you want to
minimize rework. The web application underpins an important website that is
accessible to customers globally. You have been asked to design a solution
that scales to meet demand. What would you recommend to fulfill this
requirement? (Select Two)

Cloud Functions

App Engine Standard environment

(Incorrect)

HTTP(S) Load Balancer

(Correct)

Managed Instance Group on Compute Engine

(Correct)

Network Load Balancer


Explanation
Requirements: a custom Linux distro, global access, and automatic scaling.

Cloud Functions. is not right.


Cloud Functions is a serverless compute platform. You can not use a custom Linux
distribution with Cloud Functions.
Ref: https://cloud.google.com/functions

App Engine Standard environment. is not right.


The App Engine Standard Environment is based on container instances running on
Google's infrastructure. Containers are preconfigured with one of several available
runtimes such as Python, Java, NodeJS, PHP, Ruby, GO etc. It is not possible to specify a
custom Linux distribution with App Engine Standard.
Ref: https://cloud.google.com/appengine/docs/standard

Network Load Balancer. is not right.


The external (TCP/UDP) Network Load Balancing is a regional load balancer. Since we
need to cater to a global user base, this load balancer is not suitable.
Ref: https://cloud.google.com/load-balancing/docs/network

HTTP(S) Load Balancer. is the right answer.


HTTP(S) Load Balancing is a global service (when the Premium Network Service Tier is
used). We can create backends in more than one region and have them all
served by the same global load balancer.
Ref: https://cloud.google.com/load-balancing/docs/https

Managed Instance Group on Compute Engine. is the right answer.


Managed instance groups (MIGs) maintain the high availability of your applications by
proactively keeping your virtual machine (VM) instances available, and they support
autoscaling so the group can grow and shrink with demand. An autohealing policy
on the MIG relies on an application-based health check to verify that an application is
responding as expected. If the auto-healer determines that an application isn't
responding, the managed instance group automatically recreates that instance.
Ref: https://cloud.google.com/compute/docs/instance-groups
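
To tie the two recommendations together, the managed instance group is attached as a
backend of a global backend service used by the External HTTP(S) load balancer. A
rough sketch with hypothetical resource names (web-mig, web-health-check,
web-backend-service):

# Health check and global backend service for the HTTP(S) load balancer
gcloud compute health-checks create http web-health-check --port=80

gcloud compute backend-services create web-backend-service \
    --protocol=HTTP \
    --health-checks=web-health-check \
    --global

# Attach the managed instance group as a backend
gcloud compute backend-services add-backend web-backend-service \
    --instance-group=web-mig \
    --instance-group-zone=us-central1-a \
    --global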

Question 47: Correct


Your organization specializes in helping other companies detect if any pages on
their website do not align to the specified standards. To do this, your company has
deployed a custom C++ application in your on-premises data centre that crawls all
the web pages of a customer’s website, compares the headers and template to the
expected standard and stores the result before moving on to another customer’s
website. This testing takes a lot of time and has caused the application to miss its
SLA several times recently. The application team is aware of the slow processing
time and wants to run the application on multiple virtual machines to split the
load, but there is no free space in the data centre. You have been asked to identify
if it is possible to migrate this application to Google cloud, ensuring it can
autoscale with minimal changes and reduce the processing time. What GCP service
should you recommend?

Deploy the application as Cloud Dataproc job based on Hadoop.

Deploy the application on a GCE Managed Instance Group (MIG) with autoscaling
enabled.

(Correct)

Deploy the application on Google App Engine Standard service.

Deploy the application on a GCE Unmanaged Instance Group. Front the group with
a network load balancer.

Explanation
Deploy the application on a GCE Unmanaged Instance Group. Front the group
with a network load balancer. is not right.
An unmanaged instance group is a collection of virtual machines (VMs) that reside in a
single zone, VPC network, and subnet. An unmanaged instance group is useful for
grouping together VMs that require individual configuration settings or tuning.
An unmanaged instance group does not autoscale, so it does not help reduce the
processing time.
Ref: https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-
unmanaged-instances

Deploy the application on Google App Engine Standard service. is not right.
App Engine supports many popular languages like Java, PHP, Node.js, Python, C#, .Net,
Ruby, and Go. However, C++ isn’t supported by App Engine.
Ref: https://cloud.google.com/appengine

Deploy the application as Cloud Dataproc job based on Hadoop. is not right.
Cloud Dataproc is a fast, easy-to-use, fully managed cloud service for running Apache
Spark and Apache Hadoop clusters in a simpler, more cost-efficient way. While Dataproc
is very efficient at processing ETL and Big Data pipelines, it is not suitable for running
a custom C++ application that crawls and validates web pages.
Ref: https://cloud.google.com/dataproc

Deploy the application on a GCE Managed Instance Group (MIG) with
autoscaling enabled. is the right answer.
A managed instance group (MIG) contains identical virtual machine (VM) instances that
are based on an instance template. MIGs support auto-healing, load balancing,
autoscaling, and auto-updating. Managed instance groups offer auto-scaling
capabilities that let you automatically add or delete instances from a managed instance
group based on increases or decreases in load. Autoscaling helps your apps gracefully
handle traffic increases and reduce costs when the need for resources is lower.
Autoscaling works by adding more instances to your instance group when there is more
load (upscaling), and deleting instances when the need for instances is lowered
(downscaling).
Ref: https://cloud.google.com/compute/docs/autoscaler/
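
As a sketch, the crawler could be packaged into a custom image built from the existing
Linux distribution, referenced from an instance template, and autoscaled on CPU
utilization; all names below (crawler-image, my-project, crawler-template, crawler-mig)
are hypothetical.

# Instance template based on a custom image of the existing application
gcloud compute instance-templates create crawler-template \
    --image=crawler-image \
    --image-project=my-project

# Managed instance group that scales on CPU load
gcloud compute instance-groups managed create crawler-mig \
    --template=crawler-template \
    --size=1 \
    --zone=us-central1-a

gcloud compute instance-groups managed set-autoscaling crawler-mig \
    --zone=us-central1-a \
    --min-num-replicas=1 \
    --max-num-replicas=20 \
    --target-cpu-utilization=0.75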

Question 48: Incorrect


Your company has multiple GCP projects in several regions, and your operations
team have created numerous gcloud configurations for most common operational
needs. They have asked for your help to retrieve an inactive gcloud configuration and
the GKE clusters that use it, using the least number of steps. What command
should you execute to retrieve this information?

Execute gcloud config configurations describe.

Execute gcloud config configurations activate, then gcloud config list.

Execute kubectl config use-context, then kubectl config view.


(Incorrect)

Execute kubectl config get-contexts.

(Correct)

Explanation
We want to get to the end goal with the fewest possible steps.

Execute gcloud config configurations describe. is not right.


gcloud config configurations describe - describes a named configuration by listing its
properties. This does not return any Kubernetes cluster details.
Ref: https://cloud.google.com/sdk/gcloud/reference/config/configurations/describe

Execute gcloud config configurations activate, then gcloud config list. is
not right.
gcloud config configurations activate - activates an existing named configuration. This
does not return any Kubernetes cluster details.
Ref: https://cloud.google.com/sdk/gcloud/reference/config/configurations/activate

Execute kubectl config get-contexts. is the right answer.


kubectl config get-contexts displays a list of contexts as well as the clusters that use
them. Here's a sample output.

$ kubectl config get-contexts


CURRENT   NAME                                                        CLUSTER
          gke_kubernetes-260922_us-central1-a_standard-cluster-1     gke_kubernetes-260922_us-central1-a_standard-cluster-1
          gke_kubernetes-260922_us-central1-a_your-first-cluster-1   gke_kubernetes-260922_us-central1-a_your-first-cluster-1
*         gke_kubernetes-260922_us-central1_standard-cluster-1       gke_kubernetes-260922_us-central1_standard-cluster-1

The output shows the clusters and the configurations they use. Using this information, it
is possible to find out the cluster using the inactive configuration with just 1 step.

Execute kubectl config use-context, then kubectl config view. is not right.
kubectl config use-context [my-cluster-name] is used to set the default context to [my-
cluster-name]. But to do this, we first need a list of contexts, and if you have multiple
contexts, you'd need to execute kubectl config use-context [my-cluster-name] against
each context. So that is at least 2+ steps. Further to that, kubectl config view is used
to display the full configuration. Its output can be used to verify which clusters use
which configuration, but that is one additional step. Moreover, the output of kubectl
config view changes little from one context to another, other than the current-context
field, so the earlier steps of determining the contexts and switching between them add
little value. Though this approach can achieve the same outcome, it involves more steps
than the other option.

Here’s a sample execution

Step 1: First get a list of contexts

kubectl config get-contexts -o=name


gke_kubernetes-260922_us-central1-a_standard-cluster-1
gke_kubernetes-260922_us-central1-a_your-first-cluster-1
gke_kubernetes-260922_us-central1_standard-cluster-1

Step 2: Use each context and view the config.


kubectl config use-context gke_kubernetes-260922_us-central1-a_standard-cluster-1
Switched to context "gke_kubernetes-260922_us-central1-a_standard-cluster-1".
kubectl config view > 1.out (this saves the output of config view in 1.out)

kubectl config use-context gke_kubernetes-260922_us-central1-a_your-first-cluster-1
Switched to context "gke_kubernetes-260922_us-central1-a_your-first-cluster-1".
kubectl config view > 2.out (this saves the output of config view in 2.out)

kubectl config use-context gke_kubernetes-260922_us-central1_standard-cluster-1
Switched to context "gke_kubernetes-260922_us-central1_standard-cluster-1".
kubectl config view > 3.out (this saves the output of config view in 3.out)

diff 1.out 2.out


28c28
< current-context: gke_kubernetes-260922_us-central1-a_standard-cluster-1
---
> current-context: gke_kubernetes-260922_us-central1-a_your-first-cluster-1

diff 2.out 3.out


28c28
< current-context: gke_kubernetes-260922_us-central1-a_your-first-cluster-1
---
> current-context: gke_kubernetes-260922_us-central1_standard-cluster-1

Step 3: Determine the inactive configuration and the cluster using that
configuration.
The config itself has details about the clusters and contexts, as shown below.
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://35.222.130.166
  name: gke_kubernetes-260922_us-central1-a_standard-cluster-1
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://35.225.14.172
  name: gke_kubernetes-260922_us-central1-a_your-first-cluster-1
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://34.69.212.109
  name: gke_kubernetes-260922_us-central1_standard-cluster-1
contexts:
- context:
    cluster: gke_kubernetes-260922_us-central1-a_standard-cluster-1
    user: gke_kubernetes-260922_us-central1-a_standard-cluster-1
  name: gke_kubernetes-260922_us-central1-a_standard-cluster-1
- context:
    cluster: gke_kubernetes-260922_us-central1-a_your-first-cluster-1
    user: gke_kubernetes-260922_us-central1-a_your-first-cluster-1
  name: gke_kubernetes-260922_us-central1-a_your-first-cluster-1
- context:
    cluster: gke_kubernetes-260922_us-central1_standard-cluster-1
    user: gke_kubernetes-260922_us-central1_standard-cluster-1
  name: gke_kubernetes-260922_us-central1_standard-cluster-1
current-context: gke_kubernetes-260922_us-central1-a_standard-cluster-1

Question 49: Incorrect


Your Company is planning to migrate all Java web applications to Google App
Engine. However, you still want to continue using your on-premise database. How
can you set up App Engine to communicate with your on-premise database
while minimizing effort?

Setup the application using App Engine Standard environment with Cloud Router
to connect to on-premise database.

Setup the application using App Engine Standard environment with Cloud VPN to
connect to on-premise database.

(Incorrect)

Setup the application using App Engine Flexible environment with Cloud VPN to
connect to on-premise database.

(Correct)

Setup the application using App Engine Flexible environment with Cloud Router to
connect to on-premise database.

Explanation
Setup the application using App Engine Standard environment with Cloud
Router to connect to on-premise database. is not right.
Cloud router by itself is not sufficient to connect VPC to an on-premise network. Cloud
Router enables you to dynamically exchange routes between your Virtual Private Cloud
(VPC) and on-premises networks by using Border Gateway Protocol (BGP).
Ref: https://cloud.google.com/router

Setup the application using App Engine Flexible environment with Cloud
Router to connect to on-premise database. is not right.
Cloud router by itself is not sufficient to connect VPC to an on-premise network. Cloud
Router enables you to dynamically exchange routes between your Virtual Private Cloud
(VPC) and on-premises networks by using Border Gateway Protocol (BGP).
Ref: https://cloud.google.com/router

Setup the application using App Engine Standard environment with Cloud VPN
to connect to on-premise database. is not right.
App Engine Standard can’t connect to the on-premise network with just Cloud VPN.
Since App Engine is serverless, it can’t use Cloud VPN tunnels. In order to get App
Engine to work with Cloud VPN, you need to connect it to the VPC using serverless VPC.
You can configure the Serverless VPC by creating a connector:
https://cloud.google.com/vpc/docs/configure-serverless-vpc-access
and then you then update your app in App Engine Standard to use this connector:
https://cloud.google.com/appengine/docs/standard/python/connecting-vpc

Setup the application using App Engine Flexible environment with Cloud VPN
to connect to on-premise database. is the right answer.
You need Cloud VPN to connect VPC to an on-premise network.
Ref: https://cloud.google.com/vpn/docs/concepts/overview
Unlike App Engine Standard, which is serverless, App Engine Flexible instances run on
Compute Engine VMs within your VPC, so they can use Cloud VPN to connect to the
on-premise network.
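
A minimal app.yaml sketch for the flexible environment; the runtime line assumes the
Java flexible runtime (matching the Java web applications in the question), and
"default" is an assumed VPC network name where the Cloud VPN tunnel terminates.

# app.yaml (App Engine flexible environment)
runtime: java
env: flex

network:
  name: default   # VPC network that has the Cloud VPN tunnel to the on-premise network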

Question 50: Incorrect


You work at a large organization where each team has a distinct role. The
development team can create Google Cloud projects but can’t link them to a
billing account – this role is reserved for the finance team, and the development
team does not want the finance team to make changes to their project resources. How
should you configure IAM access controls to enable this?


Grant the development team Billing Account User (roles/billing.user) role on the
billing account and Project Billing Manager (roles/billing.projectManager) on the
GCP organization.

(Incorrect)

Grant the finance team Billing Account User (roles/billing.user) role on the billing
account.

Grant the finance team Billing Account User (roles/billing.user) role on the billing
account and Project Billing Manager (roles/billing.projectManager) on the GCP
organization.

(Correct)

Grant the development team Billing Account User (roles/billing.user) role on the
billing account.

Explanation
Grant the finance team Billing Account User (roles/billing.user) role on the
billing account. is not right.
To link a project to a billing account, you need the necessary roles at the project level as
well as at the billing account level. In this scenario, we are granting just the Billing
Account User role on the billing account to the Finance team, which allows them to link
projects to the billing account on which the role is granted. But we haven't granted
them any role at the project level. So they would be unable to link projects.

Grant the development team Billing Account User (roles/billing.user) role on
the billing account. is not right.
To link a project to a billing account, you need the necessary roles at the project level as
well as at the billing account level. In this scenario, we are granting just the Billing
Account User role on the billing account to the development team, which allows them to
link projects to the billing account, and the question clearly states we do not want them
to do that.

Grant the development team Billing Account User (roles/billing.user) role on
the billing account and Project Billing Manager
(roles/billing.projectManager) on the GCP organization. is not right.
To link a project to a billing account, you need the necessary roles at the project level as
well as at the billing account level. In this scenario, we are assigning the development
team the Billing Account User role on the billing account, which allows them to create
new projects linked to the billing account on which the role is granted. We are also
assigning them the Project Billing Manager role on the organization (it trickles down to
the projects as well), which lets them attach a project to the billing account. But we
don't want the development team to link projects to the billing account.

Grant the finance team Billing Account User (roles/billing.user) role on the
billing account and Project Billing Manager (roles/billing.projectManager)
on the GCP organization. is the right answer.
To link a project to a billing account, you need the necessary roles at the project level as
well as at the billing account level. In this scenario, we are assigning the finance team
the Billing Account User role on the billing account, which allows them to create new
projects linked to the billing account on which the role is granted. We are also assigning
them the Project Billing Manager role on the organization (trickles down to the project
as well) which lets them attach the project to the billing account, but does not grant any
rights over resources.
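
For illustration, the two grants could be applied as follows; the billing account ID,
organization ID, and group address below are placeholders.

# Allow the finance team to link projects to this billing account
gcloud beta billing accounts add-iam-policy-binding 0X0X0X-0X0X0X-0X0X0X \
    --member="group:finance-team@example.com" \
    --role="roles/billing.user"

# Allow the finance team to attach projects in the organization to a billing account
gcloud organizations add-iam-policy-binding 123456789012 \
    --member="group:finance-team@example.com" \
    --role="roles/billing.projectManager"

Neither role grants permissions to view or modify the resources inside the development
team's projects.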
