Week 4 GCP Lec Notes

Google Cloud Foundations IIT NPTEL Notes

Google Cloud Computing Foundation Course

Sowmya Kannan
Google Cloud

Lecture-20
Configuring Elastic Apps with Autoscaling

(Refer Slide Time: 00:07)

Your next topic looks at building elastic applications with autoscaling. Let's look at how autoscaling works. The autoscaler controls managed instance groups, adding and removing instances according to policies. A policy includes the minimum and maximum number of replicas. In this diagram, N is any number of instance replicas based on a template. The template requisitions resources from Compute Engine, identifies an OS image to boot, and starts new VMs.

(Refer Slide Time: 00:42)

The percentage of utilization that an additional VM contributes depends on the size of the group. The fourth VM added to a group offers a 25% increase in capacity, while the tenth VM added to a group offers only 10% more capacity, even though the VMs are the same size. In this example, the autoscaler is conservative and rounds up. In other words, it would rather start an extra VM that isn't needed than risk running out of capacity.

In this example, removing one VM does not get close enough to the target utilization of 75%. Removing a second VM would exceed the target. Because the autoscaler behaves conservatively, it will shut down one VM rather than two: it prefers underutilization over running out of resources when they are needed.
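The arithmetic behind this conservative behavior can be sketched in a few lines of Python. This is only an illustrative model of the rounding and policy clamping described above, not the actual autoscaler implementation; the function name and the 75% target are assumptions for the example.

```python
import math

def recommended_size(per_vm_utilization, target=0.75,
                     min_replicas=1, max_replicas=10):
    """Conservative sizing sketch: total load divided by the target
    utilization, rounded UP, then clamped to the policy's min/max."""
    total_load = sum(per_vm_utilization)      # load in "VMs' worth" of work
    desired = math.ceil(total_load / target)  # round up: never under-provision
    return max(min_replicas, min(desired, max_replicas))

# Ten VMs each at 60% utilization carry 6 VMs' worth of load.
# 6 / 0.75 = 8, so the group can shrink to 8 VMs, not 7.
print(recommended_size([0.60] * 10))  # -> 8
# A slightly higher load rounds UP to 9 rather than risking capacity.
print(recommended_size([0.61] * 10))  # -> 9
```

Shrinking from 10 to 8 here matches the lecture's example: removing a ninth VM would push utilization past the target, so the conservative choice stops at 8.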

Google Cloud Computing Foundation Course
Sowmya Kannan
Google Cloud

Lecture-21
Exploring PaaS with App Engine

(Refer Slide Time: 00:09)

Next, you will explore how App Engine can run applications without you having to manage infrastructure. App Engine allows you to build highly scalable applications on a fully managed serverless platform. App Engine is ideal if time to market is highly valuable to you and you want to be able to focus on writing code without ever having to touch a server, cluster, or infrastructure. It's also ideal if you don't want to worry about a pager going off because of 5xx errors. App Engine allows you to have highly available apps without a complex architecture.

(Refer Slide Time: 00:48)

As a fully managed environment, App Engine is a perfect example of a computing platform provided as a service.

(Refer Slide Time: 00:09)

App Engine can save organizations time and cost in software application development by eliminating the need to buy, build, and operate computer hardware and other infrastructure. This includes no server management and no need to configure deployments, which allows engineering teams to focus on creating high-value applications instead of no-value operations work. You can quickly build and deploy applications using a range of popular programming languages, like Java, PHP, Node.js, Python, C#, .NET, Ruby, and Go, or you can bring your own language runtime and frameworks.

App Engine allows you to manage resources from the command line, debug source code in production, and run API backends easily using industry-leading tools such as the Cloud SDK, Cloud Source Repositories, IntelliJ IDEA, Visual Studio, and PowerShell. App Engine also automatically scales depending on the application traffic and consumes resources only when code is running. This allows cost to be kept to a minimum.

(Refer Slide Time: 02:21)

You can run your applications in App Engine using the standard or the flexible environment. You can also choose to use both environments simultaneously and allow your services to take advantage of each environment's individual benefits. The standard environment offers fully managed infrastructure for your application that can scale down to zero when not in use, which means you stop paying when the service is idle. However, your applications must conform to the sandboxed environment of App Engine standard.

Only specific versions of a few runtimes are supported. You cannot sign in to the system to make changes, you cannot write to a persistent disk, and the configuration of the environment is limited. App Engine flexible runs your application in a Docker container environment. You can use any HTTP-based runtime. The virtual machines are exposed, allowing you to log in to them and write to persistent disks. However, the system will not scale to zero.

You'll still pay for the service even if users aren't using the application. Because VM instances in the flexible environment are Compute Engine virtual machines, there are far more options for infrastructure customization. You're also able to take advantage of a wide array of CPU and memory configurations. In summary, if you just need high-performance managed infrastructure and can conform to strict runtime limitations, then App Engine standard is a great option.

If you need to use custom runtimes, or if you need a less rigid environment but still want to leverage a platform-as-a-service, then App Engine flexible would be a more suitable option.
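As a concrete illustration of the standard environment's "scale to zero" behavior, a minimal `app.yaml` deployment descriptor might look like the sketch below. The runtime name and the scaling bounds are example values, not something prescribed by the lecture.

```yaml
# Illustrative app.yaml for an App Engine standard service.
runtime: python312        # one of the supported standard runtimes
automatic_scaling:
  min_instances: 0        # standard can scale down to zero when idle
  max_instances: 10       # upper bound on replicas under load
```

Deploying is then a single command (`gcloud app deploy`); the platform handles provisioning, scaling, and load balancing from there.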

(Refer Slide Time: 04:26)

The frontend is often critical to the user experience. To ensure consistent performance, a built-in load balancer will distribute traffic to multiple frontends and scale the frontend as necessary. The backend is for more intensive processing. This separation of function allows each part to scale as needed. Note that App Engine services are modular, and this example shows a single service. More complex architectures are possible.

(Refer Slide Time: 05:06)

When using App Engine, you also have multiple options to store application data, including caching through App Engine Memcache; Cloud Storage, for objects up to 5 terabytes in size; Cloud Datastore, for persistent, low-latency storage for serving data to applications; Cloud SQL, a relational database backed by persistent disks that can be greater than one terabyte in size; and Cloud Bigtable, a NoSQL database for heavy read/write workloads and analytics.

(Refer Slide Time: 05:44)

App Engine's automatic scaling allows you to meet any demand, and its load balancing distributes compute resources across single or multiple regions close to your users to meet high-availability requirements.

(Refer Slide Time: 06:03)

App Engine allows you to easily host different versions of your app, which includes creating development, test, staging, and production environments. Stackdriver gives you powerful application diagnostics to debug and monitor the health and performance of your app.

(Refer Slide Time: 06:22)

And you can leverage robust security tools like Cloud Security Scanner. These services are provided with high availability and guaranteed redundancy.

Google Cloud Computing Foundation Course
Sowmya Kannan
Google Cloud

Lecture-22
Event Driven Programs with Cloud Functions

Cloud Functions is serverless code that runs in response to certain events. In this topic, you will learn how Cloud Functions works.

(Refer Slide Time: 00:12)

Developer agility comes from building systems composed of small, independent units of functionality focused on doing one thing well. Cloud Functions lets you build and deploy services at the level of a single function, not at the level of entire application containers or VMs. Cloud Functions is ideal if you need to connect and extend cloud services and want to automate with event-driven functions that respond to cloud events.

It is also ideal if you want to use open and familiar Node.js, Python, or Go without the need to manage a server or runtime environment.

(Refer Slide Time: 01:02)

A cloud function provides a connective layer of logic that lets you write code to connect and extend cloud services. You can listen and respond to a file upload to Cloud Storage, a log change, an incoming message on a Cloud Pub/Sub topic, and so on. Cloud functions have access to the Google service account credential and are therefore seamlessly authenticated with the majority of GCP services, such as Cloud Datastore, Cloud Spanner, the Cloud Translation API, and the Cloud Vision API.

Cloud events are things that happen in your cloud environment. These might be changes to data in a database, files added to a storage system, or a new virtual machine instance being created. Events occur whether or not you choose to respond to them. You create a response to an event with a trigger. A trigger is a declaration of interest in a certain event or set of events. Binding a function to a trigger allows you to capture and act on those events.

Cloud Functions removes the work of managing servers, configuring software, updating frameworks, and patching operating systems. The software and infrastructure are fully managed, so you just add code. Furthermore, the provisioning of resources happens automatically in response to events. This means that a function can scale from a few invocations a day to many millions of invocations without any additional work on your part.

Events happen all the time within a system: file uploads to Cloud Storage, changes to database records, requests to HTTP endpoints, and so on. You write code that runs in response to those events, and Cloud Functions runs it while automatically managing any underlying infrastructure. Cloud functions connect and extend cloud services with code, so you can treat them as building blocks and adjust them as your needs change. You can also extend your application using a broad ecosystem of third-party services and APIs.

(Refer Slide Time: 03:41)

A cloud service emits some kind of event. This can be a Pub/Sub message, a change to a Cloud Storage object, or a webhook, for example. The event kicks off a cloud function, which can be written in Node.js, Python, or Go. The function can invoke other services and write back the results. Building infrastructure is not required when leveraging Cloud Functions.
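This flow can be sketched with a Python background function. The `bucket` and `name` fields shown are attributes of the Cloud Storage event payload; the function name and its body are illustrative assumptions, not part of the lecture.

```python
def on_file_uploaded(event, context):
    """Sketch of a background Cloud Function bound to a Cloud Storage
    trigger. `event` is the event payload (for Cloud Storage it carries
    the object's `bucket` and `name`); `context` holds event metadata."""
    message = f"Processing gs://{event['bucket']}/{event['name']}"
    print(message)
    # ...here the function could invoke other services and write back results...
    return message

# Locally, the handler is just a function you can exercise with a sample payload:
print(on_file_uploaded({"bucket": "uploads", "name": "photo.jpg"}, None))
```

Because the trigger binding lives outside the code, the same function could be redeployed against a different event source without changing its logic.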

Google Cloud Computing Foundation Course
Sowmya Kannan
Google Cloud

Lecture-23
Containerizing and Orchestrating Apps with GKE

(Refer Slide Time: 00:06)

In this final topic, you will learn how to leverage Google Kubernetes Engine. You've already discovered the spectrum between infrastructure-as-a-service and platform-as-a-service, and you have learned about Compute Engine, which is the infrastructure-as-a-service offering of GCP, with access to servers, file systems, and networking. Now you will see an introduction to containers and GKE, which is a hybrid that conceptually sits between the two.

It offers the managed infrastructure of infrastructure-as-a-service with the developer orientation of platform-as-a-service. GKE is ideal for those who have been challenged when deploying or maintaining a fleet of VMs and have determined that containers are the solution.

(Refer Slide Time: 00:56)

It's also ideal when organizations have containerized their workloads, need a system on which to run and manage them, and do not have dependencies on kernel changes or on a specific non-Linux operating system. With GKE, there is no need to ever touch a server or infrastructure. So, how does containerization work?

(Refer Slide Time: 01:26)

Infrastructure-as-a-service allows you to share compute resources with other developers by virtualizing the hardware using virtual machines. Each developer can deploy their own operating system, access the hardware, and build their applications in a self-contained environment with access to their own runtimes and libraries, as well as their own partitions of RAM, file systems, networking interfaces, and so on. You have your tools of choice on your own configurable system.

So you can install your favorite runtime, web server, database, or middleware; configure the underlying system resources such as disk space, disk I/O, or networking; and build as you like. But this flexibility comes at a cost. The smallest unit of compute is an app with its VM: the guest OS may be large, even gigabytes in size, and take minutes to boot. As demand for your application increases, you have to copy an entire VM and boot the guest OS for each instance of your app, which can be slow and costly.

(Refer Slide Time: 02:44)

A platform-as-a-service provides hosted services and an environment that can scale workloads independently. All you do is write your code in self-contained workloads that use these services and include any dependent libraries. Workloads do not need to represent entire applications. They are easier to decouple because they're not tied to the underlying hardware, operating system, or much of the software stack that you used to manage.

(Refer Slide Time: 03:24)

As demand for your app increases, the platform scales your app seamlessly and independently by workload and infrastructure. It scales rapidly and encourages you to build your applications as decoupled microservices that run more efficiently, but you would not be able to fine-tune the underlying architecture to save cost.

(Refer Slide Time: 03:52)

That is where containers come in. The idea of a container is to give you the independent scalability of workloads, as in a platform-as-a-service, and an abstraction layer over the operating system and hardware, as in an infrastructure-as-a-service. A container requires only a few system calls to create, and it starts as quickly as a process. All you need on each host is an OS kernel that supports containers and a container runtime.

In a sense, you're virtualizing the operating system. It scales like platform-as-a-service but gives you nearly the same flexibility as infrastructure-as-a-service. Containers provide an abstraction layer over the hardware and operating system: an invisible box with configurable access to isolated partitions of the file system, RAM, and networking, as well as a fast startup with only a few system calls.

(Refer Slide Time: 04:58)

Using a common host configuration, you can deploy hundreds of containers on a group of servers. If you want to scale, for example, a web server, you can do so in seconds and deploy any number of containers, depending on the size of your workload, on a single host or a group of hosts. You will likely want to build your applications using lots of containers, each performing its own function, like microservices.

If you build them this way and connect them with network connections, you can make them modular, deploy them easily, and scale them independently across a group of hosts. The hosts can scale up and down and start and stop containers as demand for your app changes or as hosts fail. With a cluster, you can connect containers using network connections, build code modularly, deploy easily, and scale containers and hosts independently for maximum efficiency and savings.

Kubernetes is an open-source container orchestration tool you can use to simplify the management of containerized environments. You can install Kubernetes on a group of your own managed servers, or run it as a hosted service in GCP on a cluster of managed Compute Engine instances called Google Kubernetes Engine. Kubernetes makes it easy to orchestrate many containers on many hosts, scale them as microservices, and deploy rollouts and rollbacks.

Kubernetes was built by Google to run applications at scale. Kubernetes lets you install the system on local servers or in the cloud, manage container networking and storage, deploy rollouts and rollbacks, and monitor and manage container and host health.
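In Kubernetes, the desired state described above is written declaratively. A minimal Deployment manifest, sketched below with example names and an arbitrary public image, asks the cluster to keep three replicas of a web container running; Kubernetes starts, stops, and replaces containers to match it.

```yaml
# Illustrative Kubernetes Deployment: 3 replicas of one web container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # example name
spec:
  replicas: 3          # Kubernetes keeps exactly this many pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # any container image would do here
        ports:
        - containerPort: 80
```

Scaling the web tier "in seconds", as described earlier, is then just a matter of changing `replicas` (or running `kubectl scale`), and rollouts/rollbacks operate on new versions of the same manifest.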

(Refer Slide Time: 07:03)

Just like shipping containers, software containers make it easier for teams to package, manage, and ship their code. They write software applications that run in a container. The container provides the operating system needed to run the application, and the container will run on any container platform. This can save a lot of time and cost compared to running servers or virtual machines. Just as a virtual machine imitates a computer, a container imitates an operating system.

Everything at Google runs on containers: Gmail, Web Search, Maps, MapReduce, batch processes, Google File System, Colossus; even Cloud Functions run in containers. Google launches over 2 billion containers per week. Docker is the tool that puts an application and everything it needs into a container. Once the application is in a container, it can be moved anywhere that will run Docker containers: any laptop, server, or cloud provider.

This portability makes code easier to produce, manage, troubleshoot, and update. For service providers, containers make it easy to develop code that can be ported to the customer and back. Kubernetes is an open-source container orchestration tool for managing a cluster of Docker Linux containers as a single system. It can be run in the cloud and in on-premises environments. It is inspired and informed by Google's experiences and internal systems.

(Refer Slide Time: 08:58)

GKE is a managed environment for deploying containerized apps. It brings Google's latest innovations in developer productivity, resource efficiency, automated operations, and open-source flexibility to accelerate your time to market. GKE is a powerful cluster manager and orchestration system for running Docker containers in Google Cloud, and it manages containers automatically based on specifications such as CPU and memory.

It is built on the open-source Kubernetes system, making it easy for users to orchestrate container clusters or groups of containers. Because it is built on the open-source Kubernetes system, it provides customers the flexibility to take advantage of on-premises, hybrid, or public cloud infrastructure.

Google Cloud Computing Foundation Course
Sowmya Kannan
Google Cloud

Lecture-24
Summary

(Refer Slide Time: 00:11)

That concludes the module "Use GCP to Build Your Apps." Here is a reminder of what you have learned. You began by learning that there are four different compute options in the cloud to choose from: Compute Engine, App Engine, Cloud Functions, and Google Kubernetes Engine. You also found out that Compute Engine, which is infrastructure-as-a-service, delivers virtual machines via Google's data centers and global fiber network.

Next, you saw how autoscaling controls managed instance groups, and you learned more about how App Engine is a service that allows users to focus on writing code and not on infrastructure.

(Refer Slide Time: 00:49)

You discovered more about Cloud Functions, a serverless option that connects cloud services
with event-driven functions that respond to cloud events. And finally, you found out that Google
Kubernetes Engine, otherwise known as GKE, is a managed environment for deploying
containerized apps.
