
Making Scalable Web Apps with AWS and Serverless Technology

A Thesis submitted in partial fulfilment of the requirements for


The award of the Degree of
Integrated Master of Science
In
Mathematics and Computing
By
Aman Kumar Singh
(IMH/10012/16)

DEPARTMENT OF MATHEMATICS
BIRLA INSTITUTE OF TECHNOLOGY
MESRA-835215, RANCHI
Declaration
I certify that

a. The work contained in the thesis is original and has been done by myself under
the general supervision of my supervisor(s).

b. The work has not been submitted to any other Institute for any degree or
diploma.

c. I have followed the guidelines provided by the Institute in writing the thesis.

d. I have conformed to the norms and guidelines given in the Ethical Code of
Conduct of the Institute

e. Whenever I have used materials (data, theoretical analysis, and text) from other
sources, I have given due credit to them by citing them in the text of the thesis and
giving their details in the references.

f. Whenever I have quoted written materials from other sources, I have put them
under quotation marks and given due credit to the sources by citing them and
giving required details in the references.

Name: Aman Kumar Singh        Roll Number: IMH/10012/16


APPROVAL OF THE GUIDE(S)

Recommended that the thesis entitled “Making Scalable Web Apps with AWS and Serverless Technology”, prepared by Mr. Aman Kumar Singh under my/our supervision and guidance, be accepted as fulfilling this part of the requirements for the degree of Integrated Master of Science in Mathematics and Computing.

To the best of my/our knowledge, the contents of this thesis did not form a basis for
the award of any previous degree to anyone else.

Date : 02/05/2021

Ravichandra Chatripu
Engineering Manager,
Borderfree Financial,
Hyderabad.
Department Of Mathematics

Birla Institute of Technology, Mesra, Ranchi -835215

THESIS APPROVAL CERTIFICATE

This is to certify that the work embodied in this thesis, entitled “Making Scalable Web Apps with AWS and Serverless Technology”, carried out by Mr. Aman Kumar Singh (Roll No. IMH/10012/16), is approved for the degree of Integrated Master of Science in Mathematics and Computing of Birla Institute of Technology, Mesra, Ranchi.

Internal Examiner(s)                External Examiner(s)


Name & Signature Name & Signature

Dr. S. Padhi
Head of Department
Department of Mathematics
Birla Institute of Technology, Mesra, Ranchi 835215
Abstract

Borderfree Financial is an agile, innovative company focused on software development, payment solutions, and creating the best shopping experience. The team works with the latest technology stack to solve modern and complex problems.

This document focuses on the work I did with the live team at Borderfree Financial, on a project titled “Making scalable web apps using serverless technology”.

The project belongs to the web development domain.


ACKNOWLEDGEMENT

I take immense pleasure in thanking Dr. Vandana Guleria, Department of Mathematics, Birla Institute of Technology, Mesra, for enabling me to carry out this internship and for their constant guidance and support. I would also like to thank the entire Department of Mathematics, BIT Mesra, for the immense support shown during my internship tenure.

I would like to express my heartfelt gratitude to Borderfree Financial for giving me the opportunity to work under their guidance and helping me gain an immensely enriching professional experience. My heartfelt gratitude goes to my mentor Ravichandra, Engineering Manager, Borderfree Financial, for guiding me during my internship, and to the entire team of Borderfree Financial for their valuable guidance, assistance, and precious time.

Finally, yet importantly, I would like to express my heartfelt thanks to my beloved parents for their blessings, and to my friends and all those who supported me, directly or indirectly, for their help.

Aman Kumar Singh IMH/10012/2016


Table of Contents

I. Serverless stack
II. Lambda functions
III. Why serverless stack
IV. Web apps
V. Optimizing web apps
Chapter 1 : Serverless stack

I. Introduction
This chapter provides a detailed overview of the serverless stack. We've traditionally developed and deployed web applications in which we have some control over the HTTP requests made to our server. The server hosts our application, and we are in charge of provisioning and managing its resources. There are a couple of problems with this:
1. Even while we are not serving any requests, we are responsible for keeping the server running, which is costly.
2. We are in charge of the server's uptime and maintenance, as well as all of its tooling.
3. We are in charge of keeping the server up to date with security patches.
4. If our user base grows, we'll need to scale up our server, and conversely, scale it down when usage drops.

This can be overwhelming for small businesses and independent developers. It ends up diverting our attention away from the more critical task at hand: developing and maintaining the application itself. In bigger organisations this is normally handled by an infrastructure team, so it is not the responsibility of the individual developer.

However, the processes used to facilitate this tend to slow down delivery, and you can't just build the software without collaborating with the infrastructure team to get it up and running. As developers, we've been searching for a solution to these issues, and serverless is the answer.

II. Serverless Computing


Serverless computing (or serverless for short) is an execution paradigm in which the cloud provider (AWS, Azure, or Google Cloud) is in charge of dynamically allocating the resources needed to execute a piece of code, and you are charged only for the resources actually used to execute it. The code is usually executed in stateless containers that are triggered by a range of events such as HTTP requests, database events, queuing services, monitoring alerts, file uploads, and scheduled events (cron jobs), among others. A function is the most common unit of code sent to the cloud provider for execution, which is why serverless computing is also known as "Functions as a Service" or "FaaS." The big cloud providers all have FaaS offerings:
1. AWS
2. Azure
3. Google Cloud
Servers are still involved in the execution of our functions, even though serverless abstracts the underlying infrastructure away from the developer.

There are a few things we need to be mindful of because our code will be run as separate functions.

III. Microservices
The most significant shift we'll face when we move to a serverless environment is that our application will need to be built in the form of functions. You may be accustomed to implementing the application as a single monolithic Rails or Express app. However, in the serverless environment, you'll almost always be pushed towards a microservices-based architecture. You could avoid this by running the entire application as a single monolithic function and doing the routing yourself, but this is not advised, since it is preferable to keep your functions small. We'll go over this in more detail later.
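To make the contrast concrete, here is an illustrative sketch (the handler names and the `{ path, method }` event shape are invented for this example, not taken from any real API): one monolithic function that does its own routing versus separate per-function handlers.

```javascript
// Monolithic style: one function routes every request itself.
function monolithicHandler(event) {
  if (event.path === "/notes" && event.method === "POST") return createNote(event);
  if (event.path === "/notes" && event.method === "GET") return listNotes(event);
  return { statusCode: 404, body: "Not found" };
}

// Microservice style: each operation is its own small function,
// and the routing is left to the platform (e.g. an HTTP gateway).
function createNote(event) {
  return { statusCode: 201, body: "created" };
}

function listNotes(event) {
  return { statusCode: 200, body: "[]" };
}
```

In the serverless style, `createNote` and `listNotes` would each be deployed as an independent function, so they can scale and fail independently.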

IV. Stateless Functions


Typically, the functions operate within ephemeral, (almost) stateless containers. This means you won't be able to run code that keeps running long after an event has completed, or that serves a request using a previous execution context. You must essentially assume that every time your function is invoked, it runs in a new container.
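A small illustration of this caveat (plain Node.js, no AWS specifics): module-level state may happen to survive while a container stays warm, but a fresh container starts from zero, so it must never be treated as reliable.

```javascript
// Module-level state: survives only as long as this container instance does.
let invocationCount = 0;

function handler(event) {
  invocationCount += 1;
  // Within one warm container this counts up (1, 2, 3, ...), but a new
  // container restarts from 1. Treat it as a cache at best, never as the
  // source of truth; durable state belongs in a database or object store.
  return { count: invocationCount };
}
```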

V. Cold Starts
There is some latency, since the functions are executed inside a container that is spun up on demand to respond to an event. This is called a cold start. After your function has finished executing, the container may be kept around for a while; if another event arrives during this period, the container responds much more quickly, which is known as a warm start.
The duration of a cold start depends on the cloud provider's implementation. On AWS Lambda it can take anywhere from a few hundred milliseconds to a few seconds. The runtime (or language) used, the size of the function (as a package), and, of course, the cloud provider in question all influence this.
Cold starts have improved markedly over time as cloud providers have become better at optimising for lower latencies.

Aside from optimising your functions, you can use simple tricks to keep them warm, such as using a scheduled mechanism to invoke your function every few minutes.
The Serverless Framework, which we'll be using in this tutorial, comes with a few plugins to keep your functions warm.
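The keep-warm trick can be sketched like this (the `{ source: "warmer" }` event shape is an assumption for illustration; real warming plugins use their own markers):

```javascript
// Handler that short-circuits scheduled warming pings so they stay cheap.
function handler(event, context, callback) {
  if (event && event.source === "warmer") {
    // A scheduled ping: return immediately without doing any real work.
    return callback(null, { warmed: true });
  }
  // ...normal request handling would go here...
  callback(null, { statusCode: 200, body: "handled" });
}
```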

Now that we have a better understanding of serverless computing, let's take a closer look at what a Lambda function is and how our code will be run.
Chapter 2 : Lambda Function
I. Introduction
In this chapter we will discuss the Lambda function, its invocation, and its pricing. AWS is an acronym for Amazon Web Services, and AWS Lambda is its serverless computing service. We'll be using Lambda to build our serverless application. Although we won't be dealing with the inner workings of Lambda, it's useful to have a general understanding of how the functions are run.
II. Features and specs of Lambda functions
Lambda supports the following runtimes:
● Node.js 14.x, 12.x and 10.x
● Java 11 and 8
● Python 3.8, 3.7, 3.6 and 2.7
● .NET Core 2.1, 2.2, 3.0 and 3.1
● Go 1.x
● Ruby 2.7 and 2.5
● Rust (via a custom runtime)

Each function runs inside a container based on a 64-bit Amazon Linux AMI, and the execution environment provides:
● Memory: 128 MB - 3008 MB, in 64 MB increments
● Ephemeral disk space: 512 MB
● Max execution duration: 900 seconds
● Compressed package size: 50 MB
● Uncompressed package size: 250 MB
III. Lambda function (Node.js version)

A Node.js Lambda function looks like this:


In this case, the name of our Lambda function is myHandler. The function is triggered by an event, and the event object holds all of the information about that event; for an HTTP request, that would be the details of the specific request. The context object provides information about the runtime environment in which our Lambda function is running. Once we've finished our work, we simply call the callback function with the result (or the error), and AWS responds to the HTTP request with it.

Lambda functions must be packaged and delivered to Amazon Web Services. Typically, this entails compressing the function and all of its dependencies and uploading the archive to an S3 bucket, then notifying AWS that you intend to use this package when a specific event occurs.

Chapter 3 : Why serverless stack


I. Introduction
To answer this question, let's think in terms of:
A. Maintenance cost
B. Cost of usage
C. Scalability

II. Benefits
The most significant advantage is that you only have to think about your code. Since there are no servers to maintain, the upkeep is minimal: you don't need to constantly monitor a server to make sure it's up to date or running properly. You just deal with your own application code.

The key reason why serverless applications are less expensive is that you are only charged per request. As a result, you are not paying for your application while it is not in use. Let's take a look at how much it would cost to run our note-taking application. Say we have 1000 daily active users who each make 20 API requests a day and store about 10 MB of data on S3. Here's a rough estimate of our expenses.
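The request volume behind that estimate is easy to check. (The free-tier figure of one million requests per month is AWS Lambda's historical allowance; verify against current AWS pricing before relying on it.)

```javascript
// Monthly request volume for 1000 daily active users, 20 requests each.
const users = 1000;
const requestsPerUserPerDay = 20;
const daysPerMonth = 30;

const requestsPerMonth = users * requestsPerUserPerDay * daysPerMonth;
// = 600,000 requests per month

// AWS Lambda's free tier has historically included 1,000,000 requests
// per month, so this workload's request charges would round to zero.
const freeTierRequests = 1000000;
const billableRequests = Math.max(0, requestsPerMonth - freeTierRequests);
```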
Chapter 4 : Web apps
I. Introduction
In contrast to desktop software that runs locally on the user's operating system (OS), a web application (or web app) is application software that runs on a web server. The user accesses web applications through a web browser over an active network connection. These systems are built on a client-server architecture, in which the user ("client") receives resources from an off-site server hosted by a third party. Webmail, online retail stores, online banking, and online auctions are examples of widely used web apps.

II. Web App architecture


There are three types of web app architecture:
1. Single-page architecture
2. Microservices
3. Serverless

III. Single-page architecture (SPA)

In the era of minimalism, single-page web apps are becoming increasingly prevalent. The most common applications load only the content elements that are needed.

This provides a more engaging user interface, allowing for a richer interaction between the single-page web client and the user.

IV. Microservices
A microservices architecture allows developers to ship software faster and more efficiently, with each service implementing a single, simple piece of functionality.

There is also more freedom in choosing a technology stack, since the various components can be built in different programming languages.

V. Serverless architecture

This allows applications to run independently of infrastructure-related operations: by relying on third-party infrastructure, developers no longer need to manage backend servers.

VI. Basic serverless architecture using AWS services

This architecture leverages all of the topics mentioned above; let's discuss the ones we haven't covered yet.
1. API Gateway
2. Cognito
3. DynamoDB

API Gateway
This is the service used to invoke a serverless Lambda function through an HTTP request.

Amazon DynamoDB
This is a NoSQL database provided by Amazon.

Amazon Cognito
This is an authentication service provided by Amazon.
Chapter 5 : Optimizing web apps
I. Introduction
In this chapter we will discuss optimizing our web apps based on the metrics defined by Google.

When we talk about web optimization, people by default talk about Lighthouse performance and its results.

(This is a Lighthouse report)

Let's discuss the performance of the web app. For this, Google defined six metrics:
1. First Contentful Paint (FCP)
2. Speed Index
3. Largest Contentful Paint (LCP)
4. Time to Interactive (TTI)
5. Total Blocking Time (TBT)
6. Cumulative Layout Shift (CLS)
II. First Contentful Paint (FCP)
Google defines it as follows: “First Contentful Paint marks the time at which the first text or image is painted.”

FCP measures how long it takes the browser to render the first piece of DOM content after a user navigates to your page. Images, non-white <canvas> elements, and SVGs on your page count as DOM content; anything inside an iframe does not.

Based on data from the HTTP Archive, the FCP score compares your page's FCP time to the FCP times of real websites. Sites in the ninety-ninth percentile, for example, achieve FCP in around 1.5 seconds; if your website's FCP is 1.5 seconds, your FCP score is 99.

To improve FCP: font load time is a problem that is especially relevant for FCP. If you want to speed up your font loading, read the post "Ensure text remains visible during webfont load".
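FCP can also be read programmatically: browsers expose a 'paint' performance entry named 'first-contentful-paint'. The helper below is pure, so it runs anywhere; the commented-out PerformanceObserver usage is what you would run in a browser.

```javascript
// Pick the FCP timestamp out of a list of performance entries.
function firstContentfulPaint(entries) {
  const entry = entries.find((e) => e.name === "first-contentful-paint");
  return entry ? entry.startTime : undefined;
}

// Browser usage:
// new PerformanceObserver((list) => {
//   console.log("FCP (ms):", firstContentfulPaint(list.getEntries()));
// }).observe({ type: "paint", buffered: true });
```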

III. Speed Index


The Speed Index is one of six metrics tracked in the Lighthouse report's
Performance portion. Each metric measures a different feature of page load
time.
The Speed Index is a metric that calculates how rapidly content is physically
viewed on a website as it loads. Lighthouse begins by recording a video of the
page loading in the browser and calculating the visual progression between
frames.The Speed Index score is then generated by Lighthouse using the
Speedline Node.js module.

Determination of speed index score based on data from the HTTP Archive,
the Speed Index score is a measure of your page's speed index to the speed
indices of actual websites.

To improve it, one can take the following measures. Although anything you do to improve page load speed will improve your Speed Index score, fixing the problems identified by these diagnostic audits should have an especially significant impact:

1. Minimize main-thread work.
2. Reduce JavaScript execution time.
3. Ensure text remains visible during webfont load.
IV. Largest Contentful Paint (LCP)
One of the metrics tracked in the Performance section of the Lighthouse report is the Largest Contentful Paint (LCP). Each metric measures a different aspect of page load time.

Lighthouse measures it in seconds.

LCP reports when the largest content element in the viewport is rendered to the user. This is a rough estimate of when the page's main content is visible to users. For further information about how LCP is measured, see "Largest Contentful Paint defined".

The table below shows how to interpret your LCP score.

Chrome 77 was the first browser to support LCP; Lighthouse extracts LCP data from Chrome's tracing capability.

To provide a good user experience, sites should aim for a Largest Contentful Paint of 2.5 seconds or less. A reasonable threshold to measure against, to make sure you're meeting this target for the majority of your users, is the 75th percentile of page loads, segmented across mobile and desktop devices.

Which elements are considered?

The categories of elements considered for Largest Contentful Paint are currently defined in the Largest Contentful Paint API:
1. <img> elements
2. <image> elements within an <svg> element
3. <video> elements (the poster image is used)
4. Elements with a background image loaded via the url() function (as opposed to a CSS gradient)
5. Text nodes or other inline-level text elements that are children of block-level elements.
It's worth noting that limiting the elements to this small collection was done on purpose, to keep things straightforward at first. Additional elements (e.g. <svg>, <video>) may be included in the future as further analysis is done.

How to improve this:

1. Defer JavaScript.
2. Lazy-load images and other assets.
3. Follow responsive image practices (use srcset).
4. Use preconnect and preload.
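LCP can also be observed in the browser: the 'largest-contentful-paint' entry type emits a new candidate whenever a larger element renders, and the last candidate observed is the final LCP. The helper below is pure, so it can run anywhere.

```javascript
// The final LCP is the last candidate the browser reported.
function finalLcpCandidate(entries) {
  return entries.length ? entries[entries.length - 1] : undefined;
}

// Browser usage:
// new PerformanceObserver((list) => {
//   const lcp = finalLcpCandidate(list.getEntries());
//   if (lcp) console.log("LCP candidate (ms):", lcp.startTime);
// }).observe({ type: "largest-contentful-paint", buffered: true });
```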

V. Time to Interactive (TTI)
Time to Interactive (TTI) is one of six metrics tracked in the Performance section of the Lighthouse report. Each metric measures a different aspect of page load time.
TTI is important to measure because some websites prioritise page visibility over interactivity. This can lead to a frustrating user experience: the page appears to be ready, but nothing happens when the user attempts to interact with it.
Lighthouse displays TTI in seconds.

TTI measures the following:

TTI measures the time it takes for a page to become fully interactive. A page is considered fully interactive when:
“The page displays useful content, as measured by the First Contentful Paint, event handlers are registered for most visible page elements, and the page responds to user interactions within 50 milliseconds.”

Based on data from the HTTP Archive, the TTI score compares the TTI of your page to the TTI of real websites. Sites in the ninety-ninth percentile, for example, achieve TTI in around 2.2 seconds; if your website's TTI is 2.2 seconds, your TTI score is 99.

This table explains how to read the TTI score:

To improve TTI, we can take the following measures:

Deferring or eliminating unnecessary JavaScript work is one change that can have a significant impact on TTI. Look for ways to make your JavaScript more efficient; in particular, consider reducing JavaScript payloads with code splitting and the PRPL pattern. Optimizing third-party JavaScript also yields significant improvements for many pages.

Additional opportunities to minimise JavaScript work can be found in these two diagnostic audits:
1. Minimize main-thread work.
2. Reduce JavaScript execution time.

VI. Total blocking time


Total Blocking Time (TBT) is one of the metrics recorded in the Performance section of the Lighthouse report. Each metric measures a different aspect of page load time.

TBT is shown in milliseconds in the Lighthouse report:


TBT measures the total amount of time during which a page is unable to respond to user input such as mouse clicks, screen taps, or keyboard presses. The total is computed by adding up the blocking portion of all long tasks between First Contentful Paint and Time to Interactive. A long task is one that runs for more than 50 milliseconds, and its blocking portion is everything beyond the first 50 ms. For example, if Lighthouse detects a task that takes 70 milliseconds to complete, its blocking time is 20 milliseconds.
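The arithmetic above (a 70 ms task contributes 20 ms of blocking time) can be written out directly:

```javascript
// A task is "long" past 50 ms; only the excess over 50 ms counts as blocking.
const BLOCKING_THRESHOLD_MS = 50;

// Sum the blocking portion of each long task between FCP and TTI.
function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs
    .map((duration) => Math.max(0, duration - BLOCKING_THRESHOLD_MS))
    .reduce((sum, blocking) => sum + blocking, 0);
}
```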

How Lighthouse measures this:

The TBT score compares your page's TBT time to the TBT times of the top 10,000 sites when loaded on mobile devices. The top-site data includes 404 pages.

This table explains how to read the TBT score:


To improve this, we can take the following measures:
See "What's the Deal with My Long Tasks?" to learn how to use the Performance panel of Chrome DevTools to diagnose the root cause of long tasks.
The most common causes of long tasks are:
1. Unnecessary JavaScript loading, parsing, or execution. When you examine the code in the Performance panel, you might notice that the main thread is doing work that isn't required to load the page. Reducing JavaScript payloads with code splitting, removing unused code, or loading third-party JavaScript more efficiently should improve your TBT score.
2. Inefficient JavaScript statements. For example, suppose that after reviewing the code in the Performance panel you find a call to document.querySelectorAll('a') that returns 2000 nodes. Refactoring your code to use a more specific selector that only returns 10 nodes should improve your TBT score.

VII. Cumulative layout shift


CLS measures the sum of all individual layout shift scores for every unexpected layout shift that occurs during the entire lifetime of the page.

A layout shift happens any time a visible element changes its position from one rendered frame to the next.
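Per web.dev, each individual layout shift score is the product of an impact fraction (how much of the viewport the unstable elements affect) and a distance fraction (how far they moved), and CLS as described above is the running sum of those scores:

```javascript
// score = impact fraction * distance fraction, both in [0, 1].
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// CLS: sum of the scores of all unexpected layout shifts on the page.
function cumulativeLayoutShift(shiftScores) {
  return shiftScores.reduce((sum, score) => sum + score, 0);
}
```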
Conclusion
The main aim of this document is to provide a brief description of the modules that were needed to create a scalable, large-scale, full-stack application.

Finally, I was able to build a few scalable full-stack web apps using these principles; please go and see them:
1. https://amankumarsingh01.github.io/#/
2. http://react-music.s3-website.us-east-2.amazonaws.com/
References

1. web.dev
2. amankumarsingh01.github.io/blogs
3. Wikipedia
4. AWS documentation
