Final Thesis
DEPARTMENT OF MATHEMATICS
BIRLA INSTITUTE OF TECHNOLOGY
MESRA-835215, RANCHI
Declaration
I certify that
a. The work contained in the thesis is original and has been done by myself under
the general supervision of my supervisor(s).
b. The work has not been submitted to any other Institute for any degree or
diploma.
c. I have followed the guidelines provided by the Institute in writing the thesis.
d. I have conformed to the norms and guidelines given in the Ethical Code of
Conduct of the Institute.
e. Whenever I have used materials (data, theoretical analysis, and text) from other
sources, I have given due credit to them by citing them in the text of the thesis and
giving their details in the references.
f. Whenever I have quoted written materials from other sources, I have put them
under quotation marks and given due credit to the sources by citing them and
giving required details in the references.
Recommended that the thesis entitled “Making scalable web apps with AWS and
serverless technology” prepared by Mr. Vasudev Harshal under my/our supervision
and guidance be accepted as fulfilling this part of the requirements for the
degree of Integrated Master of Science in Mathematics and Computing.
To the best of my/our knowledge, the contents of this thesis did not form a basis for
the award of any previous degree to anyone else.
Date : 02/05/2021
Ravichandra Chatripu
Engineering Manager,
Borderfree Financial,
Hyderabad.
Department of Mathematics
This is to certify that the work embodied in this thesis entitled “Making
scalable web apps with AWS and serverless technology” carried out by Mr.
Aman Kumar Singh (Roll No. IMH/10012/16) is approved for the degree of
Integrated Master of Science in Mathematics and Computing of Birla Institute of
Technology, Mesra, Ranchi.
Dr. S. Padhi
Head of Department
Department of Mathematics
Birla Institute of Technology,
Mesra, Ranchi 835215
Abstract
This document is mainly focused on the work I did with the live team at
Borderfree Financial, on the project titled “Making scalable web apps using
serverless technology”.
Chapter 1 : Serverless
I. Introduction
This chapter provides a detailed overview of the serverless stack. We've
traditionally developed and deployed web applications in which we have some
control over the HTTP requests made to our server. The server hosts our
application, and we are in charge of provisioning and managing its
resources. There are a couple of problems with this:
1. While we are not serving any requests, we still have to keep the server
running, which is costly.
2. We are in charge of the server's uptime and upkeep, as well as all of its
tooling.
3. We're also in charge of making sure the server is up to date with
security patches.
4. If our user base grows, we'll need to scale our server up, and scale it
back down when we don't have as much usage. However, the procedures used
to facilitate this slow down development.
And you can't just write the software without collaborating with the
infrastructure team to get it up and running. As developers, we've been
searching for a solution to these issues, and serverless is the answer.
There are a few things we need to be mindful of, though, because our code
will be run as separate functions.
III. Microservices
The most significant shift we face when we move to a serverless
environment is that our application needs to be built in the form of
functions. You may be accustomed to implementing the programme as a
single monolithic Rails or Express app. However, in the serverless
environment, you'll almost always end up with a microservices-based
architecture. You can avoid this by running the entire programme as a
monolithic function and doing the routing yourself. However, this is not
advised, since it is preferable to keep your functions small. We'll go over
this in more detail later.
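To make that concrete, here is a minimal sketch (not taken from the project) of a single monolithic function doing its own routing; the routes and payloads are invented for illustration.

```javascript
// One "monolithic" function handling every route itself. In a
// microservices setup, each branch below would be its own small
// function wired to its own API Gateway route.
function monolithicHandler(event) {
  switch (event.path) {
    case "/notes":
      return { statusCode: 200, body: JSON.stringify({ notes: [] }) };
    case "/users":
      return { statusCode: 200, body: JSON.stringify({ users: [] }) };
    default:
      return { statusCode: 404, body: JSON.stringify({ error: "Not found" }) };
  }
}
```

Splitting each branch into its own function keeps deployments small and lets each piece scale independently, which is why the monolithic approach is discouraged above.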
V. Cold Starts
There is some lag because the functions are executed inside a container that is
spun up on demand to respond to an event. This situation is called a cold
start. After your function has finished executing, the container may be
left around for a while. If another event occurs during this period, the
function responds much more quickly, which is known as a warm start.
The duration of a cold start is determined by the cloud provider's
implementation. It can take anywhere from a few hundred milliseconds to a
few seconds on AWS Lambda. The runtime (or language) used, the size of the
function (as a package), and, of course, the cloud provider in question all
influence this.
As cloud services have become better at optimising for lower latency periods,
cold starts have vastly improved over time.
Aside from optimising your functions, you can use basic tricks to keep them
warm, such as using a scheduled mechanism to call your function
every few minutes.
The Serverless Framework, which we'll be using in this tutorial, comes with a
few plugins to keep your functions warm.
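As a sketch of that scheduled-warming trick, the handler below returns early when it sees a warm-up ping; the `warmer` flag on the event is a made-up convention between the scheduler and the function, not a built-in AWS field.

```javascript
// A handler that distinguishes warm-up pings from real invocations.
const handler = async (event) => {
  // Scheduled warm-up events carry a (hypothetical) "warmer" flag;
  // return immediately so the business logic doesn't run.
  if (event && event.warmer) {
    return { warmed: true };
  }
  // Normal invocation: do the real work.
  return { statusCode: 200, body: "Hello from a warm function" };
};

module.exports = { handler };
```

Because the warm-up path returns almost instantly, it keeps the container alive without paying for the full business logic on every ping.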
Now that we have a better understanding of serverless computing, let's take a
closer look at what a Lambda function is and how our code is run.
Chapter 2 : Lambda Function
I. Introduction
In this chapter we will discuss the Lambda function, its invocation, and its
pricing. AWS is an acronym for Amazon Web Services, and AWS Lambda is its
serverless computing service. We'll be using Lambda to build our serverless
application in this chapter. Although we won't be dealing with the inner
workings of Lambda, it's useful to have a general understanding of how the
functions are run.
II. Features and specs of lambda function
Lambda supports the following runtimes:
● Node.js 14.x, 12.x and 10.x
● Java 11 and 8
● Python 3.8, 3.7, 3.6 and 2.7
● .NET Core 2.1, 2.2, 3.0 and 3.1
● Go 1.x
● Ruby 2.7 and 2.5
● Rust (via a custom runtime)
Each function runs inside a container with a 64-bit Amazon Linux AMI, and
the execution environment has:
● Memory: 128MB - 3008MB, in 64 MB increments
● Ephemeral disk space: 512MB
● Max execution duration: 900 seconds
● Compressed package size: 50MB
● Uncompressed package size: 250MB
III. Lambda function (Nodejs version)
II. Benefits
The most significant advantage is that you only have to think about your code.
Since there are no servers to maintain, the upkeep is minimal. You don't need
to constantly monitor the server to make sure it's up to date or that it's
running properly. You only have to deal with your own application code.
The key reason serverless applications are less expensive is that you
are only charged per request. As a result, you are not paying for your
application while it is not in use. Let's take a look at how much it would
cost to run our note-taking application. Say we have 1000 regular active
users who make 20 API requests a day and store about 10MB of data on S3.
Here's a rough estimate of our expenses.
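The original cost figure did not survive in this copy, but the arithmetic can be sketched. The prices below are assumptions (roughly AWS's 2021 list prices: Lambda at $0.20 per million requests after a 1M-request free tier, S3 storage at about $0.023 per GB-month) and vary by region.

```javascript
// Back-of-the-envelope monthly cost for the note-taking app above.
const users = 1000;
const requestsPerUserPerDay = 20;
const monthlyRequests = users * requestsPerUserPerDay * 30; // 600,000

// Assumed Lambda pricing: first 1M requests/month free, then $0.20/1M.
const billableRequests = Math.max(0, monthlyRequests - 1000000);
const lambdaCost = billableRequests * (0.20 / 1000000);

// Assumed S3 pricing: ~$0.023 per GB-month; 1000 users * 10MB ≈ 10 GB.
const storageGb = (users * 10) / 1024;
const s3Cost = storageGb * 0.023;

console.log({ monthlyRequests, lambdaCost, s3Cost });
```

Under these assumptions, compute stays inside the free tier and storage costs around twenty cents a month, which is the point this section is making.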
Chapter 4 : Web apps
I. Introduction
In contrast to desktop software applications that run locally on the
operating system (OS) of the user, a web application (or web app) is
application software that runs on a web server. The user accesses web
applications through a web browser with an active network connection. These
systems are built on a client–server architecture, in which the user
("client") receives resources from an off-site server hosted by a third
party. Webmail, online retail stores, online banking, and online auctions
are all examples of widely used web apps.
A single-page web client provides a more engaging user interface, allowing
for a richer interaction between the application and the user.
III. Microservices
The microservices architecture allows developers to deliver software faster
and more reliably by building each service around a single, simple piece of
functionality.
This architecture leverages all of the above-mentioned topics; let's discuss
the ones we missed:
1. API Gateway
2. Cognito
3. DynamoDB
API Gateway
Amazon API Gateway is a service for invoking a serverless Lambda function
through an HTTP request.
Amazon DynamoDB
DynamoDB is a NoSQL database service provided by Amazon.
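As a sketch of how a function might write to DynamoDB, the helper below only builds the parameters a `put` call (e.g. the AWS SDK's `DocumentClient.put`) would take; the table and attribute names are invented, and the SDK call itself is omitted so the snippet stays self-contained.

```javascript
// Build parameters for writing a note to a hypothetical "notes" table
// keyed by userId (partition key) and noteId (sort key).
function buildPutParams(userId, noteId, content) {
  return {
    TableName: "notes",
    Item: {
      userId,
      noteId,
      content,
      createdAt: Date.now(), // epoch milliseconds
    },
  };
}
```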
Amazon Cognito
Cognito is an authentication service provided by Amazon.
Chapter 5 : Optimizing web apps
I. Introduction
In this chapter we will discuss optimizing our web apps based on the metrics
defined by Google.
When we talk about web optimization, people by default mean Lighthouse
performance scores and results.
Let's discuss the performance of the web app. For this, Google defined 6 metrics:
1. First Contentful Paint (FCP)
2. Speed Index
3. Largest Contentful Paint (LCP)
4. Time to Interactive (TTI)
5. Total Blocking Time (TBT)
6. Cumulative Layout Shift (CLS)
II. First Contentful Paint (FCP)
This is how Google defines it: “First Contentful Paint marks the time at which
the first text or image is painted.”
FCP measures how long it takes the browser to render the first piece of DOM
content after a user navigates to your page. Images, non-white
<canvas> elements, and SVGs on your page are considered DOM content;
anything inside an iframe is not.
Based on data from the HTTP Archive, the FCP score compares your page's FCP
time to the FCP times of real websites. Sites in the ninety-ninth
percentile, for example, render FCP in around 1.5 seconds; if your website's
FCP is 1.5 seconds, your FCP score is 99.
To improve FCP, note that font load time is a problem that is
especially relevant for FCP. If you want to speed up your font loading, read
the post “Ensure text remains visible during webfont load”.
III. Speed Index
Based on data from the HTTP Archive, the Speed Index score compares your
page's Speed Index to the Speed Indices of real websites.
IV. Largest Contentful Paint (LCP)
LCP measures when the largest content element in the viewport is rendered to
the user. This is a rough estimate of when the page's main content is
visible to users. For further information about how LCP is measured, see
“Largest Contentful Paint defined”.
Browser support for LCP first shipped in Chrome 77. Lighthouse extracts LCP
data from Chrome's tracing facility.
Sites should aim for a Largest Contentful Paint of 2.5 seconds or less to
provide a good user experience. The 75th percentile of page loads, segmented
across mobile and desktop devices, is a reasonable threshold to measure to
ensure you're meeting this target for the majority of your users.
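The 75th-percentile threshold mentioned above can be computed directly from collected samples; here is a minimal sketch using the nearest-rank method, with made-up LCP samples in seconds.

```javascript
// Nearest-rank percentile: sort the samples and take the value at
// rank ceil(p/100 * n), so at least p% of samples are at or below it.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Made-up LCP samples (seconds) from field data.
const lcpSamples = [1.2, 3.1, 2.0, 2.4, 1.8, 2.6, 1.5, 2.2];
const p75 = percentile(lcpSamples, 75);
// A p75 of 2.5s or less would meet the "good" LCP target above.
```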
V. Time to Interactive (TTI)
Time to Interactive (TTI) is one of six metrics tracked in the Performance
section of the Lighthouse report. Each metric captures a different aspect of
page load time.
TTI is important to measure because some websites prioritise page visibility
over interactivity. This can lead to a frustrating user experience: the page
appears to be ready, but nothing happens when the user attempts to
interact with it.
Lighthouse displays TTI in seconds (see the image above).
Based on data from the HTTP Archive, the TTI score compares the TTI of
your page to the TTI of real websites. Sites in the ninety-ninth percentile,
for example, reach TTI in around 2.2 seconds; if your website's TTI is
2.2 seconds, your TTI score is 99.
VI. Cumulative Layout Shift (CLS)
A layout shift occurs when a visible element moves from one rendered frame
to the next. (For more information about how individual layout shift scores
are determined, see the section below.)
Conclusion
The main aim of this document is to provide a brief description of the modules
that were needed to create a scalable, full-stack, large-scale application.
Finally, I was able to build a few full-stack scalable web apps using these
principles; please go and see them:
1. https://amankumarsingh01.github.io/#/
2. http://react-music.s3-website.us-east-2.amazonaws.com/
References
1. web.dev
2. amankumarsingh01.github.io/blogs
3. Wikipedia
4. AWS documentation