
CLOUD VIRTUAL INTERNSHIP

A Summer Internship-II report submitted to


JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY ANANTAPUR
ANANTHAPURAMU

in partial fulfilment of the requirements for


the award of the degree of
BACHELOR OF TECHNOLOGY
in
ELECTRONICS AND COMMUNICATION ENGINEERING

Submitted by

SHAIK MUBEENA (Roll no: 21125A0427)

Under the Esteemed Supervision of


Ms. D. Harika, M.Tech.,
Assistant Professor, Department of ECE

Department of Electronics and Communication Engineering
SREE VIDYANIKETHAN ENGINEERING
COLLEGE
(AUTONOMOUS)
Sree Sainath Nagar, A. Rangampet, Tirupati – 517102
(2021-2024)

SREE VIDYANIKETHAN ENGINEERING
COLLEGE
(AUTONOMOUS)
Sree Sainath Nagar, A. Rangampet - 517 102

VISION
To be one of the Nation’s premier Engineering Colleges by achieving the highest order
of excellence in Teaching and Research.

MISSION
• To foster intellectual curiosity, pursuit and dissemination of knowledge.
• To explore students’ potential through academic freedom and integrity.
• To promote technical mastery and nurture skilled professionals to face competition in an increasingly complex world.

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

Vision
To be a center of excellence in Electronics and Communication Engineering through
teaching and research producing high quality engineering professionals with values and
ethics to meet local and global demands.

Mission
• The Department of Electronics and Communication Engineering is established with the cause of creating competent professionals to work in multicultural and multidisciplinary environments.
• Imparting knowledge through a contemporary curriculum and striving for the development of students with diverse backgrounds.
• Inspiring students and faculty members for innovative research through constant interaction with research organizations and industry to meet societal needs.
• Developing skills for enhancing the employability of students through a comprehensive training process.
• Imbibing ethics and values in students for effective engineering practice.

B. Tech. (Electronics and Communication Engineering)
Program Educational Objectives

After a few years of graduation, the graduates of B.Tech. (ECE) will have:
PEO1. Enrolled in or completed higher education in the core or allied areas of electronics and communication engineering or management.
PEO2. Established a successful entrepreneurial or technical career in the core or allied areas of electronics and communication engineering.
PEO3. Continued to learn and to adapt to the world of constantly evolving technologies in the core or allied areas of electronics and communication engineering.

Program Outcomes
On successful completion of the Program, the graduates of B.Tech. (ECE) Program will be able
to:
PO1 Engineering knowledge: Apply the knowledge of mathematics, science,
engineering fundamentals, and an engineering specialization to the solution
of complex engineering problems.
PO2 Problem analysis: Identify, formulate, research literature, and analyze
complex engineering problems reaching substantiated conclusions using first
principles of mathematics, natural sciences, and engineering sciences.
PO3 Design/development of solutions: Design solutions for complex
engineering problems and design system components or processes that meet
the specified needs with appropriate consideration for the public health and
safety, and the cultural, societal, and environmental considerations.
PO4 Conduct investigations of complex problems: Use research-based
knowledge and research methods including design of experiments, analysis
and interpretation of data, and synthesis of the information to provide valid
conclusions.
PO5 Modern tool usage: Create, select, and apply appropriate techniques,
resources, and modern engineering and IT tools including prediction and
modelling to complex engineering activities with an understanding of the
limitations.
PO6 The engineer and society: Apply reasoning informed by the contextual
knowledge to assess societal, health, safety, legal and cultural issues and the
consequent responsibilities relevant to the professional engineering practice.
PO7 Environment and sustainability: Understand the impact of the
professional engineering solutions in societal and environmental contexts,
and demonstrate the knowledge of, and need for sustainable development.
PO8 Ethics: Apply ethical principles and commit to professional ethics and
responsibilities and norms of the engineering practice.
PO9 Individual and team work: Function effectively as an individual, and
as a member or leader in diverse teams, and in multidisciplinary settings.
PO10 Communication: Communicate effectively on complex engineering
activities with the engineering community and with society at large, such as,
being able to comprehend and write effective reports and design
documentation, make effective presentations, and give and receive clear
instructions.

PO11 Project management and finance: Demonstrate knowledge and
understanding of the engineering and management principles and apply these
to one’s own work, as a member and leader in a team, to manage projects and
in multidisciplinary environments.
PO12 Lifelong learning: Recognize the need for, and have the preparation and
ability to engage in independent and life-long learning in the broadest context
of technological change.

Program Specific Outcomes

On successful completion of the Program, the graduates of B. Tech. (ECE) will be able to
PSO1. Design and develop customized electronic circuits for domestic and industrial
applications.
PSO2. Use specific tools and techniques to design, analyze and synthesize wired and
wireless communication systems for desired specifications and applications.
PSO3. Apply suitable methods and algorithms to process and extract information
from signals and images in Radar, Satellite, Fiber optic and Mobile
communication systems.

Department of Electronics and Communication Engineering

SREE VIDYANIKETHAN ENGINEERING COLLEGE
(AUTONOMOUS)
Sree Sainath Nagar, A. Rangampet - 517102.

Certificate
This is to certify that the Summer Internship-II Report entitled Cloud Virtual
Internship is the bonafide work done and submitted by SHAIK MUBEENA
(21125A0427) in the Department of Electronics and Communication Engineering, Sree
Vidyanikethan Engineering College, A. Rangampet, and is submitted to Jawaharlal
Nehru Technological University Anantapur, Ananthapuramu in partial fulfilment of
the requirements for the award of the degree of Bachelor of Technology in Electronics and
Communication Engineering during 2021-2024.

Supervisor Head of the Department


Ms. D. Harika, M.Tech., Dr.V.Vijaya Kishore, M.Tech., Ph.D.,
Assistant Professor, Dept. of ECE Professor & Head of Dept. of ECE

Senior Faculty Member

ACKNOWLEDGEMENTS

I am deeply indebted to my supervisor, Ms. D. HARIKA, M.Tech., Assistant Professor,
Department of Electronics and Communication Engineering, for her valuable
guidance, constructive criticism, and keen interest evinced throughout the course of
this internship work. I am really fortunate to be associated with such an
advising and helpful guide in every possible way, at all stages, for the successful
completion of this work.

I express my deep sense of gratitude to Dr. V. VIJAYA KISHORE, M.Tech., Ph.D.,
Professor and Head of the Department of Electronics and Communication
Engineering, for his valuable guidance and constant encouragement given
during this Summer Internship-II and the course.

I express my gratitude to our Principal, Dr. B. M. SATISH, Ph.D., for supporting the
successful completion of this Summer Internship-II work by providing the necessary
facilities. I am also pleased to express my heartfelt thanks to the faculty of the
Department of ECE, Sree Vidyanikethan Engineering College, for their moral
support and good wishes.

Finally, I express my sincere thanks to my friends and all those who
guided, inspired, and helped me in the completion of this Summer Internship-II.

SHAIK MUBEENA - 21125A0427

CONTENTS
Acknowledgements
List of Figures
List of Tables
Abstract
Chapter 1: CLOUD CONCEPTS OVERVIEW
1.1 Introduction to cloud computing
1.2 Traditional computing model
1.3 Cloud service models
1.4 Cloud computing deployment models
1.5 Advantages of cloud computing
Chapter 2: NETWORKING AND CONTENT DELIVERY
2.1 Networking basics
2.2 IPv4 and IPv6 addresses
2.3 Open Systems Interconnection (OSI) model
2.4 Amazon VPC
2.5 VPC networking
Chapter 3: CORE AWS SERVICES
3.1 Amazon Elastic Block Store
3.2 Amazon Simple Storage Service
3.3 Amazon Elastic File System
3.4 Amazon S3 Glacier
Chapter 4: DATABASES
4.1 Amazon Relational Database Service
4.2 Amazon RDS
4.3 Amazon RDS DB instances
4.4 Amazon RDS: Deployment type and data transfer
4.5 Amazon RDS: Storage
Chapter 5: CASE STUDY
CONCLUSION
FUTURE SCOPE OF AWS CERTIFICATION
REFERENCES

LIST OF FIGURES
Figure 1.1 Infrastructure as hardware
Figure 1.2 Infrastructure as software
Figure 1.3 Cloud service models
Figure 1.4 Cloud computing deployment models
Figure 2.1 Basic networking
Figure 2.2.1 IPv4 address
Figure 2.2.2 IPv6 address
Figure 2.3 OSI model
Figure 2.5.1 Internet gateway
Figure 2.5.2 NAT gateway
Figure 2.5.3 VPC sharing
Figure 2.5.4 VPC peering
Figure 2.5.5 Site-to-Site VPN
Figure 2.5.6 Direct Connect
Figure 2.5.7 VPC endpoints
Figure 2.5.8 Transit Gateway
Figure 3.1 Amazon EBS
Figure 3.2 Amazon S3
Figure 3.3 Amazon EFS
Figure 3.4 Amazon S3 Glacier
Figure 4.1 Amazon RDS
Figure 4.2 Amazon RDS DB instances
Figure 4.3 Amazon RDS
Figure 4.4 Future scope of AWS certification

ABSTRACT
A cloud foundation, often referred to as a "cloud infrastructure" or "cloud architecture,"
forms the fundamental framework upon which cloud computing services and resources
are built and deployed. It encompasses the underlying hardware, software, networking,
and security components that enable the delivery of cloud services to users and
organizations. The concept of a cloud foundation is essential in the context of cloud
computing, as it provides the necessary structure and environment for running
applications, storing data, and managing resources in the cloud.

In practice, cloud foundations can be built using various cloud service providers,
such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP),
and others, each offering their own set of tools and services to create and manage the
underlying infrastructure. Whether an organization is migrating to the cloud, building a
new cloud-native application, or expanding its existing cloud presence, a solid cloud
foundation is the starting point for a successful cloud strategy. It provides the necessary
infrastructure to support the digital transformation and agility required in today's
technology landscape.

CHAPTER 1

CLOUD CONCEPTS OVERVIEW

1.1 Introduction to cloud computing

Cloud computing is the on-demand delivery of compute power, database, storage,
applications, and other IT resources via the internet with pay-as-you-go pricing. These
resources run on server computers that are located in large data centers in different
locations around the world. When you use a cloud service provider like AWS, that
service provider owns the computers that you are using. These resources can be used
together like building blocks to build solutions that help meet business goals and satisfy
technology requirements.

1.2 Traditional computing model:

In this model, organizations maintain and operate their own physical hardware,
software, and data centers within their premises.

• Infrastructure as hardware

In the traditional computing model, infrastructure is thought of as hardware. Hardware
solutions are physical, which means they require space, staff, physical security, planning,
and capital expenditure.

With a hardware solution, you must ask if there is enough resource capacity or sufficient
storage to meet your needs, and you provision capacity by guessing theoretical maximum
peaks. If you don’t meet your projected maximum peak, then you pay for expensive
resources that stay idle. If you exceed your projected maximum peak, then you don’t
have sufficient capacity to meet your needs. And if your needs change, then you must
spend the time, effort, and money required to implement a new solution.

Figure 1.1 Infrastructure as hardware

• Infrastructure as software:

By contrast, cloud computing enables you to think of your infrastructure as software.
Software solutions are flexible. You can select the cloud services that best match your
needs, provision and terminate those resources on demand, and pay for what you use.
You can elastically scale resources up and down in an automated fashion. With the cloud
computing model, you can treat resources as temporary and disposable. The flexibility
that cloud computing offers enables businesses to implement new solutions quickly and
with low upfront costs. Compared to hardware solutions, software solutions can change
much more quickly, easily, and cost-effectively.

Figure 1.2 Infrastructure as software

1.3 Cloud service models:

Figure 1.3 Cloud service models

Infrastructure as a service (IaaS):

Services in this category are the basic building blocks for cloud IT and typically
provide you with access to networking features, computers (virtual or on dedicated
hardware), and data storage space. IaaS provides you with the highest level of flexibility
and management control over your IT resources[1].

Platform as a service (PaaS):

Services in this category reduce the need for you to manage the underlying
infrastructure (usually hardware and operating systems) and enable you to focus on the
deployment and management of your applications.

Software as a service (SaaS):

Services in this category provide you with a completed product that the service
provider runs and manages. In most cases, software as a service refers to end-user
applications. With a SaaS offering, you do not have to think about how the service is
maintained or how the underlying infrastructure is managed. You need to think only
about how you plan to use that particular piece of software[3].

1.4 Cloud computing deployment models:

Cloud: A cloud-based application is fully deployed in the cloud, and all parts of the
application run in the cloud. Applications in the cloud have either been created in the
cloud or have been migrated from an existing infrastructure to take advantage of the
benefits of cloud computing (see https://aws.amazon.com/what-is-cloud-computing/).

Hybrid: A hybrid deployment is a way to connect infrastructure and applications between
cloud-based resources and existing resources that are not located in the cloud. The most
common method of hybrid deployment is between the cloud and existing on-premises
infrastructure.

Figure 1.4 Cloud computing deployment models

On-premises: Deploying resources on-premises, using virtualization and resource
management tools, is sometimes called private cloud. While on-premises deployment
does not provide many of the benefits of cloud computing, it is sometimes sought for its
ability to provide dedicated resources[2].

1.5 Advantages of cloud computing:

Cloud computing offers a wide range of advantages for individuals, businesses, and
organizations. Some of the key benefits of cloud computing include:

Cost-Efficiency: Cloud services eliminate the need for purchasing and maintaining
on-premises hardware and software, reducing upfront costs.

Pay-as-You-Go: Many cloud providers offer a pay-as-you-go pricing model, allowing
users to pay only for the resources they consume.

Scalability (On-Demand Resources): Cloud resources can be quickly scaled up or down
to meet changing needs, ensuring efficient resource utilization.

Auto-Scaling: Many cloud platforms offer auto-scaling, which automatically adjusts
resources based on traffic or demand.

Flexibility and Accessibility (Anytime, Anywhere Access): Cloud services can be
accessed from anywhere with an internet connection, facilitating remote work and global
collaboration.

Cross-Platform Compatibility: Cloud applications and data can be accessed from
various devices and operating systems.

Redundancy: Cloud providers typically maintain multiple data centers and have
redundancy built into their infrastructure to ensure high availability.

Professional Expertise: Cloud providers invest heavily in security measures,
including encryption, identity and access management, and threat detection.

Regular Updates: Security updates and patches are applied by the cloud provider,
reducing the burden on users.

Data Versioning: Some cloud services allow users to access previous versions of
their files, aiding in data recovery.

File Sharing: Cloud storage services allow easy sharing of files and documents with
colleagues, clients, or partners.

Development Tools: Cloud platforms provide tools and services for app
development, machine learning, and IoT, fostering innovation.

Automatic Updates and Maintenance: Cloud providers handle routine updates and
maintenance tasks, reducing the administrative burden on users.

Cloud computing has transformed the way businesses and individuals leverage
technology, offering the benefits of cost savings, agility, and innovation. However, it's
important to select the right cloud services and configurations to best align with specific
needs and security requirements.

CHAPTER 2

NETWORKING AND CONTENT DELIVERY

2.1 Networking basics:

A computer network is two or more client machines that are connected together to
share resources. A network can be logically partitioned into subnets. Networking requires
a networking device (such as a router or switch) to connect all the clients together and
enable communication between them.

Figure 2.1 Basic Networking

Each client machine in a network has a unique Internet Protocol (IP) address that
identifies it. An IP address is a numerical label in decimal format. Machines convert that
decimal number to a binary format.

In this example, the IP address is 192.0.2.0. Each of the four dot (.)-separated
numbers of the IP address represents 8 bits, so each of the four numbers can be anything
from 0 to 255. The combined total of the four numbers for an IP address is 32 bits in
binary format[5].
2.2 IPv4 and IPv6 addresses:

A 32-bit IP address is called an IPv4 address.

Figure 2.2.1 IPv4 address

IPv6 addresses, which are 128 bits, are also available. IPv6 addresses can accommodate
more user devices. An IPv6 address is composed of eight groups of four letters and
numbers that are separated by colons (:). In this example, the IPv6 address is
2600:1f18:22ba:8c00:ba86:a05e:a5ba:00FF. Each of the eight colon-separated groups of
the IPv6 address represents 16 bits in hexadecimal number format. That means each of
the eight groups can be anything from 0 to FFFF. The combined total of the eight groups
for an IPv6 address is 128 bits in binary format[3].

Figure 2.2.2 IPv6 address

2.3 Open Systems Interconnection (OSI) model:

The Open Systems Interconnection (OSI) model is a conceptual model that is used to
explain how data travels over a network. It consists of seven layers and shows the
common protocols and addresses that are used to send data at each layer. For example,
switches work at layer 2 (the data link layer), and routers work at layer 3 (the
network layer). The OSI model can also be used to understand how communication takes
place in a virtual private cloud (VPC), which you will learn about in the next section[3].

Figure 2.3 OSI model

2.4 Amazon VPC

Amazon Virtual Private Cloud (Amazon VPC) is a service that lets you provision a
logically isolated section of the AWS Cloud (called a virtual private cloud, or VPC)
where you can launch your AWS resources. Amazon VPC gives you control over your
virtual networking resources, including the selection of your own IP address range, the
creation of subnets, and the configuration of route tables and network gateways. You can
use both IPv4 and IPv6 in your VPC for secure access to resources and applications.
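As a minimal sketch (not taken from the internship material), the following AWS CLI calls create a VPC with an IPv4 CIDR block and one subnet in a single Availability Zone; the CIDR ranges, Region, and the vpc- ID are placeholder assumptions:

# Create a VPC with a /16 IPv4 CIDR block
aws ec2 create-vpc --cidr-block 10.0.0.0/16 --region us-east-1

# Create a subnet (a /24 slice of the VPC range) in one Availability Zone
aws ec2 create-subnet --vpc-id vpc-0abc1234 --cidr-block 10.0.1.0/24 --availability-zone us-east-1a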

VPCs and subnets

VPCs:

• Logically isolated from other VPCs

• Dedicated to your AWS account

• Belong to a single AWS Region and can span multiple Availability Zones

Subnets:

• Range of IP addresses that divide a VPC

• Belong to a single Availability Zone

• Classified as public or private

2.5 VPC networking:

I. Internet gateway

An internet gateway is a scalable, redundant, and highly available VPC component
that allows communication between instances in your VPC and the internet. An internet
gateway serves two purposes: to provide a target in your VPC route tables for
internet-routable traffic, and to perform network address translation for instances that
have been assigned public IPv4 addresses[6].
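A hedged CLI sketch of those two steps, creating and attaching an internet gateway and then adding a default route for internet-bound traffic; the igw-, vpc-, and rtb- IDs are placeholders:

# Create an internet gateway and attach it to the VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0abc1234 --vpc-id vpc-0abc1234

# Route all internet-bound traffic (0.0.0.0/0) from the public route table to the internet gateway
aws ec2 create-route --route-table-id rtb-0abc1234 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0abc1234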

Figure 2.5.1 Internet gateway

II. Network address translation (NAT) gateway

To create a NAT gateway, you must specify the public subnet in which the NAT
gateway should reside. You must also specify an Elastic IP address to associate with the
NAT gateway when you create it. After you create a NAT gateway, you must update the
route table that is associated with one or more of your private subnets to point internet-
bound traffic to the NAT gateway. Thus, instances in your private subnets can
communicate with the internet.
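The same steps can be expressed as an illustrative CLI sketch; the Elastic IP allocation ID, subnet ID, and route table ID shown are placeholder assumptions:

# Allocate an Elastic IP address for the NAT gateway
aws ec2 allocate-address --domain vpc

# Create the NAT gateway in a public subnet, using the Elastic IP allocation ID returned above
aws ec2 create-nat-gateway --subnet-id subnet-0aaa1111 --allocation-id eipalloc-0abc1234

# Point internet-bound traffic from the private subnet's route table at the NAT gateway
aws ec2 create-route --route-table-id rtb-0bbb2222 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0abc1234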

Figure 2.5.2 NAT gateway

III. VPC sharing:

In this model, the account that owns the VPC (owner) shares one or more subnets with
other accounts (participants) that belong to the same organization in AWS Organizations.
After a subnet is shared, the participants can view, create, modify, and delete their
application resources in the subnets that are shared with them. Participants cannot view,
modify, or delete resources that belong to other participants or the VPC owner[7].
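VPC sharing is configured through AWS Resource Access Manager (AWS RAM). A brief sketch for illustration, where the subnet ARN, account IDs, and share name are placeholder assumptions:

# Share one subnet of the owner's VPC with a participant account in the same organization
aws ram create-resource-share --name shared-app-subnet --resource-arns arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0abc1234 --principals 444455556666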

Figure 2.5.3 VPC sharing

IV. VPC peering

A VPC peering connection is a networking connection between two VPCs that enables
you to route traffic between them privately. Instances in either VPC can communicate
with each other as if they are within the same network. You can create a VPC peering
connection between your own VPCs, with a VPC in another AWS account, or with a
VPC in a different AWS Region[4].
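An illustrative CLI sketch of establishing a peering connection between two VPCs (requester side, accepter side, then routing); the VPC, route table, and peering-connection IDs, and the peer CIDR, are placeholders:

# Request a peering connection from one VPC to another (optionally in another account or Region)
aws ec2 create-vpc-peering-connection --vpc-id vpc-0aaa1111 --peer-vpc-id vpc-0bbb2222

# Accept the request on the peer side
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0ccc3333

# Add a route in each VPC's route table that points the other VPC's CIDR at the peering connection
aws ec2 create-route --route-table-id rtb-0aaa1111 --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-0ccc3333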

Figure 2.5.4 VPC peering


V. AWS Site-to-Site VPN

1. Create a new virtual gateway device (called a virtual private network (VPN) gateway)
and attach it to your VPC.

2. Define the configuration of the VPN device or the customer gateway. The customer
gateway is not a device but an AWS resource that provides information to AWS about
your VPN device.

3. Create a custom route table to point corporate data center-bound traffic to the VPN
gateway. You also must update security group rules. (You will learn about security
groups in the next section.)

4. Establish an AWS Site-to-Site VPN (Site-to-Site VPN) connection to link the two
systems together.

5. Configure routing to pass traffic through the connection. (Steps 1, 2, and 4 are sketched with the CLI below.)
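A hedged CLI sketch of steps 1, 2, and 4; the public IP address, BGP ASN, and the vgw-/cgw-/vpc- IDs are placeholder assumptions:

# Step 1: create a virtual private gateway and attach it to the VPC
aws ec2 create-vpn-gateway --type ipsec.1
aws ec2 attach-vpn-gateway --vpn-gateway-id vgw-0abc1234 --vpc-id vpc-0abc1234

# Step 2: describe the on-premises VPN device to AWS as a customer gateway
aws ec2 create-customer-gateway --type ipsec.1 --public-ip 203.0.113.10 --bgp-asn 65000

# Step 4: create the Site-to-Site VPN connection that links the two gateways
aws ec2 create-vpn-connection --type ipsec.1 --customer-gateway-id cgw-0abc1234 --vpn-gateway-id vgw-0abc1234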

Figure 2.5.5 Site-to-Site VPN

VI. AWS Direct Connect:

One of the challenges of network communication is network performance. Performance
can be negatively affected if your data center is located far away from your AWS
Region. For such situations, AWS offers AWS Direct Connect, or DX. AWS Direct
Connect enables you to establish a dedicated, private network connection between your
network and one of the DX locations.

Figure 2.5.6 Direct Connect

VII. VPC endpoints:

A VPC endpoint is a virtual device that enables you to privately connect your VPC to
supported AWS services and VPC endpoint services that are powered by AWS
PrivateLink. Connection to these services does not require an internet gateway, NAT device,
VPN connection, or AWS Direct Connect connection.
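As an illustration, a gateway endpoint for Amazon S3 can be created with a single CLI call; the VPC ID, route table ID, and the Region embedded in the service name are placeholder assumptions:

# Create a gateway VPC endpoint so instances can reach Amazon S3 without an internet gateway
aws ec2 create-vpc-endpoint --vpc-id vpc-0abc1234 --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-0abc1234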

Figure 2.5.7 VPC endpoints

VIII. AWS Transit Gateway:

You can configure your VPCs in several ways and take advantage of numerous
connectivity options and gateways. These options and gateways include AWS Direct
Connect (via DX gateways), NAT gateways, internet gateways, VPC peering, and so on. It is
not uncommon to find AWS customers with hundreds of VPCs distributed across AWS
accounts and Regions to serve multiple lines of business, teams, projects, and so forth.
AWS Transit Gateway acts as a central hub that simplifies this connectivity by attaching
VPCs and on-premises networks to a single gateway.

Figure 2.5.8 Transit Gateway
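A minimal CLI sketch of creating a transit gateway and attaching one VPC to it; the tgw-, vpc-, and subnet- IDs are placeholders:

# Create the transit gateway (the central hub)
aws ec2 create-transit-gateway --description "hub for shared connectivity"

# Attach a VPC to the transit gateway through a subnet in each Availability Zone to be served
aws ec2 create-transit-gateway-vpc-attachment --transit-gateway-id tgw-0abc1234 --vpc-id vpc-0aaa1111 --subnet-ids subnet-0aaa1111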

CHAPTER 3

CORE AWS SERVICES


Cloud storage is typically more reliable, scalable, and secure than traditional on-premises
storage systems. Cloud storage is a critical component of cloud computing because it
holds the information that applications use. Big data analytics, data warehouses, the
Internet of Things (IoT), databases, and backup and archive applications all rely on some
form of data storage architecture.

3.1 Amazon Elastic Block Store (Amazon EBS):

Amazon EBS provides persistent block storage volumes for use with Amazon
EC2 instances. Persistent storage is any data storage device that retains data after power
to that device is shut off. It is also sometimes called non-volatile storage.

Amazon EBS enables you to create individual storage volumes and attach them to an
Amazon EC2 instance:

Figure 3.1 Amazon EBS

• Amazon EBS offers block-level storage.

• Volumes are automatically replicated within their Availability Zone.

• Volumes can be backed up automatically to Amazon S3 through snapshots.

Common use cases include:

• Boot volumes and storage for Amazon Elastic Compute Cloud (Amazon EC2) instances

• Data storage with a file system

• Database hosts

A backup of an Amazon EBS volume is called a snapshot. The first snapshot is called the
baseline snapshot. Any other snapshot after the baseline captures only what is different
from the previous snapshot.
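An illustrative CLI sketch of the volume lifecycle described above (create, attach, snapshot); the Availability Zone, instance ID, and volume ID are placeholder assumptions:

# Create a 20 GiB gp3 volume in the same Availability Zone as the instance
aws ec2 create-volume --availability-zone us-east-1a --size 20 --volume-type gp3

# Attach the volume to an EC2 instance as a secondary block device
aws ec2 attach-volume --volume-id vol-0abc1234 --instance-id i-0abc1234 --device /dev/xvdf

# Back the volume up to Amazon S3 as a snapshot (the first one is the baseline snapshot)
aws ec2 create-snapshot --volume-id vol-0abc1234 --description "baseline snapshot"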

3.2 Amazon Simple Storage Service (Amazon S3):

Companies need the ability to simply and securely collect, store, and analyze their
data on a massive scale. Amazon S3 is object storage that is built to store and retrieve
any amount of data from anywhere: websites and mobile apps, corporate applications,
and data from Internet of Things (IoT) sensors or devices.

Figure 3.2 Amazon S3

Amazon S3 is object-level storage, which means that if you want to change a part
of a file, you must make the change and then re-upload the entire modified file. Amazon
S3 stores data as objects within resources that are called buckets.[7]

By default, none of your data is shared publicly. You can also encrypt your data in
transit and choose to enable server-side encryption on your objects. You can access
Amazon S3 through the web-based AWS Management Console; programmatically
through the API and SDKs; or with third-party solutions, which use the API or the SDKs.
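A short CLI sketch of the basic object workflow; the bucket and file names are placeholders, and default server-side encryption with Amazon S3 managed keys (SSE-S3) is shown as one of the options mentioned above:

# Create a bucket and upload an object
aws s3 mb s3://example-report-bucket-2024
aws s3 cp report.pdf s3://example-report-bucket-2024/report.pdf

# Turn on default server-side encryption (SSE-S3) for the bucket
aws s3api put-bucket-encryption --bucket example-report-bucket-2024 --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'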

3.3 Amazon Elastic File System (Amazon EFS):

Amazon EFS implements storage for EC2 instances that multiple virtual machines
can access at the same time. It is implemented as a shared file system that uses the
Network File System (NFS) protocol.

Figure 3.3 Amazon EFS

Amazon Elastic File System (Amazon EFS) provides simple, scalable, elastic file
storage for use with AWS services and on-premises resources. It offers a simple interface
that enables you to create and configure file systems quickly and easily.

Amazon EFS is built to dynamically scale on demand without disrupting applications;
it will grow and shrink automatically as you add and remove files. It is
designed so that your applications have the storage they need, when they need it.

Amazon EFS provides file storage in the cloud. With Amazon EFS, you can create a
file system, mount the file system on an Amazon EC2 instance, and then read and write
data to and from your file system.
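A hedged sketch of that workflow with the CLI and an NFS mount; the file system ID, subnet, security group, Region in the DNS name, and mount point are placeholder assumptions:

# Create the file system and expose it in a subnet through a mount target
aws efs create-file-system --performance-mode generalPurpose
aws efs create-mount-target --file-system-id fs-0abc1234 --subnet-id subnet-0aaa1111 --security-groups sg-0abc1234

# On the EC2 instance, mount the file system over NFS and use it like a local directory
sudo mount -t nfs4 -o nfsvers=4.1 fs-0abc1234.efs.us-east-1.amazonaws.com:/ /mnt/efs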

3.4 Amazon S3 Glacier:

Amazon S3 Glacier is a secure, durable, and extremely low-cost cloud storage service for
data archiving and long-term backup. When you use Amazon S3 Glacier to archive data,
you can store your data at an extremely low cost (even in comparison to Amazon S3), but
you cannot retrieve your data immediately when you want it. Data that is stored in
Amazon S3 Glacier can take several hours to retrieve, which is why it works well for
archiving.

There are three key Amazon S3 Glacier terms you should be familiar with:

Figure 3.4 Amazon S3 Glacier

• Archive – Any object (such as a photo, video, file, or document) that you store in
Amazon S3 Glacier. It is the base unit of storage in Amazon S3 Glacier. Each archive has
its own unique ID, and it can also have a description.

• Vault – A container for storing archives. When you create a vault, you specify the
vault name and the Region where you want to locate the vault.

• Vault access policy – Determines who can and cannot access the data that is stored in
the vault, and what operations users can and cannot perform. One vault access
permissions policy can be created for each vault to manage access permissions for that
vault. (These terms are illustrated in the CLI sketch below.)
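A brief sketch of how these terms map onto the CLI, where photo-archive is a placeholder vault name and "-" tells the CLI to use the account that owns the credentials:

# Create a vault (the container for archives)
aws glacier create-vault --account-id - --vault-name photo-archive

# Upload an archive into the vault; the response includes the archive's unique ID
aws glacier upload-archive --account-id - --vault-name photo-archive --archive-description "2023 photos" --body photos-2023.zip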

CHAPTER 4

DATABASES

4.1 Amazon Relational Database Service:

When you run your own relational database, you are responsible for several
administrative tasks, such as server maintenance and energy footprint, software
installation and patching, and database backups. You are also responsible for ensuring
high availability, planning for scalability, data security, and operating system (OS)
installation and patching. All these tasks take resources from other items on your to-do
list, and require expertise in several areas.

4.2 Amazon RDS:

Amazon RDS is a managed service that sets up and operates a relational database in the
cloud. To address the challenges of running an unmanaged, standalone relational
database, AWS provides a service that sets up, operates, and scales the relational
database without any ongoing administration.

Figure 4.1 Amazon RDS

Amazon RDS provides cost-efficient and resizable capacity, while automating time-
consuming administrative tasks. Amazon RDS enables you to focus on your application,
so you can give applications the performance, high availability, security, and

compatibility that they need. With Amazon RDS, your primary focus is your data and
optimizing your application.

4.3 Amazon RDS DB instances:

The basic building block of Amazon RDS is the database instance. A database instance is
an isolated database environment that can contain multiple user-created databases. It can
be accessed by using the same tools and applications that you use with a standalone
database instance. The resources in a database instance are determined by its database
instance class, and the type of storage is dictated by the type of disks.

Figure 4.2 Amazon RDS DB instances

Database instances and storage differ in performance characteristics and price,
which enables you to customize your performance and cost to the needs of your database.
When you choose to create a database instance, you must first specify which database
engine to run. Amazon RDS currently supports six database engines: MySQL, Amazon Aurora,
Microsoft SQL Server, PostgreSQL, MariaDB, and Oracle.
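A hedged CLI sketch of launching a small MySQL DB instance; the identifier, credentials, storage size, and instance class are placeholder assumptions:

# Launch a managed MySQL instance with 20 GiB of storage on a small instance class
aws rds create-db-instance --db-instance-identifier demo-mysql --db-instance-class db.t3.micro --engine mysql --master-username admin --master-user-password 'ChangeMe123!' --allocated-storage 20
# Add --multi-az to deploy a standby in a second Availability Zone (see Section 4.4)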

4.4 Amazon RDS: Deployment type and data transfer

Consider the deployment type. You can deploy your DB instance to a single Availability
Zone (which is analogous to a standalone data center) or to multiple Availability Zones
(which is analogous to a secondary data center for enhanced availability and durability).

Storage and I/O charges vary, depending on the number of Availability Zones that you
deploy to.

Figure 4.3 Amazon RDS

Finally, consider data transfer. Inbound data transfer is free, and outbound data
transfer costs are tiered. Depending on the needs of your application, it’s possible to
optimize your costs for Amazon RDS database instances by purchasing Reserved
Instances. To purchase Reserved Instances, you make a low, one-time payment for each
instance that you want to reserve. As a result, you receive a significant discount on the
hourly usage charge for that instance.

4.5 Amazon RDS: Storage

Consider provisioned storage. There is no additional charge for backup storage of up to
100 percent of your provisioned database storage for an active database instance. After
the database instance is terminated, backup storage is billed per GB, per month. Also
consider the amount of backup storage in addition to the provisioned storage amount,
which is billed per GB, per month.

CHAPTER 5

CASE STUDY: SMALL BUSINESS WEBSITE MIGRATION TO THE CLOUD

Background: MYNTRA Crafts, a small handicrafts business, currently hosts its website
on a local server. As the business expands, the owners have decided to migrate the
website to the cloud for improved performance, reliability, and cost-effectiveness.

Objectives:

Website Performance:

Enhance the website's responsiveness and load times for a better user experience.

Reliability:

Improve the overall reliability of the website by leveraging cloud infrastructure.

Cost Efficiency:

Optimize hosting costs while ensuring the scalability needed for potential growth.

Security:

Enhance the security posture of the website by leveraging cloud security features.

Implementation Steps:

Cloud Provider Selection:

1. Choose a cloud provider (e.g., AWS, Azure, Google Cloud) based on simplicity, cost-
effectiveness, and ease of use.

2. Consider services like Amazon S3 for static content hosting or a platform-as-a-service
(PaaS) offering for web hosting.

Website Code and Data Migration:

# Create an S3 bucket for the site (bucket names must be all lowercase)
aws s3api create-bucket --bucket myntracrafts-website --region us-east-1

# Upload the website content
aws s3 sync website s3://myntracrafts-website

# Configure the bucket for static website hosting
aws s3 website s3://myntracrafts-website/ --index-document index.html --error-document error.html

# Verify the website hosting configuration
aws s3api get-bucket-website --bucket myntracrafts-website

# Set up DNS (assuming you are using Amazon Route 53)

Enable SSL (Optional):

If you want to enable SSL, you can use AWS Certificate Manager to create an SSL
certificate and then configure your CloudFront distribution.
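A hedged sketch of requesting a certificate with AWS Certificate Manager (ACM); the domain myntracrafts.example.com is a hypothetical placeholder, and the returned certificate ARN would then be referenced in the CloudFront distribution configuration:

# Request a public TLS certificate (CloudFront requires the certificate in us-east-1)
aws acm request-certificate --domain-name myntracrafts.example.com --validation-method DNS --region us-east-1

# Check the validation status and the DNS record that ACM asks you to create
aws acm describe-certificate --certificate-arn <certificate-arn> --region us-east-1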

Assess the current website code and data structure:

Migrate static content to a cloud storage service and the dynamic parts to a cloud-based
web server.

Domain and DNS Configuration:

Update domain settings to point to the new cloud-based server. Configure DNS settings to
ensure a smooth transition.

Scalability and Reliability:

1. Leverage auto-scaling features to handle traffic spikes.

2. Distribute the website components across multiple Availability Zones for improved
reliability.

Security Implementation:

1. Implement SSL/TLS certificates for secure data transfer.

2. Configure firewalls and security groups to control access to the website.

Monitoring and Alerts:

1. Set up basic monitoring for website performance.

2. Configure alerts for any unusual activities or downtime.

Backup and Recovery:

1. Establish regular backup schedules for website data.

2. Test the backup and recovery process to ensure data integrity.

Outcome:

MYNTRA Crafts successfully migrates its website to the cloud, resulting in improved
performance, reliability, and security. The owners can now easily scale the website based
on business needs, and the cost-effective cloud solution allows them to focus on growing
their business without worrying about infrastructure management.

CONCLUSION
During the internship, I worked through a comprehensive curriculum that extended beyond
the foundational aspects of cloud computing. This immersive program not only covered
various types of cloud computing and their components but also delved into the
intricacies of diverse cloud services, offering a nuanced exploration of different
perspectives within the field. Notably, the coursework provided a detailed examination of
Amazon Web Services (AWS), shedding light on the manipulation, configuration, and
online access of applications. It also emphasized the critical role AWS plays in providing
online data storage, managing infrastructure, and supporting a wide array of applications.
The self-paced structure of the internship, executed in a virtual environment with a
ten-week deadline for the entire course, not only facilitated the acquisition of a deep understanding of
cloud computing basics but also underscored the significance of this technology in real-
world applications. This hands-on approach allowed interns to assimilate theoretical
knowledge into practical scenarios, enhancing our ability to apply cloud computing
concepts effectively. Beyond the theoretical framework, the allocated courses
empowered interns to explore workable solutions independently. The practical challenges
presented during the coursework honed problem-solving skills and fostered a sense of
autonomy. This aspect of the internship was particularly valuable as it encouraged
creative thinking and innovation in addressing complex issues related to cloud
computing. In reflection, the knowledge gained from these courses not only significantly
influenced our approach to cloud computing in subsequent projects but also instilled a
sense of confidence in navigating the intricacies of this dynamic and ever-evolving field.
Overall, the internship experience served as a transformative journey, equipping interns
with both theoretical insights and practical skills essential for a career in cloud
computing.

FUTURE SCOPE OF AWS CERTIFICATION

AWS certifications are among the most in-demand professional certifications in the world,
with strong salary potential. AWS certifications are valued for a candidate's hands-on
experience and best lab practices. As a result, a certification like AWS is often
highly valued in the progress of an IT professional's career.

Figure 4.4 Future scope of AWS certification

REFERENCES
[1]. Google App Engine. http://code.google.com/appengine/.

[2]. Cloud computing for e-governance. White paper, IIIT-Hyderabad, January 2010.
Available online (13 pages).

[3]. Demographics of India. http://en.wikipedia.org/wiki/Demographics_of_India,
April 2010.

[4]. Economy of India. http://en.wikipedia.org/wiki/Economy_of_India, April 2010.

[5]. Michael Armbrust, Armando Fox, Rean Griffith, Anthony D. Joseph, Randy H.
Katz, Andrew Konwinski, Gunho Lee, David A. Patterson, Ariel Rabkin, Ion
Stoica, and Matei Zaharia. Above the Clouds: A Berkeley View of Cloud Computing.

[6]. M. Backus. E-governance in Developing Countries. IICD Research Brief, 1, 2001.

[7]. Jaijit Bhattacharya and Sushant Vashistha. Utility computing-based framework
for e-governance, pages 303–309. ACM, New York, NY, USA, 2008.

