Course File-Unit 3
UNIT-3
Cloud Computing Architecture
Cloud Reference Model
1. Cloud Service Models: This layer defines the types of services offered by cloud providers. The
commonly recognized service models are:
Infrastructure as a Service (IaaS)
Platform as a Service (PaaS)
Software as a Service (SaaS)
Function as a Service (FaaS) or Serverless Computing
2. Cloud Deployment Models: This layer describes how cloud services are deployed and accessed.
Common deployment models include:
Public Cloud: Services are provided over the internet and available to anyone.
Private Cloud: Services are provisioned for a specific organization and may be hosted on-
premises or by a third-party provider.
Hybrid Cloud: A combination of public and private cloud services, allowing data and
applications to be shared between them.
Community Cloud: Infrastructure shared by several organizations with similar concerns
(e.g., security or compliance).
3. Cloud Infrastructure: This layer includes the physical resources and virtualization technologies
that underpin cloud services, such as servers, storage, networking, and virtualization software.
4. Cloud Orchestration and Management: This layer involves the tools and processes used to
manage and orchestrate cloud resources efficiently. It includes functions like provisioning,
monitoring, scaling, and automation.
5. Cloud Security and Compliance: This layer encompasses the security measures and compliance
standards implemented to protect data and ensure regulatory compliance in the cloud environment.
6. Cloud Services and Applications: This layer represents the actual services and applications hosted
on the cloud platform, which could include web applications, databases, analytics tools, etc.
By providing a structured framework, the Cloud Reference Model helps organizations understand
the various components of cloud computing and how they interact, enabling them to make informed
decisions about cloud adoption, deployment, and management.
Swami Keshvanand Institute of Technology, Management & Gramothan,
Ramnagaria, Jagatpura, Jaipur-302017, INDIA
Approved by AICTE, Ministry of HRD, Government of India
Recognized by UGC under Section 2(f) of the UGC Act, 1956
Tel.: +91-0141-5160400 Fax: +91-0141-2759555
E-mail: info@skit.ac.in Web: www.skit.ac.in
Layers of Cloud Computing Architecture
1. Infrastructure Layer: This layer forms the foundation of cloud computing and includes physical
resources such as servers, storage devices, networking equipment, and data centers. It also involves
virtualization technologies that enable the abstraction of hardware resources.
2. Platform Layer: The platform layer provides a runtime environment and development tools for
building and deploying applications. Platform as a Service (PaaS) offerings are common in this
layer, providing developers with tools, APIs, and frameworks to develop, test, and deploy
applications without worrying about the underlying infrastructure.
3. Software Layer: At this layer, cloud providers offer software applications and services that users
can access over the internet. Software as a Service (SaaS) offerings are prevalent here, providing
users with access to applications without the need for installation or maintenance.
4. Management and Orchestration Layer: This layer involves tools and services for managing
cloud resources, automating workflows, and orchestrating deployments. It includes services for
provisioning, monitoring, scaling, and optimizing cloud infrastructure and applications.
5. Security and Compliance Layer: Security is a crucial aspect of cloud computing, and this layer
encompasses measures and services for protecting data, applications, and infrastructure in the cloud.
It includes features like encryption, identity and access management (IAM), threat detection, and
compliance auditing.
Cloud Deployment Models
1. Public Cloud: Services are delivered over the public internet and are available to anyone who
wants to use or purchase them. Public clouds are owned and operated by third-party cloud service
providers, offering scalability, flexibility, and cost-effectiveness.
2. Private Cloud: Private clouds are dedicated cloud environments exclusively used by a single
organization. They can be hosted on-premises or by a third-party provider and offer greater control,
security, and customization compared to public clouds.
3. Hybrid Cloud: Hybrid clouds combine elements of public and private clouds, allowing data and
applications to be shared between them. This approach offers flexibility and scalability, enabling
organizations to leverage the benefits of both public and private cloud environments.
4. Community Cloud: Community clouds are shared infrastructure and services that are tailored to
meet the specific needs of a particular community or group of organizations. They offer
collaboration opportunities while addressing shared concerns such as security, compliance, or
industry regulations.
5. Multi-Cloud: Multi-cloud refers to the use of multiple cloud computing services or platforms from
different providers. Organizations may use a combination of public, private, or hybrid clouds to
meet their diverse needs, avoid vendor lock-in, and optimize performance, cost, and reliability.
These layers and types of cloud computing provide a framework for understanding the architecture,
deployment models, and services offered in the cloud computing ecosystem.
Service Models
Cloud computing offers different service models, each providing varying levels of control,
management responsibility, and abstraction of underlying infrastructure. The common service
models in cloud computing are:
Software as a Service (SaaS):
The provider hosts and manages the entire application stack, including infrastructure,
middleware, and application software.
Users typically have configuration options to customize the application to their needs but
have limited control over underlying infrastructure or code.
Examples: Salesforce, Google Workspace (formerly G Suite), Microsoft Office 365.
These service models offer different levels of abstraction and management, catering to various use
cases and requirements. Organizations can choose the appropriate service model based on factors
such as control, scalability, management overhead, and cost-effectiveness.
Data Center Design and Interconnection Networks
In cloud computing, data center design and interconnection networks are fundamental components
that enable the delivery of cloud services to users. Here's how they are typically structured in a
cloud computing environment:
3. Content Delivery Networks (CDNs): Cloud providers integrate CDNs that cache content at edge
locations closer to end-users. CDNs reduce latency, minimize bandwidth usage, and enhance the
user experience for cloud-based applications and services.
4. Software-Defined Networking (SDN): SDN technologies are employed in cloud data centers to
programmatically manage and control network resources using software-based controllers. SDN
enables dynamic network provisioning, traffic optimization, and policy-based management, leading
to greater agility, scalability, and efficiency in cloud networking.
5. Security and Compliance: Cloud providers implement robust security measures, including
encryption, access controls, intrusion detection/prevention systems (IDS/IPS), and traffic
monitoring tools, to protect data and ensure compliance with regulatory requirements. Network
security is a critical aspect of cloud computing infrastructure, given the multi-tenant nature of cloud
environments.
Overall, effective data center design and interconnection networks are essential for delivering
scalable, reliable, and high-performance cloud services to users while maintaining security,
compliance, and operational efficiency.
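The SDN idea above, a software controller installing match/action rules that a switch then applies, can be sketched as a toy program. All names and the rule format here are illustrative, not a real SDN controller API:

```python
# Toy sketch of SDN-style policy-based forwarding: a software "controller"
# installs match/action rules, and the "switch" applies the first match.
flow_table = []  # list of (match_fn, action) pairs, checked in priority order

def install_rule(match_fn, action):
    """Controller installs a rule programmatically (the essence of SDN)."""
    flow_table.append((match_fn, action))

def forward(packet):
    """Data plane: apply the first matching rule's action, else drop."""
    for match_fn, action in flow_table:
        if match_fn(packet):
            return action
    return "drop"

# Hypothetical policy: isolate tenants (multi-tenant security), steer web traffic.
install_rule(lambda p: p["src_tenant"] != p["dst_tenant"], "drop")
install_rule(lambda p: p["dst_port"] == 443, "forward:web_pool")
install_rule(lambda p: True, "forward:default")

print(forward({"src_tenant": "a", "dst_tenant": "b", "dst_port": 443}))  # drop
print(forward({"src_tenant": "a", "dst_tenant": "a", "dst_port": 443}))  # forward:web_pool
```

Because the policy lives in software rather than in per-device configuration, rules can be installed, changed, or audited programmatically, which is what gives SDN its agility.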
Compute Architecture:
1. Virtualized Compute Resources: Cloud providers deploy virtualization technologies such as
hypervisors to abstract physical compute resources (e.g., servers, CPUs, memory) and create virtual
machines (VMs) or containers. This enables multiple workloads to run on the same physical
hardware, leading to resource consolidation and flexibility.
2. Compute Orchestration and Management: Orchestration platforms such as Kubernetes or cloud-
native services like AWS Elastic Kubernetes Service (EKS) manage the lifecycle of compute
resources, including provisioning, scaling, load balancing, and scheduling of containerized or VM-
based workloads.
3. Elastic Scalability: Cloud architectures are designed to scale compute resources dynamically based
on workload demand. Auto-scaling features automatically adjust the number of VM instances or
containers in response to changes in traffic, ensuring optimal performance and resource utilization.
4. Serverless Computing: Serverless computing platforms like AWS Lambda or Google Cloud
Functions abstract away the underlying infrastructure and allow developers to focus on writing code
without managing servers. Functions are executed in response to events, and users are billed based
on actual usage, promoting cost efficiency and agility.
5. Compute Instance Types: Cloud providers offer a variety of compute instance types optimized for
different workloads, such as general-purpose, compute-optimized, memory-optimized, and GPU-
accelerated instances. Users can choose instance types based on their specific performance and
resource requirements.
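The serverless model described in point 4 can be sketched in the AWS Lambda Python handler style: the platform invokes a handler function in response to an event, and the developer never manages a server. The event fields below are hypothetical examples, not a fixed schema:

```python
import json

# Sketch of a serverless function in the AWS Lambda Python handler style.
# The platform calls handler(event, context) per event and bills per invocation.
def lambda_handler(event, context):
    """Return a greeting; no server to provision, patch, or scale manually."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation for illustration (in production the platform calls this):
print(lambda_handler({"name": "cloud"}, None))
```

In a real deployment the function would be packaged and uploaded to the platform, which handles scaling from zero to many concurrent invocations automatically.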
Storage Architecture:
1. Object Storage: Object storage services like Amazon S3 or Google Cloud Storage provide scalable
and durable storage for unstructured data, such as images, videos, documents, and backups. Objects
are stored with metadata and accessed via HTTP/HTTPS APIs, making them suitable for web
applications and content delivery.
2. Block Storage: Block storage offerings such as Amazon EBS or Azure Disk provide persistent
block-level storage volumes that can be attached to VMs or containers. Block storage is commonly
used for databases, file systems, and applications requiring low-latency access to data.
3. File Storage: File storage services like Amazon EFS or Azure Files offer shared file systems that
can be accessed concurrently by multiple VMs or containers. File storage is suitable for applications
requiring file-based access and shared data repositories.
4. Scalable Storage Architectures: Cloud providers employ distributed storage architectures that
replicate data across multiple nodes or data centers to ensure durability and availability. Techniques
such as sharding, replication, and erasure coding are used to distribute data efficiently and mitigate
the risk of data loss.
5. Data Lifecycle Management: Cloud storage services offer features for data lifecycle management,
including versioning, encryption, data tiering, and automated backup and archival. These features
help optimize storage costs, ensure data security, and comply with regulatory requirements.
6. Hybrid and Multi-Cloud Storage: Cloud architectures support hybrid and multi-cloud storage
solutions that seamlessly integrate on-premises data centers with cloud storage services.
Technologies like AWS Storage Gateway or Azure StorSimple enable hybrid storage deployments,
while data replication and synchronization tools facilitate multi-cloud data management.
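The sharding and replication techniques mentioned in point 4 can be illustrated with a toy placement function: each object key is hashed to a primary node, and extra copies go on the following nodes so data survives a node failure. The node names and replica count are illustrative:

```python
import hashlib

# Toy sketch of hash-based sharding with replication, as used by distributed
# storage systems: hash the key to pick a primary node, place copies on the
# next nodes in sequence. Node names are illustrative.
NODES = ["node-0", "node-1", "node-2", "node-3"]
REPLICAS = 2  # primary + 1 extra copy

def placement(key):
    """Return the nodes that should hold `key` (primary first)."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    primary = h % len(NODES)
    return [NODES[(primary + i) % len(NODES)] for i in range(REPLICAS)]

for obj in ["photos/cat.jpg", "backups/db.dump"]:
    print(obj, "->", placement(obj))
```

Real systems refine this with consistent hashing and erasure coding so that adding or removing nodes moves only a fraction of the data, but the core idea, deterministic key-to-node placement with redundancy, is the same.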
By leveraging these architectural principles and technologies, cloud providers build resilient,
scalable, and cost-effective compute and storage infrastructures that meet the diverse needs of users
and applications in the cloud computing ecosystem.
Parallel and Distributed Programming Paradigms
1. Task Decomposition: Breaking down large computational tasks into smaller, independent units of
work that can be executed concurrently across multiple processing units.
2. Parallel Algorithms: Designing algorithms and data structures that support parallel execution, such
as parallel sorting, searching, and matrix multiplication algorithms.
3. Parallelism Models: Leveraging different models of parallelism, such as task parallelism, data
parallelism, and pipeline parallelism, to distribute workloads efficiently across multiple processors
or nodes.
4. Concurrency Control: Managing concurrent access to shared resources and ensuring data
consistency and integrity through techniques such as locking, synchronization, and transaction
management.
5. Parallel Programming Models: Using programming models and frameworks that support parallel
execution, such as message passing (e.g., MPI), shared-memory (e.g., OpenMP), or functional
programming (e.g., MapReduce).
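Two of the ideas above, data parallelism (splitting the input into chunks processed concurrently) and concurrency control (a lock protecting shared state), can be sketched in a few lines of Python. This is a toy illustration, not a tuned parallel implementation:

```python
from concurrent.futures import ThreadPoolExecutor
from threading import Lock

def parallel_sum_of_squares(numbers, workers=4):
    """Data parallelism: each worker squares and sums its own chunk;
    a lock (concurrency control) guards the shared running total."""
    total = 0
    lock = Lock()
    chunks = [numbers[i::workers] for i in range(workers)]  # task decomposition

    def worker(chunk):
        nonlocal total
        partial = sum(x * x for x in chunk)  # independent work, no sharing
        with lock:                           # synchronized update of shared state
            total += partial

    with ThreadPoolExecutor(max_workers=workers) as pool:
        pool.map(worker, chunks)             # chunks processed concurrently
    return total

print(parallel_sum_of_squares(range(10)))  # 285
```

The same decomposition pattern scales up: in a distributed setting the chunks would go to different machines and the final aggregation would happen over the network, which is exactly the shape of MapReduce discussed next.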
By applying parallel and distributed programming paradigms in cloud computing, developers can
harness the power of distributed resources and build scalable, high-performance, and fault-tolerant
applications that meet the demands of modern cloud-based environments.
MapReduce Workflow
1. Job Submission:
A MapReduce job is submitted to the cluster, specifying the input data, mapper function,
reducer function, and other job parameters.
2. Job Scheduling:
The job scheduler assigns map and reduce tasks to available nodes in the cluster, taking into
account resource availability and data locality.
3. Map Phase:
Each mapper function processes a portion of the input data and generates intermediate key-
value pairs.
Mapper tasks run in parallel across multiple nodes, processing data in a distributed fashion.
4. Shuffle and Sort Phase:
Intermediate key-value pairs are shuffled and sorted based on their keys.
This phase ensures that all values associated with the same key are grouped together and
ready for the reduce phase.
5. Reduce Phase:
Each reducer function processes a subset of the sorted intermediate data, aggregating or
summarizing values associated with the same key.
Reducer tasks run in parallel across multiple nodes, producing the final output.
6. Output Generation:
The output of the reduce phase is written to storage, typically distributed file systems such
as Hadoop Distributed File System (HDFS).
The output can be further processed or analyzed by downstream applications.
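The phases above can be sketched with the classic word-count example, run in memory for illustration. A real framework such as Hadoop would execute the mappers and reducers in parallel across cluster nodes and persist the output to HDFS:

```python
from collections import defaultdict

def mapper(line):
    """Map phase: emit intermediate (word, 1) pairs for each input line."""
    for word in line.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle/sort phase: group all values by key, sorted by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return sorted(groups.items())

def reducer(key, values):
    """Reduce phase: aggregate the values associated with one key."""
    return (key, sum(values))

lines = ["the quick brown fox", "the lazy dog", "the fox"]
intermediate = [pair for line in lines for pair in mapper(line)]   # map
output = [reducer(k, vs) for k, vs in shuffle(intermediate)]       # reduce
print(output)
```

Each phase only ever sees its own slice of the data, which is why the same program scales from a single machine to a cluster without changing the mapper or reducer logic.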
Benefits of MapReduce
1. Scalability: MapReduce enables parallel processing of large datasets across distributed clusters,
allowing applications to scale horizontally as demand grows.
2. Fault Tolerance: MapReduce frameworks provide built-in fault tolerance mechanisms,
automatically handling node failures and ensuring job completion.
3. Data Locality: MapReduce frameworks optimize data processing by scheduling tasks on nodes
where the data is stored, minimizing data transfer over the network.
4. Programming Model: MapReduce provides a simple and intuitive programming model,
abstracting away the complexities of distributed computing and parallel processing.
5. Ecosystem Integration: MapReduce frameworks such as Apache Hadoop integrate with a wide
range of tools and libraries for data processing, analytics, and machine learning.
Overall, MapReduce is a powerful paradigm for distributed data processing in cloud computing
environments, enabling efficient and scalable analysis of large-scale datasets.
Hadoop:
1. Hadoop Distributed File System (HDFS): HDFS is a distributed file system that stores data
across multiple machines in a Hadoop cluster. It provides high throughput access to
application data and is designed to be fault-tolerant, detecting and recovering from
failures at the application layer.
2. MapReduce: MapReduce is a programming model and processing engine for distributed
computing on large datasets. It enables parallel processing of data across a cluster of
computers. MapReduce programs are written in Java and consist of two main functions:
map and reduce. The map function processes input data and produces a set of
intermediate key-value pairs, and the reduce function merges these intermediate pairs to
produce the final output.
3. Yet Another Resource Negotiator (YARN): YARN is the resource management layer of
Hadoop. It is responsible for managing computing resources in a Hadoop cluster and
scheduling tasks. YARN decouples the resource management and job scheduling
functionalities from the MapReduce programming model, allowing other programming
models and frameworks to run on Hadoop.
Hadoop is widely used for processing and analyzing large datasets in various industries,
including finance, healthcare, retail, and telecommunications. It provides a cost-effective
solution for storing and analyzing vast amounts of data, enabling organizations to derive
valuable insights and make data-driven decisions.
Programming Languages for Cloud Development
1. Python: Python is a versatile and widely-used language known for its simplicity and
readability. It has a rich ecosystem of libraries and frameworks that facilitate cloud
development, including Flask and Django for web development, Boto3 for interacting with
AWS services, and TensorFlow for machine learning applications.
2. JavaScript (Node.js): Node.js allows developers to use JavaScript on the server-side,
making it a popular choice for full-stack development. It offers asynchronous and event-driven
programming, which aligns well with cloud-native architectures. Frameworks like
Express.js are commonly used for building APIs and microservices.
3. Java: Java is a robust and mature language with strong support for building enterprise-
grade applications. It is well-suited for large-scale cloud deployments and offers
frameworks like Spring Boot for creating microservices and web applications. Java's
portability also makes it suitable for developing applications that can run across different
cloud platforms.
4. Go (Golang): Go is a statically typed language developed by Google, known for its
simplicity, performance, and built-in support for concurrency. It is increasingly popular for
cloud-native development, particularly for building microservices and distributed systems.
Go's small footprint and fast compilation make it well-suited for deploying lightweight and
scalable applications in the cloud.
5. C#: C# is commonly associated with the Microsoft ecosystem and is widely used for
developing applications on the Azure cloud platform. It offers robust frameworks like
ASP.NET Core for building web applications and services. C# developers can leverage
Azure-specific SDKs and tools for seamless integration with Azure services.
6. Ruby: Ruby is valued for its developer-friendly syntax and productivity features. While not
as commonly associated with cloud development as some other languages, it is still used
for building web applications and APIs, often with the Ruby on Rails framework. Heroku, a
popular cloud platform, offers strong support for Ruby applications.
These languages each have their strengths and are chosen based on factors such as
developer expertise, project requirements, and compatibility with the chosen cloud
platform. Ultimately, the best language for cloud development depends on the specific
needs and goals of the project.
Google Maps APIs
1. Google Maps JavaScript API: This API allows you to embed Google Maps into web pages and
customize them with markers, overlays, and interactive elements. You can also use it to add features like
directions, street view, and geocoding to your web applications.
2. Google Maps Geocoding API: Geocoding is the process of converting addresses (like "1600
Amphitheatre Parkway, Mountain View, CA") into geographic coordinates (latitude and longitude). The
Geocoding API allows you to perform geocoding and reverse geocoding, translating coordinates into
addresses.
3. Google Maps Directions API: This API provides directions for various modes of transportation (driving,
walking, bicycling, and transit) between two or more locations. You can use it to retrieve step-by-step
directions, distance, and duration for routes.
4. Google Maps Places API: The Places API enables you to search for places (e.g., restaurants, hotels,
landmarks) based on specific criteria such as keyword, type, or geographic location. You can also use it
to retrieve details about individual places, including reviews, ratings, and photos.
5. Google Maps Static API: This API allows you to generate static map images that can be embedded in
web pages or mobile apps. You can customize the appearance of the map, add markers and overlays,
and specify the desired size and format of the image.
6. Google Maps Street View Static API: Similar to the Static API, this API allows you to generate static
street view images using specified coordinates or panorama IDs.
7. Google Maps Distance Matrix API: The Distance Matrix API provides travel distance and time
estimates for multiple origins and destinations. It can be used to calculate distances for various
transportation modes and to optimize routes based on travel time.
8. Google Maps Elevation API: This API provides elevation data for points on the earth's surface, allowing
you to retrieve elevation profiles for routes or display elevation contours on maps.
To use these APIs, you'll need to sign up for a Google Maps Platform account and obtain an API key,
which you'll include in your API requests for authentication and usage tracking. Google provides
comprehensive documentation, code samples, and client libraries for various programming languages
to help you get started with integrating Google Maps into your applications.
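As a small sketch, a Geocoding API request URL can be constructed with the standard library. The endpoint below is the documented Google Maps Geocoding endpoint; the key is a placeholder, and the request is only built, not sent (a real call needs a valid API key):

```python
from urllib.parse import urlencode

# Documented Google Maps Geocoding endpoint; the API key below is a placeholder.
GEOCODE_ENDPOINT = "https://maps.googleapis.com/maps/api/geocode/json"

def geocode_url(address, api_key):
    """Build a geocoding request URL for the given address."""
    params = urlencode({"address": address, "key": api_key})
    return f"{GEOCODE_ENDPOINT}?{params}"

url = geocode_url("1600 Amphitheatre Parkway, Mountain View, CA", "YOUR_API_KEY")
print(url)
# Fetching this URL (with a real key) returns JSON whose results include the
# geographic coordinates (latitude/longitude) for the address.
```

The same pattern, endpoint plus URL-encoded parameters plus an API key, applies to the other Maps web service APIs listed above, such as Directions, Places, and Distance Matrix.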