Cloud computing enables users to provision virtual hardware and services on demand without upfront commitments, leading to cost savings, improved scalability, and enhanced security. It encompasses various models such as IaaS, PaaS, and SaaS, and can be deployed through public, private, or hybrid clouds. Technologies like virtualization and frameworks such as Hadoop and OpenStack play crucial roles in delivering cloud services efficiently.
The vision of cloud computing :- Cloud computing allows anyone with a credit card to provision virtual hardware, runtime environments, and services. These are used for as long as needed, with no up-front commitments required. The entire stack of a computing system is transformed into a collection of utilities, which can be provisioned and composed together to deploy systems in hours rather than days, and with virtually no maintenance costs. This opportunity, initially met with skepticism, has now become common practice across several application domains and business sectors. The demand has fast-tracked technical development and enriched the set of services offered, which have also become more sophisticated and cheaper. Despite this evolution, the use of cloud computing is often limited to a single service at a time or, more commonly, a set of related services offered by the same vendor.

Define cloud computing :- Cloud computing has become a popular buzzword; it has been widely used to refer to different technologies, services, and concepts. It is often associated with virtualized infrastructure or hardware on demand, utility computing, IT outsourcing, platform and software as a service, and many other things that are now the focus of the IT industry. The term cloud has historically been used in the telecommunications industry as an abstraction of the network in system diagrams. It then became the symbol of the most popular computer network: the Internet. This meaning also applies to cloud computing, which refers to an Internet-centric way of computing. Cloud computing refers to both the applications delivered as services over the Internet and the hardware and system software in the datacenters that provide those services. Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

Cloud computing reference model (service model) :-
1) IaaS :- At the base of the stack, Infrastructure-as-a-Service solutions deliver infrastructure on demand in the form of virtual hardware, storage, and networking. Virtual hardware is used to provide compute on demand in the form of virtual machine instances. These are created at users' request on the provider's infrastructure, and users are given tools and interfaces to configure the software stack installed in the virtual machine.
2) PaaS :- Platform-as-a-Service solutions are the next step in the stack. They deliver scalable and elastic runtime environments on demand and host the execution of applications. These services are backed by a core middleware platform that is responsible for creating the abstract environment where applications are deployed and executed. It is the responsibility of the service provider to provide scalability and to manage fault tolerance, while users are asked to focus on the logic of the application, developed by leveraging the provider's APIs and libraries.
3) SaaS :- At the top of the stack, Software-as-a-Service solutions provide applications and services on demand. Most of the common functionalities of desktop applications—such as office automation, document management, photo editing, and customer relationship management (CRM) software—are replicated on the provider's infrastructure and made more scalable and accessible through a browser on demand.

Cloud deployment models :- The three major models for deploying and accessing cloud computing environments are public clouds, private/enterprise clouds, and hybrid clouds.
1. The public cloud is available to the general public, who use computing resources such as software and hardware over the Internet. It is a good choice for companies and organizations with low security concerns. There is no need to manage these resources, as the cloud provider configures and manages the services. Generally, public clouds are used for application development and testing.
2. A private cloud dedicates the infrastructure and resources to a single organization. Users and organizations do not share resources with other users, which is why it is also called the internal or corporate model. Private clouds are more costly than public clouds due to their expensive maintenance.
3. The hybrid cloud is a combination of public and private clouds. Very few companies and organizations can migrate their tech stack to cloud computing in one go, so cloud vendors came up with the hybrid cloud, which offers a smooth transition across public and private cloud facilities. Organizations keep their sensitive data in the private cloud and non-sensitive data in the public cloud.

Characteristics and benefits of CC :- • No up-front commitments • On-demand access • Nice pricing • Simplified application acceleration & scalability • Efficient resource allocation • Energy efficiency • Seamless creation & use of third-party services
1. Cost savings :- Instead of purchasing more hardware every time your needs expand, virtualization lets you simply create a new virtual machine on your existing infrastructure.
2. Improved scalability :- Scaling your startup shouldn't feel like pushing a boulder uphill. With cloud scalability, it can be as simple as adjusting a slider on your user interface.
3. Better security :- When you virtualize your physical servers, applications, or networks, each virtual machine operates in its own isolated environment. That means if one cloud security threat hits a virtual machine, it doesn't automatically compromise all the others.
5. Remote work support :- Virtualization allows employees to securely access their desktops, applications, and data from any device with an internet connection. This is done through virtual desktop infrastructure (VDI) or similar technologies that host user desktops on a central server.

Web 2.0 :-
1) The Web is the primary interface through which cloud computing delivers its services. At present, the Web encompasses a set of technologies and services that facilitate interactive information sharing, collaboration, user-centered design, and application composition.
2) Web 2.0 brings interactivity and flexibility into Web pages, providing an enhanced user experience by giving Web-based access to all the functions normally found in desktop applications. These capabilities are obtained by integrating a collection of standards and technologies such as XML, Asynchronous JavaScript and XML (AJAX), Web services, and others.
3) Web 2.0 applications are extremely dynamic: they improve continuously, and new updates and features are integrated at a constant rate by following the usage trends of the community. There is no need to deploy new software releases on the installed base at the client side.
4) Web 2.0 applications aim to leverage the "long tail" of Internet users by making themselves available to everyone in terms of either media accessibility or affordability.
5) Examples of Web 2.0 applications are Google Documents, Google Maps, Facebook, Twitter, YouTube, and Wikipedia.

Cloud computing platforms and technologies :-
Hadoop :- Apache Hadoop is an open-source framework suited to processing large data sets on commodity hardware. Hadoop is an implementation of MapReduce, an application programming model developed by Google, which provides two fundamental operations for data processing: map and reduce. Hadoop provides the runtime environment, and developers need only provide the input data and specify the map and reduce functions to be executed. Yahoo!, the sponsor of the Apache Hadoop project, has made Hadoop an integral part of its cloud infrastructure, where it supports several business processes of the company.
AWS :- AWS provides a wide range of IaaS services, from virtual compute, storage, and networking to complete computing stacks. AWS is best known for its storage and compute on-demand services, namely Elastic Compute Cloud (EC2) and Simple Storage Service (S3). EC2 offers customizable virtual hardware to the end user, which can be used as the base infrastructure for deploying computing systems in the cloud. EC2 also offers the capability of saving a running instance as an image, thus allowing users to create their own templates for deploying systems. S3 stores these templates and delivers persistent storage on demand. S3 is organized into buckets, which contain objects stored in binary form that can be enriched with attributes.
Microsoft Azure :- Microsoft Azure is a cloud operating system and a platform for developing applications in the cloud. It provides a scalable runtime environment for Web applications and distributed applications in general. Applications in Azure are organized around the concept of roles, which identify a distribution unit for applications and embody the application's logic. Currently, there are three types of role: Web role, worker role, and virtual machine role. The Web role is designed to host a Web application; the worker role is a more generic container of applications and can be used to perform workload processing; and the virtual machine role provides a virtual environment in which the computing stack can be fully customized, including the operating system. Besides roles, Azure provides a set of additional services that complement application execution, such as support for storage (relational data and blobs), networking, caching, content delivery, and others.

Service-oriented computing :- Service-oriented computing organizes distributed systems in terms of services, which represent the major abstraction for building systems. Service orientation expresses applications and software systems as aggregations of services that are coordinated within a service-oriented architecture (SOA). Even though there is no designated technology for the development of service-oriented software systems, Web services are the de facto approach for developing SOA. Web services, the fundamental component enabling cloud computing systems, leverage the Internet as the main interaction channel between users and the system. Service-oriented computing is a model that aims to provide standardized models and protocols to make services easily accessible and interoperable among distributed application components. It allows users to consume services on demand, relieving them from the need to build and maintain a complete system in-house.

RPC :- RPC is the fundamental abstraction enabling the execution of procedures on a client's request. RPC allows extending the concept of a procedure call beyond the boundaries of a process and a single memory address space. The called procedure and the calling procedure may be on the same system, or they may be on different systems in a network. The concept of RPC has been discussed since 1976 and was completely formalized by Nelson and Birrell in the early 1980s. From then on, it has not changed in its major components. Even though it is a quite old technology, RPC is still used today as a fundamental component for IPC in more complex systems. An important aspect of RPC is marshaling, which identifies the process of converting parameters and return values into a form that is more suitable to be transported over a network as a sequence of bytes. The term unmarshaling refers to the opposite procedure. Marshaling and unmarshaling are performed by the RPC runtime infrastructure.

Approaches to parallel programming :- A wide variety of parallel programming approaches are available. The most prominent among them are the following: • Data parallelism • Process parallelism • Farmer-and-worker model. These three models are all suitable for task-level parallelism.
● In the case of data parallelism, the divide-and-conquer technique is used to split data into multiple sets, and each data set is processed on different PEs using the same instruction. This approach is highly suitable to processing on machines based on the SIMD model.
● In the case of process parallelism, a given operation has multiple (but distinct) activities that can be processed on multiple processors.
● In the case of the farmer-and-worker model, a job distribution approach is used: one processor is configured as the master and all the remaining PEs are designated as slaves; the master assigns jobs to the slave PEs and, on completion, they inform the master, which in turn collects the results.
These approaches can be utilized at different levels of parallelism.

Hardware architectures for parallel processing :-
• Single-instruction, single-data (SISD) systems: An SISD computing system is a uniprocessor machine capable of executing a single instruction, which operates on a single data stream. In SISD, machine instructions are processed sequentially; hence computers adopting this model are popularly called sequential computers.
• Single-instruction, multiple-data (SIMD) systems: An SIMD computing system is a multiprocessor machine capable of executing the same instruction on all the CPUs but operating on different data streams. Machines based on the SIMD model are well suited to scientific computing, since it involves lots of vector and matrix operations.
• Multiple-instruction, single-data (MISD) systems: An MISD computing system is a multiprocessor machine capable of executing different instructions on different PEs, all operating on the same data set.
• Multiple-instruction, multiple-data (MIMD) systems: An MIMD computing system is a multiprocessor machine capable of executing multiple instructions on multiple data sets. Each PE in the MIMD model has separate instruction and data streams; hence machines built using this model are well suited to any kind of application.

Virtualization :- Virtualization technology is one of the fundamental components of cloud computing, especially in regard to infrastructure-based services. Virtualization allows the creation of a secure, customizable, and isolated execution environment for running applications, even if they are untrusted, without affecting other users' applications. Virtualization is a large umbrella of technologies and concepts that are meant to provide an abstract environment—whether virtual hardware or an operating system—in which to run applications. The term virtualization is often synonymous with hardware virtualization, which plays a fundamental role in efficiently delivering Infrastructure-as-a-Service (IaaS) solutions for cloud computing.

Machine reference model of execution virtualization :- Modern computing systems can be expressed in terms of the reference model described in the diagram. At the bottom layer, the model for the hardware is expressed in terms of the Instruction Set Architecture (ISA), which defines the instruction set for the processor, registers, memory, and interrupt management. The ISA is the interface between hardware and software, and it is important to the operating system (OS) developer (System ISA) and to developers of applications that directly manage the underlying hardware (User ISA). The application binary interface (ABI) separates the operating system layer from the applications and libraries, which are managed by the OS. The ABI covers details such as low-level data types, alignment, and call conventions, and it defines a format for executable programs. System calls are defined at this level. This interface allows portability of applications and libraries across operating systems that implement the same ABI. The highest level of abstraction is represented by the application programming interface (API), which interfaces applications to libraries and/or the underlying operating system.

Hypervisors :-
• Type I hypervisors run directly on top of the hardware; they take the place of the operating systems and interact directly with the ISA interface exposed by the underlying hardware, and they emulate this interface in order to allow the management of guest operating systems.
• Type II hypervisors require the support of an operating system to provide virtualization services. This means that they are programs managed by the operating system, which interact with it through the ABI and emulate the ISA of virtual hardware for guest operating systems.
Three main modules—dispatcher, allocator, and interpreter—coordinate their activity in order to emulate the underlying hardware. The dispatcher constitutes the entry point of the monitor and reroutes the instructions issued by the virtual machine instance to one of the two other modules. The allocator is responsible for deciding the system resources to be provided to the virtual machine, and the interpreter executes an interpretation routine whenever a privileged instruction is issued.

Types of virtualization :- Desktop virtualization abstracts the desktop environment available on a personal computer in order to provide access to it using a client/server approach. Desktop virtualization provides the same outcome as hardware virtualization but serves a different purpose. Network virtualization combines hardware appliances and specific software for the creation and management of a virtual network. Network virtualization can aggregate different physical networks into a single logical network (external network virtualization) or provide network-like functionality to an operating system partition (internal network virtualization). Storage virtualization is a system administration practice that allows decoupling the physical organization of the hardware from its logical representation. Storage virtualization allows us to harness a wide range of storage facilities and represent them under a single logical file system. Application server virtualization abstracts a collection of application servers that provide the same services into a single virtual application server by using load-balancing strategies and providing a high-availability infrastructure for the services hosted in the application server.

Hyper-V :- Hyper-V is an infrastructure virtualization solution developed by Microsoft for server virtualization. As the name suggests, it uses a hypervisor-based approach to hardware virtualization, which leverages several techniques to support a variety of guest operating systems. Hyper-V is currently shipped as a component of Windows Server 2008 R2, which installs the hypervisor as a role within the server. Hyper-V supports multiple and concurrent execution of guest operating systems by means of partitions. A partition is a completely isolated environment in which an operating system is installed and run. Hyper-V constitutes the basic building block of Microsoft's virtualization infrastructure; other components contribute to creating a fully featured platform for server virtualization.

Xen :- Xen is an open-source initiative implementing a virtualization platform based on paravirtualization. Initially developed by a group of researchers at the University of Cambridge in the United Kingdom, Xen now has a large open-source community backing it. Citrix also offers it as a commercial solution, XenSource. Xen-based technology is used for either desktop virtualization or server virtualization, and recently it has also been used to provide cloud computing solutions by means of the Xen Cloud Platform (XCP). At the basis of all these solutions is the Xen Hypervisor, which constitutes the core technology of Xen. Recently, Xen has been extended to support full virtualization using hardware-assisted virtualization. Xen is the most popular implementation of paravirtualization, which, in contrast with full virtualization, allows high-performance execution of guest operating systems. This is made possible by eliminating the performance loss incurred while executing instructions that require special management.

VMware :- VMware's technology is based on the concept of full virtualization, where the underlying hardware is replicated and made available to the guest operating system, which runs unaware of such abstraction layers and does not need to be modified. VMware implements full virtualization either in the desktop environment, by means of Type II hypervisors, or in the server environment, by means of Type I hypervisors. In both cases, full virtualization is made possible by means of direct execution (for nonsensitive instructions) and binary translation (for sensitive instructions), thus allowing the virtualization of architectures such as x86. Besides these two core solutions, VMware provides additional tools and software that simplify the use of virtualization technology, either in a desktop environment, with tools enhancing the integration of virtual guests with the host, or in a server environment, with solutions for building and managing virtual computing infrastructures.

Pros and cons of virtualization :- Virtualization has now become extremely popular and widely used, especially in cloud computing. The primary reason for its wide success is the elimination of the technology barriers that prevented virtualization from being an effective and viable solution in the past. The most relevant barrier has been performance. Today, the widespread availability of Internet connectivity and the advancements in computing technology have made virtualization an interesting opportunity to deliver on-demand IT infrastructure and services. Despite its renewed popularity, this technology has drawbacks as well as benefits.

Web services :- Web services are the prominent technology for implementing SOA systems and applications. They leverage Internet technologies and standards for building distributed systems. Several aspects make Web services the technology of choice for SOA. First, they allow for interoperability across different platforms and programming languages. Second, they are based on well-known and vendor-independent standards such as HTTP, SOAP, XML, and WSDL. Third, they provide an intuitive and simple way to connect heterogeneous software systems, enabling the quick composition of services in a distributed environment. Finally, they provide the features required by enterprise business applications to be used in an industrial environment. They define facilities for enabling service discovery, which allows system architects to more efficiently compose SOA applications, and service metering to assess whether a specific service complies with the contract between the service provider and the service consumer. Web services are made accessible by being hosted in a Web server; therefore, HTTP is the most popular transport protocol used for interacting with Web services.

History of OpenStack :- On his first day in office in 2009, US President Barack Obama signed a memorandum to all federal agencies directing them to break down barriers to transparency, participation, and collaboration between the federal government and the people it serves. The memorandum became known as the Open Government Directive. One hundred and twenty days after the directive was issued, NASA announced its Open Government framework, which outlined the sharing of a tool called Nebula. Nebula was developed to speed the delivery of IaaS resources to NASA scientists and researchers. At the same time, the cloud computing provider Rackspace announced it would open-source its object storage platform, Swift. In July 2010, Rackspace and NASA, along with 25 other companies, launched the OpenStack project. Over the past five years there have been ten releases.

What is OpenStack? :-
1. For cloud/system/storage/network administrators—OpenStack controls many types of commercial and open source hardware and software, providing a cloud management layer on top of vendor-specific resources. Repetitive manual tasks like disk and network provisioning are automated with the OpenStack framework. In fact, the entire process of provisioning virtual machines and even applications can be automated using the OpenStack framework.
2. For the developer—OpenStack is a platform that can be used not only as an Amazon-like service for procuring resources (virtual machines, storage, and so on) used in development environments, but also as a cloud orchestration platform for deploying extensible applications based on application templates. Imagine the ability to describe the infrastructure (X servers with Y RAM) and software dependencies (MySQL, Apache2, and so on) of your application, and having the OpenStack framework deploy those resources for you.
3. For the end user—OpenStack is a self-service system for infrastructure and applications. Users can do everything from simply provisioning virtual machines (VMs) as with AWS, to constructing advanced virtual networks and applications, all within an isolated tenant (project) space.

OpenStack interface :-
1. OpenStack Dashboard (Horizon): This is a web-based interface that allows users to provision, manage, and monitor cloud resources.
2. APIs: OpenStack uses Application Programming Interfaces (APIs) to abstract virtual resources into discrete pools, enabling programmatic interaction with the cloud infrastructure.
3. Command-line tools: OpenStack also provides command-line tools for managing cloud resources.
4. OpenStack Networking (Neutron): This project provides APIs for networking functions and components, such as interfaces, routers, switches, ports, and security groups.

Key OpenStack components and their interfaces :-
1. OpenStack Compute (Nova): Manages the lifecycle of compute instances, including spawning, scheduling, and decommissioning virtual machines.
2. OpenStack Networking (Neutron): Enables network connectivity as a service for other OpenStack services, providing an API for users to define networks and their attachments.
3. OpenStack Block Storage (Cinder): Provides block storage services, allowing users to create and manage virtual disks.
4. OpenStack Image Service (Glance): Manages disk images, including image discovery, registration, and delivery to the Compute service.

Nova :- Nova is the OpenStack project that provides a way to provision compute instances (aka virtual servers). Nova supports creating virtual machines and bare-metal servers (through the use of Ironic), and it has limited support for system containers. Nova runs as a set of daemons on top of existing Linux servers to provide that service. It requires the following additional OpenStack services for basic function: Keystone, which provides identity and authentication for all OpenStack services; Glance, which provides the compute image repository (all compute instances launch from Glance images); and Neutron, which is responsible for provisioning the virtual or physical networks that compute instances connect to on boot.

DevStack :- DevStack was created to make the job of deploying OpenStack in test and development environments quicker, easier, and more understandable, but the ease with which it allows users to deploy OpenStack makes it a natural starting point for learning the framework. DevStack is a collection of documented Bash (command-line interpreter) shell scripts that are used to prepare an environment for, configure, and deploy OpenStack. The choice of a shell-scripting language for DevStack was deliberate: the code is intended to be read by humans and computers alike, and it is used as a source of documentation by developers. Developers of OpenStack components can document dependencies outside of raw code segments, and users can understand how these dependencies must be provided in a working system. As the name suggests, DevStack is a development tool, and its related OpenStack code is under constant development.
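The map and reduce operations described under Hadoop can be illustrated in miniature. The sketch below is a toy, single-process word count that shows only the programming model — the function names `map_fn`, `reduce_fn`, and `mapreduce` are invented for illustration, and nothing here uses Hadoop itself:

```python
from itertools import groupby
from operator import itemgetter

def map_fn(line):
    # map: emit (key, value) pairs -- here, (word, 1) for every word
    return [(word, 1) for word in line.split()]

def reduce_fn(word, counts):
    # reduce: combine all values that share the same key
    return (word, sum(counts))

def mapreduce(lines):
    # "shuffle" phase: sort/group intermediate pairs by key, then reduce
    pairs = sorted(p for line in lines for p in map_fn(line))
    return dict(
        reduce_fn(word, [c for _, c in group])
        for word, group in groupby(pairs, key=itemgetter(0))
    )

counts = mapreduce(["the cloud", "the cloud the stack"])
# counts maps each word to its total occurrence count
```

In real Hadoop the framework performs the split, shuffle, and distribution across machines; the developer supplies only the two functions, as the text notes.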
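The marshaling and unmarshaling steps described in the RPC section can be seen concretely with Python's standard-library XML-RPC marshaller. XML-RPC is just one possible wire format, chosen here because it ships with Python; the method name "multiply" is an arbitrary example:

```python
import xmlrpc.client

# Marshaling: convert a call (method name + parameters) into a
# network-transportable representation.
request = xmlrpc.client.dumps((6, 7), methodname="multiply")
wire_bytes = request.encode("utf-8")  # what would actually cross the network

# Unmarshaling (what the RPC runtime does on the receiving side):
# recover the parameters and the method name from the representation.
params, method = xmlrpc.client.loads(request)
```

Both directions are handled by the RPC runtime (here, `xmlrpc.client`), so application code never sees the byte-level representation — exactly the division of labor the text describes.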
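The data-parallelism approach (divide-and-conquer split, same instruction on every data set) can be sketched as follows. This is a minimal illustration: a thread pool stands in for the PEs, and `chunk` and `square_all` are invented helper names:

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(data, n):
    # divide-and-conquer: split the data into n roughly equal sets
    k = (len(data) + n - 1) // n
    return [data[i:i + k] for i in range(0, len(data), k)]

def square_all(part):
    # the SAME operation is applied to every data set (SIMD-style)
    return [x * x for x in part]

data = list(range(10))
with ThreadPoolExecutor(max_workers=4) as pool:
    parts = pool.map(square_all, chunk(data, 4))
total = [y for part in parts for y in part]
```

On a real SIMD machine the "same instruction" is a hardware vector instruction rather than a Python function, but the structure — partition, apply uniformly, recombine — is the same.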
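The farmer-and-worker model — one master assigning jobs to slave PEs and collecting their results — can likewise be sketched with threads standing in for the PEs. The job (squaring a number) and all names are illustrative only:

```python
import queue
import threading

jobs = queue.Queue()
results = queue.Queue()

def worker():
    # slave PE: take a job from the master, process it, report back
    while True:
        item = jobs.get()
        if item is None:       # sentinel: no more work
            break
        results.put((item, item * item))

# the "farmer" (master) starts the workers and assigns jobs
workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()
for n in range(8):
    jobs.put(n)
for _ in workers:              # one sentinel per worker, after all jobs
    jobs.put(None)
for w in workers:
    w.join()

# master collects the results (completion order is not guaranteed)
collected = dict(results.get() for _ in range(8))
```

Because the queue is FIFO, every real job is consumed before the sentinels, so no work is lost when the workers shut down.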
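The dispatcher/allocator/interpreter structure of a virtual machine monitor can be caricatured in a few lines. This is purely a pedagogical toy: real VMMs operate on trapped machine instructions at the ISA level, whereas the "instructions" below are invented strings, and the class and method names are this sketch's own:

```python
class VirtualMachineMonitor:
    """Toy monitor: routes trapped 'instructions' to two modules."""

    def __init__(self):
        self.resources = {}   # resources granted to each VM
        self.log = []

    def allocator(self, vm, resource):
        # decides which system resources to hand to the VM
        self.resources.setdefault(vm, []).append(resource)
        self.log.append(f"alloc:{vm}:{resource}")

    def interpreter(self, vm, instr):
        # emulates a privileged instruction on behalf of the VM
        self.log.append(f"emulate:{vm}:{instr}")

    def dispatcher(self, vm, instr):
        # entry point of the monitor: reroute each trapped instruction
        if instr.startswith("set_"):     # would change machine resources
            self.allocator(vm, instr[4:])
        else:                            # any other privileged instruction
            self.interpreter(vm, instr)

vmm = VirtualMachineMonitor()
vmm.dispatcher("vm0", "set_memory")
vmm.dispatcher("vm0", "halt")
```

The point is the control flow: every sensitive event enters through one dispatcher, which decides whether resource allocation or instruction emulation is required.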
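The idea behind storage virtualization — many physical storage facilities presented as a single logical file system — can be shown with a toy namespace. The dicts stand in for physical stores, and the round-robin placement policy and all names are invented for illustration:

```python
class LogicalVolume:
    """Toy single namespace over several physical 'stores' (dicts)."""

    def __init__(self, backends):
        self.backends = backends   # physical storage facilities
        self.catalog = {}          # logical path -> backend index

    def write(self, path, data):
        # placement policy (arbitrary here): spread files round-robin
        idx = len(self.catalog) % len(self.backends)
        self.backends[idx][path] = data
        self.catalog[path] = idx

    def read(self, path):
        # callers see one file system; the physical layout stays hidden
        return self.backends[self.catalog[path]][path]

vol = LogicalVolume([{}, {}])
vol.write("/a.txt", b"alpha")
vol.write("/b.txt", b"beta")
```

The decoupling is the essential property: `read` and `write` use only logical paths, so the physical organization can change without affecting callers.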
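The claim in the Web services section — services hosted in a Web server and invoked over HTTP — can be demonstrated end to end with Python's standard-library XML-RPC server and client. The service (a single `add` procedure on a loopback port) is an assumption of this sketch, not part of the original text:

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# a tiny service exposing one procedure over plain HTTP
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]

# serve exactly one request in a background thread
t = threading.Thread(target=server.handle_request)
t.start()

# the client reaches the service through an ordinary HTTP endpoint
proxy = xmlrpc.client.ServerProxy(f"http://127.0.0.1:{port}/")
result = proxy.add(2, 3)

t.join()
server.server_close()
```

The client never sees the server's platform or language — only the HTTP endpoint and the marshaled call — which is precisely the interoperability argument the text makes for SOA.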