Practical Part 2
Conclusion:
From this practical, I learned about CloudSim and how to install it in NetBeans and
Eclipse.
Practical 2
5) Output: observe that a Dynamic Load Balancer option is now available under the
Advanced options of Configure Simulation.
CloudSimTags: This class contains various static event/command tags that indicate the type
of action that needs to be undertaken by CloudSim entities when they receive or send events.
CloudInformationService: A Cloud Information Service (CIS) is an entity that provides
resource registration, indexing, and discovering capabilities. CIS supports two basic primitives:
(i) publish(), which allows entities to register themselves with CIS and (ii) search(), which
allows entities such as CloudCoordinator and Brokers to discover the status and endpoint
contact address of other entities. This entity also notifies the other entities about the end of
simulation.
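As a hedged illustration (assuming the CloudSim 3.x API), other entities can query what the CIS has registered through a static helper on the CloudSim class:
import java.util.List;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.core.CloudSim;

// After CloudSim.init(), the CIS entity exists automatically; brokers use it
// internally to discover registered datacenters.
List<Integer> resourceIds = CloudSim.getCloudResourceList();
Log.printLine("Resources registered with the CIS: " + resourceIds);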
CloudSimShutdown: This is an entity that waits for the termination of all end-user and broker
entities, and then signals the end of simulation to CIS.
Predicate: Predicates are used for selecting events from the deferred queue. This is an abstract
class and must be extended to create a new predicate. Some standard predicates are provided
that are presented in Figure 7 (b).
PredicateAny: This class represents a predicate that matches any event on the deferred event
queue. There is a publicly accessible instance of this predicate in the CloudSim class, called
CloudSim.SIM_ANY, and hence no new instances need to be created.
PredicateFrom: This class represents a predicate that selects events fired by specific entities.
PredicateNone: This represents a predicate that does not match any event on the deferred event
queue. There is a publicly accessible static instance of this predicate in the CloudSim class,
called CloudSim.SIM_NONE, so users do not need to create any new instances of
this class.
PredicateNotFrom: This class represents a predicate that selects events that have not been
sent by specific entities.
PredicateNotType: This class represents a predicate to select events that don't match specific
tags.
PredicateType: This class represents a predicate to select events with specific tags.
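As a hedged sketch (assuming the CloudSim 3.x API), these predicates are used when pulling events from the deferred queue inside a custom entity:
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.core.CloudSimTags;
import org.cloudbus.cloudsim.core.SimEvent;
import org.cloudbus.cloudsim.core.predicates.PredicateType;

// Inside a SimEntity subclass: select only cloudlet-completion events
// addressed to this entity, using the PredicateType described above.
SimEvent ev = CloudSim.select(getId(), new PredicateType(CloudSimTags.CLOUDLET_RETURN));
// The shared instances need no construction:
SimEvent anyEvent = CloudSim.select(getId(), CloudSim.SIM_ANY);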
Practical 3
(1) Write a program in CloudSim using the NetBeans IDE to create a datacenter with one host
and run four cloudlets on it.
(2) Write a program in CloudSim using the NetBeans IDE to create a datacenter with three hosts
and run three cloudlets on it.
(1) Write a program in CloudSim using the NetBeans IDE to create a datacenter with one
host and run four cloudlets on it.
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package org.cloudbus.cloudsim.examples;
import java.text.DecimalFormat;
import java.util.*;
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;
/** The cloudlet list. */
private static List<Cloudlet> cloudletList;
/** The vmlist. */
private static List<Vm> vmlist;
Vm vm = new Vm(vmid, brokerId, mips, pesNumber, ram, bw, size, vmm,
new CloudletSchedulerTimeShared());
// add the VM to the vmList
vmlist.add(vm);
// submit vm list to the broker
broker.submitVmList(vmlist);
// Fifth step: Create four Cloudlets
cloudletList = new ArrayList<Cloudlet>();
// Cloudlet properties
int id = 0;
long length = 400000;
long fileSize = 300;
long outputSize = 300;
UtilizationModel utilizationModel = new UtilizationModelFull();
Cloudlet cloudlet1 = new Cloudlet(id, length, pesNumber, fileSize,
outputSize, utilizationModel, utilizationModel, utilizationModel);
id++;
Cloudlet cloudlet2 = new Cloudlet(id, length, pesNumber, fileSize, outputSize,
utilizationModel, utilizationModel, utilizationModel);
id++;
Cloudlet cloudlet3 = new Cloudlet(id, length, pesNumber, fileSize, outputSize,
utilizationModel, utilizationModel, utilizationModel);
id++;
Cloudlet cloudlet4 = new Cloudlet(id, length, pesNumber, fileSize, outputSize,
utilizationModel, utilizationModel, utilizationModel);
cloudlet1.setUserId(brokerId);
cloudlet1.setVmId(vmid);
cloudlet2.setUserId(brokerId);
cloudlet2.setVmId(vmid);
cloudlet3.setUserId(brokerId);
cloudlet3.setVmId(vmid);
cloudlet4.setUserId(brokerId);
cloudlet4.setVmId(vmid);
cloudletList.add(cloudlet1);
cloudletList.add(cloudlet2);
cloudletList.add(cloudlet3);
cloudletList.add(cloudlet4);
// submit cloudlet list to the broker
broker.submitCloudletList(cloudletList);
// Sixth step: Starts the simulation
CloudSim.startSimulation();
CloudSim.stopSimulation();
//Final step: Print results when simulation is over
List<Cloudlet> newList = broker.getCloudletReceivedList();
printCloudletList(newList);
// Print the debt of each user to each datacenter
// datacenter0.printDebts();
Log.printLine("CloudSimExample1 finished!");
} catch (Exception e) {
e.printStackTrace();
Log.printLine("Unwanted errors happen");
}
}
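For reference, a minimal createDatacenter() for the single host can follow the stock CloudSimExample1 pattern; the MIPS, RAM, storage, and cost figures below are illustrative assumptions:
private static Datacenter createDatacenter(String name) {
    // One host with a single processing element (PE):
    List<Host> hostList = new ArrayList<Host>();
    List<Pe> peList = new ArrayList<Pe>();
    int mips = 1000;
    peList.add(new Pe(0, new PeProvisionerSimple(mips)));
    int hostId = 0;
    int ram = 2048;         // host memory (MB)
    long storage = 1000000; // host storage
    int bw = 10000;
    hostList.add(new Host(hostId, new RamProvisionerSimple(ram),
            new BwProvisionerSimple(bw), storage, peList,
            new VmSchedulerTimeShared(peList))); // the single machine
    // Datacenter characteristics: architecture, OS, VMM, and prices (assumed values):
    String arch = "x86";
    String os = "Linux";
    String vmm = "Xen";
    double time_zone = 10.0, cost = 3.0, costPerMem = 0.05;
    double costPerStorage = 0.001, costPerBw = 0.0;
    DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
            arch, os, vmm, hostList, time_zone, cost, costPerMem,
            costPerStorage, costPerBw);
    Datacenter datacenter = null;
    try {
        datacenter = new Datacenter(name, characteristics,
                new VmAllocationPolicySimple(hostList),
                new LinkedList<Storage>(), 0);
    } catch (Exception e) {
        e.printStackTrace();
    }
    return datacenter;
}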
(2) Write a program in CloudSim using the NetBeans IDE to create a datacenter with three
hosts and run three cloudlets on it.
/*
 * To change this license header, choose License Headers in Project Properties.
 * To change this template file, choose Tools | Templates
 * and open the template in the editor.
 */
package org.cloudbus.cloudsim.examples;
/**
*
* @author Pooja Patel
*/
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;
/**
* The cloudlet list.
*/
private static List<Cloudlet> cloudletList;
/**
* The vmlist.
*/
private static List<Vm> vmlist;
cloudlet3.setUserId(brokerId);
//add the cloudlets to the list
cloudletList.add(cloudlet1);
cloudletList.add(cloudlet2);
cloudletList.add(cloudlet3);
//submit cloudlet list to the broker
broker.submitCloudletList(cloudletList);
//bind the cloudlets to the vms. This way, the broker
// will submit the bound cloudlets only to the specific VM
broker.bindCloudletToVm(cloudlet1.getCloudletId(), vm1.getId());
broker.bindCloudletToVm(cloudlet2.getCloudletId(), vm2.getId());
broker.bindCloudletToVm(cloudlet3.getCloudletId(), vm3.getId());
// Sixth step: Starts the simulation
CloudSim.startSimulation();
// Final step: Print results when simulation is over
List<Cloudlet> newList = broker.getCloudletReceivedList();
CloudSim.stopSimulation();
printCloudletList(newList);
//Print the debt of each user to each datacenter
// datacenter0.printDebts();
Log.printLine("CloudSimExample3 finished!");
} catch (Exception e) {
e.printStackTrace();
Log.printLine("The simulation has been terminated due to an unexpected error");
}
}
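For reference, the three VMs bound above can be created following the stock CloudSimExample3 pattern; the parameter values below are illustrative assumptions taken from that example:
// Illustrative VM setup (values follow the stock CloudSimExample3):
int mips = 250;
long size = 10000;  // image size (MB)
int ram = 512;      // VM memory (MB)
long bw = 1000;
int pesNumber = 1;  // number of CPUs
String vmm = "Xen"; // VMM name
Vm vm1 = new Vm(0, brokerId, mips, pesNumber, ram, bw, size, vmm, new CloudletSchedulerTimeShared());
Vm vm2 = new Vm(1, brokerId, mips, pesNumber, ram, bw, size, vmm, new CloudletSchedulerTimeShared());
Vm vm3 = new Vm(2, brokerId, mips, pesNumber, ram, bw, size, vmm, new CloudletSchedulerTimeShared());
vmlist.add(vm1);
vmlist.add(vm2);
vmlist.add(vm3);
broker.submitVmList(vmlist);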
hostId,
new RamProvisionerSimple(ram),
new BwProvisionerSimple(bw),
storage,
peList2,
new VmSchedulerTimeShared(peList2)
)
); // This is our second machine
List<Pe> peList3 = new ArrayList<Pe>();
peList3.add(new Pe(0, new PeProvisionerSimple(mips)));
hostId++;
hostList.add(
new Host(
hostId,
new RamProvisionerSimple(ram),
new BwProvisionerSimple(bw),
storage,
peList3,
new VmSchedulerTimeShared(peList3)
)
); // This is our third machine
// 5. Create a DatacenterCharacteristics object that stores the
// properties of a data center: architecture, OS, list of
// Machines, allocation policy: time- or space-shared, time zone
// and its price (G$/Pe time unit).
String arch = "x86"; // system architecture
String os = "Linux"; // operating system
String vmm = "Xen";
double time_zone = 10.0; // time zone where this resource is located
double cost = 3.0; // the cost of using processing in this resource
Output:
Conclusion:
From this practical, I learned how to implement different scenarios in CloudSim.
Practical 4
1. Network Example 1
Program:
package org.cloudbus.cloudsim.examples.network;
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.*;
/**
* A simple example showing how to create
* a datacenter with one host and a network
* topology, and run one cloudlet on it.
*/
public class NetworkExample1 {
/** The cloudlet list. */
private static List<Cloudlet> cloudletList;
/** The vmlist. */
private static List<Vm> vmlist;
/**
* Creates main() to run this example
*/
public static void main(String[] args) {
Log.printLine("Starting NetworkExample1...");
try {
// First step: Initialize the CloudSim package. It should be called
// before creating any entities.
int num_user = 1; // number of cloud users
Calendar calendar = Calendar.getInstance();
boolean trace_flag = false; // means trace events
// Initialize the CloudSim library
CloudSim.init(num_user, calendar, trace_flag);
// Second step: Create Datacenters
// Datacenters are the resource providers in CloudSim. We need at least one
// of them to run a CloudSim simulation
Datacenter datacenter0 = createDatacenter("Datacenter_0");
//Third step: Create Broker
DatacenterBroker broker = createBroker();
Output:
2. Network Example 2
Program:
package org.cloudbus.cloudsim.examples.network;
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.NetworkTopology;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerSpaceShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;
/**
* A simple example showing how to create
* two datacenters with one host each and
* run cloudlets of two users with network
* topology on them.
*/
public class NetworkExample3 {
/** The cloudlet list. */
private static List<Cloudlet> cloudletList1;
private static List<Cloudlet> cloudletList2;
/** The vmlist. */
private static List<Vm> vmlist1;
private static List<Vm> vmlist2;
/**
* Creates main() to run this example
*/
public static void main(String[] args) {
Log.printLine("Starting NetworkExample3...");
try {
// First step: Initialize the CloudSim package. It should be called
// before creating any entities.
int num_user = 2; // number of cloud users
Calendar calendar = Calendar.getInstance();
boolean trace_flag = false; // means trace events
// Initialize the CloudSim library
CloudSim.init(num_user, calendar, trace_flag);
// Second step: Create Datacenters
// Datacenters are the resource providers in CloudSim. We need at least one
// of them to run a CloudSim simulation
Datacenter datacenter0 = createDatacenter("Datacenter_0");
Datacenter datacenter1 = createDatacenter("Datacenter_1");
try {
broker = new DatacenterBroker("Broker"+id);
} catch (Exception e) {
e.printStackTrace();
return null;
}
return broker;
}
/**
* Prints the Cloudlet objects
* @param list list of Cloudlets
*/
private static void printCloudletList(List<Cloudlet> list) {
int size = list.size();
Cloudlet cloudlet;
String indent = " ";
Log.printLine();
Log.printLine("========== OUTPUT ==========");
Log.printLine("Cloudlet ID" + indent + "STATUS" + indent +
"Data center ID" + indent + "VM ID" + indent + "Time" + indent +
"Start Time" + indent + "Finish Time");
for (int i = 0; i < size; i++) {
cloudlet = list.get(i);
Log.print(indent + cloudlet.getCloudletId() + indent + indent);
if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS){
Log.print("SUCCESS");
DecimalFormat dft = new DecimalFormat("###.##");
Log.printLine( indent + indent + cloudlet.getResourceId() + indent + indent + indent +
cloudlet.getVmId() + indent + indent + dft.format(cloudlet.getActualCPUTime()) + indent
+ indent + dft.format(cloudlet.getExecStartTime())+ indent + indent +
dft.format(cloudlet.getFinishTime()));
            }
        }
    }
}
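For reference, the topology step these network examples rely on can be sketched as follows (CloudSim 3.x API; the BRITE file name and node IDs are illustrative assumptions):
// Build the network topology from a BRITE description file:
NetworkTopology.buildNetworkTopology("topology.brite"); // placeholder file name
// Map CloudSim entities onto BRITE topology nodes:
NetworkTopology.mapNode(datacenter0.getId(), 0);
NetworkTopology.mapNode(broker.getId(), 3);
// Alternatively, a direct link can be declared without a BRITE file:
// NetworkTopology.addLink(datacenter0.getId(), broker.getId(), 10.0, 10);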
Output:
Conclusion:
From this practical, we learned how to write network-topology simulations in CloudSim using NetBeans.
Aim: Write a program in CloudSim using the NetBeans IDE / Eclipse to create one datacenter with
four hosts, one datacenter broker, 40 cloudlets, and 10 virtual machines.
Hardware Requirement: 4GB RAM, 500GB HDD, CPU
Software Requirement: NetBeans IDE / Eclipse
Program:
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.CloudletSchedulerSpaceShared;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerSpaceShared;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;
Log.printLine("Starting CloudSimExample1...");
cloudlet.setUserId(brokerId);
cloudletList.add(cloudlet);
}
// Fourth step: Create virtual machines
vmlist = new ArrayList<Vm>();
// VM description
int vmid;
int mips = 1000;
long size = 20000; // image size (MB)
int ram = 2048; // vm memory (MB)
long bw = 1000;
int vCPU = 1; // number of cpus
String vmm = "Xen"; // VMM name
for(vmid=0;vmid<40;vmid++) {
// create VM
Vm vm = new Vm(vmid, brokerId, mips, vCPU, ram, bw, size, vmm,
new CloudletSchedulerSpaceShared());
printCloudletList(newList);
//}
}
// 4. Create Host with its id and list of PEs and add them to the list
// of machines
int hostId;
int ram = 8000; // host memory (MB)
long storage = 100000; // host storage
int bw = 8000;
for(hostId=0;hostId<4;hostId++) {
hostList.add(new Host(hostId,new RamProvisionerSimple(ram),new
BwProvisionerSimple(bw),storage,peList,
new VmSchedulerSpaceShared(peList)
)
);
} // These are our four machines
cloudlet = list.get(i);
Log.print(indent + cloudlet.getCloudletId() + indent + indent);
if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS) {
Log.print("SUCCESS");
Log.printLine(indent + indent + cloudlet.getResourceId()
+ indent + indent + indent + cloudlet.getVmId() + indent + indent
+ dft.format(cloudlet.getActualCPUTime()) + indent+ indent
+dft.format(cloudlet.getExecStartTime()) + indent + indent +
dft.format(cloudlet.getFinishTime()));
}
}
}
}
Output:
Practical 5
Aim: Create a scenario in Aneka / Eucalyptus to create a datacenter and host. Also
create virtual machines with static configuration to run cloudlets on them.
Hardware Requirement: 4GB RAM, 500GB HDD, CPU
Software Requirement: Linux OS
2. Select an image from the list (for this example, we'll select a CentOS image), then
click the Next button:
3. Select an instance type and availability zone from the Details tab. For this example,
select the defaults, and then click the Next button:
4. On the Security tab, we'll create a key pair and a security group to use with our new
instance. A key pair will allow you to access your instance, and a security group
allows you to define what kinds of incoming traffic your instance will allow.
a. First, we will create a key pair. Click the Create key pair link to bring up
the Create key pair dialog:
b. Type the name of your new key pair into the Name text box, and then click
the Create and Download button:
c. The Create key pair dialog will close, and the Key name text box will be
populated with the name of the key pair you just created:
d. Next, we will create a security group. Click the Create security group link to
bring up the Create security group dialog:
e. On the Create security group dialog, type the name of your security group into
the Name text box.
f. Type a brief description of your security group into the Description text box.
g. We'll need to SSH into our instance later, so in the Rules section of the dialog, select the
SSH protocol from the Protocol drop-down list box.
h. Note: In this example, we allow any IP address to access our new instance. For production
use, please use appropriate caution when specifying an IP range. For more information, see
the documentation on CIDR notation.
You need to specify an IP address or a range of IP addresses that can use SSH to access
your instance. For this example, click the Open to all addresses link. This will populate
the IP Address text box with 0.0.0.0/0, which allows any IP address to access your instance
via SSH.
i. Click the Add rule button. The Create security group dialog should now
look something like this:
The Create security group dialog will close, and the Security group text box
will be populated with the name of the security group you just created:
5. You're now ready to launch your new instance. Click the Launch Instance button.
The Launch Instance dialog will close, and the Instances screen will display. The
instance you just created will display at the top of the list with a status of Pending:
6. When the status of your new instance changes to Running, click the instance in the list
to bring up a page showing details of your instance. For example:
7. Note the Public IP address and/or the Public hostname fields. You will need this
information to connect to your new instance. For example:
8. Using the public IP address or hostname of your new instance, you can now use SSH
to log into the instance using the private key file you saved when you created a key
pair. For example:
Practical 6
Theory:
A Beowulf cluster is a group of what are normally identical, commercially available computers,
which are running a Free and Open Source Software (FOSS), Unix-like operating system, such
as BSD, GNU/Linux, or Solaris. They are networked into a small TCP/IP LAN, and have
libraries and programs installed which allow processing to be shared among them.
1. To get edit rights on all the Linux machines, we will log in as the root user by applying
the command on both master and slave.
2. For creating any cluster, we'll set up SSI (Single System Image) on both master and slave
machines.
3. Now, we'll check whether our machine is connected to the internet or not by applying
the command.
We can also perform this step by going to Machine -> Settings -> Network -> Secure
Connection in VirtualBox.
4. After checking internet connectivity, we'll check for SSH (Secure Shell Protocol) on both
our master and slave machines; if it is not installed, we'll install it manually.
5. Now, we'll establish NFS (Network File System) on the master by the following step.
6. After connecting to the network and following the above steps, we'll assign IP addresses to all the nodes.
We can do this by opening a window from Machine -> Edit Connection -> Set IPv4 address for
both master and slave.
Now, to check whether the cluster network works or not, we can ping the assigned IP
address.
7. Now, we'll add all the connected nodes of the cluster into the hosts file of each machine.
To check whether this step completed successfully or not, we can ping the particular machine.
8. Now on the master node, we'll install MPICH and add an MPI user so that all other slave
nodes in the cluster can be added to it:
ce109-VirtualBox:~$ sudo apt-get install mpich
ce109-VirtualBox:~$ sudo adduser mpiuser
password:
So now all the slave nodes are added to the master.
Finally, to communicate, the master will generate an SSH key.
Conclusion:
From this practical, we learned how to create a Beowulf cluster and what its applications
are.
Practical 7
Aim:- A report on Charusat Datacenter Visit.
CHARUSAT Cloud
Need
1. Resource Utilization
2. Increase Server Efficiency
3. Improve Disaster Recovery Efforts
4. Increase Business Continuity
Real implementation
HP C7000 Blade Centre housing 7 blades, each blade comprising:
• 2 Xeon Hexacore (2.66 GHz) CPUs
• 48 GB ECC RAM
• 300 GB internal hard disk
• All linked to an HP SAN Box (Storage Area Network) with a capacity of 20 TB
Six redundant power supplies
Redundant backbone fiber controller card
Redundant IO controller
LTO-5 autoloader for data backup
10 KVA online UPS with 5 hours of battery backup
Migration
Migration is a technology used for load balancing and optimization of VM deployment in
data centers. With the help of live migration, VMs can be transferred to another node
without shutting them down.
Pre-copy: memory is transferred first, and execution is transferred afterwards. The
pre-copy method transfers the memory to the destination node over a number of
iterations.
Post-copy: execution is transferred first, and memory is transferred afterwards. Unlike
pre-copy, post-copy transfers the virtual CPU and device state to the destination node
in the first step and starts execution there in the second step.
The following metrics are used to measure the performance of migration:
i) Preparation: the phase in which resources are reserved on the destination and the
various pre-transfer operations are performed.
ii) Downtime: the time during which the VM on the source host is suspended.
iii) Resume: the VM is instantiated on the destination with the same state as the
suspended source.
iv) Total time: the total time taken to complete all these phases is called the total
migration time.
Virtualization
Figure 1: Virtualization
Virtualization essentially means creating multiple logical instances of software or
hardware on a single physical hardware resource. This technique simulates the available
hardware and gives every application running on top of it the feel that it is the unique
holder of the resource. The details of the virtual, simulated environment are kept
hidden from the applications running on it.
SAN
A storage-area network (SAN) is a dedicated high-speed network (or subnetwork) that
interconnects and presents shared pools of storage devices to multiple servers.
A SAN moves storage resources off the common user network and reorganizes them into
an independent, high-performance network. This allows each server to access shared
storage as if it were a drive directly attached to the server. When a host wants to access a
storage device on the SAN, it sends out a block-based access request for the storage device.
Practical 8
Aim: Create a sample mobile application using a Microsoft Azure account as a cloud service.
Also provide database connectivity with the implemented mobile application.
Create a sample mobile application using an Amazon Web Services
(AWS) account as a cloud service. Also provide database connectivity
with the implemented mobile application.
Part 1:
1. Click Create a new Project in the AWS Mobile Hub console. Provide a name for your
project.
2. Click NoSQL Database.
3. Click Enable NoSQL.
4. Click Add a new table.
5. Click Start with an example schema.
6. Select Notes as the example schema.
7. Select Public for the permissions (we won’t add sign-in code to this app).
8. Click Create Table, and then click Create Table in the dialog box.
Even though the table you created has a userID, the data is stored unauthenticated in this
example. If you were to use this app in production, use Amazon Cognito to sign the user in to
your app, and then use the userID to store the authenticated data.
In the left navigation pane, click Resources. AWS Mobile Hub created an Amazon Cognito
identity pool, an IAM role, and a DynamoDB database for your project. Mobile Hub also
linked the three resources according to the permissions you selected when you created the
table. For this demo, you need the following information:
These are stored in the application code and used when connecting to the database.
Now that you have a mobile app backend, it’s time to look at the frontend. We have a memo
app that you can download from GitHub. First, add the required libraries to the dependencies
section of the application build.gradle file:
compile 'com.amazonaws:aws-android-sdk-core:2.4.4'
compile 'com.amazonaws:aws-android-sdk-ddb:2.4.4'
compile 'com.amazonaws:aws-android-sdk-ddb-document:2.4.4'
Set up the connection to the Amazon DynamoDB table by creating an Amazon Cognito
credentials provider (for appropriate access permissions), and then creating
a DynamoDbClient object. Finally, create a table reference:
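As a hedged sketch of this step (AWS Mobile SDK for Android; the region and identity pool ID are placeholder assumptions):
import com.amazonaws.auth.CognitoCachingCredentialsProvider;
import com.amazonaws.mobileconnectors.dynamodbv2.document.Table;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;

// Credentials provider tied to the Mobile Hub identity pool (placeholder ID):
CognitoCachingCredentialsProvider credentialsProvider =
        new CognitoCachingCredentialsProvider(
                getApplicationContext(),
                "us-east-1:00000000-0000-0000-0000-000000000000", // placeholder
                Regions.US_EAST_1);                               // assumed region
// Low-level client plus a document-API table reference:
AmazonDynamoDBClient dbClient = new AmazonDynamoDBClient(credentialsProvider);
Table dbTable = Table.loadTable(dbClient, "Notes"); // table from the example schema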
You can now perform CRUD (Create, Read, Update, Delete) operations on the table:
/** Create or update a memo; the Notes schema keys are filled in if missing: */
if (!memo.containsKey("userId")) {
    memo.put("userId", credentialsProvider.getCachedIdentityId());
}
if (!memo.containsKey("noteId")) {
    memo.put("noteId", UUID.randomUUID().toString());
}
if (!memo.containsKey("creationDate")) {
    memo.put("creationDate", System.currentTimeMillis());
}
dbTable.putItem(memo);
/** Delete a memo by its keys: */
dbTable.deleteItem(
    memo.get("userId").asPrimitive(),   // partition key
    memo.get("noteId").asPrimitive());  // sort key
/** Retrieve all memos for the current user: */
return dbTable.query(new Primitive(credentialsProvider.getCachedIdentityId())).getAllResults();
There are two mechanisms for searching the dataset: scan and query. The query() method
uses indexed fields within the DynamoDB table to rapidly retrieve the appropriate
information. The scan() method is more flexible. It allows you to search on every field, but it
can run into performance issues when searching large amounts of data. This results in a worse
experience for your users because data retrieval will be slower. For the best experience, index
fields that you intend to search often and use the query() method.
The Notes schema in DynamoDB usually segments data on a per-user basis. The app works
with both authenticated and unauthenticated users by using
the .getCachedIdentityId() method. This method stores the current user identity with every
new note that is created.
Android does not allow you to perform network requests on the main UI thread. You must
wrap each operation in an AsyncTask. For example:
/** Async Task body for creating a memo in DynamoDB (runs off the UI thread): */
@Override
protected Void doInBackground(Document... documents) {
    databaseAccess.create(documents[0]);
    return null;
}
You can initiate a save operation by instantiating the appropriate AsyncTask and then calling
.execute():
/** Save the memo, creating it first if it does not exist yet: */
if (memo == null) {
    newMemo.put("content", etText.getText().toString());
    task.execute(newMemo);
} else {
    memo.put("content", etText.getText().toString());
    task.execute(memo);
}
this.finish();
Similarly, you can retrieve a list of memos on an AsyncTask and pass the memos back to a
method in the MainActivity to populate the UI:
/**
 * Async Task for handling the network retrieval of all the memos in DynamoDB
 */
@Override
protected List<Document> doInBackground(Void... params) {
    return databaseAccess.getAllMemos();
}
@Override
protected void onPostExecute(List<Document> documents) {
    if (documents != null) {
        populateMemoList(documents);
    }
}
You can find this sample in our GitHub repository. To run the sample:
Part 2:
You are all set to use this newly created App Service Web app as a Mobile app.
Create a database connection and configure the client and server project
iOS (Objective-C)
iOS (Swift)
Android (Java)
Xamarin.iOS
Xamarin.Android
Xamarin.Forms
Cordova
Windows (C#)
Note:
o Existing data source: Follow the instructions below if you want to use an
existing database connection.
1. SQL Database connection string format:
Data Source=tcp:{your_SQLServer},{port};Initial Catalog={your_catalogue};User ID={your_username};Password={your_password}
2. Add the connection string to your mobile app. In App Service, you can
manage connection strings for your application by using
the Configuration option in the menu.
1. In the Azure portal, go to Easy Tables, you will see this screen.
o .NET backend
If you’re going to use .NET quickstart app, follow the instructions below.
1. Download the Azure Mobile Apps .NET server project from the
azure-mobile-apps-quickstarts repository.
2. Build the .NET server project locally in Visual Studio.
3. In Visual Studio, open Solution Explorer, right-click
on ZUMOAPPNAMEService project, click Publish, you will see a Publish to
App Service window. If you are working on Mac, check out other ways
to deploy the app here.
4. Select App Service as publish target, then click Select Existing, then
click the Publish button at the bottom of the window.
5. You will need to log into Visual Studio with your Azure subscription first.
Select the Subscription, Resource Group, and then select the name of your
app. When you are ready, click OK, this will deploy the .NET server
project that you have locally into the App Service backend. When
deployment finishes, you will be redirected
to http://{zumoappname}.azurewebsites.net/ in the browser.
1. Open the project using Android Studio, using Import project (Eclipse ADT,
Gradle, etc.). Make sure you make this import selection to avoid any JDK errors.
2. Open the file ToDoActivity.java in this folder -
ZUMOAPPNAME/app/src/main/java/com/example/zumoappname. The
application name is ZUMOAPPNAME.
3. Go to the Azure portal and navigate to the mobile app that you created. On
the Overview blade, look for the URL which is the public endpoint for your mobile
app. Example - the sitename for my app name "test123" will
be https://test123.azurewebsites.net.
4. In the onCreate() method, replace the ZUMOAPPURL parameter with the public endpoint
above, so the client construction becomes:
new MobileServiceClient("https://test123.azurewebsites.net",
this).withFilter(new ProgressFilter());
5. Press the Run 'app' button to build the project and start the app in the Android
simulator.
6. In the app, type meaningful text, such as Complete the tutorial and then click the
'Add' button. This sends a POST request to the Azure backend you deployed
earlier. The backend inserts data from the request into the TodoItem SQL table,
and returns information about the newly stored items back to the mobile app.
The mobile app displays this data in the list.
Practical 9
Aim: To find city wise maximum temperature of cities using Hadoop Cluster and Map
Reduce.
Nowadays, data processing and computing have become crucial technologies in
enterprises and in critical business decision making. Suppose the Government of India
has placed temperature sensors as part of the Digital India / smart city project. These
temperature sensors collect data on an hourly basis and send it to a server for storage
and further processing, so they generate huge amounts of structured data. Now suppose a
researcher or the government wants to find the maximum temperature of cities over a
given duration. Traditional single-node systems fall short here, due to both the huge
data volume and the heavy computation. In this case, data processing and computing can
be implemented in a distributed as well as parallel manner using the popular Hadoop
MapReduce model. This provides cost effectiveness as well as better performance within
the given time constraints.
Hardware Requirements: 4GB RAM, 500GB HDD, CPU
Software Requirements: Java 6 JDK, Hadoop requires a working Java 1.5+ (aka Java 5)
installation.
Check whether the Java JDK is correctly installed or not with the following command:
user@ubuntu:~$ java -version
Adding a dedicated Hadoop system user
We will use a dedicated Hadoop user account for running Hadoop.
This will add the user hduser1 and the group hadoop_group to the local machine. Add hduser1 to
the sudo group.
The second line will create an RSA key pair with an empty password.
Note:
The -P "" option here indicates an empty password.
You have to enable SSH access to your local machine with this newly created key which is done
by the following command.
hduser1@ubuntu:~$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
The final step is to test the SSH setup by connecting to the local machine with the hduser1 user.
This step is also needed to save your local machine's host key fingerprint to the hduser1 user's
known_hosts file.
hduser@ubuntu:~$ ssh localhost
INSTALLATION
Main Installation
Now, I will start by switching to hduser1:
hduser@ubuntu:~$ su - hduser1
Now, download and extract Hadoop 1.2.0.
Setup Environment Variables for Hadoop
Add the following entries to the .bashrc file:
# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop
Set JAVA_HOME in the same file, replacing the old entry
# export JAVA_HOME=/usr/lib/j2sdk1.5-sun
with:
# export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-amd64 (for 64 bit)
# export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-i386 (for 32 bit)
conf/*-site.xml
Now we create the directory and set the required ownerships and permissions.
In file conf/core-site.xml
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose scheme and authority
determine the FileSystem implementation. The uri's scheme determines the config property
(fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used
to determine the host, port, etc. for a filesystem.</description>
</property>
In file conf/mapred-site.xml
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job
tracker runs at. If "local", then jobs are run in-process
as a single map
and reduce task.
</description>
</property>
In file conf/hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the
file is created. The default is used if replication is not
specified in create time.
</description>
</property>
Formatting the HDFS filesystem via the NameNode
To format the filesystem (which simply initializes the directory specified by the dfs.name.dir
variable), run the command:
hduser@ubuntu:~$ /usr/local/hadoop/bin/hadoop namenode -format
Errors:
1. If by chance your datanode is not starting, then you have to erase the contents of
the folder /app/hadoop/tmp. The command that can be used:
hduser@ubuntu:~$ sudo rm -Rf /app/hadoop/tmp/*
2. You can also check with netstat whether Hadoop is listening on the configured ports.
The command that can be used:
hduser@ubuntu:~$ sudo netstat -plten | grep java
3. For any other errors, examine the log files in the /logs/ directory.
Stopping your single-node cluster
Run the command to stop all the daemons running on your machine:
hduser@ubuntu:~$ /usr/local/hadoop/bin/stop-all.sh
ERROR POINTS:
If the datanode is not starting, then clear the tmp folder before formatting the namenode
using the following command:
hduser@ubuntu:~$ rm -Rf /app/hadoop/tmp/*
Note:
Problem Statement : Find the max temperature of each city using MapReduce
Mapper:
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
@Override
protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
    // each input record looks like "City,Temperature"
    StringTokenizer line = new StringTokenizer(value.toString(), ",");
    word.set(line.nextToken());                  // city name
    max.set(Integer.parseInt(line.nextToken())); // temperature reading
    context.write(word, max);
}
}
Reducer:
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
@Override
protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
    int temp;
    int max_temp = Integer.MIN_VALUE;
    Iterator<IntWritable> itr = values.iterator();
    while (itr.hasNext()) {
        temp = itr.next().get();
        if (temp > max_temp) {
            max_temp = temp;
        }
    }
    // emit the city together with its maximum temperature
    context.write(key, new IntWritable(max_temp));
}
}
Driver Class:
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
// Create a new job
Job job = new Job();
// Set input and output Path, note that we use the default input format
// which is TextInputFormat (each record is a line of input)
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
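The driver fragment above omits the mapper/reducer wiring; a hedged sketch of a complete driver body follows (MaxTempMapper, MaxTempReducer, and MaxTempDriver are assumed names for the classes above):
// Hedged sketch: full job wiring for the mapper and reducer shown above.
// MaxTempMapper, MaxTempReducer and MaxTempDriver are assumed class names.
Job job = new Job();
job.setJarByClass(MaxTempDriver.class);
job.setJobName("citywise-max-temperature");
job.setMapperClass(MaxTempMapper.class);
job.setReducerClass(MaxTempReducer.class);
job.setOutputKeyClass(Text.class);          // city name
job.setOutputValueClass(IntWritable.class); // maximum temperature
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);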
Input:
Kolkata,56
Jaipur,45
Delhi,43
Mumbai,34
Goa,45
Kolkata,35
Jaipur,34
Delhi,32
Output:
Kolkata 56
Jaipur 45
Delhi 43
Mumbai 34
Practical 10
Aim: Implementation and configuration of a high-availability web server cluster for web
services.
Hardware Requirement: 4GB RAM, 500GB HDD, CPU
Software Requirement: Ubuntu 14.04 servers on your DigitalOcean account.
Introduction
Prerequisites
Both servers must be located within the same datacenter and should have private
networking enabled.
On each of these servers, you will need a non-root user configured with sudo
access. You can follow our Ubuntu 14.04 initial server setup guide to learn how
to set up these users.
When you are ready to get started, log into both of your servers with your non-root user.
Install and Configure Nginx
While keepalived is often used to monitor and failover load balancers, in order to
reduce our operational complexity, we will be using Nginx as a simple web server
in this guide.
Start off by updating the local package index on each of your servers. We can then install
Nginx:
sudo apt-get update
In most cases, for a highly available setup, you would want both servers to
serve exactly the same content. However, for the sake of clarity, in this guide
we will use Nginx to indicate which of the two servers is serving our requests
at any given time. To do this, we will change the default index.html page on
each of our hosts. Open the file now:
sudo nano /usr/share/nginx/html/index.html
On your first server, replace the contents of the file with this:
Primary server's /usr/share/nginx/html/index.html
<h1>Primary</h1>
On your second server, replace the contents of the file with this:
Secondary server's /usr/share/nginx/html/index.html
<h1>Secondary</h1>
Save and close the files when you are finished.
Build and Install Keepalived
Next, we will install the keepalived daemon on our servers. There is a version of
keepalived in Ubuntu's default repositories, but it is outdated and suffers from a few
bugs that prevent our configuration from working. Instead, we will install the latest
version of keepalived from source.
Before we begin, we should grab the dependencies we will need to build the
software. The build-essential meta-package will provide the compilation tools we
need, while the libssl-dev package contains the SSL libraries that keepalived needs
to build against:
sudo apt-get install build-essential libssl-dev
Once the dependencies are in place, we can download the tarball for keepalived.
Visit this page to find the latest version of the software. Right-click on the latest
version and copy the link address. Back on your servers, move to your home
directory and use wget to grab the link you copied:
cd ~
wget http://www.keepalived.org/software/keepalived-1.2.19.tar.gz
Use the tar command to expand the archive and then move into the resulting directory:
tar xzvf keepalived*
cd keepalived*
We can create a very simple Upstart script that can handle our keepalived service.
Open a file called keepalived.conf within the /etc/init directory to get started:
sudo nano /etc/init/keepalived.conf
start on runlevel [2345]
stop on runlevel [!2345]
Because this service is integral to ensuring our web service remains available, we
want to restart this service in the event of a failure. We can then specify the actual
exec line that will start the service. We need to add the --dont-fork option so that
Upstart can track the pid correctly:
/etc/init/keepalived.conf
description "load-balancing and high-availability service"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /usr/local/sbin/keepalived --dont-fork
Before we create the configuration file, we need to find the private IP addresses
of both of our servers. On DigitalOcean servers, you can get your private IP
address through the metadata service by typing:
curl http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address && echo
Output
10.132.7.107
This can also be found with the iproute2 tools by typing:
ip -4 addr show dev eth1
Output
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
pfifo_fast state UP group default qlen 1000
inet 10.132.7.107/16 brd 10.132.255.255 scope global eth1
valid_lft forever preferred_lft forever
Copy this value from both of your systems. We will need to reference these
addresses inside of our configuration files below.
nginx"
interval 2
}
Next, we will open a block called vrrp_instance. This is the main configuration
section that defines the way that keepalived will implement high availability.
We will start off by telling keepalived to communicate with its peers over eth1,
our private interface. Since we are configuring our primary server, we will set the
state configuration to "MASTER". This is the initial value that keepalived will use
until the daemon can contact its peer and hold an election.
During the election, the priority option is used to decide which member is elected.
The decision is simply based on which server has the highest number for this
setting. We will use "200" for our primary server.
Next, we will assign an ID for this cluster group that will be shared by both nodes.
We will use "33" for this example. We need to set unicast_src_ip to our primary
server's private IP address that we retrieved earlier. We will set unicast_peer to our
secondary server's private IP address:
Primary server's /etc/keepalived/keepalived.conf
vrrp_script chk_nginx {
    script "pidof nginx"
    interval 2
}
vrrp_instance VI_1 {
    interface eth1
    state MASTER
    priority 200
    virtual_router_id 33
    unicast_src_ip primary_private_IP
    unicast_peer {
        secondary_private_IP
    }
}
Next, we can set up some simple authentication for our keepalived daemons to
communicate with one another. This is just a basic measure to ensure that the servers
in question are legitimate. Create an authentication sub-block. Inside, specify
password authentication by setting the auth_type. For the auth_pass parameter, set
a shared secret that will be used by both nodes. Unfortunately, only the first eight
characters are significant:
vrrp_script chk_nginx {
    script "pidof nginx"
    interval 2
}
vrrp_instance VI_1 {
    interface eth1
    state MASTER
    priority 200
    virtual_router_id 33
    unicast_src_ip primary_private_IP
    unicast_peer {
        secondary_private_IP
    }
    authentication {
        auth_type PASS
        auth_pass password
    }
Next, we will tell keepalived to use the routine we created at the top of the file,
labeled chk_nginx, to determine the health of the local system. Finally, we will set
a notify_master script, which is executed whenever this node becomes the "master"
of the pair. This script will be responsible for triggering the floating IP address
reassignment. We will create this script momentarily:
virtual_router_id 33
unicast_src_ip primary_private_IP
unicast_peer {
    secondary_private_IP
}
authentication {
    auth_type PASS
    auth_pass password
}
track_script {
    chk_nginx
}
notify_master /etc/keepalived/master.sh
Copy the token now. For security purposes, there is no way to display this token
again later. If you lose this token, you will have to destroy it and create another
one.
A new floating IP address will be created in your account and assigned to the
Droplet specified:
If you visit the floating IP in your web browser, you should see the "primary" server's
index.html page:
Copy the floating IP address down. You will need this value in the script below.
Create the Wrapper Script
Now, we have the items we need to create the wrapper script that will call our
/usr/local/bin/assign-ip script with the correct credentials.
Create the file now on both servers by typing:
sudo nano /etc/keepalived/master.sh
Inside, start by assigning and exporting a variable called DO_TOKEN that holds
the API token you just created. Below that, we can assign a variable called IP that
holds your floating IP address:
/etc/keepalived/master.sh
export DO_TOKEN='digitalocean_api_token'
IP='floating_ip_addr'
Next, we will use curl to ask the metadata service for the Droplet ID of the server
we're currently on. This will be assigned to a variable called ID. We will also ask
whether this Droplet currently has the floating IP address assigned to it. We will
store the results of that request in a variable called HAS_FLOATING_IP:
/etc/keepalived/master.sh
export DO_TOKEN='digitalocean_api_token'
IP='floating_ip_addr'
ID=$(curl -s http://169.254.169.254/metadata/v1/id)
HAS_FLOATING_IP=$(curl -s http://169.254.169.254/metadata/v1/floating_ip/ipv4/active)
Now, we can use the variables above to call the assign-ip script. We will only call
the script if the floating IP is not already associated with our Droplet. This will help
minimize API calls and will help prevent conflicting requests to the API in cases
where the master status switches between your servers rapidly.
To handle cases where the floating IP already has an event in progress, we will retry
the assign-ip script a few times. Below, we attempt to run the script 10 times, with a
3-second interval between each call. The loop will end immediately if the floating IP
move is successful:
/etc/keepalived/master.sh
export DO_TOKEN='digitalocean_api_token'
IP='floating_ip_addr'
ID=$(curl -s http://169.254.169.254/metadata/v1/id)
HAS_FLOATING_IP=$(curl -s http://169.254.169.254/metadata/v1/floating_ip/ipv4/active)
The service should start up on each server and contact its peer, authenticating with
the shared secret we configured. Each daemon will monitor the local Nginx
process, and will listen to signals from the remote keepalived process.
When both servers are healthy, if you visit your floating IP in your web browser,
you should be taken to the primary server's Nginx page:
If the primary server later recovers, it will transition back to the master state and
reclaim the floating IP because it will initiate a new election (it will still have the
highest priority number).
Testing Nginx Failure
We can test the first condition by stopping the Nginx service on the primary server:
sudo service nginx stop
If you refresh your web browser, you might initially get a response indicating that
the page is not available:
However, after just a few seconds, if you refresh the page a few times, you will
see that the secondary server has claimed the floating IP address:
A moment later, when the primary server finishes rebooting, it will reclaim the IP
address:
Practical 11
Aim: Create Virtual Machine using Xen Hypervisor
Hardware Requirement: 4GB RAM, 500GB HDD, CPU
Software Requirement: VM Ware Workstation
Here, a scenario is designed for virtualization with the help of the Xen hypervisor. Two Xen
servers are configured: on Xen server-1 three virtual machines (VMs) are created, and on
server-2 only one VM is created. Now let's see the steps to configure this scenario.
Step-1:-
1. Open VMware Workstation
2. Create a new virtual machine
3. Select Installer disc image file (.iso)
4. Select the guest operating system Linux (Ubuntu 64-bit)
5. Write the virtual machine name (Xen-Server-1)
6. Specify the disk capacity (100 GB) & select the "Store virtual disk as a single file" radio button
7. Assign 8 GB RAM or more to the server.
8. Start the XenServer VM in VMware (the installation process starts as shown below).
Step-2:-
Here you have to select the keyboard layout or keyboard type & click OK.
Step-3:-
It is a simple welcome screen for XenServer Setup. Click OK.
Step-4:-
Here is the End User License Agreement (between user & provider), so read it and click Accept
EULA.
Step-5:-
Here, we have to assign storage space for the virtual machine. For our scenario, 100 GB of
VMware virtual storage is assigned. Click OK.
Note:- Here we install XenServer in VMware, so you can see 100 GB [VMware, VMware Virtual S];
but if instead of VMware you install on a physical machine, then you have to assign storage from
the main hard disk of the machine. You will then see an option like 100 GB [HP Logical Volume]
instead of 100 GB [VMware, VMware Virtual S].
Step-6:-
Here, you have to select the media or source which the system will use for installation; here I
select Local Media. Click OK.
Step-7:-
One more option you have to select is for supplemental packs. For our configuration there is no
requirement for a supplemental pack, so click No.
Step-8:-
This is for testing purposes, so select Skip Verification and click OK.
Step-9:-
Here, you have to set the root password and confirm it. This password is used later for various
server-related tasks; for instance, the root password is required to shut the server down.
Step-10:-
There are two ways of networking configuration, static & dynamic. In the static way you have to
assign the IP address, subnet mask and gateway, while in automatic configuration mode all the
related addresses are obtained automatically using DHCP. Here select Automatic
configuration (DHCP) and click OK.
Note:- If you install the server directly on a machine and you have multiple NIC cards, then you
have to select one NIC card before you get the network configuration tab (Step-10).
Step-11:-
Now, you have to configure the host name and DNS address. In our case, I chose the Manually
specify option for easier management of multiple servers in XenCenter. Select Manually specify and
write IT1.
In the DNS configuration select Automatically set via DHCP and click OK.
Step-12:-
Select the time zone according to the geographical location of the XenServer. Select Asia and click OK.
Step-13:-
Select the time zone according to the city, so select Kolkata and click OK.
Step-14:-
Select the network time. Here I chose Using NTP (Network Time Protocol), which helps to
synchronise time over the network, so select Using NTP and click OK.
Note:- If you select the Manual time entry option, then you have to enter the time after Step-15,
in which you first have to assign the NTP server address.
Step-15:-
Step-16:-
Now, you have to give final confirmation of our previous configuration. Select Install
XenServer and click OK.
Step-17:-
1. Make sure that your server has booted the Xen kernel. Next, run the virt-manager
command to start Virtual Machine Manager. This will give you an interface, as in the
figure.
Select I need to install an operating system to start installation of a new virtual machine
machine that has one CPU only, you can specify that here.
Both the amount of memory and the amount of CPUs available to a virtual machine can
be changed easily later.
9. One of the most important choices when setting up a virtual machine is the disk
that you want to use. The default choice of the installer is to create a disk image file
in the directory /var/lib/xen/images. This is fine, but for performance reasons, it's a
good idea to set up LVM volumes and use an LVM volume as the virtualized disk.
To keep setting up the virtual machine easy, in this article we'll configure a virtual
disk based on a disk image file. Click the link Disks. This gives an overview in
which you can see the disk that the installer has created for you.
*Note: Here's a tip. Want to use your virtual machines in a data center? Put the disk
image files on the SAN, which makes migrating a virtual machine to another host
much easier!
To change disk properties, such as the size or location of the disk file, select the
virtual disk and click "Edit." Change the disk properties according to your needs
now.
As you can see in figure 5, the installation wizard doesn't give you access to an
optical drive by default. You may want to set this up anyway, if only to be able to
perform the installation from the installation DVD! Click CD-ROM and select the
medium you want to use as the optical drive within the virtual machine. By default
this is
/dev/cdrom on the host operating system. If you want to install from an ISO file,
use the Open button to browse to the location of the ISO file.
In the Network Adapters part of the summary window, you'll see that a
paravirtualized network adapter has been added automatically. We'll talk about
network adapters later, so let's just keep it this way now.
After installing the virtual operating system, you can access it from Virtual Machine Manager. Later in this series, you'll read more about all the management options you have from this interface and from the command line as well.
Practical 12
Aim: Implementing Container based virtualization using Docker
Docker containers can run natively on Linux and Windows. However, Windows images can run only on Windows hosts, while Linux images can run on Linux hosts and on Windows hosts (using a Hyper-V Linux VM, so far), where host means a server or a VM.
machine. In this configuration, the kernel of the container host isn't shared with the Hyper-V Containers, providing better isolation.
The images for these containers are created the same way and function the same. The difference is in how the container is created from the image: running a Hyper-V Container requires an extra parameter. For details, see Hyper-V Containers.
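For example, on a Windows host the isolation mode is chosen with the --isolation flag when creating the container; the image placeholder follows the conventions used later in this practical:
$docker run --isolation=hyperv [windows image]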
Comparing Docker containers with virtual machines
The figure "Comparison of traditional virtual machines to Docker containers" shows a comparison between VMs and Docker containers.
Because containers require far fewer resources (for example, they don't need a
full OS), they're easy to deploy and they start fast. This allows you to have
higher density, meaning that it allows you to run more services on the same
hardware unit, thereby reducing costs.
As a side effect of running on the same kernel, you get less isolation than VMs.
The main goal of an image is that it makes the environment (dependencies) the
same across different deployments. This means that you can debug it on your
machine and then deploy it to another machine with the same environment
guaranteed.
When using Docker, you won't hear developers say, "It works on my machine,
why not in production?" They can simply say, "It runs on Docker", because the
packaged Docker application can be executed on any supported Docker
environment, and it runs the way it was intended to on all deployment targets
(such as Dev, QA, staging, and production).
A simple analogy
Perhaps a simple analogy can help you grasp the core concept of Docker. Let's go back in time to the 1950s for a moment. There were no word processors, and photocopiers were used everywhere (kind of).
Imagine you're responsible for quickly issuing batches of letters as required, to mail them to customers, using real paper and envelopes, to be delivered physically to each customer's address (there was no email back then).
At some point, you realize the letters are just a composition of a large set of
paragraphs, which are picked and arranged as needed, according to the purpose of
the letter, so you devise a system to issue letters quickly, expecting to get a hefty
raise.
1. You begin with a deck of transparent sheets containing one paragraph each.
2. To issue a set of letters, you pick the sheets with the paragraphs you
need, then you stack and align them so they look and read fine.
3. Finally, you place the set in the photocopier and press start to produce as many
letters as required.
So, simplifying, that's the core idea of Docker.
In Docker, each layer is the resulting set of changes that happen to the filesystem after executing a command, such as installing a program.
So, when you "look" at the filesystem after the layer has been copied, you see all the files, including the ones added by the layer in which the program was installed.
You can think of an image as an auxiliary read-only hard disk ready to be installed in
a "computer" where the operating system is already installed.
Similarly, you can think of a container as the "computer" with the image hard disk
installed. The container, just like a computer, can be powered on or off.
First Docker container
Docker can be run from terminal. Docker run command will create the
container using the image user specifies, then spin up the container and
run it.Simple example of giving a command to run docker image can look
like this:
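For instance, using the busybox:1.24 image that appears throughout this practical:
$docker run busybox:1.24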
When we use an image to run a container, Docker first looks through the local machine to find the image. If Docker can't find the image locally, it will download it from a remote registry. To check which images we have locally we can type:
$docker images
We can see that the image wasn't found locally, so it was pulled from the remote registry. If we run the command again, execution will be a lot faster, since the image is now available locally.
The -i flag starts an interactive container. The -t flag creates a pseudo-TTY that attaches standard input (stdin) for reading input and standard output (stdout) for printing output.
Commands to run:
$docker run -i -t busybox:1.24
$docker ps
$docker ps -a
The -a flag shows stopped containers as well.
In order to give a container a specific name, we can use the command:
$docker run --name ello busybox:1.24
This will give that particular name to the container.
Container names
I had to use the IP address given in the Ubuntu console, since I was using a cloud instance in order to access Docker and the Linux distribution.
The docker logs command prints a container's output:
$docker logs [container name]
There are three main steps to achieve the goal: we have to spin up a container from the base image, install the Git package in the container, and commit the changes made in the container.
docker commit is a command used to save the changes made to the Docker container file system to a new image.
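A sketch of those three steps; the container name git-container, the ubuntu base image, and the new image name ubuntu-git are all illustrative:
$docker run -it --name git-container ubuntu
(inside the container) apt-get update && apt-get install -y git
(inside the container) exit
$docker commit git-container ubuntu-git
Here ubuntu-git is the new image with Git installed.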
By default, Docker searches for the Dockerfile in the build context path. Docker executes all the instructions written in the file. Then it creates a new container from the base image. The Docker daemon runs each instruction in a different container: for each instruction, the Docker daemon creates a container, runs the instruction, commits the new layer to the image, and removes the container.
Each RUN command executes its command on the top writable layer of the container and then commits the container as a new image, so each RUN command creates a new image layer. It is important to remember to list packages alphanumerically; this keeps the build process fast and the Dockerfile maintainable. The CMD instruction specifies what command the user wants to run when the container starts up; if no CMD instruction is specified in the Dockerfile, Docker will use the default command defined in the base image.
If the instructions do not change, Docker will reuse the same cached layer when building an image.
The ADD instruction lets you not only copy files but also download a file from the internet and add it to the container.
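A minimal Dockerfile sketch illustrating RUN, ADD, and CMD; the base image, package names, and download URL are only illustrative:
FROM ubuntu
RUN apt-get update && apt-get install -y \
    curl \
    git
ADD http://example.com/archive.tar.gz /opt/
CMD ["bash"]
Here each RUN creates a new image layer, ADD fetches a remote file into the image, and CMD sets the default command executed when the container starts.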
All the dependencies are managed by Docker, so there is no need to install, for example, Python on the local machine in order to work with it.
Docker compose
Container links allow containers to locate each other and securely transfer information about one container to another. When we set up a link, we create a pipeline between the source container and the recipient container. The recipient container can then access selected data about the source container. In our case the rattus container is the source container and our container is the recipient. The links are established by using container names.
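A sketch of such a link, with rattus as the source container; the redis image and the alias db are assumptions:
$docker run -d --name rattus redis
$docker run -it --link rattus:db busybox:1.24
Inside the recipient container, Docker injects environment variables and a hosts entry for the alias db that point to rattus.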
The main use for Docker container links is when we build an application with a micro-service architecture: we are able to run many independent components in different containers. Docker creates a secure tunnel between the containers that doesn't need to expose any ports externally on the container.
Docker Compose is a very important component which is made in order to run multi-container Docker applications. With the help of Docker Compose we can define all the containers in a single YAML file (docker-compose.yml).
You can check if you have Docker Compose installed simply by running a command such as:
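$docker-compose --version
This prints the installed Docker Compose version, or an error if it is missing.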
If it is not installed, the output will tell you that installation is required. The next step is to create a docker-compose.yml file.
Structure of a docker-compose.yml file
There are three versions of the Docker Compose file format: version 1, a legacy format which does not support volumes or networks; version 2; and version 3, which is the most up-to-date format and the recommended one. Next we define the services that make up our application. For each service we should provide instructions on how to build and run its container. We have two services: example and redis.
The first instruction is the build instruction, which gives the path to the directory containing the Dockerfile used to build the Docker image. The second instruction is ports, which defines what ports to expose to the external network. depends_on comes next, since in this example our container is a client of redis and we need to start redis beforehand.
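A minimal sketch of such a docker-compose.yml for the two services described here; the build path and port numbers are assumptions:
version: '3'
services:
  example:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - redis
  redis:
    image: redis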
Docker-compose file
Dockerizing React application
There are a bunch of articles which can help with building and deploying React applications, and basically any applications. The most important steps are explained below. [11]
Step 1
$ npm install -g create-react-app
Someone who has worked with React might find these steps very familiar.
Step 2 (creating a folder)
$ mkdir myApp
Writing Dockerfile
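A minimal sketch of such a Dockerfile, consistent with the run commands below; the node base image, the app_env build argument, and the /frontend layout are assumptions:
FROM node
ARG app_env
ENV APP_ENV $app_env
WORKDIR /frontend
COPY . /frontend
RUN npm install
EXPOSE 3000
CMD ["npm", "start"]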
So what does the code mean? [11]
$ docker build ./
Type $ docker images to find out the image id.
After the following command you should be able to run your container at localhost:3000:
$ docker run -it -p 3000:3000 -v [put your path here]/frontend/src:/frontend/src [image id]
To build a production image, run:
$ docker build ./ --build-arg app_env=production
To run the production image:
$ docker run -i -t -p 3000:3000 [image id]
And worry not if you make a mistake in your file: the image won't be built.
The next step is to log in to Docker Hub. You will be able to see your image in both Docker Cloud and Docker Hub.
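A sketch of the login-tag-push sequence; the repository name my-react-app is illustrative:
$docker login
$docker tag [image id] [your username]/my-react-app
$docker push [your username]/my-react-app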
Image push
Repository interface
Docker Networks
Docker has three built-in networks:
bridge
host
none
You can specify which network you want to use with the --net option.
The none network means that the container is isolated. To run a container on it, use:
$docker run -d --net none [your image]
Containers in the same network can connect to each other. Containers from other networks can't connect to containers in the given one.
Bridge is the default network: if no network is specified, this is the network type you are creating. Usually this kind of network is used for single containers which need a way to communicate.
Host is a network that removes the network isolation between the container and the Docker host; it is the least protected network. Containers of this kind are usually called open containers.
An overlay network connects multiple Docker daemons and enables swarm services to talk to each other. It allows communication between two standalone containers on different Docker daemons.
A macvlan network allows assigning a MAC address to your container, which makes your container appear as a physical device on your network. It's usually the best choice when dealing with applications that have to be directly connected to the physical network.
Creating a custom network
In order to create a custom network, use the command: [14]
$docker network create --driver [your driver choice] [your network]
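For instance, to create a bridge network and attach a container to it (the network name my-net is illustrative):
$docker network create --driver bridge my-net
$docker run -d --net my-net [your image]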
Loopback interface
Private interface
Containers in the same network can communicate with each other. We can define networks in the docker-compose file as well.
Networks are defined similarly to other services, with sub being the name of my network. The network should also be referenced in the other sections where it is used, as in the sketch below.
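A minimal sketch, assuming the same example service as before:
version: '3'
services:
  example:
    build: .
    networks:
      - sub
networks:
  sub: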
You can create two networks, which will provide network isolation between services.
or not. The main concerns are that Docker is missing important security and data management features. On the other hand, Docker is being developed at a very fast pace. In the case studies mentioned previously it can be verified that Docker in production can work and is in fact quite efficient.
Technically it is possible to run many different processes in one container, but it is better to run one specific process in each: it is easier to use containers with only one functionality. You can always spin up a container to reuse in some other project, but you can't really spin up a container which already has your database when you don't need it in another place. It is also easier to debug and find mistakes in one component than in the whole application. A benefit of Docker containers is their small size, so it is good to keep it that way, especially when many containers have to be deployed and updated at the same time. The most important part, though, is to remember about security: once you deploy your containers to production, be careful of network vulnerabilities and make sure your data is protected.
Conclusion
The purpose of this thesis was to document learning the Docker technology and to research its apparent success. As a result, I can sum up that Docker is a very powerful tool which has helped many companies to overcome their difficulties in resource management, isolation of environments, security issues, and moving into the cloud. Since information is being received and sent faster than ever before, it is essential for service providers to ensure that they can give the best assistance to their customers.
Overall, I think that the Docker documentation and pool of developers provide good support for new Docker users.