Network Virtualization For CC
ABSTRACT
“Virtualization and Cloud Computing” is a recent buzzword in the digital world. Behind this fancy
phrase lies a true picture of future computing, from both a technical and a social perspective.
Though virtualization and cloud computing are recent terms, the idea of centralizing computation and
storage in distributed data centres maintained by third-party companies is not new; it dates back
to the 1990s, along with distributed computing approaches like grid computing, clustering and network
load balancing. Cloud computing provides IT as a service to users on an on-demand basis. This service offers
greater flexibility, availability, reliability and scalability within a utility computing model. This new concept
of computing has immense potential to be used in the field of e-governance and in the overall IT
development of developing countries like Bangladesh.
KEYWORDS
Cloud Computing, Virtualization, Open Source Technology.
1. INTRODUCTION
Administrators usually run many servers on heavy hardware to keep their services accessible and
available to authenticated users. As days pass, demand for new services grows,
requiring more hardware and more effort from IT administrators. There is another issue of
capacity (hardware as well as storage and networking), which also increases day by day.
Moreover, we sometimes need to upgrade old running servers because their resources are
fully occupied. In that case we need to buy new servers, install the services on them,
and finally migrate the services over. Cloud computing focuses on what IT always needs: a way
to increase capacity on the fly without investing in new infrastructure. Cloud computing also
encompasses any subscription-based, user-based, service-based or pay-per-use service that
extends existing capabilities in real time over the Internet.
Capacity – With cloud computing, capacity can always be increased, so it is no longer an issue. The
focus shifts to how the solution will help the mission; the IT piece belongs to somebody else.
Cost – Cloud and virtualization technology reduce maintenance fees. There are no more
server, software, and update fees. Many of the hidden costs typically associated with software
implementation, customization, hardware, maintenance, and training are rolled into a
transparent subscription fee.
1.3 Virtualization
Virtualization can be applied very broadly to just about everything you can imagine, including
processors, memory, networks, storage, operating systems, and applications. Three
characteristics of virtualization technology make it ideal for cloud computing:
Partitioning: In virtualization technology, a single physical server or system can use partitioning
to support many different applications and operating systems (OS).
Isolation: In cloud computing, each virtual machine is isolated and protected from crashes or
viruses in the other machines. What makes virtualization so important for the cloud is that it
decouples the software from the hardware.
Encapsulation: Encapsulation can protect each application so that it doesn't interfere with
other applications. With encapsulation, each virtual machine is stored as a single file, making
it easy to identify and present to other applications and software. To understand how
virtualization helps with cloud computing, we must understand its many forms. In all cases, a
single resource emulates or imitates other resources. Here are some examples:
Virtual memory: A disk has a lot more space than main memory. PCs can use virtual memory to
borrow extra memory from the hard disk. Although disk-backed virtual memory is slower than real
memory, if managed right, the substitution works surprisingly well.
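On a Linux host you can observe this trade-off directly; a quick check, assuming the standard procps and util-linux tools are installed:

```shell
# Show physical memory and swap (disk space standing in for extra memory)
free -h
# List the swap areas currently in use, if any
swapon --show
```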
Software: Virtualization software is available that can emulate an entire computer, so a single
physical machine can perform as though it were many computers. This kind of
software might enable a company to move from a data centre with thousands of servers to a much
smaller one. To manage virtualization in cloud computing, most companies use hypervisors. Because
cloud computing needs different operating environments, the hypervisor becomes an ideal
delivery mechanism, allowing the same application to run on many different systems. Hypervisors can
load multiple operating systems on a single node; they are a very practical way of getting things
virtualized quickly and efficiently. Let us illustrate the above statement with a picture.
Figure 1.1: A normal Workstation / Computer
We use three hardware machines as shown above. We will use Debian GNU/Linux 7 as our base operating
system and run Ganeti on top of it, using KVM as the hypervisor. We will then
initiate a cluster with one physical host as the master node, and join the other nodes to that cluster.
We will use a managed switch with VLANs to separate our management + storage
network from the public-facing VM network for security purposes. Finally, we will create VMs and
test live migration, network changes and failover scenarios.
We have installed a clean, minimal operating system as our standard OS. The only requirement
to be aware of at this stage is to partition the disk leaving enough space for a big (minimum 10 GB)
LVM volume group, which will then host the instance file systems, if we want to use all Ganeti
features. In this case we install the base operating system on 10 GB of our storage space and
leave the remaining storage space un-partitioned for LVM use. The volume group name we use
will be ganeti.
Also check /etc/hosts to ensure that you have both the fully-qualified name and the
short name there, pointing to the correct IP address:
127.0.0.1 localhost
192.168.20.222 node1.project.edu node1
If the vgs command shows that we already have a volume group called 'ganeti', skip to the next section, "Configure the
Network". If the command is not found, install the lvm2 package:
# apt-get install lvm2
Now, our host machine should have either a spare partition or a spare hard drive which we will
use for LVM. If it is a second hard drive, it will be /dev/vdb or /dev/sdb. Check which you
have, then create the physical volume:
# pvcreate /dev/sda3
# pvs # should show the physical volume
Figure 2.4: Physical Volume Create
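With the physical volume in place, the volume group itself can be created; a sketch, assuming /dev/sda3 is the spare partition used above (these commands require root):

```shell
# Create the 'ganeti' volume group on the spare partition
vgcreate ganeti /dev/sda3
# Verify that the volume group now exists
vgs ganeti
```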
Let's make changes to the network configuration file for your system. As you may remember, this
is /etc/network/interfaces.
Edit this file, and look for the br-man definition for the management and storage network and br-
public for the public VM network. These are the bridge interfaces you created earlier, with eth0
attached to them. It should look something like this.
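A sketch of what those bridge definitions might look like; the VLAN IDs (100 and 200) are assumptions for illustration, and the address shown is node1's management address from the earlier /etc/hosts example:

```
auto br-man
iface br-man inet static
    address 192.168.20.222
    netmask 255.255.255.0
    bridge_ports eth0.100
    bridge_stp off
    bridge_fd 0

auto br-public
iface br-public inet manual
    bridge_ports eth0.200
    bridge_stp off
    bridge_fd 0
```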
# apt-get update
Now, install the Ganeti software package. Note that the backports packages are not used unless
you ask for them explicitly.
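The installation itself is a single apt step; the package names below are the ones shipped with Debian 7, which is an assumption worth checking against your release:

```shell
# Install Ganeti and the DRBD userland tools on every node
apt-get install ganeti drbd8-utils
```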
Then set the KVM hypervisor parameters for the cluster, so instances boot from their own disks and VNC listens on all addresses:
# gnt-cluster modify -H kvm:kernel_path=,initrd_path=,vnc_bind_address=0.0.0.0
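For context, the warning below comes from adding the second node. The cluster would have been initialised on node1 and the node added with commands along these lines; the cluster name and option values are assumptions based on the names used earlier:

```shell
# Initialise the cluster, enabling KVM and the 'ganeti' volume group
gnt-cluster init --enabled-hypervisors=kvm --vg-name=ganeti \
  --master-netdev=br-man cluster.project.edu
# From the master node, add the second node
gnt-node add node2.project.edu
```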
-- WARNING --
Performing this operation is going to replace the ssh daemon keypair
on the target machine (hostY) with the ones of the current one and
grant full intra-cluster ssh root access to/from it
When asked if you want to continue connecting, answer yes:
The authenticity of host 'node2 (192.168.20.223)' can't be
established.
ECDSA key fingerprint is
a1:af:e8:20:ad:77:6f:96:4a:19:56:41:68:40:2f:06.
Are you sure you want to continue connecting (yes/no)? yes
When prompted for the root password for node2, enter it:
Warning: Permanently added 'node2' (ECDSA) to the list of known hosts.
root@node1's password:
You may see the following informational message; you can ignore it:
Restarting OpenBSD Secure Shell server: sshd.
Rather than invoking init scripts through /etc/init.d, use the service utility, e.g. service
ssh restart
Since the script you are attempting to invoke has been converted to an Upstart job, you may also
use the stop and then start utilities,
e.g. stop ssh ; start ssh. The restart utility is also available.
ssh stop/waiting
ssh start/running, process 2921
The last message you should see is this:
Tue Nov 17 17:19:50 2015 - INFO: Node will be a master candidate
This means that the machine you have just added to the cluster (hostY) can take over the role of
configuration master, should the master (hostX) crash or be unavailable.
Check whether the node has been added to the cluster with the following command:
# gnt-node list
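gnt-node list prints one line per node with its disk and memory figures; specific columns can be selected with -o, for example (field names taken from the Ganeti defaults):

```shell
# Show each node's name, free disk space and free memory
gnt-node list -o name,dfree,mfree
```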
The package can be installed as follows; do this on all nodes in your cluster.
# wget https://code.osuosl.org/attachments/download/2169/ganeti-instance-image_0.5.1-1_all.deb
# dpkg -i ganeti-instance-image_0.5.1-1_all.deb
Create /etc/ganeti/instance-image/variants/cd.conf with the following settings:
CDINSTALL="yes"
NOMOUNT="yes"
Aside: the full set of settings you could put in this file is listed
in /etc/default/ganeti-instance-image, but don't edit them there.
Now edit /etc/ganeti/instance-image/variants.list so it looks like this:
default
cd
Copy these two files to the other nodes:
# gnt-cluster copyfile /etc/ganeti/instance-image/variants/cd.conf
# gnt-cluster copyfile /etc/ganeti/instance-image/variants.list
If using DRBD, the ISO images used for CD installs must be present on all nodes in the cluster,
at the same path. You could copy them to local storage on the master node, and then use gnt-
cluster copyfile to distribute them to local storage on the other nodes. However, to make things
simpler, we have made all the ISO images available on an NFS (Network File System) share,
which you can attach. On every node, create an empty directory /iso:
# mkdir /iso
Now copy a test OS ISO into the /iso directory. We have copied a Debian ISO image for testing. Now send
the ISO image to every node with the following command:
# gnt-cluster copyfile /iso/debian-7.9.0-amd64-netinst.iso
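The test instance itself has to be created before it can be started; a sketch of the add command, where the DRBD disk template, the 10G disk size and the node pair are assumptions for this setup:

```shell
# Create a DRBD-mirrored instance from the 'image' OS with the 'cd' variant,
# without starting it yet
gnt-instance add -t drbd -o image+cd -s 10G \
  -n node2.project.edu:node1.project.edu --no-start testvm.project.edu
```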
3. RESULTS
3.1 Run an instance
Now start the VM using the following command, which attaches the CD-ROM temporarily and
boots from it:
# gnt-instance start -H boot_order=cdrom,cdrom_image_path=/iso/debian-7.9.0-amd64-netinst.iso testvm.project.edu
Waiting for job 332 for testvm.project.edu ...
After clicking the “Connect” button, the console appears and we install the OS
on the testvm.project.edu instance with the IP address 192.168.20.232.
Figure 3.6: Install Guest Operating System in an Instance
Now we need to change the network of the instance from “br-public” to “br-man”. Here
is how to do that:
Moving the network interface 0 to another network:
# gnt-instance modify --net 0:modify,link=br-man --hotplug testvm.project.edu
Try this to move the network interface of one of the instances you created earlier onto
br-man. After successfully shifting the network, we can access the instance without any
problem.
The instances running on that node have gone down and their services have stopped. However, we can bring
the service back up within a short time, without losing any instance data, using the following command.
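The command referred to above is Ganeti's instance failover; a sketch, assuming the instance name used earlier:

```shell
# Move the instance to its DRBD secondary node and start it there;
# --ignore-consistency is required when the old primary node is unreachable
gnt-instance failover --ignore-consistency testvm.project.edu
```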
Now if we check the instance list, we can see the instance running on
node1.project.edu as its primary node:
# gnt-instance list -o name,pnode,snodes,status
Authors
Mohammad Mamun Or Rashid received his B.Sc. (Hons.) in Computer Science from North
South University (NSU), Dhaka, Bangladesh in 2006 and his M.Sc. in Computer Science in 2015
from Jahangirnagar University, Savar, Dhaka, Bangladesh. He works for the
Government of the People's Republic of Bangladesh as a System Analyst in the Ministry of
Expatriates' Welfare and Overseas Employment. His current research interests include cloud
computing, virtualization and information security management systems. He is also interested
in Linux and virtual networking in cloud computing.
M. Masud Rana received his B.Sc. in Computer Science and Engineering from Dhaka
International University, Dhaka, Bangladesh in 2014. Currently, he is working towards an M.Sc.
in Computer Science at Jahangirnagar University, Savar, Dhaka, Bangladesh. He is
serving as an Executive Engineer, Information Technology at Bashundhara Group; he also has
more than 5 years of experience as an Assistant Engineer, IT at SQUARE Informatix Ltd,
Bangladesh and as an Executive Engineer, IT at Computer Source Ltd, Bangladesh. His main areas
of research interest include virtualization, networking and security aspects of cloud
computing.
Jugal Krishna Das received his B.Sc., M.Sc. and PhD in Computer Science, all from Russia. He is
currently a Professor in the Computer Science and Engineering department of Jahangirnagar
University, Savar, Dhaka. His research interests include topics such as computer networks,
natural language processing, and software engineering.