LVM Beginner
By Falko Timme
Published: 2007-01-14 18:51
This guide shows how to work with LVM (Logical Volume Management) on Linux. It also describes how to use LVM together with RAID1 in an extra
chapter. As LVM is a rather abstract topic, this article comes with a Debian Etch VMware image that you can download and start, and on that Debian Etch
system you can run all the commands I execute here and compare your results with mine. Through this practical approach you should get used to LVM very
fast.
However, I do not issue any guarantee that this tutorial will work for you!
1 Preliminary Note
This tutorial was inspired by two articles I read:
- http://www.linuxdevcenter.com/pub/a/linux/2006/04/27/managing-disk-space-with-lvm.html
- http://www.debian-administration.org/articles/410
These are great articles, but hard to understand if you've never worked with LVM before. That's why I have created this Debian Etch VMware image that
you can download and run in VMware Server or VMware Player (see http://www.howtoforge.com/import_vmware_images to learn how to do that).
I have already installed all the tools we need during the course of this guide (such as the lvm2, mdadm, xfsprogs, and reiserfsprogs packages) on the Debian Etch system, so you don't have to install anything yourself.
The Debian Etch system's network is configured through DHCP, so you don't have to worry about conflicting IP addresses. The root password is
howtoforge. You can also connect to that system with an SSH client like PuTTY. To find out the IP address of the Debian Etch system, run
ifconfig
The system has six SCSI hard disks, /dev/sda - /dev/sdf. /dev/sda is used for the Debian Etch system itself, while we will use /dev/sdb - /dev/sdf for
LVM and RAID. /dev/sdb - /dev/sdf each have 80GB of disk space. In the beginning we will act as if each has only 25GB of disk space (thus using only
25GB on each of them), and in the course of the tutorial we will "replace" our 25GB hard disks with 80GB hard disks, thus demonstrating how you can
replace small hard disks with bigger ones in LVM.
The article http://www.linuxdevcenter.com/pub/a/linux/2006/04/27/managing-disk-space-with-lvm.html uses hard disks of 250GB and 800GB, but
some commands such as pvmove take a very long time with disks of that size, which is why I decided to use hard disks of 25GB and 80GB (that's enough to
understand how LVM works).
1.1 Summary
Download the Debian Etch VMware image (~310MB), import it into VMware Server or VMware Player as described above, and start it. Log in as root with the password howtoforge.
2 LVM Layout
Basically, LVM works like this:
You have one or more physical volumes (/dev/sdb1 - /dev/sde1 in our example), and on these physical volumes you create one or more volume groups
(e.g. fileserver), and in each volume group you can create one or more logical volumes. If you use multiple physical volumes, each logical volume can
be bigger than one of the underlying physical volumes (but of course the sum of the logical volumes cannot exceed the total space offered by the physical
volumes).
It is good practice not to allocate the full space to logical volumes, but to leave some space unused. That way you can enlarge one or more logical volumes
later on if the need arises.
In this example we will create a volume group called fileserver, and in it the logical volumes /dev/fileserver/share,
/dev/fileserver/backup, and /dev/fileserver/media. Together they will use only about half of the space offered by our physical volumes for now; that way we
can switch to RAID1 later on (also described in this tutorial).
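To make the three layers concrete, here is the generic command flow we will follow throughout the tutorial (a sketch with the names and sizes used below; the real commands and their outputs follow in the next sections):
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1    # turn partitions into physical volumes
vgcreate fileserver /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1    # group the physical volumes into a volume group
lvcreate --name share --size 40G fileserver    # carve a logical volume out of the volume group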
Let's first have a look at our hard disks:
fdisk -l
server1:~# fdisk -l
There are no partitions yet on /dev/sdb - /dev/sdf. We will create the partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1 and leave /dev/sdf
untouched for now. We act as if our hard disks had only 25GB of space instead of 80GB for now, therefore we assign 25GB to /dev/sdb1, /dev/sdc1,
/dev/sdd1, and /dev/sde1:
fdisk /dev/sdb
fdisk /dev/sdc
fdisk /dev/sdd
fdisk /dev/sde
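For each of the four disks the interactive fdisk session looks roughly like this (a sketch; the exact prompts depend on the fdisk version). We create one primary partition of about 25GB and set its partition type to 8e (Linux LVM):
n <-- create a new partition
p <-- make it a primary partition
1 <-- partition number 1
<ENTER> <-- accept the default first cylinder
+25000M <-- make the partition about 25GB big
t <-- change the partition type
8e <-- Linux LVM
w <-- write the partition table and exit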
Then run
fdisk -l
again:
server1:~# fdisk -l
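Now we prepare the new partitions for LVM. This is done with pvcreate (the same call that was sketched in the LVM layout chapter):
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1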
Now run
pvdisplay
server1:~# pvdisplay
--- NEW Physical volume ---
PV Name /dev/sdb1
VG Name
PV Size 23.29 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID G8lu2L-Hij1-NVde-sOKc-OoVI-fadg-Jd1vyU
(the entries for /dev/sdc1, /dev/sdd1, and /dev/sde1 look the same, each with its own PV UUID)
Now let's create our volume group fileserver and add /dev/sdb1 - /dev/sde1 to it:
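A vgcreate call that does this (using the volume group name and the four physical volumes from above) looks like:
vgcreate fileserver /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1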
vgdisplay
server1:~# vgdisplay
--- Volume group ---
VG Name fileserver
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 4
Act PV 4
VG Size 93.14 GB
PE Size 4.00 MB
Total PE 23844
Alloc PE / Size 0 / 0
Free PE / Size 23844 / 93.14 GB
VG UUID 3Y1WVF-BLET-QkKs-Qnrs-SZxI-wrNO-dTqhFP
vgscan
server1:~# vgscan
Reading all physical volumes. This may take a while...
Found volume group "fileserver" using metadata type lvm2
For training purposes let's rename our volume group fileserver into data:
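This is what vgrename is for; it takes the old and the new name:
vgrename fileserver data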
Let's run vgdisplay and vgscan again to see if the volume group has been renamed:
vgdisplay
server1:~# vgdisplay
--- Volume group ---
VG Name data
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 4
Act PV 4
VG Size 93.14 GB
PE Size 4.00 MB
Total PE 23844
Alloc PE / Size 0 / 0
Free PE / Size 23844 / 93.14 GB
VG UUID 3Y1WVF-BLET-QkKs-Qnrs-SZxI-wrNO-dTqhFP
vgscan
server1:~# vgscan
Reading all physical volumes. This may take a while...
Found volume group "data" using metadata type lvm2
Now let's delete our volume group data:
vgremove data
vgdisplay
server1:~# vgdisplay
vgscan
server1:~# vgscan
Reading all physical volumes. This may take a while...
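Because the volume group is gone now, we recreate it under its original name fileserver before we go on (the same vgcreate call as before):
vgcreate fileserver /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1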
Next we create our logical volumes share (40GB), backup (5GB), and media (1GB) in the volume group fileserver. Together they use a little less than
50% of the available space (that way we can make use of RAID1 later on):
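The corresponding lvcreate calls (with the sizes stated above) look like this:
lvcreate --name share --size 40G fileserver
lvcreate --name backup --size 5G fileserver
lvcreate --name media --size 1G fileserver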
lvdisplay
server1:~# lvdisplay
--- Logical volume ---
LV Name /dev/fileserver/share
VG Name fileserver
LV UUID 280Mup-H9aa-sn0S-AXH3-04cP-V6p9-lfoGgJ
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:2
lvscan
server1:~# lvscan
ACTIVE '/dev/fileserver/share' [40.00 GB] inherit
ACTIVE '/dev/fileserver/backup' [5.00 GB] inherit
ACTIVE '/dev/fileserver/media' [1.00 GB] inherit
For training purposes we rename our logical volume media into films:
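lvrename takes the volume group plus the old and the new logical volume name:
lvrename fileserver media films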
lvdisplay
server1:~# lvdisplay
--- Logical volume ---
LV Name /dev/fileserver/share
VG Name fileserver
LV UUID 280Mup-H9aa-sn0S-AXH3-04cP-V6p9-lfoGgJ
LV Write Access read/write
LV Status available
# open 0
LV Size 40.00 GB
Current LE 10240
Segments 2
Allocation inherit
Read ahead sectors 0
Block device 253:0
(the entries for /dev/fileserver/backup and /dev/fileserver/films follow in the same format)
lvscan
server1:~# lvscan
ACTIVE '/dev/fileserver/share' [40.00 GB] inherit
ACTIVE '/dev/fileserver/backup' [5.00 GB] inherit
ACTIVE '/dev/fileserver/films' [1.00 GB] inherit
Next we delete films again:
lvremove /dev/fileserver/films
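Because the following steps assume the three logical volumes share, backup, and media again, we recreate media with the same size as before:
lvcreate --name media --size 1G fileserver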
Now we have three logical volumes, but they don't contain any filesystems yet, and without a filesystem we can't store anything on them. Therefore we create an ext3
filesystem on share, an xfs filesystem on backup, and a reiserfs filesystem on media:
mkfs.ext3 /dev/fileserver/share
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
5242880 inodes, 10485760 blocks
524288 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
320 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624
mkfs.xfs /dev/fileserver/backup
mkfs.reiserfs /dev/fileserver/media
A pair of credits:
Alexander Lyamin keeps our hardware running, and was very generous to our
project in many little ways.
Chris Mason wrote the journaling code for V3, which was enormously more useful
to users than just waiting until we could create a wandering log filesystem as
Hans would have unwisely done without him.
Jeff Mahoney optimized the bitmap scanning code for V3, and performed the big
endian cleanups.
Continue (y/n):y
Initializing journal - 0%....20%....40%....60%....80%....100%
Syncing..ok
Tell your friends to use a kernel based on 2.4.18 or later, and especially not a
kernel based on 2.4.9, when you use reiserFS. Have fun.
Now we are ready to mount our logical volumes. I want to mount share in /var/share, backup in /var/backup, and media in /var/media, therefore we
must create these directories first:
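A sketch of the commands (creating the mount points and then mounting the three logical volumes):
mkdir /var/share /var/backup /var/media
mount /dev/fileserver/share /var/share
mount /dev/fileserver/backup /var/backup
mount /dev/fileserver/media /var/media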
Now run
df -h
server1:~# df -h
Congratulations, you've just set up your first LVM system! You can now write to and read from /var/share, /var/backup, and /var/media as usual.
We have mounted our logical volumes manually, but of course we'd like to have them mounted automatically when the system boots. Therefore we modify
/etc/fstab:
mv /etc/fstab /etc/fstab_orig
vi /etc/fstab
If you compare it to our backup of the original file, /etc/fstab_orig, you will notice that we added the lines:
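The added lines would look something like this (device, mount point, filesystem type, mount options, dump and pass fields; rw,noatime is a sensible choice of options, not a requirement):
/dev/fileserver/share   /var/share    ext3       rw,noatime    0 0
/dev/fileserver/backup  /var/backup   xfs        rw,noatime    0 0
/dev/fileserver/media   /var/media    reiserfs   rw,noatime    0 0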
Then we reboot the system to check that the logical volumes get mounted automatically:
shutdown -r now
df -h
server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 19G 665M 17G 4% /
tmpfs 78M 0 78M 0% /lib/init/rw
Now let's enlarge share. First we unmount it:
umount /var/share
df -h
output:
server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 19G 665M 17G 4% /
tmpfs 78M 0 78M 0% /lib/init/rw
udev 10M 88K 10M 1% /dev
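The logical volume itself is enlarged with lvextend. Assuming we grow share from 40GB to 50GB (the exact target size is just an example), the call is:
lvextend -L50G /dev/fileserver/share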
Until now we have enlarged only share, but not the ext3 filesystem on share. This is what we do now:
e2fsck -f /dev/fileserver/share
Make a note of the total amount of blocks (10485760) because we need it when we shrink share later on.
resize2fs /dev/fileserver/share
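Then we mount share again:
mount /dev/fileserver/share /var/share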
The new size now shows up in the output of
df -h
server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 19G 665M 17G 4% /
tmpfs 78M 0 78M 0% /lib/init/rw
udev 10M 88K 10M 1% /dev
tmpfs 78M 0 78M 0% /dev/shm
/dev/sda1 137M 17M 114M 13% /boot
/dev/mapper/fileserver-backup
5.0G 144K 5.0G 1% /var/backup
/dev/mapper/fileserver-media
Shrinking a logical volume is the other way round: first we must shrink the filesystem before we reduce the logical volume's size. Let's shrink share to
40GB again:
umount /var/share
df -h
server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 19G 665M 17G 4% /
tmpfs 78M 0 78M 0% /lib/init/rw
udev 10M 88K 10M 1% /dev
tmpfs 78M 0 78M 0% /dev/shm
/dev/sda1 137M 17M 114M 13% /boot
/dev/mapper/fileserver-backup
5.0G 144K 5.0G 1% /var/backup
/dev/mapper/fileserver-media
1.0G 33M 992M 4% /var/media
e2fsck -f /dev/fileserver/share
When resizing an ext3 filesystem to a certain size (instead of using all available space), resize2fs takes the number of blocks as an argument (you can also
specify the new size in MB etc.; see
man resize2fs
for more details). From our previous operation we know that 40GB equals 10485760 blocks, so we run:
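resize2fs /dev/fileserver/share 10485760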
We've shrunk the filesystem; now we must shrink the logical volume, too:
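This is done with lvreduce (40GB being the target size stated above):
lvreduce -L40G /dev/fileserver/share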
We can ignore the warning that data might be destroyed because we shrank the filesystem beforehand.
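Then we mount share again:
mount /dev/fileserver/share /var/share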
The output of
df -h
server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 19G 665M 17G 4% /
tmpfs 78M 0 78M 0% /lib/init/rw
udev 10M 88K 10M 1% /dev
tmpfs 78M 0 78M 0% /dev/shm
/dev/sda1 137M 17M 114M 13% /boot
/dev/mapper/fileserver-backup
5.0G 144K 5.0G 1% /var/backup
/dev/mapper/fileserver-media
1.0G 33M 992M 4% /var/media
/dev/mapper/fileserver-share
40G 177M 38G 1% /var/share
Now let's extend our volume group by adding another physical volume. First we create the partition /dev/sdf1 (again about 25GB, type 8e, just like before) and prepare it for LVM:
fdisk /dev/sdf
pvcreate /dev/sdf1
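Then we add the new physical volume to the volume group:
vgextend fileserver /dev/sdf1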
Run
vgdisplay
server1:~# vgdisplay
--- Volume group ---
VG Name fileserver
System ID
Format lvm2
Metadata Areas 5
Metadata Sequence No 12
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 5
Act PV 5
VG Size 116.43 GB
PE Size 4.00 MB
Total PE 29805
Alloc PE / Size 11776 / 46.00 GB
Free PE / Size 18029 / 70.43 GB
VG UUID iWr1Vk-7h7J-hLRL-SHbx-3p87-Rq47-L1GyEO
That's it. /dev/sdf1 has been added to the fileserver volume group.
Now let's remove /dev/sdb1. Before we do this, we must copy all data on it to /dev/sdf1:
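The data is moved away with pvmove (this can take a few minutes), and vgreduce then takes the emptied physical volume out of the volume group:
pvmove /dev/sdb1 /dev/sdf1
vgreduce fileserver /dev/sdb1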
vgdisplay
server1:~# vgdisplay
--- Volume group ---
VG Name fileserver
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 16
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 4
Act PV 4
VG Size 93.14 GB
PE Size 4.00 MB
Total PE 23844
Alloc PE / Size 11776 / 46.00 GB
Free PE / Size 12068 / 47.14 GB
VG UUID iWr1Vk-7h7J-hLRL-SHbx-3p87-Rq47-L1GyEO
Then we run
pvremove /dev/sdb1
pvdisplay
server1:~# pvdisplay
--- Physical volume ---
PV Name /dev/sdc1
VG Name fileserver
PV Size 23.29 GB / not usable 0
Allocatable yes
PE Size (KByte) 4096
Total PE 5961
Free PE 1682
Allocated PE 4279
PV UUID 40GJyh-IbsI-pzhn-TDRq-PQ3l-3ut0-AVSE4B
You could now remove /dev/sdb from the system (if this was a real system and not a virtual machine).
Now let's undo all of these changes and return the system to its original state. First we unmount the logical volumes:
umount /var/share
umount /var/backup
umount /var/media
df -h
server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 19G 665M 17G 4% /
tmpfs 78M 0 78M 0% /lib/init/rw
udev 10M 92K 10M 1% /dev
tmpfs 78M 0 78M 0% /dev/shm
/dev/sda1 137M 17M 114M 13% /boot
Then we delete the logical volumes and the volume group:
lvremove /dev/fileserver/share
lvremove /dev/fileserver/backup
lvremove /dev/fileserver/media
vgremove fileserver
Finally we remove the physical volumes:
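pvremove /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1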
vgdisplay
server1:~# vgdisplay
No volume groups found
pvdisplay
server1:~# pvdisplay
Now we must undo our changes in /etc/fstab so that the system doesn't try to mount devices that no longer exist. Fortunately we made a backup of the
original file that we can now copy back:
mv /etc/fstab_orig /etc/fstab
Then we reboot the system:
shutdown -r now
df -h
server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 19G 666M 17G 4% /
tmpfs 78M 0 78M 0% /lib/init/rw
udev 10M 92K 10M 1% /dev
tmpfs 78M 0 78M 0% /dev/shm
/dev/sda1 137M 17M 114M 13% /boot
Now the system is as it was in the beginning, except that the partitions /dev/sdb1 - /dev/sdf1 still exist (you could delete them with fdisk, but we don't
do this now) and the directories /var/share, /var/backup, and /var/media still exist (we don't delete them either).
7 LVM On RAID1
In this chapter we will set up LVM again and move it onto RAID1 arrays to add redundancy. We will build the RAID array /dev/md0 from the partitions
/dev/sdb1 + /dev/sdc1 and the RAID array /dev/md1 from the partitions /dev/sdd1 + /dev/sde1; /dev/md0 and /dev/md1 will then be the physical
volumes for LVM.
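First we recreate the LVM setup from the previous chapters on /dev/sdb1 - /dev/sde1. The commands are the same ones as before (shown here as a sketch):
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
vgcreate fileserver /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
lvcreate --name share --size 40G fileserver
lvcreate --name backup --size 5G fileserver
lvcreate --name media --size 1G fileserver
Then we create the filesystems again: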
mkfs.ext3 /dev/fileserver/share
mkfs.xfs /dev/fileserver/backup
mkfs.reiserfs /dev/fileserver/media
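Finally we mount the logical volumes on the same mount points as before:
mount /dev/fileserver/share /var/share
mount /dev/fileserver/backup /var/backup
mount /dev/fileserver/media /var/media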
The output of
df -h
server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 19G 666M 17G 4% /
tmpfs 78M 0 78M 0% /lib/init/rw
udev 10M 92K 10M 1% /dev
tmpfs 78M 0 78M 0% /dev/shm
/dev/sda1 137M 17M 114M 13% /boot
/dev/mapper/fileserver-share
40G 177M 38G 1% /var/share
/dev/mapper/fileserver-backup
5.0G 144K 5.0G 1% /var/backup
/dev/mapper/fileserver-media
1.0G 33M 992M 4% /var/media
Now we must move the contents of /dev/sdc1 and /dev/sde1 to the remaining partitions (/dev/sdc1 will become the second member of our future /dev/md0,
/dev/sde1 the second member of our future /dev/md1), because afterwards we will remove them from LVM, change their partition type to fd (Linux RAID
autodetect), and put them into /dev/md0 and /dev/md1 respectively.
pvmove needs the dm-mirror kernel module, so we load it first:
modprobe dm-mirror
pvmove /dev/sdc1
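After the data has been moved off /dev/sdc1 we take it out of the volume group before removing the physical volume label:
vgreduce fileserver /dev/sdc1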
pvremove /dev/sdc1
pvdisplay
server1:~# pvdisplay
--- Physical volume ---
PV Name /dev/sdb1
VG Name fileserver
PV Size 23.29 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 5961
Free PE 0
Allocated PE 5961
PV UUID USDJyG-VDM2-r406-OjQo-h3eb-c9Mp-4nvnvu
(the entries for the remaining physical volumes follow in the same format)
Now we do the same with /dev/sde1:
pvmove /dev/sde1
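Again we take the physical volume out of the volume group before removing it:
vgreduce fileserver /dev/sde1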
pvremove /dev/sde1
pvdisplay
server1:~# pvdisplay
--- Physical volume ---
PV Name /dev/sdb1
VG Name fileserver
PV Size 23.29 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 5961
Free PE 0
Allocated PE 5961
PV UUID USDJyG-VDM2-r406-OjQo-h3eb-c9Mp-4nvnvu
Now we change the partition type of /dev/sdc1 and /dev/sde1 to fd (Linux RAID autodetect):
fdisk /dev/sdc
fdisk /dev/sde
The output of
fdisk -l
server1:~# fdisk -l
Next we create the (for now degraded) RAID1 arrays: /dev/md0 with /dev/sdc1 and /dev/md1 with /dev/sde1. Because the second members (/dev/sdb1 and
/dev/sdd1) still hold our LVM data and are not ready yet, we must specify missing in their place in the following commands:
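A sketch of those mdadm calls:
mdadm --create /dev/md0 --auto=yes -l 1 -n 2 /dev/sdc1 missing
mdadm --create /dev/md1 --auto=yes -l 1 -n 2 /dev/sde1 missing
Then the new arrays are prepared for LVM and added to the volume group:
pvcreate /dev/md0 /dev/md1
vgextend fileserver /dev/md0 /dev/md1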
The outputs of
pvdisplay
and
vgdisplay
server1:~# pvdisplay
--- Physical volume ---
PV Name /dev/sdb1
VG Name fileserver
server1:~# vgdisplay
--- Volume group ---
VG Name fileserver
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 14
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 4
Act PV 4
VG Size 93.14 GB
PE Size 4.00 MB
Total PE 23844
Alloc PE / Size 11776 / 46.00 GB
Free PE / Size 12068 / 47.14 GB
VG UUID dQDEHT-kNHf-UjRm-rmJ3-OUYx-9G1t-aVskI1
Now we move the contents of /dev/sdb1 to /dev/md0 and the contents of /dev/sdd1 to /dev/md1, then we remove /dev/sdb1 and /dev/sdd1 from
LVM:
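A sketch of the commands (the pvmove calls can again take a while):
pvmove /dev/sdb1 /dev/md0
pvmove /dev/sdd1 /dev/md1
vgreduce fileserver /dev/sdb1 /dev/sdd1
pvremove /dev/sdb1 /dev/sdd1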
pvdisplay
server1:~# pvdisplay
--- Physical volume ---
PV Name /dev/md0
VG Name fileserver
PV Size 23.29 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 5961
Free PE 0
Allocated PE 5961
PV UUID 7JHUXF-1R2p-OjbJ-X1OT-uaeg-gWRx-H6zx3P
Now we change the partition type of /dev/sdb1 and /dev/sdd1 to fd (Linux RAID autodetect) as well:
fdisk /dev/sdb
fdisk /dev/sdd
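Then we put the two partitions into the still degraded arrays:
mdadm --add /dev/md0 /dev/sdb1
mdadm --add /dev/md1 /dev/sdd1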
Now the two RAID arrays will be synchronized. This will take some time; you can check with
cat /proc/mdstat
when the process is finished. The output looks like this for an unfinished process:
If you now run
pvdisplay
you will see that 2 * 23.29GB = 46.58GB are available, but only 40GB (share) + 5GB (backup) + 1GB (media) = 46GB are used, which means
we could extend one of our logical volumes by about 0.5GB. I've already shown how to extend an ext3 logical volume (share), so we will now resize media,
which uses reiserfs. reiserfs filesystems can be resized without unmounting:
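First we grow the logical volume itself; the 1.5GB target matches the size shown in the df -h output further down:
lvextend -L1.5G /dev/fileserver/media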
resize_reiserfs /dev/fileserver/media
The output of
df -h
server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 19G 666M 17G 4% /
tmpfs 78M 0 78M 0% /lib/init/rw
udev 10M 92K 10M 1% /dev
tmpfs 78M 0 78M 0% /dev/shm
/dev/sda1 137M 17M 114M 13% /boot
/dev/mapper/fileserver-share
40G 177M 38G 1% /var/share
/dev/mapper/fileserver-backup
5.0G 144K 5.0G 1% /var/backup
/dev/mapper/fileserver-media
1.5G 33M 1.5G 3% /var/media
If we want our logical volumes to be mounted automatically at boot time, we must modify /etc/fstab again (like in chapter 3):
mv /etc/fstab /etc/fstab_orig
vi /etc/fstab
If you compare it to our backup of the original file, /etc/fstab_orig, you will notice that we added the lines:
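As in chapter 3, the added lines would look something like this:
/dev/fileserver/share   /var/share    ext3       rw,noatime    0 0
/dev/fileserver/backup  /var/backup   xfs        rw,noatime    0 0
/dev/fileserver/media   /var/media    reiserfs   rw,noatime    0 0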
Then we reboot the system:
shutdown -r now
df -h
server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 19G 666M 17G 4% /
tmpfs 78M 0 78M 0% /lib/init/rw
udev 10M 100K 10M 1% /dev
tmpfs 78M 0 78M 0% /dev/shm
/dev/sda1 137M 17M 114M 13% /boot
/dev/mapper/fileserver-share
40G 177M 38G 1% /var/share
/dev/mapper/fileserver-backup
5.0G 144K 5.0G 1% /var/backup
/dev/mapper/fileserver-media
1.5G 33M 1.5G 3% /var/media
8 Replacing The Small Hard Disks With Bigger Ones
Now we will replace our small (25GB) hard disks with bigger (80GB) ones. The procedure is as follows: first we remove /dev/sdb and /dev/sdd from the
RAID arrays, replace them with bigger hard disks, and put them back into the RAID arrays; then we do the same again with /dev/sdc and /dev/sde.
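Removing the two partitions from the arrays is done with mdadm; a sketch of the calls (each partition is first marked as failed and then removed):
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md1 --fail /dev/sdd1
mdadm --manage /dev/md1 --remove /dev/sdd1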
After each of these commands you can check the current state of the arrays with
cat /proc/mdstat
On a real system you would now shut it down, pull out the 25GB /dev/sdb and /dev/sdd and replace them with 80GB ones. As I said before, we don't
have to do this because all hard disks already have a capacity of 80GB.
Next we must partition /dev/sdb and /dev/sdd. On each disk we need a /dev/sdb1 or /dev/sdd1 partition, type fd (Linux RAID autodetect), 25GB in size (the
same settings as on the old hard disks), and a /dev/sdb2 or /dev/sdd2 partition, type fd, that covers the rest of the disk. As /dev/sdb1 and
/dev/sdd1 are still present on our hard disks, we only have to create /dev/sdb2 and /dev/sdd2 in this special example.
fdisk /dev/sdb
fdisk /dev/sdd
The output of
fdisk -l
server1:~# fdisk -l
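Then we put the partitions /dev/sdb1 and /dev/sdd1 back into the arrays:
mdadm --add /dev/md0 /dev/sdb1
mdadm --add /dev/md1 /dev/sdd1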
Now the contents of both RAID arrays will be synchronized. We must wait until this is finished before we can go on. We can check the status of the
synchronization with
cat /proc/mdstat
Now we do the same process again, this time replacing /dev/sdc and /dev/sde:
fdisk /dev/sdc
fdisk /dev/sde
cat /proc/mdstat
Next we create the RAID arrays /dev/md2 from /dev/sdb2 and /dev/sdc2 as well as /dev/md3 from /dev/sdd2 and /dev/sde2.
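A sketch of the mdadm calls:
mdadm --create /dev/md2 --auto=yes -l 1 -n 2 /dev/sdb2 /dev/sdc2
mdadm --create /dev/md3 --auto=yes -l 1 -n 2 /dev/sdd2 /dev/sde2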
The new RAID arrays must be synchronized before we go on, so you should check
cat /proc/mdstat
After the synchronization has finished, we prepare /dev/md2 and /dev/md3 for LVM:
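pvcreate /dev/md2 /dev/md3
and add them to our volume group:
vgextend fileserver /dev/md2 /dev/md3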
pvdisplay
server1:~# pvdisplay
--- Physical volume ---
PV Name /dev/md0
VG Name fileserver
PV Size 23.29 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 5961
Free PE 0
Allocated PE 5961
PV UUID 7JHUXF-1R2p-OjbJ-X1OT-uaeg-gWRx-H6zx3P
PV Name /dev/md2
VG Name fileserver
PV Size 56.71 GB / not usable 0
Allocatable yes
PE Size (KByte) 4096
Total PE 14517
Free PE 14517
Allocated PE 0
PV UUID 300kTo-evxm-rfmf-90LA-4YOJ-2LG5-t4JHnf
vgdisplay
server1:~# vgdisplay
--- Volume group ---
VG Name fileserver
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 26
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 4
Act PV 4
VG Size 159.98 GB
PE Size 4.00 MB
Total PE 40956
Alloc PE / Size 11904 / 46.50 GB
Free PE / Size 29052 / 113.48 GB
VG UUID dQDEHT-kNHf-UjRm-rmJ3-OUYx-9G1t-aVskI1
lvdisplay
server1:~# lvdisplay
--- Logical volume ---
LV Name /dev/fileserver/share
VG Name fileserver
LV UUID bcn3Oi-vW3p-WoyX-QlF2-xEtz-uz7Z-4DllYN
LV Write Access read/write
LV Status available
# open 1
LV Size 40.00 GB
Current LE 10240
Segments 2
Allocation inherit
Read ahead sectors 0
Block device 253:0
If your outputs look similar, you have successfully replaced your small hard disks with bigger ones.
Now that we have more disk space (2 * 23.29GB + 2 * 56.71GB = 160GB) we could enlarge our logical volumes. You already know how to enlarge
ext3 and reiserfs filesystems, so let's enlarge our backup logical volume now, which uses xfs:
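First we grow the logical volume; the target size of 10GB here is just an example:
lvextend -L10G /dev/fileserver/backup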
xfs_growfs /dev/fileserver/backup
The output of
df -h
server1:~# df -h
That's it! If you've made it this far, you should now be comfortable with LVM and LVM on RAID.
9 Links
- Managing Disk Space with LVM: http://www.linuxdevcenter.com/pub/a/linux/2006/04/27/managing-disk-space-with-lvm.html
- A simple introduction to working with LVM: http://www.debian-administration.org/articles/410
- Debian: http://www.debian.org