ZFS
C Sanjeev Kumar
Charly V. Joseph
Mewan Peter DAlmeida
Srinidhi K.
Introduction
Zpool
ZFS file systems are built on top of virtual storage pools called zpools.
A zpool is constructed of devices, real or logical.
Pools are constructed by combining block devices, using either mirroring or RAID-Z.
Traditional Volumes
Partitions/volumes exist in traditional file systems.
Data Integrity
End-to-End Checksums
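One way to exercise these end-to-end checksums is a pool scrub, which reads every allocated block and verifies it against its checksum. A minimal sketch; the pool name is illustrative:
# zpool scrub tank
# zpool status tank
zpool status reports scrub progress and any checksum errors that were found and repaired.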
Traditional Mirroring
Mirroring
The easiest way to get high availability.
Usable capacity is half the raw disk space.
Higher read performance.
Striping
Higher performance.
Data is distributed across disks.
Disks work in parallel.
[Diagram: mirroring. Disk One holds Data A-D; Disk Two holds Mirror A-D, identical copies of the same data.]
[Diagram: striping. Data blocks A-H are spread across Disk One and Disk Two.]
RAID-Z
Easier Administration
Resilvering of mirrors
Resilvering (AKA resyncing, rebuilding, or reconstructing) is the process of repairing a damaged device using the contents of healthy devices.
For a mirror, resilvering can be as simple as a whole-disk copy. For RAID-5 it's only slightly more complicated: instead of copying one disk to another, all of the other disks in the RAID-5 stripe must be XORed together.
Resilvering of mirrors
The main advantages of this feature are as follows:
ZFS only resilvers the minimum amount of necessary data.
The entire disk can often be resilvered in a matter of minutes or seconds.
Resilvering is interruptible and safe. If the system loses power or is rebooted, the resilvering process resumes exactly where it left off, without any need for manual intervention.
Transactional pruning. If a disk suffers a transient outage, it's not necessary to resilver the entire disk, only the parts that have changed.
Live blocks only. ZFS doesn't waste time and I/O bandwidth copying free disk blocks, because they're not in use by anything in the pool.
Resilvering of mirrors
Types of resilvering:
Top-down resilvering: the very first thing ZFS resilvers is the uberblock and the disk labels. Then it resilvers the pool-wide metadata, then each file system's metadata, and so on down the tree.
Priority-based resilvering: not yet implemented in ZFS.
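A resilver is typically triggered by replacing or reattaching a device. A minimal sketch; pool and device names are illustrative, not from the slides:
# zpool replace tank c1d1 c2d0
# zpool status tank
The replace command copies only the live blocks from the healthy side of the mirror onto the new device, and zpool status shows how far the resilver has progressed.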
Create ZFS Pools
Create a ZFS pool
# zpool create tank c0d1 c1d0 c1d1
# zpool list
NAME   SIZE    USED   AVAIL   CAP   HEALTH   ALTROOT
tank   23.8G   91K    23.8G   0%    ONLINE   -
Destroy a pool
# zpool destroy tank
Create a mirrored pool
# zpool create tank mirror c1d0 c1d1
Mirror between disk c1d0 and disk c1d1.
Available storage is the same as if you used only one of these disks.
If the disk sizes differ, the smaller disk determines the usable size.
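A RAID-Z pool is created the same way. A minimal sketch, assuming three illustrative devices:
# zpool create tank raidz c1d0 c1d1 c1d2
A single RAID-Z group keeps serving data after losing one member disk, and usable capacity is roughly the total of the disks minus one disk's worth of parity.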
Snapshots
Creating a snapshot is a constant-time operation.
The presence of snapshots doesn't slow down any operation.
A snapshot consumes no extra space at first; space is used only as the blocks it references get modified or deleted.
Deleting a snapshot takes time proportional to the number of blocks that the delete will free.
Snapshots allow us to take a full back-up of all files/directories referenced by the snapshot.
This is independent of the size of the file system that the snapshot references.
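A minimal command sketch; the dataset and snapshot names are illustrative:
# zfs snapshot tank/home@monday
# zfs list -t snapshot
# zfs destroy tank/home@monday
Taking the snapshot returns almost immediately regardless of how much data tank/home holds, while destroying it takes time proportional to the blocks the destroy frees.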
Unparalleled Scalability
The limitations of ZFS are designed to be so large that they will not be encountered in practice for some time.
Some theoretical limitations in ZFS are:
Number of snapshots of any file system - 2^64
Number of entries in any individual directory - 2^48
Maximum size of a file system - 2^64 bytes
Maximum size of a single file - 2^64 bytes
Maximum size of any attribute - 2^64 bytes
Maximum size of any zpool - 2^78 bytes
Number of attributes of a file - 2^56
Number of files in a directory - 2^56
Number of devices in any zpool - 2^64
Number of zpools in a system - 2^64
Number of file systems in a zpool - 2^64
Multiple Block Size
No single block size works well with all types of files.
Large blocks increase bandwidth and reduce metadata overhead, but can lead to wasted space for small files.
Small blocks save space for smaller files, but increase I/O operations on larger ones.
File system blocks (FSBs) are the basic unit of ZFS datasets, and checksums are maintained per FSB.
Files that are smaller than the record size are written as a single FSB of variable size, in multiples of disk sectors (512 B).
Files that are larger than the record size are stored in multiple FSBs equal to the record size.
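The record size is tunable per dataset. A hedged sketch; the dataset name and the 8K value are illustrative, and 128K is the usual default:
# zfs set recordsize=8K tank/db
# zfs get recordsize tank/db
Matching the record size to an application's natural I/O size (for example, a database page) avoids reading and checksumming more data than each request needs.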
Pipelined I/O
Reorders writes to be as sequential as possible.
If App #1's writes and App #2's writes are serviced in their original, interleaved order, we waste a lot of time waiting for head and platter positioning.
[Diagram: interleaved writes from the two applications force repeated head moves and platter spins between requests.]
Pipelined I/O
Reorders writes to be as sequential as possible.
Pipelining lets us examine the same writes as a group and optimize their order.
[Diagram: after reordering, the two applications' writes are serviced with far fewer head moves.]
Dynamic Striping
Writes are striped across both mirrors.
Reads occur wherever the data was written.
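A hedged sketch of the setup this describes; pool and device names are illustrative. Adding a second mirror to an existing pool makes ZFS stripe new writes across both mirrors automatically:
# zpool create tank mirror c1d0 c1d1
# zpool add tank mirror c2d0 c2d1
No manual restriping is needed; ZFS spreads new allocations across all top-level devices in the pool.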
Disadvantages