Introduction to LVM thin provisioning

LVM (Logical Volume Manager) is a technology that allows us to create a layer of abstraction over physical storage devices, and to implement flexible partitioning schemes in which logical volumes are easier to shrink, enlarge, or remove than classical “bare” partitions. While LVM “thick” provisioning requires allocating a fixed amount of storage space to an LVM logical volume at creation time, with “thin” provisioning storage is allocated only when it is actually needed.

In this tutorial we learn the basic concepts behind LVM thin provisioning, and we see how to create and manage LVM thin pools and volumes on Linux.

In this tutorial you will learn:

  • The basic concepts behind LVM thin provisioning
  • How to create thin pools and volumes
Introduction to LVM thin provisioning – Original image by vectorjuice on Freepik
Software Requirements and Linux Command Line Conventions

Category      Requirements, Conventions or Software Version Used
System        Distribution agnostic
Software      lvm2
Other         Root privileges; familiarity with Linux LVM
Conventions   # – requires given linux-commands to be executed with root privileges, either directly as the root user or by use of the sudo command
              $ – requires given linux-commands to be executed as a regular non-privileged user

Introduction

Thin provisioning is based on the concept of thin pools and volumes. When we use thin provisioning, we don’t allocate disk space for logical volumes at creation time: we just provide a “virtual” size for them. Before we see how to create thin pools and volumes, we must create a volume group and populate it with at least one physical volume.

Creating a volume group

We start by using a hypothetical /dev/vda1 partition of 20 GiB as an LVM physical volume:

$ sudo pvcreate /dev/vda1



To create a volume group (for the sake of this tutorial we will name it “vg0”), and at the same time add the physical volume to it, we execute the following command:

$ sudo vgcreate vg0 /dev/vda1

We can use the vgdisplay command to visualize our current setup. In this case, it returns the following output:

--- Volume group ---
VG Name vg0
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size <20.00 GiB
PE Size 4.00 MiB
Total PE 5119
Alloc PE / Size 0 / 0
Free PE / Size 5119 / <20.00 GiB
VG UUID jXRrgk-YfNA-Qisj-XxmE-FtCs-chzj-w1idAM
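The figures in this output are consistent with each other: with a physical extent (PE) size of 4 MiB, the 5119 total extents amount to 20476 MiB, just short of the 20480 MiB that would make an exact 20 GiB, which is why the VG size is reported as <20.00 GiB (a small part of the partition is reserved for the LVM label and metadata area). A quick shell check:

```shell
# 5119 physical extents of 4 MiB each, expressed in MiB
echo $((5119 * 4))    # → 20476
# An exact 20 GiB would be:
echo $((20 * 1024))   # → 20480
```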

Creating an LVM thin pool

We create an LVM thin pool just like any other logical volume; the only difference is that we use the --thinpool option. Suppose we want to create a thin pool named “lxcfgpool”; here is the command we would run:

$ sudo lvcreate --thinpool lxcfgpool -L15GiB vg0

You can see we didn’t use all the available space in the volume group: this is a safety measure to ensure we have some margin of expansion for the pool (we will talk about this in a moment). To verify the thin pool has been created, we can use the lvs command:

$ sudo lvs
LV        VG  Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lxcfgpool vg0 twi-a-tz-- 15.00g             0.00  10.57



By observing the output, we can see that no data is stored in the pool at the moment, while 10.57% of the metadata space is in use. The first character of the lv_attr field, reported under the “Attr” column, indicates the volume type: a t, as expected, means we are dealing with a thin pool. By adding the -a option to the command, we can gather even more information:

$ sudo lvs -a
LV                VG  Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert
[lvol0_pmspare]   vg0 ewi------- 16.00m 
lxcfgpool         vg0 twi-a-tz-- 15.00g             0.00  10.57 
[lxcfgpool_tdata] vg0 Twi-ao---- 15.00g 
[lxcfgpool_tmeta] vg0 ewi-ao---- 16.00m

Since we used the -a option, the “internal” logical volumes were included in the output: those volumes cannot be accessed directly by the user. What are they for? A thin pool is actually built from two “standard” logical volumes: one holds the data (the capital “T” in the first lv_attr character stands for thin pool data), the other holds the metadata (“e”). An extra pmspare (pool metadata spare) logical volume is also created, with the same size as the metadata LV: it is used for repair and recovery operations.
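As a quick reference, here is a minimal POSIX shell sketch mapping the first lv_attr character to the volume types encountered in this tutorial (the attr string is copied from the lvs output above):

```shell
# Decode the first lv_attr character for the volume types seen in this tutorial
attr="twi-a-tz--"    # attr string of the lxcfgpool thin pool
case "$(echo "$attr" | cut -c1)" in
    t) echo "thin pool" ;;
    V) echo "thin volume" ;;
    T) echo "thin pool data (internal)" ;;
    e) echo "metadata or pmspare (internal)" ;;
esac
# → thin pool
```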

Creating thin volumes

Once we have a thin pool, we can create thin logical volumes. Creating a thin volume is really simple. We still use the lvcreate command; this time, however, instead of specifying the “static” size of the volume, we use the -V option to specify its “virtual” size, which is the maximum amount of space it can take. In the example below, we create a thin logical volume named “thinvolume0” with a virtual size of 10 GiB:

$ sudo lvcreate -V 10GiB -T vg0/lxcfgpool -n thinvolume0

Our first thin volume is ready. Now, let’s try to create another one of the same size:

$ sudo lvcreate -V 10GiB -T vg0/lxcfgpool -n thinvolume1

The command executes successfully; this time, however, we receive a series of warnings:

WARNING: Sum of all thin volume sizes (20.00 GiB) exceeds the size of thin pool vg0/lxcfgpool and the size of whole volume group (<20.00 GiB).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "thinvolume1" created.

The first message is quite clear: the sum of the virtual sizes of the two thin volumes (10 GiB each) is bigger than the size of the thin pool. This is still a valid scenario because, as we already said, space is allocated only when it is actually used.
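The degree of overcommitment can be quantified: with 20 GiB of virtual sizes backed by a 15 GiB pool, the pool is overcommitted by a factor of roughly 1.33 (values taken from this tutorial):

```shell
# Overcommit ratio: total virtual size of thin volumes / thin pool size
awk 'BEGIN { printf "%.2f\n", (10 + 10) / 15 }'    # → 1.33
```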



The second message warns us that we have not turned on protection against the thin pool running out of space. A thin pool should never run out of space, since that can lead to filesystem corruption. By setting the thin_pool_autoextend_threshold option to a percentage value below 100 in the /etc/lvm/lvm.conf configuration file, we can have the thin pool extended automatically when the specified threshold is reached. It goes without saying that there must be free space available in the volume group for this operation to succeed. The amount of space added to the pool when it is automatically extended can be configured via the thin_pool_autoextend_percent option, which by default is set to 20 (a percentage of the current pool size).
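For example, to have the pool automatically extended by 20% of its current size once it becomes 80% full, the relevant options in /etc/lvm/lvm.conf would look like this (80 is an illustrative threshold; 100, the default, disables autoextension):

```
activation {
        # Autoextend the thin pool once it is 80% full...
        thin_pool_autoextend_threshold = 80
        # ...growing it by 20% of its current size each time
        thin_pool_autoextend_percent = 20
}
```

Note that automatic extension relies on the dmeventd monitoring daemon being active (the lvm2-monitor service on most distributions).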

The importance of trimming

With a thin provisioning setup, it is very important to periodically run utilities like fstrim, to ensure that blocks freed in thin volumes are returned to the pool as available space. Let’s demonstrate this. We have our thin volumes; now let’s create filesystems on them:

$ sudo mkfs.ext2 /dev/vg0/thinvolume0
$ sudo mkfs.ext2 /dev/vg0/thinvolume1

Now, let’s mount the “thinvolume0” logical volume, and create a 1 GiB file on it using dd:

$ sudo mount /dev/vg0/thinvolume0 /mnt
$ sudo dd if=/dev/zero of=/mnt/test bs=1M count=1024

Here is the status of the pool after we created the file:

$ sudo lvs
LV          VG  Attr       LSize  Pool      Origin Data% Meta% Move Log Cpy%Sync Convert
lxcfgpool   vg0 twi-aotz-- 15.00g                  8.87  13.55
thinvolume0 vg0 Vwi-aotz-- 10.00g lxcfgpool        11.65
thinvolume1 vg0 Vwi-a-tz-- 10.00g lxcfgpool        1.65
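The Data% figures above are consistent with each other: the space used by each thin volume (11.65% and 1.65% of 10 GiB, respectively; the 1.65% on thinvolume1 comes from the ext2 metadata written by mkfs) adds up to the 8.87% reported for the 15 GiB pool. We can verify the arithmetic:

```shell
# Used space of each thin volume, summed and expressed as a fraction of the pool
awk 'BEGIN { used = 0.1165 * 10 + 0.0165 * 10; printf "%.2f\n", used / 15 * 100 }'
# → 8.87
```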

As you can see, 8.87% of pool space is now used. Let’s see what happens if we delete the file we just created:

$ sudo rm /mnt/test

Let’s check the pool status, again:

$ sudo lvs
LV          VG  Attr       LSize  Pool      Origin Data% Meta% Move Log Cpy%Sync Convert
lxcfgpool   vg0 twi-aotz-- 15.00g                  8.87  13.55
thinvolume0 vg0 Vwi-aotz-- 10.00g lxcfgpool        11.65
thinvolume1 vg0 Vwi-a-tz-- 10.00g lxcfgpool        1.65

Looks like nothing changed. Now let’s run fstrim on the mounted filesystem:

$ sudo fstrim /mnt
/mnt: 9.7 GiB (10461900800 bytes) trimmed

Let’s check the status of the pool one last time:

$ sudo lvs
LV          VG  Attr       LSize  Pool      Origin Data% Meta% Move Log Cpy%Sync Convert
lxcfgpool   vg0 twi-aotz-- 15.00g                  2.19  11.30
thinvolume0 vg0 Vwi-aotz-- 10.00g lxcfgpool        1.65
thinvolume1 vg0 Vwi-a-tz-- 10.00g lxcfgpool        1.65

This time, as you can see, the trimmed space has been returned to the pool.
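Rather than running fstrim by hand, on systemd-based distributions the operation can be scheduled: the fstrim.timer unit shipped with the util-linux package runs fstrim weekly on all mounted filesystems that support discard. Enabling it is a one-liner:

```shell
# Enable and immediately start the weekly fstrim timer (systemd-based systems)
sudo systemctl enable --now fstrim.timer
```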

Closing thoughts

Unlike “thick” provisioning, which requires the allocation of a fixed amount of space to logical volumes at creation time, LVM thin provisioning allows us to create a pool of space (a thin pool) and thin logical volumes based on it, specifying only their virtual size. Space from the pool is allocated to thin volumes only when required. In this tutorial we briefly saw how to create a thin pool and thin volumes.


