In this tutorial you will learn:
- What CLI (Command Line Interface: your Bash or other terminal environment) disk performance measuring tools are available
- What GUI (Graphical User Interface: your desktop environment) disk performance measuring tool we recommend
- How to effectively measure disk performance in a straightforward manner
- How to discover and learn from various disk performance measuring examples
- How to get a sense for the quality of disk/flash hardware you own
Software Requirements and Conventions Used
|Category|Requirements, Conventions or Software Version Used|
|---|---|
|Other|Privileged access to your Linux system as root or via the `sudo` command|
|Conventions|`#` - requires given linux commands to be executed with root privileges, either directly as a root user or by use of the `sudo` command|
How to benchmark Disk performance on Linux - CLI Tools
To start, plug your drive into your machine. If it is an SSD (Solid State Drive) or HDD (Hard Disk Drive), you will want to shut down your computer, insert the drive and reboot the system. For SD cards, you will usually use an SD card reader which you can insert via a USB port on your computer. For USB memory sticks/flash drives, simply insert them via a USB port on your computer.
Next, navigate to your terminal/command prompt (on Ubuntu, for example, you can do this by simply clicking Activities at the top left of the screen, typing Terminal and clicking the Terminal icon). At the command line, type lsblk:

$ lsblk | grep sdc
sdc      8:32   1 119.3G  0 disk

Here we are executing lsblk: you can read this as ls blk, i.e. do a listing similar to ls ('directory listing') of all block (blk) devices.
As you can see, there is a 119.3G drive available. This drive is marketed as 128GB, and it's a major brand. It is not uncommon for a 128GB drive to show as only ~115-120G in lsblk. This is because lsblk gives you the result in gibibytes (1 gibibyte = 1,073,741,824 bytes), whereas drive manufacturers sell their drives using the "gigabyte" standard (1 gigabyte = 1,000,000,000 bytes).
We can see that in this case it works out near perfectly when we look at the byte-based size:

$ lsblk -b | grep sdc
sdc      8:32   1 128043712512  0 disk

119.3 GiB (as reported by lsblk) = 119.3 x 1,073,741,824 ≈ 128,097,400,000 bytes, very close to the 128,043,712,512 bytes reported. So when you buy your next drive, read the fine print on the back and check whether it uses the "1000" bytes per kilobyte or the "1024" bytes per kibibyte standard. Almost always, it will be the former.
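The conversion is easy to check with a line of shell arithmetic; the byte count below is the one lsblk -b reported for our example drive:

```shell
# Verify the GiB vs GB arithmetic for the size reported by lsblk -b.
bytes=128043712512
echo "$(( bytes / 1073741824 )) GiB"   # 1024-based (gibibytes): 119 GiB
echo "$(( bytes / 1000000000 )) GB"    # 1000-based (marketing gigabytes): 128 GB
```

Integer division rounds down, which is why lsblk shows the drive as smaller than the number printed on the box.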
Some SD manufacturers even count a reserved special area used for wear leveling on the SD card as main disk space, even though such space is not accessible to the user, and you may end up with, for example, only 115G showing as usable. Buyer beware.
When you execute lsblk for the first time, you will want to take some time looking at the various drives available. The easiest way to locate a specific volume, for example a flash drive you just inserted, is to look for a size which approximately matches the size of the disk inserted.
Now that we know that our new drive is labelled sdc (Linux uses sda, sdb, sdc etc. according to the drives detected during startup and/or inserted later), we also know where the device file for this device is located (it is always in /dev):

$ ls /dev/sdc
/dev/sdc

Also, if there were already partitions on the drive, it would show differently, like this:
$ lsblk -b | grep sdc
sdc      8:32   1 128043712512  0 disk
└─sdc1   8:33   1 128042663936  0 part

You can see how it has the disk (/dev/sdc - indicated by 'disk'), and the first partition (/dev/sdc1 - indicated by 'part'). Logically, the partition is slightly smaller than the total disk size due to alignment and reserved space for the partition table etc.
Finally, if you have other types of storage/disk devices, for example an NVMe drive, then this may show for example as:

$ lsblk | grep nvme
nvme0n1     259:0  0 701.3G 0 disk
├─nvme0n1p1 259:1  0   512M 0 part /boot/efi
├─nvme0n1p2 259:2  0   732M 0 part /boot
└─nvme0n1p3 259:3  0   700G 0 part

Here we have an NVMe drive which hosts 3 partitions (p1 through p3); the first two are small boot partitions and the third is our main data partition. As this partition is in use, we will not be able to have exclusive or unmounted access to it. This will become relevant once we discuss some of the tools below.
Armed with this information, it is now easy to run a basic disk performance check against this drive using hdparm:

$ sudo hdparm -Ttv /dev/sdc1

/dev/sdc1:
 multcount     = 0 (off)
 readonly      = 0 (off)
 readahead     = 256 (on)
 geometry      = 15567/255/63, sectors = 250083328, start = 2048
 Timing cached reads:   36928 MB in 1.99 seconds = 18531.46 MB/sec
 Timing buffered disk reads: 276 MB in 3.02 seconds = 91.37 MB/sec

We can use hdparm to perform timings for benchmark and comparison purposes, using the -T (perform timings of cache reads) and -t (perform timings of device reads) options.
As you can see, our cached reads come in extremely fast (as is to be expected; it is cached), and they are not necessarily a good number to go by, unless you are testing cache performance specifically.
The more useful number is the buffered disk reads, which come in at 91.37 MB/sec. Not bad, especially as the manufacturer of this drive did not even advertise a write speed.
As the manual for hdparm (see the -Tt options) states: "For meaningful results, this operation should be repeated 2-3 times on an otherwise inactive system (no other active processes) with at least a couple of megabytes of free memory", we should run another test to be sure of our results.
A repeated test, this time with only buffered reads and a bit more verbose output (achieved by adding the -v option):

$ sudo hdparm -tv /dev/sdc1

/dev/sdc1:
 multcount     = 0 (off)
 readonly      = 0 (off)
 readahead     = 256 (on)
 geometry      = 15567/255/63, sectors = 250083328, start = 2048
 Timing buffered disk reads: 276 MB in 3.01 seconds = 91.54 MB/sec

As we can see, the number reported by hdparm is quite reliable.
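The manual's advice to repeat the test 2-3 times is easy to script with a small loop. A minimal sketch, assuming /dev/sdc1 is your target partition (repeat_bench is a hypothetical helper name, not part of hdparm):

```shell
# Run a benchmark command a given number of times, labelling each run.
repeat_bench() {
    runs=$1; shift
    i=1
    while [ "$i" -le "$runs" ]; do
        echo "Run $i:"
        "$@"
        i=$((i + 1))
    done
}

# Usage (requires root and an otherwise inactive system):
# repeat_bench 3 hdparm -t /dev/sdc1
```

If the three reported speeds vary widely, something else is likely accessing the disk and the results should be discarded.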
Next, let us measure write performance using dd. The safest way to do this is to first create a filesystem on the drive (outside of the scope of this article - to make it easier you can use a GUI tool like GParted) and then measure the performance with dd. Note that the type of filesystem (e.g. ext4, FAT32, ...) will affect the performance, usability and security of your drive.
$ sudo su
# cd /tmp
# mkdir mnt
# mount /dev/sdc1 ./mnt  # Assumes there is at least 1 partition defined on /dev/sdc. In this case there is, and it is an ext4 partition.
# sync
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/dev/zero of=/tmp/mnt/temp oflag=direct bs=128k count=16k  # Our actual performance test
# rm -f /tmp/mnt/temp

The performance test will show as follows:
# dd if=/dev/zero of=/tmp/mnt/temp oflag=direct bs=128k count=16k
16384+0 records in
16384+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 32.1541 s, 66.8 MB/s

As we can see, our 128GB drive is performing reasonably well with a 66.8 MB/s write speed. Let us double check with twice the size (a 4GB file) using the count=32k option:

# dd if=/dev/zero of=/tmp/mnt/temp oflag=direct bs=128k count=32k
32768+0 records in
32768+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 66.7746 s, 64.3 MB/s

So let us look at everything we did here.
First we elevated our privileges to root level using sudo su, and then we created a mnt directory under /tmp. This will be our 'mount point', from where we will access our 128GB drive (after mounting it using mount /dev/sdc1 ./mnt, which effectively maps the first partition of our drive to the ./mnt directory).
After this we made sure that all our system's file caches were synchronized/empty using sync. This is also a handy command to execute before unmounting and pulling out your USB drives, as it ensures that all data which was being written to your USB drive is flushed to disk instead of remaining in memory. If you unmount a disk in the desktop/GUI, it will execute a sync for you in the background before unmounting the drive and subsequently telling you the disk is safe to remove.
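In script form, the safe-removal sequence boils down to two commands; /dev/sdc1 is our example partition, and the umount line is commented out here so the sketch runs anywhere:

```shell
# Flush pending writes, then unmount before physically removing the drive.
sync                      # blocks until all dirty buffers are written out
# sudo umount /dev/sdc1   # unmount our example partition; unplug once this returns
```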
Next we made sure that all remaining system caches were dropped from memory by executing echo 3 > /proc/sys/vm/drop_caches. While both of the last two commands could be left out, especially as we are using /dev/zero as the input device (a virtual device which keeps outputting zeros whenever it is accessed), it is nice to have the system 'super clean and ready' to perform a disk performance test! Basically, we are making sure there is as little caching as possible going on.
Next we have our main performance test using dd. The syntax of dd is quite straightforward, but different from most other command line tools. Let us look at it in some detail:
- if=/dev/zero: use the /dev/zero device as the input file
- of=/tmp/mnt/temp: use the 'temp' file, located on the partition(/disk) we just mounted under /tmp/mnt, as the output file
- oflag=direct: set the output flag 'direct', ensuring that we 'use direct I/O for data', which will eliminate most if not all of the caching done by the operating system
- bs=128k: write up to 128k bytes at a time. The default of 512 bytes is far too small and would result in not maximizing possible throughput speed
- count=16k: copy 16k input blocks, which totals about 2.1 GB or 2.0 GiB. You may want to adjust this value depending on your drive size and accuracy requirements (more is better: a larger file gives a more reliable measurement)
Finally, after the test we cleaned up our temporary file with rm -f /tmp/mnt/temp.
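dd can measure read speed in the same fashion by swapping the roles of if= and of=. A small self-contained sketch, using a throwaway /tmp/ddtest file instead of our mounted drive; on a real disk you would read the large /tmp/mnt/temp file and add iflag=direct to bypass the page cache (omitted here because not every filesystem supports direct I/O):

```shell
# Write an 8 MiB sample file, then time reading it back with dd.
dd if=/dev/zero of=/tmp/ddtest bs=128k count=64 2>/dev/null
sync
dd if=/tmp/ddtest of=/dev/null bs=128k 2>&1 | tail -n 1   # last line reports the speed
rm -f /tmp/ddtest
```

of=/dev/null simply discards the data read, so only the read side of the transfer is being measured.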
Note that if your disk was empty, and only if you are sure it is completely empty and does not contain any valuable data, you could do something along the lines of of=/dev/sdc to run an exclusive-access / unmounted disk speed test.
This is a very pure way of testing disk performance, but (!) please be very careful when using it, as any device or partition specified in of=... will definitely be overwritten with whatever comes from the if=... you specify. Take care.
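Before aiming of= at a raw device, it is worth a defensive check that nothing on it is mounted. The is_mounted helper below is a hypothetical sketch based on /proc/mounts (it catches plain device paths like /dev/sdc and its partitions, but not mounts via LVM or by-uuid symlinks, so treat it as a sanity check only):

```shell
# Return success (0) if the given device, or any partition of it,
# appears as a mounted source in /proc/mounts.
is_mounted() {
    grep -q "^$1" /proc/mounts
}

dev=/dev/sdc   # our example target device
if is_mounted "$dev"; then
    echo "Refusing: $dev (or a partition of it) is mounted" >&2
else
    echo "$dev looks unmounted - still double-check lsblk before running dd!"
fi
```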
How to benchmark Disk performance on Linux - GUI Tool
Now that you know how to run a disk performance test from the command line using the hdparm (for read) and dd (for write) terminal/CLI tools, let us next look at using a more visual/graphical tool inside the desktop environment.
If you are using Ubuntu, the most common Linux desktop operating system, there is a great disk performance utility built into the operating system. It is also one of the few (or perhaps the only readily available) graphical disk performance testing tools on Linux. Most other tools are command line based, or have no Linux equivalents to their Microsoft Windows counterparts. For example, there is no graphical counterpart for the CrystalDiskMark Windows disk performance utility.
To start it, click Activities at the top left of the screen and type disks, which will show you the Disks icon (showing an image of a hard drive). Click it to open the Disks utility, which has a built-in disk benchmark tool.
Once open, use a single click to select your disk from the left-hand side of the dialog window, and then click on the 3 vertical dots near the top right of the dialog window (to the left of the minimize button). From there, select the option Benchmark Disk... to open the benchmarking tool for the selected drive. The 'Benchmark' window will open. Click Start Benchmark... to open the configuration dialog named Benchmark Settings. From here I recommend you set the following options:
- Number of samples: 10
- Sample Size (MiB): 1000 (this is also the maximum)
- Perform write benchmark: ticked (read the notes below first before starting the benchmark!)
- Number of access time samples: 1000
Then click Start Benchmarking... to start the test. Let us have a look at the settings we chose here.
The maximum sample size is 1000 MiB (1,048,576,000 bytes), and this is a great number to test with, but it would have been nice to be able to select sizes like 2GB and 4GB, as we did in our dd command line write test above. We will take 10 samples, or in other words 10 runs of the 1GB read and write.
This graphical disk performance measurement utility is very smart in that it will not destroy data on your drive, as for example dd may do if you incorrectly specify the of= setting to be a disk or partition instead of a file.
The way it does this - when you select to perform a write benchmark, as we have done here - is by reading data from the drive in exclusive access mode (more on this soon), then writing the same data back to the same location! Unless some highly odd write error happens, it is unlikely that this would ever damage data on your drive (though it is not guaranteed!). If you hover your cursor over the Perform write benchmark setting, you can read a bit more on this.
Exclusive access simply means that selecting the write option will ensure that your drive is unmounted before the test, making it available only to this utility without you being able to access it from anywhere else while the test is running. This is necessary for the write test to run properly. It is what you would want in any case; i.e. you do not want to be accessing your drive (or copying data to/from the drive) while the test is running, as this may skew the results significantly.
We also request 1000 samples of access time - i.e. the time it takes for the operating system to access the drive. For SD cards this will be quite low; for example, our 128GB card gave an average access time of just 0.71 msec across 1000 samples, whereas a slower disk may show 20-100 msec access times.