How to benchmark Disk performance on Linux

Just bought the latest and greatest – and especially fastest – SSD? Or upgraded your phone’s microSD memory card? Before you start using your shiny new hardware, you may want to run a performance check against the drive. Are the write and read speeds up to the manufacturer’s specifications? How does your performance compare with that of others? Is that 1TB flash drive you bought on an auction site from China really as fast as the listing said it was? Let us find out!

In this tutorial you will learn:

  • What CLI (Command Line Interface: your Bash or other terminal environment) disk performance measuring tools are available
  • What GUI (Graphical User Interface: your desktop environment) disk performance measuring tool we recommend
  • How to effectively measure disk performance in a straightforward manner
  • How to discover and learn from various disk performance measuring examples
  • How to get a sense for the quality of disk/flash hardware you own


Software Requirements and Conventions Used

Software Requirements and Linux Command Line Conventions
Category Requirements, Conventions or Software Version Used
System Any GNU/Linux
Software N/A
Other Privileged access to your Linux system as root or via the sudo command.
Conventions # – requires given Linux commands to be executed with root privileges, either directly as the root user or by use of the sudo command
$ – requires given Linux commands to be executed as a regular non-privileged user

How to benchmark Disk performance on Linux – CLI Tools

To start, plug your drive into your machine. If it is an SSD (Solid State Drive) or HDD (Hard Disk Drive), you will want to shut down your computer, insert the drive and reboot the system. For SD cards, you will usually use an SD card reader which connects to your computer via a USB port. USB memory sticks/flash drives can simply be inserted into a USB port on your computer.

Next, navigate to your terminal/command prompt (On Ubuntu
for example you can do this by simply clicking Activities at the top left of the screen > type Terminal and click the Terminal icon).

At the command line, type lsblk:

$ lsblk | grep sdc
sdc                       8:32   1 119.3G  0 disk 

Here we are executing lsblk: you can read this as ls blk: i.e. do a listing similar to ls (‘directory listing’) of all block (blk) devices.

As you can see, there is a 119.3G drive available. This drive is marketed as 128GB, and it is a major brand. It is not uncommon for a 128GB drive to show as only ~115-120G in lsblk. This is because lsblk reports sizes in gibibytes (1 GiB = 1,073,741,824 bytes), whereas drive manufacturers sell their drives using the “gigabyte” standard (1 GB = 1,000,000,000 bytes).

We can see in this case this works out near perfectly when we look at the byte based size:

$ lsblk -b | grep sdc
sdc                       8:32   1  128043712512  0 disk 

And 128,043,712,512 bytes (as reported by lsblk -b) / 1,073,741,824 ≈ 119.25 GiB, which lsblk rounds to the 119.3G we saw. So when you buy that next drive, read the fine print on the back and check whether it uses the “1000” bytes per kilobyte or the “1024” bytes per kibibyte standard. Almost always, it will be the former.
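The conversion can be checked with a quick one-liner, using the byte count from the lsblk -b output above:

```shell
# 128043712512 bytes / 1073741824 bytes per GiB = 119.25 GiB,
# which lsblk rounds to the 119.3G shown above.
awk 'BEGIN { printf "%.2f GiB\n", 128043712512 / 1073741824 }'
```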

Some SD manufacturers even count the size of a reserved special area for wear leveling on the SD card as main disk space, yet such space is not accessible to the user, and you may end up with, for example, only 115G showing as usable. Buyer beware.

When you execute lsblk for the first time, you will want to take some time looking at the various drives available. The easiest way to locate a specific volume, for example a flash drive just inserted, is to look for a size which approximately matches the size of the disk inserted.
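If the default lsblk output is busy, you can limit it to just the columns that make a freshly inserted drive easy to spot. This is a sketch using standard lsblk output columns; the devices listed will of course differ on your system:

```shell
# Show only device name, size, transport type (usb, sata, nvme, ...) and model.
lsblk -o NAME,SIZE,TRAN,MODEL
```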

Now that we know that our new drive is labelled sdc (Linux uses sda, sdb, sdc, etc. according to drives detected during startup and/or inserted), we also know where the device file descriptor for this device is located (it is always in /dev):

$ ls /dev/sdc

Also, if there were already partitions on the drive, it would show differently, like this:

$ lsblk -b | grep sdc
sdc                       8:32   1  128043712512  0 disk  
└─sdc1                    8:33   1  128042663936  0 part  

You can see how it has the disk (/dev/sdc – indicated by ‘disk’), and the first partition (/dev/sdc1 – indicated by ‘part’). Logically the partition is slightly smaller than the total disk size due to alignment/reserved space for the partition table, etc.

Finally, if you have other types of storage/disk devices, for example a NVMe drive, then this may show for example as:

$ lsblk | grep nvme
nvme0n1                 259:0    0 701.3G  0 disk  
├─nvme0n1p1             259:1    0   512M  0 part  /boot/efi
├─nvme0n1p2             259:2    0   732M  0 part  /boot
└─nvme0n1p3             259:3    0   700G  0 part  

Here we have an NVMe drive which hosts 3 partitions (p1, p2, p3) and the first two are small boot partitions and the third one is our main data partition. As this partition is in use, we will not be able to have exclusive access or unmounted access to it. This will become relevant once we discuss some of the tools below.

Armed with this information, it’s now easy to run a basic disk performance check against this drive using hdparm:

$ sudo hdparm -Ttv /dev/sdc1

 multcount     =  0 (off)
 readonly      =  0 (off)
 readahead     = 256 (on)
 geometry      = 15567/255/63, sectors = 250083328, start = 2048
 Timing cached reads:   36928 MB in  1.99 seconds = 18531.46 MB/sec
 Timing buffered disk reads: 276 MB in  3.02 seconds =  91.37 MB/sec

We can use hdparm to perform timings for benchmark and comparison purposes, using the -T (perform timings of cache reads) and -t (perform timings of device reads) options.

As you can see, our cached reads come in extremely fast (as is to be expected; it is cached), and they are not necessarily a good number to go by, unless you are testing cache performance specifically.

The more useful number is the buffered disk reads, and they come in at 91.37 MB/sec. Not bad as the manufacturer for this drive did not even advertise write speed.

As the manual for hdparm (-Tt options) states, “For meaningful results, this operation should be repeated 2-3 times on an otherwise inactive system (no other active processes) with at least a couple of megabytes of free memory”, we should run another test to be sure of our results.

A repeated test, this time with only buffered reads and a bit more verbose output (achieved by adding the ‘-v’ option):

$ sudo hdparm -tv /dev/sdc1

 multcount     =  0 (off)
 readonly      =  0 (off)
 readahead     = 256 (on)
 geometry      = 15567/255/63, sectors = 250083328, start = 2048
 Timing buffered disk reads: 276 MB in  3.01 seconds =  91.54 MB/sec

As we can see, the number reported by hdparm is quite reliable.
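The repeated runs the hdparm manual recommends can also be scripted with a simple loop. The device name here is the one from our example; substitute your own:

```shell
# Run the buffered disk read timing three times, as hdparm's manual suggests,
# and eyeball the results for consistency.
for run in 1 2 3; do
    sudo hdparm -t /dev/sdc1
done
```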

So far we have only discussed read speeds. Let us next have a look at write speeds. For this, we will be using dd.

The safest way to do this is to first create a filesystem (outside of the scope of this article – to make it easier you can use a GUI tool like GParted) and then measure the performance with dd. Note that the type of filesystem (e.g. ext4, FAT32, …) will affect the performance, usability and security of your drive.

$ sudo su
# cd /tmp
# mkdir mnt
# mount /dev/sdc1 ./mnt  # Assumes there is at least 1 partition defined on /dev/sdc. In this case there is, and it is an ext4 partition.
# sync
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/dev/zero of=/tmp/mnt/temp oflag=direct bs=128k count=16k  # Our actual performance test
# rm -f /tmp/mnt/temp

The performance test will show as follows:

# dd if=/dev/zero of=/tmp/mnt/temp oflag=direct bs=128k count=16k
16384+0 records in
16384+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 32.1541 s, 66.8 MB/s
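As a quick sanity check, the rate dd reports can be recomputed from the byte count and elapsed time shown in its summary line (dd uses decimal megabytes, 1 MB = 1,000,000 bytes):

```shell
# 2147483648 bytes in 32.1541 seconds ~= 66.8 MB/s, matching dd's own summary.
awk 'BEGIN { printf "%.1f MB/s\n", 2147483648 / 32.1541 / 1000000 }'
```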

As we can see, our 128GB drive is performing reasonably well with a 66.8 MB/s write speed. Let us double check with twice the size (a 4GB file) using the count=32k option:

# dd if=/dev/zero of=/tmp/mnt/temp oflag=direct bs=128k count=32k
32768+0 records in
32768+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 66.7746 s, 64.3 MB/s

So let us look at everything we did here.

First we elevated our privileges to root level with sudo su, and then we created a mnt folder in /tmp. This will be our ‘mount point’, from which we will access our 128GB drive (after mounting it using mount /dev/sdc1 ./mnt, which effectively maps the first partition sdc1 to the ./mnt (/tmp/mnt) folder).

After this we made sure that all of our system’s file caches were synchronized/empty using sync. This is also a handy command to execute before unmounting and pulling out your USB drives, as it ensures that all data which was being written to your USB drive is flushed to the disk instead of remaining in memory. If you unmount a disk in the desktop/GUI, it will execute a sync for you in the background before unmounting the drive and subsequently telling you the disk is safe to remove.

Next we made sure that all remaining system caches were dropped from memory by executing echo 3 > /proc/sys/vm/drop_caches. While both of the last two commands could be left out, especially as we are using /dev/zero as the input device (a virtual device which keeps outputting zeros whenever accessed), it is nice to have the system ‘super clean and ready’ to perform a disk performance test! Basically, we are making sure there is as little caching as possible going on.

Next we have our main performance test using dd. The syntax of dd is quite straightforward, but different from most other command line tools. Let us look at it in some detail:

  • if=/dev/zero: Use the /dev/zero device as input file
  • of=/tmp/mnt/temp: Use the ‘temp’ file, located on the partition(/disk) we just mounted under /tmp/mnt as the output file
  • oflag=direct: set the output flag ‘direct’ ensuring that we ‘use direct I/O for data’ which will eliminate most if not all of the caching the operating system does
  • bs=128k: write up to 128k bytes at a time. The default of 512 bytes is much too small, and would result in not maximizing the possible throughput speed
  • count=16k: copy 16k input blocks, which totals about 2.1 GB or 2.0 GiB. You may want to adjust this variable depending on your drive size and drive performance accuracy requirements (more is better: more reliable)
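The total amount written is simply bs times count; for the values above this can be verified with shell arithmetic:

```shell
# bs=128k (128 * 1024 bytes) times count=16k (16 * 1024 blocks) = exactly 2 GiB.
echo $(( 128 * 1024 * 16 * 1024 ))   # 2147483648
```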

And finally we delete the file we wrote to with rm -f /tmp/mnt/temp.

Note that if your disk was empty, and only if you are sure it is completely empty and does not contain any valuable data, you could do something along the lines of: of=/dev/sdc1 or even of=/dev/sdc to run an exclusive-access / unmounted disk speed test.

This is a very pure way of testing disk performance, but (!) please be very careful with using this, as any device or partition specified in of=... will definitely be overwritten with whatever comes from any if=... you specify. Take care.
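A safer variant of the raw-device approach is a read-only test, which never writes to the drive. This is a sketch assuming your target device is /dev/sdc as in our examples; double-check the device name before running it:

```shell
# Raw READ test: copies 2 GiB from the device into /dev/null, writing nothing
# to the disk. iflag=direct bypasses the page cache so the drive itself is measured.
sudo dd if=/dev/sdc of=/dev/null iflag=direct bs=128k count=16k
```

dd will print its usual records in/out summary with the measured read speed when it finishes.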

How to benchmark Disk performance on Linux – GUI Tool

Now that you know how to run a disk performance test from the command line, using the hdparm (for read) and dd (for write) terminal/CLI tools, let us next look at using a more visual/graphical tool inside the desktop environment.

If you are using Ubuntu, the most common Linux desktop operating system, there is a great disk performance utility built into the operating system. It is also one of the few (or perhaps the only readily available) graphical disk performance testing tools available on Linux. Most other tools are command line based, or have no Linux equivalents to their Microsoft Windows counterparts. For example, there is no Linux graphical counterpart for the CrystalDiskMark Windows disk performance utility.

Simply click Activities at the top left of the screen and type disks, which will show you the Disks icon (showing an image of a hard drive). Click it to open the Disks utility, which has a built-in disk benchmark tool.

Once open, use a single click to select your disk from the left hand side of the dialog window, and then click on the 3 vertical dots near the top right of the dialog window (to the left of the minimize button). From there, select the option Benchmark Disk... to open the benchmarking tool for the selected drive. The ‘Benchmark’ window will open.

Click on Start Benchmark... to open the configuration dialog named Benchmark Settings. From here I recommend you set the following options:

Transfer Rate:

  • Number of samples: 10
  • Sample Size (MiB): 1000 (this is also the maximum)
  • Perform write benchmark: ticked (read the notes below first before starting the benchmark!)

Access Time:

  • Number of Samples: 1000

Then click Start Benchmarking... to start the test. Let’s have a look at the settings we made here.

The maximum sample size is 1000 MiB, and this (1,048,576,000 bytes) is a great number to test with, but it would have been nice if we were allowed to select sizes like 2GB and 4GB, as we did in our dd command line write test above. We will take 10 samples, or in other words 10 runs of the 1GB read and write.

This graphical disk performance measurement utility is very smart in that it will not destroy data on your drive, as for example dd may do if you incorrectly specify the of= setting to be a disk or partition instead of a file.

The way it does this – when you select to perform a write benchmark (as we have done here) – is by reading data from the drive in exclusive access mode (more on this soon), then writing the same data back to the same location! Unless some highly odd write error happens, it is unlikely that this would ever damage data on your drive (though it is not guaranteed!). If you hover your cursor over the Perform write benchmark setting, you can read a bit more on this.

Exclusive access simply means that selecting the write option will ensure that your drive is unmounted before the test, making it available only to this utility without you being able to access it from anywhere else while the test is running. This is necessary for the write test to run properly. It is what you would want in any case; i.e. you do not want to be accessing your drive (or copying data to/from the drive) while the test is running, as this may skew the results significantly.

We also request to take 1000 samples of access time – i.e. the time it takes for the operating system to access the drive. For SD cards this will be quite low, for example our 128GB card gave an average access time of just 0.71 msec across 1000 samples, whereas a slower disk may result in 20-100ms access times.

SD vs HDD performance difference

The screenshot above shows the clear differences in output between the 128GB SD card test and a 3TB Hard Disk Drive.


Armed with the skills to measure disk read and write performance, what will be your next drive performance test? Please let us know in the comments below, and if you end up testing or benchmarking modern day SSD, NVMe, SD or other flash storage, please post some of the results you’re seeing!
