The cURL Linux command can use various network protocols to download and upload data on Linux. Basic usage of the curl command is pretty simple, but it has a ton of options and can grow more complicated very quickly. In this guide, we'll go over some of the more common uses for the curl command and show you syntax examples so you can use it on your own system.

In this tutorial you will learn:
- What is cURL and what can it do?
- How cURL compares to wget
- How to download a file from a website with cURL
- How to follow redirects
- How to download and untar a file automatically
- How to authenticate with cURL
- How to download headers with cURL
- How to use quiet mode with cURL
| Category | Requirements, Conventions or Software Version Used |
| --- | --- |
| System | Linux (any distribution) |
| Other | Privileged access to your Linux system as root or via the `sudo` command |
| Conventions | `#` - requires given Linux commands to be executed with root privileges either directly as a root user or by use of the `sudo` command |
What can cURL do?
Curl can use a large assortment of network protocols to communicate with remote systems. It's a perfect debugging tool, capable of sending requests to servers and writing the responses to stdout, where they can be logged or handed off to other tools as part of a Bash script for processing.
The man page for curl shows all the protocols it supports:
$ man curl
curl is a tool to transfer data from or to a server, using one of the supported protocols (DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET and TFTP). The command is designed to work without user interaction.
HTTP and HTTPS are among the protocols listed, meaning curl can download files from websites. If you're familiar with the wget command, the two tools are similar in this aspect. We'll show you how to download files with it in the next section.
So, it's like wget?
Curl is capable of retrieving files through HTTP, HTTPS, and FTP protocols, just like wget. Both commands are fine choices for the task, though wget is sometimes preferred for its ability to download recursively. Both commands can also send HTTP POST requests. Apart from this overlap, the functionality available in the two utilities is quite different.
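To make the overlap concrete, here's a short sketch (using a hypothetical URL) of how the two commands line up for a simple download:

```shell
# Hypothetical URL for illustration; both commands save the file
# in the current directory under its remote name (linux.iso).
wget https://example.com/linux.iso

# curl needs the -O (--remote-name) flag to do the same; by default
# it would write the file's contents to stdout instead.
curl -O https://example.com/linux.iso
```

Beyond simple downloads like this, the two tools diverge quickly, as the rest of this guide shows.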
Download a file from a website with cURL
Let's see the command used to download a file with curl. As an example, curl can be used to download a Linux distribution, which is typically available as an ISO file.
Open a terminal and type the following command to download an ISO file with curl:
$ curl https://example.com/linux.iso --output linux.iso
The terminal shows us some output about the progress of the download until it completes. The `--output` option is necessary because, by default, curl just writes downloaded data to the terminal (stdout). For example:

$ curl https://linuxconfig.org

In the case of a website, which serves HTML content, you'll get a bunch of HTML code in your terminal. Now you can see why curl makes for an easy debugging tool. If we had wanted to download the page to a file, we'd just need to append the `--output` option as in the first example. The `-o` flag does the same thing and is a shorter way to write it, while the capital `-O` flag seen earlier instead saves the file under its remote name.

You can name your downloads however you like by specifying a filename after the `-o` flag:

$ curl https://example.com/linux.iso -o any_file_name.iso
It's worth noting that a lot of websites have 301 or 302 redirects set up, for example to redirect users landing on HTTP pages to the corresponding HTTPS page. Curl doesn't try to follow these redirects unless you tell it to with the `-L` option. If you find curl getting held up by redirects, just tack that option onto the command.

$ curl -L linuxconfig.org
Untar download automatically
You can save some time when downloading tar files by piping the curl command over to tar. This will not generate a tar file on your system, since the file is downloaded to stdout and tar handles things from there. For example, to download WordPress and open the tar archive in a single command:
$ curl https://wordpress.org/latest.tar.gz | tar -xz
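One caveat with piping, sketched below: a pipeline's exit status normally comes from tar, so a failed download can go unnoticed, and on an HTTP error curl would happily pipe the server's error page into tar. Curl's `-f` flag and Bash's `pipefail` option guard against both (the WordPress URL is the same one used above):

```shell
#!/bin/bash
# Make the whole pipeline fail if either command fails.
set -o pipefail

# -f: exit with an error on HTTP failure codes instead of piping
#     the server's error page into tar
# -L: follow redirects, in case the URL has moved
curl -fL https://wordpress.org/latest.tar.gz | tar -xz
```

This is a sketch of defensive scripting around the one-liner above, not something the plain interactive command requires.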
Authentication with curl
You can authenticate with a website, FTP server, etc. with the `-u` option in your curl command. Specify the username and password directly after that switch, separated by a colon. For example, here's how to authenticate with an FTP server. This server is provided to the public for testing purposes, so you can try the command from your own terminal:
$ curl -u demo:password ftp://test.rebex.net
We can also download the readme file on the server:
$ curl -u demo:password ftp://test.rebex.net/readme.txt
Download headers with curl
Curl is a great tool for downloading headers from a remote server. Headers can give you some general information about the requested page, the server, etc. Again, it's great for troubleshooting. Use the `-I` option on your curl command to get only the headers:
$ curl -I linuxconfig.org
There's a good chance you'll want to use the redirect option `-L` on websites as well:
$ curl -IL linuxconfig.org
Quiet mode with curl
If you could do without curl's progress meter and error messages, the `-s` option will silence curl. Of course, the downloaded content itself will still come to your terminal, so you probably also want to use `--output` to tell curl where to put it:

$ curl -s https://linuxconfig.org --output index.html
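In scripts, a common refinement (a sketch, not part of the command above) is to pair `-s` with `-S` (`--show-error`), which keeps the progress meter suppressed but restores error messages, so a failing download in a cron job doesn't fail silently:

```shell
# -s hides the progress meter; -S brings error messages back;
# -f makes curl exit non-zero on server errors so the script can react.
curl -sSf https://linuxconfig.org --output index.html || echo "download failed" >&2
```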
In this article, we saw how the curl command can be used for things like downloading files from the command line, authenticating with servers, etc. It's an excellent debugging tool and an all-around useful command to know.
Curl's options are very extensive, as it supports a ton of network protocols and can be easily piped to other tools since it sends content to stdout. We've covered some of the common uses of curl in this tutorial, but be sure to check the man pages to see the many other things it can do.