s3cmd – Backup directly to Amazon S3 storage using STDOUT

From time to time you may want to back up your files directly to Amazon S3 storage without the intermediate step of saving the compressed backup file to a local disk. This is especially useful if disk space on your local drive or server is limited. The commands below assume that you have already installed and configured the s3cmd tool on your server. The bucket we are going to use throughout the examples is called backup.
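If the destination bucket does not exist yet, it can be created first with s3cmd's mb (make bucket) command. Keep in mind that S3 bucket names are globally unique, so a short name like backup may already be taken; it is used here purely as an example:

$ s3cmd mb s3://backup

With the bucket in place, let's start with a regular backup that first creates a local file.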

$ tar cPf /tmp/lubos.tar /home/lubos
$ s3cmd put /tmp/lubos.tar s3://backup/lubos.tar
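To confirm that the upload succeeded, you can list the bucket contents or query the object's metadata; both ls and info are standard s3cmd commands:

$ s3cmd ls s3://backup/
$ s3cmd info s3://backup/lubos.tar

Once the remote copy is verified, the temporary local file /tmp/lubos.tar can be deleted.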

What happened above is that we first created a tarball of the /home/lubos directory and stored it locally, and in the next step we copied the backup file to S3 storage. The alternative is to store the backup file directly on S3 storage using STDOUT and a pipe, skipping the local copy entirely. Please note that reading the upload from STDIN is only available in s3cmd versions >= 1.5.

$ tar -cP /home/lubos | s3cmd put - s3://backup/lubos.tar
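The same mechanism works in the other direction for a restore. A minimal sketch, assuming your s3cmd version also supports - as the download destination for get (available in recent releases):

$ s3cmd get s3://backup/lubos.tar - | tar -xPf -

Here tar -xPf - extracts the archive from STDIN, restoring the absolute paths that were stored with -P.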

The tar-to-s3cmd upload pipeline above stores our tarball in small batches (S3 multipart chunks) directly in the bucket, without ever writing the archive to the local disk; the chunk size can be tuned, as shown after the next example. If compression is required, insert gzip into the pipeline. The command below applies the maximum compression level 9 (note the .tar.gz suffix on the destination object):

$ tar -cP /home/lubos | gzip -9 | s3cmd put - s3://backup/lubos.tar.gz
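As for the multipart chunk size, s3cmd exposes it via the --multipart-chunk-size-mb option (15 MB by default). Larger chunks mean fewer requests for big archives; for example, to upload the compressed stream in 100 MB parts (the value 100 is just an illustration):

$ tar -cP /home/lubos | gzip -9 | s3cmd put --multipart-chunk-size-mb=100 - s3://backup/lubos.tar.gz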

Lastly, below you will find an example daily backup script that uses the technique above to back up all users' home directories to the Amazon S3 bucket:

#!/bin/sh

# Date stamp in YYYY-MM-DD format, used as a per-day prefix in the bucket
TODAY=$(date +%F)

# Back up every home directory listed in /etc/passwd
for i in $( cut -d: -f6 /etc/passwd | grep '^/home' ); do
        tar -cP "$i" | gzip -9 | s3cmd put - "s3://backup/$TODAY/user-$(basename "$i").tar.gz"
done
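To run the script automatically every day, save it somewhere like /usr/local/bin/s3-home-backup.sh (a path chosen here purely for illustration), make it executable with chmod +x, and add an entry to root's crontab (crontab -e), for example to run it at 02:00 every night:

0 2 * * * /usr/local/bin/s3-home-backup.sh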

