From time to time you may want to back up your files directly to Amazon S3 storage, without the intermediate step of saving a compressed backup file to a local disk. This is especially useful if disk space on your local drive or server is limited. The commands below assume that you have already installed and configured the s3cmd tool on your server. The bucket name used in our examples is backup. Let's start with a regular backup that creates a local file:
$ tar cPf /tmp/lubos.tar /home/lubos
$ s3cmd put /tmp/lubos.tar s3://backup/lubos.tar
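The drawback of the two-step approach is the temporary tarball left on disk. A small sketch (with an illustrative source directory) archives, verifies, and then removes the local copy; the s3cmd upload line is shown but commented out so the rest can be tried without S3 credentials:

```shell
#!/bin/sh
# Illustrative source directory for the two-step backup.
mkdir -p /tmp/demo-home
echo "data" > /tmp/demo-home/file.txt

# Step 1: create the local tarball (-P keeps absolute paths).
tar -cPf /tmp/demo.tar /tmp/demo-home

# Step 2: copy it to the bucket (requires a configured s3cmd):
# s3cmd put /tmp/demo.tar s3://backup/demo.tar

# Step 3: sanity-check the archive, then remove the temporary file.
tar -tPf /tmp/demo.tar | grep file.txt && rm /tmp/demo.tar
```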
What happened above is that we first created a tarball from the /home/lubos directory and stored it locally. In the next step, we copied our backup file to S3 storage. The alternative is to store the backup file directly on S3 storage using STDOUT and a pipe. Please note that this feature is only available in s3cmd versions >= 1.5.
$  tar -cP /home/lubos | s3cmd put - s3://backup/lubos.tar
The above command will stream our tarball in small chunks directly into the S3 bucket, so no local copy is ever written. If compression is required, replace the above s3cmd command with the one below, which applies the maximum gzip compression level, 9:
$  tar -cP /home/lubos | gzip -9 | s3cmd put - s3://backup/lubos.tar
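Restoring a streamed backup simply reverses the pipeline: fetch, decompress, extract. The same round trip can be exercised locally before pointing it at S3; here a plain file redirect stands in for the `s3cmd put - ...` and `s3cmd get ... -` stages, and the paths are illustrative:

```shell
#!/bin/sh
# Illustrative directory to archive.
mkdir -p /tmp/s3demo/docs
echo "hello" > /tmp/s3demo/docs/note.txt

# Stream: tar -> gzip -9 -> destination. The redirect stands in
# for "s3cmd put - s3://backup/demo.tar.gz" in this local test.
tar -cPf - /tmp/s3demo/docs | gzip -9 > /tmp/demo.tar.gz

# Restore reverses the pipeline: gunzip -> tar. With s3cmd,
# "s3cmd get s3://backup/demo.tar.gz -" would feed the same pipe.
gunzip < /tmp/demo.tar.gz | tar -tPf - | grep note.txt
```

The final command lists the archived file, confirming the streamed archive survived the round trip intact.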
Lastly, below is an example daily backup script that uses the technique above to back up all users' home directories to the Amazon S3 bucket:
#!/bin/sh

TODAY=$(date +%F)

# Back up every home directory listed in /etc/passwd.
cut -d: -f6 /etc/passwd | grep '^/home' | while read -r homedir
do
        tar -cP "$homedir" | gzip -9 | s3cmd put - "s3://backup/$TODAY/user-$(basename "$homedir").tar.gz"
done
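The `cut`/`grep` pipeline that drives the loop can be checked on its own; against a small sample passwd file (the entries below are illustrative) it yields one home directory per matching user:

```shell
#!/bin/sh
# Sample passwd file with illustrative entries.
cat > /tmp/passwd.sample <<'EOF'
root:x:0:0:root:/root:/bin/bash
lubos:x:1000:1000::/home/lubos:/bin/bash
alice:x:1001:1001::/home/alice:/bin/bash
EOF

# Field 6 of a passwd entry is the home directory;
# keep only paths under /home.
cut -d: -f6 /tmp/passwd.sample | grep '^/home'
```

Here /home/lubos and /home/alice are printed, while root's /root is filtered out, which is exactly the selection the backup script relies on.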