What is s3cmd?
--------------
s3cmd is a command-line Amazon S3 client that can be used in scripts,
backup cron jobs, etc. It is your best choice if you want to get up to
speed with S3 quickly. It requires Python 2.4 or newer and a few fairly
common Python modules.
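
Installation is not covered by this HowTo, but as a rough sketch
(package names are the usual ones and may differ on your system) s3cmd
can typically be installed from your distribution's repositories or
from PyPI:

   ~$ sudo apt-get install s3cmd    # Debian/Ubuntu package
   ~$ pip install s3cmd             # or install from PyPI

Alternatively, download it from the S3tools homepage listed at the end
of this document.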

Simple s3cmd HowTo
------------------
1) Register for Amazon AWS / S3
   Go to http://aws.amazon.com/s3, click the "Sign up
   for web service" button in the right column and work 
   through the registration. You will have to supply
   your credit card details so that Amazon can charge
   you for S3 usage.
   At the end you should have your Access and Secret Keys.

2) Run "s3cmd --configure"
   You will be asked for the two keys - copy and paste 
   them from your confirmation email or from your Amazon 
   account page. Be careful when copying them! They are 
   case sensitive and must be entered accurately or you'll 
   keep getting errors about invalid signatures or similar.

3) Run "s3cmd ls" to list all your buckets.
   Since you have only just started using S3, there are no buckets
   owned by you yet, so the output will be empty.
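
   A brand new account therefore produces nothing:

   ~$ s3cmd ls
   ~$ 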

4) Make a bucket with "s3cmd mb s3://my-new-bucket-name"
   Bucket names must be unique amongst _all_ users of S3. That means
   simple names like "test" or "asdf" are already taken and you must
   make up something more original. To demonstrate as many features as
   possible, let's create an FQDN-named bucket, s3://public.s3tools.org:

   ~$ s3cmd mb s3://public.s3tools.org
   Bucket 's3://public.s3tools.org' created

5) List your buckets again with "s3cmd ls"
   Now you should see your freshly created bucket:

   ~$ s3cmd ls
   2009-01-28 12:34  s3://public.s3tools.org

6) List the contents of the bucket:

   ~$ s3cmd ls s3://public.s3tools.org
   ~$ 

   It's empty, indeed.

7) Upload a single file into the bucket:

   ~$ s3cmd put some-file.xml s3://public.s3tools.org/somefile.xml
   some-file.xml -> s3://public.s3tools.org/somefile.xml  [1 of 1]
    123456 of 123456   100% in    2s    51.75 kB/s  done

   Upload two directory trees into the bucket's virtual 'directory':

   ~$ s3cmd put --recursive dir1 dir2 s3://public.s3tools.org/somewhere/
   File 'dir1/file1-1.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-1.txt' [1 of 5]
   File 'dir1/file1-2.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-2.txt' [2 of 5]
   File 'dir1/file1-3.log' stored as 's3://public.s3tools.org/somewhere/dir1/file1-3.log' [3 of 5]
   File 'dir2/file2-1.bin' stored as 's3://public.s3tools.org/somewhere/dir2/file2-1.bin' [4 of 5]
   File 'dir2/file2-2.txt' stored as 's3://public.s3tools.org/somewhere/dir2/file2-2.txt' [5 of 5]

   As you can see, we didn't have to create the /somewhere
   'directory'. In fact it's only a filename prefix, not a real
   directory, so it doesn't have to be created in any way beforehand.
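
   If you want an uploaded file to be readable by anyone over plain
   HTTP, s3cmd can set a public ACL at upload time with --acl-public.
   A hedged example, re-uploading the same file as above:

   ~$ s3cmd put --acl-public some-file.xml s3://public.s3tools.org/somefile.xml

   The object would then be reachable through the bucket's usual S3
   public URL, without needing your keys.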

8) Now list the bucket contents again:

   ~$ s3cmd ls s3://public.s3tools.org
                          DIR   s3://public.s3tools.org/somewhere/
   2009-02-10 05:10    123456   s3://public.s3tools.org/somefile.xml

   Use --recursive (or -r) to list all the remote files:

   ~$ s3cmd ls --recursive s3://public.s3tools.org
   2009-02-10 05:10    123456   s3://public.s3tools.org/somefile.xml
   2009-02-10 05:13        18   s3://public.s3tools.org/somewhere/dir1/file1-1.txt
   2009-02-10 05:13         8   s3://public.s3tools.org/somewhere/dir1/file1-2.txt
   2009-02-10 05:13        16   s3://public.s3tools.org/somewhere/dir1/file1-3.log
   2009-02-10 05:13        11   s3://public.s3tools.org/somewhere/dir2/file2-1.bin
   2009-02-10 05:13         8   s3://public.s3tools.org/somewhere/dir2/file2-2.txt
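
   To see how much data is stored under the bucket in total you can
   use the "du" command (shown here without its output):

   ~$ s3cmd du s3://public.s3tools.org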

9) Retrieve one of the files back and verify that it hasn't been 
   corrupted:

   ~$ s3cmd get s3://public.s3tools.org/somefile.xml some-file-2.xml
   s3://public.s3tools.org/somefile.xml -> some-file-2.xml  [1 of 1]
    123456 of 123456   100% in    3s    35.75 kB/s  done

   ~$ md5sum some-file.xml some-file-2.xml
   39bcb6992e461b269b95b3bda303addf  some-file.xml
   39bcb6992e461b269b95b3bda303addf  some-file-2.xml

   The checksum of the original file matches that of the retrieved
   one. Looks like it worked :-)

   To retrieve a whole 'directory tree' from S3 use recursive get:

   ~$ s3cmd get --recursive s3://public.s3tools.org/somewhere
   File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as './somewhere/dir1/file1-1.txt'
   File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as './somewhere/dir1/file1-2.txt'
   File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as './somewhere/dir1/file1-3.log'
   File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as './somewhere/dir2/file2-1.bin'
   File s3://public.s3tools.org/somewhere/dir2/file2-2.txt saved as './somewhere/dir2/file2-2.txt'

   Since the destination directory wasn't specified, s3cmd
   saved the directory structure in the current working
   directory ('.').

   There is an important difference between:
      get s3://public.s3tools.org/somewhere
   and
      get s3://public.s3tools.org/somewhere/
   (note the trailing slash)
   S3cmd always uses the last path part, i.e. the word
   after the last slash, for naming local files.
 
   In the case of s3://.../somewhere the last path part 
   is 'somewhere' and therefore the recursive get names
   the local files as somewhere/dir1, somewhere/dir2, etc.

   On the other hand in s3://.../somewhere/ the last path
   part is empty and s3cmd will only create 'dir1' and 'dir2' 
   without the 'somewhere/' prefix:

   ~$ s3cmd get --recursive s3://public.s3tools.org/somewhere/ /tmp
   File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as '/tmp/dir1/file1-1.txt'
   File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as '/tmp/dir1/file1-2.txt'
   File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as '/tmp/dir1/file1-3.log'
   File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as '/tmp/dir2/file2-1.bin'
   File s3://public.s3tools.org/somewhere/dir2/file2-2.txt saved as '/tmp/dir2/file2-2.txt'

   See? It's /tmp/dir1 and not /tmp/somewhere/dir1 as it 
   was in the previous example.
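
   For repeated transfers the "sync" command is often more convenient
   than put/get: it compares local and remote files and only transfers
   what has changed. A minimal sketch, reusing dir1 from above:

   ~$ s3cmd sync dir1 s3://public.s3tools.org/somewhere/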

10) Clean up - delete the remote files and remove the bucket:

   Remove everything under s3://public.s3tools.org/somewhere/

   ~$ s3cmd del --recursive s3://public.s3tools.org/somewhere/
   File s3://public.s3tools.org/somewhere/dir1/file1-1.txt deleted
   File s3://public.s3tools.org/somewhere/dir1/file1-2.txt deleted
   ...

   Now try to remove the bucket:

   ~$ s3cmd rb s3://public.s3tools.org
   ERROR: S3 error: 409 (BucketNotEmpty): The bucket you tried to delete is not empty

   Ouch, we forgot about s3://public.s3tools.org/somefile.xml.
   We can force the bucket removal anyway:

   ~$ s3cmd rb --force s3://public.s3tools.org/
   WARNING: Bucket is not empty. Removing all the objects from it first. This may take some time...
   File s3://public.s3tools.org/somefile.xml deleted
   Bucket 's3://public.s3tools.org/' removed
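
   Listing your buckets once more should confirm it is gone (assuming
   this was your only bucket):

   ~$ s3cmd ls
   ~$ 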

Hints
-----
The basic usage is as simple as described in the previous 
section.

You can increase the level of verbosity with the -v option, and if
you're really keen to know what the program does under its bonnet,
run it with -d to see all the 'debugging' output.

After configuring it with --configure, all the available options are
written into your ~/.s3cfg file. It's a text file, ready to be
modified in your favourite text editor.
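
As a rough illustration, the most important entries in ~/.s3cfg look
something like this (the values are placeholders; --configure writes
the full set of options for you):

   [default]
   access_key = YOUR-ACCESS-KEY
   secret_key = YOUR-SECRET-KEY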

For more information refer to:
* S3cmd / S3tools homepage at http://s3tools.org
* Amazon S3 homepage at http://aws.amazon.com/s3
