
Example

```sh
s3m /path/to/your/file aws/bucket/destination
                        |     |      |
                        |     |      Path to store the file
                        |     Bucket
                        Host (S3 provider)
```

The default s3m command streams an object. Before using it, you first need a configuration file.
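For reference, a minimal configuration could look like the sketch below. The host name (`aws`), credential placeholders, and region are illustrative; the layout follows the `hosts`/`access_key`/`secret_key`/`region` structure used in the compress example in this guide:

```yaml
---
hosts:
  aws:
    access_key: ACCESS_KEY
    secret_key: SECRET_KEY
    region: us-east-1
```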

INFO

The checksum of the file is calculated before uploading and is used as a reference to record where the file has been uploaded, so the same file is not uploaded again. This state is stored in $HOME/.config/s3m/streams; use the --clean option to clean up that directory.

If the file is bigger than the buffer size (-b, 10MB by default), it is uploaded in parts. The upload can be interrupted at any time; on the next attempt, it will resume from the position where it left off, when possible.
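For a sense of scale, a 5 GiB file with the default 10 MiB buffer would be split into 512 parts; a quick ceiling-division check:

```shell
# number of parts for a 5 GiB file with 10 MiB parts (ceiling division)
echo $(( (5 * 1024 + 10 - 1) / 10 ))
```

This prints 512.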

Example of output

```txt
$ s3m /path/of/file <s3 provider>/bucket/file
checksum: 15cedddb58fd884984808baa9ada34701f2c9d9d6944e620996f890281a87cf5
[00:00:00] ██████████████████████████████████████████████████ 5.02 MiB/5.02 MiB (16.43 MiB/s - 0s)
ETag: "07621fa4dbb59c0a69fd1bf52ff96338"
Previous object size: 5266467
```

TIP

If the file was not overwritten (it is a new file), the Previous object size will be 0.
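The checksum line in the output is a SHA-256 digest of the file, so you can compute the same value locally to compare. A sketch, using an illustrative sample file:

```shell
# write a small sample file and compute its SHA-256 digest,
# the same algorithm s3m prints as "checksum:"
printf 'hello' > /tmp/s3m-demo
sha256sum /tmp/s3m-demo | awk '{print $1}'
```

For this sample payload it prints 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824.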

PIPE/STDIN

You can pipe the output of your application, or a file, for example:

```sh
mysqldump | xz -c | s3m --pipe <s3 provider>/<bucket>/dump.sql
```

-x --compress

If you use the -x option, or define compress in the config file, for example:

```yaml
---
hosts:
  test:
    access_key: ACCESS_KEY
    secret_key: SECRET_KEY
    region: us-east-2
    compress: true
```

The file will be compressed while uploading, using Zstandard compression, and the content type will be set to application/zstd.

Note: If reading from an existing file, the file is compressed on the fly while streaming; because of this, the upload can't be resumed.

Example:

```sh
pg_dump | s3m --pipe -x <s3 provider>/<bucket>/dump.sql
```

The extension of the uploaded object will be .zst.
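After downloading such an object, it can be restored with the standard zstd CLI. A round-trip sketch with illustrative file names, compressing a sample dump the same way (Zstandard) and decompressing it again:

```shell
# compress a sample dump with Zstandard, then restore it
printf 'SELECT 1;\n' > dump.sql
zstd -q -f dump.sql -o dump.sql.zst
zstd -d -q -f dump.sql.zst -o restored.sql
cmp dump.sql restored.sql && echo OK
```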

Metadata

To add custom, user-defined object metadata (x-amz-meta-*), use the -m/--meta option, for example:

```sh
s3m /path_to/file <s3 provider>/<bucket>/file -m "key1=value1;key2=value2"
```

This will create the object with the following metadata:

```yaml
x-amz-meta-key1: value1
x-amz-meta-key2: value2
```

--checksum

To use the additional checksums:

  • CRC32
  • CRC32C
  • SHA1
  • SHA256

Example using CRC32C

```sh
s3m /path/to/file <s3 provider>/bucket/file --checksum crc32c
```

Anonymous user

If you want to list the contents of a public bucket (no credentials required), use the --no-sign-request option. For example, to list the contents of aws-codedeploy-us-east-2.s3.amazonaws.com, this config can be used:

```yaml
---
hosts:
  public:
    region: us-east-2
```

Then list the contents with:

```sh
s3m ls public/aws-codedeploy-us-east-2 --no-sign-request
```

Throttle bandwidth

Use the -k/--kilobytes option to limit the upload/download speed to the given number of kilobytes per second. For example, to limit streaming to 128 KB/s:

```sh
s3m /path/to/file <s3 provider>/bucket/file -k 128
```
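As a rough estimate, at 128 KB/s a 5 MiB file takes about 40 seconds to transfer (5 × 1024 KiB ÷ 128 KiB/s), ignoring protocol overhead:

```shell
# approximate transfer time in seconds for a 5 MiB file at 128 KB/s
echo $(( 5 * 1024 / 128 ))
```

This prints 40.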

--tmp-dir

When streaming from STDIN, you can define a path to store the buffer (defaults to /tmp). For example, to use /opt/backup/tmp:

```sh
mariabackup --stream=xbstream | s3m --pipe -t /opt/backup/tmp <s3 provider>/bucket/backup.xb
```

Released under the BSD License.