# Upload

Upload a file with the default command:

```sh
s3m /path/to/file host/bucket/object
```

You need a valid configuration first.

## Path format

```txt
host/bucket/object
```

- host: name defined in ~/.config/s3m/config.yml
- bucket: destination bucket
- object: key stored in the bucket

## Regular file uploads

For regular files, s3m calculates a checksum before upload and keeps local multipart state under $HOME/.config/s3m/streams.

- Files smaller than the multipart buffer are uploaded in a single request.
- Files larger than the multipart buffer are uploaded in parts and can be resumed.
- Use s3m --clean to remove the local multipart state.

Example:

```sh
s3m /path/to/file backup/backups/dump.sql
```

## Multipart and resumable uploads

Large regular files use multipart upload automatically. This is the upload path that creates local stream state, which can later be inspected with s3m streams.

- Multipart uploads are used when the file is larger than the effective part size.
- Local state is stored under ~/.config/s3m/streams.
- Interrupted regular file uploads can be resumed.
- Transformed uploads such as --pipe, compression, or encryption are not resumed through this path.

Example:

```sh
s3m /path/to/large-backup.tar backup/backups/large-backup.tar
```
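The effective part size matters because S3 caps a multipart upload at 10,000 parts, each at least 5 MiB. As a rough sketch (the 2 TiB file below is a hypothetical example, not an s3m default), the smallest part size that keeps a file within the cap is:

```sh
# S3 allows at most 10,000 parts per multipart upload, each >= 5 MiB.
file_mib=$((2 * 1024 * 1024))              # hypothetical 2 TiB file, in MiB
min_part=$(( (file_mib + 9999) / 10000 ))  # ceiling division over 10,000 parts
if [ "$min_part" -lt 5 ]; then min_part=5; fi  # never below S3's 5 MiB minimum
echo "${min_part} MiB parts"
```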

## Pipe / STDIN

You can stream live input with --pipe:

```sh
pg_dump mydb | s3m --pipe backup/backups/mydb.sql
```

STDIN uploads are not resumable after interruption because the original stream cannot be replayed safely.

When the input size is unknown, s3m uses fixed 512 MiB multipart parts for streaming uploads.
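Combined with S3's 10,000-part cap on multipart uploads, the fixed 512 MiB part size puts an upper bound on how large a streamed object can be:

```sh
# 512 MiB per part x 10,000 parts (the S3 multipart limit)
max_gib=$(( 512 * 10000 / 1024 ))
echo "${max_gib} GiB"   # 5000 GiB, roughly 4.88 TiB
```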

For interrupted or failed streaming uploads, configure bucket lifecycle rules to clean up incomplete multipart uploads automatically.
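One way to express such a rule is an S3 lifecycle configuration with AbortIncompleteMultipartUpload; the rule ID and the 7-day window below are arbitrary example choices, and the exact mechanism depends on your provider:

```json
{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }
  ]
}
```

On AWS, a configuration like this can be applied with aws s3api put-bucket-lifecycle-configuration.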

## Compression

Enable compression with -x or with compress: true in config.yml:

```yaml
---
hosts:
  backup:
    region: us-east-2
    access_key: ACCESS_KEY
    secret_key: SECRET_KEY
    compress: true
```

Compression uses zstd and sets the content type to application/zstd.

Because the file is transformed while uploading, compressed uploads are not resumable.

Example:

```sh
pg_dump mydb | s3m --pipe -x backup/backups/mydb.sql
```

## Metadata

Add user-defined object metadata with -m:

```sh
s3m /path/to/file backup/backups/file.dat -m "key1=value1;key2=value2"
```

This creates headers such as:

```yaml
x-amz-meta-key1: value1
x-amz-meta-key2: value2
```
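As a sketch of how the -m string maps to headers (illustrative only — this is not s3m's actual parsing code):

```sh
# Split a "key=value;key=value" metadata string into x-amz-meta-* headers.
meta="key1=value1;key2=value2"
headers=""
IFS=';'
for pair in $meta; do
  headers="${headers}x-amz-meta-${pair%%=*}: ${pair#*=}
"
done
unset IFS
printf '%s' "$headers"
```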

## Additional checksums

Supported values for --checksum:

- crc32
- crc32c
- sha1
- sha256

Example:

```sh
s3m /path/to/file backup/backups/file.dat --checksum crc32c
```

The --checksum option applies to file uploads, not to --pipe.
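You can sanity-check these digests locally with standard coreutils (note that the S3 API reports additional checksums base64-encoded, while these tools print hex):

```sh
# Hex digests of the same kinds s3m can attach.
# sha1 and sha256 shown; crc32c has no common coreutils tool.
printf 'hello' | sha1sum
printf 'hello' | sha256sum
```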

## Anonymous requests

For public buckets, use --no-sign-request:

```yaml
---
hosts:
  public:
    region: us-east-2
```

```sh
s3m ls public/aws-codedeploy-us-east-2 --no-sign-request
```

## Throttle bandwidth

Use -k/--kilobytes to limit upload or download rate:

```sh
s3m /path/to/file backup/backups/file.dat -k 128
```
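Because the limit is expressed in KiB/s, transfer time is easy to estimate; for example, a 1 GiB file at 128 KiB/s:

```sh
# 1 GiB = 1,048,576 KiB; at 128 KiB/s that is 8192 s, about 136 minutes
secs=$(( 1024 * 1024 / 128 ))
echo "$secs seconds (~$(( secs / 60 )) minutes)"
```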

## Temporary directory

When streaming from STDIN, you can choose where the temporary buffer lives:

```sh
mariabackup --stream=xbstream | s3m --pipe -t /opt/backup/tmp backup/backups/backup.xb
```

## Encryption

To encrypt uploads, add enc_key to config.yml:

```yaml
---
hosts:
  backup:
    region: us-east-2
    access_key: ACCESS_KEY
    secret_key: SECRET_KEY
    enc_key: "0123456789abcdef0123456789abcdef"
```

The key must be exactly 32 characters long.
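One convenient way to generate such a key (hex output from openssl is just one option; 16 random bytes encode to exactly 32 hex characters):

```sh
# 16 random bytes, hex-encoded, yield a 32-character key for enc_key
key=$(openssl rand -hex 16)
echo "${#key}"
```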

Encrypted files use ChaCha20-Poly1305 and the content type application/vnd.s3m.encrypted.

- Encrypted files get the .enc suffix.
- Compressed and encrypted files get the .zst.enc suffix.
- Encrypted uploads are not resumable because the content is transformed while streaming.

If compress is enabled, the data is compressed before it is encrypted.

Released under the BSD License.