Upload
s3m uploads with the default command:

```sh
s3m /path/to/file host/bucket/object
```

You need a valid configuration first.
Path format
```sh
host/bucket/object
```

- host: name defined in ~/.config/s3m/config.yml
- bucket: destination bucket
- object: key stored in the bucket
Regular file uploads
For regular files, s3m calculates a checksum before upload and keeps local multipart state under $HOME/.config/s3m/streams.
- Files smaller than the multipart buffer are uploaded in a single request.
- Files larger than the multipart buffer are uploaded in parts and can be resumed.
- Use `s3m --clean` to remove the local multipart state.
Example:
```sh
s3m /path/to/file backup/backups/dump.sql
```

Multipart and resumable uploads
Large regular files use multipart upload automatically. This is the upload path that creates local stream state and can later be inspected with s3m streams.
- multipart uploads are used when the file is larger than the effective part size
- local state is stored under ~/.config/s3m/streams
- interrupted regular file uploads can be resumed
- transformed uploads such as --pipe, compression, or encryption are not resumed through this path
Example:
```sh
s3m /path/to/large-backup.tar backup/backups/large-backup.tar
```

PIPE / STDIN
You can stream live input with --pipe:
```sh
pg_dump mydb | s3m --pipe backup/backups/mydb.sql
```

STDIN uploads are not resumable after interruption because the original stream cannot be replayed safely.
When the input size is unknown, s3m uses fixed 512 MiB multipart parts for streaming uploads.
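Those fixed 512 MiB parts bound how much an unknown-length stream can hold, since S3 allows at most 10,000 parts per multipart upload; a quick check of the arithmetic:

```shell
# 512 MiB per part x 10,000 parts (the S3 per-upload limit), in GiB
echo "$(( 512 * 10000 / 1024 )) GiB"   # prints "5000 GiB", about 4.88 TiB
```

That ceiling sits just under S3's 5 TiB maximum object size.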
For interrupted or failed streaming uploads, configure bucket lifecycle rules to clean up incomplete multipart uploads automatically.
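One way to set that up is a lifecycle rule that aborts multipart uploads left incomplete for a few days; a sketch using the AWS CLI, where the bucket name and the 7-day window are placeholders:

```shell
# Rule: abort any multipart upload still incomplete 7 days after it started.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}
    }
  ]
}
EOF

# Apply it (requires credentials allowed to change the bucket's lifecycle):
# aws s3api put-bucket-lifecycle-configuration \
#   --bucket backups --lifecycle-configuration file://lifecycle.json
```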
Compression
Enable compression with -x or with compress: true in config.yml:
```yaml
---
hosts:
  backup:
    region: us-east-2
    access_key: ACCESS_KEY
    secret_key: SECRET_KEY
    compress: true
```

Compression uses zstd and sets the content type to application/zstd.
Because the file is transformed while uploading, compressed uploads are not resumable.
Example:
```sh
pg_dump mydb | s3m --pipe -x backup/backups/mydb.sql
```

Metadata
Add user-defined object metadata with -m:
```sh
s3m /path/to/file backup/backups/file.dat -m "key1=value1;key2=value2"
```

This creates headers such as:

```
x-amz-meta-key1: value1
x-amz-meta-key2: value2
```

Additional checksums
Supported values for --checksum:
- crc32
- crc32c
- sha1
- sha256
Example:
```sh
s3m /path/to/file backup/backups/file.dat --checksum crc32c
```

--checksum applies to file uploads, not --pipe.
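S3 reports these additional checksums base64-encoded, so you can compute the same value locally to compare; a sketch with openssl for sha256, where the file name is a placeholder:

```shell
printf 'hello' > file.dat
# Base64-encode the raw digest, matching the x-amz-checksum-sha256 format.
openssl dgst -sha256 -binary file.dat | base64
# → LPJNul+wow4m6DsqxbninhsWHlwfp0JecwQzYpOLmCQ=
```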
Anonymous requests
For public buckets, use --no-sign-request:
```yaml
---
hosts:
  public:
    region: us-east-2
```

```sh
s3m ls public/aws-codedeploy-us-east-2 --no-sign-request
```

Throttle bandwidth
Use -k/--kilobytes to limit upload or download rate:
```sh
s3m /path/to/file backup/backups/file.dat -k 128
```

Temporary directory
When streaming from STDIN, you can choose where the temporary buffer lives:
```sh
mariabackup --stream=xbstream | s3m --pipe -t /opt/backup/tmp backup/backups/backup.xb
```

Encryption
To encrypt uploads, add enc_key to config.yml:
```yaml
---
hosts:
  backup:
    region: us-east-2
    access_key: ACCESS_KEY
    secret_key: SECRET_KEY
    enc_key: "0123456789abcdef0123456789abcdef"
```

The key must be exactly 32 characters long.
Encrypted files use ChaCha20-Poly1305 and the content type application/vnd.s3m.encrypted.
- encrypted files get the .enc suffix
- compressed + encrypted files get the .zst.enc suffix
- encrypted uploads are not resumable because the content is transformed while streaming
If compress is enabled, the data is compressed before it is encrypted.
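Since enc_key must be exactly 32 characters, a hex-encoded 16-byte random value is one convenient way to generate it (assuming openssl is available; any 32-character string works):

```shell
# 16 random bytes, hex-encoded -> 32 ASCII characters
enc_key=$(openssl rand -hex 16)
echo "${#enc_key}"   # prints 32
```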