Example
s3m /path/to/your/file aws/bucket/destination
                       |   |      |
                       |   |      Path to store the file
                       |   Bucket
                       Host (S3 provider)
The default s3m command streams an object; first, you need a configuration file.
INFO
The checksum of the file is calculated before uploading and is used to keep a record of where the file has been uploaded, to prevent uploading it again. This state is stored in $HOME/.config/s3m/streams; use the --clean option to clean up the directory.
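The layout inside that directory is an implementation detail, but a quick way to check whether any upload state has been saved (a sketch, assuming a POSIX shell):

```shell
# inspect the directory where s3m keeps upload-state records
STATE_DIR="$HOME/.config/s3m/streams"
ls "$STATE_DIR" 2>/dev/null || echo "no saved upload state"
```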
If the file is bigger than the buffer size (-b, 10 MB by default), it will be uploaded in parts. The upload can be interrupted at any time; on the next attempt, it will resume from where it left off when possible.
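As a rough sketch of the arithmetic (assuming 10 MiB parts; s3m may size parts differently), a 1 GiB file would be split into 103 parts:

```shell
# ceiling division: parts needed for a 1 GiB file with 10 MiB parts
file_size=$((1024 * 1024 * 1024))   # 1 GiB in bytes
part_size=$((10 * 1024 * 1024))     # 10 MiB in bytes
echo $(( (file_size + part_size - 1) / part_size ))   # -> 103
```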
Example of output
$ s3m /path/of/file <s3 provider>/bucket/file
checksum: 15cedddb58fd884984808baa9ada34701f2c9d9d6944e620996f890281a87cf5
[00:00:00] ██████████████████████████████████████████████████ 5.02 MiB/5.02 MiB (16.43 MiB/s - 0s)
ETag: "07621fa4dbb59c0a69fd1bf52ff96338"
Previous object size: 5266467
TIP
If the file was not overwritten (it is a new object), the Previous object size will be 0
PIPE/STDIN
You can pipe the output of your application, or a file, for example:
mysqldump | xz -c | s3m --pipe <s3 provider>/<bucket>/dump.sql
-x --compress
If you use the option -x, or define compress in the config file, for example:
---
hosts:
test:
access_key: ACCESS_KEY
secret_key: SECRET_KEY
region: us-east-2
compress: true
The file will be compressed while uploading, using Zstandard compression; the content-type will be set to application/zstd.
Note: If reading from an existing file, it will be compressed on the fly while streaming; because of this, the upload can't be resumed.
Example:
pg_dump | s3m --pipe -x <s3 provider>/<bucket>/dump.sql
The extension of the file is going to be .zst
Metadata
To add custom, user-defined object metadata (x-amz-meta-*), the option -m/--meta can be used, for example:
s3m /path_to/file <s3 provider>/<bucket>/file -m "key1=value1;key2=value2"
This will create the object with the following metadata:
x-amz-meta-key1: value1
x-amz-meta-key2: value2
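Each key=value pair in the -m argument maps to one x-amz-meta-* header; a small sketch of that mapping in plain shell:

```shell
# split the -m argument on ';' and print the resulting metadata headers
META="key1=value1;key2=value2"
IFS=';'
for kv in $META; do
    printf 'x-amz-meta-%s: %s\n' "${kv%%=*}" "${kv#*=}"
done
```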
--checksum
To use the additional checksums:
- CRC32
- CRC32C
- SHA1
- SHA256
Example using CRC32C
s3m /path/to/file <s3 provider>/bucket/file --checksum crc32c
Anonymous user
If you want to list the contents of a public bucket (no credentials required), use the option --no-sign-request. For example, to list the contents of aws-codedeploy-us-east-2.s3.amazonaws.com, this config can be used:
---
hosts:
public:
region: us-east-2
Then list the contents with:
s3m ls public/aws-codedeploy-us-east-2 --no-sign-request
Throttle bandwidth
Use the option -k/--kilobytes
to limit the upload/download speed to the given number of kilobytes per second; for example, to limit streaming to 128 KB/s:
s3m /path/to/file <s3 provider>/bucket/file -k 128
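Transfer time at a given limit is easy to estimate (treating KB as KiB for round numbers); for example, a 5 MiB file at 128 KB/s:

```shell
# (5 MiB = 5 * 1024 KiB) / 128 KiB per second
echo "$(( 5 * 1024 / 128 )) seconds"   # -> 40 seconds
```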
--tmp-dir
When streaming from STDIN, you can define a path to store the buffer (defaults to /tmp); for example, to use /opt/backup/tmp:
mariabackup --stream=xbstream | s3m --pipe -t /opt/backup/tmp <s3 provider>/bucket/backup.xb
Encryption
To encrypt, add the enc_key to the config file, for example:
---
hosts:
myhost:
access_key: ACCESS_KEY
secret_key: SECRET_KEY
region: us-east
enc_key: "0123456789abcdef0123456789abcdef"
Note: The key must be 32 characters long; if no key is defined, the file will not be encrypted.
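One way to generate a 32-character key (hex here, but any 32 characters work), assuming openssl is available:

```shell
# 16 random bytes, hex-encoded, gives exactly 32 characters
enc_key=$(openssl rand -hex 16)
printf '%s (%d chars)\n' "$enc_key" "${#enc_key}"
```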
Files will be encrypted using ChaCha20-Poly1305, and the content-type will be set to application/vnd.s3m.encrypted. The file name will be suffixed with .enc; if compression is enabled, it will be suffixed with .zst.enc.
When downloading, the file will be decrypted automatically and the .enc suffix removed; if the file was compressed, it will keep the .zst suffix.
Note: As with the compress option, the upload can't be resumed, because the file is encrypted on the fly while streaming. If compress is enabled, the file is compressed before being encrypted, to get a better compression ratio.
Example, if you want compression and encryption always enabled:
---
hosts:
myhost:
access_key: ACCESS_KEY
secret_key: SECRET_KEY
region: us-east
compress: true
enc_key: "0123456789abcdef0123456789abcdef"