`s3cmd` and `aws-cli` throughput can be very slow, and `s4cmd` and `s5cmd` are not easy to install in all environments. If you're using MinIO to host your files, use `mc` instead.
# Get the latest `mc` version:
docker pull minio/mc
# Run `mc` container sharing the current directory:
docker run -v "$(pwd)":/data -it --rm --entrypoint=/bin/sh minio/mc
# Inside the container, log in to your S3 provider:
mc alias set mys3 https://s3.example.net/ access_key secret_key
# You may want to copy the `/root/.mc/config.json` file so you can reuse it later
# Test your config:
mc ls mys3
# Upload any files you want:
mc cp /data/my-file.ext mys3/bucket/
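For whole directories, `mc mirror` synchronizes a local path with a bucket instead of copying files one by one (a sketch; `mys3` and `bucket` are the alias and bucket names from the steps above):

```shell
# Sync everything under /data into the bucket; only missing or changed files are transferred:
mc mirror /data mys3/bucket/

# Alternatively, a plain recursive copy:
mc cp --recursive /data/ mys3/bucket/
```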
For even faster uploads you may want to connect `mc` locally with your MinIO container (if they're running on the same machine):
# Get the latest `mc` version:
docker pull minio/mc
# First, create a docker network:
docker network create mynet
# Then, add your MinIO server container to it:
docker network connect mynet abc123456def
# Now, run the `mc` container connected to this network:
docker run --network mynet -v "$(pwd)":/data -it --rm --entrypoint=/bin/sh minio/mc
# Inside the container, log in to your MinIO server using the container ID and port:
mc alias set mys3 http://abc123456def:9000/ access_key secret_key
# You may want to copy the `/root/.mc/config.json` file so you can reuse it later
# Test your config:
mc ls mys3
# Upload any files you want:
mc cp /data/my-file.ext mys3/bucket/
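Rather than copying `config.json` around by hand, the `mc` configuration can live in a named Docker volume so the alias survives container restarts (a sketch; the volume name `mc-config` is an arbitrary choice):

```shell
# Create a named volume once:
docker volume create mc-config

# Mount it over /root/.mc so credentials and aliases persist across runs:
docker run --network mynet \
  -v mc-config:/root/.mc \
  -v "$(pwd)":/data \
  -it --rm --entrypoint=/bin/sh minio/mc
```

With this in place, `mc ls mys3` works immediately in every new container without re-running the login step.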