```shell
#!/bin/bash
# Export every file stored in GridFS to the local filesystem,
# recreating the directory structure of the stored filenames.
_host="${1:?Usage: gridfs host db}"
_db="${2:?Usage: gridfs host db}"
while read -r line; do
  # mongofiles list output is tab-separated: filename, then size.
  file=$(echo "$line" | awk -F'\t' '{ print $1 }')
  [[ $file == 'connected to'* ]] && continue
  directory=${file%/*}
  mkdir -p "$directory"
  mongofiles -h "$_host" -db "$_db" get "$file"
done < <(mongofiles -h "$_host" -db "$_db" list)
```
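The script recreates the directory layout using the `${file%/*}` parameter expansion, which strips the shortest trailing `/*` match (i.e. the final path component). A minimal, self-contained illustration:

```shell
# ${var%pattern} removes the shortest suffix matching the pattern.
file="images/2016/photo.png"
directory=${file%/*}   # strips "/photo.png", leaving the parent path
echo "$directory"      # prints: images/2016
```

Note that for a stored name with no `/` at all, the expansion leaves the string unchanged, so `mkdir -p` would create a directory named after the file itself; the script assumes GridFS filenames contain path components.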
For those looking for an answer in 2016: for a recent MongoDB it is useful to use something like this:
```shell
#!/bin/bash
_host="${1:?Usage: gridfs host db}"
_db="${2:?Usage: gridfs host db}"
while read -r line; do
  file=$(echo "$line" | awk -F'\t' '{ print $1 }')
  [[ $file == 'connected to'* ]] && continue
  mongofiles -h "$_host" -d "$_db" --port 3001 get "$file"
done < <(mongofiles -h "$_host" -d "$_db" --port 3001 list)
```
Besides, if you are working with something like Meteor's CollectionFS, you can add `--prefix=cfs_gridfs.images` (or a similar parameter) to the mongofiles command strings.
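As a sketch of what that looks like, assuming a CollectionFS bucket named `cfs_gridfs.images` (substitute your own bucket name), the list call in the loop above would become:

```shell
# List files stored under the cfs_gridfs.images GridFS prefix
# instead of the default "fs" prefix.
mongofiles -h "$_host" -d "$_db" --port 3001 --prefix=cfs_gridfs.images list
```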
Hey guys! I have a NodeJS app that stores all of its data, even images, in a MongoDB server. Now I want to use S3 to store these images (43 GB), so I need to transfer them there.

Whilst I have figured out how to transfer files from a server to S3 using the `s3cmd` tool, I have NO idea how to export my images from the db! I know they are stored in the `fs.files` and `fs.chunks` collections, but I can't figure out how to actually use them as input to `s3cmd`!

I found this script, but I'm not sure how to use it... Any help will be much appreciated!!!
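One approach, sketched under the assumption that both `mongofiles` and a configured `s3cmd` are available (the bucket name below is a placeholder, not from the original thread): first export everything from GridFS into a local directory with a loop like the scripts above, then sync that directory to S3.

```shell
#!/bin/bash
# Sketch: export all GridFS files locally, then sync them to S3.
# "s3://my-bucket/images/" is an example destination; replace it.
_host="localhost"
_db="mydb"
export_dir="./gridfs-export"

mkdir -p "$export_dir"
cd "$export_dir" || exit 1

# Export every GridFS file into the current directory,
# recreating any path components in the stored filenames.
while read -r line; do
  file=$(echo "$line" | awk -F'\t' '{ print $1 }')
  [[ $file == 'connected to'* ]] && continue
  mkdir -p "$(dirname "$file")"
  mongofiles -h "$_host" -d "$_db" get "$file"
done < <(mongofiles -h "$_host" -d "$_db" list)

# Upload the exported tree to S3 (sync is recursive).
s3cmd sync ./ s3://my-bucket/images/
```

You never touch `fs.files` and `fs.chunks` directly; `mongofiles` reassembles the chunks into ordinary files, which is exactly the form `s3cmd` needs.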
Any file with spaces in its name won't be exported. You need to replace $file with "$file".