#!/bin/bash
set -e

# Usage:
#   rsync_parallel.sh [--parallel=N] [rsync args...]
#
# Options:
#   --parallel=N  Use N parallel processes for transfer. Defaults to 10.
#
# Notes:
#   * Requires GNU Parallel
#   * Use with ssh-keys. Lots of password prompts will get very annoying.
#   * Does an itemize-changes first, then chunks the resulting file list and launches N parallel
#     rsyncs to transfer a chunk each.
#   * be a little careful with the options you pass through to rsync. Normal ones will work, you
#     might want to test weird options upfront.
#

if [[ "$1" == --parallel=* ]]; then
	PARALLEL="${1##*=}"
	shift
else
	PARALLEL=10
fi
echo "Using up to $PARALLEL processes for transfer..."

TMPDIR=$(mktemp -d)
trap "rm -rf $TMPDIR" EXIT

echo "Figuring out file list..."
# sorted by size (descending)
rsync $@ --out-format="%l %n" --no-v --dry-run | sort -n -r > $TMPDIR/files.all

# check for nothing-to-do
TOTAL_FILES=$(cat $TMPDIR/files.all | wc -l)
if [ "$TOTAL_FILES" -eq "0" ]; then
	echo "Nothing to transfer :)"
	exit 0
fi

function array_min {
	# return the (index, value) of the minimum element in the array
	IC=($(tr ' ' '\n' <<<$@ | cat -n | sort -k2,2nr | tail -n1))
	echo $((${IC[0]} - 1)) ${IC[1]}
}

echo "Calculating chunks..."
# declare chunk-size array
for ((I = 0 ; I < PARALLEL ; I++ )); do
	CHUNKS["$I"]=0
done

# add each file to the emptiest chunk, so they're as balanced by size as possible
while read FSIZE FPATH; do
	MIN=($(array_min ${CHUNKS[@]}))
	CHUNKS["${MIN[0]}"]=$((${CHUNKS["${MIN[0]}"]} + $FSIZE))
	echo $FPATH >> $TMPDIR/chunk.${MIN[0]}
done < $TMPDIR/files.all

find "$TMPDIR" -type f -name "chunk.*" -printf "\n*** %p ***\n" -exec cat {} \;

echo "Starting transfers..."
find "$TMPDIR" -type f -name "chunk.*" | parallel -j $PARALLEL -t --verbose --progress rsync --files-from={} $@
Hello guys,
I am trying to transfer a bunch of directories using rsync with multithreading, so that it transfers the directories and all their subdirectories in parallel and reduces the transfer time between two remote servers. Can you help me with the code?
As was pointed out in the comments of this related gist, this script is great but the way the files are listed prevents copying of entire directories since a portion of the path for each file is duplicated. Here is an example of a failed output using the above script:
rsync: link_stat "/home/enggen/RAID5/UTK_ITS_test/UTK_ITS_test/strip_primers_out_ITS4_Funrc-5.8S_Funrc_3prime" failed: No such file or directory (2)
rsync: link_stat "/home/enggen/RAID5/UTK_ITS_test/UTK_ITS_test/strip_primers_out_ITS4_Funrc-5.8S_Funrc_3prime/log_strip_primers_20160803_0640PM.txt" failed: No such file or directory (2)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1183) [sender=3.1.0]
building file list ... 0 files to consider
There you can see the directory "UTK_ITS_test" is repeated in the file path, and nothing happens.
I changed line 32 on my system (Ubuntu 14.04) as follows, and now it works:
rsync $@ --out-format="%l %n" --no-v --dry-run -- | sort -n -r | grep -v "sending incremental file list" | sed -r 's@\s\w+[^/]@ @g' > $TMPDIR/files.all
I used "@" as the delimiter for the sed command since the pattern contains slashes, which are the usual delimiter in sed. Please note that sed comes in various flavors, so this may not work as well for you. A real coder can probably see a better, more portable solution here.
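As an aside, sed accepts almost any character as the delimiter for the `s` command, which avoids escaping slashes in paths; a minimal illustration (with a made-up path):

```shell
# '@' as the s-command delimiter: no need to escape '/' in the pattern
printf '%s\n' '/home/user/dir/file' | sed 's@/home/user@~@'
# → ~/dir/file
```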
Edit:
I should point out that this failed to put all of the transferred files in the top-most directory.
Edit2:
Alas, it didn't actually work correctly. Anyone else have an idea?
very neat idea, I have an odd bug preventing me from using it on bsd (osx) however:
samm at samm-imac in ~ ~/git/scripts/parallel_rsync.sh -W -r -m --numeric-ids --progress --update --list-only /Volumes/RAID10/music/ /Volumes/bigdata/music/
Using up to 10 processes for transfer...
Figuring out file list...
Calculating chunks...
/Users/samm/git/scripts/parallel_rsync.sh: line 56: 0 + drwx------: syntax error: operand expected (error token is "-")
find: -printf: unknown primary or operator
So it doesn't like something about this on bsd / osx?
You should put quotes around $@, like so: "$@", in case an argument contains whitespace.
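A minimal sketch of why the quotes matter: unquoted `$@` re-splits the arguments on whitespace, while `"$@"` passes each argument through intact.

```shell
#!/bin/sh
# count_args prints how many arguments it received
count_args() { echo $#; }

set -- "file one" "file two"   # two arguments, each containing a space
count_args $@     # → 4 (each word became a separate argument)
count_args "$@"   # → 2 (original arguments preserved)
```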
@rmoorecpcc That works great. For CentOS 6 you'll need to manually compile a newer coreutils than what the default repos provide (which is 8.4). 8.22 works; I didn't try anything older than that.
I made a small change to use as many threads as possible:
if [[ "$1" == --parallel=* ]]; then
PARALLEL="${1##*=}"
shift
elif [ -f /proc/cpuinfo ]
then
PARALLEL=$( grep processor /proc/cpuinfo | wc -l )
else
PARALLEL=10
fi
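`/proc/cpuinfo` only exists on Linux; a sketch of a more portable variant, falling back to `nproc` (GNU coreutils) or `sysctl` (BSD/macOS) before the original default of 10 — the structure mirrors the snippet above:

```shell
if [[ "$1" == --parallel=* ]]; then
    PARALLEL="${1##*=}"
    shift
elif command -v nproc >/dev/null 2>&1; then
    PARALLEL=$(nproc)                  # GNU coreutils (Linux)
elif command -v sysctl >/dev/null 2>&1 && sysctl -n hw.ncpu >/dev/null 2>&1; then
    PARALLEL=$(sysctl -n hw.ncpu)      # BSD / macOS
else
    PARALLEL=10                        # original default
fi
```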
I get the following error:
./rsync_parallel.sh --parallel=2 -P -a -h -v -r /DataVolume/shares/BOOTCAMP/mycloud.img /DataVolume/shares/WDMyCloud/
Using up to 1 processes for transfer...
Figuring out file list...
Calculating chunks...
*** /tmp/tmp.1AFWstVhH3/chunk.0 ***
mycloud.img
Starting transfers...
./rsync_parallel.sh: line 63: parallel: command not found
Line 57:
echo $FPATH >> $TMPDIR/chunk.${MIN[0]}
must be
echo "$FPATH" >> $TMPDIR/chunk.${MIN[0]}
Otherwise, double spaces in path names will be squeezed.
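A quick demonstration of the squeezing, assuming a path containing a run of spaces:

```shell
FPATH="some  dir/file"   # note the double space
echo $FPATH              # → some dir/file   (word splitting squeezes the run)
echo "$FPATH"            # → some  dir/file  (preserved)
```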
Using split for chunk file generation:
cat $TMPDIR/files.all | cut -d" " -f2- | split -d -a3 -n r/$PARALLEL - $TMPDIR/chunk.
This (`-f2-`) deals with spaces in file names.
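For reference, GNU split's `-n r/N` distributes input lines round-robin into N files (it needs a reasonably recent GNU coreutils, per the note above about CentOS 6); a small illustration:

```shell
cd "$(mktemp -d)"
printf '%s\n' a b c d e > files.txt
# -d: numeric suffixes, -a3: three-digit suffixes, -n r/2: round-robin into 2 files
split -d -a3 -n r/2 files.txt chunk.
cat chunk.000   # lines 1, 3, 5: a c e
cat chunk.001   # lines 2, 4:    b d
```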
The algorithm you use for generating `CHUNKS` which are as balanced as possible in terms of file sizes is good. However, the implementation is slow. For a list of 74,814 files your `while` loop took 289s to complete. My implementation was almost 3.5x faster at just 86s and generated chunks of the same size:
function array_min {
ARR=("$@")
# Default index for min value
min_i=0
# Default min value
min_v=${ARR[$min_i]}
for i in "${!ARR[@]}"; do
v="${ARR[$i]}"
(( v < min_v )) && min_v=$v && min_i=$i
done
echo "${min_i}"
}
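A quick sanity check of the function on a small array — this is a lightly adapted copy of the definition above, with the `&&` chain written as an `if` so it also behaves under `set -e`:

```shell
#!/bin/bash
# array_min: print the index of the smallest element in the arguments
function array_min {
    ARR=("$@")
    min_i=0
    min_v=${ARR[$min_i]}
    for i in "${!ARR[@]}"; do
        v="${ARR[$i]}"
        if (( v < min_v )); then
            min_v=$v
            min_i=$i
        fi
    done
    echo "${min_i}"
}

CHUNKS=(30 5 12)
array_min "${CHUNKS[@]}"   # → 1 (index of the smallest value, 5)
```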
The `while` loop then looks like this (with some progress reporting included):
PROGRESS=0
SECONDS=0
while read FSIZE FPATH; do
PROGRESS=$((PROGRESS+1))
# Original Implementation
#MIN=($(array_min_old ${CHUNKS[@]})); MIN_I=${MIN[0]}
# Nathan's implementation
MIN_I=$(array_min ${CHUNKS[@]})
CHUNKS[${MIN_I}]=$((${CHUNKS[${MIN_I}]} + ${FSIZE}))
echo "${FPATH}" >> "${TMPDIR}/chunk.${MIN_I}"
if ! ((PROGRESS % 5000)); then
>&2 echo "${SECONDS}s: ${PROGRESS} of ${TOTAL_FILES}"
fi
done < "${TMPDIR}/files.all"
echo "${SECONDS}s"
My fork works on Mac and also includes @nathanhaigh's revised size function:
https://gist.github.com/reijovosu/fce3d808bed89d5021ade70223bfc4c3
@reijovosu please add the other fixes from above, such as @michaelletzgus' spacing fix and @t-animal's "$@" quoting fix. Furthermore, please remove the hard-coded path in your script. Thanks for pulling what you have together thus far!
I updated my zsh reimplementation of this script so it should now handle filenames with any funny characters in them. It still starts transferring immediately, without waiting for the full file list to be built first. Comments welcome.
Amazing work @akorn!
Please take a look at: https://github.com/nathanhaigh/parallel-rsync/blob/main/prsync
Thanks, will evaluate this @xuanyuanaosheng.
To increase the speed of the chunk file creation, use the split tool: