
@rcoup
Created April 10, 2013 21:52
Parallel-ise an rsync transfer when you want multiple concurrent transfers happening.
#!/bin/bash
set -e
# Usage:
# rsync_parallel.sh [--parallel=N] [rsync args...]
#
# Options:
# --parallel=N Use N parallel processes for transfer. Defaults to 10.
#
# Notes:
# * Requires GNU Parallel
# * Use with ssh-keys. Lots of password prompts will get very annoying.
# * Does an itemize-changes first, then chunks the resulting file list and launches N parallel
# rsyncs to transfer a chunk each.
# * be a little careful with the options you pass through to rsync. Normal ones will work, you
# might want to test weird options upfront.
#
if [[ "$1" == --parallel=* ]]; then
PARALLEL="${1##*=}"
shift
else
PARALLEL=10
fi
echo "Using up to $PARALLEL processes for transfer..."
TMPDIR=$(mktemp -d)
trap 'rm -rf "$TMPDIR"' EXIT
echo "Figuring out file list..."
# sorted by size (descending)
rsync $@ --out-format="%l %n" --no-v --dry-run | sort -n -r > $TMPDIR/files.all
# check for nothing-to-do
TOTAL_FILES=$(wc -l < "$TMPDIR/files.all")
if [ "$TOTAL_FILES" -eq "0" ]; then
echo "Nothing to transfer :)"
exit 0
fi
function array_min {
# return the (index, value) of the minimum element in the array
IC=($(tr ' ' '\n' <<<$@ | cat -n | sort -k2,2nr | tail -n1))
echo $((${IC[0]} - 1)) ${IC[1]}
}
echo "Calculating chunks..."
# declare chunk-size array
for ((I = 0 ; I < PARALLEL ; I++ )); do
CHUNKS["$I"]=0
done
# add each file to the emptiest chunk, so they're as balanced by size as possible
while read FSIZE FPATH; do
MIN=($(array_min ${CHUNKS[@]}))
CHUNKS["${MIN[0]}"]=$((${CHUNKS["${MIN[0]}"]} + $FSIZE))
echo $FPATH >> $TMPDIR/chunk.${MIN[0]}
done < $TMPDIR/files.all
find "$TMPDIR" -type f -name "chunk.*" -printf "\n*** %p ***\n" -exec cat {} \;
echo "Starting transfers..."
find "$TMPDIR" -type f -name "chunk.*" | parallel -j $PARALLEL -t --verbose --progress rsync --files-from={} $@
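To see the balancing behaviour in isolation, the helper and the greedy loop can be exercised on made-up sizes (a minimal sketch reusing the same code as the script; the sizes are invented for illustration):

```shell
#!/bin/bash
# Miniature of the chunking step: greedily add each file size to the
# currently smallest chunk. Sizes below are made up for illustration.
function array_min {
  # prints "index value" of the minimum element, as in the script above
  IC=($(tr ' ' '\n' <<<$@ | cat -n | sort -k2,2nr | tail -n1))
  echo $((${IC[0]} - 1)) ${IC[1]}
}

CHUNKS=(0 0)
for FSIZE in 50 40 30 20; do
  MIN=($(array_min ${CHUNKS[@]}))
  CHUNKS["${MIN[0]}"]=$((${CHUNKS["${MIN[0]}"]} + FSIZE))
done
echo "${CHUNKS[@]}"   # both chunks end up at 70
```

Whichever chunk receives the first (largest) file, the greedy rule evens things out: both chunks total 70 here.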
@csbogdan

@rmoorecpcc That works great. For CentOS 6 you'll need to compile a newer coreutils manually than what the default repos provide (8.4). 8.22 works; I didn't try anything older than that.

@gerbier

gerbier commented Jul 21, 2017

I made a small change to use as many processes as there are CPU cores:

if [[ "$1" == --parallel=* ]]; then
PARALLEL="${1##*=}"
shift
elif [ -f /proc/cpuinfo ]
then
PARALLEL=$( grep processor /proc/cpuinfo | wc -l )
else
PARALLEL=10
fi
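Where GNU coreutils is available, `nproc` gives the core count more directly than grepping /proc/cpuinfo (a sketch; `nproc` is not used in the gist):

```shell
#!/bin/bash
# Sketch: use nproc for the core count when available, else fall back to 10.
if [[ "$1" == --parallel=* ]]; then
  PARALLEL="${1##*=}"
  shift
elif command -v nproc >/dev/null 2>&1; then
  PARALLEL=$(nproc)
else
  PARALLEL=10
fi
echo "$PARALLEL"
```

Unlike /proc/cpuinfo, this also works on macOS with coreutils installed.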

@oijm17

oijm17 commented Oct 5, 2017

I get the following error:
./rsync_parallel.sh --parallel=2 -P -a -h -v -r /DataVolume/shares/BOOTCAMP/mycloud.img /DataVolume/shares/WDMyCloud/
Using up to 1 processes for transfer...
Figuring out file list...
Calculating chunks...

*** /tmp/tmp.1AFWstVhH3/chunk.0 ***
mycloud.img
Starting transfers...
./rsync_parallel.sh: line 63: parallel: command not found
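That error means GNU Parallel isn't installed (e.g. `apt install parallel` or `yum install parallel`). The script could also fail fast with a clearer message; a sketch of such a guard (the `require` helper is not in the original):

```shell
#!/bin/bash
# Sketch: abort early with a readable message if a required tool is missing.
require() {
  command -v "$1" >/dev/null 2>&1 || { echo "error: $1 is required" >&2; exit 1; }
}

require sort    # in the script you would call: require parallel; require rsync
echo "dependencies ok"
```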

@michaelletzgus

Line 57:

echo $FPATH >> $TMPDIR/chunk.${MIN[0]}

must be

echo "$FPATH" >> $TMPDIR/chunk.${MIN[0]}

Otherwise, consecutive spaces in path names will be squeezed.
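The effect is easy to reproduce: an unquoted expansion goes through word splitting, which collapses runs of whitespace:

```shell
#!/bin/bash
FPATH='dir/my  file.txt'   # note the double space
echo $FPATH                # unquoted: prints "dir/my file.txt" (squeezed)
echo "$FPATH"              # quoted:   prints "dir/my  file.txt" (preserved)
```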

@michaelletzgus

Using split for chunk file generation:

cat $TMPDIR/files.all | cut -d" " -f2- | split -d -a3 -n r/$PARALLEL - $TMPDIR/chunk.

This (-f2-) deals with spaces in file names.
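`split -n r/N` (round-robin mode, GNU coreutils) can be sanity-checked on synthetic input:

```shell
#!/bin/bash
# Distribute 6 lines round-robin across 3 chunk files (GNU coreutils split).
tmp=$(mktemp -d)
seq 6 | split -d -a3 -n r/3 - "$tmp/chunk."
cat "$tmp/chunk.000"   # lines 1 and 4
ls "$tmp"              # chunk.000 chunk.001 chunk.002
rm -rf "$tmp"
```

Note this distributes by file count, not by total bytes, so the chunks are no longer size-balanced the way the original greedy loop makes them.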

@nathanhaigh

The algorithm you use for generating chunks that are as balanced as possible by file size is good. However, the implementation is slow.

For a list of 74,814 files your while loop took 289s to complete. My implementation was almost 3.5x faster at just 86s and generated chunks of the same size:

function array_min {
  ARR=("$@")

  # Default index for min value
  min_i=0

  # Default min value
  min_v=${ARR[$min_i]}

  for i in "${!ARR[@]}"; do
    v="${ARR[$i]}"

    (( v < min_v )) && min_v=$v && min_i=$i
  done

  echo "${min_i}"
}
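A quick check of this pure-bash helper (indices are zero-based):

```shell
#!/bin/bash
# Same helper as above: prints the index of the smallest array element.
function array_min {
  ARR=("$@")
  min_i=0
  min_v=${ARR[$min_i]}
  for i in "${!ARR[@]}"; do
    v="${ARR[$i]}"
    (( v < min_v )) && min_v=$v && min_i=$i
  done
  echo "${min_i}"
}

array_min 40 10 30   # prints 1
array_min 7          # prints 0
```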

The while loop then looks like this (with some progress reporting included):

PROGRESS=0
SECONDS=0
while read FSIZE FPATH; do
  PROGRESS=$((PROGRESS+1))

  # Original Implementation
  #MIN=($(array_min_old ${CHUNKS[@]})); MIN_I=${MIN[0]}
  # Nathan's implementation
  MIN_I=$(array_min ${CHUNKS[@]})

  CHUNKS[${MIN_I}]=$((${CHUNKS[${MIN_I}]} + ${FSIZE}))
  echo "${FPATH}" >> "${TMPDIR}/chunk.${MIN_I}"

  if ! ((PROGRESS % 5000)); then
    >&2 echo "${SECONDS}s: ${PROGRESS} of ${TOTAL_FILES}"
  fi
done < "${TMPDIR}/files.all"
echo "${SECONDS}s"

@reijovosu

reijovosu commented Mar 3, 2020

My fork works on macOS and also includes @nathanhaigh's revised size function:
https://gist.github.com/reijovosu/fce3d808bed89d5021ade70223bfc4c3

@ylluminate

@reijovosu please add the other fixes from above, such as @michaelletzgus' spacing fix and @t-animal's "$@" quoting fix. Also, please remove the hard-coded path in your script. Thanks for pulling together what you have so far!

@akorn

akorn commented May 4, 2020

I updated my zsh reimplementation of this script so it should now handle filenames with any funny characters in them. It still starts transferring immediately, without waiting for the full file list to be built first. Comments welcome.

@ylluminate

Amazing work @akorn!
