@kyle0r
Last active September 20, 2021 02:04
zed scheduled zpool scrub support for MAX_PARALLEL_SCRUBS

Check the inline code docs.

The revision of the script on my system was 41e457da (noted in the inline docs).

At the moment I don't have the time to submit a bug report, a patch, or a commit directly to HEAD.
I did a quick study of the Debian bug reporting process and the reportbug tool
(see https://www.debian.org/Bugs/Reporting).
I will try to find the time to report the improvement and link the maintainers to this gist.

Example of the xargs process pool (process tree) with MAX_PARALLEL_SCRUBS=3:

xargs -L 1 --max-args=1 --max-procs=3 -I{} -- /bin/sh -c $1 zed-scrub.sh {}
 \_ /bin/sh -c $1 zed-scrub.sh zpool scrub -w rpool
 |   \_ zpool scrub -w rpool
 \_ /bin/sh -c $1 zed-scrub.sh zpool scrub -w store1
 |   \_ zpool scrub -w store1
 \_ /bin/sh -c $1 zed-scrub.sh zpool scrub -w store1-backup
     \_ zpool scrub -w store1-backup
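The process-pool pattern above can be sketched in isolation. This is a minimal, hypothetical stand-in: `echo` lines replace the real `zpool scrub -w <pool>` commands, and `sketch` is an arbitrary `$0` label (the real script uses `zed-scrub.sh`). Note that `-I{}` already implies reading one line per invocation, so `-L 1`/`--max-args=1` are omitted here to avoid a GNU xargs warning about conflicting options.

```shell
#!/bin/bash
# Each input line is a complete command; xargs keeps at most 3 running at once.
# sh -c '$1' expands the whole line unquoted, so it word-splits back into a
# command plus arguments in the child shell.
printf '%s\n' 'echo scrub rpool' 'echo scrub store1' 'echo scrub store1-backup' \
  | xargs --max-procs=3 -I{} -- /bin/sh -c '$1' sketch {}
# prints the three "scrub <pool>" lines (order may vary under --max-procs)
```

The single quotes around `$1` are essential: they defer expansion to the child `/bin/sh`, where `$1` is the full input line.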
#!/bin/bash -eu
# the existing logic launches scrubs without regard to how many pools are present
#+ on a system. If a system had 12 pools, then the scrub would run 12 scrubs in
#+ parallel. This can have various negative impacts.
#+ The code for the script is maintained by Debian:
#+ https://salsa.debian.org/zfsonlinux-team/zfs/-/blob/master/debian/tree/zfsutils-linux/usr/lib/zfs-linux/scrub
#+ The revision on my system and the one I modified: 41e457da
# added -w arg to the zpool scrub to ensure the command waits for the scrub to complete.
#+ adding this change on its own would mean the pools would be scrubbed in serial.
#+ this could take a very long time.
#+ The -w arg is also needed for the xargs process pool logic to work as expected.
#+ Without -w arg scrub concurrency would not be maintained at MAX_PARALLEL_SCRUBS.
# swapped sh for bash to enable the use of process substitution <(...)
#+ this avoids piping into a while loop invoking a sub shell and keeps variables
#+ in local scope: https://stackoverflow.com/a/124321/490487
POOLS_TO_SCRUB=
MAX_PARALLEL_SCRUBS=3
# Scrub all healthy pools that are not already scrubbing.
while read -r pool
do
  if ! zpool status "$pool" | grep -q "scrub in progress"
  then
    POOLS_TO_SCRUB="$POOLS_TO_SCRUB$(printf 'zpool scrub -w %s' "$pool")"$'\n'
  fi
done < <(
  zpool list -H -o health,name 2>&1 | \
  awk 'BEGIN {FS="\t"} {if ($1 ~ /^ONLINE/) print $2}'
)
# observing best practice https://unix.stackexchange.com/q/156008/19406
#+ Not sure it helps in this case as command injection is the objective.
#+ I cannot see any high risk in the approach unless $pool contained injected commands,
#+ which seems very unlikely and should be prevented by zpool naming restrictions.
#+ From man zpool-create:
#+ The pool name must begin with a letter, and can only contain alphanumeric characters as well as under‐
#+ score ("_"), dash ("-"), colon (":"), space (" "), and period (".").
if [ -n "$POOLS_TO_SCRUB" ]; then
echo -n "$POOLS_TO_SCRUB" | xargs -L 1 --max-args=1 --max-procs=${MAX_PARALLEL_SCRUBS} -I{} -- /bin/sh -c '$1' zed-scrub.sh {} &
fi
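The header comment's point about process substitution versus piping into a while loop can be shown with a minimal sketch: in bash, the piped form runs the loop in a subshell, so variables assigned inside it are lost, while `< <(...)` keeps the loop in the current shell.

```shell
#!/bin/bash
# Piped form: the while loop runs in a subshell; the counter update is lost.
count=0
printf 'a\nb\nc\n' | while read -r line; do count=$((count+1)); done
echo "piped: $count"        # piped: 0

# Process substitution: the loop runs in the current shell; the counter survives.
count=0
while read -r line; do count=$((count+1)); done < <(printf 'a\nb\nc\n')
echo "substituted: $count"  # substituted: 3
```

This is why the script builds POOLS_TO_SCRUB inside a loop fed by `< <(zpool list ...)` rather than `zpool list ... | while read ...`.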