SaltStack, a system for configuration and orchestration of remote servers, lets you target minions by

  • minion ID: salt 'perma-server-01' test.ping or salt 'perma*' test.ping
  • grains, data specific to a minion, such as CPU architecture or deployment tier: salt -G 'oscodename:buster' test.ping
  • pillars, data assigned to one or more minions centrally, often secrets: salt -I 'foo:bar' test.ping
  • IP address or subnet
  • nodegroups, preset groups of minions defined on the master (examples of these last two follow the list)
  • combinations of the above: salt -C 'perma* and G@tier:prod' test.ping
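
IP/subnet targeting uses the -S flag and nodegroup targeting uses -N; the subnet and nodegroup name below are made up for illustration, and a nodegroup has to be defined in the master configuration before -N will match anything:

salt -S '10.0.0.0/24' test.ping
salt -N 'webservers' test.ping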

As far as I can tell, there is no built-in way to target minions by, for example, whether a given piece of software is installed, or a given service is running. Here's a way to accomplish this; you'll need to install the JSON processor jq.

To find out which machines have a program installed, you could run

salt '*' cmd.run 'which collectd'

To extract a clean list of minion IDs, one per line, get the output in JSON format and pass it to jq for filtering:

salt --out=json --static '*' cmd.run 'which collectd' | \
jq -r '. | to_entries | .[] | select(.value == "/usr/sbin/collectd") | .key'
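
With --static, salt collects all of the returns and prints a single JSON object mapping each minion ID to the command's output, something like this (the IDs here are made up):

{
    "perma-server-01": "/usr/sbin/collectd",
    "perma-server-02": ""
}

to_entries turns that object into a list of key/value pairs, select keeps the pairs whose value is the expected path, and .key prints just the minion IDs.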

To assign a custom grain for later targeting, loop over the minion IDs:

for MINION in \
$(salt --out=json --static '*' cmd.run 'which collectd' | \
jq -r '. | to_entries | .[] | select(.value == "/usr/sbin/collectd") | .key') ; \
do salt "$MINION" grains.set lil:collectd installed ; done

You can then target minions by the new grain:

salt -G 'lil:collectd:installed' test.ping

(I've used this kind of for loop to do things like reboot a series of minions or restart a service on them one at a time, with a sleep in between, but it turns out salt has a --batch-size option that takes a number or a percentage of minions to act on at once; you can also add a number of seconds between jobs with --batch-wait.)
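
For instance, something along these lines should restart collectd on a quarter of the matching minions at a time, waiting ten seconds between batches (the percentage and the wait are arbitrary, and the grain and service names carry over from the examples above):

salt -G 'lil:collectd:installed' --batch-size 25% --batch-wait 10 service.restart collectd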

To set a custom grain based on a function whose output is True or False, replace cmd.run 'which collectd' with, for example, service.status collectd, and replace select(.value == "/usr/sbin/collectd") with select(.value == true).
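
Put together, and assuming you want a grain like lil:collectd set to running (the grain name and value are just illustrations), that looks something like this:

for MINION in \
$(salt --out=json --static '*' service.status collectd | \
jq -r '. | to_entries | .[] | select(.value == true) | .key') ; \
do salt "$MINION" grains.set lil:collectd running ; done

salt -G 'lil:collectd:running' test.ping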
