
@jerodsanto
Created December 13, 2011 16:00
Printing the $PATH with each entry on a separate line
# using ruby:
$ echo $PATH | ruby -e 'STDIN.read.split(":").each { |l| puts l }'
$ echo $PATH | ruby -e 'print STDIN.read.gsub(":", "\n")'
# Using sed: I would expect this to work, but it does not
$ echo $PATH | sed 's/:/\n/g'
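A portable way around the sed surprise (an editor's note, not part of the original gist): tr handles the colon-to-newline translation everywhere, and POSIX sed accepts a literal newline in the replacement when the shell splices one in.

```shell
# tr is the simplest portable fix: translate every ":" into a newline.
echo "$PATH" | tr ':' '\n'

# BSD/macOS sed does not expand \n in the replacement (GNU sed does),
# which is the likely reason the sed attempt above fails there.
# Splicing in a real newline with $'\n' (bash/zsh quoting) works in both:
echo "$PATH" | sed 's/:/\'$'\n''/g'
```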
@gma

gma commented Dec 13, 2011

Not sure what's going on there; I've never tried newlines in sed. No need for sed, though: we can tell the shell that the internal field separator (IFS) is a colon.

$ IFS=":"; for p in $PATH; do echo $p; done

Also try this:

$ IFS=":"; echo $PATH

@jerodsanto
Author

I knew you could loop over whitespace like that, but had no idea about IFS. Awesome.

@jerodsanto
Author

The second one doesn't work, but the first one is miles ahead of my solutions.

@gma

gma commented Dec 13, 2011

Sure, I posted the second so you can see how to get them separated by whitespace. Depending on the context in which you actually want them on separate lines, that might be enough (lots of Unix tools break their arguments on words or lines, so I thought it could be handy).
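For what it's worth, the behavior of that second one-liner is shell-dependent (a guess as to why it appeared broken, since the thread doesn't name a shell):

```shell
# In bash, the unquoted $PATH is split on IFS and echo re-joins the
# resulting words with single spaces:
IFS=":"; echo $PATH

# zsh, by contrast, does not word-split unquoted parameters by default,
# so there the line prints $PATH unchanged, colons and all.
```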

@gerhard

gerhard commented Dec 14, 2011

Some awk love:

echo -n $PATH | awk -v RS=":" "1"
  • set the record separator (RS) to :
  • print every entry

The long form would be:

echo -n $PATH | awk 'BEGIN { RS=":" } { print }'

ps: I only read the tweet now.
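One more variant (an editor's addition, assuming bash): parameter expansion does the substitution in-process, with no pipe at all.

```shell
# Replace every ":" in $PATH with a newline via bash's ${var//pat/rep};
# stashing the newline in a variable sidesteps quoting quirks in the
# replacement text.
nl=$'\n'
printf '%s\n' "${PATH//:/$nl}"
```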

@jerodsanto
Author

@gerhard very nice. I figured there was a way to do it with awk, but my exposure to the tool is limited.

@gerhard

gerhard commented Dec 14, 2011

@sant0sk1 awk is a little gem. Nothing beats it when processing large amounts of text. Single process, no optimizations whatsoever:

DATASET:
33GB (original) > 129MB (extracted)

Records extracted:
12,253,389

TIME:
real    15m37.961s
user    7m43.820s
sys 0m35.540s

That was on 4 SAS drives in a software RAID 5. With a few tweaks, that would become a 5min jobbie. Now imagine running that on SSDs or shared memory : ).

My most recent pride is awk scripts wrapped in shell commands, running as upstart jobs, leveraging redis pipelines to feed graphite with metrics every n seconds. All minitested, sub second for the whole test suite. I'm getting a hard-on just talking about it :D.
