# using ruby:
$ echo $PATH | ruby -e 'STDIN.read.split(":").each { |l| puts l }'
$ echo $PATH | ruby -e 'print STDIN.read.gsub(":", "\n")'
# Using sed: I would expect this to work, but it does not
$ echo $PATH | sed 's/:/\n/g'
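The sed failure is most likely a portability issue: GNU sed expands `\n` in the replacement text, but BSD/macOS sed does not. A sketch of a portable alternative, swapping sed for tr:

```shell
# tr translates characters one-for-one, so it works on any POSIX system:
# every ':' becomes a newline
echo "$PATH" | tr ':' '\n'
```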
I knew you could loop over whitespace like that, but had no idea about IFS. Awesome.
The second one doesn't work, but the first one is miles ahead of my solutions.
Sure, I posted the second so you can see how to get them separated by whitespace. Depending on the context in which you actually want them on separate lines, that might be enough (lots of Unix tools break their arguments on words or lines, so I thought it could be handy).
Some awk love:
echo -n $PATH | awk -v RS=":" "1"
- set the record separator (RS) to :
- print every entry
The long form would be:
echo -n $PATH | awk 'BEGIN { RS=":" } { print }'
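A side note on why the `-n` matters here: without it, echo appends a newline that awk treats as part of the last record, so you get a stray blank-looking line. A quick check (`PATH_DEMO` is an illustrative variable, not from the thread):

```shell
PATH_DEMO="/usr/bin:/bin:/usr/sbin"
# "1" is a pattern that is always true; awk's default action is { print },
# so each colon-separated record is printed on its own line
echo -n "$PATH_DEMO" | awk -v RS=":" "1"
# /usr/bin
# /bin
# /usr/sbin
```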
ps: I only read the tweet now.
@gerhard very nice. I figured there was a way to do it with awk, but my exposure to the tool is limited.
@sant0sk1 awk is a little gem. Nothing beats it when processing large amounts of text. Single process, no optimizations whatsoever:
DATASET:
33GB (original) > 129MB (extracted)
Records extracted:
12,253,389
TIME:
real 15m37.961s
user 7m43.820s
sys 0m35.540s
That was on 4 SAS drives in a software RAID5. With a few tweaks, that would become a 5min jobbie. Now imagine running that on SSDs or shared memory : ).
My most recent pride is awk scripts wrapped in shell commands, running as upstart jobs, leveraging redis pipelines to feed graphite with metrics every n seconds. All minitested, sub-second for the whole test suite. I'm getting a hard-on just talking about it :D.
Not sure what's going on there; I've never tried newlines in sed. No need for sed, though: we can tell the shell that the internal field separator is a colon.
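A minimal sketch of that IFS approach, run in a subshell so the changed IFS doesn't leak into the rest of the session:

```shell
# Setting IFS=: makes the shell split the unquoted $PATH on colons
# instead of whitespace
(IFS=:
for dir in $PATH; do
  printf '%s\n' "$dir"
done)
```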
Also try this: