@lunacookies
Last active January 22, 2020 02:53

An Idea I Had On Shells

Hi, everyone! I recently found out about the performance benefits of keeping things (specifically command-line utilities) in one binary (groundbreaking, I know), and thought they might have some interesting applications in shell development. Don’t expect anything amazing or particularly coherent, though -- my ideas are not very well-formed at the moment.

GNU’s find and grep

Let’s say you are looking for {files,directories} named rust. Well, you might start by getting an overview of what you’re dealing with:

$ find .

Instinctively, you add a pipe to grep to filter the output:

$ find . | grep 'rust'

‘Ah wait, this shows all entries that contain rust somewhere in their name, rather than rust being their entire name -- easily fixed by some regex.’

$ find . | grep '\brust\b'

‘But wait!’ you cry to yourself, ‘find has a built-in method to filter its output.’

$ find . -name 'rust'

‘Obviously this is much faster than piping through grep,’ you proclaim, ‘because all the searching occurs inside the find process, rather than incurring the overhead of piping find’s output through grep.’


When I first thought about creating a shell of my own, I was under the same impression as the protagonist of the (very bad) narrative above. However, it turns out that this is just not true. Here are the results from testing with hyperfine, a command-line benchmarking utility:

Note that all hyperfine commands were run with --min-runs 100 and --warmup 1; these flags are elided from the transcripts below.
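For example, the first benchmark’s full invocation would look like this:

$ hyperfine --min-runs 100 --warmup 1 "find . | grep '\brust\b'" "find . -name 'rust'"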

$ hyperfine "find . | grep '\brust\b'" "find . -name 'rust'"
Benchmark #1: find . | grep '\brust\b'
  Time (mean ± σ):     507.5 ms ±  44.6 ms    [User: 107.9 ms, System: 397.3 ms]
  Range (min … max):   452.3 ms … 659.0 ms    100 runs

Benchmark #2: find . -name 'rust'
  Time (mean ± σ):     509.9 ms ±  59.8 ms    [User: 112.6 ms, System: 387.1 ms]
  Range (min … max):   450.3 ms … 776.8 ms    100 runs

Summary
  'find . | grep '\brust\b'' ran
    1.00 ± 0.15 times faster than 'find . -name 'rust''

The two commands ran in what is effectively the same time.

Maybe we should try replacing the venerable find with fd, a newer alternative?

$ hyperfine "fd | grep '\brust\b'" "fd 'rust'"
Benchmark #1: fd | grep '\brust\b'
  Time (mean ± σ):     152.4 ms ±  30.8 ms    [User: 231.9 ms, System: 239.8 ms]
  Range (min … max):    98.7 ms … 266.5 ms    100 runs

Benchmark #2: fd 'rust'
  Time (mean ± σ):     117.6 ms ±  17.3 ms    [User: 188.0 ms, System: 180.9 ms]
  Range (min … max):    92.7 ms … 178.4 ms    100 runs

Summary
  'fd 'rust'' ran
    1.30 ± 0.32 times faster than 'fd | grep '\brust\b''

This, on the other hand, shows the ‘native’ solution performing thirty percent faster on average (keep in mind that this example exhibits quite a large spread, though). Maybe this is a sign of find having an inefficient implementation of filtering by name, but that seems extremely unlikely for such a venerable program. Or, just maybe, the protagonist from before was onto something?

Plan 9’s walk and sor

In fact, there is a more concrete example of the performance benefits of stuffing functionality into one program rather than splitting it up. A GitHub repo named ‘walk’ under Google’s GitHub account is home to the utility programs walk and sor. The README states that the pair was originally written for Plan 9. It also gives an example of a specific find invocation:

$ find . -type f -name '*foo*'

(this returns all entries that are files and whose name contains foo), and its equivalent using walk and sor:

$ walk | grep foo | sor 'test -f'

walk recursively lists all the entries in a directory (defaulting to the working directory), grep keeps only the lines that contain foo, and sor keeps only the lines for which a given predicate exits with status 0. To me, at least, this seems much easier to write and remember, and because the functionality is split across multiple composable programs, it feels more ‘Unixy’. The same pieces also recombine for entirely different queries, as shown below. And that’s not surprising either -- Plan 9 was worked on by some of the people who first wrote Unix.
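For instance, this (hypothetical) pipeline keeps only executable files whose names end in .sh:

$ walk | grep '\.sh$' | sor 'test -x'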

Note that all performance measurements in the next paragraph are from google/walk’s README.

However, this added ease of use and sophistication comes at a cost -- although walk alone performs almost forty percent faster than find, comparing find /usr -type f with walk /usr | sor 'test -f' shows find performing circa 133 times faster. Yes, you read that right. Whether this is down to the overhead of the pipes or to sor shelling out to test for every line, it is nonetheless a demonstration that keeping more functionality in one binary can improve performance. The README says it best:

find implements its predicates in-process, making them orders of magnitude faster than sor
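To see why, here is a rough sketch of what sor has to do for every line of input (an approximation in plain shell -- I’m assuming sor spawns a shell per line, as the README implies, without having checked its source):

while IFS= read -r line; do
  # one fork+exec of a shell (plus test) per line -- exactly the
  # overhead that find's in-process -type f check avoids
  sh -c 'test -f "$1"' sh "$line" && printf '%s\n' "$line"
done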

So how does this apply to shells?

My idea is to implement as many commands as possible as builtins of the shell, especially common LEGO-like programs like grep, whether that means creating these builtins from scratch or simply linking in library-ified versions of existing utilities. Although my ideas are very vague at the moment (which is why I decided to write them down), ideally this would result in some small, composable utilities that can be used in ways similar to walk and sor from both inside and outside of this hypothetical shell (minus the performance benefits when used outside, of course). When piped together inside the shell, these programs would maintain performance resembling that of a single program with all the features needed to accomplish the task at hand. (This is where the walk & sor versus find comparison comes in handy.) The sketch below illustrates the effect.
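As a crude illustration, here is the earlier filter written twice: once with an external grep process, and once entirely with shell builtins, so that the matching happens inside the shell process itself (a plain-shell sketch of the effect -- a real shell would link the matching code in natively):

# external process: one extra fork+exec plus a pipe
find . | grep '\brust\b'

# builtin-only filtering, mimicking find . -name 'rust':
# the match runs inside the shell process
find . | while IFS= read -r path; do
  case ${path##*/} in   # ${path##*/} strips everything up to the last slash
    rust) printf '%s\n' "$path" ;;
  esac
done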

This kind of architecture would also allow structured data to be passed between programs, perhaps by using generics to specify how certain programs can accept a variety of input types.
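Shells like nushell (suggested in the comment below) already experiment with this: commands pass typed tables rather than raw byte streams, so a filter can compare against a real size column instead of re-parsing text. A pipeline in nushell’s syntax looks roughly like this:

ls | where size > 1mb | sort-by modified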


Sorry for my rambling. I hope that my ideas aren’t completely worthless and were worth writing down :)

@shadiakiki1986

Check nushell and ion
