Basic introduction to using forks in Dyalog APL

Forks: Spoon fed

This is just a short article explaining how forks work in Dyalog APL. Some basic knowledge of APL is assumed, but not much more than can be attained by browsing an introduction to the language.

I should say that this will be opinionated. It’s not meant to declare what is but rather my take on what is. Feel free to read “as I see it” before every sentence. Further, this is meant for beginners. It attempts to present a framework for thinking about this tool and not as a historical reference on the development of it. It’s similar to learning a Chinese character and thinking “that looks like a mouse!” Even if that’s not the origin of the character, if it helps you remember it then it’s worth latching onto.

Let’s get to it.

First contact

Functions in APL take one or two arguments. With one argument, it’s called ⍵ and appears on the right of the function; with two, the left and right arguments are called ⍺ and ⍵ respectively and appear on the left and right sides of the function.


1 × 2

A fork similarly has a left and right side, called tines, which are functions; the middle function (tine) is applied to the outer tines.


⌈ × ⌊

Easy. Or is it? How do you apply one function to another? What we’re missing is arguments to our fork. We’ve already said that the middle tine takes the outer tines as arguments but we should have said it takes the values of the outer tines as arguments. To make a value out of a tine we need to pass arguments to it. But how do we do that?

The arguments to the fork as a whole become arguments to each of the tines first.  The values of
that application become the arguments to the middle tine.

This is where the term “fork” comes from. The arguments are applied to each tine independently first and then merged through the middle tine.

Still, how many arguments does the fork take? It can take either one or two arguments just as any other function and the arguments are applied as just described.


(⌈ × ⌊) 3.9

This fork takes one argument. We take the ceiling of that argument and then the floor separately and then multiply the two to get 12 as a result.
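As a mental model (not how the interpreter actually works), a monadic fork can be sketched in Python. The `fork` helper and its names are hypothetical, introduced just for illustration:

```python
import math

def fork(f, g, h):
    """Hypothetical model of a monadic APL 3-train: each outer tine
    receives the fork's argument, and the middle tine merges the two
    results."""
    return lambda omega: g(f(omega), h(omega))

# (⌈ × ⌊) 3.9  →  (⌈3.9) × (⌊3.9)  →  4 × 3
ceil_times_floor = fork(math.ceil, lambda a, b: a * b, math.floor)
print(ceil_times_floor(3.9))  # 12
```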

Note that the left tine can also be a constant, as in the following:

(1 + ⌈) 3.9

Further, you can string together forks.

(⌊ × 1 + ⌈) 3.9

Remembering that forks have three tines, the way this works is by associating to the right. So you first run the 1 + ⌈ fork and then that becomes the right tine of the next fork to its left. The left tine of that next fork takes the original argument to produce a value merged by the next fork’s middle tine.

It’s worth taking a second to let that sink in. The tines of a single fork are evaluated independently but a sequence of forks are strung together via the right tine.
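To see the right association concretely, here is a hedged step-by-step Python trace of (⌊ × 1 + ⌈) 3.9; the variable names are mine, not APL’s:

```python
import math

omega = 3.9

# Rightmost fork first: (1 + ⌈) 3.9, where the constant 1 acts as a tine.
right = 1 + math.ceil(omega)        # 1 + 4  →  5

# That value becomes the right-tine result of the next fork to its left,
# whose left tine ⌊ also receives the original argument.
result = math.floor(omega) * right  # 3 × 5  →  15
print(result)  # 15
```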

Such sequences are called “trains”.

Dyadic forks

If you’re like me, this is where it starts to get confusing. It’s easy to lose sight of the rule above because you have left and right arguments to the fork which become left and right arguments to the outer tines whose results are left and right arguments to the middle tine.


2.1 (⌈ × ⌊) 3.4

Both args are applied to both tines. So we have the max of 2.1 and 3.4 times the min of 2.1 and 3.4. What this is not is the ceiling of 2.1 times the floor of 3.4.

These args need not be scalars.


2 3 1 (⌈ - ⌊) 3 1 5

This takes the element-wise maximum and subtracts from it the element-wise minimum, resulting in 1 2 4.
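A dyadic fork can be modeled the same way. In this Python sketch, the helper names `dyadic_fork` and `each2` are hypothetical, and `each2` stands in for the scalar extension APL performs automatically:

```python
def dyadic_fork(f, g, h):
    """Hypothetical model of a dyadic fork: both arguments go to each
    outer tine; the middle tine merges the two results."""
    return lambda alpha, omega: g(f(alpha, omega), h(alpha, omega))

def each2(fn):
    """Apply a dyadic function element-wise over two equal-length
    lists, as APL scalar functions do automatically."""
    return lambda a, w: [fn(x, y) for x, y in zip(a, w)]

# 2 3 1 (⌈ - ⌊) 3 1 5: element-wise max minus element-wise min
max_minus_min = dyadic_fork(each2(max),
                            each2(lambda x, y: x - y),
                            each2(min))
print(max_minus_min([2, 3, 1], [3, 1, 5]))  # [1, 2, 4]
```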

Things get hairy

In addition to chaining forks, you get one free monadic function to apply to the result of the fork. For example, this calculates the Manhattan distance between two vectors:

2 3 1 (+/⌈-⌊) 3 1 5
Here you apply the fork taking the difference of the max and the min (which amounts to the absolute value of the difference) and then sum over the elements of the resultant vector.
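The “one free monadic function” can be modeled as an ordinary function applied atop the fork’s result. This sketch repeats the hypothetical `dyadic_fork`/`each2` helpers from before so that it stands alone:

```python
def dyadic_fork(f, g, h):
    # hypothetical model: both arguments go to each outer tine;
    # the middle tine merges the two results
    return lambda a, w: g(f(a, w), h(a, w))

def each2(fn):
    # element-wise application over two equal-length lists
    return lambda a, w: [fn(x, y) for x, y in zip(a, w)]

# The fork (⌈ - ⌊) gives the element-wise absolute difference;
# +/ (sum) is the one free monadic function applied atop it.
abs_diff = dyadic_fork(each2(max), each2(lambda x, y: x - y), each2(min))
manhattan = lambda a, w: sum(abs_diff(a, w))
print(manhattan([2, 3, 1], [3, 1, 5]))  # 1 + 2 + 4  →  7
```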

The following does not work:

2 3 1 (+/ | -) 3 1 5
Why not? Well the interpreter sees three functions in a row and thinks “fork!”. This seems a shame because you could naïvely read this as applying two monadic functions to the result of a dyadic. But the interpreter can’t know that you didn’t mean a fork without deeper insight into precisely what you’re attempting.

What you need to do is to trick the interpreter into seeing only two functions instead of three using beside.

2 3 1 (+/∘| -) 3 1 5
Here the beside operator binds more tightly than sequential application. That is, +/∘| is turned into a single function first, before it’s applied to the result of applying -.

This also explains why you only have one free monad to apply to the result of a fork. If you try to apply two then the interpreter assumes this is a chain of forks. If that’s not what you want you need to use similar tricks to make the best use of your one free monad.

The careful reader may have noticed that this last example is not a fork, but the same principle of the one free monad applies to single primitives as well. You get one for free because three functions in a row would be interpreted as a fork. From a slightly higher perspective, both a fork as a unit and a primitive act similarly: they’re both functions, and both can be used as constituents of more complicated constructs such as longer trains.

Remember above when we said 2.1 (⌈ × ⌊) 3.4 is not the ceiling of 2.1 times the floor of 3.4? What if we wanted to do that? You could use left tack (⊣) and right tack (⊢) like so:

2.1 ((⌈⊣) × (⌊⊢)) 3.4
If you think about it, the parens are absolutely necessary. How else would it know where the outer tines start and end? Also notice here that we’re using our one free monad to apply to the results of the dyadic tack functions. The tack functions are actively ignoring one arg or another.
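In the same hypothetical Python model, the tacks are just argument selectors, and the free monad on each tine is ordinary composition. One possible APL spelling of this fork is ((⌈⊣) × (⌊⊢)):

```python
import math

def dyadic_fork(f, g, h):
    # hypothetical model: both arguments go to each outer tine
    return lambda a, w: g(f(a, w), h(a, w))

left_tack = lambda a, w: a    # ⊣ ignores its right argument
right_tack = lambda a, w: w   # ⊢ ignores its left argument

# ceiling of the left argument times floor of the right argument
f = dyadic_fork(lambda a, w: math.ceil(left_tack(a, w)),
                lambda x, y: x * y,
                lambda a, w: math.floor(right_tack(a, w)))
print(f(2.1, 3.4))  # ⌈2.1 × ⌊3.4  →  3 × 3  →  9
```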

Here’s a sillier example:

2.1 (⊣ × ⊢) 3.4
This is a perfectly valid train, but completely unnecessary. If the left arg ends up the left arg of the middle tine and similarly for the right arg, then you could just pass them directly.

Diving in the deep end

SPOILER ALERT: This is my solution to day 7 of the 2018 Dyalog practice problems.


Cool, right? But is it readable? Why not this?

{0=⍺:⍵ ⋄ ((×-⍺)×≢⍵)↑((×-⍺)×(≢⍵)-|⍺)↑(-⍺)⌽⍵}

or even

{0=⍺:⍵ ⋄ a←-⍺ ⋄ ((×a)×≢⍵)↑((×a)×(≢⍵)-|a)↑a⌽⍵}

All three are viable solutions and essentially the same idea. As you get more advanced you’ll be able to assess things like performance implications of your choice. But when you’re just starting out I’d argue that it’s important to see these as each equally valid before attempting to ask questions like “Can I make this shorter?”

Of course pushing the limits is also a great way to test your understanding. Just know that there’s a lot of trial and error down that road and that’s okay.
