@sleexyz
Last active May 11, 2016 08:26
Context-Free live-coding

A Comparison of Interfaces: Programming Languages vs. Musical Instruments

As interfaces, musical instruments are simple in their individual components, yet are evidently capable of much improvisatory, creative realtime expression by a human. On the other hand, programming languages are complex interfaces that are infinitely more expressive when "integrated" over time, but perhaps not as capable of realtime creative expression.

Is this an inevitable tradeoff?

As interfaces, how are instruments different from live-coding?

How can we use these differences to make programming more expressive?

From Locally Context-Free Interfaces...

Complexity ≈ Complecity ≈ Context-sensitivity

Instruments are locally context-free interfaces; one does not need to keep track of one's state to just make a sound. That is to say, individual keys on a piano are not complected, or intertwined; one key's state does not affect another key's state.

As interfaces, programming languages are more locally context-sensitive: a misplaced variable will fail the typechecker, and a forgotten character will result in a syntax error. The direct interface of a programming language is somewhat complected; to spell out a word, one key's state depends on a history of other keys' states.
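This contrast can be sketched in code. Below is a minimal, illustrative model (all names are hypothetical): a context-free interface maps each input to an output independently, while a context-sensitive one must validate each input against the whole history:

```python
# Context-free interface: each key maps to a sound on its own,
# like a piano key. No other key's state is consulted.
def play_key(key: str) -> str:
    return f"sound({key})"

# Context-sensitive interface: whether a character is acceptable
# depends on the history of characters typed before it.
def type_char(history: str, char: str) -> str:
    program = history + char
    depth = 0  # stand-in for a syntax check: ")" must never outrun "("
    for c in program:
        depth += {"(": 1, ")": -1}.get(c, 0)
        if depth < 0:
            raise SyntaxError(f"unexpected ')' in {program!r}")
    return program

play_key("C#")          # always succeeds, regardless of what came before
type_char("(f x", ")")  # succeeds only because the history permits it
```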

... To Emergent Context-Sensitive Meta-Interfaces

But what happens to the context-sensitivity of our interfaces when we go from the perspective of the interface at a given moment to the interface integrated over time?


Instruments as interfaces over time are indeed context-sensitive! Things sound good relative to what was played before and what is played at the same time. One can think of harmony, rhythm, and structure as temporal contexts.

However, this context-sensitivity is less a characteristic of the direct interface than of emergent meta-interfaces: higher-level interfaces that emerge out of nonlinear interactions in the combinatorial expression of a lower-level interface. For example, take an arpeggio played on a piano; it constitutes both a sequence of low-level actions and a singular high-level action in its own right. All of a sudden there are higher-level aesthetic rules about what can be played at a given moment.
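As a sketch of this idea (names hypothetical), a meta-interface like an arpeggio can be modeled as one high-level action that expands into a sequence of low-level key presses:

```python
# Low-level interface: one key press at a time.
def press(key: str) -> str:
    return f"note({key})"

# Emergent meta-interface: an arpeggio is a single high-level action
# that denotes an ordered sequence of low-level presses.
def arpeggio(chord: list[str]) -> list[str]:
    return [press(k) for k in chord]

arpeggio(["C", "E", "G"])  # one action, three presses
```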

Programming languages over time are also very context-sensitive. Code turns into codebases, and at some point further useful expression becomes intractable.


So what's different here between instruments and programming languages? Can we quantify/asymptotically analyze the context-sensitivity of the two interfaces over time?

The difference between the two is that the combinatorial interaction in instrument expression is constant-bounded by the physical constraints of the system (e.g., pianists only have ten fingers, quieter sounds get masked, etc.). This means the context-sensitivity of instrument expression is constant-bounded; the complecity of actions in a given interval of time can only be so high.

In programming, because code is non-transient, code grows and grows, and the interaction context of previous actions explodes combinatorially. In practice, this means diminishing returns until a refactor, in which one rewrites in terms of higher abstractions, a.k.a. meta-interfaces. If the right level of abstraction is used every time, context-sensitivity grows at best linearly.
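A toy model of this asymptotic difference (the numbers are illustrative, not measured): if every pair of complected elements can interact, potential interactions grow quadratically with the number of elements, while an instrument's interaction context is capped by a physical constant:

```python
def pairwise_interactions(n: int) -> int:
    """Potential pairwise interactions among n complected elements."""
    return n * (n - 1) // 2

# A pianist's simultaneous context is capped (e.g. ten fingers), so
# interactions stay constant-bounded no matter how long one plays:
FINGERS = 10
instrument_bound = pairwise_interactions(FINGERS)  # 45, forever

# A codebase keeps accumulating elements, so the context explodes:
for n in (10, 100, 1000):
    print(n, pairwise_interactions(n))  # 45, 4950, 499500
```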

Transience

Instrument expression is transient. A chord played on a piano lasts only for the duration it is held. As mentioned earlier, transience forces instrument expression to be constant-bounded in context-sensitivity, as the context can only grow as large as, for example, one has fingers.

Code expression is more permanent; code more or less increases monotonically in complexity until it is unwieldy, and then it is refactored: reduced to its core abstract representation. This process is repeated until at some point the codebase is rewritten from scratch, or the project becomes irrelevant. Another way of looking at this is that code is also transient, but given a much longer time-scale.
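One way to model transience versus permanence (a toy sketch): transient expression behaves like a bounded buffer where old events fall out of context, while code behaves like an append-only log where every event stays:

```python
from collections import deque

# Transient expression: old events fall out of the context window,
# so whatever is played next only relates to a bounded context.
instrument_context = deque(maxlen=10)
for event in range(1000):
    instrument_context.append(event)
# still only 10 events in context after 1000 actions

# Permanent expression: every action stays in context indefinitely,
# until an explicit refactor (or rewrite) compresses it.
codebase_context = []
for event in range(1000):
    codebase_context.append(event)
# all 1000 events remain in context
```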

Forgivingness

If one makes a mistake while improvising on an instrument, the mistake only lasts so long in consciousness before it is forgotten; instrument expression is forgiving. Compare this to a bug in code: a bug stays in the codebase until it is explicitly discovered and removed; code expression is sticky.

Feedback

Instrument expression is immediately perceived by the player; the time required from thought to motor actuation to sound detection is at the millisecond level. This feedback loop is even short-circuited when one takes into account muscle memory; improvised music is not thought out, but played out.

Code expression is not necessarily immediately perceived by the programmer. In writing traditional software, a bug might take years to surface. Even when writing code in the REPL, it takes time to type out an expression and evaluate it. Code is thought out.

Wild concluding questions and hypotheses:

Can we make programming interfaces more like musical instruments?

i.e. programming interfaces that are:

  • locally as context-free as possible
  • capable of emergent, constant-bounded context-sensitivity
  • transient (at the perceptual time-scale) and therefore forgiving
  • quick to express and perceive: more visceral, played out as opposed to thought out

Do the following make some sense?

Context-free ≈ visceral ≈ modular ≈ simple

Context-specific ≈ linguistic ≈ complected ≈ complex

possible experiment?

Semantics:

  • transient
  • continuous
  • L-system
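As a sketch of what transient, continuous, L-system-based semantics might rewrite, here is the classic algae L-system (the choice of rules is illustrative, not a proposal):

```python
# Minimal L-system: rewrite every symbol in parallel each generation.
# These rules are Lindenmayer's original algae example.
RULES = {"A": "AB", "B": "A"}

def step(state: str) -> str:
    return "".join(RULES.get(c, c) for c in state)

state = "A"
for _ in range(5):
    state = step(state)
print(state)  # "ABAABABAABAAB"; lengths grow as Fibonacci: 1, 2, 3, 5, 8, 13
```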

Syntax/Interface:

  • something tactile?
  • something where syntax errors are impossible
  • midi-fighter?

beyond live-coding??

  • Liveness in interfaces as a spectrum; can we have more liveness in everyday development?
  • WRT immediate expressivity, context-free DSLs > context-sensitive DSLs
  • context-sensitivity as a spectrum?
  • What if we traded asymptotically constant context-sensitivity for asymptotically linear context-sensitivity?
  • In reference to the above, can we algorithmically optimize programming itself as a process?

Hypothesis: context-free is scalable. Context-freeing is abstracting

  • Denotational semantics are good for context-freeness!
  • Expressions are more context-free than statements!
  • abstraction = controlling combinatorial explosion
  • ideal: maximally context-free DSLs all the way down (with slices of context in between)?
  • from interfaces to meta-interfaces to meta-meta-interfaces: DSLs all the way down
  • adjunctions?
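As a small illustration of the claim that expressions are more context-free than statements (a toy comparison): an expression's value is determined by its parts alone, while a statement's effect depends on the mutable state accumulated by the statements before it:

```python
# Expression-style: the meaning depends only on the parts; the same
# sub-expression denotes the same value anywhere it appears.
total = sum(x * x for x in range(5))

# Statement-style: each line's effect depends on the state left
# behind by the lines before it; reordering them changes the result.
acc = 0
for x in range(5):
    acc += x * x

# Both compute 0 + 1 + 4 + 9 + 16, but only the expression can be
# moved or substituted freely without consulting its surroundings.
```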
s/o to tom and barak 4 convo