

@joinr
Created May 14, 2024 06:50
notes on "Embracing Common Lisp in the Modern World"

[Since OP deleted the original post, I ported my response from https://www.reddit.com/r/Clojure/comments/1afviv4/embracing_common_lisp_in_the_modern_world/kod6r0d/?context=3]

I saw this pop up on the lisp subreddit and concurred with some of the criticisms of the claims. I have not watched the presentation, but based my remarks on the slide contents.

10.1 Compiled Code Performance

CL implementations compile to machine code, often more CPU efficient.

Especially true for numeric and CPU-intensive tasks

I don't really buy this per se.

The JVM does compilation to machine code as well via the HotSpot JIT. Primitive value types exist. They need to get structs in for generic control of heterogeneous memory layout (if the Democratic Order of Planets ever passes approval of Project Valhalla... it's been years now, but it's always just around the corner), but primitive numerics on the JVM appear competitive. I think there are some lower-level optimizations that e.g. C/C++ (and SBCL with SIMD intrinsics) can reach that might edge out the JVM, but then again maybe the emerging Vector API makes a difference. I think in most cases, even on the JVM, we are delegating to optimized native libs like MKL or another BLAS implementation (for best-in-class performance).
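
As a rough illustration of what primitive-hinted Clojure looks like on this front (the function name is mine, not from the slides), something like the following compiles down to an unboxed loop under HotSpot:

```clojure
;; A minimal sketch: summing a primitive double array with type hints,
;; which HotSpot can JIT into a tight, box-free loop.
(set! *unchecked-math* :warn-on-boxed)

(defn sum-doubles ^double [^doubles xs]
  (let [n (alength xs)]
    (loop [i 0 acc 0.0]
      (if (< i n)
        (recur (unchecked-inc i) (+ acc (aget xs i)))
        acc))))

(comment
  (sum-doubles (double-array (range 1e6))))
```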

I don't know if SBCL got around to it, but at least circa 2019 make-array was only compatible with primitive element types, not structs (e.g. no (SIMPLE-ARRAY SOME-STRUCT (100)); you get auto promotion/boxing to (SIMPLE-VECTOR 100) instead). I want to say I saw something in a recent SBCL release about this, but I couldn't find it and may have been hallucinating. No idea if any other implementations allow it (they didn't seem to, according to the community, circa 2019).

I would like to see reproducible benchmarks highlighting the claims here though, out of personal interest.

10.4 Garbage Collection

CL offers more tunable garbage collection strategies.

JVM’s collector optimized for long-running processes but can introduce latency.

I don't think this jibes with the variety of GCs available on the JVM, particularly when compared to the 90's-2k era tech in CL land (from my perspective, and from responses to this topic in the lisp subreddit). The JVM has low-latency collectors, high-throughput parallel, concurrent / low-pause, massive-heap, etc., all with too many tuning options to mention. Newer GCs can also be configured to more liberally release heap back to the OS instead of being fat babies for the duration of the process.
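
As a sketch of how little ceremony collector selection takes on the JVM (the alias names below are hypothetical; the flags are standard HotSpot options), a deps.edn fragment might look like:

```clojure
;; Hypothetical deps.edn aliases selecting different stock HotSpot collectors.
{:aliases
 {:low-latency {:jvm-opts ["-XX:+UseZGC"]}                ; concurrent, low-pause
  :throughput  {:jvm-opts ["-XX:+UseParallelGC"]}         ; high-throughput, parallel
  :big-heap-g1 {:jvm-opts ["-XX:+UseG1GC"
                           "-XX:MaxGCPauseMillis=50"]}}}  ; pause-time target
```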

10.3 Startup Time

Faster startup times in CL compared to JVM.

There's some interesting research on lazy-var initialization (both legacy and in-progress) that may reduce this gap. The problem is more Clojure than the JVM. Still, if startup cost is amortized over a long-running process... who cares? If not, maybe take the performance hit and script it with babashka, or try to native-image it with Graal. As a baseline, though, I think baking a binary executable from save-lisp-and-die in CL appears to be superior compared to e.g. a vanilla Clojure uberjar.
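
For the scripting case, a minimal babashka sketch (the file name is hypothetical) is just a shebang away:

```clojure
#!/usr/bin/env bb
;; Hypothetical script file, e.g. hello.clj: babashka interprets this with
;; near-instant startup, sidestepping JVM/Clojure load time for scripting work.
(println "hello from babashka")
```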

10.5 Tail Call Optimization

CL supports efficient tail recursion in some implementations.

Clojure has recur, but JVM support varies.

TCO is implicitly supported in CL implementations during compilation, but it's not in the standard (so YMMV). I don't get the TCO fetishism (it is particularly pervasive in the mutual recursion crowd). Coming from TCO langs (including some CL implementations), loop/recur (or even just recur from a function body) has been not only adequate, but arguably better semantically, since tail calls are explicit and enforced at the language level. I find that to be practical. Maybe there is a problem domain where the limitation actually matters in practice; I have not run across one to date.
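
A minimal Clojure sketch of what I mean by explicit/enforced tail calls (the names are illustrative):

```clojure
;; recur is explicit and compiler-enforced to be in tail position,
;; so iteration is constant-stack without relying on TCO.
(defn count-down [n]
  (loop [i n acc 0]
    (if (zero? i)
      acc
      (recur (dec i) (inc acc)))))

;; A self-call in non-tail position still consumes stack:
;; (defn bad-count [n] (if (zero? n) 0 (inc (bad-count (dec n)))))

(count-down 1000000) ;=> 1000000
```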

10.6 Data Structure Efficiency

CL’s mutable structures can be more memory-efficient.

Clojure’s immutable structures might have higher overhead in some cases.

In those cases where it matters... Clojure just uses mutable data structures too, even primitive-backed collections like those from fastutil, or we build up the primitive layer ourselves with optional off-heap storage (look at dtype-next...).
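
A small sketch of that escape hatch using only core Clojure (transients and primitive arrays; the function names are mine):

```clojure
;; Nothing forces the persistent structures when mutation pays off --
;; transients and primitive arrays are right there in core.
(defn build-vec [n]
  (persistent!
   (reduce conj! (transient []) (range n))))

(defn fill-doubles ^doubles [^long n]
  (let [arr (double-array n)]
    (dotimes [i n]
      (aset arr i (double i)))
    arr))
```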

Conversely, CL provides a limited immutable path via non-destructive consing operations, with a substantial default towards mutable ops in the non-list data structures. The community-standard library for persistent data structures, FSet (perhaps until folks started porting Clojure's stuff), was built on inferior balanced binary trees (but it works). cl-hamt should be competitive, but I have no experience with it. I think Clojure covers the domain where immutability/persistence as a default is desired (with a substantially higher scope of tractable use cases where the data structures are both efficient enough to use and integrated into the language design).
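
For contrast, a trivial sketch of the persistent-by-default experience in Clojure:

```clojure
;; "Updates" return new values that structurally share with the original,
;; which is left untouched.
(def m  {:a 1 :b 2})
(def m2 (assoc m :c 3))
;; m  => {:a 1, :b 2}
;; m2 => {:a 1, :b 2, :c 3}
```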

10.7 Direct Hardware Access

CL provides more efficient pathways for direct hardware access and C interoperability

I see the bevy of stuff coming out from recent FFI work (particularly the zero-copy pathways and C bindings from the Python interop and scicloj stuff), and I am dubious of this claim. It's not something I delve into often, though; benchmarks would be informative.
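
For reference, a minimal sketch of the newer JVM-side story for touching off-heap memory from Clojure (assuming a recent JDK where java.lang.foreign is finalized, e.g. 22+):

```clojure
;; Allocate native (off-heap) memory and read it back via the FFM API.
(import '[java.lang.foreign Arena ValueLayout])

(with-open [arena (Arena/ofConfined)]
  (let [seg (.allocate arena 8)]            ; 8 bytes for one double
    (.set seg ValueLayout/JAVA_DOUBLE 0 3.14)
    (.get seg ValueLayout/JAVA_DOUBLE 0)))  ;=> 3.14
```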

Overall, I think the desire to pitch this solution as efficient for the green / ethical movement is perhaps using rose-colored (maybe green-tinted) glasses to tip the scale in terms of justifying CL. The biggest arrow in the quiver is startup time, followed by (typical) memory use / heap size. I don't think most of the other superficial criticisms from the slides hold water (happy to be shown differently). I also found the idea of "green cloud computing" to be somewhat of an oxymoron, since my default internal image of the data centers backing cloud resourcing is anything but green / eco-friendly.
