This article is a response to mfiano’s From Common Lisp to Julia, and might also convey some developments happening in Common Lisp. I do not intend to suggest that someone coming from a Matlab, R, or Python background should pick up Common Lisp. Julia is a reasonably good language when compared to what it intends to replace. You should pick up Common Lisp only if you are interested in programming in general, not limited to scientific computing, and envision yourself writing code for the rest of your life. It will expand your mind to what is possible, and that goes beyond the macro system. Along the same lines though, you should also pick up C, Haskell, Forth, and perhaps a few other languages that have some noteworthy things to teach, and that I too have been too lazy to learn.
I also do not intend to offend anyone. I’m okay with criticizing Common Lisp (I myself have done it below!), but I want the criticism to be done correctly. If I had to wish for something from Julia, it would be a way to turn off runtime optimization in order to (radically) speed up compile times for the purposes of debugging.
- Table of Contents
- Why I still use Common Lisp, and did not switch over to Julia
- Agreeing with some points: what Common Lisp can improve
- Conclusion: Why I still use python as my primary language, and when might I switch over to Julia?
Reference: Editor Support
I actually liked the mention of Editor Support, which goes on to describe how Common Lisp provides an excellent ability to inspect the current state of the stack in case of an error or warning, or just a condition that is not an error. You can explore the values of the variables in the various stack frames without unwinding the stack, which makes debugging easier. I can merely point to phoe’s Common Lisp Condition System (2020) for anyone who might be intrigued by mfiano’s description.
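As a minimal sketch of what this looks like (the condition and function names here are illustrative, not from any particular library): a handler established with handler-bind runs before the stack unwinds, so it can inspect the situation and choose a restart while every frame below it is still alive.

```lisp
;; A condition, a function that signals it while offering a restart,
;; and a handler that picks the restart without unwinding first.
(define-condition low-battery (warning) ())

(defun run ()
  (restart-case
      (progn (signal 'low-battery) ; handlers run here, stack intact
             :finished)
    (recharge () :recharged)))

(handler-bind ((low-battery
                 (lambda (c)
                   (declare (ignore c))
                   ;; At this point RUN's frame is still on the stack;
                   ;; in SLIME you could inspect it interactively.
                   (invoke-restart 'recharge))))
  (run))
;; => :RECHARGED
```

Only when the handler invokes a restart does control transfer; until then, the debugger (or your code) can poke at the live frames.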
About emacs: I personally dislike emacs, but I dislike the other editors more. (EDIT) That does not mean you should use emacs. The ALIVE extension for VS Code has been under development since 2020. I do look forward to seeing a Common Lisp tutorial series that uses VS Code instead of Emacs or Portacle.
Reference: Language Evolution
All of these features are left to be implemented by third-party libraries, and if possible, portability libraries that allow them to function with a unified interface between all implementations of Common Lisp.
Yes, indeed: even though the ANSI standard was frozen in 1994, there have been various attempts at adding support for the features that have appeared since then. portability.cl provides a nice summary of implementation support for these features.
- multithreading, which exists on all 13 implementations except JSCL (yes, that’s JavaScript); there is also a higher-level library, lparallel, built over it
- foreign function interface, which again exists on all 13 implementations except JSCL
- garbage collection interface, on 12 implementations (all except MKCL and JSCL)
- networking support (usocket?), which exists on 11, all except JSCL, Mezzano, and MKCL
- IEEE floats: I’m not sure what the exact status of this is, but the more popular implementations, including SBCL and CCL, do follow the IEEE floating point format and provide support for 64-bit as well as 32-bit floats. (I’ll revisit this point later.)
- unicode support: again, I cannot comment on the status across implementations; there exists cl-unicode, and SBCL 2.2.7 (July 2022) recently upgraded its unicode support to Unicode 10.0.0.
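As a small taste of the multithreading portability layer mentioned above, here is a bordeaux-threads sketch (assuming the library has been loaded, e.g. via (ql:quickload "bordeaux-threads")); the same code runs unchanged on SBCL, CCL, ECL, and the rest:

```lisp
;; Spawn a worker thread and wait for it; BT:MAKE-THREAD and
;; BT:JOIN-THREAD are the portable entry points.
(defvar *result* nil)

(let ((worker (bt:make-thread
               (lambda () (setf *result* (+ 1 2)))
               :name "adder")))
  (bt:join-thread worker))

*result* ; => 3
```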
There will certainly be other features for which Julia has better support than SBCL, but the state of affairs in Common Lisp might not be as bad as the section on Language Evolution makes one believe.
the standard is incredibly hard to navigate, especially as a beginner trying to learn the language. In many places it is very ambiguous, or erroneous, and often leads to long debates in the Common Lisp communication forums.
There exist MiniSpec and the Common Lisp Quick Reference for beginners. And if you are able to see the inconsistencies in the standard, and are capable of understanding the debates concerning it, you are no longer worthy of being called a beginner :’).
My personal interest in Common Lisp also stems from a vested interest in very long term stability, not just for programming, but for things in general. I’m interested in taking the sometimes golden, sometimes outdated ideas (or code) of the people who lived before me, and passing them on to the people who come after. I think that, in the absence of knowing where our predecessors went wrong, our successors will only make the same mistakes. I usually lean more towards the side of implementing and testing ideas, rather than delivering a product to an end-user. And as such, I’m less interested in getting things to work here and now than in making a reasonable attempt at ensuring things work 10 or 20 or 40 years from now. I don’t care about this when I’m writing code for the end-user (in my case, as a student so far, instructors and real-life collaborators) who might not be as worried about long-term stability; there I actually end up using python or javascript, or whatever will get things up fast, until I begin to face their limitations.
Reference: Performance
I, again, highly agree with mfiano. But again, things aren’t as gloomy. With CLTL2, through something like cl-environments and cl-form-types, it is possible to dispatch generic functions statically using static-dispatch.
Comparison of performance is valid only in terms of implementations; thus a fair comparison would involve comparing one particular implementation of Common Lisp, aka SBCL, with the only implementation of Julia. And SBCL has been in development for the last 30 years (and more than that if you include CMUCL); monthly releases take place even today. SIMD support for Intel architectures was added as recently as June 2022. (Thanks Marco Heisig!) And here’s using it to implement the multiplication of a 4x4 matrix with a 4-vector, as requested by u/kaveh808, who has been working on an IDE for 3D production.
It is also certainly true that dynamicity and performance go against each other. This has less to do with Common Lisp, and more to do with the nature of dynamicity: if you want to go fast, you want to minimize unnecessary checks; but if you want correct, non-segfaulting behavior with objects whose structure is unknown at compile time, then you need runtime checks. At best, a language should provide facilities to swing towards either end. And yes, in Common Lisp, this is hard, if not impossible, to do portably while sticking to the ANSI standard. But if you are willing to use another implementation-tied language, you might as well make do with SBCL; or, if you were going to be forced to stick with SBCL anyway, you might as well pick up the other language. Either is fine.
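A sketch of what swinging towards either end looks like in practice on SBCL (the function names are mine); the declarations tell the compiler which checks it may keep or drop:

```lisp
;; Safe end: full runtime type checking, works on any sequence of numbers.
(defun sum-safe (v)
  (declare (optimize (safety 3) (speed 0)))
  (reduce #'+ v))

;; Fast end: we promise the exact representation, so SBCL can elide
;; runtime checks and open-code unboxed double-float additions.
(defun sum-fast (v)
  (declare (type (simple-array double-float (*)) v)
           (optimize (speed 3) (safety 0)))
  (loop for x across v sum x of-type double-float))
```

Running (disassemble 'sum-fast) and comparing it with (disassemble 'sum-safe) shows just how far the generated code swings between the two.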
I also agree about the lack of packed arrays of arbitrary structures in Common Lisp. Certainly there should be a way to use the FFI to achieve it. There should also be a way to combine extensible-compound-types and polymorphic-functions to achieve it. And you also have Coalton. But this does not seem to have a native solution in the near future; the difficulty is related to garbage collection.
Reference: Programming Paradigm
While I do think that CLOS is very nice, and it is hard to live without it, I also think that OOP in general does not fit every programming problem. In fact, for most applications, I would much rather have parametric polymorphism instead of the ad-hoc polymorphism that you get with Common Lisp generic functions.
Generic functions in Common Lisp do not have arity-overloading like they do in Julia and other languages with generic functions. A generic function in Common Lisp has a fixed arity, and defines a protocol.
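A quick illustration of that fixed arity: the defgeneric form fixes the lambda list once, and every method must be congruent with it.

```lisp
(defgeneric area (shape))  ; the protocol: exactly one argument, forever

(defclass circle ()
  ((radius :initarg :radius :reader radius)))

(defmethod area ((c circle))
  (* pi (expt (radius c) 2)))

;; A two-argument (defmethod area ((c circle) unit) ...) would signal
;; an error: its lambda list is not congruent with the generic function.
;; In Julia, by contrast, methods of one function may differ in arity.
```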
Besides the points mfiano mentioned, another major annoyance I have run into with ANSI CL is the inability to dispatch on specialized array types. I have been unable to do this using generic functions, CLOSER-MOP, or even SB-PCL. Instead, I have found it simpler to work on polymorphic-functions, which provides functions that dispatch on types rather than classes.
- And this is possible for optional and keyword arguments.
- And this is possible with heterogeneous argument lists.
- The static dispatch is optional; you can turn it off, since some lispers think dynamic dispatch is the right thing to do.
- And there is some support for writing functions that are parameterized over types. (I recently used this feature in numericals to cut down the number of functions needed from about 70 to 10.) Granted, this support is far from (and not intended to be) something that Coalton provides.
- And there is support for compiler-notes, aka notes emitted during compilation of a specific function and/or line to help you optimize the code.
- And there is support for ”declaration propagation”.
- And it respects declarations asking to optimize for debugging or for speed.
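A hedged sketch of what dispatching on types looks like with polymorphic-functions (API names as I recall them from its README; check the library for the current interface):

```lisp
;; Dispatch on types, including specialized array types that plain
;; CLOS generic functions cannot discriminate between.
(define-polymorphic-function my-add (a b))

(defpolymorph my-add ((a string) (b string)) string
  (concatenate 'string a b))

(defpolymorph my-add ((a (simple-array single-float (*)))
                      (b (simple-array single-float (*))))
    (simple-array single-float (*))
  (map '(simple-array single-float (*)) #'+ a b))
```

Given the declared types at a call site, the library can resolve the polymorph at compile time; without them, it falls back to runtime dispatch.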
And through this and CFFI, I have been able to optimize numericals for small as well as large arrays. I have tested numericals on SBCL and CCL, and am using Sleef under-the-hood through CFFI. And it works. And it is performant. (And at least I find it convenient.)
That said, except for the first two points, all the rest go beyond the ANSI standard, and even though cl-environments provides support for a number of implementations, I have run into issues on them. Thus, CLTL2 support needs polishing.
Additionally, most of the language proper is not generic.
There are at least two projects, generic-cl and lisp-polymorph, that attempt to provide support for a generics-based language. But yes, this will never be a “first class” solution; it might get the work done, but the support might not be native. That said, implementing generics correctly doesn’t seem like an easy or solved problem either; different languages seem to be working with different tradeoffs for the different varieties of generics they provide.
User-defined types are merely type aliases for existing types.
Indeed, when I made a custom array class, I also had to come up with extensible-compound-types. polymorphic-functions also plays nicely with these. So, users should be good to go.
The bottom line: things aren’t as gloomy as they might seem.
Beginner programmers are often advised to focus on writing readable code instead of attempting to prematurely optimize it at the cost of readability. I think this same advice also applies to compilers, which should focus on producing debuggable (and quickly compilable) code, and only then on performant code. But unfortunately, both julia and numcl (whose structure was motivated by julia’s JAOT) have focused on runtime performance at the cost of compilation performance.
Granted, if I want to run a calculation for a day or a week, it doesn’t matter whether I spend 30 seconds or 1 second compiling the program. However, more often than not, during development, I will be running a calculation for 2 seconds to test my code. And then, 30 seconds is too long a time! Only once I have debugged my code does it make sense to compile it for performance and then run it. Julia and numcl fail abysmally here, with their hiccups and long first-run times. Granted, things are improving, but they will never be as good as providing an option to ditch runtime performance optimization completely and focus on compile-time performance.
The standard C compilers, as well as SBCL, get this right. Common Lisp’s ANSI standard provides the optimization declaration qualities speed, safety, debug, compilation-speed, and space, which suggest (but, alas, do not require) that the compiler optimize for that particular quality. SBCL puts this to use. C compilers provide equivalent options. In their absence, I cannot imagine making julia or numcl my first choice for implementing a program; maybe for some other people the hiccups do not matter much, but for me they do.
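Concretely, the workflow this enables looks something like the following (each quality takes the standard 0 to 3 range):

```lisp
;; While iterating: compile fast, keep everything debuggable.
(declaim (optimize (debug 3) (safety 3) (compilation-speed 3) (speed 0)))

;; Once the code is correct, flip the knobs and recompile:
;; (declaim (optimize (speed 3) (debug 1) (compilation-speed 0)))
```

The same declarations can also be attached to a single function via a local declare form, so you can tune one hot function for speed while the rest of the system stays debuggable.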
Julia does not allow bindings to have dynamic scope, aka looking up a variable’s value from the place where the function was called, rather than from where the function was defined. The reason cited for this is performance.
However, dynamic scope for bindings is needed not only to simplify specifying local-but-global variables, but also for implementing the Condition System discussed previously. By putting performance above everything else, julia falls behind Common Lisp (actually, SBCL in particular) in terms of debugging as well as compilation speed.
EDIT: In response to a comment on hackernews (thanks for pointing it out!), I forgot to write: yes, local variable bindings in Common Lisp have lexical scoping by default. It is only the global variables whose bindings have a dynamic scope. In fact, the ANSI standard makes no provision for global variables with lexically scoped bindings, although a few implementations (SBCL, CCL, LispWorks) provide them if required.
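For readers unfamiliar with special variables, here is a minimal sketch of dynamic binding in plain Common Lisp:

```lisp
(defvar *greeting* "hello")      ; DEFVAR makes the binding dynamic

(defun greet () *greeting*)      ; value is looked up at call time

(greet)                          ; => "hello"

(let ((*greeting* "bonjour"))    ; rebinding is visible to callees...
  (greet))                       ; => "bonjour"

(greet)                          ; => "hello", binding restored on exit
```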
What dynamic scoping for bindings allows me to do is the following:
CL-USER> (in-package :dense-numericals.impl)
#<PACKAGE "DENSE-NUMERICALS.IMPL">
IMPL> (let* ((nu:*array-element-type* 'double-float)
             (a (nu:rand 5 5))
             (b (nu:rand 5 5)))
        (nu:add a b))
;; In less lispy terms:
;; let nu.ARRAY-ELEMENT-TYPE := 'double-float
;; a := nu.rand(5, 5)
;; b := nu.rand(5, 5)
;; nu.add(a, b)
#<STANDARD-DENSE-ARRAY :ROW-MAJOR 5x5 DOUBLE-FLOAT
( 0.353 0.518 1.773 0.453 0.582 )
( 0.619 1.072 0.458 0.696 0.840 )
( 0.965 0.511 0.213 0.496 0.999 )
( 0.973 1.104 1.294 1.203 0.519 )
( 0.771 1.120 0.457 1.436 1.783 )
{1031A4DC63}>
IMPL> (let* ((nu:*array-element-type* 'single-float)
             (a (nu:rand 5 5))
             (b (nu:rand 5 5)))
        (nu:add a b))
;; In less lispy terms:
;; let nu.ARRAY-ELEMENT-TYPE := 'single-float
;; a := nu.rand(5, 5)
;; b := nu.rand(5, 5)
;; nu.add(a, b)
#<STANDARD-DENSE-ARRAY :ROW-MAJOR 5x5 SINGLE-FLOAT
( 0.699 1.833 0.830 0.985 0.422 )
( 1.058 0.996 1.372 1.143 0.760 )
( 1.051 1.019 1.293 1.272 1.018 )
( 0.819 0.735 0.160 1.431 0.805 )
( 0.827 0.821 1.484 0.126 1.160 )
{1031A7F7D3}>
Here, nu:*array-element-type* is the variable whose binding has a dynamic scope. The function nu:rand looks up its value when it is called, rather than where it was defined. In essence, I’m not required to supply the type argument every time. Of course, I can specify the type if I want to optimize, or to override the global value.
IMPL> (let* ((nu:*array-element-type* 'single-float)
             (a (nu:rand 5 5 :type 'double-float))
             (b (nu:rand 5 5 :type 'double-float)))
        (nu:add a b))
;; let nu.ARRAY-ELEMENT-TYPE := 'single-float
;; a := nu:rand(5, 5, type = 'double-float)
;; b := nu:rand(5, 5, type = 'double-float)
;; nu:add(a, b)
#<STANDARD-DENSE-ARRAY :ROW-MAJOR 5x5 DOUBLE-FLOAT
( 1.611 0.862 0.473 1.118 0.534 )
( 1.314 1.172 0.943 1.064 1.157 )
( 1.232 0.439 0.961 0.984 0.993 )
( 0.642 0.720 1.119 0.871 1.328 )
( 1.155 1.455 0.667 1.770 1.296 )
{10379E0843}>
Why, after so many decades, are we still writing and editing code line by line, instead of relying on its structure? See the (perhaps experimental) tree-edit for instance. Here’s another example using paredit. Here’s another.
Granted, this is a bit geeky, but once the initial learning curve is behind you, it gives you superpowers for the rest of your life. The lisp parentheses are a feature, not a bug, once you are past the initial hiccups :/.
I am aware julia has a --lisp mode, but I have never found any documentation for it. So, I don’t agree that all the things in julia are well-documented either :).
Reference: Software Versioning and Deployment
The Quicklisp dist is curated by the Quicklisp maintainer, who ensures that all software builds successfully (in isolation; no checks are done to ensure that a piece of software is compatible with other software in the same dist).
All software builds successfully /together/. EDIT: I was mistaken to think that quicklisp actually tests whether or not systems build together: nope, quicklisp only ensures that each system builds individually. This still ensures that there are no compile-time dependency conflicts within a particular quicklisp dist, since the same versions of all the dependencies are used to load the libraries, but there can be other issues, like (i) runtime dependency conflicts, or (ii) the package-local-nicknames conflicts in the quicklisp 2022-04-01 dist. Thanks mfiano for pointing this out!
Because the official Quicklisp dists are released once every month or two (which is an eternity in the software world), developers cannot push hot-fixes or address user-reported problems in a timely manner, unless they run their own dist and convince their users to use that, or convince their users to clone directly from upstream, and place it in a particular location on their filesystem that Quicklisp looks for to override dist versions.
Other than quicklisp, there also exists the Common Lisp Package Manager (clpm), which intends to provide project-specific contexts and dependency versioning in the sense mfiano means it.
There is an easier (= more similar to quicklisp, which a new lisper is often recommended to start out with) alternative to clpm: ultralisp, which builds every 5 minutes, but does not provide the “build together” guarantee. Adding repositories and even creating your own dists is as simple as a few clicks!
But by and large, I agree about the culture: Common Lisp developers (myself included) rarely version their software, or more specifically, rarely specify their version dependencies. They do version their own library, but they rarely state which versions of their dependencies they require. For the de facto libraries, this works: your code will more than likely work even if you pick up a 5-year-old library. For libraries in quicklisp, this too mostly works, because a given quicklisp dist loads every library against the same versions of its dependencies. Where it does not work is (i) for bleeding-edge libraries, and (ii) when you are working in a Common Lisp team; the latter seems to be Eric Timmons’s motivation for developing CLPM!
Quicklisp also allows submission of github tags or releases (and perhaps gitlab as well), whether specific ones or the “latest” ones, and perhaps even specific commits, not just specific branches.
Reference: Documentation
The Common Lisp Omnificent GUI (CLOG) is an excellent counterexample to the claim that Common Lisp projects aren’t well documented. More lispers, especially those of us who aim to attract new developers, should follow its lead and make their documentation just as extensive. Many other de facto libraries also have fairly extensive documentation, but it can (i) certainly be more extensive and (ii) be more newcomer-friendly.
Reference: Software Quality
This problem is recursive, in that we have many “50%” solutions to the same problems, and the next round of developers will create another set of solutions to add to the pile.
…
In my opinion, this is due to the language being incredibly malleable. It is usually much easier to re-implement an idea than it is to use/fix someone else’s implementation.
This is best visible in the presence of 10+ libraries for numerical/matrix computing in Common Lisp, none of them as complete as someone coming from an R/Matlab/Julia background might want. Even for something as simple as JSON, there are 10+ libraries.
I myself have been guilty of this. The lisp community, if I may assume it exists, needs to come up with better ways to tackle the issue. Just as the standardization process that concluded in 1994 took an extensive amount of time (10 years?), a proportionate amount of effort (a week, a month, or a few) needs to be spent evaluating the current options, indicating why they are insufficient, and asking existing library developers if they are willing to incorporate your request. Alongside that, an effort needs to be made towards cross-library compatibility. Because Common Lisp is highly malleable, it is easy to come up with glue code, but someone needs to do the work!
Reference: Performance
As discussed earlier, the only way I see of resolving the performance concerns the OP raised is through CLTL2 (and CFFI and closer-mop). Currently, polymorphic-functions is tested on SBCL, CCL, and ECL through continuous integration. I only use SBCL in my day-to-day work, so CCL and ECL support might not be as good.
Reference: Community
This is a slightly weird topic. Common Lisp has a small community, yes. But even in that small community, the users have very diverse use cases. There’s the quilc team using Common Lisp to develop Coalton and magicl (a fairly reasonable scientific computing library) and to work on quantum computing. There’s kaveh808 and others using Common Lisp to develop an extensible IDE for 3D production. There’s vindarel and others using Common Lisp primarily for web development. There are people like me who take an interest in Common Lisp for cognitive architectures like ACT-R. I bet there are even more diverse use cases. Just how are these people to collaborate? It’s the same circular problem that OP mentioned for Software Quality: all these exist because Common Lisp provided features to cater to the needs of each of them, but providing those features already meant a steeper learning curve.
I will disagree that lispers are unhelpful in general; some are, many are not. That’s just the internet. And once you get into a niche, people will certainly try to help you when you ask. Just this morning, I woke up to find u/stylewarning implementing a magicl wrapper for LAPACK’s dgges soon after a user requested it.
Catering to such a diverse user base requires a huge community, so that each niche has at least some users. Perhaps in the distant future, we will have more projects like CIEL Is an Extended Lisp cropping up, along the lines of emacs and linux distros, to cater to each of the niches.
I will certainly not recommend that anyone learn Common Lisp while trying to get into their niche at the same time. That’s too much to learn in one shot. Along the same lines, I will also discourage anyone from trying to learn Common Lisp and Emacs at the same time. First learn your niche; then, if you find that the language your niche uses feels awkward, try learning Common Lisp, and it will be a much smoother experience. If you are embarking on decades-long projects, I don’t think a month or two spent learning Emacs and Common Lisp will go wasted.
To lispers and open source developers, of which I myself am one, I might also add that we ought to take our lives into account while working on projects. We don’t want to burn out; we don’t want to go into debt. We want to work on our projects, and we want to do so sustainably. And if you find yourself unable to support your work for a while, that is okay; focus on your life before it gets out of hand. Take care not to burn out. Developers will come and go, but Common Lisp will stay, and your projects will be used and taken care of (and maybe even built upon) years after you write them :).
All these paragraphs might make it seem like I use Common Lisp as my daily driver. But because the people around me rely on Python, and because Python has vastly more libraries than Common Lisp, I am stuck with Python. Sometimes, however, I rely on py4cl/2 to use python libraries from Common Lisp in cases where performance is not a concern.
I wanted to make the switch from Python to Julia, and attempted a course project in Julia as a replacement for Python. The unnecessarily-long-for-debugging compilation times certainly put me off. And at least at the time when I tried Julia (2020), Revise.jl failed to meet my needs, and perhaps it still would now :(. I think I like the semantics of Julia better than Python’s, and I’d also like a cleaner language and ecosystem than Common Lisp. But the compilation times are a deal-breaker for me. Perhaps I will make the switch once this gets fixed in the coming years :’).
Although, for larger and longer-term projects, I will perhaps still stick with Common Lisp; by the time Julia gets all the goodies necessary to make me want to switch from Common Lisp, we might have a Common Lisp implementation on top of Julia. There certainly already exists one on top of C++ and LLVM :D.
FYI, in case it is helpful, there is a Lisp REPL built on Julia: https://github.com/swadey/LispREPL.jl
But I have not seen a way to call (any) Common Lisp from Julia. ECL is embeddable, so I suppose it would be possible with it; and since you can call foreign code from CL, calling out to Julia should be possible too, though I am not sure all implementations are embeddable. See also (and the Reddit thread linking to it):
https://www.tamaspapp.eu/post/common-lisp-to-julia/
https://discourse.julialang.org/t/about-julia-and-lisp/25119